Macroeconomics, Chapter 11: Microfoundations and Expectations (1970s–1990s)

Microfoundations and Expectations: Rational Expectations, New Classical Macroeconomics, and RBC (1970s–1990s)

Summary

This chapter explains how macroeconomics changed when economists started taking expectations and incentives more seriously. Rational expectations means people in the model form forecasts in a way that is consistent with how the model says the economy works, rather than using simple “rules of thumb.” The Lucas critique warns that relationships seen in past data can break down when policy rules change, because people and firms adjust their behavior. This pushed economists toward structural models that try to stay valid even when policy changes. A related idea is that good policy is not only about choosing the “best” action, but also about credibility: policymakers may want to promise one thing but later have an incentive to do something else (time inconsistency), and reputation can affect whether the public believes them. During this period, Real Business Cycle (RBC) models promoted a quantitative approach in which economists simulate a fully specified model and compare it to key patterns in the data, often using calibration. Critics then asked whether basic RBC models can match important real-world features, especially the way output moves over time. Overall, the era shifted macroeconomics toward model-consistent expectations, careful thinking about policy regime changes, and more explicit debate about what counts as convincing empirical evidence.


Key Takeaways

  • Rational expectations reframed macroeconomics by requiring expectations to be model-consistent rather than imposed from outside the theory. [2][9]
  • The Lucas critique highlighted an identification problem: historical correlations can shift when policy regimes change, undermining naive policy evaluation. [1]
  • Credibility and commitment problems (time inconsistency and reputation) became central to thinking about monetary policy design and outcomes. [4][5][10]
  • RBC pushed a quantitative general-equilibrium program—often assessed via calibration and simulation—shifting debates over what counts as evidence and “testing.” [6][7]
  • Empirical critiques asked whether core RBC models reproduce observed output dynamics, sharpening the question of what these models explain well versus poorly. [8][7]

1) From “Expectations” as an Assumption to Expectations as a Discipline

Facts

A defining methodological move of this era was to treat expectations as rational—that is, aligned with the model’s implied probability structure—rather than as an exogenous or ad hoc behavioral assumption. [2][9] In a canonical theoretical statement, Lucas formalized how expectations, information, and market-clearing behavior could shape the real effects of monetary disturbances and the conditions under which money is neutral or non-neutral. [2] The broader New Classical program emphasized internally consistent optimization and equilibrium reasoning, with expectations playing a central role in mapping policy and shocks into outcomes. [2][9]

Interpretation

Rational expectations can be read as a bid for internal coherence: if agents are forward-looking and respond to policy and information, then models should not impose expectations that contradict the model’s own logic. This stance likely helped shift macroeconomic practice toward explanations in which policy effects depend on how expectations are formed and updated, rather than on expectations being treated as a free parameter. [2][9]


2) The Lucas Critique and the Identification Problem in Policy Evaluation

Facts

The Lucas critique argues that econometric relationships estimated from historical data—especially reduced-form correlations used for policy evaluation—may not be stable when policy rules change, because private decision rules (and thus observed behavior) adapt to the new regime. [1] In this sense, the critique is not merely technical; it is a warning about regime dependence and the difficulty of identifying policy effects from historical correlations alone. [1]

Interpretation

The critique implies an identification challenge for macro policy analysis: to evaluate counterfactual policies credibly, one needs structures that remain meaningful across regimes—often framed as “deep” behavioral parameters in structural models—rather than relying on correlations that may be contingent on the policy environment. [1] This interpretation is grounded in Lucas’s argument about policy regime changes altering the underlying behavioral relationships being estimated. [1]

High-risk claim (easy to overstate): “The Lucas critique invalidates all reduced-form evidence for policy evaluation.”

  • Status: Interpretation; overbroad as stated. The critique establishes a regime-change vulnerability, not a universal impossibility result. [1]
  • What would be needed: Empirical and applied policy-evaluation studies comparing when reduced-form relationships are stable versus unstable under regime change. (Not in Source Pack.) [citation needed]

3) Rules, Discretion, Credibility: Time Inconsistency and Reputation

Facts

Kydland and Prescott formalized the idea that optimal policy plans can be time-inconsistent: a plan that looks optimal in advance may not be optimal when the time comes to implement it, creating incentives to deviate and thereby undermining credibility. [4][10] This “rules rather than discretion” result reframed macro policy as a problem of commitment, not just optimization. [4] Barro and Gordon extended credibility logic by modeling how reputation can shape monetary policy outcomes and help explain persistent inflation bias under discretion in a setting where policymakers face incentives that differ from the public’s preferences. [5]

Sargent and Wallace’s analysis of monetary instruments and money supply rules under rational expectations contributed to the broader rules-versus-discretion framing by treating policy choice as constrained by model-consistent expectations and private optimization. [3]

Interpretation

Taken together, time inconsistency and reputation models shifted attention from “What is the best policy?” to “What policy can be made credible given incentives and expectations?” [4][5] The implied policy lesson—at a conceptual level—is that rule-like behavior or commitment devices can improve outcomes when discretionary incentives generate systematically biased results. [4][10]
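The inflation-bias logic can be made concrete with a minimal numerical sketch. The quadratic-minus-linear loss function and the parameter values below are illustrative assumptions in the spirit of the time-inconsistency literature, not the papers' exact specifications.

```python
# Stylized policymaker loss: cost of inflation minus the temptation payoff
# from surprise inflation. Functional form and weights are assumptions.
a, b = 1.0, 2.0  # assumed weight on inflation cost / on surprise-inflation benefit

def loss(pi, pi_e):
    return a * pi**2 - b * (pi - pi_e)

# Discretion: taking expectations pi_e as given, the policymaker minimizes
# loss over pi: dL/dpi = 2*a*pi - b = 0  ->  pi = b / (2*a), whatever pi_e is.
pi_discretion = b / (2 * a)

# Rational expectations: the public anticipates this choice, so pi_e equals
# pi_discretion. Inflation is positive but unsurprising: pure inflation bias.
loss_discretion = loss(pi_discretion, pi_discretion)

# Commitment: a credible rule pi = 0 that expectations then ratify.
loss_commitment = loss(0.0, 0.0)

print(pi_discretion)    # 1.0 -> inflation bias under discretion
print(loss_discretion)  # 1.0
print(loss_commitment)  # 0.0 -> commitment dominates discretion ex ante
```

Ex ante the zero-inflation rule is strictly better, yet ex post the policymaker would still gain from deviating against fixed expectations — which is why the plan is time-inconsistent absent a commitment device or a reputation mechanism.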

High-risk claim (causality): “These models directly caused central banks to adopt specific rule-like regimes.”

  • Status: Interpretation and causal attribution not established by the Source Pack. The sources establish the theoretical logic and its importance in the discipline, not a documented institutional causality pathway. [4][5][10]
  • What would be needed: Central bank archival evidence, legislative histories, and institutional comparative studies tying specific reforms to these ideas. [citation needed]

4) RBC and the “Computational Experiment”: A Quantitative General-Equilibrium Program

Facts

RBC models advanced a research program in which business cycles are analyzed using intertemporal, micro-founded general equilibrium frameworks, often emphasizing real shocks and propagation mechanisms. [6][10] In their “Time to build” paper, Kydland and Prescott developed a prominent RBC structure in which real-side mechanisms help propagate shocks into aggregate fluctuations. [6] Later, Kydland and Prescott explicitly framed the computational experiment—calibration and simulation—as an econometric tool: researchers specify a model, choose parameters (often using external information or matching selected moments), simulate the model, and compare its generated behavior to empirical regularities. [7]

Interpretation

RBC’s methodological innovation was not only its substantive story about fluctuations, but its attempt to make macroeconomics quantitatively accountable to a set of empirical “stylized facts” through simulation-based comparison. [7][6] At the same time, the approach made the identification question sharper: if a calibrated model matches certain moments, does that constitute a test, or merely a demonstration that a mechanism is plausible? This interpretive tension is an explicit part of Kydland and Prescott’s own methodological discussion. [7]
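The workflow of the computational experiment — choose parameters, simulate the model economy, compare model moments to data moments — can be sketched in miniature. Everything below (the AR(1) technology process, the parameter values, the shortcut mapping from technology to output, and the “data” moment) is an illustrative assumption, not a calibrated model from the sources.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: choose parameters (in practice from micro evidence or long-run averages).
rho_z, sigma_z = 0.95, 0.007  # assumed persistence / volatility of technology shocks
alpha = 0.36                  # assumed capital share

# Step 2: simulate the model economy.
T, burn = 5_000, 500
eps = rng.normal(scale=sigma_z, size=T)
z = np.zeros(T)
for t in range(1, T):
    z[t] = rho_z * z[t - 1] + eps[t]

# Toy stand-in for the model's policy function: log output moves with
# technology. A real RBC model would derive this from household/firm optimization.
y = z / (1 - alpha)

# Step 3: compare a model moment to its empirical counterpart.
model_sd = np.std(y[burn:]) * 100  # percent standard deviation of model output
data_sd = 1.8                      # stand-in "data" moment (illustrative number)
print(round(model_sd, 2), data_sd)
```

The interpretive tension survives the sketch: if `model_sd` lands near `data_sd`, that shows the mechanism *can* generate fluctuations of roughly the right size, not that the model has passed a classical statistical test.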

High-risk claim (scope of explanation): “RBC models explain most business-cycle fluctuations primarily via technology shocks.”

  • Status: Interpretation; too strong without additional quantitative decomposition evidence. The Source Pack supports RBC’s programmatic emphasis and modeling strategy, but not a general empirical dominance claim. [6][7]
  • What would be needed: Empirical variance decompositions and robustness checks across model variants and measurement choices. (Not in Source Pack.) [citation needed]

5) What These Models Explain Well vs Poorly: Critique and Output Dynamics

Facts

A core empirical critique within this era evaluates whether RBC models reproduce observed properties of macro time series—especially output dynamics. [8] Cogley and Nason specifically examined output dynamics in RBC models, contributing to a debate over whether baseline RBC structures can match key empirical features of aggregate data. [8] This critique also connects directly to methodological questions emphasized in the “computational experiment” framing: what moments are targeted, what features are missed, and what constitutes persuasive evidence. [7][8]

Interpretation

The debate can be organized around the chapter’s identification lens:

  • What RBC can do well (as a research program): provide coherent, micro-founded mechanisms that can be simulated and confronted with selected empirical regularities—turning qualitative stories into quantitative implications. [7][6]
  • What RBC can do poorly (as a baseline empirical account): reproduce certain observed features of output dynamics, at least in the specific comparisons highlighted by the critique literature in the Source Pack. [8]

Because the Source Pack includes a focused critique (output dynamics) rather than a comprehensive survey of successes/failures across many dimensions, any broader ranking of RBC’s empirical performance across domains should be treated cautiously. [8]
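The flavor of the output-dynamics critique can be illustrated with a toy calculation. Reducing model output to an AR(1) driven by the technology shock is an assumption standing in for a baseline RBC model with weak internal propagation; the persistence parameter is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

rho, T = 0.95, 50_000  # assumed shock persistence; long sample to cut sampling noise
e = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + e[t]  # log output simply inherits the shock's AR(1) dynamics

dy = np.diff(y)  # output growth
ac1 = np.corrcoef(dy[1:], dy[:-1])[0, 1]  # first autocorrelation of growth

# For an AR(1) level, the first autocorrelation of the first difference is
# (rho - 1) / 2 = -0.025: essentially no positive persistence in growth,
# whereas postwar US output growth is positively autocorrelated.
print(round(ac1, 3))
```

If model output merely inherits the shock process, simulated growth shows none of the positive serial correlation seen in the data — one way to read the claim that baseline RBC structures contribute little internal propagation beyond what is assumed for the shocks.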

High-risk claim (methodology verdict): “Calibration is not a valid empirical test” (or “Calibration is sufficient evidence”).

  • Status: Interpretation; the Source Pack supports a methodological defense/exposition and a specific critique but does not justify a blanket verdict. [7][8]
  • What would be needed: Broader methodology literature comparing calibration, estimation, and model validation standards. (Not in Source Pack.) [citation needed]

Research Lens: Microfoundations, Rational Expectations, and the Identification Turn

  • Dominant framework: Intertemporal, micro-founded equilibrium modeling with rational expectations—paired with quantitative simulation/calibration to assess whether model-implied dynamics resemble selected empirical regularities. [2][7][9]
  • Core implication for policy analysis: If agents adjust behavior when policy rules change, policy evaluation must grapple with regime dependence (Lucas critique) and credibility (time inconsistency/reputation). [1][4][5]
  • Core implication for empirical work: Quantitative macro increasingly leaned on simulation-based evaluation (“computational experiments”) and explicit scrutiny of which empirical moments models can match. [7][8]


6) What Changed: Institutions, Policy Tools, Measurement, and Research Practice

Facts (Research practice and tools)

This era’s research practice shifted toward structural, micro-founded approaches that aimed to remain meaningful under policy regime change, motivated by the Lucas critique’s warning about unstable reduced-form relationships. [1] Rational expectations provided a central discipline for expectation formation within these models, reinforcing the move toward internally consistent optimization frameworks. [2][9] The RBC program further promoted simulation-based quantitative evaluation and calibration—“computational experiments”—as part of the macro toolkit. [7][6]

Facts (Measurement and data)

Even when the arguments were theoretical, the empirical conversation relied on standard macro aggregates and official statistics:

  • FRED functions as a widely used platform for accessing macroeconomic time series—such as real GDP and unemployment—commonly used to illustrate business-cycle and policy-relevant patterns. [11]
  • BEA national accounts (NIPA) conventions anchor the construction and citation of national income and product aggregates used in quantitative macro comparisons. [12]
  • BLS CPI provides a primary inflation measure, central to discussions where credibility and inflation outcomes are at stake. [13]
  • For cross-country comparisons, the OECD Economic Outlook documentation and the IMF International Financial Statistics provide standardized macro and macro-financial series with documented conventions and coverage. [14][15]

Interpretation (Institutions and regimes)

A reasonable interpretation is that the research emphasis on credibility, rules, and regime dependence strengthened the intellectual case for viewing policy as a system of expectations management rather than a sequence of one-off interventions. This interpretation is grounded in the logic of time inconsistency, reputation, and the Lucas critique. [4][5][1]

High-risk claim (institutional detail): “Specific institutional reforms (e.g., changes in central bank mandates or adoption of explicit policy-rule frameworks) followed from these ideas.” Substantiating this claim requires dedicated institutional documentation not contained in this Source Pack. [4][5][10]

  • Needed: Primary institutional histories, central bank documents, and cross-country legal records. [citation needed]

Why It Matters Now

Facts

The Lucas critique remains a durable warning: when policy changes, behavior can change too, complicating inference from historical correlations. [1] Time inconsistency and reputation models remain core tools for thinking about credibility, commitment, and how expectations can shape policy outcomes. [4][5] The RBC and computational experiment tradition continues to matter as a template for quantitative, micro-founded modeling—along with ongoing debate over validation and empirical fit. [7][8][6]

Interpretation

In contemporary macro policy debates—especially when regimes shift (new frameworks, new constraints, new tools)—the microfoundations era offers a practical checklist:

  • Ask whether a proposed policy will change private behavior in ways that invalidate historical relationships. [1]
  • Treat credibility as a constraint, not an afterthought, when discretion creates incentives to deviate. [4][5]
  • Be explicit about what empirical moments a model is supposed to match—and what would falsify it—rather than treating “plausible simulation” as conclusive proof. [7][8]

These are not simply academic themes; they are guardrails for reasoning under uncertainty when policy itself reshapes the economy’s behavioral structure. [1][4][7]


References (numbered to match citations)

  1. Lucas, Robert E., Jr. (1976). Econometric policy evaluation: A critique. Carnegie-Rochester Conference Series on Public Policy, 1, 19–46. DOI: 10.1016/S0167-2231(76)80003-6.
  2. Lucas, Robert E., Jr. (1972). Expectations and the neutrality of money. Journal of Economic Theory, 4(2), 103–124. DOI: 10.1016/0022-0531(72)90142-1.
  3. Sargent, Thomas J., & Wallace, Neil (1975). “Rational” expectations, the optimal monetary instrument, and the optimal money supply rule. Journal of Political Economy, 83(2), 241–254. DOI: 10.1086/260321.
  4. Kydland, Finn E., & Prescott, Edward C. (1977). Rules rather than discretion: The inconsistency of optimal plans. Journal of Political Economy, 85(3), 473–491. DOI: 10.1086/260580.
  5. Barro, Robert J., & Gordon, David B. (1983). Rules, discretion and reputation in a model of monetary policy. Journal of Monetary Economics, 12(1), 101–121. DOI: 10.1016/0304-3932(83)90051-X.
  6. Kydland, Finn E., & Prescott, Edward C. (1982). Time to build and aggregate fluctuations. Econometrica, 50(6), 1345–1370. DOI: 10.2307/1913386.
  7. Kydland, Finn E., & Prescott, Edward C. (1996). The computational experiment: An econometric tool. Journal of Economic Perspectives, 10(1), 69–85. DOI: 10.1257/jep.10.1.69.
  8. Cogley, Timothy, & Nason, James M. (1995). Output dynamics in real-business-cycle models. American Economic Review, 85(3), 492–511.
  9. The Royal Swedish Academy of Sciences (1995). The Prize in Economic Sciences 1995 – Press release (Robert E. Lucas Jr.). NobelPrize.org.
  10. The Royal Swedish Academy of Sciences (2004). The Prize in Economic Sciences 2004 – Press release (Finn E. Kydland and Edward C. Prescott). NobelPrize.org.
  11. Federal Reserve Bank of St. Louis (FRED). FRED: Federal Reserve Economic Data (database). Publication year: Needs verification (use access date when notes are finalized).
  12. U.S. Bureau of Economic Analysis (BEA). Guidelines for Citing BEA (web guidance for BEA data, including NIPA). Date: Needs verification.
  13. U.S. Bureau of Labor Statistics (BLS). Consumer Price Index (CPI) Databases (data portal). Date: Needs verification.
  14. OECD (2025). OECD Economic Outlook, Database Documentation, Volume 2025 Issue 2. OECD.
  15. International Monetary Fund (IMF). International Financial Statistics (IFS), 1920–2024 (database). Bibliographic fields/edition: Needs verification.

Further Reading (from the Source Pack)

  • Lucas (1976), “Econometric policy evaluation: A critique.” [1]
  • Kydland & Prescott (1977), “Rules rather than discretion.” [4]
  • Barro & Gordon (1983), “Rules, discretion and reputation.” [5]
  • Kydland & Prescott (1996), “The computational experiment.” [7]
  • Cogley & Nason (1995), “Output dynamics in real-business-cycle models.” [8]
  • Nobel Prize press releases on Lucas (1995) and Kydland/Prescott (2004). [9][10]