Statistical modeling of monetary policy and its effects
Christopher A. Sims, Princeton University, sims@[Link]
December 8, 2011
Outline
Tinbergen's Project
Haavelmo's critique: the renewed project
The large models run aground, as probability models
The monetarist vs. Keynesian debates: failure to model policy behavior
Bayesian inference
Rational Expectations
Causality tests, VARs, SVARs
Dynamic stochastic general equilibrium models (DSGEs)
What still requires work
The project
A statistical model, with error terms and confidence intervals on parameter estimates.
Multiple equations, covering the whole economy at the aggregate level.
A testing ground for theories of the business cycle.
Keynes did not like it.
Single equation vs. multiple equation modeling
Though Tinbergen used multiple equations, he estimated them one at a time. There was no attempt to treat the set of equations as a joint probability model of all the time series.
The probability approach
Keynes had argued that because Tinbergen's model contained error terms, it could explain any observed data and therefore could not be used to test theories of the business cycle, contrary to Tinbergen's claims. Haavelmo defended Tinbergen against this argument, countering that economic models, precisely because they do not make exact predictions, must contain explicit error terms in order to be testable. Economic models are testable, he said, so long as they are formulated as probability models that make assertions about the likely size and correlation patterns of their error terms.
Haavelmo's proposal
He suggested considering a model as a proposed probability distribution for a complete set of data, containing many variables and many time periods. He set up a simple Keynesian model and explained how it could be formulated, estimated, and tested this way.
Large models
The Keynesian viewpoint implied that business fluctuations had many sources and that many policy instruments were relevant to stabilization policy. To be useful in guiding year-to-year or month-to-month policy decisions, a model would have to be on a much larger scale than Haavelmo's example. A stellar group of theorists developed what became known as the Cowles Foundation methodology for codifying and expanding Haavelmo's ideas about inference. By the 1960s computing power had developed to the point that models with hundreds of equations could be estimated and solved, and the collaboration of dozens of leading macroeconomists and econometricians led to the formulation and estimation of such models.
Problems of scale
A model with hundreds of equations and hundreds of variables has, in principle, tens of thousands of unknown coefficients describing the relations of variables to one another. One cannot ask the data to tell you the values of all of them; there are not tens of thousands of observations. One must bring in a priori judgment that some coefficients (some potential channels of influence) are negligible, or of a priori known form. The large-scale modelers did exactly this, but in the process assumed away many sources of uncertainty. They simplified the models as if they were certain that the restrictions they were imposing were correct, even though the restrictions were only approximate.
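To make the arithmetic concrete (an illustrative count of my own, not a figure from the lecture): in an unrestricted linear system with n variables and p lags of every variable in every equation, the slope coefficients alone number

$$ n^2 p, \qquad \text{e.g. } n = 200,\ p = 4:\quad 200^2 \times 4 = 160{,}000, $$

far more than the few hundred quarterly observations available for estimating them.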
The monetarist project
Milton Friedman, Anna Schwartz, David Meiselman, and others formulated a view of the business cycle and stabilization policy that suggested that the large Keynesian models were overcomplicated and had missed some simple statistical relationships that were central to good policy. Growth in the stock of money was tightly related to growth in income, they argued, and patterns of timing suggested that this tight relationship was causal: fluctuations in money growth causing fluctuations in income. A statistically estimated equation with income explained by current and past money growth implied that most of the business cycle could be eliminated simply by making money supply growth constant.
The Keynesian response
Examining their own large models, the Keynesians found that (contrary to the Keynesian consensus of the early 1950s) monetary policy was a powerful tool. But their models did not imply that constant money growth would eliminate the business cycle, or even be a good policy. James Tobin showed that the timing patterns in the money-income relation that monetarists displayed could arise in a model that had no causal influence of money on income. But Tobin did not use a large Keynesian statistical model to make his point. Those models were not credible, and had a flaw that made them unusable for his purposes.
Policy behavior as part of the model
The behavior of monetary and fiscal policy makers is to some extent systematic, but is also a source of uncertainty to the private sector. A serious probability model of the economy must take account of systematic policy responses, and also of their random component. Yet policy-makers do not see their own actions as random. Neither the Keynesian large-modelers nor the monetarists confronted this issue. Each group treated its favorite policy variables as non-random, exogenous, autonomous, or determined outside the model. This was a major gap in Haavelmo's research program, and it left the Keynesian vs. monetarist debate of the 1960s in a confused state.
The fundamental difference
The textbook frequentist view distinguishes non-random, but unknown, parameters from random quantities that repeatedly vary, or could conceivably repeatedly vary. The Bayesian view treats everything that is not known as random, until it is observed, after which it becomes non-random.
Coin flipping: The Bayesian view
Suppose we have observed the outcome of 10 flips of a coin that we know to be biased, so that it has an unknown probability p of turning up heads. Before we saw the 10 flips, we thought any value of p between zero and one equally likely. We need to determine the probability that the next flip will turn up heads. The Bayesian view is that, since we do not know p, p is random. The outcome of the next flip is also random, with part of the randomness coming from our uncertainty about p. If we saw that 8 of the first 10 flips were heads, the probability that the next flip would be heads would be .75. Not the apparently natural estimate of p, which is .8, because we cannot rule out the possibility that the 8-of-10 result was a random outcome with p below .8.
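The .75 follows from one line of standard Bayesian algebra, filling in the step the slide leaves implicit: the uniform prior combined with 8 heads in 10 flips gives a Beta(9, 3) posterior for p, and the predictive probability of heads is the posterior mean,

$$ P(p \mid \text{data}) \propto p^{8}(1-p)^{2}, \qquad P(\text{heads next} \mid \text{data}) = E[p \mid \text{data}] = \frac{8+1}{10+2} = 0.75. $$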
Coin flipping: The frequentist view
There is no way from a frequentist perspective to put a probability on the next flip being heads, using the information in the first 10 flips. The outcome of the next flip is random from this perspective, but its distribution depends on p, which is fixed, not random. Frequentist reasoning can describe the probability distribution across many possible samples of an estimator (like the apparently natural estimate here, p̂ = .8), but this cannot be transformed into a probability distribution for the next flip. Since this kind of prediction problem is so common, and decision makers want distributions for forecasts, there are frequentist tricks to produce something like a probability distribution for a forecast, but they are all forms of mental gymnastics.
Bayesian modeling of policy behavior
A Bayesian perspective finds no mystery or paradox in the notion that policy-makers see their own actions as non-random, while from the point of view of the private sector or an econometrician those same actions have probability distributions. This viewpoint is essential in creating models that include probability models of policy-maker behavior, and at the same time can be useful to policy-makers in planning their own actions. Treating models from this viewpoint is computationally challenging, and it is only in recent years, with the importation into economics of Markov chain Monte Carlo methods, that implementing it for large models has become practical.
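As a flavor of what Markov chain Monte Carlo does, here is a minimal random-walk Metropolis sketch (illustrative only, not code from any actual policy model), sampling the posterior of the coin-flip probability p from the earlier example and reading off the predictive probability of heads:

```python
# Minimal random-walk Metropolis sketch: uniform prior, 8 heads in 10 flips.
import math
import random

def log_posterior(p, heads=8, tosses=10):
    """Log of the unnormalized posterior: uniform prior x binomial likelihood."""
    if not 0.0 < p < 1.0:
        return -math.inf
    return heads * math.log(p) + (tosses - heads) * math.log(1.0 - p)

def metropolis(n_draws=50_000, step=0.1, seed=0):
    rng = random.Random(seed)
    p, lp = 0.5, log_posterior(0.5)        # start at the prior mean
    draws = []
    for _ in range(n_draws):
        proposal = p + rng.gauss(0.0, step)  # random-walk proposal
        lp_prop = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_prop - lp:
            p, lp = proposal, lp_prop
        draws.append(p)
    return draws

draws = metropolis()[5_000:]   # discard burn-in draws
# Posterior mean of p = predictive P(heads); should be close to 9/12 = 0.75
print(sum(draws) / len(draws))
```

Estimating a large model works the same way in principle: the binomial likelihood is replaced by the model's likelihood, evaluated by solving the model at each proposed parameter vector.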
Bayesian treatment of prior information
A Bayesian approach comfortably accommodates uncertain prior information. In a large model, it allows introducing sensible restrictions on the values of unknown parameters without pretending that these restrictions are free of uncertainty. That is, one introduces probability distributions for model parameters and then lets the data update or sharpen those distributions. This avoids implying unrealistic precision in the probability distributions for model predictions.
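In the simplest textbook illustration (mine, not the lecture's): a normal prior and normally distributed data combine into a posterior whose mean weights each source by its precision, so the data sharpen, rather than replace, the prior:

$$ \theta \sim N(\mu_0, \tau_0^2), \quad \bar{y} \mid \theta \sim N(\theta, \sigma^2/n) \;\Longrightarrow\; \theta \mid \bar{y} \sim N\!\left( \frac{\mu_0/\tau_0^2 + n\bar{y}/\sigma^2}{1/\tau_0^2 + n/\sigma^2},\; \frac{1}{1/\tau_0^2 + n/\sigma^2} \right). $$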
The insight of Lucas, Sargent, Wallace, and others
The behavior of policy makers, both its systematic and its unpredictable components, is an essential part of the environment for all economic decision-makers. Economic models therefore need to include a component that models policy behavior and to recognize that this component influences the behavior of the private sector. This insight zeroed in on the common major weakness of the large Keynesian models and the monetarist regressions.
The malign influence of rational expectations
Some economists, still uncomfortable with thinking of policy actions as realizations of random variables, took the view that rational expectations modeling required that policy changes be modeled only as non-random, exogenous changes in the policy rule itself. This suggested that what policy-makers regularly do, which is to choose values for variables that they control, was trivial, was not an activity for which economists could provide advice, or both. Those academic economists who, like Keynes, were impatient with the sometimes tedious complexity of policy models eagerly took up the excuse to ignore the still widely used large policy models. Without regular constructive academic criticism, the models wandered still further from Haavelmo's project.
Testing the monetarist regressions
If the monetarists were right in claiming that the strong correlations of money growth with income primarily reflected a causal influence of monetary policy errors on income, future money growth should not contribute to explaining current income, once the influence of current and past money growth on income had been accounted for. In a 1972 paper I looked at this question, and found that the monetarist regressions passed the test. Future money growth did not help predict current income.
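Schematically (my notation, not the paper's), the test regresses income on past and future money growth and asks whether the future terms matter:

$$ y_t = \alpha + \sum_{j=0}^{m} \beta_j \, \Delta m_{t-j} + \sum_{j=1}^{k} \gamma_j \, \Delta m_{t+j} + \varepsilon_t, \qquad H_0:\ \gamma_1 = \cdots = \gamma_k = 0. $$

If money growth is causally prior, the coefficients on future money growth should be zero, and in the 1972 results they were.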
Spurious unidirectional causality
In economic models, apparent causal direction based on predictive power can be misleading. Rational expectations, applied to financial markets, explains why this is true: prices of frequently traded assets will react promptly to all new information, and hence themselves be unpredictable. Regressions of any economic variable on an asset price will thus tend to show that current and past asset prices help in prediction, but that future asset prices do not.
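In symbols (a standard statement of this logic, my formulation): if $p_t$ is an asset price and $\mathcal{I}_t$ the market's information set, then

$$ E[\,p_{t+1} \mid \mathcal{I}_t\,] = p_t, $$

so current and past prices summarize everything useful for prediction, while future prices contribute nothing once current prices are included.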
Spurious money-income equations
While the stock of money is not an asset price, a policy that smooths the time path of interest rates may make money behave like an asset price in causality-test regressions, even when monetary policy errors actually have little influence on income. This was recognized early by a few economists, and later by me. I showed it was true in a detailed model in a 1989 American Journal of Agricultural Economics paper.
Money and interest rates
While some economists were estimating the money-explains-income monetarist equations, others were estimating demand for money, explaining money stock as a function of income and interest rates. If money stock were exogenous in the monetarist equations, it seemed unlikely that current and past income could properly be treated as helping to determine money stock in the money demand equations. In a 1978 paper, Yash Pal Mehra showed that the money demand equations passed the same kind of test I had used on the monetarist income-money regressions. I decided the only way to reconcile these results was to start using more than one equation.
VARs and SVARs
Descriptive linear multiple-equation systems, aimed at capturing empirical regularities while using only prior distributions without economic content (expressing belief that coefficients on longer lags were less likely to be important and that variables were likely to evolve smoothly over time), were called vector autoregressions, or VARs. When prior beliefs with economic content were introduced, so that the estimated systems had potential causal interpretations, the system was called a structural VAR, or SVAR.
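In standard notation (a textbook summary, not text from the lecture), a VAR and an SVAR for a vector of series $y_t$ are

$$ \text{VAR:}\quad y_t = c + A_1 y_{t-1} + \cdots + A_p y_{t-p} + \varepsilon_t, $$
$$ \text{SVAR:}\quad B_0 y_t = b + B_1 y_{t-1} + \cdots + B_p y_{t-p} + u_t, $$

where restrictions on the contemporaneous matrix $B_0$ (for example, which variables the monetary authority can react to within the period) give the individual disturbances in $u_t$ economic interpretations, such as a monetary policy shock.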
The monetarist debate viewed through SVARs
I and many other researchers, including Martin Eichenbaum, Larry Christiano, Olivier Blanchard, and Mark Watson among the earliest, were able to show using SVARs that influences of monetary policy were detectable in the data. But at the same time, we showed that most movements in both money stock and interest rates represented systematic reactions of monetary authorities to the state of the economy. Only a small part of macroeconomic fluctuations could be attributed to erratic monetary policy.
A new class of models
In the last 10 years economists, starting with Christiano, Eichenbaum, Frank Smets, and Raf Wouters, developed models that aimed at mimicking the SVARs' estimated patterns of influence of monetary policy and that were:
big enough to be usable for policy analysis;
fully interpreted, so all sources of variation in them have economic interpretations;
specified as complete probability models, like Haavelmo's original example;
estimated using a Bayesian approach;
about as well aligned with the data, according to Bayesian measures of fit (defined below), as are VARs.
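The "Bayesian measures of fit" in the last item are, in standard usage (my gloss), marginal likelihoods: the probability of the observed data $Y$ under model $M$, integrating the likelihood over the prior, which can be compared directly across a DSGE and a VAR:

$$ p(Y \mid M) = \int p(Y \mid \theta, M)\, p(\theta \mid M)\, d\theta. $$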
Haavelmo's program complete?
These models are, I think, closer to what Haavelmo was aiming at than their predecessors were. But we should recognize that, though they may represent an advance, they are the result of a history of scrambling up an unstable slope. Previous researchers, and I myself, have been mistaken before, and probably will be again. And we can see today that these models are still vulnerable to important criticisms.
Outliers
The crash of 2008 and its aftermath have taught us that very large errors, or shocks, in our models do occasionally occur and have strong effects. Most models have tended to ignore this, even though a look at historically fitted error terms would make it clear that it was not only in the 1930s and in 2008-10 that such large shocks occurred.
Outlier examples
A fairly well known example: monetary policy in the period 1979-1982, the Volcker disinflation, appears as very large shocks to the usual, predictable behavior of monetary policy. Perhaps equally important, but less well known, is that the US federal government primary deficit (expenditures, other than interest payments, less revenues) relative to outstanding debt showed a huge jump in the second quarter of 1975. The data by themselves cannot determine our probability models for these rare events, but we should recognize that they occur, using theory and what data we have to bring them into policy discussions.
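A toy numerical comparison (my own illustration, not from the lecture) shows how drastically a Gaussian error assumption understates such rare large shocks relative to a modestly heavier-tailed alternative:

```python
# Toy comparison: frequency of |shock| > 5 under a standard normal
# vs. a Student-t(4), simulated without external libraries.
import math
import random

rng = random.Random(1)
N = 200_000
DF = 4

def student_t(df):
    """Draw a Student-t variate as z / sqrt(chi2_df / df)."""
    z = rng.gauss(0.0, 1.0)
    chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

t_tail = sum(abs(student_t(DF)) > 5.0 for _ in range(N)) / N
# Exact two-sided Gaussian tail P(|z| > 5), via the complementary error function
normal_tail = math.erfc(5.0 / math.sqrt(2.0))
print(f"Student-t(4): {t_tail:.1e}   normal: {normal_tail:.1e}")
# The t(4) exceedance probability is roughly four orders of magnitude larger,
# so events a Gaussian model treats as essentially impossible occur at an
# appreciable rate under modestly heavier tails.
```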
Unbelievable stories
Though DSGEs provide complete interpretations of their shocks, some of these interpretations are dubious. This is particularly true of their modeling of the sources of monetary non-neutrality, which are essential to welfare evaluation of monetary policy. In other words, we need to be considering competitors to the New Keynesian Phillips curve in the context of these models.
Fiscal policy
These models have little or no treatment of fiscal policy impacts on inflation, using the insights of rational expectations. This is perhaps a legacy of the profession's intense focus on the monetarist/Keynesian controversies of the 60s and 70s. In light of the current situation, it is urgent that we fill this gap.
Preserving respect for Tinbergen's and Haavelmo's project
It is remarkable how rapidly and completely academic interest shifted away from serious probability-based policy modeling after the rational expectations revolution. This reflected aspects of the sociology of our profession that remain with us. Despite the recent pickup in academic interest in these issues, there remains substantial resistance to giving them academic respect. We need to preserve the momentum of this research.