Recently, I was reminded of the commonly used slogan “evidence-based policy.” Except perhaps for marketing purposes, I find this terminology to be a misnomer, a misleading portrayal of academic discourse and the advancement of understanding. While we want to embrace evidence, the evidence seldom speaks for itself; typically, it requires a modeling or conceptual framework for interpretation. Put another way, economists—and everyone else—need two things to draw a conclusion: data, and some way of making sense of the data.

That’s where modeling comes in. Modeling is used not only to aid our basic understanding of phenomena, but also to capture how we view any implied trade-offs for social well-being. The latter plays a pivotal role when our aim is to use evidence in policy design.

This is intuitive if you think about the broad range of ideas and recommendations surrounding macroeconomic policy and the spirited, sometimes acrimonious way in which they’re debated. If everything were truly evidence-based, then to the extent we can agree on the accuracy of the evidence, why would there be such heterogeneity of opinion? The disagreement stems from the fact that people are using different models or conceptual frameworks, each with its own policy implications. Each might be guided by evidence, but policy conclusions can rarely be drawn directly from the evidence itself.

The interplay between theory and evidence has long been discussed by prominent scholars in economics and other disciplines, including some at the University of Chicago. My colleague Stephen Stigler reminded me of an 1885 quote from Alfred Marshall about the potentially important impact of the choice of which evidence to report:

The most reckless and treacherous of all theorists is he who professes to let facts and figures speak for themselves, who keeps in the background the part he has played, perhaps unconsciously, in selecting and grouping them.

This concern has not been erased by our current data-rich environment.

Others have weighed in on how to give policy-relevant interpretations to evidence. Back in 1947, Tjalling Koopmans, a prominent member of the Cowles Commission (an economic-research organization then headquartered at the University of Chicago, and now housed at Yale), wrote an essay called “Measurement without Theory,” exposing the limitations of well-known evidence on business cycles. This theme was revisited by other scholars affiliated with the Cowles Commission, namely Jacob Marschak and Leo Hurwicz, and then again in an acclaimed 1976 paper by my current and longtime colleague Bob Lucas. Of course, the generation and construction of new data add much richness to economic analyses. For many important economic questions, however, empiricism by itself is of limited value.

For a recent exchange illustrating how opinions can diverge given the same evidence, consider the disparate viewpoints of two excellent economic historians working at the same institution: Northwestern’s Joel Mokyr and Robert Gordon. Here’s Mokyr on why we should be optimistic about the long-term prospects for innovation:

There are a myriad of reasons why the future should bring more technological progress than ever before—perhaps the most important being that technological innovation itself creates questions and problems that need to be fixed through further technological progress.

And here’s Gordon, with a markedly less rosy analysis:

. . . the rise and fall of growth are inevitable when we recognize that progress occurs more rapidly in some time periods than others. . . . The 1870–1970 century was unique: Many of these inventions could only happen once, and others reached natural limits.

Gordon warns us that we can’t expect technological progress to keep up with the pace set in the previous century, whereas Mokyr says, to paraphrase, “That century was special, but other special things are likely to happen in the future in ways we can’t fully articulate right now. There’s no reason to be pessimistic about technological progress going forward.” These are two astute scholars relying upon the same historical evidence, yet they’ve drawn different conclusions. Why? The evidence alone does not answer the question they are addressing, and they’re using different subjective inputs to help in extrapolating from the evidence. (For more from Gordon and Mokyr on innovation, watch “Can innovation save the US economy?” part of The Big Question video series.)

This sort of disagreement in models and interpretation stems in part from the essential complexity of dynamic macroeconomic phenomena. Moreover, some of the market environments, including financial markets, that economists like me study are similarly complicated for both external analysts and market participants. In fact, a modeling challenge that I and others have confronted is how to meaningfully acknowledge and capture the limits to our understanding, and what implications these limits have for markets and economic outcomes.

While experimental evidence in various guises is available, macroeconomists, unlike many of our colleagues in the physical and biological sciences, are limited in the types of experiments we can run. Other sources of evidence can be helpful, including those captured in aggregate time series and in microeconomic cross sections. But for important policy-relevant questions, using this evidence in meaningful ways requires conceptual frameworks or models. We are often interested in assessing alternative policies for which the information in the existing data may be quite limited. The evidence, the economic data, tells us to some degree what happened in the economy as a result of a given set of conditions; models are what allow us to compare what happened with what would have happened under a different set of conditions, including, of course, different policies. Without a framework to enable that comparison, the data are descriptive, perhaps, but not nearly as useful. Models are thus, in essence, tools that allow us to better explore hypothetical changes in the underlying economic environment. The choice of model is a vital input into the analysis and can have a big impact on the policy implications.

While macroeconomists are limited in terms of the experiments they can conduct, policy makers do, on occasion, experiment—usually by accident. I have been involved in overseeing a project on the fiscal and monetary histories of Latin American countries. This project provides a good example of unintentional (and unfortunately sometimes socially costly) “experimentation.” Countries with geographic proximity and some cultural similarities have had vastly different macroeconomic experiences. But even in this case, where we have countries with differing policies and economic results, we need conceptual frameworks to put it all together. Otherwise, we’re left with a specific narrative for Colombia, another for Brazil, and another for Argentina. How do we extract the lessons from all that evidence and aggregate them in a way that’s useful for thinking about the interaction between monetary and fiscal policy in other contexts? We need a framework for that.

More agreement between models might make for less arguing among politicians and the people who advise them, but it wouldn’t necessarily make economics more useful as a science: models, or at least features of models, can be broadly accepted and still be wrong. Think of all the economists who were surprised by the magnitude of the 2008–09 financial crisis. In the types of models used prior to the crisis by central banks’ research departments, including those advising the US Federal Reserve and European central banks, the role of the financial sector was typically passive. Financial markets were often seen as barometers, not as triggers of important macroeconomic consequences. In fact, in many respects, people thought that for economies such as the United States, financial crises were largely a thing of the past.

The crisis, needless to say, was an eye-opener for many people who held this view, and even many who didn’t. Plenty of economists will tell you they predicted the crisis, but you have to ask, did they predict the magnitude of it? And how many other crises did they predict that didn’t actually materialize? I think even many of those who confidently predicted a crisis didn’t anticipate how catastrophic it would be for the global economy.

While we have to rely, at least implicitly, on theoretical frameworks in order to interpret and act upon the evidence, this reliance is not without concern. No one has yet built, or will ever build, a perfect model. A model is a simplification, an abstraction; it’s not meant to be a full description of reality. There is typically uncertainty as to how far any model can take us before we encounter quantitatively important mistakes in our use of it. It is naive to criticize models for not being fully descriptive, but at the same time it is important that we use them with our eyes open to their potential limitations.

Understanding that the evidence itself does not contain all the answers is crucial to an informed society. We’re living in a world that along some dimensions feels very data rich. We’re able to collect a lot of data, we have powerful hardware to store and process them, and we have machine-learning techniques to find patterns in them. But many of the important questions we face are fundamentally dynamic problems, in areas where, along some dimensions, our knowledge is sparse. How do we best handle financial-market oversight in order to limit the possibilities of a big financial crisis? What economic policies should be in place to confront climate change? For such problems, we need more than a call for action; we need to implement specific policies even as we can only speculate about their impact.

Many of the people who influence, or want to influence, public policy are reluctant to acknowledge that we’re often working with incomplete information. Ambiguity, they believe, is hard to sell to the public or to the politicians who claim to represent it: policy makers often seek confidence in policy outcomes, even when that confidence is not justified. As a result, there will always be people willing to step to the forefront and give confident answers. Friedrich Hayek warned of this in his Nobel lecture:

The conflict between what, in its present mood, the public expects science to achieve in satisfaction of popular hopes and what is really in its power is a serious matter because, even if the true scientists should all recognize the limitations of what they can do in the field of human affairs, so long as the public expects more, there will always be some who will pretend, and perhaps honestly believe, that they can do more to meet popular demands than is really in their power.

There are aspects to climate change and its economic ramifications that we don’t fully understand. We have some evidence from economics. We have some evidence from climate science. In neither case does the evidence speak for itself in terms of the precise impacts human actions today will have on the climate and social outcomes in the future, so we’re attempting to use models to help fill in the gaps of our understanding.

Of course, we don’t need to know the exact magnitude of the damage to the climate in order to know we should try to mitigate it. Waiting to address it could be incredibly socially costly. Knowing there’s a possibility of bad outcomes is enough to make you want to act now.

But there are those who suggest we shouldn’t dilute the message by telling the public, “We don’t fully understand the climate system and the economics of climate change.” They feel that if we advertise this uncertainty, it will lead to nothing being done to address climate change.

I find that discussion depressing. Prudent and smart decisions don’t require full knowledge. They require that you assess the uncertainty and figure out its potential consequences. The uncertainty doesn’t mean that you simply cross your arms, close your eyes, and do nothing while you wait for complete certainty. In economics, you will be waiting a long time.

I have loaded much into the term “prudent,” however. Designing activist policy prescriptions on the basis of a false pretense of knowledge can indeed be harmful.


Lars Peter Hansen is David Rockefeller Distinguished Service Professor at the University of Chicago Departments of Economics and Statistics and at Chicago Booth. He was a 2013 recipient of the Nobel Prize in Economic Sciences.
