
Science and the Uncertainty Behind a Social Cost of Carbon

What do the models tell us about the social cost of carbon and therefore the urgency of strong action to mitigate emissions? Very little.

That’s the assessment by MIT Sloan’s Professor Robert Pindyck in a paper circulated in July and forthcoming in the September issue of the Journal of Economic Literature. The paper is very readable: a couple of equations are displayed, but there is really no math. The argument does presume, however, that the reader has a sound grasp of the economic concepts pertinent to long-horizon decision making under uncertainty.

Pindyck’s argument is not against action. It’s against the way models are currently invoked as authorities in the case for action. He rightly calls attention to the fact that

…certain inputs (e.g. the discount rate) are arbitrary, but have huge effects on the [social cost of carbon] estimates the models produce; the models’ descriptions of the impact of climate change are completely ad hoc, with no theoretical or empirical foundation; and the models can tell us nothing about the most important driver of the [social cost of carbon], the possibility of a catastrophic climate outcome. [Model]-based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading.

Pindyck is not being a nihilist here. He is trying to redirect policy analysis onto a path he believes will be more fruitful in truly informing public discussion and arriving at better choices. In his words:

So how can we bring economic analysis to bear on the policy implications of possible catastrophic outcomes? Given how little we know, a detailed and complex modeling exercise is unlikely to be helpful. (Even if we believed the model accurately represented the relevant physical and economic relationships, we would have to come to agreement on the discount rate and other key parameters.) Probably something simpler is needed. Perhaps the best we can do is come up with rough, subjective estimates of the probability of a climate change sufficiently large to have a catastrophic impact, and then some distribution for the size of that impact (in terms, say, of a reduction in GDP or the effective capital stock).

The problem is analogous to assessing the world’s greatest catastrophic risk during the Cold War — the possibility of a U.S.-Soviet thermonuclear exchange. How likely was such an event? There were no data or models that could yield reliable estimates, so analyses had to be based on the plausible, i.e., on events that could reasonably be expected to play out, even with low probability. Assessing the range of potential impacts of a thermonuclear exchange had to be done in much the same way. Such analyses were useful because they helped evaluate the potential benefits of arms control agreements.

The same approach might be used to assess climate change catastrophes. First, consider a plausible range of catastrophic outcomes (under, for example, BAU), as measured by percentage declines in the stock of productive capital (thereby reducing future GDP). Next, what are plausible probabilities? Here, “plausible” would mean acceptable to a range of economists and climate scientists. Given these plausible outcomes and probabilities, one can calculate the present value of the benefits from averting those outcomes, or reducing the probabilities of their occurrence. The benefits will depend on preference parameters, but if they are sufficiently large and robust to reasonable ranges for those parameters, it would support a stringent abatement policy. Of course this approach does not carry the perceived precision that comes from an IAM-based analysis, but that perceived precision is illusory. To the extent that we are dealing with unknowable quantities, it may be that the best we can do is rely on the “plausible.”

These are wise words. Importantly, Pindyck’s paper directs us to appreciate the full research task in front of us. There is a lot to be done, and we are far from a full, comprehensive answer. The right starting point may be to focus on small elements of the problem, as opposed to employing a modeling framework that seduces us with its completeness. That completeness is illusory, and it tricks us into making assumptions about things of which we are ignorant.
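To see how simple this back-of-the-envelope exercise can be, here is a minimal sketch, in Python, of the kind of calculation Pindyck describes. It is not his calculation: the probability of catastrophe, the impact distribution, the growth rate, and the horizon are all hypothetical placeholders. What the sketch does show is how completely the answer hinges on the discount rate, the very input Pindyck calls arbitrary.

    # A minimal sketch of the "plausible outcomes" arithmetic described above.
    # Every number here (the probability, the loss sizes, the growth rate,
    # the horizon) is a hypothetical placeholder, not an estimate from the paper.

    baseline_gdp = 100.0   # GDP in year 0, arbitrary units
    gdp_growth = 0.02      # assumed annual real growth rate
    horizon = 200          # years over which a catastrophic loss would persist

    # Rough, subjective probability that a catastrophic outcome occurs, and a
    # subjective distribution over its size: fractional GDP losses paired with
    # probabilities conditional on the catastrophe occurring.
    p_catastrophe = 0.05
    impact_distribution = [(0.10, 0.5), (0.25, 0.3), (0.50, 0.2)]

    def pv_of_averted_losses(discount_rate):
        """Present value of the expected GDP stream saved by averting the event."""
        expected_fractional_loss = p_catastrophe * sum(
            size * prob for size, prob in impact_distribution
        )
        pv = 0.0
        for t in range(1, horizon + 1):
            gdp_t = baseline_gdp * (1.0 + gdp_growth) ** t
            pv += expected_fractional_loss * gdp_t / (1.0 + discount_rate) ** t
        return pv

    # The discount rate, the input Pindyck flags as arbitrary, dominates the answer.
    for r in (0.01, 0.03, 0.05):
        print(f"discount rate {r:.0%}: PV of averted losses = {pv_of_averted_losses(r):,.1f}")

With these placeholder numbers, the present value of averted losses falls by more than a factor of ten as the discount rate moves from 1% to 5%, with no model complexity involved at all. That is the kind of sensitivity a simple, transparent calculation makes visible, and that an elaborate integrated assessment model (IAM) can obscure.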

Back in 2007, when the UK’s Stern Review came out and MIT hosted a discussion of it, my critique shared many points with Pindyck’s. I emphasized the enormous uncertainties surrounding assessments of damage from climate change, and called attention to what I called “The Heroic Assumptions” embedded in the Stern Review’s calculations. I then described two alternative strategies for scientists and economists faced with uncertainties this great.

Strategy #1 is to put one’s head down and plow forward to produce a bottom-line policy recommendation. Use the best estimates and the best modeling choices available, no matter how poorly informed by empirical research. Make the best ethical judgments possible, while being explicit about one’s choices. Then turn the crank on the model and spit out a cost number, a benefit number, and the resulting policy recommendation.

Strategy #2 is to say clearly what we know, and just as clearly what we don’t know. Inform the discussion as far as science can reliably inform it, but no farther. Unpack the key points that need to be addressed, but accept that public debate is the right forum for weighing how society should act in the face of the great imponderables. Leave it to society to make the critical value judgments.

Too many scientists and economists choose Strategy #1. More humility is necessary. Strategy #2 is healthier for democracy and truer to the ethos of what economic science ought to be about.
