
Where Are We in the Reform of OTC Derivatives Markets?

Here is my take on the current status of the reform. It’s a chapter in a report put out by the Americans for Financial Reform and the Roosevelt Institute titled An Unfinished Mission: Making Wall Street Work for Us. Here’s the link for the full report.

The Value of Clearing Derivatives

[Image: financial dominoes]

What are the costs and benefits of the reform of derivative markets now taking place? A report released last week by the Bank for International Settlements (BIS) pegged the central estimate of the benefits at 0.16% of annual GDP.[1] With US GDP at something more than $15 trillion, that’s $24 billion annually. For the OECD as a whole, the figure is nearly triple that.

Approximately 50% of the benefits are due to the push to central clearing.
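
As a quick check on that arithmetic, here is a minimal sketch; the dollar figures are just the rounded numbers cited above.

```python
# Back-of-the-envelope check of the BIS estimate cited above.
us_gdp = 15e12           # US GDP, a bit more than $15 trillion
benefit_share = 0.0016   # BIS central estimate: benefits of 0.16% of annual GDP

us_benefit = benefit_share * us_gdp
print(f"US benefit: ${us_benefit / 1e9:.0f} billion per year")          # ~$24 billion
print(f"From central clearing: ${0.5 * us_benefit / 1e9:.0f} billion")  # roughly half of the total
```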

Science and the Uncertainty Behind a Social Cost of Carbon

What do the models tell us about the social cost of carbon and therefore the urgency of strong action to mitigate emissions? Very little.

That’s the assessment by MIT Sloan’s Professor Robert Pindyck in a paper distributed in July and forthcoming in this September’s Journal of Economic Literature. The paper is very readable. While a couple of equations are displayed, there is really no math. The argument does, however, presume that the reader has a sound grasp of the economic concepts pertinent to long-horizon decision making under uncertainty.

Pindyck’s argument is not against action.  It’s against the way models are currently employed as authority in the case for action. He rightly calls attention to the fact that

…certain inputs (e.g. the discount rate) are arbitrary, but have huge effects on the [social cost of carbon] estimates the models produce; the models’ descriptions of the impact of climate change are completely ad hoc, with no theoretical or empirical foundation; and the models can tell us nothing about the most important driver of the [social cost of carbon], the possibility of a catastrophic climate outcome. [Model]-based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading.

Pindyck is not being a nihilist here. He is trying to redirect policy analysis to a path he believes will be more fruitful in truly informing public discussion and arriving at better choices.

So how can we bring economic analysis to bear on the policy implications of possible catastrophic outcomes? Given how little we know, a detailed and complex modeling exercise is unlikely to be helpful. (Even if we believed the model accurately represented the relevant physical and economic relationships, we would have to come to agreement on the discount rate and other key parameters.) Probably something simpler is needed. Perhaps the best we can do is come up with rough, subjective estimates of the probability of a climate change sufficiently large to have a catastrophic impact, and then some distribution for the size of that impact (in terms, say, of a reduction in GDP or the effective capital stock).

The problem is analogous to assessing the world’s greatest catastrophic risk during the Cold War — the possibility of a U.S.-Soviet thermonuclear exchange. How likely was such an event? There were no data or models that could yield reliable estimates, so analyses had to be based on the plausible, i.e., on events that could reasonably be expected to play out, even with low probability. Assessing the range of potential impacts of a thermonuclear exchange had to be done in much the same way. Such analyses were useful because they helped evaluate the potential benefits of arms control agreements.

The same approach might be used to assess climate change catastrophes. First, consider a plausible range of catastrophic outcomes (under, for example, BAU), as measured by percentage declines in the stock of productive capital (thereby reducing future GDP). Next, what are plausible probabilities? Here, “plausible” would mean acceptable to a range of economists and climate scientists. Given these plausible outcomes and probabilities, one can calculate the present value of the benefits from averting those outcomes, or reducing the probabilities of their occurrence. The benefits will depend on preference parameters, but if they are sufficiently large and robust to reasonable ranges for those parameters, it would support a stringent abatement policy. Of course this approach does not carry the perceived precision that comes from an IAM-based analysis, but that perceived precision is illusory. To the extent that we are dealing with unknowable quantities, it may be that the best we can do is rely on the “plausible.”

These are wise words. Importantly, Pindyck’s paper directs us to appreciate the full research task in front of us. There is lots to be done. We are far from a full, comprehensive answer. The right starting point may be to focus on small elements of the problem, as opposed to employing a modeling framework that seduces us with its completeness. The completeness is illusory and tricks us into making assumptions on things about which we are ignorant.
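
To make the "plausible outcomes and probabilities" exercise concrete, here is a minimal sketch of the kind of calculation Pindyck gestures at. Every number below is a placeholder chosen for illustration, not an estimate, and the sensitivity to the discount rate is exactly the kind of judgment he wants made explicit.

```python
def pv_of_averted_catastrophe(gdp, scenarios, discount_rate, horizon_years):
    """Present value of the expected losses avoided by preventing a catastrophe.

    scenarios: (probability, fractional GDP loss) pairs, treated here as a
    permanent loss that begins `horizon_years` from now.
    """
    expected_loss_share = sum(p * loss for p, loss in scenarios)
    annual_loss = expected_loss_share * gdp
    pv_at_horizon = annual_loss / discount_rate          # perpetuity of avoided losses
    return pv_at_horizon / (1 + discount_rate) ** horizon_years

# Placeholder inputs: a 5% chance of a 20% GDP loss and a 1% chance of a 50% loss,
# starting 50 years out, discounted at 3%.
benefit = pv_of_averted_catastrophe(
    gdp=15e12,
    scenarios=[(0.05, 0.20), (0.01, 0.50)],
    discount_rate=0.03,
    horizon_years=50,
)
print(f"PV of averted losses: ${benefit / 1e12:.1f} trillion")
```

Rerunning the same placeholder inputs with a 1 per cent or a 5 per cent discount rate moves the answer by more than an order of magnitude, which is precisely Pindyck's point about arbitrary inputs driving the results.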

Back in 2007, when the UK’s Stern Review was out and MIT hosted a discussion of the Review, my critique shared many points with Pindyck’s. I emphasized the enormous uncertainties surrounding the assessments of damage from climate change, and called attention to what I called “The Heroic Assumptions” embedded in the Stern Review’s calculations. I then described two alternative strategies for scientists and economists faced with uncertainties that are so great.

Strategy #1 is to put one’s head down and plow forward to produce a bottom line policy recommendation. Use the best estimates you have and the best available modeling choices you have, no matter how ill informed by empirical research. Make the best ethical judgments possible, while being explicit about one’s choices. And then turn the crank on the model and spit out a cost number and a benefit number and the resulting policy recommendation.

Strategy #2 is to say clearly what we know, and just as clearly what we don’t know. Inform the discussion as far as science can reliably inform it, but no farther. Unpack the key points that need to be addressed, but accept that the public debate is the right forum in which to assess and evaluate how society should act in the face of the great imponderables. Leave it to society to make the critical value judgments.

Too many scientists and economists choose strategy #1. More humility is necessary. Strategy #2 is healthier for democracy and truer to the ethos of what economic science ought to be about.

Be careful how you swing that hatchet!

[Image: Eugene]

During last year’s debate about the Volcker Rule, Morgan Stanley commissioned a study by the consulting firm IHS that predicted dire consequences for the U.S. economy. I called the study a hatchet job. My main complaint was that the study made the obviously unreasonable assumption that the banks’ commodity trading operations would be closed down and not replaced. IHS even excluded the option of having banks sell the operations.

So this story in today’s Financial Times gave me a good chuckle:

US private equity group Riverstone is leading talks on an investment of as much as $1bn in a new commodities investment venture to be run by a former Deutsche Bank executive…

Morgan Stanley is considering a sale or a joint venture for its commodities business… James Gorman, Morgan Stanley’s chief executive, last October said the bank was exploring “all form of structures” for its commodities business.

Glenn Dubin, Paul Tudor Jones and a group of other commodity hedge fund investors last year bought the energy trading business from Louis Dreyfus Group and Highbridge Capital, the hedge fund owned by JPMorgan Chase. The parties later renamed the business Castleton Commodities International.

And so, another industry-funded hatchet job on the Dodd-Frank financial reform ages poorly.

Backwardation in Gold Prices?

Izabella Kaminska at FT Alphaville clarifies what’s going on.

Would you like fries with that McSwap?


Last week the OTC swaps market took a big step towards the creation of standardized interest rate swaps. Pushed by the buy-side, ISDA developed a “Market Agreed Coupon” or MAC contract with common, pre-agreed terms. From the ISDA press release:

The MAC confirmation features a range of pre-set terms in such areas as start and end dates, payment dates, fixed coupons, currencies and maturities. It is anticipated that coupons in the contract will be based on the three- or six-month forward curve and rounded to the nearest 25 basis point increments. Effective dates will be IMM dates, which are the third Wednesday of March, June, September and December. The initial currencies covered include the USD, EUR, GBP, JPY, CAD and AUD. Maturities will be 1, 2, 3, 5, 7, 10, 15, 20 and 30 years.

This is good for end-users. Dealers have long used superfluous customization as a tool to blunt competition and maintain margins. Creating a subset of contracts with standardized terms will make the interest rate swap market more efficient in many ways.
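
To give a sense of how mechanical the pre-set terms are, here is a minimal sketch; the function names are hypothetical and this is not part of any ISDA documentation, just an illustration of rounding a coupon to the nearest 25 basis points and finding the next IMM effective date.

```python
from datetime import date, timedelta

IMM_MONTHS = (3, 6, 9, 12)  # March, June, September, December

def third_wednesday(year, month):
    """The third Wednesday of the month, i.e. the IMM date."""
    first = date(year, month, 1)
    days_to_first_wed = (2 - first.weekday()) % 7   # Monday=0, so Wednesday=2
    return first + timedelta(days=days_to_first_wed + 14)

def next_imm_effective_date(trade_date):
    """First IMM date strictly after the trade date."""
    for year in (trade_date.year, trade_date.year + 1):
        for month in IMM_MONTHS:
            imm = third_wednesday(year, month)
            if imm > trade_date:
                return imm

def mac_coupon(coupon_bp):
    """Round a coupon quoted in basis points to the nearest 25 bp increment."""
    return int(round(coupon_bp / 25.0) * 25)

print(next_imm_effective_date(date(2013, 4, 24)))   # 2013-06-19
print(mac_coupon(187))                              # 175
```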

Some in the industry worry this just feeds the trend to futurization of swaps:

“It’s quite speculative to try to figure how this will turn out, but on the one hand a more standardised product is presented as more homogeneous, which is good for OTC markets, while on the other, you could argue the more a product is standardised, the less differentiated it is from futures and ultimately could lose out to straight futures activity,” says one New York-based rates trader. “I think there is a fear that this standardisation process creates a much easier path towards futurisation. You could argue this is one step closer towards promoting the success of swap future contracts.” (RISK magazine, subscr. required)

But that ship has already sailed. The G20 specifically rejected the old model of faux customization and mandated standardization in support of improved transparency and clearing. Whether standardization happens within the OTC swaps space or via futurization is a detail.

Smooth Talk About Gold

Bruce Bartlett used his New York Times Economix blog post today to argue that “Gold’s Declining Price Is a Reversion to the Mean”. He buttresses his argument by pointing out that

In a recent paper, the economists Claude B. Erb and Campbell R. Harvey present strong evidence that the gold market was severely overbought. The increase in gold prices did not represent a change in the trend of inflation. As the chart indicates, even with the sell-off, the price of gold is still high and has a long ways to fall to get back to the “golden constant” that gold-standard advocates cite as proof that the dollar should be pegged to gold.

[Chart from Bartlett’s post, based on Erb and Harvey]


CSI: prop trading investigation squad.

Does JP Morgan’s derivative portfolio hedge its other lines of business? This picture says ‘no’.

Does JP Morgan use derivatives to make prop trading bets on interest rates? This picture suggests ‘yes.’

 

[Chart: Piazzesi et al.]

Let me explain.


Can Hedging Save Cyprus?

Lenos Trigeorgis has a piece in the Financial Times’ Economists’ Forum advocating the use of GDP-linked bonds for Cyprus.

Suppose that its steady-state GDP growth is 4 per cent and that fixed interest on EU rescue loans is 3 per cent. Instead of the fixed rate loan, Cyprus could issue bonds paying interest at its GDP growth minus 1 per cent (the difference between the average growth rate and the EU bailout rate). If GDP growth next year is 0 per cent, lenders would pay the Cypriot government 1 per cent, providing Cyprus with some relief in hard times. But if after, say, 10 years GDP growth is 7 per cent, lenders would instead receive 6 per cent. In essence, during recession EU lenders will provide insurance and interest subsidy to troubled Eurozone members, helping them pull themselves up, in exchange for higher growth returns during good times. Increased interest bills in good times might also discourage governments from sliding back into bad habits.

As we’ve written in a couple of earlier posts, this is easier said than done. But it’s certainly thinking along the right lines.
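
To make the arithmetic in the quoted proposal concrete, here is a minimal sketch; the 1 per cent spread is just the gap between the assumed 4 per cent steady-state growth and the 3 per cent bailout rate.

```python
def gdp_linked_coupon(gdp_growth, spread=0.01):
    """Coupon on the hypothetical GDP-linked bond: GDP growth minus the spread.

    A negative coupon means the lenders pay the borrower -- the insurance
    leg of the arrangement Trigeorgis describes.
    """
    return gdp_growth - spread

for growth in (0.00, 0.04, 0.07):
    coupon = gdp_linked_coupon(growth)
    payer = "lenders pay Cyprus" if coupon < 0 else "Cyprus pays lenders"
    print(f"GDP growth {growth:.0%}: coupon {coupon:+.0%} ({payer})")
```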

Gold’s Random Walk

A number of journalists are helping to broadcast Goldman Sachs’ latest prediction for gold prices. Goldman’s press agents planted the story in the Wall Street Journal, the Financial Times and the New York Times, among other places.

This is silly. There’s plenty of scientific evidence that the gold price is a random walk. Here’s an old reference: Eduardo Schwartz’s Presidential Address to the American Finance Association back in 1996. There are older and more recent papers finding the same.

Last week I wrote a post in which I mentioned that the time series of commodity spot prices are often mean reverting. They contain an element of predictability. Gold, however, is the exception. Gold is very, very, very cheap to store. And it is widely held purely as a store of value without any use value. Consequently, the spot price of gold quickly incorporates changing market views about future supply availability and any other fundamentals like those itemized in the Goldman report. For all intents and purposes, a physical investment in gold is a financial security, which means that the spot price is a martingale. The distinction I made in my last post between the spot price series for a commodity and the time series for a specific futures price is meaningless for gold.
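
To illustrate the distinction, here is a minimal simulation sketch, purely for intuition: under a random walk (martingale) the best forecast of tomorrow’s log price is today’s, while under mean reversion forecasts are pulled back toward a long-run level.

```python
import random

def simulate(n_steps=250, sigma=0.01, kappa=0.05, long_run=0.0, seed=1):
    """One random-walk path and one mean-reverting path, in log prices."""
    rng = random.Random(seed)
    rw, mr = [0.0], [0.0]
    for _ in range(n_steps):
        shock = rng.gauss(0.0, sigma)
        rw.append(rw[-1] + shock)                                # martingale in logs
        mr.append(mr[-1] + kappa * (long_run - mr[-1]) + shock)  # pulled toward long_run
    return rw, mr

rw, mr = simulate()
print(f"random walk ends at {rw[-1]:+.3f}, mean-reverting path at {mr[-1]:+.3f}")
```

Only the second kind of series contains the sort of predictability that would make a price forecast like Goldman’s informative; the evidence cited above says gold behaves like the first.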