On April 15, three UMass Amherst economists (graduate student Thomas Herndon and his advisors, Michael Ash and Robert Pollin) published a critique of an influential paper, “Growth in a Time of Debt,” written by Harvard economists Carmen Reinhart and Kenneth Rogoff (R+R 2010 for short). Since then, basically everyone has weighed in on the debate – even Stephen Colbert, in this hilarious bit (and a follow-up interview with Herndon). Here’s one of many summaries of the affair.
Much of the subsequent debate has attempted to assess the original paper and the critique. I want to argue that, in some sense, this is the wrong debate. There was nothing wrong with the original R+R 2010 piece. Oh, sure, there were Excel coding errors and a questionable weighting scheme, but as many commenters have since noted, such mistakes are common in early-stage research. You try things out, make mistakes, show your work to colleagues, go back and improve your methods, and science progresses. This is the argument advanced by defenders of R+R on all sides, from Greg Mankiw to Betsey Stevenson and Justin Wolfers to Jeff Smith (and even some sociologists). On a purely academic level, I agree with this argument.
But Reinhart and Rogoff didn’t just write a short conference paper. They wrote op-eds in prominent places. They spoke to policymakers. They argued that because of what they’d found in that short, non-peer-reviewed piece, policymakers should fear a 90% debt/GDP threshold or cliff. They drew on their academic credibility – both personal, and in the research they had published – to try to influence policy quite directly. For that reason I disagree strongly with Jeff Smith when he argues that Herndon, Ash, and Pollin should have shared their critique with Reinhart and Rogoff before going public and given them a chance to respond. Similarly, in some sense I agree with Greg Mankiw when he writes (as have many others) that the spreadsheet errors have gotten too much attention. At least, I would agree if this were purely an academic debate. But if this is a political fight, then Reinhart and Rogoff’s credibility as policy experts is exactly what’s at stake. They are tenured economists at Harvard, so they start off with a lot of credibility. The spreadsheet error is important because it shows just how sloppy the research they were shilling in 2010-2011 really was. At the time, Krugman wrote of R+R 2010, “this just isn’t careful work.” The Excel spreadsheet error is the smoking-gun proof of that.
So, yes, Herndon, Ash, and Pollin (HAP, for short) were explicitly critiquing R+R 2010, but the critique matters because of the editorials and policy tracts that R+R wrote in 2010-2012, which explicitly invoked R+R 2010 to argue for cuts in government spending, or at least for more attention to rising debt. These editorials, and similar advice given by R+R themselves directly to policymakers, shifted the terrain of the debate: this is not just an academic dispute that can take place on the usual academic time scales and with the usual academic norms, but rather an explicitly political fight about government spending in a time of recession. Here’s one example that will hopefully serve as a case in point, a 2011 editorial published by Bloomberg titled “Too Much Debt Means the Economy Can’t Grow.” Here are the key invocations of the 2010 paper:
Our empirical research on the history of financial crises and the relationship between growth and public liabilities supports the view that current debt trajectories are a risk to long-term growth and stability, with many advanced economies already reaching or exceeding the important marker of 90 percent of GDP.
And:
In our study “Growth in a Time of Debt,” we found relatively little association between public liabilities and growth for debt levels of less than 90 percent of GDP. But burdens above 90 percent are associated with 1 percent lower median growth. Our results are based on a data set of public debt covering 44 countries for up to 200 years. The annual data set incorporates more than 3,700 observations spanning a wide range of political and historical circumstances, legal structures and monetary regimes.
We aren’t suggesting there is a bright red line at 90 percent; our results don’t imply that 89 percent is a safe debt level, or that 91 percent is necessarily catastrophic. Anyone familiar with doing empirical research understands that vulnerability to crises and anemic growth seldom depends on a single factor such as public debt. However, our study of crises shows that public obligations are often hidden and significantly larger than official figures suggest.
“Growth in a Time of Debt” was a brief, non-peer-reviewed paper. That’s their evidentiary basis. And yes, they back away somewhat from the strong threshold claim – but only in terms of sensitivity, as in: we know high debt somewhere around there kills growth, we just aren’t sure it’s exactly 90%. And, as they later note in their own defense, they do emphasize the median claim (which is robust to the Excel and weighting critiques of HAP). But the causal claim is right in the friggin’ title* and it had already been disputed by many prominent critics who read the original paper (e.g. Krugman here). Suppose you’re a Harvard professor – would you publish an op-ed that relied this heavily on a non-peer-reviewed study that had already received substantial critiques for overstating its causal claims? If you did, would you expect the same kind of courtesy from your colleagues as if you’d just posted a paper on SSRN or NBER?
To me, Herndon, Ash, and Pollin are responding to this op-ed (and others like it) as much as or more than they are responding to R+R 2010 itself. Responding to an academic working paper may have some norms of fair warning associated with it. Responding to a series of hackish op-eds drawing legitimacy from a working paper that looks like a publication doesn’t have the same norms or goals. It’s about destroying credibility, not improving flawed methods. Social science always has an element of politics – we are, after all, making knowledge about people. But R+R, much like the equally contentious Regnerus affair, was politicized in a more transparent, more partisan way. Regnerus at least somehow managed to get his study through the normal peer-review process, and he left it to others to make fools of themselves by citing it in public for political effect, playing up a (questionably derived) correlation into a strong causal statement. That left him in the somewhat defensible position of never having publicly taken a stand on the political issue, nor of having made the bad causal claim himself (a position later undermined somewhat by behind-the-scenes evidence). Reinhart and Rogoff did no such thing – they took their preliminary results straight into the heart of an important political debate and made (academic) fools of themselves.
There’s always something messy and frustrating about these explicit interweavings of high-stakes politics and “normal” social science that twists everything up. But in the end, I think Mankiw, Stevenson and Wolfers, Smith, and other academics give Reinhart and Rogoff too much credit by treating this incident as a purely academic, normal-science debate. The error wasn’t (just) in the spreadsheet; it was in the attempt to claim policy-relevant expertise based on the spreadsheet.
* As Mike argues in the comments, op-eds receive their final titles from editors rather than authors, so it’s hard to know who picked that title. That said, R+R make the causal version of the claim in various places in this piece and elsewhere. As O’Brien notes, “R-R whisper ‘correlation’ to other economists, but say ‘causation’ to everyone else.” (This footnote was added after Mike’s comment.)