Coasian Comparison

Josh Barro wrote a somewhat controversial op-ed in the NYT attempting to apply the Coase theorem to negotiations between airline passengers over whether or not to recline their seats. Economist Greg Mankiw and political theorist Jim Johnson each posted a short reaction to the piece. See if you can spot the difference in tone. First, here’s Mankiw:

[Screenshot: Mankiw’s reaction]

And here’s Johnson:

[Screenshot: Johnson’s reaction]

I wonder what Steve Medema would say.


Statistical Optimism: Mortgage Finance and Depressions, Retro Edition

Let’s coin a new phrase: “statistical optimism.” Statistical optimism refers to the belief that if only we had better statistics about X, and everyone were made aware of those statistics, then we would make better decisions about X, and some set of problems would go away without any major changes to the institutions actually making decisions. It’s a practical, quanty version of the classic Enlightenment-style idea that more knowledge always makes things better. Note that by statistics here I mean the production and distribution of quantitative data – the old sense of the word (vital statistics, censuses, national income statistics, etc.) – not the inferential field we know and love today.

This phrase came to mind today as I was reading through a March 1932 interview with Senator La Follette* about the need for better economic statistics to improve economic planning in the midst of the depression. The interview is chock-full of great quotes that give you a flavor of what it was like to live in a time before the CPS, NIPA, and all the other routine, standardized, official data we take for granted. For example:

It is a sad commentary on our statistical information that in the third winter of the depression we have absolutely no authoritative official figures on unemployment. The only data we have are those collected by the census in 1930 for the country as a whole and for certain cities in January 1931.

The authoritative bit here was to be important, too, as FDR and Hoover fought in the 1932 campaign over whose (partial, non-standardized) unemployment figures were better.

The belief that gets me, though, and that seems to be widely shared across the political spectrum to this day, is that just having good data will resolve all kinds of ideological disputes. It was this belief, in part, that motivated the founding of the NBER, and it was this belief that animated Hoover to work with economists and businessmen to produce all kinds of economic reports in the 1920s and early 1930s (e.g. Recent Economic Changes, Recent Social Trends, etc.). La Follette was also a Republican, though he later helped found the Wisconsin Progressive Party, and he clearly believed in less business-led solutions to economic problems than Hoover did; but he had the same attitude of statistical optimism. A quote from the end of the interview, about the potential for authoritative statistics to prevent future depressions, struck me as especially relevant and, from a post-2008 perspective, ironic:

Suppose late in 1928 some authoritative body in Washington had publicly emphasized the fact that there was an excess of private houses on the market. Suppose it had pointed out that construction figures showed an appreciable falling off in the building of new houses. Surely in the light of such warnings people would not have continued investing their hard-earned savings in first and second mortgage real estate bonds thus increasing the supply of new capital for speculative building which continued into 1929.

If only it were so.

[Figure: FRED, new housing starts, 2006–2011]

Though, I suppose, in fairness to La Follette, what he called for was not simply the creation of better data but also the creation of an institution – a national economic council, something of a precursor to what eventually became the Council of Economic Advisers – that would have the authority to interpret data, not just collect it. Still, the optimism is palpable, and from our vantage point, tragic.

* La Follette is important in my work because he introduced a resolution in 1932 which called for the creation of the first** official US national income estimates.
** Well, he thought they were the first, and so do most people. The FTC actually produced an estimate in 1926, but almost no one knows about it, and no one did much with it then either.

GDP is important, but it’s not that important

Given that I’m writing a dissertation on the history of national income accounting, I hate to say this, but… GDP just isn’t as important as some people make it out to be. Some of the worst offenders in this genre of claim are, unsurprisingly, GDP’s biggest critics. Let’s take an example from an op-ed in this week’s New York Times*, Our Mismeasured Economy by Lew Daly at Demos. The editorial follows in a nearly 100-year-old tradition of criticizing how national income statistics handle hard-to-measure, non-market production, in this case government output. Daly argues, sensibly enough, that the way we handle government is ad hoc and arbitrarily rules out the possibility that government could actually add value (we explicitly assume that the value of government output is equal to what we pay for it, no more, no less).

That’s all well and good, but what bothers me is the over-the-top way in which Daly motivates his critique. Here’s the opening line:

Today’s polarized debates about the role of government often boil down to a single issue: the size of government compared with the size of the overall economy, as measured in gross domestic product.

Really? Are we following the same debates? Because although I’ve certainly seen references to the size of government (and in fact, we find examples of such claims as far back as the early 1930s), they do not seem to me to be a dominant mode of debate at the present juncture. To be fair to Daly, I have not done a systematic content analysis of contemporary ‘debates about the role of government’, but I would be shocked if even a small percentage of these debates (5%?) explicitly or implicitly referenced the size of government as measured in the national accounts. And far fewer “boil down to a single issue” in those terms. Think, for example, of the recent Hobby Lobby case and other debates around the Affordable Care Act. That debate is about ‘the role of the government’, but the issue is the government’s intrusiveness, not its ‘size’: can the government mandate that private companies provide certain kinds of care to their workers? Think also of the NSA wiretapping scandals. Again, the proper role of the government is at the center of the debate, but not the government’s size as a percentage of GDP.

Daly’s op-ed makes a number of sensible points about what we miss when we base our debates about the productivity of government on the national accounts (though I’m not sure I agree that the fix is to change how we measure GDP, as opposed to, say, coming up with alternative measurements of government productivity and restricting our analysis of GDP to where such a number makes the most sense – GDP is built on a bedrock of “market epistemology”**, and I doubt it will ever move far from that principle). But there’s no need to tee those claims up with an overblown one about the centrality of GDP to contemporary political debates about the role of government.*** GDP is important because of its diffuse implications for how we think about the world, as well as for some narrower technical uses (such as the World Bank’s categorization of “least developed” countries; see, e.g., Jerven’s work) – but it’s not so tightly woven into technical systems as, say, inflation statistics, which directly determine wage increases and Social Security benefits. So I get that it’s a bit tougher to talk about why getting GDP ‘right’ is so important. But maybe that means we should be having a different debate, about the right ways to measure and think about the productivity of government, rather than a narrow technical one about GDP. Somehow I doubt that the Tea Party is going to stop complaining about the Affordable Care Act if the government’s share of GDP drops a point.

* H/T to Beth Berman for sending this piece along.
** I define market epistemology as the belief that markets provide the best or only definitive information about economic value. Market epistemology shapes debates about the production boundary and in turn the boundary of the economy – that is, it shapes what we decide to count and how we decide to count it, especially for difficult cases like unpaid housework, government output, and owner-occupied housing. See, e.g., chapter 4 of the dissertation I should be writing instead of this blog post!
*** At this point, I’d also like to fully embrace the irony of using a single editorial as a case to motivate a more general argument about the perils of motivating a general argument about discourse from a handful of cases. I can dig up more if you’d really like, I read this stuff for a living.

Regulating Better, Not Regulating Less: Occupational Licensing Edition

Today, I came across an interesting-looking NBER working paper on occupational licensing, Relaxing Occupational Licensing Requirements: Analyzing Wages and Prices for a Medical Service by Kleiner et al. The paper, which I have only skimmed, examines the consequences of relaxing restrictions on what kinds of services nurse practitioners can offer to patients (as compared to services offered by doctors). Here’s a big chunk of the abstract summarizing their findings:

We find that when only physicians are allowed to prescribe controlled substances that this is associated with a reduction in nurse practitioner wages, and increases in physician wages suggesting some substitution among these occupations. Furthermore, our estimates show that prescription restrictions lead to a reduction in hours worked by nurse practitioners and are associated with increases in physician hours worked. Our analysis of insurance claims data shows that the more rigid regulations increase the price of a well-child medical exam by 3 to 16 %. However, our analysis finds no evidence that the changes in regulatory policy are reflected in outcomes such as infant mortality rates or malpractice premiums.

So, to summarize: letting nurse practitioners do more decreased the cost of care to patients without sacrificing quality. Assuming for a moment that the results hold up, this paper clearly strikes a blow against the current system of occupational licensing, which puts such restrictions on nurse practitioners. Keen.

I posted the above paper to Facebook and was amused to see quick responses from two libertarian friends who read my posting of the paper as an endorsement of a general end to occupational licensing (as called for, e.g., here).* But this paper contributes virtually nothing to our understanding of the possible consequences of eliminating licensing. The point of the paper is that some kinds of care can be provided by a larger set of licensed professionals than currently provide them – there’s not much evidence here (for or against) on abandoning licensing entirely. And I think it’s telling that this paper drew such a reaction, in which evidence of a regulatory imperfection is read as strong proof that the entire idea is flawed, even though the proposed alternative (i.e. no licensing) has not actually been tested.

More generally, making social democracy work means regulating better, not (necessarily) regulating less.** It’s very much in keeping with social democratic principles to argue that particular rules should be reformed and, even better, to draw on evidence to make those arguments. But that’s a far cry from abandoning a whole class of regulations because we have evidence that they’re not perfectly implemented, especially when we’ve just been given tools to make those regulations work better.

*Some of this may have been in mocking jest, i.e. “Dan <3s neoliberal schemes for deregulation :-)”.
**Regulating better has been made especially difficult lately by the performative insistence of half the political class that government as a whole must be incompetent.

Undoing Publication Bias with “P-Curves”, Minimum Wage Edition

Following the blog rabbit hole today, I came across an interesting statistics and data analysis blog I hadn’t seen before: Simply Statistics. The blog’s authors are biostatisticians at Johns Hopkins, and at least one is creating a 9-month MOOC sequence on data analysis that looks quite interesting. So far, my favorite post (and the one that led me to the blog) is a counter-rant to all the recent p-value bashing (e.g. this Nature piece): On the scalability of statistical procedures: why the p-value bashers just don’t get it. The post’s argument boils down to something like, “P-values, there is no alternative!” But check out the full post for its interesting defense of the oft-maligned and even more oft-misinterpreted mainstay of conventional quantitative research.

Apart from that post, I also enjoyed a link to a recent working paper, which is what I wanted to highlight here. Even though the blog authors defend p-values as a simple way of controlling researcher degrees of freedom, they also seem to be part of a growing group of statisticians interested in finding ways of correcting for the “statistical significance filter”, as Andrew Gelman puts it. The method presented in “P-Curve Fixes Publication Bias: Obtaining Unbiased Effect Size Estimates from Published Studies Alone” seems quite intuitive. Basically, the authors show how to simulate the p-curve (distribution of p-values) that best matches the observed p-values in a collection of studies, given the assumption that only significant results are published (though without perfectly accounting for other forms of p-hacking, as discussed in the paper). Although the paper is short, it presents payoffs for two vexing problems, including the relationship between unemployment and the minimum wage. Here’s the example, reproduced in full:

Our first example involves the well-known economics prediction that increases in minimum wage raise unemployment. In a meta-analysis of the empirical evidence, Card and Krueger (1995) noted that effect size estimates are smaller in studies with larger samples and comment that “the studies in the literature have been affected by specification-searching and publication biases, induced by editors’ and authors’ tendencies to look for negative and statistically significant estimates of the employment effect of the minimum wage […] researchers may have to temper the inferences they draw […]” (p.242).

From Figure 1 in their article (Card & Krueger, 1995) we obtained the t-statistic and degrees of freedom from the fifteen studies they reviewed. As we show in our Figure 4, averaging the reported effect size estimates one obtains a notable effect size, but correcting for selective reporting via p-curve brings it to zero. This does not mean increases in minimum wage would never increase unemployment, it does mean that the evidence Card and Kruger collected suggesting it had done so in the past, can be fully accounted by selective reporting. P-curve provides a quantitative calibration to Card and Krueger’s qualitative concerns. The at the time controversial claim that the existing evidence pointed to an effect size smaller than believed was not controversial enough; the evidence actually pointed to a nonexisting effect.

So, Nelson et al. provide an intuitive way of formalizing Card & Krueger’s assertion that publication bias could account for some of the findings of a negative employment effect of minimum wage increases – and, even further, showing that publication bias could actually reduce the best estimate of the effect to zero (which seems consistent with much, though certainly not all, of the recent literature).
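
To make the estimation idea concrete, here is a rough sketch in Python. To be clear, this is my own toy reconstruction, not the authors’ code: the t-statistics and degrees of freedom below are invented, and I fit by maximum likelihood on a truncated noncentral t-distribution, whereas the paper matches simulated p-curves to the observed one.

```python
# Toy sketch of p-curve effect-size estimation (a reconstruction, not the
# authors' code). Idea: if only significant results get published, observed
# t-statistics are draws from a noncentral t-distribution truncated at the
# significance cutoff; find the effect size that best fits that truncated
# distribution. All study inputs below are invented for illustration.
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

t_obs = np.array([2.2, 2.5, 2.1, 3.0, 2.8])  # hypothetical published t-stats
dfs = np.array([38, 58, 28, 98, 48])         # hypothetical degrees of freedom

def neg_log_lik(d):
    """Negative log-likelihood of the observed significant t-stats, given
    true standardized effect size d and publication only when p < .05."""
    ll = 0.0
    for t, df in zip(t_obs, dfs):
        n = (df + 2) / 2                  # per-group n for a two-sample t-test
        ncp = d * np.sqrt(n / 2)          # noncentrality parameter
        t_crit = stats.t.ppf(0.975, df)   # two-sided .05 cutoff
        # Power = chance of clearing the cutoff (ignoring the tiny lower tail)
        power = 1 - stats.nct.cdf(t_crit, df, ncp)
        ll += np.log(stats.nct.pdf(t, df, ncp) / power)
    return -ll

best = minimize_scalar(neg_log_lik, bounds=(0.0, 2.0), method="bounded")
print(f"p-curve estimate of the true effect size: d = {best.x:.2f}")
```

Run on a literature whose significant t-statistics pile up just above the cutoff, an estimator like this would get pulled toward zero – which is what the paper reports for the minimum wage studies.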

These methods seem really neat, but I’m not entirely sure which problems in sociology we could generalize them to. In the subfields I follow most closely, most research is either not quantitative or is based on somewhat idiosyncratic data, and hence it’s hard to imagine a bunch of studies with sufficiently comparable dependent variables and hypotheses from which one could draw a distribution. I’d bet demographers would have more luck. But in economic sociology, published replication seems sufficiently rare to prevent us from making much headway on the issue of publication bias using quantitative techniques like this – which perhaps points to a very different set of problems.

Dear New Yorker: Kuznets Did Not Invent GDP (and that matters)

The New Yorker just published a short piece by James Surowiecki on the difficulty of valuing the gains for consumers generated by new technologies, especially digital goods that are given away for free. The piece is a nice summary of recent research by economists like Brynjolfsson and Mandel, who attempt to augment existing national income accounts with better measurements of the consumer surplus generated by the internet. So far so good.

In making this argument, Surowiecki briefly invokes the history of our existing national accounts:

Our main yardstick for the health of the economy is G.D.P. growth, a concept devised in the nineteen-thirties by the economist Simon Kuznets.

Despite its brevity, this sentence packs in two big, misleading claims.* First, “GDP” was not commonly used in the 1930s, or even the 1940s. In the US, economists only began to emphasize GDP in the 1990s; elsewhere, the transition to GDP as the principal aggregate took place a little earlier. Gross National Product was first discussed in the early 1940s, during World War II (Carson 1975).** Before that, including throughout the 1930s, economists tended to write about National Income.

Second, Simon Kuznets did not “devise” GDP, or even GNP. Kuznets did write extensively in the 1930s and 1940s about the practice of compiling national income statistics. He identified several different aggregates of interest and came up with many useful conceptual distinctions for determining where to draw the “boundary of production” and how to value the goods and services included within that boundary (especially problematic goods that lacked market prices). But Kuznets himself focused his attention on National Income, which looks more like Net National Product than GDP or GNP. The biggest difference between National Income and GNP is the attempt to subtract out capital depreciation, although Kuznets’ version of national income also treated government expenditures differently than the GNP developed by the Department of Commerce in the 1940s (Carson 1971).
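
For readers keeping the alphabet soup straight, the aggregates relate roughly as follows (setting aside valuation details like factor cost versus market prices):

GNP = GDP + net factor income from abroad
NNP = GNP − capital depreciation
Kuznets’ National Income ≈ NNP, valued at factor cost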

That said, even though Kuznets was a major figure in the development of national income statistics in the United States, he was by no means the sole deviser of our modern national income accounts. The idea of calculating a total national income goes back at least to William Petty and his 17th-century Political Arithmetick (Studenski 1958). Important precursors include such storied names as Lavoisier, better known as the founder of modern chemistry, who also worked on the double-counting problem in national income statistics, and Wesley Mitchell, Kuznets’ mentor and a co-founder of the National Bureau of Economic Research. Among Kuznets’ contemporaries, we have figures in the UK like Colin Clark, James Meade, and Richard Stone (not to mention John Maynard Keynes himself; see Tily 2009). Stone, notably, won the Nobel Prize in Economics in 1984 for his work developing international standards for national income accounts. Those standards (e.g. the United Nations System of National Accounts) follow conventions closer to those of the US Department of Commerce and the British Treasury than Kuznets would have preferred. In fact, Kuznets was skeptical of the whole project of thinking about national income through the metaphor of accounts and accounting (Kuznets 1948)!

Both mistakes – claiming that GDP goes back to the 1930s, and suggesting that it was devised by a single man – make our current understanding of the economy seem more fixed and unchanging since the 1930s than it really has been. They also downplay the multitude of alternatives that have been considered over the past 100 years, from Nordhaus and Tobin’s (1972) “measure of economic welfare”, to Norway’s inclusion of housework in its official national income statistics in the 1940s (Sangolt 1999), to contemporary attempts to account for environmental damage (Muller et al. 2011). I am enthusiastic about attempts to augment GDP, to create alternatives, and to make explicit its arbitrariness and quirks by showing what it fails to see. But we do that debate a disservice when we collapse all of the work of national income statisticians into Kuznets, and all of the various proposed measures into GDP.

In short: Don’t blame Kuznets. GDP is younger than you think, and more alternatives have been proposed than we usually remember.

* Note that while I’m picking on Surowiecki here, this basic claim is repeated all the time in criticisms of national accounts. Surowiecki actually treats the issue more carefully than most by citing some of Kuznets’ cautionary words on the dangers of overinterpreting measures of national income.
** For citations, see this bibliography.

Credibility of Economics “Modest or Even Low”: Ioannidis QOTD

John Ioannidis is an increasingly prominent epidemiologist known primarily for his debunking-style papers on the problems of health research. In his 2005 paper “Why Most Published Research Findings Are False”, Ioannidis argued that the statistical practices common in most research are weak enough that false positives likely exceed true positives. He has applied this argument most thoroughly to genetic association studies, which search for correlations between a large collection of candidate genes and a given outcome.
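
The arithmetic behind that claim is just Bayes’ rule. Here’s a minimal illustration in Python; the numbers are my own assumptions, not Ioannidis’s:

```python
# Why "most findings are false": when few tested hypotheses are true, a .05
# significance threshold plus modest power lets false positives outnumber
# true positives. All numbers below are assumptions for illustration only.
prior = 0.10  # fraction of tested hypotheses that are actually true (assumed)
alpha = 0.05  # significance threshold, i.e. the false-positive rate
power = 0.40  # probability of detecting a true effect (assumed)

true_pos = prior * power          # 0.040 of all tests
false_pos = (1 - prior) * alpha   # 0.045 of all tests
ppv = true_pos / (true_pos + false_pos)
print(f"Share of significant 'findings' that are true: {ppv:.2f}")  # ~0.47
```

Publication bias and analytic flexibility effectively raise alpha, which only makes the picture worse; that is the thrust of Ioannidis’s argument.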

Just recently, however, Ioannidis and co-author Doucouliagos have turned this same analytical apparatus on empirical economics research. Although the paper is short on detailed analysis, the overall take presented is pretty damning. Here’s the abstract, and the quote of the day:

The scientific credibility of economics is itself a scientific question that can be addressed with both theoretical speculations and empirical data. In this review, we examine the major parameters that are expected to affect the credibility of empirical economics: sample size, magnitude of pursued effects, number and pre-selection of tested relationships, flexibility and lack of standardization in designs, definitions, outcomes and analyses, financial and other interests and prejudices, and the multiplicity and fragmentation of efforts. We summarize and discuss the empirical evidence on the lack of a robust reproducibility culture in economics and business research, the prevalence of potential publication and other selective reporting biases, and other failures and biases in the market of scientific information. Overall, the credibility of the economics literature is likely to be modest or even low. [emphasis added]

Oh, snap. Their preferred solutions seem similar to the recommendations for other statistical sciences: better meta-analysis, more replication, and so on. They are also enthusiastic about RCTs, without really noting that RCTs are appropriate for only some of the questions within the bailiwick of economics and are of relatively limited use for answering other, very important ones. Anyway, recommended if you’re into critiques of economics research and best research practices.

Scary thought: what would this paper look like if the target was not empirical economics but rather quantitative sociology?

You Can’t Beat the Market (But the Market Can Beat You)

Today, the Nobel Prize in Economics was awarded to three American economists: Eugene Fama, Lars Peter Hansen, and Robert Shiller. Marginal Revolution has brief tag-team coverage of all three, and a guest blogger at Noahpinion delves more deeply into Hansen’s work. Kevin Bryan of A Fine Theorem is insightful here, as usual.

My brief summary of what’s interesting and important about the work of Fama and Shiller (and, to a lesser extent, Hansen, who fits into this picture slightly differently) would be somewhere close to Brad DeLong’s. DeLong hopes the 2013 Nobel will serve as a pedagogical moment for explaining two big truths of financial economics: financial markets aren’t perfect at figuring out what things should be worth in any kind of long-term, societal-value sense… but they are incredibly hard to outperform in the short term. Put differently: in the short run, you can’t beat the market. In the long run, the market can beat you (up).

Krugman nicely summarizes the politics of the Prize, or at least one take on those politics:

Fama’s work on efficient markets was essential in setting up the benchmark against which alternatives had to be tested; Shiller did more than anyone else to codify the ways the efficient market hypothesis fails in practice. If Fama has said some foolish things in recent years, no matter — he did earn this honor, as did Shiller.

So, all good — and you actually have to admire the prize committee for finding a way to give Fama the long-expected honor without seeming as if they are completely out of touch with everything going on around them.

For a more historical take on the work of Fama in particular and its place in the history of economics and finance, see Donald MacKenzie’s An Engine, Not a Camera. For a critical take on the EMH and its various weak and strong forms, see John Quiggin’s Zombie Economics. As always, Kieran Healy has the last word:
[Image: Kieran Healy on the 2013 Econ Nobel]

Keynes + Hayek, Not Keynes vs. Hayek

Keynes and Hayek are often described as occupying opposite poles of the economic spectrum. As this awesome historical rap battle puts it, Keynes wants to steer markets, Hayek wants to set them free. Nicholas Wapshott’s recent book is even titled Keynes Hayek: The Clash that Defined Modern Economics. But how far apart were Keynes and Hayek? On some issues, not that far at all. In part, our mistaken impression comes from misremembering Keynes as much more of a government interventionist than he was, and from forgetting that Hayek did indeed have some pragmatic impulses.

One clear sign of their agreement comes from Keynes’ comments on Hayek’s influential political tract The Road to Serfdom. Keynes wrote: “In my opinion it is a grand book… Morally and philosophically I find myself in agreement with virtually the whole of it: and not only in agreement with it, but in deeply moved agreement.” Keynes disagreed with some of the practical bits, but in his heart, he was a committed economic liberal. He believed in the need for a middle path that acknowledged the potential for markets (more specifically, for integrated economies) to fail to use resources efficiently, but his goal was always to tamper with the market and its price-setting mechanism as little as possible – to preserve capitalism by socializing only the bits most prone to break down, and by targeting aggregates rather than engaging in industrial planning of the kind embodied in, say, the first era of the New Deal.

Another, less well-known story comes from Hayek’s comments on Keynes’ influential tract How to Pay for the War.[1] In that tract, Keynes argues that the UK government should finance World War II through a system of compulsory saving, in order to avoid, as much as possible, dramatic inflation and the need for wage and price controls. Keynes explicitly argued for a reduction in real wages to prevent overstimulating the economy. Here we see, on full display, the logical possibility embedded in the General Theory to run “both ways” (stimulate in a recession, tamp down a boom) – something some later Keynesians, and many critics of Keynes, seem to ignore. Keynes’ and Hayek’s correspondence about the pamphlet after its publication in early 1940 is revealing, and I quote it here almost in full:

Keynes to Hayek, February 27, 1940:
I enclose a copy of my pamphlet. You will see that I have bagged your idea about a post-war capital levy. I have mentioned the source of the idea in the acknowledgments on the last page.

Yours sincerely,
JMK

So, note, Hayek is thanked in the acknowledgments of the pamphlet!

Hayek to Keynes, March 3, 1940:
Many thanks for sending me your pamphlet. I have now read it carefully and still find myself in practically complete agreement in so far as policy during the war is concerned. It is reassuring to know that we agree so completely on the economics of scarcity, even if we differ on when it applies.

I have been thinking for some time what could be done to make the unanimity of economists on this point clearer to the general public than seems yet to be the case. I am not clear how far any sort of public pronouncement by groups of economists would do any good – and in any case I am not the right person to organize such a move. But I wanted to say that if at any time you feel that some such support would be desirable I shall be glad to sign or to write anything that might be of use. As I seem to be regarded as representing the extreme opposite of your views, this might be not entirely without use in demonstrating the unanimity of expert opinion.

Yours sincerely,
F. A. Hayek.

So here, Hayek agrees with Keynes completely – while adding the caveat that they still differ on when the economics of scarcity applies – and goes so far as to offer to use his status as Keynes’ polar opposite to add strength to the proposal. Finally:

Keynes to Hayek, March 6, 1940:
I am extremely glad that we should find ourselves in so much agreement on the practical issue. I think it might be very helpful indeed to have some pronouncement by economists that here for once is something about which different schools agree.

I don’t have much else to say. I don’t want to exaggerate their similarities in the same way that we so often exaggerate their differences. But it is fascinating to see them speak highly of each other, and of each other’s very public-facing work. Keynes + Hayek, not (always) Keynes vs. Hayek.

[1] These letters are in the John Maynard Keynes Papers held at King’s College, Cambridge. JMK/HP [How to Pay for the War]/2.

Bhutan’s new rival in public happiness measurement: Lithuania

As readers of this blog may know, Bhutan has long produced a Gross National Happiness indicator as an alternative measure of welfare to more traditional economic indicators (e.g. GDP). Happiness measures have gained some traction elsewhere, and research on the economics of happiness (starting from the Easterlin paradox and its critics) has gone quite mainstream. Still, official, public measures of happiness remain the exception rather than the norm. But not, it seems, in Lithuania:

Lithuanian capital to install public ‘happiness barometer’
The mayor of Vilnius plans to install a huge screen on the town hall to broadcast a real-time “happiness barometer” that will monitor the mood of the Lithuanian capital.

The giant display will monitor the level of happiness among the city’s 520,000 residents by showing a number on the scale of one to 10 that reflects tabulated votes sent in by locals from their mobile phones and computers.

“This barometer is a great tool for politicians. If we take a decision and see a sharp fall in the mood of the city, then we know we have done something horribly wrong,” mayor Arturas Zuokas said.

I’m not sure how I feel about this. On the one hand, most citizens aren’t that tapped into government activity, and so real-time reactions are likely to be more noise than signal. On the other hand, I’d rather CNN report on the noise-like fluctuations of happiness than hear another billion stories about why the Dow was up or down ten points today (or why various mostly clueless pundits think the Dow was up or down ten points, anyway).

What would such a story look like? “Happiness was up 5% today on strong sunshine, moderated by a predicted storm this weekend and reports of possible cost overruns in the new Defense Department IT overhaul…”? Try writing your own fictional happiness trend story in the comments!