Credibility of Economics “Modest or Even Low”: Ioannidis QOTD

John Ioannidis is an increasingly prominent epidemiologist known primarily for his debunking-style papers on the problems of health research. In his 2005 “Why Most Published Research Findings Are False,” Ioannidis argued that most statistical research relies on such poor practices that false positives likely exceed true positives. Ioannidis has applied this argument most thoroughly to genetic association studies, which search for correlations between large collections of candidate genes and a given outcome.
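
The core of the 2005 argument is a simple Bayesian calculation: the post-study probability that a “significant” finding is true depends on statistical power, the significance threshold, and the prior odds that a tested relationship is real. A quick sketch of that formula (the particular numbers below are illustrative, not taken from the paper):

```python
def ppv(prior_odds, power, alpha=0.05):
    """Positive predictive value of a statistically significant finding:
    the share of 'significant' results that reflect true relationships."""
    return (power * prior_odds) / (power * prior_odds + alpha)

# With 1 true relationship per 10 nulls tested and 20% power (common in
# underpowered literatures), most significant findings are false positives.
print(ppv(prior_odds=0.1, power=0.2))   # ~0.29
print(ppv(prior_odds=0.1, power=0.8))   # ~0.62
```

The point is that with low prior odds and low power, even a well-behaved 5% significance threshold produces a literature where false positives outnumber true ones.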

Just recently, however, Ioannidis and co-author Doucouliagos have turned this same analytical apparatus on empirical economics research. Although the paper is short on detailed analysis, the overall take presented is pretty damning. Here’s the abstract, and the quote of the day:

The scientific credibility of economics is itself a scientific question that can be addressed with both theoretical speculations and empirical data. In this review, we examine the major parameters that are expected to affect the credibility of empirical economics: sample size, magnitude of pursued effects, number and pre-selection of tested relationships, flexibility and lack of standardization in designs, definitions, outcomes and analyses, financial and other interests and prejudices, and the multiplicity and fragmentation of efforts. We summarize and discuss the empirical evidence on the lack of a robust reproducibility culture in economics and business research, the prevalence of potential publication and other selective reporting biases, and other failures and biases in the market of scientific information. Overall, the credibility of the economics literature is likely to be modest or even low. [emphasis added]

Oh, snap. Their preferred solutions seem to be similar to the recommendations for other statistical sciences, including better meta-analysis, more replication, and so on. They are also enthusiastic about RCTs without really noting that RCTs are only appropriate for answering some questions within the bailiwick of economics, and are relatively limited in their usefulness for answering other, very important questions. Anyway, recommended if you’re into critiques of economics research and best research practices.

Scary thought: what would this paper look like if the target was not empirical economics but rather quantitative sociology?

Quantitative Methodology Meets Lord of the Rings QOTD

Phil Schrodt is a just-retired political scientist and quantitative methodologist. His retirement letter is well worth a read: it’s part sweet, part scathing analysis of contemporary political science and academia more generally. His letter led me to his recent working paper titled Seven Deadly Sins of Quantitative Political Analysis. Following Chris Achen’s influential critique of garbage-can models in political science, Schrodt argues that political scientists abuse the linear model while chasing statistical significance and ignoring Bayesian alternatives. Pretty standard stuff, if you follow Andrew Gelman’s blog and the like, but especially entertaining and clear. In the 2013 post-script, Schrodt offers some reasons for pessimism in light of a decade of fighting and losing this methodological war:

The institutional response to Achen (2003) was, of course, the notorious journal-length methodological suicide note, Conflict Management and Peace Science Vol. 22, No. 4, which, like J.R.R. Tolkien’s Gollum leaving the sunlit world for a lonely life in subterranean darkness, reads like “Precious, oh precious garbage can models. Evil Achens wants to take away garbage can models…no, no, we won’t lets them…precious garbage can models…”

I just love that image, and I highly recommend the rest of the paper as well.

Snarky History of Health Economics QOTD

In general, applied fields get less than their fair share of attention from historians and sociologists. This is especially true of economics. STS scholars have done a bit better – especially with finance – but historians of economics probably publish ten articles on Adam Smith for every one on labor economics or development or… One nice exception to the rule is Evelyn Forget’s (2004) HOPE piece on the history of health economics.*

Forget argues that health economics has a past but not yet a history. This past stretches all the way back to William Petty, who considered the costs of the plague:

100,000 persons dying of the Plague, above the ordinary number, is near 7 million pounds loss to the Kingdom: . . . how well might 70,000 pounds have been bestowed in preventing this Centuple loss?
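
Petty’s arithmetic is easy to reconstruct: at his valuation of roughly 69–70 pounds per person, 100,000 excess deaths come to nearly 7 million pounds, and the proposed 70,000 pounds of prevention spending is exactly one hundredth of that loss – hence the “Centuple.” A quick check:

```python
value_per_person = 69          # Petty's valuation of an Englishman, in pounds
excess_deaths = 100_000

loss = value_per_person * excess_deaths
print(loss)                    # 6900000 -- "near 7 million pounds loss"

# 70,000 pounds of prevention against a ~7 million pound loss:
print(7_000_000 / 70_000)      # 100.0 -- the "Centuple loss" averted
```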

In a lovely footnote, Forget (2004: 619) explains a bit of Petty’s analysis:

Petty valued an Englishman at 69 pounds and a Frenchman at 60 pounds. He allowed that the latter may be an underestimate because he could buy an Algerian slave for 60 pounds. This latter statement is what health economists refer to as “sensitivity analysis.”

Oh, snap! The article goes on to offer multiple accounts of the intellectual history of “QALYs” – quality-adjusted life years, the measure health economists use to compare the utility of different interventions – and of the justification for QALYs based on (a misreading of) developments in theoretical welfare economics. Recommended, if that’s your sort of thing, though it’s still a bit too focused on the big theoretical developments for my taste and not enough on applied economics actually being applied!
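
For readers unfamiliar with the measure: a QALY weights years of life gained by a 0–1 health-quality index, so interventions with very different effects can be compared on cost per QALY. A minimal sketch with made-up numbers:

```python
def qalys(years_gained, quality_weight):
    """Quality-adjusted life years: years of life gained, weighted by
    a health-related quality index between 0 (dead) and 1 (full health)."""
    return years_gained * quality_weight

# Hypothetical intervention: 5 extra years at 0.7 quality, costing 35,000 pounds
gained = qalys(5, 0.7)      # 3.5 QALYs
print(35_000 / gained)      # 10000.0 pounds per QALY
```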

*As an aside, “Forget” has to be one of the best possible historian last names!

The Future of the History of Science QOTD

Isis, the journal of The History of Science Society, has a four-essay symposium on the future of the history of science. Ken Alder’s excellent contribution begins with a few humorous linguistic nuggets about the history of science and its close neighbor, science studies, which I thought worth extracting and re-posting:

For many people, both inside the academy and out, the juxtaposition of “history” with “science” seems to imply a logical contradiction, giving the phrase “the history of science” the ring of an oxymoron, like “jumbo shrimp” or “deafening silence.” That is because science, in the prevailing view, still designates that form of natural knowledge that winnows truth from error to produce a state-of-the-art summation of all and only those prior discoveries that possess current value. In this sense, science is an enterprise that swallows its own past. History, by contrast, is generally thought to offer a recapitulation of the past on its own terms and in a manner oblivious to the tug of present-day concerns. Hence, the apparent contradiction. But this is just the sort of problem that we should be able to turn to good account.

According to rhetoricians, the contradiction of a genuine oxymoron is actually supposed to suggest a larger unexpected meaning, much the way the word’s two Greek roots—“oxy,” meaning “sharp,” and “moron,” meaning “dull”—combine to describe a recognizable human type: the “clever fool.” After all, paradoxes such as “deafening silences” really do exist. And since science—as we who study its history can attest—has repeatedly proved itself far more bountiful than its current accumulation implies, and since history—as we who study the past can attest—keeps provoking new arguments, there is no reason our field cannot aspire in the manner of a genuine oxymoron to open onto some larger unexpected meaning.

[H]istorians of science have often taken up a set of powerful intellectual tools fashioned in collaboration with our cousinly enterprise known as “science studies.” The problem here is that that field’s self-description generates its own contradictions. Many academics and university administrators—not to mention members of the laity—are thoroughly bewildered by the term “science studies,” which they seem to regard as a kind of pleonasm: a redundancy either way they turn. In those quarters where science is taken to be the premier form of inquiry (including wide swaths of the social sciences), the field is presumed to aspire to the status of a science of science, meaning that it should actually be called “science science.” And among humanists its diversity of methods (from literary criticism to actor-network theory) and its range of topics (from bioprospecting to financial markets) means that its name might just as well be “studies studies.” In either case, the confusion stems from the fact that the field’s name seems to reify its object of inquiry even though the field’s central mission is to challenge science’s singularity and coherence.

Of note regarding the last comment is the existence of the NSF’s “science of science and innovation policy” program, which is very much not science studies, but rather an attempt to bring scientific methods to bear on questions of encouraging better science (“The Science of Science & Innovation Policy (SciSIP) program supports research designed to advance the scientific basis of science and innovation policy.”).

Alder’s entire essay is worth reading if you are invested in the project of history of science (or, in my case, science studies. Studies studies?). I particularly like this little gem, which captures another major theme of the essay – the call to treat the history of science as a heterogeneous collection of tools rather than an overly unified subfield: “After all, if science is no one thing, its history cannot be either.” Alder’s answer is another linguistic invention: episcience. For more on that, read the rest of the essay.

IRB QOTD: Schrag, “How Talking Became Human Subjects Research”

For better or for worse, social science research is now governed by an institutional review board system that seems to have the problems and promises of medical research, and not social science, as its priority. Zachary Schrag has an excellent article on the history of social sciences and the IRB, “How Talking Became Human Subjects Research.” Schrag summarizes the argument quite vividly:

“This article draws on previously untapped manuscript materials in the National Archives that show that regulators did indeed think about the social sciences, just not very hard. … Compared to medical experimentation, the social sciences were the Rosencrantz and Guildenstern of human subjects regulation. Peripheral to the main action, they stumbled onstage and off, neglected or despised by the main characters, and arrived at a bad end.”

Recommended.

The Rich (Universities) Are Different QOTD

If you have any interest in the future of higher ed and you haven’t read Jonathan Rees’ reflections on participating in a prominent MOOC (massive, open, online course), you definitely should.

One of his recent posts takes on the issue of the cost of education in reference to MOOCs, and specifically the perception that guild-like professors are trying to maintain their cushy lifestyles at the expense of needy students who can’t afford expensive tuition. This argument has more than one flaw, as Rees notes, especially the idea that most university faculty make a lot of money (in fact, by volume of teaching, I believe most instructors are now adjuncts working at a low piece-rate). Rees goes on to quote an article from The American Interest on lavish expenditures by top universities:

Last year Yale finalized plans to build new residential dormitories at a combined cost of $600 million. The expansion will increase the size of Yale’s undergraduate population by about 1,000. The project is so expensive that Yale could actually buy a three-bedroom home in New Haven for every new student it is bringing in and still save $100 million.

The rest of the debate about MOOCs is more important, but I just wanted to pause and reflect on that quote for a minute.

Whatever is happening to higher-ed more broadly, something very different is going on at Yale, Columbia, Harvard and a handful of others. These universities may be generating the content for MOOCs, but they aren’t worried about their traditional model disappearing anytime soon. And gee whiz do they have a lot of money! As Rees elsewhere notes, the Coursera model seems to be premised on the idea that “super professors” will generate free content that will be offered in place of existing courses at less prestigious universities – basically, the most prominent and well-paid faculty doing free labor to (poorly) replace the labor of the adjuncts and TT faculty at lower-tier schools. The poorly part is important – for some courses, maybe the MOOC is as good an offering as a big traditional lecture taught by a regular professor, especially if the lecture lacks any interaction whatsoever. But, as Rees repeatedly argues:

Maybe you can teach the world a lot of facts by showing them videos of the best professors of the world, but if you can’t teach them how to “do” history, then MOOCs will never be able to replace the in-class experience unless the powers that be no longer care whether students get access to that experience or not.

At Yale, clearly the powers that be care a lot about that experience. But Yale can afford to care, and to generate the tools that help undo that experience elsewhere. Reading along with Rees, and trying out Coursera a bit for myself, I am not very optimistic for the future of humanities and social science education.

Hyperscience QOTD: Shapin on Pseudoscience

Eminent historian of science Steven Shapin has a review essay in the London Review of Books about a new history of pseudoscience (specifically, the Velikovsky affair). The end of the essay is especially brilliant, as Shapin thinks through what we can learn from the affair about the more general problem of demarcating pseudoscience from the real stuff (whatever that might be):

Whenever the accusation of pseudoscience is made, or wherever it is anticipated, its targets commonly respond by making elaborate displays of how scientific they really are. Pushing the weird and the implausible, they bang on about scientific method, about intellectual openness and egalitarianism, about the vital importance of seriously inspecting all counter-instances and anomalies, about the value of continual scepticism, about the necessity of replicating absolutely every claim, about the lurking subjectivity of everybody else. Call this hyperscience, a claim to scientific status that conflates the PR of science with its rather more messy, complicated and less than ideal everyday realities and that takes the PR far more seriously than do its stuck-in-the-mud orthodox opponents. Beware of hyperscience. It can be a sign that something isn’t kosher. A rule of thumb for sound inference has always been that if it looks like a duck, swims like a duck and quacks like a duck, then it probably is a duck. But there’s a corollary: if it struts around the barnyard loudly protesting that it’s a duck, that it possesses the very essence of duckness, that it’s more authentically a duck than all those other orange-billed, web-footed, swimming fowl, then you’ve got a right to be suspicious: this duck may be a quack.

I wonder if economics, and perhaps social science more generally, suffers from a kind of generalized case of hyperscience. Think, for example, of economists’ much more explicit invocation of philosophy of science (well, a particular version of it) to justify their practices. It reminds me a lot of Shapin’s definition of hyperscience: “a claim to scientific status that conflates the PR of science with its rather more messy, complicated and less than ideal everyday realities and that takes the PR far more seriously than do its stuck-in-the-mud orthodox opponents.” But think also of the constant refrains of testing hypotheses in major sociology journals, and all the hemming and hawing about methods. Are we protesting too much? Or are things just different in the social sciences, trapped as we are in the nether regions between objectivity and advocacy, causal inference and normative theory, etc.? What would our criteria be for identifying social pseudoscience or hyperscience?

Zeynep Tufekci QOTD: On Free Speech, Power, and the Internet

Zeynep Tufekci is a sociologist who studies the internet, among other things. This week, she published a fantastic blog post about the interactions of free speech, power, and new media. The post focuses on two very divergent examples: the Reddit/Gawker controversy over the release of the identity of a Reddit moderator who pioneered threads dedicated to non-consensual “soft porn” photos of underage girls, and the controversy over the anti-Muslim YouTube video linked to widespread protests. In it, Tufekci ties together arguments about the internet’s capacity for norm-shifting, the importance of recognizing conflicts between rights rather than asserting one right as absolute, the way that assertions of absolute rights often reinforce disparities of power, and the way that assertions of the so-called “digital divide” similarly work to deflect criticisms of the exercise of power. I won’t summarize the post too much, because it’s brilliant and you all should read it. Instead, I’m just going to excerpt a few choice quotes:

Rather, let’s look at this as a good example for why “free speech” as an absolute value for any community that is not balanced by any other concern is at best an abdication of responsibility, and at worst an attempt to exercise power over vulnerable populations.

Because the way power enters into this debate isn’t whether or not there will be creeps who wade through Flickr to find photos of children on the beach –surely there will be with or without Reddit. Indeed, a common answer to issues like this is “well, creeps we shall always have amongst us.” Indeed, that is true. However, the existence of child predators with or without Reddit is in fact a strong argument for shunning them from major sites, including Reddit. Allowing them to be part of the community is not an assertion of free speech, rather it is assertion by Reddit and Condé Nast of the right of adult men to sexualize children & violate women’s privacy through non-consensual exposure. It’s simple as that.

This stance of “it’s just the Internet” is basically relegating the children preyed upon into the “virtual” realm. They just aren’t real enough to count while Reddit moderators are so hyperreal that exposing their mere name is a grave violation. In fact, digital dualism often surfaces as this kind of “power assertions” when gatekeepers and already-powerful who have access to broad publics trivialize self-expression on the Internet (“it’s just cat videos”), never miss a chance to put down Twitter (“it’s about what you had for lunch”), or consider social interaction on Facebook to be unreal compared to “real life” interaction.

Again, I’m not talking about banning everything offensive. Not at all. I’m calling on major sites on the Internet to assert that in this community, we affirm the right of people to exist in an environment that is not hostile to vulnerable populations over the right of people who claim that their right to prey upon children trumps all other rights.

Go read the whole thing, and follow the blog if you haven’t already!

Review: The Haves and the Have-Nots by Milanovic (2010)

I actually don’t remember how I came upon World Bank economist Branko Milanovic’s (2010) The Haves and the Have-Nots and decided to buy it, but I’m very glad I did. Despite the cheesy name, Milanovic’s book is a really excellent overview of global inequality with an innovative structure that mirrors the problem at hand. The book contains three essays and 26 vignettes. The essays deal with three aspects of inequality: within-country, between-country, and global (trying to examine the distribution for all individuals in all countries). They are very readable – you could easily assign them to first-year college students – and explain both the history of studies of inequality as well as some of the complexities of measurement and data availability. Each essay is followed by 7-10 vignettes that range from deathly serious to a bit silly, such as the analysis of where Mr. Darcy would have fallen in the income distribution of early 19th century England. Along the way, Milanovic addresses the pressure for migration in the contemporary world, inequality under socialism and capitalism (and how between-region inequality was a problem for the USSR, and may yet be a problem for China), and more. The scope is delightfully wide, but Milanovic has pretty historical and sociological sensibilities for an economist, which makes the endeavor feel much less imperialistic.

The most interesting and unexpected vignette so far (I’m a bit more than halfway through the book) analyzes within- and between-country inequality in the context of Marx’s writing of Capital and the history of socialist revolution in the early 20th century. Milanovic argues that when Marx was writing, circa 1870, between-country inequality was quite small compared to within-country inequality. The richest people in the poorest countries were much richer than the poorest people in the richest countries, and workers everywhere (in Europe) had quite comparable incomes. That made the claim that the workers of the world had common interests much easier to sustain. Much like Malthus, though, Marx had the misfortune of being right about the past and present, but wrong about the near future:

But in an ironic twist of fate, it is precisely around the publication of the first volume of Das Kapital in 1867… that things started to change. A new data series of English real wages produced by Gregory Clark shows that it is around 1867-1870 that real wages began their secular rise that continues (with some small declines from time to time) to this moment. Moreover, it is around the end of the nineteenth and in the first half of the twentieth centuries that income differences between the rich world of West Europe, North America, and Oceania, and the rest of the world (Africa, Asia, and Latin America) exploded. (110)

By the 1900s, then, it was no longer so true that workers in England and Germany had similar standards of living to workers in Russia, and between-country inequality only got worse over the 20th century. Workers in Germany, in other words, had something to lose other than their chains at the time of the Russian revolution.

Which leads us to the present, and another nice quote from Milanovic. In vignette 2.2, Milanovic shows a graph of income ventiles (groups of 5% of the population) for the USA, Brazil, China, and India (reproduced below).

Milanovic (2010: 116).


The graph is a bit awkward to read, but conveys an enormous amount of information, all of which emphasizes the importance of between-country inequality in the present moment. As Milanovic puts it:

In the case of India and the United States, only about 3 percent of the Indian population have incomes higher than the bottom (the very poorest) U.S. percentile. (118)

Interpreting this finding is hard, as it’s based on notoriously fickle purchasing power parity data, and probably masks a lot of non-income forms of inequality, but the story is still striking. All but the tiniest fraction of people in India have less money than all but the absolute poorest Americans. As Milanovic explains in the introduction to the next vignette:

[In] a regression where we have the actual incomes of everybody in the world (of course only in principle, because the data are based on national income surveys)… it turns out that place of birth explains more than 60 percent of the variability in global income. (120)

Social scientists, sociologists included, tend to think about within-country inequality and stratification, focusing on big predictors like race, gender, and parents’ socioeconomic status or class. But in a global perspective, nationality dominates all of these variables as a predictor of income. At least, it does in the 21st century. That between-country inequality dominates within-country inequality is not an eternal truth, however, as Milanovic reminds us.
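
The “place of birth explains more than 60 percent” figure is just the R² from regressing individual income on country dummies – equivalently, the between-country share of total income variance. A toy illustration of that decomposition with simulated data (the country means and spreads below are invented, not Milanovic’s):

```python
import random

random.seed(0)

# Invented mean log-incomes for three "countries", common within-country spread
country_means = {"A": 10.0, "B": 8.5, "C": 7.0}
incomes = {c: [random.gauss(mu, 0.6) for _ in range(10_000)]
           for c, mu in country_means.items()}

everyone = [x for xs in incomes.values() for x in xs]
grand_mean = sum(everyone) / len(everyone)
total_ss = sum((x - grand_mean) ** 2 for x in everyone)

# Between-country sum of squares: what country dummies alone would explain
between_ss = sum(len(xs) * ((sum(xs) / len(xs)) - grand_mean) ** 2
                 for xs in incomes.values())

print(between_ss / total_ss)   # share of income variance explained by country
```

With country means this far apart relative to within-country spread, country membership explains most of the variance, which is the structure Milanovic describes for the actual global distribution.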

Highly recommended.

“Bad Pharma” QotD

Ben Goldacre writes a very cool, if frequently depressing, blog, Bad Science. He’s also written a new book about bad science in Big Pharma, appropriately titled Bad Pharma. He posted the foreword on his blog today, including the following paragraph, which summarizes the entire argument of the book:

Drugs are tested by the people who manufacture them, in poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques which are flawed by design, in such a way that they exaggerate the benefits of treatments. Unsurprisingly, these trials tend to produce results that favour the manufacturer. When trials throw up results that companies don’t like, they are perfectly entitled to hide them from doctors and patients, so we only ever see a distorted picture of any drug’s true effects. Regulators see most of the trial data, but only from early on in its life, and even then they don’t give this data to doctors or patients, or even to other parts of government. This distorted evidence is then communicated and applied in a distorted fashion. In their forty years of practice after leaving medical school, doctors hear about what works through ad hoc oral traditions, from sales reps, colleagues or journals. But those colleagues can be in the pay of drug companies – often undisclosed – and the journals are too. And so are the patient groups. And finally, academic papers, which everyone thinks of as objective, are often covertly planned and written by people who work directly for the companies, without disclosure. Sometimes whole academic journals are even owned outright by one drug company. Aside from all this, for several of the most important and enduring problems in medicine, we have no idea what the best treatment is, because it’s not in anyone’s financial interest to conduct any trials at all. These are ongoing problems, and although people have claimed to fix many of them, for the most part, they have failed; so all these problems persist, but worse than ever, because now people can pretend that everything is fine after all.

I haven’t read the book yet, but if it’s anything like the blog, it will be compelling, well-researched, and incredibly infuriating.