Two recent blog posts/stories caught my eye as they pick up on a theme that fascinates me, but about which I have very little expertise: hilariously, and perhaps depressingly, bad estimates of social phenomena by survey respondents. The first example is from a recent SocImages post: Americans Way (Like, Way) Overestimate the Number of Gays and Lesbians. The title of the post gives away the main finding pretty well: while the best estimates of GLB-identified individuals are about 3.5%, “more than a third of Americans believe that more than one out of every four people identifies as gay or lesbian” and the median answer was in the 20-25% range. Even broadening out to the notion of attraction or behavior, and not just self-identification, survey estimates put the number closer to 10% or 11% (though these blog posts don’t report the broadest possible number, which would include all individuals answering yes to at least one of attraction, behavior, or identity). Interestingly, these estimates are significantly up from 2008 (when only a quarter responded that more than 25% of Americans are gay).
The second comes from the Inequalities blog, and concerns fraud in the welfare system in Britain. According to the survey, the average British person “believes that 30 in 100 disability claimants and 35 in 100 unemployment claimants are falsely claiming.” The authors go on to say that this figure is a tremendous overestimate:
Even if we put together fraud AND customer error, the latest figures show that 3.3% of unemployment claims are ‘false claims’, and a mere 1.1-1.2% of disability benefit claims. (I’ve talked about fraud figures previously on the blog here). So for people on average to think that 30-40% of claims are false is a massive, massive over-estimate – far more than could be explained by the difficulty in getting good estimates of fraud rates.
Both of these findings remind me of an old, and depressing, set of survey results about American opinion about foreign aid. Foreign aid is debated a lot in the public sphere, and it’s a constant target of conservative claims about government waste and the need to cut back. Survey results find that Americans want to cut foreign aid, but also that they massively overstate its size in the budget. Here’s a PBS story from 2010:
The survey… asked the question: “What percentage of the federal budget goes to foreign aid?”
The median answer was roughly 25 percent, according to the poll of 848 Americans. In reality, about 1 percent of the budget is allotted to foreign aid.
More broadly, another recent poll shows that Americans have just an awful understanding of how much money is spent on many different parts of the budget. For example, the median estimate was that public broadcasting accounts for 5% of the federal budget, instead of the .1% it actually does.
I should say right here that I wouldn’t have known the exact answer to most of these questions if you had asked me before looking at the data. The federal budget is massive and complicated. I understand its broad trends – Social Security, Medicaid/Medicare, defense, and interest on the debt are the biggies; most other social assistance or public good provision programs are tiny – but if you asked me right now for an estimate of how much the federal government spends on highways, for example, I would have only a wild guess. So, given that someone who would bother to follow polling data on federal budget priorities doesn’t have the outline of the budget memorized, what does that imply for our expectations of the average survey respondent?
Another way of asking this question is to step back and ask, what kind of knowledge do these questions give us? Most respondents are never asked to make decisions explicitly about the federal budget. I have no control over how much Congress spends on foreign aid or PBS. Mostly, I have a vote and some limited capacity to donate to candidates or write letters or whatnot. To do that, how informed do I need to be? This is a sort of rational inattention argument, and it makes some sense for the budget data – although obviously, it suggests that manipulation is relatively easy, as people’s priors are weak, and so if the Republican party spends years saying that foreign aid is creating our crushing tax burden, people don’t have a knee-jerk “That’s mathematically impossible!” reaction as they perhaps ought to.
But that argument doesn’t hold up, I think, for the sexuality question. The federal budget is a freakishly complicated process that allocates an almost unimaginable amount of money, from a personal standpoint – trillions and trillions of dollars. But our estimates of the percentage of gays and lesbians are presumably informed by our relatively large sampling of known individuals – we all know thousands of people – among other things. It makes me wonder, to start with, where people think all these gays and lesbians are! Do midwesterners think that New York and San Francisco are 90% GLB or something?
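To put a rough number on that intuition: if personal experience really worked like a simple random sample, even a modest sample of acquaintances would pin down the true proportion fairly tightly. Here’s a minimal back-of-the-envelope sketch (the sample size of 500 is a made-up illustration, and it ignores the obvious complications – acquaintance networks are clustered, and identification isn’t always visible):

```python
import math

# Hypothetical illustration: suppose a person can "sample" n = 500
# acquaintances, and the true share of GLB-identified people is p = 0.035.
n = 500
p = 0.035

# Standard error of a sample proportion under simple random sampling:
# sqrt(p * (1 - p) / n)
se = math.sqrt(p * (1 - p) / n)

# A rough 95% interval around the true share.
low, high = p - 1.96 * se, p + 1.96 * se
print(f"standard error: {se:.4f}")          # under 1 percentage point
print(f"95% interval: {low:.3f} to {high:.3f}")
```

Under those (admittedly heroic) assumptions, the interval sits around 2–5%, nowhere near the 20-25% median answer – so whatever is driving the survey responses, it isn’t sampling error from people’s own social worlds.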
I don’t have good answers, but I am hoping some of you do. How should we interpret these sorts of trend/proportion knowledge questions in surveys? If we frame these questions as “public understanding of social facts” or “public understanding of social science” (to riff on the equally poorly-acronym’d “public understanding of science”), what kind of meaningful information do they give us? These reports are usually framed in an enlightenment social problems vein: if only we actually knew the real numbers, we wouldn’t care so much about (fraud, foreign aid, etc.). But that’s deeply unsatisfying. What’s a better way?