Americans: 80% or 45% Middle Class? (Or, Response Categories Matter)

There is a common trope in discussions of class in America: everyone thinks they are middle class. Lots of polling data supports this assumption. When asked to self-report a class, about 80% of Americans say “middle class” (Here’s one news story on the topic from the Washington Independent). But, at the same time, other polls and surveys, including the General Social Survey, show only ~45% of Americans report themselves as middle class, while a similar percentage respond “working class.” What’s the deal?

The answer has a lot to do with how we think about survey questions and their connections to identity. The second finding comes from a slightly different question. Instead of being asked, “What class do you identify as?” or “Would you say you are lower, middle or upper class?” respondents to the GSS are asked, “If you were asked to use one of four names for your social class, would you say you belong in: the lower class, the working class, the middle class, or the upper class?” Adding the “working class” category as an explicit response dramatically shifts the responses. This blog post at Truth Out has some nice, quick analysis of the GSS to show that (for example) the percentage responding working vs. middle class is quite stable over the past 40 years, and that having a college degree is a strong predictor of identifying as middle rather than working class (but not determinative). I ran some numbers of my own on the 2006 GSS data and found that income, party identification (Republican vs. Democrat), education, and race (but not gender) were all strongly associated with class self-identification (that is, higher income, more Republican, more years of education and being white all increased the odds of identifying as middle class).
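The "increased the odds" language above is the standard odds-ratio framing. As a minimal sketch of what that computation looks like, here is a toy 2×2 cross-tab of class identification by college degree; the counts are invented for illustration and are not actual GSS figures, but the odds-ratio arithmetic is the standard one.

```python
# Toy 2x2 cross-tab: class identification by college degree.
# Counts are invented for illustration, NOT actual GSS data.
counts = {
    ("degree", "middle"): 300, ("degree", "working"): 100,
    ("no degree", "middle"): 250, ("no degree", "working"): 350,
}

def odds_ratio(tab):
    """Odds of identifying as middle class among degree holders,
    relative to the odds among non-degree holders."""
    a = tab[("degree", "middle")]    # degree, middle class
    b = tab[("degree", "working")]   # degree, working class
    c = tab[("no degree", "middle")]
    d = tab[("no degree", "working")]
    return (a / b) / (c / d)

print(round(odds_ratio(counts), 2))  # 4.2 with these toy counts
```

An odds ratio above 1 means degree holders are more likely to pick "middle class"; an analysis like the one described in the post would estimate this kind of quantity (typically via logistic regression, to adjust for income, party, and race simultaneously).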

What does all this mean? First, I think it means that we are too quick to jump from a survey response to a deeply-held identity. Some large fraction of Americans will respond that they are middle class, but also that they are working class, depending on what question they are asked. Neither response is “false”; rather, they are responses to different prompts. Surveys themselves are complicated social interactions, a bit like psych lab experiments run in a very unwieldy, hard-to-control laboratory: depending on how you poke people, they give different responses, all of which are real, but none of which are easy to generalize to other settings.

Second, I think it means that there might be a missed opportunity for a “working class politics.” If we know that almost half of all Americans will call themselves working class if given the opportunity, then it’s reasonable to guess that they will also respond to policies and rhetoric that explicitly invoke that label.


3 Comments

  1. This is neat, and brings to mind one of the chief lessons I’ve learned from studying statistical data: people are highly variable when you ask them to self-assess anything.

    This lesson was impressed upon me most forcefully by a paper in political science on the prediction of presidential vote shares. The paper noted that socio-economic predictor models were not very good at predicting presidential elections (it’s actually a hard thing to do well), BUT that they were closer to the actual outcome for far longer than the polls were. Put another way: if you bet money every day on the likely outcome of an election, you’d do far better to follow a bad predictive model than to ask people which candidate they were going to vote for. Much like the people who say they are middle class in one context and working class in another, people will switch from saying they’re voting for one party to the other, day to day. People are surprisingly bad at assessing their own behaviors, especially when they have to judge their position relative to others, or their position at some future time.

    This is basically why I like statistical data that ask people to report something more concrete about their lives. In my line of research there are people who ask respondents “Do you think that you save enough for the future?” They get very pessimistic results. This isn’t shocking; Americans have a real complex about saving and spending (see “The Protestant Ethic”). I much prefer to look at data that ask respondents “What was the balance of your checking account three months ago?” and then ask “What was the balance of your checking account today?” This is much more concrete, and will tell me something about a household’s actual saving behavior (with caveats, of course). There are definitely times when we want a household’s self-evaluation (esp. in cultural research). However, when we want to predict something about behavior, I find it’s not very helpful to ask respondents what they think they’re going to do.

    Also, I agree that “working class” politics has some legs. People like the sound of “working class.” I’d be interested to know the ramifications of framing economic well-being in terms of “working class” or “the 99%”.

    (This makes me think that at some point I should write a post about all the lessons stats have taught me).

    • Jeff,

      I think your tack is very similar to that taken by most economists. Attitudes are mostly irrelevant, revealed preferences and behaviors are what we care about. I think that’s a useful approach for a lot of questions (e.g. predicting how a tax cut will change someone’s consumption patterns, say). People don’t have the best sense of how they make lots of small decisions that aggregate in important but complicated ways, but by focusing on smaller actions we might be able to gain traction on the bigger process.

      But! I think that for questions of meaning, interpretation and identity, we can’t focus solely on already existing behaviors. For example, if we want to know whether or not different rhetoric will “resonate,” I think knowing about attitudes and ideals might be hugely important. Moreover, the attitudes and identities are themselves phenomena of interest.

      So, I think it’s a matter of relying on the right kind of survey responses for the right kind of question. If you want to know how much money a family is going to spend on clothes next year, ask them how much they spent this year, not how much they plan to spend or whether they think they spend too much. But if you are interested in how they understand themselves as consumers, and how they might be mobilized around a politics of ethical consumption (say), then you are going to need fuzzier questions to have a shot at making sense of it.

      All of this is really just an elaboration on your aside, “There are definitely times when we want a household’s self-evaluation (esp. in cultural research).” Does it line up with what you had in mind?

      • Most definitely, that’s basically what I was thinking. There’s a place for delving into people’s self-evaluations — in these cases, seeing how the question was asked will be part and parcel of interpreting respondents’ results. So, it’s not an issue of whether the GSS is right, or if other data sources are right, but rather: what frames of meaning are we activating by asking the questions in different ways?

        As an aside, though, I would say that cultural research would benefit from occasionally being a bit more skeptical and hard-nosed regarding responses. In particular, I wish more cultural research were willing to make the connection between frames of meaning and likely actions. For instance, one can try to get at feelings of racial antipathy (“racism”) in a lot of ways. Still, I think we have to be honest that people will often offer “well meaning” responses (e.g. “I like people of all colors”) when they really are not interested in the practical implications of this well-meaning response. So, I think there is room here to connect thoughts and practices by maybe including another question along the lines of “A teacher wants to include African-American history in their curriculum; are you happy about this?”