An idle thought for a Monday morning. Discourse on methodological debates in Sociology often falls into a “quant vs. qual” trap. Quals are interpretive, humanistic, small N, non-generalizable, etc. Quants are positivistic, scientistic, ahistorical, big N, causal, etc. These arguments are historically specific – the move to quantification (especially big N surveys) was a move away from causal arguments (which were linked with heavy theorizing and interpretation) in the 1940s-1950s (see Luker’s summary in her methods book, which also traces the changing gender of quantitative work). But they also stifle a (potentially) more interesting conversation about the varieties of research that get labeled quant and qual.
For example, I am primarily a historical sociologist. My data range from other people’s publications (especially economics texts and articles, as well as newspapers) to individual and organizational archives. The kinds of questions I am interested in and the ways I marshal data are not very closely connected to much of the qualitative research I see my colleagues doing – interview studies of 30-100 individuals with the goal of uncovering mechanisms, strategies, and interpretations (potentially) generalizable to larger populations (though the interview sample is not itself representative). I actually find these studies much harder to read and interpret than most quantitative work, even though historical and interpretive interview-based work are more often lumped together.
Within “quantitative sociology,” I think there is also a massive diversity of types of data, strategies for analysis, presumptions about causality and generalizability, and so on. In particular, I want to point to one major difference that I haven’t seen given much attention in methods (or “logic of research”) courses. Some quantitative studies use large N datasets (surveys of individuals, census data, etc.) to attempt to uncover broad and diffuse mechanisms. For example, the classic stratification strategy looks at predictors of occupation, income, or other outcomes based on variables reflecting parents’ resources (education, income, etc.), perhaps childhood neighborhood, etc. There are many possible paths connecting the dots from where you grew up to where you ended up, and rather than tracing any single path, the big N analysis looks for the average effect.
On the other hand, some quantitative research attempts to uncover the logic behind a specific kind of administrative process or organizational decision. I’ll give two sets of examples, one from criminology and one from economic sociology. Lots and lots of research within economic sociology examines the diffusion of organizational practices. That is, we ask: why did this organization make this particular decision (to adopt a poison pill takeover defense, to lay off workers, to create a CFO position, etc.) at this particular time? Sometimes, we use some of the same variables in our analysis that the organization likely examined in its own analysis: e.g., we might examine profitability data in predicting who announced a layoff vs. who actually carried one through. Similarly, criminologists ask a lot of questions about sentencing. The goal is to uncover the logic behind the decisions of parole boards, judges, and so on.* The variables used in the analysis include many of the same variables used by the parole board – explicitly (e.g. the kind of violation, criminal background) or implicitly (race, age).
In both cases, researchers’ access to the data partially or wholly results from the administrative records kept by the organization: financial accounting and SEC reports, case files and histories, etc. Relying on administrative data has massive advantages (such data may be audited, as with financial accounts; they are already produced; they are often much larger in scope than a reasonably priced survey; coverage rates often approach 100%; etc.) but also some serious weaknesses (the more administrative data are used for control or regulation purposes, the more incentive organizations and individuals have to falsify or fudge them; the categories themselves are the organizations’, not the researchers’; there may be “good organizational reasons for bad records” [Garfinkel 1967]; etc.). Additionally, the kind of question being asked (unpacking organizational decision-making processes vs. the survey-style research on diffuse processes) seems like it would require a different manner of interpretation, would combine differently with ethnographic observation or qualitative interviews, and so on. Instead of more rehashing of the quant vs. qual divide (and re-inscribing one historically specific set of stereotypes), I’d love to see more focus on these sorts of differences in styles of research within and between “quantitative” and “qualitative.”
Or, put more simply, type of data is not the same as method.
* This discussion was inspired by the excellent research of my Michigan colleague Jonah Siegel on neighborhood effects and parole violations.
Austen
/ February 21, 2012
You wrote:
“My data range from other people’s publications (especially economics texts and articles, as well as newspapers) to individual and organizational archives. The kinds of questions I am interested in and the ways I marshal data are not very closely connected to much of the qualitative research I see my colleagues doing – interview studies of 30-100 individuals with the goal of uncovering mechanisms, strategies, and interpretations (potentially) generalizable to larger populations (though the interview sample is not itself representative). I actually find these studies much harder to read and interpret than most quantitative work, even though historical and interpretive interview-based work are more often lumped together.”
What do your colleagues who do in-depth interviews use their data for? I do a lot of interviews, mainly because my primary concern is with social psychology. I find interview data are the best for this kind of analysis. However, I don’t think interviews necessarily have a role in research questions that aren’t about social psychology. Do you see a role for interview data that I don’t see? I guess I’m looking for insight into how ‘typical’ sociologists use interviews, or whether your colleagues are a bunch of social psychologists. Thanks…
Dan Hirschman
/ February 21, 2012
Austen,
A lot of sociologists in my department use qualitative interviews to investigate issues around gender, race, culture, religion, inequality, etc. So, for example, one colleague interviewed Catholic women dealing with infertility and navigating the Church’s restrictions on assisted reproductive technologies. Another interviewed male administrative assistants on the strategies they use to respond to being in a feminized position. Etc. The goal is to understand how people are experiencing their lives, and how they strategize responses to problematic situations. I think? I definitely think interpretive interviews are the way to answer these questions, I’m just not especially skilled at reading this kind of work or making helpful comments on it.
Oz
/ February 25, 2012
Hey Dan and Austen, nice discussion.
You mentioned that you are a historical sociologist. I know of a few historical sociologists who use interviews to study events rather than experiences thereof (is that different? I am struggling with that myself). For instance, Pardo-Guerra* used interviews with brokers, stock exchange workers, technicians and the like to reconstruct a historical account of the automation of the LSE (and other people from the ‘Edinburgh school’ are doing similar things). I think that this sort of data collection, which is again based on oral-history style questioning (trying to log what happened), is helpful in studies that stress practice over discourse (what do you think?). I find that people who use Actor-Network Theory go down this road.
* Pardo-Guerra (2010) ‘Creating flows of interpersonal bits: the automation of the London Stock Exchange, c. 1955–1990’, Economy & Society 39(1): 84–109.
Dan Hirschman
/ February 28, 2012
Oz,
Quick response: I agree completely – not all interviews in sociology are done for the kind of qualitative papers I find hard to evaluate. My advisor interviewed prominent economists in her history of the financialization of the US economy, but these interviews function more like a mix of secondary sources (histories of the debates of the period) and archival documents (the accounts of important individuals in a given process) than like qualitative interviews. Does that make sense? I think the same applies to ANT-style histories of scientific controversies or technical innovations.
Austen
/ February 25, 2012
Thanks for the reply… I read your blog and I wonder… based on the little I’ve deduced about your dissertation, do you consider yourself a ‘macroeconomist’? If not, why not… thanks
Dan Hirschman
/ February 28, 2012
In the US, as Marion Fourcade has shown, being an economist means holding a PhD in economics and having certain mathematical skills, etc. I am not an economist. Rather, I am a sociologist, and working on becoming a sociologist of macroeconomics. I do have thoughts on macroeconomics and macroeconomic policy that come from my close reading of economics blogs, papers, and history, but they are the thoughts of an informed sociologist, not someone waving the card of (or claiming the identity of) “macroeconomist.”