An idle thought for a Monday morning. Discourse on methodological debates in sociology often falls into a “quant vs. qual” trap. Quals are interpretive, humanistic, small N, non-generalizable, etc. Quants are positivistic, scientistic, ahistorical, big N, causal, etc. These arguments are historically specific – the move to quantification (especially big N surveys) was a move away from causal arguments (which were linked with heavy theorizing and interpretation) in the 1940s-1950s (see Luker’s summary in her methods book, which also traces the changing gender of quantitative work). But they also stifle a (potentially) more interesting conversation about the varieties of research that get labeled quant and qual.
For example, I am primarily a historical sociologist. My data range from other people’s publications (especially economics texts and articles, as well as newspapers) to individual and organizational archives. The kinds of questions I am interested in and the ways I marshal data are not very closely connected to much of the qualitative research I see my colleagues doing – interview studies of 30-100 individuals with the goal of uncovering mechanisms, strategies, and interpretations (potentially) generalizable to larger populations (though the interview sample is not itself representative). I actually find these studies much harder to read and interpret than most quantitative work, even though historical and interpretive interview-based work are more often lumped together.
Within “quantitative sociology,” I think there is also a massive diversity of types of data, strategies for analysis, presumptions about causality and generalizability, and so on. In particular, I want to point to one major difference that I haven’t seen given much attention in methods (or “logic of research”) courses. Some quantitative studies use large N datasets (surveys of individuals, census data, etc.) to attempt to uncover broad and diffuse mechanisms. For example, a classic stratification strategy looks at predictors of occupation, income, or other outcomes based on variables reflecting parents’ resources (education, income, etc.), perhaps childhood neighborhood, and so on. There are many possible paths connecting the dots from where you grew up to where you ended up, and rather than tackling any one of them individually, the big N analysis looks for the average effect.
On the other hand, some quantitative research attempts to uncover the logic behind a specific kind of administrative process or organizational decision. I’ll give two sets of examples, one from economic sociology and one from criminology. Lots and lots of research within economic sociology examines the diffusion of organizational practices. That is, we are asking why a given organization made a particular decision (to adopt a poison pill takeover defense, to lay off workers, to create a CFO position, etc.) at a particular time. Sometimes, we use some of the same variables in our analysis that the organization likely examined in its own analysis: e.g., we might examine profitability data in predicting who announced a layoff vs. actually carried it through. Similarly, criminologists ask a lot of questions about sentencing. The goal is to uncover the logic behind the decisions of parole boards, judges, and so on.* The variables used in the analysis include many of the same variables used by the parole board – explicitly (e.g., the kind of violation, criminal background) or implicitly (race, age).
In both cases, researchers’ access to the data partially or wholly results from the administrative records kept by the organization: financial accounting and SEC reports, case files and histories, etc. Relying on administrative data has massive advantages (such data may be audited, as with financial accounts; they are already produced; they are often much larger in scope than a reasonably priced survey; coverage rates often approach 100%; etc.) but also some serious weaknesses (the more administrative data are used for control or regulation purposes, the more incentive organizations and individuals have to falsify or fudge the data; the categories themselves are the organizations’, not the researchers’; there may be “good organizational reasons for ‘bad’ records” [Garfinkel 1967]; etc.). Additionally, the kind of question being asked (unpacking organizational decision-making processes vs. the survey-style diffuse-process research) seems like it would require a different manner of interpretation, would combine differently with ethnographic observation or qualitative interviews, and so on. Instead of more rehashing of the quant vs. qual divide (and re-inscribing one historically specific set of stereotypes), I’d love to see more focus on these sorts of differences in styles of research within and between “quantitative” and “qualitative.”
Or, put more simply, type of data is not the same as method.
* This discussion was inspired by the excellent research of my Michigan colleague Jonah Siegel on neighborhood effects and parole violations.