Review: A Tale of Two Cultures by Goertz and Mahoney (2012)

Attention conservation notice: This post is a 2,600-word book review about a recent text on quantitative and qualitative methodology in sociology and political science. If you aren’t in one of those fields, you probably won’t be interested in the post.

The idea of a schism between the “Quants” and the “Quals” is a common trope in sociology. According to the quals, the quants are naive positivists, running regression after regression, producing gobbledegook that masquerades as science. According to the quants, the quals are crazed post-modernists, rejecting the very possibility of reproducible or generalizable knowledge with their fetishistic attachment to the peculiarities of particular cases or, at best, small-N comparisons. Everyone knows that these stereotypes are misleading, at best, but they lurk beneath a lot of high-minded and thoughtful debates about the role of statistics courses in the graduate curriculum, or the ability of small-N research to uncover causal relationships. And they recur especially in discussions of the new (well, not that new, see Figure 1) “big thing”: mixed-methods research (by which we mean mixed-data research, which relies on both large-N statistical analysis and small-N interview samples or historical analysis of a few cases). Which method should serve which? How should we all get along?

Figure 1. Source: JSTOR Data for Research, searching “mixed methods” in all Sociology journals.

Into this fierce turf war, Gary Goertz and James Mahoney have fired a new and incredibly illuminating salvo, A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences (2012, Princeton University Press). Goertz, a political scientist, and Mahoney, a sociologist, attempt to make sense of the different cultures of research in these two camps without applying the criteria of one to the other. In other words, the goal is to illuminate difference and similarity rather than to judge either approach (or, really, affiliated collection of approaches) as deficient by a universal standard. In this review, I’m going to try to lay out what I found most useful from G&M’s analysis and the one or two points I found frustrating. I’m also going to try to historicize the book a bit, placing it in a long history of debates within the social sciences about what exactly we mean by social science, and about the privileged role that invocations of the natural sciences, and especially mathematics, play in those debates.

The first, and most important, move that G&M make is to place a major branch of social science research – interpretive approaches – outside the scope of their discussion. G&M are interested in quantitative and qualitative approaches to causal explanation. Thus, they want to leave aside the parts of the social sciences that have close connections to the humanities. This move is pulled off brilliantly, in a way that lets G&M move on to their substantive argument without dismissing the value of interpretive work, by recognizing that A Tale of Two Cultures is itself an example of interpretive social science:

Our decision not to treat interpretive approaches in this book should not be taken as evidence that we see no place for these approaches in the social sciences. In fact, our two cultures argument is, broadly speaking, an exercise in description and interpretation. We seek to elucidate the practices and associated meanings of two relatively coherent cultures of research. Thus, while interpretative analysts will not find their tradition of research represented in the qualitative culture that we describe, they nonetheless will find many of the tools of their tradition put to use in our analysis. (G&M 5)

Although other “interpretive” social scientists may well disagree, I found this footnote to be satisfying as a shield against the argument that G&M devalue interpretive research. After all, they are putting their shiny new book squarely into the interpretive tradition! That being said, one of my lingering concerns with the book (to which I will return) is the easy division it imagines between qualitative-research-as-causal-explanation and qualitative-research-as-interpretive-analysis. While I completely agree that these two goals are distinct, and have their origins in very different theoretical endeavors, I think they are more often muddled in empirical work – which is perhaps why quantitative researchers sometimes collapse small-N comparative analysis in the style of Skocpol and Mill together with post-structuralist-inspired cultural analysis. I think G&M would like to posit this sharp division between the two strands of qualitative research – which, drawing on Luker, might perhaps most usefully be thought of as “canonical” qualitative research and “non-canonical” qualitative research – because they want to highlight the pure logic of causal explanation in canonical qualitative research.

On to the meat of the argument. G&M argue that the two cultures of quantitative and (causal) qualitative research differ in how they understand causality, how they use mathematics, how they privilege within-case vs. between-case variation, how they generate counterfactuals, and more. G&M argue, perhaps counter to our expectations, that both cultures have answers to each of these questions, and that the answers are reasonably coherent within each culture, but create tensions when researchers attempt to evaluate each other’s research: we mean different things, we emphasize different sorts of variation, and so on. Each of these differences is captured in a succinct chapter that lays out with incredible clarity the basic choices made by each culture, and how these choices aggregate up to very different models of research.

Perhaps the most counterintuitive, but arguably the most rhetorically important, claim is the assertion that both quant and qual research are tightly linked to mathematics. For quant research, the connection is obvious: quantitative research relies heavily on probability and statistics. Causal explanation consists of statistically identifying the average effect of a treatment. For qual research, the claim is much more controversial. Rather than relying on statistics, G&M assert that qualitative research relies on logic and set theory, even if this reliance is often implicit rather than formal. G&M argue that at the core of explanation in the qualitative culture are the set-theoretic/logical criteria of necessary and sufficient causes. Combinations of necessary and sufficient conditions constitute causal explanations. This search for non-trivial necessary and sufficient conditions for the appearance of an outcome shapes the choices made in the qualitative culture, just as the search for significant statistical variation shapes quantitative research. G&M include a brief review of basic logic, and a quick overview of the fuzzy-set analysis championed by Charles Ragin. I had little prior experience with fuzzy sets (although plenty with formal logic), and I found this chapter extremely compelling and provocative. Qualitative social science works much more often with the notion of partial membership – some countries are not quite democracies, while others are fully democracies, and still others are completely not democracies. This fuzzy-set approach highlights the non-linearities inherent in partial membership, as contrasted with quantitative approaches that would tend to treat “degree of democracy” as a smooth variable.
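To make the set-theoretic machinery a bit more concrete, here is a minimal sketch of how fuzzy membership and consistency scores can be computed. The country names, membership scores, and the “democracy is necessary for durable peace” claim are hypothetical illustrations of my own, not examples from the book; the consistency formulas follow the conventions popularized in Ragin-style fuzzy-set work.

```python
# Hypothetical fuzzy membership scores, between 0 (fully out of the set)
# and 1 (fully in). Names and numbers are invented for illustration.
cases = {
    "Country A": {"democracy": 1.0, "peace": 0.9},
    "Country B": {"democracy": 0.7, "peace": 0.8},
    "Country C": {"democracy": 0.2, "peace": 0.3},
    "Country D": {"democracy": 0.0, "peace": 0.1},
}

def necessity_consistency(cases, condition, outcome):
    """How consistent the data are with 'condition is necessary for outcome':
    membership in the outcome should never exceed membership in the condition.
    Consistency = sum(min(X, Y)) / sum(Y), following Ragin-style conventions."""
    overlap = sum(min(c[condition], c[outcome]) for c in cases.values())
    return overlap / sum(c[outcome] for c in cases.values())

def sufficiency_consistency(cases, condition, outcome):
    """How consistent the data are with 'condition is sufficient for outcome':
    membership in the condition should never exceed membership in the outcome.
    Consistency = sum(min(X, Y)) / sum(X)."""
    overlap = sum(min(c[condition], c[outcome]) for c in cases.values())
    return overlap / sum(c[condition] for c in cases.values())

print(f"necessity consistency:   {necessity_consistency(cases, 'democracy', 'peace'):.2f}")
print(f"sufficiency consistency: {sufficiency_consistency(cases, 'democracy', 'peace'):.2f}")
```

On this reading, a necessity claim is undermined by cases that are more “in” the outcome set than the condition set, not by the size of an average effect – which is exactly the sense in which the two cultures are asking different questions of the same data.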

This attempt to link qualitative research to set theory and logic is at once compelling and deeply reminiscent of a long lineage of debates about the role of math in the social sciences. The 1910s-1940s saw a massive struggle within American economics to define the course of the field. To gloss it quickly, on one side, the institutional economists focused on case studies, legal arrangements, charting the business cycle, and such, and were derided as merely descriptive. On the other, the neoclassicals built increasingly complicated formalisms to try to understand the underlying dynamics of the economic system (although this took off in earnest in the 1930s, with the emergence of the Econometric Society). As Yuval Yonay (1998) and Malcolm Rutherford (2011) have shown, both the institutionalists and the neoclassicals argued that their work was more scientific, and both linked their prestige and science-y-ness to their use of math. The institutionalists pointed to their extensive work producing quantitative data about the real world (time series of production, employment, and so on, including what eventually became the modern national income accounts). On the other side, the neoclassicals pointed to their use of formal logic, proofs, and especially calculus (important for marginal analysis, which lay at the root of neoclassicism). The details are not important for the comparison, but what’s interesting is how important the claim to mathematics was for both sides. Similarly, G&M put a lot of effort into arguing that qualitative research has, at its core, a different branch of mathematics, and one that is arguably the most prestigious inside the field of mathematics itself (at least, at some points in its history).

Instead of exhaustively charting each of the differences between the two cultures, as identified by G&M, I’m just going to list a few of my favorites. In general, the expositions of these differences were very clear, and chapters could easily be assigned as part of a graduate – or even potentially advanced undergraduate – course on research methods. So, quantitative research:

  • Relies on probability and statistics.
  • Defines causal explanation in terms of identifying average treatment effects (a minimal sketch follows this list).
  • Asserts “No strong causal inference without an experiment” (G&M 102) or at least without “manipulation” (cf. Holland 1986).
  • Privileges cross-case analysis.
  • Offers symmetrical explanations: Explaining a high Y is the exact flip of explaining a low Y.
  • Understands mechanisms as linking treatments and outcomes, but sees mechanisms as “adding weight” or “making a contribution,” never explaining a specific case.
  • Plausible counterfactuals are those that are within the scope of the observed variation in the data.
  • All variation in the data is meaningful; transformations of the data help make the best use of that variation.
  • Relies on random selection/sampling, avoiding excessive reliance on extreme cases.
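As a concrete anchor for the first two items (the sketch promised above), here is a minimal, made-up example of the quantity at the heart of this culture: an average treatment effect, estimated as a difference in means on the assumption of random assignment. The numbers are invented; nothing here comes from the book.

```python
# Hypothetical outcomes under random assignment; all numbers are made up.
treated = [3.1, 2.8, 3.5, 3.0, 2.9]  # outcomes for units that received the treatment
control = [2.4, 2.6, 2.2, 2.5, 2.3]  # outcomes for units that did not

# Under random assignment, the difference in group means estimates the
# average treatment effect: an average across cases, not an account of any one case.
ate_hat = sum(treated) / len(treated) - sum(control) / len(control)
print(f"estimated average treatment effect: {ate_hat:.2f}")
```

The symmetry point in the list follows directly: the same estimate that explains high values of Y explains low values, just with the sign flipped.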

Some of these will only make sense, or begin to seem controversial, in contrast with qualitative research. Also, I should note that the Holland position on causation – the idea of no causation without manipulation – is certainly characteristic of the literature on causal inference, but it’s not entirely clear that all (or even most) large-N studies in sociology hew to its dictums. So, how does qualitative research compare? Qualitative research:

  • Relies on logic and set theory (albeit implicitly).
  • Defines causal explanation in terms of identifying the presence of sufficient conditions or the absence of necessary conditions.
  • Asserts “No strong causal inference without process tracing” (G&M 103).
  • Privileges within-case analysis (at least in comparison with quantitative research), and even goes so far as to attempt to explain particular outcomes.
  • Offers asymmetrical explanations: Explanations for Y=1 may be different from explanations for Y=0 (implicitly leaving untheorized some logically possible, but unobserved, combinations).
  • Mechanisms link conditions to outcomes, and are directly visible through within-case analysis via process tracing.
  • Plausible counterfactuals are generated according to the “minimum rewrite rule” (G&M 119).
  • Not all variation is meaningful; transformations of data try to map data onto concepts.
  • Bases case selection on the conditions being tested: to test a necessary condition, you need only sample cases where Y = 1, and thus sampling on the dependent variable is a virtue, not a vice (a toy sketch follows this list).
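The selection point is easiest to see with a crisp (non-fuzzy) toy example, the sketch promised above. The cases, names, and the condition tested are hypothetical; the logic is simply the standard check that a claimed necessary condition is falsified only by cases where the outcome occurred without the condition.

```python
# Hypothetical crisp-set data: X = claimed necessary condition, Y = outcome.
# To test necessity, only cases with Y = 1 matter, so sampling on the
# dependent variable is exactly the right move.
cases = [
    {"name": "Case 1", "X": 1, "Y": 1},
    {"name": "Case 2", "X": 1, "Y": 1},
    {"name": "Case 3", "X": 0, "Y": 0},  # irrelevant to the necessity claim
]

counterexamples = [c["name"] for c in cases if c["Y"] == 1 and c["X"] == 0]
if counterexamples:
    print("necessity claim falsified by:", counterexamples)
else:
    print("no counterexamples among the Y = 1 cases; the necessity claim survives")
```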

Rather than explaining all of these differences – which is, after all, the point of the book – I will simply pick out one I found particularly revelatory: the treatment of variation. G&M note that quantitative research is usually interested in the middle of the data. Extreme points are labeled outliers, and decried as mucking up the model. Their extremeness suggests that atypical forces are at play, and thus these points may disturb researchers’ efforts to identify the average effect.

For qualitative researchers, on the other hand, G&M identify what they call “The Principle of Unimportant Variation”: “There are regions in the data that have the same semantic meaning.” (G&M 144) Here we are returned to discussions of fuzzy sets. The problem for conceptual analysis of a small number of cases is that you need to have good, clear definitions that sort the data well into just a few discrete bins. But every conceptual system will have liminal cases. G&M use the idea of a “developed country.” Sweden, the USA, and France are all developed countries. Guatemala, Haiti, and Nicaragua are all not developed. In between, we can imagine a lot of cases that are “kind of” developed – parts of Eastern Europe, Russia, etc. If we use GDP/capita as our primary measurement of “development,” then the map between the variable and the concept is highly non-linear. G&M use a few nice, simple charts and figures to demonstrate this point (G&M 145). Every country with GDP/capita over, say, $20,000/person is considered fully developed. Every country with GDP/capita less than $2,500 is considered fully undeveloped. Everything in between is kind of muddy, to varying degrees. Of course, a quantitative model could accommodate these sorts of conceptual mass points, but doing so is very much against the norms of the culture. Instead, we’d tend to load GDP/capita (or maybe log GDP/capita) into a regression equation, which implicitly assumes that all variation is meaningful, and that an extra $1,000 is equally meaningful across the spectrum (or that a change of 10% is equally meaningful, in the log context). But G&M assert that for qualitative research, this is not the case: Sweden and the USA are both fully developed countries. And explanations that rely on development as a necessary or sufficient condition (e.g. “developed countries never go to war against each other”) don’t care in the slightest whether GDP/capita is $25,000 or $40,000.
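The non-linear map from GDP/capita to the concept is easy to make concrete. The sketch below uses the stylized cut-offs from the discussion above ($2,500 and $20,000) and a simple linear ramp in between; the ramp is my own simplification, not the book’s calibration procedure.

```python
# Fuzzy membership in the set "developed country" as a function of GDP/capita.
# Cut-offs follow the stylized numbers above; the linear ramp is an assumption.
def developed_membership(gdp_per_capita, floor=2_500, ceiling=20_000):
    """0 below the floor, 1 above the ceiling, linear in between."""
    if gdp_per_capita <= floor:
        return 0.0
    if gdp_per_capita >= ceiling:
        return 1.0
    return (gdp_per_capita - floor) / (ceiling - floor)

# Above the ceiling, variation is semantically unimportant: $25,000 and $40,000
# both map to full membership, which is the "same semantic meaning" point.
for gdp in (1_500, 8_000, 25_000, 40_000):
    print(f"GDP/capita ${gdp:,}: membership {developed_membership(gdp):.2f}")
```

A regression on raw GDP/capita, by contrast, would treat the move from $25,000 to $40,000 as carrying fifteen times as much information as the move from $1,500 to $2,500, even though only the latter changes the concept.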

I’ll end with a couple of small criticisms. First, the text makes use of a very small number of datasets for its examples, relying quite heavily on the Polity dataset. This adds clarity, especially for a reader like myself unfamiliar with that dataset, as we have time to get acquainted with it. That said, it lacks variety, and the Polity data have a conveniently medium-N character. And many of the other examples are variables coded at the country level. Again, this makes sense, as large-N statistical analysis and small-N comparative analysis often get together and argue with each other exactly at the medium-N level, like OECD countries or UN members. For teaching purposes, especially in sociology, it might be useful to supplement the text with a few more examples that vary the unit of analysis to see how these concepts apply elsewhere – to organizational- or individual-level variables, for example. Even if the problematic clash of cultures is less frequently encountered at those levels of analysis, it might be a very useful exercise to check how the concepts and culture clash play out on each team’s “home turf,” so to speak. What would it look like to take a set-theoretic approach to claims about stratification and inequality? How might we think about treatment effects in the context of world-systems analysis? Etc.

Second, as signaled above, I think Goertz and Mahoney are a bit too clean in their distinction between qualitative research interested in causal explanation and qualitative research that is traditionally labeled interpretive. This clean division is also part of their normative project, I think, as G&M want causal qualitative research to sharpen its vocabulary and more explicitly embrace the sorts of set theory that they have long advocated. Perhaps the easiest way to make this case is to claim that qualitative research is already doing so, but could be made even better by formalizing the logic. But this move might require abandoning the interpretivist impulse that also undergirds a lot of small-N research. Also, arguably (and Reed [2011], among others, argues just this), some interpretive research is interested in causal explanation, albeit with a very different working notion of causation. That said, such interpretive causal research could badly use some methodological clarity of just the sort Goertz and Mahoney have provided for more canonical qualitative and quantitative causal explanation.

Were I teaching a graduate methods course, I would be delighted to assign big chunks of A Tale of Two Cultures, alongside Luker’s Salsa Dancing into the Social Sciences, as excellent interpretive accounts of the sometimes cold, sometimes hot, Methodenstreit of the contemporary social sciences. And practicing researchers probably have a lot to learn from it as well, especially as they try to make sense of their colleagues’ frustratingly similar, yet distinct, methodological commitments.
