Qualitative Coding as Ritual: A Review of Biernacki’s “Reinventing Evidence in Social Inquiry”

I just finished reading Richard Biernacki’s (2012) Reinventing Evidence in Social Inquiry: Decoding Facts and Variables. The book argues that cultural sociology has erred in attempting to merge humanistic and scientistic modes of inquiry. This urge manifests in the qualitative or interpretive coding of meaning in texts. Biernacki argues that coding should be understood as a ritual practice, one that decontextualizes meaning in order to selectively recontextualize it, and thus reinforces pre-existing ideas or theories while lending them the appearance of empirical foundations. (151) The central chapters of the book are three case studies in which Biernacki reanalyzes the sources underlying three prominent works of cultural sociology based on coding (Griswold 1987, Bearman and Stovel 2000, and Evans 2002).* Biernacki’s work was the subject of a multi-year controversy before its publication, as covered by the San Diego Union-Tribune here.**

Biernacki begins his book with an excellent description of the tensions between the scientistic and humanistic approaches. The book names five features of the “scientific” interpretation of texts that are incompatible with humanist approaches:

(a) quantitative or abstractive generalizing starts by defining a clear target population about which to reason or generalize; (b) the relevant variables are self-contained once the research design is formalized; (c) correlations are between abstract factors in an open mathematical space bracketed for the sake of the procedure from unpredicted attributions of meaning; (d) there is a standard causal environment partially separable from the outside, unmeasured environment that makes cases comparable and that undergirds interpretable results; (e) finally, elements of the examined universe, including therefore separable text elements, each comprise events or features with potentially independent and potentially universal causes. Each of these features of inquiry is invalid and reversed for more purely humanist text interpretation… (8)

The critique of the pre-determined sample is particularly clear and relevant, and reminds me a bit of Kristin Luker’s discussion of why non-canonical social science research must rely on different techniques to justify its objects of study (“data outcroppings” rather than representative samples; see my review of Salsa Dancing into the Social Sciences here). Biernacki shows how the samples chosen by the three authors he studies shape their findings. For example, Evans’ sample of bioethics texts about human genetic engineering includes many broader works in philosophy and theology from before 1973, but then changes its sampling strategy for later periods (as the main subject of the inquiry becomes more conventionally codified), which ends up excluding many broader works. According to Biernacki, this sampling procedure artificially creates a trend: “directing the probe away from philosophical references toward more specific, technologically sensitive keywords (such as ‘gene therapy’) would be mirrored self-fulfillingly in a narrowing of content over time. It would create the ‘observed’ trend away from broad spiritual concerns toward formally rational application of technology.” (63) In a slightly different context, Biernacki critiques Wendy Griswold’s sample of book reviews, which includes everything from small snippets in newspapers to book-length treatises. (119; 125-126, see also biernackireviews.com/) This heterogeneity complicates Griswold’s findings about the relative presence of certain topics in reviews from different countries, as the reviews from the West Indies were much longer than those from the UK, and were intended for a very different audience, and thus were more likely to mention a diverse array of topics. (113) And so on.

Biernacki’s prose is dense, but some of the takeaway messages are quite clear, and damning. Here’s a bit of summary from the conclusion:

Add up the quirky samples, sourceless observations, changed numbers, problematic classifying, misattributions, substituted documents, and violations of the sine qua non of sharable or replicable data, and it seems that in some respect, each of the demonstration studies lack referential ties to the outside world of evidence. (127)

In the infelicitously blurred genre of coding, the term “qualitative” can designate readings so soft as to appear nonsubsistent. Since it is challenging to match codes individually to their source points, we retain only boilerplate promises that a thicket of qualitative codes corresponds intelligibly to anything. (128)

Each study was narrated as a tale of discovery, yet each primary finding was guaranteed a priori. (128)

While I think Biernacki’s claims about the problems of sampling are his clearest and most compelling, the core of his critique of the ritual character of coding practices in cultural sociology is a deeper disagreement about the nature of “meaning.” (For more on this topic, see the articles in Reed and Alexander 2009, including the debate between Biernacki and Evans that prefigures this book.) Biernacki summarizes this difference:

The premise of coding is that meanings are entities about which there can be facts. But we all know that novel questions and contexts elicit fresh meanings from sources, which is enough to intimate that meaning is neither an encapsulated thing to be found nor a constructed fact of the matter. It is categorically absurd to treat a coding datum as a discrete observation of meaning in an object-text. My preference is to think of “meaning” as the puzzle we try to grasp when our honed concepts of what is going on collide with the words and usages of the agents we study. Describing meaning effectively requires us to exhibit that fraught interchange between cultures in its original: the primary sources displayed in contrast to the researcher’s typifying of them. (131)

Biernacki wraps the entire critique up with the idea of ritual, a claim he takes very seriously. I think this discussion goes a bit far at times and distracts from the methodological critique – in other words, Biernacki jumps from the critique itself to a diagnosis of why cultural sociology fell down this rabbit hole, but that diagnosis can get in the way of the clarity of the critique. In part, I think this reflects Biernacki’s aim – which is not to produce better qualitative coding, but to undermine the entire enterprise. Biernacki’s solution to the various problems encountered is not a handful of fixes (more careful sampling, say), but a return to Weberian ideal types, and a divorce between humanistic and intendedly scientific approaches in cultural sociology. This approach, more true to the humanistic tradition, will also better satisfy the scientistic aims of the field:

This volume has shown that humanist inquiry on its own better satisfies the “hard” science criteria of transparency, of retesting the validity of interpretations, of extrapolating from mechanisms, of appraising the scope of interpretations, of recognizing destabilizing anomalies, of displaying how we decide to “take” a case as meaning something, of forcing revision in interpretive decisions, of acknowledging the dilemmas of sampling, and of separating the evidence from the effects of instrumentation. (151)

Strangely, I felt that the book was a bit too short at 155pp – a comment I’m not sure I’ve ever made before about an academic monograph! But that brevity is a virtue in that the book is readable in a single sitting, despite the density of the argument. I highly recommend it, and hope it becomes a touchstone for methodological debates in the coming years.

* Or attempts a reanalysis at least, as in each case the exact corpus used was very difficult to reproduce – none of the original authors could produce an accurate list of sources analyzed! This is just one of the many common issues raised in reanalysis that cut across, to some extent, the quant/qual divide. But thus far, I think, much less attention has been paid to these issues in qualitative research.

** At one point, the Dean of UCSD (Biernacki’s home institution – but also that of one of the authors he criticizes) ordered Biernacki to cease working on his book: “[Dean of the Social Sciences] Elman wrote Biernacki a letter ordering him not to publish his work or discuss it at professional meetings. Doing so, Elman wrote, could result in ‘written censure, reduction in salary, demotion, suspension or dismissal.’” The Dean argued that the book could “damage the reputation” of Biernacki’s colleague and thus constituted harassment. All of this speaks to Gelman’s concerns (see also OrgTheory) that exposing fraud or other research malfeasance is a thankless task – here Biernacki engaged in a sophisticated theoretical critique and empirical reexamination of his colleague’s work, and in return he received threats and a gag order from the administration! Perhaps a claim of outright fraud would have been easier to sustain, but certainly the attempt to reproduce a colleague’s findings was not received as a wholesome part of the scientific enterprise.


3 Comments

  1. Looks very interesting, Dan – thanks for the tip. I look forward to reading, and likely disagreeing with, it.

  2. It is amazing how Anglo-Saxons separate “science” vs. “humanities”, and believe there is only “one best way” to do science. This oversimplified vision ignores the social sciences, which are not “humanities” and do not follow the positivist or standard “one best way” of what you call “science”. Your discussion sounds like nineteenth-century music.

  3. Sharon W / November 4, 2012

     The story you referred to about the Dean got a lot of attention in San Diego:

     http://www.utsandiego.com/news/2011/may/25/ucsd-faculty-says-professors-academic-freedom-brea/
