“The Power of Market Fundamentalism”: A Q&A with Fred Block and Peggy Somers

For thirty years, Fred Block and Peggy Somers have been writing about the ideas of Karl Polanyi. In part because of their efforts, Polanyi has become a central theoretical touchstone for economic sociology. In their new book, The Power of Market Fundamentalism: Karl Polanyi’s Critique, Block and Somers collect some of their most influential articles alongside several new chapters exploring Polanyi’s understanding of the political power of ideas, and the ethical importance of recognizing “the reality of society.” Below is the full version of an email interview I conducted with Fred and Peggy; a condensed version will be published in Accounts, the newsletter of the Economic Sociology Section of ASA.*

Q. Let’s start at the very beginning. How did you first come into contact with Polanyi’s work? What did it mean to you at the time?

A. We both read The Great Transformation initially in the 1960s, and we saw Polanyi’s overall political and intellectual sensibility fitting with the kind of humanistic Marxism that was embraced by our part of the New Left. The book made a deep impression, and as the relentless rise of Thatcherism and Reaganism began to demean the image of New Deal and Great Society movements and social programs, and the 1960s more generally, we kept coming back to it as we sought to make sense of the political defeat of those earlier movements.

The Great Transformation also resonated as a critical counterpart to Marx’s story of England’s transition from a pre-industrial agrarian economy to the rise of factory production. In the 1970s and 80s, as interest in Marx retreated, many social scientists turned away from political economy altogether and focused instead on the state. In Polanyi we found a home that allowed us to retain the critique of what we call (adapting Polanyi) free-market utopianism, but from a perspective that made the state and social relations the constitutive elements of all market economies. Polanyi thus provided us with the foundations of a political economy that foregrounded politics and culture without any retreat from the centrality of the economy.


Review of “The Nature of Race”, Ann Morning (2011)

I just finished reading Ann Morning’s The Nature of Race. It’s excellent. Morning shows, through analysis of high school textbooks, interviews with scientists (biologists and anthropologists), and interviews with undergraduates, that racial essentialism is still the dominant mode of understanding race. This is true among practicing biologists, and it’s even true among many undergraduates studying cultural anthropology (though the belief is less prominent there). Morning is one of a number of scholars (including Steve Epstein, Alondra Nelson, and others) who have pointed to the return of biological conceptions of race connected to modern understandings of genetics. More generally, Morning shows that people’s conceptions of race are mixed and messy, but with an essentialist biological argument playing a starring role in many settings.

A bit more on what Morning means by essentialism. Morning lays out three broad positions on race (“racial conceptualizations”) that are themselves not entirely coherent (as in, there are multiple versions of each): essentialism, constructivism, and anti-essentialism.

Essentialism holds that humans are divisible into discrete biological groups which we can call races, and that members of these racial groups are different in important and unchangeable ways. Contemporary essentialist arguments invoke genetics; older arguments relied on phenotypes to assign people to races and on different understandings of biology to motivate their arguments about the fixedness of various traits (intelligence, athletic ability, what have you).

A second position, anti-essentialism, says that essentialism is wrong – human biological variation does not fall neatly into discrete groups, humans have always interbred across what we think of as racial lines, and racial classifications reflect cultural biases, not biological realities.

The last position, constructivism, is in many ways an ally to anti-essentialism – the two can go hand in hand, but they are often invoked separately. Constructivism says that race is a social and cultural construct, but that it is real and meaningful and connected to various forms of domination and inequality (empire, slavery, etc.). The two positions both make sense, but they sometimes seem to imply contradictory actions: constructivism implies that it’s important to measure race and racial difference because it’s so baked into how we set up society that we can’t ignore it but have to fight it effectively, while anti-essentialism seems to imply that we should just get rid of the damned thing entirely. That’s a bit of my extrapolation from her argument, but I think it holds up. I think anti-essentialism and constructivism also become more useful in different contexts: when thinking about, say, school segregation vs. research on new medications.

An important caveat: these are conceptualizations, not people; an individual can express multiple such positions when prompted in different ways, even though they seem to contradict one another. People are funny like that.

The book is excellent, and readable, and inspiring… but it’s also depressing as hell, in a subtle way. As Morning shows, strong arguments against essentialist biological understandings of race go back at least to the 1930s (and almost surely further, but in very recognizable forms to that period). The scientific evidence against racial essentialism has only gotten stronger. And yet, somehow, we are losing the fight. Morning’s last chapter offers some tentative ideas about why racial essentialism is so enduring, and why it might be especially resurgent now in an era that has seen tremendous legal victories in the fight for civil rights, but persistent and massive racial inequality and segregation. But at least one reason has to be that social scientists haven’t yet figured out how to convince everyone – especially, but not limited to, biological scientists and undergraduates – that race is not an essential biological fact, but rather an enduring cultural and social creation that plays out meaningfully in everyday life and is baked into social structures of domination. I’m not sure how we do that, but somehow we have to do better.

Statistics Done Wrong: Practical Tips for Avoiding Fallacies

The theme of this week’s posts is apparently “free web books on contentious topics.” Yesterday, it was typography. Today it’s statistics. In Statistics Done Wrong, Alex Reinhart presents a short guide to common problems with the way statistics is done in medicine and in the hard and soft sciences. Readers familiar with Andrew Gelman’s and John Ioannidis’s work will recognize most of the material, but Reinhart has done a nice job of packaging it all together into a short, comprehensible guide suitable for a student with relatively limited background (say, in the middle of the first-year required stats sequence).

For example, Reinhart offers a nice example of the problem of assuming that “no significant difference” in an underpowered study means there is no real difference.

In the 1970s, many parts of the United States began to allow drivers to turn right at a red light.

Several studies were conducted to consider the safety impact of the change. For example, a consultant for the Virginia Department of Highways and Transportation conducted a before-and-after study of twenty intersections which began to allow right turns on red. Before the change there were 308 accidents at the intersections; after, there were 337 in a similar length of time. However, this difference was not statistically significant, and so the consultant concluded there was no safety impact.

Based on this data, more cities and states began to allow right turns at red lights. The problem, of course, is that these studies were underpowered. More pedestrians were being run over and more cars were involved in collisions, but nobody collected enough data to show this conclusively until several years later, when studies arrived clearly showing the results: significant increases in collisions and pedestrian accidents (sometimes up to 100% increases). The misinterpretation of underpowered studies cost lives.
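To make the power problem concrete, here is a minimal simulation sketch of an underpowered before-and-after comparison. It is not taken from Reinhart’s guide: the baseline of roughly 300 accidents and the assumed 10% true increase are hypothetical numbers chosen to echo the example above.

```python
# A minimal power simulation, not from Reinhart's guide: the ~300-accident
# baseline and the 10% "true" increase are hypothetical numbers chosen to
# echo the right-turn-on-red example above.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

baseline = 300        # expected accidents before the rule change
true_increase = 1.10  # a real 10% increase after the change
n_sims = 10_000
alpha = 0.05

significant = 0
for _ in range(n_sims):
    before = rng.poisson(baseline)
    after = rng.poisson(baseline * true_increase)
    # Normal-approximation test for a difference between two Poisson counts
    z = (after - before) / np.sqrt(after + before)
    p = 2 * (1 - norm.cdf(abs(z)))
    if p < alpha:
        significant += 1

print(f"Estimated power: {significant / n_sims:.2f}")
# With power this low, most such studies will report "no significant
# difference" even though accidents really did go up.
```

Under these made-up numbers the estimated power comes out around 0.2, meaning roughly four out of five such studies would report “no significant difference” even though accidents really did increase.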

Overall, I enjoyed the guide and recommend it, especially Reinhart’s suggested actions in the conclusion:

Your task can be expressed in four simple steps:

1. Read a statistics textbook or take a good statistics course. Practice.
2. Plan your data analyses carefully and deliberately, avoiding the misconceptions and errors you have learned.
3. When you find common errors in the scientific literature – such as a simple misinterpretation of p values – hit the perpetrator over the head with your statistics textbook. It’s therapeutic.
4. Press for change in scientific education and publishing. It’s our research. Let’s not screw it up.

A rousing academic call to arms if ever there was one!

Practical Typography: You’re Doing it Wrong

A graphic designer friend sent along a link to a very handy introduction to typography, Practical Typography by Matthew Butterick. The entire guide is freely available and the site itself illustrates many of the principles laid out by the author.

Possibly the most useful part of the guide for academics is the page on research papers. Butterick recommends a few big changes to the typical double-spaced, 12-point Times New Roman* with 1″ margins we’ve come to know and love(?). Instead, Butterick suggests bigger margins (and thus shorter lines), a bit less than 1.5 line spacing, and a slightly smaller font. The amount of text on the page actually goes up a bit, but the added white space makes the whole thing more readable. Most of his changes will look familiar if you’re used to LaTeX – LaTeX’s article class uses most of these principles by default.
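For anyone writing in plain LaTeX who wants to move from the defaults toward something like Butterick’s recommendations, a preamble along these lines is one way to approximate them. This is my own illustrative sketch, not Butterick’s specification; the particular packages and values are assumptions:

```latex
% A rough approximation of Butterick's research-paper advice; the specific
% values here are my own illustrative choices, not his exact numbers.
\documentclass[11pt]{article}        % slightly smaller than the usual 12pt
\usepackage[margin=1.5in]{geometry}  % wider margins, hence shorter lines
\usepackage{setspace}
\setstretch{1.4}                     % a bit less than 1.5 spacing

\begin{document}
Body text goes here.
\end{document}
```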

I’m still working my way through the whole guide, but so far I’ve found it very useful and accessible, and with just the right amount of snark.** Also recommended to anyone trying to convince a co-author to use a single space between sentences.

* Butterick’s Times New Roman (TNR) hatred is interesting. Part of his dislike of the font comes from it being optimized for printing small characters on bad paper, as it was originally used. Modern variants are a bit thicker and thus look better when printed at the larger sizes that are typical. But part of Butterick’s dislike simply comes from TNR serving as a signal of typographic apathy. Here I disagree (though without the aid of any actual training in typography). Because TNR is both ubiquitous and otherwise inoffensive, it serves nicely as an unmarked category, suitable for when you really don’t have much need to call attention to your font choice – or better yet, when you explicitly want your font choice to go unnoticed. It’s also installed by default on every word processing device available. So, for example, while I might love for my undergrads to write in Butterick’s delightful TNR-alternative Equity, it’s much easier to make them write in TNR. Also, as Butterick elsewhere notes, documents meant to be shared for collaboration necessarily require fonts that are system fonts. TNR is both a default system font for Macs and PCs and everyone’s used to it. That said, I might consider changing my working papers to something a bit nicer in the future. Put differently, much of this advice only makes sense when you are about to share your work with someone you hope is going to consume it, rather than someone obligated to read it or actively contributing to it.

** E.g. “Many system fonts are not very good. This is less of a problem on the Mac. But some of the Windows system fonts are among the most awful on the planet. I won’t name names, but my least favorite rhymes with Barial.”

“Poor Numbers”: A Q&A with Morten Jerven on Economic Statistics in Africa

One of the most interesting parts of the history of national income statistics is how rapidly and widely they diffused across the globe. Before World War II, hardly any country had official national income data, and none routinely relied on them to make policy. In the 1950s, the United Nations codified a global standard, the System of National Accounts, and official national income data became a requirement of modern nationhood – something every country had to produce, knowledge seen as indispensable for planning purposes, development aid, even assessing UN dues. And yet, while the UNSNA was a global standard, not all of the globe was equally suited to produce numbers according to its rules. In particular, the economies of Africa did not look like the economies of Western Europe, where national income statistics were pioneered. Essential data that powered national income statistics, including censuses, income and payroll tax data, and more, simply did not exist in many poorer countries. African economies were just not as calculable as those of Western Europe (at least not in the framework of the UNSNA). Thus, producers and users of African national income statistics have long known that such data were not perfect. But how bad are these numbers? Where are the problems? Are the uncertainties uniform (like the systematic undercounting of the informal sector) or more idiosyncratic? At the very least, users of such data hoped that even if the absolute GDP levels were off, the trends were still meaningful – that we might not know exactly how much poorer one nation was than another, but we could tell which countries were growing the fastest and thus assess the impact of economic development policies.

For the past five years, economic historian Morten Jerven has been arguing that African economic statistics are truly poor; so poor that we have misled ourselves into believing that we know much more about African economies than we really do. His new book, Poor Numbers: How We Are Misled by African Development Statistics and What to Do about It, summarizes this research and presents a coherent case for skepticism, and especially for criticism of the seemingly standardized world economic databases (such as the World Bank’s Development Indicators). These supposedly comparable data leave very little trace of the messy process that produced them, and thus leave end users incapable of assessing just how bad (or, less often, good) the statistics they are working with really are. As a consequence, we are collectively capable of being surprised when countries manage to update and improve their statistical output and produce seemingly fantastic outcomes, as when Ghana’s GDP shot up by 60% after a recent revision. The blog Democracy in Africa has a more detailed, chapter by chapter review of the book here. Below is a short Q&A with Professor Jerven about his new book, including his experience interviewing government statisticians across Africa, interacting with World Bank officials, and trying to convince the development community to better understand the available data.


Review: “The Great Persuasion” by Burgin (2012)

There have been a lot of books and articles written in the past few years on the flowering of pro-market, anti-government sentiment among economists in particular, and intellectuals more generally. Examples include Teles on the conservative legal movement, Phillips-Fein on the business response to the New Deal, Mirowski and Plehwe’s edited volume The Road from Mont Pelerin, and a newer edited volume about the rise of Chicago economics, to name just a handful. In this growing field, Angus Burgin’s (2012) The Great Persuasion: Reinventing Free Markets Since the Depression stands out as a tight, well-argued, and entertaining intellectual biography of a few key players. That said, I was a little disappointed with the vagueness of the broader argument. What follows is a brief summary and some more detailed reactions.

The book is well-written and compact (~230 pages), while still covering a lot of ground. The first half of the book offers a very nice institutional history of the rise and fall of the Mont Pelerin Society (MPS) and an intellectual biography of Hayek. The treatment of a large schism within the MPS in the 1940s-1950s was fascinating, as was the discussion of the general problems of expanding a small, elite group of perhaps 40 into a meeting of several hundred, and how that changed the character and purpose of the MPS. There was also a nice discussion of the pre-WWII origins of the MPS in the “Colloque Lippmann.” Readers interested in a more detailed account of Hayek’s life and work in particular should check out Caldwell’s Hayek’s Challenge, which covers much of the same ground with more emphasis on Hayek’s 19th and 20th century precursors and less detail on the MPS as an organization.

The second half is much more novel, in exploring Milton Friedman’s trajectory over two chapters. This part was mostly new to me – I knew bits about Friedman’s past, but there’s actually been relatively little written about him, and Burgin does a nice job of tracing his intellectual evolution. In particular, the transition from Hayek to Friedman marks the moment when neoliberalism (a term neither fully embraced) went from being a moderated, updated form of liberalism (the original goal of the MPS) to being full-blown liberalism reborn (Friedman post-1940s).

Unfortunately, the book is a bit skimpy on the payoffs and theorizing. I mean, history is history, but I was expecting a bit more than “Keynes, Hayek, and Friedman were all right that ideas have big political influence in the long run,” which is basically where it ends. I was hoping for at least a bit of speculation about, say, why Hayek and Friedman in particular mattered so much. Additionally, the immediate political influence (or lack of influence!) of the MPS and Friedman was not emphasized much. There was a useful discussion of Friedman’s role as advisor to Goldwater, for example, but not too much else. So, we end up with a self-contained “history of economists” and not really much on how these ideas were consequential – if you didn’t already know the punchline (the rise of neoliberalism/market fundamentalism in the 70s-80s), the book would feel very undermotivated, and it loses a lot by not connecting the dots itself.

Another, related, issue is that Burgin spends little time on the content of Friedman and Hayek’s ideas. The main narrative is that the Chicago School Mark I (Viner, Knight, etc.) and the original MPSers were much more moderate – especially when it comes to anti-trust / market power, but also on the New Deal as a whole. This earlier group wanted to argue for capitalism but mostly argued against complete socialism. The Chicago School Mark II (Friedman, Stigler, etc.) was more radical in its libertarianism – they were on the offensive against the bits of socialism that had become entrenched rather than defensive of capitalism. Fair enough. Beyond that, though, there’s only a scanty discussion of what any of these folks actually thought. The socialist calculation debate is mentioned as important, but there’s not really a good summary of what it was (see, for example, Caldwell above, or Dale’s biography of Karl Polanyi, or Cosma Shalizi’s incredible post on the topic for good treatments). Friedman and Stigler’s early pamphlet against rent control was apparently influential, but it’s mostly discussed in terms of the rhetoric and some conflicts they had with their funders over a paragraph on inequality (and how government should do something about it directly, rather than muck about with rent control) rather than in terms of the argument itself. And so on. So, again, a history of economists, not a history of economic ideas, nor of the direct impact of those ideas.

Finally, I found it really fascinating how much the 1930s-era Chicago school was worried about monopolies and big business, and how, post-1950s, the whole movement just abandons that line of thought. Friedman himself undergoes a bit of a conversion, and I wish Burgin had gone into more detail about it – Burgin himself only notes the changing ideas on monopoly as one example, but I think it might be more important than that. The 1930s folks believed market power existed and was a problem, and one that called for government intervention (most stridently, UChicago’s Henry Simons in A Positive Program for Laissez-Faire). The Law & Econ movement (Aaron Director especially) won over Friedman and Stigler in the 1940s-1950s, who then all agreed that market power is not an issue, and that killed the legitimacy of the Galbraithian countervailing power argument among the neoliberal economists. We can (perhaps) imagine a counterfactual history where Hayek’s successor was not Friedman, but someone more in line with Simons, Viner, and Knight, who eclectically championed union busting and monopoly busting / pro-competitive regulation instead of (or in addition to) signature Friedman proposals like abolishing the FDA or instituting a negative income tax. It would be interesting to think what the consequences would have been. And that counterfactual exercise would have necessarily entailed a deeper reflection on the significance of Hayek and Friedman, and their networks.

Overall, The Great Persuasion makes a great starting point for learning about the Mont Pelerin Society, Hayek, Friedman, and trends in free market economic thought in the mid-20th century. If you go into the book looking for solid intellectual history, you will be rewarded. If you go in expecting deep insights into the political role of ideas (economic or otherwise), you will be a bit disappointed.

Review: A Tale of Two Cultures by Goertz and Mahoney (2012)

Attention conservation notice: This post is a 2600 word book review about a recent text on quantitative and qualitative methodology in sociology and political science. If you aren’t in one of those fields, you probably won’t be interested in the post.

The idea of a schism between the “Quants” and the “Quals” is a common trope in sociology. According to the quals, the quants are naive positivists, running regression after regression, producing gobbledegook that masquerades as science. According to the quants, the quals are crazed post-modernists, rejecting the very possibility of reproducible or generalizable knowledge with their fetishistic attachment to the peculiarities of particular cases or, at best, small-N comparisons. Everyone knows that these stereotypes are misleading, at best, but they lurk beneath a lot of high-minded and thoughtful debates about the role of statistics courses in the graduate curriculum, or the ability of small-N research to uncover causal relationships. And they recur especially in discussions of the new (well, not that new, see Figure 1) “big thing”: mixed-methods research (by which we mean mixed-data research, which relies on both large N statistical analysis and small-N interview samples or historical analysis of a few cases). Which method should serve which? How should we all get along?

[Figure 1. Source: JStor Data For Research, searching “mixed methods” in all Sociology journals.]

Into this fierce turf war, Gary Goertz and James Mahoney have published a new and incredibly illuminating salvo, A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences (2012, Princeton University Press). Goertz, a political scientist, and Mahoney, a sociologist, attempt to make sense of the different cultures of research in these two camps without attempting to apply the criteria of one to the other. In other words, the goal is to illuminate difference and similarity rather than judge either approach (or, really, affiliated collection of approaches) as deficient by a universal standard. In this review, I’m going to try to lay out what I found most useful from G&M’s analysis and the one or two points I found frustrating. I’m also going to try to historicize the book a bit in a long history of debates within the social sciences about what exactly we mean by social science, and how invocations of the natural sciences and especially mathematics have a privileged role in those debates.

The first, and most important, move that G&M make is to place a major branch of social science research – interpretive approaches – outside the scope of their discussion. G&M are interested in quantitative and qualitative approaches to causal explanation. Thus, they want to leave aside the parts of the social sciences that have close connections to the humanities. This move is pulled off brilliantly, in a way that lets G&M move on to their substantive argument without dismissing the value of interpretive work, by recognizing that A Tale of Two Cultures is itself an example of interpretive social science:

Our decision not to treat interpretive approaches in this book should not be taken as evidence that we see no place for these approaches in the social sciences. In fact, our two cultures argument is, broadly speaking, an exercise in description and interpretation. We seek to elucidate the practices and associated meanings of two relatively coherent cultures of research. Thus, while interpretative analysts will not find their tradition of research represented in the qualitative culture that we describe, they nonetheless will find many of the tools of their tradition put to use in our analysis. (G&M 5)

Although other “interpretive” social scientists may well disagree, I found this footnote to be satisfying as a shield against the argument that G&M devalue interpretive research. After all, they are putting their shiny new book squarely into the interpretive tradition! That being said, one of my lingering concerns with the book (to which I will return) is the easy division it imagines between qualitative-research-as-causal-explanation and qualitative-research-as-interpretive-analysis. While I completely agree that these two goals are distinct, and have their origins in very different theoretical endeavors, I think they are more often muddled in empirical work – which is why quantitative researchers may sometimes collapse together small-N comparative analysis in the style of Skocpol and Mill with post-structuralist-inspired cultural analysis. I think G&M would like to posit this sharp division between the two strands of qualitative research – which, drawing on Luker, might perhaps most usefully be thought of as “canonical” and “non-canonical” qualitative research – because they want to highlight the pure logic of causal explanation in canonical qualitative research.

On to the meat of the argument. G&M argue that the two cultures of quantitative and (causal) qualitative research differ in how they understand causality, how they use mathematics, how they privilege within-case vs. between-case variation, how they generate counterfactuals, and more. G&M argue, perhaps counter to our expectations, that both cultures have answers to each of these questions, and that the answers are reasonably coherent within each culture, but create tensions when researchers attempt to evaluate each other’s research: we mean different things, we emphasize different sorts of variation, and so on. Each of these differences is captured in a succinct chapter that lays out with incredible clarity the basic choices made by each culture, and how these choices aggregate up to very different models of research.

Perhaps the most counterintuitive, but arguably most rhetorically important, claim is that both quant and qual research are tightly linked to mathematics. For quant research, the connection is obvious: quantitative research relies heavily on probability and statistics. Causal explanation consists of statistically identifying the average effect of a treatment. For qual research, the claim is much more controversial. Rather than relying on statistics, G&M assert that qualitative research relies on logic and set theory, even if this reliance is often implicit rather than formal. G&M argue that at the core of explanation in the qualitative culture are the set-theoretic/logical criteria of necessary and sufficient causes. Combinations of necessary and sufficient conditions constitute causal explanations. This search for non-trivial necessary and sufficient conditions for the appearance of an outcome shapes the choices made in the qualitative culture, just as the search for significant statistical variation shapes quantitative research. G&M include a brief review of basic logic, and a quick overview of the fuzzy-set analysis championed by Charles Ragin. I had little prior experience with fuzzy sets (although plenty with formal logic), and I found this chapter extremely compelling and provocative. Qualitative social science works much more often with the notion of partial membership – some countries are not quite democracies, while others are completely democracies, and others are completely not democracies. This fuzzy-set approach highlights the non-linearities inherent in partial membership, as contrasted with quantitative approaches that would tend to treat “degree of democracy” as a smooth variable.
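As a toy illustration of that set-theoretic logic (my own sketch with made-up cases, not an example from the book), a necessary-condition claim only needs to be checked against cases where the outcome actually occurred, while a sufficiency claim is checked against cases where the condition is present:

```python
# Toy illustration of necessary/sufficient conditions, with made-up cases;
# this is my own sketch of the logic G&M describe, not an example from the book.
cases = [
    # (case, condition_present, outcome_present)
    ("A", True,  True),
    ("B", True,  True),
    ("C", True,  False),
    ("D", False, False),
]

# Necessary condition: whenever the outcome occurs, the condition is present.
# (Only cases with outcome_present=True are informative here.)
necessary = all(cond for _, cond, out in cases if out)

# Sufficient condition: whenever the condition is present, the outcome occurs.
sufficient = all(out for _, cond, out in cases if cond)

print(f"condition necessary for outcome?  {necessary}")   # True in this toy data
print(f"condition sufficient for outcome? {sufficient}")  # False (case C)
```

That asymmetry is also why, as noted in the list of qualitative traits below, sampling on the dependent variable can be a virtue rather than a vice when testing a necessary condition.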

This attempt to link qualitative research to set theory and logic is at once both compelling, and deeply reminiscent of a long lineage of debates about the role of math in the social sciences. In the history of economics, the 1910s-1940s saw a massive struggle within American economics to define the course of the field. To gloss it quickly, on one side, the institutional economists focused on case studies, legal arrangements, charting the business cycle, and such, and were derided as merely descriptive. On the other, the neoclassicals built increasingly complicated formalisms to try to understand the underlying dynamics of the economic system (although this really took off in earnest in the 1930s, with the emergence of the Econometrics Society). As Yuval Yonay (1998) and Malcolm Rutherford (2011) have shown, both the institutionalists and the neoclassicals argued that their work was more scientific, and both linked their prestige and science-y-ness to their use of math. The institutionalists pointed to their extensive work producing quantitative data about the real world (time series of production, employment, and so on, including what eventually became the modern national income accounts). On the other side, the neoclassicals pointed to their use of formal logic, proofs, and especially calculus (important for marginal analysis, which lay at the root of neoclassicism). The details are not important for the comparison, but what’s interesting is how important the claim to mathematics was for both sides. Similarly, G&M put a lot of effort into arguing that qualitative research has, at its core, a different branch of mathematics, and one that is arguably the most prestigious inside the field of mathematics itself (at least, at some points in its history).

Instead of exhaustively charting each of the differences between the two cultures identified by G&M, I’m just going to list a few of my favorites. In general, the expositions of these differences were very clear, and chapters could easily be assigned as part of a graduate – or even potentially advanced undergraduate – course on research methods. So, quantitative research:

  • Relies on probability and statistics.
  • Defines causal explanation in terms of identifying average treatment effects.
  • Asserts “No strong causal inference without an experiment” (G&M 102) or at least without “manipulation” (cf. Holland 1986).
  • Privileges cross-case analysis.
  • Offers symmetrical explanations: Explaining a high Y is the exact flip of explaining a low Y.
  • Understands mechanisms as linking treatments and outcomes, but sees mechanisms as “adding weight” or “making a contribution,” never explaining a specific case.
  • Plausible counterfactuals are those that are within the scope of the observed variation in the data.
  • All variation in the data is meaningful, and transformations of the data help make the best use of that variation.
  • Relies on random selection/sampling, avoiding excessive reliance on extreme cases.

    Some of these will only make sense, or begin to seem controversial, in contrast with qualitative research. Also, I should note that the Holland position on causation – the idea of no causation without manipulation – is certainly characteristic of the literature on causal inference, but it’s not entirely clear that all (or even most) large N studies in sociology hew to its dictums. So, how does qualitative research compare? Qualitative research:

  • Relies on logic and set theory (albeit implicitly).
  • Defines causal explanation in terms of identifying the presence of sufficient conditions or the absence of necessary conditions.
  • Asserts “No strong causal inference without process tracing.” (G&M 103)
  • Privileges within-case analysis (at least in comparison with quantitative research), and even goes so far as to attempt to explain particular outcomes.
  • Offers asymmetrical explanations: Explanations for Y=1 may be different from explanations for Y=0 (implicitly leaving untheorized some logically possible, but unobserved, combinations).
  • Mechanisms link conditions to outcomes, and are directly visible through within-case analysis via process tracing.
  • Plausible counterfactuals are generated according to the “minimum rewrite rule” (G&M 119).
  • Not all variation is meaningful, transformations of data try to map data onto concepts.
  • Selection is based on conditions tested, e.g. in order to test a necessary condition, you need only sample where Y = 1, and thus sampling on the dependent variable is a virtue, not a vice.

    Rather than explaining all of these differences – which is, after all, the point of the book – I will simply pick out one I found particularly revelatory: the treatment of variation. G&M note that quantitative research is usually interested in the middle of the data. Extreme points are labeled outliers, and decried as mucking about with the model. Their extreme-ness suggests that atypical forces are at play, and thus these points may disturb researchers’ efforts to identify the average effect.

For qualitative researchers, on the other hand, G&M identify what they call “The Principle of Unimportant Variation”: “There are regions in the data that have the same semantic meaning.” (G&M 144) Here we are returned to discussions of fuzzy sets. The problem for conceptual analysis of a small number of cases is that you need to have good, clear definitions that sort the data well into just a few discrete bins. But every conceptual system will have liminal cases. G&M use the idea of a “developed country.” Sweden, the USA, and France are all developed countries. Guatemala, Haiti, and Nicaragua are all not developed. In between, we can imagine a lot of cases that are “kind of” developed – parts of Eastern Europe, Russia, etc. If we use GDP/capita as our primary measure of “development,” then the map between the variable and the concept is highly non-linear. G&M use a few nice, simple charts and figures to demonstrate this point (G&M 145). Every country with GDP/capita over, say, $20,000/person is considered fully developed. Every country with GDP/capita less than $2,500 is considered fully undeveloped. Everything in between is kind of muddy, to varying degrees. Of course, a quantitative model could accommodate these sorts of conceptual mass points, but it’s very much against the norms of the culture. Instead, we’d tend to load GDP/capita (or maybe log GDP/capita) into a regression equation, which implicitly assumes that all variation is meaningful, and that an extra $1,000 is equally meaningful across the spectrum (or that a change of 10% is equally meaningful, in the log context). But G&M assert that for qualitative research, this is not the case: Sweden and the USA are both fully developed countries. And explanations that rely on development as a necessary or sufficient condition (e.g. “developed countries never go to war against each other”) don’t care in the slightest whether GDP/capita is $25,000 or $40,000.
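A crude sketch of that non-linear mapping from variable to concept (my own illustration; the $2,500 and $20,000 anchors just echo the rough thresholds above) might look like this:

```python
# A crude fuzzy-set membership function for "developed country": my own
# illustration of the non-linear concept mapping G&M describe; the $2,500
# and $20,000 anchors echo the rough thresholds mentioned above.
def developed_membership(gdp_per_capita: float) -> float:
    """Return degree of membership in the set 'developed country', in [0, 1]."""
    low, high = 2_500, 20_000
    if gdp_per_capita <= low:
        return 0.0                      # fully out of the set
    if gdp_per_capita >= high:
        return 1.0                      # fully in the set
    # In between: membership rises with income (linearly, for simplicity)
    return (gdp_per_capita - low) / (high - low)

for gdp in (1_000, 10_000, 25_000, 40_000):
    print(f"${gdp:>6,}: membership = {developed_membership(gdp):.2f}")
# Note that $25,000 and $40,000 both map to 1.0: past the threshold,
# further variation in GDP/capita is "unimportant" for the concept.
```

Contrast that with dropping GDP/capita straight into a regression, where the move from $25,000 to $40,000 is treated as just as meaningful as any other $15,000 difference.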

    I’ll end with a couple small criticisms. First, the text makes use of a very small number of datasets for examples, relying quite heavily on the Polity dataset. This adds clarity, especially for a reader like myself unfamiliar with that dataset, as we have time to get acquainted with it. That said, it lacks variety, and exhibits a really convenient amount of medium-N character. And many of the other examples are variables coded at the country level. Again, this makes sense, as big-N statistical analysis and small-N comparative analysis often get together and argue with each other exactly at the medium-N level, like OECD countries, or UN members. For teaching purposes, especially in sociology, it might be useful to supplement the text with a few more examples that vary the unit of analysis to see how these concepts apply elsewhere – to organizational or individual level variables, for example. Even if the problematic clash of cultures is less frequently encountered at those levels of analysis, it might be a very useful exercise to check how the concepts and culture clash play out on each team’s “home turf,” so to speak. What would it look like to take a set theoretic approach to claims about stratification and inequality? How might we think about treatment effects in the context of world-systems analysis? Etc.

    Second, as signaled above, I think Goertz and Mahoney are a bit too clean in their distinction between qualitative research interested in causal explanation, and qualitative research that is traditionally labeled interpretive. This clean division is also part of their normative project, I think, as G&M want causal qualitative research to sharpen its vocabulary and more explicitly embrace the sorts of set theory that they have long advocated. Perhaps the easiest way to make this case is to claim that qualitative research is already doing so, but could be made even better by formalizing the logic. But, this move might require abandoning the interpretivist impulse that also undergirds a lot of small-N research. Also, arguably, (and Reed [2011], among others, argues just this), some interpretive research is interested in causal explanation, albeit with a very different working notion of causation. That said, such interpretive causal research could badly use some methodological clarity of just the sort Goertz and Mahoney have provided for more canonical qualitative and quantitative causal explanation.

    Were I teaching a graduate methods course, I would be delighted to assign big chunks of A Tale of Two Cultures, alongside Luker’s Salsa Dancing into the Social Sciences, as excellent interpretive accounts of the sometimes cold, sometimes hot, methodenstreit of the contemporary social sciences. And practicing researchers probably have a lot to learn from it as well, especially as they try to make sense of their colleagues’ frustratingly similar, yet distinct, methodological commitments.

    Qualitative Coding as Ritual: A Review of Biernacki’s “Reinventing Evidence in Social Inquiry”

I just finished reading Richard Biernacki’s (2012) Reinventing Evidence in Social Inquiry: Decoding Facts and Variables. The book argues that cultural sociology has erred in attempting to merge humanistic and scientistic modes of inquiry. This urge is manifested in the qualitative or interpretive coding of meaning in texts. Biernacki argues that coding should be understood as a ritual practice, one that decontextualizes meaning in order to selectively recontextualize it, and thus reinforces pre-existing ideas or theories with the appearance of empirical foundations. (151) The central chapters of the book are three case studies in which Biernacki reanalyzes the sources underlying three prominent works of cultural sociology based on coding (Griswold 1987, Bearman and Stovel 2000, and Evans 2002).* Biernacki’s work was the subject of a multi-year controversy before its publication, as covered by the San Diego Union-Tribune here.**

Biernacki begins his book with an excellent description of the tensions between the scientistic and humanistic approaches. The book names five features of the “scientific” interpretation of texts that are incompatible with humanist approaches:

    (a) quantitative or abstractive generalizing starts by defining a clear target population about which to reason or generalize; (b) the relevant variables are self-contained once the research design is formalized; (c) correlations are between abstract factors in an open mathematical space bracketed for the sake of the procedure from unpredicted attributions of meaning; (d) there is a standard causal environment partially separable from the outside, unmeasured environment that makes cases comparable and that undergirds interpretable results; (e) finally, elements of the examined universe, including therefore separable text elements, each comprise events or features with potentially independent and potentially universal causes. Each of these features of inquiry is invalid and reversed for more purely humanist text interpretation… (8)

The critique of the pre-determined sample is particularly clear and relevant, and reminds me a bit of Kristin Luker’s discussion of why non-canonical social science research must rely on different techniques to justify its objects of study (“data outcroppings” rather than representative samples; see my review of Salsa Dancing into the Social Sciences here). Biernacki shows how the samples chosen by the three authors he studies shape their findings. For example, Evans’ sample of bioethics texts about human genetic engineering includes many broader works in philosophy and theology from before 1973, but then changes its sampling strategy for later periods (as the main subject of the inquiry becomes more conventionally codified), which ends up excluding many broader works. According to Biernacki, this sampling procedure artificially creates a trend: “directing the probe away from philosophical references toward more specific, technologically sensitive keywords (such as “gene therapy”) would be mirrored self-fulfillingly in a narrowing of content over time. It would create the ‘observed’ trend away from broad spiritual concerns toward formally rational application of technology.” (63) In a slightly different context, Biernacki critiques Wendy Griswold’s sample of book reviews, which includes everything from small snippets in newspapers to book-length treatises. (119; 125-126, see also biernackireviews.com/) This heterogeneity complicates Griswold’s findings about the relative presence of certain topics in reviews from different countries, as the reviews from the West Indies were much longer than those from the UK, and intended for a very different audience, and thus more likely to mention a diverse array of topics. (113) And so on.

    Biernacki’s prose is dense, but some of the takeaway messages are quite clear, and damning. Here’s a bit of summary from the conclusion:

    Add up the quirky samples, sourceless observations, changed numbers, problematic classifying, misattributions, substituted documents, and violations of the sine qua non of sharable or replicable data, and it seems that in some respect, each of the demonstration studies lack referential ties to the outside world of evidence. (127)

    In the infelicitously blurred genre of coding, the term “qualitative” can designate readings so soft as to appear nonsubsistent. Since it is challenging to match codes individually to their source points, we retain only boilerplate promises that a thicket of qualitative codes corresponds intelligibly to anything. (128)

    Each study was narrated as a tale of discovery, yet each primary finding was guaranteed a priori. (128)

    While I think Biernacki’s claims about the problems of sampling are his clearest and most compelling, the core of Biernacki’s critique of the ritual character of coding practices in cultural sociology is a deeper disagreement about the nature of “meaning.” (For more on this topic, see the articles in Reed and Alexander 2009, including the debate between Biernacki and Evans that prefigures this book.) Biernacki summarizes this difference:

    The premise of coding is that meanings are entities about which there can be facts. But we all know that novel questions and contexts elicit fresh meanings from sources, which is enough to intimate that meaning is neither an encapsulated thing to be found nor a constructed fact of the matter. It is categorically absurd to treat a coding datum as a discrete observation of meaning in an object-text. My preference is to think of “meaning” as the puzzle we try to grasp when our honed concepts of what is going on collide with the words and usages of the agents we study. Describing meaning effectively requires us to exhibit that fraught interchange between cultures in its original: the primary sources displayed in contrast to the researcher’s typifying of them. (131)

Biernacki wraps the entire critique up with the idea of ritual, a claim he takes very seriously. I think this discussion goes a bit far at times and distracts from the methodological critique – in other words, Biernacki jumps from the critique itself to a diagnosis of why cultural sociology fell down this rabbit hole, but that diagnosis can get in the way of the clarity of the critique. In part, I think this reflects Biernacki’s aim – which is not to produce better qualitative coding, but to undermine the entire enterprise. Biernacki’s solution to the various problems encountered is not a handful of fixes (more careful sampling, say), but a return to Weberian ideal types, and a divorce between humanistic and self-consciously scientific approaches in cultural sociology. This approach, more true to the humanistic tradition, will also better satisfy the scientistic aims of the field:

    This volume has shown that humanist inquiry on its own better satisfies the “hard” science criteria of transparency, of retesting the validity of interpretations, of extrapolating from mechanisms, of appraising the scope of interpretations, of recognizing destabilizing anomalies, of displaying how we decide to “take” a case as meaning something, of forcing revision in interpretive decisions, of acknowledging the dilemmas of sampling, and of separating the evidence from the effects of instrumentation. (151)

    Strangely, I felt that the book was a bit too short at 155pp – a comment I’m not sure I’ve ever made before of an academic monograph! But that brevity is a virtue in that the book is readable in a single sitting, despite the density of the argument. I highly recommend it, and hope it becomes a touchstone for methodological debates in the coming years.

    * Or attempts a reanalysis at least, as in each case the exact corpus used was very difficult to reproduce – none of the original authors could produce an accurate list of sources analyzed! Just one of the many common issues raised in reanalysis that cuts across, to some extent, the quant/qual divide. But in qualitative research, thus far much less attention has been paid to these issues, I think.

** At one point, the Dean of Social Sciences at UCSD (Biernacki’s home institution – but also that of one of the authors he criticizes) ordered Biernacki to cease working on his book: “[Dean of the Social Sciences] Elman wrote Biernacki a letter ordering him not to publish his work or discuss it at professional meetings. Doing so, Elman wrote, could result in ‘written censure, reduction in salary, demotion, suspension or dismissal.’” The Dean argued that the book could “damage the reputation” of Biernacki’s colleague and thus constituted harassment. All of this speaks to Gelman’s concerns (see also OrgTheory) that exposing fraud or other research malfeasance is a thankless task – here Biernacki engaged in a sophisticated theoretical critique and empirical reexamination of his colleague’s work, and in thanks he received threats and a gag order from the administration! Perhaps a claim of outright fraud would have been easier to sustain, but certainly the attempt to reproduce a colleague’s findings was not received as a wholesome part of the scientific enterprise.

    Freeman Dyson QOTD: On Information, Science and Wikipedia

    The New York Review of Books has an interesting piece by famed physicist/mathematician Freeman Dyson. Dyson reviews a new book on the history of information and information theory by James Gleick, The Information: A History, a Theory, a Flood.

Dyson’s review discusses the history of the definition of information, the theoretical mathematics injected into discussions of information in the 20th century, and the modern problem of info-glut.* Dyson jumps off from Gleick to discuss Wikipedia and how it works, and in particular, how Wikipedia is a better metaphor for science than the traditional accumulation-of-true-facts story you get in K-12 education. I rather liked Dyson’s description of both Wikipedia and science:

    Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate….

    Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.

    The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.

    Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.

I think the next time I teach a class, I will point my students to this essay when explaining to them that, of course, they should look at Wikipedia, but that they aren’t allowed to rely on it as an authoritative source. They should follow its links, and look to see who said what, why they said it, and what evidence they used to make their claims. Because that’s what science is, and that’s how science works – we collect mysteries**, and argue forever and ever about what they mean and where they come from. I’d be curious to know what y’all think about Dyson, and what differences you see between Dyson’s gloss of what science is and how it works and the vision presented in Popper, Kuhn, or Latour.

    I highly recommend the whole essay, and I look forward to checking out the book.

    * And I would be remiss not to mention that Dyson cites Borges’ “The Library of Babel”, though I’m not sure I quite agree with his reading. The Library of Babel shows that information is not simply in the possession of statements, texts (facts), but rather in the structure that maps between them (and is missing in that ill-fated library). The Library of Babel is a misnomer – a collection of every possible book is not a library, but rather an unordered chaos in the guise of shelves and books. It is the opposite of a library. This is a universe with too little information, not too much!

    ** In Representing and Intervening, Ian Hacking argues, persuasively I think, that the natural sciences are defined more by their ability to create new phenomena than by their access to ‘truth’. New phenomena, once created, never go away, even though our interpretations of them may radically change as we fit them into new theories, new paradigms, etc. What Dyson calls mysteries we might just as well call phenomena, I think. But mysteries is a bit more poetic!

    Thinking Sociologically

    Sarah of Textual Relations has an excellent, short review of Bauman and May’s Thinking Sociologically:

    My favourite part of the book, though, is that it actually follows through on the promise of the title. Like The Sociological Imagination before it, Thinking Sociologically focuses on what it means to approach questions in a sociological way. Rather than bounding the discipline by providing a list of “sociological” topics, Bauman and May spend the introduction exploring the idea that what makes sociology different from the other social sciences is not its subject matter, but rather its way of thinking about the world.

    Check it out!