David Card’s Woytinsky Lecture on “Empirical Micro”

Last week, economist David Card gave the prestigious Woytinsky Lecture at the University of Michigan. Card is well known for his somewhat unorthodox empirical work, which relies heavily on so-called “natural experiments” (although this approach has become much more orthodox since the start of Card’s career). For example, Card used data from fast food restaurants on the border between New Jersey and Pennsylvania to show that a modest increase in the minimum wage in one state did not decrease employment – in fact, employment in these restaurants actually increased in the state whose minimum wage went up (see Card and Krueger’s book for details). This empirical finding – that an increase in the minimum wage did not decrease employment – flew in the face of very well-established orthodox microeconomic theory. Card’s work did not rely on a newer theoretical model of labor search or decision-making, but rather on a novel quasi-experimental framework and data collection designed to pin down a causal effect. In this post, I’ll try to summarize Card’s talk and add a bit of analysis about how this relates to differences between sociology and economics.

Card’s lecture, titled “Model-Based or Design-Based? Competing Approaches in ‘Empirical Micro’”, attempted to lay out the history of the competing approaches to microeconomics and their relative virtues. Card’s own approach received the label “design-based”, reflecting the importance of carefully designing an actual experiment, or finding some sort of natural experiment, to collect data that plausibly pin down a causal effect. This approach has real limits: in particular, it is often difficult to generalize from it or to pose counterfactuals. Card showed that a particular minimum wage increase did not seem to reduce employment in a low-wage sector in a particular state, but it’s hard to know what that finding means for (say) a larger proposed minimum wage increase in another state, or worse, another country. Card argued that the design-based approach is particularly useful for testing theories to see if they are plausible: dominant microeconomic models often make strong predictions that do not hold up, and testing these predictions can help us find better models.

The other major approach Card lays out is called “model-based.” Here, the goal is to specify a complete framework that relates all the relevant variables – a complete “data generating process” (DGP). Actually producing estimates for all of the parameters in this DGP is often impossible without reasonably strong theoretical assumptions – things like rationality, the shape of preference functions, restrictions on how long certain effects take to appear, etc. Once you make sufficient restrictions to estimate the full model, however, you can then use it for more complicated counterfactual analyses, answering questions like: if we change the minimum wage by X, how much will employment increase or decrease? If we change it by X+1, how much will employment change? And so on. This sort of analysis is precisely what policymakers want to know, especially in fields like antitrust, where the question is often: if we allow this merger, how much will consumers gain (from economies of scale and increased efficiency) or lose (from increased market/monopoly power)? The cost of such complete models is precisely those strong assumptions – assumptions often proven invalid (or at least incomplete) by design-based studies of particular interventions.
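To make the contrast concrete, here is a toy sketch (my own, not Card’s) of the model-based workflow: posit a simple DGP relating employment to the minimum wage, estimate its parameters from data, and then use the fitted model to answer counterfactual questions about wage levels never actually observed. The linear functional form, the numbers, and the variable names are all hypothetical illustrations, nothing more.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical DGP (an assumption of this sketch, not anything from the
# lecture): employment depends linearly on the minimum wage, plus noise.
n = 200
min_wage = rng.uniform(5.0, 10.0, size=n)
employment = 100.0 - 2.0 * min_wage + rng.normal(0.0, 1.0, size=n)

# "Model-based" step: estimate the DGP's parameters by least squares.
X = np.column_stack([np.ones(n), min_wage])
(intercept, slope), *_ = np.linalg.lstsq(X, employment, rcond=None)

# Counterfactual step: with the full model in hand, we can ask what
# employment would be at any wage level, observed or not.
def predicted_employment(wage):
    return intercept + slope * wage

effect_of_one_dollar = predicted_employment(8.0) - predicted_employment(7.0)
print(f"estimated slope: {slope:.2f}")
print(f"predicted effect of a $1 increase: {effect_of_one_dollar:.2f}")
```

The catch, of course, is that the counterfactual answers are only as good as the assumed linear DGP. A design-based study of one particular wage change cannot produce these extrapolations, but it also does not lean on the linearity assumption.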

Card’s lecture did a wonderful job of laying out the history of these approaches and characterizing the debates between their proponents. One of the most useful aspects of his talk was his discussion of “working models.” Card argued that the design-based and model-based approaches were not quite as far apart as they first seemed. Arguments based on carefully designed experiments or quasi-experiments still rely on implicit underlying models to determine the relevant units of analysis, time frames, etc. As Card put it, “All economic papers – all social science papers – have a working model.” These working models can often be defended in terms of some well-specified formal model, but rather than taking the time and energy to specify every bit of the complete model, the argument focuses on the data and statistical estimates. For example, a paper on changes in the minimum wage might assume that three years’ time is enough for all of the effects to work their way through the system, and that the change in the minimum wage was exogenous. These assumptions would all be made upfront in narrative form, and readers can choose at the outset whether to accept the working model and suspend their disbelief for a moment, or to challenge it on one or more points. Card noted that “deep questions” in seminars are often those that convincingly challenge some previously implicit aspect of the working model by comparing it to another working model that is just a bit different (e.g. small differences in your assumptions about the timing of certain changes), while “really wacky” questions are those that assume a radically different working model (e.g. dramatic shifts in your assumptions about individual rationality).

Card defended the implicit working models of much design-based empirical work on the grounds that all models are false. For example, almost any exogenous variable in one model is endogenous in another. Your model may assume that preferences are exogenous while mine asks where those preferences come from. Even worse, “political economy” questions really shake up what counts as endogenous vs. exogenous – the minimum wage change in Card’s work is assumed exogenous to the employment decisions made by fast food firms, but what if those firms had been lobbying for (or against) such a change, and behaving accordingly? Exact specifications of exogeneity and endogeneity become very hard to maintain when you start taking into account these deep and subtle connections between everything and everything else.

Card’s talk had a lot of resonance with a long line of work in the philosophy of science. While Card spoke of “working models,” I think the more general term would be “background assumptions.” (Zammito does a nice job of tracing ideas about background assumptions in the history of philosophy of science and science studies here.) Card’s take on the topic was very tightly connected to an economist’s understanding of a model: a precise mathematical specification of preferences and behaviors, with assumptions about who is maximizing what with what information available, etc. But I think similar principles apply in other fields.

For example, at one point Card joked that some of the model-based microeconomists were not sure if the working-models approach of the design-based camp was economics at all: “Are these models ok, or should these people be in the sociology department?” Discipline-bashing aside, the question is a pretty reasonable one and offers some nice vocabulary for talking about differences between (especially quantitative, survey-based) sociology and design-based empirical microeconomics. In many ways, the two are very similar – we use the same statistical techniques, the same datasets, and sometimes even the same dependent variables. The big difference, I think, lies in the space of permissible working models and how we are called on to defend those working models. My guess is that the range of permissible models in sociology is quite large – formal modeling is a very small enterprise in sociology, with little clout discipline-wide. Thus, relatively little theory – formal and mathematical, or more narrative/holistic – is needed to motivate an interesting finding. On the other hand, we have little apparatus for checking whether one set of behaviors uncovered is consistent with another, or for generalizing from a set of individual behaviors to an aggregate outcome. In econ, by contrast, formal modeling is very advanced, but also relatively narrow. So there is a lot more possibility for design-based studies to influence the formal modeling community by testing the plausibility of its assumptions and either forcing certain reconsiderations or validating a new approach. But there is also a way in which empirical micro is handcuffed to a relatively narrow set of questions and assumptions – even when its practitioners take the liberty of relaxing the formal specifications and using an implicit working model.

At the end of his talk, Card laid out a tripartite agenda for empirical microeconomics. He argued that economists should:

  1. Test the basic models in use through design-based studies.
  2. Fit the best available model using model-based approaches (and then use these best available models to do, e.g., welfare analysis, as called upon by the state for policy analysis).
  3. Do descriptive work to point to new models.

This last call was most interesting to me, as similar calls have issued forth in sociology (e.g. Abbott’s calls to revalue descriptive work). Unfortunately, Card did not much elaborate on what he meant by descriptive work, nor exactly how economists should go about doing and publishing it. I hope Card writes more on the topic soon.

    Also, I know some of my readers attended the lecture as well – please comment with your own analysis, or if you think I missed or misstated any important points.

    5 Comments

    1. afinetheorem / March 31, 2012

      Some interesting background for you here, Dan:

      David Card was notorious for being quite anti-structural, in that he (historically) preferred very simple models or none at all in his empirical work. He has, I’m told by his students, completely shifted on this point, and is now much more interested in the intersection of model and data. This puts him at odds with a branch of economists who are solely interested in experiments, natural or otherwise.

      To give an example, consider the famous Card-Krueger minimum wage paper. The empirical finding – that a minimum wage rise along a state border led to *no change* in employment in that state – is to me totally uninteresting, in that it may simply be random variation in data. But there is a very nice theoretical explanation: search models of labor demand, such as the ones that Dale Mortensen won his Nobel for a couple years back. In those models, an increase in the minimum wage increases your labor costs, but also increases the cost of replacing your current workers (because you need to find them, which takes time). This makes you less likely to fire your current workers. Therefore, the model suggests that minimum wage changes in one state will lead to no change (or statistically hard-to-find change) in unemployment, but will also decrease the match (firing and hiring) rate in the same state. We have overwhelming data now that this is precisely what happens. In this sense, the search model of labor is “more useful” than the complete labor markets model (or regressions that naively assume precisely such a world). That is, the theory is what is convincing, not the empirics alone. I think David – though I don’t know him – would agree with this assessment.

      • Thanks! I knew he was known for being a standard bearer of the design-based approach (in his terms), but I didn’t know this marked such a strong shift.

    2. Michael Bishop / April 1, 2012

      Nice post 🙂

    3. MrStructural / July 13, 2013

      Although I have little patience for protracted discussions of “what someone said” or “what someone meant,” since ultimately they don’t matter, I concede that it drives me nuts when I read (as I did above) that “David Card was notorious for being quite anti-structural, in that he (historically) preferred very simple models or none at all in his empirical work.” Card has always (historically) been “pro-structural.” In fact, it is hard for me to remember many of his papers that aren’t structural. From the beginning – consider Ashenfelter & Card (REStud 1982) and Abowd and Card (Ecta 1989, AER 1987) – to years just before this post, such as Card, Chetty, and Weber (QJE 2007). The latter is about as structural as it gets. Even the Pennsylvania vs. New Jersey minimum wage study started out as a structural (albeit simple compared to most of his work) model (compare the IRS working paper to the AER paper), or just look at Card and Krueger’s book, which describes the structural “dynamic monopsony” framework and, unfortunately, finds it wanting in some respects. In looking over his IDEAS page I noticed that RECENTLY he has published some non-structural work (David Card & Stefano DellaVigna, AER 2013) on the journal submission process, but this paper is more the exception than the rule. Even another exception like Card & DiNardo (JOLE, 2002) is essentially an illustration of the absence of minimally adequate structural models of Skill Biased Tech Change, not a protestation against “structure.” If I had to hazard a guess, I would say that about 95% of his more than 300 papers (according to IDEAS) are structural models. Where it might be said that Card differs from some others engaged in “structural” modeling, it is that his work shows a concern for what would be a minimal condition for a structural model, namely that it fits the actual data in hand.
Why one would want to forecast or perform “counterfactual analysis” with a model that can’t even “predict” the data used in estimation is a mystery to me, although it is common practice.
