Last week, economist David Card gave the prestigious Woytinsky Lecture at the University of Michigan. Card is well-known for his somewhat unorthodox empirical work, which relies heavily on so-called “natural experiments” (although this approach has become much more orthodox since the start of Card’s career). For example, Card used data from fast food restaurants on the border between New Jersey and Pennsylvania to show that a modest increase in the minimum wage in one state did not decrease employment – in fact, employment in these restaurants actually increased in the state whose minimum wage went up (see Card and Krueger’s book for details). This empirical finding – that an increase in the minimum wage did not decrease employment – flew in the face of very well-established orthodox microeconomic theory. Card’s work did not rely on a newer theoretical model of labor search or decision-making, but rather on a novel quasi-experimental framework and data collection to attempt to pin down a causal effect. In this post, I’ll try to summarize Card’s talk and add a bit of analysis about how this relates to differences between sociology and economics.
Card’s lecture, titled “Model-Based or Design-Based? Competing Approaches in ‘Empirical Micro'”, attempted to lay out the history of the competing approaches to microeconomics and their relative virtues. Card’s approach received the label “design-based”, to reflect the importance of carefully designing an actual experiment or some sort of natural experiment to collect data that plausibly pin down a causal effect. This approach has a lot of limits: in particular, it’s often difficult to generalize or to pose counterfactuals. Card showed that a particular minimum wage increase did not seem to reduce employment in a low-wage sector in a particular state, but it’s hard to know what that finding means for (say) a larger proposed minimum wage increase in another state, or worse, another country. Card argued that this design-based approach is particularly useful for testing theories to see if they are plausible: in other words, dominant microeconomic models often make strong predictions that do not hold up. Testing these predictions can help us find better models.
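The difference-in-differences logic behind this kind of design-based study can be sketched in a few lines. The numbers below are hypothetical, invented purely for illustration; the point is only the structure of the comparison – the treated state’s change over time, minus the comparison state’s change, nets out trends common to both states.

```python
# Minimal sketch of the difference-in-differences comparison behind a
# design-based study like Card and Krueger's minimum wage work.
# All numbers are hypothetical, for illustration only.

# Average employment per restaurant, before and after the wage change.
nj_before, nj_after = 20.4, 21.0   # treated state (minimum wage rose)
pa_before, pa_after = 23.3, 21.2   # comparison state (no change)

# Each state's own change over time.
nj_change = nj_after - nj_before   # treatment effect + common trends
pa_change = pa_after - pa_before   # common trends only

# Subtracting the comparison state's change removes shared trends.
did_estimate = nj_change - pa_change
print(f"Difference-in-differences estimate: {did_estimate:+.1f} jobs")
```

Note that the estimate only answers a question about this particular intervention in this particular pair of states – which is exactly the generalization problem Card flags.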
The other major approach Card lays out is called “model-based.” Here, the goal is to specify a complete framework that relates all the relevant variables – a complete “Data Generating Process” (DGP). To actually produce estimates for all of the parameters in this DGP is often impossible without reasonably strong theoretical assumptions – things like rationality, the shape of preference functions, some restrictions on how long certain effects take to appear, etc. Once you make sufficient restrictions to actually estimate the full model, however, you can then use the model for more complicated analyses of counterfactuals, answering questions like, if we change the minimum wage by X, how much will employment increase or decrease? If we changed it by X+1, how much would employment change? Etc. This sort of analysis is precisely what policymakers want to know, especially in fields like anti-trust where the question is often, if we allow this merger, how much will consumers gain (from economies of scale, increased efficiency) or lose (from increased market/monopoly power)? The cost of such complete models is that they require such strong assumptions – assumptions often proven invalid (or at least, incomplete) by the design-based studies of particular interventions.
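To make the contrast concrete, here is a toy sketch of the model-based style, assuming (strongly!) a constant-elasticity labor demand curve. Both the elasticity value and the wage numbers are hypothetical stand-ins, not estimates from any real study; a real structural model would estimate its parameters from data. The payoff is that, once the model is pinned down, any counterfactual wage can be queried:

```python
# Toy "model-based" counterfactual: constant-elasticity labor demand.
# The elasticity and wage figures below are hypothetical.

def counterfactual_employment(base_employment, base_wage, new_wage,
                              elasticity=-0.2):
    """Predict employment after a wage change, assuming employment
    scales with the wage ratio raised to a constant elasticity."""
    return base_employment * (new_wage / base_wage) ** elasticity

# With the model specified, we can ask about any wage level we like.
for new_wage in (5.05, 6.05, 7.05):
    e = counterfactual_employment(1000, 5.05, new_wage)
    print(f"wage ${new_wage:.2f} -> predicted employment {e:.0f}")
```

The strong assumption (a single constant elasticity, nothing else changing) is exactly the kind of restriction that a design-based study can show to be invalid or incomplete.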
Card’s lecture did a wonderful job of laying out the history of these approaches and characterizing the debates between their proponents. One of the most useful aspects of his talk was his discussion of “working models.” Card argued that the design-based and model-based approaches were not quite as far apart as they first seemed. Arguments based on carefully designed experiments or quasi-experiments still rely on implicit underlying models to determine the relevant units of analysis, time frames, etc. As Card put it, “All economic papers – all social science papers – have a working model.” These working models could often, at least in principle, be grounded in some well-specified formal model, but rather than taking the time and energy to spell out every piece of the complete model, the argument focuses on the data and statistical estimates. For example, a paper on changes in the minimum wage might assume that three years’ time is enough for all of the effects to work their way through the system, and that the change in minimum wage was exogenous. These statements would all be made upfront in narrative form, and readers can choose at the outset whether to accept the working model and suspend their disbelief for a moment, or to challenge the working model on one or more points. Card noted that “deep questions” in seminars are often those that convincingly challenge some previously implicit aspect of the working model by comparing it to another working model that is just a bit different (e.g. small differences in your assumptions about the timing of certain changes), while “really wacky” questions are those that assume a radically different working model (e.g. dramatic shifts in your assumptions about individual rationality).
Card defended the implicit working models of much design-based empirical work on the grounds that all models are false. For example, almost any exogenous variable in one model is endogenous in another. Your model may assume that preferences are exogenous while mine asks where those preferences come from. Even worse, “political economy” questions really shake up what counts as endogenous vs. exogenous – the minimum wage change in Card’s work is assumed exogenous to the employment decisions made by fast food firms, but what if those firms had been lobbying for (or against) such a change, and behaving accordingly? Exact specifications of exogeneity and endogeneity become very hard to maintain when you start taking into account these deep and subtle connections between everything and everything else.
Card’s talk had a lot of resonance with a long line of work in the philosophy of science. While Card spoke of “working models,” I think the more general term would be “background assumptions.” (Zammito does a nice job of tracing ideas about background assumptions in the history of philosophy of science and science studies here.) Card’s take on the topic was very tightly connected to an economist’s understanding of a model: precise mathematical specification of preferences and behaviors with assumptions about who is maximizing what with what information available, etc. But I think similar principles apply in other fields.
For example, at one point Card joked that some of the model-based microeconomists were not sure if the working-models approach of the design-based camp was economics at all: “Are these models ok, or should these people be in the sociology department?” Discipline-bashing aside, the question is a pretty reasonable one and offers some nice vocabulary for talking about differences between (especially quantitative, survey-based) sociology and design-based empirical microeconomics. In many ways, the two are very similar – we use the same statistical techniques, the same datasets, and sometimes even the same dependent variables. The big difference, I think, lies in the space of permissible working models and how we are called on to defend those working models. My guess is that the range of permissible models in sociology is quite large – formal modeling is a very small enterprise in sociology, with little clout discipline-wide. Thus, relatively little theory – formal and mathematical, or more narrative/holistic – is needed to motivate an interesting finding. On the other hand, we have little apparatus for checking to see whether one set of behaviors uncovered is consistent with another, or for generalizing from a set of individual behaviors to an aggregate outcome. In econ, by contrast, formal modeling is very advanced, but also relatively narrow. So, there is a lot more possibility for design-based studies to influence the formal modeling community by testing the plausibility of its assumptions and either forcing certain reconsiderations or validating a new approach. But there’s also a way in which empirical micro is handcuffed to a relatively narrow set of questions and assumptions – even when its practitioners take the liberty of relaxing the formal specifications and using an implicit working model.
At the end of his talk, Card laid out a tripartite agenda for empirical microeconomics. He argued that economists should:
This last call was most interesting to me, as similar calls have issued forth in sociology (e.g. Abbott’s calls to revalue descriptive work). Unfortunately, Card did not elaborate much on what he meant by descriptive work, nor on exactly how economists should go about doing and publishing it. I hope Card writes more on the topic soon.
Also, I know some of my readers attended the lecture as well – please comment with your own analysis, or if you think I missed or misstated any important points.