For the sake of argument, let’s pretend that there are two kinds of systems: simple and complex. I’m going to characterize these systems in a very different way than is typical in the literature on complex systems*. A simple system is one where at least one actor is sufficiently intelligent, knowledgeable and powerful to unilaterally alter the dynamics of the system based on a predictive model of what the system will do. Perhaps we might more meaningfully say that a system is simple or complex relative to a given actor if that actor has the capacity to unilaterally alter the system based on their understanding thereof. A complex system (a system complex relative to a given actor) is one where no actor can unilaterally alter the system based on their understanding thereof (though they may be able to alter their own place within the system). Complex systems are like structures or fields: actors can be relatively skillful at navigating the field, and thus able to obtain a better outcome, but this action does not on its own alter the dynamics of the field (but rather, usually, reinforces those dynamics – “playing the game”). Simple systems are more like my computer or my circle of close friends – if I work hard at understanding the system, I can open it up and reshape it in fundamental ways through my own actions, and even if no one else plays along, the other actors in the system are forced to respond to my actions (which in turn disrupts or alters the system – or even ends it). As Simmel long ago noted, if you leave a dyadic tie, the tie is gone. So, dyadic systems are usually simple: both actors usually have the capacity to break the tie and thus end the system. Whether or not that ruptures any larger system, or substantively changes the logic of any larger field, depends on the size and importance of the actors, their centrality, etc.
In some sense, any system being modeled that does not include intelligent actors is a complex system. This point echoes back to Ian Hacking’s work on “dynamic nominalism.” Quarks don’t care if you call them quarks; people care an awful lot if you call them autistic or Latino or bourgeois and respond accordingly. Their responses reshape the categories in ways that sometimes frustrate the intent of the person or group who came up with the categories or model in the first place.
Quarks, on the other hand, stay quarks even after you’ve modeled them as such. Flip the switch, turn on the accelerator, observe the reactions. Change the energy levels, get a different mix of particles. Whatever. Again, as Ian Hacking has noted, science has been really successful because of its capacity to produce new phenomena – to intervene in the world – not just because of its capacity to accurately represent the world. Scientific models (and their accompanying technical apparatuses) help us gain mastery over complex systems by giving us the knowledge and ability to muck about with the system.
It’s useful to distinguish simple from complex systems in this way because it determines where and when models of the system are likely to suffer from some version of Goodhart’s Law, or the Asimov problem of psychohistory – that is, where the existence of the model changes the system. Let’s call this process “model reactivity” (riffing here on Espeland and Sauder’s work on college rankings). Modeling a complex system is hard, but modeling a simple system may be harder, or at least suffers from different problems, because you are modeling a system that has already been modeled, and where the existence of models is consequential for the behaviors of actors capable of changing the functioning of the system.
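To make “model reactivity” concrete, here is a minimal simulation sketch (Python, with all dynamics and parameters invented for illustration): a forecaster’s model tracks the system well until one sufficiently large actor starts conditioning its interventions on the published forecast, at which point the model’s own existence permanently degrades its accuracy.

```python
import random

random.seed(42)

def mean_forecast_error(reactive_actor, steps=500, rho=0.9, mu=1.0, k=0.5):
    """Toy system: the state x drifts toward mu with persistence rho.
    The forecaster's model assumes the original, unaltered dynamics;
    if a powerful actor intervenes in proportion to the published
    forecast, the forecast errors stay permanently elevated."""
    x = mu
    total_error = 0.0
    for _ in range(steps):
        forecast = rho * x + (1 - rho) * mu  # model of the unaltered system
        intervention = k * forecast if reactive_actor else 0.0
        x = forecast - intervention + random.gauss(0, 0.05)  # actual next state
        total_error += abs(forecast - x)
    return total_error / steps

print("complex system (no actor can respond):  %.3f" % mean_forecast_error(False))
print("simple system (one big actor responds): %.3f" % mean_forecast_error(True))
```

The point of the toy is only that the reactive forecast error never washes out: the model is wrong precisely because someone powerful enough believed it.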
I want to argue that the economy is not a complex system. More specifically, I’ve been thinking about discussions of macroeconomic modeling currently taking place across the econblogosphere (for some summary and an example, see here). Here’s the thing: there exist many institutions capable of producing, understanding, and acting on macroeconomic models in ways that we would expect to alter some of the underlying structural dynamics at play. For example, in the United States, the Federal Government, the Fed in particular, and many large firms – especially the gigantic financial firms – are in a position to do just that. Some of these responses are included as functions of the model, but (as far as I can tell) in very simple ways – like the “monetary response function” used to model central bank behavior. In other ways, modelers recognize this “simplicity” (in my terms) of the system by agreeing that no model of financial crises or bubbles will ever routinely predict the exact timing of a crisis or crash. Such a model would be taken advantage of by powerful actors whose actions would then change the system.
In many ways, this argument parallels Charles Perrow’s claim that the financial crisis was not a “normal accident.” On that argument, see his piece in the Markets on Trial edited volume. Here’s a chunk of the abstract that explains Perrow’s criticism of the complexity account of the financial crisis:
The first is a “normal accident theory” arguing that the complexity and coupling of the financial system caused the failure. Although these structural characteristics were evident, I argue that the case does not fit the theory because the cause was not the system, but behavior by key agents who were aware of the great risks they were exposing their firms, clients, and society to. … I argue that while ideologies, etc., can have real effects on the behavior of many firm members and society in general, in this case financial elites, to serve personal ends, crafted the ideologies and changed institutions, fully aware that this could harm their firms, clients, and the public. Complexity and coupling only made deception easier and the consequences more extensive. [Emphasis added]
The mere existence of key agents, I argue, transforms the complex system into a simple one (for some players, for some purposes). Not that those key agents can act flawlessly, but rather, that their size and knowledge make modeling the system impossible without taking into account their ability to alter whatever temporary rules seem to be evident. Investment banks may not have known all of the consequences of their actions in the mid-2000s, but they had a much better sense than the rest of us, and at least some of those actors (e.g. Goldman Sachs) were capable of timing the bubble to maximum advantage (or minimum disadvantage). The Fed responded in turn, guided by its own models (models here being both quantitative, econometric and structural, and more heuristic or cultural models of how-things-generally-work). The financial collapse of 2007-2009 was at once a story of millions of individual actors slowly changing behaviors (home buyers and individual mortgage brokers) and the story of influential policy decisions by a handful of organizations. The latter part is not characteristic of a complex system but rather a simple one.
The discussion here also echoes the distinction between strategic rational choice theory – e.g. game theory – and atomistic rational choice theory – e.g. competitive markets. Competitive markets can be analyzed as if each actor paid attention only to their own circumstance, and perhaps a few sufficient statistics like prices that capture the aggregate dynamics and help them to choose among a limited set of actions. Competitive markets do not suffer from “model reactivity” because agents’ behaviors are already (perhaps boundedly) rational in the context of the market, and no agent is in a position to alter the particular rules of the market. Situations of strategic choice, by contrast, suffer endlessly from model reactivity – though not without the possibility of some order to the chaos, as the extensive history of game theory shows. And, perhaps, to break down the binary that I started this post with: by creating models and tools that simplify information processing, modelers enable boundedly rational actors to make more complex decisions, and to treat complex systems a bit more like simple ones.
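A toy contrast, purely for illustration (assuming linear inverse demand P = a − bQ and constant marginal cost c, with all numbers invented): the strategic duopolist has to carry a model of its rival, and the outcome is a fixed point of mutual modeling, while the price-taker needs nothing but the going price.

```python
# Inverse demand P(Q) = a - b*Q, constant marginal cost c (made-up numbers).
a, b, c = 100.0, 1.0, 10.0

def best_response(q_rival):
    """Profit-maximizing quantity given a belief about the rival's output."""
    return max(0.0, (a - c - b * q_rival) / (2 * b))

# Strategic setting: each firm models the other; iterate beliefs to a fixed point.
q1 = q2 = 0.0
for _ in range(50):
    q1, q2 = best_response(q2), best_response(q1)
print("Cournot outputs:", round(q1, 2), round(q2, 2))   # -> 30.0 30.0

# Atomistic setting: price-takers expand output until price equals marginal
# cost, so total output solves a - b*Q = c; no one models anyone in particular.
print("Competitive total output:", (a - c) / b)          # -> 90.0
```

The duopolist’s quantity is undefined without a belief about the other firm; the price-taker’s problem never mentions other actors at all.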
*Based on my limited understanding of that literature – please comment with citations if folks in that field or others have already talked about this.
Oz
/ April 12, 2012
Thanks for the post. Very interesting read! I’ve had a few questions pop up while going over it – first, I can understand how this sort of reasoning can enrich the debate on macromodeling for economists (if they’ll pay attention). However, I am interested to know: how do you see it from a sociological point of view?
Second, following your blog I know you’re familiar with Callon’s approach; so I ask why even categorize the economy as a system in the first place (conceptually)? Thinking of assemblages, all the entities that partake in a certain movement will be responsible for changing the very essence (as this essence is semiotic) of that movement – not simply the key actors (something that, of course, doesn’t have any analytic meaning for Callon) who can act upon the macromodel. It would even be nonsensical to disaggregate a ‘change in the system’ into particular acting actors, as the macromodel, for instance, brings about action in itself (or, more so, in accordance with others).
Anyway, what I am trying to say is not that Callon’s approach is the only way to understand and work with social ontologies. It’s clearly not. But as I struggle with his approach and its possible contributions to the study of ‘the economic’, I was wondering how you see it vis-à-vis your post.
best
Dan Hirschman
/ April 12, 2012
Oz,
Hah! This is, in some sense, my sociological perspective, but you are right that I am thinking in terms of the economist’s (particularly the macroeconomic modeler or forecaster’s) problems. Though, as Perrow’s article shows, economic sociologists often adopt the same position – arguing that the economy is a complex system and thus subject to periodic crisis (“normal accidents”), albeit with an institutionalist rather than formalist analysis.
In re: Callon – excellent question! And one I’ll need to think more about. With my Callon hat on, I argue in my dissertation (I will argue in my dissertation…) that the economy is very much a real thing, an object of economic analysis and of policymaking. But it is a boundary object, an object “multiple” in Annemarie Mol’s terms. This is very much in the spirit of Callon – economics (broadly construed) frames the economy, performs it into being. But it somewhat differs in substance from much of Callon – and the rest of the performativity literature – by focusing on how the economy as a whole is constructed, rather than how particular objects become economic (as if the economic were a relatively stable sort of category into which things could get placed, or not). But all of that is a bit beside the point, as your first suspicion was correct – in this post, I have my economic sociology hat on, and am speaking to econ soc’ers and economists, not so much my usual science studies concerns. But I’ll need to think more about the connections.
danielbeunza
/ April 12, 2012
Hi Dan. Great post. One issue that merits attention these days in connection with the debate you bring up is that of Friedman vs. Keynes: at the current position within the cycle in Europe, should governments cut down on spending and stay within budget, or should they allow for deficits? The case of Spain, my own country and where I am at the moment, is illustrative.
Here, the economists have fudged. Some argue for “sacrifice” and balanced budgets in order to please financial markets, while others argue for deficits in order to avoid “depression.” The result is that there is always a way to criticize the government. One way out of the theoretical impasse would be to look empirically at the reaction of bond investors to government policy. Are they rewarding austerity, as the pro-balanced-budget camp argues? If, as the Keynesian camp believes, austerity is counterproductive, investors should understand this and punish deficit reductions…
Dan Hirschman
/ April 12, 2012
Daniel,
Great point/elaboration. In some ways, contemporary economics is the ultimate postmodern science because there are no material foundations anymore – it’s expectations and preferences all the way down. And maybe that’s most of what I’m pointing to, with your example being a perfect one: whether or not austerity is a good idea is partly a function of whether a handful of institutions and their bond traders think austerity is a good idea, which is a function of which models (formal or otherwise) they are using, which in turn is a function of the dominant economic theories. So, as a theorist, how do you model this system, knowing your model may be self-fulfilling or self-negating if (but perhaps only if!) certain key actors pick it up and run with it?
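A stripped-down sketch of that self-fulfilling loop (Python; the lender pricing rule and the default-risk function are invented for illustration, not an actual sovereign debt model): lenders set the interest rate given their belief about default, default risk rises with the interest burden, and the same fundamentals support two very different equilibria depending on where beliefs start.

```python
RF = 0.02  # risk-free rate (illustrative)

def default_prob(r):
    """Stylized assumption: default risk rises with the interest burden,
    capped at 50% for the sake of the toy."""
    return min(0.5, max(0.0, 5.0 * (r - 0.04)))

def required_rate(p):
    """Risk-neutral lenders with zero recovery: (1 + r)(1 - p) = 1 + RF."""
    return (1.0 + RF) / (1.0 - p) - 1.0

for r0 in (0.02, 0.12):  # same fundamentals, different initial beliefs
    r = r0
    for _ in range(100):  # iterate beliefs and pricing to a fixed point
        r = required_rate(default_prob(r))
    print(f"initial belief {r0:.0%} -> rate {r:.0%}, "
          f"default prob {default_prob(r):.0%}")
```

Starting from a 2% belief, the market settles at 2% with no default risk; starting from a 12% belief, the same country ends up facing a triple-digit rate with default all but certain. Which equilibrium obtains is exactly the question of which model the key actors pick up and run with.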
danielbeunza
/ April 13, 2012
Precisely. This is what economists call “indeterminacy.” It was very interesting to see not just how academic economists model it, but also how they use it in consulting reports. For instance, when Fred Mishkin wrote his report to Iceland some time ago (the one that stars in “Inside Job”) he did mention self-fulfilling prophecies as a potential risk that might be there regardless of “fundamentals.” What he recommended is an extra dollop of economic orthodoxy to stave off any prophecy of doom. Clearly the recipe did not work.
In my view, and following my work on valuation as frame-making, policy-making should be in constant touch with the message sent by markets. Since the 1970s, economists have known how to do that: they “back out” the implied estimates of rivals on the basis of some visible magnitude, typically a spread. So the spread between Spanish and German bonds gives you the implied probability of government default, for instance. Once you know exactly what investors are thinking, it’s much easier to design policies that address them.
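For what it’s worth, the simplest textbook version of that backing-out looks something like this (Python; it assumes risk-neutral investors, a flat hazard rate, and a guessed recovery value, and the 420-basis-point spread is an invented illustration, not April 2012 data):

```python
import math

def implied_default_prob(spread, recovery=0.4, horizon=1.0):
    """Back out the risk-neutral default probability implied by a credit
    spread, via the standard approximation spread = hazard * (1 - recovery)."""
    hazard = spread / (1.0 - recovery)        # annual default intensity
    return 1.0 - math.exp(-hazard * horizon)  # cumulative default probability

# e.g. a 420bp sovereign spread over Bunds:
print(f"1-year implied default prob: {implied_default_prob(0.042):.0%}")             # ~7%
print(f"5-year implied default prob: {implied_default_prob(0.042, horizon=5):.0%}")  # ~30%
```

Everything hinges on the assumed recovery rate, which is rather the point: the “message sent by markets” is only legible through a model of the market.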