Willem Buiter, professor of political economy at the LSE and former central banker, has an excellent critique of modern macroeconomics at VOX EU today. His argument makes three main points, none entirely novel but all forcefully argued. The first is a rejection of the idea of “markets in everything” or “complete markets”:
It is clear that, when searching for an appropriate simplification to address the intractable mess of modern market economies, the starting point of ‘no markets’, that is, autarky or no trade, is a much better one than that of ‘complete markets’. Goods and services that are potentially tradable are indexed by time, place and state of nature or state of the world. Time is a continuous variable, meaning that for complete markets along the time dimension alone, there would have to be rather more markets for future delivery (infinitely many in any time interval, no matter how small) than you can shake a stick at. Location likewise is a continuous variable in a 3-dimensional space. Again rather too many markets. Add uncertainty (states of nature or states of the world), never mind private or asymmetric information, and ‘too many potential markets’, if I may ruin the wonderful quote from Amadeus attributed to Emperor Joseph II, comes to mind. If any market takes a finite amount of resources (however small) to function, complete markets would exhaust the resources of the universe.
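Buiter’s cardinality point can be made precise with a back-of-the-envelope formalization (my notation, not his): index potential markets by time, place, and state of the world, and note that any positive per-market cost already blows up.

```latex
% Sketch of the "too many markets" argument (my formalization, not Buiter's):
% potential markets are indexed by time, location, and state of the world.
\[
  \mathcal{M} \;=\; T \times L \times S,
  \qquad T \subseteq \mathbb{R} \ \text{(time)},\quad
  L \subseteq \mathbb{R}^{3} \ \text{(location)} .
\]
% M is uncountable as soon as T (or L) contains any interval. If operating a
% single market consumes at least \varepsilon > 0 of resources, then
\[
  \text{Cost of completeness} \;\ge\; \varepsilon \cdot |\mathcal{M}| \;=\; \infty
  \qquad \text{for any } \varepsilon > 0,
\]
% so literally complete markets are infeasible with finite resources.
```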
I really like the idea that we need to start from the assumption of *no* markets, rather than markets everywhere with some failures, and work forwards, keeping in mind Buiter’s injunction: “For every good, service or financial instrument that plays a role in your ‘model of the world’, you should explain why a market for it exists – why it is traded at all.” This fits nicely with most of the economic sociology critiques of economics: markets are social structures, and perilous ones at that; they are hard to bring into existence and to maintain, so their existence (rather than their absence) is the interesting story.
His second argument is that even given the existence of markets, there is no reason for them to behave rationally unless some auctioneer at the end of time makes everything work out nicely (i.e. a central planner). Thus the efficient markets hypothesis leads to models of planned economies, not market ones. It’s counterintuitive at first glance, but actually a rather elegant argument.
The friendly auctioneer at the end of time, who ensures that the right terminal boundary conditions are imposed to preclude, for instance, rational speculative bubbles, is none other than the omniscient, omnipotent and benevolent central planner. No wonder modern macroeconomics is in such bad shape. The EMH is surely the most notable empirical fatality of the financial crisis. By implication, the complete markets macroeconomics of Lucas, Woodford et al. is the most prominent theoretical fatality. The future surely belongs to behavioural approaches relying on empirical studies on how market participants learn, form views about the future and change these views in response to changes in their environment, peer group effects etc. Confusing the equilibrium of a decentralised market economy, competitive or otherwise, with the outcome of a mathematical programming exercise should no longer be acceptable.
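In standard notation, Buiter’s “auctioneer at the end of time” is the transversality (terminal boundary) condition of the planner’s problem. A hedged sketch of how it rules out rational bubbles (my notation, not the article’s):

```latex
% Standard asset-pricing recursion with stochastic discount factor m_{t+1}:
\[
  p_t \;=\; \mathbb{E}_t\!\left[ m_{t+1}\,(p_{t+1} + d_{t+1}) \right].
\]
% Any solution splits into a fundamental part plus a bubble term:
\[
  p_t \;=\;
  \underbrace{\sum_{j=1}^{\infty}
    \mathbb{E}_t\!\Big[ \Big(\textstyle\prod_{i=1}^{j} m_{t+i}\Big)\, d_{t+j} \Big]}_{\text{fundamentals}}
  \;+\; b_t,
  \qquad
  b_t \;=\; \mathbb{E}_t\!\left[ m_{t+1}\, b_{t+1} \right].
\]
% The terminal boundary condition
\[
  \lim_{T \to \infty}
  \mathbb{E}_t\!\Big[ \Big(\textstyle\prod_{i=1}^{T} m_{t+i}\Big)\, p_{t+T} \Big]
  \;=\; 0
\]
% forces b_t = 0. It arises from the planner's infinite-horizon optimisation;
% nothing in a decentralised market imposes it -- which is Buiter's point.
```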
Lastly, even if we buy that in the long run prices somehow converge to where they should be, modern macroeconomics has linearized reality to the point of making its models worthless for policy purposes, because they ignore all the messy interactions between markets, even in the short term:
If one were to hold one’s nose and agree to play with the New Classical or New Keynesian complete markets toolkit, it would soon become clear that any potentially policy-relevant model would be highly non-linear, and that the interaction of these non-linearities and uncertainty makes for deep conceptual and technical problems. Macroeconomists are brave, but not that brave. So they took these non-linear stochastic dynamic general equilibrium models into the basement and beat them with a rubber hose until they behaved. This was achieved by completely stripping the model of its non-linearities and by achieving the transubstantiation of complex convolutions of random variables and non-linear mappings into well-behaved additive stochastic disturbances.
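The “rubber hose” treatment Buiter describes is, concretely, first-order (log-)linearisation around a deterministic steady state. A sketch in generic notation (mine, not the article’s):

```latex
% Generic equilibrium conditions of a stochastic dynamic GE model:
\[
  \mathbb{E}_t\!\left[ f\big(x_{t+1},\, x_t,\, \sigma \varepsilon_{t+1}\big) \right] \;=\; 0 .
\]
% First-order approximation around the steady state \bar{x}, where f(\bar{x},\bar{x},0)=0:
\[
  \hat{x}_{t+1} \;=\; A\,\hat{x}_t \;+\; B\,\varepsilon_{t+1},
  \qquad \hat{x}_t \;\equiv\; x_t - \bar{x}.
\]
% All non-linear interactions are discarded, and uncertainty survives only as the
% additive shock \varepsilon_{t+1} (certainty equivalence holds to first order):
% the "transubstantiation" of complex convolutions of random variables into
% well-behaved additive stochastic disturbances.
```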
Alright, that’s enough big quotes. The whole article is only a few thousand words; I highly recommend it.