Marx’s Reception by 19th Century Political Economists

As a warning, this post has nothing to do with the US elections.

Right now, the undergraduate social theory class for which I am GSI-ing* is covering Marx’s Capital. Capital is a tough book for sociology majors, and even for graduate students, compared to much of the rest of an already dense canon. One reason it’s so difficult, I think, is that most sociologists lack familiarity with both Hegelian thought and 19th century political economy.** Having read big chunks of Smith, Malthus, Ricardo, and Mill, for example, makes Marx a lot easier to swallow. Marx’s labor theory of value is not quite so radical when lined up next to Smith and Ricardo, who said many of the same things, albeit with a very different political take.*** Economics soon shifted away from the labor theory toward marginalism, and I wonder if Marx’s reputation suffered because of it. While Smith and Ricardo themselves are lionized by contemporary economics, their labor theories of value are downplayed or ignored in favor of other insights; Marx’s is made central and used to critique him.

This leads to the post’s title question: how did Marx’s contemporaries view his work? Marginalism was on the rise by the 1870s, but it was not yet the only game in town. How did the Smithians, Ricardians, and Millians of the late 19th century view Marx? I assume they were mostly ideologically opposed to his take on things, although perhaps not uniformly, but what was their theoretical critique of Marx like?

Any recommended citations or primary sources would be much appreciated!

* Equivalent to TA-ing, but with a better union.
** I certainly did when I started graduate school in sociology.
*** Hence Samuelson’s derisive comment that Marx was a “minor post-Ricardian.”


What stayed the same in the 1970s?

There’s a ton of research in economic sociology that focuses on major upheavals to the American (and, to some extent, global) economy in the 1970s. Financialization is an obvious one, along with the decline of unions, increasing inequality especially concentrated in the top 1%, increases in the incarceration rate, the emergence of institutional investors as major players in the corporate world, the declining centrality of commercial banks, the death of the Bretton Woods system, the rise of neoliberalism, and so on. Trends big and small begin, or have sharp inflection points, in the 1970s. Obviously many of these trends are related – financialization is part of what drives the dramatic increase in top incomes, for example. But it’s hard to imagine that political economy is so integrated that all of the major upheavals from the 1950s to the 1990s happened in this one decade. So, my question for you all is: what stayed the same in the 1970s? What trends persisted from the 1950s through the 1990s, or had their major changes in other decades? What do we miss by periodizing the last 70ish years as 1945-1970 and 1970-present?

For example, in the city of Detroit, population declines first occur in the 1950s – well before the riots of the 1960s – as the spread of cars leads to suburbanization (although I’m not sure how widespread that trend is across major US cities). The decline accelerates in the 1960s and 1970s, but it is already well underway in the 1950s. What other trends are we missing by focusing so much on the shifts of the 1970s?

The Climate Change Dystopia and National Accounts

I joke a lot about the robot apocalypse. I follow the news on the latest things we’ve taught robots and AI to do (consume organic matter, lie, shoot guns, recognize cats) and laugh at how, when you add their capabilities together, you get the robot apocalypse that fiction (and Charli Carpenter) has warned us about for years. I like to think about the robot apocalypse because, while plausible in a science fiction way, it doesn’t seem imminent. Unlike the Climate Change Dystopia (CCD, let’s call it).*

This morning I read a piece from the next issue of Rolling Stone, Global Warming’s Terrifying New Math. There’s not much that’s truly new in the piece, but it does an excellent job of summarizing how bad things have gotten and how much worse they’re going to get absent immediate, radical changes. The piece is framed around three numbers: 2 degrees Celsius (the amount of warming scientists used to think would be allowable without triggering CCD, though new estimates are a bit lower), 565 gigatons of carbon (how much more we can dump into the atmosphere while keeping global warming to 2 degrees Celsius), and 2,795 gigatons of carbon – the amount of carbon in the fossil fuels that large companies already hold in their reserves. That’s perhaps the most novel contribution of the article: thinking through the political and economic consequences of the fact that fossil fuel producers have already discovered five times more fossil fuels than we can safely burn without triggering a catastrophe. These reserves are built into the value of large oil and gas companies around the world. These companies will fight tooth and nail for the right to burn all of what they’ve found, and even to search for more. In fact, they are already fighting hard, and have successfully produced doubt in the public about the extent and severity of the problem, as well as the scientific consensus (see Oreskes and Conway for details). And that’s going to destroy the world as we know it.
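The arithmetic behind that “five times” claim is easy to check. Here’s a minimal sketch using only the article’s own numbers (nothing below is my estimate):

```python
# The Rolling Stone article's three numbers, in gigatons of carbon
budget_gt = 565     # what we can still emit while (probably) staying under 2 degrees
reserves_gt = 2795  # carbon embedded in already-discovered fossil fuel reserves

print(f"reserves / budget = {reserves_gt / budget_gt:.1f}x")  # ~4.9x, i.e. roughly five times
print(f"share that must stay in the ground = {1 - budget_gt / reserves_gt:.0%}")  # ~80%
```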

Here’s just one choice quote from the article on the absurd politics of energy and climate change:

Sometimes the irony is almost Borat-scale obvious: In early June, Secretary of State Hillary Clinton traveled on a Norwegian research trawler to see firsthand the growing damage from climate change. “Many of the predictions about warming in the Arctic are being surpassed by the actual data,” she said, describing the sight as “sobering.” But the discussions she traveled to Scandinavia to have with other foreign ministers were mostly about how to make sure Western nations get their share of the estimated $9 trillion in oil (that’s more than 90 billion barrels, or 37 gigatons of carbon) that will become accessible as the Arctic ice melts. Last month, the Obama administration indicated that it would give Shell permission to start drilling in sections of the Arctic.

For one clear demonstration of how things are going to get worse: this year’s record heat has made it harder to grow corn. When it’s too hot for too long, corn kernels don’t develop properly and the plants won’t produce usable corn. The US exports a tremendous amount of corn, which keeps global food prices lower than they would otherwise be. This year, food prices are going to go up. Here’s a graph and caption from Paul Krugman:

I’ve been searching for something useful to say about the epic heat wave and drought afflicting U.S. agriculture, other than that this is the shape of things to come. Of course it’s about climate change: a rising number of temperature records is exactly what you’d expect given an underlying upward trend in global temperatures. And the economic consequences will be large: maybe 1 percent on U.S. consumer prices, but suffering and food riots in poorer nations that spend more of their income on food.

I don’t have much insight into climate change beyond the apocalyptic things I read from the actual experts. The one point that arises from my own research is that if we took climate change – or really, the environment at all – into account in our estimates of economic growth, the picture we have of “the economy” would look dramatically different. For example, Muller, Mendelsohn and Nordhaus (2011) produce a modified set of national accounts that take into account just one kind of environmental damage: air pollution (including estimates of the cost of CO2 emissions, where possible). Muller et al. find that including this one form of pollution radically alters our assessment of how valuable certain industries are to the overall economy (understood here as “what GDP measures”).

For some industries (sewage treatment plants, solid waste combustion, stone quarrying, marinas, and petroleum fired and coal-fired power generation), [Gross Environmental Damage, GED] actually exceeds conventionally measured [Value Added, VA]. Crop and livestock production also have high GED/VA ratios, which is surprising given that these activities generally occur in rural (low marginal damage) areas. Other industries with high GED/VA ratios include water transportation, carbon black manufacturing, steam heat and air conditioning supply, and sugarcane mills. It is likely that many of these sources are underregulated.

Muller et al. are cautious, both because of the many uncertainties in their estimates and because GDP is already a wacky enough measure that it’s hard to take seriously as a measure of economic welfare (even though we do just that all the time). But the point is clear: we are much less well off than we think once we account for pollution, and some industries may be downright destructive, at least at the margin, with their marginal product worth less than their marginal environmental damage.
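To make the adjustment concrete, here’s a minimal sketch of the accounting logic. The industry figures below are invented for illustration; they are not Muller et al.’s actual estimates:

```python
# Environmentally adjusted value added: EVA = VA - GED.
# Muller et al.'s point: when GED/VA > 1, an industry's gross environmental
# damage exceeds its conventionally measured contribution to GDP.
industries = {
    # name: (value added, gross environmental damage), hypothetical $ billions
    "coal-fired power generation": (10.0, 15.0),
    "solid waste combustion": (2.0, 5.0),
    "software publishing": (50.0, 0.5),
}

for name, (va, ged) in industries.items():
    print(f"{name}: GED/VA = {ged / va:.2f}, adjusted VA = {va - ged:+.1f}")
```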

Muller et al. end with a call for the production of official national accounts that take environmental damage into account. While they don’t make this link, I would argue that such accounts would be a helpful tool in climate change debates: every time someone asks, “But what will this do to the economy?” we could give a very different answer, one that recognizes previously uncounted costs (including those of further global warming). But, like most of the incremental political tools mentioned in the Rolling Stone article, these changes would likely take years and have only small effects on our national debates. And we may simply not have time for incremental measures anymore.

* For an excellent fictional account of the CCD, see Paolo Bacigalupi’s work, especially The Windup Girl.

Hirschman QotD on Macroeconomic Policy and Surprise

Albert Hirschman is one of the most interesting economists of the 20th century, and not just because of his excellent last name. He was influential in development economics (unbalanced growth, linkages), organizational theory (“Exit, Voice, and Loyalty”), and the history of economic thought (“The Passions and the Interests”). A bit closer to my interests, Hirschman contributed an interesting essay to one of the rarest of academic species: the important edited volume, in this case The Political Power of Economic Ideas: Keynesianism across Nations. Hirschman’s chapter argues that Keynesian economics may have been less influential in the post-WWII US than expected (given its policy dominance from 1938 to 1945, in the “second New Deal” and WWII itself) because Keynesian economists redirected their attention to the world stage, leading influential missions to Japan, Germany, and eventually almost everywhere. Thus, Keynesianism came to dominate the world, even as it never fully took root in the US and faced lots of challenges.

One of Hirschman’s best arguments concerns the connection between macroeconomic policy making and “surprise.” Hirschman notes other scholars’ arguments in the edited volume that public understanding of a policy is one possible predictor of its success – to implement new policies, you have to convince the public of how and why they work, and overturn existing understandings of the old policy. But, Hirschman argues, this can create a tension, especially when macroeconomic policies emphasize expectations as a source of problems in the economy:

To work at all, a policy must first be at least minimally understood, but it becomes unsustainable if it is understood too well, in the sense that the operators will neutralize it by anticipating its effects. In other words, the public’s understanding of the policy must be neither inadequate nor excessive, but since the understanding presumably passes from the former of these negative conditions to the latter, the viability of any policy is necessarily limited in time. Recent experience suggests that this is not an entirely unrealistic interpretation of macroeconomic policy making in a decentralized economy. (Hirschman 1989: 355)

I think part of the “rational expectations” move in recent economic theorizing is directly trying to address this problem: to come up with a macroeconomic policy that “works” even when fully understood and predicted by the relevant actors whose behavior you are trying to predict and control. But, these policies may still make strong assumptions about the knowledge of economic actors (in this case, assuming a maximal understanding which may not obtain). So it’s still a fascinating problem, and I recommend Hirschman’s take on it for anyone interested.

What do P-Values and Earnings Management Have in Common?

Marginal Revolution just linked to a paper with one of my favorite titles of all time: Star Wars: The Empirics Strike Back. The paper reports a big meta-analysis of all the significance tests published in three top econ journals – the AER, JPE, and QJE – and shows that the distribution of test statistics has a dip between .10 and .25 and a slight bump just under .05, a two-humped “camel shape.” The authors argue that this distribution suggests researchers play with specifications to coax findings near significance across the threshold. The paper’s title is also a substantive finding: “Inflation [of significance] is larger in articles where stars are used in order to highlight statistical significance and lower in articles with theoretical models.”

I like this finding because it’s much narrower and suggests a much more plausible mechanism than McCloskey and Ziliak’s famous “Standard Error of Regressions.” (For a good critique of M&Z, especially their coding scheme, see Hoover and Siegler.) Rather than simply asserting that economists don’t understand significance, and are part of a “cult”, Brodeur et al. show a small but predictable amount of massaging to push results towards the far-too-important thresholds of .10 and .05. So, they agree in some sense with M&Z that economists are putting too much emphasis on these thresholds, but without an excessive claim about cult-like behavior.

Economists, in fact, look like corporate earnings managers. I’m not super up on the earnings management literature, but various authors in that field argue that corporations have strong incentives to report positive rather than negative earnings, and to meet analyst expectations. The distribution of reported earnings shows just that: fewer companies report very slightly negative earnings than you would expect, and fewer just barely miss analyst expectations than just barely exceed them (see, e.g., Lee 2007, Payne and Robb 2000, Degeorge et al. 1999 – if anyone has a better cite for these findings, please leave a comment!). Like the economists* in the Star Wars paper, businesses have incentives to coax earnings across salient thresholds – here, analysts’ expectations.
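Here’s a toy simulation of the mechanism both literatures describe: a pool of honest test results, plus a little “massaging” that nudges just-insignificant findings across the threshold. All the parameters are invented purely to show the shape of the effect; they are not estimates from either literature:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Honest two-sided p-values from a mix of real and null effects
z = np.abs(rng.normal(loc=1.0, scale=1.5, size=100_000))
p = 2 * (1 - norm.cdf(z))

# "Massaging": some near-misses get respecified until they just clear .05
near_miss = (p > 0.05) & (p < 0.10)
nudged = near_miss & (rng.random(p.size) < 0.3)
p[nudged] = rng.uniform(0.040, 0.049, nudged.sum())

# The histogram now dips just above .05 and bumps just below it
counts, edges = np.histogram(p, bins=np.arange(0, 0.26, 0.01))
for lo, c in zip(edges[:-1], counts):
    print(f"p in [{lo:.2f}, {lo + 0.01:.2f}): {'#' * int(c // 500)}")
```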

What’s the larger lesson? I think these examples are both cases of a kind of expanded Goodhart’s Law, or a parallel case to Espeland and Sauder’s “reactivity” of rankings. Another variant that perhaps gets closest is Campbell’s Law, first articulated in the context of high-stakes testing: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” It’s not clear just how “corrupt” economics findings or corporate earnings statements are, but Campbell’s Law and its close cousins remind us of the need to look for both subtle and overt forms of distortion whenever we turn a particular measure into a powerful threshold.

* Note that I am picking on economics here only because the article studied econ journals. I would bet that a similar finding could be obtained in sociology journals, and probably in other social science fields with a heavy statistical bent.

Not Just Any Formula: Self-Confirming Equilibria and the Performativity of Black-Scholes-Merton

One of the most successful, but still controversial, papers in recent economic sociology is MacKenzie and Millo’s (2003) Constructing a Market, Performing Theory. M&M trace the history of the Chicago Board Options Exchange and its relationship to a particular economic theory – the Black-Scholes-Merton (BSM) options pricing model. One of the main findings is summarized nicely in the abstract:

Option pricing theory—a “crown jewel” of neoclassical economics—succeeded empirically not because it discovered preexisting price patterns but because markets changed in ways that made its assumptions more accurate and because the theory was used in arbitrage.

Economics is thus performative (in what MacKenzie would later call a “Barnesian” sense), because the economic theory altered the world in such a way to make itself more true. M&M elaborate a bit more in the conclusion:

Black, Scholes, and Merton’s model did not describe an already existing world: when first formulated, its assumptions were quite unrealistic, and empirical prices differed systematically from the model. Gradually, though, the financial markets changed in a way that fitted the model. In part, this was the result of technological improvements to price dissemination and transaction processing. In part, it was the general liberalizing effect of free market economics. In part, however, it was the effect of option pricing theory itself. Pricing models came to shape the very way participants thought and talked about options, in particular via the key, entirely model‐dependent, notion of “implied volatility.” The use of the BSM model in arbitrage—particularly in “spreading”—had the effect of reducing discrepancies between empirical prices and the model, especially in the econometrically crucial matter of the flat‐line relationship between implied volatility and strike price.

Elsewhere, I have emphasized the other aspects of performativity – the legitimacy the model lent the market, the creation of “implied volatility” as a kind of economic object that could be calculated (see the sketch below), and so on. These are what I think of as Callonian performativity, a claim about how economic theories and knowledge practices produce economic objects (what Caliskan and Callon now call “economization“). But at the heart of M&M – and at the heart of the controversy surrounding the paper – is the claim that Black-Scholes-Merton “made itself true.” This claim summoned up complaints that M&M had given far too much power to the economists – their theories were now capable of reshaping the world willy-nilly! Following M&M’s analysis, would any theory of options pricing have sufficed, given sufficient backing by prominent economists? And if not, aren’t M&M just saying that BSM was a correct theory?
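Since “implied volatility” is entirely model-dependent, it’s worth seeing how it gets calculated: the BSM formula maps a volatility to a price, and implied volatility just runs that map backwards for an observed market price. A minimal sketch, with purely illustrative parameters:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def bsm_call(S, K, T, r, sigma):
    """Black-Scholes-Merton price of a European call option."""
    d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r):
    """Invert BSM by bisection: find the sigma that reproduces a market price."""
    lo, hi = 1e-6, 5.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if bsm_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A market price only *implies* a volatility via the model:
print(implied_vol(price=10.0, S=100, K=100, T=1.0, r=0.02))  # ~0.23
```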

One way out of this problem – the worry that performativity makes economists all-powerful – is to invoke a game-theoretic concept: the self-confirming equilibrium (Fudenberg and Levine 1993).* In game theory, an equilibrium refers to a set of mutually consistent strategies – strategies from which no player has a reason to deviate. There are lots of technical definitions of different kinds of equilibria depending on the kind of game (certain or probabilistic, sequential or simultaneous, etc.), along with various refinements that go far above my head. The most famous, the Nash equilibrium, can be thought of as “mutual best responses” – my action is the best response to your action, which is in turn your best response to mine. The traditional Nash equilibrium, like many parts of economics, assumes a lot – in particular, that you know all possible states of the world, the probabilities with which they obtain (in a probabilistic game), and your payoffs in each. The self-confirming equilibrium is one way to relax these knowledge assumptions. The name gives away the basic insight: my action is the best response to your observed actions, and vice versa, but not necessarily to all possible actions you might take. Here’s the Wikipedia summary:

[P]layers correctly predict the moves their opponents actually make, but may have misconceptions about what their opponents would do at information sets that are never reached when the equilibrium is played. Informally, self-confirming equilibrium is motivated by the idea that if a game is played repeatedly, the players will revise their beliefs about their opponents’ play if and only if they observe these beliefs to be wrong.
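A toy illustration of that “if and only if” clause (my own minimal sketch, not an example from Fudenberg and Levine): suppose player 1 wrongly believes player 2 would fight her entry into a market. She stays out, and because player 2’s decision node is never reached, repeated play never corrects the belief:

```python
# Entry game: P1 chooses Out (P1 gets 1) or In; after In, P2 chooses
# Accommodate (P1 gets 2) or Fight (P1 gets 0).
# P2's true strategy is Accommodate, but P1 believes P2 would Fight.

belief_p2_fights = True  # P1's wrong belief about off-path play
history = []

for _ in range(10):
    # P1 best-responds to her belief: Out yields 1, In yields 0 or 2
    p1_move = "Out" if belief_p2_fights else "In"
    if p1_move == "In":
        p2_move = "Accommodate"                  # what P2 would actually do
        belief_p2_fights = (p2_move == "Fight")  # beliefs revised only on observation
    else:
        p2_move = None  # P2's information set never reached; nothing is learned
    history.append((p1_move, p2_move))

print(history)  # P1 plays Out every round; her mistaken belief is never falsified
```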

So, if we think of different traders all using BSM, checking the model to see if it was working, and then choosing to use it again, we can see how BSM could work as a self-confirming equilibrium.** And, in turn, the concept might help restrict the set of theories that could have been self-confirming. A radically different theory might not have produced consistent outcomes – but many other theories could have. I don’t know enough about options pricing to say for sure, but logically I think it works: given all the kinds of imperfect information and expectations one could have, there was probably a wide range of formulas that would have worked (coordinating traders’ activities in a self-confirming way), but not just any formula would do. So, a possible amendment to M&M’s findings would be to say that in addition to all the generic/Callonian ways that BSM was performative (legitimizing the market, creating “implied volatility” as an object to be traded), it also belonged to a class of theories capable of coordinating expectations, and thus, once adopted, it pushed the market to conform to its predictions. Until the 1987 crash, of course, when it broke down and was replaced by a host of follow-ups that attempted to account for persistent deviations. But that’s another story!

*I thank Kevin Bryan for the suggestion.
**I may be butchering the technical definition here, apologies if so. The overall metaphor should still work though.

EDIT: Kevin offers some additional useful clarification. First, here’s a link to a post discussing self-confirming equilibria (SCE) on Cheap Talk (about college football, of all things). Second, I should have pointed out that the SCE concept only makes a difference in dynamic games (which take place over time). In one-shot games, there is no chance to learn, and thus nothing to be self-confirmed. Third, here’s Kevin’s take on how the SCE concept could apply:

Here’s how it could work in BSM. SCE requires that pricing according to BSM be a best response if everyone else is pricing according to BSM. But option pricing is a dynamic game. It may be that if I price according to some other rule today, a group of players tomorrow will rationally respond to my deviation in a way that makes my original change in pricing strategy optimal. Clearly, this is not something I would just “learn” without actually doing the experiment.

My hunch, given how BSM is constructed, is that there are probably very few pricing rules that are SCE. But I agree it’s an appropriate addendum to performativity work.

Keynes on Financial Markets QotD

Keynes is a fascinating writer. Some of his essays are wonderfully witty. The General Theory (GT), on the other hand, is a legendarily inscrutable book, subject to almost as much exegetical debate as the work of Marx. But even in GT, Keynes has some brilliant moments. In a discussion of the stock market,* Keynes lambasts investors for caring more about liquidity than about making good investments:

The social object of skilled investment should be to defeat the dark forces of time and ignorance which envelop our future. The actual, private object of the most skilled investment to-day is ‘to beat the gun’, as the Americans so well express it, to outwit the crowd, and to pass the bad, or depreciating, half-crown to the other fellow. (GT, 155)

All of this has happened before, and all of it will happen again…

* A discussion which follows, oddly, a statement of what would later be called the weak efficient markets hypothesis (I think):

We are assuming, in effect, that the existing market valuation, however arrived at, is uniquely correct in relation to our existing knowledge of the facts that will influence the yield of the investment, and that it will only change in proportion to changes in this knowledge… if we can rely on the maintenance of the convention, an investor can legitimately encourage himself with the idea that the only risk he runs is that of a genuine change in the news over the near future…(GT, 152-153)

I’m not yet certain, but it would be fascinating if Keynes’ General Theory were somehow premised on the efficient markets hypothesis that has come to be associated (albeit more in its strong than its weak form) with very conservative, anti-government, pro-market economists. Keynes later goes on to talk at length about speculation, and about speculative investment overwhelming long-term investment, so I think he was explaining the potentially reasonable but practically problematic convention of looking to market values to assess investments, rather than asserting its truth in practice.

Keynes QOTD: Econometrics as Alchemy

Keynes is rightly regarded as one of the most influential economists of the 20th century. But I think he also has a reputation for being a bad writer. The General Theory, his most widely cited book, is a dense tome filled with somewhat imprecise new concepts that later authors have spent 75 years debating (spawning schools including the [neo-classical] Keynesians, New Keynesians, Post-Keynesians, Paleo-Keynesians…). Keynes’ reputation, though, is unfair: most of his writing is clear and incredibly witty, befitting the man himself, who was a member of the influential literary and artistic Bloomsbury Group along with Virginia Woolf and others. I tend to imagine him as a character in an Oscar Wilde play. Keynes’ (1939) review of Jan Tinbergen’s early econometric work is an excellent example. Here’s how Keynes concludes his critical review:

No one could be more frank, more painstaking, more free from subjective bias or parti pris than Professor Tinbergen. There is no one, therefore, so far as human qualities go, whom it would be safer to trust with black magic. That there is anyone I would trust with it at the present stage or that this brand of statistical alchemy is ripe to become a branch of science, I am not yet persuaded. But Newton, Boyle and Locke all played with alchemy. So let him continue.

I love this quote, especially for the tongue-in-cheek invocation of physics. As Mirowski and others have alleged, economics in the late 19th and 20th centuries suffered from a heavy case of physics envy. Here, Keynes points to the greatest physicist of them all and says: remember how he was kind of nuts and played with alchemy a lot? It does a lot of rhetorical work very quickly – both praising and dismissing Tinbergen, while opening up the possibility that such work will be necessary to actually set economics on a scientific footing (even if, for now, it remains absolutely unscientific).

A Random Thought About Labor Unions

I have never understood people who think that large corporations are all well and good but that labor unions are illegitimate, infringe on the freedom of contract, distort the market, and so on. This post is an attempt to clarify why those two beliefs are puzzling when held together.

Pretend no one has market power – employers are small and lack monopoly power – and labor markets work well. If you decide you don’t like your boss or your working conditions you can quit and get a similar new job. Everything’s hunky-dory.

Now pretend that employers are big and have (some) monopsony power as big buyers of labor (monopsony means single-buyer, the counterpart to monopoly). If you get fired, you might not be able to get a new job, and if you did, it might be at much lower wages or in a very distant locale (think of being made to leave a company town). At worst, coordinated employers could blacklist you, and then you’d starve on the streets. Employers also have lots of concentrated political power, and can prevent legislation they don’t like from passing (higher taxes on capital gains or the rich, safety and health rules, what have you).

Now imagine we introduce labor unions into the picture. How does this inherently make the situation worse for workers or for the system as a whole? You can say: well, now workers at a firm lose their choice not to join the union – unions are monopolies on labor. Well, kinda. Now, instead of choosing which large employer to work for, a worker is choosing which employer-union combination to work for. Assuming that there are multiple unions and multiple employers, the worker still has choices. The supposed monopoly power of the union is a function of the existing monopsony power of the firm – if the firm were in a perfectly competitive labor market, unions would have nothing to bargain for. I guess if a single union truly dominated an otherwise large and dispersed industry, some of these arguments would make sense. But in the US at least, unions have been most successful in relatively concentrated manufacturing industries and in the public sector (a serious monopsony employer, as many careers only exist inside the government).
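For a slightly more formal version of this intuition, here’s a textbook-style numerical sketch (the linear curves and all numbers are invented for illustration): a monopsonist facing upward-sloping labor supply hires fewer workers at a lower wage than a competitive market would, and a bargained wage floor can raise both wages and employment:

```python
# Illustrative linear model:
#   labor supply: w(L) = 10 + L   (wage needed to attract L workers)
#   marginal revenue product: MRP(L) = 40 - L

def supply(L):
    return 10 + L

# Competitive benchmark: w = MRP  =>  10 + L = 40 - L  =>  L = 15
L_comp = 15
w_comp = supply(L_comp)  # 25

# Monopsony: the wage bill is w(L) * L = 10L + L^2, so the marginal cost
# of labor is 10 + 2L; hire until MCL = MRP  =>  10 + 2L = 40 - L  =>  L = 10
L_mono = 10
w_mono = supply(L_mono)  # 20: fewer workers, each paid less

# Union wage floor at the competitive wage: the firm now faces flat labor
# supply at w = 25, so MCL = 25; hire until MRP = 25  =>  L = 15
L_union, w_union = 15, 25

print(f"competitive: L={L_comp}, w={w_comp}")
print(f"monopsony:   L={L_mono}, w={w_mono}")
print(f"union floor: L={L_union}, w={w_union}")  # back to the competitive outcome
```

The punchline is the standard one from the minimum wage literature: against a monopsonist, a bargained wage can undo a distortion rather than create one.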

Now, there are lots of good arguments against particular unions, and certainly against all sorts of things particular unions have done (excluding minorities or women, corruption, etc.). But I just don’t understand a belief system that legitimates large employers but completely rejects unions.

This post is a lot more hand-wavy than I’d like, so please comment if you can suggest a more formal treatment of these topics (even if it shows that I’m completely off base!).

Why are the popular parts of government growing the slowest?

I’m at Duke University right now, taking a summer course on the history of economics in the 20th century, so posting may be light. Drop me a line if you have recommendations for things to do near Duke or want to say hello!

So, one of the stories of the current recession is that in spite of a massive federal stimulus, overall government spending has been relatively flat. We did not engage in the austerity program some conservatives wanted (and some European countries actually tried), but we also didn’t see a big uptick in total spending. Here’s Krugman’s take:

Overall, then, government’s role has not increased. The whole Obama/socialist thing never happened.

And here’s the thing: government’s role should have increased, at least for now. We still have a private sector in the throes of deleveraging, which means that this is a time for the government — which can borrow at negative real interest rates! — to be spending more. Instead, the entire brief increase in government’s role during 2009-2010 has now been unwound, with more cuts to come.

The reason that overall government spending is basically flat has to do with the counteracting forces of increased federal government spending (via the stimulus and automatic stabilizers, i.e. social welfare programs) and decreased state and local government spending (forced by balanced budget amendments, among other things). For example, local governments are actually cutting employees. Here’s a graph of federal vs. state and local current expenditures (not adjusted for inflation, but indexed to 2007 = 100).
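For anyone who wants to reconstruct a graph like that, the indexing step is simple (a sketch with made-up numbers; the real series would come from something like the BEA’s NIPA tables):

```python
import pandas as pd

# Hypothetical current expenditures ($ billions) by year
spending = pd.DataFrame(
    {"federal": [2800, 3100, 3500, 3700, 3750],
     "state_local": [2100, 2180, 2200, 2150, 2120]},
    index=[2007, 2008, 2009, 2010, 2011],
)

# Index each series to 2007 = 100 so different-sized levels are comparable
indexed = spending / spending.loc[2007] * 100
print(indexed.round(1))  # federal climbs; state & local flattens, then falls
```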

Here’s a similar graph of real (adjusted for inflation) state and local expenditures and investment vs. all government.

State and local government spending is decreasing as a proportion of total government spending – and, by this measure, actually decreasing in real terms.

At the same time, polling data consistently show that state and local governments poll better than the federal government. For example, here’s a poll showing approval ratings for state and local governments and for the executive and judicial branches:

So, the consistently most popular parts of government are flat or shrinking, while the less popular parts are growing. Oops?