Learning Goods, Knowledge Problems, and Hats

I lost my hat this weekend. What’s remarkable about that fact is that a year ago, I would never have imagined it happening. You see, a year ago, I was not the kind of person who wore hats. Long ago, in middle school, I rocked a baseball cap whose brim I refused to bend into a curve.* Since then, I’ve been hat-free. What changed, and what does this have to do with economics?

Microeconomics generally assumes that people are rational. Rationality in economics is defined by three basic axioms about preferences: that preferences are complete, transitive, and reflexive. Note that none of these axioms implies self-interest – self-interest has nothing to do with the technical definition of rationality, even if it is the dominant way rationality gets modeled.

Reflexivity is a silly math-y one – it says you are indifferent between something and itself.

Transitivity is the most criticized: it says that if you prefer A to B, and B to C, then you must prefer A to C. There are all sorts of empirical examples of this failing**, but it’s not clear to me which of them are super important as critiques of the whole enterprise. Transitivity is really important because if preferences aren’t transitive then you can’t talk about a “best” option: preferences can cycle, so that for every choice there is a better choice.
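The cycle case can be sketched in a few lines (a minimal sketch; the three options and the cyclic preferences are invented for illustration):

```python
# A strict preference cycle: A > B, B > C, C > A.
# prefers[x] lists everything x is strictly preferred to.
prefers = {"A": {"B"}, "B": {"C"}, "C": {"A"}}

def best_options(options, prefers):
    """Return the options that nothing else is strictly preferred to."""
    return [x for x in options
            if not any(x in prefers[y] for y in options if y != x)]

print(best_options(["A", "B", "C"], prefers))  # [] -- every option has a better one
```

With a transitive relation (say A over B over C), the same function would return the unique best option; the cycle is exactly what makes “best” meaningless.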

Completeness is much less criticized, but it is a really interesting assumption in some ways, and it connects to the idea of stability that I talked about previously. Completeness says that for any two options A and B, you either prefer A to B, prefer B to A, or are indifferent between them. Well, what else is there, right? At the risk of starting a one-blog Abbott and Costello routine: I don’t know.*** That is, the possibility excluded by the completeness axiom is that you don’t know whether you prefer A to B. One student in today’s class suggested the phrase “perfectly decisive,” which gets close to the idea – you can always choose (even if the choice is “either is fine”).

How realistic is that assumption? Or, perhaps more interestingly, what does it imply about someone if that axiom holds or fails to hold? One example of a possible problem for completeness is so-called “learning goods”: goods whose value is unknown until we try them out. A common place these goods come up is with new inventions: a new kind of car, the personal computer, etc. Without some direct experience with a new object, how can we evaluate it? And if we do evaluate it and determine our preferences in advance, they might well be changed if we are for some reason given the opportunity to use the good (hence free samples).

What does this have to do with hats? Hats are about as far from a new or innovative good as you can get. So presumably, I shouldn’t have any problem knowing whether I prefer wearing a hat to not wearing one, right?

Last year, I won a free hat at pub trivia. It was a Jameson’s hat, but unlike the baseball caps I’d won previously, it was styled more like Castro’s famous cap. At some point, I tried the hat out and found I rather liked it. As a glasses-wearer, I find it’s fantastic for keeping my glasses dry in a light drizzle, and for keeping the sun off my eyes. When it started falling apart, I bought a nicer version made by Goorin Bros, and have been wearing it most of the summer (until this weekend’s unfortunate disappearance). I have become, in other words, a hat-wearer. What happened?

I think one way of interpreting it would be to say that my preferences were complete, but very soft. That is, I was capable of deciding between hat and no hat, but my preference for no hat was based on very little – I hadn’t tried out all the possible hats looking for one I happened to like, and even though I knew hats like the ones I eventually started wearing existed, I’d never thought I would like them. I was simply not a person who wore hats. Being suddenly thrust into the role of hat-owner****, and owner of a particular kind of hat, changed that – and my preferences changed.

All this gets at a certain kind of knowledge problem. Usually “the knowledge problem” is invoked in economics as a critique of central planning: following Hayek and others, central planning cannot effectively incorporate all the dispersed knowledge held by individuals, and thus cannot be as efficient as a decentralized, presumably market-based, system. But in this case, the knowledge problem obtained at the individual level: I thought I knew what my options were, and my preferences over those options felt complete, but with a tiny bit of extra experience, I went from “no hat” to “where did my hat go?!”

I think the various “nudge”-style arguments (also known as “libertarian paternalism”) work off this sort of premise. These interventions sometimes work by changing the default options without restricting the full set of choices available – things like moving to opt-out instead of opt-in for retirement savings plans. Once you try it, and get used to being the kind of person who saves for retirement (or saves a certain amount for retirement), perhaps you’ll grow to like it. We (the feds, the company pushing a new product, the state, whatever) have a lot of data showing that people like you will like something once you’ve tried it, so we’ll do what it takes to get you to have that first experience, and then rely on your updated preferences to do the rest. So here, the knowledge problem runs backwards: I only have an N of 1; Google has an N in the billions. If they can correctly guess which subset of those N I am like, they can push me toward new things that I’m likely to like once I try them. And perhaps the same is true of the federal government. Either way, the premise is that my preferences are either incomplete (I don’t have an opinion about some things) or very soft, and that premise really alters how we think about some of the standard economic problems.

* Why would I want to break my hat? It made no sense.
** Our professor used a nice example of a sequence of jackets ranging from black to white, where every adjacent pair of jackets differed very slightly in color (black, very dark gray, dark gray, gray, etc.). It’s possible that you are indifferent between any adjacent pair – dark gray vs. very dark gray, for example – but have strong preferences about white vs. black.
*** Third base!
**** Lots more sociology goes here, but I’m no expert on role theory, presentation of self, identity, etc. so I’ll stick to the mangled economics.
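The jacket example in the second footnote can be put in code (a minimal sketch; the 0-to-10 shade scale and the one-step perceptual threshold are invented): indifference driven by a “can’t tell them apart” threshold is intransitive.

```python
# Shades from 0 (black) to 10 (white); you can't distinguish shades
# unless they differ by more than one step on the scale.
def indifferent(shade_a, shade_b, threshold=1):
    """Indifference: the two shades are perceptually indistinguishable."""
    return abs(shade_a - shade_b) <= threshold

# Indifferent between every adjacent pair of jackets...
assert all(indifferent(i, i + 1) for i in range(10))
# ...but not between black and white: if indifference were transitive,
# chaining the ten adjacent pairs would force indifference here too.
assert not indifferent(0, 10)
```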



  1. John

     September 12, 2011

    Great post—it made me realize my primary critique of these axioms isn’t of transitivity but of completeness, which goes somewhat deeper. To respond to your question (“What else is there?”), I think there’s another possibility:

    Completeness supposes that any set of goods can be totally ordered: a < b < c. But in reality this is often (more often than not, I think) not the case. In reality, one’s preference for a good is often an n-dimensional vector, and considering it as a scalar amounts to a lossy compression. Say I want to choose between two jackets, one blue, one red. I like blue more than red, but the cut of the blue jacket is somewhat less my style than that of the red one. And maybe the red one has a really well-made iPad pocket, while the blue one has a built-in AM/FM radio. So along one dimension I prefer the blue one, and along another I prefer the red, and along others there’s no basis for comparison.

    Now, if you ask me on a survey, "Which do you prefer?", I might be unconsciously nudged to collapse these into scalars in order to produce an answer, but it'd be an unstable one because it wouldn't accurately reflect reality.
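The multi-dimensional point in the comment above can be sketched as a Pareto comparison (the jacket attributes and scores are made up for illustration): compare goods coordinate-wise, and report incomparability when each wins on some dimension.

```python
def compare(u, v):
    """Pareto-compare two preference vectors of equal length.
    Returns 'first', 'second', 'indifferent', or 'incomparable'."""
    first_wins = any(a > b for a, b in zip(u, v))
    second_wins = any(b > a for a, b in zip(u, v))
    if first_wins and second_wins:
        return "incomparable"   # completeness fails here
    if first_wins:
        return "first"
    if second_wins:
        return "second"
    return "indifferent"

# (color, cut, pocket) scores -- invented numbers for the two jackets.
blue_jacket = (9, 4, 2)
red_jacket = (6, 7, 8)
print(compare(blue_jacket, red_jacket))  # incomparable
```

Forcing an answer on a survey amounts to collapsing these vectors to scalars with some weighting, and different weightings (or moods) give different answers – the instability the comment describes.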

  2. afinetheorem

     September 12, 2011

    Interesting, as always, but I don’t think it’s right to say completeness goes uncriticized. It is by far the axiom most often relaxed by decision theorists! There is a huge literature in that area discussing which economic results hold and which do not in the absence of completeness. Maccheroni and Efe Ok are modern guys doing a lot of work here, but Nobel winner Bob Aumann has a paper from 1962 on this topic. A really formal treatment that has been well received recently (for applied use) is the “choice from lists” papers by Salant and Rubinstein (there are many related cites within). And of course the behaviorists have a bunch of work here as well: you can think of any framing problem as being related to exactly the incompleteness that you discuss.

  3. joshmccabe

     September 12, 2011

    I wouldn’t say so much that it’s the knowledge problem backwards (it’s the same old knowledge problem), but it is a good criticism of hardline “individuals always know what is best for themselves” arguments. A lot of free market economists have been (dare I say irrationally) hostile to behavioral economics arguments because of what they see as creeping paternalism.

    On a personal note, I also recently lost my decade-old, ragged, discolored, torn-up, stained Red Sox cap, which I wore quite frequently. My fiancée replaced it with a brand spanking new cap which I’ve hardly worn. I tried to explain that commensuration as a social process didn’t apply in this situation. She just thinks I’m an idiot.

    • I think the correct answer was to like it better/more differently *because* she gave it to you. But I’m just guessing.

      Also, sad to hear you guys stopped blogging. If the spirit ever moves you, I’d be happy to host a guest post.

  4. Joe

     September 15, 2011

    Just a sidenote: completeness implies reflexivity. So rationality only requires completeness and transitivity.

    • Why? Completeness says for all a, b: a > b, b > a, or a ~ b. So wouldn’t it be possible for a > a (without reflexivity)?
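For what it’s worth, the exchange above turns on how the axiom is stated (a sketch, using the standard weak-preference formulation): completeness is usually written with the weak relation $\succsim$ (“at least as good as”), and in that form it does imply reflexivity:

```latex
\text{Completeness: } \forall a, b: \quad a \succsim b \ \text{ or } \ b \succsim a.
\text{Taking } b = a: \quad a \succsim a \ \text{ or } \ a \succsim a, \ \text{ i.e. } a \succsim a \ (\text{reflexivity}).
```

Stated with strict preference and indifference, as in the reply above, setting a = b only yields “a > a or a ~ a,” so that version by itself does not settle the question – one also needs the (usual) stipulation that strict preference is irreflexive.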
