The Prisoner's Dilemma

I've seen explanations of the prisoner's dilemma in about a jillion places (e.g., Metamagical Themas), but I still don't have the feeling that it's common knowledge. So, here's one more explanation.
Two conspirators are captured, and are now prisoners. They are interrogated separately, and each has to choose whether to stay silent or confess, under the following conditions: if both stay silent, both serve short terms; if one confesses while the other stays silent, the confessor goes free while the other serves a long term; and if both confess, both serve terms somewhere in between.
As a prisoner, what should you do? That's the dilemma.
Now, here's what makes the prisoner's dilemma interesting. Suppose both prisoners are purely rational decision makers. We'd like to think they could cooperate and stay silent, but actually there's a compelling game-theoretic argument in favor of confessing.
If my partner doesn't confess, I'm better off confessing, because I'll go free; if, however, my partner does confess, I'm still better off confessing, because it will make my term slightly less severe.
As a result, rational prisoners will both confess, even though it gets them more severe terms than if they'd both stayed silent.
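The dominance argument can be checked mechanically. Here's a minimal sketch in Python; the specific sentence lengths are my own illustration, chosen only to fit the story above.

```python
# Hypothetical sentence lengths (years; lower is better), chosen to fit
# the story: going free beats a short term, and mutual confession is
# slightly less severe for each than being the lone holdout.
SENTENCE = {
    ("silent",  "silent"):  1,
    ("silent",  "confess"): 10,
    ("confess", "silent"):  0,   # the confessor goes free
    ("confess", "confess"): 8,
}

def best_response(partner):
    """The move that minimizes my sentence, given my partner's move."""
    return min(("silent", "confess"), key=lambda me: SENTENCE[(me, partner)])

# Whatever the partner does, confessing is the better reply -- that's
# what makes it a dominant strategy.
assert best_response("silent") == "confess"
assert best_response("confess") == "confess"
```

Note that (confess, confess) is the only pair of mutual best responses, even though (silent, silent) would leave both prisoners better off.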
As an aspiring rational decision maker, I find it distressing to think that there's any situation where rational analysis leads to a wrong answer. It's even more distressing when one realizes that the prisoner's dilemma is not just some contrivance, but is actually a good model for many common situations. To recognize those other situations, it helps to generalize a bit, to think of a whole crowd of players (prisoners), each having to choose whether to cooperate (stay silent) or defect (confess).
Here's an example from when I was a kid. When my family would visit national parks, I'd always want to take things home with me—rocks, sticks, and so on—and my mom would always correct me, saying “what if everyone took just one rock?”
Energy conservation is another decent example. If people would all cooperate to conserve energy, the supply would be more than adequate, and prices would be low. However, in such an environment of low prices, it would be advantageous to each individual to defect by using more energy.
Fortunately, there seems to be a happy ending: although the dilemma led to many years of perplexity, some clever folks (I don't know who, exactly) eventually realized that the game theory argument has major caveats that make it inapplicable in many cases. I won't claim this is a complete list, but in my mind there are two such caveats.
First, the game must take all consequences into account. If a prisoner who confesses and goes free is likely to be killed by associates of the other prisoner, well, confessing suddenly starts to look less appealing.
Second, the game must be played only once. If the game is repeated (iterated), game theory can properly only be applied to the series as a whole. Each player, instead of choosing whether to cooperate or defect, chooses an entire strategy for when to cooperate and defect.
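To make "an entire strategy" concrete: a strategy is just a rule mapping the opponent's moves so far to the next move. Here's a minimal sketch in Python; the point values and the particular strategies (tit-for-tat, always-defect) are standard illustrations, not something from this essay.

```python
# Illustrative payoffs (points gained; higher is better): mutual cooperation
# pays 3 each, mutual defection 1 each, and a lone defector gets 5 while
# the lone cooperator gets 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_moves):
    """Cooperate first, then copy whatever the opponent did last."""
    return "C" if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds):
    """Play an iterated game; return each player's total score."""
    moves_a, moves_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(moves_b), strategy_b(moves_a)
        gain_a, gain_b = PAYOFF[(a, b)]
        score_a += gain_a
        score_b += gain_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b
```

Head to head over ten rounds, `play(tit_for_tat, always_defect, 10)` gives (9, 14): the defector still comes out ahead in any single pairing.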
Even if a game is played only once, it's possible that defecting would make others realize you're not to be trusted, and would influence other, similar games in the future; this influence on related games is one of the consequences that must be taken into account.
So, if you're playing exactly one game with absolutely no external consequences, defecting really is the correct solution. Conversely, if you're playing a series of interrelated games, cooperating really is a rationally justifiable choice … see Evolutionarily Stable Strategies for more details.
It turns out I didn't get the point about iterated games quite right. It's true that in the iterated game each player must choose an entire strategy, but the dominant strategy is still to defect every time. Cooperation only enters into the picture when one considers populations of strategies.
I got something else wrong, too. Two of my three examples—national parks and energy conservation—are examples not of the prisoner's dilemma, but rather of the tragedy of the commons.
I just can't leave well enough alone; here are a few more points about iterated games.
The statement that the dominant strategy is to defect every time is only true for games of known finite length. Here's how Axelrod, summarizing the work of others, explains it in The Evolution of Cooperation.
… the players still have no incentive to cooperate. This is certainly true on the last move since there is no future to influence. On the next-to-last move neither player will have an incentive to cooperate since they can both anticipate a defection by the other player on the very last move. Such a line of reasoning implies that the game will unravel all the way back to mutual defection on the first move …
(The unraveling, by the way, reminds me of the paradox of the unexpected hanging.)
The unraveling proof fails for games of indefinite length, and in fact there turns out to be no single dominant strategy for such games. One might think, then, that indefinite length would be a necessary condition for cooperation, but, surprisingly, it isn't. There really is something about considering populations of strategies that allows cooperation to emerge, even if the individual games are of known finite length. (See, for example, the first of Axelrod's two tournaments.)
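The population effect can be seen in miniature with a self-contained sketch of an Axelrod-style round-robin tournament. Everything here—the four strategies, the payoff values, the ten-round games—is an illustrative choice of mine, not Axelrod's actual entrants.

```python
# Illustrative payoffs: higher is better.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp):
    return "C" if not opp else opp[-1]

def always_cooperate(opp):
    return "C"

def always_defect(opp):
    return "D"

def grudger(opp):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in opp else "C"

def match(s1, s2, rounds=10):
    """Play one iterated game; return each player's total score."""
    h1, h2, p1, p2 = [], [], 0, 0
    for _ in range(rounds):
        a, b = s1(h2), s2(h1)
        ga, gb = PAYOFF[(a, b)]
        p1, p2 = p1 + ga, p2 + gb
        h1.append(a)
        h2.append(b)
    return p1, p2

def tournament(strategies, rounds=10):
    """Round-robin: each pairing once, self-play included."""
    totals = {s.__name__: 0 for s in strategies}
    for i, s1 in enumerate(strategies):
        for s2 in strategies[i:]:
            p1, p2 = match(s1, s2, rounds)
            totals[s1.__name__] += p1
            totals[s2.__name__] += p2
    return totals
```

With this particular field, tit_for_tat and grudger tie for the top total while always_defect comes in last: the reciprocating strategies prosper because they cooperate with each other, even though every individual game is of known finite length.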
Sigh … one more correction. I just heard that the character of Dr. Strangelove was based on Herman Kahn. Was it not based on von Neumann, then? I really have no solid information about any of this.