
Video – TED lecture: Empathy, cooperation, fairness and reciprocity – caring about the well-being of others seems like a very human trait. But Frans de Waal shares some surprising videos of behavioural tests, on primates and other mammals, that show how many of these moral traits all of us share. (TED, Nov. 2011, link).

Evolutionary explanations are built around the principle that all that natural selection can work with are the effects of behaviour – not the motivation behind it. This means there is only one logical starting point for evolutionary accounts, as explained by Trivers (2002, p. 6): “You begin with the effect of behaviour on actors and recipients; you deal with the problem of internal motivation, which is a secondary problem, afterwards. . . . [I]f you start with motivation, you have given up the evolutionary analysis at the outset.” ~ Frans B.M. de Waal, 2008.

Do animals have morals? And above all, did morality evolve? The question is pertinent to a broad range of quite different areas, such as Computer Science and Norm Generation (e.g. link for an MSc thesis) in bio-inspired Computation and Artificial Life, but here fresh answers come directly from Biology. Besides the striking video lecture above, what follows are two excerpts (abstract and conclusions) from a 2008 paper by Frans B.M. de Waal (Living Links Center lab., Emory University, link): de Waal, F.B.M. (2008). Putting the altruism back in altruism: The evolution of empathy. Ann. Rev. Psychol. 59: 279-300 (full PDF link):

(…) Abstract: Evolutionary theory postulates that altruistic behaviour evolved for the return-benefits it bears the performer. For return-benefits to play a motivational role, however, they need to be experienced by the organism. Motivational analyses should restrict themselves, therefore, to the altruistic impulse and its knowable consequences. Empathy is an ideal candidate mechanism to underlie so-called directed altruism, i.e., altruism in response to another’s pain, need, or distress. Evidence is accumulating that this mechanism is phylogenetically ancient, probably as old as mammals and birds. Perception of the emotional state of another automatically activates shared representations causing a matching emotional state in the observer. With increasing cognition, state-matching evolved into more complex forms, including concern for the other and perspective-taking. Empathy-induced altruism derives its strength from the emotional stake it offers the self in the other’s welfare. The dynamics of the empathy mechanism agree with predictions from kin selection and reciprocal altruism theory. (…)

(…) Conclusion: More than three decades ago, biologists deliberately removed the altruism from altruism. There is now increasing evidence that the brain is hardwired for social connection, and that the same empathy mechanism proposed to underlie human altruism (Batson 1991) may underlie the directed altruism of other animals. Empathy could well provide the main motivation making individuals who have exchanged benefits in the past to continue doing so in the future. Instead of assuming learned expectations or calculations about future benefits, this approach emphasizes a spontaneous altruistic impulse and a mediating role of the emotions. It is summarized in the five conclusions below: 1. An evolutionarily parsimonious account (cf. de Waal 1999) of directed altruism assumes similar motivational processes in humans and other animals. 2. Empathy, broadly defined, is a phylogenetically ancient capacity. 3. Without the emotional engagement brought about by empathy, it is unclear what could motivate the extremely costly helping behavior occasionally observed in social animals. 4. Consistent with kin selection and reciprocal altruism theory, empathy favours familiar individuals and previous cooperators, and is biased against previous defectors. 5. Combined with perspective-taking abilities, empathy’s motivational autonomy opens the door to intentionally altruistic altruism in a few large-brained species. (…) in, de Waal, F.B.M. (2008). Putting the altruism back in altruism: The evolution of empathy. Ann. Rev. Psychol. 59: 279-300 (full PDF link).

Frans de Waal’s research does not end here, of course. He is a ubiquitous influence and writer in many related areas, such as Cognition, Communication, Crowding/Conflict Resolution, Empathy and Altruism, Social Learning and Culture, Sharing and Cooperation and, last but not least, Behavioural Economics. All of his papers are freely available on-line, on a web page to which I vividly recommend a long visit.

[…] Nash: See, if I derive an equilibrium (link) where prevalence is a non-singular event where nobody loses, can you imagine the effect that would have on conflict scenarios, arms negotiations… (…) currency exchange? […], in Memorable quotes for “A Beautiful Mind” (2001), movie directed by Ron Howard, starring Russell Crowe, along with Ed Harris.

Did you just mention privatization, “increase in productivity” and self-interest as a solution? Well, the answer depends a lot on whether you are in a pre- or post-equilibrium physical state. The distribution curve in question is more or less a Bell curve. So maybe it’s time for all of us to strike a proper balance here, having a brief look at it from a recent scientific perspective.

Let us consider over-exploitation. Imagine a situation where multiple herders share a common parcel of land, on which they are each entitled to let their cows graze. In Hardin‘s (1968) example (check his seminal paper below), it is in each herder’s interest to put the next (and succeeding) cows he acquires onto the land, even if the quality of the common is damaged for all as a result, through overgrazing. The herder receives all of the benefits from an additional cow, while the damage to the common is shared by the entire group. If all herders make this individually rational economic decision, the common will be depleted or even destroyed, to the detriment of all, causing over-exploitation.
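Hardin’s arithmetic can be made concrete with a minimal sketch (the numbers below are illustrative assumptions, not taken from his paper): each extra cow pays its full benefit to one herder, while the grazing damage it causes is split among all of them.

```python
# Minimal sketch of Hardin's commons arithmetic (illustrative numbers).
# Each extra cow gives its owner the full private benefit, while the
# grazing damage it causes is shared equally among all herders.

def herder_marginal_gain(benefit, damage, n_herders):
    """Net gain to the ONE herder who adds an extra cow."""
    return benefit - damage / n_herders

def group_marginal_gain(benefit, damage):
    """Net gain to the WHOLE group from that same cow."""
    return benefit - damage

b, d, n = 10.0, 15.0, 5  # benefit per cow, total damage per cow, herders

print(herder_marginal_gain(b, d, n))  # 7.0  -> individually rational to add it
print(group_marginal_gain(b, d))      # -5.0 -> collectively destructive
```

As long as the shared damage per herder (d/n) stays below the private benefit while the total damage exceeds it, every herder keeps adding cows and the common is run down.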

Video – “Balance“: Wolfgang and Christoph Lauenstein (Directors), Germany, 1989. Academy Award for Best Animated Short (1989).

This huge dilemma, known as “The tragedy of the commons”, arises from a situation in which multiple individuals, acting independently and rationally consulting their own self-interest, will ultimately deplete a shared limited resource, even when it is clear that it is not in anyone’s long-term interest for this to happen. In my own “self-interest”, allow me to start this post directly with a key passage, followed by two videos and a final abstract. The first paper below is in fact the seminal Garrett Hardin paper, an influential article titled precisely “The Tragedy of the Commons”, written in December 1968 and first published in the journal Science (Science 162, 1243-1248, full PDF). One of its key passages goes like this. Hardin asks:

[…] In a welfare state, how shall we deal with the family, the religion, the race, or the class (or indeed any distinguishable and cohesive group) that adopts overbreeding as a policy to secure its own aggrandizement (13)? To couple the concept of freedom to breed with the belief that everyone born has an equal right to the commons is to lock the world into a tragic course of action. […]

So the question is: driven by rational choice, are we as Humanity all doomed to over-exploitation of our common resources? Will we all end up in a situation where any tiny move will drive us into disaster, as the last seconds of the animated short movie above clearly and brilliantly illustrate?

Fortunately, the answer is no, according to recent research. Besides the fact that Hardin‘s work has been criticized on grounds of historical inaccuracy, and for failing to distinguish between common property and open access resources (Wikipedia entry), there is subsequent work by Elinor Ostrom and others suggesting that using Hardin‘s work to argue for privatization of resources is an “overstatement” of the case.

Video – Elinor Ostrom: “Beyond the tragedy of commons“. Stockholm whiteboard seminars. (video lecture, 8:26 min.)

In fact, according to Ostrom’s work on the study of common pool resources (CPR), awarded the 2009 Nobel Prize in Economic Sciences, there are eight design principles of stable local common pool resource management that make it possible to avoid the present dilemma. Among others, one of her works I definitely recommend reading is her Presidential address to the American Political Science Association, delivered back in 1997 and entitled “A Behavioral Approach to the Rational Choice Theory of Collective Action” (The American Political Science Review, Vol. 92, No. 1, pp. 1-22, Mar. 1998). Her impressive paper starts like this:

[…] Extensive empirical evidence and theoretical developments in multiple disciplines stimulate a need to expand the range of rational choice models to be used as a foundation for the study of social dilemmas and collective action. After an introduction to the problem of overcoming social dilemmas through collective action, the remainder of this article is divided into six sections. The first briefly reviews the theoretical predictions of currently accepted rational choice theory related to social dilemmas. The second section summarizes the challenges to the sole reliance on a complete model of rationality presented by extensive experimental research. In the third section, I discuss two major empirical findings that begin to show how individuals achieve results that are “better than rational” by building conditions where reciprocity, reputation, and trust can help to overcome the strong temptations of short-run self-interest. The fourth section raises the possibility of developing second-generation models of rationality, the fifth section develops an initial theoretical scenario, and the final section concludes by examining the implications of placing reciprocity, reputation, and trust at the core of an empirically tested, behavioral theory of collective action. […]

” […] What I refused to see is what the prisoner’s dilemma teaches: anyone who plays the “All Cooperate” strategy is a sucker, and incents the other to defect on every move. I now believe that the lesson of the prisoner’s dilemma is that a robust ethic succeeds where a weak one fails. Be fair, be strong, reward cooperation and punish defection, and you will have nothing to regret. […] “, in An Ethic Based on the Prisoner’s Dilemma, The Ethical Spectacle, September, 1995.

[…] Martin Nowak is known for his many influential papers on cooperation and in theoretical biology. This book is a popular account of his scientific adventures, personal motivations and collaborations. Given his work, it is remarkable that this book contains neither mathematical equations nor graphical illustrations. Nowak is currently a professor of mathematics and biology at Harvard University. Moreover, since 2003 he has directed his own research program on Evolutionary Dynamics. This program has been made possible by a $30 million pledge by Wall Street tycoon Jeffrey Epstein. This is just one ingredient of the remarkable story of Nowak’s scientific life. The book starts by laying out the puzzle of cooperation, illustrated by the prisoner’s dilemma. If both players are selfish and rational they will defect. Why, then, do we see so much cooperation in human societies and other domains of the biological world? This puzzle was introduced to Nowak by Karl Sigmund, a professor of mathematics at the University of Vienna, while Nowak was a student in biochemistry. Sigmund talked about the famous Axelrod tournament and Nowak got hooked. The tournament of Axelrod assumed that the strategies did not make errors. What if there are errors? Will Tit for Tat still be a good strategy? His analysis showed that a more promising strategy is Win-Stay, Lose-Shift. This strategy cooperates if both agents did the same thing in the previous round, and defects if not. Hence agents can forgive.

The analysis of strategies that do well under direct reciprocity is one of the five chapters in which Nowak discusses five ways in which the prisoner’s dilemma can be solved. The second chapter is on indirect reciprocity. In a landmark paper with Karl Sigmund, Nowak showed that when agents derive information on their reputation (image score), cooperation can evolve in one-shot prisoner’s dilemmas. The third chapter is on spatial games and features another landmark paper, on spatial chaos. This paper, written with Lord Robert May, shows that cooperation can evolve if agents interact with neighbours and imitate the best strategy of their neighbours. The fourth chapter is on group selection. This controversial approach is now better known as multi-level selection. Finally, the fifth chapter is on kin selection, the first theory of cooperation based on genetic relatedness. The discussion of the five ways to overcome the prisoner’s dilemma is especially interesting due to the discussion of the scientific process: how long hikes with Sigmund led to inspirations that made Nowak drop all other activities he was working on; how chance meetings led to new ideas; how he got to Oxford, Princeton and finally Harvard.

The second part of the book discusses cooperation in biology. It covers his applications to the origins of life, the study of cancer and the dominance of ant colonies. This work might be less familiar to the readers of JASSS. Especially the work on cancer, defectors in our own biology, can lead to practical applications. The final part of the book focuses on human societies. Humans are called supercooperators since they are the only organism that uses all five ways to solve social dilemmas. First the evolution of language is discussed. Nowak made important contributions to the study of language by simulating agents benefiting from mutual understanding in language games. According to Nowak, the emergence of language is the most important development in life in the last 600 million years. It resulted in new types of cooperation. Especially in the context of indirect reciprocity it is key to have language. We need gossip and other types of information transmission to derive reliable estimates of the reputation of strangers.

Then Nowak discusses public goods and the use of costly punishment to sustain cooperation. This is the only part of the book where he discusses empirical research. With two graduate students he performed experiments showing that punishment is not something special, but in line with earlier work on reciprocity and tit for tat. Then Nowak continues with his recent work on network theory and set theory. The book closes with a reflection on the consequences of his work. Cooperation is a crucial ingredient of evolution, but there will always be cycles. The question is how to re-establish cooperation after it has collapsed. This book provides a nice overview of the findings of Nowak’s work. Note, however, that Nowak has substantial work in other areas of research not discussed in the book, such as infectious diseases. Together with science writer Roger Highfield, Nowak provides an inspirational story on science in practice. This covers the importance of his mentors in his early years, and his current role as a mentor to his students at Harvard. In conclusion, this is a marvellous book. Although I may not always agree with the findings of Nowak’s research, it is a motivating account of the messy practice of science. I highly recommend this book for students and faculty in social simulation and science in general. […], Reviewed by Marco A. Janssen
(Arizona State University) on JASSS 2011 [Nowak, Martin, Supercooperators: Altruism, Evolution, and Why We Need Each Other to Succeed, ISBN 9781439100189 (pb), Free Press (The): New York, NY, 2011].
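The point the review makes about errors can be checked with a small self-play simulation sketch (the payoff values and the 5% error rate are my assumptions, following the review’s description): with noisy moves, Win-Stay, Lose-Shift recovers mutual cooperation within two rounds, while Tit for Tat echoes each mistake back and forth.

```python
import random

R, T, S, P = 3, 5, 0, 1  # standard Prisoner's Dilemma payoffs (assumed)

def payoff(a, b):
    """Payoffs (to a, to b) for one round of the Prisoner's Dilemma."""
    table = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
             ('D', 'C'): (T, S), ('D', 'D'): (P, P)}
    return table[(a, b)]

def tft(my_prev, opp_prev):
    """Tit for Tat: start nice, then copy the opponent's last move."""
    return opp_prev if opp_prev else 'C'

def wsls(my_prev, opp_prev):
    """Win-Stay, Lose-Shift: cooperate iff both did the same last round."""
    if my_prev is None:
        return 'C'
    return 'C' if my_prev == opp_prev else 'D'

def self_play(strategy, rounds=10000, error=0.05, seed=1):
    """Average per-player payoff when a strategy plays itself with noise."""
    rng = random.Random(seed)
    a_prev = b_prev = None
    total = 0
    for _ in range(rounds):
        a = strategy(a_prev, b_prev)
        b = strategy(b_prev, a_prev)
        if rng.random() < error:        # trembling hand: move gets flipped
            a = 'D' if a == 'C' else 'C'
        if rng.random() < error:
            b = 'D' if b == 'C' else 'C'
        pa, pb = payoff(a, b)
        total += pa + pb
        a_prev, b_prev = a, b
    return total / (2 * rounds)

print('TFT  vs TFT :', self_play(tft))   # dragged down by echoed mistakes
print('WSLS vs WSLS:', self_play(wsls))  # stays close to mutual cooperation
```

Under these assumptions WSLS scores clearly better against itself than TFT does, which is the forgiveness the review alludes to.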

Fig. – Christ having some problems passing the right message. Comic strip from Zach Weiner (Saturday Morning Breakfast Cereal blog).

Social psychologists, sociologists, and economists have all proposed theories of norm emergence. In general, they view norm emergence as depending on three factors: (i) actors’ preferences regarding their own behaviour (inclinations); (ii) actors’ preferences regarding the behaviour of others (regulatory interests); and (iii) measures for enforcing norms (enforcement resources), such as access to sanctions and information. Whereas most studies of norm emergence have focused on inclinations or enforcement resources, this article analyses the role of regulatory interests in norm emergence. Specifically, it analyses systems of collective sanctions in which, when an individual violates or complies with a rule, not merely the individual but other members of that person’s group as well are collectively punished or rewarded by an external agent. These collective sanctions give individuals an incentive to regulate one another’s behaviour. This paper demonstrates that when a group is subjected to collective sanctions, a variety of responses may be rational: the group may either create a secondary sanctioning system to enforce the agent’s dictates, or it may revolt against the agent to destroy its sanctioning capacity. According to the proposed theoretical model, the optimal response depends quite sensitively on the group’s size, internal cohesion, and related factors. Abstract: D.D. Heckathorn, “Collective sanctions and the creation of prisoner’s dilemma norms“, American Journal of Sociology (1988), Volume: 94, Issue: 3, Publisher: University of Chicago Press, Pages: 535-562.

Video – […] see, in this world, there are two kinds of people, … my friend, … those with ‘loaded guns’ and those who dig. You dig. […] Last 8 minutes finale of The Good, the Bad and the Ugly (Il buono, il brutto, il cattivo), a 1966 Italian epic spaghetti western film directed by Sergio Leone, starring Lee Van Cleef, Eli Wallach and Clint Eastwood in the title roles, playing a kind of 3-agent Prisoner’s dilemma game. Now, one of them, the Good (Clint Eastwood), is the only one who knows he is in fact just playing a 2-agent PD game. And that, besides the inherent non-linear complexity of the ‘game’, makes all the difference…

Book – Karl Sigmund, The Calculus of Selfishness, Princeton Series on Theoretical and Computational Biology, Princeton University Press,  ISBN: 978-1-4008-3225-5, 192 pp., 2009.

[…] Cooperation means that a donor pays a cost, c, for a recipient to get a benefit, b. In evolutionary biology, cost and benefit are measured in terms of fitness. While mutation and selection represent the main forces of evolutionary dynamics, cooperation is a fundamental principle that is required for every level of biological organization. Individual cells rely on cooperation among their components. Multicellular organisms exist because of cooperation among their cells. Social insects are masters of cooperation. Most aspects of human society are based on mechanisms that promote cooperation. Whenever evolution constructs something entirely new (such as multicellularity or human language), cooperation is needed. Evolutionary construction is based on cooperation. The five rules for cooperation which we examine in this chapter are: kin selection, direct reciprocity, indirect reciprocity, graph selection, and group selection. Each of these can promote cooperation if specific conditions are fulfilled. […], Martin A. Nowak, Karl Sigmund, How populations cohere: five rules for cooperation, in R. M. May and A. McLean (eds.) Theoretical Ecology: Principles and Applications, Oxford UP, Oxford (2007), 7-16. [PDF]
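Each of these five rules comes with a simple benefit-to-cost threshold. The sketch below encodes them as given in Nowak’s companion paper (“Five Rules for the Evolution of Cooperation”, Science 314, 2006); the example parameter values are my own illustrative assumptions.

```python
# Benefit/cost thresholds for the five rules (after Nowak, Science 2006).
# b: benefit to the recipient, c: cost to the donor.

def kin_selection(b, c, r):
    """r: genetic relatedness between donor and recipient (Hamilton's rule)."""
    return b / c > 1 / r

def direct_reciprocity(b, c, w):
    """w: probability of another round with the same partner."""
    return b / c > 1 / w

def indirect_reciprocity(b, c, q):
    """q: probability of knowing someone's reputation."""
    return b / c > 1 / q

def network_reciprocity(b, c, k):
    """k: average number of neighbours on the interaction graph."""
    return b / c > k

def group_selection(b, c, n, m):
    """n: maximum group size, m: number of groups."""
    return b / c > 1 + n / m

# Example: a benefit of 5 at a cost of 1, meeting again with probability 0.9
print(direct_reciprocity(5, 1, 0.9))  # True, since 5 > 1/0.9
```

Each rule thus says cooperation can be favoured only when the benefit-to-cost ratio clears a hurdle set by the relevant structural parameter.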

How does cooperation emerge among selfish individuals? When do people share resources, punish those they consider unfair, and engage in joint enterprises? These questions fascinate philosophers, biologists, and economists alike, for the “invisible hand” that should turn selfish efforts into public benefit is not always at work. The Calculus of Selfishness looks at social dilemmas where cooperative motivations are subverted and self-interest becomes self-defeating. Karl Sigmund, a pioneer in evolutionary game theory, uses simple and well-known game theory models to examine the foundations of collective action and the effects of reciprocity and reputation. Focusing on some of the best-known social and economic experiments, including games such as the Prisoner’s Dilemma, Trust, Ultimatum, Snowdrift, and Public Good, Sigmund explores the conditions leading to cooperative strategies. His approach is based on evolutionary game dynamics, applied to deterministic and probabilistic models of economic interactions. Exploring basic strategic interactions among individuals guided by self-interest and caught in social traps, The Calculus of Selfishness analyses to what extent one key facet of human nature–selfishness–can lead to cooperation. (from Princeton Press). [Karl Sigmund, The Calculus of Selfishness, Princeton Series on Theoretical and Computational Biology, Princeton University Press,  ISBN: 978-1-4008-3225-5, 192 pp., 2009.]

What follows comes partly from chapter 1, available here:

THE SOCIAL ANIMAL: Aristotle classified humans as social animals, along with other species, such as ants and bees. Since then, countless authors have compared cities or states with bee hives and ant hills: for instance, Bernard de Mandeville, who published his The Fable of the Bees more than three hundred years ago. Today, we know that the parallels between human communities and insect states do not reach very far. The amazing degree of cooperation found among social insects is essentially due to the strong family ties within ant hills or bee hives. Humans, by contrast, often collaborate with non-related partners. Cooperation among close relatives is explained by kin selection. Genes for helping offspring are obviously favouring their own transmission. Genes for helping brothers and sisters can also favour their own transmission, not through direct descendants, but indirectly, through the siblings’ descendants: indeed, close relatives are highly likely to also carry these genes. In a bee hive, all workers are sisters and the queen is their mother. It may happen that the queen had several mates, and then the average relatedness is reduced; the theory of kin selection has its share of complex and controversial issues. But family ties go a long way to explain collaboration. The bee-hive can be viewed as a watered-down version of a multicellular organism. All the body cells of such an organism carry the same genes, but the body cells do not reproduce directly, any more than the sterile worker-bees do. The body cells collaborate to transmit copies of their genes through the germ cells – the eggs and sperm of their organism. Viewing human societies as multi-cellular organisms working to one purpose is misleading. Most humans tend to reproduce themselves. Plenty of collaboration takes place between non-relatives. 
And while we certainly have been selected for living in groups (our ancestors may have done so for thirty million years), our actions are not as coordinated as those of liver cells, nor as hard-wired as those of social insects. Human cooperation is frequently based on individual decisions guided by personal interests. Our communities are no super-organisms. Former Prime Minister Margaret Thatcher pithily claimed that “there is no such thing as society“. This can serve as the rallying cry of methodological individualism – a research program aiming to explain collective phenomena bottom-up, by the interactions of the individuals involved. The mathematical tool for this program is game theory. All “players” have their own aims. The resulting outcome can be vastly different from any of these aims, of course.

THE INVISIBLE HAND: If the end result depends on the decisions of several, possibly many individuals having distinct, possibly opposite interests, then all seems set to produce a cacophony of conflicts. In his Leviathan from 1651, Hobbes claimed that selfish urgings lead to “such a war as is every man against every man“. In the absence of a central authority suppressing these conflicts, human life is “solitary, poor, nasty, brutish, and short“. His French contemporary Pascal held an equally pessimistic view: “We are born unfair; for everyone inclines towards himself…. The tendency towards oneself is the origin of every disorder in war, polity, economy etc“. Selfishness was depicted as the root of all evil. But one century later, Adam Smith offered another view. An invisible hand harmonizes the selfish efforts of individuals: by striving to maximize their own revenue, they maximize the total good. The selfish person works inadvertently for the public benefit. “By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it“. Greed promotes behaviour beneficial to others. “It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own self-interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages“. A similar view had been expressed, well before Adam Smith, by Voltaire in his Lettres philosophiques: “Assuredly, God could have created beings uniquely interested in the welfare of others. In that case, traders would have been to India by charity, and the mason would saw stones to please his neighbour. But God designed things otherwise…. It is through our mutual needs that we are useful to the human species; this is the grounding of every trade; it is the eternal link between men“.
Adam Smith (who knew Voltaire well) was not blind to the fact that the invisible hand is not always at work. He merely claimed that it frequently promotes the interest of the society, not that it always does. Today, we know that there are many situations – so-called social dilemmas – where the invisible hand fails to turn self-interest to everyone’s advantage.

If you want to be incrementally better: Be competitive. If you want to be exponentially better: Be cooperative“. ~ Anonymous

Two hunters decide to spend their weekend together. But soon a dilemma emerges between them. They can either hunt a stag together or individually hunt a rabbit on their own. Chasing a stag, as we know, is quite demanding, requiring absolute cooperation between them. In fact, both friends need to stay focused and in position for a long time, without being distracted and tempted by some arbitrary passing rabbit. On the other hand, the stag hunt is far more beneficial for both, but that benefit comes with a cost: it requires a high level of trust between them. At some point, each hunter worries that his partner may divert when facing a tempting jumping rabbit, thus jeopardizing the possibility of hunting the biggest prey together.

The original story comes from Jean-Jacques Rousseau, the French philosopher (above), while the dilemma is known in game theory as the “Stag Hunt Game” (stag = adult deer). The dilemma can then take different quantifiable variations, assuming different values for R (Reward for cooperation), T (Temptation to defect), S (Sucker’s payoff) and P (Punishment for defection). However, in order to be in the right strategic Stag Hunt Game scenario we should assume R>T>P>S. A possible pay-off matrix, taking into account two choices C or D (C = Cooperation; D = Defection), would be:

Choice        C             D
  C      (R=3, R=3)    (S=0, T=2)
  D      (T=2, S=0)    (P=1, P=1)

Depending on how fitness is calculated, stag hunt games could also be part of a real Prisoner’s Dilemma, or even of Ultimatum games. As is clear from above, the highest pay-off comes when both hunters decide to cooperate (CC). In this case (first column, first row), both receive a reward of 3 points, that is, they both really focus on hunting the big stag while forgetting everything else, namely rabbits. However, and here is exactly where the dilemma appears, both CC and DD are Nash equilibria! That is, at these points of the strategic landscape no player has anything to gain by changing only his own strategy unilaterally. The dilemma appears recurrently in biology, animal-animal interaction, human behaviour, social cooperation, co-evolution, in society in general, and so on. The philosopher David Hume also provided a series of examples that are stag hunts, from two individuals who must row a boat together to two neighbours who wish to drain a meadow. Other stories exist, with very interesting variations and outcomes. Who does not know them?!
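The two equilibria can also be verified mechanically. Here is a brute-force sketch using the same R, T, S, P values as the matrix above:

```python
R, T, S, P = 3, 2, 0, 1  # stag hunt ordering: R > T > P > S

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
           ('D', 'C'): (T, S), ('D', 'D'): (P, P)}

def is_nash(a, b):
    """True if neither player gains by switching strategy unilaterally."""
    other = {'C': 'D', 'D': 'C'}
    row_ok = payoffs[(a, b)][0] >= payoffs[(other[a], b)][0]
    col_ok = payoffs[(a, b)][1] >= payoffs[(a, other[b])][1]
    return row_ok and col_ok

for profile in sorted(payoffs):
    print(profile, is_nash(*profile))
# Only ('C', 'C') and ('D', 'D') come out True: two pure Nash equilibria
```

A defecting lone hunter would drop from T=2 to S=0 if he switched to cooperation alone, and a cooperating pair would drop from R=3 to T=2 if either defected, which is exactly why both profiles are stable.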

The day before the last school classes, two kids decided to do something “cool”: they agreed to appear before their friends on the last school day both with mad and strange haircuts. Despite their team purpose, though, a long, anguished and stressful night full of indecisiveness followed for both of them…

Fig. – Bluffing poster.

On Bilateral Monopolies: […] Mary has the world’s only apple, worth fifty cents to her. John is the world’s only customer for the apple, worth a dollar to him. Mary has a monopoly on selling apples, John has a monopoly (technically, a monopsony, a buying monopoly) on buying apples. Economists describe such a situation as bilateral monopoly. What happens? Mary announces that her price is ninety cents, and if John will not pay it, she will eat the apple herself. If John believes her, he pays. Ninety cents for an apple he values at a dollar is not much of a deal but better than no apple. If, however, John announces that his maximum price is sixty cents and Mary believes him, the same logic holds. Mary accepts his price, and he gets most of the benefit from the trade. This is not a fixed-sum game. If John buys the apple from Mary, the sum of their gains is fifty cents, with the division determined by the price. If they fail to reach an agreement, the summed gain is zero. Each is using the threat of the zero outcome to try to force a fifty cent outcome as favorable to himself as possible. How successful each is depends in part on how convincingly he can commit himself, how well he can persuade the other that if he doesn’t get his way the deal will fall through. Every parent is familiar with a different example of the same game. A small child wants to get her way and will throw a tantrum if she doesn’t. The tantrum itself does her no good, since if she throws it you will refuse to do what she wants and send her to bed without dessert. But since the tantrum imposes substantial costs on you as well as on her, especially if it happens in the middle of your dinner party, it may be a sufficiently effective threat to get her at least part of what she wants. Prospective parents resolve never to give in to such threats and think they will succeed. They are wrong. 
You may have thought out the logic of bilateral monopoly better than your child, but she has hundreds of millions of years of evolution on her side, during which offspring who succeeded in making parents do what they want, and thus getting a larger share of parental resources devoted to them, were more likely to survive to pass on their genes to the next generation of offspring. Her commitment strategy is hardwired into her; if you call her bluff, you will frequently find that it is not a bluff. If you win more than half the games and only rarely end up with a bargaining breakdown and a tantrum, consider yourself lucky.
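The arithmetic of the apple example can be laid out in a few lines (prices in cents; this only restates the passage’s numbers): any price between the two valuations trades the same fifty cents of surplus, and bargaining only decides its division.

```python
# The apple example in cents: Mary values it at 50, John at 100.
mary_value, john_value = 50, 100
surplus = john_value - mary_value  # 50 cents of gains from trade

def gains(price):
    """(Mary's gain, John's gain) if the apple trades at `price` cents."""
    return price - mary_value, john_value - price

print(gains(90))       # (40, 10): Mary's threat was believed
print(gains(60))       # (10, 40): John's threat was believed
print(sum(gains(75)))  # 50: the total is fixed, only the split moves
```

The zero outcome (no trade) is what makes each threat costly to carry out, which is precisely why commitment, not valuation, decides the split.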

Herman Kahn, a writer who specialized in thinking and writing about unfashionable topics such as thermonuclear war, came up with yet another variant of the game: the Doomsday Machine. The idea was for the United States to bury lots of very dirty thermonuclear weapons under the Rocky Mountains, enough so that if they went off, their fallout would kill everyone on earth. The bombs would be attached to a fancy Geiger counter rigged to set them off if it sensed the fallout from a Russian nuclear attack. Once the Russians know we have a Doomsday Machine we are safe from attack and can safely scrap the rest of our nuclear arsenal. The idea provided the central plot device for the movie Doctor Strangelove. The Russians build a Doomsday Machine but imprudently postpone the announcement (they are waiting for the premier’s birthday) until just after an American Air Force officer has launched a unilateral nuclear attack on his own initiative. The mad scientist villain was presumably intended as a parody of Kahn. Kahn described a Doomsday Machine not because he thought we should build one but because he thought we already had. So had the Russians. Our nuclear arsenal and theirs were Doomsday Machines with human triggers. Once the Russians have attacked, retaliating does us no good just as, once you have finally told your daughter that she is going to bed, throwing a tantrum does her no good. But our military, knowing that the enemy has just killed most of their friends and relations, will retaliate anyway, and the knowledge that they will retaliate is a good reason for the Russians not to attack, just as the knowledge that your daughter will throw a tantrum is a good reason to let her stay up until the party is over. Fortunately, the real-world Doomsday Machines worked, with the result that neither was ever used.


For a final example, consider someone who is big, strong, and likes to get his own way. He adopts a policy of beating up anyone who does things he doesn’t like, such as paying attention to a girl he is dating or expressing insufficient deference to his views on baseball. He commits himself to that policy by persuading himself that only sissies let themselves get pushed around and that not doing what he wants counts as pushing him around. Beating someone up is costly; he might get hurt and he might end up in jail. But as long as everyone knows he is committed to that strategy, other people don’t cross him and he doesn’t have to beat them up. Think of the bully as a Doomsday Machine on an individual level. His strategy works as long as only one person is playing it. One day he sits down at a bar and starts discussing baseball with a stranger, also big, strong, and committed to the same strategy. The stranger fails to show adequate deference to his opinions. When it is over, one of the two is lying dead on the floor, and the other is standing there with a broken beer bottle in his hand and a dazed expression on his face, wondering what happens next. The Doomsday Machine just went off. With only one bully the strategy is profitable: Other people do what you want and you never have to carry through on your commitment. With lots of bullies it is unprofitable: You frequently get into fights and soon end up either dead or in jail. As long as the number of bullies is low enough so that the gain of usually getting what you want is larger than the cost of occasionally having to pay for it, the strategy is profitable and the number of people adopting it increases. Equilibrium is reached when gain and loss just balance, making each of the alternative strategies, bully or pushover, equally attractive. The analysis becomes more complicated if we add additional strategies, but the logic of the situation remains the same.

This particular example of bilateral monopoly is relevant to one of the central disputes over criminal law in general and the death penalty in particular: Do penalties deter? One reason to think they might not is that the sort of crime I have just described, a barroom brawl ending in a killing (more generally, a crime of passion), seems to be an irrational act, one the perpetrator regrets as soon as it happens. How then can it be deterred by punishment? The economist’s answer is that the brawl was not chosen rationally but the strategy that led to it was. The higher the penalty for such acts, the less profitable the bully strategy. The result will be fewer bullies, fewer barroom brawls, and fewer “irrational” killings. How much deterrence that implies is an empirical question, but thinking through the logic of bilateral monopoly shows us why crimes of passion are not necessarily undeterrable. […]
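Friedman's bully/pushover equilibrium has the same structure as the classic Hawk-Dove game from evolutionary game theory (a framing I am supplying here; the text does not name it). With a prize V for getting your way and a fight cost C > V, the stable fraction of bullies is V/C, so raising C, for instance through a stiffer criminal penalty, lowers the equilibrium number of bullies. A minimal sketch, with illustrative numbers:

```python
def bully_equilibrium(prize: float, fight_cost: float) -> float:
    """Equilibrium fraction of bullies in the Hawk-Dove model.

    At fraction p of bullies, a bully's expected payoff equals a
    pushover's when p = prize / fight_cost (for fight_cost > prize);
    below that fraction bullying pays and spreads, above it it doesn't.
    """
    if fight_cost <= prize:
        return 1.0  # fighting is always worth it: everyone bullies
    return prize / fight_cost

# Doubling the cost of a fight (e.g. via a higher expected penalty)
# halves the equilibrium share of bullies.
print(bully_equilibrium(prize=10, fight_cost=20))  # 0.5
print(bully_equilibrium(prize=10, fight_cost=40))  # 0.25
```

This is the comparative static behind the deterrence argument: the penalty does not need to dissuade the enraged brawler mid-fight, it only needs to make the committed strategy less profitable ex ante, so fewer people adopt it.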

in Chapter 8 of David D. Friedman, “Law’s Order: What Economics Has to Do With Law and Why It Matters”, Princeton University Press, Princeton, New Jersey, 2000.

Note – Further reading should include David D. Friedman’s “Price Theory and Hidden Order”. A more extensive treatment can be found in “Game Theory and the Law”, by Douglas G. Baird, Robert H. Gertner and Randal C. Picker, Cambridge, Mass.: Harvard University Press, 1994.
