The Survival Game: How Game Theory Explains the Biology of Cooperation and Competition

by David P. Barash, Ph.D.

Paperback (First Edition), $24.00
Overview

"An accessible, intriguing explanation of game theory . . . that can help explain much human behavior." -Seattle Post-Intelligencer

Humans, like bacteria, woodchucks, chimpanzees, and other animals, compete or cooperate in order to get food, shelter, territory, and other resources to survive. But how do they decide whether to muscle out or team up with the competition?

In The Survival Game, David P. Barash synthesizes the newest ideas from psychology, economics, and biology to explore and explain the roots of human strategy. Drawing on game theory, the study of how individuals make decisions, he explores the give-and-take of spouses in determining an evening's plans, the behavior of investors in a market bubble, and the maneuvers of generals on a battlefield alongside the mating and fighting strategies of "less rational" animals. Ultimately, Barash's lively and clear examples shed light on what makes our decisions human, and what we can glean from game theory and the natural world as we negotiate and compete every day.


Product Details

ISBN-13: 9780805076998
Publisher: Holt, Henry & Company, Inc.
Publication date: 09/01/2004
Edition description: First Edition
Pages: 320
Product dimensions: 5.50(w) x 8.50(h) x 0.71(d)

About the Author

A professor of psychology at the University of Washington in Seattle, zoologist David P. Barash is the author of more than a dozen books, including The Myth of Monogamy (0-8050-7136-9) and The Mammal in the Mirror. He lives in Redmond, Washington.

Read an Excerpt

The Survival Game

1

The Games We All Play: What They Are, Why They Matter

In Molière's play Le Bourgeois Gentilhomme, Monsieur Jourdain is astonished to learn that all of his life, without knowing it, he has been speaking prose. We are a bit like M. Jourdain: without knowing it, we all play games. It is not necessary to be athletic, or competitive, or especially frolicsome. Game playing is a big part of life. And since we are full-time players, it behooves us to understand what's going on.

Here goes.

What's the Big Idea?

Most of us assume that life is straightforward, essentially under our own control. If we want something, and reach for it, we may succeed or fail. Either way, the outcome is widely thought to result from our actions alone. But in fact, what we get is often determined by factors out of our control: Maybe the object we are reaching for is too heavy, or too far away, or guarded by angry dragons.

For our purposes, there is a whole class of situations that are more complex and more interesting than these, circumstances in which the payoff—the gain we are seeking—is limited by the fact that others are also reaching for the same goal. In cases of this sort, as the Rolling Stones proclaimed in a notable song several decades ago, you can't always get what you want. Why not? Because if someone else wants the same thing, and if he or she is pretty much as smart, fast, strong, and motivated as you, then something's got to give. (Whether, in the end, you can get "what you need," as the Stones also announced, is another question, and one that is even more complicated.)

Actually, games arise even if the players aren't human beings. Animals also play games, whether they know it or not, just as people do—whether they know it or not. Thus, two bull elk may desire the same cow, with the success of each ultimately depending not just on what male number one does, but also on male number two. And, of course, the female is also likely to have something to say about the outcome. Whether people or animals (or even viruses), the important thing is that there is some sort of outcome, which is determined by the combined actions of two or more different players, whether their interests are shared, opposed, or—most commonly—a little bit of each.

Let's get a bit more technical, but not much. There are many circumstances in which the interests of individuals are interdependent and yet in conflict, so that the payoff to individual A, who is pursuing a particular goal, depends on the actions of individual B, who may be pursuing the same goal. In these cases, the return to each player—which can be a person, animal, organization, country, or even a bacterium—is determined by the actions of both, taken together. Furthermore, each is in a sense at the mercy of the other, in two ways. First, the outcome to each party depends on the other's actions, and second, it is often the case that neither can change the other's behavior. It is one of life's crucial constraints.

There are many variants on this theme. Consider, for example, one of the simplest: call it the Interrupted Telephone Call Game, a frustration that everyone has experienced. You are talking to a friend, long distance, and suddenly the conversation is cut off; you both get a dial tone. You want to resume talking, and so does your friend, but if you both dial up the other, neither one will get through! If you both wait, again you both lose. The only way to "win" is for one (either one) to redial and the other to wait. So this is a case in which the interests of both parties converge, and yet the payoff to each still depends on the independent behavior of the other.

Another way to put it: What is the best thing to do when confronted with the Interrupted Telephone Call Game? There is no simple answer here, since it depends on what the other person does. If she is going to call you, then you should wait. If she is going to wait, then you should call. Things get interesting when—as is usually the case—the two of you haven't agreed in advance who will do what if your call is interrupted.
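The logic of this little coordination game can be made concrete in a few lines of code. The payoff numbers are illustrative, not from the text: 1 means the call resumes, 0 means it doesn't.

```python
# Interrupted Telephone Call Game: each player chooses "dial" or "wait".
# The call resumes only if exactly one player dials.

def payoff(you, friend):
    """Return (your payoff, friend's payoff) for one round."""
    if {you, friend} == {"dial", "wait"}:
        return (1, 1)   # one dials, one waits: the call goes through
    return (0, 0)       # both dial (busy signals) or both wait (silence)

outcomes = {(a, b): payoff(a, b)
            for a in ("dial", "wait")
            for b in ("dial", "wait")}
```

Notice that neither move is best in itself: "dial" wins only if the other player waits, and vice versa, which is exactly what makes this a coordination problem rather than a simple decision.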

Although this is admittedly a trivial case, it points out something important about interactions of this sort: Two or more parties may each have a limited number of possible "moves"—in this case, you and your friend can either "dial" or "wait"—with the payoff to each of you depending not just on what either of you do, but on what the other does at the same time. Moreover, neither can control the actions of the other. A poignant literary example comes from O. Henry's short story "The Gift of the Magi," in which the young husband sells his cherished pocket watch in order to buy hair combs for his adored wife, who—independently—has sold her precious and much-admired hair to buy a watch chain for him!

In some of the most interesting situations we'll be examining, the "players" are more competitive, if not overtly antagonistic. In such cases, each participant is typically trying to maximize his payoff while often simultaneously attempting to minimize the other's return. (And each also knows that the other is trying to achieve the same thing: "She knows that I'm trying to seem smarter than her, so she'll probably study this other case carefully, to get a jump, so I'll review this counter-example, to get ahead of her ... .") Interestingly, although experts in game theory typically assume this sort of conscious planning and counter-planning, it isn't strictly necessary. A gazelle "knows" that the cheetah is planning to catch it, and the cheetah "knows" that the gazelle "knows" this, and so forth. In this case, the "knowledge" is implanted by evolution rather than by conscious awareness, but nonetheless, the game goes on.

Competitive interactions of this sort, whether human or "merely" biological, are not only intriguing as intellectual exercises, but they can yield tremendous insight into important real-life dilemmas, whether interpersonal or involving whole societies. In other cases, two individuals—or companies, or countries—find themselves locked in a deeply frustrating dilemma, in which both "players" strive for their own best interest, but, as a result, both are worse off. This is not simply theory but, rather, painful and dangerous practice.

Take, for example, nuclear weapons in Pakistan and India. Each country is tempted by the prospect of gaining a nuclear advantage over the other; at the same time, each would be better off using its limited budget to enhance the welfare of its own impoverished people. But each country is also fearful of being taken advantage of by the other if it lets down its guard and forgoes nuclear weapons. And so, two countries that can ill afford such a dangerous and expensive competition find themselves locked in a nuclear arms race that does neither one any good ... and that, moreover, does harm to their own security and that of the rest of the world. Everyone would be better off if these two "players" would only "do the right thing" and stop their nuclear competition, but because each fears being suckered by the other, both see themselves as doomed to keep it up. As we'll see, arms races of this sort also occur between married couples, parents and children, and so on.
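The arms-race trap described above has the structure of the classic Prisoner's Dilemma. A minimal sketch, with hypothetical payoff rankings (higher is better for that country), shows why each side arms even though mutual disarmament would leave both better off:

```python
# Arms race as a Prisoner's Dilemma. Payoff numbers are hypothetical
# rankings, (player one, player two), higher being better.
PAYOFFS = {
    ("disarm", "disarm"): (3, 3),  # both spend on welfare instead
    ("disarm", "arm"):    (0, 4),  # the disarmed side is "suckered"
    ("arm",    "disarm"): (4, 0),
    ("arm",    "arm"):    (1, 1),  # costly standoff, both worse off
}

def best_reply(opponent_move):
    """Given the other side's move, pick the move with the higher payoff."""
    return max(("arm", "disarm"),
               key=lambda mine: PAYOFFS[(mine, opponent_move)][0])
```

Since "arm" is the better reply to either move the rival might make, both sides arm and land on (1, 1), rather than the (3, 3) that mutual restraint would have delivered.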

Once again, the biological world fits right in, although such "natural" arms races are less potentially lethal than their nuclear counterparts. Pity the poor peacock, for example, forced to grow a set of outlandish and metabolically expensive tail feathers, which threaten to get tangled in the undergrowth and serve no real purpose other than convincing the female that its possessor is better than his rivals. After all, if a particular peacock decided not to run this tail-feather race, such a presumably "rational" decision would place him at a disadvantage relative to the other males who decided to participate, and who, as a result, got the peahen.

Even trees are victims. Given that successful reproduction is the biological bottom line, why should redwoods grow so tall? After all, you don't have to be two hundred feet in height, and bother piling up hundreds of tons of wood, just to make some tiny seeds. But a redwood tree that opted out of the big-and-tall competitive fray would literally wither in the shade produced by other trees that were just a bit less restrained. And so, redwoods are doomed by their own unconscious selfishness to be "irrationally" large, for no particular reason other than the fact that other redwoods are doing the same thing.

Then there is the "politician's dilemma" of whether or not to "go negative." By doing so, someone running for public office doesn't merely have to invest in dead wood; he or she also loses respect and society loses the opportunity to debate genuine issues. But political rivals, not unlike aspiring redwood trees, often find themselves stuck in an awkward competitive game, in which they typically fear being suckered by their opponent (victimized this time by shady, negative campaign tactics), as well as tempted to reap the benefits of attacking successfully and unilaterally. Or like two contestants in a particularly grueling tug-of-war, each side may long to ease up, but fears that the other will take advantage, so both sides end up holding tight, straining mightily ... and often getting nowhere. Not uncommonly, the two players come out somewhat behind, the only winners in the world of electoral politics being the consultants, the speechwriters, and the media.

Don't miss the forest for the trees (and not only redwoods). There are some shared threads linking situations of this sort, from gargantuan redwoods, horny peacocks and elk, to interrupted telephone calls, negative political ads, and dilemmas of disarmament. In all these cases, two sides, or players, each have goals or potential payoffs they wish to attain. They each have a limited palette of options, things they can do in pursuit of their goals; in the simplest case, just two (grow big—or fancy—or not, dial or wait, arm or disarm, go negative or stay positive). And although each is free to choose what to do, no one is free to obtain the desired payoff simply by reaching for it. In each case it depends on what the other guy does. Get used to it: you can't always get what you want. Especially if someone else's desires interfere with your own.

There is a complex branch of mathematics that handles such situations. Known as game theory, it has been around for about sixty years. Although it has generated hundreds of scholarly articles (and several technical journals devoted entirely to its analysis) as well as many academic books, game theory has never truly reached the general public. This is a shame, because it offers many rewards. As I hope to show, it provides a novel and intellectually compelling way of looking at everyday phenomena. And the fact that the same principles apply to the unthinking, biological world suggests that game theory itself may be in touch with some deeper truths: not a "theory of everything," as some physicists have been pursuing, but at least a theory of many interesting things. In addition, I believe that its essence can be conveyed without elaborate mathematics. In fact, I hereby promise to make this book an equation-free zone. Game theory and its implications are simply too important—and too much fun!—to be left to the mathematicians. Our games, our selves.

Games are also too dangerous to be ignored. Consider the Game of Chicken, a version of which was memorably portrayed in the James Dean movie Rebel Without a Cause. In classic Games of Chicken, two cars head toward each other, each daring the other to swerve. The one who chickens out is the loser; the one who is so brave, or so stupid, as to persevere in going straight is the winner. (In the movie version, Dean and his rival each drove toward a cliff, seeking to be the last to bail out; the basic principle remains the same.)

A Game of Chicken is similar to a nuclear confrontation in that each player can either insist on pushing straight ahead ("arm") or swerve ("disarm"). Moreover, each would get the highest payoff by doing the former if at the same time the other did the latter. On the other hand, the Game of Chicken differs from an arms race in one crucial respect. The worst outcome for either player in the latter case arises if its side acted cooperatively while the other acted competitively; by contrast, the worst outcome in the Game of Chicken occurs when both players act competitively (each hoping that the other will "swerve"). The Cuban Missile Crisis in 1962 was a terrifying example of international Chicken, in which the Soviet Union swerved ... thereby avoiding fried chicken. There are many situations in daily life—issuing "take it or leave it" ultimatums, for example—when we engage in lower-risk Games of Chicken, usually without realizing it.
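The structural difference between Chicken and an arms race shows up directly in the payoff tables. The numbers below are hypothetical rankings, higher being better for that player:

```python
# In Chicken, the worst cell is mutual defection (head-on collision);
# in an arms race, it is cooperating while the other side defects.
CHICKEN = {
    ("straight", "swerve"):   (4, 2),  # winner and chicken
    ("swerve",   "straight"): (2, 4),
    ("swerve",   "swerve"):   (3, 3),  # both back down
    ("straight", "straight"): (0, 0),  # collision: worst for both
}
ARMS_RACE = {
    ("arm",    "disarm"): (4, 0),
    ("disarm", "arm"):    (0, 4),  # disarming alone: suckered
    ("disarm", "disarm"): (3, 3),
    ("arm",    "arm"):    (1, 1),
}

def worst_for_player_one(game):
    """Return the move pair that gives player one the lowest payoff."""
    return min(game, key=lambda moves: game[moves][0])
```

With these illustrative numbers, player one's minimum in Chicken sits at ("straight", "straight"), while in the arms race it sits at ("disarm", "arm") — precisely the contrast drawn in the paragraph above.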

Biologists have begun to identify "game theoretic" strategies by which animals—and not just genuine chickens—play, quite seriously, at survival, at bluffing, and in reproductive roles. Of course, they don't need to know, consciously, that they are playing such games, any more than they need to understand the details of digestive physiology in order to eat.

Take this example. A male bluebird must "decide" between two options: He can remain with his sexually receptive mate, or wander off in search of additional female companions. He cannot do both, just as individuals playing the Game of Chicken cannot both go straight and swerve, or two countries caught in a nuclear arms race cannot both arm and disarm. Similarly, the male bluebird's payoff depends on what others are doing: If most other males are staying home, he may do well by looking for additional sexual opportunities, because there is at least the chance that some females will be left unguarded, and little risk in trying. But if too many other males are also looking to sow their wild bluebird oats, then a gallivanting male runs the risk that while he is out seeking copulations, another male—doing the same as himself—will succeed in copulating with his female! The apparent result, understood via game theory, is that individuals are likely to be either homebodies or sexual adventurers, in predictable proportions, depending on the risks, the payoffs, and what others in the population are up to.
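That mix of homebodies and adventurers can be sketched as a frequency-dependent game. The payoff numbers below are hypothetical, chosen only to capture the shape of the argument: wandering pays well when wanderers are rare and poorly when they are common, while staying home pays a constant amount.

```python
# Frequency-dependent payoffs for "home" versus "wander" male bluebirds.
# Hypothetical numbers: wandering pays 4 when no one else wanders
# (unguarded females everywhere) and falls as more males wander.

def wander_payoff(p):
    """Payoff to a wanderer when fraction p of males wander."""
    return 4 - 5 * p

HOME_PAYOFF = 2  # staying home pays the same regardless of p

def equilibrium_fraction():
    """Fraction of wanderers at which the two strategies pay equally."""
    # Solve 4 - 5p = 2  ->  p = 0.4
    return (4 - HOME_PAYOFF) / 5
```

The stable mix is the fraction at which the two strategies do equally well; with these made-up numbers, 40 percent of males wander. If fewer wander, wandering pays better and spreads; if more do, homebodies gain the edge.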

Ironically, game theory may even apply more directly to the interactions of animals than of people, despite the fact that animals are by all accounts less rational than human beings. This is because animals are more driven by automatic processes, the results of natural selection having endowed animals with automatic responses that have largely proven, over many generations, to be fitness-enhancing and, thus, mathematically valid. By contrast, human beings are less automatic and, thus, less logically predictable. Here is yet another fundamental paradox, at the heart of all behavior, human and nonhuman alike.

This is but scratching the surface. Game theory is loaded with implications for a wide range of human activities; it offers intriguing mental exercise plus the insight that comes from seeing old problems in new ways. By examining interactions and the striving for payoffs in a "gamey" way, it is possible to shed new light on many other situations, including military strategy, stock market investing (buying a stock is a good strategy if and only if others will also buy it; you can't make money on Wall Street based on what you do alone), as well as moral decision making. And lots more.

We'll also use game theory as a lens to focus on interactions between individuals as well as organizations. Again, the underlying commonality is simply this: what one "player" gets isn't determined simply by what he, or she, or it does, but also by the other "player," who is no less smart, no more ethical, and every bit as motivated to succeed. Imagine this: You are accosted by your neighbor, because one of your trees fell down on her prize rosebush. She threatens to sue, yet both of you would rather settle out of court. She would like to get as much money as possible; you would like to give up as little as you can get away with. She must decide whether to demand a large amount of money, or a smaller amount. Independently, you must decide whether to agree or take it to your insurance company and, possibly, small claims court ... which would be expensive and time-consuming for all concerned.

Your rose-fancying neighbor doesn't want to ask for too much—not wanting to drive both of you to serious litigation—but neither does she want to settle for too little. Similarly, you don't want to make it too clear at what point you'd settle, or else Rosey will ask right up to that amount. It's a cat-and-mouse game, which, in fact, is not a bad way to describe much of game theory, and also much of life.

While writing this chapter, I was invited by a large and well-funded organization to give an after-dinner lecture. How much should I request as my fee? If I asked too little, I'd be underpaid; if I demanded too much, I might drive them to ask someone else. So I wasn't free to simply ask for a lot of money, because I might be stuck with none at all. My "move" was determined by my awareness that the organization had "moves" of its own: it could agree, or refuse, leaving me high and dry. (I also knew, to my advantage, that their original choice had just withdrawn because of ill health, and the meeting was a mere two weeks away; I didn't quite have them "over a barrel," but this information gave me some helpful, added leverage.)

The roots of game theory go very deep, certainly deeper than the formal mathematics itself (not to mention your neighbor's roses). If, as seems likely, people have been playing "games" for millennia, all this gamesmanship has doubtless left its imprint on our very natures. And so, we assess prospects and possibilities, and are remarkably good intuitive statisticians. We spend time and energy seeking to "read" each other, playing "what if" games in our heads, often without even realizing it. We may even owe one of our most cherished traits—consciousness itself—to the game-playing propensities of our ancestors. The likelihood is that consciousness evolved because it leads to self-awareness; being conscious is, to a large extent, not just being aware, but being aware of being aware. And why is such self-awareness useful? Because it gives us the opportunity to assess what someone else may be thinking and feeling, and, thus, makes us better game players: "If she does that, then I ought to do this ..." Or even further: "She's probably thinking that I'm going to do this, so she's likely to do that, so maybe I'll fool her by ..."

Let's not get too carried away with this idea, however, since it isn't even necessary to have a mind at all in order to be a perfectly competent game player (although presumably it helps to have a mind in order to be a game theorist!). In any event, it seems likely that self-awareness is especially useful in a game-playing world, because it helps its possessors make good guesses as to how someone else is feeling or thinking, and, thus, how someone else will behave.

It is even possible that we owe not only consciousness, but even our vaunted rationality to game playing at its deepest levels. At issue in this case is not the payoff of solving the kinds of theoretical mind games we'll explore in the pages to come, but the real-world payoff, accumulated over eons, from being a winner.

In some cases, game theory actually helps advise participants what to do, but in fact during the half century or so that it has been around, the formal structure of game theory hasn't really given very much advice to average people trying to navigate the shoals of everyday life. Maybe someday it will; as we'll see, it already does for animals. As humans unravel its secrets, game theory has changed some aspects of our own behavior, notably in the realm of strategic, nuclear doctrine and the specific design strategies of some financial auctions. In an intriguing example of knowledge and its effects being recursive, knowing the rules of our games has begun to change the way we play. For now, however, it seems likely that game theory is most useful as a way to clarify our thinking, to see complex matters in a new way, and—if you enjoy occasional forays into logic and paradox—to have some fun with your gray matter.

One more thing: You'll find that for all the mathematical abracadabra (most of which we'll ignore) game theory is actually an oversimplification. But that's not necessarily a bad thing. After all, it is intended to be a model, and models by their nature—if they're good at their job—are simpler than their subjects. So of course the games we're going to discuss are simpler than life itself. If they weren't, then there wouldn't be any reason to think about them; instead, we'd just slog through the myriad details of every actual interaction, treating each one as something new and altogether unknown.

There you have it. That's the big idea. If it seems complicated and confusing right now, just be patient. By the time you've finished this book, it'll be obvious. And you'll wonder how you ever managed to play so many games, and to be such a canny strategic mathematician, without knowing it.

Sartre's Dictum

In Jean-Paul Sartre's play No Exit we are given this stunning line: "Hell is other people."

The great existential philosopher wasn't a misanthropist. Instead, he was a firm believer in freedom and the power—indeed, the obligation—of human beings to choose their own course of action. For Sartre, we are all "condemned to be free." Accordingly, it can be sheer hell when others get in the way. (It is said that when asked to account for the Confederacy's defeat in the Civil War, George Pickett—famed for Pickett's Charge, during the Battle of Gettysburg—replied: "I think the Yankees had something to do with it.")

For game theorists, other people aren't hellish; rather, they are the reason game theory exists, and why it is worth knowing. For the rest of us, there is nothing diabolical about the fact that other people exist, and that they have their interests, which often compete with, complement, or otherwise interact with our own. It contributes much—maybe all—of the spice of life. But at the same time, it complicates things, and immensely so.

Robinson Crusoe, alone on his desert island, is able to pursue a simple strategy: seeking the greatest good for the greatest number ... an easy matter when the number is one. The Crusoe Course is simply to decide what yields the highest payoff, then go ahead and do it. Even if the payoff is not guaranteed, choose the one that, on average, is likely to yield the best outcome. Add someone else—in the Robinson Crusoe story, "his man Friday"—however, and things immediately become more complex. An engineer designing a building or a bridge need not worry that gravity or a river is plotting against him. Of course, gravity and hydrodynamics go their own way, and cannot be directly controlled by the engineer, but it is possible to account rather precisely for them. In simple games, the actions of each side are also uncontrollable (by the other player) but, worse than gravity or water, they are controlled by independent minds (minds that may well be as good or better than one's own), and, worse yet, they are likely to be aiming for results that are directly opposed to yours.

Furthermore, these minds are essentially secret and private, as hidden from ourselves as we are from them. According to Charles Dickens, in A Tale of Two Cities, it is

a wonderful fact to reflect upon, that every human creature is constituted to be that profound secret and mystery to every other. A solemn consideration, when I enter a great city by night, that every one of those darkly clustered houses encloses its own secret; that every room in every one of them encloses its own secret; that every breathing heart in the hundreds of thousands of breasts there, is, in some of its imaginings, a secret to the heart nearest it! ... In any of the burial-places of this city through which I pass, is there a sleeper more inscrutable than its busy inhabitants are, in their innermost personality, to me, than I am to them?

And yet, despite all this secrecy and inscrutability, we have no choice but to interact with one another. Hell? Sometimes. Especially if you like your answers simple, your options straightforward and linear. And if you don't live alone on a desert island.

Most efforts to model and understand human behavior used—and still use—an approach derived from classical physics: Nongame theorists generally assume that objects (including people and animals) respond in certain predictable ways to the action of known external forces. Rarely incorporated into these models was the notion that these "external forces" may have their own agendas, and, furthermore, that the subject whose behavior is to be predicted and understood is likely to be acting with those others in mind. ("I know he knows. And I know that he knows that I know. And I know that he knows that I know that he knows ...") And even game theorists focused largely on people, developing most of their models under the assumption that the players were conscious strategists.

Decision making can be complicated, even when there is only one player, and even when that player has complete access to all relevant information (what game theorists call "decision making under certainty"). For example, consider simple "optimization problems," in which the goal is to find the best way to accomplish something. For game theorists, such problems are not especially intriguing, since the idea is simply to find the most efficient way of achieving a particular goal: minimize time spent, maximize profit, construct the strongest or shortest link between two points, and so forth. In such cases, there is no other player in any meaningful sense, aside from nature. And often, nature does not respond.

Nonetheless, even so simple a decision as whether to take an umbrella when you leave the house involves several considerations: How awkward is it to carry? Are you likely to forget it somewhere? Does it make you look like a dork? But there is at least one thing that you don't have to factor into your decision: Carrying an umbrella doesn't make rain any more likely. Yet so-called optimization problems can still become fiendishly complicated. They are the stuff of the specialized mathematical discipline called linear programming (which is allied to game theory, but distinct).

Take one example, the Traveling Salesperson Problem. Imagine that a salesman must visit ten different cities, stopping in each no more than once. What order of visitations will produce the minimum traveling distance on his part? Even in this seemingly straightforward case, there are 3,628,800 different possibilities! Why so many? Our understandably bewildered salesman can start his tour in any of the ten cities, after which he can visit any of nine, followed by eight, and so on. The number of options is therefore 10 × 9 × 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 = 3,628,800. For twenty cities, the number leaps to 2,432,902,008,176,640,000. And for thirty cities it exceeds 265 thousand billion billion billion, far more than the number of subatomic particles in the visible universe! Take, then, the fifty state capitals in the United States and ask: How would you proceed if you wanted to visit all of them while flying the shortest possible distance? (Let's make the simplifying—and inaccurate—assumption that each state capital has its own airport and each one provides nonstop service to every other.) It turns out that in this "simple" case, it is literally impossible to make the best possible decision, even under conditions of certainty. This is not to say that the best possible route doesn't exist, just that there is no way to determine it.
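The explosion described here is just the factorial function, and a few lines of code confirm the counts while showing why exhaustive search is hopeless beyond a handful of cities (the brute-force solver below is a sketch for tiny inputs only):

```python
import math
from itertools import permutations

def tour_count(n):
    """Number of distinct orderings for visiting n cities once each."""
    return math.factorial(n)

def shortest_tour(cities, dist):
    """Brute force: try every ordering and keep the shortest (tiny n only)."""
    def length(tour):
        return sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return min(permutations(cities), key=length)
```

Here `tour_count(10)` reproduces the 3,628,800 in the text, and `tour_count(20)` the 2,432,902,008,176,640,000; checking every one of 20! orderings is already far out of reach for any computer, which is why real routing software settles for good-enough approximations.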

Certainty is not the same as simplicity.

Even less simple, in a way, is the problem of making decisions under profound uncertainty, when there is another side and that other side is also making its own decisions. In some cases, these "decisions" aren't even conscious, and may even be counter-intuitive. Moreover, they may even involve players whose identity is unclear. Take dieting. This has long been seen as a simple matter of willpower (or won't power). But endocrinologists report that a consequence of self-starvation is that one's body responds as though there is a famine, thus reducing its metabolic rate. The result is that even something as apparently self-directed and inner-controlled as whether to eat less and, thus, lose weight turns out to be a case of playing a game against one's own body: You can control your eating (up to a point), but you can't control your body's response physiologically. And even if you and your body agree that you have a shared interest in losing weight, your brain and your fat cells may disagree.

As we shall see, it gets even trickier when the other side is more cognitively sophisticated than a glob of adipose tissue; in short, when the other side is taking the likelihood of your decision into account. Add to this the fact that in many cases the decision makers' interests may be diametrically opposed.

Most of the games we'll examine involve just two players, largely because adding additional participants makes things unwieldy, even for the mathematically adroit. At the same time, it is worth noting that sometimes a third party actually simplifies things or, rather, settles them down. Consider the following (somewhat) hypothetical game of geopolitical intrigue: Imagine that the newly installed government in Afghanistan isn't to Pakistan's liking. Pakistan may be tempted to invade Afghanistan and establish a more pliable regime; after all, Pakistan is much stronger militarily. So the situation could be dangerously unstable. But add a third player—India—and things become not simply more complex but, ironically, more stable. Thus, under this scenario Pakistan would likely inhibit its aggressive inclinations for fear that if it deployed large numbers of troops into Afghanistan, it might dangerously weaken itself vis-à-vis its border with India, which might then invade. So (even ignoring the role of the United States, world opinion, and numerous other factors) sometimes the addition of a third participant may stabilize interaction between two.

The more the stabler? Not necessarily. Add a fourth player, China, and what happens? Given the historical distrust between China and India, the fact of China's existence would probably inhibit India in the (unlikely) event that Pakistan invades Afghanistan. This is because an Indian invasion of Pakistan might well weaken India vis-à-vis China, just as a hypothetical Pakistani invasion of Afghanistan could tempt India. In cases like these, the progression can be drawn out indefinitely, at least in theory: even numbers of players threaten instability, odd numbers promise the opposite. (I'm not familiar with parallels in the nonhuman world, but I can't help wondering whether this simple arithmetic might trickle into our biology and, thus, augur poorly for the stability of biological twosomes, notably monogamy.)

In other cases, game players needn't worry about just one individual on the other side; they have to consider what large numbers of others are doing. Let's imagine that you are about to purchase a new computer and, furthermore, that you have a preference for Apples. You have second thoughts, however, for the simple reason that PCs are much more popular, and, as a result, there is some risk that Apple might go belly-up, leaving you with a machine devoid of customer support and with very low trade-in value. A more immediate problem is this: because more people have PCs, there are more software options for them. On the other hand, if—as has been happening in recent years—Macs begin to make a comeback, the payoff to buying a Mac goes up. Success breeds success. Your payoff depends on what others do.

Buying a stock, especially in hope of making a short-term killing, would be another example. The question in this case is not simply whether the company being invested in is likely to make profits in the future. If it were, investing would be equivalent to choosing a computer based only on whether you like it, regardless of whether anyone else does; or like deciding whether or not to carry an umbrella, a decision based almost entirely on your assessment of whether it might rain, not on whether other people are going to carry their own umbrellas. Instead, short-term investors must also ask themselves whether their stock pick is likely to be attractive to other investors, and, therefore, whether its stock price is likely to go up or down as a result.

Famed economist John Maynard Keynes pointed this out in an oft-quoted passage:

Professional investment may be likened to those newspaper competitions in which the competitors have to pick out the prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. It is not a case of choosing those which, to the best of one's judgement, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligence to anticipating what average opinion expects the average opinion to be.2
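Keynes's contest has a standard numerical cousin, the "p-beauty contest" (an illustration of mine, not an example from the book): each player names a number between 0 and 100, and the winner is whoever comes closest to two-thirds of the average guess. A few lines of code show how layered "I think that you think . . ." reasoning drives the rational guess steadily downward:

```python
# The "p-beauty contest," a numerical cousin of Keynes's newspaper game
# (an assumption-laden sketch, not from the book): each player picks a
# number in [0, 100]; whoever lands closest to 2/3 of the average wins.

def best_reply(expected_average: float, p: float = 2 / 3) -> float:
    """If you expect the average guess to be expected_average,
    your best guess is p times that average."""
    return p * expected_average

guess = 50.0  # a naive player simply guesses the midpoint
for level in range(1, 6):
    guess = best_reply(guess)
    print(f"level-{level} reasoning guesses {guess:.2f}")
```

Each extra level of anticipation shrinks the guess by a factor of two-thirds; carried to its logical limit, everyone guesses zero, the game's unique stable point.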

Maybe we should update Sartre's epigram. Purgatory is other people. Hell is when we don't have a clue about them. And game theory—to damn it with faint praise—is at least a ticket from the latter to the former. In short, although it can be useful and even fun, it's no stairway to heaven.

Isn't It All Too Machiavellian?

It's one thing to use game theory to figure out an optimum strategy for betting at cards, managing a baseball team, or maybe even developing a competitive business plan, but quite another to be "game theoretic" when it comes to human interactions. (Hence, it seems more acceptable to be a game theoretician when playing stock market games than when engaged in a domestic "Battle of the Sexes," as we'll explore shortly.) There is something Machiavellian—cold, cynical, selfish, and calculating—about analyzing human situations and then basing one's decisions on nothing more than this: what generates the highest payoff? My point, however, is that for better or worse, this is precisely what many of us—perhaps most—actually do. Like the centipede, who fell all over himself when asked to explain how he coordinates the movement of all his legs, people often feel intellectually or ethically paralyzed when asked how they arrive at competitive, interactive outcomes.

Not so with animals. They don't seek to justify their behavior in terms of its potential payoff gain (at least, so far as we can tell, they don't!). And yet, maximize their payoffs is precisely what they do. Given the choice between foraging in a food-rich environment and one that is depauperate, every animal that has ever been tested goes for the former. Machiavellian? Perhaps so, but few people—and probably no animals—would see this as reprehensible. Similarly, a sexually aroused male Tungara frog, just yearning to fill the night air with his ardent vocalizations, will nonetheless inhibit himself if, by cheerily croaking away, he would be the only one doing so; there are fringe-lipped, frog-eating bats that home in on croaking male Tungara frogs, so it is very much in the interest of even the horniest male amphibian to croak in a chorus or not at all. Hence, frog choruses, a Machiavellian strategy if ever there was one, whereby each male reduces his personal risk by spreading some of it to others.

The point is this: Strategies happen. It isn't a matter of basing decisions on Machiavellian payoffs, but of recognizing that we—like Tungara frogs—are doing so naturally. Indeed, the more skilled all of us (frogs and folks) become at seeing value in payoffs, strategic cooperation, and so forth, the better off we all may be. At the same time, the better we are at recognizing what others are doing, the better we can be at making good choices for ourselves.

Nonetheless, there is something in us that bristles at the idea. We experience ourselves as possessing—if nothing else—free will. Not surprisingly, therefore, most people resist formulations that seek to reduce our actions to simple cause-and-effect relationships. "Experience teaches us no less clearly than reason," writes Spinoza, "that men believe themselves to be free, simply because they are conscious of their actions, and unconscious of the causes whereby these actions are determined." Most people are especially resistant, in fact, to the suggestion that their actions are "determined" by anything so crass as maximizing their "payoffs."

At least part of the problem is that human beings generally cherish an image of themselves as kind, generous, inclined to "do the right thing," and they attribute that inclination to nothing other than the fact that it is the right thing. If a person does something—even the most generous act—for an identified "payoff," then by definition it no longer appears generous. We are suspicious of the "altruist" who admits that he is seeking social approval, or even divine ratification, or attempting to induce future reciprocation by the beneficiary. The true, admirable altruist (quotation marks removed) is one whose actions are supposed to be uncaused, or, better yet, caused only by purity of heart.

So let's turn directly to that most Machiavellian of thinkers, Niccolò Machiavelli himself, who wrote: "A man who wishes to make a profession of goodness in everything must necessarily come to grief among so many who are not good." And also:

Men must either be caressed or else annihilated; they will revenge themselves for small injuries, but cannot do so for great ones; the injury therefore that we do to a man must be such that we need not fear his vengeance.

And finally, here is Machiavelli's famous advice on whether it is better for a ruler to be loved or feared:

One ought to be both feared and loved, but as it is difficult for the two to go together, it is much safer to be feared than loved, if one of the two has to be wanting. For it may be said of men in general that they are ungrateful, voluble, dissemblers, anxious to avoid danger, and covetous of gain; as long as you benefit them, they are entirely yours; they offer you their blood, their goods, their life, and their children ... and men have less scruple in offending one who makes himself loved than one who makes himself feared; for love is held by a chain of obligation which, men being selfish, is broken whenever it serves their purpose; but fear is maintained by a dread of punishment which never fails.

Still, Machiavelli argued that

a prince should make himself feared in such a way that if he does not gain love, he at any rate avoids hatred; for fear and the absence of hatred may well go together, and will be always attained by one who abstains from interfering with the property of his citizens and subjects or with their women. And when he is obliged to take the life of any one, let him do so when there is proper justification and manifest reason for it; but above all he must abstain from taking the property of others, for men forget more easily the death of their father than the loss of their patrimony.

The reason for quoting Machiavelli at length is not simply to call attention to his up-front cynicism, almost refreshing in its brutal honesty. Rather, it is because Machiavelli, for all his "unprincipled" recommendations, reflects a basic game theory principle: We must consider not only our interests but also the interests and behavior of others, and how those others are likely to respond to our actions. Thus, a prince, according to Machiavelli, should be scrupulous with regard to his subjects' property rights because those subjects are likely to take special umbrage if those rights are infringed. Because love and loyalty are fickle, a prince should rely, if need be, on fear. Again and again we find in Machiavelli that sound strategy depends on a careful assessment of what others—one's subjects or a foreign leader—are likely to do.

Such thinking isn't inherently right-wing, or militaristic. One could just as well offer comparably Machiavellian, game-theoretic political advice that is progressive, left-leaning, and pro-peace. It's really a matter of seeing the degree to which "selfish" benefit might be achieved by taking the other side into account. For every argument that equates self-interest with going it alone, there is another that emphasizes the benefits of cooperation. Taking the other side's perspective into account is not, in itself, tied to either right-wing or left-wing political theory, but it is very definitely the way of game theory.

Here's yet another, more general way of looking at it, from negotiation experts Roger Fisher and William Ury, no Machiavellians, they:

The ability to see the situation as the other side sees it, as difficult as it may be, is one of the most important skills a negotiator can possess. It is not enough to know that they see things differently. If you want to influence them, you also need to understand empathetically the power of their point of view and to feel the emotional force with which they believe it. It is not enough to study them like beetles under a microscope; you need to know what it feels like to be a beetle. To accomplish this task you should be prepared to withhold judgement for a while as you "try on" their views. They may well believe that their views are "right" as strongly as you believe that yours are. You may see the glass as half full of cool water. Your spouse may see a dirty, half-empty glass about to cause a ring on the mahogany finish.3

The Regrettable Reality of Conflict

Game theory is a theory of conflict. Fortunately, it also offers a powerful theory of cooperation, which we shall get to. But first, this hard truth: If mutual agreement were at the root of most interactions, there would be little for game theoreticians to theorize about. As it is, there's quite a lot.

So long as there are different individuals, there are going to be conflicts of interest, not only among those individuals but also among the groups formed by them. Let's look at a few relevant words.

The word rivalry originated with the Latin rivus (river or stream). Rivals were literally "those who use a stream in common." Competitors, by contrast, are those who compete (from the Latin com meaning "together," and petere, "to strive"). Competitors strive against one another by seeking to obtain something that is present in limited supply, such as water, food, mates, or status. Rivals necessarily compete, whenever a sought-after resource is present in limited supply. This is unavoidable. But they do not have to be enemies.

The word enemy also derives from the Latin, this time from in ("not") plus amicus ("friend"), and it implies a state of active hostility. The word conflict, on the other hand, derives from confligere, which means literally "to strike together." It is impossible for two physical objects—like two billiard balls—to occupy the same space. They conflict, and if either is in motion, their conflict will be resolved by a new position for both of them. Within the biological realm—whether human or animal—conflict occurs when different individuals or social entities are rivals or otherwise in competition. Such conflicts can have many different outcomes: one side changed, one side eliminated, both sides changed, neither side changed, or (rarely) both sides eliminated. Conflicts can be resolved in many ways: by violence, by mutual agreement, by issues changing over time, and so forth.

One possibility, and in some ways the simplest means of overcoming enmity, is to diminish competition itself. South of Worcester, Massachusetts, near the Connecticut border, there is a small lake with the wonderful Mohican name of Chargoggagoggmanchauggauggagoggchaubunagungamaugg. In English, it means "You fish your side, I fish my side, nobody fishes the middle: no trouble." We may assume that the early lakeshore residents of this mellifluously named body of water were not enemies. They also knew something about managing human affairs. By diminishing competition—thereby, in a sense, overcoming it—they overcame enmity. But what if, at some time in the future, the inhabitants of one side started to cheat? And what if, anticipating such a possibility, the inhabitants of the other sought to do so first?

At this point, an indignant reader might point out that such perverse questioning is itself part of the problem. If people didn't worry about the cheating of others, and hurry to preempt them, much misery could be avoided. By the same token, game theory can be accused of potentially provoking conflict, because it anticipates it, thereby threatening to become a self-fulfilling prophecy. The idea is that in some cases, predicting certain behavior can contribute to its taking place: anticipate the enmity of the other side, and behave accordingly, and sure enough, you've made an enemy. National security analysts have even coined a term to describe this self-fulfilling prophecy: the security dilemma. It arises when a country attempts to enhance its security by increasing its military power, which in turn reduces the security of its neighbors, who respond by increasing their military power ... which leaves the first country—and everyone else—less secure than before.

Game theoreticians have been accused, sometimes correctly, of contributing to precisely this perverse outcome. But at the same time, there are many potential ways out, notably by using game theory itself to anticipate the worst so as not to be blindsided, and also to avoid provoking the result that one wishes to avoid. It is one thing to be prepared, quite another to cause the problem you dread. A special strength of game theory should be that it is explicitly not one-sided; it incorporates concern for the other players, and recognition of their interests, if only because this leads to a more realistic prediction of outcomes for all concerned.

At the same time, an alternative perspective also deserves hearing. Even as the danger looms that anticipating conflict could result in precipitating it, there is also a hidden, potential risk in denying its likelihood. Here's how it works. Peace is generally thought to be the absence of conflict. Moreover, peace is often considered to be more natural than conflict, perhaps because it is more desirable. Fair enough ... except that the results of this perspective can be troubling in their own right. If you truly believe that the normal state is for the lion to lie down with the lamb, for people to live together in unconflicted bliss, then you are likely to feel especially annoyed when difficulties arise. As a result, when conflicts of interest emerge—as they inevitably do—well-meaning but disappointed idealists are sorely tempted to blame someone for upsetting the peaceful applecart. Convinced that serious evil is afoot, the next step may be to eradicate the evildoer.

By contrast, a more realistic expectation—namely, that conflict, like shit, happens—may lead to less outrage and, in the end, a less violent resolution. Game theory is built upon the idea that conflict is highly likely if not inevitable, but that it needn't necessarily be violent or destructive. Marriage counselors and family therapists, similarly, rarely urge their clients to avoid conflict and to seek, instead, a situation of uninterrupted harmonious bliss. Rather, the goal is to learn how to manage conflict effectively, creatively, and nonviolently. When stuck with lemons, make lemonade.

Here's a simple case of conflict, readily resolved, with a little help from game theory. Two people want to divide a piece of pie. The simple solution—remarkably sophisticated in its assumptions—is for one to cut and the other to choose. Neither person is presumed to be especially nasty or otherwise disreputable; nonetheless, each wants as much as possible. But since Cut the Pie is a "zero-sum" game, each player is aware that whatever the other gets means that much less for oneself. Also, neither can influence the behavior of the other—at least, not directly. Each assumes that the other will pick the largest possible piece. A simple solution might be for the cutter to divide the pie so that there is a very large piece and a very small piece, hoping that the chooser will take the small one! But obviously, this would be a terribly naive assumption, the kind of wishful thinking that results from a failure to take into account the fact that others are as capable of enlightened self-interest as is oneself. Instead, the rational cutter assumes that the chooser will try to maximize her piece, and so, he tries to make the two pieces as similar as possible. The solution to this game is a 50:50 division, or something as close to 50:50 as a human being—armed with a knife, rational self-interest, and a realistic perception of another's likely behavior—is able to get.c
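The cutter's reasoning can be sketched in a few lines of code (a sketch of mine, under the assumption that the chooser always takes the larger piece): whatever split the cutter makes, she is guaranteed only the smaller piece, so her best move is to maximize that guaranteed minimum.

```python
# Cut-and-choose, sketched as a "maximin" problem (an illustrative
# sketch, not from the book).  The cutter picks a split fraction x;
# a rational chooser takes the larger piece, leaving min(x, 1 - x).

def cutter_payoff(x: float) -> float:
    """Payoff to the cutter if the pie is split x : (1 - x)
    and the chooser grabs the larger piece."""
    return min(x, 1.0 - x)

# Try every split in 1% steps and keep the best guaranteed payoff.
candidates = [i / 100 for i in range(101)]
best_split = max(candidates, key=cutter_payoff)
print(best_split, cutter_payoff(best_split))  # -> 0.5 0.5
```

Any deviation from an even cut only shrinks the piece the cutter can count on, which is why the rational division lands at 50:50.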

The result is an equilibrium at which each player does the best he can, given his knowledge that the other is doing the same.

Here is another way of looking at it. An important part of game theory recognizes that, paradoxically, even conflict tends to settle down. Or in mathematical language, it reaches an equilibrium. What kind of equilibrium? Often, one at which each player is making the best possible response to the other, who, in turn, is making his or her best response to the first. (This is a so-called Nash equilibrium, named for mathematician John Nash, winner of the 1994 Nobel Prize in economics; sadly, however, even though Nash equilibria are stable, they don't always produce optimum outcomes for the participants; perhaps the best examples come from the renowned—or notorious—game of Prisoner's Dilemma, discussed in chapter 3.)
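A Nash equilibrium can be checked quite mechanically: a pair of strategies qualifies if neither player can do better by switching alone. The sketch below is mine; the payoff numbers are the standard textbook values for the Prisoner's Dilemma (the book takes up the game in chapter 3 without fixing particular numbers).

```python
# A minimal Nash-equilibrium checker for two-player, two-choice games.
# Payoffs are the conventional textbook Prisoner's Dilemma values
# (an assumption; the book discusses the game without specific numbers).

from itertools import product

C, D = "cooperate", "defect"
# payoffs[(row_choice, col_choice)] = (row player's payoff, column player's)
payoffs = {
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}

def is_nash(row, col):
    """True if neither player gains by unilaterally switching."""
    row_ok = all(payoffs[(r, col)][0] <= payoffs[(row, col)][0] for r in (C, D))
    col_ok = all(payoffs[(row, c)][1] <= payoffs[(row, col)][1] for c in (C, D))
    return row_ok and col_ok

equilibria = [pair for pair in product((C, D), repeat=2) if is_nash(*pair)]
print(equilibria)  # -> [('defect', 'defect')]
```

Note what the check turns up: mutual defection is the only stable outcome, even though both players would be better off cooperating. That is precisely the sense in which a Nash equilibrium can be stable without being optimal.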

For a comparatively benign example of how players can settle, unconsciously and at least somewhat peacefully, into equilibrium strategies, take this case, described by Stephen Jay Gould.4 Gould didn't intend it to illuminate game theoretic equilibria, but it does. He was interested in something more mundane: baseball. More specifically, he was intrigued with the fact that batting averages in baseball consistently declined during the twentieth century. Although there were never very many .400 hitters, for example, there were far more such "phenoms" early in baseball history than in recent times. One explanation: There simply were better athletes in the good old days. But wait: Why shouldn't there also have been better pitchers, which would have kept the batting averages in check?

Analyzing decades of baseball statistics (baseball yields nothing if not statistics), Gould demonstrated that not only are there fewer exceptionally good hitters in modern times, there are also fewer exceptionally bad ones. To be sure, there aren't many .400 hitters these days, but neither are there many who hit less than .200, or even .220. Gould made the point that modernity as such has produced a reduction in the extremes, as pitchers and batters have gradually adjusted to each other:

When baseball was very young, styles of play had not become sufficiently regular to foil the antics of the very best. Wee Willie Keeler could "hit 'em where they ain't" (and compile an average of .432 in 1897) because fielders didn't yet know where they should be. Slowly, players moved toward optimal methods of positioning, fielding, pitching, and batting—and variation inevitably declined. The best now meet an opposition too finely honed to its own perfection to permit the extremes of achievement that characterized a more casual age.

We can have confidence in Gould's explanation because this system—in which offensive and defensive players gradually develop an equilibrium of mutual best responses to each other—tends to break down whenever the rules are changed. Things were thrown out of equilibrium, and variation increased (that is, more high and low batting averages) when the pitching mound was lowered, when new teams entered, when the size of the strike zone was changed, and even when new technology was introduced, such as a livelier ball or different kinds of gloves. In such cases, the balance was typically shaken for a time, during which offensive and defensive players experimented with different ways of doing things; some did unusually well, others were exceptionally inept. Eventually, a new equilibrium was typically established, with each not only doing the best he could, but—crucial for our game theory perspective—the best he could against the other. Each side's "best" depended (and continues to depend) on what the other is doing.

Baseball is only a game. But so are many other things.

What It'll Tell Us and What It Won't

A criticism sometimes raised is that game theory is too encased in its own logical analysis; it doesn't tell us anything that isn't already present in the underlying assumptions of the system. Like 2 + 2 = 4. (I know: I promised no equations, but this one hardly counts, nor do the others appearing just ahead.) Two plus two must equal four, because of how we define two and four, the latter being twice two. But, in fact, self-contained systems can be very interesting, or at least, they can include a lot of richness not immediately apparent. For example, take the Pythagorean theorem. In a right triangle, the hypotenuse squared equals the sum of the squares of the other two sides. This is necessarily true, given the nature of right triangles. It flows automatically and ineluctably from the definition of this particular kind of geometric shape. Yet it isn't immediately obvious. It had to be worked out by a mathematical genius. Even though the relationship of hypotenuse to sides is embedded in the very definition of "right triangle," it wasn't known until Pythagoras pointed it out. Since then, it has served as the cornerstone for much of geometry, most of which is also logically self-contained (as is algebra, by the way).

Thus, we can solve a simple equation (say, 3x + 4 = 10) by performing certain operations (subtracting 4 from each side, then dividing by 3), only because in doing so, we aren't really changing the equation in any way. By performing these simple operations, we reveal something already contained in the equation, the fact that x = 2. But that doesn't mean that we haven't revealed something important. (Or at least, something that many struggling algebra students don't consider self-evident.)

More than this, in solving this equation we have used formal logic to tell us something that most of us, at least, could not immediately discern. That's what game theory does: It tells us things that are true but not immediately obvious. In the case of algebraic equations, the "value" of x varies with the initial conditions: 5x + 6 = 21 specifies a different value of x than does 3x + 4 = 10. Simple algebra enables us to reveal x once any particular condition—a specific equation—is given. Game theory will not tell us anything about the world in general, any more than algebra will tell us the value of x in general, but it will allow us to learn a whole lot once the particular starting conditions—the nature of the game—are made clear.

By stripping away extraneous material, game theory can help us look clearly at what's at stake in many situations, especially those involving cooperation or competition. At the same time, we have already noted that game theory oversimplifies reality. In most of the cases we'll be examining, each player will have two choices (either go straight or swerve, either accumulate weapons or don't, either look for new sexual partners or stay home, and so forth), whereas, in fact, individuals as well as organizations usually have more than two options from which to choose, no matter what is going on. But for simplicity's sake—or, as mathematicians put it, to make the analysis "tractable"—it is often necessary to reduce each player's array of options to just two. Supporters of game theory point out, however, that the basic principles don't change no matter how many courses of action you consider; relying on simple models to help us understand complex reality is essentially a matter of walking before we can run. And devotees of game theory have reason to believe that they've at least been getting somewhere.

Nonetheless, here is a bit more complexity that must be taken into account: Even when payoffs depend on another's action, in most cases, people aren't helplessly dependent on what the other party does. They can talk to each other, for example, and try to persuade, cajole, threaten, or point to the better angels of human nature: one's own, if attempting to convince someone else that you are going to "do the right thing," or someone else's, if attempting to convince that someone else to behave angelically. Supporters of game theory point out, in turn, that no matter what anyone may say, what really counts is the moment of truth when someone actually acts (as distinct from talks), and that a rich texture of discussion doesn't negate the underlying reality of payoffs; it just adds to the information that any rational player must consider in deciding on the next move.

That's not all. Critics sometimes note that game theory, with its dependence on identified payoffs and its assumption that players are merely trying to obtain the highest payoff possible, fails to give other factors their due: For example, some players may be more altruistic and less selfish than game theory assumes. They may seek a different, and perhaps even a higher reward, such as the pleasure of helping others (doing good being its own reward), or perhaps greasing their way into heaven. To this, game theoreticians respond that such considerations, even if true, don't invalidate the notion that different actions produce different payoffs, or that people seek to obtain the highest payoff possible; it is simply necessary to consider how people value the consequences of their behavior, which might require changing the payoffs received by certain people for certain actions. This is very different from denying the existence or relevance of payoffs themselves. For a devoutly religious Christian, turning the other cheek might have a very high payoff value. For someone else, it may merely result in a painful slap in the face.

Game theory does not necessarily assume personal selfishness. People can, for example, prefer not to receive a particular payoff, in which case it takes on a negative value. Or perhaps they would like to donate a particular payoff to charity ... in which case it still has a positive value to them, but because of its "give-away value."

Contrary to what some might think, game theory is not necessarily predisposed toward those who are especially Machiavellian, amoral, or immoral. There is no reason why a payoff couldn't take all sorts of highly ethical considerations into account. There is not even a presupposition that winning is always the goal. An adult playing checkers or tic-tac-toe with a child, for example, might ascribe a high payoff to losing. For a dyed-in-the-wool altruist, there could presumably be a high payoff to helping someone else, even at substantial personal cost. This simply means that for such an individual, a different payoff value must be assumed, not that there is no possible set of rewards that accurately reflects his or her preferences. Reformulating one's payoffs to take this into account doesn't show the power of "true altruism." Quite the contrary: It nearly always italicizes the self-aggrandizing nature of the underlying motivation. It suggests that even the most self-denying saint is deriving personal—dare we say, selfish?—benefit from her actions.

Nor is this pattern limited to human beings. For example, one of the most important discoveries in evolutionary biology concerns the nature of animal altruism. It is now widely acknowledged that an act that appears to be self-abnegating—benefiting another at the expense of the "altruist"—can actually be selfish at the level of individual genes, insofar as copies of the genes in question are present in the beneficiary. Since genetic relatives are, by definition, individuals likely to share genes, altruism directed toward them warrants a payoff value that is high rather than low. Thus, when a prairie dog "barks" an alarm call, thereby warning its kin of an impending attack by a coyote, the alarmist—more properly, genes that lead to alarm calling—may well be obtaining a substantial positive return on his or her behavior, since within the bodies of those prairie dogs thereby alerted there will reside copies of alarm-calling genes, which profit from the alarm calling itself. Altruism at the level of bodies (negative payoff) is typically selfishness at the level of genes (positive payoff).5
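This gene's-eye accounting has a famous formalization, Hamilton's rule, which the passage above describes in words but does not spell out: kin-directed altruism is favored when r × b > c, where r is the genetic relatedness between altruist and beneficiary, b the benefit conferred, and c the cost borne. The numbers below are purely illustrative.

```python
# Hamilton's rule: altruism pays, genetically, when r * b > c.
# (The rule itself is standard evolutionary biology; the specific
# benefit and cost figures here are invented for illustration.)

def altruism_favored(r: float, b: float, c: float) -> bool:
    """True when kin-directed altruism yields a net genetic payoff."""
    return r * b > c

# A prairie dog's alarm call: suppose calling costs the caller 1 unit
# of risk but confers 4 units of benefit on the relative it warns.
full_sibling = 0.5   # relatedness between full siblings
cousin = 0.125       # relatedness between first cousins

print(altruism_favored(full_sibling, b=4, c=1))  # -> True
print(altruism_favored(cousin, b=4, c=1))        # -> False
```

The same act can thus be a "positive payoff" toward a sibling and a losing proposition toward a distant cousin, which is exactly the payoff-relabeling move described in the text.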

Maybe the unpleasant connotations of game theory do not reside in the models themselves so much as in the very idea of externalizing personal motivations. There is something hard-eyed and seemingly amoral in the very effort to identify and—worse yet—to quantify motivations, even if one gives a very high payoff to "kindness," "ethical behavior," or "doing what is right, simply because it is right." Face it: We don't like being told that there are reasons why we do things. The extent to which people take—or give—credit for an action seems to vary inversely with the degree to which that action can be "explained." Somehow, a rational basis for behavior seems to diminish our individuality, our free will, or our goodness. It is as though many of us would rather believe that our actions are uncaused, especially those behaviors in which we take particular ethical pride. Good actions are supposed to flow of their own accord, rather than be caused, certainly not caused by some sort of scheming, manipulative assessment, even if entirely unconscious, of alternative costs and benefits. And even if "doing the right thing" is valued for its own sake.

Another criticism is that game theory applied to behavior takes ends as given and concerns itself only with means to achieve these ends. In this regard, it may be no different from reason itself. Thus rationality may best be understood as a logical technique, the most effective way of gaining one's ends—what Max Weber called Zweckrationalität (instrumental rationality), as opposed to Wertrationalität (rationality of rightness). The former illuminates the most efficient way of going about achieving one's goals, whereas the latter is concerned with what these goals ought to be. Game theory is totally neutral concerning Wertrationalität (what an individual ought to prefer), although it is quite clear on Zweckrationalität (how to go about achieving a goal once it has been identified).

For Bertrand Russell, justly renowned for his contribution to both mathematical rationality and ethical analysis, "reason has a perfectly clear and precise meaning. It signifies the choice of the right means to an end that you wish to achieve."6 (Professor Andrew Colman has pointed out to me that this insight was prefigured by the eighteenth-century philosopher David Hume, in his Treatise of Human Nature: "A passion can never, in any sense, be called unreasonable, but when founded on a false supposition, or when it chooses means insufficient for the designed end.")

For game theorists, rationality means acting so as to bring about the most preferred possible outcome, given that other players are also trying to achieve the same ends. It does not speak to the question of how those outcomes are ranked: It may be perfectly rational for someone to prefer beer over wine, and for someone else to feel exactly the opposite. But if you do prefer beer, then it is irrational to buy wine ... unless you are buying it for someone else, or for some other purpose.

Game theory never questions the rationality of the utilities (values or goals) people employ. It simply asks: Given these goals, and their rankings, what is the most effective way to achieve them? It assumes that the players are rational, but this simply means that they choose from among the available options the one most likely to achieve their goals, what they value most. It doesn't concern itself with what people value, or why. So, if someone wants to lose a game, then that's his highest payoff. If he wants to suffer pain, that's his choice, even if others might think such a goal "irrational." The issue then becomes: Given that someone wants to suffer pain of a particular sort, what is the most effective way to achieve it? Similarly if the goal is to inflict pain, to win the largest amount of money, and so forth.

At the same time, game theory carefully applied can help us find out what people are valuing, by helping us focus on what payoffs they are obtaining, or maneuvering toward. (This assumes, of course, that they are rational in that at least to some extent their strategies are oriented toward obtaining whatever it is that they want.) In evolutionary studies, for example, it is increasingly clear that living things seek to maximize their "fitness"—their success in projecting genes into future generations—and this clarity has been achieved, in part, by noting the degree to which animal behavior accords with the predictions of game theory whenever reproductive success is at stake. By contrast, if biologists were to assume that animals are maximizing their oxygen intake, or their time spent sleeping, or their probability of bumping into one another, such predictions would not be supported. Game theory thus helps us understand what living things are up to.

You might want to think of game theory as being like National Public Radio (NPR), or a microwave oven: neither is essential, yet both are useful. Game theory can be an enhancement to our understanding (like NPR) and in some cases, a convenience (like a microwave oven). I don't recommend relying on NPR, and nothing else, for news, or cooking only with microwaves, but at the same time, I wouldn't ignore the former just because The New York Times or the local television news does a better job with certain things, or discard my microwave oven because grilling is far preferable when it comes to cooking a steak.

Later, we'll take a look at rationality itself, and ask to what extent people actually behave rationally, however defined. But for now, let's leave it that game theory seems to be a useful tool, not an end in itself. It can help clarify our thinking and, in some cases, even enable us to have a rollicking good time playing with our own minds and seeing familiar things in new ways. It isn't, however, a holy grail, or even a map for how to find it.

Duels, Truels, and Rules

Game theory violates one of the most beloved American myths: the rugged individualist, the notion that success comes to those who go out on their own and grab it by the scruff of the neck, wrenching wealth and happiness from the world, without regard to what anyone else does or wants. Think of Arthur Miller's play Death of a Salesman, in which we learn briefly of Willy Loman's brother Ben, who went off to the wilds of Africa: "When I walked into the jungle, I was seventeen. When I walked out I was twenty-one. And, by God, I was rich!" It didn't matter whether there were any native Africans, whether Ben had any partners or victims. Ben Loman "made it," and he did so all by himself. It is a useful—if inaccurate—image for a growing country, expanding into a rich continent. Think of the rugged frontiersmen (and -women), carving a nation out of the wilderness, reveling in liberty and pursuing happiness. Or of the gunslinger, or the Wild West marshal.

But even here a kind of game intervenes. Call it the Gunslinger Game. Two gunfighters walk slowly toward each other in the dusty street. The outcome for each clearly depends not only on what he does, but on the other's action, too. Neither wants to get shot; each wants to shoot the other. There is a payoff to drawing your gun first and shooting your opponent, but since each seeks that same payoff, the situation is unstable, to put it mildly. A frequent outcome: Both draw and each shoots the other.

The game gets more interesting—and no less lethal—if we add this complication: Each still wants to shoot the other, but neither wants to be the one who draws first, because the initiator can legally be seen as the attacker. The problem, of course, is that the attacker is also more likely to be the victor ... unless the responder is so fast that he can wait until the attacker attacks, then still fire first. In the real world, most gunfighters are not that confident. Neither are countries. (Animals, interestingly, often are. Thus, conflict among the bitterest of rivals typically involves lots of bluff and bluster, but little lethality. Even male rattlesnakes, capable of killing each other by a single bite, instead seek to push each other over; they keep their revolvers holstered.)

But human beings have discovered how to arm themselves via technology, and, as a result, they confront each other with weaponry that goes beyond the "merely biological." And as a further result, situations of this kind, in which each side fears being preempted by the other, are dangerously unstable. Think of the United States and the former Soviet Union during the darkest days of the Cold War. A great fear was that either side (or both) might reason as follows: "I'm not planning to attack, but they might be (or, they evidently think—incorrectly—that we are planning to attack them). As a result, they might well be intending to preempt us by attacking first. So, I better preempt their preemption." In the fateful days leading up to World War I, both the Allies and the Central Powers knew that whoever mobilized first would have an advantage. Germany feared that France and Russia would get the jump; France and Russia had similar anxieties. No wonder there was a war.

We'll see shortly that a duel, whether between people or countries, can be what is known as a Prisoner's Dilemma, in which the participants become locked into a devilish paradox whose outcome is disadvantageous to both. Dueling gunslingers, like countries playing brinkmanship, can thus find themselves pushed into shooting each other.
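The devilish logic of such a dilemma can be made concrete with the standard payoff matrix used in textbook treatments of the Prisoner's Dilemma (the numbers below are the conventional illustrative values, not Barash's): whatever the other player does, "defecting" (drawing first) pays better, yet mutual defection leaves both worse off than mutual restraint would have.

```python
# A standard Prisoner's Dilemma matrix with conventional illustrative payoffs.
# Each entry is (row player's payoff, column player's payoff).
# Read "defect" as drawing first, "cooperate" as holding fire.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual restraint
    ("cooperate", "defect"):    (0, 5),  # the cooperator is shot
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # both shoot: poor joint outcome
}

def best_reply(opponent_move: str) -> str:
    """Return the move maximizing the row player's payoff
    against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection is the best reply to either move, so it dominates:
print(best_reply("cooperate"), best_reply("defect"))
# Yet mutual defection (1, 1) is worse for both than mutual cooperation (3, 3).
```

This is the paradox in miniature: individually rational choices lock both players into an outcome neither prefers.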

An interesting variant on the duel was first introduced by economist Martin Shubik,7 and then elaborated by NYU political scientist Steven Brams.8 Call it a truel, since it is a duel for three. Let's imagine three gunslingers, positioned like the three vertices of an equilateral triangle. Assume further that they each have only one bullet (this isn't strictly necessary, but makes it easier to analyze). If the trio—call them Moe, Larry, and Curly—are all equally good shots, we might expect that, say, Moe would shoot at Larry, Larry at Curly, while Curly fires at Moe. Now let's also suppose that one of them, Curly, is particularly incompetent. You might then think that he would be a dead duck, but, in fact, the ideal strategy for him—and, ironically, for the others—suggests that he has a fighting chance ... in fact, the best chance of all.

Curly's ideal strategy is to wait. The two best shots should proceed to fire at each other, since each is the greatest threat to the other. Having refrained, Curly, the worst shot, can then fire with impunity at the survivor. In this way, the poorest shot is paradoxically assured the greatest probability of survival. (For a modern-day example, think of a hotly contested presidential primary election, in which the front-runner is prone to being knocked out early, since he is the target of everyone else's "best shot." In such cases, it may well be smartest to stay in the middle of the pack and not make your move until the leaders have damaged one another.)
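The truel logic can be checked with a small Monte Carlo sketch. The accuracies below are illustrative assumptions (the book assigns no numbers): Moe and Larry are crack shots who target each other, while Curly holds his fire and then spends his single bullet on whoever survives.

```python
import random

# One-bullet truel: the two sharpshooters fire simultaneously at each
# other; Curly, the poor shot, waits and then fires at a surviving rival.
# Accuracy values are illustrative assumptions.
ACCURACY = {"Moe": 0.8, "Larry": 0.8, "Curly": 0.3}

def truel_round(rng: random.Random) -> set:
    alive = {"Moe", "Larry", "Curly"}
    # Simultaneous exchange between the two best shots.
    if rng.random() < ACCURACY["Moe"]:
        alive.discard("Larry")
    if rng.random() < ACCURACY["Larry"]:
        alive.discard("Moe")
    # Curly spends his one bullet on a surviving rival, if any remain.
    targets = [p for p in ("Moe", "Larry") if p in alive]
    if targets and rng.random() < ACCURACY["Curly"]:
        alive.discard(rng.choice(targets))
    return alive

rng = random.Random(42)
trials = 100_000
survived = {"Moe": 0, "Larry": 0, "Curly": 0}
for _ in range(trials):
    for p in truel_round(rng):
        survived[p] += 1
for p in ("Moe", "Larry", "Curly"):
    print(p, survived[p] / trials)
```

Under these assumptions the paradox is stark: nobody ever aims at Curly, so the worst shot survives every round, while each sharpshooter survives only a small fraction of the time.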

A kind of truel has even been observed among animals, including certain fish. In these cases, large aggressive males—equivalent to hotshot gunslingers—fight it out with one another while small, unobtrusive males take advantage of their mutual preoccupation to sneak in and attempt to fertilize the females.

In an actual human truel with guns, each individual has an even better and more paradoxical option: fire into the air. A player who did so would no longer be a threat to the other two; those two would have no reason to shoot at the disarmed individual and would be more likely, instead, to aim for each other. In fact, given the odds for survival, a disarmament competition might even ensue, in which all parties strive to be the first to waste their bullet, thereby removing themselves as a threat to the others! Alternatively, each ought to refrain from shooting at all, although Reservoir Dogs, directed by Quentin Tarantino, suggests otherwise: in the movie's climactic scene, three criminals confront one another in a tense triangle of drawn weapons ... and everyone shoots everyone. Maybe it's just Hollywood. Or maybe more people should study game theory.

Copyright © 2003 by David P. Barash

Table of Contents

1. The Games We All Play: What They Are, Why They Matter
What's the Big Idea?
Sartre's Dictum
Isn't It All Too Machiavellian?
The Regrettable Reality of Conflict
What It'll Tell Us and What It Won't
Duels, Truels, and Rules
2. Mastering the Matrix
Coordination Games
Pascal's Wager
Simple Strategies
Follow the Leader?
Guessing Games
The Battle of the Bismarck Sea
3. Prisoner's Dilemma and the Problem of Cooperation
The Classic Formulation
A Few Examples
Forward Thinking and Backward Induction
Other Ways Out?
Axelrod's Tournament and Rapoport's TIT FOR TAT
The Live-and-Let-Live System During World War I
Reciprocity
Future Tripping
Prisoner's Dilemma as Rorschach Test
Zero-Sum Jealousy
4. Social Dilemmas: Personal Gain Versus Public Good
Some Examples
Social Dilemmas and the Social Contract
The Free-Rider Problem
The Tragedy of the Commons
Rousseau's Stag Hunt
A Bit of Psychology: Groups and Grouches
Some More Psychology: Lifeboats, Loners, and Losers
5. Games of Chicken
The Classic Case
Some Examples: Chickens, One to One
Madmen, Steering Wheels, and Other Ploys
Ultimatum Games
Social Chickens
War and Peace
The Conscious Chicken
6. Animal Antics
Snaggle-Mouths, Side-Blotches, and Stability
Sex
Social Dilemmas Among Natural Groups
Hawks and Doves
Bullies, Bourgeois, and Other Complications
Dads, Cads, Coys, and Sluts
Different Ways of Being a Male
7. Thoughts from the Underground
On Being Reasonable
On Being Unreasonable
Stubborn Unreasonableness
Predictable Unreasonableness
Understandable Unreasonableness
A Troublesome Fallacy
"Logical" Vicious Circles
Pas Trop de Zele
Notes
Acknowledgments
Index