## Read an Excerpt

#### Game Theory and Politics

**By Steven J. Brams**

**Dover Publications, Inc.**

**Copyright © 2004 Steven J. Brams**

All rights reserved.

ISBN: 978-0-486-14363-7

CHAPTER 1

*INTERNATIONAL RELATIONS GAMES*

**1.1. INTRODUCTION**

Most discussions of game theory in the international relations literature hardly go beyond a presentation of a few hypothetical examples used in general expository treatments of the subject. Although some vague analogies are usually drawn to situations of conflict and cooperation in international relations, there seems to be little in the *substance* of international relations that has inspired rigorous applications of the mathematics of game theory. An exception is the considerable work (some classified) that has been done on specific problems related to the use and control of weapons systems (e.g., targeting of nuclear weapons, violation of arms inspection agreements, search strategies in submarine warfare), but these are not problems of general strategic interest.

Given this dichotomy between general but vague and rigorous but specialized studies in international relations, we shall try to tread a middle ground in our discussion of applications. At a mostly informal level, we shall indicate with several real-life examples the relevance of game-theoretic reasoning to the analysis of different conflict situations in the international arena. At a more formal level, we shall develop some of the mathematics and prove one theorem in two-person game theory that seem particularly pertinent to the identification of optimal strategies and equilibrium outcomes in international relations games.

Several of our examples relate to military and defense strategy, the area in international relations to which game theory has been most frequently (and successfully) applied. This is not to say, however, that only militarists and warmongers find in game theory a convenient rationale for their hard-nosed and occasionally apocalyptic views of the world. On the contrary, some of the most interesting and fruitful applications have been made by scholars with a profound interest in, and concern for, the preservation of international peace, especially as this end may be fostered by more informed analytic studies of arms control and disarmament policies, international negotiation and bargaining processes, and so forth.

If there is anything that serious game-theoretic analysts of international relations share, whatever their personal values, it is the assumption that actors in the international arena are rational with respect to the goals they seek to advance. Although there may be fundamental disagreements about what these goals are, the game-theoretic analyst does not assume events transpire in willy-nilly uncontrolled and uncontrollable ways. Rather, he is predisposed to assume that most foreign policy decisions are made by decision makers who carefully weigh the advantages and disadvantages likely to follow from alternative policies. Especially when the stakes are high, as they tend to be in international politics, this assumption does not seem an unreasonable one.

In our review of applications of game theory to international relations, we shall adopt a standard classification of games. Much of the terminology used in the book will be introduced in this chapter, mostly in the context of our discussion of specific games. This may make the discussion seem a little disjointed as we pause to define terms, but pedagogically it seems better than isolating all the new concepts in a technical appendix that offers little specific motivation for their usage. For convenient reference, however, technical concepts are assembled in a glossary at the end of the book.

Unlike later chapters, where we shall develop one or only a few different game-theoretic models in depth, we shall analyze several different games and discuss their applications in this chapter. From the point of view of the substance of international relations, our hopscotch run through its subject matter with examples will probably seem quite cursory and unsystematic. In part, however, this approach is dictated by the heterogeneity of the international relations literature and the lack of an accepted paradigm, or framework, within which questions of international conflict and cooperation can be subsumed. It is hoped that an explicit categorization of games in international relations will provide one useful touchstone to a field that presently lacks theoretical coherence.

**1.2. TWO-PERSON ZERO-SUM GAMES WITH SADDLEPOINTS**

Although the concept of a "game" usually connotes lighthearted entertainment and fun, it carries no such connotation in game theory. Whether a game is considered to be frivolous or serious, its various formal representations in game theory all connect its participants, or *players,* to *outcomes*—the social states realized from the play of a game—through the rules. Players need not be single individuals but may represent any autonomous decision-making units that can make conscious choices according to the *rules,* or instructions for playing the game. As we shall show, there are three major ways of representing a game, two of which are described in this chapter.

Although it is customary to begin with a discussion of one-person games, or games against nature, these are really of little interest in international relations. A passive or indifferent nature is not usually a significant force in international politics. If "nature" (as a fictitious player) bequeaths to a country great natural resources (e.g., oil), it is the beliefs held by leaders about how to use these resources, not the resources themselves, which usually represent the significant political factor in its international relations (as the policies of oil-producing nations made clear to oil-consuming nations in 1973-74).

When we introduce a second player in games, many essentially bilateral situations in international relations can be modeled. Most of these, however, are not situations of pure conflict in which the gains of one side match the losses of the other. Although voting games of the kind we shall discuss in Chapter 3 can usefully be viewed in this way, there is no currency like votes in international relations that is clearly won by one side and lost by the other side in equal amounts. In the absence of such a currency, it is more difficult to construct a measure of value for adversaries in international conflict.

The analysis of the World War II battle described below illustrates one approach to this problem. In February 1943 the struggle for New Guinea reached a critical stage, with the Allies controlling the southern half of New Guinea and the Japanese the northern half. At this point intelligence reports indicated that the Japanese were assembling a troop and supply convoy that would try to reinforce their army in New Guinea. It could sail either north of New Britain, where rain and poor visibility were predicted, or south, where the weather was expected to be good. In either case, the trip was expected to take three days.

General Kenney, commander of the Allied Air Forces in the Southwest Pacific Area, was ordered by General MacArthur as supreme commander to inflict maximum destruction on the convoy. As Kenney reported in his memoirs, he had the choice of concentrating the bulk of his reconnaissance aircraft on one route or the other; once the Japanese convoy was sighted, his bombing force would be able to strike.

Since the Japanese wanted to avoid detection and Kenney wanted maximum possible exposure for his bombers, it is reasonable to view this as a *strictly competitive game.* That is, cooperation between the two players is precluded by the simple fact that it leads to no joint gains; what one player wins has to come from the other player. The fact that nothing of value is added to, or subtracted from, a strictly competitive game means that the *payoffs,* or numbers associated with the outcomes for each pair of strategies of the two players, necessarily sum to some constant. (These numbers, called *utilities,* indicate the degree of preference that players attach to outcomes.)

We refer to such games as *constant-sum;* if the constant is equal to zero, the game is called *zero-sum.* In the above-described game, which came to be known as the Battle of the Bismarck Sea, the *payoff matrix,* whose entries indicate the expected number of days of bombing by Kenney following detection of the Japanese convoy, is shown in **Figure 1.1** for the two strategies of each player. Because the payoffs to the Japanese are equal to the negative of the payoffs to Kenney (i.e., the payoffs multiplied by -1), the game is zero-sum. Since the row player's (Kenney's) gains are the column player's (Japanese's) losses and vice versa, we need list only one entry in each cell of the matrix, which conventionally represents the payoff to the row player.

The commanders of each side had complete freedom to select one of their two alternative strategies, but the choice of neither commander alone could determine the outcome of the battle that would be fought. If we view Kenney as the maximizing player, we see that he could assure himself of an outcome not less than the minimum in each row. Consider the minimum values of each row given in **Figure 1.1**: Kenney's best choice was to select the strategy associated with the maximum of the row minima (i.e., the value 2, which is circled), or the *maximin*—search north—guaranteeing him at least two days of bombing. Similarly, the Japanese commander, whose interest was diametrically opposed to Kenney's, would note that the worst that could happen to him is the maximum in any column. To minimize his exposure to bombing, his best choice was to select the strategy associated with the minimum of the column maxima (i.e., the value 2, which is circled), or *minimax*—sail north—guaranteeing him no more than two days of bombing.
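The maximin/minimax reasoning above can be sketched in a few lines of code. The payoff entries below are reconstructed from the figures cited in the text (two days of bombing if both go north or if Kenney searches north while the convoy sails south, one day if Kenney searches south while the convoy sails north, and three days if both go south); since **Figure 1.1** is not reproduced here, treat the matrix as an assumption consistent with the discussion:

```python
# Payoff matrix for the Battle of the Bismarck Sea (days of bombing,
# from Kenney's point of view). Rows: Kenney searches north / south;
# columns: the Japanese sail north / south. Entries reconstructed
# from the figures cited in the text.
payoffs = [
    [2, 2],  # search north
    [1, 3],  # search south
]

# Maximin for the row (maximizing) player: the best of the row minima.
row_minima = [min(row) for row in payoffs]
maximin = max(row_minima)

# Minimax for the column (minimizing) player: the best of the column maxima.
col_maxima = [max(col) for col in zip(*payoffs)]
minimax = min(col_maxima)

print(row_minima, maximin)   # [2, 1] 2  -> Kenney searches north
print(col_maxima, minimax)   # [2, 3] 2  -> the Japanese sail north
```

Since the maximin and minimax coincide at 2, both security-level calculations point to the northern route, matching the analysis in the text.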

The least amount that a player can receive from the choice of a strategy is the *security level* of that strategy. For the maximizing player, his security levels are the minima of his rows, and for the minimizing player they are the maxima of his columns. For the players to maximize their respective security levels, the maximizing player (in this example, Kenney) should choose a strategy that assures him of at least two days of bombing, and the minimizing player (the Japanese) a strategy that insures him against more than two days of bombing.

Note that there is no arbitrariness introduced in assuming that Kenney is the maximizing player and the Japanese the minimizing player. If we defined the entries in the payoff matrix to be the number of days required for detection of the convoy *before* commencement of bombing, then the roles of the two players would simply be reversed but the strategic problem would remain the same.

The above strategy choices are predicated on the conservative assumption that a player will seek to foreclose the selection of his least desirable outcomes rather than try for his best outcome (three days of bombing in the case of Kenney), should there be a conflict between the two goals.

A player's choice of such a strategy will be reinforced if he anticipates that his opponent will apply the same assumption to his own choice of a strategy. For example, the pessimistic expectation of Kenney that the Japanese would sail north, where visibility was expected to be bad, gives him added impetus to search north (ensuring two days of bombing). On the other hand, the pessimistic expectation of the Japanese that Kenney would search north, where their chances of evasion were greatest, offers no incentive for them to sail north (whether they sail north or south, they can expect two days of bombing). If their expectation should turn out to be incorrect, however, they would suffer a penalty for sailing south (namely, three days of bombing), which is sufficient to dictate their choice of the northern route. Thus, the outcome of the game is, in a sense, determined, and we refer to it as a *strictly determined game.*

More precisely, if a matrix game contains an entry that is simultaneously the minimum of the row in which it occurs and the maximum of the column in which it occurs (i.e., if the maximin is equal to the minimax), the game is strictly determined and the entry is a *saddlepoint.* The word "saddlepoint" derives its name from the fact that the surface of a saddle curves upward in one direction from the center (i.e., the line of motion of the horse), corresponding to a row minimum, and downward in the other direction, corresponding to a column maximum. Thus, an entry in a matrix game which is simultaneously the minimum of a row and the maximum of a column is analogous to the center of a saddle-shaped surface (though the matrix entries do not possess the property of continuity that characterizes a smooth surface). A two-person zero-sum game may have several saddlepoints, but in such a case all of them will have the same value.
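The saddlepoint definition translates directly into a search over the matrix. A minimal sketch, using the Bismarck Sea payoffs as reconstructed from the text:

```python
def saddlepoints(payoffs):
    """Return the (row, col) indices of every entry that is
    simultaneously the minimum of its row and the maximum of
    its column."""
    col_max = [max(col) for col in zip(*payoffs)]
    return [
        (i, j)
        for i, row in enumerate(payoffs)
        for j, v in enumerate(row)
        if v == min(row) and v == col_max[j]
    ]

# Bismarck Sea matrix: the single saddlepoint is the
# (search north, sail north) entry, with value 2.
bismarck = [[2, 2], [1, 3]]
print(saddlepoints(bismarck))   # [(0, 0)]
```

A matrix with several saddlepoints would yield several index pairs here, but, as the text notes, all of them would share the same value.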

The *value* of a game has the property that it is the best outcome that either player can assure himself of, and in a strictly determined game it is always equal to the saddlepoint. A strategy assuring a player that he will obtain a payoff at least equal to this amount is an *optimal strategy,* and a player who selects his optimal strategy is said to be *rational.* In the Battle of the Bismarck Sea, the maximin and minimax strategies that intersect the saddlepoint are optimal strategies and are *in equilibrium:* It is not to the advantage of either player to change his optimal strategy if the other player does not change his. This can be seen from the fact that the row player (Kenney) cannot gain by unilaterally changing his maximin strategy since the saddlepoint is the largest entry in the Japanese "sail north" column; and the column player (the Japanese) cannot gain by unilaterally changing his minimax strategy since there is no larger entry in the Kenney "search north" row. Therefore, a player's knowledge of his opponent's optimal strategy provides no inducement for him to switch his own choice of an optimal strategy; on the contrary, as we have shown, such knowledge may reinforce a player's choice of his optimal strategy.
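The equilibrium property can also be checked directly: neither player should gain by a unilateral deviation. The following sketch (again assuming the reconstructed Bismarck Sea payoffs) tests exactly the two conditions stated above:

```python
def is_equilibrium(payoffs, i, j):
    """True if the strategy pair (i, j) is in equilibrium in a
    zero-sum matrix game: the row (maximizing) player cannot do
    better by changing rows, and the column (minimizing) player
    cannot do better by changing columns."""
    v = payoffs[i][j]
    row_ok = all(payoffs[k][j] <= v for k in range(len(payoffs)))
    col_ok = all(payoffs[i][m] >= v for m in range(len(payoffs[0])))
    return row_ok and col_ok

bismarck = [[2, 2], [1, 3]]
print(is_equilibrium(bismarck, 0, 0))  # True: (search north, sail north)
print(is_equilibrium(bismarck, 1, 1))  # False: the Japanese would sail north
```

For the (0, 0) pair, 2 is the largest entry in its column and the smallest in its row, so neither side can profit from switching alone, which is what makes the saddlepoint self-enforcing.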

Thus, it is not true, as Anatol Rapoport asserts, that "players *must* take the reasoning of the opponent" to determine their optimal strategies in a strictly determined game. Indeed, in choosing his optimal strategy, a player can ignore the strategy choice of his opponent. By maximizing *his own* security level, he automatically assures himself of a payoff at least equal to the value of the game; if both players maximize their security levels, the saddlepoint will always be the outcome selected. To be sure, the logic of maximizing security levels is certainly prudent when playing against a rational opponent applying the same logic, but the strategy choice implied by this logic can be determined independently of an opponent's strategy choice.

Neither is it true, as Karl W. Deutsch asserts, that a minimax strategy "cannot take advantage of any mistakes he [an adversary] may make." In games with a saddlepoint, a minimax strategy may indeed enable one to exploit an adversary's mistakes; a glance at **Figure 1.1** will show that if the Japanese had selected their minimax strategy of sailing north, Kenney would have suffered his worst outcome (one day of bombing) had he selected his nonoptimal strategy of searching south.

It is true that optimal strategies hold up best against the most damaging choices of an adversary. Furthermore, as the best "defensive" strategies, they are consonant with conventional military doctrine. As prescribed in United States military service manuals, a commanding officer is enjoined to take account of an enemy's capabilities (what he is able to do) and not his intentions (what he is going to do), assuming there may be a conflict. Not surprisingly, the commanders in the Battle of the Bismarck Sea selected their optimal maximin/minimax strategies, which are consistent with this doctrine. The Japanese convoy sailed the northern route and was spotted by Kenney's reconnaissance aircraft, concentrating on the north, one day after its departure, which allowed Kenney two days of bombing. As it turned out, the Battle of the Bismarck Sea ended in a disastrous defeat for the Japanese, who did not know that Kenney, by modifying some of his aircraft for low-level bombing, had developed a technique that proved deadly in the battle.

Although the Japanese's lack of intelligence might be viewed as a failure on their part, they did not err in choosing a strategy that would minimize their maximum losses. Because the game itself was unfair, they could not avoid some losses, whatever strategy they adopted. Formally, a game is *fair* if the choice of optimal strategies by both players results in the same zero payoff to each (neither player wins anything from the other—that is, the value of the game is zero).

**1.3. INFORMATION IN GAMES**

In section 1.2 we indicated that we would be concerned not with games against nature but with games that include one or more other players with freedom to make independent choices. In the Battle of the Bismarck Sea example, we showed that both players abided by the security-level principle—that is, they chose strategies that maximized their security levels. We refer to such games that involve rational players who invoke their optimal strategies as *games of strategy,* as contrasted with *games of chance,* whose outcomes do not depend at all on the strategy choices of the players but instead on some random or stochastic process determined by a probability distribution.

*(Continues...)*

Excerpted from *Game Theory and Politics* by Steven J. Brams. Copyright © 2004 Steven J. Brams. Excerpted by permission of Dover Publications, Inc.

All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
