Competition: The Birth of a New Science

by James Case
Paperback (First Edition)

$25.00

Overview

What do chess-playing computer programs, biological evolution, competitive sports, gambling, alternative voting systems, public auctions, corporate globalization, and class warfare have in common? All are manifestations of a new paradigm in scientific thinking, one that the author calls "the emerging science of competition." Drawing in part on the pioneering work of mathematicians such as John von Neumann, John Nash (of A Beautiful Mind fame), and Robert Axelrod, James Case explores the common game-theoretical strands that tie these seemingly unrelated fields together, showing how each can be better understood in the shared light of the others. Not since James Gleick's bestselling book Chaos brought widespread public attention to the new sciences of chaos and complexity has a general-interest science book served such an eye-opening purpose. Competition will appeal to a wide range of readers, from policy wonks and futurologists to former jocks and other ordinary citizens seeking to make sense of a host of novel—and frequently controversial—issues.


Product Details

ISBN-13: 9780809035786
Publisher: Farrar, Straus and Giroux
Publication date: 06/24/2008
Edition description: First Edition
Pages: 368
Product dimensions: 5.50(w) x 8.14(h) x 0.94(d)

About the Author

James Case holds a PhD from the University of Michigan, has taught at a number of universities and worked in private industry, and is the regular book reviewer for SIAM News, the news journal of the Society for Industrial and Applied Mathematics.

Read an Excerpt

Competition

CHAPTER 1

Man Versus Machine

On May 11, 1997, an IBM computer named Deep Blue defeated the reigning world chess champion in a six-game match. It was the first match loss of Garry Kasparov's professional career and, with the possible exception of Bobby Fischer's victory over Boris Spassky in 1972, the most widely reported chess match in history. A pre-match press conference was attended by more than two hundred reporters from around the world, including TV crews from ABC, CBS, NBC, CNN, Fox, and several foreign networks. As the match progressed, and the likelihood of a Deep Blue victory increased, the number of reporters in attendance grew. Daily results were reported on the evening news, analyzed in the morning papers, and satirized by late-night TV comics. Not even the launch of the System/360 line of computers in 1964—by far the biggest public-relations event in IBM history—generated as much publicity.

While pundits differed as to the exact significance of Deep Blue's triumph, few doubted that an important milestone had been passed. This book will explore the implications of Deep Blue's achievement, and a growing list of others like it, as they contribute to man's understanding of competition, games, and the general welfare. For Deep Blue's triumph was by no means an isolated event. It followed hard on the heels of the 1994 defeat of Marion Tinsley—by all accounts the greatest checkers player of all time—at the hands of a computer program named Chinook, and it preceded by only three months the overwhelming (six games to none) victory of a program named Logistello over Takeshi Murakami, the then-reigning world champion player of Othello.

Chess, checkers, and Othello are but a few of the games at which computers now rival the strongest human players. Some of the others, like poker, bridge, backgammon, Monopoly, and Scrabble, will be familiar to most readers. Yet others, like awari, Kalah, dakon, draughts, Lines of Action, Nine Men's Morris, Qubic, shogi, Go-Moku, and Hex, may be less so.1 Of particular interest is the ancient Asian game of Go, at which even the best computer programs are still no match for quite ordinary human players. Why do computers find Go so difficult?

The short answer to that question is complexity. A chess player has, on average, only twenty to thirty legal moves to choose from on any given turn, while checkers players seldom have more than a dozen. Go players, in contrast, have a hundred or more. This turns out to mean that brute-force searching—in which one simply tries every alternative—is virtually useless in Go, although it is an important component of every successful computer program for playing chess, checkers, and most of the other games at which computers currently excel. For any number of reasons, including the fact that at least two different measures of complexity have proven relevant for the analysis of computer games, a more complete explanation of Go's intractability will have to be deferred until later in the book. The reader should bear in mind that an entire science of complexity has emerged in recent years, to take its place beside the almost equally novel science of chaos. Both are at least peripherally related to the study of games and competition, and will recur in what follows.
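
To make the arithmetic behind that short answer concrete: a uniform game tree with branching factor b contains roughly b to the power d positions at depth d. The sketch below is an illustration, not something from the book; the branching factors are the rough averages quoted above, and the depth of ten plies is arbitrary.

```python
# Compare uniform game trees at the rough branching factors quoted above.
def tree_leaves(branching_factor: int, depth: int) -> int:
    """Positions at the bottom of a uniform game tree of the given depth."""
    return branching_factor ** depth

for game, b in [("checkers", 12), ("chess", 25), ("Go", 150)]:
    # Ten plies (half-moves) of lookahead:
    print(f"{game:>8}: ~{tree_leaves(b, 10):.1e} positions at depth 10")
```

At ten plies, Go's tree is already nearly eight orders of magnitude larger than chess's, which is the arithmetic that makes brute force hopeless there.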

Deep Blue's triumph was exceedingly hard won. After absorbing the lessons learned by previous students of computer chess, a three-man team still required twenty-eight man-years (beginning in or about 1985) to complete the project. Hardware designer Feng-hsiung Hsu alone devoted twelve years to it. This lead threesome was assisted by an ever-changing cast of human chess experts, IBM project managers, and faculty members at Carnegie Mellon University, where the project began. Chinook was developed at the University of Alberta, under the direction of Professor Jonathan Schaeffer, beginning in 1989, and achieved its triumph within a "mere" five years. Logistello was built at the NEC Research Institute in Princeton, New Jersey, by a team led by Michael Buro. Hsu and Schaeffer have written books recounting the inside stories of their memorable campaigns.2

Computers exhibiting the ability to mimic aspects of human thought are said to possess artificial intelligence, or AI. A simple electronic calculator doesn't have AI, but a machine that can learn from its mistakes, or that exhibits the ability to reason effectively, undoubtedly does. As computers have grown more powerful, the dividing line between AI and "routine applications" has receded to the point where deeds once deemed indicative of AI are no longer so interpreted. Wags now suggest that a capability constitutes AI only as long as it retains the gee-whiz ability to delight and amaze. That said, computers able to play board games like chess and checkers at the highest human level, to compose arresting melodies, or to provide original proofs for known mathematical theorems have always been acknowledged to possess AI, and probably always will be. All this being the case, it is hardly surprising that most world-class game-playing computer programs have been designed by AI experts.

Arthur Samuel, the first man to program a computer to play checkers, once offered the following explanation for the scientific community's continuing interest in game-playing computers:

Programming computers to play games is but one stage in the development of an understanding of the methods which must be employed for the machine simulation of intellectual behavior. As we progress in this understanding it seems reasonable to assume that these newer techniques will be applied to real-life situations with increasing frequency, and the effort devoted to games ... will decrease. Perhaps we have not yet reached this turning point, and we may still have much left to learn from the study of games.3

Samuel's explanation rings almost as true today as it did when he first offered it. The "effort devoted to games" has actually increased in recent years, due in part to the advent of extraordinarily powerful desktop computers—a development not foreseen in 1960—and to the recent successes of Deep Blue, Chinook, Logistello, and other game-playing computer systems. His "turning point" is not yet in sight, and mankind still seems to have much to learn about competition and games. Nevertheless, his "newer techniques" have materialized and are being "applied to real-life situations with increasing frequency," not to mention admirable effect.

Nowhere are the effects more apparent than in the area of expert systems, also known as knowledge-based or rule-based systems. They consist of five component parts: an asset, a human manager, a book of rules, a compendium of facts, and an "inference engine." The latter three constitute a support system meant to help the human manager extract a high rate of return from the asset in question. Commercial inventory and revenue management systems are typical examples.

The facts stored in an airline's revenue management system concern the number of seats in each category that remain unsold for each scheduled flight, along with the economic data needed to estimate the expected demand for such seats. If an inordinate number of tickets for a given flight remain unsold as departure time approaches, the system will recommend that management begin discounting the more expensive ones. The inference engine will apply the rules contained in the system to the available facts to determine an effective schedule of discounts.

The facts stored in a motel chain's revenue management system concern the number of rooms of each size that remain unsold for each night in the coming year, along with the economic data required to predict unusual levels of demand. If an inordinate number of rooms remain unsold for a given date, the system will recommend that the manager begin discounting them, in order to stimulate sales. Alternatively, if unexpectedly many rooms have been sold, the system may recommend that the manager begin to encumber the remaining ones. If, for instance, Tuesday is the busiest night at a particularly business-oriented motel, the manager may refuse on Saturday, Sunday, and Monday to accept additional reservations for the coming Tuesday night, except from customers prepared to stay either Monday or Wednesday night as well. Again, the inference engine will apply the rules built into the system to the available facts to identify an effective program of discounts and/or encumbrances.
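
The division of labor just described is easy to caricature in code. The sketch below is a minimal illustration of the facts/rules/inference-engine structure; the thresholds, rule names, and data layout are made-up assumptions, not details of any real revenue management product.

```python
from dataclasses import dataclass

@dataclass
class Facts:
    """The 'compendium of facts' for one date at one motel (toy version)."""
    rooms_total: int
    rooms_unsold: int
    days_until_date: int

def rule_discount(f: Facts):
    # Rule: too much unsold inventory close to the date -> discount.
    if f.days_until_date <= 7 and f.rooms_unsold / f.rooms_total > 0.4:
        return "offer discounted rate"
    return None

def rule_encumber(f: Facts):
    # Rule: nearly sold out -> encumber the remaining rooms with a
    # minimum-stay requirement instead of discounting them.
    if f.rooms_unsold / f.rooms_total < 0.1:
        return "require multi-night stay for remaining rooms"
    return None

def inference_engine(facts: Facts, rules) -> list[str]:
    """Apply every rule in the book to the facts; collect recommendations."""
    return [r for rule in rules if (r := rule(facts)) is not None]

# 60 of 120 rooms unsold, five days out -> the discount rule fires.
print(inference_engine(Facts(120, 60, 5), [rule_discount, rule_encumber]))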

Complicated though commercial revenue management systems may sometimes be, their complexity pales by comparison with the knowledge-based systems contained in many game-playing programs. When Chinook reaches the stage at which eight or fewer pieces remain on the board—meaning that the pieces are deployed in one of exactly 443,748,401,247 possible configurations—it transfers command to its "endgame manager," an expert system of truly epic complexity. Game-playing computers stand at the forefront of expert system design.

The facts embedded in Chinook's endgame manager specify, for each possible configuration of eight or fewer pieces, whether it is a winning position for White, a winning position for Black, or a draw, as well as the maximum number of moves required to obtain the promised result. If, for instance, the system describes the current position as a W-43, meaning that White can win in forty-three moves, the inference engine will search the rule book for directions to a position designated W-42. Should Chinook's opponent play skillfully, the board will then be in a position designated W-41 when Chinook's turn comes to play again. But if the opponent makes a mistake, Chinook's next move may be made from a position designated W-40 or lower. Only if Chinook's endgame manager contains a hitherto undetected programming error will it ever find itself in a non-winning position after having occupied a winning one with eight or fewer pieces left on the board.
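
In code, the endgame manager's play-time logic reduces to a lookup and a one-step search. The sketch below is a toy illustration of that logic; the three-position "database" and the position names are fabricated stand-ins for Chinook's 443,748,401,247 real entries.

```python
# Labels follow the text: ("W", 43) means White can force a win in
# forty-three moves; ("D", 0) means a draw with best play.
DATABASE = {"p0": ("W", 2), "p1": ("W", 1), "p2": ("D", 0)}
MOVES = {"p0": ["p1", "p2"]}          # legal successors of each position

def endgame_move(position):
    result, n = DATABASE[position]
    assert result == "W", "the endgame manager is consulted only when winning"
    for successor in MOVES[position]:
        if DATABASE[successor] == ("W", n - 1):   # one move closer to the win
            return successor
    raise RuntimeError("no move preserves the win: a database bug")

print(endgame_move("p0"))   # -> "p1", the W-1 position
```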

The lessons learned about the care and feeding of expert systems by the builders of Deep Blue, Chinook, Logistello, and other world-class game-playing programs all have been published in the open scientific literature, for the benefit of all. It would be surprising indeed if none of those lessons should prove applicable to future management systems. Samuel's "study of games" has proven more subtle, surprising, and fruitful than even he expected.

Fruitful though that study has been, it has failed to answer the most intriguing questions. It can be proven, for instance, as a mathematical theorem applicable to chess, checkers, and a host of other two-player games, that one of three mutually exclusive conditions prevails at any time during the course of play: either the player whose turn it is to move can win the game, or the opponent can win, or both can avoid losing, in which case the game will presumably end in a draw. Yet nobody knows which of those three alternatives applies to the opening move in chess, checkers, Othello, and other familiar board games currently ruled by machines.
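
The theorem's proof is constructive, and for small games it can simply be run: classify the terminal positions first, then work backward. A minimal sketch, assuming the game tree is finite and loop-free (so a draw can be recorded only at a terminal position) and that moves(p) and outcome(p) encode the rules of some particular game:

```python
from functools import lru_cache

def classify(start, moves, outcome):
    """Label each position WIN, LOSS, or DRAW for the side to move.

    outcome(p) returns the terminal label for p, or None if play continues;
    positions must be hashable so results can be cached.
    """
    @lru_cache(maxsize=None)
    def value(p):
        if outcome(p) is not None:        # terminal position: the rules decide
            return outcome(p)
        children = [value(c) for c in moves(p)]
        if "LOSS" in children:            # some move leaves the opponent lost
            return "WIN"
        if "DRAW" in children:            # losing is at least avoidable
            return "DRAW"
        return "LOSS"                     # every move hands the opponent a win
    return value(start)
```

Chess, checkers, and Othello satisfy the theorem's hypotheses in suitable formulations, but their astronomical position counts put this computation far out of reach, which is why the status of the opening move remains unknown.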

The game plans executed by Deep Blue, Chinook, Logistello, and the like can all be beaten, and occasionally are. They are by no means the unbeatable game plans whose existence the above-cited theorem guarantees. Unbeatable game plans are known only for simpler—and less frequently played—games such as nim, tic-tac-toe, Qubic, Nine Men's Morris, mancala, Go-Moku, and Connect Four. Games for which unbeatable game plans are known are often described as "solved," "cracked," or "cooked." People soon lose interest in playing such games unless the unbeatable game plans are so complicated that only a computer can carry them out.
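
Nim shows what "solved" means in the strongest sense: in normal play (the player taking the last object wins), its unbeatable game plan is a closed-form rule, due to C. L. Bouton in 1901, rather than a lookup table. The position is lost for the player to move exactly when the bitwise XOR of the pile sizes is zero, so a winning player always moves to restore a zero XOR:

```python
from functools import reduce
from operator import xor

def winning_move(piles):
    """Return (pile_index, new_size) for a winning nim move, or None."""
    x = reduce(xor, piles)
    if x == 0:
        return None                  # a losing position: no move helps
    for i, p in enumerate(piles):
        if p ^ x < p:                # shrink this pile to restore XOR = 0
            return i, p ^ x

print(winning_move([3, 4, 5]))       # -> (0, 1): reduce the 3-pile to 1
```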

The trade-off between strategic complexity and effectiveness first became an issue during the 1960s, after Edward Thorp discovered that twenty-one (a.k.a. blackjack)—as then played in leading casinos—could be beaten by certain "card-counting" strategies.4 Thorp, then a math instructor at MIT, used the school's state-of-the-art computer to demonstrate that the blackjack strategy that commits the house to "take a hit" on sixteen or less, but to "stand" on seventeen or more, is far from unbeatable.5 He then published three increasingly sophisticated card-counting strategies and used the strongest to win thousands of dollars in casino play. As a result, he was allegedly asked to leave a number of such establishments and threatened with bodily harm should he return.

The simplest of Thorp's published strategies was not strong enough to beat the house strategy on a consistent basis. It merely allowed weekend warriors to get in on the action without undue risk. The second was a more or less break-even proposition requiring more skill on the part of the player. The third was a reliable cash cow but required users to practice several hours a day in order to realize its full potential. To defend their incomes from the army of card counters spawned by Thorp's book, and its eventual emulators, the casinos soon replaced the single deck with which blackjack had traditionally been played with a "shoe" containing no fewer than four identical decks, thereby increasing the number of cards to be counted seemingly beyond the limits of human mental arithmetic. After satisfying himself that he could clear perhaps $300,000 a year playing blackjack, Thorp abandoned casino gambling in favor of Wall Street—the biggest game of all—where he and his computer-generated strategies have prospered ever since.6
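
For readers curious what a "count" actually tracks, here is a minimal sketch of the hi-lo running count, a later simplification descended from Thorp's systems (it is not one of the three strategies he published): low cards leaving the shoe raise the count and favor the player, high cards lower it.

```python
# Hi-lo tags: low cards (2-6) count +1, middle cards (7-9) count 0,
# tens and aces count -1. Illustrative only; not Thorp's published systems.
HI_LO = ({r: +1 for r in "23456"}
         | {r: 0 for r in "789"}
         | {r: -1 for r in ("10", "J", "Q", "K", "A")})

def running_count(cards_seen):
    return sum(HI_LO[card] for card in cards_seen)

def true_count(cards_seen, decks_remaining):
    # With a multi-deck shoe, counters normalize by the decks still unseen,
    # which is how counting survived the casinos' four-deck countermeasure.
    return running_count(cards_seen) / decks_remaining

print(running_count(["2", "5", "K", "9", "A"]))   # -> 0
```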

Chess, checkers, and Othello are games of "perfect information," meaning that anything either player might care to know about the current state of a game in progress is there on the game board for all to see. Because the players of Scrabble lack knowledge of the tiles held by their opponents, and the players of bridge and poker lack knowledge of the cards held by the other players, the players of those games are said to possess only "imperfect information." Games of imperfect information are conceptually more complex than those of perfect information.

Backgammon represents an intermediate level of complexity in that, although players possess perfect information, the randomness introduced by frequent rolls of the dice represents a definite complicating factor. It means, among other things, that the same two strategies, game plans, or computer programs for playing backgammon may have to oppose each other many times over before a clear winner can be determined.
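
How many games is "many times over"? A back-of-the-envelope estimate, offered here as an illustration rather than a figure from the book: if the stronger program wins each game independently with probability p, its observed win rate must exceed one half by about two standard errors before the difference is convincing.

```python
from math import ceil

def games_needed(p, z=1.96):
    """Games before a true win rate p sits ~z standard errors above 0.5."""
    edge = p - 0.5
    return ceil((z / edge) ** 2 * p * (1 - p))

print(games_needed(0.55))   # a 55% winner: roughly 380 games
print(games_needed(0.51))   # a 51% winner: roughly 9,600 games
```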

The strongest backgammon-playing computer programs now perform at a superhuman level.7 They differ from Deep Blue in that they employ machine-learning techniques to constantly improve their games. The idea that a computer program could teach itself to play board games by playing them over and over again against its own clone goes back to Arthur Samuel, as does so much of AI.8 Samuel lacked only the computing power and machine-learning techniques to carry out his plan. The first program to do so successfully, called TD-Gammon, was constructed by Gerald Tesauro at IBM during the 1990s. The project has an interesting history.

In 1987 Tesauro, then at the Institute for Advanced Study in Princeton, New Jersey, collaborated with Terry Sejnowski, then at the Johns Hopkins University, to train a "neural network" to play backgammon. Neurons, the fundamental building blocks of the human nervous system, have been known to medical science for more than a century. During World War II, pioneers in AI invented an artificial (electronic) neuron, assembled small networks of them, and predicted that larger and speedier networks would one day emulate human intelligence. It took a long time to realize that prediction, but by the mid-1980s neural networks were becoming practical instruments of machine learning. Sejnowski was one of the key developers of neural network techniques.

Neural networks can often be trained to emulate human interpreters of complex information, whether that information pertains to potential oil-drilling sites, heart disease patients, credit applications, or baseball players. One trains such a network by supplying it with a library of stimulus-response data. For oil-drilling sites, the stimuli would include the seismographic and geological data available before drilling begins, while the responses would include the quantities and flow characteristics of whatever deposits were eventually found. For credit applications, the stimuli would include former applicants' financial status and histories, while the responses would include eventual default rates and recovery costs. Given an adequate supply of coherent data, neural networks (of which there are now many kinds) can often learn to interpret new data from the same source at least as accurately as human evaluators do, typically at a fraction of the cost.
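
A single artificial neuron trained by gradient descent already illustrates the stimulus-response recipe. In the sketch below the "credit applications" are fabricated toy data: two-number stimuli (income and existing debt, in units of $10,000) paired with a repaid-or-defaulted response.

```python
import numpy as np

# Fabricated stimuli and responses: [income, debt] -> 1 if repaid, 0 if not.
X = np.array([[6.0, 1.0], [2.0, 3.0], [5.0, 0.5], [1.5, 2.5]])
y = np.array([1.0, 0.0, 1.0, 0.0])

rng = np.random.default_rng(0)
w, b, lr = rng.normal(size=2), 0.0, 0.1

for _ in range(2000):                          # gradient-descent training
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # the neuron's predictions
    grad = p - y                               # prediction error
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

# The trained neuron now scores an applicant it has never seen:
applicant = np.array([4.0, 1.0])
print(1.0 / (1.0 + np.exp(-(applicant @ w + b))))   # well above 0.5: repaid
```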

To build their backgammon-playing program, Tesauro and Sejnowski supplied an off-the-shelf neural network with an expert's assessment of the best move from several thousand board positions.9 The resulting program, which they called Neurogammon, could play at an intermediate level. They were pleasantly surprised by this, since backgammon permits more than 10^20 possible board positions and Neurogammon was obliged to learn from a tiny fraction of that number. Yet somehow it was able to transfer what it had learned to unfamiliar board positions and to identify advantageous moves from most of them. But the level to which Neurogammon could hope to progress was limited: it could never hope to beat the expert from whom it had learned the game.

Some years later, after moving to IBM, Tesauro investigated a more ambitious approach to computer backgammon. He did so by combining Samuel's old self-teaching idea with a modern high-speed computer and a state-of-the-art machine-learning technique. The technique he chose is of interest in its own right.

Worker bees routinely travel several miles in a single foraging trip. They are instinctively attracted to blue flowers, but, being among the smartest of insects, they quickly learn to associate the color, shape, and fragrance of other local flowers with their nectar content. The more nectar they find in a given patch of flowers, the more likely they are to return to it. Only in 1995, after becoming the director of the Computational Neurobiology Lab at the Salk Institute in San Diego, did Sejnowski learn that a single neuron in the bee's brain underlies its odor-recognition capability.

This cell, called VUMmx1, uses a substance called octopamine—which is chemically similar to the dopamine found in vertebrates—to influence the bee's decision-making system. Science has been aware of the dopamine system since the 1960s, when psychologists learned to implant electrodes in the dopamine-stimulating neurons of rats. The rats were then given access to a bar that would deliver a pulse of electrical current to those neurons whenever they pushed it, thereby releasing dopamine and activating the parts of the brain that respond to it. The rats promptly lost interest in food, sex, and whatever else ordinarily interests them. So anxious did they become to stimulate their dopamine systems that researchers claimed to have located the brain's "pleasure center."

The discovery of VUMmx1 suggested that bees might be using their octopamine systems to improve their nectar-finding efficiency. If the location of an abundant source of nectar were to release an octopamine jolt into the bee's brain, causing it to identify flowers of that size, color, fragrance, and so on with raw pleasure, the entire hive would benefit. Although Pavlov and his successors had long ago demonstrated a link between learning and reward, modern psychologists tend to dismiss "reward learning" as an inferior variety. Yet it seemed to Read Montague and Peter Dayan—researchers in Sejnowski's lab—that simple creatures might use it to good effect. Accordingly, they built a computer model of a bee's octopamine reward system, based on the way the system appears to work in actual bees. Their simulated bee behaved remarkably like a real (worker) bee, exhibiting a number of known bee-havioral patterns, including an aversion to patches of flowers in which the nectar content fluctuates dramatically from one bloom to the next.

The success of their bee simulations led Montague and Dayan to speculate that the output of VUMmx 1 measures prediction error. When a bee visits a particular patch of flowers, in the expectation that nectar will be found there, the results either confirm or deny the expectation. If accurate expectations are rewarded with generous jolts of octopamine, while inaccurate ones are not, the reliability of a bee's expectations would seem likely to improve with practice. Such a mechanism could only enhance a bee's ability to teach itself what it needs to know about the world in which it lives.

Temporal-difference (TD) learning is an ingredient of many artificial learning systems. It compares the estimated chances of success in some multistage task—say, the building of a skyscraper—before and after the completion of each stage. If the estimated chances of completing the whole task on time and within budget don't change very much as stage after stage of the task is completed, the successive estimates were probably pretty good to begin with. But if successive best estimates vary all over the map, some of them have to be inaccurate. Thus it makes a certain amount of sense to award points to the estimator—as TD learning does—each time a new best estimate is in rough agreement with its predecessor.
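
In code, a TD step nudges each stage's estimate toward the estimate made one stage later, so that successive predictions become consistent with one another. The sketch below shows the simplest such rule, TD(0), applied to the skyscraper example; TD-Gammon itself used Sutton's more general TD(lambda).

```python
def td_update(value, state, next_state, reward=0.0, alpha=0.1):
    """One TD(0) step: nudge value[state] toward reward + value[next_state]."""
    td_error = reward + value[next_state] - value[state]
    value[state] += alpha * td_error      # a small step toward consistency
    return td_error

# Estimated probability that a three-stage project finishes on time.
# "done" is pinned at 1.0; the earlier estimates learn to agree with it.
value = {"start": 0.5, "mid": 0.5, "done": 1.0}
for _ in range(200):
    td_update(value, "start", "mid")
    td_update(value, "mid", "done")
print(value)   # "start" and "mid" drift toward 1.0
```

Run long enough, the successive estimates stop disagreeing, which is exactly the kind of rough agreement the text describes rewarding.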

Tesauro's familiarity with TD-learning techniques persuaded him to employ them in his implementation of Samuel's plan to build a self-teaching game-playing computer program. The result was TD-Gammon, the first backgammon-playing computer program to compete at the highest human level. After a few thousand games against itself, it began to play at a beginner level. After a hundred thousand games, it could beat Neurogammon. After a million games, it could compete with the best human players in the world, and was beginning to do so by 1995. The current version is said to play better than the best humans.

Perhaps surprisingly, the mathematical theory of games has played a relatively minor role in the development of world-class game-playing computer programs. That is in part because John von Neumann—the inventor of mathematical game theory—was primarily interested in games of imperfect information, such as Scrabble, poker, and bridge, in which it is possible to outsmart one's opponents by means of a successful bluff. He was an enthusiastic poker player himself and delighted in the diversionary aspects of the game. He and his immediate successors allowed the branch of game theory concerned with perfect information to languish to some extent. Only later did the developers of differential game theory, combinatorial game theory, and game-playing computer programs begin (each in their own characteristic way) to rectify this oversight.10

One reason for game theory's overall lack of success was the primitive state of electronic computers at mid-century. The computing power required to master games like chess, checkers, Othello, poker, backgammon, and Scrabble simply didn't exist until the late 1980s, when game-playing computers began (quite suddenly and unexpectedly) to challenge the strongest human players. The second reason for game theory's rather disappointing results has been that the most important forms of competition—the ones most directly affecting the general welfare—involve more than two players. This makes them significantly more complex than the two-player contests that computers seem destined to dominate.

Instead of appealing to game theory, or to any other theory for that matter, the "hackers" who have trained computers to play games at or above the highest human level have relied mainly on the experimental method.11 Every contest between rival computer programs, or between a program and a human player, constitutes an experiment testing one or more hypotheses about what works and what doesn't. One expert has described the fifty-year effort to defeat a human chess champion as "the longest running experiment in the history of computer science."

Virtually every man-versus-machine contest has been exhaustively analyzed after the fact, prompting the reformulation of some hypotheses and the outright rejection of others. Hsu's book Behind Deep Blue is particularly informative in this respect, describing the many matches and tournaments played by Deep Blue's progenitors—including Deep Blue I, which lost to Kasparov in 1996—and the design changes made after each in response to lessons learned.

The five Computer Olympiads held in 1989, 1990, 1991, 1992, and 2000 were invaluable proving grounds for game-playing computer programs, and for the legion of working hypotheses incorporated into each one. The hackers who built those programs are not unaware of game theory. On the contrary, they know it well and apply it whenever it helps. Yet their victories owe far more to trial, error, and the experimental method than to any abstract theory.

It will be argued at length in what follows that the real significance of Deep Blue's triumph over Kasparov, and of the other defeats human champions have lately suffered at the hands of soulless machines, is that they mark the emergence of a genuine science of competition. Such a science differs from any mere theory of competition—such as the mathematical theory of games—in that it mines historical and experimental data to confirm, augment, and occasionally discredit conclusions reached long ago by unadorned logic.

While old sciences never really die, they are often eclipsed by newer and more dynamic ones. Many of today's most fruitful sciences, including space science, brain science, computer science, environmental science, and the sciences of chaos and complexity—not to mention the science of the human genome—were unknown a century ago. By the year 2100, the emerging science of competition may be as familiar and uncontroversial as highway science, poultry science, and dental science are today. If so, it will be a very influential science indeed, with implications for almost every aspect of public policy, including health, energy, environmental, education, immigration, military, and (above all) economic policy.

The early chapters of this book describe the current state of the fledgling science of competition—what it has discovered and what it seems destined to discover before long. The middle chapters attempt to identify the more important forms of competition in which mankind is currently engaged, and to explain what science can and cannot yet tell us about them. It is a book about science on the one hand and competition on the other. The common ground is surprisingly large and largely surprising.

Copyright © 2007 by James Case

Table of Contents


Preface     vii
Man Versus Machine     3
The Art and Science of Competition     15
Tree Games and Backward Induction     30
Models and Paradigms     51
Two-Sided Competition     71
Many-Sided Competition     92
Competition in the Wild     115
Auctions     130
Competition in Financial Markets     145
Orthodox Economic Thought     176
Economic Competition     193
Evidence Pro and Con     208
Free Trade     222
Heterodox Economic Thought     243
Spontaneous Cooperation     262
Imperfect Competition     278
Policy Implications     300
Epilogue     319
Notes     327
Index     343