More About This Textbook
Overview
Since its original publication in 2000, Game Theory Evolving has been considered the best textbook on evolutionary game theory. This completely revised and updated second edition of Game Theory Evolving contains new material and shows students how to apply game theory to model human behavior in ways that reflect the special nature of sociality and individuality. The textbook continues its in-depth look at cooperation in teams, agent-based simulations, experimental economics, the evolution and diffusion of preferences, and the connection between biology and economics.
Recognizing that students learn by doing, the textbook introduces principles through practice. Herbert Gintis exposes students to the techniques and applications of game theory through a wealth of sophisticated and surprisingly fun-to-solve problems involving human and animal behavior. The second edition includes solutions to the problems presented and information related to agent-based modeling. In addition, the textbook incorporates instruction in using mathematical software to solve complex problems. Game Theory Evolving is perfect for graduate and upper-level undergraduate economics students, and is a terrific introduction for ambitious do-it-yourselfers throughout the behavioral sciences.
Editorial Reviews
Mathematical Reviews
Game Theory Evolving is an exceptionally well-written and constructed introduction to the field. And with Gintis' outline of agent-based modeling and his tips for programming, many readers may be motivated to take up his invitation and experiment with a problem in evolutionary dynamics of their own.— Jennifer M. Wilson
Science
Gintis has wholeheartedly embraced the evolutionary approach to games. . . . The author is an accomplished economist raised in the classical mold, and his background shows in many aspects of the book . . . [He] has important things to say. . . .— Karl Sigmund
Meet the Author
Herbert Gintis holds faculty positions at the Santa Fe Institute, Central European University, and University of Siena. He has co-edited numerous books, including "Moral Sentiments and Material Interests," "Unequal Chances" (Princeton), and "Foundations of Human Sociality".
Read an Excerpt
Game Theory Evolving
A Problem-Centered Introduction to Modeling Strategic Interaction
By Herbert Gintis
Princeton University Press
Copyright © 2009 Princeton University Press. All rights reserved.
ISBN: 978-0-691-14051-3
Chapter One
Probability Theory

Doubt is disagreeable, but certainty is ridiculous.
Voltaire
1.1 Basic Set Theory and Mathematical Notation
A set is a collection of objects. We can represent a set by enumerating its objects. Thus,
A = {1, 3, 5, 7, 9, 34}
is the set of single digit odd numbers plus the number 34. We can also represent the same set by a formula. For instance,
A = {x | x ∈ N ∧ ((x < 10 ∧ x is odd) ∨ (x = 34))}.
In interpreting this formula, N is the set of natural numbers (positive integers), "|" means "such that," "∈" means "is an element of," "∧" is the logical symbol for "and," and "∨" is the logical symbol for "or." See the table of symbols in chapter 14 if you forget the meaning of a mathematical symbol.
The subset of objects in set X that satisfy property p can be written as
{x ∈ X | p(x)}.
The union of two sets A, B ⊂ X is the subset of X consisting of elements of X that are in either A or B:
A ∪ B = {x | x ∈ A ∨ x ∈ B}.
The intersection of two sets A, B ⊂ X is the subset of X consisting of elements of X that are in both A and B:
A ∩ B = {x | x ∈ A ∧ x ∈ B}.
If a ∈ A and b ∈ B, the ordered pair (a, b) is an entity such that if (a, b) = (c, d), then a = c and b = d. The set {(a, b) | a ∈ A ∧ b ∈ B} is called the product of A and B and is written A × B. For instance, if A = B = R, where R is the set of real numbers, then A × B is the real plane, or the real two-dimensional vector space. We also write
R × R = R².
A function f can be thought of as a set of ordered pairs (x, f(x)). For instance, the function f(x) = x² is the set
{(x, y) | (x, y ∈ R) ∧ (y = x²)}.
The set of arguments for which f is defined is called the domain of f and is written dom(f). The set of values that f takes is called the range of f and is written range(f). The function f is thus a subset of dom(f) × range(f). If f is a function defined on set A with values in set B, we write f : A → B.
1.2 Probability Spaces
We assume a finite universe or sample space Ω and a set X of subsets A, B, C, ... of Ω, called events. We assume X is closed under finite unions (if A₁, A₂, ..., Aₙ are events, so is A₁ ∪ ··· ∪ Aₙ), finite intersections (if A₁, ..., Aₙ are events, so is A₁ ∩ ··· ∩ Aₙ), and complementation (if A is an event, so is the set of elements of Ω that are not in A, which we write Aᶜ). If A and B are events, we interpret A ∩ B = AB as the event "A and B both occur," A ∪ B as the event "A or B occurs," and Aᶜ as the event "A does not occur."
For instance, suppose we flip a coin twice, the outcome being HH (heads on both), HT (heads on first and tails on second), TH (tails on first and heads on second), and TT (tails on both). The sample space is then [OMEGA] = {HH, TH, HT, TT}. Some events are {HH, HT} (the coin comes up heads on the first toss), {TT} (the coin comes up tails twice), and {HH, HT, TH} (the coin comes up heads at least once).
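Small sample spaces like this one are easy to enumerate mechanically. As a sketch (in Python, which is not part of the text), the double coin flip and the three events just listed:

```python
from itertools import product

# Build the sample space for two flips of a coin: all length-2 strings over {H, T}.
omega = {"".join(p) for p in product("HT", repeat=2)}

# Events are just subsets of the sample space.
heads_first = {w for w in omega if w[0] == "H"}       # heads on the first toss
at_least_one_head = {w for w in omega if "H" in w}    # heads at least once

print(sorted(omega))              # ['HH', 'HT', 'TH', 'TT']
print(sorted(heads_first))        # ['HH', 'HT']
print(sorted(at_least_one_head))  # ['HH', 'HT', 'TH']
```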
The probability of an event A ∈ X is a real number P[A] such that 0 ≤ P[A] ≤ 1. We assume that P[Ω] = 1, which says that with probability 1 some outcome occurs, and we also assume that if A = A₁ ∪ ··· ∪ Aₙ, where Aᵢ ∈ X and the {Aᵢ} are disjoint (that is, Aᵢ ∩ Aⱼ = ∅ for all i ≠ j), then P[A] = P[A₁] + ··· + P[Aₙ], which says that probabilities are additive over finite disjoint unions.
1.3 De Morgan's Laws
Show that for any two events A and B, we have
(A ∪ B)ᶜ = Aᶜ ∩ Bᶜ
and
(A ∩ B)ᶜ = Aᶜ ∪ Bᶜ.
These are called De Morgan's laws. Express the meaning of these formulas in words.
Show that if we write p for proposition "event A occurs" and q for "event B occurs," then
not (p or q) ⇔ (not p and not q);
not (p and q) ⇔ (not p or not q).
The formulas are also De Morgan's laws. Give examples of both rules.
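The set-theoretic versions of De Morgan's laws can also be verified by brute force over every pair of events in a small sample space. A sketch in Python (this only checks instances; it is not the proof the exercise asks for):

```python
from itertools import chain, combinations

omega = frozenset(range(4))  # a small universe; any finite set would do

def subsets(s):
    # all 2^|s| subsets of s
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

for A in subsets(omega):
    for B in subsets(omega):
        # complements are taken relative to omega
        assert omega - (A | B) == (omega - A) & (omega - B)  # not (A or B) = not A and not B
        assert omega - (A & B) == (omega - A) | (omega - B)  # not (A and B) = not A or not B

print("De Morgan's laws hold for all", len(subsets(omega)) ** 2, "pairs of events")
```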
1.4 Interocitors
An interocitor consists of two kramels and three trums. Let Aₖ be the event "the kth kramel is in working condition," and let Bⱼ be the event "the jth trum is in working condition." An interocitor is in working condition if at least one of its kramels and two of its trums are in working condition. Let C be the event "the interocitor is in working condition." Write C in terms of the Aₖ and the Bⱼ.
1.5 The Direct Evaluation of Probabilities
Theorem 1.1 Given a₁, ..., aₙ and b₁, ..., bₘ, all distinct, there are n × m distinct ways of choosing one of the aᵢ and one of the bⱼ. If we also have c₁, ..., cᵣ, distinct from each other, the aᵢ, and the bⱼ, then there are n × m × r distinct ways of choosing one of the aᵢ, one of the bⱼ, and one of the cₖ.
Apply this theorem to determine how many different elements there are in the sample space of
a. the double coin flip
b. the triple coin flip
c. rolling a pair of dice
Generalize the theorem.
1.6 Probability as Frequency
Suppose the sample space Ω consists of a finite number n of equally probable elements. Suppose the event A contains m of these elements. Then the probability of the event A is m/n.
A second definition: Suppose an experiment has n distinct outcomes, all of which are equally likely. Let A be a subset of the outcomes, and n(A) the number of elements of A. We define the probability of A as P[A] = n(A)/n.
For example, in throwing a pair of dice, there are 6 × 6 = 36 mutually exclusive, equally likely events, each represented by an ordered pair (a, b), where a is the number of spots showing on the first die and b the number on the second. Let A be the event that both dice show the same number of spots. Then n(A) = 6 and P[A] = 6/36 = 1/6.
A third definition: Suppose an experiment can be repeated any number of times, each outcome being independent of the ones before and after it. Let A be an event that either does or does not occur for each outcome. Let nₜ(A) be the number of times A occurred on all the tries up to and including the tth try. We define the relative frequency of A as nₜ(A)/t, and we define the probability of A as
P[A] = lim_(t→∞) nₜ(A)/t.
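The frequency definition can be illustrated by simulation. A sketch with a fair coin (the seed and the number of flips are arbitrary choices, not from the text):

```python
import random

random.seed(42)  # arbitrary seed, for reproducibility
t, heads = 100_000, 0
for _ in range(t):
    heads += random.random() < 0.5  # one fair-coin flip; True counts as 1
print(f"relative frequency of heads after {t} flips: {heads / t:.4f}")
```

As t grows, the printed relative frequency settles near 0.5, the probability of heads.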
We say two events A and B are independent if P[A] does not depend on whether B occurs or not and, conversely, P[B] does not depend on whether A occurs or not. If events A and B are independent, the probability that both occur is the product of the probabilities that either occurs: that is,
P[A and B] = P[A] × P[B].
For example, in flipping coins, let A be the event "the first ten flips are heads." Let B be the event "the eleventh flip is heads." Then the two events are independent.
For another example, suppose there are two urns, one containing 100 white balls and 1 red ball, and the other containing 100 red balls and 1 white ball. You do not know which is which. You choose 2 balls from the first urn. Let A be the event "The first ball is white," and let B be the event "The second ball is white." These events are not independent, because if you draw a white ball the first time, you are more likely to be drawing from the urn with 100 white balls than the urn with 1 white ball.
Determine the following probabilities. Assume all coins and dice are "fair" in the sense that H and T are equiprobable for a coin, and 1, ..., 6 are equiprobable for a die.
a. At least one head occurs in a double coin toss.
b. Exactly two tails occur in a triple coin toss.
c. The sum of the two dice equals 7 or 11 in rolling a pair of dice.
d. All six dice show the same number when six dice are thrown.
e. A coin is tossed seven times. The string of outcomes is HHHHHHH.
f. A coin is tossed seven times. The string of outcomes is HTHHTTH.
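Each of these can be answered by counting equally likely outcomes. As a sketch (not from the text), part (c) by direct enumeration; the other parts yield to the same pattern:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))          # all 36 equally likely rolls
favorable = [o for o in outcomes if sum(o) in (7, 11)]   # sums of 7 or 11
p = Fraction(len(favorable), len(outcomes))
print(p)  # 2/9
```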
1.7 Craps
A roller plays against the casino. The roller throws the dice and wins if the sum is 7 or 11, but loses if the sum is 2, 3, or 12. If the sum is any other number (4, 5, 6, 8, 9, or 10), the roller throws the dice repeatedly until either winning by matching the first number rolled or losing if the sum is 2, 7, or 12 ("crapping out"). What is the probability of winning?
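Under the rules exactly as stated (a crap-out on 2, 7, or 12 while rolling for the point), the winning probability can be computed with exact fractions: once a point is established, each subsequent roll is decided only by the point versus a crap-out. A sketch, not the book's solution:

```python
from fractions import Fraction

ways = {s: 6 - abs(s - 7) for s in range(2, 13)}  # ways to roll each sum with two dice
lose_ways = ways[2] + ways[7] + ways[12]          # 8 ways to crap out on a point roll

p = Fraction(ways[7] + ways[11], 36)              # win immediately with 7 or 11
for point in (4, 5, 6, 8, 9, 10):
    # after the point is set, condition on the decisive roll: point vs. crap-out
    p += Fraction(ways[point], 36) * Fraction(ways[point], ways[point] + lose_ways)

print(p, "=", float(p))
```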
1.8 A Marksman Contest
In a head-to-head contest Alice can beat Bonnie with probability p and can beat Carole with probability q. Carole is a better marksman than Bonnie, so p > q. To win the contest, Alice must win at least two in a row out of three head-to-heads with Bonnie and Carole, and she cannot play the same person twice in a row (that is, she can play Bonnie-Carole-Bonnie or Carole-Bonnie-Carole). Show that Alice maximizes her probability of winning the contest by playing the better marksman, Carole, twice.
1.9 Sampling
The mutually exclusive outcomes of a random action are called sample points. The set of sample points is called the sample space. An event A is a subset of a sample space Ω. The event A is certain if A = Ω and impossible if A = ∅ (that is, A has no elements). The probability of an event A is P[A] = n(A)/n(Ω), if we assume Ω is finite and all ω ∈ Ω are equally likely.
a. Suppose six dice are thrown. What is the probability all six dice show the same number?
b. Suppose we choose r objects in succession from a set of n distinct objects a₁, ..., aₙ, each time recording the choice and returning the object to the set before making the next choice. This gives an ordered sample of the form (b₁, ..., bᵣ), where each bⱼ is some aᵢ. We call this sampling with replacement. Show that, in sampling r times with replacement from a set of n objects, there are nʳ distinct ordered samples.
c. Suppose we choose r objects in succession from a set of n distinct objects a₁, ..., aₙ, without returning the object to the set. This gives an ordered sample of the form (b₁, ..., bᵣ), where each bⱼ is some unique aᵢ. We call this sampling without replacement. Show that in sampling r times without replacement from a set of n objects, there are
n(n − 1) ··· (n − r + 1) = n!/(n − r)!
distinct ordered samples, where n! = n × (n − 1) × ··· × 2 × 1.
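Both counting formulas can be checked against explicit enumeration. A sketch (the values of n and r are arbitrary small choices):

```python
from itertools import permutations, product
from math import factorial

n, r = 5, 3  # arbitrary small values
objects = range(n)

with_replacement = len(list(product(objects, repeat=r)))
without_replacement = len(list(permutations(objects, r)))

assert with_replacement == n ** r                               # n^r ordered samples
assert without_replacement == factorial(n) // factorial(n - r)  # n!/(n-r)! ordered samples
print(with_replacement, without_replacement)  # 125 60
```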
1.10 Aces Up
A deck of 52 cards has 4 aces. A player draws 2 cards randomly from the deck. What is the probability that both are aces?
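As a sketch (not the book's worked solution), the answer two ways: by sequential drawing, and by counting subsets, anticipating the combinations of section 1.12:

```python
from fractions import Fraction
from math import comb

sequential = Fraction(4, 52) * Fraction(3, 51)   # ace on the first draw, then a second ace
counting = Fraction(comb(4, 2), comb(52, 2))     # 2-ace hands over all 2-card hands
print(sequential, counting)  # 1/221 1/221
```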
1.11 Permutations
A linear ordering of a set of n distinct objects is called a permutation of the objects. It is easy to see that the number of distinct permutations of n > 0 distinct objects is n! = n × (n − 1) × ··· × 2 × 1. Suppose we have a deck of cards numbered from 1 to n > 1. Shuffle the cards so their new order is a random permutation of the cards. What is the average number of cards that appear in the "correct" order (that is, the kth card is in the kth position) in the shuffled deck?
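Enumeration for small decks suggests (and linearity of expectation confirms) that the average is 1 for every n. A sketch, not the book's solution:

```python
from fractions import Fraction
from itertools import permutations

for n in (2, 3, 4, 5):
    perms = list(permutations(range(n)))
    fixed = sum(sum(p[k] == k for k in range(n)) for p in perms)  # total correct positions
    assert Fraction(fixed, len(perms)) == 1                       # the average is 1 for each n
print("average number of fixed points is 1 for n = 2, 3, 4, 5")
```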
1.12 Combinations and Sampling
The number of combinations of n distinct objects taken r at a time is the number of subsets of size r, taken from the n things without replacement. We write this as (n choose r). In this case, we do not care about the order of the choices. For instance, consider the set of numbers {1, 2, 3, 4}. The number of ordered samples of size two without replacement is 4!/2! = 12. These are precisely {12, 13, 14, 21, 23, 24, 31, 32, 34, 41, 42, 43}. The combinations of the four numbers of size two (that is, taken two at a time) are {12, 13, 14, 23, 24, 34}, or six in number. Note that (4 choose 2) = 12/2! = 6, because each combination of two elements corresponds to 2! ordered samples. A set of n elements has n!/(r!(n − r)!) distinct subsets of size r. Thus, we have
(n choose r) = n!/(r!(n − r)!).
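The {1, 2, 3, 4} example above can be checked mechanically: 12 ordered samples of size two without replacement, 6 combinations, and the formula agrees. A sketch:

```python
from itertools import combinations, permutations
from math import comb

s = [1, 2, 3, 4]
assert len(list(permutations(s, 2))) == 12   # ordered samples without replacement
assert len(list(combinations(s, 2))) == 6    # unordered: each counted 2! times above
assert comb(4, 2) == 6                       # the formula 4!/(2!·2!)
print(sorted(combinations(s, 2)))  # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```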
1.13 Mechanical Defects
A shipment of seven machines has two defective machines. An inspector checks two machines randomly drawn from the shipment, and accepts the shipment if neither is defective. What is the probability the shipment is accepted?
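The shipment is accepted only if both inspected machines come from the five good ones, which combinations count directly. A sketch of the check, not the book's solution:

```python
from fractions import Fraction
from math import comb

p = Fraction(comb(5, 2), comb(7, 2))  # good pairs over all pairs of machines
print(p)  # 10/21
```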
1.14 Mass Defection
A batch of 100 manufactured items is checked by an inspector, who examines 10 items at random. If none is defective, she accepts the whole batch. What is the probability that a batch containing 10 defective items will be accepted?
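The same combinatorial counting applies here: the batch is accepted only if all 10 inspected items come from the 90 good ones. A sketch, not the book's solution:

```python
from fractions import Fraction
from math import comb

p = Fraction(comb(90, 10), comb(100, 10))  # all-good samples over all samples of 10
print(float(p))  # roughly 0.33
```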
1.15 House Rules
Suppose you are playing the following game against the house in Las Vegas. You pick a number between one and six. The house rolls three dice, and pays you $1,000 if your number comes up on one die, $2,000 if your number comes up on two dice, and $3,000 if your number comes up on all three dice. If your number does not show up at all, you pay the house $1,000. At first glance, this looks like a fair game (that is, a game in which the expected payoff is zero), but in fact it is not. How much can you expect to win (or lose)?
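The expected payoff follows from enumerating all 216 equally likely rolls of three dice (by symmetry the chosen number can be taken to be 1). A sketch, not the book's solution:

```python
from itertools import product

payoff = {0: -1000, 1: 1000, 2: 2000, 3: 3000}  # dollars, keyed by number of matching dice
total = sum(payoff[roll.count(1)] for roll in product(range(1, 7), repeat=3))
print(total / 6 ** 3)  # about -78.70 dollars per play
```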
1.16 The Addition Rule for Probabilities
Let A and B be two events. Then 0 ≤ P[A] ≤ 1 and
P[A ∪ B] = P[A] + P[B] − P[AB].
If A and B are disjoint (that is, the events are mutually exclusive), then
P[A ∪ B] = P[A] + P[B].
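The addition rule is easy to verify on a concrete pair of events from a single die roll; a sketch (the particular events A and B are arbitrary choices):

```python
from fractions import Fraction

omega = set(range(1, 7))      # one roll of a fair die
A = {2, 4, 6}                 # "the roll is even"
B = {4, 5, 6}                 # "the roll is greater than 3"
P = lambda E: Fraction(len(E), len(omega))  # equally likely outcomes

assert P(A | B) == P(A) + P(B) - P(A & B)
print(P(A | B))  # 2/3
```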
(Continues...)
Table of Contents
Preface xv
Chapter 1: Probability Theory 1
1.1 Basic Set Theory and Mathematical Notation 1
1.2 Probability Spaces 2
1.3 De Morgan's Laws 3
1.4 Interocitors 3
1.5 The Direct Evaluation of Probabilities 3
1.6 Probability as Frequency 4
1.7 Craps 5
1.8 A Marksman Contest 5
1.9 Sampling 5
1.10 Aces Up 6
1.11 Permutations 6
1.12 Combinations and Sampling 7
1.13 Mechanical Defects 7
1.14 Mass Defection 7
1.15 House Rules 7
1.16 The Addition Rule for Probabilities 8
1.17 A Guessing Game 8
1.18 North Island, South Island 8
1.19 Conditional Probability 9
1.20 Bayes' Rule 9
1.21 Extrasensory Perception 10
1.22 Les Cinq Tiroirs 10
1.23 Drug Testing 10
1.24 Color Blindness 11
1.25 Urns 11
1.26 The Monty Hall Game 11
1.27 The Logic of Murder and Abuse 11
1.28 The Principle of Insufficient Reason 12
1.29 The Greens and the Blacks 12
1.30 The Brain and Kidney Problem 12
1.31 The Value of Eyewitness Testimony 13
1.32 When Weakness Is Strength 13
1.33 The Uniform Distribution 16
1.34 Laplace's Law of Succession 17
1.35 From Uniform to Exponential 17
Chapter 2: Bayesian Decision Theory 18
2.1 The Rational Actor Model 18
2.2 Time Consistency and Exponential Discounting 20
2.3 The Expected Utility Principle 22
2.4 Risk and the Shape of the Utility Function 26
2.5 The Scientific Status of the Rational Actor Model 30
Chapter 3: Game Theory: Basic Concepts 32
3.1 Big John and Little John 32
3.2 The Extensive Form 38
3.3 The Normal Form 41
3.4 Mixed Strategies 42
3.5 Nash Equilibrium 43
3.6 The Fundamental Theorem of Game Theory 44
3.7 Solving for Mixed-Strategy Nash Equilibria 44
3.8 Throwing Fingers 46
3.9 Battle of the Sexes 46
3.10 The Hawk-Dove Game 48
3.11 The Prisoner's Dilemma 50
Chapter 4: Eliminating Dominated Strategies 52
4.1 Dominated Strategies 52
4.2 Backward Induction 54
4.3 Exercises in Eliminating Dominated Strategies 55
4.4 Subgame Perfection 57
4.5 Stackelberg Leadership 59
4.6 The Second-Price Auction 59
4.7 The Mystery of Kidnapping 60
4.8 The Eviction Notice 62
4.9 Hagar's Battles 62
4.10 Military Strategy 63
4.11 The Dr. Strangelove Game 64
4.12 Strategic Voting 64
4.13 Nuisance Suits 65
4.14 An Armaments Game 67
4.15 Football Strategy 67
4.16 Poker with Bluffing 68
4.17 The Little Miss Muffet Game 69
4.18 Cooperation with Overlapping Generations 70
4.19 Dominance-Solvable Games 71
4.20 Agent-Based Modeling 72
4.21 Why Play a Nash Equilibrium? 75
4.22 Modeling the Finitely-Repeated Prisoner's Dilemma 77
4.23 Review of Basic Concepts 79
Chapter 5: Pure-Strategy Nash Equilibria 80
5.1 Price Matching as Tacit Collusion 80
5.2 Competition on Main Street 81
5.3 Markets as Disciplining Devices: Allied Widgets 81
5.4 The Tobacco Market 87
5.5 The Klingons and the Snarks 87
5.6 Chess: The Trivial Pastime 88
5.7 No-Draw, High-Low Poker 89
5.8 An Agent-Based Model of No-Draw, High-Low Poker 91
5.9 The Truth Game 92
5.10 The Rubinstein Bargaining Model 94
5.11 Bargaining with Heterogeneous Impatience 96
5.12 Bargaining with One Outside Option 97
5.13 Bargaining with Dual Outside Options 98
5.14 Huey, Dewey, and Louie Split a Dollar 102
5.15 Twin Sisters 104
5.16 The Samaritan's Dilemma 104
5.17 The Rotten Kid Theorem 106
5.18 The Shopper and the Fish Merchant 107
5.19 Pure Coordination Games 109
5.20 Pick Any Number 109
5.21 Pure Coordination Games: Experimental Evidence 110
5.22 Introductory Offers 111
5.23 Web Sites (for Spiders) 112
Chapter 6: Mixed-Strategy Nash Equilibria 116
6.1 The Algebra of Mixed Strategies 116
6.2 Lions and Antelope 117
6.3 A Patent Race 118
6.4 Tennis Strategy 119
6.5 Preservation of Ecology Game 119
6.6 Hard Love 120
6.7 Advertising Game 120
6.8 Robin Hood and Little John 122
6.9 The Motorist's Dilemma 122
6.10 Family Politics 123
6.11 Frankie and Johnny 123
6.12 A Card Game 124
6.13 Cheater-Inspector 126
6.14 The Vindication of the Hawk 126
6.15 Characterizing 2 × 2 Normal Form Games I 127
6.16 Big John and Little John Revisited 128
6.17 Dominance Revisited 128
6.18 Competition on Main Street Revisited 128
6.19 Twin Sisters Revisited 129
6.20 Twin Sisters: An AgentBased Model 129
6.21 One-Card, Two-Round Poker with Bluffing 131
6.22 An AgentBased Model of Poker with Bluffing 132
6.23 Trust in Networks 133
6.24 El Farol 134
6.25 Decorated Lizards 135
6.26 Sex Ratios as Nash Equilibria 137
6.27 A Mating Game 140
6.28 Coordination Failure 141
6.29 Colonel Blotto Game 141
6.30 Number Guessing Game 142
6.31 Target Selection 142
6.32 A Reconnaissance Game 142
6.33 Attack on Hidden Object 143
6.34 Two-Person, Zero-Sum Games 143
6.35 Mutual Monitoring in a Partnership 145
6.36 Mutual Monitoring in Teams 145
6.37 Altruism(?) in Bird Flocks 146
6.38 The Groucho Marx Game 147
6.39 Games of Perfect Information 151
6.40 Correlated Equilibria 151
6.41 Territoriality as a Correlated Equilibrium 153
6.42 Haggling at the Bazaar 154
6.43 Poker with Bluffing Revisited 156
6.44 Algorithms for Finding Nash Equilibria 157
6.45 Why Play Mixed Strategies? 160
6.46 Review of Basic Concepts 161
Chapter 7: Principal-Agent Models 162
7.1 Gift Exchange 162
7.2 Contract Monitoring 163
7.3 Profit Signaling 164
7.4 Properties of the Employment Relationship 168
7.5 Peasant and Landlord 169
7.6 Bob's Car Insurance 173
7.7 A Generic Principal-Agent Model 174
Chapter 8: Signaling Games 179
8.1 Signaling as a Coevolutionary Process 179
8.2 A Generic Signaling Game 180
8.3 Sex and Piety: The Darwin-Fisher Model 182
8.4 Biological Signals as Handicaps 187
8.5 The Shepherds Who Never Cry Wolf 189
8.6 My Brother's Keeper 190
8.7 Honest Signaling among Partial Altruists 193
8.8 Educational Signaling 195
8.9 Education as a Screening Device 197
8.10 Capital as a Signaling Device 199
Chapter 9: Repeated Games 201
9.1 Death and Discount Rates in Repeated Games 202
9.2 Big Fish and Little Fish 202
9.3 Alice and Bob Cooperate 204
9.4 The Strategy of an Oil Cartel 205
9.5 Reputational Equilibrium 205
9.6 Tacit Collusion 206
9.7 The One-Stage Deviation Principle 208
9.8 Tit for Tat 209
9.9 I'd Rather Switch Than Fight 210
9.10 The Folk Theorem 213
9.11 The Folk Theorem and the Nature of Signaling 216
9.12 The Folk Theorem Fails in Large Groups 217
9.13 Contingent Renewal Markets Do Not Clear 219
9.14 Short-Side Power in Contingent Renewal Markets 222
9.15 Money Confers Power in Contingent Renewal Markets 223
9.16 The Economy Is Controlled by the Wealthy 223
9.17 Contingent Renewal Labor Markets 224
Chapter 10: Evolutionarily Stable Strategies 229
10.1 Evolutionarily Stable Strategies: Definition 230
10.2 Properties of Evolutionarily Stable Strategies 232
10.3 Characterizing Evolutionarily Stable Strategies 233
10.4 A Symmetric Coordination Game 236
10.5 A Dynamic Battle of the Sexes 236
10.6 Symmetrical Throwing Fingers 237
10.7 Hawks, Doves, and Bourgeois 238
10.8 Trust in Networks II 238
10.9 Cooperative Fishing 238
10.10 Evolutionarily Stable Strategies Are Not Unbeatable 240
10.11 A Nash Equilibrium That Is Not an ESS 240
10.12 Rock, Paper, and Scissors Has No ESS 241
10.13 Invasion of the PureStrategy Mutants 241
10.14 Multiple Evolutionarily Stable Strategies 242
10.15 Evolutionarily Stable Strategies in Finite Populations 242
10.16 Evolutionarily Stable Strategies in Asymmetric Games 244
Chapter 11: Dynamical Systems 247
11.1 Dynamical Systems: Definition 247
11.2 Population Growth 248
11.3 Population Growth with Limited Carrying Capacity 249
11.4 The Lotka-Volterra Predator-Prey Model 251
11.5 Dynamical Systems Theory 255
11.6 Existence and Uniqueness 256
11.7 The Linearization Theorem 257
11.8 Dynamical Systems in One Dimension 258
11.9 Dynamical Systems in Two Dimensions 260
11.10 Exercises in Two-Dimensional Linear Systems 264
11.11 Lotka-Volterra with Limited Carrying Capacity 266
11.12 Take No Prisoners 266
11.13 The Hartman-Grobman Theorem 267
11.14 Features of Two-Dimensional Dynamical Systems 268
Chapter 12: Evolutionary Dynamics 270
12.1 The Origins of Evolutionary Dynamics 271
12.2 Strategies as Replicators 272
12.3 A Dynamic Hawk-Dove Game 274
12.4 Sexual Reproduction and the Replicator Dynamic 276
12.5 Properties of the Replicator System 278
12.6 The Replicator Dynamic in Two Dimensions 279
12.7 Dominated Strategies and the Replicator Dynamic 280
12.8 Equilibrium and Stability with a Replicator Dynamic 282
12.9 Evolutionary Stability and Asymptotic Stability 284
12.10 Trust in Networks III 284
12.11 Characterizing 2 × 2 Normal Form Games II 285
12.12 Invasion of the PureStrategy Nash Mutants II 286
12.13 A Generalization of Rock, Paper, and Scissors 287
12.14 Uta stansburiana in Motion 287
12.15 The Dynamics of Rock, Paper, and Scissors 288
12.16 The Lotka-Volterra Model and Biodiversity 288
12.17 Asymmetric Evolutionary Games 290
12.18 Asymmetric Evolutionary Games II 295
12.19 The Evolution of Trust and Honesty 295
Chapter 13: Markov Economies and Stochastic Dynamical Systems 297
13.1 Markov Chains 297
13.2 The Ergodic Theorem for Markov Chains 305
13.3 The Infinite Random Walk 307
13.4 The Sisyphean Markov Chain 308
13.5 Andrei Andreyevich's Two-Urn Problem 309
13.6 Solving Linear Recursion Equations 310
13.7 Good Vibrations 311
13.8 Adaptive Learning 312
13.9 The Steady State of a Markov Chain 314
13.10 Adaptive Learning II 315
13.11 Adaptive Learning with Errors 316
13.12 Stochastic Stability 317
Chapter 14: Table of Symbols 319
Chapter 15: Answers 321
Sources for Problems 373
References 375