In simple action theory, when people choose between courses of action, they know what the outcome will be. When an individual is making a choice "against nature," such as switching on a light, that assumption may hold true. But in strategic interaction, indeterminacy of outcomes is pervasive and often intractable. Whether one is choosing for oneself or making a choice about a policy matter, it is usually possible only to make a guess about the outcome, one based on anticipating what other actors will do. In this book Russell Hardin asserts, in his characteristically clear and uncompromising prose, "Indeterminacy in contexts of strategic interaction . . . is an issue that is constantly swept under the rug because it is often disruptive to pristine social theory. But the theory is fake: the indeterminacy is real."
In the course of the book, Hardin thus outlines the various ways in which theorists from Hobbes to Rawls have gone wrong in denying or ignoring indeterminacy, and suggests how social theories would be enhanced--and how certain problems could be resolved effectively or successfully--if they assumed from the beginning that indeterminacy was the normal state of affairs, not the exception. Representing a bold challenge to widely held theoretical assumptions and habits of thought, Indeterminacy and Society will be debated across a range of fields including politics, law, philosophy, economics, and business management.
"Hardin shows us the importance of recognizing indeterminacy for a wide range of theories, from rational choice to deontological moral theory. The significance of this work for political and moral philosophy should not be underestimated."--Sarah Marshall, Philosophical Quarterly
"This book achieves an unusual feat of balance--conveying both the profundity and the limitations of attempts to use rational choice tools to address grand questions about ideal social organization."--Steven Rytina, American Journal of Sociology
INDETERMINACY in contexts of strategic interaction-which is to say, in virtually all social contexts-is an issue that is constantly swept under the rug because it is often disruptive to pristine social theory. But the theory is fake: the indeterminacy is real. I wish here to address such indeterminacy, its implications for collective choice, and the ways in which it has been hidden from view or ignored in manifold theories, and some ways in which it has been well handled and even made central to theory. The effort to pretend indeterminacy away or to hide it from view pervades social theory of virtually every kind, from the most technical game-theoretic accounts of extremely fine points to moral theory from its beginnings. The issue is that the basic rationality that makes sense of or fits individual choice in the simplest contexts of choosing against nature does not readily generalize to contexts in which individuals are interacting with other individuals. The basic rationality that says more resources are preferable to less is indeterminate for the more complicated context, which is almost the whole of our lives outside the casino and the lottery.
Typically, the central task in strategic interactions is obtaining the best possible outcome for oneself. Unfortunately, in many social contexts I cannot simply act in a way that determines my own outcome. I can choose only a strategy, not an outcome. All that I determine with my strategy choice is some constraints on the possible array of outcomes I might get. To narrow this array to a single outcome requires action from you and perhaps many others. I commonly cannot know what is the best strategy choice for me to make unless I know what strategy choices others will make. But if all of us can know what all others are going to do, then it is not coherent to say that thereafter we can alter our choices in the light of that knowledge. This is the form of indeterminacy at issue in this book: indeterminacy that results from strategic interaction. Interactive choice as represented descriptively in game theory is often indeterminate for each individual chooser. For an individual chooser in a moment of choice, this indeterminacy is not a failure of reason by the chooser; the indeterminacy is in the world because it follows from the mismatch of the preferences of all those in the interaction of the moment.
Problems of a partially related kind were characterized by C. H. Waddington (1967, 17), in a sadly neglected book, as stochastic. This word, which means, roughly, "probabilistic," comes from a Greek root that means "proceeding by guesswork" or, more literally, "skillful in aiming." Stochastic problems are those for which, in a sense, nature might outsmart our choice of strategy so that we get an outcome very different from what we would have wanted, at least in some cases. As a remarkably clear and varied example of the problems at issue, I will often discuss problems of vaccination against a major disease. That range of problems has the attractive feature that it, in its simpler forms, should not be very controversial, either pragmatically, causally, or morally. Especially because it is often not morally controversial, it will be very useful in exemplifying the nature of stochastic policy problems, including many, such as nuclear deterrence (in which accidents could have happened), that are also troubled with strategic interaction. For stochastic problems of individual choice, we can readily reduce our choices to their expected values; and then we can select more rather than less. Stochastic collective choices or policies commonly entail losses for some and gains for others, so that we can choose unproblematically from expected value only if we do not know in advance who will be the losers and gainers.
To see the peculiarly stochastic nature of many collective choice problems that we face, consider the program of polio vaccination before the disease was eradicated in the wild in North America (that is, outside certain laboratory stores of the virus). The facts are roughly these. We vaccinate millions, including almost the entire population of children. Many of these children would die, and many would be permanently, even hideously, crippled if not vaccinated. Among those vaccinated, a very small number suffer serious cases of paralytic polio. There is no question that fewer are harmed by vaccinating than by not vaccinating the population. Our strategic action is to protect people, but among the outcomes that could follow from our action, we also harm some people, some of whom might never have got polio if they had not been vaccinated, or even if no one had been vaccinated.
When we choose an action or a policy, this is often the structure of it. We have some chance of doing harm and some chance of doing good in the unavoidable sense that in order to do something good we must risk doing something bad. Sometimes this is for reasons of the nature of the world, as in the case of vaccination. But at other times it is for reasons of the nature of strategic interaction. I choose, in a sense, a strategy, not an outcome. Then I get an outcome that is the result of the strategy choices of others in interaction with my choice.
These two classes of problems, strategic interaction and stochastic patterns of outcomes, have a common feature, which is the central issue of this book. They make indeterminate what an action or a policy is. In philosophical action theory, the actions are simple ones such as flipping a switch to turn on a light. In real life, our most important actions are not so simple. They are inherently interactions. We have reasons for taking our actions, but our reasons may not finally be reflected in the results of our actions even if hope for specific results is our reason for our choice of actions.
In three contexts I argue that taking indeterminacy into account up front by making it an assumption helps us to analyze certain problems correctly or to resolve them successfully. In these cases, using theories that ignore the indeterminacy at issue can lead to clouded understandings, even wrong understandings of the relevant issues. One of these contexts is the hoary problem of the iterated prisoner's dilemma and what strategy is rational when playing it. The second is the real world prisoner's dilemma of nuclear deterrence policy that, one hopes, is now past. The third is the great classical problem of how we can justify institutional actions that violate honored principles. For example, public policy is often determined by a cost-benefit analysis, which entails interpersonal comparisons of utility. The people who do these policy analyses are commonly economists who eschew interpersonal comparisons as metaphysically meaningless. Such comparisons are one theoretical device for avoiding indeterminacy. Although they have been intellectually rejected on theoretical grounds, and seemingly rightly so, still they make eminently good sense on an account of their authorization that is grounded in indeterminacy. In all three of these contexts, by starting with the (correct) presumption of indeterminacy, we get to a better outcome than if we insist on imposing determinacy.
Note that the vaccination case, in which some are harmed while others are benefited, is only a partial or reduced analogue of the social interaction case in which depending on what you do, I may do very well or very badly. The problem is simplified in that one of the actors is nature, rather than a strategically manipulative agent with its own interests possibly in conflict with ours. Keeping this simplified problem in mind helps to make the more complex issues of strategic interaction between two or more manipulative, self-interested agents relatively clear. The causal analysis of stochastic problems such as vaccination policy is unlikely to be controversial, whereas any account of a strategic interaction that stipulates what each of the parties will do or ought to do is likely to be controversial. Indeed, that is the central fact that provokes this work.
It is a correct assessment of rationality in social contexts that it is ill defined and often indeterminate. If this is true, then any instruction on what it is rational to do should not be based on the (wrong) assumption of determinacy. Assuming that the world of social choice is indeterminate rather than determinate would lead one to make different decisions in many contexts. Indeterminacy can be both disabling and enabling, depending on the nature of the decisions at stake. The indeterminacy of a principle of rational choice sometimes plays into the indeterminacy of knowledge, but I wish to address problems of knowledge primarily as they affect rational choice through strategic or rational indeterminacy, not as they affect ordinary decision making through epistemological indeterminacy.
Epistemological indeterminacy from causal ignorance-for example, from inadequate theory or inadequate knowledge-is a major problem in its own right, but it is not the focus here. Hence, I am not concerned with the disruptive possibilities of unintended consequences (which are a major problem in innovation and in public policy), even insofar as they result from complex interactions.
Strategic or rational indeterminacy, as in current theory, is partly the product of the ordinal revolution in economic and choice theory. That revolution has swept up economics and utilitarianism and has helped spawn rational choice theory through the ordinal theories of Kenneth Arrow (1963) and Joseph Schumpeter (1950). The problem of such indeterminacy arises from simple aggregation of interests, and therefore it is pervasive in neoclassical economics as well as in ordinal utilitarianism or welfarism. It is pervasive because our choices have social (or interactive) contexts. Arrow demonstrated this indeterminacy in principle already in one of the founding works of social choice. It is instructive that he discovered the indeterminacy while trying to find a determinate solution to collective aggregation of ordinal preferences. Unlike most theorists, however, he did not quail from the discovery but made it the centerpiece of his Impossibility Theorem (Arrow 1983, 1-4).
Because there is collective indeterminacy, there is indeterminacy in individual choice in contexts of strategic interaction. These are, in a sense, contexts of aggregation of interests, even though there may be substantial conflict over how to aggregate and those in interaction need not be concerned at all with the aggregate but only with personal outcomes. In such interactions, we may treat each other as merely part of the furniture of the universe with which we have to deal, so that we have no direct concern with the aggregate outcome, only with our own. As John Rawls (1999, 112 [1971, 128]) supposes, we are mutually disinterested. We should finally see such indeterminacy not as an anomaly but as the normal state of affairs, on which theory should build. Theory that runs afoul of such indeterminacy is often foul theory.
A quick survey of contexts of strategic interaction in which indeterminacy has played an important role and in which theorists have attempted to get around it or to deal with it would include at least the following seven (each of these will be discussed more fully in later chapters).
In game theory, John Harsanyi simply stipulates that a solution theory must be determinate despite the fact that adopting his determinacy principle makes no sense as an optimizing move. His move comes from nowhere, as though somehow it is irrational to live with indeterminacy (in which case it is irrational to live). It appears to be a response to the oddity of the prisoner's dilemma game when this game is iterated for a fixed number of plays (see chapter 2). That game is a pervasive part of life because it is essentially the structure of exchange. Any economist's theory of rationality must be able to handle that game. One might even say that the analysis of that game should come before almost anything else in the economist's world.
Equilibrium theory in economics is fundamentally indeterminate if there is a coordination problem. There is a coordination problem whenever there is more than one coordination equilibrium. In any whole economy, to which general equilibrium is meant to apply, there are apt to be numerous coordination problems in the abstract (see chapter 2).
Thomas Hobbes attempted to trick up determinacy in his theory of the creation of a sovereign, although he needed no trickery in his selection of any extant sovereign as determinately preferable to putting any alternative in place by rebellion (see chapter 3).
Jeremy Bentham imposed determinacy in his version of utilitarianism by supposing that utilities are comparable and additive across persons, so that in a comparison of various states of the universe, we could supposedly add up the utilities and immediately discover which state has the highest utility (see chapter 4).
Ronald Coase, with his Coase Theorem, may have made the cleverest move to overcome the problem of indeterminacy in an ordinal world by using cardinal prices to resolve the choice of what to produce (see chapter 5), although his resolution of this problem still leaves open the question of how to share the gains from production among the owners of the relevant productive assets.
A standard move in much of moral theory is to reduce the inordinate multiplicity of possible problems of individual choice by introducing a set of rules that grossly simplifies the choices we must make (see chapter 6). Such moral theory is now called deontology.
In his theory of justice, John Rawls achieves the appearance of determinacy in the abstract with his difference principle, but under that appearance there is a morass of indeterminacy in his category of primary goods that, if taken seriously, negates much of the seeming simplicity and appeal of his theory (see chapter 7). Nevertheless, far more clearly than most of the theorists considered here, he recognizes the centrality of the problem of indeterminacy in social choice and very cleverly attempts to overcome it.
We may ex ante create institutions that make decisions on principles that we would not have been able to use directly. For example, it may be mutually advantageous ex ante for us to have an institution use cost-benefit analysis, with its attendant interpersonal comparisons, even though we might not be able to give a mutual-advantage defense of such an analysis in any particular instance of its application (see chapter 8).
Excerpted from Indeterminacy and Society by Russell Hardin. Excerpted by permission.