How Reason Almost Lost Its Mind: The Strange Career of Cold War Rationality by Paul Erickson, Judy L. Klein, Lorraine Daston, Rebecca Lemov, Thomas Sturm, and Michael D. Gordin
In the United States at the height of the Cold War, roughly between the end of World War II and the early 1980s, a new project of redefining rationality commanded the attention of sharp minds, powerful politicians, wealthy foundations, and top military brass. Its home was the human sciences—psychology, sociology, political science, and economics, among others—and its participants enlisted in an intellectual campaign to figure out what rationality should mean and how it could be deployed.
How Reason Almost Lost Its Mind brings to life the people—Herbert Simon, Oskar Morgenstern, Herman Kahn, Anatol Rapoport, Thomas Schelling, and many others—and places, including the RAND Corporation, the Center for Advanced Study in the Behavioral Sciences, the Cowles Commission for Research in Economics, and the Council on Foreign Relations, that played a key role in putting forth a “Cold War rationality.” Decision makers harnessed this picture of rationality—optimizing, formal, algorithmic, and mechanical—in their quest to understand phenomena as diverse as economic transactions, biological evolution, political elections, international relations, and military strategy. The authors chronicle and illuminate what it meant to be rational in the age of nuclear brinkmanship.
- Publisher: University of Chicago Press
- Product dimensions: 11.70(w) x 8.10(h) x 1.30(d)
Read an Excerpt
How Reason Almost Lost Its Mind
The Strange Career of Cold War Rationality
By PAUL ERICKSON, JUDY L. KLEIN, LORRAINE DASTON, REBECCA LEMOV, THOMAS STURM, and MICHAEL D. GORDIN
THE UNIVERSITY OF CHICAGO PRESS. Copyright © 2013 The University of Chicago.
All rights reserved.
Enlightenment Reason, Cold War Rationality, and the Rule of Rules
Sometime in 1952, RAND Corporation mathematician Merrill Flood decided to bring work home from the office. He asked his three teenage children to bid by a reverse auction for an attractive babysitting opportunity on the condition that "they would tell me how they reached their agreement." After a week of deliberation, the teenagers, who had been given permission to better their collective lot by forming coalitions, were still unable to reach an agreement. Worse, the winning individual bid of ninety cents was grossly irrational according to the game-theoretic pay-off matrix, which Flood proceeded to calculate. Flood drew sweeping conclusions from this domestic experiment: "This is probably an extreme example, although not really so extreme when you compare the magnitude of the children's error with that made by mature nations at war because of inability to split-the-difference. I have noticed very similar 'irrational' behavior in many other real life situations since August 1949 [when the Soviet Union exploded its first atomic bomb] and find it to be commonplace rather than rare." Rationality began at home, but its ambitions embraced the wide world.
From teenage befuddlement to wartime stalemate: Flood and his colleagues at the Santa Monica think tank RAND were after an articulation of rationality so powerful and so general that it would apply to situations as prosaic as haggling over the price of a used Buick with nuclear strategist Herman Kahn (another of Flood's homespun experiments) and as apocalyptic as nuclear warfare. Flood, who had made a name in World War II operations research (described in chapter 2) and later (as we will see in chapter 5) contributed to the formulation of the prisoner's dilemma, had by the early 1950s grown skeptical about the applicability of game theory to anything but the most trivial parlor games. But neither he nor his fellow analysts at RAND and other bastions of Cold War research ever doubted that whatever rationality was, it would be a matter of rules, the more mechanical the better. Perhaps the axioms of John von Neumann's and Oskar Morgenstern's game theory would have to be modified in light of experiments like the one performed on Flood's children or the SWAP war game played by his fellow RAND researchers (figure 1.1); perhaps a superego or even a neurosis would have to be programmed into the "rational mind" of a mechanical deliberator; perhaps actors would have to be taught to behave "in a spirit of calmly aggressive selfishness." No matter how heterodox, however, attempts to model rationality almost never questioned one precept: rationality consisted of rules.
In the two decades following World War II, human reason was reconceptualized as rationality. Philosophers, mathematicians, economists, political scientists, military strategists, computer scientists, and psychologists sought, defined, and debated new kinds of norms for "rational actors," a deliberately capacious category that included business firms, chess players, the mafia, computers, parents and children, and nuclear superpowers. Older concepts of reason had sometimes been disembodied, the property of an ethereal Christian soul or a Cartesian res cogitans, but new-fangled views of rationality often departed from materiality (and humanity) altogether: "The rule that such a device is to follow," explained MIT computer scientist Joseph Weizenbaum, "the law of which it is to be an embodiment, is an abstract idea. It is independent of matter, of material embodiment, in short, of everything except thought and reason." As Weizenbaum himself went on to argue, the reason in question was restricted to "formal thinking, calculation, and systematic rationality." What made both the generality and the immateriality of rational actors conceivable was the implicit assumption that whatever rationality was, it could be captured by a finite, well-defined set of rules to be applied unambiguously in specified settings—without recourse to the faculty of judgment so fundamental to traditional ideals of reason and reasonableness. Von Neumann and Morgenstern articulated their definition of rationality in the context of game theory thus: "We described in [section] 4.1.2 what we expect a solution—i.e., a characterization of 'rational behavior'—to consist of. This amounted to a complete set of rules of behavior in all conceivable situations. This holds equivalently for a social economy and for games." 
The solution to the failure of extant rules in logic and arithmetic to cover the full range of decision making under uncertainty was, according to University of Chicago economist and Cowles Commission member Jacob Marschak, more such rules: "We need additional definitions and postulated rules to 'prolong' logic and arithmetic into the realm of decision. We shall define rational behavior as that which follows those rules, in addition to the rules of logic and arithmetic."
And not just any kind of rules: it was above all algorithms—for centuries the exclusive province of arithmetic but extended to logic in the late nineteenth century and from logic to all of mathematics in the early twentieth century—which characterized these attempts to define rational behavior. Algorithms did not even merit an entry in one of the most comprehensive mathematical dictionaries of the mid-nineteenth century, but by the turn of the twentieth century, a flourishing research program in mathematical logic had elevated the humble algorithms of elementary calculation to the status of a model for the foundations of all mathematical demonstration. In a seminal treatise, Russian mathematician A. A. Markov described the three desiderata of an algorithm: "(a) the precision of the prescription, leaving no place to arbitrariness, and its universal comprehensibility—the definiteness of the algorithm; (b) the possibility of starting out with initial data, which may vary within given limits—the generality of the algorithm; and (c) the orientation of the algorithm toward obtaining some desired result, which is indeed obtained in the end with proper initial data—the conclusiveness of the algorithm." Although they often described their epoch in terms of complexity, uncertainty, and risk and conjured the specter of a nuclear war triggered by accident, misunderstanding, or lunacy, the participants in the debate over Cold War rationality believed that the crystalline definiteness, generality, and conclusiveness of the algorithm could cope with a world on the brink.
Theorists of games, strategic conflict, artificial intelligence, and cognitive science diverged frequently and substantively on major issues: for example, cognitive scientist Herbert Simon's program to model "bounded rationality" with heuristics clashed with economists' imperative to optimize; economist Thomas Schelling was skeptical about the usefulness of zero-sum games for modeling strategic decisions; even Morgenstern wondered whether, pace the adversarial assumptions of game theory, cooperation might not be "more natural" than conflict in many situations. The Cold War rationalists, ever critical of themselves and each other, by no means constituted anything like a unified program, much less a school. What nonetheless justifies the label is the shared assumption, rarely examined but always fundamental, that whatever rationality was, it could be stated in algorithmic rules—whether these were strategies in game theory, the consistency specifications of personal utilities, linear programming code, actuarial formulas for clinical decisions, or cognitive representations.
What was novel about this view of rationality? After all, algorithms are as old as addition, subtraction, multiplication, and division. Visions, theories, and devices that seem to foreshadow this or that element of Cold War rationality can be found in other times and places: Gottfried Wilhelm Leibniz's seventeenth-century dream of reducing reason to a calculus; Daniel Bernoulli's eighteenth-century explorations of how mathematical expectation in probability theory could be redefined to express what economists later called utility; Charles Babbage's nineteenth-century project for an analytical engine that would perform the operations of mathematical analysis as well as those of arithmetic; William Stanley Jevons's slightly later logic piano that mechanically derived conclusions from premises. In retrospect, any and all of these may look like anticipations of the rule-bound rationality pursued so energetically in the mid-twentieth century, and, as we'll see, they were sometimes enlisted in attempts to provide game theory or utility theory or artificial intelligence with eminent ancestors. But only in retrospect do these dispersed ideas and inventions hang together: for Bernoulli and other early probabilists, for example, utility theory, grounded in subjective preferences, had little to do with mechanical calculation; the moral that Babbage drew from his difference engine was not that it was artificially intelligent but rather that computation, even complex computation, required little or no intelligence. Human reason was often defined in opposition to mechanical rule following (or the rote behavior of animals). As an 1842 account of Babbage's plans for an analytical engine put it, the "mechanical branch" of mathematics must be distinguished from "the domain of understanding ... reserving for the pure intellect that which depends on the reasoning faculties." 
Until the middle decades of the twentieth century, algorithmic rules, most especially those executed by machines, seemed the least, not the most promising materials for a normative account of rationality.
In order to take the measure of just what was new about various versions of Cold War rationality, we must therefore step back and survey its emergence against the background of a longer history of reason and rules. Only then does the historicity of Cold War rationality snap into focus: under what circumstances could mechanical rule following, previously excluded from the "domain of understanding," become the core of rationality? This chapter traces how the elements of Cold War rationality were available (and, at least in one notable case, united) at latest by the mid-nineteenth century. But far from solidifying into a new ideal of rationality, they underwent radical intellectual (and economic) devaluation, as working to rigid rules was first handed over to badly paid laborers and eventually to machines. It was in the context of the Cold War that those same elements—algorithmic rules impervious to context and immune to discretion, rules that could be executed by any computer, human or otherwise, with "no authority to deviate from them in any detail"—came together as a new form of rationality with glittering cachet in the human sciences and beyond. To tell this story in its entirety would require volumes, encompassing everything from the history of philosophy since the Enlightenment to the rise of the modern bureaucracy to the development of the computer (and perhaps also the cookbook). Here, however, we will concentrate on those features of earlier accounts of reason that seem to most resemble aspects of Cold War rationality: Enlightenment applications of arithmetic algorithms and probability; nineteenth-century attempts to mechanize calculation; and the shift in the meaning of rule from model to algorithm.
We begin with a comparison of Cold War rationality to older alternatives, especially those Enlightenment versions that seem to resemble it most closely (and which were sometimes cited by the Cold War rationalists as forerunners). Key to the contrast between Enlightenment and Cold War versions of rationality is the rise of the modern, automated algorithm in connection with the economic rationalization of calculation. Rules too have their history, and the allure of rules as the backbone of rationality demands explanation. Against this background, algorithm-driven rationality emerged as a powerful tool and seductive fantasy. Or, as some critics maintained, as a powerful fantasy and a seductive tool, for its ambitions and applications were from the outset and remain controversial. Neither the rise of mathematical logic in the first half of the twentieth century nor the spread of computers in the second suffices to explain why algorithm-centered rationality became compelling in the American human sciences during the Cold War. Even within their own ranks, the Cold War rationalists struggled to maintain the consistency and clarity of their rules in the face of phenomena such as emotional outbursts, neuroses, indecision, dissent, caprices, and other manifestations of what they came to call problems of "integration," whether on the part of world leaders or their own children. On the mock gothic campuses of leading universities, in the studied informality of think tank offices, in front of room-sized computers named ENIAC and MANIAC and in the none-too-tranquil bosom of their families, the Cold War rationalists pondered the coherence of both society and the self (figure 1.2).
1.1. "Let Us Calculate"
In December 1971 Princeton professor of political economy Oskar Morgenstern wrote to his colleague Margaret Wilson in the Philosophy Department to ask for the source of a visionary quotation from the seventeenth-century philosopher and mathematician Gottfried Wilhelm Leibniz:
For all inquiries which depend on reasoning would be performed by the transposition of characters and by a kind of calculus, which would immediately facilitate the discovery of beautiful results ... it would be easy to verify the calculation either by doing it over or by trying tests similar to that of casting out nines in arithmetic. And if someone would doubt my results, I should say to him: "Let us calculate, Sir," and thus by taking to pen and ink, we should soon settle the question.
Morgenstern, German-born, Austrian-educated, and a certified Bildungsbürger who strewed maxims from La Rochefoucauld and historical analogies to the military campaigns of Charles V and Napoleon in his lectures on arms control to the Council on Foreign Relations, had collected materials for a history of game theory (never completed). In addition to giving his shared brainchild an intellectual pedigree, Morgenstern seems to have been wrestling with, on the one hand, the tensions between his earlier education, heavily inflected with philosophy and history, as well as his own earlier reservations about the unreality of hyperrational theories used to make economic forecasts, and, on the other, the formality and simplifying assumptions of game theory. He was not, however, alone among the Cold War rationalists in seeking forerunners among the Enlightenment probabilists, especially in the work of Marie Jean Antoine Nicolas de Caritat, the Marquis de Condorcet. Were they right to see themselves as reviving a very distinctive form of Enlightenment reasonableness?
Certainly, Enlightenment luminaries were well acquainted with the mathematics of games and the refinements of machinery; some were moreover fascinated by the possibility of turning probability theory into a reasonable calculus and by automata that mimicked the behavior of humans and animals. For example, Condorcet once computed the minimum probability of not being falsely convicted of a crime that a citizen in a just society must be guaranteed on the analogy of a risk small enough that anyone would take it without a second thought—such as taking the packet boat from Dover to Calais in calm weather on a seaworthy boat manned by a competent crew. Immanuel Kant suggested that the intensity of belief be gauged by how much the believer was willing to wager in a bet as to the conviction's truth or falsehood: "Sometimes he reveals that he is persuaded enough for one ducat but not for ten." (In the same passage, Kant avowed that he himself would be willing to risk "many advantages of life" in a bet on the existence of inhabitants on at least one other planet besides Earth.) The mathematics of games seemed to these Enlightenment thinkers fertile in lessons about how to reason with exactitude and consistency.
Machines in the form of automata similarly stimulated speculation about how far the analogy between human beings and machines could be stretched, whether in the form of beguilingly realistic automata that played the flute or wrote "cogito ergo sum" with a quill pen, or in that of materialist treatises like Julien Offray de la Mettrie's L'Homme machine (1748), which described the human body as "a machine that winds its own strings" and asserted the soul to be "but an empty word." One celebrated eighteenth-century automaton, the Turkish chess player first shown in Leipzig in 1784, dramatized the possibilities of the "mechanized higher faculties" (although it was ultimately exposed as a fraud). There would have been nothing to shock a well-read Enlightenment philosophe in musings about intelligent machines—a category that perhaps included human beings (figure 1.3).
Excerpted from How Reason Almost Lost Its Mind by Paul Erickson, Judy L. Klein, Lorraine Daston, Rebecca Lemov, Thomas Sturm, and Michael D. Gordin. Copyright © 2013 The University of Chicago. Excerpted by permission of THE UNIVERSITY OF CHICAGO PRESS.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Meet the Authors
Paul Erickson is assistant professor of history and science in society at Wesleyan University and lives in Middletown, CT. Judy L. Klein is professor of economics at Mary Baldwin College and lives in Staunton, VA. Lorraine Daston is director of the Max Planck Institute for the History of Science and visiting professor in the Committee on Social Thought at the University of Chicago. She lives in Berlin, Germany. Rebecca Lemov is associate professor of the history of science at Harvard University and lives in Cambridge, MA. Thomas Sturm is a Ramón y Cajal Research Professor in the Department of Philosophy at the Autonomous University of Barcelona and lives in Cerdanyola del Vallès, Spain. Michael D. Gordin is professor of the history of science at Princeton University and lives in Princeton, NJ.