Implications of Modern Decision Science for Military Decision-Support Systems
An overview of higher-level decisionmaking and modern methods to improve decision support.

by Paul K. Davis, Jonathan Kulick, and Michael Egner

Paperback

$20.00 

Overview

An overview of higher-level decisionmaking and modern methods to improve decision support.

Product Details

ISBN-13: 9780833038081
Publisher: RAND Corporation
Publication date: 08/15/2005
Pages: 166
Product dimensions: 15.20(w) x 23.00(h) x 1.40(d) cm

Read an Excerpt

Implications of Modern Decision Science for Military Decision-Support Systems


By Paul K. Davis, Jonathan Kulick, and Michael Egner

Rand Corporation

Copyright © 2005 RAND Corporation
All rights reserved.

Chapter One

Introduction

Objective

This monograph presents a selective survey of modern decision science prepared to assist the United States Air Force Research Laboratory (AFRL) in planning its research programs and, more specifically, developing methods and tools for decision support. Our emphasis is on relatively high-level decisionmaking rather than, say, that of pilots or intelligence analysts in the midst of real-time operations. We focus largely on what the military refers to as the strategic and operational levels. This said, we also draw upon considerable tactical-level research that has lessons for our work.

Definition and Scope

Definitions are necessary in a study such as this. We take the view that science is inquiry leading to an organized body of knowledge in a subject domain. The body of knowledge includes principles and frameworks. The knowledge is meaningful and transferable, and claims made about phenomena in the subject domain are, at least in principle, testable and reproducible. With that prelude,

Decision science contributes both to the understanding of human decisionmaking and to developing methods and tools to assist that decisionmaking. The latter branch relates closely to understanding what constitutes good decision support and how to go about providing it.

Figure 1.1 indicates the breakdown that we have used in our approach to the subject. In addressing human decisionmaking, we consider research on descriptive, normative, and prescriptive aspects (how humans actually make decisions, how they perhaps should make decisions, and how to go about doing so effectively, respectively). We primarily address individual-level decisionmaking, but we include some discussion of group processes and collaboration. We largely consider human decisionmaking, but we touch also upon decisionmaking in intelligent machines. In addressing concepts, methods, and tools, we focus primarily on relatively high-level decisionmaking, and our scope therefore tends to be associated with strategy, systems analysis, policy analysis, and choice under uncertainty.

It follows that our discussion omits a great deal that others might have included. For example, we do not address tactics, details of military targeting and maneuver, or fine-tuning resource allocation within a homogeneous domain. Nor do we deal with algorithms, computational methods, and mathematics such as might be treated in a review of operations research. Nor do we discuss many important issues of cognitive psychology, such as the performance of pilots as a function of cockpit displays. Even with these restrictions of scope, there is much to cover.

Descriptive Versus Prescriptive Research

In discussions of human decisionmaking, a distinction has often been made between descriptive and prescriptive research. The situation is actually more complex, because methods and tools intended for decision support should be cognitively comfortable for real human decisionmakers. That is not straightforward, because people do not easily reason in the manner sometimes recommended by economists or mathematicians. Furthermore, decisionmaking paradigms that once were thought to be obviously rational and good are not always as good as advertised, and they can even be dysfunctional. It turns out, then, that the frontiers of modern decision science include new concepts about what should be prescribed, not just about tools to support one style of reasoning or another.

Approach in This Monograph

A report surveying the whole of decision science would be very long and would necessarily duplicate portions of earlier books and articles. We have chosen to keep the discussion rather brief, to include what we consider useful citations to the existing literature, and to focus primarily on modern concepts and issues with which readers may be relatively unfamiliar and that have important implications for research on decision-support and related systems. Chapter Two describes some of the major findings of recent decades on how real decisionmakers actually reason and decide. This discussion reflects the "heuristics and biases" research most associated with Daniel Kahneman and Amos Tversky, and also loosely defined "naturalistic" research associated with Gary Klein, Gerd Gigerenzer, and others. The chapter also draws on research done in management schools by James March and others. Chapter Three reviews classic concepts of decision science and aspects of their evolution through the 1980s. Chapter Four discusses major themes of modern decision science. These build on the classic concepts but also repudiate the classic overemphasis on optimization, particularly in problems characterized by deep uncertainty. The principal theme is encouraging and assisting adaptiveness. Chapter Five is a first attempt to reconcile some of the contradictory strands discussed in Chapter Two and to move toward a synthesis that might be useful to those involved in analysis and decision support; it also recapitulates our conclusions and recommendations, including recommendations for research that AFRL might reasonably pursue and suggestions for terms of reference in the development of decision-support systems.

Finally, we note that although much of the monograph is rather general, our focus is on decision science relevant to military decisionmaking, and many of our examples are accordingly military.

Chapter Two

Human Decisionmaking

This chapter concerns the decision process and what decision science tells us about how human beings actually make decisions. Our primary emphasis is on higher-level decisionmaking, but we also draw upon literature that deals with operational decisionmaking, such as that by pilots, firemen, or platoon commanders. We do this in part because the lessons from that research extrapolate to a considerable extent to the decisionmakers on whom we have focused. We also emphasize decisionmaking by individuals. Even when decisions are made by or in groups of people and follow from interpersonal or social decision processes, the participants employ many of the same judgment and decisionmaking processes as they do when acting alone. While in no way a comprehensive treatment of judgment and decisionmaking, this chapter provides a basis for the subsequent chapters on analysis methods, as decision support is meaningless without supported decisionmaking.

How to Think About Decisionmaking

If we are to support decisionmaking, and so perhaps to improve it, we must first understand it. Despite decades of academic study, how best to think about decisionmaking remains unclear. Figure 2.1 illustrates this dilemma with four dichotomies taken from a summary work by James March (March, 1994). Should we see decisionmaking fundamentally as choice-based, as in evaluating alternatives, or as rule-based, as in recognizing the pattern of a situation and responding appropriately? Should we see the decisionmaking process as one characterized by a search for clarity and consistency or as one in which inconsistency and ambiguity are not only present but exploited (as in factions agreeing on an action despite having different objectives in mind)? Should we understand decisions as fitting into problem solving and measured by an allegedly objective goodness of outcome, or do we understand them in more social terms, such as symbols of a successful negotiation, the reaffirmation of the organization's ethos, or a leader's strength? And, finally, are decisions the result of individual actors or of more complex systems?

These matters are central to our work, because if we conceive of decision support strictly in terms of "rational" action (shown on the left side of Figure 2.1), we relegate our work to that of technical support. That may provide good information but miss many of the factors that confront real decisionmakers. On the other hand, if we conceive of decision support purely in terms of facilitating natural human processes, we may be denying decisionmakers the opportunity to see sharply some of the consequences of alternatives, or to see alternatives at all. Moreover, we might reinforce cognitive biases that generate what can be seen only as errors.

Decision support has typically focused on what its practitioners see as the rational-analysis issues, with the expectation that decisionmakers themselves will fill in the other factors. Probably with good justification, practitioners of decision support have seen worrying about political factors and other soft consequences as beyond their ken, or at least beyond their pay grade. Furthermore, the ethic of much systems analysis and policy analysis has been to present clearly the more analytical perspective so that policymakers can understand fully that aspect of the problem, without "contamination" by other, more political factors, even though the other factors may be legitimate and important to the policymakers in their final decisions. In this monograph, we have taken a more expansive view of decision support, moving among extremes of the four dichotomies.

Images of the Decision Process

If we imagine decisionmaking as a relatively orderly process, we can represent it schematically as shown on the left side of Figure 2.2. Although this depiction has prominent feedback loops, the image perceived by many is nonetheless one of linearity. The right side of Figure 2.2, then, is an alternative depiction emphasizing that the actual process is anything but linear or orderly. Both versions are syntheses of classic depictions and concerns that have too often been given short shrift, notably the early steps of recognizing that a crisis is approaching and reviewing the full range of interests at stake, rather than only the most obvious.

Subsequent steps, including development of alternatives, choice of strategy, and the notion of monitoring and adapting, have long been emphasized. The importance of subsequent adaptation was perhaps first acknowledged by Nobel Laureate Herbert Simon in his studies of decisionmaking in the business context and his outright rejection of then-dominant theories that imagined a more straightforward process designed to maximize utility (expected profit). Simon recognized that high-level decisions are beset by uncertainty and that any notions of optimizing are inappropriate:

Human behavior, even rational human behavior, is not to be accounted for by a handful of invariants.... Its base mechanisms may be relatively simple ... but that simplicity operates in interaction with extremely complex boundary conditions imposed by the environment.

With all of these qualifications ... Man, faced with complexity beyond his ken, uses his information processing capacities to seek out alternatives, to resolve uncertainties, and thereby - sometimes, not always - to find ways of action that are sufficient unto the day, that satisfice (Simon, 1978, last paragraph).
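
Simon's contrast between maximizing and satisficing is easy to make concrete. The sketch below is ours rather than the monograph's; the option names, payoffs, and aspiration level are invented purely for illustration. It contrasts the classical prescription of scoring every alternative by expected utility with a satisficer that accepts the first alternative clearing an aspiration level.

```python
def expected_utility_choice(options, utility, outcomes):
    """Classical prescription: score every option by expected utility
    and return the maximizer."""
    def eu(option):
        return sum(p * utility(option, outcome)
                   for outcome, p in outcomes.items())
    return max(options, key=eu)

def satisficing_choice(options, utility, outcomes, aspiration):
    """Simon's satisficer: examine options in the order encountered and
    accept the first whose expected utility clears the aspiration level."""
    for option in options:
        eu = sum(p * utility(option, outcome)
                 for outcome, p in outcomes.items())
        if eu >= aspiration:
            return option
    return None  # nothing satisfices; in practice, lower the aspiration and retry

# Toy example: three plans evaluated against two demand states.
outcomes = {"high_demand": 0.4, "low_demand": 0.6}
payoffs = {("plan_a", "high_demand"): 10, ("plan_a", "low_demand"): 2,
           ("plan_b", "high_demand"): 6,  ("plan_b", "low_demand"): 5,
           ("plan_c", "high_demand"): 11, ("plan_c", "low_demand"): 1}
utility = lambda option, outcome: payoffs[(option, outcome)]
options = ["plan_a", "plan_b", "plan_c"]

print(expected_utility_choice(options, utility, outcomes))             # plan_b (EU 5.4)
print(satisficing_choice(options, utility, outcomes, aspiration=5.0))  # plan_a (EU 5.2)
```

The point of the contrast is that the satisficer never enumerates the full option set, which is exactly what makes it viable when the set is vast or each evaluation is costly.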

A more extreme view would be that one should not even imagine optimizing, or doing a full "rational analysis," but instead should hope merely to move mostly in the right direction or even to succeed by "muddling through," as suggested by Charles Lindblom in a famous article in the late 1950s (Lindblom, 1959). The Lindblom view was that, in contrast to the normative version of decisionmaking, in which leaders assemble the options, consider all of the pros and cons, and make a reasoned judgment, reality more typically is so complex that comprehensive assessment of nonincremental options is too difficult and the result is a sequence of more hesitant steps over time. Later, Lindblom argued as well that issues are often characterized by partisan debate and compromise rather than by a more overall-rational process. Even so, the results can often be good. If Lindblom's initial work was pessimistic about doing better than just muddling through, later work by James Quinn and others suggested that indeed a firm could do better if it had an adequate vision or dream, still very far from anything like a blueprint but strong enough to result in more than mere muddling. He referred to this process as logical incrementalism (Quinn, 1980).

The Problems of Heuristics and Biases

Until Simon's work in the 1950s, it was generally assumed that insofar as people engaged in orderly decisionmaking (as shown on the left sides of Figures 2.1 and 2.2), they were good at it-"good" being more or less synonymous with "rational." Simon took this standard down a notch with the notion of bounded rationality: In making any but the simplest decisions, we operate within a complex external environment and have limited cognitive capabilities, time, and other resources. We therefore are rational only within the bounds imposed on us (Simon, 1956, 1982a,b).

While Simon sought to bring economic man into conformity with findings in cognitive psychology, a generation of psychologists used classical economic principles such as expected-utility maximization and Bayesian probability judgments as benchmarks. They then drew inferences about cognition by observing deviations from those benchmarks (Camerer, 1995). Nobel Laureate Daniel Kahneman and the late Amos Tversky conducted the foremost experiments in this field. Their findings highlight three classes of heuristics, or cognitive shortcuts, used in making decisions (Tversky and Kahneman, 1974). The heuristics often work very well, but they can also cause trouble. The heuristics Kahneman and Tversky highlighted are discussed below.
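
The benchmark methodology can be illustrated with a version of the well-known cab problem from this literature. In the sketch below, which is ours rather than the authors', Bayes' rule supplies the normative answer against which subjects' typical responses are scored as deviations.

```python
def bayes_posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Normative benchmark: posterior probability of hypothesis H
    after the evidence is observed, via Bayes' rule."""
    joint_h = prior * p_evidence_given_h
    joint_not_h = (1.0 - prior) * p_evidence_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Cab problem: 15% of the city's cabs are blue (base rate); a witness
# identifies the hit-and-run cab as blue and is correct 80% of the time.
posterior = bayes_posterior(prior=0.15,
                            p_evidence_given_h=0.80,
                            p_evidence_given_not_h=0.20)
print(f"{posterior:.2f}")  # 0.41 -- subjects typically answer near 0.80,
                           # neglecting the base rate
```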

Availability Heuristic

The perceived likelihood or frequency of an event increases with the ease of imagining it. Readily available instances or images are effectively assumed to represent unbiased estimates of statistical probabilities, even when they are not germane. For example, the USSR's Cold War assessment of the likelihood of Germany being a renascent military threat to its interests was biased by the vivid memory (availability) of World War II and the USSR's casualties in that war (Heuer, 1981). As another example, in assessing an enemy's behavior, a decisionmaker will often rely on the most available model for decisionmaking: his own plans and intentions. Britain based its pre-World War II estimates of the Luftwaffe's size on the premise that the "best criteria for judging Germany's rate of expansion were those that governed the rate at which the RAF could itself form efficient units" (Hinsley, Thomas, Ransom, and Knight, 1979).

Representativeness Heuristic

An object is judged to belong to a class according to how well it resembles that class (i.e., how well the object fits a stereotype of that class). This heuristic can be especially dangerous in reasoning by historical analogy (Jervis, 1976): "This situation is similar to a previous one in some important respects, so we can expect that events will proceed as they did before." For example, when policymakers in 1965 decided to deploy tens of thousands more troops in Vietnam, they had in mind historical analogies of Munich, Dien Bien Phu, and especially Korea (Khong, 1992). As Ernest May notes, "Potentially, history is an enormously rich resource for people who govern ... [but] such people draw upon this resource haphazardly or sloppily" (May, 1973).

Anchoring and Adjustment Heuristic

A judgment is made with an initial value (anchor) in mind and is adjusted according to new information, but such adjustments are often too small, so the judgment is overweighted toward the anchor (even when the anchor is arbitrary). For example, during the Civil War Battle of Chancellorsville, Union Army General Howard once received reports early in the day, including one from his superior officer, that the enemy forces opposite his position were a covering force for a retreat (Tatarka, 2002). As the day wore on, General Howard received many reports indicating that enemy forces were in fact massing for an attack. Nevertheless, having anchored on the initial reports, he failed to adapt to the new information adequately and his corps was surprised by a Confederate attack in the evening. The Union side eventually lost the battle.
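
The dynamics of General Howard's error can be modeled, at least schematically, as under-adjustment. The sketch below is our illustration rather than anything in Tatarka's account, and the prior, likelihood ratios, and adjustment fraction are invented; it contrasts a full Bayesian updater with an anchored one that moves only partway toward what each new report warrants.

```python
def bayesian_probability(prior, likelihood_ratios):
    """Full Bayesian updating in odds form: fold in each report's
    likelihood ratio (evidence for 'attack' over 'retreat')."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

def anchored_probability(prior, likelihood_ratios, adjustment=0.3):
    """Anchoring and adjustment: after each report, move only a fraction
    of the way from the current judgment toward the Bayesian target."""
    p = prior
    for lr in likelihood_ratios:
        odds = (p / (1.0 - p)) * lr     # what Bayes' rule would warrant
        target = odds / (1.0 + odds)
        p += adjustment * (target - p)  # under-adjust toward the target
    return p

reports = [3.0] * 6   # six reports, each three times likelier under "attack"
prior = 0.10          # the anchor: early reports said "covering a retreat"

print(f"Bayesian: {bayesian_probability(prior, reports):.2f}")  # 0.99
print(f"Anchored: {anchored_probability(prior, reports):.2f}")  # 0.51
```

Even after six consistent reports, the anchored judgment remains near even odds, which is the qualitative signature of the heuristic: the adjustment is real but chronically insufficient.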

(Continues...)



Excerpted from Implications of Modern Decision Science for Military Decision-Support Systems by Paul K. Davis, Jonathan Kulick, and Michael Egner. Copyright © 2005 by RAND Corporation. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
