Information Sampling and Adaptive Cognition
Available in Hardcover and Paperback

- ISBN-10: 0521539331
- ISBN-13: 9780521539333
- Pub. Date: 12/05/2005
- Publisher: Cambridge University Press

Product Details
- ISBN-13: 9780521539333
- Publisher: Cambridge University Press
- Publication date: 12/05/2005
- Edition description: New Edition
- Pages: 498
- Product dimensions: 5.98(w) x 9.02(h) x 0.98(d)
About the Author
Peter Juslin is Professor of Psychology at Uppsala University in Sweden. He received the Brunswik New Scientist Award in 1994 and, in 1996, Uppsala University's Oscar Award for young distinguished scientists. He has published a large number of scientific papers, including many articles in leading APA journals such as Psychological Review.
Read an Excerpt
Cambridge University Press
0521831598 - Information sampling and adaptive cognition - Edited by Klaus Fiedler and Peter Juslin
Excerpt
PART I
INTRODUCTION
1
Taking the Interface between Mind and Environment Seriously
Klaus Fiedler and Peter Juslin
Metaphors mold science. Research on judgment and decision making (JDM) - for which psychologists and economists have gained Nobel prizes (Kahneman, Slovic & Tversky, 1982; Simon, 1956, 1990) and elevated political and public reputations (Swets, Dawes & Monahan, 2000) - is routinely characterized as having proceeded in several waves, each carried by its own metaphor. A first wave of JDM research in the 1960s compared people's behavior to normative models, a common conclusion being that, with some significant exceptions, human judgment is well approximated by normative models (Peterson & Beach, 1967): The mind was likened to an "intuitive statistician" (a metaphor borrowed from Egon Brunswik).
A second and more influential wave of JDM research, initiated by Tversky and Kahneman, emphasized the shortcomings, biases, and errors of human judgments and decisions (Kahneman et al., 1982). Because of the mind's constrained capacity to process information, people can only achieve bounded rationality (Simon, 1990), and - as inspired by research on problem solving (Newell & Simon, 1972) - they have to rely on cognitive heuristics. In contrast to problem-solving research, however, the focus was on heuristics leading to bias and error in judgment. The metaphor of an intuitive statistician was soon replaced by the notion that the mind operates according to principles other than those prescribed by decision theory, probability theory, and statistics. In the background lurks Homo economicus, a rational optimizer with a gargantuan appetite for knowledge, time, and computation, defining the norm for rational behavior. In its shadow crouches Homo sapiens with its limited time, knowledge, and computational capacities, apparently destined for error and bias by its reliance on simple cognitive heuristics (see Gigerenzer, Todd, & the ABC Group, 1999, for a criticism and an alternative use of this metaphor).
This volume brings together research inspired by, or exemplifying, yet another developing metaphor, one that builds on and refines its predecessors: the mind as a naïve intuitive statistician. A sizeable amount of recent JDM and social psychological research, in part covered by the present volume, revives the metaphor of an intuitive statistician, but a statistician who is naïve with respect to the origins and constraints of the samples of information given. The basic idea is that the processes operating on the given input information generally provide accurate descriptions of the samples and, as such, do not violate normative principles of logic and reasoning. Erroneous judgments arise instead from the naïveté with which the mind takes the information input for granted, failing to correct for the selectivity and constraints imposed on the input, which reflect both environmental structures and strategies of sampling. A more comprehensive discussion of this conception - called a sampling approach in the remainder of this book - can be found in Fiedler (2000).
Without making any claims for the superiority of this metaphor over its predecessors, we provide in this volume an overview of research that can be aligned with this sampling approach and we invite other researchers to explore the possibilities afforded by this perspective. In the following, we first try to characterize the metaphor of a naïve intuitive statistician and discuss its potentials. Thereafter, we discuss the role of sampling - the key construct with this approach - in behavioral science. We outline a three-dimensional scheme, or taxonomy, of different sampling processes, and we locate in this scheme the contributions to the present volume.
THE NAÏVE INTUITIVE STATISTICIAN
When judgment bias (defined in one way or another) is encountered, the most common explanatory scheme in JDM and social psychological research is to take the information input as given truth and to postulate cognitive algorithms that account for the deviation of the judgment output from the input and its normatively permissible transformations. The input itself is not assigned a systematic role in the diagnosis of judgment biases. An obvious alternative approach is to assume that the cognitive algorithms may be consistent with normative principles most of the time - as evident in their successful use in many natural environments (Anderson, 1990; Oaksford & Chater, 1998) - and to place the explanatory burden at the other end: the information samples to which cognitive algorithms are applied.
One way to explicate this alternative approach is by starting from the assumption that the cognitive algorithms provide accurate descriptions of the sample data and do not violate normative principles of probability theory, statistics, and deductive logic, provided they are applied under appropriate task conditions and with sufficient sample size. Both normative and fallacious behavior may arise depending on the fit between the algorithm and the sample input. From this point of view, the sampling constraints that produce accurate judgments should often be those that have prevailed in our personal history or in the history of our species. Base-rate neglect may thus diminish when the process is fed with natural frequencies rather than single event probabilities (Gigerenzer & Hoffrage, 1995); overconfidence may diminish or disappear when general knowledge items are selected to be representative of the environment to which the knowledge refers (Gigerenzer, Hoffrage, & Kleinbölting, 1991; Juslin, 1994).
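The advantage of natural frequencies over single-event probabilities can be made concrete with a small computation. The sketch below uses purely illustrative numbers (a hypothetical disease and test, not figures from the cited studies); both formats yield the same Bayesian posterior, but the frequency format lays the computation out as simple counts:

```python
# Illustrative numbers (hypothetical, not from the cited studies):
# a disease with 1% prevalence and a test with an 80% hit rate
# and a 9.6% false-alarm rate.
prevalence, hit_rate, false_alarm = 0.01, 0.80, 0.096

# Single-event probability format: Bayes' theorem applied directly.
p_positive = prevalence * hit_rate + (1 - prevalence) * false_alarm
posterior_prob = prevalence * hit_rate / p_positive

# Natural frequency format: the same computation as counts in a
# reference population of 1,000 people.
n = 1000
sick = prevalence * n                    # 10 people have the disease
sick_pos = sick * hit_rate               # 8 of them test positive
healthy_pos = (n - sick) * false_alarm   # ~95 healthy people also test positive
posterior_freq = sick_pos / (sick_pos + healthy_pos)

# Both formats agree: a positive test implies only about an 8% chance
# of disease, because positives from the large healthy group dominate.
print(round(posterior_prob, 3), round(posterior_freq, 3))
```

The frequency version makes the base rate visible as the 990 healthy people feeding the false-positive count, which is the intuition behind the findings cited above.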
This formulation is reminiscent of the distinction between cognitive processes and cognitive representations. The proposal is that cognitive biases can often be explained not by cognitive processes that substitute heuristic for normative algorithms, but by the biased samples that provide the representational input to the process. Both environmental and cognitive constraints may affect the samples to which the cognitive processes are applied, and the environmental feedback is often itself a function of the decisions that we make (Einhorn & Hogarth, 1978). At first, the distinction between bias in process and bias in representation may seem of minor practical import; the outcome is a cognitive bias just the same. This conclusion is illusory, however; the cure for erroneous judgment depends heavily on a correct diagnosis of the reasons for the error.
Statistics tells us that there are often ways to adjust estimators and to correct for sampling biases. Under certain conditions, at least, people should thus be able to learn the appropriate corrections. Specifically, two conditions must be met: there must be an incentive to detect and correct errors in judgment, because doing so matters to the organism, and unambiguous corrective feedback must be available. The latter condition may be violated by the absence of any feedback whatsoever. More often, however, the sampling process is conditioned on unknown constraints and there exists no self-evident scheme for interpreting the observed feedback (Brehmer, 1980).
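The kind of correction statistics offers can be illustrated with a familiar case, the bias of the uncorrected sample variance. The simulation below is a sketch under assumed conditions (a standard normal population and repeated samples of size 5); dividing by n - 1 rather than n removes the systematic underestimation:

```python
import random

random.seed(0)

# Hypothetical setting: a standard normal "population" and repeated
# small samples of size 5 drawn from it.
population = [random.gauss(0, 1) for _ in range(10_000)]
mu = sum(population) / len(population)
true_var = sum((x - mu) ** 2 for x in population) / len(population)

def naive_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)        # biased: divides by n

def corrected_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # Bessel's correction

trials = 20_000
naive = corrected = 0.0
for _ in range(trials):
    s = random.sample(population, 5)
    naive += naive_var(s) / trials
    corrected += corrected_var(s) / trials

# The naive estimator systematically underestimates the variance by
# roughly the factor (n - 1) / n = 0.8; the corrected one does not.
print(round(true_var, 2), round(naive, 2), round(corrected, 2))
```

The correction is mechanical once the sampling scheme is known; the point of the passage is that outside the laboratory the scheme, and hence the right correction, is rarely self-evident.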
A fundamental source of ambiguity lies in the question of whether outcomes should be interpreted within a deterministic causal scheme or within a probabilistic one. If a deterministic scheme is assumed, any change can be traced to causal antecedents. A probabilistic scheme highlights statistical notions like regression to the mean. We might, for example, observe that a soccer player plays extremely well in a first game, where she plays forward, and then plays worse as a libero in a second game. On a causal deterministic scheme, we might conclude that the difference is explained by a causal antecedent, such as playing as forward or as libero. With a probabilistic scheme, we might conclude that because the player was extremely good in the first game, there is little chance that everything will turn out equally well in the second game, so there is inherently a large chance that performance will be poorer the second time. Moreover, if a probabilistic scheme is adopted, it matters whether the observed behavioral sample is a random sample or conditioned on some selective sampling strategy. The amount of mental computation required to assess the nature of the problem at hand is far from trivial.
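The regression effect in the soccer example can be sketched as a small simulation. The model below is hypothetical (observed performance = stable skill + independent game-to-game noise); players selected for an extreme first game perform closer to the population mean in the second, with no causal change whatsoever:

```python
import random

random.seed(1)

# Hypothetical model: observed performance = stable skill + noise,
# both standard normal, so skill explains half the variance per game.
n_players = 10_000
skill = [random.gauss(0, 1) for _ in range(n_players)]
game1 = [s + random.gauss(0, 1) for s in skill]
game2 = [s + random.gauss(0, 1) for s in skill]

# Condition on an extreme first game (top 5%), as with the forward above.
cutoff = sorted(game1)[int(0.95 * n_players)]
stars = [i for i in range(n_players) if game1[i] >= cutoff]

mean1 = sum(game1[i] for i in stars) / len(stars)
mean2 = sum(game2[i] for i in stars) / len(stars)

# Nothing causal changed between the games, yet the selected players'
# average regresses about halfway back toward the population mean of 0.
print(round(mean1, 2), round(mean2, 2))
```

A judge who conditions on the extreme first game but interprets the drop causally (forward versus libero) commits exactly the confusion described in the passage.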
The notion of a naïve intuitive statistician can indeed be regarded as an explication of the specific sense in which the mind approximates a statistician proper. Consistent with the heuristics-and-biases program, some of the failures to optimally correct sample estimates may derive from constrained and heuristic cognitive processing. Many of the phenomena that have traditionally been accounted for in terms of, say, the availability heuristic (Tversky & Kahneman, 1973) can with equal force be interpreted as the result of accurate frequency counts applied to biased samples. For example, misconceptions about the risk of various causes of death may in part derive from biases in newspaper coverage of accidents and diseases - that is, in the environmental sample impinging on the judge - rather than from biased processing of this input (Lichtenstein & Fischhoff, 1977). This recurrent working hypothesis - that cognitive algorithms per se are unbiased and that the explanatory burden lies at the input end of the process - is central to the metaphor of a naïve intuitive statistician. We propose that this metaphor can complement previous approaches to judgment and decision making in several important ways.
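A minimal simulation can illustrate the point that an unbiased counting process applied to a biased input sample yields distorted risk estimates. All counts and coverage rates below are hypothetical, chosen only to mimic the tendency of dramatic causes of death to be over-reported:

```python
import random

random.seed(2)

# Hypothetical annual death counts and media coverage rates
# (illustrative numbers only, not real statistics).
true_counts = {"heart disease": 650, "accident": 170, "homicide": 20}
coverage_rate = {"heart disease": 0.01, "accident": 0.2, "homicide": 0.9}

# The naive intuitive statistician: a perfectly accurate tally of the
# newspaper stories actually encountered.
tally = {cause: 0 for cause in true_counts}
for cause, deaths in true_counts.items():
    for _ in range(deaths):
        if random.random() < coverage_rate[cause]:  # is this death reported?
            tally[cause] += 1

total = sum(tally.values())
estimated_share = {c: tally[c] / total for c in tally}

# Homicide now looks roughly as frequent as heart disease, although the
# counting itself was unbiased; the distortion lives in the input sample.
print({c: round(p, 2) for c, p in estimated_share.items()})
```

The tallying step is exactly right given what was observed; the error enters before cognition does, in the environmental sampling.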
One virtue is that it has the inherent potential to explain both achievement and folly with the same conceptual scheme, thereby reconciling the previous two research programs. Although fad and fashion have changed over the years, it seems fair to conclude that numerous studies document both impressive judgment accuracy and serious error. In terms of the metaphor of a naïve intuitive statistician, the issue of rationality is a matter of the fit between the cognitive processes and the input samples to which they are applied. Arguably, rather than continuing to discuss the rationality of human judgment in a general and vacuous manner, it is more fruitful to investigate the sampling conditions that allow particular cognitive algorithms to perform at their best or at their worst.
In the light of the naïve intuitive statistician, the key explanatory concepts concern information sampling and the influence of different sampling schemes. In the next section, we discuss the manifold ways in which information sampling enters research in behavioral science. Thereafter, we suggest a taxonomy of different sampling processes leading to distinct cognitive phenomena. The goal of the taxonomy is to provide an organizing scheme within which the contributions to the present volume can then be located.
SAMPLING - A UNIVERSAL ASPECT OF PSYCHOLOGICAL THEORIZING
Although "sampling," a technical term borrowed from statistics, is quite narrow for a book devoted to the broad topic of adaptive cognition, it nevertheless sounds familiar and almost commonsensical, given the ubiquitous role played by statistics in everyday decisions and behavior. Sampling is actually an aspect that hardly any approach to psychology and behavioral science can miss or evade, though the role of sampling as a key theoretical term often remains implicit. Look what it takes to explain behavior in a nondeterministic, probabilistic world. Why does the same stimulus situation elicit different emotional reactions in different persons? Why does the time required to solve a problem vary with the aids or tools provided in different experimental conditions, or with personality traits of the problem solvers? What facilitates or inhibits perceptual discrimination? What helps or hinders interpersonal communication? How can we explain the preferences of decision makers and their preference reversals?
Theoretical answers to these questions share, as a common denominator, the principle of subset selection. To explain human behavior, it is crucial to understand which subset of the universe of all available stimuli and which subset of the universe of all possible responses are chosen. For instance, theories of emotion explain the conditions under which specific emotional reactions are selected from a broader repertoire of response alternatives. Problem solving entails selecting appropriate subsets of tools, talents, and strategic moves. Perceptual discrimination is a function of the stimulus features selected for the comparison of two or more percepts. Likewise, effective communication depends on whether the receiver finds in the communication the same subset of meaningful elements that the communicator intended. Decision making has to rely on selected states of the world, which trigger subjective probabilities, and on desired end states, which trigger subjective utilities. Theoretical explanation means imposing subset constraints on psychological processes, which as a rule entails sampling processes.
Sampling Constraints in the Internal and External Environments
Two metatheories of subset selection can be distinguished in behavioral science: one refers to sampling in the internal environment, the other to sampling in the external environment. Here we refrain entirely from assumptions about cognitive processes and are concerned exclusively with environmental information, which can, however, be represented internally or externally. A considerable part of the environmental information encountered in the past is no longer out there as perceptual input but is represented in memory, much as present environmental information may be represented in the sensory system. Both internally represented (memorized) and externally represented (perceived) information thus reflects environmental input, as clearly distinguished from cognitive processes.
It is interesting to note that the first metatheory, concerned with sampling in the internal environment stored in memory, has attracted much more research attention in twentieth-century psychology than has the second metatheory, which builds on information sampling in the external world. Countless theories thus assume that variation in performance reflects which subset of all potentially available information is mentally activated when an individual makes decisions in a given situation at a given point in time (Martin, 1992). Drawing on concepts such as accessibility, priming, selective knowledge, retrieval cues, domain specificity, schematic knowledge, or resource limitation, the key idea is that human memory is the site of subset selection, where constraints are imposed onto the stimulus input driving judgments and decisions. The domain of this popular metatheory extends well beyond cognitive psychology. Selective accessibility of knowledge structures or response programs also lies at the heart of the psychology of emotions, aggression, social interaction, development, and abnormal psychology.
The second metatheory, although similar in its structure and rationale, is much less prominent and continues to be conspicuously neglected in behavioral science (Gigerenzer & Fiedler, 2005). The same selectivity that holds for environments conserved in memory also applies to the perceptual environment originally encountered in the external world. The information provided by the social and physical environment can be highly selective as a function of spatial and temporal constraints, social distance, cultural restrictions, or the variable density of stimulus events. People are typically exposed to richer and more detailed information about themselves than about others, just because they are closest to themselves. Similarly, they are exposed to denser information about in-groups than about out-groups, and about their own culture than about other, less familiar cultures. Not only does environmental input vary quantitatively in terms of the density or amount of detail; it is also biased toward cultural habits, conversational norms, redundancy, and specific physical stimulus properties.
Analysis of the stimulus environment impinging on the organism, and of its constraints on information processing, has explanatory power comparable to that of the analysis of memory constraints - as anticipated in the seminal work of Brunswik (1956), Gibson (1979), and Lewin (1951). In a hardly contestable sense, the perceptual input from the external world even has priority over internal representations in memory, because the former determines the latter. A precondition for organisms to acquire (selective) world knowledge and adaptive behavior is that appropriate stimuli, task affordances, or feedback loops are provided in the first place. Education can indeed be regarded as a matter of engineering the environment to provide the information samples most conducive to learning. Academic ecologies constrain education; professional ecologies constrain the development of specialized skills and expertise; television and media ecologies constrain the development of aggression and other social behaviors; marketing ecologies place constraints on the behavior of consumers, entrepreneurs, and bargainers; and the "friendliness" versus "wickedness" of learning environments can explain the occurrence of cognitive fallacies and illusions.
A nice example, borrowed from Einhorn and Hogarth (1978), is that personnel managers' self-confident belief that they have selected the most appropriate employees will hardly ever be challenged by negative feedback, just because the applicants they have rejected are not available. Similarly, many police officers and lawyers continue to believe in the accuracy of polygraph lie detection, because validity studies selectively include those cases that confirm the polygraph (i.e., confessions after a positive test result) but exclude those cases that could disconfirm the test (Fiedler, Schmid, & Stahl, 2002; Patrick & Iacono, 1991), for in the absence of a confession, the validity criterion is unknown.
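The personnel example can be sketched as a simulation of selective feedback. The model below is hypothetical (applicant quality and rating noise as standard normal variables, an assumed "success" threshold); even a selection procedure with zero validity produces a seemingly high success rate among hires, because rejected applicants are never observed:

```python
import random

random.seed(3)

# Hypothetical model of selective feedback in personnel selection:
# quality and rating noise are standard normal; "success" means
# quality above an assumed threshold of -0.5.
def observed_success_rate(validity, n=10_000, hire_top=0.2):
    applicants = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
    # The manager's rating mixes true quality with noise.
    rated = [(validity * q + (1 - validity) * noise, q) for q, noise in applicants]
    rated.sort(reverse=True)
    hired = rated[: int(hire_top * n)]
    # Crucially, only the hired applicants' outcomes are ever observed.
    successes = sum(1 for _, q in hired if q > -0.5)
    return successes / len(hired)

# Even a rating with zero validity yields a success rate among hires of
# about 0.69 (the base rate of q > -0.5), which feels like confirmation.
print(round(observed_success_rate(0.0), 2), round(observed_success_rate(0.8), 2))
```

The manager's tally of successes is accurate; what is missing is the counterfactual sample of rejected applicants, just as the polygraph's validity sample omits the unconfessed cases.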
Although the point seems clear and uncontestable, selective environmental input has rarely found a systematic place in psychological theories, with a few notable exceptions (Chater, 1996; Oaksford & Chater, 1994; van der Helm & Leeuwenberg, 1996). One prominent exception is Eagly's (1987) conception of gender differences as a reflection of societal roles played by men and women. In most areas of research, however, environmental factors are either completely ignored or, at best, assigned the role of moderators, acting merely as boundary conditions to be counterbalanced in a well-conducted experiment. A conspicuous symptom of this state of affairs is that research methodology and peer reviewing put much weight on representative sampling of participants, but stimuli and tasks are often selected arbitrarily and one or two problems or task settings are usually considered sufficient (Wells & Windschitl, 1999). Contemporary psychology has not even developed appropriate concepts and taxonomies for types of environmental constraints. As a consequence, many intrapsychic (i.e., cognitive, motivational, emotional) accounts of phenomena are susceptible to alternative explanations in terms of environmental constraints (Fiedler & Walther, 2004; Gigerenzer & Fiedler, 2004; Juslin, Winman, & Olsson, 2000).
In spite of the uneven status attained by the two metatheories in the past, sampling processes in internal and external environments draw on the same principle of selectivity - the shared notion that behavioral variation can be explained in terms of subsets of information that are selected from a more inclusive universe. As this notion is equally applicable to internal and external sampling processes, it opens a modern perspective on adaptive cognition and behavior (cf. Gigerenzer, Todd, & the ABC Group, 1999), which takes the interface between mind and the environment seriously. However, given the traditional neglect of external ecologies in the study of internal cognitive processes (Gigerenzer & Fiedler, 2004), the present book emphasizes primarily sampling in the external world.
A TAXONOMY OF SAMPLING PROCESSES
The sampling metaphor is rich enough in implicational power to suggest an entire taxonomy, an elaborated conceptual framework, within which the various cognitive-ecological interactions described in the following chapters can be located. For the present purpose (i.e., to provide a preview of this volume), we introduce only three dimensions underlying such a sampling taxonomy. (Further aspects will be discussed in Chapter 11.) However, for the moment, the following three dimensions are required to structure the contents of the present volume. One dimension on which sampling processes can differ was already mentioned: we may distinguish samples by their origin, as either internally (cognitively) generated or externally (environmentally) provided. A second important dimension refers to conditionality: drawing samples can, to varying degrees, be conditional on a more or less constraining subset. On a third, multilevel dimension, the unit of information being sampled can vary from elementary to complex units. Within the resulting three-dimensional space, depicted in Figure 1.1, different types of sampling can be located.
FIGURE 1.1. A three-dimensional taxonomy of sampling processes.
As mentioned, other dimensions may also be revealing and theoretically fruitful, for example, passive sampling (being exposed to environmental stimuli) versus active sampling (a strategic information search in memory or external databases). Another intriguing dimension is whether samples are drawn simultaneously, in a single shot, or sequentially over time as in a continuous updating process (Hogarth & Einhorn, 1992). Last but not least, one may distinguish sampling processes going on in the judges' minds and in the researchers' minds. Note also that all three dimensions included in Figure 1.1 refer to antecedent properties of sampling processes, not to the outcome. When sampling outcomes are taken into account, we encounter further distinctions with important psychological implications, such as the size of samples (large versus small) or their representativeness (biased versus unbiased). The taxonomy also leaves the content unspecified, that is, whether the entities being sampled are persons, stimulus objects, task situations, or contextual conditions.
© Cambridge University Press