Brains/Practices/Relativism: Social Theory after Cognitive Science / Edition 1

by Stephen Turner
ISBN-10: 0226817407
ISBN-13: 9780226817408
Pub. Date: 05/01/2002
Publisher: University of Chicago Press

Paperback

$37.00
  • SHIP THIS ITEM
    Temporarily Out of Stock Online. Please check back later for updated availability.
  • PICK UP IN STORE
    Your local store may have stock of this item.


Overview

Brains/Practices/Relativism presents the first major rethinking of social theory in light of cognitive science. Stephen P. Turner focuses especially on connectionism, which views learning as a process of adaptation to input that, in turn, leads to patterns of response distinct to each individual. This means that there is no common "server" from which people download shared frameworks that enable them to cooperate or communicate. Therefore, argues Turner, "practices"—in the sense that the term is widely used in the social sciences and humanities—is a myth, and so are the "cultures" that are central to anthropological and sociological thought.

In a series of tightly argued essays, Turner traces out the implications that discarding the notion of shared frameworks has for relativism, social constructionism, normativity, and a number of other concepts. He suggests ways in which these ideas might be reformulated more productively, in part through extended critiques of the work of scholars such as Ian Hacking, Andrew Pickering, Pierre Bourdieu, Quentin Skinner, Robert Brandom, Clifford Geertz, and Edward Shils.

Product Details

ISBN-13: 9780226817408
Publisher: University of Chicago Press
Publication date: 05/01/2002
Edition description: 1
Pages: 224
Product dimensions: 6.00(w) x 9.00(h) x 0.70(d)

About the Author

Stephen P. Turner is a graduate research professor and chair of the Department of Philosophy at the University of South Florida. He is the author or coeditor of a number of books, including The Social Theory of Practices, published by the University of Chicago Press, and The Cambridge Companion to Weber.

Read an Excerpt

Brains/Practices/Relativism: Social Theory After Cognitive Science


By Stephen P. Turner

University of Chicago Press

Copyright © 2002 Stephen P. Turner
All rights reserved.

ISBN: 0226817407

One / THROWING OUT THE TACIT RULE BOOK / Learning and Practices

"Practices" talk, I have argued elsewhere, gets into trouble over the notion of "sharing" (1994). The idea that there are "shared" practices requires some sort of notion of how they come to be shared, and this notion in turn dictates how practices can be conceived. If we decide that these difficulties are insurmountable, I argued, we can dispense with the notion of sharing altogether. Practices without sharing, to use a phrase favored in the nineteenth century, are habits--individual rather than shared. Habits are simply the part of the phenomenon described by the term practices that remains when the idea of people possessing the same shared thing is eliminated. Habits, however, is a potentially misleading term, especially if habit is thought of as a generic alternative explanation rather than simply as the residue of the concept of practices once its objectionable elements have been eliminated. In what follows I will try to avoid this potential misunderstanding by restating my argument against the "social" conception of practices in somewhat different terms, without appealing to habit as a concept, and by locatingthe argument in relation to recent work in cognitive science.

Practices, for the sake of the following, is defined as those nonlinguistic conditions for an activity that are learned. By "a practice" I will mean an activity that requires its genuine participants to have learned something of this tacit sort in order to perform. What I intend to discuss are some general features of learning that constrain our conception of practices and therefore of a practice which depends on them. Ordinarily, the tacit stuff is not all there is to a practice. Most cases of a practice involve explicit communication or even explicit rules. Rules are not self-applying, so in the case where there are explicit rules, such as the law, the relevant practices are the ones enabling a person to follow the rules--for a lawyer or judge to interpret the law, for example. Sometimes there are no explicit rules, but there is explicit discussion. Painting a house, for example, can be done correctly or incorrectly, and there is a fairly elaborate vocabulary of evaluation and description of mistakes. Some kinds of "knowing how" that might be called "a practice" may have no such elaborate vocabulary of appraisal, and perhaps may have none at all. The practice of flirting, for example, before it was theorized about, presumably lacked such a vocabulary, and small children who flirt presumably have no vocabulary with which to discuss it--but it nevertheless has to be learned. My concern throughout will be with the tacit parts of a practice.

The Linguistic Analogy

Language has always exercised a regulatory role in discussions of practice. Any account of practice that fails to account for language will be defective, because linguistic practices are part and parcel of many other practices and because linguistic practices are in principle not sufficiently different from other practices to regard them as likely to have a radically different character. The usual understanding of what is involved in the case of language is this: we communicate by virtue of sharing in the possession of this highly structured whole, a language, including the nonlinguistic learned conditions for the use of the language, the practices. This notion can be put in a much more cautious way, as for example Davidson does when he speaks of "sharing a language, in whatever sense this is required for communication" (1977, 166). The "required sense" of sharing may be minimal, and may not consist of shared tacit rules. In what follows, I propose to deal with the question of what the "required sense" is, and how it can be squared with a plausible account of learning.

Davidson's remark is fairly conventional stuff in contemporary philosophy, but the argument that informs it is elusive. Is this a kind of unformulated transcendental argument, which amounts to the claim that the "sharing" of "language," in some unspecified sense of these terms, is a condition of the possibility of "communication" in some unspecified sense of this term? Or is it a kind of inference to the best explanation in which there are no real alternatives--an inference, so to speak, to the only explanation (which is perhaps not a bad definition of transcendental argument)? There are good reasons to be suspicious of arguments of this form. Yet this general picture, of some sort of shared (and presumably tacit) stuff at the basis of language, is highly appealing, and so is its extension to practices generally. The claim that there is some class of things that couldn't happen, were it not for the existence of some sort of shared practices, is a commonplace, despite--and perhaps because of--its vagueness.

SYMBOLIC AND CONNECTIONIST MODELS OF HIGHER COGNITIVE PROCESSES

As I have said, the Achilles heel of transcendental arguments is that the unique explanation to which the explananda point may not be the only explanation. In this case the argument establishes nothing. There is a close analogue to this kind of argument in cognitive science, and it has recently succumbed, at least in the view of many, to the demonstration that an alternative explanation suffices. The argument is this. People have the capacity to reason mathematically and speak grammatically. We can represent mathematical reasoning and the grammatical structure of a language explicitly, in terms of formal proofs and grammatical rules respectively. The fact that people can do in their head what can be done by formal proofs or in accordance with grammatical rules is a fact of the same kind as communication. It is the sort of fact that seems to require that the people who reason mathematically or speak grammatically possess capacities which pretty closely resemble, and operate like, those of formal proof. In short, people, in thinking mathematically or speaking grammatically, must be employing some sort of mental analogue to the rules of inference and axioms that go into mathematical proofs. The problem for the cognitive theorist is to model these capacities by identifying the tacit rules and axioms that are employed.
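
To fix intuitions, the "tacit rule book" picture can be sketched in code. The following toy Python example, whose grammar and vocabulary are invented purely for illustration and drawn from no actual linguistic theory, models a competence as the mechanical application of explicit rewriting rules to symbols.

```python
import random

random.seed(0)

# An invented toy grammar: competence modeled as mechanical application
# of explicit rewriting rules to symbols, in the spirit of the rule-book view.
RULES = {
    "S":  [["NP", "VP"]],          # a sentence is a noun phrase + verb phrase
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["child"], ["rule"]],   # terminal choices
    "V":  [["learns"], ["follows"]],
}

def expand(symbol):
    """Apply a matching rule to a symbol; terminals pass through unchanged."""
    if symbol not in RULES:
        return [symbol]
    production = random.choice(RULES[symbol])
    words = []
    for part in production:
        words.extend(expand(part))
    return words

print(" ".join(expand("S")))       # e.g., "the child follows the rule"
```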

In cognitive science, this problem leads to a specific difficulty, the "central paradox of cognition," stated by Smolensky, Legendre, and Miyata as follows:

Formal theories of logical reasoning, grammar, and other higher mental faculties compel us to think of the mind as a machine for rule-based manipulation of structured arrays of symbols. What we know of the brain compels us to think of human information processing in terms of manipulation of a large set of numbers, the activity levels of interconnected neurons. Finally, the richness of human behavior, both in everyday environments and in the controlled environments of the psychological laboratory, seems to defy rule-based description, displaying strong sensitivity to subtle statistical factors in experience as well as to structural properties of information. (Smolensky, Legendre, and Miyata 1993, 382)
In this case there is an alternative explanation or approach, namely connectionism.

Connectionism refers to the claim that the appropriate model for the computation occurring in the brain is not, as a once dominant viewpoint had it, the operation of logic machines that process symbols, but rather is the parallel distributed processing that is used on various actual computer applications (such as flight simulators) and requires very substantial computing power. The "symbolic processing" model worked as follows: the mind acquires, either by genetic preprogramming or learning, rules for processing inputs, in a manner familiar from ordinary computing, in which symbols come in well-defined forms and the computer program operates as if computational "rules" are "applied" to them mechanically to produce predictable outputs. Connectionist models work differently. The computer is given a learning algorithm, but no detailed "rules." The computer is then "trained up" by feeding it data and then giving feedback for "correct" answers. This is very much a Humean rather than a Kantian machine. Everything that is inside, except for the most basic capacity for forming "expectations," is a result of inputs, or experience. The inputs are not symbolic, but simply impulses originating from various sensory sources, which are distributed through the brain in pathways made up of "connections" that are formed statistically, by the association of impulses of one kind with impulses of another kind. These are modeled mathematically as "weightings" of the impulses, which travel from "node," or pathway link, to "node" and which modify the link by passing through it, just as a person walking in the forest makes a path, increasing the likelihood of future impulses of a similar kind being distributed in a similar way. The changes in the likelihoods are "learning." These computer methods actually work: this is a model based on actual computer achievements, in which parallel distributed processing systems learn to do such things as detect cancers by being trained entirely empirically with inputs of images and feedback for correct predictions. No theory is needed, and no rules are identified or used in this method. The processes are statistical, and the capacities and outputs of the computer depend on what has been fed to it in the form of data and feedback.
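
A minimal sketch may help fix the contrast. In the following Python/NumPy example the task (logical OR), the learning rate, and the number of training passes are all hypothetical choices; the point is only that a single unit is trained from inputs and feedback alone, with no rule stored anywhere, only statistically adjusted connection weights.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Inputs and the "correct answers" supplied as feedback; the task (logical OR)
# stands in for any trained competence.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

weights = rng.normal(scale=0.1, size=2)   # initial connection strengths
bias = 0.0
learning_rate = 0.1

for _ in range(200):
    for inputs, target in zip(X, y):
        # An impulse propagates through the weighted connections ...
        activation = 1 / (1 + np.exp(-(inputs @ weights + bias)))
        # ... and feedback nudges the weights, like footsteps wearing a path:
        # the resulting competence is nothing but the adjusted weights.
        error = target - activation
        weights += learning_rate * error * inputs
        bias += learning_rate * error

print(np.round(1 / (1 + np.exp(-(X @ weights + bias)))))  # expected: [0. 1. 1. 1.]
```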

The problem for modelers attempting to deal with human cognition is whether this approach can account for higher mental processes. The general explanatory problem is the question of "how... competence that is highly systematic, coherent, compositional, and productive" can be achieved with the specific kinds of "finite and fixed resources" that connectionism employs (Smolensky, Legendre, and Miyata 1993, 383). The highly influential paper by these authors from which these quotations are taken presents some technical results that bear on this problem. Indeed, in the opinion of most cognitive scientists and philosophical observers, these results represent a decisive turning point in the resolution of the issues. Briefly, what Smolensky, Legendre, and Miyata establish is that a connectionist account can be given of certain kinds of grammatical rules previously thought to be impossible to account for without reference to internalized formal rules. Their strategy is to show how "a fully distributed pattern of numerical activities" of a connectionist kind can be "the functional near-equivalent of a symbolic structure" (382). That is, they show how something like "rules" can be the product of "learning" through the simple mechanisms of spreading activation employed by connectionist accounts of the brain. The key idea in their analysis is that there is a kind of purposive process which occurs "when the... activation spreading process satisfies certain mathematical properties" (383), a process they call maximizing Harmony. The strategy of the paper is to make rule acquisition, by which they really mean the acquisition of functional equivalents to a rule, into a special case of connectionist learning generally. This raises an obvious possibility: that practices too may be special cases of this kind, or, alternatively, that practices may be better understood not as a special case of the same kind, but in light of the general properties of connectionist learning. In what follows, I will suggest that the latter is the most plausible conclusion.
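
A rough analogue of "maximizing Harmony" can also be sketched, assuming a Hopfield-style network rather than Smolensky, Legendre, and Miyata's actual formalism: with symmetric weights, asynchronous activation spreading never decreases the global measure H(a) = 1/2 * sum_ij w_ij a_i a_j, so the network settles into a maximally Harmonious state. The weights below are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric connection weights over six units; the values are
# illustrative only, not drawn from Smolensky, Legendre, and Miyata.
W = rng.normal(size=(6, 6))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

def harmony(a):
    """Global well-formedness of a state: H(a) = 1/2 * sum_ij w_ij a_i a_j."""
    return 0.5 * a @ W @ a

a = rng.choice([-1.0, 1.0], size=6)        # an arbitrary starting state
print("initial harmony:", harmony(a))

# Spreading activation: each unit flips toward its locally "happier" state;
# with symmetric, zero-diagonal weights this update never decreases H.
for _ in range(20):
    for i in range(len(a)):
        a[i] = 1.0 if W[i] @ a > 0 else -1.0

print("settled harmony:", harmony(a))      # at least as high as the initial value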

I take it that there are two main implications of connectionism generally for the study of social practices, one flowing from the other. The first is that because learning starts from a system state in which some learning has already occurred, we would expect that learning is always a product of a transition between a state A and a state B, such that a transition in another machine to a state Bn which is functionally equivalent to B will be a transition from a different starting point to a different end point. In short, the mechanisms in question, even if they are governed by the same basic simple mechanisms, are individuated by the history of the mechanism. And, in general, the simpler the mechanism and the longer the chains of links between simple mechanisms, the greater the diversity produced by differences in, so to speak, the training history. Like paths from one point in space to another, the connections in a net that produce the "same" competency may be different in structure.
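
The point can be illustrated with a small, admittedly artificial experiment: two networks given different random starting states (different "histories") and trained to the same competence, here an invented XOR task, typically end up with different internal weights. The architecture and parameters below are made up for the sketch.

```python
import numpy as np

def train(seed, epochs=10000, lr=1.0):
    """Train a tiny one-hidden-layer net on XOR from a seed-dependent start."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)     # the "starting state"
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
    sig = lambda z: 1 / (1 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                  # spreading activation
        out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)   # feedback from "correct" answers
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    return W1, np.round(out.ravel())

W1_a, out_a = train(seed=1)
W1_b, out_b = train(seed=2)
print(out_a, out_b)              # typically the same competence: [0. 1. 1. 0.]
print(np.allclose(W1_a, W1_b))   # False: different internal structure
```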

The implication of this that bears on the theory of social practices or the idea of shared practices is that two individuals with an ability to perform the general kind of task may go about it in ways that are quite different on the level of neurocognitive description. Put more simply, if we throw out the idea that there is a rule book that people tacitly master in order to, say, communicate, we also throw out the idea that there is some single thing that people must all have in order to communicate. The approach taken by Smolensky, Legendre, and Miyata modifies this implication, for it suggests that something very much like "the same" rules of grammar may result from the fact that there is, in effect, a common end point to the process of mastering a grammar, namely maximal Harmony. But the approach also raises the question of when and where this notion of functional equivalence is applicable or relevant.

It suggests the following answer to the question: when information is plentiful and structured in such a way that "optimizing Harmony" or some other quasi-purposive system goal can lead to the same rule-like results. The general point supported by connectionism is the idea that the simpler the mechanisms that are the building blocks, the longer or more complex the total structure producing the result will be. Unlike rule books, these various complex individual mental structures are built up over time on the basis of different learning events. Ordinarily, there will be a significant diversifying effect: the individual facts of the history of the acquisition of many cognitive skills will differ in such a way that the results differ. The question is where the result is rule-like and the rule-like structures are shared, and where it is either not rule-like or not shared. I note, incidentally, that Smolensky, Legendre, and Miyata say nothing about the question of whether more than one individually generated reduction of complexity can be functionally equivalent to a grammatical rule. However, there seems to be no reason why more than one result might not be optimally Harmonious. Moreover, actual speakers do vary, so there is no reason to think that there is even one set of rule-equivalents that the process of harmonizing would necessarily tend toward in any given language.

We cannot, of course, answer questions about the existence of shared functional rule-equivalents directly. But some light can be shed on them by considering the ways in which "rules" are learned. Consider the child's acquisition of the ability to perform simple arithmetical tasks. What is it to "be able" to add 2 + 2? Is it merely to parrot the correct answer? Presumably not. Indeed, there may be no "criteria" in a Wittgensteinian sense for the possession of this competence. Obviously, a child does not master arithmetic immediately or all at once in the sense that the capacity is turned on like a switch. Other things must be mastered first, such as counting, and these are often things that (it is quite unproblematical to suggest) are mastered in different ways. Some children may count on their fingers. Others may learn through singing the numbers, and others may master a great deal of material by rote without knitting it together, and only later make connections between the numbers of a mathematical kind. In short, students come to the learning of 2 + 2 = 4 from different starting points. They are then put through a series of experiences; naturally, each student's experience is slightly different, as is each classroom's experience.

The differences, however, are not supposed to make a difference in the performance of the capacity. There are right answers, and the point of the various experiences with students having various prior experiences is that the experiences taken together transform the child cognitively in such a way that the child is able to perform the cognitive task correctly. Almost everybody manages to do this. At the period prior to mastery, the cognitive architecture that supports the child's efforts will be, according to the picture I have given here, different. Different children will have different experiences on the way to mastery, and the cognitive architecture will be a product of the path and the experiences along this path that the child takes from its starting point to the goal of mastering the cognitive task. The purposes of children will vary as well. There may be a complex heterogeneity with respect to the goals. Some children may wish to avoid the embarrassment of being brought before the blackboard and humiliated for making mistakes. Other children may have a more positive experience of mastery and pride in achievement. These differences, like differences in the history of learning, do not have any effect on the competence itself.

The question is, why? On the account of mastery that fits best with the tacit rule book model, the reason for this is essentially as follows: mastery is no more and no less than "getting" the basic rules of arithmetic, which are the same for everyone. The student tries this and that, gets corrected, gets told that the answer is correct, and does all of this without understanding. But at some point something clicks, and the student "has" the rule. The history of acquisition is irrelevant because the important moment is the moment of clicking on to the rule, of getting the rule. This model of learning, what I will call the snap-on model, makes the history irrelevant. There is a radical difference of kind between the period before acquiring the rule and the period after, and the learning events of the first period have no effects in the second.

This is obviously an appealing story. It fits well with a certain view of Wittgenstein, and indeed may--though I doubt it--represent the most plausible explication of his notion of rule following. I do not wish to take the issue up here directly, but I will note that in the history of the reception of the Philosophical Investigations there was a period in which something like the account I have given here was purveyed by Wittgenstein's students and interpreters. Nevertheless, I think it misleads us about practices generally, misleads us into looking for "criteria" or "agreements" where there is nothing of the sort to be found. The main reason for this is that mastery--however one wishes to think of it--is in most cases not the same thing for different people under different circumstances. It is purpose relative, and the purposes of individuals involved in the activity vary. It is also situation or experience relative, in the sense that it depends on the materials to which the rules are applied.

Differences in purpose lead to differences in experience, and this means differences in the information that is fed into the system. Diversity is the normal result, but diversity is nevertheless consistent with a great many kinds of cooperation, and indeed, I think, with communication. In what follows I will consider some examples, and suggest that most of those we call practices are more plausibly thought of as the common activities of people with diverse learnings than as activities made possible by the sharing of the same rule-like structures. Obviously, there is no room here for knockdown arguments. Indeed, the complexity of the processes involved assures, I think, that they will always be opaque to analysis. But something may usefully be suggested about the probable effects of differences in purpose on the cognitive side of practices.



Continues...

Excerpted from Brains/Practices/Relativism: Social Theory After Cognitive Science by Stephen P. Turner Copyright © 2002 by Stephen P. Turner. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents

Acknowledgments
Introduction: Social Theory After Cognitive Science
1. Throwing Out the Tacit Rule Book: Learning and Practices
2. Searle's Social Reality
3. Imitation or the Internalization of Norms: Is Twentieth-Century Social Theory Based on the Wrong Choice?
4. Relativism as Explanation
5. The Limits of Social Constructionism
6. Making Normative Soup Out of Nonnormative Bones
7. Teaching Subtlety of Thought: The Lessons of "Contextualism"
8. Practice in Real Time
9. The Significance of Shils
References
Index