Every society features its ideal human being. The ancient Greeks valued the person who displayed physical agility, rational judgment, and virtuous behavior. The Romans highlighted manly courage, and followers of Islam prized the holy soldier. Under the influence of Confucius, Chinese populations traditionally valued the person who was skilled in poetry, music, calligraphy, archery, and drawing. Among the Keres tribe of the Pueblo Indians today, the person who cares for others is held in high regard.
Over the past few centuries, particularly in Western societies, a certain ideal has become pervasive: that of the intelligent person. The exact dimensions of that ideal evolve over time and setting. In traditional schools, the intelligent person could master classical languages and mathematics, particularly geometry. In a business setting, the intelligent person could anticipate commercial opportunities, take measured risks, build up an organization, and keep the books balanced and the stockholders satisfied. At the beginning of the twentieth century, the intelligent person was one who could be dispatched to the far corners of an empire and who could then execute orders competently. Such notions remain important to many people.
As the turn of this millennium approaches, however, a premium has been placed on two new intellectual virtuosos: the "symbol analyst" and the "master of change." A symbol analyst can sit for hours in front of a string of numbers and words, usually displayed on a computer screen, and readily discern meaning in this thicket of symbols. This person can then make reliable, useful projections. A master of change readily acquires new information, solves problems, forms "weak ties" with mobile and highly dispersed people, and adjusts easily to changing circumstances.
Those charged with guiding a society have always been on the lookout for intelligent young people. Two thousand years ago, Chinese imperial officials administered challenging examinations to identify those who could join and direct the bureaucracy. In the Middle Ages, church leaders searched for students who displayed a combination of studiousness, shrewdness, and devotion. In the late nineteenth century, Francis Galton, one of the founders of modern psychological measurement, thought that intelligence ran in families, and so he looked for intelligence in the offspring of those who occupied leading positions in British society.
Galton did not stop with hereditary lineages, however. He also believed that intelligence could be measured more directly. Beginning around 1870, he began to devise more formal tests of intelligence, ones consistent with the emerging view of the human mind as subject to measurement and experimentation. Galton thought that more intelligent persons would exhibit greater sensory acuity, and so the first formal measures of intelligence probed the ways in which individuals distinguished among sounds of different loudness, lights of different brightness, and objects of different weight. As it turned out, Galton (who thought himself very intelligent) bet on indices of intelligence that proved unrevealing for his purposes. But in his wager on the possibility of measuring intelligence, he was proved correct.
Since Galton's time, countless people have avidly pursued the best ways of defining, measuring, and nurturing intelligence. Intelligence tests represent but the tip of the cognitive iceberg. In the United States, tests such as the Scholastic Assessment Test, the Miller Analogies Test, and the various primary, secondary, graduate, and professional examinations are all based on technology originally developed to test intelligence. Even assessments that are deliberately focused on measuring achievement (as opposed to "aptitude" or "potential for achievement") often strongly resemble traditional tests of intelligence. Similar testing trends have occurred in many other nations as well. It is likely that efforts to measure intelligence will continue and, indeed, become more widespread in the future. Certainly, the prospect of devising robust measures of a highly valued human trait is attractive, particularly to those faced with decisions about educational placement or employment. And the press to determine who is intelligent, and to do so at the earliest possible age, is hardly going to disappear.
Despite the strong possibility that intelligence testing will remain with us indefinitely, this book is based on a different premise, namely, that intelligence is too important to be left to the intelligence testers. Just in the past half century, our understanding of the human mind and the human brain has been fundamentally altered. For example, we now understand that the human mind, reflecting the structure of the brain, is composed of many separate modules or faculties. At the same time, in the light of scientific and technological changes, the needs and desires of cultures all over the world have undergone equally dramatic shifts. We are faced with a stark choice: either to continue with the traditional views of intelligence and how it should be measured or to come up with a different, and better, way of conceptualizing the human intellect. In this book, I adopt the latter tack. I present evidence that human beings possess a range of capacities and potentials—multiple intelligences—that, both individually and in concert, can be put to many productive uses. Individuals can not only come to understand their multiple intelligences but also deploy them in maximally flexible and productive ways within the human roles that various societies have created. Multiple intelligences can be mobilized at school, at home, at work, or on the street—that is, throughout the various institutions of a society.
But the task for the new millennium is not merely to hone our various intelligences and use them properly. We must figure out how intelligence and morality can work together to create a world in which a great variety of people will want to live. After all, a society led by "smart" people still might blow up itself or the rest of the world. Intelligence is valuable but, as Ralph Waldo Emerson famously remarked, "Character is more important than intellect." That insight applies at both the individual and the societal levels.
ORGANIZATION OF THE BOOK
In Chapter 2, I describe the traditional scientific view of intelligence. I introduce my own view—the theory of multiple intelligences—in Chapter 3. While this theory was developed nearly two decades ago, it has not remained static. Thus, in Chapters 4 and 5, I consider several new candidate intelligences, including naturalist, spiritual, existential, and moral ones. In Chapter 6, I address some of the questions and criticisms that have arisen about the theory and I dispel some of the more prominent myths. I treat other controversial issues in Chapter 7. And I explore in Chapter 8 the relationships among intelligence, creativity, and leadership.
The next three chapters focus on ways in which the theory of multiple intelligences can be applied. Chapters 9 and 10 are devoted to a discussion of the theory in scholastic settings, and in Chapter 11 I discuss its applications in the wider world. Finally, returning to the issues raised in Chapter 1, in Chapter 12 I explore my answer to the provocative question "Who owns intelligence?"
Since my presentation of the theory almost twenty years ago, an enormous secondary literature has developed around it. And many individuals have propagated the theory in various ways. In the appendices, I present an up-to-date listing of my own writings on the theory, writings by other scholars who have devoted books or major articles to the theory, selected miscellaneous materials, and key individuals in the United States and abroad who have contributed to the development of the theory or related practices. I provided a similar, but much smaller, listing of resources in Multiple Intelligences: The Theory in Practice, completed in 1992. I am humbled by the continued and growing interest in the theory, and proud that it has touched so many people all over the world.
A TALE OF TWO BOOKS
In the fall of 1994, an unusual event occurred in the book-publishing industry. An eight-hundred-page book, written by two scholars and including two hundred pages of statistical appendices, was issued by a general trade publisher. The manuscript had been kept under embargo and therefore had not been seen by potential reviewers. Despite (or perhaps because of) this secrecy, The Bell Curve, by Richard J. Herrnstein and Charles Murray, received front-page coverage in the weekly news magazines and became a major topic of discussion in the media and around dinner tables. Indeed, one would have had to go back half a century to a landmark treatise on black-white relations, Gunnar Myrdal's An American Dilemma, to find a social science book that engendered a comparable buzz.
Even in retrospect, it is difficult to know fully what contributed to the notoriety surrounding The Bell Curve. None of the book's major arguments were new to the educated public. Herrnstein, a Harvard psychology professor, and Murray, an American Enterprise Institute political scientist, argued that intelligence is best thought of as a single property distributed within the general population along a bell-shaped curve. That is, comparatively few people have very high intelligence (say, IQ over 130), comparatively few have very low intelligence (IQ under 70), and most people are clumped together somewhere in between (IQ from 85 to 115). Moreover, the authors adduced evidence that intelligence is to a significant extent inherited—that is, within a defined population, the variation in measured intelligence is due primarily to the genetic contributions of one's biological parents.
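The population shares implied by Herrnstein and Murray's bell-shaped distribution follow directly from the convention that modern IQ scales are normed to a mean of 100 and a standard deviation of 15. The short calculation below, a sketch using only the standard library's error function, reproduces the bands mentioned above (the band cutoffs of 70, 85, 115, and 130 are taken from the text):

```python
import math

def normal_cdf(x, mean=100.0, sd=15.0):
    """Cumulative probability of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

# Share of the population in each band, assuming IQ ~ Normal(100, 15)
above_130 = 1.0 - normal_cdf(130)             # "very high": about 2.3%
below_70 = normal_cdf(70)                     # "very low": about 2.3%
middle = normal_cdf(115) - normal_cdf(85)     # within one SD: about 68%

print(f"IQ > 130: {above_130:.1%}")
print(f"IQ < 70:  {below_70:.1%}")
print(f"85-115:   {middle:.1%}")
```

Roughly two-thirds of test takers fall within one standard deviation of the mean, which is what the authors mean by "most people are clumped together somewhere in between."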
These claims were fairly well known and hardly startling. But Herrnstein and Murray went further. They moved well beyond a discussion of measuring intelligence to claim that many of our current social ills are due to the behaviors and capacities of people with relatively low intelligence. The authors made considerable use of the National Longitudinal Survey of Youth, a rich data set of over 12,000 youths who have been followed since 1979. The population was selected in such a way as to include adequate representation from various social, ethnic, and racial groups; members of the group took a set of cognitive and aptitude measures under well-controlled conditions. On the basis of these data, the authors presented evidence that those with low intelligence are more likely to be on welfare, to be involved in crime, to come from broken homes, to drop out of school, and to exhibit other forms of social pathology. And while they did not take an explicit stand on the well-known data showing higher IQs among whites than among blacks, they left the clear impression that these differences were difficult to change and, therefore, probably were a product of genetic factors.
I have labeled the form of argument in The Bell Curve "rhetorical brinkmanship." Instead of stating unpalatable conclusions outright, the authors lead readers to a point where they are likely to draw those conclusions on their own. And so, while Herrnstein and Murray claimed to remain "resolutely neutral" on the sources of black-white differences in intelligence, the evidence they presented strongly suggests a genetic basis for the disparity. Similarly, while they did not recommend eugenic practices, they repeatedly used the following form of reasoning: Social pathology is due to low intelligence, and intelligence cannot be significantly changed through societal interventions. The reader is drawn, almost ineluctably, to conclude that "we" (the intelligent reader, of course) must find a way to reduce the number of "unintelligent" people.
The reviews of The Bell Curve were primarily negative, with the major exception of those in politically conservative publications. Scholars were extremely critical, particularly in regard to the alleged links between low intelligence and social pathology. Not surprisingly, the authors' conclusions about intelligence have been endorsed by many psychologists who specialize in measurement and on whose work much of the book was built.
Why the fuss over a book that offered few new ideas and dubious scholarship? I would not minimize the skill of the publisher, who kept the book under wraps from scholars while making sure that it got into the hands of people who would promote it or write at length about it. The application of seemingly scientific objectivity to racial issues on which many people hold private views may also have contributed to the book's success. But my own, admittedly more cynical, view is that a demand arises every twenty-five years or so for a restatement of the "nature," or hereditary explanation, of intelligence. Supporting this view is the fact that the Harvard Educational Review in 1969 published a controversial article titled "How Much Can We Boost IQ and Scholastic Achievement?" The author, the psychologist Arthur Jensen, harshly criticized the effectiveness of early childhood intervention programs like Head Start. He said that such programs did not genuinely aid disadvantaged children and suggested that perhaps black children needed to be taught in a different way.
Just one year after the appearance of The Bell Curve, another book was published to even greater acclaim. In most respects, Emotional Intelligence, by the New York Times reporter and psychologist Daniel Goleman, could not have been more different from The Bell Curve. Issued by a mass-market trade publisher, Goleman's short book was filled with anecdotes and presented only a few scattered statistics. Moreover, in sharp contrast to The Bell Curve, Emotional Intelligence contained a dim view of the entire psychometric tradition, as indicated by its subtitle: Why It Can Matter More Than IQ.
In Emotional Intelligence, Goleman argued that our world has largely ignored a tremendously significant set of skills and abilities—those dealing with people and emotions. In particular, Goleman wrote about the importance of recognizing one's own emotional life, regulating one's own feelings, understanding others' emotions, being able to work with others, and having empathy for others. He described ways of enhancing these capacities, particularly among children. More generally, he argued that the world could be more hospitable if we cultivated emotional intelligence as diligently as we now promote cognitive intelligence. Emotional Intelligence may well be the best-selling social science book ever published. By 1998, it had sold over 3 million copies worldwide, and in countries as diverse as Brazil and Taiwan it had remained on the best-seller list for unprecedented lengths of time. On the surface, it is easy to see why Emotional Intelligence is so appealing to readers. Its message is hopeful, and the author tells readers how to enhance their own emotional intelligence and that of others close to them. And—this is meant without disrespect—the message of the book is contained in its title and its subtitle.
I often wonder whether the readers of The Bell Curve have also read Emotional Intelligence. Can one be a fan of both books? There are probably gender and disciplinary differences in the audiences: To put it sharply, if not stereotypically, business people and tough-minded social scientists are probably more likely to gravitate toward The Bell Curve, while teachers, social workers, and parents are probably more likely to embrace Emotional Intelligence. (However, a successor volume, Goleman's Working with Emotional Intelligence, sought to attract the former audiences, too.) But I suspect that there is also some overlap. Clearly, educators, business people, parents, and many others realize that the concept of intelligence is important and that conceptualizations of it are changing more rapidly than ever before.
A BRIEF HISTORY OF PSYCHOMETRICS
By 1860 Charles Darwin had established the scientific case for the origin and evolution of all species. Darwin had also become curious about the origin and development of psychological traits, including intellectual and emotional ones. It did not take long before a wide range of scholars began to ponder the intellectual differences across the species, as well as within specific groups, such as infants, children, adults, or the "feeble-minded" and "eminent geniuses." Much of this pondering occurred in the armchair; it was far easier to speculate about differences in intellectual power among dogs, chimpanzees, and people of different cultures than to gather comparative data relevant to these putative differences. It is perhaps not a coincidence that Darwin's cousin, the polymath Francis Galton, was the first to establish an anthropometric laboratory for the purpose of assembling empirical evidence of people's intellectual differences.
Still, the honor of having fashioned the first intelligence test is usually awarded to Alfred Binet, a French psychologist particularly interested in children and education. In the early 1900s, families were flocking into Paris from the provinces and from far-flung French territories, and some of the children from these families were having great difficulty with schoolwork. Binet and his colleague Théodore Simon were approached by the French Ministry of Education to help predict which children were at risk for school failure. Proceeding in a completely empirical fashion, Binet administered hundreds of test questions to these children. He wanted to identify a set of questions that were discriminating, that is, when passed, such items predicted success in school and when failed, the same items predicted difficulty in school.
Like Galton, Binet began with largely sensory-based items but soon discovered the superior predictive power of other, more "scholastic" questions. From Binet's time on, intelligence tests have been heavily weighted toward measuring verbal memory, verbal reasoning, numerical reasoning, appreciation of logical sequences, and ability to state how one would solve problems of daily living. Without fully realizing it, Binet had invented the first tests of intelligence.
A few years later, in 1912, the German psychologist Wilhelm Stern came up with the name and measure of the "intelligence quotient," or the ratio of one's mental age to one's chronological age, with the ratio to be multiplied by 100 (which is why it is better to have an IQ of 130 than one of 70).
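Stern's ratio is simple enough to express as a one-line function; the ages below are illustrative examples, not data from any particular study:

```python
def ratio_iq(mental_age, chronological_age):
    """Stern's 1912 intelligence quotient: mental age divided by
    chronological age, multiplied by 100."""
    return 100.0 * mental_age / chronological_age

# A ten-year-old reasoning at a thirteen-year-old's level scores above average;
# one reasoning at a seven-year-old's level scores below it.
print(ratio_iq(13, 10))  # 130.0
print(ratio_iq(7, 10))   # 70.0
```

A child whose mental age matches his or her chronological age thus scores exactly 100, which is why 100 anchors the scale.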
Like many Parisian fashions of the day, the IQ test made its way across the Atlantic—with a vengeance—and became Americanized during the 1920s and 1930s. Whereas Binet's test had been administered one on one, American psychometricians—led by Stanford University psychologist Lewis Terman and the Harvard professor and army major Robert Yerkes—prepared paper-and-pencil (and, later, machine-scorable) versions that could be administered easily to many individuals. Since specific instructions were written out and norms were created, test takers could be examined under uniform conditions and their scores could be compared. Certain populations elicited special interest; much was written about the IQs of mentally deficient people, of putative young geniuses, of U.S. Army recruits, of members of different racial and ethnic groups, and of immigrants from northern, central, and southern Europe. By the mid-1920s, the intelligence test had become a fixture in educational practice in the United States and throughout much of western Europe.
Early intelligence tests were not without their critics. Many enduring concerns were first raised by the influential American journalist Walter Lippmann. In a series of debates with Lewis Terman, published in the New Republic, Lippmann criticized the test items' superficiality and possible cultural biases, and he noted the risks associated with assessing an individual's intellectual potential via a single, brief oral or paper-and-pencil method. IQ tests were also the subject of countless jokes and cartoons. Still, by sticking to their tests and their tables of norms, the psychometricians were able to defend their instruments, even as they made their way back and forth among the halls of academe; their testing cubicles in schools, hospitals, and employment agencies; and the vaults in their banks.
Surprisingly, the conceptualization of intelligence did not advance much in the decades following the pioneering contributions of Binet, Terman, Yerkes, and their American and western European colleagues. Intelligence testing came to be seen, rightly or wrongly, as a technology useful primarily in selecting people to fill academic or vocational niches. In one of the most famous—and also most cloying—quips about intelligence testing, the influential Harvard psychologist E. G. Boring declared, "Intelligence is what the tests test." So long as these tests continued to do what they were supposed to do—that is, yield reasonable predictions about people's success in school—it did not seem necessary or prudent to probe too deeply into their meanings or to explore alternative views of what intelligence is or how it might be assessed.
THREE KEY QUESTIONS ABOUT INTELLIGENCE
Over the decades, scholars and students of intelligence have continued to argue about three questions. The first: Is intelligence singular, or are there various, relatively independent intellectual faculties? Purists—from Charles Spearman, an English psychologist who conducted research in the early 1900s, to his latter-day disciples Herrnstein and Murray—have defended the notion of a single, supervening "general intelligence." Pluralists—from the University of Chicago's L. L. Thurstone, who in the 1930s posited seven "vectors of the mind," to the University of Southern California's J. P. Guilford, who discerned up to one hundred and fifty "factors of the intellect"—have construed intelligence to be composed of many dissociable components. In his much cited The Mismeasure of Man, the paleontologist Stephen Jay Gould argued that the conflicting conclusions reached on this issue simply reflect alternative assumptions about a particular statistical procedure ("factor analysis") rather than about "the way the mind really is." More specifically, depending upon the assumptions made, the procedure called "factor analysis" can yield different conclusions about the extent to which different test items do (or do not) correlate with one another. In the ongoing debate among psychologists about this issue, the psychometric majority favors a general intelligence perspective.
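Gould's point about factor analysis can be made concrete with a toy calculation. When every subtest correlates positively with every other (the correlations below are invented for illustration), the leading eigenvector of the correlation matrix loads positively on all of them, and that single component can be read, if one so chooses, as a "general factor." The sketch below extracts it by power iteration, using no external libraries:

```python
# Hypothetical correlations among four subtests (invented for illustration).
corr = [
    [1.0, 0.6, 0.5, 0.4],
    [0.6, 1.0, 0.5, 0.4],
    [0.5, 0.5, 1.0, 0.3],
    [0.4, 0.4, 0.3, 1.0],
]

def first_component(matrix, iters=200):
    """Leading eigenvector of a symmetric matrix, by power iteration."""
    v = [1.0] * len(matrix)
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in matrix]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

loadings = first_component(corr)
# Every loading comes out positive: each subtest "shares" in the first factor.
print([round(x, 2) for x in loadings])
```

Whether that single component reflects a real mental property or is merely an artifact of how the axes were chosen is precisely the interpretive question Gould raised; rotating the factors differently distributes the same correlations across several components instead.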
The general public, however, tends to focus on a second, even more contentious question: Is intelligence (or are intelligences) predominantly inherited? Actually, this is by and large a Eurocentric question. In the Confucius-influenced societies of East Asia, it is widely assumed that individual differences in intellectual endowment are modest and that personal effort largely accounts for achievement level. Interestingly, Darwin was sympathetic to this viewpoint. He wrote to his cousin Galton, "I have always maintained that, excepting for fools, men did not differ much in intelligence, only in zeal and hard work." In the West, however, there is more support for the view—first defended vocally by Galton and Terman, and echoed recently by Herrnstein and Murray—that intelligence is inborn and that a person can do little to alter his or her quantitative intellectual birthright.
Studies of identical twins reared apart provide surprisingly strong support for the "heritability" of psychometric intelligence (the intelligence tapped in standard measures like an IQ test). That is, if one wants to predict someone's score on an intelligence test, it is on the average more relevant to know the identity of the biological parents (even if the individual has had no contact with them) than the identity of the adoptive parents. By the same token, the IQs of identical twins are more similar than the IQs of fraternal twins. And contrary to both common sense and political correctness, IQs of biologically related individuals actually grow more similar, rather than more different, after adolescence. (This trend could be a by-product of general healthiness, which aids performance on any mental or physical measure, rather than a direct result of native intellect reasserting itself.)
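A standard back-of-the-envelope estimate behind such twin claims is Falconer's formula: heritability is roughly twice the amount by which identical twins' IQ correlation exceeds fraternal twins'. The sketch below uses illustrative correlations, not figures from any particular study:

```python
def falconer_h2(r_identical, r_fraternal):
    """Falconer's heritability estimate: twice the excess similarity of
    identical (monozygotic) over fraternal (dizygotic) twins."""
    return 2.0 * (r_identical - r_fraternal)

# Illustrative correlations only: if identical twins' IQs correlate at .85
# and fraternal twins' at .60, the formula attributes half the variance
# in measured IQ to genetic factors.
print(falconer_h2(0.85, 0.60))  # 0.5
```

The formula's simplicity is also its weakness: it assumes, among other things, that identical and fraternal twins experience equally similar environments, which is one of the objections listed below.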
While the statistics point to significant heritability of IQs, many scholars still object to the suggestion that biological lineage largely determines intelligence. They argue, among other things:
The science of behavioral genetics was developed to work with animals other than humans. In any event, it is a new science that is changing rapidly.
Since researchers cannot conduct genuine experiments with human beings (such as randomly assigning identical and fraternal twins to different homes), behavioral genetic conclusions involve unwarranted extrapolations from necessarily messy data.
Only people from certain environments—chiefly middle-class Americans—have been studied, so we cannot know about the "elasticity" of human potential across more diverse environments.
Because they look alike, identical twins are more likely to elicit similar responses from others in their environment.
Generally, identical twins reared apart were placed in backgrounds similar to those of their biological parents, in terms of race, ethnicity, social class, and so forth.
Identical twins reared apart did share one environment from conception to birth.
Even without such findings for support, many of the general public as well as scholars simply feel uncomfortable with the view that culture and child rearing are impotent when stacked against the powers of the gene. They point to the enormous differences between individuals raised in different cultural settings (or even different cultures within one country), and they cite the often impressive results of their own and others' efforts to rear children who exhibit certain traits and values. Of course, the resulting differences among children are not necessarily an argument against genetic factors. After all, different racial and ethnic groups may differ in respect to their genetic makeup, on intellectual as well as physical dimensions. And children with different genetic makeups may elicit different responses from their parents.
Most scholars agree that even if psychometric intelligence is largely inherited, it is not possible to pinpoint the reasons for differences in average IQ between groups. For instance, the fifteen-point difference typically observed in the United States between African-American and white populations cannot be readily explained, because it is not possible in our society to equate the contemporary (let alone the historical) experiences of these two groups. The conundrum: One could only ferret out genetic differences in intellect (if any) between black and white populations in a society that was literally color-blind.
A third question has intrigued observers: Are intelligence tests biased? In early intelligence tests, the cultural assumptions built into certain items are glaring. After all, who except the wealthy could draw on personal experiences to answer questions about polo or fine wines? And if a test question asks respondents whether they would turn over money found in the street to the police, might responses not differ for middle-class respondents and destitute ones? Would the responses not be shaped by the knowledge that the police force is known to be hostile to members of one's own ethnic or racial group? However, test scorers cannot consider such issues or nuances, and therefore score only orthodox responses as correct. Since these issues resurfaced in the 1960s, psychometricians have striven to remove the obviously biased items from intelligence measures.
It is far more difficult, though, to deal with biases built into the test situation. For example, personal background certainly figures into someone's reactions to being placed in an unfamiliar surrounding, instructed by an interrogator who is dressed in a certain way and speaks with a certain accent, and given a printed test booklet to fill out or a computer-based test to click. And as the Stanford psychologist Claude Steele has shown, the biases prove even more acute in cases when the test takers belong to a racial or ethnic group widely considered to be less smart than the dominant group (who are more likely to be the creators, administrators, and scorers of the test), and when these test takers know their intellect is being measured.
Talk of bias touches on the frequently held assumption that tests in general, and intelligence tests in particular, are inherently conservative instruments—tools of the establishment. Interestingly, some test pioneers thought of themselves as social progressives who were devising instruments that could reveal people of talent, even if they came from "remote and apparently inferior institutions" (to quote wording used in a catalogue for admission to Harvard College in the early 1960s). And occasionally, the tests did reveal intellectual diamonds in the rough. More often, however, the tests indicated the promise of people from privileged backgrounds (as evidenced, for instance, in the correlation between wealthy areas' ZIP codes and high IQ scores). Despite the claims of Herrnstein and Murray, the nature of the causal relation between IQ and social privilege has not been settled; indeed, it continues to stimulate many dissertations in the social sciences.
Paradoxically, the extensive use of IQ scores has led to the tests not being widely administered anymore. There has been much legal wrangling about the propriety of making consequential decisions about education (or, indeed, life chances) on the basis of IQ scores; as a result, many public school officials have become test shy. (Independent schools are not under the same constraints and have remained friendly to IQ-style measurements—the larger the applicant pool, the friendlier the admissions office!) By and large, IQ testing in the schools is now restricted to cases in which there is a recognized problem (such as a suspected learning disability) or a selection procedure (such as determining eligibility for an enrichment program that serves gifted children). Nevertheless, intelligence testing—and, perhaps more importantly, the line of thinking that gives rise to it—have actually won the war. Many widely used scholastic measures are thinly disguised intelligence tests—almost clones thereof—that correlate highly with scores on standard psychometric instruments. Virtually no one raised in the developed world today has gone untouched by Binet's deceptively simple invention of a century ago.
ATTACKS ON THE INTELLIGENCE ESTABLISHMENT
Although securely ensconced in many corners of society, the concept of intelligence has in recent years undergone its most robust challenges since the days of Walter Lippmann and the New Republic crowd. People informed by psychology but not bound by psychometricians' assumptions have invaded this formerly sacrosanct territory. They have put forth their own conceptions about what intelligence is, how (and even whether) it should be measured, and which values should be invoked in shaping the human intellect. For the first time in many years, the intelligence establishment is clearly on the defensive, and it seems likely that the twenty-first century will usher in fresh ways of thinking about intelligence.
The history of science is a tricky business, and particularly so when one sits in the midst of it. The rethinking of intelligence has been affected especially by the perspectives of scholars who are not psychologists. For instance, anthropologists, who spend their lives immersed in cultures different from their own, have called attention to the parochialism of the Western view of intelligence. Some cultures do not even have a concept called intelligence, and others define intelligence in terms of traits that Westerners might consider odd—obedience or good listening skills or moral fiber, for example. These scholars also have pointed out the strong and typically unexamined assumption built into testing instruments: that performance on a set of unrelated items, mostly drawn from the world of schooling, can somehow be summed up to yield a single measure of intellect. From their perspective, it makes far more sense to look at a culture's popular theory of intellect and to devise measures or observations that catch such forms of thinking on the fly. As the cross-cultural investigator Patricia Greenfield has remarked, with respect to the typical Western testing instrument, "You can't take it with you."
Neuroscientists are equally suspicious of the psychologists' assumptions about intellect. Half a century ago, there were still neuroscientists who believed that the brain was an all-purpose machine and that any portion of the brain could subserve any human cognitive or perceptual function. However, this "equipotential" position (as it was called) is no longer tenable. All evidence now points to the brain as being a highly differentiated organ: Specific capacities, ranging from the perception of the angle of a line to the production of a particular linguistic sound, are linked to specific neural networks. From this perspective, it makes much more sense to think of the brain as harboring an indefinite number of intellectual capacities, whose relationship to one another needs to be clarified.
It is possible to acknowledge the brain's highly differentiated nature and still adhere to a more general view of intelligence. Some investigators believe that nervous systems differ from one another in the speed and efficiency of neural signaling, and that this characteristic may underlie differences in individuals' measured intelligence. Some empirical support exists for this position, though no one yet knows whether such differences in signaling efficiency are inborn or can be developed. Those partial to the general view of intelligence also point to the increasingly well-documented flexibility (or plasticity) of the human brain during the early years of life. This plasticity suggests that different parts of the brain can take over a given function, particularly when pathology arises. Still, noting that some flexibility exists in the organization of human capacities during early life is hardly tantamount to concluding that intelligence is a single property of a whole brain. And the early flexibility evidence runs counter to the frequently voiced argument of "generalists" that intelligence is fixed and unchangeable.
Finally, the trends in computer science and artificial intelligence also militate against the entrenched view of a single, general-purpose intellect. When artificial intelligence was first developed in the 1950s and 1960s, programmers generally viewed problem solving as a generic capacity and contended that a useful problem-solving program should be applicable to a variety of problems (for example, one should be able to use a single program to play chess, understand language, and recognize faces). The history of computer science has witnessed a steady accumulation of evidence against this "general problem-solver" tradition. Rather than setting up programs that embrace general heuristic strategies, scientists have found it far more productive to build specific kinds of knowledge into each program. So-called expert systems "know" a great deal about a certain domain (such as chemical spectrography, voice recognition, or chess moves) and essentially nothing about the other domains of experience. Development of a machine that is generally smart seems elusive—and is perhaps a fundamentally wrong-headed conceit.
Like neuroscientists, some computer scientists have retained a generic view of intelligence. They point to new parallel-distributed systems (PDPs) whose workings are more akin to the human brain's processes than the step-by-step procedures of earlier computational systems. Such PDPs do not need to have knowledge built into them; like most animals, they learn from accumulated experience, even experience unmediated by explicit symbols and rules. Still, such systems have not yet exhibited forms of thinking that cut across different content areas (as a general intelligence is supposed to do); if anything, their realms of expertise have thus far proved even more specific than those displayed by expert systems based on earlier computer models.
The insularity of most psychological discussions came home to me recently when I appeared on a panel devoted to the topic of intelligence. For a change, I was the only psychologist. An experimental physicist summarized what is known about the intelligence of different animals. A mathematical physicist discussed the nature of matter, as it allows for conscious and intelligent behavior. A computer scientist described the kinds of complex systems that can be built out of simple, nervelike units and sought to identify the point at which these systems begin to exhibit intelligent, and perhaps even creative, behavior. As I listened intently to these thoughtful scholars, I clearly realized that psychologists no longer own the term intelligence—if we ever did. What it means to be intelligent is a profound philosophical question, one that requires grounding in biological, physical, and mathematical knowledge. Correlations (or noncorrelations) among test scores mean little once one ventures beyond the campus of the Educational Testing Service.
THE RESTLESSNESS AMONG PSYCHOLOGISTS
Even some psychologists have been getting restless, and none more so than the Yale psychologist Robert Sternberg. Born in 1949, Sternberg has written dozens of books and several hundred articles, most focusing on intelligence in one way or another. Influenced by the new view of the mind as an "information-processing device," Sternberg began with the strategic goal of understanding the actual mental processes—the discrete mental steps—someone would employ when responding to standardized test items. He asked what happens—on a millisecond-by-millisecond basis—when one must solve analogies or indicate an understanding of vocabulary words. What does the mind do, step by step, as it completes the analogy "Lincoln : president :: Margaret Thatcher : ?" According to Sternberg, it is not sufficient to know whether someone could arrive at the correct answer. Rather, one should look at the test taker's actual mental steps in solving a problem, identify the difficulties encountered, and, to the extent possible, figure out how to help this person and others solve items of this sort.
Sternberg soon went beyond identifying the components of standard intelligence testing. First, he asked about the ways in which people actually order the components of reasoning: For example, how do they decide how much time to allot to a problem, and how do they know whether they've made a right choice? As a cognitive scientist might put it, he probed the microstructure of problem solving. Second, Sternberg began to examine two previously neglected forms of intelligence. He investigated the capacity of individuals to automatize familiar information or problems, so that they can be free to direct their attention to new and unfamiliar information. And he looked at how people deal practically with different kinds of contexts—how they know and use what is needed to behave intelligently at school, at work, on the streets, and even when in love. Sternberg noted that these latter forms of "practical intelligence" are extremely important for success in our society and yet are rarely, if ever, taught explicitly or tested systematically.
More so than many other critics of standard intelligence testing, Sternberg has sought to measure these newly recognized forms of intelligence through the kinds of pencil-and-paper laboratory methods favored by the profession. And he has found that people's ability to deal effectively with novel information or to adapt to diverse contexts can be differentiated from their success with standard IQ-test-style problems. (These findings should come as no surprise to those who have seen high-IQ people flounder outside of a school setting or those who, at a high school or college reunion, have found their academically average or below-average peers to be the richest or most powerful alumni at the event.) But Sternberg's efforts to create a new intelligence test have not been crowned with easy victory. Most psychometricians are conservative: They cling to their tried-and-true tests and believe that any new test brought to market must correlate highly with existing instruments, such as the familiar Stanford-Binet or Wechsler tests.
Other psychologists have also called attention to neglected aspects of the terrain of intelligence. For example, David Olson of the University of Toronto has emphasized the importance of mastering different media (like computers) or symbol systems (like written or graphic materials) and has redefined intelligence as "skill in the use of a medium." The psychologists Gavriel Salomon and Roy Pea, both experts on technology and education, have noted the extent to which intelligence inheres in the resources to which a person has access, ranging from pencils to Rolodexes™ to libraries or computer networks. In their view, intelligence is better thought of as "distributed" in the world rather than concentrated "in the head." Similarly, the psychologist James Greeno and anthropologist Jean Lave have described intelligence as being "situated": By observing others, one learns to behave appropriately in situations and thereby appears intelligent. According to a strict situationalist perspective, it does not make sense to think of a separate capacity called intelligence that moves with a person from one place to another. And my colleague at Harvard, David Perkins, has stressed the extent to which intelligence is learnable: One can master various strategies, acquire different kinds of expertise, and learn to negotiate in varied settings.
Nearly every year ushers in a new set of books and a new ensemble of ideas about intelligence. On the heels of The Bell Curve and Emotional Intelligence came David Perkins's Outsmarting IQ, Stephen Ceci's On Intelligence: More or Less, Robert Sternberg's Successful Intelligence, and Robert Coles's Moral Intelligence of Children. Some of the authors sought to differentiate among different forms of intelligence, such as those dealing with novel, as opposed to "crystallized," information. Some sought to broaden the expanse of intelligence to include emotions, morality, creativity, or leadership. And others sought to bring intelligence wholly or partially outside the head, situating it in the group, the organization, the community, the media, or the symbol systems of a culture.
The different textures of these books are of interest chiefly to those within the trade of social scientists. Outsiders are well advised not to try to follow every new warp and woof, since many will soon unravel. However, the general message is clear: Intelligence, as a construct to be defined and a capacity to be measured, is no longer the property of a specific group of scholars who view it from a narrowly psychometric perspective. In the future, many disciplines will help define intelligence, and many more interest groups will participate in the measurement and uses of it.
Now I want to focus on the view of intelligence that, in my view, has the strongest scientific support and the greatest utility for the next millennium: the theory of multiple intelligences.
Excerpted from Intelligence by Howard Gardner Copyright © 1997 by Howard Gardner. Excerpted by permission.