
The Society of Mind

by Marvin Minsky


Marvin Minsky — one of the fathers of computer science and cofounder of the Artificial Intelligence Laboratory at MIT — gives a revolutionary answer to the age-old question: "How does the mind work?"
Minsky brilliantly portrays the mind as a "society" of tiny components that are themselves mindless. Mirroring his theory, Minsky boldly casts The Society of Mind as an intellectual puzzle whose pieces are assembled along the way. Each chapter — on a self-contained page — corresponds to a piece in the puzzle. As the pages turn, a unified theory of the mind emerges, like a mosaic. Ingenious, amusing, and easy to read, The Society of Mind is an adventure in imagination.

Editorial Reviews

From the Publisher
Isaac Asimov Information Week 270 brilliantly original essays on...how the mind works.

Douglas Hofstadter author of Gödel, Escher, Bach and Metamagical Themas A stunning collage of staccato images, filled to the brim with witty insights and telling aphorisms.

The New York Times Book Review INGENIOUS...STIMULATING...crisp, packed with quips, aphorisms and homely illustrations. A pleasure to read...It will make you think. And that's what brains are for.

Michael Crichton author of The Andromeda Strain PROVOCATIVE, DELIGHTFUL, CHALLENGING, a rich, funny and altogether fascinating book.

Martin Gardner The Boston Sunday Globe SPARKLING WITH JOKES and apt quotations...and rich insights.

Gene Roddenberry creator of Star Trek A REMARKABLE BOOK....I am grateful that Marvin Minsky was my tour guide on this journey in the realms of my own consciousness.

San Jose Mercury News SCATTERED WITH GEMS....Liable to be influential far beyond the narrow researches of artificial intelligence.

Professor Guy Cellerier Genetic Artificial Intelligence and Epistemics Laboratory, University of Geneva A PROFOUND AND FASCINATING BOOK that lays down the foundations for the solution of one of the last great problems of modern science....Marks a new era.

Publishers Weekly
Minsky, cofounder of MIT's Artificial Intelligence Lab, is a charter member of the community of AI pioneers committed to understanding the workings of the human mind and mimicking its processes by computer. Here he takes his place as this generation's Buckminster Fuller, a revered seminal thinker whose depth and originality sometimes place him out of reach for many. But Minsky's difference is his style: he writes aphoristically, with wit and precision, and makes the most of his perception that the mind learns by images, which perform as agents that connect, interact and even "censor" in a staggeringly subtle "society" of microprocedures. This holistic view of the mind's learning stages is the culmination of Minsky's study, and its insights into the developing world of computers-as-machines are matched by paradoxically intuitive glimpses of the growth of a sense of "self" through introspection, short- and long-term memory, mind-frames utilizing pictures and language. Minsky's creative terminology for freshly perceived mental processes is a major contribution to the future of mind-science. Illustrated. Major ad/promo; Macmillan Book Club alternate. (January)

Product Details

Simon & Schuster
Product dimensions:
8.40(w) x 10.90(h) x 0.80(d)

Read an Excerpt



Everything should be made as simple as possible, but not simpler.

Albert Einstein

This book tries to explain how minds work. How can intelligence emerge from nonintelligence? To answer that, we'll show that you can build a mind from many little parts, each mindless by itself.

I'll call "Society of Mind" this scheme in which each mind is made of many smaller processes. These we'll call agents. Each mental agent by itself can only do some simple thing that needs no mind or thought at all. Yet when we join these agents in societies — in certain very special ways — this leads to true intelligence.

There's nothing very technical in this book. It, too, is a society — of many small ideas. Each by itself is only common sense, yet when we join enough of them we can explain the strangest mysteries of mind.

One trouble is that these ideas have lots of cross-connections. My explanations rarely go in neat, straight lines from start to end. I wish I could have lined them up so that you could climb straight to the top, by mental stair-steps, one by one. Instead they're tied in tangled webs.

Perhaps the fault is actually mine, for failing to find a tidy base of neatly ordered principles. But I'm inclined to lay the blame upon the nature of the mind: much of its power seems to stem from just the messy ways its agents cross-connect. If so, that complication can't be helped; it's only what we must expect from evolution's countless tricks.

What can we do when things are hard to describe? We start by sketching out the roughest shapes to serve as scaffolds for the rest; it doesn't matter very much if some of those forms turn out partially wrong. Next, draw details to give these skeletons more lifelike flesh. Last, in the final filling-in, discard whichever first ideas no longer fit.

That's what we do in real life, with puzzles that seem very hard. It's much the same for shattered pots as for the cogs of great machines. Until you've seen some of the rest, you can't make sense of any part.


Good theories of the mind must span at least three different scales of time: slow, for the billion years in which our brains have evolved; fast, for the fleeting weeks and months of infancy and childhood; and in between, the centuries of growth of our ideas through history.

To explain the mind, we have to show how minds are built from mindless stuff, from parts that are much smaller and simpler than anything we'd consider smart. Unless we can explain the mind in terms of things that have no thoughts or feelings of their own, we'll only have gone around in a circle. But what could those simpler particles be — the "agents" that compose our minds? This is the subject of our book, and knowing this, let's see our task. There are many questions to answer.

Function: How do agents work?

Embodiment: What are they made of?

Interaction: How do they communicate?

Origins: Where do the first agents come from?

Heredity: Are we all born with the same agents?

Learning: How do we make new agents and change old ones?

Character: What are the most important kinds of agents?

Authority: What happens when agents disagree?

Intention: How could such networks want or wish?

Competence: How can groups of agents do what separate agents cannot do?

Selfness: What gives them unity or personality?

Meaning: How could they understand anything?

Sensibility: How could they have feelings and emotions?

Awareness: How could they be conscious or self-aware?

How could a theory of the mind explain so many things, when every separate question seems too hard to answer by itself? These questions all seem difficult, indeed, when we sever each one's connections to the other ones. But once we see the mind as a society of agents, each answer will illuminate the rest.


It was never supposed [the poet Imlac said] that cogitation is inherent in matter, or that every particle is a thinking being. Yet if any part of matter be devoid of thought, what part can we suppose to think? Matter can differ from matter only in form, bulk, density, motion and direction of motion: to which of these, however varied or combined, can consciousness be annexed? To be round or square, to be solid or fluid, to be great or little, to be moved slowly, or swiftly one way or another, are modes of material existence, all equally alien from the nature of cogitation. If matter be once without thought, it can only be made to think by some new modification, but all the modifications which it can admit are equally unconnected with cogitative powers.

Samuel Johnson

How could solid-seeming brains support such ghostly things as thoughts? This question troubled many thinkers of the past. The world of thoughts and the world of things appeared to be too far apart to interact in any way. So long as thoughts seemed so utterly different from everything else, there seemed to be no place to start.

A few centuries ago it seemed equally impossible to explain Life, because living things appeared to be so different from anything else. Plants seemed to grow from nothing. Animals could move and learn. Both could reproduce themselves — while nothing else could do such things. But then that awesome gap began to close. Every living thing was found to be composed of smaller cells, and cells turned out to be composed of complex but comprehensible chemicals. Soon it was found that plants did not create any substance at all but simply extracted most of their material from gases in the air. Mysteriously pulsing hearts turned out to be no more than mechanical pumps, composed of networks of muscle cells. But it was not until the present century that John von Neumann showed theoretically how cell-machines could reproduce while, almost independently, James Watson and Francis Crick discovered how each cell actually makes copies of its own hereditary code. No longer does an educated person have to seek any special, vital force to animate each living thing.

Similarly, a century ago, we had essentially no way to start to explain how thinking works. Then psychologists like Sigmund Freud and Jean Piaget produced their theories about child development. Somewhat later, on the mechanical side, mathematicians like Kurt Gödel and Alan Turing began to reveal the hitherto unknown range of what machines could be made to do. These two streams of thought began to merge only in the 1940s, when Warren McCulloch and Walter Pitts began to show how machines might be made to see, reason, and remember. Research in the modern science of Artificial Intelligence started only in the 1950s, stimulated by the invention of modern computers. This inspired a flood of new ideas about how machines could do what only minds had done previously.

Most people still believe that no machine could ever be conscious, or feel ambition, jealousy, humor, or have any other mental life-experience. To be sure, we are still far from being able to create machines that do all the things people do. But this only means that we need better theories about how thinking works. This book will show how the tiny machines that we'll call "agents of the mind" could be the long sought "particles" that those theories need.


You know that everything you think and do is thought and done by you. But what's a "you"? What kinds of smaller entities cooperate inside your mind to do your work? To start to see how minds are like societies, try this: pick up a cup of tea!

Your GRASPING agents want to keep hold of the cup.

Your BALANCING agents want to keep the tea from spilling out.

Your THIRST agents want you to drink the tea.

Your MOVING agents want to get the cup to your lips.

Yet none of these consume your mind as you roam about the room talking to your friends. You scarcely think at all about Balance; Balance has no concern with Grasp; Grasp has no interest in Thirst; and Thirst is not involved with your social problems. Why not? Because they can depend on one another. If each does its own little job, the really big job will get done by all of them together: drinking tea.
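This division of labor can be sketched in code. The sketch below is purely illustrative (the `Agent` class and `drink_tea` function are inventions for this example, not anything Minsky specifies): each agent touches only its own small concern, and the big job of drinking tea gets done by all of them running together.

```python
class Agent:
    """A tiny process that does one simple thing and knows nothing else."""
    def __init__(self, name, job):
        self.name = name
        self.job = job          # a one-argument callable acting on shared state

    def act(self, state):
        self.job(state)         # each agent only updates its own concern


def drink_tea():
    # Shared state of the body; no agent reads more of it than it needs.
    state = {"grasped": False, "level": False, "at_lips": False, "drunk": False}
    agents = [
        Agent("GRASP",   lambda s: s.update(grasped=True)),
        Agent("BALANCE", lambda s: s.update(level=True)),
        Agent("MOVE",    lambda s: s.update(at_lips=s["grasped"] and s["level"])),
        Agent("THIRST",  lambda s: s.update(drunk=s["at_lips"])),
    ]
    for agent in agents:        # none of them "minds" what the others do
        agent.act(state)
    return state["drunk"]
```

No agent here is intelligent, and none depends on understanding the others; the tea gets drunk only because each can rely on the rest doing their own little jobs.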

How many processes are going on, to keep that teacup level in your grasp? There must be at least a hundred of them, just to shape your wrist and palm and hand. Another thousand muscle systems must work to manage all the moving bones and joints that make your body walk around. And to keep everything in balance, each of those processes has to communicate with some of the others. What if you stumble and start to fall? Then many other processes quickly try to get things straight. Some of them are concerned with how you lean and where you place your feet. Others are occupied with what to do about the tea: you wouldn't want to burn your own hand, but neither would you want to scald someone else. You need ways to make quick decisions.

All this happens while you talk, and none of it appears to need much thought. But when you come to think of it, neither does your talk itself. What kinds of agents choose your words so that you can express the things you mean? How do those words get arranged into phrases and sentences, each connected to the next? What agencies inside your mind keep track of all the things you've said — and, also, whom you've said them to? How foolish it can make you feel when you repeat — unless you're sure your audience is new.

We're always doing several things at once, like planning and walking and talking, and this all seems so natural that we take it for granted. But these processes actually involve more machinery than anyone can understand all at once. So, in the next few sections of this book, we'll focus on just one ordinary activity — making things with children's building-blocks. First we'll break this process into smaller parts, and then we'll see how each of them relates to all the other parts.

In doing this, we'll try to imitate how Galileo and Newton learned so much by studying the simplest kinds of pendulums and weights, mirrors and prisms. Our study of how to build with blocks will be like focusing a microscope on the simplest objects we can find, to open up a great and unexpected universe. For the same reason, many biologists today devote more attention to tiny germs and viruses than to magnificent lions and tigers. For me and a whole generation of students, the world of work with children's blocks has been the prism and the pendulum for studying intelligence.

In science, one can learn the most by studying what seems the least.


Imagine a child playing with blocks, and imagine that this child's mind contains a host of smaller minds. Call them mental agents. Right now, an agent called Builder is in control. Builder's specialty is making towers from blocks.

Our child likes to watch a tower grow as each new block is placed on top. But building a tower is too complicated a job for any single, simple agent, so Builder has to ask for help from several other agents:

In fact, even to find another block and place it on the tower top is too big a job for any single agent. So Add, in turn, must call for other agents' help. Before we're done, we'll need more agents than would fit in any diagram.

Why break things into such small parts? Because minds, like towers, are made that way — except that they're composed of processes instead of blocks. And if making stacks of blocks seems insignificant — remember that you didn't always feel that way. When first you found some building toys in early childhood, you probably spent joyful weeks of learning what to do with them. If such toys now seem relatively dull, then you must ask yourself how you have changed. Before you turned to more ambitious things, it once seemed strange and wonderful to be able to build a tower or a house of blocks. Yet, though all grown-up persons know how to do such things, no one understands how we learn to do them! And that is what will concern us here. To pile up blocks into heaps and rows: these are skills each of us learned so long ago that we can't remember learning them at all. Now they seem mere common sense — and that's what makes psychology hard. This forgetfulness, the amnesia of infancy, makes us assume that all our wonderful abilities were always there inside our minds, and we never stop to ask ourselves how they began and grew.
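Builder's decomposition can be sketched as ordinary function calls. Find, Get, and Add appear in the excerpt; the Put helper and the list representation of blocks and tower are assumptions made for this illustration. The point is that no single function is "smart": each just calls simpler ones.

```python
def find(blocks, tower):
    """Pick any block not already used in the tower."""
    for block in blocks:
        if block not in tower:
            return block
    return None                 # no free block left


def get(block):
    """Stand-in for reaching out and grasping the block."""
    return block


def put(tower, block):
    """Place the block on the tower's top."""
    tower.append(block)


def add(blocks, tower):
    """Add one block: too big a job for one agent, so it calls three."""
    block = find(blocks, tower)
    if block is not None:
        put(tower, get(block))


def builder(blocks, height):
    """Builder itself only repeats Add until the tower is tall enough."""
    tower = []
    while len(tower) < height and len(tower) < len(blocks):
        add(blocks, tower)
    return tower
```

Calling `builder(["red", "blue", "green"], 2)` stacks two blocks; nowhere in the code does any one part contain the "knowledge" of tower-building.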


You cannot think about thinking, without thinking about thinking about something.

Seymour Papert

We found a way to make our tower builder out of parts. But Builder is really far from done. To build a simple stack of blocks, our child's agents must accomplish all these other things.

See must recognize its blocks, whatever their color, size, and place — in spite of different backgrounds, shades, and lights, and even when they're partially obscured by other things.

Then, once that's done, Move has to guide the arm and hand through complicated paths in space, yet never strike the tower's top or hit the child's face.

And think how foolish it would seem, if Find were to find, and Grasp were to grasp, a block supporting the tower top!

When we look closely at these requirements, we find a bewildering world of complicated questions. For example, how could Find determine which blocks are still available for use? It would have to "understand" the scene in terms of what it is trying to do. This means that we'll need theories both about what it means to understand and about how a machine could have a goal. Consider all the practical judgments that an actual Builder would have to make. It would have to decide whether there are enough blocks to accomplish its goal and whether they are strong and wide enough to support the others that will be placed on them.

What if the tower starts to sway? A real builder must guess the cause. Is it because some joint inside the column isn't square enough? Is the foundation insecure, or is the tower too tall for its width? Perhaps it is only because the last block was placed too roughly.

All children learn about such things, but we rarely ever think about them in our later years. By the time we are adults we regard all of this as simple "common sense." But that deceptive pair of words conceals almost countless different skills.

Common sense is not a simple thing. Instead, it is an immense society of hard-earned practical ideas — of multitudes of life-learned rules and exceptions, dispositions and tendencies, balances and checks.

If common sense is so diverse and intricate, what makes it seem so obvious and natural? This illusion of simplicity comes from losing touch with what happened during infancy, when we formed our first abilities. As each new group of skills matures, we build more layers on top of them. As time goes on, the layers below become increasingly remote until, when we try to speak of them in later life, we find ourselves with little more to say than "I don't know."


We want to explain intelligence as a combination of simpler things. This means that we must be sure to check, at every step, that none of our agents is, itself, intelligent. Otherwise, our theory would end up resembling the nineteenth-century "chessplaying machine" that was exposed by Edgar Allan Poe to actually conceal a human dwarf inside. Accordingly, whenever we find that an agent has to do anything complicated, we'll replace it with a subsociety of agents that do simpler things. Because of this, the reader must be prepared to feel a certain sense of loss. When we break things down to their smallest parts, they'll each seem dry as dust at first, as though some essence has been lost.

For example, we've seen how to construct a tower-building skill by making Builder from little parts like Find and Get. Now, where does its "knowing-how-to-build" reside when, clearly, it is not in any part — and yet those parts are all that Builder is? The answer: It is not enough to explain only what each separate agent does. We must also understand how those parts are interrelated — that is, how groups of agents can accomplish things.

Accordingly, each step in this book uses two different ways to think about agents. If you were to watch Builder work, from the outside, with no idea of how it works inside, you'd have the impression that it knows how to build towers. But if you could see Builder from the inside, you'd surely find no knowledge there. You would see nothing more than a few switches, arranged in various ways to turn each other on and off. Does Builder "really know" how to build towers? The answer depends on how you look at it. Let's use two different words, "agent" and "agency," to say why Builder seems to lead a double life. As agency, it seems to know its job. As agent, it cannot know anything at all.

When you drive a car, you regard the steering wheel as an agency that you can use to change the car's direction. You don't care how it works. But when something goes wrong with the steering, and you want to understand what's happening, it's better to regard the steering wheel as just one agent in a larger agency: it turns a shaft that turns a gear to pull a rod that shifts the axle of a wheel. Of course, one doesn't always want to take this microscopic view; if you kept all those details in mind while driving, you might crash because it took too long to figure out which way to turn the wheel. Knowing how is not the same as knowing why. In this book, we'll always be switching between agents and agencies because, depending on our purposes, we'll have to use different viewpoints and kinds of descriptions.
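The steering-wheel example above can be made concrete. In the sketch below (all class and method names are invented for this illustration), `SteeringWheel` seen from outside is an agency with one ability, `turn()`; seen from inside, it is just a chain of mindless parts, none of which "knows" how to steer.

```python
class Shaft:
    def rotate(self, angle):
        return angle            # merely passes the motion along


class Gear:
    def __init__(self, ratio):
        self.ratio = ratio
    def engage(self, angle):
        return angle * self.ratio   # scales the motion, nothing more


class Rod:
    def push(self, amount):
        return amount           # shifts the axle by this amount


class SteeringWheel:
    """Agency view: 'knows how' to steer. Agent view: three dumb parts."""
    def __init__(self):
        self.shaft = Shaft()
        self.gear = Gear(0.5)   # arbitrary illustrative gear ratio
        self.rod = Rod()

    def turn(self, wheel_angle):
        # The 'knowledge' lives in how the parts are connected,
        # not in any single part.
        return self.rod.push(self.gear.engage(self.shaft.rotate(wheel_angle)))
```

A driver calls `SteeringWheel().turn(90)` and never thinks about the shaft, gear, or rod; a mechanic diagnosing a fault must switch to the agent view and inspect each part.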

Copyright © 1985, 1986 by Marvin Minsky

Meet the Author

Marvin Minsky is Toshiba Professor of Media Arts and Sciences, and Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. His research has led to many advances in artificial intelligence, psychology, physical optics, mathematics, and the theory of computation. He has made major contributions in the domains of computer graphics, knowledge and semantics, machine vision, and machine learning. He has also been involved with technologies for space exploration.

Professor Minsky is one of the pioneers of intelligence-based robotics. He designed and built some of the first mechanical hands with tactile sensors, visual scanners, and their software and interfaces. In 1951 he built the first neural-network learning machine. With John McCarthy he founded the MIT Artificial Intelligence Laboratory in 1959. He has written seminal papers in the fields of artificial intelligence, perception, and language. His book The Society of Mind contains hundreds of ideas about the mind, many of which he has continued to develop.

Customer Reviews

Who said simple is beautiful? Beautiful as it may seem, the simple is often deceptive, like the famous Maxwell's equations in differential form. They look beautifully simple, but applying them requires another, not so simple, form. Minsky tells us that tasks that seem simple to us (so simple that we are hardly aware of them) are actually very hard to implement in machines. So the biggest question in artificial intelligence is how to simulate simple human tasks. Minsky is not like some lousy popular textbook writers. He seeks to find an answer, and all along he does not attempt to make a logical 'jump'. The book is full of ideas. Though it comes with no big promises or grand future plans, it gives you practical insights (such as why 'agents' should be simple enough, even when the 'agencies' are not). Highly recommended to those who want to see the practical side of artificial intelligence. Like the integral form of Maxwell's equations.