Can computers think? Can they use reason to develop their own concepts, solve complex problems, play games, understand our languages? This comprehensive survey of artificial intelligence ― the study of how computers can be made to act intelligently ― explores these and other fascinating questions. Introduction to Artificial Intelligence presents an introduction to the science of reasoning processes in computers, and the research approaches and results of the past two decades. You'll find lucid, easy-to-read coverage of problem-solving methods, representation and models, game playing, automated understanding of natural languages, heuristic search theory, robot systems, heuristic scene analysis and specific artificial-intelligence accomplishments. Related subjects are also included: predicate-calculus theorem proving, machine architecture, psychological simulation, automatic programming, novel software techniques, industrial automation and much more.
A supplementary section updates the original book with major research from the decade 1974-1984. Abundant illustrations, diagrams and photographs enhance the text, and challenging practice exercises at the end of each chapter test the student's grasp of each subject. The combination of introductory and advanced material makes Introduction to Artificial Intelligence ideal for both the layman and the student of mathematics and computer science. For anyone interested in the nature of thought, it will inspire visions of what computer technology might produce tomorrow.
Introduction to Artificial Intelligence
By Philip C. Jackson Jr.
Dover Publications, Inc. Copyright © 1985 Philip C. Jackson, Jr.
All rights reserved.
"Artificial intelligence" is the ability of machines to do things that people would say require intelligence. Artificial intelligence (AI) research is an attempt to discover and describe aspects of human intelligence that can be simulated by machines. For example, at present there are machines that can do the following things:
1. Play games of strategy (e.g., Chess, Checkers, Poker) and (in Checkers) learn to play better than people.
2. Learn to recognize visual or auditory patterns.
3. Find proofs for mathematical theorems.
4. Solve certain well-formulated kinds of problems.
5. Process information expressed in human languages.
The extent to which machines (usually computers) can do these things independently of people is still limited; machines currently exhibit in their behavior only rudimentary levels of intelligence. Even so, the possibility exists that machines can be made to show behavior indicative of intelligence, comparable or even superior to that of humans.
Alternatively, AI research may be viewed as an attempt to develop a mathematical theory to describe the abilities and actions of things (natural or man-made) exhibiting "intelligent" behavior, and to serve as a calculus for the design of intelligent machines. As yet there is no "mathematical theory of intelligence," and researchers dispute whether there ever will be.
This book serves as an introduction to research on machines that display intelligent behavior (note 1–1). Such machines sometimes will be called "artificial intelligences," "intelligent machines," or "mechanical intelligences."
The inclination in this book is toward the first viewpoint of AI research, without forsaking the second. Since AI research is still in its infancy, it is prudent to withhold estimation of its future. It is best to begin with a summation of present knowledge, considering such questions as:
1. What is known about natural intelligence?
2. When can we justifiably call a machine intelligent?
3. How and to what extent do machines currently simulate intelligence or display intelligent behavior?
4. How might machines eventually simulate intelligence?
5. How can machines and their behavior be described mathematically?
6. What uses could be made of intelligent machines?
Each of these questions will be explored in some detail in this book. The first and second questions are covered in this chapter. It is hoped that the six questions are covered individually in enough detail so that the reader will be guided to broader study if he is so inclined. For parts of this book, some knowledge of mathematics (especially sets, functions, and logic) is presupposed, though much of the book is understandable without it.
A basic goal of AI research is to construct a machine that exhibits the behavior associated with human intelligence, that is, comparable to the intelligence of a human being (note 1–2). It is not required that the machine use the same underlying mechanisms (whatever they are) that are used in human cognition (note 1–3), nor is it required that the machine go through stages of development or learning such as those through which people progress.
The classic experiment proposed for determining whether a machine possesses intelligence on a human level is known as Turing's test (after A. M. Turing, who pioneered research in computer logic, undecidability theory, and artificial intelligence). This experiment has yet to be performed seriously, since no machine yet displays enough intelligent behavior to be able to do well in the test. Still, Turing's test is the basic paradigm for much successful work and for many experiments in machine intelligence, from Samuel's Checkers Player to "semantic-information processing" programs such as Colby's PARRY or Raphael's SIR (see Chapters 4 and 7).
Basically, Turing's test consists of presenting a human being, A, with a typewriter-like or TV-like terminal, which he can use to converse with two unknown (to him) sources, B and C (see Fig. 1–1). The interrogator A is told that one terminal is controlled by a machine and that the other terminal is controlled by a human being whom A has never met. A is to guess which of B and C is the machine and which is the person. If A cannot distinguish one from the other with significantly better than 50% accuracy, and if this result continues to hold no matter what people are involved in the experiment, the machine is said to simulate human intelligence (note 1–4).
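The statistical criterion in the paragraph above ("significantly better than 50% accuracy") can be made concrete with a one-sided binomial significance check. This is only an illustrative sketch: the function names and the 5% significance level are assumptions of this example, not part of Turing's proposal.

```python
import math

def binom_sf(k, n, p=0.5):
    # P(X >= k) for X ~ Binomial(n, p): the chance of at least k correct
    # identifications if the interrogator A were merely guessing.
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def passes_turing_criterion(correct, trials, alpha=0.05):
    # The machine meets the criterion if A's accuracy is NOT significantly
    # better than the 50% expected from pure guessing (hypothetical 5% level).
    return binom_sf(correct, trials) >= alpha

# 60 correct identifications in 100 sessions: P(X >= 60) is about 0.028,
# below 0.05, so A beats chance and the machine fails the criterion.
print(passes_turing_criterion(60, 100))   # prints False
# 50 correct in 100 sessions is indistinguishable from guessing.
print(passes_turing_criterion(50, 100))   # prints True
```

Note that Turing's formulation also requires the result to hold "no matter what people are involved," so any such check would have to be repeated across many interrogators and human foils, not just many sessions with one pair.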
Some comments on Turing's test are in order. First, the nature of Turing's test is such that it does not permit the interrogator A to observe the physical natures of B and C; rather, it permits him only to observe their "intellectual behavior," that is, their ability to communicate with formal symbols and to "think abstractly." So, while the test does not enable A to be prejudiced by the physical nature of either B or C, neither does it give a way to compare those aspects of an entity's behavior that reflect its ability to act nonabstractly in the real world—that is, to be intelligent in its performance of concrete operations on objects. Can the machine, for example, fry an egg or clean a house?
Second, one possible achievement of AI research would be to produce a complete description of a machine that can successfully pass Turing's test, or to find a proof that no machine can pass it. The complete description must be of a machine that can actually be constructed; a proof that any machine passing the test is not constructible (it might say, e.g., "The number of parts in such a machine must be greater than the number of electrons in the universe.") would count as a proof of the "no machine" alternative.
Third, it may be that more than one type of machine can pass Turing's test. In this case, AI research has a secondary problem of creating a general description of all machines that will successfully pass Turing's test.
Fourth, if a machine passes Turing's test, it means in effect that there is at least one machine that can learn to solve problems as well as a human being. This would lead to asking if a constructible machine can be described which would be capable of learning to solve not only those problems that people can usually solve, but also those that people create but can only rarely solve. That is, is it possible to build mechanical intelligences that are superior to human intelligence?
It is not yet possible to give a definite answer to any of these questions. Some evidence exists that AI research may eventually attain at least the goal of a machine that passes Turing's test.
It is clear that the intellectual capabilities of a human being are directly related to the functioning of his brain, which appears to be a finite structure of cells. Moreover, people have succeeded in constructing machines that can "learn" to produce, for certain specific intellectual problems, solutions superior to those people themselves can produce. The most notable example is Samuel's Checkers Player, which has learned to play a better game of Checkers than its designer, and which currently plays at a championship level (see Chapter 4).
The definition of "intelligence" in Webster's Third International Dictionary (1966) reads:
1 in·tel·li·gence n -S often attrib [ME, fr. MF, fr. OF, fr. L intelligentia, fr. intelligent-, intelligens (pres. part.) + -ia -y — more at INTELLIGENT] 1 a (1) : the faculty of understanding : capacity to know or apprehend : INTELLECT, REASON <~, which emerged during the revolutionary cycles of matter as the highest form yet achieved —Hermann Reith>
2 intelligence vt -ED/-ING/-S obs : to bring tidings of (something) or to (someone)
(Reprinted by permission from Webster's Third International Dictionary © 1971 by G. & C. Merriam Co., Publishers of the Merriam-Webster Dictionaries.)
To summarize the definition in one phrase, one might say that intelligence is the ability "to act rightly in a given situation." Although one could imagine an entity that always behaves "rightly," without making any errors, AI research is more concerned with the concept of partial success, with building machines that can make mistakes, but which can also change their behavior with time and perhaps stop making mistakes. Intuitively, AI research is concerned with building machines that can "adjust" or "adapt" to certain environments, and which in effect learn to solve problems within these environments. This corresponds with the ordinary conception of human intelligence—that it is limited, but that it can learn and thereby improve its performance of certain tasks with time.
Surprisingly little is known concerning the limitations of human intelligence. No one has made any complete survey of the problems that can be solved by human beings. The ability to solve certain types of problems has been studied and made the basis of "intelligence" tests, but the generality and validity of these tests is disputable. Isaac Newton, for example, might have scored low on such tests when he was an adolescent; yet he is estimated by some to have had an intelligence quotient (IQ) near 200. One of the shortcomings of these tests is that they predict little concerning the development of a person's intelligence, especially what problems he could learn to solve.
Evidence concerning human intelligence can be obtained from four major sources: history, introspection, the social sciences, and the biological sciences. Included in the social sciences are psychology, anthropology, sociology, economics, political science; among the biological sciences are neurobiology, biochemistry, biology. "Introspective" sciences might include mathematical logic, systems analysis, and music theory.
Evidence from History
A discourse on the full history of human intelligence is certainly beyond the bounds of this book. Some allusions to this history can be woven in while presenting evidence from other sources.
Evidence from Introspection
Introspection has yielded a wealth of seemingly ambiguous and contradictory views of intelligence. One important introspective work familiar in the Western world is Descartes' Discourse on Method. This work purports to be ultimately based only on the notion of thought: "I think therefore I exist." So far as the work concerns intelligence, Descartes made a clear distinction between animals and human beings. Animals, he believed, are not much different from machines; anything an animal can do he could imagine being done by a sufficiently complicated machine. People, however, are different from either animals or machines, since people have an ability to "communicate" with each other, to use signs, sentences, and languages that are clearly not completely the result of instinct or construction. Descartes regarded the ability to use languages as the most significant indication that something has human intelligence: "... for the word is the sole sign and the only certain mark of the presence of thought hidden and wrapped up in the body ..."
Descartes was partially correct in his observation that animals cannot communicate in the same fashion as people. There is recent evidence that dolphins have some sort of language, but the nature of their language is still not understood (Lilly, 1967). Chapter 7 explores the relationship of intelligence and language.
Another introspective way of looking at the mind is that provided by the "rooms of consciousness" concept. In this system a human mind is viewed as being able to inhabit and move among a set of rooms, which are distinguished from each other by their lighting—Socrates' metaphor of the Cave in Plato's Republic is a good example. Various rooms can be associated with different levels and abilities of intelligence; this introspective metaphor has been developed in Eastern cultures by Buddha and Lao Tse, as well as in the Western world by other philosophers. Also, the significance of "light" in the metaphor is typical. Other variations on the metaphor speak of some rooms as possessing illusions and dreams.
One viewpoint of intelligence, which is often developed by introspection, is that there is a distinction between scientific (intellectual) learning and spiritual learning abilities. Scientific learning is said to rely on certain rules for the belief, derivation, refutation, and proof of propositions about the universe. Presumably, science requires a language for describing events and the meanings of measurements, and is dependent on the existence of invariant, reproducible things in the universe. "Spiritual" learning, on the other hand, does not require words or language and may evade intellectual reasoning processes. For various people, introspection has yielded, for example, the following notions of nonintellectual learning:
1. Subconscious learning, in which knowledge is somehow obtained without conscious reasoning.
2. Emotional learning, in which knowledge is perceived as an emotion, without reasoning.
3. Inspired learning, in which knowledge is given to one instantaneously, without reasoning, perhaps by a deity.
4. Paradoxical learning, in which one is able to perceive knowledge that is self-contradictory, regardless of how it is expressed in words, and therefore beyond logical or scientific learning.
Again, this introspective viewpoint has been developed both in Eastern and Western cultures. The reader who wishes to study the subject deeply may wish to read Dostoevsky, Freud, Jung, and Lao Tse. Various people have, of course, argued that emotional and subconscious learning can be scientifically explained.
The viewpoint that intelligence in certain forms cannot be explained logically or scientifically is relevant to artificial intelligence research. If this viewpoint is correct, then presumably there are some types of knowledge that machines cannot be said to possess and there are some ways of gaining knowledge they cannot use. Chapter 2 discusses the nature of machines and of scientific and mathematical descriptions of things more thoroughly. For now, the viewpoint expressed there is that while it can be argued mathematically that there are entities which cannot be completely described mathematically, there is probably no way of proving in the real world that something is beyond the power of science to explain. All that can be proved is that science has so far not explained it.
Thus, no comment is made here as to the existence or nature of spiritual learning: What is important is whether there are some forms of learning and intelligence that can be exhibited by machines. Whether "some" means "all" is, scientifically speaking, an open question.
Excerpted from Introduction to Artificial Intelligence by Philip C. Jackson Jr. Copyright © 1985 Philip C. Jackson, Jr. Excerpted by permission of Dover Publications, Inc.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Table of Contents
Preface
Evidence from History
Evidence from Introspection
Evidence from the Social Sciences
Evidence from the Biological Sciences
State of Knowledge
The Neuron and the Synapse
Neural Data Processing
Computers and Simulation
2. MATHEMATICS, PHENOMENA, MACHINES
On Mathematical Description
The Mathematical Description of Phenomena
Types of Phenomena
Simple Turing Machines
Polycephalic Turing Machines
Universal Turing Machines
Limits to Computational Ability
3. PROBLEM SOLVING
Evolutionary and Reasoning Programs
Paradigms for the Concept of "Problem"
Problem Solvers, Reasoning Programs, and Languages
General Problem Solver
State-Space (Situation-Space) Problems
Problem Reduction and Graphs
Heuristic Search Theory
Need for Search
Planning, Reasoning by Analogy, and Learning
Reasoning by Analogy
Models, Problem Representations, and Levels of Competence
The Problem of Problem Representation
Levels of Competence
4. GAME PLAYING
Games and Their State Spaces
Game Trees and Heuristic Search
Game Trees and Minimax Analysis
Static Evaluations and Backed-up Evaluations
The Alpha-Beta Technique
Generating (Searching) Game Trees
Learning Situations for Generalization
Chess and GO
The Game of GO
Poker and Machine Development of Heuristics
General Game-Playing Programs
5. PATTERN PERCEPTION
Some Basic Definitions and Examples
Eye Systems for Computers
Picture Enhancement and Line Detection
Perception of Regions
Perception of Objects
Learning to Recognize Structures of Simple Objects
Some Problems for Pattern Perception Systems
6. THEOREM PROVING
First-Order Predicate Calculus
The Unification Procedure
The Binary Resolution Procedure
Heuristic Search Strategies
Reasoning by Analogy
Solving Problems with Theorem Provers
Predicate-Calculus Descriptions of State-Space Problems
Path Finding, Example Generation, Constructive Proofs, Answer Extraction
Applications to Real-World Problems
Theorem Proving in Planning and Automatic Programming
7. SEMANTIC INFORMATION
Natural and Artificial Languages
Artificial Languages and Programming Languages
Grammars, Machines, and Extensibility
Programs that "Understand" Natural Language
Recursive Approaches to Syntax
Semantics and Inference
Generation and Integration
Some Conversations with Computers
Language and Perception
Networks of Question-Answering Programs
Pattern Recognition and Grammatical Inference
Communications, Teaching, and Learning
8. PARALLEL PROCESSING AND EVOLUTIONARY SYSTEMS
Abelian Machine Spaces
Questions of Generality and Equivalence
Self-affecting Systems: Self-reproduction
Hierarchical, Self-organizing, and Evolutionary Systems
9. THE HARVEST OF ARTIFICIAL INTELLIGENCE
A Look at Possibilities
Tools and People
Over Mechanization of the World: The Machine as Dictator
The Well-natured Machine