
Artificial Intelligence: A New Synthesis

by Nils J. Nilsson

Intelligent agents are employed as the central characters in this new introductory text. Beginning with elementary reactive agents, Nilsson gradually increases their cognitive horsepower to illustrate the most important and lasting ideas in AI. Neural networks, genetic programming, computer vision, heuristic search, knowledge representation and reasoning, Bayes networks, planning, and language understanding are each revealed through the growing capabilities of these agents. The book provides a refreshing and motivating new synthesis of the field by one of AI's master expositors and leading researchers. Artificial Intelligence: A New Synthesis takes the reader on a complete tour of this intriguing new world of AI.

  • An evolutionary approach provides a unifying theme
  • Thorough coverage of important AI ideas, old and new
  • Frequent use of examples and illustrative diagrams
  • Extensive coverage of machine learning methods throughout the text
  • Citations to over 500 references
  • Comprehensive index

Editorial Reviews

This introductory text emphasizes important and lasting AI ideas and is intended for a one-semester introductory college course, a precursor to more advanced courses. It begins by considering elementary reactive agents which can sense and report on factors in their environment, and moves on to consider more complex agents, including those which can plan a series of actions based on their effects, those which can be said to "reason" and "deduce" properties of their environment, and those which exist in environments where they must interact and communicate with other agents. Annotation c. by Book News, Inc., Portland, Or.

Product Details

Publisher: Elsevier Science
Series: Morgan Kaufmann Series in Artificial Intelligence
Publication date:
Sold by: Barnes & Noble
File size: 7 MB

Read an Excerpt

Chapter 8: Uninformed Search

8.1 Formulating the State Space

Many problems of practical interest have search spaces so large that they cannot be represented by explicit graphs. Elaborations of the basic search procedure described in the last chapter are then required. First, we need to be especially careful about how we formulate such problems for search; second, we have to have methods for representing large search graphs implicitly; and, third, we need to use efficient methods for searching such large graphs.

In some planning problems, like the one about block stacking, it is not too difficult to conceive of data structures for representing the different world states and the actions that change them. Typically, though, finding representations that result in manageable state-space graphs is difficult. Doing so requires careful analysis of the problem, taking into account symmetries, ignoring irrelevant details, and finding appropriate abstractions. Unfortunately, the task of setting up problems for search is largely an art that still requires human participation.

In addition to problems involving agents stacking blocks, sliding tile problems are often used to illustrate how state-space search is used to plan action sequences. For variety, I will use problems of this sort in this and the next chapter. A typical example is the Fifteen-puzzle, which consists of fifteen tiles set in a four-by-four array—leaving one empty or blank cell into which an adjacent tile can be slid. The task is to find a sequence of tile movements that transforms an initial disposition of tiles into some particular arrangement. The Eight-puzzle is a reduced version, with eight tiles in a three-by-three array. Suppose the object of the puzzle is to slide tiles from the starting configuration until the goal configuration is reached, as shown in Figure 8.1.

In this problem, an obvious iconic state description would be a three-by-three array in which each cell contains one of the numbers 1 through 8 or a symbol representing the blank space. A goal state is the array representing the configuration on the right side of Figure 8.1. The moves between states correspond to sliding a tile into the blank cell. As mentioned, we often have representation choices in setting up the state space of a problem. In the Eight-puzzle, we might imagine that we have 8 x 4 different moves, namely, move 1 up, move 1 down, move 1 left, move 1 right, move 2 up, ..., and so on. (Of course, not all of these moves are possible in any given state.) A more compact formulation involves only four different moves, namely, move blank left, move blank up, move blank right, move blank down. A given start state and the set of possible moves implicitly define the graph of states accessible from the start. The number of nodes in the state-space graph for this representation of the Eight-puzzle is 9! = 362,880. (It happens that the state space for the Eight-puzzle is divided into two separate graphs; a tile configuration in one graph cannot be reached from one in the other graph.)
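
The four-move "move the blank" formulation described above can be sketched as follows. This is an illustrative sketch, not the book's code; the state representation (a 3 x 3 tuple of tuples, with 0 standing for the blank) and the function name are assumptions made for the example.

```python
def successors(state):
    """Return the states reachable by moving the blank left, up, right, or down."""
    # Locate the blank (represented here by 0).
    (r, c), = [(i, j) for i in range(3) for j in range(3) if state[i][j] == 0]
    result = []
    for dr, dc in [(0, -1), (-1, 0), (0, 1), (1, 0)]:  # left, up, right, down
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:  # the move must stay on the board
            grid = [list(row) for row in state]
            # Sliding the adjacent tile into the blank cell swaps the two.
            grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
            result.append(tuple(tuple(row) for row in grid))
    return result
```

A corner blank yields two successors and a center blank yields four, which is why the four-move formulation is more compact than enumerating 8 x 4 tile moves.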

8.2 Components of Implicit State-Space Graphs

That part of a state-space graph reachable by actions from the start state is represented implicitly by a description of the start state and by descriptions of the effects of actions that can be taken in any state. Thus, it is possible in principle to transform an implicit representation of a graph into an explicit one. To do so, one generates all of the nodes that are successors of the start node (by applying all possible operators at that node), then generates all of their successors, and so on. For graphs that are too large to represent explicitly, the search process need render explicit only so much of the state space as is required to find a path to the goal. The process terminates when an acceptable path to a goal node has been found.

More formally, there are three basic components to an implicit representation of a state-space graph:

1. A description with which to label the start node. This description is some data structure modeling the initial state of the environment.

2. Functions that transform a state description representing one state of the environment into one that represents the state resulting after an action. These functions are usually called operators. In our agent problems, they are models of the effects of actions. When an operator is applied to a node, it generates one of that node's successors.

3. A goal condition, which can be either a True-False-valued function on state descriptions or a list of actual instances of state descriptions that correspond to goal states.

We will study two broad classes of search processes. In one, we have no problem-specific reason to prefer one part of the search space to any other, insofar as finding a path to the goal is concerned. Such processes are called uninformed. In the other, we do have problem-specific information to help focus the search. Such processes are called heuristic. I begin with the uninformed ones and will discuss heuristic processes in the next chapter.
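
One way (not the book's code) to package the three components just listed is as a simple record holding a start state, the operators bundled into a successor function, and a True-False-valued goal condition; the name `SearchProblem` and the toy instance are assumptions made for the example.

```python
from collections import namedtuple

# Start-node description, operators (as a successor function), goal condition.
SearchProblem = namedtuple("SearchProblem", ["start", "successors", "is_goal"])

# Toy instance: states are integers, the operators add 1 or double, goal is 10.
problem = SearchProblem(
    start=1,
    successors=lambda n: [n + 1, 2 * n],  # applying all operators to a node
    is_goal=lambda n: n == 10,            # True-False-valued goal condition
)
```

Any search procedure written against this interface needs nothing more than these three components, which is exactly what makes the graph representation implicit.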

8.3 Breadth-First Search

Uninformed search procedures apply operators to nodes without using any special knowledge about the problem domain (other than knowledge about what actions are legal ones). Perhaps the simplest uninformed search procedure is breadth-first search. That procedure generates an explicit state-space graph by applying all possible operators to the start node, then applying all possible operators to all the direct successors of the start node, then to their successors, and so on. Search proceeds uniformly outward from the start node. Since we apply all possible operators to a node at each step, it is convenient to group them into a function called the successor function. The successor function, when applied to a node, produces the entire set of nodes that can be produced by applying all of the operators that can be applied to that node. Each application of a successor function to a node is called expanding the node.

Figure 8.2 shows the nodes created in a breadth-first search for a solution to the Eight-puzzle problem. The start and goal nodes are labeled, and the order of node expansions is shown by a numeral next to each node. Nodes of the same depth are expanded according to some fixed order. In expanding a node, I applied operators in the order: move blank left, up, right, down. Even though each move is reversible, I omit the arcs from successors back to their parents. The solution path is shown by the dark line. As we shall see, breadth-first search has the property that when a goal node is found, we have found a path of minimal length to the goal. A disadvantage of breadth-first search, however, is that it requires the generation and storage of a tree whose size is exponential in the depth of the shallowest goal node....
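
The expansion scheme described above can be sketched as a short breadth-first search over an implicitly represented graph. The function name and the toy successor function are illustrative assumptions, not the book's code; only the queue-based, depth-by-depth expansion is the technique being shown.

```python
from collections import deque

def breadth_first_search(start, successors, is_goal):
    """Expand nodes in order of depth; return a shallowest path to a goal."""
    frontier = deque([[start]])   # FIFO queue of paths, shallowest first
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if is_goal(node):
            return path           # the first goal reached lies at minimal depth
        for succ in successors(node):  # expand the node
            if succ not in visited:    # avoid regenerating parents
                visited.add(succ)
                frontier.append(path + [succ])
    return None                   # no goal is reachable from the start

# Toy instance: states are integers, the operators add 1 or triple.
path = breadth_first_search(1, lambda n: [n + 1, 3 * n], lambda n: n == 10)
```

Because the queue is first-in, first-out, every node at depth d is expanded before any node at depth d + 1, which is what guarantees the minimal-length path property; the exponential growth of `frontier` with depth is the storage cost noted above.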

Meet the Author

Nils J. Nilsson's long and rich research career has contributed much to AI. He has written many books, including the classic Principles of Artificial Intelligence. Dr. Nilsson is Kumagai Professor of Engineering, Emeritus, at Stanford University. He has served on the editorial boards of Artificial Intelligence and Machine Learning and as an Area Editor for the Journal of the Association for Computing Machinery. Former Chairman of the Department of Computer Science at Stanford, and former Director of the SRI Artificial Intelligence Center, he is also a past president and Fellow of the American Association for Artificial Intelligence.
