Intelligent agents are employed as the central characters in this new introductory text. Beginning with elementary reactive agents, Nilsson gradually increases their cognitive horsepower to illustrate the most important and lasting ideas in AI. Neural networks, genetic programming, computer vision, heuristic search, knowledge representation and reasoning, Bayes networks, planning, and language understanding are each revealed through the growing capabilities of these agents. The book provides a refreshing and motivating new synthesis of the field by one of AI's master expositors and leading researchers. Artificial Intelligence: A New Synthesis takes the reader on a complete tour of this intriguing new world of AI.
• An evolutionary approach provides a unifying theme
• Thorough coverage of important AI ideas, old and new
• Frequent use of examples and illustrative diagrams
• Extensive coverage of machine learning methods throughout the text
• Citations to over 500 references
• Comprehensive index
8.1 Formulating the State Space
Many problems of practical interest have search spaces so large that they cannot be represented by explicit graphs. Elaborations of the basic search procedure described in the last chapter are then required. First, we need to be especially careful about how we formulate such problems for search; second, we have to have methods for representing large search graphs implicitly; and, third, we need to use efficient methods for searching such large graphs.
In some planning problems, like the one about block stacking, it is not too difficult to conceive of data structures for representing the different world states and the actions that change them. Typically, though, finding representations that result in manageable state-space graphs is difficult. Doing so requires careful analysis of the problem: taking into account symmetries, ignoring irrelevant details, and finding appropriate abstractions. Unfortunately, the task of setting up problems for search is largely an art that still requires human participation.
In addition to problems involving agents stacking blocks, sliding tile problems are often used to illustrate how state-space search is used to plan action sequences. For variety, I will use problems of this sort in this and the next chapter. A typical example is the Fifteen-puzzle, which consists of fifteen tiles set in a four-by-four array, leaving one empty or blank cell into which an adjacent tile can be slid. The task is to find a sequence of tile movements that transforms an initial disposition of tiles into some particular arrangement. The Eight-puzzle is a reduced version, with eight tiles in a three-by-three array. Suppose the object of the puzzle is to slide tiles from the starting configuration until the goal configuration is reached, as shown in Figure 8.1.
In this problem, an obvious iconic state description would be a three-by-three array in which each cell contains one of the numbers 1 through 8 or a symbol representing the blank space. A goal state is the array representing the configuration on the right side of Figure 8.1. The moves between states correspond to sliding a tile into the blank cell. As mentioned, we often have representational choices in setting up the state space of a problem. In the Eight-puzzle, we might imagine that we have 8 x 4 different moves, namely, move 1 up, move 1 down, move 1 left, move 1 right, move 2 up, ..., and so on. (Of course, not all of these moves are possible in any given state.) A more compact formulation involves only four different moves, namely, move blank left, move blank up, move blank right, move blank down. A given start state and the set of possible moves implicitly define the graph of states accessible from the start. The number of nodes in the state-space graph for this representation of the Eight-puzzle is 9! = 362,880. (It happens that the state space for the Eight-puzzle is divided into two separate graphs; a tile configuration in one graph cannot be reached from one in the other graph.)
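The compact "move the blank" formulation can be sketched in a few lines of Python. This is an illustrative sketch, not code from the book: the state is a tuple of nine entries read row by row, with 0 standing for the blank, and the function names are my own.

```python
def successors(state):
    """Return the states reachable by one move of the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    result = []
    # The four candidate moves: blank left, up, right, down.
    for drow, dcol in ((0, -1), (-1, 0), (0, 1), (1, 0)):
        nrow, ncol = row + drow, col + dcol
        if 0 <= nrow < 3 and 0 <= ncol < 3:  # stay on the board
            swap = nrow * 3 + ncol
            nxt = list(state)
            nxt[blank], nxt[swap] = nxt[swap], nxt[blank]
            result.append(tuple(nxt))
    return result

start = (2, 8, 3,
         1, 6, 4,
         7, 0, 5)
print(len(successors(start)))  # blank in the middle of the bottom row: 3 moves
```

Because the blank is the thing that moves, at most four successors exist from any state, matching the four-move formulation above rather than the 8 x 4 per-tile formulation.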
8.2 Components of Implicit State-Space Graphs
That part of a state-space graph reachable by actions from the start state is represented implicitly by a description of the start state and by descriptions of the effects of actions that can be taken in any state. Thus, it is possible in principle to transform an implicit representation of a graph into an explicit one. To do so, one generates all of the nodes that are successors of the start node (by applying all possible operators at that node), then generates all of their successors, and so on. For graphs that are too large to represent explicitly, the search process need render explicit only so much of the state space as is required to find a path to the goal. The process terminates when an acceptable path to a goal node has been found.
More formally, there are three basic components to an implicit representation of a state-space graph: a description of the start state; a set of operators (conveniently grouped into a successor function) that, applied to a state description, produce the descriptions of the states reachable from it by one action; and a goal condition, given either explicitly as one or more goal state descriptions or implicitly as a property that a state description must satisfy.
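These three components can be bundled into a single problem description that a search procedure consumes. The class and field names below are illustrative, not from the text:

```python
class SearchProblem:
    """An implicit representation of a state-space graph."""
    def __init__(self, start, successors, is_goal):
        self.start = start            # 1. a description of the start state
        self.successors = successors  # 2. operators: maps a state description
                                      #    to its successor state descriptions
        self.is_goal = is_goal        # 3. a goal condition on state descriptions

# A trivial instance: reach 3 from 0 by adding 1 or 2.
problem = SearchProblem(
    start=0,
    successors=lambda n: [n + 1, n + 2],
    is_goal=lambda n: n == 3,
)
print(problem.is_goal(3))  # True
```

Any search procedure written against this interface never needs the full graph; it asks for successors only of the states it actually visits.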
8.3 Breadth-First Search
Uninformed search procedures apply operators to nodes without using any special knowledge about the problem domain (other than knowledge about what actions are legal ones). Perhaps the simplest uninformed search procedure is breadth-first search. That procedure generates an explicit state-space graph by applying all possible operators to the start node, then applying all possible operators to all the direct successors of the start node, then to their successors, and so on. Search proceeds uniformly outward from the start node. Since we apply all possible operators to a node at each step, it is convenient to group them into a function called the successor function. The successor function, when applied to a node, produces the entire set of nodes that can be produced by applying all of the operators that can be applied to that node. Each application of a successor function to a node is called expanding the node.
Figure 8.2 shows the nodes created in a breadth-first search for a solution to the Eight-puzzle problem. The start and goal nodes are labeled, and the order of node expansions is shown by a numeral next to each node. Nodes of the same depth are expanded according to some fixed order. In expanding a node, I applied operators in the order: move blank left, up, right, down. Even though each move is reversible, I omit the arcs from successors back to their parents. The solution path is shown by the dark line. As we shall see, breadth-first search has the property that when a goal node is found, we have found a path of minimal length to the goal. A disadvantage of breadth-first search, however, is that it requires the generation and storage of a tree whose size is exponential in the depth of the shallowest goal node....
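The procedure just described can be sketched as follows, again using the "move the blank" formulation of the Eight-puzzle. Each node records its parent so that the minimal-length path can be recovered once a goal node is found. This is a minimal sketch with illustrative names, not the book's code:

```python
from collections import deque

def successors(state):
    """States reachable by one move of the blank (left, up, right, down)."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for drow, dcol in ((0, -1), (-1, 0), (0, 1), (1, 0)):
        nrow, ncol = row + drow, col + dcol
        if 0 <= nrow < 3 and 0 <= ncol < 3:
            swap = nrow * 3 + ncol
            nxt = list(state)
            nxt[blank], nxt[swap] = nxt[swap], nxt[blank]
            yield tuple(nxt)

def breadth_first_search(start, goal):
    parent = {start: None}         # doubles as the set of generated nodes
    frontier = deque([start])      # FIFO queue: expand shallowest nodes first
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []              # walk parent links back to the start
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]      # start-to-goal, minimal length
        for succ in successors(node):
            if succ not in parent:
                parent[succ] = node
                frontier.append(succ)
    return None                    # goal in the other, unreachable graph

start = (1, 2, 3,
         4, 0, 5,
         7, 8, 6)
goal = (1, 2, 3,
        4, 5, 6,
        7, 8, 0)
path = breadth_first_search(start, goal)
print(len(path) - 1)  # number of moves on a shortest solution: 2
```

The FIFO frontier is what gives breadth-first search its minimal-length guarantee, and the `parent` dictionary is also where the exponential storage cost noted above shows up: every generated node is retained.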
2 Stimulus-Response Agents
3 Neural Networks
4 Machine Evolution
5 State Machines
6 Robot Vision
7 Agents that Plan
8 Uninformed Search
9 Heuristic Search
10 Planning, Acting, and Learning
11 Alternative Search Formulations and Applications
12 Adversarial Search
13 The Propositional Calculus
14 Resolution in the Propositional Calculus
15 The Predicate Calculus
16 Resolution in the Predicate Calculus
17 Knowledge-Based Systems
18 Representing Commonsense Knowledge
19 Reasoning with Uncertain Information
20 Learning and Acting with Bayes Nets
21 The Situation Calculus
22 Planning
23 Multiple Agents
24 Communication Among Agents
25 Agent Architectures