Artificial Intelligence: A New Synthesis / Edition 1
by Nils J. Nilsson
Paperback

$76.95 
Overview

Intelligent agents are employed as the central characters in this introductory text. Beginning with elementary reactive agents, Nilsson gradually increases their cognitive horsepower to illustrate the most important and lasting ideas in AI. Neural networks, genetic programming, computer vision, heuristic search, knowledge representation and reasoning, Bayes networks, planning, and language understanding are each revealed through the growing capabilities of these agents. A distinguishing feature of this text is its evolutionary approach to the study of AI. This book provides a refreshing and motivating synthesis of the field by one of AI's master expositors and leading researchers.

Product Details

ISBN-13: 9781558605350
Publisher: Morgan Kaufmann Publishers Inc.
Publication date: 04/06/2009
Pages: 536
Product dimensions: 7.50(w) x 9.25(h) x 1.08(d)

About the Author

Nils J. Nilsson's long and rich research career has contributed much to AI. He has written many books, including the classic Principles of Artificial Intelligence. Dr. Nilsson is Kumagai Professor of Engineering, Emeritus, at Stanford University. He has served on the editorial boards of Artificial Intelligence and Machine Learning and as an Area Editor for the Journal of the Association for Computing Machinery. Former Chairman of the Department of Computer Science at Stanford, and former Director of the SRI Artificial Intelligence Center, he is also a past president and Fellow of the American Association for Artificial Intelligence.

Read an Excerpt

Chapter 8: Uninformed Search

8.1 Formulating the State Space

Many problems of practical interest have search spaces so large that they cannot be represented by explicit graphs. Elaborations of the basic search procedure described in the last chapter are then required. First, we need to be especially careful about how we formulate such problems for search; second, we have to have methods for representing large search graphs implicitly; and, third, we need to use efficient methods for searching such large graphs.

In some planning problems, like the one about block stacking, it is not too difficult to conceive of data structures for representing the different world states and the actions that change them. Typically, though, finding representations that result in manageable state-space graphs is difficult. Doing so requires careful analysis of the problem, taking into account symmetries, ignoring irrelevant details, and finding appropriate abstractions. Unfortunately, the task of setting up problems for search is largely an art that still requires human participation.

In addition to problems involving agents stacking blocks, sliding tile problems are often used to illustrate how state-space search is used to plan action sequences. For variety, I will use problems of this sort in this and the next chapter. A typical example is the Fifteen-puzzle, which consists of fifteen tiles set in a four-by-four array—leaving one empty or blank cell into which an adjacent tile can be slid. The task is to find a sequence of tile movements that transforms an initial disposition of tiles into some particular arrangement. The Eight-puzzle is a reduced version, with eight tiles in a three-by-three array. Suppose the object of the puzzle is to slide tiles from the starting configuration until the goal configuration is reached, as shown in Figure 8.1.

In this problem, an obvious iconic state description would be a three-by-three array in which each cell contains one of the numbers 1 through 8 or a symbol representing the blank space. A goal state is the array representing the configuration on the right side of Figure 8.1. The moves between states correspond to sliding a tile into the blank cell. As mentioned, we often have representation choices in setting up the state space of a problem. In the Eight-puzzle, we might imagine that we have 8 x 4 different moves, namely, move 1 up, move 1 down, move 1 left, move 1 right, move 2 up, ..., and so on. (Of course, not all of these moves are possible in any given state.) A more compact formulation involves only four different moves, namely, move blank left, move blank up, move blank right, and move blank down. A given start state and the set of possible moves implicitly define the graph of states accessible from the start. The number of nodes in the state-space graph for this representation of the Eight-puzzle is 9! = 362,880. (It happens that the state space for the Eight-puzzle is divided into two separate graphs; a tile configuration in one graph cannot be reached from one in the other graph.)
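The compact four-move formulation can be sketched in Python. This is a minimal illustration of my own (the name `blank_moves` and the sample configuration are not from the text), representing a state as a tuple of nine entries read row by row, with 0 standing for the blank:

```python
def blank_moves(state):
    """Apply the four operators -- move blank left, up, right, down --
    skipping any move that would push the blank off the board."""
    i = state.index(0)               # locate the blank in the 3 x 3 array
    row, col = divmod(i, 3)
    for dr, dc in ((0, -1), (-1, 0), (0, 1), (1, 0)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            s = list(state)
            j = r * 3 + c
            s[i], s[j] = s[j], s[i]  # slide the adjacent tile into the blank
            yield tuple(s)

# The center position admits all four moves; a corner admits only two.
center_blank = (1, 2, 3, 8, 0, 4, 7, 6, 5)
print(len(list(blank_moves(center_blank))))   # prints 4
```

A given start state plus this move set implicitly defines the half of the 9! = 362,880-node state space reachable from that start, without ever building the graph explicitly.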

8.2 Components of Implicit State-Space Graphs

That part of a state-space graph reachable by actions from the start state is represented implicitly by a description of the start state and by descriptions of the effects of actions that can be taken in any state. Thus, it is possible in principle to transform an implicit representation of a graph into an explicit one. To do so, one generates all of the nodes that are successors of the start node (by applying all possible operators at that node), then generates all of their successors, and so on. For graphs that are too large to represent explicitly, the search process need render explicit only so much of the state space as is required to find a path to the goal. The process terminates when an acceptable path to a goal node has been found.

More formally, there are three basic components to an implicit representation of a state-space graph:

1. A description with which to label the start node. This description is some data structure modeling the initial state of the environment.

2. Functions that transform a state description representing one state of the environment into one that represents the state resulting after an action. These functions are usually called operators. In our agent problems, they are models of the effects of actions. When an operator is applied to a node, it generates one of that node's successors.

3. A goal condition, which can be either a True-False-valued function on state descriptions or a list of actual instances of state descriptions that correspond to goal states.
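These three components can be sketched concretely. The toy problem below is my own illustration, not from the text: states are integers, the start is 2, the operators double a number or add 3, and the goal is to reach 11.

```python
start_state = 2                     # 1. description labeling the start node

def successors(n):
    """2. Operators grouped into one function: applying it to a node
    generates all of that node's successors."""
    return [n * 2, n + 3]

def is_goal(n):
    """3. Goal condition as a True/False-valued function on state
    descriptions."""
    return n == 11

# Applying all operators to the start node ("expanding" it):
print(successors(start_state))      # prints [4, 5]
```

Any search procedure that consumes only these three pieces can explore the graph without an explicit representation of all its nodes and arcs.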

We will study two broad classes of search processes. In one, we have no problem-specific reason to prefer one part of the search space to any other, insofar as finding a path to the goal is concerned. Such processes are called uninformed. In the other, we do have problem-specific information to help focus the search. Such processes are called heuristic. I begin with the uninformed ones and will discuss heuristic processes in the next chapter.

8.3 Breadth-First Search

Uninformed search procedures apply operators to nodes without using any special knowledge about the problem domain (other than knowledge about what actions are legal ones). Perhaps the simplest uninformed search procedure is breadth-first search. That procedure generates an explicit state-space graph by applying all possible operators to the start node, then applying all possible operators to all the direct successors of the start node, then to their successors, and so on. Search proceeds uniformly outward from the start node. Since we apply all possible operators to a node at each step, it is convenient to group them into a function called the successor function. The successor function, when applied to a node, produces the entire set of nodes that can be produced by applying all of the operators that can be applied to that node. Each application of a successor function to a node is called expanding the node.

Figure 8.2 shows the nodes created in a breadth-first search for a solution to the Eight-puzzle problem. The start and goal nodes are labeled, and the order of node expansions is shown by a numeral next to each node. Nodes of the same depth are expanded according to some fixed order. In expanding a node, I applied operators in the order: move blank left, up, right, down. Even though each move is reversible, I omit the arcs from successors back to their parents. The solution path is shown by the dark line. As we shall see, breadth-first search has the property that when a goal node is found, we have found a path of minimal length to the goal. A disadvantage of breadth-first search, however, is that it requires the generation and storage of a tree whose size is exponential in the depth of the shallowest goal node....
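The procedure can be sketched as follows. This is my own illustrative code, not the book's (the start and goal configurations are for illustration only): a FIFO queue holds unexpanded nodes, operators are applied in the order left, up, right, down, and a parent map both prevents revisiting states and lets us recover the minimal-length path once a goal node is found.

```python
from collections import deque

def successors(state):
    """Expand a node: apply every applicable blank move, in the order
    move blank left, up, right, down (0 marks the blank)."""
    i = state.index(0)
    row, col = divmod(i, 3)
    result = []
    for dr, dc in ((0, -1), (-1, 0), (0, 1), (1, 0)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            s = list(state)
            j = r * 3 + c
            s[i], s[j] = s[j], s[i]
            result.append(tuple(s))
    return result

def breadth_first(start, goal):
    """Search uniformly outward from the start node, one depth layer at
    a time; the first path found to the goal has minimal length."""
    parent = {start: None}            # doubles as the visited set
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:   # trace parents back to the start
                path.append(node)
                node = parent[node]
            return path[::-1]
        for succ in successors(node):
            if succ not in parent:    # store new successors for expansion
                parent[succ] = node
                frontier.append(succ)
    return None                       # goal lies in the other, unreachable graph

start = (2, 8, 3, 1, 6, 4, 7, 0, 5)
goal = (1, 2, 3, 8, 0, 4, 7, 6, 5)
path = breadth_first(start, goal)
print(len(path) - 1)                  # prints 5: length of a shortest solution
```

The exponential storage cost is visible here: `parent` retains every generated node, which is exactly the tree whose size grows exponentially in the depth of the shallowest goal.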

Table of Contents

Artificial Intelligence: A New Synthesis


1 Introduction
1.1 What is AI?
1.2 Approaches to Artificial Intelligence
1.3 Brief History of AI
1.4 Plan of the Book
1.5 Additional Readings and Discussion

I Reactive Machines

2 Stimulus-Response Agents
2.1 Perception and Action
2.1.1 Perception
2.1.2 Action
2.1.3 Boolean Algebra
2.1.4 Classes and Forms of Boolean Functions
2.2 Representing and Implementing Action Functions
2.2.1 Production Systems
2.2.2 Networks
2.2.3 The Subsumption Architecture
2.3 Additional Readings and Discussion

3 Neural Networks
3.1 Introduction
3.2 Training Single TLUs
3.2.1 TLU Geometry
3.2.2 Augmented Vectors
3.2.3 Gradient Descent Methods
3.2.4 The Widrow-Hoff Procedure
3.2.5 The Generalized Delta Procedure
3.2.6 The Error-Correction Procedure
3.3 Neural Networks
3.3.1 Motivation
3.3.2 Notation
3.3.3 The Backpropagation Method
3.3.4 Computing Weight Changes in the Final Layer
3.3.5 Computing Changes to the Weights in Intermediate Layers
3.4 Generalization, Accuracy, and Overfitting
3.5 Additional Readings and Discussion

4 Machine Evolution
4.1 Evolutionary Computation
4.2 Genetic Programming
4.2.1 Program Representation in GP
4.2.2 The GP Process
4.2.3 Evolving a Wall-Following Robot
4.3 Additional Readings and Discussion

5 State Machines
5.1 Representing the Environment by Feature Vectors
5.2 Elman Networks
5.3 Iconic Representations
5.4 Blackboard Systems
5.5 Additional Readings and Discussion

6 Robot Vision
6.1 Introduction
6.2 Steering a Van
6.3 Two Stages of Robot Vision
6.4 Image Processing
6.4.1 Averaging
6.4.2 Edge Enhancement
6.4.3 Combining Edge-Enhancement with Averaging
6.4.4 Region Finding
6.4.5 Using Image Attributes other than Intensity
6.5 Scene Analysis
6.5.1 Interpreting Lines and Curves in the Image
6.5.2 Model-Based Vision
6.6 Stereo Vision
6.7 Additional Readings and Discussion

II Search in State Spaces

7 Agents that Plan
7.1 Memory Versus Computation
7.2 State-Space Graphs
7.3 Searching Explicit State Spaces
7.4 Feature-Based State Spaces
7.5 Graph Notation
7.6 Additional Readings and Discussion

8 Uninformed Search
8.1 Formulating the State Space
8.2 Components of Implicit State-Space Graphs
8.3 Breadth-First Search
8.4 Depth-First or Backtracking Search
8.5 Iterative Deepening
8.6 Additional Readings and Discussion

9 Heuristic Search
9.1 Using Evaluation Functions
9.2 A General Graph-Searching Algorithm
9.2.1 Algorithm A
9.2.2 Admissibility of A
9.2.3 The Consistency (or Monotone) Condition
9.2.4 Iterative-Deepening A
9.2.5 Recursive Best-First Search
9.3 Heuristic Functions and Search Efficiency
9.4 Additional Readings and Discussion

10 Planning, Acting, and Learning
10.1 The Sense/Plan/Act Cycle
10.2 Approximate Search
10.2.1 Island-Driven Search
10.2.2 Hierarchical Search
10.2.3 Limited-Horizon Search
10.2.4 Cycles
10.2.5 Building Reactive Procedures
10.3 Learning Heuristic Functions
10.3.1 Explicit Graphs
10.3.2 Implicit Graphs
10.4 Rewards Instead of Goals
10.5 Additional Readings and Discussion

11 Alternative Search Formulations and Applications
11.1 Assignment Problems
11.2 Constructive Methods
11.3 Heuristic Repair
11.4 Function Optimization

12 Adversarial Search
12.1 Two-Agent Games
12.2 The Minimax Procedure
12.3 The Alpha-Beta Procedure
12.4 The Search Efficiency of the Alpha-Beta Procedure
12.5 Other Important Matters
12.6 Games of Chance
12.7 Learning Evaluation Functions
12.8 Additional Readings and Discussion

III Knowledge Representation and Reasoning

13 The Propositional Calculus
13.1 Using Constraints on Feature Values
13.2 The Language
13.3 Rules of Inference
13.4 Definition of Proof
13.5 Semantics
13.5.1 Interpretations
13.5.2 The Propositional Truth Table
13.5.3 Satisfiability and Models
13.5.4 Validity
13.5.5 Equivalence
13.5.6 Entailment
13.6 Soundness and Completeness
13.7 The PSAT Problem
13.8 Other Important Topics
13.8.1 Language Distinctions
13.8.2 Metatheorems
13.8.3 Associative Laws
13.8.4 Distributive Laws

14 Resolution in The Propositional Calculus
14.1 A New Rule of Inference: Resolution
14.1.1 Clauses as wffs
14.1.2 Resolution on Clauses
14.1.3 Soundness of Resolution
14.2 Converting Arbitrary wffs to Conjunctions of Clauses
14.3 Resolution Refutations
14.4 Resolution Refutation Search Strategies
14.4.1 Ordering Strategies
14.4.2 Refinement Strategies
14.5 Horn Clauses

15 The Predicate Calculus
15.1 Motivation
15.2 The Language and its Syntax
15.3 Semantics
15.3.1 Worlds
15.3.2 Interpretations
15.3.3 Models and Related Notions
15.3.4 Knowledge
15.4 Quantification
15.5 Semantics of Quantifiers
15.5.1 Universal Quantifiers
15.5.2 Existential Quantifiers
15.5.3 Useful Equivalences
15.5.4 Rules of Inference
15.6 Predicate Calculus as a Language for Representing Knowledge
15.6.1 Conceptualizations
15.6.2 Examples
15.7 Additional Readings and Discussion

16 Resolution in the Predicate Calculus
16.1 Unification
16.2 Predicate-Calculus Resolution
16.3 Completeness and Soundness
16.4 Converting Arbitrary wffs to Clause Form
16.5 Using Resolution to Prove Theorems
16.6 Answer Extraction
16.7 The Equality Predicate
16.8 Additional Readings and Discussion

17 Knowledge-Based Systems
17.1 Confronting the Real World
17.2 Reasoning Using Horn Clauses
17.3 Maintenance in Dynamic Knowledge Bases
17.4 Rule-Based Expert Systems
17.5 Rule Learning
17.5.1 Learning Propositional Calculus Rules
17.5.2 Learning First-Order Logic Rules
17.5.3 Explanation-Based Generalization
17.6 Additional Readings and Discussion

18 Representing Commonsense Knowledge
18.1 The Commonsense World
18.1.1 What is Commonsense Knowledge?
18.1.2 Difficulties in Representing Commonsense Knowledge
18.1.3 The Importance of Commonsense Knowledge
18.1.4 Research Areas
18.2 Time
18.3 Knowledge Representation by Networks
18.3.1 Taxonomic Knowledge
18.3.2 Semantic Networks
18.3.3 Nonmonotonic Reasoning in Semantic Networks
18.3.4 Frames
18.4 Additional Readings and Discussion

19 Reasoning with Uncertain Information
19.1 Review of Probability Theory
19.1.1 Fundamental Ideas
19.1.2 Conditional Probabilities
19.2 Probabilistic Inference
19.2.1 A General Method
19.2.2 Conditional Independence
19.3 Bayes Networks
19.4 Patterns of Inference in Bayes Networks
19.5 Uncertain Evidence
19.6 D-Separation
19.7 Probabilistic Inference in Polytrees
19.7.1 Evidence Above
19.7.2 Evidence Below
19.7.3 Evidence Above and Below
19.7.4 A Numerical Example
19.8 Additional Readings and Discussion

20 Learning and Acting with Bayes Nets
20.1 Learning Bayes Nets
20.1.1 Known Network Structure
20.1.2 Learning Network Structure
20.2 Probabilistic Inference and Action
20.2.1 The General Setting
20.2.2 An Extended Example
20.2.3 Generalizing the Example
20.3 Additional Readings and Discussion

IV Planning Methods Based on Logic

21 The Situation Calculus
21.1 Reasoning about States and Actions
21.2 Some Difficulties
21.2.1 Frame Axioms
21.2.2 Qualifications
21.2.3 Ramifications
21.3 Generating Plans
21.4 Additional Reading and Discussion

22 Planning
22.1 STRIPS Planning Systems
22.1.1 Describing States and Goals
22.1.2 Forward Search Methods
22.1.3 Recursive STRIPS
22.1.4 Plans with Runtime Conditionals
22.1.5 The Sussman Anomaly
22.1.6 Backward Search Methods
22.2 Plan Spaces and Partial-Order Planning
22.3 Hierarchical Planning
22.3.1 ABSTRIPS
22.3.2 Combining Hierarchical and Partial-Order Planning
22.4 Learning Plans
22.5 Additional Readings and Discussion

V Communication and Integration

23 Multiple Agents
23.1 Interacting Agents
23.2 Models of Other Agents
23.2.1 Varieties of Models
23.2.2 Simulation Strategies
23.2.3 Simulated Databases
23.2.4 The Intentional Stance
23.3 A Modal Logic of Knowledge
23.3.1 Modal Operators
23.3.2 Knowledge Axioms
23.3.3 Reasoning about Other Agents' Knowledge
23.3.4 Predicting Actions of Other Agents
23.4 Additional Readings and Discussion

24 Commu
