Search has been vital to artificial intelligence from the very beginning as a core technique in problem solving. The authors present a thorough overview of heuristic search that balances theoretical analysis with efficient implementation and application to real-world problems. Current developments in search, such as pattern databases and search with efficient use of external memory and of parallel processing units on main boards and graphics cards, are detailed.
Heuristic search as a problem-solving tool is demonstrated in applications for puzzle solving, game playing, constraint satisfaction, and machine learning. While no previous familiarity with heuristic search is necessary, the reader should have a basic knowledge of algorithms, data structures, and calculus. Real-world case studies and chapter-ending exercises help to create a full and realized picture of how search fits into the world of artificial intelligence and the one around us.
* Provides real-world success stories and case studies for heuristic search algorithms
* Includes many AI developments not yet covered in textbooks, such as pattern databases, symbolic search, and parallel processing units
In this book, we study the theory and the applications of heuristic search algorithms. The general model is the guided exploration in a space of states.
After providing some mathematical and notational background and motivating the impact of search algorithms by reflecting on several success stories, we introduce different state space formalisms. Next we provide examples of single-agent puzzles such as the (n^{2} - 1)-Puzzle and extensions to it, the Rubik's Cube, as well as Sokoban problems. Furthermore, the practically important application areas of Route Planning and Multiple Sequence Alignment are introduced. The former is fundamental to vehicle navigation and the latter to computational biology. In the traveling salesman problem (TSP) we consider the computation of round trips. For each of the domains, we introduce heuristic evaluation functions as a means to accelerate the search. We motivate them graphically and formalize them. We define properties of heuristics, such as consistency and admissibility, and show how they are related. Moreover, we define the general descriptive schemes of production systems, Markov decision process problems, and action planning. For the case of a production system, we will see that general state space problem solving is in fact undecidable. For the case of action planning, we will see how to derive some problem-independent heuristics.
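As a concrete preview of such a heuristic evaluation function, here is a minimal Python sketch (an illustration, not code from the book) of the Manhattan distance heuristic for the (n^2 - 1)-Puzzle. It sums, over all tiles, the grid distance between each tile's current and goal positions; it never overestimates the true solution cost (admissible) and changes by at most one per move (consistent).

```python
# Manhattan distance heuristic for the (n^2 - 1)-Puzzle.
# A state is a tuple listing the tile at each board position in
# row-major order, with 0 denoting the blank.

def manhattan(state, n):
    total = 0
    for pos, tile in enumerate(state):
        if tile == 0:          # the blank does not count
            continue
        goal = tile - 1        # goal layout: 1, 2, ..., n^2 - 1, blank last
        total += abs(pos // n - goal // n) + abs(pos % n - goal % n)
    return total

# Example: the 8-Puzzle (n = 3), one move away from the goal.
goal_state = (1, 2, 3, 4, 5, 6, 7, 8, 0)
one_off = (1, 2, 3, 4, 5, 6, 7, 0, 8)
print(manhattan(goal_state, 3))  # 0
print(manhattan(one_off, 3))     # 1
```

The state encoding and goal layout here are assumptions made for the example; the book formalizes the puzzle and its heuristics later in the text.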
1.1 NOTATIONAL AND MATHEMATICAL BACKGROUND
Algorithms are specifications of action sequences, similar to recipes for cooking. The description should be concrete enough to cook a tasty meal. On the other hand, some abstraction is necessary to keep the presentation readable; we don't teach the cook how to dice onions. In presenting algorithms in computer science, the situation is similar. The presentation should be concrete enough to allow analysis and reimplementation, but abstract enough to be ported to different programming languages and machines.
1.1.1 Pseudo Code
A program representation in a fictitious, partly abstract programming language is called pseudo code. However, its intention is to give a high-level description of an algorithm to a human, not to a machine. Therefore, irrelevant details (e.g., memory management code) are usually omitted, and sometimes natural language is used when convenient.
Most programs consist of assignments (<-), selection (e.g., branching based on if conditions), and iteration (e.g., while loops). Subroutine calls are important to structure the program and to implement recursion, and are shown in italics. In the pseudo code implementations, we use the following constructs.
if ((condition)) (body) else (alternative): Branching of the program based on the case selected in the Boolean predicate condition.
and, or, not: Logical operations on Boolean conditions.
while ((condition)) (body): Loop whose condition is checked prior to the execution of its body.
do (body) while ((condition)): Loop whose condition is checked after the execution of its body.
for each (element) in (Set): Variable element iterates over the (often ordered) set Set.
return: Backtrack to the calling procedure with a result.
Conditional and loop constructs introduce compound statements; that is, the parts of the statements constitute lists of statements (blocks) themselves. To clarify the hierarchical structure of the program, sometimes explicit begin and end statements are given. Different from this convention, in this book, in the interest of succinctness we chose to rely solely on indentation. For example, in the following fragment, note the end of the block that is executed in case the condition evaluates to false:
if ((condition))
  (if-true-statement 1)
  (if-true-statement 2)
  ...
else
  (if-false-statement 1)
  (if-false-statement 2)
  ...
(after-if-statement 1)
...
For easier understanding, each line in the pseudo code is annotated with some short comments, separated from the program code by a double semicolon.
1.1.2 Computability Theory
Computability theory is the branch of theoretical computer science that studies which problems are computationally solvable using different computation models. Computability theory differs from the related discipline of computational complexity theory (see next section) in asking whether a problem can be solved at all, given any finite but arbitrarily large amount of resources.
A common model of computation is based on an abstract machine, the Turing machine (see Fig. 1.1). The computational model is very simple and assumes a computer M in the form of a 7-tuple M = (Q, Σ, Λ, δ, B, F, q_{0}), with state set Q, input alphabet Σ, tape alphabet Λ, transition function δ : Q x Λ -> Q x Λ x {L,R,N}, blank symbol B, final state set F, and initial state q_{0}.
The machine takes some input word (w_{1}, ..., w_{n}) over the alphabet Σ and assumes that it is already located on the tape at positions 1, ..., n. The initial head position is 1. The process of computation terminates if some final state in F is reached. The state transition function δ sets the new state in Q, which can be interpreted as performing one step in the program that is run on the Turing machine. Then a character is written on the tape at the current head position. Depending on the value in {L,R,N}, the head is moved to the left (L), moved to the right (R), or remains unchanged (N). The output of the computation is the content of the tape after termination. The machine solves a decision problem when for an input string, it produces a binary output signifying "yes" or "no." A problem is decidable if a Turing machine exists that always gives an answer in finite time.
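The single-step behavior described above can be made concrete with a small simulator. The following Python sketch (an illustration, not code from the book) implements the 7-tuple model; the example machine is a made-up one that decides whether an input over {'1'} has even length.

```python
# A minimal Turing machine simulator following the 7-tuple model.
# delta maps (state, symbol) -> (new state, written symbol, move),
# with move in {'L', 'R', 'N'}.

def run(delta, q0, finals, blank, tape_input):
    tape = dict(enumerate(tape_input, start=1))  # input at positions 1..n
    q, head = q0, 1                              # initial head position is 1
    while q not in finals:
        sym = tape.get(head, blank)              # unwritten cells hold the blank
        q, write, move = delta[(q, sym)]
        tape[head] = write
        head += {'L': -1, 'R': 1, 'N': 0}[move]
    return q

# Example machine: accept ("yes") iff the input has even length.
# Scanning states 'even'/'odd' track parity; 'yes'/'no' are final.
delta = {
    ('even', '1'): ('odd',  '1', 'R'),
    ('odd',  '1'): ('even', '1', 'R'),
    ('even', 'B'): ('yes',  'B', 'N'),
    ('odd',  'B'): ('no',   'B', 'N'),
}
print(run(delta, 'even', {'yes', 'no'}, 'B', '1111'))  # yes
print(run(delta, 'even', {'yes', 'no'}, 'B', '111'))   # no
```

This machine always halts, so the language it recognizes is decidable; a machine that loops on some inputs would only semi-decide its language.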
Since the time of Turing, many other formalisms for describing effective computability have been proposed, including recursive functions, the lambda calculus, register machines, Post systems, combinatorial logic, and Markov algorithms. The computational equivalence of all these systems corroborates the validity of the Church-Turing thesis: Every function that would naturally be regarded as computable can be computed by a Turing machine.
A recursively enumerable set S is a set such that a Turing machine exists that successively outputs all of its members. An equivalent condition is that we can specify an algorithm that terminates and answers "yes" if the input is in S; if the input is not in S, the computation might not halt at all. Therefore, recursively enumerable sets are also called semi-decidable.
A function f(x) is computable if the set of all input–output pairs is recursively enumerable. Decision problems are often considered because an arbitrary problem can always be reduced to a decision problem, by enumerating possible pairs of domain and range elements and asking, "Is this the correct output?"
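The equivalence between enumeration and semi-decision can be sketched directly: given an enumerator for S, we obtain a semi-decision procedure by watching the enumeration for the input. The following Python sketch is an illustration under that assumption; the stand-in enumerator yields the perfect squares (a decidable set, used here only to make the loop concrete).

```python
# A semi-decision procedure built from an enumerator: answer "yes"
# when the input appears in the enumeration; if the input is not in
# S, the loop never terminates.

def squares():
    # Illustrative enumerator for S = the set of perfect squares.
    i = 0
    while True:
        yield i * i
        i += 1

def semi_decide(x, enumerator):
    for member in enumerator():
        if member == x:
            return True   # halts with "yes" whenever x is in S
    # unreachable for an infinite enumerator: if x is not in S,
    # the loop above simply runs forever

print(semi_decide(49, squares))  # True
# semi_decide(50, squares) would never return.
```

For squares we could of course bound the search and decide membership outright; a genuinely semi-decidable set admits no such bound.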
1.1.3 Complexity Theory
Complexity theory is part of the theory of computation dealing with the resources required during computation to solve a given problem, predominantly time (how many steps it takes to solve a problem) and space (how much memory it takes). Complexity theory differs from computability theory, which deals with whether a problem can be solved at all, regardless of the resources required.
The class of problems that can be solved using space s(n), for some function s(n), is denoted by DSPACE(s(n)); the class of those that can be solved using time t(n) is denoted by DTIME(t(n)). The problem class P consists of all problems that can be solved in polynomial time; that is, it is the union of the complexity classes DTIME(t(n)), for all polynomials t(n).
A nondeterministic Turing machine is a (nonrealizable) generalization of the standard deterministic Turing machine whose transition rules can allow more than one successor configuration, and all these alternatives can be explored in parallel (another way of thinking of this is by means of an oracle suggesting the correct branches). The corresponding complexity classes for nondeterministic Turing machines are called NSPACE(s(n)) and NTIME(t(n)). The complexity class NP (for nondeterministic polynomial) is the union of the classes NTIME(t(n)), for all polynomials t(n). Note that a deterministic Turing machine might not be able to compute the solution for a problem in NP in polynomial time; however, it can verify a solution efficiently if it is given enough information (a.k.a. a certificate) about the solution (besides the answer "yes" or "no").
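The certificate idea is easy to make concrete with satisfiability, the standard NP example. The Python sketch below (an illustration, not code from the book) verifies a proposed truth assignment for a CNF formula in time linear in the formula size, even though finding such an assignment may be hard.

```python
# Polynomial-time certificate verification for SAT.
# A formula is a list of clauses; each clause is a list of integers,
# where a positive integer means the variable and a negative integer
# means its negation. The certificate is a truth assignment.

def verify_sat(clauses, assignment):
    # assignment maps variable number -> bool; a clause is satisfied
    # if at least one of its literals evaluates to true.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(verify_sat(formula, {1: True, 2: False, 3: True}))   # True
print(verify_sat(formula, {1: False, 2: False, 3: False})) # False
```

The gap between this cheap check and the exponential-looking search for a satisfying assignment is exactly the P versus NP question discussed below.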
Both P and NP are contained in the class PSPACE, which is the union of DSPACE(s(n)) for any polynomial function s(n); PSPACE doesn't impose any restriction on the required time.
A problem S is hard for a class C if all problems S' ∈ C are polynomially reducible to S; this means that there is a polynomial algorithm to transform an input x' of S' into an input x of S, such that the answer for S applied to x is "yes" if and only if the answer for S' applied to x' is "yes." A problem S is complete for a class C if it is a member of C and it is hard for C.
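A textbook instance of such a reduction (not one worked in this excerpt) is Independent Set to Clique: transform a graph G into its complement G'; then G has an independent set of size k if and only if G' has a clique of size k, and the transformation is clearly polynomial. A Python sketch:

```python
# Polynomial reduction from Independent Set to Clique: build the
# complement graph. Vertices that are pairwise non-adjacent in G
# become pairwise adjacent in the complement.

from itertools import combinations

def complement(vertices, edges):
    all_pairs = set(frozenset(p) for p in combinations(vertices, 2))
    return all_pairs - set(frozenset(e) for e in edges)

# G: the path 1-2-3; {1, 3} is an independent set of size 2 ...
vertices = [1, 2, 3]
edges = [(1, 2), (2, 3)]
comp = complement(vertices, edges)
# ... and {1, 3} is a clique of size 2 (an edge) in the complement.
print(frozenset((1, 3)) in comp)  # True
```

The "yes iff yes" condition in the definition holds here by construction: any k vertices are pairwise non-adjacent in G exactly when they are pairwise adjacent in the complement.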
One of the big challenges in computer science is to prove that P [not equal to] NP. To refute this conjecture, it would suffice to devise a deterministic polynomial algorithm for some NP-complete problem. However, most people believe that this is not possible. Figure 1.2 graphically depicts the relations between the mentioned complexity classes.
Although the model of a Turing machine seems quite restrictive, other, more realistic models, such as random access machines, can be simulated with polynomial overhead. Therefore, for merely deciding whether an algorithm is efficient or not (i.e., whether the problem admits at most a polynomial-time or requires at least an exponential-time algorithm), complexity classes based on Turing machines are sufficiently expressive. Only when considering hierarchical and parallel algorithms will we encounter situations where this model is no longer adequate.
1.1.4 Asymptotic Resource Consumption
In the previous section, we took a quite crude view of complexity classes, distinguishing merely between polynomial and more-than-polynomial complexity. While the actual degree of the polynomial can be preeminently important for practical applications (a "polynomial" algorithm might still not be practically feasible), exponential complexity cannot be deemed efficient, because each constant increase in the input size multiplies the required resources.
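The contrast is worth seeing numerically. In this small illustration (numbers chosen for the example, not taken from the book), doubling the input size multiplies a cubic cost by a constant factor of 8, but squares an exponential one:

```python
# Polynomial (n^3) versus exponential (2^n) growth: doubling n
# multiplies the cubic cost by 8, but squares the exponential cost.

for n in (10, 20, 40):
    print(n, n ** 3, 2 ** n)
# 10   1000          1024
# 20   8000       1048576
# 40  64000 1099511627776
```

At n = 40 the exponential term already exceeds a trillion steps, which is why exponential-time algorithms are written off as inefficient regardless of constant factors.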
(Continues...)
Excerpted from Heuristic Search by Stefan Edelkamp Stefan Schrödl Copyright © 2012 by Elsevier Inc. . Excerpted by permission of MORGAN KAUFMANN. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.
PART I Heuristic Search Primer
Chapter 1 Introduction
Chapter 2 Basic Search Algorithms
Chapter 3 Dictionary Data Structures
Chapter 4 Automatically Created Heuristics

PART II Heuristic Search under Memory Constraints
Chapter 5 Linear-Space Search
Chapter 6 Memory Restricted Search
Chapter 7 Symbolic Search
Chapter 8 External Search

PART III Heuristic Search under Time Constraints
Chapter 9 Distributed Search
Chapter 10 State Space Pruning
Chapter 11 Real-Time Search by Sven Koenig

PART IV Heuristic Search Variants
Chapter 12 Adversary Search
Chapter 13 Constraint Search
Chapter 14 Selective Search

PART V Heuristic Search Applications
Chapter 15 Action Planning
Chapter 16 Automated System Verification
Chapter 17 Vehicle Navigation
Chapter 18 Computational Biology
Chapter 19 Robotics by Sven Koenig
Fringe Independent Review
Posted September 4, 2011
Do you have a background in artificial intelligence (AI) heuristic state space search? If you don't, then this book is for you! Authors Stefan Edelkamp and Stefan Schroedl have done an outstanding job of writing a book that is a comprehensive introduction to AI heuristic state space search. Edelkamp and Schroedl begin by proving the correctness of the basic search algorithms and discussing the optimal efficiency of A*. In addition, the authors take a closer look at efficient data structures. They then present search techniques with main memory requirements that scale linearly with the search depth. The authors then consider memory-restricted state-caching algorithms that dynamically extend the search frontier. The authors continue by showing you how to represent states and actions as functions, and then briefly address the problem of obtaining an efficient state encoding. In addition, the authors give an introduction to the more general topic of I/O-efficient algorithms. They then introduce you to parallel state space search algorithms, starting with parallel depth-first search and heading toward parallel heuristic search. The authors then discuss real-time search and illustrate it with examples. Then, the authors show you why adversaries introduce an element of uncertainty in the search process. They continue by showing you why constraint technology has evolved into one of the most effective search options, which is understandable, because its declarative formulation makes it easy to use. Next, the authors study selective search algorithms, a generic term used to cover aspects of local search and randomized search. They then introduce designs for planning heuristics and show you how to tackle more expensive planning formalisms, such as metric and temporal planning, as well as planning with constraints. In addition, the authors give examples of symbolic search in solving diagnosis problems.
They then briefly review the interplay of search algorithms and other components of navigation systems, as well as discuss particular algorithmic challenges arising in this field. The authors then provide a coherent perspective and point out recent trends in research on this ubiquitous problem in computational biology. Finally, they explain why there is a need to speed up search by developing robot-navigation methods that sacrifice the minimality of their trajectories. In this most excellent book, the authors' intent is to strike a balance between search algorithms and their theoretical analysis on one hand, and their efficient implementation and application to important real-world problems on the other. Perhaps more importantly, this book supplements broad textbooks on AI and serves as a primary textbook for more advanced AI classes on search.