More About This Textbook
Overview
From the very beginning of artificial intelligence, search has been vital as a core technique in problem solving. Heuristic Search provides a thorough overview of heuristic search, balancing theoretical analysis with efficient implementation and application to real-world problems. Current developments in search, such as pattern databases and search with efficient use of external memory and parallel processing units on main boards and graphics cards, are detailed.
Search as a problem-solving tool is demonstrated in applications for puzzle solving, game playing, constraint satisfaction, and machine learning. While no previous familiarity with heuristic search is necessary, the reader should have a basic knowledge of algorithms, data structures, and calculus. Real-world case studies and chapter-ending exercises are included.
The content is organized into five parts as follows:
Editorial Reviews
From the Publisher
"Heuristic search lies at the core of Artificial Intelligence and it provides the foundations for many different approaches in problem solving. This book provides a comprehensive yet deep description of the main algorithms in the field, along with a very complete discussion of their main applications. Very well-written, it embellishes every algorithm with pseudo code and technical studies of their theoretical performance." Carlos Linares López, Universidad Carlos III de Madrid
Meet the Author
Stefan Edelkamp is a senior researcher and lecturer at the University of Bremen, where he heads projects on intrusion detection, on model checking, and on planning for general game playing. He received an M.S. degree from the University of Dortmund for his Master's thesis on "Weak Heapsort", and a Ph.D. degree from the University of Freiburg for his dissertation on "Data Structures and Learning Algorithms in State Space Search". Later on, he obtained a postdoctoral lecture qualification (Venia Legendi) for his habilitation on "Heuristic Search". His planning systems won various first- and second-place performance awards at International Planning Competitions. Stefan Edelkamp has published extensively on search, serves as a member of program committees (including recent editions of SARA, SOCS, ICAPS, ECAI, IJCAI, and AAAI) and of steering committees (including SPIN and MOCHART). He is a member of the editorial board of JAIR and organizes international workshops, tutorials, and seminars in his area of expertise. In 2011 he will co-chair the ICAPS Conference as well as the German Conference on AI.
Stefan Schroedl is a researcher and software developer in the areas of artificial intelligence and machine learning. He worked as a freelance software developer for different companies in Germany and Switzerland, among others designing and realizing a route-finding system for a leading commercial product in Switzerland. At DaimlerChrysler Research, he continued to work on the automated generation and search of route maps based on global positioning traces. Stefan Schroedl later joined Yahoo! Labs to develop auction algorithms, relevance prediction, and user personalization systems for web search advertising. In his current position at A9.com, he strives to improve Amazon.com's product search using machine-learned ranking models. He has published on route-finding algorithms, memory-limited and external-memory search, as well as on search for solving DNA sequence alignment problems. Stefan Schroedl holds a Ph.D. for his dissertation "Negation as Failure in Explanation Based Generalization", and an M.S. degree for his thesis "Coupling Numerical and Symbolic Methods in the Analysis of Neurophysiological Experiments".
Read an Excerpt
Heuristic Search
Theory and Applications
By Stefan Edelkamp and Stefan Schrödl
MORGAN KAUFMANN
Copyright © 2012 Elsevier Inc. All rights reserved.
ISBN: 9780080919737
Chapter One
Introduction

In this book, we study the theory and the applications of heuristic search algorithms. The general model is the guided exploration in a space of states.
After providing some mathematical and notational background and motivating the impact of search algorithms by reflecting on several success stories, we introduce different state space formalisms. Next we provide examples of single-agent puzzles such as the (n² − 1)-Puzzle and extensions to it, the Rubik's Cube, as well as Sokoban problems. Furthermore, the practically important application areas of Route Planning and Multiple Sequence Alignment are introduced; the former is fundamental to vehicle navigation and the latter to computational biology. In the Traveling Salesman Problem (TSP), we consider the computation of round trips. For each of the domains, we introduce heuristic evaluation functions as a means to accelerate the search. We motivate them graphically and formalize them. We define properties of heuristics, such as consistency and admissibility, and how they are related. Moreover, we define the general descriptive schemes of production systems, Markov decision process problems, and action planning. For the case of a production system, we will see that general state space problem solving is in fact undecidable. For the case of action planning, we will see how to derive some problem-independent heuristics.
1.1 NOTATIONAL AND MATHEMATICAL BACKGROUND
Algorithms are specifications of action sequences, similar to recipes for cooking. The description should be concrete enough to cook a tasty meal. On the other hand, some abstraction is necessary to keep the presentation readable; we don't teach the cook how to dice onions. In presenting algorithms in computer science, the situation is similar. The presentation should be concrete enough to allow analysis and reimplementation, but abstract enough to be ported to different programming languages and machines.
1.1.1 Pseudo Code
A program representation in a fictitious, partly abstract programming language is called pseudo code. Its intention is to give a high-level description of an algorithm to a human, not to a machine. Therefore, irrelevant details (e.g., memory management code) are usually omitted, and sometimes natural language is used when convenient.
Most programs consist of assignments (←), selection (e.g., branching based on if conditions), and iteration (e.g., while loops). Subroutine calls are important to structure the program and to implement recursion, and are shown in italics. In the pseudo code implementations, we use the following constructs.
if (condition) (body) else (alternative) -- Branching of the program based on the Boolean predicate condition.
and, or, not -- Logical operations on Boolean conditions.
while (condition) (body) -- Loop whose condition is checked prior to the execution of its body.
do (body) while (condition) -- Loop whose condition is checked after the execution of its body.
for each (element) in (Set) -- Variable element iterates over the (often ordered) set Set.
return -- Backtrack to the calling procedure with a result.
Conditional and loop constructs introduce compound statements; that is, the parts of the statements constitute lists of statements (blocks) themselves. To clarify the hierarchical structure of the program, sometimes explicit begin and end statements are given. Different from this convention, in this book, in the interest of succinctness we chose to rely solely on indentation. For example, in the following fragment, note the end of the block that is executed in case the condition evaluates to false:
if (condition)
    (if-true-statement 1)
    (if-true-statement 2)
    ...
else
    (if-false-statement 1)
    (if-false-statement 2)
    ...
(after-if-statement 1)
...
For easier understanding, each line in the pseudo code is annotated with some short comments, separated from the program code by a double semicolon.
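As a concrete illustration of how these conventions map onto a real programming language, the fragment above could be rendered in Python roughly as follows (the function name and statement placeholders are ours, purely for illustration; the book itself stays in pseudo code):

```python
def process(condition):
    """Illustrative Python rendering of the pseudo-code if/else fragment.

    Indentation alone delimits the blocks, exactly as in the book's
    pseudo code; comments play the role of the double-semicolon remarks.
    """
    trace = []
    if condition:                      # if (condition)
        trace.append("if-true-1")      # if-true-statement 1
        trace.append("if-true-2")      # if-true-statement 2
    else:
        trace.append("if-false-1")     # if-false-statement 1
        trace.append("if-false-2")     # if-false-statement 2
    trace.append("after-if-1")         # after-if-statement 1
    return trace                       # backtrack to caller with result
```

Note how the end of the else block is visible only through the decrease in indentation, which is precisely the convention adopted in this book.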
1.1.2 Computability Theory
Computability theory is the branch of theoretical computer science that studies which problems are computationally solvable using different computation models. Computability theory differs from the related discipline of computational complexity theory (see next section) in asking whether a problem can be solved at all, given any finite but arbitrarily large amount of resources.
A common model of computation is based on an abstract machine, the Turing machine (see Fig. 1.1). The computational model is very simple and assumes a computer M in the form of a 7-tuple M = (Q, Σ, Λ, δ, B, F, q_0), with state set Q, input alphabet Σ, tape alphabet Λ, transition function δ: Q × Λ → Q × Λ × {L, R, N}, blank symbol B, final state set F, and initial state q_0.
The machine takes some input word (w_1, ..., w_n) over the alphabet Σ and assumes that it is already located on the tape at positions 1, ..., n. The initial head position is 1. The process of computation terminates if some final state in F is reached. The state transition function δ sets the new state in Q, which can be interpreted as performing one step in the program that is run on the Turing machine. Then a character is written on the tape at the current head position. Depending on the value in {L, R, N}, the head is moved to the left (L), moved to the right (R), or remains at its position (N), respectively. The output of the computation is the content of the tape after termination. The machine solves a decision problem when, for an input string, it produces a binary output signifying "yes" or "no." A problem is decidable if a Turing machine exists that always gives an answer in finite time.
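The transition mechanics just described can be sketched in a few lines of Python. The simulator, the example machine, and the step budget below are our own illustrative choices, not taken from the text:

```python
def run_turing_machine(delta, q0, finals, blank, word, max_steps=10_000):
    """Minimal deterministic Turing machine simulator (illustrative sketch).

    delta maps (state, symbol) -> (new_state, written_symbol, move),
    with move in {"L", "R", "N"}. The input word occupies tape
    positions 1..n; the head starts at position 1. Returns a pair
    (reached_final_state, tape_contents).
    """
    tape = dict(enumerate(word, start=1))      # positions 1..n hold the input
    pos, q = 1, q0
    for _ in range(max_steps):
        if q in finals:                        # computation terminates
            return True, tape
        symbol = tape.get(pos, blank)
        q, written, move = delta[(q, symbol)]  # one step of the "program"
        tape[pos] = written                    # write at current head position
        pos += {"L": -1, "R": 1, "N": 0}[move]
    return False, tape                         # step budget exhausted

# Example machine: scan right, replacing each 1 by 0, halt at the first blank.
delta = {
    ("scan", "1"): ("scan", "0", "R"),
    ("scan", "B"): ("halt", "B", "N"),
}
accepted, tape = run_turing_machine(delta, "scan", {"halt"}, "B", "111")
```

After termination the output is read off the tape, here "000" on positions 1 to 3.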
Since the time of Turing, many other formalisms for describing effective computability have been proposed, including recursive functions, the lambda calculus, register machines, Post systems, combinatory logic, and Markov algorithms. The computational equivalence of all these systems corroborates the validity of the Church-Turing thesis: Every function that would naturally be regarded as computable can be computed by a Turing machine.
A recursively enumerable set S is a set such that a Turing machine exists that successively outputs all of its members. An equivalent condition is that we can specify an algorithm that terminates and answers "yes" whenever the input is in S; if the input is not in S, the computation might not halt at all. Therefore, recursively enumerable sets are also called semidecidable.
A function f(x) is computable if the set of all input–output pairs is recursively enumerable. Decision problems are often considered because an arbitrary problem can always be reduced to a decision problem, by enumerating possible pairs of domain and range elements and asking, "Is this the correct output?"
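To make the asymmetry of semidecidability concrete, here is a small hypothetical example of our own (not from the text): the set S of positive integers whose Collatz iteration eventually reaches 1. A semi-decision procedure simply runs the iteration and answers "yes" on reaching 1; if some n were not in S, the loop would never return, which is exactly the behavior the definition permits:

```python
def semi_decide_collatz(n, max_steps=None):
    """Semi-decision procedure for S = {n >= 1 : Collatz iteration
    from n reaches 1}.

    Answers True once membership is confirmed. If n were not in S,
    the unbounded loop would run forever; an optional step budget
    lets us give up and return None, which is *not* a "no" answer.
    """
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if max_steps is not None and steps >= max_steps:
            return None  # no conclusion yet -- cannot answer "no"
    return True
```

Whether S contains every positive integer is the famous open Collatz conjecture, which is what makes this a genuinely one-sided membership test.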
1.1.3 Complexity Theory
Complexity theory is part of the theory of computation dealing with the resources required during computation to solve a given problem, predominantly time (how many steps it takes to solve a problem) and space (how much memory it takes). Complexity theory differs from computability theory, which deals with whether a problem can be solved at all, regardless of the resources required.
The class of algorithms that need space s(n), for some function s(n), is denoted by the class DSPACE(s(n)); and those that use time t(n) is denoted by DTIME(t(n)). The problem class P consists of all problems that can be solved in polynomial time; that is, it is the union of complexity classes DTIME(t(n)), for all polynomials t(n).
A nondeterministic Turing machine is a (nonrealizable) generalization of the standard deterministic Turing machine in which the transition rules can allow more than one successor configuration, and all these alternatives can be explored in parallel (another way of thinking of this is by means of an oracle suggesting the correct branches). The corresponding complexity classes for nondeterministic Turing machines are called NSPACE(s(n)) and NTIME(t(n)). The complexity class NP (for nondeterministic polynomial) is the union of the classes NTIME(t(n)), for all polynomials t(n). Note that a deterministic Turing machine might not be able to compute the solution for a problem in NP in polynomial time; however, it can verify it efficiently if it is given enough information (a.k.a. a certificate) about the solution (besides the answer "yes" or "no").
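The verification view of NP can be made concrete with a small example of our own choosing (SUBSET-SUM, a standard NP-complete problem not discussed at this point in the text): finding a qualifying subset may take exponential time deterministically, but checking a proposed certificate is a simple polynomial-time scan:

```python
def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time verifier for SUBSET-SUM (illustrative sketch).

    The certificate is a list of indices into `numbers`. Verification
    only checks that the indices are valid, distinct, and that the
    selected numbers sum to the target -- all in linear time, even
    though *finding* such a certificate may be hard.
    """
    return (all(0 <= i < len(numbers) for i in certificate)
            and len(set(certificate)) == len(certificate)
            and sum(numbers[i] for i in certificate) == target)
```

The certificate plays the role of the oracle's suggested branches: given it, the "yes" answer can be confirmed efficiently.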
Both P and NP are contained in the class PSPACE, which is the union of DSPACE(s(n)) for any polynomial function s(n); PSPACE doesn't impose any restriction on the required time.
A problem S is hard for a class C if all other problems S′ ∈ C are polynomially reducible to S; this means that there is a polynomial-time algorithm to transform an input x′ of S′ into an input x of S, such that the answer for S applied to x is "yes" if and only if the answer for S′ applied to x′ is "yes." A problem S is complete for a class C if it is a member of C and it is hard for C.
One of the big challenges in computer science is to prove that P ≠ NP. To refute this conjecture, it would suffice to devise a deterministic polynomial algorithm for some NP-complete problem. However, most people believe that this is not possible. Figure 1.2 graphically depicts the relations between the mentioned complexity classes.
Although the model of a Turing machine seems quite restrictive, other, more realistic models, such as random access machines, can be simulated with polynomial overhead. Therefore, for merely deciding whether an algorithm is efficient or not (i.e., whether it admits a polynomial-time algorithm or requires at least exponential time), complexity classes based on Turing machines are sufficiently expressive. Only when considering hierarchical and parallel algorithms will we encounter situations where this model is no longer adequate.
1.1.4 Asymptotic Resource Consumption
In the previous section, we took a quite crude view of complexity classes, distinguishing merely between polynomial and more-than-polynomial complexity. While the actual degree of the polynomial can be preeminently important for practical applications (a "polynomial" algorithm might still not be practically feasible), exponential complexity cannot be deemed efficient, because even a constant increase in the input size multiplies the required resources.
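The contrast can be seen in two lines of arithmetic; the particular cost functions n³ and 2ⁿ below are our own illustrative choices:

```python
def steps_polynomial(n):
    """Illustrative polynomial cost function, t(n) = n**3."""
    return n ** 3

def steps_exponential(n):
    """Illustrative exponential cost function, t(n) = 2**n."""
    return 2 ** n

# Doubling the input multiplies the cubic cost by a fixed factor of 8,
# whereas merely *adding* 10 to the input multiplies 2**n by 1024.
poly_ratio = steps_polynomial(20) // steps_polynomial(10)   # factor 8
expo_ratio = steps_exponential(30) // steps_exponential(20)  # factor 1024
```

This is why, for exponential algorithms, even buying vastly more hardware extends the solvable input size by only an additive constant.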
(Continues...)
Table of Contents
PART I Heuristic Search Primer
Chapter 1 Introduction
Chapter 2 Basic Search Algorithms
Chapter 3 Dictionary Data Structures
Chapter 4 Automatically Created Heuristics
PART II Heuristic Search under Memory Constraints
Chapter 5 Linear-Space Search
Chapter 6 Memory-Restricted Search
Chapter 7 Symbolic Search
Chapter 8 External Search
PART III Heuristic Search under Time Constraints
Chapter 9 Distributed Search
Chapter 10 State Space Pruning
Chapter 11 Real-Time Search (by Sven Koenig)
PART IV Heuristic Search Variants
Chapter 12 Adversary Search
Chapter 13 Constraint Search
Chapter 14 Selective Search
PART V Heuristic Search Applications
Chapter 15 Action Planning
Chapter 16 Automated System Verification
Chapter 17 Vehicle Navigation
Chapter 18 Computational Biology
Chapter 19 Robotics (by Sven Koenig)