
Markov Decision Processes in Artificial Intelligence [NOOK Book]


Available on NOOK devices and apps  
  • NOOK Devices
  • Samsung Galaxy Tab 4 NOOK 7.0
  • Samsung Galaxy Tab 4 NOOK 10.1
  • NOOK HD Tablet
  • NOOK HD+ Tablet
  • NOOK eReaders
  • NOOK Color
  • NOOK Tablet
  • Tablet/Phone
  • NOOK for Windows 8 Tablet
  • NOOK for iOS
  • NOOK for Android
  • NOOK Kids for iPad
  • PC/Mac
  • NOOK for Windows 8
  • NOOK for PC
  • NOOK for Mac
  • NOOK for Web


NOOK Book (eBook)
BN.com price: $94.49 (save 42% off the $165.00 list price)
Note: This NOOK Book can be purchased in bulk. Please email us for more information.

Overview

Markov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision problems under uncertainty, as well as Reinforcement Learning problems. Written by experts in the field, this book provides a global view of current research using MDPs in Artificial Intelligence. It starts with an introductory presentation of the fundamental aspects of MDPs (planning in MDPs, Reinforcement Learning, Partially Observable MDPs, Markov games and the use of non-classical criteria). It then presents more advanced research trends in the domain and gives concrete examples through illustrative applications.
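
To make the framework concrete, below is a minimal sketch of value iteration, the classic dynamic-programming method for computing an optimal policy (the kind of algorithm surveyed in Chapter 1). The two-state problem, its transition probabilities, rewards and discount factor are invented purely for illustration and are not taken from the book.

```python
# Value iteration on a tiny, invented two-state MDP (illustrative only).
# States: "s0", "s1"; actions: "stay", "go".

GAMMA = 0.9  # discount factor (illustrative choice)

# P[state][action] = list of (next_state, probability) pairs
P = {
    "s0": {"stay": [("s0", 1.0)], "go": [("s1", 0.8), ("s0", 0.2)]},
    "s1": {"stay": [("s1", 1.0)], "go": [("s0", 1.0)]},
}

# R[state][action] = immediate expected reward
R = {
    "s0": {"stay": 0.0, "go": 1.0},
    "s1": {"stay": 2.0, "go": 0.0},
}

def value_iteration(P, R, gamma=GAMMA, tol=1e-8):
    """Apply the Bellman optimality backup until the largest update falls below tol."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

def greedy_policy(V, P, R, gamma=GAMMA):
    """Extract a policy that is greedy with respect to the computed values."""
    return {
        s: max(P[s], key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a]))
        for s in P
    }

V = value_iteration(P, R)
print(V)                       # optimal state values
print(greedy_policy(V, P, R))  # greedy (optimal) action in each state
```

The sweep updates values in place and stops once successive value functions differ by less than the tolerance; the policy that is greedy with respect to the converged values is optimal for this toy problem.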

Editorial Reviews

From the Publisher
"As an overall conclusion, this book is an extensive presentation of MDPs and their applications in modeling uncertain decision problems and in reinforcement learning." (Zentralblatt MATH, 2011)

"The range of subjects covered is fascinating, however, from game-theoretical applications to reinforcement learning, conservation of biodiversity and operations planning. Oriented towards advanced students and researchers in the fields of both artificial intelligence and the study of algorithms as well as discrete mathematics." (Book News, September 2010)


Product Details

  • ISBN-13: 9781118620106
  • Publisher: Wiley
  • Publication date: 3/4/2013
  • Series: ISTE
  • Sold by: Barnes & Noble
  • Format: eBook
  • Edition number: 1
  • File size: 8 MB

Table of Contents

Preface xvii

List of Authors xix

PART 1. MDPS: MODELS AND METHODS 1

Chapter 1. Markov Decision Processes 3
Frédérick GARCIA and Emmanuel RACHELSON

1.1. Introduction 3

1.2. Markov decision problems 4

1.3. Value functions 9

1.4. Markov policies 12

1.5. Characterization of optimal policies 14

1.6. Optimization algorithms for MDPs 28

1.7. Conclusion and outlook 37

1.8. Bibliography 37

Chapter 2. Reinforcement Learning 39
Olivier SIGAUD and Frédérick GARCIA

2.1. Introduction 39

2.2. Reinforcement learning: a global view 40

2.3. Monte Carlo methods 45

2.4. From Monte Carlo to temporal difference methods 45

2.5. Temporal difference methods 46

2.6. Model-based methods: learning a model 59

2.7. Conclusion 63

2.8. Bibliography 63

Chapter 3. Approximate Dynamic Programming 67
Rémi MUNOS

3.1. Introduction 68

3.2. Approximate value iteration (AVI) 70

3.3. Approximate policy iteration (API) 77

3.4. Direct minimization of the Bellman residual 87

3.5. Towards an analysis of dynamic programming in Lp-norm 88

3.6. Conclusions 93

3.7. Bibliography 93

Chapter 4. Factored Markov Decision Processes 99
Thomas DEGRIS and Olivier SIGAUD

4.1. Introduction 99

4.2. Modeling a problem with an FMDP 100

4.3. Planning with FMDPs 108

4.4. Perspectives and conclusion 122

4.5. Bibliography 123

Chapter 5. Policy-Gradient Algorithms 127
Olivier BUFFET

5.1. Reminder about the notion of gradient 128

5.2. Optimizing a parameterized policy with a gradient algorithm 130

5.3. Actor-critic methods 143

5.4. Complements 147

5.5. Conclusion 150

5.6. Bibliography 150

Chapter 6. Online Resolution Techniques 153
Laurent PÉRET and Frédérick GARCIA

6.1. Introduction 153

6.2. Online algorithms for solving an MDP 155

6.3. Controlling the search 167

6.4. Conclusion 180

6.5. Bibliography 180

PART 2. BEYOND MDPS 185

Chapter 7. Partially Observable Markov Decision Processes 187
Alain DUTECH and Bruno SCHERRER

7.1. Formal definitions for POMDPs 188

7.2. Non-Markovian problems: incomplete information 196

7.3. Computation of an exact policy on information states 202

7.4. Exact value iteration algorithms 207

7.5. Policy iteration algorithms 222

7.6. Conclusion and perspectives 223

7.7. Bibliography 225

Chapter 8. Stochastic Games 229
Andriy BURKOV, Laëtitia MATIGNON and Brahim CHAIB-DRAA

8.1. Introduction 229

8.2. Background on game theory 230

8.3. Stochastic games 245

8.4. Conclusion and outlook 269

8.5. Bibliography 270

Chapter 9. DEC-MDP/POMDP 277
Aurélie BEYNIER, François CHARPILLET, Daniel SZER and Abdel-Illah MOUADDIB

9.1. Introduction 277

9.2. Preliminaries 278

9.3. Multi-agent Markov decision processes 279

9.4. Decentralized control and local observability 280

9.5. Sub-classes of DEC-POMDPs 285

9.6. Algorithms for solving DEC-POMDPs 295

9.7. Applicative scenario: multirobot exploration 310

9.8. Conclusion and outlook 312

9.9. Bibliography 313

Chapter 10. Non-Standard Criteria 319
Matthieu BOUSSARD, Maroua BOUZID, Abdel-Illah MOUADDIB, Régis SABBADIN and Paul WENG

10.1. Introduction 319

10.2. Multicriteria approaches 320

10.3. Robustness in MDPs 327

10.4. Possibilistic MDPs 329

10.5. Algebraic MDPs 342

10.6. Conclusion 354

10.7. Bibliography 355

PART 3. APPLICATIONS 361

Chapter 11. Online Learning for Micro-Object Manipulation 363
Guillaume LAURENT

11.1. Introduction 363

11.2. Manipulation device 364

11.3. Choice of the reinforcement learning algorithm 367

11.4. Experimental results 370

11.5. Conclusion 373

11.6. Bibliography 373

Chapter 12. Conservation of Biodiversity 375
Iadine CHADÈS

12.1. Introduction 375

12.2. When to protect, survey or surrender cryptic endangered species 376

12.3. Can sea otters and abalone co-exist? 381

12.4. Other applications in conservation biology and discussions 391

12.5. Bibliography 392

Chapter 13. Autonomous Helicopter Searching for a Landing Area in an Uncertain Environment 395
Patrick FABIANI and Florent TEICHTEIL-KÖNIGSBUCH

13.1. Introduction 395

13.2. Exploration scenario 397

13.3. Embedded control and decision architecture 401

13.4. Incremental stochastic dynamic programming 404

13.5. Flight tests and return on experience 407

13.6. Conclusion 410

13.7. Bibliography 410

Chapter 14. Resource Consumption Control for an Autonomous Robot 413
Simon LE GLOANNEC and Abdel-Illah MOUADDIB

14.1. The rover’s mission 414

14.2. Progressive processing formalism 415

14.3. MDP/PRU model 416

14.4. Policy calculation 418

14.5. How to model a real mission 419

14.6. Extensions 422

14.7. Conclusion 423

14.8. Bibliography 423

Chapter 15. Operations Planning 425
Sylvie THIÉBAUX and Olivier BUFFET

15.1. Operations planning 425

15.2. MDP value function approaches 433

15.3. Reinforcement learning: FPG 442

15.4. Experiments 446

15.5. Conclusion and outlook 448

15.6. Bibliography 450

Index 453

