Reinforcement Learning and Dynamic Programming Using Function Approximators: Control in Continuous State Spaces / Edition 1

Hardcover

Product Details

ISBN-10: 1439821089
ISBN-13: 9781439821084
Publisher: Taylor & Francis
Publication date: 04/26/2010
Series: Automation and Control Engineering Series, #39
Pages: 280
Product dimensions: 6.40(w) x 9.20(h) x 0.80(d)

About the Author

Robert Babuska, Lucian Busoniu, and Bart de Schutter are with Delft University of Technology. Damien Ernst is with the University of Liège.

Table of Contents

1 Introduction
The dynamic programming and reinforcement learning problem
Approximation in dynamic programming and reinforcement learning
About this book
2 An introduction to dynamic programming and reinforcement learning
Introduction
Markov decision processes
Value iteration
Policy iteration
Policy search
Summary and discussion
3 Dynamic programming and reinforcement learning in large and continuous spaces
Introduction
The need for approximation in large and continuous spaces
Approximation architectures
Approximate value iteration
Approximate policy iteration
Finding value function approximators automatically
Approximate policy search
Comparison of approximate value iteration, policy iteration, and policy search
Summary and discussion
4 Approximate value iteration with a fuzzy representation
Introduction
Fuzzy Q-iteration
Analysis of fuzzy Q-iteration
Optimizing the membership functions
Experimental study
Summary and discussion
5 Approximate policy iteration for online learning and continuous-action control
Introduction
A recapitulation of least-squares policy iteration
Online least-squares policy iteration
Online LSPI with prior knowledge
LSPI with continuous-action, polynomial approximation
Experimental study
Summary and discussion
6 Approximate policy search with cross-entropy optimization of basis functions
Introduction
Cross-entropy optimization
Cross-entropy policy search
Experimental study
Summary and discussion
Appendix A Extremely randomized trees
Structure of the approximator
Building and using a tree
Appendix B The cross-entropy method
Rare-event simulation using the cross-entropy method
Cross-entropy optimization
Symbols and abbreviations
Bibliography
List of algorithms
Index
