Reinforcement Learning: An Introduction / Edition 1

by Richard S. Sutton and Andrew G. Barto

ISBN-10: 0262193981

ISBN-13: 9780262193986

Pub. Date: 03/01/1998

Publisher: MIT Press

Overview

Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability.

The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
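
As a purely illustrative sketch (not an excerpt from the book), the flavor of the temporal-difference methods covered in Part II can be conveyed in a few lines of Python. The tabular Q-learning routine below applies the update Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)); the env object, with its reset(), step(), and actions members, is a hypothetical environment interface assumed only for this example.

import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Learn tabular action values Q(s, a) from interaction with env.

    env is assumed to expose reset() -> state,
    step(action) -> (next_state, reward, done), and a list env.actions.
    """
    Q = defaultdict(float)  # Q[(state, action)], default value 0.0

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])

            next_state, reward, done = env.step(action)

            # temporal-difference update toward the greedy (off-policy) target
            best_next = max(Q[(next_state, a)] for a in env.actions)
            target = reward + (0.0 if done else gamma * best_next)
            Q[(state, action)] += alpha * (target - Q[(state, action)])

            state = next_state
    return Q

A policy can then be read off by acting greedily with respect to the learned values.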

Product Details

ISBN-13: 9780262193986
Publisher: MIT Press
Publication date: 03/01/1998
Series: Adaptive Computation and Machine Learning series
Edition description: New Edition
Pages: 344
Sales rank: 342,784
Product dimensions: 7.00(w) x 9.00(h) x 0.81(d)
Age Range: 18 Years

Table of Contents

Series Foreword
Preface
I The Problem
1 Introduction
1.1 Reinforcement Learning
1.2 Examples
1.3 Elements of Reinforcement Learning
1.4 An Extended Example: Tic-Tac-Toe
1.5 Summary
1.6 History of Reinforcement Learning
1.7 Bibliographical Remarks
2 Evaluative Feedback
2.1 An n-Armed Bandit Problem
2.2 Action-Value Methods
2.3 Softmax Action Selection
2.4 Evaluation Versus Instruction
2.5 Incremental Implementation
2.6 Tracking a Nonstationary Problem
2.7 Optimistic Initial Values
2.8 Reinforcement Comparison
2.9 Pursuit Methods
2.10 Associative Search
2.11 Conclusions
2.12 Bibliographical and Historical Remarks
3 The Reinforcement Learning Problem
3.1 The Agent-Environment Interface
3.2 Goals and Rewards
3.3 Returns
3.4 Unified Notation for Episodic and Continuing Tasks
3.5 The Markov Property
3.6 Markov Decision Processes
3.7 Value Functions
3.8 Optimal Value Functions
3.9 Optimality and Approximation
3.10 Summary
3.11 Bibliographical and Historical Remarks
II Elementary Solution Methods
4 Dynamic Programming
4.1 Policy Evaluation
4.2 Policy Improvement
4.3 Policy Iteration
4.4 Value Iteration
4.5 Asynchronous Dynamic Programming
4.6 Generalized Policy Iteration
4.7 Efficiency of Dynamic Programming
4.8 Summary
4.9 Bibliographical and Historical Remarks
5 Monte Carlo Methods
5.1 Monte Carlo Policy Evaluation
5.2 Monte Carlo Estimation of Action Values
5.3 Monte Carlo Control
5.4 On-Policy Monte Carlo Control
5.5 Evaluating One Policy While Following Another
5.6 Off-Policy Monte Carlo Control
5.7 Incremental Implementation
5.8 Summary
5.9 Bibliographical and Historical Remarks
6 Temporal-Difference Learning
6.1 TD Prediction
6.2 Advantages of TD Prediction Methods
6.3 Optimality of TD(0)
6.4 Sarsa: On-Policy TD Control
6.5 Q-Learning: Off-Policy TD Control
6.6 Actor-Critic Methods
6.7 R-Learning for Undiscounted Continuing Tasks
6.8 Games, Afterstates, and Other Special Cases
6.9 Summary
6.10 Bibliographical and Historical Remarks
III A Unified View
7 Eligibility Traces
7.1 n-Step TD Prediction
7.2 The Forward View of TD(λ)
7.3 The Backward View of TD(λ)
7.4 Equivalence of Forward and Backward Views
7.5 Sarsa(λ)
7.6 Q(λ)
7.7 Eligibility Traces for Actor-Critic Methods
7.8 Replacing Traces
7.9 Implementation Issues
7.10 Variable λ
7.11 Conclusions
7.12 Bibliographical and Historical Remarks
8 Generalization and Function Approximation
8.1 Value Prediction with Function Approximation
8.2 Gradient-Descent Methods
8.3 Linear Methods
8.4 Control with Function Approximation
8.5 Off-Policy Bootstrapping
8.6 Should We Bootstrap?
8.7 Summary
8.8 Bibliographical and Historical Remarks
9 Planning and Learning
9.1 Models and Planning
9.2 Integrating Planning, Acting, and Learning
9.3 When the Model Is Wrong
9.4 Prioritized Sweeping
9.5 Full vs. Sample Backups
9.6 Trajectory Sampling
9.7 Heuristic Search
9.8 Summary
9.9 Bibliographical and Historical Remarks
10 Dimensions of Reinforcement Learning
10.1 The Unified View
10.2 Other Frontier Dimensions
11 Case Studies
11.1 TD-Gammon
11.2 Samuel's Checkers Player
11.3 The Acrobot
11.4 Elevator Dispatching
11.5 Dynamic Channel Allocation
11.6 Job-Shop Scheduling
References
Summary of Notation
Index
