Partially Observed Markov Decision Processes: From Filtering to Controlled Sensing

by Vikram Krishnamurthy

Hardcover, $108.00

Overview

Covering formulation, algorithms, and structural results, and linking theory to real-world applications in controlled sensing (including social learning, adaptive radars and sequential detection), this book focuses on the conceptual foundations of partially observed Markov decision processes (POMDPs). It emphasizes structural results in stochastic dynamic programming, enabling graduate students and researchers in engineering, operations research, and economics to understand the underlying unifying themes without getting weighed down by mathematical technicalities. Bringing together research from across the literature, the book provides an introduction to nonlinear filtering followed by a systematic development of stochastic dynamic programming, lattice programming and reinforcement learning for POMDPs. Questions addressed in the book include: when does a POMDP have a threshold optimal policy? When are myopic policies optimal? How do local and global decision makers interact in adaptive decision making in multi-agent social learning where there is herding and data incest? And how can sophisticated radars and sensors adapt their sensing in real time?
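
To give a flavor of the filtering machinery the book develops, the sketch below (not taken from the book) shows the hidden Markov model filter that recursively updates the belief state of a finite-state POMDP; the transition matrix P, observation likelihoods B, and all numerical values are illustrative assumptions only.

import numpy as np

# Minimal HMM filter (belief-state update) for a finite-state model.
# P[i, j] = probability of moving from state i to state j
# B[i, y] = probability of observing symbol y while in state i
# belief  = posterior distribution over states given observations so far

def hmm_filter_step(belief, y, P, B):
    """One filter step: predict with the transition matrix, correct with observation y."""
    unnormalized = B[:, y] * (P.T @ belief)
    return unnormalized / unnormalized.sum()

# Two-state chain observed through a noisy binary sensor (illustrative numbers).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],
              [0.4, 0.6]])
belief = np.array([0.5, 0.5])

for y in [0, 1, 1, 0]:
    belief = hmm_filter_step(belief, y, P, B)
print(belief)  # posterior over the two states after four observations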

Product Details

ISBN-13: 9781107134607
Publisher: Cambridge University Press
Publication date: 03/21/2016
Pages: 488
Product dimensions: 7.09(w) x 10.00(h) x 0.98(d)

About the Author

Vikram Krishnamurthy is a Professor and Canada Research Chair in Statistical Signal Processing at the University of British Columbia, Vancouver. His research contributions focus on nonlinear filtering, stochastic approximation algorithms and POMDPs. Dr Krishnamurthy is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and served as a distinguished lecturer for the IEEE Signal Processing Society. In 2013, he received an honorary doctorate from KTH, Royal Institute of Technology, Sweden.

Table of Contents

Preface; 1. Introduction; Part I. Stochastic Models and Bayesian Filtering: 2. Stochastic state-space models; 3. Optimal filtering; 4. Algorithms for maximum likelihood parameter estimation; 5. Multi-agent sensing: social learning and data incest; Part II. Partially Observed Markov Decision Processes: Models and Algorithms: 6. Fully observed Markov decision processes; 7. Partially observed Markov decision processes (POMDPs); 8. POMDPs in controlled sensing and sensor scheduling; Part III. Partially Observed Markov Decision Processes: Structural Results: 9. Structural results for Markov decision processes; 10. Structural results for optimal filters; 11. Monotonicity of value function for POMDPs; 12. Structural results for stopping time POMDPs; 13. Stopping time POMDPs for quickest change detection; 14. Myopic policy bounds for POMDPs and sensitivity to model parameters; Part IV. Stochastic Approximation and Reinforcement Learning: 15. Stochastic optimization and gradient estimation; 16. Reinforcement learning; 17. Stochastic approximation algorithms: examples; 18. Summary of algorithms for solving POMDPs; Appendix A. Short primer on stochastic simulation; Appendix B. Continuous-time HMM filters; Appendix C. Markov processes; Appendix D. Some limit theorems; Bibliography; Index.