"An excellent introduction to optimal control and estimation theory and its relationship with LQG design. . . . invaluable as a reference for those already familiar with the subject." — Automatica.
This highly regarded graduate-level text provides a comprehensive introduction to optimal control theory for stochastic systems, emphasizing application of its basic concepts to real problems. The first two chapters introduce optimal control and review the mathematics of control and estimation. Chapter 3 addresses optimal control of systems that may be nonlinear and time-varying, but whose inputs and parameters are known without error.
Chapter 4 of the book presents methods for estimating the dynamic states of a system that is driven by uncertain forces and is observed with random measurement error. Chapter 5 discusses the general problem of stochastic optimal control, and the concluding chapter covers linear time-invariant systems.
Robert F. Stengel is Professor of Mechanical and Aerospace Engineering at Princeton University, where he directs the Topical Program on Robotics and Intelligent Systems and the Laboratory for Control and Automation. He was a principal designer of the Project Apollo Lunar Module control system.
"An excellent teaching book with many examples and worked problems which would be ideal for self-study or for use in the classroom. . . . The book also has a practical orientation and would be of considerable use to people applying these techniques in practice." — Short Book Reviews, Publication of the International Statistical Institute.
"An excellent book which guides the reader through most of the important concepts and techniques. . . . A useful book for students (and their teachers) and for those practicing engineers who require a comprehensive reference to the subject." — Library Reviews, The Royal Aeronautical Society.
1. INTRODUCTION
1.1 Framework for Optimal Control
1.2 Modeling Dynamic Systems
1.3 Optimal Control Objectives
1.4 Overview of the Book
2. THE MATHEMATICS OF CONTROL AND ESTIMATION
2.1 Scalars, Vectors, and Matrices
Inner and Outer Products
Vector Lengths, Norms, and Weighted Norms
Stationary, Minimum, and Maximum Points of a Scalar Variable (Ordinary Maxima and Minima)
Constrained Minima and Lagrange Multipliers
2.2 Matrix Properties and Operations
Inverse Vector Relationship
Differentiation and Integration
Some Matrix Identities
Eigenvalues and Eigenvectors
Singular Value Decomposition
Some Determinant Identities
2.3 Dynamic System Models and Solutions
Nonlinear System Equations
Numerical Integration of Nonlinear Equations
Numerical Integration of Linear Equations
Representation of Data
2.4 Random Variables, Sequences, and Processes
Scalar Random Variables
Groups of Random Variables
Scalar Random Sequences and Processes
Correlation and Covariance Functions
Fourier Series and Integrals
Spectral Density Functions of Random Processes
Spectral Functions of Random Sequences
2.5 Properties of Dynamic Systems
Static and Quasistatic Equilibrium
Modes of Motion for Linear, Time-Invariant Systems
Reachability, Controllability, and Stabilizability
Constructability, Observability, and Detectability
2.6 Frequency Domain Modeling and Analysis
Frequency-Response Function and Bode Plot
Nyquist Plot and Stability Criterion
Effects of Sampling
3. OPTIMAL TRAJECTORIES AND NEIGHBORING-OPTIMAL SOLUTIONS
3.1 Statement of the Problem
3.2 Cost Functions
3.3 Parametric Optimization
3.4 Conditions for Optimality
Necessary Conditions for Optimality
Sufficient Conditions for Optimality
The Minimum Principle
The Hamilton-Jacobi-Bellman Equation
3.5 Constraints and Singular Control
Terminal State Equality Constraints
Equality Constraints on the State and Control
Inequality Constraints on the State and Control
3.6 Numerical Optimization
Penalty Function Method
Neighboring Extremal Method
3.7 Neighboring-Optimal Solutions
Continuous Neighboring-Optimal Control
Dynamic Programming Solution for Continuous Linear-Quadratic Control
Small Disturbances and Parameter Variations
4. OPTIMAL STATE ESTIMATION
4.1 Least-Squares Estimates of Constant Vectors
Weighted Least-Squares Estimator
Recursive Least-Squares Estimator
4.2 Propagation of the State Estimate and Its Uncertainty
Discrete-Time Systems
Sampled-Data Representation of Continuous-Time Systems
Simulating Cross-Correlated White Noise
4.3 Discrete-Time Optimal Filters and Predictors
Alternative Forms of the Linear-Optimal Filter
4.4 Correlated Disturbance Inputs and Measurement Noise
Cross-Correlation of Disturbance Input and Measurement Noise
Time-Correlated Measurement Noise
4.5 Continuous-Time Optimal Filters and Predictors
Alternative Forms of the Linear-Optimal Filter
Correlation in Disturbance Inputs and Measurement Noise
4.6 Optimal Nonlinear Estimation
Neighboring-Optimal Linear Estimator
Extended Kalman-Bucy Filter
4.7 Adaptive Filtering
5. STOCHASTIC OPTIMAL CONTROL
5.1 Nonlinear Systems with Random Inputs and Perfect Measurements
Stochastic Principle of Optimality for Nonlinear Systems
Stochastic Principle of Optimality for Linear-Quadratic Problems
Evaluation of the Variational Cost Function
5.2 Nonlinear Systems with Random Inputs and Imperfect Measurements
Stochastic Principle of Optimality
5.3 The Certainty-Equivalence Property of Linear-Quadratic-Gaussian Controllers
The Continuous-Time Case
The Discrete-Time Case
Additional Cases Exhibiting Certainty Equivalence
5.4 Linear, Time-Invariant Systems with Random Inputs and Imperfect Measurements
Asymptotic Stability of the Linear-Quadratic Regulator
Asymptotic Stability of the Kalman-Bucy Filter
Asymptotic Stability of the Stochastic Regulator
Steady-State Performance of the Stochastic Regulator
The Discrete-Time Case
6. LINEAR MULTIVARIABLE CONTROL
6.1 Solution of the Algebraic Riccati Equation