Overview
This original work offers the most comprehensive and up-to-date treatment of the important subject of optimal linear estimation, which is encountered in many areas of engineering such as communications, control, and signal processing, and also in several other fields, e.g., econometrics and statistics. The book not only highlights the most significant contributions to this field during the 20th century, including the works of Wiener and Kalman, but it does so in an original and novel manner that paves the way for further developments in the new millennium. This book contains a large collection of problems that complement the text and are an important part of it, in addition to numerous sections that offer interesting historical accounts and insights.
Editorial Reviews
Booknews
Largely focuses on estimation problems for finite-dimensional linear systems with state-space models, covering most aspects of an area now generally known as Wiener and Kalman filtering theory. Distinctive features of the treatment are the pervasive use of a geometric point of view; the emphasis on the numerically favored square-root/array forms of many algorithms; and the emphasis on equivalence and duality concepts for the solution of several related problems in adaptive filtering, estimation, and control. The authors argue that these features are not as abstract and complicated as other treatments of the topic seem to fear. Annotation c. Book News, Inc., Portland, OR (booknews.com)
Preface
The problem of estimating the values of a random (or stochastic) process given observations of a related random process is encountered in many areas of science and engineering, e.g., communications, control, signal processing, geophysics, econometrics, and statistics. Although the topic has a rich history, and its formative stages can be attributed to illustrious investigators such as Laplace, Gauss, Legendre, and others, the current high interest in such problems began with the work of H. Wold, A. N. Kolmogorov, and N. Wiener in the late 1930s and early 1940s. N. Wiener in particular stressed the importance of modeling not just "noise" but also "signals" as random processes. His thought-provoking, originally classified 1942 report, released for open publication in 1949 and now available in paperback form under the title Time Series Analysis, is still very worthwhile background reading.
As with all deep subjects, the extensions of these results have been very far-reaching as well. A particularly important development arose from the incorporation into the theory of multichannel state-space models. Though there were various earlier partial intimations and explorations, especially in the work of R. L. Stratonovich in the former Soviet Union, the chief credit for the explosion of activity in this direction goes to R. E. Kalman, who also made important related contributions to linear systems, optimal control, passive systems, stability theory, and network synthesis.
In fact, least-squares estimation is one of those happy subjects that is interesting not only in the richness and scope of its results, but also because of its mutually beneficial connections with a host of other (often apparently very different) subjects. Thus, beyond those already named, we may mention connections with radiative transfer and scattering theory, linear algebra, matrix and operator theory, orthogonal polynomials, moment problems, inverse scattering problems, interpolation theory, decoding of Reed-Solomon and BCH codes, polynomial factorization and root distribution problems, digital filtering, spectral analysis, signal detection, martingale theory, the so-called H∞ theories of estimation and control, least-squares and adaptive filtering problems, and many others. We can surely apply to it the lines written by William Shakespeare about another (beautiful) subject:
"Age cannot wither her, nor custom stale
Her infinite variety."
Though we were originally tempted to cover a wider range, many reasons have led us to focus this volume largely on estimation problems for finite-dimensional linear systems with state-space models, covering most aspects of an area now generally known as Wiener and Kalman filtering theory. Three distinctive features of our treatment are the pervasive use of a geometric point of view, the emphasis on the numerically favored square-root/array forms of many algorithms, and the emphasis on equivalence and duality concepts for the solution of several related problems in adaptive filtering, estimation, and control. These features are generally absent in most prior treatments, ostensibly on the grounds that they are too abstract and complicated. It is our hope that these misconceptions will be dispelled by the presentation herein, and that the fundamental simplicity and power of these ideas will be more widely recognized and exploited.
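To give a concrete sense of why the square-root/array forms are numerically favored, the following sketch contrasts the classical normal-equations solution of a least-squares problem with a QR-based "array" solution, which works directly on the data matrix instead of squaring it. The data sizes and variable names here are illustrative assumptions, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 4))                 # data matrix (illustrative)
b = A @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.01 * rng.normal(size=100)

# Classical route: form and solve the normal equations A^T A x = A^T b.
# Forming A^T A roughly squares the condition number of the problem.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Array route: the QR factorization A = QR operates on A itself, so the
# triangular factor R acts as a "square root" of A^T A and the squaring
# of the condition number is avoided.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print(np.allclose(x_normal, x_qr))
```

On a well-conditioned problem like this one, both routes agree; the array form pays off when the data matrix is nearly rank-deficient.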
The material presented in this book can be broadly categorized into the following topics:
Chapter 1: Overview
Chapter 2: Deterministic Least-Squares Problems
Chapter 3: Stochastic Least-Squares Problems
Chapter 4: The Innovations Process
Chapter 5: State-Space Models
Chapter 6: Innovations for Stationary Processes
Chapter 7: Wiener Theory for Scalar Processes
Chapter 8: Recursive Wiener Filters
Chapter 9: The Kalman Filter
Chapter 10: Smoothed Estimators
Chapter 11: Fast Algorithms
Chapter 12: Array Algorithms
Chapter 13: Fast Array Algorithms
Chapter 14: Asymptotic Behavior
Chapter 15: Duality and Equivalence in Estimation and Control
Chapter 16: Continuous-Time State-Space Estimation
Chapter 17: A Scattering Theory Approach
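As a concrete taste of the central recursion developed in Chapter 9, the sketch below runs the predict/update cycle of a discrete-time Kalman filter on a toy scalar random-walk model. The model, noise statistics, and variable names are illustrative assumptions, not the book's notation.

```python
import numpy as np

def kalman_step(x, P, y, F, H, Q, R):
    """One time step: predict with the state model, then update with y."""
    # Predict: propagate the state estimate and its error covariance.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: innovation, its covariance, and the Kalman gain.
    e = y - H @ x_pred                      # innovation
    Re = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(Re)    # Kalman gain
    x_new = x_pred + K @ e
    P_new = P_pred - K @ H @ P_pred
    return x_new, P_new

# Track a scalar random-walk state from noisy measurements.
rng = np.random.default_rng(0)
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
x = np.zeros((1, 1)); P = np.eye(1)
truth = 0.0
for _ in range(50):
    truth += rng.normal(scale=0.1)
    y = np.array([[truth + rng.normal(scale=1.0)]])
    x, P = kalman_step(x, P, y, F, H, Q, R)
print(float(P[0, 0]))  # error variance settles well below R
```

The error covariance P converges to the stabilizing solution of a Riccati equation; that convergence is the subject of Chapter 14.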
Intended for a graduate-level course, the book assumes familiarity with basic concepts from matrix theory, linear algebra, linear system theory, and random processes. Four appendices at the end of the book provide the reader with background material in all these areas.
There is ample material in this book for the instructor to fashion a course to his or her needs and tastes. The authors have used portions of this book as the basis for one-quarter, first-year graduate-level courses at Stanford University, the University of California at Los Angeles, and the University of California at Santa Barbara; the students were expected to have had some exposure to discrete-time and state-space theory. A typical course would start with Secs. 1.1–1.2 as an overview (perhaps omitting the matrix derivations), with the rest of Ch. 1 left for a quick reading (and rereading from time to time), most of Chs. 2 and 3 (focusing on the geometric approach) on the basic deterministic and stochastic least-squares problems, Ch. 4 on the innovations process, Secs. 6.4–6.5 and 7.3–7.7 on scalar Wiener filtering, Secs. 9.1–9.3, 9.5, and 9.7 on Kalman filtering, Secs. 10.1–10.2 as an introduction to smoothing, Secs. 12.1–12.5 and 13.1–13.4 on array algorithms, and Secs. 16.1–16.4 and 16.6 on continuous-time problems.
More advanced students and researchers would pursue selections of material from Sec. 2.8, Chs. 8, 11, 14, 15, and 17, and Apps. E and F. These cover, among other topics, least-squares problems with uncertain data, the problem of canonical spectral factorization, convergence of the Kalman filter, the algebraic Riccati equation, duality, backwards-time and complementary models, scattering, etc. Those wishing to go on to the more recent H∞ theory can find a treatment closely related to the philosophy of the current book (cf. Sec. 1.6) in the research monograph of Hassibi, Sayed, and Kailath (1999).
A feature of the book is a collection of nearly 300 problems, several of which complement the text and present additional results and insights. However, there is little discussion of real applications or of the error and sensitivity analyses required for them. The main issue in applications is constructing an appropriate model, or actually a set of models, which are further analyzed and then refined by using the results and algorithms presented in this book. Developing good models and analyzing them effectively requires not only a good appreciation of the actual application, but also a good understanding of the theory, at both an analytical and intuitive level. It is the latter that we have tried to achieve here; examples of successful applications have to be sought in the literature, and some references are provided to this end.
Acknowledgments
The development of this textbook has spanned many years, so the material, as well as its presentation, has benefited greatly from the inputs of the many bright students who have worked with us on these topics: J. Omura, P. Frost, T. Duncan, R. Geesey, D. Duttweiler, H. Aasnaes, M. Gevers, H. Weinert, A. Segall, M. Morf, B. Dickinson, G. Sidhu, B. Friedlander, A. Vieira, S. Y. Kung, B. Levy, G. Verghese, D. Lee, J. Delosme, B. Porat, H. Lev-Ari, J. Cioffi, A. Bruckstein, T. Citron, Y. Bresler, R. Roy, J. Chun, D. Slock, D. Pal, G. Xu, R. Ackner, Y. Cho, P. Park, T. Boros, A. Erdogan, U. Forsell, B. Halder, H. Hindi, V. Nascimento, T. Pare, R. Merched, and our young friend Amir Ghazanfarian (in memoriam), from whom we had so much more to learn.
We are of course also deeply indebted to the many researchers and authors in this beautiful field. Partial acknowledgment is evident through the citations and references; while the list of the latter is quite long, we apologize for omissions and inadequacies arising from the limitations of our knowledge and our energy. Nevertheless, we would be remiss not to explicitly mention the inspiration and pleasure we have gained in studying the papers and books of N. Wiener, R. E. Kalman, and P. Whittle.
Major support for the many years of research that led to this book was provided by the Mathematics Divisions of the Air Force Office of Scientific Research and the Army Research Office, by the Joint Services Electronics Program, by the Defense Advanced Research Projects Agency, and by the National Science Foundation. Finally, we would like to thank Bernard Goodwin and Tom Robbins, as well as the staff of Prentice Hall, for their patience and other contributions to this project.
T. Kailath
Stanford, CA
A. H. Sayed
Westwood, CA
B. Hassibi
Murray Hill, NJ
Table of Contents
1. Overview.
The Asymptotic Observer. The Optimum Transient Observer. Coming Attractions. The Innovations Process. Steady-State Behavior. Several Related Problems. Complements. Problems.
2. Deterministic Least-Squares Problems.
The Deterministic Least-Squares Criterion. The Classical Solutions. A Geometric Formulation: The Orthogonality Condition. Regularized Least-Squares Problems. An Array Algorithm: The QR Method. Updating Least-Squares Solutions: RLS Algorithms. Downdating Least-Squares Solutions. Some Variations of Least-Squares Problems. Complements. Problems. On Systems of Linear Equations.
3. Stochastic Least-Squares Problems.
The Problem of Stochastic Estimation. Linear Least-Mean-Squares Estimators. A Geometric Formulation. Linear Models. Equivalence to Deterministic Least-Squares. Complements. Problems. Least-Mean-Squares Estimation. Gaussian Random Variables. Optimal Estimation for Gaussian Variables.
4. The Innovations Process.
Estimation of Stochastic Processes. The Innovations Process. Innovations Approach to Deterministic Least-Squares Problems. The Exponentially Correlated Process. Complements. Problems. Linear Spaces, Modules, and Gramians.
5. State-Space Models.
The Exponentially Correlated Process. Going Beyond the Stationary Case. Higher Order Processes and State-Space Models. Wide-Sense Markov Processes. Complements. Problems. Some Global Formulas.
6. Innovations for Stationary Processes.
Innovations via Spectral Factorization. Signals and Systems. Stationary Random Processes. Canonical Spectral Factorization. Scalar Rational z-Spectra. Vector-Valued Stationary Processes. Complements. Problems. Continuous-Time Systems and Processes.
7. Wiener Theory for Scalar Processes.
Continuous-Time Wiener Smoothing. The Continuous-Time Wiener-Hopf Equation. Discrete-Time Problems. The Discrete-Time Wiener-Hopf Technique. Causal Parts via Partial Fractions. Important Special Cases and Examples. Innovations Approach to the Wiener Filter. Vector Processes. Extensions of Wiener Filtering. Complements. Problems. The Continuous-Time Wiener-Hopf Technique.
8. Recursive Wiener Filtering.
Time-Invariant State-Space Models. An Equivalence Class for Input Gramians. Canonical Spectral Factorization. Factorization Given Covariance Data. Predicted and Smoothed Estimators of the State. Extensions to Time-Variant Models. Complements. Problems. The Popov Function. System Theory Approach to Rational Spectral Factorization. The KYP and Bounded Real Lemmas. Vector Spectral Factorization in Continuous Time.
9. The Kalman Filter.
The Standard State-Space Model. The Kalman Filter Recursions for the Innovations. Recursions for Predicted and Filtered State Estimators. Triangular Factorizations of R_y and R_y^{-1}. An Important Special Assumption: R_i > 0. Covariance-Based Filters. Approximate Nonlinear Filtering. Backwards Kalman Recursions. Complements. Problems. Factorization of R_y Using the MGS Procedure. Factorization via Gramian Equivalence Classes.
10. Smoothed Estimators.
General Smoothing Formulas. Exploiting State-Space Structure. The Rauch-Tung-Striebel (RTS) Recursions. Two-Filter Formulas. The Hamiltonian Equations (R_i > 0). Variational Origin of the Hamiltonian Equations. Applications of Equivalence. Complements. Problems.
11. Fast Algorithms.
The Fast (CKMS) Algorithms. Two Important Cases. Structured Time-Variant Systems. CKMS Recursions Given Covariance Data. Relation to Displacement Rank. Complements. Problems.
12. Array Algorithms.
Review and Notations. Potter's Explicit Algorithm for Scalar Measurement Update. Several Array Algorithms. Numerical Examples. Derivations of the Array Algorithms. A Geometric Explanation of the Arrays. Paige's Form of the Array Algorithm. Array Algorithms for the Information Forms. Array Algorithms for Smoothing. Complements. Problems. The UD Algorithm. The Use of Schur and Condensed Forms. Paige's Array Algorithm.
13. Fast Array Algorithms.
A Special Case: P_0 = 0. A General Fast Array Algorithm. From Explicit Equations to Array Algorithms. Structured Time-Variant Systems. Complements. Problems. Combining Displacement and State-Space Structures.
14. Asymptotic Behavior.
Introduction. Solutions of the DARE. Summary of the Convergence Proofs. Riccati Solutions for Different Initial Conditions. Convergence Results. The Case of Stable Systems. The Case of S ≠ 0. Exponential Convergence of the Fast Recursions. Complements. Problems.
15. Duality and Equivalence in Estimation and Control.
Dual Bases. Application to Linear Models. Duality and Equivalence Relationships. Duality Under Causality Constraints. Measurement Constraints and a Separation Principle. Duality in the Frequency Domain. Complementary State-Space Models. Complements. Problems.
16. Continuous-Time State-Space Estimation.
Continuous-Time Models. The Continuous-Time Kalman Filter Equations. Some Examples. Smoothed Estimators. Fast Algorithms for Time-Invariant Models. Asymptotic Behavior. Steady-State Filter. Complements. Problems. Backwards Markovian Models.
17. A Scattering Theory Approach.
A Generalized Transmission-Line Model. Backward Evolution. The Star Product. Various Riccati Formulas. Homogeneous Media: Time-Invariant Models. Discrete-Time Scattering Formulation. Further Work. Complements. Problems. A Complementary State-Space Model.
A. Useful Matrix Results.
Some Matrix Identities. Kronecker Products. The Reduced and Full QR Decompositions. The Singular Value Decomposition and Applications. Basis Rotations. Complex Gradients and Hessians. Further Reading.
B. Unitary and J-Unitary Transformations.
Householder Transformations. Circular or Givens Rotations. Fast Givens Transformations. J-Unitary Householder Transformations. Hyperbolic Givens Rotations. Some Alternative Implementations.
C. Some System Theory Concepts.
Linear State-Space Models. State-Transition Matrices. Controllability and Stabilizability. Observability and Detectability. Minimal Realizations.
D. Lyapunov Equations.
Discrete-Time Lyapunov Equations. Continuous-Time Lyapunov Equations. Internal Stability.
E. Algebraic Riccati Equations.
Overview of DARE. A Linear Matrix Inequality. Existence of Solutions to the DARE. Properties of the Maximal Solution. Main Result. Further Remarks. The Invariant Subspace Method. The Dual DARE. The CARE. Complements.
F. Displacement Structure.
Motivation. Two Fundamental Properties. A Generalized Schur Algorithm. The Classical Schur Algorithm. Combining Displacement and State-Space Structures.