Kernel Adaptive Filtering: A Comprehensive Introduction / Edition 1

ISBN-10: 0470447532
ISBN-13: 9780470447536
Pub. Date: 03/01/2010
Publisher: Wiley
$136.95

Overview

Online learning from a signal processing perspective

There is increased interest in kernel learning algorithms in neural networks and a growing need for nonlinear adaptive algorithms in advanced signal processing, communications, and controls. Kernel Adaptive Filtering is the first book to present a comprehensive, unifying introduction to online learning algorithms in reproducing kernel Hilbert spaces. Based on research being conducted in the Computational Neuro-Engineering Laboratory at the University of Florida and in the Cognitive Systems Laboratory at McMaster University, Ontario, Canada, this unique resource elevates adaptive filtering theory to a new level, presenting a new design methodology for nonlinear adaptive filters.

  • Covers the kernel least mean squares algorithm, kernel affine projection algorithms, the kernel recursive least squares algorithm, the theory of Gaussian process regression, and the extended kernel recursive least squares algorithm

  • Presents a powerful model-selection method called maximum marginal likelihood

  • Addresses the principal bottleneck of kernel adaptive filters—their growing structure

  • Features twelve computer-oriented experiments to reinforce the concepts, with MATLAB codes downloadable from the authors' Web site

  • Concludes each chapter with a summary of the state of the art and potential future directions for original research

Kernel Adaptive Filtering is ideal for engineers, computer scientists, and graduate students interested in nonlinear adaptive systems for online applications (applications where the data stream arrives one sample at a time and incremental optimal solutions are desirable). It is also a useful guide for those who look for nonlinear adaptive filtering methodologies to solve practical problems.
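To give a flavor of the material, the core idea behind the kernel least-mean-square (KLMS) algorithm the book develops can be sketched in a few lines. This is a minimal Python illustration, not the authors' downloadable MATLAB code; the Gaussian kernel width, step size, and sine-wave test signal below are arbitrary choices for demonstration:

```python
import numpy as np

def gauss_kernel(a, b, sigma):
    # Gaussian (RBF) kernel between two input vectors
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

class KLMS:
    """Minimal kernel least-mean-square filter (illustrative sketch).

    Every training input is stored as a kernel center, so the network
    grows with the data -- the bottleneck that sparsification criteria
    (novelty, surprise) are designed to address.
    """
    def __init__(self, step_size=0.5, sigma=0.5):
        self.eta = step_size
        self.sigma = sigma
        self.centers = []   # stored inputs u_i
        self.coeffs = []    # coefficients eta * e_i

    def predict(self, u):
        # f(u) = sum_i a_i * k(u, c_i)
        return sum(a * gauss_kernel(u, c, self.sigma)
                   for a, c in zip(self.coeffs, self.centers))

    def update(self, u, d):
        e = d - self.predict(u)          # a-priori prediction error
        self.centers.append(u)           # allocate a new kernel unit
        self.coeffs.append(self.eta * e)
        return e

# Usage: learn an unknown static nonlinearity online, one sample at a time
rng = np.random.default_rng(0)
f = KLMS(step_size=0.5, sigma=0.5)
errors = []
for _ in range(500):
    u = rng.uniform(-1.0, 1.0, size=1)
    d = np.sin(3.0 * u[0])               # unknown nonlinear target
    errors.append(f.update(u, d) ** 2)

# Squared error shrinks as samples arrive; the network has grown to 500 units
print(np.mean(errors[:50]), np.mean(errors[-50:]), len(f.centers))
```

Note how the filter stores one center per sample; controlling that growth with the novelty and surprise criteria is exactly the sparse-design problem treated in Chapters 2 and 6.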


Product Details

ISBN-13: 9780470447536
Publisher: Wiley
Publication date: 03/01/2010
Series: Adaptive and Cognitive Dynamic Systems: Signal Processing, Learning, Communications and Control, #57
Pages: 240
Product dimensions: 6.10(w) x 9.30(h) x 0.70(d)

About the Author

Weifeng Liu, PhD, is a senior engineer of the Demand Forecasting Team at Amazon.com Inc. His research interests include kernel adaptive filtering, online active learning, and solving real-life large-scale data mining problems.

José C. Principe is Distinguished Professor of Electrical and Biomedical Engineering at the University of Florida, Gainesville, where he teaches advanced signal processing and artificial neural network modeling. He is BellSouth Professor and founder and Director of the University of Florida Computational Neuro-Engineering Laboratory.

Simon Haykin is Distinguished University Professor at McMaster University, Canada. He is world-renowned for his contributions to adaptive filtering applied to radar and communications. Haykin's current research passion is focused on cognitive dynamic systems, including applications to cognitive radio and cognitive radar.

Table of Contents

Preface xi

Acknowledgments xv

Notation xvii

Abbreviations and Symbols xix

1 Background and Preview 1

1.1 Supervised, Sequential, and Active Learning 1

1.2 Linear Adaptive Filters 3

1.3 Nonlinear Adaptive Filters 10

1.4 Reproducing Kernel Hilbert Spaces 12

1.5 Kernel Adaptive Filters 16

1.6 Summarizing Remarks 20

Endnotes 21

2 Kernel Least-Mean-Square Algorithm 27

2.1 Least-Mean-Square Algorithm 28

2.2 Kernel Least-Mean-Square Algorithm 31

2.3 Kernel and Parameter Selection 34

2.4 Step-Size Parameter 37

2.5 Novelty Criterion 38

2.6 Self-Regularization Property of KLMS 40

2.7 Leaky Kernel Least-Mean-Square Algorithm 48

2.8 Normalized Kernel Least-Mean-Square Algorithm 48

2.9 Kernel ADALINE 49

2.10 Resource Allocating Networks 53

2.11 Computer Experiments 55

2.12 Conclusion 63

Endnotes 65

3 Kernel Affine Projection Algorithms 69

3.1 Affine Projection Algorithms 70

3.2 Kernel Affine Projection Algorithms 72

3.3 Error Reusing 77

3.4 Sliding Window Gram Matrix Inversion 78

3.5 Taxonomy for Related Algorithms 78

3.6 Computer Experiments 80

3.7 Conclusion 89

Endnotes 91

4 Kernel Recursive Least-Squares Algorithm 94

4.1 Recursive Least-Squares Algorithm 94

4.2 Exponentially Weighted Recursive Least-Squares Algorithm 97

4.3 Kernel Recursive Least-Squares Algorithm 98

4.4 Approximate Linear Dependency 102

4.5 Exponentially Weighted Kernel Recursive Least-Squares Algorithm 103

4.6 Gaussian Processes for Linear Regression 105

4.7 Gaussian Processes for Nonlinear Regression 108

4.8 Bayesian Model Selection 111

4.9 Computer Experiments 114

4.10 Conclusion 119

Endnotes 120

5 Extended Kernel Recursive Least-Squares Algorithm 124

5.1 Extended Recursive Least Squares Algorithm 125

5.2 Exponentially Weighted Extended Recursive Least Squares Algorithm 128

5.3 Extended Kernel Recursive Least Squares Algorithm 129

5.4 EX-KRLS for Tracking Models 131

5.5 EX-KRLS with Finite Rank Assumption 137

5.6 Computer Experiments 141

5.7 Conclusion 150

Endnotes 151

6 Designing Sparse Kernel Adaptive Filters 152

6.1 Definition of Surprise 152

6.2 A Review of Gaussian Process Regression 154

6.3 Computing Surprise 156

6.4 Kernel Recursive Least Squares with Surprise Criterion 159

6.5 Kernel Least Mean Square with Surprise Criterion 160

6.6 Kernel Affine Projection Algorithms with Surprise Criterion 161

6.7 Computer Experiments 162

6.8 Conclusion 173

Endnotes 174

Epilogue 175

Appendix 177

A Mathematical Background 177

A.1 Singular Value Decomposition 177

A.2 Positive-Definite Matrix 179

A.3 Eigenvalue Decomposition 179

A.4 Schur Complement 181

A.5 Block Matrix Inverse 181

A.6 Matrix Inversion Lemma 182

A.7 Joint, Marginal, and Conditional Probability 182

A.8 Normal Distribution 183

A.9 Gradient Descent 184

A.10 Newton's Method 184

B Approximate Linear Dependency and System Stability 186

References 193

Index 204
