Cover
Title Page
Copyright Page
Contents
Preface
Acknowledgments
Background and Preview
1. The Filtering Problem
2. Linear Optimum Filters
3. Adaptive Filters
4. Linear Filter Structures
5. Approaches to the Development of Linear Adaptive Filters
6. Adaptive Beamforming
7. Four Classes of Applications
8. Historical Notes
Chapter 1 Stochastic Processes and Models
1.1 Partial Characterization of a Discrete-Time Stochastic Process
1.2 Mean Ergodic Theorem
1.3 Correlation Matrix
1.4 Correlation Matrix of Sine Wave Plus Noise
1.5 Stochastic Models
1.6 Wold Decomposition
1.7 Asymptotic Stationarity of an Autoregressive Process
1.8 Yule–Walker Equations
1.9 Computer Experiment: Autoregressive Process of Order Two
1.10 Selecting the Model Order
1.11 Complex Gaussian Processes
1.12 Power Spectral Density
1.13 Properties of Power Spectral Density
1.14 Transmission of a Stationary Process Through a Linear Filter
1.15 Cramér Spectral Representation for a Stationary Process
1.16 Power Spectrum Estimation
1.17 Other Statistical Characteristics of a Stochastic Process
1.18 Polyspectra
1.19 Spectral-Correlation Density
1.20 Summary and Discussion
Problems
Chapter 2 Wiener Filters
2.1 Linear Optimum Filtering: Statement of the Problem
2.2 Principle of Orthogonality
2.3 Minimum Mean-Square Error
2.4 Wiener–Hopf Equations
2.5 Error-Performance Surface
2.6 Multiple Linear Regression Model
2.7 Example
2.8 Linearly Constrained Minimum-Variance Filter
2.9 Generalized Sidelobe Cancellers
2.10 Summary and Discussion
Problems
Chapter 3 Linear Prediction
3.1 Forward Linear Prediction
3.2 Backward Linear Prediction
3.3 Levinson–Durbin Algorithm
3.4 Properties of Prediction-Error Filters
3.5 Schur–Cohn Test
3.6 Autoregressive Modeling of a Stationary Stochastic Process
3.7 Cholesky Factorization
3.8 Lattice Predictors
3.9 All-Pole, All-Pass Lattice Filter
3.10 Joint-Process Estimation
3.11 Predictive Modeling of Speech
3.12 Summary and Discussion
Problems
Chapter 4 Method of Steepest Descent
4.1 Basic Idea of the Steepest-Descent Algorithm
4.2 The Steepest-Descent Algorithm Applied to the Wiener Filter
4.3 Stability of the Steepest-Descent Algorithm
4.4 Example
4.5 The Steepest-Descent Algorithm Viewed as a Deterministic Search Method
4.6 Virtue and Limitation of the Steepest-Descent Algorithm
4.7 Summary and Discussion
Problems
Chapter 5 Method of Stochastic Gradient Descent
5.1 Principles of Stochastic Gradient Descent
5.2 Application 1: Least-Mean-Square (LMS) Algorithm
5.3 Application 2: Gradient-Adaptive Lattice Filtering Algorithm
5.4 Other Applications of Stochastic Gradient Descent
5.5 Summary and Discussion
Problems
Chapter 6 The Least-Mean-Square (LMS) Algorithm
6.1 Signal-Flow Graph
6.2 Optimality Considerations
6.3 Applications
6.4 Statistical Learning Theory
6.5 Transient Behavior and Convergence Considerations
6.6 Efficiency
6.7 Computer Experiment on Adaptive Prediction
6.8 Computer Experiment on Adaptive Equalization
6.9 Computer Experiment on a Minimum-Variance Distortionless-Response Beamformer
6.10 Summary and Discussion
Problems
Chapter 7 Normalized Least-Mean-Square (LMS) Algorithm and Its Generalization
7.1 Normalized LMS Algorithm: The Solution to a Constrained Optimization Problem
7.2 Stability of the Normalized LMS Algorithm
7.3 Step-Size Control for Acoustic Echo Cancellation
7.4 Geometric Considerations Pertaining to the Convergence Process for Real-Valued Data
7.5 Affine Projection Adaptive Filters
7.6 Summary and Discussion
Problems
Chapter 8 Block-Adaptive Filters
8.1 Block-Adaptive Filters: Basic Ideas
8.2 Fast Block LMS Algorithm
8.3 Unconstrained Frequency-Domain Adaptive Filters
8.4 Self-Orthogonalizing Adaptive Filters
8.5 Computer Experiment on Adaptive Equalization
8.6 Subband Adaptive Filters
8.7 Summary and Discussion
Problems
Chapter 9 Method of Least-Squares
9.1 Statement of the Linear Least-Squares Estimation Problem
9.2 Data Windowing
9.3 Principle of Orthogonality Revisited
9.4 Minimum Sum of Error Squares
9.5 Normal Equations and Linear Least-Squares Filters
9.6 Time-Average Correlation Matrix Φ
9.7 Reformulation of the Normal Equations in Terms of Data Matrices
9.8 Properties of Least-Squares Estimates
9.9 Minimum-Variance Distortionless Response (MVDR) Spectrum Estimation
9.10 Regularized MVDR Beamforming
9.11 Singular-Value Decomposition
9.12 Pseudoinverse
9.13 Interpretation of Singular Values and Singular Vectors
9.14 Minimum-Norm Solution to the Linear Least-Squares Problem
9.15 Normalized LMS Algorithm Viewed as the Minimum-Norm Solution to an Underdetermined Least-Squares Estimation Problem
9.16 Summary and Discussion
Problems
Chapter 10 The Recursive Least-Squares (RLS) Algorithm
10.1 Some Preliminaries
10.2 The Matrix Inversion Lemma
10.3 The Exponentially Weighted RLS Algorithm
10.4 Selection of the Regularization Parameter
10.5 Updated Recursion for the Sum of Weighted Error Squares
10.6 Example: Single-Weight Adaptive Noise Canceller
10.7 Statistical Learning Theory
10.8 Efficiency
10.9 Computer Experiment on Adaptive Equalization
10.10 Summary and Discussion
Problems
Chapter 11 Robustness
11.1 Robustness, Adaptation, and Disturbances
11.2 Robustness: Preliminary Considerations Rooted in H∞ Optimization
11.3 Robustness of the LMS Algorithm
11.4 Robustness of the RLS Algorithm
11.5 Comparative Evaluations of the LMS and RLS Algorithms from the Perspective of Robustness
11.6 Risk-Sensitive Optimality
11.7 Trade-Offs Between Robustness and Efficiency
11.8 Summary and Discussion
Problems
Chapter 12 Finite-Precision Effects
12.1 Quantization Errors
12.2 Least-Mean-Square (LMS) Algorithm
12.3 Recursive Least-Squares (RLS) Algorithm
12.4 Summary and Discussion
Problems
Chapter 13 Adaptation in Nonstationary Environments
13.1 Causes and Consequences of Nonstationarity
13.2 The System Identification Problem
13.3 Degree of Nonstationarity
13.4 Criteria for Tracking Assessment
13.5 Tracking Performance of the LMS Algorithm
13.6 Tracking Performance of the RLS Algorithm
13.7 Comparison of the Tracking Performance of LMS and RLS Algorithms
13.8 Tuning of Adaptation Parameters
13.9 Incremental Delta-Bar-Delta (IDBD) Algorithm
13.10 Autostep Method
13.11 Computer Experiment: Mixture of Stationary and Nonstationary Environmental Data
13.12 Summary and Discussion
Problems
Chapter 14 Kalman Filters
14.1 Recursive Minimum Mean-Square Estimation for Scalar Random Variables
14.2 Statement of the Kalman Filtering Problem
14.3 The Innovations Process
14.4 Estimation of the State Using the Innovations Process
14.5 Filtering
14.6 Initial Conditions
14.7 Summary of the Kalman Filter
14.8 Optimality Criteria for Kalman Filtering
14.9 Kalman Filter as the Unifying Basis for RLS Algorithms
14.10 Covariance Filtering Algorithm
14.11 Information Filtering Algorithm
14.12 Summary and Discussion
Problems
Chapter 15 Square-Root Adaptive Filtering Algorithms
15.1 Square-Root Kalman Filters
15.2 Building Square-Root Adaptive Filters on the Two Kalman Filter Variants
15.3 QRD-RLS Algorithm
15.4 Adaptive Beamforming
15.5 Inverse QRD-RLS Algorithm
15.6 Finite-Precision Effects
15.7 Summary and Discussion
Problems
Chapter 16 Order-Recursive Adaptive Filtering Algorithm
16.1 Order-Recursive Adaptive Filters Using Least-Squares Estimation: An Overview
16.2 Adaptive Forward Linear Prediction
16.3 Adaptive Backward Linear Prediction
16.4 Conversion Factor
16.5 Least-Squares Lattice (LSL) Predictor
16.6 Angle-Normalized Estimation Errors
16.7 First-Order State-Space Models for Lattice Filtering
16.8 QR-Decomposition-Based Least-Squares Lattice (QRD-LSL) Filters
16.9 Fundamental Properties of the QRD-LSL Filter
16.10 Computer Experiment on Adaptive Equalization
16.11 Recursive LSL Filters Using A Posteriori Estimation Errors
16.12 Recursive LSL Filters Using A Priori Estimation Errors with Error Feedback
16.13 Relation Between Recursive LSL and RLS Algorithms
16.14 Finite-Precision Effects
16.15 Summary and Discussion
Problems
Chapter 17 Blind Deconvolution
17.1 Overview of Blind Deconvolution
17.2 Channel Identifiability Using Cyclostationary Statistics
17.3 Subspace Decomposition for Fractionally Spaced Blind Identification
17.4 Bussgang Algorithm for Blind Equalization
17.5 Extension of the Bussgang Algorithm to Complex Baseband Channels
17.6 Special Cases of the Bussgang Algorithm
17.7 Fractionally Spaced Bussgang Equalizers
17.8 Estimation of Unknown Probability Distribution Function of Signal Source
17.9 Summary and Discussion
Problems
Epilogue
1. Robustness, Efficiency, and Complexity
2. Kernel-Based Nonlinear Adaptive Filtering
Appendix A: Theory of Complex Variables
A.1 Cauchy–Riemann Equations
A.2 Cauchy’s Integral Formula
A.3 Laurent’s Series
A.4 Singularities and Residues
A.5 Cauchy’s Residue Theorem
A.6 Principle of the Argument
A.7 Inversion Integral for the z-Transform
A.8 Parseval’s Theorem
Appendix B: Wirtinger Calculus for Computing Complex Gradients
B.1 Wirtinger Calculus: Scalar Gradients
B.2 Generalized Wirtinger Calculus: Gradient Vectors
B.3 Another Approach to Compute Gradient Vectors
B.4 Expressions for the Partial Derivatives ∂f/∂z and ∂f/∂z*
Appendix C: Method of Lagrange Multipliers
C.1 Optimization Involving a Single Equality Constraint
C.2 Optimization Involving Multiple Equality Constraints
C.3 Optimum Beamformer
Appendix D: Estimation Theory
D.1 Likelihood Function
D.2 Cramér–Rao Inequality
D.3 Properties of Maximum-Likelihood Estimators
D.4 Conditional Mean Estimator
Appendix E: Eigenanalysis
E.1 The Eigenvalue Problem
E.2 Properties of Eigenvalues and Eigenvectors
E.3 Low-Rank Modeling
E.4 Eigenfilters
E.5 Eigenvalue Computations
Appendix F: Langevin Equation of Nonequilibrium Thermodynamics
F.1 Brownian Motion
F.2 Langevin Equation
Appendix G: Rotations and Reflections
G.1 Plane Rotations
G.2 Two-Sided Jacobi Algorithm
G.3 Cyclic Jacobi Algorithm
G.4 Householder Transformation
G.5 The QR Algorithm
Appendix H: Complex Wishart Distribution
H.1 Definition
H.2 The Chi-Square Distribution as a Special Case
H.3 Properties of the Complex Wishart Distribution
H.4 Expectation of the Inverse Correlation Matrix Φ⁻¹(n)
Glossary
Text Conventions
Abbreviations
Principal Symbols
Bibliography
Suggested Reading
Index
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
Y
Z
Adaptive Filter Theory
Fifth Edition

Simon Haykin
Communications Research Laboratory
McMaster University
Hamilton, Ontario, Canada

Upper Saddle River • Boston • Columbus • San Francisco • New York • Indianapolis
London • Toronto • Sydney • Singapore • Tokyo • Montreal • Dubai • Madrid
Hong Kong • Mexico City • Munich • Paris • Amsterdam • Cape Town
Vice President and Editorial Director, ECS: Marcia J. Horton
Senior Editor: Andrew Gilfillan
Associate Editor: Alice Dworkin
Editorial Assistant: William Opaluch
Senior Managing Editor: Scott Disanno
Production Liaison: Irwin Zucker
Production Editor: Pavithra Jayapaul, Jouve India
Operations Specialist: Linda Sager
Executive Marketing Manager: Tim Galligan
Art Editor: Greg Dulles
Art Director: Jayne Conte
Cover Designer: Suzanne Behnke
Cover Image: © Gordon Swanson
Composition/Full-Service Project Management: Jouve India

Copyright © 2014, 2002, 1996, 1991 by Pearson Education, Inc., Upper Saddle River, New Jersey 07458. All rights reserved. Manufactured in the United States of America. This publication is protected by Copyright, and permission should be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. To obtain permission(s) to use materials from this work, please submit a written request to Pearson Higher Education, Permissions Department, 1 Lake Street, Upper Saddle River, NJ 07458.

The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book. The author and publisher shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs.

MATLAB is a registered trademark of The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098.

Library of Congress Cataloging-in-Publication Data

Haykin, Simon
Adaptive filter theory / Simon Haykin, Communications Research Laboratory, McMaster University, Hamilton, Ontario, Canada.—Fifth edition.
pages cm
ISBN-13: 978-0-13-267145-3
ISBN-10: 0-13-267145-X
1. Adaptive filters. I. Title.
TK7872.F5H368 2012
621.3815'324—dc23
2012025640

10 9 8 7 6 5 4 3 2 1

ISBN-13: 978-0-13-267145-3
ISBN-10: 0-13-267145-X
This book is dedicated to
• The many researchers around the world for their contributions to the ever-growing literature on adaptive filtering, and
• The many reviewers, new and old, for their useful inputs and critical comments.
Contents

Preface x
Acknowledgments xvi

Background and Preview 1
1. The Filtering Problem 1
2. Linear Optimum Filters 4
3. Adaptive Filters 4
4. Linear Filter Structures 6
5. Approaches to the Development of Linear Adaptive Filters 12
6. Adaptive Beamforming 13
7. Four Classes of Applications 17
8. Historical Notes 20

Chapter 1 Stochastic Processes and Models 30
1.1 Partial Characterization of a Discrete-Time Stochastic Process 30
1.2 Mean Ergodic Theorem 32
1.3 Correlation Matrix 34
1.4 Correlation Matrix of Sine Wave Plus Noise 39
1.5 Stochastic Models 40
1.6 Wold Decomposition 46
1.7 Asymptotic Stationarity of an Autoregressive Process 49
1.8 Yule–Walker Equations 51
1.9 Computer Experiment: Autoregressive Process of Order Two 52
1.10 Selecting the Model Order 60
1.11 Complex Gaussian Processes 63
1.12 Power Spectral Density 65
1.13 Properties of Power Spectral Density 67
1.14 Transmission of a Stationary Process Through a Linear Filter 69
1.15 Cramér Spectral Representation for a Stationary Process 72
1.16 Power Spectrum Estimation 74
1.17 Other Statistical Characteristics of a Stochastic Process 77
1.18 Polyspectra 78
1.19 Spectral-Correlation Density 81
1.20 Summary and Discussion 84
Problems 85

Chapter 2 Wiener Filters 90
2.1 Linear Optimum Filtering: Statement of the Problem 90
2.2 Principle of Orthogonality 92
2.3 Minimum Mean-Square Error 96
2.4 Wiener–Hopf Equations 98
2.5 Error-Performance Surface 100
2.6 Multiple Linear Regression Model 104
2.7 Example 106
2.8 Linearly Constrained Minimum-Variance Filter 111
2.9 Generalized Sidelobe Cancellers 116
2.10 Summary and Discussion 122
Problems 124

Chapter 3 Linear Prediction 132
3.1 Forward Linear Prediction 132
3.2 Backward Linear Prediction 139
3.3 Levinson–Durbin Algorithm 144
3.4 Properties of Prediction-Error Filters 153
3.5 Schur–Cohn Test 162
3.6 Autoregressive Modeling of a Stationary Stochastic Process 164
3.7 Cholesky Factorization 167
3.8 Lattice Predictors 170
3.9 All-Pole, All-Pass Lattice Filter 175
3.10 Joint-Process Estimation 177
3.11 Predictive Modeling of Speech 181
3.12 Summary and Discussion 188
Problems 189

Chapter 4 Method of Steepest Descent 199
4.1 Basic Idea of the Steepest-Descent Algorithm 199
4.2 The Steepest-Descent Algorithm Applied to the Wiener Filter 200
4.3 Stability of the Steepest-Descent Algorithm 204
4.4 Example 209
4.5 The Steepest-Descent Algorithm Viewed as a Deterministic Search Method 221
4.6 Virtue and Limitation of the Steepest-Descent Algorithm 222
4.7 Summary and Discussion 223
Problems 224

Chapter 5 Method of Stochastic Gradient Descent 228
5.1 Principles of Stochastic Gradient Descent 228
5.2 Application 1: Least-Mean-Square (LMS) Algorithm 230
5.3 Application 2: Gradient-Adaptive Lattice Filtering Algorithm 237
5.4 Other Applications of Stochastic Gradient Descent 244
5.5 Summary and Discussion 245
Problems 246

Chapter 6 The Least-Mean-Square (LMS) Algorithm 248
6.1 Signal-Flow Graph 248
6.2 Optimality Considerations 250
6.3 Applications 252
6.4 Statistical Learning Theory 272
6.5 Transient Behavior and Convergence Considerations 283
6.6 Efficiency 286
6.7 Computer Experiment on Adaptive Prediction 288
6.8 Computer Experiment on Adaptive Equalization 293
6.9 Computer Experiment on a Minimum-Variance Distortionless-Response Beamformer 302
6.10 Summary and Discussion 306
Problems 308

Chapter 7 Normalized Least-Mean-Square (LMS) Algorithm and Its Generalization 315
7.1 Normalized LMS Algorithm: The Solution to a Constrained Optimization Problem 315
7.2 Stability of the Normalized LMS Algorithm 319
7.3 Step-Size Control for Acoustic Echo Cancellation 322
7.4 Geometric Considerations Pertaining to the Convergence Process for Real-Valued Data 327
7.5 Affine Projection Adaptive Filters 330
7.6 Summary and Discussion 334
Problems 335

Chapter 8 Block-Adaptive Filters 339
8.1 Block-Adaptive Filters: Basic Ideas 340
8.2 Fast Block LMS Algorithm 344
8.3 Unconstrained Frequency-Domain Adaptive Filters 350
8.4 Self-Orthogonalizing Adaptive Filters 351
8.5 Computer Experiment on Adaptive Equalization 361
8.6 Subband Adaptive Filters 367
8.7 Summary and Discussion 375
Problems 376

Chapter 9 Method of Least-Squares 380
9.1 Statement of the Linear Least-Squares Estimation Problem 380
9.2 Data Windowing 383
9.3 Principle of Orthogonality Revisited 384
9.4 Minimum Sum of Error Squares 387
9.5 Normal Equations and Linear Least-Squares Filters 388
9.6 Time-Average Correlation Matrix Φ 391
9.7 Reformulation of the Normal Equations in Terms of Data Matrices 393
9.8 Properties of Least-Squares Estimates 397
9.9 Minimum-Variance Distortionless Response (MVDR) Spectrum Estimation 401
9.10 Regularized MVDR Beamforming 404
9.11 Singular-Value Decomposition 409
9.12 Pseudoinverse 416
9.13 Interpretation of Singular Values and Singular Vectors 418
9.14 Minimum-Norm Solution to the Linear Least-Squares Problem 419
9.15 Normalized LMS Algorithm Viewed as the Minimum-Norm Solution to an Underdetermined Least-Squares Estimation Problem 422
9.16 Summary and Discussion 424
Problems 425

Chapter 10 The Recursive Least-Squares (RLS) Algorithm 431
10.1 Some Preliminaries 431
10.2 The Matrix Inversion Lemma 435
10.3 The Exponentially Weighted RLS Algorithm 436
10.4 Selection of the Regularization Parameter 439
10.5 Updated Recursion for the Sum of Weighted Error Squares 441
10.6 Example: Single-Weight Adaptive Noise Canceller 443
10.7 Statistical Learning Theory 444
10.8 Efficiency 449
10.9 Computer Experiment on Adaptive Equalization 450
10.10 Summary and Discussion 453
Problems 454

Chapter 11 Robustness 456
11.1 Robustness, Adaptation, and Disturbances 456
11.2 Robustness: Preliminary Considerations Rooted in H∞ Optimization 457
11.3 Robustness of the LMS Algorithm 460
11.4 Robustness of the RLS Algorithm 465
11.5 Comparative Evaluations of the LMS and RLS Algorithms from the Perspective of Robustness 470
11.6 Risk-Sensitive Optimality 470
11.7 Trade-Offs Between Robustness and Efficiency 472
11.8 Summary and Discussion 474
Problems 474

Chapter 12 Finite-Precision Effects 479
12.1 Quantization Errors 480
12.2 Least-Mean-Square (LMS) Algorithm 482
12.3 Recursive Least-Squares (RLS) Algorithm 491
12.4 Summary and Discussion 497
Problems 498

Chapter 13 Adaptation in Nonstationary Environments 500
13.1 Causes and Consequences of Nonstationarity 500
13.2 The System Identification Problem 501
13.3 Degree of Nonstationarity 504
13.4 Criteria for Tracking Assessment 505
13.5 Tracking Performance of the LMS Algorithm 507
13.6 Tracking Performance of the RLS Algorithm 510
13.7 Comparison of the Tracking Performance of LMS and RLS Algorithms 514
13.8 Tuning of Adaptation Parameters 518
13.9 Incremental Delta-Bar-Delta (IDBD) Algorithm 520
13.10 Autostep Method 526
13.11 Computer Experiment: Mixture of Stationary and Nonstationary Environmental Data 530
13.12 Summary and Discussion 534
Problems 535

Chapter 14 Kalman Filters 540
14.1 Recursive Minimum Mean-Square Estimation for Scalar Random Variables 541
14.2 Statement of the Kalman Filtering Problem 544
14.3 The Innovations Process 547
14.4 Estimation of the State Using the Innovations Process 549
14.5 Filtering 555
14.6 Initial Conditions 557
14.7 Summary of the Kalman Filter 558
14.8 Optimality Criteria for Kalman Filtering 559
14.9 Kalman Filter as the Unifying Basis for RLS Algorithms 561
14.10 Covariance Filtering Algorithm 566
14.11 Information Filtering Algorithm 568
14.12 Summary and Discussion 571
Problems 572