Digital Spectral Analysis: Second Edition

by S. Lawrence Marple, Jr.

eBook

$39.95 


Overview

Digital Spectral Analysis offers a broad perspective on spectral estimation techniques and their implementation. Coverage includes spectral estimation of discrete-time or discrete-space sequences derived by sampling continuous-time or continuous-space signals. The treatment emphasizes the behavior of each spectral estimator for short data records and describes over 40 techniques, all available as implemented MATLAB functions.
In addition to summarizing classical spectral estimation, this text provides theoretical background and review material in linear systems, Fourier transforms, matrix algebra, random processes, and statistics. Topics include Prony's method, parametric methods, the minimum variance method, eigenanalysis-based estimators, multichannel methods, and two-dimensional methods. The text is suitable for advanced undergraduates and graduate students of electrical engineering, as well as for practicing scientists and engineers in the signal processing community outside of universities. Prerequisites include some knowledge of discrete-time linear system and transform theory, introductory probability and statistics, and linear algebra. 1987 edition.

Product Details

ISBN-13: 9780486838861
Publisher: Dover Publications
Publication date: 03/20/2019
Series: Dover Books on Electrical Engineering
Sold by: Barnes & Noble
Format: eBook
Pages: 432
File size: 31 MB

About the Author

S. Lawrence Marple, Jr., is a Professor in the School of Electrical Engineering and Computer Science at Oregon State University.

Read an Excerpt

CHAPTER 1

INTRODUCTION

Spectral analysis is any signal processing method that characterizes the frequency content of a measured signal. The Fourier transform is the mathematical foundation for relating a time or space signal, or a model of this signal, to its frequency-domain representation. Statistics play a significant role in spectral analysis because most signals have a noisy or random aspect. If the underlying statistical attributes of a signal were known exactly or could be determined without error from a finite interval of the signal, then spectral analysis would be an exact science. The practical reality, however, is that only an estimate of the spectrum can be made from a single finite segment of the signal. As a result, the practice of spectral analysis since the 1880s has tended to be a subjective craft, applying science but also requiring a degree of empirical art.

The difficulty of the spectral estimation problem is illustrated by Fig. 1.1. Two typical spectral estimates are shown in this figure, obtained by processing the same finite sample sequence by two different spectral estimation techniques. Each spectral plot represents the distribution of signal strength with frequency. A precise meaning of signal "strength" in terms of energy or energy per unit time (power) will be provided in Chaps. 2 and 4. The units of frequency, as adopted in this text, are either cycles per second (Hertz) for temporal signals or cycles per meter (wavenumber) for spatial signals. Signal strength P(f) at frequency f will be computed as 10 log10[P(f)/Pmax] and plotted in units of decibels (dB) relative to the maximum spectral strength Pmax over all frequencies. The maximum relative strength plotted by this approach will, therefore, be 0 dB. The significant differences between the two spectral estimates may be attributed to differing assumptions made concerning the nature of the data and to the type of averaging used in recognition of the statistical impact of noise in the data. In a situation where no a priori knowledge of signal characteristics is available, one would find it difficult to select which of the two spectral estimators, if either, has represented the true underlying spectrum with better fidelity. It appears that estimate 1.1(b) has higher resolution than estimate 1.1(a), but this could be an artifact of the processing used to generate estimate 1.1(b), rather than actual detail that exists in the spectrum. This is the kind of uncertainty that arises in practice and that illustrates the subjective nature of spectral analysis.
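
The brief MATLAB fragment below illustrates only this plotting convention (a minimal sketch; the two-sinusoid test signal, the FFT length, and the use of the raw squared-magnitude FFT as a stand-in spectral estimate are assumptions made here for illustration and are not one of the spectral estimation functions supplied with the book):

% Plot a crude sample spectrum in decibels relative to its peak (0 dB maximum).
N = 64;                                      % short data record
n = (0:N-1).';
x = exp(1j*2*pi*0.20*n) + 0.5*exp(1j*2*pi*0.23*n) ...
    + 0.1*(randn(N,1) + 1j*randn(N,1));      % two complex sinusoids plus noise
P   = abs(fft(x, 1024)).^2;                  % squared-magnitude FFT as a stand-in for P(f)
PdB = 10*log10(P/max(P));                    % relative strength: 0 dB at the spectral peak
f   = (0:1023).'/1024;                       % frequency as a fraction of the sampling rate
plot(f, PdB), xlabel('Fraction of sampling frequency'), ylabel('Relative PSD (dB)')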

Classical methods of spectral estimation have been well documented by several texts; the books by Blackman and Tukey [1958] and Jenkins and Watts [1968] are probably the two best known. Since the publication of these and related texts, interest has grown in devising alternative spectral estimation methods that perform better for limited data records. In particular, new methods of spectral estimation have been promoted that yield apparent improvements in frequency resolution over that achievable with classical spectral estimators. Limited data records occur frequently in practice. For example, to study intrapulse modulation characteristics within a single short radar pulse, only a few time samples may be taken from the finite-duration pulse. In sonar, many data samples are available, but target motion necessitates that the analysis interval be short in order to assure that the target statistics are effectively unchanging within the analysis interval. The emphasis in this book is on the new, or "modern," methods of spectral estimation. In this sense, this text supplements the classical spectral estimation material covered in earlier texts. All methods described in this text assume sampled digital data, in contrast to some earlier texts that considered only continuous data.

The intent of each chapter is to provide the reader with an understanding of the assumptions made concerning each method. The beginning of each chapter summarizes the spectral estimator technique, or techniques, and the associated software covered in that chapter, enabling scientists and engineers to readily implement each spectral estimator without being immersed in the theoretical details of the chapter. Some guidelines for applications are also provided. No attempt has been made to rank the spectral estimation methods with respect to each other. This text references a variety of MATLAB spectral estimation programs; users should probably apply several of the techniques to their experimental data. It may then be possible to extract a better understanding of the measured process from features common to all of the selected spectral estimates. For generality, complex-valued signals are assumed. The use of complex-valued signals is becoming more commonplace in digital signal processing systems. Section 2.12 describes two common sources of complex-valued signals.

An illuminating perspective of spectral analysis may be obtained by studying its historical roots. Further insight may be developed by examining some of the issues of spectral estimation. Both topics are covered in the remaining sections of this chapter. A brief description of how to use this text concludes the chapter.

1.1 HISTORICAL PERSPECTIVE

Cyclic, or recurring, processes observed in natural phenomena have instilled in humans, since the earliest times, the basic concepts that are embedded to this day in modern spectral estimation. Without performing an explicit mathematical analysis, ancient civilizations were able to devise calendars and time measures from their observations of the periodicities in the length of the day, the length of the year, the seasonal changes, the phases of the moon, and the motion of other heavenly bodies such as planets. In the sixth century BC, Pythagoras developed a relationship between the periodicity of pure sine vibrations of musical notes produced by a string of fixed tension and a number representing the length of the string. He believed that the essence of harmony was inherent in numbers. Pythagoras extended this empirical relationship to describe the harmonic motion of heavenly bodies, describing it as the "music of the spheres."

The mathematical basis for modern spectral estimation has its origins in the seventeenth-century work of the scientist Sir Isaac Newton. He observed that sunlight passing through a glass prism was expanded into a band of many colors. Thus, he discovered that each color represented a particular wavelength of light and that the white light of the sun contained all wavelengths. It was also Newton who introduced [1671] the word spectrum as a scientific term to describe this band of light colors. Spectrum is a variant of the Latin word "specter," meaning image or ghostly apparition. The adjective associated with spectrum is spectral. Thus, spectral estimation, rather than spectrum estimation, is the preferred terminology. Newton presented in his major work Principia [1687] the first mathematical treatment of the periodicity of wave motion that Pythagoras had empirically observed.

The solution to the wave equation for the vibrating musical string was developed by the mathematician Daniel Bernoulli [1738], who found the general solution for the displacement u(x, t) of the string at time t and position x (the endpoints of the string are at x = 0 and x = π) to be

[MATHEMATICAL EXPRESSION OMITTED] (1.1)

where c, a physical quantity characteristic of the material of the string, represents the velocity of the traveling waves on the string. The term A0 is normally zero and we will assume this here. The mathematician L. Euler [1755] demonstrated that the coefficients Ak and Bk in the series given by Eq. (1.1), which would later be called the Fourier series, were found as solutions to

[MATHEMATICAL EXPRESSION OMITTED] (1.2)

for which t0 = π/2kc. The French engineer Jean Baptiste Joseph Fourier, in his treatise Analytical Theory of Heat [1822], extended the wave equation results by asserting that any arbitrary function u(x), even one with a finite number of discontinuities, could be represented as an infinite summation of sine and cosine terms,

[MATHEMATICAL EXPRESSION OMITTED] (1.3)

The mathematics of taking a function u(x), or its samples, and determining its Ak and Bk coefficients has become known as harmonic analysis, due to the harmonic indexing of the frequencies in the sine and cosine terms.
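
The displays marked (1.1) through (1.3) above are classical results. Written in standard modern notation (a hedged reconstruction consistent with the surrounding text; the book's own typography and normalization may differ), they take the form

u(x,t) = A_0 + \sum_{k=1}^{\infty} \left[ A_k \cos(kct) + B_k \sin(kct) \right] \sin(kx)        (cf. (1.1))

A_k = \frac{2}{\pi} \int_0^{\pi} u(x,0)\,\sin(kx)\,dx , \qquad B_k = \frac{2}{\pi} \int_0^{\pi} u(x,t_0)\,\sin(kx)\,dx        (cf. (1.2))

u(x) = \frac{A_0}{2} + \sum_{k=1}^{\infty} \left[ A_k \cos(kx) + B_k \sin(kx) \right]        (cf. (1.3))

The orthogonality of sin(kx) on the interval [0, π] is what allows the single evaluation time t0 = π/2kc to isolate Bk in (1.2): at t = t0 the k-th cosine term vanishes, while all cross terms from other harmonics integrate to zero.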

Beginning in the mid-nineteenth century, harmonic analysis was being applied to practical scientific studies of phenomenological data such as sound, weather, sunspot activity, magnetic deviations, river flow, and tidal variations. In many of these applications, the fundamental period was either obscured by measurement error noise or was not visually evident. In addition, secondary periodic components that bore no harmonic relationship to the fundamental periodicity were often present. This complicated the estimation of the various periodicities. Manual computation of the Fourier series coefficients by direct computational techniques or by graphic-aided methods proved to be extremely tedious and was limited to very small data sets. Mechanical harmonic analyzers were developed to assist the analysis. These calculating machines were basically mechanical integrators, or planimeters, because they found the area under the curves u(x) sin kx and u(x) cos kx over the interval 0 ≤ x ≤ π, thereby providing a calculation of the Fourier series coefficients Ak and Bk. The British physicist Sir William Thomson (also known as Lord Kelvin, for whom a temperature scale was named) developed the first mechanical harmonic analyzer based on a dual-function-product planimeter (∫ u(θ)φ(θ) dθ), invented by his brother James Thomson and modified to evaluate cosine and sine functions [1876, 1878]. Figures 1.2(a) and (e) illustrate versions of this device. A tracing point was guided manually along the plotted curve to be analyzed; the coefficients were then read from the integrating cylinders. A different integrating cylinder was required for each coefficient, and coefficients only up to the third harmonic could be evaluated. It was used by the British Meteorological Office to analyze graphical records of daily changes in atmospheric temperature and pressure. Observers said of this device that, due to its size and weight, it was practically a permanent fixture in the room where it was used. Improvements in harmonic analyzers were subsequently made by O. Henrici [1894] [see Fig. 1.2(b)], A. Sharp [1894], G. U. Yule [1895], and the American physicists Albert A. Michelson (who is more famous for his measurement of the speed of light) and S. W. Stratton [1898]. The Michelson-Stratton harmonic analyzer [Fig. 1.2(c)], designed using spiral springs, was particularly impressive in that it not only could perform the analysis of 80 harmonic coefficients simultaneously, but it also could work as a synthesizer (inverse Fourier transformer) to construct the superposition of the Fourier series components. Michelson used the machine in his Nobel Prize-winning optical studies. As a synthesizer, the machine could predict interference fringe patterns by representing them as simple harmonic curves. As an analyzer, it decomposed a visibility curve into harmonic components representing the harmonic distribution of light in a source.

The results from a harmonic analysis in this era were sometimes used to synthesize a periodic waveform from the harmonic components for purposes of prediction (a Fourier series model of a data sequence). One of the earliest uses was for tidal prediction. Using direct manual calculation, Sir William Thomson performed harmonic analyses of tidal gauge observations from British ports starting in 1866; by 1872 he had developed a tide-predicting machine that utilized the coefficients estimated from his harmonic analysis. Later versions of this machine [see Fig. 1.2(d)] could combine up to 10 harmonic tidal constituents, each a function of the port where tides were to be predicted, by initializing the machine with crank and pulley settings. Thomson's tide predictor was a rather large machine, with a base of 3 feet by 6 feet. It took approximately four hours to draw one year of tidal curves for one harbor. A tide predictor built by William Ferrel in 1882, now on display in the Smithsonian Museum in Washington, DC, was used by the U.S. Coast and Geodetic Survey to prepare tide tables from 1883 to 1910.

Although the mechanical harmonic analyzers were useful for evaluating time series with obvious periodicities (smoothly varying time sequences with little or no noise), numerical methods of harmonic analysis (fitting a Fourier series) were still required when evaluating very noisy data (described in the early literature as data with "irregular fluctuations") for possible hidden periodicities (nonobvious periodic signals), or when evaluating signals with nonharmonically related periods. Of the many scientists who used harmonic analysis, Schuster [1897, 1898, 1900, 1905, 1906] made the most profound impact on what has become the classic spectral estimation technique. He suggested that a plot of the squared envelope Sk = Ak² + Bk² of the Fourier transform coefficients (first proposed by Stokes [1879])

[MATHEMATICAL EXPRESSION OMITTED] (1.4)

be computed over a range of n integral periods of T0, where the correspondence k = 2πn/T0 should be made. Schuster's notation has been used here. Schuster termed his method the periodogram [1898]. The periodogram, in concept, could be evaluated over a continuum of periods (the inverse of which are frequencies). In his papers, Schuster recognized many of the problems and peculiarities of the periodogram. Depending on the choice of starting time τ, he observed that irregular and different patterns were obtained, sometimes producing spurious peaks (he called them "accidental periodicities") where no periodicity truly existed. Schuster [1894, 1904] knew from his understanding of Fourier analysis of optical spectra that averaging of Sk, obtained by evaluation of different data segments (with the period T0 fixed), was necessary to smooth the periodogram (obtain the "mean periodogram" in Schuster's jargon) and remove the spurious parts of the spectrum. Although he recognized the need to average, the implementation required computational means beyond the resources available in Schuster's era. Quoting Schuster [1898, p. 25],

"The periodogram as defined by the equations [1.4] will in general show an irregular outline, and also depend on the value of τ. In the optical analysis of light we are helped by the fact that the eye only receives the impression of the average of a great number of adjacent periods, and also the average, as regards time, of the intensity of radiation of any particular period. ... If we were to follow the optical analogy we should have to vary the time τ ... continuously and take the average value of r = [square root of A2 + B2] obtained in this way for each value of k ... but this would involve an almost prohibitive labor."

A thorough theoretical understanding of the statistical basis for averaging was still thirty years away, in the work of Wiener, and practical implementation of spectral averaging was fifty years away, awaiting the fast Fourier transform algorithms and digital computers that would significantly reduce the "prohibitive" computational burden.
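
A rough modern illustration of the segment averaging that Schuster found computationally prohibitive follows (a minimal MATLAB sketch under assumed parameters, not an algorithm from this book; the squared-magnitude FFT of each segment plays the role of Schuster's Sk = Ak² + Bk²):

% Average the periodograms of several data segments to smooth the spectral estimate.
Nseg = 16;  L = 64;  Nfft = 256;             % assumed number of segments, segment length, FFT size
x = randn(Nseg*L, 1);                        % placeholder data record (substitute measured samples)
Pavg = zeros(Nfft, 1);
for m = 1:Nseg
    seg  = x((m-1)*L+1 : m*L);               % m-th nonoverlapping segment
    Pavg = Pavg + abs(fft(seg, Nfft)).^2;    % accumulate the segment periodograms
end
Pavg = Pavg/Nseg;                            % "mean periodogram" in Schuster's sense
PdB  = 10*log10(Pavg/max(Pavg));             % plot relative to the 0 dB peak, as in Fig. 1.1
plot((0:Nfft-1).'/Nfft, PdB)

Averaging the Nseg segment periodograms reduces the variance of the estimate at the cost of frequency resolution, a trade-off taken up in Chap. 5.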

Schuster was also aware of the sidelobes (he called them "spurious periodicities") around mainlobe responses in the periodogram that are inherent in all Fourier analysis of finite record lengths. His cognizance of sidelobes was due to his ability to make an analogy with the diffraction patterns of the optical spectroscope caused by its limited spatial aperture ("limited resolving power"). Schuster pointed out that many researchers in his day were incorrectly asserting that all maxima in the periodogram were hidden periodicities when, in fact, many were sidelobes rather than true periodicities. In addition to the spurious periodicities in the periodogram, Schuster was also aware of the estimation bias introduced into the periodogram estimate when the measurement interval was not an exact integer multiple of the period under analysis. Many scientists in Schuster's day thought that white light might be a close grouping of monochromatic line components (analogous to white noise being a grouping of sinusoidal frequencies), but Schuster was able to show empirically that white light was a continuum of frequencies. Wiener was later able to extend this white light analogy to a statistical white noise process.

(Continues…)


Excerpted from "Digital Spectral Analysis"
by S. Lawrence Marple, Jr.
Copyright © 2019 S. Lawrence Marple, Jr.
Excerpted by permission of Dover Publications, Inc.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

CONTENTS
 
NOTATIONAL CONVENTIONS
GLOSSARY OF KEY SYMBOLS
PREFACE
 
1 INTRODUCTION
1.1 Historical Perspective
1.2 Sunspot Numbers
1.3 A Test Case
1.4 Issues in Spectral Estimation
1.5 How to Use This Text
References
 
2 REVIEW OF LINEAR SYSTEMS AND TRANSFORM THEORY
2.1 Introduction
2.2 Signal Notation
2.3 Continuous Linear Systems
2.4 Discrete Linear Systems
2.5 Continuous-Time Fourier Transform
2.6 Sampling and Windowing Operations
2.7 Relating the Continuous and Discrete Transforms
2.8 The Issue of Scaling for Power Determination
2.9 The Issue of Zero Padding
2.10 The Fast Fourier Transform
2.11 Resolution and the Time-Bandwidth Product
2.12 Extra: Source of Complex-Valued Signals
2.13 Extra: Wavenumber Processing with Linear Spatial Arrays
References
 
3 REVIEW OF MATRIX ALGEBRA
3.1 Introduction
3.2 Matrix Algebra Basics
3.3 Special Vector and Matrix Structures
3.4 Matrix Inverse
3.5 Solution of Linear Equations
3.6 Overdetermined and Underdetermined Linear Equations
3.7 Solution of Overdetermined and Underdetermined Linear Equations
3.8 The Toeplitz Matrix
3.9 The Vandermonde Matrix
References
 
4 REVIEW OF RANDOM PROCESS THEORY
4.1 Introduction
4.2 Probability and Random Variables
4.3 Random Processes
4.4 Substituting Time Averages for Ensemble Averages
4.5 Entropy Concepts
4.6 Limit Spectrum of Test Data
4.7 Extra: Bias and Variance of the Sample Spectrum
References
 
5 CLASSICAL SPECTRAL ESTIMATION
5.1 Introduction
5.2 Summary
5.3 Windows
5.4 Resolution and the Stability-Time-Bandwidth Product
5.5 Autocorrelation and Cross Correlation Estimation
5.6 Correlogram Power Spectral Density (PSD) Estimators
5.7 Periodogram PSD Estimators
5.8 Combined Periodogram/Correlogram Estimators
5.9 Application to Sunspot Numbers
5.10 Conclusion
References
 
6 PARAMETRIC MODELS OF RANDOM PROCESSES
6.1 Introduction
6.2 Summary
6.3 Autoregressive (AR), Moving Average (MA), and Autoregressive-Moving Average (ARMA) Random Process Models
6.4 Relationships Among AR, MA, and ARMA Process Parameters
6.5 Relationship of AR, MA, and ARMA Parameters to ACS
6.6 Spectral Factorization
References
 
7 AUTOREGRESSIVE PROCESS AND SPECTRUM PROPERTIES
7.1 Introduction
7.2 Summary
7.3 Autoregressive Process Properties
7.4 Autoregressive Power Spectral Density Properties
References
 
8 AR SPECTRAL ESTIMATION: BLOCK DATA ALGORITHMS
8.1 Introduction
8.2 Summary
8.3 Correlation Function Estimation Method
8.4 Reflection Coefficient Estimation Methods
8.5 Least Squares Linear Prediction Estimation Methods
8.6 Estimator Characteristics
8.7 Model Order Selection
8.8 Autoregressive Processes with Observation Noise
8.9 Application to Sunspot Numbers
8.10 Extra: Covariance Linear Prediction Fast Algorithm
8.11 Extra: Modified Covariance Linear Prediction Fast Algorithm
References
 
9 AR SPECTRAL ESTIMATION: SEQUENTIAL DATA ALGORITHMS
9.1 Introduction
9.2 Summary
9.3 Gradient Adaptive Autoregressive Methods
9.4 Recursive Least Squares (RLS) Autoregressive Methods
9.5 Fast Lattice Autoregressive Methods
9.6 Application to Sunspot Numbers
9.7 Extra: Fast RLS Algorithm for Recursive Linear Prediction
References
 
10 AUTOREGRESSIVE-MOVING AVERAGE SPECTRAL ESTIMATION
10.1 Introduction
10.2 Summary
10.3 Moving Average Parameter Estimation
10.4 Separate Autoregressive and Moving Average Parameter Estimation
10.5 Simultaneous Autoregressive and Moving Average Parameter Estimation
10.6 Sequential Approach to ARMA Estimation
10.7 A Special ARMA Process for Sinusoids in White Noise
10.8 Application to Sunspot Numbers
References
 
11 PRONY'S METHOD
11.1 Introduction
11.2 Summary
11.3 Simultaneous Exponential Parameter Estimation
11.4 Original Prony Concept
11.5 Least Squares Prony Method
11.6 Modified Least Squares Prony Method
11.7 Prony Spectrum
11.8 Accounting for Known Exponential Components
11.9 Identification of Exponentials in Noise
11.10 Application to Sunspot Numbers
11.11 Extra: Fast Algorithm to Solve Symmetric Covariance Linear Equations
References
 
12 MINIMUM VARIANCE SPECTRAL ESTIMATION
12.1 Introduction
12.2 Summary
12.3 Derivation of the Minimum Variance Spectral Estimator
12.4 Relationship of Minimum Variance and Autoregressive Spectral Estimators
12.5 Implementation of the Minimum Variance Spectral Estimator
12.6 Application to Sunspot Numbers
References
 
13 EIGENANALYSIS-BASED FREQUENCY ESTIMATION
13.1 Introduction
13.2 Summary
13.3 Eigenanalysis of Autocorrelation Matrix for Sinusoids in White Noise
13.4 Eigenanalysis of Data Matrix for Exponentials in Noise
13.5 Signal Subspace Frequency Estimators
13.6 Noise Subspace Frequency Estimators
13.7 Order Selection
References
 
14 SUMMARY OF SPECTRAL ESTIMATORS
Synopsis Table
References
 
15 MULTICHANNEL SPECTRAL ESTIMATION
15.1 Introduction
15.2 Summary
15.3 Multichannel Linear Systems Theory
15.4 Multichannel Random Process Theory
15.5 Multichannel Classical Spectral Estimators
15.6 Multichannel ARMA, AR, and MA Processes
15.7 Multichannel Yule-Walker Equations
15.8 Multichannel Levinson Algorithm
15.9 Multichannel Block Toeplitz Matrix Inverse
15.10 Multichannel Autoregressive Spectral Estimation
15.11 Autoregressive Order Selection
15.12 Experimental Comparison of Multichannel AR PSD Estimators
15.13 Multichannel Minimum Variance Spectral Estimation
15.14 Two Channel Spectral Analysis: Sunspot Numbers and Air Temperature
References
 
16 TWO-DIMENSIONAL SPECTRAL ESTIMATION
16.1 Introduction
16.2 Summary
16.3 Two-Dimensional Linear Systems and Transform Theory
16.4 Two-Dimensional Random Process Theory
16.5 Classical 2-D Spectral Estimation
16.6 Modified Classical 2-D Spectral Estimation
16.7 Two-Dimensional Autoregressive Spectral Estimation
16.8 Two-Dimensional Maximum Entropy Spectral Estimation
16.9 Two-Dimensional Minimum Variance Spectral Estimation
References

INDEX