Digital Filters

Digital signals occur in an increasing number of applications: in telephone communications; in radio, television, and stereo sound systems; and in spacecraft transmissions, to name just a few. This introductory text examines digital filtering, the processes of smoothing, predicting, differentiating, integrating, and separating signals, as well as the removal of noise from a signal. The processes bear particular relevance to computer applications, one of the focuses of this book.
Readers will find Hamming's analysis accessible and engaging, in recognition of the fact that many people with the strongest need for an understanding of digital filtering do not have a strong background in mathematics or electrical engineering. Thus, this book assumes only a knowledge of calculus and a smattering of statistics (reviewed in the text). Adopting the simplest, most direct mathematical tools, the author concentrates on linear signal processing; the main exceptions are the examination of round-off effects and a brief mention of Kalman filters.
This updated edition includes more material on the z-transform as well as additional examples and exercises for further reinforcement of each chapter's content. The result is an accessible, highly useful resource for the broad range of people working in the field of digital signal processing.

by Richard W. Hamming


Product Details

ISBN-13: 9780486319247
Publisher: Dover Publications
Publication date: 03/12/2013
Series: Dover Civil and Mechanical Engineering
Format: eBook
Pages: 304
File size: 15 MB

About the Author

Richard W. Hamming: The Computer Icon
Richard W. Hamming (1915–1998) first programmed one of the earliest digital computers while assigned to the Manhattan Project in 1945, then worked for many years at Bell Labs, and later taught at the Naval Postgraduate School in Monterey, California. He was a witty and iconoclastic mathematician and computer scientist whose work and influence still reverberate through the areas he was interested in and passionate about. Three of his long-lived books have been reprinted by Dover: Numerical Methods for Scientists and Engineers, 1987; Digital Filters, 1997; and Methods of Mathematics Applied to Calculus, Probability and Statistics, 2004.

In the Author's Own Words:
"The purpose of computing is insight, not numbers."

"There are wavelengths that people cannot see, there are sounds that people cannot hear, and maybe computers have thoughts that people cannot think."

"Whereas Newton could say, 'If I have seen a little farther than others, it is because I have stood on the shoulders of giants,' I am forced to say, 'Today we stand on each other's feet.' Perhaps the central problem we face in all of computer science is how we are to get to the situation where we build on top of the work of others rather than redoing so much of it in a trivially different way."

"If you don't work on important problems, it's not likely that you'll do important work." — Richard W. Hamming

Read an Excerpt

DIGITAL FILTERS


By R. W. HAMMING

Dover Publications, Inc.

Copyright © 1989 Lucent Technologies
All rights reserved.
ISBN: 978-0-486-31924-7



CHAPTER 1

Introduction


1.1 WHAT IS A DIGITAL FILTER?

In our current technical society we often measure a continuously varying quantity. Some examples include blood pressure, earthquake displacements, voltage from a voice signal in a telephone conversation, brightness of a star, population of a city, waves falling on a beach, and the probability of death. All these measurements vary with time; we regard them as functions of time: u(t) in mathematical notation. And we may be concerned with blood pressure measurements from moment to moment or from year to year. Furthermore, we may be concerned with functions whose independent variable is not time, for example the number of particles that decay in a physics experiment as a function of the energy of the emitted particle. Usually these variables can be regarded as varying continuously (analog signals) even if, as with the population of a city, a bacterial colony, or the number of particles in the physics experiment, the number being measured must change by unit amounts.

For technical reasons, instead of the signal u(t), we usually record equally spaced samples un of the function u(t). The famous sampling theorem, which will be discussed in Chapter 8, gives the conditions on the signal that justify this sampling process. Moreover, when the samples are taken they are not recorded with infinite precision but are rounded off (sometimes chopped off) to comparatively few digits (see Figure 1.1-1). This procedure is often called quantizing the samples. It is these quantized samples that are available for the processing that we do. We do the processing in order to understand what the function samples un reveal about the underlying phenomena that gave rise to the observations, and digital filters are the main processing tool.
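As a small, hypothetical illustration of this sampling-and-quantizing step (not an example from the book), the equally spaced samples un of a signal u(t) can be rounded to a fixed number of digits, and the quantization error per sample is then at most half a unit in the last retained digit:

```python
import numpy as np

# Sample a continuous signal u(t) at unit spacing, then quantize each
# sample to two fractional digits. Signal and spacing are invented here.
t = np.arange(0, 8)          # equally spaced sample times t = n
u = np.sin(0.5 * t)          # u(t): the underlying analog signal
u_q = np.round(u, 2)         # quantized samples u_n, kept to 2 digits

# Quantization error is bounded by half a unit in the last retained digit.
err = np.abs(u - u_q)
assert err.max() <= 0.005 + 1e-12
```

It is these rounded values u_q, not the exact samples, that a digital filter actually processes.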

It is necessary to emphasize that the samples are assumed to be equally spaced; any error or noise is in the measurements un. Fortunately, this assumption is approximately true in most applications.

Suppose that the sequence of numbers {un} is such a set of equally spaced measurements of some quantity u(t), where n is an integer and t is a continuous variable. Typically, t represents time, but not necessarily so. We are using the notation un = u(n). The simplest kinds of filters are the nonrecursive filters; they are defined by the linear formula

yn = Σ{all k} ck un-k   (1.1-1)

The coefficients ck are the constants of the filter, the un-k are the input data, and the yn are the outputs. Figure 1.1-2 shows how this formula is computed. Imagine two strips of paper. On the first strip, written one below the other, are the data values un-k. On the second strip, with the values written in the reverse direction (from bottom to top), are the filter coefficients ck. The zero subscript of one is opposite the n subscript value of the other (either way). The output yn is the sum of all the products ck un-k. Having computed one value, one strip, say the coefficient strip, is moved one space down, and the new set of products is computed to give the new output yn+1. Each output is the result of adding all the products formed from the proper displacement between the two zero-subscripted terms. In the computer, of course, it is the data that is "run past" the coefficient array {ck}.

This process is basic and is called a convolution of the data with the coefficients. It does not matter which strip is written in the reverse order; the result is the same. So the convolution of un with the coefficients ck is the same as the convolution of the coefficients ck with the data un.
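The two-strip convolution can be sketched in a few lines of Python; the coefficients and data below are invented for the illustration, and the sketch confirms that swapping the two strips leaves the output unchanged:

```python
c = [0.25, 0.5, 0.25]            # filter coefficients c_0, c_1, c_2
u = [1.0, 2.0, 3.0, 4.0, 5.0]    # input data u_0 ... u_4

def nonrecursive(c, u):
    # y_n = sum over k of c_k * u_{n-k}; indices that fall outside
    # the data array are treated as zero (the strip has slid past the end)
    return [sum(ck * u[n - k] for k, ck in enumerate(c) if 0 <= n - k < len(u))
            for n in range(len(u) + len(c) - 1)]

# Convolution is symmetric: it does not matter which strip is reversed.
assert nonrecursive(c, u) == nonrecursive(u, c)
```

With the single coefficient c = [1.0] the filter simply passes the data through unchanged, which is a convenient sanity check.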

In practice, the number of products we can handle must be finite. It is usual to assume that the run of nonzero coefficients ck is much shorter than the run of data un. Once in a while it is useful to regard the ck coefficients as part of an infinite array with many zero coefficients, but it is usually preferable to think of the array {ck} as being finite and to ignore the zero terms beyond the end of the array. Equation (1.1-1) becomes, therefore,

yn = Σ{k = −N to N} ck un-k   (1.1-2)

Thus the second strip (of coefficients ck) in Figure 1.1-2 is comparatively shorter than is the first strip (of data un).

Various special cases of this formula occur frequently and should be familiar to most readers. Indeed, such formulas are so commonplace that a book could be devoted to their listing. In the case of five nonzero coefficients ck, where all the coefficients that are not zero have the same value, we have the familiar smoothing by 5s formula (derived in Section 3.2)

yn = (1/5)(un-2 + un-1 + un + un+1 + un+2)   (1.1-3)

Another example is the least-squares smoothing formula derived by passing a least-squares cubic through five equally spaced values un and using the value of the cubic at the midpoint as the smoothed value. The formula for this smoothed value (which will be derived in Section 3.3) is

yn = (1/35)(−3un-2 + 12un-1 + 17un + 12un+1 − 3un+2)   (1.1-4)
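Both five-point formulas are nonrecursive filters that differ only in their coefficients ck; a brief sketch using equal weights 1/5 for smoothing by 5s and the standard least-squares cubic midpoint weights (-3, 12, 17, 12, -3)/35 (the data values are invented):

```python
# Noisy samples, made up for the illustration.
u = [0.0, 1.1, 1.9, 3.2, 3.9, 5.1, 6.0, 6.8, 8.1]

def smooth_by_5s(u, n):
    # Equal weights 1/5 on u_{n-2} ... u_{n+2}
    return sum(u[n - 2:n + 3]) / 5.0

def smooth_ls_cubic(u, n):
    # Least-squares cubic midpoint weights (-3, 12, 17, 12, -3)/35
    w = [-3, 12, 17, 12, -3]
    return sum(wk * uk for wk, uk in zip(w, u[n - 2:n + 3])) / 35.0

# Apply each window wherever all five data values are available.
y1 = [smooth_by_5s(u, n) for n in range(2, len(u) - 2)]
y2 = [smooth_ls_cubic(u, n) for n in range(2, len(u) - 2)]
```

Since the least-squares weights sum to 35/35 = 1, constant data passes through this window unchanged, just as it does through the equal-weight window.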

Many other formulas, such as those for predicting stock market prices and other time series, are also nonrecursive filters.

Nonrecursive filters occur in many different fields and, as a result, have acquired many different names. Among the disguises are the following:

Finite impulse response filter

FIR filter

Transversal filter

Tapped delay line filter

Moving average filter


We shall use the name nonrecursive, since it is the easiest to understand from its name and it contrasts with the name recursive filter, which we will soon introduce.

The concept of a window is perhaps the most confusing in the whole subject, so we now introduce it in these simple cases. We can think of the preceding formulas as if we were looking at the data un-k through a window of coefficients ck (see Figure 1.1-3). As we slide the strip of coefficients along the data, we see the data in the form of the output yn, which is a running weighted average of the original data un. It is as if we saw the data through a translucent (not transparent) window that was tinted according to the coefficients ck. In the smoothing by 5s, all data values come through the translucent window with the same weight, 1/5; in the second example they come through with varying weights. (Don't let any negative weights bother you; we are merely using a manner of speaking when we say "translucent window.")

When we use not only data values to compute the output values yn but also use other values of the output, we have a formula of the form

yn = Σk ck un-k + Σk dk yn-k

where both the ck and the dk are constants. In this case it is usual to limit the range of nonzero coefficients to current and past values of the data un and to only past values of the output yn. Furthermore, again the number of products that can be computed in practice must be finite. Thus the formula is usually written in the form

yn = Σ{k = 0 to M} ck un-k + Σ{k = 1 to N} dk yn-k   (1.1-5)

where there may be some zero coefficients. These are called recursive filters (see Figure 1.1-4). Some equivalent names follow:

Infinite impulse response filter

IIR filter

Ladder filter

Lattice filter

Wave digital filter

Autoregressive moving average filter

ARMA filter

Autoregressive integrated moving average filter

ARIMA filter


We shall use the name recursive filter. A recursive digital filter is simply a linear difference equation with constant coefficients and nothing more; in practice it may be realized by a short program on a general purpose digital computer or by a special purpose integrated circuit chip.
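As a generic sketch (not code from the book), such a constant-coefficient difference equation can be realized directly; here unavailable past values are simply taken to be zero, a common convention whose drawbacks are discussed later in this section:

```python
def recursive_filter(c, d, u):
    # y_n = sum_{k=0..M} c_k * u_{n-k} + sum_{k=1..N} d_k * y_{n-k}
    # Values with indices before the start of the data are taken as zero.
    y = []
    for n in range(len(u)):
        acc = sum(ck * u[n - k] for k, ck in enumerate(c) if n - k >= 0)
        acc += sum(dk * y[n - k] for k, dk in enumerate(d, start=1) if n - k >= 0)
        y.append(acc)
    return y

# With d empty the filter degenerates to a nonrecursive (FIR) filter.
assert recursive_filter([1.0], [], [1.0, 2.0, 3.0]) == [1.0, 2.0, 3.0]
# With c = [1] and d = [1] it is a running sum: the output remembers all past data.
assert recursive_filter([1.0], [1.0], [1.0, 0.0, 0.0]) == [1.0, 1.0, 1.0]
```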

A familiar example (from the calculus) of a recursive filter is the trapezoid rule for integration

yn = yn-1 + (un-1 + un)/2   (1.1-6)

It is immediately obvious that a recursive filter can, as it were, remember all the past data, since the yn-1 value on the right side of the equation enters into the computation of the new value yn, and hence into the computation of yn+1, yn+2, and so on. In this way the initial condition for the integration is "remembered" throughout the entire estimation of the integral.
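A sketch of the trapezoid rule as a recursive filter (unit sample spacing assumed, data invented) makes this memory visible: the starting value y0 propagates into every later output.

```python
def trapezoid(u, y0=0.0):
    # y_n = y_{n-1} + (u_{n-1} + u_n)/2, with starting value y_0 (unit spacing)
    y = [y0]
    for n in range(1, len(u)):
        y.append(y[-1] + 0.5 * (u[n - 1] + u[n]))
    return y

# Integrating u(t) = t gives t^2/2 exactly, since the trapezoid rule is
# exact for straight lines; shifting y0 shifts every subsequent output.
u = [0.0, 1.0, 2.0, 3.0, 4.0]
assert trapezoid(u) == [0.0, 0.5, 2.0, 4.5, 8.0]
assert trapezoid(u, y0=10.0) == [10.0, 10.5, 12.0, 14.5, 18.0]
```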

Other examples of a recursive digital filter are the exponential smoothing forecast

yn = α un + (1 − α) yn-1,   0 < α < 1

and the trend indicator

yn = α (un − un-1) + (1 − α) yn-1
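Both are one-line recursive filters. A sketch of exponential smoothing in its standard textbook form yn = α un + (1 − α) yn-1 (the smoothing constant α, the data, and the choice of starting value are invented for this illustration):

```python
def exp_smooth(u, alpha=0.3):
    # y_n = alpha * u_n + (1 - alpha) * y_{n-1}; the first data value is
    # used as the starting value y_0 (one of several possible conventions).
    y = [u[0]]
    for x in u[1:]:
        y.append(alpha * x + (1.0 - alpha) * y[-1])
    return y

# Each output remembers all past data with geometrically decaying weight.
u = [1.0, 2.0, 2.0, 2.0]
y = exp_smooth(u, alpha=0.5)
assert y == [1.0, 1.5, 1.75, 1.875]
```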

As is customary, we have set aside recursive filters that use future values, values beyond the currently computed value. If we used future values beyond the current yn, we would have to solve a system of linear algebraic equations, and this is a lot of computing. At times it is worth it, but often we have only past computed values and the current value of the data. Filters that use only past and current values of the data are called causal, for if time is the independent variable, they do not react to future events but only past ones (causes).

It is worth a careful note, however, that more and more often all the data of an experiment is recorded on a magnetic tape or other storage medium before any data processing is done. In such cases the restriction to causal filters is plainly foolish. Future values are available! There are, of course, many situations in which the data must be reduced and used as they come in, and in such cases the restriction to causal filters is natural.

The student may wonder where we get the starting values of the yn. Once the computation is well under way they come, of course, from previous computations; but how do we start? The custom of taking the missing values of y to be zero is very dubious. This assumption usually amounts to putting a sharp discontinuity into the function yn, and since, as noted previously, the recursive filter remembers the past, these zero values continue to affect the computation for some time, if not indefinitely. In the simple example of trapezoid integration it is evident that the needed starting value of y is the starting area, usually taken to be zero, but not necessarily so.

We have said it before, but it is necessary to say again that the coefficients ck and dk of the filter are assumed to be constants. Such filters are called time-invariant filters and are the filters most used in practice. Time-varying filters are occasionally useful and will be briefly touched upon in this text.

Finally, it should be realized that in practice all computing must be done with finite-length numbers. The process of quantization affects not only the input numbers, but it may affect all the internal (to the filter) arithmetic that is done. Consequently, there are roundoff errors in the final output numbers yn. It is often convenient to think in terms of infinite precision arithmetic and perfect input data un; but in the end we must deal with reality. Furthermore, the details of the way we arrange to do the arithmetic can affect the accuracy of the output numbers. We will look at this topic more closely in the closing chapters.
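A small illustration of finite-precision effects (the filter and data are invented; numpy is used only to get single- and double-precision arithmetic): running the same trivial recursive filter, a running sum, in two precisions shows how roundoff accumulates in the outputs yn.

```python
import numpy as np

def running_sum(u, dtype):
    # y_n = y_{n-1} + u_n, with every operation carried out in `dtype`
    y = dtype(0.0)
    for x in u:
        y = y + dtype(x)
    return float(y)

u = [0.1] * 10000
lo = running_sum(u, np.float32)   # single-precision accumulation
hi = running_sum(u, np.float64)   # double-precision accumulation

# The true value is 1000.0; the single-precision result drifts much
# farther from it than the double-precision result does.
assert abs(hi - 1000.0) < 1e-8
assert abs(lo - 1000.0) > abs(hi - 1000.0)
```

How the arithmetic is arranged matters too: summing in a different order, or in pairs, would give a different (often smaller) accumulated error.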


1.2 WHY SHOULD WE CARE ABOUT DIGITAL FILTERS?

The word filter is derived from electrical engineering, where filters are used to transform electrical signals from one form to another, especially to eliminate (filter out) various frequencies in a signal. As we have already seen, a digital filter is a linear combination of the input data un and possibly the output data yn and includes many of the operations that we do when processing a signal.


(Continues...)

Excerpted from DIGITAL FILTERS by R. W. HAMMING. Copyright © 1989 Lucent Technologies. Excerpted by permission of Dover Publications, Inc..
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

Preface to the third edition
1. Introduction
1.1 What is a digital filter?
1.2 Why should we care about digital filters?
1.3 How shall we treat the subject?
1.4 General-purpose versus special-purpose computers
1.5 Assumed statistical background
1.6 The distribution of a statistic
1.7 Noise amplification in a filter
1.8 Geometric progressions
2. The frequency approach
2.1 Introduction
2.2 Aliasing
2.3 The idea of an eigenfunction
2.4 Invariance under translation
2.5 Linear systems
2.6 The eigenfunctions of equally spaced sampling
2.7 Summary
3. Some classical applications
3.1 Introduction
3.2 Least-squares fitting of polynomials
3.3 Least-squares quadratics and quartics
3.4 Modified least squares
3.5 Differences and derivatives
3.6 More on smoothing: decibels
3.7 Missing data and interpolation
3.8 A class of nonrecursive smoothing filters
3.9 An example of how a filter works
3.10 Integration: recursive filters
3.11 Summary
4. Fourier series: continuous case
4.1 Need for the theory
4.2 Orthogonality
4.3 Formal expansions
4.4 Odd and even functions
4.5 Fourier series and least squares
4.6 Class of functions and rate of convergence
4.7 Convergence at a point of continuity
4.8 Convergence at a point of discontinuity
4.9 The complex Fourier series
4.10 The phase form of a Fourier series
5. Windows
5.1 Introduction
5.2 Generating new Fourier series: the convolution theorems
5.3 The Gibbs phenomenon
5.4 Lanczos smoothing: The sigma factors
5.5 The Gibbs phenomenon again
5.6 Modified Fourier series
5.7 The von Hann window: the raised cosine window
5.8 Hamming window: raised cosine with a platform
5.9 Review of windows
6. Design of nonrecursive filters
6.1 Introduction
6.2 A low-pass filter design
6.3 Continuous design methods: a review
6.4 A differentiation filter
6.5 Testing the differentiating filter on data
6.6 New filters from old ones: sharpening a filter
6.7 Bandpass differentiators
6.8 Midpoint formulas
7. Smooth nonrecursive filters
7.1 Objections to ripples in a transfer function
7.2 Smooth filters
7.3 Transforming to the Fourier series
7.4 Polynomial processing in general
7.5 The design of a smooth filter
7.6 Smooth bandpass filters
8. The Fourier integral and the sampling theorem
8.1 Introduction
8.2 Summary of results
8.3 The Sampling theorem
8.4 The Fourier integral
8.5 Some transform pairs
8.6 Band-limited functions and the Sampling theorem
8.7 The convolution theorem
8.8 The effect of a finite sample size
8.9 Windows
8.10 The uncertainty principle
9. Kaiser windows and optimization
9.1 Windows
9.2 Review of Gibbs Phenomenon and the Rectangular window
9.3 The Kaiser window: the I0-sinh window
9.4 Derivation of the Kaiser formulas
9.5 Design of a bandpass filter
9.6 Review of Kaiser window filter design
9.7 The same differentiator again
9.8 A particular case of differentiation
9.9 Optimizing a design
9.10 A crude method of optimizing
10. The finite Fourier series
10.1 Introduction
10.2 Orthogonality
10.3 Relationship between the discrete and continuous expansions
10.4 The fast Fourier transform
10.5 Cosine expansions
10.6 Another method of design
10.7 Padding out zeros
11. The spectrum
11.1 Review
11.2 Finite sample effects
11.3 Aliasing
11.4 Computing the spectrum
11.5 Nonharmonic frequencies
11.6 Removal of the mean
11.7 The phase spectrum
11.8 Summary
12. Recursive filters
12.1 Why recursive filters?
12.2 Linear differential equation theory
12.3 Linear difference equations
12.4 Reduction to simpler form
12.5 Stability and the Z transformation
12.6 Butterworth filters
12.7 A simple case of Butterworth filter design
12.8 Removing the phase: two-way filters
13. Chebyshev approximation and Chebyshev filters
13.1 Introduction
13.2 Chebyshev polynomials
13.3 The Chebyshev Criterion
13.4 Chebyshev filters
13.5 Chebyshev filters, type 1
13.6 Chebyshev filters, type 2
13.7 Elliptic filters
13.8 Leveling an error curve
13.9 A Chebyshev identity
13.10 An example of the design of an integrator
13.11 Phase-free recursive filters
13.12 The transient
14. Miscellaneous
14.1 Types of Filter Design
14.2 Finite arithmetic effects
14.3 Recursive versus nonrecursive filters
14.4 Direct modeling
14.5 Decimation
14.6 Time-varying filters
14.7 References
Index