This introductory text examines digital filtering — the processes of refining signals
— and its relevance to many applications, particularly computer-related functions. Assuming only a knowledge of calculus and some statistics, it concentrates on linear signal processing, with some consideration of roundoff effects and Kalman filters. Numerous examples and exercises.
Digital Filters




Editorial Reviews

Presents fundamental ideas and basic methods of designing simple nonrecursive and recursive filters. Uses simple, direct mathematical tools and includes an accurate (but not excessively rigorous) introduction to the necessary mathematics in each case. Annotation c. Book News, Inc., Portland, OR (booknews.com)

Product Details

  • ISBN-13: 9780486319247
  • Publisher: Dover Publications
  • Publication date: 3/12/2013
  • Series: Dover Civil and Mechanical Engineering
  • Sold by: Barnes & Noble
  • Format: eBook
  • Pages: 296
  • File size: 14 MB
  • Note: This product may take a few minutes to download.

Read an Excerpt



Dover Publications, Inc.

Copyright © 1989 Lucent Technologies
All rights reserved.
ISBN: 978-0-486-31924-7




In our current technical society we often measure a continuously varying quantity. Some examples include blood pressure, earthquake displacements, voltage from a voice signal in a telephone conversation, brightness of a star, population of a city, waves falling on a beach, and the probability of death. All these measurements vary with time; we regard them as functions of time: u(t) in mathematical notation. And we may be concerned with blood pressure measurements from moment to moment or from year to year. Furthermore, we may be concerned with functions whose independent variable is not time, for example the number of particles that decay in a physics experiment as a function of the energy of the emitted particle. Usually these variables can be regarded as varying continuously (analog signals) even if, as with the population of a city, a bacterial colony, or the number of particles in the physics experiment, the number being measured must change by unit amounts.

For technical reasons, instead of the signal u(t), we usually record equally spaced samples u_n of the function u(t). The famous sampling theorem, which will be discussed in Chapter 8, gives the conditions on the signal that justify this sampling process. Moreover, when the samples are taken they are not recorded with infinite precision but are rounded off (sometimes chopped off) to comparatively few digits (see Figure 1.1-1). This procedure is often called quantizing the samples. It is these quantized samples that are available for the processing that we do. We do the processing in order to understand what the function samples u_n reveal about the underlying phenomena that gave rise to the observations, and digital filters are the main processing tool.

It is necessary to emphasize that the samples are assumed to be equally spaced; any error or noise is in the measurements u_n. Fortunately, this assumption is approximately true in most applications.

Suppose that the sequence of numbers {u_n} is such a set of equally spaced measurements of some quantity u(t), where n is an integer and t is a continuous variable. Typically, t represents time, but not necessarily so. We are using the notation u_n = u(n). The simplest kinds of filters are the nonrecursive filters; they are defined by the linear formula

    y_n = Σ_k c_k u_{n-k}    (sum over all integers k)    (1.1-1)

The coefficients c_k are the constants of the filter, the u_{n-k} are the input data, and the y_n are the outputs. Figure 1.1-2 shows how this formula is computed. Imagine two strips of paper. On the first strip, written one below the other, are the data values u_{n-k}. On the second strip, with the values written in the reverse direction (from bottom to top), are the filter coefficients c_k. The zero subscript of one is opposite the n subscript value of the other (either way). The output y_n is the sum of all the products c_k u_{n-k}. Having computed one value, one strip, say the coefficient strip, is moved one space down, and the new set of products is computed to give the new output y_{n+1}. Each output is the result of adding all the products formed from the proper displacement between the two zero-subscripted terms. In the computer, of course, it is the data that is "run past" the coefficient array {c_k}.

This process is basic and is called a convolution of the data with the coefficients. It does not matter which strip is written in the reverse order; the result is the same. So the convolution of the data u_n with the coefficients c_k is the same as the convolution of the coefficients c_k with the data u_n.
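The two-strip computation is easy to express in code. Here is a minimal sketch in Python, assuming a symmetric run of 2m+1 coefficients and plain lists; the function name and data are illustrative, not from the text:

```python
def nonrecursive_filter(c, u):
    """Compute y_n = sum_k c_k * u_(n-k), sliding the reversed
    coefficient strip along the data exactly as with the two
    paper strips.  c holds the 2m+1 coefficients c_(-m) .. c_(m)."""
    m = len(c) // 2
    y = []
    # keep only the outputs where the whole coefficient strip overlaps the data
    for n in range(m, len(u) - m):
        y.append(sum(c[m + k] * u[n - k] for k in range(-m, m + 1)))
    return y

# smoothing by 3s: three equal coefficients, each 1/3
data = [1.0, 2.0, 4.0, 8.0, 16.0]
print(nonrecursive_filter([1/3, 1/3, 1/3], data))
```

Because the coefficient strip here is symmetric, reversing it changes nothing, which is the commutativity of convolution noted above.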

In practice, the number of products we can handle must be finite. It is usual to assume that the length of the run of nonzero coefficients c_k is much shorter than the run of data u_n. Once in a while it is useful to regard the c_k coefficients as part of an infinite array with many zero coefficients, but it is usually preferable to think of the array {c_k} as being finite and to ignore the zero terms beyond the end of the array. Equation (1.1-1) becomes, therefore,

    y_n = Σ_{k=-N}^{N} c_k u_{n-k}    (1.1-2)

Thus the second strip (of coefficients c_k) in Figure 1.1-2 is much shorter than the first strip (of data u_n).

Various special cases of this formula occur frequently and should be familiar to most readers. Indeed, such formulas are so commonplace that a book could be devoted to their listing. In the case of five nonzero coefficients c_k, all having the same value, we have the familiar smoothing by 5s formula (derived in Section 3.2)

    y_n = (u_{n-2} + u_{n-1} + u_n + u_{n+1} + u_{n+2})/5

Another example is the least-squares smoothing formula derived by passing a least-squares cubic through five equally spaced values u_n and using the value of the cubic at the midpoint as the smoothed value. The formula for this smoothed value (which will be derived in Section 3.3) is

    y_n = (-3u_{n-2} + 12u_{n-1} + 17u_n + 12u_{n+1} - 3u_{n+2})/35

Many other formulas, such as those for predicting stock market prices and other time series, are also nonrecursive filters.
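The two five-point smoothing formulas just described are easy to try numerically. A minimal sketch in Python (the function names and test data are illustrative):

```python
def smooth_by_5s(u, n):
    """Equal weights 1/5: the smoothing-by-5s formula."""
    return sum(u[n + k] for k in range(-2, 3)) / 5.0

def smooth_least_squares(u, n):
    """Midpoint of a least-squares cubic through five points."""
    w = [-3.0, 12.0, 17.0, 12.0, -3.0]
    return sum(w[k + 2] * u[n + k] for k in range(-2, 3)) / 35.0

# On samples of a parabola the least-squares formula returns the
# center value exactly (the fitted cubic reproduces the parabola),
# while equal weights bias the value upward.
parabola = [float(t * t) for t in range(-2, 3)]   # [4, 1, 0, 1, 4]
print(smooth_by_5s(parabola, 2), smooth_least_squares(parabola, 2))
```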

Nonrecursive filters occur in many different fields and, as a result, have acquired many different names. Among the disguises are the following:

Finite impulse response filter

FIR filter

Transversal filter

Tapped delay line filter

Moving average filter

We shall use the name nonrecursive as it is the simplest to understand from its name, and it contrasts with the name recursive filter, which we will soon introduce.

The concept of a window is perhaps the most confusing concept in the whole subject, so we now introduce it in these simple cases. We can think of the preceding formulas as if we were looking at the data u_{n-k} through a window of coefficients c_k (see Figure 1.1-3). As we slide the strip of coefficients along the data, we see the data in the form of the output y_n, which is a running weighted average of the original data u_n. It is as if we saw the data through a translucent (not transparent) window, where the window is tinted according to the coefficients c_k. In the smoothing by 5s, all data values come through the translucent window with the same weight, 1/5; in the second example they come through the window with varying weights. (Don't let any negative weights bother you, since we are merely using a manner of speaking when we use the words "translucent window.")

When we use not only data values to compute the output values y_n but also other values of the output, we have a formula of the form

    y_n = Σ_k c_k u_{n-k} + Σ_k d_k y_{n-k}

where both the c_k and the d_k are constants. In this case it is usual to limit the range of nonzero coefficients to current and past values of the data u_n and to only past values of the output y_n. Furthermore, again the number of products that can be computed in practice must be finite. Thus the formula is usually written in the form

    y_n = Σ_{k=0}^{N} c_k u_{n-k} + Σ_{k=1}^{M} d_k y_{n-k}

where there may be some zero coefficients. These are called recursive filters (see Figure 1.1-4). Some equivalent names follow:

Infinite impulse response filter

IIR filter

Ladder filter

Lattice filter

Wave digital filter

Autoregressive moving average filter

ARMA filter

Autoregressive integrated moving average filter

ARIMA filter

We shall use the name recursive filter. A recursive digital filter is simply a linear difference equation with constant coefficients and nothing more; in practice it may be realized by a short program on a general purpose digital computer or by a special purpose integrated circuit chip.
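As a sketch of such a short program, here is one way to realize the general recursive formula in Python; the coefficient arrays and starting values below are placeholders for illustration, not a design from this book:

```python
def recursive_filter(c, d, u, y_start):
    """y_n = sum_{k=0..N} c_k u_(n-k) + sum_{k=1..M} d_k y_(n-k).

    c[k] weights the current and past inputs u_(n-k);
    d[k-1] weights the past outputs y_(n-k);
    y_start supplies the M starting values y_(-M), ..., y_(-1).
    """
    y = list(y_start)                 # past outputs, oldest first
    M = len(d)
    for n in range(len(u)):
        acc = sum(c[k] * u[n - k] for k in range(len(c)) if n - k >= 0)
        acc += sum(d[k - 1] * y[M + n - k] for k in range(1, M + 1))
        y.append(acc)
    return y[M:]

# y_n = u_n + 0.5*y_(n-1) applied to a constant input: each output
# carries a fraction of all past outputs and creeps toward 2.
print(recursive_filter([1.0], [0.5], [1.0] * 5, [0.0]))
```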

A familiar example (from the calculus) of a recursive filter is the trapezoid rule for integration (with unit spacing between samples)

    y_n = y_{n-1} + (u_{n-1} + u_n)/2

It is immediately obvious that a recursive filter can, as it were, remember all the past data, since the y_{n-1} value on the right side of the equation enters into the computation of the new value y_n, and hence into the computation of y_{n+1}, y_{n+2}, and so on. In this way the initial condition for the integration is "remembered" throughout the entire estimation of the integral.
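A minimal sketch of the trapezoid rule as a recursive filter, assuming unit spacing between samples (the sample data are illustrative):

```python
def trapezoid(u, y0=0.0):
    """Recursive filter y_n = y_(n-1) + (u_(n-1) + u_n)/2, unit spacing.
    y0 is the starting area, carried through every later output."""
    y = [y0]
    for n in range(1, len(u)):
        y.append(y[-1] + 0.5 * (u[n - 1] + u[n]))
    return y

samples = [float(t * t) for t in range(5)]   # u(t) = t^2 at t = 0..4
print(trapezoid(samples))                    # running estimate of the integral
print(trapezoid(samples, y0=10.0))           # the starting value shifts every output
```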

Other examples of a recursive digital filter are the exponential smoothing forecast

    y_n = a u_n + (1 - a) y_{n-1}    (0 < a < 1)

and the trend indicator


As is customary, we have set aside recursive filters that use future values, values beyond the currently computed value. If we used future values beyond the current y_n, we would have to solve a system of linear algebraic equations, and this is a lot of computing. At times it is worth it, but often we have only past computed values and the current value of the data. Filters that use only past and current values of the data are called causal, for if time is the independent variable, they do not react to future events but only to past ones (causes).

It is worth a careful note, however, that more and more often all the data of an experiment is recorded on a magnetic tape or other storage medium before any data processing is done. In such cases the restriction to causal filters is plainly foolish. Future values are available! There are, of course, many situations in which the data must be reduced and used as they come in, and in such cases the restriction to causal filters is natural.

The student may wonder where we get the starting values of the y_n. Once the computation is under way they come, of course, from previous computations; but how do we start? The custom of assuming that the missing values of y are to be taken as zeros is very dubious. This assumption usually amounts to putting a sharp discontinuity into the function y_n, and since, as noted previously, the recursive filter remembers the past, these zero values continue to affect the computation for some time, if not indefinitely. It is evident in the simple example of the trapezoid integration that the needed starting value of y is the starting area, usually taken to be zero, but not necessarily so.
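The effect of dubious zero starting values can be seen in a tiny numerical experiment; the first-order smoothing filter below is illustrative, not one from the text:

```python
# A first-order recursive smoother y_n = 0.1*u_n + 0.9*y_(n-1)
# applied to the constant signal u_n = 1.  Starting with y = 0
# plants a discontinuity whose effect decays only as 0.9**n.
def smoother(u, y_prev):
    out = []
    for x in u:
        y_prev = 0.1 * x + 0.9 * y_prev
        out.append(y_prev)
    return out

u = [1.0] * 50
bad = smoother(u, 0.0)    # conventional but dubious zero start
good = smoother(u, 1.0)   # start consistent with the data
print(bad[0], bad[9], bad[49])   # still climbing toward 1 after 50 samples
print(good[0])                   # 1.0 from the very first output
```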

We have said it before, but it is necessary to say again that the coefficients ck and dk of the filter are assumed to be constants. Such filters are called time-invariant filters and are the filters most used in practice. Time-varying filters are occasionally useful and will be briefly touched upon in this text.

Finally, it should be realized that in practice all computing must be done with finite-length numbers. The process of quantization affects not only the input numbers, but it may affect all the internal (to the filter) arithmetic that is done. Consequently, there are roundoff errors in the final output numbers y_n. It is often convenient to think in terms of infinite-precision arithmetic and perfect input data u_n; but in the end we must deal with reality. Furthermore, the details of the way we arrange to do the arithmetic can affect the accuracy of the output numbers. We will look at this topic more closely in the closing chapters.


The word filter is derived from electrical engineering, where filters are used to transform electrical signals from one form to another, especially to eliminate (filter out) various frequencies in a signal. As we have already seen, a digital filter is a linear combination of the input data un and possibly the output data yn and includes many of the operations that we do when processing a signal.


Excerpted from DIGITAL FILTERS by R. W. HAMMING. Copyright © 1989 Lucent Technologies. Excerpted by permission of Dover Publications, Inc.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.


Table of Contents

Preface to the third edition
1. Introduction
1.1 What is a digital filter?
1.2 Why should we care about digital filters?
1.3 How shall we treat the subject?
1.4 General-purpose versus special-purpose computers
1.5 Assumed statistical background
1.6 The distribution of a statistic
1.7 Noise amplification in a filter
1.8 Geometric progressions
2. The frequency approach
2.1 Introduction
2.2 Aliasing
2.3 The idea of an eigenfunction
2.4 Invariance under translation
2.5 Linear systems
2.6 The eigenfunctions of equally spaced sampling
2.7 Summary
3. Some classical applications
3.1 Introduction
3.2 Least-squares fitting of polynomials
3.3 Least-squares quadratics and quartics
3.4 Modified least squares
3.5 Differences and derivatives
3.6 More on smoothing: decibels
3.7 Missing data and interpolation
3.8 A class of nonrecursive smoothing filters
3.9 An example of how a filter works
3.10 Integration: recursive filters
3.11 Summary
4. Fourier series: continuous case
4.1 Need for the theory
4.2 Orthogonality
4.3 Formal expansions
4.4 Odd and even functions
4.5 Fourier series and least squares
4.6 Class of functions and rate of convergence
4.7 Convergence at a point of continuity
4.8 Convergence at a point of discontinuity
4.9 The complex Fourier series
4.10 The phase form of a Fourier series
5. Windows
5.1 Introduction
5.2 Generating new Fourier series: the convolution theorems
5.3 The Gibbs phenomenon
5.4 Lanczos smoothing: The sigma factors
5.5 The Gibbs phenomenon again
5.6 Modified Fourier series
5.7 The von Hann window: the raised cosine window
5.8 Hamming window: raised cosine with a platform
5.9 Review of windows
6. Design of nonrecursive filters
6.1 Introduction
6.2 A low-pass filter design
6.3 Continuous design methods: a review
6.4 A differentiation filter
6.5 Testing the differentiating filter on data
6.6 New filters from old ones: sharpening a filter
6.7 Bandpass differentiators
6.8 Midpoint formulas
7. Smooth nonrecursive filters
7.1 Objections to ripples in a transfer function
7.2 Smooth filters
7.3 Transforming to the Fourier series
7.4 Polynomial processing in general
7.5 The design of a smooth filter
7.6 Smooth bandpass filters
8. The Fourier integral and the sampling theorem
8.1 Introduction
8.2 Summary of results
8.3 The Sampling theorem
8.4 The Fourier integral
8.5 Some transform pairs
8.6 Band-limited functions and the Sampling theorem
8.7 The convolution theorem
8.8 The effect of a finite sample size
8.9 Windows
8.10 The uncertainty principle
9. Kaiser windows and optimization
9.1 Windows
9.2 Review of the Gibbs phenomenon and the rectangular window
9.3 The Kaiser window: the I_0-sinh window
9.4 Derivation of the Kaiser formulas
9.5 Design of a bandpass filter
9.6 Review of Kaiser window filter design
9.7 The same differentiator again
9.8 A particular case of differentiation
9.9 Optimizing a design
9.10 A crude method of optimizing
10. The finite Fourier series
10.1 Introduction
10.2 Orthogonality
10.3 Relationship between the discrete and continuous expansions
10.4 The fast Fourier transform
10.5 Cosine expansions
10.6 Another method of design
10.7 Padding out zeros
11. The spectrum
11.1 Review
11.2 Finite sample effects
11.3 Aliasing
11.4 Computing the spectrum
11.5 Nonharmonic frequencies
11.6 Removal of the mean
11.7 The phase spectrum
11.8 Summary
12. Recursive filters
12.1 Why recursive filters?
12.2 Linear differential equation theory
12.3 Linear difference equations
12.4 Reduction to simpler form
12.5 Stability and the Z transformation
12.6 Butterworth Filters
12.7 A simple case of Butterworth filter design
12.8 Removing the phase: two-way filters
13. Chebyshev approximation and Chebyshev filters
13.1 Introduction
13.2 Chebyshev polynomials
13.3 The Chebyshev Criterion
13.4 Chebyshev filters
13.5 Chebyshev filters, type 1
13.6 Chebyshev filters, type 2
13.7 Elliptic filters
13.8 Leveling an error curve
13.9 A Chebyshev identity
13.10 An example of the design of an integrator
13.11 Phase-free recursive filters
13.12 The transient
14. Miscellaneous
14.1 Types of Filter Design
14.2 Finite arithmetic effects
14.3 Recursive versus nonrecursive filters
14.4 Direct modeling
14.5 Decimation
14.6 Time-varying filters
14.7 References
