
# Digital Filters / Edition 3

Digital Filters / Edition 3 available in Hardcover

- ISBN-10: 0132128128
- ISBN-13: 9780132128124
- Pub. Date: 01/28/1989
- Publisher: Prentice Hall Professional Technical Reference


## Product Details

| ISBN-13: | 9780132128124 |
|---|---|
| Publisher: | Prentice Hall Professional Technical Reference |
| Publication date: | 01/28/1989 |
| Series: | Prentice Hall Signal Processing Series |
| Edition description: | 3rd ed. |
| Pages: | 304 |

## Read an Excerpt

#### DIGITAL FILTERS

**By R. W. HAMMING**

**Dover Publications, Inc.**

**Copyright © 1989 Lucent Technologies**

All rights reserved.

ISBN: 978-0-486-31924-7


CHAPTER 1

**Introduction**

**1.1 WHAT IS A DIGITAL FILTER?**

In our current technical society we often measure a continuously varying quantity. Some examples include blood pressure, earthquake displacements, voltage from a voice signal in a telephone conversation, brightness of a star, population of a city, waves falling on a beach, and the probability of death. All these measurements vary with time; we regard them as functions of time: *u(t)* in mathematical notation. And we may be concerned with blood pressure measurements from moment to moment or from year to year. Furthermore, we may be concerned with functions whose independent variable is not time, for example the number of particles that decay in a physics experiment as a function of the energy of the emitted particle. Usually these variables can be regarded as varying continuously (analog signals) even if, as with the population of a city, a bacterial colony, or the number of particles in the physics experiment, the number being measured must change by unit amounts.

For technical reasons, instead of the signal *u(t)*, we usually record *equally spaced samples un* of the function *u(t)*. The famous *sampling theorem*, which will be discussed in **Chapter 8**, gives the conditions on the signal that justify this sampling process. Moreover, when the samples are taken they are not recorded with infinite precision but are rounded off (sometimes chopped off) to comparatively few digits (see **Figure 1.1-1**). This procedure is often called *quantizing* the samples. It is these quantized samples that are available for the processing that we do. We do the processing in order to understand what the function samples *un* reveal about the underlying phenomena that gave rise to the observations, and digital filters are the main processing tool.

It is necessary to emphasize that the samples are *assumed* to be equally spaced; any error or *noise* is in the measurements *un*. Fortunately, this assumption is approximately true in most applications.

Suppose that the sequence of numbers {*un*} is such a set of equally spaced measurements of some quantity *u(t)*, where *n* is an integer and *t* is a continuous variable. Typically, *t* represents time, but not necessarily so. We are using the notation *un = u(n)*. The simplest kinds of filters are the *nonrecursive filters*; they are defined by the linear formula

$$y_n = \sum_{k=-\infty}^{\infty} c_k u_{n-k} \tag{1.1-1}$$

The coefficients *ck* are the constants of the filter, the *un-k* are the input data, and the *yn* are the outputs. **Figure 1.1-2** shows how this formula is computed. Imagine two strips of paper. On the first strip, written one below the other, are the data values *un-k*. On the second strip, with the values written in the *reverse direction* (from bottom to top), are the filter coefficients *ck*. The zero subscript of one is opposite the *n* subscript value of the other (either way). The output *yn* is the sum of all the products *ck* *un-k*. Having computed one value, one strip, say the coefficient strip, is moved one space down, and the new set of products is computed to give the new output *yn+1*. Each output is the result of adding all the products formed from the proper displacement between the two zero-subscripted terms. In the computer, of course, it is the data that is "run past" the coefficient array {*ck*}.

This process is basic and is called a *convolution* of the data with the coefficients. It does not matter which strip is written in the reverse order; the result is the same. So the convolution of *un* with the coefficients *ck* is the same as the convolution of the coefficients *ck* with the data *un*.
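The strip-of-paper picture can be sketched directly in code. A minimal Python illustration (the function name `fir_filter` and its windowing convention are ours, not the book's):

```python
# Nonrecursive (FIR) filtering as convolution: y_n is the sum of the
# products c_k * u_{n-k}, formed by sliding the coefficient strip along
# the data strip.

def fir_filter(u, c):
    """Convolve data u with coefficients c, returning an output y_n for
    every n at which the full coefficient window overlaps the data."""
    N = len(c)
    return [sum(c[k] * u[n - k] for k in range(N)) for n in range(N - 1, len(u))]

# A 3-term running sum: each output is u_n + u_{n-1} + u_{n-2}.
print(fir_filter([1, 2, 3, 4, 5], [1, 1, 1]))  # [6, 9, 12]
```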

In practice, the number of products we can handle must be finite. It is usual to assume that the run of nonzero coefficients *ck* is much shorter than the run of data *un*. Once in a while it is useful to regard the *ck* coefficients as part of an infinite array with many zero coefficients, but it is usually preferable to think of the array {*ck*} as being finite and to ignore the zero terms beyond the end of the array. Equation (**1.1-1**) becomes, therefore,

$$y_n = \sum_{k=-N}^{N} c_k u_{n-k} \tag{1.1-2}$$

Thus the second strip (of coefficients *ck*) in **Figure 1.1-2** is much shorter than the first strip (of data *un*).

Various special cases of this formula occur frequently and should be familiar to most readers. Indeed, such formulas are so commonplace that a book could be devoted to their listing. In the case of five nonzero coefficients *ck*, where all the coefficients that are not zero have the same value, we have the familiar smoothing by *5s* formula (derived in **Section 3.2**)

$$y_n = \frac{1}{5}\left(u_{n-2} + u_{n-1} + u_n + u_{n+1} + u_{n+2}\right) \tag{1.1-3}$$

Another example is the least-squares smoothing formula derived by passing a least-squares cubic through five equally spaced values *un* and using the value of the cubic at the midpoint as the smoothed value. The formula for this smoothed value (which will be derived in **Section 3.3**) is

$$y_n = \frac{1}{35}\left(-3u_{n-2} + 12u_{n-1} + 17u_n + 12u_{n+1} - 3u_{n+2}\right) \tag{1.1-4}$$

Many other formulas, such as those for predicting stock market prices, as well as other time series, also are nonrecursive filters.
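The two smoothing formulas above amount to two particular coefficient windows. A small Python check, assuming the standard five-point least-squares cubic weights (-3, 12, 17, 12, -3)/35 (the function name `smooth` is illustrative):

```python
# Apply a symmetric 5-term coefficient window centered at index n.
# c lists the coefficients in order from k = -2 (most past) to k = +2.

def smooth(u, c, n):
    return sum(ck * u[n + k] for ck, k in zip(c, range(-2, 3)))

by_5s    = [1/5] * 5                              # smoothing by 5s
ls_cubic = [-3/35, 12/35, 17/35, 12/35, -3/35]    # least-squares cubic

u = [0.0, 1.0, 2.0, 3.0, 4.0]          # exactly linear data
print(smooth(u, by_5s, 2))             # 2.0: both windows reproduce a line
print(smooth(u, ls_cubic, 2))          # 2.0
```

Both coefficient sets sum to 1, so either window passes a constant (and, by symmetry, a straight line) through unchanged.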

Nonrecursive filters occur in many different fields and, as a result, have acquired many different names. Among the disguises are the following:

Finite impulse response filter

FIR filter

Transversal filter

Tapped delay line filter

Moving average filter

We shall use the name *nonrecursive* as it is the simplest to understand from its name, and it contrasts with the name *recursive filter*, which we will soon introduce.

The concept of a *window* is perhaps the most confusing concept in the whole subject, so we now introduce it in these simple cases. We can think of the preceding formulas as if we were looking at the data *un-k* through a *window of coefficients ck* (see **Figure 1.1-3**). As we slide the strip of coefficients along the data, we see the data in the form of the output *yn*, which is the running weighted average of the original data *un*. It is as if we saw the data through a translucent (not transparent) window where the window was tinted according to the *coefficients* *ck*. In the smoothing by *5s*, all data values get through the translucent window with the same weight, 1/5; in the second example they come through the window with varying weights. (Don't let any negative weights bother you, since we are merely using a manner of speaking when we use the words "translucent window.")

When we use not only data values to compute the output values *yn* but also use other values of the output, we have a formula of the form

$$y_n = \sum_{k=-\infty}^{\infty} c_k u_{n-k} + \sum_{k=-\infty}^{\infty} d_k y_{n-k}$$

where both the *ck* and the *dk* are constants. In this case it is usual to limit the range of nonzero coefficients to current and past values of the data *un* and to only past values of the output *yn*. Furthermore, again the number of products that can be computed in practice must be finite. Thus the formula is usually written in the form

$$y_n = \sum_{k=0}^{M} c_k u_{n-k} + \sum_{k=1}^{N} d_k y_{n-k} \tag{1.1-5}$$

where there may be some zero coefficients. These are called *recursive* filters (see **Figure 1.1-4**). Some equivalent names follow:

Infinite impulse response filter

IIR filter

Ladder filter

Lattice filter

Wave digital filter

Autoregressive moving average filter

ARMA filter

Autoregressive integrated moving average filter

ARIMA filter

We shall use the name *recursive* filter. A recursive digital filter is simply a linear difference equation with constant coefficients and nothing more; in practice it may be realized by a short program on a general purpose digital computer or by a special purpose integrated circuit chip.
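Since a recursive filter is just a constant-coefficient difference equation, it fits in a few lines of Python. A sketch (names are ours; the starting values of *yn* are taken as zero here purely for illustration, a convention the text itself cautions about):

```python
# Recursive filter: y_n = sum_k c_k u_{n-k} + sum_k d_k y_{n-k},
# with c indexed from k = 0 and d indexed from k = 1 (past outputs only).

def recursive_filter(u, c, d):
    y = []
    for n in range(len(u)):
        acc = sum(ck * u[n - k] for k, ck in enumerate(c) if n - k >= 0)
        acc += sum(dk * y[n - k] for k, dk in enumerate(d, start=1) if n - k >= 0)
        y.append(acc)
    return y

# y_n = u_n + 0.5 y_{n-1}: a one-pole smoother. Its response to a single
# unit impulse never quite dies out, hence "infinite impulse response".
print(recursive_filter([1, 0, 0, 0], c=[1.0], d=[0.5]))  # [1.0, 0.5, 0.25, 0.125]
```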

A familiar example (from the calculus) of a recursive filter is the trapezoid rule for integration

$$y_n = y_{n-1} + \tfrac{1}{2}\left(u_{n-1} + u_n\right) \tag{1.1-6}$$

It is immediately obvious that a recursive filter can, as it were, remember all the past data, since the *yn-1* value on the right side of the equation enters into the computation of the new value *yn*, and hence into the computation of *yn+1*, *yn+2*, and so on. In this way the initial condition for the integration is "remembered" throughout the entire estimation of the integral.
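The trapezoid rule makes this memory concrete: whatever starting value we choose propagates into every later output. A sketch in Python (unit sample spacing assumed; the name `trapezoid` is ours):

```python
# Trapezoid rule as a recursive filter: y_n = y_{n-1} + (u_{n-1} + u_n)/2.
# The starting value y0 is "remembered" by every subsequent output.

def trapezoid(u, y0=0.0):
    y = [y0]
    for n in range(1, len(u)):
        y.append(y[-1] + (u[n - 1] + u[n]) / 2)
    return y

u = [0, 1, 2, 3, 4]           # samples of u(t) = t at t = 0, 1, ..., 4
print(trapezoid(u))           # [0.0, 0.5, 2.0, 4.5, 8.0]
print(trapezoid(u, y0=10.0))  # the starting area 10 persists in every output
```

The final value 8.0 is exact here, since the trapezoid rule integrates a straight line without error.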

Other examples of a recursive digital filter are the exponential smoothing forecast

$$y_n = \alpha u_n + (1 - \alpha) y_{n-1}$$

and the trend indicator

*(equation not reproduced in this excerpt)*

As is customary, we have set aside recursive filters that use *future* values, values beyond the currently computed value. If we used future values beyond the current *yn*, we would have to solve a system of linear algebraic equations, and this is a lot of computing. At times it is worth it, but often we have only past computed values and the current value of the data. Filters that use only past and current values of the data are called *causal*, for if time is the independent variable, they do not react to future events but only past ones (causes).

It is worth a careful note, however, that more and more often all the data of an experiment is recorded on a magnetic tape or other storage medium *before* any data processing is done. In such cases the restriction to causal filters is plainly foolish. Future values are available! There are, of course, many situations in which the data must be reduced and used as they come in, and in such cases the restriction to causal filters is natural.
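The causal/noncausal distinction is easy to see with a three-point moving average. A hypothetical sketch: the causal version uses only current and past samples and so lags the data, while the centered version, possible only when future samples are already stored, does not:

```python
# Two 3-point moving averages at index n of a stored data array u.

def causal_avg3(u, n):       # uses u_n, u_{n-1}, u_{n-2} only
    return (u[n] + u[n - 1] + u[n - 2]) / 3

def centered_avg3(u, n):     # also uses the "future" value u_{n+1}
    return (u[n - 1] + u[n] + u[n + 1]) / 3

u = [0, 1, 2, 3, 4]          # a steadily rising signal
print(causal_avg3(u, 2))     # 1.0: the causal estimate lags behind u_2 = 2
print(centered_avg3(u, 2))   # 2.0: no lag, but needs a future sample
```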

The student may wonder where we get the starting values of the *yn*. Once well going they, of course, come from previous computations, but how to start? The custom of assuming that the missing values y are to be taken as zeros is very dubious. This assumption usually amounts to putting a sharp discontinuity into the function *yn*, and since as noted previously the recursive filter remembers the past, it follows that these zero values continue to affect the computation for some time, if not indefinitely. It is evident in the simple example of the trapezoid integration that the needed starting value of y is the starting area, usually taken to be zero, but not necessarily so.

We have said it before, but it is necessary to say again that the coefficients *ck* and *dk* of the filter are assumed to be constants. Such filters are called *time-invariant filters* and are the filters most used in practice. Time-varying filters are occasionally useful and will be briefly touched upon in this text.

Finally, it should be realized that in practice all computing must be done with finite-length numbers. The process of quantization affects not only the input numbers, but it may affect all the internal (to the filter) arithmetic that is done. Consequently, there are roundoff errors in the final output numbers *yn*. It is often convenient to *think* in terms of infinite precision arithmetic and perfect input data *un*; but in the end we must deal with reality. Furthermore, the details of the way we arrange to do the arithmetic can affect the accuracy of the output numbers. We will look at this topic more closely in the closing chapters.

**1.2 WHY SHOULD WE CARE ABOUT DIGITAL FILTERS?**

The word *filter* is derived from electrical engineering, where filters are used to transform electrical signals from one form to another, especially to eliminate (filter out) various frequencies in a signal. As we have already seen, a digital filter is a linear combination of the input data *un* and possibly the output data *yn* and includes many of the operations that we do when processing a signal.

*(Continues...)*

Excerpted from DIGITAL FILTERS by R. W. HAMMING. Copyright © 1989 Lucent Technologies. Excerpted by permission of Dover Publications, Inc.

All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.


## Table of Contents

#### Contents

PREFACE TO THE THIRD EDITION

1 INTRODUCTION

2 THE FREQUENCY APPROACH

3 SOME CLASSICAL APPLICATIONS

4 FOURIER SERIES: CONTINUOUS CASE

5 WINDOWS

6 DESIGN OF NONRECURSIVE FILTERS

7 SMOOTH NONRECURSIVE FILTERS

8 THE FOURIER INTEGRAL AND THE SAMPLING THEOREM

9 KAISER WINDOWS AND OPTIMIZATION

10 THE FINITE FOURIER SERIES

11 THE SPECTRUM

12 RECURSIVE FILTERS

13 CHEBYSHEV APPROXIMATION AND CHEBYSHEV FILTERS

14 MISCELLANEOUS

INDEX
