Numerical Methods

by Germund Dahlquist, Åke Björck


"Substantial, detailed and rigorous . . . readers for whom the book is intended are admirably served." — MathSciNet (Mathematical Reviews on the Web), American Mathematical Society.
Practical text strikes fine balance between students' requirements for theoretical treatment and needs of practitioners, with best methods for large- and small-scale computing. Prerequisites are minimal (calculus, linear algebra, and preferably some acquaintance with computer programming). Text includes many worked examples, problems, and an extensive bibliography.

Read an Excerpt



Dover Publications, Inc.

Copyright © 1974 Prentice-Hall, Inc.
All rights reserved.
ISBN: 978-0-486-13946-3




Mathematics is used in one form or another within most of the areas of science and industry. There has always been a close interaction between mathematics on the one hand and science and technology on the other. During the present century, advanced mathematical models and methods have been used more and more even within other areas—for example, in medicine, economics, and social science.

Very often, applications lead to mathematical problems which in their complete form cannot be conveniently solved with exact formulas. One often restricts oneself then to special cases or simplified models which can be exactly analyzed. In most cases, one thereby reduces the problem to a linear problem—for example, a linear differential equation. Such an approach can be very effective, and leads quite often to concepts and points of view which can at least qualitatively be used even in the unreduced problem.

But occasionally such an approach does not suffice. One can instead treat a less simplified problem with the use of a large amount of numerical calculation. The amount of work depends on the demand for accuracy. With computers, which have been developed during the past twenty-five years, the possibilities of using numerical methods have increased enormously. The points of view which one has taken toward them have also changed.

To develop a numerical method means, in most cases, that one applies a small number of general and relatively simple ideas. One combines these ideas in an inventive way with one another and with such knowledge of the given problem as one can obtain in other ways—for example, with the methods of mathematical analysis. Some knowledge of the background of the problem is also of value; among other things, one should take into account the order of magnitude of certain numerical data of the problem.

In this book we shall illustrate the use of the general ideas behind numerical methods on some problems which often occur as subproblems or computational details of larger problems, though as a rule they occur in a less pure form and on a larger scale than they do here. When we present and analyze numerical methods, we use to some degree the same approach which was described first above: we study in detail special cases and simplified situations, with the aim of uncovering more generally applicable concepts and points of view which can be a guide in more difficult problems.

In this chapter we shall throw some light upon some important ideas and problems in abbreviated form. A more systematic treatment comes in the chapters following.


One of the most frequently recurring ideas in many contexts is iteration (from the Latin iteratio, "repetition") or successive approximation. Taken generally, iteration means the repetition of a pattern of action or process. Iteration in this sense occurs, for example, in the repeated application of a numerical process—perhaps very complicated and itself containing many instances of the use of iteration in the somewhat narrower sense to be described below—in order to successively improve previous results. To illustrate a more specific use of the idea of iteration, we consider the problem of solving an equation of the form

x = F(x) (1.2.1)

Here F is a differentiable function whose value we can compute for any given value of the real variable x (within a certain interval). Using the method of iteration, one starts with an initial approximation x0, and computes the sequence

x1 = F(x0), x2 = F(x1), x3 = F(x2), ... (1.2.2)

Each computation of the type

xn+1 = F(xn)

is called an iteration. If the sequence {xn} converges to a limiting value α, then, since F is continuous, we have lim F(xn) = F(α), so x = α satisfies the equation x = F(x). As n grows, we would like the numbers xn to be better and better estimates of the desired root. One stops the iterations when sufficient accuracy has been attained.
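The scheme above can be sketched in a few lines of code. The function F, the starting value, the tolerance, and the iteration limit below are illustrative choices, not taken from the text.

```python
import math

def fixed_point(F, x0, tol=1e-10, max_iter=100):
    """Iterate x_{n+1} = F(x_n) until successive values differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_new = F(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# Example: x = cos(x) has a root near 0.739; since |F'(x)| < 1 there,
# the iteration converges.
root = fixed_point(math.cos, 1.0)
```

The stopping criterion |xn+1 - xn| < tol is the simplest practical version of "stop when sufficient accuracy has been attained."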

A geometric interpretation is shown in Fig. 1.2.1. A root of Eq. (1.2.1) is given by the abscissa (and ordinate) of an intersection point of the curve y = F(x) and the line y = x. Using iteration and starting from (x0, F(x0)) we obtain x1 = F(x0), and the point x1 on the x-axis is obtained by first drawing a horizontal line from the point (x0, F(x0)) = (x0, x1) until it intersects the line y = x in the point (x1, x1). From there we draw a vertical line to (x1, F(x1)) = (x1, x2), and so on. In Fig. 1.2.1 it is obvious that {xn} converges monotonely to α. Figure 1.2.2 shows a case where F is a decreasing function. There we also have convergence, but not monotone convergence: the successive iterates xn are alternately to the right and to the left of the root α.

But there are also divergent cases, exemplified by Figs. 1.2.3 and 1.2.4. One can see geometrically that the quantity which determines the rate of convergence (or divergence) is the slope of the curve y = F(x) in the neighborhood of the root. Indeed, from the mean-value theorem we have

xn+1 - xn = F(xn) - F(xn-1) = F'(ξn)(xn - xn-1),

where ξn lies between xn-1 and xn. Thus convergence is faster the smaller |F'(x)| is in a neighborhood of the root. Convergence is assured if |F'(x)| < 1 for all x in a neighborhood of the root containing x0 and x1. But if |F'(α)| > 1, xn converges to α only in very exceptional cases, no matter how close to α one chooses x0 (x0 ≠ α).

Example 1.2.1. A Fast Method for Calculating Square Roots

The equation x² = c can be written in the form x = F(x), where

F(x) = ½(x + c/x)

(Fig. 1.2.5). The limiting value is α = √c and F'(α) = 0. (Show this!) Thus we set

xn+1 = ½(xn + c/xn).

For c = 2, x0 = 1.5, we get x1 = ½(1.5 + 2/1.5) = 1.4167, x2 = 1.414216; compare √2 = 1.414214....

One can get a good value for x0 with a slide rule, but, as can be seen, a rougher value for x0 suffices. One can in fact show that if xn has t correct digits, then xn+1 will have at least 2t - 1 correct digits. The above iterative method for calculating square roots is used quite generally on both desktop calculators and computers.
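As a sketch, the square-root iteration of Example 1.2.1 with the values c = 2, x0 = 1.5 used in the text:

```python
def sqrt_iter(c, x0, n):
    """n steps of the iteration x_{k+1} = (x_k + c/x_k)/2."""
    x = x0
    for _ in range(n):
        x = 0.5 * (x + c / x)
    return x

# Two iterations, as in the example: x2 agrees with sqrt(2) to about
# six decimals, illustrating the doubling of correct digits per step.
x2 = sqrt_iter(2.0, 1.5, 2)
```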

Iteration is one of the most important aids for the practical as well as theoretical treatment of both linear and nonlinear problems. One very common application of iteration is to the solution of systems of equations. In this case {xn} is a sequence of vectors, and F is a vector-valued function. When iteration is applied to differential equations, {xn} means a sequence of functions, and F(x) means an expression in which integration or other operations on functions may be involved. A number of other variations on the very general idea of iteration will be given in later chapters. For each application it is of course necessary to find a suitable way to put the equations in a form similar to Eq. (1.2.1) and to choose a suitable initial approximation. One has a certain amount of choice in these matters which should be used in order to reduce the number of iterations one will have to make.

Example 1.2.2

The equation x² = 2 can also be written, among other ways, in the form x = 2/x. In the previous example we saw that the form x = ½(x + 2/x) gave rapid convergence when iteration was applied. On the other hand, the formula xn+1 = 2/xn gives a sequence which goes back and forth between x0 (for even n) and 2/x0 (for n odd)—the sequence does not converge.
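A short numerical check of this non-convergent form (the starting value 1.5 is an illustrative choice):

```python
# Iterate x_{n+1} = 2/x_n: the sequence merely alternates between
# x0 and 2/x0 instead of converging to sqrt(2).
x = 1.5
seq = []
for _ in range(4):
    x = 2.0 / x
    seq.append(x)
# seq is approximately [1.3333, 1.5, 1.3333, 1.5]
```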

Another often recurring idea is that one locally (that is, in a small neighborhood) approximates a complicated function with a linear function. We shall illustrate the use of this idea in the solution of the equation f(x) = 0. Geometrically, this means that we are seeking the intersection point between the x-axis and the curve y = f(x) (Fig. 1.2.6). Assume that we have an approximate value x0 for the root. We then approximate the curve with its tangent at the point (x0, f(x0)). Let x1 be the abscissa of the point of intersection between the x-axis and the tangent. Normally x1 will be a much better approximation to the root than x0. In most cases, x1 will have nearly twice as many correct digits as x0, but if x0 is a very poor initial approximation, then it is possible that x1 will be worse than x0.

A combination of the ideas of iteration and local linear approximation gives rise to a much used and, ordinarily, rapidly convergent process which is called Newton-Raphson's method (Fig. 1.2.7). In this iterative method xn+1 is defined as the abscissa of the point of intersection between the x-axis and the tangent to the curve y = f(x) in the point (xn, f(xn)).

The approximation of the curve y = f(x) with its tangent at the point (x0, f(x0)) is equivalent to replacing the function with the first-degree terms in its Taylor series about x = x0. The corresponding approximation for functions of many variables also has important uses.
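A sketch of the Newton-Raphson iteration described above: setting the tangent y = f(xn) + f'(xn)(x - xn) equal to zero gives the update xn+1 = xn - f(xn)/f'(xn). The test function and tolerance below are illustrative choices.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        dx = f(x) / fprime(x)
        x -= dx
        if abs(dx) < tol:
            return x
    raise RuntimeError("no convergence")

# For f(x) = x^2 - 2 the update simplifies to x - (x^2 - 2)/(2x)
# = (x + 2/x)/2, i.e., exactly the square-root iteration of Example 1.2.1.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```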

Another way (instead of drawing the tangent) to approximate a curve locally is to choose two neighboring points on the curve and to approximate the curve with the secant which joins the two points (Fig. 1.2.8). In a later chapter, we shall discuss more closely the secant method for the solution of equations, which is based on the above approximation.

The same secant approximation is useful in many other contexts. It is, for instance, generally used when one "reads between the lines" in a table of numerical values. In this case the secant approximation is called linear interpolation.

When the secant approximation is used in the approximate calculation of a definite integral,

I = ∫_a^b y(x) dx,

numerical integration (Fig. 1.2.9), it is called the trapezoidal rule. With this method, the area between the curve y = y(x) and the x-axis is approximated with the sum T(h) of the areas of a series of parallel trapezoids. Using the notation of Fig. 1.2.9 we have

T(h) = h(½y0 + y1 + y2 + ... + yn-1 + ½yn)

(in the figure, n = 4). We shall show in a later chapter that the error T(h) - I in the above approximation is very nearly proportional to h² when h is small. One can then, in principle, attain arbitrarily high accuracy by choosing h sufficiently small, except that the computational work involved (the number of points where y(x) must be computed) is inversely proportional to h. Thus the computational work grows rapidly as one demands higher accuracy (smaller h).
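The sum T(h) can be sketched directly; the integrand y(x) = x² on [0, 1] used in the demonstration is an illustrative choice, not from the text.

```python
def trapezoid(y, a, b, n):
    """Trapezoidal rule T(h) with n equal steps of length h = (b-a)/n:
    endpoint values get weight h/2, interior points weight h."""
    h = (b - a) / n
    s = 0.5 * (y(a) + y(b))
    for i in range(1, n):
        s += y(a + i * h)
    return h * s

# Example: integrate y(x) = x^2 on [0, 1]; the exact value is 1/3,
# and the error shrinks roughly like h^2 as n grows.
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 100)
```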

Numerical integration is a fairly common problem because in fact it is quite seldom that the "primitive" function can be analytically calculated in a finite expression containing only elementary functions. It is not possible, for example, for such simple functions as exp(x²) or (sin x)/x. In order to obtain higher accuracy with significantly less work than the trapezoidal rule requires, one can use one of the following two important ideas:

(a) Local approximation of the integrand with a polynomial of higher degree (or with a function of some other class, for which one knows the primitive function).

(b) Computation with the trapezoidal rule for several values of h and then extrapolation to h = 0, so-called Richardson extrapolation or the deferred approach to the limit, with the use of general results concerning the dependence of the error upon h.

The technical details for the various ways of approximating a function with a polynomial, among others Taylor expansions, interpolation, and the method of least squares, are treated in later chapters.

The extrapolation idea can easily be applied to numerical integration with the trapezoidal rule. As was mentioned previously, the trapezoidal approximation to I = ∫_a^b y(x) dx has an error approximately proportional to the square of the step size. Thus, using two step sizes, h and 2h, one has:

T(h) - I ≈ kh², T(2h) - I ≈ k(2h)² = 4kh²,

so that

T(2h) - T(h) ≈ 3kh², i.e., I ≈ T(h) - kh² ≈ T(h) + ⅓(T(h) - T(2h)).
Thus, by adding the corrective term, 1/3(T(h) - T(2h)), to T(h), one should get an estimate of I which is much better than T(h). In Chap. 7 we shall see that the improvement is in most cases quite striking. That chapter also contains a further development of the extrapolation idea, Romberg's method.
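The corrective term can be sketched as follows. Since T(h) - I is roughly proportional to h², the combination T(h) + (T(h) - T(2h))/3 cancels the leading error term. The integrand f(x) = x³ on [0, 1] (exact value 1/4) and the step counts are illustrative choices.

```python
def trapezoid(y, a, b, n):
    """Trapezoidal rule with n equal steps on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (y(a) + y(b))
    for i in range(1, n):
        s += y(a + i * h)
    return h * s

f = lambda x: x ** 3
T_2h = trapezoid(f, 0.0, 1.0, 4)   # step length 2h
T_h = trapezoid(f, 0.0, 1.0, 8)    # step length h
improved = T_h + (T_h - T_2h) / 3.0
```

For this integrand the extrapolated value is far more accurate than T(h) itself, illustrating the "quite striking" improvement mentioned above.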

Example 1.2.3

Compute ∫ f(x) dx for f(x) = x³ and f(x) = x⁴ by the trapezoidal method. Extrapolate and compare with the exact results.

We have seen above that some knowledge of the behavior of the error can, together with the idea of extrapolation, lead to a powerful method for improving results. Such a line of reasoning is useful not only for the common problem of numerical integration, but also in many other types of problems.

Approximate solution of differential equations is a very important problem which now, since the development of computers, one has the possibility of treating to a much larger extent than previously. Nearly all the areas of science and technology contain mathematical models which lead to systems of ordinary or partial differential equations. Let us consider a case of just one ordinary differential equation,

dy/dx = f(x,y)

with initial condition y(0) = p. The differential equation indicates, at each point (x, y), the direction of the tangent to the solution curve which passes through the point in question. The direction of the tangent changes continuously from point to point, but the simplest approximation (which was proposed as early as the 18th century, by Euler) is that one studies the solution for only certain values of x = 0, h, 2h, 3h, ... (h is called the "step" or "step length") and assumes that dy/dx is constant between the points. In this way, the solution curve is approximated by a polygon segment (Fig. 1.2.10) which joins the points (0, y0), (h, y1), (2h, y2), ... where

y0 = p, (yn+1 - yn)/h = f(nh, yn), n = 0, 1, 2, .... (1.2.3)

Thus we have a simple recursion formula (Euler's method):

yn+1 = yn + hf(nh, yn), y0 = p.
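The recursion is easily sketched in code. The test equation dy/dx = y with y(0) = 1 (exact solution e^x) and the step length are illustrative choices, not from the text.

```python
def euler(f, p, h, steps):
    """Euler's method: y_{n+1} = y_n + h*f(nh, y_n), starting from y(0) = p."""
    y = p
    x = 0.0
    for _ in range(steps):
        y = y + h * f(x, y)
        x += h
    return y

# dy/dx = y, y(0) = 1: after 100 steps of h = 0.01 we reach x = 1,
# where the exact solution is e = 2.71828...; the Euler polygon gives
# (1 + h)^100, a somewhat low approximation.
approx_e = euler(lambda x, y: y, 1.0, 0.01, 100)
```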

Excerpted from NUMERICAL METHODS by GERMUND DAHLQUIST, ÅKE BJÖRCK, Ned Anderson. Copyright © 1974 Prentice-Hall, Inc. Excerpted by permission of Dover Publications, Inc.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
