This excellent text for advanced undergraduates and graduate students covers norms, numerical solution of linear systems and matrix factoring, iterative solutions of nonlinear equations, eigenvalues and eigenvectors, polynomial approximation, and other topics. It offers a careful analysis and stresses techniques for developing new methods, plus many examples and problems. 1966 edition.
ANALYSIS OF NUMERICAL METHODS
By EUGENE ISAACSON, HERBERT BISHOP KELLER
Dover Publications, Inc.
Copyright © 1966 John Wiley & Sons. All rights reserved.
ISBN: 9780486137988
Contents
Chapter 1 Norms, Arithmetic, and Well-Posed Computations,
Chapter 2 Numerical Solution of Linear Systems and Matrix Inversion,
Chapter 3 Iterative Solutions of Non-Linear Equations,
Chapter 4 Computation of Eigenvalues and Eigenvectors,
Chapter 5 Basic Theory of Polynomial Approximation,
Chapter 6 Differences, Interpolation Polynomials, and Approximate Differentiation,
Chapter 7 Numerical Integration,
Chapter 8 Numerical Solution of Ordinary Differential Equations,
Chapter 9 Difference Methods for Partial Differential Equations,
Bibliography,
Index,
CHAPTER 1
Norms, Arithmetic, and Well-Posed Computations
0. INTRODUCTION
In this chapter, we treat three topics that are generally useful for the analysis of the various numerical methods studied throughout the book. In Section 1, we give the elements of the theory of norms of finite-dimensional vectors and matrices. This subject properly belongs to the field of linear algebra. In later chapters, we may occasionally employ the notion of the norm of a function. This is a straightforward extension of the notion of a vector norm to the infinite-dimensional case. On the other hand, we shall not introduce the corresponding natural generalization, i.e., the notion of the norm of a linear transformation that acts on a space of functions. Such ideas are dealt with in functional analysis, and might profitably be used in a more sophisticated study of numerical methods.
We study briefly, in Section 2, the practical problem of the effect of rounding errors on the basic operations of arithmetic. Except for calculations involving only exact-integer arithmetic, rounding errors are invariably present in any computation. A most important feature of the later analysis of numerical methods is the incorporation of a treatment of the effects of such rounding errors.
Finally, in Section 3, we describe the computational problems that are 'reasonable' in some general sense. In effect, a numerical method which produces a solution insensitive to small changes in data or to rounding errors is said to yield a well-posed computation. How to determine the sensitivity of a numerical procedure is dealt with in special cases throughout the book. We indicate heuristically that any convergent algorithm is a well-posed computation.
1. NORMS OF VECTORS AND MATRICES
We assume that the reader is familiar with the basic theory of linear algebra, not necessarily in its abstract setting, but at least with specific reference to finite-dimensional linear vector spaces over the field of complex scalars. By 'basic theory' we of course include: the theory of linear systems of equations, some elementary theory of determinants, and the theory of matrices or linear transformations to about the Jordan normal form. We hardly employ the Jordan form in the present study. In fact, a much weaker result can frequently be used in its place (when the divisor theory or invariant subspaces are not actually involved). This result is all too frequently skipped in basic linear algebra courses, so we present it as
THEOREM 1. For any square matrix A of order n there exists a nonsingular matrix P, of order n, such that
B = P1⁻¹AP1 ... no, B = P⁻¹AP
is upper triangular and has the eigenvalues of A, say λj ≡ λj(A), j = 1, 2, ..., n, on the principal diagonal (i.e., any square matrix is equivalent to a triangular matrix).
Proof. We sketch the proof of this result. The reader should have no difficulty in completing the proof in detail.
Let λ1 be an eigenvalue of A with corresponding eigenvector u1. Then pick a basis for the n-dimensional complex vector space, Cn, with u1 as the first such vector. Let the independent basis vectors be the columns of a nonsingular matrix P1, which then determines the transformation to the new basis. In this new basis the transformation determined by A is given by B1 ≡ P1⁻¹AP1, and since Au1 = λ1u1,
$$B_1 = \begin{pmatrix} \lambda_1 & \alpha_1 \ \cdots \ \alpha_{n-1} \\ \mathbf{0} & A_2 \end{pmatrix},$$
where A2 is some matrix of order n − 1.
The characteristic polynomial of B1 is clearly
$$\det(\lambda I_n - B_1) = (\lambda - \lambda_1)\,\det(\lambda I_{n-1} - A_2),$$
where In is the identity matrix of order n. Now pick some eigenvalue λ2 of A2 and corresponding (n − 1)-dimensional eigenvector, v2; i.e.,
A2v2 = λ2v2.
With this vector we define the independent n-dimensional vectors
$$\tilde{u}_1 \equiv \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \qquad \tilde{u}_2 \equiv \begin{pmatrix} 0 \\ \nu_{12} \\ \vdots \\ \nu_{n-1,2} \end{pmatrix}.$$
Note that with the scalar $\alpha \equiv \alpha_1\nu_{12} + \alpha_2\nu_{22} + \cdots + \alpha_{n-1}\nu_{n-1,2}$,
B1ũ1 = λ1ũ1, B1ũ2 = λ2ũ2 + αũ1,
and thus if we set u1 = P1ũ1, u2 = P1ũ2, then
Au1 = λ1u1, Au2 = λ2u2 + αu1.
Now we introduce a new basis of Cn with the first two vectors being u1 and u2. The nonsingular matrix P2 which determines this change of basis has u1 and u2 as its first two columns; and the original linear transformation in the new basis has the representation
$$P_2^{-1} A P_2 = \begin{pmatrix} \lambda_1 & \alpha & * \ \cdots \ * \\ 0 & \lambda_2 & * \ \cdots \ * \\ \mathbf{0} & \mathbf{0} & A_3 \end{pmatrix},$$
where A3 is some matrix of order n − 2.
The theorem clearly follows by the above procedure; a formal inductive proof could be given.
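As a numerical aside (our own illustration, not part of the text), the conclusion of Theorem 1 is easy to exercise in Python with NumPy. The sketch below uses a diagonalizable sample matrix A, for which the triangular form B = P⁻¹AP is in fact diagonal, a special case of upper triangular; the matrix and all names are our own choices.

```python
import numpy as np

# Sketch (illustration only): for a diagonalizable A, the matrix P whose
# columns are eigenvectors of A triangularizes it -- here B = P^{-1} A P
# is diagonal, with the eigenvalues of A on the principal diagonal.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

lams, P = np.linalg.eig(A)      # eigenvalues and eigenvector matrix
B = np.linalg.inv(P) @ A @ P    # B = P^{-1} A P

assert np.allclose(B, np.diag(lams))            # B is (here) diagonal
assert np.allclose(np.sort(np.diag(B).real), np.sort(lams.real))
```

For a defective (non-diagonalizable) matrix the same similarity reduction still yields an upper-triangular B, as the theorem asserts, though P can no longer be a matrix of eigenvectors alone.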
It is easy to prove the related stronger result of Schur stated in Theorem 2.4 of Chapter 4 (see Problem 2.13(b) of Chapter 4). We turn now to the basic content of this section, which is concerned with the generalization of the concept of distance in n-dimensional linear vector spaces.
The 'distance' between a vector and the null vector, i.e., the origin, is a measure of the 'size' or 'length' of the vector. This generalized notion of distance or size is called a norm. In particular, all such generalizations are required to have the following properties:
(0) To each vector x in the linear space, 𝒱, say, a unique real number is assigned; this number, denoted by ‖x‖ or N(x), is called the norm of x iff:
(i) ‖x‖ ≥ 0 for all x ∈ 𝒱, and ‖x‖ = 0 iff x = o; where o denotes the zero vector (if 𝒱 ≡ Cn, then each component oi = 0);
(ii) ‖αx‖ = |α| · ‖x‖ for all scalars α and all x ∈ 𝒱;
(iii) ‖x + y‖ ≤ ‖x‖ + ‖y‖, the triangle inequality, for all x, y ∈ 𝒱.
Some examples of norms in the complex n-dimensional space Cn are

(1a) $\|x\|_1 \equiv \sum_{j=1}^{n} |x_j|;$

(1b) $\|x\|_2 \equiv \left(\sum_{j=1}^{n} |x_j|^2\right)^{1/2};$

(1c) $\|x\|_p \equiv \left(\sum_{j=1}^{n} |x_j|^p\right)^{1/p}, \quad p \ge 1;$

(1d) $\|x\|_\infty \equiv \max_{1 \le j \le n} |x_j|.$
It is an easy exercise for the reader to justify the use of the notation in (1d) by verifying that
$$\lim_{p \to \infty} \|x\|_p = \|x\|_\infty.$$
The norm ‖·‖2 is frequently called the Euclidean norm, as it is just the formula for distance in ordinary three-dimensional Euclidean space extended to dimension n. The norm ‖·‖∞ is called the maximum norm or, occasionally, the uniform norm. In general, ‖·‖p, for p ≥ 1, is termed the p-norm.
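As a concrete aside (our own illustration), the norms (1a)-(1d) are straightforward to evaluate numerically; the Python sketch below, using NumPy, also checks the limit relation behind the notation in (1d) for a sample vector.

```python
import numpy as np

# Sketch (illustration only): evaluate the norms (1a)-(1d) for a sample
# vector, and check that the p-norm approaches the maximum norm as p
# grows -- the fact that justifies the notation in (1d).
x = np.array([3.0, -4.0, 1.0])

norm_1   = np.sum(np.abs(x))                  # (1a)
norm_2   = np.sqrt(np.sum(np.abs(x) ** 2))    # (1b)
def norm_p(v, p):                             # (1c), for p >= 1
    return np.sum(np.abs(v) ** p) ** (1.0 / p)
norm_inf = np.max(np.abs(x))                  # (1d)

assert norm_1 == 8.0
assert np.isclose(norm_2, np.sqrt(26.0))
assert np.isclose(norm_p(x, 2.0), norm_2)
# ||x||_p -> ||x||_inf as p -> infinity:
assert abs(norm_p(x, 200.0) - norm_inf) < 1e-2
```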
To verify that (1) actually defines norms, we observe that conditions (0), (i), and (ii) are trivially satisfied. Only the triangle inequality, (iii), offers any difficulty. However,
$$\|x + y\|_1 = \sum_{j=1}^{n} |x_j + y_j| \le \sum_{j=1}^{n} (|x_j| + |y_j|) = \|x\|_1 + \|y\|_1,$$
and
$$\|x + y\|_\infty = \max_{1 \le j \le n} |x_j + y_j| \le \max_{1 \le j \le n} (|x_j| + |y_j|) \le \|x\|_\infty + \|y\|_\infty,$$
so (1a) and (1d) define norms.
The proof of (iii) for (1b), the Euclidean norm, is based on the well-known Cauchy–Schwarz inequality, which states that
(2) $$\left|\sum_{j=1}^{n} x_j \bar{y}_j\right|^2 \le \left(\sum_{j=1}^{n} |x_j|^2\right)\left(\sum_{j=1}^{n} |y_j|^2\right).$$
To prove this basic result, let x and y be the n-dimensional vectors with components xj and yj, j = 1, 2, ..., n, respectively. Then for any real scalar ξ,

$$0 \le \sum_{j=1}^{n} (|x_j| + \xi |y_j|)^2 = \sum_{j=1}^{n} |x_j|^2 + 2\xi \sum_{j=1}^{n} |x_j||y_j| + \xi^2 \sum_{j=1}^{n} |y_j|^2.$$

But since the real quadratic polynomial in ξ above does not change sign, its discriminant must be non-positive; i.e.,

$$\left(\sum_{j=1}^{n} |x_j||y_j|\right)^2 \le \left(\sum_{j=1}^{n} |x_j|^2\right)\left(\sum_{j=1}^{n} |y_j|^2\right).$$
However, we note that
$$\left|\sum_{j=1}^{n} x_j \bar{y}_j\right| \le \sum_{j=1}^{n} |x_j||y_j|,$$
and (2) follows from the above pair of inequalities.
Now we form
$$N_2^2(x + y) = \sum_{j=1}^{n} |x_j + y_j|^2 \le \sum_{j=1}^{n} |x_j|^2 + 2\sum_{j=1}^{n} |x_j||y_j| + \sum_{j=1}^{n} |y_j|^2.$$
An application of the CauchySchwarz inequality yields finally
N2(x + y) ≤ N2(x) + N2(y)
and so the Euclidean norm also satisfies the triangle inequality.
The statement that
(3) Np(x + y) ≤ Np(x) + Np(y), p ≥ 1,
is known as Minkowski's inequality. We do not derive it here, as general p-norms will not be employed further. (A proof of (3) can be found in most advanced calculus texts.)
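As a numerical aside (our own illustration, a spot check rather than a proof), the Cauchy–Schwarz inequality (2) and Minkowski's inequality (3) can be exercised on random vectors; the Python sketch below uses NumPy and real vectors for simplicity.

```python
import numpy as np

# Sketch (illustration only): spot-check Cauchy-Schwarz (2) and
# Minkowski (3) on randomly drawn real vectors.  The small tolerances
# guard against floating-point rounding at equality.
rng = np.random.default_rng(0)

def norm_p(v, p):
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

for _ in range(100):
    x = rng.standard_normal(5)
    y = rng.standard_normal(5)
    # (2): |sum_j x_j y_j|^2 <= (sum_j |x_j|^2)(sum_j |y_j|^2)
    assert np.dot(x, y) ** 2 <= np.dot(x, x) * np.dot(y, y) + 1e-12
    # (3): N_p(x + y) <= N_p(x) + N_p(y) for p >= 1
    for p in (1.0, 2.0, 3.5):
        assert norm_p(x + y, p) <= norm_p(x, p) + norm_p(y, p) + 1e-12
```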
We can show quite generally that all vector norms are continuous functions in Cn. That is,
LEMMA 1. Every vector norm, N(x), is a continuous function of x1, x2, ..., xn, the components of x.
Proof. For any vectors x and δ we have by (iii)
N(x + δ) ≤ N(x) + N(δ),
so that
N(x + δ) − N(x) ≤ N(δ).
On the other hand, by (ii) and (iii),
N(x) = N(x + δ − δ) ≤ N(x + δ) + N(−δ) = N(x + δ) + N(δ),
so that
−N(δ) ≤ N(x + δ) − N(x).
Thus, in general
|N(x + δ) − N(x)| ≤ N(δ).
With the unit vectors {ek}, any δ has the representation
$$\delta = \sum_{k=1}^{n} \delta_k e_k.$$
Using (ii) and (iii) repeatedly implies
(4a) $$N(\delta) \le \sum_{k=1}^{n} |\delta_k|\, N(e_k) \le M N_\infty(\delta),$$
where
(4b) $$M \equiv \sum_{k=1}^{n} N(e_k).$$
Using this result in the previous inequality yields, for any ε > 0 and all δ with N∞(δ) ≤ ε/M,
|N(x + δ) − N(x)| ≤ ε.
This is essentially the definition of continuity for a function of the n variables x1, x2, ..., xn.
See Problem 6 for a mild generalization.
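As an aside (our own illustration), the key estimate of the proof, |N(x + δ) − N(x)| ≤ M·N∞(δ) with M as in (4b), can be spot-checked numerically. In the Python sketch below we choose N to be the Euclidean norm on real 3-vectors, so N(ek) = 1 for each unit vector and M = 3; both the choice of norm and the dimension are our own.

```python
import numpy as np

# Sketch (illustration only): spot-check |N(x + d) - N(x)| <= M * N_inf(d),
# where M = sum_k N(e_k) as in (4b).  With N the Euclidean norm on
# 3-vectors, N(e_k) = 1 for each unit vector e_k, so M = 3.
N = np.linalg.norm                            # chosen norm (Euclidean)
n = 3
M = sum(N(np.eye(n)[k]) for k in range(n))    # (4b); here M = 3

rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.standard_normal(n)
    d = rng.standard_normal(n)
    assert abs(N(x + d) - N(x)) <= M * np.max(np.abs(d)) + 1e-12
```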
Now we can show that all vector norms are equivalent in the sense of
THEOREM 2. For each pair of vector norms, say N(x) and N′(x), there exist positive constants m and M such that for all x ∈ Cn:
mN′(x) ≤ N(x) ≤ MN′(x).
Proof. The proof need only be given when one of the norms is N∞, since N and N′ are equivalent if they are each equivalent to N∞. Let S ⊂ Cn be defined by

S ≡ {x | N∞(x) = 1, x ∈ Cn}

(this is frequently called the surface of the unit ball in Cn). S is a closed, bounded set of points. Then since N(x) is a continuous function (see Lemma 1), we conclude by a theorem of Weierstrass that N(x) attains its minimum and its maximum on S at some points of S. That is, for some x0 ∈ S and x1 ∈ S,
$$N(x_0) = \min_{x \in S} N(x), \qquad N(x_1) = \max_{x \in S} N(x),$$
or
0 < N(x0) ≤ N(x) ≤ N(x1) < ∞
for all x ∈ S.
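As a concrete instance of Theorem 2 (our own illustration): for N = N1 and N′ = N∞ on real n-vectors one may take m = 1 and M = n, since N∞(x) ≤ N1(x) ≤ n·N∞(x). The Python sketch below spot-checks this on random vectors.

```python
import numpy as np

# Sketch (illustration only): the 1-norm and the maximum norm are
# equivalent with constants m = 1 and M = n, i.e.
#     1 * N_inf(x) <= N_1(x) <= n * N_inf(x).
n = 4
rng = np.random.default_rng(2)
for _ in range(100):
    x = rng.standard_normal(n)
    N1 = np.sum(np.abs(x))        # N_1(x)
    Ninf = np.max(np.abs(x))      # N_inf(x)
    assert Ninf <= N1 <= n * Ninf + 1e-12
```

Both bounds are attained: the lower by any unit vector ek, the upper by a vector with all components of equal modulus.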
(Continues...)