Applied Iterative Methods

Overview

This graduate-level text examines the practical use of iterative methods in solving large, sparse systems of linear algebraic equations and in resolving multidimensional boundary-value problems. Assuming minimal mathematical background, it profiles the relative merits of several general iterative procedures. Topics include polynomial acceleration of basic iterative methods, Chebyshev and conjugate gradient acceleration procedures applicable to partitioning the linear system into a “red/black” block form, adaptive computational algorithms for the successive overrelaxation (SOR) method, and computational aspects in the use of iterative algorithms for solving multidimensional problems. 1981 edition. 48 figures. 35 tables.


Product Details

  • ISBN-13: 9780486434773
  • Publisher: Dover Publications
  • Publication date: 7/26/2004
  • Series: Dover Books on Mathematics Series
  • Pages: 416
  • Product dimensions: 5.40 (w) x 8.48 (h) x 0.82 (d)

Read an Excerpt

Applied Iterative Methods


By Louis A. Hageman, David M. Young

Dover Publications, Inc.

Copyright © 1981 Louis A. Hageman and David M. Young
All rights reserved.
ISBN: 978-0-486-15330-8



CHAPTER 1

Background on Linear Algebra and Related Topics


1.1 INTRODUCTION

In this chapter we give some background material from linear algebra which will be helpful for our discussion of iterative methods. In Sections 1.2–1.5, we present a brief summary of basic matrix properties and principles which will be used in subsequent chapters. No proofs are given. It is assumed that the reader is already familiar with the general theory of matrices such as that presented, for instance, in Noble and Daniel [1977] or in Faddeev and Faddeeva [1963, Chap. 1]. In Sections 1.6 and 1.7, we discuss the matrix problem which is obtained from a simple discretization of the generalized Dirichlet problem. The purpose of this example is to illustrate some of the matrix concepts presented in this chapter and to illustrate the formulation of matrix problems arising from the discretization of boundary-value problems.


1.2 VECTORS AND MATRICES

We let EN denote the set of all N × 1 column matrices, or vectors, whose components may be real or complex. A typical element v of EN is given by

v = (v1, v2, ..., vN)T. (1-2.1)


To indicate that vi, i = 1, 2, ..., N, are the components of v, we write v = (vi). A collection of vectors v(1), v(2), ..., v(s) is said to be linearly dependent if there exist real or complex numbers c1, c2, ..., cs, not all zero, such that

c1v(1) + c2v(2) + ··· + csv(s) = 0.


If this equality holds only when all the constants c1, ..., cs are zero, then the vectors v(1), ..., v(s) are said to be linearly independent. A basis for EN is a set of N linearly independent vectors of EN. Given such a basis, say, v(1), v(2), ..., v(N), then any vector w in EN can be expressed uniquely as a linear combination of the basis vectors; i.e., there exists a unique set of numbers c1, c2, ..., cN such that

w = c1v(1) + c2v(2) + ··· + cNv(N). (1-2.2)


The transpose of the vector v is denoted by vT and the conjugate transpose by vH. Given two vectors w and v of EN, the inner product (w, v) of the vector w with v is defined by

(w, v) ≡ wHv. (1-2.3)
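
As a quick illustration (added here, not part of the original text), note that the inner product (1-2.3) conjugates its first argument. A minimal numpy sketch, with arbitrarily chosen vectors w and v:

```python
import numpy as np

# Inner product (w, v) = wHv of (1-2.3): conjugate-transpose the first
# argument, then multiply. The vectors are arbitrary illustrative choices.
w = np.array([1 + 1j, 2 - 1j])
v = np.array([3 + 0j, 1 + 2j])

inner = np.vdot(w, v)        # np.vdot conjugates its first argument
manual = np.conj(w) @ v      # explicit wHv
assert inner == manual       # both give 3+2j for these vectors
```

Note that (w, v) is linear in v but conjugate-linear in w, which is why the order of arguments matters for complex vectors.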


Similarly, EN, N denotes the set of all N × N square matrices whose elements may be real or complex. A typical element of EN, N is given by

    | a1,1  a1,2  ···  a1,N |
A = | a2,1  a2,2  ···  a2,N |    (1-2.4)
    |  ···   ···        ··· |
    | aN,1  aN,2  ···  aN,N |


or, equivalently, in abbreviated form by A = (ai,j) for 1 ≤ i, j ≤ N. We denote the transpose of the matrix A by AT and the conjugate transpose of A by AH. If the elements of A are real, then AH = AT. Normally, we shall deal only with real matrices. The matrix A is symmetric if A = AT.

The special N × N matrix A = (ai,j), where ai,i = 1 and ai,j = 0 if i ≠ j, is called the identity matrix and is denoted by I. A matrix A in EN,N is nonsingular if there exists a matrix H such that AH = HA = I. If such an H exists, it is unique. The matrix H is called the inverse of A and is denoted by A-1.

A real matrix A in EN,N is symmetric and positive definite (SPD) if A is symmetric and if (v, Av) > 0 for any nonzero vector v. If A is SPD, then A is nonsingular. The matrix LLT is SPD for any real nonsingular matrix L. Also, if A is SPD, there exists a unique SPD matrix J such that J2 = A. The matrix J is called the square root of the SPD matrix A and is denoted by A1/2.
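
The SPD facts above can be checked numerically. The following sketch (an illustration added here, with an arbitrarily chosen nonsingular L) verifies that LLT is SPD and constructs the square root A1/2 from the eigendecomposition:

```python
import numpy as np

# L is an arbitrary nonsingular lower-triangular matrix; A = L L^T is then SPD.
L = np.array([[2.0, 0.0],
              [1.0, 3.0]])
A = L @ L.T

# An SPD matrix has positive eigenvalues (and is therefore nonsingular).
assert np.all(np.linalg.eigvalsh(A) > 0)

# Square root A^(1/2): with A = Q diag(lam) Q^T, set A^(1/2) = Q diag(sqrt(lam)) Q^T.
lam, Q = np.linalg.eigh(A)
A_half = Q @ np.diag(np.sqrt(lam)) @ Q.T
assert np.allclose(A_half @ A_half, A)
```

Taking the positive square roots of the eigenvalues is what makes A1/2 itself SPD, which is the uniqueness condition stated above.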


1.3 EIGENVALUES AND EIGENVECTORS

An eigenvalue of the N × N matrix A is a real or complex number λ which, for some nonzero vector y, satisfies the matrix equation

(A - λI)y = 0. (1-3.1)


Any nonzero vector y which satisfies (1-3.1) is called an eigenvector of the matrix A corresponding to the eigenvalue λ.

In order for (1-3.1) to have a nontrivial solution vector y, the determinant of A - λI (denoted by det (A - λI)) must be zero. Hence, any eigenvalue λ must satisfy

det(A - λI) = 0. (1-3.2)


Equation (1-3.2), which is called the characteristic equation of A, is a polynomial of degree N in λ. The eigenvalues of A are the N zeros of the polynomial (1-3.2).

A matrix A in EN, N has precisely N eigenvalues, λ1, λ2, ..., λN, some of which may be complex. The existence of at least one eigenvector corresponding to each eigenvalue λi is assured since (1-3.1) with λ = λi has a nontrivial solution. Eigenvectors corresponding to unequal eigenvalues are linearly independent. Thus, when all the eigenvalues of A are distinct, the set of eigenvectors for A includes a basis for the vector space EN. However, this is not always the case when some eigenvalues of A are repeated. In this book, we shall be concerned, for the most part, with those matrices whose eigenvalues are real and whose set of eigenvectors includes a basis for EN. Such eigenproperties are satisfied, for example, by the eigenvalues and eigenvectors of symmetric matrices.
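
As a concrete illustration (added here, not from the text), for a 2 × 2 matrix the zeros of the characteristic polynomial (1-3.2) can be computed directly and compared with the eigenvalues numpy returns; the matrix below is an arbitrary symmetric choice:

```python
import numpy as np

# For a 2x2 matrix the characteristic equation (1-3.2) reads
# lambda^2 - (trace A) lambda + (det A) = 0; its roots are the eigenvalues.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # arbitrary symmetric example

trace, det = np.trace(A), np.linalg.det(A)
roots = np.roots([1.0, -trace, det])   # zeros of the characteristic polynomial

assert np.allclose(np.sort(roots.real), np.sort(np.linalg.eigvals(A).real))
```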

Before discussing symmetric matrices and related matrices, we introduce some notations that will be used repeatedly in this and subsequent chapters.

The spectral radius S(A) of the N × N matrix A is defined as the maximum of the moduli of the eigenvalues of A; i.e., if λ1, λ2, ..., λN is the set of eigenvalues of A, then

S(A) = max{|λi| : 1 ≤ i ≤ N}. (1-3.3)


If the eigenvalues of A are real, we let m(A) and M(A) denote, respectively, the algebraically smallest and largest eigenvalues of A, i.e.,

m(A) = min{λi : 1 ≤ i ≤ N},  M(A) = max{λi : 1 ≤ i ≤ N}. (1-3.4)
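
The quantities S(A), m(A), and M(A) defined in (1-3.3) and (1-3.4) can be sketched numerically as follows (an illustration added here; the tridiagonal matrix is an arbitrary symmetric example):

```python
import numpy as np

# Compute S(A), m(A), and M(A) for a small symmetric tridiagonal example.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

lam = np.linalg.eigvalsh(A)      # real eigenvalues, in ascending order
m_A, M_A = lam[0], lam[-1]       # algebraically smallest / largest eigenvalues
S_A = np.max(np.abs(lam))        # spectral radius

# Since this example is positive definite, all eigenvalues are positive,
# so the spectral radius coincides with M(A).
assert np.isclose(S_A, M_A)
```

For matrices with eigenvalues of mixed sign, S(A) is max(|m(A)|, |M(A)|) and need not equal M(A).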


Eigenproperties of Symmetric and Related Matrices

Important eigenproperties of real symmetric matrices are summarized in the following theorem.

Theorem 1-3.1. If the N × N matrix A is real and symmetric, then

(1) the eigenvalues λi, i = 1, ..., N, of A are real, and

(2) there exist N real eigenvectors y(1), y(2), ..., y(N) for A such that

(a) Ay(i) = λiy(i), i = 1, ..., N,

(b) the set y(1), y(2), ..., y(N) is a basis for EN, and

(c) (y(i), y(j)) = 0 if i ≠ j and (y(i), y(j)) = 1 if i = j.


When A is SPD, in addition to having the eigenproperties given in Theorem 1-3.1, A has positive eigenvalues. Since a matrix A is nonsingular if and only if no eigenvalue of A equals zero, it follows that an SPD matrix is also nonsingular.

Two matrices A and B are similar if B = WAW-1 for some nonsingular matrix W. Similar matrices have identical eigenvalues.

Except for (2c), the conclusions of Theorem 1-3.1 also are valid for any real matrix A which is similar to a real symmetric matrix C.

For any real matrix A in EN, N and for any nonzero vector v in EN (real or complex), the Rayleigh quotient of v with respect to A is defined as the quotient of inner products (v, Av)/(v, v). If A is symmetric, then for any nonzero vector v in EN

m(A) ≤ (v, Av)/(v, v) ≤ M(A). (1-3.5)


Here m(A) and M(A) are defined by (1-3.4). Moreover, there exist nonzero vectors w and z such that

(w, Aw)/(w, w) = m(A), (z, Az)/(z, z) = M(A) (1-3.6)

and

Aw = m(A)w, Az = M(A)z. (1-3.7)
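
The bounds (1-3.5) are easy to test numerically. The sketch below (added here as an illustration; the matrix and the random vectors are arbitrary choices) checks the Rayleigh quotient of many random nonzero vectors against m(A) and M(A):

```python
import numpy as np

# Check the Rayleigh-quotient bounds (1-3.5) for a symmetric matrix A.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
m_A, M_A = np.linalg.eigvalsh(A)[[0, -1]]   # smallest and largest eigenvalues

rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.standard_normal(2)
    rq = (v @ A @ v) / (v @ v)              # Rayleigh quotient (v, Av)/(v, v)
    assert m_A - 1e-12 <= rq <= M_A + 1e-12
```

The extremes are attained exactly when v is an eigenvector for m(A) or M(A), which is the content of (1-3.6) and (1-3.7).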


Eigenproperties of Real Nonsymmetric Matrices

The material in this subsection is used primarily in Chapter 9. Since the discussion is somewhat involved, many readers may wish to skip it on a first reading.

A real nonsymmetric matrix A may have complex eigenvalues. Since the coefficients of the characteristic polynomial (1-3.2) are real for this case, any complex eigenvalues of A must occur in complex conjugate pairs; i.e., if λi is a complex eigenvalue of the real matrix A, then λk = λi* is also an eigenvalue of A. Here, λi* denotes the complex conjugate of λi. Moreover, if y(i) is an eigenvector corresponding to λi, then y(k) = y*(i) is an eigenvector of A corresponding to λk = λi*.

For an N × N nonsymmetric matrix A, it is not always possible to find a basis for EN from the set of eigenvectors of A. However, it is always possible to form a basis from the independent eigenvectors of A supplemented by other vectors (called principal vectors) which are associated with the eigenvalues and eigenvectors of A. Such a basis can best be described in terms of the Jordan canonical form associated with A. The following is a restatement of the results given in Noble and Daniel [1977].

A square matrix of order ≥ 1 that has the form

    | λ  1             |
    |    λ  1          |
J = |       ·  ·       |    (1-3.8)
    |          ·  1    |
    |             λ    |


is called a Jordan block. Note that the elements of J are zero except for those on the principal diagonal, which are all equal to λ, and those on the first superdiagonal, which are all equal to unity. Any matrix A can be reduced to a direct sum of Jordan blocks by a similarity transformation. More precisely, we have

Theorem 1-3.2. For any N × N matrix A, there exists a nonsingular matrix V such that

V-1AV = J = diag(J1, J2, ..., Jk), (1-3.9)


where each Ji, 1 ≤ ik, is a Jordan block whose constant diagonal element is an eigenvalue of A. The number of linearly independent eigenvectors of A is equal to the number k of Jordan blocks in (1-3.9).

The matrix J in (1-3.9) is called the Jordan canonical form of A and is unique up to a permutation of the diagonal submatrices. Let the column vectors of V be denoted by v(1), v(2), ..., v(N); i.e., V = [v(1), v(2), ..., v(N)]. Since V is nonsingular, the set of vectors v(1), ..., v(N) is a basis for EN. For use in later chapters, we now examine the behavior of the matrix-vector products A^l v(i), i = 1, ..., N.

From the relation AV = V J and the form of J, it follows that the vectors v(i) separate, for each Jordan block J, into equations of the form

Av(i) = λiv(i) + viv(i - 1), (1-3.10)


where λi is an eigenvalue of A and vi is either 0 or 1, depending on J. If vi = 0, then v(i) is an eigenvector of A. When v(i) is an eigenvector of A, we have by (1-3.10) that

A^l v(i) = A^(l-1)(λiv(i)) = ··· = (λi)^l v(i). (1-3.11)
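
A small numerical check of (1-3.11), added here as an illustration (the matrix is an arbitrary symmetric choice): repeated application of A to an eigenvector only rescales it by powers of the eigenvalue.

```python
import numpy as np

# If v is an eigenvector of A with eigenvalue lam, then A^l v = lam^l v.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # arbitrary symmetric example
lam, V = np.linalg.eigh(A)
v = V[:, 1]                  # eigenvector for the eigenvalue lam[1] = 3
l = 4

assert np.allclose(np.linalg.matrix_power(A, l) @ v, lam[1] ** l * v)
```

This behavior underlies the analysis of iterative methods in later chapters: components of an error vector along eigenvectors grow or decay like powers of the corresponding eigenvalues.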


(Continues...)

Excerpted from Applied Iterative Methods by Louis A. Hageman, David M. Young. Copyright © 1981 Louis A. Hageman and David M. Young. Excerpted by permission of Dover Publications, Inc.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.


Table of Contents

1. Background on Linear Algebra and Related Topics
2. Background on Basic Iterative Methods
3. Polynomial Acceleration
4. Chebyshev Acceleration
5. An Adaptive Chebyshev Procedure Using Special Norms
6. Adaptive Chebyshev Acceleration
7. Conjugate Gradient Acceleration
8. Special Methods for Red/Black Partitionings
9. Adaptive Procedures for the Successive Overrelaxation Method
10. The Use of Iterative Methods in the Solution of Partial Differential Equations
11. Case Studies
12. The Nonsymmetrizable Case
Appendixes
Bibliography
Index
