Advanced Mathematics for Engineers and Scientists
Paperback (Unabridged)

Overview
This book can be used as either a primary text or a supplemental reference for courses in applied mathematics. Its core chapters are devoted to linear algebra, calculus, and ordinary differential equations. Additional topics include partial differential equations and approximation methods.
Each chapter features an ample selection of solved problems. These problems were chosen to illustrate not only how to solve various algebraic and differential equations but also how to interpret the solutions in order to gain insight into the behavior of the system modeled by the equation. In addition to the worked-out problems, numerous examples and exercises appear throughout the text.
Product Details
ISBN-13:  9780486479309 

Publisher:  Dover Publications 
Publication date:  02/17/2011 
Series:  Dover Books on Mathematics 
Edition description:  Unabridged 
Pages:  408 
Sales rank:  1,270,215 
Product dimensions:  7.40(w) x 9.20(h) x 0.80(d) 
About the Author
A Professor of Mathematics at Colorado State University for 30 years, Paul C. DuChateau is the author of two other books on advanced calculus and applied complex variables.
Read an Excerpt
Advanced Mathematics for Engineers and Scientists
By Paul DuChateau
Dover Publications, Inc.
Copyright © 1992 Paul DuChateau. All rights reserved.
ISBN: 9780486141596
CHAPTER 1
Systems of Linear Algebraic Equations
This chapter provides an introduction to the solution of systems of linear algebraic equations. After a brief discussion of matrix notation we present the Gaussian elimination algorithm for solving linear systems. We also show how the algorithm can be extended slightly to provide the so-called LU factorization of the coefficient matrix. This factorization is nearly equivalent to computing the matrix inverse and is an extremely effective solution approach for certain kinds of problems. The solved problems provide simple BASIC computer programs for both the Gaussian elimination algorithm and the LU decomposition. These are applied to example problems to illustrate their use. The solved problems also include examples of physical problems for which the mathematical models lead to systems of linear equations.
The presentation of the solution algorithms is rather formal, particularly with respect to explaining what happens when the algorithms fail in the case of a singular system of equations. To provide a clearer understanding of these and other matters we include a brief development of some abstract ideas from linear algebra. We introduce the four fundamental subspaces associated with a matrix A: the row and column spaces, the null space and the range of A. The relationships that exist between these subspaces and the corresponding subspaces for the transpose matrix AT provide the key to understanding the solution of systems of linear equations in the singular as well as the nonsingular case. The solved problems expand on the ideas set forth in the text. For example, Problem 1.15 gives a physical interpretation for the abstract solvability condition that must be imposed on the data vector in a singular system. Problems 1.16 through 1.20 discuss the notion of a "least squares" solution for an overdetermined system and apply this to least squares fits for experimental data.
We should perhaps conclude this introduction with the disclaimer that this chapter is meant to be only an introduction to the numerical and abstract aspects of systems of linear algebraic equations. While the Gaussian elimination algorithm does form the core of many of the more sophisticated solution algorithms for linear systems, the version provided here contains none of the enhancements that exist to take advantage of special matrix structure, nor does it contain provisions to compensate for numerical instabilities in the system. Such considerations are properly the subject of a more advanced course on numerical linear algebra. This chapter seeks only to provide the foundation on which more advanced treatments can build.
1.1 SYSTEMS OF SIMULTANEOUS LINEAR EQUATIONS
Terminology
Consider the following system of m equations in the n unknowns
a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
...
am1x1 + am2x2 + ... + amnxn = bm        (1.1)
Here a11, ..., amn denote the coefficients in the equations and the numbers b1, ..., bm are referred to as the data. In a specific example these quantities would be given numerical values and we would then be concerned with finding values for x1, ..., xn such that these equations were satisfied. An n-tuple of real values {x1, ..., xn} which satisfies each of the m equations in (1.1) is said to be a solution for the system. The collection of all n-tuples which are solutions is called the solution set for the system. For any given system, one of the following three possibilities must occur:
a. The solution set contains a single n-tuple; then the system is said to be nonsingular.
b. The solution set contains infinitely many n-tuples; then the system is said to be singular, or, more precisely, singular dependent or underdetermined.
c. The solution set is empty; in this case we say the system is singular inconsistent or overdetermined.
EXAMPLE 1.1
a. Consider the system
x1 + x2 = 4
x1 - x2 = 0
Each equation in this simple system defines a separate function whose graph is a straight line in the x1x2 plane. The lines corresponding to these equations are seen to intersect at the unique point (2, 2). Thus, the solution set for this system consists of the single 2-tuple [2, 2]. The system is nonsingular.
b. The system
x1 + x2 = 4
2x1 + 2x2 = 8
produces just a single line in the x1x2 plane (alternatively, there are two lines that coincide). The solution set for the system contains infinitely many 2-tuples corresponding to all of the points on the line. That is, every 2-tuple that is of the form [x1, 4 - x1] for any choice of x1 is in the solution set. The system is singular. More precisely, the system is singular dependent (underdetermined). The equations of the system are not independent equations.
c. The system
x1 + x2 = 4
2x1 + 2x2 = 0
produces a pair of parallel lines in the x1x2 plane. There are no points that lie on both lines and consequently the solution set for the system is empty. The system is singular. More precisely, the system is singular inconsistent (overdetermined). The equations of the system are contradictory.
Matrix Notation
VECTORS
In order to discuss systems of equations efficiently it will be convenient to have the notion of an n-vector. We shall use the notation X to denote an array of n numbers arranged in a column, and XT will denote the same array arranged in a row,
    [ x1 ]
X = [ x2 ]        XT = [ x1  x2  ...  xn ]
    [ .. ]
    [ xn ]
We refer to these respectively as column vectors and row vectors. The purpose of this distinction will become apparent.
INNER (DOT) PRODUCT
For n-vectors X and Y having entries x1, ..., xn and y1, ..., yn, respectively, we define the inner product of X and Y as the number
X · Y = x1y1 + x2y2 + ... + xnyn        (1.2)
We use any of the notations X · Y, XTY or (X, Y) to indicate the inner product of X and Y. The inner product is also called the dot product.
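The solved problems implement such computations in BASIC; as a sketch in a modern language, the inner product (1.2) can be written in a few lines of Python (the name `inner_product` is ours, not the text's):

```python
# Inner (dot) product of two n-vectors, as defined in Eq. (1.2).
def inner_product(x, y):
    assert len(x) == len(y), "X and Y must be n-vectors of the same n"
    return sum(xi * yi for xi, yi in zip(x, y))

print(inner_product([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```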
MATRICES
An m × n matrix is a rectangular array of numbers containing m rows and n columns. For example,
    [ a11  a12  ...  a1n ]            [ b11  b12  ...  b1p ]
A = [ a21  a22  ...  a2n ]        B = [ b21  b22  ...  b2p ]
    [ ...                ]            [ ...                ]
    [ am1  am2  ...  amn ]            [ bn1  bn2  ...  bnp ]
denote, respectively, an m × n matrix and an n × p matrix. Here aij denotes the entry in the ith row and jth column of the matrix A. Note that a column vector is an n × 1 matrix and a row vector is a 1 × n matrix.
MATRIX PRODUCT
For an m × n matrix A and an n × p matrix B, the product AB is defined to be the m × p matrix C whose entries Cij are equal to
Cij = ai1b1j + ai2b2j + ... + ainbnj,   i = 1, ..., m,  j = 1, ..., p        (1.3)
Note that the product AB is not defined if the number of columns of A is not equal to the number of rows of B. Note also that if A is m × n we can think of A as composed of m rows, each of which is a (row) n-vector; i.e., A is composed of rows R1T, ..., RmT. Similarly, we can think of B as consisting of p columns, C1, ..., Cp, each of which is a (column) n-vector. Then for i = 1, ..., m and j = 1, ..., p, the (i, j) entry of the product matrix AB is equal to the dot product RiTCj
(AB)ij = RiTCj        (1.4)
If A is m × n and B is n × m, then the products AB and BA are both defined but produce results which are m × m and n × n, respectively. Thus, if m is not equal to n, AB cannot equal BA. Even if m equals n, AB is not necessarily equal to BA. The matrix product is not commutative.
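To make the definition concrete, here is a minimal Python sketch of the product formula (1.3), together with a pair of 2 × 2 matrices demonstrating that AB need not equal BA (the function name `mat_mul` is ours):

```python
# Matrix product C = AB for an m x n matrix A and an n x p matrix B,
# following Eq. (1.3): C[i][j] = sum over k of A[i][k] * B[k][j].
def mat_mul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mul(A, B))  # [[2, 1], [4, 3]]
print(mat_mul(B, A))  # [[3, 4], [1, 2]] -- the product is not commutative
```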
IDENTITY MATRIX
Let I denote the n × n matrix whose entries Ijk are equal to 1 if j equals k and are equal to 0 if j is different from k. Then I is called the n × n identity matrix since for any n × n matrix A, we have IA = AI = A. Thus, I plays the role of the identity for matrix multiplication.
MATRIX NOTATION FOR SYSTEMS
The system of equations (1.1) can be expressed in matrix notation by writing AX = B; i.e.,
[ a11  a12  ...  a1n ] [ x1 ]   [ b1 ]
[ a21  a22  ...  a2n ] [ x2 ] = [ b2 ]
[ ...                ] [ .. ]   [ .. ]
[ am1  am2  ...  amn ] [ xn ]   [ bm ]        (1.5)
We refer to A as the coefficient matrix for the system and to X and B as the unknown vector and the data vector, respectively.
Gaussian Elimination
AUGMENTED MATRIX
We shall now introduce the Gaussian elimination algorithm for solving the system (1.5) in the case m = n. We begin by forming the augmented matrix for the system (1.5). This is the n × (n + 1) matrix composed of the coefficient matrix A augmented by the data vector B as follows:
[ a11  a12  ...  a1n  b1 ]
[ a21  a22  ...  a2n  b2 ]
[ ...                    ]
[ an1  an2  ...  ann  bn ]
EXAMPLE 1.2 AUGMENTED MATRIX
Consider the system of equations
x1 + 2x2 + 2x3 = 3
2x1 + 5x2 + 5x3 = 8
x1 + 4x2 + 7x3 = 4
The augmented matrix for this system is
[ 1  2  2  3 ]
[ 2  5  5  8 ]
[ 1  4  7  4 ]
Note that the augmented matrix contains all the information needed to form the original system of equations. Only extraneous symbols have been stripped away.
ROW OPERATIONS
Gaussian elimination consists of operating on the rows of the augmented matrix in a systematic way in order to bring the matrix into a form where the unknowns can be easily determined. The allowable row operations are
Eij(a)—add a times row i of the augmented matrix to row j to form a new jth row (for a ≠ 0 and i ≠ j),
Ej(a)—multiply the jth row of the augmented matrix by the nonzero constant a to form a new jth row (for a ≠ 0),
Pij—exchange rows i and j of the augmented matrix.
These operations are referred to as elementary row operations. They do not alter the solution set of the system (1.5).
Theorem 1.1
Theorem 1.1. The solution set for a system of linear equations remains invariant under elementary row operations.
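The three elementary row operations are easy to express in code. The sketch below is our own illustrative Python (with 0-based row indices rather than the 1-based indices of the text), operating on an augmented matrix stored as a list of rows:

```python
# Elementary row operations on an augmented matrix M (a list of rows).
def row_add(M, i, j, a):
    """Add a times row j to row i (a != 0, i != j)."""
    M[i] = [mi + a * mj for mi, mj in zip(M[i], M[j])]

def row_scale(M, j, a):
    """Multiply row j by the nonzero constant a."""
    assert a != 0, "a must be nonzero"
    M[j] = [a * mj for mj in M[j]]

def row_swap(M, i, j):
    """Exchange rows i and j."""
    M[i], M[j] = M[j], M[i]

M = [[1, 2, 4], [3, 4, 10]]
row_add(M, 1, 0, -3)  # zero out the first entry of the second row
print(M)  # [[1, 2, 4], [0, -2, -2]]
```

By Theorem 1.1, applying any sequence of these operations leaves the solution set of the underlying system unchanged.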
EXAMPLE 1.3 ROW REDUCTION TO UPPER TRIANGULAR FORM
Consider the system of equations from Example 1.2. The row operation E12(-2) has the following effect on the augmented matrix:
[ 1  2  2  3 ]        [ 1  2  2  3 ]
[ 2  5  5  8 ]  -->   [ 0  1  1  2 ]
[ 1  4  7  4 ]        [ 1  4  7  4 ]
We follow this with E13(-1)
[ 1  2  2  3 ]
[ 0  1  1  2 ]
[ 0  2  5  1 ]
and E23(-2)
[ 1  2  2   3 ]
[ 0  1  1   2 ]
[ 0  0  3  -3 ]
The augmented matrix has been reduced to upper triangular form. A matrix is said to be in upper triangular form when all entries below the diagonal are zero. That is, A is in upper triangular form if aij = 0 for i > j.
BACK SUBSTITUTION
The final form of the augmented matrix in Example 1.3 is associated with the following system of equations UX = B*; i.e.,
x1 + 2x2 + 2x3 = 3
x2 + x3 = 2
3x3 = -3
This system has the same solution set as the original system and since the coefficient matrix of the reduced system is upper triangular, we can solve for the unknowns by back substitution. That is,
3x3 = -3 implies x3 = -1.
Substituting this result in the second equation of the reduced system leads to
x2 + (-1) = 2 or x2 = 3.
Finally, substituting the values for x3 and x2 into the first equation yields
x1 + 2(3) + 2(-1) = 3 or x1 = -1.
GAUSSIAN ELIMINATION
We can summarize the algorithm for solving the system AX = B by Gaussian elimination as follows:
1. Form the augmented matrix [A B] for the system.
2. Reduce [A B] to an upper triangular matrix [U B*] by elementary row operations, exchanging rows where a zero pivot is encountered.
3. Solve the reduced system UX = B* by back substitution.
The solved problems provide examples of systems requiring row exchanges for the reduction.
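The algorithm can be sketched in Python as follows. The text's solved problems give BASIC programs; this translation is ours, and the use of exact rational arithmetic via `fractions.Fraction` is an implementation choice the text does not prescribe. It is applied here to the system x1 + 2x2 + 2x3 = 3, 2x1 + 5x2 + 5x3 = 8, x1 + 4x2 + 7x3 = 4:

```python
from fractions import Fraction

def gaussian_solve(A, B):
    """Solve AX = B for an n x n matrix A by Gaussian elimination with
    back substitution, exchanging rows when a zero pivot is met.
    No numerical safeguards; see the chapter's closing disclaimer."""
    n = len(A)
    # Step 1: form the augmented matrix [A B].
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(B[i])]
         for i in range(n)]
    # Step 2: reduce to upper triangular form by row operations.
    for k in range(n):
        if M[k][k] == 0:  # zero pivot: try a row exchange (Pij)
            r = next((r for r in range(k + 1, n) if M[r][k] != 0), None)
            if r is None:
                raise ValueError("singular system")
            M[k], M[r] = M[r], M[k]
        for i in range(k + 1, n):
            a = -M[i][k] / M[k][k]  # multiplier that zeros the entry M[i][k]
            M[i] = [mi + a * mk for mi, mk in zip(M[i], M[k])]
    # Step 3: back substitution on the reduced system UX = B*.
    X = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * X[j] for j in range(i + 1, n))
        X[i] = (M[i][n] - s) / M[i][i]
    return X

A = [[1, 2, 2], [2, 5, 5], [1, 4, 7]]
B = [3, 8, 4]
print([int(x) for x in gaussian_solve(A, B)])  # [-1, 3, -1]
```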
ECHELON FORM
The notion of an upper triangular matrix is not sufficiently precise for purposes of describing the possible outcomes of the Gaussian elimination algorithm. We say an upper triangular matrix is in echelon form if, in each column containing the first nonzero entry of some row, all the entries below that entry are zero.
EXAMPLE 1.4 ECHELON FORM
[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is upper triangular but is not in echelon form.
[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is in echelon form.
ROW EQUIVALENCE AND RANK
If matrix A can be transformed into matrix B by a finite number of elementary row operations, we say that A and B are row equivalent. We define the rank of a matrix A to be the number of nontrivial rows in any echelon form upper triangular matrix that is row equivalent to A. A trivial row in a matrix is one in which every entry is a zero. Now we can state
Theorem 1.2
Theorem 1.2. Let A denote an n × n matrix and consider the associated linear system AX = B.
a. If rank A = n, then the system is nonsingular and has a unique solution X for every data n-tuple B.
b. If rank A = rank [A B] < n, then the system is singular dependent; i.e., there are infinitely many solutions.
c. If rank A < rank [A B], then the system is singular inconsistent; there is no solution.
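Theorem 1.2 can be checked mechanically: reduce A and the augmented matrix to echelon form, count the nontrivial rows, and compare. The Python sketch below (our own illustration, again using exact arithmetic with `fractions.Fraction`) classifies the three 2 × 2 systems of Example 1.1:

```python
from fractions import Fraction

def rank(M):
    """Number of nontrivial rows after reduction to echelon form."""
    M = [[Fraction(v) for v in row] for row in M]
    r = 0  # next pivot row
    for c in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue  # no pivot in this column
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, len(M)):
            a = -M[i][c] / M[r][c]
            M[i] = [mi + a * mr for mi, mr in zip(M[i], M[r])]
        r += 1
    return r

def classify(A, B):
    """Apply Theorem 1.2 to the n x n system AX = B."""
    AB = [row + [b] for row, b in zip(A, B)]
    if rank(A) == len(A):
        return "nonsingular"
    if rank(A) == rank(AB):
        return "singular dependent"
    return "singular inconsistent"

print(classify([[1, 1], [1, -1]], [4, 0]))  # nonsingular
print(classify([[1, 1], [2, 2]], [4, 8]))   # singular dependent
print(classify([[1, 1], [2, 2]], [4, 0]))   # singular inconsistent
```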
(Continues...)
Excerpted from Advanced Mathematics for Engineers and Scientists by Paul DuChateau. Copyright © 1992 Paul DuChateau. Excerpted by permission of Dover Publications, Inc.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.
Table of Contents
Preface
1. Systems of Linear Algebraic Equations
2. Algebraic Eigenvalue Problems
3. Multivariable Calculus
4. Ordinary Differential Equations: Linear Initial Value Problems
5. Nonlinear Ordinary Differential Equations
6. Linear Boundary Value Problems
7. Fourier Series and Eigenfunction Expansions
8. Integral Transforms
9. Linear Problems in Partial Differential Equations
10. Linear and Nonlinear Partial Differential Equations
11. Functional Optimization and Variational Methods
12. Finite Difference Methods for Differential Equations
13. Introduction to the Finite Element Method
Index