More About This Textbook
Overview
Suitable for a first-year graduate course, this textbook unites the applications of numerical mathematics and scientific computing with the practice of chemical engineering. Written in a pedagogic style, the book describes basic linear and nonlinear algebraic systems all the way through to stochastic methods, Bayesian statistics, and parameter estimation. These subjects are developed at a level of mathematics suitable for graduate engineering study without exhaustive theoretical mathematical detail. The implementation of numerical methods in MATLAB is integrated within each chapter, and numerous examples in chemical engineering are provided, with a library of corresponding MATLAB programs. This book will provide the graduate student with essential tools required by industry and research alike. Supplementary material includes solutions to homework problems set in the text, MATLAB programs and a tutorial, lecture slides, and complicated derivations for the more advanced reader. These are available online at www.cambridge.org/9780521859714.
Meet the Author
Kenneth J. Beers has been an Assistant Professor at MIT since 2000. He has taught extensively across the engineering curriculum at both the undergraduate and graduate levels. This book is the result of the successful course the author devised at MIT on numerical methods applied to chemical engineering.
Read an Excerpt
Cambridge University Press
9780521859714 - Numerical Methods for Chemical Engineering: Applications in MATLAB® - by Kenneth J. Beers
Excerpt
1 Linear algebra
This chapter discusses the solution of sets of linear algebraic equations and defines basic vector/matrix operations. The focus is upon elimination methods such as Gaussian elimination, and the related LU and Cholesky factorizations. Following a discussion of these methods, the existence and uniqueness of solutions are considered. Example applications include the modeling of a separation system and the solution of a fluid mechanics boundary value problem. The latter example introduces the need for sparse-matrix methods and the computational advantages of banded matrices. Because linear algebraic systems have, under well-defined conditions, a unique solution, they serve as fundamental building blocks in more complex algorithms. Thus, linear systems are treated here at a high level of detail, as they will be used often throughout the remainder of the text.
Linear systems of algebraic equations
We wish to solve a system of N simultaneous linear algebraic equations for the N unknowns x_{1}, x_{2}, . . . , x_{N}, that are expressed in the general form

a_{i1}x_{1} + a_{i2}x_{2} + · · · + a_{iN}x_{N} = b_{i},   i = 1, 2, . . . , N

a_{ij} is the constant coefficient (assumed real) that multiplies the unknown x_{j} in equation i. b_{i} is the constant "right-hand side" coefficient for equation i, also assumed real. As a particular example, consider the system

x_{1} + x_{2} + x_{3} = 4
2x_{1} + x_{2} + 3x_{3} = 7     (1.2)
3x_{1} + x_{2} + 6x_{3} = 2

for which a_{11} = 1, a_{12} = 1, a_{13} = 1, b_{1} = 4; a_{21} = 2, a_{22} = 1, a_{23} = 3, b_{2} = 7; a_{31} = 3, a_{32} = 1, a_{33} = 6, b_{3} = 2.
It is common to write linear systems in matrix/vector form as

Ax = b

where

A = [ a_{11} a_{12} · · · a_{1N} ; a_{21} a_{22} · · · a_{2N} ; . . . ; a_{N1} a_{N2} · · · a_{NN} ]   x = [ x_{1} ; x_{2} ; . . . ; x_{N} ]   b = [ b_{1} ; b_{2} ; . . . ; b_{N} ]

Row i of A contains the values a_{i1}, a_{i2}, . . . , a_{iN} that are the coefficients multiplying each unknown x_{1}, x_{2}, . . . , x_{N} in equation i. Column j contains the coefficients a_{1j}, a_{2j}, . . . , a_{Nj} that multiply x_{j} in each equation i = 1, 2, . . . , N. Thus, we have the following associations:

rows of A ↔ equations   columns of A ↔ unknowns
We often write Ax = b explicitly as

| a_{11} a_{12} · · · a_{1N} | | x_{1} |   | b_{1} |
| a_{21} a_{22} · · · a_{2N} | | x_{2} | = | b_{2} |
|   .      .            .    | |   .   |   |   .   |
| a_{N1} a_{N2} · · · a_{NN} | | x_{N} |   | b_{N} |

For the example system (1.2),

    | 1 1 1 |       | 4 |
A = | 2 1 3 |   b = | 7 |
    | 3 1 6 |       | 2 |
In MATLAB we solve Ax = b with the single command, x = A\b. For the example (1.2), we compute the solution with the code
A = [1 1 1; 2 1 3; 3 1 6];
b = [4; 7; 2];
x = A\b,
x =
19.0000
-7.0000
-8.0000
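For readers working outside MATLAB, the same computation can be sketched with NumPy; this is an illustrative stand-in, not part of the text, and `numpy.linalg.solve` plays the role of the backslash operator for square dense systems.

```python
# Illustrative NumPy equivalent of MATLAB's x = A\b for example (1.2).
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, 3.0],
              [3.0, 1.0, 6.0]])
b = np.array([4.0, 7.0, 2.0])

# numpy.linalg.solve uses an LU factorization with partial pivoting,
# much as MATLAB's backslash does for square dense systems.
x = np.linalg.solve(A, b)
print(x)  # → [19. -7. -8.]
```

Like the backslash operator, `solve` hides the elimination machinery discussed in the remainder of this chapter.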
Thus, we are tempted to assume that, as a practical matter, we need to know little about how to solve a linear system, as someone else has figured it out and provided us with this handy linear solver. Actually, we shall need to understand the fundamental properties of linear systems in depth to be able to master methods for solving more complex problems, such as sets of nonlinear algebraic equations, ordinary and partial differential equations, etc. Also, as we shall see, this solver fails for certain common classes of very large systems of equations, and we need to know enough about linear algebra to diagnose such situations and to propose other methods that do work in such instances. This chapter therefore contains not only an explanation of how the MATLAB solver is implemented, but also a detailed, fundamental discussion of the properties of linear systems.
Our discussion is intended only to provide a foundation in linear algebra for the practice of numerical computing, and is continued in Chapter 3 with a discussion of matrix eigenvalue analysis. For a broader, more detailed, study of linear algebra, consult Strang (2003) or Golub & van Loan (1996).
Review of scalar, vector, and matrix operations
As we use vector notation in our discussion of linear systems, a basic review of the concepts of vectors and matrices is necessary.
Scalars, real and complex
Most often in basic mathematics, we work with scalars, i.e., single-valued numbers. These may be real, such as 3, 1.4, 5/7, 3.14159 . . . , or they may be complex, such as 1 + 2i and 1/2 − i. The set of all real scalars is denoted ℜ. The set of all complex scalars we call C. For a complex number z ∈ C, we write z = a + ib, where a, b ∈ ℜ and

i = √(−1)
The complex conjugate, z̄ = z^{*}, of z = a + ib is

z̄ = a − ib

Note that the product z̄z is always real and nonnegative,

z̄z = (a − ib)(a + ib) = a^{2} + b^{2} ≥ 0

so that we may define the real-valued, nonnegative modulus of z, |z|, as

|z| = (z̄z)^{1/2} = (a^{2} + b^{2})^{1/2}

Often, we write complex numbers in polar notation,

z = |z|(cos θ + i sin θ)

Using the important Euler formula, a proof of which is found in the supplemental material at the website that accompanies this book,

e^{iθ} = cos θ + i sin θ
Figure 1.1 Physical interpretation of a 3D vector.
we can write z as

z = |z|e^{iθ}
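These identities are easy to check numerically; the following sketch (not from the text) uses Python's built-in complex type and the standard-library cmath module.

```python
# Checking the complex-scalar identities: conjugate, modulus, polar form.
import cmath

z = 1 + 2j
z_conj = z.conjugate()      # z-bar = a - ib
modulus = abs(z)            # |z| = (a^2 + b^2)^(1/2)
r, theta = cmath.polar(z)   # polar form: z = r e^{i theta}

# conj(z)*z is real and nonnegative
assert (z_conj * z).imag == 0.0 and (z_conj * z).real >= 0.0
# Euler formula recombines the polar form into z
assert abs(r * cmath.exp(1j * theta) - z) < 1e-12
print(modulus)  # → 2.23606797749979  (= sqrt(5))
```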
Vector notation and operations
We write a three-dimensional (3D) vector v (Figure 1.1) as

v = [v_{1}, v_{2}, v_{3}]

v is real if v_{1}, v_{2}, v_{3} ∈ ℜ; we then say v ∈ ℜ^{3}. We can easily visualize this vector in 3D space, defining the three coordinate basis vectors in the 1(x), 2(y), and 3(z) directions as

e^{[1]} = [1, 0, 0]   e^{[2]} = [0, 1, 0]   e^{[3]} = [0, 0, 1]

to write v ∈ ℜ^{3} as

v = v_{1}e^{[1]} + v_{2}e^{[2]} + v_{3}e^{[3]}
We extend this notation to define ℜ^{N}, the set of N-dimensional real vectors,

v = [ v_{1} ; v_{2} ; . . . ; v_{N} ]

where v_{j} ∈ ℜ for j = 1, 2, . . . , N. By writing v in this manner, we define a column vector; however, v can also be written as a row vector,

v = [ v_{1} v_{2} . . . v_{N} ]
The difference between column and row vectors only becomes significant when we start combining them in equations with matrices.
We write v ∈ ℜ^{N} as an expansion in coordinate basis vectors as

v = v_{1}e^{[1]} + v_{2}e^{[2]} + · · · + v_{N}e^{[N]} = Σ_{j=1}^{N} v_{j}e^{[j]}

where the components of e^{[j]} are Kronecker deltas δ_{jk},

(e^{[j]})_{k} = δ_{jk} = { 1, if j = k; 0, if j ≠ k }
Addition of two real vectors v ∈ ℜ^{N}, w ∈ ℜ^{N} is straightforward,

(v + w)_{j} = v_{j} + w_{j}

as is multiplication of a vector v ∈ ℜ^{N} by a real scalar c ∈ ℜ,

(cv)_{j} = cv_{j}
For all u, v, w ∈ ℜ^{N} and all c_{1}, c_{2} ∈ ℜ,

v + w = w + v   u + (v + w) = (u + v) + w
c_{1}(c_{2}v) = (c_{1}c_{2})v   c_{1}(v + w) = c_{1}v + c_{1}w   (c_{1} + c_{2})v = c_{1}v + c_{2}v
v + 0 = v

where the null vector 0 ∈ ℜ^{N} is

0 = [0, 0, . . . , 0]
We further add to the list of operations associated with the vectors v, w ∈ ℜ^{N} the dot (inner, scalar) product,

v · w = Σ_{j=1}^{N} v_{j}w_{j}
For example, for the two vectors v = [1, 2, 3] and w = [4, 5, 6],

v · w = (1)(4) + (2)(5) + (3)(6) = 32
For 3D vectors, the dot product is proportional to the product of the lengths and the cosine of the angle between the two vectors,

v · w = |v||w| cos θ

where the length of v is

|v| = (v · v)^{1/2} = (v_{1}^{2} + v_{2}^{2} + v_{3}^{2})^{1/2}

Therefore, when two vectors are parallel, the magnitude of their dot product is maximal and equals the product of their lengths, and when two vectors are perpendicular, their dot product is zero. These ideas carry over completely into N dimensions. The length of a vector v ∈ ℜ^{N} is

|v| = (v · v)^{1/2} = ( Σ_{j=1}^{N} v_{j}^{2} )^{1/2}
If v · w = 0, v and w are said to be orthogonal, the extension of the adjective "perpendicular" from ℜ^{3} to ℜ^{N}. If v · w = 0 and |v| = |w| = 1, i.e., both vectors are normalized to unit length, v and w are said to be orthonormal.
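As a small numerical illustration (NumPy, not the text's MATLAB), the vectors below are orthogonal, and dividing each by its length produces an orthonormal pair:

```python
import numpy as np

v = np.array([1.0, 0.0, 1.0])
w = np.array([1.0, 0.0, -1.0])

# v . w = |v||w| cos(theta); here the vectors are perpendicular
assert np.dot(v, w) == 0.0

# normalizing each vector to unit length makes the pair orthonormal
v_hat = v / np.linalg.norm(v)
w_hat = w / np.linalg.norm(w)
assert abs(np.linalg.norm(v_hat) - 1.0) < 1e-12
assert abs(np.linalg.norm(w_hat) - 1.0) < 1e-12
```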
The formula for the length |v| of a vector v ∈ ℜ^{N} satisfies the more general properties of a norm ||v|| of v ∈ ℜ^{N}. A norm ||v|| is a rule that assigns a real scalar, ||v|| ∈ ℜ, to each vector v ∈ ℜ^{N} such that for every v, w ∈ ℜ^{N}, and for every c ∈ ℜ, we have

||v|| ≥ 0, and ||v|| = 0 only if v = 0
||cv|| = |c| ||v||
||v + w|| ≤ ||v|| + ||w||
Each norm also provides an accompanying metric, a measure of how different two vectors are,

d(v, w) = ||v − w||
In addition to the length, many other possible definitions of norm exist. The p-norm, ||v||_{p}, of v ∈ ℜ^{N} is

||v||_{p} = ( Σ_{j=1}^{N} |v_{j}|^{p} )^{1/p}    (1.35)
Table 1.1 p-norm values for the 3D vector (1, −2, 3)

p      ||v||_{p}
1      6
2      3.742
3      3.302
4      3.146
∞      3

The length of a vector is thus also the 2-norm. For v = [1 −2 3], the values of the p-norm, computed from (1.35), are presented in Table 1.1.
We define the infinity norm as the limit of ||v||_{p} as p → ∞, which merely extracts from v the largest magnitude of any component,

||v||_{∞} = lim_{p→∞} ||v||_{p} = max_{j=1,...,N} |v_{j}|

For v = [1 −2 3], ||v||_{∞} = 3.
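The p-norm values in Table 1.1 can be reproduced directly from (1.35); the following NumPy sketch (not from the text) does so with `numpy.linalg.norm`, which accepts the order p as its second argument.

```python
import numpy as np

v = np.array([1.0, -2.0, 3.0])

# p-norms from (1.35): p = 1 gives 6, p = 2 gives sqrt(14) ≈ 3.742, etc.
for p in (1, 2, 3, 4):
    print(p, np.linalg.norm(v, p))

# infinity norm: the largest magnitude of any component
print(np.linalg.norm(v, np.inf))  # → 3.0
```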
Like scalars, vectors can be complex. We define the set of complex N-dimensional vectors as C^{N}, and write each component of v ∈ C^{N} as

v_{j} = a_{j} + ib_{j},   a_{j}, b_{j} ∈ ℜ

The complex conjugate of v ∈ C^{N}, written as v̄ or v^{∗}, is

v̄ = [ v̄_{1}, v̄_{2}, . . . , v̄_{N} ]

For complex vectors v, w ∈ C^{N}, to form the dot product v · w, we take the complex conjugates of the first vector's components,

v · w = Σ_{j=1}^{N} v̄_{j}w_{j}

This ensures that the length of any v ∈ C^{N} is always real and nonnegative,

|v|^{2} = v · v = Σ_{j=1}^{N} v̄_{j}v_{j} = Σ_{j=1}^{N} (a_{j}^{2} + b_{j}^{2}) ≥ 0

For v, w ∈ C^{N}, the order of the arguments is significant,

v · w = (w · v)^{∗}
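NumPy's `vdot` follows exactly this convention, conjugating its first argument; the sketch below (illustrative, not from the text) checks both the nonnegative length and the order dependence.

```python
import numpy as np

v = np.array([1 + 1j, 2 - 1j])
w = np.array([3 + 0j, 1j])

# np.vdot conjugates its FIRST argument, matching the convention above;
# so v . v is real and nonnegative
assert np.vdot(v, v).imag == 0.0 and np.vdot(v, v).real >= 0.0

# the order of the arguments matters: v . w = conj(w . v)
assert np.vdot(v, w) == np.conj(np.vdot(w, v))
```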
Matrix dimension
For a linear system Ax = b,
to have a unique solution, there must be as many equations as unknowns, and so typically A will have an equal number N of columns and rows and thus be a square matrix. A matrix is said to be of dimension M × N if it has M rows and N columns. We now consider some simple matrix operations.
Multiplication of an M × N matrix A by a scalar c

(cA)_{ij} = ca_{ij}

Addition of an M × N matrix A with an equal-sized M × N matrix B

(A + B)_{ij} = a_{ij} + b_{ij}
Note that A + B = B + A and that two matrices can be added only if both the number of rows and the number of columns are equal for each matrix. Also, c(A + B) = cA + cB.
Multiplication of a square N × N matrix A with an N-dimensional vector v

This operation must be defined as follows if we are to have equivalence between the coefficient and matrix/vector representations of a linear system:

Σ_{j=1}^{N} a_{ij}x_{j} = b_{i}, i = 1, 2, . . . , N   ⇔   Ax = b

Av is also an N-dimensional vector, whose jth component is

(Av)_{j} = Σ_{k=1}^{N} a_{jk}v_{k}

We compute (Av)_{j} by summing a_{jk}v_{k} along row j of A and down the vector.
Multiplication of an M × N matrix A with an N-dimensional vector v

From the rule for forming Av, we see that the number of columns of A must equal the dimension of v; however, we also can define Av when M ≠ N,

(Av)_{j} = Σ_{k=1}^{N} a_{jk}v_{k},   j = 1, 2, . . . , M
If v ∈ ℜ^{N}, for an M × N matrix A, Av ∈ ℜ^{M}. Consider the following examples:
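The book's printed examples do not survive extraction here; as an illustrative stand-in (NumPy, not the text's MATLAB), a 2 × 3 matrix maps a 3-vector to a 2-vector:

```python
import numpy as np

# A 2 x 3 matrix (M = 2, N = 3) times a 3-vector gives a 2-vector
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
v = np.array([1.0, 0.0, -1.0])

Av = A @ v      # (Av)_j = sum_k a_{jk} v_k for j = 1, ..., M
print(Av)       # → [-2. -2.]
assert Av.shape == (2,)

# linearity: A(cv) = c(Av) and A(v + w) = Av + Aw
w = np.array([1.0, 1.0, 1.0])
assert np.allclose(A @ (2 * v), 2 * (A @ v))
assert np.allclose(A @ (v + w), A @ v + A @ w)
```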
Note also that A(cv) = cAv and A(v + w) = Av + Aw.
Matrix transposition
We define for an M × N matrix A the transpose A^{T} to be the N × M matrix

(A^{T})_{ij} = a_{ji}
The transpose operation is essentially a mirror reflection across the principal diagonal a_{11}, a_{22}, a_{33}, . . .. Consider the following examples:
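The printed examples are again missing; a brief NumPy illustration (not from the text) of the mirror-reflection property and of symmetry:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # 2 x 3

# the transpose is 3 x 2, with (A^T)_{ij} = a_{ji}
assert A.T.shape == (3, 2)
assert A.T[2, 0] == A[0, 2]

# a symmetric matrix equals its own transpose
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])
assert np.array_equal(S, S.T)
```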
If a matrix is equal to its transpose, A = A^{T}, it is said to be symmetric. Then,

a_{ij} = a_{ji},   i, j = 1, 2, . . . , N
Complexvalued matrices
Here we have defined operations for real matrices; however, matrices may also be complex-valued, with each element a_{ij} ∈ C.
For the moment, we are concerned with the properties of real matrices, as applied to solving linear systems in which the coefficients are real.
Vectors as matrices
Finally, we note that the matrix operations above can be extended to vectors by considering a vector v ∈ ℜ^{N} to be an N × 1 matrix if in column form and a 1 × N matrix if in row form. Thus, for v, w ∈ ℜ^{N}, expressing vectors by default as column vectors, we write the dot product as

v · w = v^{T}w
The notation v^{T}w for the dot product v · w is used extensively in this text.
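Treating column vectors as N × 1 matrices, v^{T}w is a 1 × 1 matrix whose single entry is the dot product; a NumPy sketch (illustrative, not from the text):

```python
import numpy as np

v = np.array([[1.0], [2.0], [3.0]])   # N x 1 column vector
w = np.array([[4.0], [5.0], [6.0]])

# v^T w is a 1 x 1 matrix whose single entry equals v . w
vTw = v.T @ w
print(vTw)  # → [[32.]]
assert vTw.shape == (1, 1)
assert vTw[0, 0] == np.dot(v.ravel(), w.ravel())
```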
Elimination methods for solving linear systems
With these basic definitions in hand, we now begin to consider the solution of the linear system Ax = b, in which x, b ∈ ℜ^{N} and A is an N × N real matrix. We consider here elimination methods in which we convert the linear system into an equivalent one that is easier to solve. These methods are straightforward to implement and work generally for any linear system that has a unique solution; however, they can be quite costly (perhaps prohibitively so) for large systems. Later, we consider iterative methods that are more effective for certain classes of large systems.
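To make the idea of "converting the system into an equivalent one that is easier to solve" concrete, here is a minimal sketch of Gaussian elimination with partial pivoting followed by back substitution. This is not the text's implementation, only an assumed naive version; the function name `gauss_solve` is my own.

```python
# Minimal sketch of Gaussian elimination with partial pivoting.
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by elimination; A is N x N, b has length N."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    N = len(b)
    # forward elimination: zero the entries below each pivot
    for k in range(N - 1):
        p = k + np.argmax(np.abs(A[k:, k]))    # partial pivoting
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, N):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # back substitution on the resulting upper-triangular system
    x = np.zeros(N)
    for i in range(N - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[1.0, 1.0, 1.0], [2.0, 1.0, 3.0], [3.0, 1.0, 6.0]])
b = np.array([4.0, 7.0, 2.0])
assert np.allclose(gauss_solve(A, b), [19.0, -7.0, -8.0])  # example (1.2)
```

The cost of this elimination grows as N^{3}, which is why later chapters turn to iterative methods for large systems.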
© Cambridge University Press
Table of Contents
1. Linear algebra; 2. Nonlinear algebraic systems; 3. Matrix eigenvalue analysis; 4. Initial value problems; 5. Numerical optimization; 6. Boundary value problems; 7. Probability theory and stochastic simulation; 8. Bayesian statistics and parameter estimation; 9. Fourier analysis.