Overview
An Instructor's Manual presenting detailed solutions to all the problems in the book is available from the Wiley editorial department.
Editorial Reviews
From the Publisher
"Zarowski (Univ. of Alberta) offers this book as a general, advanced undergraduate work in numerical analysis, containing all of the usual topics." (CHOICE, October 2004)
Meet the Author
CHRISTOPHER J. ZAROWSKI, PhD, is an associate professor in the Department of Electrical and Computer Engineering at the University of Alberta, Canada. He has authored more than fifty journal articles and conference papers and is a senior member of the IEEE.
Read an Excerpt
An Introduction to Numerical Analysis for Electrical and Computer Engineers
By Christopher J. Zarowski
John Wiley & Sons
Copyright © 2004 John Wiley & Sons, Inc. All rights reserved.
ISBN: 0471467375
Chapter One
Functional Analysis Ideas
1.1 INTRODUCTION
Many engineering analysis and design problems are far too complex to be solved without the aid of computers. However, the use of computers in problem solving has made it increasingly necessary for users to be highly skilled in (practical) mathematical analysis. There are a number of reasons for this. A few are as follows.
For one thing, computers represent data to finite precision. Irrational numbers such as π or √2 do not have an exact representation on a digital computer (with the possible exception of methods based on symbolic computing). Additionally, when arithmetic is performed, errors occur as a result of rounding (e.g., the truncation of the product of two n-bit numbers, which might be 2n bits long, back down to n bits). Numbers have a limited dynamic range; we might get overflow or underflow in a computation. These are examples of finite-precision arithmetic effects. Beyond this, computational methods frequently have sources of error independent of these. For example, an infinite series must be truncated if it is to be evaluated on a computer. The truncation error is something "additional" to errors from finite-precision arithmetic effects. In all cases, the sources (and sizes) of error in a computation must be known and understood in order to make sensible claims about the accuracy of a computer-generated solution to a problem.
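As an illustrative aside (not part of the original text), both kinds of error just described, finite-precision rounding and series truncation, can be demonstrated with Python's 64-bit floating-point arithmetic:

```python
import math

# Finite precision: sqrt(2) is stored inexactly, so squaring the stored
# value does not recover 2 exactly.
residual = math.sqrt(2) ** 2 - 2
print(residual != 0)  # True: a small rounding error remains

# Truncation error: summing only N terms of e = sum_{k>=0} 1/k! leaves
# an error that exists independently of any rounding effects.
N = 10
partial = sum(1.0 / math.factorial(k) for k in range(N))
print(abs(partial - math.e))  # small but nonzero truncation error
```

Both effects arise in any fixed-precision computation, whatever the language.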
Many methods are "iterative." Accuracy of the result depends on how many iterations are performed. It is possible that a given method might be very slow, requiring many iterations before achieving acceptable accuracy. This could involve much computer runtime. The obvious solution of using a faster computer is usually unacceptable. A better approach is to use mathematical analysis to understand why a method is slow, and so to devise methods of speeding it up. Thus, an important feature of analysis applied to computational methods is that of assessing how much in the way of computing resources is needed by a given method. A given computational method will make demands on computer memory, operations count (the number of arithmetic operations, function evaluations, data transfers, etc.), number of bits in a computer word, and so on.
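The point about slow methods can be made concrete with a small Python sketch (ours, with illustrative method choices, not the book's): two fixed-point iterations that both converge to √2, run to the same tolerance. One converges linearly, the other (Newton's step) quadratically, and the iteration counts differ dramatically.

```python
import math

def iterate(step, x0=1.0, tol=1e-12, max_iter=10_000):
    """Run x_{k+1} = step(x_k) until |x^2 - 2| < tol; return (x, k)."""
    x = x0
    for k in range(max_iter):
        if abs(x * x - 2.0) < tol:
            return x, k
        x = step(x)
    return x, max_iter

# A slow, linearly convergent relaxation step...
x_slow, n_slow = iterate(lambda x: x - 0.1 * (x * x - 2.0))
# ...versus Newton's step, which converges quadratically.
x_fast, n_fast = iterate(lambda x: 0.5 * (x + 2.0 / x))

print(n_slow, n_fast)  # the relaxation needs far more iterations
```

Analysis of convergence rates (taken up later in the book) explains this gap without any need for a faster computer.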
A given problem almost always has many possible alternative solutions. Other than accuracy and computer resource issues, ease of implementation is also relevant. This is a human labor issue. Some methods may be easier to implement on a given set of computing resources than others. This would have an impact on software/hardware development time, and hence on system cost. Again, mathematical analysis is useful in deciding on the relative ease of implementation of competing solution methods.
The subject of numerical computing is truly vast. Methods are required to handle an immense range of problems, such as solution of differential equations (ordinary or partial), integration, solution of equations and systems of equations (linear or nonlinear), approximation of functions, and optimization. These problem types appear to be radically different from each other. In some sense the differences between them are real, but there are ways to achieve some unity of approach in understanding them.
The branch of mathematics that (perhaps) gives the greatest amount of unity is sometimes called functional analysis. We shall employ ideas from this subject throughout. However, our usage of these ideas is not truly rigorous; for example, we completely avoid topology and measure theory. We therefore tend to follow simplified treatments of the subject such as that of Kreyszig, and then only those ideas that are immediately relevant to us. The reader is assumed to be very comfortable with elementary linear algebra and calculus. The reader must also be comfortable with complex number arithmetic (see Appendix 1.A now for a review if necessary). Some knowledge of electric circuit analysis is presumed since this will provide a source of application examples later. (But application examples will also be drawn from other sources.) Some knowledge of ordinary differential equations is also assumed.
It is worth noting that an understanding of functional analysis is a tremendous aid to understanding other subjects such as quantum physics, probability theory and random processes, digital communications system analysis and design, digital control systems analysis and design, digital signal processing, fuzzy systems, neural networks, computer hardware design, and optimal design of systems. Many of the ideas presented in this book are also intended to support these subjects.
1.2 SOME SETS
Variables in an engineering problem often take on values from sets of numbers. In the present setting, the sets of greatest interest to us are (1) the set of integers Z = {..., −3, −2, −1, 0, 1, 2, 3, ...}, (2) the set of real numbers R, and (3) the set of complex numbers C = {x + jy | j = √(−1), x, y ∈ R}. The set of nonnegative integers is Z^+ = {0, 1, 2, 3, ...} (so Z^+ ⊂ Z). Similarly, the set of nonnegative real numbers is R^+ = {x ∈ R | x ≥ 0}. Other kinds of sets of numbers will be introduced if and when they are needed.
If A and B are two sets, their Cartesian product is denoted by A × B = {(a, b) | a ∈ A, b ∈ B}. The Cartesian product of n sets A_0, A_1, ..., A_{n−1} is A_0 × A_1 × ··· × A_{n−1} = {(a_0, a_1, ..., a_{n−1}) | a_k ∈ A_k}.
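As a quick illustration (ours, not the book's), the Python standard library's itertools.product enumerates exactly this set of ordered pairs:

```python
from itertools import product

# Cartesian product A x B = {(a, b) | a in A, b in B}
A = {0, 1}
B = {"a", "b"}
A_x_B = set(product(A, B))

print(sorted(A_x_B))  # [(0, 'a'), (0, 'b'), (1, 'a'), (1, 'b')]
print(len(A_x_B))     # 4 = |A| * |B|
```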
Ideas from matrix/linear algebra are of great importance. We are therefore also interested in sets of vectors. Thus, R^n shall denote the set of n-element vectors with real-valued components, and similarly, C^n shall denote the set of n-element vectors with complex-valued components. By default, we assume any vector x to be a column vector:
(1.1) x = [x_0 x_1 ··· x_{n−1}]^T
Naturally, row vectors are obtained by transposition. We will generally avoid using bars over or under symbols to denote vectors. Whether a quantity is a vector will be clear from the context of the discussion. However, bars will be used to denote vectors when this cannot be easily avoided. The indexing of vector elements x_k will often begin with 0 as indicated in (1.1). Naturally, matrices are also important. The set R^{n×m} denotes the set of matrices with n rows and m columns whose elements are real-valued. The notation C^{n×m} should now possess an obvious meaning. Matrices will be denoted by uppercase symbols, again without bars. If A is an n × m matrix, then
(1.2) A = [a_{p,q}]_{p=0,...,n−1; q=0,...,m−1}
Thus, the element in row p and column q of A is denoted a_{p,q}. Indexing of rows and columns again will typically begin at 0. The subscripts on the right bracket "]" in (1.2) will often be omitted in the future. We may also write a_{pq} instead of a_{p,q} where no danger of confusion arises.
The elements of any vector may be regarded as the elements of a sequence of finite length. However, we are also very interested in sequences of infinite length. An infinite sequence may be denoted by x = (x_k) = (x_0, x_1, x_2, ...), for which x_k could be either real-valued or complex-valued. It is possible for sequences to be doubly infinite, for instance, x = (x_k) = (..., x_{−2}, x_{−1}, x_0, x_1, x_2, ...).
Relationships between variables are expressed as mathematical functions, that is, mappings between sets. The notation f: A → B signifies that function f associates an element of set A with an element of set B. For example, f: R → R represents a function defined on the real-number line, and this function is also real-valued; that is, it maps "points" in R to "points" in R. We are familiar with the idea of "plotting" such a function on the xy plane if y = f(x) (i.e., x, y ∈ R). It is important to note that we may regard sequences as functions that are defined on either the set Z (the case of doubly infinite sequences) or the set Z^+ (the case of singly infinite sequences). To be more specific, if, for example, k ∈ Z^+, then this number maps to some number x_k that is either real-valued or complex-valued. Since vectors are associated with sequences of finite length, they, too, may be regarded as functions, but defined on a finite subset of the integers. From (1.1) this subset might be denoted by Z_n = {0, 1, 2, ..., n − 2, n − 1}.
Sets of functions are important. This is because in engineering we are often interested in mappings between sets of functions. For example, in electric circuits voltage and current waveforms (i.e., functions of time) are input to a circuit via voltage and current sources. Voltage drops across circuit elements, or currents through circuit elements, are output functions of time. Thus, any circuit maps functions from an input set to functions from some output set. Digital signal processing systems do the same thing, except that here the functions are sequences. For example, a simple digital signal processing system might accept as input the sequence (x_n), and produce as output the sequence (y_n) according to
(1.3) y_n = (x_n + x_{n+1})/2
for which n ∈ Z^+.
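Reading (1.3) as the two-point average y_n = (x_n + x_{n+1})/2, a minimal Python sketch of this system (function name ours, applied to a finite-length input for illustration) is:

```python
def average_filter(x):
    """Map an input sequence (x_n) to y_n = (x_n + x_{n+1}) / 2.

    For a finite-length input list, the output has one fewer element,
    since the last sample has no successor to average with.
    """
    return [(x[n] + x[n + 1]) / 2 for n in range(len(x) - 1)]

print(average_filter([1.0, 3.0, 5.0, 7.0]))  # [2.0, 4.0, 6.0]
```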
Some specific examples of sets of functions are as follows, and more will be seen later. The set of real-valued functions defined on the interval [a, b] ⊂ R that are n times continuously differentiable may be denoted by C^n[a, b]. This means that all derivatives up to and including order n exist and are continuous. If n = 0 we often just write C[a, b], which is the set of continuous functions on the interval [a, b]. We remark that the notation [a, b] implies inclusion of the endpoints of the interval. Thus, (a, b) implies that the endpoints a and b are not included [i.e., if x ∈ (a, b), then a < x < b].
A polynomial in the indeterminate x of degree n is
(1.4) p_n(x) = p_{n,0} + p_{n,1}x + ··· + p_{n,n}x^n = Σ_{k=0}^{n} p_{n,k} x^k
Unless otherwise stated, we will always assume p_{n,k} ∈ R. The indeterminate x is often considered to be either a real number or a complex number. But in some circumstances the indeterminate x is merely regarded as a "placeholder," which means that x is not supposed to take on a value. In a situation like this the polynomial coefficients may also be regarded as elements of a vector (e.g., p_n = [p_{n,0} p_{n,1} ··· p_{n,n}]^T). This happens in digital signal processing when we wish to convolve sequences of finite length, because the multiplication of polynomials is mathematically equivalent to the operation of sequence convolution. We will denote the set of all polynomials of degree n as P^n. If x is to be from the interval [a, b] ⊂ R, then the set of polynomials of degree n on [a, b] is denoted by P^n[a, b]. If m < n we shall usually assume P^m[a, b] ⊂ P^n[a, b].
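The equivalence between polynomial multiplication and sequence convolution can be sketched in a few lines of Python (code ours, not the book's): multiplying coefficient vectors is exactly the convolution sum r_k = Σ_i p_i q_{k−i}.

```python
def poly_mult(p, q):
    """Multiply two polynomials given as coefficient lists.

    Index k of each list holds the coefficient of x^k; the result is
    the convolution of the two coefficient sequences.
    """
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj  # the x^{i+j} term accumulates pi*qj
    return r

# (1 + x)(1 + 2x + x^2) = 1 + 3x + 3x^2 + x^3
print(poly_mult([1.0, 1.0], [1.0, 2.0, 1.0]))  # [1.0, 3.0, 3.0, 1.0]
```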
1.3 SOME SPECIAL MAPPINGS: METRICS, NORMS, AND INNER PRODUCTS
Sets of objects (vectors, sequences, polynomials, functions, etc.) often have certain special mappings defined on them that turn these sets into what are commonly called function spaces. Loosely speaking, functional analysis is about the properties of function spaces. Generally speaking, numerical computation problems are best handled by treating them in association with suitable mappings on well-chosen function spaces. For our purposes, the three most important special types of mappings are (1) metrics, (2) norms, and (3) inner products. You are likely already familiar with special cases of these very general ideas.
The vector dot product is an example of an inner product on a vector space, while the Euclidean norm (i.e., the square root of the sum of the squares of the elements in a realvalued vector) is a norm on a vector space. The Euclidean distance between two vectors (given by the Euclidean norm of the difference between the two vectors) is a metric on a vector space. Again, loosely speaking, metrics give meaning to the concept of "distance" between points in a function space, norms give a meaning to the concept of the "size" of a vector, and inner products give meaning to the concept of "direction" in a vector space.
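These three familiar special cases can be sketched for R^n in plain Python (function names ours, not the book's), showing how the norm is induced by the inner product and the metric by the norm:

```python
import math

def inner(x, y):
    """Dot product <x, y> on R^n: an inner product."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Euclidean norm ||x|| = sqrt(<x, x>): induced by the inner product."""
    return math.sqrt(inner(x, x))

def dist(x, y):
    """Euclidean metric d(x, y) = ||x - y||: induced by the norm."""
    return norm([a - b for a, b in zip(x, y)])

x, y = [3.0, 4.0], [0.0, 0.0]
print(inner(x, x), norm(x), dist(x, y))  # 25.0 5.0 5.0
```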
In Section 1.1 we expressed interest in the sizes of errors, and so naturally the concept of a norm will be of interest. Later we shall see that inner products will prove to be useful in devising means of overcoming problems due to certain sources of error in a computation. In this section we shall consider various examples of function spaces, some of which we will work with later on in the analysis of certain computational problems. We shall see that there are many different kinds of metric, norm, and inner product. Each kind has its own particular advantages and disadvantages as will be discovered as we progress through the book.
Sometimes a quantity cannot be computed exactly. In this case we may try to estimate bounds on the size of the quantity. For example, finding the exact error in the truncation of a series may be impossible, but putting a bound on the error might be relatively easy. In this respect the concepts of supremum and infimum can be important. These are defined as follows.
Suppose we have E ⊂ R.
Continues...
Table of Contents
Preface.
1 Functional Analysis Ideas.
1.1 Introduction.
1.2 Some Sets.
1.3 Some Special Mappings: Metrics, Norms, and Inner Products.
1.3.1 Metrics and Metric Spaces.
1.3.2 Norms and Normed Spaces.
1.3.3 Inner Products and Inner Product Spaces.
1.4 The Discrete Fourier Series (DFS).
Appendix 1.A Complex Arithmetic.
Appendix 1.B Elementary Logic.
References.
Problems.
2 Number Representations.
2.1 Introduction.
2.2 Fixed-Point Representations.
2.3 Floating-Point Representations.
2.4 Rounding Effects in Dot Product Computation.
2.5 Machine Epsilon.
Appendix 2.A Review of Binary Number Codes.
References.
Problems.
3 Sequences and Series.
3.1 Introduction.
3.2 Cauchy Sequences and Complete Spaces.
3.3 Pointwise Convergence and Uniform Convergence.
3.4 Fourier Series.
3.5 Taylor Series.
3.6 Asymptotic Series.
3.7 More on the Dirichlet Kernel.
3.8 Final Remarks.
Appendix 3.A COordinate Rotation DIgital Computing (CORDIC).
3.A.1 Introduction.
3.A.2 The Concept of a Discrete Basis.
3.A.3 Rotating Vectors in the Plane.
3.A.4 Computing Arctangents.
3.A.5 Final Remarks.
Appendix 3.B Mathematical Induction.
Appendix 3.C Catastrophic Cancellation.
References.
Problems.
4 Linear Systems of Equations.
4.1 Introduction.
4.2 Least-Squares Approximation and Linear Systems.
4.3 Least-Squares Approximation and Ill-Conditioned Linear Systems.
4.4 Condition Numbers.
4.5 LU Decomposition.
4.6 Least-Squares Problems and QR Decomposition.
4.7 Iterative Methods for Linear Systems.
4.8 Final Remarks.
Appendix 4.A Hilbert Matrix Inverses.
Appendix 4.B SVD and Least Squares.
References.
Problems.
5 Orthogonal Polynomials.
5.1 Introduction.
5.2 General Properties of Orthogonal Polynomials.
5.3 Chebyshev Polynomials.
5.4 Hermite Polynomials.
5.5 Legendre Polynomials.
5.6 An Example of Orthogonal Polynomial Least-Squares Approximation.
5.7 Uniform Approximation.
References.
Problems.
6 Interpolation.
6.1 Introduction.
6.2 Lagrange Interpolation.
6.3 Newton Interpolation.
6.4 Hermite Interpolation.
6.5 Spline Interpolation.
References.
Problems.
7 Nonlinear Systems of Equations.
7.1 Introduction.
7.2 Bisection Method.
7.3 Fixed-Point Method.
7.4 Newton–Raphson Method.
7.4.1 The Method.
7.4.2 Rate of Convergence Analysis.
7.4.3 Breakdown Phenomena.
7.5 Systems of Nonlinear Equations.
7.5.1 Fixed-Point Method.
7.5.2 Newton–Raphson Method.
7.6 Chaotic Phenomena and a Cryptography Application.
References.
Problems.
8 Unconstrained Optimization.
8.1 Introduction.
8.2 Problem Statement and Preliminaries.
8.3 Line Searches.
8.4 Newton’s Method.
8.5 Equality Constraints and Lagrange Multipliers.
Appendix 8.A MATLAB Code for Golden Section Search.
References.
Problems.
9 Numerical Integration and Differentiation.
9.1 Introduction.
9.2 Trapezoidal Rule.
9.3 Simpson’s Rule.
9.4 Gaussian Quadrature.
9.5 Romberg Integration.
9.6 Numerical Differentiation.
References.
Problems.
10 Numerical Solution of Ordinary Differential Equations.
10.1 Introduction.
10.2 First-Order ODEs.
10.3 Systems of First-Order ODEs.
10.4 Multistep Methods for ODEs.
10.4.1 Adams–Bashforth Methods.
10.4.2 Adams–Moulton Methods.
10.4.3 Comments on the Adams Families.
10.5 Variable-Step-Size (Adaptive) Methods for ODEs.
10.6 Stiff Systems.
10.7 Final Remarks.
Appendix 10.A MATLAB Code for Example 10.8.
Appendix 10.B MATLAB Code for Example 10.13.
References.
Problems.
11 Numerical Methods for Eigenproblems.
11.1 Introduction.
11.2 Review of Eigenvalues and Eigenvectors.
11.3 The Matrix Exponential.
11.4 The Power Methods.
11.5 QR Iterations.
References.
Problems.
12 Numerical Solution of Partial Differential Equations.
12.1 Introduction.
12.2 A Brief Overview of Partial Differential Equations.
12.3 Applications of Hyperbolic PDEs.
12.3.1 The Vibrating String.
12.3.2 Plane Electromagnetic Waves.
12.4 The Finite-Difference (FD) Method.
12.5 The Finite-Difference Time-Domain (FDTD) Method.
Appendix 12.A MATLAB Code for Example 12.5.
References.
Problems.
13 An Introduction to MATLAB.
13.1 Introduction.
13.2 Startup.
13.3 Some Basic Operators, Operations, and Functions.
13.4 Working with Polynomials.
13.5 Loops.
13.6 Plotting and M-Files.
References.
Index.