Applied Analysis
by Cornelius Lanczos

eBook

$18.99 (original price $24.95; you save 24%)


Overview

This is a basic text for graduate and advanced undergraduate study in those areas of mathematical analysis that are of primary concern to the engineer and the physicist, most particularly analysis and design of finite processes that approximate the solution of an analytical problem. The work comprises seven chapters:
Chapter I (Algebraic Equations) deals with the search for roots of algebraic equations encountered in vibration and flutter problems and in those of static and dynamic stability. Useful computing techniques are discussed, in particular the Bernoulli method and its ramifications.
Chapter II (Matrices and Eigenvalue Problems) is devoted to a systematic development of the properties of matrices, especially in the context of industrial research.
Chapter III (Large-Scale Linear Systems) discusses the "spectroscopic method" of finding the real eigenvalues of large matrices and the corresponding method of solving large-scale linear equations as well as an additional treatment of a perturbation problem and other topics.
Chapter IV (Harmonic Analysis) deals primarily with the interpolation aspects of the Fourier series and its flexibility in representing empirically given equidistant data.
Chapter V (Data Analysis) deals with the problem of reduction of data and of obtaining the first and even second derivatives of an empirically given function, tasks constantly encountered in tracking and curve-fitting problems. Two methods of smoothing are discussed: smoothing in the small and smoothing in the large.
Chapter VI (Quadrature Methods) surveys a variety of quadrature methods with particular emphasis on Gaussian quadrature and its use in solving boundary value problems and eigenvalue problems associated with ordinary differential equations.
Chapter VII (Power Expansions) discusses the theory of orthogonal function systems, in particular the "Chebyshev polynomials."
This unique work, perennially in demand, belongs in the library of every engineer, physicist, or scientist interested in the application of mathematical analysis to engineering, physical, and other practical problems.


Product Details

ISBN-13: 9780486319261
Publisher: Dover Publications
Publication date: 03/25/2013
Series: Dover Books on Mathematics
Sold by: Barnes & Noble
Format: eBook
Pages: 576
File size: 19 MB
Note: This product may take a few minutes to download.

Read an Excerpt

APPLIED ANALYSIS


By Cornelius Lanczos

DOVER PUBLICATIONS INC.

Copyright © 1988 DOVER PUBLICATIONS INC.
All rights reserved.
ISBN: 978-0-486-31926-1



CHAPTER 1

ALGEBRAIC EQUATIONS


1. Historical introduction. Algebraic equations of the first and second order aroused the interest of scientists from the earliest days. While the early Egyptians solved mostly equations of first order, the early Babylonians (about 2000 B.C.) were already familiar with the solution of the quadratic equation, and constructed tables for the solution of cubic equations by bringing the general cubic into a normal form.

The Hindus developed the systematic algebraic theory of the equations of first and second order (seventh century). The standard method of solving the general quadratic equation by completing the square is a Hindu invention. The Hindus were familiar with the operational viewpoint and were not afraid of the use of negative numbers, considering them as "debts." The clear insight into the nature of imaginary and complex numbers came much later, in the time of Euler (eighteenth century).

The solution of cubic equations was first discovered by the Italian Tartaglia (early sixteenth century); Cardano's pupil Ferrari added the solution of biquadratic equations a few years later. The essentially different character of equations of fifth and higher order was clearly recognized by Lagrange (late eighteenth century), but the first exact proof that general equations of fifth and higher order cannot be solved by purely algebraic tools is due to the Norwegian Abel (1824), while a few years later (1832) the Frenchman Galois gave the general group-theoretical foundation of the entire problem.

The "fundamental theorem of algebra" states that every algebraic equation has at least one solution within the realm of complex numbers. If this is proved, we can immediately infer (by successive divisions by the root factors) that every polynomial of the order n can be resolved into a product of n root factors. The first rigorous proof of the fundamental theorem of algebra was given by Gauss when only 22 years of age (1799). Later Cauchy's theory of the functions of a complex variable provided a deeper insight into the nature of the roots of an algebraic equation and yielded a simplified proof for the fundamental theorem.

The existence of n generally complex roots of an algebraic equation of nth order is in no contradiction to the unsolvability of an algebraic equation of fifth or higher order by algebraic means. The latter statement means that the roots of a general algebraic equation of higher than fourth order are not obtainable by purely algebraic operations on the coefficients (i.e., addition, subtraction, multiplication, division, raising to a power and taking the root). Such operations can approximate, however, the roots with any degree of accuracy.

2. Allied fields. (a) The problem of solving an algebraic equation of nth order is closely related to the theory of vibrations around a state of equilibrium. The frequencies (or the squares of the frequencies) of a mechanical system appear as the "characteristic roots" or "eigenvalues" of a matrix, obtainable by solving the "characteristic equation" of the matrix, which is an algebraic equation of the order n.

(b) In electrical engineering the response of an electric network is always a linear superposition of exponential functions. The exponents of these functions are obtainable as the roots of a certain polynomial which can be constructed if the elements of the network and the network diagram are given.

(c) Intricate algebraic and geometric relations frequently yield by elimination an algebraic equation of second or higher order for one of the unknowns.

3. Cubic equations. Equations of third and fourth order are still solvable by algebraic formulas. However, the numerical computations required by the formulas are usually so involved and time-absorbing that we prefer less cumbersome methods which give the roots in approximation only but still close enough for later refinement.

The solution of a cubic equation (with real coefficients) is particularly convenient since one of the roots must be real. After finding this root, the other two roots follow immediately by solving a quadratic equation.

A general cubic equation can be written in the form

ξ³ + aξ² + bξ - c = 0 (c > 0) (1-3.1)

The factor of ξ³ can always be normalized to 1 since we can divide through by the highest coefficient. Moreover, the absolute term can always be made negative because, if it is originally positive, we put ξ1 = -ξ and operate with this ξ1.

Now it is convenient to introduce a new scale factor which will normalize the absolute term to -1. We put

x = αξ (1-3.2)

and write the new equation

f(x) = x³ + a1x² + b1x - c1 = 0 (1-3.3)

If we choose

α = 1/∛c (1-3.4)

we obtain

c1 = 1 (1-3.5)

Now, since f(0) is negative and f(∞) is positive, we know that there must be at least one root between x = 0 and x = ∞. We put x = 1 and evaluate f(1). If f(1) is positive, the root must be between 0 and 1; if f(1) is negative, the root must be between 1 and ∞. Moreover, since

x1 · x2 · x3 = 1 (1-3.6)

we know in advance that we cannot have three roots between 0 and 1, or 1 and ∞. Hence, if f(1)> 0, we know that there must be one and only one real root in the interval [0,1], while if f(1)< 0, we know that there must be one and only one real root in the interval [1, ∞]. The latter interval can be changed to the interval [1,0] by the transformation

x̄ = 1/x (1-3.7)

which simply means that the coefficients of the equation change their sequence:

x̄³ - b1x̄² - a1x̄ - 1 = 0 (1-3.8)

Hence we have reduced our problem to the new problem: find the real root of a cubic equation in the range [0,1]. We solve this problem in good approximation by taking advantage of the remarkable properties of the Chebyshev polynomials (cf. VII, 9) which enable us to reduce a higher power to lower powers with a small error. In particular, the third Chebyshev polynomial

T3(x) = 4x³ - 3x (1-3.9)

normalized to the range [0, 1] gives

x³ ≈ (48x² - 18x + 1)/32 = 1.5x² - 0.5625x + 0.03125 (1-3.10)

with a maximum error of ± 1/32. The original cubic is thus reducible to a quadratic, with an error not exceeding 3 %.
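
The bound is easy to check numerically. In the short Python snippet below (my own check, not part of the book), the cubic 32x³ - 48x² + 18x - 1 is the third Chebyshev polynomial shifted to the interval [0, 1]; dividing its largest absolute value by 32 reproduces the quoted error of 1/32, roughly 3%.

    # Checking the error of the Chebyshev reduction (my own snippet, not from the book).
    pts = [i / 1000.0 for i in range(1001)]
    err = max(abs(32 * x**3 - 48 * x**2 + 18 * x - 1) / 32.0 for x in pts)
    print(err)    # 0.03125, i.e. 1/32, about 3%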

We now solve this quadratic, retaining only the root between 0 and 1.
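
The whole procedure is easy to mechanize. The following Python sketch (my own illustration; the book gives no code, and the function name and arrangement are merely one possibility) carries out the steps just described: scaling by α, testing f(1), inverting if necessary, and solving the Chebyshev-reduced quadratic.

    import math

    def approx_real_root(a, b, c):
        """Approximate the real root of xi^3 + a*xi^2 + b*xi - c = 0, with c > 0."""
        alpha = c ** (-1.0 / 3.0)              # scale factor, eq. (1-3.4)
        a1, b1 = a * alpha, b * alpha ** 2     # scaled cubic: x^3 + a1*x^2 + b1*x - 1 = 0
        inverted = False
        if 1.0 + a1 + b1 - 1.0 < 0.0:          # f(1) < 0: the root lies beyond x = 1
            a1, b1, inverted = -b1, -a1, True  # invert as in eq. (1-3.8); root now in [0, 1]
        # Chebyshev reduction x^3 ~ 1.5*x^2 - 0.5625*x + 0.03125, eq. (1-3.10)
        A, B, C = 1.5 + a1, -0.5625 + b1, 0.03125 - 1.0
        disc = math.sqrt(B * B - 4.0 * A * C)
        x = (-B + disc) / (2.0 * A)            # keep the root that falls in [0, 1]
        if not 0.0 <= x <= 1.0:
            x = (-B - disc) / (2.0 * A)
        if inverted:
            x = 1.0 / x                        # undo the reciprocal transformation
        return x / alpha                       # undo the scaling x = alpha*xi

The result is only an approximation; a Newton refinement of the kind described in § 5 would normally follow.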

4. Numerical example. In actual practice α need not be taken with great accuracy but can be rounded off to two significant figures. Consider the solution of the following cubic:

ξ³ + 1.2ξ² - 17ξ - 70 = 0 (1-4.1)

Barlow's Tables give the cube root of 70 as 4.1212···, the reciprocal of which gives α = 0.2426 ···. We conveniently choose

α = 0.25 (1-4.2)

obtaining

f(x) = x³ + 0.3x² - 1.0625x - 1.09375 = 0 (1-4.3)

At x = 1, f(1) = -0.856 is still negative. The root is thus between x = 1 and ∞. We invert the range by putting

x̄ = 1/x (1-4.4)

1.09375x̄³ + 1.0625x̄² - 0.3x̄ - 1 = 0 (1-4.5)

The substitution (1-3.10) reduces this equation to the quadratic

2.703x̄² - 0.915x̄ - 0.966 = 0 (1-4.6)

solution of which gives

x̄ = (0.915 ± 3.370)/5.406 (1-4.7)

The negative sign of the square root yields a spurious result, since it falls outside the range considered. The positive sign gives

x̄ = 0.79 (1-4.8)

and thus

x = 1/0.79 = 1.27

and

ξ = x/α = 1.27/0.25 = 5.08 (1-4.9)

Substitution in the original equation shows that the left side gives the remainder 5.692, which in comparison with the absolute term 70 is an error of 8 %.

The operation with large roots is numerically not advantageous. It is thus of considerable importance that we can always restrict ourselves to roots which are in absolute value less than 1, because if the absolute value of the root is greater than 1, the reciprocal transformation x̄ = 1/x, which merely inverts the polynomial, changes the root to its reciprocal. Hence in our example we will prefer to substitute the reciprocal of (9), i.e.,

x̄ = 1/5.08 = 0.197 (1-4.10)

into the inverted equation

70x̄³ + 17x̄² - 1.2x̄ - 1 = 0 (1-4.11)

The remainder is now -0.0395, an error of 4% compared with the absolute term 1.
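
For readers who wish to check this arithmetic by machine, the following short Python snippet (my own check, not from the book) repeats the main steps with the rounded scale factor α = 0.25 and the coefficients quoted above.

    alpha = 0.25                                    # rounded scale factor, eq. (1-4.2)
    a1, b1, c1 = 1.2 * alpha, -17.0 * alpha**2, 70.0 * alpha**3
    f = lambda x: x**3 + a1 * x**2 + b1 * x - c1    # scaled cubic, eq. (1-4.3)
    print(round(f(1.0), 3))                         # -0.856, so the root lies beyond x = 1
    # inverted cubic c1*y^3 - b1*y^2 - a1*y - 1 = 0, with the Chebyshev reduction applied
    A, B, C = 1.5 * c1 - b1, -0.5625 * c1 - a1, 0.03125 * c1 - 1.0
    y = (-B + (B * B - 4.0 * A * C) ** 0.5) / (2.0 * A)
    print(round(y, 2), round(1.0 / y / alpha, 2))   # 0.79 and 5.06 (the coarser rounding above gives 5.08)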

5. Newton's method. If we have a good approximation x = x0 to a root of an algebraic equation, we can improve that approximation by a method known as "Newton's method." We put

x = x0 + h (1-5.1)

and expand f(x0 + h) into powers of h:

f(x0 + h) = f(x0) + hf'(x0) + (h²/2)f''(x0) + (h³/6)f'''(x0) + ··· (1-5.2)

For small h the higher order terms will rapidly diminish. If we neglect everything beyond the second term, then the solution of the equation

f(x) = f(x0 + h) = 0 (1-5.3)

is obtained in good approximation by

h0 = -f(x0)/f'(x0) (1-5.4)

We can now consider

x1 = x0 + h0 (1-5.5)

as a new first approximation, replacing x0 by x1. Hence

h1 = -f(x1)/f'(x1) (1-5.6)

combined with

x2 = x0 + h0 + h1 (1-5.7)

is a still closer approximation of the root, and generally we obtain the iterative scheme

hn = -f(xn)/f'(xn) (1-5.8)

xn+1 = xn + hn (1-5.9)

which converges rapidly to x, if x0 is a sufficiently close first approximation.

Newton's scheme is not restricted to algebraic equations, but is equally applicable to transcendental equations.

An increase of convergence is obtainable if we stop only after the third term, considering the second-order term a small correction of the first-order term. Hence we write

f(x0) + hf'(x0) + (h²/2)f''(x0) = 0 (1-5.10)

and solve this equation in the form

h = -f(x0)/[f'(x0) + (h/2)f''(x0)] (1-5.11)

replacing the h in the denominator by the first approximation (4). This yields a formula which can best be remembered in the following form:

1/h = -f'(x0)/f(x0) + f''(x0)/(2f'(x0)) (1-5.12)
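
As a compact illustration (my own, not the book's), the corrected step (1-5.12) translates almost literally into Python once f, f', and f'' are available as functions:

    def newton_corrected(f, df, ddf, x0, steps=4):
        """Refine a root estimate by the second-order-corrected Newton step (1-5.12)."""
        x = x0
        for _ in range(steps):
            fx = f(x)
            if fx == 0.0:                                   # already exact
                break
            inv_h = -df(x) / fx + ddf(x) / (2.0 * df(x))    # the 1/h of eq. (1-5.12)
            x += 1.0 / inv_h
        return x

If only the first term of (1-5.12) is retained, the step reduces to the ordinary Newton correction h = -f(x)/f'(x) of (1-5.4).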

6. Numerical example for Newton's method. In § 4 the cubic equation

f(x) = 1.09375x³ + 1.0625x² - 0.3x - 1 = 0 (1-6.1)

was treated, and the approximation

x0 = 0.79 (1-6.2)

was obtained. We substitute this value in f(x) and likewise in

f'(x) = 3.28125x² + 2.125x - 0.3 (1-6.3)

f''(x) = 6.5625x + 2.125 (1-6.4)

obtaining

f(x0) = -0.034632, f'(x0) = 3.426578, f''(x0) = 7.309375 (1-6.5)

Substitution in the formula (1-5.12) gives

1/h = 98.945453 + 1.066568 = 100.012021 (1-6.6)

h = 0.009998798 (1-6.7)

x1 = x0 + h = 0.7999988 (1-6.8)

If this new x1 is substituted in f(x), we obtain

f(x1) = -0.00000418 (1-6.9)

At this point we can stop, since the error is only 4 units in the 6th place; the coefficients of an algebraic equation are seldom given with more than 5 decimal place accuracy.
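
The single corrected step can be reproduced mechanically; the snippet below (my own check, written against the cubic as quoted in (1-6.1)) prints values in line with (1-6.6) to (1-6.8).

    f   = lambda x: 1.09375 * x**3 + 1.0625 * x**2 - 0.3 * x - 1.0   # eq. (1-6.1)
    df  = lambda x: 3.28125 * x**2 + 2.125 * x - 0.3                 # eq. (1-6.3)
    ddf = lambda x: 6.5625 * x + 2.125                               # eq. (1-6.4)
    x0 = 0.79
    inv_h = -df(x0) / f(x0) + ddf(x0) / (2.0 * df(x0))               # eq. (1-5.12)
    print(inv_h, x0 + 1.0 / inv_h)    # roughly 100 and 0.7999992, i.e. the root 0.8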


(Continues...)

Excerpted from APPLIED ANALYSIS by Cornelius Lanczos. Copyright © 1988 DOVER PUBLICATIONS INC. Excerpted by permission of DOVER PUBLICATIONS INC.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents

INTRODUCTION
1. Pure and applied mathematics
2. "Pure analysis, practical analysis, numerical analysis"
Chapter I
ALGEBRAIC EQUATIONS
1. Historical introduction
2. Allied fields
3. Cubic equations
4. Numerical example
5. Newton's method
6. Numerical example for Newton's method
7. Horner's scheme
8. The movable strip technique
9. The remaining roots of the cubic
10. Substitution of a complex number into a polynomial
11. Equations of fourth order
12. Equations of higher order
13. The method of moments
14. Synthetic division of two polynomials
15. Power sums and the absolutely largest root
16. Estimation of the largest absolute value
17. Scanning of the unit circle
18. Transformation by reciprocal radii
19. Roots near the imaginary axis
20. Multiple roots
21. Algebraic equations with complex coefficients
22. Stability analysis
Chapter II
MATRICES AND EIGENVALUE PROBLEMS
1. Historical survey
2. Vectors and tensors
3. Matrices as algebraic quantities
4. Eigenvalue analysis
5. The Hamilton-Cayley equation
6. Numerical example of a complete eigenvalue analysis
7. Algebraic treatment of the orthogonality of eigenvectors
8. The eigenvalue problem in geometrical interpretation
9. The principal axis transformation of a matrix
10. Skew-angular reference systems
11. Principal axis transformation of a matrix
12. The invariance of matrix equations under orthogonal transformations
13. The invariance of matrix equations under arbitrary linear transformations
14. Commutative and noncommutative matrices
15. Inversion of a triangular matrix
16. Successive orthogonalization of a matrix
17. Inversion of a triangular matrix
18. Numerical example for the successive orthogonalization of a matrix
19. Triangularization of a matrix
20. Inversion of a complex matrix
21. Solution of codiagonal systems
22. Matrix inversion by partitioning
23. Perturbation methods
24. The compatibility of linear equations
25. Overdetermination and the principle of least squares
26. Natural and artificial skewness of a linear set of equations
27. Orthogonalization of an arbitrary linear system
28. The effect of noise on the solution of large linear systems
Chapter III.
LARGE-SCALE LINEAR SYSTEMS
1. Historical introduction
2. Polynomial operations with matrices
3. "The p,q algorithm"
4. The Chebyshev polynomials
5. Spectroscopic eigenvalue analysis
6. Generation of the eigenvectors
7. Iterative solution of large-scale linear systems
8. The residual test
9. The smallest eigenvalue of a Hermitian matrix
10. The smallest eigenvalue of an arbitrary matrix
Chapter IV.
HARMONIC ANALYSIS
1. Historical notes
2. Basic theorems
3. Least square approximations
4. The orthogonality of the Fourier functions
5. Separation of the sine and the cosine series
6. Differentiation of a Fourier series
7. Trigonometric expansion of the delta function
8. Extension of the trigonometric series to the nonintegrable functions
9. Smoothing of the Gibbs oscillations by the σ factors
10. General character of the σ smoothing
11. The method of trigonometric interpolation
12. Interpolation by sine functions
13. Interpolation by cosine functions
14. Harmonic analysis of equidistant data
15. The error of trigonometric interpolation
16. Interpolation by Chebyshev polynomials
17. The Fourier integral
18. The input-output relation of electric networks
19. Empirical determination of the input-output relation
20. Interpolation of the Fourier transform
21. Interpolatory filter analysis
22. Search for hidden periodicities
23. Separation of exponentials
24. The Laplace transform
25. Network analysis and Laplace transform
26. Inversion of the Laplace transform
27. Inversion by Legendre polynomials
28. Inversion by Chebyshev polynomials
29. Inversion by Fourier series
30. Inversion by Laguerre functions
31. Interpolation of the Laplace transform
Chapter V
DATA ANALYSIS
1. Historical introduction
2. Interpolation by simple differences
3. Interpolation by central differences
4. Differentiation of a tabulated function
5. The difficulties of a difference table
6. The fundamental principle of the method of least squares
7. Smoothing of data by fourth differences
8. Differentiation of an empirical function
9. Differentiation by integration
10. The second derivative of an empirical function
11. Smoothing in the large by Fourier analysis
12. Empirical determination of the cutoff frequency
13. Least-square polynomials
14. Polynomial interpolations in the large
15. The convergence of equidistant polynomial interpolation
16. Orthogonal function systems
17. Self-adjoint differential operators
18. The Sturm-Liouville differential equation
19. The hypergeometric series
20. The Jacobi polynomials
21. Interpolation by orthogonal polynomials
Chapter VI
QUADRATURE METHODS
1. Historical notes
2. Quadrature by planimeters
3. The trapezoidal rule
4. Simpson's rule
5. The accuracy of Simpson's formula
6. The accuracy of the trapezoidal rule
7. The trapezoidal rule with end correction
8. Numerical examples
9. Approximation by polynomials of higher order
10. The Gaussian quadrature method
11. Numerical example
12. The error of the Gaussian quadrature
13. The coefficients of a quadrature formula with arbitrary zeros
14. Gaussian quadrature with rounded-off zeros
15. The use of double roots
16. Engineering applications of the Gaussian quadrature method
17. Simpson's formula with end correction
18. Quadrature involving exponentials
19. Quadrature by differentiation
20. The exponential function
21. Eigenvalue problems
22. Convergence of the quadrature based on boundary values
Chapter VII
POWER EXPANSIONS
1. Historical introduction
2. Analytical extension by reciprocal radii
3. Numerical example
4. The convergence of the Taylor series
5. Rigid and flexible expansions
6. Expansions in orthogonal polynomials
7. The Chebyshev polynomials
8. The shifted Chebyshev polynomials
9. Telescoping of a power series by successive reductions
10. Telescoping of a power series by rearrangement
11. Power expansions beyond the Taylor range
12. The τ method
13. The canonical polynomials
14. Examples of the τ method
15. Estimation of the error by the τ method
16. The square root of a complex number
17. Generalization of the τ method. The method of selected points
APPENDIX: NUMERICAL TABLES
INDEX