Applied Analysis


Basic text for graduate and advanced undergraduate students deals with the search for roots of algebraic equations encountered in vibration and flutter problems and in problems of static and dynamic stability. Other topics include matrices and eigenvalue problems, large-scale linear systems, harmonic analysis, data analysis, and more.


Product Details

  • ISBN-13: 9780486656564
  • Publisher: Dover Publications
  • Publication date: 7/21/2010
  • Series: Dover Books on Mathematics
  • Edition description: Unabridged
  • Pages: 576
  • Product dimensions: 5.42 (w) x 8.32 (h) x 1.17 (d)

Read an Excerpt


By Cornelius Lanczos


All rights reserved.
ISBN: 978-0-486-31926-1



1. Historical introduction. Algebraic equations of the first and second order aroused the interest of scientists from the earliest days. While the early Egyptians solved mostly equations of first order, the early Babylonians (about 2000 B.C.) were already familiar with the solution of the quadratic equation, and constructed tables for the solution of cubic equations by bringing the general cubic into a normal form.

The Hindus developed the systematic algebraic theory of the equations of first and second order (seventh century). The standard method of solving the general quadratic equation by completing the square is a Hindu invention. The Hindus were familiar with the operational viewpoint and were not afraid of the use of negative numbers, considering them as "debts." The clear insight into the nature of imaginary and complex numbers came much later, in the time of Euler (eighteenth century).

The solution of cubic equations was first discovered by the Italian Tartaglia (early sixteenth century); Cardano's pupil Ferrari added a few years later the solution of biquadratic equations. The essentially different character of equations of fifth and higher order was clearly recognized by Lagrange (late eighteenth century), but the first exact proof that general equations of fifth and higher order cannot be solved by purely algebraic tools is due to the Norwegian, Abel (1824), while a few years later (1832) the French Galois gave the general group-theoretical foundation of the entire problem.

The "fundamental theorem of algebra" states that every algebraic equation has at least one solution within the realm of complex numbers. If this is proved, we can immediately infer (by successive divisions by the root factors) that every polynomial of the order n can be resolved into a product of n root factors. The first rigorous proof of the fundamental theorem of algebra was given by Gauss when only 22 years of age (1799). Later Cauchy's theory of the functions of a complex variable provided a deeper insight into the nature of the roots of an algebraic equation and yielded a simplified proof for the fundamental theorem.

The existence of n generally complex roots of an algebraic equation of nth order is in no contradiction to the unsolvability of an algebraic equation of fifth or higher order by algebraic means. The latter statement means that the roots of a general algebraic equation of higher than fourth order are not obtainable by purely algebraic operations on the coefficients (i.e., addition, subtraction, multiplication, division, raising to a power and taking the root). Such operations can approximate, however, the roots with any degree of accuracy.

2. Allied fields. (a) The problem of solving an algebraic equation of nth order is closely related to the theory of vibrations around a state of equilibrium. The frequencies (or the squares of the frequencies) of a mechanical system appear as the "characteristic roots" or "eigenvalues" of a matrix, obtainable by solving the "characteristic equation" of the matrix, which is an algebraic equation of the order n.

(b) In electrical engineering the response of an electric network is always a linear superposition of exponential functions. The exponents of these functions are obtainable as the roots of a certain polynomial which can be constructed if the elements of the network and the network diagram are given.

(c) Intricate algebraic and geometric relations frequently yield by elimination an algebraic equation of second or higher order for one of the unknowns.

3. Cubic equations. Equations of third and fourth order are still solvable by algebraic formulas. However, the numerical computations required by the formulas are usually so involved and time-absorbing that we prefer less cumbersome methods which give the roots in approximation only but still close enough for later refinement.

The solution of a cubic equation (with real coefficients) is particularly convenient since one of the roots must be real. After finding this root, the other two roots follow immediately by solving a quadratic equation.

A general cubic equation can be written in the form

ξ³ + aξ² + bξ − c = 0 (1-3.1)

The factor of ξ³ can always be normalized to 1, since we can divide through by the highest coefficient. Moreover, the absolute term can always be made negative because, if it is originally positive, we put ξ₁ = −ξ and operate with this ξ₁.

Now it is convenient to introduce a new scale factor which will normalize the absolute term to −1. We put

ξ = x/α (1-3.2)

and write the new equation

f(x) = x³ + a₁x² + b₁x − c₁ = 0 (1-3.3)

If we choose

α = 1/∛c (1-3.4)

we obtain

c₁ = 1 (1-3.5)

Now, since f(0) is negative and f(∞) is positive, we know that there must be at least one root between x = 0 and x = ∞. We put x = 1 and evaluate f(1). If f(1) is positive, the root must be between 0 and 1; if f(1) is negative, the root must be between 1 and ∞. Moreover, since

x₁ · x₂ · x₃ = 1 (1-3.6)

we know in advance that we cannot have three roots between 0 and 1, or 1 and ∞. Hence, if f(1) > 0, we know that there must be one and only one real root in the interval [0,1], while if f(1) < 0, we know that there must be one and only one real root in the interval [1, ∞]. The latter interval can be changed to the interval [0,1] by the transformation

x̄ = 1/x (1-3.7)

which simply means that the coefficients of the equation change their sequence (apart from a change of sign):

x̄³ − b₁x̄² − a₁x̄ − 1 = 0 (1-3.8)
Hence we have reduced our problem to the new problem: find the real root of a cubic equation in the range [0,1]. We solve this problem in good approximation by taking advantage of the remarkable properties of the Chebyshev polynomials (cf. VII, 9) which enable us to reduce a higher power to lower powers with a small error. In particular, the third Chebyshev polynomial

T₃(x) = 4x³ − 3x (1-3.9)

normalized to the range [0, 1] gives

x³ ≈ (48x² − 18x + 1)/32 (1-3.10)

with a maximum error of ±1/32. The original cubic is thus reducible to a quadratic, with an error not exceeding 3%.

We now solve this quadratic, retaining only the root between 0 and 1.
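The whole procedure can be sketched in a few lines of code. This is our illustrative reconstruction, not Lanczos's; it assumes the cubic has already been brought to the normal form x³ + ax² + bx − 1 = 0, with leading coefficient 1 and absolute term −1.

```python
def f(x, a, b):
    """The normalized cubic x^3 + a*x^2 + b*x - 1."""
    return x**3 + a * x**2 + b * x - 1.0

def chebyshev_root(a, b):
    """Approximate the real root in [0, 1] by replacing x^3 with
    (48x^2 - 18x + 1)/32 (third shifted Chebyshev polynomial, max
    error 1/32) and solving the resulting quadratic."""
    A = 1.5 + a            # coefficient of x^2 after the reduction
    B = -0.5625 + b        # coefficient of x
    C = 0.03125 - 1.0      # absolute term
    disc = B * B - 4.0 * A * C
    r1 = (-B + disc ** 0.5) / (2.0 * A)
    r2 = (-B - disc ** 0.5) / (2.0 * A)
    # retain only the root lying in [0, 1]
    return r1 if 0.0 <= r1 <= 1.0 else r2

def real_root(a, b):
    """Bracket the real root by the sign of f(1), inverting the
    polynomial if the root lies in [1, infinity)."""
    if f(1.0, a, b) >= 0.0:
        return chebyshev_root(a, b)      # root already in [0, 1]
    # x_bar = 1/x reverses the coefficient sequence (apart from sign):
    # x_bar^3 - b*x_bar^2 - a*x_bar - 1 = 0
    return 1.0 / chebyshev_root(-b, -a)
```

For example, x³ + x² + x − 1 = 0 has the real root 0.5437···, and the sketch returns about 0.541, within the 3% bound stated above; Newton's method (§5) then refines the result.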

4. Numerical example. In actual practice α need not be taken with great accuracy but can be rounded off to two significant figures. Consider the solution of the following cubic:


Barlow's Tables give the cube root of 70 as 4.1212···, the reciprocal of which gives α = 0.2426 ···. We conveniently choose

α = 0.24 (1-4.2)



At x = 1, f(1) = -0.856 is still negative. The root is thus between x = 1 and ∞. We invert the range by putting

x̄ = 1/x (1-4.4)


The substitution (1-3.10) reduces this equation to the quadratic


solution of which gives

x̄ = (0.915 ± 3.370)/5.406 (1-4.7)

The negative sign of the square root yields a spurious result, since it falls outside the range considered. The positive sign gives

x̄ = 0.79 (1-4.8)

and thus

x = 1/0.79 = 1.27 (1-4.9)



Substitution in the original equation shows that the left side gives the remainder 5.692, which in comparison with the absolute term 70 is an error of 8%.

The operation with large roots is numerically not advantageous. It is thus of considerable importance that we can always restrict ourselves to roots which are in absolute value less than 1, because if the absolute value of the root is greater than 1, the reciprocal transformation x̄ = 1/x, which merely inverts the polynomial, changes the root to its reciprocal. Hence in our example we will prefer to substitute the reciprocal of (1-4.9), i.e.,


into the inverted equation


The remainder is now -0.0395, an error of 4% compared with the absolute term 1.

5. Newton's method. If we have a good approximation x = x0 to a root of an algebraic equation, we can improve that approximation by a method known as "Newton's method." We put

x = x₀ + h (1-5.1)

and expand f(x₀ + h) into powers of h:

f(x₀ + h) = f(x₀) + hf′(x₀) + (h²/2)f″(x₀) + ··· (1-5.2)
For small h the higher order terms will rapidly diminish. If we neglect everything beyond the second term, then the solution of the equation

f(x) = f(x₀ + h) = 0 (1-5.3)

is obtained in good approximation by

h₀ = −f(x₀)/f′(x₀) (1-5.4)
We can now consider

x₁ = x₀ + h₀ (1-5.5)

as a new first approximation, replacing x₀ by x₁. Hence

h₁ = −f(x₁)/f′(x₁) (1-5.6)
combined with

x₂ = x₀ + h₀ + h₁ (1-5.7)

is a still closer approximation of the root, and generally we obtain the iterative scheme

hₖ = −f(xₖ)/f′(xₖ), xₖ₊₁ = xₖ + hₖ (1-5.8)

which converges rapidly to x, if x₀ is a sufficiently close first approximation.

Newton's scheme is not restricted to algebraic equations, but is equally applicable to transcendental equations.
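The iterative scheme above can be sketched in a few lines (a minimal illustration; the function and parameter names are ours, not the book's):

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Repeat h_k = -f(x_k)/f'(x_k), x_{k+1} = x_k + h_k
    until the correction h_k becomes negligibly small."""
    x = x0
    for _ in range(max_iter):
        h = -f(x) / fprime(x)
        x += h
        if abs(h) < tol:
            break
    return x

# The same routine handles a transcendental equation, e.g. cos x = x:
#   newton(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1, 1.0)
```

Starting from the rough Chebyshev estimate of §3, a few iterations drive the residual of the cubic down to the limits of floating-point accuracy.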

An increase of convergence is obtainable if we stop only after the third term, considering the second-order term as a small correction of the first-order term. Hence we write

f(x₀) + hf′(x₀) + (h²/2)f″(x₀) = 0 (1-5.9)

and solve this equation in the form

h = −f(x₀)/[f′(x₀) + (h/2)f″(x₀)] (1-5.10)

replacing the h in the denominator by the first approximation (1-5.4). This yields a formula which can best be remembered in the following form:

1/h = −f′(x₀)/f(x₀) + f″(x₀)/(2f′(x₀)) (1-5.12)
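A single corrected step of this kind might be coded as follows (again our sketch, with hypothetical names), side by side with the plain step for comparison:

```python
def newton_step(f, fp, x0):
    """Plain Newton step: x0 - f(x0)/f'(x0)."""
    return x0 - f(x0) / fp(x0)

def improved_step(f, fp, fpp, x0):
    """Step using 1/h = -f'(x0)/f(x0) + f''(x0)/(2 f'(x0)),
    i.e. the second-order term treated as a small correction."""
    inv_h = -fp(x0) / f(x0) + fpp(x0) / (2.0 * fp(x0))
    return x0 + 1.0 / inv_h
```

On x³ + x² + x − 1 = 0 with x₀ = 0.5, the plain step gives 0.5455 while the improved step gives 0.5437, much closer to the root 0.543689···.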
6. Numerical example for Newton's method. In § 4 the cubic equation


was treated, and the approximation

x₀ = 0.79 (1-6.2)

was obtained. We substitute this value in f(x) and likewise in



obtaining


Substitution in the formula (1-5.12) gives

1/h = 98.945453 + 1.066568 = 100.012021 (1-6.6)

h = 0.009998798 (1-6.7)

x₁ = x₀ + h = 0.7999988 (1-6.8)

If this new x₁ is substituted in f(x), we obtain

f(x₁) = −0.00000418 (1-6.9)

At this point we can stop, since the error is only 4 units in the 6th place; the coefficients of an algebraic equation are seldom given with more than 5 decimal place accuracy.


Excerpted from APPLIED ANALYSIS by Cornelius Lanczos. Copyright © 1988 Dover Publications Inc. Excerpted by permission of Dover Publications Inc.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.


Table of Contents

1. Pure and applied mathematics
2. Pure analysis, practical analysis, numerical analysis
Chapter I
1. Historical introduction
2. Allied fields
3. Cubic equations
4. Numerical example
5. Newton's method
6. Numerical example for Newton's method
7. Horner's scheme
8. The movable strip technique
9. The remaining roots of the cubic
10. Substitution of a complex number into a polynomial
11. Equations of fourth order
12. Equations of higher order
13. The method of moments
14. Synthetic division of two polynomials
15. Power sums and the absolutely largest root
16. Estimation of the largest absolute value
17. Scanning of the unit circle
18. Transformation by reciprocal radii
19. Roots near the imaginary axis
20. Multiple roots
21. Algebraic equations with complex coefficients
22. Stability analysis
Chapter II
1. Historical survey
2. Vectors and tensors
3. Matrices as algebraic quantities
4. Eigenvalue analysis
5. The Hamilton-Cayley equation
6. Numerical example of a complete eigenvalue analysis
7. Algebraic treatment of the orthogonality of eigenvectors
8. The eigenvalue problem in geometrical interpretation
9. The principal axis transformation of a matrix
10. Skew-angular reference systems
11. Principal axis transformation of a matrix
12. The invariance of matrix equations under orthogonal transformations
13. The invariance of matrix equations under arbitrary linear transformations
14. Commutative and noncommutative matrices
15. Inversion of a triangular matrix
16. Successive orthogonalization of a matrix
17. Inversion of a triangular matrix
18. Numerical example for the successive orthogonalization of a matrix
19. Triangularization of a matrix
20. Inversion of a complex matrix
21. Solution of codiagonal systems
22. Matrix inversion by partitioning
23. Perturbation methods
24. The compatibility of linear equations
25. Overdetermination and the principle of least squares
26. Natural and artificial skewness of a linear set of equations
27. Orthogonalization of an arbitrary linear system
28. The effect of noise on the solution of large linear systems
Chapter III
1. Historical introduction
2. Polynomial operations with matrices
3. The p,q algorithm
4. The Chebyshev polynomials
5. Spectroscopic eigenvalue analysis
6. Generation of the eigenvectors
7. Iterative solution of large-scale linear systems
8. The residual test
9. The smallest eigenvalue of a Hermitian matrix
10. The smallest eigenvalue of an arbitrary matrix
Chapter IV
1. Historical notes
2. Basic theorems
3. Least square approximations
4. The orthogonality of the Fourier functions
5. Separation of the sine and the cosine series
6. Differentiation of a Fourier series
7. Trigonometric expansion of the delta function
8. Extension of the trigonometric series to the nonintegrable functions
9. Smoothing of the Gibbs oscillations by the σ factors
10. General character of the σ smoothing
11. The method of trigonometric interpolation
12. Interpolation by sine functions
13. Interpolation by cosine functions
14. Harmonic analysis of equidistant data
15. The error of trigonometric interpolation
16. Interpolation by Chebyshev polynomials
17. The Fourier integral
18. The input-output relation of electric networks
19. Empirical determination of the input-output relation
20. Interpolation of the Fourier transform
21. Interpolatory filter analysis
22. Search for hidden periodicities
23. Separation of exponentials
24. The Laplace transform
25. Network analysis and Laplace transform
26. Inversion of the Laplace transform
27. Inversion by Legendre polynomials
28. Inversion by Chebyshev polynomials
29. Inversion by Fourier series
30. Inversion by Laguerre functions
31. Interpolation of the Laplace transform
Chapter V
1. Historical introduction
2. Interpolation by simple differences
3. Interpolation by central differences
4. Differentiation of a tabulated function
5. The difficulties of a difference table
6. The fundamental principle of the method of least squares
7. Smoothing of data by fourth differences
8. Differentiation of an empirical function
9. Differentiation by integration
10. The second derivative of an empirical function
11. Smoothing in the large by Fourier analysis
12. Empirical determination of the cutoff frequency
13. Least-square polynomials
14. Polynomial interpolations in the large
15. The convergence of equidistant polynomial interpolation
16. Orthogonal function systems
17. Self-adjoint differential operators
18. The Sturm-Liouville differential equation
19. The hypergeometric series
20. The Jacobi polynomials
21. Interpolation by orthogonal polynomials
Chapter VI
1. Historical notes
2. Quadrature by planimeters
3. The trapezoidal rule
4. Simpson's rule
5. The accuracy of Simpson's formula
6. The accuracy of the trapezoidal rule
7. The trapezoidal rule with end correction
8. Numerical examples
9. Approximation by polynomials of higher order
10. The Gaussian quadrature method
11. Numerical example
12. The error of the Gaussian quadrature
13. The coefficients of a quadrature formula with arbitrary zeros
14. Gaussian quadrature with rounded-off zeros
15. The use of double roots
16. Engineering applications of the Gaussian quadrature method
17. Simpson's formula with end correction
18. Quadrature involving exponentials
19. Quadrature by differentiation
20. The exponential function
21. Eigenvalue problems
22. Convergence of the quadrature based on boundary values
Chapter VII
1. Historical introduction
2. Analytical extension by reciprocal radii
3. Numerical example
4. The convergence of the Taylor series
5. Rigid and flexible expansions
6. Expansions in orthogonal polynomials
7. The Chebyshev polynomials
8. The shifted Chebyshev polynomials
9. Telescoping of a power series by successive reductions
10. Telescoping of a power series by rearrangement
11. Power expansions beyond the Taylor range
12. The τ method
13. The canonical polynomials
14. Examples of the τ method
15. Estimation of the error by the τ method
16. The square root of a complex number
17. Generalization of the τ method. The method of selected points
