
#### APPLIED ANALYSIS

**By Cornelius Lanczos**

**DOVER PUBLICATIONS INC.**

**Copyright © 1988 DOVER PUBLICATIONS INC.**

All rights reserved.

ISBN: 978-0-486-31926-1


CHAPTER 1

**ALGEBRAIC EQUATIONS**

**1. Historical introduction.** Algebraic equations of the first and second order aroused the interest of scientists from the earliest days. While the early Egyptians solved mostly equations of first order, the early Babylonians (about 2000 B.C.) were already familiar with the solution of the quadratic equation, and constructed tables for the solution of cubic equations by bringing the general cubic into a normal form.

The Hindus developed the systematic algebraic theory of the equations of first and second order (seventh century). The standard method of solving the general quadratic equation by completing the square is a Hindu invention. The Hindus were familiar with the operational viewpoint and were not afraid of the use of negative numbers, considering them as "debts." The clear insight into the nature of imaginary and complex numbers came much later, in the time of Euler (eighteenth century).

The solution of cubic equations was first discovered by the Italian Tartaglia (early sixteenth century); Cardano's pupil Ferrari added a few years later the solution of biquadratic equations. The essentially different character of equations of fifth and higher order was clearly recognized by Lagrange (late eighteenth century), but the first exact proof that general equations of fifth and higher order cannot be solved by purely algebraic tools is due to the Norwegian, Abel (1824), while a few years later (1832) the French Galois gave the general group-theoretical foundation of the entire problem.

The "fundamental theorem of algebra" states that every algebraic equation has at least one solution within the realm of complex numbers. If this is proved, we can immediately infer (by successive divisions by the root factors) that every polynomial of the order *n* can be resolved into a product of *n* root factors. The first rigorous proof of the fundamental theorem of algebra was given by Gauss when only 22 years of age (1799). Later Cauchy's theory of the functions of a complex variable provided a deeper insight into the nature of the roots of an algebraic equation and yielded a simplified proof for the fundamental theorem.

The existence of *n* generally complex roots of an algebraic equation of *n*th order is in no contradiction to the unsolvability of an algebraic equation of fifth or higher order by algebraic means. The latter statement means that the roots of a general algebraic equation of higher than fourth order are not obtainable by purely algebraic operations on the coefficients (i.e., addition, subtraction, multiplication, division, raising to a power and taking the root). Such operations can *approximate*, however, the roots with any degree of accuracy.

**2. Allied fields.** (a) The problem of solving an algebraic equation of *n*th order is closely related to the theory of vibrations around a state of equilibrium. The frequencies (or the squares of the frequencies) of a mechanical system appear as the "characteristic roots" or "eigenvalues" of a matrix, obtainable by solving the "characteristic equation" of the matrix, which is an algebraic equation of the order *n*.

(b) In electrical engineering the response of an electric network is always a linear superposition of exponential functions. The exponents of these functions are obtainable as the roots of a certain polynomial which can be constructed if the elements of the network and the network diagram are given.

(c) Intricate algebraic and geometric relations frequently yield by elimination an algebraic equation of second or higher order for one of the unknowns.

**3. Cubic equations.** Equations of third and fourth order are still solvable by algebraic formulas. However, the numerical computations required by the formulas are usually so involved and time-absorbing that we prefer less cumbersome methods which give the roots in *approximation* only but still close enough for later refinement.

The solution of a cubic equation (with real coefficients) is particularly convenient since one of the roots must be real. After finding this root, the other two roots follow immediately by solving a quadratic equation.

A general cubic equation can be written in the form

ξ³ + aξ² + bξ − c = 0,  (c > 0) (1-3.1)

The factor of ξ³ can always be normalized to 1 since we can divide through by the highest coefficient. Moreover, the absolute term can always be made negative because, if it is originally positive, we put ξ₁ = −ξ and operate with this ξ₁.

Now it is convenient to introduce a new scale factor which will normalize the absolute term to -1. We put

ξ = x/α (1-3.2)

and write the new equation

f(x) = x³ + a₁x² + b₁x − c₁ = 0 (1-3.3)

with a₁ = aα, b₁ = bα², c₁ = cα³.

If we choose

α = 1/∛c (1-3.4)

we obtain

c₁ = 1 (1-3.5)
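The normalization just described is mechanical enough to sketch in code. The following Python sketch is illustrative only (the function name and interface are mine, not the text's); it carries a general cubic through the three steps: leading coefficient 1, negative absolute term, absolute term scaled to −1.

```python
def normalize_cubic(A, B, C, D):
    """Bring A*t^3 + B*t^2 + C*t + D = 0 (A != 0, D != 0) into the
    normal form x^3 + a1*x^2 + b1*x - 1 = 0 of (1-3.3)/(1-3.5).

    Returns (a1, b1, recover), where recover(x) maps a root x of the
    normal form back to a root t of the original cubic.
    """
    # Step 1: divide through by the leading coefficient.
    a, b, c = B / A, C / A, D / A
    # Step 2: make the absolute term negative (substitute t -> -t if
    # needed); this flips the sign of the even-power coefficients a and c.
    flipped = c > 0
    if flipped:
        a, c = -a, -c
    # Step 3: put xi = x/alpha with alpha = 1/cbrt(-c), as in (1-3.4),
    # which scales the absolute term to exactly -1.
    alpha = (-c) ** (-1.0 / 3.0)
    a1 = a * alpha
    b1 = b * alpha ** 2
    recover = (lambda x: -x / alpha) if flipped else (lambda x: x / alpha)
    return a1, b1, recover
```

For instance, t³ − t² − t − 2 = 0 has the real root t = 2; after normalization the corresponding root of the normal form is 2/∛2 ≈ 1.587, and `recover` maps it back.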

Now, since *f*(0) is negative and *f*(∞) is positive, we know that there must be at least one root between *x* = 0 and *x* = ∞. We put *x* = 1 and evaluate *f*(1). If *f*(1) is positive, the root must be between 0 and 1; if *f*(1) is negative, the root must be between 1 and ∞. Moreover, since

x₁ · x₂ · x₃ = 1 (1-3.6)

we know in advance that we cannot have *three* roots between 0 and 1, or 1 and ∞. Hence, if *f*(1)> 0, we know that there must be one and *only one* real root in the interval [0,1], while if *f*(1)< 0, we know that there must be one and only one real root in the interval [1, ∞]. The latter interval can be changed to the interval [1,0] by the transformation

x̄ = 1/x (1-3.7)

which simply means that the coefficients of the equation reverse their sequence (the overall sign being adjusted so that the leading coefficient remains 1):

x̄³ − b₁x̄² − a₁x̄ − 1 = 0 (1-3.8)

Hence we have reduced our problem to the new problem: find the real root of a cubic equation in the range [0,1]. We solve this problem in good approximation by taking advantage of the remarkable properties of the Chebyshev polynomials (cf. VII, 9) which enable us to reduce a higher power to lower powers with a small error. In particular, the third Chebyshev polynomial

T₃(x) = 4x³ − 3x (1-3.9)

normalized to the range [0, 1] gives

x³ ≈ (1/32)(48x² − 18x + 1) (1-3.10)

with a maximum error of ±1/32. The original cubic is thus reduced to a quadratic, with an error not exceeding 3%.

We now solve this quadratic, retaining only the root between 0 and 1.
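To make the reduction concrete, here is an illustrative Python sketch (my own helper, not from the text). It assumes the cubic is already in the normal form x³ + a₁x² + b₁x − 1 = 0 with its real root in [0, 1], replaces x³ by (48x² − 18x + 1)/32 as in (1-3.10), and solves the resulting quadratic.

```python
from math import sqrt

def chebyshev_root_estimate(a1, b1):
    """Estimate the real root in [0, 1] of x^3 + a1*x^2 + b1*x - 1 = 0.

    Replacing x^3 by (48x^2 - 18x + 1)/32 turns the cubic into
    (a1 + 3/2)x^2 + (b1 - 9/16)x - 31/32 = 0; the replacement itself
    is in error by at most 1/32.
    """
    A = a1 + 1.5
    B = b1 - 9.0 / 16.0
    C = -31.0 / 32.0
    disc = sqrt(B * B - 4.0 * A * C)
    # Keep only the quadratic root that falls inside [0, 1].
    for r in ((-B + disc) / (2.0 * A), (-B - disc) / (2.0 * A)):
        if 0.0 <= r <= 1.0:
            return r
    raise ValueError("no quadratic root in [0, 1]")
```

As a check, x³ + 0.2x² + 0.45x − 1 = 0 (a test case of my own construction) has the exact root x = 0.8; the estimate comes out near 0.789, comfortably within the expected few-percent error.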

**4. Numerical example**. In actual practice α need not be taken with great accuracy but can be rounded off to two significant figures. Consider the solution of the following cubic:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (1-4.1)

*Barlow's Tables* give the cube root of 70 as 4.1212···, the reciprocal of which gives α = 0.2426 ···. We conveniently choose

α = 0.24 (1-4.2)

obtaining

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (1-4.3)

At *x* = 1, *f*(1) = -0.856 is still negative. The root is thus between *x* = 1 and ∞. We invert the range by putting

x̄ = 1/x (1-4.4)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (1-4.5)

The substitution (**1-3.10**) reduces this equation to the quadratic

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (1-4.6)

solution of which gives

x̄ = (0.915 ± 3.370)/5.406 (1-4.7)

The negative sign of the square root yields a spurious result, since it falls outside the range considered. The positive sign gives

x̄ = 0.79 (1-4.8)

and thus

x = 1/0.79 = 1.27

and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (1-4.9)

Substitution in the original equation shows that the left side gives the remainder 5.692, which in comparison with the absolute term 70 is an error of 8 %.

The operation with large roots is numerically not advantageous. It is thus of considerable importance that we can always restrict ourselves to roots which are in absolute value less than 1, because if the absolute value of the root is greater than 1, the reciprocal transformation x̄ = 1/x, which merely inverts the polynomial, changes the root to its reciprocal. Hence in our example we will prefer to substitute the reciprocal of (1-4.9), i.e.,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (1-4.10)

into the inverted equation

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (1-4.11)

The remainder is now -0.0395, an error of 4% compared with the absolute term 1.

**5. Newton's method**. If we have a good approximation *x* = *x*0 to a root of an algebraic equation, we can improve that approximation by a method known as "Newton's method." We put

x = x₀ + h (1-5.1)

and expand *f*(*x*0 + *h*) into powers of h:

f(x₀ + h) = f(x₀) + hf′(x₀) + (h²/2!)f″(x₀) + (h³/3!)f‴(x₀) + ··· (1-5.2)

For small *h* the higher-order terms will rapidly diminish. If we neglect everything beyond the second term, then the solution of the equation

f(x) = f(x₀ + h) = 0 (1-5.3)

is obtained in good approximation by

h₀ = −f(x₀)/f′(x₀) (1-5.4)

We can now consider

x₁ = x₀ + h₀ (1-5.5)

as a new first approximation, replacing *x*₀ by *x*₁. Hence

h₁ = −f(x₁)/f′(x₁) (1-5.6)

combined with

x₂ = x₀ + h₀ + h₁ (1-5.7)

is a still closer approximation of the root, and generally we obtain the iterative scheme

hₖ = −f(xₖ)/f′(xₖ) (1-5.8)

xₖ₊₁ = xₖ + hₖ,  k = 0, 1, 2, ··· (1-5.9)

which converges rapidly to the root if *x*₀ is a sufficiently close first approximation.

Newton's scheme is not restricted to algebraic equations, but is equally applicable to transcendental equations.
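The scheme (1-5.8)–(1-5.9) translates directly into code. A minimal Python sketch (the function interface and the sample cubic are mine, not the text's):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: iterate x <- x + h with h = -f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        h = -f(x) / fprime(x)
        x += h
        if abs(h) < tol:
            break
    return x

# Sample cubic (mine): x^3 + 0.2x^2 + 0.45x - 1 = 0 has the exact root 0.8.
f = lambda x: x**3 + 0.2 * x**2 + 0.45 * x - 1.0
fp = lambda x: 3.0 * x**2 + 0.4 * x + 0.45
root = newton(f, fp, 0.75)
```

Starting from the rough approximation 0.75, a handful of iterations pins the root to machine accuracy; since nothing here is specific to polynomials, the same few lines serve for transcendental equations as well.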

An increase of convergence is obtainable if we stop only after the *third* term, considering the second-order term a small correction of the first-order term. Hence we write

f(x₀) + hf′(x₀) + (h²/2)f″(x₀) = 0 (1-5.10)

and solve this equation in the form

h = −f(x₀) / [f′(x₀) + (h/2)f″(x₀)] (1-5.11)

replacing the *h* in the denominator by the first approximation (1-5.4). This yields a formula which can best be remembered in the following form:

1/h = −f′(x₀)/f(x₀) + f″(x₀)/(2f′(x₀)) (1-5.12)
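The gain from the second-order term can be seen numerically. In the Python sketch below the cubic is a test case of my own (x³ + 0.2x² + 0.45x − 1 = 0, exact root 0.8); one corrected step (1-5.12) is compared against one plain Newton step (1-5.4):

```python
f   = lambda x: x**3 + 0.2 * x**2 + 0.45 * x - 1.0   # exact root: 0.8
fp  = lambda x: 3.0 * x**2 + 0.4 * x + 0.45
fpp = lambda x: 6.0 * x + 0.4

x0 = 0.79
h_plain = -f(x0) / fp(x0)                            # (1-5.4)
inv_h = -fp(x0) / f(x0) + fpp(x0) / (2.0 * fp(x0))   # (1-5.12)
h_corr = 1.0 / inv_h

err_plain = abs(x0 + h_plain - 0.8)
err_corr = abs(x0 + h_corr - 0.8)
# For this example the corrected step lands roughly two orders of
# magnitude closer to the root than the plain step.
```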

**6. Numerical example for Newton's method**. In **§4** the cubic equation

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (1-6.1)

was treated, and the approximation

x₀ = 0.79 (1-6.2)

was obtained. We substitute this value in *f(x)* and likewise in

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (1-6.3)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (1-6.4)

obtaining

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (1-6.5)

Substitution in the formula (**1-5.12**) gives

1/h = 98.945453 + 1.066568 = 100.012021 (1-6.6)

h = 0.009998798 (1-6.7)

x₁ = x₀ + h = 0.7999988 (1-6.8)

If this new *x*1 is substituted in *f(x)*, we obtain

f(x₁) = −0.00000418 (1-6.9)

At this point we can stop, since the error is only 4 units in the 6th place; the coefficients of an algebraic equation are seldom given with more than 5 decimal place accuracy.
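The bookkeeping in (1-6.6)–(1-6.8) can be verified directly; this small Python check reproduces the quoted figures:

```python
# The two terms of 1/h are those of formula (1-5.12).
inv_h = 98.945453 + 1.066568    # (1-6.6): 1/h = 100.012021
h = 1.0 / inv_h                 # (1-6.7): h = 0.009998798
x1 = 0.79 + h                   # (1-6.8): x1 = 0.7999988 (rounded)
```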

*(Continues...)*

Excerpted from APPLIED ANALYSIS by Cornelius Lanczos. Copyright © 1988 DOVER PUBLICATIONS INC. Excerpted by permission of DOVER PUBLICATIONS INC.
