More About This Textbook
Overview
This book aims to teach the methods of numerical computing, and as such it is a practical reference and textbook for anyone using numerical analysis. The authors provide the techniques and computer programs needed for analysis, along with advice on which techniques should be used for particular types of problems. They assume the reader is mathematically literate and familiar with the FORTRAN and Pascal programming languages, but no prior experience with numerical analysis or numerical methods is assumed. The book includes all the standard topics of numerical analysis (linear equations, interpolation and extrapolation, integration, nonlinear root finding, eigensystems, and ordinary differential equations). The programs in the main text are in ANSI-standard FORTRAN 77, and are repeated in UCSD Pascal at the end. They are available on disks for use on IBM PC microcomputers and compatibles. A workbook providing sample programs that illustrate the use of each subroutine and procedure is available, as are disks with the programs listed in the book in UCSD Pascal and FORTRAN 77 for IBM PC microcomputers and compatibles.
Note: This edition only includes the diskette.
Editorial Reviews
From Barnes & Noble
Fatbrain Review
A considerable (50%) expansion of what has come to be the handbook of first resort in scientific computation. A few new chapters were added and, mainly, many more topics and techniques are now concisely discussed. A great addition to any reference collection, even if you already have the first edition. The text comes in both C and Fortran editions. Cambridge also makes available the following companion products, which our customers have found extremely valuable in their own right: example books (tutorial guides to using the routines within your own programs) and program disks (all 300+ routines from the text itself, plus sample programs from the example books). For the basic book, example books, and disks, be sure to specify whether the Fortran or C edition is preferred; for the disks, be sure to specify IBM (3.5" or 5.25") or Macintosh format.
Booknews
A manual of modern numerical methods for solving radiative transfer problems. Contains several new numerical methods on operator perturbation as well as on polarized radiative transfer. The methods are principally directed at astrophysical plasmas, but are easily adaptable to applications involving other media where self-absorption of radiation is important.
A C version of the FORTRAN and Pascal reference work. Brings the full power of the C language to bear on scientific tasks. Some 200 computer routines are included. For programmers, and for scientists and engineers who want to work in C. Annotation c. Book News, Inc., Portland, OR (booknews.com)
Read an Excerpt
Chapter 9: Root Finding and Nonlinear Sets of Equations
9.0: Introduction
We now consider that most basic of tasks, solving equations numerically. While most equations are born with both a right-hand side and a left-hand side, one traditionally moves all terms to the left, leaving

f(x) = 0    (9.0.1)

whose solution or solutions are desired. When there is only one independent variable, the problem is one-dimensional, namely to find the root or roots of a function.
With more than one independent variable, more than one equation can be satisfied simultaneously. You likely once learned the implicit function theorem which (in this context) gives us the hope of satisfying N equations in N unknowns simultaneously. Note that we have only hope, not certainty. A nonlinear set of equations may have no (real) solutions at all. Contrariwise, it may have more than one solution. The implicit function theorem tells us that "generically" the solutions will be distinct, pointlike, and separated from each other. If, however, life is so unkind as to present you with a nongeneric, i.e., degenerate, case, then you can get a continuous family of solutions. In vector notation, we want to find one or more N-dimensional solution vectors x such that

f(x) = 0    (9.0.2)
where f is the N-dimensional vector-valued function whose components are the individual equations to be satisfied simultaneously.
Don't be fooled by the apparent notational similarity of equations (9.0.2) and (9.0.1). Simultaneous solution of equations in N dimensions is much more difficult than finding roots in the onedimensional case. The principal difference between one and many dimensions is that, in one dimension, it is possible to bracket or "trap" a root between bracketing values, and then hunt it down like a rabbit. In multidimensions, you can never be sure that the root is there at all until you have found it.
Except in linear problems, root finding invariably proceeds by iteration, and this is equally true in one or in many dimensions. Starting from some approximate trial solution, a useful algorithm will improve the solution until some predetermined convergence criterion is satisfied. For smoothly varying functions, good algorithms will always converge, provided that the initial guess is good enough. Indeed one can even determine in advance the rate of convergence of most algorithms.
It cannot be overemphasized, however, how crucially success depends on having a good first guess for the solution, especially for multidimensional problems. This crucial beginning usually depends on analysis rather than numerics. Carefully crafted initial estimates reward you not only with reduced computational effort, but also with understanding and increased self-esteem. Hamming's motto, "the purpose of computing is insight, not numbers," is particularly apt in the area of finding roots. You should repeat this motto aloud whenever your program converges, with ten-digit accuracy, to the wrong root of a problem, or whenever it fails to converge because there is actually no root, or because there is a root but your initial estimate was not sufficiently close to it.
"This talk of insight is all very well, but what do I actually do?" For onedimensional root finding, it is possible to give some straightforward answers: You should try to get some idea of what your function looks like before trying to find its roots. If you need to massproduce roots for many different functions, then you should at least know what some typical members of the ensemble look like. Next, you should always bracket a root, that is, know that the function changes sign in an identified interval, before trying to converge to the root's value.
Finally (this is advice with which some daring souls might disagree, but we give it nonetheless), never let your iteration method get outside of the best bracketing bounds obtained at any stage. We will see below that some pedagogically important algorithms, such as the secant method or Newton-Raphson, can violate this last constraint, and are thus not recommended unless certain fixups are implemented.
Multiple roots, or very close roots, are a real problem, especially if the multiplicity is an even number. In that case, there may be no readily apparent sign change in the function, so the notion of bracketing a root, and maintaining the bracket, becomes difficult. We are hardliners: we nevertheless insist on bracketing a root, even if it takes the minimum-searching techniques of Chapter 10 to determine whether a tantalizing dip in the function really does cross zero or not. (You can easily modify the simple golden section routine of §10.1 to return early if it detects a sign change in the function. And, if the minimum of the function is exactly zero, then you have found a double root.)
As usual, we want to discourage you from using routines as black boxes without understanding them. However, as a guide to beginners, here are some reasonable starting points:
Avoiding implementations for specific computers, this book must generally steer clear of interactive or graphics-related routines. We make an exception right now. The following routine, which produces a crude function plot with interactively scaled axes, can save you a lot of grief as you enter the world of root finding.
#include <stdio.h>

#define ISCR 60      /* Number of horizontal and vertical positions in display. */
#define JSCR 21
#define BLANK ' '
#define ZERO '-'
#define YY 'l'
#define XX '-'
#define FF 'x'

void scrsho(float (*fx)(float))
/* For interactive CRT terminal use. Produce a crude graph of the function fx over the
prompted-for interval x1, x2. Query for another plot until the user signals satisfaction. */
{
    int i,j,jz;
    float ysml,ybig,x2,x1,x,dyj,dx,y[ISCR+1];
    char scr[ISCR+1][JSCR+1];

    for (;;) {
        printf("\nEnter x1 x2 (x1=x2 to stop):\n");   /* Query for another plot; quit if x1=x2. */
        scanf("%f %f",&x1,&x2);
        if (x1 == x2) break;
        for (j=1;j<=JSCR;j++)           /* Fill vertical sides with character 'l'. */
            scr[1][j]=scr[ISCR][j]=YY;
        for (i=2;i<=(ISCR-1);i++) {
            scr[i][1]=scr[i][JSCR]=XX;  /* Fill top, bottom with character '-'. */
            for (j=2;j<=(JSCR-1);j++)   /* Fill interior with blanks. */
                scr[i][j]=BLANK;
        }
        dx=(x2-x1)/(ISCR-1);
        x=x1;
        ysml=ybig=0.0;                  /* Limits will include 0. */
        for (i=1;i<=ISCR;i++) {         /* Evaluate the function at equal intervals. */
            y[i]=(*fx)(x);
            if (y[i] < ysml) ysml=y[i];
            if (y[i] > ybig) ybig=y[i];
            x += dx;
        }
        if (ybig == ysml) ybig=ysml+1.0;  /* Be sure to separate top and bottom. */
        dyj=(JSCR-1)/(ybig-ysml);
        jz=1-(int) (ysml*dyj);          /* Note which row corresponds to 0. */
        for (i=1;i<=ISCR;i++) {         /* Place an indicator at function height and 0. */
            scr[i][jz]=ZERO;
            j=1+(int) ((y[i]-ysml)*dyj);
            scr[i][j]=FF;
        }
        printf(" %10.3f ",ybig);        /* Display. */
        for (i=1;i<=ISCR;i++) printf("%c",scr[i][JSCR]);
        printf("\n");
        for (j=(JSCR-1);j>=2;j--) {
            printf("%12s"," ");
            for (i=1;i<=ISCR;i++) printf("%c",scr[i][j]);
            printf("\n");
        }
        printf(" %10.3f ",ysml);
        for (i=1;i<=ISCR;i++) printf("%c",scr[i][1]);
        printf("\n");
        printf("%8s %10.3f %44s %10.3f\n"," ",x1," ",x2);
    }
}
...
Table of Contents