Overview
This excellent text for advanced undergraduates and graduate students covers norms, numerical solution of linear systems and matrix factoring, iterative solutions of nonlinear equations, eigenvalues and eigenvectors, polynomial approximation, and other topics. It offers a careful analysis and stresses techniques for developing new methods, plus many examples and problems. 1966 edition.
Read an Excerpt
ANALYSIS OF NUMERICAL METHODS
By EUGENE ISAACSON, HERBERT BISHOP KELLER
Dover Publications, Inc.
Copyright © 1966 John Wiley & Sons. All rights reserved.
ISBN: 978-0-486-13798-8
Contents
Chapter 1 Norms, Arithmetic, and Well-Posed Computations,
Chapter 2 Numerical Solution of Linear Systems and Matrix Inversion,
Chapter 3 Iterative Solutions of Non-Linear Equations,
Chapter 4 Computation of Eigenvalues and Eigenvectors,
Chapter 5 Basic Theory of Polynomial Approximation,
Chapter 6 Differences, Interpolation Polynomials, and Approximate Differentiation,
Chapter 7 Numerical Integration,
Chapter 8 Numerical Solution of Ordinary Differential Equations,
Chapter 9 Difference Methods for Partial Differential Equations,
Bibliography,
Index,
CHAPTER 1
Norms, Arithmetic, and Well-Posed Computations
0. INTRODUCTION
In this chapter, we treat three topics that are generally useful for the analysis of the various numerical methods studied throughout the book. In Section 1, we give the elements of the theory of norms of finite dimensional vectors and matrices. This subject properly belongs to the field of linear algebra. In later chapters, we may occasionally employ the notion of the norm of a function. This is a straightforward extension of the notion of a vector norm to the infinite-dimensional case. On the other hand, we shall not introduce the corresponding natural generalization, i.e., the notion of the norm of a linear transformation that acts on a space of functions. Such ideas are dealt with in functional analysis, and might profitably be used in a more sophisticated study of numerical methods.
We study briefly, in Section 2, the practical problem of the effect of rounding errors on the basic operations of arithmetic. Except for calculations involving only exact integer arithmetic, rounding errors are invariably present in any computation. A most important feature of the later analysis of numerical methods is the incorporation of a treatment of the effects of such rounding errors.
Finally, in Section 3, we describe the computational problems that are 'reasonable' in some general sense. In effect, a numerical method which produces a solution insensitive to small changes in data or to rounding errors is said to yield a well-posed computation. How to determine the sensitivity of a numerical procedure is dealt with in special cases throughout the book. We indicate heuristically that any convergent algorithm is a well-posed computation.
1. NORMS OF VECTORS AND MATRICES
We assume that the reader is familiar with the basic theory of linear algebra, not necessarily in its abstract setting, but at least with specific reference to finite-dimensional linear vector spaces over the field of complex scalars. By 'basic theory' we of course include: the theory of linear systems of equations, some elementary theory of determinants, and the theory of matrices or linear transformations to about the Jordan normal form. We hardly employ the Jordan form in the present study. In fact a much weaker result can frequently be used in its place (when the divisor theory or invariant subspaces are not actually involved). This result is all too frequently skipped in basic linear algebra courses, so we present it as
THEOREM 1. For any square matrix A of order n there exists a non-singular matrix P, of order n, such that

B = P⁻¹AP

is upper triangular and has the eigenvalues of A, say λj ≡ λj(A), j = 1, 2, ..., n, on the principal diagonal (i.e., any square matrix is equivalent to a triangular matrix).
Proof. We sketch the proof of this result. The reader should have no difficulty in completing the proof in detail.
Let λ1 be an eigenvalue of A with corresponding eigenvector u1. Then pick a basis for the n-dimensional complex vector space, Cn, with u1 as the first such vector. Let the independent basis vectors be the columns of a non-singular matrix P1, which then determines the transformation to the new basis. In this new basis the transformation determined by A is given by B1 ≡ P1⁻¹AP1, and since Au1 = λ1u1,

B1 = P1⁻¹AP1 = [ λ1   α1 ··· αn-1 ]
               [  o        A2     ]

where A2 is some matrix of order n - 1 and o denotes the zero column of n - 1 components.
The characteristic polynomial of B1 is clearly
det (λIn - B1) = (λ - λ1) det (λIn-1 - A2),
where In is the identity matrix of order n. Now pick some eigenvalue λ2 of A2 and corresponding (n - 1)-dimensional eigenvector, v2; i.e.,

A2v2 = λ2v2.
With this vector we define the independent n-dimensional vectors

ŷ1 ≡ (1, 0, ..., 0)T,   ŷ2 ≡ (0, ν12, ν22, ..., νn-1,2)T,

where νj2 denotes the jth component of v2. Note that with the scalar α ≡ α1ν12 + α2ν22 + ... + αn-1νn-1,2,

B1ŷ1 = λ1ŷ1,   B1ŷ2 = λ2ŷ2 + αŷ1,

and thus if we set u1 = P1ŷ1, u2 = P1ŷ2, then

Au1 = λ1u1,   Au2 = λ2u2 + αu1.
Now we introduce a new basis of Cn with the first two vectors being u1 and u2. The non-singular matrix P2 which determines this change of basis has u1 and u2 as its first two columns; and the original linear transformation in the new basis has the representation

B2 = P2⁻¹AP2 = [ λ1   α    bT ]
               [ 0    λ2   cT ]
               [ o    o    A3 ]

for some (n - 2)-component row vectors bT and cT,
where A3 is some matrix of order n - 2.
The theorem clearly follows by the above procedure; a formal inductive proof could be given.
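The inductive construction of this proof can be carried out numerically. The following Python sketch (not part of the text; it uses NumPy, and the names here are illustrative) mirrors the proof: find an eigenpair, complete the eigenvector to a basis, transform, and recurse on the trailing block of order n - 1. Each basis completion is chosen orthonormal, so every P built is unitary and P⁻¹ is its conjugate transpose.

```python
import numpy as np

def triangularize(A):
    """Reduce A to upper triangular T = P^(-1) A P by the inductive
    construction of Theorem 1.  P is built unitary here, so
    P^(-1) = P.conj().T."""
    n = A.shape[0]
    if n == 1:
        return np.eye(1, dtype=complex), A.astype(complex)
    # An eigenpair (lambda_1, u_1) of A.
    _, eigvecs = np.linalg.eig(A.astype(complex))
    u1 = eigvecs[:, [0]]
    # Complete u_1 to a basis: QR of [u_1 | I] yields a unitary Q
    # whose first column is u_1 normalized.
    Q, _ = np.linalg.qr(np.hstack([u1, np.eye(n)]))
    B1 = Q.conj().T @ A @ Q       # block form [[lambda_1, a^T], [0, A2]]
    A2 = B1[1:, 1:]               # trailing block of order n - 1
    P2, _ = triangularize(A2)     # recurse, as in the proof
    P = Q @ np.block([[np.eye(1, dtype=complex), np.zeros((1, n - 1))],
                      [np.zeros((n - 1, 1)), P2]])
    return P, P.conj().T @ A @ P

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
P, T = triangularize(A)
assert np.allclose(np.tril(T, -1), 0, atol=1e-8)   # upper triangular
w = np.linalg.eigvals(A)
assert all(np.min(np.abs(w - d)) < 1e-8 for d in np.diag(T))
```

Since the transformations used are unitary, this in fact realizes the stronger unitary triangularization of Schur mentioned in the next paragraph.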
It is easy to prove the related stronger result of Schur stated in Theorem 2.4 of Chapter 4 (see Problem 2.13 (b) of Chapter 4). We turn now to the basic content of this section, which is concerned with the generalization of the concept of distance in n-dimensional linear vector spaces.
The 'distance' between a vector and the null vector, i.e., the origin, is a measure of the 'size' or 'length' of the vector. This generalized notion of distance or size is called a norm. In particular, all such generalizations are required to have the following properties:
(0) To each vector x in the linear space, V say, a unique real number is assigned; this number, denoted by ||x|| or N(x), is called the norm of x iff:

(i) ||x|| ≥ 0 for all x ∈ V, and ||x|| = 0 iff x = o, where o denotes the zero vector (if V ≡ Cn, then o has components oi = 0);

(ii) ||αx|| = |α| · ||x|| for all scalars α and all x ∈ V;

(iii) ||x + y|| ≤ ||x|| + ||y||, the triangle inequality, for all x, y ∈ V.
Some examples of norms in the complex n-dimensional space Cn are
(1a) ||x||1 ≡ |x1| + |x2| + ... + |xn|;

(1b) ||x||2 ≡ (|x1|² + |x2|² + ... + |xn|²)^(1/2);

(1c) ||x||p ≡ (|x1|^p + |x2|^p + ... + |xn|^p)^(1/p),  p ≥ 1;

(1d) ||x||∞ ≡ max1≤j≤n |xj|.
It is an easy exercise for the reader to justify the use of the notation in (1d) by verifying that
lim(p→∞) ||x||p = ||x||∞.
The norm, ||·||2, is frequently called the Euclidean norm as it is just the formula for distance in ordinary three-dimensional Euclidean space extended to dimension n. The norm, ||·||∞, is called the maximum norm or occasionally the uniform norm. In general, ||·||p, for p ≥ 1, is termed the p-norm.
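As a quick numerical illustration (a Python/NumPy sketch, not part of the text), the norms (1a), (1b), and (1d) can be computed directly from their defining sums, and the p-norm of (1c) can be seen to approach the maximum norm as p grows:

```python
import numpy as np

x = np.array([3.0 - 4.0j, 1.0, -2.0])     # a vector in C^3

one_norm = np.sum(np.abs(x))              # (1a): |x1| + ... + |xn|
two_norm = np.sqrt(np.sum(np.abs(x)**2))  # (1b): the Euclidean norm
max_norm = np.max(np.abs(x))              # (1d): the maximum norm

assert np.isclose(one_norm, 8.0)          # |3 - 4i| = 5, so 5 + 1 + 2
assert np.isclose(two_norm, np.sqrt(30.0))
assert np.isclose(max_norm, 5.0)

# (1c): the p-norm tends to the maximum norm as p -> infinity.
def p_norm(x, p):
    return np.sum(np.abs(x)**p) ** (1.0 / p)

for p in (2, 10, 100):
    assert p_norm(x, p) >= max_norm       # p-norm always bounds (1d)
assert abs(p_norm(x, 100) - max_norm) < 1e-2
```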
To verify that (1) actually defines norms, we observe that conditions (0), (i), and (ii) are trivially satisfied. Only the triangle inequality, (iii), offers any difficulty. However,
||x + y||1 = Σj |xj + yj| ≤ Σj (|xj| + |yj|) = ||x||1 + ||y||1,
and
||x + y||∞ = maxj |xj + yj| ≤ maxj (|xj| + |yj|) ≤ ||x||∞ + ||y||∞,
so (1a) and (1d) define norms.
The proof of (iii) for (1b), the Euclidean norm, is based on the well-known Cauchy-Schwarz inequality, which states that

(2) |Σj xj ȳj| ≤ ||x||2 · ||y||2.
To prove this basic result, let |x| and |y| be the n-dimensional vectors with components |xj| and |yj|, j = 1, 2, ..., n, respectively. Then for any real scalar ξ,

0 ≤ Σj (|xj| + ξ|yj|)² = ||x||2² + 2ξ Σj |xj||yj| + ξ²||y||2².

But since the real quadratic polynomial in ξ above does not change sign, its discriminant must be non-positive; i.e.,

(Σj |xj||yj|)² ≤ ||x||2² · ||y||2².

However, we note that

|Σj xj ȳj| ≤ Σj |xj||yj|,

and (2) follows from the above pair of inequalities.
Now we form
N2²(x + y) = Σj |xj + yj|² ≤ Σj (|xj| + |yj|)² = N2²(x) + 2 Σj |xj||yj| + N2²(y).
An application of the Cauchy-Schwarz inequality yields finally
N2(x + y) ≤ N2(x) + N2(y)
and so the Euclidean norm also satisfies the triangle inequality.
The statement that
(3) Np(x + y) ≤ Np(x) + Np(y),  p ≥ 1,

is known as Minkowski's inequality. We do not derive it here, as general p-norms will not be employed further. (A proof of (3) can be found in most advanced calculus texts.)
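A small empirical spot-check of Minkowski's inequality (3), as a Python sketch that is not part of the text, over random complex vectors and several values of p:

```python
import numpy as np

def p_norm(z, p):
    """The p-norm of (1c), p >= 1."""
    return np.sum(np.abs(z)**p) ** (1.0 / p)

rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.standard_normal(6) + 1j * rng.standard_normal(6)
    y = rng.standard_normal(6) + 1j * rng.standard_normal(6)
    for p in (1.0, 1.5, 2.0, 3.0, 10.0):
        # N_p(x + y) <= N_p(x) + N_p(y), with slack for rounding
        assert p_norm(x + y, p) <= p_norm(x, p) + p_norm(y, p) + 1e-12
```

For 0 < p < 1 the inequality fails in general, which is one reason the definition (1c) restricts to p ≥ 1.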
We can show quite generally that all vector norms are continuous functions in Cn. That is,
LEMMA 1. Every vector norm, N(x), is a continuous function of x1, x2, ..., xn, the components of x.
Proof. For any vectors x and δ we have by (iii)

N(x + δ) ≤ N(x) + N(δ),

so that

N(x + δ) - N(x) ≤ N(δ).

On the other hand, by (ii) and (iii),

N(x) = N(x + δ - δ) ≤ N(x + δ) + N(δ),

so that

-N(δ) ≤ N(x + δ) - N(x).

Thus, in general

|N(x + δ) - N(x)| ≤ N(δ).
With the unit vectors {ek}, any δ has the representation

δ = δ1e1 + δ2e2 + ... + δnen.

Using (ii) and (iii) repeatedly implies

(4a) N(δ) ≤ |δ1|N(e1) + |δ2|N(e2) + ... + |δn|N(en) ≤ M N∞(δ),

where

(4b) M ≡ N(e1) + N(e2) + ... + N(en).
Using this result in the previous inequality yields, for any ε > 0 and all δ with N∞(δ) ≤ ε/M,

|N(x + δ) - N(x)| ≤ ε.
This is essentially the definition of continuity for a function of the n variables x1, x2, ..., xn.
See Problem 6 for a mild generalization.
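The proof actually gives more than continuity: combining |N(x + δ) - N(x)| ≤ N(δ) with (4a) shows every norm is Lipschitz with respect to N∞, with constant M from (4b). A Python sketch (not part of the text) checking this bound for N = ||·||1, where N(ek) = 1 for each unit vector and hence M = n:

```python
import numpy as np

n = 5
N = lambda z: np.sum(np.abs(z))        # N = || . ||_1
# (4b): M is the sum of N(e_k) over the unit vectors; here each is 1
M = sum(N(e) for e in np.eye(n))
assert M == n

rng = np.random.default_rng(2)
x = rng.standard_normal(n)
for _ in range(100):
    delta = 1e-3 * rng.standard_normal(n)
    inf_norm = np.max(np.abs(delta))   # N_inf(delta)
    # |N(x + delta) - N(x)| <= M * N_inf(delta), with rounding slack
    assert abs(N(x + delta) - N(x)) <= M * inf_norm + 1e-12
```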
Now we can show that all vector norms are equivalent in the sense of
THEOREM 2. For each pair of vector norms, say N(x) and N′(x), there exist positive constants m and M such that for all x ∈ Cn:

mN′(x) ≤ N(x) ≤ MN′(x).
Proof. The proof need only be given when one of the norms is N∞, since N and N′ are equivalent if they are each equivalent to N∞. Let S ⊂ Cn be defined by

S ≡ {x | N∞(x) = 1, x ∈ Cn}
(this is frequently called the surface of the unit ball in Cn). S is a closed bounded set of points. Then since N(x) is a continuous function (see Lemma 1), we conclude by a theorem of Weierstrass that N(x) attains its minimum and its maximum on S at some points of S. That is, for some x0 ∈ S and x1 ∈ S,
N(x0) = min{N(x) | x ∈ S},   N(x1) = max{N(x) | x ∈ S},
or
0 < N(x0) ≤ N(x) ≤ N(x1) < ∞

for all x ∈ S.
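For concreteness, a Python sketch (not part of the text): taking N = ||·||1 and N′ = ||·||∞ on Cn, the equivalence constants of Theorem 2 can be taken as m = 1 and M = n, and both bounds are attained:

```python
import numpy as np

n = 4
one = lambda z: np.sum(np.abs(z))   # N  = || . ||_1
inf = lambda z: np.max(np.abs(z))   # N' = || . ||_inf

# 1 * ||x||_inf <= ||x||_1 <= n * ||x||_inf for every x in C^n
rng = np.random.default_rng(3)
for _ in range(100):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    assert inf(x) <= one(x) <= n * inf(x) + 1e-12

e1 = np.eye(n)[0]
assert one(e1) == inf(e1)           # m = 1 attained at a unit vector
ones = np.ones(n)
assert one(ones) == n * inf(ones)   # M = n attained at (1, ..., 1)
```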