
## Overview

"This book is a radical departure from all previous concepts of advanced calculus," declared the Bulletin of the American Mathematics Society, "and the nature of this departure merits serious study of the book by everyone interested in undergraduate education in mathematics." Classroom-tested in a Princeton University honors course, it offers students a unified introduction to advanced calculus.
Starting with an abstract treatment of vector spaces and linear transformations, the authors introduce a single basic derivative in an invariant form. All other derivatives — gradient, divergence, curl, and exterior — are obtained from it by specialization. The corresponding theory of integration is likewise unified, and the various multiple integral theorems of advanced calculus appear as special cases of a general Stokes formula. The text concludes by applying these concepts to analytic functions of complex variables.

## Product Details

ISBN-13:
9780486173740
Publisher:
Dover Publications
Publication date:
01/31/2013
Series:
Dover Books on Mathematics
Sold by:
Barnes & Noble
Format:
NOOK Book
Pages:
560
File size:
19 MB
Note:
This product may take a few minutes to download.

## Read an Excerpt

By H. K. Nickerson, D. C. Spencer, N. E. Steenrod

#### Dover Publications, Inc.

ISBN: 978-0-486-17374-0

CHAPTER 1

THE ALGEBRA OF VECTOR SPACES

§1. Axioms

1.1. Definition. A vector space V is a set, whose elements are called vectors, together with two operations. The first operation, called addition, assigns to each pair of vectors A, B a vector, denoted by A + B, called their sum. The second operation, called multiplication by a scalar, assigns to each vector A and each real number x a vector denoted by xA. The two operations are required to have the following eight properties:

Axiom 1. A + B = B + A for each pair of vectors A, B. (I.e. addition is commutative.)

Axiom 2. (A + B) + C = A + (B + C) for each triple of vectors A, B, C. (I.e. addition is associative.)

Axiom 3. There is a unique vector **0**, called the zero vector, such that **0** + A = A for each vector A.

Axiom 4. To each vector A there corresponds a unique vector, denoted by -A, such that A + (-A) = **0**.

Axiom 5. x(A + B) = xA + xB for each real number x and each pair of vectors A, B. (I.e. multiplication is distributive with respect to vector addition.)

Axiom 6. (x + y)A = xA + yA for each pair x, y of real numbers and each vector A. (I.e. multiplication is distributive with respect to scalar addition.)

Axiom 7. (xy)A = x(yA) for each pair x, y of real numbers and each vector A.

Axiom 8. For each vector A,

(i) 0A = **0**,

(ii) 1A = A,

(iii) (-1)A = -A.

1.2. Definition. The difference A - B of two vectors is defined to be the sum A + (-B).

The subsequent development of the theory of vector spaces will be based on the above axioms as our starting point. There are other approaches to the subject in which the vector spaces are constructed. For example, starting with a euclidean space, we could define a vector to be an oriented line segment. Or, again, we could define a vector to be a sequence (x1, ..., xn) of n real numbers. These approaches give particular vector spaces having properties not possessed by all vector spaces. The advantages of the axiomatic approach are that the results which will be obtained apply to all vector spaces, and the axioms supply a firm starting point for a logical development.

§2. Redundancy

The axioms stated above are redundant. For example, the word "unique" in Axiom 3 can be omitted. For suppose **0** and **0**' are two vectors satisfying **0** + A = A and **0**' + A = A for every A. In the first identity, take A = **0**'; and in the second, take A = **0**. Using Axiom 1, we obtain

**0**' = **0** + **0**' = **0**' + **0** = **0**.

This proves the uniqueness.

The word "unique" can likewise be omitted from Axiom 4.

For suppose A, B, C are three vectors such that

A + B = **0** and A + C = **0**.

Using these relations and Axioms 1, 2 and 3, we obtain

B = **0** + B = (A + C) + B = (C + A) + B = C + (A + B) = C + **0** = **0** + C = C.

Therefore B = C, and so there can be at most one candidate for -A.

Axiom 8(i) is a consequence of the preceding axioms:

**0** = 0A + (-(0A)) = ((0 + 0)A) + (-(0A)) = (0A + 0A) + (-(0A)) = 0A + (0A + (-(0A))) = 0A + **0** = **0** + 0A = 0A,

using Axiom 4, Axiom 6, Axiom 2, and Axioms 1 and 3 in turn.

§3. Cartesian spaces

3.1. Definition. The cartesian k-dimensional space, denoted by Rk, is the set of all sequences (a1, a2, ..., ak) of k real numbers together with the operations

(a1, a2, ..., ak) + (b1, b2, ..., bk) = (a1 + b1, a2 + b2, ..., ak + bk)

and

x(a1, a2, ..., ak) = (xa1, xa2, ..., xak).

In particular, R1 = R is the set of real numbers with the usual addition and multiplication. The number ai is called the ith component of (a1, a2, ..., ak), i = 1, ..., k.

3.2. Theorem. For each integer k > 0, Rk is a vector space.

Proof. The proofs of Axioms 1 through 8 are based on the axioms for the real numbers R.

Let A = (a1, a2, ..., ak), B = (b1, b2, ..., bk), etc. For each i = 1, ..., k, the ith component of A + B is ai + bi, and that of B + A is bi + ai. Since the addition of real numbers is commutative, ai + bi = bi + ai. This implies A + B = B + A; hence Axiom 1 is true.

The ith component of (A + B) + C is (ai + bi) + ci, and that of A + (B + C) is ai + (bi + ci). Thus the associative law for real numbers implies Axiom 2.

Let **0** = (0, 0, ..., 0) be the sequence each of whose components is zero. Since 0 + ai = ai, it follows that **0** + A = A. This proves Axiom 3, since the uniqueness part of the axiom is redundant (see §2).

If A = (a1, a2, ..., ak), define -A to be (-a1, -a2, ..., -ak). Then A + (-A) = **0**. This proves Axiom 4 (uniqueness is again redundant).

If x is a real number, the ith component of x(A + B) is, by definition, x(ai + bi); and that of xA + xB is, by definition, xai + xbi. Thus the distributive law for real numbers implies Axiom 5.

The verifications of Axioms 6, 7 and 8 are similar and are left to the reader.
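The componentwise operations of Definition 3.1, and the axiom checks in the proof above, can be spot-checked numerically. A minimal Python sketch (the names `add` and `scale` are ours, not the book's):

```python
# Componentwise operations on R^k (Definition 3.1), modeled as tuples.
def add(a, b):
    return tuple(ai + bi for ai, bi in zip(a, b))

def scale(x, a):
    return tuple(x * ai for ai in a)

# Spot-check Axioms 1-5 for sample vectors in R^3.
A, B, C = (1.0, 2.0, 3.0), (4.0, -1.0, 0.5), (0.0, 2.5, -2.0)
zero = (0.0, 0.0, 0.0)
assert add(A, B) == add(B, A)                                      # Axiom 1
assert add(add(A, B), C) == add(A, add(B, C))                      # Axiom 2
assert add(zero, A) == A                                           # Axiom 3
assert add(A, scale(-1.0, A)) == zero                              # Axiom 4
assert scale(3.0, add(A, B)) == add(scale(3.0, A), scale(3.0, B))  # Axiom 5
```

Exact equality holds here because the sample components are small binary fractions; for general floats one would compare within a tolerance.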

§4. Exercises

1. Verify that Rk satisfies Axioms 6, 7 and 8.

2. Prove that Axiom 8(iii) is redundant. Show also that (-x)A = -(xA) for each x and each A.

3. Show that Axiom 8(ii) is not a consequence of the preceding axioms by constructing a set with two operations which satisfy the preceding axioms but not 8(ii). (Hint: Consider the real numbers with multiplication redefined by xy = 0 for all x and y.) Can such an example satisfy Axiom 8(iii)?

4. Show that A + A = 2A for each A.

5. Show that x**0** = **0** for each x.

6. If x ≠ 0 and xA = **0**, show that A = **0**.

7. If x and A are such that xA = **0**, show that either x = 0 or A = **0**.

8. Show that the set consisting of a single vector **0** is a vector space.

9. If a vector A is such that A = -A, then A = **0**.

10. If a vector space contains some vector other than **0**, show that it contains infinitely many distinct vectors. (Hint: Consider A, 2A, 3A, etc.)

11. Let D be any non-empty set, and define RD to be the set of all functions having domain D and values in R. If f and g are two such functions, their sum f + g is the element of RD defined by

(f + g)(d) = f(d) + g(d) for each d in D.

If f is in RD and x is a real number, let xf be the element of RD defined by

(xf)(d) = xf(d) for each d in D.

Show that RD is a vector space with respect to these operations.

12. Let V be a vector space and let D be a nonempty set. Let VD be the set of all functions with domain D and values in V. Define sum and product as in Exercise 11, and show that VD is a vector space.

13. A sum of four vectors A + B + C + D may be associated (parenthesized) in five ways, e.g. (A + (B + C)) + D. Show that all five sums are equal, and therefore A + B + C + D makes sense without parentheses.

14. Show that A + B + C + D = B + D + C + A.
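The function space RD of Exercise 11 is easy to model directly: the pointwise sum and scalar multiple become closures. A sketch (the helper names and the sample functions f, g are ours, chosen only for illustration):

```python
# Pointwise operations on R^D (Exercise 11): elements of R^D are
# arbitrary functions from D into R.
def f_plus_g(f, g):
    return lambda d: f(d) + g(d)      # (f + g)(d) = f(d) + g(d)

def x_times_f(x, f):
    return lambda d: x * f(d)         # (xf)(d) = x f(d)

# Sample elements of R^D with D = R.
f = lambda d: d * d
g = lambda d: 2 * d + 1
assert f_plus_g(f, g)(3) == f(3) + g(3)     # 9 + 7 = 16
assert x_times_f(5, f)(2) == 5 * f(2)       # 5 * 4 = 20
```

The zero vector of RD is the constant function d ↦ 0, and -f is the function d ↦ -f(d).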

§5. Associativity and commutativity

5.1. Proposition. If k is an integer ≥ 3, then any two ways of associating a sum A1 + ... + Ak of k vectors give the same sum. Consequently parentheses may be dropped in such sums.

Proof. The proof proceeds by induction on the number of vectors. Axiom 2 gives the case of 3 vectors. Suppose now that k > 3, and that the theorem is true for sums involving fewer than k vectors. We shall show that the sum of k vectors obtained from any method M of association equals the sum obtained from the standard association M0, obtained by adding each term in order, thus:

(... (((A1 + A2) + A3) + A4) ...) + Ak.

A method M must have a last addition in which, for some integer i with 1 ≤ i < k, a sum of A1 + ... + Ai is added to a sum of Ai+1 + ... + Ak. If i = k - 1, the last addition has the form

(A1 + ... + Ak-1) + Ak.

The part in parentheses has fewer than k terms and, by the inductive hypothesis, is equal to the sum obtained by the standard association on k - 1 terms. This converts the full sum to the standard association on k terms. If i = k - 2, it has the form

(A1 + ... + Ak-2) + (Ak-1 + Ak)

which equals

((A1 + ... + Ak-2) + Ak-1) + Ak

by Axiom 2 (treating A1 + ... + Ak-2 as a single vector). By the inductive hypothesis, the sum of the first k - 1 terms is equal to the sum obtained from the standard association. This converts the full sum to the standard association on k terms. Finally, suppose i < k - 2. Since Ai+1 + ... + Ak has fewer than k terms, the inductive hypothesis asserts that its sum is equal to a sum of the form (Ai+1 + ... + Ak-1) + Ak. The full sum has the form

(A1 + ... + Ai) + ((Ai+1 + ... + Ak-1) + Ak) = ((A1 + ... + Ai) + (Ai+1 + ... + Ak-1)) + Ak

by Axiom 2 applied to the three vectors A1 + ... + Ai, Ai+1 + ... + Ak-1 and Ak. The inductive hypothesis permits us to reassociate the sum of the first k - 1 terms into the standard association. This gives the standard association on k terms.

The theorem just proved is called the general associative law; it says in effect that parentheses may be omitted in the writing of sums. There is a general commutative law as follows.

5.2. Proposition. The sum of any number of terms is independent of the ordering of the terms.

The proof is left to the student. The idea of the proof is to show that one can pass from any order to any other by a succession of steps each of which is an interchange of two adjacent terms.

§6. Notations

The symbols U, V, W will usually denote vector spaces. Vectors will usually be denoted by A, B, C, X, Y, Z. The symbol R stands for the real number system, and a, b, c, x, y, z will usually represent real numbers (= scalars). Rk is the vector space defined in 3.1. The symbols i, j, k, l, m, n will usually denote integers.

We shall use the symbol ∈ as an abbreviation for "is an element of". Thus p ∈ Q should be read: p is an element of the set Q. For example, x ∈ R means that x is a real number, and A ∈ V means that A is a vector in the vector space V.

The symbol ⊂ is an abbreviation for "is a subset of", or, equally well, "is contained in". Thus P ⊂ Q means that each element of the set P is also an element of Q (p ∈ P implies p ∈ Q). It is always true that Q ⊂ Q.

If P and Q are sets, the set obtained by uniting the two sets is denoted by P ∪ Q and is called the union of P and Q. Thus r ∈ P ∪ Q is equivalent to: r ∈ P or r ∈ Q or both. For example, if P is the interval [1, 3] of real numbers and Q is the interval [2, 5], then P ∪ Q is the interval [1, 5]. In case P ⊂ Q, then P ∪ Q = Q.

It is convenient to speak of an "empty set". It is denoted by ∅ and is distinguished by the property of having no elements. If we write P ∩ Q = ∅, we mean that P and Q have no element in common. Obvious tautologies are

∅ ⊂ P, ∅ ∪ Q = Q, ∅ ∩ P = ∅.

§7. Linear subspaces

7.1. Definition. A non-empty subset U of a vector space V is called a linear subspace of V if it satisfies the conditions:

(i) if A ∈ U and B ∈ U, then A + B ∈ U,

(ii) if A ∈ U and x ∈ R, then xA ∈ U.

These conditions assert that the two operations of the vector space V give operations in U.

7.2. Proposition. U is itself a vector space with respect to these operations.

Proof. The properties expressed by Axioms 1, 2, 5, 6, 7, 8 are automatically inherited by U. As for Axiom 3, A ∈ U implies 0A ∈ U by (ii). Since 0A = **0** (Axiom 8), it follows that **0** ∈ U; hence Axiom 3 holds in U. Similarly, if A ∈ U, then (-1)A ∈ U by (ii). Since (-1)A = -A (Axiom 8), it follows that -A ∈ U; hence Axiom 4 holds in U.

The addition and multiplication in a linear subspace will always be assumed to be the ones it inherits from the whole space.

It is obvious that the subset of V consisting of the single element **0** is a linear subspace. It is also trivially true that V is a linear subspace of V. Again, if U is a linear subspace of V, and if U' is a linear subspace of U, then U' is a linear subspace of V.

7.3. Proposition. If V is a vector space and {U} is any family of linear subspaces of V, then the vectors common to all the subspaces in {U} form a linear subspace of V, denoted by ∩{U}.

Proof. Let A ∈ ∩{U}, B ∈ ∩{U}, and x ∈ R. Then, for each U ∈ {U}, we have A ∈ U and B ∈ U. Since U is a linear subspace, it follows that A + B ∈ U and xA ∈ U. Since these relations hold for each U ∈ {U}, it follows that A + B ∈ ∩{U} and xA ∈ ∩{U}. Therefore ∩{U} is a linear subspace.

7.4. Definition. If V is a vector space and D is a non-empty subset of V, then any vector obtained as a sum

x1A1 + x2A2 + ... + xkAk

(abbreviated ∑ xiAi), where A1, ..., Ak are all in D, and x1, ..., xk are any elements of R, is called a finite linear combination of the elements of D. Let L(D) denote the set of all finite linear combinations of the elements of D. It is clearly a linear subspace of V, and it is called the linear subspace spanned by D. We make the convention L(∅) = {**0**}.

7.5. Proposition. D ⊂ L(D).

For, if A [member of] D, then A = 1A is a finite linear combination of elements of D (with k = 1).

7.6. Proposition. If U is a linear subspace of V, and if D is a subset of U, then L(D) ⊂ U. In particular, L(U) = U.

The proof is obvious.

Remark. A second method of constructing L(D) is the following: Define L'(D) to be the common part of all linear subspaces of V which contain D. By Proposition 7.3, L'(D) is a linear subspace. Since L'(D) contains D, Proposition 7.6 gives L(D) ⊂ L'(D). But L(D) is one of the family of linear subspaces whose common part is L'(D). Therefore L'(D) ⊂ L(D). The two inclusions L(D) ⊂ L'(D) and L'(D) ⊂ L(D) imply L(D) = L'(D). To summarize, L(D) is the smallest linear subspace of V containing D.
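In Rk the definition of L(D) can be made computational: for a finite D, B lies in L(D) exactly when the linear system x1A1 + ... + xnAn = B has a solution, which Gaussian elimination over the rationals decides. A sketch under that reading (the helper `in_span` is ours, not the book's):

```python
from fractions import Fraction

def in_span(D, B):
    """Decide whether B lies in L(D) for a finite list D of vectors in R^k,
    by row-reducing the augmented system  x1*A1 + ... + xn*An = B."""
    k, n = len(B), len(D)
    # Row i collects the i-th component of each A in D, then of B.
    rows = [[Fraction(A[i]) for A in D] + [Fraction(B[i])] for i in range(k)]
    pivot_row = 0
    for col in range(n):
        piv = next((r for r in range(pivot_row, k) if rows[r][col] != 0), None)
        if piv is None:
            continue
        rows[pivot_row], rows[piv] = rows[piv], rows[pivot_row]
        for r in range(k):
            if r != pivot_row and rows[r][col] != 0:
                factor = rows[r][col] / rows[pivot_row][col]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    # The system is consistent iff no reduced row reads 0 = nonzero.
    return all(any(row[:n]) or row[n] == 0 for row in rows)

D = [(1, 0, 0), (0, 1, 0)]            # spans the plane x3 = 0 in R^3
assert in_span(D, (3, -2, 0))         # 3*A1 - 2*A2
assert not in_span(D, (0, 0, 1))
```

Exact rational arithmetic via `Fraction` avoids the rounding questions a floating-point elimination would raise; note that an empty D makes the routine accept only B = **0**, matching the convention for L(∅).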

§8. Exercises

1. Show that U is a linear subspace of V in each of the following cases:

(a) V = R3 and U = set of triples (x1, x2, x3) such that

x1 + x2 + x3 = 0.

(b) V = R3 and U = set of triples (x1, x2, x3) such that x3 = 0.

(c) (See Exercise 4.11.) V = RD and U = RD', where D' ⊂ D.

(d) V = RR, i.e. V = set of all real-valued functions of a real variable, and U = the subset of continuous functions.
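Case (a) above can be spot-checked numerically; a small sketch of subspace conditions 7.1(i) and (ii) (the helper `in_U` and the sample vectors are ours):

```python
# Exercise 1(a): U = {(x1, x2, x3) in R^3 : x1 + x2 + x3 = 0}.
def in_U(v):
    return sum(v) == 0

def add(a, b):
    return tuple(ai + bi for ai, bi in zip(a, b))

def scale(x, a):
    return tuple(x * ai for ai in a)

A, B = (1, 2, -3), (4, -4, 0)
assert in_U(A) and in_U(B)
assert in_U(add(A, B))       # 7.1(i): U is closed under addition
assert in_U(scale(7, A))     # 7.1(ii): U is closed under scalar multiples
```

Integer samples keep the membership test exact; a handful of checks like these is evidence, not a proof, which Exercise 1 still asks for.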

(Continues...)

Excerpted from Advanced Calculus by H. K. Nickerson, D. C. Spencer, N. E. Steenrod. Copyright © 2014 Dover Publications, Inc. Excerpted by permission of Dover Publications, Inc.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.
