Mathematical Logic

by Stephen Cole Kleene




Undergraduate students with no prior classroom instruction in mathematical logic will benefit from this evenhanded multipart text. It begins with an elementary but thorough overview of mathematical logic of first order. The treatment extends beyond a single method of formulating logic to offer instruction in a variety of techniques: model theory (truth tables), Hilbert-type proof theory, and proof theory handled through derived rules.
The second part supplements the previously discussed material and introduces some of the newer ideas and the more profound results of twentieth-century logical research. Subsequent chapters explore the study of formal number theory, with surveys of the famous incompleteness and undecidability results of Gödel, Church, Turing, and others. The emphasis in the final chapter reverts to logic, with examinations of Gödel's completeness theorem, Gentzen's theorem, Skolem's paradox and nonstandard models of arithmetic, and other theorems. The author, Stephen Cole Kleene, was Cyrus C. MacDuffee Professor of Mathematics at the University of Wisconsin, Madison. Preface. Bibliography. Theorem and Lemma Numbers: Pages. List of Postulates. Symbols and Notations. Index.

Product Details

ISBN-13: 9780486425337
Publisher: Dover Publications
Publication date: 12/18/2002
Series: Dover Books on Mathematics Series
Edition description: Reprint
Pages: 432
Product dimensions: 5.50(w) x 8.50(h) x 0.88(d)

Read an Excerpt

Mathematical Logic

By Stephen Cole Kleene

Dover Publications, Inc.

Copyright © 1967 Stephen Cole Kleene
All rights reserved.
ISBN: 978-0-486-31707-6



§ 1. Linguistic considerations: formulas. Mathematical logic (also called symbolic logic) is logic treated by mathematical methods. But our title has a double meaning, since we shall be studying the logic that is used in mathematics.

Logic has the important function of saying what follows from what. Every development of mathematics makes use of logic. A familiar example is the presentation of geometry in Euclid's "Elements" (c. 330-320 B.C.), in which theorems are deduced by logic from axioms (or postulates). But any orderly arrangement of the content of mathematics would exhibit logical connections. Similarly, logic is used in organizing scientific knowledge, and as a tool of reasoning and argumentation in daily life.

Now we are proposing to study logic, and indeed by mathematical methods. Here we are confronted by a bit of a paradox. For, how can we treat logic mathematically (or in any systematic way) without using logic in the treatment?

The solution of this paradox is simple, though it will take some time before we can appreciate fully how it works. We simply put the logic that we are studying into one compartment, and the logic that we are using to study it in another. Instead of "compartments", we can speak of "languages". When we are studying logic, the logic we are studying will pertain to one language, which we call the object language, because this language (including its logic) is an object of our study. Our study of this language and its logic, including our use of logic in carrying out the study, we regard as taking place in another language, which we call the observer's language. Or we may speak of the object logic and the observer's logic.

It will be very important as we proceed to keep in mind this distinction between the logic we are studying (the object logic) and our use of logic in studying it (the observer's logic). To any student who is not ready to do so, we suggest that he close the book now, and pick some other subject instead, such as acrostics or beekeeping.

All of logic, like all of physics or all of history, constitutes a very rich and varied discipline. We follow the usual strategy for approaching such disciplines, by picking a small and manageable portion to treat first, after which we can extend our treatment to include some more.

The portion of logic we study first deals with connections between propositions which depend only on how some propositions are constructed out of other propositions that are employed intact, as building blocks, in the construction. This part of logic is called propositional logic or the propositional calculus.

We deal with propositions through declarative sentences which express them in some language (the object language); the propositions are the meanings of the sentences. Declarative sentences express propositions (while interrogatory sentences ask questions and imperative sentences express commands). The same proposition may be expressed by different (declarative) sentences. Thus "John loves Jane" and "Jane is loved by John" express the same proposition, but "John loves Mary" expresses a different proposition. Under the usual definition of > from <, the two sentences "5 < 3" and "3 > 5" express the same proposition (which happens to be false), namely that increasing 5 by a suitable positive quantity will give 3; but "5² − 4² = 10" expresses a different proposition (also false). Each of "5 < 3", "3 > 5" and "5² − 4² = 10" asserts something about the outcome of a mathematical process, which is the same process in the first two cases, but a different one in the third. "3 − 2 = 1" and "(481 − 581) + 101 = 1" express two different propositions (both true).

We save time, and retain flexibility for the applications, by not now describing any particular object language. (Examples will be given later.)

Throughout this chapter, we shall simply assume that we are dealing with one or another object language in which there is a class of (declarative) sentences, consisting of certain sentences (the aforementioned building blocks) and all the further sentences that can be built from them by certain operations, as we describe next. These sentences we call formulas, in deference to the use of mathematical symbolism in them or at least in our names for them.

First, in this language there are to be some unambiguously constituted sentences, whose internal structure we shall ignore (for our study of the propositional calculus) except for the purpose of identifying the sentences.

We call these sentences prime formulas or atoms; and we denote them by capital Roman letters from late in the alphabet, as "P", "Q", "R", ..., "P1", "P2", "P3", .... Distinct such letters shall represent distinct atoms, each of which is to retain its identity throughout any particular investigation in the propositional calculus.

Second, the language is to provide five particular constructions or operations for building new sentences from given sentences. Starting with the prime formulas or atoms, we can use these operations, over and over again, to build other sentences, called composite formulas or molecules, as follows. (The prime formulas and the composite formulas together constitute the formulas.) If each of A and B is a given formula (i.e. either a prime formula, or a composite formula already constructed), then A ~ B, A ⊃ B, A & B and A ∨ B are (composite) formulas. If A is a given formula, then ¬A is a (composite) formula. (The first four operations are "binary" operations, the last is "unary".)
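The inductive definition just given can be sketched as a data type. The representation below is an illustrative one of my own (it is not from the text): atoms carry only a name, and each composite formula records its operation and its one or two immediate components.

```python
# A minimal sketch of formulas as an inductively built data type:
# prime formulas (atoms) plus the five composite constructions.
# The class and field names are my own, not Kleene's.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Atom:
    """A prime formula, identified only by its name ("P", "Q", ...)."""
    name: str

@dataclass(frozen=True)
class Unary:
    """The one unary construction: ¬A."""
    op: str            # "¬"
    a: "Formula"

@dataclass(frozen=True)
class Binary:
    """The four binary constructions: A ~ B, A ⊃ B, A & B, A ∨ B."""
    op: str            # one of "~", "⊃", "&", "∨"
    a: "Formula"
    b: "Formula"

Formula = Union[Atom, Unary, Binary]

# Build the molecule ¬(P & Q) by two steps of composition:
P, Q = Atom("P"), Atom("Q")
f = Unary("¬", Binary("&", P, Q))
print(f)
```

Because the dataclasses are frozen, two occurrences of `Atom("P")` compare equal, mirroring the stipulation that an atom retains its identity throughout an investigation.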

The symbols "~", "⊃", "&", "∨" and "¬" are called propositional connectives. They can be read by using the words shown at the right in the following table; but the symbols are easier to write and manipulate.

equivalence    A ~ B    A is equivalent to B
implication    A ⊃ B    A implies B
conjunction    A & B    A and B
disjunction    A ∨ B    A or B
negation       ¬A       not A

Here we must mention the fact that the natural word languages, such as English, suffer from ambiguities. (Of this, more will be said later.) Logicians are therefore prone to build special symbolic languages. Our unspecified object language may be such a symbolic language, having symbols "~", "⊃", "&", "∨" and "¬" which play roles described accurately below but suggested or approximately described by the words. It comes to nearly the same thing to think of the object language as a suitably restricted and regulated part of a natural language, such as English; then "~", "⊃", "&", "∨" and "¬" can be thought of as names in the observer's language for the verbal expressions at the right in the table.

The names at the left in the table above apply to the propositional connectives or to the formulas constructed using them. Thus "&" is our symbol for conjunction; and A & B is a conjunction, namely the conjunction of A and B. Also A ⊃ B is the implication by A of B; etc.

So that there will be no ambiguity as to which formulas are the atoms, we now stipulate that none of the atoms be of any of the five forms A ~ B, A ⊃ B, A & B, A ∨ B and ¬A which the molecules have. For example, "Socrates is a man", "John loves Jane", "John loves Mary", "5 < 3", "3 > 5", "a + b = c" and "a > 0" (where "a", "b", "c" stand for numbers) could be atoms; then "John loves Jane ∨ John loves Mary", "¬ 5 < 3" and "5 < 3 ~ 3 > 5" would be molecules.

We are using capital Roman letters from the beginning of the alphabet, as "A", "B", "C", ..., "A1", "A2", "A3", ..., to stand for any formulas, not necessarily prime. Distinct such letters "A", "B", "C", ..., "A1", "A2", "A3", ... need not represent distinct formulas (in contrast to "P", "Q", "R", "P1", "P2", "P3", ..., which represent distinct prime formulas).

Composite formulas would sometimes be ambiguous as to the manner in which the symbols are to be associated if we did not introduce parentheses. So we shall write "(A ⊃ B) ⊃ C" or "A ⊃ (B ⊃ C)" and not simply "A ⊃ B ⊃ C". However we can minimize the need for parentheses by assigning decreasing ranks to our propositional connectives, in the order listed:

~, ⊃, &, ∨, ¬

Where there would otherwise be two ways of construing a formula, the connective with the greater rank reaches further. Thus "A ⊃ B & C" shall mean A ⊃ (B & C), and "A ~ B ⊃ C" shall mean A ~ (B ⊃ C). The unary operator ¬ being ranked last, "¬A & B" shall mean (¬A) & B rather than ¬(A & B), and "¬A ∨ B" shall mean (¬A) ∨ B. This practice is familiar in algebra, where "a + bc² = d" means (a + (b(c²))) = d.
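To see the ranking at work, here is a small illustrative parser, a sketch of my own rather than anything in the text. It reads a formula written with one-character stand-ins ~, >, &, v, - for ~, ⊃, &, ∨, ¬ and writes it back with the parentheses the ranks allow us to omit.

```python
# An illustrative parser (my own sketch): restore the omitted parentheses.
# One-character stand-ins: ~   > (for ⊃)   &   v (for ∨)   - (for ¬).
RANKS = ["~", ">", "&", "v"]   # binary connectives, greatest rank first

def parse(tokens):
    return parse_rank(tokens, 0)

def parse_rank(tokens, r):
    """Parse at rank r; a connective of greater rank 'reaches further'."""
    if r == len(RANKS):
        return parse_unary(tokens)
    left = parse_rank(tokens, r + 1)
    if tokens and tokens[0] == RANKS[r]:
        op = tokens.pop(0)
        # Kleene requires explicit parentheses when the same rank repeats
        # ("(A > B) > C" or "A > (B > C)", never "A > B > C"); for
        # simplicity this sketch associates to the right.
        return (op, left, parse_rank(tokens, r))
    return left

def parse_unary(tokens):
    tok = tokens.pop(0)
    if tok == "-":                  # ¬, the unary operator, ranked last
        return ("-", parse_unary(tokens))
    if tok == "(":
        inner = parse(tokens)
        tokens.pop(0)               # discard the matching ")"
        return inner
    return tok                      # an atom: "P", "Q", "R", ...

def show(f):
    """Write a parsed formula back out with full parentheses."""
    if isinstance(f, str):
        return f
    if f[0] == "-":
        return "-" + show(f[1])
    return "(%s %s %s)" % (show(f[1]), f[0], show(f[2]))

print(show(parse(list("P>Q&R"))))   # A ⊃ B & C shall mean A ⊃ (B & C)
print(show(parse(list("-P&Q"))))    # ¬A & B shall mean (¬A) & B
```

The rank loop makes the lower-ranked connective bind its components first, so the higher-ranked connective ends up with the larger scope, exactly as the text stipulates.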

Example 1. In "A ⊃ (B ⊃ C)" the letters "A", "B", "C" stand for formulas constructed from P, Q, R, ..., P1, P2, P3, ... (i.e. from the atoms named by "P", "Q", "R", ..., "P1", "P2", "P3", ...) using (zero or more times) ~, ⊃, &, ∨, ¬, and (as required) parentheses. For example, A ⊃ (B ⊃ C) might be the particular formula P ⊃ ((Q ∨ R) ⊃ P & ¬Q) (so A is P, B is Q ∨ R and C is P & ¬Q). Here a second pair of parentheses has been inserted, but our ranking of the symbols makes parentheses around Q ∨ R superfluous. The parentheses enable us to see how the formula was constructed, starting from the atoms P, Q, R, and using five steps of composition to introduce the five numbered occurrences of propositional connectives.


In a manner obvious from the parentheses or the construction, each occurrence of a connective "connects" or "applies to" or "operates on" one or two parts of the formula, called the scope of that (occurrence of a) connective. The scopes of the connectives can be shown by correspondingly numbered underlines; thus the scope of ⊃4 consists of the two parts Q ∨ R and P & ¬Q.

Exercise 1.1. Identify the scope of each (occurrence of a) propositional connective: (a) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (b) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

§ 2. Model theory: truth tables, validity. Not only are we restricting ourselves in this chapter to the study of the logic of propositions, but in this and later chapters we shall also concern ourselves primarily with a certain kind of logic, called classical logic.

Since the discovery of non-Euclidean geometries by Lobatchevsky (1829) and Bolyai (1833), it has been clear that different systems of geometry are conceptually equally possible. (We shall say a little more about this in § 36.) Similarly, there are different systems of logic. Different theories can be deduced from the same mathematical postulates, the differences depending on the system of logic used to make the deductions. The classical logic, like the Euclidean geometry, is the simplest and the most commonly used in mathematics, science and daily life. In this book we shall find the space for only brief indications of other kinds of logic.

Thus far we have assumed about each prime formula or atom only that it can be identified; i.e. that each time it occurs it can be recognized as the same, and as different from other atoms.

Now we make one further assumption about the atoms, which is characteristic of classical logic. We assume that each atom (or the proposition it expresses) is either true or false but not both.

We are not assuming that we know of each atom whether it is true or false. That knowledge would require us to look into the constitution of the atoms, or to consider facts to which they allude under an agreed interpretation of the words or symbols, none of which is within our purview in the propositional calculus.

Our assumption is thus that, for each atom, there are exactly two possibilities: it may be true, it may be false.

The question now arises: How does the truth or falsity (truth value) of a composite formula or molecule depend upon the truth value(s) of its component prime formula(s) or atom(s)? This will be determined by repeated use of five definitions, given by the following tables. These tables relate the truth value of each molecule to the truth value(s) of its immediate component(s). In the left-hand columns we list all the possible assignments of truth t and falsity f to the immediate component(s). Then in the line (or row) for a given assignment, we show the resulting truth value of each molecule in the column headed by that molecule.

A B | A ~ B | A ⊃ B | A & B | A ∨ B        A | ¬A
t t |   t   |   t   |   t   |   t          t |  f
t f |   f   |   f   |   f   |   t          f |  t
f t |   f   |   t   |   f   |   t
f f |   t   |   t   |   f   |   f

Thus A ~ B is true exactly when A and B have the same truth value (hence the reading "equivalent", i.e. "equal valued", for ~); A ⊃ B is false exactly when A is true and B is false; A & B is true exactly when A and B are both true; A ∨ B is false exactly when both are false; and ¬A is true exactly when A is false.
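The five definitions can be transcribed directly as truth functions. In the sketch below (the function names are my own stand-ins for ~, ⊃, &, ∨, ¬) Python's True and False play the roles of t and f:

```python
# The five classical truth-value definitions as functions (t = True, f = False).
# Function names are my own stand-ins for ~, ⊃, &, ∨, ¬.
def equiv(a, b):    # A ~ B: true exactly when A and B have the same value
    return a == b

def implies(a, b):  # A ⊃ B: false exactly when A is true and B is false
    return (not a) or b

def conj(a, b):     # A & B: true exactly when both are true
    return a and b

def disj(a, b):     # A ∨ B: false exactly when both are false
    return a or b

def neg(a):         # ¬A: true exactly when A is false
    return not a

# Reproduce the column for A ⊃ B over the four possible assignments:
for a in (True, False):
    for b in (True, False):
        print("tf"[not a], "tf"[not b], "|", "tf"[not implies(a, b)])
```

Running the loop prints the four lines of the implication table: the only assignment yielding f is a true antecedent with a false consequent.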

Some controversy has arisen about the name "implication", and the reading "implies", for our ⊃. Say A is "the moon is made of green cheese" and B is "2 + 2 = 5". Then A ⊃ B is true under our table (because A is false), even though there is no connection of ideas between A and B. Similarly, if B is "2 + 2 = 4", A ⊃ B is true (because B is true), quite apart from whether A bears any relationship to "2 + 2 = 4". This is considered paradoxical by some writers (Lewis 1912, 1917, Lewis and Langford 1932).

In modern mathematics the name "multiplication" is often used for various mathematical operations that behave more or less analogously to the arithmetical one called "multiplication". Similarly, we find it convenient to use the name "implication" to designate the operation defined by the second truth table above; and then in our logical discussions we usually read A ⊃ B as "A implies B", even though "if A then B" or "A only if B" probably renders the meaning better in everyday English. This "implication", and the present "equivalence", are called more specifically "material implication" and "material equivalence".

It is, of course, possible to be interested in other senses of "implication"; but then one must have recourse to ways of defining it other than by a "two-valued truth table". Our definition is the only reasonable one with such a table.

A related question is why we should want to assert a material implication A ⊃ B, when if A is true we could more simply assert B (or if our hearers don't know that A is true, we could more informatively assert A & B), and if A is false we could more simply say nothing. But we ordinarily assert sentences of the form "If A, then B" when we don't know whether A is true or not. For example, before an election I might say [1] "If our candidate for President carries the state by 500,000, then our man for the Senate will also win". This form of statement enables me to predict what will happen in one eventuality without attempting to say more. If it turns out that our candidate for President doesn't carry the state by 500,000, my prediction will not have been proved false. Since we are committed here to a two-valued logic, my statement should then be regarded as true, though perhaps uninteresting. If when I speak, returns are in showing our candidate ahead by a safe 500,000, I would more likely say [1a] "Our man for the Senate will win" or [1b] "Since our candidate for President is carrying the state by 500,000, our man for the Senate will win". But [1] would not have become false, just partially redundant and thus unnatural for me to say (unless I haven't heard the latest election returns).

To take an analogous mathematical example, suppose a positive integer n > 1 is written on a piece of paper in your pocket, and I do not know what the integer is. I could truthfully say [2] "If n is odd, then xⁿ + yⁿ can be factored". By saying this, I am claiming that, when you produce the paper showing the value of n, then I will be able to factor xⁿ + yⁿ for the n you produce if it turns out to be odd (and I am making no claim about the factorability of xⁿ + yⁿ in the contrary case). Thus, if you have bet that I am wrong, to settle the bet you produce the number n. If for example it is 3, I then show you the factorization (x + y)(x² − xy + y²), and you pay me. If for example it is 4 (or 6), I win automatically.
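The claim behind [2] rests on a standard algebraic identity, which the excerpt assumes rather than states: for odd n,

```latex
x^n + y^n = (x + y)\left(x^{n-1} - x^{n-2}y + x^{n-3}y^2 - \cdots - x\,y^{n-2} + y^{n-1}\right)
\qquad (n \text{ odd}).
```

The signs alternate, and since n − 1 is even the last term is +yⁿ⁻¹; for n = 3 this is exactly the factorization (x + y)(x² − xy + y²) shown above. For even n no such factorization with x + y exists in general, which is why the contrary case carries no claim.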


Excerpted from Mathematical Logic by Stephen Cole Kleene. Copyright © 1967 Stephen Cole Kleene. Excerpted by permission of Dover Publications, Inc..
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

1. Linguistic considerations: formulas
2. "Model theory: truth tables,validity "
3. "Model theory: the substitution rule, a collection of valid formulas"
4. Model theory: implication and equivalence
5. Model theory: chains of equivalences
6. Model theory: duality
7. Model theory: valid consequence
8. Model theory: condensed truth tables
9. Proof theory: provability and deducibility
10. Proof theory: the deduction theorem
11. "Proof theory: consistency, introduction and elimination rules"
12. Proof theory: completeness
13. Proof theory: use of derived rules
14. Applications to ordinary language: analysis of arguments
15. Applications to ordinary language: incompletely stated arguments
16. "Linguistic considerations: formulas, free and bound occurrences of variables"
17. "Model theory: domains, validity"
18. Model theory: basic results on validity
19. Model theory: further results on validity
20. Model theory: valid consequence
21. Proof theory: provability and deducibility
22. Proof theory: the deduction theorem
23. "Proof theory: consistency, introduction and elimination rules"
24. "Proof theory: replacement, chains of equivalences"
25. "Proof theory: alterations of quantifiers, prenex form"
26. "Applications to ordinary language: sets, Aristotelian categorical forms"
27. Applications to ordinary language: more on translating words into symbols
28. "Functions, terms"
29. Equality
30. "Equality vs. equivalence, extensionality"
31. Descriptions
32. Countable sets
33. Cantor's diagonal method
34. Abstract sets
35. The paradoxes
36. Axiomatic thinking vs. intuitive thinking in mathematics
37. "Formal systems, metamathematics"
38. Formal number theory
39. Some other formal systems
40. Decision and computation procedures
41. "Turing machines, Church's thesis"
42. Church's theorem (via Turing machines)
43. Applications to formal number theory: undecidability (Church) and incompleteness (Gödel's theorem)
44. Applications to formal number theory: consistency proofs (Gödel's second theorem)
45. "Application to the predicate calculus (Church, Turing)"
46. "Degrees of unsolvability (Post), hierarchies (Kleene, Mostowski)."
47. Undecidability and incompleteness using only simple consistency (Rosser)
48. Gödel's completeness theorem: introduction
49. Gödel's completeness theorem: the basic discovery
50. "Gödel's completeness theorem with a Gentzen-type formal system, the Löwenheim-Skolem theorem"
51. Gödel's completeness theorem (with a Hilbert-type formal system)
52. "Gödel's completeness theorem, and the Löwenheim-Skolem theorem, in the predicate calculus with equality"
53. Skolen's paradox and nonstandard models of arithmetic
54. Gentzen's theorem
55. "Permutability, Herbrand's theorem"
56. Craig's interpolation theorem
57. "Beth's theorem on definability, Robinson's consistency theorem"
