This reader-friendly volume groups the patterns of mathematics into five archetypes: numbers, space, logic, infinity, and information. Rudy Rucker presents an accessible introduction to each of these important areas, reflecting intelligence gathered from the frontiers of mathematical thought. More than 100 drawings illuminate explorations of digital versus


- ISBN-13: 9780486782195
- Publisher: Dover Publications
- Publication date: 11/12/2013
- Format: NOOK Book
- Pages: 336


CHAPTER 1

**NUMBER**

**Zero and One**

For one reason or another, we humans very commonly use 0 and 1 as an example of the most basic possible type of distinction. We think in terms of many other such distinctions — off and on, negative and positive, night and day, woman and man — but, at least to a mathematician, zero and one have a special appeal.

What is it about zero and one? Suppose I free-associate a little.

I think it is significant that the international hand gesture for coitus consists of a forefinger (1) bustling in a thumb and forefinger loop (0). This key archetype finds socially acceptable expression as the fat lady and the thin man. The symbol 0 seems egg-like, female, while 1 is spermlike and male. Can this really be an accident? A study of the history of mathematics shows that our present-day symbols were only adopted after centuries of trial and error. It seems likely that the symbols to survive are those that best "fit" our ingrained modes of thought.

An egg is round, and a sperm is skinny. There's only one egg, but there are lots and lots of sperm. The sperm are active, while the egg just waits. The lovely round woman is besieged by suitors seeking rest, seeking completion.

Formally speaking, both zero and one are undefinable. The primitive concepts "nothing" and "something" cannot be explained in terms of anything simpler. We understand these concepts only because they are built into the world we live in.

Before the beginning of time, the universe is like a 0, an egg. The egg hatches, and out comes 1, the chicken. Chickens here, chickens there, chickens, chickens everywhere.

Why, incidentally, does the egg hatch? Why is there something instead of nothing? Well, if there *weren't* any somethings, then we wouldn't be here asking about it.... Is this enough of an answer?

Occasionally, in moments of deep absorption, all distinctions may seem to fall away, and you do have a kind of 0 experience. As long as you are, in some sense, in touch with the void, you aren't going to be asking, "Why is there something instead of nothing?" But the moment passes, and you're a regular person again. Cluck, cluck.

One and zero are like Punch and Judy, endlessly acting their play. What is perhaps astonishing is that mankind has managed to build up a science based on this play: the science of number.

**Numbers and Logs**

The simplest possible numeration system is the "tally" system. Here a number *N* is represented by a list of *N* ones. Ordinary finger-counting uses this system (**Fig. 20**).

The tally system is awkward for expressing big numbers. The Roman numeration system is considerably more efficient. Recall that the Romans used I, V, X, L, C, D, and M to stand for, respectively, 1, 5, 10, 50, 100, 500, and 1000. Writing X instead of 1111111111 is a big savings. One flaw with the Roman system is that you can't always read a Roman number one letter at a time — to know that IV is 4 and XL is 40, you have to see two letters together. This makes Roman numbers hard to read. How many of us have seen the Roman number MDCCLXXVI on the back of a dollar bill without realizing that it's supposed to be 1776? Another problem with the Roman system is that it peters out at M, so that if I want to express eleven thousand, I have to write MMMMMMMMMMM. Out past M, something like the tally system sets in again.
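The two-letter lookahead described above can be made concrete. Here is a minimal sketch of a Roman-numeral reader (the function and its structure are my own illustration, not from the text): a letter counts as negative exactly when a bigger letter follows it.

```python
# Value of each Roman letter, as listed in the text.
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s):
    total = 0
    for i, ch in enumerate(s):
        v = VALUES[ch]
        # Subtractive pair (IV, XL, ...): we must peek one letter ahead,
        # which is exactly why Roman numbers can't be read letter by letter.
        if i + 1 < len(s) and VALUES[s[i + 1]] > v:
            total -= v
        else:
            total += v
    return total

print(roman_to_int("MDCCLXXVI"))  # the number on the dollar bill: 1776
print(roman_to_int("IV"))         # 4
```
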

These days, every advanced culture on Earth uses the familiar base-ten system of numeration. Quantities are expressed as sums of units, groups of ten, groups of ten tens, groups of ten groups of ten tens, and so on:

39TEN = 3 groups of ten and 9 units

322TEN = 3 groups of ten tens, 2 groups of ten, and 2 units

1987TEN = 1 group of ten groups of ten tens, 9 groups of ten tens, 8 groups of ten, and 7 units.

The subscript "TEN" is a reminder that our number system is based on groupings of ten. It is a little hard to grasp that there is nothing sacred about the base-ten numeration system. Simple-minded as it may seem, the only reason we use ten is that we have ten fingers. A numeration system could equally well be based on, say, three — with groups of three, groups of three threes, groups of three groups of three threes, and so on. To make this clear, take a look at the different ways in which 54 units are broken up and described in numeration systems based on two through ten.
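The "grouping" idea can be checked mechanically. The following sketch (my own illustration, assuming the usual repeated-division method) expresses 54 in each base from two through ten:

```python
# Express n in base b by repeated grouping: the remainder is the units
# left over, and the quotient is the number of groups of b to carry on with.
def to_base(n, b):
    digits = []
    while n > 0:
        digits.append(n % b)   # units left over at this level
        n //= b                # bundle the rest into groups of b
    return "".join(str(d) for d in reversed(digits)) or "0"

for base in range(2, 11):
    print(f"54 in base {base}: {to_base(54, base)}")
```

In base three, for instance, 54 comes out as 2000, since 54 is exactly 2 groups of three threes of threes (2 × 27).
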

Digital computers use the "binary" or base-two number system. One reason for this is that computer memories are based on internal switches which can be set two ways: *on* or *off.* By letting on be 1 and off be 0, computers can represent numbers by setting their memory switches in various arrangements.

By applying a binary system to the positions of your ten fingers, you can actually count up to 1,023. This is done as follows. Lay your two hands down side by side, palms up as in **Fig. 22**. Running from right to left, associate each digit with a successive number from the "doubling sequence": 1, 2, 4, 8, 16, 32, 64, 128, 256, and 512. These are, of course, powers of two. The "grouping by twos" process suggested in **Fig. 21** indicates that each whole number can be represented in a unique way as a sum of some numbers from the doubling sequence.

If you stick up the appropriate finger for each doubling-sequence number used in a given number's breakdown, you get a finger pattern specific to that number, as illustrated in **Fig. 23**. To the right of the finger patterns, I have written the corresponding base-two notation for each pattern.

Using only one hand, you can represent any number between 00000TWO and 11111TWO, which is any number between 0 and 31TEN; 32TEN possibilities in all. If you use your toes as well as your hands (though raising and lowering toes is not easy), you could reach 11111111111111111111TWO, which is 1,048,575TEN, or a bit more than a million.
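The finger patterns above amount to decomposing a number into terms of the doubling sequence. A small sketch (the function name is my own) picks out which of the ten fingers to raise:

```python
# Decompose n into doubling-sequence terms (powers of two), largest first,
# i.e. decide which fingers to raise in the binary finger-counting scheme.
def raised_fingers(n):
    powers = []
    for p in (512, 256, 128, 64, 32, 16, 8, 4, 2, 1):  # ten fingers
        if n >= p:
            powers.append(p)
            n -= p
    return powers

print(raised_fingers(54))         # [32, 16, 4, 2]
print(sum(raised_fingers(1023)))  # all ten fingers up: 1023
```

Because each power of two is bigger than the sum of all the smaller ones, this greedy breakdown is the unique one the text describes.
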

If we try to base a numeration system on one, we end up with the tally system, because all powers of 1 are equal to 1. Two then has the virtue of being the smallest number that can be used as the base of a numeration system. For purposes of analyzing how much information a decision among *N* possibilities involves, it is good to think in terms of base two.

To choose among thirty-two possibilities is to make five 0/1 choices in writing out a five-digit binary number. Another way of putting this is to say that a choice among thirty-two possibilities requires five bits of information, where a bit of information is a single choice between 0 and 1. It is significant that thirty-two is two to the *fifth* power, and a choice among thirty-two possibilities takes *five* bits of information. In general, choosing among two to the *N* possibilities requires *N* bits of information. Making the choice is like following a path that has *N* binary forks in it.

To express this insight more clearly, we need to talk about logarithms. Information theory is usually formulated in terms of logarithms. In the past, this always put me off; like anyone else, I hated logarithms, but then I had a key insight: *the logarithm of a number is approximately equal to the number of digits it takes to write the number out.* In base ten, log 10 is 1, log 100 is 2, log 1987 is about 3, log 12345 is about 4, and the log of one billion is 9.

Formally, we define the logarithm operation as the inverse of exponentiation. In base ten, the logarithm of a number N is precisely the power to which ten would have to be raised to equal N.

We don't have to base our logarithms on ten. When we are doing information theory, it is much better to base our logarithms on two. In base two, the logarithm of a number *N* is precisely the power to which two would have to be raised to equal *N:*

y = log2 x means that 2^y = x.
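Both the rule of thumb and the formal definition are easy to check numerically (a sketch using Python's standard `math` module):

```python
import math

# Rule of thumb: log10 of N tracks the number of digits of N.
for n in (10, 100, 1987, 12345, 10**9):
    print(n, "digits:", len(str(n)), "log10:", round(math.log10(n), 2))

# Formal definition in base two: y = log2 x exactly when 2**y == x.
y = math.log2(32)
print(y, 2 ** y)  # 5.0 and 32.0
```
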

No matter what base we use, logarithms have a number of very beautiful properties. Three properties that make logarithms useful for calculation are

log x^N = N log x;

log xy = log x + log y;

log x/y = log x - log y.

In the old days, people used to compute a large product like 1987 × 7891 by looking up log 1987 and log 7891, adding the logs, and looking up the number whose log is equal to the sum. The computation was done either by means of a log-antilog table, or by means of a slide rule. A slide rule basically consists of two identical sticks printed with number names in such a way that a number's name appears at a position whose actual distance from the stick's end is the logarithm of the number. Products are computed by the analog process of measuring off the corresponding stick lengths. Electronic calculators actually do their work by using logarithms, though one doesn't notice this except when the hidden log-antilog steps lead to round-off errors such as saying "1.9999999" when "2" is meant.
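The slide-rule procedure can be replayed directly from the addition property of logs (a sketch, not the text's own computation):

```python
import math

# Multiply the slide-rule way: add the logs, then take the antilog
# of the sum, using log xy = log x + log y.
a, b = 1987, 7891
log_sum = math.log10(a) + math.log10(b)
product = 10 ** log_sum

print(product)   # close to the true product, up to round-off
print(a * b)     # exact: 15679417
```

The tiny discrepancy between the two printed values is exactly the kind of round-off error the text mentions in electronic calculators.
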

It is often useful to draw graphs in which one or both axes are scaled logarithmically. Looking at the graphs on page 48 makes it clear that, working in any given base, the number of digits it takes to write a number *N* lies between log *N* and (log *N*) + 1.

The reason information theory makes such frequent use of logarithms base two is that the most fundamental unit of information is a bit, and a bit of information represents a choice between two possibilities. The number of digits it takes to write out a number *N* in binary notation is approximately equal to the base-two logarithm of *N.*

Shannon thought of information in terms of *reduction of uncertainty.* If someone is about to send you a message, you are uncertain about what the message will say, and the greater the number of possible messages, the greater your uncertainty. The arrival of the message reduces your uncertainty and gives you information. Shannon defines the information in the message as equal to the base-two logarithm of the total number of messages that might possibly have been sent. If we write *I* for his measure of information and *M* for the number of possible messages, then we can write Shannon's definition as an equation:

I = log2 M.

This means that if there are only two possible messages, say "Yes" and "No," then getting the message gives you one bit of information. Four possible messages means two bits of information, eight possible messages means three bits of information, and so on. One million is approximately two to the twentieth power, so if there are one million possible messages, each message carries some twenty bits of information. Put differently, one can choose among one million possibilities by asking twenty yes-or-no questions.
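Shannon's equation can be tabulated for the cases just mentioned (a sketch; the function name is my own):

```python
import math

# Shannon's measure: a choice among M equally likely messages carries
# log2(M) bits -- the number of yes-or-no questions needed to pin one down.
def bits(m):
    return math.log2(m)

print(bits(2))                  # 1.0: "Yes" or "No"
print(bits(4))                  # 2.0
print(bits(32))                 # 5.0
print(math.ceil(bits(10**6)))   # 20 questions single out one of a million
```
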

In general, writing a number in binary notation takes about three times as many digits as it does to write the number in decimal notation. Nevertheless, we do not think of decimal notation as a more efficient code for a number, because the information cost of remembering a decimal digit is about three times the cost of remembering a binary digit. This is made quite precise by an equation that relates different-based logarithms:

log2 M = (log2 10) (log10 M).

Think of log2 *M* as the number of binary symbols it takes to express *M;* think of log10 *M* as the number of decimal symbols it takes to express *M;* and think of log2 10 as a conversion factor giving the number of binary bits it takes to choose one of the ten decimal symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. Put differently, log2 10 is the information content per decimal digit. The information in *M* is the product of the information per digit and the number of digits it takes to write *M.* The value of log2 10 is, by the way, 3.32. That means a decimal digit is worth 3.32 bits of information. Log2 26 is 4.70, which means that a letter of the alphabet is worth 4.7 bits of information. (See **Fig. 27**.)
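The conversion factors and the change-of-base identity can be verified in a few lines (a sketch using the standard `math` module):

```python
import math

# Information content per symbol, as given in the text.
bits_per_decimal_digit = math.log2(10)  # about 3.32
bits_per_letter = math.log2(26)         # about 4.70
print(round(bits_per_decimal_digit, 2))
print(round(bits_per_letter, 2))

# Change-of-base identity: log2 M = (log2 10)(log10 M), checked at M = 1987.
M = 1987
print(math.isclose(math.log2(M), math.log2(10) * math.log10(M)))
```
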

**Number Patterns**

Whole numbers have an obstinate solidity to them. No matter how much you try, you cannot break 5 into two equal whole numbers — 5 breaks into a 1 and a 4, or into a 2 and a 3, and that's all there is to it. Someone might say, "What about breaking 5 into 2 ½ and 2 ½?" but for now we're not interested in that kind of talk. Right now we're thinking of numbers as made up of unbreakable units.

A number is an eternal, universal pattern. The Greeks may have identified certain of their gods with numbers, but now those gods are forgotten, and the numbers live on. It is perhaps conceivable that some extraterrestrial races do not think in terms of number at all. Sentient clouds of gas, for instance, might never encounter any sensations discrete enough to suggest the idea of counting. But if any intelligent race does form the idea of discrete objects, we may be certain that they will arrive at just the same numbers that we use. Number is an objectively existing feature of the world.

Different numbers of units make up various kinds of patterns. In the Introduction we talked at some length about how certain archetypes are associated with some of the smaller numbers:

1 monad — unity

2 dyad — opposition

3 triad — thesis-antithesis-synthesis

4 tetrad — balance of quaternity

5 quintad — a step further out

Now let's go on and look at some other notions that various familiar numbers suggest. In doing this, I will often refer to the Pythagoreans. Pythagoras himself lived in Greece and Italy during the sixth century B.C. He was not only a philosopher, but something of a wizard. His followers banded together to form a kind of religious community on the island of Samos. The most influential doctrine of these "Pythagoreans" was the notion that, at the deepest level, the universe is made of numbers. In his *Metaphysics,* Aristotle describes the Pythagorean teaching:

They thought they found in numbers, more than in fire, earth, or water, many resemblances to things which are and become; thus such and such an attribute of numbers is justice, another is soul and mind, another is opportunity, and so on; and again they saw in numbers the attributes and ratios of the musical scales. Since, then, all other things seemed in their whole nature to be assimilated to numbers, while numbers seemed to be the first things in the whole of nature, they supposed the elements of numbers to be the elements of all things, and the whole heaven to be a musical scale and a number.

The Pythagoreans thought of numbers as embodying various basic concepts. In general, the even numbers (2, 4, 6, 8, ...) were thought of as female, and the odd numbers (3, 5, 7, 9, ...) were thought of as male. Two specifically stood for woman, and 3 for man. One reason for this may lie in the appearance of the male and female genitalia.

Given that 5 is the sum of 2 and 3, 5 was thought of as standing for marriage.

Another Pythagorean train of thought held 1 to stand for God, or the unified Divinity underlying the world. Two, as the first moving away from unity, stood for strife, or opinion. This interpretation of 2 is present in the notion of the dyad, which pairs opposing concepts.

Three is usually thought of as a lucky number; the Christian religion is based on the notion of a triune God. Another important feature of 3 is that three dots describe the simplest possible two-dimensional shape: the triangle. Churches in Austria and southern Germany usually have over their altars pictures of God as an eye inside a triangle. For some reason, this image has found its way onto the American dollar bill.

In the rest of this section I am going to be thinking of numbers primarily as patterns in space. That is, I will think of numbers as collections of dots, and will look at some of the patterns into which various-sized collections can be formed. The Pythagoreans often used space patterns to shed light on number properties. In doing this, one tries to keep the patterns as simple and as "digital" as possible; the primary topic here is, after all, number rather than space.

One of the earliest examples of people using dot patterns to represent numbers appears in the Chinese image known as the *lo-shu.* Here there are patterns representing the numbers 1 through 9, and these patterns are arranged into a "magic square." Supposedly, Emperor Yu saw the *lo-shu* pattern on the back of a tortoise on the banks of the Yellow River in 2200 B.C. The pattern is called a magic square because the sum of the numbers along any horizontal, vertical, or diagonal is always the same: 15.
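The magic-square property is easy to verify. A sketch using the traditional *lo-shu* arrangement (4-9-2 / 3-5-7 / 8-1-6):

```python
# The lo-shu magic square: every row, column, and diagonal sums to 15.
LO_SHU = [
    [4, 9, 2],
    [3, 5, 7],
    [8, 1, 6],
]

rows = [sum(row) for row in LO_SHU]
cols = [sum(LO_SHU[r][c] for r in range(3)) for c in range(3)]
diags = [sum(LO_SHU[i][i] for i in range(3)),
         sum(LO_SHU[i][2 - i] for i in range(3))]
print(rows, cols, diags)  # every one of the eight sums is 15
```
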

Excerpted from *Mind Tools* by Rudy Rucker. Copyright © 1987 Rudy Rucker. Excerpted by permission of Dover Publications, Inc.

All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.


Rucker's novel *Software* was followed by *Wetware* in 1988, both of which won Philip K. Dick Awards.
