Almost Everyone's Guide to Science: The Universe, Life, and Everything

by John Gribbin, Mary Gribbin


Product Details

ISBN-13: 9780300084603
Publisher: Yale University Press
Publication date: 08/28/2000
Edition description: New Edition
Pages: 240
Product dimensions: 5.00(w) x 7.75(h) x (d)

About the Author

John Gribbin, visiting fellow in astronomy at the University of Sussex, is the author of many bestselling books of science and science fiction including In Search of Schrödinger’s Cat: Quantum Physics and Reality and The Search for Superstrings, Symmetry and the Theory of Everything.

Read an Excerpt

Almost Everyone's Guide to Science

The Universe, Life and Everything
By John R. Gribbin

Yale University Press

Copyright © 2000 John R. Gribbin
All rights reserved.

ISBN: 0300084609

Chapter One

In 1962, in a series of lectures given for undergraduates at Caltech, Richard Feynman placed the atomic model at the centre of the scientific understanding of the world. As he expressed it:

If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generations of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis (or the atomic fact, or whatever you wish to call it) that all things are made of atoms -- little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another. In that one sentence, you will see, there is an enormous amount of information about the world, if just a little imagination and thinking are applied.

The emphasis is Feynman's, and you can find the whole lecture in his book Six Easy Pieces. In the spirit of Feynman, we start our guide to science with atoms. It is often pointed out that the idea of atoms, as the ultimate, indivisible pieces of which matter is composed, goes back to the time of the Ancient Greeks, when, in the fifth century BC, Leucippus of Miletus and his pupil Democritus of Abdera argued the case for such fundamental entities. In fact, although Democritus did give atoms their name (which means 'indivisible'), this is something of a red herring. The idea wasn't taken seriously by their contemporaries, nor by anyone else for more than two thousand years. The real development of the atomic model dates from the end of the eighteenth century, when chemists began the modern investigation of the properties of the elements.

The concept of elements -- fundamental substances from which all the complexity of the everyday world is built up -- also goes back to the early Greek philosophers, who came up with the idea that everything is made of different mixtures of four elements -- air, earth, fire and water. Apart from the name 'element', and the idea that an element cannot be broken down into any simpler chemical form, there is nothing left of the Greek idea of elements in modern chemistry, which builds on the work of Robert Boyle in the middle of the seventeenth century.

Boyle was the first person to spell out the definition of an element as a substance that could combine with other elements to form compounds, but which could not be broken down into any simpler substance itself. Water, for example, is a compound which can be broken down chemically into its component parts, oxygen and hydrogen. But oxygen and hydrogen are elements, because they cannot be broken down further by chemical means. They are not made of other elements. The number of known elements increased as chemists devised new techniques for breaking compounds up; but by the nineteenth century it was becoming clear which substances really were indivisible.

The breakthrough in understanding the way elements combine to make compounds came when John Dalton revived the idea of atoms at the beginning of the nineteenth century. He based his model on the discovery that for any particular compound, no matter how the compound has been prepared, the ratio of the weights of the different elements present is always the same. For example, in water the ratio of oxygen to hydrogen is always 8:1 by weight; in calcium carbonate (common chalk) the ratio of calcium to carbon to oxygen is always 10:3:12 by weight.
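Dalton's observation of fixed proportions can be sketched in a few lines of code, using modern rounded atomic weights (H = 1, C = 12, O = 16, Ca = 40 -- these values are not in the text above, but are consistent with its ratios):

```python
# Dalton's fixed-proportions observation, sketched with rounded
# modern atomic weights (an illustration, not a historical method).
weights = {"H": 1, "C": 12, "O": 16, "Ca": 40}

def element_masses(formula):
    """Mass contributed by each element, given element -> atom count."""
    return {el: n * weights[el] for el, n in formula.items()}

water = element_masses({"H": 2, "O": 1})           # H2O
chalk = element_masses({"Ca": 1, "C": 1, "O": 3})  # CaCO3

print(water["O"] / water["H"])  # oxygen:hydrogen = 16:2 = 8.0
print(chalk)                    # Ca:C:O = 40:12:48, i.e. 10:3:12
```

However the compound is prepared, the same formula gives the same mass ratios, which is exactly the regularity Dalton set out to explain.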

Dalton's explanation was that each kind of element is composed of its own kind of identical atoms, and it is the nature of these atoms that determines the properties of the element. On this picture, the key distinguishing feature that makes it possible to tell one kind of atom from another is its weight. When two or more elements combine, it is actually the atoms of the different elements that join together, to make what are now known as molecules. Every molecule of a given compound contains exactly the same number of atoms of each element as every other molecule of that compound. A molecule of water is made up of two hydrogen atoms and one oxygen atom (H2O); a molecule of calcium carbonate is made up of one atom of calcium, one atom of carbon, and three atoms of oxygen (CaCO3). And we now know that in some elements the atoms can join together to make molecules, without other elements being involved. The oxygen in the air that we breathe, for example, is made up of di-atomic molecules, O2 -- these are not regarded as compounds.

Dalton's atomic model was a huge success in chemistry, but throughout the nineteenth century some scientists regarded it only as a useful trick, a way to calculate the way elements behaved in chemical reactions, but not a proof that atoms are 'real'. At the same time, other scientists were finding increasingly compelling evidence that atoms could be regarded as real entities, little hard balls that attract each other when some distance apart but repel when pushed together.

One line of attack stemmed from the work of Amedeo Avogadro (who was, incidentally, the person who showed that the combination of atoms in a molecule of water is H2O, not HO). In 1811 Avogadro published a paper in which he suggested that equal volumes of gas at the same temperature and pressure contain equal numbers of atoms. This was before the idea of molecules was developed, and we would now say that equal volumes of gas at the same temperature and pressure contain equal numbers of molecules. Either way, though, what matters is that Avogadro's model envisaged equal numbers of little hard spheres bouncing around and colliding with one another in a box of gas of a certain size under those conditions, whether the gas was oxygen, or carbon dioxide, or anything else.

The idea behind this is that in a box of gas there is mostly empty space, with the little hard balls whizzing about inside the box, colliding with one another and with the walls of the box. It doesn't matter what the little balls are made of -- as far as the pressure on the walls of the box is concerned, all that matters is the speed of the particles and how often they hit it. The speed depends on the temperature (higher temperature corresponds to faster movement), and the number of hits per second depends on how many little hard balls there are in the box. So at the same temperature, pressure and volume, the number of particles must be the same.
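In modern notation this argument is packaged in the ideal gas law, PV = NkT: the number of molecules N depends only on pressure, volume and temperature, never on what the molecules are made of. A minimal sketch (the Boltzmann constant k is a modern value, not part of the nineteenth-century reasoning):

```python
# Avogadro's hypothesis via the ideal gas law PV = NkT:
# N = PV / kT depends only on P, V and T, not on the kind of gas.
k_B = 1.380649e-23  # Boltzmann constant, J/K (modern value)

def molecule_count(pressure_pa, volume_m3, temp_k):
    return pressure_pa * volume_m3 / (k_B * temp_k)

# One litre of ANY gas at 0 degrees Celsius and one atmosphere:
n = molecule_count(101325, 1e-3, 273.15)
print(f"{n:.3e}")  # about 2.7e22 molecules, whatever the gas
```

Swap oxygen for carbon dioxide and nothing in the calculation changes, which is the whole point of the hard-sphere picture.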

This kind of model also explains the difference between gases, liquids and solids. In a gas, as we have said, there is mostly empty space, with the molecules hurtling through that space and colliding with one another. In a liquid there is no empty space, and the molecules can be envisaged as touching one another, but in constant movement, sliding past one another in an amorphous mass. In a solid the movement has all but stopped, and the molecules are locked in place, except for a relatively gentle jiggling, a kind of molecular running on the spot.

Avogadro's idea wasn't taken very seriously at the time (not even by Dalton). But at the end of the 1850s it was revived by Stanislao Cannizzaro, who realised that it provided a way of getting a measure of atomic and molecular weights. If you can find the number of molecules in a certain volume of one particular gas at a set temperature and pressure (the standard conditions are usually chosen as zero degrees Celsius and one standard atmospheric pressure), then you know the number of molecules present for any gas under those conditions. In order to find out how much each molecule weighs, you just have to weigh the gas and divide by that number.

For these standard conditions, you can choose a volume of gas which corresponds to two grams of hydrogen (two grams, not one, because each molecule of hydrogen contains two atoms, H2). It works out as just over twenty-two litres of gas. The number of molecules in such a volume is called Avogadro's Number. The same volume of oxygen under the same conditions weighs thirty-two grams, and chemical evidence tells us that there are two oxygen atoms in each molecule. But it contains the same number of molecules as two grams of hydrogen. So we know, immediately, that one oxygen atom weighs sixteen times as much as one atom of hydrogen. This was a very useful way to determine relative atomic and molecular weights; but working out actual weights depends on knowing Avogadro's Number itself, and that was harder to pin down.
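Cannizzaro's bookkeeping can be sketched directly from the figures above: equal volumes hold equal numbers of molecules, so the ratio of the two sample masses is the ratio of the molecular weights.

```python
# Relative weights from equal volumes (a sketch of the reasoning,
# using the sample masses quoted in the text).
mass_h2_sample = 2.0   # grams of hydrogen in the chosen volume
mass_o2_sample = 32.0  # grams of oxygen in the same volume

molecular_ratio = mass_o2_sample / mass_h2_sample  # an O2 molecule is 16x an H2
# Both gases are diatomic, so the ratio of single atoms is the same:
atomic_ratio = molecular_ratio
print(atomic_ratio)  # 16.0 -- one oxygen atom weighs 16 hydrogen atoms
```

No absolute weights are needed for this step; only ratios come out, which is why pinning down Avogadro's Number itself was the harder problem.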

There are several different ways to tackle the problem, but you can get some idea of the way it might be done from just one example, a variation on the theme used by Johann Loschmidt in the mid-1860s. Remember that in a gas there is a lot of empty space between the molecules, but in a liquid the molecules are touching one another. Loschmidt could calculate the pressure of gas in a container (under standard conditions) from Avogadro's Number, which determines the average distance travelled by the molecules between collisions (the so-called 'mean free path'), and the fraction of the volume of the gas actually occupied by the molecules themselves. And he could find out how much empty space there was in the gas by liquefying it and measuring how much liquid was produced -- or, indeed, by using measurements of the density of liquid oxygen and liquid nitrogen that other people had carried out. Since the particles are touching in a liquid, by subtracting the volume of the liquid from the volume of the gas he could find out how much empty space there was in the gas. So by adjusting the value of Avogadro's Number in his pressure calculations to match the measured pressure, he could work out how many molecules were present.

Because the densities of liquid nitrogen and liquid oxygen used in his calculations were not as accurate as modern measurements, Loschmidt's figure for Avogadro's Number, derived in 1866, came out a little on the low side at 0.5 × 10^23. Using a different technique, Albert Einstein came up with a value of 6.6 × 10^23 in 1911. The best modern value for the number is 6.022045 × 10^23 or, in everyday language, just over six hundred thousand billion billion. This is the number of atoms in one gram of hydrogen, sixteen grams of oxygen, or in the gram equivalent of the atomic weight of any element. So each atom of hydrogen weighs 0.17 × 10^-23 grams, and so on. Each molecule of air is a few hundred-millionths of a centimetre across. At 0°C and one atmosphere pressure, one cubic centimetre of air contains 2.7 × 10^19 molecules; the mean free path of a molecule of air is thirteen millionths of a centimetre, and an oxygen molecule in the air at that temperature travels at just over 461 metres per second (roughly 1,700 km per hour). So each molecule is involved in more than 3.5 billion collisions every second, producing the averaged-out feeling of a uniform pressure on your skin or on the walls of the room.
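These figures hang together, and a quick consistency check makes that clear. The sketch below uses rounded modern constants (not the historical derivations), and takes the mean free path of air at 0°C as roughly 1.3 × 10^-7 metres:

```python
# Consistency check on the kinetic-theory numbers (rounded modern
# constants; an illustration, not Loschmidt's actual calculation).
N_A = 6.022e23   # Avogadro's Number, per gram-equivalent (mole)
k_B = 1.381e-23  # Boltzmann constant, J/K

mass_h_atom = 1.0 / N_A  # grams: one gram of hydrogen holds N_A atoms

# Molecules per cubic centimetre at 0 C and one atmosphere:
n_per_cm3 = 101325 / (k_B * 273.15) / 1e6

speed = 461.0            # rms speed of an O2 molecule at 0 C, m/s
mean_free_path = 1.3e-7  # metres (assumed value for air at 0 C)
collisions_per_s = speed / mean_free_path

print(f"{mass_h_atom:.2e} g")       # about 1.7e-24 g per hydrogen atom
print(f"{n_per_cm3:.1e} per cm^3")  # about 2.7e19 molecules
print(f"{collisions_per_s:.1e}")    # about 3.5e9 collisions per second
```

Dividing the molecular speed by the mean free path gives the collision rate directly, which is where the figure of billions of collisions per second comes from.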

In fact, the kinetic theory of gases was first proposed by Daniel Bernoulli, as long ago as 1738. He was inspired by the work of Robert Boyle, in the middle of the seventeenth century. Boyle had discovered that when a gas is compressed (for example, in a piston) the volume of the gas changes in inverse proportion to the pressure -- double the pressure and you halve the volume. Bernoulli explained this in terms of the kinetic theory, and also realised that the relationship between the temperature of a gas and its pressure (when you heat a gas its pressure increases, other things being equal) could also be explained in terms of the kinetic energy (energy of motion) of the little particles in the gas -- heating the gas makes the particles move faster, so they have a greater impact on the walls of the container. But he was way ahead of his time. In those days most people who thought about heat at all thought that it was related to the presence of a kind of fluid, called caloric, which moved from one substance to another. Bernoulli's version of the kinetic theory made no impact at all on science at the time.
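Boyle's discovery amounts to saying that, at fixed temperature, the product of pressure and volume stays constant. A one-line sketch:

```python
# Boyle's law: at fixed temperature, P * V is constant, so
# v2 = p1 * v1 / p2 after an isothermal change of pressure.
def boyle_volume(p1, v1, p2):
    """Volume after an isothermal change from pressure p1 to p2."""
    return p1 * v1 / p2

v = boyle_volume(1.0, 10.0, 2.0)  # double the pressure...
print(v)                          # ...and the volume halves: 5.0
```

In the kinetic picture, halving the volume doubles how often the molecules strike the walls, which is why the pressure doubles.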

The kinetic theory was rediscovered twice (first by John Herapath in 1820, then by John Waterston in 1845), and ignored each time, before it finally became accepted by most scientists in the 1850s, largely as a result of the work of James Joule. A complete mathematical version of kinetic theory (a complete model) emerged in the 1860s, largely thanks to the work of Rudolf Clausius, James Clerk Maxwell, and Ludwig Boltzmann. Because this model deals with the averaged-out statistical behaviour of very large numbers of particles, which interact with one another like tiny billiard balls, bouncing around in accordance with Newton's laws of mechanics, it became known as statistical mechanics.

This is an impressive example of the way in which physical laws can be applied in circumstances quite different from those which were being investigated when the laws were discovered, and highlights the important difference between a law and a model. A law, like Newton's law of gravity, really is a universal truth. Newton discovered that every object in the Universe attracts every other object in the Universe with a force that is proportional to one over the square of the distance between the two objects. This is the famous 'inverse square law'. It applies, as Newton pointed out, to an apple falling from a tree and to the Moon in its orbit, in each case being tugged by the gravity of the Earth. It applies to the force holding the Earth in orbit around the Sun and to the force which is gradually slowing the present expansion of the Universe. But although the law is an absolute truth, Newton himself had no idea what caused it -- he had no model of gravity.

Indeed, Newton specifically said in this context hypotheses non fingo ('I do not make hypotheses'), and did not try to explain why gravity obeyed an inverse square law. By contrast, Einstein's general theory of relativity provides a model which automatically produces an inverse square law of gravity. Rather than overturning Newton's ideas about gravity, as some popular accounts suggest, the general theory actually reinforces them, by providing a model to explain the law of gravity (it also goes beyond Newtonian ideas to describe the behaviour of gravity under extreme conditions; more of this later).

In order to be a good model, any model of gravity must, of course, 'predict' an inverse square law, but that doesn't mean that such a model is necessarily the last word, and physicists today confidently expect that one day they will develop a quantum theory of gravity that goes beyond Einstein's theory. If and when they do, though, we can be sure of one thing -- that new model will still predict an inverse square law. After all, whatever new theories and models physicists come up with, the orbits of the planets around the Sun will still be the same, and apples won't suddenly start falling upwards out of trees.

Gravity, as it happens, is a very weak force, unless you have a lot of matter around. It takes the gravity of the entire Earth, pulling on an apple, to break the apple free from the tree and send it falling to the ground. But a child of two can pick the apple up from the ground, overcoming the pull of gravity. For atoms and molecules, rattling around in a box of gas, the gravitational forces between the particles are so tiny that they can be completely ignored. What matters here, as the nineteenth-century developers of statistical mechanics realised, are the other laws discovered by Newton -- the laws of mechanics.

There are just three of these laws, which are so familiar today that they may seem like obvious common sense, but which form the foundations of all of physics. The first law says that any object stays still, or moves at a steady speed in a straight line (at constant velocity), unless it is pushed or pulled by a force. This isn't quite everyday common sense, because if you set something moving here on Earth (if I kick a ball, for example) it soon stops moving, because of friction. Newton's insight was to appreciate how things behave when there is no friction -- things like rocks moving through space or, indeed, atoms whizzing about in a box of gas (incidentally, although he never developed a kinetic theory of gases, Newton was himself a supporter of the atomic model, and wrote of matter being made up of 'primitive Particles ... incomparably harder than any porous bodies compounded of them; even so very hard, as never to wear out or break in pieces').

Newton's second law says that when a force is applied to an object it accelerates, and keeps on accelerating as long as the force is applied (acceleration means a change in the speed of an object, or the direction in which it is moving or both; so the Moon is accelerating around the Earth, even though its speed stays much the same, because its direction is constantly changing). The acceleration produced by the force depends on the strength of the force divided by the mass of the object (turning this around, physicists often say that the force is equal to the mass times the acceleration). This does match up with common sense -- it is harder to push objects around if they have more mass. And Newton's third law says that when one object exerts a force on another there is an equal and opposite reaction back on the first object. When I kick a ball (or, if I am foolish enough to do so, a rock), the force my foot exerts on the ball (or rock) makes it move, and the equal and opposite force the object exerts on my foot can be clearly felt.
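The second law reduces to a single division, and the common-sense point above -- more mass, less acceleration for the same push -- falls straight out of it:

```python
# Newton's second law rearranged: acceleration = force / mass.
def acceleration(force_n, mass_kg):
    return force_n / mass_kg

# The same 10-newton force moves a heavier object less readily:
print(acceleration(10.0, 1.0))  # 10.0 m/s^2
print(acceleration(10.0, 5.0))  # 2.0 m/s^2
```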

More subtly, just as the Earth tugs on the Moon through gravity, so there is an equal and opposite force tugging on the Earth. Rather than the Moon orbiting the Earth, we should really say that they each orbit around their mutual centre of gravity -- but the Earth is so much more massive than the Moon that, as it happens, this point of balance actually lies below the surface of the Earth. Strictly speaking, too, the equality of action and reaction means that when the apple is falling to the ground the whole Earth, tugged by the apple, moves an infinitesimal amount 'up' to meet the apple. It is Newton's third law that explains the recoil of a gun when it is fired and the way a rocket works by throwing matter out in one direction and recoiling in the opposite direction.

These three laws apply to the Universe at large, which is where Newton applied them to explain the orbits of the planets, and to the everyday world, where they can be investigated by doing experiments such as rolling balls down inclined planes and measuring their speed, or by bouncing balls off one another. But, because they are, indeed, universal laws they also apply on the scale of atoms and molecules, providing the mechanics which, as I have mentioned, is the basis of statistical mechanics and the modern kinetic theory of gases -- even though the modern kinetic theory was developed nearly two centuries after Newton came up with his three laws of mechanics, and he had never applied his laws in this way. Newton did not really invent the laws at all -- they are laws of the Universe, and they operated in the same way before he wrote them down, just as they operate in places he did not happen to investigate.

There are good reasons why the kinetic theory, and statistical mechanics, were at last taken on board by scientists in the middle of the nineteenth century. By that time, the ground had been prepared by the study of thermodynamics (literally, heat and motion), which was of immense practical importance in the days when the industrial revolution in Europe was being driven by steam power.

The principles of thermodynamics can also be summed up in three laws, and they have a wide-ranging importance which applies across science (indeed, across the Universe), not just to the design and construction of efficient steam engines. The first law of thermodynamics is also known as the law of conservation of energy, and says that the total energy of a closed system stays the same. The Sun is not a closed system, and is pouring energy out into space; the Earth is not a closed system, and it receives energy from the Sun. But in a process like a chemical reaction taking place in an insulated test tube, or in the processes involved in statistical mechanics where little hard particles bounce around inside a box, the total amount of energy is fixed. If one fast-moving particle carrying a lot of kinetic energy collides with a slow-moving particle which has little kinetic energy, the first particle will probably lose energy and the second particle will gain energy. But the total energy carried by the two particles before and after the collision will be the same.
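The energy bookkeeping in that collision is easy to make concrete. For the simplest case -- a head-on elastic collision between two equal-mass particles, which simply swap their velocities -- the total kinetic energy is plainly unchanged (the equal-mass, head-on setup is an illustrative assumption, not a claim about every collision):

```python
# First law of thermodynamics in miniature: total kinetic energy
# before and after an elastic collision is the same. For equal
# masses colliding head-on, the particles simply swap speeds.
def kinetic_energy(m, v):
    return 0.5 * m * v * v

m = 1.0
v_fast, v_slow = 10.0, 2.0
before = kinetic_energy(m, v_fast) + kinetic_energy(m, v_slow)
# After the collision, the fast particle has the slow speed and
# vice versa:
after = kinetic_energy(m, v_slow) + kinetic_energy(m, v_fast)
print(before == after)  # True: energy moved between particles, none lost
```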

Since Einstein came up with the special theory of relativity early in the twentieth century, we know that mass is a form of energy, and that under the right circumstances (such as inside a nuclear power station, or at the heart of the Sun) energy and mass can be interchanged. So today the first law of thermodynamics is called the law of conservation of mass--energy, not just the law of conservation of energy.

The second law of thermodynamics is arguably the most important law in the whole of science. It is the law which says that things wear out. In terms of heat -- the way the law was discovered in the days of steam engines -- the second law says that heat will not flow from a colder place to a hotter place of its own volition. If you put an ice cube in a cup of hot tea, the ice melts and the tea gets cooler -- you never see a cup of lukewarm tea in which the tea gets hotter while an ice cube forms in the middle of the cup, even though such a process would not violate the law of conservation of energy. Another manifestation of the second law is the way that a brick wall, left undisturbed, will be worn down and crumble away, while a pile of bricks, left undisturbed, will never assemble themselves into a brick wall.

In the 1920s the astrophysicist Arthur Eddington summed up the importance of the second law in his book The Nature of the Physical World:

The second law of thermodynamics holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations -- then so much the worse for Maxwell's equations. If it is found to be contradicted by observation -- well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

The second law is also related to the concept known as entropy, which measures the amount of disorder in the Universe, or in a closed part of the Universe (such as a sealed test tube in the laboratory). Entropy in a closed system cannot decrease, so any change in the system moves it towards a state of higher entropy. The 'system' of an ice cube floating in a cup of tea has more order (less entropy) than a cup of lukewarm tea, which is why the system shifts from the ordered state to the disordered state.

The Universe as a whole is a closed system, so the entropy of the whole Universe must be increasing. But the Earth, as I have pointed out, is not a closed system and receives a continuous input of energy from the Sun. It is this supply of energy from outside that makes it possible for us to create order out of disorder locally (building houses out of piles of bricks, for example); the decrease in entropy associated with all the processes of life on Earth is more than compensated for by the increase in entropy inside the Sun as a result of the processes which make the energy we feed off.

In case you are wondering, the same sort of thing happens on a smaller scale when we cool down the inside of a refrigerator and make ice cubes. We have to use energy to pump heat out of the refrigerator, and this process increases the entropy of the Universe much more than the decrease in entropy that is produced inside the cold refrigerator as a result. If you left a fridge in a closed room, with the door of the fridge open and the motor running, the room would get hotter, not colder, because the energy being wasted by the motor getting hot would be more than the cooling effect of the open fridge.

The third law of thermodynamics has to do with the familiar everyday concept of temperature, something we have been taking for granted in the discussion so far. Although I have only discussed the relationship between heat and entropy in general terms, there is, in fact, a precise mathematical relationship between the two quantities, and this shows that as objects become cooler it is harder and harder to get energy out of them. This is pretty obvious in an everyday context -- the very reason why steam engines were so important in the industrial revolution is because hot steam could be made to do useful work, driving pistons in and out or making wheels go round and round. You could, if you really wanted to, make a kind of toy engine that would push pistons in and out using a much cooler gas (perhaps carbon dioxide), but it would not be very effective.

In the 1840s, William Thomson (later Lord Kelvin) developed these thermodynamic ideas into an absolute scale of temperature, with zero on this scale being the temperature at which no more heat (or any energy) can ever be extracted from an object. This absolute zero is fixed by the laws of thermodynamics (so Thomson could calculate it mathematically, even though he could never cool anything that far), and is at -273°C. It is now known as 0 K, after Kelvin. On the Kelvin scale of temperature the units are each the same size as the degrees on the Celsius scale, so that ice melts at 273 K (there is no 'degree sign' in front of the K). The third law of thermodynamics says that you can never cool anything to 0 K, although if you try hard enough you can (in principle) get as close as you like to absolute zero. An object at zero Kelvin would be in the lowest possible energy state that it could achieve, and no energy could be extracted from it to do work.
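Because the Kelvin unit is the same size as the Celsius degree, converting between the scales is a single shift (using the modern figure of -273.15°C for absolute zero):

```python
# The Kelvin scale: same step size as Celsius, shifted so that
# absolute zero sits at 0 K rather than at -273.15 degrees C.
ABSOLUTE_ZERO_C = -273.15

def celsius_to_kelvin(t_c):
    return t_c - ABSOLUTE_ZERO_C

print(celsius_to_kelvin(0))        # ice melts at 273.15 K
print(celsius_to_kelvin(-273.15))  # absolute zero: 0.0 K
```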

Some wag has summed up the three laws of thermodynamics in everyday terms:

1. You can't win.

2. You can't even break even.

3. You can't get out of the game.

The success of the kinetic theory and statistical mechanics helped to convince many physicists that atoms were real. But right up until the end of the nineteenth century, many chemists still regarded the notion of atoms with suspicion. This looks very odd to us, because by the end of the 1860s (just about the time that the kinetic theory was proving so successful), a pattern had been discovered in the properties of the different elements -- a pattern which is now explained entirely in terms of the properties of atoms.

An early attempt to rank the elements in order of their atomic weights (with the weight of hydrogen, the lightest element, set as 1 unit) was made by Jöns Berzelius in the 1820s (even though he never accepted Avogadro's hypothesis), but this never caught on. The real breakthrough came after 1860, when Cannizzaro revived Avogadro's idea and convinced many of his colleagues that it worked, and that the atomic weight was a useful concept in chemistry. Even then, it took time for the full importance of the breakthrough to be appreciated. The big discovery was that if the elements were arranged by atomic weight, elements with similar chemical properties were found at regular intervals in the list -- the element with atomic weight 8 has similar properties to the one with weight 16 (and also to element 24), the element with atomic weight 17 has similar properties to the one with weight 25, and so on.

It didn't take much imagination to go from this discovery to the idea of writing out a list of the elements in a table, with the elements with similar properties arranged underneath each other in a set of vertical columns. In the early 1860s, the French chemist Alexandre Beguyer de Chancourtois and the British chemist John Newlands each independently came up with versions of this idea, but their work was ignored. Worse, Newlands' idea was made fun of by his contemporaries, who said that listing the elements in order of atomic weight made no more sense than listing them in alphabetical order. This was a breathtaking piece of arrant (and arrogant) nonsense, as the alphabet is an arbitrary human convention, while the weights of the atoms are a fundamental physical property, but the comment shows how far chemists were from accepting the reality of atoms in the mid-1860s.

Even when the idea finally began to catch on, there was an element of controversy. At the end of the 1860s, the German Lothar Meyer and the Russian Dmitri Mendeleyev each independently -- and each unaware of the work of Beguyer de Chancourtois and Newlands -- came up with the idea of representing the elements in a periodic table (a grid rather like a chessboard) in which they were arranged in order of their atomic weights, with elements with similar chemical properties underneath one another in the table. But the result is known to this day as Mendeleyev's Periodic Table, with Meyer relegated to a footnote of history along with the other two pioneers of the idea, because Mendeleyev was bold enough to rearrange the order of the elements in the table slightly, to make sure that elements with similar chemical properties fell in the same vertical column, even if this meant a slight shuffling of the order of the atomic weights.

These changes really were slight -- for example, tellurium has an atomic weight of 127.61, just a little bit more than iodine, which has an atomic weight of 126.91. Reversing the order of these two elements in his table enabled Mendeleyev to put iodine under bromine, which it closely resembles chemically, and tellurium under selenium, which it closely resembles chemically, instead of having tellurium under bromine and iodine under selenium. We now know that Mendeleyev was right to make these changes, because the weight of an atom is determined by the combined number of protons plus neutrons in the atom, while its chemical properties are related to the number of protons alone -- more of this in the next chapter. But neither the proton nor the neutron was known in the nineteenth century, so there was no way that Mendeleyev could have explained the physical basis for this slight reordering of the elements in a table based on atomic weights; he relied instead on the chemical evidence of similarities.
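Restated with modern data, Mendeleyev's swap amounts to sorting by proton count rather than by weight (the atomic numbers below, 52 and 53, were of course unknown to him):

```python
# Mendeleyev's tellurium/iodine swap with modern data: sorting by
# atomic weight puts Te after I, but sorting by atomic number
# (proton count), which governs the chemistry, puts Te first.
elements = [
    ("Te", 52, 127.6),  # (symbol, atomic number, atomic weight)
    ("I", 53, 126.9),
]
by_weight = sorted(elements, key=lambda e: e[2])
by_number = sorted(elements, key=lambda e: e[1])
print([e[0] for e in by_weight])  # ['I', 'Te'] -- the 'wrong' order
print([e[0] for e in by_number])  # ['Te', 'I'] -- Mendeleyev's order
```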

Mendeleyev's boldest step, and the one which eventually led to the widespread acceptance of his periodic table as something related to the fundamental properties of the elements (and not an arbitrary convention like the alphabet), was his willingness to leave gaps in the table where there was no known element with properties that 'belonged' in a certain place. By 1871 Mendeleyev had produced a table containing the sixty-three elements known at the time, showing the striking periodicity in which families of elements with atomic weights that are multiples of eight times the atomic weight of hydrogen have similar chemical properties to one another. But to make the pattern work, even after minor adjustments like swapping the positions of tellurium and iodine, he had to leave three gaps in the table, and he boldly predicted that new elements would be found with properties (which he specified) corresponding to those gaps. The three elements, with exactly the predicted properties, were discovered over the next fifteen years: gallium, in 1875; scandium, in 1879; and germanium, in 1886.

In the classic tradition of science ('if it disagrees with experiment then it is wrong') Mendeleyev had made a prediction, and it had been proved correct. This persuaded people that the periodic table was important, and as more new elements were discovered and each one was found to fit into Mendeleyev's table, acceptance of his ideas turned to enthusiasm. There are now ninety-two elements known to occur naturally on Earth, and more than twenty heavier elements which have been created artificially in particle accelerators. All of them fit into Mendeleyev's table, allowing for some improvements to the layout of the table, which have been made in the twentieth century to take account of our modern understanding of the structure of atoms.

But even the success of Mendeleyev's periodic table in the last third of the nineteenth century still did not persuade everyone of the reality of atoms. The final acceptance of the 'atomic hypothesis', as Feynman called it, came only in the first decade of the twentieth century -- and it came, in no small measure, thanks to the work of the man whose image towers over twentieth-century science, Albert Einstein.

Einstein would surely have approved of Feynman's comments about the importance of the atomic hypothesis. He consciously devoted his first efforts as a research scientist to various attempts at proving the reality of atoms and molecules, notably in his PhD thesis (completed in 1905), and then in a series of scientific papers looking at the puzzle in different ways, and coming up with different ways to get a value for Avogadro's Number. It is a sign both of the importance of the concept of atoms and of the continuing reluctance of scientists to take the idea fully on board that a physicist with the insight of Einstein still felt the need to do this at the beginning of the twentieth century. And there is no doubt what was in his mind when he did all this work. As he later wrote to Max Born, 'my main purpose ... was to find facts which would attest to the existence of atoms of definite size'.

The first way Einstein approached the problem (in his PhD thesis) was to calculate the rate at which sugar molecules in a solution in water pass through a barrier called a semi-permeable membrane, and to compare his calculations (which depended on the size of the molecules and their mean free path) with the results of experiments carried out by other people. This is conceptually very similar to the way Loschmidt got a handle on Avogadro's Number using mean free paths and molecular sizes for gases. A key point about Einstein, though, is that he never was an experimenter, and relied on experimental results provided by other people. So, in 1905, the best value he got for Avogadro's Number, using this technique, was 2.1 × 10²³ -- not because there was anything wrong with his calculations but because the experimental results weren't quite accurate. It was this technique, using data from more accurate experiments, that gave him a value of 6.6 × 10²³ in 1911.

But in 1905 Einstein already had another way to 'attest to the existence of atoms'. This involved a phenomenon known as Brownian motion, named after the Scottish botanist Robert Brown. In 1827 Brown had noticed that pollen grains suspended in water can be seen (using a microscope) to move about erratically in a zig-zag dance. At first, this surprising discovery was interpreted by some people as a sign that the pollen grains were alive and active; but it soon became clear that the same dancing movement occurred with tiny particles of dust which could not possibly be alive.

In the 1860s, several physicists speculated that the motion might be caused by the little particles being buffeted about by the molecules of the liquid in which they were suspended (the same kind of motion is seen, for example, in particles of cigarette smoke suspended in the air). But this idea came to nothing at the time, largely because they had guessed, incorrectly, that each jerk of the grain (each zig or zag) must be caused by the impact of a single molecule, and that would mean that the molecules had to be huge -- comparable in size to the suspended particles, which was obviously wrong, even in the 1860s.

Einstein tackled the problem from the other end. He was convinced of the reality of atoms and molecules, and wanted to find ways to convince others. He realised that a small particle suspended in a liquid would be buffeted by the molecules of the liquid, and calculated the kind of buffeting that would be produced. Einstein didn't know much about the history of the study of Brownian motion (throughout his career, Einstein didn't read up much on the history of any subject he was interested in, preferring to work things out for himself from first principles -- this is an excellent way to do physics, provided you are as clever as Einstein). So in his first paper on the subject he calculated the way particles suspended in a liquid ought to move, and only said, cautiously, that 'it is possible that the motions discussed here are identical with the so-called Brownian molecular motion'. Colleagues who read that paper quickly reassured him that what he had described mathematically was exactly the observed Brownian motion -- so, in a sense, Einstein predicted Brownian motion and the experiments confirmed his predictions. He was particularly taken by the idea, still true, that we can directly see the motions responsible for heat by looking at little particles suspended in liquids through a microscope. As he put it in 1905, 'according to the molecular-kinetic theory of heat, bodies of microscopically visible size suspended in a liquid will perform movements of such magnitude that they can be easily observed in a microscope.'

The insight on which Einstein based his calculations was that even an object as small as a pollen grain is being buffeted on all sides by a very large number of molecules at any instant. When it moves, jerkily, in one direction, it is not because it has received a single large blow pushing it that way, but because there has been a temporary imbalance in the buffeting -- a few more molecules hitting it on one side than the other at that particular moment. Einstein then used some neat mathematics to work out the statistics of this kind of buffeting, and to predict the kind of zig-zag path the pollen grain (or whatever) would follow as a result, with each little movement taking place in a totally random direction. It turns out that the distance of the particle from its starting point increases in proportion to the square root of the time that has elapsed. It travels twice as far in four seconds as it does in one second (2 is the square root of 4), and it takes sixteen seconds to travel four times as far as it does in one second (4 is the square root of 16), and so on. And the direction of the particle's net displacement from its starting point is random, whether you look after four seconds, sixteen seconds, or any other interval. This is now known as a 'random walk', and the same kind of statistics carries over into many other areas of science -- including the behaviour of radioactive atoms when they decay.
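The square-root law is easy to check for yourself with a short simulation -- a minimal sketch, assuming unit-length steps in random directions in two dimensions (the step length and the number of simulated walkers are arbitrary choices, not anything from the text):

```python
import math
import random

def rms_displacement(steps, walkers=20000, seed=1):
    """Root-mean-square distance from the start after `steps`
    unit-length moves in random directions, averaged over many walkers."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walkers):
        x = y = 0.0
        for _ in range(steps):
            angle = rng.uniform(0.0, 2.0 * math.pi)
            x += math.cos(angle)
            y += math.sin(angle)
        total += x * x + y * y          # squared distance for this walker
    return math.sqrt(total / walkers)

r1, r4, r16 = (rms_displacement(n) for n in (1, 4, 16))
# Quadrupling the time should double the typical distance:
print(round(r4 / r1, 1), round(r16 / r1, 1))  # close to 2.0 and 4.0
```

Sixteen times as many steps carries the walker only about four times as far, exactly as Einstein's statistics predict.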

The link between Brownian motion and Avogadro's Number was clear to Einstein, and he suggested how experiments might be carried out to study the exact movements of particles suspended in liquids and work out Avogadro's Number from these studies. But, as ever, he didn't carry out the experiments himself. This time the experiments were done by Jean Perrin, in France. Perrin studied the way a suspension of particles in a liquid forms layers, with most of the particles near the bottom and fewer higher up. The few particles that get higher up in the liquid (in spite of being tugged down by gravity) do so because they are kicked upwards by Brownian motion, and the height they reach depends on the number of kicks they receive, which depends on Avogadro's Number.
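The logic of Perrin's measurement can be sketched numerically. In equilibrium the particle count falls off exponentially with height, on a scale set by Boltzmann's constant k = R/N_A, so counting grains at two heights yields Avogadro's Number. Every input number below is invented purely for illustration -- these are not Perrin's actual data:

```python
import math

# Sedimentation-equilibrium relation:
#   n(h) = n(0) * exp(-m_eff * g * h / (k * T)),  with k = R / N_A,
# which rearranges to:
#   N_A = R * T * ln(n0 / nh) / (m_eff * g * h)

R = 8.314           # gas constant, J / (mol K)
T = 293.0           # room temperature, K
g = 9.81            # gravitational acceleration, m/s^2
m_eff = 1.27e-17    # hypothetical buoyant mass of one grain, kg
h = 1.0e-4          # hypothetical height difference between counts, m
n0, nh = 100, 4.7   # hypothetical particle counts at the two heights

N_A = R * T * math.log(n0 / nh) / (m_eff * g * h)
print(f"{N_A:.2e}")  # around 6e23 with these made-up numbers
```

The point is not the particular value but that a microscope, a stopwatch and some patient counting are enough to pin down the number of molecules in a mole.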

In 1908 Perrin used this technique to find a value for Avogadro's Number very close to the value then being found by several different techniques, and his experiments (combined with Einstein's predictions) are generally seen as marking the moment (less than a century ago!) when the idea of atoms could no longer be doubted. As Einstein wrote to Perrin in 1909, 'I had believed it to be impossible to investigate Brownian motion so precisely.' Perrin himself wrote in the same year:

I think it is impossible that a mind free from all preconception can reflect upon the extreme diversity of the phenomena which thus converge on the same result without experiencing a strong impression, and I think that it will henceforth be difficult to defend by rational arguments a hostile attitude to molecular hypotheses.

We trust you are indeed convinced that the atomic model is a good one. But before we move on to look inside the atom we would like to share with you just one more of the diverse phenomena which point in the same direction -- the blueness of the sky.

This story goes back to the work of John Tyndall in the 1860s but culminates, once again, in a piece of work by Albert Einstein. Tyndall realised that the sky is blue because blue light is scattered more easily around the sky than red light. Light from the Sun contains all the colours of the rainbow (or spectrum), with red at one end of the spectrum and blue, indigo and violet light at the other end, everything mixed together to form white light. Red light has longer wavelengths than blue light, which means (among other things) that it does not scatter as easily from small particles as light from the blue end of the spectrum. Tyndall's idea was that the scattering which makes the sky blue, bouncing blue light around from particle to particle across the whole sky, is caused by dust particles and droplets of liquid suspended in the air.

He wasn't quite right. This kind of scattering does explain why sunsets and sunrises are red -- red light penetrates dust or haze near the horizon better than blue light does -- but the 'particles' needed to scatter blue light right around the sky, so that it seems to come at us from all directions, have to be very small indeed. Several physicists in the late nineteenth and early twentieth centuries suggested that the scattering might be caused by the molecules of air themselves. But it was Einstein who carried out the definitive calculations and proved, in a paper he wrote in 1910, that the blueness of the sky is indeed caused by the scattering of light from molecules of air. And, yet again, Avogadro's Number can be derived from the calculation. So you don't even need a microscope to see evidence that molecules and atoms exist -- all you need is to look at the blue sky on a clear day.
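The wavelength dependence behind all this is Rayleigh's law: the strength of this kind of scattering goes as one over the fourth power of the wavelength. A quick sketch, using representative round-number wavelengths (my choice, not figures from the text):

```python
# Rayleigh scattering strength falls off as 1/wavelength^4, which is why
# short-wavelength blue light is bounced around the sky far more than red.
blue_nm = 450.0   # representative blue wavelength, nanometres
red_nm = 650.0    # representative red wavelength, nanometres

ratio = (red_nm / blue_nm) ** 4
print(round(ratio, 1))  # blue is scattered roughly four times more strongly
```

That factor of about four is enough to paint the whole sky blue while leaving the direct beam of the Sun looking yellowish-white.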

To put some of the numbers concerning atoms in perspective, remember that the molecular weight of any substance in grams contains Avogadro's number of molecules. So just thirty-two grams of oxygen, for example, contains just over 6 × 10²³ molecules of oxygen. Later in this book, we shall be discussing the nature of the Universe at large. Our Sun and Solar System are part of a disc-shaped galaxy of stars, the Milky Way Galaxy, which contains a few hundred billion (a few times 10¹¹) stars, each roughly similar to the Sun. In the whole Universe, there are several hundred billion galaxies visible in principle to our telescopes, and a research project I was involved with at Sussex University showed that our Galaxy is slightly smaller than the average disc-shaped galaxy. So how many stars are there altogether? Multiplying the numbers up, a few times several is probably about 10, and 10¹¹ × 10¹¹ is 10²², so we have 10 × 10²², or 10²³, stars in all. In round terms, the number comes out as a bit less than Avogadro's number of stars; which means that there are a few times more molecules of oxygen in just thirty-two grams of the gas (thirteen litres at standard temperature and pressure) than there are stars in the entire visible Universe.
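That back-of-the-envelope multiplication can be written out explicitly, taking the text's rough round numbers and reading 'a few' and 'several' as 3 and 7 (an arbitrary choice for illustration):

```python
stars_per_galaxy = 3e11   # 'a few hundred billion' stars per galaxy
galaxies = 7e11           # 'several hundred billion' visible galaxies
stars_in_universe = stars_per_galaxy * galaxies   # about 2e23 stars

avogadro = 6.022e23       # molecules in 32 grams of oxygen
print(avogadro / stars_in_universe)  # a few molecules for every star
```

However you round the inputs, the ratio stays within a factor of a few of one: a handful of grams of gas really does contain about as many molecules as there are stars in the visible Universe.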

To put this in a human perspective, the maximum capacity of the human lungs is about six litres; so if you take a deep breath, you have more molecules of air in your lungs than there are stars in the visible Universe.

In order to fit so many molecules into such a small amount of matter, each molecule (and each atom) has to be very small. There are many ways to calculate the sizes of atoms and molecules; the simplest is to take the volume of a liquid or solid that contains Avogadro's Number of these particles (thirty-two grams of liquid oxygen, for example) and divide the volume by that number to find out how big each one is -- an idea that goes right back to the work of Cannizzaro but which can now be carried out with much greater precision. When you do this, you find that all atoms are roughly the same size, with the largest (caesium) 0.0000005 mm across. That means that it would take ten million atoms, side by side, to stretch across the gap between two of the points on the serrated edge of a postage stamp.
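The Cannizzaro-style estimate described here can be run through directly. A minimal sketch, using the approximate density of liquid oxygen and treating the volume each molecule occupies as a little cube (both simplifications mine, for illustration):

```python
molar_mass_O2 = 32.0       # grams of oxygen containing Avogadro's Number of molecules
density_liquid_O2 = 1.14   # approximate density of liquid oxygen, g/cm^3
avogadro = 6.022e23

molar_volume = molar_mass_O2 / density_liquid_O2   # cm^3 holding N_A molecules
volume_per_molecule = molar_volume / avogadro      # cm^3 per molecule
size = volume_per_molecule ** (1.0 / 3.0)          # rough diameter, cm

print(f"{size * 1e8:.1f} angstroms")  # a few tenths of a nanometre
```

The answer comes out at a few ångströms -- a few ten-millionths of a millimetre -- in comfortable agreement with the atomic sizes quoted above.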

At the beginning of the twentieth century, the idea of such tiny entities was just becoming fully accepted. But over the next few decades many of the greatest achievements in physics came not just from studying the behaviour of atoms, but from probing the structure within atoms, going down to scales one ten-thousandth the size of the atom to study the nucleus, and then to still smaller scales to study the fundamental particles of nature -- at least, they are thought to be the fundamental particles of nature now, at the end of the twentieth century. We can begin to get a picture of what goes on inside the atom by looking at the way electrons are arranged in the outer parts of atoms, and at how they interact with light.


Excerpted from Almost Everyone's Guide to Science by John R. Gribbin Copyright © 2000 by John R. Gribbin. Excerpted by permission.

Table of Contents

Scientific notation  viii
Introduction: If it disagrees with experiment it is wrong  1
1  Atoms and elements  9
2  Inside the atom  27
3  Particles and fields  48
5  Molecules of life  85
7  Our changing planet  122
8  Winds of change  144
9  The Sun and its family  162
10  The lives of the stars  183
11  The large and the small  201
Further Reading  221


Exclusive Author Essay
Not many people know that I am a failed science fiction writer. I got into science through reading SF at a very early age (eight), and, like the clown who wants to play Hamlet (or the other way around), I always wanted to write fiction. Eventually, I managed to do so, learning the tricks of the trade through collaborations with two skilled novelists, Douglas Orgill and D. G. Compton. I even managed to get a couple of novels published all by myself -- counting the collaborations, I've published more novels than Joseph Heller. But not to such effect.

My agent complained about this, since apart from the first book I wrote with Douglas Orgill (The Sixth Winter) none of these novels were commercially successful. But a strange thing happened. Through the discipline of learning how to plot stories, build up suspense, and get inside believable characters, I found that my nonfiction writing was getting better. Even my agent had to agree. By writing novels, I became a much better science writer, even if the novels didn't succeed on their own terms.

It started out the other way around -- The Sixth Winter grew out of my interest in the nonfiction developments in the scientific study of climate change, and another collaboration with Douglas Orgill, Brother Esau, out of my interest in human evolution. Both appeared in the early 1980s, just before I set to work on what is still my best-known book, In Search of Schrödinger's Cat. Writing about science fact that reads like fiction (quantum physics) clearly benefited from the novel-writing! The circle from fact to fiction and back again was completed when I returned to the theme of human evolution in Being Human (written with my wife, Mary), and the storytelling aspect of writing fiction was particularly useful in a string of scientific biographies that I got involved with, including Richard Feynman: A Life in Science (also written with Mary).

Emboldened by this, I went back to writing fiction. I wrote my very best novel (The Sixth Winter included) and had a good publication deal lined up. Then, the publisher went bankrupt. I still have the novel, if anyone would like to take a chance on publishing it, but I've promised my agent not to write any more unless and until it sees the light of day. Even so, for those with eyes to see (as they say), my latest book, The Birth of Time, clearly bears the stamp of a writer who has been at least once around the fiction-writing block. Since this book deals (in part) with my own scientific work, it involved some personal storytelling which I could never have managed so well without that background.

So when people ask me who the biggest influences on my career have been, the answer has to be, first, Isaac Asimov and Arthur C. Clarke (for their fiction, not their nonfiction), and then Douglas Orgill and David Compton, for teaching me how to do it. And I've not given up. Maybe I'll try a movie script....

John Gribbin is a visiting fellow in astronomy at the University of Sussex and the author of many popular science books, including In Search of Schrödinger's Cat, Almost Everyone's Guide to Science, and Q Is for Quantum. He is a fellow of the Royal Society of Literature (a unique honor for a science writer).
