The Birth of Time: How Astronomers Measure the Age of the Universe
by John Gribbin
How old is the universe? This engrossing book recounts how scientists have achieved the definitive answer to one of the great scientific mysteries of our time. Research astronomer John Gribbin tells the story of the struggle to determine the age of the universe and offers an insider's view of the thrilling breakthrough of the 1990s, when Hubble Space Telescope data revealed that the universe is between 13 and 16 billion years old -- older by at least one billion years than the oldest stars.
- Yale University Press
- Product dimensions: 5.85(w) x 8.55(h) x 1.15(d)
Read an Excerpt
All Things Must Pass
The Discovery of Cosmic Time
When did time begin? Throughout most of human history, to most people, the question would have been meaningless. The earliest and most widespread view of time, in cultures as diverse as the Hindus, the Chinese, the civilisations of Central America, Buddhists, and even the pre-Christian Greeks, saw it in terms of cycles of birth, death, and rebirth. Like the changing cycle of the seasons, in which the Earth itself is constantly renewed, the Universe was seen as eternal, but changing in a regular rhythm. Even God is seen as being reborn, time and again, in the Buddhist and other religions.
But in the Christian religion which came to dominate the European culture from which the modern scientific investigation of the world sprang, there is only one God, and there was one unique creation event in which the Universe was born. That modern scientific investigation of the world only began in the seventeenth century, with the work of Galileo, Descartes, and Newton. Until the end of the eighteenth century, there was no conflict between the estimate of the age of the Universe calculated by theologians and the estimates made by scientists, for the simple reason that the scientists had no basis on which to make such estimates. And, rather than the vast stretch of ancient time (perhaps infinitely long) allowed by other religions, the Christian establishment taught that the world (a term synonymous in those days with the modern term Universe) had been created in the year 4004 B.C.
This date was not plucked at random out of the air as some wild guess by the priests, but was actually a serious attempt to relate the events described in the Bible to the world at large. It began in quite a scientific way, but the calculation became elaborated to the point of ludicrousness not long before Isaac Newton published his epic book Philosophiae Naturalis Principia Mathematica (generally referred to as the Principia), in which he laid down the principles of the scientific method which has taken us, in a little over three hundred years, from the first understanding of the orbits of the planets around the Sun to an understanding of the birth of the Universe itself, and a sound scientific determination of when that event happened.
The date for the beginning of time that Newton himself would have been taught as the Gospel truth derived initially from a calculation made by Martin Luther and his colleagues in the sixteenth century. They had based their estimate on counting back the genealogies in the Old Testament, all the way from Jesus Christ to Adam himself, and came up with a date for the Creation of 4000 B.C. This was a nice round number (what modern scientists would call an "order of magnitude" estimate), and it probably does tell us something meaningful about the timing of events described in the Bible. But in 1620, Archbishop James Ussher published his Sacred Chronology, which developed these ideas further. The most significant contribution made by Ussher was to shift the timescale back by four years. Johannes Kepler, the pioneering German astronomer born in 1571, had suggested that the darkening of the sky during the crucifixion of Jesus must have been caused by a solar eclipse, and by Kepler's day astronomers were able to calculate that a suitable eclipse had occurred four years earlier than the equivalent date used by Luther, the inference being that all the events in the Lutheran genealogy occurred four years earlier than he had thought, including the Creation itself.
This revision of the timescale by Ussher to set the date of the Creation as 4004 B.C. was already going far beyond the accuracy of the method being used; even if the method was a good one, and they had got the date of the crucifixion exactly right, what chance was there that all the "begats" recorded in the Bible were accurate to within four years? But the situation became even sillier in 1654, when John Lightfoot, Vice-Chancellor of the University of Cambridge, pronounced that from his study of the scriptures he had determined the final moment of the Creation, the precise moment when Adam himself was created, as 9 A.M. (Mesopotamian time) on Sunday, 26 October 4004 B.C. Isaac Newton was in his twelfth year when Lightfoot made that pronouncement, and the Principia would not be published until 1687. Although some may have had doubts about Lightfoot's "improvement" to the timescale, the date of 4004 B.C. for the Creation was noted in the margin of the Authorised Version of the Bible until well into the nineteenth century, when science at last became capable of mounting a serious challenge to religious orthodoxy on this point. But this is not to say that some people had not had their doubts about the biblical timescale, even before it became enshrined in the margin of the Authorised Version.
The thing that made open-minded people think that the Earth must have had a much longer history than a few thousand years was the fossil record in the rocks. Time and again over the past thousand years, different scientists were independently struck by the need for a long timescale in order to explain how the fossilised remains of diverse species got to be where they are today. The first person that we know to have puzzled over this phenomenon, and to have written his thoughts down so that they have been preserved for us to read, was the Arab scholar Abu Ali al-Hassan ibn al-Haytham, usually known to later generations of scientists by the Europeanised version of his name, as Alhazan. He was born in about A.D. 965, and died in 1038, so his productive period as a scientist covered the decades either side of A.D. 1000. He is best known for his work on optics, which remained unsurpassed for more than five hundred years (indeed, his book on optics was translated into Latin in the twelfth century, republished in Europe under the title Opticae Thesaurus in 1572, and was regarded as the standard text for more than a further hundred years, until Newton published his Opticks in 1704). But Alhazan was a wide-ranging and original thinker, who noticed the existence of fossilised remains of fish in rock strata high above sea level, in mountainous regions. He realised that the fish must have died and been covered in sediments in the ocean, and that the ocean floor had been slowly uplifted to make the mountains, a process clearly requiring a very long stretch of time, although he had no means of calculating just how long.
The standard explanation for the origin of fossils in those days was, of course, the biblical Flood. If the entire Earth really had been covered by water, including the mountain tops, then in principle there would be no difficulty in explaining how the remains of fish came to be found on mountain tops. But it wasn't just fish that showed up in the fossil record. Leonardo da Vinci (who lived from 1452 to 1519) pointed out that the fossilised remains of clams and sea-snails could be found in the mountains of Lombardy, 400 kilometres from the nearest sea (the Adriatic). There was no way that clams could travel 400 kilometres in the forty days and forty nights that the rain lasted, plus the 150 days that the waters of the Flood had, according to Scripture, covered the Earth. And there are many parts of the world where similar fossils are found much further from the present-day boundaries of the ocean.
In the seventeenth century, this kind of argument was elaborated by Niels Steensen (a Danish medical doctor who worked in Italy and wrote under the latinised version of his name, Nicolaus Steno). He noted the similarity between certain fossils and the teeth of modern sharks, and argued, in a book published in 1667, that fossils were indeed produced by sedimentation on the sea floor, and later uplifted by geological activity. The idea was taken up by Robert Hooke, a contemporary of Steno and one of the founders of the Royal Society. By the end of the seventeenth century, there was a serious scientific challenge to the notion of the biblical Flood as the explanation of fossils, a growing realisation that the surface of the Earth was involved in upheavals that could turn sea floor into mountains, and by implication a challenge to Archbishop Ussher's timescale. But there was no clear idea of the sort of timescale involved in such processes. The first step towards a scientific assessment of the age of the Earth came only towards the end of the eighteenth century, from the work of the French naturalist Georges-Louis Leclerc, Comte de Buffon.
Leclerc, who was born in 1707, was the son of a wealthy lawyer, and studied law himself before turning to science. He became a leading naturalist, the Director of the Royal Botanical Gardens (the Jardin du Roi) in Paris, and was made Comte de Buffon by Louis XV in 1771. Among his many interests, he was one of the first people to express clearly (in his book Les Epoques de la Nature, published in 1778) the idea that all the observed variety of topographical features over the surface of the Earth could be explained as a result of the slow working of processes visible today, over geological time. And, instead of invoking God as the direct, "hands on" Creator of the Earth, Buffon came up with a plausible (at the time) scientific explanation for the origin of our planet, suggesting that it had formed from a ball of molten material, torn out of the Sun by the impact of a comet. The question this raised was, how long would it have taken for this molten ball of rock to have cooled to the state it is in today?
In fact, a century before Buffon, Isaac Newton had mentioned in his Principia that a globe of red-hot iron as big as the Earth would take 50,000 years to cool down. But this was not taken as a serious estimate of the age of the Earth, and passed almost unnoticed alongside the much deeper scientific insights provided by the Principia. Buffon improved on Newton's estimate by carrying out a series of experiments with balls of iron (and other substances) of different sizes, heating them up until they glowed red and were on the point of melting, and then observing how long it took for them to cool down. Using this information, he calculated that if the Earth had indeed been formed in a molten state, it would have taken 36,000 years to cool to the point where life could exist, and a further 39,000 years to cool to its present temperature. That pushed the date of the creation of the Earth back to 75,000 years, almost twenty times further into the past than the date then enshrined in religious dogma.
Theologians of the day were unhappy about Buffon's revision of the timescale of Earth history, and the encroachment of science into what had been the theologians' preserve. It was the beginning of a debate that was to rumble on into the twentieth century, before science at last came up with a solidly based estimate of the age of the Earth that could encompass the timescales required by geology and evolution. But in the nineteenth century, even though estimates of the age of the Earth were revised upwards dramatically, compared even with Buffon's estimate, both geology and evolution were always pointing to a longer timescale than anything that could be explained by the laws of physics as they were then understood.
The next step was taken by another Frenchman, Jean Fourier, who lived from 1768 to 1830. Fourier's lasting contribution to science was the development of mathematical techniques for dealing with what are known as time-varying phenomena. Fourier analysis can be used, for example, to break down a complicated pattern of pressure variations in a sound wave into a set of simple waves, or harmonics, which can be added together to reproduce the original sound. But even many physicists and mathematicians who happily use Fourier's techniques as labour-saving devices are unaware that he developed those techniques not through any particular love of the mathematics, but because he was fascinated by the way heat flowed from a hotter object to a cooler one, and needed to develop the mathematical tools in order to describe heat flow.
Where Buffon had measured the rate at which lumps of material of different sizes cooled down, and had then extrapolated his empirical findings up to estimate the rate at which the whole Earth would cool, Fourier developed laws (mathematical equations) to describe heat flow, and then used them to calculate how long it would take for the Earth to cool. He also, crucially, made allowance for a factor that Buffon had overlooked. He realised that although the Earth is cool on the outside today, it is still hot in its interior (as the activity of volcanoes demonstrates). The temperature of molten rock, which still exists inside the Earth, is more than 6,000 degrees on the Celsius scale, and Fourier's equations could describe how heat flowed outwards from the hot interior of the planet through the layers of cooler material at the surface, layers of solid rock which act as an insulating blanket around the molten material inside the Earth, holding the heat in and ensuring that the planet takes much longer to cool down than Buffon had estimated.
And I do mean much longer. The number that came out of Fourier's equations was so staggering that, as far as we know, he never brought himself to write it down (or if he did, he burnt the paper he wrote it on before anyone else saw it). What he did write down, in 1820, and leave for posterity, was the formula for the age of the Earth, based on these arguments. It is easy to put the numbers into the formula and get the answer out, and Fourier must have done this for himself. But he never told anybody, because the age he came up with was beyond anyone's wildest imagination at the time: not 75 thousand years, but 100 million years. And yet, within fifty years the number that was so staggeringly large that Fourier could not bring himself to write it down in 1820 was not only widely known, but was regarded as being embarrassingly small, in the wake of the development of geological ideas about the Earth itself, and of the theory of evolution by natural selection.
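Fourier's own formula is not reproduced here, but the flavour of the cooling argument can be sketched with the closely related conduction estimate that Kelvin later made famous: the age at which a conductively cooling Earth would show today's surface temperature gradient. Every parameter value below is an illustrative modern assumption, not a figure from the text.

```python
# Sketch of the conductive-cooling age of the Earth, in the spirit of
# Fourier's heat-flow argument (Kelvin's later version of the formula).
# All numerical values are illustrative assumptions, not figures from
# the text: t = (T0 / gradient)^2 / (pi * kappa).
import math

T0 = 3900.0       # assumed initial temperature of molten rock, degrees C
gradient = 0.037  # assumed surface geothermal gradient, degrees C per metre
kappa = 1.2e-6    # assumed thermal diffusivity of rock, m^2 per second

age_seconds = (T0 / gradient) ** 2 / (math.pi * kappa)
age_years = age_seconds / 3.156e7  # seconds per year

print(f"{age_years:.2e} years")  # on the order of 100 million years
```

With these (hypothetical but plausible) inputs the answer comes out near 10^8 years, the same order as the 100 million years that so staggered Fourier.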
Although Buffon had realised that the same physical processes that operate on Earth today could explain how the world had got into its present state, the person who first expressed the idea with full force, and who seems to have had a clear idea of just how long a period of time would be involved, was the Scot James Hutton, who was some twenty years younger than Buffon, and worked on geology in the second half of the eighteenth century. At that time, the established wisdom was that terrestrial features such as mountain ranges might indeed have been thrust upward by mighty forces, but that such events occurred catastrophically, in a very short space of time (perhaps literally overnight); it was also widely accepted that they might involve supernatural forces, and the biblical Flood was included as the classic example of such a catastrophe. By contrast, the idea that only the same natural processes that we see at work today are needed to explain how the features of the Earth have changed over time became known as uniformitarianism. In modern science, the distinction is rather blurred, because it is now accepted that what seem to be catastrophic events on any human scale (for example, the impact from space that brought an end to the era of the dinosaurs, about sixty-five million years ago) do occur on Earth from time to time. But the point to bear in mind is that on a long enough timescale, even such rare (by human standards) events are part of the natural, uniformitarian processes that have shaped the Earth. In Hutton's day, the catastrophists had to envisage all of the events that had built mountains and carved valleys, created islands and deep oceans, as having happened within the span of six thousand years: catastrophic indeed!
Hutton, who was born in 1726, studied law and medicine, but never practised either; in the early 1750s, he settled on farming as a career (his father, although primarily a merchant, owned a small estate in Berwickshire), but devoted much of his time to chemistry, while becoming increasingly intrigued by geology as a result (initially) of studying the rocky foundations of the land he farmed. In the 1760s, Hutton made a fortune out of the invention of a method for manufacturing the important industrial chemical sal ammoniac (ammonium chloride), and in 1768 he settled in Edinburgh and devoted the rest of his life (he died in 1797) to scientific pursuits.
Hutton was the first person to point out, for example, that the heat of the Earth's interior could explain, without any need for supernatural intervention, how sedimentary rocks, laid down in water, could later be fused into granites and flints. Heat from inside the Earth, he said, was also responsible for pushing up mountain ranges and twisting geological strata. And, most important of all in the present context, he realised that this would take a very long time indeed. In one striking example, Hutton came up with an analogy that used the same kind of direct, human experience that the theologians had already used in their calculations of the date of the Creation. Hutton pointed out that Roman roads, laid down in Europe two thousand years earlier, were still clearly visible, almost unmarked by erosion. Clearly, in the absence of catastrophes the time required for natural processes to have carved the face of the Earth into its present form must be enormously longer than two thousand years, and, he specifically pointed out, much, much longer than the six thousand years offered by Ussher's interpretation of Scripture. How much longer? Hutton wasn't even willing to guess. In a paper published by the Royal Society of Edinburgh in 1788, he wrote: "The result, therefore, of our present enquiry is, that we find no vestige of a beginning, no prospect of an end." He was saying that, as far as eighteenth-century science was concerned, the origin of the Earth was lost in the mists of time, and its end lay equally incomprehensibly far in the future.
Hutton's ideas had some impact in scientific circles (and were attacked by theologians of the old school), especially after his friend John Playfair published an edited version of Hutton's writings in 1802. But Hutton's writing style was impenetrably dense (in spite of the occasional flash of a felicitous example like the one mentioned above), and his work did not make a deep impression in the world at large. Uniformitarianism, and the implied extension of the geological timescale, only really became a subject of public debate after another Scot, the geologist Charles Lyell (who was born in 1797, the year Hutton died) took up the idea and promoted it at the beginning of the 1830s.
Like Hutton, Lyell studied law but, also like Hutton, his scientific interests soon came to dominate his life. His father was wealthy enough to support the young man, and in the 1820s he travelled widely on the continent of Europe, where he saw the evidence of the effects of the forces of nature at work first hand. He was particularly impressed by a visit to the region around Mount Etna. The fruits of Lyell's travels appeared in his three-volume work Principles of Geology, published between 1830 and 1833. The subtitle of the first volume of the series clearly set out his position: Being an Attempt to Explain the Former Changes of the Earth's Surface by Reference to Causes Now in Operation. Unlike Hutton, Lyell wrote clearly and accessibly, opening up these ideas for any educated person of the time. Indeed, the work is still just as accessible, still worth reading, and is available in a Penguin Classic edition.
Lyell's book made a particularly striking impression on the young Charles Darwin. Darwin had been born in 1809, and at the time he set out on his famous voyage in the Beagle, at the end of 1831, he regarded himself, in scientific terms, as primarily a geologist. He took the first volume of Lyell's masterwork with him on the voyage; the second volume caught up with him during the ship's circumnavigation of the globe; and the third volume was waiting for him when he returned to England in 1836. He later wrote that the book "altered the whole tone of one's mind ... when seeing a thing never seen by Lyell, one yet saw it partially through his eyes." And, in the most telling phrase of all, in the context of his theory of evolution by natural selection, Darwin said that Lyell had given him "the gift of time." For, of course, the theory of natural selection also explains how great changes have been brought about by very slow, uniformitarian processes operating over enormous amounts of time. Evolution by natural selection can only explain how the variety of forms of life on Earth evolved from a common ancestor if there has been a huge span of time during which evolution could do its work. After the Origin of Species was published in 1859, both biology and geology were telling scientists that the Earth must be very ancient indeed, with "no vestige of a beginning"; and, by implication, the Sun must be at least as old as the Earth, or life could not have existed and evolved on Earth over the requisite time span. This threw the biologists and geologists into direct conflict with the physicists, and in particular with the greatest physicist of the time, Lord Kelvin.
The problem was not that the physicists didn't know what was going on. Quite the reverse. By the middle of the nineteenth century they understood the laws of physics well enough to be able to say with absolute confidence that, according to the known laws of physics, there was absolutely no way that the Sun could have been shining for as long as Darwin and the geologists required.
Lord Kelvin started life (he was born in 1824) as plain William Thomson, and he was another of the Scots that played a large part in the development of British science. As well as being a great physicist, Thomson was very practically minded, and in the great tradition of the Victorian entrepreneurs he applied his talents not just to science but to engineering, making a fortune from his patents and being the brains behind the first successful transatlantic telegraph cable (as profound a development in the 1860s as the development of the communications satellite was to be in the 1960s). It was for his services to industry, adding to the wealth of Britain, not his scientific work, that he was first knighted (in 1866) and then ennobled, in 1892, as the first Baron Kelvin of Largs. Although most of his great scientific work was behind him by the time he became a peer, he is usually referred to even in scientific circles today simply as Lord Kelvin, and the absolute scale of temperature, which he devised from the fundamental principles of thermodynamics, is now known as the Kelvin scale, not the Thomson scale. Zero on the Kelvin scale is at -273 degrees on the Celsius scale, but the size of each degree on the Kelvin scale is the same as on the Celsius scale.
Kelvin was the towering figure in physics in Britain in the second half of the nineteenth century, and almost as dominant in the context of European science. He graduated with high honours from the University of Cambridge in 1845, and a year later, at the age of twenty-two, became professor of natural philosophy (the old name for what we now call physics) at the University of Glasgow. He held the post for fifty-three years, until he retired in 1899. Among his many achievements in science, Kelvin laid the foundations of thermodynamics (formulating the famous Second Law of Thermodynamics, which says that heat cannot flow unaided from a cooler object to a hotter one, in 1851), and helped to develop the theory of the electromagnetic field. It was through his study of thermodynamics that he was led to ponder the question of the ages of the Earth and the Sun.
The most important thing that thermodynamics teaches us is that nothing lasts forever. All things must pass, and everything wears out. This led Kelvin to exactly the opposite conclusion from that drawn by Hutton concerning the history of the Earth. In 1852 he wrote: "Within a finite period of time past the earth must have been, and within a finite period of time to come the earth must again be, unfit for the habitation of man as at present constituted, unless operations have been, or are to be performed which are impossible under the laws to which the known operations going on at the present in the material world are subject." Of course, there is not really any conflict between Kelvin's assertion that the age of the Earth is finite and Hutton's assertion that there was no vestige of a beginning in the geological record. We now know that Kelvin's finite age is so enormously long that no vestige of the beginning could, indeed, have been detected in the eighteenth century, even though we can perceive it quite clearly today. But there are two points worth picking up concerning Kelvin's comment, and his continuing contribution to the debate over the next half-century (he died in 1907). The first is that Kelvin was working rigorously within the laws of physics known in his day; the second is that because of his huge prestige, the view that the Earth had a relatively short lifetime, certainly much shorter than that required by the geologists and evolutionists, held sway right into the twentieth century.
Partly under the stimulus of pressure from, first, the geologists and later the evolutionists, Kelvin refined his thermodynamic arguments in successive stages through the second half of the nineteenth century. Some of his ideas built from the work of his British contemporary John Waterston, and a lot of Kelvin's thinking about the way the Sun might gain its energy was duplicated by the German physicist Hermann von Helmholtz, leading to one of those bitter debates about priority that all too often plague science. But there is no need to follow every step in Kelvin's development of his ideas about the Sun, and today we can readily agree that Helmholtz had the same idea independently, so that the timescale for the life of the Sun (or, indeed, any star) that comes out of the calculation is often called the Kelvin-Helmholtz timescale (in Germany, of course, it is known as the Helmholtz-Kelvin timescale). The complete version of the idea was presented in a lecture by Kelvin at the Royal Institution, in London, in 1887. It is impeccable science, and it goes like this.
The Sun is a very large ball of gas, with a mass roughly 330,000 times the mass of the Earth, and a diameter roughly 109 times the diameter of the Earth. It ought to be shrinking, under its own weight; but it is held up by the pressure associated with the heat of its interior. But that heat has to come from somewhere; the laws of thermodynamics spelled out, more clearly to Kelvin's generation than ever before, that there must be a source of energy to keep the Sun shining. The major sources of energy known in Kelvin's day were chemical sources, and the industrial revolution in Britain was being fuelled by the combustion of coal. But it was easy to calculate that if the Sun were entirely made of coal, burning in pure oxygen, it could maintain its output of energy for only a few thousand years. Updating the argument to express it in terms of the fuel that powers the modern industrialised world, if the Sun were made entirely of gasoline burning in pure oxygen, it could maintain its present heat for only about thirty thousand years.
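The arithmetic behind the coal estimate is easy to check. A minimal sketch, using standard modern values (assumed here for illustration) for the Sun's mass and power output and a rough figure for the energy released by burning coal:

```python
# How long could chemical burning power the Sun? Back-of-envelope check
# of the argument above. Values are standard modern figures, assumed
# here for illustration.
M_sun = 1.99e30      # solar mass, kg
L_sun = 3.8e26       # solar luminosity (power output), watts
coal_energy = 3.0e7  # rough energy from burning coal, joules per kg

lifetime_seconds = M_sun * coal_energy / L_sun
lifetime_years = lifetime_seconds / 3.156e7  # seconds per year
print(f"{lifetime_years:.0f} years")  # a few thousand years
```

The answer lands in the range of a few thousand years, exactly the conclusion that made chemical energy untenable as the Sun's power source.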
The insight which Kelvin and Helmholtz thought up independently was that there is actually another source of energy, other than chemical energy, which the Sun can draw on: gravity. When an object falls in a gravitational field, it gets accelerated, picking up kinetic energy (energy of motion). If it is then brought to an abrupt halt by hitting the ground, this kinetic energy is turned into heat, as it is dissipated as thermal motion among the atoms and molecules that make up the object (and the atoms and molecules of the object it hits). In the early stages of the development of the idea, Kelvin considered how much heat might be released by allowing meteors, comets, or even whole planets to collide with the Sun. But then he realised that this is not necessary. The greatest source of gravitational energy, as far as the Sun is concerned, is the most massive object in the Solar System, the Sun itself.
It is one of the insights from thermodynamics that heat is associated with atoms and molecules moving about and colliding with one another: the faster they move, the hotter the object is. If you imagine all the material that now makes up the Sun dispersed into a thin cloud in space, then falling together under the influence of gravity to make the Sun, it is easy to see how gravitational energy will be converted into heat as all the atoms and molecules move faster and faster, and collide with one another. Indeed, this is still the way that astronomers believe stars form and get hot in the first place. The additional insight from Kelvin and Helmholtz is that even in its present state, as a relatively compact, hot ball of gas, the Sun can draw on its remaining reserves of gravitational energy and turn them into heat by shrinking slowly. Shrinking means that all the particles in the Sun move closer to the centre, falling in its gravitational field and gaining kinetic energy, so that they jostle one another more vigorously, getting hot. If the Sun were shrinking at a rate of only 50 metres a year, Kelvin calculated, it would release enough energy to explain its observed brightness. This amount of shrinking was far too small to be detected by astronomers in the nineteenth century, so there was no obvious reason to reject the idea. It extended the timescale available for geology and evolution enormously, but not enormously enough. In round terms, the Kelvin-Helmholtz timescale says that a star like the Sun must fizzle out in about twenty million years. And this was still far too short to satisfy the needs of geology and evolution. The more clearly Kelvin expressed his argument, and the more accurately he refined his calculations, the more obvious it became that there really was a conflict.
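The Kelvin-Helmholtz timescale itself can be estimated to order of magnitude by dividing the Sun's gravitational binding energy by its luminosity. The values below are modern ones, assumed for illustration, and numerical factors of order unity (from the virial theorem and the Sun's density profile) are ignored:

```python
# Order-of-magnitude Kelvin-Helmholtz timescale: gravitational binding
# energy G*M^2/R divided by luminosity L. Modern values, assumed for
# illustration; factors of order unity are ignored.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 1.99e30    # solar mass, kg
R = 6.96e8     # solar radius, m
L = 3.8e26     # solar luminosity, watts

t_kh_seconds = G * M ** 2 / (R * L)
t_kh_years = t_kh_seconds / 3.156e7  # seconds per year
print(f"{t_kh_years:.1e} years")  # a few tens of millions of years
```

The result is a few tens of millions of years, the same order as the roughly twenty-million-year figure quoted above, and far short of what geology and evolution demanded.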
In 1892, the year he received his peerage, Kelvin returned to the remark he had made in 1852, and updated it: "Within a finite period of time past the earth must have been, and within a finite period of time to come must again be, unfit for the habitation of man as at present constituted, unless operations have been and are to be performed which are impossible under the laws governing the known operations going on at present in the material world." And by 1897 he had set the upper limit on the lifetime of the Sun as twenty-four million years. But, in exactly the decade that Kelvin was reaching these conclusions, based on impeccable application of the known laws of physics, other scientists were realising that what he referred to as "the laws governing the known operations going on at present in the material world" were not the whole story. The discovery of radioactivity revealed the existence of previously unknown laws of physics, and previously unknown sources of energy, which would soon resolve the conflict between the timescales of geology and evolution and the timescale of the Sun.
The 1890s were exciting times for physics. The term revolution is over-used as much in science as in other walks of life, but the events following the discovery of X-rays in 1895 were as revolutionary as anything that has ever happened in science.
X-rays were discovered by the German physicist Wilhelm Röntgen in 1895, and the discovery was announced on 1 January 1896. Röntgen had been studying what were then called cathode rays (we now know them to be streams of electrons), produced from the negatively charged plate of an electric discharge tube (a "vacuum tube" or cathode ray tube, not unlike the picture tube in a modern TV set). He discovered, by chance, that the cathode rays striking the glass wall of the tube produced a secondary form of radiation, which made a detector screen nearby, painted with barium platinocyanide, glow when the tube was switched on. Although this previously unknown form of radiation was initially called "Röntgen radiation," it soon became known as X-radiation, after the familiar mathematical symbol for the unknown quantity.
The discovery of X-radiation encouraged other physicists to search for "new" forms of radiation, and the most spectacularly successful of these seekers was Henri Becquerel, working in Paris. Because Röntgen had discovered that X-rays come from a bright spot on the wall of the vacuum tube, where the cathode rays made the material of the glass fluoresce, Becquerel looked for similar kinds of activity associated with phosphorescent salts (salts that glow in the dark), including some uranium salts. Phosphorescent material is usually "charged up" by being exposed to sunlight, and glows for a while afterwards, before the glow fades and the material has to be recharged by a further dose of sunlight. Becquerel soon found that some of his phosphorescent salts didn't just produce a visible glow in the dark, but also produced yet another kind of radiation. This radiation could escape and fog a photographic plate nearby, even when the plate was wrapped in thick black paper. This was exciting enough in itself. But at the end of February 1896 Becquerel made a sensational discovery.
In his latest series of experiments, Becquerel had prepared a photographic plate, wrapped in thick black paper so no light could penetrate, and a piece of copper in the shape of a cross (he had already found that the new radiation could not penetrate metal). The copper cross sat on top of the wrapped photographic plate, and a dish of uranium salts sat on top of the copper. Becquerel planned to expose the salts to sunlight and see if the resulting activity of the salts produced enough radiation to make an imprint of the outline of the copper cross (a kind of radiation shadow) on the photographic plate. Because the skies over Paris were overcast for several days, Becquerel left the prepared experiment in a cupboard, ready and waiting. Then, perhaps because he had got bored, he developed the photographic plate anyway, even though the experiment had not been exposed to sunlight. It showed a clear image of the copper cross. Becquerel had not only discovered a new form of radiation (soon to be called radioactivity); he had discovered a new form of energy, because the activity of the salts clearly did not require an input of energy from the Sun (unlike normal phosphorescence), nor was there any "man-made" input of energy to the system, like the electricity which drove the cathode rays to make the X-rays in Röntgen's experiment. It looked as if uranium salts could sit quietly radiating energy out into the world at large from no visible source, in seeming contradiction to one of the most cherished laws of science, the law of conservation of energy.
Becquerel's discovery was taken up and carried forward by the husband and wife team of Marie and Pierre Curie, also working in Paris. It was Marie Curie who introduced the term radioactive substance, in a paper published in 1898. The team showed that the amount of radioactivity in a sample of salts containing uranium depended on the amount of uranium in the sample (so it was clear that the radioactivity came from uranium itself), and they identified two previously unknown radioactive elements (that is, previously unknown elements, not just known elements that were not known to be radioactive), polonium and radium. The key implication of this work is that radioactivity is a property of the individual atoms of an element: it is not something to do with the chemistry of uranium salts, or any other compound. And this was all going on at the same time that, over in Cambridge, J. J. Thomson (no relation to Lord Kelvin) was discovering that cathode rays are actually tiny charged particles, the particles we now call electrons, which had somehow been chipped away from atoms, which had previously been regarded as the indestructible and unchanging building blocks of matter.
The person who put all of the pieces of the puzzle together, coming up with a new timescale for the Earth and pointing the way towards a new energy source for the Sun, was the New Zealand-born physicist Ernest Rutherford, who had been born in 1871 (he lived until 1937). Rutherford worked with Thomson in Cambridge in the 1890s, before moving to McGill University, in Montreal, in 1898; he stayed there until 1907, when he took up a post at the University of Manchester, in England. He moved again, to become Director of the Cavendish Laboratory in Cambridge, in 1919, and stayed there for the rest of his career.
The great thing about radioactivity is that it gives you both a timescale and an energy source, in one package. Rutherford showed that the radiation Becquerel had discovered was actually a mixture of two kinds of radiation, which he called alpha rays and beta rays. It has since been established that beta rays are fast-moving electrons, like cathode rays but carrying much more energy. Rutherford himself showed that alpha rays are a stream of particles, that each alpha particle has the same mass as four hydrogen atoms, and that each alpha particle carries two units of positive charge. He concluded, correctly, that an alpha particle is identical to a helium atom that has lost two units of negative electric charge; that is, a helium atom that has lost two electrons. This was a significant step forward, less than ten years after the identification of the electron itself as a component of atoms.
Jumping ahead in our story a little, to Rutherford's work in Manchester, it was also a team under Rutherford's direction that discovered the basic structure of the atom. Hans Geiger and Ernest Marsden fired alpha particles (produced by natural radioactivity) towards thin sheets of gold foil, and were surprised to discover that although most of the alpha particles passed right through the foil as if it were not there, just occasionally one of the alpha particles bounced back as if it had struck something solid. Rutherford interpreted these results as indicating that every atom consists of a very compact core of positively charged material (which he called the nucleus) surrounded by a tenuous cloud of negatively charged electrons. An alpha particle can brush through the electron cloud as if it were not there, like a cannonball whizzing through a fog bank. But, just occasionally, an alpha particle (which itself has positive charge) will meet an atomic nucleus more or less head on, and be deflected by electrical repulsion (as if the cannonball whizzing through the fog bank hits a solid object concealed by the fog, and bounces off). To put this in perspective, the largest atom is just 0.0000005 millimetres (that is, 5 × 10⁻⁷ mm) across; within any atom, the size of the nucleus compared with the size of the electron cloud that makes up the bulk of the atom is in the same proportion as a grain of sand to the volume of the Albert Hall.
Armed with this image of an atom, we can go back to the story of radioactivity. A hydrogen atom is regarded as being made up of a single proton (relatively massive, and carrying one unit of positive charge) and a single electron (with only one two-thousandth the mass of a proton, and carrying one unit of negative charge); a helium atom has a nucleus containing two protons and two neutrons (electrically neutral particles almost identical in mass to a proton) with two electrons outside the nucleus. An alpha particle is exactly the same as a helium nucleus that has no electrons associated with it. And either electrons (beta rays) or helium nuclei (alpha rays) can be ejected from the nuclei of radioactive atoms in the process of radioactive decay. Working with Frederick Soddy, in Canada, Rutherford explained that radioactivity is associated with the disintegration of atoms (thanks to his later work, we would now say the disintegration of nuclei), when atoms of the radioactive element are converted into atoms of another element. This immediately tells us that the source of energy associated with radioactivity is, after all, finite, and does not violate the law of conservation of energy. Radioactivity involves a rearrangement of the nuclei of atoms into more stable, lower energy states, with the "spare" energy being released along the way. This is exactly equivalent to the way energy is released by chemical reactions (for example, by burning) when atoms are rearranged into lower energy states and the spare energy is released (in this case, as heat and light) along the way. Once all the original radioactive atoms in a sample have disintegrated in this way, the radioactivity, and the release of energy, will stop; but it may be a very long time before this happens.
Rutherford also discovered that whatever amount of radioactive material (in the form of a pure radioactive element, such as radium or uranium) you start out with, half of the atoms in the sample will decay in this way in a certain amount of time, now called the "half-life" of the element. He didn't even have to wait many years to measure the half-lives of interesting substances, because this kind of law can be extrapolated from the way the decay rate of a sample, measured in the laboratory, changes over a much shorter period of time (this was, of course, one of the first things Rutherford looked at, to see if the radioactivity of his samples was decreasing as time passed, as it must do if the law of conservation of energy holds).
In a sample of radium, for example, after 1,602 years just half of the atoms will have decayed into atoms of the gas radon, as alpha and beta particles are ejected from the original radium nuclei. In the next 1,602 years, half of the rest of the sample (one quarter of the original atoms) will decay in this way, and so on. This is one of the strange features of the rules of quantum physics, which govern the behaviour of things on the scale of atoms and below; it was the discovery of this kind of behaviour which led Albert Einstein to comment in despair, "I cannot believe that God plays dice." But all the evidence is that Einstein was wrong; in effect, an individual atom (strictly speaking, an individual nucleus) does "play dice," as if at some randomly chosen instant during each half-life each atom (nucleus) rolled a single die, and decayed if the number that came up was odd, but didn't decay if the number that came up was even. An individual nucleus may decay in the next second, or not for thousands of years, and there is no way to tell in advance what it will do. But over a large enough collection of nuclei, the overall behaviour of the sample becomes very regular and predictable. Don't worry about the quantum physics; all that matters for our present story is that, as Rutherford realised, this provides a clock which can be used to measure the age of the Earth.
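The halving rule described above can be written as a one-line formula: the fraction of a sample surviving after a time t is one half raised to the power of t divided by the half-life. A minimal sketch in Python (the function name is illustrative):

```python
# Fraction of a radioactive sample remaining after time t:
#   N / N0 = 0.5 ** (t / half_life)
def fraction_remaining(t_years, half_life_years):
    """Surviving fraction of the original atoms after t_years."""
    return 0.5 ** (t_years / half_life_years)

RADIUM_HALF_LIFE = 1602  # years

# After one half-life, half the radium atoms survive;
# after two half-lives, one quarter survive, and so on.
print(fraction_remaining(1602, RADIUM_HALF_LIFE))   # -> 0.5
print(fraction_remaining(3204, RADIUM_HALF_LIFE))   # -> 0.25
```

No individual atom's fate is predicted by this formula; only the statistical behaviour of a large sample.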
Provided you know how many radioactive atoms you started out with in a sample of rock, all you have to do is to measure how many are left (by measuring the strength of the radioactivity of the sample) to know exactly how many half-lives have elapsed since the rock was formed. But how do you know how much radioactivity there was in the rock in the first place? The first handle on the problem is to find a radioactive decay process which produces a stable product that would not otherwise be present at all in the samples being studied. Then, simply by measuring how much of this "daughter" product is present you know how much of the radioactive "parent" has decayed already.
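That parent-and-daughter bookkeeping translates directly into an age formula: if no daughter atoms were present when the rock formed, the elapsed time is the half-life multiplied by log₂(1 + daughter/parent). A sketch of the idea (the function is illustrative, not a real dating code, and it assumes a closed system with zero initial daughter product):

```python
import math

def radiometric_age(daughter_atoms, parent_atoms, half_life_years):
    """Age of a sample from the daughter/parent ratio, assuming
    no daughter atoms were present when the rock solidified."""
    return half_life_years * math.log2(1 + daughter_atoms / parent_atoms)

# If a rock now holds equal numbers of daughter and parent atoms,
# exactly one half-life has elapsed:
print(radiometric_age(1.0, 1.0, 1602))   # -> 1602.0
# Three daughters for every surviving parent means two half-lives:
print(radiometric_age(3.0, 1.0, 1602))   # -> 3204.0
```

The whole art of radiometric dating lies in justifying that "no daughter at the start" assumption, or correcting for its failure.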
Rutherford himself first tackled the problem, in 1905, by measuring traces of helium trapped inside rocks that contain uranium compounds. The helium could only have been produced by alpha particles from the decay of the uranium, with each alpha particle latching on to two electrons to become an atom of helium. This gave Rutherford and his colleague Bertram Boltwood (an American chemist chiefly based at Yale, who visited Manchester in 1909-10) an estimate of 500 million years for the ages of the relevant rocks. Since any helium that had been present when the rocks were in a molten state would have escaped, and some may have seeped away through cracks in the rock, this was the minimum time since those rocks were laid down, and very much a minimum age for the Earth. Yet it was twenty times longer than the maximum timescale that Kelvin had calculated for the Sun less than ten years before; and this was just the beginning.
Boltwood's key contribution was to take the technique a stage further, looking at all of the products of uranium decay, not just helium. He realised that the ultimate stable product that uranium is transformed into by decay is lead, with radium as an unstable intermediate product. With the decay rates (half-lives) of both uranium and radium known, it was possible in principle to determine the ages of rocks by measuring the amounts of all these substances in them today, assuming that no lead was present at the start. The practical side of this work was far from easy: it involved accurately measuring traces of radium amounting to only 380 parts per billion in various samples of rocks. But by the end of the first decade of the twentieth century it was giving ages for various rock samples in the range from 400 million years to more than two billion years, albeit with some uncertainty in the estimates.
Both Rutherford and Boltwood went on to other work, but the torch was taken up by Arthur Holmes, then working at Imperial College, in London. Holmes dated many rock samples using the uranium-lead technique, and by 1913, he had come up with an age of 1.64 billion years for the oldest of these samples, with relatively small experimental errors. It was Holmes who made the whole business of radiometric dating, as it became known, respectable. He was the first person to use radioactive dating (the term is used synonymously with radiometric dating) to determine the ages of fossils, putting absolute dates into the fossil record for the first time, and over the years that followed he extended the technique by taking on board new ideas and discoveries, most notably the fact that many elements come in different varieties, called isotopes.
All isotopes of an element have the same chemical properties, because each atom has the same number of protons in its nucleus, and therefore the same number of electrons in the cloud around the nucleus. As far as chemistry is concerned, almost all that matters is the number of electrons in the cloud, which is the visible face that an atom shows to other atoms. But different isotopes of the same element have different numbers of neutrons in their nuclei, so they have different masses. The total number of neutrons in the nucleus affects the stability of the nucleus. For example, uranium actually comes in different varieties, the most relevant here being U-238 and U-235. Each uranium atom has 92 protons in its nucleus, but each nucleus of U-238 contains, in addition, 146 neutrons, while each nucleus of U-235 contains 143 neutrons along with its 92 protons. As a result, U-238 (which makes up about 99 percent of all naturally occurring uranium on Earth) has a half-life of 4.51 billion years, while U-235 (which makes up about 0.7 percent of all naturally occurring uranium on Earth) has a half-life of just 713 million years. There are other, rarer isotopes of uranium, but they need not concern us here. What matters, without going into details, is that once scientists understood the nature of isotopes, and had the techniques required to measure the relative abundances of different radioactive isotopes and their daughter products in rock samples, the whole radiometric dating business became that much more accurate.
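The very different half-lives of the two uranium isotopes go a long way towards explaining why U-235 is so rare today. A quick check, assuming for illustration that we simply let equal starting amounts of each isotope decay over the 4.5-billion-year age of the Solar System:

```python
# Half-lives quoted in the text:
U238_HALF_LIFE = 4.51e9   # years
U235_HALF_LIFE = 7.13e8   # years

AGE = 4.5e9  # years since the Solar System formed

# Surviving fraction of each isotope after that time:
f238 = 0.5 ** (AGE / U238_HALF_LIFE)
f235 = 0.5 ** (AGE / U235_HALF_LIFE)

print(f"U-238 surviving: {f238:.2f}")   # about half is still here
print(f"U-235 surviving: {f235:.3f}")   # only about 1 percent is left
```

Roughly half of the original U-238 is still with us, while only about one percent of the original U-235 survives, which is why natural uranium today is so overwhelmingly dominated by U-238.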
By 1921, a debate at the annual meeting of the British Association for the Advancement of Science showed that there was a new consensus. Geologists, biologists, zoologists, and now the physicists as well, all agreed that the Earth must be a few billion years old, and they all agreed that the radiometric dating technique provided the best guide to its age. The final seal of approval came in 1926, in the form of a report from the National Research Council of the US National Academy of Sciences, which endorsed the technique. Since the 1920s, further refinements of the technique (and the discovery of particularly ancient rocks at some sites on Earth) have pushed back the radiometrically determined ages of the oldest known rocks still further. Holmes himself continued to work on the technique (alongside other research) until the end of the 1950s (he died in 1965, at the age of seventy-five), and the current estimate for the ages of the oldest rocks on Earth is 3.8 billion years. Even that, though, is not the end of the story: material from meteorites, pieces of rocky debris that fall to Earth from space, has been dated in the same way, and the oldest of these pieces of cosmic debris have ages around 4.5 billion years. Since meteorites are thought to be samples of the rocky material that was left over from the formation of the planets when the Solar System was born, this is now the best measurement we have of the age of the Solar System, and, by implication, the age of the Sun. Not merely twenty times Kelvin's estimate, based on the accurate application of the known laws of nineteenth-century physics, but two hundred times Kelvin's estimate. The reason for the discrepancy, of course, is that there are laws of physics which were not known to nineteenth-century science.
The first clue comes from radioactivity itself. Radioactive decay releases energy that has been stored in the nuclei of atoms. In the case of long-lived isotopes such as U-238, the energy may have been stored in this way for billions of years, since the uranium was manufactured. (How did the energy get in there in the first place? It was put there by the explosion of a dying star, as I shall shortly explain). What Buffon and Fourier and their contemporaries could not know is that the Earth has not simply cooled down into its present state from a molten glob of material, but its internal heat is maintained by the energy released in radioactive decay still going on in its interior. This pushes back estimates of the "cooling age" of the Earth into the same region of time, billions of years, indicated by the radiometric dating. And it was very quickly apparent to Rutherford's generation of physicists that some form of radioactive energy source might keep the Sun shining for a comparably long interval.
When Lord Kelvin remarked, as he often did late in his career, that the only way to provide a timescale for the Sun longer than a few tens of millions of years would be to invoke unknown sources of energy and new laws of physics, it is clear from the context of these remarks that he meant them as ridicule of such notions, not as something to be taken seriously. Right at the end of the nineteenth century, though, the American geologist Thomas Chamberlin, acutely aware of the new discoveries made by Becquerel and the Curies, made a much more prescient comment, in the journal Science (volume 10, page 11):
Is present knowledge relative to the behavior of matter under such extraordinary conditions as obtained in the interior of the sun sufficiently exhaustive to warrant the assertion that no unrecognised sources of heat reside there? What the internal constitution of the atoms may be is yet open to question. It is not improbable that they are complex organisations and seats of enormous energies. Certainly no careful chemist would affirm that the atoms are really elementary or that there may not be locked up in them energies of the first order of magnitude. No cautious chemist would ... affirm or deny that the extraordinary conditions which reside at the center of the sun may not set free a portion of this energy.
But just how much energy do we have to unlock from these "seats of enormous energies" to keep the Sun shining? One of the most striking analogies was made by the physicist George Gamow, in his book A Star Called the Sun, published in the early 1960s. If an electric coffee percolator is advertised as being so effective that it produces heat at the same rate as heat is produced (on average) over the entire volume of the Sun, he asked, how long would you wait for the pot to boil the water to make the coffee? The surprising answer to Gamow's question is that even if the pot were perfectly insulated so that no heat could escape while you were waiting, it would take more than a year (the time does not depend on the size of the coffee pot) for the water to boil.
The key to the puzzle is that on average each gram of the mass of the Sun produces very little heat. Astronomical measurements show that 8.8 × 10²⁵ calories of heat energy cross the Sun's surface each second. But the mass of the Sun is 2 × 10³³ grams. So on average each gram of material inside the Sun generates a mere 4.4 × 10⁻⁸ calories of heat per second. This isn't just low by the standards of heat generation in the average coffee percolator; it is much less than the rate at which heat is generated in your body through the chemical processes associated with human metabolism.
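The arithmetic behind these figures, and behind Gamow's percolator puzzle, can be checked in a few lines. This sketch uses the figures just quoted, plus the fact that heating a gram of water from 0°C to boiling takes about 100 calories; it is a per-gram version of the calculation (Gamow framed his per unit volume), but the flavour of the answer is the same:

```python
solar_output = 8.8e25    # calories of heat crossing the Sun's surface per second
solar_mass = 2e33        # grams

# Average heat generated per gram of solar material per second:
per_gram = solar_output / solar_mass     # ~4.4e-8 cal/g/s

# Heating one gram of water from 0 C to boiling takes ~100 calories,
# so at the Sun's average per-gram rate (with perfect insulation):
boil_seconds = 100 / per_gram
boil_years = boil_seconds / 3.156e7      # seconds per year

print(f"{per_gram:.1e} cal per gram per second")
print(f"time to boil: ~{boil_years:.0f} years")
```

The wait comes out at decades, comfortably "more than a year": the Sun is bright only because it is enormous, not because it is an intense heat source gram for gram.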
If the whole Sun were just slightly radioactive, it could produce the kind of energy that we see emerging from it in the form of heat and light. In 1903, Pierre Curie and his colleague Albert Laborde actually measured the amount of heat released by a gram of radium, and found that it produced enough energy in one hour to raise the temperature of 1.3 grams of water from 0°C to its boiling point. Radium generates enough heat to melt its own weight of ice every hour. In July that year, the English astronomer William Wilson pointed out that in that case, if there were just 3.6 grams of radium distributed in each cubic metre of the Sun's volume it would generate enough heat to explain all of the energy being radiated from the Sun's surface today. It was only later appreciated, as we shall see, that the "enormous energies" referred to by Chamberlin are only unlocked in a tiny region at the heart of the Sun, where they produce all of the heat required to sustain the vast bulk of material above them.
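Wilson's estimate can be roughly reconstructed from the Curie-Laborde measurement. This check assumes a mean solar density of about 1.4 grams per cubic centimetre (a modern value, not from the original argument); it lands within a factor of two of Wilson's 3.6 grams, the difference presumably reflecting the slightly different input figures available in 1903:

```python
# One gram of radium releases enough heat per hour to raise
# 1.3 grams of water from 0 C to 100 C, i.e. 130 calories per hour:
radium_per_gram = 1.3 * 100 / 3600.0     # cal per gram per second (~0.036)

# Average solar heat production per gram (from the figures in the text):
sun_per_gram = 8.8e25 / 2e33             # ~4.4e-8 cal/g/s

# Mean solar density ~1.4 g/cm^3 = 1.4e6 g per cubic metre, so each
# cubic metre of the Sun must supply on average:
per_cubic_metre = 1.4e6 * sun_per_gram   # cal per second per m^3

# Grams of radium per cubic metre that would supply this:
grams_radium_needed = per_cubic_metre / radium_per_gram
print(f"~{grams_radium_needed:.1f} g of radium per cubic metre")
```

Either way, the conclusion stands: a trace of radioactive material, spread through the Sun, would be ample to power its entire output.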
The important point, though, is that radioactivity clearly provided a potential source of energy sufficient to explain the energy output of the Sun. In 1903, nobody knew where the energy released by radium (and other radioactive substances) was coming from; but in 1905, another hint at the origin of the energy released in powering both the Sun and radioactive decay came when Albert Einstein published his special theory of relativity, which led to the most famous equation in science, E = mc², relating energy and mass (or rather, spelling out that mass is a form of energy). This is the ultimate source of energy in radioactive decays, where careful measurements of the weights of all the daughter products involved in such processes have now confirmed that the total weight of all the products is always a little less than the weight of the initial radioactive nucleus; the "lost" mass has been converted directly into energy, in line with Einstein's equation.
Even without knowing how a star like the Sun might do the trick of converting mass into energy, you can use Einstein's equation to calculate how much mass has to be used up in this way every second to keep the Sun shining. Overall, about 5 million tonnes of mass have to be converted into pure energy each second. This sounds enormous, and it is by everyday standards: roughly the equivalent of turning five million large elephants into pure energy every second. But the Sun is so big that it scarcely notices this mass loss. If it has indeed been shining for 4.5 billion years, as the radiometric dating of meteorite samples implies, and if it has been losing mass at this furious rate for all that time, then its overall mass has only diminished by a few hundredths of one percent since the Solar System formed.
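The calculation itself is a one-liner with E = mc². A sketch with round-number modern values for the Sun's luminosity and mass (the result comes out at around four million tonnes a second, the same order as the round figure quoted in the text):

```python
# E = m c^2: mass the Sun must convert each second to sustain its output.
L_sun = 3.8e26        # solar luminosity, watts (joules per second)
c = 3.0e8             # speed of light, m/s

mass_per_second = L_sun / c ** 2           # kg converted per second
tonnes_per_second = mass_per_second / 1e3  # ~4 million tonnes

# Total mass converted over the 4.5-billion-year age of the Solar System,
# as a fraction of the Sun's mass (~2e30 kg):
total_kg = mass_per_second * 4.5e9 * 3.156e7
fraction_of_sun = total_kg / 2.0e30

print(f"~{tonnes_per_second / 1e6:.1f} million tonnes per second")
print(f"fraction of solar mass converted: {fraction_of_sun:.1e}")
```

The cumulative loss comes out at a few parts in ten thousand of the Sun's mass: utterly negligible for the Sun, despite the elephants.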
By 1913, Rutherford was commenting that "at the enormous temperatures of the sun, it appears possible that a process of transformation may take place in ordinary elements analogous to that observed in the well-known radio-elements," and added, "the time during which the sun may continue to emit heat at the present rate may be much longer than the value computed from ordinary dynamical data [the Kelvin-Helmholtz timescale]."
So, by the beginning of the third decade of the twentieth century the great age debate had moved firmly off the surface of the Earth and out into space. The scientific evidence that the Earth was a few billion years old was compelling, and there were clear hints from the special theory of relativity and from the existence of radioactive elements on Earth that there was a source of energy which could keep the Sun and stars shining for at least that long. But how did they do the trick? And just how old were the oldest stars?