Big Science
Ernest Rutherford was one of science’s Great Men, a towering figure who drove developments in his era rather than riding in the wakes of others. To an acquaintance who observed, “You’re always at the crest of the wave,” he was said to have replied: “Well, after all, I made the wave, didn’t I?” He was loud, with a boisterous laugh and a hearty appreciation of what was known in his time as “smoking-room humor.” C. P. Snow, a youthful associate of Rutherford’s who would win literary fame with novels set in the corridors of academia and government, remembered Lord Rutherford as “a big, rather clumsy man, with a substantial bay window that started in the middle of the chest” and “large staring blue eyes and a damp and pendulous lower lip.”
Born in 1871 to a handyman and his wife in New Zealand when it was a remote outpost of the British Empire, Rutherford became an intuitive theorist and the preeminent experimental physicist of his age. No one could question his talent for divining the significance of the results produced by his elegant handmade equipment. “Rutherford was an artist,” commented his former student A. S. Russell. “All his experiments had style.”
Rutherford was twenty-four when he first came to Cambridge University’s storied Cavendish Laboratory on a graduate scholarship. It was 1895, a fortuitous moment when physicists were pondering a host of strange new physical forces manifested in their apparatuses. Only a few months after Rutherford’s arrival, the German physicist Wilhelm Roentgen reported that a certain electrical discharge generated radiation so penetrating it could produce an image of the bones of a human hand on a photographic plate. Roentgen called his discovery X-rays.
Roentgen’s report prompted the Parisian physicist Henri Becquerel to look for other signs of X-rays. His technique was to expose a variety of chemical compounds to energizing sunlight. He would seal a photographic plate in black paper, cover the paper with a layer of the candidate compound, place the arrangement under the sun, and check back later to see if a shadow appeared on the sealed plate. During a stretch of overcast Paris weather in February 1896, he shut away in a drawer his latest preparation: a uranium salt sprinkled over the wrapped plate, awaiting the sun’s reemergence from behind the clouds. When he developed the plate, he discovered it had been spontaneously exposed by the uranium in the darkened drawer.
Marie Curie and her husband, Pierre, soon established in their own Paris laboratory that Becquerel’s rays were produced naturally by certain elements, including two that they had discovered and named polonium, in honor of Marie Curie’s native Poland, and radium. They called the phenomenon “radioactivity.” (Becquerel and the Curies would share the 1903 Nobel Prize for their work on what was originally called “Becquerel radiation.”)
Other scientists launched parallel inquiries to unravel the mysteries lurking within the atom’s interior. Cavendish director Joseph John “J. J.” Thomson, Ernest Rutherford’s mentor, discovered the electron in 1897, thereby establishing that atoms were divisible into even smaller particles—“corpuscles,” he called them. Thomson proposed a structural model for the atom in which his negatively charged electrons were suspended within an undifferentiated positively charged mass, like bits of fruit within a soft custard. Irresistibly, this became known as the “plum pudding” model. It would prevail for fourteen years, until Rutherford laid it to rest.
Rutherford, meanwhile, had busied himself examining “uranium radiation,” his term for the emanations discovered by Becquerel. In 1899 he determined that it comprised two distinct types of emissions, which he categorized by their penetrative power: alpha radiation was easily blocked by sheets of aluminum, tin, or brass; beta rays, the more penetrating, passed easily through copper, aluminum, other light metals, and glass. Rutherford had relocated to Montreal and a professorship at McGill University, which featured a lavishly equipped laboratory funded by a Canadian businessman, in an early example of scientific patronage by industry. Working with a gifted assistant named Frederick Soddy, who would coin the term isotope for structurally distinct but chemically identical forms of the same element, Rutherford determined that the radioactivity of heavy elements such as uranium, thorium, and radium was produced by decay, a natural transmutation that changed them by steps—in some cases, after minutes; in others, centuries, years, or millennia—into radioactively inert lead. Eventually alpha rays were identified as helium atoms stripped of their electrons—that is, helium nuclei—and beta rays as energetic electrons. The work earned Rutherford the 1908 Nobel Prize in chemistry. By then, he had already returned to Britain to take up a professorship at the University of Manchester.
There he would make an even greater mark on science by taking on the core question of atomic structure. “I was brought up to look at the atom as a nice hard fellow, red or grey in color, according to taste,” he remarked years later of the plum pudding model. But although he speculated that the atom was mostly empty space rather than a homogenous mass speckled with charged nuggets, he had not yet conceived an alternative model. With two Manchester graduate assistants, Hans Geiger and Ernest Marsden, he set about finding one, using alpha particles as his tools. As he knew, these were deflected somewhat by magnetic fields but, curiously, even more on their passage through solid matter—even through a thin film such as mica. This suggested that the atomic interior was an electromagnetic maelstrom buffeting the particle on its journey, not a serene, solid pudding.
Rutherford experimented by bombarding gold foils with alpha particles emanating from a glass vial of purified radium. Geiger and Marsden recorded the particles’ scattering by observing the flash, or scintillation, produced whenever one struck a glass plate coated with zinc sulfide. This apparatus displayed Rutherford’s hallmark simplicity and style, but the procedure was unspeakably onerous. The observer first had to sit in the unlighted laboratory for up to an hour to adjust his eyes to the dark, and then could observe only for a minute at a time because the strain of peering at the screen through a microscope tended to produce imagined scintillations mixed with the real ones. (Geiger eventually invented his namesake particle counter to relieve experimenters of the tedium.)
The experiment showed that most of the alpha particles passed through the foil with very slight deflection or none at all. But a tiny number—about one in eight thousand—bounced back at a sharp angle, some even ricocheting directly back at the source.
Rutherford was astonished by the results. “It was almost as incredible as if you fired a fifteen-inch shell at a piece of tissue paper, and it came back and hit you,” he would relate years later, creating one of the most cherished images in the history of nuclear physics. It was not hard for him to understand what had happened, for the phenomenon could be explained only if the atom was mostly empty space, with almost all of its mass concentrated within a single minuscule, charged kernel. The deflections occurred only when the alpha particle happened to strike this kernel directly or come close enough to be deflected by its electric charge. The kernel, Rutherford concluded, was the atomic nucleus.
Rutherford’s discovery revolutionized physicists’ model of the atom. But it was by no means his ultimate achievement. That came in 1919, when he reported an even more startling phenomenon than the tissue-paper ricochets of 1911.
Rutherford had again relocated, this time to Cambridge, where he assumed the directorship of the Cavendish. The laboratory had opened in 1874 under the directorship of James Clerk Maxwell, who was a relative unknown at the time of his appointment; within a few short years, however, he had published the work on electricity and magnetism that made his worldwide reputation and established the Cavendish by association as one of Europe’s leading scientific centers. Maxwell’s conceptualization of electricity and magnetism as aspects of the same phenomenon, electromagnetism, would stand as the bridge between the classical physics of Sir Isaac Newton and the relativistic world of Albert Einstein, and his Cavendish would reign as the living repository of the British experimental tradition in physics.
In Rutherford’s time, the Cavendish reveled in its tatty grandeur, the epitome of small science in an institutional setting. The building was shaped like an L around a small courtyard: three stories on the long side, the top floor, with its gabled windows, crammed under a steeply raked roof. Inside the building were a single large laboratory and a smaller lab for the “professor,” a room for experimental equipment, and a lecture theater. There Rutherford held forth three times a week to an audience of about forty students, occasionally consulting a few loose pages of notes drawn from the inside pocket of his coat. Physicist Mark Oliphant, arriving at the Cavendish from Australia in the mid-1920s, remarked on its “uncarpeted floor boards, dingy varnished pine doors and stained plaster walls, indifferently lit by a skylight with dirty glass.” As for the director, he described Rutherford as “a large, rather florid man, with thinning fair hair and a large moustache, who reminded me forcibly of the keeper of the general store and post office.” The lab adhered strictly to the European “gentlemen’s tradition” of closing its doors for the night at six o’clock regardless of whether any experiments were in progress, with an elderly timekeeper assigned to glower at the lab bench of any scientist still working, rattling the lab keys to remind him of the time. Working late was considered “bad taste, bad form, bad science.”
The Cavendish treasured its history of having made great strides with scanty resources. Its entire annual budget was about £2,000, worth about $80,000 in twenty-first-century US currency and meager even in the old days for the magnitude of its work. What took up the slack was the shrewdness and craft of Rutherford’s associates, their ability to extract the maximum results from experimental apparatus of marked simplicity and elegance. The 1919 experiments would exemplify the Rutherford style.
Working with James Chadwick, whose experimental skills matched his own, Rutherford trained his alpha particles on a series of gaseous targets: oxygen, carbon dioxide, even ordinary air. With their apparatus, a refinement of the Geiger-Marsden box of 1911, they found that ordinary air produced especially frequent scintillations resembling those of hydrogen nuclei, or protons. Rutherford surmised correctly that the phenomenon was related to the 80 percent concentration of nitrogen in the air.
“We must conclude,” he wrote, “that the nitrogen atom is disintegrated . . . in a close collision with a swift alpha particle, and that the hydrogen atom which is liberated formed a constituent part of the nitrogen nucleus.” These circumspect words produced a scientific earthquake, for what Rutherford described was the first artificial splitting of the atom. It would eventually be recognized that the reaction entailed the absorption of the alpha’s two protons and two neutrons by the nitrogen nucleus—seven protons and seven neutrons—followed by the ejection of a single proton, thereby transmuting nitrogen-14 into the isotope oxygen-17. But what really set the world of science on a new path was the vision that Rutherford set forth at the close of his paper. “The results as a whole,” he wrote, “suggest that if alpha particles—or similar projectiles—of still greater energy were available for experiment, we might expect to break down the nucleus structure of many of the lighter atoms.”
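In present-day nuclear notation (a modern rendering; Rutherford had no such symbols, and the neutron itself was still undiscovered in 1919), the transmutation he described balances like so:

```latex
% Rutherford's 1919 reaction: an alpha particle (a helium-4 nucleus) is
% absorbed by nitrogen-14, and a single proton is ejected, leaving oxygen-17.
\[
{}^{14}_{7}\mathrm{N} \;+\; {}^{4}_{2}\mathrm{He} \;\longrightarrow\; {}^{17}_{8}\mathrm{O} \;+\; {}^{1}_{1}\mathrm{H}
\]
% Bookkeeping: the mass numbers balance (14 + 4 = 17 + 1),
% as do the charges (7 + 2 = 8 + 1).
```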
In other words, alpha particles produced naturally by radium and polonium had exhausted their usefulness as probes of the nucleus. They simply weren’t powerful enough. Some way had to be found to impart greater energies to the projectiles: man’s cunning had to augment nature’s gifts to create a new kind of nuclear probe. Rutherford had drawn a road map for the future of nuclear physics. On the distant horizon lay the reality that the task of reaching the necessary energies would overmatch the elegant bench science of Rutherford’s generation.
• • •
Rutherford’s discoveries launched a surge of ingenuity in physics. J. Robert Oppenheimer would later describe this as “a heroic time,” not merely because of the intellectual energy focused on the challenge Rutherford posed but because the work took place in an atmosphere of intellectual crisis. Physicists were forced to confront astonishing paradoxes roiling their conception of the natural world. Through much of the 1920s, they were wracked with doubt that they would be able to resolve them at all.
The words of eminent physicists of the era bristle with intellectual despair. The German physicist Max Born, one of the earliest disciples of the new theory of quantum mechanics, wrote in 1923 that its multiplying contradictions could mean only that “the whole system of concepts of physics must be reconstructed from the ground up.” The Viennese theorist Wolfgang Pauli, who combined rigorous intellectual integrity with an acerbic tongue—his famous critique of a sloppily argued paper was that it was “not even wrong”—lamented in 1925 that physics had become so “decidedly confused” that “I wish I . . . had never heard of it.” Even the level-headed James Chadwick recalled experiments at the Cavendish “so desperate, so far-fetched as to belong to the days of alchemy.”
Despite the complexity of their quest—or perhaps because of it—their work enthralled the public. For laypersons in the twenties, physics was invested with an aura of drama, even romance. The postwar decade had begun with Sir Arthur Eddington’s spectacular confirmation of Einstein’s theory of relativity at a joint meeting of the Royal Society and Royal Astronomical Society in November 1919. “Revolution in Science / New Theory of the Universe / Newtonian Ideas Overthrown” declared the Times of London in a historic headline. Eddington’s painstaking publicity campaign launched the theory of relativity into popular culture and its father, Albert Einstein, into a life of international renown. But that only whetted the public’s appetite for news about the search for the fundamental truths of nature, while fostering the image of modern physicists as intrepid individuals given to collecting their data by trekking to the ends of the earth—as Eddington had journeyed to the far-off African island of Príncipe to witness a relativity-confirming eclipse.
Newspaper editors evinced a voracious appetite for news of the latest breakthroughs. Scientists became celebrities. In 1921 a six-week tour of the United States by Marie Curie and her two daughters, Eve and Irène, inspired outbursts of public admiration. The visit was the brainchild of Mrs. Marie Mattingly Meloney, a New York socialite and magazine entrepreneur who had been shocked to learn that Madame Curie’s research was hobbled by a meager supply of radium. Meloney conceived the idea of raising $100,000 to acquire a gram of the precious element—about as much as would fit in a thimble—and bringing Curie to America by steamship to accept the gift. “Mme. Curie Plans to End All Cancers,” declared the front page of the New York Times on the morning after her arrival (a bald assertion that the newspaper quietly retracted the following day). The climax of Madame Curie’s visit was a glittering reception at the White House attended by Meloney and the cream of Washington society, including Theodore Roosevelt’s socialite daughter Alice Roosevelt Longworth. There Marie Curie received the beribboned vial of radium directly from the hands of President Warren Harding, after which she expressed her gratitude (the New York Times reported) “in broken English.” Such were the demands of fund-raising even in the era of small science.
The public came to imagine that physics held the key to all phenomena of the natural world, including the chemical and the biological. Wrote Rutherford’s biographer, Arthur S. Eve, physicists were “endeavoring, with some initial success, to explain all physical and chemical processes in terms of positive electrons, negative electrons, and of the effects produced by these in the ether.” If they were right, he observed, “such phenomena as heredity and memory and intelligence, and our ideas of morality and religion . . . are explainable in terms of positive and negative electrons and ether.”
Not all the physicists were quite so confident. As the decade wore on and they delved more deeply into the intricacies of atomic structure, their picture of the natural world grew only murkier. Their confusion stemmed from two related and equally perplexing phenomena. One was the so-called wave-particle duality of nature at the infinitesimal scale: experiments sometimes showed light and electrons behaving like particles, and other times like waves.
Einstein’s earlier pathbreaking work on the photoelectric effect suggested strongly that light was composed of a stream of “light quanta,” or particles. But he acknowledged that manifestations such as diffraction, interference, and scattering were inescapably wavelike. Instead of reconciling these contradictory observations, he had laid the issue before his colleagues. “It is my opinion,” he declared at a scientific convocation in Salzburg, Germany, in 1909, “that the next phase of the development of theoretical physics will bring us a theory of light that can be interpreted as a kind of fusion of the wave and emission [that is, particle] theories.”
Physicists grappled with the mysteries of subatomic behavior into the mid-1920s, hoping that the steady accretion of observed results would lead them to the truth. But the opposite was the case: the more data they acquired, the less they seemed to know for certain. “The very strange situation was that by coming nearer and nearer to the solution,” reflected the promising young German theoretical physicist Werner Heisenberg, “the paradoxes became worse and worse.” The only answer seemed to be the one proposed as a joke by the British physicist Sir William Bragg: “God runs electromagnetics on Monday, Wednesday, and Friday by the wave theory; and the devil runs them by quantum theory on Tuesday, Thursday, and Saturday.”
It would be Heisenberg and his mentor, the soft-spoken but rigorously logical Dane Niels Bohr, who finally divined the solution, in a process Heisenberg likened to watching an object emerge from a thick fog. Their conclusion was that anything one could know about an event taking place at a quantum scale was limited to what one could observe—and this knowledge depended on the means of observation. In other words, if one used equipment designed to examine electrons as particles, they would appear to behave as particles; if one used equipment best suited for detecting waves, they appeared as waves. Electrons as particles and electrons as waves were equally valid manifestations of the same thing; there was no contradiction, but rather, in Bohr’s term, “complementarity.”
• • •
Theoretical breakthrough that it was, complementarity did nothing to resolve the paradoxical results emanating from the atomic nucleus. Its structure was the second great mystery vexing physicists in the twenties.
Ernest Rutherford depicted the atom as a miniature solar system, with negatively charged electrons surrounding a tiny yet massive nucleus consisting of positively charged protons and negatively charged electrons. The alluring simplicity of this model helped it become received truth, especially after Niels Bohr augmented it in 1913 with the premise that the electrons could orbit only at certain distances from the nucleus associated with specific energy levels; this seemed to reconcile the classical mechanics governing orbital motion with quantum mechanics, which dictated the energy levels and thus the orbital “shells” that electrons could occupy. The atom as a whole carried a neutral charge: the negative charges of its orbital electrons balanced the positive charge of the nucleus, the latter created by an excess of protons over electrons. By Rutherford’s reckoning, therefore, the helium atom had two orbital electrons and a nucleus comprising four protons and two electrons; radium had 88 orbital electrons and a nucleus of 226 protons and 138 electrons.
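The charge bookkeeping of this pre-neutron model can be sketched for the simplest case beyond hydrogen, helium (a modern restatement of the Rutherford-era arithmetic, not notation the physicists of the day used):

```latex
% Rutherford-era helium atom: a nucleus of 4 protons and 2 "nuclear
% electrons," circled by 2 orbital electrons.
\[
\underbrace{4(+1) + 2(-1)}_{\text{nuclear charge}} = +2,
\qquad
+2 \; \underbrace{-\,2}_{\text{orbital electrons}} \; = 0,
\qquad
A \approx 4
\]
% The nuclear charge (+2) matches helium's atomic number; the orbital
% electrons neutralize it; nearly all the mass sits in the four protons.
```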
It soon became obvious that this model created more problems than it solved—and the heavier the atom, the greater the problems. By 1923, the tenth anniversary of Bohr’s atomic model, physicists were questioning its general applicability. Bohr’s model corresponded to experimental observation only for the very simplest atom, hydrogen, which had only one proton and one electron. At the next-heaviest atom, helium, the model began to break down, creating the anomalies that drew from Max Born his expression of despair.
The troublemakers were those nuclear electrons. No one could explain how the particles could fit in the nucleus, or how, once wedged there, such devilishly energetic particles could be made to stay put. Bohr himself was driven to concede that his treasured quantum mechanics might not apply to the nucleus after all, or that some even more novel and confounding mechanics might have to be developed to explain the proliferating experimental anomalies.
The ready solution came from Rutherford. The grand old man of the Cavendish had been mulling over the riddle since the beginning of the decade, when he theorized that electrons became “much deformed” under the intense forces within the nucleus, so that they took on a very different character from orbital electrons. He was thinking that under such circumstances an electron might combine with a proton to form an uncharged, hitherto undetected compound particle he dubbed the neutron.
Rutherford dragooned the ever-faithful Chadwick into the search for the elusive neutron. “He expounded to me at length . . . on the difficulty in seeing how complex nuclei could possibly build up if the only elementary particles available were the proton and the electron, and the need therefore to invoke the aid of the neutron,” Chadwick related years later. “He freely admitted that much of this was pure speculation . . . and seldom mentioned these matters except in private discussion.” But “he had completely converted me.”
As the search proceeded, it became more obvious that solving the mystery of nuclear structure required probes of higher energies than nature provided. Rutherford was not reluctant to state the implications publicly. Radium emitted alpha particles at a meager 7.6 million electron volts and beta rays—that is, electrons—at only 3 million electron volts. “What we require,” Rutherford declared, “is an apparatus to give us a potential of the order of 10 million volts which can be safely accommodated in a reasonably sized room and operated by a few kilowatts of power . . . I recommend this interesting problem to the attention of my technical friends.”
• • •
But generating the voltage that Rutherford specified was only part of the problem, and the easiest part at that: nature could meet that specification, as the voltage of a single lightning bolt ran to hundreds of millions of volts. These enormous, fleeting voltages made for pretty spectacles but not much else, however. The problem was harnessing the power, sustaining it, and manipulating it for an assault on the nucleus. “There appears to be no obvious limit to the voltages obtainable” by arrangements common in the power industry such as transformers connected in series, Rutherford declared, adding that a power plant could produce “a torrent of sparks several yards in length and resembling a rapid succession of lightning flashes on a small scale.” But this technology was still not capable of “approaching, much less surpassing, the success of the radioactive elements, in providing us with high-speed electrons and high-speed atoms.”
Scientists who tried managing energies of the necessary magnitude often ended up with equipment blown to smithereens and laboratories littered with glass shards. Some chose to brave nature’s fury: three men from the University of Berlin strung a pair of seven-hundred-yard steel cables between two Alpine peaks and waited for a thunderstorm. When it arrived, they measured the electrical potential at 15 million volts—but an errant lightning strike blasted one of them off the mountain to his death.
American universities, already beginning to enjoy the fruits of collaboration with big business, duly put this relationship to work. The California Institute of Technology received from Southern California Edison Co. the gift of a million-volt transformer so Caltech might develop high-voltage technologies Edison could exploit to transmit electricity to Los Angeles from a proposed dam on the Colorado River, three hundred miles away. (This would be Hoover Dam.) Caltech physicists used the machine to generate X-rays, but it was nowhere near as compact as Rutherford had specified; rather than fitting into “a reasonably sized room,” it filled a three-story, nine-thousand-square-foot building, and had to be anchored in an excavated pit to fit under the roof. And still it was not a serviceable producer of high-energy particle beams. In the end, the unit became best known for the spectacular displays it put on during Caltech’s annual community “exhibit day,” when it could be made to produce a “long sinuous snarling arc” of electricity accompanied by a thunderous report.
One of the foremost figures in the quest was physicist Merle Tuve, who was determined to put a million volts in a vacuum tube at a time when that much energy would blow existing vacuum tubes to bits. “All of us youngsters were, I believe, extremists,” he explained later. “We always wanted to go to extreme of temperature, extreme of pressure, extreme of voltage, extreme of vacuum, extreme of something or other.” His chosen instrument was the Tesla coil, a high-voltage transformer invented by the visionary physicist Nikola Tesla in the 1890s. Tuve’s version was made of copper wire wound about a hollow three-foot glass tube submerged in a pressurized vat of oil to suppress sparking. He and his colleagues at the Carnegie Institution of Washington managed to produce 1.5 million volts and even to demonstrate the production of beta rays and the occasional accelerated proton, but the device was quirky, erratic, and uncontrollable, and before long, Tuve abandoned it as unsuitable for nuclear research and cursed it as an “albatross.”
Tuve moved on to an electrostatic generator invented by a Princeton University engineer named Robert Van de Graaff. This apparatus consisted of a large hollow metal sphere situated atop a tower through which a continuous belt turned, picking up an electrical charge at the bottom and spraying it out at the top, so that the sphere eventually acquired a suitable voltage. The Van de Graaff produced copious volts and sparks, which would turn it into a staple of many a Hollywood mad-scientist set, but did no better than any of the previous efforts at producing Rutherford’s “copious supply” of high-energy bullets. Tuve and Van de Graaff struggled to make it work with the vacuum tubes and other apparatus necessary to produce a focused beam of charged, energetic particles. Eventually they succeeded, but by the time they did, Van de Graaff’s technology had been outrun by something entirely new.
Its developer was Ernest Lawrence, who had shared his boyhood fascination with electric gadgetry with Merle Tuve, his schoolmate and friend from across the street in a compact South Dakota town named Canton. It was Ernest’s destiny to begin his career at a moment when physics had hit a brick wall in its understanding of the atomic nucleus. The obstacle was galling; physicists could peer over the wall at a murky landscape on the far side, shrouded seductively in mist. Lawrence would breach that wall and clear away the mist, marking at that same moment the transition to Big Science from small science. He did so by inventing a serviceable method for artificially driving subatomic particles into the nucleus with enough energy to give physicists a clear picture of what it was made of. To their colleagues, Rutherford and Lawrence would be known as “the two Ernests,” and their work would bookend an epochal quest for knowledge of the natural world.