Patriot's History of the Modern World, Vol. II: From the Cold War to the Age of Entitlement, 1945-2012

by Larry Schweikart, Dave Dougherty

The bestselling historians turn their focus to America’s role in the world since the end of World War II
Schweikart, author of the number one New York Times bestseller A Patriot’s History of the United States, and Dougherty take a critical look at America, from the postwar boom to her search for identity in the twenty-first century.
The second volume of A Patriot’s History of the Modern World picks up in 1945 with a world irrevocably altered by World War II and a powerful, victorious United States. But new foes and challenges soon arose: the growing sphere of Communist influence, hostile dictatorships and unreliable socialist allies, the emergence of China as an economic contender, and the threat of world Islamification.
The book reestablishes the argument of American exceptionalism and the interplay of our democratic pillars—Judeo-Christian religious beliefs, free market capitalism, land ownership, and common law—around the world.
Schweikart and Dougherty offer a fascinating conservative history of the last six decades.

Editorial Reviews

Publishers Weekly
Rock-ribbed Americanism confronts communism abroad and liberalism at home in this conservatives’ chronicle of post-war history. Schweikart and Dougherty (A Patriot’s History of the Modern World, Vol. I) continue their account of how an “American exceptionalism” based on Christianity, private property, common law, and free markets shaped the world. Their narrative is equal parts glory and gloom, as America leads the capitalist democracies to victory over the communist bloc in the Cold War only to be undermined by a degenerate welfare-state socialism pushed by domestic progressives advocating “the soft slavery of entitlements and the silver shackles of government support.” The authors’ sweeping panorama takes in war, geopolitics, economics, culture, and sexual mores, all filtered through a staunchly conservative viewpoint spiced with polemical digressions on global warming alarmism, diet fads, and other topics. Their critique of left-liberal historiography is spirited—a discussion of Soviet spies in the United States is particularly revealing—but their platform is clear, as when they repeat claims about John Kerry’s Vietnam service and Barack Obama’s birthplace. The result is a history dear to the right-wing audience. (Dec.)
Kirkus Reviews
Schweikart (History/Univ. of Dayton) and Dougherty follow up the first volume of their Patriot's History of the Modern World (2012) with a disappointing sequel, again stressing above all the unchallenged nature of American exceptionalism. The authors differentiate themselves from more traditional historians, who locate the American exception in the written Constitution, citizens' self-government and the separation of powers. Instead, they adopt a pre-Constitutional frame focusing on "the four pillars of American exceptionalism." These include "common law, a Christian (mostly Protestant) religious culture, access to private property...and free market capitalism," and Schweikart and Dougherty boldly assert that without all four pillars, "no true American style republic could be developed." Many historians would find the authors' thesis unsupportable, and this volume is disappointing mainly since it fails to elaborate how the "four pillars" have played out across the history of the world since 1945. The authors fail to pursue the opportunities to link historical developments to their primary thesis. Chief among these missed opportunities regards Martin Luther King's leadership of the civil rights movement. Schweikart and Dougherty present King as "an Atlanta-born Republican pastor who had a divinity degree from Boston University," but they do not examine how a movement of mainly Protestant Christians, drawing from the nonviolent principles of Mahatma Gandhi, might affect the Protestant cultural requirement of their frame. Equally, they miss the references to King's movement that were so common among Lutheran-influenced protesters in East German nonviolent demonstrations in the late 1980s, and they ignore the impact of U.S. constitutional thinking in post–World War II settlements in Germany and Japan. The authors provide an avalanche of facts, but the causes that could link to their underlying "four pillars" thesis are neither offered nor proven. 
They conclude with a story from the Bible and compare it to the state of America in 2013, "which wants a government to ‘fight our battles' and take care of everyone, needy or not." Right-slanted, monotonous historical reading offering little new, valid insight.

Product Details

Publisher: Penguin Publishing Group
Product dimensions: 6.60(w) x 9.40(h) x 2.20(d)
Age Range: 18 Years

Read an Excerpt


It could have been a postwar American town or the Farmer’s Market in Los Angeles. Busy streets served as the setting for a bustling vegetable market, teeming with customers, awash in produce—a rich bounty spread out over hundreds of stands. As far as one could see, makeshift shops in the open air stretched down the street—in post–World War II Romania. Communism, in the early 1950s, still had not gained total control of the Romanian market, and farmers came from the countryside to sell their goods. A young Romanian, Gabriel Bohm, walked through the marketplace with his mother in awe of the cornucopia of fruits and vegetables, displayed under homemade tents or on crates by ordinary farmers. Bohm remembered seeing a market “full of merchandise . . . good looking, healthy stuff.” Yet within twenty years, Bohm witnessed a dramatic change. The same scene in 1965 would be much different: empty streets, devoid of vendors, patrolled by police. “Those markets were deserted,” he recalled years later: “not a single carrot, not a single vendor selling a carrot.”

There were other changes as well, ending many of the mainstays of life. Churches were closed, political gatherings banned. What had happened in the interim to Bohm and other Romanians? Communism took full control of the economy. “We saw the country deteriorate,” he noted. “Anyone who could get out would. You had to be brain dead not to get out.”1 Yet they could not get out. Nor could their neighbors in Bulgaria, Hungary, Poland, or East Germany, all of them trapped as prisoners of the Soviet Union, which since 1945 had embarked on a program of expansionism dictated by Soviet communism’s godfather, Vladimir Lenin. The scenes of want and desperation were no different in the other Eastern European nations, and often they were worse.

East Germans lived in constant fear of the Stasi, the state’s secret police, which recruited informants and compiled dossiers on almost every citizen to crush any potential opposition to the state. Even so much as an anti-Communist cartoon or joke was sufficient grounds for jail. One East Berliner who escaped to West Germany discovered decades later, after communism’s collapse, that one of her best friends had informed on her to the Stasi. Everywhere in the Iron Curtain countries (as they were labeled by Winston Churchill in 1946), the state spied on average citizens. Even children were tricked into informing on parents. Bulgarians knew that people simply disappeared—but they did not know that a secret prison island, kept off official maps, was their ultimate destination.

An atmosphere of discontent and fear permeated the Communist bloc. After a time, the fear and depression produced a numbing absence of vitality. Western visitors to Eastern Europe at this time all came back with the same impression of the visual images that awaited them there: “gray,” “it was grime, gray,” “all gray,” they said.2 Millions of people were prisoners in their own countries, unable to leave and usually afraid to resist.

A stunning contrast could be seen, literally, across borders where Western European nations thrived after 1945. Even Germany, crushed into rubble, with up to 10 percent of its 1939 population killed or wounded in the war, staged an astonishing revival after VE Day.3 The success could be attributed to the massive humanitarian and economic assistance provided by the United States, the adoption (even in quasi-socialist countries such as France, Italy, and Greece) of markets and price mechanisms, and the determination of Europeans themselves to recover from war yet again.

But Western Europe would soon drift into a lethargy of planned economies at the very time that a cold war was being waged to free Eastern Europe from those very ideas. By the time the Berlin Wall fell in 1989, the West had lost or deliberately given up many of the freedoms that the East had sought and just gained. And within another decade still, the advent of the European Union would subtly and quietly impose controls on ordinary life that, while wrapped in a velvet glove, felt to some like the iron fist they had resisted.

Worse still, Europe was not alone. From 1945 to 1970, virtually all of the then-labeled Third World, including Africa, Asia, and Latin America, embraced state planning and rejected markets. Many of the European colonies (Uganda, Congo, Rwanda), winning their freedom in the postwar division of the world, immediately put dictators in power who squelched talk of republics and democracy. By the 1970s, postwar optimism had been replaced by widespread anger and desperation, with one set of masters exchanged for another.

None of this was supposed to have happened. Just as in the post–World War I era, the end of war was to have meant a golden age of freedom and equality. Unlike after World War I, however, this time there was little doubt in non-Communist countries that the United States was in charge of the postwar world. Emerging from the war with its homeland and domestic industries not only entirely intact but cranked up to full production, the United States achieved a level of productivity that exceeded the wildest expectations of the Truman administration. Racing to shore up Europe from 1947 to 1950, the United States introduced the Marshall Plan, aiding war-torn Europe without asking for anything in return. Understanding the Soviet threat to the rest of Europe, America created a new alliance system and took virtually all of the major Western European nations under its protection.

As much as America was to be the leader and role model in this new era, all the efforts of U.S. occupation forces and all the money delivered by foreign aid could not address the fundamental weakness of postwar development efforts. The salient point of the post–World War II period was that by 1957 no nation had adopted the four pillars that made American exceptionalism successful in the first place. As developed in the first volume of this history, those pillars consisted of a Christian (mostly Protestant) religious foundation, free enterprise, common law, and private property with titles and deeds. Missing even in postwar Europe, these features were almost totally unknown throughout the rest of the world. Long-established nations such as France and Italy seemed little different from emerging states such as Uganda or Cameroon, or the reconstructed countries of Germany or Japan.

Thus, another thirty years later—by 2000—the promise of global liberty that appeared so imminent in 1946 seemed to have slipped away to a significant degree almost everywhere. Drone spy technology monitored the movements of ordinary citizens; big-city mayors banned not only guns, but also soft drinks and fats and plastic bags; European cities saw “no-go” zones of Muslims abolish Western law and replace it with Sharia; countries published lists of children’s names that were permitted and not permitted; street preaching was banned, and pastors jailed for speaking the Gospel aloud in churches. That these liberty-limiting developments occurred in African or Asian nations hardly raised an eyebrow—so far had many of those countries fallen after 1945—but that they all occurred in the United States or Europe seemed a shocking and stunning reversal of the very reasons the “Good War” had been fought in the first place.

Why had this subtle but dangerous reversal occurred so rapidly and so unexpectedly (to some)? Indeed, what were “democracies” doing engaging in such practices at all? In fact, all along the promise of postwar liberty itself was illusory, constructed on the premise that most of the world would be rebuilt along the lines of American-style democracy and freedoms. Our argument is that without the four pillars of American exceptionalism, such developments were not only likely, they were inevitable. Moreover, we argue that Europeans’ use of terms such as “democracy,” “republic,” and even “liberty” was not the same as Americans’ understanding of them, and therefore other nations never entertained any intention of adopting the American pillars. In our previous volume, A Patriot’s History of the Modern World: From America’s Exceptional Ascent to the Atomic Bomb, we reviewed the impact of common law, a Christian (mostly Protestant) religious culture, access to private property (including ownership with easy acquisition of deeds and titles), and free-market capitalism, which brought America to the forefront of world power by the end of the war. Instead of copying American success, victorious or liberated nations more often sought only to dip their toes in the water of freedom: adopting free markets without common law or restricting capitalism, permitting Christian religion but steadily edging away from acknowledging the Christian foundations of society, paying lip service to private property without establishing the land-ownership institutions, such as titles and deeds, that are necessary to make it a reality. More often still, nations ignored all four of the pillars. Thus, the American model was only implemented piecemeal, where implemented at all (South Korea, for example). As we pointed out in volume 1, while any one of the pillars might be beneficial to a society, without all four no true American-style republic could be developed.
The pillars were simply mutually dependent.

This volume continues the story of America’s rise to world dominance through three themes. First, we trace the battle that began early in the twentieth century between the Progressives and the Constitutionalists. The former, grounded in the “reform” movement of the late 1800s, sought to perfect man and society by a process of government-directed and controlled change. The Progressives wanted to deemphasize the Constitution as it was written, and with it, American exceptionalism. They conducted a century-long assault on the notion that the United States had any providential founding, that its heroes and heroines were particularly wise, just, or courageous. By insisting that laws needed to be continually reassessed in light of current morality, Progressives saw the Constitution as outdated or irrelevant. Constitutionalists, on the other hand, maintained that America’s founding stemmed from her Christian roots, and that the Declaration of Independence and Constitution were representative of common law doctrines in which codes of conduct, given by God to the people, bubbled up, supported and promoted by the people (as opposed to being handed to a king or ruler to be dispensed downward). Moreover, Constitutionalists maintained that the Founding Fathers were, in fact, wise and visionary, and that they established a framework of laws that addressed every eventuality. Progressives enacted a legislative campaign to regulate markets, redistribute wealth, and limit private property ownership. Constitutionalists wanted to free markets, enable all to pursue wealth, and restrain government’s ability to infringe upon individuals’ property rights. Finally, the Progressives—many of whom, in the early stages of the movement, were nominal Christians—fervently labored to remove Christianity from the public square, from all political discourse, and from entertainment. Indeed, Christianity stood in the way of implementing most of their reforms. 
Constitutionalists, of course, understood the admonitions of the Founders, who urged that the nation adhere to its Christian roots and above all pursue virtue.

Over the course of the second half of the twentieth century and the first decade of the next, American exceptionalism faced hostility abroad, but more surprisingly, antipathy from numerous groups at home. The Progressive Left endeavored through the educational system, the law, and entertainment to denigrate and ridicule the very concept that America had anything special to offer, and to insist that the United States had become just one nation among many. That a number of Western and non-Western powers arose to challenge American dominance was to be expected, particularly when the American public had so generously provided the financial and commercial means of their recovery in many cases. Germany and Japan took the best of American industrial, manufacturing, and management practices, modified them, and implemented them with zeal, producing world-class automobiles, electronics, robotics, and a host of other products that drove American goods either fully or partially from the market. Once several nations could claim economic proximity to the United States (though none could claim parity), were not their systems, goals, practices, and cultures worthy of emulation as well? But the Progressive assault did not stop there: it insisted that undeveloped cultures were no worse than ours, only different. Americans were urged to seek out the value in what previous generations would have termed “backward” cultures, and to “understand” practices once deemed undesirable at best or barbaric at worst.
President Barack Obama’s 2009 Cairo speech, as one example, cited advances and greatness in Islamic culture that never existed, implying that Americans needed to be more like Egypt rather than Egyptians being more like Americans.4 Absurdly saying that “Islam has always been a part of America’s story,” Obama claimed that Islam “pav[ed] the way for Europe’s Renaissance” and gave us “cherished music,” the “magnetic compass and tools of navigation,” and furthered “our understanding of how disease spreads and how it can be healed.”5 Although his intention may have been to strike new chords of friendship, the act of ascribing to people accomplishments they never achieved looked phony and, according to polls in the subsequent three years, had no effect on Muslim views of America.

By 2012, the culmination of this Progressive march saw the United States elect a president with little or no understanding of free market capitalism, no appreciation of private property rights, little demonstrable Christian religious influence (to the point that by 2012 polls showed that up to half of the American public thought he was a Muslim), and an apparent disdain for American exceptionalism. Barack Obama repeatedly apologized to foreign nations for past American “mistakes” or transgressions and denigrated (or greatly mischaracterized) American exceptionalism by insisting that “the Brits believe in British exceptionalism and the Greeks believe in Greek exceptionalism.” As the British magazine The Economist stated, Americans had put into power “a left-wing president who has regulated to death a private sector he neither likes nor understands. . . .”6 In 2008, in his famous “Joe the Plumber” comment, Obama stated that it was government’s duty to “spread the wealth around,” and in 2012, referring to private businesses that had become successful, he said, “You didn’t build that [business]. . . . Somebody else made that happen.” That “somebody else” was, of course, government—not the private sector. Comments such as those showed Obama had no concept of what made markets work. Likewise, in his bailout of General Motors, he demonstrated that he had no regard for private property—in that case, the property of the bondholders who were saddled with an enormous loss to protect union pensions.

Obama’s national health care law forced the Catholic Church to compromise on its core religious beliefs regarding conception. His Supreme Court appointments routinely interpreted the American Constitution in the light of international law. And when it came to private property, Obama continued to implement the United Nations’ antigrowth/anticapitalist Agenda 21 initiatives, which were inserting themselves into all aspects of American life.

Of course, some of the erosion had already occurred. Fearing Islamic terrorists, after 9/11 Americans readily assented to substantial limitations on their freedoms, from airport body searches to cameras on stoplights. Once necessary Patriot Act precautions had grossly expanded with new computerized surveillance and monitoring technologies, including “latch-on” phone tapping, air drone camera planes, and listening devices, to the point that virtually anyone could be found by the national government. Benjamin Franklin’s comment, “They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety,” looked more prescient all the time. Worse still, by 2012, few politicians anywhere were seeking to limit such powers, let alone roll them back.

The exceptionalism that had saved the world had not met a receptive audience, even if at first the rhetoric and spirit were wildly embraced. Quite the contrary, it seemed that to some extent, Europe insisted on revisiting post–World War I practices yet again, and certainly in the former colonies the delusion of creating new “democratic” states without at least some of the pillars of American exceptionalism proved especially vexing. Yet the record of such efforts seemed abundantly clear by 1946. Europeans, after all, had witnessed the full-blown collapse of their societies not once in the first half of the twentieth century but twice. They had likewise seen the manifest failure and folly of both variations of socialism—fascism and communism.

From 1917 to 1989, neither outright government ownership under Soviet-style communism nor ownership-by-proxy through German/Italian fascism provided material prosperity or human dignity. Indeed, both heaped unparalleled inhumanity on top of astronomical levels of state-sanctioned killing. According to R. J. Rummel, perhaps the leading authority on government murder, the top governments in terms of democide (the murder of a person or people by a government, including genocide, politicide, mass murder, and deaths arising from the reckless and depraved disregard for life, but excluding abortion deaths and battle deaths in war) from 1900 through 1987 were:

[Rummel’s table of democide totals by regime is omitted here.]

The only democracy on the list, Great Britain, attained its numbers only during the course of World War I and World War II through the economic blockade of the Central Powers and bombing of German cities. Even Rummel’s chart is somewhat misleading, however, in that if one looks at democide as a percentage of a country’s total population, still other nondemocratic regimes top the list, including Cambodia under Pol Pot, Turkey under Kemal Atatürk, Yugoslavia, Czechoslovakia, Mexico, Uganda, Romania, and Mongolia.

It is a mockery of honest statistics to claim that the United States by these measures is in any way a “violent nation” (6,000 deaths from intergroup or collective violence from 1900 to 1987), and its residence at nearly the bottom of Rummel’s list reflected the fact that by this standard alone, America was truly exceptional. But the point stands that by far the most deadly ideological systems were the Communist, fascist, and authoritarian systems tested on multiple occasions by the Europeans and exported to their colonial cousins. A quite contrary point emerges, namely that only when the fundamental elements of the American foundation are applied can a nation routinely protect its citizens from such murder.

Europe’s global failure to maintain peace, stability, and human rights over the course of over one hundred years—even with relatively free markets and democratic governments—points out the essential symbiosis of the American pillars. The United States of America had largely avoided anything approaching such carnage by government. She did so not because of any one of the four legs of exceptionalism, but because all four worked together. That began to change in the postwar era as Progressives accelerated their attacks on these pillars.

Their central target was America’s Christian roots, and the Progressives had help from intellectuals, elites, and even the Supreme Court. After the war, pressure from humanism, statism, and communism pushed religion further into disfavor—especially among elites. John Dewey, the so-called father of American progressive education, had already penetrated the schools with a covert war on faith. His goal was nothing less than the full secularization and humanization of American education. Then in 1947 the groundbreaking Supreme Court ruling in Everson v. Board of Education of the Township of Ewing seemed to separate religion from all government in all cases, effectively changing the First Amendment’s intent from protecting religion from the government to protecting government from religion. Despite the massive appeal of revivals led by preachers such as Billy Graham in the 1950s and 1960s, media elites instituted a guerrilla campaign against religion, highlighted by the infamous 1966 Time magazine cover asking “Is God Dead?” Sheer numbers of people disproved such a silly assertion, of course, as evangelical church rolls continued to grow, but Christianity was already being successfully branded as a “crutch” for the uneducated, the rubes, and the slow-witted. Increasingly, Christians were made to feel out of touch and isolated, when in fact their faith remained the majority view. Depending on how one asked the question in a poll, between 60 and 90 percent of Americans still considered themselves Christian by 1970.

Television soon added to the assault on religion, though more slowly. At first, television shows depicted generic ministers (with their collars) as genial problem solvers—as opposed to serious moral teachers—but by the mid-1970s clergy were increasingly portrayed as crooks, buffoons, or, even worse, hypocrites. For the media, the church, ministers, and Christianity had ceased to exist except when a plot line needed a convenient villain or comedic foil. For example, a 2012 ABC show originally called Good Christian Bitches provoked such an uproar that ABC had to change the title to Good Christian Belles—but still advertised it with a blonde in a miniskirt choir robe. Movies such as Monsignor (1982), Agnes of God (1985), and any number of horror films portrayed clerics and nuns as depraved, conniving, or utterly powerless. (Hollywood did begin to change slightly after 2000, when a market for Christian and/or Christian-friendly films was demonstrated to be a sure moneymaker by Mel Gibson’s The Passion of the Christ (2004) and by Facing the Giants, an extremely low-budget movie made essentially by amateurs from Sherwood Baptist Church in Albany, Georgia, that earned ten times its budget.)

But the ridicule had its effect on church attendance. By the late 1990s, in a desperate effort to recapture members and prove themselves relevant, American mainline Protestant churches underwent a revolution that saw them open coffee bars, establish date nights, provide sports leagues, and introduce modern music, all to little or no effect in raising total numbers. Quite the contrary: as the mainline Protestant denominations liberalized and adopted moral relativism, their believers fled to other churches, including the Catholic Church, that professed stricter doctrines and adherence to God’s law and absolute moral teachings. Megachurches rose rapidly, their converts generally consisting of “churched” people who had stopped going to their original mainline denominational gathering. In 1900, Christians represented fully 96.4 percent of all Americans, and 46.1 percent were members of Protestant churches. By 2000, the numbers had fallen to 84.7 and 23.2 percent, respectively, and by 2025 they are expected to drop further to 80.3 and 21.2 percent.8 However, evangelicals increased to 14.6 percent of church membership by 2000, or 50 percent more than the mainline churches.

Of all the pillars of American exceptionalism, none would erode more during the time covered by this volume than the moral foundation provided by Christianity, and especially Protestant Christianity. But the steady debasement of American Christian morality and the underpinning of American democracy were not by-products of the Progressive agenda. They were the Progressive agenda. Moral relativism, as taught in American schools, universities, and recently, mainline Protestant churches, asserts that morality is not based on any absolute standard but depends on variables such as individual feelings, backgrounds, culture, specific situations, polls, and various opinions. “Truth” itself is relative depending on one’s viewpoint. This can best be seen in generational attitudes toward marriage between homosexuals. In a Pew Research poll in 2012, only 36 percent of Americans born before 1946 favored same-sex marriage, but those born after 1980 favored it by 63 percent.9 Similarly, most churches have all but given up on the issue of divorce, and quietly seek to manage it rather than prevent it. And the Catholic Church, despite remaining firm in its opposition to artificial birth control, seems to have fought a losing battle. By 2011, almost 70 percent of Catholic women used some form of birth control.10

While much of Europe (and the rest of the world) was not Protestant, the Catholic Church might have substituted as one of the pillars outside America. But Catholicism suffered from problems beyond its positions on social issues. Worldwide, it had been late coming to the table of republicanism, and the Church had been on the wrong side in the Dreyfus affair in France. Not until the 1920s did the Vatican finally permit Italian Catholics to form a political party and take an active part in the pseudo-democratic government. The Vatican had supported the Nationalist/anti-Republican forces in the Spanish Civil War, then the Third Reich because of its opposition to communism, losing substantial credibility as a bastion against evil. By the end of World War II, then, the Catholic Church—along with many of the German Protestant churches—had ceded any moral authority it had when the century began.

By 1946, most emerging nations had absorbed the ideological structures and religious attitudes that had failed their former colonial masters simply because that was what they had been taught. India, for example, warmly embraced Keynesian state planning; Egypt adopted a variant of state socialism; and one African nation after another imposed high levels of government regulation on top of considerable degrees of outright state ownership of the “commanding heights” of industry. Virtually none—not even prostrate Japan—tried to re-create the American experience or erect the four pillars of exceptionalism. Where adopting Protestant Christianity might have proved impossible, the second-best option would have been a religion that could not be easily manipulated by the state, as Shinto was by the Japanese in the 1930s. Nevertheless, when evaluating their situation at the end of World War II, Japan and most newly decolonized states did not even consider Christian principles as a possibly important element in their future recovery. Christian missionaries had made little headway in Japan, with its strongly Shinto and Buddhist population, and after 1932, Shintoism was melded with the state and any other religion discouraged by the government. Japan and other (at the time) Third World nations thereby also cavalierly ignored common law in that they had no history of government emanating from the people and, without Christianity, no religious structure that would encourage democracy. Likewise, outside of Europe, private property ownership tied to deeds and legal documents was rare, mainly due to long-standing traditions of personal honor that obviated the (apparent) need for such paperwork. While Japan managed a miraculous recovery and implemented a democratic political system, weaknesses stemming from the missing exceptional elements soon brought Japan’s rapid rise to a halt, cresting in the late 1980s.
Japan’s decline started immediately thereafter, producing two decades of stagnation and the onset of a national malaise.

Indeed, Christianity worked hand-in-glove with free markets, and while capitalism and commerce certainly were not impossible without Christianity, the absence of the religion tended to result in commerce that was heavily regulated by government, as government picked and chose industries and corporations to receive support. Europeans, of course, still worked and innovated, but from about 1970 through 2000, excluding the Communist states, the European continent did not add a single net new job while the United States added more than 20 million net new jobs.11 Even after a short setback with the dot-com bust, then after suffering through the economic impact of the 9/11 terrorist attacks, the American economy revived to produce an additional 6 million jobs under George W. Bush before the 2007 mortgage industry collapse. Meanwhile, the average workweek in Europe continued to decline and deficits mounted; by 2010, many members of the European Union teetered on bankruptcy, relying solely on the strength of Germany and its loans to keep them afloat. All Europe seemed to assume that German war guilt for World War II would provide for the citizens of the victimized nations what they could not provide for themselves. But sixty-five years was a long time, and by 2010, Germans were clamoring for France and the other nations to assume some responsibility for their own welfare. Greece, Spain, Ireland, Portugal, and Italy all faced unsustainable debt levels due to their social welfare policies. France was little better off, and the 2012 elections in France even installed a socialist president who lowered the retirement age. All of this reduced the incentive for individuals to care for themselves and their families, and replaced both God and the family with the state. Virtually every aspect of life—from child-care subsidies to education grants to housing vouchers to retirement—was provided (poorly) by government.

Without common law—which was lost by the few European states that ever had it—and without limits on what private property governments could seize, the European free market became increasingly restricted between 1945 and 2012. China, in contrast, moved in the other direction. She saw her weaknesses under communism exposed by tiny Hong Kong, to the point that even before the Cultural Revolution, Deng Xiaoping had pragmatically tried to meld capitalism and a Marxist political system, noting, “It doesn’t matter whether a cat is white or black, as long as it catches mice.” With this admission that China would permit a price system to operate “as long as it caught mice,” communism in its pure state was doomed in China. Instead of fully adopting all the American pillars, however, China floundered with a mixed economy moving in the direction of state capitalism, lacking common law and a free political system.

Yet China had one advantage that even Japan and Europe lacked: her Protestant Christian population was rapidly growing, providing a basis for a movement that could, conceivably, transform China from the inside in the decades to come. Indeed, in sheer numbers China had become one of the larger Christian countries, with more than 89 million Christians in 2000.12 That number represented an increase from 1.7 million in 1900, and was expected to reach more than 135 million by 2025 (though as a percentage of its population, Christianity remains a minority). Did that mean that China was the world’s next superpower? Not at all, for the absence of common law and private ownership of property meant that China—like Europe—would struggle with the political aspects of liberty and be fundamentally unable to hear the voice of the people when they spoke.

Liberty continued to advance in China and everywhere else through the ongoing application of some of the American pillars, despite Europe’s slow retreat from them. The pressure from the productive power of the American economy opened otherwise closed societies to a willingness to examine American values. For forty-five years after World War II, the American invention/innovation machine had produced a level of wealth and prosperity unseen before in human history. This technological stampede culminated in the early twenty-first century with a communications revolution exceeding that of Gutenberg’s printing press. Far more than the printing press had in the 1400s, the new telecommunications explosion undercut the ability of any society to restrict freedoms. While still not powerful enough to entirely prevent such abuses, the communications technology often transmitted the news (and video) of government oppression instantaneously. At worst, this could embarrass the violator, and at best, so publicize abuses that restrictions were lifted or individuals permitted to leave their abusive country. Footage of Tiananmen Square, with its lone protester standing defiantly in front of a tank, did not bring about instantaneous change, but over time was a contributory factor in China’s (still wanting) liberalization.

But technology and the rise of electronic entertainment also had other, less desirable effects on the modern world, fracturing the social fabric by, ironically, reducing genuine communication between people. Cell phones, personal computers, and the rise of social network Web sites such as Facebook and Myspace dramatically reduced the membership not only in churches, but in virtually all social organizations. In America, this meant dwindling membership in such organizations as the Elks, Kiwanis, Rotary, Eastern Star, Masons, and Shriners, and dealt a severe blow to group activities such as bowling, picnics, and parades.

Elsewhere, in Japan and Korea, for example, young adults either fully embraced the communication and entertainment revolutions—with their demand for products driving much of the new market—or, in some cases, completely withdrew from society in unique and troubling new ways. But whether in an American shopping mall or a Tokyo street or a Dutch coffeehouse, particularly after the advent of cellular telephones and texting, it was not unusual to see a gathering of several teens or even adults where not one but all would be engaged in some communication with someone elsewhere, and none talking with those immediately present. This shattered the cohesion of American nuclear and extended families. For Asians, with their strong (but weakening) structures of familial honor, the youth retreated inward, convinced that their futures were dim. Such trends were not true of all, of course, but were increasingly common as the fracturing of social bonds gained momentum and the realities of decaying economies set in. A somewhat odder circumstance developed in Europe, where parents reported spending more time with their children, but not always for positive reasons. Studies of European teens—whose family divorce rate had doubled since the 1970s—found anxiety and depression had increased 100 percent from thirty years earlier, although that rate hit a plateau in 2004. Those same European teens shifted heavily from work to education, with work levels falling by half since the 1980s and the number of youth in education more than doubling.
Essentially, European young people quit working and began to mill about colleges and universities.13 But, like Americans, European young people seldom participated in organizations (only 20 percent according to one study), and moviegoing was more popular in Europe than in America (82 percent of European young people routinely went to movies).14 Families saw their cohesion shattered as conversation in households disappeared and family members each went their own way once ready access to the outside world and its influences opened up.

This atomization was readily visible not only in communications but also in television viewing (with its hundreds of channels), music listening (with USA Today compiling no fewer than a half dozen different “top 40s”), and publishing (with The New York Times featuring a dozen bestseller lists, by genre). In short, the wealth and prosperity of the United States in the postwar years had resulted in the shattering of community—sometimes for the good (no one doubts that social nosiness was a problem in previous decades), but usually for the worse. This was especially true of the American white middle class, where American sociologist Charles Murray noted that the number one television show in 1963–64 was The Beverly Hillbillies with a Nielsen share of 39.1 percent, meaning that almost 40 percent of all American TV viewers watched the show. (In contrast, the number one show in the United States in 2004 was American Idol, whose Nielsen share was less than one third that!) The demise of such shared cultural touchstones could not be overstated.

A similar diversification occurred with news, which at one time had played a role of uniting people around a few daily event narratives, usually nonpoliticized. By the 1970s, however, the major networks and large city newspapers, plus major magazines such as Time and Newsweek, had tilted decidedly to the left and had politicized everything from proper diets to the weather. Their tilt would continue steadily, until they were virtually horizontal, and little more than mouthpieces for Progressive politicians by the turn of the century. In response, numerous alternative media began to arise, and became exceedingly popular—talk radio, alternative newspapers, then later, Internet sources such as the Drudge Report and the new television network Fox News. Once the major media conglomerates lost their monopoly power as alternative news media and other news sources became widely available to everyone, a new competition for news arose that hadn’t been seen in the United States since before the Civil War.

Variety was a good thing, but it came at a steep price, for the youth—now hearing from both sides that the other was always wrong—reverted to cynicism and detachment from the political process. In addition, much of the information available on the Internet was simply incorrect, whether by design or ignorance, but it often masqueraded as “news.” Given the blurring of traditional news into political opinion, it became increasingly difficult to rely on either as an ultimate source of facts. Misinformation was rampant and spread rapidly. (For example, in an analysis made by one of the authors in 2010, more than sixty sites, including blogs, stated that Thomas Jefferson had used the phrase “wall of separation” between church and state twice. Only three debunked the second citation, noting that the phrase in Jefferson’s 1808 letter to Virginia Baptists had actually been added later by the editor Eyler Coates in a lead-in paragraph to his discussion of Jefferson and freedom of religion.15) At the same time, however, the penetration of Western, and especially American, news and media into virtually all of the world became a force for opening closed societies to an alternative view that oppressive governments found nearly impossible to stop.

Yet instead of spreading American exceptionalism—and a road map for nations still struggling to succeed materially and culturally—the new message of unfettered freedom and sexual liberation was the one often seen and heard by other countries. They failed to appreciate the three hundred years of training and discipline in individual liberty that came through property ownership and common law, which (like religion) stood as barriers to tyranny and constrained individual excesses. In the United States, state and local governments, each of which had delineated powers and as late as 2012 still retained considerable autonomy (U.S. states actually retained more sovereignty than nations in the European Union), frequently prevented abuses by the national government. Intervening cultural, social, religious, and political barriers to tyranny (including states’ rights and federalism) served as a powerful—but increasingly diminishing—buffer to the highly centralized state. Where were such barriers in other societies? In parts of Africa, still dominated by tribalism, the tribes merely grafted themselves onto the state and manipulated it. Otherwise, where were any intermediary or intervening institutions in China? Iran? Or even most states within the European Union? Instead of observing American success and letting it serve as a beacon, most states had steadily moved toward greater centralization, observing fewer individual rights and eroding the power of nongovernmental institutions such as family and church.

It was the American pillar of common law that had manifested the other three pillars in the political world. Through the evolution of common law, American politics had developed over more than two centuries a unique electoral system that militated against tyranny and extremism—but which no one else adopted. First, the electoral college itself demands that every four years presidential candidates address the issues of the heartland with some degree of seriousness. No candidate can write off the swath of states that runs from Ohio through Missouri to Nevada. In the 2000 election, these were called “red” states (for Republicans, versus “blue” states for Democrats, which hugged the coasts). Europe has no such electoral protection for the large majority of its nonelites. Second, the common law that undergirds the entire U.S. structure assumes that all the people are imbued with a sense of political understanding and are the source of all power, as exemplified by the Arkansas state motto, “The People Rule.” The U.S. Constitution begins with “We the People” whereas, for example, the Treaty of Lisbon, looked upon as the constitution of the European Union, begins, “His Majesty the King of the Belgians, The President of the Republic of Bulgaria . . .” Of course, the rise of the so-called low-information voter and citizen apathy challenges this notion that the population as a whole features a solid sense of political understanding.

Third, American politics since the 1820s has accepted the “winner-take-all/single-member-district” system, whereby the majority vote winner in a district carries the entire district. There is no proportional representation.16 Fringe groups must be absorbed into one of the two mainstream parties, or risk being as irrelevant as the Libertarian Party or the Communist Party of the United States. While a large splinter group, such as the Populists in the late 1800s or the Tea Party movement in 2010, can sometimes have a significant impact on a major party’s platform and agenda, standing on its own, a third party has little chance of surviving. Europe, and virtually the rest of the world’s democracies (including Israel), have embraced proportional representation with its concomitant demand of appeasing each subgroup through coalitions. It was the requirement to form coalitions that doomed Spain to civil war in 1936 (there were twenty-one political parties represented in the Spanish Cortes) and enabled the rise of Adolf Hitler in Germany. Instead of providing a more honest representation of the people, proportional representation has allowed governments to duck and dodge difficult political issues even more than the U.S. Senate and House, for one can always blame “the other guy” in another minority party who will not unite. French coalitions fractured so instantaneously in the 1950s that the nation went through twelve governments in ten years. In place of functioning legislatures, entrenched and terrifically powerful bureaucracies arose. Germany had long featured government by bureaucracy (Beamtentum). The full extent of its ossification is perhaps best exemplified by the bureaucrats hard at work in the Economic Ministry in Berlin in April 1945.
While they calculated the year’s coal and steel production from Silesia (a province almost entirely occupied by the Red Army the preceding month), the bureaucrats were interrupted by Soviet tanks on the street below. So buried were they in red tape that they literally ignored the real situation on the ground.

By 2000, all Europe had followed suit. Armies of faceless “public servants,” often unionized, churned out regulations, dealt with appeals, saw that garbage and taxes were collected, and delivered mail. Most of all, the bureaucratic structure ensured that virtually no rapid change could occur and that the public’s role in any policies would be drastically minimized. The assurance of uninterrupted daily services came at a price, sapping European will and energy by creating the illusion that government could meet all needs. With the exception (at times) of Germany and Britain (under Margaret Thatcher), Europe on the whole began to resemble the Ottoman Empire in its death throes, with its inertia and its utter incapacity to act decisively.

From 1945 to 1989, the Soviet Union and its allies remained apart from these changes. If anything, the Communist bloc intensified the bureaucratization with its infamous nomenklatura that administered every element of life. There was even a Soviet “Ministry of Rock” to supervise rock and roll music. But during that forty-four-year period, class divisions reasserted themselves as the nomenklatura began to look like Western-style CEOs, with nicer homes and cars and better privileges. Nikita Khrushchev, Stalin’s successor, had liberalized the USSR only to a point, ending most of the genocide but replacing it with systematic institutionalization of political opponents in asylums. Well into the 1970s, the Soviet leadership believed it could not only fight, but win, a nuclear war with the United States—especially if the American president was weak enough to be bullied.

What forced the change at first was the free market: communism simply didn’t work, and its structures began to disintegrate. Despite the appraisal by many Westerners that the Soviet economy was sound—and even accelerating—the truth was much different. Communism was failing to provide even the most basic goods, including food and toilet paper (much less cars), to average citizens. The “commanding heights” that Lenin sought to hold were themselves crumbling. What pushed the USSR over the edge was an alliance of Westerners—Margaret Thatcher, Ronald Reagan, and Pope John Paul II—who actively sought to bring an end to the “Evil Empire.” Once it became official policy not to tolerate a Communist Russia, but to defeat it, the end came quickly.

Still, this did not result in the complete victory of American exceptionalism worldwide. Instead, by the last part of the twentieth century, the simultaneous deemphasis on American culture and power and the new emphasis on the “equality” of other nations resulted in horrific inaction. The Europeans (like American Progressives) stood aside meekly as Rwanda was wracked by murder and war in the 1990s, and barely lifted themselves off life support to resist Serbia in that decade. Even then, the Serbian intervention was primarily to placate the Muslim Middle East to ensure an uninterrupted supply of oil. While Britain and even France participated in the UN-sanctioned eviction of Saddam Hussein’s Iraqi forces from Kuwait in 1990–91, neither they nor any other Europeans would take the additional necessary steps of deposing him and searching the country for weapons of mass destruction. Meanwhile, Libya, under its dictator Muammar Gaddafi, quietly conducted its own WMD programs. Throughout the Middle East, Africa, and parts of Asia, ruthless “elected” thug-presidents, prime ministers, and monarchs crushed popular dissent, oppressed minority populations and women, pillaged natural resources, and defiantly ignored the international community whenever criticism was raised. While in the latter half of the twentieth century the United States would resist the Soviet Union aggressively, threats judged less immediate survived and metastasized. In one of the last instances of Westerners exerting power in their own interests without UN sanctions, votes, tribunals, committees, or support, the United States removed Manuel Noriega of Panama from power in 1989. No one offered assistance, despite the fact that France would convict him in absentia for murder and money laundering and confine him for fourteen months before extraditing him to Panama in 2011.

Between the eviction of Noriega and the terror attacks on 9/11, one is hard-pressed to find any instances of Westerners using old-fashioned power projection for national interests. Quite the contrary, the only nondomestic efforts of the European Union have been to reduce and restrain American military and political power worldwide; to use propaganda to denounce the notion of a superpower existing at all; and—with the help of China—to stage a relentless assault on the dollar as the world’s reserve currency. This strategy, particularly the attack on the dollar, seemed successful in the 1980s when, briefly, Tokyo replaced New York as the world’s leading financial center. China next mounted a challenge—still ongoing as of this writing—but already the Chinese economy has begun to founder, and financial experts question whether the “Chinese miracle” will be as illusory as the “Irish miracle” or the “Japanese miracle” of previous years.

China’s current trend seems to again reaffirm our contention that no democratic system can succeed in modern times for long on both political and economic grounds without the four pillars of exceptionalism. It is no overblown claim to say that as of 2012 not one other nation in the world possessed the four pillars, and that the absence of common law—dictated to much of the world through the European civil law system—was as much to blame for the world’s problems as trade fluctuations, energy prices, or terrorist threats. Indeed, a more appropriate way to view the twentieth and twenty-first centuries would be that it was because of the American ascendance, dominance, and influence, and its extension of significant elements of its four pillars to the rest of the world, that progress took place at all! This “Americocentric” view has been ridiculed and demagogued, but seldom seriously examined, let alone disproved. By 2012, the world’s weaknesses stemmed substantially from the weakness and decline of the United States of America and the lack of faith by Progressives in its founding principles and pillars.

Making this campaign against American preeminence all the more perplexing is the fact that Europeans and free nations around the world since 1945 had enthusiastically welcomed an American military presence, willingly invited in American culture, and greedily pocketed American Marshall Plan funds when the Soviet Union constituted a genuine threat. During most of that time, criticisms were muted and usually accompanied by a “but-we’re-glad-you’re-here” sentiment. Once the Soviet threat evaporated, however, the Europeans—having profited for decades from extremely low defense budgets that allowed them to spend extravagantly on domestic programs—criticized Americans as too warlike, and insufficiently concerned with social welfare. Europe, having imposed on the world two of the most horrific wars in human history, now lectured the United States about human rights.

None of this posed a danger so long as American political and intellectual leadership remained, well, pro-American. But by the late twentieth century, the entertainment and music industry (to a large extent) and at least half of the political culture had come to see the United States as the source of the world’s problems, not the solution. Of course, there never was such a thing as “Greek exceptionalism” or (for at least one hundred years) “British exceptionalism” as defined by the four pillars that characterize the American experiment. But the fact that a left-wing politician would fail to understand American exceptionalism shouldn’t be surprising: historian Gordon Wood missed the target as well, writing that “our beliefs in liberty, equality, constitutionalism, and the well-being of ordinary people” gave Americans a special sense of destiny.17 As usual, however, it was Alexis de Tocqueville in Democracy in America who came closer:

Thus the Americans are in an exceptional situation, and it is unlikely that any other democratic people will be similarly placed. Their strictly Puritan origin; their exclusively commercial habits; even the country they inhabit, which seems to divert their minds from the study of science, literature, and the arts; the accessibility of Europe, which allows them to neglect these things without relapsing into barbarism . . . His desires, needs, education, and circumstances all seem united to draw the American’s mind earthward. Only religion from time to time makes him turn a transient and distracted glance toward heaven. We should therefore give up looking at all democratic people through American spectacles and try at least to see them as they actually are.18

Tocqueville revealed his understanding of the American political system’s unique structure and the free market that he saw everywhere. He also warned of the dangers to liberty that arose from the radical egalitarianism he sensed in the American character.

In a sense, then, the Cold War provided the perfect object lesson in the value of America’s ascent—pitting Christianity, law coming from the people, private property, and free markets against an enemy who accepted none of those principles. And, in a sense, it has been the demise of Soviet Russia that has loosed the shackles of self-restraint on the part of Progressives and other statists both at home and abroad. With no visible, obvious foreign threat (al-Qaeda, when it is mentioned at all, is dismissed as a group of religious radicals, not the spear point of a competing worldview), self-restraint has been replaced by self-loathing. Guilt and lack of conviction were natural results. American leadership no longer championed American exceptionalism, because by 2012 American leaders no longer believed in it. The mission that killed Osama bin Laden was heralded as essentially the end of the war on terror, with the West having never come to grips with the persistent threat of radical Islam.

This is the story, then, of the world from 1945 to 2012 as it celebrated, then abandoned, the four essential elements of the American character. It is also the story of the abandonment of the very concept of virtue, for virtue is not relative. In the American past it was learned, to paraphrase Lincoln, in every act of being an American. And it can be taught, though the education of virtue in America (let alone the world) is as obsolete as a black-and-white television set. Above all, virtue must be practiced. Any society unwilling or unable to support virtue is a society adrift. The world’s immediate quest for the advantages derived from American-like qualities of liberty, justice, and equality was laudable and timely, but the steady—and often deliberate—deterioration of the principles that underlay those qualities has been inversely pathological and sudden.

This, then, stands as the question of the hour. Can enough Americans find their founding principles again to save themselves? And if they do, is it too late to save the rest of the passengers on the sinking world ship? One thing is certain: we remain the “last, best hope” for mankind, for a world without an exceptional America will not long tolerate even a France or a Belgium, or any Western democratic power with free markets. It is entirely possible that without American exceptionalism and its pillars, there would be no free markets, for the impetus toward wealth and power redistribution is utterly unstoppable in their absence. Once private property, citizen virtue, and free markets are gone, democide, violence, and murder will take their place, for in such a world without liberty, the horrors of Nazi Germany or the gulag archipelago will not only become common—they will cease to be horrors at all.


Hot Spots, Cold War


1945: Yalta, Potsdam agreements; Franklin Roosevelt dies; Germany surrenders; atomic bomb dropped; Japan surrenders; Germany divided into occupation zones; Japan pulls out of China and other territories; high inflation in United States

1946: Central Intelligence Group, forerunner of the CIA, established; ENIAC, first general-purpose electronic digital computer, unveiled; Churchill gives “Iron Curtain” speech; Nuremberg trial verdicts; decoding of Venona intercepts reveals names of Soviet spies in United States

1947: Canadian Citizenship Act establishes distinct Canadian citizenship; Communists seize power in Poland; Marshall Plan instituted; Communists seize power in Hungary

1948: Communists seize control of Czechoslovakia; Berlin airlift begins; Alger Hiss accused by Whittaker Chambers and indicted for perjury; Harry Truman elected

1949: NATO formed; USSR tests first atomic bomb

1950: Rosenberg spies indicted; McCarthy’s Wheeling speech; Truman orders development of hydrogen bomb; Korean War begins; Hiss convicted of perjury at second trial

1951: UNIVAC, first commercial computer, delivered

1952: Elizabeth II becomes queen of the United Kingdom; Truman’s seizure of the American steel industry overturned by Supreme Court; Eisenhower elected president

1953: Korean War ends; Khrushchev becomes secretary of Soviet Communist Party; rebellion in East Germany crushed; Rosenbergs executed for espionage

1954: Brown v. Board of Education; Army-McCarthy hearings

1955: Soviets and Allies withdraw from Austria; Eisenhower suffers heart attack; Montgomery, Alabama, bus boycott

1956: Hungarian Revolution suppressed; Eisenhower reelected; Sudanese independence

1957: Sputnik launched by USSR; Civil Rights Commission established; Governor Orval Faubus blocks black students from attending Little Rock (Arkansas) Central High School

1958: NASA formed; Jack Kilby of Texas Instruments demonstrates first integrated circuit

1959: Castro seizes power in Cuba; Titan intercontinental ballistic missile (ICBM) fired; Alaska and Hawaii admitted as states

Ghosts Walking in Procession

The flattened landscape could have been Mars, at least what people knew of the planet. No one had been there, and no robotic cameras had yet photographed it. But the charred scene, interrupted occasionally by a shell that only upon close inspection could be identified as a building, had been entirely created—and destroyed—by humans. This was Hiroshima, only hours after the atomic blast that leveled buildings as though they were cardboard and incinerated Japanese citizens like kindling. Partial structures that had only hours earlier been multistory buildings stood atop mountains of rubble and burned bodies. Akihiro Takahashi, then a student at a junior high school, watched from a playground as the silver plane flew over. It was the last thing he remembered before he was blown backward thirty feet from where he stood, his ears “nearly melted off.” He was badly burned everywhere—his back, arms, legs. When he regained awareness (somehow, he had instinctively walked to the river to cool himself), he saw horrific images. Years later, he told an interviewer the people “looked like ghosts walking in procession.”1 In the river, corpses bobbed in the water. And beyond, the city was flattened, turned to rubble, devoid of life.

The scene in Berlin just a few months earlier was scarcely different. Spared an atomic attack, the German capital nevertheless had been pulverized, buildings hollowed out from Soviet artillery and bombing, brick and stone piles blocking streets, vegetation scarce. Berlin’s residents cowered in the basements of their apartment buildings, many structures blasted apart on one or two sides. There may have been no radiation, but the extent of the devastation was much the same as in the Japanese cities ravaged by atomic bombs, and in the case of cities such as Dresden and Hamburg, perhaps worse.

An unexpected, even shocking, change occurred within a short time (by historical standards). By 2000, both Berlin and Hiroshima had not only come back, restored to vitality and health, but had been rebuilt anew, making them both in many ways more “modern” than some American industrial centers such as Detroit. Indeed, a widely circulated e-mail compared pictures of Detroit in 2012 with Hiroshima and concluded by asking, “Who won the war?”

Both the decisive victory by the Allies and the subsequent “hard peace,” as John Kennedy called it, occurred during a time when America had emerged as the primary world power. Although the Soviet Union—by 1949 in control of large swaths of eastern Europe—had used its manpower advantage to overwhelm the Nazis on the Eastern Front, there was no doubt that the United States was years ahead of the USSR in science, technology, medicine, and the material condition of its most ordinary citizens. Determined to close this gap and eventually overtake the United States, the Soviets committed themselves domestically and in their foreign policy to expansionism in the purest sense of Lenin’s doctrine. Communist “greatness” had to be spread to the world, by force if necessary. And, for a short time, only the USA stood between the Communists and their goal.

This came at a tremendous (and at the time, often hidden) cost. The United States—as late as the mid-1970s—doggedly shouldered the burden of protecting Europe with high defense spending, allowing America’s own heavy industries to wither under the weight of regulation and taxation, compounded by rising inflation. Calls to remove American military bases from the Far East and Europe were met with legitimate concern by hawks at home who feared Soviet expansionism, but also with foreign demands that the United States (and its money!) remain right where it was. Isolationist voices of the prewar period had been silenced by Nazi aggression and Pearl Harbor. At home, by the 1960s the economic reengineering implemented during the New Deal had lost its association with World War II necessities and was finally being questioned, both by conservatives fearing the growing power of the federal government, and by liberals fearing the rise of corporate power, especially when Democrat John F. Kennedy implemented Mellonesque tax cuts to stimulate investment.

Nevertheless, the detrimental impact of New Deal policies on all sectors of American productivity became increasingly apparent. The transformation toward a more planned economy was accompanied by the rise of various social pathologies of Progressivism, which gained new momentum in the 1960s. As these tributaries converged into a single river, the current was accelerated by Progressive animosity toward American exceptionalism, made manifest in public schools and slowly integrating itself into popular culture. This was seen in the ridicule of American traditions and religion starting to take root, the ongoing assault against the free market (especially through environmentalism), and the revisionist teaching of American history by the so-called New Left historians, who deplored America’s place in the world. The Cold War, a necessary and noble expenditure of blood and treasure (ultimately ending in victory for the United States), provided the canvas upon which these other strokes appeared. By 2012, one could hardly recognize the portrait of America, so greatly had it changed from 1945.

Well before the defeat of Nazi Germany, Allied leaders Franklin Roosevelt, Winston Churchill, and Joseph Stalin had met at Yalta in the Crimea in February 1945. By that time, the Red Army was sweeping German forces before it in the East and would reach Berlin in two months, while the British and American forces would cross the Rhine on March 24. All but the most delusional Nazis understood that not only had Germany lost, but it stood a good chance of being eradicated. Indeed, at Yalta, the Allies agreed to unconditional surrender on the part of all Axis powers, and Germany would undergo demilitarization and denazification. Stalin promised free elections in Poland and the clear implication was that aside from Germany herself, all occupied nations would soon be handed back to their citizens.

Germany was eventually split into four occupation zones, one each under the control of the Americans, British, French, and Russians; and Berlin—inside the Soviet zone (soon called East Germany)—was similarly divided into four sectors. In Eastern Europe, the Soviets disregarded their Yalta agreements. Instead of withdrawing from the other countries they occupied, the Soviets set them on a path to Communist control, and in Poland installed a Communist government immediately. At Yalta, however, Roosevelt—who always thought he could handle Stalin—came away thinking the world was “on the road to a world of peace.” But his own aides, including ambassador to the USSR Averell Harriman, warned him that the Soviets had no intention of allowing democracy in the eastern European states.

At Potsdam, in July and August 1945, President Harry Truman (who succeeded Roosevelt after his death), Clement Attlee (who took over as prime minister from Churchill), and Stalin discussed the postwar disposition of Japan under what became the Potsdam Declaration. Truman alerted Stalin to a “powerful new weapon” without specifically telling him the United States had an atomic bomb—but the Soviets already knew all about its development through their extensive spy network inside the U.S. government and the Manhattan Project. Soon thereafter, Poland, Bulgaria, Hungary, Romania, Czechoslovakia, and Albania were designated as Soviet satellite states with Communist governments.2 Japan fell to American occupation forces headed by General Douglas MacArthur, who ignored the Soviets, reconstructed Japan along American lines, gave women the vote, and developed a democratic constitution. But in Europe, the reconstruction process was different, for it directly involved the two former allies, now politically and philosophically opposed to each other.

In February 1946, an American diplomat, George Kennan, sent an analysis of the situation to the State Department—the so-called Long Telegram—responding to a query from the U.S. Treasury Department as to why the Soviets had not supported the new World Bank or International Monetary Fund. He described a paranoid, insecure, and “neurotic” expansionist Soviet Union.3 He declared there could be no permanent modus vivendi between the West and the USSR, but Kennan thought military force would not be necessary (and the reality of the day was that short of using atomic bombs, the West did not have enough ground strength or air power to force the Soviets to do anything). Only the nonmilitary option remained, and he urged a “vigilant containment of Russian expansive tendencies.”4

At the same time, the USSR was going out of its way to shut down all travel and communications between the West and Communist territories, imposing what Winston Churchill in a 1946 speech called the Iron Curtain. Therefore, to forestall the outbreak of another European war, this time caused by Soviet expansionism, the United States was forced to become the leader of a group of European states, a role it had never assumed before and one expressly contrary to its entire political history. Kennan’s strategy came to be known as “containment,” which, at its core, inverted Leninism. In a 1917 book, Imperialism: The Highest Stage of Capitalism, Lenin insisted capitalism could not survive without taking over other countries for new markets. But now Kennan had argued that communism was the ideology that could not survive without expansion, and that if trapped within a closed system, it would rot from the inside out. Putting containment into practice, however, first required that the “Old” Europe be rebuilt sufficiently to stand on its own politically and economically, and to resist Soviet expansion . . . when it came.

Rotting in the Train Station

The material damage in 1945 Europe as seen by arriving British and American troops was sobering. First, the most photogenic aspect of ruin—the cities—was evident to all. Rubble covered the streets, and the landscape was pockmarked with craters and shell holes. Three-quarters of Germany’s houses were uninhabitable, yielding a homeless population of 20 million in Germany (with 500,000 in Hamburg alone). In Berlin, 75 percent of the buildings were uninhabitable, and 40 percent of all German homes throughout the country had been destroyed. Much of Europe’s transportation system had been destroyed. Railroads, power stations, and communications centers were flattened, not just in Germany but in all other European nations once occupied by the Nazis. France, for example, could put only a little over 20 percent of her locomotives on the tracks, and her entire merchant fleet lay at the bottom of the Atlantic or Mediterranean.5 Holland had seen much of her land flooded. All of this came on top of a staggering human cost. Between 1939 and 1945, more than 36 million people had died from war or direct genocide by the Nazis and Communists, and almost half that number were noncombatants. Germany lost 5.5 million soldiers in the war; the USSR, 10 million. The United States lost an astonishingly low 2.5 percent of its armed forces in all the services (just under 417,000). Britain—not counting the colonies—lost nearly as many as America (384,000) even though almost twice as many Americans served.

At times it looked like the whole continent had shut down. European industry had been eradicated by bombing, and cities often consisted of little more than rubble with streets. Displaced persons wandered about looking for family, relatives, food, and shelter, and with males of military age in very short supply, the burden of survival fell on the women. More than 10 million people had fled before the Russians.6 Many refugees were housed under wretched conditions, and more than 7 million Germans were in POW camps, over 2.6 million in American camps alone.7 In one American camp, Bretzenheim, German prisoners received only 600 to 850 calories per day.8 In the summer of 1946, 100 million Europeans consumed fewer than 1,500 calories per day. In 1945, the average German consumer lived on a mere 850 calories a day.9 Over the summer of 1945, rotting corpses in Berlin posed a risk of disease, dysentery was widespread, and one American adviser said that in October of that year ten people a day were dying in the Lehrter railway station from malnutrition and illness.10 The situation grew even worse in 1947 due to a devastating snowstorm that stopped food production, and 19,000 Berliners suffered frostbite. Inflation soared: a carton of cigarettes costing fifty cents in the United States sold for 1,800 Reichsmarks in Germany. Europe could not buy goods abroad, and Europe’s balance-of-payments debt to the United States was $5 billion and growing. Once, only leftist intellectuals had blamed capitalism for the dislocations; by 1947, due to the slowness of economic recovery, that opinion was becoming widely shared by people from all walks of life. The criticism was misplaced, of course; the primary problem in Western Europe was that the governments were socialist with large Communist minorities, and the Communists were actively working against recovery.

As the Soviets moved in toward the end of the war and afterward, the horrors grew worse not only for Germans, but everywhere the Red Army seized control. In Vienna alone in April 1945, an estimated 37,000 women were raped by the Red Army.11 In Berlin, an ocean of children disappeared—some 53,000—most dead, some kidnapped and sent east, but a tragic fate awaited children everywhere. Czechoslovakia had 49,000 orphans; Holland, 60,000; and Poland, as many as 300,000. Aid flowed in through the Allies and the new United Nations Relief and Rehabilitation Administration (UNRRA), but not nearly enough. Just as after World War I, millions of people were forcibly relocated once again—many to the Soviet-controlled zone of Germany as Poland and the Soviet Union emptied their territories, along with those of East Prussia, West Prussia, Silesia, Pomerania, and eastern Brandenburg, of their former German inhabitants. Likewise, Romania, Yugoslavia, Hungary, and Czechoslovakia also expelled everyone of even partial German ethnicity, with most of the refugees ending up in the three zones of occupation under American, French, and British administrations.

A very real chance existed of the entire German nation disintegrating, and with it, the likelihood that the Soviets would move in on “humanitarian” grounds to strip West Germany of its remaining industry, as they had in the East. Only the United States, and, to a lesser degree, Britain and France, both of which relocated hundreds of thousands of German POWs to their countries to work as slave labor, prevented this scenario.

President Truman, after the report of his economic mission to Germany and Austria, wisely repudiated the then-popular Morgenthau Plan, which would have permanently deindustrialized Germany and turned the country into an agrarian nation. A separate report from General Lucius D. Clay, the military governor of the U.S. zone, delivered through industrialist Lewis Brown, reached similar conclusions. A policy of pastoralization not only would fail to feed postwar Germany, but would push the nation toward communism. Quickly, Clay announced a new policy oriented toward the “complete revival of German industry, primarily coal mining.”12 It all meant restoring Germany as a partial economic counterweight to the Russians, who increasingly appeared untrustworthy in the extreme.

Holding a hard line against the Soviets did not come easily for Truman, who at one point in late 1945 described Stalin as “a fine man who wanted to do the right thing,” an “honest man who is easy to get along with—who arrives at sound decisions.”13 Even as Churchill delivered his Iron Curtain speech at Westminster College in Missouri, Truman continued to refer to the Soviets as “friends” and offered Stalin the opportunity for rebuttal at the school. After his first meeting with Stalin at Potsdam, Truman likened the dictator to his old Missouri political boss Tom Pendergast and felt he could deal with him. These statements were eerily similar to Roosevelt’s comments about the Soviet dictator, made to Winston Churchill years earlier: “I think I can personally handle Stalin better than either your Foreign Office or my State Department,” and presaged President George W. Bush’s assessment of Russian leader Vladimir Putin in June 2001 when he said, “I looked the man in the eye. I was able to get a sense of his soul.”14 At Tehran, FDR thought his teasing and disparaging of Churchill would soften up the Soviet dictator: “I kept it up until Stalin was laughing with me . . . The ice was broken and we talked like men and brothers.”15 The “brothers” then discussed Stalin’s demand that fifty thousand German officers be shot after the war, to which Roosevelt quipped, “I have a compromise to propose. Not 50 thousand, but only 49 thousand should be shot.”16 The dictator, although claiming it was a joke, had in fact been deadly serious.

Roosevelt, with his penchant for personal negotiations, like so many American presidents before and after (Wilson, Obama), had made many of the same mistakes Woodrow Wilson made twenty-six years before, agreeing to a series of demands (this time from a potential enemy instead of allies, as Wilson did with France and Britain), making secret concessions, and snobbishly negotiating apart from Britain (while, of course, Churchill negotiated with Stalin apart from FDR). Emerging from the Tehran and Yalta conferences with platitudes, declarations, signatures, and above all “assurances,” a deluded FDR handed over Eastern Europe to the Soviets as meekly as Wilson had surrendered on point after point to the Allied powers at Versailles. Each president then proceeded to congratulate himself on his shrewd and skillful negotiating powers. While the British certainly had their own interests to guard, they nevertheless saw the road map more clearly than the Americans, observing that “Stalin has got the President in his pocket.”17 Yet the British were in no position to do much about it, and Roosevelt deliberately kept Churchill at arm’s length so as not to “feed Soviet suspicions that the British and Americans were operating in concert.”18 Since the president was unaware that the NKVD (predecessor to the KGB) had penetrated high levels of the American State Department, the British Foreign Office, and other government offices—and therefore knew exactly what the level of cooperation (or lack thereof) was—the point was moot. Even more so since the man with whom FDR was dealing was a maniacal mass murderer who could laugh at Hollywood films one minute and sign execution orders the next without the slightest break in mood.

Most of all, Stalin was a man who had internalized Lenin’s dictum to “probe with the bayonet,” searching for Western weaknesses. Even when it became abundantly clear in the last months of Roosevelt’s life that Stalin had betrayed him and violated virtually every agreement he had signed, the president took it as a personal wound, not as the disastrous foreign policy misjudgment it genuinely was. Rosy reports from Joseph Davies, a former U.S. ambassador to the USSR in thrall to communism and to Joseph Stalin himself, further amplified Truman’s initial positive assessments of the Communists. Davies insisted that to “distrust Stalin was ‘bad Christianity, bad sportsmanship, bad sense.’”19 Within a few months Truman recovered from his initial misperceptions of Stalin and laid down the law in one of his first meetings with Soviet foreign minister Vyacheslav Molotov (an assumed name meaning “the Hammer”). “I gave it to him straight,” Truman said. “I let him have it. It was a straight one-two to the jaw.” Molotov blustered, “I have never been talked to like that in my life,” when, of course, he frequently suffered far worse from his bloodthirsty boss. Truman dispatched him by replying, “Carry out your agreements and you won’t get talked to like that.”20 Undeterred, Stalin kept a step ahead of Truman thanks to his spy networks and his steely-eyed focus on gobbling up as much territory as possible.21 Few in the West wanted to contemplate the possibility of yet another malevolent dictator equal to Hitler at work, this time in the USSR. Surely, went the typical thinking, the world had not evicted one demon from the stage only to see another take his place? It was cognitive dissonance in its most virulent form.

Truman, however, only slowly acknowledged the threat that the Soviet Union posed to the American way of life. Following George Kennan’s Long Telegram in February 1946, Truman received another blunt note: “I think it is now time for [the president] to get tough with someone.”22 It was from his mother. The Clifford-Elsey Report, produced by Truman’s special counsel Clark Clifford and aide George Elsey, told Truman that all of his top staff considered the Soviet Union “expansionist,” and that the United States must be prepared to take military action if necessary to counter Soviet threats. The morning after he read the report, Truman met with Clifford and ordered him to bring all copies to him immediately for destruction: if the report were made public, Truman said, it would ruin any possibility of Soviet-U.S. rapprochement.23

Indeed, Truman reacted cautiously to Stalin’s steady encroachments, and the fait accompli of a divided postwar Europe might have been far worse if not for the Lone Ranger antics of Truman’s secretary of state, James F. “Jimmy” Byrnes. Byrnes no doubt felt he was much better qualified to be president than Truman, and tended to act as if he were. An Irish Catholic from South Carolina who converted to Episcopalianism, Byrnes could boast he had been in all three branches of government: a congressman for fourteen years, a senator for ten, and a Supreme Court justice appointed by FDR in 1941, resigning in 1942 to head Roosevelt’s Economic Stabilization Office (controlling wages and prices) and, in 1943, the Office of War Mobilization. Byrnes had almost no formal education, but was the consummate political fixer, and thought he had Roosevelt’s backing for vice president in 1944. But FDR had settled on Truman for reasons unknown, and Byrnes left the Chicago convention distressed and humiliated. FDR took Byrnes with him to Yalta, possibly as a consolation prize, and later Truman chose Byrnes for secretary of state to patch things up, but also to co-opt his acumen. Distrustful of “those little bastards at the State Department,” Byrnes often negotiated directly with Soviet officials without Truman’s instructions, and even without informing the president, who complained, “I have to read the newspaper to find out about American foreign policy.”24 Yet the president also distrusted the State Department, and with good cause. The department was riddled with individuals like Joe Davies, who took the Soviet side at Potsdam in almost every instance. After one of Byrnes’s communiqués in January 1946, Truman finally concluded, “I do not think we should play compromise any longer . . . I am tired of babying the Soviets.”25 But Byrnes also made mistakes.
It was Byrnes who resolved the reparations issue by giving Stalin recognition of the western Neisse as Poland’s western border (thus giving Silesia to Poland), as well as recognizing the Soviet puppet governments in Bulgaria, Hungary, and Romania. For him, the settlement was just another political “fix.”

Between the Kennan containment doctrine and Truman’s new sober approach to dealing with the USSR, the Marshall Plan was born in 1947. As Army chief of staff (1939–45), General George C. Marshall had become a powerful and respected force in the United States and was widely credited with wartime efficiency. He could be petty—officers whom he deemed unsuitable were passed over for promotion, including one whose name Marshall mistook for someone else’s. But he was also a no-nonsense organizer.

Secretary of State Byrnes had shared Truman’s views, but increasingly was seen by the president as competition. Byrnes resigned in 1947, and Marshall was named secretary of state to lend credibility and moral force to the administration. Determined to save Europe, Marshall instructed Kennan and others to develop a plan to do exactly that. Under the Marshall Plan, the United States would give $6 to $7 billion per year in aid to Europe for three years. Outlining his blueprint on June 5, 1947, at Harvard, Marshall announced his objective as “the revival of a working economy in the world so as to permit the emergence of political and social conditions in which free institutions can exist. . . . Our policy is directed not against any country or doctrine but against hunger, poverty, desperation and chaos.”26 Over time, more than $13 billion poured into eighteen countries, with Britain and France receiving over 44 percent of the funds while nonbelligerent countries such as Ireland, Iceland, Portugal, and Turkey—none of which suffered war damage—took substantial bites of the Marshall Plan apple. In the end, although the Marshall Plan contributed much less than expected to the rebuilding of Europe, it stabilized European governments and prevented communism from dominating Western politics.

Like Truman, Marshall originally misjudged the potential to work with Stalin, feeling that Byrnes and Truman had taken too hard a line with the Soviets. Until his March 1947 meeting with the dictator in Moscow, Marshall felt confident he could influence Stalin with humanitarian appeals and reach an agreement with him on collaborative efforts to rebuild Europe. The meeting was long and difficult, but noting Stalin’s lack of concern for any meaningful actions to ease the postwar suffering, Marshall finally came to understand what Stalin had in store for Europe. Germany was to have a strong central government, which Stalin could nudge into communism and make into another Soviet satellite state. France and Italy were on the verge of becoming Communist, and even Great Britain was moving rapidly left. In France, Charles de Gaulle had resigned in frustration and been replaced with a socialist government under Léon Blum that was shaky at best. The Fourth Republic, founded in December 1946, appeared doomed, and, suffering from substantial Communist gains, was expected to succumb to Stalin’s wishes. At stake was nothing less than the future of a free Europe.

At the Moscow meeting, Marshall talked with his British counterpart, Ernest Bevin, a veteran trade unionist who had often used his position as minister of labor to secure higher wages for workers during the war, and who had been named foreign secretary in 1945 in the Attlee government. Also involved was the French foreign minister, Georges Bidault. Both committed themselves to working with Marshall for a free Europe and an economically restored Germany. Nonetheless, Europe continued to deteriorate before their eyes, and communism was on the rise everywhere, but particularly in Italy and France. Although the United Nations Relief and Rehabilitation Administration had distributed $4 billion in relief aid, mostly food and medicine, it had merely kept Western Europeans alive and partially clothed.27

Virtually all of Europe became eligible for the Marshall Plan, except Spain (deemed “not only a bad credit risk [but] a moral risk” as well).28 The Soviets pressured their Eastern European allies to reject all Marshall Plan assistance, and Finland declined to participate to avoid antagonizing its next-door neighbor. Officially, the Soviets claimed Marshall’s program would interfere in the domestic affairs of countries accepting the aid. Nonetheless, as Truman insisted in 1949, the United States intended to “assist the people of economically underdeveloped areas to raise their standard of living.”29 Implementing the plan was more difficult than one might think. The devil was in the details, and after the sixteen intended recipient countries were brought together into the Committee of European Economic Cooperation, national interests often proved difficult obstacles to overcome. Marshall’s design, voted into law as the Economic Cooperation Act (ECA) on April 3, 1948, was sometimes opposed by the British when it tended to reduce their status, and sometimes by France, which wanted to demonstrate her independence from the United States. With American money, Europe seemed to rise like a phoenix from the ashes to reject communism, although many Europeans embraced a softer version of socialism.

American goods accompanied American aid, rushing into devastated European countries desperate for employment and relief from wartime scarcity and squalor. Encompassed in FDR’s Four Freedoms, the principle of a rising standard of living for all seemed no longer just an American goal but an international one. The United Nations Charter in Article 55 affirmed its objective of promoting “higher standards of living” worldwide. This in itself was remarkable, as it meant the UN explicitly dedicated itself to a capitalist end instead of mere subsistence.

The Communists, of course, feared the Marshall Plan and attempted to stop it, but their efforts failed. At an Italian Communist Party rally in 1948, twenty thousand people heard Communist leader Palmiro Togliatti denounce American aid. He was booed for more than five minutes before the Communist crowd began chanting “Long live the United States.”30 When the first ship bearing Marshall Plan aid arrived in Bordeaux, it was greeted with wild cheers from dockworkers who only months before had protested “American imperialism.”31 And it was the Soviets’ own actions, such as their harsh response to the Czech crisis of March 1948, that sharpened support for the Marshall Plan: fully 73 percent of Americans then polled thought U.S. policy toward the Russians was “too soft.”32 The Communists’ fears were well grounded. The first reports of the Marshall Plan’s results in 1948 showed that industrial production in Europe had risen above prewar levels, exports were 13 percent above 1938 levels, and electrical and steel output had risen to postwar highs. By 1950, Europe’s industrial production was 24 percent higher than before the war and its agriculture had reached an all-time high.33

Although not part of the Marshall Plan, the Bretton Woods Agreement, secured in 1944 at a small resort hotel in New Hampshire out of a desire to avoid the economic chaos that had followed World War I, addressed another support plank for rebuilding Europe. Finance ministers and economists from Europe and the United States, featuring the last major appearance and policy influence of John Maynard Keynes, established the World Bank, the International Monetary Fund (IMF), and an international monetary structure tied to the U.S. dollar. For Bretton Woods to work, the dollar had to remain strong—but for the developing nations to benefit from the system (and, in theory, not become breeding grounds for communism), the World Bank and IMF had to lend generously, largely at the urging of Harry Dexter White, a Soviet agent in place as assistant secretary of the treasury. Since the United States was (and would always be) the leading source of funds for those institutions, tremendous pressure existed for the United States to lend, and a built-in temptation existed to run deficits . . . which would weaken the dollar. In short, Bretton Woods embodied diametrically opposing objectives. Economist Robert Triffin would later describe this in a theory named for him, “Triffin’s dilemma.” Of most immediate importance after the war, however, was that the fragile economies of Greece, Italy, and France needed stabilizing before their voters elected Communist governments, but soon the objectives were broadened to include most of the European countries and Great Britain. The United States ended the immediate postwar period by almost single-handedly saving free markets (and free peoples) in Europe and by committing herself to future outlays for development of the Third World. Virtually no other nation in history had approached the end of a conflict with such magnanimity.

Theft of the Century

When it became clear that a free Western Europe would not topple internally after the Marshall Plan took effect and NATO was formed, the Cold War accelerated on both sides. Soviet-occupied Eastern Europe was stripped, its factories dismantled and shipped east. News of executions of repatriated POWs reached the West. Soviet forces did not withdraw from Iran, prompting Truman to threaten the USSR at the United Nations. Yugoslavia shot down two U.S. transport aircraft. And all this was just what the Americans could actually see. Worse was the hidden and effective penetration of Western governments at almost every level by Communist agents while the United States curtailed its intelligence operations. Truman had abolished the Office of Strategic Services over OSS Chief William J. Donovan’s strenuous objections and, according to historian Burton Hersh, had told Donovan, “I am completely opposed to international spying on the part of the United States. It is un-American.” Fortunately, British intelligence came to the rescue under UKUSA, a 1946 agreement made between the United Kingdom and the United States to share information and pursue joint operations.

But in many ways the horse was out of the barn: Soviet agents operated at high levels—and in critical posts—in the State Department and Treasury. Harry Dexter White, Treasury’s number two man, whose influence over policy was substantial, was on the NKVD’s payroll.34 Senator Joe McCarthy had just arrived in Washington, and would not focus on communism in government as a political issue until 1950. Thus, Soviet agents not only penetrated the American and British governments, but operated nearly unchecked for many years, and while it is possibly overstated to say they “cost us China” or “gave the Reds the A-bomb,” in every respect the actions of Communist cells greased the wheels of Soviet policy.

The Soviet effort in espionage was so widespread and devastating that it is difficult, even sixty years later, to comprehend and accept the enormity of the crimes of so many people—all but a few of whom escaped punishment entirely. The McCarthyism so loved as an issue by the American Left failed to scratch the surface of the Soviet penetration of the U.S. government, and the tally of identified spies clearly exceeds the numbers bandied about by Tail Gunner Joe. Unrecognized by the American public to this day, the USSR mounted an unprecedented espionage effort that stripped the United States of the vast majority of its secrets, including how to make an atomic bomb. Malmstrom Air Force Base near Great Falls, Montana, became perhaps the most significant port of exit for information and technology the world has ever seen. Planeloads of classified documents and espionage reports moved through Malmstrom on their way to Moscow, and largely because of them the Soviet Union was able to maintain rough parity with American military technology for another fifteen years. This fact made the American victory in the Cold War in the 1990s all the more impressive.

Byrnes’s consistent nudging, Churchill’s blunt warnings, and Kennan’s Long Telegram combined to convince Truman that cooperation with Stalin was impossible. He reinforced the U.S.-British alliance while Stalin blundered badly, uniting the fractious Western powers with an act of aggression. At Molotov’s instigation, Stalin began throwing up obstacles to travel between the Soviet-controlled sector of Berlin and the American/British sectors in January 1948. Truman, of course, could not tolerate this, nor could he send tanks and ground forces into a situation where they would be hopelessly outnumbered, even with European allies. General Clay insisted that the United States could not give up such a substantial enclave in the East, calling it a “symbol of American intent.” His counterpart in the newly independent Air Force, General Curtis LeMay (who wanted a much more aggressive stance against the Soviets), was asked, “Can you haul coal?” He reportedly answered, “We can haul anything.” Clay and Truman enlisted LeMay in developing a strategy that relied on an airlift to supply all of the city’s needs, while at the same time shifting the onus to the Soviets to fire the first shot and assume the war guilt of any subsequent fight. Although Clay still was not sure it would work, he saw it as the only option, and one that permitted the British Royal Air Force to play a role (France, lacking any air power, was necessarily excused from duty). By mid-July 1948, Western planes were depositing thirteen thousand tons of food, clothing, and medicine a day for the two million people trapped behind the blockade—an amount ten times that supplied to the trapped German Sixth Army at Stalingrad.35 The airlift lasted a year, ultimately bringing in more goods than the preblockade rail lines had provided. With the “Easter Parade” of April 15–16, 1949, the Americans and British inundated the city and humiliated the Soviets, who lifted the blockade under the cover of negotiations. 
Victorious General Clay, who had administered Berlin, returned home to a massive ticker-tape parade.36

Latin American Revival

Amid the uncertainty surrounding a divided and rebuilding Europe, the United States was gratified that, at least for the moment, its southern neighbors in Latin America were relatively amiable diplomatically and stable economically. But the trajectory for growth in the region remained low. Despite having gained their independence over a century before the African states, the nations of Latin America had nevertheless made only marginal progress shortly before and after World War II. United States markets for South American goods had dried up during the Depression, then rebounded during the war as the Allies needed raw materials. Although Americans continued to purchase heavily after the war, the Latin American nations had lost the entire European and Japanese market and, for all intents, Britain’s too, with her shrinking treasury. Intracontinental trade made up some of the difference; Paraguay exported timber and extracts to its neighbors, and the southern tier of countries had by 1941 developed a plan to create a customs union to promote multilateral trade, led by Argentina’s finance minister, Federico Pinedo. This increased the share of exports going from one South American country to another, and helped buffer the collapse of intercontinental exports after 1945. Had Latin America used the respite to diversify and modernize, it might have maintained a more positive development trajectory, but most countries remained extraordinarily dominated by single products. In ten different nations, a single export commodity accounted for 50 percent of total exports.

In other ways, however, the war benefited South America. Immigrants fleeing Europe brought new skills as Germans and Italians streamed into Argentina and Brazil, Germans moved to Chile and Uruguay, and Spaniards fled to Mexico. But lacking capital and facing state competition in many cases, immigrant companies had limited impact. Worse, the war brought inflation. Argentina saw wholesale prices rise 12.3 percent from 1939 to 1945; Brazil, 17 percent; Chile and the Dominican Republic, over 19 percent, provoking governments such as Argentina’s to nationalize foreign-owned properties. Under such chaotic circumstances, the ascension of dictators such as Colonel Juan Perón in Argentina was predictable. Perón led a coup in 1943, accelerating the state’s role in the economy and (like Anastasio Somoza in Nicaragua) using the excuse of war to boost defense spending to keep himself in power.

Many South American countries turned to military dictators after the war, including Venezuela, Cuba, and Nicaragua. By 1954, only four democracies remained on the continent “even by generous standards of classification: Uruguay, Costa Rica, Chile, and Brazil.”37 Dictators were acceptable, but Communists were not, due to the Catholic Church’s strong resistance to Marxism. Despite a collectivist bent, the Vatican remained vehemently opposed to “godless” communism and aligned with any Latin American group that stood against Marxists. At least that was true at the top levels of Catholicism—at the village priest level, communism and “solidarity with the poor” were already starting to take root.

American policy sought to battle red influence on the continent less through religion—which it thought the Vatican entirely capable of handling—and more through trade, increasing exports from one fourth to one third between 1946 and 1950. Seeking to please their new sponsor, several Latin American countries cracked down on pro-Communist labor unions and some nations completely outlawed Communist parties (Brazil in 1947, Chile and Costa Rica in 1948), while in other countries, including Mexico, the Communist Party was legal, but could not get on a ballot or register for elections. These measures had an impact: Communist Party membership fell by half in South America, and by 1954, all Latin American nations except Argentina had banned Communists.

But the goodwill did not flow both ways, and the norteamericano was not welcomed with open arms (the famous Mexican saying “so far from God, so close to America” characterizing the general attitude that sovereignty demanded freedom from American influence). The wake-up call that these attitudes existed came with Vice President Richard Nixon’s trip to Caracas, Venezuela, in 1958, when his car was pelted with rocks and a crowd rocked it back and forth. Nixon later said the “spit was flying so fast the driver turned on his windshield wipers.”38 He attributed the hooliganism to Communist agitators, but the tour “prompted an immediate (but incomplete) reassessment of U.S. policy toward LA.” Nixon suggested the United States distance itself from authoritarian rulers, recommending a “formal handshake for dictators; an embraso for leaders in freedom.”39 That this constituted political, and not cultural, opposition to the United States could be seen when American jazz groups toured South America. At the very time Nixon’s car was being pelted, jazz greats such as Louis Armstrong—on U.S. State Department–sponsored tours—were playing venues in Latin America to packed houses of receptive audiences.

Whether or not the U.S. was guilty of neglecting its southern neighbors, with minor and infrequent exceptions the United States did not have to fear Soviet incursions from the south for more than a decade. Such assurance allowed America to focus much-needed resources elsewhere, such as in Asia, where new Communist aggression was afoot.

Chinese Dragons, Korean Tigers

Marshall’s economic aid program may have saved Europe, but no amount of money could save China and much of the Far East from the red menace. There the stage was set for forty years of Communist aggression that would not cease until after the fall of the USSR.

In July 1947, General Albert Wedemeyer, who had replaced General Joseph Stilwell in China, submitted a memo known as the Wedemeyer Report to President Truman, recommending the United States send aid and advisers in various military and economic fields to the Nationalist Chinese.40 He later placed the blame for the loss of China on the Truman administration and, in particular, the State Department, for its failure to sufficiently support the Nationalists. In reality, however, the Kuomintang under Chiang Kai-shek (Jiang Jieshi) was probably too corrupt to withstand the Communist pressure even if the United States had donated all its World War II military equipment as aid to his Nationalist armies. Marshall, Truman, and the American public recoiled at the thought of getting involved in a large-scale Asian land war—wresting isolated islands from fanatical Japanese had proved difficult enough, and a Communist China was simply not seen in the United States as a substantial threat. After all, China, with its population of 500 million people, had been unable to defeat or even hold back Japan, a nation only one seventh its size, during World War II. Since the middle of the nineteenth century, Anglo-Europeans had tended to dismiss China as irrelevant. Even up to May 1945, Japanese armies carried out offensive operations in China, seizing substantial territory while the seaward approaches to their home islands fell like dominoes to U.S. forces. In China, the Japanese had made extensive use of poison gas, as well as chemical and biological weapons, even dropping fleas infected with bubonic plague. Like Germany marching against Russia, theirs was a war of annihilation.

China’s civil war between the Communists under Mao Zedong and Chiang’s Nationalists (Kuomintang) shifted into high gear following Japan’s defeat. Although the Kuomintang held two thirds of the population and three quarters of China’s area, the Communists received the Japanese arms captured by the Soviets as well as much additional Soviet aid. Marshall, heavily influenced by Dean Acheson, who tended to believe Soviet propaganda, refused to believe the Soviets were backing their fellow Communists. But more important, the Chinese Communists were a cohesive force, while there was bitter in-fighting among the Nationalists and their various supporting warlords. In addition, the Communist promise of land reform, wherein the peasants would receive plots of land in the breakup of large land holdings, was irresistible to huge numbers of starving and landless peasants. That those plots would later be collectivized was left unsaid.

Truman initially supported Chiang, sending $2.8 billion in aid and credits by August 1949 (a third of the allotment of the Marshall Plan), but rapidly soured on continuing any involvement in China. Following the Japanese surrender, fifty thousand Marines in two divisions were sent to China to supervise the Japanese withdrawal. American troops were not to fight alongside the Nationalists, only to look out for American interests. Even so, they took casualties (ten killed and thirty-three wounded); one of the dead, Captain John Birch, was killed by Communists and became enshrined by American conservatives as the first battle death in World War III. Truman reduced the force to less than half its original size by June of 1946, and a year later only small units remained to protect American citizens. By the time Marshall resigned as secretary of state in January of 1949, the Nationalists had suffered heavy defeats. In October, Chiang, his command fractured and riddled with corruption, fled the mainland with remnants of his army to Taiwan (Formosa), from which he continued to represent China in the UN.

The next shoe to fall in Asia was Korea. Unfortunately, the United States had dismantled its armed forces, with the total number of men in uniform falling by about 10.5 million between 1945 and 1947.41 This demobilization occurred not just to fulfill the national sentiment to “bring the boys home,” but to give vent to Truman’s great distrust of the professional officer corps and its lack (in his eyes) of fiscal discipline. Although Truman possessed many noble attributes, his background in the National Guard had made him contemptuous of military brass. He had been turned down in his applications to West Point and Annapolis due to poor eyesight, and never seemed to get over that rejection.42 He insulted the military with his appointments: a political fixer, Harry Vaughan, became his senior army aide and adviser; Louis Johnson, another political hack, was named secretary of defense; and Francis “Rowboat” Matthews secretary of the navy.

Convinced that military expenditures contained a great deal of waste and fraud, Truman slashed the military budget to $12 billion, supporting an Air Force of 450 bombers and 2,475 fighter planes and a 238-ship Navy. The Marines were reduced to 86,000 men, and the Army to ten divisions with all its combat battalions reduced by a third. Only one division in the entire U.S. military—the 82nd Airborne—stood even remotely ready for combat. Clearly this had a beneficial effect on the American economy, where “government spending fell like a stone” and government’s share of GDP plummeted from 44 percent to a mere 8.9 percent.43 But military preparedness suffered badly. The peacetime draft failed to fill the Army’s quotas and, with deferments granted for almost any reason, supplied the military with indifferent, unwilling, and disgruntled troops. Training was perfunctory, and troops were rushed overseas to maintain some semblance of military presence. Even the stockpile of atomic bombs was in woeful shape, and the United States possessed no bombers capable of delivering atomic bombs to Russia from the U.S. mainland and returning. By early 1949, the United States still had fewer than seventy-five atomic bombs (some of which were being retired), none of them tactical weapons. In August 1949, intelligence detected an atomic explosion in the Soviet Union. Truman brushed it off, expressing doubt that the Soviets possessed the technological know-how to build an atomic bomb. Having emerged from the Second World War as a lion, Truman rapidly transformed himself into a lamb through his continued desire to see Stalin as reasonable—at the very time that Asian Communists decided to test America’s resolve.

The crisis in Korea (the “Hermit Kingdom” of Choson) was long in coming. The Soviets had wanted all of Korea and more for their last-minute entry into the war against Japan. FDR had agreed at Yalta that Korea would become an Allied trusteeship administered by all four powers, including China. But with Russians pouring into northern Korea, two U.S. Army colonels at the Pentagon, one being Dean Rusk, later secretary of state, after consulting a National Geographic Society map during the night of August 10–11, 1945, suggested the thirty-eighth parallel as a convenient dividing line for the Japanese to surrender to Soviet forces above the parallel, and American below.44 Although the United States had wanted the dividing line to be as far north as possible, the Soviets had reached well into Korea, and the United States would not have troops in Korea until September 4. The Soviets accepted the line immediately, since the USSR was separated from Korea by only the Tumen River, while the U.S. mainland was an ocean away. The formal division took place on August 15 per General Order #1 when the Soviets were already south of the thirty-eighth parallel, but they withdrew northward and began establishing the parallel as a national border, with “People’s Committees” for governing.

In 1946–47, the US-USSR Joint Commission meetings failed to reach positive agreements, and in 1947 the United States managed to obtain UN agreement to establish a UN Temporary Commission on Korea. The Soviets refused to cooperate, and in 1948 elections were held in South Korea and the Republic of Korea was born. The formation of the Democratic People’s Republic of Korea followed in September 1948, with Kim Il Sung as its premier. In South Korea, an independent democratic government arose under Syngman Rhee, although the structure hardly indicated stability: at one time South Korea featured 113 political parties.45 The U.S. State Department handled the transfer of power to the South Korean government, and John J. Muccio, a career diplomat and U.S. ambassador to South Korea, assumed command of all U.S. military advisers. Muccio was a forty-eight-year-old bachelor, ruggedly handsome and reputed to be “a chronic womanizer,” who got along well with American military personnel and liked to play soldier. MacArthur, the “American Caesar” who had liberated the Philippines and polished his reputation as the hero of the Pacific Theater in World War II, ignored Korea since he had no command responsibility there, and even maintaining a combat-ready force in Japan was far beyond his resources.

To the north, however, more than 100,000 native Koreans who had fought in Chinese Communist armies were returned to their homeland. They became the nucleus of the North Korean People’s Army under the command of Kim Il Sung. The Soviets provided T-34 tanks and modernized North Korea’s army, while the United States created a South Korean constabulary of 20,000 men to help combat frequent Communist guerrilla activities, rebellions, and disruptions of political activities in the South. This force grew to 50,000 men by 1948. With the American withdrawal of its last combat troops in June 1949, the Republic of Korea (ROK) increased its forces further to about 65,000 men, mostly equipped by the United States but denied tanks, mobile artillery, and aircraft—supposedly to prevent the South Korean government from invading its neighbor. The bumbling expansion was overseen by Muccio, who did little when North Korea began flexing its muscles with border incidents. Throughout the remainder of 1949 and until the invasion on Sunday, June 25, 1950, the North Koreans regularly attacked southern outposts and border installations. In the last six months of 1949, more than four hundred such attacks occurred.46 Probing with a bayonet in classic Communist style, Kim Il Sung was developing his plans through combat intelligence, keeping his troops sharp, and weakening the morale of ROK troops by frequently annihilating outposts and small units.

Into this arena of heightening tension stepped Secretary of State Dean Acheson. A liberal’s liberal, considered by the U.S. Progressive establishment as one of its most brilliant spokesmen, Acheson gave a speech on January 10, 1950, in which he described the United States’ position in Asia by saying, “This defensive perimeter runs along the Aleutians to Japan and then goes to the Ryukyus [the island chain that includes Okinawa].” He went on to note that “so far as the military security of other areas of the Pacific is concerned, it must be clear that no person can guarantee these areas against military attack. . . . Should such an attack occur . . . the initial reliance must be on the people to resist it and then upon the commitments of the entire civilized world under the . . . United Nations. . . .”47 If the United States had wanted to tell Stalin it was permissible for the North Koreans to attack South Korea, the message could not have been more explicit. In April 1950, Stalin approved Kim Il Sung’s plan to invade the South; in May Kim received Mao’s blessing and promise of support.48

A week after Acheson’s remarks, Congress voted down an aid appropriation of $10 million for South Korea. Then Democratic senator Tom Connally, chairman of the Senate Committee on Foreign Relations, responded to a reporter’s question as to whether the United States would seriously consider abandoning South Korea. Connally answered, “I am afraid it is going to be seriously considered because I’m afraid it’s going to happen whether we want it or not.”49 Border incidents had reached almost three a night, but suddenly in May they stopped and North Korean civilians were evacuated from the border zone.

These signals went unheeded by Muccio and everyone else on the U.S. side. The ROK army was put on emergency alert that lasted from June 11 to June 23, and ROK intelligence was on record as stating an attack was inevitable, but the time was unknown. The CIA, just in its infancy, paid no attention to Korea, and MacArthur’s intelligence chief, Major General Charles Willoughby, simply disregarded all indications of a North Korean attack. He considered none of the intelligence reports strong enough to issue a warning of an invasion. On June 20, Assistant Secretary of State for Far Eastern Affairs Dean Rusk testified before a congressional committee that “we see no present intention that the people across the border [North Koreans] have any intention of fighting a major war for that purpose [to seize the South].” Two days earlier Kim Il Sung had given the order to invade the South, and five days later the attack began.

Even then, the United States responded slowly, partly due to the glowing and false reports regularly coming out of South Korea as to the status of its army and training—information readily believed by the cost cutters in Washington as proving the case for their policies. The reality was enormously different. Although ROK units performed well at times, in general the North Korean army, the Inmin Gun, went through the South Koreans like a hot knife through butter.

When news of the attack reached Washington on the evening of June 24, Truman directed Acheson to request that the United Nations Security Council convene an emergency session the following day, condemning North Korea and demanding a cease-fire. However, the Army chief of staff was not notified of Truman’s actions until nine hours later. Acheson, under public fire for defending Soviet spy Alger Hiss, became the most hawkish of Truman’s advisers, possibly to divert attention from his earlier blunder. The UN Security Council passed a resolution (with the Soviets absent, having boycotted meetings since January) calling upon all members to assist South Korea in repelling the attack. After a late dinner with the Joint Chiefs of Staff and Acheson, Truman assessed the situation. The Air Force and Navy argued that air and sea action alone might suffice as a response, but the Army’s General J. Lawton Collins thought otherwise.50 Later that evening, Truman and the Joint Chiefs held a teleconference with General MacArthur, during which he was ordered to send a survey party to Korea and report back on the military situation. Almost a full three days after the invasion started, MacArthur finally was placed in command of military operations in Korea. But with the only even remotely combat-ready division unavailable for Korea, it would be a photo-finish race to keep from losing all of South Korea before significant U.S. forces (or anyone else’s, for that matter) could intervene.

The initial relief force, Task Force Smith with 440 men, of whom only about 75 had combat experience, was easily overrun by the North Koreans. Poorly trained and out-of-shape occupation troops from Japan were thrown into battle; U.S. bazookas were too light to penetrate the armor of the T-34s; and no one had prepared for the Korean climate or terrain. Americans encountered one rugged hill after another with scant cover, and the physical demands rendered some hapless units ineffective before they ever fired a shot. Two American understrength regiments crumbled in the first week of fighting as the North Korean troops advanced fifty miles.

A full 10 percent of active U.S. Army units were not in Korea but holding bases in Europe, and the army that did arrive in Korea met defeat in every engagement. Troops pushing papers in a plush billet in Japan two weeks earlier suddenly found themselves defending a rocky road junction with an M1 Garand, eighty rounds of ammunition, and an understrength company against T-34 tanks and merciless battalions of hardened North Koreans. In scenes reminiscent of the early stages of World War II, American troops surrendered or died in shocking numbers as whole companies were annihilated. Not only was the North Korean army effective, it was also barbaric. Many Americans thought that, having defeated the Japanese, they would never encounter such cruelty in an enemy again. They were wrong. Captured and wounded Americans were tied with barbed wire and murdered in foxholes they had dug themselves. Some of the troops who died had received only eight weeks of basic training before facing the North Koreans. It is perhaps needless to look for someone to blame for the massacres the troops were sent into: such military unpreparedness had always been a feature of an American democracy steeped in a suspicion of standing armies and wedded to the Western Way of War principle that peace was the norm and conflict the exception. But certainly the situation in Korea was worsened by Progressive politicians, who had deliberately and uniformly viewed the Soviets as partners or allies rather than opponents or even enemies.

On July 18, a third understrength division began landing in Korea (the combat strength of the first three divisions was slightly less than 50 percent), but by then it was clear the best the U.S. forces could do would be to retain a small toehold on the southeast corner of the Korean Peninsula. Savage and costly fighting in August and September maintained that precarious position, known as the Pusan Perimeter because it controlled the approaches to the port of Pusan. Then came MacArthur’s legendary strategy of using his reinforcements, not to fight northward from Pusan up the entire peninsula, but to land behind the North Korean army and cut it off using his superior sea power. The plan involved an amphibious landing at Seoul’s port, Inchon, and was fraught with extreme risks. Tidal mud flats and high tides at Inchon severely limited the windows in which an invasion could occur, and narrowed the approaches in which it had to land, making any assault highly vulnerable. Most generals of lesser status, not to mention audacity, would not have considered the scheme. Compared with Inchon, Normandy in 1944 was, in betting parlance, a “lock.”

Success at Inchon came in no small degree because of its boldness and surprise. American intelligence also played one of the most decisive roles yet in the war. The North Koreans, assuming the United States was too sclerotic to respond quickly, did not even bother to encode their military transmissions of battle plans and troop movements. American analysts, working in Japan, quickly discovered this and worked backward with that information to break all of the North Korean codes within thirty days after the war began—“one of the most important code-breaking accomplishments of the twentieth century.”51 Intercepting and translating one third of all of the North Koreans’ enciphered messages, by August 1950 the Armed Forces Security Agency (AFSA) provided “near complete . . . real-time access to the plans and intentions of the enemy forces” faced by Lieutenant General Walton Walker.52

General Douglas MacArthur’s daring deserves most of the credit for the Inchon landing in September 1950, but it would not have been possible without the signal intelligence coming from AFSA, which tracked the location of virtually every North Korean unit. One young field commander, James Woolnough, later recalled that “they knew exactly where each platoon of North Koreans were going, and they’d move to meet it. . . .”53 Indeed, the U.S. intercepts revealed that the North Koreans anticipated such a landing, but south of Inchon, much as Hitler had bet his chips on the D-Day landings coming at Calais. MacArthur’s forces came ashore behind the North Korean positions, sending the North Koreans reeling in retreat hundreds of miles up the peninsula. MacArthur drove hard for the Yalu River, the boundary separating Korea from China. As in 1944, Americans expected to push back the enemy and have their troops home by Christmas. But the enemy found new resources, this time in the form of more than a million Communist Chinese soldiers.

Late in October 1950, Chinese troops began making their presence known. An ROK division was attacked by two Chinese divisions, then two others ran headlong into ambushes by separate Chinese armies. Chinese prisoners captured by Americans on October 29 could have yielded valuable intelligence, but division and higher headquarters refused to believe the men were Chinese. First and foremost among the doubters was Major General Charles Willoughby (again), MacArthur’s G-2 (Intelligence Section). Since the opening days of World War II, his presence on MacArthur’s staff had nearly always been a negative factor: he expected the enemy to do what he personally would do, rather than considering the full range of potential actions. Willoughby argued that if the Chinese had planned to intervene, they would have done so earlier, and certainly before most of the North Korean forces were destroyed. (In fact, it did not bother Mao at all that North Korea lost much of its fighting force—one less enemy on the border.) But Willoughby went far beyond offering his opinion, actually falsifying intelligence reports to feed MacArthur what he thought MacArthur would want to hear.54

By November, Chinese troops were assaulting forward units of the U.S. Eighth Army, only to suddenly disappear. Lieutenant General Edward “Ned” Almond pushed his X Corps to the Yalu River, unwisely separating his units in his zeal to reach the river first. Now, finally, MacArthur accepted the likelihood that he was in a war with the Chinese, and feared he might soon see large numbers of the enemy crossing the Yalu. He ordered the Air Force to attack the Yalu bridges—an order immediately countermanded by the Joint Chiefs, who directed him not to bomb within five miles of the river for fear of provoking the Chinese. For the first time, American wartime restraint gave the enemy a substantial military advantage: the Chinese could now cross the river in absolute safety. Although the order was rescinded three days later, the attacks failed to destroy the bridges, and once the river froze, the bridges became unimportant.

On the evening of November 25, the Chinese army, which lay in wait, struck with hurricane force, overwhelming the American troops. Instantly, Eighth Army on the western side of the peninsula raced south to escape. They would not stop until Seoul had fallen again. The X Corps on the east, separated from the Eighth Army by rugged mountains, was isolated at the Chosin Reservoir, where Task Force MacLean-Faith, a temporary unit made up of 2,300 men, lost more than 1,000 in killed and missing on its eastern shore. The weather went from cold to colder, and to stop moving often meant frostbite or death. The 1st Marine Division produced an epic tale of hardship, heroism, and success under the most adverse conditions. They did not retreat, but attacked from a different direction. Staying together in units large enough to produce sufficient firepower to turn back battalion-sized assaults, they showed the world how to defeat massed bodies of infantry. Finally, carrying their wounded and many of their dead, they reached the port of Hungnam in safety. Notable in the fighting was a lieutenant, Chew-Een Lee, leader of a weapons platoon and a Chinese-American (he would simply say “an American”). Wounded, his arm useless, Lee led a nighttime attack on Chinese who were blocking the withdrawal of two Marine regiments from Yudam-ni. Described as “hard as steel, tough as nails, cold as ice—all the clichés apply. . . . [Lee] was a natural-born battle leader.”55 With such men, the vastly outnumbered Marines fought the Chinese to a standstill. The Chinese went into battle against the Marines with twelve divisions and 120,000 men, lost 72,500, and did not seriously contest the embarkation of the 1st Marines from Hungnam. Three Chinese armies disappeared from the Chinese order of battle until reconstituted in the spring of 1951. 
After lines were stabilized just above the thirty-eighth parallel, the war dragged on for more than two years, until a cease-fire took effect in July 1953 under the newly elected president, Dwight Eisenhower.

America’s conduct of the Korean War, deliberately refraining from using atomic weapons, constituted the first instance in modern history in which a nation with superior military technology voluntarily declined to use it out of moral concerns. Of course, MacArthur had appealed for the use of such weapons. He also wanted to broaden the war by attacking China from the outset. Truman skillfully outmaneuvered him among the officer corps, all of whom were outranked by MacArthur, and many of whom resented his imperial demeanor. The resentment ran both ways—the American Caesar had only disdain for Truman as a wartime leader, routinely slighting him in almost every conceivable manner. MacArthur lumped the president in with “those who advocate appeasement and defeatism in Asia,” as he called it in his planned speech to the Veterans of Foreign Wars (which was withdrawn upon Truman’s review and order).56

For his part, Truman harbored similar disdain for MacArthur, calling the general “Mr. Prima Donna, Brass Hat, and Five Star MacArthur.” Truman already had predicted that MacArthur would seek to run against him as a Republican in 1952 and would try to go over his head to the voters. Another serious breach of protocol followed when MacArthur sent a letter to the House of Representatives that contained the phrase “There is no substitute for victory.” Truman gave the general the boot on April 11, 1951, bringing MacArthur back from the Far East into retirement, and unleashing a firestorm. Polling showed Truman with a 26 percent popularity rating contrasted with MacArthur’s 69 percent, and the general’s televised address to Congress upon his return garnered an audience of 30 million. Walking through the streets of New York City—without an organized parade—MacArthur drew a crowd of 7 million.

But MacArthur began to lose support almost as soon as he specified what his version of victory entailed, not the least of which was his promise to drop fifty atomic bombs on Chinese cities. Senior officers lined up against him: General Omar Bradley, in hearings before the joint Senate Foreign Relations and Armed Services Committees, predicted that a conflict such as MacArthur desired in Asia would denude Europe of troops and would be the “wrong war at the wrong place and with the wrong enemy.”57 Bradley, a press favorite, seemed the antithesis of “fighting generals” such as MacArthur or the now-deceased George S. Patton. Reporters had carefully cultivated an image for Bradley as “the GI’s General” (largely to minimize the aggressive actions of commanders such as Patton). In reality, Bradley was ruthless in sacking other officers and, concealing his intense ambition, had risen faster and higher than anyone other than Eisenhower in the European theater during the Second World War. In Korea, however, Bradley applied a global strategic context to the conflict that MacArthur’s theater approach lacked and recognized the importance of avoiding a war with the Chinese while the USSR stood poised to overrun Europe. As General Matthew Ridgway, MacArthur’s successor, stabilized the situation at the front, MacArthur’s poll numbers dropped by half. Following an uninspired speaking tour in Texas, the general’s political star dimmed, and he declined to run for the Republican nomination. He was replaced in the presidential race by another general-turned-politician, Dwight D. Eisenhower, who, as supreme commander of Allied forces in Europe, resigned his position in May 1952 to run as a Republican. Ike had much of what MacArthur lacked—the reputation as a conciliator, deal maker, and team player. He certainly had not ruffled feathers in the Truman administration as MacArthur had. Yet by aggressively countering the isolationist positions of Ohio senator Robert A. Taft, and by promising to fight “communism, Korea, and corruption,” he deftly made the transition from soldier to politician.

MacArthur’s arguments lived on, however, so much so that by January 1952 the Joint Chiefs and Truman began to seriously contemplate the use of atomic bombs and pondered the necessity of more radical actions against China. General Mark Clark, whose slow plod up the Italian boot in World War II had frustrated the Allied command, succeeded Ridgway and sent an audacious plan to Washington that involved dropping atomic bombs on selected Chinese targets. In 1953, President Dwight Eisenhower also studied the use of atomic bombs “on a sufficiently large scale” to end the war.58 Public opinion likewise favored employing atomic artillery shells if “truce talks break down.”59

Standard misinterpretations of the Korean War as a “draw” have become so common among liberal American historians, the media, and the public that all perspective has been lost. Even though the USSR itself had not provoked the war, or conducted it, the conflict was every bit as much a test of American willpower as Berlin had been in 1948. Despite Mao’s independence from Moscow, the North Koreans were proxies once removed and would not have lasted without help from both major Communist powers. Not only did the Chinese contribute substantial land forces and provide the bulk of the manpower at war’s end, but Soviet MiG-15 pilots fought in the skies against the U.S. Air Force from March 1951 until the end of the war. The Soviets furnished two Guards Fighter Aviation Divisions and thirteen squadrons, decorated sixteen aces, and claimed to have outscored American F-86 Sabre pilots by two to one. On the other hand, Americans may have knocked down Chinese and North Korean MiGs at a rate of thirteen to one.60 The Soviets also regularly attacked American B-29 bombers, and on Black Tuesday, October 23, 1951, sent up eighty-four MiG-15s against nine B-29s, and shot down five, for an American loss of 56 percent in the greatest jet engagement in the history of aerial warfare. America accepted such losses in the air, and often heavy casualties during bloody engagements on the ground, yet persevered to the end. If anything, the war constituted an Antietam on a larger scale—a tactical stalemate and a strategic gain. Communist forces had been repelled, kept again from expanding their empire. North Korea did not give up, however, and went back to its prewar strategy of subverting South Korea, and from the cease-fire on July 27, 1953, to 2012, has breached the armistice agreement more than four thousand times. More than 1,200 American personnel have died during that time, with hundreds wounded, and 87 captured, along with more than 3,000 ROK casualties.61

Korea was a microcosm of two monumental realities of the modern world. First, it reflected a much deeper philosophy about the nature of communism than even Kennan or Truman vocalized (if they understood it at all), and one that stood Lenin on his head. When Lenin published Imperialism: The Highest Stage of Capitalism in 1917, he had argued that to survive, capitalism had to conquer and exploit colonies. Confined to existing borders, capitalist nations would fail due to the inherent contradictions stemming from the business cycle. In fact, the postwar world was proving just the opposite. Lacking a war to justify totalitarian control and maintain production, Communist states experienced an economic slowdown eventually leading to their collapse, but not those in the West. As long as “colonies,” in the form of conquered territories acquired during war, could be pillaged, the inevitable could be delayed. Korea accelerated that collapse by consuming resources to no good effect, as North Korea was unable to attain a military victory. The capitalist United States could absorb unprofitable wars, but Communist powers could not.

Contrary to notions that Korea represented a demonstration of the “remarkable self-limiting character of the American Republic,” the entanglement reflected something more significant: a deep understanding of the nature of the long-term struggle in which Americans were engaged. A military victory in a peninsula in which American territory was not directly threatened, and in which our ally saw only the restoration of his original borders, was neither necessary nor worth the cost of a full-scale war with China. Certainly the Chinese gained nothing from the Korean War. On their doorstep, and even when vastly outnumbering their American foes, the best they could achieve was a stalemate. And the South Koreans bulked up their military, reducing the necessary permanent troop levels from the United States and committing Seoul to acquiring and maintaining state-of-the-art equipment. The Soviets were unimpressed with the Chinese effort, while China’s southern neighbor, India, which had begun to look warily across her border, decided the Chinese threat was not as formidable as formerly supposed.

The second significant development of the war involved the global role of the United States as the world’s policeman. This proved critical because on one level, the United States had abrogated this role, or, at the very least, compromised it by bringing in the feeble United Nations. Allowing the war to unfold as a UN action, as opposed to strictly a defense of American interests, planted a weed in the garden that soon threatened to choke off all legitimate U.S. overseas actions. Nevertheless, the conflict showed that on another level Americans had the stomach for the rugged, long-term struggle against militaristic communism that was demanded.

Such a commitment involved remarkable staying power, for while American politics were (and are) short-sighted and generally confined to time frames going no further than the next election, the worldwide Communist/Marxist movement was multigenerational in its planning. Communism placed a dictatorship at the helm of every Communist state—an imperial monarch unrestricted by any laws or rules, and one depending on terror and violence to maintain his position. There have been no exceptions to this mode of government. As demonstrated in the introduction, mass murder has always been the order of the day, whether the country is North Korea, Spain, China, the USSR, Venezuela, Cuba, Poland, Cambodia, Vietnam, or any other country where Communists seized power and fundamentally transformed the political form of government and the country’s society. The butcher’s bill by 2010 would reach well over a hundred million deaths, not even including those in war.

Penetration at the Highest Levels

Modern histories of the cold war era, particularly of the first decade after the war’s end, have been written in the most remarkable of vacuums. While many—though not all—acknowledge the presence of Soviet spies in American government and the diplomatic corps, the impact of these agents is treated as little more than an inconvenience. One gets the sense that these works toss in “Oh, then there were a few spies . . .” for balance, as if a doctor examining a patient in critical condition were told, “Oh, then there is the cancer. . . .” In reality, Soviet espionage worked beyond the Kremlin’s wildest dreams, and the USSR’s stunning success made America’s eventual victory in the cold war seem miraculous in hindsight.

Among all the nations in existence in 1945, the United States was probably the slowest in public acceptance of an intelligence apparatus. Russia under the czars had the Okhrana, transformed and eventually enhanced under the Communists as the KGB. European states, including France and England, had well-established secret services; during World War II, the British had more than thirty thousand people working on cryptanalysis alone. But after World War I, peacetime America possessed a few (although highly efficient) code breakers housed in the Army Signal Corps, with both the Army and Navy having a smattering of officers and analysts assigned to intelligence slots within the service branches. At the time of the Japanese attack on Pearl Harbor, only about three hundred analysts labored in the American cryptologic effort. The Office of Strategic Services, formed in 1942, served the country admirably until disbanded by Truman in 1945. Truman harbored suspicions about its Republican, New York socialite head, “Wild Bill” Donovan (who appeared to operate outside the chain of command), and even feared the agency might work against American interests. He therefore broke up the thirteen-thousand-strong OSS within a month after the Japanese surrender and with only ten days’ notice, transferring its functions and research to the State Department. The War Department retained only a small secret intelligence group named the Strategic Services Unit.

Besides the demise of the OSS, 1945 saw the nearly complete collapse of American communications intelligence efforts. During the war, there had been two separate and independent intelligence organizations: the Army’s Signal Security Agency and the Navy’s Naval Communications Intelligence Organization. In September 1945, the former was redesignated as the Army Security Agency (ASA), and the Navy’s group was deactivated entirely. By the end of 1945, the organizations had lost 80 percent of their personnel, the military’s listening posts were occupied by caretakers, and the ability to intercept radio traffic had all but disappeared. Code-breaking work nearly ceased, and going into 1946, there was essentially no intelligence on foreign countries and their military establishments other than that collected by State Department personnel operating out of embassies.

Through the March 1946 UKUSA Agreement, the British and Americans shared their intelligence efforts. The Anglo-American alliance had been strained during the war by the constant sniping and insubordination of British top-level officers toward General Dwight D. Eisenhower’s command. What neither Ike, nor Roosevelt, nor Truman knew was that the British had spied on the United States extensively prior to the war, and had penetrated the State Department with agents tasked with supporting the preservation of Britain’s imperial possessions. (We discussed in the previous volume how British agents also penetrated the major polling organizations, trying to swing public opinion toward a war with Germany prior to 1941.) On the other hand, when times got desperate, the British shipped all their top atomic scientists and major military secrets, including their radar designs, to the United States.

UKUSA’s code-breaking operations scored a success almost immediately. By 1947, three Soviet cipher systems had been broken, but Soviet penetration of the Arlington Hall group (the Army’s equivalent of England’s Bletchley Park) soon brought all that to a halt. After receiving information on Arlington Hall’s successes from Soviet agent William Weisband, who also informed them of the Venona project, the Soviets changed essentially all their cipher systems, machines, call signs, and frequencies on October 29, 1948 (Black Friday), the single worst day in American intelligence history up to that point. Soviet radio communications went utterly dark, and would remain so for the next thirty years. With signal intelligence gone, it would be up to human intelligence to close the gap.

Truman belatedly formed the Central Intelligence Group in 1946 to fill the vacuum left by the dissolution of the OSS and provide early warning of hostile intentions from whatever origin. This organization was replaced by the Central Intelligence Agency (CIA) under the National Security Act of 1947, but it hardly got a running start. Its powers and budget were limited, and although increased in 1949, it was not until 1953 that the Agency took on substantial importance, even then still severely lacking in effectiveness.

In the meantime, an organization almost unknown today picked up the slack in American intelligence work. The Army’s Counterintelligence Corps (CIC) assumed extensive intelligence-gathering duties in Europe and Japan as a natural outgrowth of its occupation operations. In Europe, the CIC handled the interrogation centers for refugees coming from the East, as well as all military personnel processing and de-Nazification. Here again, an unthinkable blunder severely damaged American intelligence capabilities. Truman agreed to return all Soviet citizens to Soviet control, and this proceeded at an alarming rate, often before the CIC could even think about using some of these individuals as agents. Displaced people from other Eastern European countries offered some potential, but there was little money and almost no time to make assessments. Black market operations were widespread, and chaos seemed to reign supreme. The only bright spot was that the CIC did not suffer a severe reduction in force like the remainder of the American military establishment. By 1947, the Army had shrunk to 550,000 men, but the CIC’s personnel dropped only 24 percent, from 5,000 to 3,800.62 The CIC was merged with the 285 military attaché and consulate personnel around the world, plus the Signal Corps group of SIGINT personnel called the Army Security Agency, to make up the Army’s Intelligence Division. With such a tiny intelligence establishment, the United States ventured forth to fight the Soviet Union’s vast intelligence services.

In Europe, anti-Communist and anti-Soviet activities rapidly collapsed in Ukraine and the satellite countries. Any such movements were immediately branded by the Soviets as fascist or reactionary and brutally suppressed. Even prewar defectors and refugees had been liquidated in the West, which seemed unable to protect them. First, Stalin’s old rival Leon Trotsky had an ice pick plunged into his skull while living in exile in Mexico in 1940. The following year, Walter Krivitsky, a Soviet general in the Red Army intelligence service, died in Washington of a gunshot to the head, almost certainly assassinated by a Soviet agent. Krivitsky, who had been repeatedly and savagely attacked by American Progressives for exposing the true nature of the Soviet regime, was nevertheless at the top of the NKVD (the predecessor of the KGB) assassination list. Others followed: Ukrainian leader Stepan Bandera was killed in 1959 with a cyanide spray that mimicked a heart attack, while the Bulgarians Georgi Markov and Vladimir Kostov were jabbed in public with poison-tipped umbrellas by unknown passersby, an attack Markov did not survive.

Nonetheless, the 1,400-man CIC in West Germany staffed the refugee reception centers and sorted out those Soviet agents, Nazi war criminals, and individuals who might prove useful in performing covert missions in Soviet-occupied territories and the Soviet Union itself. Mostly staffed by reserve officers, the CIC units provided a wealth of human intelligence, and their widespread activities gave hope to Western Europeans, perhaps even more so than the Marshall Plan, that the United States would not abandon them. The CIC’s most pressing problem was determining Soviet intentions, and with radio communications impenetrable, telephone operators, truckers, railroad personnel, and anyone who could report on military communications and movements were prime sources of information. By the late 1950s, nothing could move in East Germany by road, rail, or canal without being reported through Army intelligence channels. With technical and signal intelligence unable to penetrate Soviet organizations, a small army of low-level informants who hated the Communists spread out across Eastern Europe and told the Army what it needed to know.

One key breakthrough occurred in 1946, when General Edwin Sibert, assistant chief of staff for the European theater, brought General Reinhard Gehlen, former head of Wehrmacht intelligence on the Eastern Front, back from captivity in the United States to an American intelligence center located in Oberursel, Germany. Recovering Gehlen’s files, the United States gained a list of agents exceeding 3,500 names. The subsequent Gehlen Organization not only provided the United States with cold war intelligence, but eventually formed the nucleus of Germany’s CIA, the Bundesnachrichtendienst (BND) or Federal Intelligence Service.63

Nothing, however, compared to the intelligence breakthrough that came from the so-called Venona project. Initiated in 1943 by Brigadier General Carter Clarke of the Army’s Special Branch, Venona was the decryption operation targeting Soviet cable traffic to and from the United States by the Army’s Signal Intelligence Service at Arlington Hall. According to Venona researchers Herbert Romerstein and Eric Breindel, Eleanor Roosevelt got wind of Clarke’s initiative through her government contacts, and ordered him to cease and desist. Nonetheless, he ignored her interference and persisted.

With great effort, and after receiving critical information from intercepted and decoded Japanese messages, message traffic from Soviet “one-time pads” was deciphered. Finnish cryptographic personnel had developed some understanding of Soviet codes and been able to identify certain features, and their sharing of this information with the Japanese gave American analysts a head start. As the name implied, one-time pads were cipher pads intended for use only once. Amazingly, some of them were reused as a Soviet cost-cutting measure, permitting code breakers to track patterns.

Arlington Hall discovered that the Soviets had reused pages over a four-year period, 1942–45, and code breakers were able to solve cipher keys and read significant strings of NKVD code by December 1945. Using the Venona intercepts, cryptanalysts were able to determine the code names and real identities of Soviet agents in the United States, and to some degree, Great Britain.

But Army intelligence and the CIC focused narrowly on military subjects, and what neither Britain’s MI5 (counterintelligence) nor the FBI could do before the late 1940s was catch Communists within the British government or the Roosevelt administration. Their ineptitude in rounding up spies was all the more remarkable because it was, as the saying goes, a target-rich environment. One agent in England, the harmless-looking Harold “Kim” Philby, was a Soviet operative for years (1934–51), by 1944 actually heading the MI6 (foreign intelligence) unit targeted at the Soviet Union. He became the British liaison officer to the CIA, and after betraying nearly every important British and American secret to the Soviet Union, was made an Officer of the Order of the British Empire (Britain was spared the embarrassment of having made him one of the higher ranks that required him to be called “sir”). He worked throughout the British possessions, and in Turkey and Washington, where he housed another spy, Guy Burgess. Described as “unstable, dangerously alcoholic, and flamboyantly homosexual,” Burgess lived in Philby’s home and proved a source of perpetual embarrassment.64 Apparently even a spy had “family values,” and Philby disliked Burgess—his wife despised him even more so—but he could not boot him into the street, for Philby needed to supervise him. Burgess’s actions brought constant surveillance for espionage by J. Edgar Hoover’s FBI. Cited with three speeding tickets in a single day, Burgess pleaded diplomatic immunity, and Hoover fumed that Burgess routinely used British diplomatic vehicles while cruising for homosexual encounters.65 Finally, Burgess, attempting to place another spy on a ship to France, ended up boarding the vessel with him and eventually arrived in London.
The single serious British investigation into Philby during this time cleared him, and Foreign Secretary (later prime minister) Harold Macmillan told the House of Commons, “I have no reason to conclude that Mr. Philby has at any time betrayed the interests of this country. . . .”66 Philby survived the investigation, but was briefly out at MI6 before being re-recruited to conduct espionage in Lebanon for the British. In his role as double agent, Philby seemed once again secure, until Anatoliy Golitsyn, a major in the KGB, defected in 1961 and confirmed Philby’s spy activities.67

Philby demonstrated how easily spies who penetrated one Western government could expose others. Even had the United States never been penetrated by a single agent from another country, however, it still would have housed a virtual rats’ nest of spies, most of them within the Roosevelt administration and most of them ultimately exposed by Venona. Among the most poisonous of the domestic agents was Harry Dexter White (code name JURIST), the assistant secretary of the treasury in the Roosevelt administration, and a man who had extensive influence over American monetary policy. White, though not a member of the Communist Party of the USA, was nevertheless evaluated by another Soviet agent inside the Treasury, Harold Glasser, who had begun his work at a low level in the Monetary Research Division and saw substantial promotions after that. Glasser informed his Soviet superiors that as of 1937 White was providing all the information of importance that they needed.68 Working for the NKVD, White facilitated the hiring at Treasury of eleven Soviet sources, and shielded some when they came under scrutiny. White also successfully labored to get Ji Chaoding, a member of the American Communist Party, a key spot in the Chinese Nationalist Treasury Department, in which capacity he advised the Nationalist government in its financial policy toward Mao Zedong. When the Communist Chinese took over in 1949, Ji, having openly announced his Communist affiliation, remained as a senior official of the new Communist government.

Treasury’s influence was significant when it came to policy debates about the reconstruction of Europe, and Venona decrypts specifically referenced White advising the Soviets on how far they could push on certain European issues, such as the annexation of Estonia, Latvia, and Lithuania. He met covertly with the Soviet delegation at the UN founding conference in 1945 and supplied them with information on American negotiating strategies.69 Presented with a questionnaire by an NKVD agent outlining numerous issues about which the Soviets needed information, White wrote down all the answers. When White considered leaving the diplomatic corps to make more money in private business, the NKVD offered to pay for his daughter’s education. When it came to direct involvement, White’s hands were bloody there as well: the Soviets had asked for a $6 billion loan in 1945, but White persuaded his boss, Henry Morgenthau, to increase it to $10 billion, with better terms and a lower interest rate. White earlier had successfully stalled a loan of gold to Chiang’s Nationalist government, badly damaging Nationalist attempts to battle inflation.70 Working in tandem with another Soviet agent, Frank Coe (code name PEAK), at the State Department, White delayed the gold loan.

White was abetted in his work by Alger Hiss, a State Department official, and Lauchlin Currie, a senior administrative assistant to the president. Hiss had been a member of the Yalta delegation, where it was argued that FDR “gave away the farm,” was the secretary general of the conference establishing the United Nations, and had already come up on the government’s radar during the House Un-American Activities Committee (HUAC) hearings in 1948. Lying his way through the hearings, Hiss was eventually outed as a Soviet spy by Whittaker Chambers, a fellow Communist agent before the war whom Hiss claimed not to know. Once again, Venona produced solid evidence that Hiss was the agent code named ALES, and that he had been taking orders from the Kremlin while at Yalta.71 The evidence was there; as John Earl Haynes and Harvey Klehr, two of the foremost scholars on Venona, concluded (and as they said in the title of their 2007 article): “Alger Hiss Was Guilty.”72

Nonetheless, Hiss, a Harvard Law School graduate and member of the eastern American liberal elite, was readmitted to the Massachusetts bar in 1975. Tall, spare, and looking every bit the patrician he was, Alger Hiss managed to bifurcate his two personas so successfully that one might think he possessed multiple personalities. When exposed by Whittaker Chambers, he so outclassed his accuser—an overweight, rumpled character with bad teeth—that Hiss’s attorneys successfully used a reputational defense, contrasting the smooth, erudite, educated Hiss with the disheveled Chambers. From 1932 until 1937, Chambers had actively conducted espionage for the USSR. He came to lose his faith, and in 1938 broke with communism altogether. After storing crucial documents, as well as microfilm of other papers, to serve as a bargaining chip, Chambers hoped to just drop out. But as so many mobsters discovered, leaving the “family” was never a realistic option. Nevertheless, as Stalin allied with Hitler in the 1939 nonaggression pact, Chambers decided to inform on other agents provided he received immunity from prosecution. Among those he named as spies were Alger Hiss and Lauchlin Currie.

In August 1948, Chambers was summoned to testify before HUAC, where he revealed the members of the Ware group, which included Hiss, although he did not name Hiss as a spy, only a Communist. The committee tended to side with Hiss—despite his shifting testimony and inconsistent statements—and only Richard Nixon supported Chambers. Armed with information from the FBI, Nixon pursued Hiss and began to turn the committee. Harry Truman, who had inherited FDR’s viper’s nest of Communist agents, dismissed the allegations as a “red herring.” But privately Truman knew that Hiss and other Soviet agents could easily bring down his administration if fully exposed, and in 1947 he issued Executive Order 9835, establishing a loyalty screening for federal employees. Astonishingly, however, the willingness to support Hiss extended to Dean Acheson, who obtained a post for him at the Carnegie Endowment after Hiss was called upon to resign from the State Department. Acheson even reviewed Hiss’s statement before HUAC and consulted with Hiss’s attorney during the subsequent trials.

Regardless of the evidence and what people thought, the statute of limitations on espionage had run out and Hiss could not be prosecuted. Chambers had stated that Hiss engaged in espionage, and true to his patrician sense of effrontery, Hiss filed a $75,000 libel suit against Chambers in October 1948. It was his greatest mistake: Chambers produced a set of damning documents, called the “Pumpkin Papers” because he had kept them hidden in a hollowed-out pumpkin; they included papers bearing Hiss’s name that had been stolen from the State Department and given to the Soviets. While Chambers was suddenly more believable, Hiss had foolishly exposed himself to new charges of perjury, and he was indicted in December 1948. During the trial, Hiss’s testimony shifted as one statement after another was revealed to be inconsistent with hard evidence and multiple witnesses, while Chambers spoke verifiable truths. Quite possibly, the only reason Hiss was able to swing four jurors to his side in his first trial was that the jury foreman was the brother-in-law of Austin MacCormick, the noted leftist criminologist who opposed the death penalty and helped prepare Hiss to cope with his time in prison. Evidence was introduced by the prosecution at Hiss’s first trial that the foreman favored Hiss, but the judge refused to disqualify him.73 At a second trial in 1950, however, Hiss was convicted largely due to evidence that his wife Priscilla had used a Woodstock typewriter to type the espionage documents—a typewriter Hiss himself went out of his way to produce. Although the charge was perjury, the issue was espionage and all the jurors knew it.74

Perhaps because of his conviction, Hiss remade himself in the ensuing years, successfully convincing a large number of American liberals of his innocence, ceaselessly (and shamelessly) portraying himself as a victim of McCarthy-style persecution. In effect, he set himself up as the American Dreyfus to discredit the U.S. government and anticommunism in general. For this tactic to work, his supporters had to already be committed to destroying most of the pillars of American exceptionalism. When the Russian general Dmitri Volkogonov said in 1992 that he had been unable to find anything on Hiss in the KGB archives, the major television networks in the United States reported it immediately; but when he retracted his statement, all were silent.75 Volkogonov later admitted he had spent only two days in the Foreign Intelligence Archive, had not personally examined any files, and had instead asked the former head of the KGB to provide him with information. Yevgeny Primakov, who by then oversaw the SVR—the replacement for the KGB’s foreign intelligence section—told his researchers to give Volkogonov only “selected files.” After multiple references to Hiss’s service were uncovered in the Soviet archives during glasnost, one would think the Left would have quietly dropped its assertions. But the ideology is never wrong, just slightly in error on a few details. A number of Progressives continued to work tirelessly for Hiss’s exoneration. Their voices were muted only after the 1995 release of the Venona documents, but even then the progressive Nation Institute in 2007 held a conference proclaiming Hiss’s innocence, against all logic and evidence, even from former USSR officials themselves.

Plenty of other Soviet agents busily toiled in the U.S. government. White’s Treasury Department also produced Lauchlin Currie (code name PAGE). Although Currie did not cooperate as often as some other spies did, he handed over critical documents to the NKVD, such as those in 1945 indicating FDR’s bargaining position regarding the postwar Polish government. In July 1941, immediately after the Nazi invasion of Russia, Roosevelt and Treasury Secretary Morgenthau instructed Currie and White to develop measures for dealing with the Japanese. They proposed an economic blockade of Japan, but FDR adopted a much less confrontational oil embargo on July 26.76 Ultimately, the oil issue would accelerate the Japanese attack on the United States, and subsequently America’s war with Germany—all to the benefit of the Soviet Union. It takes little imagination to see Stalin’s hands at work through Currie and White. During the war, the FBI lacked enough evidence to prosecute Currie, given that Venona was classified and could not be brought up in court proceedings. But Currie felt the investigators closing in, and in 1950 he gave up his U.S. citizenship and left for Colombia.77

Less subtle and even more damaging were the efforts of the husband-wife tag team of Julius and Ethel Rosenberg. Liberal historians insist to this day—in spite of Venona identifying them in a number of NKVD messages—that the Rosenbergs were not guilty of espionage.78 When the U.S. and British governments unearthed a Communist spy ring in 1950 involving Klaus Fuchs, a German physicist who had worked on atomic bomb research, investigators were soon led to Harry Gold (his courier), then to David Greenglass (Julius’s brother-in-law, who worked on the Manhattan Project), then to the Rosenbergs themselves. At the Rosenbergs’ trial, Greenglass testified that Julius Rosenberg had passed to the Soviets schematics of the “lens” device that activated the atomic bomb. It was one of the most heavily guarded secrets of the war, and after Rosenberg’s action it was in the hands of the Soviet Union.

Julius Rosenberg, a disheveled, bespectacled son of Jewish immigrants, had worked in the U.S. Army Signal Corps, then as an engineer-inspector through the war, where he had access to high-level electronics, communications, and radar information. He was recruited by the Soviets, appropriately, on Labor Day 1942 and became a part of an “engineers’ cell,” which he soon headed (under code name LIBERAL). In 2001, Alexander Feklisov, his former handler at the NKVD, wrote that Rosenberg provided thousands of classified documents to the USSR, including the design for a proximity fuse that in 1960 was used to shoot down Francis Gary Powers’s U-2 spy plane.79

Like Hiss and Currie, Julius Rosenberg recruited many others to his cause, including Joel Barr, Alfred Sarant, Morton Sobell, and William Perl. Through Perl, Rosenberg passed along design and production documents for the P-80 Shooting Star jet fighter. Greenglass also implicated both Rosenbergs deeply in the reproduction and transmission of a variety of documents relating to the “Fat Man” atomic bomb. Their actions advanced Soviet development of the bomb by an estimated five years. At their 1951 trial under Judge Irving Kaufman, the Rosenbergs were convicted and sentenced to death under the Espionage Act of 1917. Kaufman was devastating in his comments to the couple during sentencing on April 5, 1951:

I consider your crime worse than murder . . . I believe your conduct in putting into the hands of the Russians the A-Bomb years before our best scientists predicted Russia would perfect the bomb has already caused, in my opinion, the Communist aggression in Korea, with the resultant casualties exceeding 50,000 and who knows but that millions more of innocent people may pay the price of your treason. Indeed, by your betrayal you undoubtedly have altered the course of history to the disadvantage of our country. No one can say that we do not live in a constant state of tension. We have evidence of your treachery all around us every day for the civilian defense activities throughout the nation are aimed at preparing us for an atom bomb attack.80

Despite the efforts of countless writers, and reams of paper dedicated to arguing the couple’s innocence, history has proven the verdict correct and vindicated Kaufman’s rhetoric. Not only did Venona identify Julius as a spy, but subsequent evidence has shown Ethel’s complicity as well. The coup de grâce came in 2008 when their coconspirator, Morton Sobell, admitted that both he and Julius were Soviet agents. Confronted with that admission, even one of the Rosenbergs’ sons, Michael Meeropol, glumly concluded, “I don’t have any reason to doubt Morty.”81 And, as if higher confirmation were needed, Soviet premier Nikita Khrushchev said in his memoirs that both Rosenbergs “provided very significant help in accelerating the production of our atomic bomb.”82

Hysteria or Sobriety?

Understanding the so-called Red Scare of the 1950s is difficult in the modern age, where terms such as “McCarthyism” and “hysteria” are commonplace. But hysteria is an ungrounded, irrational reaction—a child shrieking when someone tosses him a pencil, as opposed to a person recoiling when someone throws a cobra on the table. Concerns about communism in the early 1950s were grounded in abundant evidence:

   • The USSR had gained, then still occupied, Eastern Europe and, despite its promises, there was no indication the Soviets would ever permit free elections there
   • The Soviets had blockaded Berlin
   • China had fallen to the Communists
   • Communist North Korea invaded South Korea, with air support from the USSR and ground intervention from the Chinese Communists
   • The USSR obtained the atomic bomb five years before Western intelligence thought it would be able to do so
   • Extensive spy networks were discovered in the U.S. government and defense establishment
   • The Communist Party of the United States of America was public, vocal, and active in its intention to make the United States into a Communist state
   • Labor unions and Hollywood were populated with CPUSA members and/or sympathizers

Thus, when Senator Joseph McCarthy, a Republican from Wisconsin, made his famous Lincoln Day speech at Wheeling, West Virginia, on February 9, 1950, there was ample evidence for even his “wildest” allegations. Debate continues to the present day over what he actually said, because there was no audio recording of the event, although a version was reprinted in the Congressional Record of the Senate on February 20, in which McCarthy said, “I have in my hand 57 cases of individuals who would appear to be either card carrying members or certainly loyal to the Communist Party, but who nevertheless are still helping to shape our foreign policy. . . .”83 But these remarks were edited, and McCarthy may have ad-libbed: witnesses later said he cited two numbers that day, and one witness, who wrote both down, recalled him saying that 205 individuals were under investigation, of whom 57 were already known Communists. The difference between the two numbers became a central point for critics wishing to undermine the broader point McCarthy raised: that despite his having alerted the government to dozens of members of the Communist Party working in the State Department, few had been fired.84

McCarthy instantly became both the icon of the Right, who considered him a watchman sounding a clarion call, and the nemesis of the Left, who saw him as the focal point of unreasonable suspicion and fear. McCarthy was an Irish Catholic who had switched parties, who counted John Kennedy and Richard Nixon as two of his closest friends, and who employed Robert F. Kennedy as one of his investigators. He was a hard drinker and tough, a friend of farmers, and a champion of civil rights. Using his Permanent Subcommittee on Investigations, McCarthy tended to operate on the fringes as often as he could, holding hearings and challenging witnesses. Although it would later be claimed that he tarred innocent people with his accusations, in reality McCarthy refused to name names in public from his list, instead insisting that he would reveal all those on his list in a secret session of the Senate.85 McCarthy insisted that he did not want to wrongfully implicate anyone as a Communist before an investigation, and repeatedly stated in public, “I do not have all the information about [them],” which is why he wanted the FBI to follow up.86 Democrats, seeking to shield Truman and Roosevelt’s legacy, refused to allow a secret session. They knew such a session would reveal the extent of Soviet penetration into the administration. Years later, most of those McCarthy named, including Mary Jane Keeney, Lauchlin Currie, Solomon Adler, Harold Glasser, and Virginius Frank Coe, turned out to be . . . Communists! McCarthy actually had led only with his list of known agents: the list of “suspected” agents that he identified added dozens of additional names, including a roster of State Department employees, some in the Commerce Department, a judge, and the head of Truman’s Council of Economic Advisers.87

McCarthy overstepped in the hearings on the U.S. Army, eventually calling into question the integrity of former Army chief of staff George Marshall, although he may actually have had grounds, particularly over China. But Marshall enjoyed immense popularity, and that was too much. Washington supporters deserted McCarthy, and the press, which had often worked with him to generate headlines, turned against him. “McCarthyism” became a code word for crazed abuse of power—only on the Right, however—and Eisenhower disavowed the senator.88 Yet McCarthy had not only been correct in his claims, but, if anything, they had been understated and late. By the time McCarthy began investigating the State Department, the FBI had already purged several people and was investigating many more. That was, after all, how McCarthy knew that a number were under scrutiny. More important to the modern historical debate is the notion that McCarthy led a “movement” (let alone, as Dean Acheson claimed, “a mob”): one third of Americans didn’t even know who he was, and as of 1953—the height of his power—two thirds had no opinion of him at all.89

One of the more interesting lines of speculation to explain McCarthy’s allegations, advanced after the Venona documents were published, was that someone had leaked information on Venona itself to McCarthy. The Soviets, of course, had been informed of Venona’s existence, and of the penetration of their one-time-pad communications, as early as 1946, from at least three sources. Lauchlin Currie also apparently became aware of Venona in 1944. Using his White House position, Currie sent instructions to the Army’s Special Branch to stop all work on Soviet cipher communications, which would have effectively killed the Venona project. Fortunately, the Army ignored the White House order. Currie also sent a White House order to the OSS, instructing it to stop collecting information on Soviet intelligence operations. When that failed, Currie employed his State Department contacts to muscle the OSS, and that worked. In an act of sheer naïveté, or because it was influenced by Currie’s minions, the State Department ordered the OSS to cease all operations concerned with Soviet influence, and furthermore, in the spirit of “Allied cooperation,” to give all its material on Soviet codes, including the code book the OSS had obtained from Finnish sources that allowed it to read Soviet dispatches, to the USSR. Obediently, the OSS complied and did not even keep a copy of the code book.

A second individual compromised Venona—William Weisband, an NKVD agent (code named LINK) assigned as a Russian linguist to the Arlington Hall group working on the Venona intercepts in 1945. Shortly after Weisband saw code breaker Meredith Gardner extract a number of names of American scientists from a Soviet intercept in 1945—indicating the scientists were “dirty”—all Soviet use of the one-time pads stopped, and America’s ability to read current Soviet mail ceased. Weisband had successfully blinded the United States at a critical time during the cold war.
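The cipher weakness behind Venona, the reuse of one-time pads, can be illustrated with a short sketch. This is an illustrative toy with invented plaintexts: actual Soviet traffic used codebooks with additive numeric pads rather than byte-wise XOR, but the principle is identical. A pad used once is unbreakable; a pad used twice cancels out of the combined ciphertexts.

```python
# Illustrative sketch only: real Soviet messages were encoded with
# codebooks plus additive numeric pads, not byte-wise XOR, but the
# one-time-pad principle is the same.
import secrets

def otp(data: bytes, pad: bytes) -> bytes:
    """XOR each byte of data against the pad (encrypts and decrypts)."""
    return bytes(d ^ p for d, p in zip(data, pad))

# Used once, the pad is information-theoretically unbreakable.
pad = secrets.token_bytes(32)
msg1 = b"AGENT REPORTS FROM ARLINGTON"   # invented plaintexts
msg2 = b"COURIER MEETING AT MIDNIGHT."

c1 = otp(msg1, pad)
c2 = otp(msg2, pad)          # fatal error: the same pad reused

# XORing the two ciphertexts cancels the pad entirely, leaving the
# XOR of the two plaintexts with no key material remaining.
depth = otp(c1, c2)
assert depth == otp(msg1, msg2)
```

That surviving plaintext-against-plaintext residue, a "depth" in cryptanalytic jargon, is what language statistics can pry apart once duplicate pads enter circulation, which is why the duplicated wartime pads opened Soviet traffic to the Arlington Hall code breakers.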

Venona was also disclosed to the Soviets by Kim Philby, but somewhat later. By 1950 at the latest, then, the Soviets were fully informed, although the American public had to wait forty-five years to receive an inkling of the damage done by Soviet spies. It took a major effort by researchers and the influence of Senator Daniel Patrick Moynihan, who thought the records would reveal little that wasn’t already known, to persuade the NSA to publicly release the Venona information. Eventually, these revelations showed that McCarthy’s numbers, even while varying, were low but still not far off the mark set by Venona. According to Venona, 349 individuals provided information to Soviet intelligence operatives as actual vetted Soviet agents, and several hundred more have been identified as occasional but sometimes valuable sources.90 Curiously, many of these agents were Jewish, rather ungenerously referred to by the Soviets as “rats.” Most of these individuals escaped all prosecution, and, firmly supported by the American Left, many lived long and fulfilling lives. Forty-five years later, the picture was crystal clear: of the 218 proven Soviet spies identified to the FBI, 101 (including 61 Soviet officials) left the United States before the statute of limitations ran out, 11 had died, 14 cooperated with the FBI, 77 were not prosecuted for various and sometimes incomprehensible reasons, and 15 were prosecuted.91

Nationally, J. Edgar Hoover, sensing an opportunity to fill McCarthy’s void, “turned anticommunism at the Bureau nearly into a personal vendetta.”92 His investigations uncovered Communists, but more important, had a dampening effect on recruitment efforts into the CPUSA. From its 1944 high of eighty thousand members, the CPUSA’s membership fell below ten thousand in 1956, and by 1971 counted fewer than three thousand.93 Many Americans at all levels feared and loathed communism, to the extent that even Harvard University in 1953 banned known Communists and rightly called the ideology “beyond the scope of academic freedom.”94 A handful of anti-Communist films emerged as well, such as I Married a Communist and The Red Menace, public schools instituted democracy classes, and insider exposé books appeared, relating the writers’ hidden lives as Communists.

While excesses certainly occurred, and occasionally an innocent person was wrongfully persecuted or fired from a job, the overwhelming reality of America during the Red Scare was that no one acted scared enough to take any serious action against the American Communist Party. It continued to function legally throughout the McCarthy era, its spokesmen publicly advocated its doctrines, recruiters openly solicited new members, and its press daily published a variety of tracts, newspapers, pamphlets, and books totaling millions of copies.95 College purges rarely occurred: out of 400,000 college faculty in the 1950s, only 126 cases of dismissal or threatened dismissal due to Communist views were recorded. Yet without any official sanctions or repression, the movement shriveled, becoming as irrelevant after a decade as the Ku Klux Klan in the twenty-first century.

There were several reasons the FBI chose not to publicize the hunt for Communists or to prosecute Soviet spies at this time: embarrassment, fear of Soviet retaliation, and the desire to keep Venona a secret. Nonetheless, the opening of the Soviet archive (the Russian Center for the Preservation and Study of Documents of Recent History) since 1991 has provided much verification of known spies and also added new leads on Soviet sources. With regard to the atomic spies, the Rosenbergs were the tip of the iceberg; Klaus Fuchs, Theodore Hall, and J. Robert Oppenheimer all performed valuable work for the Soviet Union, and Hall, whose treason was extensive and well documented, was one of those never prosecuted. The highest-ranking official in the FDR administration working for Soviet intelligence was Roosevelt’s close friend and special adviser Harry Hopkins.96 What is so shocking about this disclosure is that Hopkins on several occasions negotiated on FDR’s behalf alone with Stalin.

Even journalists were recruited to help spread Soviet propaganda, one of the most loyal being I. F. Stone, writer and reporter for the left-leaning Nation, who became the guiding light and mentor to many well-known media figures still active as of this writing. The Progressive commentator and journalist Walter Lippmann regularly met with Vladimir Pravdin, an assassin and NKVD agent stationed in the United States until 1945, and shared inside information from highly placed sources in the Roosevelt administration.97 The Soviets also planted an agent in Lippmann’s office—his secretary, Mary Price (code named ARENA), who spied for the USSR and performed courier duties from 1941 to 1945.98 An example of a Soviet spy who lived large with impunity in the public eye after supposedly terminating her Soviet connections, Price moved to North Carolina, where she organized a Progressive group, the Southern Conference for Human Welfare; ran for governor on the Progressive Party’s ticket in 1947; moved back to Washington to work at the Czechoslovakian embassy when that country was a Soviet satellite; and finally worked for the National Council of Churches. Her career is also worth noting because of the affinity outright Communists had with subsequent so-called human rights groups, some of which proved mere fronts for leftover, out-of-work Communist agents.

What makes this period so particularly troubling is that the American Left backed these spies and traitors to the hilt, acclaiming them as heroes of American liberalism. And this has continued to the present day, as even Hiss’s case is still considered “controversial” in many modern textbooks.99 Stone, Hopkins, the Rosenbergs, Hall, Oppenheimer, and others are still defended by the Left, and the blacklisting of Communists in Hollywood is still portrayed as having been unfair, unwarranted, un-American, and seemingly in all cases, visited on innocents. When blacklisted writers returned to work, presenting them at Hollywood award ceremonies drew enormous applause, and many “Red Diaper Babies” are carrying on the tradition in Hollywood. More disconcerting still, Venona revealed only a small fraction of the penetration of America’s government. Even in terms of Soviet communications from the United States, Venona accessed only a tiny share: 1.8 percent of the NKVD traffic in 1942, 15 percent in 1943, 49 percent in 1944, and 1.5 percent in 1945. Of the Soviet military traffic, 50 percent was read in 1943, but nothing from any other year.100 Of the three thousand messages that were decoded, most were only partially decoded, and most of the agents identified in the traffic were never linked to a particular individual. There are probably hundreds or even thousands of Americans, some of whom may still be alive, who could be exposed from Russian archives.

On the American side, few, if any, attempts were made by military intelligence before 1946 to recruit agents in the NKVD or its successors. The Soviets had laid their groundwork in the 1920s and ’30s, whereas the United States first recognized the need to penetrate high levels in the Soviet Union only following World War II. Forced to play catch-up, the United States recruited few Soviet spies—almost all were “walk-ins” who contacted American intelligence voluntarily before being approached. One of the few KGB officers to provide intelligence to the CIA while remaining in Soviet service was Colonel Vladimir Vetrov, code named FAREWELL, who would do much to enable Ronald Reagan to win the cold war. Colonel Oleg Penkovsky, code named HERO, was a Red Army officer in the Chief Intelligence Directorate of the General Staff rather than a KGB officer, but both volunteers (Penkovsky for the CIA in 1961 and 1962, Vetrov for the French from 1980 to 1982) made enormous contributions. Both were executed by the KGB, Penkovsky after being betrayed by a Soviet agent in the CIA—one that has not been found to this day. All in all, the CIA probably earned a grade of C- in human intelligence from 1947 to the fall of the Soviet Union, and an F in recruitment. Most Soviet sources were defectors, who betrayed personnel and operations. One agent in place was Soviet major Petr Popov, who worked for the CIA from 1952 to 1955 in Austria and furnished the United States with the Field Service Regulations of the Red Army. But more typical were walk-ins like KGB major Peter Deriabin, who defected in 1954, and KGB major Yuri Nosenko, who defected in 1964. The United States played catch-up rather poorly until the fall of the Soviet Union.

The United States, however, was not alone in being penetrated to its very core by Soviet agents. It was perhaps to be expected that the West German government would be heavily infiltrated by Soviet agents, and France and Italy possessed large Communist parties that at times were part of their governments and could send secrets wholesale to their masters in the Kremlin. But Great Britain, Canada, and Australia were shot through and through with Soviet agents, so much so that in the late 1940s the American intelligence community, itself riddled with Soviet spies, refused to share information with Canada and Australia.101 When Igor Gouzenko, a Soviet cipher clerk in the Ottawa embassy, attempted to defect, he was turned away from various Canadian agencies, including the Royal Canadian Mounted Police, for two days until it became apparent the Soviets were aggressively attempting to find him. Prime Minister Mackenzie King didn’t want to give Gouzenko asylum because he feared becoming embroiled in “an unpleasant diplomatic incident” with the Soviets.102

But the case with the greatest impact on the United States occurred in Great Britain. The Cambridge Five, a ring of Soviet spies who had become Communists while attending Cambridge University in the 1930s, not only affected the workings of the British government and submitted huge numbers of British secrets to the Soviets, but also revealed American security secrets. Guy Burgess, Kim Philby, Donald Maclean, Anthony Blunt, and John Cairncross—Soviet code names HICKS, STANLEY, HOMER, JOHNSON, and LISZT, respectively—were extremely well placed due to their social standing and their membership in the British old boy network. But the jewel in the Soviet espionage crown was probably Roger Hollis, director-general of Britain’s MI5 (counterintelligence service) from 1956 to 1965. Various officials in the British intelligence services contended that Hollis was on the Soviet payroll, on evidence that convinced some but not others; Robert Lamphere of the FBI likewise thought Hollis was a Soviet spy who had betrayed the Venona project to the Soviets. Lamphere wrote, “To me, there now remains little doubt that it was Hollis who provided the earliest information to the KGB that the FBI was reading their 1944–45 cables.”103 Given these exceptional penetrations of the highest levels of Western governments, and especially the government of the United States, victory in the cold war now seems outside the bounds of probability and an even greater testament to the exceptional core elements of America.

Aware of the stigma (not to mention genuine threat) of an association with communism, many U.S. labor unions began to purge their rolls as the cold war began. Walter Reuther’s United Auto Workers (UAW) and Philip Murray’s Congress of Industrial Organizations (CIO) both booted out Communists before McCarthy’s speeches. Equally important, an emerging Hollywood leading man, Ronald Reagan, waged a war against the “Stalinist machine” in the film industry.104 Reagan, with the help of a tough Hollywood labor leader who mentored and trained him, Roy Brewer, built a bipartisan base to work against avowed Communists. They kept the Screen Actors Guild a group run by the anti-Communist majority, not the Stalinist minority. Reagan had joined HICCASP (the Hollywood Independent Citizens Committee of the Arts, Sciences, and Professions), which had originated as an FDR support group, but found that meetings were dominated by pro-Communist rhetoric. A group of ten met in secret, including Reagan and actress Olivia de Havilland. Reagan whispered, “You know, Olivia, I always thought you were one of ‘them.’” She laughed and said, “That’s funny, I thought you were one of them.”105 When it became obvious to Reagan and eleven others that HICCASP had become little more than a Communist front organization, they all resigned. In 1947, Reagan won the first of seven elections to be president of the Screen Actors Guild, where he fought a two-front battle to purge Communists while at the same time protecting innocent industry people from the House Un-American Activities Committee. He succeeded in doing both, cleansing Hollywood of any overt Stalinist influences (though, over time, the film industry and television would drift steadily to the left) and saving many wrongfully accused of being Communists.

America’s firsthand brush with communism showed that the pillars of American exceptionalism were stronger than the promise of utopia, and that in America’s free marketplace of ideas, radical, imposed egalitarianism did not stand a chance. But the Soviets quickly trained their sights on another target, one possessing few of the American institutions, cultural ideas, or attitudes about work and property—the Third World. That proved a target-rich environment indeed.


Dark Continents


1945: Indonesia declares independence; Ho Chi Minh seizes power in Hanoi; United Nations founded; Nag Hammadi Gnostic gospels discovered in Egypt; Gandhi and Nehru order British forces to leave India

1946: King David Hotel bombing in Israel

1947: Exodus incident off Palestine; India partitioned into India and Pakistan; Dead Sea Scrolls found

1948: Burmese independence; Gandhi assassinated; Organization of American States founded; Arab-Israeli War begins; Republic of Korea established; Truman reelected president

1949: Chinese Communists defeat Nationalists

1950: Chiang Kai-shek moves Nationalist government to Taiwan (Formosa); Korean War begins; French-Indochina war begins

1951: China seizes Tibet; Libyan independence

1952: Mau Mau Uprising in Kenya

1954: Nasser becomes premier of Egypt; Viet Minh defeat French at Dien Bien Phu; Algerian civil war

1955: U.S. advisers sent to South Vietnam

1956: Britain, France, and Israel attack Egypt (Suez Crisis); Moroccan and Tunisian independence

1957: China’s Mao Zedong states that 800,000 “class enemies” have been liquidated; Ghanaian independence

1958: Eisenhower sends troops to Lebanon to protect government; Guinean independence

1959: Madagascar independence; Castro seizes power in Cuba; North Vietnam invades Laos

De-Europeanizing the World

America had fought the last year of World War II with one eye on the USSR as a major, possibly antagonistic, power. “Atomic diplomacy,” the leftist historical argument that President Harry Truman dropped the atomic bombs more to intimidate the Soviets than to defeat the Japanese, has been proven a myth by both American and—more recently—Japanese sources. But even before Soviet premier Joseph Stalin made it clear he intended to greatly expand Soviet influence, if not boundaries, U.S. leaders calculated communism’s influence and potential for mayhem when assessing the postwar realities in the Far East and Europe.

The United States’ other major ally, Great Britain, kept her eye on something quite different: her empire. British wartime decisions, particularly in India and Asia, were made based on whether the empire would be strengthened or weakened. Decolonization, supposedly begun in the 1920s with long periods of “preparation,” became a sudden reality for Britain and other European countries immediately after the war, when the attrition of their military forces and the diminution of their treasuries prevented them from continuing to sustain a global effort.

Britain and other colonial powers faced a spectrum of poor choices to make up for their past sins, and a painless decolonization proved impossible. Throughout the world, the imperial powers would be castigated for building their colonial empires, then savaged even more when they left the colonies to govern themselves. Critics of imperialism, even those who admitted some benefits of occupation, complained about Britain’s feeble attempts to establish self-government (however far in the future that lay), and complained louder when nations such as Belgium made virtually no effort to ensure native autonomy. Centuries of human sacrifice, cannibalism, slavery, oppression of every conceivable stripe, tribal divisiveness, and lack of successful self-government by natives were glossed over, excused, forgotten, or wished away in the literature in order to pin the Third World’s problems on the Europeans. (The term “Third World” itself is of mixed origins; it was first used in print by the French demographer Alfred Sauvy in 1952.) These criticisms completely ignored the fact that it was only in the context of European ideas that these conditions were even recognized as problems! Barrels of ink have been spilled lambasting the European decolonization process in the twentieth century, with the bulk of it spent describing the end of the British empire as a preamble to the rise of the “American Empire” and corollary cautionary tales about employing American power in a dangerous world.

As matters turned out, much of the fretting was for naught. Britain’s empire unraveled with shocking speed, though not nearly as fast as the Soviet bloc would collapse just forty years later. It ostensibly began to crumble with the Battle of Britain, when a German invasion of England loomed and revealed to the colonies not the empire’s strength, but its weakness. Despite controlling territories that stretched from Latin America to the Far East, by 1940 Britain was reduced to begging for aged American destroyers and seeing her treasury emptied. The very levers Britain had pulled to govern the masses in India and Africa had prevented those regions from developing their own industry and invention—the lifeblood of a nation at war. Although England could call on the colonies to provide troops, most of whom dutifully served and courageously fought, they had been denied the elements of autonomy and innovation that had made England and the United States great. It was akin to expecting the practice team to walk onto the field and take over from the all-pros.

During the last months of World War II, General George Marshall had stoutly supported his Supreme Allied Commander, General Dwight D. Eisenhower, in his struggle with British antagonist-counterparts Field Marshal Alan Brooke and Field Marshal Bernard Montgomery over control of the Allied forces. Winston Churchill’s demand that Eisenhower designate a deputy supreme commander of all ground forces was not finally defeated until after Yalta. As incomprehensible as it might seem to historians writing in the twenty-first century, the top British command wanted British personnel placed in all large American formations, supposedly for liaison. But their duties would actually be twofold: spy on the Americans for the British (Montgomery automatically did this in American units under his command) and advise them on military strategy.

Marshall’s stance was that since the United States was supplying three fourths of the manpower fighting in Western Europe, 45 percent of the world’s armaments, half of the world’s goods, and two thirds of all the ships—an American should indeed be in charge.1 The British may have had an inferiority complex that manifested itself in arrogance and disdain for their allies, but with good cause—they were inferior. Field Marshal Harold Alexander (later governor general of Canada), speaking with Marshall at Yalta, was as patronizing as ever concerning American failures in the Battle of the Bulge and intimated that it was the British who saved them. Marshall shot back, “Yes, American troops start out and make every possible mistake. But after the first time, they do not repeat their mistakes. The British troops start in the same way, and continue making the same mistakes over and over.”2

At its core, the struggle over British or American command was not about how to defeat Germany—it was to establish who would be dominant in postwar European affairs, and, secondarily, to ensure that Britain could retain effective control over her colonies. In Marshall’s eyes, never again was the United States to be called upon to pull Europe’s chestnuts out of the fire. If the children weren’t going to play nicely together, then the United States would have to oversee the playground. The Americans had to become the keepers of the peace in Europe, and if the British complained, they, as well as the French, had proven themselves unable or unwilling to do what was necessary. Especially in the context of retaining their empires, the Europeans had been helpless to prevent their fragmentation or, conversely, to properly prepare their subjects for independence. Nowhere did this prove more troublesome and bloody than in India, the Crown Jewel of the Empire.

India, Gandhi, and Nehru

Britain’s continued presumption that the colonies would remain intact was based on the flawed notion that their colonial peoples loved the mother country and would help her in time of war. Many did not, or did so only out of an expectation that independence would be granted out of gratitude. This attitude, of course, was entirely wishful thinking on the part of the colonies, dosed heavily with hints, winks, and nods from dishonest British officials and MPs who alluded to murky plans for independence without ever developing clear policies for such an eventuality. For the colonies, war presented the opportunity to bolt from a weakened empire once and for all.

Unfortunately, the war also left colonial nationalists with the illusion that Soviet-style command or planned economies could succeed, especially for a poorer country seeking to climb out of want. Invisible to those colonies and the outside world was the Soviet Union’s human cost—the archipelago with its gulags, thousands of political prisoners, and the unmarked graves of “wreckers.” Also largely unseen in the Soviet “miracle” had been the role of Americans and other Westerners in propping up Lenin and Stalin, either through loans and transfers of necessary goods or as the largest customers for bargain-basement sales of priceless artwork. Andrew Mellon alone acquired some $6 million worth of art from the Soviets, providing hard currency the dictatorship badly needed.

Meet the Author

Larry Schweikart is a professor of history at the University of Dayton and the coauthor of A Patriot’s History of the United States and A Patriot’s History Reader, among many other books.
Dave Dougherty is the coauthor of A Patriot’s History Reader among other books.
