The End of Big: How the Internet Makes David the New Goliath

by Nicco Mele

Overview

How seemingly innocuous technologies are unsettling the balance of power by putting it in the hands of the masses - and what a world without "big" will mean for all of us.
In The End of Big, social media pioneer, political and business strategist, and Harvard Kennedy School faculty member Nicco Mele offers a fascinating, sometimes frightening look at how our ability to stay connected - constantly, instantly, and globally - is dramatically changing our world.
Governments are being upended by individuals relying only on social media. Major political parties are seeing their power eroded by grassroots forces through online fund-raising. Universities are scrambling to preserve their student populations in the face of less expensive, more accessible online courses. Print and broadcast news outlets are struggling to compete with citizen journalists and bloggers. Our traditional institutions are being disrupted in revolutionary ways, some for the better. But, as Nicco Mele argues, the benefits of new technology come with unintended consequences. In The End of Big, Mele examines:

- How fringe political forces enter the mainstream and gain traction using everyday technology - with the enormous potential to undermine central power

- What happens when investigative journalism is replaced by ad hoc bloggers, mobile video, and instantaneous tweets...and whether they challenge or simply enable power

- Why Web-based micro-businesses are outcompeting major corporations, and what innovations will alter the way we work, own things, and pay for goods and services

- The collapse of traditional party politics, and the rise of a new kind of democracy, one which could produce dynamic and effective leaders...or demagogues

- How citizen initiatives can replace local and state government functions, such as safety regulations, tax collection, and garbage pickup, and do so cheaper, faster, and better

Mele argues that unless we exercise caution in our use of these new technologies, we risk a dark and wildly unstable future, one in which our freedoms and basic human values could be destroyed rather than enhanced. Both hopeful and alarming, The End of Big is a thought-provoking, passionately argued book that offers genuine insight into the ways we are using technology, and how it is radically changing our world in ways we are only now beginning to understand.


Product Details

ISBN-13: 9781250021861
Publisher: St. Martin's Publishing Group
Publication date: 04/23/2013
Sold by: Macmillan
Format: eBook
Pages: 320
File size: 532 KB

About the Author

NICCO MELE is a leading forecaster of business, politics, and culture in our fast-moving digital age. Named by Esquire magazine as one of America's "Best and Brightest," he served as webmaster for Howard Dean's 2004 presidential campaign and popularized the use of technology and social media for political fund-raising, reshaping American politics. Not long after, he helped lead the online efforts for Barack Obama in his successful bid for the U.S. Senate. Mele's firm, EchoDitto, is a leading Internet strategy company working with nonprofit organizations and Fortune 500 companies, among them Google, AARP, the Clinton Global Initiative, and the United Nations. He also serves on a number of boards, including the Nieman Foundation for Journalism at Harvard, is a cofounder of the Massachusetts Poetry Festival, and is on the faculty at Harvard's Kennedy School of Government.


Read an Excerpt

1
 

BURN IT ALL DOWN
… “You’ve begun
to separate the dark from the dark.”1
Look around you.
Bloggers rather than established news outlets break news. Upstart candidates topple establishment politicians. Civilian insurgencies organized on Facebook challenge conventional militaries. Engaged citizens pull off policy reforms independent of government bureaucracies. Local musicians bypass record labels to become YouTube sensations. Twentysomething tech entrepreneurs working in their pajamas destabilize industry giants and become billionaires.
Radical connectivity—our breathtaking ability to send vast amounts of data instantly, constantly, and globally2—has all but transformed politics, business, and culture, bringing about the upheaval of traditional, “big” institutions and the empowerment of upstarts and renegades. When a single crazy pastor in Florida can issue pronouncements that, thanks to the Internet, cause widespread rioting in Pakistan, you know something has shifted. When a lone saboteur can leak hundreds of thousands of secret government documents to the world, helping spark revolutions in several Middle Eastern countries, you again know something has shifted.
Technophiles celebrate innovations such as smartphones, the Internet, or social media as agents of progress; Luddites denounce them as harbingers of a new dark age. One thing is certain: Radical connectivity is toxic to conventional power structures. Today, before our eyes, the top-down nation-state model as we’ve known it is collapsing. Traditional sources of information like broadcast and print media are in decline. Aircraft carriers and other military hardware that for decades underpinned geopolitical power are obsolete and highly vulnerable, while organized violence remains a growing threat. Competitive hierarchies within industries are disappearing. Traditional cultural authorities are fading. Everything we depend on to preserve both social stability and cherished values, including the rule of law, civil liberties, and free markets, is coming unraveled.
The End of Big is at hand.
Institutions Aren’t Dispensable
You might ask, Isn’t the destruction of old institutions potentially a pretty good thing? Many traditional, big institutions are deeply flawed and even corrupt—they deserve to die. Few among us are not frustrated with the culture of influence and money in the two big political parties or disgusted by the behavior of at least one big corporation. Echoing the economist Joseph Schumpeter, isn’t creative destruction, well, creative?
Our institutions have in fact failed us. Building a sustainable economy, for instance, that allows us to avert the catastrophic consequences of global warming seems hopeless in the face of big government, big business, and a dozen other big institutions. Ultimately, technological advances provide unprecedented opportunities for us to reshape our future for the better.
At the same time we can’t fetishize technology and say “to hell with our institutions” without suffering terrible consequences. The State Department was designed and built for an era predating telephones and jet travel, let alone the distance-collapsing magic of the Internet. But that fact doesn’t mean we should or can give up diplomacy. Government has become bloated and inefficient—but we still need somebody to repair roads, keep public order, and create the public sphere where the market cannot or should not dominate. Unless we exercise more deliberate choice over the design and use of technologies, we doom ourselves to a future inconsistent with the hard-won democratic values on which modern society is based: limited government, the rule of law, due process, free markets, and individual freedoms of religion, speech, press, and assembly. To the extent these values disappear, we’re dooming ourselves to a chaotic, uncontrollable, and potentially even catastrophic future.
No, I Am Not Exaggerating
Ten years from now, we might well find ourselves living in constant fear of extremist groups and lone individuals who, thanks to technology, can disrupt society at will, shutting off power, threatening food supplies, creating mayhem in the streets, and impeding commercial activity. We’ve already seen a small group of hackers disrupt commuting in San Francisco for a few days (as a protest against police brutality) while flash mobs of approximately one hundred people have been showing up at retailers in cities like St. Paul, Las Vegas, and Washington, D.C., ransacking stores and making off with sacks of loot.3
This is just the beginning. Can you imagine daily life without currency issued by the national government? It’s distinctly possible. What if in a hyperlocal society the sanitation department never comes to collect your trash—what would you do then? What if government bodies can no longer regulate the large numbers of small businesses that will exist with the End of Big? Could you trust that your food, medicines, and automobiles are safe? What will happen if authoritative news reporting ceases to exist and if cultural authorities fade into the background, inaugurating a new dark age? How will our democracy function? How will business advance? How will we solve big problems like hunger and global warming?
Wrapping Our Minds Around the Basic Problem
This book explores the destructive consequences of radical connectivity across many domains of contemporary society, from the press to political parties, from militaries to markets. Other writers have examined the transformative potential of new technologies, generally focusing on specific domains such as business, economics, or culture, or on a specific dimension of technology’s impact. This book seeks to address a broader problem that directly affects us all. Radical connectivity is altering the exercise of power faster than we can understand it. Most of us feel lost in the dust kicked up by the pace of change. We can tell political, social, and economic life is shifting, but we don’t know what to make of it in the aggregate. Some changes seem destined to improve our lives, yet the impact on familiar institutions like the press makes us nervous. Opportunities for progress abound (and I will explore those, too), but so do openings for instability and even outright chaos. The devices and connectivity so essential to modern life put unprecedented power in the hands of every individual—a radical redistribution of power that our traditional institutions don’t and perhaps can’t understand.
Most of us, including policy experts, scholars, and politicians, haven’t subjected radical connectivity to a deep and critical scrutiny, weighing the benefits and risks with a cold eye. Throughout the entire 2012 U.S. presidential primary campaign, not a single debate featured a substantial question about technology—about the nature and role of privacy for citizens, for instance, or about the disruptive impact of social media on the Middle East. But the problem runs deeper. We don’t yet have an adequate vocabulary to talk about what’s happening. The word “technology” is weak; a wheel is technology, and so is the printing press, whereas our present-day technology collapses time, distance, and other barriers. “Networked” doesn’t quite capture the dramatic global reach, the persistent presence, the mobile nature of our world. You often hear “social” used in connection with technology—social media, social business, social sharing—but the consequences of radical connectivity on institutions are anything but social: They are disruptive, confusing, even dangerous.
Sometimes people utter the catchall term “digital,” but it’s not clear what that means, either; remember the digital watches of the 1980s? “Open” sounds good: open government, open-source politics, open-source policy. But WikiLeaks brings severe diplomatic and political consequences that “open” doesn’t capture. Just because something is machine readable and online doesn’t necessarily mean it is open. Also, “openness” describes the end result of technology, but it ignores the closed cabal of nerds (of which I’m one) that came up with this technology and defined its political implications. Not to mention that the control a handful of companies exert over our technology is far from open—companies like Apple, Google, and Facebook.
Ha Ha! You Laughed at Us, Now We Control You
This last point is critical—and as a prelude to this book, I’ll spend the rest of this first chapter fleshing it out. Why has the digital age spawned so much social, political, and economic upheaval? Is it happenstance? Or is it a function of how specific groups of users have chosen to use technologies? Neither. A radically individualistic and antiestablishment ideology reminiscent of the 1960s is baked right into the technologies that underlie today’s primary communications tools. Current consumer technologies are specifically designed to do two things: empower the individual at the expense of existing institutions, ancient social structures, and traditions; and uphold the authority and privilege of the computer nerds.
Power is not about knowing how to use Twitter. It’s about grasping the thinking underneath the actual technology—the values, mind-sets, worldviews, and arguments embedded in all those blinking gadgets and cool Web sites.4 Without realizing it, citizens and elected leaders have abdicated control over our political and economic destinies to a small band of nerds who have decided, on their own, that upstarts and renegades should triumph over established power centers and have designed technology to achieve that outcome. “Cyber-activists are perceived to be the underdogs, flawed and annoying, perhaps, but standing up to overbearing power,” says the tech pioneer Jaron Lanier. “I actually take seriously the idea that the Internet can make non-traditional techie actors powerful. Therefore, I am less sympathetic to hackers when they use their newfound power arrogantly and non-constructively.”5 Indeed, in our arrogance and optimism, we nerds haven’t considered the impact of our designs, nor have we thought through the potential for chaos, destabilization, fascism, and other ills. We’ve simply subjected the world to our designs, leaving everyone else to live with the consequences, whether or not they like them.
Technology seems value-neutral, yet it isn’t. It has its own worldview, one the rest of us adopt without consideration because of the convenience and fun of our communications devices. People worship Steve Jobs and his legendary leadership of Apple, and they consume Apple products such as the iPhone and iPad with delight and intensity—yet these products and indeed the vision of Jobs are reorganizing our world from top to bottom. The nerds who brought you today’s three most dominant communications technologies—the personal computer, the Internet, and mobile phones—were in different ways self-consciously hostile to established authority and self-consciously alert to the vast promise and potential of individual human beings. The personal computer was born out of the counterculture of the late 60s and early 70s, in part a reaction to the failure of institutions at that time. The Internet’s commercialization took place in the context of the antiregulatory fervor of Newt Gingrich’s Republican revolution and Bill Clinton’s private-sector-friendly, centrist administration. And mobile phone adoption exploded during the 2000s, challenging the institutions of the nation-state and bringing globalization to the digital world.
A Big Word: “Technopoly”
In his book Technopoly, the late cultural critic Neil Postman highlights the way the very technologies we design to serve us wind up controlling us. At first, Postman argues, we use technology as tools to help save labor and get the job done. Over time, the tools come to “play a central role in the thought world of the culture.” Finally, in the technopoly, the tools no longer support the culture but dictate and shape the culture. “The culture seeks its authorization in technology, finds its satisfactions in technology, and takes its orders from technology.” In reading Postman’s work, I drop the word “culture” and substitute “big institutions.” In our time, institutions like government, political parties, and the media seek authorization from technology and even take orders from technology.
The technopoly is not machine-controlled. There are people behind it, people with political ideals—as well as with economic and political interests—that they bring to technology design. The nerds whom mainstream society once portrayed as outcasts and undesirables are now powerful oligarchs, both literally and figuratively. McSweeney’s Internet Tendency, an online literary magazine, ran a very fun piece titled “In Which I Fix My Girlfriend’s Grandparents’ WiFi and Am Hailed as a Conquering Hero.”6 If you can fix someone’s technical problem, you suddenly receive enormous power and respect—a little bit of nerd ability goes a long way. Some of the largest, most powerful corporations on earth—Google, Facebook, Apple, Microsoft, Hewlett-Packard—are tech companies run by nerds. Joined by nonprofit organizations like Wikipedia, they shape our public life, our culture, and, increasingly, our institutions. Their products have reflected the political sensibilities of nerds both individually and collectively over the past fifty years. At its core, the technopoly’s nonpartisan political philosophy can be summarized in one phrase reminiscent of the 60s: Burn it all down … and make me some money! In parallel to this philosophy, with little sense of irony, the nerds in pursuit of market dominance have created a few new, really huge institutions of the digital age, the “even bigger” platforms we rely on every day, like Facebook, Twitter, Amazon, Google, and the lock-in universe of Apple’s iPhone. These are the glaring exceptions to the End of Big that prove the rule.
Before later chapters of this book can examine radical connectivity’s destructive effects on citizenship and community, political campaigning, news reporting, universities, scientific inquiry, entertainment, and corporate strategy, and before I can point the way toward the renewal of our institutions and a more stable and prosperous twenty-first century, we need to understand how a radical antiestablishment ethic became embedded in our technologies. Let me recount a few details of this well-worn story, the history of my people, the American nerds.7
Personal Computer Versus Institutional Computer
We take personal computers for granted, but in the history of computer science, they are a relatively recent detour. During the 1940s and 1950s, most computer science took place inside large organizations—militaries, corporations, universities. Even by the late 1960s, the freshly minted computer nerd looking for a job would likely have gone into a large institution. That’s because computers were giant institutional devices requiring a substantial amount of money and space. In 1969, Control Data Corporation started selling the CDC 7600, a Seymour Cray-designed supercomputer whose base price was about $5 million. Imagine a large office cubicle whose walls are built from today’s stainless steel side-by-side refrigerators: you’ve got the CDC 7600. Its clock speed was about 36.4 MHz, not much compared to the 1 GHz of today’s iPhone 4 (although both machines achieve roughly the same FLOPS, a measure of how fast they can do floating-point calculations). That means your iPhone 4 is about as powerful as a Cray supercomputer. Only large institutions could afford these bad boys, and they used them not to watch and share videos or listen to your neighbor’s kid’s noisy garage band but to perform complex calculations in mathematics, nuclear physics, and other disciplines.
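To see why a roughly 27-fold clock-rate gap can still yield comparable FLOPS, here is a minimal back-of-the-envelope sketch; the flops-per-cycle figures are my own illustrative assumptions, not numbers from this book:

```python
# Peak throughput ~= clock rate x floating-point operations completed per cycle.
# The per-cycle figures below are assumed for illustration only; real numbers
# depend heavily on the workload and how the code uses the hardware.

def peak_mflops(clock_hz: float, flops_per_cycle: float) -> float:
    """Crude peak estimate, in millions of floating-point operations per second."""
    return clock_hz * flops_per_cycle / 1e6

cdc_7600 = peak_mflops(36.4e6, 1.0)   # assume ~one result per cycle at peak
iphone_4 = peak_mflops(1e9, 0.036)    # hypothetical effective rate, chosen to
                                      # match the text's "same FLOPS" claim

print(f"CDC 7600: ~{cdc_7600:.0f} MFLOPS")
print(f"iPhone 4: ~{iphone_4:.0f} MFLOPS")
print(f"Clock-rate ratio: ~{1e9 / 36.4e6:.0f}x")
```

The arithmetic makes the author’s point concrete: raw clock speed overstates the gap, because how much useful floating-point work a machine completes per cycle matters as much as how fast it ticks.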
Yet within this large, institutional world of computer science, a debate was erupting: Should we continue to build megacomputers that can essentially function as stand-alone, artificial intelligence machines, designed to tackle big, complex problems, or should we build small computers that augment and extend the capabilities of individual human beings? Most of the institutional energy through the 1960s and early 70s—not to mention the prestige and rewards—fell on the artificial intelligence side of things, while the small-computer school was looked down on, its research compared with secretarial and administrative work. Which would you rather do—build a better administrative assistant or build a mind that could solve big problems? Today, small computing has won out—in part because of the momentum of Moore’s Law.8 While the work of building a big mind is still around (remember when IBM’s Deep Blue supercomputer beat Garry Kasparov at chess?), the small-computer side has birthed a range of technologies culminating in your current Droid or iPhone. There is a particular hero to this strand of American nerd-ocity, one whose story begins to elucidate the political ideology behind the personal computer, and he is the computer scientist Douglas Engelbart.
“The Mother of All Demos”
A product of the greatest generation that fought World War II, Engelbart had a sense of the United States’ grandeur and majesty when dedicated to a great challenge, and during the 1950s and 60s he was looking for the next great challenge. Inspired by Vannevar Bush’s article “As We May Think,” which championed the wider dissemination of knowledge as a national peacetime challenge, Engelbart imagined people sitting at “working stations”9 and coming together in powerful ways thanks to modern computing. Using computers to connect people to build a more powerful computer, to “harness collective intellect,”10 became his life’s mission. In December 1968, he showed off several of his inventions at a demonstration subsequently known as “The Mother of All Demos.” Many facets of the modern personal computer were present at this demonstration in nascent form: the mouse, the keyboard, the monitor, hyperlinks, videoconferencing. Unfortunately, “The Mother of All Demos” did not turn Engelbart into an instant celebrity outside nerd circles. Even inside nerd circles, most of his colleagues regarded Engelbart as something of a crank. The idea that you could sit in front of a computer and actually work at it seemed lunatic in this age of massive institutional computers that worked for days to solve your complex problem while you did something else. You dropped a problem off with a computer and returned a few days later to find it solved; you didn’t sit in front of it and wait.
Yet Engelbart’s vision wasn’t all that radical. Even as he imagined people sitting at computers and using them to augment and extend their work, he still saw them as big, institutional things. In Engelbart’s view, work stations would be thin terminals without much power—essentially shared screens plugged into one big computer. Personal computers as we think of them were not a part of his original vision, and in fact he resisted the personal computer revolution at first, working most of his life at big institutions like the Stanford Research Institute. To get to the personal computer and the makings of the End of Big, we need to shift to a different strain of thought that was popping up at the same time in the nerd world, which received its most memorable expression in a book by another quixotic computer scientist, Ted Nelson.
Computer Lib
You’ve heard of “women’s lib” coming out of the Vietnam era? Well, turns out there was “computer lib,” too. Ted Nelson’s pivotal 1974 book Computer Lib: You Can and Must Understand Computers Now confronted nerds everywhere with a rousing call to action, demanding that they claim computing for individuals so as to free them from the oppression of, you guessed it, large institutions. Computer Lib had a radical style similar to Stewart Brand’s countercultural publication The Whole Earth Catalog, yet Computer Lib devoted itself to computers, offering both a primer on the basics of programming and a breathtaking vision of computing’s future. The book’s cover art—a raised fist, à la the Black Panthers—left little doubt about its intended radicalism. Computer science was burgeoning as a discipline at major universities. At the same time, much of the country was still caught up in the turmoil of antiwar protests and other social movements. Many young people were arguing that the government and the military-industrial complex had fundamentally betrayed the people by pursuing war in Southeast Asia. Students had gone so far as to organize protests against construction of large Defense Department supercomputers.11 In this environment, Nelson’s book landed like a tab of Alka-Seltzer in a tall, cold glass of water. It was provocative and clear in its anti-institutional leanings and influenced a generation of nerds.
Rereading Computer Lib today, I can’t help but marvel at Nelson’s prescience. In one section, he describes what it means to be online and in another he imagines a world of hyperlinks—decades before the Web was invented. One of my favorite quotes from the book is the following: “If you are interested in democracy and its future, you better understand computers.” In 1974, the idea that thousands of people would have their own computers was not merely radical in a political sense—it was crazy talk. Computers cost millions of dollars. How could everyone have one? To his credit, Nelson also recognized the way nerds could form a new kind of institution that kept non-nerds at bay. In a portion of the book called “Down with Cybercrud,” Nelson disparaged the half-truths and lies that nerds told non-nerds to keep them from understanding computers’ power. He came out aggressively against the institutional nature of computers, hoping to bring them out of the big universities and military and into the homes of the masses, where they could serve what he saw as a truly liberating purpose.
Home-Brewed for the People
Inspired by Ted Nelson and others, a generation of nerds emerged from the late 1960s and 70s determined to disrupt the march of the institutional computer and bring the personal computer “to every desk in America,” as Bill Gates famously put it. Brand described this generation as embodying a “hacker ethic”: “Most of our generation scorned computers as the embodiment of centralized control. But a tiny contingent—later called ‘hackers’—embraced computers and set about transforming them into tools of liberation.”12 This contingent went to work in their parents’ garages and in their dorm rooms and eventually brought behemoths like Apple and Microsoft into existence. But the early products were considered hobbyist items on par with model trains and ham radios, or, in today’s world, DIY craft beers and handmade artisan soaps. One group, the People’s Computer Company, put this explanation on the cover of its newsletter: “Computers are mostly used against people instead of for people; used to control people instead of to free them; Time to change all that—we need a … People’s Computer Company.”13 It all amounted to a sharp departure from mainstream computer science in America, which lived on in the giant mainframes of academic and government institutions.
A famous example of the burgeoning anti-institutional computer counterculture is the Homebrew Computer Club, an ad hoc group of hobbyist nerds who in 1975 began meeting once a month in Gordon French’s garage in Silicon Valley. Some of its more famous members included the Apple founders Steve Jobs and Steve Wozniak.14 Gates drew the ire of the Homebrew Computer Club by selling something that had previously been given away free—a terrible development for hobbyists. Microsoft’s first software product, Altair BASIC, was sold at a time when software generally came bundled with a hardware purchase. Homebrew members, annoyed that a young Gates was trying to sell software—it should be free!—famously began circulating illegal copies at the group’s meetings, arguably the first instance of software piracy. An angry Gates published an “Open Letter to Hobbyists” in the Homebrew Computer Club’s newsletter, writing, “As the majority of Hobbyists must be aware, most of you steal your software.”15 Jobs and Wozniak, born of the Homebrew Computer Club, took a different approach. The ads that appeared in 1976 for their first Apple computer announced that “our philosophy is to provide software for our machines free or at minimal cost” and “yes folks, Apple BASIC is Free.”16
1984
During the decade after Computer Lib, as personal computers became fixtures in American homes and as computer companies became established organizations in their own right, the notion that personal computers represented a naked challenge to the centralized power of both computing and larger institutions persisted. John Markoff’s account of the counterculture’s influence on personal computing relates how “[t]he old computing world was hierarchical and conservative. Years later, after the PC was an established reality, Ken Olsen, the founder of minicomputer maker Digital Equipment Corporation, still … publicly asserted that there was no need for a home computer.”17 On the other hand, antiestablishment ideology became entrenched in manifold specifics of the PC’s design; Markoff relates, for instance, that the visualization that comes with iTunes—the pretty colors that move and change in sequence with the music—was inspired in part by Jobs’s use of LSD, which Jobs called “one of the two or three most important things he had done in his life.” A liberationist ethic also became entrenched in the overt marketing of personal computing devices, most famously in a classic television commercial, Apple’s 1984 spot.
Following the rousing success of Apple’s first two home computer models, Steve Jobs wanted to do something big to roll out its third model, the Macintosh personal computer. He hired Ridley Scott, who two years earlier had directed the sci-fi classic Blade Runner, to make the commercial.18 The result was a powerful and intense ad that referenced the dystopian future of George Orwell’s classic novel 1984. In the ad, a young woman breaks into a large auditorium where a crowd of mindless automatons sit listening to a giant screen of a speaking man, presumably Big Brother. The woman, representing the Macintosh (she has a sketch of the Mac on her tank top), smashes the screen. The advertisement closes with the text, “On January 24th, Apple Computer will introduce Macintosh. And you’ll see why 1984 won’t be like ‘1984.’”19
Recounting the story of the ad, industry journalist Adelia Cellini noted, “Apple wanted the Mac to symbolize the idea of empowerment, with the ad showcasing the Mac as a tool for combating conformity and asserting originality. What better way to do that than have a striking blond athlete take a sledgehammer to the face of that ultimate symbol of conformity, Big Brother?”20 In many ways, this ad represents the apex of the personal computer revolution. The anti-institutional counterculture nerd ethos of the 1960s and 1970s had grown into the personal computer industry—an industry large enough by this time that it could now buy an advertisement during the Super Bowl. Yet the advertisement it used to make a splash still paradoxically evoked a deep strain of radical individualism and personal expression in the midst of conformity.
The Internet Comes to Malaysia
Like the personal computer, the Internet reflected an “anti-big,” individualist philosophy when it emerged during the mid- to late 1990s, albeit a strain of this philosophy flowing from the other side of the political spectrum. Not long after 1984, consumers of the personal computer wanted to connect these devices together so that they could share things. First, they wanted to share practical, physical things, such as a printer. By the mid-1990s, an Ethernet jack became standard issue in personal computers. The arrival of Ethernet marked the Internet’s true beginning, a tacit admission that what was really powerful—and liberating—was not the personal computer but the person connected through the computer.
I remember how revolutionary the Internet felt when it first came out. During the mid-1980s, when I was in grade school, we had two computers—an early Apple IIe and an Atari ST. I loved that Atari ST almost as much as life itself, learning on that machine how to write code using Logo, a programming language designed for kids. A few years later, we got a modem, allowing our computer to talk to other computers over our landline. I immediately set to work figuring out how to use this technology. We were living in Malaysia at the time—my father was an American diplomat working for the United States Information Service—and Kuala Lumpur had a small but burgeoning hacker community. I first found them through bulletin board systems—BBSes as they were called—accessed through my modem. A BBS was modeled after a real, physical bulletin board; someone would volunteer a computer as the electronic bulletin board, and then other people could connect to that computer and post messages to the board.
One of my friends in high school hosted a BBS on his home computer. At night after his parents went to bed, he snuck around the house and unplugged the telephones so they wouldn’t ring when people dialed into the BBS. Then he plugged the telephone line into his computer and waited for people to dial in. All night long people would call and connect to his computer, leaving messages on the electronic bulletin board. I, too, would stay up all night, dialing into different BBSes, getting into arguments about code and other subjects. Sometimes I’d get a busy signal and would have to wait to connect my modem all over again. Other times, a real person would answer the phone, irate that I had woken them at 2 A.M. When a new BBS came online, people in our little community got really excited. Once a new BBS with two separate phone lines came into service—wow, was that a big day. And yet, these BBSes were limited. I could share things with maybe twenty other nerds in my neighborhood, but I couldn’t share things with a random nerd in another city, let alone another country. My computer was still its own little island.
I wish I could remember the day the Internet came to Malaysia, but I experienced it as a gradual evolution, like a tide coming in. For months, posts appeared on the BBSes about the coming of the Internet, although Malaysia as a country had not yet connected to it. In 1992, the government commercialized an Internet provider that had a satellite Internet connection from Kuala Lumpur to Stockton, California. It took a while for that provider to offer Internet access to individuals at home; initially only corporate or institutional access was available. Our family finally got online; I still remember we were subscriber number 117. The experience was electric. Suddenly, we weren’t isolated any more. Living in Malaysia, I had to wait days or even weeks to get box scores for the New York Mets. I literally had to wait for a paper copy of the New York Times to arrive off a slow boat from China. Now I was connected to the Internet and could get those scores instantly, in real time! And not just the scores—the plays! Granted, I wasn’t watching video, as I would today, but even the written recaps of games that I received and pored over amounted to a revelation, producing an incredible sense of joy and closeness to home. It felt like liberation, as if the rules of distance and administrative boundaries no longer applied.
A Merging of Acronyms
Like the personal computer, the Internet initially took shape within a big institution—the United States military. The Defense Department’s Advanced Research Projects Agency (DARPA) had developed the idea for the Internet decades earlier, imagining a communications system that would facilitate the exchange of research from institutions scattered around the country. Over time, DARPA came to understand a second advantage to a network communications system: It could withstand a nuclear attack because it possessed no single point of failure. The Internet’s physical infrastructure and design came together during the 1960s as part of an attempt to make communication between huge institutional computers easier. Bob Taylor, a DARPA computer scientist, had three monitors in his office, each connected to an institutional computer funded by DARPA (one each at UC Berkeley, MIT, and a private software company called SDC in Santa Monica, California). Taylor grew tired of having three different systems and wanted one way to communicate with all three. As he remembers, “For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log in to the other terminal and get in touch with them. I said, ‘Oh, man, it’s obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go.’”21
It took a few years, but by early December 1969, four universities were connected to one another through a system called ARPANET: UCLA, the Stanford Research Institute, UC Santa Barbara, and the University of Utah. As time passed, more academic institutions joined the network. By 1981, 213 host computers had connected, and it was becoming clear that ARPANET was primarily an educational network and not something that properly belonged under military purview.22 By 1986, the National Science Foundation (NSF) was running the Internet’s essential backbone, called NSFNET, and by 1990 ARPANET was shut down, although the military continued to manage a few key functions, such as the registration of domain names. During the late 1980s, the Internet also began to grow beyond the United States, connecting to educational institutions in Europe and Asia. In 1989, the first private company selling access to everyday customers in the United States opened its doors—a move that concerned scientists, who feared that the Internet would lose its research focus and become co-opted for other activities (online poker, anyone? porn?). In 1992, Congress got involved, passing a law encouraging the NSF to open the Internet to “additional users” beyond “research and education activities.”
A Declaration of the Independence of Cyberspace
Here’s where the story gets really interesting, from an End of Big perspective. By 1995, the NSF had relinquished control of the Internet’s essential infrastructure to the Department of Commerce, removing the last restrictions on the Internet’s ability to carry commercial traffic. When I asked people who participated how and why this happened, the overwhelming point of view was that legislators and regulators in Washington, D.C., simply weren’t paying attention. They didn’t understand the Internet’s power; it had no place in their landscape of power, and no familiar analogue existed that would make it easy to grasp. Mitch Kapor noted in an April 2012 interview, “Nobody in Washington DC took [the Internet] seriously, so it was allowed to happen. By the time anybody noticed, it had already won.”23
In effect, the Internet was released into the wild in a strong pro-business climate pushed by conservatives who wanted one big institution—government—to get out of free markets. The following year, Congress passed the Telecommunications Act of 1996, which relaxed limits on radio station ownership, allowing, among other things, the rise of huge media conglomerates like Clear Channel (paradoxical, I know). The underlying philosophy received memorable expression in a piece written by the technologist, cattle rancher, and Grateful Dead lyricist John Perry Barlow called “A Declaration of the Independence of Cyberspace.” The piece begins, “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”24
Around the same time, Kapor published a front-page piece in the third issue of Wired magazine arguing that the Internet’s architecture realized Thomas Jefferson’s ideal of decentralization. The technological leaders of the time expected that the Internet’s adoption would lead to positive social change and that the network was better left untouched. In a recent interview with me, Kapor noted that among the technological pioneers of the era, “there was very little understanding of how human institutions work.” The blistering Moore’s Law pace of technology is simply not the pace of human institutions, and, even in the early 1990s, the technology had outstripped our institutions’ ability to keep pace with it.
The nerd community’s desire to see the Internet as a free and open space stems in part from the overbearing behavior of a single institution: AT&T. For years, AT&T’s communication monopoly stymied the creativity of computer scientists and innovators, reaffirming in their minds the great distrust of large institutions that had taken root during the late 1960s. The man widely credited with inventing Ethernet and modern computer networking, Robert Metcalfe, described it like this in Vanity Fair:
Imagine a bearded grad student being handed a dozen AT&T executives, all in pin-striped suits and quite a bit older and cooler. And I’m giving them a tour. And when I say a tour, they’re standing behind me while I’m typing on one of these terminals.… And I turned around to look at these ten, twelve AT&T suits, and they were all laughing. And it was in that moment that AT&T became my bête noire, because I realized in that moment that these sons of bitches were rooting against me … To this day, I still cringe at the mention of AT&T. That’s why my cell phone is a T-Mobile. The rest of my family uses AT&T, but I refuse.25
Barlow’s radical libertarian ethic, hardened by the AT&T executives’ contempt, layered itself on the earlier “burn it all down” ethic espoused by the personal computer’s founders, helping to bring us the connectivity we know and love today: a free commercial space where individuals and companies can pursue their business largely (but not entirely) free of government’s big institutional obstruction.
The “Openness” of the Internet
Not only was the Internet’s propagation inherently individualistic; it was orchestrated to favor the flow of information between individuals and groups, unconstrained by organizations or hierarchies. The popular notion that “information wants to be free” dates from the 1930s, when academics in the engineering disciplines that gave rise to computer science started talking about the free communication of scientific knowledge. Over time, their philosophy morphed and took on an increasingly political definition. As with the personal computer, you can trace the technology and ethos that power WikiLeaks, as well as its philosophical underpinnings, to the Vietnam era and the open-source movement then emerging from computer science. WikiLeaks is in part a reaction to the broader socioeconomic dynamics of our time, but its challenge to the established power of the United States and its institutions is made possible by personal information technology and distributed open-source computing.
Today’s communications technologies are also designed to foster the creation of new online groups capable of challenging established authorities. The Internet is not just a bunch of people; it’s a place where anybody can start his or her own group without asking for somebody’s permission. David Reed, who was one of the inventors of TCP/IP, the fundamental language of the Internet, is famous for postulating that the power of any network is exponentially related to the ability of the nodes of the network to form groups within that network. The best way to understand Reed’s Law is to imagine a fax machine. To send a fax to a hundred people, you have to send a hundred faxes. To send an e-mail to a hundred people, you only have to send one e-mail. You have spontaneously formed a group—a group of a hundred people to send a single e-mail—within the network of the entire Internet.
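To make Reed’s postulate concrete, here is the standard comparison of network-value “laws” for a network of N members (my own summary in formula form; the book itself gives no equations):

```latex
\begin{align*}
V_{\text{Sarnoff}}  &\propto N                  &&\text{(broadcast: value tracks audience size)}\\
V_{\text{Metcalfe}} &\propto \tfrac{N(N-1)}{2}  &&\text{(every possible pairwise connection)}\\
V_{\text{Reed}}     &\propto 2^{N} - N - 1      &&\text{(every possible subgroup of two or more members)}
\end{align*}
```

Because 2^N swamps N^2 as N grows, a network in which groups can form freely is, on Reed’s account, qualitatively more potent than a broadcast network or even a point-to-point network of the same size.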
Reed’s Law suggests the enormous power unleashed when barriers to group forming fall. Today, anyone can form a group at any time for any reason and organize to grow the reach and impact of that group. The implications for existing institutions are evident most clearly in politics. When Barack Obama ran for president, his Web site let anyone—you!—start their own group as part of my.barackobama.com. I myself formed such a group and found that these groups made the network exponentially more powerful. Because I started my own group, I felt ownership and put time into recruiting and organizing members. Thousands of groups like mine played a pivotal role in Obama’s ability to raise funds, secure the Democratic nomination over a competitor with strong establishment backing, and win the election. (By the way, the role of technology in electing Obama is yet another paradox: democratically decentralized groups in the service of a historically centralized institution, the president and the executive branch.)26
Clarifications and Caveats
Even as I recount this history, I need to acknowledge that I am simplifying things. For instance, I am lumping all nerds together under one banner, whereas in reality nerd life and thinking cover a range of viewpoints clustered around two dominant ones. One view is the powerful notion that, as we’ve seen, institutions are not to be trusted. The other amounts to a somewhat contradictory quest for market dominance. Throughout the book, I contrast the power that technology pushes to the individual with the giant, nearly monopolistic platform companies that serve this power. These companies—familiar names like Apple, Google, Microsoft, Facebook—are fundamentally pragmatic businesses that see (and exploit) market opportunity, even when it threatens significant social institutions. Jeff Hammerbacher, formerly the manager of the Facebook Data Team and a cofounder of Cloudera, has often been quoted as saying, “The best minds of my generation are thinking about how to make people click ads. That sucks.” It sucks even more when it comes at the expense of institutions that have been critical to the success of Western-style democracy.
I also don’t want to imply that the development of technology itself was entirely values-driven. Real linear innovation also helped steer technology toward empowerment of the individual. Moore’s Law has effectively entered the cultural lexicon as shorthand for the speed with which technology naturally got small, fast, and cheap—and as such, conducive to the empowerment of individuals. It is hard—perhaps impossible—to separate cultural influences from the inherent technical drive toward smaller, faster, and cheaper, and, thankfully for our purposes, we don’t have to. Multiple factors made us what we are today: a cultural and technological regime where the individual consumer really is king.
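For readers who want that shorthand pinned down: Moore’s original 1965 observation concerned transistor counts, commonly quoted as doubling roughly every two years. A minimal formulation (my gloss, not the book’s):

```latex
% Transistor count after t years, assuming a doubling period of ~2 years:
\[
N(t) \approx N_0 \cdot 2^{\,t/2}
\]
% Over the ~40 years between the CDC 7600 era and the smartphone,
% that compounds to $2^{20} \approx 10^6$: a roughly million-fold increase.
```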
Too Flat for the Nation-State?
Over the past ten years, as personal, connected computers have become progressively smaller, the connections between them have gone wireless. Seven billion souls inhabit Earth, and there are over five billion active mobile phones—and counting. When you combine the personal computer, the Internet, and the mobile phone, you have the technical preconditions for our present radical connectivity. Anyone anywhere can reach anyone else anywhere at any time at practically zero cost—and share with them almost anything, from the poem they just wrote to a live video. The design and marketing of mobile technology does not contradict the anti-institutional ethic of personal computing and the Internet; on the contrary, it adds another layer, in which the big institution is not merely the United States federal government but nation-states everywhere.
You see the post- or antistate ethic at work in advertising for mobile phones, which often evokes the experience of having access to anything anywhere in the world, without regard for the national boundaries defined and administered by nation-states. Apple’s ubiquitous iPod ads, for instance, simply show a human figure enthralled and dancing to their music—suspended in space, knowing no boundaries, experiencing total freedom. Innumerable other ads trumpet the global coverage of mobile phones or the ability to sustain personal contact with important people in your life across vast stretches of geography and irrespective of national political boundaries.
Politically, such advertising jibes with free market capitalism and, specifically, with the pro-globalization politics espoused by many writers and policymakers on both sides of the aisle, most famously by the New York Times columnist Thomas Friedman. In his influential 2005 book The World Is Flat, Friedman argued that the world was shrinking and that many of the traditional rules of economics were being rewritten as the barriers to competition were being flattened. Nation-states seeking to compete in the new global economy needed to get out of the way and reduce administrative barriers that had conventionally inhibited the flow of people, money, and ideas. Other writers favoring free markets have gone so far as to proclaim that the nation-state is effectively dead and meaningless as a political entity in our age of globalization.
The nation-state may not be dead yet (it remains the basis and means for all vital international agreements, from the World Trade Organization to Interpol), but it is losing its cultural and political force. “One of the huge changes brought by the printing press and advanced exponentially by the Internet is that people are able to readily pursue different interests and points of view,” says Doug Casey, the best-selling author and chairman of Casey Research. “As a result, they have less and less in common: living within the same political borders is no longer enough to make them countrymen.”27 Professor Liza Hopkins agrees, writing that “states continue to be an important container for social identities, but the rise in global flows of bodies, goods and information now makes the nation only one of a number of competing sites of allegiance.”28
Nerd Disease
As antiestablishment thinking has gotten baked into the technologies that assure radical connectivity, and as mobile technology has rendered its practical impact more pronounced, the influence of the nerds has only gotten stronger and more pervasive. The megaplatforms underpinning radical connectivity and created by the nerds constitute one glaring exception to the End of Big. The nerds also position themselves as a new “Even Bigger” institution in our age of eroding institutions by purposely building complexity into technology applications, thus creating a market need for their nerd expertise. I call this complexity “nerd disease,” and I admit to benefiting from it myself: A substantial portion of my consulting practice involves explaining to CEOs that they need to combat nerd disease and start holding their technology and information officers more accountable.
Let me offer a quick but representative example of nerd disease in action. I once worked with a small company where the technology team was in one building across the parking lot from a warehouse that shipped product. The company, on the advice of its nerds on staff, was preparing to spend about $1 million (a large sum for this company) to integrate its e-commerce and warehouse systems so that when someone placed an order online it would automatically get sent to the warehouse for shipping. Early on in the project, as we started to figure out how we were going to pull this off technically, it became apparent that the project was growing increasingly complex, with commensurate increases in the cost estimates. The company’s chief technology officer kept explaining to the nervous CEO that the integration of the systems was necessary, even with the increased complexity and cost. I finally suggested a different solution to the problem: Twice a day, have someone print out the online orders and walk them across the parking lot to the warehouse. It would have taken several years of triple-digit growth before the cost of this simple solution came anywhere near the cost of the complex but nerd-approved solution the company had been planning.
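As a sketch of the back-of-the-envelope reasoning behind that suggestion (all figures here are hypothetical; the book supplies only the $1 million estimate):

```python
# Hypothetical break-even comparison for the "print and walk" alternative.
# Every number except the $1M integration estimate is invented for illustration.

integration_cost = 1_000_000      # quoted estimate for the systems integration

hourly_wage = 20.0                # assumed cost of the person doing the walking
trips_per_day = 2                 # print-and-carry runs per day
hours_per_trip = 0.5              # assumed time per run
work_days_per_year = 250

annual_manual_cost = hourly_wage * hours_per_trip * trips_per_day * work_days_per_year
years_to_break_even = integration_cost / annual_manual_cost

print(f"Manual process: ~${annual_manual_cost:,.0f} per year")
print(f"Years of flat volume before it matches the $1M build: ~{years_to_break_even:.0f}")
```

Under these invented assumptions the manual process costs about $5,000 a year, which is why order volume would have to grow enormously before the integration paid for itself.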
Nerds everywhere propagate their own authority by emphasizing complex, never-ending processes over outcomes—a huge reason technical projects mushroom. Vivek Kundra, the first chief information officer of the U.S. government, was handed a stack of PDF documents representing $27 billion of IT projects that were years behind schedule and millions of dollars over budget. That’s a classic case of nerd disease run amok—benefiting the nerds, enshrining them and their knowledge as a new authoritative institution. Kundra took an approach Justice Brandeis would have admired. He started a Web site called ITDashboard.gov and listed every project that was over budget, how much it was over budget, and how far behind schedule it was, alongside a photo of the federal manager responsible for the project. Then he took a photo of the newly inaugurated President Obama scrolling through the Web site in the Oval Office. Despite pushback from the “IT Cabal,” this measure of transparency created some accountability. Kundra noted the success of the project: “We were able to save $3 billion after terminating projects that were not working, or de-scoping them. We also halted $20 billion worth of financial systems last summer because these were projects we had spent billions on with very little to show to taxpayers.”29
Trying to challenge the nerds at their own game is difficult at best, because the nerds also create their own myths of invincibility. A prime example is the Apple Store’s so-called Genius Bar; apparently the nerds are geniuses who alone can understand radical connectivity and keep the whole giant ship running and heading in the right direction. That sort of pompous mythmaking is endemic. It’s a shame, too, because most of our technology is at its core not especially complicated. Understanding its fundamental accessibility is the necessary first step toward taking real ownership of the technology and, with it, a new control over where our society is headed.
Looking Ahead
No single remedy available today will cure nerd disease, just as no single remedy will prevent the ongoing destruction of existing big institutions that remain vital to upholding social order and the Western values of democracy. As the next seven chapters argue, our leaders are on the verge of abandoning us to a chaotic world where the progressive gains of the last two centuries might be lost. Digital literacy is a critical solution, but perhaps most central is a deep awareness of our own values and a willingness to bring those values to bear in decision making. In the last chapter, I argue that the next decade will belong to those who can take the bottom-up, grassroots energy unleashed by radical connectivity and marry it with effective, engaged leadership to craft powerful political movements and build new businesses. We must make sure that we openly and collectively recommit ourselves to the values our nation was built on and to the building of new or updated institutions to replace old-style institutions in turmoil. Starting with the newly empowered, small entities, we have to discipline and inspire ourselves not merely to burn it all down but also to build it back up. Technology and smallness aren’t just the problem confronting us—they are also the solution, the only real path to a safer, healthier, more prosperous future. It’s up to us to walk this path purposefully and decisively, even in the face of understandable emotional responses like anxiety, uncertainty, and fear.
By way of introduction, this first chapter has offered a brief analysis of how a radical antiestablishment worldview became embedded in the technologies underlying radical connectivity. The challenge is to understand the worldview embedded in our technologies and then redesign institutions to bring technologies into alignment with limited government, the rule of law, due process, free markets, and individual freedoms of religion, speech, press, and assembly. We must marshal our best thinking, talent, and energy to imagine the institutions of the future; nothing less than the continued progress of the human race is at stake. It is my greatest hope that this book will inspire and engage people to build enduring institutions of the future, whether that means starting from scratch or reforming existing institutions to fit the reality of radical connectivity.
In the true spirit of the Internet, this book tends to go wide rather than deep. I try to paint a vast panorama that gives you what we so often lack: a sense of control over a truly unwieldy and confounding subject. As a result, I have not been able to dive exhaustively into the many subject areas I cover. Almost every chapter might occupy a single book on its own, and in many cases entire books have been written about material I try to address in a few pages. I have done my best to stay true to quantitative and qualitative assessments of the state of affairs in different disciplines, but those of you with domain expertise will undoubtedly take issue with certain of my points. I hope such mental sparring will not distract you from seeing the large picture, thinking about the way our world is changing, and helping craft solutions. In addition, not all chapters are created equal. The impact of the technology varies in its severity from chapter to chapter. Sometimes I scratch my head and struggle to figure out if the End of Big is in fact a problem in a given industry; other times, the End of Big seems manifestly catastrophic. This is in keeping with social and cultural phenomena generally, which tend to develop unevenly over time.
This book is ultimately about the nature of power in the digital age and how it overwhelms virtually every establishment it touches. Although we haven’t yet realized it, our societies are on the cusp of a transformation as dramatic as the one the Athenians wrought when they decided to elect leaders instead of choosing them by birthright. We have a tremendous opportunity to reimagine the kind of society we want to live in and bring it into being; in this sense, I am far from a tech reactionary. But unless we understand our present and emerging technological regime, we will get nowhere, and we may even find ourselves at the mercy of a chaotic and unforgiving anarchy. The End of Big seeks to bring clarity and purpose to the digital lives we each live right now. It plots a course for the citizen of the present future, prompting us to grasp the moment that is already presented to us, so that we might lead each other toward a better, more humane, more progressive world.

 
Copyright © 2013 by Nicco Mele

Table of Contents

1 Burn It All Down

2 Big News

3 Big Political Parties

4 Big Fun

5 Big Government

6 Big Armies

7 Big Minds

8 Big Companies

9 Big Opportunities?

Acknowledgments

Notes

Index
