"A useful starting point to understanding the choices that lie ahead."—Richard Waters, Los Angeles Times
"This remarkably researched and highly entertaining book is a must-read for all who take the ubiquitous nature of the Internet in our everyday lives for granted. The future of the internet is NOT a positive one, unless we all work collaboratively to ensure its lasting success. Zittrain’s analysis is first-class and should be widely heeded by leaders from all sectors of society."—Dr. Klaus Schwab, Executive Chairman and Founder of the World Economic Forum
“The most compelling book ever written on why a transformative technology's trajectory threatens to stifle that technology's greatest promise for society. Zittrain offers convincing road maps for redeeming that promise.”—Laurence H. Tribe, Carl M. Loeb University Professor and Professor of Constitutional Law, Harvard Law School
“Jonathan Zittrain does what no one has before—he eloquently and subtly pinpoints the magic that makes Wikipedia, and the Internet as a whole, work. The best way to save the Internet is to turn off your laptop until you've read this book.”—Jimbo Wales, Founder, Wikipedia
“A superb and alarming discussion, from one of the most astute and forward-looking analysts of the Internet. Zittrain explains how the glorious promise of the Internet might not be realized—and points the way toward reducing the current risks. Absolutely essential reading."—Cass Sunstein, Karl N. Llewellyn Distinguished Service Professor of Jurisprudence, The University of Chicago Law School, and co-author of Nudge: Improving Decisions About Health, Wealth, and Happiness
"In the web counterrevolution that Jonathan Zittrain foresees, users will lose the ability to control content, companies will gain the power to censor data, and security will trump innovation. It's a gloomy scenario that his new book, The Future of the Internet, says is already underway."—Katie Baker, Newsweek
"The thrust of Zittrain's book is that the shift back toward sterile technology cannot be entirely avoided, though the dangers can be mitigated. . . . Ignore Zittrain's warnings and we may prove his forecast right."—Paul Starr, The American Prospect
Zittrain (Internet Governance & Regulation, Oxford Univ.; cofounder, Berkman Ctr. for Internet & Society, Harvard Univ.) cogently explores two opposing scenarios for the future of the personal computer (PC) and the Internet. He defines PCs and the Internet as types of "generative technologies," nonhierarchical, open systems that invite and encourage broad participation over top-down hierarchy and external regulation. The existing generative paradigm has been challenged by both computer manufacturers and government, each with a different agenda. Big business is increasingly pushing for closed appliances, allowing companies the exclusive right to determine what software their systems will use and providing them with full access to information about consumer behavior. Government agencies seek the power to leverage technologies for surveillance-based information gathering. For Zittrain, these interests conflict with the desire of consumers and Internet users for privacy, choice, and community. He cites Wikipedia as an example of "netizenship," a messy but effective way of resolving issues without the need for external regulation. This is a passionate and intelligent book, of interest to students and scholars of cyber law and Internet/society issues.
Herman Hollerith was a twenty-year-old engineer when he helped to compile the results of the 1880 U.S. Census. He was sure he could invent a way to tabulate the data automatically, and over the next several years he spent his spare time devising a punch card system for surveyors to use. The U.S. government commissioned him to tally the 1890 Census with his new system, which consisted of a set of punch cards and associated readers that used spring-mounted needles to pass through the holes in each card, creating an electrical loop that advanced the reader's tally for a particular hole location.
Rather than selling the required equipment to the government, Hollerith leased it out at a rate of one thousand dollars per year for each of the first fifty machines. In exchange, he was wholly responsible for making sure the machines performed their designated tasks. The tally was a success. It took only two and a half years to tally the 1890 Census, compared to the seven years required for the 1880 Census. Hollerith's eponymous Tabulating Machine Company soon expanded to other governments' censuses, and then to payroll, inventory, and billing for large firms like railroad and insurance companies.

Hollerith retained the idea of renting rather than selling, controlling the ongoing computing processes of his clients in order to ensure a desirable outcome. It worked. His clients did not want to be burdened with learning how to operate these devices themselves. Instead, they wanted exactly one vendor to summon if something went wrong.
By the 1960s, the company name was International Business Machines, and IBM dominated business computing. Its leadership retained Hollerith's original control paradigm: firms leased IBM's mainframes on a monthly basis, and the lease covered everything-hardware, software, maintenance, and training. Businesses developed little in-house talent for operating the machines because everything was already included as part of the deal with IBM. Further, while IBM's computers were general-purpose information processors, meaning they could be repurposed with new software, no third-party software industry existed. All software was bundled with the machine rental as part of IBM's business model, which was designed to offer comprehensive computing solutions for the particular problems presented by the client. This model provided a convenient one-stop-shopping approach to business computing, resulting in software that was well customized to the client's business practices. But it also meant that any improvements to the computer's operation had to happen through a formal process of discussion and negotiation between IBM and the client. Further, the arrangement made it difficult for firms to switch providers, since any new vendor would have to redo the entire project from scratch.
IBM's competitors were not pleased, and in 1969, under the threat of an antitrust suit-which later materialized-IBM announced that it would unbundle its offerings. It became possible to buy an IBM computer apart from the software, beginning a slow evolution toward in-house programming talent and third-party software makers. Nevertheless, for years after the unbundling announcement many large firms continued to rely on custom-built, externally maintained applications designed for specific purposes.
Before unbundling, mainstream customers encountered computing devices in one of two ways. First, there was the large-scale Hollerith model of mainframes managed by a single firm like IBM. These computers had general-purpose processors inside, capable of a range of tasks, and IBM's programming team devised the software that the customer needed to fulfill its goals. The second type of computing device was the information appliance: a device hardwired for a particular purpose. These were devices like the Friden Flexowriter, a typewriter that could store what was typed by making holes in a roll of tape. Rethreading the tape through the Flexowriter allowed it to retype what had come before, much like operating a player piano. Cutting and pasting different pieces of Flexowriter tape together allowed the user to do mail merges about as easily as one can do them today with Microsoft Word or its rivals. Information appliances were substantially cheaper and easier to use than mainframes, and they required no ongoing rental and maintenance relationship with a vendor. However, they could do only the tasks their designers anticipated for them. Firms could buy Flexowriters outright and entrust them to workers-but could not reprogram them.
Today's front-line computing devices are drawn from an entirely different lineage: the hobbyist's personal computer of the late 1970s. The PC could be owned as easily as a Flexowriter but possessed the flexibility, if not the power, of the generic mainframe. A typical PC vendor was the opposite of 1960s IBM: it made available little more than a processor in a box, one ingeniously under-accessorized to minimize its cost. An owner took the inert box and connected it to common household appliances to make it a complete PC. For example, a $99 Timex/Sinclair Z-1000 or a $199 Texas Instruments TI-99/4A could use a television set as a display, and a standard audio cassette recorder to store and retrieve data. The cassette player (and, later, PC-specific diskette drives) could also store and retrieve code that reprogrammed the way the computers worked. In this way, the computers could run new software that was not necessarily available at the time the computer was purchased. PC makers were selling potential functionality as much as they were selling actual uses, and many makers considered themselves to be in the hardware business only. To them, the PCs were solutions waiting for problems.
But these computers did not have to be built that way: there could simply be a world of consumer information technology that comprised appliances. As with a Flexowriter, if a designer knew enough about what the user wanted a PC to do, it would be possible to embed the required code directly into the hardware of the machine, and to make the machine's hardware perform that specific task. This embedding process occurs in the digital watch, the calculator, and the firmware within Mr. Coffee that allows the machine to begin brewing at a user-selected time. These devices are all hardware and no software (though some would say that the devices' software is inside their hardware). If the coffeemaker, calculator, or watch should fail to perform as promised, the user knows exactly whom to blame, since the manufacturers determine the device's behavior as surely as Herman Hollerith controlled the design and use of his tabulators.
The essence-and genius-of separating software creation from hardware construction is that the decoupling enables a computer to be acquired for one purpose and then used to perform new and different tasks without requiring the equivalent of a visit to the mechanic's shop. Some might remember global retailer Radio Shack's "75-in-1 Electronic Project Kit," which was a piece of cardboard with lots of electronic components attached to it. Each component-a transistor, resistor, capacitor, speaker, relay, or dial-was wired to springy posts so that a budding Hollerith could quickly attach and detach wires linking individual components to one another, reconfiguring the board to imitate any number of appliances: radio, doorbell, lie detector, or metronome. The all-important instruction manual offered both schematics and wiring instructions for various inventions-seventy-five of them-much like a book of recipes. Kids could tinker with the results or invent entirely new appliances from scratch as long as they had the ideas and the patience to attach lots of wires to springy posts.
Computer software makes this sort of reconfigurability even easier, by separating the act of algorithm-writing from the act of wiring and rewiring the machine. This separation saves time required for switching between discrete tasks, and it reduces the skill set a programmer needs in order to write new software. It also lays the groundwork for the easy transmission of code from an inventor to a wider audience: instead of passing around instructions for how to rewire the device in order to add a new feature, one can distribute software code that feeds into the machine itself and rewires it in a heartbeat.
The manufacturers of general-purpose PCs could thus write software that gave a PC new functionality after the computer left the factory. Some early PC programs were distributed in printed books for buyers to retype into their machines, but increasingly affordable media like cassette tapes, diskettes, and cartridges became a more cost-effective way to install software. The consumer merely needed to know how to load in the cassette, diskette, or cartridge containing the software in order to enjoy it.
Most significantly, PCs were designed to run software written by authors other than the PC manufacturer or those with whom the PC manufacturer had special arrangements. The resulting PC was one that its own users could program, and many did. But PCs were still firmly grounded in the realm of hobbyists, alongside 75-in-1 Project Kit designs. To most people such a kit was just a big pile of wires, and in the early 1980s a PC was similarly known as more offbeat recreation-a 75-in-1 Project Kit for adults-than as the gateway to a revolution.
The business world took up PCs slowly-who could blame companies for ignoring something called "personal computer"? In the early 1980s firms were still drawing on custom-programmed mainframes or information appliances like smart typewriters. Some businesses obtained custom-programmed minicomputers, which the employees accessed remotely through "dumb" terminals connected to the minicomputers via small, rudimentary in-building networks. The minicomputers would typically run a handful of designated applications-payroll, accounts receivable, accounts payable, and perhaps a more enterprise-specific program, such as a case management system for a hospital or a course selection and assignment program for a university.
As the 1980s progressed, the PC increased in popularity. Also during this time the variety of things a user could do with a PC increased dramatically, possibly because PCs were not initially networked. In the absence of a centrally managed information repository, there was an incentive to make an individual PC powerful in its own right, with the capacity to be programmed by anyone and to function independently of other computers. Moreover, while a central information resource has to be careful about the places to which access is granted-too much access could endanger others' use of the shared machine-individual PCs in hobbyist hands had little need for such security. They were the responsibility of their keepers, and no more.
The PC's ability to support a variety of programs from a variety of makers meant that it soon outpaced the functionality of appliancized machines like dedicated word processors, which were built to function the same way over the entire life of the machine. An IT ecosystem comprising fixed hardware and flexible software soon proved its worth: PC word processing software could be upgraded or replaced with better, competing software without having to junk the PC itself. Word processing itself represented a significant advance over typing, dynamically updated spreadsheets were immensely more powerful than static tables of numbers generated through the use of calculators, and relational databases put index cards and more sophisticated paper-based filing systems to shame. Entirely new applications like video games, beginning with text-based adventures, pioneered additional uses of leisure time, and existing games-such as chess and checkers-soon featured the computer itself as a worthy opponent.
PCs may not have been ideal for a corporate environment-documents and other important information were scattered on different PCs depending on who authored what, and enterprise-wide backup was often a real headache. But the price was right, and diffidence about them soon gave way as businesses could rely on college graduates having skills in word processing and other basic PC tools that would not have to be relearned on a legacy minicomputer system. The mature applications that emerged from the PC's uncertain beginnings provided a reason for the white-collar worker to be assigned a PC, and for an ever broader swath of people to want a PC at home. These machines may have been bought for one purpose, but the flexible architecture-one that made them ready to be programmed using software from any number of sources-meant that they could quickly be redeployed for another. Someone could buy a PC for word processing and then discover the joys of e-mail, or gaming, or the Web.
Bill Gates used to describe his company's vision as "a computer on every desk and in every home, all running Microsoft software." That may appear to be a simple desire to move units-nearly every PC sold meant more money for Microsoft-but as it came true in the developed world, the implications went beyond Microsoft's profitability. Significantly, Gates sought to have computers "all running Microsoft software" rather than computers running only Microsoft software. Windows PCs, like their Mac OS and Linux counterparts, do not insist that all the software found within them come from the same vendor and its partners. They were instead designed to welcome code from any source. Despite Microsoft's well-earned reputation as a ruthless monopolist, a reputation validated by authorities in multiple jurisdictions, a Microsoft PC on nearly every desk can also be interpreted as an ongoing invitation to outside coders to write new software that those PCs can run.
An installed base of tens of millions of PCs ensured the existence of pre-tilled soil in which new software from any source could take root. Someone writing a creative new application did not need to persuade Microsoft or Apple to allow the software onto the machine, or to persuade people to buy a new piece of hardware to run it. He or she needed only to persuade users to buy (or simply acquire) the software itself, and it could run without further obstacle. As PCs were connected to the Internet, the few remaining barriers-the price of the media and corresponding trip to the computer store-were largely eliminated. People could simply click on the desired link, and new software would be installed.
Networked PCs may have been purchased for a variety of narrow reasons, but collectively they represented openness to new code that could be tried and shared at very little effort and cost. Their manufacturers-both hardware and operating system makers-found their incentives largely aligned with those of independent software developers. The more outside developers there were writing new code, the more valuable a computer would become to more people. To be sure, operating system makers sometimes tried to expand their offerings into the "application space"-for example, Microsoft and Apple each developed their own versions of word processing software to compete with third-party versions, and the Microsoft antitrust cases of the 1990s arose from attempts to link operating system dominance to application dominance-but the most successful business model for both Microsoft and Apple has been to make their computers' operating systems appealing for third-party software development, since they profit handsomely from the sale of the platforms themselves.
* * *
The Hollerith model is one of powerful, general-purpose machines maintained continuously and exclusively by a vendor. The appliance model is one of predictable and easy-to-use specialized machines that require little or no maintenance. Both have virtues. The Hollerith machine is a powerful workhorse and can be adapted by the vendor to fulfill a range of purposes. The appliance is easy to master and performs well the task for which it was designed, but not much else. Neither the Hollerith machine nor the appliance can be easily reprogrammed by its users or by third parties, and, as later chapters will explain, "generativity" was thus not one of their features.
A third model eclipsed them: powerful desktop PCs that were adaptable to many different tasks and accessible to anyone who wanted to recode them, and that had the capacity to connect to an Internet that was as good as invisible when it was working well. Perhaps the PC model of computing would have gathered steam even if it had not been initially groomed in hobbyist backwaters. But the strength of the Hollerith model and the risk aversion of many commercial firms to alternatives-"No one got fired for choosing IBM systems"-suggest that the idea of user-maintained and user-tweaked computers running code from many different sources was substantially enhanced by first being proven in environments more amenable to experimentation and risk-taking. These backwater environments cultivated forms of amateur tinkering that became central to major software development. Both small and large third-party applications are now commonplace, and major software efforts often include plug-in architecture that allows fourth parties to write code that builds on the third parties' code.
The box has mattered. The complex, expensive computers of the 1960s, centrally run and managed by a professional class, allowed for customization to the user's needs over time, but at substantial expense. The simpler, inexpensive information appliances intended for individual use diffused technology beyond large consuming firms, but they could not be repurposed or customized very well; changes to their operation took place only as successive models of the appliance were released by the manufacturer. The PC integrated the availability of the appliance with the modifiability of the large generic processor-and began a revolution that affected not only amateur tinkerers, but PC owners who had no technical skills, since they could install the software written by others.
Excerpted from The Future of the Internet by Jonathan Zittrain Copyright © 2008 by Jonathan Zittrain. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.
Foreword Lawrence Lessig vii
Preface to the Paperback Edition ix
Part I The Rise and Stall of the Generative Net 7
1 Battle of the Boxes 11
2 Battle of the Networks 19
3 Cybersecurity and the Generative Dilemma 36
Part II After the Stall 63
4 The Generative Pattern 67
5 Tethered Appliances, Software as Service, and Perfect Enforcement 101
6 The Lessons of Wikipedia 127
Part III Solutions 149
7 Stopping the Future of the Internet: Stability on a Generative Net 153
8 Strategies for a Generative Future 175
9 Meeting the Risks of Generativity: Privacy 2.0 200