High Stakes, No Prisoners: A Winner's Tale of Greed and Glory in the Internet Wars
By Charles H. Ferguson
W. W. Norton & Company
Copyright © 2001 Charles H. Ferguson
All rights reserved.
Starting the Company
The best electric train set any boy ever had!
Orson Welles, speaking of the RKO movie studio
Do you sincerely want to be rich?
The favorite line of Bernard Cornfeld, a major swindler
People start companies for a number of reasons. Obviously there's the money, and frequently there's ego; sometimes there's a passion for improving the world. But in high technology, there's another reason too; it's fun, especially when you win. There is an intense euphoria to playing one of the fastest-moving, highest-stakes, flat-out competitive games in the world. Playing the technology game for real is, without question, a serious peak experience. It almost compensates for the tension.
Start-ups are the intellectual equivalent of driving a small, fast convertible with the top down, the stereo playing Keith Jarrett or Bach or J. J. Cale very loud, doing a hundred miles an hour on an empty country road at sunset. You might crash, but the experience is visceral, immediate, and intense. In other words, while competitive energy may not be the noblest of reasons for starting a company, it is an extremely powerful one. Not that I've ever actually exceeded the speed limit myself, of course.... Still, others seem to agree that the comparison is apt. Bill Gates used to take his Porsches out at 4:00 A.M. for a spin at 150 miles per hour, and I think that the high produced by competing and winning is a major reason that smart people often enjoy working for Microsoft. And Microsoft is, I think, one of only a few large companies that can deliver that experience. The purest high is found in start-ups. So everyone who spends time in the Valley, and who is smart, ambitious, and competitive, contracts the start-up disease.
I was no exception. I have serious adrenaline addictions, both intellectual and physical. I certainly wanted financial security, along with the ability to speak my mind without worrying about getting fired. But I also had another reason for starting a company, one that for me was extremely powerful: the need to prove myself, to do something real. This turned out to be an even more serious matter than I understood at the time. I had gnawing doubts about myself, and I could tell that other people, including many I respected, did too. By the time I was even one tenth of the way into doing Vermeer, I realized that these doubts were well founded. There simply is no substitute for doing something for which you are personally accountable and whose success can be clearly evaluated.
By late 1993, I was making a good living as a consultant, giving strategic advice to the top managements of large high-technology companies. I think that I mostly gave good advice, and by the standards of consultants I was quite courageous in calling things the way I saw them. But I knew that I didn't always go as far as I should have, and I also realized, deep down, that it was a lot easier to give advice than to really be responsible for implementing it. My doubts about consulting were made sharper by the glaringly obvious difference between several politicized, mediocre large clients and the Silicon Valley entrepreneurs who were eating their lunch.
In this regard, high technology is unusual, because it is young and changes so fast. Unlike other industries, which tend to be run by career bureaucrats, the CEOs of the best high-technology companies have generally started and built their companies themselves, and sometimes invented their industry's principal technologies. People like Bill Gates, Andy Grove and before him Bob Noyce at Intel, John Warnock at Adobe, Wilf Corrigan at LSI Logic, Larry Ellison at Oracle, Jerry Sanders of AMD, and Bob Metcalfe of 3Com didn't make their fortunes by climbing up the organization. Bob Metcalfe didn't just start 3Com, which became the dominant provider of Ethernet networking technology. He also invented Ethernet.
Some of these people are pretty arrogant, but they deserve to be. They're very smart, tough, and usually have good academic credentials as well. The founders of Intel all had Ph.D.s in physics; Jim Clark was a professor at Stanford; Bob Metcalfe and John Warnock were researchers at the Xerox Palo Alto Research Center (PARC). But they didn't just get degrees, do good research, and get tenure; they also struck out on their own and submitted themselves to the reality test. Their general view is that this sets them apart, and that only someone who has been there deserves to be taken seriously. So if you're giving advice in Silicon Valley, the unspoken question always hanging in the room is "Well, what have you ever done?" Deep in the pit of my stomach I knew that this was a very good question, and that I didn't have an answer.
My personal background fueled both my insecurities and my drive. While I was growing up in San Francisco, my family was extremely poor (we lived on about three thousand dollars a year), primarily because my father was an alcoholic who died when I was in my late teens. Serious alcoholics do not make for fun families, and I fought with my parents continuously from age ten until my mother divorced my father five years later. During most of this time, furthermore, I was crippled by a childhood disease that left me physically dependent upon my parents and almost bankrupted them. I emerged from this experience shy and solitary, but also intensely driven, fiercely independent, judgmental, impatient, permanently on the lookout for threats, and highly competitive. Those aren't the most charming instincts; happily, I mellowed somewhat as I grew older. To my surprise, however, the struggles to launch Vermeer brought everything to the surface again. When I realized what I was in for, I became a tough, impatient, angry, suspicious guy. At times this caused major problems for me, my friends, those who worked with me, and for the company. On the other hand, my toughness and skepticism also served us well. In fact, sometimes I wasn't suspicious enough.
I had first noticed computers as an undergraduate at UC Berkeley, where I majored in mathematics but also took some computer science. I discovered that I was a dilettante and also took courses in genetics, economic history, philosophy, art history, literature, political science, and sociology. I started enjoying life a lot and soon realized that I didn't have the personality for technical work. The dominant culture of mathematicians, computer scientists, and superb hackers is solitary and introverted. But with every passing year I grew more attracted to travel, dinner parties, tennis, jazz clubs, political debates, and the company of the opposite sex. In order to postpone adulthood as long as possible, I took leave from Berkeley just before graduating and went to Europe. Because my father had died before I finished college, I qualified for three hundred dollars per month in Social Security benefits if I remained a student. I therefore enrolled in a fifth-rate French university, lived with an extraordinary family in a twelfth-century village in Normandy, and spent a year reading books and jogging on the beach. Lest you regard this as a poor use of federal money, let me say quite seriously that it was not. It saved my sanity and allowed me for once to reflect on my past and future. Every human being deserves, and benefits from, such a period in their youth.
As soon as I returned to Berkeley to finish working for my degree, I read a magazine that changed my life. The September 1977 issue of Scientific American was entitled Microelectronics. It was a remarkably broad and farsighted survey of semiconductor technology, including microprocessors and early personal computers. It was immediately clear to me that this technology was both fascinating and of great importance. Although I did many things throughout the subsequent decade, I never lost my interest in information technology, and I became ever more intrigued not only by the technology itself but also by its economic, social, and political implications.
So instead of going to graduate school in mathematics or computer science, I entered the Ph.D. program in political science at MIT. It was perfect: I could be a dilettante again. I roamed through MIT and Harvard, taking courses in technical departments as well as in economics, history, philosophy, and, occasionally, political science. I began working with Carl Kaysen, who had just come to MIT. Carl knew everyone, had been everywhere, had done it all: OSS in the Second World War, professor at Harvard, deputy national security advisor in the Kennedy administration, director of the Institute for Advanced Study at Princeton, on the boards of UPS and Polaroid, member of the editorial board of Foreign Affairs, et cetera. He's also a wonderful man, incredibly devoted to his students, and has the best collection of wicked one-liners on the planet. In 1981 Carl invited me to join a faculty seminar studying American competitiveness. I noticed that Japan was making inroads against the U.S. not only in steel and automobiles but also in high technology, particularly the semiconductor industry. I wrote several papers arguing that semiconductors were strategically critical to information technology, and that the Americans were about to be hosed by the Japanese, whose superior financial resources, strategic cohesion, and manufacturing discipline gave them an advantage as the industry became more capital intensive. This turned out to be largely right.
Shortly afterward, after finishing my coursework but before my Ph.D. thesis, I dipped my toe into the real world for the first time. In 1982, I took leave from MIT and spent two years working in Silicon Valley: first at IBM's Santa Teresa Laboratory, which developed half of IBM's mainframe software, and then for six wild months at Oracle, which at the time had about seventy employees.
This experience contributed to the first major mistake I made, which was to overestimate large U.S. companies and underestimate the Silicon Valley system. The mistake was not completely weird, given the specifics of my limited experience. Although by 1984 IBM was in decline, it still contained a remarkable number of gifted, ethical, yet tough people. Conversely, life at Oracle in 1984 was quite simply insane.
Larry Ellison, Oracle's founder and CEO, was brilliant, driven, witty, charming, ruthless, wildly egotistical, a superb salesman, a notorious womanizer, and a seriously random number. Anything could happen, and often did. For example, one fine day I was part of a group that had a late-morning meeting with Larry. When we convened at his office, he wasn't there. This was not unusual. Larry had just acquired a new wife (his third), and he apparently liked to go home during the day to make love with her. But our meeting was important, so we piled into a car and took off for his new house in Woodside. My manager, a former classical cellist who was as stunned by Larry's behavior as I was, turned to me in the car and said, "And now, we journey to the land where time has no meaning." When we arrived, Larry came to the door, tousled but in good humor, and invited us into a huge house devoid of furniture, at least on the first floor, where we transacted our business and left. But this was nothing. Larry has since become known, among other things, for buying supersonic combat aircraft for his personal use, securing the felony conviction of a former lover, and winning an Australian yacht race that took place in a storm that killed six people.
I liked Larry, actually. While he was severely warped, he (and the people who routinely saved him from himself) got the important things right. And running start-ups isn't a popularity contest. Larry perceived the importance of relational databases and of SQL (the standard language for dealing with them) earlier than anyone else, including IBM, which had invented them. Oracle grew up to become the third-largest software company in the world. But although Larry was a nice planet to visit, I didn't want to live there. In six months at Oracle I had five different managers, the last of whom was a former CIA agent who was much crazier than Larry was, but nasty and not half as bright. I had a hard time keeping a sense of humor, and eventually I got uppity enough that they canned me. In one of the most hysterically funny conversations I've ever had, Larry called me into his office to give me a sober parting lecture on the importance of maturity, diplomacy, self-discipline, patience, and tact.
I returned to MIT to finish my Ph.D.; with great kindness, my advisor arranged a fellowship for me. I ended up spending the next seven years there, primarily studying the role of technology in economic growth and global competition, first as my dissertation topic and then as a postdoctoral fellow. My thesis focused on the semiconductor industry and correctly forecast the Japanese DRAM (memory chip) cartel that materialized in 1987 and remained in operation until broken by the Koreans in 1991. I became heavily involved in policy efforts to contain the Japanese and help strengthen the U.S. industry, an issue about which I was quite passionate. I testified before Congress; consulted to the White House, the U.S. Trade Representative, and the Defense Department; wrote New York Times op-ed pieces and Harvard Business Review articles. I became friends with a number of White House officials. I worked on the formation of Sematech (a government-industry semiconductor manufacturing technology consortium) and on the 1986 Semiconductor Trade Agreement that forced open the Japanese market and penalized dumping.
During this time, I argued that in semiconductors the scale, strategic coordination, and massive intellectual property theft of the Japanese posed a major threat to the fragmented U.S. industry. I further argued that U.S. policy should counteract this strategic asymmetry by forcing open the Japanese market, supporting U.S. research and development, and disciplining predatory and cartelistic behavior by the Japanese. This argument was correct, and I think it contributed significantly to national debate. I also argued, however, that U.S. policy should therefore favor a consolidation of U.S. high technology around its largest firms, such as IBM, AT&T, Xerox, Kodak, and DEC, on the grounds that only they had the resources to stand up to Japanese competition. I discounted the value of Silicon Valley start-ups on the grounds that they were too small and fragmented to compete with the Japanese. This was deeply wrong, the product of inexperience and academic disconnection from reality.
Despite this, or perhaps because of it, large technology companies started asking me to consult for them regarding their commercial dealings with the Japanese. Gradually my consulting broadened to include many strategic issues. At this point, my little voice began to nag me. I realized that I wasn't qualified to make some of these decisions, and I learned that often my clients weren't either. Many of the large-company executives for whom I consulted were political and mediocre, while many of the start-ups I talked to were full of brilliant, aggressive people. Moreover, by the late 1980s IBM, DEC, and other large firms were failing to embrace major new technologies, even technologies that they had themselves invented. This problem was particularly serious where new technology threatened the personal empires of executives.
I also noticed enormous differences in governance and personal incentives between the Fortune 500 and start-ups. The boards of directors of my clients were primarily composed of useless decorations, like university presidents or former government officials, with no stake in the company and neither the ability nor the incentive to rock the boat. Start-up boards, on the other hand, were composed of founders and investors who often knew what they were doing and whose wealth depended in an extremely direct way on the company's success. Start-up governance isn't perfect, as you will see in this book, but it is better, on average, than entrenched mediocrity.
By the early 1990s I was forced to admit to myself that the Japanese were not rolling over the American technology sector as I had predicted. In quiet moments, I realized that my mistake derived from my lack of any real, direct experience, which would have forewarned me that the Japanese, and the Fortune 500, each had limits that I might not find described in any academic article. While the Japanese dominated commodity manufacturing businesses such as displays, memories, and consumer electronics, younger American companies were winning in businesses based upon rapidly changing, innovative systems designs. In several major new areas, especially where they had the opportunity to establish a proprietary industry standard, Americans completely dominated the world market. Once Intel's microprocessors and Microsoft's Windows won the market-share battle, they essentially owned the business, and all other vendors, including the Japanese, had to adapt themselves to the so-called Wintel (Windows-Intel) standard. Even in high-volume commodity manufacturing industries such as personal computer systems, the superior designs and flexibility of companies such as Compaq, Dell, and Gateway ensured that Americans dominated the industry.
By 1992, I had figured out what was going on. The key prize in high technology is proprietary control of an industry standard. Windows is again the most obvious example, but there are many others. A number of specific "architectural strategies" can be used to achieve this position, and I cataloged them. While there are many complex variations, they involve two basic goals. The first is acquisition of a sufficiently high market share that your product becomes a de facto industry standard. The other is to create lock-in, so that users cannot easily switch to clones or rival systems. The two primary sources of lock-in are user interfaces (UIs), the means by which end-users interact with your product, and application programming interfaces (APIs), which are the standardized technologies used by engineers to create applications based upon your product.
I also noticed that standards and systems architectures played a major role in defining the structure of the industry and the profitability of the various niches within it. There were architectural platform leaders, like Microsoft or Intel; imitators or clones, such as AMD (which copies Intel); architectural losers, such as Apple; commodity manufacturers, such as the PC and component vendors; and point product vendors, such as Intuit in tax software, who create specialized applications for major platforms.
The battle to create and own a proprietary industry standard generates rapid improvements in price and performance, at least until somebody emerges totally dominant. In fact even after a monopolist emerges, there is still considerable pressure to innovate, because unless you can induce your installed base to upgrade frequently, you have a hard time continuing to grow. Again, the most obvious example is the personal computer industry, which has been controlled by Intel and Microsoft since the mid-1980s. There is a risk that a monopolist can misbehave, either through sheer inefficiency (like IBM in the 1980s), or through being excessively predatory, as with Microsoft now. But slow-moving, least-common-denominator, nonproprietary, committee-developed standards are often much worse than any monopoly. Imagine if some international standards committee had created a universal PC standard in the early days. We'd still be using brightly colored, inexpensive, totally obsolete, probably Japanese and Korean PCs today, just as we still use fax machines, VCRs, and television sets that are a generation or more behind where they could be.
Marketplace contests over proprietary, or de facto, industry standards favor the swift and innovative, not the deep pocketed, so American start-ups do well. This helped explain why the Japanese and Koreans dominated cost-sensitive, mass-produced commodity sectors while the Americans dominated more innovative and complex systems. It also helped explain how the apparently fragmented, small-scale Silicon Valley system could organize itself to create large new industries. As soon as the market settled on a standard, large numbers of companies were created quickly through the start-up system, and they could operate quite independently. As long as they adhered to the standard, they had only minimal need to talk to one another directly. This is, in fact, how the personal computer and Internet industries both work. At least in new, rapidly moving industries this enables the Silicon Valley system, collectively, to rival or even exceed the scale and power of much larger firms, whether IBM or the Japanese. And when large firms tie themselves in knots through internal politics, Silicon Valley wins every time.
As I thought about this in 1992, I realized that IBM's day of reckoning was coming soon. I developed these ideas in a Harvard Business Review article and a book that I co-authored with Charlie Morris, a fellow consultant. The book (Computer Wars) analyzed the rise of Silicon Valley and the decline of IBM and was published within just a few weeks of IBM's crash in December 1992. Charlie and I embarked on a brief crusade, telling the world how badly IBM's management and board of directors had damaged the company. We published several articles, including an open letter to the IBM board, which we FedExed to them. I take great pleasure in the thought that we helped force John Akers out in February 1993; I wish most of the board had been replaced too. Their conduct made hundreds of thousands of layoffs necessary and had probably resulted in several hundred billion dollars' worth of obsolete computers being forced on users over the previous decade. By pointing this out, Charlie and I did the world some good.
But around the same time, I also made my next two mistakes. Both of them provided lessons that helped me later in starting Vermeer. One involved IBM again; the other involved Xerox, for which I consulted intensively in the early 1990s.
IBM's crisis in 1992-93 was enough to wake up anyone. In a few months the company lost over $30 billion, and its stock declined by 60 percent. John Akers was forced out, and after a CEO search that, to my sheer terror, included John Sculley of Apple (more about that in a minute), Louis Gerstner was hired to run IBM. Gerstner's previous job was running a cigarette company, not the highest calling, but we could overlook that if he saved IBM. Gerstner rapidly laid off more than two hundred thousand people (more than half of IBM's employees), replaced many senior executives, and announced that IBM would build a major systems integration and service business. So far, so good. He also, however, recentralized management of the company; continued to invest heavily in IBM's obsolete mainframe and minicomputer systems; avoided aggressive efforts to develop new products based on leading-edge technologies; and surrounded himself with financial guys rather than real technologists. No strategy was visible. When asked about all this, Gerstner replied, "The last thing IBM needs now is vision."
I thought this was insane and criticized Gerstner both publicly and in some major consulting engagements. I predicted IBM's problems would get even worse because it would be dominated by businesses that were either declining (mainframes and minicomputers) or low-margin commodities (like disk drives and PCs). But Gerstner was right, and I was wrong. IBM was able to restore healthy mainframe and minicomputer sales and profits, even though those systems are quite obsolete. Gerstner also saw that IBM's global operations and broad product line were the perfect platform for a conventional, dull, but large and successful systems integration business. Gerstner did save IBMby turning it into a boring but efficient company, which was exactly the right thing to do. I gained respect for implementation relative to grand strategy.
Then there was Xerox. Xerox asked me to look at Xerox PARC, the famous research center in Palo Alto that fifteen years earlier had invented just about everything in personal computers and networking. I was charmed into suspending my skepticism, despite many warning signs. PARC had become disconnected from reality, even though most of the world's venture capitalists were literally only a mile away, on Sand Hill Road. I fell under the spell of an extremely brilliant, messianic, manipulative, ambitious young researcher there. He asserted that PARC had developed advanced digital technologies that would enable the creation of revolutionary, inexpensive digital systems that would combine image processing, printing, copying, and fax capabilities. In fact, these products would do everything but your laundry.
So I did something extremely stupid: I believed this guy and didn't check his claims carefully. I recommended to Xerox top management the creation of a new division to develop these products. It was a fiasco. In my defense, I wasn't the only one at fault, and some of the work I did for Xerox was extremely good. I would say that on a net basis I helped the place understand digital systems, and Xerox has done just fine. But that new division wasn't my finest hour. I learned the importance of facts and skepticism, which served me well at Vermeer. And once again my little voice told me that if I had some real experience, and if I had known that I would be held accountable for the consequences of my advice, things might have turned out better.
Next came a truly Kafkaesque experience: Apple. Here, I didn't make any big mistakes, but I learned some important lessons nonetheless. Shortly after IBM's crisis began and just after Akers was forced out, in early 1993, I got a telephone call from my colleague Charlie Morris. "We're flying to California tomorrow," he said. "We've just been hired by John Sculley to decide whether he should sell Apple to IBM and whether he should become IBM's new CEO."
We flew out, spent a day briefing Sculley, and then spent a week analyzing the merits of an IBM-Apple merger. In the back of my mind I was already worried about Sculley. Everyone knew that over the previous decade Apple had totally blown its opportunity to usurp Microsoft. Throughout the 1980s the Macintosh operating system had been light-years ahead of Microsoft. But instead of licensing the software to the entire computer industry, Apple had chosen to restrict the Mac OS to its own, second-rate, hardware. As everyone in the industry knew, this was a recipe for a few years of easy profits, to be followed by guaranteed disaster in the long run. Sculley had to bear major responsibility for that, since he had been the CEO the whole time.
We recommended against the IBM merger, and also against his taking the CEO job, which he said was his for the asking. I'll never know whether that was true, but by the time I'd spent a couple of weeks working for him and studying Apple, the thought of John Sculley running IBM scared the hell out of me. Helping to keep him away from IBM during its crisis might be one of my greatest contributions to humanity. Furthermore, it was abundantly clear that Apple was headed over a cliff just like IBM. Sculley seemed to realize that too, but was doing nothing about it.
I nonetheless proposed to Sculley that we change focus, analyze Apple's strategic condition, and make recommendations. He agreed. We were given the run of the place and offices in his building. But the fact that our client was John Sculley posed a dilemma, because John Sculley was a big part of the problem, along with the board that kept him there. We prepared a detailed report, recommending major structural and strategic changes, and scheduled a half-day presentation to Apple's executive committee. At the last minute it was canceled. Suddenly Sculley stopped returning my phone calls, nobody had ever heard of us, our bills weren't getting paid. We asked what was going on and received no answer, although we eventually did get paid. Shortly afterward Sculley was forced out. I tried to contact several board members; none returned my calls. Almost immediately after leaving Apple, Sculley (ever oblivious) took a job as CEO of an obviously fraudulent company, only to quit four months later after the company was besieged by reporters, investigations, and lawsuits. At Apple, disaster followed disaster as revenues, profits, and market share went through the floor. Sculley was replaced by Michael Spindler, who was as incompetent as Sculley; Spindler was in turn quickly replaced by Gil Amelio, who was incompetent too; and Amelio was quickly replaced, in his turn, by Steve Jobs, who had just conned Amelio into buying NeXT, his failing company, for $400 million. Jobs kept Apple afloat, in part by doing a deal with Bill Gates that practically turned Apple into a Microsoft subsidiary. My question: Where's Tom Wolfe when you need him? Or perhaps Hunter S. Thompson would be more appropriate.
In addition to being absurd, the Sculley episode bothered me. Our report was hard-hitting by consulting standards, but it didn't point the finger at Sculley and Apple's board as bluntly as it should have, because that's not the way consultants talk to clients if they want to stay employed. I began to feel as never before that I did not want to spend the rest of my life consulting, and that whatever I did, I wanted complete security to say what I thought.
But it all served a cosmic purpose. After Sculley was forced out, I was asked to analyze Apple's opportunities in electronic publishing, multimedia, and online services. My client was David Nagel, a nice man who rocked no boats, and who was in charge of Apple's software operations. (David is now the chief technology officer of AT&T, which is slightly frightening.) Apple was still a mess, incapable of using my advice no matter what it was. But over the following six months, what I learned from that assignment got me thinking and led me to my cool idea, which in turn led to Vermeer and the Web.
I came to realize that the structure of the online services industry was all wrong. In the months after I completed my assignment, I began to suspect that online services represented a major software opportunity that had been overlooked. All online services used obsolete technology and mutually incompatible systems, which were generally awful, too. America Online at least took advantage of modern graphical user interface (GUI) technology, so its service was easier to use, but otherwise it had the same problems as all the others. There was a problem here, and maybe an opportunity. I came up with the idea of a universal "viewer" or "browser," a single piece of software that would allow you to view and use any of those services. I briefly considered starting a company based upon it, but I came to realize that it was a bad idea. My bad idea, however, was edging toward where my Good Idea was waiting.
I Start to Get It
With the proliferation of personal computers, it became increasingly important for users to get access to information stored in other computers, whether across the hall or halfway around the world. This problem got worse with the advent of powerful, inexpensive "servers," which provided services for workgroups and small companies. By the early 1990s servers costing less than five thousand dollars were being sold in the millions per year by companies such as Compaq and Dell. Organizations were coming to have enormous quantities of electronic information that they wanted to distribute and that people wanted to see: employees, customers, and shareholders, among others.
Thus the incompatibilities between the various commercial online services were just one special case of a far larger problem. Not many people understood this problem well, and only a few products addressed it. Most of them were bad, or at best limited. For example, Adobe was developing a technology, now called Acrobat or PDF, that allowed you to view electronic documents created by other people, even if you didn't have the application software, like Word or Lotus 1-2-3, that originally created the document. But Acrobat had big problems. You needed a small rocket engine to run it, it was a pain to create Acrobat documents, and you couldn't edit them. A few other people, including Apple, developed similar products; they all sank without a trace. The problem, however, was very real: people clearly needed a simple way to publish, distribute, and view electronic documents.
I had learned by now that it was a good sign, for an aspiring entrepreneur, if the incumbent industry was a mess, and it was an especially good sign if the incumbents' business model would lead them to resist innovation and self-cannibalization. CompuServe, Dialog, LEXIS, Prodigy, and the like were all based on expensive, centralized, inflexible, mutually incompatible designs. If a company like Fidelity Investments wanted to distribute its marketing literature through one of these services, it would have to undertake a major effort, pay an absurd amount of money, and even then would not be able to distribute real-time data, like the current market price of its funds. In addition, none of these services would be any help to a company that wanted to deploy a big internal information distribution system, like a big insurance company that wanted to make its policy documents and databases available online to all of its agents, for instance.
That was when my lightbulb went on. The solution to this problem, blindingly obvious once you thought of it, was for a software company to develop a single, standardized software product that would allow anyone and everyone to create and operate their own online service. Such a product would logically have a three-part architecture: a viewer, a server, and a visual development tool. Each part would be connected to the others by an open, standardized interface. This would enable any copy of the tool to develop content for any server; for servers to talk to one another; and for any copy of the viewer to look at any service. Moreover, you could use the same system for internal, private services and for public online services.
It was staggeringly clear to me that if a software company could solve that problem, developing an architecture and product for the online distribution and retrieval of electronic information, it would have a golden opportunity to establish a proprietary standard for online services. That was my Good Idea. And I couldn't think of any reason why such a product was beyond the reach of current technology.
It was also instantly clear to me that this opportunity was huge. I could envision a long list of potential users: news media, financial-services firms like Fidelity, catalog retailers like Lands' End, or anyone who wanted to publish information about their company, their financial performance, their job openings. Furthermore, the same software could be used for internal distribution of memos, expense reports, repair manuals, credit information within banks, price changes, legal forms, regulatory updates. The list was endless. The potential market was big enough that the software could be sold fairly cheaply, which would help it proliferate and become an industry standard, which customers and end-users would love because it would solve the problem of incompatibility. If everyone used the same software, users could tap into any service, companies could move information between services, and others could create directories and indexes of services. The software could also interact easily with a company's existing data systems, enabling online access to real-time information. An electronic shopping catalog, for example, could tie into the company's inventory database and accounting systems in a graceful, efficient way and tell you if the widget you wanted was in stock.
At this point you're probably thinking, But that's what the World Wide Web does. Exactly. But at the time, I didn't know that. I had reinvented the wheel. However, it also turned out that my wheel was richer and more complete. At the time, I knew about the Web in a vague way. But I didn't fully realize what it did, or how it worked, and there were only about a thousand Web servers in use, mostly in universities, versus several million now. Likewise I knew about the Internet, but discounted it, too, because it was run by the government, was reserved for research and governmental purposes, and was mainly a convenient way for academics to send one another mail. Bill Gates missed it too, far longer than I did, so at least I had good company.
Dipping My Toe In
Now that I had a truly cool idea, I checked more seriously for competitors. I could see none on the horizon. Microsoft was developing an online service code-named "Marvel" (which became MSN), but it sounded like just an updated version of CompuServe or AOL, which is exactly what it was. The one potentially serious threat was Lotus Notes, a product that allowed big companies to set up and manage internal information services. I have a cousin, Doug Ferguson, who then worked at Lotus, and I asked him for the name of a Notes consultant who could answer detailed questions. I was already getting paranoid about secrecy, a useful emotion, as you'll see, although I should have been more diplomatic about it. So I made my consultant sign a nondisclosure agreement, and even then, I told him absolutely nothing about why I was asking all those questions.
I felt much better afterward, for it was clear that Notes wasn't the answer. For example, it was not designed for large-scale, anonymous public services: every user had to be registered by a system administrator. Furthermore, Notes was complicated and expensive to set up, maintain, and even to use. There were no modern visual tools for creating Notes applications; people did them by hand, the hard way. Notes was also a fairly closed system, with many nonstandard ways of performing common tasks.
The glaring deficiencies in Notes were fairly characteristic of Lotus at the time. Much as at IBM and Apple, the Lotus CEO, Jim Manzi, had allowed the company to become politicized and bureaucratized, and it was clearly in decline. The spreadsheet and PC applications business was getting killed by Microsoft, and despite being the first and only "groupware" product, Notes was taking off slowly. It had gotten to the point, in fact, that you didn't have to worry about competition from any Lotus product anymore. Nearly two years later, in the middle of Vermeer's start-up, IBM purchased Lotus for the astonishing price of $3.5 billion. Manzi resigned almost immediately and became CEO of an Internet start-up, which rapidly sank without a trace.
So I decided that Notes posed no major risk, although I also decided that it would be best to focus initially on external, public information services just to keep it simple. But was I ready to take the plunge? I had enjoyed consulting, was good at it, and was earning a good living; in 1993, I made nearly half a million dollars. I was single and, while I lived well, I was financially conservative and had saved several hundred thousand dollars. For me, this was quite a lot of money, but it wasn't enough to give me financial independence if I returned to research or policy work. More important, consulting itself was getting frustrating. I had some great clients, like Motorola, but I couldn't take too many more like Apple. So the idea of starting a company was tempting.
But it was also scary, so I wasted time trying to find some middle ground. I approached some big-company friends, in the hope of working out a deal: an equity stake or royalty if they used the idea. I spoke with Bob Palmer, the CEO of DEC, and some senior people at Intel and Sybase. But it was silly, as well as cowardly. I refused to disclose my idea unless they first agreed to pay me if they decided to use it. But of course they'd have to be morons to promise me money just because I asserted that I had a good idea, maybe one they were already working on. My high-level friends told me, very pleasantly, to go to hell. I gradually resigned myself to the necessity of doing this for real.
I decided to look around for a co-founder, or even two. While I had a good understanding of software, I was by no means the kind of deep technologist I would need to design this system. And while I can do a lot of things well, I knew that I'd want and need management help. I neither enjoyed nor had experience with many aspects of creating and managing an organization, ranging from major functions such as sales to the millions of details (insurance plans, office space, equipment leasing, trade show schedules, accounting) that you must deal with efficiently to make your company work.
So for most of the first quarter of 1994, I concentrated on finding a partner. A good friend at Xerox wasn't interested, but he gave me some names, and of course I had contacts of my own. I had serious discussions, always under nondisclosure, with another half-dozen people. Mitch Allen, a sharp engineering manager at Apple, liked the idea but was surprisingly timid about leaving a large company. Rex Golding, who had just left 3DO (a troubled game company now making a second try) after four tough years as its chief financial officer, seemed burned out and risk averse. Don Emery, who had just sold his small software company to WordPerfect, was still trying to negotiate a severance agreement with his new owner, Novell, and didn't seem very driven or interested. A technical consultant and programmer whom Charlie Morris and I had once hired actually said yes, but backed out a couple of weeks later. Several computer science professors proved surprisingly timid and disconnected from the start-up world. I began to realize that a lot more people talked about doing a start-up than were actually willing to take the plunge.
By this time I was getting nervous. Everybody had signed nondisclosure agreements, and I'd been careful not to give any information to the consultants I'd hired. But I had to disclose my ideas when talking to potential co-founders, and that list was getting longer by the day. Even if they all honored their agreements, I was putting my Good Idea "in play," and the idea was obvious enough, and big enough, that sooner or later someone was going to do something about it. Then one day I called a computer science professor at MIT I knew, John Guttag, and asked him if he knew anybody good. "Well," John said, "actually, yes, I think I might."
It was Randy Forgaard, and Guttag was more right than I could have dreamt. Randy had gotten bachelor's and master's degrees in computer science from MIT and had been a programmer and lead engineer for several small start-ups, so he had good experience. His most recent company had just been sold, so he was available, and while he was not wealthy, he had enough money to take a chance. Although we have very different personalities, we got along well from the start. We checked a half dozen of each other's references. After Randy signed a nondisclosure agreement, I told him the idea. He thought it over and said yes. We made a deal. Neither of us would draw a salary, but I would pay all expenses. We agreed on an initial eighty-twenty ownership split. I said I would give him more if the relationship went well (a couple of months later I gave Randy an additional 6 percent). The whole thing took a week. In April 1994, we started the company.
Excerpted from High Stakes, No Prisoners by Charles H. Ferguson Copyright ©2001 by Charles H. Ferguson. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.