

Innovation Explosion: Using Intellect and Software to Revolutionize Growth Strategies

by James Brian Quinn, Jordan J. Baruch, Karen A. Zien
 



Overview


Now from James Brian Quinn, internationally renowned author of the award-winning Intelligent Enterprise (Free Press), comes a trailblazing work on how both entrepreneurs and nations can develop, harness, and utilize intellect, science, and technology to maximize innovation and growth. With coauthors Jordan J. Baruch and Karen Anne Zien, Quinn reveals in practical terms how successful firms can intertwine intellectual capital and modern software capabilities to cut innovation cycle times by 90%, costs by 75%, and risks by 60% or more, and thereby revolutionize all aspects of innovation management, corporate strategy, national policy, and even economics.


Innovation Explosion speaks directly to managers, entrepreneurs, government policymakers, and academics. The authors introduce and develop truly novel concepts that go well beyond earlier books on how to manage innovation. The book's most notable concepts include:


  • the "software paradigm" for shortening innovation cycles, improving payoffs, leveraging resources, and decreasing risks well beyond any other approach available at this time
  • dynamic innovation organizations that move beyond teams toward independent collaborations of much greater power
  • uniquely structured knowledge systems able to create "autocatalytic" or "negative entropy" effects, yielding massive "technological multiplier" benefits for corporations and economies
  • "core-competency-with-strategic-outsourcing" strategies that allow greater concentration, leveraging, and flexibility than any other strategies presented to date
  • critical methodologies for executives to use in measuring intellectual assets and innovation, and developing them further than previously thought possible
  • practical changes in national policies, which are needed to support private innovation and which, if implemented, will greatly expand market growth and national wealth
  • international implications of the software revolution, now enabling worldwide economic development on a scale never before seen.


Innovation Explosion breaks entirely new ground in both theory and management practice.

Product Details

ISBN-13: 9780684833941
Publisher: Free Press
Publication date: 11/07/1997
Pages: 386
Product dimensions: 6.52(w) x 9.56(h) x 1.38(d)

Read an Excerpt

Chapter 3: Managing Software-Based Innovation

To take proper advantage of the revolutionary opportunities software-based innovation offers, many companies will have to improve their own internal software management capabilities dramatically. The alternative may be oblivion. Five issues are critical to this process:

1. Designing the software infrastructure as a learning system integrated, to the extent possible, from the marketplace, through operations, to upstream scientific and technological data sources and models.
2. Focusing the system not just on decreasing internal innovation costs, but on capturing, exploiting, and leveraging user information to maximize value-in-use for customers and to support their flexible remodification of innovated products or services with customers further downstream.
3. Recognizing the software system as an integral component of the organization that largely determines the language, modes, and possibilities of human interactions, and hence much of the institution's culture.
4. Utilizing the full capabilities of software as a self-learning system to define new innovative opportunities to interlink remote sources of knowledge in new ways, and to create catalytic growth effects by providing hooks onto which others can attach multiple new innovative capabilities.
5. Establishing a systematic approach to software development appropriate to the company's specific strategy and management style.

No company needs to develop all its own software, but it must know how to manage software systems and software development itself. The software industry has become one of the world's largest ($200 billion) and most rapidly growing (13% per year) industries, employing millions of programmers worldwide (see Figure 3.1). Companies can tap into this rich resource for many aspects of their software activities, but they must never lose strategic control over this vital source of innovation and competitive edge.

Software-centered design enables huge leverages in the marketplace, and it changes the very thought processes of innovation. It avoids the trap of thinking that physical materials somehow have a special intrinsic value to customers. Instead, it focuses design on customer or use features and flexibilities, making outputs more effective and easier to implement. As an end product or a component in a product or system, software itself has no intrinsic value or permanence. Its value lies solely in customers' perceptions of its value in the particular uses to which they apply it. Virtually all the value capture occurs on user premises and accrues to users, not the producers of the software. This effect is multiplied as customers consciously modify and use the software -- or software hooks on the product -- to serve their own customers. Sensible software design exploits the innovations its customers will later make.

For example, Microsoft's Windows first creates value for its buyers. Then these buyers use the software output to create value for other customers, who may then use those results to add value for still other customers. The total value produced is thousands of times that captured by Microsoft. And most of the true innovation occurs in how Microsoft's customers use the software to serve their own and their customers' needs. The same is true of the value created by smart machines or macrosystems, like SABRE, Economost, CHIPS, UNIX, Excel, Turbotax, HTML, Navigator, and Java. The profits capturable by their innovators pale beside the user value such programs create for others. To develop and capture a major part of this value, competent software designers know they must work actively and interactively with customers, their end users, and their internal process managers -- both as the software (or the product embedding it) is being developed and after it enters users' hands. Chapters 5 and 6 develop in detail many organizational and software-supported methodologies for doing this.

Because of the low costs that software permits and the dominating importance of its value in use, the efficiency of program steps in software matters far less to value creation than the functionality benefits the software generates for users and the ease and effectiveness with which customers can apply it. This customer value ratio is obscured in other innovation paradigms, which concentrate on decreasing the costs of design steps and shortening internal process times rather than on the internal learning and customer interaction processes that seem costly, time-consuming, and inefficient in themselves but create much greater value in use.

THE INTERNET MODEL OF INNOVATION

Value in use is brought to its peak in Internet innovation. Under the traditional or physics-based model of innovation, a system once developed comes under the effects of positive entropy: the output or asset value of the physical system generally begins to deteriorate immediately upon introduction or use. By contrast, if learning feedback loops are built into them, software systems tend to undergo auto-catalysis, "negative entropy," or positive gain as people link into the system, find entirely new potentials the designers did not anticipate, and enrich the software itself. Netscape conservatively estimates that on average each of its customers realizes twenty times more value from use of its software than Netscape's profits on the sale. The ratio is probably much higher.

An Innovation Explosion

As with physical products, customers often use the software (and its hooks) in totally unexpected ways. The result is an absolute explosion of possibilities. Only twenty different software features can be arranged to interact or be sequenced in no fewer than 10^18 (20 factorial) different ways. Anyone who ignores these potentials is foolish in the extreme. The classic example is the Internet, which expanded radically beyond its initial concept of providing efficient computer sharing between national laboratories into a whole virtual world of products, knowledge exchanges, and communications for its users. By 1999 Internet software sales in a variety of modes are estimated to be over $8 billion, but their total economic impact will be many times higher (see Figure 3.2). Few of these potentials could be foreseen by the Net's founders.
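A one-line calculation makes the scale of this combinatorial claim concrete: twenty features admit 20! distinct orderings, roughly 2.4 x 10^18. The sketch below is purely illustrative and simply computes that number.

```python
import math

# Number of distinct orderings (sequences) of twenty software features: 20 factorial.
orderings = math.factorial(20)
print(f"20! = {orderings:,}")   # 2,432,902,008,176,640,000, i.e. about 2.4 x 10^18
```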

Two key elements in maximizing benefits from software innovation systems are designing hooks onto which others can later connect their particular adaptations, innovations, or unanticipated demands, and providing means by which the system can upgrade itself from feedback concerning uses and innovations occurring at the nodes.

Exploiting Auto-catalysis

Not surprisingly, the Internet has used these concepts well, as have other successful large-scale systems like COSMOS II, Landsat, Navstar, Economost, SWIFT, MCI, and AT&T, which transformed their industries. The power of these concepts in terms of economic impact is overwhelming. If one multiplies the gain of a software innovation by the exponential impact of the 30 million people now, and billions ultimately, who might use the innovation in their computer systems through the Internet, the potentials become unimaginable. The World Wide Web, through its graphics application capabilities, is daily stirring up imaginative possibilities for a plethora of totally new markets, products, services, arts, and information potentials. And these will multiply and achieve huge geographical leverages as they diffuse throughout the world. Fully exploiting the auto-catalytic aspects of software innovations rests on three principles.

1. Diffusion. Software has the capacity to capture and diffuse any state of the art almost instantly and at low cost. No longer does one have to wait on the sequential processes of writing, publication, reproduction, introduction, and distribution to deliver a knowledge innovation to many remote points. In well-designed systems, once an innovation is available at any knowledge center on the system, it becomes available in detail (minus proprietary or security considerations) to all on the system. For example, when experts at the center of a financial house analyze a series of stock trends or investment opportunities, all brokerage nodes can use their results immediately. And the central system instantly commands all accounting and contact personnel's computers on how to comply with new regulations, avoid calculation errors, and control anticipated risks.

For large technical systems, software can instantly update the computers of all team and support participants with the most current design decisions, models in use, purchasing, operations, maintenance data, and so on. For example:

* Silicon Graphics now has intranet connections for all of its more than 10,000 employees. Different departments have their own Web pages, which others can access through Silicon Junction software. There are now 200,000 Web pages located on 2,400 servers. The company reports "improved timeliness, accuracy, productivity, and strengthened teamwork" as major gains. In addition to facilitating design coordination, purchase order and sales order interaction costs have dropped by 50% each while accuracy and service levels have gone up. Internal designers can find information and self-coordinate on a scale never before possible.

* Litton-PRC's Integrated Tactical Warning/Attack Systems Support (ISISS) program provides sustaining engineering support, modifications, and upgrades for the U.S. space and defense warning systems. Other PRC systems provide similar sustaining support for Jet Propulsion Laboratory's deep space systems, as well as full CADD, documentation, fabrication, implementation, test, and quality assurance support for the space vehicles themselves.

2. Open systems. Maximizing innovation impact in software systems requires as open a system as possible, both to capture innovation at the nodes and to disseminate knowledge to all centers. Total transparency may not be feasible because of the need to protect confidential information or intellectual property at the center; however, as much openness as possible at interfaces will amplify effects. If the system can also capture the learning and use characteristics of those at the user nodes, it can build these into the options available elsewhere on the system. DARPANET was essentially set up as a system where all could share in this fashion and not duplicate efforts. UNIX went one step beyond, creating an open environment community where each person could rely heavily on others to contribute their tools and learning so all could do their jobs better. Openness dominates such systems except for elements that must purposely be kept opaque in order to ensure standard interfaces, availability of basic processing capabilities, and protection against compromises of proprietary or privacy codes or disruptions of the entire system. When used properly, productivity in research soars:

* By interconnecting the more than 100 biological databases available on the World Wide Web, essentially complete cellular data are becoming available about specific biological entities like E. coli bacteria and Haemophilus influenzae. By comparing such data with those of other organisms like yeast, worms, or flies, "the underlying determinants of living systems are becoming available at a truly astounding rate." By interconnecting such data, software is creating "the threshold of a new era in biological sciences."

3. Destruction of hierarchies. With properly developed software, the need for traditional hierarchy disappears; and with reasonable management support, the often-sought-after goals of flat or fast-response network organizations can generally emerge. The software itself embodies many of the rules-disseminating and consistency-generating roles of hierarchies but without the bureaucracies that hierarchies usually entail. Yet it simultaneously multiplies innovation impacts by diffusing the firm's experience curve quickly throughout the entire enterprise, by providing positive hooks onto which customers can add value in their use domains, and by connecting internal innovators to worldwide sources of new knowledge from external specialists, suppliers, and research centers. (Chapters 4 through 9 describe how advanced firms are redesigning themselves around such systems.) The same principles applied to the national endeavor can engender a huge software-based auto-catalytic innovation system creating massive economic growth multipliers. (Chapters 11 through 13 show how.)

HOW SOFTWARE INNOVATION ORGANIZATIONS HAVE CHANGED

The key to obtaining these gains lies in effectively connecting external user interfaces, internal self-learning innovation engines, and upstream technology and raw databases on an interactive basis. Why has it taken so long to merge these essential systems? Early software designs often involved use of complex machine languages and mathematical algorithms to perform specific functions. A priesthood of mathematicians emerged who alone understood the formulaic software needed for most systems and applications. They engaged in little direct interaction with non-expert users, who could understand neither the underlying formulas nor the 1s and 0s of binary language. As users in large companies and customer institutions became more sophisticated and higher-level languages became more common, users could participate more in software design. However, advanced software development stayed in specialists' hands within both hardware-producing and user enterprises. And most of the world (both within and outside their own organizations) tended to regard these programmers with awe and suspicion.

From Priesthoods to Users

Then a transition occurred, led by time-sharing and menu-driven applications software. More innovation occurred at the users' keyboards. A few advanced algorithm creators (generally mathematicians) like Seymour Cray still dominated the design of the highest-powered computers' software and the more complex open system software of telephone, broadcast, and interactive communications networks, while specialist programmers developed most customized software and virtually all workbench and PC-level operating software. But most innovators on today's huge installed base of more than 50 million PCs are not programmers in any specialist sense. They customize prepackaged programs for their own use; and when they do write software, they use higher-level languages that vary from Word or Excel to Power Objects. On networks, prepackaged software like Navigator and other browsers performs much of the finding and data-organizing steps of early innovation, while other advanced programs like Java and HTML allow user-innovators to access and combine inputs from other sources in endless variations for their own particular purposes.

The Internet and World Wide Web have become a software framework for innovating new services through combining various elements (in the form of software) sourced from many different nodes on the networks. Essentially all innovation on the Net now occurs at the participating nodes, with no governing hierarchy or organization giving orders to anyone. The ultimate in decentralized innovation has emerged. Users -- not producer organizations or the central system -- are the real innovators on the Web, Net, and intranets, and they capture virtually all the innovations' value for themselves.

Many companies' software units have paralleled this development sequence. Most successful innovative companies now provide infrastructures or system intranets that (like the Internet) embody the rules and sophisticated methodologies for their own use. These systems enable many remote organizational nodes to customize the firm's central knowledge capabilities and innovate for their own specialized (internal user or external) customer needs. Although directly connecting user interfaces, engines, and databases could substantially increase the rapidity of advance and customer impacts of innovations, few enterprises have fully developed and exploited their systems' integrated possibilities. They have tended to suboptimize, directly interconnecting only one or two of the three key systems. Winners in the future will integrate all three.

Self-Learning Software Systems

In many cases, individual software systems can now learn from their own algorithms and reprogram themselves to find new optimums for their subsystems. Using built-in decision criteria, they constantly update themselves based on inputs from exogenous environments. The National Institutes of Health, MIT, National Weather Service, and NASA are developing large-scale self-learning systems for their areas of special interest. In industry, some companies, like the oil majors or retailers, have already integrated their market and operations modules for interactive learning. Others have integrated their technology databases and design systems. Self-learning software may teach a subsystem to take actions directly, as learning-based chess, logistics, and stock trading programs do. Or they may signal humans (or other software systems) that new forms of analysis or action are needed. Some specific examples will make the point:

* At Citibank, genetic algorithms evolve models that can predict currency trends under various past market conditions. Neural networks then discern which past market fits closest to current trends, and make forecasts accordingly. Since 1992 Citibank has earned 25% annual profits on automated currency trading -- much more than its human traders. Deere and Co. supplements its production scheduling programs with genetic algorithms to reschedule operations when machines go down. The genetic algorithms can learn from similar past situations what related scheduling problems were associated with those events, and can evolve optimized new schedules for the specific circumstance. (A toy sketch of this evolutionary approach follows these examples.) MCC combines neural nets and fuzzy logic to decipher reams of data from chemical plant operations, leading to insights that manufacturing people did not have in the past. These are tested in a simulation that generates rules for optimizing the real plant. Users, including Eastman Kodak, claim large savings.

* Computer models recently revealed that, counterintuitively, approximately 25% of the nitrogen in the Chesapeake Bay comes from air pollution from as far west as Ohio. Conventional wisdom has been that nitrogen oxide falls out from air fairly quickly. But once sufficient data were available, the bay's actual nitrogen pollution levels could not be explained in terms of local conditions. Only when two separate sets of data were combined in a self-learning system did analysts discover totally new relationships that defined the problem quite differently, and suggested needed new policy solutions. The Environmental Protection Agency's National Environmental Supercomputing Center combined a digitally fed model of air flows across the country with another model that examined waterborne and earth-source pollution intrusions into the bay. Because of extensive historical data and substantial testing, each of the programs had been found reliable in its field. Both could be updated from direct data or through a relatively few on-site samples to establish local conditions. Such techniques now allow generation, updating, and testing of hypotheses about many complex phenomena, like ozone depletion, disease causes, radioactive contamination, and weather activity. But the models require constant updating of their inputs and testing with real-world experiments to verify their conclusions.
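As a purely illustrative sketch of the evolutionary approach described above -- not Citibank's or Deere's actual systems -- the toy program below evolves a job sequence that minimizes total tardiness on a single machine by repeatedly scoring, selecting, and mutating candidate schedules. The job data and fitness rule are hypothetical.

```python
import random

# Hypothetical jobs: (job_id, processing_time, due_date)
JOBS = [(0, 4, 10), (1, 2, 6), (2, 7, 14), (3, 3, 8), (4, 5, 20)]

def tardiness(order):
    """Total tardiness of a job sequence; lower is better (the 'fitness' being minimized)."""
    elapsed, total = 0, 0
    for j in order:
        _, proc, due = JOBS[j]
        elapsed += proc
        total += max(0, elapsed - due)
    return total

def mutate(order):
    """Swap two random positions -- the genetic variation step."""
    a, b = random.sample(range(len(order)), 2)
    child = list(order)
    child[a], child[b] = child[b], child[a]
    return child

def evolve(generations=200, pop_size=30):
    """Score, keep the better half, and refill the population with mutated survivors."""
    pop = [random.sample(range(len(JOBS)), len(JOBS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tardiness)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=tardiness)

best = evolve()
print("best schedule:", best, "total tardiness:", tardiness(best))
```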

Simulations have long been used to generate and test new scenarios for strategic planning. Self-learning software has proved invaluable in optimizing many flow process, micromanufacturing (semiconductor), health monitoring, and logistics system designs. It is widely used in retailing, financial, communications, and utility service monitoring systems and provides some of the most important problem and opportunity identification capabilities for innovation in these fields. In both manufacturing and services, the ability to collect and analyze large systems' data at the micro level, through self-learning software, has become a key contributor to both innovation and fast-response customer services. Software in these fields has changed the opportunity search process in a fundamental fashion, allowing identification, tracking, and experimenting with small trends or anomalies that would otherwise be overlooked. American Express's Genesis, McKesson's Economost, American Airlines' SABRE, Kao's ECHO, and Trilogy's Conquer (all described elsewhere in this book) are only a few of many interesting approaches.

FROM SELF-LEARNING TO OBJECT ORIENTATION

The most powerful new software systems for both self-learning and option generation are evolutionary and object-oriented systems. John von Neumann, Benoit Mandelbrot, Stuart Kauffman, and others noted early on that the binary system had an analog in evolutionary systems. If one fed into the computer a non-linear formula in which the value of the unknown, once computed, became an input value for the next iteration, programs could both learn from themselves and become self-organizing systems. Given a simple set of rules to guide them, many tended to stabilize by finding a higher level of order and complexity that satisfied preprogrammed "criterion functions," defining and prioritizing desired outcome relationships.
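A minimal example of the iteration described above is the logistic map, a non-linear formula whose computed value becomes the input for the next pass; depending on a single parameter, repeated iteration settles to a stable value, cycles, or never stabilizes. The sketch is generic and not drawn from the book.

```python
def iterate_logistic(r, x0=0.3, steps=60):
    """x_{n+1} = r * x_n * (1 - x_n): each output is fed back as the next input."""
    x, history = x0, []
    for _ in range(steps):
        x = r * x * (1 - x)
        history.append(x)
    return history

# r = 2.8 settles to a fixed point; r = 3.9 keeps wandering (chaotic behavior).
for r in (2.8, 3.9):
    print(r, [round(v, 3) for v in iterate_logistic(r)[-4:]])
```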

Using object orientation, such evolutionary programs can go even further. Object-oriented systems can emulate a series of self-identified, precoded entities in large-scale economic, physical science, or life systems. Like atoms or cells in real-life systems, these "objects" are not directed by a higher-order program (or criterion function), but only by their own individually encoded rules. They constantly recombine or repel according to these rules until they self-destruct, stabilize, or emerge into a pattern that is an entirely new higher-order system -- as astronomic gas clouds form into stars, planetary systems, and galaxies. In the object-oriented version of this process, each element, or object, acts like a "Velcro ball" with precoded hooks that allow other balls to freely attach or interface with it. As these balls combine, they can create entirely new assemblies, which may ultimately emerge as new higher-order subsystems or systems with their own distinctive input and output characteristics; in other words, the software can innovate potential new subsystem and system solutions.
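The "Velcro ball" idea can be sketched in a few lines: each object carries only its own hooks and its own attachment rule, and repeated random encounters let compatible objects merge into higher-order assemblies. The objects and rules below are hypothetical, chosen only to illustrate the mechanism.

```python
import random

class VelcroObject:
    """Toy 'Velcro ball': carries its own hooks and its own local attachment rule."""
    def __init__(self, name, hooks):
        self.name = name
        self.hooks = set(hooks)

    def can_attach(self, other):
        # Local rule only: attach when the two objects share at least one hook type.
        return bool(self.hooks & other.hooks)

def self_assemble(objects, rounds=100):
    """Randomly pair objects; compatible pairs merge into a higher-order assembly."""
    pool = list(objects)
    for _ in range(rounds):
        if len(pool) < 2:
            break
        a, b = random.sample(pool, 2)
        if a.can_attach(b):
            pool.remove(a)
            pool.remove(b)
            pool.append(VelcroObject(f"({a.name}+{b.name})", a.hooks | b.hooks))
    return [o.name for o in pool]

parts = [VelcroObject("order", {"sku"}), VelcroObject("stock", {"sku", "site"}),
         VelcroObject("route", {"site"}), VelcroObject("invoice", {"account"})]
print(self_assemble(parts))   # e.g. ['((order+stock)+route)', 'invoice']
```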

Object orientation has a strong applicability to large-scale service or disaggregated product organizations where sub-elements of the system can be discretely described but interactions cannot. Its concepts are uniquely powerful in converting Internet-like infrastructures into customer-producer interfaces capable of generating great innovation and value. Given combinatory rules and operant criterion functions, objects can randomly or systematically find each other and combine to create innovative new solutions. Objects can provide, in easily manipulable form, the minimum replicable units of data that are the essence of creating mass customization economies and flexibilities. Along with massive parallel processing, they offer a new and powerful basis for managing and innovating in such systems.

Service Sector Applications

An especially broad area of application for object orientation is in service sector systems, which often require considerably less complicated model manipulation within their engines than do science or production programs. Service manipulations tend to center on relatively simple processing (disaggregating, aggregating, mixing, and matching) of data from a wide variety of input variables existing in small discrete pockets of the database and relating these to a number of different customers' individual needs. For example:

* Retailing involves identifying and handling details about many thousands of products and even larger numbers of customers, but relatively simple stocking, accounting, and billing routines for handling operations. The real complexity is in tracking thousands of objects and their characteristics from suppliers through retail shelves and into customers' hands. Similarly, bank transactions involve handling many thousands of accounts, each with perhaps hundreds of transactions per day, related to many more thousands of transaction partners. Yet transactions within each account are usually relatively simple (addition, subtraction, and interest calculation) manipulations. The same is true for airline reservation, brokerage, home entertainment, communications, monetary exchange, and credit card activities. The bulk of such operations is in handling relatively simple calculations that relate highly disaggregated databases to many remote customer interfaces (with unique geographical, demographic, and use patterns).

Although there may be some very complex calculations at a service enterprise's center -- such as air route optimization and aircraft deployment programs in airlines or the sophisticated economic models of financial houses -- these depend on the databases and definable objects that make up the main transaction stream of the enterprise. The bulk of activity in many well-run service companies (like AT&T, Wal-Mart, or Federal Express) occurs inside such systems. If data are broken down into sufficient detail and their operating engines permit, companies can simultaneously optimize flexibility at the customer contact point and maximize operating efficiencies that flow from repeatability, experience curve effects, and integrated cost and quality control. Most accomplish this by (1) seeking the smallest replicable core unit of task or information that is useful across the enterprise, (2) developing micromeasures to manage processes and functions at this level, (3) mixing these micro-units in a variety of combinations to match localized or individualized customers' needs, and (4) recapturing customer use and operating data patterns that allow the systems to learn from their own results, with or without parallel processing.
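The four steps above can be made concrete with a small, entirely hypothetical sketch: transaction lines serve as the minimum replicable units, micromeasures roll them up by any key, and the same units are remixed into an individualized offer. The data, names, and offer rule are invented for illustration only.

```python
from collections import defaultdict

# Step 1: the minimum replicable unit -- one record per individual transaction line.
transactions = [
    {"customer": "C1", "sku": "A", "qty": 2, "channel": "web"},
    {"customer": "C1", "sku": "B", "qty": 1, "channel": "store"},
    {"customer": "C2", "sku": "A", "qty": 5, "channel": "web"},
]

def micromeasure(records, key):
    """Step 2: micromeasures computed at unit level, rolled up by any chosen key."""
    totals = defaultdict(int)
    for r in records:
        totals[r[key]] += r["qty"]
    return dict(totals)

def offer_for(customer, records):
    """Step 3: mix the same micro-units into an individualized offer (toy rule)."""
    bought = sorted({r["sku"] for r in records if r["customer"] == customer})
    return f"Cross-sell bundle for {customer}: complements of {bought}"

print(micromeasure(transactions, "sku"))      # per-SKU detail
print(micromeasure(transactions, "channel"))  # same units, a different roll-up
print(offer_for("C1", transactions))
# Step 4 would append customers' responses to these offers back into `transactions`.
```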

Managing at the Minimum Replicable Unit Level

Managing and measuring critical performance variables at the smallest repeatable -- individual customer, departmental, sales counter, activity, or stock-keeping unit -- levels has become relatively common in services. So precise are many large enterprises' and nationwide chains' systems (e.g., MCI, American Express, Mrs. Fields, General Mills Restaurants Group) that their headquarters can tell within minutes, or even seconds, when something goes wrong in the system -- at a client contact point or in a decentralized operating unit -- and often precisely what the problem is. The concept is now so far advanced that some industries -- like transportation, banking, communications, structural design, and medical research -- can disaggregate the critical units of service production to the level of data blocks, packets, or "bytes" of information. These, and details about customer use, become the "objects" of object-oriented systems.

Broadcast, power, utility, banking, and communications transmission networks, which must analyze and correct problems within split seconds, have long had on-line electronic monitoring and control systems operating at such detailed levels. Often these systems automatically correct identified problems without human intervention. Electronic systems monitor signal strength and quality continuously, and they automatically switch to alternate routings or equipment if telephone, electric power, or nuclear plant measurements move outside preset boundaries. In many cases, however, human intervention is necessary. For example, CNN has found that a broadcast pause of more than ten seconds is a complete disaster, causing massive audience tune-outs. Hence, much of its organizational innovation has gone into preventing or handling such catastrophes quickly once the electronic system identifies them and before the customer ever knows there is an issue.
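A toy version of such threshold-based monitoring, not tied to any of the networks named above, might look like the following: readings outside preset bounds trigger an automatic switch, and a human is alerted only when no route is healthy. The route names, readings, and bounds are hypothetical.

```python
ROUTES = {"primary": 0.92, "backup": 0.97}   # hypothetical signal-quality readings
BOUNDS = (0.95, 1.00)                        # preset acceptable range

def select_route(readings, bounds, active="primary"):
    """Switch automatically when the active route drifts out of bounds;
    escalate to an operator only if no route is within bounds."""
    low, high = bounds
    if low <= readings[active] <= high:
        return active, "no action"
    for name, value in readings.items():
        if low <= value <= high:
            return name, f"auto-switched from {active} to {name}"
    return active, "ALERT: human intervention required"

print(select_route(ROUTES, BOUNDS))   # ('backup', 'auto-switched from primary to backup')
```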

Strategic use of minimum replicable unit concepts began in the services (telecommunications, retailing, and transportation) industries. These concepts boomed when the airlines in the mid-1960s found they could not realize the benefits of their new wide-bodied aircraft investments without learning to manage customer relationships at the micro level. Once identified and structured in detail, their micro-units of data about customers and operations became the source of many innovations that proved critical to competitiveness: routing, targeted pricing, seating, baggage handling, special services, frequent flyer incentives, minute-by-minute scheduling, massive operations coordination, and interconnected reservations, billing, and payment systems. Many experts credit the SABRE system, which captures detailed customer and flight data on this basis, with moving American Airlines from being one of the weakest airlines in the early 1970s to its later preeminence, while other previously prominent airlines (notably TWA, Pan Am, Braniff, and Eastern) fell into oblivion during deregulation.

Creating Added Value

Mass production benefits from standardization are not the real purpose of focusing on the smallest replicable unit of operations. Much more interesting are the strategic and innovation opportunities such systems reveal and help implement. The larger the organization, the more refined these replicability units may practically be -- and the higher their leverage for creating value-added gains for customers. Information systems (including both access and manipulation capabilities) represent one of the few areas where true economies of scale still apply. Greater volume allows a larger company to (1) collect more detail about its individual operating and market segments, (2) efficiently analyze these data at more disaggregated levels, (3) experiment with these detailed segmentations in ways smaller concerns cannot, and (4) target operating programs and innovation to individuals and groups in a more customized fashion. Increased granularity can allow more potentially economic variations and higher payoffs for large companies than small. In two major examples:

* American Express is the only independent credit card company with a large travel service. By capturing in the most disaggregated possible form (essentially data bytes) the details of transactions that its 25 million traveler, shopper, retailer, lodging, and transportation customers put through its credit card and travel systems, it can mix and match the patterns and capabilities that each group seeks or has available to add value for each segment in ways most of its competitors cannot. It can identify lifestyle changes (like marriage or moving) or match forthcoming travel plans with its customers' specific buying habits to notify them of special promotions, product offerings, or services that American Express's retailers are presenting in their local or planned travel areas. It can also offer its 2 million retailer and transportation customers more demographic or comparative analyses of customer buying patterns, shifting travel patterns, or needs for individualized wheelchair, pickup, or other convenience services. Until Visa and Mastercard became even larger, no one could match the value-added that American Express could provide its individual consumers and commercial customer groups.

* General Mills Restaurants Group's sophisticated use of technology has been its key to innovating both a friendlier, more responsive atmosphere and lower competitive prices in its unique dinner house chains: Red Lobster, Olive Garden and Bennigan's. At the strategic level, it taps into the most extensive disaggregated databases in its industry and uses conceptual mapping technologies to define precise unserved needs in the restaurant market. Using these inputs, a creative internal and external team of restaurateurs, chefs, and culinary institutes arrives at a few concept designs for test. Using other models derived from its databases, the group can pretest and project the nationwide impact of selected concepts and even define the specific neighborhoods most likely to support that concept. Other technologies combine to designate optimum restaurant sitings and create the architectural designs likely to be most successful at each.

On an operations level, by mixing and matching in great detail the continuously collected performance data from its own operations and laboratory analyses, GMR can specify or select the best individual pieces and combinations of kitchen equipment to use at each location. It can optimize each facility's layout to minimize personnel, walking distances, cleanup times, breakdowns, and operations or overhead costs. Once a restaurant is functioning, GMR has an integrated electronic point-of-sale and operations management system directly connected to headquarters computers for monitoring and analyzing daily operations and customer trends. An inventory, sales tracking, personnel, and logistics forecasting program automatically adjusts plans, measures performance, and controls staffing levels and products for holidays, time of day, seasonality, weather, special offers, and promotions. All of these lower innovation investments, cycle times, and risks.

At the logistics level, using one of industry's most sophisticated satellite, earth-sensing, and database systems, GMR can forecast and track fisheries and other food sources worldwide. It can predict long- and short-term seafood yields, species availability, and prices; and it can plan its menus, promotions, and purchases accordingly. It knows its processing needs in such detail that it teaches suppliers exactly how to size, cut, and pack fish for maximum market value and lowest handling costs to GMR, while achieving minimum waste and shipping costs for the supplier. Its software systems have allowed GMR to innovate in important ways that others could not.

Critical to effective innovation system design are conceptualizing and implementing this smallest replicable unit concept as early as possible in the software design process. Summing disaggregated data later is much easier than moving from a more aggregated system to a greater refinement of detail. Further, highly disaggregated data often capture unexpected experience patterns, suggesting the potentials for innovation that more summary data would obscure. Much of the later power and flexibility of American Airlines', McKesson's, Benetton's, and National Car Rental's systems derived from making this choice correctly. Less successful competitors' systems did not; they usually chose a larger replicability unit in order to save initial installation costs or designed their systems around their existing accounting or organizational structures rather than around the data blocks that were relevant to operations and especially to customers.

Among the classics of such problems are the banks that captured their data around the account numbers of their clients rather than around the details of each transaction, the particular customer's characteristics and use patterns, and the external market events that made different financial products more (or less) attractive in a given situation. Similarly, in environmental research many ecological models focused solely on animal classes, soil compositions, flora classes, climatic factors, or toxicity levels and essentially ignored the crucial interactions among all these subsystems. Until new data definitions (or software that could make the interfaces between these subsystems transparent) appeared, researchers could not ask or assess the most crucial problems in ecological science.

Object Orientation and Intranets for User-Based Innovation

Object-oriented technologies allow programmers to design systems more economically using minimum replicable level concepts. Individual objects contain both the desired variables broken down to minimum replicable levels and the methods (embedded instructions) allowing that object to carry out actions. System-wide innovation can occur either by using the objects as elements in a designed system or through employing them in genetic, evolutionary, or other self-learning processes. Object orientation allows users at each major node on internal networks to call forth and combine elements from all other nodes easily. They can readily design customized services for local customers, without breaking any of the firm's operating principles, by using objects or software "buttons" that embed these rules. If the right information and incentives exist at the customer contact point, rapid innovation can occur instantaneously with direct customer participation. By incorporating many of the rules, interfaces, and best-practice patterns that bureaucracies formerly enforced, object orientation provides a long step toward the ultimate in decentralizing innovation processes: disaggregation to the individual user level.
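One way to picture such a rule-embedding "button" is sketched below: a firm-wide pricing ceiling lives inside the object itself, so a local node can compose customized quotes freely without ever breaching the rule. The classes, rule, and numbers are hypothetical illustrations, not drawn from any company described in the book.

```python
class DiscountButton:
    """Hypothetical 'button' embedding a firm-wide rule: local staff may grant
    discounts, but never beyond the ceiling coded into the object itself."""
    MAX_DISCOUNT = 0.15   # firm-wide rule, enforced by the object rather than a supervisor

    def apply(self, price, requested_discount):
        granted = min(requested_discount, self.MAX_DISCOUNT)
        return round(price * (1 - granted), 2)

class LocalQuote:
    """A branch office composes standard objects into a customized quote."""
    def __init__(self, *buttons):
        self.buttons = buttons

    def price(self, list_price, requested_discount):
        price = list_price
        for b in self.buttons:
            price = b.apply(price, requested_discount)
        return price

quote = LocalQuote(DiscountButton())
print(quote.price(100.0, 0.30))   # the node asks for 30%; the embedded rule caps it at 15% -> 85.0
```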

Even without object orientation, many companies now use well-developed intranets for internal flexibility and efficiency and to leverage their professional and creative intellect for customers. For example:

* Arthur Andersen Worldwide has more than 82,000 people in some 360 offices in seventy-six countries. Its ANET is a T-1 and frame relay system linking most of these offices by data, voice, and video. ANET captures the history of Andersen's contacts with major clients worldwide and places these in accessible customer reference files. In addition, auditors or other professionals who find unique solutions to problems can introduce them into the system through carefully indexed subject files available to all. Any field professional can query others throughout the system on an electronic bulletin board to seek alternatives or potential solutions to a new problem. The Andersen Notes system provides an interactive environment for contact people to develop solutions jointly. Andersen's increasing size and complexity make it impossible for its professionals to rely on personal knowledge of whom to call for information. Instead, through its software systems, Andersen can instantly assemble needed intellect from all over the world to generate complex professional analyses and innovative solutions. These systems, when combined with highly specialized software in Andersen's various offices, have led various partners to describe the company's distinctive competency as "empowering people to deliver better quality technology-based solutions to clients in a shorter time."

Internally, over 20% of the one thousand largest companies have intranets, and their use is growing faster than the Internet's. Netscape Communications estimates that over 70% of its software goes to the internal networks. And Zona Research estimates that 43% of the $1.1 billion in Web servers goes to this market, moving to an estimated $4 billion before the end of the 1990s. Such networks use many of the same principles and software as the Internet. The only real difference is that intranets are privately owned and are fenced off by firewalls that let them look outward, while others cannot look in.

Innovation, the Internet, and a New Economics of Computing

Externally, TCP/IP communications standards, HTML, and various Web-compatible software languages, like Java (a 64-kilobyte virtual machine that runs just as well on one PC architecture as another), now provide the basis for interactive innovation using the power of all nodes on the Internet. Entrepreneurs can leverage their innovations enormously through customers who combine them with others available on the Net, creating new products of their own and disseminating these modifications to countless others who may remodify them for their own or further customers' use. As Business Week noted, the Web and Mosaic now provide a huge "virtual disk drive" of sources and uses for innovation.

With Java-like software widespread, software companies will not have to create (nor will they benefit from) unique versions of their products for each manufacturer's computer. Each customer will merely download versions and updates of applets containing desired databases and applications from the Net. In their own way, the applets of the Java system become the minimum replicable elements of effective computing, while the network becomes the computer itself. Such software may well restructure the entire telecomputer industry and redefine the nature of intellectual property. New pricing methodologies (e.g., paying single-use fees for applet software or individual databases) seem likely to cause a secondary revolution in distribution and pricing systems for software, publications, and digital entertainment systems, further eroding the lines between "pipelines," applications, and content.

A whole new economics of innovation is likely to emerge. After the initial investment in software development and debugging is completed, marginal costs of software production and sales are essentially zero. And once the software achieves sufficient penetration, an infinite number of modifications can be sold to supplement it. Once they adopt and learn a new system, existing users are reluctant to scrap their time and financial investments in a software platform. The original software innovator can offer ever more functionality to its customers at low cost and high margins, making it even more difficult for competitors to enter, while supplementary innovations cause sales to soar for all products associated with the software. There appear to be no negative economies of scale in software production, so the strong get stronger until a totally new platform is innovated. This "Microsoft or Nintendo effect" may change the very nature of competition -- and needed regulation.

* The Doom three-dimensional game, Maxis's Sim, and Netscape demonstrated this effect on the Web. The basic engine for Doom was created in the form of shareware, which was then put on the Internet. Users could download and test a small but enticing version of the game for free. If they wanted to go further with Doom, they had to send in $25 to get more exciting add-ons for different levels of the game. This software allowed them to have a startling first-person view of various adventures transpiring on the screen, and it was easy for players to write their own scenarios for Doom, creating their own functionalities and tailoring games to their own tastes in endless variety. Similarly, Maxis's Sim created an engine from which a huge variety of applications resulted. Consultants or clients could create Sim-based models, customized for their own specific end-user purposes. In this form of innovation, system designers no longer have to think of programming for specific end user needs; instead they design the user interface as shareware and leverage their own ideas through the infinite creativity of their customers. Netscape initially offered its browser free to anyone on the Net and encouraged others to distribute it privately. Once the software had high penetration, Netscape began to charge for further installations and related software.

* In a commercial mode, clothing designers, salespeople, buyers, and manufacturers work together to provide precisely the clothing the buyers want. Virtually any product -- from pagers (Motorola), to bathroom fixtures (American Standard), to automobiles (Toyota) -- can now be interactively custom designed to meet the specific and varying needs of niched markets and individual customers on this basis. Once the product enters the marketplace, customers and various suppliers can monitor sales and quickly ramp up, phase in, or phase out different features, styles, models, colors, or fabrics to satisfy consumers' desires, reversing the usual innovation process. End users now essentially design products and define feature mixes for a variety of producers worldwide, cutting investments, time delays, and risks enormously.

REDEFINING INTELLECTUAL PROCESSES AND ORGANIZATIONS

Most past management thinking about innovation has assumed a fairly sequential, physically bounded process which resembles a process flow-chart or linear-mechanical assembly line. Many executives even try to chart a complete step-by-step process in advance. Their models feature sequential investigation, discovery, invention, reduction to practice, scale-up, introduction, and readjustment processes -- all with their "key decision points" and "gates" to the next stage. This view of the world makes managers very comfortable. It all seems so rational and orderly that many enterprises try to control their innovation processes using their charted sequence. Unfortunately, this approach usually turns out to be very costly and time-consuming. Equally unfortunately, innovation rarely happens this way, and trying to force it to do so is often counter-productive.

Innovation tends to occur in fitful, chaotic ways, with many random interactions and unexpected, often unpredictable consequences. The highly interactive, circular, self-learning steps of genetic or ecological self-organizing systems or interactive object-oriented programs seem more appropriate models of the way modern innovations occur. Starting with clear success criteria, experimenters continually explore, assemble, and break elements of a system into new units and combinations until they find a combination (in software) that works together, yet optimizes their desired technical and economic success criteria. Once an appropriate interactive organizational concept and software structure are in place, the innovation process can be as decentralized and time-compressed as one desires. Parts of the work can easily proceed in parallel because the software allows independence yet disciplines the interaction rules among component systems. To compress time even more, many innovators operate projects on a parallel basis or on a three-shift, twenty-four-hour day, handing off development (through software) from one design group and geographical time zone to another (Asia, to Europe, to America) at the end of each shift. Once one competitor does this, others must follow or fall ever farther behind.

Benefits of More Fully Integrated Systems

Software integration across the innovating organization's databases, engines, and user interfaces can avoid many traditional costs and time delays in product design, physical prototyping, and multiple testing in real-world environments. Such systems not only compress time and lower the direct costs of development; they also decrease the standby physical investments needed for test facilities. By tapping into the best worldwide bases of physical data and the broadest possible customer use bases, such systems also leverage the intellectual value of the firm's, suppliers', and customers' development personnel substantially.

Many of the traditional problems of scale-up disappear as the software "learns" and captures data from its own experiments and the actual experiences of customers and other laboratories with similar products and circumstances. Management can predict scaling issues much more accurately than it could afford to if it had to test and retest physical models. Because no model, by definition, can handle all the complexities of reality, some physical modeling is generally essential before final commercial prototyping. Nevertheless, experience shows that software premodeling and testing of prototypes can shorten cycle times, decrease costs, increase the interrelationships tested, and diminish risks taken by orders of magnitude.

Capturing Experience and Explaining Why

Software management -- the capacity to access and effectively manipulate available physical science, operations, and customer use information -- becomes at least as important as the organization's own technologists' knowledge about the particular design field. The system's models become learning systems, updating their databases' and engines' capabilities constantly from new knowledge created in the physical science world and modifications introduced by customers' experiences. A much smaller interdisciplinary team using well-designed software can usually obtain higher-value results than a large team utilizing a physical experimental approach. Perhaps the most important point, however, is that such software upgrades the entire learning capability and output of the development process.

Under the old mechanical-chemical engineering design paradigm, large-scale systems' interactions were too complex and interrelated to be well understood. To overcome unknowns required a series of "build 'em and bust 'em" experiments and an expensive shakedown of the plant during scale-up. By combining process science, physical constraints, and consumer environments in an electronic model, experimenters obtain a detailed (and documented) level of process insights they otherwise could not obtain. They understand why things do (or do not) work, thus gaining a reliable basis for recalibrating their intuition. In turn, this knowledge educates experimenters to innovate faster and refines the knowledge originally put into the software. The software also captures the experience curves of external scientific researchers and diffuses the corporation's total knowledge immediately to even its most inexperienced technologists. By combining the knowledge of customers and the knowledge of the science world, properly designed systems create a potential for a large multiple of value -- company knowledge x customer knowledge x scientific knowledge -- due to increased interactiveness. The old physically bounded paradigm of innovation is massively inefficient.

MANAGING SOFTWARE DEVELOPMENT

To exploit such opportunities, companies must be able to manage and innovate in software themselves, a need that will undoubtedly grow as complexities grow and cycle times decrease. But as many have learned to their regret, the very bright, independent people engaged in software development make the activity notoriously difficult to manage. The most innovative companies, in products or services, seem to converge on several approaches, each useful for a different strategic purpose and requiring a quite different management style. Nevertheless, there are several characteristics that all these approaches share. They all simultaneously enable both independent and interdependent innovation, and all involve close interactive customer and expert participation. Like most other innovations, all software is first created in the mind of a highly skilled, motivated, and individualistic person (hence, independence). But to be useful, the software (or device it supports) usually must connect to other software (or hardware) systems and meet specific user needs (hence interdependence). Interesting innovation problems are generally so complex that they require high expertise from many "nonprogrammer" technical people and users for solution. And users may vary from being computer illiterate to very sophisticated.

How do successful companies achieve the needed balance between deep professional knowledge, creative individualism, coordinated integration, and active customer participation? Table 3.1 suggests the wide variety and scale of some major players. (For others like IBM, EDS, PRC, Andersen Consulting, or CSC Index, accurate figures are not available.) Their strategies and styles, like those of in-house software groups, tend to cluster into five categories depending in part on the nature of the application. Distinctly different approaches are used for: (1) small discrete applications, (2) intermediate-size operating systems, (3) large integrated systems, (4) support systems designed to requirements, and (5) legacy system improvements or redesign.

Individual Inventor-Innovators (Small, Discrete Applications)

As in the physical sciences, knowledgeable independent inventors and small groups create the largest number of software innovations, particularly at the applications level. In essence, a few highly motivated individuals perceive an opportunity or need, assemble software resources from existing databases and systems, choose an interlinking language and architecture on which to work, and interactively design the program and subsystem steps to satisfy the need as they perceive it. Those who want to sell the software externally first find some real-life application or customer, consciously debug the software for that purpose, then modify and upgrade it until it works in many users' hands for a variety of different purposes. In 1995 alone, venture capitalists invested more than $1.2 billion in such enterprises.

Many important computer software innovations, from VisiCalc to Mosaic and Java, started this way, as have virtually all video game programs and new customized programs to solve individual local enterprise problems. Millions of inventor-innovators use largely blocks-and-arrows diagrams and trial-and-error methods to design new software for themselves, improve smaller software systems, or create special new effects. As with other small-company innovators, there is no evidence that the process is either efficient or consistent in form. Problem identification, imagination, expertise, persistence, and careful interactive testing with customers are the most usual determinants of success. The sheer numbers of people trying to solve specific problems mean that many small innovations prove useful in the marketplace, although a much greater number undoubtedly die along the way. Many larger companies have learned how to harness the enormous potentials of independent software inventors to leverage their own internal software capabilities. For example:

* MCI, as a corporate strategy, has long encouraged outside inventor-entrepreneurs to come up with new software applications (fitting its system's interfaces) to provide new services over its main communication lines. AT&T-Bell Labs created UNIX to assist computer science research. AT&T later gave UNIX to universities and, eventually, to others, slowly realizing that as individuals created programs to provide local solutions or to interface with others, they would require more communications interconnections. UNIX was consciously designed to encourage individuals to interact broadly and to share their useful solutions with others. The hooks it provided later allowed AT&T to sell vastly more services than it could possibly have forecast or innovated internally.

* Similarly, Nintendo Co. Ltd. has provided one of the world's most successful platforms for innovation by independent software producers. Its licensing programs, linked to use of Nintendo's marketing and distribution capabilities, have created more independent millionaires than any other Japanese company. And its success created the huge electronic games industry that now branches into other entertainment fields. Nintendo controls and leverages the crucial linkages between its own systems and the marketplace, while providing enough of an open interface to allow thousands of individuals to develop new game software.

Other companies use similar network interface and distribution controls to encourage yet coordinate both internal and external software innovators. The best have developed specific incentive systems and access rules to stimulate both innovation and lateral diffusion of new solutions throughout the company. Chapters 5 and 6 provide multiple examples.

Small Interactive Teams (Operating Systems)

In many of the larger applications houses -- like Microsoft, Oracle, and Netscape -- small, informal, interactive teams are the core of the innovative process. The complexity of these firms' programs is too great for a single individual to develop them alone. In most cases, the target concept is new, discrete, and relatively limited in scope. Relying heavily on individual talents and personal interactions, these firms typically have made little use of computer-aided software engineering (CASE) tools or formalized monitor programs to manage development. They operate in a classic skunk works style, disciplined by the very software they are developing. For example:

* Microsoft tries to develop its applications programs with very small teams. Major programs typically begin with Bill Gates or a few of his "architects" agreeing on the key performance parameters and the broad systems structures needed to ensure interfaces with other Microsoft programs and its desired customer positioning. Overall program goals are broken down into a series of targets for smaller subsystems, each capable of being produced by a two- to five-person team, which then operates quite independently. Interfaces are controlled at several levels: programmatic specifications to make operating systems perform compatibly, application interfaces to interconnect component systems (like memory or file management), and customer interfaces to maintain user compatibility. Beyond these interfaces, the original target functionalities, and the time constraints, there are few rigidities. Detailed targets change constantly as teams find out what they can and cannot accomplish for one purpose and how that affects other subsystems.

Microsoft's key coordinating mechanism is the "build-test-drive." At least every week, but more often two to three times per week, each group compiles its subsystem so the entire program can be run with all new code, functions, and features in place. In the "builds," test suites created by independent test designers and the software itself become the disciplining agents. If teams do not correct errors at this point, interactions between components quickly become so vast that it is impossible to fit all program pieces together, even though each subsystem might work well alone. As soon as possible, the program team proposes a version for a specific (though limited) real-world purpose, gives it to a customer to test in that use, and monitors its actual use in detail. Once it works for that purpose, the program goes to other customers for beta tests and modification in other uses. This approach both decreases developmental risks and takes advantage of customers' suggestions and innovations.
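The discipline of such a build cycle is easier to see in miniature. The sketch below is purely illustrative and is not Microsoft's actual tooling: it assumes hypothetical subsystem objects, each supplying a compile step and a test suite written by independent test designers, and it rejects the integrated build as soon as any piece fails.

    # Illustrative sketch only -- not Microsoft's actual build system.
    # Assumes each subsystem team supplies a compile step and a test suite
    # written by independent test designers.

    from dataclasses import dataclass, field
    from typing import Callable, List


    @dataclass
    class Subsystem:
        name: str
        compile_step: Callable[[], bool]          # returns True if the code compiles
        test_suite: List[Callable[[], bool]] = field(default_factory=list)


    def run_build(subsystems: List[Subsystem]) -> bool:
        """Compile every subsystem, then run all independent test suites.

        If any piece fails, the whole build is rejected and the owning
        team must fix it before the next build.
        """
        for sub in subsystems:
            if not sub.compile_step():
                print(f"BUILD BROKEN: {sub.name} failed to compile")
                return False
        for sub in subsystems:
            for i, test in enumerate(sub.test_suite):
                if not test():
                    print(f"BUILD BROKEN: {sub.name} failed test #{i}")
                    return False
        print("Build accepted: all subsystems compile and pass their suites")
        return True


    if __name__ == "__main__":
        # Two toy subsystems standing in for, e.g., memory and file management.
        memory = Subsystem("memory", compile_step=lambda: True,
                           test_suite=[lambda: True, lambda: True])
        files = Subsystem("file-management", compile_step=lambda: True,
                          test_suite=[lambda: True])
        run_build([memory, files])

The point of the sketch is simply that the software and its test suites, rather than a manager, become the arbiter of whether each team's latest code is accepted.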

Monitor Programs (Large Integrated Systems)

Such informal approaches serve particularly well for small freestanding or applications programs, although Microsoft has used them for larger operating systems. In most cases, designers of larger operating or systems software find some form of "monitor program" useful. These monitors establish the frameworks, checkpoints, and coordinating mechanisms to make sure all critical program elements are present, compatible, cross-checked, and properly sequenced. They allow larger enterprises to decentralize the writing of code among different divisions or locations while ensuring that all functions and components work properly together. No element is forgotten or left to chance, and interface standards are clearly enforced. Weapons systems developers, AT&T, and Arthur Andersen have used this programming method successfully. For example:

* Andersen Consulting usually must provide under contract both a unique solution for each customer's problem and a thoroughly tested, fault-free systems product. For years Andersen has combined a highly decentralized process for writing each section of the code with a rigorous centralized system for program coordination and control. At the center of its process have been two tools, METHOD/1 and DESIGN/1. METHOD/1 is a carefully designed, step-by-step methodology describing a predictable, repeatable process for modularizing and controlling all the steps needed to design any major systems program. METHOD/1 has a variety of "routes" to use in different increments for different environments and project sizes. In a typical example, at the highest level there are roughly ten "phases," each broken into approximately five "segments." Below this are a similar number of "tasks" for each job and several "steps" for each task. METHOD/1 defines the exact elements the programmer needs to go through at that particular stage of the process and coordinates software design activities, estimated times, and costs for each step.

DESIGN/1, an elaborate CASE tool, keeps track of all programming details as they develop and disciplines the programmer to define each element carefully. It governs relationships among all steps in the METHOD/1 flowchart to avoid losing data, entering infinite loops, using illegal data, and so on. In addition to ensuring that each step in METHOD/1 is carefully executed, it allows customers to enter "pseudo-data" or code so that they can periodically test the look and feel of screen displays and check data entry formats for reasonableness and utility during development. The integrated METHOD/1 and DESIGN/1 environment is extremely complex, taking up some 50 megabytes on high-density diskettes. A dedicated team of specialists continually maintains and enhances these programs.
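To make the phase/segment/task/step decomposition concrete, here is a small sketch of how such a work-breakdown hierarchy might be represented, with estimated hours rolled up from the individual steps. It illustrates the structure described above under assumed names and figures; it is not METHOD/1 itself.

    # Illustrative work-breakdown structure in the spirit of the
    # phase/segment/task/step decomposition described above.
    # Not METHOD/1 itself; all names and figures are invented.

    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class Step:
        name: str
        estimated_hours: float
        estimated_cost: float


    @dataclass
    class Task:
        name: str
        steps: List[Step] = field(default_factory=list)

        def hours(self) -> float:
            return sum(s.estimated_hours for s in self.steps)


    @dataclass
    class Segment:
        name: str
        tasks: List[Task] = field(default_factory=list)

        def hours(self) -> float:
            return sum(t.hours() for t in self.tasks)


    @dataclass
    class Phase:
        name: str
        segments: List[Segment] = field(default_factory=list)

        def hours(self) -> float:
            return sum(s.hours() for s in self.segments)


    if __name__ == "__main__":
        design = Phase("Detailed design", [
            Segment("Data design", [
                Task("Define entities", [Step("Draft entity list", 8, 800),
                                         Step("Review with client", 4, 400)]),
            ]),
        ])
        print(f"{design.name}: {design.hours()} estimated hours")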

Many organizations have found that such formal monitors lower the cost, increase the reliability, and allow decentralized development of large-scale systems.
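In miniature, the coordinating role such a monitor plays can be pictured as a central registry that refuses to accept a subsystem whose declared interfaces do not match the published standards. The fragment below is a deliberately simplified illustration of that idea, not any vendor's actual tool; the subsystem names and interface fields are invented.

    # Minimal sketch of the "monitor" idea: a central registry that enforces
    # interface standards on decentrally developed subsystems.
    # All names here are invented for illustration.

    REQUIRED_INTERFACES = {
        "billing":   {"inputs": {"account_id", "period"}, "outputs": {"invoice"}},
        "inventory": {"inputs": {"sku"},                  "outputs": {"quantity"}},
    }


    def register_subsystem(name: str, inputs: set, outputs: set) -> None:
        """Accept a subsystem only if it matches the published interface standard."""
        standard = REQUIRED_INTERFACES.get(name)
        if standard is None:
            raise ValueError(f"{name}: no interface standard defined -- element forgotten?")
        if inputs != standard["inputs"] or outputs != standard["outputs"]:
            raise ValueError(f"{name}: declared interface violates the standard")
        print(f"{name}: registered, interfaces conform")


    if __name__ == "__main__":
        register_subsystem("billing", {"account_id", "period"}, {"invoice"})
        register_subsystem("inventory", {"sku"}, {"quantity"})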

Design to Requirements (Support Systems)

The most common approach to developing internal operating software is neither as informal as Microsoft's nor as formal as Andersen's; it combines elements of both. The process tends to follow this general sequence (a minimal sketch of how the resulting plan might be recorded appears after the list):

1. Establish goals and requirements (what functionalities, benefits, and performance standards are sought).
2. Define the scope, boundaries, and exclusions from the system (what the system's limits are).
3. Establish priorities among key elements and performance requirements (what is needed, highly desired, wanted, acceptable in background, or dispensable if necessary).
4. Define interrelationships (what data sets, field sizes, flow volumes, and cross-relationships are essential or desirable).
5. Establish what constraints must be met (in terms of platforms, network topologies, costs, timing, etc.) in designing the system.
6. Break the total problem down into smaller, relatively independent subsystems.
7. For each subsystem, set and monitor specific performance targets, interface standards, and timing-cost limits using agreed-on software test regimes and monitoring programs. Often the design software itself provides the ultimate documentation and discipline for all groups.
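
As promised above, the following fragment is a hedged sketch of how steps 1 through 7 might be captured as a simple plan: goals, exclusions, prioritized requirements, interrelationships, constraints, and per-subsystem targets. The priority labels mirror step 3; all other names and figures are invented for illustration.

    # Hedged sketch of how steps 1-7 might be recorded as a simple plan.
    # The priority labels mirror step 3; everything else is invented.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Dict, List


    class Priority(Enum):
        NEEDED = 1
        HIGHLY_DESIRED = 2
        WANTED = 3
        ACCEPTABLE_IN_BACKGROUND = 4
        DISPENSABLE = 5


    @dataclass
    class Requirement:                      # step 3
        description: str
        priority: Priority


    @dataclass
    class SubsystemTarget:                  # steps 6 and 7
        name: str
        performance_target: str
        interface_standard: str
        deadline_weeks: int
        budget: float


    @dataclass
    class SystemPlan:
        goals: List[str]                    # step 1
        exclusions: List[str]               # step 2: what the system will NOT do
        requirements: List[Requirement]     # step 3
        interrelationships: Dict[str, str]  # step 4: data sets, flows, cross-links
        constraints: Dict[str, str]         # step 5: platform, topology, cost, timing
        subsystems: List[SubsystemTarget] = field(default_factory=list)

        def must_haves(self) -> List[Requirement]:
            return [r for r in self.requirements if r.priority is Priority.NEEDED]


    if __name__ == "__main__":
        plan = SystemPlan(
            goals=["Cut order-entry cycle time in half"],
            exclusions=["Payroll processing is out of scope"],
            requirements=[Requirement("Same-day order confirmation", Priority.NEEDED),
                          Requirement("Multilingual screens", Priority.WANTED)],
            interrelationships={"orders -> invoices": "one-to-many, daily batch"},
            constraints={"platform": "client-server", "timing": "9 months"},
            subsystems=[SubsystemTarget("database", "1,000 orders per hour",
                                        "SQL access layer v1", 12, 250_000.0)],
        )
        print([r.description for r in plan.must_haves()])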

Because quite dissimilar skills may be needed for each, different teams typically work on the database system, the engine (or platform) system, and market interface systems. A separate interfunctional group (perhaps under a program manager) usually coordinates activities across divisions or subsystems. Using a combination of software and personalized performance scheduling and evaluation techniques, this group -- supplemented by independent test designers -- ensures that task functionalities, component and subsystem performance, time frames, and dependencies between tasks, output, quality, and priorities are maintained. If the software under design has to support existing processes, successful cross-functional teams typically reengineer the processes first, then design the software prototypes while interactively engaging users throughout the full design and implementation process. Top-level executives do not need to understand the details of software programming, but they do need to see that all these management processes are in place and operate effectively when their firms design their own software.

Highly Disciplined Procedures (Integrating and Improving Legacy Systems)

Often major innovations require the large-scale integration and further development of already installed (legacy or specialized "stovepipe") systems to accomplish new or improved functionalities. Standards tend to be absent or inconsistent among the installed systems; information is distributed and networks are disjointed; data may be unsynchronized, inconsistent, and subject to very different security requirements. Typical examples are ecological, battlefield, and law enforcement systems. Innovations involving such integration obviously require a discipline beyond that necessary even to design a large-scale system from scratch. A few large companies, like PRC, CSC, and EDS, specialize in such systems. They tend to develop their own approaches (like PRC's Software Process Improvement Plan, SPIP) to coordinate the multiple levels of problems involved. Working toward the Software Engineering Institute's Capability Maturity Model (CMM) requirements, these companies try to develop reliable, reusable software modules for broadly applicable subsystems, to update these continually through feedback from actual operational use, and to make sure all key personnel have access to the necessary tools and processes through electronically updated reference guides and libraries.

No single description can capture the full complexity of this approach. However, most of the major practitioners have reduced their approaches to hard-copy and electronic manuals, which they will share with potential users. A well-developed example is Litton-PRC's approach:

* Litton-PRC, in early 1993, initiated a process (called Phoenix) with its major programs to systematize and improve its already successful approach to designing, integrating, and improving large-scale systems for the federal government. These programs usually involve major legacy systems as well as modern client-server implementations. Designed around a virtual private internet and a Process Asset Library (PAL), the PRC system handles the integrated needs of program managers (communications, performance measurement, overall customer coordination, resource utilization capability), task managers (status, action items, metrics, deliverables), intergroup coordination (schedules, tool information, action items, personnel data), training (needs assessment, training materials, records, etc.), program development, life cycle management, metrics, and lessons learned. Except for confidential information (like personnel records), the system is open to all PRC professionals. Each activity has its own server connected to the network.

Since PRC's strategy focuses on delivering reliable, state-of-the-art software with predictable cost and performance characteristics, it tries to the maximum extent possible to modularize its software subsystems, maintain detailed interface and compatibility controls, and constantly update all its systems based on actual user experience. Through standard browser technology, the PAL provides access to a thousand files, including corporate processes, briefings, document templates, plans, schedule templates, standards, and procedures, ensuring that coordinating data are available at all locations. PRC is now implementing an "information finds you" system to better fulfill the information needs of all personnel.
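Underneath, a process asset library is essentially a catalogued, searchable index of templates, standards, and plans. The toy fragment below suggests what such an index might look like; it is not PRC's PAL, and the categories, entries, and search method are assumptions made purely for illustration.

    # Toy sketch of a process-asset-library index: catalogued documents
    # (templates, standards, plans) that any project can look up by keyword.
    # Not PRC's PAL; categories and entries are invented.

    from dataclasses import dataclass
    from typing import List


    @dataclass
    class Asset:
        title: str
        category: str           # e.g. "standard", "template", "schedule", "briefing"
        keywords: List[str]
        location: str           # path or URL where the document lives


    class AssetLibrary:
        def __init__(self) -> None:
            self._assets: List[Asset] = []

        def add(self, asset: Asset) -> None:
            self._assets.append(asset)

        def search(self, keyword: str) -> List[Asset]:
            kw = keyword.lower()
            return [a for a in self._assets
                    if kw in (k.lower() for k in a.keywords)]


    if __name__ == "__main__":
        pal = AssetLibrary()
        pal.add(Asset("Interface control document template", "template",
                      ["interface", "integration"], "/pal/templates/icd.doc"))
        pal.add(Asset("Coding standard, client-server projects", "standard",
                      ["coding", "client-server"], "/pal/standards/cs-coding.doc"))
        for hit in pal.search("interface"):
            print(hit.title, "->", hit.location)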

Supporting this system are a software process improvement plan, which details PRC's approach to software improvement, and a PAL Document Tree of PRC software documents, manuals, policies, processes, and products. Organizationally, PRC uses the quality improvement approach originally developed by Florida Power and Light Co. Working with customers and other stakeholders, the team develops a "theme statement" of the priority areas needing improvement, along with quality and performance indicators negotiated with the customer to make sure expectations are reasonable and valid. It then decomposes each associated area into a concise problem description and a set of targets for improvement based on these goals and the status of systems currently in place. The team analyzes existing problems for root causes, develops and analyzes potential solutions to each problem, predicts obstacles to implementation, and prepares an action plan (process flowchart) for implementing, monitoring, recording results, standardizing, and replicating the improvements.
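The improvement cycle just described can be pictured as a simple record that accumulates as the team works: a theme statement decomposed into problems, each with indicators, root causes, candidate solutions, anticipated obstacles, and an action plan. The sketch below is illustrative only; the field names and example content are assumptions, not PRC's or Florida Power and Light's actual format.

    # Hedged sketch of the quality-improvement record described above:
    # a theme statement decomposed into problems, root causes, candidate
    # solutions, anticipated obstacles, and an action plan. Illustrative only.

    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class Problem:
        description: str
        indicators: List[str]             # indicators agreed with the customer
        root_causes: List[str] = field(default_factory=list)
        candidate_solutions: List[str] = field(default_factory=list)
        obstacles: List[str] = field(default_factory=list)
        action_plan: List[str] = field(default_factory=list)   # ordered steps


    @dataclass
    class ImprovementTheme:
        statement: str
        problems: List[Problem] = field(default_factory=list)

        def ready_to_implement(self) -> List[Problem]:
            """Problems that already have root causes and an action plan."""
            return [p for p in self.problems if p.root_causes and p.action_plan]


    if __name__ == "__main__":
        theme = ImprovementTheme(
            statement="Reduce integration-test defects found after delivery",
            problems=[Problem(
                description="Interface mismatches between legacy and new modules",
                indicators=["post-delivery defect count"],
                root_causes=["interface documents not kept current"],
                candidate_solutions=["automated interface checks in each build"],
                obstacles=["legacy modules lack machine-readable interface specs"],
                action_plan=["publish interface specs", "add checks", "monitor results"],
            )],
        )
        print(len(theme.ready_to_implement()), "problem(s) ready to implement")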

As PRC builds each phase of a system, customer-developer teams interact on it and get feedback from users. Often over the course of a project, initial priority demands come to appear routine, and unexpected variations or new functionalities emerge as important and valuable to the customer. Thus, one of PRC's program managers' main tasks is to interact constantly with the development team and users to maintain goal alignment and expectations. PRC says its customers are happiest "when they have been part of the solution, have a say in how well we're doing each step, and have been able to utilize incremental builds of functionality released on a frequent basis."

SUMMARY

Software has become the key element in almost all advanced design and innovation. It is critical to effectiveness at all levels of the innovation process, from basic research to post-introduction support of the innovation in the marketplace. It offers infinite opportunities to shorten, merge, or eliminate entire steps in the innovation process, compressing time cycles and lowering risks more than any other contributor to the process can. Even more important, it allows interaction with customers and users in ways that substantially increase the innovations' value in use. The processes of software design provide a powerful new paradigm for innovation, the ultimate forms of which are now appearing in self-learning systems, on interactive intranets within enterprises, and on the Internet and World Wide Web. These are (1) innovations self-designed by users for their own specific purposes, and (2) software-generated innovations from self-learning, evolutionary, and object-oriented software.

Software processes will forever change innovation thinking and practice worldwide. It behooves all managers to reexamine their existing innovation and software management processes in light of these potentials. Using illustrative practices from some of the world's leading software developers, we have tried to provide guidelines for thinking about these issues. Later chapters provide more explicit approaches to implementation at the top management, middle management, and micro-organization levels.

Copyright © 1997 by James Brian Quinn, Jordan J. Baruch, and Karen Anne Zien
