Visit IBM, Microsoft, Novell, and Other Vendors at Our Exclusive Virtual Client/Server Trade Show!
Client/Server Computing Demystified! Distributed computing; relational databases; intranets; online transaction processing -- wait, don't throw in the towel yet! Fully updated for the latest developments in client/server computing, this clear, straightforward book surveys all the tools and technologies you need to set up a system that best meets the needs of your business -- in a language you can understand, too!
Inside, find helpful advice on how to:
- Choose the right clients, servers, and network operating system for your particular needs
- Understand the difference between relational and distributed databases and the technologies that support them -- SQL, DCOM, CORBA, and OLAP
- Evaluate the different development tools available -- CASE tools, RAD tools, Java tools, and visual tools such as Visual Basic 6
- Connect your system to the Internet or an intranet
- Plan for tight security and all possible disasters
- Control the hidden costs of client/server computing
About the Author
Doug Lowe is a seasoned For Dummies® writer with several networking titles in print. He is the author of Networking For Dummies®, 3rd Edition, and Networking with NetWare® For Dummies®, 3rd Edition, as well as PowerPoint® 4 For Windows® For Dummies®. Doug also frequently contributes articles to national computer publications, including Maximize! Magazine.
Read an Excerpt
What the Heck Is Client/Server Computing?
In This Chapter
- Various and sundry definitions of what client/server computing actually is
- Perhaps more important, what client/server computing is not
- How we got into the mess we're in today
- The problem with dumb clients and dumb servers
Welcome to the world of client/server computing, an exciting form of computing that is recommended by four out of five dentists surveyed, tastes great, and is less filling. Client/server computing helps you get the weight off fast and keep it off (I lost 30 pounds). It also fosters a return to family values and promises to help balance the federal budget and reduce the national debt.
Sadly, the hype and hysteria surrounding client/server computing sound a lot like a late night infomercial. The computer industry flips from one computing fad to the next, remote control in hand, the way my brother-in-law flips through TV channels on a Sunday afternoon. "Structured Programming will save the day!" Flip. "Relational Database will deliver us from evil!" Flip. "Fourth-generation languages are the solution!" Flip. "Downsizing! That's the future!" Flip. "Object-Oriented Programming . . . that's the ticket. Oops!" Flip. Flip. Flip.
The computer industry is such a faddish business. It seems to always be searching for the Lone Ranger's Silver Bullet, the one that will travel straight and true, never miss its mark, and solve every computing problem. Unfortunately, most of the industry's Silver Bullets have turned out to be like Deputy Fife's bullet -- you know, the one Sheriff Andy lets him carry in his pocket: looks good, but in the end not so useful.
Will client/server computing die out in the face of intranets and the Internet? I don't think so. All indications are that it is here to stay. More and more, client/server computing plays a key role in just about every major corporation, and even smaller companies are getting into the game.
Client/server computing does indeed provide many substantial benefits to those who are able to harness this technology.
The one real problem is that most people still don't know exactly what client/server computing is and how it can help. Does client/server computing mean throwing away the mainframe and replacing it with networked PCs? Or is it just setting up a network system that uses a centralized database? Or is it creating an attractive Windows-type interface to replace all those clunky old mainframe systems that no one could ever figure out how to use?
And to make things even more confusing, what impact will the recent growth of the Internet and corporate intranets have on client/server computing? Is the Internet the ideal platform for client/server computing? Does the Internet phenomenon spell the ultimate death of client/server computing? Or are the Internet and intranets just fads that will soon pass?
This chapter answers all these questions and more. It's a gentle introduction to the strange world of client/server computing. Hope you enjoy the ride.
Client/Server Computing Defined: Take One
Spelling out exactly what client/server computing is may be easier said than done. Client/server is one of the most used and abused computer terms around, and getting even two experts to agree on its meaning is difficult. But I won't let that stop me. Here are some definitions of client/server computing I've heard from various viewpoints:
- A type of technological varnish or veneer that can be applied to an aging computer product (hardware or software) in an effort to increase sales (opportunistic view)
- The latest in a series of technological religions whose adherents believe that their approach to computing -- and their approach only -- will save the world from the current computer crisis (eschatological view)
- A buzzword that must be applied to any new computer project in order to obtain management approval (real-world view)
- The use of three basic components to share the computing workload: a client computer, a server computer, and a network that connects them
Can you guess which definition I use in this book? Let me elaborate on that last one a bit.
Client/server computing divides a computer application into three basic components -- a client, a server, and a network that connects the client to the server (you can think of the network as the slash in client/server computing). The client and the server are both computers with varying degrees of processing power, and both share the computing workload necessary to get the job done.
In most cases, the client computer is a personal computer sitting on the user's desk. The server computer is a network server, which may be a high-powered PC, a minicomputer (called a midrange computer these days), or a mainframe. The network is whatever is necessary to connect the client to the server.
So why the fuss concerning this fairly simple process? The fuss, of course, is about what you can or may be able to do with client/server computing, and the ease (or difficulty, as the case may be) with which you can do it. Here are some examples of what client/server computing can enable you to do:
- Create customized business applications that access data on mainframe computers but are as easy to use as off-the-shelf PC applications such as word processing and spreadsheet programs.
- Develop applications that tie together data that is stored in otherwise incompatible computer systems -- for example, a marketing system may access data stored in the corporate mainframe in Chicago, the departmental minicomputer in Cleveland, and the PC down the hall.
- Build a so-called Executive Information System that summarizes the mass quantity of information typically stored in mainframe computers and presents it in a form that even the CEO can understand.
I look at each component of a client/server system and explore client/server applications in depth in later chapters. But first, a bit of background is in order so you can see where client/server computing is coming from and what it may be coming to.
From Mainframes to PCs
A long time ago, sometime between the time The Fonz was learning to ride his motorcycle and Ronald Reagan was still in show business, the first computer companies began selling computers that were made out of spare vacuum tubes left over from radios that couldn't be fixed. These early computers filled entire rooms, and although they weren't nearly as fast as modern Pentium computers, they could divide accurately and so they became very popular with businesses that could afford them.
Computer manufacturers soon learned how to shrink computer components, first by replacing bulky vacuum tubes with tinier transistors, then by replacing transistors with even tinier integrated circuits. The computers themselves didn't really get any smaller, of course: Computer companies just stuffed more shrunken components into the same room-sized boxes to make the computers more powerful. These powerful computers are now called mainframes and are characterized by enormous amounts of disk storage -- entire rooms full of it used for storing data for large-scale information systems.
In the 1960s and 1970s, mainframe computers became an important part of many U.S. businesses -- important enough that an entire department was formed to keep the odd people who ran the computers away from everyone else in the company. Every few years, the people in the computer department would feel neglected, so they'd change the name of the department to make folks sit up and take notice. They started out as the Data Processing (DP) department, then changed to Information Systems (IS) division, and finally became the Information Technology (IT) group. The important thing is that the name had to be conducive to a two- or three-letter acronym (TLA) because that's the kind of language these odd computer people spoke. (Okay, you're right: Computer people are still trying to learn how to speak plain English.)
The most important thing to know about the days of mainframe computers is that the computer department (whatever it was called) had absolute, total control over the computer.
If a poor end-user realized that a simple two-page computer report would save the company millions of dollars, he or she would have to submit an Information Systems Programming Request Form in triplicate, wait for the computer department to conduct a feasibility study, prepare an environmental impact report, and put up with an obnoxious programmer/analyst who said things like "We're going to have to run I-E-B-somesuch to reindex the ISAM dataset to extract the deactivated history record segments to a spool file before we do a merge-purge, a sort, and a double backflip. You'd better pray we don't run out of extents." And maybe, when the system was finally delivered two years later, the report would still be useful.
Up close and personal
Then along came this upstart called the personal computer (PC) that upset the whole Apple cart in 1981. It sat on your desk, was easy to use, and had enough power to let you perform your daily office functions with ease. Those who were gutsy enough to try the PC discovered that they could use off-the-shelf programs such as Lotus 1-2-3 and dBase to solve their problems faster than the corporate computer department could even acknowledge a request for help. And the price of a new PC was just under the limit that many department managers could spend without getting corporate approval. PCs began to appear everywhere.
Soon the users of these computers realized that they needed to share information with one another to become even more efficient, so small local area networks (LANs) were developed that allowed the PCs to talk to each other. With networks in place, PC users decided that the LAN could do the job of the mainframe better, faster, and cheaper, so they ignored the IS department's pleas for restraint. Networks popped up everywhere. Which brings us to the current state of computer affairs. . . .
The mess we're in today
Most large organizations now have lots of PCs haphazardly connected to one another via local area networks and one or more mainframe computers that run the business. This arrangement has some unpleasant side effects:
- The computing world is split into two camps: the PC camp and the mainframe camp. These two groups don't particularly like each other. But they now realize that they have to learn to get along. Client/server computing (the subject of this book) simply won't work unless they do.
- Vital corporate data that used to be collected in a central location (the mainframe computer) is now spread all over the place. The typical company's data store looks something like my youngest daughter's bedroom: Some data is on the mainframe, some is on the various PC networks, some is on individual desktop computers, some is stuffed under the bed, and some of it is just piled up on the floor in plain view. Client/server computing is a way to bring this disparate data together.
Don't you love that phrase, "disparate data"? It has such a nice alliteration to it, plus it sounds so much like "desperate data" (which data can be when trying to find a proper home).
- Data on mainframe computers is backed up systematically. Data on networks may or may not be. Data on individual PCs probably hasn't been backed up in months. (When was the last time you backed up your computer?) No backups can mean disaster in a corporate system.
- Computer users have been spoiled by how easy the PC is to use. Now they expect the mainframe to be as easy to use as the PC. This means that mainframe programmers must work into the wee hours of the morning transforming old, unfriendly mainframe computer programs into friendly, easy-to-use programs that look like their PC counterparts. Client/server computing is one of the chief ways computer programmers hope to pull off this miracle.
Most large companies have a large stash of computer software that is outdated in appearance (that is, it doesn't look as good as a modern Windows program) but still gets the job done. Many of these systems are so complex that rewriting them to work on PCs instead of mainframes would be too expensive for even Ross Perot to pull off. These legacy systems are the bread-and-butter of most businesses today, providing essential business functions such as order processing, billing, payroll, accounts receivable, and accounts payable systems. (Because the term legacy has negative connotations, some mainframers are now trying to get us to say heritage systems instead of legacy systems. But a rose by any other name . . . )
Two Unavoidable Questions
You can probably avoid the two questions that follow if you try hard enough, but they're bound to come up sooner or later. The first one has been on the tip of everyone's tongue for a few years now. The second is just starting to gain momentum.
Is the mainframe dead?
Is the age of mainframe computing finally dead? Are those huge conglomerations of silicon and metal worth more as scrap metal than as computers?
The death of the mainframe has been suggested many times during the past 15 years, but it hasn't yet come to pass, and client/server computing assures that it never will. As you discover in this book, the mainframe computer is the ultimate server. (Peek ahead to Chapter 19 for more on this hot topic.)
I chuckle when mainframe computers are referred to as dinosaurs. Invariably, this reference is meant to malign the mainframe as an obsolete, slow, dim-witted beast on the verge of extinction. Keep in mind one of the most current theories on dinosaurs, however: Dinosaurs weren't slow or dim-witted; in fact, some of them were quite agile and intelligent. They did not really become extinct; they just downsized -- evolving into the nimble little creatures we now know as birds. A downsized dinosaur is probably plucking worms out of the lawn in your back yard or dive bombing your cat this very moment.
A similar evolution is likely to occur with mainframe computers, though it may well look like extinction to casual observers. Here are some reasons why mainframes won't be fading out any time soon, even though they don't have good games:
- Mainframe computer professionals have a decades-long head start on dealing with the everyday problems of managing computer systems: managing software changes, predicting performance, providing for disaster recovery, and so on. Much can be learned from those who have gone before.
- Desktop computers today have the raw computer processing power of mainframes just a decade ago. But they still have a long way to go to catch up in operating system software.
- Mainframe computers make the ultimate servers in client/server systems. In many cases, they are ideally suited to the task.
Is the PC dead?
Is the PC dead? This question has been looming in the background ever since the first PC users hooked their computers together to create a simple network. After all, once a personal computer becomes a part of a network, it isn't really a personal computer anymore.
Putting your personal computer on a network changes everything, forcing you back into a mainframe-computer bureaucracy mode of thinking. If you mess up an isolated computer, that's your problem. But if you mess up a networked computer, it could become everyone's problem. Like a mainframe computer, a network of personal computers must be carefully managed.
So yes, I believe the personal computer is dead, or at least dying. This isn't to say that fewer users are purchasing desktop computers based on Windows; it merely means that these computers aren't really personal computers anymore. Personal computers and their users are rapidly becoming citizens in a networked computing world and the clients in the client/server equation.
With a networked computer, you are not in complete control. You must learn to get along with other members of the network: don't delete files if you're not sure they're yours; don't send a 700-page report to the printer when you know everyone else is scrambling to get their documents printed, too; and don't copy 200MB of old files to the network disk unless you're sure there is enough disk space.
One of the most challenging aspects of client/server computing (from a technical standpoint, at least) is setting up a network that enables desktop computers to not only communicate with one another but also to communicate with whatever mainframe and minicomputers are in place. It's like getting people from New Jersey, New Orleans, and London together for a talk. They all speak English, but they may not understand what each is saying.
Remember, when I say that the personal computer is dead, I don't mean that people are going to start buying mainframe computers instead of PCs. I simply mean that you can no longer afford to think about your desktop computer as some sort of computational island.
Dumb and Dumber Computing
The best way to understand what client/server computing is all about is to look at the extremes of what can be called dumb computing -- computer applications in which either the client or the server plays dumb, unable to do any share of the work. Dumb computing was the norm for most mainframe/PC computing systems before client/server computing was developed.
In an old-fashioned mainframe computer system, the client is a "dumb" terminal that has minimal processing power, and the server is the mainframe computer itself. As Figure 1-1 illustrates, the server computer does nearly all the work in this type of arrangement. That's why mainframe computers are so grumpy.
This type of computer processing is sometimes called host processing because the host computer -- that is, the mainframe -- does all the processing work. The client -- that is, the terminal -- is just along for the ride.
Keep the following points about host processing in mind:
- More than one client terminal can access a single host computer. For host processing to work satisfactorily, the host computer itself must be powerful enough to service as many client terminals as the application calls for. In a nationwide airline reservation system, that might mean tens of thousands of client terminals. That's why mainframe computers are so powerful and expensive.
- The dumb terminal doesn't have to be an old-fashioned mainframe computer terminal. It may well be a high-powered PC that's pretending to be a dumb terminal. This setup has become very common as companies have replaced old IBM 3270-type terminals with PCs. Workers can use their PCs for PC chores such as word processing and spreadsheets, or they can flip the PC into dumb terminal mode to access their mainframe applications. Once flipped into dumb terminal mode, the PC takes a nap while the host computer does all the work.
The alternative to a dumb terminal is a dumb server, which is how most applications that run on local area networks are set up. The entire work of the application is performed on a client computer -- a PC -- at the user's desk. The server computer is called upon whenever a file is needed or something needs to be printed, but the server doesn't do any real work. Figure 1-2 shows the balance of power in this type of arrangement.
A dumb server can have one or both of two functions. It can be
- A file server, which means that its disk drive houses all the files used by the application
- A print server, which means that it has a printer attached to it
In a small network, the same computer may be both the file server and the print server. In a larger network, the file and print servers are likely to be separate computers.
Dumb server systems quickly become inefficient as more users are added to the system. To see why, suppose five users access a large customer database file that lives on a file server. If one of the users decides to display a list of all customers whose year-to-date sales exceed $1,000, the entire customer file must be sent over the network from the server computer to the user's client computer. That can tie up the network and slow down the other four users, but with only five users the situation is probably tolerable. Now suppose the system has 500 users instead of 5, and 100 of them want the list of customers whose sales exceed $1,000. The entire file must be sent over the network simultaneously to 100 different client computers, and the whole network slows to a crawl.
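The dumb-server bottleneck described above can be sketched in a few lines of code. This is a hypothetical illustration (the customer names, the `ytd_sales` field, and the two functions are all invented for the example, not taken from any real system): the file server's only job is to ship the entire file across the network, and the client does every bit of the filtering after the fact.

```python
# Hypothetical sketch of "dumb server" (file server) computing.
# The server ships the whole file over the network; the client
# does all the real work of filtering it.

def file_server_read(customer_file):
    """The file server's only job: hand over the entire file."""
    return list(customer_file)  # every record crosses the wire

def client_filter(records, threshold):
    """The client filters the records after receiving all of them."""
    return [r for r in records if r["ytd_sales"] > threshold]

customers = [
    {"name": "Acme", "ytd_sales": 2500},
    {"name": "Baker", "ytd_sales": 400},
    {"name": "Chen", "ytd_sales": 1800},
]

shipped = file_server_read(customers)        # all 3 records sent
big_spenders = client_filter(shipped, 1000)  # only 2 were wanted
print(len(shipped), len(big_spenders))       # 3 2
```

With three records the waste is invisible; with a million records and a hundred clients running the same query, the network carries a hundred million records to deliver a few thousand.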
The solution to this problem is client/server computing.
Client/Server Computing Defined: Take Two
The best definition for client/server computing I can come up with is inspired by Mr. Miyagi, the Karate master in the Karate Kid movies. At one point in one of the movies (I don't remember which), he tells his student Daniel to "find balance."
Client/server computing is an effort to find balance between the extremes of the dumb client and dumb server forms of computing. Figure 1-3 shows the balance of processing power when both the client and the server share the workload.
Client/server computing recognizes that a fully capable computer sits at each end of the network cable; therefore, client/server computing attempts to spread the work out equitably to take advantage of both computers.
The exact distribution of work between the client and server computer varies depending on the type of application. Typically, though, the client computer is responsible for all the work required to present to the user a fancy, easy-to-use, Windows-based interface, complete with dialog boxes, buttons, scroll bars, pull-down menus, icons, help, and any other goodies necessary to make the program itself easy to use (or at least make users think the program is easy to use). The server computer is responsible for managing database access: not just retrieving records, but sorting them, selecting just the ones you as the client are interested in, possibly making sure that other clients don't try to change the records you're looking at while you're looking at them, and a whole bunch of other stuff that I discuss elsewhere in this book.
The main benefit of client/server computing over dumb client computing is that, with client/server computing, adding whatever bells and whistles are necessary to make the program easy to use is a realistic option. If the server computer had to do all of the processing, it wouldn't have enough time to handle all the dialog boxes and other fancy interface stuff that Windows users have come to know and love. Plus, the burden on the network would be overwhelming, as the server would have to send an outrageous amount of information to the terminal just to display a simple dialog box.
The advantages of client/server computing over dumb server computing can be illustrated by an example. Suppose a user wants to display all customers whose year-to-date sales exceed $1,000. Instead of sending the entire customer file over the network to the client computer, the client computer asks the server computer to just send the over-$1,000 customers. The server computer then does the work of searching through the entire customer file to extract the ones requested. Then, only those records requested by the client are actually sent over the network. By working together, the client and the server have spared the network the unnecessary burden of sending every record in the file when only a few records are actually needed.
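The client/server version of the same request can be sketched using SQLite as a stand-in for the server's database engine (the table and column names are invented for illustration). Notice that the only thing the client sends over the "network" is a short SQL request, and the only thing that comes back is the handful of matching rows; the server does the scanning itself.

```python
import sqlite3

# A minimal sketch of the client/server split. SQLite stands in for
# the server's database engine; customers/ytd_sales are made-up names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, ytd_sales REAL)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("Acme", 2500), ("Baker", 400), ("Chen", 1800)],
)

# The client sends only this small request. The server scans the
# whole table and returns just the rows that qualify.
rows = conn.execute(
    "SELECT name FROM customers WHERE ytd_sales > ?", (1000,)
).fetchall()
print(rows)  # only the over-$1,000 customers cross the network
```

The request is a few dozen bytes and the reply is two rows, regardless of how large the customer table grows on the server side.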
Database access as described in this example is only one use of client/server computing. You can find other common uses outlined in Chapter 4.
Why Office Automation didn't work
When personal computers first leapt into the office scene, most people thought that they would be ideal Office Automation tools. The whole idea of Office Automation began in the early 1970s when Wang Laboratories introduced the first low-cost word processing systems. Businesses felt that if they could automate the tasks performed by office workers, they could increase worker productivity and improve the bottom line.
When the PC came out a decade later, businesses thought the goal of Office Automation -- the so-called "paperless office" -- would finally be realized. By replacing all of the typewriters, 10-key calculators, and file cabinets with computers, businesses expected major productivity gains.
Unfortunately, it didn't work out that way. Most office workers who use PCs, even the ones who use them proficiently, are no more productive than they were before they had PCs. Study after study has shown that offices today are no more productive than they were 10 or 15 years ago.
Why not? Partly because PCs encourage us to waste time on unimportant aspects of our work. We toil over every word of every letter or memo as if we're going after the Pulitzer Prize. Then we waste time dressing up the letter or memo with just the right typeface and clip art as if our jobs depend on how good our memos look rather than how intelligent they sound. As a result, most office workers don't get their work done any faster with a computer.
In truth, Office Automation is an empty promise. There really isn't any bottom-line profit to be gained by replacing typewriters with computers. Businesses do not become more profitable by making their internal memos look better (though workers are probably having more fun -- would you want to go back to a typewriter?).
The real gains come through reengineering business processes, not automating routine office chores. Personal computers and client/server computing can play a major role in reengineering, as outlined in Chapter 2.
What Client/Server Is Not
It's not enough to have a basic idea of what client/server computing is. Now, I want to spell out a few things that client/server is not.
- Client/server isn't networking. Client/server computing depends on networking for its very existence, but the mere presence of a network does not imply that client/server computing is being used. This point is confusing because local area networks use the term client to refer to computers connected to the network and server to refer to computers that are dedicated to sharing disk space and printers.
For an application to be a true client/server application, the server must do more than simply share files and printers; it must share the application's workload with the client computers.
- Client/server isn't database access. Database access is one of the most common divisions of labor in client/server systems, but not all database applications are client/server, and not all client/server applications use databases. For example, if you use a database program such as Paradox or dBase to create a database application, and then you move the database files to a network file server, you have not created a client/server application because the server computer does nothing more than house the database files.
- Client/server isn't GUI. Another common division of labor in client/server applications is to have the client computer provide the fancy graphical user interface, or GUI. However, not all applications that use a Windows-like GUI are client/server applications, and client/server applications do not have to use a GUI.
- Client/server isn't business reengineering. Client/server computing by itself will not improve the way your company does business. Client/server computing is often associated with reengineering because it is a new technology that, when properly used, can implement the kind of nontraditional applications that reengineering efforts frequently call for. However, client/server computing and reengineering are not one and the same. Chapter 2 spells out the role of client/server computing in reengineering.
- Client/server isn't downsizing. Client/server is often thought of as an excuse for getting rid of the mainframe. Sometimes it works out that way, but sometimes it doesn't. A client/server project may call for replacing dumb terminals with desktop computers to use as clients, leaving the mainframe in place to act as the server. In this case, client/server actually upsizes the application.
- Client/server isn't synonymous with open systems. The term open systems refers to the techniques of making inherently incompatible computer systems compatible. Client/server projects often depend on open systems for their success because they require that PC-based networks and mainframe-based networks be able to communicate with one another. The concept of open systems is just one of many issues that must be dealt with in a client/server project.
- Client/server isn't a conspiracy. Client/server computing is not a conspiracy by PC advocates to get rid of mainframes. Nor is it a conspiracy by mainframe old-timers to ensure their job security. It is, for the most part, a recognition of the best of both worlds. Client/server is a way to develop computer applications that are as easy to use as PC applications and as dependable as mainframe applications.
- Client/server isn't a silver bullet. Client/server computing is not like the Lone Ranger's bullet, which flew true and always found its target. Applying the client/server label to a project won't guarantee its success. This book points out the many pits that you may fall into along the client/server path.
That doesn't mean that you should avoid the client/server way or approach it with fear and trepidation. I'm reminded of the scene in The Wizard of Oz when the Scarecrow, Tin Man, and the Cowardly Lion come across a sign on the way to the Witch's castle that reads, "I'd turn back if I were you." Client/server may not be a Silver Bullet, but neither is it a Witch's castle. This book gives you the brains, the heart, and the nerve to plow full speed ahead into the client/server world.
Where Does the Internet Fit In?
The Internet has thrown a huge monkey wrench into the client/server equation. Just a few years ago, the Internet was interesting mostly because it enabled millions of users worldwide to exchange electronic mail (e-mail) as readable text. But with the advent of the World Wide Web, the Internet has become much more.
At first, the Web was viewed as a method of publishing information in an attractive format that could be accessed from any computer on the Internet. But the newest generation of the Web includes programmable clients, using such programming environments as Sun Microsystems' Java and Microsoft's ActiveX. With these programming environments, the Web has become a viable and compelling platform for developing client/server applications.
Unfortunately, the traditional tools of client/server computing, including development tools such as Powersoft's PowerBuilder and Borland's Delphi, do not allow you to develop client/server applications for use on the Web. But with time, these tools will adapt to the Web, and the Web will become the platform of choice for client/server computing.
Table of Contents
PART I: What You Need to Survive Client/Server Computing.
Chapter 1: What the Heck Is Client/Server Computing?
Chapter 2: Client/Server Computing and Your Business.
Chapter 3: Client/Server Myths Dispelled.
Chapter 4: A Smorgasbord of Client/Server Applications.
PART II: Of Clients, Servers, and Networks.
Chapter 5: The Right Kind of Client.
Chapter 6: GUI, Schmooey! Or, Picking a Graphical Interface.
Chapter 7: Servers of All Shapes and Sizes.
Chapter 8: Understanding Local Area Networks.
Chapter 9: Wide Area Networking: The Big Picture.
PART III: A Database Primer.
Chapter 10: Database for $100, Please.
Chapter 11: What Is SQL, and How Do You Pronounce It?
Chapter 12: Relational Database Design.
Chapter 13: Distributed Databases.
Chapter 14: Data Warehouses.
Chapter 15: All about Online Analytical Processing (OLAP).
PART IV: Building Client/Server Systems.
Chapter 16: Client/Server Development Tools.
Chapter 17: A Quick Look at Systems Analysis and Design.
Chapter 18: Visual Programming: A Working Model.
Chapter 19: Reporting for Success.
Chapter 20: Distributed Objects.
Chapter 21: Transaction Monitors.
Chapter 22: Managing Client/Server Systems.
PART V: The Internet Invasion.
Chapter 23: A Little Web Magic: From the Outside In.
Chapter 24: We're Not in HTML Anymore: Tools of the Trade.
Chapter 25: The Coffee Wars: Java and All That Jive.
PART VI: The Part of Tens.
Chapter 26: Ten Client/Server Commandments.
Chapter 27: Ten Client/Server Acronyms Decoded.
Chapter 28: Ten-Point Summary of Client/Server Computing (For Lazy Readers).
Bonus Section: A Client/Server Trade Show For Dummies (Without the Travel!).
Book Registration Information.