As mentioned in Chapter 1, there are three fundamental hardware components in a data communication network in addition to the network software: the servers or host computers, the client computers, and the network circuits that connect them. This chapter focuses on these three components. However, before we can discuss the various types of clients, servers, and circuits, we need to discuss fundamental network architectures: the way in which the application software is divided among the clients and servers.
There are three fundamental network architectures. In host-based networks, the host computer performs virtually all of the work. In client-based networks, the client computers perform most of the work. In client-server networks, the work is shared between the servers and clients. The client-server architecture is likely to become the dominant network architecture of the future.
The work done by any application program can be divided into four general functions. The first is data storage. Most application programs require data to be stored and retrieved, whether it is a small file (such as a memo produced by a word processor) or a large database (such as an organization's accounting records). The second function is data access logic, the processing required to access data, which often means database queries in SQL. The third function is the application logic, which also can be simple or complex, depending on the application. The fourth function is the presentation logic, the presentation of information to the user and the acceptance of the user's commands. These four functions are the basic building blocks of any application.
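The four functions above can be sketched as separate layers of a single program. This is a minimal, hypothetical employee-lookup example; all names and data here are illustrative, not from the text.

```python
# 1. Data storage: the raw records kept on disk or in a database.
EMPLOYEES = [
    {"name": "Alice", "life_insurance": True},
    {"name": "Bob", "life_insurance": False},
]

# 2. Data access logic: the processing required to retrieve data
#    (in practice this is often an SQL query).
def fetch_employees(has_insurance):
    return [e for e in EMPLOYEES if e["life_insurance"] == has_insurance]

# 3. Application logic: the processing the application itself performs.
def insured_employee_names():
    return sorted(e["name"] for e in fetch_employees(True))

# 4. Presentation logic: presenting results to the user.
def show_report():
    for name in insured_employee_names():
        print(name)

show_report()
```

The different network architectures in this chapter differ only in *where* each of these four layers runs, not in what the layers do.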
The very first data communications networks were host-based, with the host computer (usually a central mainframe computer) performing all four functions. The clients (usually terminals) enabled users to send and receive messages to and from the host computer. The clients merely captured key strokes and sent them to the host for processing, and accepted instructions from the host on what to display (see Figure 3-1).
This very simple architecture often works very well. Application software is developed and stored on one computer and all data are on the same computer. There is one point of control, because all messages flow through the one central host. In theory, there are economies of scale, because all computer resources are centralized. We will discuss costs later.
The fundamental problem with host-based networks is that the host must process all messages. As the demands for more and more network applications grow, many host computers become overloaded and cannot quickly process all the users' demands. Prioritizing users' access becomes difficult. Response time becomes slower, and network managers are required to spend increasingly more money to upgrade the host computer. Unfortunately, upgrades to host computers are "lumpy." That is, upgrades come in large increments and are expensive (e.g., $500,000); it is difficult to upgrade "a little."
In the late 1970s and early 1980s, intelligent terminals were developed that could perform some of the presentation function (intelligent terminals are discussed in more detail in the next section). This relieved only a little of the bottleneck, however, because the host still performed all of the processing and data storage. Hosts became somewhat less overloaded, but the network became more complex: developing applications was more difficult, and there were more points of failure.
In the late 1980s, there was an explosion in the use of microcomputers and microcomputer-based local area networks. Today, more than 80 percent of most organizations' total computer processing power resides on microcomputer-based LANs, not in centralized mainframe-based host computers. As this trend continues, many experts predict that by the end of the century, the host mainframe computer will contain 10 percent or less of an organization's total computing power.
Part of this expansion was fueled by a number of low-cost, highly popular applications such as word processors, spreadsheets, and presentation graphics programs. It was also fueled in part by managers' frustrations with application software on host mainframe computers. Most mainframe software is not as easy to use as microcomputer software, is far more expensive, and can take years to develop. In the late 1980s, many large organizations had application development backlogs of two to three years. That is, getting any new mainframe application program written would take years. New York City, for example, had a six-year backlog. In contrast, managers could buy microcomputer packages or develop microcomputer-based applications in a few months.
With client-based architectures, the clients are microcomputers on a local area network, and the host computer is a server on the same network. The application software on the client computers is responsible for the presentation logic, the application logic, and the data access logic; the server simply stores the data (see Figure 3-2).
This simple architecture often works very well. However, as the demands for more and more network applications grow, the network circuits can become overloaded. The fundamental problem in client-based networks is that all data on the server must travel to the client for processing. For example, suppose the user wishes to display a list of all employees with company life insurance. All the data in the database (or all the indices) must travel from the server where the database is stored over the network circuit to the client, which then examines each record to see if it matches the data requested by the user. This can overload the network circuits, because far more data is transmitted from the server to the client than the client actually needs. If the files are very large, they may also overwhelm the power of the client computers.
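The insurance example above can be sketched as follows. This is a hypothetical illustration of the client-based pattern: the "server" can only hand over the whole file, and the client does all the filtering itself.

```python
# Data stored on the server (illustrative records).
SERVER_DB = [
    {"name": "Alice", "life_insurance": True},
    {"name": "Bob", "life_insurance": False},
    {"name": "Carol", "life_insurance": True},
]

def server_send_all():
    # Client-based architecture: every record crosses the network
    # circuit, whether or not the client actually needs it.
    return list(SERVER_DB)

def client_filter(records):
    # The client examines each record to see if it matches.
    return [r["name"] for r in records if r["life_insurance"]]

records = server_send_all()       # all 3 records transmitted
insured = client_filter(records)  # only 2 were actually needed
```

The gap between records transmitted and records needed is the source of the circuit overload the text describes; it grows with the size of the database, not with the size of the answer.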
Most organizations today are moving to client-server architectures. Client-server architectures attempt to balance the processing between the client and the server by having both do some of the logic. In these networks, the client is responsible for the presentation logic, while the server is responsible for the data access logic and data storage. The application logic may reside on the client or on the server, or it may be split between both.
Figure 3-3 shows the simplest case, with the presentation logic and application logic on the client and the data access logic and data storage on the server. In this case, the client software accepts user requests and performs the application logic that produces database requests that are transmitted to the server. The server software accepts the database requests, performs the data access logic, and transmits the results to the client. The client software accepts the results and presents them to the user.
For example, if the user requests a list of all employees with company life insurance, the client would accept the request, format it so that it could be understood by the server, and transmit it to the server. Upon receiving the request, the server searches the database for all requested records and then transmits only the matching records to the client, which would then present them to the user. The same would be true for database updates; the client accepts the request and sends it to the server. The server processes the update and responds (either accepting the update or explaining why not) to the client, which displays it to the user.
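For contrast with the client-based sketch, the same query can be sketched in client-server form, where the data access logic runs on the server and only the matching records travel back. Again, all names and data are hypothetical.

```python
# Data stored on the server (illustrative records).
SERVER_DB = [
    {"name": "Alice", "life_insurance": True},
    {"name": "Bob", "life_insurance": False},
    {"name": "Carol", "life_insurance": True},
]

def server_handle(request):
    # Server: data access logic. Search the database and return
    # only the records that match the request.
    if request["type"] == "query_insured":
        return [r for r in SERVER_DB if r["life_insurance"]]
    return {"error": "unknown request"}

def client_request_insured():
    # Client: application and presentation logic. Format the request,
    # send it, and present the (already filtered) results.
    request = {"type": "query_insured"}
    matches = server_handle(request)  # only the 2 matches transmitted
    return [r["name"] for r in matches]
```

Only the matching records cross the network circuit, which is exactly the balancing of work between client and server that the text describes.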
Costs and Benefits of Client-Server Architectures
Client-server architectures have some important benefits compared to host-based architectures. First and foremost, they are scalable. That means it is easy to increase or decrease the storage and processing capabilities of the servers. If one server becomes overloaded, you simply add another server and move some of the application logic or data storage to it. The cost to upgrade is much more gradual and you can upgrade in smaller steps (e.g., $3,000) rather than spending hundreds of thousands to upgrade a mainframe host.
Client-server architectures can support many different types of clients and servers. You are not locked into one vendor, as is often the case in host-based networks. Likewise, it is possible to connect computers that use different operating systems so that users can choose which type of computer they prefer (e.g., combining both IBM microcomputers and Apple Macintoshes on the same network). Some types of computers and operating systems are better suited to different tasks (e.g., transaction processing, real-time video, mathematical processing). Client-server architectures allow you to match the needs of individual applications to different types of computers to maximize performance.
Finally, because no single host computer supports all the applications, the network is generally more reliable. There is no central point of failure that will halt the entire network if it fails, as there is in a host-based network. If any one server fails in a client-server network, the network can continue to function using all the other servers (but, of course, any applications that require the failed server will not function).
Client-server networks also have some critical limitations, the most important of which is their complexity. All applications in a client-server network have two parts: the software on the client and the software on the server. Writing this software is more complicated than writing the traditional all-in-one software used in host-based networks. Programmers often need to learn new programming languages and new programming techniques, which requires retraining.
Even updating the network with a new version of the software is more complicated. In a host-based network, there is one place in which application software is stored; to update the software, you simply replace it there. With client-server networks, you must update all clients and all servers. For example, suppose you want to add a new server and move some existing applications from the old server to the new one. All application software on all clients that send messages to the application on the old server must now be changed to send to the new server. Although this is not conceptually difficult, it can be an administrative nightmare.
Much of the debate between host-based and client-server networks has centered on cost. One of the great claims of host-based networks in the 1980s was that they provided economies of scale. Manufacturers of big mainframes claimed that it was cheaper to provide computer services on one big mainframe than on a set of smaller computers. The microcomputer revolution changed this. Over the past 20 years, the costs of microcomputers have continued to drop, while their performance has increased significantly. Today, microcomputer hardware is more than 1000 times cheaper than mainframe hardware for the same amount of computing power.
With cost differences like these, it is easy to see why there has been a sudden rush to microcomputer-based client-server networks. The problem with these cost comparisons is that they overlook the increased complexity associated with developing application software for client-server networks. Several surveys have attempted to discover the cost for software development and maintenance in client-server networks. The truth is, no one really knows, because we do not have enough long-term experience with them. Most experts believe that it costs four to five times more to develop and maintain application software for client-server networks than it does for host-based networks. As more companies gain experience with client-server applications, as new products are developed and refined, and as standards mature, these costs probably will decrease.
One of the strengths of client-server networks is that they enable software and hardware from different vendors to be used together. But this is also one of their disadvantages, because it can be difficult to get software from different vendors to work together. One solution to this problem is middleware, software that sits between the application software on the client and the application software on the server. Middleware does two things. First, it provides a standard way of communicating that can translate between software from different vendors. Many middleware tools began as translation utilities that enabled messages sent from a specific client tool to be translated into a form understood by a specific server tool.
The second function of middleware is to manage the message transfer from clients to servers (and vice versa) so that clients need not know the specific server that contains the application's data. The application software on the client sends all messages to the middleware, which forwards them to the correct server. The application software on the client is therefore protected from any changes in the physical network. If the network layout changes (e.g., a new server is added), only the middleware must be updated.
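The message-forwarding role described above can be sketched as a simple routing table. This is a hypothetical illustration, not any real middleware product: clients address messages by application name, and the middleware maps them to whichever server currently hosts that application.

```python
# Middleware routing table: application name -> physical server.
# All names here are illustrative.
ROUTES = {"payroll": "server_a", "inventory": "server_b"}

def middleware_forward(message):
    # Look up which server handles this application; the client
    # never needs to know the physical server's identity.
    return ROUTES[message["app"]], message

# If the payroll application later moves to a new server, only the
# middleware's routing table changes; no client software is touched.
ROUTES["payroll"] = "server_c"

server, msg = middleware_forward({"app": "payroll", "body": "run"})
```

This is why the text says clients are protected from changes in the physical network: the binding between application name and server lives in one place.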
There are literally dozens of standards for middleware, each of which is supported by different vendors, and each of which provides different functions. Two of the most important standards are Distributed Computing Environment (DCE) and Common Object Request Broker Architecture (CORBA). Both of these standards cover virtually all aspects of the client-server architecture, but are quite different. Any client or server software that conforms to one of these standards can communicate with any other software that conforms to the same standard. Another important standard is Open Database Connectivity (ODBC), which provides a standard for data access logic....
Pt. 1 Introduction
Ch. 1 Introduction to Data Communications
Pt. 2 Fundamental Concepts
Ch. 2 Application Layer
Ch. 3 Physical Layer
Ch. 4 Data Link Layer
Ch. 5 Network and Transport Layers
Pt. 3 Network Technologies
Ch. 6 Local Area Networks
Ch. 7 Wireless Local Area Networks
Ch. 8 Backbone Networks
Ch. 9 Metropolitan and Wide Area Networks
Ch. 10 The Internet
Pt. 4 Network Management
Ch. 11 Network Security
Ch. 12 Network Design
Ch. 13 Network Management
Pt. 5 Appendices
App. A Connector Cables
App. B Spanning Tree Protocol
App. C IP Telephony
App. D Cellular Technologies
App. E TCP/IP Game
App. F Windows Server
Posted November 17, 2001
I am taking a Business Data Communication class in college. Having been a NetAdmin for over six years and holding several premier IT certs, I thought that I would 'know it all' and this class would be boring for me. This book, however, has made the class very interesting. It seems to be very current, and covers virtually all aspects of data communications, from error-checking (CRC, parity, etc.: how it works, and its error rates), LAN & WAN links, network security, and cable & DSL Internet access, to modem protocols like x-modem & z-modem. Congrats to the author(s) for putting together such a current and comprehensive book.