Business Data Communications and Networking / Edition 11

by Jerry FitzGerald

ISBN-10: 111808683X
ISBN-13: 9781118086834
Pub. Date: 08/23/2011
Publisher: Wiley

Overview

A balanced approach that keeps up with a fast-moving field

Rapidly evolving data communications and networking technologies are shaping the future of the business world, creating new challenges for both business students and the instructors who must prepare them for their future careers.

To provide the most relevant, hands-on learning tool currently available, the latest edition of Business Data Communications and Networking has been thoroughly updated and revised, reflecting the input of users of this textbook worldwide. While retaining the balanced coverage of the technical and managerial aspects of data communications that has made previous editions so popular, this edition features a wealth of cutting-edge applications and new applied exercises designed to help students succeed in an ever-changing field.

Highlights of the 11th Edition include:

  • Combined coverage of wireless and wired LANs into one chapter
  • New streamlined and user-friendly format that reflects only current technologies
  • Expanded labs and real-world activities to reinforce key concepts and illustrate the practical uses of network technology
  • Updated coverage of routing, Ethernet, and IP services

Product Details

ISBN-13: 9781118086834
Publisher: Wiley
Publication date: 08/23/2011
Edition description: Older Edition
Pages: 608
Product dimensions: 7.50(w) x 9.30(h) x 1.00(d)

About the Author


Dr. Jerry FitzGerald is the principal in Jerry FitzGerald & Associates, which he started in 1977. While at this firm, he has gained valuable experience in risk analysis, computer security, audit and control of computerized systems, data communications, networks, and systems analysis. He received his Ph.D. in business economics and master's degree in business economics from the Claremont Graduate School, an MBA from the University of Santa Clara, and a B.A. in industrial engineering from Michigan State University.

Dr. Alan Dennis is an Associate Professor of Management Information Systems in the Terry College of Business at The University of Georgia. He received his Ph.D. in management information systems from the University of Arizona, an MBA from Queen's University in Ontario, Canada, and a B.A. in computer science from Acadia University in Nova Scotia, Canada. While publishing more than 80 business and research articles, he also gained extensive experience in the development and application of groupware and Internet technologies.

Read an Excerpt


Chapter 3: Physical Layer: Architectures, Devices and Circuits

Introduction

As mentioned in Chapter 1, there are three fundamental hardware components in a data communication network in addition to the network software: the servers or host computers, the client computers, and the network circuits that connect them. This chapter focuses on these three components. However, before we can discuss the various types of clients, servers, and circuits, we need to discuss fundamental network architectures: the way in which the application software is divided among the clients and servers.

Network Architectures

There are three fundamental network architectures. In host-based networks, the host computer performs virtually all of the work. In client-based networks, the client computers perform most of the work. In client-server networks, the work is shared between the servers and clients. The client-server architecture is likely to become the dominant network architecture of the future.

The work done by any application program can be divided into four general functions. The first is data storage. Most application programs require data to be stored and retrieved, whether it is a small file (such as a memo produced by a word processor) or a large database (such as an organization's accounting records). The second function is data access logic, the processing required to access data, which often means database queries in SQL. The third function is the application logic, which also can be simple or complex, depending on the application. The fourth function is the presentation logic, the presentation of information to the user and the acceptance of the user's commands. These four functions are the basic building blocks of any application.
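
To make the four functions concrete, here is a minimal sketch (hypothetical Python, not from the book) that factors a trivial employee-lookup application into data storage, data access logic, application logic, and presentation logic. Which computer runs each function is precisely what distinguishes the architectures that follow.

```python
# A hypothetical sketch: one tiny application factored into the four
# general functions. In a host-based network all four run on the host;
# the architectures discussed next split them across machines.

EMPLOYEES = [                       # 1. Data storage: the raw records
    {"name": "Ada", "insured": True},
    {"name": "Grace", "insured": False},
    {"name": "Edsger", "insured": True},
]

def fetch_insured():                # 2. Data access logic: retrieve
    return [e for e in EMPLOYEES if e["insured"]]   # matching records

def build_report(records):          # 3. Application logic: process data
    return sorted(e["name"] for e in records)

def display(names):                 # 4. Presentation logic: show results
    print("Employees with company life insurance:")
    for name in names:
        print(" -", name)

display(build_report(fetch_insured()))
```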

Host-Based Architectures

The very first data communications networks were host-based, with the host computer (usually a central mainframe computer) performing all four functions. The clients (usually terminals) enabled users to send and receive messages to and from the host computer. The clients merely captured keystrokes and sent them to the host for processing, and accepted instructions from the host on what to display (see Figure 3-1).
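
A minimal sketch of this division of labor (hypothetical Python; the account example is ours): the "terminal" contributes nothing but keystroke capture and display, while the host performs all four application functions.

```python
# Hypothetical sketch of a host-based exchange: the terminal captures
# keystrokes and displays whatever the host sends back; the host alone
# performs storage, data access, application, and presentation logic.

def host_process(keystrokes):
    balances = {"1001": 250.00}        # data storage
    account = keystrokes.strip()       # application logic
    balance = balances.get(account)    # data access logic
    if balance is None:                # presentation logic: the host
        return "UNKNOWN ACCOUNT"       # even decides what to display
    return f"ACCOUNT {account}  BALANCE {balance:.2f}"

def dumb_terminal(keystrokes):
    # The terminal adds nothing: forward the keys, print the reply.
    print(host_process(keystrokes))

dumb_terminal("1001")   # -> ACCOUNT 1001  BALANCE 250.00
```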

This very simple architecture often works very well. Application software is developed and stored on one computer and all data are on the same computer. There is one point of control, because all messages flow through the one central host. In theory, there are economies of scale, because all computer resources are centralized. We will discuss costs later.

The fundamental problem with host-based networks is that the host must process all messages. As the demands for more and more network applications grow, many host computers become overloaded and cannot quickly process all the users' demands. Prioritizing users' access becomes difficult. Response time becomes slower, and network managers are required to spend increasingly more money to upgrade the host computer. Unfortunately, upgrades to host computers are "lumpy." That is, upgrades come in large increments and are expensive (e.g., $500,000); it is difficult to upgrade "a little."

In the late 1970s and early 1980s, intelligent terminals were developed that could perform some of the presentation function (intelligent terminals are discussed in more detail in the next section). This relieved only a little of the bottleneck, however, because the host still performed all of the processing and data storage. Hosts became somewhat less overloaded, but the network became more complex: developing applications was more difficult, and there were more points of failure.

Client-Based Architectures

In the late 1980s, there was an explosion in the use of microcomputers and microcomputer-based local area networks. Today, more than 80 percent of most organizations' total computer processing power resides on microcomputer-based LANs, not in centralized mainframe-based host computers. As this trend continues, many experts predict that by the end of the century, the host mainframe computer will contain 10 percent or less of an organization's total computing power.

Part of this expansion was fueled by a number of low-cost, highly popular applications such as word processors, spreadsheets, and presentation graphics programs. It was also fueled in part by managers' frustrations with application software on host mainframe computers. Most mainframe software is not as easy to use as microcomputer software, is far more expensive, and can take years to develop. In the late 1980s, many large organizations had application development backlogs of two to three years. That is, getting any new mainframe application program written would take years. New York City, for example, had a six-year backlog. In contrast, managers could buy microcomputer packages or develop microcomputer-based applications in a few months.

With client-based architectures, the clients are microcomputers on a local area network, and the host computer is a server on the same network. The application software on the client computers is responsible for the presentation logic, the application logic, and the data access logic; the server simply stores the data (see Figure 3-2).

This simple architecture often works very well. However, as the demands for more and more network applications grow, the network circuits can become overloaded. The fundamental problem in client-based networks is that all data on the server must travel to the client for processing. For example, suppose the user wishes to display a list of all employees with company life insurance. All the data in the database (or all the indices) must travel from the server where the database is stored over the network circuit to the client, which then examines each record to see if it matches the data requested by the user. This can overload the network circuits, because far more data is transmitted from the server to the client than the client actually needs. If the files are very large, they may also overwhelm the power of the client computers.
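
The bottleneck is easy to see in a sketch (hypothetical Python, simulating the server as an in-memory list): because the server is only a data store, every record crosses the network circuit before the client can discard the ones it does not need.

```python
# Hypothetical sketch of client-based filtering: the server is only a
# data store, so the entire table crosses the circuit and the client
# performs the data access logic itself.

SERVER_DB = [{"id": i, "insured": i % 50 == 0} for i in range(10_000)]

def server_send_all():
    # The server cannot filter; it ships every record to the client.
    return list(SERVER_DB)

def client_query():
    records = server_send_all()                  # 10,000 records transferred
    hits = [r for r in records if r["insured"]]  # client filters locally
    print(f"transferred {len(records)} records to find {len(hits)} matches")

client_query()   # -> transferred 10000 records to find 200 matches
```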

Client-Server Architectures

Most organizations today are moving to client-server architectures. Client-server architectures attempt to balance the processing between the client and the server by having both do some of the logic. In these networks, the client is responsible for the presentation logic, while the server is responsible for the data access logic and data storage. The application logic may reside on the client or on the server, or it may be split between both.

Figure 3-3 shows the simplest case, with the presentation logic and application logic on the client and the data access logic and data storage on the server. In this case, the client software accepts user requests and performs the application logic that produces database requests that are transmitted to the server. The server software accepts the database requests, performs the data access logic, and transmits the results to the client. The client software accepts the results and presents them to the user.

For example, if the user requests a list of all employees with company life insurance, the client would accept the request, format it so that it could be understood by the server, and transmit it to the server. Upon receiving the request, the server searches the database for all requested records and then transmits only the matching records to the client, which would then present them to the user. The same would be true for database updates; the client accepts the request and sends it to the server. The server processes the update and responds (either accepting the update or explaining why not) to the client, which displays it to the user.
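
Here is the same insurance query under the client-server split, again as a hypothetical Python sketch: the server performs the data access logic, so only the matching records cross the circuit. Compare the transfer counts with the client-based sketch above.

```python
# Hypothetical sketch of the client-server split for the same query:
# the server performs the data access logic, so only matching records
# cross the circuit.

SERVER_DB = [{"id": i, "insured": i % 50 == 0} for i in range(10_000)]

def server_handle(request):
    # Server: data access logic + data storage.
    return [r for r in SERVER_DB if r["insured"] == request["insured"]]

def client_query():
    # Client: application logic formats a request the server understands;
    # presentation logic displays the reply.
    request = {"insured": True}
    matches = server_handle(request)     # only 200 records transferred
    print(f"received {len(matches)} matching records")

client_query()   # -> received 200 matching records
```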

Costs and Benefits of Client-Server Architectures

Client-server architectures have some important benefits compared to host-based architectures. First and foremost, they are scalable. That means it is easy to increase or decrease the storage and processing capabilities of the servers. If one server becomes overloaded, you simply add another server and move some of the application logic or data storage to it. Upgrades are also much more gradual: you can upgrade in small steps (e.g., $3,000) rather than spending hundreds of thousands of dollars to upgrade a mainframe host.

Client-server architectures can support many different types of clients and servers. You are not locked into one vendor, as is often the case in host-based networks. Likewise, it is possible to connect computers that use different operating systems so that users can choose which type of computer they prefer (e.g., combining both IBM microcomputers and Apple Macintoshes on the same network). Some types of computers and operating systems are better suited to different tasks (e.g., transaction processing, real-time video, mathematical processing). Client-server architectures allow you to match the needs of individual applications to different types of computers to maximize performance.

Finally, because no single host computer supports all the applications, the network is generally more reliable. There is no central point of failure that will halt the entire network if it fails, as there is in a host-based network. If any one server fails in a client-server network, the network can continue to function using all the other servers (but, of course, any applications that require the failed server will not function).

Client-server networks also have some critical limitations, the most important of which is their complexity. All applications in a client-server network have two parts: the software on the client and the software on the server. Writing this software is more complicated than writing the traditional all-in-one software used in host-based networks. Programmers often need to learn new programming languages and new programming techniques, which requires retraining.

Even updating the network with a new version of the software is more complicated. In a host-based network, there is one place in which application software is stored; to update the software, you simply replace it there. With client-server networks, you must update all clients and all servers. For example, suppose you want to add a new server and move some existing applications from the old server to the new one. All application software on all clients that send messages to the application on the old server must now be changed to send to the new server. Although this is not conceptually difficult, it can be an administrative nightmare.

Much of the debate between host-based and client-server networks has centered on cost. One of the great claims of host-based networks in the 1980s was that they provided economies of scale. Manufacturers of big mainframes claimed that it was cheaper to provide computer services on one big mainframe than on a set of smaller computers. The microcomputer revolution changed this. Over the past 20 years, the costs of microcomputers have continued to drop, while their performance has increased significantly. Today, microcomputer hardware is more than 1000 times cheaper than mainframe hardware for the same amount of computing power.

With cost differences like these, it is easy to see why there has been a sudden rush to microcomputer-based client-server networks. The problem with these cost comparisons is that they overlook the increased complexity associated with developing application software for client-server networks. Several surveys have attempted to discover the cost for software development and maintenance in client-server networks. The truth is, no one really knows, because we do not have enough long-term experience with them. Most experts believe that it costs four to five times more to develop and maintain application software for client-server networks than it does for host-based networks. As more companies gain experience with client-server applications, as new products are developed and refined, and as standards mature, these costs probably will decrease.

One of the strengths of client-server networks is that they enable software and hardware from different vendors to be used together. But this is also one of their disadvantages, because it can be difficult to get software from different vendors to work together. One solution to this problem is middleware, software that sits between the application software on the client and the application software on the server. Middleware does two things. First, it provides a standard way of communicating that can translate between software from different vendors. Many middleware tools began as translation utilities that enabled messages sent from a specific client tool to be translated into a form understood by a specific server tool.

The second function of middleware is to manage the message transfer from clients to servers (and vice versa) so that clients need not know the specific server that contains the application's data. The application software on the client sends all messages to the middleware, which forwards them to the correct server. The application software on the client is therefore protected from any changes in the physical network. If the network layout changes (e.g., a new server is added), only the middleware must be updated.
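
A sketch of this second function (hypothetical Python; real middleware such as a DCE or CORBA product is far more elaborate): clients address a logical service name, and only the middleware's routing table knows which physical server currently provides it, so moving a service requires changing the middleware alone.

```python
# Hypothetical sketch of middleware as a message router: clients send
# to a logical service name; only the middleware's routing table knows
# which physical server answers, so moving a service touches one place.

def server_a(message):
    return f"server-a handled: {message}"

def server_b(message):
    return f"server-b handled: {message}"

SERVERS = {"server-a": server_a, "server-b": server_b}
ROUTES = {"payroll": "server-a"}        # the middleware's routing table

def middleware_send(service, message):
    host = ROUTES[service]              # locate the current server
    return SERVERS[host](message)       # forward and return the reply

print(middleware_send("payroll", "list insured employees"))
ROUTES["payroll"] = "server-b"          # service moved; clients unchanged
print(middleware_send("payroll", "list insured employees"))
```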

There are literally dozens of standards for middleware, each of which is supported by different vendors, and each of which provides different functions. Two of the most important standards are Distributed Computing Environment (DCE) and Common Object Request Broker Architecture (CORBA). Both of these standards cover virtually all aspects of the client-server architecture, but are quite different. Any client or server software that conforms to one of these standards can communicate with any other software that conforms to the same standard. Another important standard is Open Database Connectivity (ODBC), which provides a standard for data access logic....

Table of Contents

PART I: INTRODUCTION.
Introduction to Data Communications.
PART II: NETWORK FUNDAMENTALS.
Application Layer.
Physical Layer.
Data Link Layer.
Network and Transport Layers.
PART III: NETWORK TECHNOLOGIES.
Local Area Networks.
Backbone Networks.
Metropolitan and Wide Area Networks.
The Internet.
PART IV: NETWORK MANAGEMENT.
Network Security.
Network Design.
Network Management.
Appendix A: Groupware.
Appendix B: Electronic Commerce.
Appendix C: Cellular Technology.
Appendix D: Connector Cables.
Appendix E: Systems Network Architecture.
Appendix F: Token Ring.
Appendix G: TCP/IP Game.
Glossary.
Index.