Designing Quality of Service Solutions for the Enterprise

by Eric D. Siegel
 


Overview

The complete guide for planning and implementing Quality of Service for enterprise networks

Can you guarantee that mission-critical data gets through your network no matter how busy the network is? With such services as videoconferencing, voice over IP, and real-time video competing for bandwidth against transaction processing and Web surfing, it's more important than ever to ensure that network resources are allocated appropriately. Written by an expert in the field, this hands-on guide will help you determine if you should utilize Quality of Service (QoS) solutions in your network. From choosing the best plan and products for your network to building and implementing QoS, you'll find all the information you'll need to keep your network running smoothly.

To help you decide if QoS is right for your network, this book covers:

• Factors behind the recent interest in QoS, Class of Service (CoS), and policy-based networking

• End-user and enterprise-wide requirements

• Specialized QoS requirements of real-time speech and video, client/server processing, and protocols

• Basic technologies used to provide QoS solutions

• The latest information on QoS solutions, including: Overprovisioning, Isolation, LAN QoS, Frame Relay, ATM and MPOA, IP Type of Service and Filtering, IP Integrated Services and RSVP, IP Differentiated Services, Traffic Shaping, and QoS Policy Management and Measurement

The companion Web site at www.wiley.com/compbooks/siegel features:

• Microsoft PowerPoint presentations of each chapter's material

• Updated information for each chapter

• Updated references for all chapters, with links to relevant sites

Product Details

ISBN-13: 9780471333135
Publisher: Wiley
Publication date: 10/01/1999
Pages: 320
Product dimensions: 7.52(w) x 9.49(h) x 0.70(d) inches

Read an Excerpt

Note: The Figures and/or Tables mentioned in this sample chapter do not appear on the Web.

For many years, the Internet functioned quite well without Quality of Service (QoS); the rudimentary QoS facility embedded in the basic Internet Protocol (IP) was generally ignored by network designers and users. Within the past few years, however, there has been an explosion of interest in the area, and all major Internet service providers and vendors are providing products that have Quality of Service features. What, then, is Quality of Service, and what are the meanings of the associated technologies for Class of Service (CoS) and for policy-based networking? And why is there suddenly interest in an area that was generally ignored for 20 years?

To answer these questions, this chapter first gives a brief working definition of QoS, CoS, and policy-based networking, then discusses the factors behind the interest in them.

Introduction

The Internet was originally designed to offer one level of service to everyone: best effort. A user transmitted his or her data into the network, and the network tried to deliver it to the correct recipient as quickly as possible and with as few errors as possible. There were no guarantees of transit time or even of delivery; some packets never arrived at their destination. Unlike a telephone system, there were usually no busy signals; the network would almost always accept data, but it would cope with congestion by simply discarding data at the first congestion point.

A rudimentary form of prioritization was available, but it was rarely used. As most network traffic was electronic mail (e-mail) or file transfer, with a limited amount of remote terminal emulation, such service was more than acceptable. There wasn't any World Wide Web with its associated browser technology, and there was virtually no voice or video over the Internet; the only alternative to the Internet was very expensive dial-up connections or leased lines.

Additionally, the original users of the Internet were all U.S. government researchers and contractors studying network design. Internet service was provided free of charge, and no one was making a profit on the Internet or worrying about customer satisfaction. There were very wide differences among computers (much wider than today's), and the challenge of transmitting data among those implementations was sufficient to absorb everyone's interest. As the technology was new and still experimental, episodes of poor performance were considered part of the normal experience; users were happy that the network was working at all.

As the Internet technology matured, the number of users of the network began to exceed the number of researchers into the network's technology. In 1975 ownership of the network was transferred from the Department of Defense's Advanced Research Projects Agency (ARPA) to the Defense Communications Agency. Tolerance of network performance problems decreased. Users were no longer satisfied by or interested in the explanatory, research-oriented memos that appeared from the original Internet contractor, Bolt Beranek and Newman (BBN), after each episode of poor performance or network failure. They wanted quick service, low error rates, and high availability. They also wanted greater interconnectivity among the various experimental and operational networks.

To improve network interoperability and availability, a new set of network software was developed. Called TCP/IP (Transmission Control Protocol/Internet Protocol), it replaced the original NCP (Network Control Protocol) on "Flag Day," January 1, 1983, and the current Internet technology went into production. Other TCP/IP-based networks appeared and were absorbed into the growing Internet. The Internet was becoming the core communications infrastructure for more than just U.S. networking researchers; it had evolved to support the international community, the military, computer scientists, and the whole academic world. Commercial networks using Internet technology had also appeared. Nevertheless, virtually all Internet communications still provided only one level of service, that of best effort.

In 1991 the regulations against commercial use of the Internet backbone were lifted, and the commercial networks and the Internet merged. By the end of 1991, over 5000 networks were a part of the Internet.

In 1993 the first graphical Web browser, Mosaic, was released, and the Internet launched into a new phase of even more explosive growth. Because Mosaic and other Web browsers were so much more user friendly and versatile than the earlier forms of electronic mail, file transfer, and terminal emulation, non-technical users found the new Web, built on the Internet backbone, to be a crucial part of their daily lives.

Commerce began on the Web, and, with electronic commerce (e-commerce) came more pressure for regulated unfairness (Quality of Service) in Internet performance. Users were paying directly for Internet use, and some were willing to pay extra for better performance to support new applications, such as real-time voice and video, that might save money over alternative telephone company connections. Even if the particular application didn't require unusually good performance for its operation, companies on the Internet wanted guarantees of good performance to ensure that their customers would be pleased with their experience of the company's Web presence. The ubiquitous best-effort service level was no longer enough.

Internet service and equipment vendors quickly responded to the demands for Quality of Service with a barrage of new offerings and a pile of confusing marketing literature. Each new attempt at technology from the Internet standards committees was quickly warped into a product announcement, and life became quite complex for network designers and users. It was to help decrease that confusion and ease the designer's job that this book was conceived.

Definitions

Quality of Service (QoS) and Differentiated Class of Service (CoS) are methods for providing enhanced services to network traffic. Policy-based networking is a recent, grander scheme for administering such services along with other services such as security.

Quality of Service (QoS)

Quality of Service (QoS) is a somewhat vague term referring to the technologies that classify network traffic and then ensure that some of that traffic receives special handling. The special handling may include attempts to provide improved error rates, lower network transit time (latency), and decreased latency variation (jitter). It may also include promises of high availability, which is a combination of mean (average) time between failures (MTBF) and mean time to repair (MTTR).

NOTE The formula for availability is: Availability = MTBF / (MTBF + MTTR). In a chain of elements that must all function properly for the system as a whole to function, the availability of the system is the product of the individual availabilities. Total availability decreases as the number of elements increases.
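As a quick illustration of how that formula behaves, here is a minimal sketch in Python (the MTBF and MTTR figures are hypothetical, not taken from the book):

    # Availability = MTBF / (MTBF + MTTR); a chain of elements that must all
    # work has an availability equal to the product of the individual values.

    def availability(mtbf_hours, mttr_hours):
        """Availability of a single element."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    def chain_availability(elements):
        """Availability of a series of (MTBF, MTTR) elements that must all work."""
        total = 1.0
        for mtbf, mttr in elements:
            total *= availability(mtbf, mttr)
        return total

    # Hypothetical example: a router, a WAN link, and a switch in series.
    print(chain_availability([(10000, 4), (2000, 8), (20000, 2)]))
    # Prints roughly 0.9955, lower than any single element's availability.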

Quality of Service facilities in some technologies, such as Asynchronous Transfer Mode (ATM), can be quite detailed, providing the user with explicit guarantees of average delay, delay variation (jitter), and data loss. But, as we will see in later chapters, QoS does not necessarily guarantee particular performance. Performance guarantees can be quite difficult and expensive to provide in packet-switched networks, and most applications and users can be satisfied with less stringent promises, such as prioritization only, without delay guarantees. Most modern applications automatically recover from lost data packets or can function in the presence of loss, and most users have learned to accept minor instances of increased network delay.

There is a second part to the definition of QoS: the description of how traffic is to be classified. Some QoS implementations provide per-flow classification, in which each individual flow is categorized and handled separately. This can be expensive if there are a lot of flows to be managed concurrently.

NOTE A flow is defined as a sequence of data packets sharing the same combination of source and destination address, along with any other distinguishing characteristics that may be necessary to differentiate it from other flows sharing the same address pair. A flow in a packet-switched network is very similar to the concept of a circuit in a circuit-switched network or a virtual circuit in a frame relay, X.25, or ATM network. In some cases, the identifying information for a flow is referred to as the five-tuple (the combination of the source and destination addresses, IP port numbers, and protocol type).
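Purely as an illustration (the packet fields and helper below are hypothetical, not the book's code), a per-flow classifier might key its state on the five-tuple like this:

    from collections import namedtuple

    # Five-tuple identifying a flow: addresses, ports, and transport protocol.
    FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port protocol")

    def group_by_flow(packets):
        """Group packets (dicts with the fields below) into per-flow lists."""
        flows = {}
        for pkt in packets:
            key = FlowKey(pkt["src_ip"], pkt["dst_ip"],
                          pkt["src_port"], pkt["dst_port"], pkt["protocol"])
            flows.setdefault(key, []).append(pkt)
        return flows

The expense mentioned above comes from maintaining an entry in a table like this for every concurrent flow.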

Differentiated Class of Service (CoS)

Differentiated Class of Service (CoS) is a simpler alternative to QoS that was developed because of the expense of classifying traffic into flows and of maintaining and using per-flow information. Whereas QoS classifies packet streams into individual traffic flows, each of which may have a unique set of quality characteristics (error rate, latency, etc.), CoS is much coarser. It doesn't try to distinguish among individual flows; instead, it uses simpler methods to classify packets into one of a few categories. All packets within a particular category are then handled in the same way, with the same quality parameters, as set by the network managers. Unique per-flow quality information is not stored in the network devices.
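To make the contrast concrete, a CoS-style classifier might look something like the following sketch; the class names and the port-based rule are hypothetical examples, not taken from the book:

    # Coarse CoS classification: no per-flow state, just a handful of classes.
    CLASS_BY_DST_PORT = {5060: "voice", 1720: "video", 80: "web"}

    def class_of_service(dst_port):
        """Map a packet's destination port to one of a few traffic classes."""
        return CLASS_BY_DST_PORT.get(dst_port, "best-effort")

Every packet that lands in the same class receives the same handling, regardless of which flow it belongs to.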

Clearly CoS is a simpler technology than QoS, and this leads to some confusion in the marketplace. Vendors sometimes advertise QoS capabilities when, by the definitions just explained, their product offers only CoS capabilities. As there is no standard, generally accepted definition of QoS and CoS, the buyer must inquire closely to determine the precise capabilities provided.

NOTE This book uses "QoS" as an abbreviation for "QoS or CoS."

Policy-Based Networking

Both QoS and CoS require management control to prevent all of the users in the network from upgrading themselves to the highest service level. Policy-based networking is the latest, most sophisticated method for implementing that control. Policy-based networking allows much greater control over network performance than previous methods, as it provides facilities for end-to-end control and also allows dynamic changes to the rules as network utilization changes or as business needs dictate.

The primary idea behind policy-based networking is a generalization of the concept of access control lists, which have been used on mainframe computers for years. Once an individual user has been identified to the network along with the resource (network and application) that he or she is requesting, the network management system can decide whether to grant access and the quality of the access to be made available. The rules for access and for management of network resources are stored as policies and are managed by a policy server. The various network components (routers, switches, dial-in access devices, even the network interface cards on personal computers) can then ask the policy server for a decision when the user asks for network resources. The policy server, in turn, can send queries to other policy servers or to other enterprise databases before sending a reply.
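As a rough sketch of the idea (a toy example, not any vendor's actual policy protocol), a policy server's decision logic might look like this:

    # Ordered policy rules: (user group, application, decision, traffic class).
    POLICIES = [
        ("executives",  "voip", "permit", "voice"),
        ("engineering", "ftp",  "permit", "bulk"),
        ("*",           "*",    "permit", "best-effort"),
    ]

    def policy_decision(user_group, application):
        """Return (decision, traffic class) from the first matching rule."""
        for group, app, decision, traffic_class in POLICIES:
            if group in ("*", user_group) and app in ("*", application):
                return decision, traffic_class
        return ("deny", None)

    # Example: a network device asks the policy server about a user's request.
    print(policy_decision("executives", "voip"))   # ('permit', 'voice')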

Why Now?

During the first 20 years of the Internet's existence, Quality of Service was rarely considered. Recently, however, the need for QoS began to grow, for several reasons:

  • Commercial vendors began offering products and services that used Internet technology and could therefore be deployed either on the Internet itself or on private versions of the Internet (intranets).
  • The new products and services, built to attract customers by using fancy graphics, multimedia, and a large amount of interaction, sometimes made large demands on Internet and intranet services in terms of quality factors such as bandwidth, latency, jitter, and error rate. While this might have been acceptable for experimental use on the early Internet, it was a problem when commercial sales of these products and services created tens of thousands of concurrent users. Even in the absence of unusual quality demands, the sheer number of users placed a severe load on the Internet and intranet backbones, resulting in congestion.
  • Commercial users of the Internet were paying for access and were concerned with customer satisfaction and with communications costs. Poor Internet service meant dissatisfied or lost customers. It didn't just mean temporary delays when using file transfer to obtain research materials or when using remote terminal emulation to play the popular game of Hunt the Wumpus.
  • Given the congestion on the Internet backbone, with the resulting quality problems, some users were willing to pay extra for better performance to support new applications. Typical of these was Internet voice, which might cost less than the alternative telephone company connections, but which required high-quality backbone service to make it work acceptably. Company management could be persuaded to pay a little extra for voice over the Internet, but, at the same time, they might not pay extra to deliver e-mail in two seconds instead of in two minutes. Internet Service Providers (ISPs) therefore began to investigate ways of providing traffic differentiation.
  • Many users were beginning to combine previously existing networks into intranet enterprise backbones using Internet technology, but they discovered that the traffic from those previously separate networks did not always flow smoothly when sharing the same backbone. Some method of managing traffic flows was needed to ensure that enterprise users wouldn't revolt and reestablish the original separate networks.

There is therefore a combination of technical factors and market factors behind the current interest in QoS.

Technical Factors

Technical factors leading to QoS, which are discussed in more detail in the first half of this book, include application, protocol, and architectural requirements.

  • Application requirements include the demands of new technologies such as real-time voice and video. These applications, which prefer end-to-end one-way delays of less than 150 ms, are beginning to appear on intranets. Other applications, such as client/server, can make severe demands on the network if they are poorly designed, which, unfortunately, they often are. A badly built client/server application may perform dozens of data exchanges across the network to handle a single user transaction. Such applications work well on unloaded local area networks (LANs) in the development facility, but they cause major problems when moved into production on heavily loaded corporate LANs and on wide area networks (WANs).
  • Transport and user protocol requirements include both the minimal needs of particular protocols to avoid their own failure and the need to regulate protocol behavior to prevent some protocols from harming overall network performance. As an example of protocol failure, many transactions that use legacy IBM protocols, such as those from IBM's Systems Network Architecture (SNA), may time out if network delays are greater than a few seconds, forcing the user to reconnect. As an example of network harm, some protocols, if unrestrained by the network's QoS facilities, can absorb all available network bandwidth and starve other, better-behaved protocols. Without tight network control of latency and jitter, it's also possible that massive traffic flows with large data packets, such as those involved in file transfer, could disrupt real-time traffic flows, causing unwanted pauses in the flow and possible packet loss.
  • Architectural requirements are the natural results of hierarchical traffic aggregation and of interconnecting network segments that have different speeds. Bottlenecks occur at transitions between high-bandwidth network sections and low-bandwidth sections. It is difficult to design aggregation points because new applications and usage patterns can quickly shift the traffic pattern to put stress in unanticipated areas, and overprovisioning (throwing bandwidth at the problem) isn't economically practical in many cases and is sometimes overwhelmed by unanticipated traffic. The increasing shift away from local data flows (e.g., from desktop PC to local printer) and toward enterprise-wide data flows (e.g., from desktop PC to intranet server or the Internet) exacerbates this problem.

As is discussed in Chapter 7's "Overprovisioning" section, pure bandwidth alone usually cannot solve all of these requirements, especially if there are bottlenecks in the network. Even if there's temporarily enough bandwidth, network management personnel are hard pressed to keep ahead of the bandwidth demands of the users and their new applications. Any temporary surge in bandwidth demand, a common occurrence in modern networks, results in uncontrolled losses of data from random users; there's no way to ensure that critical applications and users will continue to get network service. Worse, an unanticipated permanent rise in bandwidth needs can cause major disruptions of those critical applications and users during the weeks or months while the network is being upgraded to handle the new aggregate demands. QoS is becoming important to network architects because it promises to help them handle these requirements.

Market Factors

There are also market factors behind the interest in QoS. Responding to the delays and problems appearing on many networks, network providers and equipment vendors have rushed to embrace QoS as a way to differentiate their products and increase their profits. At the same time, certain vendors have used their lack of complex QoS as a marketing advantage. And user organizations, which have to pay the bill for all of this, are trying to regulate usage and decrease their total cost of ownership. Particular examples include these:

  • Many Internet Service Providers (ISPs) are offering guarantees of availability and maximum latency (subject to certain restrictions) to differentiate their product in a crowded marketplace.
  • Vendors of network routers and switches have also discovered that QoS capabilities are a market differentiator. Vendors boast of the particular QoS standards that are supported and the number of different quality classes that the device can manage. Other vendors, possibly those without the ability to support such facilities, emphasize the native speed and bandwidth of their devices and claim that the additional complexity of multiple queues and QoS management (the "QoS tax") is unnecessary and unproven.
  • Network managers are trying to provide more fairness in their billing for the enterprise network backbone. The per-user charge, or "seat charge," used by many organizations is being replaced by more sophisticated service-level agreements and by the new policy-based networking. These techniques demand that the network managers be able to monitor and regulate the flow of user traffic into the network, restricting users to their agreed traffic characteristics. They also require that the network manager be able to provide the promised service levels and be able to prove that those service levels were achieved. Without these new techniques, and the technologies to support them, user organizations may withdraw from enterprise backbones and build their own dedicated networks to avoid backbone contention and perceived unfairness in paying for the network.
  • Finally, network users are aware that some applications are more valuable to the business than others, and that some applications are worth the additional money that the network provider may charge for superior-quality network service.

Summary

Pressures for Quality of Service (QoS) grew because some users were willing to pay extra for better performance to increase their productivity or to support new applications, such as voice and video. The merging of different networks into multiservice backbones also increased the need for traffic flow differentiation to control network backbone performance and to allocate costs fairly. In response, suppliers and vendors of Internet connectivity and technology began to bring QoS to the marketplace.

Quality of Service, the original method for traffic differentiation, distinguishes among individual user-to-user connections and then handles each one separately to provide its requested quality, as measured by end-to-end delay (latency), latency variation (jitter), availability, and more. Differentiated Class of Service (CoS) is coarser; it groups connections into a few classes and handles all the connections in a class in the same way. And policy-based networking is the latest technology being used to manage the assignment of QoS or CoS to connections. It takes into consideration the user identity, the application being used, the overall state of the network backbone, and other factors, then directs the various network components appropriately.

Meet the Author

Eric D. Siegel is a Senior Internet Consultant for Keynote, a leading supplier of Internet performance measurement, diagnostic, and consulting services to companies with e-commerce Web sites. He has been a member of the Internet community since 1974, has worked as a senior network architect for over 15 years, and has designed network architectures for many Fortune 500 companies. Eric also teaches QoS and is a session chairman at major industry conferences such as Networld + Interop and COMDEX.
