"Deploying QoS for Cisco IP and Next Generation Networks: The Definitive Guide, by two distinguished network architects from Cisco Systems, Vinod Joseph and Brett Chapman, is an outstanding book covering up-to-date and novel quality-of-service (QoS) concepts and solutions.... it is a book that cannot be missed." (Rafal Stankiewicz, IEEE Communications Magazine, October 2010)
Deploying QoS for Cisco IP and Next Generation Networks: The Definitive Guide, by Vinod Joseph and Brett Chapman
Deploying QoS for IP Next Generation Networks: The Definitive Guide provides network architects and planners with insight into the various aspects that drive QoS deployment across network types. It serves as a single source of reference for businesses that plan to deploy a QoS framework for voice, video, mobility, and data applications, creating a converged infrastructure. It further provides detailed design and implementation guidance for service deployments across Cisco platforms such as the CRS-1, 12000, 7600, and 7200 series routers that are widely deployed in most carrier networks.
The book covers architectural and implementation-specific information, plus recommendations for almost all the popular line cards across the hardware platforms widely used in the market. Its treatment of QoS architecture and deployment on the Cisco CRS-1 platform is a unique selling point of this book.
In short, the book serves as an "on-the-job manual" that can also be used as a study guide for Cisco specialist certification programs (CCNA, CCIP, CCIE).
This book includes detailed illustrations and configurations. In addition, it provides detailed case studies along with platform-specific tests and measurement results. A link to a detailed tutorial on QoS metrics and associated test results is available at the book's companion website to ensure that the reader is able to understand QoS functionality from a deployment standpoint.
- Covers the requirements and solutions in deploying QoS for voice, video, IPTV, mobility, and data traffic classes (quad-play networks), saving the reader time in searching for hardware-specific QoS information, given the abundance of Cisco platforms and line cards.
- Presents real-life deployments by means of detailed case studies, allowing the reader to apply the same solutions to situations in the workplace.
- Provides QoS architecture and implementation details on Cisco CRS-1, 12000, 7600, and 7200 routing platforms using Cisco IOS/IOS-XR software, aiding the reader in using these devices and preparing for Cisco specialist certification.
Read an Excerpt
Deploying QoS for Cisco IP and Next-Generation Networks: The Definitive Guide
By Vinod Joseph and Brett Chapman
Morgan Kaufmann. Copyright © 2009 Vinod Joseph and Brett Chapman.
All rights reserved.
Chapter One: The Evolution of Communication Systems
From the traditional telecommunications point of view, the network is a way to link two end devices from different locations for short periods of time. Network users generally pay for access to the telecom network as well as paying per-connection charges.
Contrast this scenario with the traditional data networking ideology, whereby the media are shared by groups of people all working at the same time. Data represented the important aspect of the system; the network was merely a resource allowing users to execute applications, such as browsing websites, sending email, printing documents, and transferring files. Whether a transfer took 2 seconds or 20 was generally not the issue; the priority was that the data were received uncorrupted.
Both the telecom and the data networking industries have changed over time, particularly with the convergence of user requirements and expectations. The need for real-time media formats such as audio, video, and gaming has certainly increased the bandwidth requirements, as has applications development, embedding more media-rich functionality in basic desktop programs such as word processing and yielding larger files. Real-time audio- and videoconferencing over data networks have created a need for the same real-time quality-of-service (QoS) guarantees that the telecom industry has enjoyed since the inception of digital telephony. Even within nonreal-time data communications, there has been a drive for differentiated delivery based on premium paying customers and the expectation for service-level agreement contracts between service providers and enterprises.
This chapter gives a brief overview of QoS definitions and the evolution toward converged, IP-based next-generation networks (NGNs). Both the underlying transmission infrastructure and the overlaid transport networks are discussed.
1.1 Quality-of-Service Definition
There is no true formal definition of QoS. Protocol frameworks such as asynchronous transfer mode, or ATM (discussed in a later section), started the concept of QoS in response to networks converging to carry data with varying requirements. ATM can provide QoS guarantees on bandwidth and delay for the transfer of real-time and nonreal-time data.
In seeking a definition, the following sources give insight into what QoS means:
The International Telecommunication Union (ITU) standard X.902, Information Technology/Open Distributed Processing Reference Model, defines QoS as "A set of quality requirements on the collective behavior of one or more objects." QoS parameters are listed as the speed and reliability of data transmission, that is, throughput, transit delay, and error rate.
The ATM Lexicon defines QoS as "A term which refers to the set of ATM performance parameters that characterize the traffic over a given virtual connection." These parameters include cell loss ratio, cell error rate, cell misinsertion rate, cell delay variation, cell transfer delay, and average cell transfer delay.
The IEEE paper Distributed Multimedia and Quality of Service: A Survey provides a general definition of QoS for real-time applications: "The set of those quantitative and qualitative characteristics of a distributed multimedia system, which are necessary in order to achieve the required functionality of an application."
A starting point for the definition for QoS comes from the Internet Engineering Task Force (IETF). RFC-1946, Native ATM Support for ST2+, states: "As the demand for networked real time services grows, so does the need for shared networks to provide deterministic delivery services. Such deterministic delivery services demand that both the source application and the network infrastructure have capabilities to request, setup, and enforce the delivery of the data."
Ultimately the goal of a QoS framework is to ensure that data are transferred in a deterministic manner that at least meets the performance requirements of each service being delivered.
1.2 Transmission Infrastructure Evolution
In the late 1800s, signals were analog and were allocated a single channel per physical line for transmission, a technique called circuit switching. Development of the vacuum tube led to analog systems employing Frequency-Division Multiplexing (FDM) in 1925, allowing multiple circuits across a single physical line. Coaxial cable infrastructure began deployment in the 1930s, allowing greater bandwidth (and thus more circuits) for the telecom provider and yielding a more efficient infrastructure.
In the early 1960s the invention of the transistor and the concept of Pulse Code Modulation (PCM) led to the first digital channel bank featuring toll-quality transmission. Soon after, a high-bit-rate digital system employing Time-Division Multiplexing (TDM) was realized, allowing digital multiplexing of circuits and giving further efficiency in the use of physical communications infrastructure.
Advances in FDM and TDM allowed greater efficiency in physical infrastructure utilization. TDM communicates the bits from multiple signals alternately in timeslots at regular intervals. A timeslot is allocated to a connection and remains for the duration of the session, which can be permanent, depending on the application and configuration. The timeslot is repeated with a fixed period to give an effective throughput.
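The timeslot arithmetic above can be sketched in a few lines. The figures used here (an 8-bit sample repeated 8000 times per second) are the standard PCM voice-channel parameters, not values taken from this excerpt:

```python
# TDM sketch: a fixed-size timeslot repeated at a fixed frame rate
# yields a constant effective throughput for each circuit.

def tdm_throughput_bps(bits_per_slot: int, frames_per_second: int) -> int:
    """Effective bit rate of one TDM circuit."""
    return bits_per_slot * frames_per_second

# A PCM voice circuit: one 8-bit sample per frame, 8000 frames per second.
rate = tdm_throughput_bps(bits_per_slot=8, frames_per_second=8000)
print(rate)  # 64000 bps, the classic 64 kbps voice channel
```

Because the slot recurs whether or not the endpoints are sending, this rate is reserved for the circuit's entire lifetime, which is exactly the inefficiency the text returns to later.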
Multirate circuit switching was the next step away from basic circuit switching. This is an enhancement to the synchronous TDM approach used initially in circuit switching. In circuit switching, a station must operate at a fixed data rate regardless of application requirements. In multirate switching, multiplexing of a base bandwidth is introduced. A station attaches to the network by means of a single physical link, which carries multiple fixed data-rate channels (for example, in the case of ISDN, B-channel at 64kbps). The user has a number of data-rate choices through multiplexing basic channels. This allows for services of different rates to be accommodated, whereby the number of channels allocated is greater than or equal to the service bandwidth.
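The channel-allocation rule described above (allocate enough fixed-rate base channels to meet or exceed the service bandwidth) can be illustrated with a short sketch; the 64 kbps base rate is the ISDN B-channel mentioned in the text, while the 384 kbps service figure is an illustrative assumption:

```python
import math

def channels_needed(service_bps: int, base_bps: int = 64_000) -> int:
    """Number of fixed-rate base channels (e.g., ISDN B-channels at 64 kbps)
    that must be multiplexed to carry a service of the given rate."""
    return math.ceil(service_bps / base_bps)

# A hypothetical 384 kbps service needs six B-channels.
n = channels_needed(384_000)
print(n)  # 6

# The allocated capacity is always >= the service bandwidth:
print(channels_needed(100_000) * 64_000)  # 128000, for a 100000 bps service
```

The ceiling in the last example shows the cost of multirate switching: a service whose rate is not a multiple of the base rate wastes part of its last channel.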
The next evolutionary step from pure circuit switching is fast circuit switching (FCS). This transfer mode attempts to address the problem of handling sources with a fluctuating natural information rate. FCS only allocates resources and establishes a circuit when data need to be sent. However, the rapid allocation and deallocation of resources required to achieve this goal proved complex and required high signaling overhead. Ultimately and quickly, FCS became infeasible as more high data-rate services emerged with the dominance of data over voice transport.
It was not until the advent of optical transmission systems that the very high-bandwidth systems we know today emerged. Optical transmission is accomplished by modulating the transmitted information onto a laser or light-emitting diode (LED), passing the signal over optical fiber, and reconstructing the information at the receiving end. This technology yielded 45Mbps optical communications systems, which have developed to 1.2, 1.7, and 2.4Gbps. The emergence of Dense Wave-Division Multiplexing (DWDM) technology has seen the potential bandwidth over a single fiber reach 400Gbps and beyond.
In the mid-1980s the most common digital hierarchy in use was the plesiochronous digital hierarchy (PDH). A digital hierarchy is a system of multiplexing numerous individual base-rate channels into higher-level channels. PDH is called plesiochronous (from the Greek plesio, "almost") because the transmission is neither wholly synchronous nor asynchronous. PDH was superseded by synchronous digital hierarchy (SDH) and synchronous optical network (SONET), which took the PDH signals and multiplexed them into a synchronous time-division multiplex of basic signals. So, development went from the asynchronous multiplexing used in PDH to synchronous multiplexing in SDH/SONET.
In contemporary NGN systems with the emergence of IP-based networks, many service providers are using simple underlying DWDM optical switch physical infrastructure or even driving dark fiber directly from the IP routing and switching equipment.
DWDM works by combining and transmitting multiple signals simultaneously at different wavelengths on the same fiber. In effect, one fiber is transformed into multiple virtual fibers. So, if you were to multiplex eight 2.5Gbps signals into one fiber, you would increase the carrying capacity of that fiber from 2.5Gbps to 20Gbps. DWDM technology can drive single fibers to transmit data at speeds up to 400Gbps.
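The capacity arithmetic in this paragraph is simple multiplication, sketched below. The 8-wavelength example is the one given in the text; the 160-wavelength figure is an assumption chosen only to show how the quoted 400Gbps ceiling could be reached:

```python
def dwdm_capacity_gbps(wavelengths: int, rate_per_wavelength_gbps: float) -> float:
    """Aggregate capacity of one fiber carrying several independently
    modulated wavelengths (each wavelength acts as a virtual fiber)."""
    return wavelengths * rate_per_wavelength_gbps

print(dwdm_capacity_gbps(8, 2.5))    # 20.0 Gbps, the example in the text
print(dwdm_capacity_gbps(160, 2.5))  # 400.0 Gbps
```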
A key advantage to DWDM is that it is protocol and bit-rate independent. DWDM-based networks can transmit data in IP, ATM, SDH/SONET, and Ethernet and handle bit rates between 100Mbps and multiples of 2.5Gbps.
1.3 The First Global Communications Network: PSTN
Communications networks began with the telegraph system in the 1800s. The first recorded telegraph line was connected in 1844 from Washington, D.C., to Baltimore. In 1858 the first transatlantic cable was commissioned. By 1860 the telegraph system covered the majority of the United States.
The end of the 19th century saw the establishment of the analog public switched telephone network, or PSTN. Users were connected using temporary circuits through switches, the revolutionary approach known as circuit switching, as mentioned earlier. Alexander Graham Bell actually envisaged the telephone network as a one-way communications system for broadcasting music. As others became aware of the existence of Bell's invention, they realized the potential for two-way communication, and ultimately the PSTN was born.
As previously discussed, circuit switching allocates a dedicated line on the network path between the communicating parties. Channel capacity has to be available and reserved between each pair of nodes on the path, and each node has to have a switching capability to ensure that the next hop is correct.
For various applications, utilization of the line can vary enormously and can be very inefficient, even for voice applications during periods in which neither party is speaking. However, there is little delay and effective transparency for the users transferring information, whether that is voice or data, once the circuit is established.
TDM provides a fixed bit rate for transmission, which leads to problems in handling services of vastly different bit rates or fluctuating information rates, as was the case for data applications. TDM-based networks also tend to be inefficient because the bandwidth is dedicated to a communications circuit and if the end users are not active within the session, the bandwidth cannot be utilized for other users.
Through the 1970s and 1980s, data transfer through interconnection of computer systems rapidly became a requirement in parallel with the explosive growth of voice communications. This greatly influenced the direction of network development. The following section discusses the Computer Age and the advent of data internetworking.
1.4 The Internet and TCP/IP History
Much of the available Internet history literature suggests that the Internet began with some military computers in the Pentagon in a network called the Advanced Research Projects Agency Network (or Arpanet) in 1969. One theory is that the network was designed to survive a nuclear attack. This project led to the development of the Internet protocols sometime during the 1970s.
In reality, Bob Taylor, the Pentagon official in charge of the Arpanet, suggests that the purpose was not military but scientific. According to Taylor, there was no consideration of surviving a nuclear attack. In fact, Larry Roberts, who was employed to build the Arpanet, has stated that Arpanet was not even intended to be a communications infrastructure.
The Arpanet was invented to make it possible for research institutions to use the processing power of other institutions' computers when they had large calculations to do that required more power or when other agencies' computers were better suited for a given task.
Excerpted from Deploying QoS for Cisco IP and Next-Generation Networks by Vinod Joseph Brett Chapman Copyright © 2009 by Vinod Joseph and Brett Chapman. Excerpted by permission of Morgan Kaufmann. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.