Chapter 5: Designing a Network Topology
Designing a backup path that has the same capacity as the primary path can be expensive and is only appropriate if the customer's business requirements dictate a backup path with the same performance characteristics as the primary path.
If switching to the backup path requires manual reconfiguration of any components, then users will notice disruption. For mission-critical applications, disruption is probably not acceptable; automatic failover is necessary. By using redundant, partial-mesh network designs, you can speed automatic recovery time when a link fails.
One other important consideration with backup paths is that they must be tested. Sometimes network designers develop backup solutions that are never tested until a catastrophe happens. When the catastrophe occurs, the backup links do not work. In some network designs, the backup links are used for load balancing as well as redundancy. This has the advantage that the backup path is a tested solution that is regularly used and monitored as a part of day-to-day operations. Load balancing is discussed in more detail in the next section.
The primary purpose of redundancy is to meet availability requirements. A secondary goal is to improve performance by supporting load balancing across parallel links.
Load balancing must be planned and in some cases configured. Some protocols do not support load balancing by default. For example, when running Novell's Routing Information Protocol (RIP), an Internetwork Packet Exchange (IPX) router can remember only one route to a remote network. You can change this behavior on a Cisco router by using the ipx maximum-paths command.
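As a rough sketch, the behavior change described above might look like the following Cisco IOS configuration fragment (the path count of 2 is an illustrative assumption, not a recommendation):

```
! Enable IPX routing, then allow the router to keep up to two
! equal-cost paths to each remote IPX network instead of one
ipx routing
ipx maximum-paths 2
```

With more than one path installed, the router can alternate traffic to a remote IPX network across the parallel links.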
In ISDN environments, you can facilitate load balancing by configuring channel aggregation. Channel aggregation means that a router can automatically bring up multiple ISDN B channels as bandwidth requirements increase. The Multilink Point-to-Point Protocol (MPPP) is an Internet Engineering Task Force (IETF) standard for ISDN B-channel aggregation. MPPP ensures that packets arrive in sequence at the receiving router. To accomplish this, data is encapsulated within the Point-to-Point Protocol (PPP) and datagrams are given a sequence number. At the receiving router, PPP uses the sequence number to re-create the original data stream. Multiple channels appear as one logical link to upper-layer protocols.

Most vendors' implementations of IP routing protocols support load balancing across parallel links that have equal cost. (Cost values are used by routing protocols to determine the most favorable path to a destination. Depending on the routing protocol, cost can be based on hop count, bandwidth, delay, or other factors.) Cisco supports load balancing across six parallel paths. With the IGRP and Enhanced IGRP protocols, Cisco supports load balancing even when the paths do not have the same bandwidth (which is the main metric used for measuring cost for those protocols). Using a feature called variance, IGRP and Enhanced IGRP can load balance across paths that do not have precisely the same aggregate bandwidth. Cost, metrics, and variance are discussed in more detail in Chapter 7, "Selecting Bridging, Switching, and Routing Protocols."
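A minimal sketch of MPPP channel aggregation on a Cisco router might look like the following (the interface name and load threshold are hypothetical values chosen for illustration):

```
! Hypothetical BRI configuration enabling Multilink PPP so that
! both B channels can be aggregated into one logical link
interface BRI0
 encapsulation ppp
 ppp multilink
 ! Bring up the second B channel when load on the first
 ! exceeds roughly 50 percent (128 of 255), in either direction
 dialer load-threshold 128 either
```

The load threshold controls when the additional B channel is dialed; MPPP's sequence numbers then keep the aggregated stream in order.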
Some routing protocols base cost on the number of hops to a particular destination. These routing protocols load balance over unequal-bandwidth paths as long as the hop count is equal. Once a slow link becomes saturated, however, higher-capacity links cannot be filled. This is called pinhole congestion. Pinhole congestion can be avoided by designing equal-bandwidth links within one layer of the hierarchy, or by using a routing protocol that bases cost on bandwidth and has the variance feature.
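As a sketch of the second approach, the variance feature might be enabled as follows on a Cisco router (the process number, network statement, and variance value are illustrative assumptions):

```
! Hypothetical Enhanced IGRP configuration; variance 2 lets the
! router also install paths whose metric is up to twice the best
! metric, so unequal-bandwidth links can share the load
router eigrp 100
 network 10.0.0.0
 variance 2
```

Traffic is then distributed over the installed paths in proportion to their metrics, so the faster link carries more of the load rather than idling while the slow link saturates.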
Load balancing can be affected by advanced switching (forwarding) mechanisms implemented in routers. Advanced switching processes often cache the path to remote destinations to allow fast forwarding of subsequent packets to those destinations. (The cache obviates the need for the router CPU to look in the routing table for a path.) The result of caching is that all packets destined to a particular destination take the same path. In this case, load balancing occurs across traffic flows to different destinations, but not on a packet-per-packet basis. Some newer technologies, such as Cisco Express Forwarding (CEF), can be configured to do packet-per-packet or destination-per-destination load balancing. Chapter 12, "Optimizing Your Network Design," covers CEF in more detail.
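A minimal sketch of selecting the CEF load-sharing mode might look like this (the interface name is an illustrative assumption):

```
! Enable CEF globally, then select per-packet load sharing on one
! interface instead of the default per-destination behavior
ip cef
interface Serial0
 ip load-sharing per-packet
```

Per-packet load sharing fills parallel links more evenly, but because consecutive packets of one flow can take different paths, it can cause out-of-order delivery; per-destination is usually the safer default.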
DESIGNING A CAMPUS NETWORK TOPOLOGY
Campus network design topologies should meet a customer's goals for availability and performance by featuring small broadcast domains, redundant distribution-layer segments, mirrored servers, and multiple ways for a workstation to reach a router for off-net communications. Campus networks should be designed using a hierarchical model so that the network offers good performance, maintainability, and scalability.
A virtual LAN (VLAN) is an emulation of a standard LAN that allows data transfer to take place without the traditional physical restraints placed on a network. A network administrator can use management software to group users into a VLAN so they can communicate as if they were attached to the same wire, when in fact they are located on different physical LAN segments. Because VLANs are based on logical instead of physical connections, they are very flexible.
Companies that are growing quickly cannot guarantee that employees working on the same project will be located together. With VLANs, the physical location of a user does not matter. A network administrator can assign a user to a VLAN regardless of the user's location. In theory, VLAN assignment can be based on applications, protocols, performance requirements, security requirements, traffic-loading characteristics, or other factors.
VLANs allow a large flat network to be divided into subnets. This feature can be used to divide up broadcast domains. Instead of flooding all broadcasts out every port, a VLAN-enabled switch can flood a broadcast out only the ports that are part of the same subnet as the sending station.
In the past, some companies implemented large switched campus networks with few routers. The goals were to keep costs down by using switches instead of routers, and provide good performance because presumably switches were faster than routers. Without the router capability of containing broadcast traffic, however, the companies needed VLANs. VLANs allow the large flat network to be divided into subnets. A router (or a routing module within a switch) was still needed for inter-subnet communication.
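As a sketch, dividing a flat switched network into VLAN-based subnets might look like the following Catalyst switch configuration (the VLAN number, name, and port are hypothetical examples):

```
! Define a VLAN and assign an access port to it; broadcasts
! from this port are flooded only to other ports in VLAN 10
vlan 10
 name Engineering
!
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10
```

Hosts in VLAN 10 share a broadcast domain with each other only; reaching hosts in another VLAN still requires a router or a routing module within the switch.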
As routers become as fast as switches and Layer-3 functionality is added to switches, fewer companies will implement large, flat, switched networks, and there will be less of a need for VLANs.
VLAN-based networks can be hard to manage and optimize. Also, when a VLAN is dispersed across many physical networks, traffic must flow to each of those networks, which affects the performance of the networks and adds to the capacity requirements of trunk networks that connect VLANs....