This reference will help Windows 2000 administrators, business managers, and technical professionals understand Windows 2000 capacity planning and performance tuning issues in order to optimize the performance of computing resources while reducing the total cost of ownership. This Technical Reference offers ready advice on how to implement, configure, and run Windows 2000 Server-based systems for faster, more stable, and more efficient performance. It covers issues such as optimization and replication of the Active Directory service across enterprise networks, domains in LAN and WAN networks, and Web hosting — making it a comprehensive roadmap to Windows 2000 performance tuning.
Although network monitoring is difficult, optimizing network performance is even harder. Any change you make to one machine normally changes the performance characteristics of the network as a whole and each machine on the network. For example, adding a higher performance network interface card (NIC) to one workstation may adversely affect the performance of other machines on the network that have lower performance NICs installed. Even something as simple as a cable change can affect network performance because cable length affects network timing and, therefore, the rate at which data moves on the cable.
In short, the network is the most difficult part of the tuning process. You not only have the operating system and the local hardware to worry about, but also every other machine on the network. That's why this chapter is so important to the optimization of your system as a whole. The ability to communicate with other users is what makes the network popular, while the interactions and complexity of the network environment are the stuff of nightmares for the network administrator. This chapter seeks to reduce the complexity of the network performance monitoring and optimization equation.
The first section of the chapter, "An Overview of Network Bottleneck Sources," summarizes some of the most important network problems that you encounter. We'll divide this conversation into four major components: the operating system, the local machine, remote machines, and other sources. These four major sources of network bottlenecks are the first place to look for performance problems on the network.
One of the most difficult problems to assess is the effect of network topology on network performance. This topic is the focus of the second section, "Overview of Network Topology Limitations." You can tune the local machine's hardware and use every operating system performance trick in the book, and still not meet your network performance goals if the network topology is incapable of producing the desired results. For example, anyone who thinks they get the full 10 Mb/s of bandwidth from an Ethernet network is in for a surprise. A host of factors make it impossible to realize the full performance potential of any network topology. In fact, this is such a significant problem that we spend time comparing the theoretical performance potential of each topology with what it delivers in the real world.
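To see why the full 10 Mb/s is out of reach even before contention enters the picture, consider just the framing overhead. The following sketch uses textbook Ethernet framing sizes (preamble, header, frame check sequence, and interframe gap); the figures are illustrative values from the Ethernet specification, not measurements from any particular network:

```python
# Rough per-frame efficiency of 10 Mb/s Ethernet. These are best-case
# numbers: CSMA/CD contention and collisions reduce real throughput further.
PREAMBLE = 8   # preamble + start-of-frame delimiter, in bytes
HEADER = 14    # destination MAC + source MAC + EtherType
FCS = 4        # frame check sequence
IFG = 12       # minimum interframe gap, expressed in byte times

def efficiency(payload_bytes):
    """Fraction of wire time that carries user payload for one frame."""
    wire_bytes = PREAMBLE + HEADER + payload_bytes + FCS + IFG
    return payload_bytes / wire_bytes

print(f"1500-byte frames: {efficiency(1500):.1%}")  # maximum payload
print(f"  46-byte frames: {efficiency(46):.1%}")    # minimum payload
```

Even with maximum-size frames, roughly 2.5 percent of the wire time goes to framing overhead, and small frames waste nearly half of it; higher-layer protocol headers and collisions on a busy segment push usable bandwidth down from there.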
The third section, "Understanding Network Component Interactions," looks at how various network elements can work together to improve performance, or conflict to degrade network performance. The whole idea of interactions causes some network administrators to dismiss this area of tuning as too difficult to manage, especially on large networks. However, the gains or losses a network can encounter due to interactions tend to dwarf other areas of network optimization. You can't afford to ignore interactions—they must be managed to optimize data throughput. Fortunately, there are ways to make the interaction picture easier to see and, more importantly, manage.
In the fourth section, "User-Oriented Network Bottleneck Solutions," we look at how the user affects network performance. This is one of those areas where you can't predict the results because user-oriented solutions depend on the cooperation of the user to work. For example, what happens to network performance if you have 50 people using a word processor all day and they unconsciously hit the Save button every few seconds? What you end up with is a lot of unnecessary network traffic because of a "nervous twitch" that can be avoided. The problem from the network administrator's perspective is tracking down the source of this nervous twitch, and then devising a remedy that the user is willing to try. A solution for this kind of problem can be as easy as setting the word processor to automatically save at given intervals, and then demonstrating that the feature does indeed work.
The user isn't the only source of potential network performance enhancements. Given the state of the art, you'll find that hardware is the most common solution to network performance problems. That's the topic of the fifth section, "Hardware-Oriented Network Bottleneck Solutions." You may find that your network is suffering from too many users on one network segment. Perhaps the answer to a network performance problem isn't in the operating system or the user, but in creating a new network segment so the users have the additional bandwidth required to get their work done. There are also other issues to consider, such as the quality of the hardware you use, the kind of drivers, and the way the hardware and drivers are configured. All these items impact the way the network performs.
The sixth, and final, section, "Software-Oriented Network Bottleneck Solutions," contains a discussion of the software elements you need to consider when it comes to network performance. This section looks at two major areas of the network: the local operating system and the applications designed to use the network. Both areas require tuning at some level. In some cases, tuning takes the form of software options. In other cases, you find that wise use of network resources is the answer to performance problems.
There isn't much you can do about certain network performance inhibitors, so we don't even discuss them in this chapter. Users adamantly protect their right to hide data—no amount of training changes that stance. Topology limitations in effect today are unlikely to disappear tomorrow, no matter how much you'd like to get rid of them. However, other network performance inhibitors are actually very easy to remedy and result in massive savings of both time and effort for the network administrator. Not only does the administrator benefit, but so does the user. A network that performs well can make users feel as if they're accessing local resources, when, in fact, those resources reside on another machine and can be physically located in another building.
The following four sections don't provide an in-depth view of network bottlenecks, but they do give you an overview of the kinds of problems you'll run into. Sometimes classifying a network bottleneck is the most difficult problem of all because the problem looks like it can be part of several major subsystems. These sections help you reduce the complexity of the problem by breaking it down to one of four major network areas: operating system, local machine (both hardware and software), remote machine (both hardware and software), and other (like users). The purpose of this section is to make network problems easier to resolve by making them easier to see. (You can't easily fix what you can't see or at least understand.)
So, where does the operating system itself come into play? The operating system implements the various layers that are mandated by various networking standards. For example, the TCP protocol has requirements that an operating system must meet to ensure interoperability with other operating systems that also implement TCP. The same holds true with IP and other protocols you may use in setting up network communications. The idea is that all the layers contribute toward one goal—network communication—but that each layer requires separate handling by the operating system to ensure modularity.
Since each layer in the networking model is independent, you can mix and match protocols that are compatible within a networking model. In fact, you'll find that Windows 2000 doesn't always use the familiar TCP/IP protocol pair to communicate across the network. Sometimes it uses UDP/IP instead. The protocols used to communicate between nodes must be agreed on in advance and are often dictated by the application in use at the time.
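The choice between TCP/IP and UDP/IP is visible right at the socket interface. Here's a minimal sketch in Python using the generic Berkeley-style socket API (not anything specific to Windows 2000) showing that the application, not the operating system, picks the transport protocol:

```python
import socket

# The application chooses the transport when it creates the socket: a file
# transfer might insist on TCP's reliable byte stream, while a name-lookup
# service may prefer UDP's lower per-message overhead.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP/IP
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP/IP

print("TCP socket type:", tcp_sock.type)
print("UDP socket type:", udp_sock.type)

tcp_sock.close()
udp_sock.close()
```

Both sockets ride on the same IP layer underneath, which is exactly the mix-and-match independence the layered model is designed to provide.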
The independence of each network layer means you can tune each layer within the confines of the protocol specification. The interdependence of each layer, however, means a change at one layer, of necessity, affects all the other layers in the network model. In short, you need to consider how a performance change in one layer will affect the other layers around it. As you can see, the question of tuning a network model that's implemented by Windows 2000 isn't an easy issue to discuss. You need to consider each layer not only individually, but also as part of the greater whole.
The following sections help you better understand how this layering works. First, we look at the network layers from a conceptual point of view. We talk about how the layers fit together from a very generic, standards-oriented, perspective. The second section covers a specific network model implementation. We actually tear a network packet (the envelope used to transfer data from one node to another) apart to see what makes it tick. As a result of reading these two sections, you should walk away with a better idea of how protocol layering creates a specific kind of network model, and how you can use this information to better tune your system.
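As a small preview of that packet dissection, the sketch below builds a 20-byte IPv4 header by hand and then unpacks it field by field. All the field values (addresses, identification, TTL) are invented for the example; a real packet would come off the wire via a capture tool rather than `struct.pack`:

```python
import socket
import struct

# Hand-built 20-byte IPv4 header, purely for illustration.
sample = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,                      # version 4, header length 5 words
    0,                                 # type of service
    40,                                # total length (header + payload)
    0x1C46,                            # identification (arbitrary)
    0x4000,                            # flags + fragment offset (don't fragment)
    64,                                # time to live
    6,                                 # protocol (6 = TCP)
    0,                                 # header checksum (left zero here)
    socket.inet_aton("192.168.0.1"),   # source address (example value)
    socket.inet_aton("192.168.0.99"),  # destination address (example value)
)

ver_ihl, tos, length, ident, flags, ttl, proto, cksum, src, dst = \
    struct.unpack("!BBHHHBBH4s4s", sample)

print("version:", ver_ihl >> 4)
print("protocol:", proto)  # 6 means a TCP segment rides inside this packet
print("route:", socket.inet_ntoa(src), "->", socket.inet_ntoa(dst))
```

The protocol field is the layering in miniature: the IP header carries a number telling the receiving stack which upper-layer protocol (TCP, UDP, and so on) should get the payload next.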