Microsoft Windows 2000 Performance Tuning Technical Reference

by John Paul Mueller, CNE, and Irfan Chaudhry

This reference will help Windows 2000 administrators, business managers, and technical professionals understand Windows 2000 capacity planning and performance tuning issues in order to optimize the performance of computing resources while reducing the total cost of ownership. This Technical Reference offers ready advice on how to implement, configure, and run Windows 2000 Server-based systems for faster, more stable, and more efficient performance. It covers issues such as optimization and replication of the Active Directory service across enterprise networks, domains in LAN and WAN networks, and Web hosting — making it a comprehensive roadmap to Windows 2000 performance tuning.

Editorial Reviews

Informs experienced administrators of ways to optimize Windows 2000 Server to enhance software, hardware, and network performance. The book begins with the Windows 2000 kernel and its processes, threads, and memory management. Other chapters show how to use System Monitor and Network Monitor to diagnose processor, memory, and disk problems, and how to take advantage of Active Directory services and performance-tuning tools. Annotation c. Book News, Inc., Portland, OR (booknews.com)

Product Details

Publisher: Microsoft Press
Publication date:
Series: Microsoft Technical Series
Product dimensions: 7.57(w) x 9.53(h) x 1.79(d)
Age Range: 13 Years

Read an Excerpt

Chapter 8: Network Problems

The network presents more challenges for performance monitoring than any other part of your system. The reason is simple: you can't monitor the network's performance meaningfully unless the amount of traffic is at normal levels. Yet having traffic at normal levels means you must consider the interactions of multiple machines while monitoring performance. In addition, the monitoring you perform today remains valid only as long as the network configuration remains unchanged. So the difficult process of monitoring a network for specific problems must be repeated on a regular basis.

Although network monitoring is difficult, optimizing network performance is even harder. Any change you make to one machine normally alters the performance characteristics of the network as a whole and of each machine on it. For example, adding a higher-performance network interface card (NIC) to one workstation may adversely affect the performance of other machines on the network that have lower-performance NICs installed. Even something as simple as a cable change can affect network performance, because cable length affects network timing and, therefore, the rate at which data moves on the cable.

In short, the network is the most difficult part of the tuning process. You not only have the operating system and the local hardware to worry about, but also other machines to consider. That's why this chapter is so important to the optimization of your system as a whole. The ability to communicate with other users is what makes the network popular, while the interactions and complexity of the network environment are the stuff of nightmares for the network administrator. This chapter seeks to reduce the complexity of the network performance monitoring and optimization equation.

The first section of the chapter, "An Overview of Network Bottleneck Sources," summarizes some of the most important network problems that you encounter. We'll divide this conversation into four major components: the operating system, the local machine, remote machines, and other sources. These four major sources of network bottlenecks are the first place to look for performance problems on the network.

One of the most difficult problems to assess is the effect of network topology on network performance. This topic is the focus of the second section, "Overview of Network Topology Limitations." You can tune the local machine's hardware and use every operating system performance trick in the book, and still not reach your network performance goals if the network topology is incapable of producing the desired results. For example, anyone who thinks they get the full 10 Mb/s of bandwidth from an Ethernet network is in for a surprise. A host of factors makes it impossible to realize the full performance benefits of any network topology. In fact, this is such a significant problem that we spend time comparing the theoretical performance potential of each topology with its real-world performance potential.
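To see why the full 10 Mb/s is out of reach, consider the per-frame overhead alone. The sketch below uses the standard Ethernet framing figures; CSMA/CD contention and collisions would reduce the numbers further, so treat it as an upper bound, not a measurement:

```python
# Back-of-the-envelope: why 10 Mb/s Ethernet never delivers 10 Mb/s of
# user data. Figures are the standard Ethernet per-frame overheads.
PREAMBLE = 8   # bytes, including the start-of-frame delimiter
HEADER = 14    # destination address, source address, type/length
FCS = 4        # frame check sequence (trailer)
GAP = 12       # mandatory interframe gap, in byte times

def ethernet_efficiency(payload):
    """Fraction of wire time that carries payload for one frame."""
    wire_bytes = PREAMBLE + HEADER + FCS + GAP + payload
    return payload / wire_bytes

# Best case: maximum 1500-byte payload
print(round(ethernet_efficiency(1500), 3))  # 0.975
# Worst case: minimum 46-byte payload
print(round(ethernet_efficiency(46), 3))    # 0.548
```

Even before any contention, small frames waste almost half the wire, which is one reason measured throughput falls so far short of the rated bandwidth.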

The third section, "Understanding Network Component Interactions," looks at how various network elements can work together to improve performance, or conflict to degrade network performance. The whole idea of interactions causes some network administrators to dismiss this area of tuning as too difficult to manage, especially on large networks. However, the gains or losses a network can encounter due to interactions tend to dwarf other areas of network optimization. You can't afford to ignore interactions—they must be managed to optimize data throughput. Fortunately, there are ways to make the interaction picture easier to see and, more importantly, manage.

In the fourth section, "User-Oriented Network Bottleneck Solutions," we look at how the user affects network performance. This is one of those areas where you can't predict the results because user-oriented solutions depend on the cooperation of the user to work. For example, what happens to network performance if you have 50 people using a word processor all day and they unconsciously hit the Save button every few seconds? What you end up with is a lot of unnecessary network traffic because of a "nervous twitch" that can be avoided. The problem from the network administrator's perspective is finding out the source of this nervous twitch, and then finding a remedy for the situation that the user is willing to try. A solution for this kind of problem can be as easy as setting the word processor to automatically save at given intervals, and then demonstrating that the feature does indeed work.
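To put the "nervous twitch" example in rough numbers, compare a save every 30 seconds with a 10-minute autosave for 50 users. All of the figures below (document size, workday length) are hypothetical assumptions chosen only to illustrate the ratio:

```python
# Hypothetical figures: the network cost of compulsive saving versus a
# timed autosave, for 50 word-processor users over an 8-hour day.
USERS = 50
DOC_BYTES = 200 * 1024  # assume a 200 KB document per user
HOURS = 8

def daily_save_traffic(saves_per_hour):
    """Total bytes all users push across the wire per workday."""
    return USERS * DOC_BYTES * saves_per_hour * HOURS

nervous = daily_save_traffic(120)  # a save every 30 seconds
autosave = daily_save_traffic(6)   # an autosave every 10 minutes
print(nervous // autosave)         # the twitch generates 20x the traffic
```

Whatever the actual document size, the ratio holds: curing the twitch cuts that class of traffic by a factor of twenty.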

The user isn't the only source of potential network performance enhancements. Given the state of the art, you'll find that hardware is the most common solution to network performance problems. That's the topic of the fifth section, "Hardware-Oriented Network Bottleneck Solutions." You may find that your network is suffering from too many users on one network segment. Perhaps the answer to a network performance problem isn't in the operating system or the user, but in creating a new network segment so the users have the additional bandwidth required to get their work done. There are also other issues to consider, such as the quality of the hardware you use, the kind of drivers, and the way the hardware and drivers are configured. All these items impact the way the network performs.

The sixth, and final, section, "Software-Oriented Network Bottleneck Solutions," contains a discussion of the software elements you need to consider when it comes to network performance. This section looks at two major areas of the network: the local operating system and the applications designed to use the network. Both areas require tuning at some level. In some cases, tuning takes the form of software options. In other cases, you find that wise use of network resources is the answer to performance problems.

Overview of Network Bottleneck Sources

The purpose of a network is to allow users to share both data and peripheral devices. In short, the use of a network is supposed to make users more efficient while reducing the cost of performing the work. Like most things in life, however, the benefits of using a network don't come free. Users constantly seek new ways to get the benefits of networks without sharing the data they create with others. Network bottlenecks cause performance reductions until a company purchases more hardware, which, in turn, reduces the cost savings of using the network. In short, for every benefit you can gain from a network, there are potential problems that reduce or even eliminate the effect of using the network in the first place.

Real World Data sharing on a network is important from several performance perspectives. Consider what happens when a user hides data to maintain control of it. If other users also follow this practice, server performance can suffer because users will make redundant data requests from different areas of the server hard disk drive. The first performance problem is the loss of hard disk drive space that could be used for temporary files and virtual memory. The second is that the hard disk drive cache won't work as it normally would to enhance server performance. When users make shared data requests, the first request moves the data from the hard drive to server memory. Subsequent requests use the cached copy of the data, which results in a performance gain on the server because reading from memory is faster than reading from the hard drive. In addition, caching multiple copies of the same data from different locations on the hard drive wastes server memory. The third performance penalty is on the network. Every time a data modification occurs, network traffic increases because every user has to modify his or her copy of the data separately, rather than make one change to a single master document. In short, both network and server performance suffer when users fail to share data and treat the network drive as an extension of their local drive.
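The caching effect described in the sidebar can be put in rough numbers. The timings below are illustrative assumptions, not measurements; only the shape of the result matters:

```python
# Rough model of the shared-data cache effect: one shared copy is read
# from disk once and then served from RAM, while private copies each
# miss the cache. Timings are assumed values for illustration only.
DISK_MS = 10.0   # assumed time to read the data from the hard drive
CACHE_MS = 0.1   # assumed time to serve the same data from memory

def total_time(users, shared):
    """Total server time (ms) for `users` requests for the same data."""
    if shared:
        # one disk read; every later request hits the cache
        return DISK_MS + (users - 1) * CACHE_MS
    # each private copy lives elsewhere on disk, so every request
    # goes back to the drive
    return users * DISK_MS

print(round(total_time(50, shared=True), 1))   # 14.9
print(round(total_time(50, shared=False), 1))  # 500.0
</```

Under these assumed timings, hoarding the same data in fifty private copies costs the server more than thirty times the work of serving one shared copy.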

There isn't much you can do about certain network performance inhibitors, so we don't even discuss them in this chapter. Users adamantly protect their right to hide data—no amount of training changes that stance. Topology limitations in effect today are unlikely to disappear tomorrow, no matter how much you'd like to get rid of them. However, other network performance inhibitors are actually very easy to remedy and result in massive savings of both time and effort for the network administrator. Not only does the administrator benefit, but also the user. A network that performs well can make users feel as if they're accessing local resources, when, in fact, those resources reside on another machine and may be physically located in another building.

The following four sections don't provide an in-depth view of network bottlenecks, but they do give you an overview of the kinds of problems you'll run into. Sometimes classifying a network bottleneck is the most difficult problem of all because the problem looks like it can be part of several major subsystems. These sections help you reduce the complexity of the problem by breaking it down to one of four major network areas: operating system, local machine (both hardware and software), remote machine (both hardware and software), and other (like users). The purpose of this section is to make network problems easier to resolve by making them easier to see. (You can't easily fix what you can't see or at least understand.)

Operating System Sources

The first aspect you need to understand about Microsoft Windows 2000 is that there are several layers of network-specific software. An application doesn't send data directly to the NIC, and then through the NIC to another machine. The data goes through several transformation layers before it becomes the packet that eventually gets transferred to another node on the network. Once the data arrives at the other node (possibly another machine or a peripheral device such as a printer), several additional layers interpret the packet and make it suitable for use on the remote node. The actual number of layers the data encounters depends on the protocol used to transmit the data and various operating system options such as data encryption.
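These transformation layers can be pictured as successive wrappers around the application data. The header sizes below are the common minimums for a TCP packet inside an IPv4 datagram inside an Ethernet frame; this is a hypothetical illustration, since real packets vary with options and with the protocol in use:

```python
# A toy illustration of layering: each layer wraps the data in its own
# header (and, for Ethernet, a trailer) before handing it down the stack.
LAYERS = [
    ("TCP", 20),       # minimum transport header
    ("IP", 20),        # minimum IPv4 network header
    ("Ethernet", 18),  # frame header (14) plus FCS trailer (4)
]

def encapsulate(app_bytes):
    """Return the on-wire frame size after every layer adds overhead."""
    size = app_bytes
    for _name, overhead in LAYERS:
        size += overhead
    return size

print(encapsulate(100))  # 158 bytes on the wire for 100 bytes of data
```

The point is not the exact figures but the structure: each layer handles only its own wrapper, which is what makes the layers tunable independently.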

The best way to view a protocol is as a set of rules. Think of a diplomatic situation between two countries. Two diplomats (the operating system) interact in a specific way based on a treaty (the protocol). These diplomats represent the countries (the physical machine) and their interests (the application software). In short, you can view a protocol as the rules of engagement and the language used to communicate between two machines.

So, where does the operating system itself come into play? The operating system implements the various layers that are mandated by various networking standards. For example, the TCP protocol has requirements that an operating system must meet to ensure interoperability with other operating systems that also implement TCP. The same holds true with IP and other protocols you may use in setting up network communications. The idea is that all the layers contribute toward one goal—network communication—but that each layer requires separate handling by the operating system to ensure modularity.

Since each layer in the networking model is independent, you can mix and match protocols that are compatible within a networking model. In fact, you'll find that Windows 2000 doesn't always use the familiar TCP/IP protocol pair to communicate across the network. Sometimes it uses UDP/IP instead. The protocol used to communicate between nodes must be agreed on in advance and is often dictated by the application in use at the time.
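The operating system exposes these protocol pairings through its socket interface. As a modern illustration (Python, rather than anything specific to Windows 2000), here is a minimal loopback exchange over UDP/IP, the connectionless alternative to TCP/IP mentioned above:

```python
# A minimal UDP/IP exchange over the loopback interface. Unlike TCP,
# UDP needs no connection setup: the sender just addresses a datagram.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))  # let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

data, _addr = receiver.recvfrom(1024)
print(data)  # b'hello'

sender.close()
receiver.close()
```

Swapping `SOCK_DGRAM` for `SOCK_STREAM` selects the TCP/IP pairing instead, which is exactly the kind of advance agreement between nodes the text describes.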

The independence of each network layer means you can tune each layer within the confines of the protocol specification. The interdependence of each layer, however, means a change at one layer, of necessity, affects all the other layers in the network model. In short, you need to consider how a performance change in one layer will affect the other layers around it. As you can see, the question of tuning a network model that's implemented by Windows 2000 isn't an easy issue to discuss. You need to consider each layer not only individually, but also as part of the greater whole.

The following sections help you better understand how this layering works. First, we look at the network layers from a conceptual point of view. We talk about how the layers fit together from a very generic, standards-oriented, perspective. The second section covers a specific network model implementation. We actually tear a network packet (the envelope used to transfer data from one node to another) apart to see what makes it tick. As a result of reading these two sections, you should walk away with a better idea of how the layering of protocols to create a specific kind of network model works and how you can use this information to better tune your system.

Meet the Author

John Mueller is a freelance author and technical editor. He has writing in his blood, having produced 45 books and almost 200 articles to date. The topics range from networking to artificial intelligence and from database management to heads-down programming. His current books include a COM+ programmer's guide and a Windows 2000 Web server handbook. His technical editing skills have helped more than 23 authors refine the content of their manuscripts, some of which are certification related. In addition to book projects, John has provided technical editing services to both Data Based Advisor and Coast Compute magazines. A recognized authority on computer industry certifications, he has also contributed certification-related articles to magazines such as Certified Professional Magazine.

Irfan Chaudhry has been working as a consultant for the past several years with clients of various sizes, including Fortune 500 companies and nationally recognized legal firms. The projects he has worked on range from LAN/WAN environment design to migration projects involving NT, Unix, and NetWare. He has been involved with several Internet-based projects, including implementation of e-commerce Web sites and design of Web-based application environments. He has his MCSE and is currently working on his MCSD. Irfan has written and been published previously on such topics as Windows NT, NetWare, IIS, Microsoft SQL Server, SQL Server programming, and Windows 2000. He works as a senior network engineer at Affiliated Distributors. If Irfan's not in front of a computer, you can find him at Kwon's Black Belt Academy training for his 2nd Degree Black Belt. He is always thankful for his loving wife, Noreen, and her consistent support.
