Covering topics from analysis tools to kernel tuning to capacity management, this book offers a single point of reference for what you need to know. Anyone who has ever had to speed up existing operations or project usage patterns for future loads knows that tracking down the relevant information can be a difficult task. That's why this book was written: it pulls together all of this knowledge, saving countless hours of what might otherwise be wasted research time.
About the Author
Jason Fink has been involved with computers and electronics since 1984, when he blew up a power supply he tried to "improve" in high school. Since then he has worked on mainframe systems, Unix clustered LANs, and a variety of other interesting networks and systems. He contributes to the Open Source community in various ways.
Matthew (Matt) Sherer has been fixated with computers for years, starting with a Laser128 back when Apple-compatible machines were all the rage. This fixation resulted in a degree in Computer Science from Juniata College in Huntingdon, PA, and a discovery of UNIX and Linux along the way. Upon graduation, he spent some time in the government contracting world before succumbing to the lure of commercial work. He spends a large portion of his time following all things pertaining to Open Source and Free Software, attempting to project their impact and to help ensure that they are chosen over more limiting solutions whenever possible. Matt can usually be found behind his monitors, either hacking or catching up on news about the latest code. If he's suspiciously not at his machine, chances are good that he's off traveling to some distant corner of the world.
Read an Excerpt
Chapter 1: Overview of Performance Tuning
What Is Performance Tuning?

In the simplest sense, performance tuning pretty much involves what it sounds like: making something (in the case of this book, a Linux-based Unix machine) run more efficiently. A phrase to remember here is "more efficient": later in this chapter, we take a good, hard look at real efficiency.
Performance tuning certainly sounds simple enough, but it has nuances that surprise many people. As an example, the definition of "fast" is almost completely dependent upon perspective. I briefly studied computer usability, and the first lesson I learned about speed where computers are concerned is this simple golden rule:
"It depends ...."
This rule has a strong role in performance evaluation. A simple example might be the difference in performance seen by a user on a remote network compared to a user on the local network (especially if there is a great deal of network traffic). A more complex example might arise when someone visiting a Web site that you maintain is running a client system far older than the one the site was designed for; in this case, performance certainly will be different.
When you think of performance tuning, many times your mind is immediately drawn to altering kernel subsystems or parameters to make the system perform better. Although those are actions that you might have to take, in many other situations, things as simple as repairing a SCSI cable can make a world of difference. (This actually happened to me on a larger enterprise system. I once had a system that had been up for nearly 200 days when it suddenly began developing disk I/O problems. My first step is almost always a visual inspection. Because the system had been up for so long, I had not really "looked at" the hardware for a while. I discovered that a SCSI connector's cable was loose and that some of the shielding had become unraveled. This caused a resistance problem on the cable and thus affected the performance. Luckily, I had a spare cable.)
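A failing or loose cable like this one usually leaves traces in the kernel log long before anyone spots it visually. As a rough sketch (the helper function and the error patterns here are my own illustration, not something from this book), a few lines of shell can scan a log for common SCSI/disk error messages:

```shell
# Hypothetical helper: scan a kernel log file for common disk/SCSI error patterns.
scan_for_io_errors() {
    grep -iE 'i/o error|scsi.*(abort|timeout|error)|bus reset' "$1"
}

# Demonstrate on a small sample log; on a live system you would feed it
# the output of dmesg or a file such as /var/log/messages instead.
cat > /tmp/sample_dmesg.txt <<'EOF'
scsi0: aborting command due to timeout
sd 0:0:0:0: [sda] Attached SCSI disk
end_request: I/O error, dev sda, sector 12345
EOF

scan_for_io_errors /tmp/sample_dmesg.txt
```

Only the timeout and I/O error lines match; routine attach messages are ignored. On a live system, something like `dmesg > /tmp/kernlog && scan_for_io_errors /tmp/kernlog` gives a quick first pass before reaching for the screwdriver.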
In reality, performance tuning is a process as a whole. It could also be said (arguably) that there are two types of performance tuning:

Proactive tuning: continuously planning and monitoring the system before problems appear

Reactive tuning: responding to an immediate performance problem
That is, the system administrator is in one of two modes: either carefully planning and monitoring the system continuously or trying to fix an immediate performance problem. Most administrators run in both modes. (These two rules pretty much apply to any job. Either you are dormant until called upon, or you are continually striving to make the process better. In performance tuning, this has a very special connotation because the general health of the system is very important.)
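In proactive mode, the single most useful habit is collecting a baseline while the system is healthy, so that "slow" has something to be measured against. Here is a minimal sketch (the log path and record format are arbitrary choices of mine, and /proc/loadavg is Linux-specific) that could be run from cron every few minutes:

```shell
# Append a timestamped load-average sample to a simple baseline log.
# /proc/loadavg begins with the 1-, 5-, and 15-minute load averages.
LOG=/tmp/load_baseline.log
echo "$(date '+%Y-%m-%d %H:%M:%S') $(cut -d' ' -f1-3 /proc/loadavg)" >> "$LOG"

# Show the most recent sample.
tail -n 1 "$LOG"
```

A crontab entry such as `*/5 * * * * /usr/local/bin/sample-load.sh` (the script path is hypothetical) would accumulate samples over time; comparing today's numbers against last month's is far more informative than a single reading taken in a panic.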
By its nature, performance tuning is also boundless. It is not necessarily constrained to a particular system on a network or to systems that have multiple-channel disk interfaces. Again, the problem might not be related to the core operating system at all.
Why Consider Tuning?

Well, aside from obvious reasons, such as a system that might be running incredibly slowly, the most important reason is to improve system performance and enhance efficiency. The rewards that can be gained from a well-tuned system are pretty clear. For example, an order-entry system can now process orders faster because some part of the system has been enhanced through performance tuning. This means that more entries can be pushed through, so production increases. In most organizations, this means savings of both time and money.
Performance tuning need not apply only to organizations, either. As a good example, a user might work on large programs that place heavy demands on the disk and processor subsystems. Trimming the kernel and enhancing the disk I/O subsystem can help the programmer speed up compile time.
The list of examples could go on forever, and there are very few good reasons why you should not tune a system. One such case might be a system administrator who is too inexperienced to tune the system safely. Other reasons might include uptime requirements, corporate policy, vendor support agreements, or other influences that are external to the sysadmin or the "shop" in general.
What Can I Tune?

In a nutshell, you can tune whatever you want. Most Linux performance tuning revolves around the kernel, with perhaps the biggest exception being aspects of the X Window System. As mentioned earlier, however, the kernel itself is not always the only thing that requires tuning. A few examples of tuning outside the kernel are changing parameters on the hardware itself, tuning the video driver within the X Window System, changing the way disks are organized, or simply changing the way a particular process is performed or engineered.
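"Changing the way a particular process is performed" can be as simple as lowering its scheduling priority so that it stops competing with interactive work. The nice and renice tools covered in Chapter 8 do exactly this; here is a brief sketch (the gzip job and PID shown in comments are placeholders):

```shell
# Niceness runs from -20 (most favorable to the process) to 19 (least favorable);
# unprivileged users may only raise niceness, that is, lower priority.
# A long, CPU-heavy batch job can be started at low priority, for example:
#
#   nice -n 19 gzip -9 /var/tmp/huge-logfile &
#
# and an already-running process can be adjusted by PID with renice:
#
#   renice 10 -p 12345
#
# Invoked with no command, nice prints the current niceness, which makes
# the adjustment easy to verify; from a normal shell (niceness 0), this
# reports 10:
nice -n 10 nice
```

Batch work demoted this way still finishes, but only the CPU cycles interactive users are not claiming get spent on it.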
The kernel is divided into parts known as subsystems. For the most part, these subsystems are all based on input and output (I/O). The main subsystems you will find yourself tuning are these:

The processor (CPU)

Memory (including virtual memory)

The disk I/O subsystem
That is not to say that these are the only subsystems (there's also the sound subsystem, for example), but the three mentioned here are normally the subsystems that a system administrator will need to troubleshoot in relation to the system itself. The networking subsystem is also of great interest to the administrator.
How these subsystems affect the user's experience and each other is very important. For example, one problem might mask, or hide, another. Suppose that the system appears to have heavy disk usage. However, upon closer examination, you discover that it is also doing an unusual amount of paging to the same disk(s). Obviously, this will result in heavier disk usage than under normal circumstances. That does not mean that the disk subsystem itself is at fault; it may mean that some sort of memory problem is the culprit rather than disk I/O. This is the essence of performance monitoring and tuning.
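On Linux, one quick way to test whether "heavy disk usage" is really swap traffic is to watch the kernel's own swap counters. This sketch reads the cumulative counters from /proc/vmstat (an interface present on 2.6 and later kernels; the per-interval si/so columns of the vmstat tool report the same activity):

```shell
# pswpin/pswpout count pages swapped in and out since boot (Linux-specific).
# If these counters climb while the disks look busy, a memory shortage may be
# masquerading as a disk I/O problem.
grep -E '^(pswpin|pswpout) ' /proc/vmstat

# For live per-interval numbers, watch the si/so columns of: vmstat 5
```

If the counters stay flat while disk activity remains high, the bottleneck is more likely genuine file I/O than paging, and the disk subsystem deserves the closer look.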
The kernel subsystems and their relations will be discussed in detail in Chapter 10, "Disk Configurations for Performance."
Defining Performance-Tuning Objectives

As stated earlier, the most obvious reason for performance tuning is to solve an immediate performance issue. However, it is generally a good idea to also have long-term performance objectives. The reason for this is simple: There is always room for improvement somewhere. You also might have uptime requirements: a system simply might have to work a little under par for a period of time until the system administrator can work on it.
This type of performance tuning falls under the proactive category of tuning. As a good case example, imagine that you want to upgrade BIND to a newer release in a networking architecture so that you can benefit from some enhancement. Obviously, you cannot just replace an organization's BIND services on a whim. Some degree of planning must take place before you take any actions.
Long-term objectives might be simple: Find a way to speed up tape drives so that backups take less time. Others might be incredibly complex, such as improving an entire network infrastructure, which is discussed more in Chapter 7, "Network Performance."
As you can see, performance tuning even in proactive mode can be very time-consuming yet extraordinarily rewarding. The benefits of performance tuning obviously include a faster system, but often a well-tuned system also is a bit more stable. As an example, a kernel that has only the drivers that a system needs is much less prone to error than a generic installation kernel....
Table of Contents
I. INTRODUCTION AND OVERVIEW.
1. Overview of Performance Tuning.
What Is Performance Tuning? Why Consider Tuning? What Can I Tune? Defining Performance-Tuning Objectives. Identifying Critical Resources. Minimizing Critical Resource Requirements. Reflecting Priorities in Resource Allocations. Methodology. Roles of the Performance Analyst. The Relationship Between Users and Performance. Summary.
2. Aspects of Performance Tuning.
Operating System Structure. Kernel Architecture. Virtual Memory Overview. Filesystem Caching. I/O Overview. NFS Performance. Methodology. Measurements. Interpretation and Analysis. Tuning or Upgrade? Risk Evaluation and Tuning. Conclusion.
II. PERFORMANCE TUNING TOOLS.
3. Popular Unix Performance-Monitoring Tools for Linux.
The Scope of These Tools and the Chapter. Interpreting Results and Other Notes. All-Purpose Tools. Benchmarking Your Disks with Bonnie. Other Tools. Some Network-Monitoring Tools. ntop. Summary.
4. Linux-Specific Tools.
The sysstat for Linux Distribution. ktop and gtop. Using the /proc Filesystem to Monitor System Activities. Other Free Utilities. Summary.
III. PERFORMANCE MONITORING TECHNIQUES.
5. Apparent and Nonapparent Bottlenecks.
There Is Always a Bottleneck. User Expectations. Performance Agreements. Tuning CPU Bottlenecks. Specific Clustering Solutions. Tuning CPU-Related Parameters in the Kernel. Tuning CPU-Related Parameters in Software. Memory Bottlenecks. Conclusion.
6. X Window Performance.
Analyzing the X Server's Performance. Measuring the Results. Tuning the X Server for Local Use. Tuning X Desktop Performance. Summary.
7. Network Performance.
Overview of Network-Performance Issues. Hardware Methods. Application Network Tuning. The Value of Knowing Your Domain. Tuning Samba. Tuning NFS. Tuning NIS. Making Kernel Changes to Improve Performance. Bonding. Enforced Bandwidth Limitations. New and Interesting Capabilities. High Availability/Load Balancing. Tools. Conclusion.
IV. TUNING AND PERFORMANCE MECHANICS.
8. Job Control.
Background Mode. The at Facilities. Using cron. nice and renice. Summary.
9. The Linux Kernel.
Why Alter the Kernel? Data-Presentation Areas. Network Manipulation. Virtual Memory. Recompiling the Kernel. Summary.
10. Disk Configurations for Performance.
Managing Disk Space. Disk I/O Utilization. Swap Strategies. Using RAID. Software RAID. Alternative Solutions. Summary.
11. Linux and Memory Management.
Determining Physical RAM Requirements. Swap Space. Swap Partitions Versus Swap Files. Advanced Topics. Summary.
12. Linux System Services.
A “Typical” inetd.conf File. The Internet Server inetd. TCP Wrappers. The rc.d Scripts. Summary.
V. CAPACITY PLANNING.
13. Thinking About Capacity Planning.
What Is Capacity Planning? Classic System Requirements. Application-Specific Requirements. Structured Approaches. Thinking Outside the Box. The Interrelationship of Performance Tuning and Capacity Planning. Planning May Preempt the Need for Tuning. Predictive Analysis Versus Actual Results. How to Evaluate Your Need for Capacity Planning. Summary.
14. Methods for Capacity Planning.
A Blueprint for Capacity Planning. General Principles. Software and Hardware Requirements. How to Develop a Structure. Ensuring Flexibility in the Future. Example. Summary.
VI. CASE STUDIES.
15. Case Studies: Web Server Performance.
Web Server Performance. A Classic Memory Problem. Increasing Network Latency. Another Memory Problem. Linux Virtual Server (LVS) Walkthrough. Summary.
16. Where to Go From Here.
World Wide Web Resources. Magazines, Journals, and Newsletters. Newsgroups and Mailing Lists. Summary.