Driving the Power of AIX: Performance Tuning on IBM Power

by Ken Milberg

Paperback

$54.95

Overview

A concise reference for IT professionals, this book goes beyond simple rules of thumb, laying out best practices and strategies for a solid tuning methodology. Tips based on years of experience from an AIX tuning master show specific steps for monitoring and tuning CPU, virtual memory, disk I/O, and network components. Also offering techniques for tuning Oracle and Linux workloads that run on IBM Power systems, as well as for the new AIX 6.1, this manual discusses what tools are available, how best to use them to collect historical data, and how to analyze trends and results. The only comprehensive, up-to-date guide on AIX tuning, this is a must-have for administrators, systems engineers, architects, and capacity planners.

Product Details

ISBN-13: 9781583470985
Publisher: MC Press, LLC
Publication date: 11/01/2009
Pages: 256
Product dimensions: 7.00(w) x 8.90(h) x 0.60(d)

About the Author

Ken Milberg is the president and managing consultant of PowerTCO Solutions, a New York–based IBM business partner. He is a technical editor for IBM Systems Magazine, Power Systems edition, as well as a frequent contributor to IBM developerWorks. He has consulted with many global Fortune 500 companies, is a PMI-certified Project Management Professional and an IBM Certified Advanced Technical Expert, and holds technical certifications in Solaris and HP-UX. He lives in New York City.

Read an Excerpt

CHAPTER 1

Performance Tuning Methodology

Performance tuning is a never-ending process, and an important concept to understand is that it is not unusual to fix one bottleneck only to create another. That's part of what makes us AIX administrators so indispensable! The following time-tested tuning and analysis methodology will aid you throughout the tuning lifecycle:

1. Establish a baseline

2. Stress test and monitor

3. Identify the bottleneck

4. Tune

5. Repeat (starting with step 2)

Step 1. Establishing a Baseline

Well before you ever tune a system, it is imperative to establish a baseline. The baseline is a snapshot of what the system looks like when you first put it into production, while it is performing at levels the business considers acceptable. The baseline should not only capture performance statistics but also document the actual configuration of the system (amount of memory, CPU, and disk). It's important to document the system configuration because otherwise you won't be comparing apples with apples when the time comes to compare the baseline against your current configuration. This step is particularly relevant in our new partitioned world, where you can dynamically add or remove CPU resources at a moment's notice.
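
To make this concrete, here is a minimal sketch of a baseline-capture script using standard AIX commands; the output directory and file names are illustrative assumptions, not a prescription from the book:

    #!/bin/ksh
    # Capture a configuration baseline to compare against later
    DIR=/var/perf/baseline/$(date +%Y%m%d)   # hypothetical location
    mkdir -p $DIR
    oslevel -s    > $DIR/oslevel.out         # AIX release and service pack
    prtconf       > $DIR/prtconf.out         # memory, CPU, and firmware summary
    lparstat -i   > $DIR/lparstat.out        # partition (LPAR) configuration
    lsps -a       > $DIR/paging.out          # paging-space layout
    lspv          > $DIR/disks.out           # physical volumes

Rerunning the same script each time you revisit the baseline keeps the apples-with-apples comparison honest.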

To come up with a proper baseline, you must first identify the appropriate tools to use for monitoring. Some tools are more suited to immediate gratification, while others are geared more toward historical trending and analysis. Tools such as nmon and topas, which we'll discuss in detail in Chapter 5, can serve both purposes.
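
For the historical side, a minimal sketch of putting nmon into recording mode so it writes periodic snapshots to a file for later trending (the interval and sample count are illustrative, not a recommendation from the book):

    # One snapshot every 60 seconds, 1,440 samples (24 hours),
    # written to a .nmon file for later analysis
    nmon -f -s 60 -c 1440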

Once you've identified your monitoring tools, you need to gather your statistics and performance measurements. This information helps you define what an acceptable level of performance is for a given system. You need to know what a well-performing system looks like before you start receiving calls complaining about performance. You should also work with the appropriate application and functional teams to define exactly what a well-behaved system is. You can then translate that definition into a service level agreement (SLA), which the customer signs off on.

Step 2. Stress Testing and Monitoring

This step is where you monitor the system at peak workloads and during problem periods. Stressing your system, preferably in a controlled environment, can help you make the right diagnosis — an essential part of performance tuning. Is your bottleneck really a CPU bottleneck, or is it related more to memory or I/O?

It's also important not to fall too much in love with any one utility. I like to use several monitoring tools here to help validate my findings. For example, I might use an interactive tool (e.g., vmstat) and then a data-capture tool (nmon) to help me track data historically.
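
A sketch of that pairing, with arbitrary interval and sample counts:

    # Interactive view: one line every 5 seconds, 10 samples; watch the
    # run queue (r) and the us/sy/id/wa CPU columns for sustained saturation
    vmstat 5 10
    # Meanwhile, record ten minutes at the same interval for the historical record
    nmon -f -s 5 -c 120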

The monitoring step is critical because you cannot effectively tune anything without having an accurate historical record of what has been going on in your system, particularly during periods of stress. Larger organizations that recognize the importance of this process even have their own stress-testing teams, which work together with application and infrastructure teams to test new deployments before putting them into production.

It's also essential here to establish performance policies for the system. You can determine the measures that are relevant during monitoring, analyze them historically, and then examine them further during stress testing.

Step 3. Identifying the Bottleneck

The objective of stressing and monitoring the system is to determine the bottleneck. Ask any doctor: you cannot provide the correct medicine (the tuning) without the proper diagnosis. If the system is in fact CPU-bound, you can run additional tools, such as curt, ps, splat, tprof, and trace (we'll discuss these utilities later), to further identify the actual processes that are causing the bottleneck.
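
As a minimal illustration of two of those tools (the sort column and the 60-second window are illustrative choices, and you should verify the tprof flags against your AIX level):

    # Top CPU consumers right now; %CPU is the third column of ps aux on AIX
    ps aux | sort -rnk 3 | head
    # Profile the whole system for 60 seconds, attributing CPU ticks to processes
    tprof -skex sleep 60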

It's possible that your system might in fact be memory- or I/O-bound and not CPU-bound. Fixing one bottleneck, such as a memory problem, can actually cause another, such as a CPU bottleneck, because the fix now lets the CPU perform at its optimum capacity; before the fix, the system simply didn't have the resources to drive the CPU that hard. I've seen this situation quite often, and it isn't necessarily a bad thing. Quite the opposite: it ultimately helps you isolate all your bottlenecks and tune the system to its max.

You'll find that monitoring and tuning systems is quite a dynamic process and not always predictable. That's what makes performance tuning as challenging as it is.

Step 4. Tuning

Once you've identified the bottleneck, it's time to tune it. For a CPU bottleneck, that usually means one of four solutions:

Balancing system workload — This solution involves running processes at different intervals to make more efficient use of the 24-hour day. More often than not, this is how CPU bottlenecks get resolved.

Tuning the scheduler — Tuning the scheduler using nice or renice helps you assign different priorities to running processes to prevent CPU hogs.

Tuning scheduler parameters — Adjust scheduler parameters to fine-tune priority formulas. For example, you can use the schedo command to change the amount of time the operating system lets a given process run before calling the dispatcher to choose another (see the command sketches following this list).

Increasing resources — Add CPUs or, in a virtualized environment, reconfigure logical partitions (LPARs) to boost available resources. This solution might include uncapping partitions or adding more virtual processors to existing partitions. Virtualizing the partitioned environment appropriately can help increase physical resource utilization, decrease CPU bottlenecks on specific LPARs, and reduce the expense of idle capacity in LPARs that are not "breathing heavy."
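
Here are a few hedged command sketches for the scheduler-related options above; the program path, PID, and tunable value are purely illustrative, and schedo -L will list the tunables your AIX level actually supports:

    # Start a batch job at a lower priority (program path is hypothetical)
    nice -n 10 /usr/local/bin/nightly_report
    # Demote an already-running CPU hog (PID 12345 is made up)
    renice -n 15 -p 12345
    # Lengthen the dispatcher timeslice (value is in 10-ms clock ticks)
    schedo -o timeslice=2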

Step 5. Repeat

After tuning, you need to go through the process again, starting with step 2, stress testing and monitoring. Only by repeating your tests and consistently monitoring your systems can you determine whether your tuning has made an impact. I know some administrators who simply tune certain parameters based on best practices for a specific application and then move on. That is the worst thing you can do. For one thing, what works in some environments might not work in yours. More important, how do you really know whether what you've tuned has helped the bottleneck unless you look at the data?

To reiterate, AIX performance tuning is a dynamic and iterative process, and to achieve real success, you need to consistently monitor your systems, which you can only do once you've established a baseline and an SLA. The bottom line is, if you can't define the behavior of a system that runs well, how will you define the behavior of a system that doesn't?

CHAPTER 2

Introduction to AIX

AIX — which stands for Advanced Interactive eXecutive — is a POSIX-compliant and X/Open-certified Unix operating system introduced by IBM in 1986. While AIX is based on UNIX System V, it has roots in the Berkeley Software Distribution (BSD) version of Unix as well. Today, AIX has an abundance of both flavors (you can go with chocolate one day and vanilla the next), providing another reason for its popularity.

Unix

Since its introduction in 1969 and further development through the mid-1970s, Unix has evolved into one of the most successful operating systems to date. The roots of this operating system go as far back as the mid-1960s, when AT&T's Bell Labs partnered with General Electric and the Massachusetts Institute of Technology (MIT) to develop a multi-user operating system called Multics (which stood for Multiplexed Information and Computing Service). Dennis Ritchie and Ken Thompson worked on this project until AT&T withdrew from it. The two eventually created another operating system in an effort to port a computer game that simulated space travel. They did so on a Digital Equipment Corporation (DEC) PDP-7 computer, and they named the new operating system Unics (for Uniplexed Information and Computing Service). Somewhere along the way, "Unics" evolved into "Unix."

AIX

AIX was the first operating system to introduce the idea of a journaling file system, an advance that enabled fast boot times by avoiding the need to perform file system checking (fsck) for disks on reboot. AIX also has a strong, built-in Logical Volume Manager (LVM), introduced as early as 1990, which helps to partition and administer groups of disks.

Another important innovation was the introduction of shared libraries, which avoided the need for an application to statically link to the libraries it used. The resulting smaller binaries used less of the hardware RAM to run and required less disk space for installation.

IBM ported AIX to its RS/6000 platform of products in 1989. The release of AIX Version 3 coincided with the announcement of the first RS/6000 models. At the time, these systems were considered unique in that they not only outperformed all other machines in integer compute performance but also beat the competition by a factor of 10 in floating-point performance.

Version 4, introduced in 1994, added support for symmetric multiprocessing (SMP) with the first RS/6000 SMP servers. The operating system evolved until 1999, when AIX 4.3.3 introduced workload management (WLM). In May 2001, IBM unveiled AIX 5L (the L stands for "Linux affinity"), coinciding with the release of its POWER4 servers, which provided for the logical partitioning of servers. In October of the following year, IBM announced dynamic logical partitioning (DLPAR) with AIX 5.2.

The latest update to AIX 5L, AIX 5.3 (introduced in August 2004), provided innovative new features for virtualization, security, reliability, systems management, and administration. Most important, AIX 5.3 fully supported the Advanced Power Virtualization (APV) capabilities of the POWER5 architecture, including micro-partitioning, virtual I/O servers, and symmetric multithreading (SMT). Arguably, this was the most important release of AIX in more than a decade, and it remains the most popular (as of this writing). That is why we'll primarily focus on AIX 5.3 for the purposes of this book.

IBM announced a beta of AIX 6 in May 2007 and formally introduced AIX 6.1 in November 2007. Major innovations of AIX 6.1 include workload partitions (WPARs), which are similar to Solaris containers, and Live Application Mobility (not available with Solaris), which lets you move these partitions between systems without application downtime. Chapter 16 discusses performance monitoring and tuning on AIX 6.1.

AIX Market Share

AIX celebrated its 20th anniversary in January 2006, and it appears to have an extremely bright future in the Unix space. IBM's AIX has been the only Unix that increased its market share through the years, and IBM continues to own the market space for Unix servers. Most of the Unix growth at this time stems from IBM.

AIX has benefited from the many hardware innovations that the POWER platform has introduced through the years, and it continues to do so. IBM has also made good decisions around its Linux strategy. Linux, supported natively on the POWER5, more or less complements, rather than competes with, AIX on the POWER architecture.

CHAPTER 3

Introduction to POWER Architecture

The "POWER" in POWER architecture stands for Power Optimization with Enhanced RISC, and it is the processor used by IBM's midrange Unix offering, AIX. POWER is a descendant of IBM's 801 CPU and is a second-generation Reduced Instruction Set Computer (RISC) processor. It was introduced in 1990 to support Unix RS/6000 systems.

The POWER architecture incorporated many characteristics that were already common in most RISC architectures. The instructions were fixed in length (four bytes) and had consistent formats. What made the architecture unique among existing RISC architectures was that it was functionally partitioned, separating the functions of program flow control, fixed-point computation, and floating-point computation.

The objective of most RISC architectures was to be extremely simple so that implementations would have an extremely short cycle time. This approach would result in processors that could execute instructions at the fastest possible clock rate. The designers of the POWER architecture instead chose to minimize the total time spent to complete a task, which is the product of three components: the path length (the number of instructions executed), the number of cycles needed to complete an instruction, and the cycle time.

During the early 1990s, five different RISC architectures actively competed with one another. IBM partnered with Apple and Motorola to come up with a common architecture that would meet the standards of an alliance they would form. The first design, IBM's 801, had been very simple, with all its instructions completing in one cycle, but it lacked floating-point and parallel processing capability. The POWER architecture was a real attempt to correct these shortcomings. It consisted of more than 100 instructions and was known as a complex RISC system.

The POWER1 chip consisted of 800,000 transistors and was functionally partitioned. It had separate floating-point registers and could scale from low-end workstations to the highest end. The first implementation actually spread the processor across several chips on a single motherboard but was later refined to a single RISC chip with more than a million transistors. Some of you may be surprised to learn that this chip was actually used as the CPU for the Mars Pathfinder mission.

The POWER2 chip was released in 1993 and was the standard-bearer for nearly five years. It contained 15 million transistors per chip. It also added a second floating-point unit (FPU) and extra cache. This chip was known for powering the IBM Deep Blue supercomputer that would beat Garry Kasparov at chess in 1997. (Joefon Jann, whose team developed this system, wrote the Foreword to this book.)

The POWER3 architecture was the first 64-bit symmetric multiprocessor. Designed to work on both scientific and technical computer applications, it included a data prefetch engine, dual floating-point execution units, and a nonblocked interleaved data cache. It used copper interconnect, which delivered double the performance for the same price.

The POWER4 (code-named Regatta) architecture, released in 2001, featured 174 million transistors per processor. It incorporated 0.18-micron copper and silicon-on-insulator (SOI) technology. Each processor had two 64-bit, 1 GHz PowerPC cores and could keep as many as 200 instructions in flight simultaneously. POWER4 became the driving force behind the IBM Regatta servers, which supported logical partitioning. The POWER4 processor supported logical partitioning with a new privileged processor state called POWER Hypervisor mode.

As wonderful as the Regattas were, if you purchased one shortly before the POWER5 systems were released, you were not a happy camper.

POWER5

IBM's POWER5 architecture, introduced in 2003, contained 276 million transistors per processor. It was based on the 130 nm copper/silicon-on-insulator (SOI) process and featured chip multiprocessing, a larger cache, a memory controller on the chip, simultaneous multithreading (SMT), advanced power management, and improved Hypervisor technology. The POWER5 was built to allow up to 256 logical partitions and was available on IBM's System i and System p servers. Each POWER5 core is designed to support SMT and single-threaded modes. The software (the Hypervisor) switches the processor from SMT to single-threaded mode.

Some of the objectives of the POWER5 were

• To maintain binary compatibility with older POWER4 systems

• To enhance and extend symmetric multiprocessing (SMP) scalability

• To improve performance and reliability

• To provide additional server flexibility

• To improve power efficiency

• To provide virtualization capabilities

As a result of its dual-core design and support for SMT, one POWER5 chip appears as a four-way microprocessor to the operating system. Processors using SMT can issue multiple instructions from different code paths during a single cycle; instructions from both hardware threads can be issued in the same cycle.

Figure 3.1 depicts the Hypervisor, without which there is no virtualization.

As you examine this architecture, you can see that the layers above the POWER Hypervisor are similar, but the contents are characterized by the operating system. The layers of code supporting AIX and Linux consist of system firmware and Run-Time Abstraction Services (RTAS). Open Firmware and RTAS are both platform-specific firmware, and both are tailored by the platform developer to manipulate the specific platform hardware.

In the POWER5 processor, IBM introduced further design enhancements that enabled the sharing of processors by multiple partitions. The POWER Hypervisor Decrementer (HDEC) is a new hardware facility in the POWER5 design that is programmed to provide the POWER Hypervisor with a timed interrupt independent of partition activity. It was the POWER5 architecture, along with the extraordinary virtualization capabilities of Advanced Power Virtualization (APV), that really paved the way for server consolidation around IBM POWER systems. (IBM has since rebranded Advanced Power Virtualization as PowerVM.)

(Continues…)


Excerpted from "Driving the Power of AIX" by Ken Milberg.
Copyright © 2009 Ken Milberg.
Excerpted by permission of MC Press.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents

Foreword
Preface
SECTION I: INTRODUCTION
Chapter 1: Performance Tuning Methodology
Chapter 2: Introduction to AIX
Chapter 3: Introduction to POWER Architecture
SECTION II: CPU
Chapter 4: CPU: Introduction
Chapter 5: CPU: Monitoring
Chapter 6: CPU: Tuning
SECTION III: MEMORY
Chapter 7: Memory: Introduction
Chapter 8: Memory: Monitoring
Chapter 9: Memory: Tuning
SECTION IV: DISK I/O
Chapter 10: Disk I/O: Introduction
Chapter 11: Disk I/O: Monitoring
Chapter 12: Disk I/O: Tuning
SECTION V: NETWORK I/O
Chapter 13: Network I/O: Introduction
Chapter 14: Network I/O: Monitoring
Chapter 15: Network I/O: Tuning
SECTION VI: BONUS TOPICS
Chapter 16: AIX 6.1
Chapter 17: Tuning AIX for Oracle
Chapter 18: Linux on Power
