DB2 10 for z/OS: The Smarter, Faster Way to Upgrade
Addressing the cost control and efficiency pressures of the current economic climate, this handbook offers solutions for DB2 for z/OS customers seeking competitive advantage through increased productivity and greater operational efficiency. This guide prepares businesses for DB2 updates and changes. Included in this book are reviews of performance improvements; methods to improve database security; and suggestions for better usability and interfacing with SQL and pureXML.

Product Details

ISBN-13: 9781583476109
Publisher: MC Press
Publication date: 11/01/2011
Sold by: Barnes & Noble
Format: eBook
Pages: 112
File size: 2 MB

About the Author

John Campbell is an IBM distinguished engineer and is one of IBM’s foremost authorities for implementing high-end database and transaction-processing applications. Cristian Molaro is an independent DB2 specialist and an IBM Gold Consultant. He was recognized as an IBM Champion in 2009, 2010, and 2011. Surekha Parekh is IBM’s World-wide Marketing Program Director for DB2 for z/OS. She is responsible for market strategy, planning, and promotion of DB2 on System z.

Read an Excerpt

DB2 10 for z/OS

The Smarter, Faster Way to Upgrade


By John Campbell, Cristian Molaro, Surekha Parekh

MC Press

Copyright © 2011 IBM
All rights reserved.
ISBN: 978-1-58347-610-9



CHAPTER 1

Planning for IBM DB2 10 for z/OS Upgrade


by John Campbell


Executive Summary

In the spring of 2010, DB2 10 for z/OS was released to 24 worldwide customers for beta testing. The evaluation focused on regression testing, "out-of-the-box" performance, and additional performance and scalability, as well as other new functions.

Customer experience and feedback about the program have been mainly positive, and most customers who were involved in the program plan to start migration to DB2 10 for z/OS in 2011. An incremental improvement was observed in the effectiveness of the program, in terms of the quality of the issues and problems found, relative to the respective programs for DB2 Version 8 and Version 9. Some customers did very well with regression and new function testing; others provided only limited qualification about what they did and what they achieved.

After the early stages of planning and execution, it is often difficult for customers to sustain the effort required during a six-month period, due to competing business and technical priorities. People, hardware, and time are usually constrained to varying degrees. As of the end of the beta program, no customers were in "true, business production."

The release of DB2 10 for z/OS provides many opportunities for price/performance and scalability improvements. But there is a tradeoff in terms of some increased real storage consumption. Customers need to carefully plan, provision, and monitor their real storage consumption.

The new, 64-bit SQL runtime can provide generous, 31-bit virtual storage constraint relief in the DB2 DBM1 address space. This support provides enhanced vertical performance scalability of an individual DB2 subsystem or DB2 member. It also opens opportunities for further price/performance improvement, through greater use of persistent threads running with the BIND option RELEASE(DEALLOCATE), DB2 member consolidation, and LPAR consolidation.
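As a sketch of the BIND option mentioned above, a package can be rebound so that a persistent thread keeps it allocated across commits; the collection and package names here are hypothetical:

```sql
-- Hypothetical example: with RELEASE(DEALLOCATE), a persistent
-- thread releases the package only at thread deallocation, not at
-- each commit, amortizing allocation/deallocation cost across
-- transactions. Names are illustrative only.
REBIND PACKAGE (MYCOLL.MYPKG) RELEASE(DEALLOCATE);
```

This pays off only under persistent threads (e.g., CICS protected ENTRY threads or high-performance DBATs); for short-lived threads the option changes little.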


Introduction

This paper focuses on the planning stage of migrating to IBM DB2 10 for z/OS. The key points of emphasis are:

[check] Make sure everyone is educated as to what is needed to ensure project success.

[check] Produce a detailed project plan and communicate it to everyone involved; this is crucial for success.

[check] Some preparation can occur very early, in terms of understanding, obtaining, and installing the prerequisites.


DB2 10 for z/OS was announced on February 9, 2010, and the beta began shipping on March 12, 2010. It was the largest beta test program in the history of DB2 for z/OS.

The information in this paper is drawn from the lessons learned in cooperation with 24 of IBM's largest customers, representing a variety of industries and countries around the world. An extended beta test program started in Q3 2010 and lasted for six months. The program also included 73 parties in vendor programs.

These customers were looking mainly for 31-bit virtual storage constraint relief in the DBM1 address space and all opportunities for price/performance improvement. Other areas of interest included:

• Regression testing (Be sure to approach regression testing in the order in which you plan to move to production.)

• "Out-of-the-box" performance

• Additional performance improvements

• Scalability enhancements

• New functions


Stages of migration

The primary stages of migration to a new version are:

1. Planning

Early stages:

• Making the decision to migrate

• Determining what can be gained

• Planning for prerequisites

• Avoiding incompatibilities

• Planning performance and storage

• Assessing available resources

2. Migration

3. Implementation of the new improvements


Needed application changes can be made over a longer period to make the migration process easier and less costly. Plans for monitoring virtual and real storage resource consumption, as well as performance, are necessary. An early health check, communication of the required changes, and staging of the work will make the project go much more smoothly.


Highlights of the Beta Test

DB2 10 for z/OS delivers great value by reducing CPU resource consumption in most customer cases. IBM internal testing and early beta customer results revealed that, depending on the specific workload, many customers could achieve "out-of-the-box" DB2 CPU savings of up to 10 percent for traditional OLTP workloads and up to 20 percent for specific new workloads (e.g., native SQL procedures), compared with running the same workloads on DB2 9 for z/OS.

The objective of providing and proving generous, 31-bit virtual storage constraint relief in the DBM1 address space was achieved by the end of the program. This achievement is significant in terms of the enhanced vertical scalability of an individual DB2 subsystem or DB2 member of a data sharing group. We are confident that customers can scale up, in practical terms, the number of active threads by 5 to 10 times to meet their demands.

Further opportunities for price/performance improvement are made possible through the use of persistent threads with the BIND option RELEASE(DEALLOCATE). Examples of using persistent threads include protected ENTRY threads with Customer Information Control System (CICS®), Wait For Input (WFI) regions with Information Management System/Transaction Manager (IMS/TM), and high-performance database access threads (DBATs) for incoming Distributed Data Facility (DDF) workloads.

Another goal was to improve INSERT performance, particularly in the area of universal table spaces (UTS). We wanted to ensure that insert performance for UTS was equal to, or better than, the classic table space types, such as segmented and partitioned. This goal was achieved in most cases.
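A universal table space of the partition-by-growth flavor can be defined as in the following sketch; the database, table space, and size values are hypothetical:

```sql
-- Hypothetical partition-by-growth UTS. MAXPARTITIONS makes the
-- table space partition-by-growth; SEGSIZE retains segmented
-- space management within each partition.
CREATE TABLESPACE TS1
  IN MYDB
  MAXPARTITIONS 10
  SEGSIZE 32
  LOCKSIZE ANY;
```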

Hash access was good, provided we hit the smaller-than-expected "sweet spot." Results for complex queries were also good.
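Hash access is requested at table definition time; the sketch below uses hypothetical names and sizes. The "sweet spot" is single-row lookups by a fully qualified unique key, where hash access avoids index probes:

```sql
-- Hypothetical hash-organized table (DB2 10 new-function mode).
-- HASH SPACE sizes the fixed hash area; an undersized value causes
-- overflow rows and erodes the benefit.
CREATE TABLE MYSCHEMA.ACCOUNT
  (ACCT_ID  BIGINT NOT NULL,
   BALANCE  DECIMAL(15,2))
  ORGANIZE BY HASH UNIQUE (ACCT_ID)
  HASH SPACE 64 M;
```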

Provided users chose the correct inline length, the performance of inline large objects (LOBs) was also impressive. Support for inline LOB column values has the potential to improve performance further by avoiding indexed access to the auxiliary table space. However, it is important to note that the inline length you choose must ensure that most of the LOB column values fit 100 percent inline in the base table space.
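An inline LOB column is declared with an INLINE LENGTH clause, as in this sketch (table, column, and length are hypothetical; the table must reside in a universal table space with reordered row format):

```sql
-- Hypothetical inline LOB. Values up to 2,000 bytes are stored
-- entirely in the base table space; only longer values spill to
-- the auxiliary LOB table space. Choose the length so that most
-- values stay fully inline.
CREATE TABLE MYSCHEMA.DOCS
  (DOC_ID   INTEGER NOT NULL,
   DOC_TEXT CLOB(1 M) INLINE LENGTH 2000);
```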

In the area of latch contention reduction, we focused on the hot latches in DB2 10 for z/OS. Having solved the 31-bit virtual storage constraint in the DBM1 address space, enabling you to scale five to ten times, we wanted to be sure there were no secondary latch contention issues that would inhibit the vertical scalability of a single DB2 subsystem or DB2 member.

As the beta program progressed, the reliability of, and customer confidence in, DB2 10 for z/OS greatly improved.

Generally speaking, online transaction processing (OLTP) performance improvements were as predicted. We were aiming for a target of 5 percent to 10 percent reduction in CPU resource consumption for most traditional OLTP workloads. During testing, several customers ran benchmarks showing that such reductions could be achieved. However, in cases where the transactions consisted of a few very simple SQL statements, the 5 percent to 10 percent target was not achieved.

This is where the increase in package allocation cost outweighed the improvement in SQL runtime optimization. However, we did identify some steps that can be taken to improve this. We have delivered an Authorized Program Analysis Report (APAR) to reduce package allocation cost. It is also possible to mitigate this situation by making more use of persistent threads with the BIND option RELEASE(DEALLOCATE).

Another issue was single-thread BIND/REBIND performance. Even in Conversion Mode (CM), the performance, in terms of CPU resource consumption and elapsed time, was degraded. One reason for this result was that in DB2 10 for z/OS the default for access plan stability is EXTENDED. Also, DB2 10 for z/OS uses indexed access, even in CM, to access the respective DB2 Catalog and Directory tables.

Another area where we had mixed results was SQL Data Definition Language (DDL) concurrency. We had hoped that by restructuring the DB2 Catalog and Directory to introduce row-level locking, remove hash link access, and more, we could improve concurrency when running parallel SQL DDL and parallel BIND/REBIND operations. The concurrency improvement was eventually achieved for parallel BIND/REBIND activity. Although it also helped in some cases with SQL DDL, most customers will still have to run SQL DDL activity single-threaded.

The final issue was access path lockdown. Two new options in DB2 10 for z/OS, APREUSE and APCOMPARE, enable you to generate a new SQL runtime while in most cases keeping the old access paths. Unfortunately, there were some issues with the underlying OPTHINTS infrastructure inherited by DB2 10 for z/OS, which is used by APREUSE and APCOMPARE. The introduction of APREUSE and APCOMPARE was delayed until these issues were addressed. These features are now available in the service stream via APARs, and their use is strongly recommended.
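Once the service-stream APARs are applied, the two options can be combined on a REBIND, as in this sketch with hypothetical names:

```sql
-- Hypothetical REBIND that regenerates the SQL runtime while
-- attempting to keep the existing access paths. APREUSE(ERROR)
-- fails statements whose old path cannot be reused;
-- APCOMPARE(WARN) reports any access path changes without failing.
REBIND PACKAGE (MYCOLL.MYPKG) APREUSE(ERROR) APCOMPARE(WARN);
```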

In general terms, the results of the beta program were mainly positive customer experiences, and we received good feedback about the program. A majority of customers in the beta program plan to start migrating to DB2 10 for z/OS in 2011. We observed incremental improvement in the program over what we experienced with the DB2 8 and DB2 9 for z/OS programs.

There was really no "single voice" or message across the customer set. We saw significant variation in terms of customer commitment and achievement. A small subset of customers did a very good job on regression and new function testing and provided good feedback. Others, due to limited resources, provided only limited qualification about what they were going to do and what they were able to achieve.

It is worth keeping in mind, for those who have never been involved in a Quality Partnership Program (QPP)/beta program, that it can be a challenge for customers to sustain the effort over a six-month period, due to competing business and technical priorities as well as constraints on people, hardware resources, and time.

By the end of the program, no customers were in true, business production. But we also need to appreciate that a QPP/beta program is not the same as an Early Support Program. We continue to develop and test the DB2 for z/OS product as the program progresses.

One of the benefits of DB2 10 for z/OS is that it provides many opportunities for price/performance (cost reduction) improvements. It is a major theme of this release. In discussions with customers, these opportunities for price/performance improvement are most welcome.

Also keep in mind that customers can be intimidated by some of the marketing "noise" about improved price/performance, often because of the raised expectation level of their respective CIOs. But in some cases, it is because when they run their own workloads, they do not see the anticipated improvements in CPU resource consumption and elapsed time performance. Many customers saw big improvements for certain workloads, while for other workloads, they saw little, if any, improvement.

Also note that if you have small test workloads that are untypical of the total mixed workload running in production, this can skew expectations on savings — either positively or negatively. Once DB2 10 for z/OS is in production, the results with the full, mixed workload may differ. We found that some measurements and quotes were overly positive and should be discounted.

A remaining question is: "How do you extrapolate from a small workload and project what the savings would be for the total, mixed workload in production?" Estimating with accuracy and high confidence is not practical, or possible, without proper benchmarking using a workload that truly represents production. Most customers reported incremental improvement over the DB2 8 and DB2 9 for z/OS programs.

Overall, most tests identified opportunities for price/performance (cost savings) improvements, which is the major theme of this release. Some customers reported big improvements in CPU and elapsed time reduction for certain workloads, while others did not. Keep in mind that smaller workloads may skew expectations on savings.


Summary of results

The DB2 10 for z/OS beta program confirmed improvements in the following areas:

[check] 31-bit virtual storage constraint relief in the DBM1 address space

[check] Insert performance

[check] Hash access good when hitting the smaller-than-expected sweet spot

[check] Complex queries

[check] Inline large objects (LOBs) and structured large objects (SLOBs)

[check] Latch contention reduction

[check] Quality of problems and issues found

[check] Reliability and confidence as program progressed


Performance and Scalability

One of the key lessons learned in the beta program was the need to plan on additional real storage. A 10 percent to 30 percent increase of real memory is a very rough estimate. For small systems with tiny buffer pools, the increase will be toward the high end of the range; for big systems with large buffer pools, it will be toward the low end of the range. It is important for customers to properly provision and monitor real storage consumption.

Many traditional OLTP workloads saw a 5 percent to 10 percent reduction in CPU utilization in CM after REBIND under DB2 10 for z/OS (some more, some less). On the initial migration to DB2 10 for z/OS, most customers will not perform a mass REBIND of all plans and packages. So, before REBINDing plans and packages, you may see little or no reduction in CPU resource consumption.

To maximize the price/performance benefits after migrating to CM, take these two steps:

1. REBIND your packages and plans to generate the new 64-bit SQL runtime. This way, you avoid the overhead of making the runtime for migrated packages from earlier releases look like the DB2 10 for z/OS runtime and re-enable fast column (SPROC) processing, which would otherwise be disabled.

2. Take advantage of 1 MB size real storage page frames to reduce translation lookaside buffer (TLB) misses. The 1 MB size real storage page frames are available on the z10™ and z196 processors. The prerequisite for using them is to specify the long-term page fix option for your local buffer pools. Long-term page fix buffer pools, which were introduced in DB2 8, provide an opportunity to reduce CPU resource consumption by avoiding the repetitive cost of page fix and page free operations for each page involved in an I/O operation.

The lesson is, be sure to use PGFIX=YES on your local buffer pools, provided there is sufficient real storage provisioned to fully back the requirement of the total DB2 working set below and above the 2 GB bar.
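The two steps above can be sketched as follows; the buffer pool and collection names are hypothetical:

```sql
-- Step 2 prerequisite: long-term page fix the local buffer pool.
-- -ALTER BUFFERPOOL is a DB2 console command; the PGFIX(YES)
-- change goes pending and takes effect when the pool is
-- reallocated. With PGFIX(YES) on z10/z196, DB2 10 can back the
-- pool with 1 MB real storage page frames, reducing TLB misses.
-ALTER BUFFERPOOL(BP1) PGFIX(YES)

-- Step 1: regenerate the 64-bit SQL runtime for existing packages.
REBIND PACKAGE (MYCOLL.*);
```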

In a few cases, customers saw less than 5 percent saving in CPU resource consumption for traditional OLTP with very light transactions — "skinny" packages with a few simple SQL statements. This result is due partly to the increased cost of package allocation, which outweighs the benefit of the SQL runtime optimizations. APAR PM31614 may solve this issue by improving package allocation performance. Another way to address this is to use persistent threads with the BIND option RELEASE(DEALLOCATE) to amortize away the repetitive cost of package allocation/deallocation per transaction.

Regarding customers' measurements, keep in mind that — unlike the DB2 Lab environment, where a dedicated environment is used — customer measurements are typically performed in a shared environment, and the measurement results are not always consistent and repeatable. There can be wide variation on measurement "noise" in customer measurements, especially regarding elapsed time performance.

In most cases, customers were not running in a dedicated environment or at the scale/size of true business production. Many customers ran a subset (maybe a high-volume subset) of the total production workload. Sometimes, they used a synthetic test workload to study specific enhancements.

In cases where customers had very large numbers that they were not able to reproduce, the numbers on CPU and elapsed time reductions were not trusted.


Recommendation

Customers should not spend anticipated price/performance (cost reduction) savings until they actually see the improvements in their own true business production environment.


Early results

Table 1.1 summarizes some of the beta program results reported by customers. Some of the additional savings were due to features such as using 1 MB size real storage page frames for selective buffer pools, enabling high-performance DBATs, and respecting the package BIND option RELEASE(DEALLOCATE). Another reason was the improvement in COMMIT processing for applications that commit frequently. We now perform parallel writes to the active log data set pair even when rewriting a log control interval (CI) that was partially filled and written out previously.

Now, let us discuss the use of the 1 MB size real storage page frames on the z10 and z196 processors. The potential exists for reduced CPU resource consumption through fewer TLB misses; however, the local buffer pools must be defined as long-term, page-fixed (PGFIX=YES). This feature was introduced in DB2 8 to mitigate CPU regression and reduce CPU resource consumption for I/O-intensive buffer pools.

Many customers are still reluctant to use the PGFIX=YES option because they are running too close to the edge on the amount of real storage provisioned and are in danger of paging to auxiliary (DASD) storage. They understand the value of PGFIX=YES, but it applies only for an hour or two each day. Another factor is that this decision is a long-term one; in most cases, implementing this buffer pool attribute requires a recycle of the DB2 subsystem. A change to the attribute goes pending and is materialized when the buffer pool goes through reallocation. It is also worth noting that real storage on the z196 processor costs 75 percent less than on the z10 processor.


(Continues...)

Excerpted from DB2 10 for z/OS by John Campbell, Cristian Molaro, Surekha Parekh. Copyright © 2011 IBM. Excerpted by permission of MC Press.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents

Contents

About the Authors,
Preface,
Why Read This Book?,
Introduction,
Hear What Our Customers Are Saying About DB2 10 for z/OS,
PART I: Planning for IBM DB2 10 for z/OS by John Campbell,
Executive Summary,
Introduction,
Highlights of the Beta Test,
Performance and Scalability,
Accounting Trace Class 3 enhancement,
Availability,
Other Issues,
Incompatible Changes,
Migration and Planning Considerations,
Security Considerations When Removing DDF Private Protocol,
Items Planned for Post-GA Delivery,
Summary,
PART II: Gaining the Financial Benefits of DB2 10 for z/OS by Cristian Molaro,
Introduction,
Why Performance Matters,
DB2 10 for z/OS Performance and Cost Savings,
Building the DB2 10 for z/OS Business Case,
Some Invaluable Performance Benefits of DB2 10,
Notes,
Acknowledgements,
