
DB2 12 for z/OS - The #1 Enterprise Database: Secure, Seamless Integration for an Analytics, Mobile & Cloud World
Product Details
ISBN-13: 9781583478615
Publisher: MC Press, LLC
Publication date: 10/21/2016
Sold by: Barnes & Noble
Format: eBook
Pages: 160
File size: 5 MB
Read an Excerpt
DB2 12 for z/OS - The #1 Enterprise Database
Secure, Seamless Integration for an Analytics, Mobile & Cloud World
By John Campbell, Wolfgang Hengstler, Namik Hrle, Gareth Jones, Clement Leung, Ruiping Li, Jane Man, Surekha Parekh, Terry Purcell
MC Press
Copyright © 2016 IBM. All rights reserved.
ISBN: 978-1-58347-861-5
CHAPTER 1
DB2 12 for z/OS: Technical Overview and Highlights
by John Campbell and Gareth Jones
Introduction
Cloud, Analytics, and Mobile are changing the landscape for enterprise customers. These technology trends are driven partly by an explosion of data and partly by business needs to gain "deeper business insight" from that data to improve business efficiency and effectiveness. The enterprise's data server is at the heart of this revolution, and it is important that data servers are agile, secure, and provide seamless integration with these newer technologies. This paper provides an overview of how DB2 12 addresses these business and technological needs.
Why You Should Read This Paper
This paper is targeted at IT Architects, such as Enterprise, System, Software, and Data Architects, who work with business leaders and subject matter experts. It will help architects ensure that business and IT are aligned and also ensure that they design and deliver an architecture that supports the most efficient and secure IT environment meeting an enterprise's business needs.
Highlights
This IBM® DB2® for z/OS® white paper provides a high-level overview of some of the key changes introduced in DB2 12 for z/OS, including the following topics:
Performance for Traditional Workloads
Performance Enablers for Modern Applications
Application Enablement
RAS — Reliability, Availability, Scalability, plus Security
Migration and Prerequisites
Before looking at those areas in detail, it's important to put them in context in terms of the goals that the DB2 for z/OS development team set themselves, in four broad themes:
Application enablement
DBA function
OLTP performance
Query performance
Application Enablement
DB2 development set themselves the target of addressing a number of key customer requirements to expand the use of existing DB2 features, as well as delivering mobile, hybrid cloud, and DevOps enablement. Two more objectives in the application enablement area were to provide enhanced IDAA functionality to support an expanded number of use cases, and to provide incremental improvements in the SQL and SQL/PL areas to make DB2 ready for the next wave of applications.
DBA Function
Even with the existing capability to grow partitioned table spaces to 128 TB, some customers have been constrained by table and partition scalability limits, and addressing this issue has become one of the goals of this release. To complement this, another objective is to simplify large table management. Further goals are to remove the biggest inhibitors to 24 × 7 continuous availability and to provide incremental security and compliance improvements.
OLTP Performance
OLTP performance is still one of the biggest requirements for our customers, so the goal in this area is to build on the improvements in DB2 10 and DB2 11 to deliver even more performance improvements. For DB2 12, the goals are to reduce CPU consumption in the range of 5–10% by exploiting in-memory features, to double the throughput when inserting into a non-clustered table, to remove system scaling bottlenecks associated with high n-way systems, and to provide incremental improvements related to serviceability and availability.
Query Performance
Query performance for OLAP, BI, and other more complex workloads also remains a focus for our customers, and we have targeted four improvements in this area, building on the work done in DB2 11: a 20–30% CPU reduction for complex query workloads, improved efficiency through reduced consumption of other resources, a specific target of an 80% performance improvement for UNION ALL, and simplified access path management, especially for dynamic SQL.
Quick Hits
Let's have a look at some of the highlights of this release before moving on to discuss the specific changes in detail.
Scale and Speed for the Next Era of Mobile Applications
DB2 development has measured over 1 million inserts per second, and we believe we can scale higher.
DB2 can also support up to 256 trillion rows in a single table, with agile partition technology.
In-memory Database
In-memory database is a major theme for this release. DB2 development has measured up to 23% CPU reduction for index lookup with advanced in-memory techniques.
Next-generation Application Support
In terms of next-generation application support, DB2 can now handle up to 360 million transactions per hour through a RESTful Web API into DB2.
Deliver Analytical Insights Faster
Response time is not just a requirement for OLTP workloads, but also for analytical workloads, where DB2 can deliver up to a two-times speed-up for query workloads, and for targeted queries, up to a 100-times speed-up.
Performance for Traditional Workloads
In this section, we'll look at changes in DB2 12 for z/OS that improve performance for traditional workloads.
In-memory Computing
DB2 12 places a strong emphasis on in-memory computing, combining large real memory sizes with memory-optimized data structures to drive performance improvements. Unlike prior releases, where some performance improvements were available without necessarily making more real memory available to DB2, many of the DB2 12 enhancements require customers to provision additional real memory before the significant performance gains can be realized. All the enhancements in this section require additional real memory. As a general comment on real memory provisioning, customers should plan to avoid all paging and ensure that there is sufficient free real memory to run safely.
In-memory Contiguous Buffer Pools
One of the features that falls into this category is the in-memory contiguous buffer pool. The objective of this buffer pool option is to cache entire table spaces or index spaces in the buffer pool. The larger the object, the larger the buffer pool needed, and the larger the buffer pool, the more real memory is required.
The in-memory contiguous buffer pool improves performance and reduces CPU consumption by providing direct page access in memory — DB2 development has measured up to an 8% CPU reduction for OLTP workloads using this feature. Direct page access greatly reduces the CPU overhead of Get Page and Release Page operations and is achieved by laying out objects contiguously in page order in the buffer pool, and then accessing the page directly in memory.
This is not the first step taken by DB2 to provide support for in-memory buffer pools. DB2 10 introduced the PGSTEAL(NONE) buffer pool attribute for objects, such as table spaces and indexes, that can fit in their entirety into a buffer pool. The pages for each object are prefetched into the buffer pool when that object is first accessed, saving subsequent I/O and providing elapsed time benefits; once the object is resident, prefetch is disabled, saving CPU. However, DB2 10 and 11 buffer pools defined with PGSTEAL(NONE) still maintained hash chains and LRU chains, and that CPU cost remained visible for workloads running with large buffer pools and getpage-intensive processing.
DB2 12 changes PGSTEAL(NONE) functionality to avoid the LRU and hash chain management overheads. And to make this feature more resilient, DB2 12 introduces an overflow area that is automatically managed by DB2. An overflow area is reserved by DB2 from the buffer pool allocation, and represents 10% of the buffer pool size. It is only used if the objects assigned to the buffer pool do not fit into 90% of the buffer pool size. It is allocated when the buffer pool is allocated, but is only backed by real storage when it is used. Any pages in the overflow area are automatically managed by DB2 on an LRU basis. While no page stealing occurs within the main buffer pool area, it is possible in the overflow area.
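To make this concrete, the following is a minimal sketch of how an installation might dedicate a PGSTEAL(NONE) buffer pool to a small, read-intensive object so that DB2 12 can lay it out contiguously; the buffer pool number, size, and object names are hypothetical, and actual sizing must be based on the object's size plus the 10% overflow area:

    -- DB2 command: size BP20 to hold the whole object and mark it PGSTEAL(NONE)
    -- (pool number and VPSIZE are illustrative only)
    -ALTER BUFFERPOOL(BP20) VPSIZE(50000) PGSTEAL(NONE) PGFIX(YES)

    -- SQL DDL: assign the small, frequently read table space to that pool
    -- (database and table space names are illustrative only)
    ALTER TABLESPACE PRODDB.CODETS BUFFERPOOL BP20;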
In-memory Index Optimization
As customer tables grow larger and larger, index sizes inevitably grow, too. The larger the index in terms of the number of levels, the greater the cost of random index access becomes.
Prior to DB2 12, DB2 made use of index lookaside, to try to avoid the full cost of index probing. However, this typically only benefited skip sequential access through the index. Random SQL only benefited from index lookaside occasionally, often requiring a Get Page for each level of the index.
To make index lookups faster and cheaper, DB2 12 introduces the Index Fast Traverse Block (FTB) in-memory area to provide fast index lookups for random index access for unique indexes with a key size of 64 bytes or less. It is a memory-optimized structure, and contains the non-leaf pages of the index. The FTB area resides outside of the buffer pool, in a memory area managed by DB2, and requires additional real memory. Unique indexes with include columns are also supported provided the total length is 64 bytes or less.
A new zparm, INDEX_MEMORY_CONTROL, is used to control the size of this memory area, with a minimum size of either 500 MB or 20% of total allocated buffer pool storage, whichever is larger, and a maximum size of 200 GB. Customers who wish to control the introduction of the FTB area at a system-wide level can use the zparm to disable fast index traversal, but there is also a new catalog table, SYSIBM.SYSINDEXCONTROL, which is a mechanism for controlling fast index traversal at the index level, by time window.
DB2 automatically determines over time which unique indexes would benefit from the FTB area. Fast index traversal is not attractive for indexes that suffer from frequent leaf page splits, due to inserts and updates, for example, but when the index is used predominantly for read access via key lookups, a significant performance improvement is expected. SELECT, INSERT, UPDATE, and DELETE can all benefit from FTB area because they can avoid traversing the index b-tree. The savings can be substantial, depending on the number of levels in the index b-tree.
A new zIIP-eligible Index Memory Optimization Daemon monitors index usage, and allocates FTB storage to indexes that will benefit. Using the FTB, DB2 can do very fast traversals through the non-leaf page index levels without having to do page-oriented access.
Customers can monitor FTB area usage using the new DISPLAY STATS command, which shows which indexes are using the FTB area. Also for use by customers and by IBM service are two new IFCIDs, 389 and 477, which allow the tracking of FTB area usage at a detailed level.
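For example, an operator could issue the new command shown below to list the indexes currently occupying FTB storage; the long-form keyword is shown here as a sketch, and exact syntax should be confirmed against the DB2 12 command reference:

    -- Show index memory (FTB) usage for the subsystem
    -DISPLAY STATS(INDEXMEMORYUSAGE)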
Figure 1 shows, based on measurements taken by DB2 development, that simple random index lookup is faster and cheaper in DB2 12. It also shows that the expected CPU savings grow with the number of index levels, varying from 6% for a two-level index to 23% for a five-level index. Index-only access will show even higher percentage CPU savings.
INSERT Performance
INSERT workloads are very common across the DB2 for z/OS customer base and are often performance critical. A typical use case is the journal or audit table, with a high rate of concurrent insert, where rows are inserted at the end of the table space or partition, and where data clustering is not required — it is not always necessary for the rows to be inserted in the order of the clustering index. Some customers try to balance the need for fast insert processing with the need to query the data rows in key order by processing the inserted rows again later, populating other tables with the rows in clustering key order.
However, performance has historically been a challenge for some customers who need the capability to insert very large volumes of data rows into the database at very high speed. Prior releases of DB2 addressed this by forcing the insert of new rows at the end of the current partition for table spaces/tables with the MEMBER CLUSTER and APPEND attributes, without regard to data row clustering. Even so, prior to DB2 12 the space search algorithm used for the table space or partition could still be a bottleneck for insert performance, and it is that bottleneck that DB2 12 addresses.
However, some customers need even more improvement in insert throughput performance, and DB2 12 takes a significant step forward with a new fast INSERT algorithm, which streamlines the free space search operation. This feature is targeted specifically at UTS table spaces with the MEMBER CLUSTER attribute. The use cases that will benefit from this change are broadened, as DB2 does not consider the APPEND table attribute when determining which tables qualify for the new algorithm. The old insert algorithm is known as Algorithm 1, and the new, faster insert algorithm is known as Algorithm 2.
However, this feature is not available until new function has been activated in DB2 12 (referred to as ANFA, or After New Function Activation). This is because the new fast insert algorithm depends on new log records that are introduced with ANFA.
To summarize the requirements for the new fast insert algorithm:
New function has been activated — ANFA.
The UTS table space type is used.
The table space is defined with Member Cluster — MC.
To allow customers to control usage of the new algorithm, it may be turned off via new zparm DEFAULT_INSERT_ALGORITHM, or at the table space level via the DDL attribute INSERT ALGORITHM. The default zparm setting when new function has been activated is to use the new algorithm.
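The following sketch shows how a qualifying journal table space might be defined with the DDL attribute described above; the database, table space, and sizing values are hypothetical, and the algorithm values in the comment reflect the documented 0/1/2 settings:

    -- PBR UTS with MEMBER CLUSTER; INSERT ALGORITHM 2 requests the new
    -- fast insert algorithm for this table space (0 = use the
    -- DEFAULT_INSERT_ALGORITHM zparm setting, 1 = old algorithm, 2 = new)
    -- Names and sizes below are illustrative only
    CREATE TABLESPACE TSJRNL IN DBJRNL
      NUMPARTS 16
      SEGSIZE 64
      MEMBER CLUSTER
      LOCKSIZE ROW
      INSERT ALGORITHM 2;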
Customers who want to exploit this new feature should plan for additional real memory and larger size buffer pools, for each qualifying table space partition and for each DB2 member — that is, there is an additional real memory requirement per partition per member.
Figure 2 below shows faster insert rates into a group buffer pool-dependent partition-by-range (PBR) universal table space (UTS) defined with MEMBER CLUSTER and APPEND in two-way data sharing in DB2 12 ANFA. This is a special use case, as the table has no indexes, but it does demonstrate the benefits of faster insert into a table space without any of the side effects introduced by indexes. The workload consists of a number of short, fast insert processes running in parallel, each inserting a small number of rows. There are three things to note in this chart:
The number of inserts per second increases from under 800,000 per second to over 1,000,000 per second — a 25% improvement.
The class 2 elapsed time per transaction with Algorithm 1 at 0.012 seconds per transaction is reduced dramatically to 0.002 seconds per transaction.
The class 2 CPU time per transaction is reduced by about 20%.
A more common scenario is outlined in Figure 3, which is based on an application journal. Just as in the previous example, the experiments were performed in a two-way data sharing environment, with group buffer pool dependency. This is one of the use cases making up the high-insert performance workload test suite run by DB2 development on a continuous basis at the IBM Silicon Valley Laboratory (SVL).
There are three tables involved in this workload, one table with one index, the second with two indexes, and the third with three indexes. The journal table is defined as PBR UTS with row-level locking, and with Member Cluster. The index is based on a sequential key, and the rows arrive at the table in an order based on that sequential key.
The chart compares insert throughput (inserts per second) and CPU per transaction in milliseconds on DB2 11 with insert Algorithm 1, and on DB2 12 with insert Algorithm 2. The CPU per transaction drops significantly, from around 7.3 milliseconds per transaction to 6.2 milliseconds, approximately a 15% reduction. The insert throughput climbs dramatically, from just over 300,000 per second to just over 400,000 per second — around a 30% increase.
Bear in mind that the benefits you will see will vary, and workloads that are constrained by lock/latch contention on the space map pages and on the data pages in the table space or table space partition are more likely to benefit from the new insert algorithm.
(Continues...)
Excerpted from DB2 12 for z/OS - The #1 Enterprise Database by John Campbell, Wolfgang Hengstler, Namik Hrle, Gareth Jones, Clement Leung, Ruiping Li, Jane Man, Surekha Parekh, Terry Purcell. Copyright © 2016 IBM. Excerpted by permission of MC Press.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Table of Contents
Why You Should Read This Book by Tom Ramey
About the Authors
Introduction by Surekha Parekh
DB2 12 for z/OS: Technical Overview and Highlights by John Campbell and Gareth Jones
DB2 12 for z/OS: What's the Latest from the Optimizer for Improved Query Performance? by Terry Purcell
Build a DB2 12 for z/OS Mobile Application Using IBM MobileFirst by Jane Man and Clement Leung
IBM DB2 Analytics Accelerator: A Revolution in Performance by Namik Hrle, Ruiping Li, and Wolfgang Hengstler
Making Data Simple and Accessible: The Role of Technology in Delivering Analytic Results