
Optimizing Informix Applications

by Robert D. Schneider

Overview

Developers and administrators can use this book to improve Informix application and system performance, thereby increasing productivity and system quality. It shows how to set up an optimization test environment, presents indexing strategies, and explains how to use the Informix optimizer effectively. It offers tips for better SQL, 4GL, and ESQL/C performance, with detailed advice for the Informix-SE, Informix-OnLine, and Informix-OnLine DSA engines, and includes a chapter of detailed case studies. Intended for software developers and database/system administrators with at least beginner-to-intermediate experience with Informix products.

Editorial Reviews

Booknews
A guide to improving performance of database applications based on the Informix-SE, Informix-OnLine, and Informix-OnLine DSA engines, for developers and database administrators. Chapters on planning strategies, setting up a test environment, indexing approaches, SQL statements, and taking advantage of data fragmentation and parallel processing incorporate source code, database layouts, and hardware configurations. Case studies illustrate common optimization challenges and solutions. Annotation © Book News, Inc., Portland, OR (booknews.com)

Product Details

ISBN-13:
9780131492387
Publisher:
Prentice Hall Professional Technical Reference
Publication date:
04/10/1995
Series:
Informix Press Series
Pages:
345
Product dimensions:
7.03(w) x 9.22(h) x 0.55(d)

Read an Excerpt

PREFACE: Intended audience

Informix tools and engines let you quickly build powerful and reliable database applications. This alone is often not enough; users also demand high performance. It takes skill and experience to produce such high-quality systems, but many Information Systems (IS) professionals don't have the time to learn the intricacies of Informix products. Optimizing Informix Applications gives you the insight you need to produce the fast database applications your users want. Optimizing Informix Applications is a tool for application developers, database administrators, systems analysts, and system administrators who are interested in designing, developing, maintaining, and administering efficient applications that use Informix database engines and tools.

To get the most out of this book, programmers and analysts should be comfortable with:
* Basic Structured Query Language (SQL).
* A procedural programming language, such as C, COBOL, or 4GL.

Database administrators should understand:

* Relational database design concepts.
* Intermediate or advanced SQL.
* Informix engine architecture.

System administrators should understand:
* Informix engine architecture.
* The UNIX operating system at an intermediate level.

Benefits of Using This Book

Optimizing Informix Applications provides you with these benefits:

* Enhanced productivity.
It's hard for a user to be productive if a typical query takes three minutes to return data. End-of-week reports that take 36 hours to run generally don't lead to a more robust bottom line, either. In this book, we cite dozens of specific changes you can make to your applications and systems to upgrade your organization's productivity.

* Increased application reliability.
Generally, an inefficient application is an unreliable application. At the very least, slow systems are perceived as being less reliable than fast systems. When you increase a system's speed, you will improve its reliability.

* Reduced hardware expenditures.
Few organizations can afford to continually upgrade their hardware environments to get better performance. Therefore, they expect their IS staff to do more with less. You can use the suggestions in this book to squeeze extra speed out of your existing hardware.

* Simpler applications.
In many cases, the simplest solution is the fastest solution. You will find many tips in this book to help you increase a system's speed while reducing its complexity.

* More satisfied users.
In the past, most users were mainly concerned with whether a system worked; performance wasn't a major consideration. Today's users are more sophisticated than ever, and view system speed as a crucial element. Many organizations have learned this lesson the hard way when users refuse to accept a slow system, no matter how well designed it is.

How to Use This Book

This book is divided into five major sections:

1. Introduction. In this chapter, we discuss how you can best use this book, as well as how to set up the most effective optimization test environment.

2. Creating a Well-Designed Database. Your entire application relies on an efficient database design. In this chapter, we review some of the most important facets of a good database design.

3. Optimizing Application Code. This chapter describes how to write efficient database application code, using a variety of programming languages.

4. Tuning the Database Engine. Even the most efficient application can be sluggish if the underlying database engine is tuned incorrectly. In this chapter, we review many techniques you can use to make your database engine run as rapidly as possible.

5. Case Studies. This chapter contains five case studies. Each case study features several interrelated problems.

You don't need to read this book from cover to cover. To get the most benefit from our suggestions, review only those sections that apply to your particular situation. For example, if you are running a distributed ESQL/C application that uses the OnLine engine, you really don't need to spend much time reading the sections about 4GL or Informix-SE.

However, there are several sections that deserve your attention, regardless of your situation. These include:

* Setting Up an Optimization Test Environment.
It is a big mistake to rush in and start making performance enhancement changes to your system without first observing a few very important details. This section covers these details.

* General Tips.
This section contains numerous suggestions that can help you develop better, faster applications, no matter what engine or tools you use.

* Relational Database Design Concepts.
A good database design is the foundation of a system that performs well. This section teaches you how to implement the relational model when you design a database. It also covers specific situations where the relational model is not appropriate.

* Indexing Strategies.
The wrong indexing strategy can wreck an otherwise perfect database design. We list some straightforward indexing techniques you can use to dramatically raise your system's throughput.

* SQL Tips.
Without SQL, you can't access data. In this section, we cite several ideas you can incorporate to improve your SQL statements.

* Case Studies.
This chapter contains a number of multi-faceted case studies. Spend some time reviewing them: chances are you have already experienced some of the problems we illustrate.

How to Use the Examples

You'll find numerous examples in each chapter, which we have made as clear and concise as possible. This will help you implement our suggestions quickly. Because this book is intended as a reference, the examples in one chapter do not rely on another. This will help you concentrate on specific problems and solutions. Keep in mind that while the examples in this book may not exactly match your database design, application software, and/or engine configuration, you can still apply them to your specific problems.

Setting Up An Optimization Test Environment

When faced with a sluggishly performing application, or an engine that appears out of tune, most administrators and programmers are eager to roll up their sleeves and begin the optimizing process. Unfortunately, there are a number of potentially tiresome yet very important steps that you must take before you begin the "real work" of tuning. If you don't set up a stable, accurate test environment, you face the very real risk of making inaccurate judgments about the root cause(s) of your system's current woeful performance. You might then use these incorrect assumptions as the base of an entire action plan, only to later learn that your original hypotheses were wrong.

In this section, we address these important issues. We've divided the section into three major parts. The first deals with the agenda you should cover before you even begin your testing efforts. This includes ensuring that the right hardware, operating system, database, and application software are in place. In addition, we discuss some organizational issues that you should address immediately at the start of the optimization.

Next, we spell out some methods that you can use to help point the way while your testing is going on. While the most successful system testers and tuners rely heavily on their test plan to guide them during this process, they also choose the right blend of discipline and creativity when following up on new, promising performance leads.

Finally, we illustrate how you can translate the knowledge that you gained during the testing process into real performance improvements.

While this section is primarily designed for people tuning existing systems and applications, you can still apply these suggestions if you're doing new development. In fact, one way to help guarantee a production system that meets your standards is to treat good performance as one of the most important development deliverables.

Steps to follow before testing

Hardware considerations

Your hardware environment consists of a number of different components, each of which plays a meaningful role in your overall performance picture. Let's examine each of these components in more detail:
* CPU.
Ideally, both your production and test environment have similar CPU speeds and capacities. If not, you'll always have to wonder which numbers you should believe: those from the production system, or those from the test system. The same problem often arises at multi-processor sites, where the production machine may have 12 CPUs, while the test machine has one. This really becomes a problem if you're tuning the OnLine engine, especially if you're using OnLine 6.00 or newer.

* Memory.
It's crucial that your test platform have the same amount of memory as your production system. Slow-running applications can experience a "miracle cure" when you add a few megabytes of memory to the hardware. If your test and production systems have vastly different memory configurations, be careful before you draw conclusions from your research.

* Disk drives.
If your production system uses older, slower disk drives, and your test system has the speed benefits of the newest generation data storage devices, you're bound to come to some erroneous conclusions about each machine's capacity and response. If you're attempting to test your OnLine engine's response, try to match the production machine's disk configuration when you set up your test machine. Disk sizing and layout can have a big impact on engine throughput; you'll get more realistic results if you're working with a real-life disk profile.

* Connections (local and network).
If your production application uses a distributed database or works in a client/server environment, it's important that you configure your testing platform to accurately reflect these connections. Network and client/server overhead can be substantial; if you omit these factors from your testing, you might not get completely accurate performance information.

Ideally, your test system and production system are carbon copies of each other. In reality, however, most administrators and developers are lucky even to have a test machine; few organizations can afford to have a production-caliber machine sitting idle. If this is the case in your organization, try to use the production machine for your tests whenever possible. This may mean that you only have access to the equipment after hours, or on weekends. See the "Operational considerations" section of this chapter for more information about gaining access to the production system for testing.

Software considerations

Once you've set up a testing system, your next challenge is to obtain the right software and correctly configure it. In this section, we divide the software into the following elements:

* Operating system.
It's important that your optimizing platform have the same operating system configuration as your production system. Slight differences in UNIX kernel parameters can cause wide disparities in performance; these differences might skew your findings. If you don't know how to check your kernel parameters, ask your system administrator to check them for you. If possible, try to work with identical kernel parameters.
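One quick way to compare the System V IPC settings that Informix's shared memory depends on is the standard UNIX `ipcs` utility; the kernel-tunable file mentioned in the comment is only an example, since its location varies by UNIX flavor:

```shell
# Show current shared-memory segments and semaphores in use;
# run this on both the test and production machines and compare.
ipcs -m
ipcs -s

# Kernel limits such as SHMMAX and SEMMNS live in a platform-specific
# place; on some SVR4-derived systems, for example, /etc/conf/cf.d/stune.
# Ask your system administrator where they are kept on your platform.
```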

* Database and tools.
Try to ensure that you've installed the same versions of your production system's database server and tools in your test environment. This is especially important if you're examining a client/server system, using the I-STAR or I-NET connectivity products. Even if you don't use any client/server features, remember that Informix continually enhances the query optimizer. You can expect your performance statistics to vary, depending on which version of the database engine you're using.

* Application.
One way to guarantee an unpleasant optimizing experience is to work with different versions of your application on the development, production, and test machines. Sometimes, a one-line difference in a program can have a tremendous influence on performance. Insist on testing the same version of application software that the users are complaining about. Otherwise, you may (rightly) come to the conclusion that there is no performance problem: when you ran your version of the application, it performed perfectly.

Data considerations

After you have set up your hardware and software environment, the next topic to consider is the data that you plan to use during your testing. Ideally, your test database should exactly match the database that's currently in development or production. This means having the same number of tables, indexes, views, stored procedures, and triggers under the testing environment as you do on the production environment.

One common mistake that administrators and developers make when setting up a test database is using an undersized data set. If you don't make your test database big enough, it's possible that you might get an incorrect picture of true system performance. Even the most poorly written query may perform well on a tiny database, whether or not the underlying database design is good.

If possible, use an exact copy of your production database on your test platform. The easiest way to obtain a full copy of a database is to use the 'dbexport' command on the production system, followed by the 'dbimport' command on the test system. If your production database is too large, or if your test system doesn't have enough storage space, it's still important that you create a test database with the same number of tables, indexes, views, and stored procedures. You can use the 'dbschema' utility on the production database to obtain the SQL necessary to create a duplicate test database. If you're testing a query that only works with two moderate-sized tables from your production database, you only need to copy the contents of those two tables from the production system to the test system.
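As a rough sketch, that copy might look like this from the shell; the database name 'stores' and the export directory are placeholders, and the commands assume the standard Informix utilities are on your path:

```shell
# On the production system: unload the schema and data for the
# 'stores' database (placeholder name) into an export directory.
dbexport -o /tmp/exports stores

# Move the export directory to the test system (tar/ftp/rcp as
# appropriate), then rebuild the database there:
dbimport -i /tmp/exports stores

# If a full copy is too large, capture just the SQL needed to
# recreate the tables, indexes, views, and stored procedures:
dbschema -d stores -ss stores_schema.sql
```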

Once you've set up your testing database, and filled it with the correct amount of data, don't forget to run UPDATE STATISTICS with the same options that you chose for your production database.
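For example, from DB-Access you might run statements like the following; the distribution options (MEDIUM/HIGH) exist only on later engine versions, and the table name here is just an illustration:

```sql
-- Refresh optimizer statistics on the test database, using the
-- same options you run in production. Plain UPDATE STATISTICS
-- works on all engine versions:
UPDATE STATISTICS;

-- On engines that support data distributions, you might instead
-- mirror a production command such as:
UPDATE STATISTICS HIGH FOR TABLE leads;
```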

Setting up a test plan

It's not absolutely essential that you write a test plan prior to beginning your experiments. However, a well-designed test plan can make your tasks much easier by providing a clear sense of direction. This is especially important if you have a multi-person test team. Without a plan, team members might duplicate each other's work, and potentially damage the integrity of your test results. You can also use your test plan to form the basis of your summary report once you finish your examinations.

You don't need to spend an inordinate amount of time on your plan; it should simply define, at a high level, which procedures you expect to follow when conducting your evaluations, along with any assistance you might need from other users or developers.

For example, look at the following set of entries from a sample test plan:

Example 1.1 Test Plan:
Test #: 17
Date: 03/09/95
Start time: 08:00
Finish time: 11:00
Platform: Production
Action: Experiment with adding/altering indexes on the call_tracking table.
Special needs: May need to briefly lock tables when adding/altering indexes.
Team members: Nicole - Primary
Jed - Assist
Li - Assist
Test status:
Test results:

Test #: 18
Date: 03/09/95
Start time: 12:15
Finish time: 16:30
Platform: Production
Action: Raise number of shared memory buffers to test OnLine engine performance.
Monitor engine statistics for balance of day.
Special needs: Need to restart OnLine - will result in approx. 2 minutes database downtime.

Team members: Lynn - Primary
Karl - Assist
Test status:
Test results:

Test #: 19
Date: 03/10/95
Start time: 08:00
Finish time: 17:30
Platform: Development
Action: Alter cc01288.ec, cc03432.ec programs to use dirty read isolation levels instead of current cursor stability isolation. Monitor application response times before & after modification.
Special needs: None
Team members: Miguel - Primary
Lynn - Assist
Test status:
Test results:

Test #: 20
Date: 03/11/95
Start time: 08:00
Finish time: 17:30
Platform: Production
Action: If test #19 improves performance, move the cc01288 and cc03432 programs into production.
Special needs: Need user input regarding response times.
Team members: Miguel - Primary
Meyer - Assist
Sandra - Assist
Test status:
Test results:

You could even store your test plan in the database, and then use SQL to create helpful queries.
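For instance, a minimal test-plan table and a status query might look like this; the table and column names are our own invention, not part of the book's schema:

```sql
-- A simple table to hold test plan entries like those shown above.
CREATE TABLE test_plan
(
    test_num       DECIMAL(5,1),  -- allows sub-tests such as 19.1
    test_date      DATE,
    platform       CHAR(12),      -- e.g. Production, Development
    action         CHAR(240),
    special_needs  CHAR(240),
    primary_member CHAR(18),
    test_status    CHAR(12),      -- e.g. PLANNED, COMPLETED
    test_results   CHAR(240)
);

-- Which tests are still outstanding on the production platform?
SELECT test_num, test_date, action
  FROM test_plan
 WHERE platform = "Production"
   AND (test_status IS NULL OR test_status <> "COMPLETED")
 ORDER BY test_num;
```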

Operational considerations

After you've finished setting up your hardware, software, and database, and have designed a good test plan, you face perhaps your biggest challenge: obtaining operational support for your tuning efforts.

Operational issues are often the most frustrating part of optimizing an application. Management may be unwilling to allow you full or partial access to the resources that you need. Users are often reluctant to help isolate a performance problem, even though they have the most to gain from any discoveries that you make. The designers and/or developers of the application may be nervous about what your investigation might uncover. Unfortunately, you can only get a true picture of what is really happening on the system by requesting cooperation from management, users, and developers.

Before you begin your experiments, you have every right to ask for each of the following:

* System availability.
If you don't have a dedicated test platform, you'll have to run your experiments on the production system. This probably means you'll be working some late nights or weekends, but it's better than having no system at all. If management is reluctant to grant access to you, point out that you'll be able to solve their performance problems much faster if you can work on the platform where the response delays are occurring.

* User input and support.
You should try to make your tuning experiments as scientific as possible. If users are complaining about response time, it's reasonable for you to ask them to cite specific examples. It's even better if you can sit by their side as they work. You will have a much better chance to diagnose a problem if you observe it while it happens, instead of hearing about it three days later. Once you've investigated and (hopefully) corrected the problem, you still need feedback and assistance from your user community. Otherwise, you may never truly know if your enhancements helped.

* IS staff support.
Unless you're the only person in the IS department, chances are you'll need to work with your peers when optimizing new or existing applications. If you're an administrator, you need good rapport with the programmers and analysts who designed the system. If you're a programmer or an analyst, you need the cooperation of the database administrator. Unfortunately, a large proportion of tuning exercises become mired in politics and finger-pointing. Therefore, whatever your job responsibility, it's important that you conduct your investigation and present your suggestions with sensitivity and tact. (Almost) no one sets out to design a deliberately slow application or database. In addition, a well-designed application can slow down over time under the weight of increased volume and altered requirements. Remember that there are many ways to solve design challenges; the decisions that your peers made during the design process may have been, at that time, the right ones to meet the requirements. Few people find non-constructive criticism enjoyable; keep this in mind as you proceed with your work.

Steps to follow during testing

Discipline versus creativity
What should you do if, during testing, you discover a completely new and potentially exciting path that you feel bears further investigation? Do you abandon your carefully thought-out tuning strategies? Do you merely write down your idea and get back to the original plan? What if you realize that your plan doesn't really meet your needs?

The most practical answer to all of the above questions is to remain flexible while you conduct your tuning exercises. Don't be afraid to yield to your creative impulses; sometimes the most promising optimizing leads appear out of nowhere. You should never allow an idea to escape because you were too busy to write it down.

On the other hand, don't discard your testing plan just because it's now partially obsolete. Instead, why not incorporate your new ideas into the existing plan? For example, using the example from earlier in this chapter, suppose that you've just run test 19:

Example 1.2
Test #: 19
Date: 03/10/95
Start time: 08:00
Finish time: 17:30
Platform: Development
Action: Alter cc01288.ec, cc03432.ec programs to
use dirty read isolation levels instead
of current cursor stability isolation. Monitor
application response times before & after modification.
Special needs: None
Team members: Miguel - Primary
Lynn - Assist
Test status: COMPLETED
Test results: Yielded no performance gain, but we discovered
an indexing problem. See test 19.1.

You learned that the problem with the two ESQL/C programs wasn't the isolation levels, as you had originally thought. Instead, you realized that each program used an ORDER BY clause in several SELECT statements. By reviewing the output from SET EXPLAIN, you discovered that there was no index on the columns listed in the ORDER BY clause. Once you added an index, each program ran in one-fifth of its original time. How can you record this information? One way is to add a new test that reflects the results of your additional test:

Example 1.3
Test #: 19.1
Date: 03/10/95
Start time: 08:00
Finish time: 17:30
Platform: Development
Action: Place a composite index on the territory_number
and zip_code fields in the 'leads' table. Test
impact on cc01288 and cc03432 programs.
See test 19 for original plan.
Special needs: None
Team members: Miguel - Primary
Lynn - Assist
Test status: COMPLETED
Test results: cc01288 and cc03432 both ran much faster.

This approach lets you follow your creative ideas, without straying from a disciplined methodology.
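The diagnosis behind test 19.1 can be reproduced with a few SQL statements. The query below is our own illustration (the column list is invented), but the 'leads' table and its territory_number and zip_code columns come from the test plan above:

```sql
-- Turn on the optimizer's query-plan output (written to the file
-- 'sqexplain.out' in the current directory).
SET EXPLAIN ON;

-- A query like those in cc01288.ec and cc03432.ec: without a
-- matching index, the ORDER BY forces a sort of every qualifying row.
SELECT territory_number, zip_code
  FROM leads
 ORDER BY territory_number, zip_code;

-- The fix from test 19.1: a composite index that lets the engine
-- read rows in the required order and skip the sort entirely.
CREATE INDEX ix_leads_terr_zip
    ON leads (territory_number, zip_code);
```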

Steps to follow after testing

Formal assessment

Once you've finished your testing and (hopefully) confirmed your hypotheses, it's time to formally document your findings. Unfortunately, this is true even if you learn that every one of your hypotheses was completely wrong. It's important that you commit your conclusions to paper for several reasons:

* Better communication.
If you enlisted the support of other people during the optimization process, it's only fair that you provide them with the results of your examination. You could do this verbally, but you'll reduce the potential for confusion and misunderstanding by writing the results. If your recommendations include database, engine configuration, or application modifications, you definitely need to put them in writing, if you expect anyone to take them seriously and implement the changes. Even if you didn't need the help of any users or developers, or don't recommend any alterations, chances are that your management will want to know what you learned.

* Future testing.
Unless this round of testing solved every performance issue that you faced, chances are that you'll need to re-test and re-tune your applications and environment at some point in the future. In fact, even if you eliminated every problem, it's still a good idea to periodically monitor your system's response. You'll find written documentation of earlier test plans and results to be a very useful tool when faced with subsequent assessments.

* Assisting new personnel.
Current turnover rates virtually guarantee that you'll need to explain your testing methodologies and procedures to a new hire at some point in the future. By documenting your findings now, you can reduce the amount of time that you'll be obligated to spend during future orientations, as well as increase the accuracy of your presentations.

Determining the next step

Even if you feel that you've located every performance bottleneck, it's important not to rush in and immediately make widespread changes after you finish your tuning experiments. Instead, consider applying one change at a time. After you apply each modification, run some simple performance tests to see what effect your change had on throughput. For example, suppose that you learn, through your experiments, that your system is hampered by three major performance impediments:

1. Not enough shared memory buffers
2. Unnecessarily high levels of isolation
3. Insufficient indexing

How should you proceed in this case? Generally, it's a good idea to implement the simplest alterations first. It's also wise to make one change at a time. By only modifying one variable each time, you stand a better chance of learning which variations had the most impact.

Regardless of which alteration you implement, always remember to keep a copy of the original code, database structure, or engine configuration. In the above example, the shared memory buffers problem is easiest to address: it requires no code or database modifications, and you need only briefly bring OnLine down for the new buffers parameter to take effect. Once you raise the number of shared memory buffers, your next step should be to monitor performance to see what effect your change had. Try to make your evaluations as scientific as possible, but don't be afraid to go back to your users and get their opinions as well.
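As a sketch of that first change, assuming an OnLine DSA engine whose configuration file carries a BUFFERS parameter (earlier OnLine versions use the tb-prefixed utilities tbmode and tbinit instead, and paths vary by installation):

```shell
# Keep a copy of the original configuration before changing anything.
cp $INFORMIXDIR/etc/$ONCONFIG $INFORMIXDIR/etc/$ONCONFIG.bak

# Edit the BUFFERS parameter in $ONCONFIG (for example, raising it
# from 200 to 1000), then bounce the engine so the value takes effect:
onmode -ky   # immediate shutdown (brief downtime, as noted in test 18)
oninit       # bring OnLine back up with the new buffer count

# Watch buffer-cache read/write hit rates to judge the change's effect.
onstat -p
```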

Once you've obtained some updated throughput statistics, your next step is to evaluate whether or not your enhancement was helpful. If it wasn't, or if the improvements weren't substantial enough, return the system to its original state. If it was, leave it in place, and move on to the next modification.

Continue following this cycle until you've investigated each performance enhancement. When you have finished, you should have a better performing system.
