Performance Testing Microsoft .NET Web Applications


Overview

The shift from stand-alone desktop applications to Web-enabled applications that accommodate hundreds of thousands of simultaneous users makes performance testing critical. Find out how to make your Microsoft .NET-based applications perform as well as or better than traditional desktop applications with this book—written by the Microsoft team that tests and tunes hundreds of Web sites and Web-based applications. You’ll learn how to take advantage of the best available tools to plan and execute performance tests, configure profiling tools, and analyze performance data from Microsoft Internet Information Services, Microsoft ASP.NET, managed code, the SQL tier, and more. You’ll also learn how to use the testing methodology that Microsoft uses to stress test its own sites—some of the most popular, high-performance Web sites in the world.

Topics covered include:

  • The testing methodology used on Microsoft.com, Xbox.com, and other high-volume sites
  • Planning the performance test
  • Stress testing with Microsoft Application Center Test (ACT)
  • Monitoring application performance with Performance Monitor
  • Testing Web site security
  • Application Network Analysis
  • Analyzing the Web tier
  • Analyzing managed code
  • Analyzing the SQL tier
  • Transaction Cost Analysis (TCA)

INCLUDED ON CD-ROM:

  • A fully searchable electronic copy of the book
  • Scripts that test the performance of IBuySpy.com

For customers who purchase an ebook version of this title, instructions for downloading the CD files can be found in the ebook.


Product Details

  • ISBN-13: 9780735615380
  • Publisher: Microsoft Press
  • Publication date: 10/2/2002
  • Edition description: REV
  • Pages: 320
  • Product dimensions: 7.62 (w) x 9.24 (h) x 1.15 (d)

Meet the Author

Founded in 1975, Microsoft (Nasdaq ‘MSFT’) is the worldwide leader in software for personal and business computing. The company offers a wide range of products and services designed to empower people through great software—any time, any place and on any device.


Read an Excerpt

Chapter 2: Preparing and Planning for the Performance Test
  • Identifying Performance Goals
    • Response Time Acceptability Goals and Targets
    • Throughput Goals and Concurrent User Targets
    • Performance Growth Analysis
  • User Activity Profile
  • Backend Activity Profile
    • Identifying a Web Application’s User Activity
    • Identifying a Web Application’s Backend Performance Bottlenecks
  • Key Performance Metrics Criteria
  • Mirroring the Production Environment
  • Putting It Together in a Performance Test Plan
  • Conclusion

2 Preparing and Planning for the Performance Test

Often Web applications fail to meet their customers’ needs and expectations. When a Web application generates errors, has poor response times, or is unavailable, customers can easily become frustrated. If your performance test procedure or methodology is not well thought out and properly planned, the odds of a successful Web application launch are significantly reduced. This chapter identifies the key processes and planning required before you execute a single performance test. By following these steps you will enhance your odds of executing an effective Web application performance test. These steps include identifying performance goals, creating a user activity profile, and defining the key metrics to monitor and analyze when creating a performance test plan.

NOTE:
We have found that many performance test projects fail because testing begins too late in the Web application development cycle or because the requirements are too complex to complete in the allotted time. Focus on the key elements of your Web application and on the user scenarios that will occur most often. If time permits, you can always go back and execute a performance test on the features that are rarely used.

Identifying Performance Goals

High-level performance goals are critical to ensure your Web application meets or exceeds current and projected future requirements. The best approach is to base them on historical data or extensive marketing research. Classic examples of poor planning are e-commerce Web applications that can’t handle the peak holiday shopping rush. Every year the media publicizes Web applications that cannot process all their orders or that suffer from slow response times, Web server error messages, or system downtime. These failures cost not only lost sales but bad press as well.

High-level performance requirements can be broken down into the following three basic categories:

  • Response time acceptability
  • Throughput and concurrent user goals
  • Future performance growth requirements

Response Time Acceptability Goals and Targets

By researching how and where your users will connect to your Web application, you can build a table similar to Table 2-1 to show the connection speeds and latency of your potential customers. This can help you determine an acceptable amount of time it can take to load each page of your Web application.

Table 2-1  Predicted Connection Speeds

User Worst Connection Average Connection Best Connection
Line Speed 28.8-kbps modem 256-kbps DSL 1.5-mbps T1
Latency 1000 milliseconds 100 milliseconds 50 milliseconds

Once you have identified how your user base will access your Web application, you can determine your response time acceptability targets. These targets define how long it can acceptably take for user scenarios or content to load over various connections. For example, all things being equal, a 70-kilobyte page will obviously load faster on a 256-kbps DSL connection than on a 28.8-kbps modem connection. The response time acceptability for the 28.8-kbps modem might be 15 seconds, while for the 256-kbps DSL connection it might be significantly less, say 5 seconds. Response time acceptability targets are useful when you perform an application network analysis, which is discussed in detail in Chapter 5. The purpose of conducting the application network analysis is to predict response times at various connection speeds and latencies, determine the amount of data transferred between each tier, and determine how many network round trips occur with each step of a user scenario. If you do not have historical data or projections for potential customer connection speeds and latencies, we recommend using worst-case estimates. The data in Table 2-1 represents worst, average, and best connections at typical end-user Internet connection speeds.
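The arithmetic behind these targets is easy to automate. The sketch below estimates page load time from line speed, latency, and round-trip count; the 10-round-trip figure is our assumption for illustration, not a measurement from the analysis in Chapter 5.

```python
def estimate_load_time(page_size_kb, line_speed_kbps, latency_ms, round_trips):
    """Rough load-time estimate: transfer (serialization) delay plus
    one latency charge per network round trip."""
    transfer_s = (page_size_kb * 8) / line_speed_kbps   # kilobits / kilobits-per-second
    latency_s = (latency_ms / 1000.0) * round_trips
    return transfer_s + latency_s

# The 70-kilobyte page from the text, assuming 10 round trips:
modem_s = estimate_load_time(70, 28.8, 1000, 10)  # ~29.4 seconds on a 28.8-kbps modem
dsl_s = estimate_load_time(70, 256, 100, 10)      # ~3.2 seconds on 256-kbps DSL
```

Even this crude model shows why high-latency links dominate response time: at 1000 milliseconds per round trip, the modem user spends roughly a third of the wait on latency alone.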

Throughput Goals and Concurrent User Targets

Answering the following questions will help to determine throughput goals and concurrent user targets:

  • How many concurrent users do we currently sustain or expect in a given time period?
  • What actions does a typical user perform on our Web application, and which pages receive the most page views in a given time period?
  • How many user scenarios will our Web application process in a given time period?

The best place to gather this information is from historical data, which can be found in Web server log files, System Monitor data, and database activity monitoring. If you are launching a new Web application, you may need to perform marketing research analysis to anticipate throughput and concurrent user targets. Historical production data or marketing research helps ensure you execute the performance tests at the right concurrent user levels. If, after you complete your performance tests, your Web application meets your throughput and concurrent usage requirements, you can continue adding load until your Web application either reaches a bottleneck or achieves maximum throughput. Table 2-2 shows predicted throughput goals and concurrent user profile expectations for the IBuySpy sample Web application. Using the information in this table, a performance test script can be created to mimic the anticipated load on the Web application. The Ratio column represents the percentage of all user operations that this particular operation accounts for. The anticipated load per hour is taken from historical data, which is illustrated in the next section of this chapter, and represents how many times per hour this particular user operation typically occurs.

Table 2-2  Throughput and Concurrent User Targets

User Operations Ratio Anticipated Load per Hour
Basic Search 14% 1,400
Browse for Product 62% 6,200
Add to Cart 10% 1,000
Login and Checkout 7% 700
Register and Checkout 7% 700
Total 100% 10,000
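A stress script can reproduce the Table 2-2 mix with a weighted random choice. The following is a generic sketch of the idea, not ACT syntax; the operation names and weights come straight from the table.

```python
import random

# Ratios from Table 2-2 (percentages, summing to 100).
OPERATION_WEIGHTS = {
    "Basic Search": 14,
    "Browse for Product": 62,
    "Add to Cart": 10,
    "Login and Checkout": 7,
    "Register and Checkout": 7,
}

def next_operation(rng=random):
    """Pick the next simulated user operation in proportion to its ratio."""
    operations = list(OPERATION_WEIGHTS)
    weights = list(OPERATION_WEIGHTS.values())
    return rng.choices(operations, weights=weights, k=1)[0]
```

Over a long run, roughly 62 percent of the selected operations will be Browse for Product, matching the anticipated hourly load.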

Performance Growth Analysis

A performance growth analysis is required if your Web application user base is expected to grow over a given time period; you need to account for that growth when performance testing. Performance testing and tuning your Web application after the development cycle is complete costs more in time and money than fixing performance problems during the software development life cycle (SDLC). In the real-world example in Chapter 1, the expenses incurred by finding and fixing performance issues after the SDLC included lost marketing revenue due to bad press, users lost because they would not wait for slow page views, and the test and development labor spent troubleshooting and fixing the issues. Taking a little extra time during the performance test cycle to populate your database with additional data, to see how the application will perform when the database is larger, will save you money in the long run. Also, execute the stress test with more load and higher levels of concurrent users to predict future bottlenecks. By fixing these issues ahead of your growth curve, you will reduce your performance testing and tuning needs in the immediate future and ultimately provide your users with a better experience.

TIP:

One way to preload your database with extra orders, baskets, and so on is to run your performance test script for a sustained period of time (possibly a few days) before executing your actual performance analysis. Because the data enters through the Web application UI, it will be added to your system accurately. The fastest way to pre-populate the database is to have the database developer create test data via SQL scripts. However, a common mistake when pre-populating a database with SQL scripts is to miss tables that are normally updated through the Web application UI. The point is to ensure that you populate your database accurately; otherwise, inaccurate data will skew your performance test results.

The easiest way to determine the growth capacity of your Web application is to calculate the increase in volume you are currently experiencing over a specific period of time. For example, assume your user base is growing at a rate of 10 percent per month. Table 2-3 illustrates an anticipated growth plan that can be used when performing your stress tests. This assumes your Web application is currently seeing 10,000 users per day and will grow at a rate of 10 percent per month. When determining your growth rate, don’t forget to account for special promotions that may increase traffic to your Web application.

Table 2-3  Future Growth Profile

Time Period Users Per Day
Current 10,000
Three months out 13,310
Six months out 17,716
Nine months out 23,579
Twelve months out 31,384
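A growth profile like this is straight monthly compounding, which the following sketch reproduces (10,000 users per day growing 10 percent per month is the example from the text):

```python
def projected_users(current_users, monthly_growth, months):
    """Compound today's daily user count forward by a fixed monthly growth rate."""
    return round(current_users * (1 + monthly_growth) ** months)

# Quarterly projections for the 10 percent per month example:
projections = {m: projected_users(10_000, 0.10, m) for m in (3, 6, 9, 12)}
```

Remember to add any expected promotion-driven spikes on top of the compounded baseline before choosing stress levels.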

User Activity Profile

We use IIS logs to create user activity profiles. The IIS logs are text files that contain information about each request; they can be viewed directly with a simple text editor or imported into a log analysis program. We recommend using a set of IIS logs covering at least a week’s worth of user activity from your Web application to obtain realistic averages; using more log files produces more reliable usage profiles and weightings. To illustrate the process of creating a user activity profile, we imported a month of IIS log data from a recent performance analysis of a typical e-commerce Web application into a commercial log file analyzer. These IIS log files comprise shopper page views related to Homepage, Search, Browse for Product, Add to Basket, and Checkout operations performed on the Web application. The log file analyzer enabled us to generate Table 2-4. Many commercial log file analyzers that fit all budgets are available; they can accurately import, parse, and report on Web application traffic patterns.

Table 2-4  User Activity Profile

User Operation/Page Name(s) Number of Page Views Ratio
Homepage 720,000 40%
  default.aspx 720,000 40%
Search 90,000 5%
  search.aspx 90,000 5%
Browse 450,000 25%
  productfeatures.aspx 216,000 12%
  productoverview.aspx 234,000 13%
Add to Basket 360,000 20%
  basket.aspx 360,000 20%
Checkout 180,000 10%
  checkout.aspx 90,000 5%
  checkoutsubmit.aspx 54,000 3%
  confirmation.aspx 36,000 2%
Totals 1,800,000 100%

There is a distinction between a hit and a page view. A hit is a request for any individual object or file on a Web application, while a page view is a request that retrieves an HTML, ASP, or ASP.NET page, which can in turn reference many additional page elements. The page is essentially what you see after the transfer completes and can consist of many other files. Page views do not include hits to images, component pages of a frame, or other non-HTML files.

TIP:

To simplify constructing your user profile, leave Web application traffic such as image and miscellaneous requests out of the user profile. Also, leave out activity from monitoring tools that ping or access various pages to verify the Web application is functioning properly.
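The counting that produced Table 2-4 can be scripted if a commercial analyzer is not at hand. The sketch below is ours, not a tool from this book; it assumes W3C extended logs in which cs-uri-stem is the fifth space-separated field, and it drops image and other non-page hits as the TIP advises.

```python
from collections import Counter

PAGE_EXTENSIONS = (".aspx", ".asp", ".htm", ".html")

def count_page_views(log_lines, uri_field=4):
    """Tally page views per URI from IIS W3C extended log lines.

    Directive lines (starting with '#') and hits on non-page files such
    as images are skipped, so the result counts page views, not hits.
    """
    views = Counter()
    for line in log_lines:
        if line.startswith("#"):
            continue
        fields = line.split()
        if len(fields) <= uri_field:
            continue
        uri = fields[uri_field].lower()
        if uri.endswith(PAGE_EXTENSIONS):
            views[uri] += 1
    return views
```

Dividing each count by the total page views yields the Ratio column of the user activity profile.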

Backend Activity Profile

A backend activity profile is used to identify user activity and performance bottlenecks at the database tier of an existing Web application. This information can be useful to ensure your performance test is accurate.

Identifying a Web Application’s User Activity

Existing databases contain concrete information about what your users are doing with your Web application. Examples of this type of information for a typical e-commerce application include how many baskets are created, how many orders are processed, how many logins occur, how many searches take place, and so on. This information can be gathered with simple queries against your existing database. The data can assist you in creating user scenarios and user scenario ratios, or provide marketing information that can help you make decisions on the business side. For example, you can compare the number of baskets created to the number of checkouts to find the abandoned basket rate. This information is important for designing your stress test to execute scenarios in the correct ratios. If you find that 50 percent of the baskets created turn into actual orders processed, you can mimic this ratio when executing your performance test.
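The abandoned basket arithmetic is worth capturing directly. The basket and order counts would come from queries against your own tables; the function is just the ratio described above.

```python
def abandonment_rate(baskets_created, orders_processed):
    """Fraction of created baskets that never became processed orders."""
    if baskets_created == 0:
        return 0.0
    return 1.0 - (orders_processed / baskets_created)

# If half of all baskets turn into orders, half are abandoned:
rate = abandonment_rate(1000, 500)
```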

Identifying a Web Application’s Backend Performance Bottlenecks

If you are performance testing an existing Web application, you can identify current performance bottlenecks by interrogating the database server for queries that take a long time to process, cause deadlocks, or result in high server resource utilization. This data collection occurs during the planning phase of the performance testing methodology and involves capturing SQL trace data using SQL Profiler, along with Performance Monitor logs composed of the Windows and SQL Server objects typical for a Web application. The timeframe for the captured SQL trace should cover the period when application performance degrades from acceptable to poor. The captured information will give you a clearer picture of where the bottleneck is occurring. Chapter 8 walks you through the process of determining the source of a SQL performance issue; possible causes include blocking, locks, deadlocks, problematic queries, and stored procedures with long execution times.
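Once the trace is exported, finding the expensive statements is a grouping exercise. The sketch below assumes rows exported as (query text, duration in milliseconds) pairs; the query names are made up for illustration, and the 500-millisecond threshold echoes Table 2-5.

```python
from collections import defaultdict

def slow_queries(trace_rows, threshold_ms=500):
    """Group trace rows by query text; return queries whose average
    duration exceeds the threshold, mapped to that average."""
    durations = defaultdict(list)
    for query_text, duration_ms in trace_rows:
        durations[query_text].append(duration_ms)
    return {
        text: sum(values) / len(values)
        for text, values in durations.items()
        if sum(values) / len(values) > threshold_ms
    }
```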

Key Performance Metrics Criteria

As a performance analyst, tester, or developer, you must produce a blueprint for how to performance test the Web application to ensure the high-level performance goals are met. If you don’t create a performance test plan, you may find out about requirements too late in the SDLC to properly test for them. Using the performance requirements criteria in the sections above, you now need to identify the key metrics that will be monitored and analyzed during the actual performance test.

TIP:

The breakpoint of your Web application can be defined in various ways. Examples include a server that exceeds predefined resource utilization targets, too many server errors, or unacceptable response times due to processing delays.

Key metrics for the performance test include the following:

Server errors acceptability This may seem like a moot point: no server errors are acceptable, because they result in a bad user experience. However, during stress testing you will probably come across errors, so you should be prepared to understand why they are occurring and decide whether they will happen in your live production environment with real users hitting your Web application. For example, errors often occur when a stress test first begins and again when it shuts down, caused by too much load applied too quickly or by uncompleted page requests. Because these errors are artifacts of the stress test itself, you can ignore them; they are unlikely to recur in the production environment.

Server utilization acceptability This is an important aspect of performance testing. By identifying acceptable utilization up front, you can determine the maximum level your servers should be allowed to reach; when performance testing, this becomes a key element in determining the maximum load to apply to your Web application. This metric can differ for each Web application and should be documented so that the support, development, test, and management teams can agree on it. For example, you might ramp up load on the Web tier until you reach 75 percent CPU utilization, at which point you are serving approximately 2,000 users per server, meeting the concurrent user targets identified by your performance requirements. With these metrics documented, the support team can monitor the production Web servers for spikes that meet or exceed the performance requirement and begin to scale the Web application up or out to support the increased traffic.

Memory leaks or other stability issues These issues often arise when running extended performance tests. For example, if you execute a stress test for a short period of time you may not find the memory leak or other stability issues that only occur after an extended period of heavy activity. Many times multiple tests can be executed to accomplish different goals. You may want to run a quick one-hour test to determine your Web application’s maximum throughput, and then run a weekend-long extended stress test to determine if your Web application can sustain this maximum load.
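One way to turn an extended run into a leak verdict is to fit a trend line to periodic available-memory samples from System Monitor. The least-squares sketch below is our illustration; a persistently negative slope over a long test is a warning sign, not proof of a leak.

```python
def memory_trend(samples):
    """Least-squares slope of evenly spaced memory samples
    (change in MB per sampling interval)."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    denominator = sum((x - mean_x) ** 2 for x in range(n))
    return numerator / denominator

# Available MB sampled at regular intervals during a weekend-long run:
slope = memory_trend([512, 502, 492, 482])   # steadily losing memory each interval
```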

Processing delays These occur in almost every Web application that implements complex business logic. The key is to minimize processing delays to an acceptable amount of time. It’s a good idea to know what’s acceptable before performance testing, so you don’t waste time escalating an issue to your development team that does not require fixing because it already meets performance goals. Examples of processing delay acceptability criteria are shown in Table 2-5 and include any stored procedure taking more than 500 milliseconds and any Web page (measured by the Time Taken field in your Web tier logs) taking more than one second to process. Table 2-5 shows an example of a performance metric acceptability table. Your requirements may be different; the key point is to come up with a set of requirements that make sense for your Web application.

Table 2-5  Performance Metrics and Acceptable Levels

Metric Location Acceptable Level
CPU utilization Performance Monitor < 75%
Memory—available MB Performance Monitor > 128 MB
Memory—pages/second Performance Monitor < 2
ASP execution time Performance Monitor < 1 second
DB processing delays SQL Profiler < 500 milliseconds
Web tier processing delays Time Taken field of Web logs < 1 second
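A table like 2-5 translates directly into an automated pass/fail check over a monitoring snapshot. The metric names below are our own identifiers, not Performance Monitor counter names; the limits are the acceptable levels from the table.

```python
# (limit, higher_is_better) pairs taken from Table 2-5.
ACCEPTABLE_LEVELS = {
    "cpu_utilization_pct": (75, False),
    "available_memory_mb": (128, True),
    "memory_pages_per_sec": (2, False),
    "asp_execution_time_s": (1, False),
    "db_processing_delay_ms": (500, False),
    "web_processing_delay_s": (1, False),
}

def failed_metrics(snapshot):
    """Return the metrics in a snapshot that violate their acceptable level."""
    failures = []
    for name, value in snapshot.items():
        limit, higher_is_better = ACCEPTABLE_LEVELS[name]
        acceptable = value > limit if higher_is_better else value < limit
        if not acceptable:
            failures.append(name)
    return failures
```

Running a check like this after each test pass keeps the acceptability criteria explicit for every team that signs off on them.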

Mirroring the Production Environment

The performance test environment should be as close to the production environment as possible. This includes the server capacity and configuration, network environment, the Web tier load balancing scheme, and your backend database. By mirroring your production environment you ensure that your throughput numbers will be more accurate.

TIP:

If production equivalent hardware is not feasible for your performance-testing environment, you can still uncover many bottlenecks in the code and architecture. Even though a production equivalent environment is optimal, performance testing is possible in almost any environment you can scrape together.

Putting It Together in a Performance Test Plan

The performance test plan is a strategy, a formal approach that allows everyone involved in a Web application (the development, test, and management teams) to understand exactly how, why, and what part of the application is being performance tested. The following sections are found in a performance test plan:

Application overview This gives a brief description of the business purpose of the Web application. This may include some marketing data stating estimates or historical revenue produced by the Web application.

Architecture overview This depicts the hardware and software used for the performance test environment, and will include any deviations from the production environment. For example, document it if you have a Web cluster of four Web servers in the production environment, but only two Web servers in your performance test environment.

High-level goals This section illustrates what you are trying to accomplish by performance testing your Web application. Examples include identifying what throughput and concurrent usage levels you will be striving for as well as the maximum acceptable response times.

Performance test process This will include a description of your user scenarios, tools you use to stress test, and any intricacies you will put in your stress scripts. This section will also explain what ratios and sleep times or user think times you will include in your test script.

Performance test scripts The scripts are unlikely to be completed until after your performance analysis cycle has finished. But it is important to include these in the test plan to make them available in the next release or phase of the Web application test cycle. Because stress test scripts take time and effort to create, having test scripts available as a reference for future testing can save time.

Conclusion

Performance testing is a critical phase of any Web application’s development cycle and needs to be on the critical path for release to production. By planning properly before you start, you can ensure a successful performance test and improve your odds of having a high-performing Web application when your customers begin to use it.


Table of Contents

Dedication;
Acknowledgements;
Introduction;
Who This Book is For;
About the Companion CD-ROM;
Chapter Overviews;
Chapter 1: Laying the Performance Analysis Groundwork;
1.1 Why Is Performance Testing and Tuning Important?;
1.2 Effects of Current and Emerging Architecture Technologies;
1.3 What Is .NET?;
1.4 Performance Goals;
1.5 Performance Testing Your Application;
1.6 Conclusion;
Chapter 2: Preparing and Planning for the Performance Test;
2.1 Identifying Performance Goals;
2.2 User Activity Profile;
2.3 Backend Activity Profile;
2.4 Key Performance Metrics Criteria;
2.5 Mirroring the Production Environment;
2.6 Putting It Together in a Performance Test Plan;
2.7 Conclusion;
Chapter 3: Stress Testing with Microsoft Application Center Test (ACT);
3.1 Getting Started;
3.2 Core Concepts of ACT;
3.3 Running ACT;
3.4 Conclusion;
Chapter 4: Monitoring Application Performance with System Monitor;
4.1 Using System Monitor;
4.2 Monitoring Objects, Counters, and Instances for Performance Bottlenecks;
4.3 Typical Processor-related Problems and Solutions;
4.4 Conclusion;
Chapter 5: Application Network Analysis;
5.1 Conducting an Application Network Analysis;
5.2 Using Microsoft Network Monitor;
5.3 Conclusion;
Chapter 6: Analyzing and Performance Tuning the Web Tier;
6.1 Getting Started;
6.2 Understanding Configuration and Performance;
6.3 Profiling a .NET Web Application;
6.4 Performance Tuning Tips;
6.5 Common Web Tier Bottlenecks;
6.6 Scaling the Web Tier;
6.7 Conclusion;
Chapter 7: Performance Analysis of Managed Code;
7.1 CLR and Performance;
7.2 The Life and Times of a .NET Web Application;
7.3 Profiling Managed Code;
7.4 Conclusion;
Chapter 8: Analyzing the SQL Tier;
8.1 Getting Started;
8.2 Identifying Bottlenecks;
8.3 Index Tuning;
8.4 Conclusion;
Chapter 9: Estimating IIS Tier Capacity with Transaction Cost Analysis;
9.1 Concurrent Users: A Loosely Defined Term;
9.2 Benefits of Completing a TCA;
9.3 TCA In Five Steps;
9.4 Conclusion;
Chapter 10: Performance Modeling: Tools for Predicting Performance;
10.1 Predicting and Evaluating Performance Through TCA;
10.2 Advanced Performance Modeling;
10.3 Performance Modeling Technology;
10.4 Indy: A Performance Technology Infrastructure;
10.5 Conclusion;
About the Author;

First Chapter

Preparing and Planning for the Performance Test
  • Identifying Performance Goals
    • Response Time Acceptability Goals and Targets
    • Throughput Goals and Concurrent User Targets
    • Performance Growth Analysis
  • User Activity Profile
  • Backend Activity Profile
    • Identifying a Web Application’s User Activity
    • Identifying a Web Application’s Backend Performance Bottlenecks
  • Key Performance Metrics Criteria
  • Mirroring the Production Environment
  • Putting It Together in a Performance Test Plan
    • Conclusion

Preparing and Planning for the Performance Test

Often Web applications fail to meet their customers’ needs and expectations. When a Web application generates errors, has poor response times, or is unavailable, customers can easily become frustrated. If your performance test procedure or methodology is not well thought out and properly planned, the odds of a successful Web application launch are significantly reduced. This chapter identifies the key processes and planning required before you execute a single performance test. By following these steps you will enhance your odds of executing an effective Web application performance test. These steps include identifying performance goals, creating a user activity profile, and defining the key metrics to monitor and analyze when creating a performance test plan.

NOTE:

We have found that many performance test projects fail because testing begins too late in the Web application development cycle or have requirements that are too complex to complete in the allotted time. Focus on the key elements of your Web application and on user scenarios that will occur most often. If time permits, you can always go back and execute a performance test on the other features that are rarely used.

Identifying Performance Goals

High-level performance goals are critical to ensure your Web application meets or exceeds current or future projected requirements. The best approach is to use historical data or extensive marketing research. Examples of poor planning are e-commerce Web applications that can’t handle the peak holiday shopping rush. Every year the media publicizes Web applications that cannot procure all their orders, suffer from slow user response times, Web server error messages, or system downtime. This costs not only in terms of lost sales, but in bad press as well.

High-level performance requirements can be broken down into the following three basic categories:

  • Response time acceptability
  • Throughput and concurrent user goals
  • Future performance growth requirements

Response Time Acceptability Goals and Targets

By researching how and where your users will connect to your Web application, you can build a table similar to Table 2-1 to show the connection speeds and latency of your potential customers. This can help you determine an acceptable amount of time it can take to load each page of your Web application.

Table 2-1  Predicted Connection Speeds

User Worst Connection Average Connection Best Connection
Line Speed 28.8-kbps modem 256-kbps DSL 1.5-mbps T1
Latency 1000 milliseconds 100 milliseconds 50 milliseconds

Once you have identified how your user base will access your Web application, you can determine your response time acceptability targets. These targets define how long it can acceptably take for user scenarios or content to load on various connections. For example, with all things being equal, a 70-kilobyte page will obviously load faster on a 256-kbps DSL connection than on a 28.8-kbps modem connection. The response time acceptability for your 28.8-kbps modem might be 15 seconds, while the 256-kbps DSL connection might be significantly less, at 5 seconds. Response time acceptability targets are useful when you perform an application network analysis, which is discussed in detail in Chapter 5. The purpose of conducting the application network analysis is to perform response time predictions at various connection speeds and latencies, determine the amount of data transferred between each tier, and determine how many network round trips occur with each step of a user scenario. If you do not have historical data or projections for potential customer connection speeds and latencies we recommend using worst-case estimates. The data in Table 2-1 represents worst, average, and best connections of typical end-user Internet connection speeds.

Throughput Goals and Concurrent User Targets

Answering the following questions will help to determine throughput goals and concurrent user targets:

  • How many concurrent users do we currently sustain or expect in a given time period?
  • What actions does a typical user perform on our Web application and which pages receive the most page views in a given time period?
  • How many user scenarios will our Web application process in a given time period?

The best place to gather this information is historical data, which can be found in Web server log files, System Monitor data, and database activity monitoring. If you are launching a new Web application, you may need to perform marketing research analysis to anticipate throughput and concurrent user targets. Historical production data or marketing research helps ensure you execute the performance tests at the right concurrent user levels. If, after you complete your performance tests, your Web application meets your throughput and concurrent usage requirements, you can continue adding load until your Web application either reaches a bottleneck or achieves maximum throughput. Table 2-2 shows predicted throughput goals and concurrent user profile expectations for the IBuySpy sample Web application. Using the information in this table, a performance test script can be created to mimic anticipated load on the Web application. The Ratio column represents the percentage of all user operations that a particular operation accounts for. The anticipated load per hour is taken from historical data, which will be illustrated in the next section of this chapter, and represents how many times per hour that operation typically occurs.

Table 2-2  Throughput and Concurrent User Targets

User Operation          Ratio   Anticipated Load per Hour
Basic Search             14%     1,400
Browse for Product       62%     6,200
Add to Cart              10%     1,000
Login and Checkout        7%       700
Register and Checkout     7%       700
Total                   100%    10,000
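Most load tools let you weight scripts directly, but the mix in Table 2-2 can also be expressed in code. This sketch is illustrative only; it picks the next simulated user operation at random in the table's ratios, so that over a long run the test traffic converges on the anticipated mix:

```python
import random

# Operation weights (percentages) from Table 2-2
OPERATIONS = {
    "Basic Search": 14,
    "Browse for Product": 62,
    "Add to Cart": 10,
    "Login and Checkout": 7,
    "Register and Checkout": 7,
}

def next_operation(rng=random):
    """Choose the next simulated user operation in proportion
    to its share of overall traffic."""
    names = list(OPERATIONS)
    weights = list(OPERATIONS.values())
    return rng.choices(names, weights=weights, k=1)[0]

# Over many iterations the observed mix approaches the table's ratios
sample = [next_operation() for _ in range(10_000)]
browse_share = sample.count("Browse for Product") / len(sample)
```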

Performance Growth Analysis

A performance growth analysis is required if your Web application's user base is expected to grow over a given time period, and you need to account for that growth when performance testing. Performance testing and tuning your Web application after your development cycle is complete costs more in time and money than fixing performance problems during the software development life cycle (SDLC). In the real-world example in Chapter 1, the expenses incurred by finding and fixing performance issues after the SDLC included lost marketing revenues due to bad press, lost users who were not patient enough to wait for slow page views, and test and development labor costs spent troubleshooting and fixing the issue. Taking a little extra time during the performance test cycle to populate your database with additional data, to see how the application performs when the database is larger, will save you money in the long run. Also, execute the stress test with more load and higher levels of concurrent users to predict future bottlenecks. By fixing these issues ahead of your growth curve, you reduce your performance testing and tuning needs in the immediate future and ultimately provide your users with a better experience.

TIP:
One way to preload your database with extra orders, baskets, and so on is to run your performance test script for a sustained period of time (possibly a few days) before executing your actual performance analysis. Because this approach drives the Web application UI, it ensures data is added to your system accurately. A faster way to pre-populate your database is to have the database developer create and test data via SQL scripts; however, a common mistake with SQL scripts is missing tables that would normally be updated through the UI of the Web application. The point is to populate your database accurately; otherwise, your performance test results will be adversely affected.

The easiest way to determine the growth capacity of your Web application is to calculate the increase in volume you are currently experiencing over a specific period of time. For example, assume your user base is growing at a rate of 10 percent per month. Table 2-3 illustrates an anticipated growth plan that can be used when performing your stress tests. This assumes your Web application is currently seeing 10,000 users per day and will grow at a rate of 10 percent per month. When determining your growth rate, don’t forget to account for special promotions that may increase traffic to your Web application.

Table 2-3  Future Growth Profile

Time Period         Users Per Day
Current             10,000
Three months out    13,310
Six months out      17,716
Nine months out     23,579
Twelve months out   31,384
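The projections in Table 2-3 are simple compound growth, and the arithmetic can be sketched as follows (rounding conventions vary, so a published table may differ slightly from these values):

```python
def projected_users(current, monthly_growth, months):
    """Compound the current daily user count by a fixed monthly
    growth rate for the given number of months."""
    return round(current * (1 + monthly_growth) ** months)

# 10,000 users/day growing 10 percent per month, as in Table 2-3
milestones = {m: projected_users(10_000, 0.10, m) for m in (3, 6, 9, 12)}
```

Remember to layer any traffic spikes from planned promotions on top of the baseline curve this produces.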

User Activity Profile

We use IIS logs to create user activity profiles. The IIS logs are text files that contain information about each request; they can be viewed directly in a simple text editor or imported into a log analysis program. We recommend using a set of IIS logs covering at least a week's worth of user activity from your Web application to obtain realistic averages; using more log files produces more reliable usage profiles and weightings. To illustrate the process of creating a user activity profile, we imported a month of IIS log data, from a recent performance analysis of a typical e-commerce Web application, into a commercial log file analyzer. These IIS log files consist of shopper page views for the Homepage, Search, Browse for Product, Add to Basket, and Checkout operations performed on the Web application. The log file analyzer enabled us to generate Table 2-4. Many commercial log file analyzers are available to fit all budgets, and they can accurately import, parse, and report on Web application traffic patterns.

Table 2-4  User Activity Profile

User Operation/Page Name(s)   Number of Page Views   Ratio
Homepage                           720,000             40%
   default.aspx                    720,000             40%
Search                              90,000              5%
   search.aspx                      90,000              5%
Browse                             450,000             25%
   productfeatures.aspx            216,000             12%
   productoverview.aspx            234,000             13%
Add to Basket                      360,000             20%
   basket.aspx                     360,000             20%
Checkout                           180,000             10%
   checkout.aspx                    90,000              5%
   checkoutsubmit.aspx              54,000              3%
   confirmation.aspx                36,000              2%
Totals                           1,800,000            100%
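If a commercial analyzer is not at hand, a profile like Table 2-4 can be tallied directly from the raw logs. This sketch assumes the W3C extended log format with a particular field order (your IIS logging configuration may place cs-uri-stem at a different position), and it counts only page requests, not hits:

```python
from collections import Counter

PAGE_EXTENSIONS = (".aspx", ".asp", ".html")

def page_view_ratios(log_lines):
    """Tally page views per page from W3C extended log lines
    (cs-uri-stem assumed at field index 4) and return each
    page's share of total page views."""
    counts = Counter()
    for line in log_lines:
        if line.startswith("#"):            # skip header directives
            continue
        fields = line.split()
        uri = fields[4].lower()
        if uri.endswith(PAGE_EXTENSIONS):   # ignore images and other hits
            counts[uri] += 1
    total = sum(counts.values())
    return {page: n / total for page, n in counts.items()}

sample = [
    "#Fields: date time c-ip cs-method cs-uri-stem sc-status",
    "2002-10-01 08:00:01 10.0.0.1 GET /default.aspx 200",
    "2002-10-01 08:00:02 10.0.0.1 GET /images/logo.gif 200",
    "2002-10-01 08:00:05 10.0.0.2 GET /search.aspx 200",
    "2002-10-01 08:00:09 10.0.0.1 GET /default.aspx 200",
]
ratios = page_view_ratios(sample)   # the .gif hit is excluded
```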

There is a distinction between a hit and a page view. A hit is a request for any individual object or file on a Web application, while a page view (or request) is the retrieval of an HTML, ASP, or ASP.NET page from a Web application, including the transmittal of the requested page, which can reference many additional page elements. The page is basically what you see after the transfer and can consist of many other files. Page views do not include hits to images, component pages of a frame, or other non-HTML files.

TIP:
To simplify constructing your user profile, leave image and other miscellaneous requests out of it. Also, leave out activity from monitoring tools that ping or access various pages to verify the Web application is functioning properly.


Backend Activity Profile

A backend activity profile is used to identify user activity and performance bottlenecks at the database tier of an existing Web application. This information helps ensure your performance test accurately reflects real usage.

Identifying a Web Application’s User Activity

Existing databases contain concrete information about what your users are doing with your Web application. For a typical e-commerce application, examples include how many baskets are created, how many orders are processed, how many logins occur, how many searches take place, and so on. This information can be gathered with simple queries that extract the data from your existing database. It can assist you in creating user scenarios and user scenario ratios, and it can supply marketing information that supports business decisions. For example, you can compare the number of baskets created to the number of checkouts to find the abandoned basket rate. This information is important to designing your stress test to execute in the correct ratio: if you find that 50 percent of the baskets created turn into actual orders processed, you can mimic this ratio when executing your performance test.
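The queries involved are straightforward counts. The following is a minimal sketch using an in-memory SQLite database and a hypothetical two-table schema; a real basket and order schema will differ, but the calculation is the same:

```python
import sqlite3

# Hypothetical schema for illustration only
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE baskets (basket_id INTEGER PRIMARY KEY);
    CREATE TABLE orders  (order_id  INTEGER PRIMARY KEY,
                          basket_id INTEGER REFERENCES baskets);
    INSERT INTO baskets VALUES (1), (2), (3), (4);
    INSERT INTO orders  VALUES (10, 1), (11, 3);
""")

baskets = conn.execute("SELECT COUNT(*) FROM baskets").fetchone()[0]
orders = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
conversion = orders / baskets   # share of baskets that became orders
abandonment = 1 - conversion    # abandoned basket rate
```

Here half the baskets convert to orders, so a test script mimicking this site would complete checkout on roughly every other created basket.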

Identifying a Web Application’s Backend Performance Bottlenecks

If you are performance testing an existing Web application, you can identify current performance bottlenecks by interrogating the database server for queries that take a long time to process, cause deadlocks, or result in high server resource utilization. This data collection occurs during the planning phase of the performance testing methodology and involves capturing SQL trace data with SQL Profiler, along with Performance Monitor logs comprising the Windows and SQL Server counter objects typical of a Web application. The timeframe for the captured SQL trace should span the period when application performance degrades from acceptable to poor; the captured information will give you a clearer picture of where the bottleneck is occurring. Chapter 8 walks you through the process of determining the source of a SQL performance issue, whose possible causes include blocking, locks, deadlocks, problematic queries, and stored procedures with long execution times.
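Once a trace is captured, filtering it for long-running statements is a simple threshold pass. This sketch assumes trace rows exported as records with TextData and Duration columns (the column names mirror Profiler's, but note that the Duration unit differs across SQL Server versions, milliseconds versus microseconds, so check yours before setting the threshold):

```python
SLOW_MS = 500   # processing delay acceptability threshold

def slow_queries(trace_rows, threshold_ms=SLOW_MS):
    """Return trace events whose duration exceeds the threshold,
    worst offender first."""
    offenders = [r for r in trace_rows if r["Duration"] > threshold_ms]
    return sorted(offenders, key=lambda r: r["Duration"], reverse=True)

# Hypothetical exported trace rows, durations in milliseconds
trace = [
    {"TextData": "EXEC GetProductList", "Duration": 120},
    {"TextData": "EXEC SearchProducts @kw", "Duration": 1450},
    {"TextData": "EXEC SubmitOrder", "Duration": 640},
]
worst = slow_queries(trace)
```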

Key Performance Metrics Criteria

As a performance analyst, tester, or developer, you must produce a blueprint for performance testing the Web application that ensures the high-level performance goals are met. If you don't create a performance test plan, you may find out about requirements too late in the SDLC to properly test for them. Using the performance requirements criteria described above, you now need to identify the key metrics that will be monitored and analyzed during the actual performance test.

TIP:
The breakpoint of your Web application can be defined in various ways. Examples include a server that exceeds predefined resource utilization targets, too many server errors, or unacceptable response times due to processing delays.

Key metrics for the performance test include the following:

  • Server errors acceptability This may seem like a moot point; no server errors are acceptable, because they result in a bad user experience. However, during stress testing you will probably encounter errors, so you should be prepared to understand why they occur and decide whether they will happen in your live production environment with real users hitting your Web application. For example, errors often occur when a stress test first begins and again when it shuts down, caused by too much load arriving too quickly or by uncompleted page requests. Because these errors are artifacts of the stress test itself and are unlikely to recur in production, you can ignore them.
  • Server utilization acceptability This is an important aspect of performance testing. By identifying acceptable utilization up front, you can determine the maximum allowable level your servers should endure; when performance testing, this becomes a key element in determining the maximum load to apply to your Web application. This metric can differ for each Web application and should be documented so that the support, development, test, and management teams can agree on it. For example, you might ramp the Web tier up until you reach 75 percent CPU utilization, at which level each server is handling approximately 2,000 users, meeting the concurrent user targets identified by your performance requirements. With these metrics documented, the support team can monitor the production Web servers for spikes that meet or exceed the performance requirement, and can begin to scale the Web application up or out to support the increased traffic.
  • Memory leaks or other stability issues These issues often surface only when running extended performance tests. For example, if you execute a stress test for a short period of time, you may not find a memory leak or other stability issue that occurs only after an extended period of heavy activity. Multiple tests can often be executed to accomplish different goals: you might run a quick one-hour test to determine your Web application's maximum throughput, and then run a weekend-long extended stress test to determine whether your Web application can sustain that maximum load.
  • Processing delays These will occur in almost every Web application whose complex business logic requires coding. The key is to minimize processing delays to an acceptable amount of time. It's a good idea to know what's acceptable before performance testing, so you don't waste time escalating an issue to your development team that does not require fixing because it already meets performance goals. Examples of processing delay acceptability thresholds include stored procedures taking more than 500 milliseconds and any Web page duration (measured by the time taken field in your Web tier logs) exceeding one second. Table 2-5 shows an example of a performance metric acceptability table; your requirements may differ, but the key point is to come up with a set of requirements that makes sense for your Web application.

Table 2-5  Performance Metrics and Acceptable Levels

Metric                       Location                        Acceptable Level
CPU utilization              Performance Monitor             < 75%
Memory—available MB          Performance Monitor             > 128 MB
Memory—pages/second          Performance Monitor             < 2
ASP execution time           Performance Monitor             < 1 second
DB processing delays         SQL Profiler                    < 500 milliseconds
Web tier processing delays   Time Taken field of Web logs    < 1 second
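An acceptability table like Table 2-5 translates naturally into an automated pass/fail check that can run after each test. The metric names and structure below are illustrative; note that some limits are upper bounds and one (available memory) is a lower bound:

```python
import operator

# Acceptable levels from Table 2-5: (comparison, limit)
THRESHOLDS = {
    "cpu_pct":          (operator.lt, 75),     # < 75%
    "available_mb":     (operator.gt, 128),    # > 128 MB
    "pages_per_sec":    (operator.lt, 2),      # < 2
    "asp_exec_sec":     (operator.lt, 1.0),    # < 1 second
    "db_delay_ms":      (operator.lt, 500),    # < 500 milliseconds
    "web_time_taken_s": (operator.lt, 1.0),    # < 1 second
}

def failing_metrics(measured):
    """Return the names of measured metrics that violate
    their acceptable level."""
    return [name for name, (ok, limit) in THRESHOLDS.items()
            if name in measured and not ok(measured[name], limit)]

run = {"cpu_pct": 82, "available_mb": 256, "pages_per_sec": 1.2}
violations = failing_metrics(run)   # only CPU exceeds its limit
```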

Mirroring the Production Environment

The performance test environment should be as close to the production environment as possible. This includes server capacity and configuration, the network environment, the Web tier load balancing scheme, and your backend database. By mirroring your production environment, you ensure that your throughput numbers will be more accurate.

TIP:
If production equivalent hardware is not feasible for your performance-testing environment, you can still uncover many bottlenecks in the code and architecture. Even though a production equivalent environment is optimal, performance testing is possible in almost any environment you can scrape together.

Putting It Together in a Performance Test Plan

The performance test plan is a strategy or formal approach that allows everyone involved in a Web application, from the development team to the test and management teams, to understand exactly how, why, and what parts of the application are being performance tested. A performance test plan contains the following sections:

  • Application overview This gives a brief description of the business purpose of the Web application. This may include some marketing data stating estimates or historical revenue produced by the Web application.
  • Architecture overview This depicts the hardware and software used for the performance test environment and notes any deviations from the production environment. For example, if you have a cluster of four Web servers in production but only two Web servers in your performance test environment, document it.
  • High-level goals This section illustrates what you are trying to accomplish by performance testing your Web application. Examples include identifying what throughput and concurrent usage levels you will be striving for as well as the maximum acceptable response times.
  • Performance test process This will include a description of your user scenarios, tools you use to stress test, and any intricacies you will put in your stress scripts. This section will also explain what ratios and sleep times or user think times you will include in your test script.
  • Performance test scripts The scripts are unlikely to be completed until after your performance analysis cycle has finished, but it is important to include them in the test plan so they are available for the next release or phase of the Web application test cycle. Because stress test scripts take time and effort to create, having them available as a reference for future testing can save time.

Conclusion

Performance testing is a critical phase of any Web application's development cycle and needs to be on the critical path for release to production. By planning properly before you start, you can ensure a successful performance test that will improve your odds of having a high-performing Web application when your customers begin to use it.
