
Performance Testing Microsoft .NET Web Applications
by the Microsoft ACE Team (Microsoft Press)

The shift from stand-alone desktop applications to Web-enabled applications that accommodate hundreds of thousands of simultaneous users makes performance testing critical. Find out how to make your Microsoft® .NET-based applications perform as well as or better than traditional desktop applications with this book—written by the Microsoft team that tests and tunes hundreds of Web sites and Web-based applications. You’ll learn how to take advantage of the best available tools to plan and execute performance tests, configure profile tools, analyze performance data from Microsoft Internet Information Services, Microsoft ASP.NET, managed code, the SQL tier, and more. You’ll also learn how to use the testing methodology that Microsoft uses to stress test its own sites—some of the most popular, high-performance Web sites in the world.

Topics covered include:

  • The testing methodology used on Microsoft.com, Xbox™.com, and other high-volume sites
  • Planning the performance test
  • Stress testing with Microsoft Application Center Test (ACT)
  • Monitoring application performance with Performance Monitor
  • Testing Web site security
  • Application Network Analysis
  • Analyzing the Web tier
  • Analyzing managed code
  • Analyzing the SQL tier
  • Transaction Cost Analysis (TCA)


The companion CD-ROM includes:

  • A fully searchable electronic copy of the book
  • Scripts that test the performance of IBuySpy.com

A Note Regarding the CD or DVD

The print version of this book ships with a CD or DVD. For those customers purchasing one of the digital formats in which this book is available, we are pleased to offer the CD/DVD content as a free download via O'Reilly Media's Digital Distribution services. To download this content, please visit O'Reilly's web site, search for the title of this book to find its catalog page, and click on the link below the cover image (Examples, Companion Content, or Practice Files). Note that while we provide as much of the media content as we are able via free download, we are sometimes limited by licensing restrictions. Please direct any questions or concerns to booktech@oreilly.com.

Product Details

Publisher: Microsoft Press
Publication date:
Series: Developer Reference Series
Edition description:
Product dimensions: 7.62(w) x 9.24(h) x 1.15(d)

Read an Excerpt

Chapter 2.
Preparing and Planning for the Performance Test
Identifying Performance Goals
Response Time Acceptability Goals and Targets
Throughput Goals and Concurrent User Targets
Performance Growth Analysis
User Activity Profile
Backend Activity Profile
Identifying a Web Application’s User Activity
Identifying a Web Application’s Backend Performance Bottlenecks
Key Performance Metrics Criteria
Mirroring the Production Environment
Putting It Together in a Performance Test Plan

2 Preparing and Planning for the Performance Test

Often Web applications fail to meet their customers’ needs and expectations. When a Web application generates errors, has poor response times, or is unavailable, customers can easily become frustrated. If your performance test procedure or methodology is not well thought out and properly planned, the odds of a successful Web application launch are significantly reduced. This chapter identifies the key processes and planning required before you execute a single performance test. By following these steps you will enhance your odds of executing an effective Web application performance test. These steps include identifying performance goals, creating a user activity profile, and defining the key metrics to monitor and analyze when creating a performance test plan.

We have found that many performance test projects fail because testing begins too late in the Web application development cycle, or because their requirements are too complex to complete in the allotted time. Focus on the key elements of your Web application and on the user scenarios that will occur most often. If time permits, you can always go back and execute a performance test on other, rarely used features.

Identifying Performance Goals

High-level performance goals are critical to ensuring your Web application meets or exceeds current and future projected requirements. The best approach is to base these goals on historical data or extensive marketing research. E-commerce Web applications that can't handle the peak holiday shopping rush are classic examples of poor planning. Every year the media publicizes Web applications that cannot process all their orders, or that suffer from slow user response times, Web server error messages, or system downtime. This costs not only lost sales, but bad press as well.

High-level performance requirements can be broken down into the following three basic categories:

  • Response time acceptability
  • Throughput and concurrent user goals
  • Future performance growth requirements

Response Time Acceptability Goals and Targets

By researching how and where your users will connect to your Web application, you can build a table similar to Table 2-1 to show the connection speeds and latency of your potential customers. This can help you determine an acceptable amount of time it can take to load each page of your Web application.

Table 2-1 Predicted Connection Speeds

User         Worst Connection      Average Connection    Best Connection
Line Speed   28.8-kbps modem       256-kbps DSL          1.5-Mbps T1
Latency      1,000 milliseconds    100 milliseconds      50 milliseconds

Once you have identified how your user base will access your Web application, you can determine your response time acceptability targets. These targets define how long it can acceptably take for user scenarios or content to load on various connections. For example, all things being equal, a 70-kilobyte page will obviously load faster on a 256-kbps DSL connection than on a 28.8-kbps modem connection. The response time acceptability for the 28.8-kbps modem might be 15 seconds, while the 256-kbps DSL connection might be significantly less, at 5 seconds. Response time acceptability targets are useful when you perform an application network analysis, which is discussed in detail in Chapter 5. The purpose of conducting the application network analysis is to predict response times at various connection speeds and latencies, determine the amount of data transferred between each tier, and determine how many network round trips occur with each step of a user scenario. If you do not have historical data or projections for potential customer connection speeds and latencies, we recommend using worst-case estimates. The data in Table 2-1 represents worst, average, and best cases for typical end-user Internet connection speeds.
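The arithmetic behind such a prediction can be sketched as a simple model: serialization time (page size over line speed) plus a latency cost per network round trip. This is only an illustrative approximation, not the Chapter 5 methodology; the function name and the round-trip count are assumptions.

```python
def estimate_response_time(page_kb, line_kbps, latency_ms, round_trips):
    """Rough response-time estimate in seconds.

    Simplified model for illustration only: transfer time plus cumulative
    round-trip latency. A real application network analysis also accounts
    for protocol overhead, congestion, and parallel connections.
    """
    transfer_s = (page_kb * 8) / line_kbps            # serialization time
    latency_s = (latency_ms / 1000.0) * round_trips   # round-trip delay
    return transfer_s + latency_s

# A 70 KB page over a 28.8-kbps modem with 1,000 ms latency, assuming
# 10 round trips (an illustrative figure):
modem = estimate_response_time(70, 28.8, 1000, 10)
# The same page over 256-kbps DSL with 100 ms latency:
dsl = estimate_response_time(70, 256, 100, 10)
```

Even this crude model shows why the modem target (15 seconds) must be so much looser than the DSL target (5 seconds): both the transfer term and the latency term grow on the slower connection.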

Throughput Goals and Concurrent User Targets

Answering the following questions will help to determine throughput goals and concurrent user targets:

  • How many concurrent users do we currently sustain or expect in a given time period?
  • What actions does a typical user perform on our Web application, and which pages receive the most page views in a given time period?
  • How many user scenarios will my Web application process in a given time period?

The best place to gather this information is from historical data, which can be found in Web server log files, System Monitor data, and database activity monitoring. If you are launching a new Web application, you may need to perform marketing research analysis for anticipated throughput and concurrent user targets. Historical production data or marketing research helps ensure you execute the performance tests using the right concurrent user levels. If, after you complete your performance tests, your Web application meets your throughput and concurrent usage requirements, you can continue adding load until your Web application either reaches a bottleneck or achieves maximum throughput. Table 2-2 shows predicted throughput goals and concurrent user profile expectations for the IBuySpy sample Web application. Using the information in this table, a performance test script can be created to mimic the anticipated load on the Web application. The Ratio column represents the percentage of all user operations that each particular operation accounts for. The Anticipated Load per Hour column is taken from historical data, which is illustrated in the next section of this chapter, and represents how many times per hour each operation typically occurs.

Table 2-2 Throughput and Concurrent User Targets

User Operation           Ratio    Anticipated Load per Hour
Basic Search             14%      1,400
Browse for Product       62%      6,200
Add to Cart              10%      1,000
Login and Checkout       7%       700
Register and Checkout    7%       700
Total                    100%     10,000
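A stress script can turn the ratios in Table 2-2 into a weighted operation mix. The sketch below is in Python rather than any particular stress tool's scripting language, and the function names are illustrative:

```python
import random

# Operation mix taken from Table 2-2; the weights are the Ratio column.
OPERATION_MIX = {
    "Basic Search": 14,
    "Browse for Product": 62,
    "Add to Cart": 10,
    "Login and Checkout": 7,
    "Register and Checkout": 7,
}

def next_operation(rng=random):
    """Pick the next user operation in proportion to its Table 2-2 ratio."""
    ops = list(OPERATION_MIX)
    weights = [OPERATION_MIX[op] for op in ops]
    return rng.choices(ops, weights=weights, k=1)[0]

def hourly_targets(total_per_hour=10_000):
    """Convert the ratios into anticipated executions per hour."""
    return {op: total_per_hour * w // 100 for op, w in OPERATION_MIX.items()}
```

Driving scenario selection from one weight table keeps the script in step with the plan: if the ratios change, only `OPERATION_MIX` needs updating.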

Performance Growth Analysis

A performance growth analysis is required if your Web application's user base is expected to grow over a given time period; you need to account for that growth when performance testing. Performance testing and tuning your Web application after the development cycle is complete will cost more in time and money than fixing performance problems during the software development life cycle (SDLC). In the real-world example in Chapter 1, the expenses incurred by finding and fixing Web application performance issues after the SDLC included lost marketing revenue due to bad press, users lost because they were not patient enough to wait for slow page views, and the test and development labor spent troubleshooting and fixing the issue. Taking a little extra time during the performance test cycle to populate your database with additional data, to see how it will perform when it is larger, will save you money in the long run. Also, execute the stress test with more load and higher levels of concurrent users to predict future bottlenecks. By fixing these issues ahead of your growth curve, you will reduce your performance testing and tuning needs in the immediate future and ultimately provide your users with a better experience.


Tip One way to preload your database with extra orders, baskets, and so on is to run your performance test script for a sustained period of time (possibly a few days) before executing your actual performance analysis; because the data enters through the Web application's UI, it will be added to your system accurately. The fastest way to pre-populate your database is to have the database developer create and test data via SQL scripts. However, a common mistake when pre-populating a database with SQL scripts is to miss certain tables that are normally updated through the UI of the Web application. The point is to populate your database accurately; otherwise, your performance test results will be adversely affected.

The easiest way to determine the growth capacity of your Web application is to calculate the increase in volume you are currently experiencing over a specific period of time. For example, assume your user base is growing at a rate of 10 percent per month. Table 2-3 illustrates an anticipated growth plan that can be used when performing your stress tests. This assumes your Web application is currently seeing 10,000 users per day and will grow at a rate of 10 percent per month. When determining your growth rate, don’t forget to account for special promotions that may increase traffic to your Web application.

Table 2-3 Future Growth Profile

Time Period          Users per Day
Current              10,000
Three months out     13,310
Six months out       16,104
Nine months out      21,434
Twelve months out    28,529
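The growth figures follow from compounding the current daily volume at the monthly growth rate. A minimal sketch of that arithmetic (the function name is illustrative):

```python
def projected_daily_users(current=10_000, monthly_rate=0.10, months=0):
    """Project daily user volume at a compound monthly growth rate."""
    return round(current * (1 + monthly_rate) ** months)

# Three months out at 10 percent per month:
three_months = projected_daily_users(months=3)   # 13,310, as in Table 2-3
```

The same function also lets you layer in one-off adjustments, such as a traffic multiplier for a special promotion, before settling on the load levels for your stress tests.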

User Activity Profile

We use IIS logs to create user activity profiles. The IIS logs are text files that contain information about each request; they can be viewed directly with a simple text editor or imported into a log analysis program. We recommend using a set of IIS logs covering at least a week's worth of user activity from your Web application to obtain realistic averages; using more log files creates more reliable usage profiles and weightings. To illustrate the process of creating a user activity profile, we imported a month of IIS log data from a recent performance analysis of a typical e-commerce Web application into a commercial log file analyzer. These IIS log files consist of shopper page views related to Homepage, Search, Browse for Product, Add to Basket, and Checkout operations performed on the Web application. The log file analyzer enabled us to generate Table 2-4. Many commercial log file analyzers that fit all budgets are available; they can accurately import, parse, and report on Web application traffic patterns.

Table 2-4 User Activity Profile

User Operation / Page Name(s)    Number of Page Views    Ratio
Homepage                         720,000                 40%
  default.aspx                   720,000                 40%
Search                           90,000                  5%
  search.aspx                    90,000                  5%
Browse                           450,000                 25%
  productfeatures.aspx           216,000                 12%
  productoverview.aspx           234,000                 13%
Add to Basket                    360,000                 20%
  basket.aspx                    360,000                 20%
Checkout                         180,000                 10%
  checkout.aspx                  90,000                  5%
  checkoutsubmit.aspx            54,000                  3%
  confirmation.aspx              36,000                  2%
Totals                           1,800,000               100%

There is a distinction between a hit and a page view. A hit is a request for any individual object or file on a Web application, while a page view is a request to retrieve an HTML, ASP, or ASP.NET page and the transmittal of the requested page, which can contain references to many additional page elements. The page is basically what you see after the transfer and can consist of many other files. Page views do not include hits to images, component pages of a frame, or other non-HTML files.


Tip To simplify constructing your user profile, leave Web application traffic such as image and miscellaneous requests out of the profile. Also, leave out activity from monitoring tools that ping or access various pages to verify that the Web application is functioning properly.
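Building a profile like Table 2-4 amounts to tallying page views per .aspx page from the IIS logs. The sketch below assumes the W3C extended log format with a #Fields: directive, which IIS writes by default; a commercial analyzer does far more, and the helper name is illustrative.

```python
from collections import Counter

def page_view_ratios(log_lines):
    """Tally page-view ratios per .aspx page from W3C extended IIS log lines.

    Assumes a #Fields: directive naming cs-uri-stem. Hits on images and
    other static files are excluded, per the profile-building rules above.
    """
    fields = []
    counts = Counter()
    for line in log_lines:
        line = line.strip()
        if line.startswith("#Fields:"):
            fields = line.split()[1:]   # column names for subsequent rows
            continue
        if not line or line.startswith("#"):
            continue                    # skip other directives and blanks
        values = line.split()
        uri = values[fields.index("cs-uri-stem")].lower()
        if uri.endswith(".aspx"):       # count page views only, not hits
            counts[uri] += 1
    total = sum(counts.values())
    return {page: n / total for page, n in counts.items()}
```

Feeding a week or more of logs through such a tally yields the ratio column directly, ready to be grouped into user operations as in Table 2-4.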

Backend Activity Profile

A backend activity profile is used to identify user activity and performance bottlenecks at the database tier of an existing Web application. This information can be useful to ensure your performance test is accurate.

Identifying a Web Application’s User Activity

Existing databases contain concrete information about what your users are doing with your Web application. For a typical e-commerce application, examples of this information include how many baskets are being created, how many orders are processed, how many logins occur, how many searches are taking place, and so on. It can be gathered using simple queries against your existing database. This data can assist you in creating user scenarios and user scenario ratios, or provide marketing information that can help you make decisions on the business side. For example, you can compare the number of baskets created to the number of checkouts to find the abandoned basket rate. This information is important for designing your stress test to execute operations in the correct ratio: if you find that 50 percent of the baskets created turn into actual orders processed, you can mimic this ratio when executing your performance test.
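The abandoned basket calculation is simple arithmetic once the counts are in hand. The sketch below assumes the counts have already been pulled from the database; the table names in the comment are illustrative, not from the book.

```python
def abandoned_basket_rate(baskets_created, orders_processed):
    """Share of baskets that never became orders (0.0 to 1.0)."""
    if baskets_created == 0:
        return 0.0
    return 1 - orders_processed / baskets_created

# Hypothetical counts, e.g. obtained with queries along the lines of
#   SELECT COUNT(*) FROM Baskets   and   SELECT COUNT(*) FROM Orders
rate = abandoned_basket_rate(2_000, 1_000)
```

With a 0.5 result, the stress script's Checkout operations would be weighted at half the Add to Basket operations, mimicking the observed 50 percent conversion.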

Identifying a Web Application’s Backend Performance Bottlenecks

If you are performance testing an existing Web application, you can identify current performance bottlenecks by interrogating the database server for queries that take a long time to process, cause deadlocks, or result in high server resource utilization. This data collection occurs during the planning phase of the performance testing methodology and involves capturing SQL trace data with SQL Profiler, along with Performance Monitor logs that consist of the Windows and SQL Server objects typical of a Web application. Ideally, the timeframe for the captured SQL trace should cover the period when application performance goes from acceptable to poor. The captured information will give you a clearer picture of where the bottleneck is occurring. Chapter 8 walks you through the process of determining the source of a SQL performance issue; possible causes include blocking, locks, deadlocks, problematic queries, and stored procedures with long execution times.

Key Performance Metrics Criteria

As a performance analyst, tester, or developer, you must produce a blueprint for how to performance test the Web application to ensure that the high-level performance goals are met. If you don't create a performance test plan, you may find out about requirements too late in the SDLC to properly test for them. Using the performance requirements criteria in the sections above, you now need to identify the key metrics that will be monitored and analyzed during the actual performance test.


Note The breakpoint of your Web application can be defined in various ways. Examples include a server that exceeds predefined resource utilization targets, too many server errors, or unacceptable response times due to processing delays.

Key metrics for the performance test include the following:

Server errors acceptability This may seem like a moot point because no server errors are acceptable; they result in a bad user experience. However, during stress testing you will probably come across such errors, so you should be prepared to understand why they are occurring and to decide whether they would happen in your live production environment with real users hitting your Web application. For example, errors often occur when a stress test first begins, and again when it shuts down, caused by too much load arriving too quickly or by uncompleted page requests. Because these errors are caused by the stress test itself, you can ignore them; they are unlikely to recur in the production environment.

Server utilization acceptability This is an important aspect of performance testing. By identifying it up front, you will be able to determine the maximum allowable level your servers should endure, which in turn is a key element in determining the maximum load to apply to your Web application. This metric can differ for each Web application and should be documented so that the support, development, test, and management teams can agree on it. For example, you might ramp the Web tier up until you reach 75 percent CPU utilization. If at this level you are serving approximately 2,000 users per server, you meet the concurrent user targets identified by your performance requirements. With these metrics documented, the support team can monitor the production Web servers for spikes that meet or exceed the performance requirement and can begin to scale the Web application up or out to support the increased traffic.

Memory leaks or other stability issues These issues often arise when running extended performance tests. For example, if you execute a stress test for a short period of time you may not find the memory leak or other stability issues that only occur after an extended period of heavy activity. Many times multiple tests can be executed to accomplish different goals. You may want to run a quick one-hour test to determine your Web application’s maximum throughput, and then run a weekend-long extended stress test to determine if your Web application can sustain this maximum load.

Processing delays These will occur in almost every Web application where complex business logic requires coding. The key is to minimize processing delays to an acceptable amount of time. It's a good idea to know what's acceptable before performance testing, so you don't waste time escalating an issue to your development team that does not require fixing because it already meets performance goals. Examples of processing delay acceptability are shown in Table 2-5 and include stored procedures taking more than 500 milliseconds and any Web page (measured by the time-taken field in your Web tier logs) taking more than one second to process. Table 2-5 shows an example of a performance metric acceptability table; your requirements may differ, but the key point is to come up with a set of requirements that makes sense for your Web application.

Table 2-5 Performance Metrics and Acceptable Levels

Metric                       Location                        Acceptable Level
CPU utilization              Performance Monitor             < 75%
Memory—available MB          Performance Monitor             > 128 MB
Memory—pages/second          Performance Monitor             < 2
ASP execution time           Performance Monitor             < 1 second
DB processing delays         SQL Profiler                    < 500 milliseconds
Web tier processing delays   Time Taken field of Web logs    < 1 second
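An analysis script can encode an acceptability table like Table 2-5 as a set of predicates and flag any monitored samples that exceed their levels. This is an illustrative sketch; the metric keys and helper name are assumptions, not from the book.

```python
# Acceptable levels modeled on Table 2-5, one predicate per metric.
ACCEPTABLE = {
    "cpu_pct": lambda v: v < 75,            # CPU utilization < 75%
    "available_mb": lambda v: v > 128,      # available memory > 128 MB
    "pages_per_sec": lambda v: v < 2,       # memory pages/second < 2
    "asp_exec_sec": lambda v: v < 1,        # ASP execution time < 1 s
    "db_delay_ms": lambda v: v < 500,       # DB processing delay < 500 ms
    "web_time_taken_sec": lambda v: v < 1,  # Web tier time-taken < 1 s
}

def violations(sample):
    """Return the metric names in `sample` that fail their acceptable level."""
    return sorted(name for name, ok in ACCEPTABLE.items()
                  if name in sample and not ok(sample[name]))
```

Running every collected sample through such a check turns the agreed-upon table into an objective pass/fail gate that the support, development, and test teams can all apply consistently.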

Mirroring the Production Environment

The performance test environment should be as close to the production environment as possible. This includes the server capacity and configuration, network environment, the Web tier load balancing scheme, and your backend database. By mirroring your production environment you ensure that your throughput numbers will be more accurate.


Tip If production-equivalent hardware is not feasible for your performance testing environment, you can still uncover many bottlenecks in the code and architecture. Even though a production-equivalent environment is optimal, performance testing is possible in almost any environment you can scrape together.

Putting It Together in a Performance Test Plan

The performance test plan is a strategy or formal approach that allows everyone involved in a Web application, including the development, test, and management teams, to understand exactly how, why, and what part of the application is being performance tested. The following sections are found in a performance test plan:

Application overview This gives a brief description of the business purpose of the Web application. This may include some marketing data stating estimates or historical revenue produced by the Web application.

Architecture overview This depicts the hardware and software used for the performance test environment, and will include any deviations from the production environment. For example, document it if you have a Web cluster of four Web servers in the production environment, but only two Web servers in your performance test environment.

High-level goals This section illustrates what you are trying to accomplish by performance testing your Web application. Examples include identifying what throughput and concurrent usage levels you will be striving for as well as the maximum acceptable response times.

Performance test process This will include a description of your user scenarios, tools you use to stress test, and any intricacies you will put in your stress scripts. This section will also explain what ratios and sleep times or user think times you will include in your test script.

Performance test scripts The scripts are unlikely to be completed until after your performance analysis cycle has finished, but it is important to include them in the test plan so they are available for the next release or phase of the Web application test cycle. Because stress test scripts take time and effort to create, having them available as a reference for future testing can save time.


Performance testing is a critical phase of any Web application’s development cycle and needs to be the critical path for release to production. By planning properly before you start, you can ensure a successful performance test that will improve your odds of having a high-performing Web application when your customers begin to use it.

Meet the Author

Founded in 1975, Microsoft (Nasdaq: MSFT) is the worldwide leader in software for personal and business computing. The company offers a wide range of products and services designed to empower people through great software—any time, any place, and on any device.

Customer Reviews

Most Helpful Customer Reviews

Performance Testing Microsoft .NET Web Applications: rated 5 out of 5 based on 2 reviews.
Guest More than 1 year ago
Finally… a well-written, clear, and accurate book that leads the reader through the performance testing maze. This should be mandatory reading for all serious Web developers, testers, and program managers.
Guest More than 1 year ago
I've recently had an opportunity to read the Microsoft ACE Team's (http://aceteam) book "Performance Testing Microsoft .NET Web Applications" and would highly recommend it to anyone who is looking to find performance bugs or increase performance in their team's Web applications. The book is well written, gives real-world examples, explains the most common bottlenecks in different areas and how to detect them (disks, network, Web tier, SQL tier, and so on), and has a great section on how TCA works. MS Press would also love your feedback on the book. The book is available through MS Market at cost (go to MS Market and search for ISBN 0-7356-1538-1). I highly recommend this book.