Microsoft SQL Server 2000 Performance Tuning Technical Reference (Paperback)

by Edward Whalen, Marcilina Garcia, Steve Adrien DeLuca, Dean Thompson

Performance tuning a relational database can be engaging yet frustrating, and this guide gives you the practical information you need to configure and tune a Microsoft® SQL Server™ 2000 database for better, faster, more scalable solutions. The authors start with the basics and build upon them to teach the mechanics of performance tuning and how they affect the whole system. This book also shows how to optimize for the underlying operating system and hardware. It’s the only book of its kind coauthored by engineers who have worked in the SQL Server performance group.

Expert instruction helps you understand these topics:


  • Architectural fundamentals that affect tuning
  • I/O tuning and RAID storage considerations
  • How to tune hardware, database layout, and configuration parameters
  • Feature enhancements for better ease of use, performance, manageability, and reliability
  • How to use the Microsoft Windows® 2000 System Monitor and the SQL Server Profiler to shorten transaction response times
  • How to model software and hardware usage to predict resource consumption, conduct preconfiguration planning, and run what-if scenarios about workload growth to avoid slow response times
  • How to tune online transaction processing (OLTP) systems, data warehouses, and replicated systems
  • How to set up your system for high-performance backup and recovery
  • How to get optimal performance by using Query Analyzer and Profiler to tune SQL statements and stored procedures
  • How to take advantage of indexes and hints

Product Details

Publisher: Microsoft Press
Series: Technical Reference Series
Product dimensions: 7.66(w) x 9.56(h) x 1.40(d)
Age Range: 13 Years


Read an Excerpt

Chapter 5 New Features and Performance Enhancements

The new features in Microsoft SQL Server 2000 extend the ease-of-use, performance, manageability, and reliability of SQL Server 7.0. They also make SQL Server 2000 an excellent database platform not only for small-scale online transaction processing (OLTP) applications, but also for very large-scale OLTP, data warehousing, and e-commerce applications. If you are upgrading directly from SQL Server 6.5 to SQL Server 2000 (bypassing version 7.0) and are not familiar with SQL Server 7.0, it would be helpful for you to read Microsoft SQL Server 7.0 Administrator's Companion (John Fronckowiak, Marcilina Garcia, and Edward Whalen, Microsoft Press, 1999) and Microsoft SQL Server 7.0 Performance Tuning Technical Reference (Steve Adrien DeLuca, Marcilina Garcia, Jamie A. Reding, and Edward Whalen, Microsoft Press, 2000). Enormous architectural changes were implemented in SQL Server 7.0. These changes remain in place in SQL Server 2000 but are not spelled out in this book, which focuses on features new to SQL Server 2000 since SQL Server 7.0.

Extended Memory Support

SQL Server 2000 Enterprise Edition can use the Microsoft Windows 2000 Address Windowing Extensions (AWE) API to support large address spaces. SQL Server supports 8 GB on Windows 2000 Advanced Server, and 64 GB on Windows 2000 Datacenter Server. AWE is supported only on these two operating systems—it is not supported by Windows 2000 Professional or Server editions. SQL Server also includes awe enabled, a new configuration option that lets SQL Server use AWE. (This option is defined in Chapter 4, "System Tuning.") To enable SQL Server to use the extended memory space, set awe enabled to 1. See "Memory Sizes Supported" in Chapter 2, "SQL Server 2000 Architecture Fundamentals," for information on AWE.
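As a quick sketch, you enable the option with sp_configure. Because awe enabled is an advanced option, show advanced options must be turned on first, and the new value takes effect only after the SQL Server service is restarted:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Allow SQL Server to use AWE extended memory; the setting takes
-- effect after the SQL Server service is restarted.
EXEC sp_configure 'awe enabled', 1
RECONFIGURE
GO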

Named Instances of SQL Server

You can run multiple instances of SQL Server 2000, specifically a default instance and named instances, on the same computer, so you can group common application tasks to be served by a specific SQL Server instance. A default instance is identified by the network name of the computer alone. A named instance is identified by the computer name plus an instance name, in the form computer_name\instance_name. Each instance of SQL Server 2000 has its own folder location for its copy of the program files and for its data files. You can have at most one default instance (having no default instance but only named instances is also an option), and you can have multiple SQL Server 2000 named instances, with or without a default instance. The default instance can be SQL Server 6.5, 7.0, or 2000; any instance of SQL Server 6.5 or 7.0 is always the default instance, because named instances of these previous versions are not allowed. Only SQL Server 2000 allows named instances. If you run SQL Server 6.5 as the default instance and a SQL Server 2000 named instance, you can only switch between the two versions, not run them at the same time. With a SQL Server 7.0 default instance and a SQL Server 2000 named instance, both can run at the same time, without having to switch between the two. Applications connect to an instance on a multi-instance computer the same way they would connect to an instance of SQL Server running on a different computer; you simply identify the appropriate instance in the application.

When multiple instances of SQL Server are running, each instance competes for system resources, such as CPU and memory, and each instance yields resources only to the operating system, not to other instances. If you have two instances running, for example, and each has its memory options left at the default so that memory allocation is dynamic, then one instance can take most of the memory, leaving little for the second instance. You should set the memory option max server memory to a specific value for each instance to prevent one instance from taking all of the memory. For example, you might give one instance 60 percent of the memory and the other instance 40 percent. Note that multiple instances running on the same computer will generally not perform as well as if each instance were running on its own computer, because all instances compete for CPU, memory, and disk resources.
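The 60/40 split described above could be sketched with sp_configure as follows. The memory figures assume a hypothetical computer with 4 GB of physical memory and are illustrative only, not a recommendation; max server memory takes a value in megabytes:

-- max server memory is an advanced option; expose it first.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- Run against the first instance: cap it at roughly 60 percent
-- of the assumed 4 GB of physical memory.
EXEC sp_configure 'max server memory', 2400
RECONFIGURE

-- Against the second instance, run the same statements with a
-- value of about 1600 (roughly 40 percent) instead.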

You create an instance using the SQL Server installation CD. You can choose to install a default instance or a named instance. To install a named instance, simply choose the Create A New Instance Of SQL Server option when installing and specify the name of the instance.

Federated Servers with Distributed Partitioned Views

In SQL Server 2000, you can spread databases across multiple servers, called federated servers, which together make up a federation, for greater processing scalability. If your system provides services for large database systems or large Web sites, you might need the processing power of multiple servers to support the transaction load. Distributing processing over multiple servers is called scaling out (as opposed to scaling up, which refers to upgrading to a single, more powerful computer). The servers in a federation are called member servers. Figure 5-1 shows a federation of three member servers.

Figure 5-1.  Federated servers.

You create federated servers by spreading database tables across a group of servers, which you do by creating distributed partitioned views. This involves horizontally partitioning a table into smaller tables, each of which holds a subset of the complete table data. The subset tables are called member tables, and they reside on the member servers. The member tables have the same schema on each of the member servers. The data ranges for member tables are defined by CHECK constraints on the partitioning column. After you create the member tables on the member servers, you create a distributed partitioned view with the same view name on each of the member servers—this view consists of a union of all the member tables. Each view makes it appear that each server has a full copy of the original table. Applications can reference the view rather than a specific member table; they don't need to know which specific member server holds the requested data. You can make both SELECT and data modifications through the view.
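As a sketch of what a member table might look like (the column names here are hypothetical; the excerpt's own worked example below partitions sales data by Region_ID), note that for an updatable partitioned view the partitioning column must be part of the table's primary key and must not allow nulls:

-- Hypothetical member table for the first member server. The
-- partitioning column (Region_ID) is part of the primary key, and
-- its CHECK constraint defines the data range this server holds.
CREATE TABLE WesternRegionSales
(
    Sale_ID   int   NOT NULL,
    Region_ID int   NOT NULL CHECK (Region_ID = 1),
    Amount    money NOT NULL,
    CONSTRAINT PK_WesternRegionSales PRIMARY KEY (Region_ID, Sale_ID)
)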

Here's an example of a distributed partitioned view across three federated servers, Server1, Server2, and Server3. We can split up a table with sales data into three smaller tables based on the three possible values in the partitioning column, Region_ID. The users on the system typically access data in only one of the three regions at a time, so we can split the data for this table and other related tables by region. The application can determine which region the user is referencing and route the user's query to the appropriate member server. If all the data is on that member server, it is retrieved or modified without the application having to access any of the other member servers.

In our example we create three member tables, one on each of the federated servers, and distribute the data for those three member tables according to Region_ID. Specifically, we create a WesternRegionSales table on Server1, a CentralRegionSales table on Server2, and an EasternRegionSales table on Server3. Rows of sales data with a Region_ID of 1 go into the WesternRegionSales table, with a Region_ID of 2 into the CentralRegionSales table, with a Region_ID of 3 into the EasternRegionSales table. Next, we create a distributed partitioned view that encompasses all the member tables, so that applications can access data without having to specify in which of the three tables the data resides. The statement that creates the view on Server1 looks like this:

CREATE VIEW Sales_By_Region AS
SELECT * FROM WesternRegionSales
UNION ALL
SELECT * FROM Server2.SalesDB.dbo.CentralRegionSales
UNION ALL
SELECT * FROM Server3.SalesDB.dbo.EasternRegionSales

On Server2 the view is created as follows:

CREATE VIEW Sales_By_Region AS
SELECT * FROM CentralRegionSales
UNION ALL
SELECT * FROM Server1.SalesDB.dbo.WesternRegionSales
UNION ALL
SELECT * FROM Server3.SalesDB.dbo.EasternRegionSales

And on Server3 the view is created as follows:

CREATE VIEW Sales_By_Region AS
SELECT * FROM EasternRegionSales
UNION ALL
SELECT * FROM Server1.SalesDB.dbo.WesternRegionSales
UNION ALL
SELECT * FROM Server2.SalesDB.dbo.CentralRegionSales

If an application executes a SQL statement that references the Sales_By_Region view, and the application identifies the user as a region 1 user, it routes the request to Server1, although the user cannot tell where the data is located. If all the data requested by the user is found on Server1, there is no need to access Server2 or Server3—the ideal and most efficient model for federated servers. If, on the other hand, the requested data resides in member tables on more than one of the member servers, the SQL Server instance on Server1 generates a distributed query that pulls the result rows in from the other servers. This requires more overhead and processing power, and causes more contention on the tables.

Each member table requires a CHECK constraint, which you can define when you create the table, or alter later. When a SELECT statement is executed that references the partitioned view and contains the partitioning column in the search condition, SQL Server uses the CHECK constraints to determine which member servers contain the resulting rows, and searches only those servers. The CHECK constraint also allows only appropriate data to be inserted or updated in each member table. Without CHECK constraints, SQL Server still retrieves the data, but it must search each member table on each member server, causing unnecessary processing, and the application must be coded to check any modifications to ensure that they go to the appropriate server. This example of a CHECK constraint on Server1 indicates and ensures that only rows with Region_ID of 1 reside on that member server:

ALTER TABLE WesternRegionSales ADD CONSTRAINT Region_Check 
CHECK (Region_ID = 1)

On Server2 the constraint looks like this:

ALTER TABLE CentralRegionSales ADD CONSTRAINT Region_Check 
CHECK (Region_ID = 2)

And on Server3:

ALTER TABLE EasternRegionSales ADD CONSTRAINT Region_Check 
CHECK (Region_ID = 3)

Partitioned views existed in SQL Server 7.0, but you could not update them, so you could use them only for read-only data. In SQL Server 2000, partitioned views are updatable, which means that when you update, insert, or delete data through the partitioned view, the data is modified on the appropriate member server automatically, without the user having to know on which server the data lies. Also, SQL Server 2000 has an improved query optimizer that minimizes the amount of data transferred for distributed queries.
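As a sketch, assuming the Sales_By_Region view from the example above and a hypothetical Amount column in the member tables, a modification made through the view is routed to the correct member table automatically:

-- Executed against any member server; because the search condition
-- fixes Region_ID = 2, SQL Server applies the change only to
-- CentralRegionSales on Server2. (Amount is a hypothetical column.)
UPDATE Sales_By_Region
SET Amount = Amount * 1.10
WHERE Region_ID = 2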

For a detailed description of the rules concerning member tables and partitioning columns, and requirements concerning inserting, updating, and deleting with partitioned views, see "Creating a Partitioned View" in SQL Server Books Online, the online documentation provided with Microsoft SQL Server 2000. We recommend that you read these rules before designing your member tables.

We've talked about partitioning one table of a database, but what about the other tables? Ideally you partition as many tables as possible to maximize parallel processing across the federation of servers, and you locate related data on each member server to avoid distributed queries. In our example, related tables would include any more tables with data that could be partitioned by region ID, so that any data requested for a particular region would be found on the same server.

But there will probably be some tables that all users need full access to. These tables will not be part of a partitioned view. Instead, you can place full copies of these tables on each of the member servers, where they are accessed by their table name rather than a view name. You ensure that each member server gets any modifications to the data in these tables by creating INSTEAD OF triggers on each member server that perform a distributed query to update the data on other member servers. For example, say a table called Products stores the products for sale in all regions. This table exists in its entirety on each of the member servers. To update this table and keep data consistent across the member servers, you could create a trigger as follows on Server1, Server2, and Server3:

CREATE TRIGGER trig_products ON Products
INSTEAD OF INSERT, UPDATE, DELETE
AS
DELETE Prod FROM Server1.SalesDB.dbo.Products Prod, DELETED D
WHERE Prod.Product_ID = D.Product_ID
INSERT Server1.SalesDB.dbo.Products SELECT * FROM INSERTED

DELETE Prod FROM Server2.SalesDB.dbo.Products Prod, DELETED D 
WHERE Prod.Product_ID = D.Product_ID
INSERT Server2.SalesDB.dbo.Products SELECT * FROM INSERTED

DELETE Prod FROM Server3.SalesDB.dbo.Products Prod, DELETED D 
WHERE Prod.Product_ID = D.Product_ID
INSERT Server3.SalesDB.dbo.Products SELECT * FROM INSERTED

As you can see, this trigger causes distributed processing to occur in order to complete the data modification on all of the member servers. To repeat, to get optimal performance with a federation, you should minimize the amount of distributed processing. The more data transfer between the federated servers, the slower the federation performance. It is best to distribute data between the member tables in the federation such that users only need to read or modify data from the one member server they access directly.

For federated servers to be an option for your system, your databases must have certain characteristics. If your databases are somewhat complex, it can be quite difficult to redesign them to be distributed in a way that fits the federated servers model. If you are designing your databases from scratch, the task might be easier because you can plan your table schemas for distribution ahead of time. If implementing federated servers is not an option for you, another possibility is to offload certain tasks to another server that has a copy of the entire database, and use replication between the servers to keep the data synchronized. This is commonly done when separating OLTP tasks from reporting tasks....

Meet the Author

Edward Whalen is an expert in database performance, administration, and backup recovery solutions.

Marcilina Garcia specializes in performance benchmarks, database design, and configuration.

Steve Adrien DeLuca has been a program manager responsible for developing performance tools at Microsoft Corporation since 1998, and is currently developing performance and capacity planning solutions for Microsoft's Distributed Management Division. While at Microsoft, Mr. DeLuca has written three other books and has filed for eleven patents for work in capacity planning and performance. Prior to working at Microsoft, Mr. DeLuca worked as an architect engineer at Oracle Corporation, where he co-invented and developed the Oracle System Sizer. In addition to his work at Microsoft and Oracle, Mr. DeLuca has served as a performance engineer specializing in sizing and capacity planning for organizations such as DEC, Tandem, Apple, and the U.S. Air Force. Mr. DeLuca has been participating in performance benchmarks, developing performance tools, and lecturing about them around the world since 1980.

Michael "Dean" Thompson has worked as a Senior Consultant for Performance Tuning Corporation and as a Technology Specialist with Microsoft for SQL Server in the Gulf Coast District. Prior to Microsoft, Dean was an application development consultant in Dallas with over 13 years of experience. Currently, Dean is working as a contract database analyst and data architect in the Houston area. When he is not working on SQL Server, he is developing web applications in ASP and Perl, or performance tuning his Ford Mustang.
