Managing Microsoft Exchange Server


by Paul Robichaux, Robert Denn



Microsoft Exchange is a big, complicated application; it requires more disk storage than Windows NT Server and has several hundred configuration property pages and dialogs. But it is also a very powerful and flexible messaging system. However, knowing that it can be made to do something and understanding how to do it are often worlds apart. Managing Microsoft Exchange Server bridges this gap. This book is a no-nonsense, practical guide to planning, installing, managing, maintaining, and troubleshooting Exchange networks. Targeted at medium-sized installations and up, Managing Microsoft Exchange Server addresses the difficult problems these users face: Internet integration, storage management, cost of ownership, system security, and performance management. It goes beyond the basics to provide real hands-on advice about what you need to know after you have your first site up and running and are facing issues of growth, optimization, or recovery planning. Managing Microsoft Exchange Server comprehensively explains how Exchange works, what it can do, and how you can make it work for you.

Editorial Reviews

Ray Duncan

I've been running a departmental Exchange server since early 1996, and recently became indirectly responsible for the Exchange network for an entire health system -- 15 servers, 5500 users, and still growing. Tending a farm of Exchange servers is strongly reminiscent of caring for premature infants (my "other" career). Both have a lot of potential, but both are complex organisms based on a large number of loosely coupled, immature systems, both are fragile and unpredictable, and both can go from a state of good health to death's door in a matter of minutes.

Every Exchange administrator learns quickly that, in order to survive, one must regard Exchange updates and service packs as a game of Russian Roulette, subscribe to Exchange mailing lists, search the Microsoft knowledge bases on TechNet on a regular basis, examine each successive Exchange Resource Kit diligently for the tools and capabilities that should have been included in the base system, and exhibit the patience and persistence of Job when wrangling with Microsoft phone support. Only a monopoly like Microsoft could dominate the email market with such a flaky product!

In the course of my own efforts to keep Exchange from falling over (or getting it back on its feet afterwards), I've looked through a lot of different books on Exchange administration. From my own experience, I've developed a short checklist of things to look for that tell me whether the author's objectivity and depth of experience make the book worth buying:

  • A clear description of the management of public folders, including the procedure to "re-home" a public folder, and an explanation of the differences between "affinity" and "replication" and their impact on user access to public folders within sites and across site boundaries.
  • How to create a standard Exchange site connector between two sites when there is no trust relationship between the underlying NT domains, including the obscure fact that at least one of the machines involved in the site connector must be a PDC.
  • The benefits, pitfalls, and chronic stability problems of Outlook Web Access (OWA) -- the Active Server Page interface to Exchange mailboxes, public folders, and the global address list -- and the crucial fixes for OWA in Exchange version 5.5 Service Pack 2.
  • The features and problems of the Exchange, Schedule+, and Outlook clients on the various supported platforms, how to set up each of the clients for off-line use and automatic or on-demand synchronization, and the server-side control of the contents of the off-line address book (OAB), scheduling of OAB generation, and forcing OAB compatibility with Exchange 4.0/5.0 clients.
  • Moving mailboxes and servers from one site to another, safely de-installing the first server in a site, a systematic approach to disaster recovery, and how to change the organization name for an Exchange network (you can't).
  • The perils of using Microsoft's NT clustering solution ("Wolfpack") for Exchange versus NT high-availability solutions from other vendors -- specifically the notorious problems with Wolfpack's performance, reliability of failover, efficient use of resources, and Microsoft phone support.

It's a sad commentary on technical trade book publishing that, until now, all of the Exchange administration books I've seen have failed the above checklist miserably, even though these are issues that are faced by every Exchange administrator. Why? Because most of the books are simply rehashes of Microsoft's own Exchange documentation, which also fails to provide accurate or complete information on the above topics (and many others).

Paul Robichaux's new book Managing Microsoft Exchange Server, on the other hand, passes nearly every item on my checklist with flying colors, coming up short only on the Exchange-on-NT clustering topic (Paul discusses it briefly, but does not warn potential users away from it strongly enough, and omits the crucial fact that Microsoft does not officially support Exchange on their own clustering solution). The short excerpt below alone contains several pearls that almost every Exchange administrator has had to learn the hard way and that can't be found in any Microsoft Exchange manual.

True to form for O'Reilly and Associates, Managing Microsoft Exchange Server is eminently practical and accurate, well structured, carefully edited, and a pleasure to read. Every Exchange administrator will want to have a copy of this book readily available at home and at work. Highly recommended.
Electronic Review of Computer Books

Product Details

O'Reilly Media, Incorporated
Product dimensions: 7.07(w) x 9.21(h) x 1.30(d)

Read an Excerpt

Chapter 2: Exchange Architecture

The art of building, or architecture, is the beginning of all the arts that lie outside the person. --Havelock Ellis, The Dance of Life

In Chapter 1, Introducing Exchange Server, you got a gentle introduction to Exchange. Now it's time to dig deeper and tear into the underlying Exchange architecture, including the process by which mail is addressed and routed, where it's stored, and how Exchange security works. Understanding the underpinnings of the Exchange services is a prerequisite to planning your Exchange implementation, since you need to know how everything works to get an accurate idea of what network and server configuration will meet your needs.

Exchange Message Addressing

If you're used to ordinary DNS-style user@host.domain names, you may find Exchange addressing confusing at first. Let's start with the reassuring basic fact that every addressable object--mailboxes, public folders, and servers--has at least one address, and may have more depending on which connectors you've installed and how they're configured.

Each server in a site has a site address, and each mailbox or public folder has a recipient address. These addresses must be unique. Site addresses are built by concatenating the organization and site names you provide; recipient addresses may be automatically generated using a number of prebuilt styles, or you can customize them using a variety of templates and tools. No matter how they're generated, recipient addresses combine a mailbox address with a site address.

Exchange identifies every object with a distinguished name, or DN. DNs are so called because they distinguish objects from one another; Exchange DNs include whatever combination of organization, site, and recipient names exists for a particular object. All Exchange components try first to use DNs for address resolution. If a DN can't be resolved, Exchange will try to use the X.400 address for the object instead.
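The DN-first, X.400-fallback behavior can be sketched as a simple lookup. This is an illustrative Python model only; the directory entries, field names, and address strings are invented, not a real Exchange API.

```python
# Hypothetical sketch of Exchange address resolution: components try the
# distinguished name (DN) first and fall back to the X.400 address.
# The dict layout and example addresses below are made up for illustration.

def resolve_address(entry: dict) -> str:
    """Return the address Exchange would prefer for this directory entry."""
    dn = entry.get("dn")        # e.g. "/o=RA/ou=HSV/cn=Recipients/cn=paulr"
    if dn:
        return dn               # DNs always win when they can be resolved
    x400 = entry.get("x400")    # fallback when no DN is available
    if x400:
        return x400
    raise LookupError("no resolvable address for entry")

mailbox = {"dn": "/o=RA/ou=HSV/cn=Recipients/cn=paulr",
           "x400": "c=US;a= ;p=RA;o=HSV;s=Robichaux"}
print(resolve_address(mailbox))                        # DN is preferred
print(resolve_address({"x400": "c=US;a= ;p=RA;o=HSV;s=Denn"}))  # X.400 fallback
```

The `LookupError` at the end mirrors the warning that follows: an object with neither a usable DN nor an X.400 address simply can't be addressed.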

WARNING: At some point, you may be tempted to remove the X.400 address for a mailbox or public folder. Don't succumb to the temptation--Exchange requires those addresses to be present and correct even if you're not using the X.400 connector.

Custom Recipients

As long as your mail system only includes Exchange servers, you can get by with using only Exchange addresses. As soon as you mix in SMTP or other mail systems, though, you'll encounter custom recipients. A custom recipient address is nothing more than an address in the Exchange directory that uses some other mail system format. For example, let's say you're setting up an Exchange server for a company that frequently works with outside consultants. You could add a custom recipient for each consultant's SMTP address so that those addresses would appear in the organization directory, along with their human-readable names.

When a message is addressed to a custom recipient, the originator and recipient DNs (which originally came from the Exchange directory) are rewritten with equivalent addresses using the format of the custom address. For example, if you send a message to a custom recipient with an SMTP address, the Exchange MTA will rewrite the addresses in SMTP format.

Address Spaces

Each connector defines an address space, or range of addresses it knows how to handle. Microsoft's documentation says that the "address space represents the path a connector uses to send messages outside the site." This is correct, but it might be more helpful to say that address spaces define which addresses the connector will send to, not just the path used.

Each address space represents a range of addresses, like *.com or OREILLY/MSMAILPO/*. The address space has an associated routing cost, which is used by the MTA to find the lowest-cost path to a message destination. Each connector must have at least one address space, but it can have more, with different address ranges and costs.

When the MTA prepares to deliver a message, it will use the address space data in the gateway address routing table to find all the connectors that have address spaces matching the address. If you define an SMTP address space of *.org, the connector that owns that space will accept messages sent to any address in the .org domain, but it won't match addresses in any other domain. You can use this feature to control message routing. You can also define address space restrictions, which are applied to keep messages from being sent over a particular connector.
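Address-space matching behaves much like filename wildcards, so a small sketch can make the idea concrete. The connector names, patterns, and costs below are invented for illustration; this is not how Exchange stores the GWART internally.

```python
# Illustrative sketch of address-space matching: each connector owns one or
# more wildcard patterns with a routing cost, and the MTA collects every
# connector whose space covers the destination address.
from fnmatch import fnmatch

# (connector name, address-space pattern, routing cost) -- invented examples
address_spaces = [
    ("IMS-1", "*.org", 1),
    ("IMS-2", "*.com", 1),
    ("IMS-2", "*",     5),   # a catch-all space on the same connector
]

def matching_connectors(address: str):
    """Return (connector, cost) pairs whose address space matches `address`."""
    return [(name, cost) for name, pattern, cost in address_spaces
            if fnmatch(address, pattern)]

print(matching_connectors("user@example.org"))
```

Note that `user@example.org` matches both the `*.org` space (cost 1) and the catch-all `*` space (cost 5); cost-based selection between such candidates is covered later in the chapter.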

Exchange Message Routing

Now that you know how Exchange addresses work, let's move on to the nitty-gritty of getting mail from point A to point B. When a user composes a message and hits the Send button, what happens next depends on the relative locations of the sender and recipients. Understanding how mail flows between users and servers will be handy later. It's also important to note that directory and public folder data are replicated using mail messages. The discussion in this section covers both user-to-user and server-to-server traffic, since it's all carried as messages.

When All Recipients Are on the Same Server

The simplest case is when the sender (or originator, in Exchange parlance) and all recipients are on the same Exchange server. I'll call the server where the originator's mailbox is located the local server. Figure 2-1 illustrates the process, which works like this:

Figure 2-1. Message flow between originator and recipients who are on the same server is straightforward

  1. The originator's mail client connects to the local server using MAPI or any supported Internet protocol. MAPI connections actually establish a connection to the IS service; Internet protocol connections deliver the message to the IMS, which in turn hands it to the MTA. When the message is transmitted, the client may disconnect, because its work is done (although MAPI clients normally stay connected until you force them to log off).
  2. The local server's IS service checks the local copy of the directory to see which message recipients are on the local server.
  3. Because the recipients are all on the local server, the IS service places a single copy of the message into the server's private IS. It then notifies each connected recipient that new mail has arrived (but only if they're using a MAPI client).
  4. If any recipient address is a distribution list, the IS service asks the local MTA to expand the distribution list, and the process restarts at step 2. If any recipients on the distribution list have mailboxes on other servers, the process follows the steps outlined in the following section.

Note that the MTA doesn't have anything to do in this scenario unless one or more recipient addresses point to distribution lists. The IS service, and the private IS, handle all message transit between users whose mailboxes are on the same server. This is an important point to remember when you're trying to troubleshoot mail delivery problems.
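The single-copy storage and MAPI-only notification in steps 3 and 4 can be modeled in a few lines. This is a toy sketch; the class and method names are invented and greatly simplify what the IS service actually does.

```python
# Toy model of same-server delivery: the private IS stores ONE copy of the
# message, shared by all local recipients, and notifies only the recipients
# connected with a MAPI client. All names here are illustrative.

class PrivateIS:
    def __init__(self):
        self.store = []        # one entry per stored message copy
        self.notified = []     # recipients who received a new-mail ping

    def deliver_local(self, message, recipients, connected_mapi):
        self.store.append(message)          # single copy for all recipients
        for r in recipients:
            if r in connected_mapi:         # only MAPI clients are notified
                self.notified.append(r)

is_svc = PrivateIS()
is_svc.deliver_local("status report", ["ann", "bob"], connected_mapi={"ann"})
print(len(is_svc.store), is_svc.notified)   # prints: 1 ['ann']
```

The point of the sketch is the single `store.append`: two local recipients do not mean two copies of the message in the private IS.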

Recipients on Multiple Servers Within One Site

The routing scheme used when the originator and all recipients are on the same server is very straightforward. What happens when one or more recipients have mailboxes on other servers? The overall process is much the same, except that the local server's MTA gets into the act. Here are the steps in the routing process (shown in Figure 2-2). Note that steps 3-7 are repeated once for each recipient:

Figure 2-2. Message flow between originator and recipients on different servers involves the MTA

  1. The originator's mail client connects to the local server and delivers the message.
  2. The local server's IS service checks the directory to see which message recipients are on the local server. It delivers mail to any local recipients per the steps outlined in the preceding section.
  3. For every recipient whose mailbox is not on the local server, the IS service hands the message over to the local MTA.
  4. The MTA resolves the remote recipient's address in the directory and finds the remote server that hosts the recipient's mailbox.
  5. The local server's MTA opens an RPC connection to the remote MTA and transfers the message.
  6. The remote MTA accepts the message and resolves the recipient address to find out where the recipient's home server is. If the remote MTA belongs to the user's home server, the message is delivered to the IS service. If not, the remote MTA transfers it to the correct home server's MTA, and the process begins again at step 5.
  7. Once the message arrives at the IS service of the recipient's home server, the IS puts a single copy in the message store, as in step 3 in the single-server example.

The remote MTA will resolve any recipient addresses that aren't real Exchange addresses--say, SMTP addresses or other custom recipients--by turning them into distinguished names, then using the DNs to decide which connector should carry the message.

You may also have noticed that step 6 allows multiple hops between MTAs. This allows Exchange to route messages between sites according to routing rules that you establish. You can allow Exchange to make its own routing calculations, or you can specify routing costs to force messages into particular paths.

Users on Multiple Servers in Different Sites

When a message is addressed to recipients whose home servers are in different sites, the routing process gets still more complicated. In this case, the MTAs must use the GWART to figure out how to get the message to its destination. This is a two-step process. The first step, routing, requires the MTA to enumerate all routes that can get the message to its destination. The second, selection, requires the MTA to choose the path with the lowest total cost, as expressed by the routing costs in the GWART. Here's how the process (shown in Figure 2-3) works:

  1. The originator delivers the mail to its local server, which then turns around and uses the directory to attempt to resolve all recipient addresses.
  2. Local recipients get copies of the message per the steps outlined in the earlier section "When All Recipients Are on the Same Server."
  3. Recipients whose home server is within the site get copies according to the steps outlined in the preceding section.
  4. For each recipient whose home server is outside the site, the MTA searches the GWART for a match between the recipient's address and the address spaces in the GWART, then it uses the list of matches to build a list of all potential routes. This is the routing step.
  5. Once routing is complete, the MTA chooses the lowest-cost connector from the list of available connectors. This is the selection step.
  6. The MTA uses the selected connector to transmit the message to the remote server. The server may be an Exchange MTA or another type of server, like an SMTP server or a cc:Mail post office.[2] Once connected, the remote server delivers the message (using the steps outlined in the preceding section if it's an Exchange MTA), and it eventually ends up in the recipient's mailbox.

Figure 2-3. Message flow when originator and recipient are in different sites

Address types

Step 4 in the preceding list says "the MTA searches the GWART for a match between the recipient's address and the address spaces in the GWART." However, that's a highly compressed version of what actually happens. The MTA actually compares the address type of the recipient address with the GWART. There are three sets of address types:

distinguished name (DN)
Searched when the MTA finds a DN for the recipient in the directory. This address type shows up in the GWART as "EX."
Domain Defined Attribute (DDA)
Indicates a custom recipient address. The exact format of an address of this type can vary; for example, an SMTP DDA address contains only the RFC-822 address, but an MS Mail DDA address contains the post office name and some other stuff that MS Mail needs. Each DDA type is listed in the GWART as a distinct type.
Originator/Recipient (O/R)
An X.400 address; in the GWART, it's marked as "X400."

When the MTA uses the GWART to resolve an address, it scans the GWART from beginning to end, looking for an address space whose type matches the recipient address type. When it finds a matching address space, it adds the connector that owns it to the list of available connectors for this particular message. Once the GWART has been completely scanned, the MTA still has to choose the lowest-cost connector to actually carry the message.

The selection process

The selection process that the MTA uses has six distinct steps. As each step finishes, all connectors that meet that step's criteria are passed on to the next step. At the final step, if there's still more than one connector on the available list, the MTA chooses one at random; this provides load balancing if there are several connectors with similar availability and cost.

The selection steps are as follows:

  1. Comparing the open retry count to the maximum permitted retry count. Each connector has a setting that controls how many times the MTA may try to use it to transfer a message before giving up. This step effectively throws out any connector that has exceeded its maximum retry count.
  2. Choosing active connectors. Some types of connectors are always active (the MS Mail, IMS, and site connectors), while others (like the X.400 and Dynamic RAS connectors) allow you to schedule when they're active. As you'll learn in Chapter 7, Managing Connectors and the MTA, scheduled connectors can be activated by the local or remote servers. In this step, the MTA first chooses active connectors. If there aren't any whose address types match, it chooses from the list of connectors that will become active according to their schedules. If there aren't any of those, it selects connectors that are remote-initiated.
  3. Selecting connectors whose open retry counts are low. The MTA prefers not to use connectors with open retries, so in this step it selects the connectors with the lowest open retry counts.
  4. Skipping connectors that have previously failed. Each connection failure triggers a timer that waits until the next permitted opening interval before re-opening the connector and retrying the connection. In this step, the MTA rejects any connector that has had a recent failure; it also skips any connector whose timer is still going.
  5. Choosing the lowest-cost connector. Each address space you attach to a connector has an associated cost. The MTA will choose the connectors with the lowest cost at this step.
  6. Staying local. Local connectors are those running on the same server as the MTA; remote connectors are those running on a messaging bridgehead server. Whenever possible, the MTA will use a local connector to send the message. If no local connectors are available, the MTA will choose a messaging bridgehead server, albeit reluctantly.

If no connectors can be found using these rules, the MTA will try to reroute the message by repeating the routing and selection steps to find a usable route. However, the MTA doesn't reroute messages sent over foreign connectors (like the IMS, MS Mail, and Notes connectors); it considers its job done once the message is delivered to the connector, even if the connector can't deliver the message. If repeating the rerouting process doesn't find any connector to deliver the message, the MTA returns it with a nondelivery report.
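The selection pass above amounts to successive filters over the candidate list, with a random pick among any final survivors. This is a simplified sketch: the connector records and field names are invented, and step 4 (the failed-connection timer) and the scheduling details of step 2 are omitted for brevity.

```python
# Sketch of MTA connector selection as successive filters. Each step narrows
# the candidate list; survivors of the last step are tied, and one is chosen
# at random for load balancing. Fields below are illustrative only.
import random

def select_connector(connectors):
    """connectors: dicts with retries, max_retries, active, cost, local."""
    c = [x for x in connectors if x["retries"] <= x["max_retries"]]  # step 1
    active = [x for x in c if x["active"]]                           # step 2
    c = active or c          # prefer active connectors if any exist
    if not c:
        return None
    low = min(x["retries"] for x in c)                               # step 3
    c = [x for x in c if x["retries"] == low]
    cheapest = min(x["cost"] for x in c)                             # step 5
    c = [x for x in c if x["cost"] == cheapest]
    local = [x for x in c if x["local"]]                             # step 6
    c = local or c           # stay local whenever possible
    return random.choice(c)  # random tie-break = load balancing

conns = [
    {"name": "site", "retries": 0, "max_retries": 3, "active": True,  "cost": 1, "local": True},
    {"name": "x400", "retries": 0, "max_retries": 3, "active": False, "cost": 1, "local": True},
]
print(select_connector(conns)["name"])   # prints: site (the active connector)
```

Returning `None` corresponds to the no-usable-connector case, where the real MTA attempts rerouting and ultimately generates a nondelivery report.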

Directory Replication

Each server in a site maintains its own writable local copy of the site directory. That means that each server can accept changes to the directory's data, but it requires that replication and synchronization take place on all servers in a site. Fortunately, intrasite replication is automatic. It takes place in a four-step process:

  1. When data in a server's local copy of the directory changes, that server (the sending server) starts a five-minute countdown. Every change that occurs resets the countdown timer, so changes can be accumulated without causing unnecessary replication traffic.
  2. When the countdown timer expires, the sending server DS sends an RPC notification to every other DS in the site. This notification contains information about the change, but not the changed data.
  3. Each DS that receives the notification checks its payload against its own copy of the directory. If it doesn't already have the changes, it sends an RPC request back to the sending server DS asking for the changes.
  4. Once the sending DS has transmitted the changes, the receiving DS makes the changes to its own local copy of the directory.

This replication process is a big part of the reason for the requirement that sites have permanent and RPC-capable network connections; intrasite replication happens automatically, and it uses RPCs.
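The five-minute countdown in step 1 is a classic debounce: each new change resets the timer, so a burst of edits produces one notification instead of many. Here is a toy model of that behavior; the class is invented for illustration and real Exchange does this inside the DS service.

```python
# Toy model of the intrasite replication countdown: every directory change
# resets a five-minute timer, so accumulated changes go out in one batch.

HOLD = 5 * 60  # seconds

class ReplicationTimer:
    def __init__(self):
        self.deadline = None   # when the countdown will expire
        self.pending = 0       # changes accumulated so far

    def change(self, now):
        self.pending += 1
        self.deadline = now + HOLD       # each change RESETS the countdown

    def maybe_notify(self, now):
        """Return the batch size if the timer has expired, else 0."""
        if self.deadline is not None and now >= self.deadline:
            batch, self.pending, self.deadline = self.pending, 0, None
            return batch
        return 0

t = ReplicationTimer()
t.change(0); t.change(100); t.change(250)   # a burst of three changes
print(t.maybe_notify(250 + HOLD))           # prints: 3 -- one notification
```

The payoff is in the last line: three changes, one notification, which is exactly why the timer keeps resetting instead of firing per change.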

Directory replication between multiple sites

Replication between sites won't happen unless you configure it. This is a feature, not a bug; many organizations don't want or need sites to share directory information. Those that do usually want to schedule and control replication instead of letting it happen automatically, which is why intersite replication happens according to a schedule you specify.

The first thing to know about intersite replication is that it depends on mail connectivity. Before you even think about trying to establish replication, you must ensure that mail flows properly between the sites. If you don't, replication traffic will quickly fill up the queues on either side of your site connection.

To enable intersite replication, you must add a directory replication connector between the sites you want replicated. There can only be one of these connectors for each pair of sites. The connector copies directory updates from its local directory (which in turn are received from other site servers according to the four steps above) and transfers them as messages. The steps involved in intersite replication are as follows:

  1. Changes arrive and accumulate in the sending server's directory.
  2. When the schedule calls for it, the receiving server sends a mail message to the sending server asking for any updates. This message contains information on what updates the receiving server already has.
  3. When that message arrives at the sending server (which may take some time, depending on how the site-to-site communications are configured), it sends all the requested updates back to the receiving server as mail messages.
  4. The replication messages eventually arrive at the receiving server, which applies the changes to its local copy of the database.
  5. Changes are replicated from the receiving server to other servers in its site using the same intrasite replication mechanism explained previously.

Exchange Security

Microsoft has implemented four categories of security features in Exchange:

Windows NT security
These features depend on the NT user database (in one domain or many). They provide access control and authentication to workstations and servers. Since each mailbox is assigned to an NT user account, these features also provide a first level of access control to mailboxes.
Exchange access controls
Allow you to assign individual sets of Exchange permissions to Exchange objects and containers. The available Exchange roles and permissions are quite different from NT's built-in sets, and there's not necessarily any correspondence between the two categories.
Auditing
Makes a permanent record of security-related events (and, optionally, Exchange configuration changes) in the system's event log.
Exchange advanced security
Uses public-key cryptography to provide authentication, privacy, and data integrity. The origins of messages can be verified; messages in transit can be protected from tampering or snooping; and recipients can verify that messages they receive are identical to what was sent.

Windows NT Security Features

Exchange relies on Windows NT security for most of its access control and authentication. However, Exchange maintains its own set of access permissions, which are separate from those granted to NT users. An account in the Domain Admins group, for example, won't have administrative permissions in Exchange unless you manually grant them. The only account that gains these privileges in Exchange is the account you're logged on with when you install Exchange. Since that account must have Administrator rights on the install machine, that one account--and that account only--will have administrative access for both Exchange and NT. After you've installed Exchange, you'll have to run Exchange Administrator and give other accounts whatever permissions you want them to have.

NT user accounts and mailboxes

In most cases, users must be able to log on to an NT domain before they may access Exchange resources. You can allow anonymous NNTP, IMAP4rev1, and LDAP traffic; however, by default the NNTP and IMAP4rev1 protocols require authentication too. The account used to log on controls which resources the user may access, in both NT and Exchange.

Each private mailbox on an Exchange server is owned by one Windows NT account; one Windows NT account may own many mailboxes. Because Exchange uses NT's domain account database to control mailbox access, you can build Exchange environments on top of NT domains. Doing so requires you to use domain trust relationships to grant interdomain resource access. For example, you might put your Exchange servers in their own domain, then import user accounts from a single master domain. This is fairly straightforward and doesn't require any extra configuration in Exchange: when a user from the FINANCE domain logs on to a workstation in the STLOUIS domain, she can access her Exchange server normally as long as a proper trust relationship exists between FINANCE and STLOUIS.

There are three ways to get permission to open a user's mailbox:

  • Present credentials from that user's account by logging on with it. This is the most commonly used method.
  • Open it while logged on as the site service account.
  • Give other accounts permission from within Exchange Administrator.

Exchange also allows users to delegate mailbox access, so that individual users can have assistants or others send and receive mail on their behalf. You'll learn how to control this in Chapter 13, Managing Exchange Clients.

The site service account

When you install Exchange, you're prompted to specify the name of a site service account. Exchange uses this account to provide a security context for its services, so as far as Exchange is concerned, this account is all-powerful. Since the service account provides the security context, it owns all of the mailbox and server configuration data. In fact, you can log on with that account and use it to run Exchange Administrator.

WARNING: Be very careful with the site service account--it can be used to read anyone's mail or reconfigure the Exchange services. Protect it accordingly! At a minimum, choose a very difficult (or, better still, completely random) password. Write it down and store it in a sealed, tamper-evident envelope, then store the envelope wherever you store sensitive materials. Don't use the Administrator account as your site service account.
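One way to act on the "completely random" advice is to generate the password rather than invent it. The sketch below uses Python's standard `secrets` module; the 14-character default is chosen to fit NT-era password limits, and the function name is ours, not part of any Exchange tool.

```python
# Sketch: generate a completely random site service account password using
# Python's cryptographically strong secrets module. The helper name and
# default length are our own choices, not an Exchange convention.
import secrets
import string

def random_password(length: int = 14) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = random_password()
print(pw)   # write it down, seal it in the envelope, and never reuse it
```

Using `secrets` rather than `random` matters here: `random` is predictable and unsuitable for passwords, while `secrets` draws from the operating system's secure randomness source.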

Before you install Exchange on the first server in a site, you must create the site service account using the User Manager, assign it a password, and set the "Password cannot be changed" and "Password never expires" flags on the account. Once you've done so, you need to supply the same account name and password as you install each additional server in the site. If you're using multiple domains, you'll also have to be sure that the site service account is available to each server that needs it. This is necessary so that services can use a common set of authentication credentials.

It may be that your site and domain layouts don't correspond--perhaps you put all your user accounts in a single master domain, but you have separate sites for each business unit in your company. If you want to combine servers from multiple domains into a single site, you must ensure that each domain has access to a common trusted domain, and the site service account must live in that trusted domain.

Using Exchange Access Controls

Exchange offers its own set of permissions. Apart from the restriction that users must provide valid Windows NT credentials to talk to the server, you can use Exchange permissions to fine-tune what users can do once they've successfully attached to the server. This is useful both for controlling what end users can do (for instance, allowing some users, but not others, to create and manage public folders) and to allow different levels of administrative privilege to different classes of users. For example, you might entrust one group of administrators with the responsibility of creating mailboxes, but not with the ability to delete them.


In Exchange, you use permissions to grant access on an object to a user account or group. Permissions have three parts that specify which account has the permission, what permission it is, and what object it applies to. For example, I might grant the group RA\ExchangeAdmins administrative privileges on the site configuration container.

The permissions themselves fall into three categories. We've already covered mailbox permissions; they're granted to one or more NT accounts or groups, which may then use them to log onto the mailbox and use it. Public folder permissions govern what users may do to public folders; you can control read, write, and modify access to user mailboxes, distribution lists, and other public folders. Finally, directory permissions control read and write access to the Exchange directory. (In Exchange 6.0, which uses the Windows 2000 directory, the two directories will probably be one and the same.) By default, anonymous LDAP users can read the directory, while MAPI users must log on to the server. Only accounts that have been granted specific write access may modify the directory's contents.

Exchange permissions are hierarchical; objects inherit permissions from their parent containers. In practice, this means that normally you'll apply permissions to container objects like the Recipients container. For example, assigning a given account the Permissions Admin role on the Recipients container would grant that account Permissions Admin privileges on all mailboxes, public folders, distribution lists, and custom recipients.
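The inheritance rule amounts to a walk up the container tree: an object's effective permissions are those granted on it plus those granted on every ancestor container. The containers and grants below are made-up examples, not real Exchange objects:

```python
# Toy container tree: each object maps to its parent container (None = root).
parents = {
    "Mailbox: jdoe": "Recipients",
    "Recipients": "Site",
    "Site": None,
}
# Permissions granted directly on each object (illustrative values).
grants = {"Recipients": {("RA\\Admins", "Permissions Admin")}}

def effective_permissions(obj):
    """Collect grants on the object and on every ancestor container."""
    perms = set()
    while obj is not None:
        perms |= grants.get(obj, set())
        obj = parents.get(obj)
    return perms

# A role granted on the Recipients container flows down to every mailbox:
inherited = effective_permissions("Mailbox: jdoe")
```

This is why assigning Permissions Admin on the Recipients container reaches every mailbox, distribution list, and custom recipient inside it.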

The permissions shown in Table 2-1 normally apply to directory objects and containers. In most cases, the permissions can only be used from within Exchange Administrator or by a service that holds them; however, the Outlook 97 and 98 clients can take advantage of some permitted operations at the client level.

Table 2-1: Exchange Permissions


Permission

What It Does

Add Child

Users with this permission can add new children of the directory object on which they hold this permission. For example, granting a user this permission on the Recipients container allows them to add new mailboxes.

Modify User Attributes

This permission allows users to change user-level attributes like membership in distribution lists or visibility in the Address Book.

Modify Admin Attributes

Some directory attributes, like job title, display name, and custom attributes, can only be modified by users who hold this permission.

Modify Permission

This permission allows users to modify permissions on an existing object; it doesn't affect granting permissions on new objects.


Delete

Users who hold this permission can delete items from the directory.

Send As

When a user holds this permission, she can send messages with another user's return address. For example, the CEO of a company can grant his executive secretary "Send As" permission so that the secretary can send mail that appears with the CEO's return address. Users always have this permission on their own mailboxes. Server objects also have it--they grant it to the site service account so that servers can talk to each other.

Mailbox Owner

This permission enables users to act as the owner of a mailbox; they can read, delete, and modify the mailbox contents, not just send messages from it. Like "Send As," user mailboxes and server objects get this permission by default.

Logon Rights

This permission controls access to the directory; without it, users can't access any directory data. Exchange services need this permission, and it must be granted to any user account that will be running Exchange Administrator.


Replication

Only the site service account should have this permission; holders can replicate directory information between servers.


Search

When you apply this permission on an object, you're giving the holder permission to search it. Its main use is customizing search permissions on address book views; see "Using Container-level Search Controls" in Chapter 9, Managing the Directory, for more details.


In Exchange, a role represents a group of permissions you want to grant. While you can assign individual permissions to user accounts, roles speed up the process and reduce the likelihood that you'll accidentally give someone permissions you don't want them to have or, contrariwise, forget to give them a right they need. Table 2-2 shows the predefined Exchange roles; you can also define custom roles and apply them in Exchange Administrator.
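One way to picture roles is as named bundles of permissions that get expanded into individual grants at assignment time. This toy Python sketch borrows the permission sets from Table 2-2, but it is not actual Exchange code:

```python
# Roles as named permission bundles (sets taken from Table 2-2).
ROLES = {}
ROLES["User"] = {"Modify User Attributes", "Mailbox Owner", "Send As"}
ROLES["Admin"] = {"Add Child", "Modify User Attributes",
                  "Modify Admin Attributes", "Delete", "Logon Rights"}
# Permissions Admin is the Admin role plus the right to change permissions.
ROLES["Permissions Admin"] = ROLES["Admin"] | {"Modify Permissions"}
ROLES["View Only Admin"] = {"Logon Rights"}

def grant_role(acl, account, role):
    """Expand a role into its individual permissions on an ACL (a set)."""
    for perm in ROLES[role]:
        acl.add((account, perm))

acl = set()
grant_role(acl, "RA\\Helpdesk", "Admin")   # one call grants five permissions
```

Granting a role instead of individual permissions makes it much harder to forget a needed right or hand out an unwanted one.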

Table 2-2: Exchange Roles


Role

Included Permissions

Usually Granted To


User

Modify User Attributes, Mailbox Owner, Send As

Individual user accounts


Modify User Attributes, Search

Individual user accounts

Send As

Send As

Mailbox delegates


Admin

Add Child, Modify User Attributes, Modify Admin Attributes, Delete, Logon Rights

Administrators

Permissions Administrator

Add Child, Modify User Attributes, Modify Admin Attributes, Delete, Logon Rights, Modify Permissions

Administrators whom you want to be able to modify permissions on Exchange objects

Service Account Administrator

Add Child, Modify User Attributes, Modify Admin Attributes, Delete, Logon Rights, Modify Permissions, Replication, Mailbox Owner, Send As

Site service account

View-Only Administrator

Logon Rights

Administrators whom you want to be able to view, but not change, Exchange settings in Exchange Administrator

The Permissions tab

You assign permissions to containers and individual objects using their various properties dialogs. Each object that can have permissions assigned to it can display a Permissions tab in its properties dialog. Figure 2-4 shows an example; you can see which accounts have inherited permissions and which accounts have been granted specific permissions.

Figure 2-4. The Permissions tab in Exchange Administrator summarizes which accounts have what access to the selected object

By default, Exchange Administrator will only show this tab for container objects. You can turn it on using the Permissions tab in the Exchange Administrator Options dialog (see "Setting Exchange Administrator Preferences" in Chapter 5, Using Exchange Administrator, for complete details).


Exchange logs a wide variety of security events to the system's event log (more specifically, in the application section of the event log). For example, if you grant a user or group administrative privileges on a recipient container, you'll see an event log message noting the change. You can use the NT Event Viewer or a variety of third-party tools to automatically scan the event log and notify you of changes. Exchange also logs messages about backup status and database integrity to the event log, so reviewing it periodically--or, better yet, automatically--is a very good idea. See Windows NT Event Logging, by James Murray (O'Reilly & Associates) for more details on how NT's event log works. Chapter 15, Troubleshooting Exchange Server, will tell you how to interpret the various events that Exchange can log, as well as how to fix the problems that caused the event in the first place.

The Exchange Databases

Exchange uses a set of databases to store its directory, mail, and public folder data. This approach may seem a little unusual; after all, most Unix-based mail systems (and many other PC-based LAN mailers) store messages in either individual files or plain-text message collections. A relational database may seem like overkill, but there are three good reasons to use a database for the message store:

The Exchange transaction logging architecture records each transaction to a log file before it's put into the database. If the database is damaged or corrupted, the transactions can be replayed. This can often restore a database to its original pristine state without requiring a full restore from a backup. As a bonus, it also makes it much easier to keep the databases consistent with one another.
In an ordinary file-based mail store, one of two things has to happen. The store must be indexed to allow quick retrieval and insertion of messages, or users have to pay the performance penalty of dealing with files that are always sequentially read and written. Databases allow fast random access to individual messages and items, along with indexing. The Exchange database implementation has particularly good performance because it is heavily multithreaded and because it uses transaction logging.
Exchange is designed to scale to large environments. Some Exchange sites, like General Electric, Boeing, and Compaq/Digital, handle more than 50,000 Exchange users spread throughout the world. A single server must be able to efficiently handle all the directory data for an organization of that size or larger. Database storage also allows efficient retrieval of user mail from large message stores. It allows the server to do much of the work for providing custom address book and message views.

As you may remember from Chapter 1, every change to the database is modeled as a transaction. The Exchange database model is patterned after Microsoft's SQL Server, though the actual database software isn't based on SQL Server and doesn't require an SQL Server license. The Exchange database engine is fairly robust, with some useful properties, which together go by the nifty acronym of ACID:

Atomicity
Every transaction is an indivisible unit, like an atom. When a transaction is posted, one of two things may happen: either every result of the transaction is committed to the database, or no results are. Atomic operations may not be interrupted by other system activities; atomic database updates thus either succeed completely or fail completely; there's no half-done state. This property ensures that once a transaction is committed, all its side effects are properly registered in the database.
Consistency
Every posted transaction leaves the database in a valid state. No transaction is allowed to leave the database in an inconsistent or partially complete state.
Isolation
Every transaction is treated as though it were the only transaction being handled. In a system like Exchange, where many concurrent threads may be posting transactions, isolation ensures that the results of each transaction are properly stored in the database and that there are no hidden side effects or linkages between transactions. Any transaction can extend or undo changes made by another one, since at the start of each transaction the database assumes that every other transaction has already been handled.
Durability
Once a transaction is posted, its effects are permanent, unless they are undone by another transaction. Ideally, a durable database is one that can happily survive (or at least recover from) media failures or database corruption. Exchange uses its transaction logs to make its databases more durable. As long as you can recover the complete set of transaction logs, you can completely restore a damaged database. If you can only recover part of the log set, you can at least restore a portion of the database, which is usually better than nothing.

All About Logging

Exchange transaction logging is often misunderstood, both because it's confusing and because the Exchange documentation makes it even more so. The fundamental idea behind logging is simple: the logs store copies of all transactions. These stored transactions can be played back later to restore a corrupted database, or even to retry a transaction that didn't complete successfully. Logging is a mainstay of relational database engines because it provides a backup mechanism for transactions to help preserve the ACID properties.

Log and checkpoint files

Exchange logs transactions. Although Exchange treats the log as a single entity, it's actually a set of files. Each log file is exactly 5MB (that's 5,242,880 bytes, for you purists) in size, even if there aren't any transactions in it. If you see a log file that's any other size, it's probably corrupted. The DS and IS maintain their own log files, each named edb.log. As transactions for each service occur, they're written to the appropriate log file. When the log file fills up, the DS or IS service renames it, using a sequential hexadecimal ID (the first file is edb00001.log, the second is edb00002.log, and so on). These renamed log files are called generations; edb.log represents the highest, or most recent, generation. Note that just because a log file is full doesn't mean its transactions have been committed--all commitments happen according to the rules outlined in the section "The logging process," later in this chapter.
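The renaming scheme is easy to model. This little sketch (an illustration, not Exchange's actual code) shows how sequential generation numbers map to the hexadecimal filenames described above:

```python
def generation_name(n):
    """Name of the nth log generation: 'edb' + 5 hex digits + '.log'."""
    return f"edb{n:05x}.log"

# The IDs are hexadecimal, so generation 10 is edb0000a.log, not edb00010.log.
first = generation_name(1)
tenth = generation_name(10)
```

Knowing that the sequence is hex rather than decimal helps when you're eyeballing a log directory to see whether any generations are missing.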

The log files contain a number of tidbits that are useful if the logs have to be played back during server recovery, including the full path to the database files, information about which generation of log data is in the file, a signature and timestamp for the data, and a signature for the database. This header information enables the store to make sure that each log file is replayed into the correct database file, and to balk if you do something like try to restore files from one machine onto another.

Of course, the log files also contain information about the transactions themselves. For each transaction, the log records the type of transaction (i.e., whether the transaction represents a change, a rollback of a previous change, or a commit of a previous change). These transactions record the low-level modifications to individual pages and tables within the database.

When the DS and IS services are shut down normally, any transactions that have been made to the in-RAM copy of the database are committed to the disk version, and the checkpoint file is updated to reflect which transactions have been committed. If the service is shut down abnormally (say, by a power failure), when it restarts it will scan its inventory of log files and play back any uncommitted transactions from the log files to the database. This means that it's very important not to move, delete, edit, or otherwise disturb the log files until their transactions have been committed.

How do the services know which transactions have been committed? The IS and DS services maintain checkpoint files named edb.chk. Whenever a transaction is committed, the checkpoint file is updated to point to that transaction. The services use the checkpoint file at startup time; if this file is present, transactions are played back from the checkpoint to the end of the last available log file. The checkpoint files tell the store which transaction log files contain uncommitted transactions, and would be needed in case of a crash. If the checkpoint file is missing or damaged, Exchange can scan each log file and check whether its transactions have been committed, but this is much slower than using the checkpoint files.
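The checkpoint logic amounts to "replay everything after the pointer." Here's a toy sketch of that idea, with transactions reduced to list entries and the checkpoint to an index; nothing here reflects the real edb.chk format:

```python
def replay_from_checkpoint(logged, checkpoint, database):
    """Re-apply every logged transaction after the checkpoint index.

    logged:     ordered list of all logged transactions
    checkpoint: index of the last committed transaction (-1 if none)
    database:   the recovered database, already containing committed work
    """
    for txn in logged[checkpoint + 1:]:
        database.append(txn)          # re-apply each uncommitted transaction
    return len(logged) - (checkpoint + 1)

db = ["t0", "t1"]                     # committed to disk before the crash
replayed = replay_from_checkpoint(["t0", "t1", "t2", "t3"], 1, db)
```

Without the checkpoint index, recovery would have to inspect every transaction in every log file to decide what still needs replaying--the slow path described above.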

Reserve logs

Since transaction processing depends on log files, it's fair to wonder what would happen if there weren't enough disk space to create a new log file. As a last-ditch defense against running out of space, the DS and IS services each maintain two reserve log files named res1.log and res2.log. When edb.log fills up and is renamed, if there's not enough space to create a new file, the store services will use the reserve files instead. If this happens, ESE will send a remote procedure call to the service. When the service gets this special emergency message, it will flush any uncommitted transactions from memory into the reserve log files, then shut down cleanly. The service will also log an event in the system event log; if your DS or IS service won't start, check the event log to make sure you have adequate free space.

The logging process

Logging transactions is a good way to keep the database unsullied and consistent; however, there may be performance costs involved. A simplistic logging mechanism would just log transactions to a file, then periodically inject them into the database. The Exchange logging process is quite a bit smarter; it works like this:

  1. Something happens--a message arrives, directory data changes--and a new database transaction is created by the directory or IS services. The transaction only reflects data that has changed; for example, if you open a draft message in your mailbox, edit it, and resave it, the transaction will contain only your changes, not the entire draft.
  2. The timestamp on the page that will be changed by the new transaction is updated.
  3. The transaction is logged to the current generation of log file for the service that owns it. Transactions are written to the log file in sequence, with no random-access seeking. Once the transaction has been logged, the caller assumes that it will be properly registered in the database and goes about its business.
  4. The transaction is applied to the version of the store database cached in RAM. The store never, ever records a transaction to the cached database until the transaction has been logged.
  5. When the log file hits its maximum size, the service that owns the log file renames it and creates a new log generation. This log file will stay on disk until it's purged during an online backup.
  6. Exchange copies the transactions from the cached copy in RAM back to the disk version of the database. This so-called "lazy commit" strategy means that at any point in time the "real" database consists of data from the database file on disk, data from the database copy in RAM, and as-yet-uncommitted transactions.
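The heart of steps 3 through 6 is the write-ahead rule: log first, update the RAM copy second, flush to disk later. This toy Python model captures that ordering; all the structures are stand-ins, not Exchange internals:

```python
class Store:
    """Toy write-ahead store: log, RAM cache, and lazily-written disk copy."""
    def __init__(self):
        self.log = []        # durable transaction log (step 3)
        self.cache = {}      # in-RAM copy of the database (step 4)
        self.disk = {}       # on-disk database file (step 6)

    def post(self, key, value):
        self.log.append((key, value))   # the transaction is logged first...
        self.cache[key] = value         # ...and only then hits the RAM copy

    def lazy_commit(self):
        self.disk.update(self.cache)    # flush cached changes to disk

s = Store()
s.post("msg1", "hello")
# At this point the change exists in the log and the cache, but the disk
# file hasn't seen it yet--the "real" database is spread across all three.
```

If the server crashes between `post` and `lazy_commit`, the logged copy is what recovery replays, which is why the store never touches the cache before logging.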

When the DS or IS services are shut down normally, they attempt to commit any outstanding transactions from the in-memory database, but not from the log files. If the service shuts down abnormally, and the transaction files remain intact, when the services restart they'll replay transactions starting at the checkpoint. If the transaction files are missing or partially damaged, the DS and IS services will do the best they can to commit any transactions that can be recovered.

Databases and Caching

For most applications, disk caching provides a big performance win with little or no overhead--applications request disk I/O and it's handled by the cache, if possible. Windows NT includes disk caching that's smart enough to order disk writes so that overall disk head motion is minimized. Many server systems also include caching disk controllers. Caching is especially popular when combined with controllers that provide hardware RAID.

Here's the killer question: if a caching disk controller has data in its cache, and the power goes out, where does the data go? The answer is most likely to be "nowhere," and that's bad news. Exchange depends on transaction logs and lazy commits, so an unexpected power failure can trash both the log files and the database. It's probably obvious that you should equip your Exchange servers with uninterruptible power supply (UPS) units, but you must make sure that your UPS has sufficient capacity to keep the server up long enough for NT to shut down properly and flush its caches.

If you're using a caching controller, you have several options. For a long time, Microsoft recommended turning off write caching on the controller. This reduces the likelihood that data will be lost, but why buy a caching controller if you can't use write caching? More recently, Microsoft, Compaq, Dell, Intergraph, and most other server vendors have endorsed the position that it's OK to use write caching on your controllers as long as they're equipped with a battery backup. This type of controller has a small onboard battery that keeps the cache memory alive until system power is restored; the controller can then write out the cached data before the next reboot. The Dell PowerEdge 2300 I used to write this book has a PowerRAID II controller with a battery backup; it easily survived the severe thunderstorms and power outages common in an Alabama summer.

TIP: You can speed up your Exchange server's shutdown time by manually stopping the Exchange services before shutting down the server; see Chapter 14, Managing Exchange Servers, for details. Never power down your Exchange server instead of shutting it down cleanly. That's a fast route to corrupting your Information Store databases.

Circular logging

Since a new log file is created whenever the current one fills up, log files can potentially take up a large amount of space on your disk. One solution is to put them on a dedicated disk (more on that in Chapter 3, Exchange Planning ); another is to enable circular logging. Normally, every log file is kept until its transactions have been committed; the files are usually purged when backups are made. However, when you enable circular logging, Exchange will keep a fixed number of log files, rolling from one to another as transactions arrive. The default number is four, but Exchange may use extra log files if a large set of transactions arrives. As the fourth log file fills up, Exchange will commit transactions from the first file; when the fourth file is completely full, all transactions will be flushed from the first file and it will be reused. Because of these extra log files, a very busy server can still use more log space than the default four-file allocation.
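The rolling behavior can be modeled as a fixed-size window of log generations: once the window is full, the oldest file is reused and whatever it held is gone. This sketch illustrates the idea only; it is not Exchange's actual bookkeeping:

```python
from collections import deque

def roll_logs(window, new_file, max_files=4):
    """Add a new log generation; reuse (drop) the oldest past the limit."""
    window.append(new_file)
    dropped = None
    if len(window) > max_files:
        dropped = window.popleft()   # oldest generation is overwritten
    return dropped

logs = deque()
for n in range(1, 7):                # six generations arrive...
    roll_logs(logs, f"edb{n:05x}.log")
# ...but only the four most recent survive; the first two are gone for good.
```

The dropped generations are exactly the older transactions you'd need for a full replay, which is why circular logging and complete recovery don't mix.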

Circular logging is turned on by default in Exchange. This is actually a Bad Thing, since with circular logging on, you only have the most recent transactions in the log files--older data is lost as the circle rolls around. Without a complete record of all transactions posted to the database, recovering the entire database may be difficult if you don't notice that it's been corrupted or damaged for a while. Circular logging also limits your backup choices, as you'll see in the next section. In Microsoft's defense, circular logging does keep log files from filling the entire disk, making it possible for novice administrators to run Exchange without being aware of its disk space or backup needs.

Having said that, you should immediately turn circular logging off as soon as you install Exchange. When it's on, it's much more difficult to recover the IS from a backup. With 10GB disks selling for less than US$150, there's no excuse for not having enough disk space for the log files.

TIP: You should acquire the habit of looking through the event logs regularly. The Exchange services all log helpful messages that will tell you your database is corrupt before you've overwritten the good version on your backup tapes with the corrupted version.

Log files and backups

Exchange comes with an updated version of the standard NT backup application, ntbackup, which can back up the DS and IS databases without stopping the Exchange services. This is a real boon, since you normally don't want to shut down your server to back it up. Depending on the type of backup you select (see the section "Backup Considerations" in Chapter 17, Recovery and Repair), various things may happen to your log files. Full and incremental backups will purge log files whose transactions have all been committed--both of these backup types record all changes to the IS and DS databases, so the log files are no longer needed. Differential backups require a complete set of all log files since the last full backup, so differential backups don't purge any files. If you enable circular logging, you won't be able to do incremental or differential backups. (If you stop the Exchange services and do an offline backup, you can do incremental and differential backups, but you lose the Exchange-aware features of ntbackup.)

One cool feature of the Exchange-ntbackup integration is the way incoming transactions are handled during a backup. Here's how the process works for a full backup:

  1. The backup starts. Any transactions that are in the transaction log but haven't been committed to the database are committed. The various checkpoint (.chk) files are updated to reflect which transactions have been committed and which are still outstanding.
  2. The backup proceeds. Each database file is backed up in turn, in 64KB chunks.
  3. If new transactions arrive for a database that's currently being backed up, they're stored in two places: in the transaction logs for that database and in the corresponding patch file ( priv.pat, pub.pat, or dir.pat--one for each database). The patch files hold copies of incoming transactions that would ordinarily apply to database pages that have already been backed up.
  4. When the database file is completely backed up, ntbackup backs up the patch files. While it's doing so, new transactions are stored in the log files.
  5. When the patch files are completely backed up, the remaining log files are backed up, too. Normally, the last log file will be partially full; each log file is exactly 5MB, but there can be fewer transactions in the last file.

Incremental backups work differently: they just copy the log files, not the main databases. This five-step process allows Exchange to continue running while a backup is in progress. Incoming transactions generate log files; on an active system, there may be several log files generated during the backup. There's nothing wrong with this, because the backup process is designed to handle it. However, it can make the patch files grow pretty large, so they take extra time to back up and restore.
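The patch-file rule from step 3 boils down to a single test: has the page this transaction touches already been written to the backup? Here's a toy sketch, with database pages reduced to numbers and the backup's position to a cursor; none of this mirrors the real on-tape format:

```python
def route_transaction(page, backup_cursor, logs, patch):
    """Decide where an incoming transaction is recorded during a backup.

    page:          the database page the transaction modifies
    backup_cursor: highest page number already written to the backup
    """
    logs.append(page)            # every transaction is always logged
    if page <= backup_cursor:    # page already backed up?
        patch.append(page)       # then the patch file gets a copy too

logs, patch = [], []
route_transaction(3, backup_cursor=5, logs=logs, patch=patch)  # page done
route_transaction(9, backup_cursor=5, logs=logs, patch=patch)  # page pending
```

Transactions against not-yet-backed-up pages need no patching, because the backup will pick up their effects when it reaches those pages.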

Where Everything Lives

Exchange keeps its data in four separate but interrelated databases. These databases store all the contents of Exchange's directory and public folder data; they may also store some or all of your users' mail data. Each database is stored in its own file and is self-contained, though relationships between tables will exist.

Table 2-3 shows the four databases, their contents, and their usual locations. When you install Exchange on a server, the installer will put the database files in subdirectories of the main Exchange installation directory. For a variety of reasons, these locations aren't always optimal (though in Chapter 3 you'll find a set of recommendations for disk layout). According to Microsoft, the final step in the Exchange installation process is to run the Performance Optimizer, which will automatically determine what it thinks is the best location for each of the three primary databases. However, it's better to wait and run Performance Optimizer manually after you've finished installing and configuring any connectors you need (see Chapter 4, Installing Exchange, for more details).

Table 2-3: Exchange Database Files


Database

What's in It


Usual Location

Public Information Store

Public folder messages

\exchsrvr\mdbdata\pub.edb

Private Information Store

User mailbox data, including messages, rules, and views


\exchsrvr\mdbdata\priv.edb

Directory store

Addresses for every addressable object, public folder hierarchy, permissions

\exchsrvr\dsadata\dir.edb

Temporary database

Transactions that are in progress but not yet committed; this file only exists while it's in use



Transaction log files live in the same directory as their respective databases; for example, the default location for directory transaction logs is \exchsrvr\dsadata.

Mail storage

Did you notice the weasel words in the second sentence of this section? Databases may store some or all of your users' mail data. The cynical reader is probably wondering, "Well, where's the rest of it stored?" The answer: it depends. Exchange offers several different methods of mail storage. Which ones are available depends on which mail clients are in use.[3] The default, which is probably most desirable for the majority of sites, is to keep all user mail in the private IS on the server. This affords you the benefits of single-instance storage, the recoverability and performance features of the database, and an easy way to back up all mail at one time.
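The single-instance storage mentioned above can be pictured as one stored message body plus per-mailbox references to it. The dictionaries below are illustrative stand-ins, not the actual on-disk format:

```python
bodies = {}      # message-id -> body text (each body stored exactly once)
mailboxes = {}   # user -> list of message-ids the mailbox references

def deliver(msg_id, body, recipients):
    """Store the body once; give each recipient's mailbox a reference."""
    bodies.setdefault(msg_id, body)
    for user in recipients:
        mailboxes.setdefault(user, []).append(msg_id)

deliver("m1", "Quarterly report attached", ["alice", "bob", "carol"])
# Three mailboxes now reference the message, but only one copy exists.
```

This is why a 5MB attachment sent to 100 server-based mailboxes costs roughly 5MB of store space, not 500MB--a saving you give up when mail moves into per-user PSTs.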

Some users prefer to store mail on their local clients. There are serious reasons not to take this approach (see Chapter 13 for more details), but it's sometimes necessary or even desirable. Microsoft's mail clients support local client storage using two types of stores. The offline store file (OST) is a replica of selected folders from the server. Think of an OST as a portable copy of part of the server's message store; it normally contains the Inbox, Outbox, Sent Items, Calendar, Journal, Tasks, and Contacts folders. You can customize your OST so that it contains whatever private and public folders you want to use offline. OSTs must periodically be synchronized with the server; once that's done, though, they're self-sufficient and can be used offline.

The personal store file (PST) is a client-based message store that contains the user's messages. Exchange and Outlook both support PST files. Users may open several PSTs, and PSTs may be used in addition to server-based storage, not just as a replacement. When using a PST, new mail may be delivered directly to the PST (actually, it's sent to the client's Inbox and then moved to the PST), although this isn't the default. There may still be a copy on the server if the message was also sent to other recipients; however, for mail sent only to the PST user, the server doesn't maintain a copy if the user has selected to have all new mail sent straight to the PST. This places the burden of data integrity, security, and backup squarely on the PST user. It also means that corruption or damage to the PST may not be recoverable, though Microsoft does provide a tool for PST repair.

Both OST and PST files are subject to two size limits. The total file size for a single OST or PST can't exceed 2GB, and no single folder within one can hold more than about 65,000 items.

Database Maintenance

All this database magic comes with a price. The database itself must be periodically maintained. The good news is that this maintenance is largely automatic and invisible. You probably won't notice it unless you reschedule the automatic tasks to happen during working hours on your server.

As with most other databases, the ESE holds on to space once it's allocated and reuses it internally instead of returning it to the filesystem. By contrast, with ordinary files, if you create 50 10MB Word documents, then delete 20 of them, you regain 200MB (20 × 10) of disk space. If your Exchange stores grow to 2GB and you delete 500MB worth of mailbox data, the stores will still take up 2GB, but Exchange will be able to recycle the 500MB of free space. The Exchange databases are like a balloon with a one-way valve attached--you can blow it up bigger and bigger, but it doesn't normally get any smaller.
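The one-way valve can be modeled as a file that grows on demand but recycles freed pages rather than shrinking. This is a conceptual sketch, not how ESE actually manages pages:

```python
class DatabaseFile:
    """Toy store file: grows as needed, recycles freed pages, never shrinks."""
    def __init__(self):
        self.pages = []      # every page ever allocated
        self.free = []       # indices of pages freed by deletions

    def write(self, item):
        if self.free:
            self.pages[self.free.pop()] = item   # reuse a freed page
        else:
            self.pages.append(item)              # grow the file

    def delete(self, index):
        self.pages[index] = None                 # page is empty...
        self.free.append(index)                  # ...but stays in the file

    def file_size(self):
        return len(self.pages)                   # never decreases

db = DatabaseFile()
for i in range(3):
    db.write(f"msg{i}")
db.delete(1)            # frees a page internally; file stays the same size
db.write("msg3")        # reuses the freed page instead of growing
```

Only an offline defragmentation, described below in this section, actually hands the freed pages back to the filesystem.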


Fragmentation

Over time, your Exchange database files will become fragmented. This fragmentation is internal; as transactions occur, the contents of the database file itself are split into little islands of free space. Since the database must index each message component, along with the free space chunks, fragmentation slows down database performance. The more fragmented the store files become, the bigger the performance hit.

The DS and IS services automatically defragment their databases. This process, known as online defragmentation, is a scheduled maintenance task that runs nightly, or whenever else you schedule it. During an online defragmentation, the DS and IS services shuffle data in the private, public, and directory databases to minimize fragmentation and keep mailbox, public folder, and directory data in contiguous blocks.

You can also use the eseutil tool (covered completely in Chapter 17) to do an offline defragmentation. The difference is that during an offline defragmentation, the IS and/or DS services must be stopped, and eseutil can do a more thorough job of defragmenting and recovering space. Offline defragmentations can actually shrink the size of the database files to match the actual size of the data in them; they return the unused space to the filesystem. Contrast this with online defragmentations, which move data around but don't shrink the database size. The offline defragmentation process actually creates a temporary database and moves data from the original database to the new one, defragmenting and compacting as it goes. When the defragmentation is done, the new database replaces the old one.
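The copy-compact-swap idea behind an offline defragmentation can be sketched in a few lines. Here deleted items are marked None, and only live records are copied into the new file; this is a simplified illustration, not eseutil's actual algorithm:

```python
def offline_defrag(pages):
    """Copy only live records into a fresh, fully compacted database."""
    # pages: list of records, where None marks a deleted/free slot
    return [p for p in pages if p is not None]

old_db = ["msg1", None, "msg2", None, None, "msg3"]
new_db = offline_defrag(old_db)
# The new database holds the same live data in half the pages; when it
# replaces the old file, the difference goes back to the filesystem.
```

The temporary-database step is also why an offline defragmentation needs free disk space roughly the size of the compacted database while it runs.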

The scheduled online defragmentation process will do its thing unobtrusively, and in most cases, you won't need to run an offline defragmentation. However, there are times when you might want to force Exchange to defragment the database more thoroughly. For example, let's say you've just moved two hundred users from one server to another, and you want to reclaim the space those two hundred mailboxes were taking up on the old server, instead of waiting for the private IS to grow into the space. You'll need to run an offline defragmentation to regain the space.


Compaction

Exchange will automatically compact database entries, removing deleted items and periodically sweeping away expired public folders and views. This process is akin to opening a filing cabinet and removing any outdated or unnecessary files; it doesn't increase the total amount of filing space, but it does increase the amount you can actually use. Exchange's compaction process attempts to consolidate partially full pages into a smaller number of completely full pages. This consolidation speeds reading and writing to the database as transactions arrive.

Automatic maintenance tasks

Because the public, private, and directory stores are integral to Exchange's operation, you probably won't be surprised to see all the background maintenance tasks that take place. These tasks primarily do housekeeping, cleaning out expired data and flushing data from caches into the database. (We've already discussed the lazy commit system, which isn't really a maintenance task anyway.) Tasks fall into two categories: those that run according to the schedule set on the IS Maintenance tab of the Server Properties object (Table 2-4) and those that run either when Exchange needs them or when a separate schedule fires (see Table 2-5).

Table 2-4: Tasks That Run on the IS Maintenance Schedule

Index aging
  When it runs: Controlled by registry values (see Chapter 11, Managing the Information Store, or KB article Q159157).
  What it does: Clients can create custom views, each of which is stored as an index in the database. Once these indices reach a certain age without being used, Exchange purges them to free up table space.

Tombstone aging
  When it runs: Minimum of every 24 hours, unless overridden by the registry (see Chapter 11).
  What it does: Deleted public folders are marked with a "tombstone," which tells replication partners that the marked item no longer exists and shouldn't be replicated. After a tombstone reaches a certain age, it's removed to keep the tombstone list from growing infinitely large.

Tombstone maintenance
  When it runs: Minimum of every 24 hours.
  What it does: Compacts deleted items and replaces them with tombstones, which are then aged.

Public store expiration
  When it runs: Minimum of every 24 hours, unless overridden by the Replication Expiry registry value.
  What it does: Public folders may have a message age limit. Messages older than this limit are deleted as part of the maintenance process.

Public store version updates
  When it runs: Minimum of every 24 hours.
  What it does: Each server stores the version of Exchange that it's running in its directory; this allows any two servers to agree on a common schema and feature set. Once per day, the IS maintenance task updates this version number to reflect any changes to Exchange.


Table 2-5: Other Automatic Maintenance Tasks

Background cleanup
  When it runs: Controlled by the BackgroundCleanup registry values (see Chapter 11).
  What it does: Reclaims empty space formerly used by deleted items. The space is marked as unused and can be moved or reallocated by the compaction task.

Storage warning notification
  When it runs: Controlled by the values set on the IS Site Configuration object's Storage Warnings tab.
  What it does: Checks each mailbox and public folder, sending "you're using too much storage" warnings to users who are over their assigned quotas.

Database grooming
  When it runs: The main task runs every 10 minutes; each grooming subtask has its own scheduled interval.
  What it does: Reloads and reapplies storage and per-user quotas from the directory; flushes cached directory information back to the DS.

Database compaction
  When it runs: Nightly at 1 a.m., unless changed.
  What it does: Moves all unused and reclaimed space to the end of the database.

DS/IS Consistency

The DS and IS databases are closely entwined. For example, the directory contains information about which users exist and which mailboxes they own, but the mailboxes and their contents belong to the private IS. It's important to keep the directory and IS stores synchronized and consistent; remember the "C" in ACID? Without consistency between the two, it may become impossible to tell who owns an object, or even which objects exist. For example, if the data in the directory and in the IS don't match, the global address list might show recipients who can't receive mail because they have no actual mailboxes in the private IS!

The DS and IS services normally keep their databases consistent without any help from you. As transactions arrive, they're posted to the appropriate services. When directory changes are replicated, the receiving server can apply them without fear--they won't apply to the private IS, and there are guaranteed to be matching transactions for any changes to the public IS.

It's possible for the three databases to end up in an inconsistent state, though. If you restore the DS without the corresponding public or private IS, or if for some reason your IS files are corrupt, you have to manually tell Exchange to readjust its associations. You may also need to do this if you lose your domain controllers, since restoring the SAM database may result in the loss of some account information. The DS/IS Consistency Adjustment page (covered in detail in Chapter 17) allows you to force Exchange to make these adjustments, using the directory as the master and adjusting the public and private IS contents as needed.
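The kind of cross-check the consistency adjustment performs can be sketched with two sets, using the directory as the master. All names here are hypothetical.

```python
# Hedged sketch of a DS/IS consistency check: compare mailbox owners
# recorded in the directory (DS) against mailboxes that actually exist
# in the private IS.

directory_mailboxes = {"alice", "bob", "carol"}   # DS: who should exist
is_mailboxes = {"alice", "bob", "dave"}           # private IS: what does exist

# DS entries with no backing mailbox: the GAL would list these recipients,
# but mail addressed to them could never be delivered.
ghost_recipients = directory_mailboxes - is_mailboxes

# IS mailboxes with no DS entry: orphaned objects with no known owner.
orphaned_mailboxes = is_mailboxes - directory_mailboxes

print(sorted(ghost_recipients))     # ['carol']
print(sorted(orphaned_mailboxes))   # ['dave']
```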

Exchange Advanced Security

Exchange advanced security provides message protection for some or all of your users. It does so by using public-key cryptography. Each user is assigned a public and a private key; the keys are mathematically related, but it's practically impossible to derive one from the other. Table 2-6 outlines the basic cryptographic operations, signing and encryption, and shows how they work in a conversation between two hypothetical Exchange users, Alice and Bob.

Table 2-6: Basic Public-Key Cryptographic Operations

Alice wants to send a digitally signed message
  Sender uses: Alice's private key to sign the message.
  Recipient uses: Alice's public key to verify it.
  What happens: Exchange computes a digest of the message and encrypts it with Alice's private key. It can be decrypted only with Alice's public key.

Bob wants to verify a message from Alice
  What happens: When Bob gets the message, his Outlook or Exchange client computes its own digest of the message. The client then uses Alice's public key to decrypt the digest Alice originally signed. If the digests match, the signature is authentic.

Alice wants to send Bob an encrypted message
  Sender uses: Bob's public key to encrypt the message.
  Recipient uses: Bob's private key to decrypt it.
  What happens: Alice's client fetches Bob's public key from the Exchange server, then uses it to encrypt the message to him only.

Bob wants to read encrypted mail sent to him
  What happens: When the message arrives, Bob's client attempts to decrypt it with his private key. If the message was encrypted to him, he'll be able to read it. If Alice chose to sign the message, the client will verify the signature only if the decryption succeeds.
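The operations in Table 2-6 can be demonstrated with a textbook RSA key pair. The numbers below are absurdly small and there's no padding; this shows only the shape of signing and encryption, not anything Exchange actually ships.

```python
import hashlib

# Textbook RSA: n = 3233 (61 * 53), public exponent e = 17, private d = 2753.
alice_pub, alice_priv, n = 17, 2753, 3233

def digest(message):
    """Reduce a message to a small integer "digest" (mod n, for the toy)."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

# Signing: Alice encrypts the digest with her PRIVATE key.
message = b"Meet at noon"
signature = pow(digest(message), alice_priv, n)

# Verification: Bob recomputes the digest and decrypts the signature
# with Alice's PUBLIC key; matching values mean the signature is authentic.
recovered = pow(signature, alice_pub, n)
print(recovered == digest(message))   # True

# Encryption: Alice encrypts with Bob's PUBLIC key (a second toy pair);
# only Bob's PRIVATE key recovers the plaintext.
bob_pub, bob_priv, bob_n = 17, 2753, 3233
secret = 1234
ciphertext = pow(secret, bob_pub, bob_n)
print(pow(ciphertext, bob_priv, bob_n) == secret)   # True
```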

The key management server

The Exchange advanced security system requires the use of a key management server, or KMS. This server is an NT machine running Microsoft's Certificate Server (included with the Internet Information Server 4.0 package) and the Exchange KMS software. You may have zero or one KMS machine per Exchange site.

The KMS's main job is to issue keys to clients. Here's what the KMS is required to do:

  • Generate temporary keys so that users requesting new keys can get them securely from the KMS
  • Issue signature and encryption certificates to individual users enrolled in advanced security
  • Keep backup copies of public signature keys and private encryption keys
  • Manage the master copy of the certificate revocation list (CRL)

When the KMS receives a client request, it archives the provided keys and creates a new digital certificate for the user. This certificate is signed by the KMS, and it attests that the KMS believes that the user's keys are authentic. Other users within the same organization can verify both the user's certificate and the KMS signature; this provides an additional level of trust, since normally the KMS is the only entity able to issue keys within an organization.[4]

The KMS has to be well protected, for two reasons. First, if it's offline, you won't be able to use it to issue new certificates, revoke existing certificates, or recover archived keys. Second, and worse, if it's compromised, you won't be able to trust its signatures or revocation lists, meaning you might have to build a new KMS installation and reissue all your users' certificates!

Advanced security steps

The detailed process of enrolling users in advanced security and keeping the KMS healthy is discussed in Chapter 16, Exchange Security. However, now is a good time to outline the steps you must take in order to provide advanced security for your organization.

First, you have to install your Exchange servers. Once that's done, you need to install the KMS, which in turn requires setting up IIS 4.0. You must also configure the KMS as appropriate for your organization's security needs. Once that's done, here's what happens:

  1. You enroll users in advanced security, either en masse or individually.
  2. You ask the KMS to generate temporary keys for each user. The temporary key is used only once; it safeguards the communication channel from the client back to the KMS. Ideally, you give each user his or her temporary key in person. If you absolutely must, you can distribute them electronically, although this raises the possibility that a malicious user may steal the key and use it to masquerade as the real user and steal her keys.
  3. The user uses her client to generate whatever type of key pair (encryption-only, signature-only, or dual-purpose) you've enabled on her mailbox. The client generates the keys and uses the temporary key to establish a secure channel to the KMS.
  4. Once the secure channel is open, the client registers its keys with the KMS. The user's public keys are added to the Exchange directory and can be freely replicated; the KMS also keeps an archival copy. Optionally, the KMS can also archive a copy of the private key from the user's encryption certificate.
  5. The client uses its private keys when requested by the user. Other users who want to send encrypted mail to, or verify mail signed by, a user can get the necessary keys from the Exchange directory. Certificates may be revoked or rekeyed at any time by the Exchange administrator.
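Steps 2 through 4 can be sketched with a one-time shared secret standing in for the temporary key. All names are illustrative; the real KMS enrollment protocol is considerably more involved.

```python
import hashlib
import hmac
import os

# Step 2: the KMS generates a temporary key, handed to the user in person.
temp_key = os.urandom(16)

# Step 3: the client generates its key pair (a stub stands in here) and
# authenticates its registration request with the temporary key, so the
# KMS knows the request came from the holder of that key.
client_public_key = b"alice-public-key-bytes"
tag = hmac.new(temp_key, client_public_key, hashlib.sha256).digest()

# Step 4: the KMS verifies the tag before accepting and archiving the key.
accepted = hmac.compare_digest(
    tag, hmac.new(temp_key, client_public_key, hashlib.sha256).digest()
)
print(accepted)   # True: the request is authentic, so the KMS archives the key

# A request tagged with the wrong temporary key is rejected.
forged = hmac.new(os.urandom(16), client_public_key, hashlib.sha256).digest()
print(hmac.compare_digest(tag, forged))   # False (overwhelmingly likely)
```

This is also why distributing temporary keys electronically is risky: anyone who intercepts the key can produce a valid tag and masquerade as the user.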

1. You can always just type an address into your client, and Exchange will attempt to handle it. Depending on how address spaces are configured, though, custom recipient addresses may be necessary.

2. The remote MTA may be on the destination server, or it may be on a messaging bridgehead. Bridgehead servers are discussed in the "Bridgeheads" section of Chapter 7.

3. Which ones are sensible depends on your organization's needs. This topic is covered in more detail in Chapter 13.

4. How many KMSs you have in your organization will depend on your site and organization design. In general, one KMS per organization is enough for most uses. See Chapter 16 for more details.

Meet the Author

Paul Robichaux is an experienced software developer and author. He's worked on UNIX, Macintosh, and Win32 development projects over the past six years, including a stint on Intergraph's OLE team. He is the author of the Windows NT Server 4 Administrator's Guide.
