Effective COM: 50 Ways to Improve Your COM and MTS-based Applications / Edition 1



In Effective COM, the authors, Don Box, Keith Brown, Tim Ewald, and Chris Sells, offer 50 concrete guidelines for creating COM-based applications that are more efficient, robust, and maintainable. Drawn from the authors' extensive practical experience working with and teaching COM, these rules of thumb, pitfalls to avoid, and experience-based pointers will enable you to become a more productive and successful COM programmer.

These guidelines appear under six major headings: the transition from C++ to COM; interfaces, the fundamental element of COM development; implementation issues; the unique concept of apartments; security; and transactions. Throughout this book, the issues unique to the MTS programming model are addressed in detail. Developers will benefit from such insight and wisdom as:

  • Define your interfaces before you define your classes (and do it in IDL)
  • Design with distribution in mind
  • Dual interfaces are a hack. Don't require people to implement them
  • Don't access raw interface pointers across apartment boundaries
  • Avoid creating threads from an in-process server
  • Smart Interface Pointers add at least as much complexity as they remove
  • CoInitializeSecurity is your friend. Learn it, love it, call it
  • Use fine-grained authentication
  • Beware exposing object references from the middle of a transaction hierarchy
  • Don't rely on JIT activation for scalability

and much more invaluable advice.

For each guideline, the authors present a succinct summary of the challenge at hand, extensive discussion of their rationale for the advice, and many compilable code examples. Readers will gain a deeper understanding of COM concepts, capabilities, and drawbacks, and the know-how to employ COM effectively for high quality distributed application development. A supporting Web site, including source code, can be found at http://www.develop.com/effectivecom.


This publication defines the best practices and reusability criteria of the Component Object Model (COM) for developers. The guidelines and best practices in this publication are intended to enhance the efficiency, reusability, and reliability of applications utilizing COM and its distributed variant, DCOM. This is not a COM tutorial, so you should already be familiar with the basics of COM, its interfaces, and related technologies such as the Microsoft Transaction Server (MTS) and C++. This publication is intended for practicing developers who need a greater, more in-depth understanding of the relationships and ramifications inherent in COM/DCOM-based development.


Editorial Reviews

COM is Microsoft's middle-tier technology for object-oriented, distributed applications development. Four authors from a COM education firm offer guidelines for creating COM-based applications based on their own practical experiences working with and teaching COM. The book is arranged in six chapters: the transition from C++ to COM; interfaces; implementation issues; apartments; security; and transactions. Its aim is to provide working developers of COM and MTS with solutions to common design and coding problems. Annotation c. by Book News, Inc., Portland, Or.
Davide Marcato

Beware (and Overcome) the Hidden Traps of COM/MTS Programming

It should be no secret to anyone that carrying out good (D)COM development is significantly more difficult than programming C++ and Windows applications the classic way. The game gets even more complicated when MTS (Microsoft Transaction Server) comes into play, with its additional burden of intricacies related to the rules and constraints imposed by the transactional context. This complexity should not be ascribed to the intrinsic difficulty of some esoteric COM aspects; after all, most of them are shared by the other distributed, component-based paradigms currently on the cutting edge of the technological wave, namely CORBA and (Enterprise) JavaBeans. The true reason why it all appears so prone to errors is the higher degree of discipline required to benefit from the positive aspects of the paradigm, both in the thought that must go into every design decision and in the great care required during the coding phase (the latter gains particular importance if you work in C++). I find this necessary addition of thought to even apparently innocuous and basic tasks beneficial in the long run to the mindset of many badly educated (or lazy, depending on the case) programmers and object-oriented design (OOD) specialists. The other side of the coin is the steep learning curve that keeps many from really getting familiar with the COM way of thinking and programming in a short timeframe.

It is arguable that the tardiness shown by the industry in endorsing the COM message and investing in it stems at least in part from the lack of explanatory and comprehensive documentation on the foundations (conceptual and implementation-level) of the model. Now that the primary hole has been more or less filled by a decent amount of quality literature, most notably Essential COM, authored by one of the coauthors of this text and reviewed by DDJ's ERCB some months ago, many engineers in the industry are struggling to apply the newly digested paradigm to everyday software projects, often facing unexpected difficulties and uncertainties. The problem lies in the point I made earlier: many developers have come into contact with COM/MTS and know its theory reasonably well, but they are stuck in the second part of the learning curve -- the one that extends from theoretical knowledge up to actual hands-on expertise, the kind required to effectively build COM-based systems of nontrivial dimension.

The COM universe is so vast and the paradigm shift so large that it is often daunting to become acquainted and secure in working with it, whether as an architectural designer or a hardcore implementer.

When you find yourself in this situation, any reliable source of suggestions, proven guidelines, and exhaustive answers to recurring doubts greatly helps you understand and overcome the many nontrivial issues. That's where Effective COM fits right in. The book can be thought of as a distilled dispenser of 50 rules of thumb and clearly explained guidelines stemming from the combined wisdom amassed by the four coauthors over many years of real-world experience and research.

The structure, not only the title, clearly resembles that of Scott Meyers's most excellent Effective C++. Each numbered rule is first stated and then explained and motivated, with frequent recourse to short, focused code snippets rigorously in pure C++, either to demonstrate the rule at work or to present the alternative solutions and drawbacks that made the authors opt for other possibilities. I have mixed feelings about whether this subdivision of the book into 50 seemingly independent aphorisms really fits this topic as well as it did the C++ language in Meyers's case -- but it immensely helped the authors split what they had to say into short, manageable blocks. In this respect, the book is more digestible than its predecessor Essential COM, as one can blissfully read it in chunks of limited length, a few pages at a time, and even follow an order different from that proposed by the numbering.

The writing shows the same clean, direct style that characterized the aforementioned precursor. Considering the essential design shared by most Addison-Wesley texts of this type, with little or no graphical elements or screenshots and copious pages of written text and short code fragments, the reader might at first get the impression of an overly academic manual. But as s/he proceeds with the reading, this sentiment quickly turns into appreciation for the directness of the concepts illustrated. When the topics get tough, the last thing you desire is a bunch of distracting elements and side notes steering you away from the center of the analysis.

In conclusion, this is neither an introductory text nor a tutorial. If your daily job is that of a manager, or if you are still familiarizing yourself with COM, you'd better save your money for the moment. But if you are an experienced developer spending hours a day with COM/MTS and C++, or simply a COM programmer with much more theoretical knowledge than practical experience and strong intentions to avoid the typical errors and general misconceptions, you will do yourself a big favor by purchasing a copy. --Dr. Dobb's Electronic Review of Computer Books


Product Details

  • ISBN-13: 9780201379686
  • Publisher: Addison-Wesley
  • Publication date: 11/20/1998
  • Series: Addison-Wesley Object Technology Series
  • Edition number: 1
  • Pages: 240
  • Product dimensions: 7.50 (w) x 9.30 (h) x 0.60 (d)

Meet the Author

Don Box is a leading educator, recognized authority on the Component Object Model (COM), coauthor of the Simple Object Access Protocol (SOAP) specification, and coiner of the term "COM is Love." He recently joined Microsoft as an architect in the Microsoft® .NET Developer and Platform Evangelism Group.

Earlier in his career, Box cofounded DevelopMentor Inc., a component software think tank aimed at educating developers on the use of COM, Java, and XML. A popular public speaker, Box is known for engaging audiences around the world, combining deep technical insight with often outrageous stunts.

Keith Brown focuses on application security at Pluralsight, which he cofounded with several other .NET experts to foster a community, develop content, and provide premier training. Keith regularly speaks at conferences, including TechEd and WinDev, and serves as a contributing editor and columnist to MSDN Magazine.

Tim Ewald is a Director of Content at DevelopMentor, a premier developer services company. His research and development work focuses on the design and implementation of scalable systems using component technologies such as COM and Java. Tim has authored or co-authored several DevelopMentor courses, including the MTS and COM+ curriculum. He is also a co-author of Effective COM (Addison-Wesley), a former columnist for DOC and Application Strategies, and a frequent conference speaker. Before joining DevelopMentor, Tim worked as an independent consultant specializing in COM and related technologies.

Chris Sells is a content strategist on the Microsoft MSDN content team. Previously, he was the director of software engineering at DevelopMentor. Chris is the author of Windows Telephony Programming (Addison-Wesley, 1998) and Windows Forms Programming in Visual Basic .NET (Addison-Wesley, 2004), and coauthor of Effective COM (Addison-Wesley, 1999), ATL Internals (Addison-Wesley, 1999), and Essential .NET, Volume 1 (Addison-Wesley, 2003).


Read an Excerpt


The evolution of the Component Object Model (COM) has in many ways paralleled the evolution of C++. Both movements shared a common goal of achieving better reuse and modularity through refinements to an existing programming model. In the case of C++, the preceding model was procedural programming in C, and C++'s added value was its support for class-based object-oriented programming. In the case of COM, the preceding model was class-based programming in C++, and COM's added value is its support for interface-based object-oriented programming.

As C++ evolved, its canon evolved as well. One notable work in this canon was Scott Meyers' Effective C++. This text was perhaps the first text that did not try to teach the reader the basic mechanics and syntax of C++. Rather, Effective C++ was targeted at the working C++ practitioner and offered 50 concrete rules that all C++ developers should follow to craft reasonable C++-based systems. The success of Effective C++ required a critical mass of practitioners in the field working with the technology. Additionally, Effective C++ relied on a critical mass of supporting texts in the canon. At the time of its initial publication, the supporting texts were primarily The C++ Programming Language by Stroustrup and The C++ Primer by Lippman, although a variety of other introductory texts were also available.

The COM programming movement has reached a similar state of critical mass. Given the mass adoption of COM by Microsoft as well as many other development organizations, the number of COM developers is slowly but surely approaching the number of Windows developers. Also, five years after its first public release, there is finally a sufficiently large canon to lay the tutorial groundwork for a more advanced text. To this end, Effective COM represents an homage to Scott Meyers' seminal work and attempts to provide a book that is sufficiently approachable that most working developers can easily find solutions to common design and coding problems.

Virtually all existing COM texts assume that the reader has no COM knowledge and focus most of their attention on teaching the basics. Effective COM attempts to fill a hole in the current COM canon by providing guidelines that transcend basic tutorial explanations of the mechanics or theory of COM. These concrete guidelines are based on the authors' experiences working with and training literally thousands of COM developers over the last four years as well as on the communal body of knowledge that has emerged from various Internet-based forums, the most important of which is the DCOM mailing list hosted at DCOM-request@discuss.microsoft.com.

This book owes a lot to the various reviewers who offered feedback during the book's development. These reviewers included Saji Abraham, David Chappell, Steve DeLassus, Richard Grimes, Martin Gudgin, Ted Neff, Mike Nelson, Peter Partch, Wilf Russell, Ranjiv Sharma, George Shepherd, and James Sievert. Special thanks go to George Reilly, whose extensive copyediting showed the authors just how horrible their grammar really is. Any errors that remain are the responsibility of the authors. You can let us know about these errors by sending mail to effectiveerrata@develop.com. Any errata or updates to the book will be posted to the book's Web page, ...


Table of Contents


Shifting from C++ to COM.
Interfaces.
Implementation Issues.
Apartments.
Security.
Transactions.
About the Authors.



First Chapter


Threads often add complexity to already complex software. Threads often introduce performance problems in software that is already too slow. Threads typically add defects to software that is already too buggy. Nonetheless, COM developers are forced to work in multithreaded processes all of the time. Fortunately, the COM apartment provides at least some semblance of order to what might otherwise be a completely chaotic environment. Unfortunately, the architecture of COM apartments has evolved over the years, leaving many developers confused, not because of any inherent complexity but rather because of the evolving story of the apartment. Although things have been fairly stable for a while, there is still a considerable body of literature that contains outdated information and terminology (the fact that the term apartment model still appears in print in 1998 is embarrassing, given that all of COM is based on apartments). This chapter investigates some of the common techniques that are used to ensure that a component has at least a fighting chance at surviving when deployed in the potentially toxic environment that is a Win32 process.

29. Don't access raw interface pointers across apartment boundaries.

Interface pointers live in apartments. They are bound to their apartments by court orders that don't allow them to leave without an escort. Break the law and you'll pay the penalty. Maybe not today, maybe not tomorrow, but you'll get hurt eventually. Case in point: imagine that an interface pointer to a proxy originates in apartment A. Being the threading wizard that you are, you choose to tuck this interface pointer into a global variable so that another thread in your process can have access to the interface:

IUnknown* g_pSink = 0;
HANDLE g_hEventSinkAvailable; // auto-reset event

void SetSink(IUnknown* pSink) {
  (g_pSink = pSink)->AddRef( );
  SetEvent(g_hEventSinkAvailable);
}
void GetSink(IUnknown** ppSink) {
  WaitForSingleObject(g_hEventSinkAvailable, INFINITE);
  *ppSink = g_pSink;
  g_pSink = 0;
}
STDMETHODIMP CFoo::Advise(IUnknown* pSink) {
  SetSink(pSink); // publish the pointer for another thread
  return S_OK;
}

Ignoring any potential race conditions (a critical section would solve this, but race conditions are irrelevant to this discussion), imagine how this code works. Some client of yours calls Advise, passing an interface pointer to an implementation of some outgoing interface that you plan to call out on eventually. You publish the sink by calling SetSink, allowing another thread to grab that sink pointer via GetSink. You even call AddRef for the new thread (good job - you obviously read Item 30). All seems well and good until you try to call out on the sink, at which time you immediately discover that the call fails with RPC_E_WRONGTHREAD (check those HRESULTs - see Item 2). What happened?

Apparently, you were fortunate enough to be dealing with a proxy from an external client. Proxies remember the apartment they were born in (each apartment is identified by a 64-bit object exporter ID, or OXID) and verify that they are being called on a thread executing in that apartment. Each thread that calls CoInitialize or CoInitializeEx has a thread-local storage slot that holds information such as the OXID of the apartment in which the thread is executing. Imagine what would happen if you didn't have a proxy (e.g., the interface pointer was obtained from an in-process server). The code would probably work most of the time but would fail randomly, during important demos at large conferences, because you have likely violated the object's concurrency requirements.

Apartments are designed to allow objects with different concurrency and reentrancy requirements to coexist peaceably with one another, but to make this happen, you must follow the rules: don't pass interface pointers between apartments. Marshal them across apartment boundaries instead.

So here's the code used to correct the problem:

HRESULT SetSink(IUnknown* pSink) {
  HRESULT hr = CoMarshalInterThreadInterfaceInStream(
    IID_IUnknown, pSink, &g_pStream);
  SetEvent(g_hEventSinkAvailable); //auto-reset event
  return hr;
}
HRESULT GetSink(IUnknown** ppSink) {
  WaitForSingleObject(g_hEventSinkAvailable, INFINITE);
  HRESULT hr = CoGetInterfaceAndReleaseStream(
    g_pStream, IID_IUnknown, (void**)ppSink);
  g_pStream = 0;
  return hr;
}

Note that the long-winded function names used for marshaling are simply shortcuts for creating a memory-based IStream implementation and marshaling a MEOW packet into it. (Note that this particular stream is implemented specifically to be usable across apartments.) The fundamental API used to marshal interface pointers is CoMarshalInterface; for unmarshaling, it is CoUnmarshalInterface. You can easily imagine how the interthread marshaling APIs are implemented:

HRESULT CoMarshalInterThreadInterfaceInStream(
  REFIID iid, IUnknown* pUnk, IStream** pps) {
  HRESULT hr = CreateStreamOnHGlobal(0, TRUE, pps);
  if (FAILED(hr)) return hr;
  hr = CoMarshalInterface(*pps, iid, pUnk,
    MSHCTX_INPROC, 0, MSHLFLAGS_NORMAL);
  if (FAILED(hr)) {
    (*pps)->Release( );
    *pps = 0;
  }
  return hr;
}

And similarly for unmarshaling:

HRESULT CoGetInterfaceAndReleaseStream(
  IStream* ps, REFIID iid, void** ppv) {
  HRESULT hr = CoUnmarshalInterface(ps, iid, ppv);
  if (FAILED(hr))
    *ppv = 0;
  ps->Release( );
  return hr;
}

Bear in mind the basic rule that interface pointers, no matter where they come from (proxies or otherwise), should be marshaled across apartment boundaries and never used directly by a thread executing in a different apartment unless explicitly documented otherwise. The particular implementation of IStream returned by CoMarshalInterThreadInterfaceInStream is one of the canonical examples of an explicitly documented exception.

Should you be worried about ending up with multiple layers of proxies with all this marshaling? No. CoMarshalInterface is implemented such that if a standard proxy is marshaled, the resulting marshaled OBJREF will refer to the original object, not to the proxy.

You may discover a nasty limitation of CoMarshalInterThreadInterfaceInStream and friends if you attempt to modify the code in GetSink to allow multiple threads to unmarshal the interface pointer. It is easy to imagine a scenario in which multiple threads need to send notification messages. So you change GetSink to look like this:

HRESULT GetSink(IUnknown** ppSink, bool bLastReader) {
  WaitForSingleObject(g_hEventSinkAvailable, INFINITE);
  if (!bLastReader)
    g_pStream->AddRef( ); // faulty attempt to keep the stream alive
  HRESULT hr = CoGetInterfaceAndReleaseStream(
    g_pStream, IID_IUnknown, (void**)ppSink);
  if (bLastReader)
    g_pStream = 0;
  return hr;
}

However, when you step through this new implementation of GetSink in the debugger, you'll notice that CoGetInterfaceAndReleaseStream fails when the second caller attempts to unmarshal the interface. It's not just for convenience that CoGetInterfaceAndReleaseStream releases the stream automatically; rather, it's because the stream was only good for a single unmarshal anyway. Notice how the implementation of CoMarshalInterThreadInterfaceInStream calls CoMarshalInterface with MSHLFLAGS_NORMAL. The semantics of a normal marshal (sometimes referred to as a call marshal) are that the marshaled OBJREF is good for at most one unmarshal. These semantics are enforced by CoUnmarshalInterface, which implicitly calls CoReleaseMarshalData on the OBJREF after a successful unmarshal. After CoUnmarshalInterface returns, the OBJREF is no longer valid.

It turns out that there is another type of marshal that is commonly used, the table marshal. Table marshals are designed to create MEOW packets that can be placed in tables for many consumers. These MEOW packets can be unmarshaled zero or more times, which is exactly what you want. Unfortunately, table marshaling a proxy is illegal for reasons outside the scope of this discussion, so it is not a general solution to the marshal-once-unmarshal-many problem at hand. Fortunately, modern implementations of COM provide the Global Interface Table (GIT), which allows apartment-relative interface pointers to be converted into apartment-neutral "GIT cookies" that can be passed freely between apartments within the same process. The GIT works properly with both real objects and proxies, so it is possible to achieve the effects of table marshaling within a single process. Item 33 illustrates the most common application of the GIT in detail.
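As a rough sketch of the pattern (error handling and table cleanup elided; the API names are the standard COM GIT interfaces, not code from the book), SetSink and GetSink could be recast in GIT terms like this:

```
// Sketch: apartment-neutral publication of a sink via the GIT.
IGlobalInterfaceTable* g_pGIT = 0; // created once at server startup
DWORD g_dwSinkCookie = 0;

HRESULT SetSink(IUnknown* pSink) {
  HRESULT hr = CoCreateInstance(CLSID_StdGlobalInterfaceTable, 0,
    CLSCTX_INPROC_SERVER, IID_IGlobalInterfaceTable, (void**)&g_pGIT);
  if (FAILED(hr)) return hr;
  // turn the apartment-relative pointer into an apartment-neutral cookie
  return g_pGIT->RegisterInterfaceInGlobal(pSink, IID_IUnknown,
                                           &g_dwSinkCookie);
}
HRESULT GetSink(IUnknown** ppSink) {
  // any thread in the process may redeem the cookie, any number of times
  return g_pGIT->GetInterfaceFromGlobal(g_dwSinkCookie, IID_IUnknown,
                                        (void**)ppSink);
}
```

When the sink is no longer needed, RevokeInterfaceFromGlobal removes the entry from the table.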

30. When passing an interface pointer between one MTA thread and another, use AddRef.

In Item 20, we discussed the importance of only using interface pointers that have been AddRef'ed. Threading adds an interesting twist to any talk about resource management, as threads tend to have distinct lifetimes. To see the potential problem, try to debug the following code:

struct CustomerID {
  CustomerID(long a, long b) : A(a), B(b) { }
  long A;
  long B;
};
DWORD WINAPI ThreadProc(void* pv) {
  CustomerID* pid = (CustomerID*)pv;
  // look up customer in remote database
  // perform analysis on customer info
  // spool report to printer...
  return 0;
}
void PrintCustomerInfoForJoe( ) {
  CustomerID id(100, 223); // joe's id
  DWORD tid;
  CloseHandle(CreateThread(0, 0,
       ThreadProc, &id,
       0, &tid));
} // oops

Pretty obvious bug, agreed? The thread that calls PrintCustomerInfoForJoe has a distinct stack on which it creates an instance of a CustomerID as an automatic variable. This thread then proceeds to create a second thread, passing a pointer to this local variable. As soon as the original thread returns from the function call, the customer ID goes out of scope and is overwritten by arbitrary stack "gunk." The secondary thread is out of luck. The problem is that the secondary thread had no control over the lifetime of a resource that it required to do its job. Here's the fix:

DWORD WINAPI ThreadProc(void* pv) {
  CustomerID* pid = (CustomerID*)pv;
  // ...use the customer ID...
  delete pid; // the thread now controls the lifetime
  return 0;
}
void PrintCustomerInfoForJoe( ) {
  CustomerID* pid = new CustomerID(100, 223);
  DWORD tid;
  CloseHandle(CreateThread(0, 0,
       ThreadProc, pid,
       0, &tid));
}

This example demonstrates how important it is for you to coordinate the lifetime of resources with the lifetimes of the threads that depend on them.

COM interface pointers are also resources that need to be managed. Each thread that you employ should maintain a reference on any interface pointers it plans to call through. By simply following this rule, you can avoid many nasty race conditions. Consider a similar example using interface pointers:

DWORD WINAPI ThreadProc(void* pv) {
  IUnknown* pUnk = (IUnknown*)pv;
  // ...use the interface pointer...
  pUnk->Release( ); // balance the AddRef below
  return 0;
}
void SpawnThreadWithInterface(IUnknown* pUnk) {
  // to avoid race conditions, be sure to AddRef
  // before the new thread is created
  // (assumes calling thread is also in MTA)
  pUnk->AddRef( );
  DWORD tid;
  CloseHandle(CreateThread(0, 0,
       ThreadProc, pUnk,
       0, &tid));
}

Note that both threads in this example are running in a single apartment (namely the MTA for the process), so we don't have to worry about marshaling the interface pointer (see Item 29).

31. User-interface threads and objects must run in single-threaded apartments (STAs).

You may want a thread in your application to use COM and to service a Windows user interface (UI). If so, that thread needs to run in a single-threaded apartment (STA). The reason is simple: both user interfaces and STAs have thread affinity. MTAs do not have thread affinity and offer no guarantees about how their threads will be used. UI elements such as window handles are inextricably tied to the thread that created them. If that thread is in an MTA, the UI will break in a big way.

Imagine you're writing an application that wants to display a user interface and also make outbound calls to a collection of remote objects. For argument's sake, assume the client creates a number of additional worker threads to do some background processing and that these threads make calls to the same set of remote objects. It may seem reasonable to have all the client threads live in the process's MTA so they can share the remote objects' proxies without having to marshal them from thread to thread. You therefore decide to make all your threads enter the MTA, with the application's main thread taking on the further responsibility of creating the UI and then running a message pump.

In this scenario, the worker threads will be just fine. But during any outbound COM calls on the main thread, the UI will hang. This is because when the UI thread (which is in the MTA) makes a call to another apartment, it enters the appropriate proxy, marshals the current call stack, and heads into the channel. Inside the channel, it makes a blocking remote procedure call (RPC). While the thread is waiting inside the channel for the RPC response message, any window messages being posted to its queue simply stack up there. This has the effect of hanging the UI for any windows that thread created, probably causing the end user to terminate the process prematurely.

This doesn't happen to a thread in an STA. When an STA thread enters the channel, it drops into a COM-managed message loop while COM creates another thread (or picks one up from a cache) and pushes this new thread into the black hole of RPC to wait for the response packet. This complex mechanism is necessary for STAs to service reentrant callbacks using the same thread, as described in Item 35. The loop also supports the dispatching of window messages and can keep a UI running.

The channel doesn't provide this behavior for MTA threads because it is unnecessary. An MTA supports reentrance implicitly; callbacks can be serviced directly by other RPC receive threads. This approach conserves resources; the channel doesn't consume an extra thread for each outbound call. Unfortunately, it also makes it impossible to implement an MTA-based user interface that works.

The solution is to leave the worker threads in the MTA but move the user-interface thread to an STA. Then, when the UI thread makes an outbound call, the channel will do the right thing: leave the calling thread in a message pump so that at a minimum, painting, task switching, and window activation messages are processed as necessary. In this case any proxy acquired in the MTA (via a worker thread) must be marshaled to the STA (the UI thread) before it can be used there.

Now imagine you are writing a server instead of a client. You're not planning on making outbound COM calls - you just want a set of objects that responds to requests by modifying the state of a user interface displayed wherever the server process is running. To maximize concurrency, you deploy all these objects in an MTA. The problem here is that user interface elements have thread affinity - the thread that created them must service them. Dropped in the MTA, however, these elements are subject to bombardment by random threads as incoming calls arrive.

If a thread in the MTA invokes a method on an object that creates a window, and then the thread returns to the RPC receive pool and is later destroyed, the window isn't going to work very well. In fact, when a thread is destroyed, any windows that were created by that thread are also destroyed automatically. To keep your windows alive and messages flowing properly, you need a thread that runs for the life of the user interface. The application's main thread is a reasonable choice for this since it has to hang around waiting to revoke the server's class objects. So, hell-bent on trying to make your UI run from the MTA, you modify your application's main thread to enter the MTA. You then register your class objects and spin in a message loop, instead of simply waiting for an event or whatever other signal you usually use for shutdown notification. Now you have to force that thread to create all the user interface elements so that it can dispatch their messages. You can't use COM to do this because all of the RPC threads are in the same apartment (the MTA), and no marshaling takes place between them. Instead you'd have to use some other interthread communication mechanism. Even if you got it to work, you still couldn't make outbound COM calls (from your UI thread) without freezing your windows. Clearly, this is not a good approach.

The solution is to move the objects that build the UI into an STA. If this is all the objects really do, then have the client deal with them directly. If the objects do other processing as well, and you want that work done concurrently, have the client send messages to objects in the MTA, which in turn send messages to objects in the STA whenever they need to manipulate the user interface.

In any case, live by this one simple rule: a thread that wants to expose a user interface and use COM needs to run in an STA.
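A minimal sketch of such a thread (window-creation details and error handling elided; this is the standard Win32 message-pump idiom, not code from the book):

```
// Sketch: a UI thread that enters an STA and pumps messages.
DWORD WINAPI UIThreadProc(void*) {
  CoInitializeEx(0, COINIT_APARTMENTTHREADED); // enter a new STA
  // ...create windows; unmarshal any proxies handed over from the MTA...
  MSG msg;
  while (GetMessage(&msg, 0, 0, 0)) { // pump keeps the UI responsive and
    TranslateMessage(&msg);           // lets COM dispatch STA callbacks
    DispatchMessage(&msg);
  }
  CoUninitialize();
  return 0;
}
```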

32. Avoid creating threads from an in-process server.

COM is about hiding implementation details. Using interfaces to delimit component boundaries, COM protects clients from almost every aspect of an object's implementation, including where that component actually resides. Location transparency is one of COM's most useful and admired features. In theory, you can write the same client and object code without worrying about the execution context of either. In practice this is largely true, with a few notable exceptions. Threading is one of them. Writing in-process servers that create their own threads is a dangerous business and should be avoided.

You may be thinking, What about the ThreadingModel named value I specify when I register my DLL? Isn't that supposed to make all the threading details work out right? The answer is yes, if you work under the assumption that your DLL is passive and it's the client executable that manages all the threads. Once your in-process server threads itself and becomes active in its own right, unpleasant issues start to arise.

Recall that in-process servers are loaded on demand and are unloaded (if not in use) when their client calls CoFreeUnusedLibraries. Internally, CoFreeUnusedLibraries walks the list of in-process servers currently loaded and calls each one's DllCanUnloadNow function. If the return value is S_OK, the server can be unloaded; if the return value is S_FALSE, the server must remain loaded. The typical implementation of DllCanUnloadNow simply checks the value of a server lock count that tracks outstanding references to objects. If your in-process server starts up threads, you need to make sure that the DLL supplying their code doesn't get unloaded out from under them. If it does, your process will likely crash due to a hardware-generated exception the next time the thread is scheduled to execute. There are two ways to avoid this problem. One is to make sure your threads don't outlive the last outstanding reference to an object; therefore, when the server lock count goes to zero, the threads are no longer running and the DLL can be unloaded safely. The other is to treat a thread's existence as a reason to keep the server loaded and to increment the server lock count while the thread is running.
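The thread-keeps-the-server-loaded idea can be sketched portably. This is a minimal illustration, not Win32 code: the S_OK_/S_FALSE_ constants and the ThreadLock helper are invented stand-ins, and std::atomic stands in for InterlockedIncrement/Decrement. Each worker thread holds a lock for its entire lifetime, so DllCanUnloadNow reports "busy" while any thread is still running.

```cpp
#include <atomic>
#include <cassert>

// Invented stand-ins for the COM HRESULTs S_OK (0) and S_FALSE (1).
const long S_OK_ = 0;
const long S_FALSE_ = 1;

// Server lock count: incremented per outstanding object reference
// and, under the second approach, once per running worker thread.
std::atomic<long> g_cLocks{0};

// Sketch of the typical DllCanUnloadNow: unload only when nothing
// (object or thread) is keeping the server alive.
long DllCanUnloadNow() {
    return g_cLocks.load() == 0 ? S_OK_ : S_FALSE_;
}

// RAII helper a worker thread holds for its whole lifetime, so the
// DLL's code stays loaded while the thread can still execute it.
struct ThreadLock {
    ThreadLock()  { ++g_cLocks; }
    ~ThreadLock() { --g_cLocks; }
};
```

A worker thread function would construct a ThreadLock on entry; the destructor decrements the count just before the thread exits.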

The second approach is more useful than the first because it decouples the lifetime of your threads from the lifetime of references to your objects, which the client controls. Allowing your thread to act totally independently of the client is desirable, but it isn't as simple as it seems. The trouble is that clients expect their in-process servers to be passive. When a client process decides to shut down, it releases all its references to objects and assumes its servers may be safely unloaded. This latter assumption is made inside the client's call to CoUninitialize, which doesn't bother to consult DllCanUnloadNow before it plucks each server from memory. You can't blame the client for this - it doesn't know any better and there is no way to tell it that a thread in the DLL is still actively pursuing some objective.

What happens if your client calls CoUninitialize while you still have threads running? CoUninitialize will unload your in-process server, so you need to stop your threads before the DLL disappears. The obvious place for this code is inside DllMain:

 HANDLE g_hStopEvent = 0; // Shutdown event
 HANDLE g_hThread = 0; // Thread handle to wait for

 // Assume g_hStopEvent and g_hThread are created
 // when the worker thread is started
 BOOL WINAPI DllMain(HINSTANCE h, DWORD dwReason, void*) {
  if( dwReason == DLL_PROCESS_DETACH ) {
   SetEvent(g_hStopEvent); // tell the worker to stop
   WaitForSingleObject(g_hThread, INFINITE);
  }
  return TRUE;
 }

When the DLL is being unloaded, DllMain is called with the DLL_PROCESS_DETACH flag as its second argument. In this code, the DLL sets a Win32 event to signal its worker thread that it's time to shut down (the worker thread has to periodically check for this). Then the main thread blocks, waiting for the worker thread to stop.

This looks great, but it results in deadlock every time. The trouble is that access to DllMain is serialized. The main thread has entered the function and is waiting inside it. When the worker thread notices that g_hStopEvent has been signaled and exits, part of that shutdown is a notification to DllMain (with DLL_THREAD_DETACH as the second parameter), which must complete before the thread is destroyed. So the main thread holds DllMain while waiting for the worker thread to exit, yet the worker thread needs DllMain to do this, and neither will let go until the other does: ergo, deadlock.

Calling DisableThreadLibraryCalls to stop the worker thread's call to your DllMain won't solve the problem. Even though your in-process server is willing to let the thread go without any sort of good-bye, other DLLs loaded in the process are not. Since the synchronization of calls to DllMain is done with a single lock that protects all the DLLs in a process, the worker thread's calls to these other libraries will cause the deadlock situation as well.

What if the code were modified so that the main thread didn't wait on the worker thread's existence, but rather on some per-thread event signaled by the worker thread function just before it exits? The trouble with this model is that the main thread's call to WaitForSingleObject returns as soon as the worker thread calls SetEvent, and the DLL can be unloaded before the thread function's final few instructions complete. Putting DllMain to sleep for a little while to avoid this issue would also be bad since you would probably freeze your client's user interface. Besides, you can never know how long it would have to sleep anyway, since the thread won't signal until it has acquired the global DllMain mutex and sent its DLL_THREAD_DETACH message.

So you can't wait for the thread to stop naturally without deadlocking. And you can't wait for another signal that the thread is about to stop because that inevitably introduces a race condition between the worker thread, which is trying to complete, and the main thread, which is trying to unload the DLL (and the worker thread's code). What if you took the gloves off, stopped playing nice, and called TerminateThread? In other words, have D11Main simply stop your worker threads when the library is being unloaded. This will work, sort of. The problem is that when you terminate a thread, it doesn't have a chance to exit cleanly. Specifically, it doesn't have a chance to notify all the DLLs in the process that it is going away. Any DLL expecting to clean up resources that were allocated on a per-thread basis will be disappointed. This sort of leak can lead to everything from degradation of performance to a horrible crash any time between the call to TerminateThread and the rest of your process's life. Using this technique is incredibly dangerous.

Ultimately, there is no way to both disconnect the lifetime of your threads from the lifetime of references to your objects and to shut down your in-process server correctly. Accept it and move on. If you need to create threads, build an executable server instead. Inside an executable server, you have complete control of the lifetime of both threads and objects, and all of these shutdown problems vanish. Of course, access to an out-of-process server is slower, but it's not much worse than calling across apartments in the client process, if that's what you had in mind for your thread. And, of course, having a system that doesn't crash is something of a feature, too. All of this being said, someone reading this book will still try to create threads from in-process servers. But at least our consciences are now clear.

33. Beware the Free-Threaded Marshaler (FTM).

The Free-Threaded Marshaler may have remained shrouded in mystery but for two things. First, an author of this book who shall go unnamed documented it in a well-read and well-respected technical journal. Second, a certain COM component code-generating wizard lists it as an option along with useful features such as whether the object supports ISupportErrorInfo and which apartment type the object would like. The only thing that could have made it worse is if the option were selected by default. You should not select this option yourself unless you are fully aware of the implications.

Recall that COM supports two types of apartments: single-threaded (STA) and multithreaded (MTA). STA components enjoy the convenience of handling only a single request at a time via the Windows message queue, whereas MTA components experience the scalability of being able to handle multiple requests simultaneously. This arrangement makes it advantageous for STA components to manage user-interface chores and then to create MTA components to handle background computational tasks. However, because these two kinds of components must communicate with one another, there is the overhead of a proxy and a stub, even when the components share the same address space. The FTM was invented to eliminate this overhead.

The logic goes something like this. By definition, MTA components are thread neutral and handle thread synchronization internally. If an MTA component shares the same address space with an STA component, why shouldn't the STA component be able to call directly into the MTA component, without going through a proxy?

When an object aggregates the FTM's implementation of IMarshal, two things happen. If an interface from the object is marshaled out-of-process, the FTM allows standard marshaling to happen just as you'd expect. On the other hand, if an interface from the object is marshaled within the process (i.e., MSHCTX_INPROC is used), the FTM custom marshals, filling the marshaling packet with the raw interface pointer value. When the interface is unmarshaled in a different apartment, the apartment receives an interface pointer directly on the component, instead of receiving an interface pointer to a proxy. In effect, objects using the FTM become apartment neutral.

The good news is that being apartment neutral is often faster. The bad news is that it sometimes isn't, and unless you're careful, it doesn't even work. The problem lies not in the apartment-neutral object itself but in what it's holding. Objects often hold interface pointers to other objects. Those interface pointers either point to proxies or directly to other objects. In general, these interface pointers are apartment relative unless the underlying object is also apartment neutral. This means that a component using the FTM cannot safely hold interface pointers, as is shown below:

 class CoPenguin : public IBird {
  CoPenguin(IFoodSource* pfs)
   : m_cRef(0), m_pfs(pfs), m_pftm(0) {
   if( !m_pfs ) throw "Need food!";
   m_pfs->AddRef( );
   // aggregate the FTM
   CoCreateFreeThreadedMarshaler(this, &m_pftm);
  }

  virtual ~CoPenguin( ) {
   m_pftm->Release( );
   m_pfs->Release( );
  }

  HRESULT QueryInterface(REFIID iid, void** ppv) {
   if( IID_IMarshal == iid )
    return m_pftm->QueryInterface(iid, ppv);
   // remainder of IUnknown left as an exercise
  }

  // IBird
  HRESULT GetFood( ) {
   // Will return an error if m_pfs is a proxy
   // and the caller is in an apartment other than
   // the one used to construct the CoPenguin
   return m_pfs->GetFood( );
  }

  // etc.

  long m_cRef;
  IFoodSource* m_pfs; // apartment relative!
  IUnknown* m_pftm; // FTM is apartment neutral
 };

The reason that the proxy returns an error on a method call outside of its apartment is that interface pointers are apartment relative. To allow the component using the FTM to work from any apartment, interface pointers must be ditched in favor of an object reference that is apartment agnostic. To avoid the proxy's complaint, the interface pointer must be marshaled to the proper apartment for each method call on the component using the FTM. The intuitive way to solve this problem is to marshal an interface into an OBJREF using the CoMarshalInterface function and to store that rather than the interface pointer. An OBJREF is essentially an apartment-independent representation of an interface pointer, which is exactly what we're looking for.

When using CoMarshalInterface, there are two kinds of marshaling one could choose, normal marshaling (sometimes called call marshaling) and table marshaling. A normal marshal's properties are that it can only be unmarshaled once and that the data basically becomes invalid if the unmarshal has not yet occurred after a short timeout. A table marshal, on the other hand, can be unmarshaled any number of times and has no timeout. It seems perfectly suited for our use. Unfortunately, a table marshal is not allowed on a proxy, so cross-apartment interface pointers cannot be used with this technique (see Item 29 for more details).

Essentially, we need a component that has enough internal knowledge of COM to be able to marshal an interface pointer into an OBJREF and hold onto it for us; such a component must also allow us to unmarshal the OBJREF into several different apartments. The COM Global Interface Table (GIT) provides just this functionality. The GIT is a process-wide, apartment-neutral object that holds interface pointers and returns apartment-independent interface pointer identifiers (also called GIT cookies). The GIT can be accessed with CoCreateInstance using CLSID_StdGlobalInterfaceTable. The GIT provides its caching and retrieval function via the IGlobalInterfaceTable interface:

 interface IGlobalInterfaceTable : IUnknown {
  HRESULT RegisterInterfaceInGlobal(
   [in] IUnknown *punk,
   [in] REFIID riid,
   [out] DWORD *pdwCookie);
  HRESULT RevokeInterfaceFromGlobal(
   [in] DWORD dwCookie);
  HRESULT GetInterfaceFromGlobal(
   [in] DWORD dwCookie,
   [in] REFIID riid,
   [out, iid_is(riid)] void** ppv);
 }
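The register/get/revoke cookie protocol can be mimicked in a portable toy. This is a sketch only - all names are invented, and unlike the real GIT no marshaling happens here; the point is just the shape of the cookie-based calling pattern.

```cpp
#include <cassert>
#include <map>
#include <mutex>

// Toy analog of the GIT's cookie protocol. The real GIT marshals
// the interface into an apartment-independent form; this toy just
// maps invented cookies to raw pointers to show the call pattern.
class InterfaceTable {
    std::map<unsigned long, void*> entries_;
    unsigned long next_ = 1;  // next cookie to hand out
    std::mutex lock_;         // the table is process wide, so lock it
public:
    unsigned long Register(void* p) {
        std::lock_guard<std::mutex> l(lock_);
        entries_[next_] = p;
        return next_++;       // the "cookie"
    }
    void* Get(unsigned long cookie) {
        std::lock_guard<std::mutex> l(lock_);
        auto it = entries_.find(cookie);
        return it == entries_.end() ? nullptr : it->second;
    }
    void Revoke(unsigned long cookie) {
        std::lock_guard<std::mutex> l(lock_);
        entries_.erase(cookie);
    }
};
```

The cookie, unlike the pointer it names, is safe to store in a member variable and hand between threads, which is exactly the property the FTM-based object needs.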

Now an object using the FTM coupled with the GIT to hold its interface pointers can retrieve them on demand for every method call. This method will ensure that each interface pointer is marshaled properly into whichever apartment the calling thread belongs to. For example:

 class CoPenguin : public IBird {
  CoPenguin(IFoodSource* pFoodSource)
   : m_cRef(0), m_dwCookie(0), m_pgit(0), m_pftm(0) {
   // Cache the GIT
   // (error handling omitted for brevity)
   CoCreateInstance(CLSID_StdGlobalInterfaceTable,
    0, CLSCTX_INPROC_SERVER,
    IID_IGlobalInterfaceTable, (void**)&m_pgit);
   // hold the food source via a GIT cookie
   m_pgit->RegisterInterfaceInGlobal(pFoodSource,
    IID_IFoodSource, &m_dwCookie);
   // aggregate the FTM
   CoCreateFreeThreadedMarshaler(this, &m_pftm);
  }

  virtual ~CoPenguin( ) {
   m_pgit->RevokeInterfaceFromGlobal(m_dwCookie);
   m_pftm->Release( );
   m_pgit->Release( );
  }

  // IBird
  HRESULT GetFood( ) {
   IFoodSource* pfs;
   HRESULT hr = m_pgit->GetInterfaceFromGlobal(
    m_dwCookie, IID_IFoodSource, (void**)&pfs);
   if (SUCCEEDED(hr)) {
    hr = pfs->GetFood( );
    pfs->Release( );
   }
   return hr;
  }

  // etc.

  long m_cRef;
  DWORD m_dwCookie; // cookie is apartment neutral
  IGlobalInterfaceTable* m_pgit;
  IUnknown* m_pftm; // FTM is apartment neutral
 };


Notice that the name of this item was not "Use the FTM with wild abandon." Even with the GIT the use of the FTM should be avoided unless you have a really compelling reason to become apartment neutral (e.g., an Active Server Page expects objects at application or session scope to be apartment neutral). One of the problems is the sheer complexity associated with the FTM. Having an object that holds references to other objects - an extremely common thing to do - is now much more complicated. However, the complexity can be dealt with. The main problem is efficiency.

One of the main reasons to group components into the same apartment is so that they can share the same concurrency model (i.e., they can work together without the overhead of the proxy/stub). However, unless all closely cooperating components use the FTM, there will be a proxy/stub pair when using the GIT on held interfaces. In the case in which one method call on a component using the FTM translates into several method calls on numerous cached interfaces, you're paying the interapartment hop cost many times rather than just once. The promise of the FTM is efficiency, but the reality is that it is often less efficient unless you're willing to enforce apartment neutrality on all closely cooperating components.

34. Beware physical locks in the MTA.

If your objects live inside an MTA they can, by definition, be concurrently accessed by more than one thread. It is up to you to make them thread safe by protecting both their data members and any global data they share. You can do this using any of the standard Win32 synchronization mechanisms for locking (e.g., critical sections, mutexes). Methods that need to access either type of data in a thread-safe way - avoiding a simultaneous read and write or simultaneous writes - must acquire a lock before doing so and release it after they're done. The code fragment below shows a simple implementation class that needs to protect its state with a mutex.

  class Long : public ILong {
   long m_l;
   HANDLE m_lock; // mutex created in constructor

   STDMETHODIMP ReadValue(long *pl) {
    WaitForSingleObject(m_lock, INFINITE);
    *pl = m_l;
    ReleaseMutex(m_lock);
    return S_OK;
   }

   STDMETHODIMP WriteValue(long l) {
    WaitForSingleObject(m_lock, INFINITE);
    m_l = l;
    ReleaseMutex(m_lock);
    return S_OK;
   }

   STDMETHODIMP DoubleValue(void) {
    WaitForSingleObject(m_lock, INFINITE);
    m_l *= 2;
    ReleaseMutex(m_lock);
    return S_OK;
   }
  };
Every access to the m_l member variable is bracketed by calls to acquire and release the m_lock mutex so that only one operation touches m_l at a time.
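This bracketing is easy to get wrong once a method grows early returns or can throw. A portable sketch (std::mutex standing in for the Win32 mutex; the class shape is illustrative, not the book's code) shows the scoped-guard idiom that guarantees the release half of every acquire:

```cpp
#include <cassert>
#include <mutex>

// Sketch: std::lock_guard releases the lock on every exit path,
// so each method's access to m_l is bracketed automatically.
class Long {
    long m_l = 0;
    std::mutex m_lock;
public:
    long ReadValue() {
        std::lock_guard<std::mutex> guard(m_lock);
        return m_l;   // guard releases as the method returns
    }
    void WriteValue(long l) {
        std::lock_guard<std::mutex> guard(m_lock);
        m_l = l;
    }
    void DoubleValue() {
        std::lock_guard<std::mutex> guard(m_lock);
        m_l *= 2;
    }
};
```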

Notice that this code is very careful to release m_lock before returning from each method. What would happen if you wrote code that returned from a method without releasing a lock?

 STDMETHODIMP Long::Start(void) {
  WaitForSingleObject(m_lock, INFINITE);
  return S_OK;
 }

 STDMETHODIMP Long::DoubleValue(void) {
  m_l *= 2; // object "locked" via Start
  return S_OK;
 }

 STDMETHODIMP Long::End(void) {
  ReleaseMutex(m_lock);
  return S_OK;
 }

The goal is to allow your client to lock the object by calling Start, hold any number of other clients at bay while calling DoubleValue to update m_l any number of times, then unlock the object by calling End. Attractive as this might be, and assuming all clients followed the protocol, this code won't work!

In this case, threads spell trouble with a capital T. Many of the Win32 synchronization objects (including the mutex) have thread affinity. That is, they must be acquired and released by a specific physical thread. It is impossible for one thread to get a mutex and for another thread to subsequently release it, so for the code above to work, the same physical thread must call Start and End. Since calls to objects in the MTA are dispatched immediately using arbitrary RPC receive threads, there is absolutely no way you can guarantee this for objects running inside the MTA. The essence of this quandary is that the Start method returns, allowing its thread to leave the MTA while holding a physical lock. Once the thread has left, there is no way to know if, when, or where that particular thread will be used again. The best you can hope for is that the locking thread exits and the lock is abandoned (this isn't much of a hope ... ).

A similar issue arises if your code makes outbound calls to other objects while holding a physical lock.

 // Implementation of Doubler object
 STDMETHODIMP Doubler::DoubleLong(ILong *pl) {
  long l;
  pl->ReadValue(&l);
  l *= 2;
  pl->WriteValue(l);
  return S_OK;
 }

 // Implementation of Long object
 STDMETHODIMP Long::ReadValue(long *pl) {
  WaitForSingleObject(m_lock, INFINITE);
  *pl = m_l;
  ReleaseMutex(m_lock);
  return S_OK;
 }

 STDMETHODIMP Long::WriteValue(long l) {
  WaitForSingleObject(m_lock, INFINITE);
  m_l = l;
  ReleaseMutex(m_lock);
  return S_OK;
 }

 // New implementation of DoubleValue,
 // delegates to Doubler
 STDMETHODIMP Long::DoubleValue(void) {
  WaitForSingleObject(m_lock, INFINITE);
  // delegate to Doubler object
  m_pDoubler->DoubleLong(static_cast<ILong*>(this));
  ReleaseMutex(m_lock);
  return S_OK;
 }

This modified implementation of Long::DoubleValue delegates to an instance of a Doubler object. The Long sends a pointer to its own interface as input to m_pDoubler->DoubleLong so that the Doubler object can call back to Long::ReadValue and Long::WriteValue to get and set the data it manipulates. All this works great, unless the Doubler object is in another apartment.

Since Long is implemented in an MTA, the delegating call to m_pDoubler->DoubleLong is genuinely blocking - that thread is blocked in the channel waiting for the outbound RPC call to complete. Note that before blocking, the thread acquired m_lock. When the Doubler object calls back through the ILong* it was passed, the call is directly dispatched by an arbitrary RPC receive thread. That thread enters ReadValue and immediately waits to acquire m_lock. The result is deadlock, which is definitely not the desired result. Again, the problem is that many Win32 synchronization objects, in this case the m_lock mutex, have thread affinity, whereas the MTA does not.

How can you solve these problems? The second problem is easier, so let's tackle it first. There are a number of options here. One is to avoid making outbound calls while holding a physical lock. In other words, DoubleValue shouldn't make the outbound call to m_pDoubler->DoubleLong because m_lock has been acquired. This may seem somewhat harsh, but it means you never have to worry about this sort of deadlock. A slightly looser constraint can also be used: avoid making any outbound call that can possibly result in a callback that may cause deadlock. In this case, DoubleValue shouldn't make the outbound call to m_pDoubler->DoubleLong because m_lock has been acquired and because DoubleLong will call back to Long::ReadValue, which will result in deadlock. If you choose to use this more flexible policy, realize that you are shouldering a heavy burden - you must carefully analyze your code for all potential deadlock situations.

A third approach is to never use INFINITE as the second parameter to WaitForSingleObject (or any of its siblings), so that deadlock can be broken (albeit rather harshly).

 STDMETHODIMP Long::ReadValue(long *pl) {
  // Wait for up to 1 minute
  DWORD dw = WaitForSingleObject(m_lock, 60000);
  if (WAIT_TIMEOUT == dw)
   return E_UNEXPECTED;
  *pl = m_l;
  ReleaseMutex(m_lock);
  return S_OK;
 }

If ReadValue were written this way, the callback wouldn't cause deadlock, but it would result in a runtime error. This is better than hanging the system, but only just barely (also, it may be impossible to choose a timeout interval that will not erroneously reject valid lock requests).

Finally, you could protect your state using a handmade lock based on a logical thread of execution. The calls from DoubleValue to m_pDoubler->DoubleLong and back to ReadValue are causally related; that is, they are all part of the same logical thread. If you could identify a logical thread of execution, you could use that information to control access to m_l. To make this work, you have to propagate some token that identifies your logical thread. You can do this either explicitly, by passing the token as a parameter that is forwarded through each call, or implicitly, by using a channel hook that forwards the value through each call. In the first case, your interfaces have to be designed with this in mind because each method has to carry an extra parameter. In the second case, you have to have a channel hook registered in every process through which the logical thread passes.

You can apply this final solution, using a logical lock, to the first problem as well - wanting to hold a lock across multiple method invocations - but it's harder. By having a client acquire a logical lock, again represented as a token, and passing it back into each method, you can deal with the fact that the call to Start and the call to End are dispatched on different physical threads. However, if you want your logical lock to have the same semantics as a physical lock (e.g., a mutex), you have to implement Start such that any threads attempting to acquire the logical lock wait until the first client is done. When the client finally calls End, you must allow only one of the threads waiting in Start to proceed. In addition, your logical lock has to be prepared to deal with abandonment in case the client that holds it never returns to release it.
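One way to picture such a logical lock is ownership keyed by a caller-supplied token instead of by physical thread identity. The sketch below is an invented simplification (no recursion count, no abandonment detection); its only point is that the thread calling Release need not be the thread that called Acquire, as long as it presents the same token:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// Sketch of a logical lock: ownership is a token, not a thread.
// Simplifications: no recursion count, no abandonment handling.
class LogicalLock {
    std::mutex m_;
    std::condition_variable cv_;
    long owner_ = 0;  // 0 means unowned
public:
    void Acquire(long token) {
        std::unique_lock<std::mutex> l(m_);
        // wait until unowned, or already owned by this logical thread
        cv_.wait(l, [&]{ return owner_ == 0 || owner_ == token; });
        owner_ = token;
    }
    void Release(long token) {
        std::lock_guard<std::mutex> l(m_);
        if (owner_ == token) {
            owner_ = 0;
            cv_.notify_all();  // wake one waiting "Start" caller
        }
    }
    bool HeldBy(long token) {
        std::lock_guard<std::mutex> l(m_);
        return owner_ == token;
    }
};
```

A Start implementation would Acquire with the client's token and End would Release it; any caller presenting a different token blocks in Acquire until the first client is done.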

The moral of the story is: be very careful about how you synchronize your threads inside the MTA. Don't return from a call to the MTA while holding a physical lock. Don't call out of the MTA while holding a physical lock unless you can guarantee that the call won't result in a callback that deadlocks or that you'll detect such a deadlock if it occurs and break it. Finally, if you just can't do one of these things, implement some protocol for logical locking instead.

35. STAs may need locks too.

Many developers realize that writing thread-safe components for the MTA is hard. They like to spend their evenings and weekends at home, so they choose to deploy their components in STAs instead. If you are one of them, be aware that although the STA environment is certainly more forgiving, locking may still be an issue.

Consider a process that uses multiple STAs. If components in separate apartments access shared data (global or static variables, for example), that data must be protected using critical sections or the like. In this scenario, multiple STAs present problems very similar to those arising in the MTA.

You shouldn't write code that acquires a lock in one method and releases it in another. Your client may never call back, and the lock won't be abandoned until the STA that owns it is destroyed. This style also leaves the data open to access from any method of any object in the apartment that owns the lock. You also shouldn't write code that acquires a lock and then makes an outbound call if there is any possibility that it can result in a call to an object in another STA in the same process that then tries to acquire the lock. This restricts direct calls into other apartments in the same process as well.

Multiple STAs make you nervous, so you decide to stick with just one. That means one thread to worry about and no locking issues, right? Wrong. But how can that be, since by definition a single STA provides no concurrency? The answer is simple: STAs have to support reentrance.

Consider the case in which a client passes a reference to object A (which lives in its own apartment) to an object B (which lives in another apartment). Assume that object B makes a call back to object A within the scope of the client's initial outbound call. For this to work, the client's apartment must be reentered.

 interface IBackward;

 interface IForward : IUnknown {
  HRESULT Call([in] IBackward *pBack);
 }

 interface IBackward : IUnknown {
  HRESULT Callback(void);
 }

 // IForward implementation
 HRESULT ObjectB::Call(IBackward *pBack) {
  HRESULT hr = pBack->Callback( );
  assert(SUCCEEDED(hr)); // callback must succeed
  return hr;
 }

 // IBackward implementation
 HRESULT ObjectA::Callback(void) {
  return S_OK;
 }

 // Client code
 IForward *pf = 0;
 if (SUCCEEDED(GetObjectB(&pf))) {
  IBackward *pb = new ObjectA;
  pf->Call(pb); // B will call back to A
  pf->Release( );
  delete pb;
 }

If this code didn't work for all apartment types, COM wouldn't be very useful. So the COM remoting layer goes to great pains to make sure it does work, even if both apartments in this scenario are STAs.

Frankly, this is highly unintuitive. The call out of the client apartment to object B is synchronous (as all COM calls are as of this writing). The callback from object B to object A must be serviced by a thread that has entered object A's apartment. In the case of an STA, that means the apartment's one and only thread, which is waiting for the initial outbound call to return. You'd expect this to result in deadlock, but it doesn't. It turns out that calls out of an STA aren't really blocking; they're quasi-blocking. To make an outbound call, your STA thread calls into a proxy, marshals the call stack for transfer to the target apartment, and then calls into the channel, which is COM's wrapper around RPC. But the STA thread doesn't actually make the RPC call; RPC is blocking and would stop the calling thread cold. So the channel starts a separate thread (or grabs an existing one from a cache) to make the blocking RPC call, and the STA thread waits, spinning in a message pump. Remember that inbound requests are dispatched to STA-based objects by queuing a message for the appropriate thread. This is why STA-based servers need to pump messages. It also explains how STA reentrance works: when a callback is made to an STA whose thread is in an outbound call, the invocation is posted to the thread's message queue and will be dispatched by the pump inside the channel.

Note that this technique allows the callback to be serviced on the correct thread, keeping COM's apartment laws intact.
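The quasi-blocking scheme can be sketched portably. All names here are invented: std::async stands in for the channel's RPC helper thread, and a plain queue stands in for the Windows message queue. The STA thread hands the blocking call to a helper, then pumps its own queue until the call completes, so a nested callback posted during the call still runs on the STA thread.

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <functional>
#include <future>
#include <mutex>
#include <queue>
#include <thread>

// The STA's "message queue": work posted here must execute on the
// STA thread itself to honor apartment rules.
std::queue<std::function<void()>> g_inbox;
std::mutex g_inboxLock;

void PostToSTA(std::function<void()> f) {
    std::lock_guard<std::mutex> l(g_inboxLock);
    g_inbox.push(std::move(f));
}

// Quasi-blocking outbound call: a helper thread blocks in the "RPC"
// while the STA thread spins in a pump, dispatching inbound work
// (including nested callbacks) until the outbound call returns.
void QuasiBlockingCall(std::function<void()> rpc) {
    std::future<void> done = std::async(std::launch::async, rpc);
    while (done.wait_for(std::chrono::milliseconds(1)) !=
           std::future_status::ready) {
        std::function<void()> f;
        {
            std::lock_guard<std::mutex> l(g_inboxLock);
            if (g_inbox.empty()) continue;
            f = std::move(g_inbox.front());
            g_inbox.pop();
        }
        f(); // nested call serviced on the STA thread - reentrance
    }
}
```

The helper thread never touches the queued work; only the original (STA) thread dispatches it, which is the apartment rule the real channel's hidden pump preserves.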

Without the message pump hidden in the channel, it would be impossible to call back into an STA. So the presence of the message pump is good. What happens if your client is in an outbound call to object B and some other client calls into your client's apartment trying to access object A? The other client's inbound call gets queued, as you'd expect, and is dispatched by the pump inside the channel. The presence of the message pump in this case may or may not be good. What if the other client's invocation on object A will modify some state that your client thread expected to remain unchanged during its outbound call to object B? In this case, unconstrained reentrance is a problem.

When state needs to be protected, mutexes (and other synchronization objects) spring to mind. Unfortunately, they won't help you here because a mutex is designed to mutually exclude access between distinct threads, and each STA consists of only a single thread. If your client thread acquires a mutex before calling out to another apartment, don't expect the lock to stop some other client's call into your apartment from accessing the state the lock is supposed to protect. The channel will service the inbound call on your own thread (which has been "borrowed" for exactly this purpose). Besides, a typical use of an STA is to house a user interface, and blocking a UI thread by waiting on a mutex is a cardinal sin (see Item 31).

To solve this problem, you need some kind of logical lock, and COM provides one via the IMessageFilter interface. IMessageFilter provides the hooks necessary to extend the channel's message pump logic to control reentrance by selectively blocking incoming requests:

 interface IMessageFilter : IUnknown {
  DWORD HandleInComingCall(
   [in] DWORD dwCallType, [in] HTASK htaskCaller,
   [in] DWORD dwTickCount,
   [in] INTERFACEINFO *pInterfaceInfo);
  DWORD RetryRejectedCall(
   [in] HTASK htaskCallee, [in] DWORD dwTickCount,
   [in] DWORD dwRejectType);
  DWORD MessagePending(
   [in] HTASK htaskCallee, [in] DWORD dwTickCount,
   [in] DWORD dwPendingType);
 }
Each STA can register its own custom implementation of IMessageFilter by calling CoRegisterMessageFilter. The filter's implementation of HandleInComingCall allows it to control whether an inbound call is processed or rejected. The default filter provided by COM lets everything through. A custom filter can postpone other clients' inbound calls when your thread is in an outbound call:

 DWORD CustomFilter::HandleInComingCall(DWORD dwCallType,
  HTASK htaskCaller, DWORD dwTickCount, INTERFACEINFO* pII) {
  // a new top-level call is inbound, but the STA
  // thread is currently making an outbound call
  if (dwCallType == CALLTYPE_TOPLEVEL_CALLPENDING)
   return SERVERCALL_RETRYLATER;
  return SERVERCALL_ISHANDLED;
 }
The dwCallType flag indicates what sort of call is coming in. CALLTYPE_TOPLEVEL_CALLPENDING indicates a new request from some other apartment while your thread is making an outbound call. The sample above chooses to postpone such calls by returning SERVERCALL_RETRYLATER. A dwCallType value of CALLTYPE_NESTED would have indicated a nested callback (as was discussed earlier), which should typically be allowed through. The sample above allows all such calls to be processed by returning SERVERCALL_ISHANDLED. A third value, SERVERCALL_REJECTED, is also available to force a client to cancel the call.

Interestingly, HandleInComingCall is called whenever an inbound request arrives, whether or not your STA thread is currently in an outbound call. This is useful if you want to protect state for other reasons. For example, if your STA thread is providing a user interface in addition to servicing COM calls, it might want to display a modal dialog box to allow the user to edit some data. If the same data can be modified via calls to objects in the STA, the dialog processing code has to be prepared to deal with state changes that take place between the time it was displayed and dismissed. If you don't expect much (or any) contention, and resolving the dialog's inaccurate view of the world isn't difficult, you can be optimistic and write the code to reconcile the differences when necessary.

If you expect a lot of contention, or resolving the dialog's inaccurate view of the world is difficult, you can be pessimistic and stop the inbound COM calls that corrupt the dialog's view by changing the state underneath it.

 DWORD CustomFilter::HandleInComingCall(DWORD dwCallType,
  HTASK htaskCaller, DWORD dwTickCount, INTERFACEINFO* pII) {
  // a new top-level call is inbound, but the STA
  // thread is currently in an outbound call
  if (dwCallType == CALLTYPE_TOPLEVEL_CALLPENDING)
   return SERVERCALL_RETRYLATER;
  // a new top-level call is inbound, but the STA
  // thread is currently servicing the user
  // interface and doesn't want the world to change
  // while it does so
  else if (dwCallType == CALLTYPE_TOPLEVEL &&
   g_bLockStateBecauseUserInterfaceIsBusy)
   return SERVERCALL_RETRYLATER;
  return SERVERCALL_ISHANDLED;
 }

This version of the custom filter blocks any incoming calls whenever the global g_bLockStateBecauseUserInterfaceIsBusy flag is set. Note that the INTERFACEINFO* argument indicates the particular object, interface, and method being used. This information can be used to make finer-grained decisions about calls to accept or reject. For example, you could accept calls that you were certain wouldn't modify protected state.

Even with these reentrance issues, it is usually easier to write components for an STA than for the MTA. If you choose this path, however, remember two things: using multiple STAs introduces many of the problems of the MTA, and even with a single STA, locking issues arise out of the necessity to support reentrance and callbacks.

36. Avoid extant marshals on in-process objects.

Examine the following code:

 STDMETHODIMP CoPaint::CreateBrush(REFIID iid,
  void** ppv) {
  return CoCreateInstance(CLSID_CoBrush, 0,
   CLSCTX_LOCAL_SERVER, iid, ppv);
 }

 STDMETHODIMP CoPaint::CreatePen(REFIID iid,
  void** ppv) {
  return CoCreateInstance(CLSID_CoPen, 0,
   CLSCTX_INPROC_SERVER, iid, ppv);
 }

See any potential problems? Imagine that the CoPaint server is implemented as a simple, single-threaded (STA-based) out-of-process server. Note that CoPaint relies on Helper objects to implement various Child objects. Conceptually, this is a great way to partition an application, since a separate development team can own each server. However, beware of arbitrarily handing out pointers to objects implemented in DLL (in-process) servers. Recall that a DLL simply makes a guest appearance in the client's apartment; it has no control over the lifetime of that apartment. Therefore, when an external client calls CoPaint::CreatePen in the code above, an in-process instance of CoPen is created inside the apartment (in a separate DLL) and a reference to that object is marshaled back to the caller.

Traditional COM servers allow external clients to control their lifetimes, so imagine that the CoPaint server is implemented similarly, such that when the last client disconnects, the server will shut down. But it is impossible to really know if all objects living in the server's apartment have no external clients, since CoPen is implemented in a separate DLL, with a separate lock count, invisible to CoPaint. The effect is that any external references to CoPen will be left dangling when all external references to objects implemented in CoPaint are released and the server shuts down.

Note that instances of CoBrush do not experience this phenomenon, since the CreateBrush method exports a pointer to a proxy, and standard proxies marshal in such a way that the caller gets a duplicate proxy that references the original object in the out-of-process CoBrush server. The CoBrush server has a lifetime that is completely independent of the CoPaint server.

Note that if you could hook CoPen's implementation of AddRef and Release, then the problem would be solved, since you would know when there were no outstanding references to the object. This is the case with aggregation (the inner object delegates to the outer's implementation of IUnknown), so this rule doesn't apply to aggregated objects, whose lifetimes are intrinsically controlled by the outer object. In fact, given this, it would be perfectly acceptable to simply create a generic implementation of IUnknown that aggregated an object from an in-process server and blindly delegated QueryInterface calls for everything except IID_IUnknown to the inner object. This would allow you to hand out pointers to the in-process object while retaining knowledge of the object's lifetime. Unfortunately, this "blind aggregation" mechanism only works with objects specifically written to support aggregation.

For objects that do not support aggregation, you could provide a wrapper class that simply delegates all calls to the in-process object, but this is a lot of work if written manually for each object. One of the authors of this book created a generic delegator that automates this approach and is even more efficient in delegating method calls than a simple handwritten implementation.
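One way to picture what such a wrapper buys you is the following toy model (plain C++, no real COM; all the names are invented for illustration). Because the server implements AddRef and Release itself and merely forwards the other calls, it knows exactly when the last reference to the handed-out object goes away; a real wrapper would implement IUnknown and forward every interface method.

```cpp
// Toy model of the delegating-wrapper idea. Names are illustrative.
struct IInner {                // stands in for the DLL-provided object
    virtual int Draw() = 0;
    virtual ~IInner() {}
};

struct Pen : IInner {          // the non-aggregatable in-process object
    int Draw() { return 42; }
};

class LifetimeWrapper {        // what the server hands out instead of Pen
    IInner* m_inner;
    long m_refs;
public:
    explicit LifetimeWrapper(IInner* inner) : m_inner(inner), m_refs(1) {}
    long AddRef() { return ++m_refs; }
    long Release() {
        long refs = --m_refs;
        if (refs == 0) {       // the server now *knows* the object is gone
            delete m_inner;
            delete this;
        }
        return refs;
    }
    int Draw() { return m_inner->Draw(); }  // call forwarded to inner object
};
```

The point of the model is the Release path: the server, not the invisible DLL, observes the transition to zero and can keep its process alive until it happens.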

There are other techniques for dealing with this lifetime issue. The first is simply to avoid the problem in the first place. Don't ever export pointers to in-process objects implemented in separate DLLs. If you're not willing to settle for this conservative approach, you might try explicitly documenting lifetime restrictions within the semantics of the interfaces you design. For instance, you could document a hierarchy of objects such that Child objects must be released before Parent objects (although this is somewhat restricting for clients).

A more invasive technique is possible if you can convince the DLL implementers to notify you when it is OK to shut down. OLE documents use such a protocol (see IOleContainer::LockContainer for an example). However, when using general-purpose components not written against such a protocol, there is not much you can do to discover when it is reasonable to exit your process.

Since it is generally unsafe to export pointers to in-process objects, you may wonder how surrogates work. Isn't their entire job in life to create instances of objects in DLL-based servers and expose them out-of-process so they can be used offhost? Well, yes. However, the surrogate plumbing has incestuous knowledge of how many extant marshals exist within the process. When the surrogate plumbing detects that the number of marshals has dropped to zero, it notifies the surrogate process (usually dllhost.exe) that it is safe to shut down. Unfortunately, there is no documented way to detect when there are no extant marshals within a process, so this technique is not an option for our lowly out-of-process object broker.

It would be possible to run the DLL within the system-supplied surrogate process and then explicitly ask for a local server implementation of CoPen. Although safe, this would now require a change to the component's configuration to assign it an AppID (which is required for surrogate activation).

37. Use CoDisconnectObject to inform the stub when you go away prematurely.

It is sometimes necessary to destroy an object with a nonzero reference count. In general, this is a bad idea, since it is likely that other parties are expecting you to stay around (they are holding extant references, after all). This caveat being stated, it still may be necessary to leave this earth prematurely either due to failfast policies or when a server is shut down via some out-of-band technique (e.g., COM servers that run as NT services responding to a net stop request from the administrator).

When an object shuts down with a nonzero reference count, it must ensure that no other calls are dispatched by parties that may hold extant references. If the only other parties that hold references are simply in-process objects from the same source base, this can be achieved using some sort of global switch that indicates that a process-wide shutdown is taking place:

 MyClass::~MyClass(void) {
  if (m_pSomeOtherObject && !g_bShuttingDown)
   m_pSomeOtherObject->Release();
 }

While this code assumes that all users of the object are aware of this idiom, that assumption is often reasonable within a single object hierarchy.

The previous example does not take into account clients that live outside the process of the object. In this case, it is possible that the client may issue a call to a proxy after the object has decided to destroy itself. If this happens, then the call will arrive at the stub, which now holds a dangling reference to the once-healthy object. When the stub dispatches the call through its dangling reference, the results will be less than pretty, since it is likely that the memory once occupied by your object now contains a completely random vptr/vtable combination. To avoid this problem, the object should call CoDisconnectObject to inform the stub that the object is going away.

CoDisconnectObject does two things. First, it informs the stub to release any held references to the actual object. Second, it informs the stub to no longer dispatch calls from extant proxies. If a call arrives after the call to CoDisconnectObject, the issuing proxy will be notified immediately that the object is dead. This fact is communicated to the caller via a well-known HRESULT (RPC_E_DISCONNECTED). This is considerably more pleasant than having random code execute in the server....
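The stub-side effect of CoDisconnectObject can be modeled in miniature. This is plain C++, not real COM; the names and the substitute failure value are illustrative (the real failure code is the HRESULT RPC_E_DISCONNECTED), but the two behaviors it exercises are the two the text describes: the held reference is released, and later dispatches fail instead of calling through freed memory.

```cpp
// Toy model of the stub behavior that CoDisconnectObject requests.
const long S_OK_SIM = 0;
const long RPC_E_DISCONNECTED_SIM = 1;  // stands in for RPC_E_DISCONNECTED

struct Object {
    long refs;
    Object() : refs(0) {}
    long Ping() { return S_OK_SIM; }
};

class Stub {
    Object* m_obj;             // the stub's held reference
public:
    explicit Stub(Object* o) : m_obj(o) { ++o->refs; }
    // What CoDisconnectObject asks of the stub: release the reference
    // and fail future dispatches rather than touch a dead object.
    void Disconnect() {
        if (m_obj) { --m_obj->refs; m_obj = 0; }
    }
    long Dispatch() {
        return m_obj ? m_obj->Ping() : RPC_E_DISCONNECTED_SIM;
    }
};
```

After Disconnect, the model's Dispatch reports the well-known error rather than executing through the dangling pointer, which is exactly the trade the real API makes on your behalf.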



The evolution of the Component Object Model (COM) has in many ways paralleled the evolution of C++. Both movements shared a common goal of achieving better reuse and modularity through refinements to an existing programming model. In the case of C++, the preceding model was procedural programming in C, and C++'s added value was its support for class-based object-oriented programming. In the case of COM, the preceding model was class-based programming in C++, and COM's added value is its support for interface-based object-oriented programming.

As C++ evolved, its canon evolved as well. One notable work in this canon was Scott Meyers' Effective C++. This text was perhaps the first text that did not try to teach the reader the basic mechanics and syntax of C++. Rather, Effective C++ was targeted at the working C++ practitioner and offered 50 concrete rules that all C++ developers should follow to craft reasonable C++-based systems. The success of Effective C++ required a critical mass of practitioners in the field working with the technology. Additionally, Effective C++ relied on a critical mass of supporting texts in the canon. At the time of its initial publication, the supporting texts were primarily The C++ Programming Language by Stroustrup and The C++ Primer by Lippman, although a variety of other introductory texts were also available.

The COM programming movement has reached a similar state of critical mass. Given the mass adoption of COM by Microsoft as well as many other development organizations, the number of COM developers is slowly but surely approaching the number of Windows developers. Also, five years after its first public release, there is finally a sufficiently large canon to lay the tutorial groundwork for a more advanced text. To this end, Effective COM represents a homage to Scott Meyers' seminal work and attempts to provide a book that is sufficiently approachable that most working developers can easily find solutions to common design and coding problems.

Virtually all existing COM texts assume that the reader has no COM knowledge and focus most of their attention on teaching the basics. Effective COM attempts to fill a hole in the current COM canon by providing guidelines that transcend basic tutorial explanations of the mechanics or theory of COM. These concrete guidelines are based on the authors' experiences working with and training literally thousands of COM developers over the last four years as well as on the communal body of knowledge that has emerged from various Internet-based forums, the most important of which is the DCOM mailing list hosted at DCOM-request@discuss.microsoft.com.

This book owes a lot to the various reviewers who offered feedback during the book's development. These reviewers included Saji Abraham, David Chappell, Steve DeLassus, Richard Grimes, Martin Gudgin, Ted Neff, Mike Nelson, Peter Partch, Wilf Russell, Ranjiv Sharma, George Shepherd, and James Sievert. Special thanks go to George Reilly, whose extensive copyediting showed the authors just how horrible their grammar really is. Any errors that remain are the responsibility of the authors. You can let us know about these errors by sending mail to effectiveerrata@develop.com. Any errata or updates to the book will be posted to the book's Web page, http://www.develop.com/effectivecom.

The fact that some of the guidelines presented in this book fly in the face of popular opinion and/or "official" documentation from Microsoft may at first be confusing to the reader. We encourage you to test our assertions against your current beliefs and let us know what you find. The four authors can be reached en masse by sending electronic mail to effectivecom@develop.com.

Intended Audience

This book is targeted at developers currently using the Component Object Model and Microsoft Transaction Server (MTS) to develop software. Effective COM is not a tutorial or primer; rather, it assumes that the reader has already tackled at least one pilot project in COM and has been humbled by the complexity and breadth of distributed object computing. This book also assumes that the reader is at least somewhat familiar with the working vocabulary of COM as it is described in Essential COM. The book is targeted primarily at developers who work in C++; however, many of the topics (e.g., interface design, security, transactions) are approachable by developers who work in Visual Basic, Java, or Object Pascal.

What to Expect

The book is arranged in six chapters. Except for the first chapter, which addresses the cultural differences between "100% pure" C++ and COM, each chapter addresses one of the core atoms of COM.

Shifting from C++ to COM

Developers who work in C++ have the most flexibility when working in COM. However, it is these developers who must make the most adjustments to accommodate COM-based development. This chapter offers five concrete guidelines that make the transition from pure C++ to COM-based development possible. Aspects of COM/C++ discussed include exception handling, singletons, and interface-based programming.

Interfaces
The most fundamental atom of COM development is the interface. A well-designed interface will help increase system efficiency and usability. A poorly designed interface will make a system brittle and difficult to use. This chapter offers 12 concrete guidelines that help COM developers design interfaces that are efficient, correct, and approachable. Aspects of interface design discussed include round-trip optimization, semantic correctness, and common design flaws.

Implementations
Writing COM code in C++ requires a raised awareness of details, irrespective of the framework or class library used to develop COM components. This chapter offers 11 concrete guidelines that help developers write code that is efficient, correct, and maintainable. Aspects of COM implementation discussed include reference counting, memory optimization, and type-system errors.

Apartments
Perhaps one of the most perplexing aspects of COM is its concept of an apartment. Apartments are used to model concurrency in COM and do not have analogues in most operating systems or languages. This chapter offers nine concrete guidelines that help developers ensure that their objects operate properly in a multithreaded environment. Aspects of apartments discussed include real-world lock management, common marshaling errors, and life-cycle management.

Security
One of the few areas of COM that is more daunting than apartments is security. Part of this is due to the aversion to security that is inherent in most developers, and part is due to the fairly arcane and incomplete documentation that has plagued the security interfaces of COM. This chapter offers five concrete guidelines that distill the security solution space of COM. Aspects of security discussed include access control, authentication, and authorization.

Transactions
Many pages of print have been dedicated to Microsoft Transaction Server, but precious few of them address the serious issues related to the new transactional programming model implied by MTS. This chapter offers eight concrete pieces of advice that will help make your MTS-based systems more efficient, scalable, and correct. Topics discussed include the importance of interception, activity-based concurrency management, and the dangers of relying on just-in-time activation as a primary mechanism for enhancing scalability.

Acknowledgments
First and foremost, Chris would like to thank his wife, Melissa, for supporting him in his various extracurricular activities, including this book.

Thanks to J. Carter Shanklin and the Addison Wesley staff for providing the ideal writing environment. I couldn't imagine writing for another publisher.

Thanks to all of the reviewers for their thoughtful (and thorough) feedback.

Thanks to all my students as well as the contributing members of the DCOM and ATL mailing lists. Whatever insight this book provides comes from discussing our mutual problems with COM.

Last but not least, thanks to my fellow authors for their hard work and diligence in seeing this project through to the end. It is truly a pleasure and an honor to be included as an author with professionals of such caliber.

Don would like to thank the three other Boxes that fill up his non-COM lifestyle.

Thanks to my coauthors for sharing the load and waiting patiently for me to finish my bits and pieces (the hunger strike worked, guys).

A tremendous thanks to Scott Meyers for giving us his blessing to leverage his wildly successful format and apply it to a technology that completely butchers his life's work.

Thanks to all of my cohorts at DevelopMentor for tolerating another six months of darkness while I delayed yet another book project.

Thanks to J. Carter Shanklin at Addison-Wesley for creating a great and supportive environment.

Thanks to the various DCOM listers who have participated in a long but fun conversation. This book in many ways represents an executive summary of the megabytes of security bugs, MTS mysteries, and challenging IDL puzzles that have been posted by hundreds of folks on the COM front lines.

A special thanks goes to the Microsoft folks who work on COM and Visual C++, for all of the support over the years.

Keith would like to thank his family for putting up with all the late nights. The joy they bring to my life is immeasurable.

Thanks to Don, Tim, and Chris, for thinking enough of me to extend an invitation to participate in this important project.

Thanks to Mike Abercrombie and Don Box at DevelopMentor for fostering a home where independent thought is nourished and the business model is based on honesty and genuine concern for the community.

Thanks to everyone who participates in the often lengthy threads on the DCOM list. That mail reflector has been incredibly useful in establishing a culture among COM developers, and from that culture has sprung forth a wealth of ideas, many of which are captured in this book.

Thanks to Saji Abraham and Mike Nelson for their dedication to the COM community.

Thanks, Carter, this book is so much better than it possibly could have been if you had pressed us for a deadline.

And finally, thanks to all the students who have participated in my COM and security classes. Your comments, questions, and challenges never cease to drive me toward a deeper understanding of the truth.

First and foremost, Tim would like to thank his coauthors for undertaking this project and seeing it through to completion. As always, gentlemen, it's been a pleasure.

Also, thanks to friends and colleagues Alan Ewald, Owen Tallman, Fred Tibbitts, Paul Rielly, everyone at DevelopMentor, students, and the participants on the DCOM mailing list for listening to me go on and on about COM--nodding sagely, laughing giddily, or screaming angrily as necessary.

A special thanks to Mike, Don, and Lorrie for suffering through the earliest days of DM to produce an extraordinary environment for thinking.

And, of course, thanks to my family: Sarah for letting me wear a COM ring too, Steve and Kristin for reminding me about the true definition of success, Alan and Chris for allowing me to interrupt endlessly to ask geeky questions, and Nikke and Stephen Downes-Martin for accepting phone calls from any airport I happen to be in.

Finally, thank you J. Carter Shanklin and Addison-Wesley for letting us do our own thing.

Chris Sells
Portland, OR
August 1998
Don Box
Redondo Beach, CA
August 1998
Keith Brown
Rolling Hills Estates, CA
August 1998
Tim Ewald
Nashua, NH
August 1998

