Programming Applications for Microsoft Windows



An update to a bestselling, practical Windows programming guide, this title is a comprehensive inside look at the Windows 2000 and 64-bit Windows environments. It provides detailed system information that's unavailable elsewhere, including architectural and implementation details and sample code.

Product Details

  • ISBN-13: 9781572319967
  • Publisher: Microsoft Press
  • Publication date: 9/1/1999
  • Series: Microsoft Programming Series
  • Edition description: 4th ed.
  • Edition number: 4
  • Pages: 1200
  • Product dimensions: 7.59 (w) x 9.55 (h) x 2.10 (d)

Meet the Author

Jeffrey Richter is a cofounder of Wintellect, a training, debugging, and consulting firm dedicated to helping companies build better software faster. He is the author of the previous editions of this book, Windows via C/C++, and several other Windows®-related programming books. Jeffrey has been consulting with the Microsoft® .NET Framework team since October 1999.


Read an Excerpt

Chapter 9: Thread Synchronization with Kernel Objects

  • Wait Functions
  • Successful Wait Side Effects
  • Event Kernel Objects
    • The Handshake Sample Application
  • Waitable Timer Kernel Objects
    • Having Waitable Timers Queue APC Entries
    • Timer Loose Ends
  • Semaphore Kernel Objects
  • Mutex Kernel Objects
    • Abandonment Issues
    • Mutexes vs. Critical Sections
    • The Queue Sample Application
  • A Handy Thread Synchronization Object Chart
  • Other Thread Synchronization Functions
    • Asynchronous Device I/O
    • WaitForInputIdle
    • MsgWaitForMultipleObjects(Ex)
    • WaitForDebugEvent
    • SignalObjectAndWait

In the last chapter, we discussed how to synchronize threads using mechanisms that allow your threads to remain in user mode. The wonderful thing about user-mode synchronization is that it is very fast. If you are concerned about your thread's performance, you should first determine whether a user-mode thread synchronization mechanism will work for you.

While user-mode thread synchronization mechanisms offer great performance, they do have limitations, and for many applications they simply do not work. For example, the interlocked family of functions operates only on single values and never places a thread into a wait state. You can use critical sections to place a thread in a wait state, but you can use them only to synchronize threads contained within a single process. Also, you can easily get into deadlock situations with critical sections because you cannot specify a timeout value while waiting to enter the critical section.

In this chapter, we'll discuss how to use kernel objects to synchronize threads. As you'll see, kernel objects are far more versatile than the user-mode mechanisms. In fact, the only bad side to kernel objects is their performance. When you call any of the new functions mentioned in this chapter, the calling thread must transition from user mode to kernel mode. This transition is costly: it takes about 1000 CPU cycles on the x86 platform for a round-trip—and this, of course, does not include the execution of the kernel-mode code that actually implements the function your thread is calling.

Throughout this book, we've discussed several kernel objects, including processes, threads, and jobs. You can use almost all of these kernel objects for synchronization purposes. For thread synchronization, each of these kernel objects is said to be in a signaled or nonsignaled state. The toggling of this state is determined by rules that Microsoft has created for each object. For example, process kernel objects are always created in the nonsignaled state. When the process terminates, the operating system automatically makes the process kernel object signaled. Once a process kernel object is signaled, it remains that way forever; its state never changes back to nonsignaled.

A process kernel object is nonsignaled while the process is running, and it becomes signaled when the process terminates. Inside a process kernel object is a Boolean value that is initialized to FALSE (nonsignaled) when the object is created. When the process terminates, the operating system automatically changes the corresponding object's Boolean value to TRUE, indicating that the object is signaled.

If you want to write code that checks whether a process is still running, all you do is call a function that asks the operating system to check the process object's Boolean value. That's easy enough. You might also want to tell the system to put your thread in a wait state and wake it up automatically when the Boolean changes from FALSE to TRUE. This way, you can write code in which a thread in a parent process that needs to wait for the child process to terminate can simply put itself to sleep until the kernel object identifying the child process becomes signaled. As you'll see, Microsoft Windows offers functions that accomplish all this easily.

I've just described the rules that Microsoft has defined for a process kernel object. As it turns out, thread kernel objects follow the same rules. That is, thread kernel objects are always created in the nonsignaled state. When the thread terminates, the operating system automatically changes the thread object's state to signaled. Therefore, you can use the same technique in your application to determine whether a thread is no longer executing. Just like process kernel objects, thread kernel objects never return to the nonsignaled state.

The following kernel objects can be in a signaled or nonsignaled state:

  • Processes
  • Threads
  • Jobs
  • File and console standard input/output/error streams
  • Events
  • Waitable timers
  • Semaphores
  • Mutexes

Threads can put themselves into a wait state until an object becomes signaled. Note that the rules that govern the signaled/nonsignaled state of each object depend on the type of object. I've already mentioned the rules for process and thread objects. I discuss the rules for jobs in Chapter 5.

In this chapter, we'll look at the functions that allow a thread to wait for a specific kernel object to become signaled. Then we'll look at the kernel objects that Windows offers specifically to help you synchronize threads: events, waitable timers, semaphores, and mutexes.

When I was first learning this stuff, it helped if I imagined that kernel objects contained a flag (the wave-in-the-air kind, not the bit kind). When the object was signaled, the flag was raised; when the object was nonsignaled, the flag was lowered.


Threads are not schedulable when the objects they are waiting for are nonsignaled (the flag is lowered). However, as soon as the object becomes signaled (the flag goes up), the thread sees the flag, becomes schedulable, and shortly resumes execution.


Wait Functions

Wait functions cause a thread to voluntarily place itself into a wait state until a specific kernel object becomes signaled. By far the most common of these functions is WaitForSingleObject:

 DWORD WaitForSingleObject(
    HANDLE hObject,
    DWORD dwMilliseconds);

When a thread calls this function, the first parameter, hObject, identifies a kernel object that supports being signaled/nonsignaled. (Any object mentioned in the list in the previous section works just great.) The second parameter, dwMilliseconds, allows the thread to indicate how long it is willing to wait for the object to become signaled.

The following function call tells the system that the calling thread wants to wait until the process identified by the hProcess handle terminates:

 WaitForSingleObject(hProcess, INFINITE);

The second parameter tells the system that the calling thread is willing to wait forever (an infinite amount of time) until this process terminates.

Usually, INFINITE is passed as the second parameter to WaitForSingleObject, but you can pass any value (in milliseconds). By the way, INFINITE is defined as 0xFFFFFFFF (or -1). Of course, passing INFINITE can be a little dangerous. If the object never becomes signaled, the calling thread never wakes up—it is forever deadlocked but, fortunately, not wasting precious CPU time.

Here's an example of how to call WaitForSingleObject with a timeout value other than INFINITE:

 DWORD dw = WaitForSingleObject(hProcess, 5000);
 switch (dw) {
    case WAIT_OBJECT_0:
       // The process terminated.
       break;

    case WAIT_TIMEOUT:
       // The process did not terminate within 5000 milliseconds.
       break;

    case WAIT_FAILED:
       // Bad call to function (invalid handle?)
       break;
 }

The code above tells the system that the calling thread should not be schedulable until either the specified process has terminated or 5000 milliseconds have expired, whichever comes first. So this call returns in less than 5000 milliseconds if the process terminates, and it returns in about 5000 milliseconds if the process hasn't terminated. Note that you can pass 0 for the dwMilliseconds parameter. If you do this, WaitForSingleObject always returns immediately.

WaitForSingleObject's return value indicates why the calling thread became schedulable again. If the object the thread is waiting on became signaled, the return value is WAIT_OBJECT_0; if the timeout expires, the return value is WAIT_TIMEOUT. If you pass a bad parameter (such as an invalid handle) to WaitForSingleObject, the return value is WAIT_FAILED (call GetLastError for more information).

The function below, WaitForMultipleObjects, is similar to WaitForSingleObject except that it allows the calling thread to check the signaled state of several kernel objects simultaneously:

 DWORD WaitForMultipleObjects(
    DWORD dwCount,
    CONST HANDLE* phObjects,
    BOOL fWaitAll,
    DWORD dwMilliseconds);

The dwCount parameter indicates the number of kernel objects you want the function to check. This value must be between 1 and MAXIMUM_WAIT_OBJECTS (defined as 64 in the Windows header files). The phObjects parameter is a pointer to an array of kernel object handles.

You can use WaitForMultipleObjects in two different ways—to allow a thread to enter a wait state until any one of the specified kernel objects becomes signaled, or to allow a thread to wait until all of the specified kernel objects become signaled. The fWaitAll parameter tells the function which way you want it to work. If you pass TRUE for this parameter, the function will not allow the calling thread to execute until all of the objects have become signaled.

The dwMilliseconds parameter works exactly as it does for WaitForSingleObject. If, while waiting, the specified time expires, the function returns anyway. Again, INFINITE is usually passed for this parameter, but you should write your code carefully to avoid the possibility of deadlock.

The WaitForMultipleObjects function's return value tells the caller why it got rescheduled. The possible return values are WAIT_FAILED and WAIT_TIMEOUT, which are self-explanatory. If you pass TRUE for fWaitAll and all of the objects become signaled, the return value is WAIT_OBJECT_0. If you pass FALSE for fWaitAll, the function returns as soon as any of the objects becomes signaled. In this case, you probably want to know which object became signaled. The return value is a value between WAIT_OBJECT_0 and (WAIT_OBJECT_0 + dwCount - 1). In other words, if the return value is not WAIT_TIMEOUT and is not WAIT_FAILED, you should subtract WAIT_OBJECT_0 from the return value. The resulting number is an index into the array of handles that you passed as the second parameter to WaitForMultipleObjects. The index tells you which object became signaled.

Here's some sample code to make this clear:

 HANDLE h[3];
 h[0] = hProcess1;
 h[1] = hProcess2;
 h[2] = hProcess3;
 DWORD dw = WaitForMultipleObjects(3, h, FALSE, 5000);
 switch (dw) {
    case WAIT_FAILED:
       // Bad call to function (invalid handle?)
       break;

    case WAIT_TIMEOUT:
       // None of the objects became signaled within 5000 milliseconds.
       break;

    case WAIT_OBJECT_0 + 0:
       // The process identified by h[0] (hProcess1) terminated.
       break;

    case WAIT_OBJECT_0 + 1:
       // The process identified by h[1] (hProcess2) terminated.
       break;

    case WAIT_OBJECT_0 + 2:
       // The process identified by h[2] (hProcess3) terminated.
       break;
 }

If you pass FALSE for the fWaitAll parameter, WaitForMultipleObjects scans the handle array from index 0 on up, and the first object that is signaled terminates the wait. This can have some undesirable ramifications. For example, your thread might be waiting for three child processes to terminate by passing three process handles to this function. If the process at index 0 in the array terminates, WaitForMultipleObjects returns. Now the thread can do whatever it needs to and then loop back around, waiting for another process to terminate. If the thread passes the same three handles, the function returns immediately with WAIT_OBJECT_0 again. Unless you remove the handles that you've already received notifications from, your code will not work correctly.

Successful Wait Side Effects

For some kernel objects, a successful call to WaitForSingleObject or WaitForMultipleObjects actually alters the state of the object. A successful call is one in which the function sees that the object was signaled and returns a value relative to WAIT_OBJECT_0. A call is unsuccessful if the function returns WAIT_TIMEOUT or WAIT_FAILED. Objects never have their state altered for unsuccessful calls.

When an object has its state altered, I call this a successful wait side effect. For example, let's say that a thread is waiting on an auto-reset event object (discussed later in this chapter). When the event object becomes signaled, the function detects this and can return WAIT_OBJECT_0 to the calling thread. However, just before the function returns, the event is set to the nonsignaled state—the side effect of the successful wait.

This side effect is applied to auto-reset event kernel objects because it is one of the rules that Microsoft has defined for this type of object. Other objects have different side effects, and some objects have no side effects at all. Process and thread kernel objects have no side effects at all—that is, waiting on one of these objects never alters the object's state. As we discuss various kernel objects in this chapter, we'll go into detail about their successful wait side effects.

What makes WaitForMultipleObjects so useful is that it performs all of its operations atomically. When a thread calls WaitForMultipleObjects, the function can test the signaled state of all the objects and perform the required side effects all as a single operation.

Let's look at an example. Two threads call WaitForMultipleObjects in exactly the same way:

 HANDLE h[2];
 h[0] = hAutoResetEvent1;   // Initially nonsignaled
 h[1] = hAutoResetEvent2;   // Initially nonsignaled
 WaitForMultipleObjects(2, h, TRUE, INFINITE);

When WaitForMultipleObjects is called, both event objects are nonsignaled; this forces both threads to enter a wait state. Then the hAutoResetEvent1 object becomes signaled. Both threads see that the event has become signaled, but neither can wake up because the hAutoResetEvent2 object is still nonsignaled. Because neither thread has successfully waited yet, no side effect happens to the hAutoResetEvent1 object.

Next, the hAutoResetEvent2 object becomes signaled. At this point, one of the two threads detects that both objects it is waiting for have become signaled. The wait is successful, both event objects are set to the nonsignaled state, and the thread is schedulable. But what about the other thread? It continues to wait until it sees that both event objects are signaled. Even though it originally detected that hAutoResetEvent1 was signaled, it now sees this object as nonsignaled.

As I mentioned, it's important to note that WaitForMultipleObjects works atomically. When it checks the state of the kernel objects, no other thread can alter any object's state behind its back. This prevents deadlock situations. Imagine what would happen if one thread saw that hAutoResetEvent1 was signaled and reset the event to nonsignaled and then the other thread saw that hAutoResetEvent2 was signaled and reset this event to nonsignaled. Both threads would be frozen: one thread would wait for an object that another thread had gotten, and vice versa. WaitForMultipleObjects ensures that this never happens.

This brings up an interesting question: If multiple threads wait for a single kernel object, which thread does the system decide to wake up when the object becomes signaled? Microsoft's official response to this question is, "The algorithm is fair." Microsoft doesn't want to commit to the internal algorithm used by the system. All it says is that the algorithm is fair, which means that if multiple threads are waiting, each should get its own chance to wake up each time the object becomes signaled.

This means that thread priority has no effect: the highest-priority thread does not necessarily get the object. It also means that the thread waiting the longest does not necessarily get the object. And it is possible for a thread that got the object to loop around and get it again. However, this wouldn't be fair to the other threads, so the algorithm tries to prevent this. But there is no guarantee.

In reality, the algorithm Microsoft uses is simply the popular "first in, first out" scheme. The thread that has waited the longest for an object gets the object. However, actions can occur in the system that alter this behavior, making it less predictable. This is why Microsoft doesn't explicitly state how the algorithm works. One such action is a thread getting suspended. If a thread waits for an object and then the thread is suspended, the system forgets that the thread is waiting for the object. This is a feature because there is no reason to schedule a suspended thread. When the thread is later resumed, the system thinks that the thread just started waiting on the object.

While you debug a process, all threads within that process are suspended when breakpoints are hit. So debugging a process makes the "first in, first out" algorithm highly unpredictable because threads are frequently suspended and resumed.

Event Kernel Objects

Of all the kernel objects, events are by far the most primitive. They contain a usage count (as all kernel objects do), a Boolean value indicating whether the event is an auto-reset or manual-reset event, and another Boolean value indicating whether the event is signaled or nonsignaled.

Events signal that an operation has completed. There are two different types of event objects: manual-reset events and auto-reset events. When a manual-reset event is signaled, all threads waiting on the event become schedulable. When an auto-reset event is signaled, only one of the threads waiting on the event becomes schedulable.

Events are most commonly used when one thread performs initialization work and then signals another thread to perform the remaining work. The event is initialized as nonsignaled, and then after the thread completes its initial work, it sets the event to signaled. At this point, another thread, which has been waiting on the event, sees that the event is signaled and becomes schedulable. This second thread knows that the first thread has completed its work.

Here is the CreateEvent function, which creates an event kernel object:

 HANDLE CreateEvent(
    PSECURITY_ATTRIBUTES psa,
    BOOL fManualReset,
    BOOL fInitialState,
    PCTSTR pszName);

In Chapter 3, we discussed the mechanics of kernel objects—how to set their security, how usage counting is done, how their handles can be inheritable, and how objects can be shared by name. Since all of this should be familiar to you by now, I won't discuss the first and last parameters of this function.

The fManualReset parameter is a Boolean value that tells the system whether to create a manual-reset event (TRUE) or an auto-reset event (FALSE). The fInitialState parameter indicates whether the event should be initialized to signaled (TRUE) or nonsignaled (FALSE). After the system creates the event object, CreateEvent returns the process-relative handle to the event object. Threads in other processes can gain access to the object by calling CreateEvent using the same value passed in the pszName parameter; by using inheritance; by using the DuplicateHandle function; or by calling OpenEvent, specifying a name in the pszName parameter that matches the name specified in the call to CreateEvent:

 HANDLE OpenEvent(
    DWORD fdwAccess,
    BOOL fInherit,
    PCTSTR pszName);

As always, you should call the CloseHandle function when you no longer require the event kernel object.

Once an event is created, you control its state directly. When you call SetEvent, you change the event to the signaled state:

 BOOL SetEvent(HANDLE hEvent);

When you call ResetEvent, you change the event to the nonsignaled state:

 BOOL ResetEvent(HANDLE hEvent);

It's that easy.

Microsoft has defined a successful wait side effect rule for an auto-reset event: an auto-reset event is automatically reset to the nonsignaled state when a thread successfully waits on the object. This is how auto-reset events got their name. It is usually unnecessary to call ResetEvent for an auto-reset event because the system automatically resets the event. In contrast, Microsoft has not defined a successful wait side effect for manual-reset events.

Let's run through a quick example of how you can use event kernel objects to synchronize threads. Here's the setup:

 // Create a global handle to a manual-reset, nonsignaled event.
 HANDLE g_hEvent;

 int WINAPI WinMain(...) {
    ...
 }

 DWORD WINAPI WordCount(PVOID pvParam) {
    ...
 }

 DWORD WINAPI SpellCheck(PVOID pvParam) {
    ...
 }

 DWORD WINAPI GrammarCheck(PVOID pvParam) {
    ...
 }

When this process starts, it creates a manual-reset, nonsignaled event and saves the handle in a global variable. This makes it easy for other threads in this process to access the same event object. Now three threads are spawned. These threads wait until a file's contents are read into memory, and then each thread accesses the data: one thread does a word count, another runs the spelling checker, and the third runs the grammar checker. The code for these three thread functions starts out identically: each thread calls WaitForSingleObject, which suspends the thread until the file's contents have been read into memory by the primary thread.

Once the primary thread has the data ready, it calls SetEvent, which signals the event. At this point, the system makes all three secondary threads schedulable—they all get CPU time and access the memory block. Notice that all three threads will access the memory in a read-only fashion. This is the only reason why all three threads can run simultaneously. Also note that if the machine has multiple CPUs on it, all of these threads can truly execute simultaneously, getting a lot of work done in a short amount of time.

If you use an auto-reset event instead of a manual-reset event, the application behaves quite differently. The system allows only one secondary thread to become schedulable after the primary thread calls SetEvent. Again, there is no guarantee as to which thread the system will make schedulable. The remaining two secondary threads will continue to wait.

The thread that becomes schedulable has exclusive access to the memory block. Let's rewrite the thread functions so that each function calls SetEvent (just like the WinMain function does) just before returning. The thread functions now look like this:

 DWORD WINAPI WordCount(PVOID pvParam) {
    ...
 }

 DWORD WINAPI SpellCheck(PVOID pvParam) {
    ...
 }

 DWORD WINAPI GrammarCheck(PVOID pvParam) {
    ...
 }

When a thread has finished its exclusive pass over the data, it calls SetEvent, which allows the system to make one of the two waiting threads schedulable. Again, we don't know which thread the system will choose, but this thread will have its own exclusive pass over the memory block. When this thread is done, it will call SetEvent as well, causing the third and last thread to get its exclusive pass over the memory block. Note that when you use an auto-reset event, there is no problem if each secondary thread accesses the memory block in a read/write fashion; the threads are no longer required to consider the data read-only. This example clearly demonstrates the difference between using a manual-reset event and an auto-reset event.

For the sake of completeness, I'll mention one more function that you can use with events:

 BOOL PulseEvent(HANDLE hEvent);

PulseEvent makes an event signaled and then immediately nonsignaled; it's just like calling SetEvent immediately followed by ResetEvent. If you call PulseEvent on a manual-reset event, any and all threads waiting on the event when it is pulsed are schedulable. If you call PulseEvent on an auto-reset event, only one waiting thread becomes schedulable. If no threads are waiting on the event when it is pulsed, there is no effect.

PulseEvent is not very useful. In fact, I've never used it in any practical application because you have no idea what threads, if any, will see the pulse and become schedulable. Since you can't know the state of any threads when you call PulseEvent, the function is just not that useful. That said, I'm sure that in some scenarios PulseEvent might come in handy—but none spring to mind. See the discussion of the SignalObjectAndWait function later in this chapter for a little more information on PulseEvent.

The Handshake Sample Application

The Handshake ("09 Handshake.exe") application, listed in Figure 9-1, demonstrates the use of auto-reset events. The source code files and resource files for the application are in the 09-Handshake directory on the companion CD-ROM. When you run Handshake, the following dialog box appears.

(Figure: the Handshake application's dialog box, with Request and Result fields)

Handshake accepts a request string, reverses all the characters in the string, and places the result in the Result field. What makes Handshake exciting is the way it accomplishes this heroic task.

Handshake solves a common programming problem. You have a client and a server that want to talk to each other. Initially, the server has nothing to do, so it enters a wait state. When the client is ready to submit a request to the server, it places the request into a shared memory buffer and then signals an event so that the server thread knows to examine the data buffer and process the client's request. While the server thread is busy processing the request, the client's thread needs to enter a wait state until the server has the request's result ready. So the client enters a wait state until the server signals a different event that indicates that the result is ready to be processed by the client. When the client wakes up again, it knows that the result is in the shared data buffer and can present the result to the user.

When the application starts, it immediately creates two nonsignaled, auto-reset event objects. One event, g_hevtRequestSubmitted, indicates when a request is ready for the server. This event is waited on by the server thread and is signaled by the client thread. The second event, g_hevtResultReturned, indicates when the result is ready for the client. The client thread waits on this event and the server thread is responsible for signaling it.

After the events are created, the server thread is spawned and executes the ServerThread function. This function immediately has the server wait for a client's request. Meanwhile, the primary thread, which is also the client thread, calls DialogBox, which displays the application's user interface. You can enter some text in the Request field, and then, when you click the Submit Request To Server button, the request string is placed in a buffer that is shared between the client and the server threads and the g_hevtRequestSubmitted event is signaled. The client thread then waits for the server's result by waiting on the g_hevtResultReturned event.

The server wakes, reverses the string in the shared memory buffer, and then signals the g_hevtResultReturned event. The server's thread loops back around, waiting for another client request. Notice that this application never calls ResetEvent because it is unnecessary: auto-reset events are automatically reset to the nonsignaled state after a successful wait. Meanwhile, the client thread detects that the g_hevtResultReturned event has become signaled. It wakes and copies the string from the shared memory buffer into the Result field of the user interface.

Perhaps this application's only remaining notable feature is how it shuts down. To shut down the application, you simply close the dialog box. This causes the call to DialogBox in _tWinMain to return. At this point, the primary thread copies a special string into the shared buffer and wakes the server's thread to process this special request. The primary thread waits for the server thread to acknowledge receipt of the request and for the server thread to terminate. When the server thread detects this special client request string, it exits its loop and the thread just terminates.

I chose to have the primary thread wait for the server thread to die by calling WaitForMultipleObjects so that you would see how this function is used. In reality, I could have just called WaitForSingleObject, passing in the server thread's handle, and everything would have worked exactly the same.

Once the primary thread knows that the server thread has stopped executing, I call CloseHandle three times to properly destroy all the kernel objects that the application was using. Of course, the system would do this for me automatically, but it just feels better to me when I do it myself. I like being in control of my code at all times.


Table of Contents

Error Handling
You Can Do This Too
The ErrorShow Sample Application
Character Sets
Single-Byte and Double-Byte Character Sets
Unicode: The Wide-Byte Character Set
Why You Should Use Unicode
Windows 2000 and Unicode
Windows 98 and Unicode
Windows CE and Unicode
Keeping Score
A Quick Word About COM
How to Write Unicode Source Code
Unicode Support in the C Run-Time Library
Unicode Data Types Defined by Windows
Unicode and ANSI Functions in Windows
Windows String Functions
Making Your Application ANSI- and Unicode-Ready
Windows String Functions
Determining If Text Is ANSI or Unicode
Translating Strings Between Unicode and ANSI
Kernel Objects
What Is a Kernel Object?
Usage Counting
A Process's Kernel Object Handle Table
Creating a Kernel Object
Closing a Kernel Object
Sharing Kernel Objects Across Process Boundaries
Object Handle Inheritance
Named Objects
Duplicating Object Handles
Writing Your First Windows Application
A Process's Instance Handle
A Process's Previous Instance Handle
A Process's Command Line
A Process's Environment Variables
A Process's Affinity
A Process's Error Mode
A Process's Current Drive and Directory
The System Version
The CreateProcess Function
pszApplicationName and pszCommandLine
psaProcess, psaThread, and bInheritHandles
Terminating a Process
The Primary Thread's Entry-Point Function Returns
The ExitProcess Function
The TerminateProcess Function
When All the Threads in the Process Die
When a Process Terminates
Child Processes
Running Detached Child Processes
Enumerating the Processes Running in the System
The Process Information Sample Application
Placing Restrictions on a Job's Processes
Placing a Process in a Job
Terminating All Processes in a Job
Querying Job Statistics
Job Notifications
The JobLab Sample Application
Thread Basics
When to Create a Thread
When Not to Create a Thread
Writing Your First Thread Function
The CreateThread Function
pfnStartAddr and pvParam
Terminating a Thread
The Thread Function Returns
The ExitThread Function
The TerminateThread Function
When a Process Terminates
When a Thread Terminates
Some Thread Internals
C/C++ Run-Time Library Considerations
Oops---I Called CreateThread Instead of _beginthreadex by Mistake
C/C++ Run-Time Library Functions That You Should Never Call
Gaining a Sense of One's Own Identity
Converting a Pseudo-Handle to a Real Handle
Thread Scheduling, Priorities, and Affinities
Suspending and Resuming a Thread
Suspending and Resuming a Process
Switching to Another Thread
A Thread's Execution Times
Putting the Context in Context
Thread Priorities
An Abstract View of Priorities
Programming Priorities
Dynamically Boosting Thread Priority
Tweaking the Scheduler for the Foreground Process
The Scheduling Lab Sample Application
Thread Synchronization in User Mode
Atomic Access: The Interlocked Family of Functions
Cache Lines
Advanced Thread Synchronization
A Technique to Avoid
Critical Sections
Critical Sections: The Fine Print
Critical Sections and Spinlocks
Critical Sections and Error Handling
Useful Tips and Techniques
Thread Synchronization with Kernel Objects
Wait Functions
Successful Wait Side Effects
Event Kernel Objects
The Handshake Sample Application
Waitable Timer Kernel Objects
Having Waitable Timers Queue APC Entries
Timer Loose Ends
Semaphore Kernel Objects
Mutex Kernel Objects
Abandonment Issues
Mutexes vs. Critical Sections
The Queue Sample Application
A Handy Thread Synchronization Object Chart
Other Thread Synchronization Functions
Asynchronous Device I/O
Thread Synchronization Toolkit
Implementing a Critical Section: The Optex
The Optex Sample Application
Creating Thread-Safe Datatypes and Inverse Semaphores
The InterlockedType Sample Application
The Single Writer/Multiple Reader Guard (SWMRG)
The SWMRG Sample Application
Implementing a WaitForMultipleExpressions Function
The WaitForMultipleExpressions Sample Application
Thread Pooling
Call Functions Asynchronously
Call Functions at Timed Intervals
The TimedMsgBox Sample Application
Call Functions When Single Kernel Objects Become Signaled
Call Functions When Asynchronous I/O Requests Complete
Working with Fibers
The Counter Sample Application
Windows Memory Architecture
A Process's Virtual Address Space
How a Virtual Address Space Is Partitioned
Null-Pointer Assignment Partition (Windows 2000 and Windows 98)
MS-DOS/16-Bit Windows Application Compatibility Partition (Windows 98 Only)
User-Mode Partition (Windows 2000 and Windows 98)
64-KB Off-Limits Partition (Windows 2000 Only)
Shared MMF Partition (Windows 98 Only)
Kernel-Mode Partition (Windows 2000 and Windows 98)
Regions in an Address Space
Committing Physical Storage Within a Region
Physical Storage and the Paging File
Physical Storage Not Maintained in the Paging File
Protection Attributes
Copy-On-Write Access
Special Access Protection Attribute Flags
Bringing It All Home
Inside the Regions
Address Space Differences for Windows 98
The Importance of Data Alignment
Exploring Virtual Memory
System Information
The System Information Sample Application
Virtual Memory Status
The Virtual Memory Status Sample Application
Determining the State of an Address Space
The VMQuery Function
The Virtual Memory Map Sample Application
Using Virtual Memory in Your Own Applications
Reserving a Region in an Address Space
Committing Storage in a Reserved Region
Reserving a Region and Committing Storage Simultaneously
When to Commit Physical Storage
Decommitting Physical Storage and Releasing a Region
When to Decommit Physical Storage
The Virtual Memory Allocation Sample Application
Changing Protection Attributes
Resetting the Contents of Physical Storage
The MemReset Sample Application
Address Windowing Extensions (Windows 2000 only)
The AWE Sample Application
A Thread's Stack
A Thread's Stack Under Windows 98
The C/C++ Run-Time Library's Stack-Checking Function
The Summation Sample Application
Memory-Mapped Files
Memory-Mapped Executables and DLLs
Static Data Is Not Shared by Multiple Instances of an Executable or a DLL
Sharing Static Data Across Multiple Instances of an Executable or a DLL
The AppInst Sample Application
Memory-Mapped Data Files
One File, One Buffer
Two Files, One Buffer
One File, Two Buffers
One File, Zero Buffers
Using Memory-Mapped Files
Creating or Opening a File Kernel Object
Creating a File-Mapping Kernel Object
Mapping the File's Data into the Process's Address Space
Unmapping the File's Data from the Process's Address Space
Closing the File-Mapping Object and the File Object
The File Reverse Sample Application


Customer Reviews

Average Rating: 5 (1 review)
Showing 1 customer review
  • Anonymous

    Posted February 2, 2000

    Jeff Richter Continues a Tradition

    This book is an essential guide to those who want information on user-mode development above and beyond the essential course from Programming Windows. This book covers many topics. In today's world of developers, one can never be fully sure that the documentation one encounters for a technology is valid. The only way to confirm that documentation is to invest an extra thirty dollars into a reference that will teach you information you will find invaluable in your Windows development career. Programming Applications is the best place to start. Congratulations, Jeff Richter. You've done it again.

