Patterns for Parallel Programming (Software Patterns Series) / Edition 1

by Timothy G. Mattson, Beverly A. Sanders, and Berna L. Massingill

Hardcover

Product Details

ISBN-10: 0321228111
ISBN-13: 978-0321228116
Publisher: Addison-Wesley
Publication date: 09/10/2004
Series: Software Patterns Series
Edition description: New Edition
Pages: 355
Product dimensions: 6.00(w) x 1.25(h) x 9.00(d)

About the Authors

Timothy G. Mattson is Intel's industry manager for life sciences. His research focuses on technologies that simplify parallel computing for general programmers, with an emphasis on computational biology. He holds a Ph.D. in chemistry from the University of California, Santa Cruz.

Beverly A. Sanders is an associate professor in the Department of Computer and Information Science and Engineering at the University of Florida, Gainesville. Her research focuses on techniques to help programmers construct high-quality, correct programs, including formal methods, component systems, and design patterns. She holds a Ph.D. in applied mathematics from Harvard University.

Berna L. Massingill is an assistant professor in the Department of Computer Science at Trinity University, San Antonio, Texas. Her research interests include parallel and distributed computing, design patterns, and formal methods. She holds a Ph.D. in computer science from the California Institute of Technology.


Read an Excerpt

"If you build it, they will come."

And so we built them. Multiprocessor workstations, massively parallel supercomputers, a cluster in every department ... and they haven't come. Programmers haven't come to program these wonderful machines. Oh, a few programmers in love with the challenge have shown that most types of problems can be force-fit onto parallel computers, but general programmers, especially professional programmers who "have lives", ignore parallel computers. And they do so at their own peril. Parallel computers are going mainstream. Multithreaded microprocessors, multicore CPUs, multiprocessor PCs, clusters, parallel game consoles ... parallel computers are taking over the world of computing. The computer industry is ready to flood the market with hardware that will only run at full speed with parallel programs. But who will write these programs?

This is an old problem. Even in the early 1980s, when the "killer micros" started their assault on traditional vector supercomputers, we worried endlessly about how to attract normal programmers. We tried everything we could think of: high-level hardware abstractions, implicitly parallel programming languages, parallel language extensions, and portable message-passing libraries. But after many years of hard work, the fact of the matter is that "they" didn't come. The overwhelming majority of programmers will not invest the effort to write parallel software.

A common view is that you can't teach old programmers new tricks, so the problem will not be solved until the old programmers fade away and a new generation takes over.

But we don't buy into that defeatist attitude. Programmers have shown a remarkable ability to adopt new software technologies over the years. Look at how many old Fortran programmers are now writing elegant Java programs with sophisticated object-oriented designs. The problem isn't with old programmers. The problem is with old parallel computing experts and the way they've tried to create a pool of capable parallel programmers.

And that's where this book comes in. We want to capture the essence of how expert parallel programmers think about parallel algorithms and communicate that essential understanding in a way professional programmers can readily master. The technology we've adopted to accomplish this task is a pattern language. We made this choice not because we started the project as devotees of design patterns looking for a new field to conquer, but because patterns have been shown to work in ways that would be applicable in parallel programming. For example, patterns have been very effective in the field of object-oriented design. They have provided a common language experts can use to talk about the elements of design and have been extremely effective at helping programmers master object-oriented design.

This book contains our pattern language for parallel programming. The book opens with a couple of chapters to introduce the key concepts in parallel computing. These chapters focus on the parallel computing concepts and jargon used in the pattern language as opposed to being an exhaustive introduction to the field. The pattern language itself is presented in four parts corresponding to the four phases of creating a parallel program:

Finding Concurrency. The programmer works in the problem domain to identify the available concurrency and expose it for use in the algorithm design.

Algorithm Structure. The programmer works with high-level structures for organizing a parallel algorithm.

Supporting Structures. We shift from algorithms to source code and consider how the parallel program will be organized and the techniques used to manage shared data.

Implementation Mechanisms. The final step is to look at specific software constructs for implementing a parallel program.

The patterns making up these four design spaces are tightly linked. You start at the top (Finding Concurrency), work through the patterns, and by the time you get to the bottom (Implementation Mechanisms), you will have a detailed design for your parallel program.

If the goal is a parallel program, however, you need more than just a parallel algorithm. You also need a programming environment and a notation for expressing the concurrency within the program's source code. Programmers used to be confronted by a large and confusing array of parallel programming environments. Fortunately, over the years the parallel programming community has converged around three programming environments.

OpenMP. A simple language extension to C, C++, or Fortran to write parallel programs for shared-memory computers.

MPI. A message-passing library used on clusters and other distributed-memory computers.

Java. An object-oriented programming language with language features supporting parallel programming on shared-memory computers and standard class libraries supporting distributed computing.

Many readers will already be familiar with one or more of these programming notations, but for readers completely new to parallel computing, we've included a discussion of these programming environments in the appendixes.
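
For readers who have never seen any of these notations, here is a minimal, illustrative sketch of the OpenMP style in C (not an example from the book): a sequential loop summing an array is parallelized for a shared-memory machine by adding a single compiler directive.

    /* Minimal OpenMP sketch (illustrative only, not taken from the book).
       Compile with an OpenMP-capable compiler, e.g. gcc -fopenmp sum.c */
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double a[N];
        double sum = 0.0;

        for (int i = 0; i < N; i++)
            a[i] = 1.0;

        /* The directive asks the compiler to spread the loop iterations
           across threads; reduction(+:sum) safely combines the
           per-thread partial sums into one result. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f\n", sum);
        return 0;
    }

Without the pragma, the same code compiles and runs sequentially, which is part of OpenMP's appeal; the appendixes introduce MPI and Java's concurrency facilities for readers who want the other two notations.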

In closing, we have been working for many years on this pattern language. Presenting it as a book so people can start using it is an exciting development for us. But we don't see this as the end of this effort. We expect that others will have their own ideas about new and better patterns for parallel programming. We've assuredly missed some important features that really belong in this pattern language. We embrace change and look forward to engaging with the larger parallel computing community to iterate on this language. Over time, we'll update and improve the pattern language until it truly represents the consensus view of the parallel programming community. Then our real work will begin—using the pattern language to guide the creation of better parallel programming environments and helping people to use these technologies to write parallel software. We won't rest until the day sequential software is rare.


Table of Contents

Preface
1. A Pattern Language for Parallel Programming
   1.1 Introduction
   1.2 Parallel Programming
   1.3 Design Patterns and Pattern Languages
   1.4 A Pattern Language for Parallel Programming
2. Background and Jargon of Parallel Computing
   2.1 Concurrency in Parallel Programs Versus Operating Systems
   2.2 Parallel Architectures: A Brief Introduction
       2.2.1 Flynn's Taxonomy
       2.2.2 A Further Breakdown of MIMD
       2.2.3 Summary
   2.3 Parallel Programming Environments
   2.4 The Jargon of Parallel Computing
   2.5 A Quantitative Look at Parallel Computation
   2.6 Communication
       2.6.1 Latency and Bandwidth
       2.6.2 Overlapping Communication and Computation and Latency Hiding
   2.7 Summary
3. The Finding Concurrency Design Space
   3.1 About the Design Space
       3.1.1 Overview
       3.1.2 Using the Decomposition Patterns
       3.1.3 Background for Examples
   3.2 The Task Decomposition Pattern
   3.3 The Data Decomposition Pattern
   3.4 The Group Tasks Pattern
   3.5 The Order Tasks Pattern
   3.6 The Data Sharing Pattern
   3.7 The Design Evaluation Pattern
   3.8 Summary
4. The Algorithm Structure Design Space
   4.1 Introduction
   4.2 Choosing an Algorithm Structure Pattern
       4.2.1 Target Platform
       4.2.2 Major Organizing Principle
       4.2.3 The Algorithm Structure Decision Tree
       4.2.4 Re-evaluation
   4.3 Examples
       4.3.1 Medical Imaging
       4.3.2 Molecular Dynamics
   4.4 The Task Parallelism Pattern
   4.5 The Divide and Conquer Pattern
   4.6 The Geometric Decomposition Pattern
   4.7 The Recursive Data Pattern
   4.8 The Pipeline Pattern
   4.9 The Event-Based Coordination Pattern
5. The Supporting Structures Design Space
   5.1 Introduction
       5.1.1 Program Structuring Patterns
       5.1.2 Patterns Representing Data Structures
   5.2 Forces
   5.3 Choosing the Patterns
   5.4 The SPMD Pattern
   5.5 The Master/Worker Pattern
   5.6 The Loop Parallelism Pattern
   5.7 The Fork/Join Pattern
   5.8 The Shared Data Pattern
   5.9 The Shared Queue Pattern
   5.10 The Distributed Array Pattern
   5.11 Other Supporting Structures
       5.11.1 SIMD
       5.11.2 MPMD
       5.11.3 Client-Server Computing
       5.11.4 Concurrent Programming with Declarative Languages
       5.11.5 Problem-Solving Environments
6. The Implementation Mechanisms Design Space
   6.1 Overview
   6.2 UE Management
       6.2.1 Thread Creation/Destruction
       6.2.2 Process Creation/Destruction
   6.3 Synchronization
       6.3.1 Memory Synchronization and Fences
       6.3.2 Barriers
       6.3.3 Mutual Exclusion
   6.4 Communication
       6.4.1 Message Passing
       6.4.2 Collective Communication
       6.4.3 Other Communication Constructs
Appendix A: A Brief Introduction to OpenMP
   A.1 Core Concepts
   A.2 Structured Blocks and Directive Formats
   A.3 Worksharing
   A.4 Data Environment Clauses
   A.5 The OpenMP Runtime Library
   A.6 Synchronization
   A.7 The Schedule Clause
   A.8 The Rest of the Language
Appendix B: A Brief Introduction to MPI
   B.1 Concepts
   B.2 Getting Started
   B.3 Basic Point-to-Point Message Passing
   B.4 Collective Operations
   B.5 Advanced Point-to-Point Message Passing
   B.6 MPI and Fortran
   B.7 Conclusion
Appendix C: A Brief Introduction to Concurrent Programming in Java
   C.1 Creating Threads
   C.2 Atomicity, Memory Synchronization, and the volatile Keyword
   C.3 Synchronized Blocks
   C.4 Wait and Notify
   C.5 Locks
   C.6 Other Synchronization Mechanisms and Shared Data Structures
   C.7 Interrupts
Glossary
Bibliography
About the Authors
Index

Customer Reviews

Patterns for Parallel Programming: rated 3 out of 5 stars (3 reviews).

Guest, more than 1 year ago:
This book tries to do for parallel programming what the seminal Gang of Four book did for sequential programming. While it remains to be seen whether Mattson, Sanders, and Massingill will succeed, their book serves a vital educational role. Until the 90s, parallel programming was relatively rare, but as hardware continued to get cheaper, many practical uses emerged. The authors point out that the bottleneck has now shifted to software. How does one find concurrency in a design? And given that, how does one code it? The book tackles both issues. The latter is treated by explaining how to use MPI, OpenMP, and Java for parallel coding; MPI and OpenMP were expressly made for this task, and for Java the book discusses its concurrency classes and how they can be applied to parallel problems. The former issue, somehow finding concurrency, is harder. This is really a wetware issue: you are the wetware. The book's core value is in showing common parallel patterns that distill the essence of much previous work in the field. And the book is not just iterating through a list of such patterns. As with the GoF, we have a pattern language: a metalevel in which walking through the patterns and comparing them with your problem helps you find the appropriate patterns to map it to.