Using MPI: Portable Parallel Programming with the Message Passing Interface / Edition 2

by William Gropp, Ewing Lusk, Anthony Skjellum

ISBN-10: 0262571323

ISBN-13: 9780262571326

Pub. Date: 11/26/1999

Publisher: MIT Press

Overview

The Message Passing Interface (MPI) specification is widely used for solving significant scientific and engineering problems on parallel computers. There exist more than a dozen implementations on computer platforms ranging from IBM SP-2 supercomputers to clusters of PCs running Windows NT or Linux ("Beowulf" machines).

The initial MPI Standard document, MPI-1, was recently updated by the MPI Forum. The new version, MPI-2, contains both significant enhancements to the existing MPI core and new features.

Using MPI is a completely up-to-date version of the authors' 1994 introduction to the core functions of MPI. It adds material on the new C++ and Fortran 90 bindings for MPI throughout the book. It contains greater discussion of datatype extents, the most frequently misunderstood feature of MPI-1, as well as material on the new extensions to basic MPI functionality added by the MPI-2 Forum in the areas of MPI datatypes and collective operations.

The companion volume, Using MPI-2, covers the further extensions to basic MPI: parallel I/O, remote memory access operations, and dynamic process management. That volume also includes material on tuning MPI applications for high performance on modern MPI implementations.
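
To give a flavor of the core functions the book starts from, a minimal C program (a sketch, not an excerpt from the book) that initializes MPI, reports each process's rank, and shuts down cleanly might look like this:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI environment */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
        printf("Hello from process %d of %d\n", rank, size);
        MPI_Finalize();                        /* shut down MPI */
        return 0;
    }

Compiled with an MPI wrapper compiler (typically mpicc) and launched with mpiexec or mpirun, each process prints its own rank; the book builds from programs of this kind up to derived datatypes, communicators, and the MPI-2 features listed above.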

Product Details

ISBN-13: 9780262571326
Publisher: MIT Press
Publication date: 11/26/1999
Series: Scientific and Engineering Computation Series
Edition description: second edition
Pages: 396
Sales rank: 910,677
Product dimensions: 8.00(w) x 9.00(h) x 0.75(d)
Age Range: 18 Years

Table of Contents

Series Foreword xiii
Preface to the Second Edition xv
Preface to the First Edition xix
Background 1
Why Parallel Computing? 1
Obstacles to Progress 2
Why Message Passing? 3
Parallel Computational Models 3
Advantages of the Message-Passing Model 9
Evolution of Message-Passing Systems 10
The MPI Forum 11
Introduction to MPI 13
Goal 13
What Is MPI? 13
Basic MPI Concepts 14
Other Interesting Features of MPI 18
Is MPI Large or Small? 20
Decisions Left to the Implementor 21
Using MPI in Simple Programs 23
A First MPI Program 23
Running Your First MPI Program 28
A First MPI Program in C 29
A First MPI Program in C++ 29
Timing MPI Programs 34
A Self-Scheduling Example: Matrix-Vector Multiplication 35
Studying Parallel Performance 43
Elementary Scalability Calculations 43
Gathering Data on Program Execution 45
Instrumenting a Parallel Program with MPE Logging 46
Events and States 47
Instrumenting the Matrix-Matrix Multiply Program 47
Notes on Implementation of Logging 49
Examining Logfiles with Upshot 52
Using Communicators 53
Another Way of Forming New Communicators 59
A Handy Graphics Library for Parallel Programs 62
Common Errors and Misunderstandings 64
Application: Quantum Monte Carlo Calculations in Nuclear Physics 66
Summary of a Simple Subset of MPI 67
Intermediate MPI 69
The Poisson Problem 70
Topologies 73
A Code for the Poisson Problem 81
Using Nonblocking Communications 93
Synchronous Sends and "Safe" Programs 96
More on Scalability 96
Jacobi with a 2-D Decomposition 98
An MPI Derived Datatype 100
Overlapping Communication and Computation 101
More on Timing Programs 105
Three Dimensions 107
Common Errors and Misunderstandings 107
Application: Simulating Vortex Evolution in Superconducting Materials 109
Advanced Message Passing in MPI 111
MPI Datatypes 111
Basic Datatypes and Concepts 111
Derived Datatypes 114
Understanding Extents 117
The N-Body Problem 117
Gather 118
Nonblocking Pipeline 123
Moving Particles between Processes 124
Sending Dynamically Allocated Data 132
User-Controlled Data Packing 134
Visualizing the Mandelbrot Set 138
Sending Arrays of Structures 145
Gaps in Datatypes 146
MPI-2 Functions for Manipulating Extents 148
New MPI-2 Datatype Routines 150
More on Datatypes for Structures 152
Deprecated Functions 154
Common Errors and Misunderstandings 156
Parallel Libraries 157
Motivation 157
The Need for Parallel Libraries 157
Common Deficiencies of Previous Message-Passing Systems 158
Review of MPI Features That Support Libraries 160
A First MPI Library 163
MPI-2 Attribute-Caching Routines 172
A C++ Alternative to MPI_Comm_dup 172
Linear Algebra on Grids 177
Mappings and Logical Grids 178
Vectors and Matrices 181
Components of a Parallel Library 185
The LINPACK Benchmark in MPI 189
Strategies for Library Building 190
Examples of Libraries 192
Other Features of MPI 195
Simulating Shared-Memory Operations 195
Shared vs. Distributed Memory 195
A Counter Example 196
The Shared Counter Using Polling instead of an Extra Process 200
Fairness in Message Passing 201
Exploiting Request-Response Message Patterns 202
Application: Full-Configuration Interaction 205
Advanced Collective Operations 206
Data Movement 206
Collective Computation 206
Common Errors and Misunderstandings 213
Intercommunicators 214
Heterogeneous Computing 220
The MPI Profiling Interface 222
Finding Buffering Problems 226
Finding Load Imbalances 228
The Mechanics of Using the Profiling Interface 228
Error Handling 229
Error Handlers 230
An Example of Error Handling 233
User-Defined Error Handlers 234
Terminating MPI Programs 237
MPI-2 Functions for Error Handling 239
The MPI Environment 240
Processor Name 242
Is MPI Initialized? 242
Determining the Version of MPI 243
Other Functions in MPI 245
Application: Computational Fluid Dynamics 246
Parallel Formulation 246
Parallel Implementation 248
Understanding how MPI Implementations Work 253
Introduction 253
Sending Data 253
Receiving Data 254
Rendezvous Protocol 254
Matching Protocols to MPI's Send Modes 255
Performance Implications 256
Alternative MPI Implementation Strategies 257
Tuning MPI Implementations 257
How Difficult Is MPI to Implement? 257
Device Capabilities and the MPI Library Definition 258
Reliability of Data Transfer 259
Comparing MPI with Other Systems for Interprocess Communication 261
Sockets 261
Process Startup and Shutdown 263
Handling Faults 265
PVM 3 266
The Basics 267
Miscellaneous Functions 268
Collective Operations 268
MPI Counterparts of Other Features 269
Features Not in MPI 270
Process Startup 270
MPI and PVM related tools 271
Where to Learn More 272
Beyond Message Passing 273
Dynamic Process Management 274
Threads 275
Action at a Distance 276
Parallel I/O 277
MPI-2 277
Will There Be an MPI-3? 278
Final Words 278
Glossary of Selected Terms 279
A Summary of MPI-1 Routines and Their Arguments 289
B The MPICH Implementation of MPI 329
C The MPE Multiprocessing Environment 337
D MPI Resources on the World Wide Web 345
E Language Details 347
References 353
Subject Index 363
Function and Term Index 367

Customer Reviews


Rated 3 out of 5, based on 2 reviews.
Guest, more than 1 year ago:
The book was a good guide for me in that it took me through all the steps needed to provide a complete interface between two computers. It gives the instructions in both C and Fortran, with some pseudocode for the most complex routines. Anyone with some computer training could absorb what the authors have written, including the history for 'buff' interest. Its goal is to provide a step-by-step instruction set that enables one computer to instruct another computer to run a calculation and return a response to the originator. Any method of communication could be used: serial port, printer port, or any other communication system for that matter. The title states its intention, which is what I wanted.