
Using MPI: Portable Parallel Programming with the Message-Passing Interface


by William Gropp, Ewing Lusk, Anthony Skjellum
 

ISBN-10: 0262571048

ISBN-13: 9780262571043

Pub. Date: 11/01/1994

Publisher: MIT Press



Overview


The parallel programming community recently organized an effort to standardize the communication subroutine libraries used for programming on massively parallel computers such as the Connection Machine and Cray's new T3D, as well as on networks of workstations. The standard they developed, the Message-Passing Interface (MPI), not only unifies within a common framework programs written in a variety of existing (and currently incompatible) parallel languages but also allows for future portability of programs between machines. Three of the authors of MPI have teamed up here to present a tutorial on how to use MPI to write parallel programs, particularly for large-scale applications.

MPI, the long-sought standard for expressing parallel algorithms and running them on a variety of computers, allows software development costs to be leveraged across parallel machines and networks, and it will spur the development of a new level of parallel software. This timely book covers in detail all the MPI functions used in its motivating examples and applications, introducing many of them in context.

The topics covered include issues in portability of programs among MPP systems, examples and counterexamples illustrating subtle aspects of the MPI definition, how to write libraries that take advantage of MPI's special features, application paradigms for large-scale examples, complete program examples, visualizing program behavior with graphical tools, an implementation strategy and a portable implementation, using MPI on workstation networks and on MPPs (Intel, Thinking Machines, IBM), scalability and performance tuning, and how to convert existing codes to MPI.
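As the table of contents below notes (sections 2.5.1 and 2.5.2), the full MPI standard defines some 125 functions, yet complete programs can be written with a subset of just six: MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv, and MPI_Finalize. As a flavor of that six-function subset, here is a minimal C sketch, an illustration rather than an example drawn from the book, in which every process except process 0 sends its rank to process 0 for printing:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size, i, msg;
        MPI_Status status;

        MPI_Init(&argc, &argv);                /* initialize MPI */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */

        if (rank != 0) {
            /* every process except 0 sends its rank to process 0 */
            MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else {
            /* process 0 collects one message from each of the others */
            for (i = 1; i < size; i++) {
                MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                         MPI_COMM_WORLD, &status);
                printf("greetings from process %d\n", msg);
            }
        }

        MPI_Finalize();                        /* shut down MPI */
        return 0;
    }

Such a program is typically built with an MPI implementation's C compiler wrapper (commonly mpicc) and started on several processes with a launcher such as mpirun.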


Product Details

ISBN-13: 9780262571043
Publisher: MIT Press
Publication date: 11/01/1994
Series: Scientific and Engineering Computation
Pages: 328
Product dimensions: 7.93(w) x 9.01(h) x 0.75(d)

Table of Contents



    Series Foreword ..... xiii
    Preface ..... xv

1 Background ..... 1


    1.1 Why Parallel Computing? ..... 1
    1.2 Obstacles to Progress ..... 2
    1.3 Why Message Passing? ..... 3

      1.3.1 Parallel Computational Models ..... 3
      1.3.2 Advantages of the Message-Passing Model ..... 7

    1.4 Current Message-Passing Systems ..... 9
    1.5 The MPI Forum ..... 9

2 What's New about MPI? ..... 11


    2.1 A New Point of View ..... 11
    2.2 What's Not New? ..... 11
    2.3 Basic MPI Concepts ..... 12
    2.4 Other Interesting Features of MPI ..... 15
    2.5 Is MPI Large or Small? ..... 17

      2.5.1 MPI Is Large (125 Functions) ..... 18
      2.5.2 MPI Is Small (6 Functions) ..... 18
      2.5.3 MPI Is Whatever Size You Like ..... 18

    2.6 Decisions Left to the Implementor ..... 19

3 Using MPI in Simple Programs ..... 21


    3.1 A First MPI Program ..... 21
    3.2 Running Your First MPI Program ..... 25
    3.3 A First MPI Program in C ..... 26
    3.4 Timing MPI Programs ..... 27
    3.5 A Self-Scheduling Example: Matrix-Vector Multiplication ..... 29
    3.6 Studying Parallel Performance ..... 36

      3.6.1 Elementary Scalability Calculations ..... 37
      3.6.2 Gathering Data on Program Execution ..... 39
      3.6.3 Instrumenting a Parallel Program with MPE Logging ..... 40
      3.6.4 Events and States ..... 40
      3.6.5 Instrumenting the Matrix-Matrix Multiply Program ..... 41
      3.6.6 Notes on Implementation of Logging ..... 43
      3.6.7 Examining Logfiles with Upshot ..... 46

    3.7 Using Communicators ..... 47
    3.8 A Handy Graphics Library for Parallel Programs ..... 53
    3.9 Application: Determination of Nuclear Structures ..... 56
    3.10 Summary of a Simple Subset of MPI ..... 58

4 Intermediate MPI ..... 59


    4.1 The Poisson Problem ..... 60
    4.2 Topologies ..... 63
    4.3 A Code for the Poisson Problem ..... 71
    4.4 Using Nonblocking Communications ..... 80
    4.5 Synchronous Sends and "Safe" Programs ..... 82
    4.6 More on Scalability ..... 83
    4.7 Jacobi with a 2-D Decomposition ..... 86
    4.8 An MPI Derived Datatype ..... 87
    4.9 Overlapping Communication and Computation ..... 88
    4.10 More on Timing Programs ..... 93
    4.11 Three Dimensions ..... 94
    4.12 Application: Simulating Vortex Evolution in Superconducting Materials ..... 94

5 Advanced Message Passing in MPI ..... 97


    5.1 MPI Datatypes ..... 97

      5.1.1 Basic Datatypes and Concepts ..... 97
      5.1.2 Derived Datatypes ..... 100

    5.2 The N-Body Problem ..... 102

      5.2.1 Gather ..... 103
      5.2.2 Nonblocking Pipeline ..... 107
      5.2.3 Moving Particles between Processes ..... 108
      5.2.4 Sending Dynamically Allocated Data ..... 113
      5.2.5 User-Controlled Data Packing ..... 114

    5.3 Visualizing the Mandelbrot Set ..... 116
    5.4 Gaps in Datatypes ..... 124

6 Parallel Libraries ..... 127


    6.1 Motivation ..... 127

      6.1.1 The Need for Parallel Libraries ..... 127
      6.1.2 Common Deficiencies of Message-Passing Systems ..... 128
      6.1.3 Review of MPI Features That Support Libraries ..... 128

    6.2 A First MPI Library ..... 129
    6.3 Linear Algebra on Grids ..... 137

      6.3.1 Mappings and Logical Grids ..... 139
      6.3.2 Vectors and Matrices ..... 145
      6.3.3 Components of a Parallel Library ..... 146

    6.4 The LINPACK Benchmark in MPI ..... 149
    6.5 Strategies for Library Building ..... 152

7 Other Features of MPI ..... 155


    7.1 Simulating Shared-Memory Operations ..... 155

      7.1.1 Shared vs. Distributed Memory ..... 155
      7.1.2 A Counter Example ..... 156
      7.1.3 The Shared Counter Using Polling Instead of an Extra Process ..... 157
      7.1.4 Shared Memory on Distributed-Memory Machines ..... 162

    7.2 Application: Full-Configuration Interaction ..... 163
    7.3 Advanced Collective Operations ..... 164

      7.3.1 Data Movement ..... 164
      7.3.2 Collective Computation ..... 164
      7.3.3 User-Defined Operations ..... 166
      7.3.4 Other Collective Operations ..... 167

    7.4 Intercommunicators ..... 167
    7.5 Heterogeneous Computing ..... 173
    7.6 The MPI Profiling Interface ..... 174
    7.7 Error Handling ..... 178

      7.7.1 Error Handlers ..... 178
      7.7.2 User-Defined Error Handlers ..... 180
      7.7.3 Terminating MPI Programs ..... 181

    7.8 Environmental Inquiry ..... 181

      7.8.1 Processor Name ..... 183
      7.8.2 Is MPI Initialized? ..... 184

    7.9 Other Functions in MPI ..... 184
    7.10 Application: Computational Fluid Dynamics ..... 185

      7.10.1 Parallel Formulation ..... 186
      7.10.2 Parallel Implementation ..... 188

8 Implementing MPI ..... 191


    8.1 Introduction ..... 191
    8.2 Sending and Receiving ..... 193

      8.2.1 Sending a Message ..... 194
      8.2.2 Receiving a Message ..... 195

    8.3 Data Transfer ..... 195

      8.3.1 Transfers to and from the ADI ..... 196
      8.3.2 Noncontiguous Data ..... 197

    8.4 Message Queuing ..... 197
    8.5 Unexpected Messages ..... 198
    8.6 Device Capabilities and the MPI Library Definition ..... 199

9 Dusty Decks: Porting Existing Message-Passing Programs to MPI ..... 201


    9.1 Intel NX ..... 202
    9.2 IBM EUI ..... 206
    9.3 TMC CMMD ..... 208
    9.4 Express ..... 210
    9.5 PVM 2.4.x ..... 212
    9.6 PVM 3.2.x ..... 217
    9.7 p4 ..... 220
    9.8 PARMACS ..... 223
    9.9 TCGMSG ..... 227
    9.10 Chameleon ..... 230
    9.11 Zipcode ..... 233
    9.12 Where to Learn More ..... 237

10 Beyond Message Passing ..... 239


    10.1 Dynamic Processes ..... 240
    10.2 Threads ..... 242
    10.3 Action at a Distance ..... 244
    10.4 Parallel I/O ..... 245
    10.5 Will There Be an MPI-2? ..... 246
    10.6 Final Words ..... 246

    Glossary of Selected Terms ..... 247

A Summary of MPI Routines and Their Arguments ..... 255

B The Model MPI Implementation ..... 279

C The MPE Multiprocessing Environment Functions ..... 283

D MPI Resources on the Information Superhighway ..... 289

E Language Details ..... 291


    Bibliography ..... 295
    Subject Index ..... 301
    Function and Term Index ..... 305
