Principles of Parallel Programming / Edition 1

by Calvin Lin and Lawrence Snyder



Overview

With the rise of multi-core architecture, parallel programming is an increasingly important topic for software engineers and computer system designers. Written by well-known researchers Larry Snyder and Calvin Lin, this highly anticipated first edition emphasizes the principles underlying parallel computation, explains the various phenomena, and clarifies why these phenomena represent opportunities or barriers to successful parallel programming. Ideal for an advanced upper-level undergraduate course, Principles of Parallel Programming supplies enduring knowledge that will outlive the current hardware and software, aiming to inspire future researchers to build tomorrow’s solutions.

Product Details

ISBN-10: 0321487907
ISBN-13: 9780321487902
Publisher: Pearson
Publication date: 03/11/2008
Series: Alternative eText Formats Series
Edition description: New Edition
Pages: 352
Product dimensions: 7.30(w) x 9.20(h) x 1.00(d)

Table of Contents

Chapter 1 Introduction: Parallelism = Opportunities + Challenges
The Power and Potential of Parallelism
Examining Sequential and Parallel Programs
A Paradigm Shift
Parallelism Using Multiple Instruction Streams
The Goals: Scalable Performance and Portability
Summary
Historical Context
Exercises

Chapter 2 Parallel Computers and Their Model
Balancing Machine Specifics with Portability
A Look at Five Parallel Computers
The RAM: An Abstraction of a Sequential Computer
The PRAM: A Parallel Computer Model
The CTA: A Practical Parallel Computer Model
Memory Reference Mechanisms
A Closer Look at Communication
Applying the CTA Model
Summary
Historical Perspective
Exercises

Chapter 3 Reasoning about Performance
Introduction
Motivation and Some Basic Concepts
Sources of Performance Loss
Parallel Structure
Reasoning about Performance
Performance Trade-Offs
Measuring Performance
What should we measure?
Summary
Historical Perspective
Exercises

Chapter 4 First Steps Towards Parallel Programming
Task and Data Parallelism
Peril-L
Count 3s Example
Conceptualizing Parallelism
Alphabetizing Example
Comparison of Three Solutions
Summary
Historical Perspective
Exercises

Chapter 5 Scalable Algorithmic Techniques
The Inevitability of Trees
Blocks of Independent Computation
Schwartz's Algorithm
Assigning Work to Processes Statically
Assigning Work to Processes Dynamically
The Reduce & Scan Abstractions
Trees
Summary
Historical Context
Exercises

Chapter 6 Programming with Threads
POSIX Threads
Thread Creation and Destruction
Mutual Exclusion
Synchronization
Safety Issues
Performance Issues
OpenMP
The Count 3s Example
Semantic Limitations on Reduction
Thread Behavior and Interaction
Sections
Summary of OpenMP
Java Threads
Summary
Historical Perspectives
Exercises

Chapter 7 Local View Programming Languages
MPI: The Message Passing Interface
Getting Started
Safety Issues
Performance Issues
Co-Array Fortran
Unified Parallel C
Titanium
Summary
Exercises

Chapter 8 Global View Programming Languages
The Z-level Programming Language
Basic Concepts of ZPL
Life, An Example
Design Principles
Manipulating Arrays of Different Ranks
Reordering Data with Remap
Parallel Execution of ZPL
Performance Model
Summary
NESL
Historical Context
Exercises

Chapter 9 Assessing Our Knowledge
Introduction
Evaluating Existing Approaches
Lessons for the Future
Summary
Historical Perspectives
Exercises

Chapter 10 Future Directions in Parallel Programming
Attached Processors
Grid Computing
Transactional Memory
Summary
Exercises

Chapter 11 Capstone Project: Designing a Parallel Program
Introduction
Motivation
Getting Started
Summary
Historical Perspective
Exercises

Appendix 1 More Advanced Concepts

Customer Reviews


Principles of Parallel Programming: 4 out of 5 based on 1 review.
Boudville, more than 1 year ago:
Years ago I briefly worked on a hypercube, and when I got this book I wondered how that architecture had fared. Alas, the hypercube, at least under that name, rates no mention, though there is a passing reference to a binary 3-cube, which is a 3-dimensional hypercube.

The authors explain the current state of multiprocessor architectures. The few remaining CPU makers (Intel, AMD, Sun, and IBM) all have efforts in this field, and the book qualitatively describes the salient aspects of each without drowning you in unnecessary hardware detail. This turns out to be a key theme of the book: it abstracts out the essential hardware properties so that you can appreciate them and apply the book's ideas without being tied to any particular chip.

The book also describes an important class of multiprocessor: the cluster, where each node is typically an off-the-shelf CPU buffed up with a lot of local memory. The key differences between clusters often lie in how the nodes are connected to one another, by some type of bus or crossbar. Affordability is an important property of clusters, hence the maximal use of commodity hardware. (The hypercube I mentioned earlier would be a cluster.)

For a programmer, there is one overriding idea to take from the book: for optimal performance, minimize internodal communication relative to references that stay within a node's own memory and cache. The access time of the former can be two to five orders of magnitude slower. Details vary with the architecture, of course, but typically nothing else comes close in its effect on throughput.
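
To make that principle concrete, here is a minimal sketch of my own (not code from the book) using MPI, which the book covers in Chapter 7. It contrasts sending N values as N tiny messages, paying the internodal latency N times, against sending them as one batched message that pays it once; the buffer size, message tags, and timing harness are assumptions chosen purely for illustration.

/* Illustrative sketch: two MPI ranks exchange N doubles, first as
 * N single-element messages, then as one batched message. Each tiny
 * message pays the full internodal latency, so the first version is
 * typically far slower. Run with: mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

#define N 100000  /* illustrative element count */

int main(int argc, char **argv) {
    int rank;
    static double buf[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < N; i++)
        buf[i] = (double)i;  /* fill with dummy data */

    /* Fine-grained: one element per message, N latency costs. */
    double t0 = MPI_Wtime();
    if (rank == 0)
        for (int i = 0; i < N; i++)
            MPI_Send(&buf[i], 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        for (int i = 0; i < N; i++)
            MPI_Recv(&buf[i], 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    double t1 = MPI_Wtime();

    /* Batched: all N elements in one message, one latency cost. */
    if (rank == 0)
        MPI_Send(buf, N, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, N, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    double t2 = MPI_Wtime();

    if (rank == 1)
        printf("fine-grained: %.6f s  batched: %.6f s\n",
               t1 - t0, t2 - t1);

    MPI_Finalize();
    return 0;
}

On a real cluster the batched version typically wins by a wide margin, because each message carries a fixed latency cost that dwarfs the per-byte transfer cost; this is the same reason the book's advice favors coarse-grained communication over chatty, fine-grained exchanges.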