Introduction to Parallel Computing: Design and Analysis of Parallel Algorithms / Edition 5, by Vipin Kumar, Ananth Grama, Anshul Gupta, George Karypis
Pub. Date: 11/28/1993
Take an in-depth look at techniques for the design and analysis of parallel algorithms with this new text. The broad, balanced coverage of important core topics includes sorting and graph algorithms, discrete optimization techniques, and scientific computing applications. The authors focus on parallel algorithms for realistic machine models while avoiding architectures that are unrealizable in practice. They provide numerous examples and diagrams illustrating potentially difficult subjects and conclude each chapter with an extensive list of bibliographic references. In addition, problems of varying degrees of difficulty challenge readers at different levels. Introduction to Parallel Computing is an ideal tool for students and professionals who want insight into problem-solving with parallel computers.
- Presents parallel algorithms in terms of a small set of basic data communication operations, greatly simplifying the design and understanding of these algorithms.
- Emphasizes practical issues of performance, efficiency, and scalability.
- Provides a self-contained discussion of the basic concepts of parallel computer architectures.
- Covers algorithms for scientific computation, such as dense and sparse matrix computations, linear system solving, finite elements, and FFT.
- Discusses algorithms for combinatorial optimization, including branch-and-bound, unstructured tree search, and dynamic programming.
- Incorporates various parallel programming models and languages as well as illustrative examples for commercially available computers.
Audience: Junior/Senior/Graduate Computer Science and Computer Engineering majors
Prerequisites: Operating Systems and Analysis of Algorithms
- Publication date: 11/28/1993
- Edition description: Older Edition
- Product dimensions: 6.69(w) x 9.57(h) x 1.28(d)
Table of Contents
Introduction.
What is Parallel Computing?
Issues in Parallel Computing.
Organization and Contents of the Text.
Models of Parallel Computers.
An Idealized Parallel Computer.
Dynamic Interconnection Networks.
Static Interconnection Networks.
Embedding Other Networks Into a Hypercube.
Routing Mechanisms for Static Networks.
Communication Costs in Static Interconnection Networks.
Architectural Models for Parallel Algorithm Design.
Basic Communication Operations.
All-to-All Broadcast, Reduction, and Prefix Sums.
One-to-All Personalized Communications.
All-to-All Personalized Communications.
Faster Methods for Some Communication Operations.
Performance and Scalability of Parallel Systems.
The Effect of Granularity and Data Mapping on Performance.
The Scalability of Parallel Systems.
The Isoefficiency Metric of Scalability.
Sources of Parallel Overhead.
Minimum Execution Time and Minimum Cost-Optimal Execution Time.
Other Scalability Metrics and Bibliographic Remarks.
Dense Matrix Algorithms.
Solving a System of Linear Equations.
Bubble Sort and Its Variants.
Other Sorting Algorithms.
Minimum Spanning Tree: Prim's Algorithm.
Single-Source Shortest Paths: Dijkstra's Algorithm.
All-Pairs Shortest Paths.
Algorithms for Sparse Graphs.
Search Algorithms for Discrete Optimization Problems.
Sequential Search Algorithms.
Search Overhead Factor.
Parallel Depth-First Search.
Parallel Best-First Search.
Speedup Anomalies in Parallel Search Algorithms.
Nonserial Monadic DP Formulations.
Serial Polyadic DP Formulations.
Nonserial Polyadic DP Formulations.
Summary and Discussion.
Fast Fourier Transform.
The Binary-Exchange Algorithm.
The Transpose Algorithm.
Cost-Effectiveness of Meshes and Hypercubes for FFT.
Solving Sparse Systems of Linear Equations.
Finite Element Method.
Direct Methods for Sparse Linear Systems.
Systolic Algorithms and Their Mapping onto Parallel Computers.
General Issues in Mapping Systolic Systems onto Parallel Computers.
Mapping One-Dimensional Systolic Arrays.
Primitives for the Message-Passing Programming Paradigm.
Primitives for the Shared-Address-Space Programming Paradigm.
Appendix A. Complexity of Functions and Order Analysis.