Scientific Parallel Computing
What does Google's management of billions of Web pages have in common with the analysis of a genome with billions of nucleotides? Both apply methods that coordinate many processors to accomplish a single task. From mining genomes to the World Wide Web, from modeling financial markets to global weather patterns, parallel computing enables computations that would otherwise be impractical, if not impossible, with sequential approaches alone. Its fundamental role as an enabler of simulation and data analysis continues to advance a wide range of application areas.



Scientific Parallel Computing is the first textbook to integrate all the fundamentals of parallel computing in a single volume while also providing a basis for a deeper understanding of the subject. Designed for graduate and advanced undergraduate courses in the sciences and in engineering, computer science, and mathematics, it focuses on the three key areas of algorithms, architecture, and languages, and on their crucial synthesis in performance.


The book's computational examples, whose mathematical prerequisites do not exceed the level of advanced calculus, derive from a breadth of topics in scientific and engineering simulation and data analysis. The programming exercises presented early in the book are designed to bring students up to speed quickly, while later chapters develop projects challenging enough to guide students toward research questions in the field. The new paradigm of cluster computing is fully addressed. A supporting web site provides access to all the codes and software mentioned in the book and offers topical information on popular parallel computing systems.


  • Integrates all the fundamentals of parallel computing essential for today's high-performance requirements
  • Ideal for graduate and advanced undergraduate students in the sciences and in engineering, computer science, and mathematics
  • Extensive programming and theoretical exercises enable students to write parallel codes quickly
  • More challenging projects later in the book introduce research questions
  • New paradigm of cluster computing fully addressed
  • Supporting web site provides access to all the codes and software mentioned in the book





Product Details

ISBN-13: 9780691119359
Publisher: Princeton University Press
Publication date: 04/17/2005
Edition description: New Edition
Pages: 392
Product dimensions: 7.00(w) x 10.00(h) x (d)

About the Author

L. Ridgway Scott is Louis Block Professor of Computer Science and of Mathematics at the University of Chicago. He is the coauthor of The Mathematical Theory of Finite Element Methods. Terry Clark is Assistant Professor of Computer Science in the Department of Electrical Engineering and Computer Science at the University of Kansas. Babak Bagheri is a software architect at PROS Revenue Management, a company that designs software for pricing and revenue management. Scott, Clark, and Bagheri codeveloped the P-languages.

Table of Contents

Preface ix

Notation xiii





Chapter 1. Introduction 1

1.1 Overview 1

1.2 What is parallel computing? 3

1.3 Performance 4

1.4 Why parallel? 11

1.5 Two simple examples 15

1.6 Mesh-based applications 24

1.7 Parallel perspectives 30

1.8 Exercises 33





Chapter 2. Parallel Performance 37

2.1 Summation example 37

2.2 Performance measures 38

2.3 Limits to performance 44

2.4 Scalability 48

2.5 Parallel performance analysis 56

2.6 Parallel payoff 59

2.7 Real world parallelism 64

2.8 Starting SPMD programming 66

2.9 Exercises 66





Chapter 3. Computer Architecture 71

3.1 PMS notation 71

3.2 Shared memory multiprocessor 75

3.3 Distributed memory multicomputer 79

3.4 Pipeline and vector processors 87

3.5 Comparison of parallel architectures 89

3.6 Taxonomies 92

3.7 Current trends 94

3.8 Exercises 95





Chapter 4. Dependences 99

4.1 Data dependences 100

4.2 Loop-carried data dependences 103

4.3 Dependence examples 110

4.4 Testing for loop-carried dependences 112

4.5 Loop transformations 114

4.6 Dependence examples continued 120

4.7 Exercises 123





Chapter 5. Parallel Languages 127

5.1 Critical factors 129

5.2 Command and control 134

5.3 Memory models 136

5.4 Shared memory programming 139

5.5 Message passing 143

5.6 Examples and comments 148

5.7 Parallel language developments 153

5.8 Exercises 154





Chapter 6. Collective Operations 157

6.1 The @ notation 157

6.2 Tree/ring algorithms 158

6.3 Reduction operations 162

6.4 Reduction operation applications 164

6.5 Parallel prefix algorithms 168

6.6 Performance of reduction operations 169

6.7 Data movement operations 173

6.8 Exercises 174





Chapter 7. Current Programming Standards 177

7.1 Introduction to MPI 177

7.2 Collective operations in MPI 181

7.3 Introduction to POSIX threads 184

7.4 Exercises 187





Chapter 8. The Planguage Model 191

8.1 Planguage details 192

8.2 Ranges and arrays 198

8.3 Reduction operations in Pfortran 200

8.4 Introduction to PC 204

8.5 Reduction operations in PC 206

8.6 Planguages versus message passing 207

8.7 Exercises 208





Chapter 9. High Performance Fortran 213

9.1 HPF data distribution directives 214

9.2 Other mechanisms for expressing concurrency 219

9.3 Compiling HPF 220

9.4 HPF comparisons and review 221

9.5 Exercises 222





Chapter 10. Loop Tiling 227

10.1 Loop tiling 227

10.2 Work vs. data decomposition 228

10.3 Tiling in OpenMP 228

10.4 Teams 232

10.5 Parallel regions 233

10.6 Exercises 234





Chapter 11. Matrix Eigen Analysis 237

11.1 The Leslie matrix model 237

11.2 The power method 242

11.3 A parallel Leslie matrix program 244

11.4 Matrix-vector product 249

11.5 Power method applications 251

11.6 Exercises 253





Chapter 12. Linear Systems 257

12.1 Gaussian elimination 257

12.2 Solving triangular systems in parallel 262

12.3 Divide-and-conquer algorithms 271

12.4 Exercises 277

12.5 Projects 281





Chapter 13. Particle Dynamics 283

13.1 Model assumptions 284

13.2 Using Newton's third law 285

13.3 Further code complications 288

13.4 Pair list generation 290

13.5 Force calculation with a pair list 296

13.6 Performance of replication algorithm 299

13.7 Case study: particle dynamics in HPF 302

13.8 Exercises 307

13.9 Projects 310





Chapter 14. Mesh Methods 315

14.1 Boundary value problems 315

14.2 Iterative methods 319

14.3 Multigrid methods 322

14.4 Multidimensional problems 327

14.5 Initial value problems 328

14.6 Exercises 333

14.7 Projects 334





Chapter 15. Sorting 335

15.1 Introduction 335

15.2 Parallel sorting 337

15.3 Spatial sorting 342

15.4 Exercises 353

15.5 Projects 355





Bibliography 357

Index 369


What People are Saying About This

Ashok Srinivasan

I have used a different text each time I have taught parallel computing, but felt that they missed important material that I wished to include. This book includes many topics not addressed in other parallel computing texts, and the first few chapters are particularly well written.
Ashok Srinivasan, Florida State University

Nan C. Schaller

This well-organized book takes a unique approach in its focus on all the intricacies involved in determining parallel performance, and I especially appreciated its use of the same small problem, summation, to motivate many concepts.
Nan C. Schaller, Rochester Institute of Technology

Suely Oliveira

The authors of this well-written book provide a good overview of most of the issues they address, and their survey of different parallel programming languages and methodologies is quite impressive.
Suely Oliveira, University of Iowa

