Parallel Programming for Modern High Performance Computing Systems

by Pawel Czarnul

Hardcover

$105.00

Product Details

ISBN-13: 9781138305953
Publisher: Taylor & Francis
Publication date: 02/28/2018
Pages: 330
Product dimensions: 6.12(w) x 9.25(h) x (d)

About the Author

Paweł Czarnul is an assistant professor at the Department of Computer Architecture, Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, Poland. He obtained his MSc and PhD degrees from Gdansk University of Technology in 1999 and 2003, respectively, and his DSc (habilitation) degree in 2015. His research interests include parallel and distributed computing, including service-oriented approaches, high performance computing, and Internet technologies.

Table of Contents

1. Understanding the Need for Parallel Computing
1.1 Introduction
1.2 From Problem to Parallel Solution – Development Steps
1.3 Approaches to Parallelization
1.4 Selected Use Cases with Popular APIs
1.5 Outline of the Book

2. Overview of Selected Parallel and Distributed Systems for High Performance Computing
2.1 Generic Taxonomy of Parallel Computing Systems
2.2 Multicore CPUs
2.3 GPUs
2.4 Manycore CPUs/Coprocessors
2.5 Cluster Systems
2.6 Growth of High Performance Computing Systems and Relevant Metrics
2.7 Volunteer-based Systems
2.8 Grid Systems

3. Typical Paradigms for Parallel Applications
3.1 Aspects of Parallelization
3.2 Master-Slave
3.3 SPMD/Geometric Parallelism
3.4 Pipelining
3.5 Divide and Conquer

4. Selected APIs for Parallel Programming
4.1 Message Passing Interface (MPI)
4.2 OpenMP
4.3 Pthreads
4.4 CUDA
4.5 OpenCL
4.6 OpenACC
4.7 Selected Hybrid Approaches

5. Programming Parallel Paradigms Using Selected APIs
5.1 Master-Slave
5.2 Geometric SPMD
5.3 Divide and Conquer

6. Optimization Techniques and Best Practices for Parallel Codes
6.1 Data Prefetching, Communication and Computations Overlapping and Increasing Computation Efficiency
6.2 Data Granularity
6.3 Minimization of Overheads
6.4 Process/Thread Affinity
6.5 Data Types and Accuracy
6.6 Data Organization and Arrangement
6.7 Checkpointing
6.8 Simulation of Parallel Application Execution
6.9 Best Practices and Typical Optimizations

Appendix A. Resources
A.1 Software Packages

Appendix B. Further reading
B.1 Context of this Book
B.2 Other Resources on Parallel Programming
