Parallel Processing and Parallel Algorithms: Theory and Computation

by Seyed H. Roosta

Hardcover (2000)

$109.99 

Overview

Motivation

It is now possible to build powerful single-processor and multiprocessor systems and use them efficiently for data processing, which has seen an explosive expansion in many areas of computer science and engineering. One approach to meeting the performance requirements of these applications has been to use the most powerful single-processor system available. When such a system does not provide the required performance, pipelined and parallel processing structures can be employed.

The concept of parallel processing is a departure from sequential processing. In sequential computation one processor is involved and performs one operation at a time; in parallel computation several processors cooperate to solve a problem, which reduces computing time because several operations can be carried out simultaneously. Using several processors that work together on a given computation illustrates a new paradigm in computer problem solving, one that is completely different from sequential processing. From a practical point of view, this provides sufficient justification to investigate the concept of parallel processing and related issues such as parallel algorithms.

Parallel processing involves several strongly interrelated factors: parallel architectures, parallel algorithms, parallel programming languages, and performance analysis. In general, four steps are involved in solving a computational problem in parallel. The first step is to understand the nature of the computations in the specific application domain.
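To make the sequential-versus-parallel contrast concrete, here is a minimal, illustrative sketch (not taken from the book) in Python: the same set of independent partial sums is computed first by one worker, one chunk at a time, and then by a pool of worker processes running the chunks simultaneously. The problem size, chunk count, and work function are arbitrary choices for the illustration.

    # Illustrative sketch (not from the book): sequential vs. parallel execution
    # of independent chunks of work, using only Python's standard library.
    from concurrent.futures import ProcessPoolExecutor
    import math
    import time

    def partial_sum(bounds):
        """Sum sqrt(i) over a half-open range [lo, hi) -- one independent chunk."""
        lo, hi = bounds
        return sum(math.sqrt(i) for i in range(lo, hi))

    if __name__ == "__main__":
        n = 8_000_000
        chunks = [(i * n // 4, (i + 1) * n // 4) for i in range(4)]

        # Sequential computation: one processor performs one chunk at a time.
        start = time.perf_counter()
        total_seq = sum(partial_sum(c) for c in chunks)
        t_seq = time.perf_counter() - start

        # Parallel computation: several worker processes handle chunks simultaneously.
        start = time.perf_counter()
        with ProcessPoolExecutor(max_workers=4) as pool:
            total_par = sum(pool.map(partial_sum, chunks))
        t_par = time.perf_counter() - start

        print(f"sequential: {t_seq:.2f} s   parallel: {t_par:.2f} s   "
              f"speedup: {t_seq / t_par:.2f}")

The ratio of sequential to parallel running time is the speedup, one of the performance measures the book develops in Chapter 5. With four workers and fully independent chunks the ideal speedup is 4; the measured value is typically lower because of process start-up and communication overhead.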

Product Details

ISBN-13: 9780387987163
Publisher: Springer New York
Publication date: 12/10/1999
Edition description: 2000
Pages: 566
Product dimensions: 7.01(w) x 10.00(h) x 0.05(d)

Table of Contents

1 Computer Architecture
    1.1 Classification of Computer Architectures
    1.2 Parallel Architectures
    1.3 Data Flow Architectures
    Summary
    Exercises
2 Components of Parallel Computers
    2.1 Memory
    2.2 Interconnection Networks
    2.3 Goodness Measures for Interconnection Networks
    2.4 Compilers
    2.5 Operating Systems
    2.6 Input and Output Constraints
    Summary
    Exercises
3 Principles of Parallel Programming
    3.1 Programming Languages for Parallel Processing
    3.2 Precedence Graph of a Process
    3.3 Data Parallelism Versus Control Parallelism
    3.4 Message Passing Versus Shared Address Space
    3.5 Mapping
    3.6 Granularity
    Summary
    Exercises
4 Parallel Programming Approaches
    4.1 Parallel Programming with UNIX
    4.2 Parallel Programming with PCN
    4.3 Parallel Programming with PVM
    4.4 Parallel Programming with C-Linda
    4.5 Parallel Programming with EPT
    4.6 Parallel Programming with CHARM
    Summary
5 Principles of Parallel Algorithm Design
    5.1 Design Approaches
    5.2 Design Issues
    5.3 Performance Measures and Analysis
    5.4 Complexities
    5.5 Anomalies in Parallel Algorithms
    5.6 Pseudocode Conventions for Parallel Algorithms
    5.7 Comparison of SIMD and MIMD Algorithms
    Summary
    Exercises
6 Parallel Graph Algorithms
    6.1 Connected Components
    6.2 Paths and All-Pairs Shortest Paths
    6.3 Minimum Spanning Trees and Forests
    6.4 Traveling Salesman Problem
    6.5 Cycles in a Graph
    6.6 Coloring of Graphs
    Summary
    Exercises
7 Parallel Search Algorithms
    7.1 Divide and Conquer
    7.2 Depth-First Search
    7.3 Breadth-First Search
    7.4 Best-First Search
    7.5 Branch-and-Bound Search
    7.6 Alpha-Beta Minimax Search
    Summary
    Exercises
8 Parallel Computational Algorithms
    8.1 Prefix Computation
    8.2 Transitive Closure
    8.3 Matrix Computation
        8.3.1 Matrix-Vector Multiplication
        8.3.2 Matrix-Matrix Multiplication
    8.4 System of Linear Equations
    8.5 Computing Determinants
    8.6 Expression Evaluation
    8.7 Sorting
    Summary
    Exercises
9 Data Flow and Functional Programming
    9.1 Data Flow Programming
    9.2 Functional Programming
    Summary
10 Asynchronous Parallel Programming
    10.1 Parallel Programming with Ada
    10.2 Parallel Programming with Occam
    10.3 Parallel Programming with Modula-2
    Summary
11 Data Parallel Programming
    11.1 Data Parallel Programming with C*
    11.2 Data Parallel Programming with Fortran 90
    Summary
    Exercises
12 Artificial Intelligence and Parallel Processing
    12.1 Production Systems
    12.2 Reasoning Systems
    12.3 Parallelism Analysis
    12.4 Parallelizing AI Algorithms
    12.5 Parallelizing AI Architectures
    12.6 Parallelizing AI Programming Languages
    12.7 Neural Networks or Parallel Distributed Processing
    Summary
    Exercises
Author Index