Memory Storage Patterns in Parallel Processing

by Mary E. Mace

Paperback (Softcover reprint of the original 1st ed. 1987)

$54.99 

Overview

This project had its beginnings in the Fall of 1980. At that time Robert Wagner suggested that I investigate compiler optimization of data organization, suitable for use in a parallel or vector machine environment. We developed a scheme in which the compiler, having knowledge of the machine's access patterns, does a global analysis of a program's operations and automatically determines the optimum organization for the data. For example, for certain architectures and certain operations, large improvements in performance can be attained by storing a matrix in row-major order. However, a subsequent operation may require the matrix in column-major order. A determination must be made whether the globally best solution is to store the matrix in row order, in column order, or even to keep two copies of it, each organized differently. We have developed two algorithms for making this determination. The technique shows promise in a vector machine environment, particularly if memory interleaving is used. Supercomputers such as the Cray, the CDC Cyber 205, and the IBM 3090, as well as superminis such as the Convex, are possible environments for implementation.
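
As a minimal illustration (not drawn from the book) of the layout issue described above, the following C sketch traverses a matrix once in the order its row-major layout favors and once against it; on a machine that rewards unit-stride or interleaved memory access, the two loop nests can differ greatly in speed, which is the kind of layout/operation mismatch the algorithms above are meant to resolve globally. The array size and names are chosen only for the example.

/* Illustrative sketch: summing a C matrix, which C stores in row-major
 * order, with two different loop orders. The first traversal is
 * unit-stride; the second has stride N and is the access pattern that a
 * column-major layout would have made unit-stride. */
#include <stdio.h>

#define N 1024

static double a[N][N];   /* C lays this out row by row */

int main(void) {
    double sum = 0.0;

    /* Row-wise traversal: consecutive accesses are adjacent in memory. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];

    /* Column-wise traversal: each access skips N doubles. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];

    printf("checksum: %f\n", sum);
    return 0;
}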

Product Details

ISBN-13: 9781461291947
Publisher: Springer US
Publication date: 12/22/2011
Series: The Springer International Series in Engineering and Computer Science, #30
Edition description: Softcover reprint of the original 1st ed. 1987
Pages: 160
Product dimensions: 6.10(w) x 9.25(h) x 0.01(d)

Table of Contents

1 Introduction
2 Solution For Graphs Without Shared Nodes
3 Solution For Graphs With Shared Nodes
4 Illustration of Collapsible Graph Algorithm
5 Shapes Problem Complexity Issues
6 Shapes Solution for Jacobi Iteration
Appendices:
A Definition of Collapsible Graphs
B Restriction 1
C Properties of Collapsible Graph Transformations
D Equivalence of a, b, c to A, B
E Time Bounds of Collapsible Graph Algorithm
F Cost Function for Shared Nodes
References