Languages and Compilers for Parallel Computing: 9th International Workshop, LCPC'96, San Jose, California, USA, August 8-10, 1996, Proceedings / Edition 1

Paperback (Print)

Buy New from BN.com: $107.71

Used and New from Other Sellers: from $11.62 (Save 92%)
Usually ships in 1-2 business days

Other sellers (Paperback)
  • All (12) from $11.62
  • New (6) from $108.52
  • Used (6) from $11.62

Overview

This book presents the thoroughly refereed post-workshop proceedings of the 9th International Workshop on Languages and Compilers for Parallel Computing, LCPC'96, held in San Jose, California, in August 1996.
The book contains 35 carefully revised full papers together with nine poster presentations. The papers are organized in topical sections on automatic data distribution and locality enhancement, program analysis, compiler algorithms for fine-grain parallelism, instruction scheduling and register allocation, parallelizing compilers, communication optimization, compiling HPF, and run-time control of parallelism.


Product Details

  • ISBN-13: 9783540630913
  • Publisher: Springer Berlin Heidelberg
  • Publication date: 7/11/1997
  • Series: Lecture Notes in Computer Science, #1239
  • Edition description: 1997
  • Edition number: 1
  • Pages: 618
  • Product dimensions: 9.21 (w) x 6.14 (h) x 1.32 (d) inches

Table of Contents

  • Cross-loop reuse analysis and its application to cache optimizations
  • Locality analysis for distributed shared-memory multiprocessors
  • Data distribution and loop parallelization for shared-memory multiprocessors
  • Data localization using loop aligned decomposition for macro-dataflow processing
  • Exploiting monotone convergence functions in parallel programs
  • Exact versus approximate array region analyses
  • Context-sensitive interprocedural analysis in the presence of dynamic aliasing
  • Initial results for glacial variable analysis
  • Compiler algorithms on if-conversion, speculative predicates assignment and predicated code optimizations
  • Determining asynchronous pipeline execution times
  • Compiler techniques for concurrent multithreading with hardware speculation support
  • Resource-directed loop pipelining
  • Integrating program optimizations and transformations with the scheduling of instruction level parallelism
  • Bidirectional scheduling: A new global code scheduling approach
  • Parametric computation of margins and of minimum cumulative register lifetime dates
  • Global register allocation based on graph fusion
  • Automatic parallelization for non-cache coherent multiprocessors
  • Lock coarsening: Eliminating lock overhead in automatically parallelized object-based programs
  • Are parallel workstations the right target for parallelizing compilers?
  • Optimal reordering and mapping of a class of nested-loops for parallel execution
  • Communication-minimal tiling of uniform dependence loops
  • Communication-minimal partitioning of parallel loops and data arrays for cache-coherent distributed-memory multiprocessors
  • Resource-based communication placement analysis
  • Statement-level communication-free partitioning techniques for parallelizing compilers
  • Generalized overlap regions for communication optimization in data-parallel programs
  • Optimizing the representation of local iteration sets and access sequences for block-cyclic distributions
  • Interprocedural array redistribution data-flow analysis
  • HPF on fine-grain distributed shared memory: Early experience
  • Simple qualitative experiments with a sparse compiler
  • Factor-join: A unique approach to compiling array languages for parallel machines
  • Compilation of constraint systems to procedural parallel programs
  • A multithreaded substrate and compilation model for the implicitly parallel language pH
  • Threads for interoperable parallel programming
  • A programming environment for dynamic resource allocation and data distribution
  • Dependence driven execution for data parallelism
  • SSA and its construction through symbolic interpretation
  • Compiler support for maintaining cache coherence using data prefetching (extended abstract)
  • 3D visualization of program structure and data dependence for parallelizing compilers and parallel programming
  • Side effect analysis on user-defined reduction functions with dynamic pointer-linked data structures
  • Estimating minimum execution time of perfect loop nests with loop-carried dependences
  • Automatic data and computation partitioning on scalable shared memory multiprocessors
  • The loop parallelizer LooPo - announcement
  • A generalized forall concept for parallel languages
  • Memory optimizations in the Intel Reference Compiler

