The Art of Multiprocessor Programming, Revised Reprint


by Maurice Herlihy, Nir Shavit
Price: $57.57 (list price $74.95; save 23%)



Revised and updated with improvements conceived in parallel programming courses, The Art of Multiprocessor Programming is an authoritative guide to multicore programming. It introduces a higher-level set of software development skills than those needed for efficient single-core programming. This book provides comprehensive coverage of the new principles, algorithms, and tools necessary for effective multiprocessor programming. Students and professionals alike will benefit from thorough coverage of key multiprocessor programming issues.

  • This revised edition incorporates much-demanded updates throughout the book, based on feedback and corrections reported from classrooms since 2008
  • Learn the fundamentals of programming multiple threads accessing shared memory
  • Explore mainstream concurrent data structures and the key elements of their design, as well as synchronization techniques from simple locks to transactional memory systems
  • Visit the companion site and download source code, example Java programs, and materials to support and enhance the learning experience
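To give a flavor of the "simple locks" end of that spectrum, here is a minimal test-and-set spin lock sketched in Java (the book's language); the class name and structure are illustrative, not taken from the book's companion code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A minimal test-and-set spin lock: threads spin until they
// atomically flip the state flag from false (free) to true (held).
public class SpinLock {
    private final AtomicBoolean state = new AtomicBoolean(false);

    public void lock() {
        // compareAndSet succeeds for exactly one thread at a time
        while (!state.compareAndSet(false, true)) {
            // busy-wait; practical spin locks add back-off to reduce contention
        }
    }

    public void unlock() {
        state.set(false); // release: any spinning thread may now acquire
    }
}
```

Even this tiny example already raises the questions the book studies in depth: correctness (mutual exclusion), progress (a spinning thread may starve), and performance (contention on the shared flag).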

Product Details

ISBN-13: 9780123973375
Publisher: Elsevier Science
Publication date: 06/05/2012
Edition description: Revised
Pages: 536
Sales rank: 876,182
Product dimensions: 7.50(w) x 9.20(h) x 1.40(d)

Table of Contents

1. Introduction

2. Mutual Exclusion

3. Concurrent Objects and Linearization

4. Foundations of Shared Memory

5. The Relative Power of Synchronization Methods

6. The Universality of Consensus

7. Spin Locks and Contention

8. Monitors and Blocking Synchronization

9. Linked Lists: the Role of Locking

10. Concurrent Queues and the ABA Problem

11. Concurrent Stacks and Elimination

12. Counting, Sorting and Distributed Coordination

13. Concurrent Hashing and Natural Parallelism

14. Skiplists and Balanced Search

15. Priority Queues

16. Futures, Scheduling and Work Distribution

17. Barriers

18. Transactional Memory


Customer Reviews

Are you a senior-level undergraduate or a practitioner who programs multiprocessors? If you are, then this book is for you! Authors Maurice Herlihy and Nir Shavit have done an outstanding job of writing a book that focuses on how to program multiprocessors that communicate via shared memory.

Herlihy and Shavit begin by covering classical mutual exclusion algorithms that work by reading and writing shared memory, and examine various ways of specifying correctness and progress. They then describe the foundations of concurrent shared-memory computing. Next, the authors show a simple technique for proving statements of the form "there is no wait-free implementation of X by Y," and describe how to use consensus objects to build a universal construction that implements any concurrent object. They also explain how architecture affects performance, and how to exploit this knowledge to write efficient concurrent programs. The authors then show how monitors provide a structured way of combining synchronization and data.

Next, the authors introduce several useful techniques that go beyond coarse-grained locking to allow multiple threads to access a single object at the same time. They continue by considering a kind of pool that provides first-in-first-out fairness, and show how to implement concurrent stacks. They then show how some important problems that seem inherently sequential can be made highly parallel by spreading coordination tasks among multiple parties. Next, the authors look at concurrent hashing, a problem that is naturally parallelizable or, in more technical terms, disjoint-access-parallel: concurrent method calls are likely to access disjoint locations, so there is little need for synchronization. They continue by looking at concurrent search structures with logarithmic depth.
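The concurrent stacks the review mentions can even be built without locks. Here is a compact lock-free (Treiber-style) stack sketch in Java, with illustrative names, in the spirit of the book's treatment; note that Java's garbage collector sidesteps the ABA problem that the queue chapter discusses:

```java
import java.util.concurrent.atomic.AtomicReference;

// A Treiber-style lock-free stack: push and pop retry a
// compare-and-set on the top pointer until they succeed.
public class LockFreeStack<T> {
    private static class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    public void push(T value) {
        Node<T> node = new Node<>(value);
        Node<T> oldTop;
        do {
            oldTop = top.get();
            node.next = oldTop;          // link the new node above the old top
        } while (!top.compareAndSet(oldTop, node));
    }

    public T pop() {
        Node<T> oldTop, newTop;
        do {
            oldTop = top.get();
            if (oldTop == null) return null;  // empty stack
            newTop = oldTop.next;
        } while (!top.compareAndSet(oldTop, newTop));
        return oldTop.value;
    }
}
```

Under contention, every failed compareAndSet forces a retry on the same top pointer; the book's elimination technique (Chapter 11) shows how to turn such collisions into useful work.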
The authors then describe how a priority queue typically provides an add() method to add an item to the set and a removeMin() method to remove and return the item with the minimal score. They go on to show how to decompose certain kinds of problems into components that can be executed in parallel, and discuss how a barrier forces asynchronous threads to act almost as if they were synchronous. Finally, the authors review and analyze the strengths and weaknesses of the standard synchronization primitives, and describe some emerging alternatives, such as transactional memory, that are likely to extend, and perhaps even displace, many of today's standard primitives.

This most excellent book focuses on computability: figuring out what can be computed in an asynchronous concurrent environment. Perhaps more importantly, it also deals with the practice of multiprocessor programming, with a focus on performance.
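The add()/removeMin() interface described above maps directly onto classes in the Java standard library, where poll() plays the role of removeMin():

```java
import java.util.PriorityQueue;
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        // add() inserts an item; poll() removes and returns the item
        // with the minimal score, like the removeMin() method above.
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        pq.add(42);
        pq.add(7);
        pq.add(19);
        System.out.println(pq.poll()); // 7 -- smallest score first
        System.out.println(pq.poll()); // 19

        // A thread-safe variant with the same ordering semantics,
        // here initialized with whatever remains in pq:
        PriorityBlockingQueue<Integer> safe = new PriorityBlockingQueue<>(pq);
        System.out.println(safe.poll()); // 42
    }
}
```

PriorityBlockingQueue is lock-based; the book's priority-queue chapter explores more scalable concurrent designs.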