The Art of Multiprocessor Programming, Revised Reprint

by Maurice Herlihy, Nir Shavit

NOOK Book (eBook)

Current price: $62.99. Original price: $73.95 (save 15%).

Customer Reviews

5 out of 5 based on 1 review.
FRINGEINDEPENEDENTREVIEW More than 1 year ago
Are you a senior-level undergraduate or a practitioner who programs multiprocessors? If you are, then this book is for you! Authors Maurice Herlihy and Nir Shavit have done an outstanding job of writing a book that focuses on how to program multiprocessors that communicate via shared memory.

Herlihy and Shavit begin by covering classical mutual exclusion algorithms that work by reading and writing shared memory. In addition, the authors examine various ways of specifying correctness and progress. They then describe the foundations of concurrent shared-memory computing. Next, the authors show you a simple technique for proving statements of the form "there is no wait-free implementation of X by Y." The authors continue by describing how to use consensus objects to build a universal construction that implements any concurrent object. They also explain how architecture affects performance, and how to exploit this knowledge to write efficient concurrent programs.

The authors then show you how monitors are a structured way of combining synchronization and data. Next, the authors introduce several useful techniques that go beyond coarse-grained locking to allow multiple threads to access a single object at the same time. They continue by considering a kind of pool that provides first-in-first-out fairness. In addition, the authors show you how to implement concurrent stacks. They then show you how some important problems that seem inherently sequential can be made highly parallel by spreading coordination tasks among multiple parties. Next, the authors look at concurrent hashing, a problem that seems to be naturally parallelizable or, to use a more technical term, disjoint-access-parallel: concurrent method calls are likely to access disjoint locations, so there is little need for synchronization. They continue by looking at concurrent search structures with logarithmic depth. In addition, the authors describe how a priority queue typically provides an add() method to add an item to the set and a removeMin() method to remove and return the item of minimal score (see the sketch after this review). The authors then show you how to decompose certain kinds of problems into components that can be executed in parallel. Next, they discuss how a barrier is a way of forcing asynchronous threads to act almost as if they were synchronous. Finally, the authors review and analyze the strengths and weaknesses of the standard synchronization primitives, and describe some emerging alternatives that are likely to extend, and perhaps even displace, many of today's standard primitives.

This most excellent book focuses on computability: figuring out what can be computed in an asynchronous concurrent environment. Perhaps more importantly, it also deals with the practice of multiprocessor programming and focuses on performance.
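
The review mentions a priority queue exposing add() and removeMin(). A minimal Java sketch of such an interface follows; the names PQueue and CoarsePQueue, the integer score parameter, and the coarse-grained synchronized implementation are illustrative assumptions, not the book's own code.

import java.util.PriorityQueue;

// Hypothetical interface matching the add()/removeMin() methods named in the review.
interface PQueue<T> {
    void add(T item, int score); // insert an item with an integer priority score
    T removeMin();               // remove and return the item with the smallest score
}

// A simple coarse-grained implementation: one lock around a sequential heap.
class CoarsePQueue<T> implements PQueue<T> {
    private static final class Entry<T> {
        final T item;
        final int score;
        Entry(T item, int score) { this.item = item; this.score = score; }
    }

    private final PriorityQueue<Entry<T>> heap =
        new PriorityQueue<>((a, b) -> Integer.compare(a.score, b.score));

    public synchronized void add(T item, int score) {
        heap.add(new Entry<>(item, score));
    }

    public synchronized T removeMin() {
        Entry<T> e = heap.poll();      // null if the queue is empty
        return (e == null) ? null : e.item;
    }
}

A concurrent design would replace the single lock with the finer-grained or lock-free techniques the book develops; the external interface stays the same.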