Computer Organization and Design: The Hardware/Software Interface

by David A. Patterson, John L. Hennessy
     
 


Overview

This Fourth Revised Edition of Computer Organization and Design includes a complete set of updated and new exercises, along with improvements and changes suggested by instructors and students. Focusing on the revolutionary change taking place in industry today--the switch from uniprocessor to multicore microprocessors--this classic textbook has a modern and up-to-date focus on parallelism in all its forms. Examples highlighting multicore and GPU processor designs are supported with performance and benchmarking data. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, computer arithmetic, pipelining, memory hierarchies and I/O. Sections on the ARM and x86 architectures are also included.

All disc-based content for this title is now available on the Web.



  • This Revised Fourth Edition of Computer Organization and Design has been updated with new exercises and improvements throughout suggested by instructors teaching from the book
  • Covers the revolutionary change from sequential to parallel computing, with a chapter on parallelism and sections in every chapter highlighting parallel hardware and software topics
  • Includes an appendix by the Chief Scientist and the Director of Architecture of NVIDIA covering the emergence and importance of the modern GPU, describing in detail for the first time the highly parallel, highly multithreaded multiprocessor optimized for visual computing
  • The companion CD provides a toolkit of simulators and compilers along with tutorials for using them, as well as advanced content for further study and a search utility for finding content on the CD and in the printed text. For the convenience of readers who have purchased an ebook edition or who may have misplaced the CD-ROM, all CD content is available as a download at bit.ly/nFXcLq

Editorial Reviews

Patterson and Hennessy's Computer Organization and Design has long been recognized as an exceptionally thorough and reliable introduction to computer design. Now they've fully updated their classic and linked it even more systematically to real-world practice.

You'll now find "Real Stuff" sections in every chapter: from Google clusters to advanced pipelining, AMD's Opteron memory hierarchy to recent innovations in Intel's Pentium line. You'll find new hardware coverage: FPGAs, HDL simulators, logic design conventions, and more. Plenty of new software coverage, too, from optimizing compilers to implementing object-oriented languages.

The meat of the book remains as good as ever: detailed, accessible introductions to modern instruction sets, memory, storage, networking, multiprocessing, clustering, and more. How it all fits. Why it works as it does. Whether you're primarily interested in hardware or software, this is the rock-solid grounding you'll need to succeed. Bill Camarda, from the October 2007 Read Only

bn.com
The Barnes & Noble Review
Even today, to write great software, it helps to understand the underlying hardware. And if you’re a hardware architect, you’d better understand how your choices will impact developers. Computer Organization and Design, Third Edition will help software and hardware folks understand each other. The authors even provide separate learning paths for each audience.

Using the actual MIPS 32 architecture to ground their discussions in reality, David Patterson and John Hennessy illuminate computer arithmetic, pipelining, memory hierarchies, I/O, multiprocessing, clustering, and much more. Throughout, welcome “Fallacies and Pitfalls” sections clear up much of the misinformation that bedevils the field.

This edition’s been heavily updated, both for clarity and content. Especially worth noting: a stronger focus on the relationship between hardware and program performance, and a comparison of the Pentium 4 with AMD’s influential new Opteron. Bill Camarda

Bill Camarda is a consultant, writer, and web/multimedia content developer. His 15 books include Special Edition Using Word 2003 and Upgrading & Fixing Networks for Dummies, Second Edition.

Booknews
An introduction to the field for students in software and hardware design, emphasizing the relationships between software and hardware. Presents each idea from its first principles, adding complexity through a series of worked examples and solutions, with coverage of the MIPS instruction set, fundamentals of assembly language, computer arithmetic, pipelining, and memory hierarchies. Discusses design, performance, and significance of I/O systems, and emerging architectures of multiprocessor systems. Each chapter includes sections on examples (new to this edition), fallacies and pitfalls, and history of the field, plus exercises and key terms. Layout is attractive and readable. Assumes beginning courses in programming. Annotation c. by Book News, Inc., Portland, Or.

From the Publisher

"...the fundamental computer organization book, both as an introduction for readers with no experience in computer architecture topics, and as an up-to-date reference for computer architects."--Computing Reviews, July 22 2014

Product Details

ISBN-13: 9780080886138
Publisher: Elsevier Science
Publication date: 10/13/2011
Series: Morgan Kaufmann Series in Computer Architecture and Design
Sold by: Barnes & Noble
Format: NOOK Book
Pages: 914
Sales rank: 445,203
File size: 14 MB
Note: This product may take a few minutes to download.

Read an Excerpt

Computer Organization and Design

THE HARDWARE/SOFTWARE INTERFACE
By David A. Patterson and John L. Hennessy

Morgan Kaufmann

Copyright © 2012 Elsevier, Inc.
All rights reserved.

ISBN: 978-0-08-088613-8


Chapter One

Computer Abstractions and Technology

1.1 Introduction
1.2 Below Your Program
1.3 Under the Covers
1.4 Performance
1.5 The Power Wall
1.6 The Sea Change: The Switch from Uniprocessors to Multiprocessors
1.7 Real Stuff: Manufacturing and Benchmarking the AMD Opteron X4
1.8 Fallacies and Pitfalls
1.9 Concluding Remarks
1.10 Historical Perspective and Further Reading
1.11 Exercises

1.1 Introduction

Welcome to this book! We're delighted to have this opportunity to convey the excitement of the world of computer systems. This is not a dry and dreary field, where progress is glacial and where new ideas atrophy from neglect. No! Computers are the product of the incredibly vibrant information technology industry, all aspects of which are responsible for almost 10% of the gross national product of the United States, an economy that has become dependent in part on the rapid improvements in information technology promised by Moore's law. This unusual industry embraces innovation at a breathtaking rate. In the last 25 years, there have been a number of new computers whose introduction appeared to revolutionize the computing industry; these revolutions were cut short only because someone else built an even better computer.

This race to innovate has led to unprecedented progress since the inception of electronic computing in the late 1940s. Had the transportation industry kept pace with the computer industry, for example, today we could travel from New York to London in about a second for a few cents. Take just a moment to contemplate how such an improvement would change society—living in Tahiti while working in San Francisco, going to Moscow for an evening at the Bolshoi Ballet—and you can appreciate the implications of such a change.

Computers have led to a third revolution for civilization, with the information revolution taking its place alongside the agricultural and the industrial revolutions. The resulting multiplication of humankind's intellectual strength and reach naturally has affected our everyday lives profoundly and changed the ways in which the search for new knowledge is carried out. There is now a new vein of scientific investigation, with computational scientists joining theoretical and experimental scientists in the exploration of new frontiers in astronomy, biology, chemistry, and physics, among others.

The computer revolution continues. Each time the cost of computing improves by another factor of 10, the opportunities for computers multiply. Applications that were economically infeasible suddenly become practical. In the recent past, the following applications were "computer science fiction."

* Computers in automobiles: Until microprocessors improved dramatically in price and performance in the early 1980s, computer control of cars was ludicrous. Today, computers reduce pollution, improve fuel efficiency via engine controls, and increase safety through the prevention of dangerous skids and through the inflation of air bags to protect occupants in a crash.

* Cell phones: Who would have dreamed that advances in computer systems would lead to mobile phones, allowing person-to-person communication almost anywhere in the world?

* Human genome project: The cost of computer equipment to map and analyze human DNA sequences is hundreds of millions of dollars. It's unlikely that anyone would have considered this project had the computer costs been 10 to 100 times higher, as they would have been 10 to 20 years ago. Moreover, costs continue to drop; you may be able to acquire your own genome, allowing medical care to be tailored to you.

* World Wide Web: Not in existence at the time of the first edition of this book, the World Wide Web has transformed our society. For many, the WWW has replaced libraries.

* Search engines: As the content of the WWW grew in size and in value, finding relevant information became increasingly important. Today, many people rely on search engines for such a large part of their lives that it would be a hardship to go without them.

Clearly, advances in this technology now affect almost every aspect of our society. Hardware advances have allowed programmers to create wonderfully useful software, which explains why computers are omnipresent. Today's science fiction suggests tomorrow's killer applications: already on their way are virtual worlds, practical speech recognition, and personalized health care.

Classes of Computing Applications and Their Characteristics

Although a common set of hardware technologies (see Sections 1.3 and 1.7) is used in computers ranging from smart home appliances to cell phones to the largest supercomputers, these different applications have different design requirements and employ the core hardware technologies in different ways. Broadly speaking, computers are used in three different classes of applications.

Desktop computers are possibly the best-known form of computing and are characterized by the personal computer, which readers of this book have likely used extensively. Desktop computers emphasize delivery of good performance to single users at low cost and usually execute third-party software. The evolution of many computing technologies is driven by this class of computing, which is only about 30 years old!

Servers are the modern form of what were once mainframes, minicomputers, and supercomputers, and are usually accessed only via a network. Servers are oriented to carrying large workloads, which may consist of either single complex applications—usually a scientific or engineering application—or handling many small jobs, such as would occur in building a large Web server. These applications are usually based on software from another source (such as a database or simulation system), but are often modified or customized for a particular function. Servers are built from the same basic technology as desktop computers, but provide for greater expandability of both computing and input/output capacity. In general, servers also place a greater emphasis on dependability, since a crash is usually more costly than it would be on a single-user desktop computer.

Servers span the widest range in cost and capability. At the low end, a server may be little more than a desktop computer without a screen or keyboard and cost a thousand dollars. These low-end servers are typically used for file storage, small business applications, or simple Web serving (see Section 6.10). At the other extreme are supercomputers, which at present consist of hundreds to thousands of processors and usually terabytes of memory and petabytes of storage, and cost millions to hundreds of millions of dollars. Supercomputers are usually used for high-end scientific and engineering calculations, such as weather forecasting, oil exploration, protein structure determination, and other large-scale problems. Although such supercomputers represent the peak of computing capability, they represent a relatively small fraction of the servers and a relatively small fraction of the overall computer market in terms of total revenue.

Although not called supercomputers, Internet datacenters used by companies like eBay and Google also contain thousands of processors, terabytes of memory, and petabytes of storage. These are usually considered as large clusters of computers (see Chapter 7).

Embedded computers are the largest class of computers and span the widest range of applications and performance. Embedded computers include the microprocessors found in your car, the computers in a cell phone, the computers in a video game or television, and the networks of processors that control a modern airplane or cargo ship. Embedded computing systems are designed to run one application or one set of related applications that is normally integrated with the hardware and delivered as a single system; thus, despite the large number of embedded computers, most users never really see that they are using a computer!

Figure 1.1 shows that during the last several years, the growth in cell phones that rely on embedded computers has been much faster than the growth rate of desktop computers. Note that the embedded computers are also found in digital TVs and set-top boxes, automobiles, digital cameras, music players, video games, and a variety of other such consumer devices, which further increases the gap between the number of embedded computers and desktop computers.

Embedded applications often have unique application requirements that combine a minimum performance with stringent limitations on cost or power. For example, consider a music player: the processor need only be as fast as necessary to handle its limited function, and beyond that, minimizing cost and power are the most important objectives. Despite their low cost, embedded computers often have lower tolerance for failure, since the results can vary from upsetting (when your new television crashes) to devastating (such as might occur when the computer in a plane or cargo ship crashes). In consumer-oriented embedded applications, such as a digital home appliance, dependability is achieved primarily through simplicity—the emphasis is on doing one function as perfectly as possible. In large embedded systems, techniques of redundancy from the server world are often employed (see Section 6.9). Although this book focuses on general-purpose computers, most concepts apply directly, or with slight modifications, to embedded computers.

Elaboration: Elaborations are short sections used throughout the text to provide more detail on a particular subject that may be of interest. Uninterested readers may skip over an elaboration, since the subsequent material will never depend on its contents.

Many embedded processors are designed using processor cores, a version of a processor written in a hardware description language, such as Verilog or VHDL (see Chapter 4). The core allows a designer to integrate other application-specific hardware with the processor core for fabrication on a single chip.

What You Can Learn in This Book

Successful programmers have always been concerned about the performance of their programs, because getting results to the user quickly is critical in creating successful software. In the 1960s and 1970s, a primary constraint on computer performance was the size of the computer's memory. Thus, programmers often followed a simple credo: minimize memory space to make programs fast. In the last decade, advances in computer design and memory technology have greatly reduced the importance of small memory size in most applications other than those in embedded computing systems.

Programmers interested in performance now need to understand the issues that have replaced the simple memory model of the 1960s: the parallel nature of processors and the hierarchical nature of memories. Programmers who seek to build competitive versions of compilers, operating systems, databases, and even applications will therefore need to increase their knowledge of computer organization.

We are honored to have the opportunity to explain what's inside this revolutionary machine, unraveling the software below your program and the hardware under the covers of your computer. By the time you complete this book, we believe you will be able to answer the following questions:

* How are programs written in a high-level language, such as C or Java, translated into the language of the hardware, and how does the hardware execute the resulting program? Comprehending these concepts forms the basis of understanding the aspects of both the hardware and software that affect program performance.

* What is the interface between the software and the hardware, and how does software instruct the hardware to perform needed functions? These concepts are vital to understanding how to write many kinds of software.

* What determines the performance of a program, and how can a programmer improve the performance? As we will see, this depends on the original program, the software translation of that program into the computer's language, and the effectiveness of the hardware in executing the program.

* What techniques can be used by hardware designers to improve performance? This book will introduce the basic concepts of modern computer design. The interested reader will find much more material on this topic in our advanced book, Computer Architecture: A Quantitative Approach.

* What are the reasons for and the consequences of the recent switch from sequential processing to parallel processing? This book gives the motivation, describes the current hardware mechanisms to support parallelism, and surveys the new generation of "multicore" microprocessors (see Chapter 7).

(Continues...)



Excerpted from Computer Organization and Design by David A. Patterson and John L. Hennessy. Copyright © 2012 by Elsevier, Inc. Excerpted by permission of Morgan Kaufmann. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Meet the Author

David A. Patterson has been teaching computer architecture at the University of California, Berkeley, since joining the faculty in 1977, where he holds the Pardee Chair of Computer Science. His teaching has been honored by the Distinguished Teaching Award from the University of California, the Karlstrom Award from ACM, and the Mulligan Education Medal and Undergraduate Teaching Award from IEEE. Patterson received the IEEE Technical Achievement Award and the ACM Eckert-Mauchly Award for contributions to RISC, and he shared the IEEE Johnson Information Storage Award for contributions to RAID. He also shared the IEEE John von Neumann Medal and the C&C Prize with John Hennessy. Like his co-author, Patterson is a Fellow of the American Academy of Arts and Sciences, the Computer History Museum, ACM, and IEEE, and he was elected to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He served on the Information Technology Advisory Committee to the U.S. President, as chair of the CS division in the Berkeley EECS department, as chair of the Computing Research Association, and as President of ACM. This record led to Distinguished Service Awards from ACM, CRA, and SIGARCH.
John L. Hennessy is the tenth president of Stanford University, where he has been a member of the faculty since 1977 in the departments of electrical engineering and computer science. Hennessy is a Fellow of the IEEE and ACM; a member of the National Academy of Engineering, the National Academy of Science, and the American Philosophical Society; and a Fellow of the American Academy of Arts and Sciences. Among his many awards are the 2001 Eckert-Mauchly Award for his contributions to RISC technology, the 2001 Seymour Cray Computer Engineering Award, and the 2000 John von Neumann Award, which he shared with David Patterson. He has also received seven honorary doctorates.

Customer Reviews


Most Helpful Customer Reviews


Computer Organization and Design, Revised Fourth Edition: The Hardware/Software Interface 5 out of 5 based on 0 ratings. 1 review.
FRINGEINDEPENEDENTREVIEW More than 1 year ago
Do you want to learn how to design a computer, or understand how a system works and why it performs as it does? If so, then this book is for you! Authors John L. Hennessy and David A. Patterson have done an outstanding job of writing a fourth edition that shows the relationship between hardware and software and focuses on the concepts that are the basis for current computers. Hennessy and Patterson begin by showing how power limits have forced the industry to switch to parallelism, and why parallelism helps. In addition, the authors discuss locks for shared variables, specifically the MIPS instructions Load Linked and Store Conditional. They then discuss the challenges of numerical precision and floating-point calculation, and go on to cover advanced ILP (superscalar, speculation, VLIW, loop unrolling, and OOO), as well as the relationship between pipeline depth and power consumption. Next, they introduce coherency, consistency, and snooping cache protocols, and present bridges and switches as a complete network system. They continue by showing how RAID is both a parallel I/O system and a highly available I/O system. Finally, the authors conclude with reasons for optimism about why this foray into parallelism should be more successful than those of the past. The primary goal of this most excellent book was to make parallelism a first-class citizen in this edition; perhaps more importantly, the authors streamlined the book to make room for the new material on parallelism.