Computer Organization and Design: The Hardware/Software Interface, Second Edition

Hardcover (Print)

Overview

The performance of software systems is dramatically affected by how well software designers understand the basic hardware technologies at work in a system. Similarly, hardware designers must understand the far-reaching effects their design decisions have on software applications. For readers in either category, this classic introduction to the field provides a deep look into the computer. It demonstrates the relationship between software and hardware and focuses on the foundational concepts that are the basis for current computer design.



Using a distinctive "learning by evolution" approach, the authors present each idea from first principles, guiding readers through a series of worked examples that incrementally add more complex instructions until they have acquired an understanding of the entire MIPS instruction set and the fundamentals of assembly language. Computer arithmetic, pipelining, and memory hierarchies receive the same evolutionary treatment, with worked examples and incremental drawings supporting each new level of sophistication. The design, performance, and significance of I/O systems are also discussed in depth, and an entire chapter is devoted to the emerging architectures of multiprocessor systems.

* Real Stuff provides relevant, tangible examples of how the concepts from the chapter are implemented in commercially successful products.
* Fallacies and Pitfalls share the hard-won lessons of the authors and other designers in industry.
* Big Pictures allow the reader to keep major insights in focus while studying the details.
* Key terms, all fully defined in an end-of-book glossary, summarize the essential ideas introduced in the chapter.


The prior edition of this text is a computer science classic, and it has now been updated to reflect the rapid evolution of software and hardware trends, concepts, issues, and technologies. Although this is an undergraduate CS text, it is not an introductory one. It lays a solid foundation for the student, then plumbs the boundary between hardware and software as defined by architecture specifications and computer design principles, the place that is, in the authors' words, "...where compilation (in software) ends and interpretation (in hardware) begins." The book discusses computer abstraction, technology, and performance issues, and it initiates the process of "learning by evolution" through assembly language instructions and numbers, datapath and control concepts, pipelining, and performance enhancement.


Editorial Reviews

Booknews
An introduction to the field for students in software and hardware design, emphasizing the relationships between software and hardware. Presents each idea from its first principles, adding complexity through a series of worked examples and solutions, with coverage of the MIPS instruction set, fundamentals of assembly language, computer arithmetic, pipelining, and memory hierarchies. Discusses design, performance, and significance of I/O systems, and emerging architectures of multiprocessor systems. Each chapter includes sections on examples (new to this edition), fallacies and pitfalls, and history of the field, plus exercises and key terms. Layout is attractive and readable. Assumes beginning courses in programming. Annotation c. by Book News, Inc., Portland, Or.

Product Details

  • ISBN-13: 9781558604285
  • Publisher: Elsevier Science
  • Publication date: 8/1/1997
  • Edition description: Older Edition
  • Edition number: 2
  • Pages: 643
  • Product dimensions: 7.69 (w) x 9.57 (h) x 1.94 (d)

Meet the Author


John L. Hennessy (Stanford University) has been a member of the Stanford faculty since 1977, where he teaches computer architecture and supervises a group of energetic Ph.D. students. He is currently Chairman of the Computer Science Department and holds the Willard R. and Inez Kerr Bell Professorship in the School of Engineering. Hennessy is a Fellow of the IEEE, a member of the National Academy of Engineering, and a Fellow of the American Academy of Arts and Sciences. He received the 1994 IEEE Piore Award for his contributions to the development of RISC technology.

Hennessy's original research area was optimizing compilers. His research group at Stanford developed many of the techniques now in commercial use. In 1981, he started the MIPS project at Stanford with a handful of graduate students. After completing the project in 1984, he took a one-year leave of absence from the university to co-found MIPS Computer Systems, which has since merged with Silicon Graphics. Hennessy's recent research at Stanford focuses on the area of designing and exploiting multiprocessors. Most recently, he has been involved in the development of the DASH multiprocessor architecture, one of the first distributed shared-memory multiprocessors.

David A. Patterson (University of California at Berkeley) has taught computer architecture since joining the faculty in 1977, and is holder of the E.H. and M.E. Pardee Chair of Computer Science. He is a member of the National Academy of Engineering and is a Fellow of both the IEEE and the Association for Computing Machinery (ACM). His teaching has been honored by the ACM with the Outstanding Educator Award and by the University of California with the Distinguished Teaching Award. He also received the inaugural Outstanding Alumnus Award of the UCLA Computer Science Department.

Past chair of the CS Division in the EECS department at Berkeley and the ACM Special Interest Group in Computer Architecture, he is currently chair of the Computing Research Association. He has consulted for many companies, including Digital, HP, Intel, and Sun, and is also co-author of five books.

At Berkeley, he led the design and implementation of RISC I, likely the first VLSI Reduced Instruction Set Computer. This research became the foundation of the SPARC architecture, currently used by Fujitsu, ICL, Sun, TI, and Xerox. He was also a leader of the Redundant Arrays of Inexpensive Disks (RAID) project, which led to high performance storage systems from many companies. These projects led to three distinguished dissertation awards from the ACM. His current research interests are in large-scale computing using networks of workstations (NOW).


Read an Excerpt


Chapter 2: The Role of Performance

The Quest for an Average Program

As processors were becoming more sophisticated and relied on memory hierarchies (the topic of Chapter 7) and pipelining (the topic of Chapter 6), a single execution time for each instruction no longer existed; neither execution time nor MIPS, therefore, could be calculated from the instruction mix and the manual. While it might seem obvious today that the right thing to do would have been to develop a set of real applications that could be used as standard benchmarks, this was a difficult task until relatively recent times. Variations in operating systems and language standards made it hard to create large programs that could be moved from machine to machine simply by recompiling. Instead, the next step was benchmarking using synthetic programs. The Whetstone synthetic program was created by measuring scientific programs written in Algol 60 (see Curnow and Wichmann's [1976] description). This program was converted to Fortran and was widely used to characterize scientific program performance. Whetstone performance is typically quoted in Whetstones per second: the number of executions of one iteration of the Whetstone benchmark! Dhrystone was developed much more recently (see Weicker's [1984] description and methodology).
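
For readers unfamiliar with the older approach this passage refers to, the sketch below shows the style of estimate that was once computed from an instruction mix and the per-instruction timings in a manual. It is a minimal illustration, not material from the book; the instruction classes, CPI values, clock rate, and instruction count are invented.

    # Illustrative sketch only (not from the book): estimating execution time and a
    # native MIPS rating from an instruction mix, the calculation that pipelining
    # and memory hierarchies made impossible. All numbers below are invented.
    mix = {"alu": 0.50, "load_store": 0.30, "branch": 0.20}   # fraction of executed instructions
    cpi = {"alu": 1.0, "load_store": 2.0, "branch": 1.5}      # assumed cycles per instruction

    clock_rate_hz = 50e6        # assumed 50 MHz clock
    instruction_count = 100e6   # assumed dynamic instruction count of a program

    avg_cpi = sum(mix[c] * cpi[c] for c in mix)               # mix-weighted average CPI
    exec_time_s = instruction_count * avg_cpi / clock_rate_hz
    native_mips = clock_rate_hz / (avg_cpi * 1e6)

    print(f"average CPI = {avg_cpi:.2f}")                                   # 1.40
    print(f"time = {exec_time_s:.2f} s, native MIPS = {native_mips:.1f}")   # 2.80 s, 35.7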

About the same time Whetstone was developed, the concept of kernel benchmarks gained popularity. Kernels are small, time-intensive pieces from real programs that are extracted and then used as benchmarks. This approach was developed primarily for benchmarking high-end machines, especially supercomputers. Livermore Loops and Linpack are the best-known examples. The Livermore Loops consist of a series of 21 small loop fragments. Linpack consists of a portion of a linear algebra subroutine package. Kernels are best used to isolate the performance of individual features of a machine and to explain the reasons for differences in the performance of real programs. Because scientific applications often use small pieces of code that execute for a long period of time, characterizing performance with kernels is most popular in this application class. Although kernels help illuminate performance, they often overstate the performance on real applications. For example, today's supercomputers often achieve a high percentage of their peak performance on such kernels. However, when executing real applications, the performance often is only a small fraction of the peak performance.

The Quest for a Simple Program

Another misstep on the way to developing better benchmarking methods was the use of toy programs as benchmarks. Such programs typically have between 10 and 100 lines of code and produce a result the user already knows before running the toy program. Programs like Sieve of Eratosthenes, Puzzle, and Quicksort were popular because they are small, easy to compile, and run on almost any computer. These programs became quite popular in the early 1980s, when universities were engaged in designing the early RISC machines. The small size of these programs made it easy to compile and run them on simulators. Unfortunately, we have to admit that we played a role in popularizing such benchmarks, by using them to compare performance and even collecting sets of such programs for distribution. Even more unfortunately, some people continue to use such benchmarks, much to our embarrassment! However, we can report that we have learned our lesson and we now understand that the best use of such programs is as beginning programming assignments.

Summarizing Can Be Tricky

Almost every issue that involves measuring and reporting performance has been controversial, including the question of how to summarize performance. The methods used have included the arithmetic mean of normalized performance, the harmonic mean of rates, the geometric mean of normalized execution time, and the total execution time. Several references listed at the end of this section discuss this question, including Smith's [1988] article, whose proposal is the approach used in section 2.5.
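
To make the controversy concrete, the following sketch computes three of the summaries mentioned above for two hypothetical machines. The normalized execution times are invented purely for illustration and are not data from the book; the point is only that the three statistics can suggest different conclusions about which machine is faster.

    # Hypothetical illustration: two machines, two benchmarks, execution times
    # normalized to a reference machine (lower is better). All numbers are invented.
    from math import prod

    times_a = [10.0, 0.1]   # machine A: 10x slower on one benchmark, 10x faster on the other
    times_b = [2.0, 0.5]    # machine B: 2x slower on one benchmark, 2x faster on the other

    def arithmetic_mean(xs):
        return sum(xs) / len(xs)

    def geometric_mean(xs):
        return prod(xs) ** (1.0 / len(xs))

    def harmonic_mean(xs):
        return len(xs) / sum(1.0 / x for x in xs)

    for name, xs in [("A", times_a), ("B", times_b)]:
        print(name, arithmetic_mean(xs), geometric_mean(xs), harmonic_mean(xs))
    # Arithmetic means: A = 5.05, B = 1.25  -> B looks far better.
    # Geometric means:  A = 1.00, B = 1.00  -> the two machines look identical.
    # Harmonic means:   A = 0.20, B = 0.80  -> A looks better.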

SPECulating about Performance

An important advance in performance evaluation was the formation of the System Performance Evaluation Cooperative (SPEC) group in 1988. SPEC comprises representatives of many computer companies (the founders being Apollo/Hewlett-Packard, DEC, MIPS, and Sun) who have agreed on a set of real programs and inputs that all will run. It is worth noting that SPEC couldn't have come into being before portable operating systems and the popularity of high-level languages. Now compilers, too, are accepted as a proper part of the performance of computer systems and must be measured in any evaluation.

History teaches us that while the SPEC effort may be useful with current computers, it will not meet the needs of the next generation without changing. In 1991, a throughput measure was added, based on running multiple versions of the benchmark. It is most useful for evaluating timeshared usage of a uniprocessor or a multiprocessor. Other system benchmarks that include OS-intensive and I/O-intensive activities have also been added. Another change, motivated in part by the kind of results shown in Figure 2.3, was the decision to drop matrix300 and to add more benchmarks. One result of the difficulty in finding benchmarks was that the initial version of the SPEC benchmarks (called SPEC89) contained six floating-point benchmarks but only four integer benchmarks. Calculating a single summary measurement using the geometric mean of execution times normalized to a VAX-11/780 meant that this measure favored machines with strong floating-point performance.
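
A small, hypothetical calculation shows the weighting effect described above: folding six floating-point benchmarks and four integer benchmarks into a single geometric mean tilts the summary toward floating-point strength. The ratios below are invented for illustration, not actual SPEC89 results.

    # Hypothetical illustration of the weighting effect; the SPEC-style ratios
    # (higher is better, relative to a reference machine) are invented values.
    from math import prod

    def geometric_mean(xs):
        return prod(xs) ** (1.0 / len(xs))

    fp_strong = [2.0] * 6 + [1.0] * 4   # 2x on the six FP benchmarks, 1x on the four integer ones
    balanced  = [1.4] * 6 + [1.4] * 4   # 1.4x on all ten benchmarks

    print(geometric_mean(fp_strong))    # about 1.52
    print(geometric_mean(balanced))     # 1.40
    # The single summary favors the floating-point-heavy machine even though
    # the balanced machine is faster on every integer benchmark.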

In 1992, a new benchmark set (called SPEC92) was introduced. It incorporated additional benchmarks, dropped matrix300, and provided separate means (SPECint and SPECfp) for integer and floating-point programs. In addition, the SPECbase measure, which disallows program-specific optimization flags, was added to provide users with a performance measurement that would more closely match what they might experience on their own programs. The SPECfp numbers show the largest increase versus the base SPECfp measurement, typically ranging from 15% to 30% higher.

In 1995, the benchmark set was once again updated, adding some new integer and floating-point benchmarks, as well as removing some benchmarks that suffered from flaws or had running times that had become too small given the factor of 20 or more performance improvement since the first SPEC release. SPEC95 also changed the base machine for normalization to a Sun SPARCstation 10/40, since operating versions of the original base machine were becoming difficult to find!

SPEC has also added additional benchmark suites beyond the original suites targeted at CPU performance. The SDM (Systems Development Multitasking) benchmark contains two benchmarks that are synthetic versions of development workloads (edits, compiles, executions, system commands). The SFS (System-level File Server) benchmark set is a synthetic workload for testing performance as a file server. Both these benchmark sets include significant I/O and operating system components, unlike the CPU tests. The most recent addition to SPEC is the SPEChpc96 suite, two benchmarks aimed at testing performance on high-end scientific workloads. For the future, SPEC is exploring benchmarks for new functions, such as Web servers.

Creating and developing such benchmark sets has become difficult and time consuming. Although SPEC was initially created as a good faith effort by a group of companies, it became important to competitive marketing and sales efforts. The selection of benchmarks and the rules for running them are made by representatives of the companies that compete by advertising test results....


Table of Contents

Foreword
Worked Examples
Computer Organization and Design Online
Preface
Ch. 1 Computer Abstractions and Technology 2
Ch. 2 The Role of Performance 52
Ch. 3 Instructions: Language of the Machine 104
Ch. 4 Arithmetic for Computers 208
Ch. 5 The Processor: Datapath and Control 336
Ch. 6 Enhancing Performance with Pipelining 434
Ch. 7 Large and Fast: Exploiting Memory Hierarchy 538
Ch. 8 Interfacing Processors and Peripherals 636
Ch. 9 Multiprocessors 710
Appendix A Assemblers, Linkers, and the SPIM Simulator A-2
Appendix B The Basics of Logic Design B-2
Appendix C Mapping Control to Hardware C-2
Glossary G-1
Index I-1

Foreword

By John H. Crawford
Intel Fellow, Director of Microprocessor Architecture
Intel Corporation, Santa Clara, California


Computer design is an exciting and competitive discipline. The microprocessor industry is on a treadmill where we double microprocessor performance every 18 months and double microprocessor complexity - measured by the number of transistors per chip - every 24 months. This unprecedented rate of change has been evident for the entire 25-year history of the microprocessor, and it promises to continue for many years to come as the creativity and energy of many people are harnessed to drive innovation ahead in spite of the challenge of ever-smaller dimensions. This book trains the student with the concepts needed to lay a solid foundation for joining this exciting field. More importantly, this book provides a framework for thinking about computer organization and design that will enable the reader to continue the lifetime of learning necessary for staying at the forefront of this competitive discipline.

The text focuses on the boundary between hardware and software and explores the levels of hardware in the vicinity of this boundary. This boundary is captured in a computer's architecture specification. It is a critical boundary for a successful computer product: an architect must define an interface that can be efficiently implemented by hardware and efficiently targeted by compilers. The interface must be able to retain these efficiencies for many generations of hardware and compiler technology, much of which will be unknown at the time the architecture is specified. This boundary is central to the discipline of computer design: it is where compilation (in software) ends and interpretation (in hardware) begins.

This book builds on introductory programming skills to introduce the concepts of assembly language programming and the tools needed for this task: the assembler, linker, and loader. Once these prerequisites are completed, the remainder of the book explores the first few levels of hardware below the architectural interface. The basic concepts are motivated and introduced with clear and intuitive examples, then elaborated into the "real stuff" used in today's modern microprocessors. For example, doing the laundry is used as an analogy in Chapter 6 to explain the basic concepts of pipelining, a key technique used in all modern computers. In Chapter 4, algorithms for the basic floating-point arithmetic operators such as addition, multiplication, and division are first explained in decimal, then in binary, and finally they are elaborated into the best-known methods used for high-speed arithmetic in today's computers.

New to this edition are sections in each chapter entitled "Real Stuff." These sections describe how the concepts from the chapter are implemented in commercially successful products. These provide relevant, tangible examples of the concepts and reinforce their importance. As an example, the Real Stuff in Chapter 6, Enhancing Performance with Pipelining, provides an overview of a dynamically scheduled pipeline as implemented in both the IBM/Motorola PowerPC 604 and Intel's Pentium Pro microprocessor.

The history of computing is woven as a thread throughout the book to reward the reader with a glimpse of key successes from the brief history of this young discipline. The other side of history is reported in the Fallacies and Pitfalls section of each chapter. Since we can learn more from failure than from success, these sections provide a wealth of learning!

The authors are two of the most admired teachers, researchers, and practitioners of the art of computer design today. John Hennessy has straddled both sides of the hardware/software boundary, providing technical leadership for the legendary MIPS compiler as well as the MIPS hardware products through many generations. David Patterson was one of the original RISC proponents: he coined the acronym RISC, evangelized the case for RISC, and served as a key consultant on Sun Microsystems' SPARC line of processors. Continuing his talent for marketable acronyms, his next breakthrough was RAID (Redundant Arrays of Inexpensive Disks), which revolutionized the disk storage industry for large data servers, and then NOW (Networks of Workstations).

Like other great "software" products, this second edition went through an extensive beta testing program: 13 beta sites tested the draft manuscript in classes to "debug" the text. Changes from this testing have been incorporated into the "production" version.

Patterson and Hennessy have succeeded in taking the first edition of their excellent introductory textbook on computer design and making it even better. This edition retains all of the good points of the original, yet adds significant new content and some minor enhancements. What results is an outstanding introduction to the exciting field of computer design.


Customer Reviews

  • Anonymous

    Posted October 7, 2003

    The Hardware-Software Interface

    I use this book as a reference in my technical writing. I recommend it to everyone who has a basic assembly language programming background and wants to understand everything behind how machine language operation codes are decoded. The authors build from scratch (and you learn from scratch): 1) how to build a complete arithmetic and logic unit (ALU), from basic logic gates to more advanced topics such as ripple carry; 2) how to build a complete control unit to guide the ALU, comparing microprogrammed and hardwired control implementations; 3) assembly language examples for programming the control unit. It is a good technical book in this area. Complement it with a review of the assembly language programming presented in 'The Art of Computer Programming', Volume 1, by Donald Knuth (and, if you need more examples of low-level programming, Volume 3, 'Sorting and Searching'). This is a very good study track.

  • Anonymous

    Posted November 18, 2002

    Logical intro to a detailed topic

    The structure of the book is very well planned. One topic leads logically to the next, building on the skills from the one before. I liked how the authors frequently step away from the details to tie up loose ends and ensure comprehension of the "big picture". I would have rated it 4 stars, but I was a bit disappointed with the coverage of assembly language programming: the book offers plenty of detail on the instruction set but little direction on writing functional programs, yet it challenges the reader to do so in the exercises.

  • Anonymous

    Posted April 27, 2001

    Great intro book

    I just used this book for a Computer Architecture course, and I thought it was excellent. The examples are relevant, it is clear and coherent, and the questions are good. Any professors out there looking for a book to use for a computer architecture class should give this one a try.

  • Anonymous

    Posted December 11, 2000

    The Best Introduction to processor design and pipelining

    I love this book. One chapter shows you how to build a processor. It starts with a register, connects it to an adder, and so on, step by step, until you have a completely functioning MIPS-like CPU. After reading that chapter I immediately ran to my computer and started designing a little processor in the Verilog design language. The next chapter was even more exciting: I learned how pipelining works. This book is not just technical information; it also offers interesting stories from the history of processor design and computing. You will read how it all started, how pipelining and parallel execution were invented, and about industry veterans and the story of supercomputers. Downsides: the chapter covering I/O is not very interesting, and the chapter about the instruction set is too long a discussion of a relatively simple subject. But that does not matter: this book is a must-have for any hardware or systems engineer or student.
