Using a distinctive "learning by evolution" approach, the authors present each idea from its first principles, guiding readers through a series of worked examples that incrementally add more complex instructions until they have acquired an understanding of the entire MIPS instruction set and the fundamentals of assembly language. Computer arithmetic, pipelining, and memory hierarchies are given the same evolutionary treatment, with worked examples and incremental drawings supporting each new level of sophistication. The design, performance, and significance of I/O systems are also discussed in depth, and an entire chapter is devoted to the emerging architectures of multiprocessor systems.
* Real Stuff provides relevant, tangible examples of how the concepts from the chapter are implemented in commercially successful products.
* Fallacies and Pitfalls share the hard-won lessons of the authors and other designers in industry.
* Big Pictures allow the reader to keep major insights in focus while studying the details.
* Key terms, all fully defined in an end-of-book glossary, summarize the essential ideas introduced in the chapter.
The prior edition of this text is a computer science classic, and it has now been updated to reflect the rapid evolution of software and hardware trends, concepts, issues, and technologies. Although this is an undergraduate CS text, it is not an introductory one. It lays a solid foundation for the student, then explores the boundary between hardware and software as described and defined by architecture specifications and computer design principles, or, in the authors' words, "...where compilation (in software) ends and interpretation (in hardware) begins." The book discusses concepts of computer abstraction, technology, and performance issues. It initiates the process of "learning by evolution" with assembly language instructions and numbers, datapath and control concepts, and pipelining and performance enhancement.
As processors were becoming more sophisticated and relied on memory hierarchies (the topic of Chapter 7) and pipelining (the topic of Chapter 6), a single execution time for each instruction no longer existed; neither execution time nor MIPS, therefore, could be calculated from the instruction mix and the manual. While it might seem obvious today that the right thing to do would have been to develop a set of real applications that could be used as standard benchmarks, this was a difficult task until relatively recent times. Variations in operating systems and language standards made it hard to create large programs that could be moved from machine to machine simply by recompiling. Instead, the next step was benchmarking using synthetic programs. The Whetstone synthetic program was created by measuring scientific programs written in Algol 60 (see Curnow and Wichmann's description). This program was converted to Fortran and was widely used to characterize scientific program performance. Whetstone performance is typically quoted in Whetstones per second, the number of executions of one iteration of the Whetstone benchmark! Dhrystone was developed much more recently (see Weicker's description and methodology).
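To make the older method concrete, here is a minimal sketch (the clock rate, instruction count, mix, and cycle counts are entirely made up for the example, not taken from any real manual) of how execution time and native MIPS could once be computed directly from an instruction mix and per-instruction timings:

```c
/* Illustrative only: estimate execution time and native MIPS from an
 * instruction mix and per-class cycle counts, as was possible before
 * pipelining and memory hierarchies made a fixed per-instruction time
 * meaningless. All numbers are hypothetical. */
#include <stdio.h>

int main(void) {
    double clock_rate  = 50e6;    /* 50 MHz, assumed */
    double instr_count = 100e6;   /* 100 million instructions, assumed */
    /* instruction mix: fraction of each class and its cycle count */
    double frac[]   = {0.45, 0.25, 0.20, 0.10};  /* ALU, load, store, branch */
    double cycles[] = {1.0,  2.0,  2.0,  1.5};

    double cpi = 0.0;
    for (int i = 0; i < 4; i++)
        cpi += frac[i] * cycles[i];              /* weighted average CPI */

    double exec_time = instr_count * cpi / clock_rate;   /* seconds */
    double mips      = instr_count / (exec_time * 1e6);  /* native MIPS */

    printf("CPI = %.2f, time = %.3f s, MIPS = %.1f\n", cpi, exec_time, mips);
    return 0;
}
```

Once instructions overlap in a pipeline and loads may or may not hit in a cache, no single `cycles[]` entry per class exists, which is exactly why this style of calculation broke down.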
About the same time Whetstone was developed, the concept of kernel benchmarks gained popularity. Kernels are small, time-intensive pieces from real programs that are extracted and then used as benchmarks. This approach was developed primarily for benchmarking high-end machines, especially supercomputers. Livermore Loops and Linpack are the best-known examples. The Livermore Loops consist of a series of 21 small loop fragments. Linpack consists of a portion of a linear algebra subroutine package. Kernels are best used to isolate the performance of individual features of a machine and to explain the reasons for differences in the performance of real programs. Because scientific applications often use small pieces of code that execute for a long period of time, characterizing performance with kernels is most popular in this application class. Although kernels help illuminate performance, they often overstate the performance on real applications. For example, today's supercomputers often achieve a high percentage of their peak performance on such kernels. However, when executing real applications, the performance often is only a small fraction of the peak performance.
The Quest for a Simple Program
Another misstep on the way to developing better benchmarking methods was the use of toy programs as benchmarks. Such programs typically have between 10 and 100 lines of code and produce a result the user already knows before running the toy program. Programs like the Sieve of Eratosthenes, Puzzle, and Quicksort were popular because they are small, easy to compile, and run on almost any computer. These programs became quite popular in the early 1980s, when universities were engaged in designing the early RISC machines. The small size of these programs made it easy to compile and run them on simulators. Unfortunately, we have to admit that we played a role in popularizing such benchmarks, by using them to compare performance and even collecting sets of such programs for distribution. Even more unfortunately, some people continue to use such benchmarks, much to our embarrassment! However, we can report that we have learned our lesson and we now understand that the best use of such programs is as beginning programming assignments.
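For a sense of scale, here is a minimal sketch of the sort of toy program described above, a Sieve of Eratosthenes of roughly the size mentioned; the limit is arbitrary and the code is ours, not an actual benchmark from that era:

```c
/* A toy benchmark of the kind described: small, easy to compile, and with
 * a result the user already knows before running it. Limit is arbitrary. */
#include <stdio.h>

#define LIMIT 8192

int main(void) {
    static char composite[LIMIT + 1];   /* zero-initialized: all "prime" */
    int count = 0;
    for (int i = 2; i <= LIMIT; i++) {
        if (!composite[i]) {
            count++;
            for (int j = 2 * i; j <= LIMIT; j += i)
                composite[j] = 1;       /* mark multiples as composite */
        }
    }
    printf("%d primes up to %d\n", count, LIMIT);
    return 0;
}
```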
Summarizing Can Be Tricky
Almost every issue that involves measuring and reporting performance has been controversial, including the question of how to summarize performance. The methods used have included the arithmetic mean of normalized performance, the harmonic mean of rates, the geometric mean of normalized execution time, and the total execution time. Several references listed at the end of this section discuss this question, including Smith's [1988] article, whose proposal is the approach used in section 2.5.
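To state the four options concretely (the notation here is ours, not the book's): with execution times $T_i$ on the machine under test, times $T_{\mathrm{ref},i}$ on a reference machine, and rates $R_i$, the summaries are

$$
\text{total time} = \sum_{i=1}^{n} T_i, \qquad
\text{GM of normalized times} = \left( \prod_{i=1}^{n} \frac{T_i}{T_{\mathrm{ref},i}} \right)^{1/n},
$$
$$
\text{AM of normalized performance} = \frac{1}{n} \sum_{i=1}^{n} \frac{T_{\mathrm{ref},i}}{T_i}, \qquad
\text{HM of rates} = \frac{n}{\sum_{i=1}^{n} 1/R_i}.
$$

The choice matters because the means weight the programs differently; only total execution time (or a weighted average of times) tracks how long the whole workload actually takes.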
SPECulating about Performance
An important advance in performance evaluation was the formation of the System Performance Evaluation Cooperative (SPEC) group in 1988. SPEC comprises representatives of many computer companies (the founders being Apollo/Hewlett-Packard, DEC, MIPS, and Sun) who have agreed on a set of real programs and inputs that all will run. It is worth noting that SPEC couldn't have come into being before portable operating systems and the popularity of high-level languages. Now compilers, too, are accepted as a proper part of the performance of computer systems and must be measured in any evaluation.
History teaches us that while the SPEC effort may be useful with current computers, it will not meet the needs of the next generation without changing. In 1991, a throughput measure was added, based on running multiple versions of the benchmark. It is most useful for evaluating timeshared usage of a uniprocessor or a multiprocessor. Other system benchmarks that include OS-intensive and I/O-intensive activities have also been added. Another change, motivated in part by the kind of results shown in Figure 2.3, was the decision to drop matrix300 and to add more benchmarks. One result of the difficulty in finding benchmarks was that the initial version of the SPEC benchmarks (called SPEC89) contained six floating-point benchmarks but only four integer benchmarks. Calculating a single summary measurement using the geometric mean of execution times normalized to a VAX-11/780 meant that this measure favored machines with strong floating-point performance.
In 1992, a new benchmark set (called SPEC92) was introduced. It incorporated additional benchmarks, dropped matrix300, and provided separate means (SPECint and SPECfp) for integer and floating-point programs. In addition, the SPECbase measure, which disallows program-specific optimization flags, was added to provide users with a performance measurement that would more closely match what they might experience on their own programs. The SPECfp numbers show the largest increase versus the base SPECfp measurement, typically ranging from 15% to 30% higher.
In 1995, the benchmark set was once again updated, adding some new integer and floating-point benchmarks, as well as removing some benchmarks that suffered from flaws or had running times that had become too small given the factor of 20 or more performance improvement since the first SPEC release. SPEC95 also changed the base machine for normalization to a Sun SPARCstation 10/40, since operating versions of the original base machine were becoming difficult to find!
SPEC has also added additional benchmark suites beyond the original suites targeted at CPU performance. The SDM (Systems Development Multitasking) benchmark contains two benchmarks that are synthetic versions of development workloads (edits, compiles, executions, system commands). The SFS (System-level File Server) benchmark set is a synthetic workload for testing performance as a file server. Both these benchmark sets include significant I/O and operating system components, unlike the CPU tests. The most recent addition to SPEC is the SPEChpc96 suite, two benchmarks aimed at testing performance on high-end scientific workloads. For the future, SPEC is exploring benchmarks for new functions, such as Web servers.
Creating and developing such benchmark sets has become difficult and time consuming. Although SPEC was initially created as a good faith effort by a group of companies, it became important to competitive marketing and sales efforts. The selection of benchmarks and the rules for running them are made by representatives of the companies that compete by advertising test results....
Computer Organization and Design Online

| Chapter | Title | Page |
| --- | --- | --- |
| Ch. 1 | Computer Abstractions and Technology | 2 |
| Ch. 2 | The Role of Performance | 52 |
| Ch. 3 | Instructions: Language of the Machine | 104 |
| Ch. 4 | Arithmetic for Computers | 208 |
| Ch. 5 | The Processor: Datapath and Control | 336 |
| Ch. 6 | Enhancing Performance with Pipelining | 434 |
| Ch. 7 | Large and Fast: Exploiting Memory Hierarchy | 538 |
| Ch. 8 | Interfacing Processors and Peripherals | 636 |
| Appendix A | Assemblers, Linkers, and the SPIM Simulator | A-2 |
| Appendix B | The Basics of Logic Design | B-2 |
| Appendix C | Mapping Control to Hardware | C-2 |
The text focuses on the boundary between hardware and software and explores the levels of hardware in the vicinity of this boundary. This boundary is captured in a computer's architecture specification. It is a critical boundary for a successful computer product: an architect must define an interface that can be efficiently implemented by hardware and efficiently targeted by compilers. The interface must be able to retain these efficiencies for many generations of hardware and compiler technology, much of which will be unknown at the time the architecture is specified. This boundary is central to the discipline of computer design: it is where compilation (in software) ends and interpretation (in hardware) begins.
This book builds on introductory programming skills to introduce the concepts of assembly language programming and the tools needed for this task: the assembler, linker, and loader. Once these prerequisites are completed, the remainder of the book explores the first few levels of hardware below the architectural interface. The basic concepts are motivated and introduced with clear and intuitive examples, then elaborated into the "real stuff" used in today's microprocessors. For example, doing the laundry is used as an analogy in Chapter 6 to explain the basic concepts of pipelining, a key technique used in all modern computers. In Chapter 4, algorithms for the basic floating-point arithmetic operators such as addition, multiplication, and division are first explained in decimal, then in binary, and finally they are elaborated into the best-known methods used for high-speed arithmetic in today's computers.
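As a flavor of that decimal-first presentation (the operands and the four-significant-digit assumption here are chosen only for illustration, not quoted from the book), a floating-point addition aligns the smaller exponent to the larger, adds the significands, renormalizes, and rounds:

$$
9.999 \times 10^{1} + 1.610 \times 10^{-1}
= 9.999 \times 10^{1} + 0.016 \times 10^{1}
= 10.015 \times 10^{1}
= 1.0015 \times 10^{2}
\approx 1.002 \times 10^{2}.
$$

The binary algorithms follow the same steps, with base-2 exponents and significands.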
New to this edition are sections in each chapter entitled "Real Stuff." These sections describe how the concepts from the chapter are implemented in commercially successful products. These provide relevant, tangible examples of the concepts and reinforce their importance. As an example, the Real Stuff in Chapter 6, Enhancing Performance with Pipelining, provides an overview of a dynamically scheduled pipeline as implemented in both the IBM/Motorola PowerPC 604 and Intel's Pentium Pro microprocessor.
The history of computing is woven as a thread throughout the book to reward the reader with a glimpse of key successes from the brief history of this young discipline. The other side of history is reported in the Fallacies and Pitfalls section of each chapter. Since we can learn more from failure than from success, these sections provide a wealth of learning!
The authors are two of the most admired teachers, researchers, and practitioners of the art of computer design today. John Hennessy has straddled both sides of the hardware/software boundary, providing technical leadership for the legendary MIPS compiler as well as the MIPS hardware products through many generations. David Patterson was one of the original RISC proponents: he coined the acronym RISC, evangelized the case for RISC, and served as a key consultant on Sun Microsystems' SPARC line of processors. Continuing his talent for marketable acronyms, his next breakthrough was RAID (Redundant Arrays of Inexpensive Disks), which revolutionized the disk storage industry for large data servers, and then NOW (Networks of Workstations).
Like other great "software" products, this second edition went through an extensive beta testing program: 13 beta sites tested the draft manuscript in classes to "debug" the text. Changes from this testing have been incorporated into the "production" version.
Patterson and Hennessy have succeeded in taking the first edition of their excellent introductory textbook on computer design and making it even better. This edition retains all of the good points of the original, yet adds significant new content and some minor enhancements. What results is an outstanding introduction to the exciting field of computer design.
Posted October 7, 2003
I use this book as a reference in my technical writing. I recommend it to everyone who has a basic assembly language programming background and wants to understand everything behind how machine language operation codes are decoded. The authors build from scratch (and you learn from scratch): 1) how to build a complete Arithmetic and Logic Unit (ALU), from basic logic gates up to more advanced topics such as ripple carry; 2) how to build a complete control unit to guide the ALU, including microprogrammed versus hardwired control implementations; and 3) assembly language examples for programming the control unit. It is a good technical book in this area. Complement it with a review of the assembly language programming presented in 'The Art of Computer Programming', Volume 1, by Donald Knuth (and, if you need more examples of low-level programming, Volume 3, 'Sorting and Searching'). This is a very good study track.
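(For readers who haven't met the term the reviewer mentions: a ripple-carry adder chains one-bit full adders so that each stage's carry-out feeds the next stage's carry-in. A minimal C simulation of the idea, ours rather than code from the book, is sketched below.)

```c
/* Minimal simulation of a 4-bit ripple-carry adder built from one-bit
 * full adders; illustrative only, not taken from the book. */
#include <stdio.h>

/* one-bit full adder expressed with logic operations */
static void full_adder(int a, int b, int cin, int *sum, int *cout) {
    *sum  = a ^ b ^ cin;
    *cout = (a & b) | (a & cin) | (b & cin);
}

int main(void) {
    int a = 11, b = 6, carry = 0, result = 0;
    for (int i = 0; i < 4; i++) {        /* the carry "ripples" bit by bit */
        int sum;
        full_adder((a >> i) & 1, (b >> i) & 1, carry, &sum, &carry);
        result |= sum << i;
    }
    printf("%d + %d = %d\n", a, b, result | (carry << 4));
    return 0;
}
```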
Posted November 18, 2002
The structure of the book is very well planned. One topic leads logically to the next, building on the skills from the prior one. I liked how the authors frequently step away from the details to tie up the loose ends and ensure comprehension of the "big picture". I would have rated it a 4; however, I was a bit disappointed with the coverage of assembly language programming. The book offers plenty of detail on the instruction set but lacks direction on writing functional programs, yet challenges the reader to do so in the exercises.
Posted April 27, 2001
I just used this book for a Computer Architecture course, and I thought it was excellent. The examples are relevant, it is clear and coherent, and the questions are good. Any professors out there looking for a book to use for a computer architecture class should give this one a try.
Posted December 11, 2000
I love this book. One chapter shows you how to build a processor. It starts with a register, connects it to an adder, and so on, step by step, until you have a completely functioning MIPS-like CPU. After reading that chapter I immediately ran to my computer and started designing a little processor using the Verilog design language. The next chapter was even more exciting: I learned how pipelining works. This book is not just technical information; it also offers interesting stories from the history of processor design and computing. You will read how it started, how pipelining and parallel execution were invented, and about industry veterans and the story of supercomputers. Downsides: the chapter covering I/O is not very interesting, and the chapter about the instruction set is too long a discussion of a relatively simple subject. But that does not matter: this book is a must-have for any hardware or system engineer or student.