Computer Architecture: A Quantitative Approach (3rd Edition)

Overview


This best-selling title, considered for over a decade to be essential reading for every serious student and practitioner of computer design, has been updated throughout to address the most important trends facing computer designers today. In this edition, the authors bring their trademark method of quantitative analysis not only to high performance desktop machine design, but also to the design of embedded and server systems. They have illustrated their principles with designs from all three of these domains, including examples from consumer electronics, multimedia and web technologies, and high performance computing.


The book retains its highly rated features: Fallacies and Pitfalls, which share the hard-won lessons of real designers; Historical Perspectives, which provide a deeper look at computer design history; Putting It All Together, which presents a design example that illustrates the principles of the chapter; Worked Examples, which challenge the reader to apply the concepts, theories and methods in smaller-scale problems; and Cross-Cutting Issues, which show how the ideas covered in one chapter interact with those presented in others. In addition, a new feature, Another View, presents brief design examples in one of the three domains other than the one chosen for Putting It All Together.


The authors present a new organization of the material as well, reducing the overlap with their other text, Computer Organization and Design: The Hardware/Software Interface, 2/e, and offering more in-depth treatment of advanced topics in multithreading, instruction-level parallelism, VLIW architectures, memory hierarchies, storage devices, and network technologies.


Also new to this edition is the adoption of MIPS64 as the instruction set architecture. In addition to several online appendixes, two new appendixes will be printed in the book: one contains a complete review of the basic concepts of pipelining, and the other provides solutions to a selection of the exercises. Both will be invaluable to the student or professional learning on her own or in the classroom.


Hennessy and Patterson continue to focus on fundamental techniques for designing real machines and for maximizing their cost/performance.

* Presents state-of-the-art design examples including:
* IA-64 architecture and its first implementation, the Itanium
* Pipeline designs for Pentium III and Pentium 4
* The cluster that runs the Google search engine
* EMC storage systems and their performance
* Sony PlayStation 2
* InfiniBand, a new storage area and system area network
* Sun Fire 6800 multiprocessor server and its processor, the UltraSPARC III
* TriMedia TM32 media processor and the Transmeta Crusoe processor

* Examines quantitative performance analysis in the commercial server market and the embedded market, as well as the traditional desktop market.
* Updates all the examples and figures with the most recent benchmarks, such as SPEC 2000.
* Expands coverage of instruction sets to include descriptions of digital signal processors, media processors, and multimedia extensions to desktop processors.
* Analyzes capacity, cost, and performance of disks over two decades.
* Surveys the role of clusters in scientific computing and commercial computing.
* Presents a survey, taxonomy, and the benchmarks of errors and failures in computer systems.
* Presents detailed descriptions of the design of storage systems and of clusters.
* Surveys memory hierarchies in modern microprocessors and the key parameters of modern disks.
* Presents a glossary of networking terms.


Editorial Reviews

From Barnes & Noble
The Barnes & Noble Review
Not for the newbie, this book is written for individuals who are far beyond the primer stage in designing modern computer systems. A signpost in computer architecture (CA) pointing from the past to the future, this well-organized book brims with computer design information and both cutting-edge and historical material, covering everything from fundamentals to pipelining and even featuring the block diagram of Sony PlayStation 2 with regard to embedded systems.

Though the book is designed with the classroom in mind (the authors are, after all, university professors), there are also many hands-on exercises at the end of each chapter. One feature found in most chapters, "Crosscutting Issues," is used to tie in ideas from the chapter with other cutting-edge ideas presented throughout the book. "Putting It All Together" is a section that shows how the ideas actually work. "Fallacies and Pitfalls" contains good examples of how you might err. But even if your student days are over, Computer Architecture: A Quantitative Approach works well as a reference and refresher on current CA design technologies, as well as innovative ideas on how to use its functional requirements and features.

That said, how you read this book depends on your computing philosophy, i.e., whether you believe that the processor is the key to computer design or that the processor is secondary. The authors recognize this, spelling out different approaches to the material in the Preface, in the section titled "Navigating the Text." The addition of a glossary at the end of each chapter would have made this book the ultimate reference guide to CA design, but as it stands, Computer Architecture is still a must-have for anyone in the field. John Vacca

John Vacca, the former computer security official (CSO) for NASA's space station program (Freedom), has written 38 books about advanced storage, computer security, and aerospace technology.



Meet the Author


David A. Patterson (University of California at Berkeley) has been teaching computer architecture at the University of California, Berkeley, since joining the faculty in 1977, and he holds the Pardee Chair of Computer Science. His teaching has been honored by the ACM, the IEEE, and the University of California. Patterson has also received the 1995 IEEE Technical Achievement Award for contributions to RISC and the 1999 IEEE Reynold B. Johnson Information Storage Award for contributions to RAID. He is a member of the National Academy of Engineering and is a fellow of both the ACM and the IEEE. In the past, he has been chair of the CS division in the EECS department at Berkeley, the ACM SIG in computer architecture, and the Computing Research Association.

At Berkeley, Patterson led the design and implementation of RISC I, likely the first VLSI Reduced Instruction Set Computer. This research became the foundation of the SPARC architecture, currently used by Sun Microsystems, Fujitsu, and others. He was also a leader of the Redundant Arrays of Inexpensive Disks (RAID) project, which led to high-performance storage systems from many companies. These projects earned three dissertation awards from the ACM. His current research interests are in building novel microprocessors using Intelligent DRAM (IRAM) and he currently consults for Sun, where he holds the title of Chief Scientist of the Network Attached Storage Division.

John L. Hennessy teaches computer architecture at Stanford University, where he has been a member of the faculty since 1977. He is currently Dean of the School of Engineering and the Frederick Emmons Terman Professor of Engineering. Hennessy is a fellow of the IEEE and ACM, a member of the National Academy of Engineering, and a fellow of the American Academy of Arts and Sciences. He received the 1994 IEEE Piore Award for his contributions to the development of RISC technology.

Hennessy's original research group at Stanford developed several of the techniques now in commercial use for optimizing compilers. In 1981, he started the MIPS project at Stanford with a handful of graduate students. After completing the project in 1984, he took a one-year leave from the university to co-found MIPS Computer Systems, which developed one of the first commercial RISC microprocessors. After being acquired by Silicon Graphics in 1991, MIPS Technologies became an independent company in 1998, focusing on microprocessors for the embedded marketplace. As of 1998, over 100 million MIPS microprocessors have been shipped in devices ranging from video games and palmtop computers to laser printers and network switches. Hennessy's recent research at Stanford focuses on the area of designing and exploiting multiprocessors. Recently, he has been involved in the development of the DASH multiprocessor architecture, the first distributed shared-memory multiprocessors supporting cache coherency, and the basis for several commercial multiprocessor designs.


Read an Excerpt

Chapter 1: Fundamentals of Computer Design
And now for something completely different.
-Monty Python’s Flying Circus

Computer technology has made incredible progress in the roughly 55 years since the first general-purpose electronic computer was created. Today, less than a thousand dollars will purchase a personal computer that has more performance, more main memory, and more disk storage than a computer bought in 1980 for 1 million dollars. This rapid rate of improvement has come both from advances in the technology used to build computers and from innovation in computer design.

Although technological improvements have been fairly steady, progress arising from better computer architectures has been much less consistent. During the first 25 years of electronic computers, both forces made a major contribution; but beginning in about 1970, computer designers became largely dependent upon integrated circuit technology. During the 1970s, performance continued to improve at about 25% to 30% per year for the mainframes and minicomputers that dominated the industry.

The late 1970s saw the emergence of the microprocessor. The ability of the microprocessor to ride the improvements in integrated circuit technology more closely than the less integrated mainframes and minicomputers led to a higher rate of improvement—roughly 35% growth per year in performance.

This growth rate, combined with the cost advantages of a mass-produced microprocessor, led to an increasing fraction of the computer business being based on microprocessors. In addition, two significant changes in the computer marketplace made it easier than ever before to be commercially successful with a new architecture. First, the virtual elimination of assembly language programming reduced the need for object-code compatibility. Second, the creation of standardized, vendor-independent operating systems, such as UNIX and its clone, Linux, lowered the cost and risk of bringing out a new architecture.

These changes made it possible to successfully develop a new set of architectures, called RISC (Reduced Instruction Set Computer) architectures, in the early 1980s. The RISC-based machines focused the attention of designers on two critical performance techniques, the exploitation of instruction-level parallelism (initially through pipelining and later through multiple instruction issue) and the use of caches (initially in simple forms and later using more sophisticated organizations and optimizations). The combination of architectural and organizational enhancements has led to 20 years of sustained growth in performance at an annual rate of over 50%. Figure 1.1 shows the effect of this difference in performance growth rates.

The effect of this dramatic growth rate has been twofold. First, it has significantly enhanced the capability available to computer users. For many applications, the highest-performance microprocessors of today outperform the supercomputer of less than 10 years ago.

Second, this dramatic rate of improvement has led to the dominance of microprocessor-based computers across the entire range of the computer design. Workstations and PCs have emerged as major products in the computer industry. Minicomputers, which were traditionally made from off-the-shelf logic or from gate arrays, have been replaced by servers made using microprocessors. Mainframes have been almost completely replaced with multiprocessors consisting of small numbers of off-the-shelf microprocessors. Even high-end supercomputers are being built with collections of microprocessors.

Freedom from compatibility with old designs and the use of microprocessor technology led to a renaissance in computer design, which emphasized both architectural innovation and efficient use of technology improvements. This renaissance is responsible for the higher performance growth shown in Figure 1.1—a rate that is unprecedented in the computer industry. This rate of growth has compounded so that by 2001, the difference between the highest-performance microprocessors and what would have been obtained by relying solely on technology, including improved circuit design, was about a factor of 15.
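As a hedged illustration of how such a gap arises from compounding (the annual growth rates and the year span below are assumptions chosen to echo the rough figures in the text, not the measured SPEC data behind Figure 1.1), a short calculation shows the mechanics:

```python
# Illustrative sketch only: how two compounded annual growth rates open up a
# cumulative performance gap. The rates and span are assumptions echoing the
# text (~35%/yr from technology alone vs. >50%/yr with architectural
# innovation); the book's "factor of 15" comes from measured SPEC data, so
# the exact gap computed here differs.

def cumulative_growth(annual_rate: float, years: int) -> float:
    """Total performance multiple after compounding annual_rate for the given years."""
    return (1.0 + annual_rate) ** years

years = 16                                   # assumed span, roughly mid-1980s to 2001
tech_only = cumulative_growth(0.35, years)   # technology scaling alone
with_arch = cumulative_growth(0.55, years)   # assumed rate including architectural innovation

print(f"technology only : {tech_only:7.1f}x")
print(f"with innovation : {with_arch:7.1f}x")
print(f"gap             : {with_arch / tech_only:.1f}x")
```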

Figure 1.1 Growth in microprocessor performance since the mid-1980s has been substantially higher than in earlier years as shown by plotting SPECint performance. This chart plots relative performance as measured by the SPECint benchmarks with base of one being a VAX 11/780. Since SPEC has changed over the years, performance of newer machines is estimated by a scaling factor that relates the performance for two different versions of SPEC (e.g., SPEC92 and SPEC95). Prior to the mid-1980s, microprocessor performance growth was largely technology driven and averaged about 35% per year. The increase in growth since then is attributable to more advanced architectural and organizational ideas. By 2001 this growth led to a difference in performance of about a factor of 15. Performance for floating-point-oriented calculations has increased even faster.

In the last few years, the tremendous improvement in integrated circuit capability has allowed older, less-streamlined architectures, such as the x86 (or IA-32) architecture, to adopt many of the innovations first pioneered in the RISC designs. As we will see, modern x86 processors basically consist of a front end that fetches and decodes x86 instructions and maps them into simple ALU, memory access, or branch operations that can be executed on a RISC-style pipelined processor. Beginning in the late 1990s, as transistor counts soared, the overhead (in transistors) of interpreting the more complex x86 architecture became negligible as a percentage of the total transistor count of a modern microprocessor.
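A hypothetical sketch of that front-end translation follows; the micro-operation names and their effects are invented for illustration and do not correspond to any vendor's actual internal encoding:

```python
# Hypothetical illustration of the front end described above: one
# memory-destination IA-32 instruction broken into simple RISC-style
# micro-operations. Names and effects are invented for clarity only.

x86_instruction = "add dword ptr [ebx+8], eax"   # read-modify-write on memory

micro_ops = [
    ("LOAD",  "tmp0 <- mem[ebx + 8]"),   # memory-access operation
    ("ADD",   "tmp0 <- tmp0 + eax"),     # ALU operation
    ("STORE", "mem[ebx + 8] <- tmp0"),   # memory-access operation
]

print(x86_instruction)
for kind, effect in micro_ops:
    print(f"  {kind:6s} {effect}")
```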

This text is about the architectural ideas and accompanying compiler improvements that have made this incredible growth rate possible. At the center of this dramatic revolution has been the development of a quantitative approach to computer design and analysis that uses empirical observations of programs, experimentation, and simulation as its tools. It is this style and approach to computer design that is reflected in this text.

Sustaining the recent improvements in cost and performance will require continuing innovations in computer design, and we believe such innovations will be founded on this quantitative approach to computer design. Hence, this book has been written not only to document this design style, but also to stimulate you to contribute to this progress.


Table of Contents

Foreword
Preface
Acknowledgments
Chapter 1 - Fundamentals of Computer Design
Chapter 2 - Instruction Set Principles and Examples
Chapter 3 - Instruction-Level Parallelism and Its Dynamic Exploitation
Chapter 4 - Exploiting Instruction-Level Parallelism with Software Approaches
Chapter 5 - Memory Hierarchy Design
Chapter 6 - Multiprocessors and Thread-Level Parallelism
Chapter 7 - Storage Systems
Chapter 8 - Interconnection Networks and Clusters
Appendix A - Pipelining: Basic and Intermediate Concepts
Appendix B - Solutions to Selected Exercises
Online Appendices
Appendix C - A Survey of RISC Architectures for Desktop, Server, and Embedded Computers
Appendix D - An Alternative to RISC: The Intel 80x86
Appendix E - Another Alternative to RISC: The VAX Architecture
Appendix F - The IBM 360/370 Architecture for Mainframe Computers
Appendix G - Vector Processors (revised by Krste Asanovic)
Appendix H - Computer Arithmetic (by David Goldberg)
Appendix I - Implementing Coherence Protocols
References
Index

Foreword

by Bill Joy, Chief Scientist and Corporate Executive Officer,
Sun Microsystems, Inc.

I am very lucky to have studied computer architecture under Prof. David Patterson at U.C. Berkeley more than 20 years ago. I enjoyed the courses I took from him, in the early days of RISC architecture. Since leaving Berkeley to help found Sun Microsystems, I have used the ideas from his courses and many more that are described in this important book.

The good news today is that this book covers incredibly important and contemporary material. The further good news is that much exciting and challenging work remains to be done, and that working from Computer Architecture: A Quantitative Approach is a great way to start.

The most successful architectural projects that I have been involved in have always started from simple ideas, with advantages explainable using simple numerical models derived from hunches and rules of thumb. The continuing rapid advances in computing technology and new applications ensure that we will need new similarly simple models to understand what is possible in the future, and that new classes of applications will stress systems in different and interesting ways. The quantitative approach introduced in Chapter 1 is essential to understanding these issues. In particular, we expect to see, in the near future, much more emphasis on minimizing power to meet the demands of a given application, across all sizes of systems; much remains to be learned in this area.

I have worked with many different instruction sets in my career. I first programmed a PDP-8, whose instruction set was so simple that a friend easily learned to disassemble programs just by glancing at the hole punches in paper tape! I wrote a lot of code in PDP-11 assembler, including an interpreter for the Pascal programming language and for the VAX (which was used as an example in the first edition of this book); the success of the VAX led to the widespread use of UNIX on the early Internet.

The PDP-11 and VAX were very conventional complex instruction set (CISC) computer architectures, with relatively compact instruction sets that proved nearly impossible to pipeline. For a number of years in public talks I used the performance of the VAX 11/780 as the baseline; its speed was extremely well known because faster implementations of the architecture were so long delayed. VAX performance stalled out just as the x86 and 680x0 CISC architectures were appearing in microprocessors; the strong economic advantages of microprocessors led to their overwhelming dominance. Then the simpler reduced instruction set (RISC) computer architectures—pioneered by John Cocke at IBM; promoted and named by Patterson and Hennessy; and commercialized in PowerPC, MIPS, and SPARC—were implemented as microprocessors and permitted high-performance pipeline implementations through the use of their simple register-oriented instruction sets. A downside of RISC was the larger code size of programs and resulting greater instruction fetch bandwidth, a cost that could be seen to be acceptable using the techniques of Chapter 1 and by believing in the future CMOS technology trends promoted in the now-classic views of Carver Mead. The kind of clear-thinking approach to the present problems and to the shape of future computing advances that led to RISC architecture is the focus of this book.
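As a hedged illustration of that Chapter 1 style of analysis (the instruction counts, CPIs, and cycle times below are invented for illustration), the classic performance equation, CPU time = instruction count × CPI × cycle time, shows how a program with more, simpler instructions can still run faster:

```python
# Hedged, illustrative application of the CPU performance equation
# (CPU time = instruction count * CPI * cycle time) to the RISC code-size
# trade-off described above. All numbers are invented for illustration.

def cpu_time_seconds(instruction_count: float, cpi: float, cycle_time_ns: float) -> float:
    """Seconds = IC * CPI * cycle time (cycle time given in nanoseconds)."""
    return instruction_count * cpi * cycle_time_ns * 1e-9

base_ic = 1.0e9                                                      # assumed CISC dynamic instruction count
cisc = cpu_time_seconds(base_ic, cpi=6.0, cycle_time_ns=10.0)        # hard-to-pipeline CISC (assumed CPI)
risc = cpu_time_seconds(base_ic * 1.3, cpi=1.5, cycle_time_ns=10.0)  # ~30% more instructions, but pipelined

print(f"CISC-style: {cisc:5.1f} s   RISC-style: {risc:5.1f} s   speedup: {cisc / risc:.1f}x")
```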

Chapter 2 (and various appendices) presents interesting examples of contemporary and important historical instruction set architecture. RISC architecture—the focus of so much work in the last twenty years—is by no means the final word here. I worked on the design of the SPARC architecture and several implementations for a decade, but more recently have worked on two different styles of processor: picoJava, which implemented most of the Java Virtual Machine instructions—a compact, high-level, bytecoded instruction set—and MAJC, a very simple and multithreaded VLIW for Java and media-intensive applications. These two architectures addressed different and new market needs: for low-power chips to run embedded devices where space and power are at a premium, and for high performance for a given amount of power and cost where parallel applications are possible. While neither has achieved widespread commercial success, I expect that the future will see many opportunities for different ISAs, and an in-depth knowledge of history here often gives great guidance—the relationships between key factors, such as the program size, execution speed, and power consumption, returning to previous balances that led to great designs in the past.

Chapters 3 and 4 describe instruction-level parallelism (ILP): the ability to execute more than one instruction at a time. This has been aided greatly, in the last 20 years, by techniques such as RISC and VLIW (very long instruction word) computing. But as later chapters here point out, both RISC and especially VLIW as practiced in the Intel Itanium architecture are very power intensive. In our attempts to extract more instruction-level parallelism, we are running up against the fact that the complexity of a design that attempts to execute N instructions simultaneously grows like N²: the number of transistors and the number of watts to produce each result increase dramatically as we attempt to execute many instructions of arbitrary programs simultaneously. There is thus a clear countertrend emerging: using simpler pipelines with more realistic levels of ILP while exploiting other kinds of parallelism by running both multiple threads of execution per processor and, often, multiple processors on a single chip. The challenge for designers of high-performance systems of the future is to understand when simultaneous execution is possible, but then to use these techniques judiciously in combination with other, less granular techniques that are less power intensive and complex.
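A rough, back-of-the-envelope illustration of that N² growth (a simplified counting argument with assumed issue widths, not a transistor- or watt-level estimate) is the number of pairwise dependence checks an N-wide issue stage must make each cycle:

```python
# Simplified counting argument for the N^2 scaling described above: to issue
# N instructions in one cycle, the hardware must (at minimum) compare each
# instruction's sources against the destinations of every earlier instruction
# in the same group, roughly N*(N-1)/2 comparisons per cycle.

def pairwise_dependence_checks(issue_width: int) -> int:
    """Earlier-vs-later operand comparisons needed within one issue group."""
    return issue_width * (issue_width - 1) // 2

for n in (1, 2, 4, 8, 16):
    print(f"issue width {n:2d} -> {pairwise_dependence_checks(n):3d} checks per cycle")
```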

In graduate school I would often joke that cache memories were the only great idea in computer science. But truly, where you put things affects profoundly the design of computer systems. Chapter 5 describes the classical design of cache and main memory hierarchies and virtual memory. And now, new, higher-level programming languages like Java support much more reliable software because they insist on the use of garbage collection and array bounds checking, so that security breaches from "buffer overflow" and insidious bugs from false sharing of memory do not creep into large programs. It is only languages, such as Java, that insist on the use of automatic storage management that can implement true software components. But garbage collectors are notoriously hard on memory hierarchies, and the design of systems and language implementations to work well for such areas is an active area of research, where much good work has been done but much exciting work remains.
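A minimal sketch of the kind of memory-hierarchy arithmetic Chapter 5 formalizes appears below; the hit time, miss rate, and miss penalty are assumed, illustrative values rather than measurements of any particular machine:

```python
# Minimal sketch of average memory access time (AMAT) for a single cache
# level: AMAT = hit time + miss rate * miss penalty. All values are assumed
# for illustration, not measurements of any particular machine.

def amat_ns(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time in nanoseconds."""
    return hit_time_ns + miss_rate * miss_penalty_ns

print(f"AMAT = {amat_ns(hit_time_ns=1.0, miss_rate=0.02, miss_penalty_ns=100.0):.1f} ns")
```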

Java also strongly supports thread-level parallelism—a key to simple, power-efficient, and high-performance system implementations that avoids the N² problem discussed earlier but brings challenges of its own. A good foundational understanding of these issues can be had in Chapter 6. Traditionally, each processor was a separate chip, and keeping the various processors synchronized was expensive, both because of its impact on the memory hierarchy and because the synchronization operations themselves were very expensive. The Java language is also trying to address these issues: we tried, in the Java Language Specification, which I coauthored, to write a description of the memory model implied by the language. While this description turned out to have (fixable) technical problems, it is increasingly clear that we need to think about the memory hierarchy in the design of languages that are intended to work well on the newer system platforms. We view the Java specification as a first step in much good work to be done in the future.

As Chapter 7 describes, storage has evolved from being connected to individual computers to being a separate network resource. This is reminiscent of computer graphics, where graphics processing that was previously done in a host processor often became a separate function as the importance of graphics increased. All this is likely to change radically in the coming years—massively parallel host processors are likely to be able to do graphics better than dedicated outboard graphics units, and new breakthroughs in storage technologies, such as memories made from molecular electronics and other atomic-level nanotechnologies, should greatly reduce both the cost of storage and the access time. The resulting dramatic decreases in storage cost and access time will strongly encourage the use of multiple copies of data stored on individual computing nodes, rather than shared over a network. The "wheel of reincarnation," familiar from graphics, will appear in storage.

It is also critical that storage systems, and indeed all systems, become much more robust in the face of failures, not only of hardware, but also of software flaws and human error. This is an enormous challenge in the years ahead.

Chapter 8 provides a great foundational description of computer interconnects and networks. My model of these comes from Andy Bechtolsheim, another of the cofounders of Sun, who famously said, "Ethernet always wins." More modestly stated: given the need for a new networking interconnect, and despite its shortcomings, adapted versions of the Ethernet protocols seem to have met with overwhelming success in the marketplace. Why? Factors such as the simplicity and familiarity of the protocols are obvious, but quite possibly the most likely reason is that the people who are adapting Ethernet can get on with the job at hand rather than arguing about details that, in the end, aren’t dispositive. This lesson can be generalized to apply to all the areas of computer architecture discussed in this book.

One of the things I remember Dave Patterson saying many years ago is that for each new project you only get so many "cleverness beans." That is, you can be very clever in a few areas of your design, but if you try to be clever in all of them, the design will probably fail to achieve its goals—or even fail to work or to be finished at all. The overriding lesson that I have learned in 20 plus years of working on these kinds of designs is that you must choose what is important and focus on that; true wisdom is to know what to leave out. A deep knowledge of what has gone before is key to this ability.

And you must also choose your assumptions carefully. Many years ago I attended a conference in Hawaii (yes, it was a boondoggle, but read on) where Maurice Wilkes, the legendary computer architect, gave a speech. What he said, paraphrased in my memory, is that good research often consists of assuming something that seems untrue or unlikely today will become true and investigating the consequences of that assumption. And if the unlikely assumption indeed then becomes true in the world, you will have done timely and sometimes, then, even great research! So, for example, the research group at Xerox PARC assumed that everyone would have access to a personal computer with a graphics display connected to others by an internetwork and the ability to print inexpensively using Xerography. How true all this became, and how seminally important their work was!

In our time, and in the field of computer architecture, I think there are a number of assumptions that will become true. Some are not controversial, such as that Moore’s Law is likely to continue for another decade or so and that the complexity of large chip designs is reaching practical limits, often beyond the point of positive returns for additional complexity. More controversially, perhaps, molecular electronics is likely to greatly reduce the cost of storage and probably logic elements as well, optical interconnects will greatly increase the bandwidth and reduce the error rates of interconnects, software will continue to be unreliable because it is so difficult, and security will continue to be important because its absence is so debilitating.

Taking advantage of the strong positive trends detailed in this book and using them to mitigate the negative ones will challenge the next generation of computer architects to design a range of systems of many shapes and sizes.

Computer architecture design problems are becoming more varied and interesting. Now is an exciting time to be starting out or reacquainting yourself with the latest in this field, and this book is the best place to start. See you in the chips!


Customer Reviews

  • Anonymous

    Posted November 1, 2006

    Yet another reviewer

    This book is no match for the state of the art books. Things changed a bit from 3rd ed. but not enough to encompass the changes in computer architecture.

  • Anonymous

    Posted June 10, 2003

    Excellent.

    I'm not sure what the other reviewer was talking about. Yes, of course it is superficial if you compare its content to state-of-the-art research publications. I found this book to be excellent for a computer architecture student in his/her early stages of a PhD.

  • Anonymous

    Posted November 20, 2002

    Disappointed.

    I used 3rd edition of this book for my graduate course. The book is toooooooo superficial. Even the solved problems are ambiguous. I WILL NOT recommend this book to anyone.
