Introduction to Formal Hardware Verification / Edition 1


Product Details
| ISBN-13: | 9783642084775 |
| --- | --- |
| ISBN-10: | 364208477X |
| Publisher: | Springer Berlin Heidelberg |
| Publication date: | 12/01/2010 |
| Edition description: | Softcover reprint of hardcover 1st ed. 1999 |
| Pages: | 299 |
| Product dimensions: | 6.10(w) x 9.25(h) x 0.24(d) |
Read an Excerpt
1. Introduction
1.1 Setting the Context
Our society is increasingly influenced by technical systems. Some critical areas of our lives even depend vitally on them, ranging from traffic control and medicine, where human life is directly affected, to international currency and stock exchange markets. In the vast majority of such systems, digital hardware plays a crucial role, e.g., in the form of controllers or processors. The fact that we are increasingly dependent on these systems has been expressed by J.-C. Laprie in the form of the following question:
Do we have enough confidence in computer systems that we let them handle our most valuable goods, namely our life and our money?
This leads directly to the notion of dependability, defined as follows [Cart82]:
Computer system dependability is the trustworthiness of the delivered service such that reliance can justifiably be placed in this service.
As this is a very vague definition, various people have tried to make it more precise and formal. It is widely accepted that dependability subsumes the following four attributes [Lapr92]:
- Reliability: the continuity of correct service
- Availability: the readiness for correct service
- Safety: the absence of catastrophic consequences for the users and the environment
- Security: the prevention of unauthorized access to or handling of information
Security is usually achieved using cryptography, which is not treated in this book. The other three attributes are directly related to system faults, which are the primary cause of unanticipated system behavior. And this is the point where formal hardware verification comes into play.
1.1.1 Faults and the Design Cycle
To avoid a system failure, we either have to show that the system does not contain faults (fault prevention), or the system has to be designed in such a way that at least certain kinds of faults can be compensated for (fault tolerance) or, in case of a failure, that no fatal consequences can occur (fail-safe operation). The different approaches directly relate to the sources of faults. As we mainly treat (digital) hardware designs in this book, restricting the further analysis to such systems allows us to make more precise statements about fault sources. Empirical studies have shown that more than 50% of all ASICs (application-specific integrated circuits) do not work properly after initial design and fabrication [Keut91]. Increasing circuit complexity, interface problems, wrong specifications, and incorrectly working design tools (and designers) are obvious reasons. Depending on the phase of the design cycle, different types of faults occur.
Fault Types. System failures result from different types of faults. We distinguish:
Design faults result from an erroneous transformation of a design specification into the layout description, the latter being the basis of fabrication. Hardware verification targets these faults; hence, they will be described in more detail later.
Fabrication faults, resulting from layout defects during the fabrication process, may lead to incorrect behavior of the circuit even if the design itself was flawless. To find fabrication faults, circuits have to be tested. This is performed by applying test pattern sequences which distinguish a correct circuit from a faulty one (a small illustrative sketch follows this list). These sequences are usually generated automatically from the circuit description and suitable fault models (ATPG, automatic test pattern generation). Since for large circuits the number of test patterns necessary to achieve sufficient fault coverage may become prohibitively high, additional circuitry may become necessary. These additional parts either reduce the number of test patterns (e.g., by a scan-path design) or even allow a self-test of the circuit (BIST, built-in self-test). However, it is almost impossible to unequivocally identify every circuit containing fabrication faults. A test procedure is considered good if fewer than five bad chips out of one million are delivered [Keut91]. Test algorithms are also used to check complete systems after assembly. Fabrication faults are not treated in the following, as the techniques for finding them differ significantly from the approaches used to find design faults.
Faults during usage often arise only after a certain usage time. Potential causes are aging, wearout, and human mistakes during operation or maintenance. In this book we do not treat these cases either.
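To make the test-pattern idea from the fabrication-fault item above concrete, here is a minimal Python sketch. It is our own illustration, not an example from the book: the toy circuit and the injected single stuck-at-0 fault are assumptions. The sketch brute-forces all input patterns and keeps those whose output distinguishes the fault-free circuit from the faulty copy:

```python
from itertools import product

def good_circuit(a, b, c):
    # Fault-free reference circuit: y = (a AND b) OR c
    return (a & b) | c

def faulty_circuit(a, b, c):
    # Same circuit with a single stuck-at-0 fault on the AND-gate
    # output, a standard fault model used in ATPG.
    and_out = 0  # the AND output is tied to logic 0 by the fault
    return and_out | c

def find_test_patterns():
    """Brute-force 'test pattern generation': keep every input pattern
    whose output distinguishes the good circuit from the faulty one."""
    return [pattern for pattern in product((0, 1), repeat=3)
            if good_circuit(*pattern) != faulty_circuit(*pattern)]

if __name__ == "__main__":
    print(find_test_patterns())  # [(1, 1, 0)] -- expected 1, observed 0
```

Real ATPG tools cannot afford such exhaustive enumeration; they derive patterns with structural algorithms such as the D-algorithm or PODEM. Even so, the resulting pattern counts can become prohibitively high for large circuits, which is what motivates the scan-path and BIST hardware mentioned above.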
Design and fabrication faults occur prior to the actual system usage and hence can, in principle, be detected prior to customer delivery. In contrast, faults during usage cannot be prevented. Thus, we have to use fault tolerance approaches to prevent these faults from having catastrophic effects on reliability, availability, and safety (Fig. 1-1); a minimal sketch of one such approach follows.
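As a deliberately simplified illustration of fault tolerance (again our own sketch, not an example from the book), the following Python fragment shows the idea behind triple modular redundancy (TMR): three redundant module instances compute the same result, and a majority voter masks a wrong output from any single instance:

```python
def majority(a, b, c):
    # Bitwise 2-out-of-3 majority vote: the result is correct
    # whenever at most one of the three inputs is wrong.
    return (a & b) | (a & c) | (b & c)

def tmr_vote(replica_outputs):
    """Combine the outputs of three redundant module instances."""
    r1, r2, r3 = replica_outputs
    return majority(r1, r2, r3)

if __name__ == "__main__":
    correct = 0b1010
    corrupted = 0b1110  # one replica suffers a fault during usage
    assert tmr_vote((correct, correct, corrupted)) == correct
    print("fault in a single replica masked by the majority voter")
```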
1.1.2 Avoiding Design Errors in Hardware Designs
Besides the general dependability issues explained at the beginning, financial and market considerations specific to hardware systems also require flawlessly working circuits:
If an error is found in software, it may be eliminated easily by recompiling the corrected program. If a chip has already been fabricated, on the other hand, error correction not only means correcting the design description: the new version also has to be fabricated again. Obviously this is much more expensive than just restarting a software compilation. If the faulty chip has already been delivered to customers, a costly replacement may become necessary in addition to the fabrication expenses, whereas in the case of software a simple "bug fix patch" can be sent out. This situation occurred with the famous FDIV bug in the floating point unit of Intel's PENTIUM processor, where about 475 million US dollars had to be spent by the company to replace the faulty processors. These difficulties in correcting design flaws after delivery also have a direct consequence for quality expectations. For hardware they are considerably higher than for most software products, where, e.g., even for leading office software (word processing, spreadsheet, and presentation applications), bugs are not only tolerated but even anticipated.
A last point concerns the turnaround time. Design faults directly affect the development time for a new product. Even if faults are detected prior to fabrication, the time for finding and correcting bugs may be quite high, delaying the market introduction of the product. Typical values for the turnaround time caused only by the emulation effort after redesign are given in Fig. 1-3. Studies show that best-in-class companies bring products to market in half the time of the rest [PMTM95]. A conservative estimation based on total revenue and product life-span shows that, e.g., for a high-end microprocessor a delay of one week equals a revenue loss of at least 20 million US dollars.
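To see where a figure of this magnitude comes from, here is a back-of-the-envelope check in Python; the revenue and life-span figures are illustrative assumptions of ours, not numbers from the cited study:

```python
# Back-of-the-envelope estimate (assumed figures, not from [PMTM95]):
# if a high-end microprocessor earns its total revenue over a roughly
# two-year market window, each week of delay forgoes about 1/100 of it.
total_revenue_usd = 2_000_000_000   # assumed lifetime revenue
product_life_weeks = 100            # assumed ~2-year market window
loss_per_week = total_revenue_usd / product_life_weeks
print(f"revenue lost per week of delay: ${loss_per_week:,.0f}")
# -> revenue lost per week of delay: $20,000,000
```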
Thus, a considerable amount of time during the design process is spent on finding errors, usually by simulation and emulation. A typical effort distribution is shown in Fig. 1-2. Hence, every approach that helps reduce this time has a considerable influence on the economic success of a product...