From the Publisher
"Engineering a Compiler is a rich survey and exposition of the important techniques necessary to build a modern compiler."
-Jim Larus, Microsoft Research
Engineering a Compiler explores this design space by presenting some of the ways these problems have been solved, and the constraints that made each of those solutions attractive. By presenting the parameters of the problem and their impact on compiler design, the authors convey both the depth of the problems and the breadth of possible solutions. Their goal is to show readers that real tradeoffs exist, and that the impact of those choices can be both subtle and far-reaching.
Audience: Undergraduate Computer Science and Computer Engineering majors.
* CHAPTER OVERVIEW
Compilers are computer programs that translate a program written in one language into a program written in another language. At the same time, a compiler is a large software system, with many internal components and algorithms and complex interactions between them. Thus, the study of compiler construction is an introduction to techniques for the translation and improvement of programs, and a practical exercise in software engineering. This chapter provides a conceptual overview of all the major components of a modern compiler.
Keywords: Compiler, Interpreter, Automatic Translation
The role of the computer in daily life grows each year. With the rise of the Internet, computers and the software that runs on them provide communications, news, entertainment, and security. Embedded computers have changed the ways that we build automobiles, airplanes, telephones, televisions, and radios. Computation has created entirely new categories of activity, from video games to social networks. Supercomputers predict daily weather and the course of violent storms. Embedded computers synchronize traffic lights and deliver e-mail to your pocket.
All of these computer applications rely on software: computer programs that build virtual tools on top of the low-level abstractions provided by the underlying hardware. Almost all of that software is translated by a tool called a compiler. A compiler is simply a computer program that translates other computer programs to prepare them for execution. This book presents the fundamental techniques of automatic translation that are used to build compilers. It describes many of the challenges that arise in compiler construction and the algorithms that compiler writers use to address them.
A compiler is a tool that translates software written in one language into another language. To translate text from one language to another, the tool must understand both the form, or syntax, and content, or meaning, of the input language. It needs to understand the rules that govern syntax and meaning in the output language. Finally, it needs a scheme for mapping content from the source language to the target language.
The structure of a typical compiler derives from these simple observations. The compiler has a front end to deal with the source language. It has a back end to deal with the target language. Connecting the front end and the back end, it has a formal structure for representing the program in an intermediate form whose meaning is largely independent of either language. To improve the translation, a compiler often includes an optimizer that analyzes and rewrites that intermediate form.
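As a concrete, highly simplified sketch of this structure, the toy below uses Python's own ast module to play all three roles: a front end that parses source text into an intermediate form, an optimizer that rewrites that form, and a back end that emits executable code. The stage names and the use of CPython bytecode as a stand-in "target machine" are illustrative assumptions, not the book's own examples.

```python
import ast
import operator

def front_end(source):
    # Front end: parse the source text into an intermediate
    # representation (here, an abstract syntax tree).
    return ast.parse(source, mode="eval")

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

class Folder(ast.NodeTransformer):
    # One optimizer pass: fold constant subexpressions, e.g. 2 * 3 -> 6.
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold the children first
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and type(node.op) in OPS):
            folded = OPS[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(folded), node)
        return node

def optimizer(tree):
    # Optimizer: analyze and rewrite the intermediate form.
    return ast.fix_missing_locations(Folder().visit(tree))

def back_end(tree):
    # Back end: emit "target" code; CPython bytecode stands in for a
    # real processor's instruction set.
    return compile(tree, "<demo>", "eval")

code = back_end(optimizer(front_end("2 * 3 + x")))
print(eval(code, {"x": 4}))  # -> 10
```

Real compilers, of course, run many optimization passes over a far richer intermediate form; the point here is only the separation of concerns among the three phases.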
Computer programs are simply sequences of abstract operations written in a programming language—a formal language designed for expressing computation. Programming languages have rigid properties and meanings—as opposed to natural languages, such as Chinese or Portuguese. Programming languages are designed for expressiveness, conciseness, and clarity. Natural languages allow ambiguity. Programming languages are designed to avoid ambiguity; an ambiguous program has no meaning. Programming languages are designed to specify computations—to record the sequence of actions that perform some task or produce some results.
Programming languages are, in general, designed to allow humans to express computations as sequences of operations. Computer processors, hereafter referred to as processors, microprocessors, or machines, are designed to execute sequences of operations. The operations that a processor implements are, for the most part, at a much lower level of abstraction than those specified in a programming language. For example, a programming language typically includes a concise way to print some number to a file. That single programming language statement must be translated into literally hundreds of machine operations before it can execute.
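The gap between the two levels of abstraction is easy to observe if CPython's bytecode is taken as a stand-in for a processor's instruction set: even one short statement expands into a sequence of lower-level stack-machine operations (the exact count varies across Python versions). This is a sketch of that observation, not a measurement from the book.

```python
import dis

# One source-level statement...
code = compile("print(3 * x + 2)", "<demo>", "exec")

# ...becomes many low-level operations: loads of names and constants,
# a multiply, an add, a call, and bookkeeping around them.
instructions = list(dis.get_instructions(code))
print(len(instructions))
dis.dis(code)
```

Translating to a real instruction set widens the gap further, since the hardware offers no direct analogue of a formatted-print operation at all.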
The tool that performs such translations is called a compiler. The compiler takes as input a program written in some language and produces as its output an equivalent program. In the classic notion of a compiler, the output program is expressed in the operations available on some specific processor, often called the target machine. Viewed as a black box, a compiler is simply a translator that consumes a source program and produces a target program.
Typical "source" languages might be C, C++, FORTRAN, Java, or ML. The "target" language is usually the instruction set of some processor.
Some compilers produce a target program written in a human-oriented programming language rather than the assembly language of some computer. The programs that these compilers produce require further translation before they can execute directly on a computer. Many research compilers produce C programs as their output. Because compilers for C are available on most computers, this makes the target program executable on all those systems, at the cost of an extra compilation for the final target. Compilers that target programming languages rather than the instruction set of a computer are often called source-to-source translators.
Many other systems qualify as compilers. For example, a typesetting program that produces PostScript can be considered a compiler. It takes as input a specification for how the document should look on the printed page and it produces as output a PostScript file. PostScript is simply a language for describing images. Because the typesetting program takes an executable specification and produces another executable specification, it is a compiler.
The code that turns PostScript into pixels is typically an interpreter, not a compiler. An interpreter takes as input an executable specification and produces as output the result of executing the specification.
Some languages, such as Perl, Scheme, and APL, are more often implemented with interpreters than with compilers.
Some languages adopt translation schemes that include both compilation and interpretation. Java is compiled from source code into a form called bytecode, a compact representation intended to decrease download times for Java applications. Java applications execute by running the bytecode on the corresponding Java Virtual Machine (JVM), an interpreter for bytecode. To complicate the picture further, many implementations of the JVM include a compiler that executes at runtime, sometimes called a just-in-time compiler, or JIT, that translates heavily used bytecode sequences into native code for the underlying computer.
Interpreters and compilers have much in common. They perform many of the same tasks. Both analyze the input program and determine whether or not it is a valid program. Both build an internal model of the structure and meaning of the program. Both determine where to store values during execution. However, interpreting the code to produce a result is quite different from emitting a translated program that can be executed to produce the result. This book focuses on the problems that arise in building compilers. However, an implementor of interpreters may find much of the material relevant.
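The shared analysis and the differing outputs can be sketched with a tiny expression language invented for illustration. Both tools walk the same input structure, but the interpreter returns a result, while the compiler returns a program (here, Python source text) that can be executed later, and many times, to produce that result.

```python
def interpret(expr, env):
    # Interpreter: execute the specification directly, yielding a value.
    op, *args = expr
    if op == "num":
        return args[0]
    if op == "var":
        return env[args[0]]
    left, right = interpret(args[0], env), interpret(args[1], env)
    return left + right if op == "add" else left * right

def compile_expr(expr):
    # Compiler: emit an equivalent program instead of a value; the
    # translation runs once, but its output can run many times.
    op, *args = expr
    if op == "num":
        return str(args[0])
    if op == "var":
        return args[0]
    left, right = compile_expr(args[0]), compile_expr(args[1])
    return f"({left} + {right})" if op == "add" else f"({left} * {right})"

# The expression 3 * x + 2, evaluated with x = 4.
program = ("add", ("mul", ("num", 3), ("var", "x")), ("num", 2))
print(interpret(program, {"x": 4}))           # -> 14
print(eval(compile_expr(program), {"x": 4}))  # -> 14
```

Note how the first lines of each function are identical: the analysis of the input program is shared, and only what is produced from that analysis differs.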
Why Study Compiler Construction?
A compiler is a large, complex program. Compilers often include hundreds of thousands, if not millions, of lines of code, organized into multiple subsystems and components. The various parts of a compiler interact in complex ways. Design decisions made for one part of the compiler have important ramifications for other parts. Thus, the design and implementation of a compiler is a substantial exercise in software engineering.
A good compiler contains a microcosm of computer science. It makes practical use of greedy algorithms (register allocation), heuristic search techniques (list scheduling), graph algorithms (dead-code elimination), dynamic programming (instruction selection), finite automata and push-down automata (scanning and parsing), and fixed-point algorithms (data-flow analysis). It deals with problems such as dynamic allocation, synchronization, naming, locality, memory hierarchy management, and pipeline scheduling. Few software systems bring together as many complex and diverse components. Working inside a compiler provides practical experience in software engineering that is hard to obtain with smaller, less intricate systems.
Compilers play a fundamental role in the central activity of computer science: preparing problems for solution by computer. Most software is compiled, and the correctness of that process and the efficiency of the resulting code have a direct impact on our ability to build large systems. Most students are not satisfied with reading about these ideas; many of the ideas must be implemented to be appreciated. Thus, the study of compiler construction is an important component of a computer science education.
Compilers demonstrate the successful application of theory to practical problems. The tools that automate the production of scanners and parsers apply results from formal language theory. These same tools are used for text searching, website filtering, word processing, and command-language interpreters. Type checking and static analysis apply results from lattice theory, number theory, and other branches of mathematics to understand and improve programs. Code generators use algorithms for tree-pattern matching, parsing, dynamic programming, and text matching to automate the selection of instructions.
Still, some problems that arise in compiler construction are open problems—that is, the current best solutions have room for improvement. Attempts to design high-level, universal, intermediate representations have foundered on complexity. The dominant method for scheduling instructions is a greedy algorithm with several layers of tie-breaking heuristics. While it is obvious that compilers should use commutativity and associativity to improve the code, most compilers that try to do so simply rearrange the expression into some canonical order.
Building a successful compiler requires expertise in algorithms, engineering, and planning. Good compilers approximate the solutions to hard problems. They emphasize efficiency, in their own implementations and in the code they generate. They have internal data structures and knowledge representations that expose the right level of detail—enough to allow strong optimization, but not enough to force the compiler to wallow in detail. Compiler construction brings together ideas and techniques from across the breadth of computer science and applies them in a constrained setting to solve some truly hard problems.
The Fundamental Principles of Compilation
Compilers are large, complex, carefully engineered objects. While many issues in compiler design are amenable to multiple solutions and interpretations, there are two fundamental principles that a compiler writer must keep in mind at all times. The first principle is inviolable:
The compiler must preserve the meaning of the program being compiled.
Correctness is a fundamental issue in programming. The compiler must preserve correctness by faithfully implementing the "meaning" of its input program. This principle lies at the heart of the social contract between the compiler writer and compiler user. If the compiler can take liberties with meaning, then why not simply generate a nop or a return? If an incorrect translation is acceptable, why expend the effort to get it right?
The second principle that a compiler must observe is practical:
The compiler must improve the input program in some discernible way.
A traditional compiler improves the input program by making it directly executable on some target machine. Other "compilers" improve their input in different ways. For example, tpic is a program that takes the specification for a drawing written in the graphics language pic and converts it into LaTeX; the "improvement" lies in LaTeX's greater availability and generality. A source-to-source translator for C must produce code that is, in some measure, better than the input program; if it is not, why would anyone invoke it?
Excerpted from Engineering a Compiler by Keith D. Cooper Linda Torczon Copyright © 2012 by Elsevier, Inc.. Excerpted by permission of Morgan Kaufmann Publishers. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.
Contents
About the Authors
About the Cover
Ch. 1: Overview of Compilation
Ch. 4: Context-Sensitive Analysis
Ch. 5: Intermediate Representations
Ch. 6: The Procedure Abstraction
Ch. 7: Code Shape
Ch. 8: Introduction to Code Optimization
Ch. 9: Data-Flow Analysis
Ch. 10: Scalar Optimizations
Ch. 11: Instruction Selection
Ch. 12: Instruction Scheduling
Ch. 13: Register Allocation
App. B: Data Structures
Posted September 23, 2011
Are you part of the compiler construction community? If you are, then this book is for you! Authors Keith Cooper and Linda Torczon have done an outstanding job of writing a second edition of a book that aims to give the student new insights and algorithms, and the ability to rediscover older techniques that were effective but largely forgotten in compiler construction.
Cooper and Torczon begin by providing a conceptual overview of all of the major components of a modern compiler. In addition, the authors introduce regular expressions, a notation that is used to describe the valid words in a programming language. The authors then focus on context-free grammars, a notation that is used to specify the syntax of programming languages. Then, they look at techniques for context-sensitive checking. Next, the authors present a survey of IR forms that compilers use, including graphical IRs, linear IRs, and symbol tables. They continue with an in-depth look at the implementation of procedures and procedure calls, from the perspective of a compiler writer. In addition, the authors explore some of the implementation strategies that the compiler can employ for a variety of common programming-language constructs. The authors then introduce the problems and techniques of code optimization and present key concepts through a series of example optimizations. Then, they explore iterative data-flow analysis, which uses a simple fixed-point algorithm. Next, the authors introduce a broad selection of machine-independent transformations that address a variety of inefficiencies in the compiled code. They continue with a look at different approaches to instruction selection. In addition, the authors introduce the dominant technique for scheduling in compilers: greedy list scheduling. Finally, the authors focus on global register allocation and assignment via graph coloring.
This most excellent book presents two perspectives: big-picture views of the problems in compiler construction and detailed discussions of algorithmic alternatives. Perhaps more importantly, the authors have focused on the usability of the book, both as a textbook and as a reference for professionals.