
E-book: Engineering a Compiler

4.07/5 (243 ratings by Goodreads)
Keith D. Cooper (Department of Computer Science, Rice University, Houston, Texas, USA), Linda Torczon (Principal Investigator on the Massively Scalar Compiler Project, Rice University, Houston, Texas, USA)
  • Format: PDF+DRM
  • Publication date: 18-Jan-2011
  • Publisher: Morgan Kaufmann Publishers Inc.
  • Language: English
  • ISBN-13: 9780080916613

DRM restrictions

  • Copying:

    not allowed

  • Printing:

    not allowed

  • E-book usage:

    Digital rights management (DRM)
    The publisher has supplied this book in encrypted form, which means that free software must be installed before it can be unlocked and read. To read this e-book, you must create an Adobe ID. More information here. The e-book can be downloaded to 6 devices (one user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you must install this free app: PocketBook Reader (iOS / Android)

    To read this e-book on a PC or Mac, you need Adobe Digital Editions (a free application designed specifically for e-books; it is not the same as the Adobe Reader you probably already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

This entirely revised second edition of Engineering a Compiler is full of technical updates and new material covering the latest developments in compiler technology. In this comprehensive text you will learn important techniques for constructing a modern compiler. Leading educators and researchers Keith Cooper and Linda Torczon combine basic principles with pragmatic insights from their experience building state-of-the-art compilers. They will help you fully understand important techniques such as compilation of imperative and object-oriented languages, construction of static single assignment forms, instruction scheduling, and graph-coloring register allocation.

  • In-depth treatment of algorithms and techniques used in the front end of a modern compiler
  • Focus on code optimization and code generation, the primary areas of recent research and development
  • Improvements in presentation including conceptual overviews for each chapter, summaries and review questions for sections, and prominent placement of definitions for new terms
  • Examples drawn from several different programming languages
  • Reviews

    "Keith Cooper and Linda Torczon are leading compilers researchers who have also built several state-of-the-art compilers. This book adeptly spans both worlds, by explaining both time-tested techniques and new algorithms, and by providing practical advice on engineering and constructing a compiler. Engineering a Compiler is a rich survey and exposition of the important techniques necessary to build a modern compiler."--Jim Larus, Microsoft Research

    "The book is well written, and well supported with diagrams, tables, and illustrative examples. It is a suitable textbook for use in a compilers course at the undergraduate or graduate level, where the primary focus of the course is code optimization."--ACM's Computing Reviews.com

    "This book is a wealth of useful information, prepared didactically, with many helpful hints, historical indications, and suggestions for further reading. It is a helpful working book for undergraduate and intermediate-level students, written by authors with an excellent professional and teaching background. An engineer will use the book as a general reference. For special topics, an ambitious reader will consult more recent publications in the subject area."--ACM's Computing Reviews.com

    More information

    The classic introduction to compiler construction fully updated with new techniques and practical insights
    About the Authors
    About the Cover
    Preface

    Chapter 1 Overview of Compilation
    1.1 Introduction
    1.2 Compiler Structure
    1.3 Overview of Translation
    1.3.1 The Front End
    1.3.2 The Optimizer
    1.3.3 The Back End
    1.4 Summary and Perspective
    Chapter Notes
    Exercises

    Chapter 2 Scanners
    2.1 Introduction
    2.2 Recognizing Words
    2.2.1 A Formalism for Recognizers
    2.2.2 Recognizing More Complex Words
    2.3 Regular Expressions
    2.3.1 Formalizing the Notation
    2.3.2 Examples
    2.3.3 Closure Properties of REs
    2.4 From Regular Expression to Scanner
    2.4.1 Nondeterministic Finite Automata
    2.4.2 Regular Expression to NFA: Thompson's Construction
    2.4.3 NFA to DFA: The Subset Construction
    2.4.4 DFA to Minimal DFA: Hopcroft's Algorithm
    2.4.5 Using a DFA as a Recognizer
    2.5 Implementing Scanners
    2.5.1 Table-Driven Scanners
    2.5.2 Direct-Coded Scanners
    2.5.3 Hand-Coded Scanners
    2.5.4 Handling Keywords
    2.6 Advanced Topics
    2.6.1 DFA to Regular Expression
    2.6.2 Another Approach to DFA Minimization: Brzozowski's Algorithm
    2.6.3 Closure-Free Regular Expressions
    2.7 Chapter Summary and Perspective
    Chapter Notes
    Exercises

    Chapter 3 Parsers
    3.1 Introduction
    3.2 Expressing Syntax
    3.2.1 Why Not Regular Expressions?
    3.2.2 Context-Free Grammars
    3.2.3 More Complex Examples
    3.2.4 Encoding Meaning into Structure
    3.2.5 Discovering a Derivation for an Input String
    3.3 Top-Down Parsing
    3.3.1 Transforming a Grammar for Top-Down Parsing
    3.3.2 Top-Down Recursive-Descent Parsers
    3.3.3 Table-Driven LL(1) Parsers
    3.4 Bottom-Up Parsing
    3.4.1 The LR(1) Parsing Algorithm
    3.4.2 Building LR(1) Tables
    3.4.3 Errors in the Table Construction
    3.5 Practical Issues
    3.5.1 Error Recovery
    3.5.2 Unary Operators
    3.5.3 Handling Context-Sensitive Ambiguity
    3.5.4 Left versus Right Recursion
    3.6 Advanced Topics
    3.6.1 Optimizing a Grammar
    3.6.2 Reducing the Size of LR(1) Tables
    3.7 Summary and Perspective
    Chapter Notes
    Exercises

    Chapter 4 Context-Sensitive Analysis
    4.1 Introduction
    4.2 An Introduction to Type Systems
    4.2.1 The Purpose of Type Systems
    4.2.2 Components of a Type System
    4.3 The Attribute-Grammar Framework
    4.3.1 Evaluation Methods
    4.3.2 Circularity
    4.3.3 Extended Examples
    4.3.4 Problems with the Attribute-Grammar Approach
    4.4 Ad Hoc Syntax-Directed Translation
    4.4.1 Implementing Ad Hoc Syntax-Directed Translation
    4.4.2 Examples
    4.5 Advanced Topics
    4.5.1 Harder Problems in Type Inference
    4.5.2 Changing Associativity
    4.6 Summary and Perspective
    Chapter Notes
    Exercises

    Chapter 5 Intermediate Representations
    5.1 Introduction
    5.1.1 A Taxonomy of Intermediate Representations
    5.2 Graphical IRs
    5.2.1 Syntax-Related Trees
    5.2.2 Graphs
    5.3 Linear IRs
    5.3.1 Stack-Machine Code
    5.3.2 Three-Address Code
    5.3.3 Representing Linear Codes
    5.3.4 Building a Control-Flow Graph from a Linear Code
    5.4 Mapping Values to Names
    5.4.1 Naming Temporary Values
    5.4.2 Static Single-Assignment Form
    5.4.3 Memory Models
    5.5 Symbol Tables
    5.5.1 Hash Tables
    5.5.2 Building a Symbol Table
    5.5.3 Handling Nested Scopes
    5.5.4 The Many Uses for Symbol Tables
    5.5.5 Other Uses for Symbol Table Technology
    5.6 Summary and Perspective
    Chapter Notes
    Exercises

    Chapter 6 The Procedure Abstraction
    6.1 Introduction
    6.2 Procedure Calls
    6.3 Name Spaces
    6.3.1 Name Spaces of Algol-like Languages
    6.3.2 Runtime Structures to Support Algol-like Languages
    6.3.3 Name Spaces of Object-Oriented Languages
    6.3.4 Runtime Structures to Support Object-Oriented Languages
    6.4 Communicating Values Between Procedures
    6.4.1 Passing Parameters
    6.4.2 Returning Values
    6.4.3 Establishing Addressability
    6.5 Standardized Linkages
    6.6 Advanced Topics
    6.6.1 Explicit Heap Management
    6.6.2 Implicit Deallocation
    6.7 Summary and Perspective
    Chapter Notes
    Exercises

    Chapter 7 Code Shape
    7.1 Introduction
    7.2 Assigning Storage Locations
    7.2.1 Placing Runtime Data Structures
    7.2.2 Layout for Data Areas
    7.2.3 Keeping Values in Registers
    7.3 Arithmetic Operators
    7.3.1 Reducing Demand for Registers
    7.3.2 Accessing Parameter Values
    7.3.3 Function Calls in an Expression
    7.3.4 Other Arithmetic Operators
    7.3.5 Mixed-Type Expressions
    7.3.6 Assignment as an Operator
    7.4 Boolean and Relational Operators
    7.4.1 Representations
    7.4.2 Hardware Support for Relational Operations
    7.5 Storing and Accessing Arrays
    7.5.1 Referencing a Vector Element
    7.5.2 Array Storage Layout
    7.5.3 Referencing an Array Element
    7.5.4 Range Checking
    7.6 Character Strings
    7.6.1 String Representations
    7.6.2 String Assignment
    7.6.3 String Concatenation
    7.6.4 String Length
    7.7 Structure References
    7.7.1 Understanding Structure Layouts
    7.7.2 Arrays of Structures
    7.7.3 Unions and Runtime Tags
    7.7.4 Pointers and Anonymous Values
    7.8 Control-Flow Constructs
    7.8.1 Conditional Execution
    7.8.2 Loops and Iteration
    7.8.3 Case Statements
    7.9 Procedure Calls
    7.9.1 Evaluating Actual Parameters
    7.9.2 Saving and Restoring Registers
    7.10 Summary and Perspective
    Chapter Notes
    Exercises

    Chapter 8 Introduction to Optimization
    8.1 Introduction
    8.2 Background
    8.2.1 Examples
    8.2.2 Considerations for Optimization
    8.2.3 Opportunities for Optimization
    8.3 Scope of Optimization
    8.4 Local Optimization
    8.4.1 Local Value Numbering
    8.4.2 Tree-Height Balancing
    8.5 Regional Optimization
    8.5.1 Superlocal Value Numbering
    8.5.2 Loop Unrolling
    8.6 Global Optimization
    8.6.1 Finding Uninitialized Variables with Live Information
    8.6.2 Global Code Placement
    8.7 Interprocedural Optimization
    8.7.1 Inline Substitution
    8.7.2 Procedure Placement
    8.7.3 Compiler Organization for Interprocedural Optimization
    8.8 Summary and Perspective
    Chapter Notes
    Exercises

    Chapter 9 Data-Flow Analysis
    9.1 Introduction
    9.2 Iterative Data-Flow Analysis
    9.2.1 Dominance
    9.2.2 Live-Variable Analysis
    9.2.3 Limitations on Data-Flow Analysis
    9.2.4 Other Data-Flow Problems
    9.3 Static Single-Assignment Form
    9.3.1 A Simple Method for Building SSA Form
    9.3.2 Dominance Frontiers
    9.3.3 Placing φ-Functions
    9.3.4 Renaming
    9.3.5 Translation Out of SSA Form
    9.3.6 Using SSA Form
    9.4 Interprocedural Analysis
    9.4.1 Call-Graph Construction
    9.4.2 Interprocedural Constant Propagation
    9.5 Advanced Topics
    9.5.1 Structural Data-Flow Algorithms and Reducibility
    9.5.2 Speeding up the Iterative Dominance Framework
    9.6 Summary and Perspective
    Chapter Notes
    Exercises

    Chapter 10 Scalar Optimizations
    10.1 Introduction
    10.2 Eliminating Useless and Unreachable Code
    10.2.1 Eliminating Useless Code
    10.2.2 Eliminating Useless Control Flow
    10.2.3 Eliminating Unreachable Code
    10.3 Code Motion
    10.3.1 Lazy Code Motion
    10.3.2 Code Hoisting
    10.4 Specialization
    10.4.1 Tail-Call Optimization
    10.4.2 Leaf-Call Optimization
    10.4.3 Parameter Promotion
    10.5 Redundancy Elimination
    10.5.1 Value Identity versus Name Identity
    10.5.2 Dominator-based Value Numbering
    10.6 Enabling Other Transformations
    10.6.1 Superblock Cloning
    10.6.2 Procedure Cloning
    10.6.3 Loop Unswitching
    10.6.4 Renaming
    10.7 Advanced Topics
    10.7.1 Combining Optimizations
    10.7.2 Strength Reduction
    10.7.3 Choosing an Optimization Sequence
    10.8 Summary and Perspective
    Chapter Notes
    Exercises

    Chapter 11 Instruction Selection
    11.1 Introduction
    11.2 Code Generation
    11.3 Extending the Simple Treewalk Scheme
    11.4 Instruction Selection via Tree-Pattern Matching
    11.4.1 Rewrite Rules
    11.4.2 Finding a Tiling
    11.4.3 Tools
    11.5 Instruction Selection via Peephole Optimization
    11.5.1 Peephole Optimization
    11.5.2 Peephole Transformers
    11.6 Advanced Topics
    11.6.1 Learning Peephole Patterns
    11.6.2 Generating Instruction Sequences
    11.7 Summary and Perspective
    Chapter Notes
    Exercises

    Chapter 12 Instruction Scheduling
    12.1 Introduction
    12.2 The Instruction-Scheduling Problem
    12.2.1 Other Measures of Schedule Quality
    12.2.2 What Makes Scheduling Hard?
    12.3 Local List Scheduling
    12.3.1 The Algorithm
    12.3.2 Scheduling Operations with Variable Delays
    12.3.3 Extending the Algorithm
    12.3.4 Tie Breaking in the List-Scheduling Algorithm
    12.3.5 Forward versus Backward List Scheduling
    12.3.6 Improving the Efficiency of List Scheduling
    12.4 Regional Scheduling
    12.4.1 Scheduling Extended Basic Blocks
    12.4.2 Trace Scheduling
    12.4.3 Cloning for Context
    12.5 Advanced Topics
    12.5.1 The Strategy of Software Pipelining
    12.5.2 An Algorithm for Software Pipelining
    12.6 Summary and Perspective
    Chapter Notes
    Exercises

    Chapter 13 Register Allocation
    13.1 Introduction
    13.2 Background Issues
    13.2.1 Memory versus Registers
    13.2.2 Allocation versus Assignment
    13.2.3 Register Classes
    13.3 Local Register Allocation and Assignment
    13.3.1 Top-Down Local Register Allocation
    13.3.2 Bottom-Up Local Register Allocation
    13.3.3 Moving Beyond Single Blocks
    13.4 Global Register Allocation and Assignment
    13.4.1 Discovering Global Live Ranges
    13.4.2 Estimating Global Spill Costs
    13.4.3 Interferences and the Interference Graph
    13.4.4 Top-Down Coloring
    13.4.5 Bottom-Up Coloring
    13.4.6 Coalescing Copies to Reduce Degree
    13.4.7 Comparing Top-Down and Bottom-Up Global Allocators
    13.4.8 Encoding Machine Constraints in the Interference Graph
    13.5 Advanced Topics
    13.5.1 Variations on Graph-Coloring Allocation
    13.5.2 Global Register Allocation over SSA Form
    13.6 Summary and Perspective
    Chapter Notes
    Exercises

    Appendix A ILOC
    A.1 Introduction
    A.2 Naming Conventions
    A.3 Individual Operations
    A.3.1 Arithmetic
    A.3.2 Shifts
    A.3.3 Memory Operations
    A.3.4 Register-to-Register Copy Operations
    A.4 Control-Flow Operations
    A.4.1 Alternate Comparison and Branch Syntax
    A.4.2 Jumps
    A.5 Representing SSA Form

    Appendix B Data Structures
    B.1 Introduction
    B.2 Representing Sets
    B.2.1 Representing Sets as Ordered Lists
    B.2.2 Representing Sets as Bit Vectors
    B.2.3 Representing Sparse Sets
    B.3 Implementing Intermediate Representations
    B.3.1 Graphical Intermediate Representations
    B.3.2 Linear Intermediate Forms
    B.4 Implementing Hash Tables
    B.4.1 Choosing a Hash Function
    B.4.2 Open Hashing
    B.4.3 Open Addressing
    B.4.4 Storing Symbol Records
    B.4.5 Adding Nested Lexical Scopes
    B.5 A Flexible Symbol-Table Design

    Bibliography
    Index
    Keith D. Cooper, Ph.D., Professor in the Department of Computer Science at Rice University, leads the Massively Scalar Compiler Project at Rice, which investigates issues relating to optimization and code generation for modern machines. He is also a member of the Center for High Performance Software Research, the Computer and Information Technology Institute, and the Center for Multimedia Communication, all at Rice. He teaches courses in compiler construction at the undergraduate and graduate levels.

    Linda Torczon is a principal investigator on the Massively Scalar Compiler Project at Rice University and on the Grid Application Development Software Project, sponsored by the Next Generation Software program of the National Science Foundation. She also serves as the executive director of HiPerSoft and of the Los Alamos Computer Science Institute. Her research interests include code generation, interprocedural data-flow analysis and optimization, and programming environments.