
Computer Systems A Programmers Perspective (3rd).pdf

Front Cover
Contents
Preface
About the Authors
Chapter 1: A Tour of Computer Systems
1.1: Information Is Bits + Context
1.2: Programs Are Translated by Other Programs into Different Forms
1.3: It Pays to Understand How Compilation Systems Work
1.4: Processors Read and Interpret Instructions Stored in Memory
1.4.1: Hardware Organization of a System
1.4.2: Running the hello Program
1.5: Caches Matter
1.6: Storage Devices Form a Hierarchy
1.7: The Operating System Manages the Hardware
1.7.1: Processes
1.7.2: Threads
1.7.3: Virtual Memory
1.7.4: Files
1.8: Systems Communicate with Other Systems Using Networks
1.9: Important Themes
1.9.1: Amdahl’s Law
1.9.2: Concurrency and Parallelism
1.9.3: The Importance of Abstractions in Computer Systems
1.10: Summary
Bibliographic Notes
Solutions to Practice Problems
Part I: Program Structure and Execution
Chapter 2: Representing and Manipulating Information
2.1: Information Storage
2.1.1: Hexadecimal Notation
2.1.2: Data Sizes
2.1.3: Addressing and Byte Ordering
2.1.4: Representing Strings
2.1.5: Representing Code
2.1.6: Introduction to Boolean Algebra
2.1.7: Bit-Level Operations in C
2.1.8: Logical Operations in C
2.1.9: Shift Operations in C
2.2: Integer Representations
2.2.1: Integral Data Types
2.2.2: Unsigned Encodings
2.2.3: Two’s-Complement Encodings
2.2.4: Conversions between Signed and Unsigned
2.2.5: Signed versus Unsigned in C
2.2.6: Expanding the Bit Representation of a Number
2.2.7: Truncating Numbers
2.2.8: Advice on Signed versus Unsigned
2.3: Integer Arithmetic
2.3.1: Unsigned Addition
2.3.2: Two’s-Complement Addition
2.3.3: Two’s-Complement Negation
2.3.4: Unsigned Multiplication
2.3.5: Two’s-Complement Multiplication
2.3.6: Multiplying by Constants
2.3.7: Dividing by Powers of 2
2.3.8: Final Thoughts on Integer Arithmetic
2.4: Floating Point
2.4.1: Fractional Binary Numbers
2.4.2: IEEE Floating-Point Representation
2.4.3: Example Numbers
2.4.4: Rounding
2.4.5: Floating-Point Operations
2.4.6: Floating Point in C
2.5: Summary
Bibliographic Notes
Homework Problems
Solutions to Practice Problems
Chapter 3: Machine-Level Representation of Programs
3.1: A Historical Perspective
3.2: Program Encodings
3.2.1: Machine-Level Code
3.2.2: Code Examples
3.2.3: Notes on Formatting
3.3: Data Formats
3.4: Accessing Information
3.4.1: Operand Specifiers
3.4.2: Data Movement Instructions
3.4.3: Data Movement Example
3.4.4: Pushing and Popping Stack Data
3.5: Arithmetic and Logical Operations
3.5.1: Load Effective Address
3.5.2: Unary and Binary Operations
3.5.3: Shift Operations
3.5.4: Discussion
3.5.5: Special Arithmetic Operations
3.6: Control
3.6.1: Condition Codes
3.6.2: Accessing the Condition Codes
3.6.3: Jump Instructions
3.6.4: Jump Instruction Encodings
3.6.5: Implementing Conditional Branches with Conditional Control
3.6.6: Implementing Conditional Branches with Conditional Moves
3.6.7: Loops
3.6.8: Switch Statements
3.7: Procedures
3.7.1: The Run-Time Stack
3.7.2: Control Transfer
3.7.3: Data Transfer
3.7.4: Local Storage on the Stack
3.7.5: Local Storage in Registers
3.7.6: Recursive Procedures
3.8: Array Allocation and Access
3.8.1: Basic Principles
3.8.2: Pointer Arithmetic
3.8.3: Nested Arrays
3.8.4: Fixed-Size Arrays
3.8.5: Variable-Size Arrays
3.9: Heterogeneous Data Structures
3.9.1: Structures
3.9.2: Unions
3.9.3: Data Alignment
3.10: Combining Control and Data in Machine-Level Programs
3.10.1: Understanding Pointers
3.10.2: Life in the Real World: Using the GDB Debugger
3.10.3: Out-of-Bounds Memory References and Buffer Overflow
3.10.4: Thwarting Buffer Overflow Attacks
3.10.5: Supporting Variable-Size Stack Frames
3.11: Floating-Point Code
3.11.1: Floating-Point Movement and Conversion Operations
3.11.2: Floating-Point Code in Procedures
3.11.3: Floating-Point Arithmetic Operations
3.11.4: Defining and Using Floating-Point Constants
3.11.5: Using Bitwise Operations in Floating-Point Code
3.11.6: Floating-Point Comparison Operations
3.11.7: Observations about Floating-Point Code
3.12: Summary
Bibliographic Notes
Homework Problems
Solutions to Practice Problems
Chapter 4: Processor Architecture
4.1: The Y86-64 Instruction Set Architecture
4.1.1: Programmer-Visible State
4.1.2: Y86-64 Instructions
4.1.3: Instruction Encoding
4.1.4: Y86-64 Exceptions
4.1.5: Y86-64 Programs
4.1.6: Some Y86-64 Instruction Details
4.2: Logic Design and the Hardware Control Language HCL
4.2.1: Logic Gates
4.2.2: Combinational Circuits and HCL Boolean Expressions
4.2.3: Word-Level Combinational Circuits and HCL Integer Expressions
4.2.4: Set Membership
4.2.5: Memory and Clocking
4.3: Sequential Y86-64 Implementations
4.3.1: Organizing Processing into Stages
4.3.2: SEQ Hardware Structure
4.3.3: SEQ Timing
4.3.4: SEQ Stage Implementations
4.4: General Principles of Pipelining
4.4.1: Computational Pipelines
4.4.2: A Detailed Look at Pipeline Operation
4.4.3: Limitations of Pipelining
4.4.4: Pipelining a System with Feedback
4.5: Pipelined Y86-64 Implementations
4.5.1: SEQ+: Rearranging the Computation Stages
4.5.2: Inserting Pipeline Registers
4.5.3: Rearranging and Relabeling Signals
4.5.4: Next PC Prediction
4.5.5: Pipeline Hazards
4.5.6: Exception Handling
4.5.7: PIPE Stage Implementations
4.5.8: Pipeline Control Logic
4.5.9: Performance Analysis
4.5.10: Unfinished Business
4.6: Summary
4.6.1: Y86-64 Simulators
Bibliographic Notes
Homework Problems
Solutions to Practice Problems
Chapter 5: Optimizing Program Performance
5.1: Capabilities and Limitations of Optimizing Compilers
5.2: Expressing Program Performance
5.3: Program Example
5.4: Eliminating Loop Inefficiencies
5.5: Reducing Procedure Calls
5.6: Eliminating Unneeded Memory References
5.7: Understanding Modern Processors
5.7.1: Overall Operation
5.7.2: Functional Unit Performance
5.7.3: An Abstract Model of Processor Operation
5.8: Loop Unrolling
5.9: Enhancing Parallelism
5.9.1: Multiple Accumulators
5.9.2: Reassociation Transformation
5.10: Summary of Results for Optimizing Combining Code
5.11: Some Limiting Factors
5.11.1: Register Spilling
5.11.2: Branch Prediction and Misprediction Penalties
5.12: Understanding Memory Performance
5.12.1: Load Performance
5.12.2: Store Performance
5.13: Life in the Real World: Performance Improvement Techniques
5.14: Identifying and Eliminating Performance Bottlenecks
5.14.1: Program Profiling
5.14.2: Using a Profiler to Guide Optimization
5.15: Summary
Bibliographic Notes
Homework Problems
Solutions to Practice Problems
Chapter 6: The Memory Hierarchy
6.1: Storage Technologies
6.1.1: Random Access Memory
6.1.2: Disk Storage
6.1.3: Solid State Disks
6.1.4: Storage Technology Trends
6.2: Locality
6.2.1: Locality of References to Program Data
6.2.2: Locality of Instruction Fetches
6.2.3: Summary of Locality
6.3: The Memory Hierarchy
6.3.1: Caching in the Memory Hierarchy
6.3.2: Summary of Memory Hierarchy Concepts
6.4: Cache Memories
6.4.1: Generic Cache Memory Organization
6.4.2: Direct-Mapped Caches
6.4.3: Set Associative Caches
6.4.4: Fully Associative Caches
6.4.5: Issues with Writes
6.4.6: Anatomy of a Real Cache Hierarchy
6.4.7: Performance Impact of Cache Parameters
6.5: Writing Cache-Friendly Code
6.6: Putting It Together: The Impact of Caches on Program Performance
6.6.1: The Memory Mountain
6.6.2: Rearranging Loops to Increase Spatial Locality
6.6.3: Exploiting Locality in Your Programs
6.7: Summary
Bibliographic Notes
Homework Problems
Solutions to Practice Problems
Part II: Running Programs on a System
Chapter 7: Linking
7.1: Compiler Drivers
7.2: Static Linking
7.3: Object Files
7.4: Relocatable Object Files
7.5: Symbols and Symbol Tables
7.6: Symbol Resolution
7.6.1: How Linkers Resolve Duplicate Symbol Names
7.6.2: Linking with Static Libraries
7.6.3: How Linkers Use Static Libraries to Resolve References
7.7: Relocation
7.7.1: Relocation Entries
7.7.2: Relocating Symbol References
7.8: Executable Object Files
7.9: Loading Executable Object Files
7.10: Dynamic Linking with Shared Libraries
7.11: Loading and Linking Shared Libraries from Applications
7.12: Position-Independent Code (PIC)
7.13: Library Interpositioning
7.13.1: Compile-Time Interpositioning
7.13.2: Link-Time Interpositioning
7.13.3: Run-Time Interpositioning
7.14: Tools for Manipulating Object Files
7.15: Summary
Bibliographic Notes
Homework Problems
Solutions to Practice Problems
Chapter 8: Exceptional Control Flow
8.1: Exceptions
8.1.1: Exception Handling
8.1.2: Classes of Exceptions
8.1.3: Exceptions in Linux/x86-64 Systems
8.2: Processes
8.2.1: Logical Control Flow
8.2.2: Concurrent Flows
8.2.3: Private Address Space
8.2.4: User and Kernel Modes
8.2.5: Context Switches
8.3: System Call Error Handling
8.4: Process Control
8.4.1: Obtaining Process IDs
8.4.2: Creating and Terminating Processes
8.4.3: Reaping Child Processes
8.4.4: Putting Processes to Sleep
8.4.5: Loading and Running Programs
8.4.6: Using fork and execve to Run Programs
8.5: Signals
8.5.1: Signal Terminology
8.5.2: Sending Signals
8.5.3: Receiving Signals
8.5.4: Blocking and Unblocking Signals
8.5.5: Writing Signal Handlers
8.5.6: Synchronizing Flows to Avoid Nasty Concurrency Bugs
8.5.7: Explicitly Waiting for Signals
8.6: Nonlocal Jumps
8.7: Tools for Manipulating Processes
8.8: Summary
Bibliographic Notes
Homework Problems
Solutions to Practice Problems
Chapter 9: Virtual Memory
9.1: Physical and Virtual Addressing
9.2: Address Spaces
9.3: VM as a Tool for Caching
9.3.1: DRAM Cache Organization
9.3.2: Page Tables
9.3.3: Page Hits
9.3.4: Page Faults
9.3.5: Allocating Pages
9.3.6: Locality to the Rescue Again
9.4: VM as a Tool for Memory Management
9.5: VM as a Tool for Memory Protection
9.6: Address Translation
9.6.1: Integrating Caches and VM
9.6.2: Speeding Up Address Translation with a TLB
9.6.3: Multi-Level Page Tables
9.6.4: Putting It Together: End-to-End Address Translation
9.7: Case Study: The Intel Core i7/Linux Memory System
9.7.1: Core i7 Address Translation
9.7.2: Linux Virtual Memory System
9.8: Memory Mapping
9.8.1: Shared Objects Revisited
9.8.2: The fork Function Revisited
9.8.3: The execve Function Revisited
9.8.4: User-Level Memory Mapping with the mmap Function
9.9: Dynamic Memory Allocation
9.9.1: The malloc and free Functions
9.9.2: Why Dynamic Memory Allocation?
9.9.3: Allocator Requirements and Goals
9.9.4: Fragmentation
9.9.5: Implementation Issues
9.9.6: Implicit Free Lists
9.9.7: Placing Allocated Blocks
9.9.8: Splitting Free Blocks
9.9.9: Getting Additional Heap Memory
9.9.10: Coalescing Free Blocks
9.9.11: Coalescing with Boundary Tags
9.9.12: Putting It Together: Implementing a Simple Allocator
9.9.13: Explicit Free Lists
9.9.14: Segregated Free Lists
9.10: Garbage Collection
9.10.1: Garbage Collector Basics
9.10.2: Mark&Sweep Garbage Collectors
9.10.3: Conservative Mark&Sweep for C Programs
9.11: Common Memory-Related Bugs in C Programs
9.11.1: Dereferencing Bad Pointers
9.11.2: Reading Uninitialized Memory
9.11.3: Allowing Stack Buffer Overflows
9.11.4: Assuming That Pointers and the Objects They Point to Are the Same Size
9.11.5: Making Off-by-One Errors
9.11.6: Referencing a Pointer Instead of the Object It Points To
9.11.7: Misunderstanding Pointer Arithmetic
9.11.8: Referencing Nonexistent Variables
9.11.9: Referencing Data in Free Heap Blocks
9.11.10: Introducing Memory Leaks
9.12: Summary
Bibliographic Notes
Homework Problems
Solutions to Practice Problems
Part III: Interaction and Communication between Programs
Chapter 10: System-Level I/O
10.1: Unix I/O
10.2: Files
10.3: Opening and Closing Files
10.4: Reading and Writing Files
10.5: Robust Reading and Writing with the Rio Package
10.5.1: Rio Unbuffered Input and Output Functions
10.5.2: Rio Buffered Input Functions
10.6: Reading File Metadata
10.7: Reading Directory Contents
10.8: Sharing Files
10.9: I/O Redirection
10.10: Standard I/O
10.11: Putting It Together: Which I/O Functions Should I Use?
10.12: Summary
Bibliographic Notes
Homework Problems
Solutions to Practice Problems
Chapter 11: Network Programming
11.1: The Client-Server Programming Model
11.2: Networks
11.3: The Global IP Internet
11.3.1: IP Addresses
11.3.2: Internet Domain Names
11.3.3: Internet Connections
11.4: The Sockets Interface
11.4.1: Socket Address Structures
11.4.2: The socket Function
11.4.3: The connect Function
11.4.4: The bind Function
11.4.5: The listen Function
11.4.6: The accept Function
11.4.7: Host and Service Conversion
11.4.8: Helper Functions for the Sockets Interface
11.4.9: Example Echo Client and Server
11.5: Web Servers
11.5.1: Web Basics
11.5.2: Web Content
11.5.3: HTTP Transactions
11.5.4: Serving Dynamic Content
11.6: Putting It Together: The Tiny Web Server
11.7: Summary
Bibliographic Notes
Homework Problems
Solutions to Practice Problems
Chapter 12: Concurrent Programming
12.1: Concurrent Programming with Processes
12.1.1: A Concurrent Server Based on Processes
12.1.2: Pros and Cons of Processes
12.2: Concurrent Programming with I/O Multiplexing
12.2.1: A Concurrent Event-Driven Server Based on I/O Multiplexing
12.2.2: Pros and Cons of I/O Multiplexing
12.3: Concurrent Programming with Threads
12.3.1: Thread Execution Model
12.3.2: Posix Threads
12.3.3: Creating Threads
12.3.4: Terminating Threads
12.3.5: Reaping Terminated Threads
12.3.6: Detaching Threads
12.3.7: Initializing Threads
12.3.8: A Concurrent Server Based on Threads
12.4: Shared Variables in Threaded Programs
12.4.1: Threads Memory Model
12.4.2: Mapping Variables to Memory
12.4.3: Shared Variables
12.5: Synchronizing Threads with Semaphores
12.5.1: Progress Graphs
12.5.2: Semaphores
12.5.3: Using Semaphores for Mutual Exclusion
12.5.4: Using Semaphores to Schedule Shared Resources
12.5.5: Putting It Together: A Concurrent Server Based on Prethreading
12.6: Using Threads for Parallelism
12.7: Other Concurrency Issues
12.7.1: Thread Safety
12.7.2: Reentrancy
12.7.3: Using Existing Library Functions in Threaded Programs
12.7.4: Races
12.7.5: Deadlocks
12.8: Summary
Bibliographic Notes
Homework Problems
Solutions to Practice Problems
Appendix A: Error Handling
A.1: Error Handling in Unix Systems
A.2: Error-Handling Wrappers
References
Index
Back Cover
Global Edition

For these Global Editions, the editorial team at Pearson has collaborated with educators across the world to address a wide range of subjects and requirements, equipping students with the best possible learning tools. This Global Edition preserves the cutting-edge approach and pedagogy of the original, but also features alterations, customization, and adaptation from the North American version.

This is a special edition of an established title widely used by colleges and universities throughout the world. Pearson published this exclusive edition for the benefit of students outside the United States and Canada. If you purchased this book within the United States or Canada, you should be aware that it has been imported without the approval of the Publisher or Author.

Pearson Global Edition

Computer Systems: A Programmer's Perspective, Third Edition
Randal E. Bryant • David R. O'Hallaron
Computer Systems: A Programmer's Perspective
Computer Systems: A Programmer's Perspective
Third Edition, Global Edition

Randal E. Bryant, Carnegie Mellon University
David R. O'Hallaron, Carnegie Mellon University

Global Edition contributions by
Manasa S., NMAM Institute of Technology
Mohit Tahiliani, National Institute of Technology Karnataka

Boston • Columbus • Hoboken • Indianapolis • New York • San Francisco • Amsterdam • Cape Town • Dubai • London • Madrid • Milan • Munich • Paris • Montreal • Toronto • Delhi • Mexico City • São Paulo • Sydney • Hong Kong • Seoul • Singapore • Taipei • Tokyo
Vice President and Editorial Director: Marcia J. Horton
Executive Editor: Matt Goldstein
Editorial Assistant: Kelsey Loanes
Acquisitions Editor, Global Editions: Karthik Subramanian
VP of Marketing: Christy Lesko
Director of Field Marketing: Tim Galligan
Product Marketing Manager: Bram van Kempen
Field Marketing Manager: Demetrius Hall
Marketing Assistant: Jon Bryant
Director of Product Management: Erin Gregg
Team Lead Product Management: Scott Disanno
Program Manager: Joanne Manning
Project Editor, Global Editions: K.K. Neelakantan
Senior Production Manufacturing Controller, Global Editions: Trudy Kimber
Procurement Manager: Mary Fischer
Senior Specialist, Program Planning and Support: Maura Zaldivar-Garcia
Media Production Manager, Global Editions: Vikram Kumar
Cover Designer: Lumina Datamatics
Manager, Rights Management: Rachel Youdelman
Associate Project Manager, Rights Management: William J. Opaluch
Full-Service Project Management: Paul Anagnostopoulos, Windfall Software

Pearson Education Limited
Edinburgh Gate, Harlow, Essex CM20 2JE, England
and Associated Companies throughout the world

Visit us on the World Wide Web at: www.pearsonglobaleditions.com

© Pearson Education Limited 2016

The rights of Randal E. Bryant and David R. O'Hallaron to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Authorized adaptation from the United States edition, entitled Computer Systems: A Programmer's Perspective, 3rd edition, ISBN 978-0-13-409266-9, by Randal E. Bryant and David R. O'Hallaron, published by Pearson Education © 2016.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without either the prior written permission of the publisher or a license permitting restricted copying in the United Kingdom issued by the Copyright Licensing Agency Ltd, Saffron House, 6-10 Kirby Street, London EC1N 8TS.

All trademarks used herein are the property of their respective owners. The use of any trademark in this text does not vest in the author or publisher any trademark ownership rights in such trademarks, nor does the use of such trademarks imply any affiliation with or endorsement of this book by such owners.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

10 9 8 7 6 5 4 3 2 1

ISBN 10: 1-292-10176-8
ISBN 13: 978-1-292-10176-7 (Print)
ISBN 13: 978-1-488-67207-1 (PDF)

Typeset in 10/12 Times Ten, ITC Stone Sans by Windfall Software
Printed in Malaysia
To the students and instructors of the 15-213 course at Carnegie Mellon University, for inspiring us to develop and refine the material for this book.
MasteringEngineering® For Computer Systems: A Programmer’s Perspective, Third Edition Mastering is Pearson’s proven online Tutorial Homework program, newly available with the third edition of Computer Systems: A Programmer’s Perspective. The Mastering platform allows you to integrate dynamic homework—with many problems taken directly from the Bryant/O’Hallaron textbook—with automatic grading. Mastering allows you to easily track the performance of your entire class on an assignment-by-assignment basis, or view the detailed work of an individual student. For more information or a demonstration of the course, visit www.MasteringEngineering.com