Pthreads Programming
By Bradford Nichols, Dick Buttlar, and Jackie Farrell
Copyright © 1996 O'Reilly & Associates, Inc., All rights reserved.
Printed in the United States of America
Published by O'Reilly & Associates, Inc., 101 Morris Street, Sebastopol, CA 95472
Editor: Andy Oram
Production Editor: Nancy Crumpton
Printing History:
September 1996: First Edition
February 1998: Minor corrections
Nutshell Handbook and the Nutshell Handbook Logo are registered trademarks and
The Java Series is a trademark of O'Reilly & Associates, Inc.
Many of the designations used by manufacturers and sellers to distinguish their
products are claimed as trademarks. Where those designations appear in this book,
and O'Reilly & Associates, Inc. was aware of a trademark claim, the designations
have been printed in caps or initial caps.
While every precaution has been taken in the preparation of this book, the publisher
assumes no responsibility for errors or omissions, or for damages resulting from the
use of the information contained herein.
ISBN: 1-56592-115-1
Contents
Preface
Organization
Example Programs
FTP
Typographical Conventions
Acknowledgments
Chapter 1 - Why Threads?
Overview
What Are Pthreads?
Potential Parallelism
Specifying Potential Parallelism in a Concurrent Programming Environment
UNIX Concurrent Programming: Multiple Processes
Pthreads Concurrent Programming: Multiple Threads
Parallel vs. Concurrent Programming
Synchronization
Sharing Process Resources
Communication
Scheduling
Who Am I? Who Are You?
Terminating Thread Execution
Exit Status and Return Values
Pthreads Library Calls and Errors
Why Use Threads Over Processes?
A Structured Programming Environment
Choosing Which Applications to Thread
Chapter 2 - Designing Threaded Programs
Overview
Suitable Tasks for Threading
Models
Boss/Worker Model
Peer Model
Pipeline Model
Buffering Data Between Threads
Some Common Problems
Performance
Example: An ATM Server
The Serial ATM Server
The Multithreaded ATM Server
Example: A Matrix Multiplication Program
The Serial Matrix-Multiply Program
The Multithreaded Matrix-Multiply Program
Chapter 3 - Synchronizing Pthreads
Overview
Selecting the Right Synchronization Tool
Mutex Variables
Using Mutexes
Error Detection and Return Values
Using pthread_mutex_trylock
When Other Tools Are Better
Some Shortcomings of Mutexes
Contention for a Mutex
Example: Using Mutexes in a Linked List
Complex Data Structures and Lock Granularity
Requirements and Goals for Synchronization
Access Patterns and Granularity
Locking Hierarchies
Sharing a Mutex Among Processes
Condition Variables
Using a Mutex with a Condition Variable
When Many Threads Are Waiting
Checking the Condition on Wake Up: Spurious Wake Ups
Condition Variable Attributes
Condition Variables and UNIX Signals
Condition Variables and Cancellation
Reader/Writer Locks
Synchronization in the ATM Server
Synchronizing Access to Account Data
Limiting the Number of Worker Threads
Synchronizing a Server Shutdown
Thread Pools
An ATM Server Example That Uses a Thread Pool
Chapter 4 - Managing Pthreads
Overview
Setting Thread Attributes
Setting a Thread’s Stack Size
Example: The ATM Server’s Communication Module
Setting a Thread’s Detached State
Setting Multiple Attributes
Destroying a Thread Attribute Object
The pthread_once Mechanism
Keys: Using Thread-Specific Data
Initializing a Key: pthread_key_create
Associating Data with a Key
Retrieving Data from a Key
Destructors
Cancellation
The Complication with Cancellation
Cancelability Types and States
Cancellation Points: More on Deferred Cancellation
A Simple Cancellation Example
Cleanup Stacks
Cancellation in the ATM Server
Scheduling Pthreads
Scheduling Priority and Policy
Scheduling Scope and Allocation Domains
Runnable and Blocked Threads
Scheduling Priority
Scheduling Policy
Using Priorities and Policies
Setting Scheduling Policy and Priority
Inheritance
Scheduling in the ATM Server
Mutex Scheduling Attributes
Priority Ceiling
Priority Inheritance
The ATM Example and Priority Inversion
Chapter 5 - Pthreads and UNIX
Overview
Threads and Signals
Traditional Signal Processing
Signal Processing in a Multithreaded World
Threads in Signal Handlers
A Simple Example
Some Signal Issues
Handling Signals in the ATM Example
Threadsafe Library Functions and System Calls
Threadsafe and Reentrant Functions
Example of Thread-Unsafe and Threadsafe Versions of the Same Function
Functions That Return Pointers to Static Data
Library Use of errno
The Pthreads Standard Specifies Which Functions Must Be Threadsafe
Using Thread-Unsafe Functions in a Multithreaded Program
Cancellation-Safe Library Functions and System Calls
Asynchronous Cancellation-Safe Functions
Cancellation Points in System and Library Calls
Thread-Blocking Library Functions and System Calls
Threads and Process Management
Calling fork from a Thread
Calling exec from a Thread
Process Exit and Threads
Multiprocessor Memory Synchronization
Chapter 6 - Practical Considerations
Overview
Understanding Pthreads Implementation
Two Worlds
Two Kinds of Threads
Who’s Providing the Thread?
Debugging
Deadlock
Race Conditions
Event Ordering
Less Is Better
Trace Statements
Debugger Support for Threads
Example: Debugging the ATM Server
Performance
The Costs of Sharing Too Much—Locking
Thread Overhead
Synchronization Overhead
How Do Your Threads Spend Their Time?
Performance in the ATM Server Example
Conclusion
Appendix A - Pthreads and DCE
The Structure of a DCE Server
What Does the DCE Programmer Have to Do?
Example: The ATM as a DCE Server
Appendix B - Pthreads Draft 4 vs. the Final Standard
Detaching a Thread
Mutex Variables
Condition Variables
Thread Attributes
The pthread_once Function
Keys
Cancellation
Scheduling
Signals
Threadsafe System Interfaces
Error Reporting
System Interfaces and Cancellation-Safety
Process-Blocking Calls
Process Management
Appendix C - Pthreads Quick Reference
Preface
It's been quite a while since the people from whom we get our project assignments
accepted the excuse "Gimme a break! I can only do one thing at a time!" It used to be
such a good excuse, too, when things moved just a bit slower and a good day was
measured in written lines of code. In fact, today we often do many things at a time.
We finish off breakfast on the way into work; we scan the Internet for sports scores
and stock prices while our application is building; we'd even read the morning paper
in the shower if the right technology were in place!
Being busy with multiple things is nothing new, though. (We'll just give it a new
computer-age name, like multitasking, because computers are happiest when we
avoid describing them in anthropomorphic terms.) It's the way of the natural
world: we wouldn't be able to write this book if all the body parts needed to keep our
fingers moving and our brains engaged didn't work together at the same time. It's the
way of the mechanical world: we wouldn't have been able to get to this lovely
prefabricated office building to do our work if the various, clanking parts of our
automobiles didn't work together (most of the time). It's the way of the social and
business world: three authoring tasks went into the making of this book, and the
number of tasks, all happening at once, grew exponentially as it went into its review
cycles and entered production.
Computer hardware and operating systems have been capable of multitasking for
years. CPUs using a RISC (reduced instruction set computing) microprocessor break
down the processing of individual machine instructions into a number of separate
tasks. By pipelining each instruction through each task, a RISC machine can have
many instructions in progress at the same time. The end result is the heralded speed
and throughput of RISC processors. Time-sharing operating systems have been
allowing users nearly simultaneous access to the processor for longer than we can
remember. Their ability to schedule different tasks (typically called processes) really
pays off when separate tasks can actually execute simultaneously on separate CPUs
in a multiprocessor system.