Using MPI
Scientific and Engineering Computation
William Gropp and Ewing Lusk, editors; Janusz Kowalik, founding editor

A complete list of books published in the Scientific and Engineering Computation series appears at the back of this book.
Using MPI
Portable Parallel Programming with the Message-Passing Interface
Third Edition

William Gropp
Ewing Lusk
Anthony Skjellum

The MIT Press
Cambridge, Massachusetts
London, England
© 2014 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

This book was set in LaTeX by the authors and was printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Gropp, William.
Using MPI : portable parallel programming with the Message-Passing Interface / William Gropp, Ewing Lusk, and Anthony Skjellum. — Third edition.
p. cm. — (Scientific and engineering computation)
Includes bibliographical references and index.
ISBN 978-0-262-52739-2 (pbk. : alk. paper)
1. Parallel programming (Computer science) 2. Parallel computers—Programming. 3. Computer interfaces. I. Lusk, Ewing. II. Skjellum, Anthony. III. Title. IV. Title: Using Message-Passing Interface.
QA76.642.G76 2014
005.2'75—dc23
2014033587

10 9 8 7 6 5 4 3 2 1
To Patty, Brigid, and Jennifer
Contents

Series Foreword  xiii
Preface to the Third Edition  xv
Preface to the Second Edition  xix
Preface to the First Edition  xxi

1  Background  1
1.1  Why Parallel Computing?  1
1.2  Obstacles to Progress  2
1.3  Why Message Passing?  3
1.3.1  Parallel Computational Models  3
1.3.2  Advantages of the Message-Passing Model  9
1.4  Evolution of Message-Passing Systems  10
1.5  The MPI Forum  11

2  Introduction to MPI  13
2.1  Goal  13
2.2  What Is MPI?  13
2.3  Basic MPI Concepts  14
2.4  Other Interesting Features of MPI  18
2.5  Is MPI Large or Small?  20
2.6  Decisions Left to the Implementor  21

3  Using MPI in Simple Programs  23
3.1  A First MPI Program  23
3.2  Running Your First MPI Program  28
3.3  A First MPI Program in C  29
3.4  Using MPI from Other Languages  29
3.5  Timing MPI Programs  31
3.6  A Self-Scheduling Example: Matrix-Vector Multiplication  32
3.7  Studying Parallel Performance  38
3.7.1  Elementary Scalability Calculations  39
3.7.2  Gathering Data on Program Execution  41
3.7.3  Instrumenting a Parallel Program with MPE Logging  42
3.7.4  Events and States  43
3.7.5  Instrumenting the Matrix-Matrix Multiply Program  43
3.7.6  Notes on Implementation of Logging  47
3.7.7  Graphical Display of Logfiles  48
3.8  Using Communicators  49
3.9  Another Way of Forming New Communicators  55
3.10  A Handy Graphics Library for Parallel Programs  57
3.11  Common Errors and Misunderstandings  60
3.12  Summary of a Simple Subset of MPI  62
3.13  Application: Computational Fluid Dynamics  62
3.13.1  Parallel Formulation  63
3.13.2  Parallel Implementation  65

4  Intermediate MPI  69
4.1  The Poisson Problem  70
4.2  Topologies  73
4.3  A Code for the Poisson Problem  81
4.4  Using Nonblocking Communications  91
4.5  Synchronous Sends and "Safe" Programs  94
4.6  More on Scalability  95
4.7  Jacobi with a 2-D Decomposition  98
4.8  An MPI Derived Datatype  100
4.9  Overlapping Communication and Computation  101
4.10  More on Timing Programs  105
4.11  Three Dimensions  106
4.12  Common Errors and Misunderstandings  107
4.13  Application: Nek5000/NekCEM  108

5  Fun with Datatypes  113