Linear and Nonlinear Programming, Fourth Edition (Springer)
Preface
Contents
1 Introduction
1.1 Optimization
1.2 Types of Problems
1.3 Size of Problems
1.4 Iterative Algorithms and Convergence
Part I Linear Programming
2 Basic Properties of Linear Programs
2.1 Introduction
2.2 Examples of Linear Programming Problems
2.3 Basic Solutions
2.4 The Fundamental Theorem of Linear Programming
2.5 Relations to Convexity
2.6 Exercises
3 The Simplex Method
3.1 Pivots
3.2 Adjacent Extreme Points
3.3 Determining a Minimum Feasible Solution
3.4 Computational Procedure: Simplex Method
3.5 Finding a Basic Feasible Solution
3.6 Matrix Form of the Simplex Method
3.7 Simplex Method for Transportation Problems
3.8 Decomposition
3.9 Summary
3.10 Exercises
4 Duality and Complementarity
4.1 Dual Linear Programs
4.2 The Duality Theorem
4.3 Relations to the Simplex Procedure
4.4 Sensitivity and Complementary Slackness
4.5 Max Flow–Min Cut Theorem
4.6 The Dual Simplex Method
4.7 The Primal-Dual Algorithm
4.8 Summary
4.9 Exercises
5 Interior-Point Methods
5.1 Elements of Complexity Theory
5.2 The Simplex Method Is Not Polynomial-Time
5.3 The Ellipsoid Method
5.4 The Analytic Center
5.5 The Central Path
5.6 Solution Strategies
5.7 Termination and Initialization
5.8 Summary
5.9 Exercises
6 Conic Linear Programming
6.1 Convex Cones
6.2 Conic Linear Programming Problem
6.3 Farkas' Lemma for Conic Linear Programming
6.4 Conic Linear Programming Duality
6.5 Complementarity and Solution Rank of SDP
6.6 Interior-Point Algorithms for Conic Linear Programming
6.7 Summary
6.8 Exercises
Part II Unconstrained Problems
7 Basic Properties of Solutions and Algorithms
7.1 First-Order Necessary Conditions
7.2 Examples of Unconstrained Problems
7.3 Second-Order Conditions
7.4 Convex and Concave Functions
7.5 Minimization and Maximization of Convex Functions
7.6 Zero-Order Conditions
7.7 Global Convergence of Descent Algorithms
7.8 Speed of Convergence
7.9 Summary
7.10 Exercises
8 Basic Descent Methods
8.1 Line Search Algorithms
8.2 The Method of Steepest Descent
8.3 Applications of the Convergence Theory
8.4 Accelerated Steepest Descent
8.5 Newton's Method
8.6 Coordinate Descent Methods
8.7 Summary
8.8 Exercises
9 Conjugate Direction Methods
9.1 Conjugate Directions
9.2 Descent Properties of the Conjugate Direction Method
9.3 The Conjugate Gradient Method
9.4 The C–G Method as an Optimal Process
9.5 The Partial Conjugate Gradient Method
9.6 Extension to Nonquadratic Problems
9.7 Parallel Tangents
9.8 Exercises
10 Quasi-Newton Methods
10.1 Modified Newton Method
10.2 Construction of the Inverse
10.3 Davidon-Fletcher-Powell Method
10.4 The Broyden Family
10.5 Convergence Properties
10.6 Scaling
10.7 Memoryless Quasi-Newton Methods
10.8 Combination of Steepest Descent and Newton's Method
10.9 Summary
10.10 Exercises
Part III Constrained Minimization
11 Constrained Minimization Conditions
11.1 Constraints
11.2 Tangent Plane
11.3 First-Order Necessary Conditions (Equality Constraints)
11.4 Examples
11.5 Second-Order Conditions
11.6 Eigenvalues in Tangent Subspace
11.7 Sensitivity
11.8 Inequality Constraints
11.9 Zero-Order Conditions and Lagrangian Relaxation
11.10 Summary
11.11 Exercises
12 Primal Methods
12.1 Advantage of Primal Methods
12.2 Feasible Direction Methods
12.3 Active Set Methods
12.4 The Gradient Projection Method
12.5 Convergence Rate of the Gradient Projection Method
12.6 The Reduced Gradient Method
12.7 Convergence Rate of the Reduced Gradient Method
12.8 Variations
12.9 Summary
12.10 Exercises
13 Penalty and Barrier Methods
13.1 Penalty Methods
13.2 Barrier Methods
13.3 Properties of Penalty and Barrier Functions
13.4 Newton's Method and Penalty Functions
13.5 Conjugate Gradients and Penalty Methods
13.6 Normalization of Penalty Functions
13.7 Penalty Functions and Gradient Projection
13.8 Exact Penalty Functions
13.9 Summary
13.10 Exercises
14 Duality and Dual Methods
14.1 Global Duality
14.2 Local Duality
14.3 Canonical Convergence Rate of Dual Steepest Ascent
14.4 Separable Problems and Their Duals
14.5 Augmented Lagrangian
14.6 The Method of Multipliers
14.7 The Alternating Direction Method of Multipliers
14.8 Cutting Plane Methods
14.9 Exercises
15 Primal-Dual Methods
15.1 The Standard Problem
15.2 A Simple Merit Function
15.3 Basic Primal-Dual Methods
15.4 Modified Newton Methods
15.5 Descent Properties
15.6 Rate of Convergence
15.7 Primal-Dual Interior Point Methods
15.8 Summary
15.9 Exercises
A Mathematical Review
A.1 Sets
A.2 Matrix Notation
A.3 Spaces
A.4 Eigenvalues and Quadratic Forms
A.5 Topological Concepts
A.6 Functions
B Convex Sets
B.1 Basic Definitions
B.2 Hyperplanes and Polytopes
B.3 Separating and Supporting Hyperplanes
B.4 Extreme Points
C Gaussian Elimination
D Basic Network Concepts
D.1 Flows in Networks
D.2 Tree Procedure
D.3 Capacitated Networks
Bibliography
Index
International Series in Operations Research & Management Science
David G. Luenberger
Yinyu Ye
Linear and Nonlinear Programming
Fourth Edition

International Series in Operations Research & Management Science
Volume 228
Series Editor: Camille C. Price, Stephen F. Austin State University, TX, USA
Associate Series Editor: Joe Zhu, Worcester Polytechnic Institute, MA, USA
Founding Series Editor: Frederick S. Hillier, Stanford University, CA, USA
More information about this series at http://www.springer.com/series/6161

David G. Luenberger • Yinyu Ye
Linear and Nonlinear Programming
Fourth Edition

David G. Luenberger
Department of Management Science and Engineering
Stanford University
Stanford, CA, USA

Yinyu Ye
Department of Management Science and Engineering
Stanford University
Stanford, CA, USA

ISSN 0884-8289    ISSN 2214-7934 (electronic)
International Series in Operations Research & Management Science
ISBN 978-3-319-18841-6    ISBN 978-3-319-18842-3 (eBook)
DOI 10.1007/978-3-319-18842-3
Library of Congress Control Number: 2015942692

Springer Cham Heidelberg New York Dordrecht London
© Springer International Publishing Switzerland 1973, 1984 (2003 reprint), 2008, 2016

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.

Printed on acid-free paper

Springer International Publishing AG Switzerland is part of Springer Science+Business Media (www.springer.com)
To Susan, Robert, Jill, and Jenna; Daisun, Fei, Tim, and Kaylee
Preface

This book is intended as a text covering the central concepts of practical optimization techniques. It is designed for either self-study by professionals or classroom work at the undergraduate or graduate level for students who have a technical background in engineering, mathematics, or science. Like the field of optimization itself, which involves many classical disciplines, the book should be useful to system analysts, operations researchers, numerical analysts, management scientists, and other specialists from the host of disciplines from which practical optimization applications are drawn. The prerequisites for convenient use of the book are relatively modest; the prime requirement being some familiarity with introductory elements of linear algebra. Certain sections and developments do assume some knowledge of more advanced concepts of linear algebra, such as eigenvector analysis, or some background in sets of real numbers, but the text is structured so that the mainstream of the development can be faithfully pursued without reliance on this more advanced background material.

Although the book covers primarily material that is now fairly standard, this edition emphasizes methods that are both state-of-the-art and popular. One major insight is the connection between the purely analytical character of an optimization problem, expressed perhaps by properties of the necessary conditions, and the behavior of algorithms used to solve a problem. This was a major theme of the first edition of this book and the fourth edition expands and further illustrates this relationship.

As in the earlier editions, the material in this fourth edition is organized into three separate parts. Part I is a self-contained introduction to linear programming, a key component of optimization theory. The presentation in this part is fairly conventional, covering the main elements of the underlying theory of linear programming, many of the most effective numerical algorithms, and many of its important special applications. Part II, which is independent of Part I, covers the theory of unconstrained optimization, including both derivations of the appropriate optimality conditions and an introduction to basic algorithms. This part of the book explores the general properties of algorithms and defines various notions of convergence. Part III