
Stephen Boyd, Convex Optimization (PDF)

Cover
Convex Optimization
Title
Copyright
© Cambridge University Press 2004
ISBN 978-0-521-83378-3
Dedications
Contents
Preface
Chapter 1: Introduction
1.1 Mathematical optimization
1.1.1 Applications
1.1.2 Solving optimization problems
1.2 Least-squares and linear programming
1.2.1 Least-squares problems
1.2.2 Linear programming
Solving linear programs
Using linear programming
1.3 Convex optimization
1.3.1 Solving convex optimization problems
1.3.2 Using convex optimization
1.4 Nonlinear optimization
1.4.1 Local optimization
1.4.2 Global optimization
1.4.3 Role of convex optimization in nonconvex problems
Initialization for local optimization
Convex heuristics for nonconvex optimization
Bounds for global optimization
1.5 Outline
1.5.1 Part I: Theory
1.5.2 Part II: Applications
1.5.3 Part III: Algorithms
1.5.4 Appendices
1.5.5 Comments on examples
1.5.6 Comments on exercises
1.6 Notation
Bibliography
Part I: Theory
Chapter 2: Convex sets
2.1 Affine and convex sets
2.1.1 Lines and line segments
2.1.2 Affine sets
2.1.3 Affine dimension and relative interior
2.1.4 Convex sets
2.1.5 Cones
2.2 Some important examples
2.2.1 Hyperplanes and halfspaces
2.2.2 Euclidean balls and ellipsoids
2.2.3 Norm balls and norm cones
2.2.4 Polyhedra
2.2.5 The positive semidefinite cone
2.3 Operations that preserve convexity
2.3.1 Intersection
2.3.2 Affine functions
2.3.3 Linear-fractional and perspective functions
Linear-fractional functions
2.4 Generalized inequalities
2.4.1 Proper cones and generalized inequalities
2.4.2 Minimum and minimal elements
2.5 Separating and supporting hyperplanes
2.5.1 Separating hyperplane theorem
Strict separation
Converse separating hyperplane theorems
2.5.2 Supporting hyperplanes
2.6 Dual cones and generalized inequalities
2.6.1 Dual cones
2.6.2 Dual generalized inequalities
2.6.3 Minimum and minimal elements via dual inequalities
Dual characterization of minimal elements
Bibliography
Exercises
Chapter 3: Convex functions
3.1 Basic properties and examples
3.1.1 Definition
3.1.2 Extended-value extensions
3.1.3 First-order conditions
3.1.4 Second-order conditions
3.1.5 Examples
3.1.6 Sublevel sets
3.1.7 Epigraph
3.1.8 Jensen’s inequality and extensions
3.1.9 Inequalities
3.2 Operations that preserve convexity
3.2.1 Nonnegative weighted sums
3.2.2 Composition with an affine mapping
3.2.3 Pointwise maximum and supremum
Representation as pointwise supremum of affine functions
3.2.4 Composition
Scalar composition
Vector composition
3.2.5 Minimization
3.2.6 Perspective of a function
3.3 The conjugate function
3.3.1 Definition and examples
3.3.2 Basic properties
Conjugate of the conjugate
Differentiable functions
3.4 Quasiconvex functions
3.4.1 Definition and examples
3.4.2 Basic properties
3.4.3 Differentiable quasiconvex functions
First-order conditions
Proof of second-order conditions for quasiconvexity
3.4.4 Operations that preserve quasiconvexity
Nonnegative weighted maximum
Minimization
3.4.5 Representation via family of convex functions
3.5 Log-concave and log-convex functions
3.5.1 Definition
3.5.2 Properties
Twice differentiable log-convex/concave functions
Multiplication, addition, and integration
Integration of log-concave functions
3.6 Convexity with respect to generalized inequalities
3.6.1 Monotonicity with respect to a generalized inequality
Gradient conditions for monotonicity
3.6.2 Convexity with respect to a generalized inequality
Dual characterization of K-convexity
Composition theorem
Bibliography
Exercises
Chapter 4: Convex optimization problems
4.1 Optimization problems
4.1.1 Basic terminology
Optimal and locally optimal points
Feasibility problems
4.1.2 Expressing problems in standard form
Maximization problems
4.1.3 Equivalent problems
Change of variables
Transformation of objective and constraint functions
Slack variables
Eliminating equality constraints
Eliminating linear equality constraints
Introducing equality constraints
Optimizing over some variables
Epigraph problem form
Implicit and explicit constraints
4.1.4 Parameter and oracle problem descriptions
4.2 Convex optimization
4.2.1 Convex optimization problems in standard form
Concave maximization problems
Abstract form convex optimization problem
4.2.2 Local and global optima
4.2.3 An optimality criterion for differentiable f0
Proof of optimality condition
Unconstrained problems
Problems with equality constraints only
Minimization over the nonnegative orthant
4.2.4 Equivalent convex problems
Eliminating equality constraints
Introducing equality constraints
Slack variables
Epigraph problem form
Minimizing over some variables
4.2.5 Quasiconvex optimization
Locally optimal solutions and optimality conditions
Quasiconvex optimization via convex feasibility problems
4.3 Linear optimization problems
Standard and inequality form linear programs
Converting LPs to standard form
4.3.1 Examples
Diet problem
Chebyshev center of a polyhedron
Dynamic activity planning
Chebyshev inequalities
Piecewise-linear minimization
4.3.2 Linear-fractional programming
Transforming to a linear program
Generalized linear-fractional programming
4.4 Quadratic optimization problems
4.4.1 Examples
4.4.2 Second-order cone programming
Robust linear programming
Linear programming with random constraints
Minimal surface
4.5 Geometric programming
4.5.1 Monomials and posynomials
4.5.2 Geometric programming
Extensions of geometric programming
4.5.3 Geometric program in convex form
4.5.4 Examples
Frobenius norm diagonal scaling
Design of a cantilever beam
Minimizing spectral radius via Perron-Frobenius theory
4.6 Generalized inequality constraints
4.6.1 Conic form problems
4.6.2 Semidefinite programming
4.6.3 Examples
4.7 Vector optimization
4.7.1 General and convex vector optimization problems
4.7.2 Optimal points and values
4.7.3 Pareto optimal points and values
4.7.4 Scalarization
Scalarization of convex vector optimization problems
4.7.5 Multicriterion optimization
Trade-off analysis
Optimal trade-off surface
Scalarizing multicriterion problems
4.7.6 Examples
Regularized least-squares
Risk-return trade-off in portfolio optimization
Bibliography
Exercises
Chapter 5: Duality
5.1 The Lagrange dual function
5.1.1 The Lagrangian
5.1.2 The Lagrange dual function
5.1.3 Lower bounds on optimal value
5.1.4 Linear approximation interpretation
5.1.5 Examples
5.1.6 The Lagrange dual function and conjugate functions
Entropy maximization
Minimum volume covering ellipsoid
5.2 The Lagrange dual problem
5.2.1 Making dual constraints explicit
Lagrange dual of inequality form LP
5.2.2 Weak duality
5.2.3 Strong duality and Slater’s constraint qualification
5.2.4 Examples
5.2.5 Mixed strategies for matrix games
5.3 Geometric interpretation
5.3.1 Weak and strong duality via set of values
Epigraph variation
5.3.2 Proof of strong duality under constraint qualification
5.3.3 Multicriterion interpretation
5.4 Saddle-point interpretation
5.4.1 Max-min characterization of weak and strong duality
5.4.2 Saddle-point interpretation
5.4.3 Game interpretation
5.4.4 Price or tax interpretation
5.5 Optimality conditions
5.5.1 Certificate of suboptimality and stopping criteria
5.5.2 Complementary slackness
5.5.3 KKT optimality conditions
KKT conditions for nonconvex problems
KKT conditions for convex problems
5.5.4 Mechanics interpretation of KKT conditions
5.5.5 Solving the primal problem via the dual
5.6 Perturbation and sensitivity analysis
5.6.1 The perturbed problem
5.6.2 A global inequality
Sensitivity interpretations
5.6.3 Local sensitivity analysis
Shadow price interpretation
5.7 Examples
5.7.1 Introducing new variables and equality constraints
5.7.2 Transforming the objective
5.7.3 Implicit constraints
5.8 Theorems of alternatives
5.8.1 Weak alternatives via the dual function
Strict inequalities
5.8.2 Strong alternatives
Strict inequalities
Nonstrict inequalities
5.8.3 Examples
5.9 Generalized inequalities
5.9.1 The Lagrange dual
Slater’s condition and strong duality
5.9.2 Optimality conditions
Complementary slackness
KKT conditions
5.9.3 Perturbation and sensitivity analysis
5.9.4 Theorems of alternatives
Weak alternatives
Strong alternatives
Bibliography
Exercises
Part II: Applications
Chapter 6: Approximation and fitting
6.1 Norm approximation
6.1.1 Basic norm approximation problem
Approximation interpretation
Estimation interpretation
Geometric interpretation
Design interpretation
Weighted norm approximation problems
Least-squares approximation
Chebyshev or minimax approximation
Sum of absolute residuals approximation
6.1.2 Penalty function approximation
Interpretation
Example
Sensitivity to outliers or large errors
Small residuals and ℓ1-norm approximation
6.1.3 Approximation with constraints
Nonnegativity constraints on variables
Variable bounds
Probability distribution
Norm ball constraint
6.2 Least-norm problems
Reformulation as norm approximation problem
Control or design interpretation
Estimation interpretation
Geometric interpretation
Least-squares solution of linear equations
Least-penalty problems
Sparse solutions via least ℓ1-norm
6.3 Regularized approximation
6.3.1 Bi-criterion formulation
6.3.2 Regularization
Interpretations
Tikhonov regularization
Smoothing regularization
ℓ1-norm regularization
6.3.3 Reconstruction, smoothing, and de-noising
Quadratic smoothing
Quadratic smoothing example
Total variation reconstruction
Total variation reconstruction example
6.4 Robust approximation
6.4.1 Stochastic robust approximation
6.4.2 Worst-case robust approximation
Finite set
Norm bound error
Uncertainty ellipsoids
Norm bounded error with linear structure
6.5 Function fitting and interpolation
6.5.1 Function families
Polynomials
Piecewise-linear functions
Piecewise polynomials and splines
6.5.2 Constraints
Function value interpolation and inequalities
Derivative constraints
Integral constraints
6.5.3 Fitting and interpolation problems
Minimum norm function fitting
Least-norm interpolation
Interpolation, extrapolation, and bounding
6.5.4 Sparse descriptions and basis pursuit
Time-frequency analysis via basis pursuit
6.5.5 Interpolation with convex functions
Bounding values of an interpolating convex function
Interpolation with monotone convex functions
Bounding consumer preference
Bibliography
Exercises
Chapter 7: Statistical estimation
7.1 Parametric distribution estimation
7.1.1 Maximum likelihood estimation
Linear measurements with IID noise
ML interpretation of penalty function approximation
Counting problems with Poisson distribution
Covariance estimation for Gaussian variables
7.1.2 Maximum a posteriori probability estimation
Linear measurements with IID noise
MAP with perfect linear measurements
7.2 Nonparametric distribution estimation
Prior information
Bounding probabilities and expected values
Maximum likelihood estimation
Maximum entropy
Minimum Kullback-Leibler divergence
7.3 Optimal detector design and hypothesis testing
7.3.1 Deterministic and randomized detectors
7.3.2 Detection probability matrix
7.3.3 Optimal detector design
Limits on errors and detection probabilities
Minimax detector design
Bayes detector design
Bias, mean-square error, and other quantities
7.3.4 Multicriterion formulation and scalarization
MAP and ML detectors
7.3.5 Binary hypothesis testing
7.3.6 Robust detectors
Robust minimax detector for finite P
Robust minimax detector for polyhedral P
7.4 Chebyshev and Chernoff bounds
7.4.1 Chebyshev bounds
Probability bounds with known first and second moments
7.4.2 Chernoff bounds
Chernoff bound for a Gaussian variable on a polyhedron
7.4.3 Example
Chebyshev bounds
Chernoff bounds
7.5 Experiment design
7.5.1 The relaxed experiment design problem
7.5.2 Scalarizations
D-optimal design
E-optimal design
A-optimal design
Optimal experiment design and duality
Experiment design example
7.5.3 Extensions
Resource limits
Multiple measurements per experiment
Bibliography
Exercises
Chapter 8: Geometric problems
8.1 Projection on a set
8.1.1 Projecting a point on a convex set
Euclidean projection on a polyhedron
Euclidean projection on a proper cone
8.1.2 Separating a point and a convex set
Separating a point from a polyhedron
8.1.3 Projection and separation via indicator and support functions
8.2 Distance between sets
8.2.1 Computing the distance between convex sets
Euclidean distance between polyhedra
Separating polyhedra
8.2.2 Distance and separation via indicator and support functions
8.3 Euclidean distance and angle problems
8.3.1 Gram matrix and realizability
Realizability
Angle and distance constraints
Singular value and condition number constraints
Ellipsoid and simplex volume
8.3.2 Problems involving angles only
8.3.3 Euclidean distance problems
8.4 Extremal volume ellipsoids
8.4.1 The Löwner-John ellipsoid
Minimum volume ellipsoid covering union of ellipsoids
Efficiency of Löwner-John ellipsoidal approximation
Efficiency of Löwner-John ellipsoidal approximation for symmetric sets
Approximating a norm by a quadratic norm
8.4.2 Maximum volume inscribed ellipsoid
Maximum volume ellipsoid in a polyhedron
Maximum volume ellipsoid in an intersection of ellipsoids
Efficiency of ellipsoidal inner approximations
8.4.3 Affine invariance of extremal volume ellipsoids
8.5 Centering
8.5.1 Chebyshev center
Chebyshev center of a convex set
Chebyshev center of a polyhedron
Euclidean Chebyshev center of intersection of ellipsoids
8.5.2 Maximum volume ellipsoid center
8.5.3 Analytic center of a set of inequalities
Analytic center of a set of linear inequalities
Analytic center of a linear matrix inequality
8.6 Classification
8.6.1 Linear discrimination
Linear discrimination alternative
Robust linear discrimination
Support vector classifier
Approximate linear discrimination via logistic modeling
8.6.2 Nonlinear discrimination
Quadratic discrimination
Polynomial discrimination
8.7 Placement and location
8.7.1 Linear facility location problems
8.7.2 Placement constraints
8.7.3 Nonlinear facility location problems
8.7.4 Location problems with path constraints
Path constraints
Minimax delay placement
8.8 Floor planning
8.8.1 Relative positioning constraints
8.8.2 Floor planning via convex optimization
Minimum spacing
Minimum cell area
Aspect ratio constraints
Alignment constraints
Symmetry constraints
Similarity constraints
Containment constraints
Distance constraints
8.8.3 Floor planning via geometric programming
Bibliography
Exercises
Part III: Algorithms
Chapter 9: Unconstrained minimization
9.1 Unconstrained minimization problems
9.1.1 Examples
Quadratic minimization and least-squares
Unconstrained geometric programming
Analytic center of a linear matrix inequality
Initial point and sublevel set
9.1.2 Strong convexity and implications
Condition number of sublevel sets
The strong convexity constants
9.2 Descent methods
Exact line search
Backtracking line search
9.3 Gradient descent method
9.3.1 Convergence analysis
Analysis for backtracking line search
9.3.2 Examples
A quadratic problem in R^2
A nonquadratic problem in R^2
A problem in R^100
Gradient method and condition number
Conclusions
9.4 Steepest descent method
9.4.1 Steepest descent for Euclidean and quadratic norms
Steepest descent for Euclidean norm
Steepest descent for quadratic norm
Interpretation via change of coordinates
9.4.2 Steepest descent for ℓ1-norm
9.4.3 Convergence analysis
9.4.4 Discussion and examples
Choice of norm for steepest descent
Examples
9.5 Newton’s method
9.5.1 The Newton step
Minimizer of second-order approximation
Steepest descent direction in Hessian norm
Solution of linearized optimality condition
Affine invariance of the Newton step
The Newton decrement
9.5.2 Newton’s method
9.5.3 Convergence analysis
Idea and outline of convergence proof
Damped Newton phase
Quadratically convergent phase
9.5.4 Examples
Example in R^2
Example in R^100
Example in R^10000
Affine invariance of Newton’s method
Summary
9.6 Self-concordance
9.6.1 Definition and examples
Self-concordant functions on R^n
9.6.2 Self-concordant calculus
Scaling and sum
Composition with affine function
Composition with logarithm
9.6.3 Properties of self-concordant functions
Upper and lower bounds on second derivatives
Bound on suboptimality
9.6.4 Analysis of Newton’s method for self-concordant functions
Damped Newton phase
Quadratically convergent phase
The final complexity bound
9.6.5 Discussion and numerical examples
A family of self-concordant functions
Practical importance of self-concordance
9.7 Implementation
9.7.1 Pre-computation for line searches
Composition with an affine function
Analytic center of a linear matrix inequality
9.7.2 Computing the Newton step
Band structure
Sparse structure
Diagonal plus low rank
Bibliography
Exercises
Chapter 10: Equality constrained minimization
10.1 Equality constrained minimization problems
10.1.1 Equality constrained convex quadratic minimization
Nonsingularity of the KKT matrix
10.1.2 Eliminating equality constraints
Choice of elimination matrix
10.1.3 Solving equality constrained problems via the dual
10.2 Newton’s method with equality constraints
10.2.1 The Newton step
Definition via second-order approximation
Solution of linearized optimality conditions
The Newton decrement
Feasible descent direction
Affine invariance
10.2.2 Newton’s method with equality constraints
10.2.3 Newton’s method and elimination
10.2.4 Convergence analysis
Assumptions
Bounded inverse KKT matrix assumption
Analysis via the eliminated problem
10.3 Infeasible start Newton method
10.3.1 Newton step at infeasible points
Interpretation as primal-dual Newton step
Residual norm reduction property
Full step feasibility property
10.3.2 Infeasible start Newton method
Using infeasible start Newton method to simplify initialization
10.3.3 Convergence analysis
Assumptions
Comparison with standard Newton method
A basic inequality
Damped Newton phase
Quadratically convergent phase
10.3.4 Convex-concave games
Solution via infeasible start Newton method
10.3.5 Examples
A simple example
An infeasible example
A convex-concave game
10.4 Implementation
10.4.1 Elimination
10.4.2 Solving KKT systems
Solving full KKT system
Solving KKT system via elimination
Elimination with singular H
10.4.3 Examples
Equality constrained analytic centering
Optimal network flow
Optimal control
Analytic center of a linear matrix inequality
Bibliography
Exercises
Chapter 11: Interior-point methods
11.1 Inequality constrained minimization problems
11.2 Logarithmic barrier function and central path
11.2.1 Logarithmic barrier
11.2.2 Central path
Dual points from central path
Interpretation via KKT conditions
Force field interpretation
11.3 The barrier method
11.3.1 The barrier method
Accuracy of centering
Choice of μ
Choice of t^(0)
Infeasible start Newton method
11.3.2 Examples
Linear programming in inequality form
Geometric programming
A family of standard form LPs
11.3.3 Convergence analysis
11.3.4 Newton step for modified KKT equations
11.4 Feasibility and phase I methods
11.4.1 Basic phase I method
Sum of infeasibilities
Termination near the phase II central path
11.4.2 Phase I via infeasible start Newton method
11.4.3 Examples
Feasibility using infeasible start Newton method
11.5 Complexity analysis via self-concordance
11.5.1 Self-concordance assumption
11.5.2 Newton iterations per centering step
11.5.3 Total number of Newton iterations
Choosing μ as a function of m
11.5.4 Feasibility problems
Feasibility problems with equality constraints
11.5.5 Combined phase I/phase II complexity
11.5.6 Summary
11.6 Problems with generalized inequalities
11.6.1 Logarithmic barrier and central path
Logarithmic barrier functions for generalized inequalities
The central path
Dual points on central path
11.6.2 Barrier method
11.6.3 Examples
A small SOCP
A small SDP
A family of SDPs
11.6.4 Complexity analysis via self-concordance
Generalized logarithm for dual cone
Derivation of the basic bound
11.7 Primal-dual interior-point methods
11.7.1 Primal-dual search direction
Comparison with barrier method search directions
11.7.2 The surrogate duality gap
11.7.3 Primal-dual interior-point method
Line search
11.7.4 Examples
Small LP and GP
A family of LPs
11.8 Implementation
Sparse problems
Separable objective and a few linear inequality constraints
11.8.1 Standard form linear programming
11.8.2 ℓ1-norm approximation
11.8.3 Semidefinite programming in inequality form
11.8.4 Network rate optimization
Bibliography
Exercises
Appendices
Appendix A: Mathematical background
A.1 Norms
A.1.1 Inner product, Euclidean norm, and angle
A.1.2 Norms, distance, and unit ball
A.1.3 Examples
A.1.4 Equivalence of norms
A.1.5 Operator norms
A.1.6 Dual norm
A.2 Analysis
A.2.1 Open and closed sets
A.2.2 Supremum and infimum
A.3 Functions
A.3.1 Function notation
A.3.2 Continuity
A.3.3 Closed functions
A.4 Derivatives
A.4.1 Derivative and gradient
A.4.2 Chain rule
A.4.3 Second derivative
A.4.4 Chain rule for second derivative
A.5 Linear algebra
A.5.1 Range and nullspace
A.5.2 Symmetric eigenvalue decomposition
A.5.3 Generalized eigenvalue decomposition
A.5.4 Singular value decomposition
Bibliography
Appendix B: Problems involving two quadratic functions
B.1 Single constraint quadratic optimization
B.2 The S-procedure
B.3 The field of values of two symmetric matrices
B.4 Proofs of the strong duality results
Bibliography
Appendix C: Numerical linear algebra background
C.1 Matrix structure and algorithm complexity
C.1.1 Complexity analysis via flop count
C.1.2 Cost of basic matrix-vector operations
C.2 Solving linear equations with factored matrices
C.2.1 Linear equations that are easy to solve
C.2.2 The factor-solve method
C.3 LU, Cholesky, and LDL^T factorization
C.3.1 LU factorization
C.3.2 Cholesky factorization
C.3.3 LDL^T factorization
C.4 Block elimination and Schur complements
C.4.1 Eliminating a block of variables
C.4.2 Block elimination and structure
C.4.3 The matrix inversion lemma
C.5 Solving underdetermined linear equations
Bibliography
References
Notation
Index
Back Cover
Convex Optimization
Convex Optimization
Stephen Boyd, Department of Electrical Engineering, Stanford University
Lieven Vandenberghe, Electrical Engineering Department, University of California, Los Angeles
Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi
Cambridge University Press, The Edinburgh Building, Cambridge, CB2 8RU, UK
Published in the United States of America by Cambridge University Press, New York
http://www.cambridge.org
Information on this title: www.cambridge.org/9780521833783
© Cambridge University Press 2004
This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.
First published 2004
Seventh printing with corrections 2009
Printed in the United Kingdom at the University Press, Cambridge
A catalogue record for this publication is available from the British Library
Library of Congress Cataloguing-in-Publication data
Boyd, Stephen P.
Convex Optimization / Stephen Boyd & Lieven Vandenberghe
p. cm.
Includes bibliographical references and index.
ISBN 0 521 83378 7
1. Mathematical optimization. 2. Convex functions. I. Vandenberghe, Lieven. II. Title.
QA402.5.B69 2004
519.6–dc22
2003063284
ISBN 978-0-521-83378-3 hardback
Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.
For Anna, Nicholas, and Nora
Daniël and Margriet
Contents

Preface xi

1 Introduction 1
1.1 Mathematical optimization 1
1.2 Least-squares and linear programming 4
1.3 Convex optimization 7
1.4 Nonlinear optimization 9
1.5 Outline 11
1.6 Notation 14
Bibliography 16

I Theory 19

2 Convex sets 21
2.1 Affine and convex sets 21
2.2 Some important examples 27
2.3 Operations that preserve convexity 35
2.4 Generalized inequalities 43
2.5 Separating and supporting hyperplanes 46
2.6 Dual cones and generalized inequalities 51
Bibliography 59
Exercises 60

3 Convex functions 67
3.1 Basic properties and examples 67
3.2 Operations that preserve convexity 79
3.3 The conjugate function 90
3.4 Quasiconvex functions 95
3.5 Log-concave and log-convex functions 104
3.6 Convexity with respect to generalized inequalities 108
Bibliography 112
Exercises 113