Foreword to the First Edition
Foreword to the Second Edition
Preface to the First Edition
Preface to the Second Edition
Contents
1 Introduction
1.1 Inter-disciplinary Decision Support: Motivation
1.2 The Structure of the Book
1.3 Basic Concepts and Underlying Issues
1.3.1 Artificial Intelligence
1.3.2 Operational Research (Operations Research)
1.3.3 Management Science
1.3.4 Feasible and Infeasible Solutions
1.3.5 Hard Constraints
1.3.6 Soft Constraints and Evaluation Functions
1.3.7 Deterministic Search
1.3.8 Optimization
1.3.9 Local and Global Optimum
1.3.10 Exhaustive Search
1.3.11 Complexity
1.3.12 Order (Big O Notation)
1.3.13 Heuristics
1.3.14 Constructive Heuristics
1.3.15 Local Search Heuristics
1.3.16 Hill Climbing
1.3.17 Metaheuristics
1.3.18 Evolutionary Methods
1.3.19 Exact Methods
1.3.20 Hyper-heuristics
1.3.21 Matheuristics
References
2 Classical Techniques
2.1 Introduction
2.2 Linear Programming
2.2.1 Introduction
2.2.2 The Linear Programming Form
2.2.2.1 A Simple Example
2.2.2.2 The General LP Format
2.2.3 Duality
2.2.4 Solution Techniques
2.3 Branch and Bound
2.3.1 Introduction
2.3.2 Branch and Bound Based on Partial Solutions
2.3.2.1 Example 1: Finding the Shortest Path
2.3.2.2 Example 2: Brown's Algorithm for Graph Coloring
2.3.3 A Generalization
2.3.3.1 Zykov's Algorithm for Graph Coloring
2.3.4 Other Issues
2.3.4.1 Bounds
2.3.4.2 Branching
2.3.4.3 Miscellaneous
2.4 Dynamic Programming
2.4.1 Introduction
2.4.2 Developing a DP Model
2.4.2.1 Forward Recursion and the Unbounded Knapsack Problem
2.4.2.2 Backward Recursion and a Production Planning Problem
2.4.3 Other Issues
2.5 Network Flow Programming
2.5.1 Introduction
2.5.2 The Maximum Flow Problem
2.5.2.1 Introduction
2.5.2.2 The Ford–Fulkerson Labeling Algorithm
2.5.3 Minimum Cost Flow Problem
2.5.3.1 Introduction
2.5.3.2 The Out-of-Kilter Algorithm
2.5.4 Other Issues
2.6 Some Useful Models
2.6.1 Shortest-Path Problems: DP Approaches
2.6.1.1 Bellman's Shortest-Path Algorithm
2.6.1.2 Floyd's Shortest-Path Algorithm
2.6.2 Transportation, Assignment and Transhipment Problems: Network Flow Approaches
2.6.2.1 The Transportation Problem
2.6.2.2 The Assignment and Transhipment Problems
2.6.2.3 Other Useful Models
2.7 Promising Areas for Future Application
2.7.1 Pre- and Post-processing
2.7.2 True Hybrids
2.7.3 Cross-fertilization
2.8 Tricks of the Trade
2.8.1 Introduction
2.8.2 Tips for Branch and Bound
2.8.3 Tips for Dynamic Programming
2.8.4 Tips for Network Flow Programming
2.9 Conclusions
References
3 Integer Programming
3.1 Introduction
3.1.1 Facility Location
3.1.2 Solving the Facility Location IP
3.1.3 Difficulties with Integer Programs
3.2 Be Creative in Formulations
3.2.1 Integer Quantities
3.2.2 Binary Decisions
3.2.3 Fixed-Charge Requirements
3.2.4 Logical Constraints
3.2.5 Sequencing Problems
3.3 Find Formulations with Strong Relaxations
3.4 Avoid Symmetry
3.5 Consider Formulations with Many Constraints
3.6 Consider Formulations with Many Variables
3.7 Modify Branch-and-Bound Parameters
3.7.1 Description of Problem
3.7.2 Linear Programming Solver
3.7.3 Choice of Branching Variable
3.7.4 Choice of Subproblem to Solve
3.7.5 Direction of Branching
3.7.6 Tolerances
3.8 Tricks of the Trade
3.9 Conclusion
References
4 Genetic Algorithms
4.1 Introduction
4.2 Basic GA Operators
4.2.1 Selection Methods
4.2.1.1 Fitness Proportionate Selection
4.2.1.2 Ordinal Selection
4.2.2 Recombination Operators
4.2.3 Mutation Operators
4.2.4 Replacement
4.3 Competent GAs
4.4 Efficiency Enhancement of Genetic Algorithms
References
5 Scatter Search
5.1 Introduction
5.2 Diversification Generation Method
5.3 Improvement Method
5.4 Reference Set Update
5.5 Subset Generation
5.6 Solution Combination
5.7 Multiobjective Optimization
5.8 Tricks of the Trade
5.9 Conclusions
5.10 Promising Areas for Future Research
References
6 Genetic Programming
6.1 Introduction
6.2 Preparatory Steps of Genetic Programming
6.3 Executional Steps of GP
6.4 Example of a Run of GP
6.5 Further Features of GP
6.5.1 Automatically Defined Functions and Libraries
6.5.2 Architecture-Altering Operations
6.5.3 Constraining Structures
6.5.4 Developmental GP
6.5.5 Probabilistic GP
6.5.6 Bloat and Bloat Control
6.6 Human-Competitive Results Produced by GP
6.7 Genetic Programming Theory
6.7.1 Models of GP Search
6.7.2 Bloat
6.8 Conclusions
References
7 Artificial Immune Systems
7.1 Introduction
7.2 Overview of the Biological Immune System
7.2.1 Dendritic Cells
7.2.2 Immune Network Theory
7.2.3 Negative Selection Mechanism
7.2.4 Clonal Selection Principle
7.3 Illustrative Problems
7.3.1 Intrusion Detection Systems
7.3.2 Data Mining: Collaborative Filtering and Clustering
7.4 Artificial Immune System Basic Concepts
7.4.1 Initialization/Encoding
7.4.2 Similarity or Affinity Measure
7.4.3 Negative, Clonal or Neighborhood Selection
7.4.4 Somatic Hypermutation
7.4.5 Dendritic-Cell-Based Approaches
7.5 Comparison of Artificial Immune Systems to Genetic Algorithms and Neural Networks
7.6 Extensions of Artificial Immune Systems
7.6.1 Idiotypic Networks: Network Interactions (Suppression)
7.6.2 Danger Theory
7.7 Promising Areas for Future Application
7.8 Tricks of the Trade
7.9 Conclusions
References
8 Swarm Intelligence
8.1 Introduction
8.2 Ant Colony Optimization
8.2.1 Example 1: Basic ACO and the TSP
8.2.1.1 The TSP Problem
8.2.1.2 Pheromone Information
8.2.1.3 Solution Construction
8.2.1.4 Pheromone Update
8.2.1.5 Stopping Criterion
8.2.2 Example 2: Population-Based ACO and TSP
8.2.2.1 Information Transfer and Population Matrix
8.2.2.2 Population Matrix Update
8.2.2.3 Construction of Pheromone Matrix
8.2.3 Example 3: ACO for a Scheduling Problem
8.2.3.1 The SMTWTP Problem
8.2.3.2 Pheromone Encoding
8.2.3.3 Pheromone Evaluation
8.2.3.4 Adaptation of Heuristics
8.2.4 Advanced Features of ACO
8.2.4.1 Variants of Pheromone Update
8.2.4.2 Other ACO Variants
8.2.5 Promising Areas for Future Applications of ACO
8.2.5.1 Hybrid ACO
8.3 Particle Swarm Optimization
8.3.1 Example 1: Basic PSO and Continuous Function Optimization
8.3.2 Example 2: Discrete Binary PSO for Subset Problems
8.3.3 Advanced Features of PSO
8.3.4 Promising Areas for Future Applications of PSO
8.3.4.1 Operations on Particles
8.3.4.2 Cooperative Swarms
8.4 Tricks of the Trade
8.5 Conclusion
References
9 Tabu Search
9.1 Introduction
9.2 Illustrative Problems
9.2.1 The Job-Shop Scheduling Problem
9.2.2 The Capacitated Plant Location Problem
9.3 Basic Concepts
9.3.1 Historical Background
9.3.2 Tabu Search
9.3.3 Search Space and Neighborhood Structure
9.3.4 Tabus
9.3.5 Aspiration Criteria
9.3.6 A Template for Simple Tabu Search
9.3.7 Termination Criteria
9.3.8 Probabilistic Tabu Search and Candidate Lists
9.4 Extensions to the Basic Concepts
9.4.1 Intensification
9.4.2 Diversification
9.4.3 Allowing Infeasible Solutions
9.4.4 Surrogate and Auxiliary Objectives
9.5 Promising Areas for Future Applications
9.6 Tricks of the Trade
9.6.1 Getting Started
9.6.2 More Tips
9.6.3 Additional Tips for Probabilistic Tabu Search
9.6.4 Parameter Calibration and Computational Testing
9.7 Conclusions
References
10 Simulated Annealing
10.1 Introduction
10.2 Local Search
10.3 Basic Simulated Annealing
10.4 Mathematical Modeling
10.5 Equilibrium Statistics
10.6 Practical Application
10.6.1 Static Cooling Schedules
10.6.1.1 Initial Value of the Control Parameter
10.6.1.2 Lowering the Control Parameter Value
10.6.1.3 Final Value of the Control Parameter
10.6.1.4 Markov Chain Length
10.6.2 Dynamic Cooling Schedules
10.7 Tricks of the Trade
10.8 Conclusions
References
11 GRASP: Greedy Randomized Adaptive Search Procedures
11.1 Introduction
11.2 Principles and Building Blocks
11.2.1 Greedy Algorithms
11.2.2 Randomization and Greedy Randomized Algorithms
11.2.3 Neighborhoods
11.2.4 Local Search
11.2.5 Restricted Neighborhoods and Candidate Lists
11.2.6 Intensification and Diversification
11.2.7 Path-Relinking
11.3 A Template for GRASP
11.4 GRASP with Path-Relinking
11.5 Extensions
11.6 Tricks of the Trade
11.7 Some Promising Areas for Future Application
11.7.1 Continuous GRASP
11.7.2 Probabilistic-Based Stopping Rules
References
12 Variable Neighborhood Search
12.1 Introduction
12.2 Preliminaries: Documentation
12.3 Variable Neighborhood Descent
12.4 Reduced VNS
12.5 Basic and General VNS
12.6 Skewed VNS
12.7 Variable Neighborhood Decomposition Search
12.8 Analyzing Performance
12.9 Promising Areas of Research
12.10 Tricks of the Trade
12.10.1 Getting Started
12.10.1.1 A Step-by-Step Procedure
12.10.2 More Tips
12.11 Conclusions
References
13 Very Large-Scale Neighborhood Search
13.1 Introduction
13.2 Preliminaries
13.3 Variable-Depth Neighborhood Search Algorithms
13.3.1 Definitions
13.3.2 Example: Kernighan–Lin Search for the MAX CUT
13.3.3 Example: Lin–Kernighan Search for the TSP
13.3.4 Final Remarks on Variable-Depth Neighborhood Search Algorithms
13.4 Cyclic Exchange Neighborhood Search Algorithms
13.4.1 Basic Concepts
13.4.2 Finding Improving Cyclic Exchanges
13.4.3 Example: Cyclic Exchange Neighborhood Search for the VRP
13.4.4 Example: Cyclic Exchange Neighborhood Search for the Capacitated Minimum Spanning Tree Problem
13.4.5 Final Remarks on Cyclic Exchange Neighborhood Search Algorithms
13.5 Other Very Large-Scale Neighborhood Search Algorithms
13.5.1 Neighborhoods Based on Compounding Independent Moves
13.5.2 Neighborhoods Based on Variable Fixing
13.5.3 Other VLSN Search Algorithms
13.6 Tricks of the Trade
13.7 Promising Areas for Future Research
13.8 Conclusions
References
14 Constraint Programming
14.1 Introduction
14.2 Inference
14.3 Modeling
14.4 Search
14.4.1 Extension
14.4.2 Repair
14.5 Example
14.6 Tractability
14.6.1 Theory
14.6.2 Experiment
14.7 Optimization
14.8 Algorithms
14.8.1 Handling Constraints
14.8.2 Domains and Constraint Propagation
14.8.3 Constraints and Search
14.8.3.1 Separating Constraint Handling from Search
14.8.3.2 Search Heuristics Exploiting Constraint Propagation
14.8.4 Global Constraints
14.8.4.1 Alldifferent
14.8.4.2 Schedule
14.8.4.3 Further Global Constraints
14.8.4.4 Analysis
14.8.5 Different Constraint Behaviors
14.8.6 Extension and Repair Search
14.8.6.1 Constraint Reasoning and Extension Search
14.8.6.2 Constraint Reasoning and Repair Search
14.8.6.3 Languages and Systems
14.9 Constraint Languages
14.9.1 Constraint Logic Programming
14.9.2 Modeling Languages
14.9.3 Constraint Satisfaction and Optimization Systems
14.10 Applications
14.10.1 Current Areas of Application
14.10.2 Applications in Control, Verification and Validation
14.10.3 Combinatorial Problem Solving
14.10.4 Other Applications
14.10.4.1 Constraints and Graphics
14.10.4.2 Constraint Databases
14.11 Potpourri
14.11.1 Dynamic Constraint Problems and Soft Constraints
14.11.2 Explanation
14.11.3 Synthesizing Models and Algorithms
14.11.4 Distributed Processing
14.11.5 Uncertainty
14.12 Tricks of the Trade
14.12.1 Initializing Variables
14.12.2 Constrain the Variables
14.12.3 Search and Propagation
14.12.4 Introducing Redundant Constraints
14.12.5 Adding Search Heuristics
14.12.6 Using an Incomplete Search Technique
References
15 Multi-objective Optimization
15.1 Introduction
15.1.1 How Is It Different from Single-Objective Optimization?
15.2 Two Approaches to Multi-objective Optimization
15.3 Non-dominated Solutions and Pareto-Optimal Solutions
15.3.1 Special Solutions
15.3.1.1 Ideal Objective Vector
15.3.1.2 Utopian Objective Vector
15.3.1.3 Nadir Objective Vector
15.3.2 Concept of Domination
15.3.3 Properties of Dominance Relation
15.3.4 Pareto Optimality
15.3.5 Procedure for Finding Non-dominated Solutions
15.3.5.1 Finding the Best Non-dominated Front
15.3.5.2 A Non-dominated Sorting Procedure
15.4 Some Approaches to Multi-objective Optimization
15.4.1 Classical Method: Weighted-Sum Approach
15.4.2 Classical Method: ε-Constraint Method
15.4.3 Evolutionary Multi-objective Optimization (EMO) Method
15.4.3.1 Elitist Non-dominated Sorting GA (NSGA-II)
15.4.4 Sample Simulation Results
15.4.5 Other State-of-the-Art MOEAs
15.5 Constraint Handling
15.6 Some Applications
15.6.1 Spacecraft Trajectory Design
15.6.2 A Cantilever Plate Design
15.7 Tricks of the Trade
15.7.1 Classical Multi-objective Optimization
15.7.2 Evolutionary Multi-objective Optimization (EMO)
15.7.2.1 Archive Acceptance Criterion CA(c,A)
15.7.2.2 Population Acceptance Criterion CP(c,P)
15.7.3 Post-optimality Studies
15.7.4 Evaluating a Multi-objective Optimization Algorithm
15.8 Research Challenges
15.9 Conclusions
References
16 Sharpened and Focused No Free Lunch and Complexity Theory
16.1 Introduction
16.2 Complexity: P and NP
16.2.1 Complexity, Search and Optimization
16.3 No Free Lunch
16.3.1 No Free Lunch: Variations on a Theme
16.3.2 No Free Lunch and Permutation Closure
16.3.3 Free Lunch and Compressibility
16.4 Sharpened NFL and Focused NFL
16.4.1 Partitioning the Permutation Closure under Focused NFL
16.4.2 Evaluating Search Algorithms
16.5 Conclusions
16.6 Tricks of the Trade
16.7 Current and Future Research Directions
References
17 Machine Learning
17.1 Introduction
17.1.1 Learning Models
17.1.2 Learning Tasks and Issues in Machine Learning
17.1.2.1 Classification
17.1.2.2 Regression, Interpolation and Density Estimation
17.1.2.3 Learning a Sequence of Actions
17.1.2.4 Data Mining
17.1.2.5 Issues in Machine Learning
17.1.3 Organization of the Chapter
17.2 Overview of Learning Algorithms
17.2.1 Learning Decision Trees
17.2.2 Inductive Logic Programming
17.2.3 Bayesian Learning
17.2.4 Reinforcement Learning
17.2.5 Neural Networks
17.2.6 Evolutionary Learning
17.3 Learning and Evolution
17.3.1 Evolutionary Neural Networks
17.3.1.1 The Evolution of Connection Weights
17.3.1.2 The Evolution of Architectures
17.3.2 The Evolution of Learning Rules
17.3.3 A General Framework for Evolutionary Neural Networks
17.4 Ensemble Learning
17.4.1 Bias–Variance Trade-Off
17.4.1.1 Bias–Variance–Covariance Trade-Off
17.4.1.2 Independent Learning Methods
17.4.1.3 Sequential Learning Methods
17.4.1.4 Simultaneous Learning Methods
17.4.1.5 Negative Correlation Learning
17.4.1.6 Evolutionary Neural Networks as Ensembles
17.5 Promising Areas for Future Application
17.6 Tricks of the Trade
17.6.1 Formulating the Problem
17.6.2 Choosing the Representation
17.6.3 Collecting the Data
17.6.4 Conducting the Learning Process
17.6.5 Analyzing and Evaluating the Learned Knowledge
17.7 Conclusions
References
18 Fuzzy Reasoning
18.1 Introduction
18.2 Basic Definitions of Fuzzy Set Theory
18.2.1 Fuzzy Sets and the Notion of Membership
18.2.2 Membership Functions
18.2.2.1 Examples of Fuzzy Sets
18.2.3 Fuzzy Set Operations
18.2.3.1 Examples of Fuzzy Set Operations
18.2.4 Transformation Operators
18.2.4.1 Example of Transformation Operators
18.2.5 Cartesian Inner Product of Fuzzy Sets
18.2.5.1 Example
18.2.6 Fuzzy Relations
18.2.6.1 Example
18.2.7 Fuzzy Set Composition
18.2.7.1 Example
18.2.8 Fuzzy Implication
18.2.9 Inference Rules
18.2.10 The Inverse Problem
18.2.11 Fuzzy Similarity Measures
18.2.11.1 L-Fuzzy Similarity Measure
18.2.11.2 M-Fuzzy Similarity Measure
18.2.11.3 S-Fuzzy Similarity Measure
18.2.11.4 W-Fuzzy Similarity Measure
18.2.11.5 P-Fuzzy Similarity Measure
18.3 Basic Structure of a Fuzzy Inference System
18.3.1 Defuzzification Unit
18.3.1.1 Mean of Maximum Defuzzification Technique
18.3.1.2 Centroid Defuzzification Technique
18.3.1.3 Center of Sums Defuzzification Technique
18.3.2 Design of the Rule Base
18.4 Case Study: A Fuzzy Control System
18.4.1 The Fuzzy Logic Control Closed Loop
18.4.2 Fuzzy Logic Controllers in Proportional-Integral (PI) and Proportional-Differential (PD) Forms
18.4.2.1 PI-Like Fuzzy Controller
18.4.2.2 PD-Like Fuzzy Controller
18.4.3 An Illustrative Example
18.4.3.1 The Case Study: Fuzzy Control of a Plug Flow Tubular Reactor
18.4.3.2 Performance Analysis: Results and Discussion
18.4.4 Fuzzy Adaptive Control Schemes
18.5 Model Identification and Stability of Fuzzy Systems
18.5.1 Fuzzy Systems Modeling
18.5.2 Stability of Fuzzy Systems
18.6 Conclusion and Perspectives
References
19 Rough-Set-Based Decision Support
19.1 Introduction
19.2 Rough Set Fundamentals
19.2.1 Explanation by an Example
19.2.2 A Formal Description of the Indiscernibility-Based Rough Set Approach
19.2.3 Decision Rules Induced from Rough Approximations
19.2.4 From Indiscernibility to Similarity
19.3 The Knowledge Discovery Paradigm and Prior Knowledge
19.4 The Dominance-Based Rough Set Approach
19.4.1 Granular Computing with Dominance Cones
19.4.2 Stochastic DRSA
19.4.3 Induction of Decision Rules
19.4.4 An Illustrative Example
19.5 The DRSA to Multicriteria Choice and Ranking
19.5.1 The Pairwise Comparison Table as Preference Information and as a Learning Sample
19.5.2 Rough Approximation of the Outranking and Non-outranking Relations Specified in the Pairwise Comparison Table
19.5.2.1 Multigraded Dominance
19.5.2.2 Dominance Without Degrees of Preference
19.5.3 Induction of Decision Rules from Rough Approximations of Outranking and Non-outranking Relations
19.5.4 Use of Decision Rules for Decision Support
19.5.5 An Illustrative Example
19.5.6 Summary
19.6 Conclusions and Promising Areas of Future Work
References
20 Hyper-heuristics
20.1 Introduction
20.2 The Need for Hyper-heuristics
20.3 Hyper-heuristics for Boolean Satisfiability
20.3.1 The Problem Area
20.3.2 The Heuristic Generation Process
20.3.3 Remarks
20.4 Hyper-heuristics for Timetabling
20.4.1 The Problem Area
20.4.2 The Heuristic Generation Process
20.4.3 Remarks
20.5 Hyper-heuristics for Packing
20.5.1 The Problem Area
20.5.2 Heuristic Generation Processes
20.5.3 Remarks
20.6 A Little History
20.7 Some Research Issues
20.7.1 No Free Lunch?
20.7.2 Search Methods
20.7.3 Representation Issues
20.7.4 Performance Guarantees
20.8 Getting Started: The HyFlex Framework
20.9 Tricks of the Trade
20.9.1 The Problem Area
20.9.2 Success Criteria
20.9.3 On-line or Off-line
20.9.4 A Good Set of Heuristic Ingredients
20.9.5 Fitness
20.9.6 A Good Set of Tools
20.9.7 Attitude
References
21 Approximations and Randomization
21.1 Introduction
21.2 Approximation Strategies
21.2.1 Preliminaries
21.2.1.1 Optimization Problems
21.2.1.2 Approximation and Performance
21.2.1.3 Complexity Background
21.2.2 The Greedy Method
21.2.2.1 Greedy Vertex Cover
21.2.2.2 Greedy MAX-SAT
21.2.2.3 Greedy MAX-CUT
21.2.2.4 Greedy Knapsack
21.2.3 Sequential Algorithms
21.2.3.1 Sequential Bin Packing
21.2.3.2 Sequential Job Scheduling
21.2.4 Dynamic Programming
21.2.4.1 PTAS for Minimum Job Scheduling
21.2.4.2 FPTAS for Knapsack
21.2.5 LP-Based Algorithms
21.2.5.1 LP Rounding
21.2.6 Primal–Dual Method
21.2.6.1 The LP Duality Theorem
21.2.7 Primal–Dual Method Applied to Minimum Weight Vertex Cover
21.2.8 Randomization
21.2.8.1 Random MAX-CUT Solution
21.2.8.2 Random MAX-SAT Solution
21.3 A Tour of Approximation Classes
21.3.1 PTAS and FPTAS
21.3.1.1 Definition
21.3.1.2 A Few Known Results
21.3.2 APX
21.3.2.1 A Few Known Results
21.3.3 Brief Introduction to PCPs
21.4 Promising Areas for Future Application
21.4.1 Randomized Backtracking and Backdoors
21.4.2 Approximations to Guide Complete Backtrack Search
21.4.3 Average-Case Complexity and Approximation
21.5 Tricks of the Trade
21.6 Conclusions
References
22 Fitness Landscapes
22.1 Historical Introduction
22.2 Combinatorial Optimization
22.2.1 An Example
22.3 Mathematical Characterization
22.3.1 Neighborhood Structure
22.3.2 Local Optima
22.3.3 Basins of Attraction
22.3.4 Plateaux
22.3.5 Graph Representation
22.3.6 Laplacian Matrix
22.3.7 Graph Eigensystem
22.3.8 Elementary Landscapes
22.3.9 Recombination Landscapes
22.3.10 Summary
22.4 Statistical Measures
22.4.1 Autocorrelation
22.4.2 Number of Optima
22.4.2.1 Waiting-Time Model
22.4.2.2 Counting Model
22.4.2.3 Non-parametric Estimates
22.5 Empirical Studies
22.5.1 Practical Applications
22.6 Promising Areas for Future Application
22.7 Tricks of the Trade
22.8 Conclusion
References
Index
Search Methodologies
Introductory Tutorials in Optimization and Decision Support Techniques
Second Edition
Edmund K. Burke • Graham Kendall, Editors
Springer
Editors
Edmund K. Burke, University of Stirling, Cottrell Building, Scotland, United Kingdom
Graham Kendall, Vice-Provost (Research and Knowledge Transfer), University of Nottingham Malaysia and University of Nottingham, UK, Jalan Broga, Semenyih, Malaysia

ISBN 978-1-4614-6939-1
ISBN 978-1-4614-6940-7 (eBook)
DOI 10.1007/978-1-4614-6940-7
Springer New York Heidelberg Dordrecht London
Library of Congress Control Number: 2013944229
© Springer Science+Business Media New York 2014

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher’s location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)
Foreword to the First Edition

This is not so much a foreword as a testimonial. As I embarked on the pleasant journey of reading through the chapters of this book, I became convinced that this is one of the best sources of introductory material on the search methodologies topic to be found. The book’s subtitle, “Introductory Tutorials in Optimization and Decision Support Techniques,” aptly describes its aim, and the editors and contributors to this volume have achieved this aim with remarkable success.

The chapters in this book are exemplary in giving useful guidelines for implementing the methods and frameworks described. They are tutorials in the way tutorials ought to be designed. I found no chapter that did not contain interesting and relevant information and found several chapters that can only be qualified as delightful.

Those of us who have devoted a substantial portion of our energies to the study and elaboration of search methodologies often wish we had a simple formula for passing along the core notions to newcomers to the area. (I must confess, by the way, that I qualify as a relative newcomer to some of the areas in this volume.) While simplicity, like beauty, to some degree lies in the eyes of the beholder and no universal or magical formula for achieving it exists, this book comes much closer to reaching such a goal than I would have previously considered possible. It will occupy a privileged position on my list of recommended reading for students and colleagues who want to get a taste for the basics of search methodologies and who have an interest in equipping themselves to probe more deeply.

If good books may be likened to ladders that help us ascend to higher rungs of knowledge and understanding, we all know there are nevertheless many books written in technical areas that seem to be more like stumbling blocks or at best broken stepping stools that deposit us in isolated nooks offering no clear access to continued means of ascent. Not so the present book. Its chapters lead to ideal sites for continued exploration and offer compelling motivation for further pursuit of its ideas and frameworks.

If my reckoning is not completely amiss, those who read this volume will find abundant reasons for sharing my conviction that we owe its editors and authors a true debt of gratitude for putting this work together.

Boulder, CO, USA
Fred Glover
Foreword to the Second Edition

It gives me great pleasure to write this short foreword to the second edition of this outstanding book. The first edition established this volume as an important international landmark in the exposition and explanation of computational search methodologies and associated issues. As its title indicates, it covers a broad and diverse portfolio of search techniques from an interdisciplinary perspective in an extensive and comprehensive way. In some ways, this is no surprise because the authors that have been brought together in this volume truly represent a collection of the very top figures in the field. It is an extremely impressive group of authors who have the highest possible qualifications to present these chapters in a clear and authoritative way.

As the editors have explained in the Introduction, decision support systems now underpin the core of many modern management approaches in such diverse areas as health care, commerce, industry, science, and government. In many everyday situations across many different working environments, the idea of taking important managerial decisions without some kind of decision support system would be inconceivable. Moreover, it is the techniques and methods that are introduced in this book that often lie at the heart of the search engines upon which these decision support systems depend. Thus, the book ideally fits the current needs of today’s increasingly computer-driven world.

The aim of the book is clearly articulated in the Introduction. It aims to present a series of well-written tutorials by the leading experts in their fields. Moreover, it does this by covering practically the whole possible range of topics in the discipline. It enables students and practitioners to study and appreciate the beauty and the power of some of the computational search techniques that are able to effectively navigate through search spaces that are sometimes inconceivably large.

I am convinced that this second edition will build on the success of the first edition and that it will prove to be just as popular. Three main features of the volume add significantly, in my opinion, to its appeal. These can be outlined as follows: