Dedication
Contents
Preface
Acknowledgments
Part I Agents in the World: What Are Agents and How Can They Be Built?
Chapter 1 Artificial Intelligence and Agents
1.1 What Is Artificial Intelligence?
1.1.1 Artificial and Natural Intelligence
1.2 A Brief History of AI
1.2.1 Relationship to Other Disciplines
1.3 Agents Situated in Environments
1.4 Knowledge Representation
1.4.1 Defining a Solution
1.4.2 Representations
1.4.3 Reasoning and Acting
1.5 Dimensions of Complexity
1.5.1 Modularity
1.5.2 Representation Scheme
1.5.3 Planning Horizon
1.5.4 Uncertainty
Sensing Uncertainty
Effect Uncertainty
1.5.5 Preference
1.5.6 Number of Agents
1.5.7 Learning
1.5.8 Computational Limits
1.5.9 Interaction of the Dimensions
1.6 Prototypical Applications
1.6.1 An Autonomous Delivery Robot
1.6.2 A Diagnostic Assistant
1.6.3 An Intelligent Tutoring System
1.6.4 A Trading Agent
1.7 Overview of the Book
1.8 Review
1.9 References and Further Reading
1.10 Exercises
Chapter 2 Agent Architectures and Hierarchical Control
2.1 Agents
2.2 Agent Systems
2.2.1 The Agent Function
2.3 Hierarchical Control
2.3.1 Agents Modeling the World
2.4 Embedded and Simulated Agents
2.5 Acting with Reasoning
2.5.1 Design Time and Offline Computation
2.5.2 Online Computation
2.6 Review
2.7 References and Further Reading
2.8 Exercises
Part II Representing and Reasoning
Chapter 3 States and Searching
3.1 Problem Solving as Search
3.2 State Spaces
3.3 Graph Searching
3.3.1 Formalizing Graph Searching
3.4 A Generic Searching Algorithm
3.5 Uninformed Search Strategies
3.5.1 Depth-First Search
3.5.2 Breadth-First Search
3.5.3 Lowest-Cost-First Search
3.6 Heuristic Search
3.6.1 A* Search
3.6.2 Summary of Search Strategies
3.7 More Sophisticated Search
3.7.1 Cycle Checking
3.7.2 Multiple-Path Pruning
3.7.3 Iterative Deepening
3.7.4 Branch and Bound
3.7.5 Direction of Search
Bidirectional Search
Island-Driven Search
Searching in a Hierarchy of Abstractions
3.7.6 Dynamic Programming
3.8 Review
3.9 References and Further Reading
3.10 Exercises
Chapter 4 Features and Constraints
4.1 Features and States
4.2 Possible Worlds, Variables, and Constraints
4.2.1 Constraints
4.2.2 Constraint Satisfaction Problems
4.3 Generate-and-Test Algorithms
4.4 Solving CSPs Using Search
4.5 Consistency Algorithms
4.6 Domain Splitting
4.6.1 Exploiting Propositional Structure
4.7 Variable Elimination
4.8 Local Search
4.8.1 Iterative Best Improvement
4.8.2 Randomized Algorithms
Most Improving Step
Two-Stage Choice
Any Conflict
Simulated Annealing
4.8.3 Evaluating Randomized Algorithms
4.8.4 Exploiting Propositional Structure in Local Search
4.9 Population-Based Methods
4.10 Optimization
4.10.1 Systematic Methods for Optimization
4.10.2 Local Search for Optimization
Continuous Domains
4.11 Review
4.12 References and Further Reading
4.13 Exercises
Chapter 5 Propositions and Inference
5.1 Propositions
5.1.1 Syntax of Propositional Calculus
5.1.2 Semantics of the Propositional Calculus
Humans’ View of Semantics
The Computer’s View of Semantics
5.2 Propositional Definite Clauses
5.2.1 Questions and Answers
5.2.2 Proofs
Bottom-Up Proof Procedure
Top-Down Proof Procedure
5.3 Knowledge Representation Issues
5.3.1 Background Knowledge and Observations
5.3.2 Querying the User
5.3.3 Knowledge-Level Explanation
How Did the System Prove a Goal?
Why Did the System Ask a Question?
5.3.4 Knowledge-Level Debugging
Incorrect Answers
Missing Answers
Infinite Loops
5.4 Proving by Contradictions
5.4.1 Horn Clauses
5.4.2 Assumables and Conflicts
5.4.3 Consistency-Based Diagnosis
5.4.4 Reasoning with Assumptions and Horn Clauses
Bottom-Up Implementation
Top-Down Implementation
5.5 Complete Knowledge Assumption
5.5.1 Non-monotonic Reasoning
5.5.2 Proof Procedures for Complete Knowledge
Bottom-Up Procedure
Top-Down Negation-as-Failure Procedure
5.6 Abduction
5.7 Causal Models
5.8 Review
5.9 References and Further Reading
5.10 Exercises
Chapter 6 Reasoning Under Uncertainty
6.1 Probability
6.1.1 Semantics of Probability
6.1.2 Axioms for Probability
6.1.3 Conditional Probability
Semantics of Conditional Probability
Bayes’ Rule
6.1.4 Expected Values
6.1.5 Information Theory
6.2 Independence
6.3 Belief Networks
6.3.1 Constructing Belief Networks
6.4 Probabilistic Inference
6.4.1 Variable Elimination for Belief Networks
6.4.2 Approximate Inference Through Stochastic Simulation
Sampling from a Single Variable
Forward Sampling in Belief Networks
From Samples to Probabilities
Rejection Sampling
Importance Sampling
Particle Filtering
6.5 Probability and Time
6.5.1 Markov Chains
6.5.2 Hidden Markov Models
Localization
6.5.3 Algorithms for Monitoring and Smoothing
6.5.4 Dynamic Belief Networks
6.5.5 Time Granularity
6.6 Review
6.7 References and Further Reading
6.8 Exercises
Part III Learning and Planning
Chapter 7 Learning: Overview and Supervised Learning
7.1 Learning Issues
7.2 Supervised Learning
7.2.1 Evaluating Predictions
7.2.2 Point Estimates with No Input Features
7.2.3 Learning Probabilities
Probabilities from Experts
7.3 Basic Models for Supervised Learning
7.3.1 Learning Decision Trees
Searching for a Good Decision Tree
7.3.2 Linear Regression and Classification
Squashed Linear Functions
7.3.3 Bayesian Classifiers
7.4 Composite Models
7.4.1 Neural Networks
7.4.2 Ensemble Learning
7.5 Avoiding Overfitting
7.5.1 Maximum A Posteriori Probability and Minimum Description Length
MAP Learning of Decision Trees
Description Length
7.5.2 Cross Validation
7.6 Case-Based Reasoning
7.7 Learning as Refining the Hypothesis Space
7.7.1 Version-Space Learning
Candidate Elimination Algorithm
The Bias Involved in Version-Space Learning
7.7.2 Probably Approximately Correct Learning
7.8 Bayesian Learning
7.9 Review
7.10 References and Further Reading
7.11 Exercises
Chapter 8 Planning with Certainty
8.1 Representing States, Actions, and Goals
8.1.1 Explicit State-Space Representation
8.1.2 Feature-Based Representation of Actions
8.1.3 The STRIPS Representation
8.1.4 Initial States and Goals
8.2 Forward Planning
8.3 Regression Planning
8.4 Planning as a CSP
8.5 Partial-Order Planning
8.6 Review
8.7 References and Further Reading
8.8 Exercises
Chapter 9 Planning Under Uncertainty
9.1 Preferences and Utility
9.1.1 Factored Utility
9.2 One-Off Decisions
9.2.1 Single-Stage Decision Networks
9.3 Sequential Decisions
9.3.1 Decision Networks
9.3.2 Policies
Expected Utility of a Policy
9.3.3 Variable Elimination for Decision Networks
9.4 The Value of Information and Control
9.5 Decision Processes
9.5.1 Value of a Policy
9.5.2 Value of an Optimal Policy
9.5.3 Value Iteration
9.5.4 Policy Iteration
9.5.5 Dynamic Decision Networks
9.5.6 Partially Observable Decision Processes
9.6 Review
9.7 References and Further Reading
9.8 Exercises
Chapter 10 Multiagent Systems
10.1 Multiagent Framework
10.2 Representations of Games
10.2.1 Normal Form of a Game
10.2.2 Extensive Form of a Game
10.2.3 Multiagent Decision Networks
10.3 Computing Strategies with Perfect Information
10.4 Partially Observable Multiagent Reasoning
10.4.1 Computing Nash Equilibria
Eliminating Dominated Strategies
Computing Randomized Strategies
10.4.2 Learning to Coordinate
10.5 Group Decision Making
10.6 Mechanism Design
10.7 Review
10.8 References and Further Reading
10.9 Exercises
Chapter 11 Beyond Supervised Learning
11.1 Clustering
11.1.1 Expectation Maximization
11.1.2 k-Means
11.1.3 EM for Soft Clustering
11.2 Learning Belief Networks
11.2.1 Learning the Probabilities
11.2.2 Unobserved Variables
11.2.3 Missing Data
11.2.4 Structure Learning
11.2.5 General Case of Belief Network Learning
11.3 Reinforcement Learning
11.3.1 Evolutionary Algorithms
11.3.2 Temporal Differences
11.3.3 Q-learning
11.3.4 Exploration and Exploitation
11.3.5 Evaluating Reinforcement Learning Algorithms
11.3.6 On-Policy Learning
11.3.7 Assigning Credit and Blame to Paths
11.3.8 Model-Based Methods
11.3.9 Reinforcement Learning with Features
SARSA with Linear Function Approximation
11.4 Review
11.5 References and Further Reading
11.6 Exercises
Part IV Reasoning About Individuals and Relations
Chapter 12 Individuals and Relations
12.1 Exploiting Structure Beyond Features
12.2 Symbols and Semantics
12.3 Datalog: A Relational Rule Language
12.3.1 Semantics of Ground Datalog
12.3.2 Interpreting Variables
Humans’ View of Semantics
12.3.3 Queries with Variables
12.4 Proofs and Substitutions
12.4.1 Bottom-Up Procedure with Variables
12.4.2 Definite Resolution with Variables
Unification
12.5 Function Symbols
12.5.1 Proof Procedures with Function Symbols
12.6 Applications in Natural Language Processing
12.6.1 Using Definite Clauses for Context-Free Grammars
12.6.2 Augmenting the Grammar
12.6.3 Building Structures for Non-terminals
12.6.4 Canned Text Output
12.6.5 Enforcing Constraints
12.6.6 Building a Natural Language Interface to a Database
12.6.7 Limitations
12.7 Equality
12.7.1 Allowing Equality Assertions
Axiomatizing Equality
Special-Purpose Equality Reasoning
12.7.2 Unique Names Assumption
Top-Down Procedure for the Unique Names Assumption
12.8 Complete Knowledge Assumption
12.8.1 Complete Knowledge Assumption Proof Procedures
12.9 Review
12.10 References and Further Reading
12.11 Exercises
Chapter 13 Ontologies and Knowledge-Based Systems
13.1 Knowledge Sharing
13.2 Flexible Representations
13.2.1 Choosing Individuals and Relations
13.2.2 Graphical Representations
Terse Language for Triples
13.2.3 Primitive Versus Derived Relations
13.3 Ontologies and Knowledge Sharing
13.3.1 Description Logic
13.3.2 Top-Level Ontologies
13.4 Querying Users and Other Knowledge Sources
13.4.1 Functional Relations
13.4.2 More General Questions
13.5 Implementing Knowledge-Based Systems
13.5.1 Base Languages and Metalanguages
13.5.2 A Vanilla Meta-interpreter
13.5.3 Expanding the Base Language
13.5.4 Depth-Bounded Search
13.5.5 Meta-interpreter to Build Proof Trees
13.5.6 Ask the User Meta-interpreter
13.5.7 Delaying Goals
13.6 Review
13.7 References and Further Reading
13.8 Exercises
Chapter 14 Relational Planning, Learning, and Probabilistic Reasoning
14.1 Planning with Individuals and Relations
14.1.1 Situation Calculus
14.1.2 Event Calculus
14.2 Learning with Individuals and Relations
14.3 Probabilistic Relational Models
14.4 Review
14.5 References and Further Reading
14.6 Exercises
Part V The Big Picture
Chapter 15 Retrospect and Prospect
15.1 Dimensions of Complexity Revisited
15.2 Social and Ethical Consequences
15.3 References and Further Reading
Appendix A Mathematical Preliminaries and Notation
A.1 Discrete Mathematics
A.2 Functions, Factors, and Arrays
A.3 Relations and the Relational Algebra
Bibliography
Index