Instructor's Manual: Exercise Solutions for
Artificial Intelligence: A Modern Approach, Third Edition (International Version)

Stuart J. Russell and Peter Norvig

with contributions from Ernest Davis, Nicholas J. Hay, and Mehran Sahami

Upper Saddle River • Boston • Columbus • San Francisco • New York • Indianapolis • London • Toronto • Sydney • Singapore • Tokyo • Montreal • Dubai • Madrid • Hong Kong • Mexico City • Munich • Paris • Amsterdam • Cape Town
Editor-in-Chief: Michael Hirsch
Executive Editor: Tracy Dunkelberger
Assistant Editor: Melinda Haggerty
Editorial Assistant: Allison Michael
Vice President, Production: Vince O'Brien
Senior Managing Editor: Scott Disanno
Production Editor: Jane Bonnell
Interior Designers: Stuart Russell and Peter Norvig

Copyright © 2010, 2003, 1995 by Pearson Education, Inc., Upper Saddle River, New Jersey 07458. All rights reserved. Manufactured in the United States of America. This publication is protected by copyright, and permission should be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. To obtain permission(s) to use materials from this work, please submit a written request to Pearson Higher Education, Permissions Department, 1 Lake Street, Upper Saddle River, NJ 07458.

The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book. The author and publisher shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs.

Library of Congress Cataloging-in-Publication Data on File

10 9 8 7 6 5 4 3 2 1

ISBN-13: 978-0-13-606738-2
ISBN-10: 0-13-606738-7
Preface

This Instructor's Solution Manual provides solutions (or at least solution sketches) for almost all of the 400 exercises in Artificial Intelligence: A Modern Approach (Third Edition). We only give actual code for a few of the programming exercises; writing a lot of code would not be that helpful, if only because we don't know what language you prefer. In many cases, we give ideas for discussion and follow-up questions, and we try to explain why we designed each exercise.

There is more supplementary material that we want to offer to the instructor, but we have decided to do it through the medium of the World Wide Web rather than through a CD or printed Instructor's Manual. The idea is that this solution manual contains the material that must be kept secret from students, but the Web site contains material that can be updated and added to in a more timely fashion. The address for the web site is:

http://aima.cs.berkeley.edu

and the address for the online Instructor's Guide is:

http://aima.cs.berkeley.edu/instructors.html

There you will find:
• Instructions on how to join the aima-instructors discussion list. We strongly recommend that you join so that you can receive updates, corrections, notification of new versions of this Solutions Manual, additional exercises and exam questions, etc., in a timely manner.
• Source code for programs from the text. We offer code in Lisp, Python, and Java, and point to code developed by others in C++ and Prolog.
• Programming resources and supplemental texts.
• Figures from the text, for making your own slides.
• Terminology from the index of the book.
• Other courses using the book that have home pages on the Web. You can see example syllabi and assignments here. Please do not put solution sets for AIMA exercises on public web pages!
• AI Education information on teaching introductory AI courses.
• Other sites on the Web with information on AI. Organized by chapter in the book; check this for supplemental material.

We welcome suggestions for new exercises, new environments and agents, etc. The book belongs to you, the instructor, as much as us. We hope that you enjoy teaching from it, that these supplemental materials help, and that you will share your supplements and experiences with other instructors.
Solutions for Chapter 1
Introduction

1.1
a. Dictionary definitions of intelligence talk about "the capacity to acquire and apply knowledge" or "the faculty of thought and reason" or "the ability to comprehend and profit from experience." These are all reasonable answers, but if we want something quantifiable we would use something like "the ability to apply knowledge in order to perform better in an environment."
b. We define artificial intelligence as the study and construction of agent programs that perform well in a given environment, for a given agent architecture.
c. We define an agent as an entity that takes action in response to percepts from an environment.
d. We define rationality as the property of a system which does the "right thing" given what it knows. See Section 2.2 for a more complete discussion. Both describe perfect rationality, however; see Section 27.3.
e. We define logical reasoning as a process of deriving new sentences from old, such that the new sentences are necessarily true if the old ones are true. (Notice that this does not refer to any specific syntax or formal language, but it does require a well-defined notion of truth.)

1.2 See the solution for exercise 26.1 for some discussion of potential objections.
The probability of fooling an interrogator depends on just how unskilled the interrogator is. One entrant in the 2002 Loebner prize competition (which is not quite a real Turing Test) did fool one judge, although if you look at the transcript, it is hard to imagine what that judge was thinking. There certainly have been examples of a chatbot or other online agent fooling humans. For example, see Lenny Foner's account of the Julia chatbot at foner.www.media.mit.edu/people/foner/Julia/. We'd say the chance today is something like 10%, with the variation depending more on the skill of the interrogator than on the program. In 50 years, we expect that the entertainment industry (movies, video games, commercials) will have made sufficient investments in artificial actors to create very credible impersonators.

1.3 Yes, they are rational, because slower, deliberative actions would tend to result in more damage to the hand. If "intelligent" means "applying knowledge" or "using thought and reasoning" then it does not require intelligence to make a reflex action.
1.4 No. IQ test scores correlate well with certain other measures, such as success in college, ability to make good decisions in complex, real-world situations, ability to learn new skills and subjects quickly, and so on, but only if they're measuring fairly normal humans. The IQ test doesn't measure everything. A program that is specialized only for IQ tests (and specialized further only for the analogy part) would very likely perform poorly on other measures of intelligence. Consider the following analogy: if a human runs the 100m in 10 seconds, we might describe him or her as very athletic and expect competent performance in other areas such as walking, jumping, hurdling, and perhaps throwing balls; but we would not describe a Boeing 747 as very athletic because it can cover 100m in 0.4 seconds, nor would we expect it to be good at hurdling and throwing balls.
Even for humans, IQ tests are controversial because of their theoretical presuppositions about innate ability (distinct from training effects) and the generalizability of results. See The Mismeasure of Man by Stephen Jay Gould, Norton, 1981 or Multiple Intelligences: The Theory in Practice by Howard Gardner, Basic Books, 1993 for more on IQ tests, what they measure, and what other aspects there are to "intelligence."

1.5 In order-of-magnitude figures, the computational power of the computer is 100 times larger.

1.6 Just as you are unaware of all the steps that go into making your heart beat, you are also unaware of most of what happens in your thoughts. You do have a conscious awareness of some of your thought processes, but the majority remains opaque to your consciousness. The field of psychoanalysis is based on the idea that one needs trained professional help to analyze one's own thoughts.

1.7
• Although bar code scanning is in a sense computer vision, these are not AI systems. The problem of reading a bar code is an extremely limited and artificial form of visual interpretation, and it has been carefully designed to be as simple as possible, given the hardware.
• In many respects. The problem of determining the relevance of a web page to a query is a problem in natural language understanding, and the techniques are related to those we will discuss in Chapters 22 and 23. Search engines like Ask.com, which group the retrieved pages into categories, use clustering techniques analogous to those we discuss in Chapter 20. Likewise, other functionalities provided by search engines use intelligent techniques; for instance, the spelling corrector uses a form of data mining based on observing users' corrections of their own spelling errors. On the other hand, the problem of indexing billions of web pages in a way that allows retrieval in seconds is a problem in database design, not in artificial intelligence.
• To a limited extent. Such menus tend to use vocabularies which are very limited – e.g. the digits, "Yes", and "No" – and within the designers' control, which greatly simplifies the problem. On the other hand, the programs must deal with an uncontrolled space of all kinds of voices and accents.
The voice-activated directory assistance programs used by telephone companies, which must deal with a large and changing vocabulary, are certainly AI programs.
• This is borderline. There is something to be said for viewing these as intelligent agents working in cyberspace. The task is sophisticated, the information available is partial, the techniques are heuristic (not guaranteed optimal), and the state of the world is dynamic. All of these are characteristic of intelligent activities. On the other hand, the task is very far from those normally carried out in human cognition.

1.8 Presumably the brain has evolved so as to carry out these operations on visual images, but the mechanism is only accessible for one particular purpose in this particular cognitive task of image processing. Until about two centuries ago there was no advantage in people (or animals) being able to compute the convolution of a Gaussian for any other purpose. The really interesting question here is what we mean by saying that the "actual person" can do something. The person can see, but he cannot compute the convolution of a Gaussian; yet computing that convolution is part of seeing. This is beyond the scope of this solution manual.
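(For reference, and not part of the original solution: in notation of our own choosing, with $I$ an image and $G_\sigma$ a Gaussian kernel of standard deviation $\sigma$, the operation the exercise refers to is the two-dimensional smoothing convolution

\[
(I * G_\sigma)(x,y) \;=\; \sum_{u}\sum_{v} I(x-u,\,y-v)\,\frac{1}{2\pi\sigma^2}\,e^{-(u^2+v^2)/(2\sigma^2)},
\]

up to discretization of the kernel. This weighted local averaging is the kind of computation the exercise attributes to early visual processing: people carry it out effortlessly as part of seeing, yet cannot perform it deliberately as explicit arithmetic.)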
1.9 Evolution tends to perpetuate organisms (and combinations and mutations of organisms) that are successful enough to reproduce. That is, evolution favors organisms that can optimize their performance measure to at least survive to the age of sexual maturity, and then be able to win a mate. Rationality just means optimizing performance measure, so this is in line with evolution.

1.10 This question is intended to be about the essential nature of the AI problem and what is required to solve it, but could also be interpreted as a sociological question about the current practice of AI research.
A science is a field of study that leads to the acquisition of empirical knowledge by the scientific method, which involves falsifiable hypotheses about what is. A pure engineering field can be thought of as taking a fixed base of empirical knowledge and using it to solve problems of interest to society. Of course, engineers do bits of science—e.g., they measure the properties of building materials—and scientists do bits of engineering to create new devices and so on.
As described in Section 1.1, the "human" side of AI is clearly an empirical science—called cognitive science these days—because it involves psychological experiments designed to find out how human cognition actually works. What about the "rational" side? If we view it as studying the abstract relationship among an arbitrary task environment, a computing device, and the program for that computing device that yields the best performance in the task environment, then the rational side of AI is really mathematics and engineering; it does not require any empirical knowledge about the actual world—and the actual task environment—that we inhabit; that a given program will do well in a given environment is a theorem. (The same is true of pure decision theory.) In practice, however, we are interested in task environments that do approximate the actual world, so even the rational side of AI involves finding out what the actual world is like. For example, in studying rational agents that communicate, we are interested in task environments that contain humans, so we have to find out what human language is like. In studying perception, we tend to focus on sensors such as cameras that extract useful information from the actual world. (In a world without light, cameras wouldn't be much use.) Moreover, to design vision algorithms that are good at extracting information from camera images, we need to understand the actual world that generates those images. Obtaining the required understanding of scene characteristics, object types, surface markings, and so on is a quite different kind of science from ordinary physics, chemistry, biology, and so on, but it is still science.
In summary, AI is definitely engineering, but it would not be especially useful to us if it were not also an empirical science concerned with those aspects of the real world that affect the design of intelligent systems for that world.

1.11 This depends on your definition of "intelligent" and "tell." In one sense computers only do what the programmers command them to do, but in another sense what the programmers consciously tell the computer to do often has very little to do with what the computer actually does. Anyone who has written a program with an ornery bug knows this, as does anyone who has written a successful machine learning program. So in one sense Samuel "told" the computer "learn to play checkers better than I do, and then play that way," but in another sense he told the computer "follow this learning algorithm" and it learned to play. So we're left in the situation where you may or may not consider learning to play checkers to be a sign of intelligence (or you may think that learning to play in the right way requires intelligence, but not in this way), and you may think the intelligence resides in the programmer or in the computer.

1.12 The point of this exercise is to notice the parallel with the previous one. Whatever you decided about whether computers could be intelligent in 1.11, you are committed to making the same conclusion about animals (including humans), unless your reasons for deciding whether something is intelligent take into account the mechanism (programming via genes versus programming via a human programmer). Note that Searle makes this appeal to mechanism in his Chinese Room argument (see Chapter 26).

1.13 Again, the choice you make in 1.11 drives your answer to this question.

1.14
a. (ping-pong) A reasonable level of proficiency was achieved by Andersson's robot (Andersson, 1988).
b. (driving in Cairo) No. Although there has been a lot of progress in automated driving, all such systems currently rely on certain relatively constant clues: that the road has shoulders and a center line, that the car ahead will travel a predictable course, that cars will keep to their side of the road, and so on. Some lane changes and turns can be made on clearly marked roads in light to moderate traffic. Driving in downtown Cairo is too unpredictable for any of these to work.
c. (driving in Victorville, California) Yes, to some extent, as demonstrated in DARPA's Urban Challenge. Some of the vehicles managed to negotiate streets, intersections, well-behaved traffic, and well-behaved pedestrians in good visual conditions.