
Robotics, Vision and Control: Fundamental Algorithms in MATLAB (English edition, PDF)

Front Matter
Chapter 1-Introduction
About the Book
Chapter 2-Representing Position and Orientation
Representing Pose in 2-Dimensions
Representing Pose in 3-Dimensions
Wrapping Up
Chapter 3-Time and Motion
Trajectories
Time Varying Coordinate Frames
Wrapping Up
Chapter 4-Mobile Robot Vehicles
Mobility
Car-like Mobile Robots
Flying Robots
Wrapping Up
Chapter 5-Navigation
Reactive Navigation
Map-Based Planning
Wrapping Up
Chapter 6-Localization
Dead Reckoning
Using a Map
Creating a Map
Localization and Mapping
Monte-Carlo Localization
Wrapping Up
Chapter 7-Robot Arm Kinematics
Describing a Robot Arm
Forward Kinematics
Inverse Kinematics
Trajectories
Advanced Topics
Application: Drawing
Application: a Simple Walking Robot
Wrapping Up
Chapter 8-Velocity Relationships
Manipulator Jacobian
Resolved-Rate Motion Control
Force Relationships
Inverse Kinematics: a General Numerical Approach
Wrapping Up
Chapter 9-Dynamics and Control
Equations of Motion
Drive Train
Forward Dynamics
Manipulator Joint Control
Wrapping Up
Chapter 10-Light and Color
Spectral Representation of Light
Color
Advanced Topics
Wrapping Up
Chapter 11-Image Formation
Perspective Transform
Camera Calibration
Non-Perspective Imaging Models
Unified Imaging
Wrapping Up
Chapter 12-Image Processing
Obtaining an Image
Monadic Operations
Diadic Operations
Spatial Operations
Mathematical Morphology
Shape Changing
Wrapping Up
Chapter 13-Image Feature Extraction
Region Features
Line Features
Point Features
Wrapping Up
Chapter 14-Using Multiple Images
Feature Correspondence
Geometry of Multiple Views
Stereo Vision
Structure and Motion
Application: Perspective Correction
Application: Mosaicing
Application: Image Matching and Retrieval
Application: Image Sequence Processing
Wrapping Up
Chapter 15-Vision-Based Control
Position-Based Visual Servoing
Image-Based Visual Servoing
Using Other Image Features
Wrapping Up
Chapter 16-Advanced Visual Servoing
XY/Z-Partitioned IBVS
IBVS Using Polar Coordinates
IBVS for a Spherical Camera
Application: Arm-Type Robot
Application: Mobile Robot
Application: Aerial Robot
Wrapping Up
Springer Tracts in Advanced Robotics, Volume 73
Editors: Bruno Siciliano · Oussama Khatib · Frans Groen
Peter Corke
Robotics, Vision and Control: Fundamental Algorithms in MATLAB®
With 393 Images
Additional material is provided at www.petercorke.com/RVC
Professor Bruno Siciliano, Dipartimento di Informatica e Sistemistica, Università di Napoli Federico II, Via Claudio 21, 80125 Napoli, Italy, e-mail: siciliano@unina.it
Professor Oussama Khatib, Artificial Intelligence Laboratory, Department of Computer Science, Stanford University, Stanford, CA 94305-9010, USA, e-mail: khatib@cs.stanford.edu
Professor Frans Groen, Department of Computer Science, Universiteit van Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands, e-mail: groen@science.uva.nl

Author: Peter Corke, Faculty of Built Environment and Engineering, School of Engineering Systems, Queensland University of Technology (QUT), Brisbane QLD 4000, Australia, e-mail: rvc@petercorke.com

ISBN 978-3-642-20143-1    e-ISBN 978-3-642-20144-8
DOI 10.1007/978-3-642-20144-8
Springer Tracts in Advanced Robotics ISSN 1610-7438
Library of Congress Control Number: 2011934624

© Springer-Verlag Berlin Heidelberg 2011

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitations, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Production: Armin Stasch and Scientific Publishing Services Pvt. Ltd., Chennai, India
Typesetting and layout: Büro Stasch · Bayreuth (stasch@stasch.com)

Printed on acid-free paper
springer.com
Editorial Advisory Board
Oliver Brock, TU Berlin, Germany
Herman Bruyninckx, KU Leuven, Belgium
Raja Chatila, LAAS, France
Henrik Christensen, Georgia Tech, USA
Peter Corke, Queensland Univ. Technology, Australia
Paolo Dario, Scuola S. Anna Pisa, Italy
Rüdiger Dillmann, Univ. Karlsruhe, Germany
Ken Goldberg, UC Berkeley, USA
John Hollerbach, Univ. Utah, USA
Makoto Kaneko, Osaka Univ., Japan
Lydia Kavraki, Rice Univ., USA
Vijay Kumar, Univ. Pennsylvania, USA
Sukhan Lee, Sungkyunkwan Univ., Korea
Frank Park, Seoul National Univ., Korea
Tim Salcudean, Univ. British Columbia, Canada
Roland Siegwart, ETH Zurich, Switzerland
Gaurav Sukhatme, Univ. Southern California, USA
Sebastian Thrun, Stanford Univ., USA
Yangsheng Xu, Chinese Univ. Hong Kong, PRC
Shin’ichi Yuta, Tsukuba Univ., Japan

STAR (Springer Tracts in Advanced Robotics) has been promoted under the auspices of EURON (European Robotics Research Network).
To my family Phillipa, Lucy and Madeline for their indulgence and support; my parents Margaret and David for kindling my curiosity; and to Lou Paul who planted the seed that became this book.
Foreword

Once upon a time, a very thick document of a dissertation from a faraway land came to me for evaluation. Visual robot control was the thesis theme and Peter Corke was its author. Here, I am reminded of an excerpt of my comments, which reads, this is a masterful document, a quality of thesis one would like all of one's students to strive for, knowing very few could attain – very well considered and executed.

The connection between robotics and vision has been, for over two decades, the central thread of Peter Corke’s productive investigations and successful developments and implementations. This rare experience is bearing fruit in his new book on Robotics, Vision, and Control. In its melding of theory and application, this new book has considerably benefited from the author’s unique mix of academic and real-world application influences through his many years of work in robotic mining, flying, underwater, and field robotics.

There have been numerous textbooks in robotics and vision, but few have reached the level of integration, analysis, dissection, and practical illustrations evidenced in this book. The discussion is thorough, the narrative is remarkably informative and accessible, and the overall impression is of a significant contribution for researchers and future investigators in our field. Most every element that could be considered as relevant to the task seems to have been analyzed and incorporated, and the effective use of Toolbox software echoes this thoroughness.

The reader is taken on a realistic walk through the fundamentals of mobile robots, navigation, localization, manipulator-arm kinematics, dynamics, and joint-level control, as well as camera modeling, image processing, feature extraction, and multi-view geometry. These areas are finally brought together through extensive discussion of visual servo systems. In the process, the author provides insights into how complex problems can be decomposed and solved using powerful numerical tools and effective software.

The Springer Tracts in Advanced Robotics (STAR) is devoted to bringing to the research community the latest advances in the robotics field on the basis of their significance and quality. Through a wide and timely dissemination of critical research developments in robotics, our objective with this series is to promote more exchanges and collaborations among the researchers in the community and contribute to further advancements in this rapidly growing field.

Peter Corke brings a great addition to our STAR series with an authoritative book, reaching across fields, thoughtfully conceived and brilliantly accomplished.

Oussama Khatib
Stanford, California
July 2011
Preface

Tell me and I will forget.
Show me and I will remember.
Involve me and I will understand.
Chinese proverb

The practice of robotics and machine vision involves the application of computational algorithms to data. The data comes from sensors measuring the velocity of a wheel, the angle of a robot arm’s joint or the intensities of millions of pixels that comprise an image of the world that the robot is observing. For many robotic applications the amount of data that needs to be processed, in real-time, is massive. For vision it can be of the order of tens to hundreds of megabytes per second.

Progress in robots and machine vision has been, and continues to be, driven by more effective ways to process data. This is achieved through new and more efficient algorithms, and the dramatic increase in computational power that follows Moore’s law. When I started in robotics and vision, in the mid 1980s, the IBM PC had been recently released – it had a 4.77 MHz 16-bit microprocessor and 16 kbytes (expandable to 256 k) of memory. Over the intervening 25 years computing power has doubled 16 times, which is an increase by a factor of 65 000. In the late 1980s systems capable of real-time image processing were large 19 inch racks of equipment such as shown in Fig. 0.1. Today there is far more computing in just a small corner of a modern microprocessor chip.

Over the fairly recent history of robotics and machine vision a very large body of algorithms has been developed – a significant, tangible, and collective achievement of the research community. However its sheer size and complexity presents a barrier to somebody entering the field. Given the many algorithms from which to choose the obvious question is: What is the right algorithm for this particular problem? One strategy would be to try a few different algorithms and see which works best for the problem at hand but this raises the next question: How can I evaluate algorithm X on my own data without spending days coding and debugging it from the original research papers?

Fig. 0.1. Once upon a time a lot of equipment was needed to do vision-based robot control. The author with a large rack full of image processing and robot control equipment (1992)
Two developments come to our aid. The first is the availability of general purpose mathematical software which makes it easy to prototype algorithms. There are commercial packages such as MATLAB®, Mathematica and MathCad (respectively the trademarks of The Mathworks Inc., Wolfram Research, and PTC), and open source projects include SciLab, Octave, and PyLab. All these tools deal naturally and effortlessly with vectors and matrices, can create complex and beautiful graphics, and can be used interactively or as a programming environment. The second is the open-source movement. Many algorithms developed by researchers are available in open-source form. They might be coded in one of the general purpose mathematical languages just mentioned, or written in a mainstream language like C, C++ or Java.

For more than fifteen years I have been part of the open-source community and maintained two open-source MATLAB® Toolboxes: one for robotics and one for machine vision. They date back to my own PhD work and have evolved since then, growing features and tracking changes to the MATLAB® language (which have been significant over that period). The Robotics Toolbox has also been translated into a number of different languages such as Python, SciLab and LabView.

The Toolboxes have some important virtues. Firstly, they have been around for a long time and used by many people for many different problems so the code is entitled to some level of trust. The Toolbox provides a “gold standard” with which to compare new algorithms or even the same algorithms coded in new languages or executing in new environments.

Secondly, they allow the user to work with real problems, not trivial examples. For real robots, those with more than two links, or real images with millions of pixels the computation is beyond unaided human ability.

Thirdly, they allow us to gain insight which is otherwise lost in the complexity. We can rapidly and easily experiment, play what if games, and depict the results graphically using MATLAB®’s powerful display tools such as 2D and 3D graphs and images.

Fourthly, the Toolbox code makes many common algorithms tangible and accessible. You can read the code, you can apply it to your own problems, and you can extend it or rewrite it. At the very least it gives you a head start.

The Toolboxes were always accompanied by short tutorials as well as reference material. Over the years many people have urged me to turn this into a book and finally it has happened! The purpose of this book is to expand on the tutorial material provided with the Toolboxes, add many more examples, and to weave it into a narrative that covers robotics and computer vision separately and together. I want to show how complex problems can be decomposed and solved using just a few simple lines of code. By inclination I am a hands on person. I like to program and I like to analyze data, so it has always seemed natural to me to build tools to solve problems in robotics and vision. The topics covered in this book are based on my own interests but also guided by real problems that I observed over many years as a practitioner of both robotics and computer vision. I hope that by the end of this book you will share my enthusiasm for these topics.

I was particularly motivated to present a solid introduction to machine vision for roboticists. The treatment of vision in robotics textbooks tends to concentrate on simple binary vision techniques.
In the book we will cover a broad range of topics including color vision, advanced segmentation techniques such as maximally stable extremal regions and graphcuts, image warping, stereo vision, motion estimation and image retrieval. We also cover non-perspective imaging using fisheye lenses and catadioptric optics. These topics are growing in importance for robotics but are not commonly covered. Vision is a powerful sensor, and roboticists should have a solid grounding in modern fundamentals. The last part of the book shows how vision can be used as the primary sensor for robot control.

This book is unlike other textbooks, and deliberately so. Firstly, there are already a number of excellent textbooks that cover robotics and computer vision separately and in depth, but few that cover both in an integrated fashion. Achieving this integration is a principal goal of this book.
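To make the promise of “a few simple lines of code” concrete, the following is a minimal sketch, not an example from the book, of the kind of interactive session the Robotics Toolbox supports; it assumes the Toolbox is installed and on the MATLAB path, and the target pose values are arbitrary.

>> mdl_puma560;                             % load the bundled Puma 560 model, creating the object p560
>> T = transl(0.4, 0.2, 0.3) * trotx(pi);   % a target end-effector pose (hypothetical, arbitrary values)
>> q = p560.ikine6s(T);                     % closed-form inverse kinematics for this 6-axis arm
>> p560.plot(q)                             % draw the arm in the resulting joint configuration
>> p560.fkine(q)                            % sanity check: forward kinematics should reproduce T

The last line displays a homogeneous transformation that should match T, and plot opens a figure of the arm, illustrating the rapid, graphical “what if” experimentation described above.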