Neural Network Learning: Theoretical Foundations

This book describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. Research on pattern classification with binary-output networks is surveyed, including a discussion of the relevance of the Vapnik-Chervonenkis dimension. Estimates of this dimension are calculated for several neural network models. A model of classification by real-output networks is developed, and the usefulness of classification with a large margin is demonstrated. The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification, and in the estimation of real-valued functions. They also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient constructive learning algorithms. The book is self-contained and is intended to be accessible to researchers and graduate students in computer science, engineering, and mathematics.

Martin Anthony is Reader in Mathematics and Executive Director of the Centre for Discrete and Applicable Mathematics at the London School of Economics and Political Science.

Peter Bartlett is a Senior Fellow at the Research School of Information Sciences and Engineering at the Australian National University.
Neural Network Learning: Theoretical Foundations

Martin Anthony and Peter L. Bartlett

CAMBRIDGE UNIVERSITY PRESS
CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi

Cambridge University Press
The Edinburgh Building, Cambridge CB2 8RU, UK

Published in the United States of America by Cambridge University Press, New York

www.cambridge.org
Information on this title: www.cambridge.org/9780521118620

© Cambridge University Press 1999

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 1999
Reprinted 2001, 2002
This digitally printed version 2009

A catalogue record for this publication is available from the British Library

Library of Congress Cataloguing in Publication data
Anthony, Martin.
Learning in neural networks : theoretical foundations / Martin Anthony and Peter L. Bartlett.
p. cm.
Includes bibliographical references.
ISBN 0 521 57353 X (hardcover)
1. Neural networks (Computer science). I. Bartlett, Peter L., 1966- . II. Title.
QA76.87.A58 1999
006.3'2-dc21 98-53260 CIP

ISBN 978-0-521-57353-5 hardback
ISBN 978-0-521-11862-0 paperback
To Colleen, Selena and James.
Contents

Preface

1 Introduction
1.1 Supervised learning
1.2 Artificial neural networks
1.3 Outline of the book
1.4 Bibliographical notes

Part one: Pattern Classification with Binary-Output Neural Networks

2 The Pattern Classification Problem
2.1 The learning problem
2.2 Learning finite function classes
2.3 Applications to perceptrons
2.4 Restricted model
2.5 Remarks
2.6 Bibliographical notes

3 The Growth Function and VC-Dimension
3.1 Introduction
3.2 The growth function
3.3 The Vapnik-Chervonenkis dimension
3.4 Bibliographical notes

4 General Upper Bounds on Sample Complexity
4.1 Learning by minimizing sample error
4.2 Uniform convergence and learnability
4.3 Proof of uniform convergence result
4.4 Application to the perceptron
4.5 The restricted model
4.6 Remarks
4.7 Bibliographical notes