Approximation Algorithms
Springer-Verlag Berlin Heidelberg GmbH
Vijay V. Vazirani
Approximation
Algorithms
Springer
Vijay V. Vazirani
Georgia Institute of Technology
College of Computing
801 Atlantic Avenue
Atlanta, GA 30332-0280
USA
vazirani@cc.gatech.edu
http://www.cc.gatech.edu/fac/Vijay.Vazirani
Corrected Second Printing 2003
Library of Congress Cataloging-in-Publication Data
Vazirani, Vijay V.
Approximation algorithms / Vijay V. Vazirani.
p.cm.
Includes bibliographical references and index.
ISBN 978-3-642-08469-0
DOI 10.1007/978-3-662-04565-7
1. Computer algorithms. 2. Mathematical optimization. I. Title.
QA76.9.A43 V39 2001
005.1-dc21
ISBN 978-3-662-04565-7 (eBook)
ACM Computing Classification (1998): F.1-2, G.1.2, G.1.6, G.2-4
AMS Mathematics Classification (2000): 68-01; 68W05, 20, 25, 35, 40;
68Q05-17, 25; 68R05, 10; 90-08; 90C05, 08, 10, 22, 27, 35, 46, 47, 59, 90;
05A05; 05C05, 12, 20, 38, 40, 45, 69, 70, 85, 90; 11H06; 15A03, 15, 18, 39, 48
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting,
reproduction on microfilm or in any other way, and storage in data banks. Duplication of
this publication or parts thereof is permitted only under the provisions of the German Copyright
Law of September 9, 1965, in its current version, and permission for use must always be obtained
from Springer-Verlag Berlin Heidelberg GmbH.
Violations are liable for prosecution under the German Copyright Law.
http://www.springer.de
© Springer-Verlag Berlin Heidelberg 2001, 2003
Originally published by Springer-Verlag Berlin Heidelberg New York in 2003
Softcover reprint of the hardcover 1st edition 2003
The use of general descriptive names, registered names, trademarks, etc. in this publication does
not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
Cover Design: KünkelLopka, Heidelberg
Typesetting: Camera-ready by the author using a Springer TEX macro package
Printed on acid-free paper
45/3142XO - 5 4 3 2 1 0
SPIN 10889341
To my parents
Preface
Although this may seem a paradox, all exact
science is dominated by the idea of approximation.
Bertrand Russell (1872-1970)
Most natural optimization problems, including those arising in important
application areas, are NP-hard. Therefore, under the widely believed conjecture
that P ≠ NP, their exact solution is prohibitively time consuming.
Charting the landscape of approximability of these problems, via polynomial
time algorithms, therefore becomes a compelling subject of scientific inquiry
in computer science and mathematics. This book presents the theory of approximation
algorithms as it stands today. It is reasonable to expect the
picture to change with time.
This book is divided into three parts. In Part I we cover combinatorial
algorithms for a number of important problems, using a wide variety
of algorithm design techniques. The latter may give Part I a non-cohesive
appearance. However, this is to be expected - nature is very rich, and we
cannot expect a few tricks to help solve the diverse collection of NP-hard
problems. Indeed, in this part, we have purposely refrained from tightly categorizing
algorithmic techniques so as not to trivialize matters. Instead, we
have attempted to capture, as accurately as possible, the individual character
of each problem, and point out connections between problems and algorithms
for solving them.
In Part II, we present linear programming based algorithms. These are
categorized under two fundamental techniques: rounding and the primal-dual
schema. But once again, the exact approximation guarantee obtainable
depends on the specific LP-relaxation used, and there is no fixed recipe for
discovering good relaxations, just as there is no fixed recipe for proving a theorem
in mathematics (readers familiar with complexity theory will recognize
this as the philosophical point behind the P ≠ NP question).
Part III covers four important topics. The first is the problem of finding
a shortest vector in a lattice which, for several reasons, deserves individual
treatment (see Chapter 27).
The second topic is the approximability of counting, as opposed to optimization,
problems (counting the number of solutions to a given instance).
The counting versions of almost all known NP-complete problems are #P-complete.
Interestingly enough, other than a handful of exceptions, this is
true of problems in P as well. An impressive theory has been built for obtaining
efficient approximate counting algorithms for this latter class of problems.
Most of these algorithms are based on the Markov chain Monte Carlo
(MCMC) method, a topic that deserves a book by itself and is therefore not
treated here. In Chapter 28 we present combinatorial algorithms, not using
the MCMC method, for two fundamental counting problems.
The third topic is centered around recent breakthrough results, establishing
hardness of approximation for many key problems, and giving new
legitimacy to approximation algorithms as a deep theory. An overview of
these results is presented in Chapter 29, assuming the main technical theorem,
the PCP Theorem. The latter theorem, unfortunately, does not have a
simple proof at present.
The fourth topic consists of the numerous open problems of this young
field. The list presented should by no means be considered exhaustive, and
is moreover centered around problems and issues currently in vogue. Exact
algorithms have been studied intensively for over four decades, and yet basic
insights are still being obtained. Considering the fact that among natural
computational problems, polynomial time solvability is the exception rather
than the rule, it is only reasonable to expect the theory of approximation
algorithms to grow considerably over the years.
The set cover problem occupies a special place, not only in the theory of
approximation algorithms, but also in this book. It offers a particularly simple
setting for introducing key concepts as well as some of the basic algorithm
design techniques of Part I and Part II. In order to give a complete treatment
for this central problem, in Part III we give a hardness result for it, even
though the proof is quite elaborate. The hardness result essentially matches
the guarantee of the best algorithm known - this being another reason for
presenting this rather difficult proof.
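To make the preceding remark concrete, the best algorithm known for set cover is the natural greedy rule: repeatedly pick the most cost-effective set, i.e., the one minimizing cost per newly covered element, which gives a guarantee of H_n ≈ ln n; the hardness result of Part III essentially matches this bound. The following is only a minimal illustrative sketch of that greedy rule (the function name and input conventions, a universe with a list of sets and their costs, are our own for this example); the book develops and analyzes the algorithm formally in Part I.

```python
def greedy_set_cover(universe, sets, costs):
    """Greedy set cover sketch: repeatedly pick the most cost-effective set,
    i.e. the one minimizing cost per newly covered element.
    Achieves an H_n ~ ln n approximation guarantee (assuming the
    instance is coverable)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Index of the set with minimum cost per newly covered element.
        best = min(
            (i for i in range(len(sets)) if uncovered & set(sets[i])),
            key=lambda i: costs[i] / len(uncovered & set(sets[i])),
        )
        chosen.append(best)
        uncovered -= set(sets[best])
    return chosen

# Small example: greedy picks set 0, then set 2, covering all of {1,...,5}.
print(greedy_set_cover(
    universe={1, 2, 3, 4, 5},
    sets=[{1, 2, 3}, {2, 4}, {4, 5}],
    costs=[1.0, 1.0, 1.0],
))
```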
Our philosophy on the design and exposition of algorithms is nicely illustrated
by the following analogy with an aspect of Michelangelo's art. A
major part of his effort involved looking for interesting pieces of stone in the
quarry and staring at them for long hours to determine the form they naturally
wanted to take. The chisel work exposed, in a minimalistic manner, this
form. By analogy, we would like to start with a clean, simply stated problem
(perhaps a simplified version of the problem we actually want to solve in
practice). Most of the algorithm design effort actually goes into understanding
the algorithmically relevant combinatorial structure of the problem. The
algorithm exploits this structure in a minimalistic manner. The exposition of
algorithms in this book will also follow this analogy, with emphasis on stating
the structure offered by problems, and keeping the algorithms minimalistic.
An attempt has been made to keep individual chapters short and simple,
often presenting only the key result. Generalizations and related results are
relegated to exercises. The exercises also cover other important results which
could not be covered in detail due to logistic constraints. Hints have been
provided for some of the exercises; however, there is no correlation between
the degree of difficulty of an exercise and whether a hint is provided for it.
This book is suitable for use in advanced undergraduate and graduate level
courses on approximation algorithms. It has more than twice the material
that can be covered in a semester long course, thereby leaving plenty of room
for an instructor to choose topics. An undergraduate course in algorithms
and the theory of NP-completeness should suffice as a prerequisite for most
of the chapters. For completeness, we have provided background information
on several topics: complexity theory in Appendix A, probability theory in
Appendix B, linear programming in Chapter 12, semidefinite programming in
Chapter 26, and lattices in Chapter 27. (A disproportionate amount of space
has been devoted to the notion of self-reducibility in Appendix A because
this notion has been quite sparsely treated in other sources.) This book can
also be used as supplementary text in basic undergraduate and graduate
algorithms courses. The first few chapters of Part I and Part II are suitable
for this purpose. The ordering of chapters in both these parts is roughly by
increasing difficulty.
In anticipation of this wide audience, we decided not to publish this book
in any of Springer's series, even its prestigious Yellow Series. (However, we
could not resist spattering a patch of yellow on the cover!) The following
translations are currently planned: French by Claire Kenyon, Japanese by
Takao Asano, and Romanian by Ion Mandoiu. Corrections and comments
from readers are welcome. We have set up a special email address for this
purpose: approx@cc.gatech.edu.
Finally, a word about practical impact. With practitioners looking for
high performance algorithms having error within 2% or 5% of the optimal,
what good are algorithms that come within a factor of 2, or even worse,
O(log n), of the optimal? Further, by this token, what is the usefulness of
improving the approximation guarantee from, say, factor 2 to 3/2?
Let us address both issues and point out some fallacies in these assertions.
The approximation guarantee only reflects the performance of the algorithm
on the most pathological instances. Perhaps it is more appropriate to view
the approximation guarantee as a measure that forces us to explore deeper
into the combinatorial structure of the problem and discover more powerful
tools for exploiting this structure. It has been observed that the difficulty
of constructing tight examples increases considerably as one obtains algorithms
with better guarantees. Indeed, for some recent algorithms, obtaining
a tight example has been a paper by itself (e.g., see Section 26.7). Experiments
have confirmed that these and other sophisticated algorithms do have
error bounds of the desired magnitude, 2% to 5%, on typical instances, even
though their worst case error bounds are much higher. Additionally, the theoretically
proven algorithm should be viewed as a core algorithmic idea that
needs to be fine tuned to the types of instances arising in specific applications.
We hope that this book will serve as a catalyst in helping this theory grow
and have practical impact.