Data Structures
and
Algorithm Analysis in C
Second Edition
Solutions Manual
Mark Allen Weiss
Florida International University
Preface
Included in this manual are answers to most of the exercises in the textbook Data Structures and
Algorithm Analysis in C, second edition, published by Addison-Wesley. These answers reflect
the state of the book in the first printing.
Specifically omitted are likely programming assignments and any question whose solution is
pointed to by a reference at the end of the chapter. Solutions vary in degree of completeness;
generally, minor details are left to the reader. For clarity, programs are meant to be pseudo-C
rather than completely perfect code.
Errors can be reported to weiss@fiu.edu. Thanks to Grigori Schwarz and Brian Harvey
for pointing out errors in previous incarnations of this manual.
Table of Contents
1. Chapter 1: Introduction
2. Chapter 2: Algorithm Analysis
3. Chapter 3: Lists, Stacks, and Queues
4. Chapter 4: Trees
5. Chapter 5: Hashing
6. Chapter 6: Priority Queues (Heaps)
7. Chapter 7: Sorting
8. Chapter 8: The Disjoint Set ADT
9. Chapter 9: Graph Algorithms
10. Chapter 10: Algorithm Design Techniques
11. Chapter 11: Amortized Analysis
12. Chapter 12: Advanced Data Structures and Implementation
Chapter 1: Introduction
1.3 Because of round-off errors, it is customary to specify the number of decimal places that
should be included in the output and round up accordingly. Otherwise, numbers come out
looking strange. We assume error checks have already been performed; the routine Separate
is left to the reader. Code is shown in Fig. 1.1.
1.4 The general way to do this is to write a procedure with heading

    void ProcessFile( const char *FileName );

which opens FileName, does whatever processing is needed, and then closes it. If a line of
the form

    #include SomeFile

is detected, then the call

    ProcessFile( SomeFile );

is made recursively. Self-referential includes can be detected by keeping a list of files for
which a call to ProcessFile has not yet terminated, and checking this list before making a
new call to ProcessFile.
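A minimal sketch of this scheme is shown below; it is ours, not part of the original solution.
The table of active file names, the size constants, and the choice to simply echo non-include
lines are assumptions, and error handling is kept to a minimum.

    #include <stdio.h>
    #include <string.h>

    #define MaxFiles  100
    #define MaxLine   512

    /* List of files whose ProcessFile call has not yet returned;
       used to detect self-referential includes.  No overflow checks. */
    static char Active[ MaxFiles ][ MaxLine ];
    static int  NumActive = 0;

    static int IsActive( const char *FileName )
    {
        int i;

        for( i = 0; i < NumActive; i++ )
            if( strcmp( Active[ i ], FileName ) == 0 )
                return 1;
        return 0;
    }

    void ProcessFile( const char *FileName )
    {
        FILE *F;
        char Line[ MaxLine ], Included[ MaxLine ];

        if( IsActive( FileName ) )
            return;                          /* self-referential include: skip it */
        strcpy( Active[ NumActive++ ], FileName );

        F = fopen( FileName, "r" );
        if( F != NULL )
        {
            while( fgets( Line, MaxLine, F ) != NULL )
            {
                if( sscanf( Line, "#include %511s", Included ) == 1 )
                    ProcessFile( Included ); /* recursive call */
                else
                    fputs( Line, stdout );   /* "processing": echo the line */
            }
            fclose( F );
        }

        NumActive--;                         /* this call has terminated */
    }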
1.5 (a) The proof is by induction. The theorem is clearly true for 0 < X ≤ 1, since it is true for
X = 1, and for X < 1, log X is negative. It is also easy to see that the theorem holds for
1 < X ≤ 2, since it is true for X = 2, and for X < 2, log X is at most 1. Suppose the theorem
is true for p < X ≤ 2p (where p is a positive integer), and consider any 2p < Y ≤ 4p
(p ≥ 1). Then log Y = 1 + log(Y/2) < 1 + Y/2 < Y/2 + Y/2 ≤ Y, where the first inequality
follows by the inductive hypothesis.

(b) Let 2^X = A. Then A^B = (2^X)^B = 2^(XB). Thus log A^B = XB. Since X = log A, the
theorem is proved.
1.6 (a) The sum is 4/3 and follows directly from the formula.

(b) S = 1/4 + 2/4^2 + 3/4^3 + ... . 4S = 1 + 2/4 + 3/4^2 + ... . Subtracting the first equation
from the second gives 3S = 1 + 1/4 + 1/4^2 + ... . By part (a), 3S = 4/3, so S = 4/9.

(c) S = 1/4 + 4/4^2 + 9/4^3 + ... . 4S = 1 + 4/4 + 9/4^2 + 16/4^3 + ... . Subtracting the first
equation from the second gives 3S = 1 + 3/4 + 5/4^2 + 7/4^3 + ... . Rewriting, we get
3S = 2 Σ_{i=0}^{∞} i/4^i + Σ_{i=0}^{∞} 1/4^i. Thus 3S = 2(4/9) + 4/3 = 20/9, so S = 20/27.

(d) Let S_N = Σ_{i=0}^{∞} i^N/4^i. Follow the same method as in parts (a) - (c) to obtain a
formula for S_N in terms of S_{N-1}, S_{N-2}, ..., S_0, and solve the recurrence. Solving the
recurrence is very difficult.
double RoundUp( double N, int DecPlaces )
{
    int i;
    double AmountToAdd = 0.5;

    for( i = 0; i < DecPlaces; i++ )
        AmountToAdd /= 10;
    return N + AmountToAdd;
}

void PrintFractionPart( double FractionPart, int DecPlaces )
{
    int i, ADigit;

    for( i = 0; i < DecPlaces; i++ )
    {
        FractionPart *= 10;
        ADigit = IntPart( FractionPart );
        PrintDigit( ADigit );
        FractionPart = DecPart( FractionPart );
    }
}

void PrintReal( double N, int DecPlaces )
{
    int IntegerPart;
    double FractionPart;

    if( N < 0 )
    {
        putchar( '-' );
        N = -N;
    }
    N = RoundUp( N, DecPlaces );
    IntegerPart = IntPart( N );
    FractionPart = DecPart( N );
    PrintOut( IntegerPart );    /* Using routine in text */
    if( DecPlaces > 0 )
        putchar( '.' );
    PrintFractionPart( FractionPart, DecPlaces );
}
Fig. 1.1.
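Fig. 1.1 relies on the helpers IntPart and DecPart, which (like Separate) are left to the reader.
One plausible sketch, assuming the argument is non-negative by the time these are called
(PrintReal guarantees this) and small enough to fit in an int, is:

    #include <math.h>

    /* Plausible helpers for Fig. 1.1; the manual leaves these to the reader.
       Both assume a non-negative argument. */
    int IntPart( double N )
    {
        return ( int ) floor( N );      /* integer portion */
    }

    double DecPart( double N )
    {
        return N - floor( N );          /* fractional portion, in [0, 1) */
    }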
1.7 Σ_{i=⌈N/2⌉}^{N} 1/i = Σ_{i=1}^{N} 1/i - Σ_{i=1}^{⌈N/2⌉-1} 1/i ≈ ln N - ln(N/2) ≈ ln 2.
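As a quick numerical sanity check (ours, not part of the solution), the following program sums
the tail of the harmonic series for an arbitrarily chosen N and compares it with ln 2:

    #include <stdio.h>
    #include <math.h>

    int main( void )
    {
        int i, N = 1000000;             /* arbitrary value for the check */
        double Sum = 0.0;

        /* Sum 1/i for i running from ceil(N/2) up to N */
        for( i = ( N + 1 ) / 2; i <= N; i++ )
            Sum += 1.0 / i;

        printf( "Sum = %f, ln 2 = %f\n", Sum, log( 2.0 ) );
        return 0;
    }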
1.8 2^4 = 16 ≡ 1 (mod 5). (2^4)^25 ≡ 1^25 (mod 5). Thus 2^100 ≡ 1 (mod 5).
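A short check of this congruence (ours): multiply by 2 one hundred times, reducing mod 5
after each step so the intermediate values stay small.

    #include <stdio.h>

    int main( void )
    {
        int i, Result = 1;

        /* Compute 2^100 mod 5 by repeated multiplication, reducing mod 5
           at every step. */
        for( i = 0; i < 100; i++ )
            Result = ( Result * 2 ) % 5;

        printf( "2^100 mod 5 = %d\n", Result );   /* prints 1 */
        return 0;
    }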
1.9 (a) Proof is by induction. The statement is clearly true for N = 1 and N = 2. Assume true
for N = 1, 2, ..., k. Then

    Σ_{i=1}^{k+1} F_i = Σ_{i=1}^{k} F_i + F_{k+1}.

By the induction hypothesis, the value of the sum on the right is F_{k+2} - 2 + F_{k+1} =
F_{k+3} - 2, where the latter equality follows from the definition of the Fibonacci numbers.
This proves the claim for N = k + 1, and hence for all N.

(b) As in the text, the proof is by induction. Observe that φ + 1 = φ^2. This implies that
φ^{-1} + φ^{-2} = 1. For N = 1 and N = 2, the statement is true. Assume the claim is true for
N = 1, 2, ..., k. Then

    F_{k+1} = F_k + F_{k-1}

by the definition, and we can use the inductive hypothesis on the right-hand side, obtaining

    F_{k+1} < φ^k + φ^{k-1} = φ^{-1}φ^{k+1} + φ^{-2}φ^{k+1} = (φ^{-1} + φ^{-2})φ^{k+1} = φ^{k+1},

proving the theorem.

(c) See any of the advanced math references at the end of the chapter. The derivation
involves the use of generating functions.
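The identity in part (a) is easy to spot-check with a short program (ours); it assumes the
convention F_1 = 1, F_2 = 2, which is the convention under which the identity above holds.

    #include <stdio.h>

    /* Verifies sum_{i=1}^{N} F_i = F_{N+2} - 2 for small N,
       with F_1 = 1, F_2 = 2. */
    int main( void )
    {
        long F[ 30 ], Sum = 0;
        int i, N = 25;

        F[ 1 ] = 1;
        F[ 2 ] = 2;
        for( i = 3; i <= N + 2; i++ )
            F[ i ] = F[ i - 1 ] + F[ i - 2 ];

        for( i = 1; i <= N; i++ )
        {
            Sum += F[ i ];
            if( Sum != F[ i + 2 ] - 2 )
                printf( "Identity fails at N = %d\n", i );
        }
        printf( "Checked N = 1 .. %d\n", N );
        return 0;
    }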
1.10 (a) Σ_{i=1}^{N} (2i - 1) = 2 Σ_{i=1}^{N} i - Σ_{i=1}^{N} 1 = N(N+1) - N = N^2.

(b) The easiest way to prove this is by induction. The case N = 1 is trivial. Otherwise,

    Σ_{i=1}^{N+1} i^3 = (N+1)^3 + Σ_{i=1}^{N} i^3
                      = (N+1)^3 + N^2(N+1)^2/4
                      = (N+1)^2 [ N^2/4 + (N+1) ]
                      = (N+1)^2 (N^2 + 4N + 4)/4
                      = (N+1)^2 (N+2)^2/2^2
                      = [ (N+1)(N+2)/2 ]^2
                      = [ Σ_{i=1}^{N+1} i ]^2.
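Both identities are easy to spot-check with a short program (ours, with an arbitrarily chosen N):

    #include <stdio.h>

    /* Checks, for N = 1 .. 100, that the sum of the first N odd numbers is N^2
       and that the sum of the first N cubes is the square of the sum 1 + ... + N. */
    int main( void )
    {
        long OddSum = 0, CubeSum = 0, Sum = 0;
        long i, N = 100;

        for( i = 1; i <= N; i++ )
        {
            OddSum  += 2 * i - 1;
            CubeSum += i * i * i;
            Sum     += i;
            if( OddSum != i * i || CubeSum != Sum * Sum )
                printf( "Identity fails at N = %ld\n", i );
        }
        printf( "Checked N = 1 .. %ld\n", N );
        return 0;
    }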
Chapter 2: Algorithm Analysis
2.1 2/N, 37, √N, N, N log log N, N log N, N log(N^2), N log^2 N, N^{1.5}, N^2, N^2 log N, N^3,
2^{N/2}, 2^N. N log N and N log(N^2) grow at the same rate.

2.2 (a) True.

(b) False. A counterexample is T_1(N) = 2N, T_2(N) = N, and f(N) = N.

(c) False. A counterexample is T_1(N) = N^2, T_2(N) = N, and f(N) = N^2.

(d) False. The same counterexample as in part (c) applies.
2.3 We claim that N log N is the slower growing function. To see this, suppose otherwise.
Then N^{ε/√(log N)} would grow slower than log N. Taking logs of both sides, we find that,
under this assumption, (ε/√(log N)) log N grows slower than log log N. But the first expression
simplifies to ε √(log N). If L = log N, then we are claiming that ε √L grows slower
than log L, or, equivalently, that ε^2 L grows slower than log^2 L. But we know that
log^2 L = o(L), so the original assumption is false, proving the claim.
2.4 Clearly, log^{k_1} N = o(log^{k_2} N) if k_1 < k_2, so we need to worry only about positive
integers. The claim is clearly true for k = 0 and k = 1. Suppose it is true for k < i. Then, by
L'Hospital's rule,

    lim_{N→∞} (log^i N)/N = lim_{N→∞} i (log^{i-1} N)/N

The second limit is zero by the inductive hypothesis, proving the claim.
2.5 Let f(N) = 1 when N is even, and N when N is odd. Likewise, let g(N) = 1 when N is
odd, and N when N is even. Then the ratio f(N)/g(N) oscillates between 0 and ∞.
2.6 For all these programs, the following analysis will agree with a simulation:

(I) The running time is O(N).

(II) The running time is O(N^2).

(III) The running time is O(N^3).

(IV) The running time is O(N^2).

(V) j can be as large as i^2, which could be as large as N^2. k can be as large as j, which is
N^2. The running time is thus proportional to N · N^2 · N^2, which is O(N^5).

(VI) The if statement is executed at most N^3 times, by previous arguments, but it is true
only O(N^2) times (because it is true exactly i times for each i). Thus the innermost loop is
only executed O(N^2) times. Each time through, it takes O(j^2) = O(N^2) time, for a total of
O(N^4). This is an example where multiplying loop sizes can occasionally give an overestimate.
2.7 (a) It should be clear that all algorithms generate only legal permutations. The first two
algorithms have tests to guarantee no duplicates; the third algorithm works by shuffling an
array that initially has no duplicates, so none can occur. It is also clear that the first two
algorithms are completely random, and that each permutation is equally likely. The third
algorithm, due to R. Floyd, is not as obvious; the correctness can be proved by induction.
See
J. Bentley, "Programming Pearls," Communications of the ACM 30 (1987), 754-757.
Note that if the second line of algorithm 3 is replaced with the statement
Swap( A[i], A[ RandInt( 0, N-1 ) ] );
then not all permutations are equally likely. To see this, notice that for N = 3, there are 27
equally likely ways of performing the three swaps, depending on the three random integers.
Since there are only 6 permutations, and 6 does not evenly divide
27, each permutation cannot possibly be equally represented.
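For reference, a sketch of the third algorithm appears below. Since the exercise statement is
not reproduced in this manual, the function name, the convention that A[0..N-1] holds a
permutation of 1, ..., N, and the particular RandInt implementation are all our assumptions.

    #include <stdlib.h>

    /* One possible RandInt: uniform integer in [Low, High].  Uses rand(), so it
       is only approximately uniform; good enough for a sketch. */
    int RandInt( int Low, int High )
    {
        return Low + rand( ) % ( High - Low + 1 );
    }

    /* The third permutation algorithm: fill A with 1..N, then shuffle in place,
       swapping A[i] with a randomly chosen earlier (or equal) position. */
    void Permute( int A[ ], int N )
    {
        int i, j, Tmp;

        for( i = 0; i < N; i++ )
            A[ i ] = i + 1;

        for( i = 1; i < N; i++ )
        {
            j = RandInt( 0, i );        /* note: 0..i, NOT 0..N-1 */
            Tmp = A[ i ];               /* Swap( A[ i ], A[ j ] ) */
            A[ i ] = A[ j ];
            A[ j ] = Tmp;
        }
    }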
(b) For the first algorithm, the time to decide if a random number to be placed in A[i] has
not been used earlier is O(i). The expected number of random numbers that need to be
tried is N/(N - i). This is obtained as follows: i of the N numbers would be duplicates.
Thus the probability of success is (N - i)/N. Thus the expected number of independent
trials is N/(N - i). The time bound is thus

    Σ_{i=0}^{N-1} Ni/(N - i) < Σ_{i=0}^{N-1} N^2/(N - i) = N^2 Σ_{i=0}^{N-1} 1/(N - i) = N^2 Σ_{j=1}^{N} 1/j = O(N^2 log N)

The second algorithm saves a factor of i for each random number, and thus reduces the time
bound to O(N log N) on average. The third algorithm is clearly linear.
(c, d) The running times should agree with the preceding analysis if the machine has enough
memory. If not, the third algorithm will not seem linear because of a drastic increase for
large N.

(e) The worst-case running time of algorithms I and II cannot be bounded because there is
always a finite probability that the program will not terminate by some given time T. The
algorithm does, however, terminate with probability 1. The worst-case running time of the
third algorithm is linear; its running time does not depend on the sequence of random
numbers.
2.8 Algorithm 1 would take about 5 days for N = 10,000, 14.2 years for N = 100,000, and 140
centuries for N = 1,000,000. Algorithm 2 would take about 3 hours for N = 100,000 and
about 2 weeks for N = 1,000,000. Algorithm 3 would use 1½ minutes for N = 1,000,000.
These calculations assume a machine with enough memory to hold the array. Algorithm 4
solves a problem of size 1,000,000 in 3 seconds.
2.9 (a) O(N^2).

(b) O(N log N).

2.10 (c) The algorithm is linear.
2.11 Use a variation of binary search to get an O(log N) solution (assuming the array is preread).
2.13 (a) Test to see if N is an odd number (or 2) and is not divisible by 3, 5, 7, ..., √N.

(b) O(√N), assuming that all divisions count for one unit of time.

(c) B = O(log N).

(d) O(2^{B/2}).

(e) If a 20-bit number can be tested in time T, then a 40-bit number would require about T^2
time.

(f) B is the better measure because it more accurately represents the size of the input.
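A sketch of the test in part (a) (the function name is ours; the loop compares i*i against N
rather than computing √N, and assumes i*i does not overflow):

    /* Trial-division test from part (a): N is prime if it is 2, or if it is odd,
       greater than 1, and not divisible by any odd i up to sqrt(N). */
    int IsPrime( unsigned long N )
    {
        unsigned long i;

        if( N == 2 )
            return 1;
        if( N < 2 || N % 2 == 0 )
            return 0;
        for( i = 3; i * i <= N; i += 2 )
            if( N % i == 0 )
                return 0;
        return 1;
    }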