EECS 3101:       Design and Analysis of Algorithms        Instructor: Andy Mirzaian
                      Towers of Hanoi and Generalization

As the title of the course suggests, this course is centered on studying techniques for the
design and analysis of algorithms. The concept of algorithm is central to all of computing,
regardless of which subfield of computer science you may be interested to pursue later.

In this short note we will introduce the main topics that will be discussed in some depth
in the course. We will do this by examining a single problem which you may already be
familiar with, namely, the Tower of Hanoi Problem.

Through the study of this single example we will see the following concepts:

1. Recursive and iterative algorithm design
2. Proofs of correctness: pre- and post-conditions, loop invariants
3. Termination and running-time analysis via recurrence relations
4. Recursion trees
5. Asymptotic ("Big-O") notation

This introductory note will only touch upon these topics.
The remainder of the course will offer a deeper study of these topics.
(The syllabus web page gives a more detailed list of topics discussed in this course.)
If some points remain unclear to you on a first reading of this note, you should not worry.
After each of these concepts is studied more fully in the course, you may come back and
take another, fresh look at this note.

The Towers of Hanoi Problem:

This is a neat little puzzle invented by the French mathematician Edouard Lucas in 1883.
You may find this in almost all introductory books on algorithms. There are many web sites
devoted to this problem too. The problem is: we are given n disks of varying sizes and
three stacks A, B, C. The n disks are initially stacked in decreasing size on stack A. The
objective is to transfer the entire tower to stack B, using stack C as an intermediate holding
place. We should move the disks one at a time and never move a larger one onto a smaller.
[Figure: the Tower of Hanoi puzzle]

What is the minimum number of disk moves we need to solve this problem?
Below we will answer this question by giving an optimal recursive algorithm and an
iterative one, and then analyzing them. Next, we will generalize the problem by using
more than three stacks.

A Recursive Solution:

The largest disk is initially at the bottom of stack A. At some point it has to move.
Consider the first time it moves. For now suppose it moves to its final destination, i.e.,
stack B (we will shortly see that this is the best possible way). At that point, all the
remaining n-1 disks, being smaller, must reside on stack C. That means by now, we
have solved the "n-1 disks" instance of the same problem, so far having moved the
top n-1 disks from A to C, using B as the intermediate stack. After moving the largest
disk from A to B, again we see that the remaining task is to move the other n-1 disks
recursively from C to B, using A as the intermediate stack.

Exercise 0: Prove that this strategy will use the minimum number of moves possible.
(Hint: use mathematical induction.) This is the question of efficiency.

Notation:  "A ==> B"  means "move the top disk of stack A onto stack B".
So, here is the algorithm:

   ALGORITHM TH (n, A, B, C)
 1. if n < 1 then return
 2. TH (n-1, A, C, B)
 3. A ==> B
 4. TH (n-1, C, B, A)
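As an aside (not part of the original note), the four lines above can be rendered directly in Python; the function name `hanoi` and the move-list representation are my own choices:

```python
def hanoi(n, src="A", dst="B", via="C", moves=None):
    """TH(n, src, dst, via): append the moves that carry n disks from src to dst."""
    if moves is None:
        moves = []
    if n < 1:                               # line 1: no disks, nothing to do
        return moves
    hanoi(n - 1, src, via, dst, moves)      # line 2: TH(n-1, A, C, B)
    moves.append((src, dst))                # line 3: A ==> B
    hanoi(n - 1, via, dst, src, moves)      # line 4: TH(n-1, C, B, A)
    return moves
```

For n = 2 this yields the three moves (A, C), (A, B), (C, B).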

Proof of correctness:  We use the concepts of pre-condition and post-condition which,
in a logical way, state the required state of the variables at the start and at the end of
the routine. We then have to logically prove that if the pre-condition holds just before
the procedure is called, the post-condition will be established just after that call
terminates (if it ever terminates). We can basically formalize the informal description
we gave above. Here are the pre- and post-conditions of this procedure:

Pre-Condition: We have n > 0 disks stacked in decreasing order on stack A, with stacks
B and C empty.

Post-Condition: All n disks are stacked in decreasing order on stack B, with stacks
A and C empty.

The proof of correctness also follows the lines of argument given informally above.
In this proof which will essentially be by mathematical induction, we will also have to use
the pre- and post-conditions for the two recursive calls at lines 2 and 4 of the algorithm.
This much suffices at this point. More will be said about these concepts later.
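As a concrete (and entirely optional) illustration of these concepts, here is a Python sketch of my own that runs TH on explicit stacks and checks the pre-condition, the post-condition, and the never-larger-onto-smaller rule with assertions:

```python
def solve(n):
    # Pre-condition: n disks in decreasing order (bottom to top) on A; B and C empty.
    stacks = {"A": list(range(n, 0, -1)), "B": [], "C": []}

    def th(m, src, dst, via):
        if m < 1:
            return
        th(m - 1, src, via, dst)
        disk = stacks[src].pop()                              # src ==> dst
        assert not stacks[dst] or stacks[dst][-1] > disk, "larger disk onto smaller!"
        stacks[dst].append(disk)
        th(m - 1, via, dst, src)

    th(n, "A", "B", "C")
    # Post-condition: all n disks in decreasing order on B; A and C empty.
    assert stacks["B"] == list(range(n, 0, -1))
    assert not stacks["A"] and not stacks["C"]
    return stacks
```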

Termination and Timing:
Let T(n) denote the number of disk moves the procedure will use to solve the TH instance of
size n. We will see that T(n) is finite, i.e., the algorithm terminates. From the structure of the
algorithm we see that

T(n) = 0                  if n = 0
T(n) = 2T(n-1) + 1    if n > 0
Such a formula that expresses a function in terms of itself and other (known) functions and
operations is called a recurrence relation. Later we will learn some solution techniques for
such recurrences. The solution for this particular recurrence is
T(n) = 2^n - 1   for all n >= 0.
To convince yourself that this is the correct solution, simply plug it in the recurrence formula
and see that it is satisfied. (This is essentially doing mathematical induction.)
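The same plug-in check can be done mechanically; a small Python sketch (mine, not the note's):

```python
def T(n):
    """The recurrence: T(n) = 0 for n = 0, and T(n) = 2*T(n-1) + 1 for n > 0."""
    return 0 if n < 1 else 2 * T(n - 1) + 1

# The closed form 2^n - 1 agrees with the recurrence for the first few values of n:
assert all(T(n) == 2**n - 1 for n in range(20))
```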

The recursive algorithm given above is conceptually simple to understand. What if we wanted
to "visualize" the sequence of individual disk moves? One way to do this is by using the
recursion tree. This is a tree with each of its internal nodes depicting the recursion stack frame
for an individual (recursive) call. The leaves of the tree correspond to terminating steps
with no further recursive calls. Given the three lines 2, 3, and 4, the recursion tree here will
be a ternary tree: the first subtree of each node corresponds to the first recursive call at line 2,
the second subtree (just a leaf in this case) corresponds to line 3, and the third subtree
corresponds to the second recursive call at line 4. Here is the recursion tree for n = 2:

[Figure: the recursion tree for n = 2]

If we read the labels on the leaves from left to right (ignoring the blank leaves),
we get the sequence of disk moves A ==> C, A ==> B, C ==> B. This also shows that
T(2) = 3. It would be cumbersome to draw the recursion tree for large values of n.
In general proof arguments a "mental picture" of the tree may be used. To be able
to better visualize the sequence of individual disk moves we could consider an
iterative algorithm.

An Iterative Solution:

There is a general technique that the computer compiler uses to enable it to run a recursive
algorithm iteratively, using a recursion stack. That would not answer our "visualization"
concern. So, it is not our intention to utilize a user-defined recursion stack to convert the
above recursive algorithm to an iterative one. We will show a much simpler and more
elegant solution. For a specific instance of the problem, i.e., a fixed value of n, we put
the 3 stacks A, B, C in a cyclic order. The cyclic order is
           A --> B --> C --> A         if n is odd
           A --> C --> B --> A         if n is even
Now here is the simple iterative algorithm:

  1.  Loop:
        1a. move the smallest disk one position in the direction of the cyclic order.
        1b. make the only other move possible, not involving the smallest disk.
                  Terminate the loop if no such move is possible.
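The loop can be rendered in Python as follows; the function name `iter_th` and the dictionary-of-lists stack representation are my own choices, not the note's:

```python
def iter_th(n):
    """Iterative Tower of Hanoi: returns (list of moves, final stacks)."""
    stacks = {"A": list(range(n, 0, -1)), "B": [], "C": []}
    # Cyclic order: A --> B --> C --> A if n is odd, A --> C --> B --> A if n is even.
    order = "ABC" if n % 2 == 1 else "ACB"
    nxt = {order[i]: order[(i + 1) % 3] for i in range(3)}
    moves = []
    if n == 0:
        return moves, stacks
    while True:
        # 1a. move the smallest disk one position in the direction of the cyclic order
        src = next(s for s in "ABC" if stacks[s] and stacks[s][-1] == 1)
        dst = nxt[src]
        stacks[dst].append(stacks[src].pop())
        moves.append((src, dst))
        # 1b. make the only other legal move, not involving the smallest disk
        a, b = (s for s in "ABC" if s != dst)
        if not stacks[a] and not stacks[b]:
            break                          # no such move is possible: terminate
        if not stacks[b] or (stacks[a] and stacks[a][-1] < stacks[b][-1]):
            stacks[b].append(stacks[a].pop())
            moves.append((a, b))
        else:
            stacks[a].append(stacks[b].pop())
            moves.append((b, a))
    return moves, stacks
```

For n = 2 this produces the same moves (A, C), (A, B), (C, B) as the recursive algorithm, in line with Claim 0 below.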

One can easily follow the individual moves made by this iterative algorithm. However,
the "bigger picture" that the recursive solution provided is lost here. In order to reason
about such an iterative solution, we need to know what general "pattern" is maintained
with each iteration. The concept of loop invariant is used to answer this. It basically
corresponds to the concept of induction hypothesis in a mathematical induction
proof. More will be said about this later.

Several questions arise about this purported iterative solution. Here are some of them:

1. Termination: Does this iterative method terminate, or can the loop run forever?
2. Correctness: Does it actually solve the problem, or does it get stuck at step 1b in a
    partially solved state?
3. Complexity: If it terminates with a correct solution, how many disk moves does it make?
4. Design: How does one come up with such an algorithmic solution in the first place?

Here is the answer in short:

Claim 0: The recursive algorithm TH (n,A,B,C) and the iterative algorithm
IterTH (n,A,B,C) perform exactly the same sequence of disk moves.

Exercise 1: Prove this claim by mathematical induction on n.

This answers questions 1-3 above. The answer to question 4 is: the iterative
algorithm was "arrived at" after experimenting with the recursive solution
and observing the pattern of disk moves!

A Generalization (using more than 3 stacks):

What if we had more than 3 stacks available? How much could we save in the
number of disk moves to move all n disks from the initial to the final stack?
Suppose we have k stacks available, including the initial and final stacks.

Definition: Let T_k(n) denote the minimum number of disk moves needed to move the n disks
from the initial to the final stack, using a total of k stacks (i.e., k-2 intermediate stacks).

With the above definition, we see that T_3(n) = 2^n - 1 with the standard 3 stacks.
Below we study an upper bound for T_k(n) in general.

An upper bound for T_k(n), for k > 3:

The following algorithm gives a reasonably good upper bound on T_k(n), for k > 3.

   ALGORITHM TowerOfHanoi (n disks, k stacks)
   /* short-hand: initial & final stack names are not shown here */
      1. if n < k then move each disk to a separate stack, then reassemble
                       them on the final stack, with a total of 2n-1 moves
      2. else do
            2a. Choose an integer m between 1 and n-1.
            2b. With a recursive call TowerOfHanoi (n-m, k),
                using all k stacks, move the n-m smallest disks to another intermediate stack.
            2c. With a recursive call TowerOfHanoi (m, k-1),
                using k-1 stacks, move the remaining m largest disks to their destination,
                without using the intermediate stack that is holding the n-m smallest disks.
            2d. With a recursive call TowerOfHanoi (n-m, k),
                using all k stacks, move the n-m smallest disks to their final destination stack.

NOTE: The reader may wish to read beyond this point after having learned a bit about
recurrences and how to solve them. We will start studying recurrences after a few lectures.

From the above algorithm we get the following (upper bound) recurrence for T_k(n):

T_k(n) = 2n - 1    if n < k
T_k(n) <= 2 T_k(n-m) + T_{k-1}(m)    if n >= k  (for some m, 0 < m < n).

Question: What is m? The above recurrence is not well defined until we specify
a value for m. The integer m should be chosen so as to minimize the overall
value of T_k(n). The same applies to all recursive calls. In general, m could be a
function of n and k. In other words, the correct recurrence is:

T_k(n) <= min { 2 T_k(n-m) + T_{k-1}(m)  |  0 < m < n }.
Exercise 2:  Does the above algorithm make the minimum number of moves possible? 
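The recurrence above, with m minimized over, is easy to evaluate by memoization; a sketch in Python, where the function name `moves_upper` is my own:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def moves_upper(n, k):
    """Upper bound on T_k(n) given by the recurrence, minimizing over m."""
    if n == 0:
        return 0
    if k == 3:
        return 2**n - 1                    # T_3(n) = 2^n - 1
    if n < k:
        return 2 * n - 1                   # boundary case of the algorithm
    return min(2 * moves_upper(n - m, k) + moves_upper(m, k - 1)
               for m in range(1, n))
```

For example, moves_upper(5, 3) = 31 while moves_upper(5, 4) = 13: the extra stack already pays off at n = 5.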

An upper bound for T_4(n):

Take k = 4 in the above recurrence. We get:

T_4(n) <= min { 2 T_4(n-m) + T_3(m)  |  0 < m < n }.

Since T_3(m) = 2^m - 1 is about 2^m, we simplify the recurrence to:

T_4(n) <= min { 2 T_4(n-m) + 2^m  |  0 < m < n }.
Suppose the back-solution goes from n to n - m_0, then to n - m_0 - m_1, ..., until we get down to 0
(or to a number less than 4, so that we can use the recurrence boundary condition). So,

n = m_0 + m_1 + ... + m_j        for some j.

If we expand the recurrence, we get

T_4(n) <= 2 T_4(n - m_0) + 2^{m_0}
       <= 2 [ 2 T_4(n - m_0 - m_1) + 2^{m_1} ] + 2^{m_0}
        = 2^2 T_4(n - m_0 - m_1) + 2^{1+m_1} + 2^{m_0}
       <= ...
       <= 2^{j+m_j} + ... + 2^{2+m_2} + 2^{1+m_1} + 2^{m_0}

Observation 1: 2^x is a convex function of x. So, we have 2^x + 2^y >= 2 * 2^{(x+y)/2}.
(That is, the average of the function is no less than the function of the average.)

Thus, to minimize the summation expression for T_4(n) above, we must have

m_0 = 1 + m_1 = 2 + m_2 = ... = j + m_j.

For, otherwise, we could take some amount from a bigger exponent and give it to a
smaller one to balance the exponents, while keeping the sum of the m_i's, i.e., n,
unchanged. By Observation 1, this reduces the sum for T_4(n).
Therefore, if m_0 = m, then m_i = m - i. In other words, the optimum partition
of n into the m_i chunks is:

n = m + (m-1) + (m-2) + ... + 2 + 1 = m(m+1)/2.

Thus, m^2/2 < n < (m+1)^2/2. Therefore, m < sqrt(2n) < m+1. Since m is an
integer, for general values of n we take m = floor(sqrt(2n)).

So, T_4(n) is (at most) about m copies of 2^m added up. Therefore:

T_4(n) = O( m 2^m ) = O( sqrt(2n) 2^sqrt(2n) ).

Here "O(...)" is the "Big-O" asymptotic upper bound notation. We will introduce
and use asymptotic notations a lot in this course. Note how, by using one more
stack, we go from T_3(n) = 2^n - 1 down to T_4(n) = O( sqrt(2n) 2^sqrt(2n) ).
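As a sanity check on this analysis (my own experiment, not part of the note), one can evaluate the simplified recurrence for T_4(n) directly and inspect the minimizing m:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def t4(n):
    """Simplified upper-bound recurrence: min over m of 2*t4(n-m) + 2**m."""
    if n == 0:
        return 0
    if n < 4:
        return 2 * n - 1                   # boundary: fewer disks than stacks
    return min(2 * t4(n - m) + 2**m for m in range(1, n))

def best_m(n):
    """The m that minimizes 2*t4(n-m) + 2**m."""
    return min(range(1, n), key=lambda m: 2 * t4(n - m) + 2**m)

# For n = 10 the minimizing m is 4, which equals floor(sqrt(2*10)):
print(best_m(10), math.floor(math.sqrt(20)))   # -> 4 4
```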

Exercise 3:    Obtain a good (asymptotic) lower bound for T_4(n).

Exercise 4:    Derive an (asymptotic) upper bound on T_5(n) by using
the above recurrence for T_k(n) with k = 5.

Exercise 5:    Solve the general recurrence for T_k(n).

    By taking this course, you are embarking on
    an exciting journey into the world of algorithms.