Problem 2.1-2:
Show that for any real constants a and b, b > 0, (n + a)^b = Θ(n^b).
Note the need for absolute values: for all n ≥ 2|a| we have n/2 ≤ n − |a| ≤ n + a ≤ 2n, so (n/2)^b ≤ (n + a)^b ≤ (2n)^b, and since b > 0 both bounds are constant multiples of n^b.
Problem 2.1-4:
(a) Is 2^(n+1) = O(2^n)?
(b) Is 2^(2n) = O(2^n)?

(a) Is 2 · 2^n ≤ c · 2^n?
Yes, if c ≥ 2, for all n. So 2^(n+1) = O(2^n).

(b) Is 2^(2n) ≤ c · 2^n?
Note that 2^(2n) = 2^n · 2^n.
Is 2^n · 2^n ≤ c · 2^n?
Is 2^n ≤ c?
No! Certainly for any constant c we can find an n such that this is not true.
Recurrence Relations
Many algorithms, particularly divide and conquer algorithms, have time complexities which are naturally modeled by recurrence relations.
A recurrence relation is an equation which is defined in terms of itself.
Why are recurrences good things?
Recursion is Mathematical Induction!
In both, we have general and boundary conditions, with the general condition breaking the problem into smaller and smaller pieces.
The initial or boundary conditions terminate the recursion.
As we will see, induction provides a useful tool to solve recurrences - guess a solution and prove it by induction.
n    | 0 | 1 | 2 | 3 | 4  | 5  | 6  | 7
T(n) | 0 | 1 | 3 | 7 | 15 | 31 | 63 | 127

Prove by induction: T(n) = 2^n − 1.
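Assuming the table comes from the recurrence T(n) = 2T(n−1) + 1 with T(0) = 0 (these values are consistent with it), a short script regenerates the table and confirms the guess T(n) = 2^n − 1:

```python
# Iterate the assumed recurrence T(n) = 2*T(n-1) + 1, T(0) = 0,
# and check it against the guessed closed form T(n) = 2**n - 1.
def T(n):
    t = 0
    for _ in range(n):
        t = 2 * t + 1
    return t

assert [T(n) for n in range(8)] == [0, 1, 3, 7, 15, 31, 63, 127]  # the table above
assert all(T(n) == 2**n - 1 for n in range(64))                   # the induction claim
```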
Solving Recurrences
No general procedure for solving recurrence relations is known, which is why it is an art. My approach is:
Realize that linear, finite-history, constant-coefficient recurrences can always be solved
Consider a_n = 2a_{n−1} + a_{n−2}, with a_0 = 0 and a_1 = 1.
It has history = 2, degree = 1, and coefficients of 2 and 1. Thus it can be solved mechanically! Proceed: form the characteristic equation x² = 2x + 1, solve for its roots x = 1 ± √2, write a_n as a linear combination of the roots' nth powers, and fix the constants from a_0 and a_1.
Systems like Mathematica and Maple have packages for doing this.
Guess a solution and prove by induction
To guess the solution, play around with small values for insight.
Note that you can do inductive proofs with big-O notation - just be sure you use it correctly.
Example: T(n) = 2T(⌊n/2⌋) + n.
Show that T(n) ≤ c·n·lg n for large enough c and n. Assume that it is true for ⌊n/2⌋, then

T(n) ≤ 2(c⌊n/2⌋ lg⌊n/2⌋) + n
     ≤ c·n·lg(n/2) + n
     = c·n·lg n − c·n·lg 2 + n
     = c·n·lg n − c·n + n
     ≤ c·n·lg n   (whenever c ≥ 1)

Starting with basis cases T(2) = 4 and T(3) = 5 lets us complete the proof for n ≥ 2.
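Assuming the example recurrence is T(n) = 2T(⌊n/2⌋) + n (consistent with the basis values T(2) = 4 and T(3) = 5), the inductive bound can be spot-checked numerically with c = 2:

```python
import math
from functools import lru_cache

# Spot-check the substitution-method claim T(n) <= c*n*lg(n) with c = 2
# on the assumed recurrence T(n) = 2*T(n//2) + n, basis T(2) = 4, T(3) = 5.
@lru_cache(maxsize=None)
def T(n):
    if n == 2:
        return 4
    if n == 3:
        return 5
    return 2 * T(n // 2) + n

assert all(T(n) <= 2 * n * math.log2(n) for n in range(2, 5000))
```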
Try backsubstituting until you know what is going on
Also known as the iteration method. Plug the recurrence back into itself until you see a pattern.
Example: T(n) = 3T(⌊n/4⌋) + n.
Try backsubstituting:

T(n) = n + 3T(n/4)
     = n + 3(n/4 + 3T(n/16))
     = n + (3/4)n + 9(n/16 + 3T(n/64))
     = n + (3/4)n + (9/16)n + 27T(n/64)

The (3/4)^i · n term should now be obvious.
Although there are only log₄ n terms before we get to T(1), it doesn't hurt to sum them all, since this is a quickly converging geometric series:

T(n) ≤ n · Σ_{i≥0} (3/4)^i = 4n, so T(n) = O(n).
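Assuming the recurrence here is T(n) = 3T(⌊n/4⌋) + n, the 4n bound from the geometric series can be verified directly (the base case T(n) = n for n < 4 is an assumption for illustration):

```python
from functools import lru_cache

# Verify T(n) <= 4n for the assumed recurrence T(n) = 3*T(n//4) + n,
# using T(n) = n for n < 4 as an assumed base case.
@lru_cache(maxsize=None)
def T(n):
    if n < 4:
        return n
    return 3 * T(n // 4) + n

assert all(T(n) <= 4 * n for n in range(1, 100000))
```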
Recursion Trees
Drawing a picture of the backsubstitution process gives you a good idea of what is going on.
We must keep track of two things - (1) the size of the remaining argument to the recurrence, and (2) the additive stuff to be accumulated during this call.
Example: T(n) = 2T(n/2) + n².
The root costs n²; its two children each cost (n/2)², summing to n²/2; the next level sums to n²/4, and so on.
Although this tree has height lg n, the total sum at each level decreases geometrically, so:

T(n) = n²(1 + 1/2 + 1/4 + ⋯) ≤ 2n² = Θ(n²)

The recursion tree framework made this much easier to see than algebraic backsubstitution.
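For a recurrence of the form T(n) = 2T(n/2) + n², the level sums of the recursion tree can be tabulated directly: level l has 2^l nodes, each costing (n/2^l)², so each level costs half the one above it:

```python
# Level sums of the recursion tree for T(n) = 2*T(n/2) + n**2:
# level l contributes 2**l * (n / 2**l)**2 = n**2 / 2**l.
n = 1024
levels = [2**l * (n // 2**l)**2 for l in range(11)]  # height lg n = 10

assert levels[0] == n**2 and levels[1] == n**2 // 2  # geometric decrease
assert sum(levels) < 2 * n**2                        # total is Theta(n^2)
```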
See if you can use the Master theorem to provide an instant asymptotic solution
The Master Theorem: Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the nonnegative integers by the recurrence

T(n) = a T(n/b) + f(n)

where we interpret n/b as ⌊n/b⌋ or ⌈n/b⌉. Then T(n) can be bounded asymptotically as follows:

1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · lg n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
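The three cases can be packaged into a small helper for the common special case f(n) = Θ(n^k); the function name and interface here are my own sketch, not anything from the notes:

```python
import math

# Sketch solver for T(n) = a*T(n/b) + Theta(n**k), polynomial driving terms only.
# (For polynomial f(n) the case-3 regularity condition holds automatically.)
def master(a, b, k):
    crit = math.log(a) / math.log(b)      # critical exponent log_b(a)
    if math.isclose(k, crit):
        return f"Theta(n^{k} lg n)"       # case 2: all levels contribute equally
    if k < crit:
        return f"Theta(n^{crit:g})"       # case 1: the leaves dominate
    return f"Theta(n^{k})"                # case 3: the root dominates

assert master(4, 2, 1) == "Theta(n^2)"       # e.g. T(n) = 4T(n/2) + n
assert master(4, 2, 2) == "Theta(n^2 lg n)"  # e.g. T(n) = 4T(n/2) + n^2
assert master(4, 2, 3) == "Theta(n^3)"       # e.g. T(n) = 4T(n/2) + n^3
```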
Examples of the Master Theorem
Which case of the Master Theorem applies?
Reading from the equation T(n) = 4T(n/2) + n, we have a = 4, b = 2, and f(n) = n, so n^(log_b a) = n^(log₂ 4) = n².
Is n = O(n^(2−ε))?
Yes, for any ε ≤ 1, so case 1 applies and T(n) = Θ(n²).
Reading from the equation T(n) = 4T(n/2) + n², we have a = 4, b = 2, and f(n) = n².
Is n² = O(n^(2−ε))?
No, not for any ε > 0, but it is true that n² = Θ(n²) with ε = 0, so case 2 applies and T(n) = Θ(n² lg n).
Reading from the equation T(n) = 4T(n/2) + n³, we have a = 4, b = 2, and f(n) = n³.
Is n³ = Ω(n^(2+ε))?
Yes, for any ε ≤ 1, so case 3 might apply.
Is the regularity condition satisfied: is 4·(n/2)³ ≤ c·n³?
Yes, since 4(n/2)³ = n³/2, taking c = 1/2 works, so there exists a c < 1 to satisfy the regularity condition, so case 3 applies and T(n) = Θ(n³).
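As a sanity check, the three answers above can be compared against the recurrences numerically (the base case T(1) = 1 is an assumption):

```python
import math
from functools import lru_cache

# Compare T(n) = 4*T(n//2) + n**k against the Master Theorem predictions
# for k = 1, 2, 3, using the assumed base case T(1) = 1.
def make_T(k):
    @lru_cache(maxsize=None)
    def T(n):
        return 1 if n <= 1 else 4 * T(n // 2) + n**k
    return T

predictions = {1: lambda n: n**2,                   # case 1
               2: lambda n: n**2 * math.log2(n),    # case 2
               3: lambda n: n**3}                   # case 3
for k, pred in predictions.items():
    T = make_T(k)
    ratios = [T(2**i) / pred(2**i) for i in (10, 12, 14)]
    assert max(ratios) / min(ratios) < 1.1  # ratios settle toward a constant
```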
Why should the Master Theorem be true?
Consider T(n) = a T(n/b) + f(n).
Suppose f(n) is small enough
Then we have a recursion tree where the only contribution is at the leaves.
There will be log_b n levels, with a^l nodes at level l, giving a^(log_b n) = n^(log_b a) leaves; each leaf contributes Θ(1), so T(n) = Θ(n^(log_b a)).
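The leaf count a^(log_b n) = n^(log_b a) is easy to confirm for concrete values (a = 4, b = 2 here are just the parameters from the earlier examples):

```python
import math

# With a = 4 subproblems of size n/b, b = 2, and n a power of b, the recursion
# tree has log_b(n) levels and a**(log_b n) leaves, which equals n**(log_b a).
a, b, n = 4, 2, 2**10
levels = round(math.log(n, b))                # 10 levels
assert a**levels == n**round(math.log(a, b))  # 4**10 == (2**10)**2 == 1048576
```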
Suppose f(n) is large enough that a·f(n/b) ≤ c·f(n) for some c < 1. Then the per-level costs shrink geometrically from the root down, the root term dominates, and T(n) = Θ(f(n)).
In case 3 of the Master Theorem, the additive term dominates.
In case 2, both parts contribute equally, which is why the log pops up. It is (usually) what we want to have happen in a divide and conquer algorithm.
Famous Algorithms and their Recurrences
Matrix Multiplication
Strassen's algorithm yields T(n) = 7T(n/2) + O(n²). Since n^(log₂ 7) ≈ n^2.81 dwarfs n², case 1 of the master theorem applies and T(n) = Θ(n^2.81).
This has been "improved" by more and more complicated recurrences until the current best of O(n^2.376).
Polygon Triangulation
The simplest algorithm might be to try each pair of points and check if they see each other. If so, add the diagonal and recur on both halves, for a total of O(n³) time.
However, Chazelle gave a divide-and-conquer algorithm whose extra work per call is small enough that, by case 1 of the Master Theorem, Chazelle's algorithm is linear, i.e., T(n) = O(n).
Sorting
Mergesort satisfies T(n) = 2T(n/2) + O(n). Since n = Θ(n^(log₂ 2)) but not O(n^(1−ε)) for any ε > 0, case 2 of the Master Theorem applies and T(n) = Θ(n lg n).
In case 2, the divide and merge steps balance out perfectly, as we usually hope for from a divide-and-conquer algorithm.
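A minimal mergesort sketch makes the recurrence concrete: two recursive calls on halves plus a linear-time merge:

```python
# Mergesort: T(n) = 2T(n/2) + Theta(n) -- two halves plus a linear merge.
def mergesort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # the Theta(n) merge step
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

assert mergesort([5, 2, 8, 1, 9, 3]) == [1, 2, 3, 5, 8, 9]
```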
Approaches to Algorithms Design
Incremental
A good example of this approach is insertion sort.
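A sketch of insertion sort, which grows a sorted prefix one element at a time:

```python
# Insertion sort: incrementally insert a[i] into the already-sorted prefix a[0..i-1].
def insertion_sort(a):
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:  # shift larger elements one slot right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

assert insertion_sort([5, 2, 8, 1]) == [1, 2, 5, 8]
```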
Divide-and-Conquer
A good example of this approach is Mergesort.