1. Master theorem (6 points)
Use the master theorem to find tight asymptotic bounds for the following recurrences. If the master theorem cannot be applied, state the reason and give an upper bound (big-Oh) which is as tight as you can justify. Justify your answers (by showing which case of the master theorem applies, and why).
(1) T(n) = 9T(n/3) + √n
Case 1:
√n = n^(1/2) ∈ O(n^(log_3 9 − ε)) = O(n^(2 − ε)) for ε = 1, i.e. n^(1/2) ∈ O(n)
n^(1/2) ≤ C·n for C = 1 and n_0 = 1, so T(n) = Θ(n^2)

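As a quick numerical sanity check (not part of the required answer), the recurrence can be evaluated directly; the base value T(1) = 1 is an assumption for illustration. The ratio T(n)/n^2 settles toward a constant, consistent with Θ(n^2):

```python
import math

def T(n):
    # T(n) = 9 T(n/3) + sqrt(n); T(1) = 1 is an assumed base value
    if n <= 1:
        return 1.0
    return 9 * T(n // 3) + math.sqrt(n)

# T(n) / n^2 approaches a constant (about 1.24), supporting Theta(n^2)
for k in range(1, 10):
    n = 3 ** k
    print(n, T(n) / n ** 2)
```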
T(n) = 2T(n/2) + n log n
Case 2:
n log n = Θ(n^(log_2 2) · log^k n) = Θ(n log n) for k = 1 ≥ 0
T(n) = Θ(n log^(k+1) n) for k = 1 ⇒ T(n) = Θ(n log^2 n)
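The same style of numeric check (with an assumed base value T(1) = 1 and base-2 logs) shows T(n)/(n log^2 n) leveling off near 1/2, consistent with Θ(n log^2 n):

```python
import math

def T(n):
    # T(n) = 2 T(n/2) + n log n; T(1) = 1 is an assumed base value
    if n <= 1:
        return 1.0
    return 2 * T(n // 2) + n * math.log2(n)

# at n = 2^k, T(n)/2^k = 1 + k(k+1)/2, so T(n)/(n log^2 n) -> 1/2
for k in range(4, 21, 4):
    n = 2 ** k
    print(n, T(n) / (n * math.log2(n) ** 2))
```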

T(n) = 36T(n/6) + n^2
Case 2:
n^2 = Θ(n^(log_6 36) · log^k n) = Θ(n^2) for k = 0 ≥ 0
T(n) = Θ(n^2 log^(k+1) n) for k = 0 ⇒ T(n) = Θ(n^2 log n)

T(n) = T(2n/5) + n
Case 3:
n = Ω(n^(log_{2.5} 1 + ε)) = Ω(n^(0 + ε)) = Ω(n) for ε = 1
n ≥ C·n for C = 1 and n_0 = 1
Regularity: a·f(n/b) = 1·(2n/5) ≤ c·n for c = 2/5 < 1
Both conditions are satisfied, so T(n) = Θ(n)
The two subproblems created in the recurrence are of different sizes, so the master theorem does not apply here.
With a recursion tree, the largest height is log_{1.5} n, with a cost of n at each level. So a reasonable bound is O(n log n).


T(n) = T(n − 2) + n

b is not > 1, so the master theorem does not apply here. Using substitution with T(1) = d_0:
the recursion stops when n − 2x = 1, so T(n) recurses x = (n − 1)/2 times, adding at most n at each step. So roughly T(n) ≤ d_0 + ((n − 1)/2)·n = O(n^2).
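To see the quadratic growth concretely, the recurrence can be unrolled in code (d_0 = 1 is an assumed base value); the result stays within the d_0 + ((n − 1)/2)·n bound and grows like n^2/4:

```python
def T(n, d0=1):
    # T(n) = T(n-2) + n with T(1) = d0 (d0 is an assumed base constant)
    total = d0
    while n > 1:
        total += n
        n -= 2
    return total

# for odd n, the bound d0 + ((n-1)/2)*n from the write-up holds
for n in (3, 5, 101, 1001):
    assert T(n) <= 1 + ((n - 1) // 2) * n
print(T(1001) / 1001 ** 2)  # roughly 1/4, i.e. Theta(n^2)
```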

2. Recursive Program (5 points)
Consider the following recursive function for n ≥ 0:
Algorithm 1 int recFunc(int n)
//Base Case:
if n <= 2 then
    return n;
end if
//Recursive Case:
i = 0;
while i < 100 do
    print("Hello!");
    i = i + 1;
end while
int a = 2*recFunc(n/2); //Integer division
j = 0;
while j < a do
    print("Bye!");
    j = j + 1;
end while
return a;
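A direct transcription of Algorithm 1 into Python (a sketch; the prints are gated behind a `verbose` flag, an addition not in the pseudocode, so the return value can be checked quietly) lets us spot-check the claim that recFunc(n) ≤ n:

```python
def rec_func(n, verbose=False):
    # transcription of Algorithm 1; printing is optional so checks stay quiet
    if n <= 2:          # base case
        return n
    if verbose:
        for _ in range(100):
            print("Hello!")
    a = 2 * rec_func(n // 2, verbose)   # integer division, as in the pseudocode
    if verbose:
        for _ in range(a):
            print("Bye!")
    return a

# spot-check: the returned value never exceeds n
assert all(rec_func(n) <= n for n in range(500))
```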

Use strong induction to prove the value returned by recFunc(n) is ≤ n.
Base Cases (0 ≤ n ≤ 2):
For n = 0, 1, or 2 the function immediately returns n, so trivially recFunc(n) returns a value ≤ n.
Inductive step:
Assume: recFunc(k) returns a value ≤ k for all 0 ≤ k < n.
Prove: recFunc(n) returns a value ≤ n.
If n is odd, then n = 2m + 1 for some positive integer m < n.
So recFunc(n) returns 2·recFunc((2m + 1)/2) = 2·recFunc(m) because of integer division. Since m < n, the inductive hypothesis gives recFunc(m) ≤ m, so recFunc(n) ≤ 2m = n − 1 ≤ n.
Similarly, if n is even, then n = 2m for some positive integer m < n.
So recFunc(n) returns 2·recFunc(2m/2) = 2·recFunc(m). Since m < n, the inductive hypothesis gives recFunc(m) ≤ m, so recFunc(n) ≤ 2m = n.
This covers all cases of n ≥ 0, thus by strong induction recFunc(n) returns a value ≤ n.
(2) Set up a recurrence for the runtime T(n) of this algorithm.

T(n) = 1 for 0 ≤ n ≤ 2
T(n) = 3T(n/2) + 100 for n > 2

Solve this runtime recurrence using the master theorem.
Case 1:
100 = O(n^(log_2 3 − ε)) = O(n^(1.585 − 1)) = O(n^0.585) for ε = 1
100 ≤ C·n^0.585 for C = 100 and n_0 = 1, so T(n) = Θ(n^(log_2 3))
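Evaluating this recurrence numerically (with the assumed base T(n) = 1 for 0 ≤ n ≤ 2) shows T(n)/n^(log_2 3) approaching a constant (about 17), consistent with Θ(n^(log_2 3)):

```python
def T(n):
    # T(n) = 3 T(n/2) + 100 for n > 2, with T(n) = 1 for 0 <= n <= 2
    if n <= 2:
        return 1.0
    return 3 * T(n // 2) + 100

# at n = 2^k, n^(log_2 3) = 3^k; the ratio converges to a constant near 17
for k in range(5, 16, 5):
    print(2 ** k, T(2 ** k) / 3 ** k)
```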
3. Big-Oh Induction (4 points)
Use strong induction to prove that T(n) ∈ O(log n), where T(1) = 90 and T(n) = T(n/2) + 10 (for n ≥ 2).
Show T(n) ≤ C log n for n ≥ 2.
Base Case (n = 2):
LHS: T(2) = T(2/2) + 10 = T(1) + 10 = 90 + 10 = 100
RHS: C log 2 = C · 1
Choose C = 100 ⇒ T(2) = 100 ≤ 100 · 1
Inductive Step:
Assume: T(k) ≤ C log k for 2 ≤ k < n
Prove: T(n) ≤ C log n
T(n) = T(n/2) + 10 ≤ C log(n/2) + 10
     = C[log n − log 2] + 10
     = C log n − C + 10 ≤ C log n, choosing C ≥ 10 so the extra terms are ≤ 0
Choosing C = 100 (which satisfies both the base case and C ≥ 10) and n_0 = 2 completes the induction, so T(n) ∈ O(log n).
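The bound can also be checked numerically for powers of two; there T(2^k) = 90 + 10k, which indeed stays below 100·log_2 n for all n ≥ 2:

```python
import math

def T(n):
    # T(1) = 90, T(n) = T(n/2) + 10 for n >= 2 (evaluated at powers of 2)
    if n <= 1:
        return 90
    return T(n // 2) + 10

# verify T(n) <= 100 log2(n) for n = 2, 4, ..., 2^20
for k in range(1, 21):
    n = 2 ** k
    assert T(n) <= 100 * math.log2(n)
print("bound holds up to 2**20")
```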
4. Recursive Algorithm and Analysis (6 points)
Suppose company X has hired you as a software engineer.
Company X is desperate to improve the run time of their software. Rather than allow you to use the standard binary search algorithm, company X demands you create a new version. Since recursively dividing the array into two parts is so efficient for searching, the bigwigs at company X conclude that dividing the array into three parts will be even more efficient.
This new "3-ary" search divides the given sorted array into 3 equal-sized sorted subarrays. It finds which subarray would contain the given value and then recursively searches that subarray. If the given value is in the array you should return true; otherwise you should return false.

(2 points) Write up this new recursive algorithm using pseudocode. (Hint: For simplicity you can assume the number of elements in the given sorted array, n, is a power of 3)
Algorithm 2 bool threeSearch(int A[1 ... n], int val)
//Assuming integer division
if n == 1 then
    if A[1] == val then
        return true;
    else
        return false;
    end if
else if n == 2 then
    if A[1] == val or A[2] == val then
        return true;
    else
        return false;
    end if
else if val <= A[n/3] then
    return threeSearch(A[1 ... n/3], val);
else if val <= A[2*n/3] then
    return threeSearch(A[n/3 + 1 ... 2*n/3], val);
else
    return threeSearch(A[2*n/3 + 1 ... n], val);
end if
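A runnable version of Algorithm 2 in Python (a sketch: 0-indexed slices replace the 1-indexed A[1 ... n], and len(A) is assumed to be a power of 3 as the hint allows, so the n == 2 branch never arises and is omitted):

```python
def three_search(a, val):
    # recursive 3-ary search on a sorted list; assumes len(a) is a power of 3
    n = len(a)
    if n == 1:
        return a[0] == val
    third = n // 3
    if val <= a[third - 1]:            # pseudocode's A[n/3]
        return three_search(a[:third], val)
    elif val <= a[2 * third - 1]:      # pseudocode's A[2*n/3]
        return three_search(a[third:2 * third], val)
    else:
        return three_search(a[2 * third:], val)

arr = list(range(1, 28))  # 27 = 3^3 sorted elements
assert all(three_search(arr, v) for v in arr)
assert not three_search(arr, 0) and not three_search(arr, 28)
```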

(1 point) Create a recurrence to represent the worst-case run time of our new "3-ary" search from the pseudocode.
(Hint: The worst-case run time occurs when the given value is not in the array.)
T(n) = T(n/3) + 1

(2 points) Use the master theorem to solve your above recurrence relation.
Case 2:
f(n) = 1 = Θ(n^(log_3 1) · log^k n) = Θ(n^0) = Θ(1) for k = 0
Clearly 1 = Θ(1), so T(n) = Θ(n^0 log^(k+1) n) for k = 0, so T(n) = Θ(log n)

(1 point) Compare this asymptotic run time to that of binary search. Does our new algorithm perform significantly better than binary search?
No, the two algorithms actually perform asymptotically the same, differing only by constants. Binary search is Θ(log_2 n) while the "3-ary" search is Θ(log_3 n). They differ only by the constant factor in the base of the logarithm, which is insignificant in asymptotic analysis.

(0 points – Just for fun) Try comparing the constants of binary search and "3-ary" search. You can use the change-of-base formula for log to convert both to log_2 n.
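One way to work out these constants (a sketch; the per-level comparison counts are approximate, since a 3-ary level needs up to two comparisons versus one for binary search):

```python
import math

# change of base: log_3 n = log_2 n / log_2 3
level_ratio = 1 / math.log2(3)        # 3-ary levels per binary-search level
print(round(level_ratio, 3))          # about 0.631

# but each 3-ary level can cost up to 2 comparisons vs 1 for binary search,
# so the comparison-count ratio is roughly 2 / log_2 3
print(round(2 / math.log2(3), 3))     # about 1.262
```

So 3-ary search uses fewer levels but slightly more comparisons overall, matching the conclusion above that neither wins asymptotically.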