CS 170, Spring 2020 Discussion 1 Solutions
A. Chiesa & J. Nelson
Note: Your TA may not get to all the problems. This is totally fine, the discussion worksheets are
not designed to be finished in an hour. The discussion worksheet is also a resource you can use to
practice, reinforce, and build upon concepts discussed in lecture, readings, and the homework.
Asymptotic Notation: We define the following notation for two functions f(n), g(n) ≥ 0:
• f(n) = O(g(n)) if there exist constants c > 0 and n₀ such that f(n) ≤ c · g(n) for all n ≥ n₀. (Asymptotically, f grows at most as fast as g.)
• f(n) = Ω(g(n)) if g(n) = O(f(n)). (Asymptotically, f grows at least as fast as g.)
• f(n) = Θ(g(n)) if f(n) = O(g(n)) and g(n) = O(f(n)). (Asymptotically, f and g grow at the same rate.)
If we compare this to the order on the numbers, O is a lot like ≤, Ω is a lot like ≥, and Θ is a lot like
= (except all are with regard to asymptotic behavior).
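These definitions can be sanity-checked numerically: if f(n) = Θ(g(n)), the ratio f(n)/g(n) should settle near a positive constant as n grows. A minimal sketch (not a proof; the particular functions are just the Θ example from part (a)):

```python
# Inspect f(n)/g(n) at increasing n; if f = Theta(g), the ratio
# should approach a positive constant.
def ratios(f, g, ns):
    return [f(n) / g(n) for n in ns]

# Example: f(n) = n^2, g(n) = 2n^2 - n + 3; the ratio tends to 1/2.
rs = ratios(lambda n: n**2, lambda n: 2 * n**2 - n + 3,
            [10, 1_000, 100_000])
```

Finitely many sample points can never prove an asymptotic claim, but they are a useful quick check against algebra mistakes.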
(a) For each pair of functions f(n) and g(n), state whether f(n) = O(g(n)), f(n) = Ω(g(n)), or f(n) = Θ(g(n)). For example, for f(n) = n^2 and g(n) = 2n^2 − n + 3, write f(n) = Θ(g(n)).
(i) f(n) = n and g(n) = n^2 − n
Solution: n grows slower than n^2 − n (whose leading term is n^2), so f(n) = O(g(n)).
(ii) f(n) = n^2 and g(n) = n^2 + n
Solution: Asymptotics are governed by the largest terms, so these two functions grow at roughly the same rate: f(n) = Θ(g(n)).
(iii) f(n) = 8n and g(n) = n log n
Solution: As a rule, for any c > 0, n^c = O(n^c log n). Applying this with c = 1, we get f(n) = O(g(n)).
Formally,
lim_{n→∞} 8n/(n log n) = lim_{n→∞} 8/(log n) = 0.
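As a quick numerical check of that limit (an illustration, not a proof), the ratio 8n/(n log n) = 8/log n indeed shrinks toward 0 as n grows:

```python
import math

# 8n / (n log n) simplifies to 8 / log n, which tends to 0.
vals = [8 * n / (n * math.log(n)) for n in (10, 10**3, 10**6)]
```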
(b) For each of the following, state the order of growth using Θ notation, e.g. f (n) = Θ(n).
(i) f(n) = 50
Solution: f(n) = Θ(1)
(ii) f(n) = n^2 − 2n + 3
Solution: f(n) = Θ(n^2)
(iii) f(n) = n + · · · + 3 + 2 + 1
Solution: f(n) = n(n+1)/2 = Θ(n^2)
(iv) f(n) = n^100 + 1.01^n
Solution: f(n) = Θ(1.01^n), since any exponential with base > 1 eventually dominates any polynomial.
(vi) f(n) = (g(n))^2 where g(n) = √n + 5
Solution:
f(n) = (√n + 5)^2 = n + 10√n + 25
f(n) = Θ(n)
Solution: In general, we can observe the following properties of O/Θ/Ω from this:
• If d > c, then n^c = O(n^d), but n^c ≠ Ω(n^d) (informally, n^d grows strictly faster than n^c).
• Asymptotic notation only cares about the highest-growing terms. For example, n^2 + n = Θ(n^2).
• Asymptotic notation does not care about leading constants. For example, 50n = Θ(n).
• Any exponential with base > 1 grows faster than any polynomial.
• The base of the exponential matters. For example, 3^n = O(4^n), but 3^n ≠ Ω(4^n).
• If d > c, then n^c log n = O(n^d).
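The polynomial-vs-exponential bullet can be illustrated numerically. Evaluating 1.01^n directly overflows floats for large n, so a standard trick is to compare logarithms instead: n^100 ≤ 1.01^n exactly when 100·log n ≤ n·log 1.01. The sample points below are just an illustration of where each side wins:

```python
import math

# Compare n^100 against 1.01^n via logarithms to avoid overflow:
# n^100 <= 1.01^n  iff  100*log(n) <= n*log(1.01).
def log_poly(n):
    return 100 * math.log(n)

def log_exp(n):
    return n * math.log(1.01)

# The polynomial is larger for moderate n, but the exponential
# eventually dominates.
```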
Note that these are all sufficient conditions involving limits, and are not true definitions of O, Θ, and
Ω.
So f(n) = O(g(n)).
(b) Find an f(n), g(n) ≥ 0 such that f(n) = O(g(n)), yet lim_{n→∞} f(n)/g(n) ≠ 0.
Solution: Let f(n) = 3n and g(n) = 5n. Then lim_{n→∞} f(n)/g(n) = 3/5, meaning that f(n) = Θ(g(n)).
However, it's still true in this case that f(n) = O(g(n)) (just by the definition of Θ).
(c) Prove that for any c > 0, we have log n = O(n^c).
Hint: Use L'Hôpital's rule: if lim_{n→∞} f(n) = lim_{n→∞} g(n) = ∞, then lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n) (if the RHS exists).
Solution: By L'Hôpital's rule,
lim_{n→∞} (log n)/n^c = lim_{n→∞} (1/n)/(c·n^(c−1)) = lim_{n→∞} 1/(c·n^c) = 0.
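The limit can also be observed numerically, though convergence is slow for small c. A quick illustration of the proof above (assuming c = 0.1, chosen arbitrarily):

```python
import math

# log(n) / n^c for c = 0.1 at widely spaced values of n; the
# sequence decreases toward 0, consistent with log n = O(n^c).
c = 0.1
vals = [math.log(n) / n**c for n in (10**10, 10**20, 10**40)]
```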
Recall that mergesort takes an arbitrary array and returns a sorted copy of that array. It turns
out that mergesort is asymptotically optimal at performing this task (however, other sorts like
Quicksort are often used in practice).
(a) Let T(n) represent the number of operations mergesort performs given an array of length n. Find a base case and recurrence for T(n), using asymptotic notation.
Solution:
For the base case, we simply have T (1) = 1 (almost nothing is done). For the recursive case n ≥ 2,
we get T (n) = 2T (n/2) + O(n).
On a high level, merge takes two pointers through A and B respectively, and repeatedly appends the next-smallest element from A or B. Notice that merge looks at each element of A and B only once, so given arrays of length m and n, merge takes O(m + n) time. The O(·) is important here: the time is not exactly n + m (there is more than one array access per element of A or B, for example).
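The two-pointer merge described above might be sketched as follows (a minimal illustration under the assumptions just stated, not the official course implementation):

```python
def merge(A, B):
    """Merge two individually sorted lists into one sorted list.

    Two pointers i and j walk through A and B; each element is
    examined once, so the running time is O(m + n).
    """
    i, j = 0, 0
    C = []
    while i < len(A) and j < len(B):
        if A[i] <= B[j]:
            C.append(A[i])
            i += 1
        else:
            C.append(B[j])
            j += 1
    # One list is exhausted; append the remainder of the other.
    C.extend(A[i:])
    C.extend(B[j:])
    return C
```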
A call to mergesort involves two recursive calls on pieces of size n/2 and one call to merge on the two halves, so the runtime satisfies T(n) = 2T(n/2) + O(n). To solve the recurrence, take T(n) = 2T(n/2) + n and expand:
T(n) = 2T(n/2) + n
     = 4T(n/4) + n + n
     = 8T(n/8) + n + n + n
     ...
     = 2^(log n) · T(n / 2^(log n)) + n log n = n + n log n = Θ(n log n)
Each time we expand, an extra n is added on, and the input size to T(·) halves. We can expand only log n times before we reach T(1) and the expansion ends. You can also visualize this as a full binary tree with log n levels, with n work done at each level, and each node representing the work done to sort that part of the array.
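One can sanity-check the closed form by evaluating the recurrence directly (a sketch assuming the simplified recurrence T(n) = 2T(n/2) + n with T(1) = 1; for powers of two the expansion gives exactly n + n·log₂ n):

```python
def T(n):
    # Simplified mergesort recurrence: T(1) = 1, T(n) = 2T(n/2) + n.
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

# For n = 2^k, the expansion above gives T(n) = n + n*log2(n) exactly.
```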
Question: This doesn’t consider the fact that T (n) = 2T (n/2) + O(n) only means that there is
some c > 0 where for large enough n, T (n) ≤ 2T (n/2) + c · n. We assumed T (n) = 2T (n/2) + n
to make the analysis simpler. How can you modify the analysis to account for this formally?
Note: This is not the only way to solve recurrences like these, but it is a good way to solve
recurrences in general. We will soon talk about an important tool called The Master Theorem,
which does the geometric series expansion for you and lets you solve recurrences with a simple
rule.
(c) Consider the correctness of merge. What is the desired property of C once merge completes?
What is required of the arguments to merge for this to happen?
Solution:
We desire C to be sorted, and to contain all elements from A and B. merge requires that A and
B are individually sorted.
Question: This falls short of a full proof of correctness for merge. How would you fully prove correctness for merge? (Hint: a good place to start would be to think about "invariants", or properties that hold for C as i and j increase.)
(d) Using the property you found for merge, show that mergesort is correct.
Solution:
We proceed by induction. Let
P(n): mergesort(A[1, . . . , n]) is correct for any array A of length n.
For the base case, P(1) is true because a length-1 array is already sorted, and that is what we return.
Now assume for a particular n that P(k) holds for all k < n (this is strong induction). L and R are then guaranteed to be sorted, so by part (c) their merge is sorted. Since L and R together contain all the elements of the array, we return a sorted copy of the array.
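The inductive argument mirrors the recursive structure of the code. A self-contained sketch (again an illustration under the worksheet's description, not the official course implementation):

```python
def merge(A, B):
    # Merge two sorted lists with two pointers; by part (c), if A and
    # B are sorted, the result is sorted and contains all elements
    # of both.
    i, j, C = 0, 0, []
    while i < len(A) and j < len(B):
        if A[i] <= B[j]:
            C.append(A[i])
            i += 1
        else:
            C.append(B[j])
            j += 1
    C.extend(A[i:])
    C.extend(B[j:])
    return C

def mergesort(A):
    # Base case P(1): a length <= 1 array is already sorted.
    if len(A) <= 1:
        return A[:]
    mid = len(A) // 2
    # Inductive step: the halves have length < n, so by strong
    # induction the recursive calls return sorted copies; merging
    # them yields a sorted copy of A.
    return merge(mergesort(A[:mid]), mergesort(A[mid:]))
```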