Week 2: Ch. 2 & Ch. 3
(Fall 2013)
Istanbul Technical University
Computer Eng. Dept.
Chapter 2: Getting Started
Course slides from Leiserson (MIT), Edmonds (York Un.), and Ruan (UTSA) have been used in preparation of these slides.
Last updated: Sept. 18, 2013
Analysis of Algorithms 1, Dr. Çataltepe & Dr. Ekenel, Dept. of Computer Engineering, ITU
1
Outline
• Chapter 2
– Insertion Sort
• Pseudocode Conventions
• Analysis of Insertion Sort
• Loop Invariants and Correctness
– Merge Sort
• Divide and Conquer
• Analysis of Merge Sort
• Chapter 3: Growth of Functions
– Asymptotic notation
– Comparison of functions
– Standard notations and common functions
2
Week 2:
Merge Sort
• Insertion sort used an incremental approach
to sorting: start with a subarray of 1 item
(which is already sorted), insert the next item
into its proper place in the sorted subarray,
then the next item, and so on.
3
Week 2:
Merge Sort
• Divide-and-conquer:
– Divide the problem into several subproblems.
– Conquer the subproblems by solving them
recursively. If the subproblems are small enough,
solve them directly.
– Combine the solutions to the subproblems to get
the solution for the original problem.
4
Week 2:
Merge Sort
• Divide-and-conquer:
– Divide the n-element sequence to be sorted into
two subsequences of n/2 each.
– Conquer by sorting the subsequences recursively
by calling merge sort again. If the subsequences
are small enough (of length 1), solve them directly.
(Arrays of length 1 are already sorted.)
– Combine the two sorted subsequences by
merging them to get a sorted sequence.
5
Week 2:
Merge Sort
Merge-Sort(A, p, r)
1 if p < r
2 then {q ← ⎣(p+r)/2⎦
3 Merge-Sort(A, p, q)
4 Merge-Sort(A, q+1, r)
5 Merge(A, p, q, r)}
6
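Below is a minimal Python sketch of the same recursion (not part of the original slides; it uses 0-based indices, does not sort in place, and heapq.merge stands in for the Merge routine that is developed on the following slides):

from heapq import merge          # merges two already-sorted iterables

def merge_sort(a):
    if len(a) <= 1:              # base case: a list of length 0 or 1 is already sorted
        return a
    mid = len(a) // 2            # divide: split roughly in half
    left = merge_sort(a[:mid])   # conquer: sort each half recursively
    right = merge_sort(a[mid:])
    return list(merge(left, right))   # combine: merge the two sorted halves

print(merge_sort([5, 2, 4, 6, 1, 3, 2, 6]))   # [1, 2, 2, 3, 4, 5, 6, 6]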
Week 2:
Merge Sort
• Note that the merge sort basically consists of
recursive calls to itself. The base case (which
stops the recursion) occurs when a
subsequence has a size of 1.
7
Week 2:
Merge
• Without going into detail about how Merge-Sort works
yet, let us take a look at the Merge part. Merge
works by assuming you have two already-sorted
sublists and an empty array:
1 4 5
2 3 6
8
Week 2:
Merge
1 4 7 ∞      2 3 9 ∞   (the two sorted sublists, each followed by an ∞ sentinel)
9
Week 2:
Merge
1 4 7 ∞  (indices p .. q)      2 3 9 ∞  (indices q+1 .. r)
• The two sublists are indexed from p to q (for the first
sublist) and from q+1 to r for the second sublist.
There are (r – p) + 1 items in the two sublists
combined, so we will need an output array of that
size.
10
Week 2:
Merge
• Step by step, Merge copies the smaller of the two front elements into the output array. Trace for L = 1 4 7 ∞ and R = 2 3 9 ∞ (uncopied elements shown on the left, output on the right):
L: 1 4 7 ∞   R: 2 3 9 ∞   output: (empty)
L: 4 7 ∞     R: 2 3 9 ∞   output: 1
L: 4 7 ∞     R: 3 9 ∞     output: 1 2
L: 4 7 ∞     R: 9 ∞       output: 1 2 3
L: 7 ∞       R: 9 ∞       output: 1 2 3 4
L: ∞         R: 9 ∞       output: 1 2 3 4 7
L: ∞         R: ∞         output: 1 2 3 4 7 9
17
Week 2:
Merge
• Assuming that the two sublists are in sorted
order when they are passed to the Merge
routine, is Merge guaranteed to output a
sorted array?
• Yes. We can verify that each step of Merge
preserves the sorted order that the two
sublists already have.
18
Week 2:
Merge(A, p, q, r)
1 n1 ← (q – p) + 1
2 n2 ← (r – q)
3 create arrays L[1..n1+1] and R[1..n2+1]
4 for i ← 1 to n1 do
5 L[i] ← A[p + i – 1]
6 for j ← 1 to n2 do
7 R[j] ← A[q + j]
8 L[n1 + 1] ← ∞
9 R[n2 + 1] ← ∞
10 i ← 1
11 j ← 1
12 for k ← p to r do
13 if L[i] ≤ R[j]
14 then A[k] ← L[i]
15 i ←i + 1
16 else A[k] ← R[j]
17 j ← j + 1
19
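A direct Python transcription of Merge and Merge-Sort may help to see the pseudocode in action (an illustrative sketch only: indices are shifted to 0-based and float('inf') plays the role of the ∞ sentinel):

def merge(A, p, q, r):
    L = A[p:q + 1] + [float('inf')]      # copy A[p..q] and append the sentinel
    R = A[q + 1:r + 1] + [float('inf')]  # copy A[q+1..r] and append the sentinel
    i = j = 0
    for k in range(p, r + 1):            # refill A[p..r] with the smaller front element
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

def merge_sort(A, p, r):
    if p < r:
        q = (p + r) // 2                 # q = floor((p + r) / 2)
        merge_sort(A, p, q)
        merge_sort(A, q + 1, r)
        merge(A, p, q, r)

A = [5, 2, 4, 6, 1, 3, 2, 6]
merge_sort(A, 0, len(A) - 1)
print(A)                                 # [1, 2, 2, 3, 4, 5, 6, 6]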
Week 2:
Analysis of Merge
• Line 1 computes the length n1 of the subarray A[p..q].
• Line 2 computes the length n2 of the subarray A[q+1..r].
• We create arrays L and R ("left" and "right"), of lengths n1 + 1
and n2 + 1, respectively, in line 3.
• The for loop of lines 4-5 copies the subarray A[p..q] into L[1..
n1], and the for loop of lines 6-7 copies the subarray A[q+1..r]
into R[1..n2].
• Lines 8-9 put the sentinels at the ends of the arrays L and R.
• Lines 10-17, illustrated in Figure 2.3, perform the r - p + 1 basic
steps by maintaining the following loop invariant.
20
Week 2:
Analysis of Merge
• The loop in lines 12-17 of Merge is the heart
of how Merge works. They maintain the loop
invariant:
• At the start of each iteration of the for loop of
lines 12-17, the subarray A[p..k-1] contains
the k - p smallest elements of L[1..n1+1] and
R[1..n2+1], in sorted order.
• Moreover, L[i] and R[j] are the smallest
elements of their arrays that have not been
copied back into A.
21
Week 2:
Analysis of Merge
• To prove that Merge is a correct algorithm,
we must show that:
• Initialization: the loop invariant holds prior to
the first iteration of the for loop in lines 12-17
• Maintenance: each iteration of the loop
maintains the invariant
• Termination: the invariant provides a useful
property to show correctness when the loop
terminates
22
Week 2:
Initialization
• As we enter the for loop, k is set equal to p.
• This means that subarray A[p..k-1] is empty.
• Since k - p = 0, the subarray is guaranteed to
contain the k - p smallest elements of L and
R.
• By lines 10 and 11, i = j = 1, so L[i] and R[j]
are the smallest elements of their arrays that
have not been copied into A.
23
Week 2:
Maintenance
• As we enter the loop, we know that A[p..k-1]
contains the k - p smallest elements of L and
R.
• Assume L[i] ≤ R[j]. Then:
– L[i] is the smallest element not copied into A.
– Line 14 will copy L[i] into A[k].
– At this point the subarray A[p..k] will contain the k -
p + 1 smallest elements.
– Incrementing k (in line 12) and i (in line 15)
reestablishes the loop invariant for the next
iteration.
• Now assume L[i] > R[j]. Then:
– Lines 16-17 maintain the loop invariant.
24
Week 2:
Termination
• The loop invariant states that subarray
“A[p..k-1] contains the k - p smallest
elements of L[1..n1+1] and R[1..n2+1], in
sorted order.”
• When we drop out of the loop, k = r + 1.
• So r = k – 1, and A[p..k-1] is actually A[p..r],
which is the whole array.
• The arrays L and R together contain n1 + n2 +
2 elements. From lines 1 and 2 we know that
n1 + n2 = ((q – p) + 1) + (r – q) = (r – p) + 1,
which is exactly the number of elements in
the subarray A[p..r]. The extra 2 counts the
two sentinels, which are never copied back into A.
25
Week 2:
Merge Sort
• Now let us look at Merge-Sort again:
Merge-Sort(A, p, r)
1 if p < r
2 then {q ← ⎣(p+r)/2⎦
3 Merge-Sort(A, p, q)
4 Merge-Sort(A, q+1, r)
5 Merge(A, p, q, r)}
Merge Sort
• Given our Merge routine, we can now see
how Merge-Sort works.
– Assume a list of length n = 2^m
– Take an unsorted list as input.
– Split it in half. Now you have two sublists.
– Split those in half, and so on, until you have lists of
length 1.
– Merge those into sublists of length 2, then merge
those into sublists of length 4, etc. Keep going
until you have just one list left.
– That list is now sorted.
27
Week 2, Lecture 3:
Merge Sort
Merge-Sort(A, p, r)
1 if p < r
2 then {q ← ⎣(p+r)/2⎦
3 Merge-Sort(A, p, q)
4 Merge-Sort(A, q+1, r)
5 Merge(A, p, q, r)}
• Let us call Merge-Sort with an array of 4 elements: Merge-
Sort(A, 1, 4), where p = 1 and r = 4.
• Line 1: p < r, so do the then part of the if
• Line 2: q ← ⎣(p+r)/2⎦, which is 2
• Line 3: we call Merge-Sort(A, 1, 2)
• WAIT HERE (let us call our place Z) until we return from this
call
28
Week 2:
Merge Sort
Merge-Sort(A, p, r)
1 if p < r
2 then {q ← ⎣(p+r)/2⎦
3 Merge-Sort(A, p, q)
4 Merge-Sort(A, q+1, r)
5 Merge(A, p, q, r)}
• Calling Merge-Sort(A, 1, 2)
• Line 1: p < r, so do the then part of the if
• Line 2: q ← ⎣(p+r)/2⎦, which is 1
• Line 3: we call Merge-Sort(A, 1, 1)
• WAIT HERE (let us call our place Y) until we return from this
call
29
Week 2:
Merge Sort
Merge-Sort(A, p, r)
1 if p < r
2 then {q ← ⎣(p+r)/2⎦
3 Merge-Sort(A, p, q)
4 Merge-Sort(A, q+1, r)
5 Merge(A, p, q, r)}
• Calling Merge-Sort(A, 1, 1)
• Line 1: p = r, so skip the then part of the if
• Return from this call to Y
30
Week 2:
Merge Sort
Merge-Sort(A, p, r)
1 if p < r
2 then {q ← ⎣(p+r)/2⎦
3 Merge-Sort(A, p, q)
4 Merge-Sort(A, q+1, r)
5 Merge(A, p, q, r)}
• We called Merge-Sort(A, 1, 2)
• We have returned from our call in line 3
• Line 4: We call Merge-Sort(A, 2, 2)
• WAIT HERE (let us call our place X) until we return
from this call
31
Week 2:
Merge Sort
Merge-Sort(A, p, r)
1 if p < r
2 then {q ← ⎣(p+r)/2⎦
3 Merge-Sort(A, p, q)
4 Merge-Sort(A, q+1, r)
5 Merge(A, p, q, r)}
• Calling Merge-Sort(A, 2, 2)
• Line 1: p = r, so skip the then part of the if
• Return from this call to X
32
Week 2:
Merge Sort
Merge-Sort(A, p, r)
1 if p < r
2 then {q ← ⎣(p+r)/2⎦
3 Merge-Sort(A, p, q)
4 Merge-Sort(A, q+1, r)
5 Merge(A, p, q, r)}
• We called Merge-Sort(A, 2, 2)
• We have returned from our call in line 4
• Line 5: We call Merge(A, 1, 2, 2)
• What does Merge do?
33
Week 2:
Merge Sort
Merge-Sort(A, p, r)
1 if p < r
2 then {q ← ⎣(p+r)/2⎦
3 Merge-Sort(A, p, q)
4 Merge-Sort(A, q+1, r)
5 Merge(A, p, q, r)}
• Step 5: Merge(A, 1, 2, 2) :
• creates two temporary arrays of 1 element each
• copies A[1] and A[2] into these 2 arrays
• merges the elements in these two temporary arrays
back into A[1..2] in sorted order
• returns from the call to Z
34
Week 2:
Merge Sort
Merge-Sort(A, p, r)
1 if p < r
2 then {q ← ⎣(p+r)/2⎦
3 Merge-Sort(A, p, q)
4 Merge-Sort(A, q+1, r)
5 Merge(A, p, q, r)}
• Return from call to Merge-Sort(A, 1, 2) in Line 3. At this point
half of our original array, A[1..2], is in sorted order.
• Next we call Merge-Sort(A, 3, 4). It will put A[3..4] into sorted
order.
• Line 5 will merge A[1..2] and A[3..4] into A[1..4] in sorted
order.
35
Week 2:
Merge Sort
initial sequence:            5 2 4 6 1 3 2 6
after merging pairs:         2 5 | 4 6 | 1 3 | 2 6
after merging again:         2 4 5 6 | 1 2 3 6
sorted sequence:             1 2 2 3 4 5 6 6
36
Week 2:
Analysis of Divide-and-
Conquer Algorithms
• The Merge-Sort algorithm contains a recursive call to
itself. When an algorithm contains a recursive call to
itself, its running time often can be described by a
recurrence equation, or recurrence.
• The recurrence equation describes the running time
on a problem of size n in terms of the running time on
smaller inputs.
• We can use mathematical tools to solve the
recurrence and provide bounds on the performance
of the algorithm.
37
Week 2:
Analysis of Divide-and-
Conquer Algorithms
• A recurrence of a divide-and-conquer algorithm is
based on its 3 parts: divide, conquer, and combine.
• Let T(n) be the running time on a problem of size n.
• If the problem is small enough, say n <= c, we can
solve it in a straightforward manner, which takes
constant time, which we write as Θ(1).
• If the problem is bigger, we solve it by dividing the
problem to get a subproblems, each of which is 1/b
the size of the original. For Merge-Sort, both a and b
are 2.
38
Week 2:
Analysis of Divide-and-
Conquer Algorithms
• Assume it takes D(n) time to divide the problem into
subproblems.
• Assume it takes C(n) time to combine the solutions to
the subproblem into the solution for the original
problem.
• We get the recurrence:
T(n) = Θ(1)                          if n ≤ c
T(n) = aT(n/b) + D(n) + C(n)         otherwise
39
Week 2:
Analysis of Merge Sort
• Base case: n = 1. Merge sort on an array of size 1
takes constant time, Θ(1).
• Divide: The Divide step of Merge-Sort just calculates
the middle of the subarray. This takes constant time.
So D(n) = Θ(1).
• Conquer: We make 2 calls to Merge-Sort. Each call
handles ½ of the subarray that we pass as a
parameter to the call. The total time required is 2T(n/
2).
• Combine: Running Merge on an n-element subarray
takes Θ(n), so C(n) = Θ(n).
40
Week 2:
Analysis of Merge Sort
• Here is what we get:
T(n) = Θ(1)                          if n = 1
T(n) = 2T(n/2) + Θ(1) + Θ(n)         if n > 1
42
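To see that this recurrence really grows like n·log₂n, here is a small Python check (illustrative only: the Θ(1) and Θ(n) terms are replaced by the concrete values 1 and n, and n is restricted to powers of 2):

from math import log2

def T(n):
    return 1 if n == 1 else 2 * T(n // 2) + n   # T(1) = 1, T(n) = 2*T(n/2) + n

for n in [2, 8, 64, 1024]:
    print(n, T(n), n * log2(n) + n)             # with these constants, T(n) equals n*log2(n) + n exactly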
Week 2:
Analysis of Merge Sort
• Example: n = 8 = 2³
• Step 0: one array of size 8
• Step 1: two subarrays of size 4
• Step 2: four subarrays of size 2
• Step 3: eight subarrays of size 1
43
Week 2:
Analysis of Merge Sort
• So, it took us log2n steps to divide the array all the
way down into subarrays of size 1.
• As a result, we will have log₂n + 1 levels of
(sub)arrays to deal with. In our example, where
n = 8 and log₂n = 3, the levels hold arrays of
size 1, 2, 4, and 8.
• Merging all the subarrays at one level takes us
n steps in total, since we have to put each array
item into its proper position within its merged array.
45
Week 2:
Analysis of Merge Sort
• Consequently, the recursion has log₂n + 1 levels
of calls to the Merge-Sort function, and at each
level the Merge work costs us n steps, times a
constant value.
• The total cost, then, can be expressed as:
cn(log2n + 1)
• Multiplying this out gives:
cn(log2n) + cn
• Ignoring the low-order term and the constant c gives:
Θ(n•log2n)
46
Week 2:
Asymptotic Notation
• What does asymptotic mean?
• “Asymptotic” describes the behavior of a
function in the limit, that is, for sufficiently
large values of its parameter
47
Week 2:
Asymptotic Notation
• The order of growth of the running time of an
algorithm is defined as the highest-order term
(usually the leading term) of an expression
that describes the running time of the
algorithm
• We ignore the leading term’s constant
coefficient, as well as all of the lower order
terms in the expression
• Example: The order of growth of an algorithm
whose running time is described by the
expression an² + bn + c is simply n²
48
Week 2:
Big O
• Let us say that we have some function that
represents the sum total of all the running-
time costs of an algorithm; call it f(n)
• For merge sort, the actual running time is:
f(n) = cn(log2n) + cn
• We want to describe the running time of
merge sort in terms of another function, g(n),
so that we can say f(n) = O(g(n)), like this:
cn(log2n) + cn = O(nlog2n)
49
Week 2:
Big O: Definition
• For a given function g(n), O(g(n)) is the set of
functions:
O(g(n)) = { f(n): there exist positive constants c and n0
such that 0 ≤ f(n) ≤ c•g(n) for all n ≥ n0 }
50
Week 2:
Big O
(Figure: graph of f(n) ∈ O(g(n)) — to the right of n0, the curve for f(n) stays at or below c·g(n).)
51
Week 2:
Big O
• Big O is an upper bound on a function, to
within a constant factor.
• O(g(n)) is a set of functions
• Commonly used notation
f(n) = O(g(n))
• Correct notation
f(n) ∈ O(g(n))
52
Week 2:
Big O
• Question:
How do you demonstrate that f(n) ∈ O(g(n))?
• Answer:
Show that you can find values for c and n0
such that 0 ≤ f(n) ≤ c g(n) for all n ≥ n0
53
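A numeric spot-check of such a witness pair, using the merge-sort cost from an earlier slide with the constants set to 1, i.e. f(n) = n·log₂n + n and g(n) = n·log₂n with c = 2 and n0 = 2 (this only samples a finite range of n, so it illustrates the definition rather than proving the bound):

from math import log2

c, n0 = 2, 2                       # candidate witness constants
f = lambda n: n * log2(n) + n      # merge-sort cost with constants set to 1
g = lambda n: n * log2(n)
print(all(0 <= f(n) <= c * g(n) for n in range(n0, 10_000)))   # True for this sample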
Week 2:
Big O
Example: Show that 7n – 2 is O(n)
•Find a real constant c > 0 and an integer
constant n0 ≥ 1 such that 0 ≤ 7n – 2 ≤ cn for
every integer n ≥ n0
•Choose c = 7 and n0 = 1
•Now 0 ≤ 7n – 2 ≤ 7n for every integer n ≥ 1
56
Week 2:
Big O
Example: Show that ½n² – 3n is O(n²)
•Find a real constant c > 0 and an integer
constant n0 ≥ 1 such that 0 ≤ ½n² – 3n ≤ cn² for
every integer n ≥ n0
•Choose c = ½ and n0 = 6 (n0 = 6 also guarantees
that ½n² – 3n ≥ 0)
•Now 0 ≤ ½n² – 3n ≤ ½n² for every integer n ≥ 6
57
Week 2:
Big O
Example: Show that an(log2n) + bn is O(nlog n)
• Find a real constant c > 0 and an integer
constant n0 ≥ 1 such that
an(log2n) + bn ≤ cnlog n
for every integer n ≥ n0.
• Choose c = a+b and n0 = 2 (why 2?)
• Now an(log2n) + bn ≤ cnlog n for every
integer n ≥ 2.
58
Week 2:
Big O
Example: Show that an(log2n) + bn is O(nlog n)
• Find a real constant c > 0 and an integer
constant n0 ≥ 1 such that
an(log2n) + bn ≤ cnlog n
for every integer n ≥ n0.
• Choose c = a+b and n0 = 2 (why 2?)
• Try n0 = 1: log₂1 = 0, so a·1·(log₂1) + b·1 ≤ c·1·(log₂1) becomes
• 0 + b ≤ 0 => NOT TRUE (for b > 0), which is why we need n0 = 2
59
Week 2:
Big O
• Question:
Is n = O(n²) ?
• Answer:
Yes. Remember that f(n) ∈ O(g(n)) if there
exist positive constants c and n0 such that
{0 ≤ f(n) ≤ c•g(n) for all n ≥ n0 }
If we set c = 1 and n0 = 1, then it is obvious
that c•n ≤ n² for all n ≥ n0.
60
Week 2:
Big O
• What does this mean about Big-O?
• When we write f(n) = O(g(n)) we mean that
some constant times g(n) is an asymptotic
upper bound on f(n); we are not claiming that
this is a tight upper bound.
61
Week 2:
Big O
• Big-O notation describes an upper bound
• Assume we use Big-O notation to bound the
worst case running time of an algorithm
• Now we have a bound on the running time of
the algorithm on every input
62
Week 2:
Big O
• Is it correct to say “the running time of
insertion sort is O(n²)”?
• Technically, the running time of insertion sort
depends on the characteristics of its input.
– If we have n items in our list, but they are already in
sorted order, then the running time of insertion sort
on this particular input is O(n).
63
Week 2:
Big O
• So what do we mean when we say that the
running time of insertion sort is O(n²)?
• What we normally mean is:
the worst case running time of insertion sort is
O(n²)
• That is, if we say that “the running time of
insertion sort is O(n²)”, we guarantee that
under no circumstances will insertion sort
perform worse than O(n²).
64
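The input-dependence of insertion sort can be made concrete by counting key comparisons on a sorted versus a reverse-sorted input. A sketch, assuming the usual textbook insertion sort (the counts shown are what this particular implementation reports):

def insertion_sort_comparisons(a):
    a = list(a)
    comparisons = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0:
            comparisons += 1            # one comparison of key with a[i]
            if a[i] > key:
                a[i + 1] = a[i]         # shift larger element right
                i -= 1
            else:
                break
        a[i + 1] = key
    return comparisons

n = 100
print(insertion_sort_comparisons(range(n)))          # 99   ~ n - 1   (already sorted)
print(insertion_sort_comparisons(range(n, 0, -1)))   # 4950 ~ n(n-1)/2 (reverse sorted)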
Week 2:
Big Theta: Definition
• For a given function g(n), Θ(g(n)) is the
set of functions:
Θ(g(n)) = {f(n): there exist positive
constants c1, c2, and n0
such that
0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n)
for all n ≥ n0 }
65
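As a worked instance of this definition, ½n² – 3n is in Θ(n²): one possible choice of constants (an illustrative assumption, not the only one) is c1 = ¼, c2 = ½ and n0 = 12, since 3n ≤ ¼n² once n ≥ 12. A numeric spot-check in Python (it samples a finite range, so it illustrates rather than proves the bound):

c1, c2, n0 = 0.25, 0.5, 12
f = lambda n: n * n / 2 - 3 * n
print(all(c1 * n * n <= f(n) <= c2 * n * n for n in range(n0, 10_000)))   # True for this sample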
Week 2:
Big Theta
• What does this mean?
• When we use Big-Theta notation, we are
saying that function f(n) can be “sandwiched”
between some small constant times g(n) and
some larger constant times g(n).
• In other words, f(n) is equal to g(n) to within a
constant factor.
66
Week 2:
Big Theta
(Figure: graph of f(n) ∈ Θ(g(n)) — to the right of n0, f(n) is sandwiched between c1·g(n) and c2·g(n).)
67
Week 2:
Big Theta
• If f(n) = Θ(g(n)), we can say that g(n) is an
asymptotically tight bound for f(n).
• Basically, we are guaranteeing that f(n) never
performs any better than c1 g(n), but also
never performs any worse than c2 g(n).
• We can see this visually by noting that, after
n0, the curve for f(n) never goes below c1 g(n)
and never goes above c2 g(n).
68
Week 2:
Big Theta
• Let us look at the performance of the merge
sort.
• We said that the performance of merge sort
was cn(log2n) + cn
• Does this depend upon the characteristics of
the input for merge sort? That is, does it make
a difference if the list is already sorted, or
reverse sorted, or in random order?
• No. Unlike insertion sort, merge sort behaves
exactly the same way for any type of input.
69
Week 2:
Big Theta
• The running time of merge sort is:
cn(log2n) + cn
• So, using asymptotic notation, we can discard
the “+ cn” part of this equation, giving:
cn(log2n)
• And we can disregard the constant multiplier,
c, which gives us the running time of merge
sort:
Θ(n(log2n))
70
Week 2:
Big Theta
• Why would we prefer to express the running
time of merge sort as Θ(n(log2n)) instead of
O(n(log2n))?
• Because Big-Theta is more precise than Big-O
• If we say that the running time of merge sort is
O(n(log2n)), we are merely making a claim
about merge sort’s asymptotic upper bound,
whereas if we say that the running time of
merge sort is Θ(n(log2n)), we are making a
claim about merge sort’s asymptotic upper
and lower bounds.
71
Week 2:
Big Theta
• Would it be incorrect to say that the running
time of merge sort is O(n(log2n))?
• No, not at all.
• It is just that we are not giving all of the
information that we have about the running
time of merge sort.
• But sometimes all we need to know is the
worst-case behavior of an algorithm. If that is
so, then Big-O notation is fine.
72
Week 2:
Big Theta
• One final note: the definition of Θ(g(n))
technically requires that every member f(n) ∈
Θ(g(n)) be asymptotically nonnegative – that
is, f(n) must be nonnegative whenever n is
sufficiently large.
• We assume that every function used within Θ
notation (and the other notations used in your
textbook’s Chapter 3) is asymptotically
nonnegative
73
Week 2:
Big Omega: Definition
• For a given function g(n), Ω(g(n)) is the
set of functions:
Ω(g(n)) = { f(n): there exist positive
constants c and n0 such
that 0 ≤ c g(n) ≤ f(n)
for all n ≥ n0 }
74
Week 2:
Big Omega
(Figure: graph of f(n) ∈ Ω(g(n)) — to the right of n0, f(n) stays at or above c·g(n).)
75
Week 2:
Big Omega
• We know that Big-O notation provides an
asymptotic upper bound on a function.
• Big-Omega notation provides an asymptotic
lower bound on a function.
• Basically, if we say that f(n) = Ω(g(n)) then we
are guaranteeing that, beyond n0, f(n) never
performs any better than c g(n).
76
Week 2:
Big Omega
• We usually use Big-Omega when we are
talking about the best case performance of an
algorithm.
• For example, the best case running time of
insertion sort (on an already sorted list) is Ω(n).
• But this also means that insertion sort never
performs any better than Ω(n) on any type of
input.
• So the running time of insertion sort is Ω(n).
77
Week 2:
Big Omega
• Could we say that the running time of insertion
sort is Ω(n²)?
• No. We know that if its input is already sorted,
the curve for insertion sort will dip below n² and
approach the curve for n.
• Could we say that the worst case running time
of insertion sort is Ω(n²)?
• Yes.
78
Week 2:
Big Omega
• It is interesting to note that, for any two
functions f(n) and g(n), f(n) = Θ(g(n)) if and
only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
79
Week 2:
Little o: Definition
• For a given function g(n), o(g(n)) is the set of
functions:
o(g(n))= {f(n): for any positive constant c,
there exists a constant n0
such that 0 ≤ f(n) < c g(n)
for all n ≥ n0 }
80
Week 2:
Little o
• Note the < instead of ≤ in the definition of
Little-o:
0 ≤ f(n) < c g(n) for all n ≥ n0
• Contrast this to the definition used for Big-O:
0 ≤ f(n) ≤ c g(n) for all n ≥ n0
• Little-o notation denotes an upper bound that
is not asymptotically tight. We might call this a
loose upper bound.
• Examples: 2n ∈ o(n²) but 2n² ∉ o(n²)
81
Week 2:
Little o: Definition
• Given that f(n) = o(g(n)), we know that g grows
strictly faster than f. This means that you can
multiply g by a positive constant c and beyond
n0, g will always exceed f.
• No graph to demonstrate little-o, but here is an
example:
n² = o(n³), but
n² ∉ o(n²).
Why? Because if c = 1, then f(n) = c·g(n), and
the definition insists that f(n) be strictly less
than c·g(n).
82
Week 2:
Little omega: Definition
• For a given function g(n), ω(g(n)) is the set of
functions:
ω(g(n))= {f(n): for any positive constant c,
there exists a constant n0
such that 0 ≤ c g(n) < f(n)
for all n ≥ n0 }
83
Week 2:
Little omega: Definition
• Note the < instead of ≤ in the definition:
0 ≤ c g(n) < f(n)
• Contrast this to the definition used for Big-Ω:
0 ≤ c g(n) ≤ f(n)
• Little-omega notation denotes a lower bound
that is not asymptotically tight. We might call
this a loose lower bound.
• Examples:
n ∉ ω(n²)      n ∈ ω(√n)      n ∈ ω(lg n)
84
Week 2:
Little omega
• No graph to demonstrate little-omega, but here
is an example:
n³ is ω(n²), but
n³ ∉ ω(n³).
Why? Because if c = 1, then f(n) = c g(n), and
the definition insists that c g(n) be strictly less
than f(n).
85
Week 2:
Comparison of Notations
f(n) = o(g(n)) ≈ a < b
f(n) = O(g(n)) ≈ a ≤ b
f(n) = Θ(g(n)) ≈ a = b
f(n) = Ω(g(n)) ≈ a ≥ b
f(n) = ω(g(n)) ≈ a > b
86
Week 2:
Asymptotic Notation
(Figure: the three notations side by side, each for input sizes n ≥ n0 — Big Theta: f(n) between c1·g(n) and c2·g(n); Big O: f(n) at or below c·g(n); Big Omega: f(n) at or above c·g(n).)
87
Week 2:
Asymptotic Notation in
Equations and Inequalities
• When asymptotic notation stands alone on
right-hand side of equation, ‘=’ is used to
mean ‘∊’.
• In general, we interpret asymptotic notation as
standing for some anonymous function we do
not care to name.
• Example: 2n² + 3n + 1 = 2n² + Θ(n) means
that 2n² + 3n + 1 = 2n² + f(n) for some f(n) ∊
Θ(n). (In this case, f(n) = 3n + 1, which is in
Θ(n).)
88
Week 2:
Asymptotic Notation in
Equations and Inequalities
• This use of asymptotic notation eliminates
inessential detail in an equation (e.g., we do
not have to specify lower-order terms; they are
understood to be included in anonymous
function).
• The number of anonymous functions in an
expression is the number of times asymptotic
notation appears
– Example: ∑ O(i) for i = 1 to n is one anonymous function
– not the same as O(1)+O(2)+…+O(n), which has n hidden constants
89
Week 2:
Asymptotic Notation in
Equations and Inequalities
• Appearance of asymptotic notation on left-
hand side of equation means, no matter how
the anonymous functions are chosen on the
left-hand side, there is a way to choose the
anonymous functions on the right-hand side to
make the equation valid.
• Example: 2n² + Θ(n) = Θ(n²) means that for
any function f(n) ∈ Θ(n) there is some function
g(n) ∈ Θ(n²) such that 2n² + f(n) = g(n) for all n.
90
Week 2:
Comparison of Functions
• Transitivity:
f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n))
f(n) = O(g(n)) and g(n) = O(h(n)) imply f(n) = O(h(n))
f(n) = Ω(g(n)) and g(n) = Ω(h(n)) imply f(n) = Ω(h(n))
f(n) = o(g(n)) and g(n) = o(h(n)) imply f(n) = o(h(n))
f(n) = ω(g(n)) and g(n) = ω(h(n)) imply f(n) = ω(h(n))
91
Week 2:
Comparison of Functions
• Reflexivity:
f(n) = Θ(f(n))
f(n) = O(f(n))
f(n) = Ω(f(n))
92
Week 2:
Comparison of Functions
• Symmetry:
f(n) = Θ(g(n)) iff g(n) = Θ(f(n))
93
Week 2:
Comparison of Functions
• Transpose symmetry:
f(n) = O(g(n)) iff g(n) = Ω(f(n))
f(n) = o(g(n)) iff g(n) = ω(f(n))
94
Week 2:
Comparison of Functions
• Analogies:
f(n) = o(g(n)) ≈ a<b
f(n) = O(g(n)) ≈ a≤b
f(n) = Θ(g(n)) ≈ a=b
f(n) = Ω(g(n)) ≈ a≥b
f(n) = ω(g(n)) ≈ a>b
95
Week 2:
Comparison of Functions
• Asymptotic relationships:
• Not all functions are asymptotically
comparable.
• That is, it may be the case that neither
f(n) = o(g(n)) nor f(n) = ω(g(n)) is true.
97
Week 2:
Using limit of ratio to show
order of growth of a function
lim (n→∞) f(n)/g(n) = 0  ⇒  f(n) = o(g(n))  ⇒  f(n) = O(g(n))
lim (n→∞) f(n)/g(n) = c, with 0 < c < ∞  ⇒  f(n) = Θ(g(n))  ⇔  f(n) = O(g(n)) and f(n) = Ω(g(n))
98
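If a computer algebra system is handy, the same limit test can be applied mechanically. A small sketch using SymPy (assuming the sympy package is available; the example functions are chosen only for illustration):

from sympy import symbols, limit, log, oo

n = symbols('n', positive=True)
print(limit((n * log(n)) / n**2, n, oo))         # 0   -> n*log n is o(n^2), hence O(n^2)
print(limit((n**2 / 2 - 3 * n) / n**2, n, oo))   # 1/2 -> n^2/2 - 3n is Theta(n^2)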
Week 2:
Standard Notation
• Pages 51 – 56 contain review material
from your previous math courses
• Please read this section of your textbook
and refresh your memory of these
mathematical concepts
• The remaining slides in this section are
for your aid in reviewing the material
99
Week 2:
Monotonicity
• A function f(n) is monotonically
increasing if m ≤ n implies f(m) ≤ f(n).
• A function f(n) is monotonically
decreasing if m ≤ n implies f(m) ≥ f(n).
• A function f(n) is strictly increasing if m <
n implies f(m) < f(n).
• A function f(n) is strictly decreasing if m
< n implies f(m) > f(n).
100
Week 2:
Floor and Ceiling
• For any real number x, the floor of x is the
greatest integer less than or equal to x.
• The floor function f(x) = ⎣x⎦ is monotonically
increasing.
• For any real number x, the ceiling of x is the
least integer greater than or equal to x.
• The ceiling function f(x) = ⎡x⎤ is monotonically
increasing.
101
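A quick check of a useful consequence, ⎣n/2⎦ + ⎡n/2⎤ = n, which is what lets merge sort split a subarray of any length, not just an even one (a small illustrative script):

from math import floor, ceil

print(all(floor(n / 2) + ceil(n / 2) == n for n in range(1, 1000)))   # True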
Week 2:
Modulo Arithmetic
• For any integer a and any positive integer n,
the value of a modulo n ( or a mod n) is the
remainder we have after dividing a by n.
• a mod n = a – n·⎣a/n⎦
• if (a mod n) = (b mod n), then a ≡ b (mod n)
(read as “a is equivalent to b, modulo n”)
102
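In Python (for positive n) the % operator computes exactly this value, as does the floor-division form of the definition (a small illustrative check):

for a, n in [(7, 3), (25, 7), (-7, 3)]:
    print(a % n, a - n * (a // n))   # matching pairs: 1 1, 4 4, 2 2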
Week 2:
Polynomials
• Given a nonnegative integer d, a polynomial in
n of degree d is a function p(n) of the form
p(n) = a0 + a1·n + … + ad·n^d = ∑ ai·n^i  (sum over i = 0 to d)
where the constants a0, a1, ..., ad are the
coefficients of the polynomial and ad ≠ 0.
103
Week 2:
Polynomials
• A polynomial is asymptotically positive if and
only if ad > 0.
• If a polynomial p(n) of degree d is
asymptotically positive, then p(n) = Θ(n^d).
• For any real constant a ≥ 0, n^a is monotonically
increasing.
• For any real constant a ≤ 0, n^a is monotonically
decreasing.
• A function is polynomially bounded if
f(n) = O(n^k) for some constant k.
104
Week 2:
Exponentials
• For all n and a ≥ 1, the function a^n is
monotonically increasing in n.
• For all real constants a and b such that a > 1,
lim (n→∞) n^b / a^n = 0
105
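A quick numeric illustration of that limit with b = 10 and a = 1.1 (values chosen only for illustration): the ratio n^b / a^n grows at first but eventually collapses toward 0.

for n in [10, 100, 1000, 2000]:
    print(n, n**10 / 1.1**n)   # rises for small n, then heads rapidly toward 0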
Week 2:
Logarithms
• lg n = log2 n (binary logarithm)
• ln n = loge n (natural logarithm)
• lg^k n = (lg n)^k (exponentiation)
• lg lg n = lg (lg n) (composition)
• lg n + k means (lg n) + k, not log (n + k)
• If b > 1 and we hold b constant, then, for n > 0,
the function logbn is strictly increasing.
• Changing the base of a logarithm from one
constant to another only changes the value of
the logarithm by a constant factor.
106
Week 2:
Logarithms
• A function is polylogarithmically bounded if f(n)
= O(lgk n) for some constant k.
• lg^b n = o(n^a) for any constant a > 0
• This means that any positive polynomial
function grows faster than any polylogarithmic
function.
107
Week 2:
Factorials
• n factorial is defined for integers n ≥ 0 as:
n! = 1              if n = 0
n! = n · (n – 1)!   if n > 0
108
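The recursive definition translates directly into code (illustrative sketch):

def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)   # n! = 1 if n = 0, else n * (n-1)!

print([factorial(n) for n in range(6)])            # [1, 1, 2, 6, 24, 120]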
Week 2:
Fibonacci Numbers
• The Fibonacci numbers are defined by the
recurrence:
F0 = 0
F1 = 1
Fi = Fi–1 + Fi–2   for i ≥ 2
• Fibonacci numbers grow exponentially
109
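The recurrence computed iteratively in Python (illustrative sketch), which also makes the rapid, exponential-style growth easy to see:

def fib(n):
    a, b = 0, 1                 # F0, F1
    for _ in range(n):
        a, b = b, a + b         # Fi = Fi-1 + Fi-2
    return a

print([fib(i) for i in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(fib(100))                      # 354224848179261915075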