Analysis of Algorithms
[Diagram: Input → Algorithm → Output]
© 2013 Goodrich, Tamassia, Goldwasser
Running Time
Most algorithms transform input objects into output objects.
The running time of an algorithm typically grows with the input size.
Average case time is often difficult to determine.
We focus on the worst case running time.
  Easier to analyze
  Crucial to applications such as games, finance and robotics
[Chart: best-case, average-case, and worst-case running time versus input size]
Experimental Studies
Write a program implementing the algorithm
Run the program with inputs of varying size and composition, noting the time needed
Plot the results (a minimal timing sketch is given below)
[Chart: measured time (ms) versus input size]
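A minimal sketch of such an experimental study, assuming matplotlib is installed and using Python's built-in sorted as a stand-in for the algorithm under test:

import random
import time
import matplotlib.pyplot as plt

sizes = [1000 * k for k in range(1, 21)]
times_ms = []
for n in sizes:
    data = [random.random() for _ in range(n)]   # one input of size n
    start = time.perf_counter()
    sorted(data)                                 # the algorithm being measured
    times_ms.append(1000 * (time.perf_counter() - start))

plt.plot(sizes, times_ms, "o")
plt.xlabel("Input Size")
plt.ylabel("Time (ms)")
plt.show()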
Limitations of Experiments
It is necessary to implement the algorithm, which may be difficult
Results may not be indicative of the running time on other inputs not included in the experiment
In order to compare two algorithms, the same hardware and software environments must be used
Theoretical Analysis
Uses a high-level description of the algorithm instead of an implementation
Characterizes running time as a function of the input size, n
Takes into account all possible inputs
Allows us to evaluate the speed of an algorithm independent of the hardware/software environment
Pseudocode
High-level description of an algorithm
More structured than English prose
Less detailed than a program
Preferred notation for describing algorithms
Hides program design issues
Pseudocode Details
Control flow
  if … then … [else …]
  while … do …
  repeat … until …
  for … do …
  Indentation replaces braces
Method declaration
  Algorithm method(arg [, arg …])
    Input …
    Output …
Method call
  method(arg [, arg …])
Return value
  return expression
Expressions:
  ← Assignment
  = Equality testing
  n² Superscripts and other mathematical formatting allowed
The Random Access Machine (RAM) Model
A CPU
A potentially unbounded bank of memory cells, each of which can hold an arbitrary number or character
Memory cells are numbered and accessing any cell in memory takes unit time
Seven Important Functions
Seven functions that often appear in algorithm analysis:
  Constant ≈ 1
  Logarithmic ≈ log n
  Linear ≈ n
  N-Log-N ≈ n log n
  Quadratic ≈ n²
  Cubic ≈ n³
  Exponential ≈ 2ⁿ
In a log-log chart, the slope of the line corresponds to the growth rate (a small plotting sketch is given below)
[Log-log chart: T(n) versus n for the linear, quadratic, and cubic functions]
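A small sketch of such a log-log chart, assuming numpy and matplotlib are installed; on these axes each polynomial appears as a straight line whose slope equals its degree:

import numpy as np
import matplotlib.pyplot as plt

n = np.logspace(0, 10, 200)          # n from 1e0 to 1e10
for label, f in [("Linear", n), ("Quadratic", n**2), ("Cubic", n**3)]:
    plt.loglog(n, f, label=label)
plt.xlabel("n")
plt.ylabel("T(n)")
plt.legend()
plt.show()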
Functions Graphed Using “Normal” Scale (slide by Matt Stallmann, included with permission)
[Chart on linear axes: g(n) = 1, lg n, n, n lg n, n², n³, and 2ⁿ]
Primitive Operations
Basic computations performed by an algorithm
Identifiable in pseudocode
Largely independent from the programming language
Exact definition not important (we will see why later)
Assumed to take a constant amount of time in the RAM model
Examples:
  Evaluating an expression
  Assigning a value to a variable
  Indexing into an array
  Calling a method
  Returning from a method
Counting Primitive Operations
By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size
Step 1: 2 ops; step 3: 2 ops; step 4: 2n ops; step 5: 2n ops; step 6: 0 to n ops; step 7: 1 op (the step numbers refer to the find_max pseudocode of the original slide)
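The find_max pseudocode itself is not reproduced in this text; the following Python sketch is an assumed stand-in, written to be consistent with the 4n + 5 to 5n + 5 operation counts discussed on the next slide:

def find_max(data):
    """Return the maximum element of a nonempty sequence."""
    biggest = data[0]        # a couple of ops: index, then assign
    for val in data:         # roughly 2n ops to drive the loop
        if val > biggest:    # roughly 2n ops for the comparisons
            biggest = val    # 0 to n ops, depending on the input
    return biggest           # 1 op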
Estimating Running Time
Algorithm find_max executes 5n + 5 primitive operations in the worst case, 4n + 5 in the best case. Define:
  a = Time taken by the fastest primitive operation
  b = Time taken by the slowest primitive operation
Let T(n) be the worst-case time of find_max. Then
  a(4n + 5) ≤ T(n) ≤ b(5n + 5)
Hence, the running time T(n) is bounded by two linear functions.
Growth Rate of Running Time
Changing the hardware/software environment
  Affects T(n) by a constant factor, but
  Does not alter the growth rate of T(n)
The linear growth rate of the running time T(n) is an intrinsic property of algorithm find_max
Why Growth Rate Matters (slide by Matt Stallmann, included with permission)
if runtime is...   time for n + 1        time for 2n         time for 4n
c lg n             c lg(n + 1)           c (lg n + 1)        c (lg n + 2)
c n                c (n + 1)             2c n                4c n
c n lg n           ~ c n lg n + c n      2c n lg n + 2c n    4c n lg n + 4c n
c n²               ~ c n² + 2c n         4c n²               16c n²
c n³               ~ c n³ + 3c n²        8c n³               64c n³
c 2ⁿ               c 2^(n+1)             c 2^(2n)            c 2^(4n)

For the quadratic row, for example, the runtime quadruples when the problem size doubles.
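A small numeric sketch (not part of the original slide) of the same point: for each growth rate, the ratio T(2n)/T(n) shows how the runtime scales when the input size doubles (about 2x for linear, 4x for quadratic, 8x for cubic):

import math

growth_rates = {
    "lg n":   lambda n: math.log2(n),
    "n":      lambda n: n,
    "n lg n": lambda n: n * math.log2(n),
    "n^2":    lambda n: n ** 2,
    "n^3":    lambda n: n ** 3,
}
n = 1000
for name, f in growth_rates.items():
    print(f"{name:8s} T(2n)/T(n) = {f(2 * n) / f(n):.2f}")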
Comparison of Two Algorithms (slide by Matt Stallmann, included with permission)
insertion sort is n² / 4
merge sort is 2 n lg n
sort a million items?
  insertion sort takes roughly 70 hours
  while merge sort takes roughly 40 seconds
This is a slow machine, but if 100 x as fast then it's 40 minutes versus less than 0.5 seconds
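A back-of-the-envelope check of these figures; the assumed speed of one million primitive operations per second is my choice, picked to reproduce the slide's numbers:

import math

n = 10 ** 6
ops_per_second = 10 ** 6
insertion_ops = n ** 2 / 4               # about 2.5e11 operations
merge_ops = 2 * n * math.log2(n)         # about 4.0e7 operations
print(insertion_ops / ops_per_second / 3600)   # roughly 69 hours
print(merge_ops / ops_per_second)              # roughly 40 seconds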
Constant Factors
The growth rate is not affected by constant factors or lower-order terms
Examples
  10²n + 10⁵ is a linear function
  10⁵n² + 10⁸n is a quadratic function
[Log-log chart: T(n) versus n for the linear and quadratic examples]
Big-Oh Notation
Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n0 such that f(n) ≤ c·g(n) for n ≥ n0
Example: 2n + 10 is O(n)
  2n + 10 ≤ cn
  (c - 2)n ≥ 10
  n ≥ 10/(c - 2)
  Pick c = 3 and n0 = 10
[Log-log chart: the functions 3n, 2n + 10, and n]
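A tiny sanity check of the witnesses chosen above (a sketch, not from the slide): with c = 3 and n0 = 10, f(n) = 2n + 10 never exceeds c·n:

c, n0 = 3, 10
assert all(2 * n + 10 <= c * n for n in range(n0, 10 ** 6))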
Big-Oh Example
Example: the function n² is not O(n)
  n² ≤ cn
  n ≤ c
  The above inequality cannot be satisfied since c must be a constant
[Log-log chart: the functions n², 100n, 10n, and n]
More Big-Oh Examples
7n - 2
  7n - 2 is O(n)
  need c > 0 and n0 ≥ 1 such that 7n - 2 ≤ c·n for n ≥ n0
  this is true for c = 7 and n0 = 1
3n³ + 20n² + 5
  3n³ + 20n² + 5 is O(n³)
  need c > 0 and n0 ≥ 1 such that 3n³ + 20n² + 5 ≤ c·n³ for n ≥ n0
  this is true for c = 4 and n0 = 21
3 log n + 5
  3 log n + 5 is O(log n)
  need c > 0 and n0 ≥ 1 such that 3 log n + 5 ≤ c·log n for n ≥ n0
  this is true for c = 8 and n0 = 2
Big-Oh and Growth Rate
The big-Oh notation gives an upper bound on the growth rate of a function
The statement “f(n) is O(g(n))” means that the growth rate of f(n) is no more than the growth rate of g(n)
We can use the big-Oh notation to rank functions according to their growth rate

                   f(n) is O(g(n))   g(n) is O(f(n))
g(n) grows more    Yes               No
f(n) grows more    No                Yes
Same growth        Yes               Yes
Big-Oh Rules
If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.,
  Drop lower-order terms
  Drop constant factors
Use the smallest possible class of functions
  Say “2n is O(n)” instead of “2n is O(n²)”
Use the simplest expression of the class
  Say “3n + 5 is O(n)” instead of “3n + 5 is O(3n)”
Asymptotic Algorithm Analysis
The asymptotic analysis of an algorithm determines the running time in big-Oh notation
To perform the asymptotic analysis
  We find the worst-case number of primitive operations executed as a function of the input size
  We express this function with big-Oh notation
Example:
  We say that algorithm find_max “runs in O(n) time”
Since constant factors and lower-order terms are eventually dropped anyhow, we can disregard them when counting primitive operations
Computing Prefix Averages
We further illustrate asymptotic analysis with two algorithms for prefix averages
The i-th prefix average of an array X is the average of the first (i + 1) elements of X:
  A[i] = (X[0] + X[1] + … + X[i]) / (i + 1)
Computing the array A of prefix averages of another array X has applications to financial analysis
[Chart: the values of X and the prefix averages A for a sample input]
Prefix Averages (Quadratic)
The following algorithm computes prefix averages in quadratic time by applying the definition
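The slide's code is not reproduced in this text; a minimal Python sketch of such a quadratic-time algorithm (names and details are my assumption, but the nested-loop structure is the point) is:

def prefixAverage1(X):
    """Return a list A such that A[i] is the average of X[0], ..., X[i]."""
    n = len(X)
    A = [0] * n
    for i in range(n):               # outer loop runs n times
        total = 0
        for j in range(i + 1):       # inner loop runs i + 1 times
            total += X[j]            # sum the first i + 1 elements
        A[i] = total / (i + 1)       # record the i-th prefix average
    return A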
Arithmetic Progression
The running time of prefixAverage1 is O(1 + 2 + … + n)
The sum of the first n integers is n(n + 1)/2
  There is a simple visual proof of this fact
Thus, algorithm prefixAverage1 runs in O(n²) time
[Chart: bars of heights 1, 2, …, n illustrating the visual proof]
Prefix Averages 2 (Looks Better)
The following algorithm uses an internal Python function to simplify the code
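Again the slide's code is not reproduced here; a sketch of the idea, assuming the internal function in question is the built-in sum:

def prefixAverage2(X):
    """Return the prefix averages of X, using Python's built-in sum."""
    n = len(X)
    A = [0] * n
    for i in range(n):
        A[i] = sum(X[0:i + 1]) / (i + 1)   # sum(...) still takes O(i + 1) time
    return A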
Algorithm prefixAverage2 still runs in O(n²) time!
Prefix Averages 3 (Linear Time)
The following algorithm computes prefix averages in linear time by keeping a running sum
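A Python sketch of such a linear-time version (again an assumed stand-in for the slide's code):

def prefixAverage3(X):
    """Return the prefix averages of X in linear time."""
    n = len(X)
    A = [0] * n
    total = 0
    for i in range(n):
        total += X[i]            # running sum of the first i + 1 elements
        A[i] = total / (i + 1)   # constant work per iteration
    return A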
Algorithm prefixAverage3 runs in O(n) time
Math you need to Review
Summations
Logarithms and Exponents
  properties of logarithms:
    log_b(xy) = log_b x + log_b y
    log_b(x/y) = log_b x - log_b y
    log_b x^a = a log_b x
    log_b a = log_x a / log_x b
  properties of exponentials:
    a^(b+c) = a^b · a^c
    a^(bc) = (a^b)^c
    a^b / a^c = a^(b-c)
    b = a^(log_a b)
    b^c = a^(c·log_a b)
Proof techniques
Basic probability
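A quick numeric spot-check of a few of these identities in Python (the bases and arguments are arbitrary choices of mine):

import math

b, x, y, a = 2, 8.0, 32.0, 3.0
assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))
assert math.isclose(math.log(x ** a, b), a * math.log(x, b))
assert math.isclose(b ** math.log(a, b), a)   # b^(log_b a) = a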
Relatives of Big-Oh
big-Omega
  f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n0
big-Theta
  f(n) is Θ(g(n)) if there are constants c' > 0 and c'' > 0 and an integer constant n0 ≥ 1 such that c'·g(n) ≤ f(n) ≤ c''·g(n) for n ≥ n0
Intuition for Asymptotic Notation
Big-Oh
  f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n)
big-Omega
  f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n)
big-Theta
  f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)
Example Uses of the Relatives of Big-Oh
5n² is Ω(n²)
  f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n0
  let c = 5 and n0 = 1
5n² is Ω(n)
  f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n0
  let c = 1 and n0 = 1
5n² is Θ(n²)
  f(n) is Θ(g(n)) if it is Ω(n²) and O(n²). We have already seen the former; for the latter, recall that f(n) is O(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≤ c·g(n) for n ≥ n0
  Let c = 5 and n0 = 1