02 Fundamentals of the Analysis of Algorithm Efficiency
CHAPTER 2
INTRODUCTION TO THE DESIGN & ANALYSIS OF ALGORITHMS
BY ANANY LEVITIN
The Analysis Framework
Two kinds of efficiency:
•Time efficiency,
◦ also called time complexity,
◦ indicates how fast the algorithm in question runs.
•Space efficiency,
◦ also called space complexity,
◦ refers to the amount of memory units required by the algorithm in addition to the space needed for its input and output.
Measuring an Input’s Size
•Observation: almost all algorithms run longer on larger inputs.
•It is therefore logical to investigate an algorithm's efficiency as a function of some parameter n indicating the algorithm's input size.
Measuring an Input’s Size
•The choice of an appropriate size metric can be influenced
by operations of the algorithm in question.
•For example, how should we measure an input’s size for a
spell-checking algorithm? If the algorithm examines
individual characters of its input, we should measure the
size by the number of characters; if it works by processing
words, we should count their number in the input.
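The two candidate size metrics for the spell-checking example can be sketched in Python (the function names here are ours, added for illustration; they are not from the text):

```python
# Two plausible size metrics for a spell-checker's input: character count
# vs. word count. Which one serves as "n" depends on what the algorithm
# actually examines.

def size_in_characters(text: str) -> int:
    # Appropriate when the algorithm examines individual characters.
    return len(text)

def size_in_words(text: str) -> int:
    # Appropriate when the algorithm processes whole words.
    return len(text.split())

text = "speling is hard"
print(size_in_characters(text))  # 15
print(size_in_words(text))       # 3
```

For the same input, the two metrics can differ by an order of magnitude, so the choice affects how the efficiency function reads.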
Units for Measuring Running Time
•One option is to use some standard unit of time measurement (a second, a millisecond, and so on) to measure the running time of a program implementing the algorithm.
•There are obvious drawbacks to such an approach, however:
◦ dependence on the speed of a particular computer,
◦ dependence on the quality of a program implementing the
algorithm and of the compiler used in generating the machine
code, and
◦ the difficulty of clocking the actual running time of the program.
Units for Measuring Running Time
•One possible approach is to count the number of times
each of the algorithm’s operations is executed. This
approach is both excessively difficult and, as we shall see,
usually unnecessary.
•A better idea is to identify the algorithm's basic operation, the operation contributing the most to the total running time, and compute the number of times the basic operation is executed.
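The counting idea can be made concrete with a small instrumented routine (a Python sketch; the explicit counter is ours, added to expose the count):

```python
# Counting executions of the basic operation instead of clocking time.
# For finding the maximum element, the basic operation is the comparison
# x > max_val; it executes exactly n - 1 times for an input of size n.

def max_element(a):
    comparisons = 0          # instruments the basic operation
    max_val = a[0]
    for x in a[1:]:
        comparisons += 1     # one execution of the basic operation
        if x > max_val:
            max_val = x
    return max_val, comparisons

val, count = max_element([3, 1, 4, 1, 5, 9, 2, 6])
print(val, count)  # 9 7  (C(n) = n - 1 for n = 8)
```

The count C(n) = n − 1 is independent of machine speed, compiler, and clock resolution, which is exactly what the framework wants.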
Orders of Growth
Asymptotic Notations and
Basic Efficiency Classes
Types of analysis (3 ways to analyze an algorithm):
•Ο (big oh): worst case, longest time
•Θ (big theta): average case, average time
•Ω (big omega): best case, least time
•t(n) and g(n) can be any nonnegative functions defined on the set of natural numbers.
•t(n) will be an algorithm's running time (usually indicated by its basic operation count C(n)), and
•g(n) will be some simple function to compare the count with.
•O(g(n)) is the set of all functions with a lower or same order
of growth as g(n) (to within a constant multiple, as n goes to
infinity).
•The second notation, Ω(g(n)), stands for the set of all functions with a higher or same order of growth as g(n) (to within a constant multiple, as n goes to infinity).
•The third notation, Θ(g(n)), is the set of all functions that have the same order of growth as g(n) (to within a constant multiple, as n goes to infinity).
•Thus, every quadratic function an² + bn + c with a > 0 is in Θ(n²), but so are, among infinitely many others, n² + sin n and n² + log n.
O-notation
A function t(n) is said to be in O(g(n)),
denoted t(n) ∈ O(g(n)),
if t(n) is bounded above
by some constant multiple of g(n) for all large n,
i.e., if there exist some positive constant c and some nonnegative integer n0
such that
t(n) ≤ cg(n) for all n ≥ n0.
O-notation
•O-notation is an asymptotic notation for the worst case, or ceiling of growth, for a given function.
•It provides us with an asymptotic upper bound on the growth rate of an algorithm's running time.
O-notation
•O(g(n)) is the set of all functions with a lower or same order
of growth as g(n) (to within a constant multiple, as n goes to
infinity)
n ∈ O(n²)
100n + 5 ∈ O(n²)
n(n − 1)/2 ∈ O(n²)
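The membership 100n + 5 ∈ O(n²) can be checked numerically against the definition. The constants c = 105 and n0 = 1 are one valid choice (among infinitely many), following the bound 100n + 5 ≤ 105n ≤ 105n² for n ≥ 1:

```python
# Verifying t(n) ≤ c·g(n) for t(n) = 100n + 5, g(n) = n², c = 105, n0 = 1.
# A numerical spot check of the definition, not a proof.

def t(n):
    return 100 * n + 5

def bound(n, c=105):
    return c * n * n

assert all(t(n) <= bound(n) for n in range(1, 10_000))
print("100n + 5 <= 105*n^2 holds for all tested n >= 1")
```

Any larger c (or larger n0) would also witness the membership; O-notation only requires that some such pair exists.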
Ω-notation
A function t(n) is said to be in Ω(g(n)),
denoted t(n) ∈ Ω(g(n)),
if t(n) is bounded below
by some positive constant multiple of g(n) for all large n,
i.e., if there exist some positive constant c and some nonnegative integer n0
such that
t(n) ≥ cg(n) for all n ≥ n0.
Ω-notation
•Ω(g(n)) stands for the set of all functions with a higher or same order of growth as g(n) (to within a constant multiple, as n goes to infinity).
n³ ∈ Ω(n²)
n(n − 1)/2 ∈ Ω(n²)
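These lower-bound memberships can likewise be spot-checked against the definition. The constants used here (c = 1, n0 = 1 for the first; c = 1/4, n0 = 2 for the second) are our choices, not the only valid ones:

```python
# Checking t(n) >= c·g(n) for the two Ω examples.
# n^3 >= 1·n^2 for all n >= 1, and
# n(n-1)/2 >= (1/4)·n^2 for all n >= 2  (equivalent to n >= 2).

assert all(n ** 3 >= n ** 2 for n in range(1, 1000))
assert all(n * (n - 1) / 2 >= 0.25 * n * n for n in range(2, 1000))
print("both lower bounds hold for all tested n")
```

The second inequality fails at n = 1 (0 ≥ 0.25 is false), which is why a nonzero n0 is part of the definition.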
Θ-notation
A function t(n) is said to be in Θ(g(n)),
denoted t(n) ∈ Θ(g(n)),
if t(n) is bounded both above and below
by some positive constant multiples of g(n) for all large n,
i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0
such that
c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.
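The two-sided bound can be checked numerically for n(n − 1)/2 ∈ Θ(n²). The constants c1 = 1/2, c2 = 1/4, n0 = 2 are one valid choice: the upper bound holds since n(n − 1)/2 = n²/2 − n/2 ≤ n²/2, and the lower bound n²/2 − n/2 ≥ n²/4 reduces to n ≥ 2:

```python
# Checking c2·g(n) <= t(n) <= c1·g(n) for t(n) = n(n-1)/2, g(n) = n²,
# with c1 = 1/2, c2 = 1/4, n0 = 2. A numerical spot check, not a proof.

def t(n):
    return n * (n - 1) / 2

assert all(0.25 * n * n <= t(n) <= 0.5 * n * n for n in range(2, 10_000))
print("Theta bounds hold with c1 = 1/2, c2 = 1/4, n0 = 2")
```

At the boundary n = 2 both inequalities are tight enough to matter: 1 ≤ 1 ≤ 2.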
Basic Efficiency Classes
•1: constant
•log n: logarithmic
•n: linear
•n log n: n-log-n (linearithmic)
•n²: quadratic
•n³: cubic
•2ⁿ: exponential
•n!: factorial
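To see how sharply the standard efficiency classes separate, one can tabulate each class for a few small values of n (an illustrative Python sketch; the sample values of n are ours):

```python
# Tabulating the basic efficiency classes for n = 2, 4, 8, 16.
# Even at n = 16 the classes already differ by many orders of magnitude.
import math

classes = [
    ("1",       lambda n: 1),
    ("log n",   lambda n: math.log2(n)),
    ("n",       lambda n: n),
    ("n log n", lambda n: n * math.log2(n)),
    ("n^2",     lambda n: n ** 2),
    ("n^3",     lambda n: n ** 3),
    ("2^n",     lambda n: 2 ** n),
    ("n!",      lambda n: math.factorial(n)),
]
for name, f in classes:
    print(f"{name:8}", [round(f(n)) for n in (2, 4, 8, 16)])
```

At n = 16 the column reads 1, 4, 16, 64, 256, 4096, 65536, and about 2 × 10¹³, which is why the class, not the constant factor, dominates for large inputs.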
Exercise
Order the following functions according to their order of growth:
2ⁿ, (3/2)ⁿ, 2n³, 3n², 1000, 3n
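One informal way to attack such comparisons is to examine the ratio t(n)/g(n) for increasing n: if it shrinks toward 0, t has a lower order of growth; if it grows without bound, t dominates. This is a numerical sketch only, not a proof (the limits should be argued analytically):

```python
# Comparing orders of growth by evaluating the ratio t(n)/g(n)
# at a few increasing values of n.

def ratio_trend(t, g, ns=(10, 100, 1000)):
    return [t(n) / g(n) for n in ns]

# (3/2)^n vs 2^n: the ratio (3/4)^n shrinks toward 0,
# so (3/2)^n has a lower order of growth than 2^n.
print(ratio_trend(lambda n: 1.5 ** n, lambda n: 2.0 ** n))

# 2n^3 vs 3n^2: the ratio (2/3)n grows without bound, so 2n^3 dominates.
print(ratio_trend(lambda n: 2 * n ** 3, lambda n: 3 * n ** 2))
```

A constant ratio (bounded away from 0 and infinity) would indicate the same order of growth, i.e. a Θ relationship.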