
02 Fundamentals of The Analysis of Algorithm Efficiency

The document discusses the analysis of algorithm efficiency, including time and space complexity. It introduces asymptotic notations like Big-O, Big-Omega, and Big-Theta that are used to analyze an algorithm's efficiency based on input size. Common efficiency classes like constant, linear, polynomial, and exponential are also covered. Examples are provided to illustrate how to determine the complexity of different functions.

Fundamentals of the Analysis

of Algorithm Efficiency
CHAPTER 2
INTRODUCTION TO THE DESIGN & ANALYSIS OF ALGORITHMS
BY ANANY LEVITIN
The Analysis Framework
Two kinds of efficiency:
•Time efficiency,
◦ also called time complexity,
◦ indicates how fast the algorithm in question runs.
•Space efficiency,
◦ also called space complexity,
◦ refers to the number of memory units required by the algorithm in addition to the space needed for its input and output.
Measuring an Input’s Size
•Observation: almost all algorithms run longer on larger inputs.
•It is therefore logical to investigate an algorithm’s efficiency as a function of some parameter n indicating the algorithm’s input size.
Measuring an Input’s Size
•The choice of an appropriate size metric can be influenced
by operations of the algorithm in question.
•For example, how should we measure an input’s size for a
spell-checking algorithm? If the algorithm examines
individual characters of its input, we should measure the
size by the number of characters; if it works by processing
words, we should count their number in the input.
Units for Measuring Running Time
•One option is to use some standard unit of time measurement (a second, a millisecond, and so on) to measure the running time of a program implementing the algorithm.
•There are obvious drawbacks to such an approach, however:
◦ dependence on the speed of a particular computer,
◦ dependence on the quality of a program implementing the algorithm and of the compiler used in generating the machine code, and
◦ the difficulty of clocking the actual running time of the program.
Units for Measuring Running Time
•One possible approach is to count the number of times each of the algorithm’s operations is executed. This approach is both excessively difficult and, as we shall see, usually unnecessary.
•Instead, identify the algorithm’s basic operation, the operation contributing the most to the total running time, and compute the number of times the basic operation is executed (see the sketch below).
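For instance, in sequential search the basic operation is the key comparison. A minimal Python sketch (my own illustration, not from the slides; the function name and sample values are hypothetical) that counts this basic operation:

def sequential_search(a, key):
    """Return (index of key or -1, number of key comparisons performed)."""
    comparisons = 0                 # counter for the basic operation
    for i, item in enumerate(a):
        comparisons += 1            # one key comparison per iteration
        if item == key:
            return i, comparisons
    return -1, comparisons

# Worst case: the key is absent, so C(n) = n comparisons.
print(sequential_search([3, 1, 4, 1, 5, 9], 7))   # -> (-1, 6)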
Orders of Growth
Asymptotic Notations and
Basic Efficiency Classes
Types of analysis (3 ways to analyze an
algorithm)

Ο (big oh): worst case, longest time
Θ (big theta): average case, average time
Ω (big omega): best case, least time
•t (n) and g(n) can be any nonnegative functions defined on
the set of natural numbers.
•t (n) will be an algorithm’s running time (usually indicated
by its basic operation count C(n)), and
•g(n) will be some simple function to compare the count
with.
•O(g(n)) is the set of all functions with a lower or same order
of growth as g(n) (to within a constant multiple, as n goes to
infinity).
•The second notation, Ω(g(n)), stands for the set of all functions with a higher or same order of growth as g(n) (to within a constant multiple, as n goes to infinity).
•The third notation, Θ(g(n)), is the set of all functions that have the same order of growth as g(n) (to within a constant multiple, as n goes to infinity).
•Thus, every quadratic function an² + bn + c with a > 0 is in Θ(n²), but so are, among infinitely many others, n² + sin n and n² + log n.
O-notation
A function t(n) is said to be in O(g(n)),
denoted t(n) ∈ O(g(n)),
if t(n) is bounded above
by some constant multiple of g(n) for all large n,
i.e., if there exist some positive constant c and some nonnegative integer n0
such that
t(n) ≤ cg(n) for all n ≥ n0.
O-notation
•O is an asymptotic notation for the worst case, or ceiling of growth, of a given function.
•It provides us with an asymptotic upper bound on the growth rate of an algorithm’s running time.
O-notation
•O(g(n)) is the set of all functions with a lower or same order of growth as g(n) (to within a constant multiple, as n goes to infinity)
n ∈ O(n²)
100n + 5 ∈ O(n²)
½ n(n − 1) ∈ O(n²)
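As a worked check of the O-notation definition (an illustration added here, not from the slides), take t(n) = 100n + 5:
100n + 5 ≤ 100n + n = 101n ≤ 101n² for all n ≥ 5,
so the definition is satisfied with c = 101 and n0 = 5. (Alternatively, 100n + 5 ≤ 105n for all n ≥ 1 shows the stronger fact 100n + 5 ∈ O(n), with c = 105 and n0 = 1.)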
O-notation
Ω-notation
A function t(n) is said to be in Ω(g(n)),
denoted t(n) ∈ Ω(g(n)),
if t(n) is bounded below
by some positive constant multiple of g(n) for all large n,
i.e., if there exist some positive constant c and some nonnegative integer n0
such that
t(n) ≥ cg(n) for all n ≥ n0.
Ω-notation
•Ω(g(n)) stands for the set of all functions with a higher or same order of growth as g(n) (to within a constant multiple, as n goes to infinity)
n³ ∈ Ω(n²)
½ n(n − 1) ∈ Ω(n²)
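As a worked check of the Ω-notation definition (added here for illustration), n³ ≥ n² for all n ≥ 0, so n³ ∈ Ω(n²) with c = 1 and n0 = 0. Similarly, ½ n(n − 1) = ½ n² − ½ n ≥ ¼ n² for all n ≥ 2, so ½ n(n − 1) ∈ Ω(n²) with c = ¼ and n0 = 2.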
Ω-notation
Θ-notation
A function t(n) is said to be in Θ(g(n)),
denoted t(n) ∈ Θ(g(n)),
if t(n) is bounded both above and below
by some positive constant multiples of g(n) for all large n,
i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0
such that
c2g(n) ≤ t(n) ≤ c1g(n) for all n ≥ n0.
Θ-notation
•Θ(g(n)) is the set of all functions that have the same order of growth as g(n) (to within a constant multiple, as n goes to infinity).
•Thus, every quadratic function an² + bn + c with a > 0 is in Θ(n²), but so are, among infinitely many others, n² + sin n and n² + log n.
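As a worked check of the Θ-notation definition (added here for illustration), consider ½ n(n − 1). The upper bound follows from ½ n(n − 1) = ½ n² − ½ n ≤ ½ n² for all n ≥ 0, and the lower bound from ½ n² − ½ n ≥ ¼ n² for all n ≥ 2, so ½ n(n − 1) ∈ Θ(n²) with c1 = ½, c2 = ¼, and n0 = 2.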
Θ-notation
Basic Efficiency Classes
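The class table from this slide is not reproduced in the text above; as a quick illustration (my own sketch, using the standard basic efficiency classes), the following Python snippet tabulates a representative function from each class for a few values of n:

import math

# Representative functions for the standard basic efficiency classes,
# listed in increasing order of growth.
classes = [
    ("1 (constant)",           lambda n: 1),
    ("log n (logarithmic)",    lambda n: math.log2(n)),
    ("n (linear)",             lambda n: n),
    ("n log n (linearithmic)", lambda n: n * math.log2(n)),
    ("n^2 (quadratic)",        lambda n: n ** 2),
    ("n^3 (cubic)",            lambda n: n ** 3),
    ("2^n (exponential)",      lambda n: 2 ** n),
    ("n! (factorial)",         lambda n: math.factorial(n)),
]

for name, f in classes:
    row = "  ".join(f"{f(n):.3g}" for n in (10, 20, 30))
    print(f"{name:<25} {row}")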
Exercise
Classify each of the following functions as constant, linear, polynomial, or exponential:
•2ⁿ
•(3/2)ⁿ
•2n³
•3n²
•1000
•3n
True/False
1. n(n + 1)/2 ∈ O(n³)
2. n(n + 1)/2 ∈ Θ(n³)
3. n(n + 1)/2 ∈ O(n²)
4. n(n + 1)/2 ∈ Ω(n)
True/False
1. n(n + 1)/2 ∈ O(n³) True
2. n(n + 1)/2 ∈ Θ(n³) False
3. n(n + 1)/2 ∈ O(n²) True
4. n(n + 1)/2 ∈ Ω(n) True
Problems
1. n ∈ O(n²)
2. 100n + 5 ∈ O(n²)
3. ½ n(n − 1) ∈ O(n²)
4. n³ ∉ O(n²)
5. n⁴ + n + 1 ∉ O(n²)
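To see why item 4 holds (a justification added here, following the definition of O above): if n³ ≤ cn² held for all n ≥ n0, then dividing both sides by n² would give n ≤ c for all n ≥ n0, which no fixed constant c can satisfy; hence n³ ∉ O(n²). The same argument applies to item 5, since n⁴ + n + 1 ≥ n⁴.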
Rank these functions according to their
growth, from slowest growing (at the top) to
fastest growing (at the bottom)
1
n
(3/2)ⁿ
n²
n³
2ⁿ
Rank these functions according to their
growth, from slowest growing to fastest
growing
1 constant
n linear
(3/2)ⁿ exponential
n² polynomial
n³ polynomial
2ⁿ exponential
Ranked from slowest to fastest growing: 1, n, n², n³, (3/2)ⁿ, 2ⁿ
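A numerical spot-check of this ranking (my own sketch, not part of the slides): at n = 30 the values already appear in the asymptotic order, though a single sample point only illustrates the ranking rather than proving it.

n = 30
funcs = [
    ("1", 1),
    ("n", n),
    ("(3/2)^n", 1.5 ** n),
    ("n^2", n ** 2),
    ("n^3", n ** 3),
    ("2^n", 2 ** n),
]

# Sorting by value at n = 30 reproduces the order 1, n, n^2, n^3, (3/2)^n, 2^n.
for name, value in sorted(funcs, key=lambda pair: pair[1]):
    print(f"{name:<8} {value:,.0f}")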
Rank these functions according to their
growth, from slowest growing to fastest
growing
•8²ⁿ
•4n
•log₂ n
•n log₂ n
•64
•6n³
•log₈ n
•8n²
•n log₆ n
Rank these functions according to their
growth, from slowest growing to fastest
growing
•8²ⁿ exponential
•4n linear fxn
•log₂ n logarithmic fxn
•n log₂ n linearithmic fxn
•64 constant
•6n³ polynomial
•log₈ n logarithmic fxn
•8n² polynomial
•n log₆ n linearithmic fxn
Ranked from slowest to fastest growing: 64, log₈ n, log₂ n, 4n, n log₆ n, n log₂ n, 8n², 6n³, 8²ⁿ
