
Chapter 2: Analyzing algorithmic complexity
Big O notation
Dr. Charbel Rahhal
Big-O Notation:
• Big-O Notation is commonly used in computer science to analyze
the efficiency of algorithms.

• The letter O in Big O notation stands for "order," and it is used to represent the upper bound or worst case of the growth rate of a function.

• The upper bound of an algorithm represents an upper limit on the amount of resources (usually time) the algorithm will consume to solve a specific problem for any possible input of a given size.

Big-O Notation:
• In simpler terms, Big O notation provides a way to express how an algorithm's running time grows as the input size increases.

• It provides an asymptotic analysis that helps in comparing algorithms and understanding their scalability as the input size increases.

• "Asymptotic" is a term used in mathematics and computer science to


describe the behavior of a function as the input approaches a certain
value, often infinity.
[Figure: growth of the number of operations, O(g(n)), as a function of n = number of elements]
Big-O Notation:
Here are some common Big O notations:
• O(1): Constant time complexity.
• O(log n): Logarithmic time complexity.
• O(n): Linear time complexity.
• O(n log n): Linearithmic time complexity.
• O(n^2): Quadratic time complexity.
• O(2^n): Exponential time complexity.
• O(n!): Factorial time complexity.
Big-O describes the worst-case running time of a program.

[Table: running times of the growth-rate functions for sample input sizes. Note: 1 microsecond = 10^-6 sec]
Growth-Rate Functions
O(1): Constant time complexity. Regardless of the input size, the
algorithm's running time remains constant. The algorithm's
running time does not grow with the input size.
It's the most efficient in terms of time complexity, but it's not always achievable for more complex problems.
Examples include simple arithmetic operations (x = x + 1) and accessing any element in an array.

O(n): Linear time complexity. The time requirement for a linear algorithm increases directly with the size of the input. If the input size doubles, the running time also roughly doubles.
Examples include iterating through an array or list once: finding the max element, summing the elements, or searching for a value in an array.
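As an illustration (a sketch added here, not from the original slides), summing the elements of an array does a constant amount of work per element, so doubling the array doubles the work:

// Sums the elements of an array in one pass: O(n).
static int sum(int[] a) {
    int total = 0;
    for (int i = 0; i < a.length; i++) { // loop body runs n times
        total = total + a[i];            // constant work per element
    }
    return total;
}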
Growth-Rate Functions
O(log n): Logarithmic time complexity. The algorithm's running time grows logarithmically with the input size.
This means that the running time increases slowly as the input size increases.
O(log n) is considered slower-growing than O(n): logarithmic growth is typically more efficient than linear growth. This makes algorithms with logarithmic time complexity more efficient for larger datasets.
Common in algorithms that repeatedly divide the problem space in half.
Examples include binary search on sorted arrays, which divides the problem into smaller parts at each step.
Growth-Rate Functions
O(n*log n): Linearithmic time complexity. The time requirement for an n*log n algorithm increases more rapidly than that of a linear algorithm.
Common for many efficient sorting algorithms.
More efficient than quadratic time complexity but less efficient than
linear or logarithmic time complexity.

O(n^2): Quadratic time complexity. The time requirement for a quadratic algorithm increases rapidly with the size of the input. The running time of the algorithm is proportional to the square of the input size.
Common in nested loops where each iteration involves linear operations.
Less efficient than linear or logarithmic time complexity.
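For instance (an illustrative sketch, assuming a simple pair-printing task), two nested loops that each run n times execute their innermost statement n*n times:

// Prints every ordered pair of elements: the inner statement runs n*n times, so O(n^2).
static void printAllPairs(int[] a) {
    for (int i = 0; i < a.length; i++) {     // outer loop: n iterations
        for (int j = 0; j < a.length; j++) { // inner loop: n iterations each
            System.out.println(a[i] + ", " + a[j]);
        }
    }
}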
Growth-Rate Functions
O(n^3): Cubic time complexity. The runtime of the algorithm is proportional to the cube of the input size.
The time requirement for a cubic algorithm increases more rapidly with the size of the input than the time requirement for a quadratic algorithm.
This often occurs in algorithms with three nested loops or algorithms
that involve solving subproblems, each taking linear time.

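A classic case of three nested loops is naive matrix multiplication; the sketch below (added for illustration) multiplies two n x n matrices using about n^3 multiply-add steps:

// Naive n x n matrix multiplication: three nested loops, so O(n^3).
static int[][] multiply(int[][] a, int[][] b) {
    int n = a.length;
    int[][] c = new int[n][n];
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                c[i][j] += a[i][k] * b[k][j]; // executed n*n*n times
    return c;
}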
Growth-Rate Functions
O(2^n): Exponential time complexity. The runtime grows exponentially with the size of the input.
As the size of the input increases, the time requirement for an
exponential algorithm increases too rapidly to be practical.
Often found in brute-force algorithms that try all possible
solutions.
A brute force algorithm is an exhaustive search method that
systematically evaluates all possible solutions to a problem and
selects the best one. The term "brute force" implies that the
algorithm doesn't use any clever optimization techniques and
simply tries all possibilities.

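As a hypothetical brute-force example, enumerating every subset of n items generates exactly 2^n candidates, so the outer loop below runs 2^n times (the inner loop adds a further factor of n, but the exponential term dominates):

// Prints all 2^n subsets of {0, 1, ..., n-1}: exponential time, O(2^n).
// Assumes n < 63 so the bit mask fits in a long.
static void printAllSubsets(int n) {
    for (long mask = 0; mask < (1L << n); mask++) { // one bit mask per subset: 2^n of them
        StringBuilder subset = new StringBuilder("{ ");
        for (int bit = 0; bit < n; bit++) {
            if ((mask & (1L << bit)) != 0) subset.append(bit).append(' ');
        }
        System.out.println(subset.append('}'));
    }
}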
Constant Time: O(1)
• An algorithm is said to run in constant time O(1) if it requires the
same amount of time regardless of the input size.
Examples: x = x + 1;
• array: accessing any element
• fixed-size stack: push and pop methods
• fixed-size queue: enqueue and dequeue methods

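A minimal sketch (assuming a plain Java array) showing that constant-time operations do not depend on the input size:

// Both statements take the same time whether the array has 10 or 10 million elements: O(1).
static int constantTimeExamples(int[] a, int x) {
    x = x + 1;              // simple arithmetic: O(1)
    return a[a.length / 2]; // direct array access by index: O(1)
}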
Linear Time: O(n)
• An algorithm is said to run in linear time O(n) if its time execution is
directly proportional to the input size n, i.e. time grows linearly as
input size increases.

Examples:
• array: linear search, traversing, find minimum
• ArrayList: contains method // find an element if it exists
• queue: contains method

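For example, a linear search (sketched below; ArrayList's contains method works the same way, scanning elements one by one) may have to examine all n elements in the worst case:

// Linear search: in the worst case (value absent) all n elements are checked, so O(n).
static boolean contains(int[] a, int value) {
    for (int i = 0; i < a.length; i++) {
        if (a[i] == value) return true; // best case: found immediately
    }
    return false; // worst case: n comparisons
}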
Calculating Time Complexity
• If there is a single loop and the loop variable increments linearly (increases by a fixed amount each step), then it's O(n), e.g.

for(i=0; i<n; i++) //O(n)

for(i=0; i<n; i = i + 4) // still O(n)


• If the loop variable is incremented geometrically (multiplying the previous value by a fixed factor each step), then it's O(log n), e.g.
for(i=1;i<n;i = i * 2) //O(log n)
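To see why the geometric loop is O(log n), count its iterations directly; in this sketch (added for illustration) i takes the values 1, 2, 4, 8, ..., so roughly log2(n) doublings are needed to reach n:

// Counts iterations of a doubling loop: for n = 1,000,000 it returns 20, i.e. about log2(n).
static int countDoublings(int n) {
    int iterations = 0;
    for (int i = 1; i < n; i = i * 2) { // i = 1, 2, 4, 8, ...
        iterations++;
    }
    return iterations;
}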
Logarithmic Time: O(log n)

• An algorithm is said to run in logarithmic time O(log n) if its time execution is proportional to the logarithm of the input size.

Example:
Binary search
[Figure: binary search on a sorted array, showing Low, Middle, and High markers]
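A sketch of binary search (one standard formulation; the low/middle/high names match the figure): each comparison halves the remaining range, so at most about log2(n) comparisons are needed:

// Binary search on a sorted array: the range halves each step, so O(log n).
static int binarySearch(int[] sorted, int target) {
    int low = 0;
    int high = sorted.length - 1;
    while (low <= high) {
        int middle = low + (high - low) / 2;          // midpoint (written to avoid int overflow)
        if (sorted[middle] == target) return middle;  // found
        else if (sorted[middle] < target) low = middle + 1; // discard left half
        else high = middle - 1;                       // discard right half
    }
    return -1; // not present
}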
Note: Logarithm
• In mathematics, the logarithm of a number is the exponent
to which another fixed value, the base, must be raised to
produce that number.

• Exponentiation is a mathematical operation, written as b^n, involving two numbers, the base b and the exponent (or power) n.

• For example, the logarithm of 1000 to base 10 is 3, because 10 to the power 3 is 1000: 1000 = 10 × 10 × 10 = 10^3.
Quadratic Time: O(n^2)
• An algorithm is said to run in quadratic time O(n^2) if its time execution is proportional to the square of the input size.
• Examples:
• bubble sort

• selection sort

• insertion sort
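A minimal bubble sort sketch (one common formulation of the first algorithm listed above): the nested loops perform on the order of n*(n-1)/2 comparisons, hence O(n^2):

// Bubble sort: repeatedly swaps adjacent out-of-order elements; ~n^2/2 comparisons, so O(n^2).
static void bubbleSort(int[] a) {
    for (int i = 0; i < a.length - 1; i++) {         // n-1 passes over the array
        for (int j = 0; j < a.length - 1 - i; j++) { // inner loop shrinks each pass
            if (a[j] > a[j + 1]) {                   // swap neighbors that are out of order
                int tmp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
        }
    }
}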
Nested loop
If there is a nested loop where one loop is O(n) and the other is O(log n), then the overall complexity is O(n log n), as in the example below (a runnable version follows it).

for(i=0; i<n; i++) // O(n)
{
    for(j=1; j<n; j=j*3) // O(log n)
}
// Overall O(n log n)

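A runnable version of this nested loop (a sketch added here): the inner loop multiplies j by 3, so it runs about log3(n) times per outer iteration, which is still O(log n); the total count is therefore proportional to n * log n:

// Outer loop: n iterations. Inner loop: ~log3(n) iterations each. Total ~ n * log(n).
static long countNestedIterations(int n) {
    long count = 0;
    for (int i = 0; i < n; i++) {           // O(n)
        for (int j = 1; j < n; j = j * 3) { // O(log n): j = 1, 3, 9, 27, ...
            count++;
        }
    }
    return count; // grows proportionally to n * log n
}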
The Execution Time of Algorithms
• Each operation in an algorithm (or a program) has a cost.
→ Each operation takes a certain amount of time.
count = count + 1; → takes a certain amount of time, but it is constant

A sequence of operations:
count = count + 1;   Cost: c1
sum = sum + count;   Cost: c2
→ Total Cost = c1 + c2
The Execution Time of Algorithms (cont.)
Example: Simple If-Statement
                   Cost   Times
if (n < 0)         c1     1
    absval = -n;   c2     1
else
    absval = n;    c3     1
Total Cost <= c1 + max(c2, c3)
The Execution Time of Algorithms (cont.)
Example: Simple Loop
                     Cost   Times
i = 1;               c1
sum = 0;             c2
while (i <= n) {     c3
    i = i + 1;       c4
    sum = sum + i;   c5
}
Total Cost = ?
→ The time required for this algorithm is proportional to n
The Execution Time of Algorithms (cont.)
Example: Simple Loop
                     Cost   Times
i = 1;               c1     1
sum = 0;             c2     1
while (i <= n) {     c3     n+1
    i = i + 1;       c4     n
    sum = sum + i;   c5     n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
→ The time required for this algorithm is proportional to n
Note: if n = 5, the condition is tested n+1 = 6 times, because when i passes 5 the while should stop, but it still tests i = 6, and that final test has a cost.
The Execution Time of Algorithms (cont.)
Example: Nested Loop
                         Cost   Times
i = 1;                   c1
sum = 0;                 c2
while (i <= n) {         c3
    j = 1;               c4
    while (j <= n) {     c5
        sum = sum + i;   c6
        j = j + 1;       c7
    }
    i = i + 1;           c8
}
Total Cost = ?
→ The time required for this algorithm is proportional to n^2


The Execution Time of Algorithms (cont.)
Example: Nested Loop
                         Cost   Times
i = 1;                   c1     1
sum = 0;                 c2     1
while (i <= n) {         c3     n+1
    j = 1;               c4     n
    while (j <= n) {     c5     n*(n+1)
        sum = sum + i;   c6     n*n
        j = j + 1;       c7     n*n
    }
    i = i + 1;           c8     n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8
→ The time required for this algorithm is proportional to n^2


The Execution Time of Algorithms (cont.)
                     Cost   Times
i = 1;               c1
sum = 0;             c2
while (i <= n) {     c3
    i = i + 1;       c4
    sum = sum + i;   c5
}
→ So, the growth-rate function for this algorithm is ?
The Execution Time of Algorithms (cont.)
                     Cost   Times
i = 1;               c1     1
sum = 0;             c2     1
while (i <= n) {     c3     n+1
    i = i + 1;       c4     n
    sum = sum + i;   c5     n
}
T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
     = (c3+c4+c5)*n + (c1+c2+c3)
     = a*n + b
→ So, the growth-rate function for this algorithm is O(n) // linear time
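The Times column can be checked empirically; this sketch (added for illustration, not part of the slides) counts how often the loop condition and body execute, and reports counts matching n+1 and n:

// Instruments the simple loop above: the condition runs n+1 times, the body n times.
static void countSimpleLoop(int n) {
    int tests = 0, bodies = 0;
    int i = 1, sum = 0;
    while (true) {
        tests++;               // c3 is paid on every test, including the final failing one
        if (!(i <= n)) break;
        i = i + 1;             // c4
        sum = sum + i;         // c5
        bodies++;
    }
    System.out.println("tests = " + tests + " (n+1), bodies = " + bodies + " (n)");
}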
The Execution Time of Algorithms (cont.)
                         Cost   Times
i = 1;                   c1
sum = 0;                 c2
while (i <= n) {         c3
    j = 1;               c4
    while (j <= n) {     c5
        sum = sum + i;   c6
        j = j + 1;       c7
    }
    i = i + 1;           c8
}
→ So, the growth-rate function for this algorithm is ?
                         Cost   Times
i = 1;                   c1     1
sum = 0;                 c2     1
while (i <= n) {         c3     n+1
    j = 1;               c4     n
    while (j <= n) {     c5     n*(n+1)
        sum = sum + i;   c6     n*n
        j = j + 1;       c7     n*n
    }
    i = i + 1;           c8     n
}
T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8
     = (c5+c6+c7)*n^2 + (c3+c4+c5+c8)*n + (c1+c2+c3)
     = a*n^2 + b*n + c
→ So, the growth-rate function for this algorithm is O(n^2) // quadratic time
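The same instrumentation idea confirms the nested-loop counts above: the inner condition is tested n*(n+1) times in total and the inner body runs n*n times (a sketch added for illustration):

// Counts inner-loop condition tests (n*(n+1)) and inner-body executions (n*n).
static void countNestedLoop(int n) {
    long innerTests = 0, innerBodies = 0;
    int i = 1, sum = 0;
    while (i <= n) {           // outer loop runs n times
        int j = 1;
        while (true) {
            innerTests++;      // c5: tested n+1 times per outer iteration
            if (!(j <= n)) break;
            sum = sum + i;     // c6
            j = j + 1;         // c7
            innerBodies++;
        }
        i = i + 1;             // c8
    }
    System.out.println("inner tests = " + innerTests + " = n*(n+1), inner bodies = " + innerBodies + " = n*n");
}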
