ada unit 1

The document discusses asymptotic analysis, which measures algorithm efficiency independent of machine-specific constants, using mathematical tools like Big O, Omega, and Theta notations to express time and space complexities. It explains the characteristics, advantages, and disadvantages of algorithms, as well as methods for analyzing their complexity, including prior and posterior analysis. Additionally, it highlights the importance of algorithms in various fields such as computer science, mathematics, and artificial intelligence.

The main idea of asymptotic analysis is to have a measure of the efficiency of algorithms that does not depend on machine-specific constants and does not require algorithms to be implemented and the time taken by programs to be compared. Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis.

Asymptotic Notations:

 Asymptotic Notations are mathematical tools used to analyze the performance of algorithms
by understanding how their efficiency changes as the input size grows.

 These notations provide a concise way to express the behavior of an algorithm's time or
space complexity as the input size approaches infinity.

 Rather than comparing algorithms directly, asymptotic analysis focuses on understanding the
relative growth rates of algorithms' complexities.

 It enables comparisons of algorithms' efficiency by abstracting away machine-specific constants and implementation details, focusing instead on fundamental trends.

 Asymptotic analysis allows for the comparison of algorithms' space and time complexities by
examining their performance characteristics as the input size varies.

 By using asymptotic notations, such as Big O, Big Omega, and Big Theta, we can categorize
algorithms based on their worst-case, best-case, or average-case time or space complexities,
providing valuable insights into their efficiency.

There are mainly three asymptotic notations:

1. Theta Notation (Θ-notation)

2. Big-O Notation (O-notation)

3. Omega Notation (Ω-notation)

1. Theta Notation (Θ-Notation):

Theta notation encloses the function from above and below. Since it represents the upper and the
lower bound of the running time of an algorithm, it is used for analyzing the average-case
complexity of an algorithm.

Theta (Average Case): you add the running times for each possible input combination and take the average in the average case.

Let g and f be functions from the set of natural numbers to itself. The function f is said to be Θ(g) if there are constants c1, c2 > 0 and a natural number n0 such that c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0.

Theta notation
Mathematical Representation of Theta notation:

Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤ c2 * g(n) for
all n ≥ n0}

Note: Θ(g) is a set

The above expression can be read as follows: if f(n) is theta of g(n), then the value of f(n) is always between c1 * g(n) and c2 * g(n) for large values of n (n ≥ n0). The definition of theta also requires that f(n) must be non-negative for values of n greater than n0.

The execution time serves as both a lower and upper bound on the algorithm's time complexity.

It represents both the upper and lower bounds of the running time for a given input.
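For example, f(n) = 3n² + 2n is Θ(n²): choosing c1 = 3, c2 = 5, and n0 = 1 gives 3n² ≤ 3n² + 2n ≤ 5n² for all n ≥ 1, since 2n ≤ 2n² whenever n ≥ 1.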

2. Big-O Notation (O-notation):

Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it gives
the worst-case complexity of an algorithm.

 It is the most widely used notation for asymptotic analysis.

 It specifies the upper bound of a function.

 It gives the maximum time required by an algorithm, i.e., the worst-case time complexity.

 It returns the highest possible output value (big-O) for a given input.

 Big-O (Worst Case): it is defined as the condition that allows an algorithm to complete statement execution in the longest amount of time possible.

If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist a positive constant c and an n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.


The execution time serves as an upper bound on the algorithm's time complexity.

Mathematical Representation of Big-O Notation:

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }

For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n²). Note: O(n²) also covers linear time.

If we use Θ notation to represent the time complexity of Insertion sort, we have to use two
statements for best and worst cases:

 The worst-case time complexity of Insertion Sort is Θ(n²).

 The best case time complexity of Insertion Sort is Θ(n).


The Big-O notation is useful when we only have an upper bound on the time complexity of an
algorithm. Many times we easily find an upper bound by simply looking at the algorithm.

3. Omega Notation (Ω-Notation):

Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides
the best case complexity of an algorithm.

The execution time serves as a lower bound on the algorithm's time complexity.

It is defined as the condition that allows an algorithm to complete statement execution in the
shortest amount of time.

Let g and f be functions from the set of natural numbers to itself. The function f is said to be Ω(g) if there is a constant c > 0 and a natural number n0 such that c·g(n) ≤ f(n) for all n ≥ n0.

Mathematical Representation of Omega notation :

Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }

Let us consider the same Insertion sort example here. The time complexity of Insertion Sort can be
written as Ω(n), but it is not very useful information about insertion sort, as we are generally
interested in worst-case and sometimes in the average case.
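As a worked check, f(n) = 2n² + 3n is Ω(n²): taking c = 2 and n0 = 1 gives 2n² ≤ 2n² + 3n for all n ≥ 1. The same f(n) is also Ω(n), since a lower bound need not be tight.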

The word Algorithm means "A set of finite rules or instructions to be followed in calculations or other
problem-solving operations" Or "A procedure for solving a mathematical problem in a finite number
of steps that frequently involves recursive operations".

Therefore Algorithm refers to a sequence of finite steps to solve a particular problem.

Use of the Algorithms:

Algorithms play a crucial role in various fields and have many applications. Some of the key areas
where algorithms are used include:

1. Computer Science: Algorithms form the basis of computer programming and are used to
solve problems ranging from simple sorting and searching to complex tasks such as artificial
intelligence and machine learning.

2. Mathematics: Algorithms are used to solve mathematical problems, such as finding the
optimal solution to a system of linear equations or finding the shortest path in a graph.

3. Operations Research: Algorithms are used to optimize and make decisions in fields such as
transportation, logistics, and resource allocation.
4. Artificial Intelligence: Algorithms are the foundation of artificial intelligence and machine
learning, and are used to develop intelligent systems that can perform tasks such as image
recognition, natural language processing, and decision-making.

5. Data Science: Algorithms are used to analyze, process, and extract insights from large
amounts of data in fields such as marketing, finance, and healthcare.

These are just a few examples of the many applications of algorithms. The use of algorithms is
continually expanding as new technologies and fields emerge, making it a vital component of
modern society.

Algorithms can be simple and complex depending on what you want to achieve.

It can be understood by taking the example of cooking a new recipe. To cook a new recipe, one reads the instructions and steps and executes them one by one, in the given sequence. The result thus obtained is that the new dish is cooked perfectly. Every time you use your phone, computer, laptop, or calculator, you are using algorithms. Similarly, algorithms help to do a task in programming to get the expected output.

Algorithms are designed to be language-independent, i.e. they are just plain instructions that can be implemented in any language, and yet the output will be the same, as expected.

What is the need for algorithms?

1. Algorithms are necessary for solving complex problems efficiently and effectively.

2. They help to automate processes and make them more reliable, faster, and easier to
perform.

3. Algorithms also enable computers to perform tasks that would be difficult or impossible for
humans to do manually.

4. They are used in various fields such as mathematics, computer science, engineering, finance,
and many others to optimize processes, analyze data, make predictions, and provide
solutions to problems.

What are the Characteristics of an Algorithm?

Just as one would not follow arbitrary written instructions to cook a recipe, but only the standard one, not all written instructions for programming are an algorithm. For some instructions to be an algorithm, they must have the following characteristics:

 Language Independent: The algorithm designed must be language-independent, i.e. it must be just plain instructions that can be implemented in any language, and yet the output will be the same, as expected.

 Input: An algorithm has zero or more inputs. Every instruction that contains a fundamental operator must accept zero or more inputs.

 Output: An algorithm produces at least one output. Every instruction that contains a fundamental operator must produce an output.

 Definiteness: All instructions in an algorithm must be unambiguous, precise, and easy to interpret. By referring to any of the instructions in an algorithm, one can clearly understand what is to be done. Every fundamental operator in an instruction must be defined without any ambiguity.

 Finiteness: An algorithm must terminate after a finite number of steps in all test cases. Every
instruction which contains a fundamental operator must be terminated within a finite
amount of time. Infinite loops or recursive functions without base conditions do not possess
finiteness.

 Effectiveness: An algorithm must be developed by using very basic, simple, and feasible
operations so that one can trace it out by using just paper and pencil.

Properties of Algorithm:

 It should terminate after a finite time.

 It should produce at least one output.

 It should take zero or more input.

 It should be deterministic, meaning it gives the same output for the same input case.

 Every step in the algorithm must be effective i.e. every step should do some work.

Advantages of Algorithms:

 It is easy to understand.

 An algorithm is a step-wise representation of a solution to a given problem.

 In an Algorithm the problem is broken down into smaller pieces or steps hence, it is easier
for the programmer to convert it into an actual program.

Disadvantages of Algorithms:

 Writing an algorithm takes a long time so it is time-consuming.

 Understanding complex logic through algorithms can be very difficult.

 Branching and looping statements are difficult to show in algorithms.

How to analyze an Algorithm?

For a standard algorithm to be good, it must be efficient. Hence the efficiency of an algorithm must
be checked and maintained. It can be in two stages:

1. Priori Analysis:

"Priori" means "before". Hence Priori analysis means checking the algorithm before its
implementation. In this, the algorithm is checked when it is written in the form of theoretical steps.
This Efficiency of an algorithm is measured by assuming that all other factors, for example, processor
speed, are constant and have no effect on the implementation. This is done usually by the algorithm
designer. This analysis is independent of the type of hardware and language of the compiler. It gives
the approximate answers for the complexity of the program.

2. Posterior Analysis:

"Posterior" means "after". Hence Posterior analysis means checking the algorithm after its
implementation. In this, the algorithm is checked by implementing it in any programming language
and executing it. This analysis helps to get the actual and real analysis report about correctness(for
every possible input/s if it shows/returns correct output or not), space required, time consumed, etc.
That is, it is dependent on the language of the compiler and the type of hardware used.

What is Algorithm complexity and how to find it?

An algorithm is defined as complex based on the amount of Space and Time it consumes. Hence the
Complexity of an algorithm refers to the measure of the time that it will need to execute and get the
expected output, and the Space it will need to store all the data (input, temporary data, and output).
Hence these two factors define the efficiency of an algorithm.
The two factors of Algorithm Complexity are:

 Time Factor: Time is measured by counting the number of key operations such as
comparisons in the sorting algorithm.

 Space Factor: Space is measured by counting the maximum memory space required by the
algorithm to run/execute.

Therefore the complexity of an algorithm can be divided into two types:

1. Space Complexity: The space complexity of an algorithm refers to the amount of memory required
by the algorithm to store the variables and get the result. This can be for inputs, temporary
operations, or outputs.

How to calculate Space Complexity?


The space complexity of an algorithm is calculated by determining the following 2 components:

 Fixed Part: This refers to the space that is required by the algorithm. For example, input
variables, output variables, program size, etc.

 Variable Part: This refers to the space that can be different based on the implementation of
the algorithm. For example, temporary variables, dynamic memory allocation, recursion
stack space, etc.
Therefore the space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part and SP(I) is the variable part of the algorithm, which depends on instance characteristic I.

Example: Consider the below algorithm for Linear Search

Step 1: START
Step 2: Get n elements of the array in arr and the number to be searched in x
Step 3: Start from the leftmost element of arr[] and one by one compare x with each element of arr[]
Step 4: If x matches with an element, Print True.
Step 5: If x doesn’t match with any of the elements, Print False.
Step 6: END
Here, there are two variables, arr[] and x, where arr[] is the variable part of n elements and x is the fixed part. Hence S(P) = 1 + n. So, the space complexity depends on n (the number of elements). The space also depends on the data types of the given variables and constants, and it will be multiplied accordingly.

2. Time Complexity: The time complexity of an algorithm refers to the amount of time required by
the algorithm to execute and get the result. This can be for normal operations, conditional if-else
statements, loop statements, etc.

How to Calculate Time Complexity?


The time complexity of an algorithm is also calculated by determining the following 2 components:

 Constant time part: Any instruction that is executed just once comes in this part. For
example, input, output, if-else, switch, arithmetic operations, etc.

 Variable Time Part: Any instruction that is executed more than once, say n times, comes in
this part. For example, loops, recursion, etc.
Therefore the time complexity T(P) of any algorithm P is T(P) = C + TP(I), where C is the constant time part and TP(I) is the variable part of the algorithm, which depends on the instance characteristic I.

Example: In the algorithm of Linear Search above, the time complexity is calculated as follows:

Step 1: --Constant Time


Step 2: -- Variable Time (Taking n inputs)
Step 3: --Variable Time (Till the length of the Array (n) or the index of the found element)
Step 4: --Constant Time
Step 5: --Constant Time
Step 6: --Constant Time
Hence, T(P) = 1 + n + n(1 + 1) + 1 = 3n + 2, which can be written as T(n) = O(n).

How to express an Algorithm?

1. Natural Language:- Here we express the Algorithm in the natural English language. It is too
hard to understand the algorithm from it.

2. Flowchart:- Here we express the Algorithm by making a graphical/pictorial representation of it. It is easier to understand than Natural Language.

3. Pseudo Code:- Here we express the Algorithm in the form of annotations and informative
text written in plain English which is very much similar to the real code but as it has no syntax
like any of the programming languages, it can’t be compiled or interpreted by the computer.
It is the best way to express an algorithm because it can be understood by even a layman
with some school-level knowledge.

Heap Sort is an efficient sorting technique based on the heap data structure.

The heap is a nearly-complete binary tree in which each parent node holds either the minimum or the maximum of its subtree. A heap whose root holds the minimum value is called a min-heap, and a heap whose root holds the maximum value is called a max-heap. The elements in the input data of the heap sort algorithm are processed using these two methods.

The heap sort algorithm follows two main operations in this procedure −

 Builds a heap H from the input data using the heapify (explained further into the chapter)
method, based on the way of sorting ascending order or descending order.
 Deletes the root element of the heap and repeats until all the input elements are processed.

The heap sort algorithm heavily depends upon the heapify method of the binary tree. So what is this
heapify method?

Heapify Method

The heapify method of a binary tree converts the tree into a heap data structure. This method uses a recursive approach to heapify all the nodes of the binary tree.

Note − The binary tree must always be a complete binary tree, i.e., every level is fully filled except possibly the last, which is filled from left to right.

The complete binary tree will be converted into either a max-heap or a min-heap by applying
the heapify method.

Heap Sort Algorithm

As described in the algorithm below, the sorting algorithm first constructs the heap ADT by calling the Build-Max-Heap algorithm, then repeatedly removes the root element by swapping it with the last leaf node and shrinking the heap. The heapify method is then applied to rearrange the remaining elements accordingly.

Algorithm: Heapsort(A)

BUILD-MAX-HEAP(A)
for i = A.length downto 2
    exchange A[1] with A[i]
    A.heap-size = A.heap-size - 1
    MAX-HEAPIFY(A, 1)
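For reference, here is a minimal runnable Python sketch of the same procedure (0-based indexing instead of the 1-based pseudocode; the function names max_heapify and heap_sort are illustrative):

```python
def max_heapify(a, heap_size, i):
    """Sift a[i] down until the subtree rooted at i is a max-heap."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < heap_size and a[left] > a[largest]:
        largest = left
    if right < heap_size and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, heap_size, largest)


def heap_sort(a):
    n = len(a)
    # BUILD-MAX-HEAP: heapify every internal node, bottom-up
    for i in range(n // 2 - 1, -1, -1):
        max_heapify(a, n, i)
    # Repeatedly swap the max (root) with the last heap element and shrink the heap
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        max_heapify(a, end, 0)


data = [12, 3, 9, 14, 10, 18, 8, 23]  # the example array used below
heap_sort(data)
print(data)  # [3, 8, 9, 10, 12, 14, 18, 23]
```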

Analysis

The heap sort algorithm combines the better attributes of two other sorting algorithms: insertion sort and merge sort.

The similarities with insertion sort include that only a constant number of array elements are stored
outside the input array at any time.

The time complexity of the heap sort algorithm is O(n log n), similar to merge sort.

Example

Let us look at an example array to understand the sort algorithm better −

12 3 9 14 10 18 8 23

Building a heap using the BUILD-MAX-HEAP algorithm from the input array −
Rearrange the obtained binary tree by exchanging the nodes such that a heap data structure is
formed.

The Heapify Algorithm

Applying the heapify method, remove the root node from the heap and replace it with the next
immediate maximum valued child of the root.

The root node is 23, so 23 is popped and 18 is made the next root because it is the next maximum
node in the heap.

Now, 18 is popped after 23 which is replaced by 14.

The current root 14 is popped from the heap and is replaced by 12.

12 is popped and replaced with 10.


Similarly all the other elements are popped using the same process.

Here the current root element 9 is popped, and the elements 8 and 3 remain in the tree.

Then, 8 will be popped leaving 3 in the tree.

After completing the heap sort operation on the given heap, the sorted elements are displayed as
shown below −

Every time an element is popped, it is added at the beginning of the output array, since the heap data structure formed is a max-heap. But if the heapify method converts the binary tree to a min-heap, the popped elements are added at the end of the output array.

The final sorted list is,

3 8 9 10 12 14 18 23

Using the divide and conquer approach, the problem in hand is divided into smaller sub-problems, and then each sub-problem is solved independently. When we keep dividing the sub-problems into even smaller sub-problems, we eventually reach a stage where no more division is possible. Those smallest possible sub-problems are solved directly, since they take less time to compute. The solutions of all sub-problems are finally merged in order to obtain the solution of the original problem.
Broadly, we can understand the divide-and-conquer approach as a three-step process.

Divide/Break

This step involves breaking the problem into smaller sub-problems. Sub-problems should represent a
part of the original problem. This step generally takes a recursive approach to divide the problem
until no sub-problem is further divisible. At this stage, sub-problems become atomic in size but still
represent some part of the actual problem.

Conquer/Solve

This step receives a lot of smaller sub-problems to be solved. Generally, at this level, the problems
are considered 'solved' on their own.

Merge/Combine

When the smaller sub-problems are solved, this stage recursively combines them until they formulate a solution of the original problem. This algorithmic approach works recursively, and the conquer and merge steps work so closely together that they appear as one.

Arrays as Input

There are various ways in which algorithms can take input such that they can be solved using the divide and conquer technique. Arrays are one of them. In algorithms that require input in the form of a list, like various sorting algorithms, array data structures are most commonly used.

In the input for a sorting algorithm below, the array input is divided into subproblems until they
cannot be divided further.

Then, the subproblems are sorted (the conquer step) and are merged to form the solution of the
original array back (the combine step).

Since arrays are indexed and linear data structures, sorting algorithms most popularly use array data
structures to receive input.

Pros and cons of Divide and Conquer Approach


Divide and conquer approach supports parallelism as sub-problems are independent. Hence, an
algorithm, which is designed using this technique, can run on the multiprocessor system or in
different machines simultaneously.

In this approach, most of the algorithms are designed using recursion, hence memory usage is high. For recursion, a function stack is used, where the function state needs to be stored.

1. Divide:

 Break down the original problem into smaller subproblems.

 Each subproblem should represent a part of the overall problem.

 The goal is to divide the problem until no further division is possible.

In Merge Sort, we divide the input array in two halves. Please note that the divide step of Merge Sort
is simple, but in Quick Sort, the divide step is critical. In Quick Sort, we partition the array around a
pivot.

2. Conquer:

 Solve each of the smaller subproblems individually.

 If a subproblem is small enough (often referred to as the “base case”), we solve it directly
without further recursion.

 The goal is to find solutions for these subproblems independently.

In Merge Sort, the conquer step is to sort the two halves individually.

3. Merge:

 Combine the sub-problems to get the final solution of the whole problem.

 Once the smaller subproblems are solved, we recursively combine their solutions to get the
solution of larger problem.

 The goal is to formulate a solution for the original problem by merging the results from the
subproblems.

In Merge Sort, the merge step is to merge two sorted halves to create one sorted array. Please note that the merge step of Merge Sort is critical, but in Quick Sort, the merge step does not do anything, as both parts become sorted in place and the left part has all elements smaller than (or equal to) the right part.

Characteristics of Divide and Conquer Algorithm

Divide and Conquer Algorithm involves breaking down a problem into smaller, more manageable
parts, solving each part individually, and then combining the solutions to solve the original problem.
The characteristics of Divide and Conquer Algorithm are:

 Dividing the Problem: The first step is to break the problem into smaller, more manageable
subproblems. This division can be done recursively until the subproblems become simple
enough to solve directly.

 Independence of Subproblems: Each subproblem should be independent of the others,


meaning that solving one subproblem does not depend on the solution of another. This
allows for parallel processing or concurrent execution of subproblems, which can lead to
efficiency gains.

 Conquering Each Subproblem: Once divided, the subproblems are solved individually. This
may involve applying the same divide and conquer approach recursively until the
subproblems become simple enough to solve directly, or it may involve applying a different
algorithm or technique.

 Combining Solutions: After solving the subproblems, their solutions are combined to obtain
the solution to the original problem. This combination step should be relatively efficient and
straightforward, as the solutions to the subproblems should be designed to fit together
seamlessly.

Binary search is a fast search algorithm with a run-time complexity of O(log n). This search algorithm works on the principle of divide and conquer, since it divides the array in half before searching. For this algorithm to work properly, the data collection should be in sorted form.

Binary search looks for a particular key value by comparing the middle-most item of the collection. If a match occurs, then the index of the item is returned. But if the middle item has a value greater than the key value, the left sub-array of the middle item is searched; otherwise, the right sub-array is searched. This process continues recursively until the size of a subarray reduces to zero.

Binary Search Algorithm

Binary Search algorithm is an interval searching method that performs the searching in intervals only.
The input taken by the binary search algorithm must always be in a sorted array since it divides the
array into subarrays based on the greater or lower values. The algorithm follows the procedure
below −

Step 1 − Select the middle item in the array and compare it with the key value to be searched. If it is
matched, return the position of the median.

Step 2 − If it does not match the key value, check if the key value is either greater than or less than
the median value.

Step 3 − If the key is greater, perform the search in the right sub-array; but if the key is lower than
the median value, perform the search in the left sub-array.

Step 4 − Repeat Steps 1, 2 and 3 iteratively, until the key is found or the size of the sub-array reduces to zero.

Step 5 − If the key value does not exist in the array, then the algorithm returns an unsuccessful
search.

Pseudocode

The pseudocode of binary search algorithms should look like this −

Procedure binary_search
A ← sorted array

n ← size of array

x ← value to be searched

Set lowerBound = 1

Set upperBound = n

while x not found

if upperBound < lowerBound

EXIT: x does not exist.

set midPoint = lowerBound + ( upperBound - lowerBound ) / 2

if A[midPoint] < x

set lowerBound = midPoint + 1

if A[midPoint] > x

set upperBound = midPoint - 1

if A[midPoint] = x

EXIT: x found at location midPoint

end while

end procedure

Analysis

Since the binary search algorithm performs searching iteratively, calculating the time complexity is
not as easy as the linear search algorithm.

The input array is searched iteratively by dividing into multiple sub-arrays after every unsuccessful
iteration. Therefore, the recurrence relation formed would be of a dividing function.

To explain it in simpler terms,

 During the first iteration, the element is searched in the entire array. Therefore, length of the
array = n.
 In the second iteration, only half of the original array is searched. Hence, length of the array
= n/2.

 In the third iteration, half of the previous sub-array is searched. Here, length of the array will
be = n/4.

 Similarly, in the ith iteration, the length of the array will become n/2^i.

To achieve a successful search, after the last iteration the length of the array must be 1. Hence,

n/2^i = 1

That gives us −

n = 2^i

Applying log on both sides,

log n = log 2^i

log n = i · log 2

i = log n

The time complexity of the binary search algorithm is O(log n)

Example

For a binary search to work, it is mandatory for the target array to be sorted. We shall learn the
process of binary search with a pictorial example. The following is our sorted array and let us assume
that we need to search the location of value 31 using binary search.

First, we shall determine half of the array by using this formula −

mid = low + (high - low) / 2

Here it is, 0 + (9 - 0) / 2 = 4 (integer value of 4.5). So, 4 is the mid of the array.

Now we compare the value stored at location 4 with the value being searched, i.e. 31. We find that the value at location 4 is 27, which is not a match. As the target value is greater than 27 and we have a sorted array, we also know that the target value must be in the upper portion of the array.

We change our low to mid + 1 and find the new mid value again.

low = mid + 1

mid = low + (high - low) / 2


Our new mid is 7 now. We compare the value stored at location 7 with our target value 31.

The value stored at location 7 is not a match; rather, it is greater than what we are looking for. So, the value must be in the lower part from this location.

Hence, we calculate the mid again. This time it is 5.

We compare the value stored at location 5 with our target value. We find that it is a match.

We conclude that the target value 31 is stored at location 5.

Binary search halves the searchable items and thus reduces the count of comparisons to be made to a very small number.

Binary Search Algorithm

Iteration Method

do until the pointers low and high meet each other.

mid = (low + high)/2

if (x == arr[mid])

return mid

else if (x > arr[mid]) // x is on the right side

low = mid + 1

else // x is on the left side

high = mid - 1

Recursive Method

binarySearch(arr, x, low, high)

if low > high

return False
else

mid = (low + high) / 2

if x == arr[mid]

return mid

else if x > arr[mid] // x is on the right side

return binarySearch(arr, x, mid + 1, high)

else // x is on the left side

return binarySearch(arr, x, low, mid - 1)
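Both methods as runnable Python (a sketch: 0-based indexing, returning -1 when x is absent; the example array is an assumption consistent with the walkthrough above, since the original figure is not reproduced):

```python
def binary_search_iterative(arr, x):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = low + (high - low) // 2  # avoids overflow in fixed-width languages
        if arr[mid] == x:
            return mid
        elif arr[mid] < x:             # x is on the right side
            low = mid + 1
        else:                          # x is on the left side
            high = mid - 1
    return -1


def binary_search_recursive(arr, x, low, high):
    if low > high:
        return -1
    mid = low + (high - low) // 2
    if arr[mid] == x:
        return mid
    if arr[mid] < x:                   # x is on the right side
        return binary_search_recursive(arr, x, mid + 1, high)
    return binary_search_recursive(arr, x, low, mid - 1)


# Assumed example array: 27 sits at location 4 and 31 at location 5,
# matching the walkthrough above.
arr = [10, 14, 19, 26, 27, 31, 33, 35, 42, 44]
print(binary_search_iterative(arr, 31))                   # 5
print(binary_search_recursive(arr, 31, 0, len(arr) - 1))  # 5
```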

Recurrence Relation of Merge Sort

The recurrence relation of merge sort is:


T(n) = Θ(1)             if n = 1
T(n) = 2T(n/2) + Θ(n)   if n > 1

 T(n) represents the total time taken by the algorithm to sort an array of size n.

 2T(n/2) represents time taken by the algorithm to recursively sort the two halves of the
array. Since each half has n/2 elements, we have two recursive calls with input size as (n/2).

 O(n) represents the time taken to merge the two sorted halves

 Merge sort is a sorting technique based on the divide and conquer technique. With worst-case time complexity being O(n log n), it is one of the most used algorithms.
 Merge sort first divides the array into equal halves and then
combines them in a sorted manner.
 How Merge Sort Works?
 To understand merge sort, we take an unsorted array as the
following −


 We know that merge sort first divides the whole array iteratively
into equal halves unless the atomic values are achieved. We see
here that an array of 8 items is divided into two arrays of size 4.


 This does not change the sequence of appearance of items in the
original. Now we divide these two arrays into halves.


 We further divide these arrays and we achieve atomic value which
can no more be divided.

 Now, we combine them in exactly the same manner as they were broken down. Please note the color codes given to these lists.
 We first compare the element for each list and then combine them
into another list in a sorted manner. We see that 14 and 33 are in
sorted positions. We compare 27 and 10 and in the target list of 2
values we put 10 first, followed by 27. We change the order of 19
and 35 whereas 42 and 44 are placed sequentially.


 In the next iteration of the combining phase, we compare lists of two data values and merge them into a list of four data values, placing all in sorted order.


 After the final merging, the list becomes sorted and is considered
the final solution.



 Merge Sort Algorithm


 Merge sort keeps on dividing the list into equal halves until it can no
more be divided. By definition, if it is only one element in the list, it
is considered sorted. Then, merge sort combines the smaller sorted
lists keeping the new list sorted too.
 Step 1: If there is only one element in the list, consider it already sorted, so return.

 Step 2: Divide the list recursively into two halves until it can no more be divided.

 Step 3: Merge the smaller lists into a new list in sorted order.
 Analysis

 Let us consider the running time of Merge Sort as T(n). Hence,

T(n) = c               if n ≤ 1
T(n) = 2T(n/2) + d·n   otherwise

where c and d are constants.

 Therefore, using this recurrence relation,

T(n) = 2^i T(n/2^i) + i·d·n

 As i = log n, T(n) = 2^(log n) T(n/2^(log n)) + log n · d · n = c·n + d·n·log n

 Therefore, T(n) = O(n log n).
 Example
MERGE_SORT(arr, left, right):

if left < right:

mid = (left + right) // 2

MERGE_SORT(arr, left, mid)

MERGE_SORT(arr, mid + 1, right)

MERGE(arr, left, mid, right)

MERGE(arr, left, mid, right):

Create two temporary arrays: L = arr[left…mid], R = arr[mid+1…right]

Initialize i = 0, j = 0 (indexes for L and R) and k = left (write index into arr)

while i < length(L) and j < length(R):

if L[i] <= R[j]:

arr[k] = L[i]

i += 1

else:

arr[k] = R[j]

j += 1

k += 1

Copy remaining elements from L (if any)

Copy remaining elements from R (if any)
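The pseudocode above translates into the following runnable Python sketch (in place on arr between indices left and right; note that k starts at left, not 0):

```python
def merge_sort(arr, left, right):
    if left < right:
        mid = (left + right) // 2
        merge_sort(arr, left, mid)
        merge_sort(arr, mid + 1, right)
        merge(arr, left, mid, right)


def merge(arr, left, mid, right):
    L = arr[left:mid + 1]              # temporary copy of the left half
    R = arr[mid + 1:right + 1]         # temporary copy of the right half
    i = j = 0
    k = left                           # write position in the original array
    while i < len(L) and j < len(R):
        if L[i] <= R[j]:
            arr[k] = L[i]
            i += 1
        else:
            arr[k] = R[j]
            j += 1
        k += 1
    arr[k:k + len(L) - i] = L[i:]      # copy remaining elements from L (if any)
    k += len(L) - i
    arr[k:k + len(R) - j] = R[j:]      # copy remaining elements from R (if any)


data = [38, 27, 43, 3, 9, 82, 10]
merge_sort(data, 0, len(data) - 1)
print(data)  # [3, 9, 10, 27, 38, 43, 82]
```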

Quick sort is a sorting technique that has the ability to break a massive data array into smaller ones in order to save time. In the quick sort algorithm, an extensive array is divided into two arrays: one holds values smaller than the specified value, say pivot, and the other holds values greater than the pivot.

Working of Quick Sort Algorithm


Now, it’s time to see the working of a quick sort algorithm, and for that, we need to take an unsorted
array.

Let the elements of the array be as shown in the original figure (not reproduced here) −

Here,

L= Left

R = Right

P = Pivot

In the given series of arrays, let’s assume that the leftmost item is the pivot. So, in this condition, a[L]
= 23, a[R] = 26 and a[P] = 23.

Since, at this moment, the pivot item is at the left, the algorithm starts from the right and travels towards the left.

Now, a[P] < a[R], so the algorithm travels forward one position towards left, i.e. –

Now, a[L] = 23, a[R] = 18, and a[P] = 23.

Since a[P] > a[R], the algorithm will exchange or swap a[P] with a[R], and the pivot travels to the right, as –

Now, a[L] = 18, a[R] = 23, and a[P] = 23. Since the pivot is at the right, the algorithm begins from the left and travels to the right.

As a[P] > a[L], the algorithm travels one place to the right, as –

Now, a[L] = 8, a[R] = 23, and a[P] = 23. As a[P] > a[L], the algorithm travels one place to the right, as –

Now, a[L] = 28, a[R] = 23, and a[P] = 23. As a[P] < a[L], swap a[P] and a[L]; now the pivot is at the left, i.e.

Since the pivot is placed at the leftmost side, the algorithm begins from the right and travels to the left. Now, a[L] = 23, a[R] = 28, and a[P] = 23. As a[P] < a[R], the algorithm travels one place to the left, as –

Now, a[P] = 23, a[L] = 23, and a[R] = 13. As a[P] > a[R], exchange a[P] and a[R]; now the pivot is at the right, i.e. –

Now, a[P] = 23, a[L] = 13, and a[R] = 23. Pivot is at right, so the algorithm begins from left and travels
to right.

Now, a[P] = 23, a[L] = 23, and a[R] = 23. So, pivot, left and right, are pointing to the same element. It
represents the termination of the procedure.

Item 23, which is the pivot element, stands at its correct position.
Items that are on the right side of element 23 are greater than it, and the elements that are on the
left side of element 23 are smaller than it.

Now, in a similar manner, the quick sort algorithm is separately applied to the left and right sub-
arrays. After sorting gets done, the array will be –

Quick Sort Algorithm

Algorithm:

QUICKSORT (array A, begin, finish)

if (begin < finish)
{
    p = partition(A, begin, finish)
    QUICKSORT (A, begin, p - 1)
    QUICKSORT (A, p + 1, finish)
}

Partition Algorithm:

The partition algorithm rearranges the sub-array in place.

PARTITION (array A, begin, finish)

pivot ← A[finish]
i ← begin - 1
for j ← begin to finish - 1
{
    if (A[j] < pivot)
    {
        i ← i + 1
        swap A[i] with A[j]
    }
}
swap A[i+1] with A[finish]
return i + 1
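The same algorithm as a runnable Python sketch (Lomuto partition with the last element as pivot, mirroring the pseudocode; the test array is illustrative):

```python
def quicksort(a, begin, finish):
    if begin < finish:
        p = partition(a, begin, finish)
        quicksort(a, begin, p - 1)
        quicksort(a, p + 1, finish)


def partition(a, begin, finish):
    pivot = a[finish]                          # last element as pivot
    i = begin - 1                              # boundary of the "< pivot" region
    for j in range(begin, finish):
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[finish] = a[finish], a[i + 1]  # place pivot in its final slot
    return i + 1


data = [10, 80, 30, 90, 40, 50, 70]
quicksort(data, 0, len(data) - 1)
print(data)  # [10, 30, 40, 50, 70, 80, 90]
```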

Quicksort Time Complexity

Case Time Complexity

Best Case O(n log n)

Average Case O(n log n)

Worst Case O(n²)

Quicksort Space Complexity

Space Complexity O(log n) (recursion stack, on average; O(n) in the worst case)

Stable NO

Strassen's Matrix Multiplication is the divide and conquer approach to solve matrix multiplication problems. The usual matrix multiplication method multiplies each row with each column to achieve the product matrix. The time complexity of this approach is O(n³), since it takes three nested loops to multiply. Strassen's method was introduced to reduce the time complexity from O(n³) to O(n^log₂ 7).

Naive Method

First, we will discuss the Naive method and its complexity. Here, we are calculating Z = X × Y. Using the Naive method, two matrices (X and Y) can be multiplied if the orders of these matrices are p × q and q × r, and the resultant matrix will be of order p × r. The following pseudocode describes the Naive multiplication −

Algorithm: Matrix-Multiplication (X, Y, Z)

for i = 1 to p do
    for j = 1 to r do
        Z[i,j] := 0
        for k = 1 to q do
            Z[i,j] := Z[i,j] + X[i,k] × Y[k,j]
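The same triple loop as a runnable Python sketch (matrix dimensions are taken from the input lists; the example matrices are illustrative):

```python
def matrix_multiply(X, Y):
    p, q, r = len(X), len(Y), len(Y[0])  # X is p x q, Y is q x r
    Z = [[0] * r for _ in range(p)]      # result is p x r
    for i in range(p):
        for j in range(r):
            for k in range(q):
                Z[i][j] += X[i][k] * Y[k][j]
    return Z


print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```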


Complexity

Here, we assume that integer operations take O(1) time. There are three for loops in this algorithm, nested within one another. Hence, the algorithm takes O(n³) time to execute.

Strassen's Matrix Multiplication Algorithm

In this context, using Strassen's matrix multiplication algorithm, the time consumption can be improved a little bit.

Strassen's matrix multiplication can be performed only on square matrices where n is a power of 2. The order of both of the matrices is n × n.

Divide X, Y and Z into four (n/2)×(n/2) matrices as represented below −

Z = [[I, J], [K, L]], X = [[A, B], [C, D]] and Y = [[E, F], [G, H]]

Using Strassen's algorithm, compute the following −

M1 := (A + C) × (E + F)

M2 := (B + D) × (G + H)

M3 := (A − D) × (E + H)

M4 := A × (F − H)

M5 := (C + D) × E

M6 := (A + B) × H

M7 := D × (G − E)

Then,

I := M2 + M3 − M6 − M7

J := M4 + M6

K := M5 + M7

L := M1 − M3 − M4 − M5

Analysis

T(n) = c                if n = 1
T(n) = 7T(n/2) + d·n²   otherwise

where c and d are constants.

Using this recurrence relation, we get T(n) = O(n^log₂ 7).

Hence, the complexity of Strassen's matrix multiplication algorithm is O(n^log₂ 7) ≈ O(n^2.81).

Pseudocode of Strassen’s multiplication

 Divide matrix A and matrix B into 4 sub-matrices of size N/2 x N/2.

 Calculate the 7 matrix multiplications recursively.

 Compute the submatrices of C.


 Combine these submatrices into our new matrix C
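Below is a minimal runnable Python sketch of these steps, using the M1–M7 formulas given earlier (it assumes the inputs are n × n with n a power of 2; the helper names add, sub and strassen are illustrative):

```python
def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]


def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]


def strassen(X, Y):
    n = len(X)
    if n == 1:                           # base case: 1 x 1 matrices
        return [[X[0][0] * Y[0][0]]]
    h = n // 2
    # Divide each matrix into four (n/2) x (n/2) submatrices
    A = [row[:h] for row in X[:h]]; B = [row[h:] for row in X[:h]]
    C = [row[:h] for row in X[h:]]; D = [row[h:] for row in X[h:]]
    E = [row[:h] for row in Y[:h]]; F = [row[h:] for row in Y[:h]]
    G = [row[:h] for row in Y[h:]]; H = [row[h:] for row in Y[h:]]
    # The 7 recursive multiplications
    M1 = strassen(add(A, C), add(E, F))
    M2 = strassen(add(B, D), add(G, H))
    M3 = strassen(sub(A, D), add(E, H))
    M4 = strassen(A, sub(F, H))
    M5 = strassen(add(C, D), E)
    M6 = strassen(add(A, B), H)
    M7 = strassen(D, sub(G, E))
    # Compute the submatrices of Z
    I = sub(sub(add(M2, M3), M6), M7)
    J = add(M4, M6)
    K = add(M5, M7)
    L = sub(sub(sub(M1, M3), M4), M5)
    # Combine the four quadrants into the result matrix
    top = [ri + rj for ri, rj in zip(I, J)]
    bottom = [rk + rl for rk, rl in zip(K, L)]
    return top + bottom


print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```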

Complexity:

 Worst case time complexity: Θ(n^2.8074)

 Best case time complexity: Θ(1)

 Space complexity: Θ(log n)

What is Strassen’s Matrix Multiplication Algorithm

Here's how Strassen's Matrix Multiplication Algorithm works:

 Divide: Take the two matrices you want to multiply, let's call them A and B. Split them into
four smaller matrices, each about half the size of the original matrices.

 Calculate: Use these smaller matrices to calculate seven special values, which we'll call P1,
P2, P3, P4, P5, P6, and P7. You do this by doing some simple additions and subtractions of
the smaller matrices.

 Combine: Take these seven values and use them to compute the final result matrix, which
we'll call C. You calculate the values of C using the values of P1 to P7.

This method may sound a bit more complicated, but it's faster for really big matrices because it
reduces the number of multiplications you need to do, even though it involves more additions and
subtractions. For smaller matrices, the regular multiplication is faster, but for huge matrices,
Strassen's method can save a lot of time.

🔍 Upper Bound of Quick Sort with Example

Let’s understand the upper bound of the Quick Sort algorithm in simple words, along with an
example.

🧠 What is an Upper Bound?

In algorithms, upper bound means the maximum time an algorithm will take to solve a problem —
worst-case time complexity.

📚 Quick Sort Overview

Quick Sort works using Divide and Conquer.


Steps:

1. Choose a pivot element.

2. Partition the array such that:

o All smaller elements go to the left of the pivot

o All larger elements go to the right

3. Recursively apply Quick Sort to the left and right parts.


⛔ When is Quick Sort Slow?

Quick Sort is fastest when the pivot divides the array evenly, but it becomes slow when the pivot
divides the array very unevenly — this happens in the worst case.

🧨 Worst Case (Upper Bound)

 Worst case happens when the pivot is always the smallest or largest element.

 This leads to one part having n – 1 elements and the other having 0 elements — which is
very unbalanced.

🔺 Time Complexity (Upper Bound):

Worst-case time complexity: **O(n²)**

This is because:

 In the first call, we do n comparisons

 In the next, n – 1, then n – 2, and so on...

 So total = n + (n–1) + (n–2) + ... + 1 = n(n + 1)/2 = O(n²)

📌 Example of Worst Case

Let’s take a sorted array as input:

Input: [1, 2, 3, 4, 5]

Let’s say we always choose the last element as the pivot:

Step-by-step:

1. Pivot = 5 → All elements < 5 → placed on left


Recursive call on: [1, 2, 3, 4]

2. Pivot = 4 → All elements < 4 → left


Recursive call on: [1, 2, 3]

3. Pivot = 3 → All elements < 3 → left


Recursive call on: [1, 2]

4. Pivot = 2 → left: [1]

5. Base case reached.

Each step had to compare all elements:

 First step: 4 comparisons

 Second step: 3 comparisons

 Third step: 2 comparisons


 Fourth step: 1 comparison
Total: 4 + 3 + 2 + 1 = 10 comparisons → O(n²)

✅ Summary

Case Time Complexity

Best O(n log n)

Average O(n log n)

Worst (Upper Bound) ❗ O(n²)

The upper bound of Quick Sort is O(n²), and it happens when the pivot divides the array in the worst
possible way — completely unbalanced.


🔍 Why Divide-and-Conquer is Useful for Merge Sort — Explained with Example

📘 What is Divide and Conquer?

Divide and Conquer is a powerful strategy in computer science where a problem is broken into
smaller sub-problems, solved independently, and then combined to get the final answer.

It follows 3 main steps:

1. Divide the problem into smaller sub-problems.

2. Conquer the sub-problems by solving them recursively.

3. Combine the results of the sub-problems to form the final solution.

🔁 How Merge Sort Uses Divide and Conquer

Merge Sort applies the divide-and-conquer strategy as follows:

1. Divide the array into two halves.

2. Recursively sort each half using merge sort.

3. Merge the two sorted halves into a single sorted array.

📌 Step-by-Step Example: Merge Sort on [38, 27, 43, 3, 9, 82, 10]

Let’s walk through Merge Sort using Divide and Conquer:

Step 1: Divide

Break the array into two halves:


[38, 27, 43, 3] and [9, 82, 10]

Keep dividing until each part has only one element:

[38, 27] → [38], [27]

[43, 3] → [43], [3]

[9, 82] → [9], [82]

[10] → Already single

Step 2: Conquer (Sort small parts)

Now sort and merge each pair of small arrays:

 Merge [38] and [27] → [27, 38]

 Merge [43] and [3] → [3, 43]

 Merge [9] and [82] → [9, 82]

Now we have:

[27, 38], [3, 43], [9, 82], [10]

Step 3: Combine

Now combine the sorted sub-arrays:

 Merge [27, 38] and [3, 43] → [3, 27, 38, 43]

 Merge [9, 82] and [10] → [9, 10, 82]

Finally:

 Merge [3, 27, 38, 43] and [9, 10, 82] → ✅ Sorted Array: [3, 9, 10, 27, 38, 43, 82]

✅ Why is Divide and Conquer Effective for Merge Sort?

1. Balanced Division

Each divide step splits the array into halves. This keeps the recursion depth to log n, ensuring
efficiency.

2. Simpler Sorting at Lower Levels

When the array is broken down to small parts, sorting those tiny arrays is fast and easy.

3. Efficient Merging

Merging two sorted arrays takes linear time. Since the subarrays are already sorted, merging them is
quick and deterministic.

4. Guaranteed Time Complexity

Merge sort always works in O(n log n) time — regardless of input order (unlike quick sort).
💡 Real-life Analogy

Imagine sorting a huge pile of papers:

 First, you divide them into small batches.

 You sort each batch (maybe by handing them to your friends).

 Finally, you combine all sorted batches to make one big sorted file.

This teamwork approach is fast, structured, and less error-prone — just like Divide and Conquer in
Merge Sort.

🧠 Summary

| Feature | Benefit in Merge Sort |
| --- | --- |
| Divide | Breaks a complex task into easier pieces |
| Conquer | Solves small pieces quickly and correctly |
| Combine | Efficiently merges sorted parts |
| Result | Stable and reliable sorting with O(n log n) time |


🧠 How Strassen’s Matrix Multiplication Is Better Than Normal Matrix Multiplication (In Terms of
Time Complexity)

🧾 Basic Idea

Strassen’s Matrix Multiplication is an advanced algorithm that reduces the number of multiplications
needed when multiplying two matrices, thus improving time complexity over the standard method.

🔢 1. Normal (Naive) Matrix Multiplication

In the classical approach, multiplying two n × n matrices requires:

 n³ multiplications

 Time complexity: O(n³)

This is because for every element in the result matrix, we perform n multiplications, and there are n² elements in total.

⚡ 2. Strassen’s Matrix Multiplication


Strassen’s algorithm reduces the number of multiplications by cleverly rearranging how matrices are
combined.

 Instead of 8 multiplications (as done when dividing matrices into four submatrices), it uses
only 7 multiplications and 18 additions/subtractions.

🧮 Time Complexity of Strassen's Algorithm:

O(n^log₂ 7) ≈ O(n^2.81)

This is faster than O(n³), especially for large matrices.

✂️How Strassen Works (Concept Summary)

Given two n × n matrices A and B:

1. Divide A and B into 4 submatrices each (of size n/2 × n/2)

2. Use 7 specific combinations of these submatrices (called M1 to M7) to calculate the product
matrix C.

3. This avoids the need for 8 full multiplications.

📊 Comparison Table

| Feature | Normal Multiplication | Strassen's Method |
| --- | --- | --- |
| Multiplications per recursion | 8 | 7 |
| Additions/Subtractions | Few | More (18 per level) |
| Time Complexity | O(n³) | O(n^2.81) |
| Performance for small n | Often better | May be slower |
| Performance for large n | Slower | Better |

✅ Why Strassen is Better (Theoretical View)

 Reduces the growth rate of operations with increasing matrix size.

 Suitable for large matrices in high-performance computing.

 Inspires even faster algorithms (like Coppersmith-Winograd with O(n^2.376)).

⚠️Limitations of Strassen’s Method

 More additions and subtractions → may affect performance on small matrices.

 Numerical stability can be a concern due to floating-point errors.

 Memory overhead is higher due to recursive splitting.


🧠 Summary

Strassen's Algorithm improves matrix multiplication time from O(n³) to O(n^2.81) by reducing the number of required multiplications using a clever divide-and-conquer strategy. It's especially effective for large matrix computations.


Below is a more detailed tabular comparison of Big-O (O), Big-Omega (Ω), and Theta (Θ) notations with additional technical and practical differences.

📊 Detailed Comparison Table

| Aspect | Big-O (O) | Big-Omega (Ω) | Theta (Θ) |
| --- | --- | --- | --- |
| Meaning | Upper bound | Lower bound | Tight bound (both upper and lower) |
| Describes | Worst-case scenario | Best-case scenario | Average/typical case or exact growth |
| Purpose | To define the maximum time/space | To define the minimum time/space | To define exact time/space |
| Inequality used | T(n) ≤ c · f(n) | T(n) ≥ c · f(n) | c₁ · f(n) ≤ T(n) ≤ c₂ · f(n) |
| Bound Type | Asymptotic upper bound | Asymptotic lower bound | Asymptotic tight bound |
| Function Growth | Algorithm won't grow faster than f(n) | Algorithm won't grow slower than f(n) | Algorithm grows exactly like f(n) |
| Best-Case Performance | Not shown | Yes | Yes |
| Worst-Case Performance | Yes | Not shown | Yes |
| Average-Case Usefulness | Not directly | Not directly | Yes |
| Usage Example | T(n) = O(n²) → max time like n² | T(n) = Ω(n²) → min time like n² | T(n) = Θ(n²) → always time like n² |
| Graph Behavior | Lies below or on the curve of f(n) | Lies above or on the curve of f(n) | Lies within the bounds of two curves of f(n) |
| Real-World Focus | Guarantees it won't be slower than this | Guarantees it is at least this fast | Guarantees exact performance behavior |
| Common Use in Analysis | Widely used for worst-case estimation | Used for minimum time estimation | Used when performance is consistent |
| Example Algorithm (Sort) | Merge Sort: O(n log n) | Merge Sort: Ω(n log n) | Merge Sort: Θ(n log n) |

🧠 Summary

 Use Big-O to show how bad it can get (worst-case).

 Use Big-Omega to show how fast it can be (best-case).

 Use Theta when you are confident it behaves consistently.

