ada unit 1
Asymptotic analysis evaluates the efficiency of algorithms in a way that does not depend on machine-specific constants and does not require algorithms to be implemented and the time taken by programs to be compared. Asymptotic notations are mathematical tools to represent the
time complexity of algorithms for asymptotic analysis.
Asymptotic Notations:
Asymptotic Notations are mathematical tools used to analyze the performance of algorithms
by understanding how their efficiency changes as the input size grows.
These notations provide a concise way to express the behavior of an algorithm's time or
space complexity as the input size approaches infinity.
Rather than comparing algorithms directly, asymptotic analysis focuses on understanding the
relative growth rates of algorithms' complexities.
Asymptotic analysis allows for the comparison of algorithms' space and time complexities by
examining their performance characteristics as the input size varies.
By using asymptotic notations, such as Big O, Big Omega, and Big Theta, we can categorize
algorithms based on their worst-case, best-case, or average-case time or space complexities,
providing valuable insights into their efficiency.
Theta notation encloses the function from above and below. Since it represents the upper and the
lower bound of the running time of an algorithm, it is used for analyzing the average-case
complexity of an algorithm.
Theta (Average Case): In the average case, you add the running times for each possible input combination and take the average.
Let g and f be the function from the set of natural numbers to itself. The function f is said to be Θ(g),
if there are constants c1, c2 > 0 and a natural number n0 such that c1* g(n) ≤ f(n) ≤ c2 * g(n) for all n
≥ n0
Theta notation
Mathematical Representation of Theta notation:
Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤ c2 * g(n) for
all n ≥ n0}
The above expression can be described as if f(n) is theta of g(n), then the value f(n) is always
between c1 * g(n) and c2 * g(n) for large values of n (n ≥ n0). The definition of theta also requires
that f(n) must be non-negative for values of n greater than n0.
The execution time serves as both a lower and an upper bound on the algorithm's time complexity:
it acts as both the greatest and the least bound for a given input size.
Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it gives
the worst-case complexity of an algorithm.
If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist positive constants c
and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.
The execution time serves as an upper bound on the algorithm's time complexity.
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic
time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n²).
Note: O(n²) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two
statements for the best and worst cases:
1. The worst-case time complexity of Insertion Sort is Θ(n²).
2. The best-case time complexity of Insertion Sort is Θ(n).
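To make the best/worst-case distinction concrete, here is a small Python sketch (my own illustration, not part of the original notes) of insertion sort that counts comparisons; the names insertion_sort and comparisons are illustrative.

def insertion_sort(arr):
    # Sorts arr in place and returns the number of comparisons made,
    # so the best case (sorted input) and worst case (reversed input) can be compared.
    comparisons = 0
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if arr[j] > key:
                arr[j + 1] = arr[j]   # shift the larger element one position right
                j -= 1
            else:
                break
        arr[j + 1] = key
    return comparisons

print(insertion_sort(list(range(10))))         # sorted input: about n comparisons, Θ(n)
print(insertion_sort(list(range(10, 0, -1))))  # reversed input: about n²/2 comparisons, Θ(n²)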
Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides
the best case complexity of an algorithm.
The execution time serves as a lower bound on the algorithm's time complexity.
It is defined as the condition that allows an algorithm to complete statement execution in the
shortest amount of time.
Let g and f be the function from the set of natural numbers to itself. The function f is said to be Ω(g),
if there is a constant c > 0 and a natural number n0 such that c*g(n) ≤ f(n) for all n ≥ n0
Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
Let us consider the same Insertion sort example here. The time complexity of Insertion Sort can be
written as Ω(n), but it is not very useful information about insertion sort, as we are generally
interested in worst-case and sometimes in the average case.
The word Algorithm means "A set of finite rules or instructions to be followed in calculations or other
problem-solving operations" Or "A procedure for solving a mathematical problem in a finite number
of steps that frequently involves recursive operations".
Algorithms play a crucial role in various fields and have many applications. Some of the key areas
where algorithms are used include:
1. Computer Science: Algorithms form the basis of computer programming and are used to
solve problems ranging from simple sorting and searching to complex tasks such as artificial
intelligence and machine learning.
2. Mathematics: Algorithms are used to solve mathematical problems, such as finding the
optimal solution to a system of linear equations or finding the shortest path in a graph.
3. Operations Research: Algorithms are used to optimize and make decisions in fields such as
transportation, logistics, and resource allocation.
4. Artificial Intelligence: Algorithms are the foundation of artificial intelligence and machine
learning, and are used to develop intelligent systems that can perform tasks such as image
recognition, natural language processing, and decision-making.
5. Data Science: Algorithms are used to analyze, process, and extract insights from large
amounts of data in fields such as marketing, finance, and healthcare.
These are just a few examples of the many applications of algorithms. The use of algorithms is
continually expanding as new technologies and fields emerge, making it a vital component of
modern society.
Algorithms can be simple or complex depending on what you want to achieve.
This can be understood by taking the example of cooking a new recipe. To cook a new recipe, one reads
the instructions and steps and executes them one by one, in the given sequence. The result thus
obtained is that the new dish is cooked perfectly. Every time you use your phone, computer, laptop, or
calculator, you are using algorithms. Similarly, algorithms help to do a task in programming to get the
expected output.
Algorithms are designed to be language-independent, i.e. they are just plain instructions that can be
implemented in any language, and yet the output will be the same, as expected.
1. Algorithms are necessary for solving complex problems efficiently and effectively.
2. They help to automate processes and make them more reliable, faster, and easier to
perform.
3. Algorithms also enable computers to perform tasks that would be difficult or impossible for
humans to do manually.
4. They are used in various fields such as mathematics, computer science, engineering, finance,
and many others to optimize processes, analyze data, make predictions, and provide
solutions to problems.
Just as one would not follow arbitrary written instructions to cook a recipe, but only the standard one,
not all written instructions for programming are an algorithm. For some instructions to be
an algorithm, they must have the following characteristics:
Input: An algorithm has zero or more inputs. Every instruction that contains a fundamental operator
must accept zero or more inputs.
Output: An algorithm produces at least one output. Every instruction that contains a
fundamental operator must produce at least one output.
Finiteness: An algorithm must terminate after a finite number of steps in all test cases. Every
instruction which contains a fundamental operator must be terminated within a finite
amount of time. Infinite loops or recursive functions without base conditions do not possess
finiteness.
Effectiveness: An algorithm must be developed by using very basic, simple, and feasible
operations so that one can trace it out by using just paper and pencil.
Properties of Algorithm:
It should be deterministic, i.e. it gives the same output for the same input case.
Every step in the algorithm must be effective, i.e. every step should do some work.
Advantages of Algorithms:
It is easy to understand.
In an Algorithm the problem is broken down into smaller pieces or steps hence, it is easier
for the programmer to convert it into an actual program.
Disadvantages of Algorithms:
Writing an algorithm takes time, and it can be difficult to represent branching and looping clearly.
For a standard algorithm to be good, it must be efficient. Hence the efficiency of an algorithm must
be checked and maintained. It can be in two stages:
1. Priori Analysis:
"Priori" means "before". Hence Priori analysis means checking the algorithm before its
implementation. In this, the algorithm is checked when it is written in the form of theoretical steps.
Here, the efficiency of an algorithm is measured by assuming that all other factors, for example processor
speed, are constant and have no effect on the implementation. This is usually done by the algorithm
designer. This analysis is independent of the type of hardware and the language of the compiler. It gives
approximate answers for the complexity of the program.
2. Posterior Analysis:
"Posterior" means "after". Hence Posterior analysis means checking the algorithm after its
implementation. In this, the algorithm is checked by implementing it in any programming language
and executing it. This analysis helps to get the actual and real analysis report about correctness(for
every possible input/s if it shows/returns correct output or not), space required, time consumed, etc.
That is, it is dependent on the language of the compiler and the type of hardware used.
An algorithm is defined as complex based on the amount of Space and Time it consumes. Hence the
Complexity of an algorithm refers to the measure of the time that it will need to execute and get the
expected output, and the Space it will need to store all the data (input, temporary data, and output).
Hence these two factors define the efficiency of an algorithm.
The two factors of Algorithm Complexity are:
Time Factor: Time is measured by counting the number of key operations such as
comparisons in the sorting algorithm.
Space Factor: Space is measured by counting the maximum memory space required by the
algorithm to run/execute.
1. Space Complexity: The space complexity of an algorithm refers to the amount of memory required
by the algorithm to store the variables and get the result. This can be for inputs, temporary
operations, or outputs.
Fixed Part: This refers to the space that is required by the algorithm. For example, input
variables, output variables, program size, etc.
Variable Part: This refers to the space that can be different based on the implementation of
the algorithm. For example, temporary variables, dynamic memory allocation, recursion
stack space, etc.
Therefore, the space complexity S(P) of any algorithm P is S(P) = C + Sp(I), where C is the fixed
part and Sp(I) is the variable part of the algorithm, which depends on instance characteristic I.
Example: Consider the below algorithm for Linear Search.
Step 1: START
Step 2: Get n elements of the array in arr and the number to be searched in x
Step 3: Start from the leftmost element of arr[] and one by one compare x with each element of arr[]
Step 4: If x matches with an element, Print True.
Step 5: If x doesn’t match with any of the elements, Print False.
Step 6: END
Here, There are 2 variables arr[], and x, where the arr[] is the variable part of n elements and x is the
fixed part. Hence S(P) = 1+n. So, the space complexity depends on n(number of elements). Now, space
depends on data types of given variables and constant types and it will be multiplied accordingly.
2. Time Complexity: The time complexity of an algorithm refers to the amount of time required by
the algorithm to execute and get the result. This can be for normal operations, conditional if-else
statements, loop statements, etc.
Constant time part: Any instruction that is executed just once comes in this part. For
example, input, output, if-else, switch, arithmetic operations, etc.
Variable Time Part: Any instruction that is executed more than once, say n times, comes in
this part. For example, loops, recursion, etc.
Therefore, the time complexity T(P) of any algorithm P is T(P) = C + TP(I), where C is the constant
time part and TP(I) is the variable part of the algorithm, which depends on the instance
characteristic I.
Example: In the Linear Search algorithm above, the time complexity is calculated as follows: reading the inputs and printing the result take constant time C, while the comparison in Step 3 may run up to n times, so T(P) = C + c·n, which is O(n) in the worst case.
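A hedged Python sketch of the Linear Search described above (the function name linear_search and the sample lists are my own). The single loop runs at most n times, matching T(P) = C + c·n = O(n), and only one extra variable is used besides the n-element array, matching S(P) = 1 + n.

def linear_search(arr, x):
    # Compare x with each element of arr, one by one (Steps 3-5 above).
    for element in arr:
        if element == x:
            return True   # x matches an element
    return False          # x did not match any element

print(linear_search([10, 14, 19, 26, 27, 31], 27))  # True, found after a few comparisons
print(linear_search([10, 14, 19, 26, 27, 31], 99))  # False, worst case: all n comparisons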
There are mainly three ways of expressing an algorithm:
1. Natural Language:- Here we express the algorithm in the natural English language. It is too
hard to understand the algorithm from it.
2. Flow Chart:- Here we express the algorithm as a graphical/pictorial representation. It is easier
to understand than natural language.
3. Pseudo Code:- Here we express the algorithm in the form of annotations and informative
text written in plain English, which is very similar to real code; but as it has no syntax
like any of the programming languages, it can't be compiled or interpreted by the computer.
It is the best way to express an algorithm because it can be understood by even a layman
with some school-level knowledge.
Heap Sort is an efficient sorting technique based on the heap data structure.
A heap is a nearly-complete binary tree where each parent node is either the minimum or the
maximum of its subtree. The heap with the minimum value at the root node is called a min-heap, and
the heap with the maximum value at the root node is called a max-heap. The elements in the input
data of the heap sort algorithm are processed using these two methods.
The heap sort algorithm follows two main operations in this procedure −
Builds a heap H from the input data using the heapify (explained further into the chapter)
method, based on the way of sorting ascending order or descending order.
Deletes the root element of the heap repeatedly, re-heapifying after each deletion, until all the input
elements are processed.
The heap sort algorithm heavily depends upon the heapify method of the binary tree. So what is this
heapify method?
Heapify Method
The heapify method of a binary tree converts the tree into a heap data structure. This method
uses a recursive approach to heapify all the nodes of the binary tree.
Note − The binary tree must always be a complete binary tree, i.e. every level is completely filled
except possibly the last, which is filled from left to right.
The complete binary tree will be converted into either a max-heap or a min-heap by applying
the heapify method.
As described in the algorithm below, the sorting algorithm first constructs the heap ADT by calling
the Build-Max-Heap algorithm, then repeatedly swaps the root element with the last leaf node of the
heap and removes it from the heap. The heapify method is then applied to rearrange the remaining
elements accordingly.
Algorithm: Heapsort(A)
BUILD-MAX-HEAP(A)
for i = A.length downto 2
   exchange A[1] with A[i]
   A.heap-size = A.heap-size - 1
   MAX-HEAPIFY(A, 1)
Analysis
The heap sort algorithm combines the better attributes of two other sorting algorithms: insertion sort
and merge sort.
The similarities with insertion sort include that only a constant number of array elements are stored
outside the input array at any time.
The time complexity of the heap sort algorithm is O(nlogn), similar to merge sort.
Example
12 3 9 14 10 18 8 23
Building a heap using the BUILD-MAX-HEAP algorithm from the input array −
Rearrange the obtained binary tree by exchanging the nodes such that a heap data structure is
formed.
Applying the heapify method, remove the root node from the heap and replace it with the next
immediate maximum valued child of the root.
The root node is 23, so 23 is popped and 18 is made the next root because it is the next maximum
node in the heap.
The current root 14 is popped from the heap and is replaced by 12.
Here the current root element 9 is popped and the elements 8 and 3 are remained in the tree.
After completing the heap sort operation on the given heap, the sorted elements are displayed as
shown below −
Every time an element is popped, it is added at the beginning of the output array, since the heap data
structure formed is a max-heap. But if the heapify method converts the binary tree into a min-heap,
the popped elements are added at the end of the output array.
3 8 9 10 12 14 18 23
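For reference, a Python sketch of heap sort on the same input (my own illustration of the max-heapify idea described above; function names are illustrative, not taken from the notes):

def max_heapify(arr, heap_size, i):
    # Sift arr[i] down until the subtree rooted at i satisfies the max-heap property.
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < heap_size and arr[left] > arr[largest]:
        largest = left
    if right < heap_size and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        max_heapify(arr, heap_size, largest)

def heap_sort(arr):
    n = len(arr)
    # Build-Max-Heap: heapify every internal node, bottom-up.
    for i in range(n // 2 - 1, -1, -1):
        max_heapify(arr, n, i)
    # Repeatedly move the root (maximum) to the end of the unsorted part.
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        max_heapify(arr, end, 0)
    return arr

print(heap_sort([12, 3, 9, 14, 10, 18, 8, 23]))  # [3, 8, 9, 10, 12, 14, 18, 23]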
Using divide and conquer approach, the problem in hand, is divided into smaller sub-problems and
then each problem is solved independently. When we keep dividing the sub-problems into even
smaller sub-problems, we may eventually reach a stage where no more division is possible. Those
smallest possible sub-problems are solved directly, because they take less time to compute. The
solutions of all sub-problems are finally merged in order to obtain the solution of the original
problem.
Broadly, we can understand divide-and-conquer approach in a three-step process.
Divide/Break
This step involves breaking the problem into smaller sub-problems. Sub-problems should represent a
part of the original problem. This step generally takes a recursive approach to divide the problem
until no sub-problem is further divisible. At this stage, sub-problems become atomic in size but still
represent some part of the actual problem.
Conquer/Solve
This step receives a lot of smaller sub-problems to be solved. Generally, at this level, the problems
are considered 'solved' on their own.
Merge/Combine
When the smaller sub-problems are solved, this stage recursively combines them until they
formulate a solution of the original problem. This algorithmic approach works recursively and
conquer & merge steps works so close that they appear as one.
Arrays as Input
There are various ways in which algorithms can take input such that they can be solved using
the divide and conquer technique; arrays are one of them. In algorithms that require input to be in
the form of a list, like various sorting algorithms, array data structures are most commonly used.
In the input for a sorting algorithm below, the array input is divided into subproblems until they
cannot be divided further.
Then, the subproblems are sorted (the conquer step) and are merged to form the solution of the
original array back (the combine step).
Since arrays are indexed and linear data structures, sorting algorithms most popularly use array data
structures to receive input.
In this approach, most of the algorithms are designed using recursion, hence memory usage is high.
A recursion stack is used, where the state of each function call needs to be stored.
1. Divide:
In Merge Sort, we divide the input array in two halves. Please note that the divide step of Merge Sort
is simple, but in Quick Sort, the divide step is critical. In Quick Sort, we partition the array around a
pivot.
2. Conquer:
If a subproblem is small enough (often referred to as the “base case”), we solve it directly
without further recursion.
In Merge Sort, the conquer step is to sort the two halves individually.
3. Merge:
Combine the sub-problems to get the final solution of the whole problem.
Once the smaller subproblems are solved, we recursively combine their solutions to get the
solution of larger problem.
The goal is to formulate a solution for the original problem by merging the results from the
subproblems.
In Merge Sort, the merge step is to merge two sorted halves to create one sorted array. Please note
that the merge step of Merge Sort is critical, but in Quick Sort, the merge step does not do anything,
as both parts become sorted in place and the left part has all elements smaller than (or equal to) the
right part.
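A runnable Python sketch of the three Merge Sort steps just described (my own illustration; this top-down version returns a new list rather than sorting in place):

def merge_sort(arr):
    if len(arr) <= 1:               # base case: a single element is already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # divide + conquer the left half
    right = merge_sort(arr[mid:])   # divide + conquer the right half
    # Merge: repeatedly take the smaller front element of the two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]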
Divide and Conquer Algorithm involves breaking down a problem into smaller, more manageable
parts, solving each part individually, and then combining the solutions to solve the original problem.
The characteristics of Divide and Conquer Algorithm are:
Dividing the Problem: The first step is to break the problem into smaller, more manageable
subproblems. This division can be done recursively until the subproblems become simple
enough to solve directly.
Conquering Each Subproblem: Once divided, the subproblems are solved individually. This
may involve applying the same divide and conquer approach recursively until the
subproblems become simple enough to solve directly, or it may involve applying a different
algorithm or technique.
Combining Solutions: After solving the subproblems, their solutions are combined to obtain
the solution to the original problem. This combination step should be relatively efficient and
straightforward, as the solutions to the subproblems should be designed to fit together
seamlessly.
Binary search is a fast search algorithm with a run-time complexity of O(log n). This search algorithm
works on the principle of divide and conquer, since it divides the array into half before searching. For
this algorithm to work properly, the data collection should be in sorted form.
Binary search looks for a particular key value by comparing the middle-most item of the collection. If
a match occurs, then the index of the item is returned. But if the middle item has a value greater than
the key value, the left sub-array of the middle item is searched; otherwise, the right sub-array is
searched. This process continues recursively until the size of a subarray reduces to zero.
Binary Search algorithm is an interval searching method that performs the searching in intervals only.
The input taken by the binary search algorithm must always be in a sorted array since it divides the
array into subarrays based on the greater or lower values. The algorithm follows the procedure
below −
Step 1 − Select the middle item in the array and compare it with the key value to be searched. If it is
matched, return the position of the median.
Step 2 − If it does not match the key value, check if the key value is either greater than or less than
the median value.
Step 3 − If the key is greater, perform the search in the right sub-array; but if the key is lower than
the median value, perform the search in the left sub-array.
Step 4 − Repeat Steps 1, 2 and 3 iteratively, until the size of sub-array becomes 1.
Step 5 − If the key value does not exist in the array, then the algorithm returns an unsuccessful
search.
Pseudocode
Procedure binary_search
   A ← sorted array
   n ← size of array
   x ← value to be searched
   Set lowerBound = 1
   Set upperBound = n
   while x not found
      if upperBound < lowerBound
         EXIT: x does not exist
      set midPoint = lowerBound + (upperBound - lowerBound) / 2
      if A[midPoint] < x
         set lowerBound = midPoint + 1
      if A[midPoint] > x
         set upperBound = midPoint - 1
      if A[midPoint] = x
         EXIT: x found at location midPoint
   end while
end procedure
Analysis
Since the binary search algorithm performs searching iteratively, calculating the time complexity is
not as easy as the linear search algorithm.
The input array is searched iteratively by dividing into multiple sub-arrays after every unsuccessful
iteration. Therefore, the recurrence relation formed would be of a dividing function.
During the first iteration, the element is searched in the entire array. Therefore, length of the
array = n.
In the second iteration, only half of the original array is searched. Hence, length of the array
= n/2.
In the third iteration, half of the previous sub-array is searched. Here, length of the array will
be = n/4.
Similarly, in the ith iteration, the length of the array becomes n/2^i.
To achieve a successful search, after the last iteration the length of the array must be 1. Hence,
n/2^i = 1
That gives us −
n = 2^i
log n = log 2^i
log n = i log 2
i = log n (base 2)
Hence, the time complexity of the binary search algorithm is O(log n).
Example
For a binary search to work, it is mandatory for the target array to be sorted. We shall learn the
process of binary search with a pictorial example. The following is our sorted array and let us assume
that we need to search the location of value 31 using binary search.
First, we determine the middle of the array using the formula mid = low + (high - low) / 2. Here it is 0 + (9 - 0) / 2 = 4 (integer value of 4.5). So, 4 is the mid of the array.
Now we compare the value stored at location 4, with the value being searched, i.e. 31. We find that
the value at location 4 is 27, which is not a match. As the value is greater than 27 and we have a
sorted array, so we also know that the target value must be in the upper portion of the array.
We change our low to mid + 1 and find the new mid value again.
low = mid + 1
mid = low + (high - low) / 2
Our new mid is 7 now.
The value stored at location 7 is not a match; rather, it is greater than what we are looking for. So, the
value must be in the lower part from this location.
We compare the value stored at location 5 with our target value. We find that it is a match.
Binary search halves the searchable items and thus reduces the count of comparisons to be made to a
very small number.
Iteration Method
binarySearch(arr, x, low, high)
   repeat till low = high
      mid = (low + high) / 2
      if (x == arr[mid])
         return mid
      else if (x > arr[mid])
         low = mid + 1
      else high = mid - 1
Recursive Method
binarySearch(arr, x, low, high)
   if low > high
      return False
   else
      mid = (low + high) / 2
      if x == arr[mid]
         return mid
      else if x > arr[mid]
         return binarySearch(arr, x, mid + 1, high)
      else return binarySearch(arr, x, low, mid - 1)
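A runnable Python version of the iterative method above (my own sketch with 0-based indices; the sample array is an assumption chosen to be consistent with the walkthrough values, 27 at index 4 and 31 at index 5):

def binary_search(arr, x):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = low + (high - low) // 2   # middle of the current sub-array
        if arr[mid] == x:
            return mid                  # match found: return the position
        elif arr[mid] < x:
            low = mid + 1               # search the right sub-array
        else:
            high = mid - 1              # search the left sub-array
    return -1                           # unsuccessful search

sorted_arr = [10, 14, 19, 26, 27, 31, 33, 35, 42, 44]
print(binary_search(sorted_arr, 31))    # 5, as in the walkthrough above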
The recurrence relation of Merge Sort is T(n) = 2T(n/2) + O(n), where:
T(n) represents the total time taken by the algorithm to sort an array of size n.
2T(n/2) represents the time taken by the algorithm to recursively sort the two halves of the
array. Since each half has n/2 elements, we have two recursive calls with input size (n/2).
O(n) represents the time taken to merge the two sorted halves.
We know that merge sort first divides the whole array iteratively
into equal halves until the atomic values are achieved. We see
here that an array of 8 items is divided into two arrays of size 4.
This does not change the sequence of appearance of items in the
original. Now we divide these two arrays into halves.
We further divide these arrays and we achieve atomic values which
can no longer be divided.
In the next iteration of the combining phase, we compare lists of
two data values, and merge them into a list of four data values,
placing all in sorted order.
After the final merging, the list becomes sorted and is considered
the final solution.
# Merge step of merge sort (Python): copy the smaller front element of L or R back into arr;
# any elements remaining in L or R are copied over afterwards.
while i < len(L) and j < len(R):
    if L[i] <= R[j]:
        arr[k] = L[i]
        i += 1
    else:
        arr[k] = R[j]
        j += 1
    k += 1
Quick sort is a sorting technique that has the ability to break a massive data array into smaller ones
in order to save time. In the quick sort algorithm, an extensive array is divided into two arrays: one
holds values smaller than the specified value (called the pivot), and the other holds values greater
than the pivot.
Here,
L= Left
R = Right
P = Pivot
In the given series of arrays, let’s assume that the leftmost item is the pivot. So, in this condition, a[L]
= 23, a[R] = 26 and a[P] = 23.
Since, at this moment, the pivot item is at left, so the algorithm initiates from right and travels
towards left.
Now, a[P] < a[R], so the algorithm travels forward one position towards left, i.e. –
Since a[P] > a[R], so the algorithm will exchange or swap a[P] with a[R], and the pivot travels to right,
as –
Now, a[L] = 18, a[R] = 23, and a[P] = 23. Since the pivot is at right, so the algorithm begins from left
and travels to right.
Now, a[L] = 28, a[R] = 23, and a[P] = 23. As a[P] < a[L], so, swap a[P] and a[L], now pivot is at left, i.e.
–
Since the pivot is placed at the leftmost side, the algorithm begins from right and travels to left. Now,
a[L] = 23, a[R] = 28, and a[P] = 23. As a[P] < a[R], so algorithm travels one place to left, as –
Now, a[P] = 23, a[L] = 23, and a[R] = 13. As a[P] > a[R], so, exchange a[P] and a[R], now pivot is at
right, i.e. –
Now, a[P] = 23, a[L] = 13, and a[R] = 23. Pivot is at right, so the algorithm begins from left and travels
to right.
Now, a[P] = 23, a[L] = 23, and a[R] = 23. So, pivot, left and right, are pointing to the same element. It
represents the termination of the procedure.
Item 23, which is the pivot element, stands at its accurate position.
Items that are on the right side of element 23 are greater than it, and the elements that are on the
left side of element 23 are smaller than it.
Now, in a similar manner, the quick sort algorithm is separately applied to the left and right sub-
arrays. After sorting gets done, the array will be –
Algorithm:
QUICKSORT(A, begin, finish)
{
   if begin < finish
      p ← PARTITION(A, begin, finish)
      QUICKSORT(A, begin, p - 1)
      QUICKSORT(A, p + 1, finish)
}
Partition Algorithm:
PARTITION(A, begin, finish)
{
   pivot ← A[finish]
   i ← begin - 1
   for j ← begin to finish - 1
      if A[j] ≤ pivot
         then i ← i + 1
              swap A[i] with A[j]
   swap A[i + 1] with A[finish]
   return i + 1
}
Stable: No (quick sort is not a stable sorting algorithm).
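A hedged Python sketch matching the Lomuto-style partition pseudocode above (pivot = last element). Note that the earlier step-by-step walkthrough used a different leftmost-pivot scheme; the sample array below simply reuses the walkthrough's elements.

def partition(arr, begin, finish):
    pivot = arr[finish]                 # choose the last element as pivot
    i = begin - 1
    for j in range(begin, finish):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[finish] = arr[finish], arr[i + 1]  # place pivot at its final position
    return i + 1

def quick_sort(arr, begin=0, finish=None):
    if finish is None:
        finish = len(arr) - 1
    if begin < finish:
        p = partition(arr, begin, finish)
        quick_sort(arr, begin, p - 1)   # sort elements smaller than the pivot
        quick_sort(arr, p + 1, finish)  # sort elements greater than the pivot
    return arr

print(quick_sort([23, 18, 28, 13, 26]))  # [13, 18, 23, 26, 28]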
Strassen's Matrix Multiplication is the divide and conquer approach to solve matrix multiplication
problems. The usual matrix multiplication method multiplies each row with each column to achieve
the product matrix; the time complexity of this approach is O(n³), since it takes three nested loops to
multiply. Strassen's method was introduced to reduce the time complexity from O(n³) to O(n^log 7) ≈ O(n^2.81).
Naive Method
First, we will discuss the Naive method and its complexity. Here, we are calculating Z = X × Y. Using the
Naive method, two matrices (X and Y) can be multiplied if the order of these matrices is p × q and q × r,
and the resultant matrix will be of order p × r. The following pseudocode describes the Naive multiplication
−
for i = 1 to p do
   for j = 1 to r do
      Z[i,j] := 0
      for k = 1 to q do
         Z[i,j] := Z[i,j] + X[i,k] × Y[k,j]
Here, we assume that integer operations take O(1) time. There are three for loops in this algorithm
and one is nested in the other. Hence, the algorithm takes O(n³) time to execute.
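The naive pseudocode translates directly into Python; this is a small sketch of my own (the name naive_multiply is illustrative), using lists of lists for matrices:

def naive_multiply(X, Y):
    p, q, r = len(X), len(Y), len(Y[0])
    Z = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):                  # three nested loops: O(n³) for n × n matrices
                Z[i][j] += X[i][k] * Y[k][j]
    return Z

print(naive_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]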
In this context, using Strassen's Matrix multiplication algorithm, the time consumption can be
improved a little bit.
Strassen's Matrix multiplication can be performed only on square matrices where n is a power of 2.
The order of both matrices is n × n. Divide X, Y and the product Z = X × Y into four (n/2) × (n/2)
sub-matrices, X = [A B; C D], Y = [E F; G H] and Z = [I J; K L]. Then compute −
M1 := (A + C) × (E + F)
M2 := (B + D) × (G + H)
M3 := (A − D) × (E + H)
M4 := A × (F − H)
M5 := (C + D) × E
M6 := (A + B) × H
M7 := D × (G − E)
Then,
I := M2 + M3 − M6 − M7
J := M4 + M6
K := M5 + M7
L := M1 − M3 − M4 − M5
Analysis
T(n) = c                  if n = 1
T(n) = 7 T(n/2) + d n²    otherwise
where c and d are constants.
Divide matrix A and matrix B in 4 sub-matrices of size N/2 x N/2 as shown in the above
diagram.
Complexity: Solving this recurrence gives T(n) = O(n^log 7) ≈ O(n^2.81) (log to base 2), which is asymptotically faster than the naive O(n³) method.
Divide: Take the two matrices you want to multiply, let's call them A and B. Split them into
four smaller matrices, each about half the size of the original matrices.
Calculate: Use these smaller matrices to calculate seven special values, which we'll call P1,
P2, P3, P4, P5, P6, and P7. You do this by doing some simple additions and subtractions of
the smaller matrices.
Combine: Take these seven values and use them to compute the final result matrix, which
we'll call C. You calculate the values of C using the values of P1 to P7.
This method may sound a bit more complicated, but it's faster for really big matrices because it
reduces the number of multiplications you need to do, even though it involves more additions and
subtractions. For smaller matrices, the regular multiplication is faster, but for huge matrices,
Strassen's method can save a lot of time.
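A hedged Python sketch of Strassen's method for n × n matrices with n a power of 2, using the seven products M1 to M7 defined earlier; the helper names add, sub and strassen are my own:

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X))] for i in range(len(X))]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(len(X))] for i in range(len(X))]

def strassen(X, Y):
    n = len(X)
    if n == 1:                                   # base case: 1x1 matrices
        return [[X[0][0] * Y[0][0]]]
    h = n // 2
    # Split X into A, B, C, D and Y into E, F, G, H (quadrants of size n/2 x n/2).
    A = [row[:h] for row in X[:h]]; B = [row[h:] for row in X[:h]]
    C = [row[:h] for row in X[h:]]; D = [row[h:] for row in X[h:]]
    E = [row[:h] for row in Y[:h]]; F = [row[h:] for row in Y[:h]]
    G = [row[:h] for row in Y[h:]]; H = [row[h:] for row in Y[h:]]
    # The seven recursive multiplications from the notes above.
    M1 = strassen(add(A, C), add(E, F))
    M2 = strassen(add(B, D), add(G, H))
    M3 = strassen(sub(A, D), add(E, H))
    M4 = strassen(A, sub(F, H))
    M5 = strassen(add(C, D), E)
    M6 = strassen(add(A, B), H)
    M7 = strassen(D, sub(G, E))
    I = sub(sub(add(M2, M3), M6), M7)   # I = M2 + M3 - M6 - M7
    J = add(M4, M6)                     # J = M4 + M6
    K = add(M5, M7)                     # K = M5 + M7
    L = sub(sub(sub(M1, M3), M4), M5)   # L = M1 - M3 - M4 - M5
    # Reassemble Z = [[I, J], [K, L]].
    top = [I[i] + J[i] for i in range(h)]
    bottom = [K[i] + L[i] for i in range(h)]
    return top + bottom

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]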
Let’s understand the upper bound of the Quick Sort algorithm in simple words, along with an
example.
In algorithms, upper bound means the maximum time an algorithm will take to solve a problem —
worst-case time complexity.
Quick Sort is fastest when the pivot divides the array evenly, but it becomes slow when the pivot
divides the array very unevenly — this happens in the worst case.
Worst case happens when the pivot is always the smallest or largest element.
This leads to one part having n – 1 elements and the other having 0 elements — which is
very unbalanced.
This is because when the array is already sorted (or reverse sorted) and the pivot is always the last (or
first) element, each partition produces sub-arrays of sizes n − 1 and 0, so the algorithm performs about
(n − 1) + (n − 2) + … + 1 ≈ n²/2 comparisons.
Input: [1, 2, 3, 4, 5]
Step-by-step (pivot = last element):
Partition around 5 → [1, 2, 3, 4] and [] (4 comparisons)
Partition around 4 → [1, 2, 3] and [] (3 comparisons)
Partition around 3 → [1, 2] and [] (2 comparisons)
Partition around 2 → [1] and [] (1 comparison)
Total = 4 + 3 + 2 + 1 = 10 comparisons ≈ n²/2, which grows as O(n²).
✅ Summary
The upper bound of Quick Sort is O(n²), and it happens when the pivot divides the array in the worst
possible way — completely unbalanced.
Divide and Conquer is a powerful strategy in computer science where a problem is broken into
smaller sub-problems, solved independently, and then combined to get the final answer.
Step 1: Divide
Split the input array [38, 27, 43, 3, 9, 82, 10] into halves, and keep splitting until single elements remain.
Now we have:
[38], [27], [43], [3], [9], [82], [10]
Step 2: Conquer
Each single-element piece is already sorted, so the base cases are solved directly.
Step 3: Combine
Merge [27, 38] and [3, 43] → [3, 27, 38, 43]
Finally:
Merge [3, 27, 38, 43] and [9, 10, 82] → ✅ Sorted Array: [3, 9, 10, 27, 38, 43, 82]
1. Balanced Division
Each divide step splits the array into halves. This keeps the recursion depth to log n, ensuring
efficiency.
2. Fast Base Cases
When the array is broken down into small parts, sorting those tiny arrays is fast and easy.
3. Efficient Merging
Merging two sorted arrays takes linear time. Since the subarrays are already sorted, merging them is
quick and deterministic.
4. Consistent Performance
Merge sort always works in O(n log n) time, regardless of input order (unlike quick sort).
💡 Real-life Analogy
Imagine sorting a huge pile of files: you split the pile into smaller batches, each batch is sorted
separately, and finally you combine all sorted batches to make one big sorted file.
This teamwork approach is fast, structured, and less error-prone — just like Divide and Conquer in
Merge Sort.
🧠 Summary
Merge Sort is a textbook example of Divide and Conquer: the array is divided into halves, each half is
sorted recursively, and the sorted halves are merged, giving O(n log n) time for any input.
🧠 How Strassen’s Matrix Multiplication Is Better Than Normal Matrix Multiplication (In Terms of
Time Complexity)
🧾 Basic Idea
Strassen’s Matrix Multiplication is an advanced algorithm that reduces the number of multiplications
needed when multiplying two matrices, thus improving time complexity over the standard method.
Normal (standard) matrix multiplication of two n × n matrices performs n³ multiplications, so its time
complexity is O(n³).
This is because for every element in the result matrix, we perform n multiplications, and there are n²
elements in total.
Instead of 8 multiplications (as done when dividing matrices into four submatrices), it uses
only 7 multiplications and 18 additions/subtractions.
1. Divide A and B into 4 submatrices each (of size n/2 × n/2).
2. Use 7 specific combinations of these submatrices (called M1 to M7) to calculate the product
matrix C.
📊 Comparison Table
Method | Multiplications (per divide step on quadrants) | Time Complexity
Normal multiplication | 8 | O(n³)
Strassen's multiplication | 7 | O(n^log 7) ≈ O(n^2.81)
✅ Strassen's algorithm improves on normal matrix multiplication by reducing the number of required
multiplications using a clever divide-and-conquer strategy. It's especially effective for large matrix
computations.
Here is a more detailed tabular comparison of Big-O (O), Big-Omega (Ω), and Theta (Θ) notations with
additional technical and practical differences:
Aspect | Big-O (O) | Big-Omega (Ω) | Theta (Θ)
Describes | Worst-case scenario | Best-case scenario | Average/typical case or exact growth
Bound Type | Asymptotic upper bound | Asymptotic lower bound | Asymptotic tight bound
Function Growth | Algorithm won't grow faster than f(n) | Algorithm won't grow slower than f(n) | Algorithm grows exactly like f(n)
Best-Case Performance | Not shown | Yes | Yes
Worst-Case Performance | Yes | Not shown | Yes
Average-Case Usefulness | Not directly | Not directly | Yes
Usage Example | T(n) = O(n²) → Max time like n² | T(n) = Ω(n²) → Min time like n² | T(n) = Θ(n²) → Always time like n²
Graph Behavior | Lies below or on curve of f(n) | Lies above or on curve of f(n) | Lies within bounds of two curves of f(n)
Common Use in Analysis | Widely used for worst-case estimation | Used for minimum time | Used when performance is consistent
Example Algorithm (Sort) | Merge Sort: O(n log n) | Merge Sort: Ω(n log n) | Merge Sort: Θ(n log n)
🧠 Summary
Big-O describes the worst case (upper bound), Big-Omega describes the best case (lower bound), and
Theta describes a tight bound where the algorithm's growth is exactly f(n) up to constant factors.