DAA-unit4
The Merge Sort algorithm is a divide-and-conquer algorithm that sorts an array by first breaking it down
into smaller arrays, and then building the array back together the correct way so that it is sorted.
Divide: The algorithm starts with breaking up the array into smaller and smaller pieces until one such sub-
array only consists of one element.
Conquer: The algorithm merges the small pieces of the array back together by putting the lowest values
first, resulting in a sorted array.
The breaking down and building up of the array to sort the array is done recursively.
The Merge Sort algorithm can be described like this:
How it works:
Divide the unsorted array into two sub-arrays, half the size of the original.
Continue to divide the sub-arrays as long as the current piece of the array has more than one element.
Merge two sub-arrays together by always putting the lowest value first.
Keep merging until there are no sub-arrays left.
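The recursive steps above can be sketched directly in Python. This is a minimal top-down version for illustration; the iterative bottom-up implementation later in these notes produces the same result.

```python
def merge_sort(arr):
    # Base case: an array with one element is already sorted
    if len(arr) <= 1:
        return arr
    # Divide: split the array into two halves
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Conquer: merge the two sorted halves, lowest value first
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])   # one of these two is empty;
    result.extend(right[j:])  # the other holds the remaining values
    return result

print(merge_sort([12, 8, 9, 3, 11, 5, 4]))  # [3, 4, 5, 8, 9, 11, 12]
```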
Seen from a different perspective: the array is split into smaller and smaller pieces until each piece holds a single element, and is then merged back together. As the merging happens, values from each sub-array are compared so that the lowest value comes first.
Let's try to do the sorting manually, just to get an even better understanding of how Merge Sort works before
actually implementing it in a programming language.
Step 1: We start with an unsorted array, which will be split in half again and again until the sub-arrays consist of just one element. The Merge Sort function calls itself twice, once for each half of the array. That means that the first sub-array will split into the smallest pieces first.
[ 12, 8, 9, 3, 11, 5, 4]
[ 12, 8, 9] [ 3, 11, 5, 4]
[ 12] [ 8, 9] [ 3, 11, 5, 4]
[ 12] [ 8] [ 9] [ 3, 11, 5, 4]
Step 2: The splitting of the first sub-array is finished, and now it is time to merge. 8 and 9 are the first two
elements to be merged. 8 is the lowest value, so that comes before 9 in the first merged sub-array.
[ 12] [ 8, 9] [ 3, 11, 5, 4]
Step 3: The next sub-arrays to be merged are [ 12] and [ 8, 9]. Values in both arrays are compared from the
start. 8 is lower than 12, so 8 comes first, and 9 is also lower than 12, so 9 comes next, with 12 last.
[ 8, 9, 12] [ 3, 11, 5, 4]
Step 4: Now the second sub-array is split into its smallest pieces.
[ 8, 9, 12] [ 3, 11] [ 5, 4]
[ 8, 9, 12] [ 3] [ 11] [ 5] [ 4]
Step 5: 3 and 11 are merged. 3 is the lowest value, so it comes first.
[ 8, 9, 12] [ 3, 11] [ 5] [ 4]
Step 6: 5 and 4 are merged. 4 is the lowest value, so it comes first.
[ 8, 9, 12] [ 3, 11] [ 4, 5]
Step 7: The two sub-arrays on the right are merged. Comparisons are done to create elements in the new
merged array:
1. 3 is lower than 4
2. 4 is lower than 11
3. 5 is lower than 11
4. 11 is the last remaining value
[ 8, 9, 12] [ 3, 4, 5, 11]
Step 8: The two last remaining sub-arrays are merged. Let's look at how the comparisons are done in more
detail to create the new merged and finished sorted array:
3 is lower than 8:
Before [ 8, 9, 12] [ 3, 4, 5, 11]
After: [ 3, 8, 9, 12] [ 4, 5, 11]
Step 9: 4 is lower than 8:
Before [ 3, 8, 9, 12] [ 4, 5, 11]
After: [ 3, 4, 8, 9, 12] [ 5, 11]
Step 10: 5 is lower than 8:
Before [ 3, 4, 8, 9, 12] [ 5, 11]
After: [ 3, 4, 5, 8, 9, 12] [ 11]
Step 11: 8 and 9 are lower than 11, so they are taken next from the left sub-array and the arrays look unchanged:
Before [ 3, 4, 5, 8, 9, 12] [ 11]
After: [ 3, 4, 5, 8, 9, 12] [ 11]
Step 12: 11 is lower than 12:
Before [ 3, 4, 5, 8, 9, 12] [ 11]
After: [ 3, 4, 5, 8, 9, 11, 12]
Code
def merge(left, right):
    result = []
    i = j = 0
    # Compare the fronts of both arrays, always taking the lowest value first
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One of the arrays is exhausted; append whatever remains of the other
    result.extend(left[i:])
    result.extend(right[j:])
    return result
def mergeSort(arr):
    step = 1  # Starting with sub-arrays of length 1
    length = len(arr)
    while step < length:
        for i in range(0, length, 2 * step):
            left = arr[i:i + step]
            right = arr[i + step:i + 2 * step]
            merged = merge(left, right)
            # Place the merged array back into the original array
            for j, val in enumerate(merged):
                arr[i + j] = val
        step *= 2  # Double the sub-array length for the next iteration
    return arr
unsortedArr = [3, 7, 6, -10, 15, 23.5, 55, -13]
sortedArr = mergeSort(unsortedArr)
print("Sorted array:", sortedArr)
Time complexity
Time complexity for Merge Sort is O(n log n).
And the time complexity is pretty much the same for different kinds of arrays. The algorithm needs to split
the array and merge it back together whether it is already sorted or completely shuffled.
Quick Sort
The Quicksort algorithm takes an array of values, chooses one of the values as the 'pivot' element, and
moves the other values so that lower values are on the left of the pivot element, and higher values are on the
right of it. The last element of the array is chosen to be the pivot element, but we could also have chosen the
first element of the array, or any element in the array really.
Then, the Quicksort algorithm does the same operation recursively on the sub-arrays to the left and right side
of the pivot element. This continues until the array is sorted.
After the Quicksort algorithm has put the pivot element in between a sub-array with lower values on the left
side, and a sub-array with higher values on the right side, the algorithm calls itself twice, so that Quicksort
runs again for the sub-array on the left side, and for the sub-array on the right side. The Quicksort algorithm
continues to call itself until the sub-arrays are too small to be sorted.
How it works:
Choose a value in the array to be the pivot element (here, the last element).
Order the rest of the array so that values lower than the pivot are on its left, and higher values on its right.
Swap the pivot element with the first element of the higher values so that the pivot element lands in between
the lower and higher values.
Do the same operations (recursively) for the sub-arrays on the left and right side of the pivot element.
Continue reading to fully understand the Quicksort algorithm and how to implement it yourself.
Before we implement the Quicksort algorithm in a programming language, let's manually run through a
short array, just to get the idea.
Step 1: We start with an unsorted array.
[ 11, 9, 12, 7, 3]
Step 2: We choose the last value 3 as the pivot element.
[ 11, 9, 12, 7, 3]
Step 3: The rest of the values in the array are all greater than 3, and must be on the right side of 3. Swap 3
with 11.
[ 3, 9, 12, 7, 11]
Step 4: Value 3 is now in the correct position. We need to sort the values to the right of 3. We choose the
last value 11 as the new pivot element.
[ 3, 9, 12, 7, 11]
Step 5: The value 7 must be to the left of pivot value 11, and 12 must be to the right of it. Move 7 and 12.
[ 3, 9, 7, 12, 11]
Step 6: Swap 11 with 12 so that lower values 9 and 7 are on the left side of 11, and 12 is on the right side.
[ 3, 9, 7, 11, 12]
Step 7: 11 and 12 are in the correct positions. We choose 7 as the pivot element in sub-array [ 9, 7], to the
left of 11.
[ 3, 9, 7, 11, 12]
Step 8: We must swap 9 with 7.
[ 3, 7, 9, 11, 12]
And now, the array is sorted.
Code
def partition(array, low, high):
    pivot = array[high]
    i = low - 1
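The snippet above is cut off after the first lines of partition. A complete sketch, following the description above (last element as pivot, i.e. the Lomuto partition scheme), might look like this:

```python
def partition(array, low, high):
    pivot = array[high]  # last element is the pivot
    i = low - 1          # boundary of the "lower values" region
    for j in range(low, high):
        if array[j] <= pivot:
            i += 1
            array[i], array[j] = array[j], array[i]
    # Swap the pivot with the first of the higher values,
    # so it lands between the lower and higher values
    array[i + 1], array[high] = array[high], array[i + 1]
    return i + 1         # final position of the pivot

def quicksort(array, low=0, high=None):
    if high is None:
        high = len(array) - 1
    if low < high:
        pivot_index = partition(array, low, high)
        # Recurse on the sub-arrays left and right of the pivot
        quicksort(array, low, pivot_index - 1)
        quicksort(array, pivot_index + 1, high)
    return array

print(quicksort([11, 9, 12, 7, 3]))  # [3, 7, 9, 11, 12]
```

Running it on the array from the manual run above produces the same sorted result, [3, 7, 9, 11, 12].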
Binary Search
Binary search is an efficient algorithm for finding an item from a sorted list or array. It works by repeatedly
dividing the search interval in half. If the value of the search key is less than the item in the middle of the
interval, the search continues on the lower half. Otherwise, it continues on the upper half.
Here is a step-by-step breakdown of how binary search works:
Steps of Binary Search:
1. Start with two pointers:
o One pointing to the start of the list (beg).
o One pointing to the end of the list (end).
2. Find the middle element:
o Calculate the middle index using: middle = (beg + end) / 2 (rounded down to an integer).
o Compare the middle element with the target value.
3. Check if the middle element is the target:
o If the element at the middle index equals the target value, return the index (the search is
successful).
4. Narrow the search range:
o If the target value is less than the middle element, move the end pointer to middle - 1 (search
the left half).
o If the target value is greater than the middle element, move the beg pointer to middle + 1
(search the right half).
5. Repeat the process:
o Keep repeating the process until the beg pointer exceeds the end
pointer (i.e., when the range is invalid), meaning the item is not present in the array.
6. Return -1 if not found:
o If the search ends without finding the target value, return -1 (indicating that the item is not in
the array).
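The steps above can be written out as a short Python sketch; the names beg, end, and middle follow the pointers used in the steps:

```python
def binary_search(arr, target):
    beg = 0
    end = len(arr) - 1
    while beg <= end:                  # the search range is still valid
        middle = (beg + end) // 2
        if arr[middle] == target:
            return middle              # search successful
        elif target < arr[middle]:
            end = middle - 1           # search the left half
        else:
            beg = middle + 1           # search the right half
    return -1                          # target is not in the array

print(binary_search([3, 7, 9, 11, 12], 11))  # 3
print(binary_search([3, 7, 9, 11, 12], 5))   # -1
```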
For example, consider a sorted array of 9 elements (indices 0 to 8), with
mid = (beg + end)/2
So, in the given array:
beg = 0
end = 8
mid = (0 + 8)/2 = 4. So, 4 is the mid of the array.
If the element at index 4 equals the element to search, it is found, and the algorithm returns the index of the
matched element; otherwise the search continues in the left or right half as described above.
Time Complexity:
● Best case: O(1) if the middle element is the target.
● Average and Worst case: O(log n), where n is the number of elements in the array. This is because
with each step, the search range is halved.
Key Points:
● Pre-condition: The array must be sorted for binary search to work correctly.
● Efficiency: It is much faster than linear search, especially for large datasets, because it cuts the
search space in half with each step.
Binary Tree Traversal
Binary tree traversal refers to the process of visiting all nodes in a binary tree in a systematic order. There
are several ways to traverse a binary tree, depending on the order in which nodes are visited. The most
common binary tree traversal techniques are:
1. In-order Traversal
2. Pre-order Traversal
3. Post-order Traversal
1. In-order Traversal (LNR)
In this traversal method, the nodes are recursively visited in this order:
Left subtree → Node → Right subtree
Steps:
1. Traverse the left subtree (recursively).
2. Visit the root node.
3. Traverse the right subtree (recursively).
This traversal is particularly useful when dealing with binary search trees (BST), as it visits nodes in
ascending order.
2. Pre-order Traversal (NLR)
In this traversal method, the nodes are recursively visited in this order:
Node → Left subtree → Right subtree
Steps:
1. Visit the root node.
2. Traverse the left subtree (recursively).
3. Traverse the right subtree (recursively).
Pre-order traversal is useful when you want to create a copy of the binary tree, or serialize it.
3. Post-order Traversal (LRN)
In this traversal method, the nodes are recursively visited in this order:
Left subtree → Right subtree → Node
Steps:
1. Traverse the left subtree (recursively).
2. Traverse the right subtree (recursively).
3. Visit the root node.
Post-order traversal is useful when you need to delete or free the memory of a tree (or when performing
calculations on child nodes before the parent node).
Example:
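The calls below rely on a tree and on inorder, preorder, and postorder functions whose definitions are missing from these notes. A sketch consistent with the output shown (root 2 with children 1 and 3, where 1 has children 4 and 5) would be:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def inorder(node):       # Left -> Node -> Right
    if node:
        inorder(node.left)
        print(node.value, end=" ")
        inorder(node.right)

def preorder(node):      # Node -> Left -> Right
    if node:
        print(node.value, end=" ")
        preorder(node.left)
        preorder(node.right)

def postorder(node):     # Left -> Right -> Node
    if node:
        postorder(node.left)
        postorder(node.right)
        print(node.value, end=" ")

# Build the example tree: 2 is the root, 1 and 3 its children,
# and 4 and 5 the children of 1
root = Node(2)
root.left = Node(1)
root.right = Node(3)
root.left.left = Node(4)
root.left.right = Node(5)
```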
# In-order Traversal
print("In-order Traversal:")
inorder(root)
print()
# Pre-order Traversal
print("Pre-order Traversal:")
preorder(root)
print()
# Post-order Traversal
print("Post-order Traversal:")
postorder(root)
print()
Output:
In-order Traversal:
4 1 5 2 3
Pre-order Traversal:
2 1 4 5 3
Post-order Traversal:
4 5 1 3 2
Summary of Binary Tree Traversal:
1. In-order (LNR): Left → Node → Right. Useful in binary search trees (BST) for visiting nodes in
ascending order.
2. Pre-order (NLR): Node → Left → Right. Useful for copying or serializing the tree.
3. Post-order (LRN): Left → Right → Node. Useful for deletion or bottom-up processing.
These traversal methods are foundational for many algorithms and problems involving binary trees, such as
searching, balancing, and tree manipulations.
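The claim that in-order traversal visits BST nodes in ascending order can be checked with a small self-contained sketch; the insert helper here is a hypothetical standard BST insertion, not part of the notes above:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    # Standard BST insertion: smaller values go left, larger go right
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def inorder(node, out):
    # Left -> Node -> Right collects BST values in ascending order
    if node:
        inorder(node.left, out)
        out.append(node.value)
        inorder(node.right, out)
    return out

root = None
for v in [11, 9, 12, 7, 3]:
    root = insert(root, v)
print(inorder(root, []))  # [3, 7, 9, 11, 12]
```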