DAA-unit4

The document explains the Merge Sort and Quicksort algorithms, detailing their divide-and-conquer strategies for sorting arrays. It includes step-by-step examples of how each algorithm works, along with Python code implementations and time complexity analysis. Additionally, it covers the Binary Search algorithm and Binary Tree Traversal methods, highlighting their efficiency and traversal techniques.

Merge Sort

The Merge Sort algorithm is a divide-and-conquer algorithm that sorts an array by first breaking it down
into smaller arrays, and then building the array back together the correct way so that it is sorted.
Divide: The algorithm starts by breaking the array into smaller and smaller pieces until each sub-array consists of only one element.
Conquer: The algorithm merges the small pieces of the array back together by putting the lowest values
first, resulting in a sorted array.
The breaking down and building up of the array to sort the array is done recursively.
Each recursive call splits the array into smaller pieces, and each return from a call merges two sub-arrays back together.
The Merge Sort algorithm can be described like this:
How it works:
Divide the unsorted array into two sub-arrays, half the size of the original.
Continue to divide the sub-arrays as long as the current piece of the array has more than one element.
Merge two sub-arrays together by always putting the lowest value first.
Keep merging until there are no sub-arrays left.
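These steps can be written down directly as recursive code. Below is a minimal top-down sketch (our own illustration; the Code section later in this chapter uses an equivalent bottom-up, iterative implementation instead):

def merge_sort_recursive(arr):
    # Base case: an array with 0 or 1 elements is already sorted
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort_recursive(arr[:mid])    # Divide: sort the left half
    right = merge_sort_recursive(arr[mid:])   # Divide: sort the right half
    # Conquer: merge the two sorted halves, always taking the lowest value first
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort_recursive([12, 8, 9, 3, 11, 5, 4]))   # [3, 4, 5, 8, 9, 11, 12]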
The walkthrough below shows how Merge Sort works from a different perspective. As you can see, the array is split into smaller and smaller pieces until it is merged back together. And as the merging happens, values from each sub-array are compared so that the lowest value comes first.

Let's try to do the sorting manually, just to get an even better understanding of how Merge Sort works before
actually implementing it in a programming language.
Step 1: We start with an unsorted array, and we know that it splits in half until the sub-arrays only consist of
one element. The Merge Sort function calls itself two times, once for each half of the array. That means that
the first sub-array will split into the smallest pieces first.
[ 12, 8, 9, 3, 11, 5, 4]
[ 12, 8, 9] [ 3, 11, 5, 4]
[ 12] [ 8, 9] [ 3, 11, 5, 4]
[ 12] [ 8] [ 9] [ 3, 11, 5, 4]

Step 2: The splitting of the first sub-array is finished, and now it is time to merge. 8 and 9 are the first two
elements to be merged. 8 is the lowest value, so that comes before 9 in the first merged sub-array.
[ 12] [ 8, 9] [ 3, 11, 5, 4]

Step 3: The next sub-arrays to be merged are [ 12] and [ 8, 9]. Values in both arrays are compared from the start. 8 is lower than 12, so 8 comes first, and 9 is also lower than 12.
[ 8, 9, 12] [ 3, 11, 5, 4]

Step 4: Now the second big sub-array is split recursively.


[ 8, 9, 12] [ 3, 11, 5, 4]
[ 8, 9, 12] [ 3, 11] [ 5, 4]
[ 8, 9, 12] [ 3] [ 11] [ 5, 4]
Step 5: 3 and 11 are merged back together in the same order as they are shown because 3 is lower than 11.
[ 8, 9, 12] [ 3, 11] [ 5, 4]
Step 6: The sub-array with values 5 and 4 is split, then merged so that 4 comes before 5.
[ 8, 9, 12] [ 3, 11] [ 5] [ 4]
[ 8, 9, 12] [ 3, 11] [ 4, 5]

Step 7: The two sub-arrays on the right are merged. Comparisons are done to create elements in the new
merged array:

1. 3 is lower than 4
2. 4 is lower than 11
3. 5 is lower than 11
4. 11 is the last remaining value
[ 8, 9, 12] [ 3, 4, 5, 11]

Step 8: The two last remaining sub-arrays are merged. Let's look at how the comparisons are done in more
detail to create the new merged and finished sorted array:

3 is lower than 8:
Before [ 8, 9, 12] [ 3, 4, 5, 11]
After: [ 3, 8, 9, 12] [ 4, 5, 11]
Step 9: 4 is lower than 8:
Before [ 3, 8, 9, 12] [ 4, 5, 11]
After: [ 3, 4, 8, 9, 12] [ 5, 11]
Step 10: 5 is lower than 8:
Before [ 3, 4, 8, 9, 12] [ 5, 11]
After: [ 3, 4, 5, 8, 9, 12] [ 11]
Step 11: 8 and 9 are lower than 11:
Before [ 3, 4, 5, 8, 9, 12] [ 11]
After: [ 3, 4, 5, 8, 9, 12] [ 11]
Step 12: 11 is lower than 12:
Before [ 3, 4, 5, 8, 9, 12] [ 11]
After: [ 3, 4, 5, 8, 9, 11, 12]

The sorting is finished!

Code
def merge(left, right):
    result = []
    i = j = 0
    # Compare the fronts of both arrays, always taking the lowest value first
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # Append whatever remains (at most one of these slices is non-empty)
    result.extend(left[i:])
    result.extend(right[j:])
    return result
def mergeSort(arr):
    step = 1  # Starting with sub-arrays of length 1
    length = len(arr)
    while step < length:
        for i in range(0, length, 2 * step):
            left = arr[i:i + step]
            right = arr[i + step:i + 2 * step]
            merged = merge(left, right)
            # Place the merged array back into the original array
            for j, val in enumerate(merged):
                arr[i + j] = val
        step *= 2  # Double the sub-array length for the next iteration
    return arr

unsortedArr = [3, 7, 6, -10, 15, 23.5, 55, -13]
sortedArr = mergeSort(unsortedArr)
print("Sorted array:", sortedArr)

Time complexity
The time complexity for Merge Sort is O(n log n), and it is pretty much the same for all kinds of input: the algorithm needs to split the array and merge it back together whether it is already sorted or completely shuffled.
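One way to see why (a standard argument, not part of the original text): splitting in half gives the recurrence

T(n) = 2 T(n/2) + c·n

where c·n is the total cost of merging at one level of recursion. The array can only be halved about log2(n) times, and every level does about c·n total merging work, so T(n) = O(n log n).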

Quick Sort

The Quicksort algorithm takes an array of values, chooses one of the values as the 'pivot' element, and
moves the other values so that lower values are on the left of the pivot element, and higher values are on the
right of it. The last element of the array is chosen to be the pivot element, but we could also have chosen the
first element of the array, or any element in the array really.

Then, the Quicksort algorithm does the same operation recursively on the sub-arrays to the left and right side
of the pivot element. This continues until the array is sorted.

Recursion is when a function calls itself.

After the Quicksort algorithm has placed the pivot element between a sub-array of lower values on its left and a sub-array of higher values on its right, it calls itself twice, so that Quicksort runs again for the sub-array on the left side and for the sub-array on the right side. The Quicksort algorithm continues to call itself until the sub-arrays are so small that they need no sorting.

How it works:

Choose a value in the array to be the pivot element.


Order the rest of the array so that lower values than the pivot element are on the left, and higher values are
on the right.

Swap the pivot element with the first element of the higher values so that the pivot element lands in between
the lower and higher values.

Do the same operations (recursively) for the sub-arrays on the left and right side of the pivot element.
Continue reading to fully understand the Quicksort algorithm and how to implement it yourself.
Before we implement the Quicksort algorithm in a programming language, let's manually run through a
short array, just to get the idea.
Step 1: We start with an unsorted array.
[ 11, 9, 12, 7, 3]
Step 2: We choose the last value 3 as the pivot element.
[ 11, 9, 12, 7, 3]
Step 3: The rest of the values in the array are all greater than 3, and must be on the right side of 3. Swap 3
with 11.
[ 3, 9, 12, 7, 11]
Step 4: Value 3 is now in the correct position. We need to sort the values to the right of 3. We choose the
last value 11 as the new pivot element.
[ 3, 9, 12, 7, 11]
Step 5: The value 7 must be to the left of pivot value 11, and 12 must be to the right of it. Move 7 and 12.
[ 3, 9, 7, 12, 11]
Step 6: Swap 11 with 12 so that the lower values 9 and 7 are on the left side of 11, and 12 is on the right side.
[ 3, 9, 7, 11, 12]
Step 7: 11 and 12 are in the correct positions. We choose 7 as the pivot element in sub-array [ 9, 7], to the
left of 11.
[ 3, 9, 7, 11, 12]
Step 8: We must swap 9 with 7.
[ 3, 7, 9, 11, 12]
And now, the array is sorted.
Code
def partition(array, low, high):
    pivot = array[high]  # Choose the last element as the pivot
    i = low - 1
    for j in range(low, high):
        if array[j] <= pivot:
            i += 1
            array[i], array[j] = array[j], array[i]
    # Place the pivot in between the lower and higher values
    array[i+1], array[high] = array[high], array[i+1]
    return i+1

def quicksort(array, low=0, high=None):
    if high is None:
        high = len(array) - 1
    if low < high:
        pivot_index = partition(array, low, high)
        quicksort(array, low, pivot_index-1)   # Sort the left sub-array
        quicksort(array, pivot_index+1, high)  # Sort the right sub-array

my_array = [64, 4, 34, 25, 12, 22, 11, 90, 5]
quicksort(my_array)
print("Sorted array:", my_array)

Quicksort Time Complexity


The worst case scenario for Quicksort is O(n²). This happens when the pivot element is either the highest or lowest value in every sub-array, which leads to a long chain of recursive calls. With our implementation above, this occurs when the array is already sorted.
But on average, the time complexity for Quicksort is actually just O(n log n), which is a lot better than for the previous sorting algorithms we have looked at. That is why Quicksort is so popular.
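A common way to avoid this worst case (a standard variation, not part of the implementation above) is to pick the pivot at random before partitioning, so an already-sorted array no longer degrades to O(n²). A minimal sketch reusing the partition function defined earlier:

import random

def randomized_quicksort(array, low=0, high=None):
    if high is None:
        high = len(array) - 1
    if low < high:
        # Swap a randomly chosen element into the last position,
        # which is where partition() expects the pivot to be
        r = random.randint(low, high)
        array[r], array[high] = array[high], array[r]
        pivot_index = partition(array, low, high)
        randomized_quicksort(array, low, pivot_index-1)
        randomized_quicksort(array, pivot_index+1, high)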

Binary Search

Binary search is an efficient algorithm for finding an item from a sorted list or array. It works by repeatedly
dividing the search interval in half. If the value of the search key is less than the item in the middle of the
interval, the search continues on the lower half. Otherwise, it continues on the upper half.
Here is a step-by-step breakdown of how binary search works:
Steps of Binary Search:
1. Start with two pointers:
o One pointing to the start of the list (beg).
o One pointing to the end of the list (end).
2. Find the middle element:
o Calculate the middle index using: middle = (beg + end)/2, rounded down.
o Compare the middle element with the target value.
3. Check if the middle element is the target:
o If the element at the middle index equals the target value, return the index (the search is
successful).
4. Narrow the search range:
o If the target value is less than the middle element, move the end pointer to middle - 1 (search
the left half).
o If the target value is greater than the middle element, move the beg pointer to middle + 1
(search the right half).
5. Repeat the process:
o Keep repeating the process until the beg pointer exceeds the end pointer (i.e., when the range is invalid), meaning the item is not present in the array.
6. Return -1 if not found:
o If the search ends without finding the target value, return -1 (indicating that the item is not in the array).
Example: suppose the array has 9 elements, indexed 0 to 8, and the target happens to be the middle element.
mid = (beg + end)/2
So, in the given array:
beg = 0
end = 8
mid = (0 + 8)/2 = 4. So, 4 is the mid index of the array.
If the element at index 4 matches the target, the element is found on the very first comparison, and the algorithm returns index 4.
Time Complexity:
● Best case: O(1), if the middle element is the target.
● Average and worst case: O(log n), where n is the number of elements in the array, because each step halves the search range. For example, an array of 1,000,000 elements needs at most about 20 comparisons, since 2^20 = 1,048,576.

Python Code Implementation:


def binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2     # Find the middle index
        if arr[mid] == target:      # Target found
            return mid
        elif arr[mid] < target:     # Target is in the right half
            low = mid + 1
        else:                       # Target is in the left half
            high = mid - 1
    return -1  # Target not found
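
A quick usage example (the sample array and target values are ours):

sorted_arr = [3, 4, 5, 8, 9, 11, 12]
print(binary_search(sorted_arr, 11))   # 5  (11 is at index 5)
print(binary_search(sorted_arr, 6))    # -1 (6 is not in the array)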

Key Points:
● Pre-condition: The array must be sorted for binary search to work correctly.
● Efficiency: It is much faster than linear search, especially for large datasets, because it cuts the
search space in half with each step.
Binary Tree Traversal
Binary tree traversal refers to the process of visiting all nodes in a binary tree in a systematic order. There
are several ways to traverse a binary tree, depending on the order in which nodes are visited. The most
common binary tree traversal techniques are:
1. In-order Traversal
2. Pre-order Traversal
3. Post-order Traversal
1. In-order Traversal (LNR)
In this traversal method, the nodes are recursively visited in this order:
Left subtree → Node → Right subtree

Steps:

● Traverse the left subtree recursively.

● Visit the node (print or process the data).


● Traverse the right subtree recursively.

This traversal is particularly useful when dealing with binary search trees (BST), as it visits nodes in
ascending order.

Example:

For the binary tree:


  2
 / \
1   3

The in-order traversal would visit the nodes in the order: 1 → 2 → 3.

2. Pre-order Traversal (NLR)

In this traversal method, the nodes are recursively visited in this order:

● Node → Left subtree → Right subtree

Steps:

● Visit the node (print or process the data).


● Traverse the left subtree recursively.
● Traverse the right subtree recursively.

Pre-order traversal is useful when you want to create a copy of the binary tree, or serialize it.

Example:

For the binary tree:


  2
 / \
1   3

The pre-order traversal would visit the nodes in the order: 2 → 1 → 3.
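
For instance, copying a tree follows the pre-order pattern naturally, because each node must be created before its children can be attached. A minimal sketch (the function name copy_tree is our own; it uses the Node class defined in the code section below):

def copy_tree(root):
    if root is None:
        return None
    new_node = Node(root.value)              # Visit node: create the copy first
    new_node.left = copy_tree(root.left)     # Then copy the left subtree
    new_node.right = copy_tree(root.right)   # Then copy the right subtree
    return new_node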

3. Post-order Traversal (LRN)

In this traversal method, the nodes are recursively visited in this order:

● Left subtree → Right subtree → Node

Steps:

● Traverse the left subtree recursively.


● Traverse the right subtree recursively.
● Visit the node (print or process the data).

Post-order traversal is useful when you need to delete or free the memory of a tree (or when performing
calculations on child nodes before the parent node).

Example:

For the binary tree:


  2
 / \
1   3

The post-order traversal would visit the nodes in the order: 1 → 3 → 2.
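
For example, computing the height of a tree is a natural post-order (bottom-up) calculation, because a node's height depends on the heights of its children. A minimal sketch (the function name tree_height is our own):

def tree_height(root):
    if root is None:
        return 0
    left_height = tree_height(root.left)       # Process the left subtree first
    right_height = tree_height(root.right)     # Then the right subtree
    return 1 + max(left_height, right_height)  # Then the node itself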

Traversal Algorithms in Code

Here are Python implementations for each of these traversal methods.

Binary Tree Node Class:


class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

1. In-order Traversal (Recursive):


def inorder(root):
    if root:
        inorder(root.left)            # Visit left subtree
        print(root.value, end=" ")    # Visit node
        inorder(root.right)           # Visit right subtree

2. Pre-order Traversal (Recursive):


def preorder(root):
    if root:
        print(root.value, end=" ")    # Visit node
        preorder(root.left)           # Visit left subtree
        preorder(root.right)          # Visit right subtree

3. Post-order Traversal (Recursive):


def postorder(root):
    if root:
        postorder(root.left)          # Visit left subtree
        postorder(root.right)         # Visit right subtree
        print(root.value, end=" ")    # Visit node
Example Tree and Traversals:
Consider the following binary tree:
    2
   / \
  1   3
 / \
4   5

Let's perform all three traversals on this tree:


# Construct the tree
root = Node(2)
root.left = Node(1)
root.right = Node(3)
root.left.left = Node(4)
root.left.right = Node(5)

# In-order Traversal
print("In-order Traversal:")
inorder(root)
print()

# Pre-order Traversal
print("Pre-order Traversal:")
preorder(root)
print()

# Post-order Traversal
print("Post-order Traversal:")
postorder(root)
print()

Output:
In-order Traversal:
4 1 5 2 3
Pre-order Traversal:
2 1 4 5 3
Post-order Traversal:
4 5 1 3 2

Summary of Binary Tree Traversal:

1. In-order (LNR): Left → Node → Right. Useful in binary search trees (BST) for visiting nodes in
ascending order.
2. Pre-order (NLR): Node → Left → Right. Useful for copying or serializing the tree.
3. Post-order (LRN): Left → Right → Node. Useful for deletion or bottom-up processing.

These traversal methods are foundational for many algorithms and problems involving binary trees, such as
searching, balancing, and tree manipulations.
