cheatsheet

The document provides an overview of various algorithms including Merge Sort, Depth First Search, Breadth First Search, Dijkstra's algorithm, and Kruskal's and Prim's algorithms for finding minimum spanning trees. It also discusses the Naïve Pattern Search algorithm, the heapify procedure, and the divide and conquer approach for finding minimum values in arrays. Additionally, it addresses the concept of stable sorting and analyzes the time complexities of the discussed algorithms.

1. [6] Describe and analyze Merge Sort algorithm.

Description: Merge Sort is a divide and conquer algorithm that:


• Divides the array into two halves.
• Recursively sorts each half.
• Merges the sorted halves in O(n) time.
It guarantees O(n log n) time complexity in all cases.

Pseudocode:
Merge Sort:
MERGE_SORT(A, left, right)
1. if left < right:
2. mid = (left + right) / 2
3. MERGE_SORT(A, left, mid)
4. MERGE_SORT(A, mid + 1, right)
5. MERGE(A, left, mid, right)
Merge Function:
MERGE(A, left, mid, right)
1. Create temporary arrays for left and right halves.
2. Compare elements and merge them back into A.
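As a concrete sketch, the pseudocode above can be written as runnable Python (0-indexed arrays assumed; the `<=` comparison in MERGE keeps the sort stable):

```python
def merge_sort(a, left, right):
    if left < right:
        mid = (left + right) // 2
        merge_sort(a, left, mid)
        merge_sort(a, mid + 1, right)
        merge(a, left, mid, right)

def merge(a, left, mid, right):
    # Copy the two sorted halves, then merge them back into a[left..right].
    left_part = a[left:mid + 1]
    right_part = a[mid + 1:right + 1]
    i = j = 0
    k = left
    while i < len(left_part) and j < len(right_part):
        if left_part[i] <= right_part[j]:  # <= keeps equal keys in input order
            a[k] = left_part[i]; i += 1
        else:
            a[k] = right_part[j]; j += 1
        k += 1
    while i < len(left_part):
        a[k] = left_part[i]; i += 1; k += 1
    while j < len(right_part):
        a[k] = right_part[j]; j += 1; k += 1
```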

Complexity Analysis:
Merge Sort follows the recurrence:

T(n) = 2T(n/2) + cn

Expanding one level, T(n/2) = 2T(n/4) + c(n/2). Substituting into T(n):

T(n) = 4T(n/4) + 2c(n/2) + cn = 4T(n/4) + 2cn

After k levels of expansion, each subproblem has size n/2^k:

T(n) = 2^k T(n/2^k) + kcn

Stopping at n/2^k = 1 gives k = log2 n:

T(n) = O(n log n)

Thus, Merge Sort runs in O(n log n) time in all cases.

2. [3] With the help of a diagram and proper explanation prove
that a complete binary tree has depth of log n and width of n/2.
Proof of Depth: log n
In a complete binary tree, each level doubles the number of nodes.
- The number of nodes at level h is 2^h.
- The total number of nodes in a tree of height h follows a geometric series:

n = 1 + 2 + 4 + · · · + 2^h = 2^(h+1) − 1
⇒ 2^(h+1) = n + 1
⇒ h + 1 = log2(n + 1)
⇒ h ≈ log2 n

Thus, the depth of a complete binary tree is O(log n).

Proof of Width: n/2
The width of a binary tree is the number of nodes at the level with the most nodes.
- In a complete binary tree, the last level contains the most nodes.
- The last level is at depth h, and the number of nodes at this level is 2^h.
- Since 2^(h+1) ≈ n, it follows that 2^h ≈ n/2.
Thus, the maximum width of a complete binary tree is n/2.
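Both bounds can be checked numerically; a quick Python sanity check over perfect binary trees of height h:

```python
import math

# A perfect binary tree of height h has n = 2^(h+1) - 1 nodes, so its
# depth is log2(n+1) - 1 ~ log2 n and its last level holds (n+1)/2 ~ n/2 nodes.
for h in range(1, 11):
    n = 2 ** (h + 1) - 1
    assert h == int(math.log2(n + 1)) - 1      # depth ~ log2 n
    assert 2 ** h == (n + 1) // 2              # width ~ n/2
```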

3. [4] What is the heapify procedure? What is the run time of min-heapify? Explain.
Heapify Procedure:
The heapify procedure is used to maintain the heap property in a binary
heap.
Heap Property:
• Min-heap: Value(parent) ≤ Value(children)
• Max-heap: Value(parent) ≥ Value(children)
Min-Heapify Algorithm:
Given a node at index i in an array representation of a binary heap:

1. Let left = 2i (left child index)


2. Let right = 2i + 1 (right child index)

3. Find the smallest among A[i], A[left], and A[right]
4. If A[i] is not the smallest:
• Swap A[i] with the smaller child
• Recursively call Min-Heapify(A, smallest)
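The steps above can be sketched in Python; note this version uses 0-indexed lists, so the children sit at 2i + 1 and 2i + 2 rather than the 1-indexed 2i and 2i + 1 used above:

```python
def min_heapify(a, i, n):
    # Restore the min-heap property at index i, assuming both subtrees
    # of i already satisfy it. n is the heap size.
    left, right = 2 * i + 1, 2 * i + 2
    smallest = i
    if left < n and a[left] < a[smallest]:
        smallest = left
    if right < n and a[right] < a[smallest]:
        smallest = right
    if smallest != i:
        a[i], a[smallest] = a[smallest], a[i]  # swap with the smaller child
        min_heapify(a, smallest, n)            # sift down: one call per level, O(log n)
```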

Time Complexity Analysis


The Min-Heapify procedure moves an element down the heap to restore the
heap property.
- In the worst case, the element moves from the root to a leaf.
- The height of a binary heap with n elements is O(log n).
- Each step involves a constant number of comparisons and swaps, i.e., O(1)
per level.
Thus, the total time complexity of Min-Heapify is:

O(log n)

4. [5] Divide and Conquer Algorithm to Find Minimum Value in an Array
Algorithm:
Using the divide and conquer approach, we split the array into two halves,
recursively find the minimum in each half, and then return the smaller of the
two.
Pseudocode:

MINIMUM(A, left, right):
    if left == right:
        return A[left]
    mid = (left + right) / 2
    leftMin = MINIMUM(A, left, mid)
    rightMin = MINIMUM(A, mid+1, right)
    return min(leftMin, rightMin)
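A direct Python translation of the pseudocode:

```python
def minimum(a, left, right):
    # Base case: a single element is its own minimum.
    if left == right:
        return a[left]
    mid = (left + right) // 2
    # Recursively find the minimum of each half, then take the smaller.
    return min(minimum(a, left, mid), minimum(a, mid + 1, right))
```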

Complexity Analysis:
The recurrence relation for this algorithm is:

T(n) = 2T(n/2) + O(1)
     = 2(2T(n/4) + O(1)) + O(1)
     = 4T(n/4) + 2·O(1) + O(1)
     = 8T(n/8) + 4·O(1) + 2·O(1) + O(1)

After k levels, the problem reduces to size n/2^k = 1, meaning k = log2 n. The O(1) terms form a geometric series, 1 + 2 + 4 + · · · + 2^(k−1) = 2^k − 1 = n − 1, so:

T(n) = 2^(log2 n) · T(1) + O(n) = n · T(1) + O(n)

Since T(1) = O(1), we get:

T(n) = O(n)
This means the algorithm runs in linear time, which is optimal for finding
the minimum element in an unsorted array.

5. [5] Illustrate the operation of Radix Sort on the following list of numbers:
329, 457, 657, 839, 436, 720, 355.
Hint: Radix Sort sorts the array with a stable sort on each digit, from least significant to most significant. With d digits and base k, the total cost is O(d(n + k)); here d = 3 and k = 10, so the sort runs in O(n).
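As an illustration, a minimal Python sketch of LSD Radix Sort: each pass distributes the numbers into buckets 0-9 by one digit and reads them back in order, which is stable.

```python
def radix_sort(nums, digits=3):
    # One stable counting pass per decimal digit, least significant first.
    for d in range(digits):
        buckets = [[] for _ in range(10)]
        for x in nums:
            buckets[(x // 10 ** d) % 10].append(x)  # stable: preserves input order
        nums = [x for b in buckets for x in b]
    return nums
```

On the given list the passes produce: ones digit → 720, 355, 436, 457, 657, 329, 839; tens digit → 720, 329, 436, 839, 355, 457, 657; hundreds digit → 329, 355, 436, 457, 657, 720, 839 (sorted).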

6. [2] What is stable sorting?
Answer: A sorting algorithm is stable if it preserves the relative order of
equal elements in the input. That is, if two elements have the same key and
appear in order in the original array, they remain in the same order in the sorted
output. Examples of stable sorting algorithms include Merge Sort, Bubble
Sort, and Radix Sort.
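A tiny Python illustration of stability, using hypothetical records that share a key: the two records with key 5 keep their input order after sorting.

```python
# Two records share the key 5; a stable sort keeps (5, "a") before (5, "b")
# because that is their order in the input.
records = [(5, "a"), (3, "x"), (5, "b"), (1, "y")]
by_key = sorted(records, key=lambda r: r[0])  # Python's sorted() is stable
assert by_key == [(1, "y"), (3, "x"), (5, "a"), (5, "b")]
```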

7. Using the Master Method to give asymptotic bounds for the following recurrences:
(a) T(n) = 9T(n/3) + n
Compare this with the recurrence form T(n) = aT(n/b) + f(n):
- a = 9, b = 3, and f(n) = O(n).
- log_b a = log3 9 = 2
Since f(n) = O(n) is polynomially smaller than n^2, Case 1 of the Master Theorem applies, giving:

T(n) = Θ(n^2)
(b) T(n) = 3T(n/4) + 1
- a = 3, b = 4, and f(n) = O(1).
- log_b a = log4 3 ≈ 0.792
Since f(n) = O(1) is polynomially smaller than n^0.792, Case 1 of the Master Theorem applies again, giving:

T(n) = Θ(n^(log4 3)) ≈ Θ(n^0.792)

1. [6] Describe and analyze naïve pattern search algorithm.
The Naïve Pattern Search Algorithm (also called the Brute Force Algorithm) is a straightforward method for finding occurrences of a pattern P of length m in a text T of length n. It works by checking for P at every possible position in T.
Algorithm Steps:
1. Start with the first character of T .
2. Compare each character of P with the corresponding character in T .

3. If all characters of P match, record the position as an occurrence.


4. Move one character forward in T and repeat until reaching n − m + 1.
Pseudocode:
NaïvePatternSearch(T, P):
    n = length(T)
    m = length(P)
    for i = 0 to n - m:
        match = true
        for j = 0 to m - 1:
            if T[i + j] != P[j]:
                match = false
                break
        if match:
            print "Pattern found at index", i
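A compact Python version that collects all match positions instead of printing them:

```python
def naive_pattern_search(text, pattern):
    n, m = len(text), len(pattern)
    hits = []
    for i in range(n - m + 1):          # try every alignment of P in T
        if text[i:i + m] == pattern:    # up to m character comparisons
            hits.append(i)
    return hits
```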

Time Complexity Analysis:


• Best Case: O(n) (When mismatches occur early, e.g., first character
mismatch)
• Worst Case: O(nm) (When all characters match, e.g., searching "aaaa"
in "aaaaaaaa")
Thus, the overall complexity is O(nm) in the worst case. It is inefficient for
large texts but works well for small inputs.

2. [3+3] Perform (a) Depth First and (b) Breadth First Traversals
on the given graph.
(a) Depth First Search (DFS) Traversal: Depth First Search (DFS)
explores as far as possible along each branch before backtracking. It uses a stack
(or recursion) to track nodes.
Algorithm Steps:
1. Start at Node 1 (mark as visited).

2. Move to Node 4 (first unvisited neighbor).
3. Move to Node 2 (next unvisited neighbor of 4).
4. Move to Node 5 (next unvisited neighbor of 2).

5. Move to Node 3 (next unvisited neighbor of 5).


DFS Traversal Order:

1→4→2→5→3

(b) Breadth First Search (BFS) Traversal: Breadth First Search (BFS)
explores all neighbors of a node before moving deeper. It uses a queue to track
nodes.
Algorithm Steps:
1. Start at Node 1, enqueue: [1].

2. Dequeue 1, visit and enqueue neighbor: [4].


3. Dequeue 4, visit and enqueue its neighbors: [2, 5].
4. Dequeue 2, enqueue its neighbor (but 5 is already in queue).

5. Dequeue 5, visit and enqueue its neighbor: [3].


6. Dequeue 3, BFS is complete.
BFS Traversal Order:

1→4→2→5→3
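Both traversals can be sketched in Python. The graph itself is not reproduced here, so the adjacency list below is inferred from the traversal steps above (1→4, 4→{2,5}, 2→5, 5→3):

```python
from collections import deque

# Adjacency list inferred from the traversal steps; an assumption, not the figure.
graph = {1: [4], 2: [5], 3: [], 4: [2, 5], 5: [3]}

def dfs(g, start):
    # Recursive DFS: go as deep as possible before backtracking.
    order, visited = [], set()
    def visit(u):
        visited.add(u); order.append(u)
        for v in g[u]:
            if v not in visited:
                visit(v)
    visit(start)
    return order

def bfs(g, start):
    # BFS: visit all neighbors before moving deeper, using a queue.
    order, seen, q = [], {start}, deque([start])
    while q:
        u = q.popleft(); order.append(u)
        for v in g[u]:
            if v not in seen:
                seen.add(v); q.append(v)
    return order
```

On this adjacency list both traversals happen to produce the same order, 1, 4, 2, 5, 3, matching the answer above.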

3. [5] Use Dijkstra’s algorithm to find the shortest path between source ‘a’ and destination ‘d’ for the weighted graph shown below, showing all the steps.
Dijkstra’s Algorithm: Dijkstra’s algorithm finds the shortest path from
a source node to all other nodes in a weighted graph with non-negative edge
weights. It uses a priority queue (min-heap) to always expand the node with
the shortest known distance.
Algorithm Steps:
1. Initialize distances: Set the distance to the source node a as 0 and all
others as ∞.
2. Use a priority queue (min-heap) to select the node with the smallest ten-
tative distance.
3. Update the distances of its neighboring nodes.

4. Repeat until all nodes are visited or the destination node d has the shortest
path determined.
Step-by-Step Execution:

Iteration        Visited    Distance Updates
Initialization   -          d(a) = 0, d(b) = ∞, d(c) = ∞, d(d) = ∞
1                a          d(b) = 1, d(c) = 4
2                b          d(c) = min(4, 1 + 8) = 4, d(d) = 1 + 10 = 11
3                c          d(d) = min(11, 4 + 3) = 7
4                d          shortest path to d finalized: d(d) = 7
Shortest Path from a to d:

a → c → d (Total cost = 7)

Thus, the shortest path from a to d using Dijkstra’s algorithm is a → c → d


with a total cost of 7.
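The run can be reproduced with a minimal heap-based Python sketch. The figure is not reproduced here, so the edge weights are inferred from the distance-update table above (a-b = 1, a-c = 4, b-c = 8, b-d = 10, c-d = 3):

```python
import heapq

# Edge weights inferred from the table; an assumption, not the original figure.
graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 8), ("d", 10)],
    "c": [("d", 3)],
    "d": [],
}

def dijkstra(g, source):
    dist = {v: float("inf") for v in g}
    dist[source] = 0
    pq = [(0, source)]                  # min-heap of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                    # stale entry, already improved
        for v, w in g[u]:
            if d + w < dist[v]:         # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```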

4. [4+4] Use Kruskal’s algorithm to find a minimal spanning tree for the weighted graph shown below, showing steps. Then, use Prim’s algorithm to do the same thing, using the same weighted graph, again showing steps.

Kruskal’s Algorithm
Kruskal’s algorithm builds the Minimum Spanning Tree (MST) by sorting all
edges by weight and selecting the smallest edges while avoiding cycles.
Steps:
1. Sort all edges in increasing order of weight.
2. Initialize an empty MST.
3. Add edges one by one, ensuring no cycles are formed.
4. Stop when MST contains (V − 1) edges.
Sorted Edges:

(0, 1) = 1, (1, 4) = 1, (2, 3) = 1, (3, 4) = 1, (1, 3) = 3, (1, 2) = 4, (0, 2) = 6

Step-by-step MST construction:


• Pick edge (0,1) → No cycle
• Pick edge (1,4) → No cycle
• Pick edge (2,3) → No cycle
• Pick edge (3,4) → No cycle
• Pick edge (1,3) → Forms a cycle, discard it.

• Pick edge (1,2) → Forms a cycle, discard it.
• Pick edge (0,2) → Not needed.
Final MST using Kruskal’s Algorithm:

(0, 1), (1, 4), (2, 3), (3, 4)
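The steps above can be sketched with a small union-find implementation, using the sorted edge list from the question:

```python
def kruskal(num_vertices, edges):
    parent = list(range(num_vertices))
    def find(x):
        # Find the component root, with path halving for efficiency.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst, total = [], 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:                    # skip edges that would form a cycle
            parent[ru] = rv
            mst.append((u, v)); total += w
    return mst, total

# Edge list (u, v, weight) taken from the sorted edges above.
edges = [(0, 1, 1), (1, 4, 1), (2, 3, 1), (3, 4, 1),
         (1, 3, 3), (1, 2, 4), (0, 2, 6)]
```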

Prim’s Algorithm
Prim’s algorithm starts from a node and grows the MST by selecting the smallest
edge that connects an unvisited node.
Steps:

1. Start with node 0.


2. Pick the smallest edge connecting to an unvisited node.
3. Repeat until all nodes are in the MST.
Step-by-step MST construction:

• Start at node 0.
• Pick edge (0,1) [weight 1].
• Pick edge (1,4) [weight 1].

• Pick edge (3,4) [weight 1].


• Pick edge (2,3) [weight 1].
Final MST using Prim’s Algorithm:

(0, 1), (1, 4), (3, 4), (2, 3)

Conclusion: Both Kruskal’s and Prim’s algorithms yield the same MST with total weight 4.
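Prim’s run can be sketched the same way with a min-heap; the adjacency list is built from the same edge weights listed in the Kruskal section:

```python
import heapq

# Adjacency list built from the edge weights in the question.
adj = {
    0: [(1, 1), (2, 6)],
    1: [(0, 1), (2, 4), (3, 3), (4, 1)],
    2: [(0, 6), (1, 4), (3, 1)],
    3: [(1, 3), (2, 1), (4, 1)],
    4: [(1, 1), (3, 1)],
}

def prim(g, start=0):
    visited = {start}
    pq = [(w, start, v) for v, w in g[start]]  # (weight, from, to)
    heapq.heapify(pq)
    mst, total = [], 0
    while pq and len(visited) < len(g):
        w, u, v = heapq.heappop(pq)
        if v in visited:
            continue                    # edge leads back into the tree
        visited.add(v); mst.append((u, v)); total += w
        for x, wx in g[v]:
            if x not in visited:
                heapq.heappush(pq, (wx, v, x))
    return mst, total
```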

5. [5] Suppose you run a day care for an office building and there are six children A, B, C, D, E, F. You need to assign a locker where each child’s parent can put the child’s food. The children come and leave, so they are not all there at the same time. You have 1-hour time slots from 7:00 a.m. to 9:00 a.m. A star in the table means a child is present at that time. What is the minimum number of lockers necessary? Show how you would assign the lockers.
Solution:
We determine the minimum number of lockers by analyzing overlapping time
slots.
Children present at each time:

• 7:00 AM: A, C, E
• 8:00 AM: B, C
• 9:00 AM: D, E, F

Locker Assignment:
• 7:00 AM: A → L1, C → L2, E → L3
• 8:00 AM: B → L1 (reuses A’s locker), C → L2 (same locker)
• 9:00 AM: D → L1 (reuses B’s locker), E → L3 (same locker), F → L2 (reuses C’s locker)

Minimum lockers required: 3
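The key observation is that the minimum number of lockers equals the maximum number of children present in any single time slot. A quick Python check, with the attendance reconstructed from the solution above:

```python
# Attendance per slot, reconstructed from the solution (the original table
# is not reproduced here).
slots = {
    "7:00": {"A", "C", "E"},
    "8:00": {"B", "C"},
    "9:00": {"D", "E", "F"},
}

# Minimum lockers = peak simultaneous occupancy across all slots.
min_lockers = max(len(kids) for kids in slots.values())
assert min_lockers == 3
```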
