
Design & Analysis of Algorithms

(23CSH-282)

ALL UNITS - NOTES & QUESTIONS

​ Compiled by : Subhayu

Contents :
Unit 1
Unit 2
Unit 3
MST 1 and 2 solutions
Sample Questions
UNIT-1 : Introduction to Algorithms
Contact Hours: 10

Chapter 1 : Algorithm Performance Analysis

1. Analysis Framework
Analyzing an algorithm helps determine how efficient it is in terms of time and
space. The two main types of analysis are:

1.1. Input Size (n)

Most algorithms are analyzed based on the size of the input. As n increases, we
want to know how the algorithm’s performance changes.

1.2. Performance Cases

● Worst Case: The maximum number of steps the algorithm takes for any input of size n.

○ Example: Linear search in an array of n elements, where the item is not present.

● Average Case: The expected number of steps over all possible inputs of size n.

● Best Case: The minimum number of steps the algorithm takes.

○ Example: In linear search, if the item is found at the first position.

2. Asymptotic Notations
Asymptotic notations describe the growth of an algorithm's running time or
space as the input size becomes very large. These notations provide a high-level
understanding of an algorithm's efficiency.

2.1. Big O Notation O(g(n))

●​ Describes the upper bound of an algorithm.​

●​ It tells us the worst-case growth rate.​

●​ Example: If an algorithm takes at most 3n + 2 steps, it is O(n).​

If there exist positive constants c and n₀ such that f(n) ≤ c · g(n) for all n ≥ n₀, then f(n) = O(g(n)).

2.2. Omega Notation Ω(g(n))

●​ Describes the lower bound.​

●​ It tells us the best-case growth rate.​

●​ Example: If an algorithm always takes at least n steps, it is Ω(n).​

If there exist positive constants c and n₀ such that f(n) ≥ c · g(n) for all n ≥ n₀, then f(n) = Ω(g(n)).

2.3. Theta Notation Θ(g(n))

● Describes the tight bound (both upper and lower).

● It gives the exact order of growth when the upper and lower bounds match; note this is a statement about growth rate, not the same thing as average-case analysis.

● Example: If f(n) is always between 2n and 4n, then f(n) = Θ(n).

If there exist positive constants c₁, c₂, and n₀ such that c₁ · g(n) ≤ f(n) ≤ c₂ · g(n) for all n ≥ n₀, then f(n) = Θ(g(n)).

3. Time and Space Complexity

3.1. Time Complexity


Measures how the time taken by an algorithm increases with the size of the input.

●​ Example (Iterative):​

for i in range(n):

print(i)

●​ Time Complexity: O(n) (loop runs n times)​

●​ Example (Nested Loops):​



for i in range(n):

for j in range(n):

print(i, j)

Time Complexity: O(n²)​

3.2. Space Complexity

Measures how much additional memory the algorithm uses based on input size.

●​ Example:​

arr = [0] * n
●​ Space Complexity: O(n)​

If an algorithm uses a constant amount of extra memory: O(1) (constant space)

4. Iterative vs Recursive Algorithm Analysis

4.1. Iterative Algorithm

Time complexity depends on the number of iterations.

●​ Example:​

def sum_n(n):
    total = 0
    for i in range(1, n+1):
        total += i
    return total

●​ Time Complexity: O(n)​

4.2. Recursive Algorithm

Time complexity depends on the number of recursive calls and the work done in
each call.

●​ Example:​

def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n-1)

Time Complexity: O(n)​


Space Complexity: O(n) due to recursive call stack​

5. Recurrence Relations
A recurrence relation expresses the time complexity of a recursive function in
terms of the time complexity of smaller inputs.

5.1. Substitution Method

Used to guess the form of the solution and prove it using mathematical
induction.

● Example: Given:

T(n) = T(n-1) + n
T(1) = 1

● Guess: T(n) = O(n²)

● Check by expansion: T(n) = T(n-1) + n = T(n-2) + (n-1) + n = … = 1 + 2 + … + n = n(n+1)/2, which is O(n²).

● Use induction to prove the guess formally.

5.2. Recursion Tree Method

Used to visualize the recurrence relation by drawing a tree showing how the input
is broken down at each level.

●​ Example:​

T(n) = 2T(n/2) + n
●​ Each level does n work. Tree has log n levels → Total = n log n​

5.3. Master Theorem

Used for solving divide-and-conquer recurrence relations of the form:

●​ T(n) = aT(n/b) + f(n)

Where:

●​ a ≥ 1 and b > 1​

●​ f(n) is a function of n​

Master Theorem Cases:

Case 1: If f(n) = O(n^(log_b a − ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).

Case 2: If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).

Case 3: If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and the regularity condition a·f(n/b) ≤ c·f(n) holds for some c < 1, then T(n) = Θ(f(n)).

Example:​

T(n) = 2T(n/2) + n → a = 2, b = 2, f(n) = n

log_b a = log₂2 = 1

f(n) = Θ(n^1) → Case 2

Result: T(n) = Θ(n log n)

Summary Table:

Topic                    Key Points
Cases                    Worst, best, and average case help predict performance
Asymptotic Notations     O (upper bound), Ω (lower bound), Θ (tight bound)
Complexity               Time and space help measure efficiency
Iterative vs Recursive   Iterative – loops; Recursive – function calls
Recurrence               Used to express recursive complexities
Methods                  Substitution, Recursion Tree, Master Theorem

Chapter 2 : Divide and Conquer

1. Understanding the Divide and Conquer Approach

Definition:

The Divide and Conquer strategy breaks a problem into smaller sub-problems,
solves each recursively, and then combines the results to form the solution to the
original problem.

Steps:

1.​ Divide: Break the problem into smaller parts.​

2.​ Conquer: Recursively solve each sub-problem.​

3.​ Combine: Merge the results of the sub-problems to get the final answer.​

Example: Merge Sort, Quick Sort, Binary Search

2. Algorithms Using Divide and Conquer

2.1 Find Minimum and Maximum (Using Divide and Conquer)

Instead of comparing all elements linearly, the array is divided into halves and
minimum/maximum is found recursively.
Algorithm (High-Level):

●​ If the array has one element: return it as min and max.​

●​ If two elements: compare and return min and max.​

●​ If more than two: divide into halves, find min/max of each, then compare
results.​

Time Complexity:

T(n) = 2T(n/2) + 2 → O(n)
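The high-level algorithm above can be sketched in Python; this is a minimal illustration (the name min_max and the (lo, hi) interface are my own, not from the notes):

```python
def min_max(arr, lo, hi):
    # one element: it is both min and max
    if lo == hi:
        return arr[lo], arr[lo]
    # two elements: a single comparison decides both
    if hi == lo + 1:
        return (arr[lo], arr[hi]) if arr[lo] < arr[hi] else (arr[hi], arr[lo])
    # more than two: split, solve each half, combine with two comparisons
    mid = (lo + hi) // 2
    min1, max1 = min_max(arr, lo, mid)
    min2, max2 = min_max(arr, mid + 1, hi)
    return min(min1, min2), max(max1, max2)
```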

2.2 Merge Sort (2-Way Merge Sort)

●​ Divide: The array is divided into two halves.​

●​ Conquer: Sort the halves recursively.​

●​ Combine: Merge the sorted halves.​

Code Outline:
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

Time Complexity: O(n log n)

Space Complexity: O(n) (due to auxiliary arrays)
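The outline above relies on a merge helper that is not shown; a minimal sketch of it:

```python
def merge(left, right):
    # combine two already-sorted lists into one sorted list
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # at most one of these slices is non-empty
    result.extend(left[i:])
    result.extend(right[j:])
    return result
```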

2.3 Quick Sort

●​ Divide: Choose a pivot and partition the array such that:​


○​ Left part < pivot​

○​ Right part > pivot​

●​ Conquer: Recursively sort the subarrays.​

Code Outline:
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    less = [x for x in arr[1:] if x <= pivot]
    more = [x for x in arr[1:] if x > pivot]
    return quick_sort(less) + [pivot] + quick_sort(more)

Time Complexity:

●​ Worst Case: O(n^2) (if pivot is always smallest/largest)​

●​ Average Case: O(n log n)​

2.4 Heap Sort

Heap Sort uses a binary heap data structure.

Steps:

1.​ Build a max heap from the input.​

2.​ Swap the first (largest) element with the last.​

3.​ Reduce the heap size and heapify again.​

Time Complexity:

●​ Build Heap: O(n)​


●​ Heapify per element: O(log n)​

●​ Total: O(n log n)​

In-Place: Yes

Stable: No
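The three steps above can be sketched in Python as an in-place sort (the names heapify and heap_sort are illustrative):

```python
def heapify(a, n, i):
    # sift a[i] down within a[0:n] to restore the max-heap property
    largest = i
    l, r = 2 * i + 1, 2 * i + 2
    if l < n and a[l] > a[largest]:
        largest = l
    if r < n and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        heapify(a, n, largest)

def heap_sort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # build max heap, O(n)
        heapify(a, n, i)
    for i in range(n - 1, 0, -1):         # repeatedly move the max to the end
        a[0], a[i] = a[i], a[0]
        heapify(a, i, 0)
    return a
```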

2.5 Linear Search

Simple searching algorithm where each element is checked one-by-one.

Time Complexity:

●​ Worst Case: O(n)​

●​ Best Case: O(1)​

2.6 Binary Search (Divide and Conquer)

Works only on sorted arrays.

●​ Divide: Compare target with middle element.​

●​ Conquer: Search either left or right half.​

Code Outline:
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
Time Complexity: O(log n)

3. Strassen’s Matrix Multiplication (Divide and Conquer)
Multiplies two n x n matrices faster than the standard O(n^3) method.

Idea:

●​ Divide each matrix into 4 sub-matrices.​

●​ Use 7 multiplications instead of 8 (as in standard divide-and-conquer).​

●​ Combine results to get the final product.​

Time Complexity:

T(n) = 7T(n/2) + O(n^2)

Using the Master Theorem → O(n^(log₂ 7)) ≈ O(n^2.81)

4. Convex Hull (Divide and Conquer)


A Convex Hull is the smallest convex polygon that encloses a set of points in the
plane.

Divide and Conquer Approach (Quick Hull):

●​ Find leftmost and rightmost points.​

●​ Form a line segment.​

●​ Find the point farthest from the line.​

●​ Recursively repeat for the subsets.​


Time Complexity:

●​ Best/Average: O(n log n)​

●​ Worst: O(n^2)​

5. Decrease and Conquer Approach


In Decrease and Conquer, the problem is solved by reducing the input size by a
constant (often 1) at each step.

Example: Topological Sort

Used in Directed Acyclic Graphs (DAG) to order vertices such that:

●​ For every edge (u, v), u appears before v.​

Approach:

●​ Use in-degree method or DFS.​

●​ Repeatedly remove nodes with 0 in-degree.​

Time Complexity: O(V + E)
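The in-degree method above (Kahn's algorithm) can be sketched in Python; the function name and edge-list interface are illustrative:

```python
from collections import deque

def topo_sort(n, edges):
    # Kahn's algorithm: repeatedly remove vertices with in-degree 0
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    q = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    # if some vertex was never removed, the graph has a cycle
    return order if len(order) == n else None
```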


Summary Table:

Algorithm                  Type                Time Complexity   Notes
Merge Sort                 Divide & Conquer    O(n log n)        Stable
Quick Sort                 Divide & Conquer    O(n log n) avg    In-place, not stable
Heap Sort                  Divide & Conquer    O(n log n)        In-place, not stable
Binary Search              Divide & Conquer    O(log n)          Requires sorted input
Linear Search              Simple              O(n)              Works on unsorted arrays
Strassen’s Multiplication  Divide & Conquer    O(n^2.81)         Less practical for small n
Convex Hull                Divide & Conquer    O(n log n)        Used in computational geometry
Topological Sort           Decrease & Conquer  O(V + E)          Only for DAGs
UNIT-2: Greedy Approach & Dynamic Programming
Contact Hours: 10

Chapter 3 : Greedy Approach


1. Understanding the Greedy Approach

Definition:

A Greedy algorithm solves a problem by always making the locally optimal


choice at each step, hoping that this will lead to a globally optimal solution.

Key Characteristics:

●​ Works in stages.​

●​ Irrevocable decisions at each step.​

●​ No backtracking.​

● Efficient when the problem exhibits the greedy-choice property and optimal substructure.

2. Fractional Knapsack Problem

Problem Statement:

Given:

●​ n items with value[i] and weight[i]​


●​ A knapsack with capacity W​

Maximize the total value in the knapsack. Items can be broken (fractions
allowed).

Greedy Strategy:

●​ Calculate value/weight ratio for each item.​

●​ Sort items by this ratio in descending order.​

●​ Pick as much as possible from the item with the highest ratio.​

●​ Continue until the knapsack is full.​

Steps:

1.​ For each item, compute value/weight.​

2.​ Sort items by this ratio.​

3.​ Initialize total value = 0 and remaining capacity = W.​

4.​ For each item:​

○​ If full item can be added, take it.​

○​ Else, take fraction to fill the knapsack.​

5.​ Return the total value.​

Time Complexity: O(n log n) (due to sorting)
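The five steps above can be sketched in Python (the function name and argument order are my own):

```python
def fractional_knapsack(values, weights, W):
    # sort items by value/weight ratio, descending
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for v, w in items:
        if W <= 0:
            break
        take = min(w, W)          # whole item if it fits, otherwise a fraction
        total += v * take / w
        W -= take
    return total
```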

3. Job Sequencing with Deadline

Problem Statement:

You are given n jobs with:


●​ deadline[i]​

●​ profit[i]​

Each job takes one unit of time, and only one job can be scheduled at a time.
Maximize total profit by selecting jobs within their deadlines.

Greedy Strategy:

●​ Sort jobs by profit in descending order.​

●​ For each job, try to assign it to the latest available time slot before or on
its deadline.​

●​ Skip job if no time slot is free.​

Steps:

1.​ Sort jobs by profit (high to low).​

2.​ Initialize a time slot array (size = max deadline).​

3.​ For each job:​

○​ Check available slots from min(deadline, max_slot) down to 1.​

○​ If a slot is free, assign the job and add its profit.​

Time Complexity: O(n^2)

(Can be optimized using Disjoint Set to O(n log n))
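The O(n^2) slot-scanning version above can be sketched in Python (the (deadline, profit) tuple format is an assumption for illustration):

```python
def job_sequencing(jobs):
    # jobs: list of (deadline, profit); each job takes one unit of time
    jobs = sorted(jobs, key=lambda j: j[1], reverse=True)  # by profit, descending
    max_slot = max(d for d, _ in jobs)
    slot_taken = [False] * (max_slot + 1)   # slot_taken[t]: time slot t is used
    total = 0
    for deadline, profit in jobs:
        # try the latest free slot at or before the deadline
        for t in range(min(deadline, max_slot), 0, -1):
            if not slot_taken[t]:
                slot_taken[t] = True
                total += profit
                break
    return total
```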

4. Huffman Coding

Problem Statement:
Given characters with their frequencies, build a binary prefix code (no code is
prefix of another) that minimizes the total encoded length.

Greedy Strategy:

●​ Combine the two least frequent characters repeatedly into a new node.​

●​ Use a min-heap to always access the two least frequent nodes.​

●​ Construct a binary tree (Huffman Tree), assign 0 to left, 1 to right.​

Steps:

1.​ Create a min-heap of all characters based on frequency.​

2.​ While there is more than one node:​

○​ Extract two nodes with lowest frequencies.​

○​ Create a new node with frequency = sum of the two.​

○​ Insert the new node back.​

3.​ Generate codes from the final tree.​

Time Complexity: O(n log n)
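The min-heap procedure above can be sketched in Python; instead of building an explicit tree, this illustrative version carries each subtree's code table through the heap (the tie-breaking counter is an implementation detail, not part of the notes):

```python
import heapq

def huffman_codes(freq):
    # freq: dict char -> frequency; returns dict char -> binary code string
    heap = [(f, i, {c: ""}) for i, (c, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, codes1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, codes2 = heapq.heappop(heap)
        # prefix 0 for one subtree, 1 for the other
        merged = {c: "0" + code for c, code in codes1.items()}
        merged.update({c: "1" + code for c, code in codes2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```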

5. Minimum Spanning Tree (MST)


An MST of a connected, undirected graph is a subset of edges that connects all
vertices with the minimum total edge weight and no cycles.

5.1 Kruskal’s Algorithm (Greedy Edge Selection)

Idea:

Always pick the smallest edge that doesn’t form a cycle.


Steps:

1.​ Sort all edges by weight.​

2.​ Initialize each vertex as a separate set (Union-Find).​

3.​ For each edge:​

○​ If the edge connects different sets (i.e., no cycle), include it in MST.​

○​ Union the sets.​

Time Complexity: O(E log E)
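The steps above can be sketched in Python with a simple Union-Find (the edge format (weight, u, v) is an assumption for illustration):

```python
def kruskal(n, edges):
    # edges: list of (weight, u, v); vertices numbered 0..n-1
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):          # smallest weight first
        ru, rv = find(u), find(v)
        if ru != rv:                       # different sets => no cycle
            parent[ru] = rv                # union
            mst.append((u, v))
            total += w
    return total, mst
```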

5.2 Prim’s Algorithm (Greedy Vertex Expansion)

Idea:

Start from any vertex and repeatedly add the smallest edge that connects a new
vertex to the growing MST.

Steps:

1.​ Start from any vertex.​

2.​ Initialize a visited set and a priority queue.​

3.​ While MST is incomplete:​

○​ Choose the minimum weight edge connecting MST to a new vertex.​

○​ Add the edge and vertex to MST.​

○​ Update the priority queue.​

Time Complexity:

●​ With Priority Queue: O(E log V)​
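The steps above can be sketched in Python with heapq as the priority queue (the adjacency-list format is an assumption for illustration):

```python
import heapq

def prim(n, adj):
    # adj: dict u -> list of (weight, v); undirected graph, vertices 0..n-1
    visited = [False] * n
    pq = [(0, 0)]            # (edge weight, vertex); start from vertex 0
    total = 0
    while pq:
        w, u = heapq.heappop(pq)
        if visited[u]:
            continue          # stale entry: u already added via a cheaper edge
        visited[u] = True
        total += w
        for wv, v in adj[u]:
            if not visited[v]:
                heapq.heappush(pq, (wv, v))
    return total if all(visited) else None   # None => graph not connected
```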


6. Activity Selection Problem

Problem Statement:

Given n activities with start[i] and finish[i], select the maximum number of
non-overlapping activities.

Greedy Strategy:

Always pick the next activity that finishes earliest, among the remaining ones.

Steps:

1.​ Sort activities by finish time.​

2.​ Select the first activity.​

3.​ For each subsequent activity:​

○​ If its start time ≥ finish time of last selected activity, select it.​

Time Complexity: O(n log n) (for sorting)
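The steps above can be sketched in Python (the (start, finish) tuple format is illustrative):

```python
def activity_selection(activities):
    # activities: list of (start, finish); greedily take the earliest finisher
    selected = []
    last_finish = float('-inf')
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:       # compatible with the last chosen one
            selected.append((start, finish))
            last_finish = finish
    return selected
```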

Summary Table:

Problem                       Strategy                Time Complexity
Fractional Knapsack           Maximize value/weight   O(n log n)
Job Sequencing with Deadline  Maximize profit         O(n^2) or O(n log n)
Huffman Coding                Minimize code length    O(n log n)
Kruskal’s MST                 Select smallest edge    O(E log E)
Prim’s MST                    Expand via min edge     O(E log V)
Activity Selection            Finish earliest first   O(n log n)

Numerical Problems for Clear Understanding

1. Fractional Knapsack Problem

Problem:

Items:

●​ Item 1: value = 60, weight = 10​

●​ Item 2: value = 100, weight = 20​

●​ Item 3: value = 120, weight = 30​


Knapsack capacity = 50​

Solution:

Calculate value/weight:

●​ Item 1: 60 / 10 = 6​

●​ Item 2: 100 / 20 = 5​

●​ Item 3: 120 / 30 = 4​

Sort items by value/weight (descending): Item 1 → Item 2 → Item 3

Steps:

●​ Take Item 1 (10 kg): Remaining = 40, Value = 60​

●​ Take Item 2 (20 kg): Remaining = 20, Value = 60 + 100 = 160​

●​ Take 20 kg of Item 3 (fractional): 20/30 * 120 = 80​

🔹 Total Value = 160 + 80 = 240

2. Job Sequencing with Deadlines


Problem:

Jobs (Job ID, Deadline, Profit):

●​ Job A: 2, 100​

●​ Job B: 1, 19​

●​ Job C: 2, 27​

●​ Job D: 1, 25​

●​ Job E: 3, 15​

Solution:

Sort jobs by profit: A (100), C (27), D (25), B (19), E (15)

Now assign each job to latest available slot ≤ deadline:

●​ A → Slot 2 (available) ✅​

●​ C → Slot 1 (available) ✅​

● D → Slot 1 (already filled; deadline 1) ❌

● B → Slot 1 (already filled; deadline 1, so Slot 3 is not allowed) ❌

● E → Slot 3 (available; deadline 3) ✅

🔹 Jobs Scheduled: A, C, E → Profit = 100 + 27 + 15 = 142

3. Huffman Coding

Problem:

Characters with Frequencies:

●​ A: 5​
●​ B: 9​

●​ C: 12​

●​ D: 13​

●​ E: 16​

●​ F: 45​

Solution (Min-Heap Based):

1.​ Combine A(5) + B(9) → Node 14​

2.​ Combine C(12) + D(13) → Node 25​

3.​ Combine 14 + E(16) → Node 30​

4.​ Combine 25 + 30 → Node 55​

5.​ Combine 45 + 55 → Root 100​

Generate binary codes by traversing the tree:

●​ F: 0​

●​ C: 100​

●​ D: 101​

●​ A: 1100​

●​ B: 1101​

●​ E: 111​

🔹 Efficient encoding achieved


4. Kruskal’s Algorithm

Problem:

Graph with edges and weights:

A-B: 1
B-C: 4
A-C: 3
C-D: 2
D-E: 5
E-A: 6

Solution:

Sort edges: A-B(1), C-D(2), A-C(3), B-C(4), D-E(5), E-A(6)

Add edges, skipping any that would form a cycle:

● A-B (1) ✅

● C-D (2) ✅

● A-C (3) ✅ (joins {A, B} and {C, D})

● B-C (4) ❌ (forms a cycle)

● D-E (5) ✅

● E-A (6) ❌ (forms a cycle)

🔹 MST Edges: A-B, C-D, A-C, D-E → Total weight = 1+2+3+5 = 11

5. Prim’s Algorithm

Problem (Same Graph as Above):

Start from vertex A.

Step-by-step selection:
●​ A → B (1)​

●​ B → C (4) or A → C (3), pick A-C → (3)​

●​ Now C → D (2)​

●​ D → E (5)​

●​ E → A (6) is ignored (already included)​

🔹 MST Edges: A-B, A-C, C-D, D-E → Total = 1+3+2+5 = 11

6. Activity Selection Problem

Problem:

Activities with start and finish times:

Activity Start End

A1 1 3

A2 2 5

A3 4 6

A4 6 7

A5 5 8

A6 8 9

Solution:

Sort by finish time: A1 (3), A2 (5), A3 (6), A4 (7), A5 (8), A6 (9)

Pick A1 (ends at 3)
Next: A3 (starts at 4)
Next: A4 (starts at 6)
Next: A6 (starts at 8)

🔹 Selected: A1, A3, A4, A6 → 4 activities


Chapter 4 : Dynamic Programming
What is Dynamic Programming?

Dynamic Programming (DP) is a method for solving complex problems by breaking


them into smaller subproblems, solving each subproblem only once, and storing
their results (usually using a table). It is used when:

●​ The problem has overlapping subproblems.​

●​ The problem has optimal substructure (solution to the problem can be built
from optimal solutions of subproblems).​

There are two main approaches:

●​ Top-down (Memoization): Recursive + caching​

●​ Bottom-up (Tabulation): Iterative + table filling​

1. 0/1 Knapsack Problem

Problem:

Given weights and values of n items, put these items in a knapsack of capacity W
such that total value is maximized, and you cannot break items (0 or 1 of each).

Steps:

1.​ Create a 2D DP table dp[n+1][W+1].​

2.​ dp[i][j] = Maximum value for first i items and capacity j.​

3.​ Initialization:​

○​ dp[0][j] = 0 (no items)​


○​ dp[i][0] = 0 (zero capacity)​

4.​ For each item i from 1 to n, and for each capacity j:​

○​ If weight[i-1] <= j:​


dp[i][j] = max(dp[i-1][j], value[i-1] + dp[i-1][j - weight[i-1]])​

○​ Else:​
dp[i][j] = dp[i-1][j]​

5.​ Final answer: dp[n][W]​
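The tabulation described above can be sketched in Python directly from the recurrence:

```python
def knapsack_01(values, weights, W):
    n = len(values)
    # dp[i][j] = best value using the first i items with capacity j
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            dp[i][j] = dp[i - 1][j]                    # skip item i
            if weights[i - 1] <= j:                    # or take it
                dp[i][j] = max(dp[i][j],
                               values[i - 1] + dp[i - 1][j - weights[i - 1]])
    return dp[n][W]
```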

2. Longest Common Subsequence (LCS)

Problem:

Given two strings, find the length of their longest subsequence that appears in
both.

Steps:

1.​ Create a 2D table dp[m+1][n+1] for strings of length m and n.​

2.​ Initialization: dp[0][j] = dp[i][0] = 0​

3.​ Loop through both strings:​

○​ If characters match:​
dp[i][j] = dp[i-1][j-1] + 1​

○​ Else:​
dp[i][j] = max(dp[i-1][j], dp[i][j-1])​

4.​ Final result: dp[m][n]​
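The table-filling steps above translate almost line-for-line into Python:

```python
def lcs_length(X, Y):
    m, n = len(X), len(Y)
    # dp[i][j] = LCS length of X[:i] and Y[:j]; row 0 and column 0 stay 0
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```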


3. Travelling Salesman Problem (TSP)

Problem:

Given a set of cities and distances between each pair, find the shortest tour that
visits all cities and returns to the start.

Steps (DP with bitmasking):

1.​ Represent visited cities using a bitmask.​

2.​ Define dp[mask][i] as the minimum cost to reach city i having visited cities
in mask.​

3.​ Initialize dp[1][0] = 0 (starting at city 0).​

4. For each mask, and for each u in mask, update:

for each v not in mask:
    dp[mask | (1<<v)][v] = min(dp[mask | (1<<v)][v], dp[mask][u] + dist[u][v])

5. Final answer: min(dp[all_visited][i] + dist[i][0]) for all i

4. Bellman-Ford Algorithm

Problem:

Find the shortest path from a single source to all vertices, even with negative edge
weights (no negative cycle).

Steps:

1.​ Initialize distance of all vertices as ∞, and source as 0.​

2.​ Repeat V-1 times:​


○​ For each edge (u, v), relax it: if dist[v] > dist[u] + weight(u, v):
dist[v] = dist[u] + weight(u, v)​

3.​ Check for negative cycles:​

○​ Run step 2 once more; if any distance improves → negative cycle


exists.​
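The steps above can be sketched in Python (returning None on a negative cycle is my own convention for illustration):

```python
def bellman_ford(n, edges, src):
    # edges: list of (u, v, w); vertices 0..n-1
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):             # relax every edge V-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:              # one extra pass: any improvement => cycle
        if dist[u] + w < dist[v]:
            return None
    return dist
```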

5. Floyd-Warshall Algorithm

Problem:

Find shortest paths between all pairs of vertices in a weighted graph.

Steps:

1.​ Initialize matrix dist[i][j] with weights; dist[i][i] = 0.​

2. For each intermediate vertex k, update:

dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

3. Repeat for all i, j, and k.

Time Complexity: O(V^3)
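The update rule above can be sketched in Python, taking the initial distance matrix as input:

```python
def floyd_warshall(dist):
    # dist: n x n matrix; dist[i][j] = direct edge weight (inf if absent), 0 on diagonal
    n = len(dist)
    d = [row[:] for row in dist]          # copy so the input is not modified
    for k in range(n):                    # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```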

6. Optimal Binary Search Tree (OBST)

Problem:

Given keys with access probabilities, build a BST with minimum expected search
cost.
Steps:

1.​ Define dp[i][j] = minimum cost of BST from key i to j.​

2.​ For each length of interval:​

○​ Try all possible root positions r between i and j​

○​ Cost = dp[i][r-1] + dp[r+1][j] + sum of frequencies from i to j​

○​ Take minimum of all such costs​

3.​ Final result: dp[0][n-1]​

7. Coin Change Problem

Problem:

Given coin denominations and a total amount, find the number of ways to make
the amount using any number of coins.

Steps:

1.​ Create a table dp[amount+1] with dp[0] = 1​

2.​ For each coin:​

○​ For all amounts j = coin to amount: dp[j] += dp[j - coin]​

3.​ Final result: dp[amount]​

This version counts the number of combinations. A variation gives the minimum
number of coins using a similar table.
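The combination-counting version above can be sketched in Python; the loop order (coins on the outside) is what makes it count combinations rather than permutations:

```python
def coin_change_ways(coins, amount):
    # dp[j] = number of ways to make amount j with the coins seen so far
    dp = [0] * (amount + 1)
    dp[0] = 1                     # one way to make 0: use no coins
    for coin in coins:
        for j in range(coin, amount + 1):
            dp[j] += dp[j - coin]
    return dp[amount]
```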

8. Matrix Chain Multiplication


Problem:

Given dimensions of matrices, find the most efficient way to multiply them.

Steps:

1.​ Define dp[i][j] = minimum cost of multiplying matrices from i to j.​

2.​ For increasing chain length l = 2 to n:​

○ For i = 1 to n-l+1, j = i + l - 1:

○ For each k = i to j-1:

cost = dp[i][k] + dp[k+1][j] + p[i-1]*p[k]*p[j]
dp[i][j] = min(dp[i][j], cost)

3. Final result: dp[1][n] (the cost of multiplying the whole chain of n matrices)
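Using 1-indexed matrices with dimension array p (matrix i has shape p[i-1] × p[i]), the steps above can be sketched as:

```python
def matrix_chain(p):
    # p: dimensions; there are len(p) - 1 matrices in the chain
    n = len(p) - 1
    INF = float('inf')
    dp = [[0] * (n + 1) for _ in range(n + 1)]   # dp[i][i] = 0: single matrix
    for length in range(2, n + 1):               # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            dp[i][j] = INF
            for k in range(i, j):                # split point
                cost = dp[i][k] + dp[k + 1][j] + p[i - 1] * p[k] * p[j]
                dp[i][j] = min(dp[i][j], cost)
    return dp[1][n]
```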

Numerical Problems for Clear Understanding

1. 0/1 Knapsack Problem

Problem:

Items:​
Weights = [1, 3, 4, 5]​
Values = [10, 40, 50, 70]​
Knapsack Capacity = 8

Solution:

Create a dp table where dp[i][w] is the max value for first i items and capacity
w.

Final dp table value:​


dp[4][8] = 110 → Take items with weights 3 and 5 (values 40 + 70)
2. Longest Common Subsequence (LCS)

Problem:

X = "AGGTAB"​
Y = "GXTXAYB"

Solution:

Build a table dp[m+1][n+1].​


LCS length = dp[6][7] = 4

LCS = "GTAB"

3. Travelling Salesman Problem (TSP)

Problem:

Distance matrix (4 cities):

0 1 2 3
-------------
0 | 0 10 15 20
1 | 10 0 35 25
2 | 15 35 0 30
3 | 20 25 30 0

Solution:

Use bitmasking + DP.​


Minimum tour cost: 80​
Tour: 0 → 1 → 3 → 2 → 0

4. Bellman-Ford Algorithm
Problem:

Vertices = 5, source = 0​
Edges:

0 → 1 (6)
0 → 2 (7)
1 → 2 (8)
1 → 3 (5)
1 → 4 (-4)
2 → 3 (-3)
2 → 4 (9)
3 → 1 (-2)
4 → 3 (7)
4 → 0 (2)

Result:

After V-1 relaxations:

dist[] = [0, 2, 7, 4, -2]

5. Floyd-Warshall Algorithm

Problem:

Graph:

A → B (3)
A → C (8)
B → C (2)
B → D (5)
C → D (1)
D → A (2)

Result:

All-pairs shortest path matrix:

    A  B  C  D
A [ 0  3  5  6 ]
B [ 5  0  2  3 ]
C [ 3  6  0  1 ]
D [ 2  5  7  0 ]

(For example, B → A uses B → C → D → A = 2 + 1 + 2 = 5.)

6. Optimal Binary Search Tree (OBST)

Problem:

Keys: 10, 20, 30​


Frequency: [34, 8, 50]

Result:

Build BST minimizing expected cost.

Cost Table (dp[0][2]) = 142

Root: 30 (highest frequency); construct tree to minimize weighted depth.

7. Coin Change Problem

Problem:

Coins = [1, 2, 3]​


Amount = 4

Solution:

Number of ways = 4

Ways:​
(1+1+1+1), (1+1+2), (2+2), (1+3)
8. Matrix Chain Multiplication

Problem:

Matrix dimensions: [10, 30, 5, 60]​


→ Multiply A1 (10x30), A2 (30x5), A3 (5x60)

Solution:

Calculate minimum cost for multiplying:

Try all ways of parenthesizing:

●​ (A1A2)A3 = 10×30×5 + 10×5×60 = 1500 + 3000 = 4500​

●​ A1(A2A3) = 30×5×60 + 10×30×60 = 9000 + 18000 = 27000​

Minimum cost = 4500


UNIT-3: Optimization and Complexity
Contact Hours: 10

Chapter 5: Backtracking

1. Introduction to Backtracking
Backtracking is a systematic method of trying out different sequences of
decisions until a solution is found.

●​ It is used to solve combinatorial and constraint satisfaction problems.​

●​ It follows a depth-first search approach.​

●​ If a path does not lead to a solution, it "backtracks" and tries the next
alternative.​

General structure of a backtracking algorithm:

def backtrack(state):
    if is_solution(state):
        process_solution(state)
    else:
        for choice in valid_choices(state):
            make_choice(choice)
            backtrack(new_state)
            undo_choice(choice)

2. Recursive vs Iterative Backtracking

Recursive Backtracking
●​ The backtracking logic is implemented using function recursion.​

●​ Most common and simpler to implement.​

Example: Solving N-Queens using recursion.

Iterative Backtracking

●​ Uses stacks or loops instead of recursion.​

●​ Suitable when recursion depth may exceed limits or to reduce memory


usage.​

3. N-Queens Problem

Problem:

Place N queens on an N x N chessboard such that no two queens threaten each


other.

Constraints:

●​ No two queens in the same row, column, or diagonal.​

Approach:

1.​ Start from the first row.​

2.​ Try placing a queen in each column.​

3.​ If placing a queen in a column is safe, move to the next row.​

4.​ If not, backtrack and try next column.​

Key Function:
●​ is_safe(row, col) checks if placing a queen is valid.​
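The row-by-row approach with an is_safe check can be sketched in Python; tracking columns and both diagonals in sets makes is_safe O(1) (this set-based bookkeeping is my own, not from the notes):

```python
def solve_n_queens(n):
    cols, diag1, diag2 = set(), set(), set()
    placement = []                      # placement[row] = column of that row's queen

    def is_safe(row, col):
        return (col not in cols
                and (row - col) not in diag1
                and (row + col) not in diag2)

    def backtrack(row):
        if row == n:
            return True                 # all queens placed
        for col in range(n):
            if is_safe(row, col):
                cols.add(col); diag1.add(row - col); diag2.add(row + col)
                placement.append(col)
                if backtrack(row + 1):
                    return True
                # undo the choice and try the next column
                cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)
                placement.pop()
        return False

    return placement if backtrack(0) else None
```

For n = 4 this finds the board shown in the numerical example below: queens in columns 1, 3, 0, 2 of rows 0 to 3.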

4. Hamiltonian Cycle

Problem:

Find a cycle that visits each vertex of a graph exactly once and returns to the
starting point.

Input:

●​ A graph represented by an adjacency matrix.​

Approach:

1.​ Start from vertex 0.​

2.​ Try adding an unvisited vertex to the path.​

3.​ Backtrack if a vertex cannot be added.​

Constraints:

●​ A valid vertex must be adjacent to the last vertex in the path and not
already visited.​

5. Knight’s Tour Problem

Problem:

Move a knight on a chessboard such that it visits every square exactly once.

Approach:
●​ Use recursion to try all 8 possible knight moves from the current position.​

●​ Use a board matrix to track visited squares.​

Steps:

1.​ Start from a position (e.g., (0,0)).​

2.​ Try each move.​

3.​ If no move is possible and not all squares are visited, backtrack.​

6. Graph Coloring Problem

Problem:

Color the vertices of a graph using at most m colors such that no two adjacent
vertices have the same color.

Approach:

1.​ Start from the first vertex.​

2.​ Try assigning colors from 1 to m.​

3.​ Check if coloring is valid with is_safe() function.​

4.​ Backtrack if no valid color can be assigned.​

7. Lower Bound Theory


Lower Bound Theory helps us determine the minimum possible cost (or time)
required to solve a problem.
●​ In backtracking, lower bounds are used to prune unpromising paths early.​

●​ It avoids unnecessary exploration of solution paths that cannot possibly


lead to an optimal solution.​

Example:

●​ In a variation of TSP (Traveling Salesman Problem), if the current path cost


already exceeds the best found cost, backtrack immediately.​

Summary of Common Backtracking Problems


Problem            Key Idea                                    Constraints Checked
N-Queens           Place queens without attacking each other   Rows, columns, diagonals
Hamiltonian Cycle  Visit each vertex once and return to start  Valid edge, unvisited
Knight’s Tour      Visit each cell once with knight’s moves    Board bounds, unvisited
Graph Coloring     Color adjacent nodes differently            Adjacent vertices’ colors
Numerical Problems for Clear Understanding

1. N-Queens Problem (4-Queens Example)

Problem:

Place 4 queens on a 4×4 board so that no two queens attack each other.

Solution Idea:

We fill the board row by row, placing one queen per row.

Steps:

●​ Start at Row 0, try placing queen at (0,0)​

●​ Move to Row 1, (1,0) is invalid (same column), (1,1) is invalid (diagonal),


(1,2) is safe.​

●​ Row 2: Try (2,1), (2,3) → continue and backtrack as needed.​

●​ Final Solution:​

. Q . .
. . . Q
Q . . .
. . Q .

This corresponds to: Q at (0,1), (1,3), (2,0), (3,2)

2. Hamiltonian Cycle Example

Graph:

Vertices: A, B, C, D​
Adjacency Matrix:

A B C D
A [ 0 1 1 1 ]
B [ 1 0 1 0 ]
C [ 1 1 0 1 ]
D [ 1 0 1 0 ]

Find a cycle:

Start at A → B → C → D → A

●​ All vertices visited exactly once, and final edge D→A exists.​

Hamiltonian cycle: A-B-C-D-A

3. Knight’s Tour Problem (4x4 Board)

Goal:

Move knight so that it visits all 16 cells exactly once.

Start Position:

(0, 0)

Knight moves in L-shape:

●​ (x+2, y+1), (x+1, y+2), etc.​

Example Steps:

1.​ (0,0)​

2.​ (2,1)​

3.​ (3,3)​

4. (1,2) …continue in this way, backtracking whenever no valid move remains.

This is a trial-and-error approach where invalid moves are discarded and the algorithm backtracks. (Note: no complete knight's tour exists on a 4×4 board, so for this input the search eventually exhausts every possibility and reports failure; full tours first appear on 5×5 and larger boards.)

4. Graph Coloring Example

Graph:

●​ 4 Vertices: A, B, C, D​

●​ Edges: (A-B), (A-C), (B-C), (C-D)​

●​ Max colors = 3​

Goal:

Color vertices so that no adjacent vertices have same color.

Steps:

●​ Color A with Color 1​

●​ Color B with Color 2 (A-B connected)​

●​ Color C with Color 3 (connected to A and B)​

●​ Color D with Color 1 (only connected to C)​

Coloring:

●​ A → 1​

●​ B → 2​

●​ C → 3​

●​ D → 1​
5. Lower-Bound Theory Example (TSP context)

Cities and Costs:

A B C D

A 0 10 15 20

B 10 0 35 25

C 15 35 0 30

D 20 25 30 0

Partial path: A → B → C

Cost so far = 10 + 35 = 45

Estimate the remaining cost. Here only city D is unvisited, so the rest of the tour is forced:

● C to D = 30

● D to A = 20

● Estimated total ≥ 45 + 30 + 20 = 95

If the best complete tour found so far costs less than 95, we prune this branch.


Chapter 6 : Branch & Bound

1. What is Branch and Bound?


Branch and Bound is a problem-solving strategy used for combinatorial
optimization problems such as:

●​ 0/1 Knapsack​

●​ Traveling Salesman Problem (TSP)​

●​ Assignment Problems​

It systematically searches the solution space while eliminating suboptimal


solutions early, using bounds to prune branches that cannot lead to an optimal
solution.

Basic Concepts:

●​ Branching: Divide the problem into smaller sub-problems (branches).​

●​ Bounding: Calculate an upper or lower bound on the best possible solution


from a node. If the bound is worse than the best solution found so far,
discard (prune) that node.​

●​ Backtracking vs Branch and Bound:​

○​ Backtracking explores all possible solutions.​

○​ Branch and Bound uses bounds to prune non-promising solutions.​

2. Types of Branch and Bound Strategies


A. FIFO Branch and Bound (Breadth-First Search)

●​ Uses a queue (First In First Out) to explore nodes.​

●​ Explores the nodes level by level, like in Breadth-First Traversal.​

●​ Nodes are expanded in the order they were added to the queue.​

B. Least Cost Branch and Bound (LCBB)

●​ Uses a priority queue, where nodes with lowest cost (bound) are given
priority.​

●​ Explores most promising nodes first (like Best-First Search).​

●​ Useful when we can calculate a cost function or lower bound for nodes.​

3. 0/1 Knapsack Problem Using Branch and Bound

Problem Statement:

Given weights and values of n items, put these items in a knapsack of capacity W
to get the maximum total value, where you can either include or exclude an item
(0/1 only).

A. Node Representation in B&B Tree:

Each node contains:

●​ Level (index of current item)​

●​ Profit (total value so far)​


●​ Weight (total weight so far)​

●​ Bound (maximum profit possible from this node onwards)​

B. Bounding Function:

Estimate the upper bound of maximum profit starting from the current node.

Example (using fractional items for bound only):

●​ bound = current profit + value of remaining items (as fraction if needed)
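As a small sketch (assuming the items are pre-sorted by value/weight ratio, as LCBB requires), this bounding rule can be written as:

```python
def fractional_bound(profit, weight, level, items, capacity):
    """Upper bound on achievable profit from a node: greedily add whole
    remaining items, then a fraction of the first one that no longer fits.
    The fraction is used for the bound only, never for the actual solution."""
    if weight >= capacity:
        return 0
    bound = profit
    remaining = capacity - weight
    for w, v in items[level:]:          # items sorted by value/weight ratio
        if w <= remaining:
            remaining -= w
            bound += v
        else:
            bound += v * remaining / w  # fractional part of the next item
            break
    return bound
```

For the example items (2, 40), (3, 50), (4, 60) with capacity 5, the root bound is 40 + 50 = 90, matching the worked example later in this chapter.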

4. FIFO Branch and Bound for 0/1 Knapsack (Steps)
1.​ Initialize queue with root node (no items selected).​

2.​ While queue is not empty:​

○​ Remove front node.​

○​ If this node’s weight is within limit and profit is higher than current
max, update max.​

○​ Generate two children:​

■​ One includes next item.​

■​ One excludes next item.​

○​ Add children to queue without pruning.​

This guarantees that all possibilities are explored, but since no pruning is done it can be much slower than the Least Cost variant.
5. LC Branch and Bound for 0/1 Knapsack
(Steps)
1.​ Use a priority queue ordered by maximum bound (i.e., most promising
solution).​

2.​ Insert root node.​

3.​ While queue is not empty:​

○​ Remove node with highest bound.​

○​ If this node’s bound is better than max profit:​

■​ Generate left child (includes item).​

■​ Generate right child (excludes item).​

■​ For each, calculate weight, profit, and bound.​

■​ If weight ≤ W and profit > current max → update max.​

■​ Add children to the queue if their bound is better than the current max.​

Prunes more paths than FIFO, thus faster and more efficient.
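The steps above can be sketched in Python with `heapq` as the priority queue (a minimal sketch, assuming items arrive sorted by value/weight ratio; bounds are negated because `heapq` is a min-heap):

```python
import heapq

def knapsack_lcbb(items, capacity):
    """0/1 knapsack via Least Cost (best-first) Branch and Bound.
    items: list of (weight, value) pairs sorted by value/weight ratio."""
    def bound(profit, weight, level):
        # Fractional upper bound on profit reachable from this node.
        if weight >= capacity:
            return 0
        b, room = profit, capacity - weight
        for w, v in items[level:]:
            if w <= room:
                room -= w
                b += v
            else:
                b += v * room / w
                break
        return b

    best = 0
    heap = [(-bound(0, 0, 0), 0, 0, 0)]   # (-bound, level, profit, weight)
    while heap:
        neg_b, level, profit, weight = heapq.heappop(heap)
        if -neg_b <= best or level == len(items):
            continue                      # prune: bound cannot beat best
        w, v = items[level]
        if weight + w <= capacity:        # left child: include the item
            best = max(best, profit + v)
            heapq.heappush(heap, (-bound(profit + v, weight + w, level + 1),
                                  level + 1, profit + v, weight + w))
        # right child: exclude the item
        heapq.heappush(heap, (-bound(profit, weight, level + 1),
                              level + 1, profit, weight))
    return best
```

On the running example — items (2, 40), (3, 50), (4, 60) with capacity 5 — this returns 90, the profit of taking items 1 and 2.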

Example: 0/1 Knapsack Using LCBB

Items:

●​ Item 1: Weight = 2, Value = 40​

●​ Item 2: Weight = 3, Value = 50​


●​ Item 3: Weight = 4, Value = 60​

●​ Knapsack Capacity = 5​

Steps (simplified):

1.​ Sort by value/weight ratio.​

2.​ Root node: profit = 0, weight = 0​

3.​ Generate children:​

○​ Left: include item 1 → (profit = 40, weight = 2)​

○​ Right: exclude item 1 → (profit = 0, weight = 0)​

4.​ Continue branching, update max when weight ≤ 5 and profit > previous
max.​

5.​ Use fractional value for bounding estimate when exceeding weight.​

Advantages of Branch and Bound


●​ Efficiently reduces the number of explored nodes.​

●​ Guarantees optimal solutions.​

●​ Useful for problems where solution space is exponential.​

Limitations
●​ Still exponential in worst case.​

●​ Needs careful design of bounding functions for best performance.


Numerical Problems for Clear Understanding

1. 0/1 Knapsack using FIFO Branch and Bound

Problem:

●​ Items:​

○​ Item 1: Weight = 2, Profit = 40​

○​ Item 2: Weight = 3, Profit = 50​

○​ Item 3: Weight = 4, Profit = 60​

●​ Knapsack Capacity (W) = 5​

Step 1: Create a queue and insert root node:

●​ Root node: No items taken​

○​ Profit = 0, Weight = 0, Level = -1​

Step 2: Process nodes level-by-level (FIFO)

1.​ From root, we generate 2 children:​

○​ Include Item 1 → Profit = 40, Weight = 2​

○​ Exclude Item 1 → Profit = 0, Weight = 0​

2.​ Continue exploring both:​

○​ From (Profit = 40, Weight = 2):​

■​ Include Item 2 → Profit = 90, Weight = 5 ✅ (max so far)​

■​ Exclude Item 2 → Profit = 40, Weight = 2​


○​ From (Profit = 0, Weight = 0):​

■​ Include Item 2 → Profit = 50, Weight = 3​

■​ Exclude Item 2 → Profit = 0, Weight = 0​

3.​ Keep checking which combinations stay within capacity and give higher
profit.​

Max Profit = 90 (Item 1 + Item 2)
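This answer can be cross-checked by enumerating every include/exclude combination, which is what FIFO Branch and Bound without pruning effectively explores:

```python
from itertools import product

def knapsack_bruteforce(items, capacity):
    """Try all 2^n include/exclude choices and keep the best feasible one."""
    best = 0
    for choice in product([0, 1], repeat=len(items)):
        weight = sum(w for (w, _), c in zip(items, choice) if c)
        profit = sum(p for (_, p), c in zip(items, choice) if c)
        if weight <= capacity:            # feasible: within knapsack limit
            best = max(best, profit)
    return best
```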

2. 0/1 Knapsack using Least Cost Branch and Bound (LCBB)

Same Problem Setup

●​ Items:​

○​ Item 1: W=2, P=40​

○​ Item 2: W=3, P=50​

○​ Item 3: W=4, P=60​

●​ Capacity = 5​

Step 1: Sort items by value/weight

●​ V/W: Item 1 = 20, Item 2 ≈ 16.67, Item 3 = 15 → already sorted.​

Step 2: Use priority queue (max bound)

●​ Start with root (Profit=0, Weight=0)​


●​ Calculate bound using fractional items:​

○​ Take Item 1 → 2W → P=40​

○​ Take Item 2 → 3W → P=90 (exact fit)​

○​ So, bound = 90​

Process node with highest bound:

1.​ Include Item 1 → (P=40, W=2), recalculate bound:​

○​ Can take all of Item 2 (3W), total profit = 90 → bound = 90​

2.​ Include Item 2 → (P=50, W=3), bound = P + fractional of next item​

3.​ Continue branching and pruning nodes with bound ≤ current max profit.​

Final Max Profit = 90 (Item 1 + 2)


Chapter 7 : Computational Complexity

1. Introduction to Computational Complexity


Computational complexity refers to measuring the efficiency of algorithms in
terms of time (how fast) and space (how much memory).

Complexity classes help in grouping problems based on the resources required to solve them.

2. Class P (Polynomial Time)

Definition:

Class P contains decision problems (yes/no questions) that can be solved by a deterministic algorithm in polynomial time.

In other words:

A problem is in P if an algorithm can solve any instance of size n in O(n^k) time for some constant k.

Examples:

●​ Binary Search​

●​ Merge Sort​

●​ Dijkstra’s Algorithm​

●​ Finding the GCD of two numbers​

These are efficiently solvable problems.

3. Class NP (Non-deterministic Polynomial Time)


Definition:

Class NP consists of problems for which a given solution can be verified in polynomial time, even if finding the solution may not be feasible in polynomial time.

These problems may not be solvable in polynomial time using deterministic algorithms.

Key point:

●​ "Verifiable in polynomial time"​

●​ Not necessarily "solvable" in polynomial time (unless P = NP)​

Examples:

●​ Sudoku​

●​ Hamiltonian Path​

●​ Travelling Salesman Problem (decision version)​

●​ Subset Sum​

If someone gives you a solution to any of these, you can check it quickly. But
finding it may take exponential time.
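A polynomial-time verifier for Subset Sum makes this concrete (the certificate encoding as index positions is an illustrative choice, not the only one):

```python
def verify_subset_sum(numbers, target, indices):
    """Polynomial-time verifier for the Subset Sum decision problem.
    `indices` is the certificate: positions of the chosen numbers.
    Checking a certificate is O(n), even though finding one may take
    exponential time."""
    if len(set(indices)) != len(indices):              # positions distinct?
        return False
    if any(i < 0 or i >= len(numbers) for i in indices):
        return False
    return sum(numbers[i] for i in indices) == target
```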

4. Deterministic vs Non-Deterministic Algorithms

Deterministic Algorithm:

●​ Performs exactly one operation at every step.​

●​ The outcome is predictable and reproducible.​

●​ Follows a definite logic path.​


Example: Binary search, Merge sort.

Non-Deterministic Algorithm:

●​ May perform multiple choices at a decision point.​

●​ Often imagined as a machine that “guesses” the right path.​

●​ It is theoretical—used to define NP problems.​

Example (theoretical): Trying all possible solutions in parallel and magically choosing the correct one.

5. NP-Complete Problems

Definition:

A problem is NP-Complete (NPC) if:

1.​ It belongs to NP​

2.​ Every problem in NP can be reduced to it in polynomial time​

These are the "hardest" problems in NP. If one NPC problem is solved in
polynomial time, then P = NP.

Examples:

●​ Boolean Satisfiability (SAT)​

●​ Hamiltonian Cycle​

●​ Travelling Salesman Problem (decision version)​

●​ 3-SAT​
6. NP-Hard Problems

Definition:

A problem is NP-Hard if every problem in NP can be reduced to it, but it may not belong to NP itself (i.e., the solution may not be verifiable in polynomial time).

These are at least as hard as NP problems.

Examples:

●​ Halting Problem​

●​ Optimization version of TSP​

●​ Scheduling problems​

7. Relationships between Classes

Diagrammatically:
P ⊆ NP
NP-Complete ⊆ NP
NP-Hard ⊄ NP (in general)

●​ All P problems are in NP​

●​ All NP-Complete are in NP​

●​ Not all NP-Hard are in NP​

8. Polynomial Time Reductions


To prove a problem is NP-Complete, we use polynomial time reduction:

●​ Reduce a known NP-Complete problem to the new problem in polynomial time.​

●​ This shows the new problem is at least as hard.​

9. Practical Implications
●​ If a problem is in P, we can solve it efficiently.​

●​ If a problem is NP-Complete, we rely on heuristics or approximation algorithms.​

●​ If it is NP-Hard, the solution may not even be checkable in reasonable time.​

Example Comparisons

Problem                  Class         Notes
Sorting Numbers          P             Solved in O(n log n)
Sudoku Solver            NP-Complete   Solution can be verified quickly
TSP (decision version)   NP-Complete   Can be verified in polynomial time
TSP (optimization)       NP-Hard       Not known to be verifiable in polynomial time


Solutions of Mid Semester Tests

Mid Semester Test 1

Section A (2x5= 10)


1.​ Define the terms worst-case, best-case and average-case in algorithmic
analysis.
2.​ Identify the steps involved in solving recurrence equations using the
substitution method.
3.​ Describe the recurrence relation for a recursive algorithm that splits an
array into two halves and processes each half recursively.
4.​ Explain the concept of topological sorting and its significance.
5.​ Describe the Decrease-and-Conquer Approach with an example.

Section B (5x2= 10)


6.​ Use Strassen’s algorithm to multiply the following 2x2 matrices :
A=[[1,2],[3,4]], B=[[5,6],[7,8]]. Show all the intermediate steps.
7.​ Illustrate the Topological Sort and give an example of a directed acyclic
graph (DAG).

Solutions/Answers :

1. Define the terms worst-case, best-case and average-case in


algorithmic analysis.

●​ Worst-case:​
This represents the maximum time or space an algorithm may take on
any input of size n. It shows the algorithm’s upper bound.​

○​ Example: In linear search, if the element is not present, it checks all


n elements → O(n).​

●​ Best-case:​
This represents the minimum time or space required. It occurs under ideal
conditions.​
○​ Example: In linear search, if the element is at the first position →
O(1).​

●​ Average-case:​
This reflects the expected performance over all possible inputs. It provides
a realistic efficiency estimate.​

○​ Example: In linear search, assuming equal probability, average comparisons = n/2 → O(n).​

2. Identify the steps involved in solving recurrence equations using


the substitution method.

The Substitution Method involves:

1.​ Guess the form of the solution (e.g., T(n) = O(n log n)).​

2.​ Use mathematical induction to prove the guess.​

3.​ Base case: Verify that the recurrence holds true for the smallest input (e.g.,
n = 1).​

4.​ Inductive step: Assume it holds for n = k and prove it for n = k+1.​

5.​ Conclude the function’s time complexity based on the final inequality.​

3. Describe the recurrence relation for a recursive algorithm that splits an array into two halves and processes each half recursively.

Such a divide-and-conquer algorithm generally follows the recurrence:

T(n) = 2T(n/2) + f(n)

●​ 2T(n/2) → solving both halves recursively.​


●​ f(n) → time for dividing the array and combining results (usually linear,
O(n)).​

Example: Merge Sort​


T(n) = 2T(n/2) + O(n)​
Solving this via Master Theorem gives: T(n) = O(n log n)
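Unrolling the recurrence (taking f(n) = cn) shows where the O(n log n) comes from:

```latex
\begin{aligned}
T(n) &= 2\,T(n/2) + cn \\
     &= 4\,T(n/4) + 2cn \\
     &= 8\,T(n/8) + 3cn \\
     &\;\;\vdots \\
     &= 2^{k}\,T\!\left(n/2^{k}\right) + k\,cn \\
     &= n\,T(1) + cn\log_2 n \qquad (k = \log_2 n) \\
     &= O(n \log n)
\end{aligned}
```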

4. Explain the concept of topological sorting and its significance.

Topological Sorting is a linear ordering of vertices in a Directed Acyclic Graph


(DAG) such that for every directed edge u → v, vertex u comes before v in the
ordering.

Significance:

●​ Used in task scheduling, where some tasks must be done before others
(e.g., course prerequisites).​

●​ Helps in detecting cycles in a graph (topological sort is only possible for DAGs).​

Example:​
Tasks: A → B → C​
Topological Order: A, B, C

5. Describe the Decrease-and-Conquer Approach with an example.

In the Decrease-and-Conquer approach, the problem is solved by:

1.​ Solving a smaller instance of the problem.​

2.​ Using the solution of the smaller instance to build the final answer.​

Types:

●​ Decrease by constant (e.g., n → n-1)​


●​ Decrease by a constant factor (e.g., n → n/2)​

Example:​
Binary Search:

●​ Each step reduces the array size by half.​

●​ Time complexity: O(log n)​

Steps:

●​ Find the middle element.​

●​ Compare with the target.​

●​ Recur on the left or right half based on comparison.
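The steps above can be written iteratively as:

```python
def binary_search(arr, target):
    """Decrease-by-a-constant-factor: each step halves the search range."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1        # recur on the right half
        else:
            hi = mid - 1        # recur on the left half
    return -1                   # target not present
```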

6. Use Strassen’s algorithm to multiply the following 2x2 matrices:

A = [[1,2],[3,4]]​
B = [[5,6],[7,8]]

Strassen’s Algorithm for 2x2 Matrices

Let:​
A = [[a,b],[c,d]] = [[1,2],[3,4]]​
B = [[e,f],[g,h]] = [[5,6],[7,8]]

We compute 7 products (Strassen’s formulas):

M1 =(a+d)(e+h) =(1+4)(5+8)=5×13 = 65
M2 =(c+d)e =(3+4)×5=7×5 = 35
M3 =a(f−h) =1×(6−8)=1×(−2) = −2
M4 =d(g−e) =4×(7−5)=4×2 = 8
M5 =(a+b)h =(1+2)×8=3×8 = 24
M6 =(c−a)(e+f) =(3−1)×(5+6)=2×11 = 22
M7 =(b−d)(g+h) =(2−4)×(7+8)=(−2)×15 = −30

Now, calculate the resulting matrix C = A × B:


C11 =M1+M4−M5+M7=65+8−24−30 =19
C12 =M3+M5=−2+24 =22
C21 =M2+M4=35+8 =43
C22 =M1−M2+M3+M6=65−35−2+22 =50

Final Result:​
C = [[19,22],[43,50]]
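The seven products and four combinations above translate directly to code, which can be used to cross-check the hand calculation:

```python
def strassen_2x2(A, B):
    """Strassen's seven-multiplication scheme for 2x2 matrices."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    M1 = (a + d) * (e + h)
    M2 = (c + d) * e
    M3 = a * (f - h)
    M4 = d * (g - e)
    M5 = (a + b) * h
    M6 = (c - a) * (e + f)
    M7 = (b - d) * (g + h)
    return [[M1 + M4 - M5 + M7, M3 + M5],       # C11, C12
            [M2 + M4,           M1 - M2 + M3 + M6]]  # C21, C22
```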

7. Illustrate the Topological Sort and give an example of a Directed Acyclic Graph (DAG).

Topological Sort Overview

●​ Applicable to Directed Acyclic Graphs (DAGs) only.​

●​ It arranges nodes linearly such that for every directed edge u → v, u appears before v.​

Example DAG:

Consider a graph with the following edges:

●​ A → B​

●​ A → C​

●​ B → D​

●​ C → D​

Graphically:

A
/ \
B C
\ /
D

Topological Sorting Algorithm (Kahn's Algorithm - Conceptual):

1.​ Find all vertices with in-degree 0 and add them to a queue.​

2.​ Remove one vertex from the queue, add it to the topological order.​

3.​ For each neighbor of that vertex, decrease its in-degree by 1.​

4.​ If any neighbor's in-degree becomes 0, add it to the queue.​

5.​ Repeat until all nodes are processed.​

Execution on the above graph:

●​ In-degrees:​

○​ A: 0, B: 1, C: 1, D: 2​

●​ Start with A → Output: A​

●​ After removing A: B and C both get in-degree 0​

●​ Choose B → Output: A, B​

●​ D’s in-degree becomes 1​

●​ Choose C → Output: A, B, C​

●​ D’s in-degree becomes 0​

●​ Choose D → Output: A, B, C, D​

Topological Order: A, B, C, D or A, C, B, D (both valid)
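Kahn's algorithm as traced above can be sketched as:

```python
from collections import deque

def topological_sort(vertices, edges):
    """Kahn's algorithm: repeatedly remove vertices of in-degree 0."""
    indeg = {v: 0 for v in vertices}
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in vertices if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:          # removing u lowers its neighbours' in-degree
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # If some vertices were never reached, the graph has a cycle.
    return order if len(order) == len(vertices) else None
```

Running it on the example DAG yields A, B, C, D (the tie between B and C is broken by insertion order).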


Mid Semester Test 2

Section A (2x5= 10)


1.​ Contrast Fractional Knapsack and 0/1 Knapsack Problem.
2.​ Identify the major drawback of Greedy Algorithms when solving Knapsack
problems.
3.​ Given the characters and their frequencies : Character Frequency A 5 B 9
C 12 D 13 E 16 F 45. Show the Huffman Tree and determine the Huffman
Codes for each character.
4.​ Discuss the concept of Minimum Spanning Tree (MST).
5.​ Name two applications of the Activity Selection Problem.

Section B (5x2= 10)


6.​ Differentiate between fractional and 0/1 knapsack based on approach and
solution.
7.​ Explain the working mechanism of matrix chain multiplication.

Solutions/Answers :

1. Contrast Fractional Knapsack and 0/1 Knapsack Problem

Feature          Fractional Knapsack                       0/1 Knapsack
Selection Type   Items can be divided into fractions       Items must be taken as a whole or not at all
Approach         Greedy algorithm gives optimal solution   Greedy does not guarantee an optimal solution
Efficiency       Easier and faster (O(n log n) sorting)    Requires dynamic programming or backtracking
Example          If the whole item does not fit,           If the whole item does not fit,
                 take a part of it                         take nothing
Problem Type     Continuous                                Discrete

2. Identify the major drawback of Greedy Algorithms when solving Knapsack problems

The major drawback of Greedy Algorithms in solving 0/1 Knapsack problems is that:

●​ They do not guarantee an optimal solution.​

Greedy algorithms select items based on the maximum value-to-weight ratio, assuming this leads to the best result. However, in 0/1 Knapsack, such local decisions can lead to suboptimal global solutions since items cannot be broken down into smaller parts.

3. Given characters and their frequencies, build Huffman Tree and codes

Character Frequency

A 5

B 9

C 12

D 13

E 16

F 45

Step-by-step Huffman Tree Construction:


1.​ Combine A(5) + B(9) → Node1 = 14​

2.​ Combine C(12) + D(13) → Node2 = 25​

3.​ Combine Node1(14) + E(16) → Node3 = 30​

4.​ Combine Node2(25) + Node3(30) → Node4 = 55​

5.​ Combine F(45) + Node4(55) → Root = 100​

Huffman Codes (may vary based on tree structure):

Character Code

A 1100

B 1101

C 100

D 101

E 111

F 0
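The construction can be sketched with a min-heap; exact codewords depend on tie-breaking, but the code lengths (and hence the compressed size) match the table above:

```python
import heapq
from itertools import count

def huffman_codes(freq):
    """Build a Huffman tree from {symbol: frequency} and return the codes.
    Leaves are symbols; internal nodes are (left, right) tuples."""
    tick = count()  # tie-breaker so heapq never compares tree payloads
    heap = [(f, next(tick), ch) for ch, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two smallest frequencies
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tick), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + '0')      # left edge labelled 0
            walk(node[1], prefix + '1')      # right edge labelled 1
        else:
            codes[node] = prefix or '0'      # single-symbol edge case
    walk(heap[0][2], '')
    return codes
```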

4. Discuss the concept of Minimum Spanning Tree (MST)

●​ A Minimum Spanning Tree (MST) of a weighted, connected, undirected


graph is a subset of edges that:​

○​ Connects all vertices​

○​ Contains no cycles​
○​ Has the minimum possible total edge weight​

●​ MST ensures minimum cost to connect all nodes.​

Algorithms to find MST:

●​ Kruskal’s Algorithm: Greedily selects the shortest edge avoiding cycles.​

●​ Prim’s Algorithm: Builds the MST by adding the least costly edge from a
node already in the tree.​

Applications:

●​ Network design (telephone, computer, electrical)​

●​ Circuit design​

●​ Road/railway network construction​
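As an illustration of the greedy edge-selection idea, here is a minimal Kruskal sketch using union-find (the small test graph below is hypothetical):

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm: sort edges by weight and add each edge that
    does not create a cycle, checked with union-find.
    edges: list of (weight, u, v) with vertices 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    mst, total = [], 0
    for w, u, v in sorted(edges):           # shortest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # different components: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst
```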

5. Name two applications of the Activity Selection Problem

1.​ Job Scheduling:​

○​ Allocating resources (like CPU time) to the maximum number of


non-overlapping jobs.​

2.​ Classroom/Conference Room Allocation:​

○​ Assigning the maximum number of non-conflicting classes/events in


the same room.

6. Differentiate between Fractional and 0/1 Knapsack based on Approach and Solution

Feature           Fractional Knapsack                     0/1 Knapsack
Approach          Greedy algorithm based on the           Dynamic programming, recursion, or
                  highest value-to-weight ratio           Branch & Bound; greedy does not
                                                          always work
Item Selection    Items can be divided; fractions         Items cannot be divided; take the
                  of an item may be taken                  whole item or none
Optimal Solution  Always optimal due to the               Greedy does not guarantee optimality;
                  continuous nature                        DP is required
Time Complexity   Efficient: O(n log n) (sorting)         Higher: O(nW) for dynamic programming
                                                          (where W = capacity)
Example           If capacity left is 3 and the next      If the next item weighs 4 and capacity
                  item weighs 4, take 3/4 of it           left is 3, skip it

7. Explain the Working Mechanism of Matrix Chain Multiplication

Matrix Chain Multiplication problem is not about multiplying matrices but finding
the most efficient way (i.e., minimum number of scalar multiplications) to
multiply a given chain of matrices.

Problem Setup:

Given matrices: A₁, A₂, A₃, ..., An​
Each matrix Ai has dimension: p[i-1] × p[i]

We need to find the parenthesization that minimizes total multiplications.

Key Concepts:
●​ Matrix multiplication is associative, but the order of multiplication affects
the total computation.​

●​ We store the results of subproblems to avoid recomputation (Dynamic


Programming).​

Steps in Algorithm:

1.​ Let m[i][j] represent the minimum number of scalar multiplications needed
to compute the matrix product Ai...Aj.​

2.​ Let s[i][j] store the index at which the optimal split occurs.​

3.​ Initialize m[i][i] = 0 for all i.​

4.​ For chain length L = 2 to n:​

○​ For i = 1 to n−L+1:​

■​ Set j = i + L − 1​

■​ Set m[i][j] = ∞​

■​ For k = i to j−1:​

■​ Compute cost = m[i][k] + m[k+1][j] + p[i−1]×p[k]×p[j]​

■​ If cost < m[i][j], update m[i][j] and s[i][j]​

Example:

Let matrices be A1: 10×30, A2: 30×5, A3: 5×60​
p[] = {10, 30, 5, 60}

We have to multiply A1 × A2 × A3.

Possible parenthesizations:

●​ ((A1×A2)×A3): Cost = (10×30×5) + (10×5×60) = 1500 + 3000 = 4500​


●​ (A1×(A2×A3)): Cost = (30×5×60) + (10×30×60) = 9000 + 18000 = 27000

Optimal = 4500 multiplications, hence parenthesize as ((A1×A2)×A3)
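The m[i][j] recurrence from the steps above, as a short dynamic-programming sketch:

```python
def matrix_chain_order(p):
    """Minimum scalar multiplications to compute A1...An, where matrix Ai
    has dimensions p[i-1] x p[i] (so len(p) = n + 1)."""
    n = len(p) - 1                        # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][i] = 0 already
    for L in range(2, n + 1):             # chain length
        for i in range(1, n - L + 2):
            j = i + L - 1
            # try every split point k and keep the cheapest
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]
```

For p = {10, 30, 5, 60} this returns 4500, confirming that ((A1×A2)×A3) is optimal.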


Question Bank for UNIT-3: Optimization and Complexity

2-Marks Questions (12 Questions)


1.​ Define backtracking and give one example of its application.​

2.​ How does recursive backtracking differ from iterative backtracking?​

3.​ What is the base case in the N-Queen problem?​

4.​ List the key properties of a Hamiltonian cycle.​

5.​ State the difference between a Hamiltonian path and an Euler path.​

6.​ What is the significance of the lower-bound theory in algorithm design?​

7.​ State the objective of the Knight's Tour problem.​

8.​ What is the chromatic number in graph coloring?​

9.​ Mention the difference between FIFO and LC branch and bound approaches.​

10.​Define NP-Complete with one example.​

11.​ What is the main idea behind the concept of a deterministic algorithm?​

12.​Define the decision version of the 0/1 Knapsack problem.​

5-Marks Questions (6 Questions)


1.​ Write the steps involved in solving the 4-Queens problem using
backtracking.​

2.​ Explain the working of FIFO Branch and Bound using a binary tree
structure.​
3.​ Differentiate between P, NP, NP-Hard and NP-Complete problems with
examples.​

4.​ Solve a simple instance of the Knight’s Tour problem using backtracking for
a 5×5 board.​

5.​ Explain how lower-bound theory helps in reducing the number of computations in algorithms.​

6.​ Illustrate the use of backtracking to solve a graph coloring problem with 3
colors and 4 nodes.​

10-Marks Questions (6 Questions)


1.​ Solve the 4-Queens problem using backtracking and provide a state-space
tree to support your answer.​

2.​ Implement the Hamiltonian Cycle problem for a given undirected graph
using backtracking.​

3.​ Solve the 0/1 Knapsack problem using Least Cost Branch and Bound, and
explain the pruning strategy used.​

4.​ Discuss in detail the difference between FIFO and LC (Least Cost) Branch
and Bound methods with respect to a Knapsack instance.​

5.​ Prove that SAT is NP-Complete. Briefly explain the implications of this
classification in computational complexity.​

6.​ Explain the classification of computational problems into P, NP, NP-Hard, and NP-Complete. Support your answer with proper examples and diagrams.
