9. Analysis and Design of Algorithms

The document provides an overview of algorithms, including definitions, analysis, and various algorithmic strategies such as Divide and Conquer, Greedy Algorithms, and Dynamic Programming. It covers specific algorithms like Quick Sort, Merge Sort, and graph algorithms including BFS and DFS, along with their complexities and applications. Additionally, it discusses computational complexity classes and includes multiple-choice questions to reinforce understanding of algorithm concepts.

📘 1. Introduction to Algorithms & Asymptotic Notations
🔹 What is an Algorithm?
An algorithm is a finite sequence of well-defined instructions to solve a problem.

🔹 Importance of Algorithm Analysis


●​ Helps compare alternative solutions by efficiency before implementing them​

●​ Describes performance independently of hardware and implementation details​

🔹 Asymptotic Notations
Used to describe time and space complexity of algorithms in terms of input size n.

Notation Meaning Example

O(f(n)) Upper bound (worst case) Binary Search: O(log n)

Ω(f(n)) Lower bound (best case) Bubble Sort: Ω(n)

Θ(f(n)) Tight bound (average case) Merge Sort: Θ(n log n)

o(f(n)) Non-tight upper bound n is o(n log n)

ω(f(n)) Non-tight lower bound n log n is ω(n)

📘 2. Divide and Conquer Approach


🔹 Concept
The problem is divided into smaller sub-problems, solved recursively, and their solutions are
combined.

🔹 Steps
1.​ Divide – Split the problem into subproblems.​

2.​ Conquer – Solve each subproblem recursively.​

3.​ Combine – Merge the sub-solutions into one.​

📘 3. Union and Find Algorithms (Disjoint Sets)


🔹 Disjoint Sets
A collection of non-overlapping sets used in graph algorithms like Kruskal’s MST.

🔹 Operations
●​ Find(x): Returns the representative (root) of the set containing x.​

●​ Union(x, y): Merges the sets containing x and y.​

🔹 Optimizations
●​ Path Compression (during Find): flattens the tree so later finds are faster​

●​ Union by Rank: attach the root of the shorter tree under the root of the taller one (see the sketch below)​
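A minimal Python sketch of Find and Union with both optimizations; the class name DSU and the 0..n−1 integer labelling of elements are illustrative assumptions, not part of the notes:

```python
class DSU:
    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.rank = [0] * n           # rank = upper bound on tree height

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False              # already in the same set (e.g., edge would form a cycle)
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx           # union by rank: attach shorter tree under taller
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```

With both optimizations, a sequence of operations runs in near-constant amortized time per operation.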

🔹 Applications:
●​ Network connectivity​

●​ Detecting cycles in graphs​

●​ Minimum Spanning Tree (Kruskal's)​

📘 4. Quick Sort
🔹 Concept
●​ Choose a pivot.​
●​ Rearrange elements so that elements < pivot go to left and > pivot to right.​

●​ Recursively sort the subarrays.​

🔹 Time Complexity:
●​ Best/Average Case: O(n log n)​

●​ Worst Case: O(n²) — when pivot is smallest/largest element​

●​ Space Complexity: O(log n) due to recursion stack​

🔹 Properties:
●​ Not stable​

●​ In-place​
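A short Python sketch of the above; using the last element as pivot (Lomuto partition) is an assumption, since the notes do not fix a pivot rule:

```python
def partition(a, lo, hi):
    pivot = a[hi]                 # last element as pivot (Lomuto scheme)
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:          # smaller elements are swapped to the left side
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]     # pivot lands at its final sorted position
    return i

def quick_sort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        quick_sort(a, lo, p - 1)  # recursively sort both sides of the pivot
        quick_sort(a, p + 1, hi)
```

Choosing a random pivot instead makes the O(n²) worst case unlikely on any input.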

📘 5. Merge Sort
🔹 Concept
●​ Recursively divide the array into halves.​

●​ Sort and merge them back.​

🔹 Time Complexity:
●​ All Cases: O(n log n)​

●​ Space Complexity: O(n)​

🔹 Properties:
●​ Stable​

●​ Not in-place (uses additional memory)​
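A compact Python sketch; it returns a new sorted list, which is where the O(n) auxiliary space comes from:

```python
def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # divide and conquer each half
    right = merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= preserves the order of equal keys (stability)
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])          # append whichever half has leftovers
    out.extend(right[j:])
    return out
```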


📘 6. Heap and Heap Sort
🔹 Heap
●​ A complete binary tree​

●​ Two types:​

○​ Max Heap: Root ≥ Children​

○​ Min Heap: Root ≤ Children​

🔹 Heapify
●​ Procedure to maintain heap property.​

🔹 Heap Sort Steps:


1.​ Build a max heap.​

2.​ Swap root with last node, reduce heap size.​

3.​ Heapify the root again.​

4.​ Repeat until sorted.​

🔹 Time Complexity:
●​ Build Heap: O(n)​

●​ Heap Sort: O(n log n)​

🔹 Space Complexity:
●​ O(1) (in-place)​
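An in-place Python sketch of the steps above (the function names are illustrative):

```python
def heapify(a, n, i):
    # sift a[i] down until the subtree rooted at i is a valid max heap
    largest = i
    l, r = 2 * i + 1, 2 * i + 2
    if l < n and a[l] > a[largest]:
        largest = l
    if r < n and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        heapify(a, n, largest)

def heap_sort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):  # step 1: build max heap, O(n)
        heapify(a, n, i)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]      # step 2: move current max to the end
        heapify(a, end, 0)               # step 3: re-heapify the shrunken prefix
```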

📘 Greedy Algorithms
🔹 Introduction
A greedy algorithm makes locally optimal choices at each step with the hope of finding a
global optimum.

✅ Characteristics:
●​ Greedy-choice property​

●​ Optimal substructure​

🔸 1. Knapsack Problem (0/1)


●​ Given weights and values, select items to maximize value without exceeding the
weight capacity.​

🧠 Greedy Approach: Fails for 0/1 Knapsack


●​ Greedy works only for Fractional Knapsack, not for 0/1.​

✅ Fractional Knapsack (Greedy works):


●​ Sort items by value/weight ratio​

●​ Pick items greedily till full​

●​ Time Complexity: O(n log n)​
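A brief Python sketch of this strategy, assuming items arrive as (value, weight) pairs:

```python
def fractional_knapsack(items, capacity):
    # items: list of (value, weight); returns the maximum achievable value
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)  # by ratio
    total = 0.0
    for value, weight in items:
        if capacity == 0:
            break
        take = min(weight, capacity)   # whole item, or the fraction that still fits
        total += value * take / weight
        capacity -= take
    return total

# e.g. fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50) -> 240.0
```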

🔸 2. Job Sequencing with Deadlines


●​ Schedule jobs with deadlines and profits to maximize total profit.​

✅ Greedy Strategy:
●​ Sort jobs by decreasing profit​

●​ Assign jobs to the latest possible free slot before deadline​

●​ Time Complexity: O(n²) or O(n log n) with efficient DS​
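A Python sketch of the O(n²) variant, assuming jobs are (profit, deadline) pairs with integer deadlines ≥ 1:

```python
def job_sequencing(jobs):
    jobs = sorted(jobs, reverse=True)        # highest profit first
    max_slot = max(d for _, d in jobs)
    slot = [None] * (max_slot + 1)           # slot[t] holds the job run at time t
    total = 0
    for profit, deadline in jobs:
        for t in range(min(deadline, max_slot), 0, -1):
            if slot[t] is None:              # latest free slot on or before deadline
                slot[t] = profit
                total += profit
                break
    return total

# e.g. job_sequencing([(100, 2), (19, 1), (27, 2), (25, 1), (15, 3)]) -> 142
```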


🔸 3. Minimum Spanning Tree (MST)
✅ Prim’s Algorithm:
●​ Grow MST from a chosen vertex by adding the minimum edge connecting tree to an
outside vertex​

●​ Data Structure: Min Heap / Priority Queue​

●​ Time Complexity: O(E log V)​

✅ Kruskal’s Algorithm:
●​ Sort edges by weight​

●​ Add edges if they don’t form a cycle (Use Union-Find)​

●​ Time Complexity: O(E log E)​
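A compact sketch that reuses the DSU class from the Union-Find section above; representing edges as (weight, u, v) tuples over vertices 0..n−1 is an assumption:

```python
def kruskal(n, edges):
    # edges: list of (weight, u, v); returns (total cost, list of MST edges)
    dsu = DSU(n)                      # disjoint-set sketch from section 3
    cost, mst = 0, []
    for w, u, v in sorted(edges):     # O(E log E) sort dominates
        if dsu.union(u, v):           # keep the edge only if it joins two components
            cost += w
            mst.append((u, v, w))
    return cost, mst
```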

🔸 4. Huffman Coding
●​ Lossless data compression​

●​ Build binary tree based on character frequency​

✅ Steps:
1.​ Build a min-heap of frequencies​

2.​ Extract two min items and merge​

3.​ Repeat until one node remains​

4.​ Assign 0/1 as you move left/right​

✅ Time Complexity: O(n log n)
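A Python sketch using heapq; carrying partial code tables in the heap instead of explicit tree nodes is a simplification for brevity, not how the notes describe the tree:

```python
import heapq

def huffman_codes(freq):
    # freq: dict symbol -> frequency; returns dict symbol -> bit string
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)                      # tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # extract the two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}        # left branch: 0
        merged.update({s: "1" + code for s, code in c2.items()})  # right branch: 1
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

# e.g. huffman_codes({"a": 45, "b": 13, "c": 12}) -> {'c': '00', 'b': '01', 'a': '1'}
```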


📘 Graph Algorithms
🔸 1. BFS (Breadth First Search)
●​ Uses Queue (FIFO)​

●​ Explores neighbors first​

✅ Applications:
●​ Shortest path in unweighted graphs​

●​ Level-order traversal​

⏱ Time Complexity: O(V + E)
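A Python sketch with collections.deque as the FIFO queue; the graph is assumed to be an adjacency-list dict:

```python
from collections import deque

def bfs(graph, source):
    dist = {source: 0}                 # doubles as the visited set
    queue = deque([source])
    while queue:
        u = queue.popleft()            # FIFO: nearest unexplored vertex first
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1  # shortest hop count in an unweighted graph
                queue.append(v)
    return dist
```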

🔸 2. DFS (Depth First Search)


●​ Uses Stack (recursion or explicit)​

●​ Explores depth before breadth​

✅ Applications:
●​ Detecting cycles​

●​ Topological sort​

●​ Strongly Connected Components (SCCs)​

⏱ Time Complexity: O(V + E)

🔸 3. Topological Sort
●​ Applies only to Directed Acyclic Graphs (DAGs)​

●​ Linear ordering such that for every edge u→v, u comes before v​

✅ Techniques:
●​ DFS-based (store postorder in stack)​

●​ Kahn’s algorithm (BFS using in-degree)​
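A sketch of Kahn's algorithm (the in-degree technique above), assuming every vertex appears as a key of the adjacency dict:

```python
from collections import deque

def topological_sort(graph):
    indeg = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indeg[v] += 1                  # count incoming edges
    queue = deque(u for u in graph if indeg[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:              # all predecessors of v are already placed
                queue.append(v)
    if len(order) < len(graph):            # leftovers mean the graph has a cycle
        raise ValueError("not a DAG: no topological order exists")
    return order
```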


🔸 4. Strongly Connected Components (SCCs)
●​ For directed graphs​

●​ SCC = every vertex is reachable from every other vertex​

✅ Kosaraju’s Algorithm:
1.​ DFS & store finishing times​

2.​ Transpose graph​

3.​ DFS in order of decreasing finish times​

⏱ Time Complexity: O(V + E)

🔸 5. Single Source Shortest Path


✅ Bellman-Ford Algorithm
●​ Works for negative weights​

●​ Detects negative cycles​

●​ Time Complexity: O(VE)​

✅ Dijkstra’s Algorithm
●​ Only non-negative weights​

●​ Uses Min Priority Queue​

●​ Time Complexity: O((V + E) log V)​
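A Python sketch with heapq as the min-priority queue; this lazy-deletion variant matches the O((V + E) log V) bound quoted above:

```python
import heapq

def dijkstra(graph, source):
    # graph: dict vertex -> list of (neighbour, weight); weights must be non-negative
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry; u was settled earlier
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd               # relax edge (u, v)
                heapq.heappush(heap, (nd, v))
    return dist
```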

🔸 6. All Pairs Shortest Path


✅ Warshall’s Algorithm / Floyd-Warshall
●​ Dynamic programming approach​
●​ Works with negative weights but not negative cycles​

●​ Matrix-based​

●​ Time Complexity: O(V³)​
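A direct Python sketch; the input is assumed to be an n×n matrix with 0 on the diagonal and float('inf') where there is no edge:

```python
def floyd_warshall(dist):
    n = len(dist)
    for k in range(n):                     # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist                            # updated in place; dist[i][i] < 0 reveals a negative cycle
```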

📘 Dynamic Programming (DP)


🔹 What is DP?
Dynamic Programming is a method to solve problems by breaking them into overlapping
subproblems and storing solutions to subproblems to avoid recomputation
(memoization/tabulation).

✅ Key Concepts:
●​ Optimal Substructure: Optimal solution can be built from optimal solutions of
subproblems.​

●​ Overlapping Subproblems: Same subproblems are solved multiple times.​

🔸 1. Matrix Chain Multiplication


●​ Goal: Determine the most efficient way to multiply a sequence of matrices (i.e., find
the parenthesization with the least number of scalar multiplications).​

●​ Input: Dimensions of matrices.​

●​ DP Approach: dp[i][j] stores minimum cost of multiplying from matrix i to j.​

●​ Time Complexity: O(n³)​
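A Python sketch of this table, where matrix i has dimensions p[i−1] × p[i] for a dimension list p:

```python
def matrix_chain(p):
    n = len(p) - 1                               # number of matrices
    dp = [[0] * (n + 1) for _ in range(n + 1)]   # dp[i][j]: min scalar multiplications
    for length in range(2, n + 1):               # solve subchains short to long
        for i in range(1, n - length + 2):
            j = i + length - 1
            dp[i][j] = min(
                dp[i][k] + dp[k + 1][j] + p[i - 1] * p[k] * p[j]
                for k in range(i, j)             # k = split point of the parenthesization
            )
    return dp[1][n]

# e.g. matrix_chain([10, 30, 5, 60]) -> 4500, from (A1 x A2) x A3
```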

🔸 2. Longest Common Subsequence (LCS)


●​ Given two strings, find the length of the longest subsequence present in both.​

●​ DP Approach:​
○​ dp[i][j] = LCS of first i characters of X and first j of Y.​

●​ Time Complexity: O(m × n)​
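A Python sketch of the tabulation described above:

```python
def lcs_length(x, y):
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]   # dp[i][j]: LCS of x[:i] and y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # matching characters extend the LCS
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

# e.g. lcs_length("ABCBDAB", "BDCABA") -> 4 (one LCS is "BCBA")
```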

🔸 3. 0/1 Knapsack
●​ Given weights and values of n items and a capacity W, determine the max value that
can be carried in the knapsack.​

●​ Cannot take fractional items (0/1 decision).​

●​ DP Table: dp[i][w] = max value with first i items and capacity w.​

●​ Time Complexity: O(n × W)​
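A Python sketch of that table; integer weights are an assumption the DP indexing requires:

```python
def knapsack_01(values, weights, W):
    n = len(values)
    dp = [[0] * (W + 1) for _ in range(n + 1)]   # dp[i][w]: best value, first i items, capacity w
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]              # option 1: skip item i
            if weights[i - 1] <= w:              # option 2: take it entirely (no fractions)
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][W]

# e.g. knapsack_01([60, 100, 120], [10, 20, 30], 50) -> 220
```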

📘 Backtracking
🔹 What is Backtracking?
A technique for solving constraint satisfaction problems by exploring possible options and
backtracking when a constraint is violated.

🔸 1. 8-Queens Problem
●​ Place 8 queens on an 8×8 board such that no two queens attack each other.​

●​ Use recursion with backtracking to place one queen per row and check column and
diagonal safety.​
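A Python sketch that places one queen per row; tracking attacked columns and diagonals in sets (for O(1) safety checks) is an optimization assumed here, not spelled out in the notes:

```python
def solve_queens(n=8):
    cols, diag1, diag2 = set(), set(), set()   # occupied columns / diagonals
    board = []                                 # board[r] = column of queen in row r

    def place(row):
        if row == n:
            return True                        # all n queens placed
        for col in range(n):
            if col in cols or row - col in diag1 or row + col in diag2:
                continue                       # square is attacked: try next column
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            board.append(col)
            if place(row + 1):
                return True
            board.pop()                        # backtrack: undo and try another column
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)
        return False

    return board if place(0) else None
```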

🔸 2. Sum of Subsets
●​ Find all subsets of a set of positive integers whose sum equals a given number.​

●​ Explore all subset combinations using recursion and backtrack if sum exceeds target.​
🔸 3. Graph Coloring
●​ Assign minimum number of colors to graph vertices so that no two adjacent vertices
have the same color.​

●​ Backtrack if no valid color is found for a vertex.​

🔸 4. Hamiltonian Cycle
●​ Find a cycle in a graph that visits each vertex once and returns to the starting point.​

●​ Backtrack when adding a vertex leads to a dead end (the partial path cannot be
extended to a Hamiltonian cycle).​

📘 Computational Complexity
🔹 Complexity Measures:
●​ Time Complexity: Steps required as input size increases.​

●​ Space Complexity: Memory used by an algorithm.​

🔸 Polynomial vs. Non-Polynomial Problems


●​ P: Problems solvable in polynomial time (e.g., O(n²), O(n³)).​

●​ NP: Solutions can be verified in polynomial time.​

●​ NP-Hard: As hard as the hardest problems in NP, not necessarily in NP.​

●​ NP-Complete: Both NP and NP-Hard.​

🔸 Key Definitions:
Class Description
P Problems solvable in polynomial time

NP Solutions verifiable in polynomial time

NP-Complete In NP and NP-Hard (e.g., SAT, Hamiltonian Cycle)

NP-Hard Not necessarily in NP, but as hard as NP problems

Multiple Choice Questions: Algorithms

1.​ Algorithms Introduction: Asymptotic Notations​



Which of the following notations represents the asymptotic upper bound of an
algorithm's running time? a) Ω (Omega) b) Θ (Theta) c) O (Big-O) d) o (Small-o)​

Answer: c) O (Big-O)​

2.​ If an algorithm has a running time of T(n) = 3n² + 2n + 5, its asymptotic time complexity
is: a) O(n) b) O(n log n) c) O(n²) d) O(n³)​

Answer: c) O(n²)​

3.​ The notation f(n)=Ω(g(n)) means that: a) f(n) grows asymptotically no faster than
g(n). b) f(n) grows asymptotically at least as fast as g(n). c) f(n) grows asymptotically
at the same rate as g(n). d) f(n) grows asymptotically strictly faster than g(n).​

Answer: b) f(n) grows asymptotically at least as fast as g(n).​

4.​ Which of the following describes the tight bound of an algorithm's running time? a)
Big-O b) Big-Omega c) Big-Theta d) Small-o​

Answer: c) Big-Theta​

5.​ Which of the following is true regarding O(1) complexity? a) The running time grows
linearly with the input size. b) The running time is constant, regardless of the input
size. c) The running time grows logarithmically with the input size. d) The running
time grows quadratically with the input size.​

Answer: b) The running time is constant, regardless of the input size.​

6.​ Divide and Conquer Approach​



Which of the following sorting algorithms is an example of the Divide and Conquer
approach? a) Insertion Sort b) Bubble Sort c) Quick Sort d) Selection Sort​

Answer: c) Quick Sort​

7.​ In the Union-Find algorithm, what is the primary purpose of path compression? a) To
balance the tree structure. b) To reduce the number of union operations. c) To flatten
the tree and speed up future find operations. d) To detect cycles in a graph.​

Answer: c) To flatten the tree and speed up future find operations.​

8.​ What is the worst-case time complexity of Quick Sort? a) O(n log n) b) O(n) c) O(n²)
d) O(log n)​

Answer: c) O(n²)​

9.​ Merge Sort divides the array into two halves, sorts them recursively, and then merges
them. What is its time complexity in the worst case? a) O(n) b) O(n²) c) O(n log n) d)
O(log n)​

Answer: c) O(n log n)​

10.​A Heap is a specialized tree-based data structure that satisfies the heap property. In
a Max-Heap, which element is always at the root? a) The smallest element b) The
largest element c) A random element d) The first inserted element​

Answer: b) The largest element​

11.​Greedy Algorithms​

The Knapsack problem with fractional items (allowing fractions of items) can be
optimally solved using which algorithmic paradigm? a) Dynamic Programming b)
Greedy Approach c) Divide and Conquer d) Backtracking​

Answer: b) Greedy Approach​

12.​In the Job Sequencing with Deadlines problem, what is the primary greedy choice
made? a) Select jobs with the earliest deadline first. b) Select jobs with the highest
profit first. c) Select jobs with the shortest duration first. d) Select jobs that fit into the
available slots first.​

Answer: b) Select jobs with the highest profit first.​

13.​Which algorithm uses a priority queue to always select the edge with the minimum
weight when constructing a Minimum Spanning Tree? a) Kruskal's algorithm b)
Prim's algorithm c) Dijkstra's algorithm d) Bellman-Ford algorithm​

Answer: b) Prim's algorithm​

14.​Kruskal's algorithm for Minimum Spanning Tree relies on which data structure for
efficiently checking cycles? a) Adjacency List b) Adjacency Matrix c) Disjoint Set
Union (Union-Find) d) Hash Table​

Answer: c) Disjoint Set Union (Union-Find)​

15.​Huffman codes are used for: a) Finding the shortest path in a graph. b) Constructing
a Minimum Spanning Tree. c) Lossless data compression. d) Solving the Knapsack
problem.​

Answer: c) Lossless data compression.​

16.​Graph Algorithms​

Which graph traversal algorithm explores as far as possible along each branch
before backtracking? a) Breadth-First Search (BFS) b) Depth-First Search (DFS) c)
Dijkstra's algorithm d) Prim's algorithm​

Answer: b) Depth-First Search (DFS)​

17.​BFS is typically implemented using which data structure? a) Stack b) Queue c)


Priority Queue d) Hash Map​

Answer: b) Queue​

18.​A Topological Sort is possible only for which type of graph? a) Undirected Graph b)
Weighted Graph c) Directed Acyclic Graph (DAG) d) Complete Graph​

Answer: c) Directed Acyclic Graph (DAG)​

19.​Which algorithm can find the shortest path from a single source to all other vertices in
a graph with negative edge weights? a) Dijkstra's algorithm b) Bellman-Ford
algorithm c) Floyd-Warshall algorithm d) Prim's algorithm​

Answer: b) Bellman-Ford algorithm​

20.​What is the time complexity of Dijkstra's algorithm with a min-priority queue (e.g.,
using a Fibonacci heap)? a) O(V²) b) O(E log V) c) O(V + E log V) d) O(E + V log V)​

Answer: d) O(E + V log V)​

21.​The Warshall's algorithm is used to find: a) Single Source Shortest Paths b) All Pairs
Shortest Paths c) Minimum Spanning Tree d) Strongly Connected Components​

Answer: b) All Pairs Shortest Paths​

22.​The Warshall's algorithm (also known as Floyd-Warshall) uses which algorithmic


paradigm? a) Greedy Approach b) Divide and Conquer c) Dynamic Programming d)
Backtracking​

Answer: c) Dynamic Programming​

23.​Strongly Connected Components (SCCs) are typically found using which graph
traversal algorithm as a core component? a) BFS b) DFS c) Dijkstra's d) Kruskal's​

Answer: b) DFS​

24.​If a graph has a cycle, which algorithm will fail to produce a valid output? a) BFS b)
DFS c) Topological Sort d) Dijkstra's algorithm (with non-negative weights)​

Answer: c) Topological Sort​

25.​The time complexity of finding Strongly Connected Components using Kosaraju's or
Tarjan's algorithm is typically: a) O(V²) b) O(V + E) c) O(E log V) d) O(V log V)​

Answer: b) O(V + E)​

26.​Dynamic Programming​

Which problem is an example of overlapping subproblems, a characteristic feature of
Dynamic Programming? a) Quick Sort b) Merge Sort c) Fibonacci sequence
calculation (naive recursive) d) Binary Search​

Answer: c) Fibonacci sequence calculation (naive recursive)​

27.​In Matrix Chain Multiplication, the goal is to: a) Multiply all matrices in a given chain.
b) Find the optimal parenthesization of a chain of matrices to minimize the number of
scalar multiplications. c) Find the largest product of matrices. d) Determine if a chain
of matrices can be multiplied.​

Answer: b) Find the optimal parenthesization of a chain of matrices to
minimize the number of scalar multiplications.​

28.​The Longest Common Subsequence (LCS) problem finds: a) The longest common
string within two given strings. b) The longest string that is a subsequence of two
given strings. c) The shortest common subsequence of two given strings. d) The
number of common characters between two strings.​

Answer: b) The longest string that is a subsequence of two given strings.​

29.​The 0/1 Knapsack problem differs from the fractional knapsack problem because: a)
Items can be taken in fractions. b) Items must be taken entirely (0 or 1). c) There is
no weight limit. d) The goal is to minimize profit.​

Answer: b) Items must be taken entirely (0 or 1).​

30.​The time complexity of solving the 0/1 Knapsack problem using dynamic
programming is: a) O(N) b) O(W) (where W is knapsack capacity) c) O(N⋅W) d)
O(N²)​

Answer: c) O(N⋅W)​

31.​Backtracking​

The 8-Queen Problem aims to place 8 queens on an 8×8 chessboard such that: a)
All queens are in a straight line. b) No two queens attack each other. c) Queens
attack as many pieces as possible. d) Queens are placed randomly.​

Answer: b) No two queens attack each other.​

32.​Which algorithmic technique is typically used to solve the Sum of Subsets problem?
a) Dynamic Programming b) Greedy Approach c) Backtracking d) Divide and
Conquer​

Answer: c) Backtracking​

33.​In graph coloring, what is the objective? a) To find the shortest path between two
nodes. b) To color the vertices of a graph such that no two adjacent vertices have the
same color, using the minimum number of colors. c) To assign a unique color to each
vertex. d) To color the edges of a graph.​

Answer: b) To color the vertices of a graph such that no two adjacent vertices
have the same color, using the minimum number of colors.​

34.​A Hamiltonian Cycle in a graph is a cycle that: a) Visits every vertex exactly once and
returns to the starting vertex. b) Visits every edge exactly once. c) Connects all
vertices with the minimum number of edges. d) Passes through a specific set of
vertices.​

Answer: a) Visits every vertex exactly once and returns to the starting vertex.​

35.​Backtracking is a general algorithmic technique that builds solutions incrementally


and: a) Explores all possible paths blindly. b) Prunes branches that cannot lead to a
valid solution. c) Always chooses the locally optimal choice. d) Divides the problem
into smaller subproblems.​

Answer: b) Prunes branches that cannot lead to a valid solution.​

36.​Computational Complexity​

A problem is considered to be in class P if it can be solved by a deterministic Turing
machine in: a) Exponential time b) Polynomial time c) Logarithmic time d) Constant
time​

Answer: b) Polynomial time​

37.​NP stands for: a) Non-Polynomial b) Nondeterministic Polynomial c) Not Possible d)


New Problem​

Answer: b) Nondeterministic Polynomial​

38.​A problem X is NP-Hard if: a) It can be solved in polynomial time. b) Any NP problem
can be reduced to X in polynomial time. c) It is in NP and also NP-Complete. d) It is
equivalent to a P problem.​

Answer: b) Any NP problem can be reduced to X in polynomial time.​

39.​Which of the following statements is true regarding NP-Complete problems? a) They


are all solvable in polynomial time. b) They are NP-Hard and also in NP. c) They are
generally considered easier than P problems. d) There is no known way to verify a
solution in polynomial time.​

Answer: b) They are NP-Hard and also in NP.​

40.​Which statement best describes the relationship between P and NP? a) P is a subset
of NP. b) NP is a subset of P. c) P and NP are disjoint sets. d) P equals NP (P=NP)
has been proven true.​

Answer: a) P is a subset of NP.​

41.​Algorithms Introduction: Asymptotic Notations​



Which of the following functions grows fastest as n→∞? a) n² b) n log n c) 2ⁿ d) n!​

Answer: d) n!​

42.​If f(n) = O(g(n)), it implies that: a) f(n) grows at the same rate as g(n). b) f(n) grows
strictly faster than g(n). c) There exist positive constants c and n₀ such that
0 ≤ f(n) ≤ c⋅g(n) for all n ≥ n₀. d) There exist positive constants c and n₀ such that
f(n) ≥ c⋅g(n) for all n ≥ n₀.​

Answer: c) There exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c⋅g(n) for
all n ≥ n₀.​

43.​Which of the following is true? a) n! = O(nⁿ) b) nⁿ = O(n!) c) n! = Θ(nⁿ) d) n² = Ω(n³)​

Answer: a) n! = O(nⁿ)​

44.​What is the time complexity of searching an element in a sorted array using binary
search? a) O(n) b) O(log n) c) O(n log n) d) O(1)​

Answer: b) O(log n)​

45.​The "little-o" notation (o) describes a(n) _________ upper bound. a) inclusive b)
exclusive c) tight d) lower​

Answer: b) exclusive​

46.​Divide and Conquer Approach​



Finding the maximum and minimum elements in an array using the divide and
conquer approach takes how many comparisons in the worst case? a) 2n b) n − 1 c)
3n/2 − 2 d) n log n​

Answer: c) 3n/2 − 2​

47.​In Quick Sort, the worst-case time complexity occurs when the pivot element: a)
Always divides the array into two equal halves. b) Is always the smallest or largest
element. c) Is chosen randomly. d) Is always the median.​

Answer: b) Is always the smallest or largest element.​

48.​What is the space complexity of Merge Sort? a) O(1) b) O(log n) c) O(n) d) O(n²)​

Answer: c) O(n) (due to the auxiliary array for merging)​

49.​Which of the following is NOT typically a phase in a Divide and Conquer algorithm?
a) Divide b) Conquer c) Combine d) Optimize​

Answer: d) Optimize​

50.​In a Disjoint Set Union (DSU) data structure, the find operation with path
compression helps in: a) Reducing the height of the trees. b) Increasing the number
of sets. c) Performing union operations faster. d) Sorting the elements within a set.​

Answer: a) Reducing the height of the trees.​

51.​Greedy Algorithms​

The Activity Selection Problem, where the goal is to select the maximum number of
non-overlapping activities, is best solved using: a) Dynamic Programming b) Greedy
Approach c) Backtracking d) Divide and Conquer​

Answer: b) Greedy Approach​

52.​In Prim's algorithm for Minimum Spanning Tree, the set of vertices A initially contains:
a) All vertices in the graph. b) No vertices. c) An arbitrary starting vertex. d) All
vertices with even degrees.​

Answer: c) An arbitrary starting vertex.​

53.​Kruskal's algorithm builds the MST by: a) Adding edges to a growing tree, always
choosing the minimum weight edge connecting a vertex in the tree to one outside the
tree. b) Sorting all edges by weight in ascending order and adding them if they don't
form a cycle. c) Iteratively relaxing edges until all shortest paths are found. d)
Performing a depth-first traversal of the graph.​

Answer: b) Sorting all edges by weight in ascending order and adding them if
they don't form a cycle.​

54.​The time complexity of Prim's algorithm using a min-priority queue (binary heap) is:
a) O(V²) b) O(E log V) c) O(V + E log V) d) O(E log E)​

Answer: c) O(V + E log V)​

55.​Huffman coding is a prefix code, meaning: a) No code is a suffix of another code. b)


No code is a prefix of another code. c) All codes have the same length. d) Codes are
assigned alphabetically.​

Answer: b) No code is a prefix of another code.​

56.​Graph Algorithms​

Which algorithm is guaranteed to find the shortest path in a graph with non-negative
edge weights? a) Bellman-Ford b) Floyd-Warshall c) Dijkstra's d) DFS​

Answer: c) Dijkstra's​

57.​The time complexity of DFS for an adjacency list representation is: a) O(V) b) O(E) c)
O(V + E) d) O(V²)​

Answer: c) O(V + E)​

58.​What is the main difference between BFS and DFS in terms of traversal order? a)
BFS explores neighbors before going deeper; DFS explores deeper before
backtracking. b) BFS uses a stack; DFS uses a queue. c) BFS is used for shortest
paths; DFS is not. d) BFS works only on directed graphs; DFS works on undirected.​

Answer: a) BFS explores neighbors before going deeper; DFS explores deeper
before backtracking.​

59.​A graph has a topological sort if and only if it is a: a) Connected graph. b) Complete
graph. c) Directed Acyclic Graph (DAG). d) Bipartite graph.​

Answer: c) Directed Acyclic Graph (DAG).​

60.​The Bellman-Ford algorithm can detect: a) A Hamiltonian cycle. b) A negative cycle.


c) A minimum spanning tree. d) A maximum flow path.​

Answer: b) A negative cycle.​

61.​For a graph with V vertices and E edges, the Floyd-Warshall algorithm has a time
complexity of: a) O(V²) b) O(V⋅E) c) O(V³) d) O(E²)​

Answer: c) O(V³)​

62.​What is the primary application of finding Strongly Connected Components? a)


Finding the shortest path between two nodes. b) Detecting cycles in an undirected
graph. c) Analyzing the structure of directed graphs, especially for reachability. d)
Optimizing network flow.​

Answer: c) Analyzing the structure of directed graphs, especially for
reachability.​

63.​Which algorithm is best suited for finding the shortest path in an unweighted graph?
a) Dijkstra's algorithm b) Bellman-Ford algorithm c) Breadth-First Search (BFS) d)
Floyd-Warshall algorithm​

Answer: c) Breadth-First Search (BFS)​

64.​The adjacency matrix representation of a graph is generally preferred when: a) The


graph is sparse (few edges). b) The graph is dense (many edges). c) Memory usage
is a critical concern for sparse graphs. d) Finding neighbors is a rare operation.​

Answer: b) The graph is dense (many edges).​

65.​What is the time complexity of finding all connected components in an undirected
graph using BFS or DFS? a) O(V) b) O(E) c) O(V + E) d) O(V²)​

Answer: c) O(V + E)​

66.​Dynamic Programming​

Dynamic Programming is suitable for problems that exhibit: a) Optimal substructure
and greedy choice property. b) Optimal substructure and overlapping subproblems. c)
Overlapping subproblems and greedy choice property. d) Independent subproblems.​

Answer: b) Optimal substructure and overlapping subproblems.​

67.​In the context of the Longest Common Subsequence, a subsequence does not
require: a) The elements to be in the same relative order. b) The elements to be
contiguous (adjacent). c) The elements to be from the original sequence. d) The
elements to be unique.​

Answer: b) The elements to be contiguous (adjacent).​

68.​The memoization technique is used in dynamic programming to: a) Solve problems


without recursion. b) Store the results of expensive function calls and return the
cached result when the same inputs occur again. c) Divide the problem into
independent subproblems. d) Always choose the locally optimal solution.​

Answer: b) Store the results of expensive function calls and return the cached
result when the same inputs occur again.​

69.​What is the difference between top-down (memoization) and bottom-up (tabulation)


dynamic programming? a) Top-down uses iteration; bottom-up uses recursion. b)
Top-down is generally faster; bottom-up is generally slower. c) Top-down solves
subproblems as needed (recursive); bottom-up solves all subproblems iteratively in a
defined order. d) Top-down requires more memory; bottom-up requires less memory.​

Answer: c) Top-down solves subproblems as needed (recursive); bottom-up
solves all subproblems iteratively in a defined order.​

70.​For the 0/1 Knapsack problem, if we have n items and a capacity W, the DP table
size is typically: a) N×N b) N×W c) W×W d) N+W​

Answer: b) N×W​
71.​The principle of optimality states that: a) An optimal solution to a problem contains
optimal solutions to its subproblems. b) The greedy choice always leads to an
optimal solution. c) Every problem can be solved by dividing it into smaller,
independent subproblems. d) All subproblems must be solved before solving the
main problem.​

Answer: a) An optimal solution to a problem contains optimal solutions to its
subproblems.​

72.​The edit distance (Levenshtein distance) between two strings can be computed
using: a) Greedy algorithms b) Divide and Conquer c) Dynamic Programming d)
Backtracking​

Answer: c) Dynamic Programming​

73.​The subset sum problem (determining if a subset of a given set of numbers sums to
a target value) can be solved using: a) Greedy algorithms b) Dynamic Programming
c) BFS d) Prim's algorithm​

Answer: b) Dynamic Programming (also Backtracking)​

74.​If the number of columns in matrix A is m and the number of rows in matrix B is p, for
matrix multiplication A×B to be valid, what condition must hold? a) m = p b) m ≠ p c)
m > p d) m < p​

Answer: a) m = p​

75.​When solving the Matrix Chain Multiplication problem using dynamic programming,
the subproblems involve finding the minimum cost of multiplying: a) Individual
matrices. b) Pairs of adjacent matrices. c) Subchains of matrices. d) The entire chain.​

Answer: c) Subchains of matrices.​

76.​Backtracking​

In backtracking, if a partial solution cannot be extended to a complete solution, the
algorithm: a) Continues to explore that path anyway. b) Stops immediately and
reports failure. c) Backtracks to the last decision point and tries another option. d)
Starts over from the beginning.​

Answer: c) Backtracks to the last decision point and tries another option.​

77.​The N-Queen problem for N queens on an N×N chessboard is a classic example


solved by: a) Dynamic Programming b) Greedy Algorithms c) Backtracking d) Divide
and Conquer​

Answer: c) Backtracking​

78.​Which of the following problems typically involves finding all possible solutions rather
than just one optimal solution? a) Shortest Path b) Minimum Spanning Tree c)
Sudoku Solver (finding one solution) d) Sum of Subsets (finding all subsets that sum
to target)​

Answer: d) Sum of Subsets (finding all subsets that sum to target)​

79.​The Graph Coloring problem aims to find the: a) Chromatic number of a graph. b)
Shortest path between two nodes. c) Maximum flow in a network. d) Longest cycle in
a graph.​

Answer: a) Chromatic number of a graph.​

80.​A Hamiltonian path visits every vertex in a graph: a) Exactly once. b) At least once. c)
Multiple times. d) Only once if it's a cycle.​

Answer: a) Exactly once.​

81.​The Traveling Salesperson Problem (TSP) is generally solved using: a) Greedy


approach (for approximation) or Dynamic Programming / Backtracking (for exact) b)
BFS c) DFS d) Prim's algorithm​

Answer: a) Greedy approach (for approximation) or Dynamic Programming /
Backtracking (for exact)​

82.​In backtracking, a "state-space tree" represents: a) The possible states of a Turing


machine. b) All possible solutions or partial solutions. c) The memory usage of the
algorithm. d) The optimal path to a solution.​

Answer: b) All possible solutions or partial solutions.​

83.​The isSafe function in the 8-Queen problem checks for attacks along: a) Rows and
columns only. b) Diagonals only. c) Rows, columns, and diagonals. d) Adjacent
squares only.​

Answer: c) Rows, columns, and diagonals.​

84.​Backtracking is often used for problems where: a) A simple greedy choice


guarantees optimality. b) The search space is large and needs systematic
exploration. c) Optimal substructure is the only property. d) Inputs are always sorted.​

Answer: b) The search space is large and needs systematic exploration.​
85.​Which of the following is typically a characteristic of backtracking? a) It always finds
the optimal solution efficiently. b) It explores a search tree by trying to extend a partial
solution. c) It uses a table to store subproblem results. d) It is limited to solving only
optimization problems.​

Answer: b) It explores a search tree by trying to extend a partial solution.​

86.​Computational Complexity​

If a problem can be verified in polynomial time, it belongs to which class? a) P b) NP
c) NP-Hard d) NP-Complete​

Answer: b) NP​

87.​Which of the following is a classic example of an NP-Complete problem? a) Sorting


b) Shortest Path (with non-negative weights) c) Hamiltonian Cycle d) Finding
Maximum Element​

Answer: c) Hamiltonian Cycle​

88.​The question "P = NP?" asks whether: a) All problems in P are also in NP. b) All
problems in NP can be solved in polynomial time by a deterministic algorithm. c) All
problems in NP-Hard are also in NP-Complete. d) All problems can be solved in
polynomial time.​

Answer: b) All problems in NP can be solved in polynomial time by a
deterministic algorithm.​

89.​If problem A is NP-Complete and problem B is NP-Hard, and A can be reduced to B


in polynomial time, then: a) B is also NP-Complete. b) B is in P. c) B is not
necessarily NP-Complete. d) A is easier than B.​

Answer: c) B is not necessarily NP-Complete. (It could be NP-Complete, but
it's not guaranteed without knowing if B is in NP).​

90.​The "satisfiability problem" (SAT) is a well-known: a) P problem b) NP-Complete


problem c) Undecidable problem d) Linear time problem​

Answer: b) NP-Complete problem​

91.​What is the significance of proving a problem to be NP-Complete? a) It means the


problem has a polynomial-time algorithm. b) It suggests that finding an efficient
(polynomial-time) algorithm is unlikely. c) It means the problem has no solution. d) It
proves that P = NP.​

Answer: b) It suggests that finding an efficient (polynomial-time) algorithm is
unlikely.​

92.​A decision problem is one where the output is always: a) An integer b) A string c)
Yes/No (or True/False) d) An optimal solution​

Answer: c) Yes/No (or True/False)​

93.​The class of problems for which a solution can be verified in polynomial time is: a) P
b) NP c) NP-Hard d) EXP (Exponential time)​

Answer: b) NP​

94.​If a problem is known to be in P, it implies that it is also in: a) NP b) NP-Hard c)


NP-Complete d) None of the above (not necessarily)​

Answer: a) NP​

95.​What is the fundamental difference between an NP-Hard problem and an


NP-Complete problem? a) NP-Hard problems are always harder than NP-Complete
problems. b) NP-Complete problems are a subset of NP-Hard problems that are also
in NP. c) NP-Hard problems have polynomial-time solutions, while NP-Complete do
not. d) NP-Complete problems are always solvable, while NP-Hard problems are not.​

Answer: b) NP-Complete problems are a subset of NP-Hard problems that are
also in NP.​

96.​Which of the following is an example of an NP-Hard problem that is not known to be


in NP? a) Traveling Salesperson Problem (optimization version) b) Sorting c)
Searching d) Matrix multiplication​

Answer: a) Traveling Salesperson Problem (optimization version)​

97.​If P = NP, then: a) All NP-Complete problems would have polynomial-time solutions.
b) All NP-Hard problems would be in P. c) All problems would be solvable in
logarithmic time. d) No problem would be computationally intractable.​

Answer: a) All NP-Complete problems would have polynomial-time solutions.​

98.​The "reduction" process in complexity theory means: a) Reducing the size of the
input. b) Transforming one problem into another in polynomial time. c) Making an
algorithm more efficient. d) Converting an NP-Hard problem to a P problem.​

Answer: b) Transforming one problem into another in polynomial time.​

99.​An algorithm with exponential time complexity O(2ⁿ) is generally considered: a)
Highly efficient b) Tractable for large inputs c) Intractable for large inputs d) Always
better than O(n²)​

Answer: c) Intractable for large inputs​

100.​ A polynomial time algorithm is one whose running time is bounded by: a) O(cⁿ)
for some constant c > 1. b) O(nᶜ) for some constant c ≥ 0. c) O(log n). d) O(n!).​

Answer: b) O(nᶜ) for some constant c ≥ 0.​
