
Algorithm Design and Analysis: Important Viva Questions for Exams

The document outlines key concepts in algorithm design and analysis, including definitions and examples of algorithms, time complexity, and various sorting and searching techniques. It discusses important algorithms such as Bubble Sort, Quick Sort, Dijkstra's Algorithm, and dynamic programming, along with their applications and efficiencies. Additionally, it covers greedy algorithms, recursion, and divide-and-conquer strategies, providing insights into their use in solving optimization and computational problems.

Uploaded by

Sai Nithesh

ALGORITHM DESIGN AND ANALYSIS: IMPORTANT VIVA QUESTIONS FOR EXAMS

1. What is an Algorithm?
A step-by-step procedure to solve a problem.

2. What is Asymptotic Notation?


A mathematical representation to describe the performance (time/space) of algorithms as input
size grows (e.g., Big-O, Omega, Theta).

3. What is the Time Complexity of an Algorithm?


The measure of the time an algorithm takes to complete based on input size.

4. Bubble Sort Algorithm:


Repeatedly swap adjacent elements if they are in the wrong order.
Example: Sort [4, 3, 1, 2] → [3, 1, 2, 4] → [1, 2, 3, 4].
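
The pass-by-pass behavior above can be sketched in Python (a minimal illustration; the function name is my own, and this is not tuned for performance):

```python
def bubble_sort(a):
    """Repeatedly swap adjacent out-of-order pairs; each pass bubbles the
    largest remaining value to the end of the unsorted region."""
    a = list(a)                      # sort a copy, leave the input unchanged
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):   # the last i elements are already in place
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:              # a pass with no swaps means we are done
            break
    return a

print(bubble_sort([4, 3, 1, 2]))     # prints [1, 2, 3, 4]
```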

5. Selection Sort Algorithm:


Repeatedly find the minimum element and move it to the sorted portion.
Example: [4, 3, 1, 2] → [1, 3, 4, 2] → [1, 2, 4, 3] → [1, 2, 3, 4].
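
A matching Python sketch of the selection steps (illustrative naming):

```python
def selection_sort(a):
    """Repeatedly select the minimum of the unsorted suffix and swap it
    into the next position of the sorted prefix."""
    a = list(a)
    for i in range(len(a) - 1):
        m = min(range(i, len(a)), key=a.__getitem__)  # index of smallest remaining
        a[i], a[m] = a[m], a[i]
    return a

print(selection_sort([4, 3, 1, 2]))  # prints [1, 2, 3, 4]
```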

6. Quick Sort Algorithm:


Choose a pivot, partition the array around it, and recursively sort partitions.

7. What is NP-Complete?
Problems that are both in NP (their solutions are verifiable in polynomial time) and at least as
hard as every problem in NP.

8. Time vs Space Efficiency:

Time Efficiency: Speed of execution.

Space Efficiency: Memory usage.

9. What is the Order of Algorithm?


Describes the growth rate of an algorithm (e.g., O(n), O(log n)).

10. What is Brute Force?


A straightforward, exhaustive approach to solve problems.

11. Merge Sort Algorithm:


Divide the array into halves, sort each half, and merge them.
Example: [4, 3, 1, 2] → [4, 3], [1, 2] → [3, 4], [1, 2] → [1, 2, 3, 4].
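
The divide/merge steps above can be written as a short recursive sketch (names are my own):

```python
def merge_sort(a):
    """Split the list in half, sort each half recursively, then merge."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # take the smaller head each time
        if left[i] <= right[j]:               # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append the leftover tail

print(merge_sort([4, 3, 1, 2]))              # prints [1, 2, 3, 4]
```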

12. Linear Search:


Search sequentially in a list.

Advantage: Simple, works on unsorted lists.

Disadvantage: Slow for large datasets.

13. Binary Search:


Search by dividing a sorted list into halves until the element is found.

14. Insertion Sort Algorithm:


Build the sorted array one element at a time by inserting elements in the correct position.
Example: [4, 3, 1, 2] → [3, 4, 1, 2] → [1, 3, 4, 2] → [1, 2, 3, 4].
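
A minimal Python sketch of the insertion steps (illustrative naming):

```python
def insertion_sort(a):
    """Grow a sorted prefix by inserting each new element at its place."""
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger elements one slot right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([4, 3, 1, 2]))   # prints [1, 2, 3, 4]
```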

15. What is the Optimal Solution?


The best possible solution under given constraints.

16. Why Hashing?


Provides fast data access using keys.

17. Encryption Algorithm:


Converts plaintext into ciphertext using a key, ensuring secure communication.

18. Dynamic Programming:


Solves problems by breaking them into overlapping subproblems and storing results.
19. Knapsack Problem:
Maximize value within a weight limit by selecting items optimally.

20. Warshall's Algorithm:


Finds the transitive closure of a graph.
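
A minimal sketch of Warshall's algorithm, assuming the graph is given as a boolean adjacency matrix (the representation and function name are my choices):

```python
def transitive_closure(adj):
    """Warshall's algorithm: reach[i][j] ends up truthy iff vertex j is
    reachable from vertex i. Runs in O(n^3) for n vertices."""
    n = len(adj)
    reach = [row[:] for row in adj]            # copy the adjacency matrix
    for k in range(n):                         # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

# Edges 0->1 and 1->2 imply the closure also contains 0->2.
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
closure = transitive_closure(adj)
```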

21. Greedy Algorithm:


Makes the best choice at each step for local optimization.

22. Advantages of Greedy Algorithm:


Simple, efficient, and works well for specific problems like MSTs.

23. What is Minimum Spanning Tree (MST)?


A tree connecting all vertices in a graph with the minimum total edge weight.

24. Kruskal’s Algorithm:


Sort edges by weight and add them to the MST if they don't form a cycle.
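
A compact sketch of that procedure, assuming vertices 0..n-1 and edges given as (weight, u, v) tuples (both are my assumptions), with union-find doing the cycle check:

```python
def kruskal(n, edges):
    """Kruskal's MST on vertices 0..n-1; edges are (weight, u, v) tuples.
    Union-find tracks components so added edges never form a cycle."""
    parent = list(range(n))

    def find(x):                       # find the component root, compressing paths
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, mst = 0, []
    for w, u, v in sorted(edges):      # scan edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                   # different components: safe to add
            parent[ru] = rv
            total += w
            mst.append((u, v, w))
    return total, mst

# Triangle 0-1 (1), 1-2 (2), 0-2 (3): the MST keeps the two lightest edges.
total, mst = kruskal(3, [(1, 0, 1), (2, 1, 2), (3, 0, 2)])  # total == 3
```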

25. What is Sorting Network?


A hardware-efficient algorithm for sorting numbers using comparisons and swaps.

26. Floyd's Algorithm:


Finds the shortest paths between all pairs of vertices in a graph.

27. Prim's Algorithm:


Builds an MST by starting with one vertex and adding the smallest edge.

28. Efficiency of Prim’s Algorithm:


Runs in O(V²) for dense graphs and O(E log V) with a priority queue.

29. Dijkstra's Algorithm:


Finds the shortest path from a source vertex to all others in a weighted graph.
30. What are Huffman Trees?
Binary trees used in optimal prefix encoding.

31. What is Huffman Code?


A lossless compression technique assigning shorter codes to frequent characters.

32. Advantages of Huffman Encoding:


Reduces storage size, efficient for data compression.

33. Dynamic Huffman Coding:


Updates Huffman codes dynamically as data is processed.

34. What is Backtracking?


Searches for solutions by exploring and abandoning invalid paths.

35. Dynamic Programming vs Greedy:

DP: Solves subproblems optimally and combines results.

Greedy: Makes locally optimal choices.

36. Use of Dijkstra’s Algorithm:


Finds shortest paths in weighted graphs.

37. What is N-Queen Problem?


Place N queens on an N×N chessboard such that no two queens attack each other.
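
A minimal backtracking sketch that counts N-Queen placements (all names are illustrative):

```python
def n_queens(n):
    """Count the placements of n non-attacking queens by backtracking.
    cols/diag1/diag2 record the occupied columns and diagonals."""
    count = 0
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        nonlocal count
        if row == n:                   # all rows filled: one valid placement
            count += 1
            return
        for c in range(n):
            if c in cols or row + c in diag1 or row - c in diag2:
                continue               # square is attacked; try the next column
            cols.add(c); diag1.add(row + c); diag2.add(row - c)
            place(row + 1)
            cols.discard(c); diag1.discard(row + c); diag2.discard(row - c)

    place(0)
    return count

print(n_queens(4))                     # prints 2
```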

38. What is State-Space Tree?


A tree representing all possible solutions to a problem.

39. What is the Assignment Problem?


Assign tasks to agents such that the cost is minimized.

Important Algorithm Questions from the GeeksforGeeks site:

SORTING ALGORITHM

Question 1: What is a sorting algorithm?

Answer: A sorting algorithm is a method used to arrange elements in a specific order, often from
smallest to largest or vice versa, making data easier to manage and search.

Question 2: What are the different types of sorting algorithms?

Answer: There are two types of Sorting algorithms: Comparison based sorting algorithms and
non-comparison-based sorting algorithms. Comparison based sorting algorithms include Bubble
Sort, Selection Sort, Insertion Sort, Merge Sort, Quick Sort, Heap Sort, etc. and
non-comparison-based sorting algorithms include Radix Sort, Counting Sort and Bucket Sort.

Question 3: Why are sorting algorithms important?

Answer: The effectiveness of other algorithms (like search and merge algorithms) that depend
on input data being in sorted lists is enhanced by efficient sorting. Sorting is also frequently
helpful for generating output that is readable by humans. Sorting is directly used in
divide-and-conquer strategies, database algorithms, data structure algorithms, and many other
applications.

Question 4: What is the difference between comparison-based and non-comparison-based sorting algorithms?

Answer: Comparison-based sorting algorithms compare elements to determine their order, while
non-comparison-based algorithms use other techniques, like counting or bucketing, to sort
elements without direct comparisons.

Question 5: Explain what an ideal sorting algorithm would look like.

Answer: The ideal sorting algorithm would have the following properties:

Stable: Equal keys are not reordered.

Operates in place: Requires only O(1) extra space.

Worst-case O(n log n) key comparisons: Guaranteed to perform no more than O(n log n) key comparisons in the worst case.

Adaptive: Speeds up to O(n) when the data is nearly sorted or when there are few unique keys.

Question 6: What is meant by “Sort in Place”?


Answer: In-place algorithms prioritize space efficiency by utilizing the same memory space for
both input and output. This eliminates the need for additional storage, thereby reducing memory
requirements. Selection Sort, Bubble Sort, Insertion Sort, Heap Sort and Quicksort are in-place
sorting algorithms.

Question 7: Which sort algorithm works best on mostly sorted data?

Answer: For mostly sorted data, Insertion Sort typically works best. It’s efficient when elements
are mostly in order because it only needs to make small adjustments to place each element in
its correct position, making it faster than other sorting algorithms like Quick Sort or Merge Sort.

Question 8: Why is Merge sort preferred over Quick Sort for sorting linked lists?

Answer: Merge Sort is preferred for sorting linked lists because its divide-and-conquer approach
easily divides the list into halves and merges them efficiently without requiring random access,
which is difficult in linked lists. Quick Sort’s reliance on random access and potential worst-case
time complexity makes it less suitable for linked lists.

Question 9: What is Stability in sorting algorithm and why it is important?

Answer: Stability in sorting algorithms means that the relative order of equal elements remains
unchanged after sorting. Stable sorting algorithms ensure that equal elements maintain their
original positions in the sorted sequence. Some of the stable sorting algorithms are: Bubble
Sort, Insertion Sort, Merge Sort and Counting Sort.

Question 10: What is the best sorting algorithm for large datasets?

Answer: For large datasets, efficient sorting algorithms like Merge Sort, Quick Sort, or Heap Sort
are commonly used due to their average time complexity of O(n log n), which performs well
even with large amounts of data.

Question 11: How does Quick Sort work?

Answer: Quick Sort is a Divide and Conquer sorting algorithm. It chooses a pivot element and
rearranges the array so that elements smaller than the pivot are on the left and elements greater
are on the right. The partitioning process is then applied recursively to the left and right
subarrays. Subarrays of size one or zero are considered sorted.
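
The description above can be sketched as follows (a simple out-of-place variant using list comprehensions; production quicksorts usually partition in place):

```python
def quick_sort(a):
    """Pick a pivot, split into smaller/equal/greater, recurse on the sides."""
    if len(a) <= 1:                    # size 0 or 1 is already sorted
        return list(a)
    pivot = a[len(a) // 2]
    smaller = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quick_sort(smaller) + equal + quick_sort(greater)

print(quick_sort([4, 3, 1, 2]))       # prints [1, 2, 3, 4]
```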

Question 12: What is the worst-case time complexity of Quick Sort?

Answer: In the worst case, Quick Sort takes O(N^2) time to sort the array. The worst case
occurs when every partition splits a problem of size N into subproblems of size 1 and N - 1,
for example when the array is already sorted and the first or last element is chosen as the pivot.
SEARCHING ALGORITHM

Question 1: What is a searching algorithm?

Answer: A searching algorithm is a method used to find a specific item within a collection of
data. Searching Algorithms are designed to check for an element or retrieve an element from
any data structure where it is stored.

Question 2: What are the different types of searching algorithms?

Answer: Searching algorithms include Linear Search, Binary Search, Depth-First Search (DFS),
Breadth-First Search (BFS), and Hashing, each with its own approach to find elements.

Question 3: Explain Linear Search and its time complexity.


Answer: Linear Search checks each element in a list one by one until finding the target or
reaching the end. Its time complexity is O(n) in the worst case.

Question 4: How does Binary Search work?


Answer: Binary Search repeatedly divides a sorted array in half, narrowing the search space by
comparing the target with the middle element until the target is found or the elements are
exhausted. Its time complexity is O(log n).
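
A minimal iterative sketch (the function name is my own):

```python
def binary_search(a, target):
    """Return the index of target in the sorted list a, or -1 if absent.
    Each comparison halves the remaining search range."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1               # target can only be in the right half
        else:
            hi = mid - 1               # target can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # prints 3
```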

Question 5: What are the requirements for using Binary Search?

Answer: Binary Search requires a sorted array and the ability to access elements by index for
efficient traversal.

Question 6: Explain why the complexity of binary search is O(log2 n).

Answer: Binary search halves the search space with each step, reducing the number of
elements to be searched by half each time. This logarithmic reduction results in a time
complexity of O(log2n), where n is the number of elements in the sorted array.

Question 7: How does Hashing work in searching?

Answer: Hashing uses a hash function to compute an index for each element, allowing for
constant-time search operations in the average case by storing elements in a hash table.

Question 8: Compare Linear Search and Binary Search.


Answer: Linear Search checks elements sequentially, while Binary Search halves the search
space with each step, making it more efficient for sorted data with a time complexity of O(log n).

Question 10: Why use binary search if there is a ternary search?


Answer: Binary search is preferred for finding specific values in sorted arrays, as it divides the
search space in half with each step, giving a time complexity of O(log2 n). Binary search is also
useful for finding a threshold in a monotonic function, whereas ternary search is useful for
finding the maximum or minimum of a unimodal function. Also, ternary search performs about
2 * log3 N comparisons, which is more than the log2 N comparisons of binary search, even
though both are O(log N).

Question 11: When is each searching algorithm most appropriate to use?

Answer: Choose the appropriate searching algorithm based on factors like data structure, data
size, and desired search efficiency, such as Binary Search for sorted arrays and Hashing for
constant-time searches.

GREEDY ALGORITHM

Question 1: What is a greedy algorithm?


Answer: A greedy algorithm makes locally optimal choices at each step with the hope of finding
a global optimum solution.

Question 2: What is greedy algorithm used for?


Answer: Greedy algorithms are primarily used for optimization problems where making locally
optimal choices at each step leads to finding a globally optimal solution. They find applications
in various domains such as scheduling, routing, resource allocation, and combinatorial
optimization.

Question 3: Explain Dijkstra’s algorithm and its application.

Answer: Dijkstra’s algorithm finds the shortest path from a starting node to all other nodes in a
weighted graph. It’s commonly used in routing and network optimization problems.

Question 5: Can you discuss the greedy algorithm for finding the minimum spanning tree in a
graph?

Answer: Prim’s algorithm and Kruskal’s algorithm are two greedy approaches for finding the
minimum spanning tree in a weighted graph. Prim’s algorithm starts with an arbitrary vertex and
adds the minimum weight edge at each step until all vertices are included, while Kruskal’s
algorithm sorts edges by weight and adds them one by one while avoiding cycles.

Question 6: What is Huffman coding, and how does it utilize a greedy strategy to compress
data?
Answer: Huffman coding is a technique for lossless data compression where characters are
represented by variable-length codes. It uses a greedy strategy to assign shorter codes to more
frequent characters.

DYNAMIC PROGRAMMING

Question 1: What is dynamic programming, and how does it differ from other methods?
Answer: Dynamic programming breaks down complex problems into smaller, simpler
subproblems and stores solutions to avoid repeating calculations, unlike other methods that
may solve problems directly without reusing solutions.

Question 2: Explain the Fibonacci sequence and how dynamic programming helps calculate
Fibonacci numbers efficiently.
Answer: The Fibonacci sequence is a series where each number is the sum of the two
preceding ones. Dynamic programming stores previously calculated Fibonacci numbers to avoid
recalculating them, making the process faster and more efficient.
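
A minimal top-down sketch using Python's built-in cache (the naive recursion is exponential; memoization makes it linear):

```python
from functools import lru_cache

@lru_cache(maxsize=None)               # memoize: each fib(k) is computed once
def fib(n):
    """nth Fibonacci number; the cache turns the recursion into O(n) work."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))                         # prints 832040
```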

Question 3: What kinds of problems are suitable for dynamic programming solutions?
Answer: Dynamic programming works well for problems with overlapping subproblems and
optimal substructure, meaning solutions can be built from smaller optimal solutions.

Question 4: What is memoization in dynamic programming, and why is it useful?


Answer: Memoization involves storing previously calculated results to avoid redundant
computations in recursive algorithms, saving time and improving efficiency. Memoization is used
in Top-down approach

RECURSIVE ALGORITHM

Question 1: What is recursion, and how does it work?


Answer: Recursion is a problem-solving approach where a function calls itself to solve smaller
instances of the same problem.

Question 2: Can you provide an example of a problem that can be solved using recursion?
Answer: Examples include factorial computation, Fibonacci sequence generation, and
traversing tree structures.

Question 3: What is the base case in recursion, and why is it important?


Answer: The base case provides the termination condition for recursion, preventing infinite
loops and ensuring the recursion eventually stops.

Question 4: How does recursion differ from iteration?


Answer: Recursion involves solving problems through self-referential function calls, while
iteration involves repeating a set of instructions using loops.

Question 5: What are tail-recursive functions, and how do they differ from non-tail-recursive
functions?
Answer: A tail-recursive function performs its recursive call as the last operation in the function,
so an implementation that supports tail-call optimization can reuse the current stack frame;
non-tail-recursive functions must keep each frame alive to combine results after the call returns.

Question 9: Discuss how recursion is used in tree traversal algorithms.


Answer: Recursion is commonly used in tree traversal algorithms like depth-first search (DFS).
In DFS, a recursive function is used to explore each node’s children, visiting deeper levels of the
tree before backtracking.

Question 10: How does recursion play a role in solving the Towers of Hanoi problem?
Answer: Recursion is essential for solving the Towers of Hanoi problem efficiently. The recursive
algorithm involves moving disks from one peg to another while adhering to the rules of the
game, using recursion to break down the problem into smaller subproblems.
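
The recursive breakdown can be sketched in a few lines (peg labels and the function name are illustrative):

```python
def hanoi(n, source, target, spare):
    """Return the move list for n disks: move n-1 disks to the spare peg,
    move the largest disk to the target, then bring the n-1 disks over."""
    if n == 0:
        return []
    return (hanoi(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi(n - 1, spare, target, source))

moves = hanoi(3, 'A', 'C', 'B')
print(len(moves))                      # prints 7, i.e. 2**3 - 1
```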

DIVIDE AND CONQUER

Question 1: What is Divide and Conquer Algorithm?


Answer: A divide-and-conquer algorithm is a problem-solving technique that follows these steps:

Divide: Break the problem down into smaller, independent subproblems.


Conquer: Solve each subproblem recursively.
Combine: Merge the solutions to the subproblems to solve the original problem.

Question 2: How would you use Divide & Conquer to find the maximum and minimum of an
array?
Answer: To find the maximum and minimum of an array using Divide & Conquer, we can
recursively divide the array into smaller subarrays until we reach the base case with just one or
two elements, then compare the max and min within each subarray.
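
A minimal sketch of that approach (interface and names are my own):

```python
def max_min(a, lo=0, hi=None):
    """Divide & conquer max and min of a[lo..hi]: split, solve each half,
    combine the two (max, min) pairs with two comparisons."""
    if hi is None:
        hi = len(a) - 1
    if lo == hi:                       # base case: one element
        return a[lo], a[lo]
    if hi == lo + 1:                   # base case: two elements
        return (a[lo], a[hi]) if a[lo] >= a[hi] else (a[hi], a[lo])
    mid = (lo + hi) // 2
    max1, min1 = max_min(a, lo, mid)
    max2, min2 = max_min(a, mid + 1, hi)
    return max(max1, max2), min(min1, min2)

print(max_min([4, 3, 1, 2]))           # prints (4, 1)
```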

Question 3: What is the role of recursion in Divide & Conquer algorithms?

Answer: Recursion plays a fundamental role in Divide & Conquer algorithms by breaking down
a problem into smaller subproblems and solving them separately, then combining their solutions
to solve the larger problem.

Question 5: How does the efficiency of Divide and Conquer algorithms compare to other
problem-solving techniques?

Answer: Divide and Conquer algorithms often exhibit efficient performance, especially for
large-scale problems, but the efficiency depends on factors like problem characteristics and
implementation details.
Question 6: How does the QuickSort algorithm utilize the Divide and Conquer strategy?
Answer: QuickSort selects a pivot, partitions the array, recursively sorts the partitions, and
combines them, showcasing the Divide and Conquer strategy.

BACKTRACKING ALGORITHM

Question 1: What is Backtracking Algorithm?


Answer: Backtracking is an algorithmic technique for solving problems recursively by trying to
build a solution incrementally, one piece at a time, and discarding those partial solutions that
fail to satisfy the constraints of the problem at any point.

Question 2: Why is this called Backtracking?


Answer: Backtracking is a problem-solving technique that involves constructing a solution
incrementally, and backtracking when a dead end is reached. It is often used to find all possible
solutions to a problem, or to find the best solution.

Question 3: What are explicit and implicit backtracking constraints?


Answer: Explicit backtracking constraints are explicitly stated in the problem definition, while
implicit backtracking constraints must be inferred from the problem’s logic.

Question 4: What are the main challenges or limitations associated with Backtracking
algorithms?
Answer: Backtracking can be computationally expensive and may explore many paths, leading
to inefficiency, especially in problems with large solution spaces.

Question 5: Can you discuss a situation where Backtracking might be more suitable than other
algorithms?
Answer: Backtracking is often suitable for problems with numerous possibilities, like solving
puzzles or optimization problems, where exploring all options is essential.

TREE ALGORITHM

Question 1: What is a tree in the context of data structures and algorithms?

Answer: In data structures, a tree is a hierarchical structure composed of nodes connected by
edges, where each node has a parent-child relationship, and there is a single root node.

Question 2: Explain the difference between a binary tree and a binary search tree.

Answer: A binary tree is a tree structure where each node has at most two children, while a
binary search tree follows the binary tree structure and additionally ensures that the left child is
less than the parent, and the right child is greater.

Question 3: How do you traverse a binary tree in depth-first and breadth-first orders?

Answer: Depth-first traversal includes pre-order, in-order, and post-order. Breadth-first traversal
visits nodes level by level, starting from the root.

Question 4: What is the height of a tree, and how is it different from the depth?

Answer: The height is the length of the longest path from the root to a leaf, while the depth is the
length of the path from a node to the root.

Question 6: What is a balanced tree, and why is it essential in certain applications?

Answer: A balanced tree minimizes the height disparity between left and right subtrees,
ensuring efficient operations. It is vital for maintaining performance in search and retrieval
applications.

Question 7: Explain the process of inserting a node into a binary search tree.
Answer: Inserting a node into a binary search tree involves traversing the tree to find the
appropriate position for the new node based on its key value. Starting at the root, the algorithm
compares the key of the new node with the key of the current node.
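
A minimal sketch of that insertion (class and function names are illustrative; duplicate keys are simply ignored here):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, key):
    """Walk down comparing keys: smaller goes left, larger goes right;
    the new node is attached where an empty subtree is reached."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    elif key > root.key:
        root.right = bst_insert(root.right, key)
    return root                        # equal keys are ignored here

def in_order(root):
    """In-order traversal of a BST yields its keys in sorted order."""
    return [] if root is None else in_order(root.left) + [root.key] + in_order(root.right)

root = None
for k in [5, 3, 8, 1]:
    root = bst_insert(root, k)
print(in_order(root))                  # prints [1, 3, 5, 8]
```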

GRAPH ALGORITHM

Question 1: What is a graph, and how does it differ from a tree in data structures?

Answer: A graph is a collection of nodes connected by edges, allowing for more complex
relationships. Unlike a tree, a graph has no strict hierarchy or parent-child relationships.

Question 2: Explain the concepts of a directed graph and an undirected graph.

Answer: In a directed graph, edges have a specific direction, indicating a one-way relationship.
In an undirected graph, edges have no direction, representing a mutual connection.

Question 3: What is a cycle in a graph, and how do you detect cycles algorithmically?

Answer: A cycle is a closed path in a graph. Cycles can be detected using algorithms like
Depth-First Search (DFS) or Union-Find.

Question 4: Describe the breadth-first search (BFS) algorithm for traversing a graph.

Answer: BFS starts at a source node, explores its neighbors, and then moves to their neighbors
level by level, using a queue data structure.
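
A minimal sketch, assuming the graph is given as a dict of adjacency lists (representation and names are my choices):

```python
from collections import deque

def bfs(graph, source):
    """Level-by-level traversal from source; graph maps each node to its
    neighbor list. Returns the nodes in visiting order."""
    visited, order = {source}, []
    queue = deque([source])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nb in graph.get(node, []):
            if nb not in visited:      # mark on enqueue so each node enters once
                visited.add(nb)
                queue.append(nb)
    return order

graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs(graph, 0))                   # prints [0, 1, 2, 3]
```
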
Question 5: How does depth-first search (DFS) work, and what are its applications in graph
algorithms?

Answer: DFS explores as far as possible along each branch before backtracking. It’s used for
traversal, topological sorting, and solving problems like connected components.

Question 6: What is Dijkstra’s algorithm, and how does it find the shortest path in a weighted
graph?

Answer: Dijkstra’s algorithm finds the shortest path from a source node to all other nodes in a
weighted graph by iteratively selecting the node with the minimum distance.

Question 9: Discuss topological sorting and its applications in directed acyclic graphs (DAGs).
Answer: Topological sorting orders the nodes in a DAG based on dependencies, ensuring that
each node appears before its successors. It’s used in scheduling and task planning.

Question 10: How does the Bellman-Ford algorithm work, and what does it address in graph
algorithms?
Answer: Bellman-Ford finds the shortest paths in a graph, even with negative edge weights. It
detects negative cycles, making it suitable for graphs with such characteristics.
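
A minimal sketch, assuming vertices 0..n-1 and edges given as (u, v, w) tuples (my representation):

```python
def bellman_ford(n, edges, source):
    """Single-source shortest paths on vertices 0..n-1 with edges (u, v, w).
    Relaxing every edge n-1 times suffices; if a further pass still improves
    a distance, the graph contains a negative cycle."""
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:              # extra pass: negative-cycle detection
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative cycle")
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3)]
print(bellman_ford(3, edges, 0))       # prints [0, 4, 1]
```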
