Copy of Quick Notes
Divide and Conquer
Divide and conquer is a problem-solving strategy in which a complex problem is broken down into smaller, more manageable subproblems. These subproblems are solved independently, and their solutions are combined to solve the original problem.
The Process
1. Divide: Break the problem into smaller subproblems of the same type as the original problem.
2. Conquer: Solve the subproblems recursively. If the subproblems are small enough, solve them
directly.
3. Combine: Combine the solutions of the subproblems to obtain the solution to the original problem.
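The three steps above can be sketched with merge sort, a classic divide-and-conquer algorithm (the function name here is illustrative):

```python
def merge_sort(arr):
    # Conquer directly: a list of 0 or 1 elements is already sorted
    if len(arr) <= 1:
        return arr
    # Divide: split the list into two halves
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves into one sorted list
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 1, 3]))  # → [1, 2, 3, 4, 5]
```

Each level of recursion does linear work in the combine step, giving the familiar O(n log n) running time.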
Benefits
• Efficiency: Can often lead to efficient algorithms, especially for problems with recursive structure.
• Simplicity: Breaking a problem into smaller subproblems can make it easier to understand and solve.
Common Applications
• Merge sort and quicksort
• Binary search
• Strassen's matrix multiplication
Key Considerations
• Subproblem size: The size of the subproblems should be carefully chosen to balance efficiency and
overhead.
• Combining solutions: The combination step should be efficient and accurate.
• Recursion depth: Excessive recursion can lead to stack overflow, so it's important to consider the
depth of recursion.
______________________________________________________
Dynamic Programming
Dynamic programming applies to problems with two key properties:
• Optimal Substructure: The optimal solution to a problem can be constructed from optimal solutions to its subproblems.
• Overlapping Subproblems: The same subproblems are solved repeatedly.
Core Idea
The essence of dynamic programming lies in storing the solutions to subproblems to avoid recomputation. This is achieved through two primary approaches:
• Memoization (Top-Down): The problem is solved recursively, and the result of each subproblem is cached the first time it is computed.
• Tabulation (Bottom-Up): The problem is solved iteratively, starting from the base cases and building up to the final solution.
Steps Involved
1. Identify the problem: Determine whether the problem exhibits optimal substructure and overlapping subproblems.
2. Define the subproblems: Break the problem into smaller, overlapping subproblems.
3. Choose a data structure: Decide on a suitable data structure to store the solutions to subproblems (e.g., array, hash table).
4. Recurrence relation: Express the solution to a subproblem in terms of solutions to smaller subproblems.
5. Base cases: Define the base cases for the problem.
6. Fill the table or memoize: Implement either the top-down (memoization) or bottom-up (tabulation) approach.
Naive recursion:
Python
def fib_recursive(n):
    if n <= 1:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)
Memoization:
Python
def fib_memo(n, memo=None):
    # Use a fresh cache per top-level call; a mutable default
    # argument ({}) would be shared across all calls.
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        memo[n] = n
        return n
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
Tabulation:
Python
def fib_tab(n):
    fib = [0, 1]
    for i in range(2, n + 1):
        fib.append(fib[i - 1] + fib[i - 2])
    return fib[n]
By understanding the core concepts and applying the steps outlined above, you can effectively solve
a wide range of complex problems using dynamic programming.
_____________________________________________________________________________
Greedy Approach
A greedy algorithm is a simple approach to problem-solving where you make the best choice
available at the current moment without considering the future consequences. It's like making a
decision based on what seems optimal at that particular step, hoping it will lead to the overall optimal
solution.
How it Works
Key Characteristics
• Local Optimization: Greedy algorithms focus on making the best choice at each step, without
considering the long-term impact.
• Efficiency: They are often simpler and faster to implement compared to other approaches like
dynamic programming or backtracking.
• Not Always Optimal: While greedy algorithms can provide good solutions for many problems, they
don't guarantee the optimal solution in all cases.
A greedy strategy is appropriate for problems that:
• Have optimal substructure: The optimal solution to a problem can be constructed from optimal solutions to its subproblems.
• Possess the greedy choice property: Making the locally optimal choice at each step leads to a globally optimal solution.
Example:
• Activity Selection Problem: Given a set of activities with start and finish times, select the maximum number of non-overlapping activities.
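The activity selection problem has the greedy choice property: always picking the activity that finishes earliest yields an optimal answer. A minimal sketch (activities as `(start, finish)` pairs; names are illustrative):

```python
def select_activities(activities):
    # Greedy criterion: sort by finish time and take each activity
    # whose start is no earlier than the last selected finish.
    chosen = []
    last_finish = float('-inf')
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(acts))  # → [(1, 4), (5, 7), (8, 11)]
```

The sort dominates the cost, so the whole selection runs in O(n log n).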
Limitations
• Not Always Optimal: As mentioned, greedy algorithms might not always find the optimal solution.
• Dependent on Problem Structure: The success of a greedy algorithm heavily depends on the
specific problem and the choice of the greedy criterion.
Conclusion
Greedy algorithms are a valuable tool in the problem-solving arsenal, but it's essential to understand
their limitations and when they are appropriate to use. By carefully analyzing the problem and
selecting the right greedy criterion, you can often find efficient and effective solutions.
______________________________________________________________________
Backtracking
Backtracking is a general algorithmic technique that explores all possible solutions to a problem by incrementally building candidates and abandoning a candidate (backtracking) as soon as it determines that the candidate cannot be completed to a valid solution.
How it works:
Key Characteristics:
• Depth-First Search (DFS): Explores one branch of the search tree completely before moving to the
next.
• State-Space Tree: The search space can be visualized as a tree, where each node represents a
partial solution.
• Pruning: Unpromising branches of the search tree are eliminated to improve efficiency.
Common Applications:
• N-Queens problem
• Sudoku
• Maze solving
• Hamiltonian cycle
• Subset sum problem
Advantages:
• Can find all solutions to a problem.
• Flexible enough to handle many problem types.
Disadvantages:
• Can be inefficient for large search spaces.
• May require significant memory for complex problems.
In essence, backtracking is a systematic approach to exploring all possible solutions, but it's
important to use pruning techniques to avoid exploring unnecessary paths.
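As a small illustration, a backtracking search for the subset sum problem (one of the applications listed above) can build a candidate element by element and prune as soon as the running total overshoots the target. This is a sketch with illustrative names:

```python
def subset_sum(nums, target):
    """Return one subset of nums summing to target, or None if none exists."""
    solution = []

    def backtrack(i, remaining):
        if remaining == 0:                   # valid complete solution found
            return True
        if i == len(nums) or remaining < 0:  # prune: this branch is a dead end
            return False
        solution.append(nums[i])             # candidate: include nums[i]
        if backtrack(i + 1, remaining - nums[i]):
            return True
        solution.pop()                       # backtrack: abandon the candidate
        return backtrack(i + 1, remaining)   # try excluding nums[i] instead

    return solution if backtrack(0, target) else None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # → [3, 4, 2]
```

The `remaining < 0` check is the pruning step: without it (and assuming non-negative numbers), the search would still explore branches that can never reach the target.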
Hamiltonian Cycle
Definition
A Hamiltonian cycle is a cycle in a graph that visits every vertex exactly once before returning to the starting vertex. A graph that contains a Hamiltonian cycle is called a Hamiltonian graph.
Problem Complexity
Deciding whether a graph contains a Hamiltonian cycle is NP-complete, so no polynomial-time algorithm is known that works for all graphs. Exact solutions are still practical for small graphs or for certain special classes of graphs.
Solution Approaches
• Backtracking:
• Start with an empty path and add vertices one by one.
• Before adding a vertex, check if it's adjacent to the previously added vertex and hasn't been visited
before.
• If a valid vertex is found, add it to the path and recursively check for the next vertex.
• If no valid vertex is found, backtrack and remove the last added vertex.
• Dynamic Programming: Can be used for some special cases of the problem, but it's generally not
efficient for large graphs.
• Approximation Algorithms: For finding approximate solutions in reasonable time.
• Heuristic Search: Techniques like A* search can be adapted for the problem, but they don't
guarantee optimal solutions.
Applications
Note: Due to the NP-complete nature of the problem, finding exact solutions for large graphs can be
computationally expensive. In practical applications, heuristics and approximation algorithms are
often used to find satisfactory solutions.
Algorithm: Hamiltonian(k)
Purpose: This algorithm uses backtracking to find all the Hamiltonian cycles in a graph represented
as an adjacency matrix G[1:n, 1:n]. All cycles begin at node 1.
Steps:
Initialization: The algorithm starts at node 1 and attempts to extend a path from there.
NextValue(k): This function assigns a legal next value to x[k]. It ensures that the next vertex to be
added to the path is adjacent to the previous vertex and hasn't been visited before.
Termination:
• If x[k] = 0, it means there's no valid next vertex to extend the path, so the function returns.
• If k = n, it means a complete cycle has been formed, so the cycle is printed.
Recursion:
• Otherwise, the algorithm recursively calls itself with k+1, trying to extend the path further.
Backtracking: The repeat loop ensures that all possible paths are explored. If a dead-end is reached,
the algorithm backtracks to the previous vertex and tries a different path.
Key Points:
Example:
Let's consider a graph with 4 vertices (A, B, C, D) and the following adjacency matrix:
G=[
[0, 1, 0, 1],
[1, 0, 1, 1],
[0, 1, 0, 1],
[1, 1, 1, 0]
]
The algorithm would start with vertex A (node 1) and try to find all possible Hamiltonian cycles by
exploring different paths. It would backtrack when it reaches a dead-end or a cycle is complete.
Note: Finding all Hamiltonian cycles in a graph can be computationally expensive, especially for large
graphs. In practice, heuristics and optimizations are often used to improve the efficiency of the
algorithm.
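The backtracking search described above can be written directly against the example matrix. This sketch uses 0-indexed Python (so vertex A is 0, where the pseudo-code uses node 1) and yields every cycle it finds:

```python
def hamiltonian_cycles(G):
    """Yield all Hamiltonian cycles starting at vertex 0, as vertex lists."""
    n = len(G)
    path = [0]
    visited = [False] * n
    visited[0] = True

    def extend():
        if len(path) == n:
            # A complete path is a cycle only if the last vertex
            # connects back to the start
            if G[path[-1]][0]:
                yield path + [0]
            return
        for v in range(1, n):
            # Next vertex must be adjacent to the current one and unvisited
            if G[path[-1]][v] and not visited[v]:
                path.append(v)
                visited[v] = True
                yield from extend()
                visited[v] = False   # backtrack: undo the choice
                path.pop()

    yield from extend()

G = [[0, 1, 0, 1],
     [1, 0, 1, 1],
     [0, 1, 0, 1],
     [1, 1, 1, 0]]
for cycle in hamiltonian_cycles(G):
    print(cycle)
```

For this graph the search prints `[0, 1, 2, 3, 0]` and `[0, 3, 2, 1, 0]` — the same cycle traversed in both directions, which is expected since the graph is undirected.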
Purpose
The NextValue algorithm is a crucial subroutine used in backtracking to construct Hamiltonian cycles.
Its primary goal is to determine the next vertex to be added to a partially constructed Hamiltonian
path.
How it Works
Initialization: The algorithm starts with the current vertex x[k-1] in the path and attempts to find a
suitable next vertex x[k].
Vertex Increment: The algorithm repeatedly sets x[k] := (x[k] + 1) mod (n + 1), so all vertices are considered cyclically; x[k] = 0 signals that the candidates are exhausted.
Adjacency Check: It checks if there's an edge between the current vertex x[k-1] and the potential
next vertex x[k] using the adjacency matrix G.
Distinctness Check: It verifies if the potential next vertex x[k] has already been included in the path.
This is done by iterating through the previously assigned vertices x[1] to x[k-1] and checking for
duplicates.
Cycle Check: If k reaches n, it means we've reached the last vertex. In this case, it's necessary to
check if there's an edge between x[n] and x[1] to form a complete cycle.
Key Points
• The algorithm ensures that the next vertex is adjacent to the current vertex.
• It prevents the inclusion of the same vertex multiple times in the path.
• It handles the special case where k = n to check for a complete cycle.
Role in Backtracking
The NextValue algorithm plays a vital role in the backtracking process for finding Hamiltonian cycles.
By efficiently determining the next possible vertex, it helps to prune the search space and avoid
exploring invalid paths. This significantly improves the efficiency of the backtracking algorithm.
In essence, the NextValue algorithm acts as a constraint checker, ensuring that only valid
vertices are considered as potential extensions to the Hamiltonian path.
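The three checks (adjacency, distinctness, cycle closure) can be sketched as a single helper. This version is 0-indexed and returns -1 instead of the pseudo-code's 0 sentinel; the interface is illustrative, not the textbook one:

```python
def next_value(G, x, k):
    """Return the next legal vertex for position k, or -1 if none remain.

    x[0..k-1] hold the path so far; x[k] holds the previous trial value
    for this position (-1 before the first attempt).
    """
    n = len(G)
    v = x[k]
    while True:
        v += 1
        if v == n:                  # all candidate vertices exhausted
            return -1
        if not G[x[k - 1]][v]:      # adjacency check against the current vertex
            continue
        if v in x[:k]:              # distinctness check: already on the path?
            continue
        if k == n - 1 and not G[v][x[0]]:   # cycle check at the last position
            continue
        return v

G = [[0, 1, 0, 1],
     [1, 0, 1, 1],
     [0, 1, 0, 1],
     [1, 1, 1, 0]]
print(next_value(G, [0, -1, 0, 0], 1))  # → 1, the first legal successor of vertex 0
```

Calling it again with `x[1] = 1` would return 3, and then -1 once both successors of vertex 0 have been tried — exactly the point at which Hamiltonian(k) backtracks.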
Prim's vs Kruskal's