
DAA

04 August 2024 11:04

Divide and conquer approach
Dynamic programming
Greedy
Backtracking
Greedy vs backtracking
Dynamic vs greedy
Branch and bound (BB) vs backtracking
Dynamic programming vs backtracking
FIFO vs LCBB
Hamiltonian cycles
N-Queen
Hamiltonian program explanation
Prim's vs Kruskal's
Merge vs quick sort

Divide and Conquer Approach

Divide and conquer is a problem-solving strategy where a complex problem is broken down into
smaller, more manageable subproblems. These subproblems are then solved independently, and
their solutions are combined to solve the original problem.

The Process

The divide and conquer approach typically involves three steps:

1. Divide: Break the problem into smaller subproblems of the same type as the original problem.

2. Conquer: Solve the subproblems recursively. If the subproblems are small enough, solve them
directly.

3. Combine: Combine the solutions of the subproblems to obtain the solution to the original problem.

Example: Merge Sort

A classic example of divide and conquer is the Merge Sort algorithm.

• Divide: The array to be sorted is divided into two halves.
• Conquer: The two halves are sorted recursively using Merge Sort.
• Combine: The two sorted halves are merged into a single sorted array.
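
A minimal Python sketch of these three steps:

Python
def merge_sort(arr):
    # Base case: a list of zero or one elements is already sorted.
    if len(arr) <= 1:
        return arr
    # Divide: split the array into two halves.
    mid = len(arr) // 2
    # Conquer: sort each half recursively.
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]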

Advantages of Divide and Conquer

• Efficiency: Can often lead to efficient algorithms, especially for problems with recursive structures.
• Simplicity: Breaking down a problem into smaller subproblems can make it easier to understand and solve.
• Parallelism: Some divide and conquer algorithms can be parallelized to improve performance.

Common Applications

• Sorting algorithms: Merge Sort, Quick Sort
• Searching algorithms: Binary Search
• Matrix multiplication: Strassen's algorithm
• Fast Fourier Transform (FFT)
• Closest pair of points

Key Considerations

• Subproblem size: The size of the subproblems should be carefully chosen to balance efficiency and
overhead.
• Combining solutions: The combination step should be efficient and accurate.
• Recursion depth: Excessive recursion can lead to stack overflow, so it's important to consider the
depth of recursion.
• ______________________________________________________

Dynamic Programming: A Detailed Explanation

Dynamic programming is a powerful algorithmic technique used to solve optimization problems by breaking them down into simpler overlapping subproblems. It's particularly effective when dealing with problems that exhibit:

• Optimal Substructure: The optimal solution to a problem can be constructed from optimal solutions
to its subproblems.
• Overlapping Subproblems: The same subproblems are solved repeatedly.

Core Idea

The essence of dynamic programming lies in storing the solutions to subproblems to avoid
recomputation. This is achieved through two primary approaches:

1. Memoization (Top-Down Approach)

• Recursive Solution: The problem is initially solved recursively.
• Memoization: The results of subproblems are stored in a data structure (usually a hash table or array) to avoid recalculation.
• Lookup: Before solving a subproblem, check if the result is already stored. If yes, return the stored value; otherwise, solve the subproblem and store the result.

2. Tabulation (Bottom-Up Approach)

• Iterative Solution: The problem is solved iteratively, starting from the base cases and building up to the final solution.
• Table: A table is created to store the solutions to subproblems.
• Filling the Table: The table is filled in a specific order, ensuring that the solutions to smaller subproblems are available when needed for larger ones.

Steps Involved

1. Identify the problem: Determine if the problem exhibits optimal substructure and overlapping subproblems.
2. Define the subproblems: Break the problem into smaller, overlapping subproblems.
3. Choose a data structure: Decide on a suitable data structure to store the solutions to subproblems (e.g., array, hash table).
4. Recurrence relation: Express the solution to a subproblem in terms of solutions to smaller subproblems.
5. Base cases: Define the base cases for the problem.
6. Fill the table or memoize: Implement either the top-down (memoization) or bottom-up (tabulation) approach.

Example: Fibonacci Sequence

The Fibonacci sequence can be calculated efficiently using dynamic programming.

Recursive (without memoization):

Python
def fib_recursive(n):
    # Exponential time: each call spawns two more, recomputing subproblems.
    if n <= 1:
        return n
    return fib_recursive(n-1) + fib_recursive(n-2)

Memoization:

Python
def fib_memo(n, memo={}):
    # The mutable default argument acts as a cache shared across calls.
    if n in memo:
        return memo[n]
    if n <= 1:
        memo[n] = n
        return n
    memo[n] = fib_memo(n-1, memo) + fib_memo(n-2, memo)
    return memo[n]

Tabulation:

Python
def fib_tab(n):
    # Build the table bottom-up from the base cases fib(0) and fib(1).
    fib = [0, 1]
    for i in range(2, n+1):
        fib.append(fib[i-1] + fib[i-2])
    return fib[n]

Applications of Dynamic Programming

Dynamic programming is widely used in various fields, including:



• Computer science: Algorithm design, optimization problems, graph algorithms.
• Economics: Optimal control, resource allocation.
• Biology: Bioinformatics, protein folding.
• Finance: Portfolio optimization, option pricing.

By understanding the core concepts and applying the steps outlined above, you can effectively solve
a wide range of complex problems using dynamic programming.

_____________________________________________________________________________

Greedy Approach

A greedy algorithm is a simple approach to problem-solving where you make the best choice
available at the current moment without considering the future consequences. It's like making a
decision based on what seems optimal at that particular step, hoping it will lead to the overall optimal
solution.

How it Works

1. Start with an empty solution set.
2. At each step, select the best available choice based on a specific criterion.
3. Add the chosen item to the solution set.
4. Repeat steps 2 and 3 until a solution is reached or no more choices are available.

Key Characteristics

• Local Optimization: Greedy algorithms focus on making the best choice at each step, without
considering the long-term impact.
• Efficiency: They are often simpler and faster to implement compared to other approaches like
dynamic programming or backtracking.
• Not Always Optimal: While greedy algorithms can provide good solutions for many problems, they
don't guarantee the optimal solution in all cases.

When to Use Greedy Approach

Greedy algorithms are suitable for problems that:

• Have optimal substructure: The optimal solution to a problem can be constructed from optimal
solutions to its subproblems.
• Possess the greedy choice property: Making the locally optimal choice at each step leads to a
globally optimal solution.

Examples of Greedy Algorithms

• Activity Selection Problem: Given a set of activities with start and finish times, select the maximum number of activities that can be performed without overlapping.
• Fractional Knapsack Problem: Given items with weights and values, maximize the total value of items that can be put into a knapsack of a given weight capacity.
• Huffman Coding: Create an optimal prefix code for a set of characters based on their frequencies.
• Dijkstra's Algorithm: Find the shortest paths from a source vertex to all other vertices in a graph with non-negative edge weights.
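
As a short illustration, here is a minimal Python sketch of the fractional knapsack greedy, which sorts items by value per unit weight (the item data below is made up for the example):

Python
def fractional_knapsack(items, capacity):
    # items: list of (value, weight) pairs.
    # Greedy criterion: highest value-per-weight ratio first.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)   # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

# Capacity 50; items are (value, weight).
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0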

Limitations

• Not Always Optimal: As mentioned, greedy algorithms might not always find the optimal solution.
• Dependent on Problem Structure: The success of a greedy algorithm heavily depends on the
specific problem and the choice of the greedy criterion.

Conclusion

Greedy algorithms are a valuable tool in the problem-solving arsenal, but it's essential to understand
their limitations and when they are appropriate to use. By carefully analyzing the problem and
selecting the right greedy criterion, you can often find efficient and effective solutions.

______________________________________________________________________

Backtracking: A Systematic Search

Backtracking is a general algorithmic technique that involves exploring all possible solutions to a
problem by incrementally building candidates and abandoning a candidate (backtracking) as soon as
it determines that the candidate cannot be completed to a valid solution.

How it works:

1. Incremental Construction: Build a solution step-by-step.
2. Check for Feasibility: At each step, check if the partial solution is promising.
3. Backtrack: If the partial solution is not promising, undo the last decision and try a different option.
4. Complete Solution: If a complete solution is found, return it.

Key Characteristics:

• Depth-First Search (DFS): Explores one branch of the search tree completely before moving to the
next.
• State-Space Tree: The search space can be visualized as a tree, where each node represents a
partial solution.
• Pruning: Unpromising branches of the search tree are eliminated to improve efficiency.

When to Use Backtracking:



• Problems with a large search space.
• Problems where solutions can be built incrementally.
• Problems where constraints can be checked efficiently.

Examples of Backtracking Problems:

• N-Queens problem
• Sudoku
• Maze solving
• Hamiltonian cycle
• Subset sum problem

Advantages and Disadvantages:

• Advantages:
  • Can find all solutions to a problem.
  • Flexible for various problem types.
• Disadvantages:
  • Can be inefficient for large search spaces.
  • May require significant memory for complex problems.

In essence, backtracking is a systematic approach to exploring all possible solutions, but it's
important to use pruning techniques to avoid exploring unnecessary paths.
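
For instance, here is a minimal Python sketch of the pattern applied to the subset sum problem (the input below is made up; the pruning step assumes positive numbers):

Python
def subset_sum(nums, target):
    result = []

    def backtrack(i, partial, total):
        if total == target:
            result.append(partial[:])     # complete solution: record a copy
            return
        if i == len(nums) or total > target:
            return                        # dead end: prune this branch
        partial.append(nums[i])           # choose nums[i]
        backtrack(i + 1, partial, total + nums[i])
        partial.pop()                     # backtrack: undo the choice
        backtrack(i + 1, partial, total)  # explore the branch without nums[i]

    backtrack(0, [], 0)
    return result

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # [[3, 4, 2], [4, 5]]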



Greedy vs backtracking

• Choice strategy: A greedy algorithm commits to one locally optimal choice at each step and never revisits it; backtracking tries a choice and, if it leads to a dead end, undoes it and tries another.
• Optimality: Greedy does not guarantee an optimal solution in general; backtracking can enumerate all feasible solutions, and hence find an optimal one, at the cost of a larger search.
• Efficiency: Greedy algorithms are typically fast, often a single pass over the input; backtracking can take exponential time in the worst case.
• When to use: Greedy suits problems with the greedy choice property and optimal substructure; backtracking suits constraint-satisfaction problems where partial solutions can be checked and pruned.
Hamiltonian Cycle

Definition

A Hamiltonian cycle is a cycle in a graph that visits each vertex exactly once and returns to the starting vertex. A graph that contains a Hamiltonian cycle is called a Hamiltonian graph.

Problem Complexity

The problem of deciding whether a graph contains a Hamiltonian cycle is NP-complete, meaning no polynomial-time algorithm is known that solves it for all graphs. It can, however, be solved exactly for small graphs or for certain special graph classes.

Solving the Hamiltonian Cycle Problem



The most common approach to solving the Hamiltonian cycle problem is backtracking.

• Backtracking:
• Start with an empty path and add vertices one by one.
• Before adding a vertex, check if it's adjacent to the previously added vertex and hasn't been visited
before.
• If a valid vertex is found, add it to the path and recursively check for the next vertex.
• If no valid vertex is found, backtrack and remove the last added vertex.

Other Approaches (for specific graph types or heuristics):

• Dynamic Programming: A bitmask DP (Held-Karp style) solves the problem in O(n^2 * 2^n) time, which beats checking all permutations but is still exponential, so it is impractical for large graphs.
• Approximation Algorithms: For finding approximate solutions in reasonable time.
• Heuristic Search: Techniques like A* search can be adapted for the problem, but they don't
guarantee optimal solutions.

Applications

The Hamiltonian cycle problem has applications in various fields, including:

• Logistics and transportation
• Network design
• Computer science (e.g., graph theory, algorithm design)

Note: Due to the NP-complete nature of the problem, finding exact solutions for large graphs can be
computationally expensive. In practical applications, heuristics and approximation algorithms are
often used to find satisfactory solutions.

N-QUEEN
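
The N-Queens problem asks for all ways to place n queens on an n x n board so that no two queens share a row, column, or diagonal. A minimal backtracking sketch in Python, placing one queen per row:

Python
def solve_n_queens(n):
    solutions = []
    cols = []                      # cols[r] = column of the queen in row r

    def safe(row, col):
        # Conflict if an earlier queen shares this column or a diagonal.
        for r, c in enumerate(cols):
            if c == col or abs(c - col) == row - r:
                return False
        return True

    def place(row):
        if row == n:
            solutions.append(cols[:])   # all rows filled: record the board
            return
        for col in range(n):
            if safe(row, col):
                cols.append(col)        # choose a column for this row
                place(row + 1)          # explore the next row
                cols.pop()              # backtrack: undo the placement

    place(0)
    return solutions

print(solve_n_queens(4))  # [[1, 3, 0, 2], [2, 0, 3, 1]]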



Let's break down the backtracking algorithm for finding Hamiltonian cycles:

Algorithm: Hamiltonian(k)

Purpose: This algorithm uses backtracking to find all the Hamiltonian cycles in a graph represented
as an adjacency matrix G[1:n, 1:n]. All cycles begin at node 1.

Steps:

1. Initialization: The algorithm starts at node 1 and attempts to extend a path from there.
2. NextValue(k): This function assigns a legal next value to x[k]. It ensures that the next vertex to be added to the path is adjacent to the previous vertex and hasn't been visited before.
3. Termination:
   • If x[k] = 0, there is no valid next vertex to extend the path, so the function returns.
   • If k = n, a complete cycle has been formed, so the cycle is printed.
4. Recursion: Otherwise, the algorithm recursively calls itself with k+1, trying to extend the path further.
5. Backtracking: The repeat loop ensures that all possible paths are explored. If a dead end is reached, the algorithm backtracks to the previous vertex and tries a different path.

Key Points:

• The algorithm uses a recursive approach to explore all possible paths.
• The NextValue function is crucial for ensuring that only valid paths are considered.
• The algorithm efficiently prunes paths that cannot lead to a complete cycle.



Time Complexity: The time complexity of this algorithm is O(n!), where n is the number of vertices in
the graph. This is because in the worst case, the algorithm needs to explore all possible permutations
of vertices.

Example:

Let's consider a graph with 4 vertices (A, B, C, D) and the following adjacency matrix:

G=[
[0, 1, 0, 1],
[1, 0, 1, 1],
[0, 1, 0, 1],
[1, 1, 1, 0]
]

The algorithm would start with vertex A (node 1) and try to find all possible Hamiltonian cycles by
exploring different paths. It would backtrack when it reaches a dead-end or a cycle is complete.
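
A minimal Python sketch of this search on the matrix above (vertices A, B, C, D map to indices 0 to 3; the adjacent-and-unvisited test plays the role of NextValue):

Python
def hamiltonian_cycles(G):
    n = len(G)
    path = [0] * n                  # path[0] is fixed to vertex 0 (node 1)
    cycles = []

    def extend(k):
        for v in range(1, n):       # try each candidate vertex for position k
            # Feasible only if v is adjacent to the previous vertex
            # and has not been used earlier in the path.
            if G[path[k - 1]][v] and v not in path[:k]:
                path[k] = v
                if k == n - 1:
                    # Last position: the cycle must close back to the start.
                    if G[v][path[0]]:
                        cycles.append(path + [path[0]])
                else:
                    extend(k + 1)
                path[k] = 0         # backtrack: undo the choice

    extend(1)
    return cycles

G = [[0, 1, 0, 1],
     [1, 0, 1, 1],
     [0, 1, 0, 1],
     [1, 1, 1, 0]]
for cycle in hamiltonian_cycles(G):
    print(" -> ".join("ABCD"[v] for v in cycle))
# A -> B -> C -> D -> A
# A -> D -> C -> B -> A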

Note: Finding all Hamiltonian cycles in a graph can be computationally expensive, especially for large
graphs. In practice, heuristics and optimizations are often used to improve the efficiency of the
algorithm.

Understanding the NextValue Algorithm

Purpose

The NextValue algorithm is a crucial subroutine used in backtracking to construct Hamiltonian cycles.
Its primary goal is to determine the next vertex to be added to a partially constructed Hamiltonian
path.

How it Works

1. Initialization: The algorithm starts with the current vertex x[k-1] in the path and attempts to find a suitable next vertex x[k].
2. Vertex Increment: The algorithm iteratively increments x[k] modulo n+1. This ensures that all vertices are considered cyclically.
3. Adjacency Check: It checks if there's an edge between the current vertex x[k-1] and the potential next vertex x[k] using the adjacency matrix G.
4. Distinctness Check: It verifies if the potential next vertex x[k] has already been included in the path. This is done by iterating through the previously assigned vertices x[1] to x[k-1] and checking for duplicates.
5. Cycle Check: If k reaches n, we've reached the last vertex. In this case, it's necessary to check if there's an edge between x[n] and x[1] to form a complete cycle.

Key Points

• The algorithm ensures that the next vertex is adjacent to the current vertex.
• It prevents the inclusion of the same vertex multiple times in the path.
• It handles the special case where k = n to check for a complete cycle.
• The use of modulo arithmetic (x[k] := (x[k] + 1) mod (n + 1)) allows for cyclic traversal of vertices.

Role in Backtracking

The NextValue algorithm plays a vital role in the backtracking process for finding Hamiltonian cycles.
By efficiently determining the next possible vertex, it helps to prune the search space and avoid
exploring invalid paths. This significantly improves the efficiency of the backtracking algorithm.

In essence, the NextValue algorithm acts as a constraint checker, ensuring that only valid
vertices are considered as potential extensions to the Hamiltonian path.
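
Rendered in Python, the pseudocode looks roughly like this (a sketch; G and x are treated as 1-indexed, with index 0 as padding, so the indices match the textbook notation):

Python
def next_value(G, x, k, n):
    # Assign the next legal vertex to x[k]; leave x[k] = 0 if none remains.
    # G and x are 1-indexed (row/column 0 are padding) to match the pseudocode.
    while True:
        x[k] = (x[k] + 1) % (n + 1)     # x[k] := (x[k] + 1) mod (n + 1)
        if x[k] == 0:
            return                       # every candidate has been tried
        if G[x[k - 1]][x[k]] == 1:       # adjacent to the previous vertex?
            if all(x[j] != x[k] for j in range(1, k)):   # distinct so far?
                if k < n or G[x[k]][x[1]] == 1:          # closes the cycle?
                    return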

Prim's vs Kruskal's

• Strategy: Prim's grows a single tree outward from a starting vertex, repeatedly adding the cheapest edge that leaves the tree; Kruskal's sorts all edges and repeatedly adds the cheapest edge that does not create a cycle, merging a forest of trees.
• Data structures: Prim's uses a priority queue; Kruskal's uses a disjoint-set (union-find) structure.
• Complexity: Prim's with a binary heap runs in O(E log V); Kruskal's runs in O(E log E), dominated by sorting the edges.
• Suitability: Prim's tends to perform better on dense graphs, Kruskal's on sparse graphs; both produce a minimum spanning tree.
