
Dynamic Programming is an algorithmic technique with the following properties:
• It is mainly an optimization over plain recursion. Wherever we see a recursive solution that makes repeated calls for the same inputs, we can optimize it using Dynamic Programming.
• The idea is simply to store the results of subproblems so that we do not have to re-compute them when needed later. This simple optimization typically reduces time complexity from exponential to polynomial.
• Some popular problems solved using Dynamic Programming are Fibonacci Numbers, Diff Utility (Longest Common Subsequence), Bellman–Ford Shortest Path, Floyd–Warshall, Edit Distance and Matrix Chain Multiplication.

Dynamic Programming (DP) Introduction





Dynamic Programming is a commonly used algorithmic technique for optimizing recursive solutions in which the same subproblems are solved repeatedly.
• The core idea behind DP is to store solutions to subproblems so that each is solved only once.
• To solve DP problems, we first write a recursive solution in a way that exposes overlapping subproblems in the recursion tree (the recursive function is called with the same parameters multiple times).
• To make sure that each recursive value is computed only once (improving the running time of the algorithm), we store the results of the recursive calls.
• There are two ways to store the results: one is top-down (memoization) and the other is bottom-up (tabulation).

When to Use Dynamic Programming (DP)?


Dynamic programming is used for solving problems that exhibit the following two characteristics:

1. Optimal Substructure:
Optimal substructure means that the optimal result of the bigger problem can be built from the optimal results of its subproblems.
Example:
Consider the problem of finding the minimum cost path in a weighted graph from a source node to a destination node. We can break this problem down into smaller subproblems:
• Find the minimum cost path from the source node to each intermediate node.
• Find the minimum cost path from each intermediate node to the destination node.
The solution to the larger problem (the minimum cost path from the source node to the destination node) can then be constructed from the solutions to these smaller subproblems.
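In symbols (with placeholder names of our own for this illustration), if an optimal path from source s to destination d passes through an intermediate node v, then minCost(s, d) = minCost(s, v) + minCost(v, d), so the best overall path can be found by minimizing this sum over candidate intermediate nodes v.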

2. Overlapping Subproblems:
The same subproblems are solved repeatedly in different parts of the computation (see the Overlapping Subproblems Property in Dynamic Programming).
Example:
Consider computing the Fibonacci series. To compute the Fibonacci number at index n, we need the Fibonacci numbers at indices n-1 and n-2. This means the subproblem of computing the Fibonacci number at index n-2 is used twice (note that the call for n-1 itself makes two calls, one for n-2 and another for n-3) in the solution to the larger problem of computing the Fibonacci number at index n.
In the recursion tree for the Nth Fibonacci number, these overlapping subproblems appear as identical subtrees that are expanded again and again, as the sketch below makes concrete.
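To make the overlap concrete, here is a plain recursive Fibonacci in Python (a minimal sketch; the function name fib is our own choice):

def fib(n):
    # Base cases: fib(0) = 0, fib(1) = 1.
    if n <= 1:
        return n
    # Both branches recompute the same calls: fib(n - 2) is evaluated
    # here and again inside fib(n - 1), and so on down the tree.
    return fib(n - 1) + fib(n - 2)

Each level of this recursion roughly doubles the number of calls, which is why the running time grows exponentially.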
Approaches of Dynamic Programming (DP)
Dynamic programming can be implemented using two approaches:
1. Top-Down Approach (Memoization):
In the top-down approach, also known as memoization, we keep the solution recursive and add a memoization table to avoid repeated calls for the same subproblems.
• Before making any recursive call, we first check whether the memoization table already has a solution for it.
• After the recursive call is over, we store the solution in the memoization table.
2. Bottom-Up Approach (Tabulation):
In the bottom-up approach, also known as tabulation, we start with the smallest subproblems and gradually build up to the final solution.
• We write an iterative solution (avoiding recursion overhead) and build the solution in a bottom-up manner.
• We use a dp table where we first fill in the solutions for the base cases and then fill the remaining entries using the recurrence relation.
• We apply the recurrence only to table entries and do not make recursive calls, as in the sketch below.
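For illustration, a bottom-up Fibonacci in Python might look like this (a minimal sketch; the names fib_tab and dp are our own):

def fib_tab(n):
    if n <= 1:
        return n
    # dp[i] holds the i-th Fibonacci number.
    dp = [0] * (n + 1)
    dp[0], dp[1] = 0, 1  # base cases filled first
    for i in range(2, n + 1):
        # Apply the recurrence dp[i] = dp[i-1] + dp[i-2] on table
        # entries only; no recursive calls are made.
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]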
How will Dynamic Programming (DP) Work?
Let us look again at the recursion tree for Fibonacci with its overlapping subproblems in mind. We can clearly see that the plain recursive solution does a lot of work again and again, which is what makes its time complexity exponential. Imagine the time taken to compute a large Fibonacci number this way.

• Identify Subproblems: Divide the main problem into smaller subproblems, i.e., F(n-1) and F(n-2).
• Store Solutions: Solve each subproblem and store the solution in a table or array so that we do not have to recompute it.
• Build Up Solutions: Use the stored solutions to build up the solution to the main problem. For F(n), look up F(n-1) and F(n-2) in the table and add them.
• Avoid Recomputation: By storing solutions, DP ensures that each subproblem (for example, F(2)) is solved only once, reducing computation time.
Using Memoization Approach - O(n) Time and O(n) Space
To achieve this in our example, we simply take a memo array initialized to -1. Before making a recursive call, we first check whether the value stored in the memo array at that position is -1. The value -1 indicates that we have not calculated it yet and must compute it recursively. The result is then stored in the memo array so that, the next time the same value is needed, it can be taken directly from the memo array, as in the sketch below.
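Following the description above, a memoized Fibonacci in Python could look like this (a minimal sketch using the -1 convention; the name fib_memo is our own):

def fib_memo(n, memo=None):
    if memo is None:
        memo = [-1] * (n + 1)  # -1 marks values not computed yet
    if n <= 1:
        return n
    if memo[n] != -1:  # already computed: reuse the stored value
        return memo[n]
    # Compute once, store, and reuse on all later calls.
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

Each index from 0 to n is computed at most once, giving O(n) time and O(n) space.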

Longest Common Subsequence (LCS)





Given two strings, s1 and s2, the task is to find the length of the Longest
Common Subsequence. If there is no common subsequence, return 0.
A subsequence is a string generated from the original string by deleting 0
or more characters, without changing the relative order of the remaining
characters.
For example, the subsequences of "ABC" are "", "A", "B", "C", "AB", "AC", "BC" and "ABC". In general, a string of length n has 2^n subsequences.
Examples:
Input: s1 = "ABC", s2 = "ACD"
Output: 2
Explanation: The longest subsequence which is present in both strings is
"AC".
Input: s1 = "AGGTAB", s2 = "GXTXAYB"
Output: 4
Explanation: The longest common subsequence is "GTAB".
Input: s1 = "ABC", s2 = "CBA"
Output: 1
Explanation: There are three longest common subsequences of length 1,
"A", "B" and "C".
