DAA E-Content - Module 5 Dynamic Programming

The document is a module on Dynamic Programming from a course on Design and Analysis of Algorithms, detailing key concepts, techniques, and specific problems such as the Coin Change Problem and Longest Common Subsequence. It outlines the principles of dynamic programming, including optimal substructure and overlapping sub-problems, and explains methods like memoization and tabulation. Additionally, it provides algorithms and examples for solving various dynamic programming problems.

School of Computer Science and Engineering

Design and Analysis of Algorithm [R1UC407B]

Module-V: Dynamic Programming


Dr. A K Yadav

School of Computer Science and Engineering


Plot No 2, Sector 17A, Yamuna Expressway
Greater Noida, Uttar Pradesh - 203201

February 14, 2025

Module-V: Dynamic Programming Dr. A K Yadav Design and Analysis of Algorithm 1/74

Contents

Dynamic programming 3
Coin Change Problem 9
Longest Common Subsequence 14
Traveling Salesman Problem 26
Multi stage graph 27
Floyd Warshall algorithm 28
0-1 Knapsack Problem 36
Matrix Chain Multiplication(MCM) 43
Optimal binary search trees 53
Binomial Coefficient 73


Dynamic programming
▶ Solves problems by combining the solutions to sub-problems
▶ Sub-problems are overlapping
▶ Does not solve overlapping sub-problems again and again
▶ Behaves like Divide and Conquer if the sub-problems are not overlapping
▶ Used in optimization problems
▶ Can be applied either top-down with memoization or bottom-up with tabulation


Four Steps of Dynamic programming


1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution
4. Construct an optimal solution from computed information


Elements of dynamic programming


Two key ingredients that an optimization problem must have in
order to apply dynamic programming:
1. Optimal substructure
2. Overlapping sub-problems


Memoization
▶ Memoization is mainly storing the value in the form of Memo
or in some tabular method
▶ Whenever we need some calculation of the sub-problem then
we first check the memo
▶ If solution of the sub-problems is already available in memo
then we use that solution and not solve the sub-problem again
▶ If the solution of the sub-problem is not in memo then we
solve the sub-problem and note the result in the form of the
memo for next call
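As a small illustration of the idea above (not part of the module's own pseudocode; the function names are illustrative), a memoized Fibonacci in C++:

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Memoized Fibonacci: each sub-problem F(n) is solved once and its
// result is noted in `memo`; later calls just read the memo.
long long fib(int n, vector<long long>& memo) {
    if (n <= 1) return n;               // base cases F(0)=0, F(1)=1
    if (memo[n] != -1) return memo[n];  // solution already in the memo
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    return memo[n];
}

long long fib(int n) {
    vector<long long> memo(n + 1, -1);  // -1 marks "not computed yet"
    return fib(n, memo);
}
```

Without the memo the recursion takes exponential time; with it each of the n sub-problems is computed once, giving O(n) time.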


Difference between Dynamic Programming and Memoization
▶ Memoization is an optimization technique where we cache previously computed results, and return the cached result when the same computation is needed again.
▶ Dynamic programming is a technique for solving problems of a recursive nature iteratively, and is applicable when the computations of the sub-problems overlap.
▶ Dynamic programming is typically implemented using tabulation, but can also be implemented using memoization. So neither one is a "subset" of the other.


▶ When we solve a dynamic programming problem using tabulation, we solve the problem bottom-up, i.e. by solving all related sub-problems first, typically by filling up an n-dimensional table. Based on the results in the table, the solution to the top/original problem is then computed.
▶ If we use memoization to solve the problem, we do it by maintaining a map of already solved sub-problems. We work top-down in the sense that we solve the top problem first (which typically recurses down to solve the sub-problems).
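The tabulated (bottom-up) counterpart of the earlier memoized Fibonacci fills the table from the base cases up, so the answer is read off the last entry. A minimal sketch (the function name is illustrative):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Tabulated (bottom-up) Fibonacci: all smaller sub-problems are
// solved first by filling a one-dimensional table.
long long fibTab(int n) {
    if (n <= 1) return n;           // base cases F(0)=0, F(1)=1
    vector<long long> t(n + 1);
    t[0] = 0;
    t[1] = 1;
    for (int i = 2; i <= n; i++)
        t[i] = t[i - 1] + t[i - 2]; // both sub-results are already in the table
    return t[n];
}
```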


Coin Change Problem


Given an integer array coins of size N representing different coin denominations and an integer sum, the task is to count all combinations of coins that make the given value sum. Assume that you have an infinite supply of each type of coin.
For each coin, there are 2 options:
1. Include the current coin: subtract the current coin's denomination from the target sum and call the count function recursively with the updated sum and the same set of coins, i.e., count(coins, n, sum − coins[n − 1]).
2. Exclude the current coin: call the count function recursively with the same sum and the remaining coins, i.e., count(coins, n − 1, sum).


The final result will be the sum of both cases.

Base cases:
▶ If the target sum is 0, there is only one way to make the sum, which is by not selecting any coin. So count(coins, n, 0) = 1.
▶ If the target sum is negative (sum < 0) or no coins are left to consider (n = 0), there is no way to make the sum, so count(coins, 0, sum) = 0 for sum > 0.
We can use the following steps to implement the dynamic programming (tabulation) approach for Coin Change.
▶ Create a 2D dp array with (n + 1) rows and (sum + 1) columns, one row per coin denomination plus the empty case and one column per target sum from 0 to sum.
▶ dp[0][0] is set to 1, which represents the base case where the target sum is 0 and there is only one way to make the change: by not selecting any coin.

▶ Iterate through the rows of the dp array (i from 1 to n), representing the current coin being considered.
▶ The inner loop iterates over the target sums (j from 0 to sum).
▶ Add the number of ways to make change without using the current coin, i.e., dp[i][j] += dp[i−1][j].
▶ If j ≥ coins[i−1], add the number of ways to make change using the current coin, i.e., dp[i][j] += dp[i][j−coins[i−1]].
▶ dp[n][sum] will contain the total number of ways to make change for the given target sum using the available coin denominations.

// Returns the total number of distinct ways to make sum
// using n coins of different denominations
#include <iostream>
#include <vector>
using namespace std;

int count(vector<int>& coins, int n, int sum)
{
    // 2D dp table: n+1 rows (coin denominations) and
    // sum+1 columns (target sums)
    vector<vector<int>> dp(n + 1, vector<int>(sum + 1, 0));
    // Base case: a target sum of 0 can be made in exactly
    // one way, by not selecting any coin
    dp[0][0] = 1;
    for (int i = 1; i <= n; i++)
    {
        for (int j = 0; j <= sum; j++)
        {
            // Ways to make change without using the current coin
            dp[i][j] += dp[i - 1][j];
            if ((j - coins[i - 1]) >= 0)
            {
                // Ways to make change using the current coin
                dp[i][j] += dp[i][j - coins[i - 1]];
            }
        }
    }
    return dp[n][sum];
}

// Driver Code
int main()
{
    vector<int> coins{ 1, 2, 3 };
    int n = 3;
    int sum = 5;
    cout << count(coins, n, sum); // prints 5
    return 0;
}


Longest Common Subsequence


▶ Given two sequences X = ⟨x1, x2, . . . , xm⟩ of length m and Y = ⟨y1, y2, . . . , yn⟩ of length n
▶ Z = ⟨z1, z2, . . . , zk⟩ is a common subsequence of X and Y if Z is a subsequence of both X and Y
▶ Z is a longest common subsequence (LCS) of X and Y if X and Y have no common subsequence of length greater than Z
▶ The ith prefix of X is Xi = ⟨x1, x2, . . . , xi⟩ for i = 0, 1, . . . , m


Step 1: Characterizing a longest common subsequence


▶ Let X = ⟨x1, x2, . . . , xm⟩ and Y = ⟨y1, y2, . . . , yn⟩ be two sequences, and let Z = ⟨z1, z2, . . . , zk⟩ be any LCS of X and Y.
▶ Optimal substructure of an LCS:
▶ If xm = yn, then zk = xm = yn and Zk−1 is an LCS of Xm−1 and Yn−1
▶ If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of Xm−1 and Y
▶ If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Yn−1


Step 2: A recursive solution


▶ In step 1 there are two cases:
▶ Case 1: if xm = yn, find the LCS of Xm−1 and Yn−1
▶ Appending xm to the LCS of Xm−1 and Yn−1 gives the LCS of X and Y
▶ Case 2: if xm ≠ yn, solve two sub-problems: find the LCS of Xm−1 and Y, and find the LCS of X and Yn−1
▶ The longer of these two is the LCS of X and Y
▶ Let c[i, j] be the length of an LCS of the sequences Xi and Yj
▶ If either of the two sequences is empty, i.e. i = 0 or j = 0, then the LCS has length zero, that is c[i, j] = 0


▶ The recursive solution will be

c[i, j] =
  0                                  if i = 0 or j = 0
  c[i − 1, j − 1] + 1                if i, j > 0 and xi = yj
  max(c[i, j − 1], c[i − 1, j])      if i, j > 0 and xi ≠ yj

Step 3: Computing the length of an LCS


▶ Let X = ⟨x1 , x2 , . . . , xm ⟩ and Y = ⟨y1 , y2 , . . . , yn ⟩ be two
sequences, and let Z = ⟨z1 , z2 , . . . , zk ⟩ be any LCS of X and
Y.
▶ Let a table c[0 . . . m, 0 . . . n] store the lengths c[i, j] of the LCSs of the prefixes Xi and Yj
▶ Let a table b[1 . . . m, 1 . . . n] store the direction from which each c[i, j] was obtained, in order to construct an optimal solution


Bottom-up Approach
LCS-LENGTH(X, Y)
1. m = length(X)
2. n = length(Y)
3. Let c[0 . . . m, 0 . . . n] and b[1 . . . m, 1 . . . n] be two new tables
4. for i = 1 to m
5. c[i, 0] = 0 // Y is empty
6. for j = 0 to n
7. c[0, j] = 0 // X is empty
8. for i = 1 to m
9. for j = 1 to n
10. if xi = yj
11. c[i, j] = c[i − 1, j − 1] + 1
12. b[i, j] = ↖
13. else if c[i − 1, j] ≥ c[i, j − 1]
14. c[i, j] = c[i − 1, j]
15. b[i, j] = ↑
16. else
17. c[i, j] = c[i, j − 1]
18. b[i, j] = →
19. return c, b
Length of the LCS is c[m, n]
Complexity of the algorithm is Θ(mn)
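LCS-LENGTH translates almost line-for-line into C++. The sketch below computes only the length table c (strings are 0-based, so xi corresponds to X[i−1]):

```cpp
#include <cassert>
#include <string>
#include <vector>
#include <algorithm>
using namespace std;

// Bottom-up LCS length: c[i][j] holds the LCS length of the
// prefixes X_i = X[0..i-1] and Y_j = Y[0..j-1].
int lcsLength(const string& X, const string& Y) {
    int m = X.size(), n = Y.size();
    // row 0 and column 0 stay 0: one of the prefixes is empty
    vector<vector<int>> c(m + 1, vector<int>(n + 1, 0));
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++) {
            if (X[i - 1] == Y[j - 1])
                c[i][j] = c[i - 1][j - 1] + 1;            // diagonal case
            else
                c[i][j] = max(c[i - 1][j], c[i][j - 1]);  // up or left
        }
    return c[m][n]; // length of the LCS, Theta(mn) time
}
```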


Top-down Approach:
MEMOIZED-LCS(X, Y, m, n)
1. Let c[0 . . . m, 0 . . . n] and b[1 . . . m, 1 . . . n] be two new tables
2. for i = 0 to m
3. for j = 0 to n
4. c[i, j] = 0
5. return LCS(X, Y, c, b, m, n)
LCS(X, Y, c, b, i, j)
1. if i = 0 or j = 0 // base case: one prefix is empty
2. return 0
3. if c[i, j] > 0 // already computed, reuse the memo
4. return c[i, j]
5. if xi = yj
6. c[i, j] = LCS(X, Y, c, b, i − 1, j − 1) + 1
7. b[i, j] = ↖
8. else if LCS(X, Y, c, b, i − 1, j) ≥ LCS(X, Y, c, b, i, j − 1)
9. c[i, j] = LCS(X, Y, c, b, i − 1, j)
10. b[i, j] = ↑
11. else
12. c[i, j] = LCS(X, Y, c, b, i, j − 1)
13. b[i, j] = →
14. return c[i, j]


Step 4: Constructing an LCS


▶ Step 3 calculates c[i, j] and b[i, j]
▶ c[i, j] gives the length of LCS of Xi and Yj
▶ b[i, j] tells from where we calculated c[i, j]
▶ ↖ shows that xi = yj and part of LCS
▶ ↑ shows that c[i − 1, j] ≥ c[i, j − 1]
▶ → shows that c[i, j − 1] > c[i − 1, j]
▶ call PRINT-LCS(b, X , m, n)
PRINT-LCS(b, X, i, j)
1. if i = 0 or j = 0
2. return
3. if b[i, j] = ↖
4. PRINT-LCS(b, X, i − 1, j − 1)
5. print xi
6. else if b[i, j] = ↑
7. PRINT-LCS(b, X, i − 1, j)
8. else
9. PRINT-LCS(b, X, i, j − 1)
Complexity of the algorithm is Θ(m + n)


Traveling Salesman Problem



Multi stage graph



Floyd Warshall algorithm


▶ The algorithm finds the shortest route between all pairs of nodes/vertices of a non-negative weighted directed graph G.
▶ Suppose V is the set of n nodes/vertices in graph G, i.e. V = {1, 2, . . . , n}
▶ E is the set of edges of the graph G, i.e. (i, j) ∈ E if there is an edge from i to j in the graph
▶ So we can say G = (V, E)
▶ wij is the weight of the edge from vertex i to j.
▶ If p is a path from vertex vi to vj through vertices v1, v2, . . . , vl, then the nodes other than the source and destination are known as intermediate nodes.
▶ The sum of the weights of the edges of the path p is the distance dij between vertex vi and vj
▶ If this distance is the shortest among all possible paths from vi to vj, then it is called the shortest path δ(i, j)

Step 1: The structure of a shortest path


▶ Let graph G have the set of vertices V = {1, 2, . . . , n}
▶ For any pair of vertices i, j ∈ V, consider all paths whose intermediate vertices are drawn from the set {1, 2, . . . , k} for some k
▶ Let p be the shortest (minimum-weight) simple path from i to j.
▶ There are two cases for any vertex k: either k is part of the minimum-weight path or it is not.
▶ If k is not part of the path, it can be removed from the set of allowed vertices without affecting the path. So the path uses only {1, 2, . . . , k − 1}; in other words, a shortest path from vertex i to vertex j with all intermediate vertices in the set {1, 2, . . . , k − 1} is also a shortest path from i to j with all intermediate vertices in the set {1, 2, . . . , k}


▶ But if k is part of the path, then we can divide the whole path p into two paths: p1 from i to k and p2 from k to j
▶ The intermediate nodes of p1 and p2 will be from {1, . . . , k − 1}, because k is now a source in one path and a destination in the other, but not an intermediate node.

δ(i, j) = δ(i, j)                 if k is not part of p
δ(i, j) = δ(i, k) + δ(k, j)       if k is part of p

where the right-hand sides use intermediate nodes {1, 2, . . . , k − 1}


Step 2: A recursive solution


▶ d_ij^k is the weight of a shortest path from vertex i to vertex j with all intermediate vertices in the set {1, 2, . . . , k}
▶ If k = 0 there is no intermediate node, so d_ij^0 = w_ij if there is a direct edge from i to j, d_ij^0 = ∞ if there is no direct edge from i to j, and d_ij^0 = 0 if i = j.
▶ d_ij^k = min{d_ij^{k−1}, d_ik^{k−1} + d_kj^{k−1}}
▶ π_ij^k is the predecessor of vertex j on the shortest path from vertex i with all intermediate vertices in the set {1, 2, . . . , k}
▶ If k = 0 there is no intermediate node, so π_ij^0 = i if there is a direct edge from i to j, and π_ij^0 = NIL if there is no direct edge from i to j or if i = j.

▶ π_ij^k = π_ij^{k−1} if d_ij^{k−1} ≤ d_ik^{k−1} + d_kj^{k−1}, and
  π_ij^k = π_kj^{k−1} if d_ij^{k−1} > d_ik^{k−1} + d_kj^{k−1}

d_ij^k =
  0                                              if i = j and k = 0
  ∞                                              if (i, j) ∉ E and k = 0
  w_ij                                           if (i, j) ∈ E and k = 0
  min{ d_ij^{k−1}, d_ik^{k−1} + d_kj^{k−1} }     if k ≥ 1

π_ij^k =
  NIL             if i = j and k = 0
  NIL             if w_ij = ∞ and k = 0
  i               if i ≠ j, w_ij < ∞ and k = 0
  π_ij^{k−1}      if d_ij^{k−1} ≤ d_ik^{k−1} + d_kj^{k−1} and k ≥ 1
  π_kj^{k−1}      if d_ij^{k−1} > d_ik^{k−1} + d_kj^{k−1} and k ≥ 1


Step 3: Computation of Shortest path


Let W be the weight matrix and A the adjacency matrix of the graph G of n vertices (A supplies the initial predecessors π_ij^0).
Bottom-up Approach-I
FLOYD-WARSHALL(W, A, n)
1. D^0 = W
2. Π^0 = A
3. for k = 1 to n
4. Let D^k = (d_ij^k) be an n × n matrix
5. Let Π^k = (π_ij^k) be an n × n matrix
6. for i = 1 to n
7. for j = 1 to n
8. if d_ij^{k−1} ≤ d_ik^{k−1} + d_kj^{k−1}
9. d_ij^k = d_ij^{k−1}
10. π_ij^k = π_ij^{k−1}
11. else
12. d_ij^k = d_ik^{k−1} + d_kj^{k−1}
13. π_ij^k = π_kj^{k−1}
14. return D^n, Π^n


Bottom-up Approach-II
FLOYD-WARSHALL(W, A, n)
1. D = W
2. Π = A
3. for k = 1 to n
4. for i = 1 to n
5. for j = 1 to n
6. if dij > dik + dkj // going through k is shorter
7. dij = dik + dkj
8. πij = πkj
9. return D, Π
This version updates D and Π in place, so only Θ(n²) extra space is used.
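Approach-II translates directly into C++. The sketch below computes only the distance matrix D (vertices are 0-based, INF stands in for "no edge", and the predecessor matrix Π is omitted for brevity):

```cpp
#include <cassert>
#include <vector>
using namespace std;

const int INF = 1000000000; // stands in for "no edge" (infinity)

// In-place Floyd-Warshall: after iteration k, d[i][j] is the shortest
// i -> j distance whose intermediate vertices all lie in {0, ..., k}.
vector<vector<int>> floydWarshall(vector<vector<int>> d) {
    int n = d.size();
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (d[i][k] < INF && d[k][j] < INF &&
                    d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j]; // route through k
    return d;
}
```

The INF guards prevent adding two "infinities" and overflowing int.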

0-1 Knapsack Problem


▶ A set of n items numbered 1 to n is given
▶ wi is the weight of the ith item
▶ vi is the value of the ith item
▶ We can pick any item only as a whole; picking a fraction of its weight is not allowed
▶ The total weight we pick cannot be more than W, while the total value of the picked items must be maximum


▶ So the 0-1 Knapsack Problem is an optimization problem stated as:

Maximize Σ_{i=1}^{n} v_i x_i

under the constraints Σ_{i=1}^{n} w_i x_i ≤ W, and x_i ∈ {0, 1}

▶ In the bounded Knapsack Problem we can pick up to c_i copies of item i:

Maximize Σ_{i=1}^{n} v_i x_i

under the constraints Σ_{i=1}^{n} w_i x_i ≤ W, and x_i ∈ {0, . . . , c_i}


▶ In the unbounded Knapsack Problem we can pick any number of copies of item i, including none:

Maximize Σ_{i=1}^{n} v_i x_i

under the constraints Σ_{i=1}^{n} w_i x_i ≤ W, and x_i ≥ 0
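For contrast with the 0-1 version solved below, the unbounded variant needs only a one-dimensional table, since an item may be reused at the same row. A sketch (the function name is illustrative):

```cpp
#include <cassert>
#include <vector>
#include <algorithm>
using namespace std;

// Unbounded knapsack: dp[j] is the best value with capacity j when
// every item may be picked any number of times (including none).
int unboundedKnapsack(const vector<int>& w, const vector<int>& v, int W) {
    vector<int> dp(W + 1, 0);
    for (int j = 1; j <= W; j++)
        for (size_t i = 0; i < w.size(); i++)
            if (w[i] <= j)
                dp[j] = max(dp[j], dp[j - w[i]] + v[i]); // item i may repeat
    return dp[W];
}
```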


Compute the maximum Bag Value


▶ The total number of items is n
▶ v[i] is the value of the ith item, whose weight is w[i].
▶ A table m[0 . . . n, 0 . . . W] is used to store the value of the bag for n items and maximum capacity W.
▶ m[i, j] is the maximum value of the bag using the first i items with capacity j.


Bottom-up Approach:
OPTIMAL-0-1KP(w, v, n, W)
1. Let m[0 . . . n, 0 . . . W]
2. for i = 0 to n
3. for j = 0 to W
4. if i = 0 or j = 0 // no item or nil bag capacity
5. m[i, j] = 0
6. else if w[i] ≤ j // ith item can be accommodated in the bag
7. if (m[i − 1, j − w[i]] + v[i]) > m[i − 1, j]
8. m[i, j] = m[i − 1, j − w[i]] + v[i] // pick the item
9. else
10. m[i, j] = m[i − 1, j] // don't pick the item
11. else m[i, j] = m[i − 1, j] // item does not fit
12. return m


Top-down Approach:
MEMOIZED-0-1KP(w, v, n, W)
1. Let m[0 . . . n, 0 . . . W]
2. for i = 0 to n
3. for j = 0 to W
4. m[i, j] = 0
5. return knapsack(m, w, v, n, W)
knapsack(m, w, v, i, j)
1. if i = 0 or j = 0 // no item or nil bag capacity
2. return 0
3. if m[i, j] > 0 // already computed, reuse the memo
4. return m[i, j]
5. if w[i] ≤ j // item i fits
6. m[i, j] = max(knapsack(m, w, v, i − 1, j), knapsack(m, w, v, i − 1, j − w[i]) + v[i])
7. else
8. m[i, j] = knapsack(m, w, v, i − 1, j)
9. return m[i, j]
Complexity of the Knapsack algorithm is O(nW)
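The bottom-up pseudocode can be sketched in C++ as follows (input vectors are 0-based, so item i has weight w[i−1] and value v[i−1]):

```cpp
#include <cassert>
#include <vector>
#include <algorithm>
using namespace std;

// Bottom-up 0-1 knapsack: m[i][j] is the best value achievable using
// the first i items with capacity j.
int knapsack(const vector<int>& w, const vector<int>& v, int W) {
    int n = w.size();
    // row 0 and column 0 are 0: no items or nil bag capacity
    vector<vector<int>> m(n + 1, vector<int>(W + 1, 0));
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= W; j++) {
            m[i][j] = m[i - 1][j];                 // don't pick item i
            if (w[i - 1] <= j)                     // item i fits: try picking it
                m[i][j] = max(m[i][j], m[i - 1][j - w[i - 1]] + v[i - 1]);
        }
    return m[n][W]; // O(nW) time and space
}
```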


Matrix Chain Multiplication(MCM)


▶ Matrix chain multiplication is not the multiplication of the matrices itself; it is the way to find an order of matrix multiplication with the minimum number of scalar multiplications.
▶ It amounts to fully parenthesizing the product of the matrices
▶ ABC can be multiplied in two ways: A(BC) or (AB)C.
▶ Which one costs the minimum number of scalar multiplications can be found using MCM
▶ Find the order of multiplication of the matrices A1 A2 . . . An
▶ We need an array p of size n + 1 to store the dimensions of the n compatible matrices


Step 1: The structure of an optimal parenthesization


▶ Let Ai...j = Ai . . . Aj be the product of the matrices Ai, Ai+1, . . . , Aj for i ≤ j
▶ If i = j then there is only one matrix and the number of scalar multiplications is zero.
▶ If i < j then we can put the parenthesis anywhere after Ai but before Aj
▶ Suppose an optimal solution puts the parenthesis after matrix Ak, i.e. (Ai . . . Ak)(Ak+1 . . . Aj) where i ≤ k < j
▶ Now the total cost of Ai...j will be the cost of Ai...k plus the cost of Ak+1...j plus the cost of multiplying them together.
▶ We assumed k is at the optimal position, so no other value of k is less costly.


▶ So Ai...k and Ak+1...j are also optimal.
▶ We can find the optimal solution of the problem from the optimal solutions of the sub-problems
▶ We have to take the correct value of k, i ≤ k < j, such that the sub-problems have optimal solutions.
▶ We have to examine all possible values of k for the optimal solution.


Step 2: A recursive solution


We have to define the cost of an optimal solution recursively in terms of the optimal solutions to sub-problems.
▶ Let m[i, j] be the minimum number of scalar multiplications needed to compute the matrix Ai...j where 1 ≤ i ≤ j ≤ n
▶ The lowest cost to multiply A1...n will be m[1, n].
▶ If i = j there is only one matrix and no need to multiply, so m[i, i] = 0 for all 1 ≤ i ≤ n.
▶ If i < j then we split the product at position k for an optimal solution.
▶ The total cost will be m[i, j] = m[i, k] + m[k + 1, j] + p_{i−1} × p_k × p_j
▶ For all possible values of k = i, i + 1, . . . , j − 1 we have to compute m[i, j]

▶ We pick the value of k for which m[i, j] is minimum.
▶ ∴ The recursive definition for the minimum cost of parenthesizing the product Ai...j = Ai Ai+1 . . . Aj will be:

m[i, j] =
  0                                                                if i = j
  min_{i ≤ k < j} { m[i, k] + m[k + 1, j] + p_{i−1} × p_k × p_j }  if i < j

▶ Store in s[i, j] the value of k for which m[i, j] is minimum.


Step 3: Computing the optimal costs


▶ We can solve the above recurrence by using two methods: bottom-up or top-down.
▶ We compute each overlapping sub-problem only once to avoid the exponential time cost.
▶ Ai has dimensions p_{i−1} × p_i.
▶ Table m[1 . . . n, 1 . . . n] is used for storing m[i, j], and s[1 . . . n − 1, 2 . . . n] stores the value of k for which m[i, j] is minimum over all i ≤ k < j.


Bottom-up Approach:
MATRIX-CHAIN-ORDER(p, n)
1. Let m[1 . . . n, 1 . . . n] and s[1 . . . n − 1, 2 . . . n]
2. for i = 1 to n
3. m[i, i] = 0
4. for l = 2 to n // l is the chain length
5. for i = 1 to n − l + 1
6. j = i + l − 1
7. m[i, j] = ∞
8. for k = i to j − 1
9. q = m[i, k] + m[k + 1, j] + p_{i−1} × p_k × p_j
10. if q < m[i, j]
11. m[i, j] = q
12. s[i, j] = k
13. return m, s

Top-down Approach:
MEMOIZED-MATRIX-CHAIN(p, n)
1. Let m[1 . . . n, 1 . . . n] and s[1 . . . n − 1, 2 . . . n]
2. for i = 1 to n
3. for j = i to n
4. m[i, j] = ∞
5. return LOOKUP-CHAIN(m, s, p, 1, n)
LOOKUP-CHAIN(m, s, p, i, j)
1. if m[i, j] < ∞
2. return m[i, j]
3. if i = j
4. m[i, j] = 0
5. else for k = i to j − 1


6. q = LOOKUP-CHAIN(m, s, p, i, k) + LOOKUP-CHAIN(m, s, p, k + 1, j) + p_{i−1} × p_k × p_j
7. if q < m[i, j]
8. m[i, j] = q
9. s[i, j] = k
10. return m[i, j]


Step 4: Constructing an optimal solution


▶ Table m[i, j] gives the number of scalar multiplications to multiply the matrices from Ai to Aj but does not show the order of multiplication
▶ Table s[i, j] stores the position of the parenthesis partitioning the matrices into Ai to A_{s[i,j]} and A_{s[i,j]+1} to Aj.
▶ So the multiplication will be (Ai . . . A_{s[i,j]}) and (A_{s[i,j]+1} . . . Aj)
PRINT-OPTIMAL-PARENS(s, i, j)
1. if i = j
2. print Ai
3. else print "("
4. PRINT-OPTIMAL-PARENS(s, i, s[i, j])
5. PRINT-OPTIMAL-PARENS(s, s[i, j] + 1, j)
6. print ")"
Complexity of the MCM is O(n³)
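MATRIX-CHAIN-ORDER can be sketched in C++ as below; only the cost table m is computed (the split table s needed by PRINT-OPTIMAL-PARENS is omitted for brevity):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// p has n+1 entries and matrix A_i is p[i-1] x p[i]. m[i][j] is the
// minimum number of scalar multiplications to compute A_i ... A_j.
long long matrixChainOrder(const vector<long long>& p) {
    int n = p.size() - 1;
    vector<vector<long long>> m(n + 1, vector<long long>(n + 1, 0));
    for (int l = 2; l <= n; l++)              // l = chain length
        for (int i = 1; i + l - 1 <= n; i++) {
            int j = i + l - 1;
            m[i][j] = -1;                     // sentinel: "not set yet"
            for (int k = i; k < j; k++) {     // split after A_k
                long long q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
                if (m[i][j] < 0 || q < m[i][j]) m[i][j] = q;
            }
        }
    return m[1][n]; // O(n^3) time, O(n^2) space
}
```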

Optimal binary search trees


▶ Suppose we want to store an English dictionary
▶ We can use any balanced binary search tree to store the dictionary words
▶ Suppose every word of the dictionary is searched the same number of times
▶ We can then search any word in O(lg n)
▶ The total time taken is O(mn lg n),
where m = number of times each word is searched
and n = number of words
▶ This scheme works perfectly if every word is searched an equal number of times.
▶ But if some words are searched very frequently and some very rarely, then the performance of this scheme degrades

▶ An alternative is the optimal binary search tree
▶ Let K = ⟨k1, k2, . . . , kn⟩ be a sequence of n distinct keys in sorted order such that k1 < k2 < · · · < kn
▶ Searches for keys not in the database are represented by dummy keys di
▶ Together the keys and dummy keys satisfy d0 < k1 < d1 < k2 < . . . < dn−1 < kn < dn
▶ d0 represents all keys less than k1 and not in the database, hence a dummy key
▶ dn represents all keys greater than kn and not in the database, hence a dummy key
▶ di for 0 < i < n represents all keys greater than ki, less than ki+1 and not in the database, hence dummy keys
▶ So in total there are n keys k1, k2, . . . , kn


▶ and n + 1 dummy keys d0, d1, d2, . . . , dn
▶ Let the probability that key ki is searched be pi for 1 ≤ i ≤ n.
▶ Let the probability that dummy key di is searched be qi for 0 ≤ i ≤ n.
▶ If we build a binary search tree for these keys and dummy keys, the keys become internal nodes, representing successful searches, and the dummy keys become external nodes (leaves), representing unsuccessful searches.
▶ Every search is either successful, finding some key ki, or unsuccessful, finding some dummy key di, so:

Σ_{k=1}^{n} p_k + Σ_{k=0}^{n} q_k = 1


▶ If kr is the root of the tree, then the two subtrees will be: one subtree having keys k1 to kr−1 and dummy keys d0 to dr−1, and the other subtree having keys kr+1 to kn and dummy keys dr to dn
▶ If we search among the keys ki to kj, then the relevant dummy keys are di−1 to dj, for 1 ≤ i ≤ j ≤ n
▶ If j = i − 1 then there is no key but only the one dummy key di−1
▶ Let

w[i, j] = Σ_{k=i}^{j} p_k + Σ_{k=i−1}^{j} q_k


▶ If kr is the root of a tree having keys ki to kj and dummy keys di−1 to dj, then the two subtrees will be: one having keys ki to kr−1 and dummy keys di−1 to dr−1, and the other having keys kr+1 to kj and dummy keys dr to dj
▶ The actual cost of a search equals the number of nodes examined, i.e. the depth of the node found by the search plus 1


▶ Then the expected cost of a search in the tree, over all keys and dummy keys, is

E[1, n] = Σ_{k=1}^{n} (depth(k_k) + 1) · p_k + Σ_{k=0}^{n} (depth(d_k) + 1) · q_k

E[1, n] = Σ_{k=1}^{n} depth(k_k) · p_k + Σ_{k=0}^{n} depth(d_k) · q_k + Σ_{k=1}^{n} p_k + Σ_{k=0}^{n} q_k

E[1, n] = Σ_{k=1}^{n} depth(k_k) · p_k + Σ_{k=0}^{n} depth(d_k) · q_k + 1


▶ The expected cost of a search in a tree with keys ki to kj and dummy keys di−1 to dj is

E[i, j] = Σ_{k=i}^{j} (depth(k_k) + 1) · p_k + Σ_{k=i−1}^{j} (depth(d_k) + 1) · q_k

E[i, j] = Σ_{k=i}^{j} depth(k_k) · p_k + Σ_{k=i−1}^{j} depth(d_k) · q_k + Σ_{k=i}^{j} p_k + Σ_{k=i−1}^{j} q_k

E[i, j] = Σ_{k=i}^{j} depth(k_k) · p_k + Σ_{k=i−1}^{j} depth(d_k) · q_k + w[i, j]


Step 1: The structure of an optimal binary search tree



▶ Let T be an optimal tree having a subtree T′ with keys ki . . . kj and dummy keys di−1 . . . dj for some 1 ≤ i ≤ j ≤ n
▶ Then T′ must also be an optimal subtree
▶ We can find the optimal cost of the tree from the optimal costs of the subtrees
▶ Take kr as the root of the keys ki to kj and dummy keys di−1 to dj
▶ Now there are two subtrees, one having keys ki to kr−1 with dummy keys di−1 to dr−1
▶ The second subtree having keys kr+1 to kj with dummy keys dr to dj


▶ The total cost of the tree is then the cost of the left subtree (keys ki to kr−1 , dummy keys di−1 to dr−1 ) plus the cost of the right subtree (keys kr+1 to kj , dummy keys dr to dj ) plus the cost of the key kr
▶ di is the dummy key between the keys ki and ki+1
▶ If we choose ki as the root then the left subtree contains no key (keys from ki to ki−1 ), only the dummy key di−1
▶ If we choose kj as the root then the right subtree contains no key (keys from kj+1 to kj ), only the dummy key dj
▶ By trying every key from ki to kj as the root, one by one, we can find the optimal cost of the tree


Step 2: A recursive solution

▶ Consider a subproblem with keys ki to kj and dummy keys di−1 to dj , where i ≥ 1, j ≤ n and j ≥ i − 1
▶ Let e[i, j] be the expected cost of searching an optimal binary search tree containing the keys ki to kj and dummy keys di−1 to dj
▶ Finally we have to find e[1, n]
▶ Let w[i, j] be the sum of the probabilities of all the keys ki . . . kj and dummy keys di−1 . . . dj , so that w[1, n] = 1:

w[i, j] = \sum_{k=i}^{j} p_k + \sum_{k=i-1}^{j} q_k


▶ When j = i − 1 there is no key, only a dummy key, so e[i, i − 1] = qi−1 and w[i, i − 1] = qi−1
▶ When j > i − 1, pick a key kr as root such that the left subtree having keys ki to kr−1 and the right subtree having keys kr+1 to kj are both optimal
▶ When a tree becomes the subtree of a root node, the depth of each of its nodes increases by one
▶ Hence the expected search cost of each subtree increases by the sum of all the probabilities in that subtree, i.e. the two subtrees contribute

e[i, r − 1] + w[i, r − 1] and e[r + 1, j] + w[r + 1, j]


▶ Taking kr as the root, e[i, j] becomes

e[i, j] = e[i, r − 1] + w[i, r − 1] + pr + e[r + 1, j] + w[r + 1, j]

e[i, j] = e[i, r − 1] + e[r + 1, j] + w[i, j]

∵ w[i, j] = w[i, r − 1] + pr + w[r + 1, j]
▶ The recursive solution will be:

e[i, j] = \begin{cases} q_{i-1} & \text{if } j = i - 1 \\ \min_{i \le r \le j} \{ e[i, r-1] + e[r+1, j] + w[i, j] \} & \text{if } j > i - 1 \end{cases}


Step 3: Computing the expected search cost of OBST

▶ The optimal cost is stored in e[1 . . . n + 1, 0 . . . n], an upper-triangular table of size (n + 1) × (n + 1)
▶ e[1, 0] stores the cost q0 of the dummy key d0 and e[n + 1, n] stores the cost qn of the dummy key dn
▶ e[i, j] stores the optimal search cost for the keys ki to kj , for 1 ≤ i ≤ n + 1, 0 ≤ j ≤ n, and j ≥ i − 1
▶ root[i, j] stores the root of the tree having keys ki to kj , for 1 ≤ i ≤ j ≤ n
▶ e[i, i − 1] = w[i, i − 1] = qi−1 for 1 ≤ i ≤ n + 1
▶ w[i, j] = w[i, j − 1] + pj + qj for 1 ≤ i ≤ n + 1, j > i − 1
▶ e[i, j] = min_{i≤r≤j} {e[i, r − 1] + e[r + 1, j] + w[i, j]} for j > i − 1


Bottom-up Approach:
OPTIMAL-BST(p,q,n)
1. Let e[1 . . . n + 1, 0 . . . n], w[1 . . . n + 1, 0 . . . n] and root[1 . . . n, 1 . . . n]
2. for i = 1 to n + 1 // empty subtree: only the dummy key di−1
3. w[i, i − 1] = qi−1
4. e[i, i − 1] = qi−1
5. for l = 1 to n // l = number of keys in the subtree
6. for i = 1 to n − l + 1
7. j = i + l − 1
8. e[i, j] = ∞
9. w[i, j] = w[i, j − 1] + pj + qj
10. for r = i to j
Module-V: Dynamic Programming Dr. A K Yadav Design and Analysis of Algorithm 63/74
School of Computer Science and Engineering

11. t = e[i, r − 1] + e[r + 1, j] + w[i, j] // t, not q, to avoid clashing with the probability array q
12. if t < e[i, j]
13. e[i, j] = t
14. root[i, j] = r
15. return e, root
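As a concrete illustration, the bottom-up pseudocode above can be sketched in Python. This is a sketch under the assumption of CLRS-style 1-indexed probability arrays: p[0] is unused padding, p[1..n] are the key probabilities and q[0..n] the dummy-key probabilities.

```python
import math

def optimal_bst(p, q, n):
    """Bottom-up OBST. p[1..n]: key probabilities (p[0] unused),
    q[0..n]: dummy-key probabilities. Returns the e and root tables."""
    # rows 1..n+1, columns 0..n, as in the pseudocode
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):            # empty subtree: only dummy key d_{i-1}
        e[i][i - 1] = q[i - 1]
        w[i][i - 1] = q[i - 1]
    for l in range(1, n + 1):            # l = number of keys in the subtree
        for i in range(1, n - l + 2):
            j = i + l - 1
            e[i][j] = math.inf
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):    # try every key k_r as the root
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root

# Worked example (the probabilities from CLRS, Chapter 15):
p = [0.0, 0.15, 0.10, 0.05, 0.10, 0.20]
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
e, root = optimal_bst(p, q, 5)
print(e[1][5])      # expected search cost of the optimal tree (2.75)
print(root[1][5])   # k2 is the overall root
```

e[1, 5] = 2.75 and root[1, 5] = k2 reproduce the standard CLRS example for these probabilities.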


Top-down Approach:
MEMOIZED-OPTIMAL-BST(p,q,n)
1. Let e[1 . . . n + 1, 0 . . . n], w [1 . . . n + 1, 0 . . . n] and
root[1 . . . n, 1 . . . n]
2. for i = 1 to n + 1
3. for j = i − 1 to n
4. e[i, j] = ∞
5. if j = i − 1
6. w [i, j] = qi−1
7. else
8. w [i, j] = w [i, j − 1] + pj + qj
9. return LOOKUP-OBST(e,w,root,p,q,1,n)


LOOKUP-OBST(e,w,root,p,q,i,j)
1. if e[i, j] < ∞
2. return e[i, j]
3. if j = i − 1
4. e[i, j] = qi−1
5. else for r = i to j
6. t = LOOKUP-OBST(e,w,root,p,q,i,r−1) + LOOKUP-OBST(e,w,root,p,q,r+1,j) + w[i, j] // t, not q, so the probability array q stays intact for the recursive calls
7. if t < e[i, j]
8. e[i, j] = t
9. root[i, j] = r
10. return e[i, j]
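The same recurrence can be memoized top-down in Python. This sketch uses functools.lru_cache in place of the explicit e table; the function names are illustrative, not from the slides.

```python
import math
from functools import lru_cache

def memoized_optimal_bst(p, q, n):
    """Top-down OBST: computes w up front, memoizes the e-recurrence."""
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        w[i][i - 1] = q[i - 1]
        for j in range(i, n + 1):
            w[i][j] = w[i][j - 1] + p[j] + q[j]

    @lru_cache(maxsize=None)
    def lookup(i, j):                    # plays the role of e[i, j]
        if j == i - 1:                   # empty subtree: only dummy key d_{i-1}
            return q[i - 1]
        best = math.inf
        for r in range(i, j + 1):
            t = lookup(i, r - 1) + lookup(r + 1, j) + w[i][j]
            if t < best:
                best = t
                root[i][j] = r
        return best

    return lookup(1, n), root
```

On the same CLRS probabilities as before, it returns the same optimal cost and root table as the bottom-up version.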


Step 4: Constructing an OBST

▶ Table e[i, j] gives the cost of the OBST for the keys ki to kj with dummy keys di−1 to dj
▶ Table root[i, j] stores the root node kr = kroot[i,j]
▶ Under kr = kroot[i,j] there are two subtrees: a left subtree with keys ki to kroot[i,j]−1 and a right subtree with keys kroot[i,j]+1 to kj
▶ The first call will be:
▶ if n ≥ 1 // at least one key
▶ r = root[1, n]
▶ print kr is the root of the OBST
▶ PRINT-OBST(root, 1, r − 1, r, ”left”)
▶ PRINT-OBST(root, r + 1, n, r, ”right”)
▶ else
▶ print d0 is the root of the OBST

PRINT-OBST(root,i,j,r,child)
1. if i ≤ j
2. c = root[i, j]
3. print kc is the ⟨child⟩ child of kr
4. PRINT-OBST(root, i, c − 1, c, ”left”) // recurse on c, the root of this subtree, not on r
5. PRINT-OBST(root, c + 1, j, c, ”right”)
Complexity of the OBST is O(n³)
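A Python sketch of the reconstruction (the helper name print_obst and the exact message format are illustrative):

```python
def print_obst(root, n):
    """Print the OBST structure encoded in the root table."""
    def walk(i, j, parent, side):
        if i > j:
            return
        c = root[i][j]                  # root of the subtree over keys i..j
        print(f"k{c} is the {side} child of k{parent}")
        walk(i, c - 1, c, "left")
        walk(c + 1, j, c, "right")
    if n >= 1:                          # at least one key
        r = root[1][n]
        print(f"k{r} is the root of the OBST")
        walk(1, r - 1, r, "left")
        walk(r + 1, n, r, "right")
    else:
        print("d0 is the root of the OBST")
```

For the CLRS example, the root table yields the tree with k2 at the root, k1 as its left child and k5 as its right child, with k4 and k3 below.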


Class Test I ADA, IT 5th Sem, Dated: 06th Sep, 2019

Even-numbered students will attempt even-numbered questions and odd-numbered students will attempt odd-numbered questions.
Q1. Prove that lg n! = O(n lg n)
Q2. Find the solution of T(n) = 3T(n/3) + lg³ n
Q3. Find the cost to multiply the matrices whose dimensions are given as 15, 5, 10, 2, 5.
Q4. Find the longest common subsequence of the two sequences < 00110101 > and < 101100110 >

Binomial Coefficient
▶ (x + a)^n can be expanded as

(x + a)^n = \binom{n}{0}x^n a^0 + \binom{n}{1}x^{n-1}a^1 + \cdots + \binom{n}{k}x^{n-k}a^k + \cdots + \binom{n}{n}x^0 a^n

▶ The coefficient of the term x^{n−k}a^k is known as the binomial coefficient C(n, k) = \binom{n}{k}
▶ C(n, k) = \frac{n!}{(n-k)!\,k!}
▶ C(n, k) = C(n − 1, k − 1) + C(n − 1, k) if n > k > 0, since

C(n-1, k-1) + C(n-1, k) = \frac{(n-1)!}{(n-k)!\,(k-1)!} + \frac{(n-1)!}{(n-k-1)!\,k!}
= \frac{(n-1)!}{(n-k)!\,(k-1)!}\cdot\frac{k}{k} + \frac{(n-1)!}{(n-k-1)!\,k!}\cdot\frac{n-k}{n-k}
= \frac{(n-1)!\,k}{(n-k)!\,k!} + \frac{(n-1)!\,(n-k)}{(n-k)!\,k!}
= \frac{(n-1)!\,(k + n - k)}{(n-k)!\,k!}
= \frac{n!}{(n-k)!\,k!}
= C(n, k)

▶ C(n, 0) = C(n, n) = 1

Computing a Binomial Coefficient

Bottom-up Approach:
There is no optimization step here; each table entry has a single fixed value (Pascal's rule)
BC(c,n,k)
1. Let c[0 . . . n, 0 . . . k]
2. for i = 0 to n
3. for j = 0 to min(i, k) // only columns 0 . . . k exist
4. if i = j or j = 0
5. c[i, j] = 1
6. else
7. c[i, j] = c[i − 1, j − 1] + c[i − 1, j]
8. return c
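A Python sketch of the bottom-up table (the function name is illustrative; only the needed k + 1 columns are allocated):

```python
def binomial(n, k):
    """C(n, k) via Pascal's rule, filling an (n+1) x (k+1) table: O(nk)."""
    c = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                c[i][j] = 1             # C(i, 0) = C(i, i) = 1
            else:
                c[i][j] = c[i - 1][j - 1] + c[i - 1][j]
    return c[n][k]

print(binomial(5, 2))   # 10
```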


Top-down Approach:
MEMOIZED-BC(c,n,k)
1. Let c[0 . . . n, 0 . . . k]
2. for i = 0 to n
3. for j = 0 to min(i, k)
4. c[i, j] = 0
5. return BC(c, n, k)
BC(c, i, j)
1. if c[i, j] > 0
2. return c[i, j]
3. if i = j or j = 0
4. return c[i, j] = 1
5. return c[i, j] = BC(c, i − 1, j − 1) + BC(c, i − 1, j)
Complexity of computing the binomial coefficient is O(nk)
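A top-down memoized sketch in Python, using functools.lru_cache in place of the pre-zeroed table (names are illustrative):

```python
from functools import lru_cache

def binomial_memo(n, k):
    """Top-down C(n, k); at most O(nk) distinct (i, j) subproblems."""
    @lru_cache(maxsize=None)
    def bc(i, j):
        if j == 0 or j == i:            # C(i, 0) = C(i, i) = 1
            return 1
        return bc(i - 1, j - 1) + bc(i - 1, j)
    return bc(n, k)

print(binomial_memo(10, 5))   # 252
```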


Thank you
Please send your feedback or any queries to
[email protected]
