Lecture 13 Slides
Last time
Dynamic Programming!
• Not coding in an action movie.
Today
• Examples of dynamic programming:
1. Longest common subsequence
2. Knapsack problem
• Two versions!
3. Independent sets in trees
• If we have time…
• (If not, the slides will be there as a reference.)
Longest Common Subsequence
• How similar are these two species?
Species 1 DNA: AGCCCTAAGGGCTACCTAGCTT
Species 2 DNA: GACAGCCTACAAGCGTTAGCTTG
Longest Common Subsequence
• How similar are these two species?
Species 1 DNA: AGCCCTAAGGGCTACCTAGCTT
Species 2 DNA: GACAGCCTACAAGCGTTAGCTTG
• Pretty similar: their DNA has a long common subsequence,
AGCCTAAGCTTAGCTT
Longest Common Subsequence
• Subsequence: a sequence obtained from another by deleting zero or more characters, keeping the rest in order.
• BDFH is a subsequence of ABCDEFGH
• If X and Y are sequences, a common subsequence
is a sequence which is a subsequence of both.
• BDFH is a common subsequence of ABCDEFGH and of
ABDFGHI
• A longest common subsequence…
• …is a common subsequence that is longest.
• The longest common subsequence of ABCDEFGH and
ABDFGHI is ABDFGH.
We sometimes want to find these
• Applications in bioinformatics
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the length
of the longest common subsequence.
• Step 3: Use dynamic programming to find the
length of the longest common subsequence.
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual LCS.
• Step 5: If needed, code this up like a reasonable
person.
Step 1: Optimal substructure
• Look at prefixes: let Xi be the first i characters of X, and Yj the first j characters of Y.
• Let C[i,j] be the length of the longest common subsequence of Xi and Yj.
• Case 0: i = 0 or j = 0, so one of the prefixes is empty.
• Case 1: X[i] = Y[j], so the two prefixes end in the same character.
• Case 2: X[i] ≠ Y[j], so the two prefixes end in different characters.

C[i,j] = 0                            if i = 0 or j = 0
C[i,j] = C[i-1,j-1] + 1               if X[i] = Y[j] and i,j > 0
C[i,j] = max{ C[i,j-1], C[i-1,j] }    if X[i] ≠ Y[j] and i,j > 0
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the length
of the longest common subsequence.
• Step 3: Use dynamic programming to find the
length of the longest common subsequence.
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual LCS.
• Step 5: If needed, code this up like a reasonable
person.
LCS DP
• LCS(X, Y):
  • C[i,0] = C[0,j] = 0 for all i = 0,…,m and j = 0,…,n
  • For i = 1,…,m and j = 1,…,n:
    • If X[i] = Y[j]:
      • C[i,j] = C[i-1,j-1] + 1
    • Else:
      • C[i,j] = max{ C[i,j-1], C[i-1,j] }
  • Return C[m,n]
C[i,j] = 0                            if i = 0 or j = 0
C[i,j] = C[i-1,j-1] + 1               if X[i] = Y[j] and i,j > 0
C[i,j] = max{ C[i,j-1], C[i-1,j] }    if X[i] ≠ Y[j] and i,j > 0
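A minimal Python sketch of this table-filling step; the function name and the list-of-lists table are my own choices rather than anything from the slides.

def lcs_length(X, Y):
    """Length of a longest common subsequence of X and Y, in O(mn) time."""
    m, n = len(X), len(Y)
    # C[i][j] = length of an LCS of the first i characters of X and the first j of Y.
    C = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:      # X[i] = Y[j] in the slides' 1-indexed notation
                C[i][j] = C[i - 1][j - 1] + 1
            else:
                C[i][j] = max(C[i][j - 1], C[i - 1][j])
    return C[m][n]

# lcs_length("ACGGA", "ACTG") == 3, matching the example that follows.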
Example
• X = ACGGA (rows), Y = ACTG (columns).
• Start by filling in the base cases C[i,0] = C[0,j] = 0:

        A  C  T  G
     0  0  0  0  0
  A  0
  C  0
  G  0
  G  0
  A  0
Example
• Filling in the rest of the table with the recurrence gives:

        A  C  T  G
     0  0  0  0  0
  A  0  1  1  1  1
  C  0  1  2  2  2
  G  0  1  2  2  3
  G  0  1  2  2  3
  A  0  1  2  2  3

• So the LCS of X and Y has length 3.
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the length
of the longest common subsequence.
• Step 3: Use dynamic programming to find the
length of the longest common subsequence.
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual LCS.
• Step 5: If needed, code this up like a reasonable
person.
Example
• X = ACGGA, Y = ACTG, with the table filled in as before:

        A  C  T  G
     0  0  0  0  0
  A  0  1  1  1  1
  C  0  1  2  2  2
  G  0  1  2  2  3
  G  0  1  2  2  3
  A  0  1  2  2  3

• Once we’ve filled this in, we can work backwards from the bottom-right entry.
• A diagonal jump means that we found an element of the LCS!
• Sometimes an entry could have come from more than one neighbor (a 2 may as well have come from another 2), and either choice works.
• Working backwards through this table finds G, then C, then A.
• This is the LCS: ACG.
Finding an LCS
• Good exercise to write out pseudocode for what we
just saw!
• Or you can find it in lecture notes.
• Takes time O(mn) to fill the table
• Takes time O(n + m) on top of that to recover the LCS
• We walk up and left in an n-by-m array
• We can only do that for n + m steps.
• Altogether, we can find LCS(X,Y) in time O(mn).
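As one possible answer to the exercise above (a sketch, not the pseudocode from the lecture notes), here is a Python function that fills the table exactly as before and then walks up and to the left to recover an actual LCS.

def lcs(X, Y):
    """Return one longest common subsequence of X and Y."""
    m, n = len(X), len(Y)
    C = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                C[i][j] = C[i - 1][j - 1] + 1
            else:
                C[i][j] = max(C[i][j - 1], C[i - 1][j])
    # Walk backwards from C[m][n]; a diagonal step means we found an LCS element.
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif C[i - 1][j] >= C[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

# lcs("ACGGA", "ACTG") returns "ACG", as in the worked example.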
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the length
of the longest common subsequence.
• Step 3: Use dynamic programming to find the
length of the longest common subsequence.
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual LCS.
• Step 5: If needed, code this up like a reasonable
person.
Our approach actually isn’t so bad
Example 2: Knapsack Problem
• We have n items with weights and values:
Weight:   6    2    4    3   11
Value:   20    8   14   13   35
Weight:   6    2    4    3   11
Value:   20    8   14   13   35
Capacity: 10
• Unbounded Knapsack:
• Suppose I have infinite copies of all items.
• What’s the most valuable way to fill the knapsack?
Total weight: 10
Total value: 42
• 0/1 Knapsack:
• Suppose I have only one copy of each item.
• What’s the most valuable way to fill the knapsack?
Total weight: 9
Total value: 35
Some notation
Weight:   w1   w2   w3   …   wn
Value:    v1   v2   v3   …   vn
Capacity: W
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the value of
the optimal solution.
• Step 3: Use dynamic programming to find the value
of the optimal solution.
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual solution.
• Step 5: If needed, code this up like a reasonable
person.
Optimal substructure
• Sub-problems:
• Unbounded Knapsack with a smaller knapsack.
• K[x] = the maximum value you can fit in a knapsack of capacity x.
• Suppose an optimal solution for capacity x has total value V and uses a copy of item i, which has weight wi and value vi.
• Then removing that copy fills a knapsack of capacity x - wi with value V - vi, and this is optimal for capacity x - wi. Why?
• 1 minute think, (wait) 1 minute share
Optimal substructure
• Since an optimal solution that uses a copy of item i contains an optimal solution for the smaller knapsack, we get the recurrence

K[x] = max_i { K[x - wi] + vi }

where the maximum is over all i with wi ≤ x, K[x - wi] is the optimal way to fill the smaller knapsack, and vi is the value of item i. (If no item fits, K[x] = 0.)
• Open problem!
• (But probably the answer is no… otherwise P = NP)
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the value of
the optimal solution.
• Step 3: Use dynamic programming to find the value
of the optimal solution.
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual solution.
• Step 5: If needed, code this up like a reasonable
person.
Let’s write a bottom-up DP algorithm
• UnboundedKnapsack(W, n, weights, values):
  • K[0] = 0
  • for x = 1, …, W:
    • K[x] = 0
    • for i = 1, …, n:
      • if wi ≤ x:
        • K[x] = max{ K[x], K[x - wi] + vi }
  • return K[W]

This fills in the recurrence K[x] = max_i { K[x - wi] + vi } from the smallest capacity up to W.
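A direct Python translation of this pseudocode, as a sketch; the function name is mine and the items are given as 0-indexed lists.

def unbounded_knapsack(W, weights, values):
    """Maximum value with capacity W and unlimited copies of each item; O(nW) time."""
    K = [0] * (W + 1)
    for x in range(1, W + 1):
        for w, v in zip(weights, values):
            if w <= x:
                K[x] = max(K[x], K[x - w] + v)
    return K[W]

# unbounded_knapsack(4, [1, 2, 3], [1, 4, 6]) == 8, matching the example that follows.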
Example
• To recover which items to use, keep a second array ITEMS alongside K:
• UnboundedKnapsack(W, n, weights, values):
  • K[0] = 0
  • ITEMS[0] = ∅
  • for x = 1, …, W:
    • K[x] = 0
    • for i = 1, …, n:
      • if wi ≤ x:
        • K[x] = max{ K[x], K[x - wi] + vi }
        • If K[x] was updated:
          • ITEMS[x] = ITEMS[x - wi] ∪ { item i }
  • return ITEMS[W]

• Items: weights 1, 2, 3 with values 1, 4, 6. Capacity: 4.
• Filling in K[0..4] and ITEMS[0..4]:
  • x = 1: K[1] = 1, with ITEMS[1] = ITEMS[0] ∪ { item 1 }.
  • x = 2: first K[2] = 2 with ITEMS[2] = ITEMS[1] ∪ { item 1 }, then updated to K[2] = 4 with ITEMS[2] = ITEMS[0] ∪ { item 2 }.
  • x = 3: first K[3] = 5 with ITEMS[3] = ITEMS[2] ∪ { item 1 }, then updated to K[3] = 6 with ITEMS[3] = ITEMS[0] ∪ { item 3 }.
  • x = 4: first K[4] = 7 with ITEMS[4] = ITEMS[3] ∪ { item 1 }, then updated to K[4] = 8 with ITEMS[4] = ITEMS[2] ∪ { item 2 }.
• Final: K = [0, 1, 4, 6, 8], and ITEMS[4] is two copies of item 2, for total value 8.
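A Python sketch of the same ITEMS bookkeeping; storing a full list of chosen items for every capacity, as below, is simple but uses more memory than strictly necessary.

def unbounded_knapsack_items(W, weights, values):
    """Return (best value, list of 0-indexed item choices) for capacity W with unlimited copies."""
    K = [0] * (W + 1)
    ITEMS = [[] for _ in range(W + 1)]
    for x in range(1, W + 1):
        for i, (w, v) in enumerate(zip(weights, values)):
            if w <= x and K[x - w] + v > K[x]:
                K[x] = K[x - w] + v
                ITEMS[x] = ITEMS[x - w] + [i]   # the smaller solution, plus one copy of item i
    return K[W], ITEMS[W]

# unbounded_knapsack_items(4, [1, 2, 3], [1, 4, 6]) == (8, [1, 1]):
# two copies of the weight-2 item, as in the walkthrough above.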
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the value of
the optimal solution.
• Step 3: Use dynamic programming to find the value
of the optimal solution.
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual solution.
• Step 5: If needed, code this up like a reasonable
person.
What have we learned?
• We can solve unbounded knapsack in time O(nW).
• If there are n items and our knapsack has capacity W.
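As a quick check, the unbounded_knapsack sketch from above reproduces the value from the opening example.

# Items with weights 6, 2, 4, 3, 11 and values 20, 8, 14, 13, 35; capacity 10.
unbounded_knapsack(10, [6, 2, 4, 3, 11], [20, 8, 14, 13, 35])   # returns 42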
Weight:   6    2    4    3   11
Value:   20    8   14   13   35
Capacity: 10
• Unbounded Knapsack:
• Suppose I have infinite copies of all of the items.
• What’s the most valuable way to fill the knapsack?
Total weight: 10
Total value: 42
• 0/1 Knapsack:
• Suppose I have only one copy of each item.
• What’s the most valuable way to fill the knapsack?
Total weight: 9
Total value: 35
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the value of
the optimal solution.
• Step 3: Use dynamic programming to find the value
of the optimal solution.
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual solution.
• Step 5: If needed, code this up like a reasonable
person.
Optimal substructure: try 1
• Sub-problems:
• Unbounded Knapsack with a smaller knapsack.
• Problem: in 0/1 knapsack, once an item has been used it can’t be used again, so a smaller knapsack alone doesn’t tell us which items are still available.
• (“I can’t use any turtles…”)
Optimal substructure: try 2
• Sub-problems:
• 0/1 Knapsack with fewer items.
Our sub-problems:
• Indexed by a capacity x and a number of items j:
• fill a knapsack of capacity x using only the first j items.
Two cases
• Consider an optimal solution of value V for capacity x using only the first j items.
• Case 1: item j is not in that solution.
  • Then it is also an optimal solution for capacity x using only the first j-1 items.
• Case 2: item j (weight wj, value vj) is in that solution.
  • Then removing item j gives an optimal solution of value V - vj for capacity x - wj using only the first j-1 items.
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the value of
the optimal solution.
• Step 3: Use dynamic programming to find the value
of the optimal solution.
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual solution.
• Step 5: If needed, code this up like a reasonable
person.
Recursive relationship
• Let K[x,j] be the optimal value for:
  • capacity x,
  • using only the first j items.
• Then K[x,j] = max{ K[x, j-1], K[x - wj, j-1] + vj }, where the second option is allowed only if wj ≤ x, and K[x,0] = K[0,j] = 0.
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the value of
the optimal solution.
• Step 3: Use dynamic programming to find the value
of the optimal solution.
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual solution.
• Step 5: If needed, code this up like a reasonable
person.
Bottom-up DP algorithm
• Zero-One-Knapsack(W, n, w, v):
  • K[x,0] = 0 for all x = 0,…,W
  • K[0,j] = 0 for all j = 0,…,n
  • for x = 1,…,W:
    • for j = 1,…,n:
      • K[x,j] = K[x, j-1]                               (Case 1: don’t use item j)
      • if wj ≤ x:
        • K[x,j] = max{ K[x,j], K[x - wj, j-1] + vj }     (Case 2: use item j)
  • return K[W,n]
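A Python sketch of this algorithm; the name is mine, and item j of the slides corresponds to weights[j-1] and values[j-1] in the 0-indexed lists.

def zero_one_knapsack(W, weights, values):
    """Maximum value with capacity W, using each item at most once; O(nW) time."""
    n = len(weights)
    # K[x][j] = best value with capacity x using only the first j items.
    K = [[0] * (n + 1) for _ in range(W + 1)]
    for x in range(1, W + 1):
        for j in range(1, n + 1):
            K[x][j] = K[x][j - 1]                          # Case 1: don't use item j
            if weights[j - 1] <= x:                        # Case 2: use item j
                K[x][j] = max(K[x][j], K[x - weights[j - 1]][j - 1] + values[j - 1])
    return K[W][n]

# zero_one_knapsack(3, [1, 2, 3], [1, 4, 6]) == 6, matching the example that follows.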
Example
• Items: weights 1, 2, 3 with values 1, 4, 6. Capacity: 3.
• We fill in a table with columns x = 0,…,3 and rows j = 0,…,3, comparing each current entry to the relevant previous entries.
• The algorithm sweeps column by column (x = 1,…,W), and within each column row by row (j = 1,…,n). The finished table:

        x=0  x=1  x=2  x=3
  j=0    0    0    0    0
  j=1    0    1    1    1
  j=2    0    1    4    5
  j=3    0    1    4    6

• For example, K[2,2] is first set to K[2,1] = 1 (Case 1) and then updated to K[0,1] + 4 = 4 (Case 2); similarly K[3,2] is updated from 1 to 5, and K[3,3] from 5 to 6.
• So the optimal solution is to put one watermelon in your knapsack! (That is, just the weight-3, value-6 item, for total value 6.)
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the value of
the optimal solution.
• Step 3: Use dynamic programming to find the value
of the optimal solution.
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual solution.
• Step 5: If needed, code this up like a reasonable person.
• You do this one! (We did it on the slide in the previous example, just not in the pseudocode!)
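One way to carry out Step 4 for 0/1 knapsack, as a sketch rather than an official solution: fill the table as in Step 3, then walk back from K[W,n]; whenever K[x,j] differs from K[x,j-1], item j must have been used.

def zero_one_knapsack_items(W, weights, values):
    """Return (best value, list of 0-indexed chosen items) for the 0/1 knapsack."""
    n = len(weights)
    K = [[0] * (n + 1) for _ in range(W + 1)]
    for x in range(1, W + 1):
        for j in range(1, n + 1):
            K[x][j] = K[x][j - 1]
            if weights[j - 1] <= x:
                K[x][j] = max(K[x][j], K[x - weights[j - 1]][j - 1] + values[j - 1])
    chosen = []
    x = W
    for j in range(n, 0, -1):
        if K[x][j] != K[x][j - 1]:      # item j was needed to reach this value
            chosen.append(j - 1)        # record its 0-based index
            x -= weights[j - 1]
    return K[W][n], chosen

# zero_one_knapsack_items(10, [6, 2, 4, 3, 11], [20, 8, 14, 13, 35]) == (35, [3, 2, 1]):
# the weight-3, weight-4, and weight-2 items, as on the earlier slide (total weight 9, value 35).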
What have we learned?
• We can solve 0/1 knapsack in time O(nW).
• If there are n items and our knapsack has capacity W.
Question
• How did we know which substructure to use in which variant of knapsack? Answer in retrospect:
• In unbounded knapsack we can reuse items, so the only thing a sub-problem needs to remember is the remaining capacity: shrink the knapsack.
• In 0/1 knapsack, we can only use each item once, so it makes sense to leave out one item at a time.
Example 3: Independent sets in trees
• An independent set is a set of vertices so that no pair has an edge between them.
• Given a graph with weights on the vertices…
• What is the independent set with the largest weight?
• (The figure shows a small example graph with vertex weights 1, 2, 3, 5, 1.)
Actually, this problem is NP-complete.
So, we are unlikely to find an efficient algorithm.
• But if we also assume that the graph is a tree…
• A tree is a connected graph with no cycles.
• (The figure shows an example tree with vertex weights 2, 3, 3, 5, 2, 3, 1, 5, 2, 5.)
Problem:
find a maximal independent set in a tree (with vertex weights).
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the value of
the optimal solution
• Step 3: Use dynamic programming to find the value
of the optimal solution
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual solution.
• Step 5: If needed, code this up like a reasonable
person.
Optimal substructure
• Subtrees are a natural candidate.
• There are two cases:
1. The root of this tree is not in a
maximal independent set.
2. Or it is.
Case 1:
the root is not in a maximal independent set
• Use the optimal solutions for the subtrees rooted at the root’s children.
Case 2:
the root is in a maximal independent set
• Then its children can’t be.
• Below that, use the optimal solutions for the subtrees rooted at the root’s grandchildren.
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the value of
the optimal solution.
• Step 3: Use dynamic programming to find the value
of the optimal solution
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual solution.
• Step 5: If needed, code this up like a reasonable
person.
Recursive formulation: try 1
• Let A[u] be the weight of a maximal independent set in the tree rooted at u.

A[u] = max{ Σ_{v ∈ u.children} A[v] ,  weight(u) + Σ_{v ∈ u.grandchildren} A[v] }

Recursive formulation: try 2
• Also keep track of B[u], the weight of a maximal independent set in the tree rooted at u that does not include u. Then:

A[u] = max{ Σ_{v ∈ u.children} A[v] ,  weight(u) + Σ_{v ∈ u.children} B[v] }
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the value of
the optimal solution.
• Step 3: Use dynamic programming to find the value
of the optimal solution.
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual solution.
• Step 5: If needed, code this up like a reasonable
person.
A top-down DP algorithm
• MIS_subtree(u):
  • if u is a leaf:
    • A[u] = weight(u)
    • B[u] = 0
  • else:
    • for v in u.children:
      • MIS_subtree(v)
    • A[u] = max{ Σ_{v ∈ u.children} A[v] ,  weight(u) + Σ_{v ∈ u.children} B[v] }
    • B[u] = Σ_{v ∈ u.children} A[v]
• MIS(T):
  • MIS_subtree(T.root)
  • return A[T.root]

Running time?
• We visit each vertex once, and for every vertex we do O(1) work:
  • make a recursive call
  • participate in the summations at its parent node
• Running time is O(|V|)
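A Python sketch of this algorithm; the Node class and the dictionaries keyed by node are illustrative choices rather than anything from the slides, and a very deep tree would need an explicit stack instead of recursion.

class Node:
    def __init__(self, weight, children=None):
        self.weight = weight
        self.children = children or []

def mis_weight(root):
    """Weight of a maximum-weight independent set in the tree rooted at root; O(|V|) time."""
    A = {}  # A[u] = best weight achievable in u's subtree
    B = {}  # B[u] = best weight achievable in u's subtree when u itself is excluded

    def mis_subtree(u):
        if not u.children:                 # u is a leaf
            A[u], B[u] = u.weight, 0
            return
        for v in u.children:
            mis_subtree(v)
        B[u] = sum(A[v] for v in u.children)
        A[u] = max(B[u], u.weight + sum(B[v] for v in u.children))

    mis_subtree(root)
    return A[root]

# Example: a root of weight 1 with two leaf children of weights 5 and 3 gives 8.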
Why is this different from divide-and-conquer?
That’s always worked for us with tree problems before…
• MIS_subtree(u):
  • if u is a leaf:
    • return weight(u)
  • else:
    • return max{ Σ_{v ∈ u.children} MIS_subtree(v) ,  weight(u) + Σ_{v ∈ u.grandchildren} MIS_subtree(v) }
• MIS(T):
  • return MIS_subtree(T.root)
Why is this different from divide-and-conquer?
That’s always worked for us with tree problems before…
• Because the sub-problems overlap: each subtree is needed both by its parent (as a child) and by its grandparent (as a grandchild), so the plain recursion solves the same subtree over and over and the running time blows up.
• The dynamic programming version stores A[u] and B[u], so each subtree is solved only once.
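To see the difference concretely, here is the plain recursion next to a memoized version, reusing the illustrative Node class from the sketch above; on a long path the first takes roughly exponential time while the second stays linear.

def mis_naive(u):
    """Plain divide-and-conquer on the try-1 recurrence: re-solves subtrees many times."""
    if not u.children:
        return u.weight
    grandchildren = [g for v in u.children for g in v.children]
    return max(sum(mis_naive(v) for v in u.children),
               u.weight + sum(mis_naive(g) for g in grandchildren))

def mis_memo(u, cache=None):
    """Same recurrence, but each subtree's answer is computed only once."""
    if cache is None:
        cache = {}
    if u not in cache:
        if not u.children:
            cache[u] = u.weight
        else:
            grandchildren = [g for v in u.children for g in v.children]
            cache[u] = max(sum(mis_memo(v, cache) for v in u.children),
                           u.weight + sum(mis_memo(g, cache) for g in grandchildren))
    return cache[u]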
Recap
• Today we saw examples of how to come up with
dynamic programming algorithms.
• Longest Common Subsequence
• Knapsack two ways
• (If time) maximal independent set in trees.
• There is a recipe for dynamic programming
algorithms.
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the value of
the optimal solution.
• Step 3: Use dynamic programming to find the value
of the optimal solution.
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual solution.
• Step 5: If needed, code this up like a reasonable
person.
Recap
• Today we saw examples of how to come up with
dynamic programming algorithms.
• Longest Common Subsequence
• Knapsack two ways
• (If time) maximal independent set in trees.
• There is a recipe for dynamic programming
algorithms.
• Sometimes coming up with the right substructure
takes some creativity
• Practice on homework! ☺
• For even more practice check out additional
examples/practice problems in CLRS or section!
Next time
• Greedy algorithms!