ADA_Module 4_PM
Module 4 - Chapter 1
• Main idea:
- set up a recurrence relating a solution to a larger instance to solutions of some
smaller instances
- solve smaller instances once
- record solutions in a table
- extract solution to the initial instance from that table
Example 1: Coin-row problem
There is a row of n coins whose values are some positive integers c₁, c₂, ..., cₙ, not
necessarily distinct. The goal is to pick up the maximum amount of money subject to
the constraint that no two coins adjacent in the initial row can be picked up.
Let F(n) = maximum amount that can be picked up from the first n coins in the row of
coins.
To derive a recurrence for F(n), we partition all the allowed coin selections into two
groups:
those without the last coin – the max amount is F(n−1)
those with the last coin – the max amount is cₙ + F(n−2)
F(0) = 0, F(1)=c₁
F(n) = max{cₙ + F(n−2), F(n−1)} for n > 1,
F(0) = 0, F(1)=c₁
index   0   1   2   3    4    5    6
coins   –   5   1   6   10    5    2
F       0   5   5  11   15   16   17
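The recurrence translates directly into a bottom-up routine. Below is a minimal Python sketch (not part of the original slides) that fills the one-dimensional table F and reproduces the answer 17 for the row above.

def coin_row(coins):
    # F[i] = maximum amount that can be picked up from the first i coins
    n = len(coins)
    F = [0] * (n + 1)
    if n >= 1:
        F[1] = coins[0]
    for i in range(2, n + 1):
        # either take coin i together with the best of the first i-2 coins,
        # or skip it and keep the best of the first i-1 coins
        F[i] = max(coins[i - 1] + F[i - 2], F[i - 1])
    return F[n]

print(coin_row([5, 1, 6, 10, 5, 2]))   # prints 17, matching the table above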
DP Steps
F(n) = max{cₙ + F(n−2), F(n−1)} for n > 1,
F(0) = 0, F(1)=c₁
1. Recursive math function describing maximum
Example 2: Change-making problem
Given: coins of different denominations.
Question: find the minimum number of coins whose values add up to a given amount n.
Condition: there is an infinite supply of coins of each denomination.
Example 2: Change-making problem
Let F(n) be the minimum number of coins whose values add up to n; F(0) = 0.
The amount n can only be obtained by adding one coin of denomination dj to the amount n − dj, for j = 1, 2, ..., m such that n ≥ dj.
Therefore, we can consider all such denominations and select the one minimizing F(n − dj) + 1.
Since 1 is a constant, we can, of course, find the smallest F(n − dj) first and then add 1 to it.
Hence, we have the following recurrence for F(n):
F(n) = min{F(n − dj) : n ≥ dj} + 1 for n > 0,
F(0) = 0.
ALGORITHM ChangeMaking(D[1..m], n)
//Applies dynamic programming to find the minimum number of coins
//of denominations d1 < d2 < ... < dm, where d1 = 1, that add up to a given amount n
//Input: Positive integer n and array D[1..m] of increasing positive
//       integers indicating the coin denominations, where D[1] = 1
//Output: The minimum number of coins that add up to n
F[0] ← 0
for i ← 1 to n do
    temp ← ∞; j ← 1
    while j ≤ m and i ≥ D[j] do
        temp ← min(F[i − D[j]], temp)
        j ← j + 1
    F[i] ← temp + 1
return F[n]
Time Complexity: O(nm)    Space Complexity: Θ(n)
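The pseudocode above can be transcribed almost line for line into Python. The sketch below is illustrative only; the denominations [1, 3, 4] and amount 6 are an example input chosen here, not taken from the slides.

import math

def change_making(D, n):
    # F[i] = minimum number of coins of denominations D that add up to i
    # (assumes D contains 1, so every amount is reachable)
    F = [0] * (n + 1)
    for i in range(1, n + 1):
        temp = math.inf
        for d in D:
            if d <= i:
                temp = min(temp, F[i - d])
        F[i] = temp + 1
    return F[n]

print(change_making([1, 3, 4], 6))   # prints 2 (6 = 3 + 3)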
Coin-Collecting Problem
• Problem Statement: Several coins are placed in cells of an n x m board, no more than one
coin per cell. A robot, located in the upper left cell of the board, needs to collect as many of
the coins as possible and bring them to the bottom right cell. On each step, the robot can
move either one cell to the right or one cell down from its current location. When the robot
visits a cell with a coin, it always picks up that coin. Design an algorithm to find the
maximum number of coins the robot can collect and a path it needs to follow to do this.
• Solution: Let F(i, j) be the largest number of coins the robot can collect and bring to the cell
(i, j) in the ith row and jth column of the board. It can reach this cell either from the adjacent
cell (i-1, j) above it or from the adjacent cell (i, j-1) to the left of it.
• The largest numbers of coins that can be brought to these cells are F(i-1, j) and F(i, j-1),
respectively. Of course, there are no adjacent cells above the first row or to the left of the
first column; for those nonexistent neighbors, we take the corresponding F values to be 0.
• Hence, the largest number of coins the robot can bring to cell (i, j) is the maximum of the
two numbers F(i-1, j) and F(i, j-1), plus the one possible coin at cell (i, j) itself.
F (i, j ) = max{F (i − 1, j ), F (i, j − 1)} + cij for 1 ≤ i ≤ n, 1 ≤ j ≤ m
F (0, j) = 0 for 1 ≤ j ≤ m and F (i, 0) = 0 for 1 ≤ i ≤ n,
where cij = 1 if there is a coin in cell (i, j) and cij = 0 otherwise.
ALGORITHM RobotCoinCollection(C[1..n, 1..m])
//Applies dynamic programming to compute the largest number of
//coins a robot can collect on an n × m board by starting at (1, 1)
//and moving right and down from the upper left to the lower right corner
//Input: Matrix C[1..n, 1..m] whose elements are equal to 1 and 0
//       for cells with and without a coin, respectively
//Output: Largest number of coins the robot can bring to cell (n, m)
F[1, 1] ← C[1, 1]
for j ← 2 to m do
    F[1, j] ← F[1, j − 1] + C[1, j]
for i ← 2 to n do
    F[i, 1] ← F[i − 1, 1] + C[i, 1]
    for j ← 2 to m do
        F[i, j] ← max(F[i − 1, j], F[i, j − 1]) + C[i, j]
return F[n, m]
Time Complexity: Θ(nm)    Space Complexity: Θ(nm)
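A Python sketch of the same computation (using 0-based indices instead of the 1-based ones in the pseudocode); it returns the whole table F so that an optimal path can be traced back afterwards, as described on the next slide.

def robot_coin_collection(C):
    # C is an n x m list of lists of 0s and 1s; F[i][j] = largest number of
    # coins that can be brought to cell (i, j) moving only right or down
    n, m = len(C), len(C[0])
    F = [[0] * m for _ in range(n)]
    F[0][0] = C[0][0]
    for j in range(1, m):                  # first row: reachable only from the left
        F[0][j] = F[0][j - 1] + C[0][j]
    for i in range(1, n):
        F[i][0] = F[i - 1][0] + C[i][0]    # first column: reachable only from above
        for j in range(1, m):
            F[i][j] = max(F[i - 1][j], F[i][j - 1]) + C[i][j]
    return F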
Coin-Collecting Problem
Tracing back the optimal path:
• It is possible to trace the computations backwards to get an optimal path.
• If F(i-1, j) > F(i, j-1), an optimal path to cell (i, j) must come down from the adjacent cell
above it;
• If F(i-1, j) < F(i, j-1), an optimal path to cell (i, j) must come from the adjacent cell on
the left;
• If F(i-1, j) = F(i, j-1), the optimal path can reach cell (i, j) from either direction. Such ties
can be broken arbitrarily, for example by giving preference to coming from the adjacent cell above.
• If only one of F(i-1, j) and F(i, j-1) exists (i.e., for cells in the first row or the first
column), the path must come from the single available direction.
• The optimal path can be obtained in Θ(n+m) time.
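Continuing the Python sketch above, the backtracking step can be written as follows (again with 0-based indices); on ties it prefers the cell above, as suggested in the notes.

def trace_path(F, C):
    i, j = len(C) - 1, len(C[0]) - 1       # start at the bottom-right cell
    path = [(i, j)]
    while (i, j) != (0, 0):
        if i > 0 and (j == 0 or F[i - 1][j] >= F[i][j - 1]):
            i -= 1                         # came from the cell above
        else:
            j -= 1                         # came from the cell on the left
        path.append((i, j))
    return list(reversed(path))            # path from (0, 0) to (n-1, m-1)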
Warshall's Algorithm
DEFINITION
The transitive closure of a directed graph with n vertices can be defined as the n × n boolean
matrix T = {tij }, in which the element in the ith row and the j th column is 1 if there exists a
nontrivial path (i.e., directed path of a positive length) from the ith vertex to the j th vertex;
otherwise, tij is 0.
Named after Stephen Warshall, who discovered this algorithm.
Mainly used to determine the transitive closure of a directed graph, i.e., all paths in a directed
graph, using its adjacency matrix.
It checks whether there is a directed path between every pair of vertices.
Warshall's algorithm constructs the transitive closure through a series of n × n boolean
matrices: R(0), ..., R(k−1), R(k), ..., R(n).
Warshall's Algorithm
ALGORITHM Warshall(A[1..n, 1..n])
//Implements Warshall’s algorithm for computing the transitive closure
//Input: The adjacency matrix A of a digraph with n vertices
//Output: The transitive closure of the digraph
R(0) ← A
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            R(k)[i, j] ← R(k−1)[i, j] or (R(k−1)[i, k] and R(k−1)[k, j])
return R(n)
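A direct Python transcription of Warshall's algorithm, updating a single boolean matrix in place (a standard space-saving variant of keeping the full sequence R(0), ..., R(n)). The 4-vertex digraph used below is a hypothetical example, not one from the slides.

def warshall(A):
    # A is the n x n adjacency matrix (0/1) of a digraph;
    # the result is its transitive closure
    n = len(A)
    R = [row[:] for row in A]              # R(0) <- A
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

# digraph with edges a->b, b->d, d->a, d->c (vertices a, b, c, d = 0, 1, 2, 3)
A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
for row in warshall(A):
    print(row)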
Warshall's Algorithm
Applications:
Data-flow and control-flow dependency analysis, redundancy checking, and inheritance testing in
object-oriented software.
Recursive solution?
What is the smaller problem?
How is the solution to a smaller problem used in the solution to a larger one?
Table?
Order to solve?
Initial conditions?
Example: Knapsack of capacity W = 5
item   weight   value
 1       2       $12
 2       1       $10
 3       3       $20
 4       2       $15

F[i, j] (rows: first i items considered; columns: capacity j):

                          j = 0   1   2   3   4   5
i = 0                         0   0   0   0   0   0
i = 1 (w1 = 2, v1 = 12)       0   0  12  12  12  12
i = 2 (w2 = 1, v2 = 10)       0  10  12  22  22  22
i = 3 (w3 = 3, v3 = 20)       0  10  12  22  30  32
i = 4 (w4 = 2, v4 = 15)       0  10  15  25  30  37

The maximum value is F[4, 5] = 37.
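The table above is filled row by row using the standard 0/1 knapsack recurrence F(i, j) = max{F(i−1, j), vi + F(i−1, j − wi)} when j ≥ wi, and F(i, j) = F(i−1, j) otherwise, with F(0, j) = F(i, 0) = 0. A minimal Python sketch that reproduces it:

def knapsack(weights, values, W):
    # F[i][j] = best total value using the first i items with capacity j
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            F[i][j] = F[i - 1][j]                      # item i not taken
            if weights[i - 1] <= j:                    # item i taken, if it fits
                F[i][j] = max(F[i][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
    return F

F = knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5)
print(F[4][5])   # prints 37, matching the table above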
Knapsack Problem by DP
Given n items of
integer weights: w1, w2, ..., wn
values: v1, v2, ..., vn
and a knapsack of integer capacity W,
find the most valuable subset of the items that fits into the knapsack.
Recursive solution?
What is the smaller problem?
How is the solution to a smaller problem used in the solution to a larger one?
Table?
Order to solve?
Initial conditions?
Divide and Conquer vs. Greedy Method
• Divide and conquer is used to obtain a solution to the given problem; it does not aim for the
optimal solution. The greedy method is used to obtain an optimal solution to the given problem.
• In divide and conquer, the problem is divided into small subproblems that are solved
independently; finally, the subproblem solutions are combined to obtain the solution to the given
problem. In the greedy method, a set of feasible solutions is generated, and one feasible solution
is picked as the optimal solution.
• Divide and conquer is less efficient and slower because it is recursive in nature. The greedy
method is comparatively more efficient and faster, as it is iterative in nature.
• Divide and conquer algorithms mostly run in polynomial time. Greedy algorithms also run in
polynomial time but take less time than divide and conquer.
Minimum Spanning Tree (MST)
[Figure: a weighted graph on vertices a, b, c, d (edge weights 1, 2, 3, 4, 6) and two of its spanning trees, one of COST = 11 and one of COST = 6; the tree of cost 6 is the minimum spanning tree.]
Minimum Cost Spanning Tree Algorithms
• Prim’s algorithm
• Kruskal’s algorithm
MST – Prim's algorithm
Efficiency
The time efficiency of Prim's algorithm depends on the data structures used for
implementing the priority queue and for representing the input graph.
With the weight-matrix representation and an unordered array for the priority queue
(as implemented here), the efficiency is O(|V|²).
With adjacency lists and a min-heap for the priority queue,
the efficiency is O(|E| log |V|).
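As a concrete illustration of the heap-based variant whose O(|E| log |V|) cost is quoted above, here is a minimal Python sketch (not taken from the slides); the adjacency-list format, a dict mapping each vertex to a list of (neighbor, weight) pairs, is an assumption of this sketch.

import heapq

def prim(graph, start):
    # graph: dict vertex -> list of (neighbor, weight); must be connected
    visited = {start}
    heap = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(heap)
    mst_edges, total = [], 0
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)      # cheapest edge leaving the tree so far
        if v in visited:
            continue                       # stale entry: v was added meanwhile
        visited.add(v)
        mst_edges.append((u, v, w))
        total += w
        for x, wx in graph[v]:
            if x not in visited:
                heapq.heappush(heap, (wx, v, x))
    return mst_edges, total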
Kruskal’s algorithm
• Kruskal's algorithm finds the MST of a weighted connected graph
G = <V, E> as an acyclic subgraph with |V| − 1 edges.
• The sum of the weights of these edges should be minimum.
• The algorithm begins by sorting the graph's edges in increasing
order of their weights.
• Then it scans this sorted list, starting with the empty subgraph, and
adds the next edge on the list to the current subgraph; if such an
inclusion would create a cycle, that edge is simply skipped.
Time complexity
The crucial check of whether two vertices belong to the same tree can be done
efficiently using union-find algorithms.
With an efficient sorting algorithm, the running time of Kruskal's algorithm is O(|E| log |E|).
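A sketch of Kruskal's algorithm with a simple union-find (path compression only), to make the cycle check concrete. The edge-list format (weight, u, v) with vertices numbered 0..n−1 is an assumption of this sketch.

def kruskal(n, edges):
    # edges: list of (weight, u, v) for an undirected graph on vertices 0..n-1
    parent = list(range(n))

    def find(x):                           # root of x's tree, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst_edges = []
    for w, u, v in sorted(edges):          # scan edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                       # different trees: safe to add the edge
            parent[ru] = rv
            mst_edges.append((u, v, w))
        # otherwise adding (u, v) would create a cycle, so it is skipped
    return mst_edges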
[Figure: Example – a trace of Dijkstra's algorithm on a 5-vertex graph (vertices 1–5). Each slide shows the array S of visited vertices and the array d of current distances after one more vertex is selected; the final slide gives the shortest distances from node 1 to all the other nodes.]
Example 2: Dijkstra's algorithm on the graph with vertices a, b, c, d, e and edge weights
a–b = 3, a–d = 7, b–c = 4, b–d = 2, c–e = 6, d–e = 4, starting from vertex a.

Tree vertices      Remaining vertices
a(−, 0)            b(a, 3)    c(−, ∞)    d(a, 7)    e(−, ∞)
b(a, 3)            c(b, 3+4)  d(b, 3+2)  e(−, ∞)
d(b, 5)            c(b, 7)    e(d, 5+4)
c(b, 7)            e(d, 9)
e(d, 9)

The shortest paths from a are: a–b (length 3), a–b–d (length 5), a–b–c (length 7), and
a–b–d–e (length 9).
Dijkstra’s algorithm
ALGORITHM Dijkstra(G, s)
//Dijkstra's algorithm for single-source shortest paths
//Input: A weighted connected graph G = <V, E> with nonnegative weights and its vertex s
//Output: The length dv of a shortest path from s to v
//        and its penultimate vertex pv for every vertex v in V
Initialize(Q)                         //initialize priority queue to empty
for every vertex v in V do
    dv ← ∞; pv ← null
    Insert(Q, v, dv)                  //initialize vertex priority in the priority queue
ds ← 0; Decrease(Q, s, ds)            //update priority of s with ds
VT ← ∅
for i ← 0 to |V| − 1 do
    u∗ ← DeleteMin(Q)                 //delete the minimum priority element
    VT ← VT ∪ {u∗}
    for every vertex u in V − VT that is adjacent to u∗ do
        if du∗ + w(u∗, u) < du
            du ← du∗ + w(u∗, u); pu ← u∗
            Decrease(Q, u, du)
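Below is a Python sketch of Dijkstra's algorithm using heapq. Instead of the Decrease operation in the pseudocode, it pushes a new queue entry and skips outdated ones, which is the usual idiom with Python's binary heap. The input graph is the one from Example 2 above, so the printed distances can be checked against that trace.

import heapq, math

def dijkstra(graph, s):
    # graph: dict vertex -> list of (neighbor, weight), nonnegative weights
    # returns dist[v] (shortest distance from s) and prev[v] (penultimate vertex)
    dist = {v: math.inf for v in graph}
    prev = {v: None for v in graph}
    dist[s] = 0
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                       # outdated queue entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, prev

g = {'a': [('b', 3), ('d', 7)],
     'b': [('a', 3), ('c', 4), ('d', 2)],
     'c': [('b', 4), ('e', 6)],
     'd': [('a', 7), ('b', 2), ('e', 4)],
     'e': [('c', 6), ('d', 4)]}
print(dijkstra(g, 'a')[0])   # {'a': 0, 'b': 3, 'c': 7, 'd': 5, 'e': 9}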
Key points on Dijkstra’s algorithm
Doesn't work for graphs with negative weights (whereas
Floyd's algorithm does, as long as the graph has no cycle of negative total weight).
Applicable to both undirected and directed graphs.
Efficiency: O(|V|²) for graphs represented by a weight matrix
and an array implementation of the priority queue;
O(|E| log |V|) for graphs represented by adjacency lists and a
min-heap implementation of the priority queue.
Encoding messages
Encode a message composed of a string of characters
• Codes used by computer systems
– ASCII
• uses 8 bits per character
• can encode 256 characters
– Unicode
• 16 bits per character
• can encode 65536 characters
• includes all characters encoded by ASCII
• ASCII and Unicode are fixed-length codes
– all characters represented by same number of bits
Problems
• Suppose that we want to encode a message constructed from the
symbols A, B, C, D, and E using a fixed-length code
– How many bits are required to encode each symbol?
at least 3 bits are required
2 bits are not enough (can only encode four symbols)
How many bits are required to encode the message DEAACAAAAABA?
there are twelve symbols, each requires 3 bits
12*3 = 36 bits are required
Drawbacks of fixed-length codes
• Wasted space
– Unicode uses twice as much space as ASCII
• inefficient for plain-text messages containing only ASCII characters
• Same number of bits used to represent all characters
– ‘a’ and ‘e’ occur more frequently than ‘q’ and ‘z’
Three variable-length codes for the five symbols, and the encoding of DEAACAAAAABA under each:

Code 1: A = 0, B = 10, C = 110, D = 1110, E = 11110
        → 1110111100011000000100 (22 bits)
Code 2: A = 0, B = 100, C = 101, D = 1101, E = 1111
        → 1101111100101000001000 (22 bits)
Code 3: A = 0, B = 100, C = 101, D = 110, E = 111
        → 11011100101000001000 (20 bits)
What code to use?
• Question: Is there a variable-length code that makes the
most efficient use of space?
Answer: Yes!
Huffman coding tree
• Binary tree
– each leaf contains symbol (character)
– label edge from node to left child with 0
– label edge from node to right child with 1
• Code for any symbol obtained by following path from root to the leaf containing
symbol
• Code has prefix property
– leaf node cannot appear on path to another leaf
– note: fixed-length codes are represented by a complete Huffman tree and
clearly have the prefix property
Building a Huffman tree
• Find frequencies of each symbol occurring in message
• Begin with a forest of single node trees
– each contains a symbol and its frequency
• Do recursively
– select two trees with smallest frequency at the root
– produce a new binary tree with the selected trees as children and
store the sum of their frequencies in the root
• Recursion ends when there is one tree
– this is the Huffman coding tree
• Build the Huffman coding tree for the message
This is his message
• Character frequencies
A G M T E H _ I S
1 1 1 1 2 2 3 3 5
• Begin with forest of single trees
1 1 1 1 2 2 3 3 5
A G M T E H _ I S
Building the tree (Steps 1–8): the two trees of smallest weight are merged repeatedly:
A + G → 2, M + T → 2, the two weight-2 trees → 4, E + H → 4, _ + I → 6,
the two weight-4 trees → 8, S (5) + the weight-6 tree → 11,
and finally the weight-8 and weight-11 trees → 19, the complete Huffman tree.
Label edges: in the finished tree, each edge to a left child is labeled 0 and each edge to a
right child is labeled 1. Reading the labels along the path from the root to each leaf gives the
codes listed on the next slide.
Huffman code & encoded message

Symbol:  S    E    H    _    I    A     G     M     T
Code:    11   010  011  100  101  0000  0001  0010  0011

This is his message
Encoded: 00110111011110010111100011101111000010010111100000001010 (56 bits)
Huffman’s algorithm
Step 1 Initialize n one-node trees and label them with the symbols of the
alphabet given. Record the frequency of each symbol in its tree’s root
to indicate the tree’s weight. (More generally, the weight of a tree will
be equal to the sum of the frequencies in the tree’s leaves.)
Step 2 Repeat the following operation until a single tree is obtained. Find
two trees with the smallest weight (ties can be broken arbitrarily, but
see Problem 2 in this section’s exercises). Make them the left and right
subtree of a new tree and record the sum of their weights in the root
of the new tree as its weight.
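A compact Python sketch of Huffman's algorithm using heapq. The insertion-order tie-breaker is an implementation detail, so the individual codes it produces may differ from the ones shown earlier, but the total encoded length of the message is the same.

import heapq
from collections import Counter

def huffman_codes(message):
    freq = Counter(message)
    # forest of one-node trees: entries are (weight, tie_breaker, tree);
    # a leaf is a 1-tuple (symbol,), an internal node is a pair (left, right)
    heap = [(f, i, (sym,)) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)    # two trees of smallest weight
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tick, (t1, t2)))
        tick += 1
    codes = {}

    def walk(tree, prefix):
        if len(tree) == 1:                 # leaf: record its code
            codes[tree[0]] = prefix or "0"
        else:
            walk(tree[0], prefix + "0")    # edge to left child labeled 0
            walk(tree[1], prefix + "1")    # edge to right child labeled 1

    walk(heap[0][2], "")
    return codes

msg = "this is his message"
codes = huffman_codes(msg)
print(codes)
print(sum(len(codes[c]) for c in msg))     # 56 bits, as in the encoding above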