Module 4 ADA
Dynamic Programming:
3 Basic Examples
Example 1: Coin-Row Problem
Problem Statement
• Given: A row of n coins whose values are some positive integers c1, c2, ..., cn, not necessarily distinct.
• Goal: Pick up the maximum amount of money subject to the constraint that no two coins adjacent in the row can be picked up.
Solution Approach
Let F(n) be the maximum amount that can be picked up from the row of n coins. Either the last coin is picked (in which case coin n−1 cannot be) or it is not:
F(n) = max(cn + F(n−2), F(n−1)) for n > 1, with F(0) = 0 and F(1) = c1.
For the row 5, 1, 2, 10, 6, 2 the answer is 17 (coins 5, 10, and 2).
Algorithm
Compute F(0), F(1), ..., F(n) bottom up using the recurrence and return F(n). Only the two previous values need to be kept.
C Code Snippet:
#include <stdio.h>

// Bottom-up coin-row: prev holds F(i-1) and cur holds F(i) before step i
int coinRow(int coins[], int n) {
    int prev = 0, cur = 0;
    for (int i = 0; i < n; i++) {
        int take = coins[i] + prev;   // pick coin i+1, skip its left neighbour
        prev = cur;
        if (take > cur) cur = take;
    }
    return cur;
}

int main() {
    int coins[] = {5, 1, 2, 10, 6, 2};
    int n = sizeof(coins) / sizeof(coins[0]);
    printf("Maximum amount of money: %d\n", coinRow(coins, n));
    return 0;
}
Example 2: Change-Making Problem
Problem Statement
• Given: An amount n and coin denominations d1 < d2 < ... < dm.
• Goal: Make change for n using the minimum number of coins.
Solution Approach
Let C(j) be the minimum number of coins needed to make change for amount j. Considering which denomination is used last gives
C(j) = min{ C(j − di) : di <= j } + 1 for j > 0, with C(0) = 0.
For n = 11 and denominations {1, 2, 5}, the answer is 3 (5 + 5 + 1).
C Code Snippet:
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
#define INF INT_MAX

// Function to find the minimum number of coins
int minCoins(int denominations[], int m, int n) {
    int *C = (int *)malloc((n + 1) * sizeof(int));
    for (int i = 0; i <= n; i++) {
        C[i] = INF;
    }
    C[0] = 0;
    for (int j = 1; j <= n; j++)
        for (int i = 0; i < m; i++)
            if (denominations[i] <= j && C[j - denominations[i]] != INF
                && C[j - denominations[i]] + 1 < C[j])
                C[j] = C[j - denominations[i]] + 1;
    int result = C[n];
    free(C);
    return result;
}

int main() {
    int denominations[] = {1, 2, 5};
    int m = sizeof(denominations) / sizeof(denominations[0]);
    int n = 11;
    printf("Minimum number of coins: %d\n", minCoins(denominations, m, n));
    return 0;
}
Example 3: Coin-Collecting Problem
• Problem: Given an n x m board with coins placed in some cells, a robot starting at the upper left cell (0,0) needs to collect the maximum number of coins and reach the bottom right cell (n-1, m-1), moving only right or down.
• Approach: Dynamic programming.
Algorithm
1. Define F(i, j) as the maximum number of coins collected when reaching cell (i, j).
2. Recurrence relation:
F(i, j) = max(F(i−1, j), F(i, j−1)) + coins(i, j)
o Whether we arrive from the top (F(i−1, j)) or from the left (F(i, j−1)), we add the coin at cell (i, j) if there is one. Cells in the first row have only a left neighbour, and cells in the first column only a top neighbour.
3. Base case:
F(0, 0) = coins(0, 0)
C Code Snippet (fragment showing only the base case of the table fill):
#include <stdio.h>
#include <stdlib.h>
F[0][0] = board[0][0];
Example 4: Knapsack Problem
• Objective: Given n items with known weights w1, w2, ..., wn and values v1, v2, ..., vn, and a knapsack with capacity W, find the most valuable subset of items that fit into the knapsack.
• Assumptions: All weights and the knapsack capacity are positive integers. Item values do not have to be integers.
1. Definitions:
o Let F(i, j) be the maximum value of a subset of the first i items that fit into a knapsack of capacity j.
2. Recurrence Relation:
o Consider the subsets of the first i items:
1. Subsets that do not include the i-th item: the value of the optimal subset is F(i−1, j).
2. Subsets that include the i-th item: if j − wi >= 0, the value is vi + F(i−1, j − wi).
o Hence F(i, j) = max(F(i−1, j), vi + F(i−1, j − wi)) if j − wi >= 0, and F(i, j) = F(i−1, j) if j − wi < 0.
3. Initial Conditions: F(0, j) = 0 for j >= 0 and F(i, 0) = 0 for i >= 0.
4. Goal: Find F(n, W), the maximum value of a subset of the n items that fit into the knapsack of capacity W, and determine the composition of this optimal subset.
Example
The instance: item weights {2, 1, 3, 2}, values {12, 10, 20, 15}, capacity W = 5. The table of F(i, j) values:

         capacity j
  i     0   1   2   3   4   5
  0     0   0   0   0   0   0
  1     0   0  12  12  12  12
  2     0  10  12  22  22  22
  3     0  10  12  22  30  32
  4     0  10  15  25  30  37

The maximum value is F(4, 5) = 37.
Memory Functions
1. ALGORITHM MFKnapsack(i, j)
// Implements the memory function method for the knapsack problem
// Input: A nonnegative integer i indicating the number of the first items being considered and a nonnegative integer j indicating the knapsack capacity
// Output: The value of an optimal feasible subset of the first i items
// Note: Uses as global variables input arrays Weights[1..n], Values[1..n], and table F[0..n, 0..W] whose entries are initialized with −1's except for row 0 and column 0 initialized with 0's
if F[i, j] < 0
    if j < Weights[i]
        value ← MFKnapsack(i − 1, j)
    else
        value ← max(MFKnapsack(i − 1, j), Values[i] + MFKnapsack(i − 1, j − Weights[i]))
    F[i, j] ← value
return F[i, j]

C Code Snippet:
#include <stdio.h>

#define N 4
#define W 5

int F[N + 1][W + 1];
int Weights[] = {0, 2, 1, 3, 2};
int Values[]  = {0, 12, 10, 20, 15};

int max(int a, int b) { return a > b ? a : b; }

// Computes F(i, j) only for the subproblems that are actually needed
int MFKnapsack(int i, int j) {
    if (F[i][j] < 0) {
        int value;
        if (j < Weights[i])
            value = MFKnapsack(i - 1, j);
        else
            value = max(MFKnapsack(i - 1, j),
                        Values[i] + MFKnapsack(i - 1, j - Weights[i]));
        F[i][j] = value;
    }
    return F[i][j];
}

int main() {
    for (int i = 0; i <= N; i++)
        for (int j = 0; j <= W; j++)
            F[i][j] = (i == 0 || j == 0) ? 0 : -1;
    printf("Maximum value: %d\n", MFKnapsack(N, W));
    return 0;
}
2. Efficiency:
o Time: O(nW)
o Space: O(nW)
3. Example: For the same instance as above, a table filled using memory functions computes only the necessary subproblems, which makes the approach more efficient for larger instances.
Summary
C Code Snippet (Floyd's algorithm for all-pairs shortest paths, discussed later in this module):
#include <stdio.h>

#define nV 4
#define INF 9999

// Floyd's algorithm: successively allow each vertex k as an intermediate
// and keep the shorter of the current path and the path through k
void floydWarshall(int graph[nV][nV]) {
    for (int k = 0; k < nV; k++)
        for (int i = 0; i < nV; i++)
            for (int j = 0; j < nV; j++)
                if (graph[i][k] + graph[k][j] < graph[i][j])
                    graph[i][j] = graph[i][k] + graph[k][j];
    for (int i = 0; i < nV; i++) {
        for (int j = 0; j < nV; j++)
            printf("%5d", graph[i][j]);
        printf("\n");
    }
}

int main() {
    int graph[nV][nV] = {{0, 3, INF, 5},
                         {2, 0, INF, 4},
                         {INF, 1, 0, INF},
                         {INF, INF, 2, 0}};
    floydWarshall(graph);
    return 0;
}
Design & Analysis of Algorithms | Module 4: Dynamic Programming
(a) Digraph. (b) Its adjacency matrix. (c) Its transitive closure.
We can generate the transitive closure of a digraph with the help of depth-first search or breadth-first search. Performing either traversal starting at the ith vertex gives the information about the vertices reachable from it and hence the columns that contain 1's in the ith row of the transitive closure. Thus, doing such a traversal for every vertex as a starting point yields the transitive closure in its entirety.
Since this method traverses the same digraph several times, we can use a better algorithm called Warshall's algorithm. Warshall's algorithm constructs the transitive closure through a series of n × n boolean matrices:

R(0), . . . , R(k−1), R(k), . . . , R(n).

Each of these matrices provides certain information about directed paths in the digraph. Specifically, the element r(k)[i, j] in the ith row and jth column of matrix R(k) (i, j = 1, 2, . . . , n, k = 0, 1, . . . , n) is equal to 1 if and only if there exists a directed path of a positive length from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k.

Thus, the series starts with R(0), which does not allow any intermediate vertices in its paths; hence, R(0) is nothing other than the adjacency matrix of the digraph. R(1) contains the information about paths that can use the first vertex as intermediate. The last matrix in the series, R(n), reflects paths that can use all n vertices of the digraph as intermediate and hence is nothing other than the digraph's transitive closure.
Suppose r(k)[i, j] = 1. This means that there exists a path from the ith vertex vi to the jth vertex vj with each intermediate vertex numbered not higher than k:

vi, a list of intermediate vertices each numbered not higher than k, vj . --- (*)

Two situations regarding this path are possible.
1. In the first, the list of its intermediate vertices does not contain the kth vertex. Then this path from vi to vj has intermediate vertices numbered not higher than k − 1, i.e. r(k−1)[i, j] = 1.
2. The second possibility is that path (*) does contain the kth vertex vk among the intermediate vertices. Then path (*) can be rewritten as:

vi, vertices numbered ≤ k − 1, vk, vertices numbered ≤ k − 1, vj .

i.e. r(k−1)[i, k] = 1 and r(k−1)[k, j] = 1.

Thus, we have the following formula for generating the elements of matrix R(k) from the elements of matrix R(k−1):

r(k)[i, j] = r(k−1)[i, j] or (r(k−1)[i, k] and r(k−1)[k, j]).
As an example, the application of Warshall’s algorithm to the digraph is shown below. New
1’s are in bold.
Analysis
Its time efficiency is Θ(n3). We can make the algorithm run faster by treating matrix rows as bit strings and employing the bitwise or operation available in most modern computer languages.
Space efficiency: Although separate matrices could be used for recording intermediate results of the algorithm, that extra space can be avoided by performing the updates in place.
(a) Digraph. (b) Its weight matrix. (c) Its distance matrix
We can generate the distance matrix with an algorithm that is very similar to Warshall's algorithm. It is called Floyd's algorithm.
Floyd's algorithm computes the distance matrix of a weighted graph with n vertices through a series of n × n matrices:

D(0), . . . , D(k−1), D(k), . . . , D(n).

The element d(k)[i, j] in the ith row and the jth column of matrix D(k) (i, j = 1, 2, . . . , n, k = 0, 1, . . . , n) is equal to the length of the shortest path among all paths from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k.
As in Warshall's algorithm, we can compute all the elements of each matrix D(k) from its immediate predecessor D(k−1). Taking into account the lengths of the shortest paths in both subsets (paths that avoid the kth vertex and paths that pass through it) leads to the following recurrence:

d(k)[i, j] = min{ d(k−1)[i, j], d(k−1)[i, k] + d(k−1)[k, j] } for k ≥ 1, with D(0) = W, the weight matrix.

Application of Floyd's algorithm to the digraph is shown below. Updated elements are shown in bold.
Working
The algorithm begins by sorting the graph's edges in nondecreasing order of their weights. Then, starting with the empty subgraph, it scans this sorted list, adding the next edge on the list to the current subgraph if such an inclusion does not create a cycle and simply skipping the edge otherwise.
Note that ET, the set of edges composing a minimum spanning tree of graph G, is actually a tree in Prim's algorithm but generally just an acyclic subgraph in Kruskal's algorithm.
We can consider the algorithm's operations as a progression through a series of forests containing all the vertices of a given graph and some of its edges. The initial forest consists of |V| trivial trees, each comprising a single vertex of the graph. The final forest consists of a single tree, which is a minimum spanning tree of the graph. On each iteration, the algorithm takes the next edge (u, v) from the sorted list of the graph's edges, finds the trees containing the vertices u and v, and, if these trees are not the same, unites them in a larger tree by adding the edge (u, v).
Analysis of Efficiency
The crucial check of whether two vertices belong to the same tree can be carried out using union-find algorithms.
The efficiency of Kruskal's algorithm is based on the time needed for sorting the edge weights of a given graph. Hence, with an efficient sorting algorithm, the time efficiency of Kruskal's algorithm will be in O(|E| log |E|).
Illustration
An example of Kruskal's algorithm is shown below. The selected edges are shown in bold.
3. Single Source Shortest Paths
The single-source shortest-paths problem is defined as follows: for a given vertex called the source in a weighted connected graph, the problem is to find shortest paths to all its other vertices. The single-source shortest-paths problem asks for a family of paths, each leading from the source to a different vertex in the graph, though some paths may, of course, have edges in common.
3.1. Dijkstra's Algorithm
Dijkstra's algorithm is the best-known algorithm for the single-source shortest-paths problem. This algorithm is applicable to undirected and directed graphs with nonnegative weights only.
Working - Dijkstra's algorithm finds the shortest paths to a graph's vertices in order of their distance from a given source.
First, it finds the shortest path from the source to a vertex nearest to it, then to a second nearest, and so on. In general, before its ith iteration commences, the algorithm has already identified the shortest paths to i−1 other vertices nearest to the source. These vertices, the source, and the edges of the shortest paths leading to them from the source form a subtree Ti of the given graph, shown in the figure.
Since all the edge weights are nonnegative, the next vertex nearest to the source can be found among the vertices adjacent to the vertices of Ti. The set of vertices adjacent to the vertices in Ti can be referred to as "fringe vertices"; they are the candidates from which Dijkstra's algorithm selects the next vertex nearest to the source.
To identify the ith nearest vertex, the algorithm computes, for every fringe vertex u, the sum of the distance to the nearest tree vertex v (given by the weight of the edge (v, u)) and the length dv of the shortest path from the source to v (previously determined by the algorithm), and then selects the vertex with the smallest such sum. The fact that it suffices to compare the lengths of such special paths is the central insight of Dijkstra's algorithm.
To facilitate the algorithm's operations, we label each vertex with two labels.
o The numeric label d indicates the length of the shortest path from the source to this vertex found by the algorithm so far; when a vertex is added to the tree, d indicates the length of the shortest path from the source to that vertex.
o The other label indicates the name of the next-to-last vertex on such a path, i.e., the parent of the vertex in the tree being constructed. (It can be left unspecified for the source s and vertices that are adjacent to none of the current tree vertices.)
With such labeling, finding the next nearest vertex u* becomes a simple task of finding a fringe vertex with the smallest d value. Ties can be broken arbitrarily.
After we have identified a vertex u* to be added to the tree, we need to perform two operations:
o Move u* from the fringe to the set of tree vertices.
o For each remaining fringe vertex u that is connected to u* by an edge of weight w(u*, u) such that du* + w(u*, u) < du, update the labels of u by u* and du* + w(u*, u), respectively.
Illustration: An example of Dijkstra's algorithm is shown below. The next closest vertex is shown in bold.
Analysis:
The time efficiency of Dijkstra's algorithm depends on the data structures used for implementing the priority queue and for representing an input graph itself. For graphs represented by their adjacency lists and the priority queue implemented as a min-heap, it is in O(|E| log |V|).
Applications
Transportation planning and packet routing in communication networks, including the Internet
Finding shortest paths in social networks, speech recognition, document formatting, robotics, compilers, and airline crew scheduling.
4. Optimal Tree Problem
Background
Suppose we have to encode a text that comprises characters from some n-character alphabet by assigning to each of the text's characters some sequence of bits called the codeword. There are two types of encoding: fixed-length encoding and variable-length encoding.
Fixed-length encoding: This method assigns to each character a bit string of the same length m (m >= log2 n). This is exactly what the standard ASCII code does. One way of getting a coding scheme that yields a shorter bit string on the average is based on the old idea of assigning shorter codewords to more frequent characters and longer codewords to less frequent characters.
Variable-length encoding: This method, which assigns codewords of different lengths to different characters, introduces a problem that fixed-length encoding does not have. Namely, how can we tell how many bits of an encoded text represent the first (or, more generally, the ith) character? To avoid this complication, we can limit ourselves to prefix-free (or simply prefix) codes. In a prefix code, no codeword is a prefix of a codeword of another character. Hence, with such an encoding, we can simply scan a bit string until we get the first group of bits that is a codeword for some character, replace these bits by this character, and repeat this operation until the bit string's end is reached.
If we want to create a binary prefix code for some alphabet, it is natural to associate the alphabet's characters with leaves of a binary tree in which all the left edges are labelled by 0 and all the right edges are labelled by 1 (or vice versa). The codeword of a character can then be obtained by recording the labels on the simple path from the root to the character's leaf. Since there is no simple path to a leaf that continues to another leaf, no codeword can be a prefix of another codeword; hence, any such tree yields a prefix code.
Among the many trees that can be constructed in this manner for a given alphabet with known frequencies of the character occurrences, the construction of a tree that assigns shorter bit strings to high-frequency characters and longer ones to low-frequency characters can be done by the following greedy algorithm, invented by David Huffman.
4.1 Huffman Trees and Codes
Huffman's Algorithm
Step 1: Initialize n one-node trees and label them with the characters of the alphabet. Record the frequency of each character in its tree's root to indicate the tree's weight. (More generally, the weight of a tree will be equal to the sum of the frequencies in the tree's leaves.)
Step 2: Repeat the following operation until a single tree is obtained. Find two trees with the smallest weight. Make them the left and right subtree of a new tree and record the sum of their weights in the root of the new tree as its weight.
A tree constructed by the above algorithm is called a Huffman tree. It defines, in the manner described, a Huffman code.
Example: Consider the five-symbol alphabet {A, B, C, D, _} with the following occurrence frequencies in a text made up of these symbols: