
Design and Analysis of Algorithms Greedy Method

UNIT – 2 (Module-2)
THE GREEDY METHOD
1. The General Method
2. Knapsack Problem
3. Job Sequencing with Deadlines
4. Minimum-Cost Spanning Trees
5. Prim's Algorithm
6. Kruskal’s Algorithm
7. Single Source Shortest Paths.

2.1 The General Method


Definition:
Greedy technique is a general algorithm-design strategy, built on the following elements:
• configurations: the different choices or values to find
• objective function: a function over the configurations to be either maximized or minimized

The method:
• Applicable to optimization problems ONLY
• Constructs a solution through a sequence of steps
• Each step expands a partially constructed solution so far, until a complete solution
to the problem is reached.
On each step, the choice made must be:
• Feasible: it has to satisfy the problem's constraints
• Locally optimal: it has to be the best local choice among all feasible choices available on that step
• Irrevocable: once made, it cannot be changed on subsequent steps of the algorithm

NOTE:
• Greedy method works best when applied to problems with the greedy-choice
property
• A globally-optimal solution can always be found by a series of local
improvements from a starting configuration.

Greedy method vs. Dynamic programming method:


• LIKE dynamic programming, the greedy method solves optimization problems.
• LIKE dynamic programming, greedy-method problems exhibit optimal substructure.
• UNLIKE dynamic programming, greedy-method problems exhibit the greedy-choice property, which avoids backtracking.

Applications of the Greedy Strategy:

Prepared by S.Rakesh, Asst.Prof. , IT Dept, CBIT



• Optimal solutions:
Change making
Minimum Spanning Tree (MST)
Single-source shortest paths
Huffman codes
• Approximations:
Traveling Salesman Problem (TSP)
Fractional Knapsack problem

2.2 Knapsack problem
2.2.1 One wants to pack n items in a luggage
2.2.1.1 The ith item is worth vi dollars and weighs wi pounds
2.2.1.2 Maximize the value, but the total weight cannot exceed W pounds
2.2.1.3 vi, wi, W are integers
2.2.2 0-1 knapsack: each item is either taken or not taken
2.2.3 Fractional knapsack: fractions of items can be taken
2.2.4 Both exhibit the optimal-substructure property
2.2.4.1 0-1: If item j is removed from an optimal packing, the remaining packing is an
optimal packing with weight at most W - wj
2.2.4.2 Fractional: If w pounds of item j are removed from an optimal packing, the
remaining packing is an optimal packing with weight at most W - w that can be taken
from the other n-1 items plus wj - w pounds of item j
Greedy Algorithm for the Fractional Knapsack problem
2.2.5 The fractional knapsack problem is solvable by the greedy strategy
2.2.5.1 Compute the value per pound vi/wi for each item
2.2.5.2 Obeying a greedy strategy, take as much as possible of the item with the greatest
value per pound
2.2.5.3 If the supply of that item is exhausted and there is still more room, take as
much as possible of the item with the next greatest value per pound, and so forth until
there is no more room
2.2.5.4 O(n lg n) time (we need to sort the items by value per pound)
0-1 knapsack is harder
2.2.6 The 0-1 knapsack problem cannot be solved by the greedy strategy


2.2.6.1 The greedy strategy may be unable to fill the knapsack to capacity, and the empty
space lowers the effective value per pound of the packing
2.2.6.2 We must compare the solution to the sub-problem in which the item is included
with the solution to the sub-problem in which the item is excluded before we can make
the choice
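The greedy strategy of 2.2.5.1-2.2.5.3 can be sketched in Python; a minimal illustration (the function name and the (value, weight) pair representation are my own):

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack.

    items: list of (value, weight) pairs; capacity: maximum total weight W.
    Returns (total_value, fractions), where fractions[i] is the fraction
    of item i taken, in the original input order.
    """
    # Sort item indices by value per pound vi/wi, greatest first.
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1],
                   reverse=True)
    fractions = [0.0] * len(items)
    total_value = 0.0
    room = capacity
    for i in order:
        if room <= 0:
            break
        value, weight = items[i]
        take = min(weight, room)          # take as much as possible
        fractions[i] = take / weight
        total_value += value * (take / weight)
        room -= take
    return total_value, fractions
```

For example, with items [(60, 10), (100, 20), (120, 30)] and capacity 50, the first two items are taken whole and two thirds of the third, for a total value of 240.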

2.3 Job sequencing with deadlines


The problem is stated as below.
2.3.1 There are n jobs to be processed on a machine.
2.3.2 Each job i has a deadline di ≥ 0 and profit pi ≥ 0.
2.3.3 The profit pi is earned iff the job is completed by its deadline.
2.3.4 The job is completed if it is processed on a machine for unit time.
2.3.5 Only one machine is available for processing jobs.
2.3.6 Only one job is processed at a time on the machine
2.3.7 A feasible solution is a subset of jobs J such that each job is completed by its deadline.
2.3.8 An optimal solution is a feasible solution with maximum profit value.
Example : Let n = 4, (p1,p2,p3,p4) = (100,10,15,27), (d1,d2,d3,d4) = (2,1,2,1)


2.3.9 Consider the jobs in non-increasing order of profits, subject to the constraint that the
resulting job sequence J is a feasible solution.
2.3.10 In the example considered before, the non-increasing profit vector is
(100, 27, 15, 10), i.e. (p1, p4, p3, p2), with corresponding deadlines (2, 1, 2, 1), i.e. (d1, d4, d3, d2).
J = {1} is feasible
J = {1, 4} is feasible with processing sequence (4, 1)
J = {1, 3, 4} is not feasible
J = {1, 2, 4} is not feasible
J = {1, 4} is optimal

Theorem: Let J be a set of k jobs and Σ = (i1, i2, …, ik) be a permutation of the jobs in J
such that di1 ≤ di2 ≤ … ≤ dik.
2.3.11 J is a feasible solution iff the jobs in J can be processed in the order Σ without
violating any deadline.
Proof:
2.3.12 By the definition of a feasible solution, if the jobs in J can be processed in the
order Σ without violating any deadline, then J is a feasible solution.


2.3.13 So we only have to prove that if J is feasible, then Σ represents a possible order in
which the jobs may be processed.
2.3.14 Suppose J is a feasible solution. Then there exists Σ1 = (r1, r2, …, rk)
such that drj ≥ j, 1 ≤ j ≤ k,
i.e. dr1 ≥ 1, dr2 ≥ 2, …, drk ≥ k,
each job requiring one unit of time.
2.3.15 Σ = (i1, i2, …, ik) and Σ1 = (r1, r2, …, rk)
2.3.16 Assume Σ1 ≠ Σ. Then let a be the least index at which Σ1 and Σ differ, i.e. a is such
that ra ≠ ia.
2.3.17 Let rb = ia; then b > a (because rj = ij for all indices j less than a).
2.3.18 In Σ1, interchange ra and rb.
2.3.19 Σ = (i1, i2, …, ia, …, ik) [rb occurs before ra
2.3.20 in i1, i2, …, ik]
2.3.21 Σ1 = (r1, r2, …, ra, …, rb, …, rk)
2.3.22 i1 = r1, i2 = r2, …, ia-1 = ra-1, ia ≠ ra but ia = rb
2.3.23 We know di1 ≤ di2 ≤ … ≤ dia ≤ dib ≤ … ≤ dik.
2.3.24 Since ia = rb, drb ≤ dra, i.e. dra ≥ drb.
2.3.25 In the feasible solution, dra ≥ a and drb ≥ b.
2.3.26 So if we interchange ra and rb, the resulting permutation Σ11 = (s1, …, sk) represents
an order in which the least index at which Σ11 and Σ differ is increased by one.
2.3.27 Also the jobs in Σ11 may be processed without violating a deadline.
2.3.28 Continuing in this way, Σ1 can be transformed into Σ without violating any deadline.
2.3.29 Hence the theorem is proved.
GREEDY ALGORITHM FOR JOB SEQUENCING WITH DEADLINES

Procedure greedy-job(D, J, n)
// J is the set of jobs that can be completed by their deadlines //
J ← {1}
for i ← 2 to n do
    if all jobs in J ∪ {i} can be completed by their deadlines then
        J ← J ∪ {i}
    end if
repeat
end greedy-job

J may be represented by a one-dimensional array J(1:k), with the deadlines ordered so that
D(J(1)) ≤ D(J(2)) ≤ … ≤ D(J(k)). To test whether J ∪ {i} is feasible, we insert i into J and
verify that D(J(r)) ≤ r for 1 ≤ r ≤ k+1.
Procedure JS(D, J, n, k)
// D(i) ≥ 1, 1 ≤ i ≤ n are the deadlines //
// the jobs are ordered such that p1 ≥ p2 ≥ … ≥ pn //
// in the optimal solution, D(J(i)) ≤ D(J(i+1)), 1 ≤ i < k //
integer D(0:n), J(0:n), i, k, n, r
D(0) ← J(0) ← 0 // J(0) is a fictitious job with D(0) = 0 //
k ← 1; J(1) ← 1 // job one is inserted into J //
for i ← 2 to n do // consider jobs in non-increasing order of pi //
    // find the position of i and check feasibility of insertion //
    r ← k // r and k are indices for existing jobs in J //
    // find r such that i can be inserted after r //
    while D(J(r)) > D(i) and D(J(r)) ≠ r do
        // job r can be processed after i and //
        // deadline of job r is not exactly r //
        r ← r - 1 // consider whether job r-1 can be processed after i //
    repeat
    if D(J(r)) ≤ D(i) and D(i) > r then
        // the new job i can come after existing job r; insert i into J at position r+1 //
        for l ← k to r+1 by -1 do
            J(l+1) ← J(l) // shift jobs (r+1) to k right by one position //
        repeat
        J(r+1) ← i; k ← k + 1
    end if
repeat
end JS
COMPLEXITY ANALYSIS OF THE JS ALGORITHM
2.3.30 Let n be the number of jobs and s be the number of jobs included in the solution.
2.3.31 The for-loop is iterated (n-1) times.
2.3.32 Each iteration takes O(k) time, where k is the number of existing jobs in J.
∴ The time needed by the algorithm is O(sn); since s ≤ n, the worst-case time is O(n²).
If di = n - i + 1, 1 ≤ i ≤ n, JS takes Θ(n²) time.
D and J need Θ(s) amount of space.
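The same greedy rule can be sketched compactly in Python (names are my own; this version re-sorts the schedule for the feasibility test instead of using procedure JS's in-place insertion, so it is simpler but asymptotically slower):

```python
def job_sequencing(profits, deadlines):
    """Greedy job sequencing with deadlines (unit-time jobs, one machine).

    profits, deadlines: parallel lists indexed by job.
    Returns (total_profit, schedule), where schedule lists the selected
    job indices in a feasible processing order (by non-decreasing deadline).
    """
    n = len(profits)
    # Consider jobs in non-increasing order of profit.
    order = sorted(range(n), key=lambda j: profits[j], reverse=True)
    schedule = []  # kept ordered by deadline, like the array J in the text
    for j in order:
        # Tentatively insert j, keeping the schedule ordered by deadline.
        trial = sorted(schedule + [j], key=lambda x: deadlines[x])
        # Feasible iff every job meets its deadline in this order
        # (the job in slot pos finishes at time pos + 1).
        if all(deadlines[x] >= pos + 1 for pos, x in enumerate(trial)):
            schedule = trial
    return sum(profits[j] for j in schedule), schedule
```

On the example above, with profits (100, 10, 15, 27) and deadlines (2, 1, 2, 1) (0-indexed here), the selected jobs are {1, 4} of the text, with total profit 127.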
2.4 Minimum-Cost Spanning Trees
Spanning Tree
A spanning tree is a connected acyclic sub-graph (tree) of a given graph G that includes
all of G's vertices.

Example: Consider the following graph with four vertices a, b, c, d and weighted edges
a-b = 1, a-c = 5, b-d = 2, c-d = 3.

Three spanning trees for the above graph are:

T1 = {a-b, a-c, c-d}, Weight(T1) = 1 + 5 + 3 = 9
T2 = {a-b, a-c, b-d}, Weight(T2) = 1 + 5 + 2 = 8
T3 = {a-b, b-d, c-d}, Weight(T3) = 1 + 2 + 3 = 6


Minimum Spanning Tree (MST)

Definition:
MST of a weighted, connected graph G is defined as: A spanning tree of G with
minimum total weight.
Example: Consider the earlier spanning-tree example. Among the spanning trees shown
there, the tree with the minimum weight 6 is the MST for the given graph.

Question: Why can't we use the BRUTE FORCE method to construct an MST?

Answer: Brute force would require an exhaustive-search approach, which faces two
serious obstacles:
1. The number of spanning trees grows exponentially with the graph size.
2. Generating all spanning trees for a given graph is not easy.
MST Applications:
• Network design.
Telephone, electrical, hydraulic, TV cable, computer, road
• Approximation algorithms for NP-hard problems.
Traveling salesperson problem, Steiner tree
• Cluster analysis.
• Reducing data storage in sequencing amino acids in a protein
• Learning salient features for real-time face verification
• Auto config protocol for Ethernet bridging to avoid cycles in a network, etc
2.5 Prim's Algorithm
Some useful definitions:
2.5.1 Fringe edge: an edge with one vertex in the partially constructed tree Ti and the
other not yet in it.
2.5.2 Unseen edge: an edge with both vertices not in Ti

Algorithm:
ALGORITHM Prim(G)
//Prim's algorithm for constructing an MST
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing an MST of G
// the set of tree vertices can be initialized with any vertex
VT ← { v0 }
ET ← Ø
for i ← 1 to |V| - 1 do
    Find a minimum-weight edge e* = (v*, u*) among all the edges (v, u) such
    that v is in VT and u is in V - VT
    VT ← VT ∪ { u* }

Prepared by S.Rakesh, Asst.Prof. , IT Dept, CBIT


Design and Analysis of Algoriths Greedy Method

    ET ← ET ∪ { e* }
return ET
STEP 1: Start with a tree, T0, consisting of one vertex
STEP 2: "Grow" the tree one vertex/edge at a time
2.5.2.1 Construct a series of expanding sub-trees T1, T2, …, Tn-1.
2.5.2.2 At each stage, construct Ti+1 from Ti by adding the minimum-weight
edge connecting a vertex in the tree Ti to a vertex not yet in the tree, chosen
from the "fringe" edges (this is the "greedy" step!)
The algorithm stops when all vertices are included.

Example:
Apply Prim's algorithm to find an MST of the graph with vertices a, b, c, d, e, f and
edge weights:
a-b = 3, a-e = 6, a-f = 5, b-c = 1, b-f = 4, c-d = 6, c-f = 4, d-e = 8, d-f = 5, e-f = 2

Solution:

Tree vertex | Remaining vertices (nearest tree vertex, weight)
a(-, -)     | b(a, 3), c(-, ∞), d(-, ∞), e(a, 6), f(a, 5)
b(a, 3)     | c(b, 1), d(-, ∞), e(a, 6), f(b, 4)
c(b, 1)     | d(c, 6), e(a, 6), f(b, 4)
f(b, 4)     | d(f, 5), e(f, 2)
e(f, 2)     | d(f, 5)
d(f, 5)     | Algorithm stops since all vertices are included.

The weight of the minimum spanning tree is 3 + 1 + 4 + 2 + 5 = 15.

Efficiency:
The efficiency of Prim's algorithm depends on the data structure used to store the
priority queue.
2.5.3 Unordered array: Θ(n²)
2.5.4 Binary heap: Θ(m log n)
2.5.5 Min-heap: for a graph with n nodes and m edges: O((n + m) log n)

Conclusion:
2.5.6 Prim's algorithm is a "vertex-based algorithm"
2.5.7 Prim's algorithm needs a priority queue for locating the nearest vertex.
The choice of priority queue matters in a Prim implementation:
o Array: optimal for dense graphs
o Binary heap: better for sparse graphs
o Fibonacci heap: best in theory, but not in practice
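A minimal Python sketch of Prim's algorithm with a binary heap (the adjacency-list representation and the lazy deletion of stale heap entries are implementation choices of mine, not part of the text's pseudocode):

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm with a min-heap.

    graph: dict mapping vertex -> list of (neighbor, weight) pairs,
    describing a connected undirected graph.
    Returns (total_weight, edges) of a minimum spanning tree.
    """
    in_tree = {start}
    edges = []
    total = 0
    # The heap holds fringe edges (weight, u, v) with u already in the tree.
    heap = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue  # stale entry: v was already reached via a cheaper edge
        in_tree.add(v)
        edges.append((u, v, w))
        total += w
        for x, wx in graph[v]:
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return total, edges
```

Running it on the example graph above (starting from a) yields an MST of weight 15.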


2.6 Kruskal’s Algorithm

Algorithm:

ALGORITHM Kruskal(G)

//Kruskal's algorithm for constructing an MST
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing an MST of G

Sort E in ascending order of the edge weights

// initialize the set of tree edges and its size
ET ← Ø
edge_counter ← 0

// initialize the number of processed edges
k ← 0
while edge_counter < |V| - 1 do
    k ← k + 1
    if ET ∪ { ek } is acyclic
        ET ← ET ∪ { ek }
        edge_counter ← edge_counter + 1
return ET

The method:
STEP 1: Sort the edges by increasing weight
STEP 2: Start with a forest having |V| trees (one per vertex)
STEP 3: The number of trees is reduced by ONE at every inclusion of an edge
At each stage:
• Among the edges not yet included, select the one with minimum
weight AND which does not form a cycle.
• The edge reduces the number of trees by one by combining two trees of
the forest.

The algorithm stops when |V| - 1 edges are included in the MST, i.e. when the number of
trees in the forest is reduced to ONE.


Example:
Apply Kruskal's algorithm to find an MST of the graph with vertices a, b, c, d, e, f and
edge weights:
a-b = 3, a-e = 6, a-f = 5, b-c = 1, b-f = 4, c-d = 6, c-f = 4, d-e = 8, d-f = 5, e-f = 2

Solution:
The list of edges is:
Edge:   ab af ae bc bf cf cd df de ef
Weight:  3  5  6  1  4  4  6  5  8  2

Sort the edges in ascending order:
Edge:   bc ef ab bf cf af df ae cd de
Weight:  1  2  3  4  4  5  5  6  6  8

Now consider the edges in this order:

Edge | Weight | Inserted?                          | Insertion order
bc   | 1      | YES                                | 1
ef   | 2      | YES                                | 2
ab   | 3      | YES                                | 3
bf   | 4      | YES                                | 4
cf   | 4      | NO (would form the cycle b-c-f-b)  | -
af   | 5      | NO (would form the cycle a-b-f-a)  | -
df   | 5      | YES                                | 5

The algorithm stops as |V| - 1 = 5 edges are included in the MST; its weight is
1 + 2 + 3 + 4 + 5 = 15.


Efficiency:
The efficiency of Kruskal's algorithm is dominated by the time needed to sort the edge
weights of the given graph.
2.6.1 With an efficient sorting algorithm: Θ(|E| log |E|)

Conclusion:
2.6.2 Kruskal's algorithm is an "edge-based algorithm"
2.6.3 Prim's algorithm with a heap is faster than Kruskal's algorithm.
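The three steps of the method can be sketched in Python using a union-find (disjoint-set) structure for the acyclicity test, which the text's pseudocode leaves abstract; the representation below is my own choice:

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm with union-find.

    n: number of vertices, labeled 0..n-1.
    edges: list of (weight, u, v) tuples for an undirected connected graph.
    Returns (total_weight, mst_edges).
    """
    parent = list(range(n))

    def find(x):
        # Find the root of x's tree, halving paths as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, mst = 0, []
    for w, u, v in sorted(edges):      # STEP 1: ascending weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # acyclic: u and v are in different trees
            parent[ru] = rv            # union: combine the two trees
            mst.append((u, v, w))
            total += w
            if len(mst) == n - 1:      # |V| - 1 edges included: done
                break
    return total, mst
```

On the example graph (with a..f mapped to 0..5) this again produces an MST of weight 15, matching Prim's result.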
2.7 Single Source Shortest Paths.

Some useful definitions:


2.7.1 Shortest Path Problem: Given a connected directed graph G with non-negative
weights on the edges and a root vertex r, find for each vertex x a directed path P(x)
from r to x such that the sum of the weights on the edges in the path P(x) is as
small as possible.
Algorithm
2.7.2 Proposed by Dutch computer scientist Edsger Dijkstra in 1959.
2.7.3 Solves the single-source shortest path problem for a graph with nonnegative
edge weights.
2.7.4 This algorithm is often used in routing.
E.g.: Dijkstra's algorithm is usually the working principle behind link-state
routing protocols.
ALGORITHM Dijkstra(G, s)
//Input: Weighted connected graph G and source vertex s
//Output: The length Dv of a shortest path from s to v and its penultimate vertex Pv for
every vertex v in V

//initialize vertex priorities in the priority queue
Initialize(Q)
for every vertex v in V do
    Dv ← ∞; Pv ← null // Pv is the parent of v
    insert(Q, v, Dv) //initialize vertex priority in priority queue
Ds ← 0
//update priority of s with Ds, making it the minimum
Decrease(Q, s, Ds)
VT ← Ø
for i ← 0 to |V| - 1 do
    u* ← DeleteMin(Q)
    //expanding the tree, choosing the locally best vertex
    VT ← VT ∪ {u*}
    for every vertex u in V - VT that is adjacent to u* do
        if Du* + w(u*, u) < Du
            Du ← Du* + w(u*, u); Pu ← u*
            Decrease(Q, u, Du)
The method
Dijkstra's algorithm solves the single-source shortest path problem in 2 stages.
Stage 1: A greedy algorithm computes the shortest distance from the source to all other
nodes in the graph and saves it in a data structure.
Stage 2: Uses the data structure to recover a shortest path from the source to any vertex v.
• At each step, and for each vertex x, keep track of a "distance" D(x)
and a directed path P(x) from the root to vertex x of length D(x).
• Scanning first from the root, take initial paths P(r, x) = (r, x) with
D(x) = w(rx) when rx is an edge,
D(x) = ∞ when rx is not an edge.
For each temporary vertex y distinct from x, set
D(y) = min{ D(y), D(x) + w(xy) }

Example:
Apply Dijkstra's algorithm to find single-source shortest paths, with vertex a as the
source, in the graph with vertices a, b, c, d, e, f and edge weights:
a-b = 3, a-e = 6, a-f = 5, b-c = 1, b-f = 4, c-d = 6, c-f = 4, d-e = 8, d-f = 5, e-f = 2

Solution:
Initially, the length Dv of the shortest path from the source and the penultimate vertex
Pv for every vertex v in V are:
Da = 0, Pa = null; Db = Dc = Dd = De = Df = ∞, with all parents null.

Tree vertex | Remaining (fringe) vertices                  | Updates
a(-, 0)     | b(a, 3), c(-, ∞), d(-, ∞), e(a, 6), f(a, 5) | Db = 3, Pb = [a, b]; De = 6, Pe = [a, e]; Df = 5, Pf = [a, f]
b(a, 3)     | c(b, 3+1), d(-, ∞), e(a, 6), f(a, 5)        | Dc = 4, Pc = [a, b, c]
c(b, 4)     | d(c, 4+6), e(a, 6), f(a, 5)                 | Dd = 10, Pd = [a, b, c, d]
f(a, 5)     | d(c, 10), e(a, 6)                           | no improvement (5 + 5 = 10; 5 + 2 > 6)
e(a, 6)     | d(c, 10)                                    | no improvement
d(c, 10)    | Algorithm stops since no edges remain to scan.

The resulting shortest paths from a:
Db = 3, Pb = [a, b]; Dc = 4, Pc = [a, b, c]; Dd = 10, Pd = [a, b, c, d];
De = 6, Pe = [a, e]; Df = 5, Pf = [a, f].

Conclusion:
2.7.5 Doesn't work with negative weights
2.7.6 Applicable to both undirected and directed graphs
2.7.7 Using an unordered array to store the priority queue: Θ(n²)
2.7.8 Using a min-heap to store the priority queue: O(m log n)
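The algorithm can be sketched in Python with a binary min-heap (the dict-based graph representation and the lazy deletion of stale heap entries are implementation choices of mine, standing in for the pseudocode's Decrease operation):

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's single-source shortest paths with a min-heap.

    graph: dict mapping vertex -> list of (neighbor, weight) pairs,
    with all weights nonnegative.
    Returns (dist, parent): shortest distances Dv and penultimate
    vertices Pv for every vertex v.
    """
    dist = {v: float('inf') for v in graph}
    parent = {v: None for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale entry: u was already settled with a shorter path
        for v, w in graph[u]:
            if d + w < dist[v]:   # relax edge (u, v)
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, parent
```

On the example graph with source a, this reproduces the table above: Db = 3, Dc = 4, Dd = 10, De = 6, Df = 5.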
