
GKM COLLEGE OF ENGINEERING AND TECHNOLOGY

GKM Nagar, New Perungalathur, Chennai-63.

(BACHELOR OF TECHNOLOGY)

DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE

AD3351-DESIGN AND ANALYSIS OF ALGORITHMS

II YEAR/III SEMESTER

IMPORTANT QUESTIONS & ANSWERS


PART A & B (UNITS 3, 4 & 5)

Prepared by,

K.M. Sai Kiruthika,

HOD/CSE
UNIT III - DYNAMIC PROGRAMMING AND GREEDY TECHNIQUE

Dynamic programming – Principle of optimality - Coin changing problem – Warshall’s and Floyd‘s
algorithms – Optimal Binary Search Trees - Multi stage graph - Knapsack Problem and Memory
functions. Greedy Technique – Dijkstra’s algorithm - Huffman Trees and codes - 0/1 Knapsack
problem.
PART – A

1. What do you mean by dynamic programming? April / May 2015


Dynamic programming is an algorithm design method that can be used when the
solution to a problem is viewed as the result of a sequence of decisions.
Dynamic programming is a technique for solving problems with overlapping
subproblems. These subproblems arise from a recurrence relating a solution to
a given problem to solutions of its smaller subproblems. Each subproblem is
solved only once, and the results are recorded in a table from which the
solution to the original problem is obtained. It was invented by a prominent
U.S. mathematician, Richard Bellman, in the 1950s.
2. List out the memory functions used under dynamic programming. April / May2015

The memory function technique solves a problem such as the knapsack problem
top-down, but records the value of each subproblem in a table so that it is
computed only once. Let V[i,j] be the optimal value of an instance with the
first i items and capacity j. Then

V[i,j] = max {V[i-1,j], vi + V[i-1,j-wi]}  if j - wi ≥ 0
V[i,j] = V[i-1,j]                          if j - wi < 0

Initial conditions: V[0,j] = 0 for j ≥ 0 and V[i,0] = 0 for i ≥ 0
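The following is a minimal Python sketch of this memory-function scheme,
assuming 0-indexed weight and value lists; the name mf_knapsack is ours.

import sys

def mf_knapsack(w, v, W):
    n = len(w)
    # F[i][j] = -1 marks "not yet computed"; row 0 and column 0 are base cases
    F = [[-1] * (W + 1) for _ in range(n + 1)]
    for j in range(W + 1):
        F[0][j] = 0
    for i in range(n + 1):
        F[i][0] = 0

    def solve(i, j):
        if F[i][j] < 0:                       # compute only if not stored yet
            if j < w[i - 1]:                  # item i does not fit
                F[i][j] = solve(i - 1, j)
            else:                             # exclude vs. include item i
                F[i][j] = max(solve(i - 1, j),
                              v[i - 1] + solve(i - 1, j - w[i - 1]))
        return F[i][j]

    return solve(n, W)

print(mf_knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))  # expected 37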

3. Define optimal binary search tree. April / May 2010


An optimal binary search tree (BST) is a binary search tree that provides
the smallest possible search time (or expected search time) for a given
sequence of accesses (or access probabilities).
4. List out the advantages of dynamic programming. May / June 2014
 Optimal solutions to subproblems are retained so as to avoid
recomputing their values.
 Decision sequences containing subsequences that are suboptimal
are not considered.
 It always gives an optimal solution.

5. State the general principle of greedy algorithm. Nov / Dec 2010


Greedy technique suggests a greedy grab of the best alternative available in the
hope that a sequence of locally optimal choices will yield a globally optimal
solution to the entire problem. The choice made at each step must be:

 Feasible: it has to satisfy the problem's constraints.

 Locally optimal: it has to be the best local choice among all
feasible choices available on that step.
 Irrevocable: once made, it cannot be changed on subsequent
steps of the algorithm.
6. State the principle of optimality. Nov / Dec 2010 (Nov/Dec 2016)
It states that an optimal sequence of decisions has the property that, whatever
the initial state and decision are, the remaining decisions must constitute an
optimal decision sequence with regard to the state resulting from the first
decision.
7. Compare dynamic programming and greedy algorithm. Nov / Dec 2010

Greedy method:
1. Only one sequence of decision is generated.
2. It does not guarantee to give an optimal solution always.
Dynamic programming:
1. Many number of decisions are generated.
2. It definitely gives an optimal solution always.

8. What is greedy algorithm? Nov / Dec 2011


A greedy algorithm is a mathematical process that looks for simple, easy-to-
implement solutions to complex, multi-step problems by deciding which next step
will provide the most obvious benefit. Such algorithms are called greedy because,
while the optimal solution to each smaller instance provides an immediate output,
the algorithm does not consider the larger problem as a whole. Once a
decision has been made, it is never reconsidered. A greedy algorithm works as follows:
 Determine the optimal substructure of the problem.
 Develop a recursive solution.
 Prove that at any stage of the recursion one of the optimal choices is the
greedy choice. Thus it is always safe to make the greedy choice.
 Show that all but one of the subproblems induced by having made the greedy
choice are empty.
 Develop a recursive algorithm and convert it into an iterative algorithm.

9. What is the drawback of greedy algorithm? May / June 2012


The disadvantage of a greedy algorithm is that it is entirely possible
that the most optimal short-term solutions may lead to the worst possible
long-term outcome.

10. What is 0/1 knapsack problem? Nov / Dec 2013


Given a set of N items (vi, wi) and a container of capacity C, find a subset of the
items that maximizes the value ∑ vi while satisfying the
weight constraint ∑ wi ≤ C. A greedy algorithm may consider the items in order of
decreasing value-per-unit weight vi/wi. Such an approach guarantees a solution
with value no worse than 1/2 of the optimal solution.
11. Write an algorithm to compute the binomial coefficient.

Algorithm Binomial(n, k)
//Computes C(n, k) by the dynamic programming algorithm
//Input: A pair of nonnegative integers n ≥ k ≥ 0
//Output: The value of C(n, k)
for i ← 0 to n do
    for j ← 0 to min(i, k) do
        if j = 0 or j = i
            C[i, j] ← 1
        else
            C[i, j] ← C[i-1, j-1] + C[i-1, j]
return C[n, k]
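A runnable Python version of this pseudocode might look as follows (the
function name binomial is ours):

def binomial(n, k):
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                      # C(i, 0) = C(i, i) = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binomial(5, 2))  # 10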

12. Devise an algorithm to make change using the greedy strategy.

Algorithm Change(n, D[1..m])
//Implements the greedy algorithm for the change-making problem
//Input: A nonnegative integer amount n and
//       a decreasing array of coin denominations D
//Output: Array C[1..m] of the number of coins of each denomination
//        in the change, or the "no solution" message
for i ← 1 to m do
    C[i] ← ⌊n/D[i]⌋
    n ← n mod D[i]
if n = 0 return C
else return "no solution"
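A direct Python translation, assuming the denominations are given in
decreasing order, could look like this:

def change(n, D):
    C = []
    for d in D:
        C.append(n // d)      # take as many coins of denomination d as possible
        n %= d
    return C if n == 0 else "no solution"

print(change(48, [25, 10, 5, 1]))  # [1, 2, 0, 3]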

13. What is the best algorithm suited to identify the topography for
a graph? Mention its efficiency factors. (Nov/Dec 2015)
Prim's and Kruskal's algorithms are best suited to identify the topography
(minimum spanning tree) of a graph. Prim's algorithm runs in O(|E| log |V|)
time with a min-heap; Kruskal's runs in O(|E| log |E|) time, dominated by
sorting the edges.

14. What is spanning tree and minimum spanning tree?


Spanning tree of a connected graph G: a connected acyclic subgraph of
G that includes all of G's vertices.
Minimum spanning tree of a weighted, connected graph G: a spanning tree of G of
the minimum total weight.

15. Define the single source shortest path problem. (May/June 2016) (Nov/Dec 2016)

Dijkstra's algorithm solves the single-source shortest-path problem of finding
shortest paths from a given vertex (the source) to all the other vertices of a weighted
graph or digraph. Dijkstra's algorithm provides a correct solution for a graph with
non-negative weights.
PART-B

1. Write down and explain the algorithm to solve the all-pairs shortest
paths problem. April / May 2010

Write and explain the algorithm to compute all-pairs shortest paths
using dynamic programming. Nov / Dec 2010
Explain in detail about Floyd’s algorithm. (Nov/Dec 2016)

ALL PAIR SHORTEST PATH


It is to find the distances (the lengths of the shortest paths) from each vertex to
all other vertices.
It is convenient to record the lengths of shortest paths in an n x n matrix D
called the distance matrix.
It computes the distance matrix of a weighted graph with n vertices through a
series of n x n matrices:
D(0), D(1), ..., D(k-1), D(k), ..., D(n)

Algorithm
ALGORITHM Floyd(W[1..n, 1..n])
//Implements Floyd's algorithm for the all-pairs shortest-paths problem
//Input: The weight matrix W of a graph
//Output: The distance matrix of the shortest paths' lengths
D ← W
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i,j] ← min {D[i,j], D[i,k] + D[k,j]}
return D
Example: the weight matrix W of the graph and the resulting distance matrix D.
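As a hedged illustration, the following Python sketch implements the same
triple loop; the 4-vertex weight matrix is our own example, since the original
figures are not reproduced here.

INF = float('inf')

def floyd(W):
    n = len(W)
    D = [row[:] for row in W]                 # D(0) is the weight matrix
    for k in range(n):                        # allow vertex k as intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# Illustrative 4-vertex digraph; INF marks missing edges.
W = [[0, INF, 3, INF],
     [2, 0, INF, INF],
     [INF, 7, 0, 1],
     [6, INF, INF, 0]]
for row in floyd(W):
    print(row)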
2. Write the algorithm to compute the 0/1 knapsack problem using dynamic
programming and explain. Nov / Dec 2010, April / May 2015, Nov/Dec 15

KNAPSACK PROBLEM
Given n items of integer weights w1, w2, ..., wn and values v1, v2, ..., vn, and a
knapsack of integer capacity W, find the most valuable subset of the items that fit
into the knapsack.
Consider an instance defined by the first i items and capacity j (j ≤ W).
Let V[i,j] be the optimal value of such an instance.

The Recurrence:
To design a dynamic programming algorithm, we need to derive a recurrence
relation that expresses a solution to an instance of the knapsack problem in
terms of solutions to its smaller sub instances.
 Let us consider an instance defined by the first i items, 1≤i≤n, with weights
w1, ... , wi, values v1, ... , vi, and knapsack capacity j, 1 ≤j ≤
W. Let V[i, j] be the value of an optimal solution to this instance, i.e., the
value of the most valuable subset of the first i items that fit into the knapsack
of capacity j.
Divide all the subsets of the first i items that fit the knapsack of capacity j
into two categories: those that do not include the ith item and those that do.
Two possibilities for the most valuable subset for the subproblem P(i, j):
i. It does not include the ith item: V[i, j] = V[i-1, j]
ii. It includes the ith item: V[i, j] = vi + V[i-1, j - wi]
Hence V[i, j] = max{V[i-1, j], vi + V[i-1, j - wi]} if j - wi ≥ 0,
and V[i, j] = V[i-1, j] if j - wi < 0.
Example: Knapsack of capacity W = 5

item weight value


1 2 $12
2 1 $10
3 3 $20
4 2 $15

                 j    0    1    2    3    4    5
i = 0                 0    0    0    0    0    0
w1 = 2, v1 = 12       0    0   12   12   12   12
w2 = 1, v2 = 10       0   10   12   22   22   22
w3 = 3, v3 = 20       0   10   12   22   30   32
w4 = 2, v4 = 15       0   10   15   25   30   37

Thus, the maximal value is V[4, 5] = $37.

Composition of an optimal subset:


 The composition of an optimal subset can be found by tracing back the
computations of this entry in the table.

 Since V[4, 5] ≠ V[3, 5], item 4 was included in an optimal solution along with
an optimal subset for filling the 5 - 2 = 3 remaining units of the knapsack capacity.

 The latter is represented by element V[3, 3]. Since V[3, 3] = V[2, 3], item 3 is
not a part of an optimal subset.

 Since V[2, 3] ≠ V[1, 3], item 2 is a part of an optimal selection, which leaves
element V[1, 3 - 1] to specify its remaining composition.

 Similarly, since V[1, 2] ≠ V[0, 2], item 1 is the final part of the optimal
solution {item 1, item 2, item 4}.

Efficiency:
 The time efficiency and space efficiency of this algorithm are both in Θ(nW).
 The time needed to find the composition of an optimal solution is in O(n + W).
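A bottom-up Python sketch of this table filling and traceback, run on the
instance above, might look as follows (function and variable names are ours):

def knapsack(w, v, W):
    n = len(w)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            V[i][j] = V[i - 1][j]                       # exclude item i
            if j >= w[i - 1]:                           # include item i if it fits
                V[i][j] = max(V[i][j], v[i - 1] + V[i - 1][j - w[i - 1]])
    # Trace back: item i is in the subset exactly when V[i][j] != V[i-1][j].
    items, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:
            items.append(i)
            j -= w[i - 1]
    return V[n][W], sorted(items)

print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))  # (37, [1, 2, 4])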
3. Write an algorithm to construct an optimal binary search tree with
suitable example.

OPTIMAL BINARY SEARCH TREE

Problem:
Given n keys a1 < ... < an and probabilities p1, ..., pn of searching for them,
find a BST with a minimum average number of comparisons in successful search.
Since the total number of BSTs with n nodes is given by C(2n,n)/(n+1), which
grows exponentially, brute force is hopeless.

Let C[i,j] be minimum average number of comparisons made in T[i,j], optimal


BST for keys ai < …< aj , where 1 ≤ i ≤ j ≤ n. Consider optimal BST among all BSTs
with some ak (i ≤ k ≤ j ) as their root; T[i,j] is the best among them.

The recurrence for C[i,j]:

C[i,j] = min over i ≤ k ≤ j of {C[i,k-1] + C[k+1,j]} + ∑ ps (s = i, ..., j),
for 1 ≤ i ≤ j ≤ n
C[i,i] = pi for 1 ≤ i ≤ n
C[i, i-1] = 0 for 1 ≤ i ≤ n+1, which can be interpreted as the number of
comparisons in the empty tree.
The table is filled diagonal by diagonal: it starts with the zero entries
C[i, i-1] and the single-key entries C[i, i] = pi, and moves toward the goal
entry C[1, n] in the upper right corner; a companion root table records the
value of k at which each minimum C[i,j] is attained.
EXAMPLE
Let us illustrate the algorithm by applying it to the four-key set we used at the
beginning of this section:

KEY A B C D

PROBABILITY 0.1 0.2 0.4 0.3

Initial Tables:

Main Table:
      0     1     2     3     4
1     0    0.1
2           0    0.2
3                 0    0.4
4                       0    0.3
5                             0

Root Table:
      0     1     2     3     4
1           1
2                 2
3                       3
4                             4
5

C(1,2): [i=1, j=2]

Possible key values k=1 and k=2.

K=1: c(1,2) = c(1,0) + c(2,2) + 0.1 + 0.2 = 0 + 0.2 + 0.1 + 0.2 = 0.5

K=2: c(1,2) = c(1,1) + c(3,2) + 0.1 + 0.2 = 0.1 + 0 + 0.1 + 0.2 = 0.4

Main Table:
      0     1     2     3     4
1     0    0.1   0.4
2           0    0.2
3                 0    0.4
4                       0    0.3
5                             0

Root Table:
      0     1     2     3     4
1           1     2
2                 2
3                       3
4                             4
5

C(2,3): [i=2, j=3]

Possible key values k=2 and k=3.

K=2: c(2,3) = c(2,1) + c(3,3) + 0.2 + 0.4 = 0 + 0.4 + 0.2 + 0.4 = 1.0

K=3: c(2,3) = c(2,2) + c(4,3) + 0.2 + 0.4 = 0.2 + 0 + 0.2 + 0.4 = 0.8

Main Table:
      0     1     2     3     4
1     0    0.1   0.4
2           0    0.2   0.8
3                 0    0.4
4                       0    0.3
5                             0

Root Table:
      0     1     2     3     4
1           1     2
2                 2     3
3                       3
4                             4
5

C(3,4): [i=3, j=4]

Possible key values k=3 and k=4.


K=3: c(3,4) = c(3,2) + c(4,4) + 0.4 + 0.3 = 0 + 0.3 + 0.4 + 0.3 = 1.0

K=4: c(3,4) = c(3,3) + c(5,4) + 0.4 + 0.3 = 0.4 + 0 + 0.4 + 0.3 = 1.1

Main Table:
      0     1     2     3     4
1     0    0.1   0.4
2           0    0.2   0.8
3                 0    0.4   1.0
4                       0    0.3
5                             0

Root Table:
      0     1     2     3     4
1           1     2
2                 2     3
3                       3     3
4                             4
5

C(1,3): [i=1, j=3]

Possible key values k=1, k=2 and k=3.

K=1: c(1,3) = c(1,0) + c(2,3) + 0.1 + 0.2 + 0.4 = 0 + 0.8 + 0.1 + 0.2 + 0.4 = 1.5

K=2: c(1,3) = c(1,1) + c(3,3) + 0.1 + 0.2 + 0.4 = 0.1 + 0.4 + 0.1 + 0.2 + 0.4 = 1.2

K=3: c(1,3) = c(1,2) + c(4,3) + 0.1 + 0.2 + 0.4 = 0.4 + 0 + 0.1 + 0.2 + 0.4 = 1.1

Main Table:
      0     1     2     3     4
1     0    0.1   0.4   1.1
2           0    0.2   0.8
3                 0    0.4   1.0
4                       0    0.3
5                             0

Root Table:
      0     1     2     3     4
1           1     2     3
2                 2     3
3                       3     3
4                             4
5

C(2,4): [i=2, j=4]

Possible key values k=2, k=3 and k=4.

K=2: c(2,4) = c(2,1) + c(3,4) + 0.2 + 0.4 + 0.3 = 0 + 1.0 + 0.2 + 0.4 + 0.3 = 1.9

K=3: c(2,4) = c(2,2) + c(4,4) + 0.2 + 0.4 + 0.3 = 0.2 + 0.3 + 0.2 + 0.4 + 0.3 = 1.4

K=4: c(2,4) = c(2,3) + c(5,4) + 0.2 + 0.4 + 0.3 = 0.8 + 0 + 0.2 + 0.4 + 0.3 = 1.7

Main Table:
      0     1     2     3     4
1     0    0.1   0.4   1.1
2           0    0.2   0.8   1.4
3                 0    0.4   1.0
4                       0    0.3
5                             0

Root Table:
      0     1     2     3     4
1           1     2     3
2                 2     3     3
3                       3     3
4                             4
5

C(1,4): [i=1, j=4]

Possible key values k=1, k=2, k=3 and k=4.

K=1: c(1,4)=c(1,0)+c(2,4)+0.1+0.2+0.4+0.3= 0 + 1.4 + 0.1+ 0.2 + 0.4 + 0.3 = 2.4

K=2: c(1,4)=c(1,1)+c(3,4)+0.1+0.2+0.4+0.3= 0.1 + 1.0 + 0.1+ 0.2 + 0.4 + 0.3 = 2.1

K=3: c(1,4)=c(1,2)+c(4,4)+0.1+0.2+0.4+0.3= 0.4 + 0.3 + 0.1+ 0.2 + 0.4 + 0.3 = 1.7


K=4: c(1,4)=c(1,3)+c(5,4)+0.1+0.2+0.4+0.3= 1.1 + 0 + 0.1+ 0.2 + 0.4 + 0.3 = 2.1

Main Table:
      0     1     2     3     4
1     0    0.1   0.4   1.1   1.7
2           0    0.2   0.8   1.4
3                 0    0.4   1.0
4                       0    0.3
5                             0

Root Table:
      0     1     2     3     4
1           1     2     3     3
2                 2     3     3
3                       3     3
4                             4
5

Thus, the average number of key comparisons in the optimal tree is equal to
1.7.
Since R(1, 4) = 3, the root of the optimal tree contains the third key, i.e., C.
Its left subtree is made up of keys A and B, and its right subtree contains just
key D.
Since R(1, 2) = 2, the root of the optimal tree containing A and B is B, with A
being its left child (and the root of the one node tree: R(1, 1) = 1).
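A Python sketch of this dynamic program, run on the four-key example above,
could look like this (the helper name optimal_bst is ours):

def optimal_bst(p):
    # p[i] is the search probability of key i+1; tables are 1-indexed by key.
    n = len(p)
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i - 1]                 # single-key trees
        R[i][i] = i
    for d in range(1, n):                  # fill diagonal by diagonal
        for i in range(1, n - d + 1):
            j = i + d
            ps = sum(p[i - 1:j])           # sum of probabilities p_i..p_j
            best, root = min((C[i][k - 1] + C[k + 1][j], k)
                             for k in range(i, j + 1))
            C[i][j] = best + ps
            R[i][j] = root
    return C[1][n], R

cost, R = optimal_bst([0.1, 0.2, 0.4, 0.3])
print(round(cost, 1), R[1][4])   # 1.7 3  (the third key, C, is the root)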

4. Explain in detail about Dijkstra Algorithm.


Dijkstra’s algorithm finds the shortest paths to a graph’s vertices in order of
their distance from a given source.
First, it finds the shortest path from the source to a vertex nearest to it, then to a
second nearest, and so on.
To facilitate the algorithm’s operations, we label each vertex with two labels.
The numeric label d indicates the length of the shortest path from the source to
this vertex found by the algorithm so far; when a vertex is added to the tree, d
indicates the length of the shortest path from the source to that vertex. The other
label indicates the name of the next-to-last vertex on such a path, i.e., the parent of
the vertex in the tree being constructed. With such
labeling, finding the next nearest vertex u∗ becomes a simple task of finding a
fringe vertex with the smallest d value. Ties can be broken arbitrarily.
After we have identified a vertex u∗ to be added to the tree, we need to
perform two operations:

1. Move u* from the fringe to the set of tree vertices.

2. For each remaining fringe vertex u that is connected to u* by an edge
of weight w(u*, u) such that du* + w(u*, u) < du, update the labels of u
by u* and du* + w(u*, u), respectively.

ALGORITHM Dijkstra(G, s)
//Dijkstra's algorithm for single-source shortest paths
//Input: A weighted connected graph G = (V, E) with nonnegative weights and its vertex s
//Output: The length dv of a shortest path from s to v and its penultimate vertex pv for every vertex v in V

Initialize(Q)                       //initialize priority queue to empty
for every vertex v in V do
    dv ← ∞; pv ← null
    Insert(Q, v, dv)                //initialize vertex priority in the priority queue
ds ← 0; Decrease(Q, s, ds)          //update priority of s with ds
VT ← ∅
for i ← 0 to |V| - 1 do
    u* ← DeleteMin(Q)               //delete the minimum priority element
    VT ← VT ∪ {u*}
    for every vertex u in V - VT that is adjacent to u* do
        if du* + w(u*, u) < du
            du ← du* + w(u*, u); pu ← u*
            Decrease(Q, u, du)
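A compact Python sketch of the algorithm follows; it replaces the Decrease
operation with lazy deletion in a binary heap, and the sample adjacency list
is purely illustrative:

import heapq

def dijkstra(graph, s):
    d = {v: float('inf') for v in graph}
    p = {v: None for v in graph}
    d[s] = 0
    pq = [(0, s)]
    done = set()
    while pq:
        du, u = heapq.heappop(pq)
        if u in done:
            continue                     # stale queue entry, skip it
        done.add(u)
        for v, w in graph[u]:            # relax every edge (u, v)
            if du + w < d[v]:
                d[v], p[v] = du + w, u
                heapq.heappush(pq, (d[v], v))
    return d, p

# Illustrative undirected graph given as an adjacency list.
g = {'a': [('b', 3), ('d', 7)], 'b': [('a', 3), ('c', 4), ('d', 2)],
     'c': [('b', 4), ('d', 5), ('e', 6)],
     'd': [('a', 7), ('b', 2), ('c', 5), ('e', 4)],
     'e': [('c', 6), ('d', 4)]}
print(dijkstra(g, 'a')[0])   # shortest distances from a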

5. Compute the Huffman coding to compress the data effectively and
encode the string _AB_D. April / May 2015 (Nov/Dec 15)

Character    A      B      C      D      _

Frequency   0.35   0.1    0.2    0.2    0.15

HUFFMAN CODING
Any binary tree with edges labeled with 0's and 1's yields a prefix-free code of
characters assigned to its leaves. An optimal binary tree minimizing the average
length of a code word can be constructed as follows:
Huffman's algorithm
Initialize n one-node trees with alphabet characters and the tree
weights with their frequencies.
Repeat the following step n-1 times: join the two binary trees with the smallest
weights into one (as left and right subtrees) and make its weight equal to the
sum of the weights of the two trees.
Mark edges leading to left and right subtrees with 0's and 1's, respectively.
Example:

character    A      B      C      D      _

frequency   0.35   0.1    0.2    0.2    0.15

codeword    11     100    00     01     101


Encoded bit string of _AB_D is 1011110010101
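A Python sketch of Huffman's algorithm for these frequencies is given below.
Which child receives 0 is a convention, so other implementations may produce
different (equally optimal) codewords; with the tie-breaking used here, the
output matches the table above.

import heapq
from itertools import count

def huffman(freqs):
    tick = count()                          # tie-breaker for equal weights
    heap = [(f, next(tick), {ch: ''}) for ch, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two smallest-weight trees
        f2, _, right = heapq.heappop(heap)
        merged = {c: '0' + code for c, code in left.items()}
        merged.update({c: '1' + code for c, code in right.items()})
        heapq.heappush(heap, (f1 + f2, next(tick), merged))
    return heap[0][2]

codes = huffman({'A': 0.35, 'B': 0.1, 'C': 0.2, 'D': 0.2, '_': 0.15})
print(codes)
print(''.join(codes[c] for c in '_AB_D'))   # 1011110010101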

PART-C
1. Find all the solution to the travelling salesman problem (cities and distance shown below) by
exhaustive search. Give the optimal solutions.(May/June 2016)

Tour                                     Length
a ---> b ---> c ---> d ---> a    l = 2 + 8 + 1 + 7 = 18
a ---> b ---> d ---> c ---> a    l = 2 + 3 + 1 + 5 = 11   optimal
a ---> c ---> b ---> d ---> a    l = 5 + 8 + 3 + 7 = 23
a ---> c ---> d ---> b ---> a    l = 5 + 1 + 3 + 2 = 11   optimal
a ---> d ---> b ---> c ---> a    l = 7 + 3 + 8 + 5 = 23
a ---> d ---> c ---> b ---> a    l = 7 + 1 + 8 + 2 = 18
UNIT IV - ITERATIVE IMPROVEMENT

The Simplex Method-The Maximum-Flow Problem – Maximum Matching in


Bipartite Graphs- the Stable Marriage Problem.

PART – A

1. Define Bipartite Graph.


Bipartite graph: a graph whose vertices can be partitioned into two disjoint sets V
and U, not necessarily of the same size, so that every edge connects a vertex in V
to a vertex in U. A graph is bipartite if and only if it does not have a cycle of an
odd length.
2. What is perfect matching in Bipartite Graph?
Let Y and X represent the vertices of a complete bipartite graph with
edges connecting possible marriage partners; then a marriage matching is a
perfect matching in such a graph.
3. What is the requirement of flow conservation?
A flow is an assignment of real numbers xij to edges (i,j) of a given
network that satisfy the following:
 flow-conservation requirements: The total amount of
material entering an intermediate vertex must be equal to the
total amount of the material leaving the vertex

 capacity constraints:
0 ≤ xij ≤ uij for every edge (i,j) ∈ E

4. Differentiate feasible and optimal solution.


Feasible solution is any element of the feasible region of an optimization problem.
The feasible region is the set of all possible solutions of an optimization problem.
An optimal solution is one that either minimizes or maximizes the objective
function.

5. Define augmenting path.


An augmenting path for a matching M is a path from a free vertex in V to a free
vertex in U whose edges alternate between edges not in M and edges in M.
 The length of an augmenting path is always odd.
 Adding to M the odd-numbered path edges and deleting from it
the even-numbered path edges increases the matching size by 1
(augmentation).
 A one-edge path between two free vertices is a special case of an
augmenting path.
6. What is min cut?
Define flow cut. April / May 2015 Nov/Dec 2015
A cut induced by partitioning vertices of a network into some subset X containing
the source and its complement X̄, containing the sink, is the set of all the
edges with a tail in X and a head in X̄. The capacity of a cut is defined as the sum
of the capacities of the edges that compose the cut. A minimum cut is a cut of the
smallest capacity in a given network.
7. What is simplex method?
The simplex method is the classic method for solving the general linear programming
problem. It works by generating a sequence of adjacent extreme points of the
problem's feasible region with improving values of the objective function.
8. What is blocking pair?
A pair (m, w) is said to be a blocking pair for matching M if man m and woman
w are not matched in M but prefer each other to their mates in M.
9. Determine the dual linear program for the following LP. Maximize
3a + 2b + c subject to 2a + b + c ≤ 3, a + b + c ≤ 4, 3a + 3b + 6c ≤ 6,
a, b, c ≥ 0. (Nov/Dec 2015)
Sol:
The dual is obtained by transposing the constraint matrix and exchanging the
objective coefficients with the right-hand sides:
Minimize 3y1 + 4y2 + 6y3
subject to 2y1 + y2 + 3y3 ≥ 3
y1 + y2 + 3y3 ≥ 2
y1 + y2 + 6y3 ≥ 1
y1, y2, y3 ≥ 0.
10. What is iterative improvement? (Nov/Dec 2016)
The iterative improvement technique involves finding a solution to an optimization
problem by generating a sequence of feasible solutions with improving values of the
problem's objective function. Each subsequent solution in such a sequence typically
involves a small, localized change in the previous feasible solution. When no such
change improves the value of the objective function, the algorithm returns the last
feasible solution as optimal and stops.
11. Define maximum cardinality matching.
(Nov/Dec 2016) Define maximum
matching.
A maximum matching - more precisely, a maximum cardinality matching - is a
matching with the largest number of edges.

12. Define flow network.


A flow network is formally represented by a connected weighted digraph with n
vertices numbered from 1 to n with the following properties:
• contains exactly one vertex with no entering edges, called the
source (numbered 1)
• contains exactly one vertex with no leaving edges, called the
sink (numbered n)
• has a positive integer weight uij on each directed edge (i,j),
called the edge capacity, indicating the upper bound on
the amount of the material that can be sent from i to j through this
edge

13. When will a bipartite graph become two-colorable?

A bipartite graph is 2-colorable: the vertices can be colored in two colors so that
every edge has its vertices colored differently.
14. Define linear programming.
Many problems of optimal decision making can be reduced to an instance of the
linear programming problem, which is a problem of optimizing a linear function of
several variables subject to constraints in the form of linear equations and linear
inequalities:
maximize (or minimize) c1x1 + ... + cnxn
subject to ai1x1 + ... + ainxn ≤ (or ≥ or =) bi, for i = 1, ..., m
x1 ≥ 0, ..., xn ≥ 0
The function z = c1x1 + ... + cnxn is called the objective function; the
constraints x1 ≥ 0, ..., xn ≥ 0 are called non-negativity constraints.
15. State Extreme point theorem.(May/June 2016)
Any linear programming problem with a nonempty bounded feasible region has an
optimal solution; moreover, an optimal solution can always be found at an extreme
point of the problem’s feasible region. Extreme point theorem states that if S is
convex and compact in a locally convex space, then S is the closed convex hull of its
extreme points: In particular, such a set has extreme points. Convex set has its
extreme points at the boundary. Extreme points should be the end points of the line
connecting any two points of convex set.

16. What is the Euclidean minimum spanning tree problem? (May/June 2016)

The Euclidean minimum spanning tree or EMST is a minimum spanning tree of a set of
n points in the plane, where the weight of the edge between each pair of points is the
Euclidean distance between those two points. In simpler terms, an EMST connects a
set of dots using lines such that the total length of all the lines is minimized and any
dot can be reached from any other by following the lines.

17. State how Binomial Coefficient is computed? (Nov/Dec 2015)


C(n, k) = C(n-1, k-1) + C(n-1, k) for n > k > 0, and

C(n, 0) = 1
C(n, n) = 1
PART – B
1. Discuss in detail about the stable marriage problem. (Nov/Dec 2016) Explain
in detail about the Gale-Shapley Algorithm.

STABLE MARRIAGE PROBLEM

Consider a set Y = {m1,…,mn} of n men and a set X = {w1,…,wn} of n women. Each man
has a ranking list of the women, and each woman has a ranking list of the men (with no ties
in these lists). The same information can also be presented by an n-by-n
ranking matrix. The rows and columns of the matrix represent the men and women of the
two sets, respectively. A cell in row m and column w contains two rankings: the first is the
position (ranking) of w in m's preference list; the second is the position (ranking) of m
in w's preference list.
Marriage Matching:
A marriage matching M is a set of n (m, w) pairs whose members are selected from the
disjoint n-element sets Y and X in a one-one fashion, i.e., each man m from Y is paired
with exactly one woman w from X and vice versa.
Blocking Pair:
A pair (m, w), where m ∈ Y, w ∈ X, is said to be a blocking pair for a marriage matching M
if man m and woman w are not matched in M but they prefer each other to their mates in
M. For example, (Bob, Lea) is a blocking pair for the marriage matching M = {(Bob, Ann),
(Jim, Lea), (Tom, Sue)} because they are not matched in M while Bob prefers Lea to Ann
and Lea prefers Bob to Jim.

Stable and unstable marriage matching:
A marriage matching M is called stable if there is no blocking pair for it; otherwise, M is
called unstable.

The stable marriage problem is to find a stable marriage matching for men's and
women's given preferences.

Stable Marriage Algorithm:


Step 0: Start with all the men and women being free.
Step 1: While there are free men, arbitrarily select one of them and do the following:
Proposal: The selected free man m proposes to w, the next woman on his preference list.
Response: If w is free, she accepts the proposal to be matched with m. If she is not free,
she compares m with her current mate. If she prefers m to him, she accepts m's
proposal, making her former mate free; otherwise, she simply rejects m's proposal,
leaving m free.
Step 2: Return the set of n matched pairs.
Analysis of the Gale-Shapley Algorithm
* The algorithm terminates after no more than n² iterations with a stable
marriage output.
* The stable matching produced by the algorithm is always man-optimal: each man
gets the highest-ranked woman possible on his list under any stable marriage. One
can obtain the woman-optimal matching by making the women propose to the men.
* A man-(woman-)optimal matching is unique for a given set of
participant preferences.
* The stable marriage problem has practical applications such as
matching medical-school graduates with hospitals for residency training.
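A minimal Python sketch of the (man-proposing) algorithm follows; the
preference lists reuse the names from the blocking-pair example above but are
otherwise invented for illustration:

def gale_shapley(men_prefs, women_prefs):
    free = list(men_prefs)                      # all men start free
    next_idx = {m: 0 for m in men_prefs}        # next woman to propose to
    rank = {w: {m: i for i, m in enumerate(p)}  # each woman's ranking of men
            for w, p in women_prefs.items()}
    match = {}                                  # woman -> current mate
    while free:
        m = free.pop()
        w = men_prefs[m][next_idx[m]]           # next woman on m's list
        next_idx[m] += 1
        if w not in match:
            match[w] = m                        # w is free: she accepts
        elif rank[w][m] < rank[w][match[w]]:    # w prefers m to her mate
            free.append(match[w])               # former mate becomes free
            match[w] = m
        else:
            free.append(m)                      # w rejects m
    return {m: w for w, m in match.items()}

men = {'Bob': ['Lea', 'Ann', 'Sue'], 'Jim': ['Lea', 'Sue', 'Ann'],
       'Tom': ['Sue', 'Lea', 'Ann']}
women = {'Ann': ['Jim', 'Tom', 'Bob'], 'Lea': ['Tom', 'Bob', 'Jim'],
         'Sue': ['Jim', 'Tom', 'Bob']}
print(gale_shapley(men, women))   # a stable matching with no blocking pair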

2. How do you compute the maximum flow for the following graph using the
Ford-Fulkerson method? Explain. (May/June 2016)

MAXIMUM – FLOW PROBLEM (Nov/Dec 2015)


It is the problem of maximizing the flow of a material through a transportation
network (e.g., a pipeline system, communications or transportation networks).
The transportation network in question can be represented by a connected weighted
digraph with n vertices numbered from 1 to n and a set of edges E, with the
following properties:
 it contains exactly one vertex with no entering edges, called the source
(numbered 1);
 it contains exactly one vertex with no leaving edges, called the sink
(numbered n);
 it has a positive integer weight uij on each directed edge (i,j), called the
edge capacity, indicating the upper bound on the amount of the material
that can be sent from i to j through this edge.
Flow: A flow is an assignment of real numbers xij to the edges (i,j) of a given
network that satisfy the following:
Flow-conservation requirements: The total amount of material entering an intermediate
vertex must be equal to the total amount of the material leaving the vertex.

In particular, the total amount of the material leaving the source must end up at the sink.

Capacity constraints:

The total outflow from the source or the total inflow into the sink is called the value
of the flow (v). Thus, a (feasible) flow is an assignment of real numbers xij to edges
(i, j) of a given network that satisfies the flow-conservation constraints and the
capacity constraints:
0 ≤ xij ≤ uij for every edge (i,j) ∈ E

The maximum flow problem can be stated as: given a network N, find a flow f of
maximum value.

Ford-Fulkerson Method / Augmenting Path Method:
 Start with the zero flow (xij = 0 for every edge).
 On each iteration, try to find a flow-augmenting path from source to
sink, i.e., a path along which some additional flow can be sent.
 If a flow-augmenting path is found, adjust the flow along the edges of this
path to get a flow of increased value and try again.
 If no flow-augmenting path is found, the current flow is maximum.

Augmenting path: 1 → 4 → 3 ← 2 → 5 → 6, max flow value = 3

Finding a flow-augmenting path

Augmenting path: 1 → 2 → 3 → 6

To find a flow-augmenting path for a flow x, consider paths from source to sink
in the underlying undirected graph in which any two consecutive vertices i, j
are either:
• connected by a directed edge from i to j with some positive unused
capacity rij = uij - xij, known as a forward edge ( → ),
OR
• connected by a directed edge from j to i with positive flow xji,
known as a backward edge ( ← ).

If a flow-augmenting path is found, the current flow can be increased by r units by
increasing xij by r on each forward edge and decreasing xji by r on each backward
edge, where
r = min {rij on all forward edges, xji on all backward edges}.
Assuming the edge capacities are integers, r is a positive integer. On each
iteration, the flow value increases by at least 1.
The maximum value is bounded by the sum of the capacities of the edges leaving
the source; hence the augmenting-path method has to stop after a finite number
of iterations.
The final flow is always maximum; its value doesn't depend on the sequence of
augmenting paths used.

Shortest-Augmenting-Path Algorithm
Generate the augmenting path with the least number of edges by BFS as follows.
Starting at the source, perform BFS traversal by marking new (unlabeled) vertices
with two labels:
• first label – indicates the amount of additional flow that can be brought
from the source to the vertex being labeled
• second label – indicates the vertex from which the vertex being labeled was
reached, with "+" or "–" added to the second label to indicate whether the
vertex was reached via a forward or backward edge

Labeling of vertices:
 The source is always labeled with ∞, –
 All other vertices are labeled as follows:
 If unlabeled vertex j is connected to the front vertex i of the traversal
queue by a directed edge from i to j with positive unused capacity
rij = uij - xij (forward edge), vertex j is labeled with lj, i+,
where lj = min{li, rij}
 If unlabeled vertex j is connected to the front vertex i of the traversal
queue by a directed edge from j to i with positive flow xji (backward edge),
vertex j is labeled with lj, i–, where lj = min{li, xji}
 If the sink ends up being labeled, the current flow can be augmented by the
amount indicated by the sink's first label
 The augmentation of the current flow is performed along the augmenting path
traced by following the vertex second labels from sink to source; the current
flow quantities are increased on the forward edges and decreased on the
backward edges of this path
 If the sink remains unlabeled after the traversal queue becomes empty,
the algorithm returns the current flow as maximum and stops.
Definition of a Cut: Let X be a set of vertices in a network that includes its source but
does not include its sink, and let X̄, the complement of X, be the rest of the vertices,
including the sink. The cut induced by this partition of the vertices is the set of all the
edges with a tail in X and a head in X̄.
The capacity of a cut is defined as the sum of the capacities of the edges that
compose the cut.
A cut and its capacity are denoted by C(X,X̄) and c(X,X̄).
Note that if all the edges of a cut were deleted from the network, there would be
no directed path from source to sink.
Efficiency:
• The number of augmenting paths needed by the shortest-augmenting-path
algorithm never exceeds nm/2, where n and m are the number of vertices and
edges, respectively.
• Since the time required to find a shortest augmenting path by breadth-first search is
in O(n+m) = O(m) for networks represented by their adjacency lists, the time efficiency
of the shortest-augmenting-path algorithm is in O(nm²) for this representation.
• More efficient algorithms have been found that can run in close to O(nm)
time, but these algorithms don't fall into the iterative-improvement
paradigm.
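A Python sketch of the shortest-augmenting-path (Edmonds-Karp) method on a
capacity matrix is given below; the sample network is our assumption,
chosen to be consistent with the augmenting paths and flow value 3 above.

from collections import deque

def max_flow(cap, s, t):
    n = len(cap)
    # skew-symmetric flow: f[j][i] = -f[i][j]; a negative f[i][j]
    # represents cancellable flow on the backward edge (j, i)
    f = [[0] * n for _ in range(n)]
    value = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:       # BFS for a shortest augmenting path
            i = q.popleft()
            for j in range(n):
                if parent[j] == -1 and cap[i][j] - f[i][j] > 0:
                    parent[j] = i
                    q.append(j)
        if parent[t] == -1:
            return value, f                # no augmenting path: flow is maximum
        r, j = float('inf'), t             # bottleneck along the path
        while j != s:
            i = parent[j]
            r = min(r, cap[i][j] - f[i][j])
            j = i
        j = t
        while j != s:                      # augment by r along the path
            i = parent[j]
            f[i][j] += r
            f[j][i] -= r
            j = i
        value += r

# Assumed 6-vertex network (vertex 0 = source, 5 = sink).
cap = [[0, 2, 0, 3, 0, 0], [0, 0, 5, 0, 3, 0], [0, 0, 0, 0, 0, 2],
       [0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 4], [0, 0, 0, 0, 0, 0]]
print(max_flow(cap, 0, 5)[0])   # expected 3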

3. Write down the optimality condition and algorithmic implication for
finding M-augmenting paths in Bipartite Graphs.

BIPARTITE GRAPH
A graph whose vertices can be partitioned into two disjoint sets V and U, not necessarily of
the same size, so that every edge connects a vertex in V to a vertex in U. A graph is
bipartite if and only if it does not have a cycle of an odd length.

A matching in a graph is a subset of its edges with the property that no two edges share a
vertex. A maximum (or maximum cardinality) matching is a matching with the largest
number of edges. For a given matching M, a vertex is called free (or unmatched) if it is not
an endpoint of any edge in M; otherwise, a vertex is said to be matched.
• If every vertex is matched, then M is a maximum matching.
• If there are unmatched or free vertices, then M may be able to be improved.
• We can immediately increase a matching by adding an edge
connecting two free vertices.

Augmentation path and Augmentation:


An augmenting path for a matching M is a path from a free vertex in V to a free
vertex in U whose edges alternate between edges not in M and edges in M.
• The length of an augmenting path is always odd.
• Adding to M the odd-numbered path edges and deleting from it the
even-numbered path edges increases the matching size by 1 (augmentation).
• A one-edge path between two free vertices is a special case of an
augmenting path. When no augmenting path exists, the matching is maximum.
Augmentation Path Method:
Start with some initial matching, e.g., the empty set.
Find an augmenting path and augment the current matching along that
path, e.g., using a breadth-first search like method.
When no augmenting path can be found, terminate and return the last
matching, which is maximum.

BFS-based Augmenting Path Algorithm:

Search for an augmenting path for a matching M by a BFS-like traversal of the graph that
starts simultaneously at all the free vertices in one of the sets V and U, say V.

 Initialize queue Q with all free vertices in one of the sets (say V).
 While Q is not empty, delete its front vertex w and label every unlabeled
vertex u adjacent to w as follows:

o Case 1 (w is in V): If u is free, augment the matching along the path
ending at u by moving backwards until a free vertex in V is reached.
After that, erase all labels and reinitialize Q with all the vertices in V
that are still free. If u is matched (not with w), label u with w and
enqueue u.
o Case 2 (w is in U): Label its matching mate v with w and enqueue v.

 After Q becomes empty, return the last matching, which is maximum.
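As a hedged sketch of the augmenting-path idea, the following Python code uses
the DFS-based (Kuhn) variant rather than the BFS traversal described above;
the small bipartite graph is invented for illustration:

def max_matching(adj, U):
    # adj maps each vertex in V to its neighbors in U
    mate = {u: None for u in U}            # current mate of each u in U

    def try_augment(v, seen):
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                # u is free, or its mate can be re-matched elsewhere:
                # this traces an augmenting path of alternating edges
                if mate[u] is None or try_augment(mate[u], seen):
                    mate[u] = v
                    return True
        return False

    size = 0
    for v in adj:                          # one search per free vertex in V
        if try_augment(v, set()):
            size += 1
    return size, mate

adj = {1: [6, 7], 2: [6], 3: [7, 8], 4: [8]}   # illustrative bipartite graph
print(max_matching(adj, [6, 7, 8]))             # matching of size 3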


4. Solve the given linear equations by simplex method.

Maximize P = 3x + 4y subject
to x+3y≤30 ; 2x+y≤20

Step 1: Insert slack variables S1 and S2:
x + 3y + S1 = 30
2x + y + S2 = 20

Step 2: Rewrite the objective function P = 3x + 4y as an equation:
x + 3y + S1 = 30
2x + y + S2 = 20
-3x - 4y + P = 0

Step 3: Form the initial simplex tableau

  x     y    S1    S2    P
  1     3     1     0    0    30
  2     1     0     1    0    20
 -3    -4     0     0    1     0

Step 4: Find the pivot element. The most negative entry in the objective row
is -4, so y enters and its column is the pivot column; the θ-ratios are
30/3 = 10 and 20/1 = 20, so the first row is the pivot row. Pivoting gives:

  x     y    S1    S2    P
 1/3    1    1/3    0    0    10
 5/3    0   -1/3    1    0    10
-5/3    0    4/3    0    1    40

Repeat the above step until there is no negative value in the last row. Now x
enters; the θ-ratios are 10/(1/3) = 30 and 10/(5/3) = 6, so the second row is
the pivot row. Pivoting gives:

  x     y    S1    S2    P
  0     1    2/5  -1/5   0     8
  1     0   -1/5   3/5   0     6
  0     0     1     1    1    50

P reaches the maximum value of 50 at x = 6 and y = 8.
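As a quick, hedged cross-check (assuming SciPy is available; linprog minimizes,
so the objective is negated):

from scipy.optimize import linprog

res = linprog(c=[-3, -4],                 # maximize 3x + 4y
              A_ub=[[1, 3], [2, 1]],
              b_ub=[30, 20],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                    # expected roughly [6. 8.] and 50.0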


5. Write a note on simplex method. (May/June 2016) (Nov/Dec 2016) (Nov/Dec 15)

Outline of the Simplex Method: To apply the simplex method to a linear programming
problem, the problem has to be represented in a special form called the standard form.
The standard form has the following requirements:
1. It must be a maximization problem.
2. All the constraints (except the non-negativity constraints) must be in the form of
linear equations with nonnegative right-hand sides.
3. All the variables must be required to be nonnegative.
Thus, the general linear programming problem in standard form with m constraints and
n unknowns (n ≥ m) is
maximize c1x1 + . . . + cnxn
subject to ai1x1 + . . . + ainxn = bi, where bi ≥ 0, for i = 1, 2, . . . , m
x1 ≥ 0, . . . , xn ≥ 0

It can also be written in compact matrix notation:

maximize cx
subject to Ax = b
x ≥ 0
where c = (c1, . . . , cn), x = (x1, . . . , xn)^T, A is the m × n matrix of
the constraint coefficients, and b = (b1, . . . , bm)^T.

If a constraint is given as an inequality, it can be replaced by an equivalent equation
by adding a slack variable representing the difference between the two sides of the
original inequality.
For example, the two inequalities of the problem can be transformed, respectively,
into the following equations:
x + y + u = 4 where u ≥ 0, and x + 3y + v = 6 where v ≥ 0.
Thus, problem (10.2) in standard form is the following linear programming problem in four
variables:
maximize 3x + 5y + 0u + 0v
subject to x + y + u = 4
x + 3y + v = 6
x, y, u, v ≥ 0.

If all the coordinates of a basic solution are nonnegative, the basic solution is
called a basic feasible solution.
For example, (0, 0, 4, 6) is a basic feasible solution of the problem above and
an extreme point of its feasible region.

In general, a simplex tableau for a linear programming problem in standard form


with n unknowns and m linear equality constraints (n ≥ m) has m + 1 rows and
n + 1 columns. Each of the first m rows of the table contains the coefficients of
a corresponding constraint equation, with the last column’s entry containing the
equation’s right-hand side. The columns, except the last one, are labeled by the
names of the variables.
The last row of a simplex tableau is called the objective row. It is initialized
by the coefficients of the objective function with their signs reversed.
If there are several negative entries in the objective row, a commonly used
rule is to select the most negative one, i.e., the negative number with the largest absolute value.
A new basic variable is called the entering variable, while its column
is referred to as the pivot column; we mark the pivot column by ↑ .
Now we will explain how to choose a departing variable, i.e., a basic variable
to become nonbasic in the next tableau. We translate this observation into the
following rule for choosing a departing variable in a simplex tableau: for each
positive entry in the pivot column, compute the θ-ratio by dividing the row's
last entry by the entry in the pivot column.
We mark the row of the departing variable, called the pivot row, by ←.
Finally, the following steps need to be taken to transform a current tableau
into the next one (this transformation is called pivoting). First, divide all
the entries of the pivot row by the pivot, its entry in the pivot column.
Then, replace each of the other rows, including the objective row, by the difference

row - c × (new pivot row),

where c is the row's entry in the pivot column.

The next iteration then yields the next tableau.

Summary of the simplex method:

Step 0 Initialization Present a given linear programming problem in standard form and set up an
initial tableau with nonnegative entries in the rightmost column and m other columns composing the
m × m identity matrix. (Entries in the objective row are to be disregarded in verifying these
requirements.) These m columns define the basic variables of the initial basic feasible solution, used
as the labels of the tableau’s rows.
Step 1 Optimality test If all the entries in the objective row (except, possibly, the one
in the rightmost column, which represents the value of the objective function) are
nonnegative — stop: the tableau represents an optimal solution whose basic variables'
values are in the rightmost column and the remaining, nonbasic variables' values are zeros.
Step 2 Finding the entering variable Select a negative entry from among the first n elements of the
objective row. (A commonly used rule is to select the negative entry with the largest absolute value,
with ties broken arbitrarily.) Mark its column to indicate the entering variable and the pivot column.
Step 3 Finding the departing variable For each positive entry in the pivot column, calculate the θ-
ratio by dividing that row’s entry in the rightmost column by its entry in the pivot column. (If all
the entries in the pivot column are negative or zero, the problem is unbounded—stop.) Find the row
with the smallest θ-ratio (ties may be broken arbitrarily),
and mark this row to indicate the departing variable and the pivot row.
Step 4 Forming the next tableau Divide all the entries in the pivot row by its entry in the pivot
column. Subtract from each of the other rows, including the objective row, the new pivot row
multiplied by the entry in the pivot column of the row in question. (This will make all the entries in
the pivot column 0’s except for 1 in the pivot row.) Replace the label of the pivot row by the
variable’s name of the pivot column
and go back to Step 1.
PART-C

1. State and Prove the Maximum Flow Min Cut Theorem. (May/June 2016) (Nov/Dec 16)

Maximum Flow Problem
Problem of maximizing the flow of a material through a transportation
network (e.g., pipeline system, communications or transportation networks).
Formally represented by a connected weighted digraph with n vertices numbered
from 1 to n with the following properties:
• Contains exactly one vertex with no entering edges, called the source
(numbered 1)
• Contains exactly one vertex with no leaving edges, called the sink
(numbered n)
• Has a positive integer weight uij on each directed edge (i,j), called the edge
capacity, indicating the upper bound on the amount of the material that can be sent
from i to j through this edge.

A digraph satisfying these properties is called a flow network or simply a network

Flow value and Maximum Flow Problem

Since no material can be lost or added to by going through intermediate vertices of the
network, the total amount of the material leaving the source must end up at the sink:

∑ x1j over all edges (1,j) ∈ E  =  ∑ xjn over all edges (j,n) ∈ E

The value of the flow is defined as the total outflow from the source (= the total inflow
into the sink). The maximum flow problem is to find a flow of the largest value (maximum
flow) for a given network.

Max-Flow Min-Cut Theorem

1. The value of a maximum flow in a network is equal to the capacity of its
minimum cut.
2. The shortest-augmenting-path algorithm yields both a maximum flow and
a minimum cut:

• Maximum flow is the final flow produced by the algorithm

• Minimum cut is formed by all the edges from the labeled


vertices to unlabeled vertices on the last iteration of the algorithm.

• All the edges from the labeled to unlabeled vertices are full, i.e., their flow
amounts are equal to the edge capacities, while all the edges from the
unlabeled to labeled vertices, if any, have zero flow amounts on them.
UNIT V - COPING WITH THE LIMITATIONS
OF ALGORITHM POWER

Lower - Bound Arguments - P, NP, NP- Complete and NP Hard Problems. Backtracking – N-
Queen problem - Hamiltonian Circuit Problem – Subset Sum Problem. Branch and Bound –
LIFO Search and FIFO search - Assignment problem – Knapsack Problem – Traveling Salesman
Problem approximation Algorithms for NP-Hard Problems – Traveling Salesman problem –
Knapsack problem.
PART – A

1. What are the types of Lower Bound Arguments?


1. Trivial Lower Bounds
2. Information-Theoretic Arguments
3. Adversary Arguments

2. What is live node and dead node? April / May 2012


Live Node: A node which has been generated and all of whose children have not yet been
generated. Dead Node: A node that is either not to be expanded further, or for which all of
its children have been generated.

3. Give the idea behind backtracking. Nov / Dec 2011


State the principle of backtracking. May / June 2013
The principal idea is to construct solutions one component at a time and evaluate such
partially constructed candidates as follows. If a partially constructed solution can be
developed further without violating the problem's constraints, it is done by taking the first
remaining legitimate option for the next component. If there is no legitimate option for the
next component, no alternatives for any remaining component need to be considered. In this
case, the algorithm backtracks to replace the last component of the partially constructed
solution with its next option. This process is continued until the complete solution is
obtained.
4. Define NP Hard and NP Completeness. Nov / Dec 2010, 2011
Class NP is the class of decision problems that can be solved by nondeterministic
polynomial algorithms. This class of problems is called nondeterministic
polynomial. NP-Hard: if a problem is NP-hard, this means that any problem in NP
can be reduced to the given problem. NP-Complete: a decision problem is
NP-complete when it is both in NP and NP-hard.
5. What is promising and non-promising node?
Promising Node: A node in a state-space tree is said to be promising if it corresponds to a
partially constructed solution that may still lead to a complete solution.
Non-promising Node: A node in a state-space tree is said to be non-promising otherwise;
the algorithm then backtracks to the node's parent to consider the next possible option
for its last component.

6. Depict the proof which says that a problem 'A' is no harder or no


easier than problem 'B' (Nov/Dec 2015)
This is the motivation for the ≤p notation: A ≤p B means A is no harder than B.
Given an algorithm for B, we can solve A; but maybe we can also solve A without using B.

7. Mention the property of NP Complete problem. Nov / Dec 2012, 2013


A decision problem D is said to be NP complete if
 it belongs to class NP;
 every problem in NP is polynomially reducible to D

8. What is Hamiltonian cycle? Nov / Dec 2013


A Hamiltonian circuit or Hamiltonian cycle is defined as a cycle that passes through all the
vertices of the graph exactly once. It is named after the Irish mathematician Sir William
Rowan Hamilton (1805-1865), who became interested in such cycles as an application of
his algebraic discoveries. A Hamiltonian circuit can also be defined as a sequence of n + 1
adjacent vertices vi0, vi1, ..., vin, vi0, where the first vertex of the sequence is the same as
the last one while all the other n - 1 vertices are distinct.

9. State sum of subset problem. May / June 2013


Subset-sum problem: The subset-sum problem is to find a subset of a given set
S = {s1, ..., sn} of n positive integers whose sum is equal to a given positive integer
d. For example, for S = {1, 2, 5, 6, 8} and d = 9, there are two solutions: {1, 2, 6} and
{1, 8}.

10. Define decision tree. (Nov/Dec 2016)


A decision tree is a tree data structure in which each vertex represents a question and each
descending edge from that vertex (child nodes) represents a possible answer to that
question. The performance of the algorithm whose basic operation is comparison could be
studied using decision tree.

11. Compare backtracking and branch and bound technique. April /


May 2015
Backtracking Technique:
Backtracking constructs its state-space tree in the depth-first search fashion in the
majority of its applications. If the sequence of choices represented by a current node
of the state-space tree can be developed further without violating the problem's
constraints, it is done by considering the first remaining legitimate option for the
next component. Otherwise, the method backtracks by undoing the last component of the
partially built solution.

Branch and Bound Technique:
Branch-and-bound is an algorithm design technique that enhances the idea of generating
a state-space tree with the idea of estimating the best value obtainable from a current
node. If the estimate is not superior to the best solution seen up to that point in the
processing, the node is eliminated from further consideration.
12. Define state space tree. April / May 2015, May/June 2016
The processing of backtracking is implemented by constructing a tree of
choices being made. This is called the state-space tree. Its root represents an initial
state before the search for a solution begins; the nodes in the first level represent the
choices made for the first component of the solution; the nodes in the second level
represent the choices for the second component, and so on.
13. How NP – hard problem is different from NP complete? April /
May 2015
NP-hard: if a problem is NP-hard, this means that any problem in NP can be
reduced to the given problem.
NP-complete: a decision problem is NP-complete when it is both in NP and NP-hard.
14. State the reasons for terminating a search path at the current node
in a branch and bound algorithm. (Nov/Dec 2016)
The value of the node's bound is not better than the value of the best solution seen so far.
The node represents no feasible solutions because the constraints of the problem are
already violated.
PART – B

1. Explain in detail how the n-queens problem is solved using backtracking.
(Nov/Dec 2016)
N – QUEEN’S PROBLEM
The problem is to place n queens on an n-by-n chessboard so that no two queens
attack each other by being in the same row or in the same column or on the same
diagonal. For n = 1, the problem has a trivial solution, and it is easy to see that there is
no solution for n = 2 and n = 3.

Algorithm Place(k, i)
// Returns true if a queen can be placed in row k, column i
{
    for j := 1 to k-1 do
        if (x[j] = i) or (abs(x[j] - i) = abs(j - k)) then
            return false;
    return true;
}

Algorithm NQueens(k, n)
// Places queens in rows k..n one row at a time; x[1:n] records the columns
{
    for i := 1 to n do
    {
        if Place(k, i) then
        {
            x[k] := i;
            if (k = n) then write (x[1:n]);
            else NQueens(k+1, n);
        }
    }
}
Example
N=4
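A runnable Python version of this backtracking scheme follows; for n = 4 it
prints the two well-known solutions (rows and columns are 0-indexed here):

def place(x, k, i):
    # legal iff no earlier queen shares column i or a diagonal with row k
    return all(x[j] != i and abs(x[j] - i) != abs(j - k) for j in range(k))

def n_queens(n, k=0, x=None):
    x = x or [0] * n
    if k == n:
        print(x)                       # one complete placement
        return
    for i in range(n):
        if place(x, k, i):
            x[k] = i
            n_queens(n, k + 1, x)

n_queens(4)   # prints [1, 3, 0, 2] and [2, 0, 3, 1]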

2. Discuss in detail about the Hamiltonian circuit problem and the sum of
subsets problem. (May/June 2016) (Nov/Dec 15)

HAMILTONIAN CIRCUIT PROBLEM AND SUM OF SUBSET PROBLEM


A Hamiltonian circuit or Hamiltonian cycle is defined as a cycle that passes
through all the vertices of the graph exactly once. It is named after the Irish
mathematician Sir William Rowan Hamilton (1805-1865), who became interested
in such cycles as an application of his algebraic discoveries. A Hamiltonian circuit
can also be defined as a sequence of n + 1 adjacent vertices vi0, vi1, ..., vin, vi0,
where the first vertex of the sequence is the same as the last one while all the other
n - 1 vertices are distinct.
Algorithm: (Finding all Hamiltonian cycles)

void NextValue(int k)
// Generates the next legal vertex for position k of the cycle
{
    do {
        x[k] = (x[k] + 1) % (n + 1);            // try the next vertex
        if (!x[k]) return;                       // no vertex is left
        if (G[x[k-1]][x[k]]) {                   // is there an edge to x[k]?
            int j;
            for (j = 1; j <= k-1; j++)           // check for distinctness
                if (x[j] == x[k]) break;
            if (j == k)                          // x[k] is distinct so far
                if ((k < n) || ((k == n) && G[x[n]][x[1]]))
                    return;                      // a legal vertex is found
        }
    } while (1);
}

void Hamiltonian(int k)
// Generates all Hamiltonian cycles; x[1] is the fixed starting vertex
{
    do {
        NextValue(k);                            // assign a legal vertex to x[k]
        if (!x[k]) return;
        if (k == n) {
            for (int i = 1; i <= n; i++)
                print x[i];                      // output one cycle
        }
        else Hamiltonian(k + 1);
    } while (1);
}
Example:

3. State and explain the subset sum problem:


Problem Statement: To find a subset of a given set S = {s1, ..., sn} of n
positive integers whose sum is equal to a given positive integer d. For example, for
S = {1, 2, 5, 6, 8} and d = 9, there are two solutions: {1, 2, 6} and {1, 8}.
Of course, some instances of this problem may have no solutions. It is convenient to
sort the set's elements in increasing order.
Algorithm SumOfSub(s, k, r)
// s = sum of the elements already chosen (w[1..k-1] with x[j] = 1),
// r = sum of the remaining elements w[k..n]; assumes w[1] ≤ w[2] ≤ ... ≤ w[n]
{
    // generate the left child; note s + w[k] ≤ m since B(k-1) is true
    x[k] = 1;
    if (s + w[k] = m) then write (x[1:k]);
        // there is no recursive call here as w[j] > 0 for 1 ≤ j ≤ n
    else if (s + w[k] + w[k+1] ≤ m) then
        SumOfSub(s + w[k], k+1, r - w[k]);
    // generate the right child and evaluate B(k)
    if ((s + r - w[k] ≥ m) and (s + w[k+1] ≤ m)) then
    {
        x[k] = 0;
        SumOfSub(s, k+1, r - w[k]);
    }
}
Construction of state space tree for subset sum:
 The state-space tree can be constructed as a binary tree.
 The root of the tree represents the starting point, with no decisions about
the given elements made as yet.
 Its left and right children represent, respectively, inclusion and exclusion of s1
in the set being sought.
 Similarly, going to the left from a node of the first level corresponds to
inclusion of s2, while going to the right corresponds to its exclusion, and so
on. Thus, a path from the root to a node on the ith level of the tree indicates
which of the first i numbers have been included in the subsets represented by
that node.
 Record the value of s', the sum of these numbers, in the node. If s'
is equal to d, then there is a solution for this problem.
 The result can be reported and the search stopped or, if all the solutions need
to be found, continued by backtracking to the node's parent. If s' is not equal
to d, terminate the node as non-promising if either of the following two
inequalities holds:
s' + s(i+1) > d (the sum s' is too large)
s' + ∑ sj for j = i+1, ..., n < d (the sum s' is too small)
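A Python sketch of this backtracking scheme, run on the instance
S = {1, 2, 5, 6, 8}, d = 9 from above, could look as follows:

def subset_sum(S, d):
    S = sorted(S)                               # elements in increasing order
    n, solutions = len(S), []

    def backtrack(k, chosen, s, r):
        if s == d:
            solutions.append(list(chosen))      # a solution; backtrack
            return
        if k == n or s + S[k] > d or s + r < d:
            return                              # non-promising node
        chosen.append(S[k])                     # include S[k]
        backtrack(k + 1, chosen, s + S[k], r - S[k])
        chosen.pop()                            # exclude S[k]
        backtrack(k + 1, chosen, s, r - S[k])

    backtrack(0, [], 0, sum(S))
    return solutions

print(subset_sum([1, 2, 5, 6, 8], 9))   # [[1, 2, 6], [1, 8]]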

4. Explain how the travelling salesman problem is solved using the branch and bound technique.

TRAVELING SALESMAN PROBLEM


Given n cities with known distances between each pair, find the shortest tour that passes
through all the cities exactly once before returning to the starting city.
The branch-and-bound technique can be applied to instances of the traveling salesman
problem if it comes up with a reasonable lower bound on tour lengths.
A lower bound on the length l of any tour can be computed as follows.
Calculation of lower bound:

For each city i, 1 ≤ i ≤ n,

 find the sum si of the distances from city i to the two nearest cities;
 compute the sum s of these n numbers;
 divide the result by 2; and,
 if all the distances are integers, round up the result to the
nearest integer:
lb = ⌈s/2⌉
Moreover, for any subset of tours that must include particular edges of a given
graph, the lower bound can be modified accordingly.

For example, for the instance above the lower bound is:
lb = ⌈[(1+3) + (3+6) + (1+2) + (3+4) + (2+3)]/2⌉ = 14.
The bounding function can be used to find the shortest Hamiltonian circuit for the
given graph.

Construction of state space tree:

Root:
First, without loss of generality, consider only tours that start at a.

First level
Second, because the graph is undirected, tours can be generated in which b is visited
before c. In addition, after visiting n - 1 = 4 cities, a tour has no choice but to visit the
remaining unvisited city and return to the starting one.
Lower bound if edge (a,b) is chosen: lb = ceil([(3+1)+(3+6)+(1+2)+(4+3)+(2+3)]/2) = 14
Edge (a,c) is not included, since b must be visited before c. Lower bound if edge (a,d) is
chosen: lb = ceil([(5+1)+(3+6)+(1+2)+(5+3)+(2+3)]/2) = 16. Lower bound if edge (a,e) is
chosen: lb = ceil([(8+1)+(3+6)+(1+2)+(4+3)+(2+8)]/2) = 19. Since the lower bound of
edge (a,b) is the smallest among all the edges, it is included in the solution. The state
space tree is expanded from this node.
Second level
The choice of edges should be made between three vertices: c, d and e.
Lower bound if edge (b,c) is chosen. The path taken will be (a -> b -> c):
lb = ceil([(3+1)+(3+6)+(1+6)+(4+3)+(2+3)]/2) = 16.
Lower bound if edge (b,d) is chosen. The path taken will be (a -> b -> d):
lb = ceil([(3+1)+(3+7)+(7+3)+(1+2)+(2+3)]/2) = 16.
Lower bound if edge (b,e) is chosen. The path taken will be (a -> b -> e):
lb = ceil([(3+1)+(3+9)+(2+9)+(1+2)+(4+3)]/2) = 19. (Since this lb is larger than the
other values, further expansion from this node is stopped.)
The paths a -> b -> c and a -> b -> d are more promising. Hence the state space tree is
expanded from those nodes.

Next level

There are four possible routes:

a -> b -> c -> d -> e -> a
a -> b -> c -> e -> d -> a
a -> b -> d -> c -> e -> a
a -> b -> d -> e -> c -> a

Lower bound for the route a -> b -> c -> d -> e -> a: (a,b,c,d)(e,a)
lb = ceil([(3+8)+(3+6)+(6+4)+(4+3)+(3+8)]/2) = 24

Lower bound for the route a -> b -> c -> e -> d -> a: (a,b,c,e)(d,a)
lb = ceil([(3+5)+(3+6)+(6+2)+(2+3)+(3+5)]/2) = 19

Lower bound for the route a -> b -> d -> c -> e -> a: (a,b,d,c)(e,a)
lb = ceil([(3+8)+(3+7)+(7+4)+(4+2)+(2+8)]/2) = 24

Lower bound for the route a -> b -> d -> e -> c -> a: (a,b,d,e)(c,a)
lb = ceil([(3+1)+(3+7)+(7+3)+(3+2)+(2+1)]/2) = 16

Therefore, from the above lower bounds, the optimal tour is
a -> b -> d -> e -> c -> a with length 16.
The next better tour is a -> b -> c -> e -> d -> a with length 19.
The inferior tours a -> b -> c -> d -> e -> a and a -> b -> d -> c -> e -> a
both have length 24.

State Space Tree for the Traveling Salesman Problem
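As a sanity check, the following brute-force Python sketch enumerates all tours
of this instance; the edge weights are inferred from the lower-bound
calculations above:

from itertools import permutations

w = {('a','b'): 3, ('a','c'): 1, ('a','d'): 5, ('a','e'): 8, ('b','c'): 6,
     ('b','d'): 7, ('b','e'): 9, ('c','d'): 4, ('c','e'): 2, ('d','e'): 3}

def dist(u, v):
    return w.get((u, v)) or w[(v, u)]        # edges are undirected

def tour_length(p):
    # close the tour: a -> p[0] -> ... -> p[-1] -> a
    return sum(dist(u, v) for u, v in zip(('a',) + p, p + ('a',)))

best = min(permutations('bcde'), key=tour_length)
print(best, tour_length(best))   # ('b', 'd', 'e', 'c') 16, i.e. a-b-d-e-c-a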

5. Explain in detail about solving the knapsack problem using the branch and bound
technique. (Nov/Dec 2016)

KNAPSACK PROBLEM
Given n items of known weights wi and values vi, i = 1, 2, ..., n, and a knapsack of
capacity W, find the most valuable subset of the items that fit in the knapsack. It is
convenient to order the items of a given instance in descending order by their value-to-
weight ratios. Then the first item gives the best payoff per weight unit and the last one
gives the worst payoff per weight unit, with ties resolved arbitrarily:
v1/w1 ≥ v2/w2 ≥ ... ≥ vn/wn

The state-space tree for this problem is constructed as a binary tree as follows:
 Each node on the ith level of this tree, 0 ≤ i ≤ n, represents all the subsets of n
items that include a particular selection made from the first i ordered items.
 This particular selection is uniquely determined by the path from the root to
the node:
 a branch going to the left indicates the inclusion of the next item,
 while a branch going to the right indicates its exclusion.
 Record the total weight w and the total value v of this selection in the node,
along with some upper bound ub on the value of any subset that can be
obtained by adding zero or more items to this selection.
Upper bound calculation:
A simple way to compute the upper bound ub is to add to v, the total value of the items already selected, the product of the remaining capacity of the knapsack W - w and the best per unit payoff among the remaining items, which is vi+1/wi+1:
ub = v + (W - w)(vi+1/wi+1)
Example:

W = 10

Item Weight Value Value/Weight
1 4 $40 10
2 7 $42 6
3 5 $25 5
4 3 $12 4

State space tree construction:


Node 0 - At the root of the state-space tree, no items have been selected as yet. Hence, both the total weight of the items already selected w and their total value v are equal to 0. The value of the upper bound computed by the formula is $100.
o w = 0, v = 0
o ub = 0 + (10-0)*10 = 100
Node 1: the left child of the root represents the subsets that include item 1.
The total weight and value of the items already included are 4 and $40, respectively; the value of the upper bound is
ub = 40 + (10-4)*6 = $76

Node 2 (right of node 1) represents the subsets that do not include item 1.

ub = 0 + (10-0)*6 = $60
Since node 1 has a larger upper bound than node 2, it is more promising for this maximization problem, so the tree is branched from node 1 first.
Node 3 (left of node 1) with item 1 and with item 2 - w = 4+7 = 11; v = 40; vi+1/wi+1 = 5
Here w = 11 > 10. This is not a feasible solution since the constraints are not satisfied.
Node 4 (right of node 1) with item 1 and without item 2 - w = 4; v = 40; vi+1/wi+1 = 5
ub = 40 + (10-4)*5 = 70
Node 5 (left of node 4) with item 1 and item 3 - w = 4+5 = 9; v = 40+25 = 65; vi+1/wi+1 = 4
ub = 65 + (10-9)*4 = 69
Node 6 (right of node 4) with item 1 and without item 3 - w = 4; v = 40; vi+1/wi+1 = 4
ub = 40 + (10-4)*4 = 64
The right node yields an inferior upper bound, so the left node is selected for further expansion.
Node 7 (left of node 5) with items 1, 3 and with item 4 - item 4 has w = 3, v = 12
Here w = 9+3 = 12 > 10. This is not a feasible solution since the constraints are not satisfied.
Node 8 (right of node 5) with items 1, 3 and without item 4 - w = 9; v = 65; vi+1/wi+1 = 0
ub = 65 + (10-9)*0 = 65

Hence the items in the knapsack are {item 1, item 3} with the profit $65.
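A compact best-first sketch of this search (a minimal illustration under the assumptions above: the items are pre-sorted by value-to-weight ratio, and Python's heapq serves as the priority queue of live nodes):

import heapq

W = 10
items = [(4, 40), (7, 42), (5, 25), (3, 12)]   # (weight, value), ratio-sorted

def ub(i, w, v):
    # Upper bound for a node on level i with current weight w and value v:
    # ub = v + (W - w) * (best remaining payoff per weight unit).
    ratio = items[i][1] / items[i][0] if i < len(items) else 0
    return v + (W - w) * ratio

def best_first_knapsack():
    best_v, best_set = 0, []
    heap = [(-ub(0, 0, 0), 0, 0, 0, [])]       # (-ub, level, weight, value, taken)
    while heap:
        neg_ub, i, w, v, taken = heapq.heappop(heap)
        if -neg_ub <= best_v:                  # bound cannot beat the best so far
            continue
        if i == len(items):
            continue                           # leaf: value was recorded on the way down
        wi, vi = items[i]
        if w + wi <= W:                        # left child: include item i+1
            if v + vi > best_v:
                best_v, best_set = v + vi, taken + [i + 1]
            heapq.heappush(heap, (-ub(i + 1, w + wi, v + vi),
                                  i + 1, w + wi, v + vi, taken + [i + 1]))
        # right child: exclude item i+1
        heapq.heappush(heap, (-ub(i + 1, w, v), i + 1, w, v, taken))
    return best_set, best_v

print(best_first_knapsack())                   # ([1, 3], 65)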

5. Explain in detail about job assignment problem.

JOB ASSIGNMENT PROBLEM


The job assignment problem is the problem of assigning n people to n jobs so that the total cost of the assignment is as small as possible. An instance of the assignment problem is specified by an n-by-n cost matrix C, so that the problem can be stated as follows: select one element in each row of the matrix so that no two selected elements are in the same column and their sum is the smallest possible. This is done by considering the same small instance of the problem (jobs 1-4 as columns, persons a-d as rows):

Job 1 Job 2 Job 3 Job 4
Person a 9 2 7 8
Person b 6 4 3 7
Person c 5 8 1 8
Person d 7 6 9 4
To find a lower bound on the cost of an optimal selection without actually solving
the problem, several methods can be used. For example, it is clear that the cost of
any solution, including an optimal one, cannot be smaller than the sum of the
smallest
elements in each of the matrix's rows. For the instance given, the lower bound is
lb = 2 + 3 + 1 + 4 = 10.

It is important to stress that this is not the cost of any legitimate selection (3 and 1 came from the same column of the matrix); it is just a lower bound on the cost of any legitimate selection.
Apply the same thinking to partially constructed solutions. For example, for
any legitimate selection that selects 9 from the first row, the lower bound will be
lb = 9 + 3 + 1 + 4 = 17.

This problem deals with the order in which the tree's nodes will be generated. Rather than generating a single child of the last promising node as in backtracking, all the children of the most promising node among the non-terminated leaves in the current tree are generated.
To find which of the nodes is most promising, compare the lower bounds of the live nodes. It is sensible to consider a node with the best bound as most promising, although this does not, of course, preclude the possibility that an optimal solution will ultimately belong to a different branch of the state-space tree.
This variation of the strategy is called the best-first branch-and-bound.

Returning to the instance of the assignment problem given earlier, start with the root that corresponds to no elements selected from the cost matrix. The lower bound value for the root, denoted lb, is 10.

The nodes on the first level of the tree correspond to four elements (jobs) in the first row of the matrix, since they are each a potential selection for the first component of the solution. So there are four live leaves (nodes 1 through 4) that may
contain an optimal solution. The most promising of them is node 2 because it has the
smallest lower bound value.
By following the best-first search strategy, branch out from that node first by
considering the three different ways of selecting an element from the second row and
not in the second column—the three different jobs that can be assigned to person
b.
Of the six live leaves (nodes 1, 3, 4, 5, 6, and 7) that may contain an optimal
solution, we again choose the one with the smallest lower bound, node 5.

First, consider selecting the third column's element from c's row (i.e., assigning person c to job 3); this leaves no choice but to select the element from the fourth column of d's row (assigning person d to job 4). This yields leaf 8, which corresponds to the feasible solution (a→2, b→1, c→3, d→4) with the total cost of 13. Its sibling, node 9, corresponds to the feasible solution (a→2, b→1, c→4, d→3) with the total cost of 25. Since its cost is larger than the cost of the solution represented by leaf 8, node 9 is simply terminated.
o Note that if its cost were smaller than 13, then leaf 8 would have to be replaced with the information about the best solution seen so far by the data provided by this node.
o Now, inspecting each of the live leaves of the last state-space tree (nodes 1, 3, 4, 6, and 7 in the following figure), it is discovered that their lower-bound values are not smaller than 13, the value of the best selection seen so far (leaf 8).
o Hence all of them are terminated and the solution represented by leaf 8 is recognized as the optimal solution to the problem.
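The bound used throughout this example can be sketched as follows (a minimal illustration over the cost matrix above; a partial solution maps each already-assigned row to its chosen column):

C = [
    [9, 2, 7, 8],   # person a
    [6, 4, 3, 7],   # person b
    [5, 8, 1, 8],   # person c
    [7, 6, 9, 4],   # person d
]

def lb(partial):
    # Cost of the rows already assigned plus the smallest element of every
    # remaining row; column clashes are ignored in the remaining rows,
    # which is why this is only a lower bound and not necessarily a
    # legitimate selection.
    cost = sum(C[r][c] for r, c in partial.items())
    cost += sum(min(C[r]) for r in range(len(C)) if r not in partial)
    return cost

print(lb({}))        # 2 + 3 + 1 + 4 = 10, the root's bound
print(lb({0: 0}))    # selecting 9 from the first row: 9 + 3 + 1 + 4 = 17
print(lb({0: 1}))    # selecting 2 (node 2, the most promising): 10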
6. Explain in detail about approximation algorithms for the traveling salesman problem.

APPROXIMATION ALGORITHMS FOR THE TRAVELING SALESMAN PROBLEM

Nearest-neighbour algorithm
The following simple greedy algorithm is based on the nearest-neighbour heuristic: the idea of always going to the nearest unvisited city next.

Step 1: Choose an arbitrary city as the start.


Step 2: Repeat the following operation until all the cities have been visited: go to the unvisited city nearest the one visited last (ties can be broken arbitrarily).
Step 3: Return to the starting city.
The tour obtained: sa : A – B – C – D – A of length 10
The optimal tour: s* : A – B – D – C – A of length 8
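A sketch of the heuristic (the 4-city distance matrix below is an assumption: the usual textbook instance consistent with the quoted tour lengths 10 and 8, with A-D mapped to indices 0-3):

# Assumed 4-city instance (A..D -> 0..3).
dist = [
    [0, 1, 3, 6],   # A
    [1, 0, 2, 3],   # B
    [3, 2, 0, 1],   # C
    [6, 3, 1, 0],   # D
]

def nearest_neighbour(dist, start=0):
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])   # nearest unvisited city
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)                                      # return to the start
    length = sum(dist[u][v] for u, v in zip(tour, tour[1:]))
    return tour, length

print(nearest_neighbour(dist))   # ([0, 1, 2, 3, 0], 10), i.e. sa = A-B-C-D-A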
Multifragment-heuristic algorithm
Step 1 Sort the edges in increasing order of their weights. (Ties can be broken arbitrarily.)
Initialize the set of tour edges to be constructed to the empty set.
Step 2 Repeat this step until a tour of length n is obtained, where n is the number of cities in the instance being solved: add the next edge on the sorted edge list to the set of tour edges, provided this addition does not create a vertex of degree 3 or a cycle of length less than n; otherwise, skip the edge.
Step 3 Return the set of tour edges.
This algorithm yields the following set of tour edges for the example instance:
{(a, b), (c, d), (b, c), (a, d)}.
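A sketch of the multifragment heuristic (reusing the assumed 4-city matrix from the nearest-neighbour sketch; on that instance it produces exactly the edge set quoted above, with A-D mapped to 0-3):

dist = [[0, 1, 3, 6], [1, 0, 2, 3], [3, 2, 0, 1], [6, 3, 1, 0]]  # assumed instance

def multifragment(dist):
    n = len(dist)
    edges = sorted((dist[u][v], u, v) for u in range(n) for v in range(u + 1, n))
    degree = [0] * n
    comp = list(range(n))                       # naive union-find over fragments

    def find(x):
        while comp[x] != x:
            x = comp[x]
        return x

    tour_edges = []
    for w, u, v in edges:
        if degree[u] == 2 or degree[v] == 2:
            continue                            # would create a vertex of degree 3
        if find(u) == find(v) and len(tour_edges) < n - 1:
            continue                            # would close a cycle of length < n
        tour_edges.append((u, v))
        degree[u] += 1
        degree[v] += 1
        comp[find(u)] = find(v)
        if len(tour_edges) == n:
            break
    return tour_edges

print(multifragment(dist))   # [(0, 1), (2, 3), (1, 2), (0, 3)]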
Minimum-spanning-tree-based algorithms
There are approximation algorithms for the traveling salesman problem that exploit a
connection between Hamiltonian circuits and spanning trees of the same graph. Since
removing an edge from a Hamiltonian circuit yields a spanning tree, we can expect that
the structure of a minimum spanning tree provides a good basis for constructing a shortest
tour approximation. Here is an algorithm that implements this idea in a rather
straightforward fashion.
Twice-around-the-tree algorithm:

Step 1 Construct a minimum spanning tree of the graph corresponding to a given instance of the traveling salesman problem.
Step 2 Starting at an arbitrary vertex, perform a walk around the minimum spanning tree recording all the vertices passed by. (This can be done by a DFS traversal.)
Step 3 Scan the vertex list obtained in Step 2 and eliminate from it all repeated occurrences of the same vertex except the starting one at the end of the list. (This step is equivalent to making shortcuts in the walk.) The vertices remaining on the list will form a Hamiltonian circuit, which is the output of the algorithm.
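A sketch of the three steps (again on the assumed 4-city matrix; Prim's algorithm builds the tree, and a DFS preorder plays the role of the walk with shortcuts already applied):

dist = [[0, 1, 3, 6], [1, 0, 2, 3], [3, 2, 0, 1], [6, 3, 1, 0]]  # assumed instance

def twice_around_the_tree(dist, start=0):
    n = len(dist)
    # Step 1: minimum spanning tree by Prim's algorithm.
    in_tree, adj = {start}, {v: [] for v in range(n)}
    while len(in_tree) < n:
        _, u, v = min((dist[u][v], u, v)
                      for u in in_tree for v in range(n) if v not in in_tree)
        adj[u].append(v)
        adj[v].append(u)
        in_tree.add(v)
    # Steps 2-3: a DFS preorder of the tree is the walk with repeats removed.
    tour, stack, seen = [], [start], set()
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        tour.append(v)
        stack.extend(reversed(adj[v]))
    tour.append(start)
    length = sum(dist[u][v] for u, v in zip(tour, tour[1:]))
    return tour, length

print(twice_around_the_tree(dist))   # ([0, 1, 2, 3, 0], 10)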
Christofides algorithm:
It also uses a minimum spanning tree but does this in a more sophisticated way than the twice-around-the-tree algorithm.
Stage 1: Construct a minimum spanning tree of the graph.
Stage 2: Add edges of a minimum-weight matching of all the odd-degree vertices in the minimum spanning tree.
Stage 3: Find an Eulerian circuit of the multigraph obtained in Stage 2.
Stage 4: Create a tour from the circuit constructed in Stage 3 by making shortcuts to avoid visiting intermediate vertices more than once.
Local Search Heuristics for TSP:
Start with some initial tour (e.g., nearest neighbor).
On each iteration, explore the current tour's neighborhood by exchanging a few edges in it.
If the new tour is shorter, make it the current tour; otherwise consider another edge change.
If no change yields a shorter tour, the current tour is returned as the output.
Example of a 2-change:
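A sketch of one 2-change move (on the assumed 4-city matrix; starting from the nearest-neighbour tour, a single pass already reaches the optimal tour of length 8):

def two_opt_pass(tour, dist):
    # Try every pair of nonadjacent tour edges; reverse the segment between
    # them if exchanging the two edges shortens the tour.
    n = len(tour) - 1                       # tour is closed: tour[0] == tour[-1]
    for i in range(1, n - 1):
        for j in range(i + 1, n):
            a, b = tour[i - 1], tour[i]     # first edge (a, b)
            c, d = tour[j], tour[j + 1]     # second edge (c, d)
            if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                tour[i:j + 1] = reversed(tour[i:j + 1])
                return tour, True           # improved; caller may iterate again
    return tour, False                      # local optimum reached

dist = [[0, 1, 3, 6], [1, 0, 2, 3], [3, 2, 0, 1], [6, 3, 1, 0]]  # assumed instance
print(two_opt_pass([0, 1, 2, 3, 0], dist))  # ([0, 1, 3, 2, 0], True): length 10 -> 8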

7. Explain in detail about the approximation algorithm for the knapsack problem. (Nov/Dec 15)

APPROXIMATION ALGORITHM FOR KNAPSACK PROBLEM

Knapsack Problem:
Given n items of known weights wi and values vi, i = 1, 2, ..., n, and a knapsack of capacity W, find the most valuable subset of the items that fit in the knapsack.
Greedy algorithm for the discrete knapsack problem:
Step 1: Compute the value-to-weight ratios vi/wi, i = 1, ..., n, for the items given.
Step 2: Sort the items in nonincreasing order of the ratios computed in Step 1.
Step 3: Repeat the following operation until no item is left in the sorted list: if the current item on the list fits into the knapsack, place it in the knapsack; otherwise, proceed to the next item.
Item Weight Value
1 7 $42
2 3 $12
3 4 $40
4 5 $25
Computing the value-to-weight ratios and sorting the items in nonincreasing order of these efficiency ratios yields:
Item Weight Value Value/Weight
1 4 $40 10
2 7 $42 6
3 5 $25 5
4 3 $12 4
The greedy algorithm will select the first item of weight 4, skip the next item of weight 7, select the next item of weight 5, and skip the last item of weight 3. The solution obtained happens to be optimal for this instance.
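The three steps can be sketched as follows for this instance:

items = [(7, 42), (3, 12), (4, 40), (5, 25)]   # (weight, value), items 1..4 above
W = 10

def greedy_knapsack(items, W):
    # Steps 1-2: indices sorted by value-to-weight ratio, nonincreasing.
    order = sorted(range(len(items)),
                   key=lambda i: items[i][1] / items[i][0], reverse=True)
    weight, value, taken = 0, 0, []
    for i in order:                            # Step 3: take whatever still fits
        w, v = items[i]
        if weight + w <= W:
            weight, value = weight + w, value + v
            taken.append(i + 1)                # report 1-based item numbers
    return taken, value

print(greedy_knapsack(items, W))               # ([3, 4], 65): weights 4 and 5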
Greedy algorithm for the continuous knapsack problem:
Step 1 Compute the value-to-weight ratios vi/wi, i = 1, ..., n, for the items given.
Step 2 Sort the items in nonincreasing order of the ratios computed in Step 1.
Step 3 Repeat the following operation until the knapsack is filled to its full capacity or
no item is left in the sorted list: if the current item on the list fits into the knapsack in its
entirety, take it and proceed to the next item; otherwise, take its largest fraction to fill
the knapsack to its full capacity and stop.
Item Weight Value Value/Weight
1 4 $40 10
2 7 $42 6
3 5 $25 5
4 3 $12 4
The algorithm will take the first item of weight 4 and then 6/7 of the next item on the sorted list to fill the knapsack to its full capacity.
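Only Step 3 changes in the continuous version; a minimal sketch:

items = [(7, 42), (3, 12), (4, 40), (5, 25)]   # same instance as above
W = 10

def greedy_fractional(items, W):
    order = sorted(range(len(items)),
                   key=lambda i: items[i][1] / items[i][0], reverse=True)
    room, value = W, 0.0
    for i in order:
        w, v = items[i]
        if w <= room:                          # item fits in its entirety
            room, value = room - w, value + v
        else:                                  # take the largest fraction and stop
            value += v * (room / w)
            break
    return value

print(greedy_fractional(items, W))             # 40 + 42 * (6/7) = 76.0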
Approximation schemes:
For this problem, unlike the traveling salesman problem, there exist polynomial-time approximation schemes: parametric families of algorithms that allow us to get approximations with any predefined accuracy level, controlled by a parameter k.
Approximation algorithm by Sahni:
This algorithm generates all subsets of k items or less, and for each one that fits into the knapsack, it adds the remaining items as the greedy algorithm would (i.e., in nonincreasing order of their value-to-weight ratios). The subset of the highest value obtained in this fashion is returned as the algorithm's output.
Example: A small example of an approximation scheme with k = 2 and the instance given below:

Item Weight Value Value/Weight
1 4 $40 10
2 7 $42 6
3 5 $25 5
4 1 $4 4

Subset Added items Value
∅ 1, 3, 4 $69
{1} 3, 4 $69
{2} 4 $46
{3} 1, 4 $69
{1, 2} Not feasible
{1, 3} 4 $69
{1, 4} 3 $69
{2, 3} Not feasible
{2, 4} - $46
{3, 4} 1 $69
Solution: The algorithm yields {1, 3, 4}, which is the optimal solution for this instance.
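The scheme can be sketched directly from its description (enumerate all subsets of at most k items, complete each feasible one greedily, keep the best):

from itertools import combinations

items = [(4, 40), (7, 42), (5, 25), (1, 4)]    # (weight, value) for the k = 2 example
W, k = 10, 2

def approx_scheme(items, W, k):
    order = sorted(range(len(items)),
                   key=lambda i: items[i][1] / items[i][0], reverse=True)
    best_value, best_set = 0, set()
    for size in range(k + 1):                  # all subsets of k items or less
        for subset in combinations(range(len(items)), size):
            weight = sum(items[i][0] for i in subset)
            if weight > W:
                continue                       # infeasible starting subset
            chosen = set(subset)
            value = sum(items[i][1] for i in subset)
            for i in order:                    # greedy completion by ratio
                if i not in chosen and weight + items[i][0] <= W:
                    chosen.add(i)
                    weight += items[i][0]
                    value += items[i][1]
            if value > best_value:
                best_value, best_set = value, chosen
    return sorted(i + 1 for i in best_set), best_value

print(approx_scheme(items, W, k))              # ([1, 3, 4], 69)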

PART-C
1. Explain in detail about P, NP, NP-complete and NP-hard problems. (Nov/Dec 2015)

P, NP, NP-COMPLETE AND NP-HARD PROBLEMS

P:
 P is the set of all decision problems which can be solved in polynomial time.
 P problems are questions that have a yes/no answer and can be easily solved by a computer.
 For example, checking whether a number is prime is a relatively easy problem to solve.

NP:
 There are a lot of programs that don't run in polynomial time on a regular computer, but do run in polynomial time on a nondeterministic Turing machine.
 These programs solve problems in NP, which stands for nondeterministic polynomial time.
 NP problems are questions that have yes/no answers that are easy to verify, but may be hard to solve: it could take years or centuries for a computer to come up with an answer.
 For example: given the cities and distances, is there a route that covers all the cities, returning to the starting point, in less than x distance?
 Two stages in solving NP class problems:
 Guessing stage: a candidate solution is guessed (generated nondeterministically); finding it deterministically is the hard part, i.e., it has high time complexity.
 Verifying stage: the guessed solution is verified in polynomial time, which is easy.
NP COMPLETE:
 NP-complete problems are special kinds of NP problems. Any NP problem can be twisted and bent until it looks like an NP-complete problem.
 A problem is NP-complete if the problem is both NP-hard and in NP.
 For example, the knapsack problem is in NP. It asks for the best way to stuff a knapsack if you had lots of different-sized pieces of different precious metals lying on the ground and you can't carry all of them in the bag.
Figure: Notion of an NP-complete problem. Polynomial-time reductions of NP problems to an NP-complete problem are shown by arrows.

NP HARD:

 NP-hard problems are at least as hard as NP problems (they need not even be in NP).
 Even if someone suggested a solution to an NP-hard problem, it could still take forever to verify whether they were right.
 For example, in the travelling salesman problem, trying to figure out the absolute shortest path through 500 cities in your state would take forever to solve. Even if someone walked up to you, gave you an itinerary and claimed it was the absolute shortest path, it would still take you forever to figure out whether he was a liar or not.
Figure: A relationship between P, NP, NP Complete and NP Hard problems

2. Give any five undecidable problems and explain the famous halting problem. (May/June )

 Halting Problem.
 Post correspondence problem.
 Hilbert's tenth problem: the problem of deciding whether a Diophantine equation (multivariable polynomial equation) has a solution in integers.
 Determining if a context-free grammar generates all possible strings, or if it is ambiguous.
 The word problem in algebra and computer science.
 The word problem for certain formal languages.
 Determining whether a λ-calculus formula has a normal form.

In computability theory and computational complexity theory, an undecidable problem is


a decision problem for which it is known to be impossible to construct a single algorithm
that always leads to a correct yes-or-no answer.

Some decision problems cannot be solved at all by any algorithm. Such problems are
called undecidable, as opposed to decidable problems that can be solved by an
algorithm. A famous example of an undecidable problem was given by Alan Turing in
1936. The problem in question is called the halting problem: given a computer program
and an input to it, determine whether the program will halt on that input or continue
working indefinitely on it.
Here is a surprisingly short proof of this remarkable fact. By way of contradiction, assume that A is an algorithm that solves the halting problem. That is, for any program P and input I,

A(P, I) = 1 if program P halts on input I, and A(P, I) = 0 if program P does not halt on input I.

Consider, as an input to itself, a program Q constructed as follows: Q(P) halts on input P if A(P, P) = 0 (i.e., if program P does not halt when run on itself), and Q(P) does not halt if A(P, P) = 1. Then, substituting Q for P, Q(Q) halts if A(Q, Q) = 0, i.e., if Q does not halt on Q; and Q(Q) does not halt if A(Q, Q) = 1, i.e., if Q halts on Q. This is a contradiction because neither of the two outcomes for program Q is possible, which completes the proof.
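The contradiction can be written out as a short illustration (deliberately not runnable to completion, precisely because the assumed decider A cannot exist):

def A(P, I):
    # Hypothetical halting decider: would return True iff program P halts
    # on input I. The argument above shows no such A can exist.
    raise NotImplementedError

def Q(P):
    if A(P, P):          # A claims P halts when run on itself...
        while True:      # ...so Q deliberately runs forever,
            pass
    else:                # A claims P does not halt on itself...
        return           # ...so Q halts.

# Feeding Q to itself: Q(Q) halts if and only if A(Q, Q) says Q does not
# halt on Q -- whichever answer A gives is wrong, as the proof shows.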
