Module IV
Lecturer: Dr. Reshmi R Class: CSE-A&B
Syllabus:
Dynamic Programming, Back Tracking and Branch & Bound :
Dynamic Programming: The Control Abstraction - The Optimality Principle - Matrix Chain Multiplication - Analysis,
All Pairs Shortest Path Algorithm - Floyd-Warshall Algorithm - Analysis.
Back Tracking and Branch & Bound: The Control Abstraction of Back Tracking - The N Queen's Problem. Branch
and Bound Algorithm for Travelling Salesman Problem.
1.1 Dynamic Programming
Dynamic programming is an algorithm design method that can be used when the solution to a problem can be viewed
as the result of a sequence of decisions.
For some of the problems, an optimal sequence of decisions can be found by making the decisions one at a time and
never making an erroneous decision. This is true for all problems solvable by the greedy method. For many other
problems, it is not possible to make stepwise decisions (based only on local information) in such a manner that the
sequence of decisions made is optimal.
One way to solve problems for which it is not possible to make a sequence of stepwise decisions leading to an optimal
decision sequence is to try all possible decision sequences. We could enumerate all decision sequences and then pick
out the best. But the time and space requirements may be prohibitive. Dynamic programming often drastically reduces
the amount of enumeration by avoiding the enumeration of some decision sequences that cannot possibly be optimal.
In dynamic programming an optimal sequence of decisions is obtained by making explicit appeal to the principle of
optimality.
Definition 1 [Principle of optimality] The principle of optimality states that an optimal sequence of decisions has the
property that whatever the initial state and decision are, the remaining decisions must constitute an optimal decision
sequence with regard to the state resulting from the first decision.
1.1.2 Control Abstraction - Dynamic Programming
The development of a dynamic-programming algorithm can be broken into a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from the computed information.
The following characteristic properties must hold for a problem to be solvable using dynamic programming.
• Optimal substructure: solution to a given optimization problem can be obtained by the combination of optimal
solutions to its sub-problems.
• Overlapping subproblems: the space of sub-problems must be small, that is, any recursive algorithm solving
the problem should solve the same sub-problems over and over, rather than generating new sub-problems.
The difference between the greedy method and dynamic programming is that in the greedy method only one decision
sequence is ever generated. In dynamic programming, many decision sequences may be generated. However,
sequences containing suboptimal subsequences cannot be optimal (if the principle of optimality holds) and so will not
(as far as possible) be generated. Another important feature of the dynamic programming approach is that optimal
solutions to subproblems are retained so as to avoid recomputing their values.
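To make this concrete, here is a minimal Python sketch of retaining subproblem solutions (memoization), using Fibonacci numbers as a stand-in example that is not from the notes; the two properties above are exactly why caching pays off.

```python
from functools import lru_cache

# Minimal memoization sketch (illustrative example, not from the notes).
# fib has optimal substructure (fib(n) combines fib(n-1) and fib(n-2))
# and overlapping subproblems (both recursive calls re-reach the same
# smaller arguments), so retaining each result avoids recomputation.

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    if n < 2:                       # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)  # each subproblem is now solved once

print(fib(50))  # 12586269025, computed with O(n) calls instead of ~2^n
```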
• Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems re-
cursively, and then combine their solutions to solve the original problem. In contrast, dynamic programming is
applicable when the subproblems are not independent, that is, when subproblems share subsubproblems.
• Dynamic programming typically applies to optimization problems in which a set of choices must be made in
order to arrive at an optimal solution.
• Both techniques split their input into parts, find subsolutions to the parts, and synthesize larger solutions from
smaller ones.
• Divide and Conquer splits its input at pre-specified deterministic points (e.g., always in the middle).
• Dynamic Programming splits its input at every possible split point rather than at pre-specified points. After
trying all split points, it determines which split point is optimal.
Matrix Chain Multiplication:
Given a sequence (chain) A1, A2, ..., An of n matrices, we wish to compute the product A1·A2···An.
Matrix multiplication is associative, and so all parenthesizations yield the same product.
For example, if the chain of matrices is A1, A2, A3, A4, the product A1·A2·A3·A4 can be fully parenthesized in five
distinct ways:
(A1(A2(A3A4))), (A1((A2A3)A4)), ((A1A2)(A3A4)), ((A1(A2A3))A4), (((A1A2)A3)A4)
The matrix-chain multiplication problem can be stated as follows: given a chain A1, A2, ..., An of n matrices, where for
i = 1, 2, ..., n, matrix Ai has dimension p_{i-1} × p_i, fully parenthesize the product A1·A2···An in a way that minimizes
the number of scalar multiplications.
Procedure:
Step 1: The structure of an optimal parenthesization:
The first step in the dynamic-programming paradigm is to find the optimal substructure and then use it to construct an
optimal solution to the problem from optimal solutions to subproblems.
For the matrix-chain multiplication problem, we can perform this step as follows. For evaluating the product Ai Ai+1 ...Aj ,
the parenthesization of the product Ai Ai+1 ...Aj must split the product between Ak and Ak+1 for some integer k in
the range i ≤ k < j . That is, for some value of k, we first compute the matrices Ai...k and Ak+1...j and then multiply
them together to produce the final product Ai...j . The cost of this parenthesization is thus the cost of computing the
matrix Ai...k , plus the cost of computing Ak+1...j , plus the cost of multiplying them together.
The optimal substructure of this problem is as follows. Suppose that an optimal parenthesization of Ai Ai+1 ...Aj splits
the product between Ak and Ak+1 . Then the parenthesization of the “prefix” subchain Ai Ai+1 ...Ak within this optimal
parenthesization of Ai Ai+1 ...Aj must be an optimal parenthesization of Ai Ai+1 ...Ak .
A similar observation holds for the parenthesization of the subchain Ak+1 Ak+2 ...Aj in the optimal parenthesization
of Ai Ai+1 ...Aj : it must be an optimal parenthesization of Ak+1 Ak+2 ...Aj .
Thus, we can build an optimal solution to an instance of the matrix-chain multiplication problem by splitting the prob-
lem into two subproblems (optimally parenthesizing Ai Ai+1 ...Ak and Ak+1 Ak+2 ...Aj), finding optimal solutions to
the subproblem instances, and then combining these optimal subproblem solutions.
Step 2: A recursive solution:
Let m[i, j] be the minimum number of scalar multiplications needed to compute the matrix product Ai...Aj.
• If i = j, the problem is trivial; the chain consists of just one matrix A_{i..i} = Ai, so that no scalar multiplications
are necessary to compute the product. Thus, m[i, i] = 0 for i = 1, 2, ..., n.
• To compute m[i, j] when i < j, consider the structure of an optimal solution from step 1. Assume that the
optimal parenthesization splits the product Ai Ai+1 ...Aj between Ak and Ak+1, where i ≤ k < j. Then m[i, j]
is equal to the minimum cost for computing the subproducts A_{i..k} and A_{k+1..j}, plus the cost of multiplying
these two matrices together. Each matrix Ai is p_{i-1} × p_i, so multiplying A_{i..k} by A_{k+1..j} takes
p_{i-1} p_k p_j scalar multiplications. Thus
m[i, j] = m[i, k] + m[k + 1, j] + p_{i-1} p_k p_j
There are only j − i possible values for k, namely k = i, i + 1, ..., j − 1. Since the optimal parenthesization must
use one of these values for k, we need only check them all to find the best:
m[i, j] = min over i ≤ k < j of { m[i, k] + m[k + 1, j] + p_{i-1} p_k p_j }, for i < j.
Step 3: Computing the optimal costs:
The procedure MATRIX-CHAIN-ORDER computes the table m bottom-up, filling in chains of increasing length; alongside
each cost m[i, j] it records in s[i, j] the value of k at which the optimal split occurs. Step 4 will use the table s to
construct an optimal solution.
A simple inspection of the nested loop structure of MATRIX-CHAIN-ORDER yields a running time
of O(n^3) for the algorithm. The loops are nested three deep, and each loop index (l, i, and k) takes on at most
n − 1 values. The algorithm requires Θ(n^2) space to store the m and s tables.
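The MATRIX-CHAIN-ORDER pseudocode itself did not survive in these notes, so the following Python sketch reconstructs it along the lines of the standard CLRS procedure; the tables m and s match the discussion above, while the function name and the 1-indexed table layout are choices made here, not fixed by the notes.

```python
import sys

def matrix_chain_order(p):
    """Bottom-up MATRIX-CHAIN-ORDER: p[i-1] x p[i] is the dimension of A_i."""
    n = len(p) - 1                           # number of matrices in the chain
    # m[i][j]: minimum scalar multiplications for A_i..A_j (1-indexed)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    # s[i][j]: the k at which the optimal split of A_i..A_j occurs
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):                # l = chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = sys.maxsize
            for k in range(i, j):            # try every possible split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

# Dimension sequence of the worked example that follows:
m, s = matrix_chain_order([4, 10, 3, 12, 20, 7])
print(m[1][5])  # 1344, matching the hand computation below
```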
Example: Find an optimal parenthesization of a matrix-chain product for the following matrices.
Solution:
Let the given matrices be A1 = 4×10, A2 = 10×3, A3 = 3×12, A4 = 12×20, A5 = 20×7, so the dimension
sequence is p = (4, 10, 3, 12, 20, 7).
Using the tabular method, consider a 5×5 table and initialize the diagonal entries m[i, i] = 0.
First compute the costs of all chains of length 2:
1. m[1, 2] = m1·m2 = (4×10)·(10×3): 4·10·3 = 120
2. m[2, 3] = m2·m3 = (10×3)·(3×12): 10·3·12 = 360
3. m[3, 4] = m3·m4 = (3×12)·(12×20): 3·12·20 = 720
4. m[4, 5] = m4·m5 = (12×20)·(20×7): 12·20·7 = 1680
Next, chains of length 3.
m[1, 3] = m1·m2·m3
• There are two cases by which we can solve this multiplication: (m1×m2)×m3 and m1×(m2×m3).
• After solving both cases, we choose the case with the minimum cost.

m[1, 3] = min{ m[1, 1] + m[2, 3] + p0·p1·p3 = 0 + 360 + 4·10·12 = 840,
               m[1, 2] + m[3, 3] + p0·p2·p3 = 120 + 0 + 4·3·12 = 264 }    (1.2)

m[1, 3] = 264
Comparing both outputs, 264 is the minimum, so we insert 264 in the table; the combination (m1×m2)×m3 is chosen
for the multiplication (k = 2).
m[2, 4] = m2·m3·m4
• There are two cases by which we can solve this multiplication: (m2×m3)×m4 and m2×(m3×m4).
• After solving both cases, we choose the case with the minimum cost.

m[2, 4] = min{ m[2, 2] + m[3, 4] + p1·p2·p4 = 0 + 720 + 10·3·20 = 1320,
               m[2, 3] + m[4, 4] + p1·p3·p4 = 360 + 0 + 10·12·20 = 2760 }    (1.3)

m[2, 4] = 1320
Comparing both outputs, 1320 is the minimum, so we insert 1320 in the table; the combination m2×(m3×m4) is chosen
for the multiplication (k = 2).
m[3, 5] = m3·m4·m5
• There are two cases by which we can solve this multiplication: (m3×m4)×m5 and m3×(m4×m5).
• After solving both cases, we choose the case with the minimum cost.

m[3, 5] = min{ m[3, 3] + m[4, 5] + p2·p3·p5 = 0 + 1680 + 3·12·7 = 1932,
               m[3, 4] + m[5, 5] + p2·p4·p5 = 720 + 0 + 3·20·7 = 1140 }    (1.4)

m[3, 5] = 1140
Comparing both outputs, 1140 is the minimum, so we insert 1140 in the table; the combination (m3×m4)×m5 is chosen
for the multiplication (k = 4).
Next, chains of length 4.
m[1, 4] = m1·m2·m3·m4, with three cases:
1. (m1×m2×m3)×m4
2. m1×(m2×m3×m4)
3. (m1×m2)×(m3×m4)
• After solving all cases, we choose the case with the minimum cost.

m[1, 4] = min{ m[1, 1] + m[2, 4] + p0·p1·p4 = 0 + 1320 + 4·10·20 = 2120,
               m[1, 2] + m[3, 4] + p0·p2·p4 = 120 + 720 + 4·3·20 = 1080,
               m[1, 3] + m[4, 4] + p0·p3·p4 = 264 + 0 + 4·12·20 = 1224 }    (1.5)

m[1, 4] = 1080
Comparing the outputs, 1080 is the minimum, so we insert 1080 in the table; the combination (m1×m2)×(m3×m4) is
chosen for the multiplication (k = 2).
m[2, 5] = m2·m3·m4·m5, with three cases:
1. (m2×m3×m4)×m5
2. m2×(m3×m4×m5)
3. (m2×m3)×(m4×m5)
• After solving all cases, we choose the case with the minimum cost.

m[2, 5] = min{ m[2, 2] + m[3, 5] + p1·p2·p5 = 0 + 1140 + 10·3·7 = 1350,
               m[2, 3] + m[4, 5] + p1·p3·p5 = 360 + 1680 + 10·12·7 = 2880,
               m[2, 4] + m[5, 5] + p1·p4·p5 = 1320 + 0 + 10·20·7 = 2720 }    (1.6)

m[2, 5] = 1350
Comparing the outputs, 1350 is the minimum, so we insert 1350 in the table; the combination m2×(m3×m4×m5) is
chosen for the multiplication (k = 2).
Finally, the full chain of length 5.
m[1, 5] = m1·m2·m3·m4·m5, with four cases:
1. (m1×m2×m3×m4)×m5
2. m1×(m2×m3×m4×m5)
3. (m1×m2×m3)×(m4×m5)
4. (m1×m2)×(m3×m4×m5)
• After solving all cases, we choose the case with the minimum cost.

m[1, 5] = min{ m[1, 1] + m[2, 5] + p0·p1·p5 = 0 + 1350 + 4·10·7 = 1630,
               m[1, 2] + m[3, 5] + p0·p2·p5 = 120 + 1140 + 4·3·7 = 1344,
               m[1, 3] + m[4, 5] + p0·p3·p5 = 264 + 1680 + 4·12·7 = 2016,
               m[1, 4] + m[5, 5] + p0·p4·p5 = 1080 + 0 + 4·20·7 = 1544 }    (1.7)

m[1, 5] = 1344
Comparing the outputs, 1344 is the minimum, so we insert 1344 in the table; the combination (m1×m2)×(m3×m4×m5)
is chosen for the multiplication (k = 2). The minimum cost of the whole chain is therefore 1344 scalar multiplications,
and combining the recorded splits (k = 2 for m[1, 5], k = 4 for m[3, 5]) gives the optimal parenthesization
((A1·A2)·((A3·A4)·A5)).
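Step 4 (constructing an optimal solution) reads the splits back out of the s table. The helper below is a sketch in the style of CLRS's PRINT-OPTIMAL-PARENS (the name and recursive form are assumptions, not from the notes); applied to the s table computed by the matrix_chain_order sketch above, it reproduces exactly the parenthesization derived by hand.

```python
def print_optimal_parens(s, i, j):
    """Recover the optimal parenthesization of A_i..A_j from the split table s."""
    if i == j:
        return f"A{i}"                      # a single matrix needs no parentheses
    k = s[i][j]                             # optimal split recorded in step 3
    return ("(" + print_optimal_parens(s, i, k)
                + print_optimal_parens(s, k + 1, j) + ")")

# With m, s = matrix_chain_order([4, 10, 3, 12, 20, 7]) from the sketch above:
print(print_optimal_parens(s, 1, 5))  # ((A1A2)((A3A4)A5))
```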
All Pairs Shortest Path - Floyd-Warshall Algorithm:
The Floyd-Warshall algorithm considers the "intermediate" vertices of a shortest path, where an intermediate vertex of a
simple path p = v1, v2, ..., vl is any vertex of p other than v1 or vl, that is, any vertex in the set {v2, v3, ..., v_{l-1}}.
Let V = {1, 2, ..., n} be the set of vertices of the given graph, and consider a subset {1, 2, ..., k} of vertices for some k. For
any pair of vertices i, j ∈ V, consider all paths from i to j whose intermediate vertices are all drawn from {1, 2, ..., k},
and let p be a minimum-weight path from among them. The Floyd- Warshall algorithm exploits a relationship between
path p and shortest paths from i to j with all intermediate vertices in the set {1, 2, ..., k − 1}. The relationship depends
on whether or not k is an intermediate vertex of path p.
• If k is not an intermediate vertex of path p, then all intermediate vertices of path p are in the set {1, 2, ..., k − 1}.
Thus, a shortest path from vertex i to vertex j with all intermediate vertices in the set {1, 2, ..., k − 1} is also a
shortest path from i to j with all intermediate vertices in the set {1, 2, ..., k}.
• If k is an intermediate vertex of path p, then we break p down into i →p1 k →p2 j, where p1 is a shortest path from i
to k with all intermediate vertices in the set {1, 2, ..., k}. Because vertex k is not an intermediate vertex of path
p1, p1 is also a shortest path from i to k with all intermediate vertices in the set {1, 2, ..., k − 1}. Similarly,
p2 is a shortest path from vertex k to vertex j with all intermediate vertices in the set {1, 2, ..., k − 1}.
Path p is a shortest path from vertex i to vertex j, and k is the highest-numbered intermediate vertex of p. Path p1, the
portion of path p from vertex i to vertex k, has all intermediate vertices in the set {1, 2, ..., k − 1}. The same holds for path
p2 from vertex k to vertex j.
Let d_ij^(k) be the weight of a shortest path from vertex i to vertex j for which all intermediate vertices are in the set
{1, 2, ..., k}. When k = 0, a path from vertex i to vertex j with no intermediate vertex numbered higher than 0 has no
intermediate vertices at all. Such a path has at most one edge, and hence d_ij^(0) = w_ij.
A recursive definition of the above is given as:

d_ij^(k) = w_ij                                            if k = 0
d_ij^(k) = min{ d_ij^(k-1), d_ik^(k-1) + d_kj^(k-1) }      if k ≥ 1        (1.8)
Because for any path all intermediate vertices are in the set {1, 2, ..., n}, the matrix D^(n) = (d_ij^(n)) gives the final
answer: d_ij^(n) = δ(i, j) for all i, j ∈ V.
Algorithm:
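The pseudocode block did not survive extraction, so here is a Python sketch of the recurrence (1.8), computing D together with the predecessor matrix Π. The demo weight matrix is an assumption: it is the standard CLRS five-vertex example, chosen because it agrees with every entry the hand computation below actually uses (d41 = 2, d12 = 3, d13 = 8, d15 = −4, d43 = −5, d54 = 6, and the absence of edges from 5 to 1 and from 1 to 4).

```python
INF = float('inf')

def floyd_warshall(W):
    """All-pairs shortest paths. W[i][j] is the edge weight (INF if no edge,
    0 on the diagonal); vertices are 0-indexed here, 1-indexed in the notes."""
    n = len(W)
    D = [row[:] for row in W]                   # D starts as D^(0) = W
    # Pi[i][j]: predecessor of j on a shortest i-to-j path (None = NIL)
    Pi = [[None if i == j or W[i][j] == INF else i
           for j in range(n)] for i in range(n)]
    for k in range(n):                          # allow vertex k as intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    Pi[i][j] = Pi[k][j]
    return D, Pi

# Assumed weight matrix of the five-vertex example (vertices 1..5 -> rows 0..4):
W = [[0,   3,   8,   INF, -4],
     [INF, 0,   INF, 1,   7],
     [INF, 4,   0,   INF, INF],
     [2,   INF, -5,  0,   INF],
     [INF, INF, INF, 6,   0]]
D, Pi = floyd_warshall(W)
print(D[3][1])  # -1: shortest 4 -> 2 distance, via 4 -> 3 -> 2
```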
Example:
[Figure: a directed, weighted graph on five vertices, together with its weight matrix D^(0); not reproduced here.]
Let D^(k) be the distance matrix of shortest-path weights with intermediate vertices drawn from {1, 2, ..., k}.
Π is the predecessor matrix,
Π = (π_ij), where π_ij is NIL if either i = j or there is no path from i to j, and
otherwise π_ij is the predecessor of j on some shortest path from i.
D^(0) is the initial distance matrix and Π^(0) is the initial predecessor matrix.
Step 1: Let k = 1. Find D^(1) and Π^(1) from D^(0).
Finding the shortest distance from vertex 3 to the other vertices (2, 4, 5), when the intermediate vertex k = 1:
as there is no path from 3 to 1, the d_3j entries do not change when k = 1.
Finding the shortest distance from vertex 4 to the other vertices (2, 3, 5), when the intermediate vertex k = 1:
d_42^(1) = min{ d_42^(0), d_41^(0) + d_12^(0) } = min{ ∞, 2 + 3 } = 5;   π42 = 1
d_43^(1) = min{ d_43^(0), d_41^(0) + d_13^(0) } = min{ −5, 2 + 8 } = −5
d_45^(1) = min{ d_45^(0), d_41^(0) + d_15^(0) } = min{ ∞, 2 + (−4) } = −2;   π45 = 1
Finding the shortest distance from vertex 5 to the other vertices (2, 3, 4), when the intermediate vertex k = 1:
as there is no path from 5 to 1, there won't be any change in the d_5j entries when k = 1.
d_52^(1) = min{ d_52^(0), d_51^(0) + d_12^(0) } = min{ ∞, ∞ + 3 } = ∞
d_53^(1) = min{ d_53^(0), d_51^(0) + d_13^(0) } = min{ ∞, ∞ + 8 } = ∞
d_54^(1) = min{ d_54^(0), d_51^(0) + d_14^(0) } = min{ 6, ∞ + ∞ } = 6
Step 2: Let k = 2. Find D^(2) and Π^(2) from D^(1).
Note: Repeat all the steps shown in step 1, considering the distance matrix D^(1) and intermediate vertex k = 2:
d_ij^(2) = min{ d_ij^(1), d_i2^(1) + d_2j^(1) }
Continuing in this way for k = 3, 4, 5, the distance matrix D^(5) is the shortest-path weight matrix (all pairs shortest
paths).

1.3 Backtracking

The problems which deal with searching for a set of solutions, or which ask for an optimal solution satisfying some
constraints, can be solved using the backtracking formulation. In many applications of the backtrack method, the desired
solution is expressible as an n-tuple (x1, ..., xn), where the xi are chosen from some finite set Si. Often the problem
to be solved calls for finding one vector that maximizes (or minimizes or satisfies) a criterion function P(x1, ..., xn).
The basic idea is to build up the solution vector one component at a time and to use modified criterion functions
Pi(x1, ..., xi) (sometimes called bounding functions) to test whether the vector being formed has any chance of success.
The major advantage of this method is this: if it is realized that the partial vector (x1, ..., xi) can in no way lead to an
optimal solution, then m_{i+1} ··· m_n possible test vectors can be ignored entirely (where m_j is the size of the set S_j).
Many of the problems we solve using backtracking require that all the solutions satisfy a complex set of constraints.
For any problem these constraints can be divided into two categories: explicit and implicit.
Definition 2 (Explicit Constraints) Explicit constraints are rules that restrict each xi to take on values only from a
given set.

Definition 3 (Implicit Constraints) Implicit constraints are rules that determine which of the tuples in the solution
space satisfy the criterion function.

Example: For eight queens on an 8×8 chessboard, queens are placed such that no two "attack", that is, no
two queens are on the same row, column, or diagonal.
Backtracking - Algorithm:
[The recursive backtracking algorithm is given as pseudocode in a figure; not reproduced here.]
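Since the pseudocode is missing, the following Python generator is a sketch of the recursive control abstraction (the names in_range and bound are illustrative, not from the notes): it extends the partial vector one component at a time and abandons any extension rejected by the bounding function.

```python
def backtrack(k, x, n, in_range, bound):
    """Recursive backtracking control abstraction (sketch).
    x[1..k-1] is the partial vector; in_range(k) yields candidates for x[k];
    bound(x, k) is the bounding function B_k on the partial vector."""
    for value in in_range(k):
        x[k] = value
        if bound(x, k):                  # partial vector may still succeed
            if k == n:
                yield tuple(x[1:n + 1])  # reached an answer node
            else:
                yield from backtrack(k + 1, x, n, in_range, bound)
```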
8-queens problem:
A classic combinatorial problem is to place eight queens on an 8×8 chessboard so that no two "attack", that is, so that
no two of them are on the same row, column, or diagonal.
Let us number the rows and columns of the chessboard 1 through 8. The queens can also be numbered 1 through 8.
Since each queen must be on a different row, assume that queen i is to be placed on row i. All solutions to the 8-queens
problem can therefore be represented as 8-tuples (x1, ..., x8), where xi is the column on which queen i is placed.
The explicit constraints using this formulation are Si = {1, 2, 3, 4, 5, 6, 7, 8}, 1 ≤ i ≤ 8. Therefore the solution space
consists of 8^8 tuples.
The implicit constraints for this problem are that no two xi can be the same (i.e., all queens must be on different columns)
and no two queens can be on the same diagonal.
The first of these two constraints implies that all solutions are permutations of the 8-tuple (1, 2, 3, 4, 5, 6, 7, 8). This
realization reduces the size of the solution space from 8^8 tuples to 8! tuples.
In general, for the n-queens problem the solution space then consists of all n! permutations of the n-tuple (1, 2, ..., n).
[Figure: permutation tree for n = 4; not reproduced.]
A possible tree organization for the case n = 4 is shown in the figure. A tree such as this is called a permutation tree.
The edges are labeled by possible values of xi. Edges from level 1 to level 2 nodes specify the values for x1. Thus, the
leftmost subtree contains all solutions with x1 = 1; its leftmost subtree contains all solutions with x1 = 1 and x2 = 2,
and so on. Edges from level i to level i + 1 are labeled with the values of xi. The solution space is defined by all paths
from the root node to a leaf node. There are 4! = 24 leaf nodes in the tree.
Terminology regarding tree organizations of solution spaces:
Each node in this tree defines a problem state. All paths from the root to other nodes define the state space of the
problem. Solution states are those problem states s for which the path from the root to s defines a tuple in the solu-
tion space (represented by leaf nodes). Answer states are those solution states s for which the path from the root to s
defines a tuple that is a member of the set of solutions of the problem (i.e., it satisfies the implicit constraints). The tree
organization of the solution space is referred to as the state space tree.
State space tree organizations that are independent of the problem instance being solved are called static trees.
For some problems it is advantageous to use different tree organizations for different problem instances. In this case
the tree organization is determined dynamically as the solution space is being searched. Tree organizations that are
problem instance dependent are called dynamic trees.
Once a state space tree has been conceived of for any problem, this problem can be solved by systematically gen-
erating the problem states, determining which of these are solution states, and finally determining which solution states
are answer states.
There are two fundamentally different ways to generate the problem states. Both of these begin with the root node and
generate other nodes.
A node which has been generated and all of whose children have not yet been generated is called a live node.
The live node whose children are currently being generated is called the E-node (node being expanded).
A dead node is a generated node which is not to be expanded further or all of whose children have been generated.
In both methods of generating problem states, there will be a list of live nodes.
In the first of these two methods, as soon as a new child C of the current E-node R is generated, this child becomes
the new E-node. R becomes the E-node again when the subtree rooted at C has been fully explored. This corresponds
to a depth first generation of the problem states.
In the second state generation method, the E-node remains the E-node until it is dead.
In both methods, bounding functions are used to kill live nodes without generating all their children.
Depth first node generation with bounding functions is called backtracking. State generation methods in which the
E-node remains the E-node until it is dead, lead to branch-and-bound methods.
[Figure: Example of a backtrack solution to the 4-queens problem. The tree is generated depth first, with the bounding
function killing nodes, until a solution is found.]
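As a concrete instance of the abstraction, here is a Python sketch of the n-queens backtracking search using the tuple formulation above; place() plays the role of the bounding function (the function names are choices made here, not from the notes).

```python
def place(x, k):
    """Bounding function: can queen k sit in column x[k]? Rows are distinct
    by construction; check columns and both diagonals against queens 1..k-1."""
    return all(x[i] != x[k] and abs(x[i] - x[k]) != abs(i - k)
               for i in range(1, k))

def n_queens(n, k=1, x=None):
    """Depth-first search of the permutation-style state space tree,
    yielding every answer tuple (x1, ..., xn)."""
    if x is None:
        x = [0] * (n + 1)                # 1-indexed, as in the notes
    for col in range(1, n + 1):
        x[k] = col
        if place(x, k):                  # otherwise the whole subtree is killed
            if k == n:
                yield tuple(x[1:])
            else:
                yield from n_queens(n, k + 1, x)

print(next(n_queens(4)))       # (2, 4, 1, 3), the first 4-queens solution found
print(len(list(n_queens(8))))  # 92 solutions to the 8-queens problem
```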
Branch and Bound:
The term branch-and-bound refers to all state space search methods in which all children of the E-node are generated
before any other live node can become the E-node.
In branch-and-bound terminology, a BFS-like state space search will be called FIFO (First In First Out) search, as the
list of live nodes is a first-in-first-out list (or queue). A D-search-like state space search will be called LIFO (Last In
First Out) search, as the list of live nodes is a last-in-first-out list (or stack).
In both LIFO and FIFO branch-and-bound, the selection rule for the next E-node does not give any preference to
a node that has a very good chance of getting the search to an answer node quickly. The search for an answer node can
often be sped up by using an "intelligent" ranking function for live nodes; the next E-node is then selected on the basis
of this ranking function. The ideal way to assign ranks would be on the basis of the additional computational effort (or
cost) needed to reach an answer node from the live node. For any node x, this cost could be
1. the number of nodes in the subtree x that need to be generated before an answer node is generated
2. the number of levels the nearest answer node (in the subtree x) is from x
The difficulty with using either of these ideal cost functions is that computing the cost of a node usually involves a
search of the subtree x for an answer node. Hence, by the time the cost of a node is determined, that subtree has been
searched and there is no need to explore x again. For this reason, search algorithms usually rank nodes only on the
basis of an estimate g'(x) of their cost.
Let g'(x) be an estimate of the additional effort needed to reach an answer node from x. Node x is assigned a rank using
a function such that c'(x) = f(h(x)) + g'(x), where h(x) is the cost of reaching x from the root and f() is any
nondecreasing function.
A search strategy that uses the cost function c'(x) = f(h(x)) + g'(x) to select the next E-node would always choose for
its next E-node a live node with least c'(x). Hence, such a search strategy is called an LC-search (Least Cost search).
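A minimal LC-search skeleton in Python, assuming the list of live nodes is kept in a priority queue keyed by c'(x); the parameter names (children, c_hat, is_answer) are illustrative, not from the notes.

```python
import heapq

def lc_search(root, children, c_hat, is_answer):
    """LC-search skeleton: always expand the live node of least c'(x)."""
    counter = 0                              # tie-breaker for equal ranks
    live = [(c_hat(root), counter, root)]    # the list of live nodes
    while live:
        _, _, e_node = heapq.heappop(live)   # least-cost live node = E-node
        if is_answer(e_node):
            return e_node
        for child in children(e_node):       # generate all children, then
            counter += 1                     # pick a new E-node from the heap
            heapq.heappush(live, (c_hat(child), counter, child))
    return None                              # no answer node exists
```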
Branch and Bound Algorithm for the Travelling Salesperson Problem:
Let G = (V, E) be a directed graph with edge costs c_ij. The variable c_ij is defined such that c_ij > 0 for all i and j,
and c_ij = ∞ if (i, j) ∉ E. Let |V| = n and assume n > 1. A tour of G is a directed simple cycle that includes every
vertex in V. The cost of a tour is the sum of the costs of the edges on the tour. The traveling salesperson problem is to
find a tour of minimum cost.
[Figure: State space tree for the traveling salesperson problem with n = 4 and i0 = i4 = 1; not reproduced.]
The tree organization for the case of a complete graph with |V| = 4 is shown in the figure. Each leaf node L is a solution
node and represents the tour defined by the path from the root to L. Node 14 represents the tour i0 = 1, i1 = 3, i2 = 4,
i3 = 2, and i4 = 1.
A better c'(·) can be obtained by using the reduced cost matrix corresponding to G.
A row (column) is said to be reduced iff it contains at least one zero and all remaining entries are non-negative. A
matrix is reduced iff every row and column is reduced.
Consider a cost matrix corresponding to a graph with five vertices. Since every tour on this graph includes exactly one
edge (i, j) with i = k, 1 ≤ k ≤ 5, and exactly one edge (i, j) with j = k, 1 ≤ k ≤ 5, subtracting a constant t from every
entry in one column or one row of the cost matrix reduces the length of every tour by exactly t.
A minimum-cost tour remains a minimum-cost tour following this subtraction operation. If t is chosen to be the
minimum entry in row i (column j), then subtracting it from all entries in row i (column j) introduces a zero into row i
(column j).
The total amount subtracted from the columns and rows is a lower bound on the length of a minimum-cost tour and
can be used as the c' value for the root of the state space tree. Subtracting 10, 2, 2, 3, 4, 1, and 3 from rows 1, 2, 3,
4, and 5 and columns 1 and 3 respectively of the given cost matrix yields the reduced matrix. The total amount
subtracted is 25.
Hence, all tours in the original graph have a length at least 25.
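The reduction step is easy to express in code. In the sketch below, the instance matrix is an assumption: the figure carrying the cost matrix is missing from the notes, and the matrix used here is the classic Horowitz-Sahni five-vertex example, chosen because its reduction subtracts exactly 10, 2, 2, 3, 4 from the rows and 1 and 3 from columns 1 and 3, for the stated total of 25.

```python
INF = float('inf')

def reduce_matrix(A):
    """Row- and column-reduce a TSP cost matrix in place; the total amount
    subtracted is a lower bound on the length of any tour."""
    n, total = len(A), 0
    for i in range(n):                                   # reduce each row
        t = min(A[i])
        if 0 < t != INF:
            A[i] = [v - t if v != INF else INF for v in A[i]]
            total += t
    for j in range(n):                                   # then each column
        t = min(A[i][j] for i in range(n))
        if 0 < t != INF:
            for i in range(n):
                if A[i][j] != INF:
                    A[i][j] -= t
            total += t
    return total

# Assumed cost matrix of the five-vertex instance (see lead-in above):
A = [[INF, 20, 30, 10, 11],
     [15, INF, 16,  4,  2],
     [ 3,  5, INF,  2,  4],
     [19,  6, 18, INF,  3],
     [16,  4,  7, 16, INF]]
print(reduce_matrix(A))  # 25, the c' value for the root of the state space tree
```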
Let A be the reduced cost matrix for node R.
Let S be a child of R such that the tree edge (R, S) corresponds to including edge (i, j) in the tour.
If S is not a leaf, then the reduced cost matrix for S may be obtained as follows:
1. Change all entries in row i and column j of A to ∞. This prevents the use of any more edges leaving vertex i or
entering vertex j.
2. Set A(j, 1) to ∞. This prevents the use of edge (j, 1).
3. Reduce all rows and columns in the resulting matrix except for rows and columns containing only ∞.
• The matrix for node 3 is obtained from the root's reduced matrix by applying these rules for edge (1, 3): setting
row 1 and column 3 to ∞, setting A(3, 1) to ∞, and then reducing column 1 by subtracting 11. The c'() value for
node 3 is therefore 25 + 17 (the cost of edge (1, 3) in the reduced matrix) + 11 = 53.
The matrices and c'() values for nodes 2, 4, and 5 are obtained similarly.
The value of upper (the cost of the best tour found so far) is unchanged, and node 4 becomes the next E-node.
The portion of the state space tree that gets generated is shown below.
[Figure: portion of the state space tree generated; not reproduced.]
The children of the E-node (node 4), nodes 6, 7, and 8, are generated.
The live nodes at this time are nodes 2, 3, 5, 6, 7, and 8. Node 6 has the least c'() value and becomes the next E-node.
• Backtracking is a refinement of the brute force approach in which the technique efficiently searches for a
solution to the problem among all available options.
The branch-and-bound technique is an algorithm used to solve optimization problems by breaking a problem
down into smaller subproblems; using a bounding function, it eliminates the subproblems that cannot contain an
optimal solution.
• Backtracking is a general algorithm for finding all the solutions to some computational problems, notably con-
straint satisfaction problems, that incrementally builds possible candidates to the solutions and abandons a candi-
date as soon as it determines that the candidate cannot possibly be completed to finally become a valid solution.
• Backtracking is used to find all possible solutions available to a problem. When it realises that it has made a bad
choice, it undoes the last choice by backing it up. It searches the state space tree until it has found a solution for
the problem.
Branch-and-Bound is used to solve optimisation problems. When it realises that it already has a better optimal
solution than the one the pre-solution leads to, it abandons that pre-solution. It completely searches the state space tree
to get the optimal solution.
• Backtracking can be useful where other optimization techniques like greedy or dynamic programming fail;
such algorithms are typically slower than those counterparts.
Branch and bound builds the state space tree and finds the optimal solution quickly by pruning the tree
branches which cannot satisfy the bound.
• Backtracking traverses the state space tree in a DFS (Depth First Search) manner.
Branch-and-Bound may traverse the tree in any manner, DFS or BFS.
• Applications of backtracking include the N Queens problem, the graph coloring problem, and the Hamiltonian
cycle problem.
Applications of branch and bound include the job sequencing problem, the traveling salesman problem, the
knapsack problem, and the sum of subsets problem.