
Back Tracking

- In its basic form, backtracking resembles a depth-first search in a directed graph.
- The graph is usually a tree, or at least it contains no cycles.
- The graph exists only implicitly.
- The aim of the search is to find solutions to some problem.
- We do this by building partial solutions as the search proceeds.
- Such partial solutions limit the regions in which a complete solution may be found.
- Generally, when the search begins, nothing is known about the solutions to the problem.
- Each move along an edge of the implicit graph corresponds to adding a new element to a partial solution; that is, it narrows down the remaining possibilities for a complete solution.
- The search is successful if, proceeding in this way, a solution can be completely defined.
- In this case the algorithm may either stop or continue looking for alternative solutions.
- On the other hand, the search is unsuccessful if at some stage the partial solution constructed so far cannot be completed.
- In this case the search backs up, just as in a depth-first search.
- When it gets back to a node with one or more unexplored neighbors, the search for a solution
resumes.
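
The general scheme just described can be summarized in code. The following is a minimal generic sketch in Python (not part of the original notes); the helper names extensions, is_complete and report are hypothetical problem-specific hooks.

def backtrack(partial, extensions, is_complete, report):
    # partial is the current partial solution (a list of choices made so far).
    if is_complete(partial):
        report(partial)                      # a solution has been completely defined
        return
    for item in extensions(partial):         # the unexplored neighbours of this node
        backtrack(partial + [item], extensions, is_complete, report)
        # returning from the call is the "backing up" step of the search

Each recursive call corresponds to moving along one edge of the implicit graph; the eight queens and knapsack algorithms below are instances of this scheme.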

The Eight queens problem


- The classic problem is to place eight queens on a chessboard in such a way that none of them threatens any of the others.
- Recall that a queen threatens the squares in the same row, in the same column, or on the same
diagonals.
- The most obvious way to solve this problem consists of trying systematically all the ways of placing
eight queens on a chessboard, checking each time to see whether a solution has been obtained.
- This approach is of no practical use, even with a computer, since the number of positions we would have to check is C(64, 8) = 4,426,165,368.
- The first improvement we might try consists of never putting more than one queen on any given row.
- This reduces the computer representation of the chessboard to a vector of eight elements, each giving
the position of the queen in the corresponding row.
- For instance, the vector (3, 1, 6, 2, 8, 6, 4, 7) represents the position where the queen on row 1 is in
column 3, the queen on row 2 is in column 1, and so on.

- Solution using Backtracking


- Backtracking allows us to do better than this. As a first step, we reformulate the eight queens problem as a tree-searching problem. We say that a vector V[1..k] of integers between 1 and 8 is k-promising, for 0 ≤ k ≤ 8, if none of the k queens placed in positions (1, V[1]), (2, V[2]), ..., (k, V[k]) threatens any of the others.
- Mathematically, a vector V is k-promising if, for every pair of integers i and j between 1 and k with i ≠ j, we have V[i] − V[j] ∉ {i − j, 0, j − i} (this condition is checked directly in the sketch after this list). For k ≤ 1, any vector V is k-promising.
- Solutions to the eight queens problem correspond to vectors that are 8-promising.
- Let N be the set of k-promising vectors, 0 ≤ k ≤ 8.
Let G = (N, A) be the directed graph such that (U, V) ∈ A if and only if there exists an integer k, 0 ≤ k < 8, such that
- U is k-promising,
- V is (k + 1)-promising, and
- U[i] = V[i] for every i ∈ [1..k].
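
As an illustration (not part of the original notes), the k-promising condition above can be checked directly in Python; the function name is_promising is a hypothetical helper.

def is_promising(v):
    # v[0..k-1] gives the column of the queen placed in each of the first k rows.
    # The vector is k-promising if no two queens share a column or a diagonal,
    # i.e. v[i] - v[j] is not in {i - j, 0, j - i} for every pair i != j.
    k = len(v)
    for i in range(k):
        for j in range(i + 1, k):
            if v[i] - v[j] in (i - j, 0, j - i):
                return False
    return True

For example, is_promising([3, 1, 6, 2, 8, 6, 4, 7]) returns False because the queens in rows 3 and 6 share column 6, while any 8-element vector for which it returns True is a solution to the problem.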

The algorithm for the 8-queens problem is given as follows:


procedure queens(k, col, diag45, diag135)
    {sol[1..k] is k-promising,
     col = {sol[i] | 1 ≤ i ≤ k},
     diag45 = {sol[i] − i + 1 | 1 ≤ i ≤ k}, and
     diag135 = {sol[i] + i − 1 | 1 ≤ i ≤ k}}
    if k = 8 then {an 8-promising vector is a solution}
        write sol
    else {explore (k+1)-promising extensions of sol}
        for j ← 1 to 8 do
            if j ∉ col and j − k ∉ diag45 and j + k ∉ diag135
                then sol[k+1] ← j
                     {sol[1..k+1] is (k+1)-promising}
                     queens(k + 1, col ∪ {j}, diag45 ∪ {j − k}, diag135 ∪ {j + k})
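
A runnable version of the same procedure, written as a sketch in Python; the structure mirrors the pseudocode, except that sol is passed as an argument rather than being a global array, and each solution is simply printed.

def queens(k, col, diag45, diag135, sol):
    # sol[0..k-1] is k-promising; col, diag45 and diag135 hold the columns and
    # the two diagonal indices already occupied by the first k queens.
    if k == 8:
        print(sol)                           # an 8-promising vector is a solution
        return
    for j in range(1, 9):                    # try column j for the queen in row k + 1
        if j not in col and j - k not in diag45 and j + k not in diag135:
            queens(k + 1, col | {j}, diag45 | {j - k}, diag135 | {j + k}, sol + [j])

queens(0, set(), set(), set(), [])           # prints all 92 solutions

The sets col, diag45 and diag135 make the promising test take constant time per candidate column, which is the point of carrying them along in the recursion.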

Solution of the 4-queens problem (figure)

Knapsack problem using backtracking


- We are given a certain number of objects and a knapsack.
- We shall suppose that we have n types of object, and that an adequate number of objects of each type is available; this does not alter the problem in any important way.
- For i = 1, 2, ..., n, an object of type i has a positive weight wi and a positive value vi.
- The knapsack can carry a weight not exceeding W.
- Our aim is to fill the knapsack in a way that maximizes the value of the included objects, while
respecting the capacity constraint.
- We may take an object or leave it behind, but we may not take a fraction of an object.
- Suppose for concreteness that we wish to solve an instance of the problem involving four types of
objects, whose weights are respectively 2, 3, 4 and 5 units, and whose values are 3, 5, 6 and 10. The
knapsack can carry a maximum of 8 units of weight.
- This can be done using backtracking by exploring the implicit tree shown below.
Knapsack state-space tree (figure)

- Here a node such as (2, 3; 8) corresponds to a partial solution of our problem.


- The figures to the left of the semicolon are the weights of the objects we have decided to include, and
the figure to the right is the current value of the load.
- Moving down from a node to one of its children corresponds to deciding which kind of object to put
into the knapsack next. Without loss of generality we may agree to load objects into the knapsack in
order of increasing weight.
- Initially the partial solution is empty.
- The backtracking algorithm explores the tree as in a depth-first search, constructing nodes and partial
solutions as it goes.
- In the example, the first node visited is (2; 3), the next is (2, 2; 6), the third is (2, 2, 2; 9) and the fourth is (2, 2, 2, 2; 12).
- As each new node is visited, the partial solution is extended.
- After visiting these four nodes, the depth-first search is blocked: node (2, 2, 2, 2; 12) has no unvisited
successors (indeed no successors at all), since adding more items to this partial solution would
violate the capacity constraint.
- Since this partial solution may turn out to be the optimal solution to our instance, we memorize it.
- The depth-first search now backs up to look for other solutions.
- At each step back up the tree, the corresponding item is removed from the partial solution.
- In the example, the search first backs up to (2, 2, 2; 9), which also has no unvisited successors; one step
further up the tree, however, at node (2, 2; 6), two successors remain to be visited.
- After exploring nodes (2, 2, 3; 11) and (2, 2, 4; 12), neither of which improves on the solution
previously memorized, the search backs up one stage further, and so on.
- Exploring the tree in this way, (2, 3, 3; 13) is found to be a better solution than the one we have, and
later (3, 5; 15) is found to be better still.
- Since no other improvement is made before the search ends, this is the optimal solution to the instance.
- The algorithm can be given as follows:

function backpack(i, r)
    {Calculates the value of the best load that can be constructed using
     items of types i to n and whose total weight does not exceed r}
    b ← 0
    {Try each allowed kind of item in turn}
    for k ← i to n do
        if w[k] ≤ r then
            b ← max(b, v[k] + backpack(k, r − w[k]))
    return b

- To find the value of the best load, call backpack(1, W).
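
A runnable sketch of the same recursion in Python, using the instance from the text (weights 2, 3, 4, 5; values 3, 5, 6, 10; capacity W = 8); the list names w and v follow the pseudocode, with 0-based indices.

w = [2, 3, 4, 5]          # weights of the four object types
v = [3, 5, 6, 10]         # corresponding values

def backpack(i, r):
    # Best value of a load built from object types i..n-1 whose total weight
    # does not exceed r; an object type may be used more than once.
    b = 0
    for k in range(i, len(w)):
        if w[k] <= r:
            b = max(b, v[k] + backpack(k, r - w[k]))
    return b

print(backpack(0, 8))     # prints 15, the value of the optimal load (3, 5; 15)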

Minmax Principle
- Sometimes it is impossible to complete the search because of the very large number of nodes, as in games such as chess.
- The only solution is to be content with a partial search.
- Minimax is a heuristic approach used to find a move that is likely to be among the best moves available.
- Whichever search technique we use, the awkward fact remains that for a game such as chess a
complete search of the associated graph is out of the question.
- In this situation we have to be content with a partial search around the current position.
- This is the principle underlying an important heuristic called Minimax.
- Minimax (sometimes Minmax) is a decision rule used in decision theory, game theory, statistics and
philosophy for minimizing the possible loss for a worst case (maximum loss) scenario.
- Originally formulated for two-player zero-sum game theory, covering both the cases where players take
alternate moves and those where they make simultaneous moves, it has also been extended to more
complex games and to general decision making in the presence of uncertainty.
- A Minimax algorithm is a recursive algorithm for choosing the next move in an n-player game, usually
a two-player game.
- A value is associated with each position or state of the game. This value is computed by means of a
position evaluation function and it indicates how good it would be for a player to reach that position.
The player then makes the move that maximizes the minimum value of the position resulting from the
opponent's possible following moves.
- Although this heuristic does not allow us to be certain of winning whenever this is possible, it finds a
move that may reasonably be expected to be among the best moves available, while exploring only part
of the graph starting from some given position.
- Exploration of the graph is normally stopped before the terminal positions are reached, using one of
several possible criteria, and the positions where exploration stopped are evaluated heuristically.
- In a sense, this is merely a systematic version of the method used by some human players that consists
of looking ahead a small number of moves.
Example: Tic tac toe
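
The minimax rule itself can be sketched in a few lines of Python (an illustration, not from the original notes); game_over, evaluate, legal_moves and apply_move are hypothetical game-specific functions, and the search is cut off at a fixed depth, with positions at the cut-off evaluated heuristically as described above.

def minimax(position, depth, maximizing, game_over, evaluate, legal_moves, apply_move):
    # Stop at terminal positions or at the depth cut-off and evaluate heuristically.
    if depth == 0 or game_over(position):
        return evaluate(position)
    # Every non-terminal position is assumed to have at least one legal move.
    values = [minimax(apply_move(position, m), depth - 1, not maximizing,
                      game_over, evaluate, legal_moves, apply_move)
              for m in legal_moves(position)]
    # The player to move maximizes; the opponent is assumed to minimize.
    return max(values) if maximizing else min(values)

The move actually played is the one whose resulting position has the best minimax value for the player to move.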
Branch and Bound Travelling Salesman Problem
- Branch and Bound
- Set up a bounding function, which is used to compute a bound (on the value of the objective function) at a node of the state-space tree and to determine whether the node is promising.
- Promising (the bound is better than the value of the best solution so far): expand beyond the node.
- Non-promising (the bound is no better than the value of the best solution so far): do not expand beyond the node (prune that branch of the state-space tree).
- Traveling Salesman Problem
- Construct the state-space tree:
- Each node of the state-space tree stores a path of vertices in the graph. A node that is not a leaf represents all the tours that start with the path stored at that node; each leaf represents a complete tour (or is a non-promising node).
- Branch-and-bound: we need to determine a lower bound for each node.
- For example, to determine a lower bound for node [1, 2] means to determine a
lower bound on the length of any tour that starts with edge 1—2.
- Expand each promising node, and stop when all the promising nodes have been
expanded. During this procedure, prune all the non-promising nodes.
- Promising node: the node's lower bound is less than the current minimum tour length.
- Non-promising node: the node's lower bound is no less than the current minimum tour length.
- Because a tour must leave every vertex exactly once, a lower bound b on the length of a tour is the sum, over all vertices, of the minimum cost of leaving that vertex.
- The lower bound on the cost of leaving vertex v1 is given by the minimum of all the
nonzero entries in row 1 of the adjacency matrix.
- The lower bound on the cost of leaving vertex vn is given by the minimum of all the
nonzero entries in row n of the adjacency matrix.
- Because every vertex must be entered and exited exactly once, a lower bound on the
length of a tour is the sum of the minimum cost of entering and leaving every
vertex.
- For a given edge (u, v), think of half of its weight as the exiting cost of u, and half of
its weight as the entering cost of v.
- The total length of a tour = the total cost of visiting (entering and exiting) every vertex
exactly once.
- The lower bound of the length of a tour = the lower bound of the total cost of visiting
(entering and exiting) every vertex exactly once.
- Calculation:
- For each vertex, pick the two shortest adjacent edges (their sum divided by 2 is a lower bound on the total cost of entering and exiting that vertex); add these quantities up over all the vertices (a code sketch follows the example below).
- Assume that the tour starts with vertex a and that b is visited before c.
Example:

[Figure: a weighted graph on vertices a, b, c and d; its edge weights are used to compute the lower bounds described above.]