Backtracking
- In its basic form, backtracking resembles a depth-first search in a directed graph.
- The graph is usually a tree or at least it does not contain cycles.
- The graph exists only implicitly.
- The aim of the search is to find solutions to some problem.
- We do this by building partial solutions as the search proceeds.
- Such partial solutions limit the regions in which a complete solution may be found.
- Generally, when the search begins, nothing is known about the solutions to the problem.
- Each move along an edge of the implicit graph corresponds to adding a new element to a partial
solution, that is, to narrowing down the remaining possibilities for a complete solution.
- The search is successful if, proceeding in this way, a solution can be completely defined.
- In this case the algorithm may either stop or continue looking for alternative solutions.
- On the other hand, the search is unsuccessful if at some stage the partial solution constructed so far
cannot be completed.
- In this case the search backs up, just as a depth-first search does.
- When it gets back to a node with one or more unexplored neighbors, the search for a solution
resumes.
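As a concrete illustration of these ideas, the sketch below applies backtracking to the n-queens puzzle (the choice of puzzle and of Python are illustrative and not taken from the notes). Each node of the implicit tree is a partial placement of queens, one per row, and each edge places a queen in the next row.

```python
# A minimal backtracking sketch on the n-queens puzzle (illustrative example).
# Each node of the implicit tree is a partial solution: queens placed in the
# first len(cols) rows, one per row. Each edge adds a queen in the next row.

def solve_n_queens(n):
    solutions = []
    cols = []                        # cols[r] = column of the queen in row r

    def safe(row, col):
        # The partial solution can only be extended if the new queen shares
        # no column and no diagonal with the queens placed so far.
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def backtrack(row):
        if row == n:                 # the partial solution is now complete
            solutions.append(cols[:])
            return                   # keep searching for alternate solutions
        for col in range(n):         # unexplored neighbours of this node
            if safe(row, col):
                cols.append(col)     # extend the partial solution
                backtrack(row + 1)
                cols.pop()           # back up and try the next possibility

    backtrack(0)
    return solutions

print(len(solve_n_queens(6)))        # the 6-queens puzzle has 4 solutions
```

When no column of the current row is safe, the partial solution cannot be completed and the recursion backs up to the previous row, exactly as described above.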
Minimax Principle
- Sometimes it is impossible to complete a search because of the large number of nodes, for example in
games such as chess.
- The only option is then to be content with a partial solution.
- Minimax is a heuristic approach used to find a move that can reasonably be expected to be better than
the other available moves.
- Whichever search technique we use, the awkward fact remains that for a game such as chess a
complete search of the associated graph is out of the question.
- In this situation we have to be content with a partial search around the current position.
- This is the principle underlying an important heuristic called Minimax.
- Minimax (sometimes Minmax) is a decision rule used in decision theory, game theory, statistics and
philosophy for minimizing the possible loss for a worst case (maximum loss) scenario.
- Originally formulated for two-player zero-sum game theory, covering both the cases where players take
alternate moves and those where they make simultaneous moves, it has also been extended to more
complex games and to general decision making in the presence of uncertainty.
- A Minimax algorithm is a recursive algorithm for choosing the next move in an n-player game, usually
a two-player game.
- A value is associated with each position or state of the game. This value is computed by means of a
position evaluation function and it indicates how good it would be for a player to reach that position.
The player then makes the move that maximizes the minimum value of the position resulting from the
opponent's possible following moves.
- Although this heuristic does not allow us to be certain of winning whenever this is possible, it finds a
move that may reasonably be expected to be among the best moves available, while exploring only part
of the graph starting from some given position.
- Exploration of the graph is normally stopped before the terminal positions are reached, using one of
several possible criteria, and the positions where exploration stopped are evaluated heuristically.
- In a sense, this is merely a systematic version of the method used by some human players that consists
of looking ahead a small number of moves.
Example: Tic-tac-toe
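Below is a minimal minimax sketch for tic-tac-toe (the board representation and the function names are assumptions made for the sketch). Because the tic-tac-toe game tree is small, the recursion runs all the way to terminal positions; for a game such as chess it would instead stop at a fixed depth and apply a heuristic position evaluation function, as described above.

```python
# Minimax sketch for tic-tac-toe (illustrative). The board is a list of nine
# cells, each 'X', 'O' or ' '; 'X' is the maximizing player, 'O' the minimizer.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),     # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),     # columns
         (0, 4, 8), (2, 4, 6)]                # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def place(board, i, mark):
    new = board[:]
    new[i] = mark
    return new

def minimax(board, maximizing):
    # Value of a position: +1 if X can force a win, -1 if O can, 0 for a draw.
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0                               # board full: a draw
    if maximizing:                             # X maximizes the minimum value
        return max(minimax(place(board, i, 'X'), False) for i in moves)
    else:                                      # O minimizes the maximum value
        return min(minimax(place(board, i, 'O'), True) for i in moves)

def best_move(board, mark):
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    value = lambda i: minimax(place(board, i, mark), mark != 'X')
    return max(moves, key=value) if mark == 'X' else min(moves, key=value)

# X to move: taking the centre (index 4) completes the 0-4-8 diagonal and wins.
board = ['X', 'O', ' ',
         ' ', ' ', ' ',
         ' ', 'O', 'X']
print(best_move(board, 'X'))                   # prints 4
```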
Branch and Bound Travelling Salesman Problem
- Branch and Bound
- Set up a bounding function, which is used to compute a bound (for the value of the
objective function) at a node on a state-space tree and determine if it is promising.
- Promising (if the bound is better than the value of the best solution so far): expand
beyond the node.
- Non-promising (if the bound is no better than the value of the best solution so far):
do not expand beyond the node (prune the state-space tree).
- Traveling Salesman Problem
- Construct the state-space tree:
- A node of the state-space tree corresponds to a vertex of the graph and stores the path built
so far. A node that is not a leaf represents all the tours that start with the path stored at
that node; each leaf represents a complete tour (or a non-promising node).
- Branch-and-bound: we need to determine a lower bound for each node.
- For example, to determine a lower bound for node [1, 2] means to determine a
lower bound on the length of any tour that starts with the edge from vertex 1 to vertex 2.
- Expand each promising node, and stop when all the promising nodes have been
expanded. During this procedure, prune all the non-promising nodes.
- Promising node: the node’s lower bound is less than the current minimum tour length.
- Non-promising node: the node’s lower bound is not less than the current minimum tour
length.
- Because a tour must leave every vertex exactly once, a lower bound b on the length of a
tour is the sum, over all vertices, of the minimum cost of leaving that vertex.
- The lower bound on the cost of leaving vertex v1 is given by the minimum of all the
nonzero entries in row 1 of the adjacency matrix.
- The lower bound on the cost of leaving vertex vn is given by the minimum of all the
nonzero entries in row n of the adjacency matrix.
- Because every vertex must be entered and exited exactly once, a lower bound on the
length of a tour is the sum of the minimum cost of entering and leaving every
vertex.
- For a given edge (u, v), think of half of its weight as the exiting cost of u, and half of
its weight as the entering cost of v.
- The total length of a tour = the total cost of visiting (entering and exiting) every vertex
exactly once.
- The lower bound of the length of a tour = the lower bound of the total cost of visiting
(entering and exiting) every vertex exactly once.
- Calculation:
- For each vertex, pick the two shortest adjacent edges (their sum divided by 2 is a
lower bound on the total cost of entering and exiting that vertex); then add up these
contributions over all the vertices. A code sketch of this calculation is given after the example below.
- Assume that the tour starts with vertex a and that b is visited before c.
Example: a weighted graph on the four vertices a, b, c, d (the figure with the edge weights is not
reproduced here).
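The sketch below puts the pieces above together, assuming the graph is given as a symmetric weight matrix W with W[i][j] the cost of edge (i, j); the function names and the 4-vertex weights are assumptions for the sketch, since the figure's edge weights are not reproduced here. root_lower_bound implements the two-cheapest-edges calculation, while branch_and_bound_tsp expands promising nodes and prunes non-promising ones using the simpler bound based on the minimum cost of leaving every remaining vertex.

```python
# Branch-and-bound TSP sketch (illustrative). W is a symmetric weight matrix:
# W[i][j] is the cost of edge (i, j), and W[i][i] = 0.
import math

def root_lower_bound(W):
    # Two-cheapest-edges bound: for each vertex, half the sum of its two
    # smallest incident weights (entering + exiting cost), summed over vertices.
    n = len(W)
    total = 0.0
    for v in range(n):
        two_cheapest = sorted(W[v][u] for u in range(n) if u != v)[:2]
        total += sum(two_cheapest) / 2
    return total

def branch_and_bound_tsp(W):
    n = len(W)
    best_len, best_tour = math.inf, None

    def bound(path, cost):
        # Lower bound for any tour extending `path`: the cost of the edges
        # chosen so far, plus the minimum cost of leaving the last vertex on
        # the path and of leaving every vertex not yet visited.
        remaining = [v for v in range(n) if v not in path]
        return cost + sum(min(W[v][u] for u in range(n) if u != v)
                          for v in [path[-1]] + remaining)

    def expand(path, cost):
        nonlocal best_len, best_tour
        if len(path) == n:                       # leaf node: a complete tour
            tour_len = cost + W[path[-1]][path[0]]
            if tour_len < best_len:
                best_len, best_tour = tour_len, path[:]
            return
        for v in range(n):
            if v in path:
                continue
            new_cost = cost + W[path[-1]][v]
            if bound(path + [v], new_cost) < best_len:   # promising node
                expand(path + [v], new_cost)
            # otherwise the node is non-promising and its subtree is pruned

    expand([0], 0)                               # every tour starts at vertex 0
    return best_len, best_tour

# Hypothetical 4-vertex instance (vertices a, b, c, d = 0, 1, 2, 3; the
# weights are made up for illustration and do not come from the figure).
W = [[0, 3, 1, 5],
     [3, 0, 6, 7],
     [1, 6, 0, 2],
     [5, 7, 2, 0]]
print(root_lower_bound(W))       # lower bound on the length of any tour
print(branch_and_bound_tsp(W))   # optimal tour length and vertex order
```

The pruning test bound(...) < best_len is exactly the promising/non-promising test from the list above: a node is expanded only while its lower bound is still below the length of the best tour found so far.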