
AI Search Strategies Guide

The document discusses various searching strategies in artificial intelligence, including uninformed and informed search algorithms. It details properties of search algorithms, compares breadth-first search, depth-first search, and depth-limited search, and introduces informed search techniques like A* and greedy best-first search. Additionally, it highlights the importance of heuristic functions in optimizing search efficiency and outlines the advantages and disadvantages of each algorithm.


UNIT II.

SEARCHING TOWARDS
SOLUTION

Searching Strategies: Introduction

Searching is the process of finding a solution for a given problem. In artificial
intelligence this can be done using either uninformed or informed searching
strategies.

Searching is a step-by-step procedure to solve a search problem in a given search space. A
search problem has three main components:

a. Search Space: the set of possible solutions which a system may have.
b. Start State: the state from which the agent begins the search.
c. Goal Test: a function which observes the current state and returns whether the goal
state has been achieved or not.

Properties of Search Algorithms:

Following are the four essential properties used to compare the efficiency of search
algorithms:

Completeness: A search algorithm is said to be complete if it is guaranteed to return a
solution whenever at least one solution exists for the given input.

Optimality: If the solution found by an algorithm is guaranteed to be the best solution
(lowest path cost) among all solutions, it is said to be an optimal solution.

Time Complexity: A measure of the time an algorithm takes to complete its task.

Space Complexity: The maximum storage space required at any point during the search,
expressed in terms of the complexity of the problem.
SEARCH TREE

A search tree is generated by the initial state and the successor function that
together define the state space. In general, we may have a search graph rather than a
search tree, when the same state can be reached from multiple paths.

The choice of which state to expand is determined by the search strategy. There can be
an infinite number of paths in a state space, in which case the search tree has an
infinite number of nodes.

A node is a data structure with five components:

➢ STATE: a state in the state space to which the node corresponds;

➢ PARENT-NODE: the node in the search tree that generated this node;

➢ ACTION: the action that was applied to the parent to generate the node;

➢ PATH-COST: the cost, denoted by g(n), of the path from the initial state to
the node, as indicated by the parent pointers;

➢ DEPTH: the number of steps along the path from the initial state.

Uninformed Search Algorithms:

The search algorithms in this section have no additional information on the goal node other
than the one provided in the problem definition. The plans to reach the goal state from the
start state differ only by the order and/or length of actions. Uninformed search is also
called Blind search.

The following uninformed search algorithms are discussed in this section.

1. Breadth First Search (BFS)


2. Depth First Search (DFS)
3. Depth Bounded DFS

Each of these algorithms will have:

• A problem graph, containing the start node S and the goal node G.
• A strategy, describing the manner in which the graph will be traversed to get to
G.
• A fringe, which is a data structure used to store all the possible states (nodes)
that can be reached from the current state.
• A tree that results while traversing to the goal node.
• A solution plan, which is the sequence of nodes from S to G.

Breadth first search

Breadth First Search (BFS) searches breadth-wise in the problem space. Breadth-first
search is like traversing a tree where each node is a state which may be a potential
candidate for a solution. It expands nodes from the root of the tree and generates one
level of the tree at a time until a solution is found. It is easily implemented by
maintaining a queue of nodes. Initially the queue contains just the root. In each iteration,
the node at the head of the queue is removed and expanded, and the generated child
nodes are added to the tail of the queue.

Algorithm: Breadth-First Search

1. Create a variable called NODE-LIST and set it to the initial state.

2. Loop until the goal state is found or NODE-LIST is empty:

   a. Remove the first element, say E, from NODE-LIST. If NODE-LIST
      was empty, then quit.
   b. For each way that each rule can match the state described in E do:

      i) Apply the rule to generate a new state.

      ii) If the new state is the goal state, quit and return this state.

      iii) Otherwise, add this state to the end of NODE-LIST.

Since it never generates a node in the tree until all the nodes at shallower levels have
been generated, breadth-first search always finds a shortest path to a goal. Since each
node can be generated in constant time, the amount of time used by breadth-first
search is proportional to the number of nodes generated, which is a function of the
branching factor b and the solution depth d. Hence the asymptotic time complexity of
breadth-first search is O(b^d).
Simple binary tree for Breadth First Search

Look at the above figure with nodes starting from the root node R at the first level, A and B
at the second level, and C, D, E and F at the third level. If we want to search for node E,
BFS will search level by level: first it checks whether E is the root, then it checks the
nodes at the second level, and finally it finds E at the third level.
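The queue-based procedure above can be sketched in Python. This is an illustrative sketch, not code from the text: the adjacency-dictionary representation of the graph and the node names (taken from the figure) are assumptions.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search over an adjacency-dict graph.
    Returns the path from start to goal, or None."""
    if start == goal:
        return [start]
    frontier = deque([[start]])          # queue of paths; head is shallowest
    visited = {start}
    while frontier:
        path = frontier.popleft()        # remove the node at the head of the queue
        for child in graph.get(path[-1], []):
            if child in visited:
                continue
            if child == goal:            # goal test on generation
                return path + [child]
            visited.add(child)
            frontier.append(path + [child])  # add children to the tail
    return None

# Tree from the figure: R at level 1, A and B at level 2, C..F at level 3
tree = {'R': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E', 'F']}
print(bfs(tree, 'R', 'E'))  # ['R', 'B', 'E']
```

Because the queue is processed level by level, the first path found to E is also a shallowest one.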

Advantages of Breadth-First Search

1. Breadth-first search will never get trapped exploring a useless path forever.

2. If there is a solution, BFS will definitely find it.

3. If there is more than one solution, BFS can find the minimal one, i.e., the one
that requires the fewest steps.

Disadvantages of Breadth-First Search

1. The main drawback of breadth-first search is its memory requirement. Since
each level of the tree must be saved in order to generate the next level, and the
amount of memory is proportional to the number of nodes stored, the space
complexity of BFS is O(b^d). As a result, BFS is severely space-bound in
practice and will exhaust the memory available on typical computers in a
matter of minutes.
2. If the solution is far from the root, breadth-first search will consume a lot
of time.

Depth first search

Depth First Search (DFS) searches deeper into the problem space. Depth-first search
always expands a successor of the deepest unexpanded node. It uses a last-in first-out stack
for keeping the unexpanded nodes. More commonly, depth-first search is implemented
recursively, with the recursion stack taking the place of an explicit node stack.

Algorithm: Depth First Search

1. If the initial state is a goal state, quit and return success.

2. Otherwise, loop until success or failure is signaled:

a) Generate a state, say E, and let it be a successor of the initial state. If there is no
successor, signal failure.

b) Call Depth-First Search with E as the initial state.

c) If success is returned, signal success. Otherwise continue in this loop.
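The recursive formulation above can be sketched in Python. As before, this is an illustrative sketch under an assumed adjacency-dictionary representation; the recursion stack replaces an explicit node stack.

```python
def dfs(graph, state, goal, visited=None):
    """Recursive depth-first search; returns a path to goal, or None."""
    if visited is None:
        visited = set()
    if state == goal:                     # step 1: goal test
        return [state]
    visited.add(state)
    for successor in graph.get(state, []):             # step 2a: generate a successor
        if successor in visited:
            continue
        result = dfs(graph, successor, goal, visited)  # step 2b: recurse on E
        if result is not None:                         # step 2c: propagate success
            return [state] + result
    return None                           # no successor led to the goal

tree = {'R': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E', 'F']}
print(dfs(tree, 'R', 'E'))  # ['R', 'B', 'E']
```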

Example:

Advantages of Depth-First Search

• The advantage of depth-first search is that its memory requirement is only linear
with respect to the depth of the search, in contrast with breadth-first search,
which requires much more space.
• The time complexity of a depth-first search to depth d is O(b^d), since it
generates the same set of nodes as breadth-first search, just in a different order.
Thus in practice depth-first search is time-limited rather than space-limited.

• If depth-first search finds a solution without exploring much of the space, the
time and space it takes will be very small.
Disadvantages of Depth-First Search

• The disadvantage of depth-first search is that it may go down the left-most
path forever; even a finite graph can generate an infinite tree.
• Depth-first search is not guaranteed to find a solution.

• There is no guarantee of finding a minimal solution if more than one solution
exists.

Depth bounded DFS (Depth limited search)

Depth-first search will not find a goal if it searches down a path that has infinite length.
So, in general, depth-first search is not guaranteed to find a solution, so it is not complete.
This problem is eliminated by limiting the depth of the search to some value l. However,
this introduces another way of preventing depth-first search from finding the goal: if the
goal is deeper than l it will not be found.

Its time complexity is O(b^l) and its space complexity is O(bl).

Depth-limited search can be terminated with two Conditions of failure:

o Standard failure value: It indicates that problem does not have any solution.
o Cutoff failure value: It defines no solution for the problem within a given depth limit.

Algorithm:
function Depth-Limited-Search(problem, limit) returns a solution, or failure/cutoff
    return Recursive-DLS(Make-Node(Initial-State[problem]), problem, limit)

function Recursive-DLS(node, problem, limit) returns a solution, or failure/cutoff
    cutoff-occurred? <- false
    if Goal-Test(problem, State[node]) then return Solution(node)
    else if Depth[node] = limit then return cutoff
    else for each successor in Expand(node, problem) do
        result <- Recursive-DLS(successor, problem, limit)
        if result = cutoff then cutoff-occurred? <- true
        else if result != failure then return result
    if cutoff-occurred? then return cutoff else return failure
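The pseudocode above translates directly into Python. A minimal sketch, again assuming an adjacency-dictionary graph; `'cutoff'` and `'failure'` strings stand in for the two failure values distinguished in the text.

```python
def depth_limited_search(graph, state, goal, limit):
    """Recursive depth-limited search mirroring Recursive-DLS.
    Returns a path, 'cutoff', or 'failure'."""
    if state == goal:
        return [state]
    if limit == 0:
        return 'cutoff'                       # depth limit l reached
    cutoff_occurred = False
    for successor in graph.get(state, []):
        result = depth_limited_search(graph, successor, goal, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result != 'failure':
            return [state] + result           # propagate the solution upward
    return 'cutoff' if cutoff_occurred else 'failure'

tree = {'R': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E', 'F']}
print(depth_limited_search(tree, 'R', 'E', 1))  # 'cutoff': E lies deeper than l = 1
print(depth_limited_search(tree, 'R', 'E', 2))  # ['R', 'B', 'E']
```

The first call illustrates the cutoff failure value: the goal is deeper than the limit, so no solution exists within the given depth.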
Advantages:

Depth-limited search is memory efficient.

Disadvantages:

o Depth-limited search also has the disadvantage of incompleteness.

o It may not be optimal if the problem has more than one solution.

Summary of algorithms

Criterion    Breadth-First    Depth-First    Depth-Limited

Complete?    Yes              No             No

Time         O(b^(d+1))       O(b^m)         O(b^l)

Space        O(b^(d+1))       O(bm)          O(bl)

Optimal?     Yes              No             No

INFORMED SEARCH

➢ Informed search strategies use problem-specific knowledge beyond the
definition of the problem itself.
➢ They use knowledge of the problem domain to build an evaluation function f.
➢ For every node n in the search space, f(n) quantifies the desirability of expanding
n in order to reach the goal.
➢ To solve large problems with large numbers of possible states, problem-specific
knowledge must be added to increase the efficiency of the search algorithm.
➢ Informed search can find solutions more efficiently than an uninformed strategy.
➢ The key component of an informed search strategy is the heuristic function,
so it is also called heuristic search.

Heuristic Function
A heuristic technique helps in solving problems, even though there is no
guarantee that it will never lead in the wrong direction. There are heuristics of
general applicability as well as domain-specific ones. General-purpose heuristics
can be used in a specific domain by coupling them with some domain-specific
heuristics. There are two major ways in which domain-specific heuristic
information can be incorporated into a rule-based search procedure.
A typical solution to the 8-puzzle problem has around 20 steps, and the
branching factor is about 3. Hence, an exhaustive search to depth 20 would look at
about 3^20 ≈ 3.5 × 10^9 states.

Puzzle Problem

By keeping track of repeated states, this number can be cut down to 9! = 362,880
different arrangements of 9 squares. We need a heuristic to further reduce
this number. If we want to find the shortest solutions, we need a heuristic function
that never overestimates the number of steps to the goal. Here are two possibilities:
• h1 = the number of tiles that are in the wrong position. This is admissible
because any tile that is out of place must be moved at least once.
• h2 = the sum of the distances of the tiles from their goal positions. Since
tiles cannot be moved diagonally, we use city-block (Manhattan) distance.
h2 is also admissible.
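As an illustration, both heuristics can be computed for 3×3 boards. The representation is an assumption not given in the text: a board is a tuple of 9 tiles in row-major order, with 0 standing for the blank.

```python
def h1(state, goal):
    """h1: number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """h2: sum of city-block (Manhattan) distances of tiles to goal positions."""
    total = 0
    for tile in range(1, 9):                      # skip the blank
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # tiles 7 and 8 each shifted one square
print(h1(start, goal))  # 2
print(h2(start, goal))  # 2
```

Note h1 ≤ h2 always holds, so h2 dominates h1 and typically expands fewer nodes.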

Best First Search:

➢ Best First Search is an instance of the general TREE-SEARCH or GRAPH-
SEARCH algorithm in which a node is selected for expansion based on an
evaluation function f(n).
➢ Best First Search algorithms differ in their evaluation functions. A key
component of these algorithms is a heuristic function, denoted h(n).
➢ h(n) = estimated cost of the cheapest path from node n to a goal node.
➢ Heuristic functions are the most common form in which additional
knowledge of the problem is imparted to the search algorithm.

Algorithm:

Best first search algorithm in steps:


Step 1: Place the starting node into the OPEN list.

Step 2: If the OPEN list is empty, Stop and return failure.

Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and
place it in the CLOSED list.

Step 4: Expand the node n, and generate the successors of node n.

Step 5: Check each successor of node n, and find whether any node is a goal node or not.
If any successor node is goal node, then return success and terminate the search, else
proceed to Step 6.

Step 6: For each successor node, the algorithm computes the evaluation function f(n)
and then checks whether the node is already in the OPEN or CLOSED list. If the node
is in neither list, add it to the OPEN list.
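The OPEN/CLOSED steps above can be sketched with a priority queue ordered by h(n). A minimal sketch under assumptions: the graph is an adjacency dictionary, h is a lookup table, and (for brevity) the goal test is applied when a node is removed from OPEN rather than on its successors as in Step 5.

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Best-first search: always expand the OPEN node with lowest h(n)."""
    open_list = [(h[start], start, [start])]     # Step 1: start node into OPEN
    closed = set()
    while open_list:                             # Step 2: fail when OPEN is empty
        _, node, path = heapq.heappop(open_list) # Step 3: lowest h(n) to CLOSED
        if node == goal:
            return path
        closed.add(node)
        for succ in graph.get(node, []):         # Step 4: expand n
            # Step 6: add only if in neither OPEN nor CLOSED
            if succ not in closed and all(succ != n for _, n, _ in open_list):
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None

graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G']}
h = {'S': 5, 'A': 3, 'B': 4, 'G': 0}             # illustrative heuristic values
print(best_first_search(graph, h, 'S', 'G'))     # ['S', 'A', 'G']
```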

Advantages:
o Best-first search can switch between BFS-like and DFS-like behavior, gaining the
advantages of both algorithms.
o This algorithm can be more efficient than the BFS and DFS algorithms.

Disadvantages:
o It can behave as an unguided depth-first search in the worst case scenario.
o It can get stuck in a loop as DFS.
o This algorithm is not optimal.
The two variants of best-first search are Greedy Best First Search and A* Search.
Greedy best-first search makes use of the heuristic function to guide the search and
allows us to take advantage of both algorithms.

Greedy Best First Search:

The greedy best-first search algorithm always selects the path which appears best at the
moment. It combines aspects of the depth-first and breadth-first search algorithms: with
the help of best-first search, at each step we can choose the most promising node. In the
greedy best-first search algorithm, we expand the node which is closest to the goal node,
where the closeness is estimated by the heuristic function, i.e.

f(n)= h(n).

Where, h(n)= estimated cost from node n to the goal.

Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m),
where m is the maximum depth of the search space.

Complete: Greedy best-first search is also incomplete, even if the given state space is finite.

Optimal: Greedy best first search algorithm is not optimal.

A* Search Algorithm:

A* search is the most commonly known form of best-first search. It uses the heuristic
function h(n) and the cost to reach node n from the start state, g(n). A* finds the
shortest path through the search space using the heuristic function; it expands fewer
nodes of the search tree and provides an optimal result faster.

In the A* search algorithm, we use the search heuristic as well as the cost to reach the
node. Hence we can combine both costs as follows, and this sum is called the fitness
number:

f(n) = g(n) + h(n)
Complete: A* algorithm is complete as long as:
o Branching factor is finite.
o Cost at every action is fixed.

Optimal: A* search algorithm is optimal if it follows below two conditions:

o Admissible: the first condition required for optimality is that h(n) be an
admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
o Consistency: the second required condition, consistency, applies only to A* graph search.

If the heuristic function is admissible, then A* tree search will always find the least cost path.

Time Complexity: The time complexity of the A* search algorithm depends on the
heuristic function, and the number of nodes expanded is exponential in the depth of the
solution d. So the time complexity is O(b^d), where b is the branching factor.

Space Complexity: The space complexity of the A* search algorithm is O(b^d).
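The combination f(n) = g(n) + h(n) can be sketched as follows. This is an illustrative sketch, not the text's example: the graph (with edge costs) and the heuristic table are assumptions, with h chosen to be admissible.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search ordered by f(n) = g(n) + h(n).
    graph maps node -> [(successor, step_cost), ...]."""
    frontier = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):   # keep only cheaper paths
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h[succ], g2, succ, path + [succ]))
    return None, float('inf')

graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)], 'B': [('G', 3)]}
h = {'S': 6, 'A': 4, 'B': 2, 'G': 0}   # admissible: never overestimates true cost
print(a_star(graph, h, 'S', 'G'))      # (['S', 'A', 'B', 'G'], 6)
```

Note that greedy best-first would be the same loop with f(n) = h(n) alone; adding g(n) is what buys optimality under an admissible heuristic.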

Hill Climbing

➢ The Hill-climbing search algorithm is a loop that continually moves in the


direction of increasing value.
➢ The algorithm only records the state and its evaluation instead of maintaining a
search tree. It takes a problem as an input, and it keeps comparing the values of
the current and the next nodes.
➢ The next node is the highest-valued successor of the current node.

➢ If the value of the current node is greater than the next node, then the current
node will be returned. Otherwise, it will go deeper to look at the next node of the
next node.
➢ It is simply a loop that continually moves in the direction of increasing value that
is, uphill.

➢ It terminates when it reaches a peak where no neighbor has a higher value. Hill
climbing does not maintain a search tree, so the current node data structure need
only record the state and its objective function value.
➢ Hill climbing does not look ahead beyond the immediate neighbors of the current state.

Peaks are found on a surface of states whose height is defined by the evaluation function.
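The loop described above can be sketched in a few lines. A minimal sketch of steepest-ascent hill climbing: the neighbor function and the toy one-dimensional landscape are assumptions for illustration.

```python
def hill_climbing(state, neighbors, value):
    """Steepest-ascent hill climbing: keep only the current state and
    move to the highest-valued neighbor while that improves the value."""
    while True:
        best = max(neighbors(state), key=value, default=state)
        if value(best) <= value(state):   # no neighbor is higher: a peak
            return state
        state = best                      # move uphill

# Toy landscape: maximize f(x) = -(x - 3)^2 over the integers
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climbing(0, step, f))  # 3
```

On this single-peak landscape the loop always reaches the global maximum; the problems listed next (local maxima, plateaus, ridges) arise on landscapes where that is not the case.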

Problems with hill-climbing

Hill-climbing often gets stuck for the following reasons.


1. Local maxima
2. Plateau
3. Ridges

Local Maxima

A local maximum is a peak that is lower than the highest peak in the state space. When a
local maximum is reached, the algorithm halts even though a solution has not been reached yet.

Plateau

A plateau is an area of the state space where the neighbors are all at about the same height.
In such a situation the search degenerates into a random walk.

Ridges

A ridge may have steeply sloping sides towards the top, but the top only slopes gently
towards a peak. In this case, the search makes little progress unless the top is directly
reached, because it has to go back and forth from side to side.
Variations of Hill Climbing

➢ Stochastic HC: chooses randomly among the uphill neighbors.

➢ First-choice HC: generates random successors until one is better than the
current state. Good for states with high numbers of neighbors.
➢ Random-restart HC: restarts the search from a randomly generated state when
it gets stuck.
➢ Evolutionary hill climbing: represents potential solutions as strings and
performs random mutations, keeping the mutations that yield better states. It
is a particular case of first-choice hill climbing and the ancestor of genetic
algorithms.

Example: 8-queens problem

The goal of the 8-queens problem is to place 8 queens on the chessboard such that no
queen attacks any other. (A queen attacks any piece in the same row, column or diagonal.)
The first incremental formulation one might try is the following:
o States: Any arrangement of 0 to 8 queens on the board is a state.
o Initial state: No queens on the board.
o Successor function: Add a queen to any empty square.
o Goal test: 8 queens are on the board, none attacked.
Solution :
ADVERSARIAL SEARCH

Competitive environments, in which the agents' goals are in conflict, give rise to
adversarial search problems, often known as games.

GAMES

➢ Mathematical game theory, a branch of economics, views any multi-agent
environment as a game provided that the impact of each agent on the others
is "significant", regardless of whether the agents are cooperative or
competitive.
➢ In AI, "games" are deterministic, turn-taking, two-player, zero-sum games of
perfect information.
➢ This means deterministic, fully observable environments in which there are
two agents whose actions must alternate and in which the utility values at the
end of the game are always equal and opposite.

➢ For example, if one player wins a game of chess (+1), the other player
necessarily loses (-1). It is this opposition between the agents' utility
functions that makes the situation adversarial.

Optimal Decisions in Game

We will consider games with two players, whom we will call MAX and
MIN. MAX moves first, and then they take turns moving until the game is over.
At the end of the game, points are awarded to the winning player and penalties
are given to the loser. A game can be formally defined as a search problem with
the following components:

➢ The initial state, which includes the board position and identifies the player to
move.
➢ A successor function, which returns a list of (move, state) pairs, each
indicating a legal move and the resulting state.
➢ A terminal test, which describes when the game is over. States where the
game has ended are called terminal states.
➢ A utility function (also called an objective function or payoff function),
which gives a numeric value for the terminal states. In chess, the outcome is a
win, loss, or draw, with values +1, -1, or 0. The payoffs in backgammon
range from +192 to -192.

MiniMax Algorithm :

In a normal search problem, the optimal solution would be a sequence of moves leading
to a goal state, i.e., a terminal state that is a win. In a game, on the other hand, MIN has
something to say about it. MAX therefore must find a contingent strategy, which
specifies MAX's move in the initial state, then MAX's moves in the states resulting
from every possible response by MIN, then MAX's moves in the states resulting from
every possible response by MIN to those moves, and so on. An optimal strategy leads to
outcomes at least as good as any other strategy when one is playing an infallible
opponent.

Two-ply game tree

In the two-ply game tree, some nodes are "MAX nodes", in which it is MAX's turn to
move, and the other nodes are "MIN nodes". The terminal nodes show the utility values
for MAX; the other nodes are labeled with their minimax values. MAX's best move at the
root is a1, because it leads to the successor with the highest minimax value, and MIN's
best reply is b1, because it leads to the successor with the lowest minimax value.
The minimax algorithm performs a complete depth-first exploration of the game tree. If
the maximum depth of the tree is m, and there are b legal moves at each point, then the
time complexity of the minimax algorithm is O(b^m). The space complexity is O(bm)
for an algorithm that generates all successors at once.
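The depth-first minimax computation can be sketched as follows. The tree and leaf utilities mirror the two-ply example used in the alpha-beta discussion below (3, 12, 8 under B; 2, 4, 6 under C; 14, 5, 2 under D); the dictionary representation and node names are assumptions.

```python
def minimax(node, maximizing, successors, utility):
    """Depth-first minimax over a game tree.
    successors(node) -> child list; utility(node) -> value at terminal nodes."""
    children = successors(node)
    if not children:                              # terminal state: return utility
        return utility(node)
    values = [minimax(c, not maximizing, successors, utility) for c in children]
    return max(values) if maximizing else min(values)

# Two-ply tree: MAX at the root, MIN nodes B, C, D, leaf utilities below them
tree = {'root': ['B', 'C', 'D'],
        'B': ['b1', 'b2', 'b3'], 'C': ['c1', 'c2', 'c3'], 'D': ['d1', 'd2', 'd3']}
leaves = {'b1': 3, 'b2': 12, 'b3': 8, 'c1': 2, 'c2': 4, 'c3': 6,
          'd1': 14, 'd2': 5, 'd3': 2}
v = minimax('root', True, lambda n: tree.get(n, []), lambda n: leaves[n])
print(v)  # 3: B is worth min(3,12,8)=3, C is worth 2, D is worth 2
```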

ALPHA-BETA PRUNING

Pruning: The process of eliminating a branch of the search tree from consideration
without examining it is called pruning. The two parameters of the pruning technique are:
1. Alpha (α): the best choice so far for MAX along the path, i.e., a lower bound
on the value that a maximizing node may ultimately be assigned.
2. Beta (β): the best choice so far for MIN along the path, i.e., an upper bound on
the value that a minimizing node may ultimately be assigned.
Alpha-Beta Pruning: When the alpha and beta values are applied to a minimax tree, the
search returns the same move as minimax but prunes away branches that cannot possibly
influence the final decision; this is called alpha-beta pruning (or cutoff).
Alpha-beta search updates the values of α and β as it goes along and prunes the
remaining branches at a node (i.e., terminates the recursive call) as soon as the value of
the current node is known to be worse than the current α or β value for MAX or MIN,
respectively.

Consider the two-ply game tree below. The different stages of the calculation of the
optimal decision for the game tree are shown; at each point, we show the range of
possible values for each node.
(a) The first leaf below B has the value 3. Hence, B, which is a MIN node, has a value
of at most 3.

(b) The second leaf below B has a value of 12; MIN would avoid this move, so
the value of B is still at most 3.

(c) The third leaf below B has a value of 8; we have seen all of B's successors, so
the value of B is exactly 3. Now we can infer that the value of the root is at
least 3, because MAX has a choice worth 3 at the root.

(d) The first leaf below C has the value 2. Hence, C, which is a MIN node, has a value
of at most 2. But we know that B is worth 3, so MAX would never choose C.
Therefore, there is no point in looking at the other successors of C. This is an
example of alpha-beta pruning.

(e) The first leaf below D has the value 14, so D is worth at most 14. This is still
higher than MAX'S best alternative (i.e., 3), so we need to keep exploring D's
successors. Notice also that we now have bounds on all of the successors of the
root, so the root's value is also at most 14.

(f) The second successor of D is worth 5, so again we need to keep exploring.


The third successor is worth 2, so now D is worth exactly 2. MAX'S decision at
the root is to move to B, giving a value of 3.

The effectiveness of alpha-beta pruning is highly dependent on the order in which the
successors are examined. It is worthwhile to try to examine first the successors that are
likely to be best. In that case, alpha-beta needs to examine only O(b^(d/2)) nodes to pick
the best move, instead of O(b^d) for minimax.
Alpha-beta pruning can be applied to trees of any depth, and it is often possible to
prune entire subtrees rather than just leaves.
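The worked example (a)-(f) can be reproduced by adding the α and β bounds to minimax. A sketch over the same assumed two-ply tree; on node C only the first leaf (value 2) is examined before the branch is pruned, exactly as in step (d).

```python
def alpha_beta(node, maximizing, successors, utility,
               alpha=float('-inf'), beta=float('inf')):
    """Minimax with alpha-beta pruning: same value as plain minimax,
    but branches that cannot affect the decision are skipped."""
    children = successors(node)
    if not children:
        return utility(node)
    if maximizing:
        v = float('-inf')
        for c in children:
            v = max(v, alpha_beta(c, False, successors, utility, alpha, beta))
            alpha = max(alpha, v)          # raise MAX's lower bound
            if alpha >= beta:              # MIN will never allow this branch
                break
        return v
    v = float('inf')
    for c in children:
        v = min(v, alpha_beta(c, True, successors, utility, alpha, beta))
        beta = min(beta, v)                # lower MIN's upper bound
        if beta <= alpha:                  # MAX already has something better
            break
    return v

# Same two-ply tree as the worked example: leaves 3,12,8 / 2,4,6 / 14,5,2
tree = {'root': ['B', 'C', 'D'],
        'B': ['b1', 'b2', 'b3'], 'C': ['c1', 'c2', 'c3'], 'D': ['d1', 'd2', 'd3']}
leaves = {'b1': 3, 'b2': 12, 'b3': 8, 'c1': 2, 'c2': 4, 'c3': 6,
          'd1': 14, 'd2': 5, 'd3': 2}
print(alpha_beta('root', True, lambda n: tree.get(n, []),
                 lambda n: leaves[n]))  # 3, the same answer minimax gives
```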
Constraint Satisfaction Problem

➢ A constraint satisfaction problem (or CSP) is defined by a set of variables,
X1, X2, . . . , Xn, and a set of constraints, C1, C2, . . . , Cm.
➢ Each variable Xi has a nonempty domain Di of possible values. Each
constraint Ci involves some subset of the variables and specifies the
allowable combinations of values for that subset.
➢ A state of the problem is defined by an assignment of values to some or
all of the variables, {Xi = vi, Xj = vj, . . .}.
➢ An assignment that does not violate any constraints is called a consistent
or legal assignment.
➢ A complete assignment is one in which every variable is mentioned, and
a solution to a CSP is a complete assignment that satisfies all the
constraints.
➢ Some CSPs also require a solution that maximizes an objective function.
Suppose that, having tired of Romania, we are looking at a map of Australia
showing each of its states and territories, and that we are given the task of coloring each
region either red, green, or blue in such a way that no neighboring regions have the same
color. To formulate this as a CSP, we define the variables to be the regions shown in figure
2.8: WA, NT, Q, NSW, V, SA, and T. The domain of each variable is the set {red, green,
blue}. The constraints require neighboring regions to have distinct colors; for example, the
allowable combinations for WA and NT are the pairs {(red, green), (red, blue), (green, red),
(green, blue), (blue, red), (blue, green)}. There are many possible solutions, such as
{WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = red}. It is helpful
to visualize a CSP as a constraint graph.

The principal states and territories of Australia. Coloring this map can be
viewed as a constraint satisfaction problem: the goal is to assign colors to each region so
that no neighboring regions have the same color. The map-coloring problem can be
represented as a constraint graph.

Solution:

BACKTRACKING SEARCH FOR CSPs

The term backtracking search is used for a depth-first search that chooses values for
one variable at a time and backtracks when a variable has no legal values left to assign.
ALGORITHM

Simple backtracking for map colouring
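Simple backtracking for map coloring can be sketched as follows; the adjacency dictionary encodes the Australia map from the CSP section, and the fixed variable ordering is an assumption made for brevity.

```python
def backtrack(assignment, variables, domains, neighbors):
    """Backtracking search for a binary CSP: assign one variable at a
    time; backtrack when a variable has no legal values left."""
    if len(assignment) == len(variables):
        return assignment                         # complete, consistent: solution
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Legal if no already-assigned neighbor holds the same color
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbors)
            if result is not None:
                return result
            del assignment[var]                   # undo and try the next value
    return None                                   # no legal value: backtrack

# Adjacency of Australian states/territories (T has no neighbors)
neighbors = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
             'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'Q': ['NT', 'SA', 'NSW'],
             'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': []}
variables = list(neighbors)
domains = {v: ['red', 'green', 'blue'] for v in variables}
solution = backtrack({}, variables, domains, neighbors)
print(solution)  # a complete assignment with no two neighbors sharing a color
```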


Part A (2 marks)

1. Differentiate Uninformed Search (Blind search) and Informed Search (Heuristic Search)
strategies.

2. Define Evaluation function f(n) with an example.

3. What is a heuristic function? Illustrate with a sample problem.

4. What is greedy best-first-search?

5. Define A* search?

6. Compare and contrast Global minimum and Global maximum.


7. Illustrate Hill-climbing search with an example?

8. Cite the problems faced by hill-climbing search?

9. List out the variants of hill-climbing?

10. Define constraint satisfaction problem.

11. What is a constraint graph?

12. What are the types of constraints?

13. What is backtracking search?

14. What is adversarial search?

15. Give example for toy problem.

16. List the criteria to measure the performance of different search strategies.

17. Differentiate Uninformed Search (Blind search) and Informed Search (Heuristic Search)
strategies.

18. Differentiate Depth first search and Depth limited first search.

Part B (12 marks)

1. What is Best First Search? Explain with an example.


2. Explain the following uninformed search strategies with examples.
   a. Breadth First Search
   b. Depth First Search
   c. Depth Limited Search

3. Explain in detail about Heuristic Functions with examples.

4. Explain the Hill climbing search strategy with examples.

5. Define constraint satisfaction problem (CSP). How CSP is formulated as a search problem?
Explain with an example.
6. Explain in detail about Adversarial search problem with examples.
7. Explain with algorithm and example:
   i. Minimax algorithm
   ii. Alpha-Beta Pruning
8. Explain real-world problems with examples.

9. How is an algorithm's performance evaluated? Compare different uninformed search
strategies in terms of the four evaluation criteria.
