
Informed Search Methods

Artificial Intelligence
Heuristic
• Origin: from the Greek word ‘heuriskein’, meaning “to discover”.
• Webster’s New World Dictionary defines heuristic as
“helping to discover or learn”.
• As an adjective, it means serving to discover.
• As a noun, a heuristic is an aid to discovery.
A heuristic is a method to help solve a problem, commonly informal.
It is particularly used for a method that often rapidly leads to a
solution that is usually reasonably close to the best possible answer.
Heuristics are "rules of thumb", educated guesses, intuitive judgments
or simply common sense.
A heuristic contributes to the reduction of search in a problem-solving
activity.
Heuristic Search
• Uses domain-dependent (heuristic) information in order to
search the space more efficiently.

Ways of using heuristic information:


• Deciding which node to expand next, instead of doing the
expansion in a strictly breadth-first or depth-first order;
• In the course of expanding a node, deciding which successor or
successors to generate, instead of blindly generating all possible
successors at one time;
• Deciding that certain nodes should be discarded, or pruned,
from the search space.
Heuristic Search

• A moment's reflection shows that we constantly use
heuristics in the course of our everyday lives.
• If the sky is grey, we conclude that it would be better
to put on a coat before going out.
• We book our holidays in August because that is when
the weather is best.
Heuristic Function
• It is a function that maps from a problem state description
to a measure of desirability, usually represented as a number.
• Which aspects of the problem state are considered, how
those aspects are evaluated, and the weights given to
individual aspects are chosen in such a way that the value
of the heuristic function at a given node in the search
process gives as good an estimate as possible of whether
that node is on the desired path.
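For example, in the 8-puzzle discussed later in these slides, one such heuristic is the number of tiles out of place. A minimal Python sketch (illustrative only; the 3x3 state representation and the names GOAL and h_misplaced are assumptions, not part of the slides):

# Misplaced-tiles heuristic for the 8-puzzle.
# A state is assumed to be a 3x3 tuple of tuples, with 0 standing for the blank.
GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))

def h_misplaced(state):
    """Count the non-blank tiles that are not in their goal position."""
    return sum(1
               for i in range(3)
               for j in range(3)
               if state[i][j] != 0 and state[i][j] != GOAL[i][j])

With this counting convention, the start state of the later 8-puzzle example (2 8 3 / 1 6 4 / 7 _ 5) evaluates to 4, matching the value used at the start node there.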
Major Benefits of Heuristics
1. Heuristic approaches have an inherent flexibility,
which allows them to be used on ill-structured and
complex problems.
2. These methods are, by design, often simpler for the
decision maker to understand, especially when they
involve qualitative analysis. The chances of the proposed
solution being implemented are therefore much higher.
3. These methods may be used as part of an iterative
procedure that guarantees the finding of an
optimal solution.
Major Disadvantages and Limitations of Heuristics
• The inherent flexibility of heuristic methods can lead
to misleading or even fraudulent manipulations and
solutions.
• Certain heuristics may contradict others that are
applied to the same problem, which generates
confusion and lack of trust in heuristic methods.
• Heuristics are not as general as algorithms.
Best First Search Algorithm
(Greedy Search Algorithm)
• This algorithm always selects the path which
appears best at that moment.
• It uses a heuristic function to judge which path is
the best at the moment.
• It combines the advantages of both the BFS and
DFS algorithms.
• It does not guarantee reaching the goal, i.e. it is
not complete.
• It is not optimal and may get stuck in a loop.
Best First Search: Algorithm
1. Start with OPEN containing just the initial state.
2. Until a goal is found or there are no nodes left on
OPEN do:
a) Pick the best node on OPEN
b) Generate its successors.
c) For each successor do:
i. If it has not been generated before, evaluate it, add it to OPEN,
and record its parent.
ii. If it has been generated before, change the parent if this new
path is better than the previous one. In that case, update the
cost of getting to this node and to any successors that this node
may already have.
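A minimal Python sketch of this loop, assuming hypothetical goal_test(node), successors(node) and h(node) helpers (none of these names come from the slides). OPEN is kept as a priority queue ordered by the heuristic value; for brevity the sketch omits step 2(c)(ii), the re-parenting of already-generated nodes, which the A* sketch later in these slides handles via path costs:

import heapq
from itertools import count

def best_first_search(start, goal_test, successors, h):
    """Best-first search: always expand the OPEN node with the lowest h value."""
    tie = count()                                   # tie-breaker so equal-h states never get compared
    open_heap = [(h(start), next(tie), start)]      # 1. OPEN contains just the initial state
    parent = {start: None}                          # records parents of generated nodes
    while open_heap:                                # 2. until a goal is found or OPEN is empty
        _, _, node = heapq.heappop(open_heap)       # 2a. pick the best node on OPEN
        if goal_test(node):
            return node, parent
        for succ in successors(node):               # 2b. generate its successors
            if succ not in parent:                  # 2c-i. not generated before
                parent[succ] = node
                heapq.heappush(open_heap, (h(succ), next(tie), succ))
    return None, parent                             # no nodes left on OPEN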
Best First Search: Example 1
[Figure: an OR graph rooted at A, with edge costs on the arcs; A branches to B, C and D, which branch further to nodes G, H, E, F, I and J.]
Best First Search: Example 2
[Figure: a search tree rooted at A with a heuristic value at each node, worked alongside OPEN and CLOSED lists; P is the goal.]
H.W.: Repeat the same example when T is the goal.
Greedy Best First Search
• This is like DFS, but picks the path that gets you closest
to the goal.
• Needs a measure of distance from the goal.
h(n) = estimated cost of cheapest path from n to goal.
h(n) is a heuristic.
• Analysis
– Greed tends to work quite well (despite being one of the sins)
– But, it does not always find the shortest path.
– Susceptible to false starts.
– May go down an infinite path with no way to reach the goal.
– The algorithm is incomplete without cycle checking. The
algorithm is also not optimal.
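In terms of the best_first_search sketch given earlier, greedy best-first search simply orders OPEN by h(n), the estimated cost from n to the goal. A hypothetical toy graph (all node names, edges and h values are made up for illustration):

# Hypothetical heuristic estimates and successor relation for a tiny graph.
h_estimate = {'S': 10, 'A': 7, 'B': 6, 'G': 0}
edges = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}

goal, parents = best_first_search(
    start='S',
    goal_test=lambda n: n == 'G',
    successors=lambda n: edges[n],
    h=lambda n: h_estimate[n],
)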
Greedy Best First Search: Example
[Figure: a graph from Start to Goal with a heuristic value h at each node, e.g. h = 20 at B and F, h = 0 at the Goal.]
The path from start to goal is: Start → A → D → E → Goal.


The A* Search
• A search algorithm to find the shortest path
through a search space to a goal state using a
heuristic.
f(n) = g(n) + h(n)
• f(n) - function that gives an evaluation of the
state
• g(n) - the cost of getting from the initial state
to the current state
• h(n) - the cost of getting from the current state
to a goal state
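A minimal A* sketch under the same assumptions as the best-first sketch above, now tracking g(n) and ordering OPEN by f(n) = g(n) + h(n); the edge-cost function cost(node, succ) is a hypothetical name, not from the slides:

import heapq
from itertools import count

def a_star(start, goal_test, successors, cost, h):
    """A*: always expand the OPEN node with the lowest f(n) = g(n) + h(n)."""
    tie = count()
    g = {start: 0}                                   # best known cost from the initial state
    parent = {start: None}
    open_heap = [(h(start), next(tie), start)]       # priority = f = g + h, with g(start) = 0
    while open_heap:
        _, _, node = heapq.heappop(open_heap)
        if goal_test(node):
            return node, g[node], parent             # goal state reached
        for succ in successors(node):
            new_g = g[node] + cost(node, succ)
            if succ not in g or new_g < g[succ]:     # found a better path to succ
                g[succ] = new_g                      # update its cost and re-parent it
                parent[succ] = node
                heapq.heappush(open_heap, (new_g + h(succ), next(tie), succ))
    return None, None, parent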
A* Search: An Example
Distance travelled so far = g; estimated distance still to be covered = h; f = g + h.
Heuristic values (distance to destination): A 8, B 9, C 6, D 7, E 5, F 2, G 0.
[Figure: a search tree rooted at A; the successors D, C and B carry f-values 7 + 3 = 10, 6 + 2 = 8 and 9 + 1 = 10 respectively, and the node with the lowest f is expanded at each step until the goal G is reached.]
An 8-Puzzle Game
Start state:      Goal state:
2 8 3             1 2 3
1 6 4             8   4
7   5             7 6 5

Let f(n) = g(n) + h(n), where
g(n) = actual distance from the start state to n, and
h(n) = number of tiles out of place.
[Figure: the A* search tree for this puzzle; each node shows a tile configuration together with its f value.]
State A (start): f(A) = 0 + 4 = 4
Level 2: State B, f(B) = 1 + 5 = 6; State C, f(C) = 1 + 3 = 4; State D, f(D) = 1 + 5 = 6
Level 3: State E, f(E) = 2 + 3 = 5; State F, f(F) = 2 + 3 = 5; State G, f(G) = 2 + 4 = 6
Level 4: State H, f(H) = 3 + 3 = 6; State I, f(I) = 3 + 4 = 7; State J, f(J) = 3 + 2 = 5; State K, f(K) = 3 + 4 = 7
Level 5: State L, f(L) = 4 + 1 = 5
Level 6: State M, f(M) = 5 + 0 = 5 (goal reached); State N, f(N) = 5 + 2 = 7
Hill Climbing
• Searching for a goal state = Climbing to the top
of a hill
• Generate-and-test + direction to move.
• Heuristic function to estimate how close a
given state is to a goal state.

Hill Climbing
Variants: Simple, Steepest-Ascent, Stochastic, and Random-Restart hill climbing.
State Space Diagram of Hill Climbing
[Figure: state space diagram of hill climbing.]
Simple Hill Climbing
Algorithm
1. Evaluate the initial state.
2. Loop until a solution is found or there are no new
operators left to be applied:
- Select and apply a new operator.
- Evaluate the new state:
  goal → quit
  better than current state → new current state
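A minimal Python sketch of this loop, assuming hypothetical evaluate(state), moves(state) and is_goal(state) helpers; in the simple variant, the first operator that yields a better state is taken:

def simple_hill_climbing(state, evaluate, moves, is_goal):
    """Take the first move that improves on the current state; stop when none does."""
    while True:
        if is_goal(state):
            return state                       # goal -> quit
        improved = False
        for op in moves(state):                # select and apply a new operator
            new_state = op(state)
            if evaluate(new_state) > evaluate(state):
                state = new_state              # better than current state -> new current state
                improved = True
                break                          # simple variant: take the first improvement found
        if not improved:
            return state                       # no operator improves: stuck (e.g. a local maximum)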
Simple Hill Climbing
The evaluation function is a way to inject task-specific
knowledge into the control process.
Steepest-Ascent Hill Climbing (Gradient Search)
• Considers all the moves from the current state.
• Selects the best one as the next state.
Steepest-Ascent Hill Climbing (Gradient Search)
Algorithm
1. Evaluate the initial state.
2. Loop until a solution is found or a complete iteration produces no
change to the current state:
- SUCC = a state such that any possible successor of the
current state will be better than SUCC (the worst state).
- For each operator that applies to the current state, evaluate
the new state:
  goal → quit
  better than SUCC → set SUCC to this state
- SUCC is better than the current state → set the current
state to SUCC.
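The steepest-ascent variant under the same assumptions as the earlier sketch: every successor of the current state is evaluated, and the best of them (SUCC) becomes the new current state only if it beats the current state:

def steepest_ascent_hill_climbing(state, evaluate, moves, is_goal):
    """Consider all moves from the current state and take the best one."""
    while True:
        if is_goal(state):
            return state                            # goal -> quit
        succ, succ_value = None, float('-inf')      # SUCC starts out worse than any successor
        for op in moves(state):                     # evaluate every successor
            new_state = op(state)
            value = evaluate(new_state)
            if value > succ_value:                  # better than SUCC -> set SUCC to this state
                succ, succ_value = new_state, value
        if succ is None or succ_value <= evaluate(state):
            return state                            # a complete iteration produced no change
        state = succ                                # SUCC is better than current -> move to SUCC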
Hill Climbing: Disadvantages
Local Maximum
A state that is better than all of its neighbours,
but not better than some other states far
away.
Hill Climbing: Disadvantages
Plateau
A flat area of the search space in which all
neighbouring states have the same value.
Hill Climbing: Disadvantages
Ridge
The orientation of the high region, compared to
the set of available moves, makes it impossible
to climb up.
However, two moves executed serially may
increase the height.
Hill Climbing: Disadvantages

Ways Out
• Backtrack to some earlier node and try going
in a different direction.
• Make a big jump to try to get in a new section.
• Move in several directions at once.
Hill Climbing: Disadvantages
• Hill climbing is a local method:
Decides what to do next by looking only at the
“immediate” consequences of its choices.
• Global information might be encoded in
heuristic functions.
Hill Climbing: Example 1
[Figure: a search tree with an evaluation value at each node; the goal lies at the bottom level.]
Hill Climbing: Example 2
[Figure: a second worked example.]
Simulated Annealing
Annealing refers to a physical process that proceeds as follows:
• A solid in a heat bath is heated by raising the temperature to a
maximum value at which all particles of the solid arrange
themselves randomly in the liquid phase.
• Then the temperature of the heat bath is lowered, permitting all
particles of the solid to arrange themselves in the low-energy
ground state of a corresponding lattice.
It is presumed that the maximum temperature in phase 1 is
sufficiently high, and the cooling in phase 2 is carried out
sufficiently slowly. However, if the cooling is too rapid, that is, if the
solid is not allowed enough time to reach thermal equilibrium at
each temperature value, the resulting crystal will have many
defects.
Simulated Annealing in AI
Idea: escape local maxima by allowing some "bad"
moves, but gradually decrease their frequency.
• Picks random moves.
• Accepts a move if it actually improves the situation;
otherwise, accepts it with a probability less than 1.
• The number of cycles in the search is determined
according to probability.
• The search behaves like hill climbing when
approaching the end.
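A minimal simulated-annealing sketch of this idea, assuming hypothetical evaluate(state) and random_move(state) helpers; the geometric cooling schedule and the acceptance probability exp(delta / T) are common choices assumed here, not prescribed by the slides:

import math
import random

def simulated_annealing(state, evaluate, random_move,
                        t_start=1.0, t_min=1e-3, cooling=0.95):
    """Hill climbing that sometimes accepts worse moves, less often as T falls."""
    current_value = evaluate(state)
    t = t_start
    while t > t_min:
        new_state = random_move(state)                  # pick a random move
        delta = evaluate(new_state) - current_value     # positive if the move improves things
        # always accept improvements; accept worse moves with probability e^(delta / T)
        if delta > 0 or random.random() < math.exp(delta / t):
            state, current_value = new_state, current_value + delta
        t *= cooling                                    # lower the temperature gradually
    return state                                        # near the end this behaves like hill climbing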
