
Advanced Artificial Intelligence

Lecture 3: Search
Search
Search permeates all of AI
An intelligent agent is trying to find a set or sequence
of actions to achieve a goal
This is a goal-based agent

Advanced AI - 3: Search 2
Building Goal-Based Agents

What are the key questions that need to be addressed?
 What goal does the agent need to achieve?
 What knowledge does the agent need?
 What actions does the agent need to do?

Advanced AI - 3: Search 3
Many AI (and non-AI) Tasks can be
Formulated as Search Problems
Goal is to find a sequence of actions
Puzzles
Games
Navigation
Assignment
Motion planning
Scheduling
Routing
Advanced AI - 3: Search 4
Assumptions
Static or dynamic?

Environment is static

Advanced AI - 3: Search 5
Assumptions
Static or dynamic?
Fully or partially observable?

Environment is fully observable

Advanced AI - 3: Search 6
Assumptions
Static or dynamic?
Fully or partially observable?
Discrete or continuous?

Environment is discrete

Advanced AI - 3: Search 7
Assumptions
Static or dynamic?
Fully or partially observable?
Discrete or continuous?
Deterministic or stochastic?

Environment is deterministic

Advanced AI - 3: Search 8
Assumptions
Static or dynamic?
Fully or partially observable?
Discrete or continuous?
Deterministic or stochastic?
Episodic or sequential?

Environment is sequential

Advanced AI - 3: Search 9
Assumptions
Static or dynamic?
Fully or partially observable?
Discrete or continuous?
Deterministic or stochastic?
Episodic or sequential?
Single agent or multiple agent?

Advanced AI - 3: Search 10
Assumptions
Static or dynamic?
Fully or partially observable?
Discrete or continuous?
Deterministic or stochastic?
Episodic or sequential?
Single agent or multiple agent?

Advanced AI - 3: Search 11
Search Example: Route Finding

Actions: go straight, turn left, turn right


Goal: shortest? fastest? most scenic?
Advanced AI - 3: Search 12
Search Example: River Crossing Problem

Goal: All on right side of river

Rules:
1) Farmer must row the boat
2) Only room for one other
3) Without the farmer present:
 • Dog bites sheep
 • Sheep eats cabbage

Actions: F>, F<, FC>, FC<, FD>, FD<, FS>, FS<

Advanced AI - 3: Search 13
Search Example: 8-Puzzle

Actions: move tiles (e.g., Move2Down)


Goal: reach a certain configuration

Advanced AI - 3: Search 14
Search Example: Water Jugs Problem
Given 4-liter and 3-liter pitchers, how do you get exactly
2 liters into the 4-liter pitcher?

Advanced AI - 3: Search 15
Search Example: Robot Motion Planning

Actions: translate and rotate joints

Goal: fastest? most energy efficient? safest?

Advanced AI - 3: Search 16
Search Space Definitions
State
A description of a possible state of the world
Initial state
the state in which the agent starts the search
Goal test
Conditions the agent is trying to meet
Goal state
Any state which meets the goal condition
Action
Function that maps (transitions) from one state to
another

Advanced AI - 3: Search 17
Search Space Definitions
Problem formulation
 Describe a general problem as a search problem
Solution
 Sequence of actions that transitions the world from the initial
state to a goal state
Solution cost (additive)
 Sum of the cost of operators
 Alternative: sum of distances, number of steps, etc.
Search
 Process of looking for a solution
 Search algorithm takes problem as input and returns solution
 We are searching through a space of possible states
Execution
 Process of executing sequence of actions (solution)
Advanced AI - 3: Search 18
Visualize Search Space as a Tree
States are nodes
Actions are edges
Initial state is root
Solution is path from root to goal node
Edges sometimes have associated costs
States resulting from operator are children
Advanced AI - 3: Search 19
Search State Space

Advanced AI - 3: Search 20
Search Problem Example (as a tree)

Advanced AI - 3: Search 21
Search Strategies
Open = initial state                       // open list holds all generated states
                                           // that have not yet been "expanded"
While open not empty                       // one iteration of the search algorithm
    state = First(open)                    // current state is first state in open
    Pop(open)                              // remove the current state from open
    if Goal(state)                         // test current state for goal condition
        return "succeed"                   // search is complete
    else                                   // else expand the current state by
                                           // generating children and reorder the
                                           // open list per the search strategy
        open = QueueingFn(open, Expand(state))
Return "fail"

Advanced AI - 3: Search 22
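A minimal Python sketch of the loop above (not from the slides). It assumes the problem supplies goal_test(state) and a successor function; the names tree_search, Node and make_expand are illustrative only. The queueing function is the only piece the strategies on the following slides change.

from collections import namedtuple

# A search node: a state plus the bookkeeping the strategies below need.
Node = namedtuple("Node", ["state", "path", "g"])   # g = cost from the root so far

def tree_search(initial_state, goal_test, expand, queueing_fn):
    # Generic search loop; strategies differ only in queueing_fn.
    open_list = [Node(initial_state, (initial_state,), 0)]
    while open_list:
        node = open_list.pop(0)                      # current state is first on open
        if goal_test(node.state):
            return node.path                         # "succeed": the solution path
        open_list = queueing_fn(open_list, expand(node))
    return None                                      # "fail"

def make_expand(successors):
    # Build expand() from successors(state) -> [(action_cost, next_state), ...].
    def expand(node):
        return [Node(s, node.path + (s,), node.g + c) for c, s in successors(node.state)]
    return expand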
Search Strategies
Search strategies differ only in the QueueingFn
Features by which to compare search strategies
Completeness (always find solution)
Cost of search (time and space)
Cost of solution, optimal solution
Make use of knowledge of the domain
 “uninformed search” vs. “informed search”

Advanced AI - 3: Search 23
Breadth-First Search
Generate children of a state, QueueingFn adds
the children to the end of the open list
Level-by-level search
In tree, assume children are considered left-to-
right unless specified differently

Advanced AI - 3: Search 24
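A hedged sketch of BFS as a queueing function for the tree_search sketch earlier (assumed names, not from the slides):

def bfs_queueing_fn(open_list, children):
    # FIFO: children go to the end of the open list, giving level-by-level search
    return open_list + children

# usage sketch: tree_search(start, goal_test, make_expand(successors), bfs_queueing_fn)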
BFS Examples

Advanced AI - 3: Search 25
Depth-First Search
QueueingFn adds the children to the front of
the open list
BFS emulates FIFO queue
DFS emulates LIFO stack
Net effect
Follow leftmost path to bottom, then backtrack
Expand deepest node first

Advanced AI - 3: Search 26
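The corresponding DFS queueing function for the same sketch (assumed names as before):

def dfs_queueing_fn(open_list, children):
    # LIFO: children go to the front of the open list, so the deepest node is expanded first
    return children + open_list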
DFS Examples

Advanced AI - 3: Search 27
Uniform Cost Search (Branch & Bound)
QueueingFn is SortByCostSoFar
Cost from root to current node n is g(n)
Add operator costs along path
First goal found is least-cost solution

Advanced AI - 3: Search 28
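Uniform cost search in the same sketch sorts the open list by cost so far, g(n) (assumed names as before):

def ucs_queueing_fn(open_list, children):
    # SortByCostSoFar: the cheapest node (smallest g) is expanded next
    return sorted(open_list + children, key=lambda n: n.g)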
UCS Example

Advanced AI - 3: Search 29
Iterative Deepening Search
DFS with depth bound
QueuingFn is enqueue at front as with DFS
Expand(state) only returns children such that
depth(child) <= threshold
This prevents search from going down infinite
path
First threshold is 1
If do not find solution, increment threshold and
repeat

Advanced AI - 3: Search 30
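A sketch of iterative deepening on top of the tree_search and dfs_queueing_fn sketches above (assumed names; depth = number of actions from the root):

def ids(initial_state, goal_test, expand):
    threshold = 1
    while True:
        def bounded_expand(node):
            # only keep children with depth(child) <= threshold
            return [c for c in expand(node) if len(c.path) - 1 <= threshold]
        result = tree_search(initial_state, goal_test, bounded_expand, dfs_queueing_fn)
        if result is not None:
            return result
        threshold += 1        # no solution within this bound: deepen and repeat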
Bidirectional Search
 Search forward from the initial state to the goal AND backward from the goal state to the initial state
 Can prune many options
 Considerations
  Which goal state(s) to use
  How to determine when the searches overlap
  Which search to use for each direction
 Here, two BFS searches

Advanced AI - 3: Search 31
Informed Searches
Best-first search, Hill climbing, Beam search, A*
New terms
 Heuristics
 Optimal solution
 Hill climbing problems
 Admissibility
New parameters
 g(n) = estimated cost from initial state to state n
 h(n) = estimated cost (distance) from state n to closest goal
 h(n) is our heuristic
  Robot path planning: h(n) could be Euclidean distance
  8-puzzle: h(n) could be # tiles out of place
Search algorithms which use h(n) to guide search are heuristic search algorithms

Advanced AI - 3: Search 32
Informed Search
• h(n) ≥ 0 for all nodes n
• h(n) close to 0 means we think n is close to a goal state
• h(n) very big means we think n is far from a goal state

• All domain knowledge used in the search is encoded in the heuristic function, h

Advanced AI - 3: Search 33
Best-First Search
QueueingFn is sort-by-h
Best-first search only as good as heuristic
Example heuristic for 8 puzzle: Manhattan
Distance

Advanced AI - 3: Search 34
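A hedged sketch of sort-by-h for the framework above, plus a Manhattan-distance heuristic for the 8-puzzle; the flat 9-tuple state representation (0 = blank) is an assumption, not from the slides:

def best_first_queueing_fn(h):
    # sort the open list by h(n) alone; the node that looks closest to a goal is expanded next
    return lambda open_list, children: sorted(open_list + children, key=lambda n: h(n.state))

def manhattan_h(state, goal):
    # sum over tiles of horizontal + vertical distance from each tile's goal position
    total = 0
    for tile in range(1, 9):                         # skip the blank (0)
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total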
Example

Advanced AI - 3: Search 35
Example

Advanced AI - 3: Search 36
Example

Advanced AI - 3: Search 37
Example

Advanced AI - 3: Search 38
Example

Advanced AI - 3: Search 39
Example

Advanced AI - 3: Search 40
Example

Advanced AI - 3: Search 41
Example

Advanced AI - 3: Search 42
Example

Advanced AI - 3: Search 43
Hill Climbing (Greedy Search)
QueueingFn is sort-by-h
Only keep lowest-h state on open list
Best-first search is tentative
Hill climbing is irrevocable
Features
Much faster
Less memory
Dependent upon h(n)
If bad h(n), may prune away all goals
Not complete

Advanced AI - 3: Search 44
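In the same sketch, hill climbing keeps only the single lowest-h child, which is what makes it irrevocable (assumed names as before):

def hill_climbing_queueing_fn(h):
    # keep only the best (lowest-h) child; everything else is discarded, so no backtracking
    return lambda open_list, children: sorted(children, key=lambda n: h(n.state))[:1]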
Example

Advanced AI - 3: Search 45
Example

Advanced AI - 3: Search 46
Beam Search
QueueingFn is sort-by-h
Only keep best (lowest-h) n nodes on open list
n is the “beam width”
n = 1, Hill climbing
n = infinity, Best first search

Advanced AI - 3: Search 47
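Beam search generalizes this to the best n nodes (assumed names as before; beam_width is the n on the slide):

def beam_queueing_fn(h, beam_width):
    # sort by h and keep only the best beam_width nodes on the open list
    return lambda open_list, children: sorted(open_list + children, key=lambda n: h(n.state))[:beam_width]

# beam_width = 1 behaves like hill climbing; an unbounded beam_width behaves like best-first search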
Example

Advanced AI - 3: Search 48
Example

Advanced AI - 3: Search 49
Example

Advanced AI - 3: Search 50
Example

Advanced AI - 3: Search 51
Example

Advanced AI - 3: Search 52
Example

Advanced AI - 3: Search 53
Example

Advanced AI - 3: Search 54
Example

Advanced AI - 3: Search 55
Example

Advanced AI - 3: Search 56
A*
QueueingFn is sort-by-f
f(n) = g(n) + h(n)
Note that UCS and Best-first both improve search
UCS keeps solution cost low
Best-first helps find solution quickly
A* combines these approaches

Advanced AI - 3: Search 57
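A* in the same sketch sorts by f(n) = g(n) + h(n) (assumed names as before):

def a_star_queueing_fn(h):
    # f(n) = g(n) + h(n): cost so far plus estimated cost to the closest goal
    return lambda open_list, children: sorted(open_list + children, key=lambda n: n.g + h(n.state))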
Power of f
If the heuristic function is wrong, it either
 overestimates (guesses too high)
 underestimates (guesses too low)
Overestimating is worse than underestimating
A* returns an optimal solution if h(n) is admissible
 a heuristic function is admissible if it never overestimates the true cost to the nearest goal
 if a search finds the optimal solution using an admissible heuristic, the search is admissible

Advanced AI - 3: Search 58
Overestimating
[Search tree: root A (h=15) with children B (h=6), C (h=20), D (h=10); below them, leaves E (h=20), F (h=0, a goal), G (h=12), H (h=20), I (h=0, a goal); edge costs as in the figure]

Solution costs: A-B-F = 9, A-D-I = 8
Open list: A (f=15), B (f=9), F (f=9)
Missed optimal solution: h(D) overestimates the true cost from D to the goal I, so the branch through D looks worse than it is; the search expands B and then the goal F (f = 9) and returns A-B-F, missing the cheaper A-D-I.
Advanced AI - 3: Search 59
Example

Advanced AI - 3: Search 60
Example

Advanced AI - 3: Search 61
Example

Advanced AI - 3: Search 62
Example

Advanced AI - 3: Search 63
Example

Advanced AI - 3: Search 64
Example

Advanced AI - 3: Search 65
Example

Advanced AI - 3: Search 66
Example

Advanced AI - 3: Search 67
Local Searching
Systematic searching: Search for a path from start
state to a goal state, then “execute” solution path’s
sequence of operators

– BFS, DFS, IDS, UCS, Greedy Best-First, A, A*, etc.


– ok for small search spaces
– not okay for NP-Hard problems requiring exponential
time to find the (optimal) solution

Advanced AI - 3: Search 68
Advanced AI - 3: Search 69
Advanced AI - 3: Search 70
Traveling Salesperson Problem
(TSP)
A salesperson wants to visit a list of cities
stopping in each city only once
(sometimes also must return to the first city)
traveling the shortest distance
f = total distance traveled

Advanced AI - 3: Search 71
Traveling Salesperson Problem (TSP)
Nodes are cities
Arcs are labeled with distances between cities
Adjacency matrix (notice the graph is fully connected):

    A  B  C  D  E
A   0  5  8  9  7
B   5  0  6  5  5
C   8  6  0  2  3
D   9  5  2  0  4
E   7  5  3  4  0

[Figure: 5-city TSP graph, not to scale]
Advanced AI - 3: Search 72
Traveling Salesperson Problem (TSP)
A solution is a permutation of cities, called a tour

[Figure: the same 5-city TSP graph and adjacency matrix as above, not to scale]
Advanced AI - 3: Search 74
Traveling Salesperson Problem (TSP)
A solution is a permutation of cities, called a tour
 e.g. A – B – C – D – E
 Assume tours can start at any city and return home at the end

[Figure: the same 5-city TSP graph and adjacency matrix as above, not to scale]
Advanced AI - 3: Search 75
How would you solve TSP using the A or A* algorithm?
 How to represent a state?
 Successor function?
 Heuristics?

Advanced AI - 3: Search 76
Traveling Salesperson Problem (TSP)
Classic NP-Hard problem
How many solutions exist?
 n! where n = # of cities
 n = 5 results in 120 tours
 n = 10 results in 3,628,800 tours
 n = 20 results in ~2.4×10^18 tours

[Figure: the same 5-city TSP graph and adjacency matrix as above, not to scale]
Advanced AI - 3: Search 77
Solving Optimization Problems using Local Search
Methods
Now a different setting:
Each state s has a score or cost, f(s), that we can compute
The goal is to find the state with the highest (or lowest)
score, or a reasonably high (low) score
We do not care about the path
Use variable-based models
Solution is not a path but an assignment of values
for a set of variables
Enumerating all the states is intractable
Previous search algorithms are too expensive

Advanced AI - 3: Search 78
Other Example Problems
N-Queens
Place n queens on n x n checkerboard so that no queen
can “capture” another
f = number of conflicting queens
Boolean Satisfiability
Given a Boolean expression containing n Boolean
variables, find an assignment of {T, F} to each variable so
that the expression evaluates to True
(A ∨ ¬B ∨ C) ∧ (¬A ∨ C ∨ D)
f = number of satisfied clauses

Advanced AI - 3: Search 79
Example Problem: Chip Layout
Channel Routing
[Figure: "Lots of Chip Real Estate" vs. "Same connectivity, much less space"]
Advanced AI - 3: Search 80
Example Problem: Scheduling

Also:
parking lot layout, product design, aerodynamic design, "Million Queens" problem, radiotherapy treatment planning, …
Advanced AI - 3: Search 81
Local Searching
• Hard problems can be solved in polynomial time
by using either an:
– approximate model: find an exact solution
to a simpler version of the problem
– approximate solution: find a non-optimal solution
to the original hard problem

• We'll explore ways to search through a solution


space by iteratively improving solutions until
one is found that is optimal or near optimal
Advanced AI - 3: Search 82
Local Searching
Local searching: every node is a solution
– Operators/actions go from one solution to
another
– can stop at any time and have a valid
solution
– goal of search is to find a better/best solution
No longer searching a state space for a solution path and
then executing the steps of the solution path
• A* isn't a local search since it considers different partial
solutions by looking at the estimated cost
of a solution path
Advanced AI - 3: Search 83
Informal Characterization
These are problems in which
There is some combinatorial structure being
optimized
There is a cost function: Structure → Real number, to be optimized, or at least a reasonable solution is to be found
Searching all possible structures is intractable
There’s no known algorithm for finding the
optimal solution efficiently
“Similar” solutions have similar costs

Advanced AI - 3: Search 84
Local Searching
An operator/action is needed to transform one
solution to another
• TSP: 2-swap operator
– take two cities and swap their positions in the tour
– A-B-C-D-E with swap(A,D) yields D-B-C-A-E
– possible since graph is fully connected
• TSP: 2-interchange operator (aka 2-opt swap)
– reverse the path between two cities
– A-B-C-D-E with interchange(A,D) yields D-C-B-A-E

Advanced AI - 3: Search 85
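A small Python sketch of the two operators, treating a tour as a list of city labels; i and j are positions in the tour (an assumption for illustration, not from the slides):

def two_swap(tour, i, j):
    # exchange the cities at positions i and j
    t = list(tour)
    t[i], t[j] = t[j], t[i]
    return t

def two_interchange(tour, i, j):
    # 2-opt: reverse the segment of the tour between positions i and j (inclusive)
    i, j = min(i, j), max(i, j)
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

# example from the slide, using the positions of A and D:
# two_swap(list("ABCDE"), 0, 3)        -> ['D', 'B', 'C', 'A', 'E']
# two_interchange(list("ABCDE"), 0, 3) -> ['D', 'C', 'B', 'A', 'E']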
Neighbors: TSP
state: A-B-C-D-E-F-G-H-A
f = length of tour
2-interchange

A-B-C-D-E-F-G-H-A

flip

A-E-D-C-B-F-G-H-A

Advanced AI - 3: Search 86
Local Searching
Those solutions that can be reached with one
application of an operator are in the current solution's
neighborhood (aka “move set”)
Local search considers next only those solutions in
the neighborhood
• The neighborhood should be much smaller
than the size of the search space
(otherwise the search degenerates)

Advanced AI - 3: Search 87
Local Searching
An evaluation function, f, is used to map each
solution/state to a number corresponding to the
quality/cost of that solution
• TSP: Use the length of the tour;
A better solution has a shorter tour length
• Maximize f:
called hill-climbing (gradient ascent if continuous)
• Minimize f:
called valley-finding (gradient descent if continuous)
• Can be used to maximize/minimize some cost
Advanced AI - 3: Search 88
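For TSP, the evaluation function can be the closed-tour length; a sketch assuming dist[a][b] holds the distance between cities a and b (e.g. the adjacency matrix from the earlier slides):

def tour_length(tour, dist):
    # total length of the closed tour, including the edge back to the first city
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

# with the 5-city matrix above, tour_length(['A', 'B', 'C', 'D', 'E'], dist) = 5 + 6 + 2 + 4 + 7 = 24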
Hill-Climbing (HC)
• Question: What’s a neighbor?
 Problem spaces tend to have structure. A
small change produces a neighboring state
 The size of the neighborhood must be small
enough for efficiency
 Designing the neighborhood is critical; This is
the real ingenuity – not the decision to use
hill-climbing
• Question: Pick which neighbor? The best one
(greedy)
• Question: What if no neighbor is better than the
current state? Stop
Advanced AI - 3: Search 89
Hill-Climbing Algorithm
1. Pick initial state s
2. Pick t in neighbors(s) with the largest f(t)
3. If f(t) ≤ f(s) then stop and return s
4. s = t. Go to Step 2.

• Simple
• Greedy
• Stops at a local maximum

Advanced AI - 3: Search 90
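The four steps above as a minimal Python sketch (assuming neighbors(s) is non-empty and f is to be maximized):

def hill_climbing(initial, neighbors, f):
    s = initial
    while True:
        t = max(neighbors(s), key=f)     # step 2: the neighbor with the largest f(t)
        if f(t) <= f(s):
            return s                     # step 3: no neighbor improves on s, a local maximum
        s = t                            # step 4: move to t and repeat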
Hill-Climbing (HC)
HC exploits the neighborhood
– like Greedy Best-First search, it chooses what looks
best locally
– but doesn't allow backtracking or jumping to an
alternative path since there is no Frontier list
• HC is very space efficient
– Similar to Beam Search with a beam width of 1

• HC is very fast and often effective in practice

Advanced AI - 3: Search 92
Local Optima in Hill-Climbing
Useful mental picture: f is a surface (‘hills’) in state space
[Figure: a curve f over the state space; the global optimum is where we want to be]

But we can’t see the entire landscape all at once. We can only see a neighborhood; like climbing in fog.
[Figure: the same curve, hidden by fog except for the current neighborhood]

Advanced AI - 3: Search 93
Hill-Climbing
Visualized as a 2D surface
 Height is the quality/cost of a solution, f = f(x, y)
 Solution space is a 2D surface
 Initial solution is a point
 Goal is to find the highest point on the surface of the solution space
 Hill-Climbing follows the direction of the steepest ascent, i.e., where f increases the most

[Figure: a surface f(x, y) plotted over the x-y plane]
Advanced AI - 3: Search 94
Hill-Climbing (HC)
The solution found by HC is totally determined by the starting point; its fundamental weakness is getting stuck:
 At a local maximum
 At plateaus and ridges

Global maximum may not be found

Trade off: greedily exploiting locality as in HC vs. exploring the state space as in BFS
[Figure: a curve f(y) with local maxima and a global maximum]
Advanced AI - 3: Search 95
Difficulty in Searching for a Global Optimum (here shown as a Minimum)
[Figure: descending from the starting point leads to a local minimum; a barrier to local search separates it from the global minimum]

Advanced AI - 3: Search 96
Hill-Climbing with Random Restarts
Very simple modification:
1. When stuck, pick a random new starting state and
re-run hill-climbing from there
2. Repeat this k times
3. Return the best of the k local optima

• Can be very effective


• Should be tried whenever hill-climbing is used
• Fast, easy to implement; works well for many
applications where the solution space surface is not
too “bumpy” (i.e., not too many local maxima)
Advanced AI - 3: Search 97
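A sketch of the modification, reusing the hill_climbing sketch above; random_state() is an assumed problem-specific generator of random starting states:

def hill_climbing_random_restarts(random_state, neighbors, f, k):
    best = None
    for _ in range(k):
        candidate = hill_climbing(random_state(), neighbors, f)   # one restart
        if best is None or f(candidate) > f(best):
            best = candidate                                      # keep the best local optimum
    return best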
Life Lesson
Sometimes one needs to temporarily step
backward in order to move forward

Lesson applied to iterative, local search:


Sometimes one needs to move to an
inferior neighbor in order to escape a
local optimum

Advanced AI - 3: Search 98
Simulated Annealing
Origin:
The annealing process of heated solids –
alloys manage to find a near global minimum energy state when heated and then slowly cooled

Intuition:
By allowing occasional ascent in the search process, we might be able to escape the traps of local minima

Introduced by Nicholas Metropolis in 1953
Advanced AI - 3: Search 99
Consequences of Occasional Bad Moves
Desired effect (when searching for a global min):
 Advantage: helps escape local optima
Adverse effect:
 Disadvantage: it might pass the global optimum after reaching it

Idea 1: Use a small, fixed probability threshold, say, p = 0.1
Advanced AI - 3: Search 100


However, allowing occasional ascent steps is a double-edged sword. On one hand, it fulfills our desire to let the algorithm proceed beyond local optima. On the other hand, the search may pass through the global optimum and miss it. To maintain the desired effect and reduce the adverse effect, we need a scheme to control the acceptance of occasional ascents; this acceptance scheme is the heart of simulated annealing.

Advanced AI - 3: Search 101


Simulated Annealing
(Stochastic Hill-Climbing)
Pick initial state, s
k = 0
while k < kmax {
    T = temperature(k)
    Randomly pick state t from neighbors of s
    if f(t) > f(s) then s = t
    else if e^((f(t) – f(s)) / T) > random() then s = t
    k = k + 1
}
return s

Advanced AI - 3: Search 102
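The pseudocode above as a minimal Python sketch (maximizing f; temperature(k) is an assumed schedule that stays positive, e.g. lambda k: 1.0 / (k + 1)):

import math
import random

def simulated_annealing(initial, neighbors, f, kmax, temperature):
    s = initial
    for k in range(kmax):
        T = temperature(k)
        t = random.choice(neighbors(s))              # randomly pick a neighbor
        if f(t) > f(s):
            s = t                                    # always accept an uphill move
        elif math.exp((f(t) - f(s)) / T) > random.random():
            s = t                                    # occasionally accept a downhill move
    return s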


Informed Search
Informed searches use domain knowledge
to guide selection of the best path to continue
searching

• Heuristics are used, which are informed guesses

• Heuristic means "serving to aid discovery"


Informed Search
Define a heuristic function, h(n)
uses domain-specific information in some way
is computable from the current state description
it estimates
 the "goodness" of node n
 how close node n is to a goal
 the cost of minimal cost path from node n to a goal state
Informed Search
• h(n) ≥ 0 for all nodes n
• h(n) close to 0 means we think n is close to a goal state
• h(n) very big means we think n is far from a goal state

• All domain knowledge used in the search is encoded in the heuristic function, h

• An example of a “weak method” for AI because of the limited way that domain-specific information is used to solve a problem
Advanced AI - 3: Search 106