Artificial Intelligence
Search
Instructor: Muhammad Yasir Khan
Today
Agents that Plan Ahead
Search Problems
Uninformed Search Methods
Depth-First Search
Breadth-First Search
Uniform-Cost Search
Agents that Plan
Reflex Agents
Reflex agents:
Choose action based on current percept (and maybe memory)
May have memory or a model of the world’s current state
Do not consider the future consequences of their actions
Consider how the world IS
Can a reflex agent be rational?
[Demo: reflex optimal (L2D1)]
[Demo: reflex optimal (L2D2)]
Planning Agents
Planning agents:
Ask “what if”
Decisions based on (hypothesized) consequences of actions
Must have a model of how the world evolves in response to actions
Must formulate a goal (test)
Consider how the world WOULD BE
Optimal vs. complete planning
Planning vs. replanning
[Demo: re-planning (L2D3)]
[Demo: mastermind (L2D4)]
Search Problems
Search Problems
A search problem consists of:
A state space
A successor function (with actions, costs)
A start state and a goal test

A solution is a sequence of actions (a plan) which transforms the start state to a goal state
Example: Traveling in Romania
State space: Cities
Successor function: Roads: Go to adjacent city with cost = distance
Start state: Arad
Goal test: Is state == Bucharest?
Solution?
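The three components above can be written down directly. Here is a minimal sketch in Python; the class and method names are illustrative (not a fixed API), and the road map is a small excerpt using the usual textbook distances:

```python
class RomaniaProblem:
    """Traveling in Romania: states are cities, successors follow roads."""

    # A small excerpt of the Romania road map; distances in km.
    ROADS = {
        "Arad": [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
        "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
        "Fagaras": [("Bucharest", 211)],
        "Rimnicu Vilcea": [("Pitesti", 97)],
        "Pitesti": [("Bucharest", 101)],
    }

    def start_state(self):
        return "Arad"

    def is_goal(self, state):
        return state == "Bucharest"

    def successors(self, state):
        """Yield (action, next_state, cost) triples for a state."""
        for city, distance in self.ROADS.get(state, []):
            yield ("Go " + city, city, distance)
```

A solution to this problem is then any action sequence that drives the start state to a state passing the goal test.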
What’s in a State Space?
The world state includes every last detail of the environment
A search state keeps only the details needed for planning (abstraction)
Problem: Pathing
States: (x,y) location
Actions: NSEW
Successor: update location only
Goal test: is (x,y)=END

Problem: Eat-All-Dots
States: {(x,y), dot booleans}
Actions: NSEW
Successor: update location and possibly a dot boolean
Goal test: dots all false
State Space Sizes?
World state:
Agent positions: 120
Food count: 30
Ghost positions: 12
Agent facing: NSEW

How many world states? 120 × 2^30 × 12^2 × 4
States for pathing? 120
States for eat-all-dots? 120 × 2^30
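As a quick check, the counts above multiply out directly (the component counts are the ones given on the slide):

```python
# State-space sizes for the Pacman example, using the slide's counts.
agent_positions = 120      # possible (x, y) locations for Pacman
food_configs = 2 ** 30     # each of 30 dots is present or eaten
ghost_configs = 12 ** 2    # 2 ghosts, 12 possible positions each
facings = 4                # N, S, E, W

world_states = agent_positions * food_configs * ghost_configs * facings
pathing_states = agent_positions                    # only location matters
eat_all_dots_states = agent_positions * food_configs
```

The point of the abstraction: pathing needs only 120 search states, while the full world state space is astronomically larger.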
State Space Graphs and Search Trees
State Space Graphs
State space graph: A mathematical representation of a search problem
Nodes are (abstracted) world configurations
Arcs represent successors (action results)
The goal test is a set of goal nodes (maybe only one)

In a state space graph, each state occurs only once!

We can rarely build this full graph
State Space Graphs
State space graph: A mathematical representation of a search problem
Nodes are (abstracted) world configurations
Arcs represent successors (action results)
The goal test is a set of goal nodes (maybe only one)

In a state space graph, each state occurs only once!

We can rarely build this full graph in memory (it’s too big), but it’s a useful idea

[Figure: a tiny state space graph for a tiny search problem]
Search Trees
[Figure: Pacman at the root (“This is now / start”), with actions “N”, 1.0 and “E”, 1.0 branching to possible futures]
A search tree:
A “what if” tree of plans and their outcomes
The start state is the root node
Children correspond to successors
Nodes show states, but correspond to PLANS that achieve those states
For most problems, we can never actually build the whole tree
State Space Graphs vs. Search Trees
Each NODE in the search tree is an entire PATH in the state space graph.

We construct both on demand, and we construct as little as possible.

[Figure: the example state space graph (left) and its corresponding search tree rooted at S (right)]
Quiz: State Space Graphs vs. Search Trees
Consider this 4-state graph: How big is its search tree (from S)?

[Figure: a 4-state graph with states S, a, b, G; in the search tree from S, a and b can alternate forever, so the tree is infinite]

Important: Lots of repeated structure in the search tree!
Tree Search
Search Example: Romania
Searching with a Search Tree
Search:
Expand out potential plans (tree nodes)
Maintain a fringe of partial plans under consideration
Try to expand as few tree nodes as possible
Depth-First Search
Depth-First Search
Strategy: expand a deepest node first

Implementation: Fringe is a LIFO stack

[Figure: the example graph and the search tree nodes DFS expands, in order]
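The LIFO-stack strategy can be sketched in a few lines; the successor lists below encode the lecture’s example graph, and the check against revisiting a state on the current path is an added safeguard (the slides defer cycle handling until later):

```python
def depth_first_search(start, goal, successors):
    """Return a plan (list of states) from start to goal, or None."""
    fringe = [[start]]  # LIFO stack of partial plans (paths)
    while fringe:
        path = fringe.pop()          # deepest node first
        state = path[-1]
        if state == goal:
            return path
        # Push children in reverse so the first-listed child is on top.
        for child in reversed(successors.get(state, [])):
            if child not in path:    # avoid cycles along the current path
                fringe.append(path + [child])
    return None

# The example graph from the slides (letters S, a..r, goal G).
graph = {"S": ["d", "e", "p"], "d": ["b", "c", "e"], "e": ["h", "r"],
         "b": ["a"], "c": ["a"], "h": ["p", "q"], "p": ["q"],
         "r": ["f"], "f": ["c", "G"], "a": [], "q": []}
```

With this child ordering, DFS dives down the leftmost branch and returns the “leftmost” solution it reaches, regardless of its depth.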
Search Algorithm Properties
Search Algorithm Properties
Complete: Guaranteed to find a solution if one exists?
Optimal: Guaranteed to find the least cost path?
Time complexity?
Space complexity?

Cartoon of search tree:
b is the branching factor
m is the maximum depth
solutions at various depths
[Figure: m tiers of the tree: 1 node, b nodes, b^2 nodes, ..., b^m nodes]

Number of nodes in entire tree?
1 + b + b^2 + ... + b^m = O(b^m)
Depth-First Search (DFS) Properties
What nodes does DFS expand?
Some left prefix of the tree.
Could process the whole tree!
If m is finite, takes time O(b^m)

How much space does the fringe take?
Only has siblings on path to root, so O(bm)

Is it complete?
m could be infinite, so only if we prevent cycles (more later)

Is it optimal?
No, it finds the “leftmost” solution, regardless of depth or cost
Breadth-First Search
Breadth-First Search
Strategy: expand a shallowest node first

Implementation: Fringe is a FIFO queue

[Figure: the example graph and its search tree; BFS expands the tree tier by tier (“search tiers”)]
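Swapping the stack for a FIFO queue turns the same loop into BFS. A minimal sketch on the same example graph as the DFS demo (cycle-checking along the current path is again an addition):

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Return a shallowest plan (list of states) from start to goal, or None."""
    fringe = deque([[start]])  # FIFO queue of partial plans
    while fringe:
        path = fringe.popleft()      # shallowest node first
        state = path[-1]
        if state == goal:
            return path
        for child in successors.get(state, []):
            if child not in path:    # avoid cycles along the current path
                fringe.append(path + [child])
    return None

# The example graph from the slides.
graph = {"S": ["d", "e", "p"], "d": ["b", "c", "e"], "e": ["h", "r"],
         "b": ["a"], "c": ["a"], "h": ["p", "q"], "p": ["q"],
         "r": ["f"], "f": ["c", "G"], "a": [], "q": []}
```

On this graph BFS returns a plan with the fewest actions, which need not be the one DFS finds.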
Breadth-First Search (BFS) Properties
What nodes does BFS expand?
Processes all nodes above shallowest solution
Let depth of shallowest solution be s
Search takes time O(b^s)

How much space does the fringe take?
Has roughly the last tier, so O(b^s)

Is it complete?
s must be finite if a solution exists, so yes!

Is it optimal?
Only if costs are all 1 (more on costs later)
Quiz: DFS vs BFS
Quiz: DFS vs BFS
When will BFS outperform DFS?
When will DFS outperform BFS?
[Demo: dfs/bfs maze water]
Iterative Deepening
Idea: get DFS’s space advantage with BFS’s time / shallow-solution advantages
Run a DFS with depth limit 1. If no solution…
Run a DFS with depth limit 2. If no solution…
Run a DFS with depth limit 3. …

Isn’t that wastefully redundant?
Generally most work happens in the lowest level searched, so not so bad!
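The “run DFS with limit 1, 2, 3, …” loop above can be sketched directly; the graph is the same lecture example, and the cap on the limit is an added safety valve for graphs with no solution:

```python
def depth_limited(path, goal, successors, limit):
    """DFS from the end of `path`, exploring at most `limit` more actions."""
    state = path[-1]
    if state == goal:
        return path
    if limit == 0:
        return None
    for child in successors.get(state, []):
        if child not in path:  # avoid cycles along the current path
            found = depth_limited(path + [child], goal, successors, limit - 1)
            if found is not None:
                return found
    return None

def iterative_deepening_search(start, goal, successors, max_depth=50):
    """Repeated depth-limited DFS: BFS-like shallowest solution, DFS-like space."""
    for limit in range(max_depth + 1):
        plan = depth_limited([start], goal, successors, limit)
        if plan is not None:
            return plan
    return None

# The example graph from the slides.
graph = {"S": ["d", "e", "p"], "d": ["b", "c", "e"], "e": ["h", "r"],
         "b": ["a"], "c": ["a"], "h": ["p", "q"], "p": ["q"],
         "r": ["f"], "f": ["c", "G"], "a": [], "q": []}
```

Because the first limit at which a solution appears is the depth of the shallowest solution, the plan returned matches BFS, while the fringe stays DFS-sized.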
Cost-Sensitive Search
[Figure: the example graph from START to GOAL with an action cost on each arc]

BFS finds the shortest path in terms of number of actions. It does not find the least-cost path. We will now cover a similar algorithm which does find the least-cost path.
Uniform Cost Search
Uniform Cost Search
Strategy: expand a cheapest node first:

Fringe is a priority queue (priority: cumulative cost)

[Figure: the cost graph and its search tree, each node labeled with its cumulative cost (S 0, p 1, d 3, b 4, e 5, a 6, r 7, f 8, G 10); UCS expands nodes in cost order, sweeping outward in cost contours]
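A priority-queue implementation can be sketched with `heapq`; the arc costs below follow the lecture’s cost-annotated example graph, and the closed set (so each state is expanded at most once) is the standard graph-search refinement rather than pure tree search:

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """Return (cost, plan) for a least-cost path from start to goal, or None."""
    fringe = [(0, [start])]  # priority queue keyed on cumulative cost
    closed = set()           # states already expanded at their cheapest cost
    while fringe:
        cost, path = heapq.heappop(fringe)   # cheapest node first
        state = path[-1]
        if state == goal:
            return cost, path
        if state in closed:
            continue
        closed.add(state)
        for child, step in successors.get(state, []):
            heapq.heappush(fringe, (cost + step, path + [child]))
    return None

# The lecture's example graph with arc costs.
cost_graph = {
    "S": [("d", 3), ("e", 9), ("p", 1)],
    "d": [("b", 1), ("c", 8), ("e", 2)],
    "e": [("h", 8), ("r", 2)],
    "b": [("a", 2)], "c": [("a", 2)],
    "h": [("p", 4), ("q", 4)], "p": [("q", 15)],
    "r": [("f", 1)], "f": [("c", 3), ("G", 2)],
    "a": [], "q": [],
}
```

Note the goal test happens when a node is popped, not when it is generated; popping early is what makes the returned path least-cost.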
Uniform Cost Search (UCS) Properties
What nodes does UCS expand?
Processes all nodes with cost less than cheapest solution!
If that solution costs C* and arcs cost at least ε, then the “effective depth” is roughly C*/ε
Takes time O(b^(C*/ε)) (exponential in effective depth)

How much space does the fringe take?
Has roughly the last tier, so O(b^(C*/ε))

Is it complete?
Assuming best solution has a finite cost and minimum arc cost is positive, yes!

Is it optimal?
Yes! (Proof next lecture via A*)
Uniform Cost Issues
Remember: UCS explores increasing cost contours
[Figure: concentric contours c1 ≤ c2 ≤ c3 expanding around the start toward the goal]

The good: UCS is complete and optimal!

The bad:
Explores options in every “direction”
No information about goal location

We’ll fix that soon!

[Demo: empty grid UCS (L2D5)]
[Demo: maze with deep/shallow water]
The One Queue
All these search algorithms are the same except for fringe strategies

Conceptually, all fringes are priority queues (i.e. collections of nodes with attached priorities)

Practically, for DFS and BFS, you can avoid the log(n) overhead from an actual priority queue by using stacks and queues

Can even code one implementation that takes a variable queuing object
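That last point can be made concrete: one search loop where only the priority function changes. Framing the variable queuing object as a priority function with an insertion counter (so LIFO and FIFO fall out as special cases) is one illustrative choice, not the only one:

```python
import heapq
import itertools

def generic_search(start, goal, successors, priority):
    """One loop for DFS/BFS/UCS: the fringe is always a priority queue."""
    counter = itertools.count()          # insertion order, also breaks ties
    order = next(counter)
    fringe = [(priority([start], 0, order), order, [start], 0)]
    while fringe:
        _, _, path, cost = heapq.heappop(fringe)
        state = path[-1]
        if state == goal:
            return path
        for child, step in successors.get(state, []):
            if child in path:            # avoid cycles along the current path
                continue
            order = next(counter)
            new_path, new_cost = path + [child], cost + step
            heapq.heappush(
                fringe,
                (priority(new_path, new_cost, order), order, new_path, new_cost))
    return None

# The three strategies differ only in how a node's priority is computed:
dfs_rank = lambda path, cost, order: -order  # LIFO: newest insertion first
bfs_rank = lambda path, cost, order: order   # FIFO: oldest insertion first
ucs_rank = lambda path, cost, order: cost    # cheapest cumulative cost first

# The lecture's example graph with arc costs.
cost_graph = {
    "S": [("d", 3), ("e", 9), ("p", 1)],
    "d": [("b", 1), ("c", 8), ("e", 2)],
    "e": [("h", 8), ("r", 2)],
    "b": [("a", 2)], "c": [("a", 2)],
    "h": [("p", 4), ("q", 4)], "p": [("q", 15)],
    "r": [("f", 1)], "f": [("c", 3), ("G", 2)],
    "a": [], "q": [],
}
```

As the slide notes, real DFS/BFS implementations would use a plain stack or queue to skip the heap’s log(n) overhead; the heap here is just the conceptual unification.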