Solving Problems by Searching
Outline
Problem-solving agents
Problem types
Problem formulation
Example problems
Basic search algorithms
Overview
Recall our previous discussion of reflex agents. Such agents cannot operate
well in environments for which the state to action mapping is too large to store
or would take too long to learn.
Problem-solving agents use atomic representations, where states of the
world are considered as wholes, with no internal structure visible to the
problem-solving agent.
We consider two general classes of search: (1) uninformed search algorithms
for which the algorithm is provided no information about the problem other
than its definition; (2) informed search, where the algorithm is given some
guidance.
Overview
Intelligent agents are supposed to maximize their performance measure;
achieving this is sometimes simplified if the agent can adopt a goal and aim
to satisfy it.
Goals help organize behavior by limiting objectives. We consider a goal to be a
set of world states – exactly those states in which the goal is satisfied.
Here we consider environments that are known, observable, discrete and
deterministic (i.e. each action has exactly one corresponding outcome).
The process of looking for a sequence of actions that reaches the goal is
called search. A search algorithm takes a problem as input and returns a
solution in the form of an action sequence.
Well-defined problems and solutions
A problem can be defined formally by five components:
(1) The initial state from which the agent starts.
(2) A description of possible actions available to the agent: ACTIONS(s)
(3) A description of what each action does, i.e. the transition model, specified
by a function RESULT(s, a) = s′.
Together, the initial state, actions and transition model implicitly define the
state space of the problem – the set of all states reachable from the initial state
by any sequence of actions.
The state space forms a directed network or graph in which the nodes are
states and the edges between nodes are actions. A path in the state space is a
sequence of states connected by a sequence of actions.
Well-defined problems and solutions
(4) The goal test, which determines whether a given state is a goal state.
Frequently the goal test is intuitive (e.g. check if we arrived at the destination)
– but note that it is also sometimes specified by an abstract property (e.g.
“check mate”).
(5) A path cost function that assigns a numeric cost to each path. The
problem-solving agent chooses a cost function that reflects its own
performance measure.
Commonly (but not always), the cost of a path is additive in terms of the
individual actions along the path. Denote the step cost of taking action a in
state s, arriving in s′, as c(s, a, s′).
A key element of successful problem formulation relates to abstraction –
the process of removing (inessential) details from a problem representation.
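The five components above can be sketched as a small Python class. This is a minimal illustrative sketch, not code from the text: the toy route-finding graph, the class name, and the method names are all assumptions chosen to mirror the ACTIONS/RESULT/goal-test/step-cost terminology.

```python
# Hypothetical sketch of the five-component problem definition, using a
# toy route-finding graph (all names here are illustrative assumptions).
class Problem:
    def __init__(self, initial, goal, graph):
        self.initial = initial           # (1) the initial state
        self.goal = goal
        self.graph = graph               # adjacency: state -> {neighbor: cost}

    def actions(self, s):                # (2) ACTIONS(s)
        return list(self.graph.get(s, {}))

    def result(self, s, a):              # (3) transition model RESULT(s, a) = s'
        return a                         # here an action is "move to neighbor a"

    def goal_test(self, s):              # (4) the goal test
        return s == self.goal

    def step_cost(self, s, a, s2):       # (5) c(s, a, s') for an additive path cost
        return self.graph[s][s2]

toy = Problem('A', 'C', {'A': {'B': 1}, 'B': {'C': 2}, 'C': {}})
```

Here the state space is defined implicitly by `initial`, `actions`, and `result`, exactly as described above: it is never enumerated, only reachable by applying actions.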
Example: The 8-puzzle
States: Locations of tiles. (Q: How large is the state space?)
Actions: Move blank left, right, up, down.
Goal test: s==goal state. (given)
Path cost: 1 per move.
[Note: optimal solution of n-Puzzle family is NP-hard]
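The 8-puzzle formulation above can be sketched directly: a state is a 9-tuple read row by row with 0 marking the blank, and the four actions move the blank. The representation and function names are illustrative assumptions.

```python
# Sketch of the 8-puzzle formulation (assumed representation: a 9-tuple
# read row by row, with 0 marking the blank square).
MOVES = {'up': -3, 'down': 3, 'left': -1, 'right': 1}

def actions(state):
    """ACTIONS(s): which blank moves are legal in this state."""
    i = state.index(0)                 # position of the blank
    acts = []
    if i >= 3:     acts.append('up')
    if i < 6:      acts.append('down')
    if i % 3 > 0:  acts.append('left')
    if i % 3 < 2:  acts.append('right')
    return acts

def result(state, action):
    """RESULT(s, a): slide the neighboring tile into the blank."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)
```

With the path cost of 1 per move, g(n) is simply the number of moves made, matching the formulation above.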
Example: 8-queens
States: Any arrangement of 0 to 8 queens on
the board.
Actions: Add a queen to any empty square.
Transition Model: Returns the board with a queen added to
the specified square.
Goal Test: 8 queens on the board, none attacking.
Q: How many possible sequences? (~1.8×10^14)
Revision to problem: arrange queens with one per column, in
the leftmost n columns, with no queen attacking another.
Actions: Add a queen to any square in the leftmost empty column
(such that no queen attacks another) – a reduction to just 2,057 states!
Searching for Solutions
Recall that a solution is an action sequence; accordingly, search
algorithms work by considering various possible action
sequences.
The possible action sequences starting at the initial state form a
search tree with the initial state at the root; the branches are
actions and the nodes correspond to states in the state space of the
problem.
To consider taking various actions, we expand the current state –
thereby generating a new set of states.
In this way, we add branches from the parent node to children
nodes.
Searching for Solutions
A node with no children is called a leaf; the set of all leaf
nodes available for expansion at any given point is called the
frontier.
The process of expanding nodes on the frontier continues
until either a solution is found or there are no more states to
expand.
We consider the general TREE-SEARCH algorithm next.
All search algorithms share this basic structure; they vary
primarily according to how they choose which state to expand
next – the so- called search strategy.
In general, TREE-SEARCH considers all possible paths, including
redundant paths.
Tree search algorithms
Basic idea:
Offline, simulated exploration of state space by generating
successors of already-explored states (a.k.a.~expanding
states)
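The basic idea can be sketched as a generic tree search. This is an assumed minimal version, not the book's pseudocode verbatim: the frontier holds whole paths, and the FIFO ordering used here is just one possible strategy (the choice of ordering is exactly what distinguishes the strategies discussed later).

```python
from collections import deque

# Sketch of generic TREE-SEARCH (assumed minimal form). The frontier
# holds whole paths; swapping the frontier's ordering yields different
# search strategies.
def tree_search(initial, goal_test, successors):
    frontier = deque([[initial]])          # frontier of candidate paths
    while frontier:
        path = frontier.popleft()          # choose a path to expand
        s = path[-1]
        if goal_test(s):
            return path                    # solution: the action/state sequence
        for s2 in successors(s):           # expand: generate all successors
            frontier.append(path + [s2])
    return None                            # no more states to expand: failure
```

Note that, as stated above, this version considers all possible paths and does not detect redundant ones; on a graph with cycles it can loop forever.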
Tree search example
Infrastructure for search algorithms
Search algorithms require a data structure to keep track of
the search tree that is being constructed.
For each node n of the tree, we have a structure with
four components:
(1) n.STATE: the state in the state space to which the
node
corresponds.
(2) n.PARENT: the node in the search tree that generated
this node.
(3) n.ACTION: the action that was applied to the parent
to generate the node.
(4) n.PATH-COST: the cost, traditionally denoted g(n), of the
path from the initial state to the node, as indicated by the
parent pointers.
Implementation: states vs. nodes
A state is a (representation of) a physical configuration.
A node is a data structure constituting part of a search tree; it
includes the state, parent node, action, path cost g(n), and depth.
The Expand function creates new nodes, filling in the various
fields and using the Successor function of the problem to create
the corresponding states.
Infrastructure for search algorithms
Now that we have nodes (qua data structures), we need to put them
somewhere.
We use a queue, with operations:
EMPTY?(queue): returns true only if there are no elements in the queue.
POP(queue): removes the first element of the queue and returns it.
INSERT(element, queue): inserts an element and returns the resulting
queue.
Recall that queues are characterized by the order in which they store the
inserted nodes:
FIFO (first-in, first-out): pops the oldest element of the queue.
LIFO (last-in, first-out, i.e. a stack) pops the newest element.
PRIORITY QUEUE: pops element with highest “priority.”
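The three queue disciplines map directly onto Python's standard containers, as this small illustrative sketch shows (the variable names are assumptions):

```python
from collections import deque
import heapq

# Sketch of the three queue disciplines behind EMPTY?/POP/INSERT,
# using Python's standard containers.
fifo = deque()                         # FIFO queue: POP removes the oldest
fifo.append('first')
fifo.append('second')

lifo = []                              # LIFO queue (a stack): POP removes the newest
lifo.append('first')
lifo.append('second')

pq = []                                # priority queue: POP removes the
heapq.heappush(pq, (5, 'later'))       # element with the best priority
heapq.heappush(pq, (2, 'sooner'))      # (lowest key first with heapq)
```

Each uninformed strategy that follows is just one of these disciplines applied to the frontier: FIFO for breadth-first, LIFO for depth-first, and a priority queue ordered by g(n) for uniform-cost search.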
Search strategies
A search strategy is defined by picking the order of node
expansion
Strategies are evaluated along the following dimensions:
completeness: does it always find a solution when one exists?
time complexity: number of nodes generated
space complexity: maximum number of nodes in memory
optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of
b: maximum branching factor of the search tree
d: depth of the least-cost solution
m: maximum depth of the state space (may be ∞)
The size of the state space graph ( | V | + | E | )
Uninformed search strategies
Uninformed search strategies use only the information available
in the problem definition.
Breadth-first search
Uniform-cost search
Depth-first search
Depth-limited search
Iterative deepening search
Breadth-first search
BFS is a simple strategy in which the root node is expanded
first, then all the successors of the root node are expanded
next, then their successors, etc.
BFS is an instance of the general graph-search algorithm in
which the shallowest unexpanded node is chosen for expansion.
This is achieved by using a FIFO queue for the frontier.
Accordingly, new nodes go to the back of the queue, and old
nodes, which are shallower than the new nodes are expanded
first.
NB: The goal test is applied to each node when it is generated.
Breadth-first search
Expand shallowest unexpanded node
Implementation:
frontier is a FIFO queue, i.e., new successors go at
end
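The slide above can be sketched in Python. This is an assumed minimal version: it uses the graph-search refinement (an `explored` set to avoid repeated states), applies the goal test at generation time as noted above, and takes a hypothetical `successors` helper mapping a state to its neighbors.

```python
from collections import deque

# Sketch of BFS with a FIFO frontier (assumed minimal graph-search form).
# The goal test is applied when a node is generated, not when expanded.
def breadth_first_search(initial, goal_test, successors):
    if goal_test(initial):
        return [initial]
    frontier = deque([[initial]])        # FIFO: new successors go at the end
    explored = {initial}                 # avoid revisiting states
    while frontier:
        path = frontier.popleft()        # shallowest path comes out first
        for s2 in successors(path[-1]):
            if s2 not in explored:
                if goal_test(s2):        # test on generation
                    return path + [s2]
                explored.add(s2)
                frontier.append(path + [s2])
    return None
```

Testing at generation time saves expanding one whole level: a goal at depth d is returned before the O(b^(d+1)) nodes of the next level are generated.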
Properties of BFS
Complete? Yes (if b is finite)
Time? 1 + b + b^2 + b^3 + … + b^d = O(b^d)
Space? O(b^d) (keeps every node in memory)
Optimal? Yes (if cost = 1 per step)
Space is the bigger problem (more than
time)
Uniform-cost search
Expand least-cost unexpanded node
Implementation:
frontier = queue ordered by path cost
Equivalent to breadth-first if step costs all equal
Complete? Yes, if step cost ≥ ε
Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉),
where C* is the cost of the optimal solution
Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
Optimal? Yes – nodes expanded in increasing order of g(n)
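A uniform-cost frontier is naturally a priority queue ordered by g(n). The sketch below is an assumed minimal version: `successors` is a hypothetical helper yielding (next_state, step_cost) pairs, and the goal test is applied on expansion (not generation), which is what makes the returned cost optimal.

```python
import heapq

# Sketch of uniform-cost search (assumed minimal form): the frontier is
# a priority queue ordered by path cost g(n).
def uniform_cost_search(initial, goal_test, successors):
    frontier = [(0, initial, [initial])]      # (g, state, path)
    best_g = {initial: 0}                     # cheapest known cost per state
    while frontier:
        g, s, path = heapq.heappop(frontier)  # least-cost node first
        if goal_test(s):                      # test on EXPANSION, so g is optimal
            return g, path
        for s2, c in successors(s):
            g2 = g + c
            if g2 < best_g.get(s2, float('inf')):
                best_g[s2] = g2               # found a cheaper path to s2
                heapq.heappush(frontier, (g2, s2, path + [s2]))
    return None
```

With all step costs equal, the g-ordering coincides with depth ordering and this behaves like breadth-first search, as noted above.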
Depth-first search
DFS always expands the deepest node in the current frontier of the
search tree. The search proceeds immediately to the deepest level
of the search tree, where the nodes have no successors.
As those nodes are expanded, they are dropped from the frontier,
so then the search “backs up” to the next deepest node that still has
unexplored successors.
DFS is an instance of the general graph-search algorithm which
uses a LIFO queue. This means that the most recently generated
node is chosen for expansion.
Depth-first search
Expand deepest unexpanded node
Implementation:
frontier = LIFO queue, i.e., put successors at
front
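The LIFO frontier makes DFS a one-line change from the FIFO version: pop from the same end you push to. This sketch is an assumed minimal form; the `limit` parameter is an added depth bound (a nod to depth-limited search) guarding against the infinite paths discussed below.

```python
# Sketch of depth-first tree search with an explicit LIFO stack
# (assumed minimal form; the depth bound `limit` is an added safeguard).
def depth_first_search(initial, goal_test, successors, limit=50):
    frontier = [[initial]]                 # a plain Python list works as a stack
    while frontier:
        path = frontier.pop()              # LIFO: deepest (most recent) path
        s = path[-1]
        if goal_test(s):
            return path
        if len(path) <= limit:             # stop extending overly deep paths
            for s2 in successors(s):
                frontier.append(path + [s2])
    return None                            # exhausted (or everything cut off)
```

Because only the current path and its siblings' unexpanded branches live on the stack, memory stays linear in the depth, which is the key advantage recorded in the properties below.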
Properties of depth-first search
Complete? No: fails in infinite-depth spaces and in spaces with loops.
Modified to avoid repeated states along a path → complete in finite spaces.
Time? O(b^m): terrible if m is much larger than d,
but if solutions are dense, may be much faster than breadth-first
Space? O(bm), i.e., linear space!
Optimal? No