
Chapter Three

Solving Problems by Searching and Constraint Satisfaction

Introduction
• Search is a central topic in AI.
• An important aspect of intelligence is goal-based problem solving.
• The field originated with Newell and Simon's work on problem solving: Human Problem Solving (1972).
• Automated reasoning is a natural search task.
Today’s class
• Problem-solving agents
• Problem types
• Problem formulation
• Example problems
• Basic search algorithms (next class)
Problem Solving by Searching
• A problem is a goal and a means for achieving the goal.
• The process of exploring what the means can do is search. Search is the process of considering various possible sequences of operators applied to the initial state, and finding a sequence that culminates in a goal state.
• The goal specifies the state of affairs we want to bring about, and the means specifies the operations we can perform in an attempt to bring about the goal. The solution will be a sequence of operations (actions) leading from the initial state to the goal state (a plan).
Example
• The states in the space can correspond to any kind of configuration: for example, the possible settings of a device, positions in a game, or (more abstractly) a set of assignments to variables.
• Paths in the state space correspond to possible sequences of transitions between states.
States
• A problem is defined by its elements and their relations.
• At each instant in the resolution of a problem, those elements have specific descriptors (how do we select them?) and relations.
• A state is a representation of those elements in a given
moment.
• Two special states are defined:
– Initial state (starting point)
– Final state (goal state)
• The solution of many problems can be described by finding a sequence of actions/steps that lead from an initial state to a desired state (either a specific state or any state satisfying given conditions) – the goal state.
• E.g. we want to find a sequence of moving, picking up and
placing actions that a robot can execute to lay a table.
• Search for a goal state or configuration satisfying known
conditions.
• E.g. in scheduling, we search for a timetable satisfying given constraints on when events can occur.
• Find the best way to reach a solution: search for an optimal sequence of steps (actions) that lead from an initial state to a goal state. An optimal sequence is one that has the lowest 'cost' (i.e. takes the least time or resources).
Examples of search problems
• Puzzles
• Route finding, motion control
• Activity planning, game AI
• Scheduling
• Mathematical theorem proving
• Design of computer chips, drugs, buildings, etc.

• A well-defined problem can be described by:
a) State space – the set S of all states reachable from the initial state by any sequence of actions
b) Initial state: s0 ∈ S
c) Operators (rules) or successor function: A: Si → Sj
d) Path – a sequence p through the state space
e) Path cost – a function that assigns a cost to a path
f) Goal test – a test G to determine whether a state is a goal state
• Search
• Search is a systematic examination of states to find paths from the start state to the goal state.
• The set of possible states, together with the operators defining their connectivity, constitutes the search (problem) space. (A minimal code sketch of these elements follows below.)
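To make these elements concrete, here is a minimal Python sketch of a problem skeleton. The class and method names are illustrative assumptions, not taken from any particular library:

```python
# A minimal sketch of the "well-defined problem" elements above.
class Problem:
    """States S, initial state s0, successor function A, path cost, goal test G."""

    def __init__(self, initial_state):
        self.initial_state = initial_state      # s0, a member of S

    def successors(self, state):
        """Yield (action, next_state) pairs: A maps Si to Sj."""
        raise NotImplementedError

    def goal_test(self, state):
        """G: is this state a goal state?"""
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        """Cost of one transition; a path p's cost is the sum along the path."""
        return 1
```

The water-jug and 8-puzzle examples later in this chapter are instances of exactly this skeleton.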

Problem-solving agents
• Problem solving agents are goal-directed agents:
1. Goal Formulation: Set of one or more (desirable)
world states (e.g. checkmate in chess).
2. Problem formulation: What actions and states to
consider given a goal and an initial state.
3. Search for solution: Given the problem, search for a
solution --- a sequence of actions to achieve the goal
starting from the initial state.
4. Execution of the solution

Note: this formulation feels somewhat "contrived," but it is meant to model a very general (human) problem-solving process.
• Problem-solving agent: an agent that tries to come up with a sequence of actions that will bring the environment into a desired state.
• What is the strategy to get a computer to solve a problem for us?
• Define the problem precisely
• Find a way to represent the problem
• Isolate and represent the tasks necessary to solve the problem
• Choose or create an algorithm to solve the problem and apply it to the problem

In real life, search usually results from a lack of knowledge.
• Example: a 3×3 maze
• We start out at (0,0) – the "southwest" corner of the maze.
• The location of the goal is unknown.
• Check for a wall – the way forward is blocked.
• So we turn right.
• Check for a wall – no wall in front of us.
• So we go forward; the red arrow indicates that (0,0) is (1,0)'s predecessor.
• We sense a wall.
• Turn right.
• We sense a wall here too, so we're going to have to look north.
• Turn left…
• Turn left again; now we're facing north.
• The way forward is clear…
• …so we go forward.
  – "When you come to a fork in the road, take it." – Yogi Berra, on depth-first search
• We sense a wall – we can't go forward…
• …so we'll turn right.
• This way is clear…
• …so we go forward.
• Blocked.
• How about this way?
• Clear!
• Whoops, a wall here.
• We already know that the way to the right is blocked, so we try turning left instead.
• A wall here too!
• Now there are no unexplored neighboring squares that we can get to.
• So, we backtrack! (Retrace the red arrow.)
• We turn to face the red arrow…
• …and go forward.
• Now we've backtracked to a square that might have an unexplored neighbor. Let's check!
• Ah-ha!
• Onward!
• Drat!
• There's got to be a way out of here…
• Not this way!
• Two 90-degree turns to face west…
• No wall here!
• So we move forward and…
• What luck! Here's the goal.
• Final step: execute victory dance.
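The walkthrough above is depth-first search with backtracking: try an unexplored neighbor, and when every direction is blocked or already visited, retrace the red arrows. Here is a hedged Python sketch of the same idea in its standard stack-based form; `neighbors(square)` and `is_goal(square)` are hypothetical helpers that encode the maze, not something defined in the slides:

```python
# A sketch of the depth-first exploration narrated above. The maze
# encoding is assumed: neighbors(square) yields the squares reachable
# from `square` (i.e., not separated by a wall); is_goal tests for the
# goal square.

def dfs_maze(start, is_goal, neighbors):
    predecessor = {start: None}         # the "red arrows"
    stack = [start]
    while stack:
        square = stack.pop()
        if is_goal(square):
            path = []                   # retrace the arrows back to the start
            while square is not None:
                path.append(square)
                square = predecessor[square]
            return list(reversed(path))
        for nxt in neighbors(square):
            if nxt not in predecessor:  # only visit unexplored squares
                predecessor[nxt] = square
                stack.append(nxt)
    return None                         # the goal is unreachable
```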


Problem types (in increasing order of complexity)
1) Deterministic, fully observable: single-state problem
• The agent knows exactly which state it will be in; the solution is a sequence of actions.
2) Non-observable: sensorless problem
• The agent may have no idea where it is (no sensors); it reasons in terms of belief states; the solution is a sequence of actions (the effects of actions are certain).
3) Nondeterministic and/or partially observable: contingency problem
• Actions are uncertain; percepts provide new information about the current state.
• The solution is a "strategy" to reach the goal.
4) Unknown state space and uncertain action effects: exploration problem
• The solution is a "strategy" to reach the goal (and explore the environment).
Example A:
You are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring marks on it. There is a tap that can be used to fill the jugs with water. How can you get exactly 2 gallons of water into the 4-gallon jug?
Specify the initial state, the goal state, and all the possible operators for reaching the goal state from the start state.
Solution:

Steps to solve the problem
• There are many possible ways to formulate the
problem as search.
• 1st step:
• State description: a pair of integers (x, y), where x ∈ {0,1,2,3,4} is the number of gallons in the 4-gallon jug and y ∈ {0,1,2,3} is the number in the 3-gallon jug.
• 2nd step:
• Describe the initial and goal states.
• The initial state is (0,0).
• The goal state is (2,y), where y can take any value.
• 3rd step:
• List all the actions as production rules (operators), as follows.
1. Fill the 4-gallon jug: (x,y) → (4,y), if x < 4
2. Fill the 3-gallon jug: (x,y) → (x,3), if y < 3
3. Empty the 4-gallon jug: (x,y) → (0,y), if x > 0
4. Empty the 3-gallon jug: (x,y) → (x,0), if y > 0
5. Empty the 4-gallon jug into the 3-gallon one: (x,y) → (0, x+y), if x+y ≤ 3 and x > 0
6. Empty the 3-gallon jug into the 4-gallon one: (x,y) → (x+y, 0), if x+y ≤ 4 and y > 0
7. Fill the 4-gallon jug from the 3-gallon one until it becomes full: (x,y) → (4, x+y−4), if x+y > 4 and y > 0
8. Fill the 3-gallon jug from the 4-gallon one until it becomes full: (x,y) → (x+y−3, 3), if x+y > 3 and x > 0
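As an illustration, here is a minimal Python sketch of these eight rules as a successor function, together with a breadth-first search (covered next class) that finds one shortest plan:

```python
# A minimal sketch: the eight production rules above, plus BFS.
from collections import deque

def successors(state):
    x, y = state                              # x: 4-gallon jug, y: 3-gallon jug
    if x < 4: yield 1, (4, y)                 # rule 1: fill the 4-gallon jug
    if y < 3: yield 2, (x, 3)                 # rule 2: fill the 3-gallon jug
    if x > 0: yield 3, (0, y)                 # rule 3: empty the 4-gallon jug
    if y > 0: yield 4, (x, 0)                 # rule 4: empty the 3-gallon jug
    if x + y <= 3 and x > 0: yield 5, (0, x + y)     # rule 5: pour 4 into 3
    if x + y <= 4 and y > 0: yield 6, (x + y, 0)     # rule 6: pour 3 into 4
    if x + y > 4 and y > 0: yield 7, (4, x + y - 4)  # rule 7: fill 4 from 3
    if x + y > 3 and x > 0: yield 8, (x + y - 3, 3)  # rule 8: fill 3 from 4

def bfs(start=(0, 0)):
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state[0] == 2:             # goal: 2 gallons in the 4-gallon jug
            return path
        for rule, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

print(bfs())  # a shortest plan: (0,0) (4,0) (1,3) (1,0) (0,1) (4,1) (2,3)
```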

Possible answers (rule numbers refer to the list above)
• Option 1: (0,0) → (0,3) [rule 2] → (3,0) [rule 6] → (3,3) [rule 2] → (4,2) [rule 7] → (0,2) [rule 3] → (2,0) [rule 6]
• Option 2: (0,0) → (4,0) [rule 1] → (1,3) [rule 8] → (1,0) [rule 4] → (0,1) [rule 5] → (4,1) [rule 1] → (2,3) [rule 8]
Search tree

S.No.  4-gallon jug contents  3-gallon jug contents  Rule followed
1.     0 gallons              0 gallons              Initial state
2.     0 gallons              3 gallons              Rule 2
3.     3 gallons              0 gallons              Rule 6
4.     3 gallons              3 gallons              Rule 2
5.     4 gallons              2 gallons              Rule 7
6.     0 gallons              2 gallons              Rule 3
7.     2 gallons              0 gallons              Rule 6
Example B: Robotic assembly
• States? Real-valued coordinates of the robot's joint angles and of the parts of the object to be assembled.
• Actions? Continuous motions of the robot's joints.
• Goal test? Complete assembly.
• Path cost? Time to execute.
Example C: The 8-puzzle

S: start state        G: goal state
   2 8 3                 1 2 3
   1 6 4                 8 _ 4
   7 _ 5                 7 6 5

State: the board, i.e., the location of the blank and the integer locations of the tiles.
Initial state: any state can be the initial state.
Operators/Actions: the blank moves left, right, up, or down.
Goal state: match G.
Path cost: each step costs 1, so the path cost is the length of the path to the goal.
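A minimal Python sketch of this formulation's operators, assuming a state is represented as a 9-tuple read row by row with 0 standing for the blank:

```python
# The 8-puzzle successor function: the blank swaps with an adjacent tile.
def moves(state):
    i = state.index(0)                     # position of the blank
    row, col = divmod(i, 3)
    shifts = {'up': -3, 'down': 3, 'left': -1, 'right': 1}
    for name, d in shifts.items():
        if name == 'up' and row == 0:      continue
        if name == 'down' and row == 2:    continue
        if name == 'left' and col == 0:    continue
        if name == 'right' and col == 2:   continue
        j = i + d
        s = list(state)
        s[i], s[j] = s[j], s[i]            # slide the tile into the blank
        yield name, tuple(s)

start = (2, 8, 3, 1, 6, 4, 7, 0, 5)       # S above
goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)       # G above
```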
Searching for a solution to the 8-puzzle

Aside: in this tree, immediate duplicate states are removed.
The branching factor is 1, 2, or 3 (at most), so the number of nodes roughly doubles at each level; the number of explored nodes grows exponentially with depth.

A breadth-first search tree. (More detail soon.)
• Gedanken experiment: assume that you knew, for each state, the minimum number of moves to the final goal state. (The table would be too big, but assume there is some formula/algorithm based on the board pattern that gives this number for each board, and quickly.)
• Using this minimum-distance information, is there a clever way to find a minimum-length sequence of moves leading from the start state to the goal state? How?
[Figure: let d = the minimum distance to the goal. The start state has d = 5, so at least one of its neighbors must have d = 4 (the others only satisfy d ≥ 4): select it. Repeat, selecting a neighbor with d = 3, then d = 2, then d = 1, and finally d = 0 – the goal.]
A breadth-first search tree. (More detail soon.)
• The branching factor is approx. 2, so with the "distance oracle" we only need to explore approx. 2 × (minimum solution length) nodes.
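A hedged sketch of the oracle idea, where `d(state)` stands for the hypothetical minimum-distance formula (assumed, not given in the slides) and `moves` is an 8-puzzle successor function like the one sketched earlier:

```python
# Walk greedily downhill on d: with a true-distance oracle, some
# neighbor is always exactly one move closer to the goal.
def solve_with_oracle(state, d, moves):
    path = [state]
    while d(state) > 0:
        # pick the neighbor with the smallest oracle value
        state = min((s for _, s in moves(state)), key=d)
        path.append(state)
    return path     # length is exactly d(start) + 1
```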
Basic idea: a state evaluation function can effectively guide search, from the start state down to the goal.
• This also holds in multi-agent settings (chess: board evaluation).
• Reinforcement learning: learn the state evaluation function.

• A perfect "heuristic" eliminates search.
• An approximate heuristic significantly reduces search.
• The best (provably) use of heuristic search information: A* search (soon).
State evaluation functions or "heuristics"
• Provide guidance in terms of what action to take next.
• General principle: consider all neighboring states reachable via some action, then select the action that leads to the state with the highest utility (evaluation value). This is a fully greedy approach.
• Because the evaluation function is often only an estimate of the true state value, greedy search may not find the optimal path to the goal.
• By adding some search, with certain guarantees on the approximation, we can still get optimal behavior, i.e. find the optimal path to the solution (A* search). Overall result: generally exponentially less search is required. (A sketch follows below.)
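To make the contrast concrete, here is a minimal generic best-first search sketch (unit step costs assumed; `successors` yields (action, state) pairs as in the earlier sketches). Greedy search and A* differ only in the evaluation function `f`:

```python
import heapq
from itertools import count

def best_first(start, goal_test, successors, f):
    """Generic best-first search with unit step costs.
    f(g, state) orders the frontier, where g is the path cost so far."""
    tie = count()                      # tie-breaker so states are never compared
    frontier = [(f(0, start), next(tie), 0, start, [start])]
    seen = set()
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        if state in seen:
            continue
        seen.add(state)
        for _, nxt in successors(state):
            if nxt not in seen:
                heapq.heappush(
                    frontier,
                    (f(g + 1, nxt), next(tie), g + 1, nxt, path + [nxt]))

# Fully greedy:  f = lambda g, s: h(s)      -- may miss the optimal path
# A*:            f = lambda g, s: g + h(s)  -- optimal for a consistent,
#                                              never-overestimating h
```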
Example D: The wolf-goat-cabbage problem

A farmer has a goat, a wolf, and a cabbage on the west side of a river. He wants to get all of his animals and his cabbage across the river onto the east side. The farmer has a boat, but it only has enough room for himself and one other thing.
Case 1: The goat will eat the cabbage if they are left together alone.
Case 2: The wolf will eat the goat if they are left alone.

How can the farmer get everything to the other side?

• Possible solution:
• State-space representation:
• We can represent the states of the problem with two sets, W and E (the west and east banks). We can also represent the elements of the two sets as f, g, w, c: the farmer, goat, wolf, and cabbage.
• Operators:
• Move f from E to W, and vice versa.
• Move f and one of g, c, w from E to W, and vice versa.
• Start state:
• W = {f,g,c,w}, E = {}
• Goal state:
• W = {}, E = {f,g,c,w}

• One possible solution (a search sketch that finds such a plan mechanically follows below):
• Farmer takes the goat across the river: W = {w,c}, E = {f,g}
• Farmer comes back alone: W = {f,w,c}, E = {g}
• Farmer takes the wolf across the river: W = {c}, E = {f,g,w}
• Farmer comes back with the goat: W = {f,g,c}, E = {w}
• Farmer takes the cabbage across the river: W = {g}, E = {f,w,c}
• Farmer comes back alone: W = {f,g}, E = {w,c}
• Farmer takes the goat across the river: W = {}, E = {f,g,w,c}
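Here is that mechanical search as a hedged Python sketch of this formulation. A state is (the set of items on the west bank, the farmer's side); the key detail is the safety test for the bank the farmer leaves behind:

```python
# BFS over the wolf-goat-cabbage state space.
from collections import deque

ITEMS = {'g', 'w', 'c'}                  # goat, wolf, cabbage

def safe(bank):                          # a bank left without the farmer
    return not ({'g', 'c'} <= bank or {'w', 'g'} <= bank)

def plan(start=(frozenset(ITEMS), 'W')):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        (west, farmer), path = frontier.popleft()
        if not west and farmer == 'E':   # everything (and f) on the east bank
            return path
        here = west if farmer == 'W' else ITEMS - west
        for cargo in [None] + sorted(here):      # cross alone, or with one item
            new_west = set(west)
            if cargo and farmer == 'W': new_west.discard(cargo)
            if cargo and farmer == 'E': new_west.add(cargo)
            new_farmer = 'E' if farmer == 'W' else 'W'
            left_behind = new_west if new_farmer == 'E' else ITEMS - new_west
            state = (frozenset(new_west), new_farmer)
            if safe(left_behind) and state not in seen:
                seen.add(state)
                frontier.append((state, path + [cargo]))

print(plan())   # a 7-step plan, e.g. ['g', None, 'c', 'g', 'w', None, 'g']
```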

Example E: Vacuum cleaner world

[Figure: the eight possible states of the two-room vacuum world, numbered 1–8 (agent in the left or right room; each room clean or dirty).]
Formulating a problem
• There are four essentially different types of problems:
• Single-state problems
• Multiple-state problems
• Contingency problems
• Exploration problems
Vacuum cleaner world
• Single-state
• Start in #5. Solution?
• If the initial state is 5, then the agent can calculate the result of its actions: move right, then suck.
• Multiple-state
• Start in {1,2,3,4,5,6,7,8}.
• The agent can discover that the sequence [right, suck, left, suck] is guaranteed to reach a goal state no matter what the initial state is. (See the belief-state sketch below.)
• Contingency
• The agent can solve the problem if it can perform sensing actions during execution. For instance, suppose the suck action sometimes deposits dirt when there is none.
• Exploration
• Unknown state space.
• The agent has no information about the effects of its actions.
• The agent must experiment, gradually discovering what its actions do and what sorts of states exist.
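A minimal sketch of the multiple-state reasoning: track the belief state (the set of world states the agent might be in) and apply [right, suck, left, suck] to every member. Modeling a world state as (location, dirt in room A, dirt in room B) is an assumption consistent with the eight states pictured above:

```python
# Sensorless (multiple-state) reasoning in the two-room vacuum world.
from itertools import product

def step(state, action):
    loc, dirt_a, dirt_b = state
    if action == 'right': return ('B', dirt_a, dirt_b)
    if action == 'left':  return ('A', dirt_a, dirt_b)
    if action == 'suck':  # cleans the room the agent is in
        return (loc, False if loc == 'A' else dirt_a,
                     False if loc == 'B' else dirt_b)

belief = {(loc, da, db)                      # all 8 possible world states
          for loc, da, db in product('AB', [True, False], [True, False])}

for action in ['right', 'suck', 'left', 'suck']:
    belief = {step(s, action) for s in belief}

print(belief)   # collapses to {('A', False, False)}: both rooms clean
```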
• Single-state problem
• All world states are known
• The current state is known
• All results of actions are known
• Deterministic; the entire solution can be computed before execution
• Multiple-state problem: suppose that the robot has no sensor that can tell it which room it is in, and it doesn't know where it is initially. Then it must consider sets of possible states.
• All world states are known
• Only a set of possible current states is known
• All results of actions are reliable
• Deterministic
• Contingency problems
• All world states are known
• Only a set of possible current states is known
• Only some results of actions are reliable
• Non-deterministic: sensors must be used during execution
• Suppose that the "vacuum" action sometimes actually deposits dirt on the carpet – but only if the carpet is already clean!
• Now [right, vacuum, left, vacuum] is NOT a correct plan, because one room might be clean originally but then become dirty. [right, vacuum, vacuum, left, vacuum, vacuum] doesn't work either, and so on.
• There doesn't exist any FIXED plan that always works. An agent for this environment MUST have a sensor, and it must combine decision-making, sensing, and execution. This is called interleaving.

• Exploration problems
• The world states are unknown
• The current state is unknown
• The results of actions are unknown
• Non-deterministic
• So far we have assumed that the robot is ignorant of which rooms are dirty today, but that it knows how many rooms there are and what the effect of each available action is.
• Suppose the robot is completely ignorant. Then it must take actions for the purpose of acquiring knowledge about their effects, NOT just for their contribution towards achieving a goal.
• This is called "exploration," and the agent must learn about its environment.

To be continued …
