M2 Session 1

The document outlines the course BCS602 on Artificial Intelligence and Machine Learning, focusing on problem-solving agents and their methodologies. It discusses the formulation of problems, search strategies, and examples of toy and real-world problems, emphasizing the importance of goal-based agents in achieving optimal solutions. Key concepts include the structure of problem-solving, the formulation of goals, and various search strategies such as breadth-first search.

Uploaded by

Sonia N.S
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
2 views35 pages

M2 Session 1

The document outlines the course BCS602 on Artificial Intelligence and Machine Learning, focusing on problem-solving agents and their methodologies. It discusses the formulation of problems, search strategies, and examples of toy and real-world problems, emphasizing the importance of goal-based agents in achieving optimal solutions. Key concepts include the structure of problem-solving, the formulation of goals, and various search strategies such as breadth-first search.

Uploaded by

Sonia N.S
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
You are on page 1/ 35

Artificial Intelligence & Machine Learning – BCS602

Course Coordinator:
Dr. Neethi M V
Assistant Professor
Dept. of CSE-DS
ATMECE, Mysuru

Department of Computer Science and Engineering - DS


Module 2 – Problem‐solving

Problem solving by searching: Problem solving agents, Example problems, Searching


for solutions, Uninformed search strategies, Informed search strategies, Heuristic
functions
Textbook 1: Chapter: 3



Module 2 – Problem‐solving

Session 1
• This chapter describes one kind of goal-based agent called a problem-
solving agent.
• Goal-based agents that use more advanced factored or structured
representations are usually called planning agents.



PROBLEM-SOLVING AGENTS
• Intelligent agents are supposed to maximize their performance measure.
• Achieving this is sometimes simplified if the agent can adopt a goal and aim at satisfying it.
• Goals help organize behaviour by limiting the objectives that the agent is trying to achieve and
hence the actions it needs to consider.
• Goal formulation, based on the current situation and the agent’s performance measure, is the
first step in problem solving.
• Problem formulation is the process of deciding what actions and states to consider, given a goal.
• The process of looking for a sequence of actions that reaches the goal is called search. A search
algorithm takes a problem as input and returns a solution in the form of an action sequence. Once
a solution is found, the actions it recommends can be carried out. This is called the execution
phase.



PROBLEM-SOLVING AGENTS
• Thus, we have a simple “formulate, search,
execute” design for the agent, as shown in
Figure 1.
• After formulating a goal and a problem to
solve, the agent calls a search procedure to
solve it.
• It then uses the solution to guide its actions,
doing whatever the solution recommends
as the next thing to do—typically, the first
action of the sequence—and then removing
that step from the sequence.
• Once the solution has been executed, the agent will formulate a new goal.

Figure 1 A simple problem-solving agent. It first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time. When this is complete, it formulates another goal and starts over.
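The formulate, search, execute loop can be sketched in Python. This is a minimal illustration, not code from the textbook; `formulate_problem` and `search` are caller-supplied placeholder names, and `search` is assumed to return a sequence of actions.

```python
from collections import deque

class SimpleProblemSolvingAgent:
    """Sketch of the 'formulate, search, execute' loop of Figure 1.

    `formulate_problem` and `search` are hypothetical caller-supplied
    callables; `search` is assumed to return a list of actions.
    """

    def __init__(self, formulate_problem, search):
        self.formulate_problem = formulate_problem
        self.search = search
        self.seq = deque()  # remaining actions of the current solution

    def __call__(self, percept):
        if not self.seq:  # no plan left: formulate a new problem and search
            problem = self.formulate_problem(percept)
            self.seq = deque(self.search(problem))
        if not self.seq:  # search failed
            return None
        # Execute: do the first recommended action, then drop it from the plan.
        return self.seq.popleft()
```

An agent whose search returns the plan `["Go(Sibiu)", "Go(Fagaras)", "Go(Bucharest)"]` would emit those actions one per call, exactly as the text describes.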
PROBLEM SOLVING  PROBLEM-SOLVING AGENTS  Well-defined Problems and Solutions
• A problem can be defined formally by five components:
• The initial state that the agent starts in. For example, the initial state for our agent in Romania might
be described as In(Arad).
• A description of the possible actions available to the agent. Given a particular state s, ACTIONS(s)
returns the set of actions that can be executed in s. We say that each of these actions is applicable in
s. For example, from the state In(Arad), the applicable actions are {Go(Sibiu), Go(Timisoara),
Go(Zerind)}.
• A description of what each action does; the formal name for this is the transition model, specified by
a function RESULT(s, a) that returns the state that results from doing action a in state s. We also use
the term successor to refer to any state reachable from a given state by a single action.
• For example, we have RESULT(In(Arad), Go(Zerind)) = In(Zerind)
• Together, the initial state, actions, and transition model implicitly define the state
space of the problem—the set of all states reachable from the initial state by any
sequence of actions. The state space forms a directed network or graph in which
the nodes are states and the links between nodes are actions. (The map of
Romania shown in Figure 2 can be interpreted as a state-space graph if we view
each road as standing for two driving actions, one in each direction) A path in the
state space is a sequence of states connected by a sequence of actions.
PROBLEM SOLVING  PROBLEM-SOLVING AGENTS  Well-
defined Problems and Solutions
• The goal test, which determines whether a given
state is a goal state. Sometimes there is an explicit
set of possible goal states, and the test simply checks
whether the given state is one of them. The agent’s
goal in Romania is the singleton set {In(Bucharest)}.
• A path cost function that assigns a numeric cost to
each path. The problem-solving agent chooses a cost
function that reflects its own performance measure.
For the agent trying to get to Bucharest, time is of
the essence, so the cost of a path might be its length
in kilometers. The step cost of taking action a in state
s to reach state s` is denoted by c(s, a, s`). The step
costs for Romania are shown in Figure 2 as route
distances. We assume that step costs are Figure 2 A simplified road map of part of Romania.
nonnegative.
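The five components above can be written down directly as a small Python class. The sketch below covers only a fragment of the Romania map (the distances match Figure 2); the class and method names are our own, not from the textbook.

```python
# A tiny fragment of the road map of Figure 2 (distances in kilometers).
ROADS = {
    ("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118, ("Arad", "Zerind"): 75,
    ("Sibiu", "Fagaras"): 99, ("Fagaras", "Bucharest"): 211,
}
ROADS.update({(b, a): d for (a, b), d in list(ROADS.items())})  # roads are two-way

class RouteProblem:
    """The five components: initial state, ACTIONS, RESULT, goal test, step cost."""

    def __init__(self, initial, goal, roads):
        self.initial, self.goal, self.roads = initial, goal, roads

    def actions(self, s):            # ACTIONS(s): cities reachable in one drive
        return sorted(b for (a, b) in self.roads if a == s)

    def result(self, s, a):          # RESULT(s, a): the transition model
        return a                     # driving to city a puts the agent in a

    def goal_test(self, s):
        return s == self.goal

    def step_cost(self, s, a, s2):   # c(s, a, s'): route distance
        return self.roads[(s, s2)]
```

For example, from `"Arad"` the applicable actions are Sibiu, Timisoara, and Zerind, matching the ACTIONS(In(Arad)) example in the text.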



PROBLEM SOLVING  EXAMPLE PROBLEMS
• The problem-solving approach has been applied to a vast array of task
environments.
• We list some of the best known here, distinguishing between toy and real-world
problems.
• A toy problem is intended to illustrate or exercise various problem-solving
methods. It can be given a concise, exact description and hence is usable by
different researchers to compare the performance of algorithms.
• A real-world problem is one whose solutions people actually care about.



PROBLEM SOLVING  EXAMPLE PROBLEMS  Toy Problem

• The first example we examine is the vacuum world. (See Figure 3) This can be formulated as a problem as
follows:
• States: The state is determined by both the agent location and the dirt locations. The agent is in one of two
locations, each of which might or might not contain dirt. Thus, there are 2 × 2^2 = 8 possible world states. A
larger environment with n locations has n × 2^n states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger
environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving Left in the leftmost square,
moving Right in the rightmost square, and Sucking in a clean square have no effect. The complete state space
is shown in Figure 3.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
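The two-square vacuum world formulation above is small enough to write out in full. This is a sketch in our own notation: a state is a tuple (agent location, dirt in A, dirt in B), giving the 2 × 2 × 2 = 8 states.

```python
from itertools import product

LOCS = ("A", "B")

# A state is (agent_location, dirt_in_A, dirt_in_B): 2 * 2 * 2 = 8 states.
STATES = list(product(LOCS, (True, False), (True, False)))

def result(state, action):
    """Transition model: Left/Right move the agent (no effect at a wall);
    Suck cleans the current square (no effect if it is already clean)."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        return (loc,
                False if loc == "A" else dirt_a,
                False if loc == "B" else dirt_b)
    raise ValueError(action)

def goal_test(state):
    """Goal test: all squares are clean."""
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b
```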



PROBLEM SOLVING  EXAMPLE PROBLEMS  Toy Problem

Figure 3 The state space for the vacuum world. Links


denote actions: L = Left, R = Right, S = Suck.



PROBLEM SOLVING  EXAMPLE PROBLEMS  Toy Problem

•The 8-puzzle, an instance of which is shown in Figure 4, consists of a 3×3 board with eight numbered
tiles and a blank space. A tile adjacent to the blank space can slide into the space. The object is to reach a
specified goal state, such as the one shown on the right of the figure. The standard formulation is as
follows:
• States: A state description specifies the location of each of the eight tiles and the blank in one of the
nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given goal can be reached
from exactly half of the possible initial states.
• Actions: The simplest formulation defines the actions as movements of the blank space Left, Right, Up,
or Down.
• Transition model: Given a state and action, this returns the resulting state; for example, if we apply
Left to the start state, the resulting state has the 5 and the blank switched.
• Goal test: This checks whether the state matches the goal state.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
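The 8-puzzle actions and transition model above can be sketched with a flat tuple representation (our own encoding, not the textbook's): the nine squares are stored row-major, with 0 marking the blank.

```python
# Offsets of the square the blank slides into, for each action.
MOVES = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}

def actions(state):
    """Applicable blank moves for a 3x3 board stored as a 9-tuple."""
    i = state.index(0)              # position of the blank
    acts = []
    if i % 3 > 0:  acts.append("Left")    # not in the leftmost column
    if i % 3 < 2:  acts.append("Right")   # not in the rightmost column
    if i >= 3:     acts.append("Up")      # not in the top row
    if i <= 5:     acts.append("Down")    # not in the bottom row
    return acts

def result(state, action):
    """Transition model: swap the blank with the adjacent tile."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)
```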



PROBLEM SOLVING  EXAMPLE PROBLEMS  Toy Problem

Figure 4 A typical instance of the 8-puzzle.



PROBLEM SOLVING  EXAMPLE PROBLEMS  Toy Problem
•The goal of the 8-queens problem is to place eight queens on a chessboard such that no queen attacks any
other. (A queen attacks any piece in the same row, column or diagonal.) Figure 5 shows an attempted solution
that fails: the queen in the rightmost column is attacked by the queen at the top left.
•There are two main kinds of formulation:
 An incremental formulation involves operators that augment the state description, starting with an empty
state; for the 8-queens problem, this means that each action adds a queen to the state.
 A complete-state formulation starts with all 8 queens on the board and moves them around. In either
case, the path cost is of no interest because only the final state counts. The first incremental formulation
one might try is the following:
• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the specified square.
• Goal test: 8 queens are on the board, none attacked.
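The incremental formulation above can be sketched in Python. One common refinement (assumed here, not stated in the slide) is to place queens one per column, left to right, and to offer only unattacked squares as actions, which prunes the state space sharply.

```python
def attacks(q1, q2):
    """A queen attacks any piece in the same row, column, or diagonal."""
    (r1, c1), (r2, c2) = q1, q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def actions(state):
    """State: tuple of (row, col) queens already placed, one per column.
    The next queen goes in column len(state), on any unattacked row."""
    c = len(state)
    return [(r, c) for r in range(8)
            if not any(attacks((r, c), q) for q in state)]

def result(state, queen):
    """Transition model: the board with a queen added to the chosen square."""
    return state + (queen,)

def goal_test(state):
    """8 queens on the board; actions() already guarantees none attacked."""
    return len(state) == 8
```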



PROBLEM SOLVING  EXAMPLE PROBLEMS  Toy Problem

Figure 5 Almost a solution to the 8-queens


problem.



PROBLEM SOLVING  EXAMPLE PROBLEMS 
Toy Problem
• Our final toy problem was devised by Donald Knuth (1964) and illustrates how
infinite state spaces can arise.
• Knuth conjectured that, starting with the number 4, a sequence of factorial, square root,
and floor operations will reach any desired positive integer. For example,
• We can reach 5 from 4 as follows: ⌊√√√√√(4!)!⌋ = 5, i.e., five nested square roots of (4!)!, floored.

PROBLEM SOLVING  EXAMPLE PROBLEMS  Toy Problem
• The problem definition is very simple:
• States: Positive numbers.
• Initial state: 4.
• Actions: Apply factorial, square root, or floor operation (factorial for integers only).
• Transition model: As given by the mathematical definitions of the operations.
• Goal test: State is the desired positive integer.
• To our knowledge there is no bound on how large a number might be constructed in the
process of reaching a given target—for example, the number
620,448,401,733,239,439,360,000
• is generated in the expression for 5—so the state space for this problem is infinite. Such state
spaces arise frequently in tasks involving the generation of mathematical expressions,
circuits, proofs, programs, and other recursively defined objects.
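The claim that 620,448,401,733,239,439,360,000 arises in the expression for 5 can be checked directly: that number is (4!)! = 24!, and five nested square roots of it, floored, give 5. This is a quick verification sketch; the function name is our own.

```python
import math

def knuth_reach_five():
    """Reach 5 from 4: floor of five nested square roots of (4!)!."""
    n = math.factorial(math.factorial(4))   # (4!)! = 24!, the huge intermediate
    x = float(n)
    for _ in range(5):                      # five square roots
        x = math.sqrt(x)
    return n, math.floor(x)
```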
PROBLEM SOLVING  EXAMPLE PROBLEMS  Real-world Problems
• Route-finding problem is defined in terms of specified locations and transitions along links between them. Route-finding
algorithms are used in a variety of applications. Some, such as Web sites and in-car systems that provide driving
directions, are relatively straightforward extensions of the Romania example.
• Touring problems are closely related to route-finding problems, but with an important difference. As with route finding,
the actions correspond to trips between adjacent cities. The state space, however, is quite different. Each state must
include not just the current location but also the set of cities the agent has visited.
• The traveling salesperson problem (TSP) is a touring problem in which each city must be visited exactly once. The aim
is to find the shortest tour. The problem is known to be NP-hard, but an enormous amount of effort has been expended to
improve the capabilities of TSP algorithms.
• A VLSI layout problem requires positioning millions of components and connections on a chip to minimize area,
minimize circuit delays, minimize stray capacitances, and maximize manufacturing yield. The layout problem comes after
the logical design phase and is usually split into two parts: cell layout and channel routing. In cell layout, the primitive
components of the circuit are grouped into cells, each of which performs some recognized function. Channel routing finds
a specific route for each wire through the gaps between the cells. These search problems are extremely complex, but
definitely worth solving.
• Robot navigation is a generalization of the route-finding problem described earlier. Rather than following a discrete set
of routes, a robot can move in a continuous space with an infinite set of possible actions and states. When the robot has
arms and legs or wheels that must also be controlled, the search space becomes many-dimensional. Advanced techniques
are required just to make the search space finite.
SEARCHING FOR SOLUTIONS

• The possible action sequences starting at the initial state form a search tree with the initial state at the root; the branches are actions and the nodes correspond to states in the state space of the problem.
• Then we need to consider taking various actions. We do this by expanding the current state; that is, applying each legal action to the current state, thereby generating a new set of states.
• In this case, we add three branches from the parent node In(Arad) leading to three new child nodes: In(Sibiu), In(Timisoara), and In(Zerind).
PROBLEM SOLVING  SEARCHING FOR SOLUTIONS 
Measuring Problem-solving Performance
• We can evaluate an algorithm’s performance in four ways:
1. Completeness: Is the algorithm guaranteed to find a solution when there is one?
2. Optimality: Does the strategy find the optimal solution?
3. Time complexity: How long does it take to find a solution?
4. Space complexity: How much memory is needed to perform the search?
• In AI, the graph is often represented implicitly by the initial state, actions, and transition
model and is frequently infinite. For these reasons, complexity is expressed in terms of
three quantities:
1. b, the branching factor or maximum number of successors of any node;
2. d, the depth of the shallowest goal node (i.e., the number of steps along the path
from the root); and
3. m, the maximum length of any path in the state space.
PROBLEM SOLVING  UNINFORMED SEARCH STRATEGIES
• This section covers several search strategies that come under the heading of uninformed search (also called
blind search).
 The term means that the strategies have no additional information about states beyond that provided in the
problem definition. All they can do is generate successors and distinguish a goal state from a non-goal state.
• Breadth-first Search:
• Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the
root node are expanded next, then their successors, and so on.
• In general, all the nodes are expanded at a given depth in the search tree before any nodes at the next level are
expanded.
• Breadth-first search is an instance of the general graph-search algorithm in which the shallowest unexpanded node
is chosen for expansion.
• This is achieved very simply by using a FIFO queue for the frontier. Thus, new nodes (which are always deeper
than their parents) go to the back of the queue, and old nodes, which are shallower than the new nodes, get
expanded first.
• Pseudocode is given in Figure 11 and Figure 12 shows the progress of the search on a simple binary tree.
• The time complexity of BFS is O(b^d) and its space complexity is also O(b^d), where b is the branching factor and d is the depth of the shallowest goal node.
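The FIFO-frontier behaviour described above can be sketched as follows. This is our own compact sketch, not the pseudocode of Figure 11; `successors(s)` is assumed to yield (action, result) pairs, and, as in breadth-first search, the goal test is applied when a node is generated.

```python
from collections import deque

def breadth_first_search(initial, goal_test, successors):
    """Graph-search BFS: FIFO frontier, shallowest node expanded first.
    Returns a list of actions reaching a goal, or None on failure."""
    if goal_test(initial):
        return []
    frontier = deque([(initial, [])])   # FIFO queue of (state, actions so far)
    explored = {initial}
    while frontier:
        state, path = frontier.popleft()        # shallowest node first
        for action, child in successors(state):
            if child not in explored:
                if goal_test(child):            # test on generation
                    return path + [action]
                explored.add(child)
                frontier.append((child, path + [action]))
    return None
```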
 PROBLEM SOLVING  UNINFORMED SEARCH STRATEGIES  Breadth-first search

Figure 12 Breadth-first search on a simple binary tree. At each stage,


the node to be expanded next is indicated by a marker.
Figure 11 Breadth-first search on a
graph.



 PROBLEM SOLVING  UNINFORMED SEARCH STRATEGIES  Depth-first search
• Depth-first search:
• Depth-first search always expands the deepest node in the
current frontier of the search tree.
• The progress of the search is illustrated in Figure 13.
• The search proceeds immediately to the deepest level of the
search tree, where the nodes have no successors.
• As those nodes are expanded, they are dropped from the
frontier, so then the search “backs up” to the next deepest node
that still has unexplored successors.
• The depth-first search algorithm is an instance of the graph-
search algorithm; whereas breadth-first-search uses a FIFO
queue, depth-first search uses a LIFO queue.
• A LIFO queue means that the most recently generated node is
chosen for expansion.
• This must be the deepest unexpanded node because it is one
deeper than its parent.
• The time complexity of DFS is O(b^m) and the space complexity is O(bm), where b is the branching factor and m is the maximum depth.

Figure 13 Depth-first search on a binary tree. The unexplored region is shown in light gray. Explored nodes with no descendants in the frontier are removed from memory. Nodes at depth 3 have no successors and M is the only goal node.
PROBLEM SOLVING  UNINFORMED SEARCH STRATEGIES  Breadth-first
search
• An exponential complexity bound such as O(b^d) is scary.
• Figure 3.13 shows why. It lists, for various
values of the solution depth d, the time and
memory required for a breadth-first search
with branching factor b = 10.
• The table assumes that 1 million nodes can
be generated per second and that a node
requires 1000 bytes of storage.
• Many search problems fit roughly within
these assumptions (give or take a factor of
100) when run on a modern personal
computer.
Uniform-cost search or Dijkstra's algorithm

• Uniform-cost search expands the node n with the lowest path cost g(n). This is done by storing the frontier as a priority queue ordered by g.
• There are two other significant differences from breadth-first search.
• The first is that the goal test is applied to a node when it is selected for expansion.
• The second difference is that a test is added in case a better path is found to a node currently on the frontier.
• Uniform-cost search does not care about the number of steps a path has, but only about their total cost. Therefore, it will get stuck in an infinite loop if there is a path with an infinite sequence of zero-cost actions.
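A compact sketch of uniform-cost search with a priority queue, in our own notation: `successors(s)` is assumed to yield (action, result, cost) triples, and the goal test is applied when a node is selected for expansion, as described above. Rather than decreasing keys in place, this version pushes duplicates and skips stale entries, a common implementation shortcut.

```python
import heapq

def uniform_cost_search(initial, goal_test, successors):
    """Expand the node with the lowest path cost g(n).
    Returns (cost, actions) for an optimal path, or None on failure."""
    frontier = [(0, initial, [])]       # priority queue ordered by g
    best = {initial: 0}                 # cheapest known cost to each state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if g > best.get(state, float("inf")):
            continue                    # stale entry: a better path was found
        if goal_test(state):            # goal test on *expansion*, not generation
            return g, path
        for action, child, cost in successors(state):
            g2 = g + cost
            if g2 < best.get(child, float("inf")):
                best[child] = g2        # better path to a frontier node
                heapq.heappush(frontier, (g2, child, path + [action]))
    return None
```

On the Romania fragment below it prefers Sibiu, Rimnicu Vilcea, Pitesti, Bucharest (cost 278) over the fewer-step Fagaras route (cost 310), illustrating that only total cost matters.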
Depth-first search
• Depth-first search always expands the deepest node in the current frontier of the search tree.
• The search proceeds immediately to the deepest level of the search tree, where the nodes have no
successors.
• As those nodes are expanded, they are dropped from the frontier, so then the search “backs up” to
the next deepest node that still has unexplored successors
• depth-first search uses a LIFO queue.
• A LIFO queue means that the most recently generated node is chosen for expansion.
• The properties of depth-first search depend strongly on whether the graph-search or tree-search
version is used. The graph-search version, which avoids repeated states and redundant paths,
is complete in finite state spaces because it will eventually expand every node.
• Depth-first search will explore the entire left subtree even if node C is a goal node. If node J were also
a goal node, then depth-first search would return it as a solution instead of C, which would be a
better solution; hence, depth-first search is not optimal.
• A variant of depth-first search called backtracking search uses still less memory
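The LIFO-frontier behaviour described above can be sketched as follows (our own tree-search sketch; `successors(s)` yields (action, result) pairs, and no explored set is kept, so on graphs with cycles this version can loop, as the text warns).

```python
def depth_first_search(initial, goal_test, successors):
    """Tree-search DFS with a LIFO stack: the most recently generated
    node is expanded next, so the search dives deep before backing up.
    Returns a list of actions, or None on failure."""
    frontier = [(initial, [])]              # LIFO stack of (state, path)
    while frontier:
        state, path = frontier.pop()        # most recently generated node
        if goal_test(state):
            return path
        for action, child in successors(state):
            frontier.append((child, path + [action]))
    return None
```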
Depth-first search
Depth-limited search
• The embarrassing failure of depth-first
search in infinite state spaces can be
alleviated by supplying depth-first
search with a predetermined depth
limit l .
• That is, nodes at depth l are treated as if
they have no successors. This approach
is called depth-limited search
• Depth-limited search will also be nonoptimal if we choose l > d.
• Its time complexity is O(b^l) and its space complexity is O(bl).
• Depth-first search can be viewed as a special case of
depth-limited search with l = ∞.
Depth-limited search
• Sometimes, depth limits can be based on knowledge of the problem.
• For example, on the map of Romania there are 20 cities. Therefore, we know
that if there is a solution, it must be of length 19 at the longest, so l = 19 is a
possible choice.
• If we study the map carefully, we would discover that any city can be reached from any other
city in at most 9 steps. This number, known as the diameter of the state space,
gives us a better depth limit, which leads to a more efficient depth-
limited search.
• Depth-limited search can be implemented as a simple modification to the
general tree- or graph-search algorithm.
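A recursive sketch of depth-limited search, in our own notation: nodes at depth 0 remaining are treated as if they had no successors, and the distinguished return value `"cutoff"` (rather than plain failure) signals that the limit, not the state space, ended the search.

```python
def depth_limited_search(state, goal_test, successors, limit):
    """Depth-first search with depth limit `limit`.
    Returns a list of actions, None (failure), or "cutoff" (limit hit)."""
    if goal_test(state):
        return []
    if limit == 0:
        return "cutoff"                 # treat this node as having no successors
    cutoff = False
    for action, child in successors(state):
        res = depth_limited_search(child, goal_test, successors, limit - 1)
        if res == "cutoff":
            cutoff = True
        elif res is not None:
            return [action] + res
    return "cutoff" if cutoff else None
```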
Iterative deepening depth-first search
• Iterative deepening search (or iterative deepening depth-first search) is a general strategy,
often used in combination with depth-first tree search, that finds the best depth limit.
• Iterative deepening combines the benefits of depth-first and breadth-first search. Like depth-
first search, its memory requirements are modest: O(bd), to be precise. Like breadth-first search,
it is complete when the branching factor is finite and optimal when the path cost is a
nondecreasing function of the depth of the node.
• The reason is that in a search tree with the same (or nearly the same) branching factor at each
level, most of the nodes are in the bottom level, so it does not matter much that the upper levels
are generated multiple times. This gives a time complexity of O(b^d), asymptotically the same
as breadth-first search.
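The strategy above, repeated depth-limited searches with limit 0, 1, 2, and so on, can be sketched compactly (our own sketch; the inner depth-limited search is inlined, and, as the comment notes, the loop assumes a solution exists, since otherwise it never terminates on an infinite state space).

```python
def iterative_deepening_search(initial, goal_test, successors):
    """Run depth-limited searches with limit 0, 1, 2, ... until one
    succeeds; the first limit that succeeds is the shallowest goal depth.
    Assumes a solution exists; otherwise this loops forever."""
    def dls(state, limit, path):
        if goal_test(state):
            return path
        if limit == 0:
            return None
        for action, child in successors(state):
            res = dls(child, limit - 1, path + [action])
            if res is not None:
                return res
        return None

    limit = 0
    while True:
        res = dls(initial, limit, [])
        if res is not None:
            return res
        limit += 1      # regenerate the upper levels with a deeper limit
```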
Iterative deepening depth-first search
Bidirectional search
• The idea behind bidirectional search is to run two simultaneous
searches—one forward from the initial state and the other
backward from the goal—hoping that the two searches meet in
the middle
• Bidirectional search is implemented by replacing the goal test
with a check to see whether the frontiers of the two searches
intersect; if they do, a solution has been found.
• The check can be done when each node is generated or selected
for expansion and, with a hash table, will take constant time.
• For example, if a problem has solution depth d = 6, and each
direction runs breadth-first search one node at a time, then in
the worst case the two searches meet when they have
generated all of the nodes at depth 3. For b = 10, this means a
total of 2,220 node generations, compared with 1,111,110 for a
standard breadth-first search
• Thus, the time complexity of bidirectional search using breadth-
first searches in both directions is O(b^(d/2)).
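The two-frontier idea can be sketched as follows. This is a simplified illustration in our own notation: the graph is assumed undirected (so the backward search can use the same `neighbors` function), the two breadth-first searches alternate one expansion at a time, and the function returns only the number of actions on the path found when the frontiers first meet.

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Two breadth-first searches, one from each end of an undirected
    graph; stop when a newly generated node is already on the other
    side's map. Returns the number of edges on the path found."""
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}            # distances from each end
    frontier_f, frontier_b = deque([start]), deque([goal])
    while frontier_f and frontier_b:
        # Alternate: expand one node forward, then one node backward.
        for dist, frontier, other in ((dist_f, frontier_f, dist_b),
                                      (dist_b, frontier_b, dist_f)):
            state = frontier.popleft()
            for child in neighbors(state):
                if child in other:                    # the frontiers meet
                    return dist[state] + 1 + other[child]
                if child not in dist:
                    dist[child] = dist[state] + 1
                    frontier.append(child)
    return None
```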
Comparing uninformed search strategies
Discussion and Interaction



