
Course: Artificial Intelligence Course Instructor: Amit Ghosh

Unit 2: Problem Solving


Problem Solving
Problem solving is a method of deriving solution steps beginning from the initial description of the
problem and leading to the desired solution.
The task is solved by a series of actions that minimizes the difference between the given situation
and the desired goal.
Problems are frequently modelled as state space problems, where the state space is the set of all
possible states from the start to the goal states.
The set of states forms a graph in which two states are linked if there is an operation that can be
executed to transform one state into the other.
To generate a new state in the search space, an action/operator/rule is applied and the resulting state
is tested to check whether it is the goal state. If it is not the goal state, the procedure is
repeated.


Problem Solving Agent


A problem-solving agent is a type of artificial intelligence system designed to find solutions to specific
problems or tasks. It typically involves the following components:

1. Problem Formulation: Defining the problem, including the initial state, goal state, and possible
actions.
2. Search Strategy: Utilizing algorithms to explore potential solutions, which can include methods
like depth-first search, breadth-first search, or heuristic approaches.
3. Action Execution: Implementing the solution once the best path to the goal is identified.

Example:
Consider a Rubik's Cube Solver as a problem-solving agent.

• Problem Formulation:

o Initial State: A scrambled Rubik's Cube.


o Goal State: The cube arranged in uniform colors on each face.
o Possible Actions: Rotating one or more faces of the cube.

• Search Strategy: The agent may use a method like the Kociemba algorithm, which efficiently
finds a solution in a small number of moves.

• Action Execution: The agent performs the necessary rotations to solve the cube, transforming it
from the initial state to the goal state.

State-Space Search
State space is another method of problem representation that facilitates easy search. Using this
method, one can also find a path from the start state to a goal state while solving a problem.
A state space basically consists of four components:
1. A set S containing the start states of the problem
2. A set G containing the goal states of the problem
3. A set of nodes (states) in the graph/tree, where each node represents a state in the problem-solving
process
4. A set of arcs connecting nodes, where each arc corresponds to an operator, i.e. a step in the problem-
solving process.
A solution path is a path through the graph from a node in S to a node in G. The main objective of a
search algorithm is to determine a solution path in the graph. There may be more than one
solution path, as there may be more than one way of solving the problem.
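As an illustration, these four components can be captured in a small Python structure. This is only a sketch; the names (Problem, start_states, is_goal, successors) are illustrative and not taken from the course material.

```python
from typing import Callable, Hashable, Iterable, Tuple

class Problem:
    """Minimal state-space description: start states S, goal test for G, and arcs."""
    def __init__(self,
                 start_states: Iterable[Hashable],
                 is_goal: Callable[[Hashable], bool],
                 successors: Callable[[Hashable], Iterable[Tuple[str, Hashable]]]):
        self.start_states = list(start_states)  # the set S of start states
        self.is_goal = is_goal                   # membership test for the goal set G
        self.successors = successors             # arcs: state -> (operator, next_state) pairs
```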


Uninformed Search
Uninformed search methods, also known as blind search methods, are strategies for exploring a
search space without any additional information about the goal's location beyond the initial state and
possible actions. These rely on systematic exploration of the state space.
Common Types of Uninformed Search Methods:
1. Breadth-First Search (BFS):
o Explores all nodes at the present depth level before moving on to the nodes at the next
depth level.
o Guaranteed to find the shortest path in terms of the number of edges if all edges have
the same cost.
2. Depth-First Search (DFS):
o Explores as far down a branch as possible before backtracking.
o Can be less memory-intensive than BFS but does not guarantee the shortest path.
3. Iterative Deepening Search (IDS):
o Combines the benefits of DFS and BFS by gradually increasing the depth limit on the
search to find a solution without excessive memory use.

Breadth-First Search
The breadth-first search (BFS) expands all the states one step away from the start state, then
expands all states two steps from the start state, then three steps, and so on, until a goal state is
reached. All successor states are examined at the same depth before going deeper.


This search is implemented using two lists called OPEN and CLOSED. The OPEN list contains
those states that are to be expanded, and the CLOSED list keeps track of states already expanded. Here
the OPEN list is maintained as a queue and the CLOSED list as a stack. For the sake of simplicity, we
write the BFS algorithm for checking whether a goal node exists or not.
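A minimal Python sketch of this OPEN/CLOSED formulation is given below. It assumes a successors(state) function that returns the neighbouring states, and keeps CLOSED as a set purely for fast membership tests; this is an illustration, not the exact listing from the course figures.

```python
from collections import deque

def bfs_goal_exists(start, is_goal, successors):
    """Breadth-first check for a goal node using an OPEN queue and a CLOSED set."""
    open_list = deque([start])   # OPEN: states waiting to be expanded (FIFO queue)
    closed = set()               # CLOSED: states already expanded
    while open_list:
        state = open_list.popleft()
        if is_goal(state):
            return True
        closed.add(state)
        for nxt in successors(state):
            if nxt not in closed and nxt not in open_list:
                open_list.append(nxt)
    return False                 # state space exhausted without finding a goal
```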

Depth-First Search
In the depth-first search (DFS), we go as far down as possible into the search tree/graph before
backing up and trying alternatives.

It works by always generating a descendant of the most recently expanded node until some depth cut-off
is reached, and then backtracks to the next most recently expanded node and generates one of its
descendants.

DFS is memory efficient as it only stores a single path from the root to a leaf node, along with the
remaining unexpanded siblings for each node on the path.


Figure 1. Reference: Artificial Intelligence: Foundations of Computational Agents

We can implement DFS by using two lists called OPEN and CLOSED. The OPEN list contains those
states that are to be expanded, and CLOSED list keeps track of states already expanded.
Here the OPEN and CLOSED lists are maintained as stacks. If we discover that the first element of
OPEN is the goal state, then the search terminates successfully.
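The same structure works for DFS when OPEN is used as a stack. A minimal sketch, again assuming a successors(state) function (illustrative, not the course's exact listing):

```python
def dfs_goal_exists(start, is_goal, successors):
    """Depth-first check for a goal node; OPEN is used as a stack (LIFO)."""
    open_list = [start]          # OPEN: stack of states to expand
    closed = set()               # CLOSED: states already expanded (set for fast lookup)
    while open_list:
        state = open_list.pop()  # expand the most recently added state
        if is_goal(state):
            return True          # the first element of OPEN is the goal: success
        closed.add(state)
        for nxt in successors(state):
            if nxt not in closed and nxt not in open_list:
                open_list.append(nxt)
    return False
```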


Depth-First Iterative Deepening (DFID):


To keep depth-first search from wandering down an infinite path, we can use depth-limited search,
a version of depth-first search in which we supply a depth limit l and treat all nodes at depth l as if
they had no successors.
Depth-first iterative deepening (DFID) takes advantage of both BFS and DFS searches on trees. The
algorithm for DFID is given as follows:
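A minimal sketch of the idea is given below; it assumes a successors(state) function, and the max_depth cap is only an illustrative safeguard, not part of the original algorithm.

```python
def depth_limited_search(state, is_goal, successors, limit):
    """DFS that treats nodes at depth `limit` as if they had no successors."""
    if is_goal(state):
        return True
    if limit == 0:
        return False
    return any(depth_limited_search(nxt, is_goal, successors, limit - 1)
               for nxt in successors(state))

def dfid(start, is_goal, successors, max_depth=50):
    """Repeat depth-limited search with increasing depth limits 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        if depth_limited_search(start, is_goal, successors, limit):
            return True
    return False
```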


Informed Search
Local search algorithms operate by searching from a start state to neighboring states, without
keeping track of the paths or of the set of states that have been reached.

That means they are not systematic—they might never explore a portion of the search space where
a solution actually resides.

However, they have two key advantages: (1) they use very little memory; and (2) they can often
find reasonable solutions in large or infinite state spaces for which systematic algorithms are
unsuitable.

To understand local search, consider the states of a problem laid out in a state-space landscape, as
shown in the figure. Each point (state) in the landscape has an “elevation,” defined by the value of the
objective function. If elevation corresponds to an objective function, then the aim is to find the
highest peak—a global maximum—and we call the process hill climbing. If
elevation corresponds to cost, then the aim is to find the lowest valley—a global minimum—and we
call it gradient descent.

Hill Climbing/Greedy Local Search


Main Idea: Keep a single current node and move to a neighboring state to improve it.
The algorithm uses a loop that continuously moves in the direction of increasing value (uphill):
choose the best successor (choosing randomly if there is more than one), and terminate when a peak
is reached where no neighbor has a higher value. It is also called greedy local search.


It keeps track of one current state and on each iteration moves to the neighboring state with highest
value—that is, it heads in the direction that provides the steepest ascent.
It terminates when it reaches a “peak” where no neighbor has a higher value. Hill climbing does not
look ahead beyond the immediate neighbors of the current state.
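A minimal sketch of this loop, assuming a value(state) objective function and a neighbors(state) generator (both names are illustrative):

```python
import random

def hill_climbing(start, value, neighbors):
    """Greedy local search: move to the best neighbour until none is better."""
    current = start
    while True:
        candidates = list(neighbors(current))
        if not candidates:
            return current
        best_value = max(value(n) for n in candidates)
        if best_value <= value(current):
            return current                    # peak reached: no neighbour is higher
        best = [n for n in candidates if value(n) == best_value]
        current = random.choice(best)         # choose randomly among equally good successors
```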

Hill-climbing search

Our aim is to find a path from S to M


Problems with Hill-climbing search

Local maxima: A local maximum is a peak that is higher than each of its neighboring states,
but lower than the global maximum. Hill-climbing algorithms that reach the vicinity of a local
maximum will be drawn upwards towards the peak, but will then be stuck with nowhere else to go.


Plateaux: a plateau is an area of the state space landscape where the evaluation function is flat. It can
be a flat local maximum, from which no uphill exit exists, or a shoulder, from which it is possible to
make progress.

Ridges: Ridges result in a sequence of local maxima that is very difficult for local search algorithms
to navigate.

Beam Search
Beam search is a heuristic search algorithm in which the W best nodes at each level are always
expanded.
Beam search uses breadth-first search to build its search tree. At each level of the tree, it generates all
successors of the states at the current level and sorts them in order of increasing heuristic value,
but it only considers W states at each level; other nodes are ignored.

The best nodes are decided by the heuristic cost associated with each node.

Here W is called the width of the beam search. If B is the branching factor, there will be only W * B nodes
under consideration at any depth, but only W nodes will be selected.
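A minimal sketch of this level-by-level pruning, assuming a heuristic h(state) (lower is better) and a successors(state) function (both names illustrative):

```python
def beam_search(start, is_goal, successors, h, width):
    """Breadth-first expansion that keeps only the `width` best nodes per level."""
    level = [start]
    while level:
        if any(is_goal(s) for s in level):
            return True
        # generate all successors of the current level (at most width * B nodes)
        candidates = [nxt for s in level for nxt in successors(s)]
        # sort by increasing heuristic value and keep only the best `width` states
        level = sorted(candidates, key=h)[:width]
    return False
```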

(Figure: greedy search vs. beam search)


• Start with k randomly generated states.
• Generate all successors of those k states.
• If no goal is found, select the best k successors and repeat.
• For the example tree in the figure, k = 3: generate all successors of those k states, choose the top 3
among these successors since k = 3, and repeat the same procedure for these 3, generating new
children at each step, until a state with h = 100 is reached.


A* Search
The evaluation function for a node N is defined as follows: f(N) = g(N) + h(N)
The function g is a measure of the cost of getting from the start node to the current node N, i.e. it
is the sum of the costs of the rules that were applied along the best path to the current node.
The function h is an estimate of the additional cost of getting from the current node N to the goal node.
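A minimal sketch of A* using this evaluation function; it assumes successors(state) yields (next_state, step_cost) pairs and h(state) is the heuristic estimate (illustrative names).

```python
import heapq
from itertools import count

def a_star(start, is_goal, successors, h):
    """A* search: always expand the node with the smallest f(N) = g(N) + h(N)."""
    tie = count()                               # tie-breaker so the heap never compares states
    open_heap = [(h(start), next(tie), 0, start)]
    best_g = {start: 0}                         # cheapest known cost g to each state
    while open_heap:
        f, _, g, state = heapq.heappop(open_heap)
        if is_goal(state):
            return g                            # cost of the best path found
        for nxt, step_cost in successors(state):
            new_g = g + step_cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(open_heap, (new_g + h(nxt), next(tie), new_g, nxt))
    return None
```

Because the heap always pops the node with the smallest f, the returned g is the cheapest path cost to a goal, provided h never overestimates.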


Let us consider the example of the eight puzzle again and solve it by using the A* algorithm.
The simple evaluation function f(X) is defined as follows:
f(X) = g(X) + h(X),
where
h(X) = the number of tiles not in their goal position in a given state X;
g(X) = depth of node X in the search tree

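For instance, the misplaced-tiles heuristic h(X) can be computed as below; the goal layout shown is an assumption for illustration and not necessarily the one used in the course figures (0 marks the blank). g(X), the depth of node X, is tracked by the search itself as nodes are expanded.

```python
GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))   # an assumed goal layout; 0 marks the blank tile

def h_misplaced(state):
    """h(X): number of tiles not in their goal position (the blank is not counted)."""
    return sum(1
               for i in range(3) for j in range(3)
               if state[i][j] != 0 and state[i][j] != GOAL[i][j])
```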


Iterative-Deepening A* Search
Iterative-Deepening A* (IDA*) is a combination of depth-first iterative deepening and the A*
algorithm.
Here the successive iterations correspond to increasing values of the total cost of a path rather
than increasing depth of the search.
The algorithm works as follows (a code sketch follows the list):
• For each iteration, perform a DFS, pruning off a branch when its total cost (g + h) exceeds a given
threshold.
• The initial threshold starts at the estimated cost of the start state and increases with each iteration of
the algorithm.
• The threshold used for the next iteration is the minimum cost among all values that exceeded the
current threshold.
• These steps are repeated until we find a goal state.
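A minimal sketch of this threshold loop, assuming successors(state) yields (next_state, step_cost) pairs and h(state) is the heuristic (illustrative names):

```python
import math

def ida_star(start, is_goal, successors, h):
    """IDA*: repeated depth-first searches with an increasing (g + h) threshold."""

    def dfs(state, g, threshold):
        f = g + h(state)
        if f > threshold:
            return f                          # prune this branch; report its cost
        if is_goal(state):
            return "FOUND"
        smallest_excess = math.inf            # smallest cost that exceeded the threshold
        for nxt, step_cost in successors(state):
            result = dfs(nxt, g + step_cost, threshold)
            if result == "FOUND":
                return "FOUND"
            smallest_excess = min(smallest_excess, result)
        return smallest_excess

    threshold = h(start)                      # initial threshold = estimated cost of the start state
    while True:
        result = dfs(start, 0, threshold)
        if result == "FOUND":
            return True
        if result == math.inf:
            return False                      # nothing left to explore
        threshold = result                    # next threshold: minimum cost that exceeded it
```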
Let us consider an example to illustrate the working of the IDA* algorithm.


Initially, the threshold value is the estimated cost of the start node. In the first iteration, Threshold = 5.
Now we generate all the successors of the start node and compute their estimated values as 6, 8, 4, 8,
and 9.
The successors having values greater than 5 are to be pruned. For the next iteration, the threshold
becomes 6, the minimum of the values that exceeded the current threshold of 5.

Simulated Annealing
Annealing Process
– Raising the temperature up to a very high level (the melting temperature, for example) gives the atoms
a higher energy state and a high possibility of re-arranging the crystalline structure.
– Cooling down slowly gives the atoms a lower and lower energy state and a smaller and smaller
possibility of re-arranging the crystalline structure.


Analogy


Algorithm
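The standard simulated-annealing loop can be sketched as follows; the temperature schedule parameters are illustrative, and value(state) and random_neighbor(state) are assumed helper functions rather than part of the course material.

```python
import math
import random

def simulated_annealing(start, value, random_neighbor,
                        t_start=100.0, t_end=0.01, cooling=0.95):
    """Accept worse moves with a probability that shrinks as the temperature drops."""
    current = start
    t = t_start
    while t > t_end:
        nxt = random_neighbor(current)
        delta = value(nxt) - value(current)    # positive if the neighbour is better
        # always accept improvements; accept worse moves with probability e^(delta / T)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling                           # cool down slowly (geometric schedule)
    return current
```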

