Unit2 Material
Course: Artificial Intelligence Course Instructor: Amit Ghosh
1. Problem Formulation: Defining the problem, including the initial state, goal state, and possible
actions.
2. Search Strategy: Utilizing algorithms to explore potential solutions, which can include methods
like depth-first search, breadth-first search, or heuristic approaches.
3. Action Execution: Implementing the solution once the best path to the goal is identified.
Example:
Consider a Rubik's Cube Solver as a problem-solving agent.
• Problem Formulation: The initial state is a scrambled cube, the goal state is the solved cube, and the possible actions are rotations of the cube's faces.
• Search Strategy: The agent may use a method like the Kociemba algorithm, which efficiently finds a solution in a small number of moves.
• Action Execution: The agent performs the necessary rotations to solve the cube, transforming it
from the initial state to the goal state.
State-Space Search
State space is another method of problem representation that facilitates easy search. Using this method, one can also find a path from the start state to a goal state while solving a problem.
A state space basically consists of four components:
1. A set S containing start states of the problem
2. A set G containing goal states of the problem
3. A set of nodes (states) in the graph/tree; each node represents a state in the problem-solving process
4. A set of arcs connecting nodes; each arc corresponds to an operator, i.e. a step in the problem-solving process.
A solution path is a path through the graph from a node in S to a node in G. The main objective of a search algorithm is to determine a solution path in the graph. There may be more than one solution path, as there may be more than one way of solving the problem.
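As a small illustration (a sketch added here, not part of the original notes), the four components of a state space can be written down explicitly. The tiny graph, start set S, and goal set G below are hypothetical.

# A minimal sketch of a state space: states are nodes, arcs are operator
# applications. The graph, start set S and goal set G are hypothetical.
state_space = {
    'A': ['B', 'C'],   # applying an operator to A yields B or C
    'B': ['D'],
    'C': ['D', 'E'],
    'D': ['G1'],
    'E': ['G1'],
    'G1': [],
}
S = {'A'}        # start states
G = {'G1'}       # goal states

def is_solution_path(path):
    """A solution path runs from a node in S to a node in G along arcs."""
    if path[0] not in S or path[-1] not in G:
        return False
    return all(b in state_space[a] for a, b in zip(path, path[1:]))

print(is_solution_path(['A', 'C', 'E', 'G1']))   # True: one possible solution path
print(is_solution_path(['A', 'B', 'D', 'G1']))   # True: another solution path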
Uninformed Search
Uninformed search methods, also known as blind search methods, are strategies for exploring a search space without any additional information about the goal's location beyond the initial state and possible actions. They rely on systematic exploration of the state space.
Common Types of Uninformed Search Methods:
1. Breadth-First Search (BFS):
o Explores all nodes at the present depth level before moving on to the nodes at the next
depth level.
o Guaranteed to find the shortest path in terms of the number of edges if all edges have
the same cost.
2. Depth-First Search (DFS):
o Explores as far down a branch as possible before backtracking.
o Can be less memory-intensive than BFS but does not guarantee the shortest path.
3. Iterative Deepening Search (IDS):
o Combines the benefits of DFS and BFS by gradually increasing the depth limit on the search to find a solution without excessive memory use; a short sketch follows this list.
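The sketch below (added here, not part of the original notes) illustrates the idea in Python: a depth-limited DFS is re-run with limits 0, 1, 2, and so on until the goal is found. The example graph and node names are hypothetical.

# Iterative Deepening Search: repeated depth-limited DFS with a growing limit.
# The graph and goal used here are hypothetical, for illustration only.
def depth_limited_search(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):        # gradually increase the depth limit
        result = depth_limited_search(graph, start, goal, limit)
        if result is not None:
            return result                     # path from start to goal
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': ['G'], 'G': []}
print(iterative_deepening_search(graph, 'A', 'G'))   # ['A', 'C', 'E', 'G']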
Breadth-First Search
The breadth-first search (BFS) expands all the states one step away from the start state, then expands all states two steps from the start state, then three steps, and so on, until a goal state is reached. All successor states are examined at the same depth before going deeper.
This search is implemented using two lists called OPEN and CLOSED. The OPEN list contains those states that are to be expanded, and the CLOSED list keeps track of states already expanded. Here the OPEN list is maintained as a queue and the CLOSED list as a stack. For the sake of simplicity, we write the BFS algorithm so that it only checks whether a goal node exists or not.
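The listing itself did not survive extraction here, so the following is a minimal Python sketch of the procedure just described, with OPEN maintained as a queue and CLOSED recording expanded states. The example graph is hypothetical.

from collections import deque

# BFS with an OPEN list (FIFO queue) and a CLOSED list, checking only
# whether a goal node exists. The example graph is hypothetical.
def bfs_goal_exists(graph, start, goal):
    open_list = deque([start])    # OPEN: states still to be expanded
    closed_list = []              # CLOSED: states already expanded
    while open_list:
        state = open_list.popleft()
        if state == goal:
            return True           # goal found
        closed_list.append(state)
        for successor in graph.get(state, []):
            if successor not in closed_list and successor not in open_list:
                open_list.append(successor)
    return False                  # OPEN exhausted: no goal node exists

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['C', 'G'], 'C': [], 'G': []}
print(bfs_goal_exists(graph, 'S', 'G'))   # True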
Depth-First Search
In the depth-first search (DFS), we go as far down as possible into the search tree/graph before backing up and trying alternatives.
It works by always generating a descendant of the most recently expanded node until some depth cut-off is reached, and then backtracks to the next most recently expanded node and generates one of its descendants.
DFS is memory efficient as it only stores a single path from the root to a leaf node, along with the remaining unexpanded siblings of each node on the path.
We can implement DFS by using two lists called OPEN and CLOSED. The OPEN list contains those states that are to be expanded, and the CLOSED list keeps track of states already expanded.
Here the OPEN and CLOSED lists are maintained as stacks. If we discover that the first element of OPEN is the goal state, then the search terminates successfully.
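As with BFS, the listing is missing from these notes; the sketch below follows the description above, maintaining OPEN as a stack and terminating when the state taken from OPEN is the goal. The example graph is hypothetical.

# DFS with OPEN and CLOSED maintained as stacks; the search terminates
# successfully when the state taken from OPEN is the goal.
# The example graph is hypothetical.
def dfs_goal_exists(graph, start, goal):
    open_list = [start]           # OPEN: stack of states to be expanded
    closed_list = []              # CLOSED: stack of states already expanded
    while open_list:
        state = open_list.pop()   # take the most recently added state
        if state == goal:
            return True
        closed_list.append(state)
        for successor in graph.get(state, []):
            if successor not in closed_list and successor not in open_list:
                open_list.append(successor)
    return False

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['G'], 'C': [], 'G': []}
print(dfs_goal_exists(graph, 'S', 'G'))   # True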
Informed Search
Local search algorithms operate by searching from a start state to neighboring states, without keeping track of the paths or of the set of states that have been reached.
That means they are not systematic—they might never explore a portion of the search space where
a solution actually resides.
However, they have two key advantages: (1) they use very little memory; and (2) they can often
find reasonable solutions in large or infinite state spaces for which systematic algorithms are
unsuitable.
To understand local search, consider the states of a problem laid out in a state-space landscape, as shown in the figure. Each point (state) in the landscape has an "elevation," defined by the value of the objective function. If elevation corresponds to an objective function, then the aim is to find the highest peak (a global maximum), and we call the process hill climbing. If elevation corresponds to cost, then the aim is to find the lowest valley (a global minimum), and we call it gradient descent.
Hill-climbing search
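The hill-climbing procedure appeared here as a figure that did not survive extraction. Below is a minimal Python sketch of the usual steepest-ascent formulation: keep moving to the best-valued neighbour until no neighbour improves on the current state. The objective function and neighbourhood are hypothetical choices for illustration.

import random

# Minimal hill-climbing sketch: repeatedly move to the highest-valued
# neighbour; stop when no neighbour improves on the current state.
# The objective (maximise -x^2, peak at x = 0) and the neighbourhood
# (step left or right by 1) are hypothetical choices.
def objective(x):
    return -(x ** 2)

def neighbours(x):
    return [x - 1, x + 1]

def hill_climbing(start):
    current = start
    while True:
        best = max(neighbours(current), key=objective)
        if objective(best) <= objective(current):
            return current            # no uphill move left: a maximum reached
        current = best

print(hill_climbing(random.randint(-10, 10)))   # 0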
Local maxima: a local maximum is a peak that is higher than each of its neighboring states but lower than the global maximum. Hill-climbing algorithms that reach the vicinity of a local maximum are drawn upward toward the peak and then become stuck with nowhere else to go.
Plateaux: a plateau is an area of the state space landscape where the evaluation function is flat. It can
be a flat local maximum, from which no uphill exit exists, or a shoulder, from which it is possible to
make progress.
Ridges: Ridges result in a sequence of local maxima that is very difficult for local search algorithms
to navigate.
Beam Search
Beam search is a heuristic search algorithm in which only the W best nodes at each level are expanded.
Beam search uses breadth-first search to build its search tree. At each level of the tree, it generates all successors of the states at the current level and sorts them in increasing order of their heuristic values.
It then keeps only W states at each level; the other nodes are ignored.
The best nodes are chosen according to the heuristic cost associated with each node.
Here W is called the width of the beam search. If B is the branching factor, there will be only W * B nodes under consideration at any depth, but only W of them will be selected.
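A minimal Python sketch of this procedure (keep only the W best successors at each level) is given below; it was added here for illustration, and the graph and heuristic values are hypothetical.

import heapq

# Beam search sketch: breadth-first expansion, but at every level only the
# W best states (lowest heuristic cost) are kept. Graph and heuristic h
# are hypothetical.
def beam_search(graph, h, start, goal, W):
    level = [start]
    while level:
        if goal in level:
            return True
        successors = []
        for state in level:
            successors.extend(graph.get(state, []))
        # sort successors by increasing heuristic value and keep the best W
        level = heapq.nsmallest(W, successors, key=h)
    return False

graph = {'S': ['A', 'B', 'C'], 'A': ['D'], 'B': ['E'], 'C': ['F'],
         'D': [], 'E': ['G'], 'F': [], 'G': []}
h = {'S': 5, 'A': 4, 'B': 2, 'C': 4, 'D': 3, 'E': 1, 'F': 3, 'G': 0}.get
print(beam_search(graph, h, 'S', 'G', W=2))   # True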
A* Search
The evaluation function for a node N is defined as follows: f(N) = g(N) + h(N)
The function g is a measure of the cost of getting from the start node to the current node N, i.e., it is the sum of the costs of the rules that were applied along the best path to the current node.
The function h is an estimate of the additional cost of getting from the current node N to the goal node.
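A minimal Python sketch of A*, expanding the node with the lowest f(N) = g(N) + h(N) from a priority queue, is shown below. It was added here for illustration; the weighted graph and heuristic values are hypothetical, and non-negative step costs are assumed.

import heapq

# A* sketch: always expand the node with the lowest f(N) = g(N) + h(N).
# The weighted graph and heuristic below are hypothetical.
def a_star(graph, h, start, goal):
    # frontier holds tuples (f, g, node, path); g is the cost from the start
    frontier = [(h[start], 0, start, [start])]
    expanded = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in expanded:
            continue
        expanded.add(node)
        for successor, step_cost in graph.get(node, []):
            if successor not in expanded:
                g2 = g + step_cost
                heapq.heappush(frontier, (g2 + h[successor], g2,
                                          successor, path + [successor]))
    return None, float('inf')

graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)],
         'B': [('G', 2)], 'G': []}
h = {'S': 4, 'A': 3, 'B': 2, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (['S', 'A', 'B', 'G'], 5)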
Let us consider the example of the eight puzzle again and solve it by using the A* algorithm.
The simple evaluation function f(X) is defined as follows:
f(X) = g(X) + h(X),
where
h(X) = the number of tiles not in their goal position in a given state X;
g(X) = depth of node X in the search tree
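These two components can be written directly in code; the sketch below assumes a hypothetical tuple representation of the board (0 marks the blank) and a particular goal configuration, since the notes do not fix one here.

# f(X) = g(X) + h(X) for the eight puzzle, with states as 3x3 tuples
# (0 marks the blank). The representation and goal are hypothetical choices.
GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))

def h(state):
    """Number of tiles (blank excluded) not in their goal position."""
    return sum(1 for i in range(3) for j in range(3)
               if state[i][j] != 0 and state[i][j] != GOAL[i][j])

def f(state, depth):
    """g(X) is the depth of the node in the search tree."""
    return depth + h(state)

start = ((2, 8, 3),
         (1, 6, 4),
         (7, 0, 5))
print(h(start), f(start, 0))   # 4 4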
Iterative-Deepening A* Search
Iterative-Deepening A* (IDA*) is a combination of depth-first iterative deepening and the A* algorithm.
Here successive iterations correspond to increasing values of the total cost of a path rather than to an increasing depth of the search.
The algorithm works as follows:
• For each iteration, perform a DFS, pruning a branch when its total cost (g + h) exceeds a given threshold.
• The initial threshold is the estimated cost of the start state and increases with each iteration of the algorithm.
• The threshold used for the next iteration is the minimum of all the costs that exceeded the current threshold.
• These steps are repeated until a goal state is found.
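A minimal Python sketch of these steps is given below (added here for illustration); the weighted graph and heuristic values are hypothetical.

import math

# IDA* sketch: depth-first search bounded by a threshold on f = g + h;
# the threshold for the next iteration is the smallest f value that
# exceeded the current one. Graph and heuristic are hypothetical.
def ida_star(graph, h, start, goal):
    def dfs(node, g, threshold, path):
        f = g + h[node]
        if f > threshold:
            return f, None                 # prune; report the exceeding value
        if node == goal:
            return f, path
        minimum = math.inf
        for successor, cost in graph.get(node, []):
            if successor not in path:      # avoid cycles along the current path
                t, found = dfs(successor, g + cost, threshold, path + [successor])
                if found is not None:
                    return t, found
                minimum = min(minimum, t)
        return minimum, None

    threshold = h[start]                   # initial threshold: estimate for start
    while threshold < math.inf:
        threshold, solution = dfs(start, 0, threshold, [start])
        if solution is not None:
            return solution
    return None

graph = {'S': [('A', 1), ('B', 2)], 'A': [('G', 5)], 'B': [('G', 3)], 'G': []}
h = {'S': 4, 'A': 3, 'B': 3, 'G': 0}
print(ida_star(graph, h, 'S', 'G'))   # ['S', 'B', 'G']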
Let us consider an example to illustrate the working of the IDA* algorithm.
Initially, the threshold value is the estimated cost of the start node. In the first iteration, the threshold is 5. Now we generate all the successors of the start node and compute their estimated values as 6, 8, 4, 8, and 9.
The successors having values greater than 5 are pruned. For the next iteration, the threshold becomes 6, the minimum of the values that exceeded the current threshold.
Simulated Annealing
Annealing Process
– When the temperature is raised to a very high level (the melting temperature, for example), the atoms have a higher energy state and a high probability of re-arranging the crystalline structure.
– As the material is cooled down slowly, the atoms reach lower and lower energy states and have a smaller and smaller probability of re-arranging the crystalline structure.
Analogy
In the analogy between annealing and search, a physical state of the material corresponds to a candidate solution, the energy of a state corresponds to the value of the objective (cost) function, and the temperature acts as a control parameter that is gradually lowered: at high temperature the search moves almost freely, while at low temperature it settles into a low-cost state.
Algorithm
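The algorithm listing is missing from these notes; the following Python sketch follows the standard formulation of simulated annealing, in which a worse neighbour may be accepted with probability exp(delta_e / T) and the temperature T is lowered gradually. The objective function, neighbourhood, and cooling schedule are hypothetical choices.

import math
import random

# Simulated annealing sketch: like hill climbing, but a worse neighbour may
# still be accepted with probability exp(delta_e / T), which shrinks as the
# temperature T is lowered. Objective and schedule are hypothetical.
def objective(x):
    return -(x ** 2)                     # single peak at x = 0

def simulated_annealing(start, t_initial=100.0, cooling=0.95, t_final=1e-3):
    current = start
    t = t_initial
    while t > t_final:
        candidate = current + random.choice([-1, 1])   # random neighbour
        delta_e = objective(candidate) - objective(current)
        if delta_e > 0 or random.random() < math.exp(delta_e / t):
            current = candidate          # accept uphill moves, and some downhill
        t *= cooling                     # cool down slowly
    return current

random.seed(0)
print(simulated_annealing(start=8))      # close to the global maximum at 0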