
Module 2

Problem Solving
Problem-Solving Agent
 Problem solving is a fundamental aspect of artificial intelligence (AI). It involves analyzing a problem, formulating a strategy, and implementing a solution. Understanding the problem-solving process is crucial for developing effective AI systems.
 The problem-solving agent performs precisely by defining problems and their possible solutions.
Problem Searching
 Finding information based on needs.
 The most common technique of problem solving in AI.
 Steps of problem solving:
1. Define the problem
2. Analyse the problem
3. Identify the solution
4. Choose the solution
5. Implementation
Search Algorithm Terminologies:
 Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
 Search Space: The set of possible solutions which a system may have.
 Start State: The state from which the agent begins the search.
 Goal Test: A function which observes the current state and returns whether the goal state has been achieved or not.
 Search Tree: A tree representation of the search problem. The root of the search tree is the root node, which corresponds to the initial state.
 Actions: A description of all the actions available to the agent.
 Transition Model: A description of what each action does.
 Path Cost: A function which assigns a numeric cost to each path.
 Solution: An action sequence which leads from the start node to the goal node.
 Optimal Solution: A solution with the lowest cost among all solutions.
Well-Defined Problems and Solutions
 Define the problem statement and then generate the solution, keeping various conditions in mind.
 A problem can be defined formally by five components:
1. The initial state that the agent starts in.
2. A description of the possible actions available to the agent.
3. A description of what each action does; the formal name for this is the transition model, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s. We also use the term successor to refer to any state reachable from a given state by a single action.
4. The initial state, actions, and transition model implicitly define the state space of the problem: the set of all states reachable from the initial state by any sequence of actions. The state space forms a directed network or graph in which the nodes are states and the links between nodes are actions. A path in the state space is a sequence of states connected by a sequence of actions.
5. The goal test, which determines whether a given state is a goal state.
Uninformed Search
 Uninformed search does not use any domain knowledge, such as closeness or the location of the goal.
 It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes.
 Uninformed search searches the tree without any information about the search space, such as the initial state, operators, and goal test, so it is also called blind search.
 It examines each node of the tree until it reaches the goal node.
Informed Search
 Informed search algorithms use domain knowledge. In an informed search, problem information is available which can guide the search.
 Informed search strategies can find a solution more efficiently than an uninformed search strategy.
 Informed search is also called heuristic search.
 A heuristic is a technique which is not always guaranteed to find the best solution, but is guaranteed to find a good solution in reasonable time.
 Informed search can solve much more complex problems which could not be solved any other way.
 An example problem for informed search algorithms is the traveling salesman problem.
Search Techniques
 Uninformed Search Methods:
 Breadth First Search (BFS)
 Depth First Search (DFS)
 Uniform Cost Search
 Depth Limited Search
 Depth First Iterative Deepening (DFID)
 Informed Search Methods:
 Greedy Best First Search
 A* Search
 Hill Climbing
 Simulated Annealing
 Memory Bounded Heuristic Search
State Space Search
• It is a method used to search for a path from the start state to the goal state while solving a problem.
• The initial state, actions, and transition model together define the state space of the problem implicitly.
 The state space of a problem is the set of all states which can be reached from the initial state by any sequence of actions.
• The state space forms a directed map or graph where the nodes are the states, the links between the nodes are actions, and a path is a sequence of states connected by a sequence of actions.
State Space Search
 When we don't have an algorithm which tells us definitively how to navigate the state space, we need to search the state space to find an optimal path from a start state to a goal state.
 We can only decide what to do (or where to go) by considering the possible moves from the current state, and trying to look ahead as far as possible.
State Space Search
Search: It identifies the best possible sequence of actions to reach the goal state from the current state. It takes a problem as input and returns a solution as its output.
Solution: It finds the best algorithm out of various algorithms, which may prove to be the optimal solution.
Execution: It executes the best optimal solution from the search algorithms to reach the goal state from the current state.
8 Puzzle Problem
 Here, we have a 3×3 matrix with movable tiles numbered from 1 to 8 and a blank space.
 A tile adjacent to the blank space can slide into that space.
 The objective is to reach a specified goal state, as shown in the figure below.
 In the figure, our task is to convert the current state into the goal state.
The problem formulation is as follows (a code sketch of the transition model follows the summary):
 States: The location of each numbered tile and the blank tile.
 Initial State: We can start from any state as the initial state.
 Actions: The actions of the blank space are defined, i.e., left, right, up, or down.
 Transition Model: It returns the resulting state for the given state and action.
 Goal Test: It identifies whether we have reached the correct goal state.
 Path Cost: The path cost is the number of steps in the path, where the cost of each step is 1.

In summary:
States: Locations of tiles
Initial state: Any state
Actions: Move blank left, right, up, down
Goal state: Goal state (given)
Path cost: 1 per move
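
A minimal Python sketch of this formulation (the state encoding and function name are illustrative, not from the slides): a state is a tuple of nine entries read row by row, with 0 standing for the blank.

    def successors(state):
        # A state is a tuple of 9 entries read row by row; 0 is the blank.
        # Returns (action, new_state) pairs for every legal move of the blank.
        moves = {"up": -3, "down": 3, "left": -1, "right": 1}
        blank = state.index(0)
        row, col = divmod(blank, 3)
        result = []
        for action, offset in moves.items():
            # Skip moves that would take the blank off the 3x3 board.
            if (action == "up" and row == 0) or (action == "down" and row == 2):
                continue
            if (action == "left" and col == 0) or (action == "right" and col == 2):
                continue
            new_state = list(state)
            target = blank + offset
            # Slide the adjacent tile into the blank (path cost: 1 per move).
            new_state[blank], new_state[target] = new_state[target], new_state[blank]
            result.append((action, tuple(new_state)))
        return result

    print(len(successors((1, 2, 3, 4, 0, 5, 6, 7, 8))))   # centre blank: 4 moves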
8-Queens Problem
 The aim of this problem is to place 8 queens on a chessboard in such a way that no queen can attack another.
 A queen can attack other queens either diagonally or in the same row or column.
Water Jug Problem
Definition: "You are given two jugs, a 4-liter one and a 3-liter one. Neither has any measuring markers on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 liters of water into the 4-liter jug?"

Uninformed Search Strategies
Breadth-First Search
 Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on.
 This is achieved very simply by using a FIFO queue for the frontier.
 Thus, new nodes (which are always deeper than their parents) go to the back of the queue, and old nodes, which are shallower than the new nodes, get expanded first.
 The goal test is applied to each node when it is generated rather than when it is selected for expansion.
Breadth-First Search: Pseudo-code
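
The pseudo-code slide is not reproduced in these notes, so here is a minimal Python sketch of BFS as described above (names are illustrative; successors maps a state to (action, state) pairs, as in the 8-puzzle sketch earlier):

    from collections import deque

    def breadth_first_search(start, goal_test, successors):
        # BFS as described above: FIFO frontier; goal test at generation time.
        if goal_test(start):
            return [start]
        frontier = deque([start])            # FIFO queue
        parents = {start: None}              # doubles as the set of generated states
        while frontier:
            state = frontier.popleft()       # shallowest node is expanded first
            for _action, child in successors(state):
                if child in parents:         # skip states generated earlier
                    continue
                parents[child] = state
                if goal_test(child):         # test when generated, not when expanded
                    path = [child]
                    while parents[path[-1]] is not None:
                        path.append(parents[path[-1]])
                    return path[::-1]        # path from start to goal
                frontier.append(child)       # new, deeper nodes go to the back
        return None                          # no solution exists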
Advantages
• It is simple to implement.
• It can be applied to any search problem.
• BFS does not suffer from any potential infinite-loop problem.
• BFS will perform well if the search space is small.

Drawbacks
• If the search space is large, search performance will be poor compared to other heuristic searches.
• It performs relatively poorly compared to the depth-first search algorithm if the goal state lies at the bottom of the tree.
• BFS needs more memory than DFS.
Depth-First Search
 Depth-first search always expands the deepest node in the current frontier of the search tree.
 The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors.
 As those nodes are expanded, they are dropped from the frontier, so the search "backs up" to the next deepest node that still has unexplored successors.
Advantages
 DFS consumes very little space.
 It will reach the goal node in less time than BFS if it traverses the right path.

Disadvantages
 It is possible that states may keep recurring, so there is no guarantee of finding the goal node.
 The search may sometimes enter an infinite loop.
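
A minimal Python sketch of DFS with an explicit stack (illustrative names; the visited set guards against the recurring-state and infinite-loop problems listed above):

    def depth_first_search(start, goal_test, successors):
        # DFS: a LIFO stack as the frontier, so the deepest node is expanded first.
        frontier = [(start, [start])]        # stack of (state, path-so-far)
        visited = set()                      # guards against recurring states
        while frontier:
            state, path = frontier.pop()     # deepest node first
            if goal_test(state):
                return path
            if state in visited:
                continue
            visited.add(state)
            for _action, child in successors(state):
                if child not in visited:
                    frontier.append((child, path + [child]))
        return None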
Depth Limited Search
 The embarrassing failure of depth-first search in infinite state spaces can be alleviated by supplying depth-first search with a predetermined depth limit ℓ.
 That is, nodes at depth ℓ are treated as if they have no successors. This approach is called depth-limited search.
 The depth limit solves the infinite-path problem.
 Unfortunately, it also introduces an additional source of incompleteness if we choose ℓ < d, that is, if the shallowest goal lies beyond the depth limit.
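
A minimal recursive Python sketch of depth-limited search (illustrative names; following the common convention of returning a distinct "cutoff" marker when the depth limit, rather than a genuine dead end, caused the failure):

    def depth_limited_search(state, goal_test, successors, limit):
        # Nodes at the depth limit are treated as if they have no successors.
        # Returns a path to a goal, the marker "cutoff" if the limit was hit,
        # or None if the subtree genuinely contains no goal.
        if goal_test(state):
            return [state]
        if limit == 0:
            return "cutoff"
        cutoff_occurred = False
        for _action, child in successors(state):
            result = depth_limited_search(child, goal_test, successors, limit - 1)
            if result == "cutoff":
                cutoff_occurred = True
            elif result is not None:
                return [state] + result
        return "cutoff" if cutoff_occurred else None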
Advantages of Depth Limited Search
● Depth-limited search is better than DFS and requires less time and memory space.
● There are applications of DLS in graph theory, particularly similar to those of DFS.
● To combat the disadvantages of DFS, we add a limit to the depth, and the search proceeds recursively down the search tree.

Disadvantages of Depth Limited Search
● The depth limit is compulsory for this algorithm to execute.
● The goal node will not be found if it does not exist within the desired limit.
● The goal node may not exist within the depth limit set earlier, which will push the user to iterate further, adding execution time.
Iterative Deepening Depth-First Search
 Iterative deepening search (or iterative deepening depth-first search) is a general strategy, often used in combination with depth-first tree search, that finds the best depth limit.
 It does this by gradually increasing the limit (first 0, then 1, then 2, and so on) until a goal is found.
 This will occur when the depth limit reaches d, the depth of the shallowest goal node.
 Iterative deepening combines the benefits of depth-first and breadth-first search. Like depth-first search, its memory requirements are modest: O(bd) to be precise.
 Like breadth-first search, it is complete when the branching factor is finite and optimal when the path cost is a nondecreasing function of the depth of the node.
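
A minimal sketch of iterative deepening on top of the depth_limited_search sketch above: it runs DLS with limits 0, 1, 2, ... until the result is no longer a cutoff.

    import itertools

    def iterative_deepening_search(start, goal_test, successors):
        # Gradually increase the limit (0, 1, 2, ...) until a goal is found,
        # or until DLS reports a genuine failure instead of a cutoff.
        for limit in itertools.count():
            result = depth_limited_search(start, goal_test, successors, limit)
            if result != "cutoff":
                return result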
Advantages and Drawbacks
Advantages:
● It combines the benefits of BFS and DFS in terms of fast search and memory efficiency.
Drawbacks:
● The main drawback of IDDFS is that it repeats all the work of the previous phase.
Practice

Content
 Informed Search:
 Heuristic Function
 Admissible Heuristic
 Informed Search Techniques:
 Greedy Best First Search
 A* Search
What is a Heuristic Search?
• Heuristics is an approach to problem-solving in which the objective is to produce a working solution within a reasonable time frame.
• Instead of looking for a perfect solution, heuristic strategies look for a quick solution that falls within an acceptable range of accuracy.
• A heuristic is a technique to solve a problem faster than classic methods, or to find an approximate solution when classic methods cannot.
• A heuristic (or a heuristic function) guides search algorithms: at each branching step, it evaluates the available information and makes a decision on which branch to follow.
Types of Heuristic Functions:
Admissible Heuristic
 An admissible heuristic function never overestimates the cost of reaching the goal.
 H(n) is always less than or equal to the actual cost of the lowest-cost path from node n to the goal.
 H(n) <= H'(n)
 H(n): heuristic cost
 H'(n): actual cost

Non-Admissible Heuristic
 A non-admissible heuristic function overestimates the cost of reaching the goal.
 H(n) > H'(n)
Informed Search Algorithms
• Informed search algorithms have information on the goal state, which helps in more efficient searching. This information is obtained by something called a heuristic.
• Algorithms:
– Greedy Best First Search
– A* Search
– Memory Bounded Heuristic Search
 Search Heuristics: In an informed search, a heuristic is a function that estimates how close a state is to the goal state. For example: Euclidean distance. (The lesser the distance, the closer the goal.)
Greedy Best First Search
• It expands the node that is estimated to be closest to the goal.
• It expands nodes based on f(n) = h(n).
• It is implemented using a priority queue.
Greedy Best-First Search
• The greedy best-first search algorithm always selects the path which appears best at that moment.
• It uses a heuristic function to guide the search.
• Best-first search allows us to take the advantages of both BFS and DFS.
• With the help of best-first search, at each step, we can choose the most promising node.
• In the best-first search algorithm, we expand the node which is closest to the goal node, where the closeness is estimated by a heuristic function.
• The heuristic value is an estimated cost from node n to the goal state.
Advantages:
● Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.
● This algorithm is more efficient than the BFS and DFS algorithms.
Disadvantages:
● It can get stuck in a loop, like DFS.
● This algorithm is not optimal.
Example: expand the nodes of S and put them in the CLOSED list (a code sketch of the procedure follows the trace).

Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B]
             Open [E, A], Closed [S, B, F]
Iteration 3: Open [I, G, E, A], Closed [S, B, F]
             Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S → B → F → G
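
A minimal Python sketch of greedy best-first search (illustrative names): the open list is a priority queue ordered by f(n) = h(n) alone, and expanded nodes move to the closed set, mirroring the trace above.

    import heapq
    import itertools

    def greedy_best_first_search(start, goal_test, successors, h):
        # Priority queue ordered by f(n) = h(n); the counter breaks ties so
        # that states themselves never need to be comparable.
        counter = itertools.count()
        open_list = [(h(start), next(counter), start, [start])]
        closed = set()                        # the CLOSED list of the trace
        while open_list:
            _f, _, state, path = heapq.heappop(open_list)
            if goal_test(state):
                return path
            if state in closed:
                continue
            closed.add(state)
            for _action, child in successors(state):
                if child not in closed:
                    heapq.heappush(open_list,
                                   (h(child), next(counter), child, path + [child]))
        return None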
Practice

A* Algorithm
A* is formulated for weighted graphs, which means it can find the best path involving the smallest cost in terms of distance and time. This makes A* an informed search algorithm for best-first search.
A* Algorithm
 The A* algorithm works based on heuristic methods, and this helps achieve optimality. A* is a different form of the best-first algorithm.
 When A* enters into a problem, it first calculates the cost to travel to the neighboring nodes and chooses the node with the lowest cost.
 If f(n) denotes the cost, A* chooses the node with the lowest f(n) value. Here n denotes the neighboring nodes.
 The value is calculated as follows:
 f(n) = g(n) + h(n)
 g(n) = the cost of the shortest path from the starting node to node n
 h(n) = the heuristic approximation of the cost from node n to the goal
A* Algorithm Example
 The numbers written on the edges represent the distance between the nodes, while the numbers written on the nodes represent the heuristic values.
 Let us find the most cost-effective path to reach from start state A to final state G using the A* algorithm.
Let's start with node A. Since A is the starting node, the value of g(x) for A is zero, and from the graph we get that the heuristic value of A is 11. Therefore:
 g(x) + h(x) = f(x)
 0 + 11 = 11
 Thus for A, we can write f(A) = 11.
Now from A, we can go to node B or node E, so we compute f(x) for each of them:
 A → B = 2 + 6 = 8
 A → E = 3 + 6 = 9
Since the cost for A → B is less, we move forward with this path and compute f(x) for the children of B.
Since there is no path between C and G, the heuristic cost is set to infinity or a very high value:
 A → B → C = (2 + 1) + 99 = 102
 A → B → G = (2 + 9) + 0 = 11
Here the path A → B → G has the least cost, but it is still more than the cost of A → E, so we explore the A → E path further.
A → E → D = (3 + 6) + 1 = 10
Comparing the cost of A → E → D with all the paths we have so far, this cost is the least of all, so we move forward with this path and compute f(x) for the children of D:
A → E → D → G = (3 + 6 + 1) + 0 = 10
Now, comparing all the paths that lead us to the goal, we conclude that A → E → D → G is the most cost-effective path to get from A to G.
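
A minimal Python sketch of A* (illustrative names; successors yields (step_cost, child) edges and h is the heuristic). Encoding the graph of this example, as read off the trace above, reproduces the A → E → D → G result:

    import heapq
    import itertools

    def a_star_search(start, goal_test, successors, h):
        # Expands the node with the smallest f = g + h.
        counter = itertools.count()                 # heap tie-breaker
        open_list = [(h(start), next(counter), 0, start, [start])]
        best_g = {start: 0}                         # cheapest g found per state
        while open_list:
            _f, _, g, state, path = heapq.heappop(open_list)
            if goal_test(state):
                return path, g                      # path and its total cost
            for step_cost, child in successors(state):
                g_child = g + step_cost
                if g_child < best_g.get(child, float("inf")):
                    best_g[child] = g_child
                    heapq.heappush(open_list,
                                   (g_child + h(child), next(counter),
                                    g_child, child, path + [child]))
        return None

    # Edge costs and node heuristics of the worked example above:
    graph = {"A": [(2, "B"), (3, "E")], "B": [(1, "C"), (9, "G")],
             "C": [], "E": [(6, "D")], "D": [(1, "G")], "G": []}
    hvals = {"A": 11, "B": 6, "C": 99, "E": 6, "D": 1, "G": 0}
    print(a_star_search("A", lambda s: s == "G",
                        lambda s: graph[s], lambda s: hvals[s]))
    # (['A', 'E', 'D', 'G'], 10)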
Second example (Start → Goal):
The numbers written on the edges represent the distance between the nodes.
The numbers written on the nodes represent the heuristic values.
Find the most cost-effective path to reach from start state A to final state J using the A* algorithm.
 We start with node A.
 Node B and Node F can be reached from node A.
 The A* algorithm calculates f(B) and f(F):
 f(B) = 6 + 8 = 14
 f(F) = 3 + 6 = 9
 Since f(F) < f(B), it decides to go to node F.
 Path: A → F

 Node G and Node H can be reached from node F.
 The A* algorithm calculates f(G) and f(H):
 f(G) = (3+1) + 5 = 9
 f(H) = (3+7) + 3 = 13
 Since f(G) < f(H), it decides to go to node G.
 Path: A → F → G

 Node I can be reached from node G.
 The A* algorithm calculates f(I):
 f(I) = (3+1+3) + 1 = 8
 It decides to go to node I.
 Path: A → F → G → I

 Node E, Node H and Node J can be reached from node I.
 The A* algorithm calculates f(E), f(H) and f(J):
 f(E) = (3+1+3+5) + 3 = 15
 f(H) = (3+1+3+2) + 3 = 12
 f(J) = (3+1+3+3) + 0 = 10
 Since f(J) is the least, it decides to go to node J.
 Path: A → F → G → I → J
Advantages:
● The A* search algorithm performs better than other search algorithms.
● The A* search algorithm is optimal and complete.
● This algorithm can solve very complex problems.
Disadvantages:
● It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
● The A* search algorithm has some complexity issues.
● The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so it is not practical for various large-scale problems.
Memory Bounded Heuristic Search
 Memory-bounded search strategies are necessary because AI systems often encounter constrained memory resources in real-world circumstances.
 Memory-bound search is often referred to as memory-bounded heuristic search.
 When memory resources are restricted, AI uses memory-bound search to solve problems and make judgments quickly.
 Conventional search methods, such as the A* or Dijkstra's algorithms, sometimes require unbounded memory, which may not be realistic in many circumstances.
 Memory-bound search algorithms, on the other hand, are created with the limitation of finite memory in mind.
 The goal of these algorithms is to effectively use the memory that is available while finding optimal or nearly optimal solutions.
 They do this by deciding strategically which information to keep and retrieve, as well as by using heuristic functions to direct the search process.
 The main notion underlying memory-bound search is finding a balance between the quality of the answer produced and the quantity of memory consumed.
 Even with constrained resources, these algorithms can solve problems effectively by carefully allocating memory.
Benefits of Memory-Bound Search
 Efficiency in memory-limited situations: Memory-bound search algorithms perform well when memory is limited. They don't need a lot of memory to hold the whole search space or exploration history to locate solutions.
 Real-world applicability: Memory-bound search algorithms are useful for a variety of AI applications, particularly those integrated into hardware with constrained memory. IoT devices, robots, autonomous cars, and real-time systems fall under this category.
 Optimal or near-optimal solutions: Memory-bound search looks for the optimal answer given the memory restrictions. These algorithms may often effectively provide optimal or almost ideal answers by using well-informed heuristics.
 Dynamic memory management: The memory allocation and deallocation techniques used by these algorithms are dynamic. They make decisions about what data to keep and when to remove or replace it, so memory is used effectively during the search process.
Local Search Algorithms and Optimization Problems
• Local Search Algorithms
• Hill Climbing Search
• Simulated Annealing
• Optimization Problems
• Genetic Algorithms
• Ant Colony Optimization
Hill Climbing
 It is a local search algorithm, based on optimization.
 It is used when only the GOAL is important and the path to the goal is not.
 The path is not preserved in memory, so the memory space requirement is small.
 The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain or the best solution to the problem.
 It terminates when it reaches a peak value where no neighbor has a higher value.
 The hill climbing algorithm is a technique used for optimizing mathematical problems.
Hill Climbing
● It is also called greedy local search, as it only looks to its good immediate neighbor state and not beyond that.
● A node of the hill climbing algorithm has two components: state and value.
● Hill climbing is mostly used when a good heuristic is available.
● In this algorithm, we don't need to maintain and handle a search tree or graph, as it only keeps a single current state.
Algorithm
1. Evaluate the current state as an initial state.
2. Loop until the goal state is achieved or no more operators can be applied to the current state:
   1. Apply an operation to the current state and get a new state.
   2. Compare the new state with the goal.
   3. Quit if the goal state is achieved.
   4. Evaluate the new state with the heuristic function and compare it with the current state.
   5. If the new state is closer to the goal than the current state, update the current state.
 One of the widely discussed examples of the hill climbing algorithm is the Traveling Salesman Problem, in which we need to minimize the distance traveled by the salesman. A minimal sketch of the basic loop follows.
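
A minimal Python sketch of this loop for a maximization problem (illustrative names; neighbors generates candidate states, and value plays the role of the heuristic comparison in steps 2.4-2.5):

    def hill_climbing(initial, neighbors, value):
        # Simple hill climbing: commit to the first improving neighbor
        # (greedy); stop when no neighbor has a higher value (a peak).
        current = initial
        while True:
            improved = False
            for candidate in neighbors(current):
                if value(candidate) > value(current):
                    current, improved = candidate, True
                    break
            if not improved:
                return current           # local (possibly global) maximum

    # Toy usage: maximize f(x) = -(x - 7)^2 over the integers, stepping by 1.
    best = hill_climbing(0, lambda x: (x - 1, x + 1), lambda x: -(x - 7) ** 2)
    print(best)                          # 7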
State-Space Diagram for Hill Climbing
Different regions in the state-space landscape:
Local maximum: A local maximum is a state which is better than its neighbor states, but there is also another state which is higher than it.
Global maximum: The global maximum is the best possible state in the state-space landscape. It has the highest value of the objective function.
Current state: The state in the landscape diagram where an agent is currently present.
Flat local maximum: A flat region in the landscape where all the neighbor states of the current state have the same value.
Shoulder: A plateau region which has an uphill edge.
Types of Hill Climbing Algorithms:
● Simple hill climbing
● Steepest-ascent hill climbing
● Stochastic hill climbing
Simple Hill Climbing
Simple hill climbing is the simplest way to implement a hill climbing algorithm.
 It only evaluates one neighbor node state at a time, and selects the first one which optimizes the current cost, setting it as the current state.
 It only checks one successor state; if that state is better than the current state, it moves there, else it stays in the same state.
This algorithm has the following features:
● Less time-consuming
● Less optimal solution, and the solution is not guaranteed
Key Features of Simple Hill Climbing
 Greedy approach:
 Objective: The goal of simple hill climbing is to improve the solution incrementally. It evaluates neighboring states and moves to the one that provides the best improvement in the objective function (i.e., the one with the highest value or lowest cost, depending on the problem).
 Local search:
 Search space: It only considers the immediate neighbors of the current state. It does not explore states beyond this local neighborhood.
 Move: The algorithm selects the neighbor that offers the greatest increase in value (or greatest decrease in cost) and makes that move.
 Termination:
 Stopping criterion: Simple hill climbing terminates when no neighboring state provides a better solution. At this point, it has reached a local maximum (or local minimum, depending on the problem).
 Randomness:
 Selection: Simple hill climbing selects the best neighbor deterministically, meaning it does not involve random selection of neighbors. It always moves to the neighbor with the highest (or lowest) objective function value.
Characteristics and Limitations
 Efficiency: Simple hill climbing is relatively easy to implement and can be fast for small or simple problems where the search space is not too large.
 Local optima: It is prone to getting stuck in local optima because it only considers immediate neighbors and does not have a mechanism for escaping local maxima. Once it reaches a peak in its local neighborhood, it cannot move beyond it.
 No backtracking: The algorithm does not backtrack or reconsider previous states. Once it moves to a new state, it does not return to evaluate previous states.
 Limited exploration: Since it only considers neighboring states, it may not explore the entire search space, potentially missing better solutions that are not directly reachable from the current state.
Steepest-Ascent Hill Climbing
 The steepest-ascent algorithm is a variation of the simple hill climbing algorithm.
 This algorithm examines all the neighboring nodes of the current state and selects the one neighbor node which is closest to the goal state.
 This algorithm consumes more time, as it searches multiple neighbors.
 Local search:
 Search space: It operates in the local neighborhood of the current state, aiming to find a peak or the best possible solution in that neighborhood.
 Termination: The algorithm stops when no neighboring state provides a better solution (i.e., it reaches a local maximum).
 Finding the best move:
 Comprehensive search: By evaluating all neighbors, it avoids making suboptimal moves within the neighborhood, as it always chooses the best available option.
 Computational cost: Evaluating all neighbors can be computationally expensive, especially for large search spaces.
 Handling local maxima:
 Limitations: Like other hill climbing methods, steepest-ascent hill climbing can get stuck in local maxima because it does not have a mechanism to escape these suboptimal solutions.
3. Stochastic Hill Climbing
 Stochastic hill climbing does not examine all of its neighbors before moving.
 Rather, this search algorithm selects one neighbor node at random and decides whether to choose it as the current state or examine another state.
 Key characteristics:
 Randomness: In stochastic hill climbing, the next move is chosen randomly from the neighboring states, rather than deterministically as in regular hill climbing.
 Local search: It only considers the immediate neighborhood of the current state. If all neighbors are worse or equal, it gets stuck in local optima.
 Termination: It stops when it reaches a local maximum or a predefined stopping criterion.
Problems in Hill Climbing Algorithm
1. Local maximum: A local maximum is a peak state in the landscape which is better than each of its neighboring states, but there is another state also present which is higher than the local maximum.
Solution: The backtracking technique can be a solution to the local maximum in the state-space landscape. Create a list of promising paths so that the algorithm can backtrack in the search space and explore other paths as well.

2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state contain the same value; because of this, the algorithm does not find any best direction to move. A hill-climbing search might get lost in the plateau area.
Solution: The solution for the plateau is to take big steps or very little steps while searching. Randomly select a state which is far away from the current state, so it is possible that the algorithm will find a non-plateau region.
3. Ridges: A ridge is a special form of the local maximum. It is an area which is higher than its surrounding areas but itself has a slope, and it cannot be reached in a single move.
Solution: With the use of bidirectional search, or by moving in different directions, we can improve on this problem.
Comparison of Hill Climbing Variants

Neighbour evaluation:
- Simple: evaluates a subset or a random neighbor.
- Steepest-ascent: evaluates all neighbors and selects the best one.
- Stochastic: evaluates a random neighbour.

Move selection:
- Simple: moves to a better neighbor, which might be selected randomly.
- Steepest-ascent: always moves to the neighbor with the highest value.
- Stochastic: moves to a randomly chosen neighbor, possibly worse.

Search strategy:
- Simple: greedy, local search.
- Steepest-ascent: greedy, local search with thorough evaluation of neighbors.
- Stochastic: greedy, but introduces randomness in neighbor selection.

Escaping local optima:
- Simple: prone to getting stuck in local maxima.
- Steepest-ascent: less prone but can still get stuck; evaluates all neighbors for the best one.
- Stochastic: can potentially escape local maxima due to random moves.

Computational cost:
- Simple: typically low, but depends on how neighbors are chosen.
- Steepest-ascent: higher, as it evaluates all possible neighbors.
- Stochastic: moderate, as only one neighbor is evaluated at a time.

Implementation complexity:
- Simple: simple and straightforward.
- Steepest-ascent: more complex due to the need to evaluate all neighbors.
- Stochastic: moderately simple but involves randomness.

Termination:
- Simple: stops when no better neighbors are found.
- Steepest-ascent: stops when no better neighbors are found.
- Stochastic: stops when no better neighbors are found, but may include random moves.

Global search capability:
- Simple: limited; focuses on local search.
- Steepest-ascent: limited; thorough local search but still local.
- Stochastic: better global search capability due to random neighbor selection.

Handling of plateaus:
- Simple: may struggle with plateaus (areas where neighbors are equal).
- Steepest-ascent: better handling of plateaus by evaluating all neighbors.
- Stochastic: can potentially handle plateaus better due to randomness.
Simulated Annealing
 Simulated annealing is based on metallurgical practices by which a material is heated to a high temperature and cooled.
 At high temperatures, atoms may shift unpredictably, often eliminating impurities as the material cools into a pure crystal.
 This is replicated via the simulated annealing optimization algorithm, with the energy state corresponding to the current solution.
Simulated Annealing
 Annealing: hardening metals and glass by heating them to a high temperature and then gradually cooling them.
 The process contains two steps:
1. Increase the temperature of the heat bath to a maximum value at which the solid melts.
2. Decrease the temperature of the heat bath carefully until the particles arrange themselves in the ground state of the solid. The ground state is a minimum-energy state of the solid.
 The ground state of the solid is obtained only if the maximum temperature is high enough and the cooling is done slowly.
Implementation of SA
 Implementation of SA is surprisingly simple.
 The algorithm is basically hill climbing, except that instead of picking the best move, it picks a random move.
 If the selected move improves the solution, then it is always accepted. Otherwise, the algorithm makes the move anyway with some probability less than 1.
 The probability decreases exponentially with the "badness" of the move, which is the amount ΔE by which the solution is worsened (i.e., energy is increased):
 Prob(accepting uphill move) = exp(−ΔE / (kT))

 A parameter T is also used to determine this probability.


 It is analogous to temperature in an annealing system.
 At higher values of T, uphill moves are more likely to occur.
As T tends to zero, they become more and more unlikely,
Implementation until the algorithm behaves more or less like hill-climbing.

of SA  In a typical SA optimization, T starts high and is gradually


decreased according to an “annealing schedule”. The
parameter k is some constant that relates temperature to
energy (in nature it is Boltzmann’s constant.)
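
A minimal Python sketch of this scheme (illustrative names; a geometric annealing schedule is assumed, and the acceptance rule is the exp(−ΔE / (kT)) formula above):

    import math
    import random

    def simulated_annealing(initial, neighbor, energy,
                            t_start=10.0, t_min=1e-3, cooling=0.99, k=1.0):
        # Pick a random move; always accept improvements; accept a worsening
        # (uphill-energy) move with probability exp(-deltaE / (k*T)).
        current, t = initial, t_start
        while t > t_min:
            candidate = neighbor(current)
            delta_e = energy(candidate) - energy(current)
            if delta_e <= 0 or random.random() < math.exp(-delta_e / (k * t)):
                current = candidate
            t *= cooling                 # geometric annealing schedule
        return current

    # Toy usage: minimize f(x) = (x - 3)^2 with Gaussian moves.
    best = simulated_annealing(0.0,
                               lambda x: x + random.gauss(0, 1),
                               lambda x: (x - 3) ** 2)
    print(best)                          # typically close to 3.0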
Examples
 Travelling salesman problem
 Task allocation
 Graph coloring and partitioning
 Scheduling algorithms
Advantages
 Can deal with arbitrary systems and cost functions
 Easy to code
 Finds the optimal solution (given a slow enough cooling schedule)
 Flexible, and global optimality can be derived successfully

Disadvantages
 Requires more time
 Cost function evaluation is expensive
 Difficult to determine whether the optimal solution has been achieved or not
Optimization: Genetic Algorithms and Ant Colony Optimization
 Optimization is the process of making something better.
 Optimization refers to finding the values of inputs in such a way that we get the "best" output values.
 The definition of "best" varies from problem to problem, but in mathematical terms, it refers to maximizing or minimizing one or more objective functions by varying the input parameters.
 The set of all possible solutions or values which the inputs can take makes up the search space.
 In this search space lies a point or a set of points which gives the optimal solution.
 The aim of optimization is to find that point or set of points in the search space.
Genetic Algorithm
A genetic algorithm is a search heuristic that is inspired by Charles Darwin's theory of natural evolution.
This algorithm reflects the process of natural selection, where the fittest individuals are selected for reproduction in order to produce offspring of the next generation.
Notion of Natural Selection
The process of natural selection starts with the selection of the fittest individuals from a population.
They produce offspring which inherit the characteristics of the parents and will be added to the next generation.
If parents have better fitness, their offspring will be better than the parents and have a better chance of surviving.
This process keeps on iterating, and at the end, a generation with the fittest individuals will be found.
Keywords
 Individual - Any possible solution
 Chromosome - Blueprint for an individual
 Population - Group of all individuals
 Search Space - All possible solutions to the problem
 Fitness Function - Evaluates the fitness of every individual in the population
 Termination Condition - The algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population
This notion can be applied to a search problem. We consider a set of solutions for a problem and select the set of the best ones out of them.

Five phases are considered in a genetic algorithm:
1. Initial population
2. Fitness function
3. Selection
4. Crossover
5. Mutation
Initial Population
The process begins with a set of individuals which is called a population. Each individual is a solution to the problem you want to solve.
An individual is characterized by a set of parameters (variables) known as genes. Genes are joined into a string to form a chromosome (solution).
In a genetic algorithm, the set of genes of an individual is represented using a string, in terms of an alphabet. Usually, binary values (strings of 1s and 0s) are used.
Fitness Function
The fitness function determines how fit an individual is (the ability of an individual to compete with other individuals).
It gives a fitness score to each individual.
The probability that an individual will be selected for reproduction is based on its fitness score.
Selection
The main idea of the selection phase is to select the fittest individuals and let them pass their genes to the next generation.
Pairs of individuals (parents) are selected based on their fitness scores. Individuals with high fitness have more chance to be selected for reproduction.

Selection methods:
1. Roulette Wheel Selection
2. Rank Selection
3. Tournament Selection
Roulette Wheel Selection
 Main idea: A string is selected from the mating pool with a probability proportional to its fitness.
 Probability of the i-th string being selected: p_i = f_i / (f_1 + f_2 + ... + f_N), where f_i is the fitness of string i and N is the population size.
 The chance to be selected is exactly proportional to fitness.
 A chromosome with bigger fitness will be selected more times.
Roulette Wheel Selection
Imagine a roulette wheel with sectors assigned to each individual in the population based on their fitness scores. The probability of being chosen is proportional to the size of the individual's sector. Here's a step-by-step guide (a code sketch follows the steps):
1. Calculate the total fitness of the population.
2. Calculate the relative fitness for each individual by dividing their fitness by the total fitness.
3. Calculate the cumulative probability for each individual by summing up the relative fitnesses.
4. Generate a random number between 0 and 1.
5. Select the first individual whose cumulative probability is greater than or equal to the random number.
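
A minimal Python sketch of these five steps (illustrative names; steps 2 and 3 are folded into a running cumulative sum):

    import random

    def roulette_wheel_select(population, fitness):
        # Select one individual with probability proportional to its fitness.
        total = sum(fitness(ind) for ind in population)      # step 1
        r = random.random()                                  # step 4
        cumulative = 0.0
        for ind in population:
            cumulative += fitness(ind) / total               # steps 2 and 3
            if cumulative >= r:                              # step 5
                return ind
        return population[-1]       # guard against floating-point round-off

    # The worked example below: fitness scores A=12, B=8, C=6, D=4.
    scores = {"A": 12, "B": 8, "C": 6, "D": 4}
    parent = roulette_wheel_select(list(scores), scores.get)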
Example
 Suppose we have a population of four individuals with the following fitness scores:
 Individual A: 12
 Individual B: 8
 Individual C: 6
 Individual D: 4

Calculate the total fitness:
 Total fitness = 12 + 8 + 6 + 4 = 30

Calculate the relative fitness:
 Individual A: 12 / 30 = 0.4
 Individual B: 8 / 30 = 0.2667
 Individual C: 6 / 30 = 0.2
 Individual D: 4 / 30 = 0.1333

Calculate the cumulative probability:
 Individual A: 0.4
 Individual B: 0.4 + 0.2667 = 0.6667
 Individual C: 0.6667 + 0.2 = 0.8667
 Individual D: 0.8667 + 0.1333 = 1

Generate a random number between 0 and 1:
 Random number = 0.52

Select the parent:
 In this case, the random number falls between the cumulative probabilities of Individual A (0.4) and Individual B (0.6667). So, Individual B is selected as a parent.

 Repeat the process to select another parent, and then perform crossover and mutation to create offspring.
Rank Selection
 Main idea: First rank the population; then every chromosome receives a fitness value from this ranking.
 The worst will have fitness 1, the second worst 2, etc., and the best will have fitness N.
 After this, all the chromosomes have a chance to be selected. But this method can lead to slower convergence, because the best chromosomes do not differ so much from the other ones.
 Disadvantage: the population must be sorted on each cycle.
Tournament Selection
 Binary tournament: Two individuals are randomly chosen; the fitter of the two is selected as a parent.
 Probabilistic binary tournament: Two individuals are randomly chosen; with a chance p, 0.5 < p < 1, the fitter of the two is selected as a parent.
 Larger tournaments: n individuals are randomly chosen; the fittest one is selected as a parent.
Single Point Crossover
 A cross-site is selected randomly along the length of the mated strings.
 Bits next to the cross-site are exchanged.
 If good strings are not created by crossover, they will not survive beyond the next generation.
Two-Point Crossover:
 Two random sites are chosen.
 The contents bracketed by these sites are exchanged between the two mated parents.

 Strings before mating: Parent1 10010111, Parent2 01110001
 Strings after mating: Offspring1 10010011, Offspring2 01110101
Uniform Crossover:
 Bits are randomly copied from the first or from the second parent.
 A random mask is generated.
 The mask determines which bits are copied from one parent and which from the other parent.
 Also called three-parent crossover.
 Mask: 0110011000 (randomly generated)
 Parent 1: 1010001110
 Parent 2: 0011010010
 Offspring 1: 0011001010
 Offspring 2: 1010010110
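
A minimal Python sketch of single-point and uniform crossover on bit strings (illustrative names); with the slide's fixed mask it reproduces the offspring above:

    import random

    def single_point_crossover(p1, p2):
        # Exchange the tails of the two parents after a random cross-site.
        point = random.randint(1, len(p1) - 1)
        return p1[:point] + p2[point:], p2[:point] + p1[point:]

    def uniform_crossover(p1, p2, mask=None):
        # Mask bit 1: offspring 1 copies from parent 1; bit 0: from parent 2
        # (offspring 2 takes the complementary choice).
        if mask is None:
            mask = [random.randint(0, 1) for _ in p1]
        child1 = "".join(a if m else b for m, a, b in zip(mask, p1, p2))
        child2 = "".join(b if m else a for m, a, b in zip(mask, p1, p2))
        return child1, child2

    # Reproducing the slide's example with its fixed mask:
    o1, o2 = uniform_crossover("1010001110", "0011010010",
                               mask=[int(b) for b in "0110011000"])
    print(o1, o2)                        # 0011001010 1010010110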
Mutation
 Mutation restores lost genetic material.
 Mutation of a bit involves flipping it, changing 0 to 1 and vice versa.

 Mutated offspring:
 Offspring1 1011011111 → 1011001111
 Offspring2 1010000000 → 1000000000
Tournament Selection
In K-way tournament selection, we select K individuals from the population at random and select the best out of these to become a parent.
The same process is repeated for selecting the next parent.
Tournament selection is also extremely popular in the literature, as it can even work with negative fitness values.
Random Selection
In this strategy we randomly select parents from the existing population.
There is no selection pressure towards fitter individuals, and therefore this strategy is usually avoided.
Crossover
Crossover is the most significant phase in a genetic algorithm. For each pair of parents to be mated, a crossover point is chosen at random from within the genes.
For example, consider the crossover point to be 3, as shown below.
Offspring are created by exchanging the genes of the parents among themselves until the crossover point is reached.
The new offspring are added to the population.
 Choose a random point on the two parents.
 Split the parents at this crossover point.
 Create children by exchanging tails.
SGA Operators: 1-Point Crossover

n-Point Crossover
 Choose n random crossover points.
 Split along those points.
Mutation
In certain new offspring formed, some of their genes can be subjected to a mutation with a low random probability. This implies that some of the bits in the bit string can be flipped.
Mutation occurs to maintain diversity within the population and prevent premature convergence.
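
A minimal Python sketch of bit-flip mutation (illustrative names; each bit flips independently with a small probability pm):

    import random

    def mutate(chromosome, pm=0.01):
        # Flip each bit independently with a small probability pm.
        return "".join(
            ("1" if bit == "0" else "0") if random.random() < pm else bit
            for bit in chromosome)

    print(mutate("1011011111", pm=0.1))  # e.g. 1011001111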
Termination
The algorithm terminates if the population has converged (does not produce offspring which are significantly different from the previous generation). Then it is said that the genetic algorithm has provided a set of solutions to our problem.
 Common terminating conditions are:
 A solution is found that satisfies minimum criteria
 A fixed number of generations is reached
 The allocated budget (computation time/money) is reached
 The highest-ranking solution's fitness has reached a plateau such that successive iterations no longer produce better results
 Manual inspection
Pseudocode
START
Generate the initial population
Compute fitness
REPEAT
    Selection
    Crossover
    Mutation
    Compute fitness
UNTIL population has converged
STOP
A simple genetic algorithm is as follows (a Python sketch follows the steps):

#1) Start with the population created randomly.
#2) Calculate the fitness function of each chromosome.
#3) Repeat the steps till n offspring are created. The offspring are created as shown below:
 Select a pair of chromosomes from the population.
 Crossover the pair with probability pc to form offspring.
 Mutate the offspring with probability pm.
#4) Replace the original population with the new population and go to step 2.
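
A minimal Python sketch of this simple GA (illustrative names; select, crossover, and mutate can be the roulette-wheel, crossover, and mutation sketches shown earlier):

    import random

    def genetic_algorithm(pop, fitness, select, crossover, mutate,
                          pc=0.7, pm=0.01, generations=100):
        # Steps 2-4 above: select parents, cross over with probability pc,
        # mutate with probability pm, then replace the whole population.
        for _ in range(generations):
            new_pop = []
            while len(new_pop) < len(pop):
                p1 = select(pop, fitness)
                p2 = select(pop, fitness)
                c1, c2 = crossover(p1, p2) if random.random() < pc else (p1, p2)
                new_pop += [mutate(c1, pm), mutate(c2, pm)]
            pop = new_pop[:len(pop)]     # generational replacement
        return max(pop, key=fitness)     # best individual found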
Let's see the steps followed in this iteration process. The initial population of chromosomes is generated. The initial population should contain enough genes so that any solution can be generated. The first pool of the population is generated randomly.
 Selection: The best set of genes is selected depending on the fitness function. The string with the best fitness function is chosen.
 Reproduction: New offspring are generated by recombination and mutation.
 Evaluation: The newly generated chromosomes are evaluated for their fitness.
 Replacement: In this step, the old population is replaced with the newly generated population.
Applications of GA
 Recurrent neural networks
 Filtering and signal processing
 Learning fuzzy rule bases
 Mutation testing
 Code breaking
 Uses of genetic programming can lie in stock market prediction, advanced mathematics, and military applications.
 Genetic algorithms can be applied to virtually any problem that has a large search space.
 The military uses GAs to evolve equations to differentiate between different radar returns.
Advantages
 Does not require any derivative information, which may not be available for many real-world problems.
 Performs faster and more efficiently than traditional approaches.
 Instead of providing a single solution, gives a list of good solutions.
 The process always gives an answer, which gets better and better over time.
 Gives near-optimal solutions to problems having a large search space with a large number of parameters.
Disadvantages
 For simple problems where derivative information is available, GA is not suitable.
 Repeated calculation of fitness values is computationally expensive.
 There is no guarantee of optimality; it provides an optimal or near-optimal solution.
Example
Step 1:
 Encoding technique: binary encoding
 Selection operator: roulette wheel selection
 Crossover operator: single-point crossover
Step 2:
 Population size (n) = 4
Step 3:
 Initial population (x values) = 13, 24, 8, 19
Step 4:
 We can see that the maximum f(x) value has increased from 576 to 729.
Ant Colony Optimization
 Ant Colony Optimization (ACO) is a metaheuristic algorithm inspired by the foraging behavior of ant colonies in nature. It is used in artificial intelligence and computer science to solve complex optimization problems.
 Key concepts:
 Pheromone trails: Virtual ants deposit pheromones on paths they traverse.
 Probabilistic decision-making: Ants choose paths based on pheromone levels and heuristic information.
 Pheromone evaporation: Trails gradually fade, allowing exploration of new solutions.
 Applications:
 Routing problems (e.g., Traveling Salesman Problem)
 Scheduling and resource allocation
 Network optimization
 Image processing
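
A minimal Python sketch of the probabilistic decision rule (illustrative names; this follows the standard ACO transition rule, in which the attractiveness of city j from city i is pheromone(i,j)^alpha * (1/distance(i,j))^beta):

    import random

    def choose_next_city(current, unvisited, pheromone, distance,
                         alpha=1.0, beta=2.0):
        # Attractiveness of city j: pheromone^alpha * (1/distance)^beta.
        weights = [pheromone[current][j] ** alpha
                   * (1.0 / distance[current][j]) ** beta
                   for j in unvisited]
        r = random.uniform(0, sum(weights))
        cumulative = 0.0
        for j, w in zip(unvisited, weights):
            cumulative += w
            if cumulative >= r:
                return j
        return unvisited[-1]

    # With equal pheromone, the closer city is chosen more often:
    tau = {0: {1: 1.0, 2: 1.0}}
    dist = {0: {1: 1.0, 2: 2.0}}
    picks = [choose_next_city(0, [1, 2], tau, dist) for _ in range(1000)]
    print(picks.count(1) > picks.count(2))   # True with high probability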
Flowchart
Advantages and Limitations
 Advantages:
 Effective for combinatorial optimization problems
 Inherently parallel
 Adaptable to dynamic environments
 Limitations:
 Convergence time can be uncertain
 Parameter tuning may be necessary for optimal performance
Adversarial Search
 Game Playing
 The Minimax Algorithm
 Alpha-Beta Pruning
Adversarial Search
• Adversarial search is a search where we examine the problems which arise when we try to plan ahead in a world where other agents are planning against us.
• In previous topics, we have studied search strategies which are only associated with a single agent that aims to find a solution, often expressed in the form of a sequence of actions.
• But there might be some situations where more than one agent is searching for a solution in the same search space; this situation usually occurs in game playing.
• An environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the other agents and plays against them.
• Each agent needs to consider the actions of other agents and the effect of those actions on its own performance.
• So, searches in which two or more players with conflicting goals are trying to explore the same search space for the solution are called adversarial searches, often known as games.
• Games are modeled as a search problem with a heuristic evaluation function; these are the two main factors which help to model and solve games in AI.
Game Playing
• Game playing is an important domain of artificial intelligence.
• Games don't require much knowledge; the only knowledge we need to provide is the rules, the legal moves, and the conditions of winning or losing the game.
• Both players try to win the game, so both of them try to make the best move possible at each turn.
• Searching techniques like BFS (Breadth First Search) are not suitable for this, as the branching factor is very high, so searching will take a lot of time.
• So, we need other search procedures that improve:
– the generate procedure, so that only good moves are generated;
– the test procedure, so that the best move can be explored first.
Components of Game Playing
• Initial state: This defines the initial configuration of the game and identifies the first player to move.
• Successor function: This identifies the possible states that can be achieved from the current state. This function returns a list of (move, state) pairs, each indicating a legal move and the resulting state.
• Goal test: This checks whether a given state is a goal state or not. States where the game ends are called terminal states.
• Path cost / utility / payoff function: This gives a numeric value for the terminal states. In chess, the outcome is win, loss or draw, with values +1, -1, or 0. Some games have a wider range of possible outcomes.
Characteristics of Game Playing
• Unpredictable opponent: Generally we cannot predict the behavior of the opponent. Thus we need to find a solution which is a strategy specifying a move for every possible opponent move or every possible state.
• Time constraints: Every game has time constraints. Thus it may be infeasible to find the best move in this time.
Game Trees
• Games are represented in the form of trees wherein nodes represent all the possible states of a game and edges represent moves between them.
• The initial state of the game is represented by the root, and terminal states by the leaves of the tree.
• In a normal search problem, the optimal solution would be a sequence of moves leading to a goal state, that is, a win.
Minimax Trees
● The minimax algorithm is a recursive or backtracking algorithm which is used in decision-making and game theory.
● It provides an optimal move for the player, assuming that the opponent is also playing optimally.
● The minimax algorithm uses recursion to search through the game tree.
● The minimax algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various two-player games.
● This algorithm computes the minimax decision for the current state.
● In this algorithm two players play the game; one is called MAX and the other is called MIN.
● Both players fight it out, with each trying to maximize their own benefit while giving the opponent the minimum benefit.
● Both players of the game are opponents of each other: MAX will select the maximized value and MIN will select the minimized value.
● The minimax algorithm performs a depth-first search for the exploration of the complete game tree.
● The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backs the values up through the tree as the recursion unwinds.
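
A minimal recursive Python sketch of minimax (illustrative names; values are backed up from the terminal nodes as the recursion unwinds):

    def minimax(state, is_max, successors, is_terminal, utility):
        # Depth-first search down to the terminal nodes; MAX maximizes
        # the backed-up value, MIN minimizes it.
        if is_terminal(state):
            return utility(state)        # e.g. +1 win, -1 loss, 0 draw
        values = [minimax(child, not is_max, successors, is_terminal, utility)
                  for child in successors(state)]
        return max(values) if is_max else min(values)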
Limitation
• The main drawback of the minimax algorithm is that it gets really slow for complex games such as chess, Go, etc.
• These games have a huge branching factor, and the player has lots of choices to decide among.
• This limitation of the minimax algorithm can be improved upon with alpha-beta pruning.
Alpha-Beta Pruning
● Alpha-beta pruning is a modified version of the minimax algorithm. It is
an optimization technique for the minimax algorithm.
● As we have seen in the minimax search algorithm that the number of
game states it has to examine are exponential in depth of the tree. Since
we cannot eliminate the exponent, but we can cut it to half. Hence there
is a technique by which without checking each node of the game tree we
can compute the correct minimax decision, and this technique is called
pruning. This involves two threshold parameter Alpha and beta for future
expansion, so it is called alpha-beta pruning. It is also called as Alpha-
Beta Algorithm.
● Alpha-beta pruning can be applied at any depth of a tree, and sometimes
it not only prune the tree leaves but also entire sub-tree.
● The two-parameter can be defined as:
a. Alpha: The best (highest-value) choice we have found so far at any
point along the path of Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any
point along the path of Minimizer. The initial value of beta is +∞.
● The Alpha-beta pruning to a standard minimax algorithm returns the
same move as the standard algorithm does, but it removes all the nodes
which are not really affecting the final decision but making algorithm
slow. Hence by pruning these nodes, it makes the algorithm fast.
Condition for alpha-beta pruning:
The main condition required for alpha-beta pruning is:
α >= β
Key points about alpha-beta pruning:
● The MAX player will only update the value of alpha.
● The MIN player will only update the value of beta.
● While backtracking the tree, the node values will be passed to upper nodes instead of the values of alpha and beta.
● We will only pass the alpha and beta values to the child nodes.
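
A minimal Python sketch of minimax with alpha-beta pruning (illustrative names): it returns the same value as plain minimax, but cuts off the remaining children of a node as soon as α >= β.

    def alphabeta(state, is_max, successors, is_terminal, utility,
                  alpha=float("-inf"), beta=float("inf")):
        # Same value as plain minimax, but a node's remaining children
        # are pruned as soon as alpha >= beta.
        if is_terminal(state):
            return utility(state)
        if is_max:
            value = float("-inf")
            for child in successors(state):
                value = max(value, alphabeta(child, False, successors,
                                             is_terminal, utility, alpha, beta))
                alpha = max(alpha, value)    # MAX only updates alpha
                if alpha >= beta:
                    break                    # prune the rest of this subtree
            return value
        value = float("inf")
        for child in successors(state):
            value = min(value, alphabeta(child, True, successors,
                                         is_terminal, utility, alpha, beta))
            beta = min(beta, value)          # MIN only updates beta
            if alpha >= beta:
                break
        return value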
Thank you!