
Module 2: Artificial Intelligence
Dr. Jalesh Kumar and Mr. Devaraj F V, CSE

ARTIFICIAL INTELLIGENCE

• We have some actions that can change the state of the world
  – The change induced by an action is perfectly predictable
• We try to come up with a sequence of actions that will lead us to a
  goal state
  – We may want to minimize the number of actions
  – More generally, we may want to minimize the total cost of the actions
• We do not need to execute the actions in real life while searching for a
  solution!
  – Everything is perfectly predictable anyway
Solving Problems by Searching
• A reflex agent is simple
  – it bases its actions on a direct mapping from states to actions
  – but it cannot work well in environments
    • in which this mapping would be too large to store
    • or would take too long to learn
• Hence, a goal-based agent is used

Problem Solving Agent


- The agent must decide what actions and states to consider on the way to the goal.
- The process of looking for a sequence of actions
  that reaches the goal is called search.

Well-defined problems and solutions


A problem is defined by 5 components (sketched in code below):
• Initial state
• Actions
• Transition model
  (successor function)
• Goal test
• Path cost
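
As a concrete illustration, here is a minimal Python sketch of these five components; the class name Problem and the method names (actions, result, is_goal, step_cost) are illustrative assumptions, not part of the slides.

    # Minimal sketch of the five components (class and method names are illustrative).
    class Problem:
        def __init__(self, initial_state, goal_state):
            self.initial_state = initial_state    # Initial state
            self.goal_state = goal_state

        def actions(self, state):
            # Actions available in the given state.
            raise NotImplementedError

        def result(self, state, action):
            # Transition model (successor function): state reached by doing action in state.
            raise NotImplementedError

        def is_goal(self, state):
            # Goal test.
            return state == self.goal_state

        def step_cost(self, state, action, next_state):
            # Path cost is the sum of step costs; by default each step costs 1.
            return 1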

Example problems
• Toy problems
– those intended to illustrate or exercise
various problem-solving methods
– E.g., puzzle, chess, etc.
• Real-world problems
– tend to be more difficult, and their solutions
are ones people actually care about
– E.g., Design, planning, etc.

Toy problems

• Example 1: vacuum world



Example 2: The 8-puzzle

• States:
– a state description specifies the location
of each of the eight tiles and blank in one
of the nine squares
• Initial State:
– Any state in state space
• Successor function:
– the blank moves Left, Right, Up, or Down
• Goal test:
– current state matches the goal
configuration
• Path cost:
– each step costs 1, so the path cost is just
the length of the path
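
A minimal sketch of the successor function for this formulation, assuming a state is represented as a tuple of nine entries with 0 for the blank (the representation and the function name are assumptions):

    # 8-puzzle successor sketch: a state is a tuple of 9 entries, 0 marks the blank.
    MOVES = {'Left': -1, 'Right': 1, 'Up': -3, 'Down': 3}

    def successors(state):
        # Yield (action, next_state) pairs obtained by sliding the blank.
        blank = state.index(0)
        row, col = divmod(blank, 3)
        for action, delta in MOVES.items():
            # Skip moves that would slide the blank off the board.
            if (action == 'Left' and col == 0) or (action == 'Right' and col == 2):
                continue
            if (action == 'Up' and row == 0) or (action == 'Down' and row == 2):
                continue
            new_state = list(state)
            target = blank + delta
            new_state[blank], new_state[target] = new_state[target], new_state[blank]
            yield action, tuple(new_state)

    # Example: the two legal moves from the goal configuration.
    # list(successors((1, 2, 3, 4, 5, 6, 7, 8, 0)))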

Example 3: The 8-queens

Example 4:

Measuring problem-solving performance

• Completeness: is the algorithm guaranteed to find a solution when one exists?
• Optimality: does the strategy find the optimal (lowest-cost) solution?
• Time complexity: how long does it take to find a solution?
• Space complexity: how much memory is needed to perform the search?

Uninformed search
• Given a state, we only know whether it is a goal state or not
• We cannot say that one non-goal state looks better than another non-goal
  state
• We can only traverse the state space blindly in the hope of somehow hitting
  a goal state at some point
  – Also called blind search
  – Blind does not imply unsystematic!

Breadth-first search

• Nodes are expanded in the same order in which they are generated
  – The fringe can be maintained as a First-In-First-Out (FIFO) queue
• BFS is complete: if a solution exists, one will be found
• BFS finds a shallowest solution
  – Not necessarily an optimal solution
• If every node has b successors (the branching factor) and the first
  solution is at depth d, then the fringe size will be at least b^d at some
  point
  – This much space (and time) is required

Algorithm:
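
A minimal Python sketch of the algorithm; the problem interface (initial_state, actions, result, is_goal) is the illustrative one assumed in the earlier problem-definition sketch.

    from collections import deque

    def breadth_first_search(problem):
        # Expand nodes in the order they are generated, keeping the fringe as a FIFO queue.
        start = problem.initial_state
        if problem.is_goal(start):
            return [start]
        fringe = deque([[start]])          # FIFO queue of paths
        explored = {start}
        while fringe:
            path = fringe.popleft()
            state = path[-1]
            for action in problem.actions(state):
                child = problem.result(state, action)
                if child in explored:
                    continue
                if problem.is_goal(child):
                    return path + [child]  # a shallowest solution
                explored.add(child)
                fringe.append(path + [child])
        return None                        # no solution exists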

Uniform Cost Search


When all step costs are equal, breadth-first search is optimal because
it always expands the shallowest unexpanded node. By a simple
extension, we can find an algorithm that is optimal with any step-cost
function. Instead of expanding the shallowest node, uniform-cost
search expands the node n with the lowest path cost g(n). This is done
by storing the frontier as a priority queue ordered by g.
(a worked problem is given at the end of the module)
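
A minimal sketch of uniform-cost search along these lines; the frontier is a priority queue ordered by g(n), and the problem interface (including step_cost) is the illustrative one assumed earlier.

    import heapq
    import itertools

    def uniform_cost_search(problem):
        # Expand the node n with the lowest path cost g(n); the frontier is a priority queue ordered by g.
        counter = itertools.count()                   # tie-breaker so states never get compared directly
        start = problem.initial_state
        frontier = [(0, next(counter), start, [start])]
        best_g = {start: 0}
        while frontier:
            g, _, state, path = heapq.heappop(frontier)
            if problem.is_goal(state):                # goal test on expansion, which gives optimality
                return g, path
            if g > best_g.get(state, float('inf')):
                continue                              # stale queue entry
            for action in problem.actions(state):
                child = problem.result(state, action)
                new_g = g + problem.step_cost(state, action, child)
                if new_g < best_g.get(child, float('inf')):
                    best_g[child] = new_g
                    heapq.heappush(frontier, (new_g, next(counter), child, path + [child]))
        return None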

Depth-first search
• The fringe can be maintained as a Last-In-First-Out (LIFO) queue (a
  stack)
• Also easy to implement recursively:
  DFS(node)
    If goal(node), return solution(node);
    For each successor of node
      Return DFS(successor) unless it is failure;
    Return failure;
• Not complete (might cycle through non-goal states)
• If a solution is found, it is generally not optimal/shallowest
• If every node has b successors (the branching factor) and we search to at
  most depth m, the fringe is at most b·m
  – Much better space requirement
  – Actually, we generally do not even need to store all of the fringe
• Time: we still need to look at every node
  – b^m + b^(m-1) + … + 1 (for b > 1, this is O(b^m))
  – Inevitable for uninformed search methods…
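
The pseudocode above, rendered as a minimal Python sketch under the same assumed problem interface (note that, as stated, it has no cycle checking and so is not complete):

    def dfs(problem, state, path=None):
        # Recursive DFS matching the pseudocode above; no cycle check, so it may loop
        # forever on state spaces with cycles (hence "not complete").
        if path is None:
            path = [state]
        if problem.is_goal(state):
            return path                   # solution(node)
        for action in problem.actions(state):
            child = problem.result(state, action)
            result = dfs(problem, child, path + [child])
            if result is not None:        # "unless it is failure"
                return result
        return None                       # failure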



Iterative deepening DFS:


– Call depth-limited DFS with depth limit 0;
– If unsuccessful, call it with depth limit 1;
– If unsuccessful, call it with depth limit 2;
– Etc.
• Complete, and finds a shallowest solution
• Has the space requirements of DFS
• May seem wasteful time-wise because it replicates effort
  – Really not that wasteful, because almost all of the effort is at the
    deepest level
  – d·b + (d-1)·b^2 + (d-2)·b^3 + … + 1·b^d is O(b^d) for b > 1
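
A minimal sketch of iterative deepening built on a recursive depth-limited helper (depth-limited search itself is described in the next section); the helper names and the max_depth cap are illustrative assumptions.

    def depth_limited(problem, state, limit, path=None):
        # DFS that treats nodes at the depth limit as if they have no successors.
        if path is None:
            path = [state]
        if problem.is_goal(state):
            return path
        if limit == 0:
            return None                   # cutoff reached
        for action in problem.actions(state):
            child = problem.result(state, action)
            result = depth_limited(problem, child, limit - 1, path + [child])
            if result is not None:
                return result
        return None

    def iterative_deepening(problem, max_depth=50):
        # Call depth-limited DFS with limits 0, 1, 2, ... until a solution is found.
        for limit in range(max_depth + 1):
            result = depth_limited(problem, problem.initial_state, limit)
            if result is not None:
                return result
        return None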

Depth-limited search
The embarrassing failure of depth-first search in infinite state spaces
can be alleviated by supplying depth-first search with a predetermined
depth limit “L”. That is, nodes at depth “L” are treated as if they have
no successors. This approach is called depth-limited search.
Sometimes, depth limits can be based on knowledge of the problem. For
example, on the map of Romania there are 20 cities. Therefore, we know
that if there is a solution, it must be of length 19 at the longest, so
L = 19 is a possible choice.
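
Using the depth_limited helper sketched in the iterative-deepening section above, the Romania example would correspond to a single call with limit L = 19 (romania_problem is a hypothetical problem instance, shown only for illustration):

    # Hypothetical usage on the Romania map: a single depth-limited call with L = 19.
    # solution = depth_limited(romania_problem, romania_problem.initial_state, limit=19)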

Bidirectional search
• Even better: search from both the start and the goal, in parallel!

• If the shallowest solution has depth d and the branching factor is b on
  both sides, only O(b^(d/2)) nodes need to be explored!
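
A minimal sketch of bidirectional breadth-first search, assuming actions are reversible so one neighbors function can expand in both directions (the function and parameter names are assumptions):

    from collections import deque

    def bidirectional_search(neighbors, start, goal):
        # Bidirectional BFS sketch: neighbors(state) yields successor states, and actions are
        # assumed reversible so the same function can expand backward from the goal.
        if start == goal:
            return [start]
        fwd_parents, bwd_parents = {start: None}, {goal: None}
        fwd_fringe, bwd_fringe = deque([start]), deque([goal])

        def expand(fringe, parents, other_parents):
            # Expand one node; return a meeting state if the two searches touch.
            state = fringe.popleft()
            for child in neighbors(state):
                if child not in parents:
                    parents[child] = state
                    fringe.append(child)
                    if child in other_parents:
                        return child
            return None

        while fwd_fringe and bwd_fringe:
            meet = expand(fwd_fringe, fwd_parents, bwd_parents)
            if meet is None:
                meet = expand(bwd_fringe, bwd_parents, fwd_parents)
            if meet is not None:
                # Stitch the two half-paths together at the meeting state.
                forward, s = [], meet
                while s is not None:
                    forward.append(s)
                    s = fwd_parents[s]
                forward.reverse()
                backward, s = [], bwd_parents[meet]
                while s is not None:
                    backward.append(s)
                    s = bwd_parents[s]
                return forward + backward
        return None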

Uniform cost problem:


