Unit-2 Problem Solving by Searching

This section explains why searching is crucial in AI.


●​ Problem-Solving Agents:
○​ Definition: An agent that decides what to do by searching for a sequence of
actions that leads to a desired state. It differs from reactive agents in that it plans
ahead.
○​ Key Idea: The agent needs a model of the world to predict the outcome of its
actions.
○​ Components:
■​ Goal Formulation: Defining the target state(s) the agent wants to achieve.
■​ Problem Formulation: Deciding what actions and states to consider to
achieve the goal.
■​ Search: The process of looking for a sequence of actions (a path) that leads
from the initial state to a goal state.
■​ Execution: Performing the actions found by the search.
●​ Well-defined Problem and Solutions:
○​ Definition of a "Well-defined Problem": A problem can be precisely defined by its
five components:
1.​ Initial State: The starting point of the agent.
2.​ Actions (or Successor Function): A description of what the agent can do,
which generates possible successor states from any given state.
3.​ Transition Model: A description of what each action does (the result of
performing an action in a state). Often implicitly defined by the successor
function.
4.​ Goal Test: A way to determine if a given state is a goal state.
5.​ Path Cost: A function that assigns a numerical cost to each path (sequence
of actions). This is often cumulative.
○​ Definition of a "Solution": A sequence of actions (path) from the initial state to a
goal state.
○​ Optimal Solution: A solution that has the lowest path cost among all possible
solutions.
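The five components map naturally onto a small interface. Below is a minimal sketch in Python (the class and method names are illustrative, not taken from any particular library); the example problems later in this unit can be cast as subclasses of it:

```python
# A minimal sketch of a well-defined problem; all names are illustrative.
class Problem:
    def __init__(self, initial_state, goal_state=None):
        self.initial_state = initial_state          # 1. Initial State
        self.goal_state = goal_state

    def actions(self, state):
        """2. Actions: the actions applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """3. Transition Model: the state reached by doing `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """4. Goal Test: is `state` a goal state?"""
        return state == self.goal_state

    def step_cost(self, state, action, next_state):
        """5. Path Cost: cost of one step; a path's cost is the sum of its steps."""
        return 1
```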
●​ Formulating Problems:
○​ Purpose: Transforming a real-world scenario into the abstract structure of a
well-defined problem.
○​ Key Considerations:
■​ States: What information is essential to describe the current situation? (e.g.,
in a navigation problem, the current city).
■​ Actions: What moves are allowed? (e.g., driving between cities).
■​ Goal: What is the desired outcome? (e.g., reaching the destination city).
■​ Cost: What aspects are we trying to minimize? (e.g., distance, time, fuel
consumption).
○​ Trade-offs: Choosing the right level of abstraction. Too much detail makes the
search space huge; too little might omit critical information.
●​ Example Problems (Brief Mention):
○​ Vacuum World: Simple, classic example. States: dirt locations, vacuum location.
Actions: move left, right, suck. Goal: all dirt removed.
○​ 8-Puzzle / 15-Puzzle: Sliding tile puzzle. States: arrangement of tiles. Actions:
move blank tile. Goal: sorted tiles.
○​ Missionaries and Cannibals: Logic puzzle. States: number of
missionaries/cannibals on each side of the river. Actions: move people across.
Goal: everyone across the river without cannibals ever outnumbering missionaries on either bank.
○​ Rubik's Cube: Complex state space.
○​ Chess/Go: Game playing (often involves searching).
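As a concrete illustration, the two-cell vacuum world above fits the five-component Problem sketch from earlier. The state encoding below, a tuple (vacuum_location, dirt_left, dirt_right), is one of several reasonable choices:

```python
# Two-cell vacuum world. State: (vacuum_location, dirt_left, dirt_right).
class VacuumWorld(Problem):
    def __init__(self):
        super().__init__(initial_state=('L', True, True))

    def actions(self, state):
        return ['Left', 'Right', 'Suck']

    def result(self, state, action):
        loc, dirt_l, dirt_r = state
        if action == 'Left':
            return ('L', dirt_l, dirt_r)
        if action == 'Right':
            return ('R', dirt_l, dirt_r)
        # 'Suck' removes any dirt at the current location
        return (loc, dirt_l and loc != 'L', dirt_r and loc != 'R')

    def goal_test(self, state):
        _, dirt_l, dirt_r = state
        return not dirt_l and not dirt_r            # goal: no dirt anywhere
```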
●​ Toy Problems vs. Real-world Problems:
○​ Toy Problems: Simplified, abstract, used to illustrate search algorithms. Small state
space, easy to define.
○​ Real-world Problems: Complex, large or infinite state space, often involve
uncertainty, partial observability, and multiple agents. Require more sophisticated
representations and search techniques. (e.g., robot navigation, airline scheduling,
bioinformatics).

Searching for Solutions


This section delves into the mechanics of how search algorithms work.
●​ Concept:
○​ Search Tree: A data structure that represents the possible sequences of actions. Nodes correspond to states; edges correspond to actions.
○​ Node Expansion: Generating successor nodes from a current node by applying all
possible actions.
○​ Frontier (or Open List): The set of all leaf nodes available for expansion at any
given point in the search.
○​ Explored Set (or Closed List): The set of all nodes that have already been
expanded, to avoid cycles and redundant work.
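These pieces assemble into the generic graph-search skeleton sketched below; the data structure chosen for the frontier is exactly what distinguishes the strategies in the following sections (FIFO queue, LIFO stack, priority queue). The sketch reuses the Problem interface from earlier:

```python
from collections import deque

def graph_search(problem):
    """Generic graph search: returns a list of actions to a goal, or None."""
    frontier = deque([(problem.initial_state, [])])   # (state, path of actions)
    explored = set()                                  # closed list
    while frontier:
        state, path = frontier.popleft()              # FIFO here; see BFS below
        if problem.goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)                           # mark as expanded
        for action in problem.actions(state):         # node expansion
            frontier.append((problem.result(state, action), path + [action]))
    return None
```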

Uninformed Search Strategies (Blind Search)


These strategies use no information about the closeness of a state to the goal. They explore the
search space systematically.
●​ Concept of Breadth-First Search (BFS):
○​ Mechanism: Expands the shallowest unexpanded node first. Uses a FIFO
(First-In, First-Out) queue for the frontier.
○​ Properties:
■​ Completeness: Yes, if the branching factor b is finite and the shallowest goal lies at a finite depth d.
■​ Optimality: Yes, if path costs are uniform (e.g., each step costs 1). If costs
vary, it finds the shallowest path, not necessarily the cheapest.
■​ Time Complexity: O(b^d), where b is the branching factor and d is the depth
of the shallowest solution. (Can be very large).
■​ Space Complexity: O(b^d) (must store all nodes in memory, which is a
major drawback for deep solutions).
○​ Analogy: Exploring a maze by checking all passages at the current depth before
moving deeper.
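With a FIFO frontier, the graph_search skeleton above is exactly BFS. Run on the VacuumWorld class from earlier, it returns a shallowest solution:

```python
# BFS on the two-cell vacuum world (graph_search and VacuumWorld defined above).
print(graph_search(VacuumWorld()))   # ['Suck', 'Right', 'Suck'] - 3 steps, shallowest
```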
●​ Concept of Depth-First Search (DFS):
○​ Mechanism: Expands the deepest unexpanded node first. Uses a LIFO (Last-In,
First-Out) stack for the frontier.
○​ Properties:
■​ Completeness: No in general; it can descend forever down an infinite or cyclic path. (The graph-search version with an explored set is complete in finite state spaces.)
■​ Optimality: No, often finds a suboptimal solution if multiple paths exist.
■​ Time Complexity: O(b^m), where m is the maximum depth of the search
space (can be infinite).
■​ Space Complexity: O(bm) (much more space-efficient than BFS for deep
searches, as it only stores the current path).
○​ Analogy: Exploring a maze by going as deep as possible down one path, then
backtracking if it hits a dead end.
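Swapping the FIFO queue for a LIFO stack turns the same skeleton into DFS. The sketch below also keeps an explored set and a depth bound to guard against cycles and infinite paths; note that storing the explored set sacrifices the O(bm) space advantage, which only the pure tree-search variant enjoys:

```python
def depth_first_search(problem, max_depth=50):
    """DFS with an explicit LIFO stack; max_depth guards against infinite paths."""
    frontier = [(problem.initial_state, [])]          # Python list used as a stack
    explored = set()
    while frontier:
        state, path = frontier.pop()                  # LIFO: deepest node first
        if problem.goal_test(state):
            return path
        if state in explored or len(path) >= max_depth:
            continue
        explored.add(state)
        for action in problem.actions(state):
            frontier.append((problem.result(state, action), path + [action]))
    return None
```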
●​ Depth-Limited Search (DLS):
○​ Mechanism: DFS with a predefined depth limit L. Nodes at depth L are treated as if
they have no successors.
○​ Properties:
■​ Completeness: Yes, if a solution exists within the depth limit L. No, if the
shallowest solution is beyond L.
■​ Optimality: No.
■​ Time Complexity: O(b^L).
■​ Space Complexity: O(bL).
○​ Drawback: The choice of L is crucial and often arbitrary.
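A recursive sketch of DLS over the same Problem interface; nodes at the depth limit are simply treated as leaves (cycle checking is omitted for brevity):

```python
def depth_limited_search(problem, state, limit, path=()):
    """DFS cut off at depth `limit`; returns a list of actions or None."""
    if problem.goal_test(state):
        return list(path)
    if limit == 0:
        return None                                   # treat the node as a leaf
    for action in problem.actions(state):
        found = depth_limited_search(problem, problem.result(state, action),
                                     limit - 1, path + (action,))
        if found is not None:
            return found
    return None
```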
●​ Iterative Deepening Depth-First Search (IDDFS):
○​ Mechanism: Repeatedly performs DLS, increasing the depth limit L by 1 in each
iteration (L=0, L=1, L=2, ...).
○​ Properties:
■​ Completeness: Yes (like BFS).
■​ Optimality: Yes, if path costs are uniform (like BFS).
■​ Time Complexity: O(b^d) (similar to BFS; despite re-exploring shallow nodes, most of the work is done at the deepest level).
■​ Space Complexity: O(bd) (similar to DFS, making it very space-efficient).
○​ Advantage: Combines the completeness and optimality of BFS with the space
efficiency of DFS. Often the preferred uninformed search.
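IDDFS is then just a loop around the depth_limited_search sketch above with an increasing limit (VacuumWorld from earlier is used as the test problem):

```python
from itertools import count

def iterative_deepening_search(problem):
    """Runs DLS with limits L = 0, 1, 2, ... until a solution is found.
    Note: loops forever if no solution exists; bound L in practice."""
    for limit in count():
        result = depth_limited_search(problem, problem.initial_state, limit)
        if result is not None:
            return result

print(iterative_deepening_search(VacuumWorld()))      # ['Suck', 'Right', 'Suck']
```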
●​ Bidirectional Search:
○​ Mechanism: Runs two simultaneous searches: one forward from the initial state
and one backward from the goal state. The search stops when the two search
frontiers meet.
○​ Requirements:
■​ The goal state must be explicitly known.
■​ The predecessor function (inverse of successor) must be defined for the
backward search.
○​ Properties:
■​ Completeness: Yes (if BFS is used for both).
■​ Optimality: Yes (if BFS is used for both).
■​ Time Complexity: O(b^{d/2}) (significantly better than O(b^d) because
b^{d/2} + b^{d/2} is much less than b^d).
■​ Space Complexity: O(b^{d/2}) (can be large, as one of the frontiers needs to
be stored).
○​ Challenge: Defining a "predecessor" for the goal and ensuring the meeting point is
valid.
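A compact sketch of bidirectional BFS over the same Problem interface. It assumes every action is reversible, so the forward successor function can stand in for the predecessor function on the backward side; parent pointers, which would be needed to reconstruct the full path from the meeting state, are omitted for brevity:

```python
from collections import deque

def bidirectional_search(problem):
    """Bidirectional BFS; returns the state where the two frontiers meet."""
    start, goal = problem.initial_state, problem.goal_state
    if start == goal:
        return start
    frontiers = [deque([start]), deque([goal])]       # forward, backward
    visited = [{start}, {goal}]
    while frontiers[0] and frontiers[1]:
        for side in (0, 1):                           # alternate directions
            for _ in range(len(frontiers[side])):     # expand one whole layer
                state = frontiers[side].popleft()
                for action in problem.actions(state):
                    nxt = problem.result(state, action)
                    if nxt in visited[1 - side]:      # the frontiers meet
                        return nxt
                    if nxt not in visited[side]:
                        visited[side].add(nxt)
                        frontiers[side].append(nxt)
    return None
```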

Informed (Heuristic) Search Strategies


These strategies use problem-specific knowledge (heuristics) to guide the search, making it
more efficient.
●​ Concept of Heuristic Function (h(n)):
○​ Definition: An estimated cost from node n to the closest goal state.
○​ Purpose: To guide the search towards promising nodes, reducing the amount of
exploration.
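For example, a standard admissible heuristic for the 8-puzzle is the sum of the Manhattan distances of the tiles from their goal squares. The 9-tuple state encoding below (read row by row, 0 marking the blank) is illustrative:

```python
def manhattan_h(state, goal):
    """Sum of Manhattan distances of tiles 1-8 from their goal squares."""
    total = 0
    for tile in range(1, 9):                          # the blank (0) is ignored
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
print(manhattan_h((1, 2, 3, 4, 5, 6, 0, 7, 8), goal))  # 2: tiles 7 and 8 each one move away
```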
●​ Concept of Greedy Best-First Search (Greedy BFS):
○​ Mechanism: Expands the node that appears to be closest to the goal, as estimated
by the heuristic function h(n). Uses a priority queue for the frontier, ordered by
h(n).
○​ Properties:
■​ Completeness: No, can get stuck in infinite paths or local optima (might not
reach the goal).
■​ Optimality: No, it's "greedy" and doesn't consider the path cost from the
start.
■​ Time Complexity: O(b^m) in worst-case (like DFS), but can be much faster
with good heuristics.
■​ Space Complexity: O(b^m) in worst-case, but can be much better.
○​ Analogy: Always heading towards what looks like the destination, even if there's a
closer, better path through a different direction.
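A sketch of greedy best-first search over the Problem interface from earlier, ordering a priority queue by h(n) alone (the tie-breaking counter just keeps heapq from ever comparing states):

```python
import heapq
from itertools import count

def greedy_best_first_search(problem, h):
    """Best-first search ordered by h(n) alone; `h` maps a state to an estimate."""
    tie = count()                                     # tie-breaker for the heap
    frontier = [(h(problem.initial_state), next(tie), problem.initial_state, [])]
    explored = set()
    while frontier:
        _, _, state, path = heapq.heappop(frontier)   # lowest h(n) first
        if problem.goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for action in problem.actions(state):
            nxt = problem.result(state, action)
            heapq.heappush(frontier, (h(nxt), next(tie), nxt, path + [action]))
    return None
```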
●​ A* Search: Minimizing the Total Estimated Solution Cost:
○​ Mechanism: Expands the node n with the lowest value of f(n) = g(n) + h(n), where:
■​ g(n): The actual cost from the initial state to node n.
■​ h(n): The estimated cost from node n to the closest goal.
■​ f(n): The estimated total cost of the cheapest solution through node n.
○​ Uses a priority queue for the frontier, ordered by f(n).
○​ Properties:
■​ Completeness: Yes, if h(n) is admissible (never overestimates the cost to the
goal) and the branching factor is finite.
■​ Optimality: Yes, if h(n) is admissible and consistent (or monotonic), and path
costs are non-negative.
■​ Admissible Heuristic: h(n) ≤ h*(n), where h*(n) is the true cost from n to the goal.
■​ Consistent Heuristic (stronger than admissibility): h(n) ≤ cost(n, a, n') + h(n') for every successor n' of n reached by action a (the triangle inequality).
■​ Time Complexity: Exponential O(b^d) in the worst case, but significantly
better with good heuristics. The performance depends heavily on the
heuristic's accuracy.
■​ Space Complexity: O(b^d) (must store all expanded nodes in memory,
similar to BFS, which is its main drawback for very large problems).
○​ A* is the most popular and widely used search algorithm in AI because it combines the completeness and optimality of uniform-cost search with heuristic guidance.
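A sketch of A* over the Problem interface from earlier. Optimality assumes h is admissible (and consistent for the graph-search form); the best_g table lets the sketch skip stale queue entries instead of decreasing keys in place:

```python
import heapq
from itertools import count

def a_star_search(problem, h):
    """A*: always expands the frontier node with the lowest f(n) = g(n) + h(n)."""
    tie = count()
    start = problem.initial_state
    frontier = [(h(start), next(tie), 0, start, [])]  # (f, tie, g, state, path)
    best_g = {start: 0}                               # cheapest known g per state
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):
            return path
        if g > best_g.get(state, float('inf')):
            continue                                  # stale entry; cheaper path known
        for action in problem.actions(state):
            nxt = problem.result(state, action)
            g2 = g + problem.step_cost(state, action, nxt)
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier,
                               (g2 + h(nxt), next(tie), g2, nxt, path + [action]))
    return None
```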

Case Study: Applications of AI in Transportation


This section brings the theoretical concepts to life with real-world examples.
●​ Context: Transportation is a vast domain where AI, including search algorithms, plays a
critical role in optimization, efficiency, and safety.
●​ Key Application Areas and Relevant Search Concepts:
1.​ Route Optimization/Navigation Systems (e.g., Google Maps, Uber, Logistics):
■​ Problem: Finding the shortest/fastest/cheapest path between two or more
points.
■​ Search Algorithms: Primarily A* search (for single source-destination) and
variations (e.g., Contraction Hierarchies, Hub Labels for large-scale road
networks). Dijkstra's algorithm is also fundamental.
■​ Heuristics: Straight-line distance (Euclidean or Manhattan) to the destination; a toy sketch of this setup appears after this case study.
■​ States: Intersections, specific locations.
■​ Actions: Traversing road segments.
■​ Costs: Distance, time, fuel consumption, tolls.
2.​ Traffic Management and Congestion Prediction:
■​ Problem: Optimizing traffic flow, rerouting vehicles to avoid congestion.
■​ Search: Complex, dynamic variants of search, often involving multi-agent
systems and real-time data. Reinforcement learning combined with search is
also used.
■​ States: Current traffic conditions, road capacities.
■​ Actions: Adjusting traffic light timings, recommending alternative routes.
■​ Costs: Travel time, congestion levels.
3.​ Autonomous Vehicles (Self-Driving Cars):
■​ Problem: Path planning (local and global), obstacle avoidance, decision
making (lane changes, turns).
■​ Search:
■​ Global Path Planning: A* (or similar) on a discretized map.
■​ Local Path Planning: Rapidly-exploring Random Trees (RRTs),
Probabilistic Roadmaps (PRMs), or optimization-based methods that
implicitly perform search. D* Lite is used for replanning in dynamic
environments.
■​ Heuristics: Distance to target, safety metrics, smoothness of path.
■​ States: Vehicle's position, orientation, speed, surrounding environment
(obstacles).
■​ Actions: Steering, acceleration, braking.
■​ Costs: Distance, time, jerk, collision risk, comfort.
4.​ Logistics and Supply Chain Optimization:
■​ Problem: Vehicle routing problem (VRP), scheduling deliveries, warehouse
optimization.
■​ Search: Very complex combinatorial optimization problems, often solved with
meta-heuristics (e.g., Genetic Algorithms, Simulated Annealing) which
explore a large search space, or specialized exact algorithms for smaller
instances.
■​ States: Locations of vehicles, packages, depots.
■​ Actions: Delivering packages, moving vehicles.
■​ Costs: Fuel, time, labor, late delivery penalties.
5.​ Public Transportation Optimization:
■​ Problem: Bus/train scheduling, resource allocation (drivers, vehicles),
dynamic routing.
■​ Search: Similar to logistics, involving complex scheduling and routing
algorithms.
■​ States: Vehicle locations, passenger counts, schedules.
■​ Actions: Adjusting schedules, rerouting vehicles.
■​ Costs: Operational cost, passenger wait times.
Conclusion for Case Study: AI's role in transportation is evolving from static route planning toward dynamic, real-time optimization and autonomous decision-making, relying heavily on the principles of problem-solving by searching.
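To make application area 1 concrete, here is a toy road-network run of the a_star_search and Problem sketches from earlier, using straight-line (Euclidean) distance to the destination as the heuristic. The cities, coordinates, and road lengths below are invented for illustration:

```python
import math

# Invented toy road network: city -> [(neighbor, road_length), ...]
roads = {
    'A': [('B', 4.0), ('C', 3.0)],
    'B': [('A', 4.0), ('D', 5.0)],
    'C': [('A', 3.0), ('D', 7.0)],
    'D': [('B', 5.0), ('C', 7.0)],
}
coords = {'A': (0, 0), 'B': (4, 0), 'C': (0, 3), 'D': (6, 3)}

class RoutePlanning(Problem):
    def actions(self, state):
        return [city for city, _ in roads[state]]

    def result(self, state, action):
        return action                                 # driving to a city puts you there

    def step_cost(self, state, action, next_state):
        return dict(roads[state])[next_state]         # cost = road length

def straight_line(city):
    """Euclidean distance to D; admissible because no road can be shorter."""
    (x1, y1), (x2, y2) = coords[city], coords['D']
    return math.hypot(x2 - x1, y2 - y1)

plan = RoutePlanning(initial_state='A', goal_state='D')
print(a_star_search(plan, straight_line))             # ['B', 'D']: cost 9 beats A-C-D at 10
```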
