Unit 3

The document outlines various problem-solving methods in Artificial Intelligence, categorizing them into uninformed, informed, local search algorithms, and game playing strategies. Key techniques include breadth-first search, A* search, constraint satisfaction problems, and minimax algorithms. It emphasizes the importance of heuristics and optimization in efficiently solving complex tasks and decision-making under uncertainty.


Unit II - PROBLEM-SOLVING METHODS

Problem-solving Methods – Search Strategies – Uninformed – Informed – Heuristics – Local Search Algorithms and Optimization Problems – Searching with Partial Observations – Constraint Satisfaction Problems – Constraint Propagation – Backtracking Search – Game Playing – Optimal Decisions in Games – Alpha-Beta Pruning – Stochastic Games

Problem solving in Artificial Intelligence (AI) involves techniques that enable machines to reason,
plan, and find solutions to complex tasks. These methods can be broadly categorized into search-
based, knowledge-based, and logic-based approaches.

1. Uninformed Search (Blind Search)


These algorithms have no additional knowledge about the states beyond the problem definition
(initial state, actions, goal test).

Key Methods:
• Breadth-First Search (BFS):
  • Explores all nodes at the current depth before going deeper.
  • Complete; optimal when all step costs are equal.
  • Time/space complexity: O(b^d), where b is the branching factor and d is the depth of the shallowest goal.
• Depth-First Search (DFS):
  • Explores as far as possible along each branch before backtracking.
  • Not optimal; may get stuck in loops on graphs with cycles.
• Uniform Cost Search (UCS):
  • Expands the least-cost node first.
  • Complete and optimal (like BFS, but considers path cost).
• Iterative Deepening Search (IDS):
  • Combines the benefits of BFS and DFS.
  • Repeats depth-limited DFS with increasing depth limits.
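The BFS idea above can be sketched in a few lines. The graph and node names below are hypothetical, chosen only for illustration; a minimal sketch, not a full search framework:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: explores nodes level by level.
    Returns the path with the fewest edges from start to goal, or None."""
    frontier = deque([[start]])   # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # goal unreachable

# Hypothetical example graph as adjacency lists
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Because BFS expands the frontier one level at a time, the first path that reaches the goal is guaranteed to be the shallowest one.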

🧠 2. Informed Search (Heuristic Search)


Uses a heuristic function h(n) to estimate the cost from node n to the goal.

Key Methods:
• Greedy Best-First Search:
  • Expands the node with the lowest h(n).
  • Not always optimal or complete.
• A* Search:
  • Uses f(n) = g(n) + h(n), where:
    • g(n) = actual cost from the start to node n
    • h(n) = estimated cost from n to the goal
  • Optimal if h(n) is admissible (never overestimates the true cost).

Heuristics:
• Examples:
  • Manhattan distance for grid navigation.
  • Number of misplaced tiles for the 8-puzzle.
• A more informed admissible heuristic expands fewer nodes, so good heuristics mean faster solutions.
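A* with the Manhattan-distance heuristic can be sketched on a small grid. The grid size, wall positions, and function names below are illustrative assumptions:

```python
import heapq

def manhattan(p, q):
    """Admissible heuristic for 4-connected grids: never overestimates."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def a_star(start, goal, walls, size):
    """A* on a size x size grid with f(n) = g(n) + h(n)."""
    frontier = [(manhattan(start, goal), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)  # lowest f(n) first
        if node == goal:
            return path
        x, y = node
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in walls:
                ng = g + 1  # uniform step cost
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + manhattan(nxt, goal), ng, nxt, path + [nxt]))
    return None

path = a_star((0, 0), (2, 2), walls={(1, 0), (1, 1)}, size=3)
print(len(path) - 1)  # number of steps in the optimal detour: 4
```

Since Manhattan distance is admissible here, the first time the goal is popped from the priority queue the path is guaranteed optimal.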

🔄 3. Local Search Algorithms & Optimization Problems


These algorithms work with a single current state and move to neighboring states.
They are used when the state space is large or the path to the goal is irrelevant (only the final state matters).

Key Algorithms:
• Hill Climbing:
• Greedy local move to increase value.
• Can get stuck in local maxima/plateaus.
• Simulated Annealing:
• Similar to hill climbing but accepts worse moves occasionally.
• Probability of bad moves decreases over time (cooling schedule).
• Genetic Algorithms:
• Inspired by natural evolution.
• Maintains a population of solutions.
• Uses crossover and mutation to evolve better solutions.
• Local Beam Search:
• Keeps k states, chooses best k successors from all neighbors.
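Hill climbing, the simplest of these, can be sketched as follows. The 1-D landscape f and the neighbor function are hypothetical examples:

```python
def hill_climb(value, neighbors, start, steps=1000):
    """Greedy local search: move to the best neighbor while it improves.
    Can get stuck at a local maximum or on a plateau."""
    current = start
    for _ in range(steps):
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current  # no neighbor improves: local maximum reached
        current = best
    return current

# Hypothetical 1-D landscape: maximize f(x) = -(x - 7)^2 over integers
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climb(f, step, start=0))  # climbs to the single peak at x = 7
```

On this single-peaked landscape hill climbing always reaches the optimum; simulated annealing differs only in occasionally accepting a worse neighbor, with decreasing probability.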
👀 4. Searching with Partial Observations
Applies when the agent has incomplete knowledge of the environment.

Concepts:
• Belief States:
• Represent sets of possible actual states.
• Contingency Planning:
• Plans for all possible outcomes of uncertain events.
• POMDPs (Partially Observable Markov Decision Processes):
• Mathematical model to handle decision making under uncertainty.
• Incorporates probabilities and observations.
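The belief-state idea can be sketched for a hypothetical agent in a one-dimensional corridor. The corridor, action model, and sensor model below are illustrative assumptions, not from the source:

```python
def update_belief(belief, action, observation, transition, observe):
    """Belief-state update: apply the action to every state the agent
    might be in, then keep only states consistent with the observation."""
    predicted = {transition(s, action) for s in belief}
    return {s for s in predicted if observe(s) == observation}

# Hypothetical corridor of 5 cells (0..4); moving clamps at the walls.
transition = lambda s, a: min(s + 1, 4) if a == "right" else max(s - 1, 0)
observe = lambda s: "wall" if s in (0, 4) else "open"

belief = {0, 1, 2, 3, 4}  # the agent has no idea where it starts
belief = update_belief(belief, "right", "open", transition, observe)
print(sorted(belief))  # [1, 2, 3] -> still uncertain, but narrower
```

Each action/observation pair shrinks the belief state; POMDPs generalize this by attaching probabilities to states, transitions, and observations.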

📐 5. Constraint Satisfaction Problems (CSPs)


Problems where the goal is to assign values to variables under constraints.

Examples:
• Sudoku, Scheduling, Map Coloring.

Key Techniques:
• Backtracking Search:
• Depth-first search that backtracks when a constraint is violated.
• Constraint Propagation:
• Reduces domain of variables by enforcing constraints.
• Techniques:
  • Forward Checking: after each assignment, remove inconsistent values from the domains of unassigned neighbors.
  • Arc Consistency (AC-3): repeatedly removes values that have no support under the binary constraints.
• Heuristics in CSPs:
• Minimum Remaining Values (MRV): Choose variable with fewest legal values.
• Degree Heuristic: Pick variable involved in most constraints.
• Least Constraining Value: Choose value that rules out fewest options.
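Backtracking search with the MRV heuristic can be sketched on a tiny map-coloring instance. The three-region map and color names below are illustrative assumptions:

```python
def backtrack(assignment, variables, domains, neighbors):
    """Backtracking search for map coloring: assign one variable at a
    time and backtrack when the constraint (adjacent regions must
    differ) fails. Uses MRV: pick the variable with fewest legal values."""
    if len(assignment) == len(variables):
        return assignment  # every variable assigned consistently
    unassigned = [v for v in variables if v not in assignment]
    var = min(unassigned, key=lambda v: sum(
        all(assignment.get(n) != val for n in neighbors[v])
        for val in domains[v]))  # MRV: most constrained variable first
    for value in domains[var]:
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = backtrack({**assignment, var: value},
                               variables, domains, neighbors)
            if result:
                return result
    return None  # no value works for var: backtrack

# Hypothetical 3-region map where A, B, C are mutually adjacent
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
domains = {v: ["red", "green", "blue"] for v in neighbors}
solution = backtrack({}, list(neighbors), domains, neighbors)
print(solution)
```

Because all three regions touch each other, any solution must use three distinct colors; MRV just determines the order in which the search tries variables.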

♟️6. Game Playing (Adversarial Search)


Applies to competitive multi-agent environments (e.g., chess, tic-tac-toe).
Algorithms:
• Minimax Algorithm:
• Assumes both players play optimally.
• Agent picks move that maximizes its minimum gain (or minimizes opponent's
maximum gain).
• Optimal Decisions in Games:
• Use game tree, evaluate outcomes with utility function.
• Alpha-Beta Pruning:
• Optimizes minimax by pruning parts of the tree that won’t affect the result.
• Keeps track of:
• α (alpha): best already explored option for maximizer.
• β (beta): best already explored option for minimizer.
• Stochastic Games (with chance elements):
• Include chance nodes for randomness (e.g., dice).
• Use Expectiminimax:
• Combines minimax and expected value to handle uncertainty.
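Minimax with alpha-beta pruning can be sketched over a small hand-built game tree. Representing the tree as nested lists with numeric leaves is an illustrative assumption:

```python
def alphabeta(state, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning: leaves are utility values for
    the maximizer; inner lists alternate max and min levels."""
    if isinstance(state, (int, float)):
        return state  # leaf node: return its utility
    if maximizing:
        value = float("-inf")
        for child in state:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:  # the minimizer will never allow this branch
                break
        return value
    else:
        value = float("inf")
        for child in state:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:  # prune: the maximizer already has better
                break
        return value

# A max node over three min nodes (classic textbook shape)
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, maximizing=True))  # 3
```

On this tree the second min node is cut off after its first leaf (2 < 3), so pruning never changes the minimax value, only the amount of work.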

🧩 Final Classification of Search Techniques


Category                    | Techniques                                             | Use Case Example
Uninformed Search           | BFS, DFS, UCS, IDS                                     | Puzzle solving, pathfinding
Informed Search             | A*, Greedy, Heuristics                                 | Robot navigation, planning
Local Search & Optimization | Hill climbing, Genetic algorithms, Simulated Annealing | Scheduling, TSP, AI planning
Partial Observation Search  | Belief state, POMDP                                    | Robotics, uncertain environments
CSPs                        | Backtracking, Constraint propagation, Arc consistency  | Sudoku, Exam scheduling
Game Playing                | Minimax, Alpha-Beta Pruning, Expectiminimax            | Chess, Poker, Connect Four
