3_Problem Solving and Searching
BLG 521E: Artificial Intelligence
Problem Solving and Searching

Instructor: Professor Mehmet Keskinöz
ITU Artificial Intelligence Research and Development Center (ITUAI)
Faculty of Computer and Informatics
Computer Engineering Department
Istanbul Technical University, Istanbul, Turkey
Email: [email protected]

Reflex Agent
• A reflex agent is a type of intelligent agent that makes decisions based solely on the current percepts (observations) it receives from the environment, without considering the history of past percepts. Its decision-making process is often based on simple condition-action rules (e.g., if condition, then action). This means that a reflex agent responds directly to stimuli in its environment, rather than using a more sophisticated approach like learning or planning.
• Key Characteristics:
• Immediate Response: Reflex agents react immediately to the current state of the environment, making them suitable for situations where speed is critical.
• No Memory of the Past: They do not maintain a history of past states or observations, which means their actions are not influenced by what has previously happened.
• Rule-Based Behavior: Reflex agents often operate using a set of pre-defined rules that map conditions (percepts) directly to actions.
• Example:
• A simple example of a reflex agent is a thermostat:
• It senses the current temperature (percept).
• If the temperature is below a set point, it turns on the heater.
• If the temperature is above a set point, it turns off the heater.
• Here, the thermostat doesn't consider past temperatures or predict future temperatures; it simply reacts to the current measurement based on the rules defined.
• Advantages:
• Speed: Reflex agents can be very fast since they do not involve complex reasoning processes.
• Simplicity: Their design is relatively straightforward, making them easy to implement for simple tasks.
• Limitations:
• Lack of Flexibility: Reflex agents may not perform well in complex environments where a deeper understanding of past interactions is necessary.
• No Learning or Adaptation: They cannot improve their behavior over time since they do not learn from past experiences or adapt their rules.
• In summary, reflex agents are best suited for simple, well-defined environments where decisions can be made directly from current observations without needing to consider past states or plan for future actions.
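The thermostat behavior above can be sketched as a minimal condition-action rule, mapping the current percept directly to an action. This is an illustrative Python sketch; the set point value and action names are assumptions, not part of the slides:

```python
# Minimal reflex agent: maps the current percept (temperature) directly
# to an action via condition-action rules, with no memory of past percepts.

def thermostat_agent(temperature, set_point=20.0):
    """Return an action based only on the current temperature percept."""
    if temperature < set_point:      # rule: too cold -> heat
        return "turn heater on"
    if temperature > set_point:      # rule: too warm -> stop heating
        return "turn heater off"
    return "do nothing"              # rule: at set point -> no action

print(thermostat_agent(15.0))  # below set point
print(thermostat_agent(25.0))  # above set point
```

Note that the agent's output depends only on the argument passed in this call; nothing from previous calls is stored, which is exactly the "no memory of the past" property.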
1
13/10/2024
Goal-Based Agent
• A goal-based agent is an intelligent agent that makes decisions based not only on the current percepts (observations) but also in consideration of specific goals it aims to achieve. Unlike reflex agents, goal-based agents have a notion of a desired outcome or goal state and choose actions that help them move closer to that goal.
• Key Characteristics:
• Goal-Oriented Behavior: The agent has a defined goal or set of goals that describe desirable states of the world. It chooses actions that will bring it closer to achieving these goals.
• Planning Capabilities: Goal-based agents often involve a planning process, where they evaluate different possible actions and sequences of actions to determine which will best achieve their goals.
• State-Based Decisions: These agents consider the state of the environment as well as the desired goal state, making decisions based on how well an action will help reach the goal.
• Example:
• A robot navigating a maze to reach a specific location can be considered a goal-based agent:
• Its percepts are the walls and paths it encounters as it moves.
• Its goal is to find and reach the exit of the maze.
• It may use strategies like depth-first search, breadth-first search, or A* search to plan its path and achieve its goal.
• The agent evaluates potential paths and decides on actions that will lead it toward the exit, taking into account the current state of its surroundings as well as its end goal.
• Advantages:
• Flexibility: Goal-based agents are more flexible than reflex agents, as they can adapt their behavior to different environments and goals.
• Complex Problem Solving: They can handle more complex scenarios that require reasoning about the future, choosing actions that consider longer-term outcomes rather than immediate reactions.
• Limitations:
• Computational Complexity: Planning and reasoning can be computationally intensive, especially in environments with a large number of possible states or actions.
• Time-Consuming Decision-Making: Since goal-based agents may need to evaluate many possible action sequences to find the optimal one, they might be slower in responding compared to reflex agents in certain situations.
• In summary, goal-based agents are ideal for scenarios where the environment is dynamic and where the agent's actions need to be guided by a specific objective or desired outcome. Their ability to reason about actions and their long-term effects makes them suitable for tasks that require planning and strategic thinking.
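The maze-navigation example can be sketched as a goal-based agent that plans a whole path toward its goal state before acting, here using breadth-first search. This is an illustrative Python sketch; the grid layout and the `S`/`G`/`#` cell encoding are assumptions made for the example:

```python
from collections import deque

# Goal-based agent sketch: given a maze and a goal cell, plan a sequence
# of moves with breadth-first search (BFS) before acting.
MAZE = [
    "S.#",
    ".##",
    "..G",
]

def plan_path(maze):
    """Return a list of cells from start 'S' to goal 'G', or None."""
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if maze[r][c] == "S")
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if maze[r][c] == "G":          # goal test against the desired state
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != "#" and (nr, nc) not in visited):
                visited.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None

print(plan_path(MAZE))  # a shortest path of (row, col) cells
```

Unlike the reflex thermostat, the agent here compares candidate future states against an explicit goal state and commits to a multi-step plan, which is the defining feature of a goal-based agent.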
Examples
Example: 8-Puzzle
Search Agent
• A search agent is a specific type of goal-based agent that uses search algorithms to find a path
or sequence of actions that leads to a goal state.
• Search Algorithms: These could include algorithms like breadth-first search (BFS), depth-first
search (DFS), or other search techniques that explore possible sequences of actions.
• All search agents are goal-based agents, but not all goal-based agents are search agents.
• A search agent uses search algorithms to solve problems and achieve goals, while a goal-based
agent is any agent that makes decisions with the purpose of achieving a specified goal, possibly
using methods other than search.
• In many AI contexts, the distinction is significant because the design of a search agent focuses on
algorithmic problem-solving techniques, whereas goal-based agents have a broader range of
decision-making strategies.
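Tying the search-agent definition to the 8-puzzle example above: a search agent applies an algorithm such as BFS to the space of puzzle states to reach the goal configuration. This is a hedged illustrative sketch; the tuple state encoding and goal layout are assumptions for the example:

```python
from collections import deque

# Search-agent sketch: breadth-first search over 8-puzzle states.
# A state is a 9-tuple read row by row; 0 marks the blank tile.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def neighbors(state):
    """Yield states reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]    # slide the tile at j into the blank
            yield tuple(s)

def solve(start):
    """Return the number of moves in a shortest solution, or None."""
    frontier = deque([(start, 0)])
    visited = {start}
    while frontier:
        state, depth = frontier.popleft()
        if state == GOAL:
            return depth
        for nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, depth + 1))
    return None

# One move away from the goal: the blank is swapped with tile 8.
print(solve((1, 2, 3, 4, 5, 6, 7, 0, 8)))  # shortest solution has 1 move
```

Because BFS explores states in order of depth, the first time the goal state is dequeued the move count is guaranteed to be minimal.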
Search Strategies
Properties of DFS
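The slide body under this heading is a figure; as a hedged illustration of the standard properties, depth-first search keeps in memory only the current path plus untried siblings, but the first solution it returns need not be the shallowest, so it is not optimal. The graph and node names below are made up for the example:

```python
# Illustrative depth-first search (DFS) on an explicit graph.
# DFS dives along one branch before backtracking (LIFO frontier).
GRAPH = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": ["F"], "F": [],
}

def dfs(graph, start, goal):
    """Return some path from start to goal, or None (not necessarily shortest)."""
    stack = [[start]]
    visited = {start}
    while stack:
        path = stack.pop()                   # LIFO: expand the deepest node first
        node = path[-1]
        if node == goal:
            return path
        for nxt in reversed(graph[node]):    # reversed so "B" is tried before "C"
            if nxt not in visited:
                visited.add(nxt)
                stack.append(path + [nxt])
    return None

print(dfs(GRAPH, "A", "F"))  # returns a deep path even though A-C-F is shorter
```

Note how the result demonstrates non-optimality: DFS finds the goal via the deeper branch through B even though a two-edge path through C exists.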
Iterative Deepening
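The slide body under this heading is a figure; as a hedged sketch of the idea, iterative deepening runs a depth-limited DFS with limits 0, 1, 2, ... until the goal is found, combining DFS's low memory use with BFS-like shallowest-first solutions. The graph below is made up for the example (and is acyclic, so no visited set is needed):

```python
# Iterative deepening: repeated depth-limited DFS with growing depth limits.
GRAPH = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["E"],
    "D": [], "E": [],
}

def depth_limited(graph, node, goal, limit):
    """DFS from node, exploring at most `limit` edges deep."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for nxt in graph[node]:
        sub = depth_limited(graph, nxt, goal, limit - 1)
        if sub is not None:
            return [node] + sub
    return None

def iterative_deepening(graph, start, goal, max_depth=10):
    for limit in range(max_depth + 1):       # limits 0, 1, 2, ...
        path = depth_limited(graph, start, goal, limit)
        if path is not None:
            return path                      # found at the shallowest depth
    return None

print(iterative_deepening(GRAPH, "A", "E"))
```

Shallow levels are re-expanded on every iteration, but since most nodes of a tree live at the deepest level, the repeated work adds only a constant factor for reasonable branching factors.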
UCS (Uniform-Cost Search)
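The UCS slides here are figures; as an illustrative sketch, uniform-cost search always expands the frontier node with the smallest accumulated path cost g(n), using a priority queue, and returns a cheapest path when edge costs are non-negative. The graph and costs below are made up for the example:

```python
import heapq

# Uniform-cost search (UCS): expand the frontier node with the smallest
# accumulated path cost g(n), tracked in a priority queue.
GRAPH = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 6)],
    "B": [("G", 1)],
    "G": [],
}

def ucs(graph, start, goal):
    """Return (cost, path) of a cheapest path, or None if unreachable."""
    frontier = [(0, start, [start])]     # priority queue ordered by g(n)
    best = {}                            # cheapest expansion cost per node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path            # goal test on expansion => optimal
        if node in best and best[node] <= cost:
            continue                     # already expanded more cheaply
        best[node] = cost
        for nxt, step in graph[node]:
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

print(ucs(GRAPH, "S", "G"))  # cheapest route S-A-B-G with cost 4
```

Applying the goal test when a node is popped (not when it is generated) is what makes UCS optimal: the direct edge A-G with cost 7 total is queued but never wins over the cost-4 route through B.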