Unit-2

The document discusses the concepts of agents and environments in artificial intelligence, defining an agent as anything that perceives its environment and acts upon it. It outlines the characteristics of rational agents, performance measures, and the properties of task environments using the PEAS framework. Additionally, it categorizes agents into five classes based on their intelligence and capability, including simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents.

2.1 Agents and Environments
Agent:
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
Example:
Human Agent: Eyes, ears, and other organs serve as sensors; hands, legs, mouth, and other body parts serve as actuators.
Robotic Agent: Cameras and infrared range finders serve as sensors; various motors serve as actuators.

Environment: The external context in which the agent operates.


Example: A chess game environment for a chess-playing agent.
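As a minimal sketch of this agent-environment coupling (the Agent, Environment, and run names below are assumptions for illustration, not part of the original text), the sense-decide-act loop can be written as:

```python
class Agent:
    """Maps each percept to an action (the agent function)."""
    def act(self, percept):
        raise NotImplementedError


class Environment:
    """Supplies percepts to the agent's sensors and applies its actions."""
    def percept(self):
        raise NotImplementedError

    def apply(self, action):
        raise NotImplementedError


def run(agent, environment, steps):
    """Couple an agent to its environment: sense, decide, act."""
    for _ in range(steps):
        percept = environment.percept()   # sensing through sensors
        action = agent.act(percept)       # the agent program decides
        environment.apply(action)         # acting through actuators
```

Concrete agents (such as a chess-playing agent) would subclass these interfaces and fill in the percept format and action set for their own environment.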
2.2 Concept of Rationality
2.2.1 Performance Measures
• Criteria for evaluating agent behavior.
• Example: A vacuum cleaner agent’s performance measure could include:
• Amount of dirt cleaned.
• Electricity consumed.
• Time taken.
A small sketch of combining these criteria into a single score is given below.
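This is only an illustrative sketch: the weights, function name, and example numbers below are assumptions, not values from the text.

```python
def vacuum_performance(dirt_cleaned, energy_used, time_taken,
                       w_dirt=1.0, w_energy=0.1, w_time=0.05):
    """Illustrative performance measure: reward dirt cleaned,
    penalise electricity consumed and time taken (weights are assumed)."""
    return w_dirt * dirt_cleaned - w_energy * energy_used - w_time * time_taken


# Example: 8 units of dirt cleaned, 20 units of energy, 30 time steps
score = vacuum_performance(8, 20, 30)   # 8.0 - 2.0 - 1.5 = 4.5
```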

2.2.2 Rationality and Rational Agent


A rational agent is one that acts to maximize expected performance based on its
percept sequence and knowledge.
Characteristics of a Rational Agent:
• Perceives its environment accurately.
• Acts upon the environment effectively.
• Chooses actions that are expected to lead to the best outcome.
• Learns and adapts over time.
Rationality

Rationality is the quality of making decisions that lead to the best expected outcome
based on:
• The percept sequence (everything the agent has perceived so far),
• The knowledge the agent possesses,
• The actions available to the agent,
• And the performance measure that defines what is considered a successful outcome.
A short sketch of this kind of choice is given below.
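Rational choice amounts to picking the action with the highest expected performance given what the agent knows. The sketch below assumes a caller-supplied estimate of expected performance; the function and example values are hypothetical.

```python
def rational_action(available_actions, expected_performance):
    """Choose the action whose expected performance measure is highest.

    `expected_performance` is assumed to estimate, from the percept
    sequence and the agent's knowledge, how well an action will do."""
    return max(available_actions, key=expected_performance)


# Toy example with assumed estimates, for illustration only
estimates = {"clean": 0.9, "move": 0.4, "wait": 0.1}
best = rational_action(estimates.keys(), estimates.get)   # -> "clean"
```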
2.3 Task Environment and Its Properties
A task environment in artificial intelligence refers to the setting or context
in which an agent operates and attempts to achieve its goals.
A useful way to describe a task environment is the PEAS framework, which stands for:
 Performance Measure – Criteria for success.
 Environment – What the agent senses and acts in.
 Actuators – Mechanisms for action.
 Sensors – Mechanisms for perception.
Example: Vacuum Cleaner Agent
 Performance Measure: Cleanliness, power efficiency, speed
 Environment: Rooms, floor type, dirt locations
 Actuators: Wheels, suction motor
 Sensors: Dirt sensor, bump sensor, position sensor
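A PEAS description can also be written down as a small record. This is just one convenient way to organise the description above; the dataclass and field names are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class PEAS:
    """PEAS description of a task environment (field names are assumed)."""
    performance_measure: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]


vacuum_cleaner = PEAS(
    performance_measure=["cleanliness", "power efficiency", "speed"],
    environment=["rooms", "floor type", "dirt locations"],
    actuators=["wheels", "suction motor"],
    sensors=["dirt sensor", "bump sensor", "position sensor"],
)
```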
Properties of Task Environment
1. Fully Observable vs. Partially Observable
• Fully Observable: The agent can see the entire environment at any time.
• Example: In Chess, all pieces on the board are visible to both players. So, the
environment is fully observable.
• Partially Observable: The agent has limited information about the environment.
• Example: In Poker, players can only see their own cards, not the opponents’. So, it's
partially observable.
2. Deterministic vs. Stochastic
• Deterministic: The next state is completely determined by the current state and
action.
• Example: In solving a maze, taking a step always leads to a specific new position, so the outcome is predictable.
• Stochastic: The outcome is uncertain; randomness is involved.
• Example: In robot navigation, the robot might slip or its sensors may give noisy
data, making outcomes uncertain.
3. Single-Agent vs. Multi-Agent
• Single-Agent: The agent acts alone with no other competing or cooperating
agents.
• Example: Pathfinding in a maze by a robot.
• Multi-Agent: Multiple agents interact, possibly with competing goals.
• Example: In online multiplayer games, players (agents) compete or cooperate.

4. Discrete vs. Continuous


• Discrete: The number of possible actions or states is countable.
• Example: Board games like tic-tac-toe have limited, countable moves.
• Continuous: Infinite or uncountable states/actions.
• Example: Robot arm control — position, speed, and angle vary continuously.
Structure of an AI Agent

 The task of AI is to design an agent program that implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as:
Agent = Architecture + Agent Program
 Architecture: The machinery that the agent program executes on, including its sensors and actuators.
 Agent Program: An implementation of the agent function. The agent program executes on the physical architecture to produce the behaviour described by the agent function. A small sketch of this separation is given below.
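In the sketch below, the class and function names are assumptions, and the "architecture" is only a software stand-in for real sensors and actuators.

```python
class Architecture:
    """Stand-in for the machinery: exposes sensors and actuators."""
    def sense(self):
        # Assumed percept format, for illustration only.
        return {"location": "A", "dirty": True}

    def actuate(self, action):
        print("executing:", action)


def agent_program(percept):
    """Implements the agent function: maps a percept to an action."""
    return "Suck" if percept["dirty"] else "Right"


# Agent = Architecture + Agent Program
arch = Architecture()
arch.actuate(agent_program(arch.sense()))   # prints: executing: Suck
```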
 Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All of these agents can improve their performance and generate better actions over time.
 Simple reflex agent
 Model-based reflex agent
 Goal-based agent
 Utility-based agent
 Learning agent
Simple reflex agents
 Simple reflex agents take decisions on the basis of the current percept and ignore the rest of the percept history.
 These agents only succeed in fully observable environments.
 The simple reflex agent works on the Condition-Action rule, which means it maps the current state directly to an action. For example, a room-cleaner agent acts only if there is dirt in the room (a minimal sketch follows after this list).
 Problems with the simple reflex agent design approach:
They have very limited intelligence.
They do not have knowledge of non-perceptual parts of the current state.
They are not adaptive to changes in the environment.
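A minimal sketch of a simple reflex room-cleaner agent, following the condition-action idea above. The two-location percept format ("A"/"B" plus a dirt flag) and the rule set are assumptions for illustration.

```python
def simple_reflex_vacuum(percept):
    """Condition-action rules: the decision uses only the current
    percept, never the percept history."""
    location, dirty = percept
    if dirty:                # condition: dirt in the current square
        return "Suck"        # action
    elif location == "A":
        return "Right"
    else:
        return "Left"


print(simple_reflex_vacuum(("A", True)))    # -> Suck
print(simple_reflex_vacuum(("A", False)))   # -> Right
```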
Model-based reflex agent

 The model-based reflex agent can work in a partially observable environment and keep track of the situation.
 A model-based agent has two important factors:
 Model: Knowledge about "how things happen in the world"; this is why it is called a model-based agent.
 Internal State: A representation of the current state based on the percept history.
 These agents have a model, which is knowledge of the world, and they perform actions based on that model (see the sketch below).
 Updating the agent's state requires information about:
 How the world evolves.
 How the agent's actions affect the world.
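A sketch of a model-based reflex agent for the same two-location vacuum world: it keeps an internal state (what it believes about each square) and updates it using a simple model of how its own actions change the world. The class, the belief representation, and the transition model are all assumptions.

```python
class ModelBasedVacuum:
    def __init__(self):
        # Internal state: the agent's belief about each square, built
        # from the percept history ("unknown" until observed).
        self.believed_dirt = {"A": "unknown", "B": "unknown"}

    def act(self, percept):
        location, dirty = percept
        # Update the internal state from the current percept.
        self.believed_dirt[location] = dirty
        if dirty:
            # Model of how the agent's own action affects the world:
            # sucking makes the current square clean.
            self.believed_dirt[location] = False
            return "Suck"
        # Move toward a square believed (or not yet known) to be dirty.
        other = "B" if location == "A" else "A"
        if self.believed_dirt[other] in (True, "unknown"):
            return "Right" if other == "B" else "Left"
        return "NoOp"   # everything believed clean
```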
Goal-based agents

• Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
• They choose actions so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not. Such consideration of different scenarios is called searching and planning, which makes an agent proactive (a small search sketch follows below).
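The "searching" mentioned above can be sketched as looking ahead over sequences of actions until a goal state is reached. The breadth-first search below is a generic sketch; the state representation, successor function, and goal test in the toy example are assumptions.

```python
from collections import deque


def plan_to_goal(start, is_goal, successors):
    """Breadth-first search over action sequences: return the list of
    actions that leads from `start` to a state satisfying `is_goal`."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if is_goal(state):
            return actions
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None   # no plan found


# Toy example: move from position 0 to position 3 on a line (assumed problem)
plan = plan_to_goal(0, lambda s: s == 3,
                    lambda s: [("right", s + 1), ("left", s - 1)])
print(plan)   # -> ['right', 'right', 'right']
```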
Utility-based agents
 These agents are similar to goal-based agents but add an extra component, a utility measure, which makes them different by providing a measure of success at a given state.
 Utility-based agents act based not only on goals but also on the best way to achieve the goal.
 A utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action.
 The utility function maps each state to a real number that indicates how well each action achieves the goal (see the sketch below).
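A sketch of how a utility function can be used to choose among several alternatives that all reach the goal; the route names and utility values are assumed for illustration.

```python
def choose_by_utility(candidate_states, utility):
    """Pick the resulting state with the highest utility (a real number
    describing how desirable the state is), not merely any goal state."""
    return max(candidate_states, key=utility)


# Assumed example: three routes that all reach the goal, with different
# trade-offs encoded as utilities.
routes = {"fast_but_risky": 0.6, "slow_and_safe": 0.7, "balanced": 0.8}
best = choose_by_utility(routes, routes.get)   # -> "balanced"
```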
Learning agents
 A learning agent in AI is an agent that can learn from its past experiences; it has learning capabilities.
 It starts with basic knowledge and is then able to act and adapt automatically through learning.
 A learning agent has four main conceptual components:
 Learning element: Responsible for making improvements by learning from the environment.
 Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
 Performance element: Responsible for selecting external actions.
 Problem Generator: Responsible for suggesting actions that will lead to new and informative experiences.

Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it. A rough sketch of how these components fit together follows.
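In this sketch, the class, the method names, and the way the critic and problem generator are wired together are all assumptions for illustration; a real learning agent would plug concrete learning algorithms into these slots.

```python
class LearningAgent:
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element   # selects external actions
        self.learning_element = learning_element         # improves the performance element
        self.critic = critic                             # scores behaviour vs. a fixed standard
        self.problem_generator = problem_generator       # suggests exploratory actions

    def step(self, percept):
        # The critic evaluates how well the agent is doing on this percept.
        feedback = self.critic(percept)
        # The learning element uses the feedback to improve future behaviour.
        self.learning_element(feedback)
        # Occasionally try something new and informative; otherwise act normally.
        exploratory = self.problem_generator(percept)
        if exploratory is not None:
            return exploratory
        return self.performance_element(percept)
```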
