Unit-2
Agents and Environments
Agent:
An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators.
Example:
Human Agent: Eyes, ears, and other organs serve as sensors; hands, legs, mouth, and other
body parts serve as actuators.
Robotic Agent: Cameras and infrared range finders serve as sensors, and various motors serve as actuators.
Rationality is the quality of making decisions that lead to the best expected outcome
based on:
• The percept sequence (everything the agent has perceived so far),
• The knowledge the agent possesses,
• The actions available to the agent,
• The performance measure that defines what counts as a successful outcome.
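The agent abstraction above can be sketched in code. This is a minimal illustration (the class and method names are assumptions, not from any specific library): the agent records its percept sequence and maps it to an action through an agent function.

```python
# Minimal sketch of the agent abstraction: percepts come in through
# "sensors" (the perceive method) and actions go out through
# "actuators" (the value returned by act). Names are illustrative.

class SimpleAgent:
    def __init__(self):
        self.percepts = []          # the percept sequence so far

    def perceive(self, percept):
        """Record a new percept from the environment."""
        self.percepts.append(percept)

    def act(self):
        """A trivial agent function: decide based on the latest percept."""
        latest = self.percepts[-1]
        return "clean" if latest == "dirty" else "move"

agent = SimpleAgent()
agent.perceive("dirty")
print(agent.act())  # clean
```

A rational agent would choose the action expected to maximize its performance measure given the whole percept sequence; this sketch only reacts to the most recent percept.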
2.3. Task environment and its properties
A task environment in artificial intelligence refers to the setting or context
in which an agent operates and attempts to achieve its goals.
A useful way to describe a task environment is the PEAS description, which
stands for:
Performance Measure – Criteria for success.
Environment – What the agent senses and acts in.
Actuators – Mechanisms for action.
Sensors – Mechanisms for perception.
Example: Vacuum Cleaner Agent
PEAS Element – Description
Performance Measure – Cleanliness, power efficiency, speed
Environment – Rooms, floor type, dirt locations
Actuators – Wheels, suction motor
Sensors – Dirt sensor, bump sensor, position sensor
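The vacuum-cleaner agent from the PEAS table can be sketched as a simple reflex agent. This is a hedged illustration using the classic two-location (A/B) vacuum world; the location names and action strings are assumptions chosen for the example.

```python
# Sketch of a simple reflex vacuum agent in a two-square world.
# Percepts: (location, status); actions: "suck", "left", "right".

def vacuum_agent(location, status):
    """Suck if the current square is dirty, otherwise move to the other square."""
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

print(vacuum_agent("A", "dirty"))  # suck
print(vacuum_agent("A", "clean"))  # right
```

The performance measure (cleanliness, efficiency, speed) is not computed here; it would be tracked by the environment, not the agent.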
Properties of Task Environment
1. Fully Observable vs. Partially Observable
• Fully Observable: The agent can see the entire environment at any time.
• Example: In Chess, all pieces on the board are visible to both players. So, the
environment is fully observable.
• Partially Observable: The agent has limited information about the environment.
• Example: In Poker, players can only see their own cards, not the opponents’. So, it's
partially observable.
2. Deterministic vs. Stochastic
• Deterministic: The next state is completely determined by the current state and
action.
• Example: In solving a maze, taking a step always leads to a specific new position—
predictable outcome.
• Stochastic: The outcome is uncertain; randomness is involved.
• Example: In robot navigation, the robot might slip or its sensors may give noisy
data, making outcomes uncertain.
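The contrast between deterministic and stochastic transitions can be shown with two transition functions. This is a sketch under stated assumptions: the 1-D position model and the 10% slip probability are illustrative, not from any particular robot.

```python
import random

def deterministic_step(position, move):
    # Deterministic: the same state and action always give the same next state.
    return position + move

def stochastic_step(position, move, slip_prob=0.1, rng=random):
    # Stochastic: with probability slip_prob the robot "slips" and stays put,
    # so the outcome of an action is uncertain.
    if rng.random() < slip_prob:
        return position
    return position + move

print(deterministic_step(3, 1))  # 4, every time
```

Repeated calls to `stochastic_step` with the same arguments can return different results, which is exactly what makes planning in stochastic environments harder.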
3. Single-Agent vs. Multi-Agent
• Single-Agent: The agent acts alone with no other competing or cooperating
agents.
• Example: Pathfinding in a maze by a robot.
• Multi-Agent: Multiple agents interact, possibly with competing goals.
• Example: In online multiplayer games, players (agents) compete or cooperate.
Hence, learning agents are able to learn, analyze their own performance, and look for new
ways to improve it.