5. Intelligent Agents (26-07-2024)


BCSE306

Artificial Intelligence

Dr. Priyanka N
Assistant Professor
School of Computer Science and Engineering (SCOPE), VIT Vellore
[email protected]
What is an Agent?
• Artificial intelligence is defined as the study of rational
agents.
• A rational agent could be anything that makes
decisions, such as a person, firm, machine, or
software.
• An AI system is composed of an agent and its
environment. The agents act in their environment.
The environment may contain other agents.
• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
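The perceive-act cycle described above can be sketched as a short program. The following Python sketch is illustrative only and not from the slides; the Environment methods percept() and execute() are hypothetical names.

    class Agent:
        """Skeleton agent: maps a percept to an action."""
        def program(self, percept):
            raise NotImplementedError  # concrete agents override this

    def run(agent, environment, steps=10):
        """Basic agent-environment loop: sense, decide, act."""
        for _ in range(steps):
            percept = environment.percept()   # sensors read the environment
            action = agent.program(percept)   # the agent function picks an action
            environment.execute(action)       # actuators change the environment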
Simple agent function for the vacuum-cleaner world
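The slide's figure is not reproduced here. The agent function it refers to is the standard reflex agent for the two-square vacuum world (locations A and B), sketched below in Python; the percept is a (location, status) pair.

    def reflex_vacuum_agent(percept):
        """Reflex agent function for the two-square vacuum-cleaner world."""
        location, status = percept
        if status == 'Dirty':
            return 'Suck'
        elif location == 'A':
            return 'Right'
        else:                 # location == 'B'
            return 'Left'

    # Example: reflex_vacuum_agent(('A', 'Dirty')) returns 'Suck'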
PEAS in Artificial Intelligence
• PEAS stands for Performance measure, Environment, Actuators, Sensors.
• Rational Agent: A rational agent considers all possibilities and performs the most efficient action, i.e. the one that maximizes its performance measure.
• For example, a route-finding agent chooses the shortest path with the lowest cost for high efficiency.
• Performance Measure: The performance measure is the criterion that defines the success of an agent. Performance varies across agents based on their different percepts.
• Environment: The environment is the surroundings of an agent at every instant. It keeps changing with time if the agent is set in motion. The major dimensions along which environments are classified (detailed in the next section) are:
 Fully Observable vs Partially Observable
 Deterministic vs Stochastic
 Competitive vs Collaborative
 Single-agent vs Multi-agent
 Static vs Dynamic
 Discrete vs Continuous
 Episodic vs Sequential
 Known vs Unknown
• Actuator: An actuator is a part of the agent that delivers the output of an action to the environment.
• Sensor: Sensors are the receptive parts of an agent that take in the input for the agent.
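As a worked example (an illustrative assumption, not taken from the slides), a PEAS description for the vacuum-cleaner agent above could be recorded as plain data:

    # Illustrative PEAS description for the two-square vacuum-cleaner agent.
    vacuum_peas = {
        "Performance measure": ["dirt cleaned", "time taken", "electricity consumed"],
        "Environment":         ["squares A and B", "dirt"],
        "Actuators":           ["wheels (move Left/Right)", "suction (Suck)"],
        "Sensors":             ["location sensor", "dirt sensor"],
    }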
Types of Environments in AI
1. Fully Observable vs Partially Observable
• When the agent's sensors can access the complete state of the environment at each point in time, the environment is said to be fully observable; otherwise it is partially observable.
• A fully observable environment is easy to operate in, as there is no need to keep track of the history of the surroundings.
• An environment is called unobservable when the agent has no sensors at all.
• Examples:
 Chess – the board is fully observable, and so are the opponent's moves.
 Driving – the environment is partially observable, because what is around the corner is not known.
2. Deterministic vs Stochastic
• When the agent's current state and chosen action completely determine the next state of the environment, the environment is said to be deterministic.
• A stochastic environment is random in nature: the outcome of an action is not unique and cannot be completely determined by the agent.
• Examples:
 Chess – there are only a limited number of possible moves for a piece in the current state, and these moves can be determined exactly.
 Self-driving cars – the outcomes of the car's actions are not unique; they vary from time to time.
3. Competitive vs Collaborative
• An agent is said to be in a competitive environment when it competes against another agent to optimize the output.
• The game of chess is competitive, as the agents compete with each other to win the game, which is the output.
• An agent is said to be in a collaborative environment when multiple agents cooperate to produce the desired output.
• When multiple self-driving cars are on the road, they cooperate with each other to avoid collisions and reach their destinations, which is the desired output.
4. Single-agent vs Multi-agent
• An environment consisting of only one agent is said to be a single-agent environment.
• A person left alone in a maze is an example of a single-agent system.
• An environment involving more than one agent is a multi-agent environment.
• The game of football is multi-agent, as it involves 11 players on each team.
5. Dynamic vs Static
• An environment that keeps changing while the agent is performing some action is said to be dynamic.
• A roller coaster ride is dynamic, as it is set in motion and the environment keeps changing every instant.
• An idle environment with no change in its state is called a static environment.
• An empty house is static, as there is no change in the surroundings when an agent enters.
6. Discrete vs Continuous
• If an environment consists of a finite number of actions that can be performed in it to obtain the output, it is said to be a discrete environment.
• The game of chess is discrete, as it has only a finite number of moves. The number of moves might vary with every game, but it is still finite.
• An environment in which the possible actions cannot be enumerated, i.e. is not discrete, is said to be continuous.
• Self-driving cars operate in a continuous environment, as their actions (driving, parking, etc.) cannot be enumerated.
7. Episodic vs Sequential
• In an episodic task environment, the agent's experience is divided into atomic incidents or episodes. There is no dependency between the current and previous incidents.
• In each incident, the agent receives input from the environment and then performs the corresponding action.
• Example: Consider a pick-and-place robot used to detect defective parts on a conveyor belt. Each time, the robot (agent) makes its decision based on the current part alone, i.e. there is no dependency between the current and previous decisions.
7. Episodic vs Sequential
• In a sequential environment, previous decisions can affect all future decisions.
• The agent's next action depends on what actions it has taken previously and what action it is supposed to take in the future.
• Example: Checkers – the previous move can affect all the following moves. The contrast with the episodic case is sketched in code below.
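The episodic/sequential distinction can be made concrete in code. This is a hypothetical sketch (the percept values and the rules are invented for illustration): the episodic agent decides from the current percept alone, while the sequential agent carries state from earlier steps.

    def episodic_agent(percept):
        """Episodic: the decision depends only on the current percept."""
        return 'Reject' if percept == 'defective' else 'Accept'

    class SequentialAgent:
        """Sequential: the decision also depends on earlier percepts."""
        def __init__(self):
            self.history = []              # internal state carried across steps

        def act(self, percept):
            self.history.append(percept)
            # Invented rule: too many defects seen so far changes the behaviour.
            if self.history.count('defective') > 3:
                return 'StopConveyor'
            return 'Reject' if percept == 'defective' else 'Accept'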
8. Known vs Unknown
• In a known environment, the outcomes (or outcome probabilities) of all actions are given.
• In an unknown environment, the agent has to gain knowledge about how the environment works before it can make good decisions.