Chapter 2 AI
ARTIFICIAL INTELLIGENCE
Intelligent Agents
Introduction
Agents and Environments
Rationality of Intelligent Agents
Structure of Intelligent Agents
PEAS Description & Environment Properties
Agent Types
Simple reflex agent
Model-based reflex agent
Goal-based agent
Utility-based agent
Learning agent
Important Concepts and Terms
INTRODUCTION
An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through effectors.
A human agent has:
eyes, ears, and other organs for sensors,
hands, legs, mouth, and other body parts for effectors.
A robotic agent substitutes:
cameras and infrared range finders for the sensors
various motors for the effectors.
A software agent is a program that performs parts of its tasks autonomously and
interacts with its environment in a useful manner.
AGENTS AND ENVIRONMENTS
Example: the vacuum-cleaner world with two locations, A and B. The agent perceives its location and whether that location is dirty, and chooses an action. A partial tabulation of the agent function:
Percept        Action
[A, Clean]     Right
[A, Dirty]     Suck
[B, Clean]     Left
[B, Dirty]     Suck
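The table above can be implemented directly as a lookup from percepts to actions. Below is a minimal illustrative sketch in Python; the function and variable names are hypothetical, not a prescribed implementation.

# Sketch of a table-driven vacuum agent, assuming percepts of the form
# (location, status) as in the table above. Names are illustrative.
AGENT_TABLE = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def table_driven_vacuum_agent(percept):
    """Look up the action associated with the current percept."""
    return AGENT_TABLE[percept]

print(table_driven_vacuum_agent(("A", "Dirty")))   # -> "Suck"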
RATIONALITY
• A rational agent should select an action that is expected to maximize its performance measure,
given the evidence provided by the percept sequence and whatever built-in knowledge the
agent has.
• Rationality is distinct from omniscience.
• Omniscience means being all-knowing, with infinite knowledge.
• An omniscient agent knows the actual outcome of its actions and can act accordingly.
• Omniscience is impossible in reality.
Agents can perform actions in order to modify future percepts so as to obtain useful
information (information gathering, exploration).
An agent is autonomous if its behavior is determined by its own experience (with ability to
learn and adapt).
In summary, what is rational at any given time depends on four things:
The performance measure that defines degree of success.
The agent's percept sequence to date (everything it has perceived so far).
What the agent knows about the environment.
The actions that the agent can perform.
PEAS DESCRIPTION
To design a rational agent, we must first specify the setting for intelligent agent design: the task environment.
PEAS:
Performance measure: How good is the behavior of the agent operating in the environment?
Environment: What things are considered to be part of the environment, and what things are excluded?
Actuators: How can the agent perform actions in the environment?
Sensors: How can the agent perceive the environment?
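As an illustration, a PEAS description can be written down as a simple record. The sketch below uses the vacuum-cleaner example from earlier; the class and field names are hypothetical.

# Illustrative only: a PEAS description recorded as a small data structure.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

vacuum_peas = PEAS(
    performance_measure=["amount of dirt cleaned", "time taken"],
    environment=["squares A and B", "dirt"],
    actuators=["move left", "move right", "suck"],
    sensors=["location sensor", "dirt sensor"],
)
print(vacuum_peas)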
Quiz (10%): For each of the following agents, develop a PEAS description of the task environment:
An intelligent agent is an autonomous entity that acts upon an environment using sensors and
actuators to achieve its goals.
An intelligent agent may learn from the environment to achieve its goals.
A thermostat is an example of an intelligent agent.
The four main rules for an AI agent are as follows (a minimal code sketch of the resulting loop appears after the list):
• Rule 1: An AI agent must have the ability to perceive the environment.
• Rule 2: The observation must be used to make decisions.
• Rule 3: Decision should result in an action.
• Rule 4: The action taken by an AI agent must be a rational action.
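These four rules amount to a perceive-decide-act loop. A minimal, hypothetical sketch in Python (the environment, sensing, and acting are stubbed out; all names are illustrative):

# Minimal perceive-decide-act loop illustrating the four rules.
def perceive(environment):
    return environment["percept"]                        # Rule 1: perceive the environment

def decide(percept):
    return "Suck" if percept == "Dirty" else "Right"     # Rules 2-3: observation -> decision -> action

def act(environment, action):
    print(f"Executing rational action: {action}")        # Rule 4: the chosen action should be rational

environment = {"percept": "Dirty"}
action = decide(perceive(environment))
act(environment, action)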
STRUCTURE OF INTELLIGENT AGENTS
The task of AI is to design an agent program that implements the agent function: the
mapping from percepts to actions.
This program will run on some sort of computing device, which we will call the architecture.
The architecture might be a plain computer, or it might include special-purpose hardware for
certain tasks, such as processing camera images or filtering audio input.
Architecture is the machinery that the agent executes on.
It is a device with sensors and actuators, for example, a robotic car, a camera, and a PC.
In general, the architecture makes the percepts from the sensors available to the program, runs
the program, and feeds the program's action choices to the effectors as they are generated.
agent = architecture + program
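A sketch of this split: the architecture repeatedly collects percepts from the sensors, runs the agent program, and passes the chosen actions to the effectors. All names below are illustrative, not a fixed API.

# Illustrative: architecture = sensors + effectors + a run loop;
# the agent program is just a function from percept to action.
class Architecture:
    def __init__(self, sensors, effectors):
        self.sensors = sensors          # callable returning the current percept
        self.effectors = effectors      # callable executing an action

    def run(self, agent_program, steps=3):
        for _ in range(steps):
            percept = self.sensors()            # make percepts from the sensors available
            action = agent_program(percept)     # run the program
            self.effectors(action)              # feed the program's action choice to the effectors

def agent_program(percept):
    return "Suck" if percept == "Dirty" else "Right"

arch = Architecture(sensors=lambda: "Dirty", effectors=lambda a: print("action:", a))
arch.run(agent_program)   # agent = architecture + program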
AGENT TYPES
• The simple reflex agent does not consider any part of the percept history in its decision
and action process.
• The simple reflex agent works on condition-action rules, which means it maps the current state
directly to an action. For example, a room-cleaner agent acts only if there is dirt in the room.
• Problems with the simple reflex agent design approach:
• They have very limited intelligence.
• They have no knowledge of non-perceptual parts of the current state.
• They are not adaptive to changes in the environment.
• A simple reflex agent works by finding a rule whose condition matches the current
situation and then doing the action associated with that rule.
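A hedged sketch of this rule-matching idea: condition-action rules are stored as (condition, action) pairs, and the first rule whose condition matches the current percept fires. The rules and percept format are made up for illustration.

# Illustrative condition-action rules for a simple reflex room cleaner.
# Each rule is a (condition, action) pair; conditions test only the current percept.
RULES = [
    (lambda percept: percept["status"] == "Dirty", "Suck"),
    (lambda percept: percept["location"] == "A",   "Right"),
    (lambda percept: percept["location"] == "B",   "Left"),
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):     # find a rule whose condition matches the situation
            return action          # do the action associated with that rule
    return "NoOp"

print(simple_reflex_agent({"location": "A", "status": "Dirty"}))   # -> "Suck"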
• A model-based reflex agent maintains an internal state, built from the percept history, that reflects
aspects of the world it cannot currently perceive.
• One notable example of a model-based reflex agent implementation is the Waymo project by
Google.
• Waymo has successfully deployed self-driving cars equipped with advanced sensors and
model-based reflex algorithms in their decision-making process.
• These autonomous vehicles navigate the roads, making decisions based on percept history and
real-time sensory input.
• However, it is important to note that such projects are currently limited to specific regions and
regulatory frameworks.
• The car is equipped with sensors that detect obstacles, such as the brake lights of cars ahead or
pedestrians walking on the sidewalk.
• As it drives, these sensors feed percepts into the car's memory and internal model of its
environment.
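A simplified sketch of the model-based reflex idea (a toy example, not a description of Waymo's actual system): the agent keeps an internal state updated from each percept, then applies condition-action rules to that state rather than to the raw percept alone.

# Illustrative model-based reflex agent: internal state + condition-action rules.
class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {"obstacle_ahead": False}   # internal model of the world

    def update_state(self, percept):
        # Combine the new percept with the previous state (the agent's model).
        if percept in ("brake_lights_ahead", "pedestrian_ahead"):
            self.state["obstacle_ahead"] = True
        elif percept == "road_clear":
            self.state["obstacle_ahead"] = False

    def choose_action(self, percept):
        self.update_state(percept)
        return "brake" if self.state["obstacle_ahead"] else "drive"

agent = ModelBasedReflexAgent()
for p in ["road_clear", "brake_lights_ahead", "road_clear"]:
    print(p, "->", agent.choose_action(p))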
• Goal-based agents
• Goal-based agent: an agent that selects actions that it believes will achieve explicitly
represented goals.
• Goal-based agents are the same as model-based agents, except:
• They contain an explicit statement of the goals of the agent.
• These goals are used to choose the best action at any given time.
• Goal-based agents can therefore choose an action that achieves nothing in the short
term but may, in the long term, lead to a goal being achieved.
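A minimal sketch of goal-based action selection: the agent holds an explicit goal and picks the action whose predicted outcome satisfies it, or at least moves the state forward, even when there is no immediate payoff. The goal, states, and transition model below are made up.

# Illustrative goal-based agent: actions are chosen by predicting their
# outcomes and checking them against an explicit goal state.
GOAL = "charged"   # explicit statement of the agent's goal

def predict(state, action):
    # Hypothetical model: (state, action) -> predicted next state.
    transitions = {
        ("low_battery", "go_to_dock"): "at_dock",
        ("at_dock", "plug_in"): "charged",
        ("low_battery", "keep_cleaning"): "low_battery",
    }
    return transitions.get((state, action), state)

def goal_based_agent(state, actions):
    # Prefer an action that reaches the goal directly...
    for action in actions:
        if predict(state, action) == GOAL:
            return action
    # ...otherwise take an action that at least changes the state (long-term progress).
    for action in actions:
        if predict(state, action) != state:
            return action
    return actions[0]

print(goal_based_agent("low_battery", ["keep_cleaning", "go_to_dock", "plug_in"]))   # -> "go_to_dock"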
• Utility-based agents
• Utility-based agent: an agent that selects actions that it believes will maximize the expected
utility of the outcome state.
• Goals can be useful, but are sometimes too simplistic.
• Utility-based agents deal with this by assigning a utility to each state of the world.
• This utility defines how "happy" the agent will be in such a state.
• Explicitly stating the utility function also makes it easier to define the desired behavior of
utility-based agents.
• The word "utility" here refers to "the quality of being useful."
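A minimal sketch of utility-based action selection: each outcome state gets a numeric utility, and the agent picks the action that maximizes the expected utility of the outcome. The utilities and outcome probabilities below are invented for illustration.

# Illustrative utility-based agent: choose the action with the highest expected utility.
UTILITY = {"on_time": 10.0, "slightly_late": 3.0, "very_late": 0.0}

# Hypothetical model: action -> list of (outcome state, probability).
OUTCOMES = {
    "highway":    [("on_time", 0.7), ("very_late", 0.3)],
    "back_roads": [("on_time", 0.4), ("slightly_late", 0.6)],
}

def expected_utility(action):
    return sum(prob * UTILITY[state] for state, prob in OUTCOMES[action])

def utility_based_agent(actions):
    return max(actions, key=expected_utility)

for a in OUTCOMES:
    print(a, expected_utility(a))
print("chosen:", utility_based_agent(list(OUTCOMES)))   # 7.0 vs 5.8 -> "highway"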
LEARNING AGENTS
• The performance element can be any of the four agent types described above.
• The learning element is responsible for suggesting improvements to any part of the
performance element.
• The input to the learning element comes from the critic, which evaluates the agent's behavior
against a fixed performance standard.
• The problem generator is responsible for suggesting actions that will result in new knowledge
about the world being acquired.
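A hedged sketch of how these four components fit together in one loop; every component is a stub and all names are illustrative.

# Illustrative learning agent skeleton: performance element, critic,
# learning element, and problem generator wired into one step() loop.
class LearningAgent:
    def __init__(self):
        self.rules = {}   # the performance element's knowledge, improved over time

    def performance_element(self, percept):
        # Any of the four agent types above could sit here; a rule table is used for illustration.
        return self.rules.get(percept, "Right")

    def critic(self, percept, action):
        # Evaluates the chosen action against a fixed performance standard.
        return 1.0 if (percept, action) == ("Dirty", "Suck") else 0.0

    def learning_element(self, percept, action, feedback):
        # Uses the critic's feedback to suggest improvements to the performance element.
        if feedback == 0.0 and percept == "Dirty":
            self.rules[percept] = "Suck"

    def problem_generator(self):
        # Suggests exploratory actions that may yield new knowledge about the world.
        return "Right"

    def step(self, percept, explore=False):
        action = self.problem_generator() if explore else self.performance_element(percept)
        feedback = self.critic(percept, action)
        self.learning_element(percept, action, feedback)
        return action

agent = LearningAgent()
print(agent.step("Dirty"))                  # first try fails the standard, so the learning element adds a rule
print(agent.step("Dirty"))                  # the improved performance element now chooses "Suck"
print(agent.step("Clean", explore=True))    # the problem generator suggests an exploratory action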