AI Chapter Two

Chapter Two: Intelligent Agents

Intelligent Agent
• An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that environment
through effectors.
– A human agent has eyes, ears, and other organs for sensors, and hands, legs,
mouth, and other body parts for effectors.
– A robotic agent substitutes cameras and infrared range finders for the sensors
and various motors for the effectors.
• The agent is assumed to exist in an environment in which it perceives and
acts.
Rational Agent
– A rational agent is one that does the right thing.
– An agent is rational if it does the right thing to achieve its specified goal.
– The right action is the one that will cause the agent to be most successful.
– But how and when do we evaluate the agent's success?


• I want to build a robot that will
– Clean my house
– Handle my emails (Information filtering agent)
– Cook when I don’t want to
– Take a note when I am in a meeting
– Cut my hair
– Wash my clothes
– Fix my car (or take it to be fixed)
i.e. do the things that I don’t feel like doing…
• AI is the science of building software or
physical agents that act rationally with respect
to a goal.

Agent                  Human Agent           Physical Agent
Sensors                Eyes, Ears, Nose      Cameras, Scanners, Microphone,
                                             infrared range finders
Effectors/Actuators    Hands, Legs, Mouth    Artificial hands, artificial legs,
                                             Speakers
How should agents act?
• A rational agent should strive to "do the right thing",
based on what it can perceive and the actions it can
perform.
– What does the right thing mean? It is the action that will cause the
agent to be most successful: the one expected to maximize goal
achievement, given the available information.
• A rational agent is not omniscient.
– An omniscient agent knows the actual outcome of its actions
and can act accordingly; but in reality omniscience is impossible.
– Rational agents take actions with expected success, whereas an
omniscient agent takes actions that are 100% sure to succeed.
Example: Is the agent rational?
Alex was walking along the road to the bus station when he saw
an old friend across the street. There was no traffic, so,
being rational, he started to cross the street. Meanwhile, a
big banner fell from above, and before he finished crossing
the road he was flattened.
Was Alex irrational to cross the street?
This points out that rationality is concerned with
expected success, given what has been perceived.
– Crossing the street was rational, because most of the time the crossing
would be successful, and there was no way he could have foreseen the
falling banner.
– The example shows that we cannot blame an agent for failing to take
into account something it could not perceive, or for failing to take an
action that it is incapable of taking.
Rational agent
• In summary, what is rational at any given point depends on the
PEAS (Performance measure, Environment, Actuators, Sensors)
framework:
– Performance measure
• The performance measure defines the degree of success of the agent.
– Environment
• Knowledge: what the agent already knows about the environment.
– Actuators (generating actions)
• The actions that the agent can perform back on the environment.
– Sensors (receiving percepts)
• Percept sequence: everything that the agent has perceived so far
concerning the current scenario in the environment.
• For each possible percept sequence, an ideal rational
agent should do whatever action is expected to
maximize its performance measure, on the basis of
– the evidence provided by the percept sequence
– and whatever built-in knowledge the agent has.
For example,
• if an agent does not look both ways before crossing a
busy road, then its percept sequence will not tell it
that there is a large truck approaching at high speed.
– First, it would not be rational to cross the road: the risk of
crossing without looking is too great.
– Second, an ideal rational agent would have chosen the
"looking" action before stepping into the street, because
looking helps maximize the expected performance.

• How do we decide whether an agent is
successful or not?
– A rational agent should do whatever action is expected to
maximize its performance measure, on the basis of the
evidence provided by the percept sequence and whatever
built-in knowledge the agent has.
• What is the performance measure for “crossing
the road”?
• What about “Chess Playing”?
Example: PEAS
• Consider the task of designing an automated taxi
driver agent, with the goal of "creating a comfortable
trip and maximizing profits" (a sketch of this specification
in code follows below):
– Performance measure: safety, speed, following traffic rules
– Environment: roads, other traffic, pedestrians, customers
– Actuators: artificial legs & hands, speaker
– Sensors: cameras, GPS, engine sensors, microphone (recorder)
– Goal: driving safely from source to destination
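
As a minimal sketch, this PEAS specification can be written down as plain data
in Python. The PEAS class and its field names below are our own illustration,
not a standard API:

from dataclasses import dataclass

@dataclass
class PEAS:
    """Record of a PEAS task description (illustrative, not a standard API)."""
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

# The automated taxi driver from this slide, written as data:
taxi_driver = PEAS(
    performance_measure=["safety", "speed", "follows traffic rules"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["artificial legs & hands", "speaker"],
    sensors=["cameras", "GPS", "engine sensors", "microphone"],
)
print(taxi_driver.performance_measure)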
Autonomy
• If the agent's actions are based completely on built-in knowledge,
such that it need pay no attention to its percepts,
then we say that the agent lacks autonomy.
• For example, if the clock manufacturer already knows that the
clock's owner will be going to Australia on some particular date,
then a mechanism could be built in to adjust the hands
automatically by some hours at just the right time.
• This would certainly be successful behavior, but the intelligence
seems to belong to the clock's designer rather than to the clock
itself.
• An autonomous agent's behavior can be based on both its own
experience and its built-in knowledge.
STRUCTURE OF INTELLIGENT AGENTS
• So far we have described the behavior of an agent; here we focus on
how the insides work.
• The job of AI is to design the agent program:
– a function that implements the agent mapping from
percepts to actions.
• We assume this program will run on some sort of computing device, which we
will call the architecture.
– The architecture might be a plain computer, or it might
include special-purpose hardware for certain tasks, such
as processing camera images or filtering audio input.
• The relationship among agents, architectures, and
programs can be summed up as follows:

    agent = architecture + program

Programs
– Accept percepts from an environment and generate actions.
• Before designing an agent program, we need to know the possible percepts
and actions.

Architecture
– makes the percepts from the sensors available to the program,
– runs the program,
– and feeds the program's action choices to the effectors as
they are generated.
• The design of an agent is complex because of
– the complexity of the relationship among the behavior of the agent,
– the percept sequence generated by the environment,
– and the goals that the agent is supposed to achieve.
• A robot designed to inspect parts as they come by on a conveyor belt can make
use of a number of simplifying assumptions: that the only thing on the
conveyor belt will be parts of a certain kind, and that there are only two
actions: accept the part or mark it as a reject.
• In contrast, imagine an agent designed to fly a flight simulator for a 747.
The simulator is a very detailed, complex environment, and the software agent
must choose from a wide variety of actions in real time.
Program Skeleton of an Agent

function SKELETON-AGENT(percept) returns action
    static: knowledge, the agent's memory of the world

    knowledge ← UPDATE-KNOWLEDGE(knowledge, percept)
    action ← SELECT-BEST-ACTION(knowledge)
    knowledge ← UPDATE-KNOWLEDGE(knowledge, action)
    return action

On each invocation, the agent's knowledge base is updated to reflect the new
percept, the best action is chosen, and the fact that the action was taken is
also stored in the knowledge base. The knowledge base persists from one
invocation to the next.
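
A minimal sketch of this skeleton in runnable Python follows. Only the
update, select, update, return structure comes from the pseudocode above; the
helper implementations and the percept format are invented for illustration:

def update_knowledge(knowledge, event):
    """Record a percept or an action in the agent's memory of the world."""
    knowledge.append(event)
    return knowledge

def select_best_action(knowledge):
    """Toy policy: react to the most recently recorded percept."""
    return f"act-on({knowledge[-1]})"

knowledge = []  # persists between invocations, like the 'static' variable

def skeleton_agent(percept):
    global knowledge
    knowledge = update_knowledge(knowledge, percept)
    action = select_best_action(knowledge)
    knowledge = update_knowledge(knowledge, action)  # remember what we did
    return action

print(skeleton_agent("dirty-floor"))  # -> act-on(dirty-floor)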
Types of agents
• Four basic types in order of increasing
generality:
– Simple reflex agents
– Model based reflex agents
– Goal-based agents
– Utility-based agents
1. Simple reflex agents

• A simple reflex agent works by finding a rule whose condition matches
the current situation (as defined by the percept) and
then doing the action associated with that rule.
E.g. if the car in front brakes and its brake lights
come on, then the driver should notice this and
initiate braking.
– Some processing is done on the visual input to establish the
condition.
– If "the car in front is braking", then this triggers some established
connection in the agent program to the action "initiate braking".
We call such a connection a condition-action rule, written as:
if car-in-front-is-braking then initiate-braking.
• Humans also have many such connections, some of
which are learned responses and some of which are
innate (inborn) responses, e.g.
– blinking when something approaches the eye.
Structure of a simple reflex agent
[Figure: sensors report "what the world is like now"; condition-action rules
determine "what action I should do now"; effectors act on the environment.]
function SIMPLE-REFLEX-AGENT(percept) returns action
    static: rules, a set of condition-action rules

    state ← INTERPRET-INPUT(percept)
    rule ← RULE-MATCH(state, rules)
    action ← RULE-ACTION[rule]
    return action
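
A minimal Python sketch of this loop, using the braking example above; the
percept format and the rule table are assumptions made for illustration:

# Condition-action rules: condition -> action.
RULES = {
    "car-in-front-is-braking": "initiate-braking",
}

def interpret_input(percept):
    """Reduce the raw percept to a condition; here the percept already is one."""
    return percept

def simple_reflex_agent(percept):
    state = interpret_input(percept)        # INTERPRET-INPUT
    return RULES.get(state, "do-nothing")   # RULE-MATCH + RULE-ACTION

print(simple_reflex_agent("car-in-front-is-braking"))  # -> initiate-braking

Note that the agent consults only the current percept; nothing is remembered
between calls.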
2. Model based reflex agents

• The simple reflex agent described before will work only if the correct
decision can be made on the basis of the current percept.
– If the car in front is a recent model, there is a centrally mounted brake
light. With older models there is no centrally mounted brake light, so the
agent may get confused:
– Is it a parking light? Is it a brake light? Is it a turn signal light?
• Some sort of internal state is needed in order to choose an action.
• Consider the following more obvious case: from time to time, the driver looks
in the rear-view mirror to check on the locations of nearby vehicles.
• When the driver is not looking in the mirror, the vehicles in the next lane
are invisible (i.e., the states in which they are present and absent are
indistinguishable); but in order to decide on a lane-change maneuver, the
driver needs to know whether or not they are there.
• Updating this internal state information as time goes
by requires two kinds of knowledge to be encoded in
the agent program.
• First, we need some information about how the
world evolves independently of the agent; for
example, that an overtaking car generally will be
closer behind than it was a moment ago.
• Second, we need some information about how the
agent's own actions affect the world;
– for example, that when the agent changes lanes to the
right, there is a gap (at least temporarily) in the lane it was
in before, or that on the freeway one is usually about five miles
north of where one was five minutes ago.

Structure of model based reflex agents
[Figure: the agent maintains internal state; knowledge of "how the world
evolves" and "what my actions do" updates "what the world is like now",
which the condition-action rules map to "what action I should do now".]
function MODEL-BASED-REFLEX-AGENT(percept) returns action
    static: state, a description of the current world state
            rules, a set of condition-action rules

    state ← UPDATE-STATE(state, percept)
    rule ← RULE-MATCH(state, rules)
    action ← RULE-ACTION[rule]
    state ← UPDATE-STATE(state, action)
    return action
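
A minimal Python sketch of the same loop with a toy world model; the state
keys, percepts, and the effects encoded in update_state are invented for
illustration:

# Rules test the internal state, not the raw percept.
RULES = [
    (lambda s: s.get("car-ahead-braking", False), "initiate-braking"),
    (lambda s: not s.get("lane-right-occupied", True), "change-lane-right"),
]

def update_state(state, event):
    """How the world evolves / what my actions do (toy version)."""
    if event == "percept:brake-lights-ahead":
        state["car-ahead-braking"] = True
    elif event == "percept:mirror-shows-right-lane-clear":
        state["lane-right-occupied"] = False
    elif event == "action:change-lane-right":
        state["in-lane"] = "right"  # record the effect of our own action
    return state

def model_based_reflex_agent(state, percept):
    state = update_state(state, "percept:" + percept)  # UPDATE-STATE (percept)
    action = next((act for cond, act in RULES if cond(state)), "do-nothing")
    state = update_state(state, "action:" + action)    # UPDATE-STATE (action)
    return state, action

state = {}
state, action = model_based_reflex_agent(state, "mirror-shows-right-lane-clear")
print(action)  # -> change-lane-right (remembered even after the mirror glance)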
3. Goal based agents
• Choose actions that achieve the goal (an agent with
explicit goals).
• Involves consideration of the future:
– Knowing about the current state of the environment is not
always enough to decide what to do.
• For example, at a road junction, the taxi can turn left,
turn right, or go straight on.
– The right decision depends on where the taxi is trying to
get to. As well as a current state description, the agent
needs some sort of goal information, which describes
situations that are desirable, e.g. being at the passenger's
destination.
• The agent may need to consider long sequences of twists
and turns to find a way to achieve the goal.
• Notice that decision-making of this kind is fundamentally
different from the condition-action rules described earlier,
in that it involves consideration of the future, both
– "What will happen if I do such-and-such?" and
– "Will that make me happy?"
• In the reflex agent designs, this information is not explicitly
used, because the designer has precomputed the correct
action for various cases.
For example
• The reflex agent brakes when it sees brake lights; a
goal-based agent, in principle, could instead reason that if the car
in front has its brake lights on, it will slow down.
Structure of a Goal-based agent
[Figure: internal state plus "how the world evolves" and "what my actions do"
let the agent predict "what it will be like if I do action A"; the prediction
is compared against the goals to decide "what action I should do now".]
function GOAL-BASED-AGENT(percept) returns action
    static: state, a description of the current world state
            goal, a description of the desired situation

    state ← UPDATE-STATE(state, percept)
    action ← SELECT-ACTION[state, goal]
    state ← UPDATE-STATE(state, action)
    return action
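
A minimal Python sketch of the junction example: the same percept leads to
different actions depending on the goal. The toy map and all names are
invented for illustration:

# Where each action at the junction leads (toy model of the world).
JUNCTION = {"turn-left": "airport", "turn-right": "city-centre",
            "go-straight": "suburbs"}

def select_action(state, goal):
    """Pick the action whose predicted outcome matches the goal."""
    for action, outcome in JUNCTION.items():
        if outcome == goal:  # "what will happen if I do such-and-such?"
            return action
    return "stop"

def goal_based_agent(state, percept, goal):
    state = dict(state, location=percept)    # UPDATE-STATE (percept)
    action = select_action(state, goal)      # SELECT-ACTION
    state = dict(state, last_action=action)  # UPDATE-STATE (action)
    return state, action

_, action = goal_based_agent({}, "at-junction", goal="airport")
print(action)  # -> turn-left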
4. Utility based agent
• Goals alone are not really enough to generate high-
quality behavior.
– For example, there are many action sequences that will get the
taxi to its destination, thereby achieving the goal, but some
are quicker, safer, more reliable, or cheaper than others.
We need to consider speed and safety.
• There may also be several goals that the agent
can aim for, none of which can be achieved
with certainty.
• Utility provides a way in which the likelihood
of success can be weighed up against the
importance of the goals.
• An agent that possesses an explicit utility
function can make rational decisions.
Structure of a utility-based agent
[Figure: like the goal-based agent, but the predicted state "what it will be
like if I do action A" is scored by a utility function ("how happy I will be
in such a state") before deciding "what action I should do now".]
function UTILITY-BASED-AGENT(percept) returns action
    static: state, a description of the current world state

    state ← UPDATE-STATE(state, percept)
    action ← SELECT-OPTIMAL-ACTION[state, goal]
    state ← UPDATE-STATE(state, action)
    return action
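
A minimal Python sketch: both routes below reach the destination (the goal),
but the agent ranks them by a utility function. The routes, scores, and
weights are invented for illustration:

# Predicted outcome of each candidate route (toy numbers in [0, 1]).
ROUTES = {
    "motorway":   {"speed": 0.9, "safety": 0.6},
    "back-roads": {"speed": 0.5, "safety": 0.9},
}

def utility(outcome):
    """How happy I will be in such a state: weigh speed against safety."""
    return 0.5 * outcome["speed"] + 0.5 * outcome["safety"]

def select_optimal_action(state, actions):
    # SELECT-OPTIMAL-ACTION: highest-utility predicted outcome wins.
    return max(actions, key=lambda a: utility(ROUTES[a]))

def utility_based_agent(state, percept):
    state = dict(state, location=percept)    # UPDATE-STATE (percept)
    action = select_optimal_action(state, list(ROUTES))
    state = dict(state, last_action=action)  # UPDATE-STATE (action)
    return state, action

_, action = utility_based_agent({}, "at-junction")
print(action)  # -> motorway (utility 0.75 beats 0.70)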
Agents and Environments
• Agent design is affected by the environment:
– actions are done by the agent on the environment, which in turn
provides percepts to the agent.
• Environments can be classified by several properties:
1. Accessible vs. inaccessible
• If an agent's sensory apparatus gives it access to the complete state of
the environment, then we say that the environment is accessible to that
agent.
• An environment is effectively accessible if the sensors detect all
aspects that are relevant to the choice of action.
• An accessible environment is convenient because the agent need not
maintain any internal state to keep track of the world.
• Taxi driving is only partially accessible.
– Any example of a fully accessible environment?

2. Deterministic vs. nondeterministic
Deterministic
– If the next state of the environment is completely determined by the
current state and the actions selected by the agents, then we say the
environment is deterministic.
– In principle, an agent need not worry about uncertainty in an accessible,
deterministic environment.
Nondeterministic
– If the environment is inaccessible, however, then it may appear to be
nondeterministic.
– This is particularly true if the environment is complex, making it hard to
keep track of all the inaccessible aspects.
• Taxi driving is nondeterministic.
– Any example of a deterministic environment?
• Thus, it is often better to think of an environment as deterministic or
nondeterministic from the point of view of the agent.
3. Static vs. dynamic
• If the environment can change while an agent is deliberating, then we say the
environment is dynamic for that agent;
• otherwise it is static.
• Static environments are easy to deal with because the agent need not keep
looking at the world while it is deciding on an action, nor need it worry
about the passage of time.
• Taxi driving is dynamic.
– Any example of a static environment?
4. Discrete vs. continuous.
• If there are a limited number of distinct, clearly defined percepts and actions
we say that the environment is discrete.
– Chess is discrete—there are a fixed number of possible moves on each
turn.
– Taxi driving is continuous—the speed and location of the taxi and the other
vehicles sweep through a range of continuous values
5. Episodic vs. sequential
• Does the next "episode" or event depend on
the actions taken in previous episodes?
• In an episodic environment, the agent's
experience is divided into "episodes".
– Each episode consists of the agent perceiving and then
performing a single action, and the choice of action in each
episode depends only on the episode itself.
– The quality of its action depends just on the episode itself.
• In a sequential environment, the current
decision could affect all future decisions.
• Taxi driving is sequential.
– Any example of an episodic environment?
Below are the properties of a number of familiar environments:

Problem                Accessible  Deterministic  Episodic  Static  Discrete
Part-picking robot     No          No             Yes       No      No
Web shopping program   No          No             No        No      Yes
Tutor                  No          No             No        Yes     Yes
Medical diagnosis      No          No             No        No      No
Taxi driving           No          No             No        No      No

Examples: Agents for Various Applications

Interactive English tutor
– Percepts: typed words (keyboard)
– Actions: print exercises, suggestions, corrections
– Goals/performance: maximize student's score on test
– Environment: set of students

Medical diagnosis system
– Percepts: symptoms, patient's answers
– Actions: questions, tests, treatments
– Goals/performance: healthy person, minimize costs
– Environment: patient, hospital

Part-picking robot
– Percepts: pixels of varying intensity
– Actions: pick up parts and sort into bins
– Goals/performance: place parts in correct bins
– Environment: conveyor belt with parts

Satellite image analyser
– Percepts: pixels of varying intensity and color
– Actions: print a categorization of the scene
– Goals/performance: correct categorization
– Environment: images from orbiting satellite

Refinery controller
– Percepts: temperature, pressure readings
– Actions: open and close valves; adjust temperature
– Goals/performance: maximize purity, yield, safety
– Environment: refinery
