Chapter 2 AI

Intelligent Agents

Uploaded by Fro Abera

FUNDAMENTALS OF

ARTIFICIAL INTELLIGENCE

CHAPTER TWO -INTELLIGENT AGENTS


2 CONTENTS

 Intelligent Agents
 Introduction
 Agents and Environments
 Rationality of Intelligent Agents
 Structure of Intelligent Agents
 PEAS Description & Environment Properties

 Agent Types
 Simple reflex agent
 Model-based reflex agent
 Goal-based agent
 Utility-based agent

 Learning agent
 Important Concepts and Terms
3 INTRODUCTION

 An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through effectors.
 A human agent has:
 eyes, ears, and other organs for sensors,
 hands, legs, mouth, and other body parts for effectors.
 A robotic agent substitutes:
 cameras and infrared range finders for the sensors
 various motors for the effectors.
 Software agents are programs that perform parts of their tasks autonomously and
interact with their environment in a useful manner.
4 AGENTS AND ENVIRONMENT

 Agents interact with environments through sensors and actuators.


 Agent function is an abstract mathematical description
 Agent program is a concrete implementation running on the agent architecture.
 The agent function maps from percept histories to actions:
 f : P* → A
 A percept is a piece of information perceived by the agent.
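As a sketch, the agent function f : P* → A can be written as a Python function taking the whole percept sequence as input; the two-square vacuum world and the stopping rule here are illustrative, not part of the formal definition:

```python
# Sketch of an agent function f: P* -> A. Unlike a simple reflex rule,
# it may use the entire percept history, not just the latest percept.

def agent_function(percept_history):
    location, status = percept_history[-1]
    if status == "Dirty":
        return "Suck"
    # Using history: if both squares have been seen clean, stop.
    clean = {loc for loc, st in percept_history if st == "Clean"}
    if clean == {"A", "B"}:
        return "NoOp"
    return "Right" if location == "A" else "Left"
```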
5 AGENTS AND ENVIRONMENT

 The agent program runs on the physical architecture to produce f.


 Agent =Architecture + Program

 Figure - A vacuum-cleaner world with just two locations.


 Percepts: location and contents, e.g., [A, Dirty]
 Actions: Left, Right, Suck, …
 Note: This only uses the last percept of the percept history, so this agent cannot learn from
experience.

Percept       Action
[A, Clean]    Right
[A, Dirty]    Suck
[B, Clean]    Left
[B, Dirty]    Suck
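The percept-to-action table above can be implemented directly as a lookup table; a minimal sketch:

```python
# Table-driven vacuum-cleaner agent: the table maps the latest percept
# (location, status) to an action, exactly as in the table above.

VACUUM_TABLE = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def vacuum_agent(percept):
    """Return the action for the current percept only (no memory)."""
    return VACUUM_TABLE[percept]
```

Because the function looks at only the current percept, it cannot learn from experience, as noted above.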
6 RATIONALITY

 A rational agent is one that does the right thing.


 Based on what it can perceive and
 The actions it can perform.
 The right action is the one that will cause the agent to be most successful.
 Performance measure: An objective criterion for success of an agent's behavior.
 E.g., performance measure of a vacuum-cleaner agent could be:
 Amount of dirt cleaned up,
 Amount of time taken,
 Amount of electricity consumed,
 Amount of noise generated, etc.
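One possible way to combine these criteria into a single performance measure is a weighted score; the weights below are purely illustrative:

```python
# Hypothetical performance measure for the vacuum agent: reward dirt
# cleaned, penalize time taken and electricity consumed. The weights
# are illustrative assumptions, not values from the text.

def performance(dirt_cleaned, time_taken, electricity_used):
    return 10 * dirt_cleaned - 1 * time_taken - 2 * electricity_used
```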
7 RATIONALITY

 What is a rational agent?


 Rational Agent: For each possible percept sequence,

• A rational agent should select an action that is expected to maximize its performance measure,
given the evidence provided by the percept sequence and whatever built-in knowledge the
agent has.
• Rationality is distinct from omniscience.
• Omniscience (all-knowing with infinite knowledge).
• An omniscient agent knows the actual outcome of its actions and can act accordingly;
• omniscience is impossible in reality.
8 RATIONALITY

 Agents can perform actions in order to modify future percepts so as to obtain useful
information (information gathering, exploration).
 An agent is autonomous if its behavior is determined by its own experience (with ability to
learn and adapt).
 In summary, what is rational at any given time depends on four things:
 The performance measure that defines the degree of success.
 The agent's prior knowledge of the environment.
 The actions that the agent can perform.
 The agent's percept sequence to date.
9 RATIONALITY

 PEAS:
 Performance measure: How good is the behavior of agents operating in the environment?
 Environment: What things are considered to be a part of the environment, and what things are
excluded?
 Actuators: How can the agent perform actions in the environment?
 Sensors: How can the agent perceive the environment?
 The PEAS setting must be specified first when designing an intelligent agent.
10 RATIONALITY

 PEAS - Example 1: Taxi-Driving System


 Consider, the task of designing an automated taxi driver:
 Performance measure:
 Safe, fast, legal, comfortable trip, maximize profits
 Environment:
 Roads, other traffic, pedestrians (walkers), customers
 Actuators:
 Steering wheel, accelerator, brake, indicators, horn (alert or alarm)
 Sensors:
 Cameras, speedometer, GPS, engine sensors, keyboard
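The taxi-driver PEAS description above can be recorded as a simple data structure; the `PEAS` class is an illustrative sketch, not a standard API:

```python
from dataclasses import dataclass

# Sketch: a PEAS description as a record with four fields.

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

# The automated taxi driver from Example 1 above.
taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "indicators", "horn"],
    sensors=["cameras", "speedometer", "GPS", "engine sensors", "keyboard"],
)
```

The same structure could be filled in for the medical diagnosis system and the English tutor on the following slides.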
11 RATIONALITY

 PEAS Example 2: Medical Diagnosis System


 Agent : Medical diagnosis system
 Performance measure:
 Healthy patient, minimize costs, avoid lawsuits (charges).
 Environment:
 Patient , hospital, staff
 Actuators:
 Screen display (questions, tests, diagnoses, treatments, referrals)
 Sensors:
 Keyboard (entry of symptoms , findings , patient's answers)
12 RATIONALITY

 PEAS -Example 3: Interactive English tutor


 Agent : Interactive English tutor
 Performance measure:
 Maximize students’ scores on test
 Environment:
 Set of students
 Actuators:
 Screen display ( exercises , suggestions , corrections)
 Sensors:
 Keyboard (student's answers)
13 RATIONALITY

 Quiz (10%)
 For each of the following agents, develop a PEAS description of the task environment:

1. Robot soccer player;


2. Internet book-shopping agent;
14 RATIONALITY

 An intelligent agent is an autonomous entity that acts upon an environment using sensors and
actuators to achieve goals.
 An intelligent agent may learn from the environment to achieve its goals.
 A thermostat is an example of an intelligent agent.
 The following are the four main rules for an AI agent:
• Rule 1: An AI agent must have the ability to perceive the environment.
• Rule 2: The observations must be used to make decisions.
• Rule 3: A decision should result in an action.
• Rule 4: The action taken by an AI agent must be a rational action.
15 STRUCTURE OF INTELLIGENT AGENTS

 The task of AI is to design an agent program that implements the agent function: the mapping
from percepts to actions.
 This program will run on some sort of computing device, which we will call the architecture.
 The architecture might be a plain computer, or it might include special-purpose hardware for
certain tasks, such as processing camera images or filtering audio input.
 Architecture is the machinery that the agent executes on.
 It is a device with sensors and actuators, for example, a robotic car, a camera, and a PC.
 In general, the architecture makes the percepts from the sensors available to the program, runs
the program, and feeds the program's action choices to the effectors as they are generated.
 agent = architecture + program
16 STRUCTURE OF INTELLIGENT AGENTS

Agent Type | Percepts | Actions | Goals | Environment
Medical diagnosis system | Symptoms, findings, patient's answers | Questions, tests, treatments | Healthy patient, minimize costs | Patient, hospital
Satellite image analysis system | Pixels of varying intensity, color | Print a categorization of scene | Correct categorization | Images from orbiting satellite
Part-picking robot | Pixels of varying intensity | Pick up parts and sort into bins | Place parts in correct bins | Conveyor belt with parts
Refinery controller | Temperature, pressure readings | Open, close valves; adjust temperature | Maximize purity, yield, safety | Refinery
Interactive English tutor | Typed words | Print exercises, suggestions, corrections | Maximize student's score on test | Set of students
17 ENVIRONMENT TYPES

• Fully observable (vs. partially observable):


• An agent's sensors give it access to the complete state of the environment at each point
in time.
• If the sensors give only partial access, the environment is partially observable.
• If the agent has no sensors, the environment is unobservable.
• Chess – the board is fully observable, and so are the opponent’s moves.
• Driving – the environment is partially observable because what’s around the corner is not
known.
18 ENVIRONMENT TYPES

• Deterministic (vs. stochastic):


• The next state of the environment is completely determined by the current state and the
action executed by the agent.
• If there are apparently “random” events that can make the next state unpredictable, the
environment is stochastic.
• Chess – there are only a few possible moves for a piece in the current state, and the result
of each move can be determined exactly.
• Self-driving cars – the outcomes of a self-driving car's actions are not unique; they vary from
time to time, so the environment is stochastic.
19 ENVIRONMENT TYPES

• Static (vs. dynamic):


• If the environment stays unchanged whilst the agent is thinking about what action to
take, it is a static environment.
• If it is continually changing, even whilst the agent is thinking, it is dynamic.
• Taxi driving is an example of a dynamic environment whereas Crossword puzzles are an
example of a static environment.
• Cleaning a room (environment) by a vacuum-cleaner robot (agent) is an example of a static
environment, where the room does not change while it is being cleaned.
• Playing soccer is a dynamic environment, where players’ positions keep changing
throughout the game, so a player must hit the ball while observing the opposing team.
20 ENVIRONMENT TYPES

• Discrete (vs. continuous):


• If the agent has a limited number of possible actions and percepts , it is a discrete
environment.
• If the number of actions and/or percepts is effectively unlimited it is a continuous
environment.
• The game of chess is discrete as it has only a finite number of moves.
• The number of moves might vary with every game, but still, it’s finite.
• Self-driving cars are an example of a continuous environment, as their actions (steering,
accelerating, parking, etc.) range over values that cannot be enumerated.
21 ENVIRONMENT TYPES

• Single agent (vs. multi-agent):


• An agent operating by itself in an environment.
• If there are no other agents in the environment we say it is a single-agent environment.
• If there are other agents it is a multi-agent environment.
• The easiest type of environment is fully-observable, deterministic, episodic, static,
discrete and single agent.
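The environment properties above can be tabulated per task; the entries below follow the examples in the text (crossword puzzles as the easy case, taxi driving as the hard case), and the "episodic" property, which is not covered in this excerpt, is omitted:

```python
# Illustrative classification of two task environments along the
# dimensions discussed above (episodic/sequential omitted).

ENVIRONMENTS = {
    "crossword puzzle": {
        "observable": "fully", "deterministic": True,
        "static": True, "discrete": True, "agents": "single",
    },
    "taxi driving": {
        "observable": "partially", "deterministic": False,
        "static": False, "discrete": False, "agents": "multi",
    },
}

def is_easiest_type(props):
    """True if the environment matches the easiest combination above."""
    return (props["observable"] == "fully" and props["deterministic"]
            and props["static"] and props["discrete"]
            and props["agents"] == "single")
```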
22 AGENT TYPES

• Four basic types in generality:


 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents
23 AGENT TYPES

• Simple reflex agents


• An agent whose action depends only on the current percept.
• The simplest kind of agent.
• These use a set of condition-action rules that specify which action to choose for each
given percept.
• These agents use only the current percept, so have no memory of past percepts.
24 AGENT TYPES

• The simple reflex agent does not consider any part of the percept history during its decision
and action process.
• The simple reflex agent works on the condition-action rule, which maps the current state to an
action. For example, a room-cleaner agent works only if there is dirt in the room.
• Problems for the simple reflex agent design approach:
• They have very limited intelligence
• They do not have knowledge of non-perceptual parts of the current state
• Not adaptive to changes in the environment.
25 AGENT TYPES

• A simple reflex agent works by finding a rule whose condition matches the current
situation and then doing the action associated with that rule.
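The rule-matching loop described above can be sketched as follows; the rules themselves are hypothetical examples for the vacuum world:

```python
# Sketch of a simple reflex agent: scan the condition-action rules in
# order and return the action of the first rule whose condition
# matches the current percept. No percept history is kept.

RULES = [
    (lambda p: p["status"] == "Dirty", "Suck"),
    (lambda p: p["location"] == "A", "Right"),
    (lambda p: p["location"] == "B", "Left"),
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):
            return action
    return "NoOp"  # no rule matched
```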
26 AGENT TYPES

• Model-based reflex agents


• The Model-based agent can work in a partially observable environment, and track the situation.
• An agent whose action is derived directly from an internal model of the current world state.
• A more complex type of agent.
• Model-based agents maintain an internal model of the world, which is updated by percepts as
they are received.
• A model-based agent has two important factors:
• Model: It is knowledge about "how things happen in the world," so it is called a Model-
based agent.
• Internal State: It is a representation of the current state based on percept history.
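The two factors above (the model and the internal state) can be sketched in code; the world model here is a simple dictionary merge, which is an illustrative simplification:

```python
class ModelBasedReflexAgent:
    """Sketch: keeps an internal state updated from each percept, then
    applies condition-action rules to that state rather than to the raw
    percept. The update and rules are illustrative."""

    def __init__(self):
        self.state = {}  # internal model of the world, built from percept history

    def update_state(self, percept):
        # Merge the new percept into the model; facts from earlier
        # percepts about currently unobserved parts of the world persist.
        self.state.update(percept)

    def act(self, percept):
        self.update_state(percept)
        if self.state.get("status") == "Dirty":
            return "Suck"
        return "Right" if self.state.get("location") == "A" else "Left"
```

Because the state persists between calls, the agent can keep acting sensibly even when a percept is incomplete, which is what lets it work in a partially observable environment.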
27 AGENT TYPES

• One notable example of a model-based reflex agent implementation is the Waymo project by
Google.
• Waymo has successfully deployed self-driving cars equipped with advanced sensors and
implemented model-based reflex algorithms in their decision-making process.
• These autonomous vehicles navigate the roads, making decisions based on percept history and
real-time sensory input.
• However, it is important to note that such projects are currently limited to specific regions and
regulatory frameworks.
• The car is equipped with sensors that detect obstacles, such as car brake lights in front of them or
pedestrians walking on the sidewalk.
• As it drives, these sensors feed percepts into the car's memory and internal model of its
environment.
28 AGENT TYPES
29 AGENT TYPES

• Goal-based agents
• Goal-based agent: an agent that selects actions that it believes will achieve explicitly
represented goals.
• Goal-based agents are the same as model-based agents, except that:
• They contain an explicit statement of the agent's goals.
• These goals are used to choose the best action at any given time.
• Goal-based agents can therefore choose an action that achieves nothing in the short
term, but in the long term may lead to a goal being achieved.
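Goal-directed action selection can be sketched as follows; the one-step lookahead, transition model, and toy example are illustrative assumptions (a real goal-based agent would typically search over action sequences):

```python
# Sketch of a goal-based agent: use a transition model `result` to
# predict the state each action leads to, and pick an action whose
# predicted outcome satisfies the goal test.

def goal_based_agent(state, actions, result, goal_test):
    for action in actions:
        if goal_test(result(state, action)):
            return action
    return None  # no single action reaches the goal from here

# Toy example: move along a line until position 2 is reached.
result = lambda s, a: s + 1 if a == "Right" else s - 1
goal_test = lambda s: s == 2
```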
30 AGENT TYPES
31 AGENT TYPES

• Utility-based agents
• Utility-based agent: an agent that selects actions that it believes will maximize the expected
utility of the outcome state.
• Goals can be useful, but are sometimes too simplistic.
• Utility-based agents deal with this by:
• Assigning a utility to each state of the world.
• This utility defines how “happy” the agent will be in such a state.
• Explicitly stating the utility function also makes it easier to define the desired behavior of
utility-based agents.
• The word “utility” here refers to “the quality of being useful.”
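Compared with the goal-based sketch, the binary goal test is replaced by a numeric score; the transition model and utility function below are toy assumptions:

```python
# Sketch of a utility-based agent: score each predicted outcome with a
# utility function and pick the action whose outcome scores highest.

def utility_based_agent(state, actions, result, utility):
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy example: utility is highest at position 2, falling off with distance.
result = lambda s, a: s + 1 if a == "Right" else s - 1
utility = lambda s: -abs(s - 2)
```

Unlike a goal test, the utility function still ranks actions when no action reaches the goal outright, so the agent always has a best choice.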
32 AGENT TYPES
33 LEARNING AGENTS

 Learning agents: Programs that learn from experience.


 An agent whose behavior improves over time based on its experience
 A learning agent can be divided into four conceptual components:

• The Performance Element can be any of the four agent types described above.
• The Learning Element is responsible for suggesting improvements to any part of the
performance element.
• The input to the learning element comes from the Critic.
• The Problem Generator is responsible for suggesting actions that will result in new knowledge
about the world being acquired.
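The four components can be wired together as in the sketch below; the class layout, the critic's scoring rule, and the exploratory action are all illustrative assumptions, not a standard design:

```python
class LearningAgent:
    """Sketch of the four learning-agent components described above."""

    def __init__(self, performance_element):
        # Performance element: any of the four agent types (a callable
        # from percept to action in this sketch).
        self.performance_element = performance_element
        self.feedback_log = []

    def critic(self, percept):
        # Critic: scores the agent's behavior against a fixed standard
        # (here, a clean square counts as success).
        return 1 if percept.get("status") == "Clean" else -1

    def learning_element(self, feedback):
        # Learning element: would use the critic's feedback to suggest
        # improvements; here it just records the feedback.
        self.feedback_log.append(feedback)

    def problem_generator(self):
        # Problem generator: suggests exploratory actions that yield
        # new knowledge about the world.
        return "TryNewRoute"

    def step(self, percept):
        self.learning_element(self.critic(percept))
        return self.performance_element(percept)
```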
