Chapter 2
Intelligent Agents
Agents and environments
Agent: An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that environment through
actuators.
Human agent:
• Sensors: Eyes, ears, and other sensory organs
• Actuators: Hands, legs, mouth, and other body parts
Robotic agent:
• Sensors: Cameras and infrared range finders
• Actuators: Various motors
Percept: the agent's perceptual inputs at any given instant.
Percept sequence: the complete history of everything the agent has ever perceived.
Agent function: maps any percept sequence to an action:
f: P* → A
The agent program runs on the physical
architecture to produce f.
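The mapping f: P* → A can be sketched in code. A table-driven agent is the most literal realization: it records the percept sequence and looks up the corresponding action. The table entries and percept names below are illustrative, not from any particular agent.

```python
# Minimal sketch of the agent-function idea: f maps a percept
# sequence (an element of P*) to an action in A.
def make_table_driven_agent(table):
    percepts = []  # the percept sequence observed so far

    def agent_program(percept):
        percepts.append(percept)
        # Look up the action indexed by the full percept history.
        return table.get(tuple(percepts))

    return agent_program

# Hypothetical two-step table, for demonstration only.
table = {
    ("A",): "go",
    ("A", "B"): "stop",
}
agent = make_table_driven_agent(table)
```

The closure over `percepts` mirrors the distinction in the text: the table is the agent function f, while `agent_program` is the agent program that implements it on a concrete architecture. Table-driven agents are impractical in general because the table grows with every possible percept sequence.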
Example: the vacuum-cleaner world
• Two squares, A and B; percepts: current location and whether it contains dirt
• Actions: Left, Right, Suck

Single-agent vs. multi-agent:
• An environment may contain a single agent or multiple agents whose behavior affects each other's performance.
• Multi-agent environments can be competitive (e.g., chess) or cooperative (e.g., taxi driving, where avoiding collisions benefits all drivers).
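The two-square vacuum world above admits a very small agent program. The sketch below is a simple reflex agent that decides from the current percept alone, assuming percepts of the form (location, status):

```python
# Simple reflex agent for the two-square vacuum world.
# Percept: a (location, status) pair, e.g. ("A", "Dirty").
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"            # clean the current square
    # Square is clean: move to the other square.
    return "Right" if location == "A" else "Left"
```

Note that this agent ignores the percept history entirely, which suffices only because the environment is so small; in general an agent may need internal state summarizing past percepts.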
Properties of task environments
Deterministic vs. stochastic:
• An environment is deterministic if its next state is completely determined by the current state and the action executed by the agent; otherwise, it is stochastic.
• In an accessible (fully observable) and deterministic environment, the agent need not deal with uncertainty.
Episodic vs. sequential:
• In an episodic task environment, the agent's experience is divided into atomic episodes; in each episode the agent receives a percept and performs a single action.
• Subsequent episodes do not depend on the actions taken in previous episodes, so such environments do not require the agent to plan ahead.
• In a sequential environment, by contrast, the current decision can affect all future decisions.
Static vs. dynamic:
• If the environment can change while an agent is deliberating, then the environment is dynamic for that agent; otherwise, it is static.
• In static environments the time it takes to compute a good strategy does not matter, because the world does not change while the agent deliberates.
Discrete vs. continuous:
• The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.
• For example, the chess environment has a finite number of distinct states, and chess also has a discrete set of percepts and actions; taxi driving, by contrast, involves continuous state and continuous time.
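The properties above can be summarized as plain data. The sketch below records the standard classification of two example environments; the dictionary keys are my own labels for the dimensions discussed in this section.

```python
# Hedged sketch: recording task-environment properties as data.
# The key names are illustrative labels for the dimensions above.
chess = {
    "states": "discrete",           # finitely many board positions
    "time": "discrete",             # play proceeds move by move
    "percepts_actions": "discrete", # finite set of legal moves
    "agents": "multi",              # competitive two-player game
}
taxi_driving = {
    "states": "continuous",           # positions and speeds vary smoothly
    "time": "continuous",
    "percepts_actions": "continuous", # e.g., camera input, steering angle
    "agents": "multi",                # cooperative with other drivers
}
```

Laying environments out this way makes it easy to compare them dimension by dimension when choosing an agent design.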