Handout1 AI
Intelligence
Text Book: Russell, S. and P. Norvig (1995) Artificial Intelligence: A Modern
Approach Prentice-Hall.
3. Acting Humanly
"The art of creating machines that perform functions that require intelligence
when performed by people."
"The study of how to make computers do things at which, at the moment,
people are better."
Intelligent Agent
Example: A robotic agent might have cameras and infrared range finders for
sensors, and various motors for actuators.
Rationality
This leads to a definition of a rational agent: For each possible percept sequence, a
rational agent should select an action that is expected to maximize its performance
measure, given the evidence provided by the percept sequence and whatever built-in
knowledge the agent has.
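The definition above can be sketched in code. This is a minimal illustration, assuming a hypothetical two-square vacuum world; the actions, percept format, and scoring function are all invented for the example, not taken from the text.

```python
# Rational action selection: given the percept sequence so far, choose
# the action with the highest expected performance. The environment
# (a two-square vacuum world) and the scores are hypothetical.

def expected_performance(action, percepts):
    """Estimate an action's performance from the latest percept."""
    location, status = percepts[-1]
    scores = {"Suck": 10 if status == "Dirty" else 0,
              "Left": 1 if location == "B" else 0,
              "Right": 1 if location == "A" else 0}
    return scores[action]

def rational_action(percepts, actions=("Suck", "Left", "Right")):
    # Maximize expected performance over the available actions.
    return max(actions, key=lambda a: expected_performance(a, percepts))

print(rational_action([("A", "Dirty")]))   # a dirty square -> "Suck"
print(rational_action([("A", "Clean")]))   # clean at A -> "Right"
```

Note that the agent conditions only on the percept sequence and its built-in scoring function, exactly as the definition requires.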
Three concepts related to rationality:
1. Omniscience
2. Learning
3. Autonomy
An omniscient agent knows the actual outcome of its actions and can act
accordingly; but omniscience is impossible in reality. Rationality maximizes
expected performance, while perfection maximizes actual performance. Doing
actions in order to modify future percepts—sometimes called information
gathering—is an important part of rationality. Information gathering is provided by
the exploration that must be undertaken by an agent in an initially unknown
environment.
A rational agent not only needs to gather information but also to learn as much as
possible from what it perceives. The agent’s initial configuration could reflect
some prior knowledge of the environment, but as the agent gains experience this
may be modified and augmented.
If an agent relies on the prior knowledge of its designers rather than on its own
percepts, it lacks autonomy. An agent should be autonomous. When designing
an artificially intelligent agent, it is reasonable to provide it with some
initial knowledge as well as the ability to learn.
4. Multi-agent: the task is carried out by more than one agent.
Playing chess is a two-agent environment.
====================================================
The Structure of Agents
Model-based reflex agents: The most effective way to handle partial observability
is for the agent to keep track of the part of the world it cannot see now. It
maintains internal state based on the percept history.
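A model-based reflex agent can be sketched as follows. This is an illustrative sketch for the same hypothetical two-square vacuum world; the class name, state representation, and rules are assumptions for the example.

```python
# Model-based reflex agent: internal state is updated from each percept
# (a real agent would also use a transition model of how the world
# evolves), and an action is chosen by condition-action rules.

class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {}  # internal model of the world, including unseen parts

    def update_state(self, percept):
        # Fold the new percept into the tracked state.
        location, status = percept
        self.state[location] = status
        self.state["at"] = location

    def rule_match(self):
        # Condition-action rules for a two-square (A, B) vacuum world.
        if self.state[self.state["at"]] == "Dirty":
            return "Suck"
        return "Right" if self.state["at"] == "A" else "Left"

    def __call__(self, percept):
        self.update_state(percept)
        return self.rule_match()

agent = ModelBasedReflexAgent()
print(agent(("A", "Dirty")))  # -> "Suck"
print(agent(("A", "Clean")))  # -> "Right"
```

The key design point is that `self.state` persists between percepts, so the agent remembers, for example, whether a square it is no longer observing was clean.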
Goal-based agents:
The agent needs some sort of goal information that describes situations that are
desirable.
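Goal-based selection can be sketched as choosing the action whose predicted result is closest to a desirable state. The grid world, goal location, and transition model below are hypothetical, used only to illustrate the idea.

```python
# Goal-based agent: the agent predicts the result of each action with a
# transition model and picks the action that moves it nearest the goal.

GOAL = (2, 2)  # the desirable state (hypothetical grid location)

def result(state, action):
    """Hypothetical transition model on a grid."""
    x, y = state
    moves = {"Up": (x, y + 1), "Down": (x, y - 1),
             "Left": (x - 1, y), "Right": (x + 1, y)}
    return moves[action]

def goal_based_action(state, actions=("Up", "Down", "Left", "Right")):
    # Choose the action whose predicted result is closest to GOAL
    # (Manhattan distance); ties are broken by action order.
    def distance_to_goal(a):
        nx, ny = result(state, a)
        return abs(nx - GOAL[0]) + abs(ny - GOAL[1])
    return min(actions, key=distance_to_goal)

print(goal_based_action((0, 0)))  # -> "Up"
```

Unlike a reflex agent, the choice here depends on an explicit description of the goal, so changing `GOAL` changes behavior without rewriting any rules.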
Utility-based agents:
Goals alone are not enough to generate high-quality behavior in most
environments. A more general performance measure should allow a comparison
of different world states according to exactly how happy they would make the
agent. An agent's utility function is essentially an internalization of the
performance measure.
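The contrast with a binary goal test can be sketched in code: instead of asking whether a state satisfies the goal, a utility function scores every resulting state, and the agent maximizes it. The state variables, weights, and actions below are illustrative assumptions.

```python
# Utility-based agent: a utility function grades world states, and the
# agent selects the action whose predicted result has the highest
# utility. The weights (battery vs. dirt) are hypothetical.

def utility(state):
    """Hypothetical utility: more battery is good, dirt is heavily penalized."""
    return state["battery"] - 10 * state["dirt"]

def result(state, action):
    # Hypothetical transition model for a cleaning robot.
    s = dict(state)
    if action == "Clean":
        s["dirt"] = max(0, s["dirt"] - 1)
        s["battery"] -= 2
    elif action == "Recharge":
        s["battery"] += 3
    return s  # "Wait" leaves the state unchanged

def utility_based_action(state, actions=("Clean", "Recharge", "Wait")):
    return max(actions, key=lambda a: utility(result(state, a)))

print(utility_based_action({"battery": 5, "dirt": 2}))  # -> "Clean"
print(utility_based_action({"battery": 5, "dirt": 0}))  # -> "Recharge"
```

Because states are compared by degree rather than pass/fail, the agent can trade off conflicting concerns: it cleans while dirt remains, then switches to recharging once there is nothing left to clean.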
Learning agents: