Unit I AI Modified.
UNIT-I
• Introduction to AI
• Intelligent Agents
• Problem-Solving Agents
• Searching for Solutions
• Breadth-first search
• Depth-first search
• Hill-climbing search
• Simulated annealing search
• Local Search in Continuous Spaces
RESOURCES
TEXT BOOKS:
• Artificial Intelligence: A Modern Approach, 3rd Edn., Stuart Russell and Peter Norvig, Pearson Education.
REFERENCES:
• Artificial Intelligence, 3rd Edn., E. Rich and K. Knight, TMH.
• Artificial Intelligence, 3rd Edn., Patrick Henry Winston, Pearson Education.
• Artificial Intelligence, Shivani Goel, Pearson Education.
• Artificial Intelligence and Expert Systems, Patterson, Pearson Education.
What is artificial intelligence?
• Popular conception driven by science fiction
– Robots good at everything except emotions, empathy, appreciation of art,
culture, …
Real AI
• A serious science.
• General-purpose AI like the robots of science fiction is incredibly
hard
– Human brain appears to have lots of special and general functions,
integrated in some amazing way that we really do not understand at all
(yet)
• Special-purpose AI is more doable (nontrivial)
– E.g., chess/poker playing programs, logistics planning, automated translation, voice recognition, web search, data mining, medical diagnosis, keeping a car on the road, …
Definitions of AI
Historically, all four approaches to AI (acting humanly, thinking humanly, thinking rationally, and acting rationally) have been followed, each by different people with different methods. A rationalist approach involves a combination of mathematics and engineering.
REF: https://www.analyticsvidhya.com/blog/2019/12/10-exciting-real-world-applications-ai-retail/?utm_source=feedburner&utm_medium=email&utm_
Acting rationally: rational agent
• An agent is just something that acts.
• A rational agent is one that acts so as to achieve the best outcome or, when there
is uncertainty, the best expected outcome.
• The right thing: that which is expected to maximize goal achievement, given the
available information
Rational Agents
• An agent is an entity that perceives and acts
• This course is about designing rational agents
• Abstractly, an agent is a function from percept histories to actions (see the sketch after this list):
f : P* → A
• For any given class of environments and tasks, we seek the agent (or
class of agents) with the best performance
• Caveat: computational limitations make perfect rationality unachievable
– design the best program for the given machine resources
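A minimal Python sketch of this abstraction (the two-square world and the percept/action names are illustrative assumptions, not part of the formal definition):

# An agent is abstractly a function f : P* -> A from percept
# histories to actions. This toy agent looks only at the last
# percept in the history.
def agent_fn(percept_history):
    location, status = percept_history[-1]
    return "Suck" if status == "Dirty" else "Right"

print(agent_fn([("A", "Clean"), ("A", "Dirty")]))  # -> Suck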
INTELLIGENT AGENTS
• AGENTS AND ENVIRONMENTS
• An Agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
• A Human Agent has eyes, ears, and other organs for sensors and hands,
legs, vocal tract, and so on for actuators.
• A Robotic Agent might have cameras and infrared range finders for
sensors and various motors for actuators.
• A Software Agent receives keystrokes, file contents, and network
packets as sensory inputs and acts on the environment by displaying on
the screen, writing files, and sending network packets.
• Mathematically speaking, an Agent’s behavior is described by the agent function that maps any given percept sequence to an action.
Figure 2.3 Partial tabulation of a simple agent function for the vacuum-
cleaner world shown in Figure 2.2.
Ex: A hand-held calculator can be viewed as an agent that chooses the action of displaying “4” when given the percept sequence “2 + 2 =”; however, such an analysis would hardly aid our understanding of the calculator.
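The tabulation idea can be made concrete as a lookup table keyed by percept sequences. A minimal sketch following the pattern of Figure 2.3 (entries are for the standard two-square vacuum world):

# Partial tabulation of the vacuum-cleaner agent function:
# keys are percept sequences, values are actions.
vacuum_table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
    # ... the table continues for longer percept sequences
}

print(vacuum_table[(("A", "Dirty"),)])  # -> Suck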
GOOD BEHAVIOR: THE CONCEPT OF RATIONALITY
• Rational Agent: one that does the right thing; conceptually speaking, every entry in the table for the agent function is filled out correctly.
• An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.
• Performance measure: An objective criterion for success of an agent's behavior
• Ex: Performance measure of a vacuum-cleaner agent could be amount of dirt
cleaned up, amount of time taken, amount of electricity consumed, amount of
noise generated, etc.
PEAS (PERFORMANCE, ENVIRONMENT, ACTUATORS, SENSORS)
DESCRIPTION
• The vacuum world was a simple example; let us consider a more complex
problem: an automated taxi driver. A fully automated taxi is currently
somewhat beyond the capabilities of existing technology.
• The full driving task is extremely open-ended.
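• For reference, the standard PEAS description of the automated taxi (following Russell and Norvig, Figure 2.4):
– Performance measure: safe, fast, legal, comfortable trip, maximize profits
– Environment: roads, other traffic, pedestrians, customers
– Actuators: steering, accelerator, brake, signal, horn, display
– Sensors: cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard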
THE STRUCTURE OF AGENTS
• The job of AI is to design an agent program that implements the agent
function— the mapping from percepts to actions. We assume this program
will run on some sort of computing device with physical sensors and actuators
—we call this the architecture:
agent = architecture + program .
• If the program is going to recommend actions like Walk, the architecture had
better have legs.
• The architecture might be just an ordinary PC, or it might be a robotic car with
several onboard computers, cameras, and other sensors.
• Architecture makes the percepts from the sensors available to the program,
runs the program, and feeds the program’s action choices to the actuators as
they are generated.
Agent programs
• Agent programs take the current percept as input from the
sensors and return an action to the actuators.
• Note the difference between the agent program, which takes the current percept as input, and the agent function, which takes the entire percept history.
• The agent program takes just the current percept as input
because nothing more is available from the environment;
• if the agent’s actions need to depend on the entire percept
sequence, the agent will have to remember the percepts.
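This is the idea behind the table-driven agent program of the text: the program receives one percept per call but appends it to a remembered sequence before looking up the action. A minimal Python sketch (the lookup table is assumed to be supplied, e.g. the vacuum table above):

# TABLE-DRIVEN-AGENT: keeps the entire percept sequence in memory
# and uses it to index into a (possibly huge) table of actions.
percepts = []  # persistent memory of everything perceived so far

def table_driven_agent(percept, table):
    percepts.append(percept)
    return table.get(tuple(percepts))  # look up the full history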
• Four basic kinds of agent programs that embody the principles underlying
almost all intelligent systems:
• Simple reflex agents;
• Model-based reflex agents;
• Goal-based agents; and
• Utility-based agents.
• Each kind of agent program combines particular components in particular
ways to generate actions
• In general terms, each of these agents can be converted into a learning agent that improves the performance of its components so as to generate better actions.
Simple reflex agents
• The simplest kind of agent is the simple reflex agent.
• These agents select actions on the basis of the current percept, ignoring the
rest of the percept history.
• Simple reflex behaviors occur even in more complex environments.
• We call such a connection a condition–action rule, written as:
if car-in-front-is-braking then initiate-braking
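A minimal Python sketch of a simple reflex agent; the rule set and the default action are illustrative assumptions:

# A simple reflex agent: the action depends only on the current
# percept, matched against condition-action rules.
rules = {
    "car-in-front-is-braking": "initiate-braking",
}

def simple_reflex_agent(percept):
    return rules.get(percept, "keep-driving")  # assumed default action

print(simple_reflex_agent("car-in-front-is-braking"))  # -> initiate-braking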
Problem solving agents
• “how an agent can find a sequence of actions that achieves its goals when no single
action will do”
SEARCHING FOR SOLUTIONS
• A solution is an action sequence, so search algorithms work by considering various
possible action sequences.
• The possible action sequences starting at the initial state form a search tree with the
initial state at the root; the branches are actions and the nodes correspond to states in
the state space of the problem.
• Figure 3.6 shows the first few steps in growing the search tree for finding a route from
Arad to Bucharest.
• The root node of the tree corresponds to the initial state, In(Arad).
• The first step is to test whether this is a goal state.
• Then we expand the current state; that is, we apply each legal action to the current state, thereby generating a new set of states.
• This adds three branches from the parent node In(Arad), leading to three new child nodes: In(Sibiu), In(Timisoara), and In(Zerind).
• We must then choose which of these three possibilities to consider further (a one-step sketch follows).
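A one-step sketch of this expansion in Python (only the fragment of the Romania map needed for this example is included):

# Expanding In(Arad): apply each legal action, generating the child
# states In(Sibiu), In(Timisoara), and In(Zerind).
successors = {"Arad": ["Sibiu", "Timisoara", "Zerind"]}

frontier = ["Arad"]           # search starts at the initial state
state = frontier.pop()
if state != "Bucharest":      # goal test comes first
    frontier.extend(successors.get(state, []))
print(frontier)               # -> ['Sibiu', 'Timisoara', 'Zerind']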
UNINFORMED SEARCH STRATEGIES
• Several search strategies that come under the heading of uninformed search
(also called blind search).
• The term means that the strategies have no additional information about
states beyond that provided in the problem definition.
• All they can do is generate successors and distinguish a goal state from a non-
goal state.
• All search strategies are distinguished by the order in which nodes are
expanded.
• Strategies that know whether one non-goal state is “more promising” than
another are called informed search or heuristic search strategies;
• Breadth-First, Uniform-Cost, Depth-First, Depth-Limited, Iterative Deepening,
Bidirectional
BREADTH FIRST SEARCH
• It is a simple strategy in which the root node is expanded first, then all the successors of the
root node are expanded next, then their successors, and so on.
• All the nodes are expanded at a given depth in the search tree before any nodes at the next
level are expanded.
• Breadth-first search is an instance of the general graph-search algorithm in which the
shallowest unexpanded node is chosen for expansion.
• This is achieved very simply by using a FIFO queue for the frontier.
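A minimal sketch of breadth-first search over a successor map (the function name and path-based bookkeeping are illustrative choices; the FIFO frontier is the point):

from collections import deque

def breadth_first_search(start, goal, successors):
    # FIFO frontier: the shallowest unexpanded node is chosen next.
    frontier = deque([[start]])   # queue of paths from the root
    explored = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for child in successors.get(state, []):
            if child not in explored:
                explored.add(child)
                frontier.append(path + [child])
    return None                   # no solution found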
DEPTH FIRST SEARCH
• Depth-first search always expands the deepest node
in the current frontier of the search tree.
• The progress of the search is illustrated in Figure
3.16.
• The search proceeds immediately to the deepest
level of the search tree, where the nodes have no
successors.
• As those nodes are expanded, they are dropped
from the frontier, so then the search “backs up” to
the next deepest node that still has unexplored
successors.
• The depth-first search algorithm is an instance of the graph-search algorithm; whereas breadth-first search uses a FIFO queue, depth-first search uses a LIFO queue (a stack).
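The same skeleton with a LIFO stack gives depth-first search; a minimal sketch under the same assumptions as the breadth-first version above:

def depth_first_search(start, goal, successors):
    # LIFO frontier: the deepest unexpanded node is chosen next.
    frontier = [[start]]          # Python list used as a stack
    explored = set()
    while frontier:
        path = frontier.pop()     # pop the most recently added path
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for child in successors.get(state, []):
            frontier.append(path + [child])
    return None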
BEYOND CLASSICAL SEARCH
HILL CLIMBING