
MODULE 1

INTRODUCTION TO ARTIFICIAL INTELLIGENCE (AI)
1.1 What Is AI?

• The field of artificial intelligence, or AI, is concerned with not just understanding but also building intelligent entities—machines that can compute how to act effectively and safely in a wide variety of novel situations.
• AI currently encompasses a huge variety of subfields, ranging from the general (learning, reasoning, perception, and so on) to the specific, such as playing chess, proving mathematical theorems, writing poetry, driving a car, or diagnosing diseases. AI is relevant to any intellectual task; it is truly a universal field.
• We have claimed that AI is interesting, but we have not said what it is. Historically, researchers have pursued several different versions of AI.
• Some have defined intelligence in terms of fidelity to human performance, while others prefer an abstract, formal definition of intelligence called rationality—loosely speaking, doing the “right thing.”

• Artificial Intelligence is composed of two words, Artificial and Intelligence:

• Artificial means "man-made," and

• Intelligence means "thinking power";

• hence AI means "a man-made thinking power."

• So, we can define AI as:

• "A branch of computer science by which we can create intelligent machines which can behave like humans, think like humans, and are able to make decisions."

• The methods used are necessarily different: the pursuit of human-like intelligence must be in part an empirical science related to psychology, involving observations and hypotheses about actual human behaviour and thought processes; a rationalist approach, on the other hand, involves a combination of mathematics and engineering, and connects to statistics, control theory, and economics. The various groups have both disparaged and helped each other. Let us look at the four approaches in more detail.

• 1.1.1 Acting humanly: The Turing test approach

• The Turing test, proposed by Alan Turing (1950), was designed as a thought experiment that would sidestep the philosophical vagueness of the question “Can a machine think?”

• A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer.
• The computer would need the following capabilities:
• 1) Natural language processing: to communicate successfully in a human language;
• 2) Knowledge representation: to store what it knows or hears;
• 3) Automated reasoning: to answer questions and to draw new conclusions;
• 4) Machine learning: to adapt to new circumstances and to detect and extrapolate patterns.

• Turing viewed the physical simulation of a person as unnecessary to demonstrate intelligence. However, other researchers have proposed a total Turing test, which requires interaction with objects and people in the real world.
• To pass the total Turing test, a robot will also need:
• 5) Computer vision and speech recognition to perceive the world;
• 6) Robotics to manipulate objects and move about.

• These six disciplines compose most of AI. Yet AI researchers have devoted little effort to passing the Turing test, believing that it is more important to study the underlying principles of intelligence.

• 1.1.2 Thinking humanly: The cognitive modeling approach

• To say that a program thinks like a human, we must know how humans think. We can learn about human thought in three ways:
• Introspection— trying to catch our own thoughts as they go by.
• Psychological experiments— observing a person in action.
• Brain imaging— observing the brain in action.

• Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program. If the program’s input–output behaviour matches corresponding human behaviour, that is evidence that some of the program’s mechanisms could also be operating in humans. For example, Allen Newell and Herbert Simon, who developed GPS, the “General Problem Solver” (Newell and Simon 1961), were not content merely to have their program solve problems correctly. They were more concerned with comparing the sequence and timing of its reasoning steps to those of human subjects solving the same problems.

1.1.3 Thinking rationally: The “laws of thought” approach

The Greek philosopher Aristotle was one of the first to attempt to codify “right thinking”— that is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises. The canonical example starts with Socrates is a man and all men are mortal and concludes that Socrates is mortal. These laws of thought were supposed to govern the operation of the mind; their study initiated the field called logic.

1.1.4 Acting rationally: The rational agent approach

An agent is just something that acts (agent comes from the Latin agere, to do). Of course, all computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals.
• A rational agent is one that acts so as to achieve the best outcome or, when there
is uncertainty, the best expected outcome. In the “laws of thought” approach
to AI, the emphasis was on correct inferences. Making correct inferences is
sometimes part of being a rational agent, because one way to act rationally is
to deduce that a given action is best and then to act on that conclusion. On
the other hand, there are ways of acting rationally that cannot be said to
involve inference.

• 1.2 The Foundations of Artificial Intelligence

• In this section, we provide a brief history of the disciplines that contributed ideas,
viewpoints, and techniques to AI. Like any history, this one concentrates on
a small number of people, events, and ideas and ignores others that also
were important.

• 1.2.1 Philosophy
• Can formal rules be used to draw valid conclusions?
• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?

• 1.2.2 Mathematics
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?
• Philosophers staked out some of the fundamental ideas of AI, but the leap to a formal science required the mathematization of logic and probability and the introduction of a new branch of mathematics: computation.

• 1.2.3 Economics
• How should we make decisions in accordance with our preferences?
• How should we do this when others may not go along?
• How should we do this when the payoff may be far in the future?
• 1.2.4 Neuroscience
• How do brains process information?
• Neuroscience is the study of the nervous system, particularly the brain. Although the exact way in which the brain enables thought is one of the great mysteries of science, the fact that it does enable thought has been appreciated for thousands of years because of the evidence that strong blows to the head can lead to mental incapacitation.

• 1.2.5 Psychology
• How do humans and animals think and act?

• 1.2.6 Computer engineering
• How can we build an efficient computer?

AI Applications
1. AI in Marketing

2. AI in Banking

3. AI in Finance

4. AI in Agriculture

5. AI in HealthCare

AI in Marketing

What if an algorithm or a bot were built solely for the purpose of marketing a brand or a company? It would do a pretty good job!
AI in Banking

Banks use AI-based systems to provide customer support and to detect anomalies and credit card fraud. Examples include HDFC Bank and Kotak Bank.
AI in Finance

• Financial organizations are turning to AI to improve their stock trading performance and boost profits.
AI in Agriculture

AI can help farmers get more from the land while using resources more sustainably. Example: the See & Spray bot system.
AI in Health Care

• When it comes to saving our lives, a lot of organizations and medical care centers are relying on AI.

• Example: Cambio Health Care.
AI vs Machine Learning vs Deep Learning

Machine Learning:
“Machine Learning is a subset of artificial intelligence. It allows machines to learn and make predictions based on their experience (data).”

Deep Learning:
“Deep learning is a particular kind of machine learning that achieves great power and flexibility by learning to represent the world as a nested hierarchy of concepts or abstractions.”
HISTORY OF AI
 1921: Czech playwright Karel Čapek released a science fiction play, “Rossum’s Universal Robots”, which introduced the idea of “artificial people”, which he named robots. This was the first known use of the word.

 1929: Japanese professor Makoto Nishimura built the first Japanese robot, named Gakutensoku.

 1949: Computer scientist Edmund Callis Berkeley published the book “Giant Brains, or Machines That Think”, which compared the newer models of computers to human brains.
History of AI (continued…)

 1950: Alan Turing published “Computing Machinery and Intelligence”, which proposed a test of machine intelligence called “The Imitation Game”.

 1952: A computer scientist named Arthur Samuel developed a program to play checkers, the first program to ever learn a game independently.

 1955: John McCarthy proposed a workshop at Dartmouth on “artificial intelligence”, which was the first use of the term and how it came into popular usage.
AI maturation: 1957-1979

 1958: John McCarthy created LISP (an acronym for List Processing), the first programming language for AI research, which is still in popular use to this day.

 1959: Arthur Samuel coined the term “machine learning” when doing a speech about teaching machines to play chess better than the humans who programmed them.

 1961: The first industrial robot, Unimate, started working on an assembly line at General Motors in New Jersey, tasked with transporting die castings and welding parts on cars (which was deemed too dangerous for humans).

 1965: Edward Feigenbaum and Joshua Lederberg created the first expert system, a form of AI programmed to replicate the thinking and decision-making of human experts.
1 9 6 6 :
Joseph We i z e n b a u m created the first “c
hatterbot” (later shortened
to chatbot), E L I Z A ,am o c kp s y c h o t h e r a p i s t , t h a t used
naturallanguageprocessing (NLP) to converse with
humans.

1 9 6 8 :
S o v i e tm a t h e m a t i c i a n A l e x e y Ivakhnenko pub
lished “Group Method
of DataHandling” in the journal “ Av t o m a t i k a , ” w h i
chproposed a new
approach to AI t h a t w o u l d l a t e rb e c o m ew h a t w e n o w
k n o wa s “DeepLearning.”

1 9 7 3 : A n a p p l i e d mathematician n a m e dJ a m e
sL i g h t h i l l gave a report to the
British Science Council, underlining that strides
were not as impressive as
t h o s et h a t h a d b e e n p r o m i s e d by scientists, which led
to much-reducedsupport and funding for AI rese
arch from the British government.
AI boom: 1980-1987

 1980: The first conference of the AAAI was held at Stanford.

 1980: The first expert system came into the commercial market, known as XCON (expert configurer). It was designed to assist in the ordering of computer systems by automatically picking components based on the customer’s needs.

 1981: The Japanese government allocated $850 million (over $2 billion in today’s money) to the Fifth Generation Computer project. Their aim was to create computers that could translate, converse in human language, and express reasoning on a human level.

 1984: The AAAI warned of an incoming “AI Winter” where funding and interest would decrease and make research significantly more difficult.

 1985: An autonomous drawing program known as AARON was demonstrated at the AAAI conference.

 1986: Ernst Dickmanns and his team at Bundeswehr University of Munich created and demonstrated the first driverless car (or robot car). It could drive up to 55 kph on roads that didn't have other obstacles or human drivers.

 1987: Commercial launch of Alacrity by Alactrious Inc. Alacrity was the first strategy managerial advisory system, and used a complex expert system with 3,000+ rules.

 1997: Deep Blue (developed by IBM) beat the world chess champion, Garry Kasparov, in a highly publicized match, becoming the first program to beat a human chess champion.

 1997: Windows released a speech recognition software (developed by Dragon Systems).

 2000: Professor Cynthia Breazeal developed the first robot that could simulate human emotions with its face, which included eyes, eyebrows, ears, and a mouth. It was called Kismet.

 2002: The first Roomba was released.

 2003: NASA landed two rovers onto Mars (Spirit and Opportunity) and they navigated the surface of the planet without human intervention.

 2010: Microsoft launched the Xbox 360 Kinect, the first gaming hardware designed to track body movement and translate it into gaming directions.

 2011: An NLP computer programmed to answer questions, named Watson (created by IBM), won Jeopardy against two former champions in a televised game.

 2011: Apple released Siri, the first popular virtual assistant.
Artificial General Intelligence: 2012-present

 2012: Two researchers from Google (Jeff Dean and Andrew Ng) trained a neural network to recognize cats by showing it unlabeled images and no background information.

 2015: Elon Musk, Stephen Hawking, and Steve Wozniak (and over 3,000 others) signed an open letter to the world's government systems banning the development of (and later, use of) autonomous weapons for purposes of war.

 2016: Hanson Robotics created a humanoid robot named Sophia, who became known as the first "robot citizen" and was the first robot created with a realistic human appearance.

 2017: Facebook programmed two AI chatbots to converse and learn how to negotiate, but as they went back and forth they ended up forgoing English and developing their own language, completely autonomously.

 2018: A Chinese tech group called Alibaba's language-processing AI beat human intellect on a Stanford reading and comprehension test.

 2019: Google's AlphaStar reached Grandmaster on the video game StarCraft 2, outperforming all but 0.2% of human players.

 2020: OpenAI started beta testing GPT-3, a model that uses Deep Learning to create code, poetry, and other such language and writing tasks. While not the first of its kind, it is the first that creates content almost indistinguishable from that written by humans.
Agents in AI

 An AI system can be defined as the study of the rational agent and its environment.

What is an Agent?

 "The agents sense the environment through sensors and act on their environment through actuators."

 An AI agent can have mental properties such as knowledge, belief, intention, etc.
An Agent

 Perceives its environment through sensors and acts upon that environment through actuators.

 Types of Agent:

 Human Agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and vocal tract which work as actuators.

 Robotic Agent: A robotic agent can have cameras, infrared range finders, and NLP for sensors, and various motors for actuators.

 Software Agent: A software agent can have keystrokes and file contents as sensory input, act on those inputs, and display output on a screen.
Examples of Agents

Thermostat, cell phone, camera, fly-by-wire system.

Fly-by-wire control systems allow aircraft computers to perform tasks without pilot input. Automatic stability systems operate in this way. Gyroscopes and sensors such as accelerometers are mounted in an aircraft to sense rotation on the pitch, roll and yaw axes.

A human agent has eyes, ears, and other organs which act as sensors, and hands, legs, mouth, and other body parts which act as actuators.
Components of Agents
Sensor: Sensor is a device which detects the change in the environment
and sends the information to other electronic devices. An agent observes
its environment through sensors.

Actuators: Actuators are the components of machines that convert energy into motion. The actuators are only responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.

Effectors: Effectors are the devices which affect the environment.


Effectors can be legs, wheels, arms, fingers, wings, fins, and display
screen.
Good Behavior: The Concept of Rationality
Rational agent
A rational agent is one that does the right thing. Obviously, doing the right thing is better
than doing the wrong thing, but what does it mean to do the right thing?
Rationality
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent's prior knowledge of the environment.
• The actions that the agent can perform.
• The agent's percept sequence to date.
• This leads to a definition of a rational agent: "For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has."
Intelligent Agent vs Rational Agent

Intelligent Agent:
1. An Intelligent Agent is a system that can perceive its environment and take actions to achieve a specific goal.
2. An Intelligent Agent can perceive its environment through various sensors or inputs.
3. It can make decisions based on a set of rules or a pre-defined algorithm.
4. An Intelligent Agent can learn from its environment and adapt its behavior.
5. It can operate independently of human intervention.
6. An Intelligent Agent can be designed to achieve a specific goal.

Rational Agent:
1. A Rational Agent is an Intelligent Agent that makes decisions based on logical reasoning and optimizes its behavior to achieve a specific goal.
2. A Rational Agent's perception is based on the information available to it and logical reasoning.
3. It makes decisions based on logical reasoning and optimizes its behavior to achieve its goals.
4. A Rational Agent can also learn from its environment and adapt its behavior, but it does so based on logical reasoning.
5. It can also operate independently of human intervention, but it does so based on logical reasoning.
6. A Rational Agent has a specific goal and optimizes its behavior.
Why are rational agents important?
 Real-world applications: Rational agents can be used to control autonomous
systems such as self-driving cars, robots, or drones, to make financial decisions,
or to plan logistics.

 Optimization: Rational agents can optimize their behavior to achieve a specific


goal, considering the current state of the environment, the available resources,
and the constraints.

 Decision-making: Rational agents can make decisions based on logical reasoning and optimize their behavior to achieve their goals, considering their perception of the environment and the performance measure; this allows for better decision-making.
 Adaptability: Rational agents can learn from their environment
and adapt their behavior. This allows them to improve their performance
over time.

 Autonomy: Rational agents can operate independently of human


intervention. This can lead to increased efficiency and reduced human
error.

 Simulation: Rational agents can be used to simulate the behavior


of other agents or systems, allowing for the study and prediction of
their behavior.
Rational Agent Real-World Applications
 Autonomous systems: Self-driving cars, drones, and robots use rational agents to make decisions, plan
their actions and optimize their behavior to achieve their goals, such as safely transporting passengers or
completing a task.
 Finance: Rational agents are used in financial services to make investment decisions, risk management,
and trading. They can analyze market data, predict future trends, and optimize their behavior to maximize
returns.
 Healthcare: Rational agents make medical diagnoses, plan treatment, and monitor patients' progress. They
can analyze medical data, predict the progression of diseases, and optimize the treatment plan.
 Manufacturing: Rational agents are used in manufacturing to control production processes, plan logistics,
and optimize the use of resources.
 Transportation: Rational agents are used in transportation to plan routes, schedule vehicles, and optimize
the use of resources.
 Customer service: Rational agents interact with customers, respond to their queries, and provide recommendations.
 Social media: Rational agents are used to recommend content, filter spam, and moderate content.
The Nature of Environments

Now that we have a definition of rationality, we are almost ready to think about building rational agents.
We will first study the "nature of the environment".

The environment is the Task Environment (problem) for which the Rational
Agent is the solution. Any task environment is characterised on the basis of
“PEAS”.

We begin by showing how to specify a task environment, illustrating the process


with a number of examples.
We then show that task environments come in a variety of flavors.

Specifying the task environment: "PEAS"
• Performance – What is the performance characteristic which would
either make the agent successful or not.
• Environment – Physical characteristics and constraints expected.
• Actuators – The physical or logical constructs which would take action.
• Sensors – Again physical or logical constructs which would sense the
environment. From our previous example, these are cameras and
dirt sensors.
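To make the PEAS idea concrete, here is a minimal Python sketch (not from the slides; the vacuum-cleaner values below are assumed examples) that records a PEAS description as a simple data structure:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    """Container for a PEAS task-environment description."""
    performance: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

# Assumed PEAS description for the vacuum-cleaner agent mentioned above.
vacuum_peas = PEAS(
    performance=["cleanliness", "battery usage", "time taken"],
    environment=["room", "carpet", "furniture", "dirt"],
    actuators=["wheels", "brushes", "vacuum pump"],
    sensors=["camera", "dirt sensor", "bump sensor"],
)
print(vacuum_peas)
```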

Rational agents could be physical agents like the one described above, or a program that operates in a non-physical environment like an operating system. Imagine a bot website operator designed to scan Internet news sources and show the interesting items to its users, while selling advertising space to generate revenue.

For example, consider an online tutoring system:

Agent: Math e-learning system
Performance measure: SLA-defined score on the test
Environment: Student, teacher, parents
Actuators: Computer display system for exercises, corrections, feedback
Sensors: Keyboard, mouse
An automated taxi driver.

Figure 2.4 summarizes the PEAS description for the taxi's task environment. We discuss each element in more detail in the following paragraphs.
1. First, what is the performance measure to which we would like our automated driver to aspire?
Desirable qualities include getting to the correct destination; minimizing fuel consumption
and wear and tear; minimizing the trip time or cost; minimizing violations of traffic laws
and disturbances to other drivers; maximizing safety and passenger comfort.

2. Next, what is the driving environment , that the taxi will face?
Any taxi driver must deal with a variety of roads, ranging from rural lanes and urban alleys
to 12-lane freeways. The roads contain other traffic, pedestrians, stray animals, road
works, police cars, puddles, and potholes. The taxi must also interact with potential and
actual passengers.

3.The actuators for an automated taxi include those available to a human driver:
control over the engine through the accelerator and control over steering and braking. In
addition, it will need output to a display screen or voice synthesizer to talk back to the
passengers, and perhaps some way to communicate with other vehicles, politely or otherwise.
4.The basic sensors for the taxi will include one or more video cameras so that
it can see, as well as lidar and ultrasound sensors to detect distances to other cars
and obstacles. To avoid speeding tickets, the taxi should have a speedometer,
and to control the vehicle properly, especially on curves, it should
have an accelerometer.

Properties of task environments :

1. FULLY OBSERVABLE VS. PARTIALLY OBSERVABLE


2. SINGLE-AGENT VS. MULTIAGENT
3. DETERMINISTIC VS. NONDETERMINISTIC
4. EPISODIC VS. SEQUENTIAL
5. STATIC VS. DYNAMIC
6. DISCRETE VS. CONTINUOUS
7. KNOWN VS. UNKNOWN
1. FULLY OBSERVABLE VS. PARTIALLY OBSERVABLE:
Full or partial? If the agent's sensors get full access to the state of the environment, then the agent does not need to pre-store any information. Partial observability may be due to inaccuracy of sensors or incomplete information about an environment, like limited access to enemy territory.

2. SINGLE-AGENT VS. MULTIAGENT :


The distinction between single-agent and multiagent environments may seem simple enough. For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment.

3. DETERMINISTIC VS. NONDETERMINISTIC:


If the next state of the environment is completely determined by the current state and
the action executed by the agent(s), then we say the environment is deterministic;
otherwise, it is nondeterministic.

4. EPISODIC VS. SEQUENTIAL:
In an episodic task environment, the agent’s experience is divided into atomic
episodes. In each episode the agent receives a percept and then performs a single
action. Crucially, the next episode does not depend on the actions taken in previous
episodes. Many classification tasks are episodic.

5. STATIC VS. DYNAMIC


If the environment can change while an agent is deliberating, then we say the
environment is dynamic for that agent; otherwise, it is static. Static environments are
easy to deal with because the agent need not keep looking at the world while it is
deciding on an action, nor need it worry about the passage of time.

6. DISCRETE VS. CONTINUOUS


The discrete/continuous distinction applies to the state of the environment, to the way
time is handled, and to the percepts and actions of the agent. For example, the chess
environment has a finite number of distinct states (excluding the clock).
7. KNOWN VS. UNKNOWN
The distinction between known and unknown environments is not the same as the
one between fully and partially observable environments. It is quite possible
for a known environment to be partially observable—for example, in solitaire
card games, I know the rules but am still unable to see the cards that have not yet
been turned over. Conversely, an unknown environment can be fully
observable—in a new video game, the screen may show the entire game state but
I still don’t know what the buttons do until I try them.

reference : https://www.javatpoint.com/agent-environment-in-ai

The Structure of Agents
So far we have talked about agents by describing behavior—the action that is performed
after any given sequence of percepts.

To understand the structure of Intelligent Agents, we should be familiar with


Architecture and Agent programs.

Architecture: the machinery that the agent executes on. It is a device with sensors and actuators, for example, a robotic car, a camera, or a PC.

An agent program is an implementation of an agent function. An agent function is a


map from the percept sequence(history of all that an agent has perceived to date) to an
action.
Agent = Architecture + Agent Program
f : P* → A
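As an illustration of the mapping f : P* → A, the sketch below (not from the slides; the percepts and lookup table are assumed) shows a table-driven agent program that appends each percept to the percept sequence and looks the whole sequence up in a table:

```python
percepts = []  # the percept sequence seen so far (P*)

# Assumed lookup table from percept sequences to actions.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def table_driven_agent(percept):
    """Agent program: takes the current percept and returns an action."""
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

print(table_driven_agent(("A", "Clean")))   # -> Right
print(table_driven_agent(("B", "Dirty")))   # -> Suck
```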
Agent programs
“An AI agent program is a software program that uses artificial intelligence (AI)
to perform tasks, make decisions, and interact with its environment”

They take the current percept as input from the sensors and return an action to the
actuators. Notice the difference between the agent program, which takes the current
percept as input, and the agent function, which may depend on the entire percept
history.

Agent programs are fundamental concepts that define how autonomous systems
or agents perceive their environment and take actions to achieve specific goals.
An agent can be a software entity (like a chatbot or a robot) that perceives its
environment through sensors and acts upon it through actuators.

The agent program has no choice but to take just the current percept as input because nothing more is available from the environment; if the agent's actions need to depend on the entire percept sequence, the agent will have to remember the percepts.

Now the next question is: how do we make the agent work in the proper order, taking percepts from the environment and generating actions based on that environment?

How can we do this?

This can be done by a good agent program.

There are two things to note about the skeleton of the agent program: although the agent mapping is defined as a function from percept sequences to actions, the program receives only a single percept as input; therefore, if the agent needs to act on multiple percepts, it must have a specialized memory for the percept sequence.

Now we have to design an agent program that implements this mapping function, i.e., a program that maps percepts to actions.

Four basic kinds of agent programs embody the principles underlying almost all intelligent systems:

1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents

1. Simple reflex agents:

Simple reflex agents are the simplest agents. These agents take decisions on the basis of the current percepts and ignore the rest of the percept history. These agents only succeed in a fully observable environment. The simple reflex agent does not consider any part of the percept history during its decision and action process.
The simple reflex agent works on the condition-action rule, which means it maps the current state to an action. An example is a room-cleaner agent: it works only if there is dirt in the room.
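A minimal Python sketch of this condition-action idea for the room-cleaner example; the percept format and the rules are assumed for illustration (see the problems listed next):

```python
def simple_reflex_vacuum_agent(percept):
    """percept is a (location, status) pair; returns an action."""
    location, status = percept
    if status == "Dirty":     # condition-action rule: dirt -> Suck
        return "Suck"
    if location == "A":       # clean square A -> move Right
        return "Right"
    return "Left"             # clean square B -> move Left

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # -> Left
```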
Problems with the simple reflex agent design approach:
• They have very limited intelligence.
• They do not have knowledge of non-perceptual parts of the current state.
• The rule set is mostly too big to generate and to store.
• They are not adaptive to changes in the environment.

2. Model-based reflex agents:

The model-based agent can work in a partially observable environment and track the situation.
A model-based agent has two important factors:
Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
Internal state: a representation of the current state based on the percept history.
These agents have the model, "which is knowledge of the world", and based on the model they perform actions.
Updating the agent state requires information about:
• How the world evolves.
• How the agent's actions affect the world.

That is, the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.
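A minimal Python sketch of a model-based reflex agent that keeps such an internal state; the two-square world and the update rule are assumed for illustration:

```python
class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: last known status of each square (None = unknown).
        self.state = {"A": None, "B": None}

    def update_state(self, percept):
        """Model of how percepts update what the agent believes about the world."""
        location, status = percept
        self.state[location] = status

    def program(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "Dirty":
            return "Suck"
        # Use the internal state: head toward a square not known to be clean.
        other = "B" if location == "A" else "A"
        if self.state[other] != "Clean":
            return "Right" if location == "A" else "Left"
        return "NoOp"

agent = ModelBasedVacuumAgent()
print(agent.program(("A", "Clean")))  # -> Right (B's status is still unknown)
```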
3. Goal-based agents:

The knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
The agent needs to know its goal, which describes desirable situations.
Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
They choose an action so that they can achieve the goal.
These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not.
Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
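A minimal Python sketch of the searching idea behind a goal-based agent; the two-square world is an assumed example, and breadth-first search stands in for whatever planner a real agent might use:

```python
from collections import deque

def goal_based_plan(start, goal_test, successors):
    """Breadth-first search for an action sequence that satisfies goal_test."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None

# Assumed two-square world: state = (location, dirt in A, dirt in B).
def successors(s):
    loc, a, b = s
    yield "Left", ("A", a, b)
    yield "Right", ("B", a, b)
    yield "Suck", (loc, False if loc == "A" else a, False if loc == "B" else b)

plan = goal_based_plan(("A", True, True), lambda s: not s[1] and not s[2], successors)
print(plan)  # -> ['Suck', 'Right', 'Suck']
```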

4. Utility-based agents:

These agents are similar to the goal-based agent but provide an extra component of utility measurement, which makes them different by providing a measure of success at a given state.
A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
The utility-based agent is useful when there are multiple possible alternatives and an agent has to choose the best action to perform.
The utility function maps each state to a real number to check how efficiently each action achieves the goals.
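A minimal Python sketch of a utility-based agent; the utility numbers and the toy transition model are assumed purely for illustration:

```python
def utility(state):
    """Assumed utility: +1 per clean square, -0.1 per move made."""
    clean_a, clean_b, moves = state
    return clean_a + clean_b - 0.1 * moves

def result(state, action):
    """Assumed transition model for a two-square world with the agent at A."""
    clean_a, clean_b, moves = state
    if action == "Suck":
        return (1, clean_b, moves)
    return (clean_a, clean_b, moves + 1)   # Left/Right just cost a move

def utility_based_agent(state, actions=("Suck", "Right", "Left")):
    # Pick the action whose resulting state has the highest utility.
    return max(actions, key=lambda a: utility(result(state, a)))

print(utility_based_agent((0, 1, 0)))  # -> 'Suck' (cleaning A is worth the most)
```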

A Problem-Solving Agent:

In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result.

Problem-solving agents are goal-based agents and use atomic representation.

In general, "searching refers to finding the information one needs".

Some of the most popularly used problem solving with the help of artificial
intelligence are:
1. Chess.
2. Travelling Salesman Problem.
3. Tower of Hanoi Problem.
4. Water-Jug Problem.
5. N-Queen Problem.

• Problem Searching
• In general, searching refers to finding the information one needs.
• Searching is the most commonly used technique of problem solving in artificial intelligence.
• A searching algorithm helps us to search for the solution of a particular problem.

Steps to Solve a Problem Using Artificial Intelligence
• The process of solving a problem consists of five steps. These
are:

1. Defining the problem: The definition of the problem must be stated precisely. It should contain the possible initial as well as final situations, which should result in an acceptable solution.
2. Analyzing the problem: The problem and its requirements must be analyzed, as a few features can have an immense impact on the resulting solution.
3. Identification of solutions: This phase generates a reasonable number of solutions to the given problem in a particular range.
4. Choosing a solution: From all the identified solutions, the best solution is chosen based on the results produced by the respective solutions.
5. Implementation: After choosing the best solution, its implementation is done.

Measuring problem-solving performance
We can evaluate an algorithm’s performance in four ways:
Completeness: Is the algorithm guaranteed to find a solution when there is
one?
Optimality: Does the strategy find the optimal solution?
Time complexity: How long does it take to find a solution?
Space complexity: How much memory is needed to perform the search?

Search Algorithm Terminologies

• Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
1. Search Space: Search space represents the set of possible solutions which a system may have.
2. Start State: It is the state from where the agent begins the search.
3. Goal Test: It is a function which observes the current state and returns whether the goal state is achieved or not.
• Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
• Actions: It gives the description of all the available actions to the agent.
• Transition model: A description of what each action does; it can be represented as a model.
• Path Cost: It is a function which assigns a numeric cost to each path.
• Solution: It is an action sequence which leads from the start node to the goal node.
• Optimal Solution: A solution that has the lowest cost among all solutions.
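The terminology above can be tied together as a small Python skeleton; the class and method names are an assumed interface, not a standard library API:

```python
class SearchProblem:
    def __init__(self, start_state):
        self.start_state = start_state              # Start State

    def actions(self, state):                       # available Actions in a state
        raise NotImplementedError

    def result(self, state, action):                # Transition model
        raise NotImplementedError

    def goal_test(self, state):                     # Goal test
        raise NotImplementedError

    def step_cost(self, state, action, next_state): # used to compute the Path Cost
        return 1
```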
Example Problems

A toy problem is intended to illustrate or exercise various problem-solving methods. A real-world problem is one whose solutions people actually care about.

Toy Problems: Vacuum World

States: The state is determined by both the agent location and the dirt locations. The agent is in one of the 2 locations, each of which might or might not contain dirt. Thus there are 2 * 2^2 = 8 possible world states.
Initial state: Any state can be designated as the initial state.
Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger environments might also include Up and Down.
Transition model: The actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect. The complete state space is shown in the figure.
Goal test: This checks whether all the squares are clean.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
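A minimal Python sketch of this vacuum-world formulation; the state encoding (agent location plus one dirt flag per square) is an assumed representation:

```python
# A state is (agent_location, dirt_at_A, dirt_at_B); 2 * 2**2 = 8 states.
ACTIONS = ("Left", "Right", "Suck")

def result(state, action):
    """Transition model: Left in the leftmost square, Right in the rightmost
    square, and Suck in a clean square have no effect."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        return ("A", False, dirt_b) if loc == "A" else ("B", dirt_a, False)
    return state

def goal_test(state):
    """The goal is reached when all squares are clean."""
    return not state[1] and not state[2]

def path_cost(path):
    """Each step costs 1, so the path cost is the number of steps."""
    return len(path)

print(result(("A", True, True), "Suck"))  # -> ('A', False, True)
print(goal_test(("B", False, False)))     # -> True
```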

VACUUM WORLD STATE SPACE GRAPH
Problem Formulation

Problem formulation is a critical step in AI that shapes how an algorithm interacts with its environment, how it perceives its goal, and the actions it can take to achieve that goal. A well-formulated problem provides:

Initial State: The starting point or condition from which the AI will begin solving the
problem.
Actions: The set of possible moves or steps the AI can take to transition from one state to
another.
Goal State: The desired outcome or final state the AI aims to achieve.
Path Costs: Costs associated with moving between states. These could be time, distance, energy,
or other metrics depending on the problem context.

2. City Map Example: Pathfinding AI


Imagine we are developing an AI to find the shortest route between two points in a city,
represented as a city map. The problem can be formulated as a pathfinding task, where the AI agent
needs to travel from a starting point (initial state) to a destination (goal state) by selecting the best
route based on some criteria like distance or time.
Key Components of Problem Formulation:
Initial State:
The location on the city map where the agent (AI) starts. For example, the initial state could be
Point A (e.g., your home).
State Space: All possible locations on the city map, representing intersections, roads, and
landmarks. Each point on the map is a state.
Actions: The possible moves the agent can make from one location to another. For instance, the AI
might choose to move north, south, east, or west between different intersections or roads on the
map.
Transition Model: Defines the result of each action. For example, if the AI decides to move north, it
will land at the next intersection in that direction, updating its state.
Goal State: The destination where the agent wants to arrive, such as Point B (e.g., a restaurant or
office).
Path Cost: A numerical value associated with traveling between two points. The AI could minimize
the distance (shortest path) or the time (fastest path considering traffic or road conditions). Path
costs help the AI decide which moves are optimal.
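A minimal Python sketch of this pathfinding formulation using uniform-cost search to minimize path cost; the city-map data and location names are hypothetical:

```python
import heapq

# Assumed city map: each entry maps an intersection to its neighbours and road distances.
city_map = {
    "Point A":   {"Market St": 2, "Oak Ave": 4},
    "Market St": {"Point A": 2, "Point B": 7},
    "Oak Ave":   {"Point A": 4, "Point B": 3},
    "Point B":   {},
}

def uniform_cost_search(graph, start, goal):
    """Return (total_cost, route) with the lowest path cost, or None if unreachable."""
    frontier = [(0, start, [start])]
    explored = set()
    while frontier:
        cost, state, route = heapq.heappop(frontier)
        if state == goal:
            return cost, route
        if state in explored:
            continue
        explored.add(state)
        for neighbour, step_cost in graph[state].items():
            if neighbour not in explored:
                heapq.heappush(frontier, (cost + step_cost, neighbour, route + [neighbour]))
    return None

print(uniform_cost_search(city_map, "Point A", "Point B"))
# -> (7, ['Point A', 'Oak Ave', 'Point B'])
```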

END OF MODULE 1
