Module 1- MCA Sem II - AIML - Introduction to AI
ARTIFICIAL INTELLIGENCE, APPLICATIONS OF AI, AI PROBLEMS, PROBLEM FORMULATION, INTELLIGENT AGENTS, TYPES OF AGENTS, AGENT ENVIRONMENTS, PEAS REPRESENTATION FOR AN AGENT, ARCHITECTURE OF INTELLIGENT AGENTS.
SYNTAX & SEMANTICS FOR PROPOSITIONAL LOGIC, SYNTAX & SEMANTICS FOR FIRST ORDER PREDICATE LOGIC, PROPERTIES OF WELL-FORMED FORMULAS (WFF), RESOLUTION: RESOLUTION BASICS, CONVERSION TO CLAUSAL FORM, RESOLUTION OF PROPOSITIONAL LOGIC, UNIFICATION OF PREDICATES.
What is AI?
Artificial Intelligence is concerned with the design of intelligence in an artificial device. The term was coined by John
McCarthy in 1956.
Intelligence is the ability to acquire, understand and apply the knowledge to achieve goals in the world.
An AI program will demonstrate a high level of intelligence, to a degree that equals or exceeds the intelligence required of a human performing some task.
Although there is no clear definition of AI, or even of intelligence, AI can be described as an attempt to build machines that, like humans, can think and act, and that can learn and use knowledge to solve problems on their own.
Ex. Google Assistant, Siri, Alexa, Tesla cars, chatbots, ChatGPT
The definitions of AI:
The definitions on the top, (a) and (b), are concerned with reasoning, whereas those on the bottom, (c) and (d), address behavior.
The definitions on the left, (a) and (c), measure success in terms of human performance, and those on the right, (b) and (d), measure success against the ideal concept of intelligence called rationality.
Can Machines Think ?
During World War II, the first computers were developed to break German communications, and Alan Turing played an important role in that effort.
In 1950, he published the paper "Computing Machinery and Intelligence" in the journal Mind, which opens with the question "Can machines think?"
Key Fundamentals based on Human Behavior:
Reasoning / Logic
Learning
Problem Solving
Perception
Intelligent Systems:
In order to design intelligent systems, it is important to categorize them into four categories:
1. Systems that think like humans
2. Systems that think rationally
3. Systems that behave like humans
4. Systems that behave rationally
1. Acting Humanly (The Turing Test Approach):
The computer would need to possess the following capabilities:
natural language processing to enable it to communicate successfully in English;
knowledge representation to store what it knows or hears;
automated reasoning to use the stored information to answer questions and to draw new conclusions;
machine learning to adapt to new circumstances and to detect and extrapolate patterns.
3. Acting Rationally (The Rational Agent Approach):
Doing/behaving rightly
Generalized approach
Maximizing expected performance
(AI tasks are commonly divided into mundane tasks, such as perception and everyday language use, and formal or expert tasks, such as mathematics, games, and diagnosis.) Clearly, tasks of the first type are easy for humans to perform, and almost all humans are able to master them.
The second range of tasks requires skill development and/or intelligence, and only some specialists can perform them well.
However, when we look at what computer systems have been able to achieve to date, we see that their
achievements include performing sophisticated tasks like medical diagnosis, performing symbolic
integration, proving theorems and playing chess.
Application of AI
AI algorithms have attracted close attention from researchers and have been applied successfully to solve problems in engineering. Nevertheless, for large and complex problems, AI algorithms can consume considerable computation time due to the stochastic nature of their search approaches.
1) Business: financial strategies
2) Engineering: checking designs, offering suggestions to create new products, expert systems for engineering problems
3) Manufacturing: assembly, inspection and maintenance
4) Medicine: monitoring, diagnosing
5) Education: in teaching
6) Fraud detection
7) Object identification
8) Information retrieval
9) Space shuttle scheduling
Intelligent Agents :
Agent :
An Agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators.
A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth, and
other body parts for actuators.
A robotic agent might have cameras and infrared range finders for sensors and various
motors for actuators.
A software agent receives keystrokes, file contents, and network packets as sensory inputs
and acts on the environment by displaying on the screen, writing files, and sending network
packets.
Percept: We use the term percept to refer to the agent's perceptual inputs at any given instant.
Percept Sequence: An agent's percept sequence is the complete history of everything the agent has ever
perceived.
Agent function: Mathematically speaking, we say that an agent's behavior is described by the agent
function that maps any given percept sequence to an action.
Agent program : Internally, the agent function for an artificial agent will be implemented by an agent
program. It is important to keep these two ideas distinct. The agent function is an abstract mathematical
description; the agent program is a concrete implementation, running on the agent architecture.
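The agent-function / agent-program distinction can be sketched in Python. Below, a table-driven agent program implements an agent function for a two-square vacuum world; the table entries and square names are illustrative assumptions, not from the notes.

```python
# The agent function is an abstract mapping: percept sequence -> action.
# A table-driven agent program implements it with an explicit lookup table.
# The two-square vacuum world (squares "A" and "B") is assumed for illustration.

TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
}

percepts = []  # the percept sequence: everything perceived so far

def table_driven_agent(percept):
    """Append the new percept and look up the whole sequence in the table."""
    percepts.append(percept)
    return TABLE.get(tuple(percepts), "NoOp")
```

Note that the table is indexed by the entire percept sequence, not just the latest percept, which is why table-driven agents quickly become impractical as the sequence grows.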
Types of Agents :
Simple Reflex Agents :
These agents select actions on the basis of the current percept, ignoring the rest of the percept history ,
Ex. If the room temperature is high, then switch on the AC (a condition-action rule based only on the current percept).
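A minimal sketch of such a simple reflex agent in Python; the 25 °C threshold and the action strings are assumed for illustration.

```python
# A simple reflex agent: it acts only on the current percept (the room
# temperature right now), ignoring all percept history.
# The 25-degree threshold is an assumed value for illustration.

def simple_reflex_ac_agent(temperature_c):
    # condition-action rule: if it is hot, switch on the AC
    if temperature_c > 25:
        return "switch on AC"
    return "switch off AC"
```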
Model Based Reflex Agents :
The most effective way to handle partial observability is for the agent to keep track of the
part of the world it can’t see now.
That is, the agent should maintain some sort of internal state that depends on the percept
history and thereby reflects at least some of the unobserved aspects of the current state.
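Continuing the AC example, a sketch of how internal state handles partial observability; the class name, the threshold, and the idea of a missing sensor reading are assumptions for illustration.

```python
# A model-based reflex agent: it keeps internal state (the last known
# temperature) so it can still act sensibly when the current percept is
# missing, i.e. under partial observability.

class ModelBasedACAgent:
    def __init__(self):
        self.last_temperature = None  # internal state: best guess of the world

    def act(self, temperature_c=None):
        # update the model; if the sensor gave nothing, reuse the stored state
        if temperature_c is not None:
            self.last_temperature = temperature_c
        if self.last_temperature is not None and self.last_temperature > 25:
            return "switch on AC"
        return "switch off AC"
```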
Goal Based Agents :
For example,
• Planning for upcoming examination of sem2
• at a road junction, the taxi can turn left, turn right, or go straight on. The correct decision depends on where
the taxi is trying to get to. In other words, as well as a current state description, the agent needs some sort
of goal information GOAL that describes situations that are desirable—for example, being at the
passenger’s destination.
• The agent program can combine this with the model (the same information as was used in the model based
reflex agent) to choose actions that achieve the goal.
Utility-Based Agent :
Goals alone are not enough to generate high-quality behavior in most environments. Utility-based agents also work in partially observable environments.
They focus on utility rather than just the goal: a utility function scores states by how desirable ("happy" or "unhappy") they are.
For example,
many action sequences will get the taxi to its destination (thereby achieving the goal) but some
are quicker, safer, more reliable, or cheaper than others.
A more general performance measure should allow a comparison of different world states
according to exactly how happy they would make the agent.
Because “happy” does not sound very scientific, economists and computer scientists use the term
utility instead
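The taxi example can be sketched with a hypothetical utility function that ranks routes by time and risk; the routes, weights, and numbers are invented for illustration.

```python
# Several routes all reach the destination (the goal), but a utility
# function ranks the resulting states. All values below are illustrative.

routes = {
    "highway": {"time_min": 20, "risk": 0.3},
    "city":    {"time_min": 35, "risk": 0.1},
}

def utility(route):
    # higher is better: penalise travel time and risk (weights are assumed)
    return -route["time_min"] - 100 * route["risk"]

def choose_route(routes):
    # a utility-based agent picks the action leading to the best-scored state
    return max(routes, key=lambda name: utility(routes[name]))
```

With these numbers the slower but safer city route scores higher, illustrating how utility trades off goal achievement against quality.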
Learning Agents :
A learning agent can be divided into four conceptual components: the learning element, the performance element, the critic, and the problem generator.
The most important distinction is between the learning element, which is responsible for making improvements, and the performance element, which is responsible for selecting external actions. (Learning element: when to do what?)
The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions. (Performance element: how to do everything?)
The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.
In a driverless car:
the learning element (LE) learns, for example, when to apply the brake; the performance element (PE) carries out the actual driving; the critic observes the environment and gives feedback to the LE, which then modifies the PE accordingly; the problem generator (PG) suggests exploratory actions, such as trying different routes.
Agent Environment Types :
Fully observable vs. partially observable:
If an agent’s sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action; relevance, in turn, depends on the performance measure.
In other words, its fully observable when the information received by an agent at any point of time is
sufficient to make the optimal decision.
An environment might be partially observable because of noisy and inaccurate sensors or because parts of
the state are simply missing from the sensor data.
In other words, an environment is called partially observable when the agent needs memory in order to make the best possible decision.
Deterministic vs. stochastic :
If the next state of the environment is completely determined by the current
state and the action executed by the agent, then we say the environment is
deterministic; otherwise, it is stochastic.
Episodic vs. sequential:
In an episodic task environment, the agent’s experience is divided into atomic
episodes. In each episode the agent receives a percept and then performs a single
action. Crucially, the next episode does not depend on the actions taken in
previous episodes. Many classification tasks are episodic.
In sequential environments, on the other hand, the current decision could affect
all future decisions
Static vs. dynamic:
If the environment can change while an agent is deliberating, then we say the environment is
dynamic for that agent; otherwise, it is static.
Static environments are easy to deal with because the agent need not keep looking at the
world while it is deciding on an action, nor need it worry about the passage of time.
Dynamic environments, on the other hand, are continuously asking the agent what it wants to do; if it hasn’t decided yet, that counts as deciding to do nothing.
Discrete vs. continuous:
The discrete/continuous distinction applies to the state of the environment, to the way time is
handled, and to the percepts and actions of the agent.
Single agent vs. multiagent:
An agent solving a crossword puzzle by itself is in a single-agent environment, whereas an agent playing chess is in a (competitive) two-agent environment.
Known vs. unknown:
In a known environment, the outcomes (or outcome probabilities if the environment
is stochastic) for all actions are given.
Obviously, if the environment is unknown , the agent will have to learn how it works
in order to make good decisions.
Note that the distinction between known and unknown environments is not the same
as the one between fully and partially observable environments. It is quite possible
for a known environment to be partially observable.
Summary of Environments :
PEAS representation for an agent :
In our discussion of the rationality of the simple vacuum-cleaner agent, we had to specify the
performance measure, the environment, and the agent’s actuators and sensors. We group all
these under the heading of the task environment.
For the acronymically minded, we call this the PEAS (Performance, Environment, Actuators,
Sensors) description.
In designing an agent, the first step must always be to specify the task environment as fully as
possible.
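As a sketch, the PEAS description for the automated-taxi example commonly paired with this material can be written out as data; the entries below are illustrative, not exhaustive.

```python
# PEAS description of an automated taxi, written as a dictionary.
# Entries are illustrative examples of each component.

taxi_peas = {
    "Performance": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer"],
}
```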
Architecture of intelligent Agent
• Sensors:
The agent has five sensors, each of which gives a single bit of information:
– In the square containing the wumpus and in the directly (not diagonally) adjacent squares, the agent will perceive a Stench.
– In the squares directly adjacent to a pit, the agent will perceive a Breeze.
– In the square where the gold is, the agent will perceive a Glitter.
– When an agent walks into a wall, it will perceive a Bump.
– When the wumpus is killed, it emits a woeful Scream that can be perceived anywhere in the cave.
The percepts will be given to the agent program in the form of a list of five symbols;
for example,
if there is a stench and a breeze, but no glitter, bump, or scream, the agent
program will get [Stench, Breeze,None,None,None].
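A small sketch that builds the five-symbol percept list described above (the function name and boolean interface are assumptions of this sketch):

```python
# Build the wumpus-world percept list [Stench, Breeze, Glitter, Bump, Scream],
# using None for any percept that is absent, as in the notes.

def percept(stench, breeze, glitter, bump, scream):
    return [
        "Stench" if stench else None,
        "Breeze" if breeze else None,
        "Glitter" if glitter else None,
        "Bump" if bump else None,
        "Scream" if scream else None,
    ]
```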
Knowledge Representation
The object of knowledge representation is to express knowledge in computer-tractable form, such that it can be used to help
agents perform well.
A knowledge representation language is defined by two aspects:
Syntax:-
The syntax of a language describes the possible configurations that can constitute sentences.
Usually, we describe syntax in terms of how sentences are represented on the printed page, but the real
representation is inside the computer: each sentence is implemented by a physical configuration or physical property of
some part of the agent.
Semantics:-
The semantics determines the facts in the world to which the sentences refer.
Without semantics, a sentence is just an arrangement of electrons or a collection of marks on a page.
With semantics, each sentence makes a claim about the world and with semantics, we can say that when a
particular configuration exists within an agent, the agent believes the corresponding sentence.
Ex. the syntax of the language of arithmetic expressions says that if x and y are expressions denoting numbers, then x > y is a
sentence about numbers. The semantics of the language says that x > y is false when y is a bigger number than x, and true
otherwise.
Syntax & Semantic of propositional Logic
• Wrapping parentheses: ( … )
Example ,
(P ∧ Q) → R "If it is hot and humid, then it is raining"
Q → P "If it is humid, then it is hot"
Q "It is humid."
We are free to choose better symbols: Ho = "It is hot", Hu = "It is humid", R = "It is raining"
Logical Connectives:
Logical connectives are used to connect two simpler propositions or representing a sentence logically. We can create compound propositions
with the help of logical connectives. There are mainly five connectives, which are given as follows:
1. Negation: A sentence such as ¬P is called the negation of P. A literal can be either a positive literal or a negative literal.
2. Conjunction: A sentence which has the ∧ connective, such as P ∧ Q, is called a conjunction, where P and Q are the propositions.
3. Disjunction: A sentence which has the ∨ connective, such as P ∨ Q, is called a disjunction, where P and Q are the propositions.
4. Implication: A sentence such as P → Q is called an implication. Implications are also known as if-then rules. It can be represented as:
If it is raining, then the street is wet.
Let P = It is raining, and Q = The street is wet, so it is represented as P → Q.
5. Biconditional: A sentence such as P ⇔ Q is called a biconditional, read as "P if and only if Q".
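The connectives can be sketched as Python functions over truth values (the function names are illustrative); note that P → Q is false only when P is true and Q is false.

```python
# The five propositional connectives as truth-value functions, with the
# rain/wet example in mind: P = "It is raining", Q = "The street is wet".

def NOT(p): return not p
def AND(p, q): return p and q
def OR(p, q): return p or q
def IMPLIES(p, q): return (not p) or q  # false only when p is true and q is false
def IFF(p, q): return p == q            # true when both sides agree
```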
First Order Predicate Logic
First Order Logic in AI is also known as Predicate Logic or First Order Predicate Logic. It is a robust technique to represent objects as well as their relationships.
It is an extension of propositional logic and, unlike propositional logic, it is sufficiently expressive to represent any natural language construct.
Unlike propositional logic, First Order Logic doesn't only include facts but also other entities, as listed below. To represent such statements, propositional logic is not sufficient, so we require a more powerful logic, such as first-order logic.
Objects:
Objects can denote any real-world entity or any variable. E.g., A, B, colors, theories, circles etc.
Relations:
Relations represent the links between different objects. Relations can be unary(relations defined for a single term) and n-ary(relations
defined for n terms). E.g., blue, round (unary); friends, siblings (binary); etc.
Functions:
Functions map their input object to the output object using their underlying relation. Eg: father_of(), mother_of() etc.
Syntax of First-Order logic:
The syntax of FOL determines which collection of symbols is a logical expression in first-order logic.
The basic syntactic elements of first-order logic are symbols. We write statements in short-hand notation in FOL.
Objects :
• Person King John
• Person Richard
• Crown
• Left leg of John
• Left leg of Richard
Relation :
• OnHead
• Brother
• Person
• King
Atomic Sentences
Atomic sentences are the most basic expressions of First Order Logic in AI. These sentences comprise a predicate followed by a set of
terms inside a parenthesis. Formally stating, the structure of an atomic sentence looks like the following.
Predicate(term 1, term 2, term 3, ...)
An atomic sentence is true in a given model if the relation referred to by the predicate symbol holds among the objects referred to by the
arguments.
Complex Sentences
Complex sentences can be constructed by combining atomic sentences using connectives like AND ( ∧), OR ( ∨), NOT (¬), IMPLIES ( ⇒), IF
AND ONLY IF (⇔) etc.
Formally stating, if c1, c2, ... represent connectives, a complex sentence in First Order Logic can be defined as follows.
Predicate 1(term 1, term 2, ...) c1 Predicate 2(term 1, term 2, ...) c2 ...
Ex.
King(Richard) ∨ King(John)
¬ King(Richard) ⇒ King(John) .
Quantifiers
Once we have a logic that allows objects, it is only natural to want to express properties of entire collections of objects, instead of enumerating the objects by name. Quantifiers let us do this.
First-order logic contains two standard quantifiers, called universal and existential.
Universal quantification (∀)
For example, the rule "All kings are persons" is written in first-order logic as
∀ x King(x) ⇒ Person(x) .
∀ is usually pronounced “For all . . .”. (Remember that the upside-down A stands for “all.”)
Thus, the sentence says, “For all x, if x is a king, then x is a person.” The symbol x is called a variable. By convention,
variables are lowercase letters.
Existential quantification (∃)
Universal quantification makes statements about every object. Similarly, we can make a statement about some object in the
universe without naming it, by using an existential quantifier.
To say, for example, that King John has a crown on his head, we write
∃ x Crown(x) ∧ OnHead(x, John)
Example =>
3. Every person who buys a policy is smart: ∀ x ∀ y (Person(x) ∧ Policy(y) ∧ Buys(x,y)) ⇒ Smart(x)
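Quantified sentences can be checked over a small finite model by sketching ∀ as all() and ∃ as any() in Python, using the objects and relations named earlier; the model itself is invented for illustration.

```python
# A tiny finite model for the kings/persons and crown examples.
# Relations are represented as sets of objects (or tuples of objects).

objects = {"John", "Richard", "Crown"}
King = {"John"}
Person = {"John", "Richard"}
OnHead = {("Crown", "John")}  # (x, y) means "x is on y's head"

def implies(p, q):
    return (not p) or q

# Universal quantification: forall x, King(x) => Person(x)
all_kings_are_persons = all(implies(x in King, x in Person) for x in objects)

# Existential quantification: exists x, Crown(x) and OnHead(x, John)
crown_on_john = any(x in {"Crown"} and (x, "John") in OnHead for x in objects)
```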
Exercise =>
Properties of WFF
Equivalent Logical Expression
Resolution
Rules of Resolution and Convert to Clausal Form
(FOPL to CNF)
1. Convert English statements into FOPL
2. Convert FOPL to Conjunctive Normal Form (CNF)
3. Apply Negation to what’s to be proven (Proof by contradiction)
4. Draw resolution graph
When the resolution tree reaches the empty (null) clause, the negated goal ¬R has led to a contradiction, which means ¬R is false and hence R is true.
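A single propositional resolution step can be sketched on clauses represented as sets of literals; the "~" prefix for negation is a representation choice of this sketch.

```python
# Propositional resolution: from clauses {P, Q} and {~P} we derive {Q}.
# A clause is a frozenset of literal strings; "~X" is the negation of "X".

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses (an empty frozenset signals
    the contradiction that ends a proof by resolution)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return resolvents
```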
Unification of predicate logic
Unification is the process used to find substitutions that make different FOL expressions look identical.
1. The predicate symbol must be the same; atoms or expressions with different predicate symbols can never be unified.
2. The number of arguments in both expressions must be identical.
3. Unification will fail if the two expressions share the same variable, e.g. knows(John, x) and knows(x, Elizabeth) both use x.
To avoid this, change variable x to y in knows(x, Elizabeth) -> knows(y, Elizabeth); it will still mean the same, and this renaming is called standardizing (apart).
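The unification conditions above can be sketched as a small recursive unifier; the convention that lowercase strings are variables and capitalised strings are constants is an assumption of this sketch.

```python
# Unification of predicate expressions written as tuples, e.g.
# ("knows", "John", "x"). Lowercase strings are treated as variables.

def is_variable(t):
    return isinstance(t, str) and t[0].islower()

def unify_var(var, term, subst):
    if var in subst:
        return unify(subst[var], term, subst)
    return {**subst, var: term}

def unify(a, b, subst=None):
    """Return a substitution dict making a and b identical, or None on failure."""
    if subst is None:
        subst = {}
    if a == b:
        return subst
    if is_variable(a):
        return unify_var(a, b, subst)
    if is_variable(b):
        return unify_var(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple):
        # condition 1 and 2: same predicate symbol, same number of arguments
        if len(a) != len(b) or a[0] != b[0]:
            return None
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None
```

For example, unifying knows(John, x) with knows(y, Elizabeth) yields the substitution {y/John, x/Elizabeth}.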