UNIT II: AGENTS
KNOWLEDGE-BASED AGENTS
An intelligent agent needs knowledge about the real world in order to make decisions and reason, so that it can act efficiently.
Knowledge-based agents are agents that maintain an internal state of knowledge, reason over that knowledge, update it after each observation, and then take actions.
These agents represent the world in some formal representation and act intelligently on that basis.
Knowledge-based agents are composed of two main parts:
Knowledge-base and
Inference system.
The generalized architecture of a knowledge-based agent (KBA) is as follows: the agent takes input by perceiving the environment. The input is passed to the inference engine, which communicates with the KB to decide what to do based on the knowledge stored there. The learning element of the KBA regularly updates the KB by learning new knowledge.
Knowledge base: The knowledge base (KB) is the central component of a knowledge-based agent. It is a collection of sentences (here 'sentence' is a technical term and not identical to a sentence in English). These sentences are expressed in a language called a knowledge representation language. The knowledge base of a KBA stores facts about the world.
Why use a knowledge base?
▪ A knowledge base is required so that the agent can update its knowledge as it gains experience and act according to that knowledge.
Inference system
▪ Inference means deriving new sentences from old ones. The inference system allows us to add a new sentence to the knowledge base. A sentence is a proposition about the world. The inference system applies logical rules to the KB to deduce new information.
▪ The inference system generates new facts so that the agent can update the KB. An inference system works mainly in two modes:
• Forward chaining
• Backward chaining
Operations Performed by KBA
▪ The following three operations are performed by a KBA in order to exhibit intelligent behavior:
1. TELL: This operation tells the knowledge base what it perceives from the environment.
2. ASK: This operation asks the knowledge base what action it should perform.
3. Perform: It performs the selected action.
A generic knowledge-based agent:
Following is the structural outline of a generic knowledge-based agent program:
function KB-AGENT(percept) returns an action
    persistent: KB, a knowledge base
                t, a counter, initially 0, indicating time
    TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
    action = ASK(KB, MAKE-ACTION-QUERY(t))
    TELL(KB, MAKE-ACTION-SENTENCE(action, t))
    t = t + 1
    return action
The knowledge-based agent takes a percept as input and returns an action as output. The agent maintains the knowledge base, KB, which initially contains some background knowledge of the real world. It also has a counter t indicating the time for the whole process, initialized to zero.
▪ Each time the function is called, it performs three operations:
• First, it TELLs the KB what it perceives.
• Second, it ASKs the KB what action it should take.
• Third, the agent program TELLs the KB which action was chosen.
▪ MAKE-PERCEPT-SENTENCE generates a sentence asserting that the agent perceived the given percept at the given time.
▪ MAKE-ACTION-QUERY generates a sentence that asks which action should be done at the current time.
▪ MAKE-ACTION-SENTENCE generates a sentence asserting that the chosen action was executed.
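The same loop can be written as a small Python sketch. Everything below (the KBAgent class, the sentence formats, and the stubbed ask method) is illustrative only; a real agent would plug a proper inference procedure into ask.

class KBAgent:
    def __init__(self):
        self.kb = set()     # the knowledge base: a set of sentences
        self.t = 0          # time counter, initially 0

    def tell(self, sentence):
        self.kb.add(sentence)

    def ask(self, query):
        # Placeholder inference: a real agent would run forward/backward chaining
        # or resolution over self.kb to answer this query.
        return "Forward"

    def __call__(self, percept):
        self.tell(f"Percept({percept}, t={self.t})")    # TELL the KB what was perceived
        action = self.ask(f"BestAction(t={self.t})")    # ASK the KB which action to take
        self.tell(f"Action({action}, t={self.t})")      # TELL the KB which action was chosen
        self.t += 1
        return action

agent = KBAgent()
print(agent(["Stench", "Breeze", None, None, None]))    # -> Forward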
VARIOUS LEVELS OF KNOWLEDGE-BASED AGENT
A knowledge-based agent can be viewed at different levels which are given below:
1. Knowledge level
The knowledge level is the first level of a knowledge-based agent. At this level we specify what the agent knows and what its goals are; with these specifications we can fix its behavior. For example, suppose an automated taxi agent needs to go from station A to station B and it knows the way from A to B; this knowledge belongs to the knowledge level.
2. Logical level:
At this level, we consider how the knowledge is represented and stored: sentences are encoded into some logic. At the logical level, an encoding of knowledge into logical sentences occurs. At this level we can state, in logic, that the automated taxi agent will reach destination B.
3. Implementation level:
▪ This is the physical representation of the logic and knowledge. At the implementation level, the agent performs actions according to the logical and knowledge levels. Here the automated taxi agent actually executes its knowledge and logic in order to reach the destination.
Approaches to designing a knowledge-based agent:
▪ There are mainly two approaches to building a knowledge-based agent:
▪ 1. Declarative approach: We can create a knowledge-based agent by initializing it with an empty knowledge base and telling the agent all the sentences we want it to start with. This is called the declarative approach.
▪ 2. Procedural approach: In the procedural approach, we directly encode the desired behavior as program code; that is, we write a program that already encodes the desired behavior of the agent.
▪ However, in the real world, a successful agent is often built by combining both approaches, and declarative knowledge can often be compiled into more efficient procedural code.
THE WUMPUS WORLD AS AN EXAMPLE WORLD
▪ The Wumpus world is a cave of 16 rooms arranged in a 4×4 grid. Each room is connected to its neighbors through passageways (no rooms are connected diagonally). The knowledge-based agent starts from room [1, 1]. The cave contains some pits, a heap of gold, and a beast called the Wumpus.
▪ The Wumpus eats anyone who enters its room. The agent can shoot the Wumpus, but it has only a single arrow. Some rooms contain bottomless pits; if the agent falls into a pit, it is stuck there forever. The exciting thing about this cave is that one room contains a heap of gold. The agent's goal is to find the gold and climb out of the cave without falling into a pit or being eaten by the Wumpus. The agent gets a reward if it comes out with the gold, and a penalty if it is eaten by the Wumpus or falls into a pit.
▪ Note: Here Wumpus is static and cannot move.
▪ A typical diagram of the Wumpus world shows some rooms with pits, one room with the Wumpus, and the agent at square [1, 1] of the world.
1. The rooms adjacent to a pit are breezy, so if the agent is next to a pit it will perceive a breeze.
2. The rooms adjacent to the Wumpus's room are smelly, so the agent will perceive a stench there.
3. There will be glitter in a room if and only if the room contains the gold.
4. The Wumpus can be killed by the agent if the agent is facing it, and the dying Wumpus emits a horrible scream which can be heard anywhere in the cave.
Performance measure:
• +1000 reward points if the agent comes out of the cave with the gold.
• -1000 points penalty for being eaten by the Wumpus or falling into a pit.
• The game ends when the agent dies or comes out of the cave.
Environment:
• A 4×4 grid of rooms.
• The agent initially starts in square [1, 1], facing toward the right.
• The locations of the Wumpus and the gold are chosen randomly, excluding the first square [1, 1].
• Each square of the cave other than the first can be a pit with probability 0.2.
Actuators:
• Left turn,
• Right turn
• Move forward
• Grab
• Release
• Shoot
Sensors:
• The agent will perceive a stench if it is in a room adjacent to the Wumpus (not diagonally).
• The agent will perceive a breeze if it is in a room directly adjacent to a pit.
• The agent will perceive glitter in the room where the gold is present.
• The agent will perceive a bump if it walks into a wall.
• When the Wumpus is shot, it emits a horrible scream which can be perceived anywhere in the cave.
• These percepts can be represented as a five-element list, with a different indicator for each sensor.
• For example, if the agent perceives stench and breeze, but no glitter, no bump, and no scream, the percept is represented as:
[Stench, Breeze, None, None, None].
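As a small illustration, the five-element percept can be represented directly in Python, for example with a namedtuple; the field names here are an assumption made only for this sketch, not part of the original notes.

from collections import namedtuple

# A five-field percept record; the field names are illustrative.
Percept = namedtuple("Percept", ["stench", "breeze", "glitter", "bump", "scream"])

# "Stench and breeze, but no glitter, no bump, no scream":
p = Percept(stench="Stench", breeze="Breeze", glitter=None, bump=None, scream=None)
print(list(p))    # ['Stench', 'Breeze', None, None, None]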
The Wumpus world properties:
• Partially observable: The Wumpus world is partially observable because the agent can only perceive its immediate surroundings, such as the adjacent rooms.
• Deterministic: It is deterministic, because the outcome of every action is exactly determined by the current state of the world.
• Single agent: The environment is single-agent, as there is only one agent and the Wumpus is not considered an agent.
Exploring the Wumpus world:
▪ Now we will explore the Wumpus world and see how the agent finds its goal by applying logical reasoning.
Agent's first step:
▪ Initially, the agent is in the first room, square [1,1], and we already know that this room is safe for the agent. To represent that the room is safe, we mark it with the symbol OK. Symbol A is used to represent the agent, B for breeze, G for glitter or gold, V for a visited room, P for a pit, and W for the Wumpus.
▪ In room [1,1] the agent does not perceive any breeze or stench, which means the adjacent squares are also OK.
Agent's second step:
▪ Now the agent needs to move forward, so it will move either to [1,2] or to [2,1]. Suppose the agent moves to room [2,1]. There it perceives a breeze, which means a pit is nearby. The pit can be in [3,1] or [2,2], so we mark those squares with the symbol P? to indicate a possible pit.
▪ The agent will now stop and think, and will not make any harmful move. It goes back to room [1,1]. Rooms [1,1] and [2,1] have been visited by the agent, so we mark them with the symbol V.
Agent's third step:
▪ At the third step, the agent moves to room [1,2], which is OK. In room [1,2] the agent perceives a stench, which means there must be a Wumpus nearby. But the Wumpus cannot be in [1,1] (by the rules of the game) and not in [2,2] either (the agent did not detect any stench when it was at [2,1]). Therefore the agent infers that the Wumpus is in room [1,3]. In the current square there is no breeze, which means [2,2] has no pit and no Wumpus, so it is safe; we mark it OK and the agent moves on to [2,2].
BASIC FACTS ABOUT PROPOSITIONAL LOGIC
1. Idempotent rule:
P ˄ P ==> P
P ˅ P ==> P
2. Commutative rule:
P ˄ Q ==> Q ˄ P
P ˅ Q ==> Q ˅ P
3. Associative rule:
P ˄ (Q ˄ R) ==> (P ˄ Q) ˄ R
P ˅ (Q ˅ R) ==> (P ˅ Q) ˅ R
4. Distributive Rule:
P ˅ (Q ˄ R) ==> (P ˅ Q) ˄ (P ˅ R)
P ˄ (Q ˅ R) ==> (P ˄ Q) ˅ (P ˄ R)
5. De Morgan's Rule:
¬(P ˅ Q) ==> ¬P ˄ ¬Q
¬(P ˄ Q) ==> ¬P ˅ ¬Q
6. Implication elimination:
P → Q ==> ¬P ˅ Q
7. Bidirectional Implication elimination:
(P ⬄ Q) ==> (P → Q) ˄ (Q → P)
8. Contrapositive rule:
P → Q ==> ¬Q → ¬P
9. Double Negation rule:
¬(¬P) ==> P
10. Absorption Rule:
P ˅ (P ˄ Q) ==> P
P ˄ (P ˅ Q) ==> P
11. Fundamental identities:
P ˄ ¬P ==> F [contradiction]
P ˅ ¬P ==> T [tautology]
P ˅ T ==> T
P ˅ F ==> P
P ˅ ¬T ==> P
P ˄ F ==> F
P ˄ T ==> P
12. Modus Ponens:
If P is true and P → Q is true, then we can infer that Q is also true.
P
P → Q
__________
Hence, Q
13. Chain rule:
If P → Q and Q → R, then P → R.
14. OR introduction:
Given that P is true (or that Q is true), we can deduce P ˅ Q:
P ==> P ˅ Q
Q ==> P ˅ Q
▪ Example:
"I will get wet if it rains and I go out of the house."
Let R = "it rains", S = "I go out of the house", and W = "I will get wet". Then: (S ˄ R) → W
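Each of the equivalences above can be verified mechanically with a truth table. The small Python sketch below (not part of the original notes) checks two of them by enumerating all truth assignments:

from itertools import product

def equivalent(f, g):
    """Check a two-variable propositional equivalence by enumerating all truth values."""
    return all(f(p, q) == g(p, q) for p, q in product([True, False], repeat=2))

def implies(p, q):
    # Implication elimination: P -> Q is the same as (not P) or Q.
    return (not p) or q

# De Morgan: not (P or Q)  ==  (not P) and (not Q)
print(equivalent(lambda p, q: not (p or q), lambda p, q: (not p) and (not q)))      # True

# Contrapositive: (P -> Q)  ==  (not Q -> not P)
print(equivalent(lambda p, q: implies(p, q), lambda p, q: implies(not q, not p)))   # True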
REASONING PATTERNS IN PROPOSITIONAL LOGIC
▪ Reasoning is the mental process of deriving logical conclusions and making predictions from available knowledge, facts, and beliefs.
"Reasoning is a way to infer facts from existing data."
▪ It is the general process of thinking rationally in order to find valid conclusions.
▪ In artificial intelligence, reasoning is essential so that the machine can also think rationally, like a human brain, and perform like a human.
Types of Reasoning
▪ In artificial intelligence, reasoning can be divided into the following categories:
i. Deductive reasoning
ii. Inductive reasoning
iii. Abductive reasoning
iv. Common Sense Reasoning
v. Monotonic Reasoning
vi. Non-monotonic Reasoning
Note: Inductive and deductive reasoning are the forms of propositional logic.
1.Deductive reasoning:
▪ Deductive reasoning means deducing new information from logically related known information. It is a form of valid reasoning: the conclusion of the argument must be true whenever the premises are true.
▪ Deductive reasoning works over propositional logic in AI and requires various rules and facts. It is sometimes referred to as top-down reasoning, in contrast to inductive reasoning.
▪ In deductive reasoning, the truth of the premises guarantees the truth of the conclusion.
▪ Deductive reasoning usually starts from general premises and arrives at a specific conclusion, as in the example below.
▪ Example:
▪ Premise-1: All humans eat veggies.
▪ Premise-2: Suresh is a human.
▪ Conclusion: Suresh eats veggies.
2. Inductive Reasoning:
▪ Inductive reasoning is a form of reasoning that arrives at a conclusion from a limited set of facts by generalization. It starts from a series of specific facts or data and reaches a general statement or conclusion.
▪ Inductive reasoning is also known as cause-effect reasoning or bottom-up reasoning.
▪ In inductive reasoning, we use historical data or various premises to generate a generic rule for which the premises support the conclusion.
▪ Because the premises only provide probable support for the conclusion, the truth of the premises does not guarantee the truth of the conclusion.
Example:
▪ Premise: All of the pigeons we have seen in the zoo are white.
▪ Conclusion: Therefore, we can expect all the pigeons to be white.
3. Abductive reasoning:
Abductive reasoning is a form of logical reasoning which starts with one or more observations and then seeks the most likely explanation or conclusion for those observations.
Abductive reasoning is an extension of deductive reasoning, but in abductive reasoning the premises do not guarantee the conclusion.
Example:
Implication: The cricket ground is wet if it is raining.
Axiom: The cricket ground is wet.
Conclusion: It is raining.
4. Common Sense Reasoning
Common sense reasoning is an informal form of reasoning which is gained through experience.
It simulates the human ability to make presumptions about events which occur every day.
It relies on good judgment rather than exact logic, and operates on heuristic knowledge and heuristic rules.
Example:
1. One person can be at only one place at a time.
2. If I put my hand in a fire, it will burn.
The above two statements are examples of common sense reasoning which a human mind can easily understand and assume.
5. Monotonic Reasoning:
▪ In monotonic reasoning, once a conclusion is drawn, it remains the same even if we add more information to the knowledge base. In monotonic reasoning, adding knowledge does not decrease the set of propositions that can be derived.
▪ To solve monotonic problems, we can derive valid conclusions from the available facts alone, and they are not affected by new facts.
▪ Monotonic reasoning is not useful for real-time systems, because in real time facts change, so we cannot use monotonic reasoning there.
▪ Monotonic reasoning is used in conventional reasoning systems, and logic-based systems are monotonic.
▪ Any form of theorem proving is an example of monotonic reasoning.
Example:
• The Earth revolves around the Sun.
▪ This is a true fact, and it does not change even if we add other sentences to the knowledge base, such as "The Moon revolves around the Earth" or "The Earth is not round."
6. Non-monotonic Reasoning
▪ In non-monotonic reasoning, some conclusions may be invalidated when we add more information to the knowledge base.
▪ A logic is said to be non-monotonic if some conclusions can be invalidated by adding more knowledge to the knowledge base.
▪ Non-monotonic reasoning deals with incomplete and uncertain models.
▪ "Human perception of various things in daily life" is a general example of non-monotonic reasoning.
Example: Suppose the knowledge base contains the following knowledge:
Birds can fly.
Penguins cannot fly.
Pitty is a bird.
▪ From the above sentences, we can conclude that Pitty can fly.
▪ However, if we add the sentence "Pitty is a penguin" to the knowledge base, we conclude "Pitty cannot fly", which invalidates the above conclusion.
FIRST-ORDER LOGIC IN ARTIFICIAL INTELLIGENCE
▪ We have seen how to represent statements using propositional logic. Unfortunately, in propositional logic we can only represent facts that are either true or false. PL is not sufficient to represent complex sentences or natural-language statements; it has very limited expressive power. Consider the following sentences, which we cannot represent adequately in PL:
• "Some humans are intelligent", or
• "Sachin likes cricket."
▪ To represent such statements, PL is not sufficient, so we require a more powerful logic, such as first-order logic.
First-Order logic:
• First-order logic is another way of knowledge representation in artificial intelligence. It is an
extension to propositional logic.
• FOL is sufficiently expressive to represent the natural language statements in a concise way.
• First-order logic is also known as predicate logic or first-order predicate logic. It is a powerful language that describes information about objects more easily and can also express the relationships between those objects.
• First-order logic (like natural language) does not assume that the world contains only facts, as propositional logic does; it also assumes that the world contains:
• Objects: A, B, people, numbers, colors, wars, theories, squares, pits, the Wumpus, etc.
• Relations: unary relations such as red, round, is adjacent, or n-ary relations such as sister of, brother of, has color, comes between.
• Functions: father of, best friend, third inning of, end of, etc.
Atomic sentences:
• Atomic sentences are the most basic sentences of first-order logic. They are formed from a predicate symbol followed by a parenthesized sequence of terms.
• We can represent an atomic sentence as Predicate(term1, term2, ..., termN).
Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).
Chinky is a cat: => Cat(Chinky).
Complex Sentences:
• Complex sentences are made by combining atomic sentences using connectives.
▪ First-order logic statements can be divided into two parts:
• Subject: the main part of the statement.
• Predicate: a relation that binds two atoms together in a statement.
▪ Consider the statement "x is an integer." It consists of two parts: the first part, x, is the subject of the statement, and the second part, "is an integer", is the predicate.
▪ Inference in first-order logic is used to deduce new facts or sentences from existing sentences. Before looking at the FOL inference rules, let us understand some basic terminology used in FOL.
Substitution:
▪ Substitution is a fundamental operation performed on terms and formulas, and it occurs in all inference systems for first-order logic. Substitution becomes more complex in the presence of quantifiers. If we write F[a/x], it means substituting the constant "a" in place of the variable "x" in F.
▪ Note: First-order logic is capable of expressing facts about some or all objects in the universe.
Equality:
▪ First-order logic does not use only predicates and terms for making atomic sentences; it also provides equality. The equality symbol specifies that two terms refer to the same object.
▪ Example: Brother(John) = Smith.
▪ In the above example, the object referred to by Brother(John) is the same as the object referred to by Smith. The equality symbol can also be used with negation to state that two terms are not the same object.
▪ Example: ¬(x = y), which is equivalent to x ≠ y.
FOL INFERENCE RULES FOR QUANTIFIER:
▪ As in propositional logic, we also have inference rules in first-order logic. Following are some basic inference rules in FOL:
• Universal Generalization
• Universal Instantiation
• Existential Instantiation
• Existential introduction
1. Universal Generalization:
Universal generalization is a valid inference rule which states that if premise P(c) is true for any arbitrary element c in the universe of discourse, then we can conclude ∀x P(x).
It can be represented as: P(c), for an arbitrary element c ==> ∀x P(x).
This rule can be used when we want to show that every element has a similar property.
In this rule, c must be an arbitrary element about which nothing special has been assumed.
Example: Let P(c) be "A byte contains 8 bits"; since this holds for an arbitrary byte c, we conclude ∀x P(x): "All bytes contain 8 bits."
Universal Instantiation
▪ Universal instantiation, also called universal elimination or UI, is a valid inference rule. It can be applied multiple times to add new sentences.
▪ The new KB is logically equivalent to the previous KB.
▪ As per UI, we can infer any sentence obtained by substituting a ground term for the universally quantified variable.
▪ The UI rule states that we can infer any sentence P(c), obtained by substituting a ground term c (a constant within the domain of x) into ∀x P(x), for any object in the universe of discourse.
▪ It can be represented as: ∀x P(x) ==> P(c).
Example 1:
If "Every person likes ice-cream", i.e. ∀x P(x), then we can infer
"John likes ice-cream", i.e. P(John).
Example 2:
"All kings who are greedy are evil." Let our knowledge base contain this in FOL:
∀x King(x) ∧ Greedy(x) → Evil(x)
From this we can infer any of the following statements using Universal Instantiation:
King(John) ∧ Greedy(John) → Evil(John)
King(Richard) ∧ Greedy(Richard) → Evil(Richard)
King(Father(John)) ∧ Greedy(Father(John)) → Evil(Father(John))
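Universal Instantiation is essentially a substitution of a ground term for the quantified variable. The sketch below illustrates this with a toy nested-tuple representation of the greedy-kings sentence; the representation itself is an assumption made only for this illustration.

def substitute(expr, var, constant):
    """Replace every occurrence of variable `var` in a nested-tuple term with `constant`."""
    if isinstance(expr, tuple):
        return tuple(substitute(e, var, constant) for e in expr)
    return constant if expr == var else expr

# forall x: King(x) & Greedy(x) -> Evil(x), with the quantifier stripped:
rule_body = ("->", ("&", ("King", "x"), ("Greedy", "x")), ("Evil", "x"))

# Universal Instantiation with the ground term John:
print(substitute(rule_body, "x", "John"))
# ('->', ('&', ('King', 'John'), ('Greedy', 'John')), ('Evil', 'John'))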
Existential Instantiation:
Existential instantiation, also called existential elimination, is a valid inference rule in first-order logic.
It can be applied only once to replace an existential sentence.
The new KB is not logically equivalent to the old KB, but it is satisfiable whenever the old KB was satisfiable.
This rule states that one can infer P(c) from a formula of the form ∃x P(x), for a new constant symbol c.
The restriction is that the c used in the rule must be a new term that does not occur elsewhere in the knowledge base.
It can be represented as: ∃x P(x) ==> P(c), for a new constant symbol c.
▪ Example:
▪ From the sentence ∃x Crown(x) ∧ OnHead(x, John),
▪ we can infer Crown(K) ∧ OnHead(K, John), as long as K does not appear elsewhere in the knowledge base.
• The constant K used above is called a Skolem constant.
• Existential instantiation is a special case of the Skolemization process.
Existential introduction
Existential introduction, also known as existential generalization, is a valid inference rule in first-order logic.
This rule states that if there is some element c in the universe of discourse which has property P, then we can infer that there exists something in the universe which has property P.
It can be represented as: P(c) ==> ∃x P(x).
Example:
"Priyanka got good marks in English."
"Therefore, someone got good marks in English."
AGENTS BASED ON PL
▪ A Wumpus-world agent can be built using propositional logic: the agent's percepts and the rules of the world (for example, that a square is breezy exactly when a neighboring square contains a pit) are written as propositional sentences in the KB, and the agent uses inference to decide which squares are safe.
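For instance, the "no breeze in [1,1] means no pit in [1,2] or [2,1]" inference from the exploration section can be checked by model enumeration. The tiny sketch below is only an illustration of the idea, with hand-picked symbol names; it is not a full Wumpus agent.

from itertools import product

# Proposition symbols: "pit in [1,2]" and "pit in [2,1]" (names are illustrative).
symbols = ["P12", "P21"]

def kb_holds(model, breeze11):
    # Rule: B11 <=> (P12 v P21); the percept fixes the truth value of B11.
    return breeze11 == (model["P12"] or model["P21"])

def entails(query, breeze11):
    """KB |= query iff the query holds in every model in which the KB holds."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb_holds(model, breeze11) and not query(model):
            return False
    return True

# Percept at [1,1]: no breeze. The agent can conclude that neither neighbour has a pit.
print(entails(lambda m: not m["P12"], breeze11=False))   # True: [1,2] is safe
print(entails(lambda m: not m["P21"], breeze11=False))   # True: [2,1] is safe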
PROPOSITIONAL VERSUS FIRST-ORDER INFERENCE
▪ A proposition is a declarative statement which is either true or false. Propositional logic is a technique for representing knowledge in logical and mathematical form. There are two types of propositions: atomic and compound propositions.
Facts about Propositional Logic:
• Since propositional logic works on 0 and 1, it is also known as 'Boolean logic'.
• A proposition can be either true or false, but never both.
• Symbolic variables are used to represent the logic, and a variable can stand for any proposition.
• It is composed of proposition symbols and logical connectives.
• A propositional formula which is always false is called a 'contradiction', whereas a formula which is always true is called a 'tautology'.
First-Order Logic (FOL)
▪ First-order logic is another knowledge representation in AI which extends PL. FOL expresses natural-language statements concisely. Another name for first-order logic is 'predicate logic'.
Facts about First-Order Logic
• FOL is a powerful language used to describe information about objects in a straightforward way.
• Unlike PL, FOL assumes that the world contains objects, relations, and functions.
• FOL has two main parts: 'syntax' and 'semantics'.
KEY DIFFERENCES BETWEEN PL AND FOL
• Propositional logic converts a complete sentence into a single symbol and reasons over it, whereas first-order logic describes a sentence in terms of relations, functions, constants, and variables.
• The limitation of PL is that it cannot represent individual entities, whereas FOL can easily represent individual objects, so a sentence about a particular individual can be represented directly in FOL.
• PL cannot express generalization, specialization, or patterns; for example, quantifiers cannot be used in PL, but in FOL quantifiers are available and can express generalization, specialization, and patterns.
KNOWLEDGE ENGINEERING IN FIRST-ORDER LOGIC
First-order logic (FOL), also known as predicate logic, is a powerful formalism used for knowledge
representation in artificial intelligence and computer science. It extends propositional logic by allowing
the use of quantifiers and predicates, enabling the representation of complex statements about objects and
their relationships. Here are the key components and concepts of knowledge representation in first-order
logic:
1. Constants
2. Variables
3. Predicates
4. Functions
5. Quantifiers
6. Logical Connectives
7. Equality
FORWARD AND BACKWARD CHAINING
▪ The inference engine is the component of an intelligent system in artificial intelligence which applies logical rules to the knowledge base to infer new information from known facts. The first inference engines were components of expert systems.
Inference engine commonly proceeds in two modes, which are:
1. Forward chaining
2. Backward chaining
Horn Clause and Definite Clause:
▪ Horn clauses and definite clauses are forms of sentences which enable the knowledge base to use a more restricted and efficient inference algorithm. Logical inference algorithms use forward and backward chaining approaches, which require the KB to be in the form of first-order definite clauses.
▪ Definite clause: A clause which is a disjunction of literals with exactly one positive literal is known as a definite clause or strict Horn clause.
▪ Horn clause: A clause which is a disjunction of literals with at most one positive literal is known as a Horn clause. Hence all definite clauses are Horn clauses.
▪ Example: (¬p ∨ ¬q ∨ k) has only one positive literal, k.
▪ It is equivalent to p ∧ q → k.
Forward Chaining
▪ Forward chaining is also known as forward deduction or forward reasoning when using an inference engine. It is a form of reasoning which starts with the atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to extract more data until a goal is reached.
▪ The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.
Properties of Forward Chaining:
• It is a bottom-up approach, as it moves from the facts up to the goal.
• It is a process of reaching a conclusion from known facts or data, starting from the initial state and moving to the goal state.
• The forward-chaining approach is also called data-driven, because we reach the goal using the available data.
• Forward chaining is commonly used in expert systems, such as CLIPS, and in business and production rule systems.
Example:
"As per the law, it is a crime for an American to sell weapons to hostile nations. Country A, an enemy of America, has some missiles, and all the missiles were sold to it by Robert, who is an American citizen."
Prove that "Robert is a criminal."
▪ To solve this problem, we first convert all the above facts into first-order definite clauses, and then use the forward-chaining algorithm to reach the goal.
Facts conversion into FOL:
• It is a crime for an American to sell weapons to hostile nations. (Let p, q, and r be variables.)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
• Country A has some missiles: ∃p Owns(A, p) ∧ Missile(p). This can be written as two definite clauses using Existential Instantiation, introducing the new constant T1:
Owns(A, T1) ......(2)
Missile(T1) .......(3)
• All of the missiles were sold to country A by Robert:
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
• Missiles are weapons:
Missile(p) → Weapon(p) .......(5)
• An enemy of America is known as hostile:
Enemy(p, America) → Hostile(p) ........(6)
• Country A is an enemy of America:
Enemy(A, America) .........(7)
• Robert is American:
American(Robert) ..........(8)
Forward chaining proof:
Step 1: In the first step we start with the known facts and choose the sentences which have no implications, namely: American(Robert), Enemy(A, America), Owns(A, T1), and Missile(T1). These facts form the first level.
Step 2: In the second step, we add the facts which can be inferred from the available facts whose premises are satisfied.
▪ Rule (1) does not yet have its premises satisfied, so nothing is added from it in this iteration.
▪ Rules (2) and (3) are already added as facts.
▪ Rule (4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added; it is inferred from the conjunction of (2) and (3).
▪ Rule (6) is satisfied with the substitution {p/A}, so Hostile(A) is added; it is inferred from (7).
Step 3: At step 3, Rule (1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which is inferred from all the available facts. Hence we have reached the goal statement.
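The same derivation can be reproduced with a few lines of Python. The sketch below works on ground (already instantiated) versions of rules (1), (4), (5), and (6), so it sidesteps unification; the string encoding of the facts is illustrative only, and the point is the fixed-point "fire every satisfied rule" loop.

rules = [
    ({"Missile(T1)"}, "Weapon(T1)"),                                   # from rule (5)
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),             # from rule (4)
    ({"Enemy(A,America)"}, "Hostile(A)"),                              # from rule (6)
    ({"American(Robert)", "Weapon(T1)", "Sells(Robert,T1,A)", "Hostile(A)"},
     "Criminal(Robert)"),                                              # from rule (1)
]
facts = {"American(Robert)", "Enemy(A,America)", "Owns(A,T1)", "Missile(T1)"}

changed = True
while changed:                       # keep firing rules until nothing new is added
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("Criminal(Robert)" in facts)   # True: the goal has been derived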
Backward chaining proof:
In backward chaining, we start with the goal predicate Criminal(Robert) and work backward through the rules to find supporting facts.
Step 1: We take the goal fact Criminal(Robert) as the root and check which rules can establish it.
Step 2: In the second step, we infer the other facts from the goal fact which satisfy the rules. In Rule (1), the goal predicate Criminal(Robert) is present with the substitution {p/Robert}, so we add all the conjunctive premises below the goal and replace p with Robert. Here American(Robert) is already a known fact, so it is proved.
Step 3: At step 3, we extract the further fact Missile(q), which is inferred from Weapon(q), since it satisfies Rule (5). Weapon(q) is also true with the substitution of the constant T1 for q.
Step 4: At step 4, we can infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies Rule (4) with the substitution of A in place of r. These two statements are proved here.
Step 5: At step 5, we can infer the fact Enemy(A, America) from Hostile(A), which satisfies Rule (6). Hence all the statements are proved true using backward chaining.
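The goal-directed version can be sketched just as briefly. The rules and facts are the same ground encoding as in the forward-chaining sketch, and prove is a plain recursive search, so this is an illustration rather than a full backward-chaining engine with unification.

RULES = [
    ({"Missile(T1)"}, "Weapon(T1)"),
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),
    ({"Enemy(A,America)"}, "Hostile(A)"),
    ({"American(Robert)", "Weapon(T1)", "Sells(Robert,T1,A)", "Hostile(A)"},
     "Criminal(Robert)"),
]
FACTS = {"American(Robert)", "Enemy(A,America)", "Owns(A,T1)", "Missile(T1)"}

def prove(goal, seen=frozenset()):
    # To prove a goal: it is either a known fact, or some rule concludes it and
    # all of that rule's premises can themselves be proved.
    if goal in FACTS:
        return True
    if goal in seen:                  # guard against circular rule chains
        return False
    return any(conclusion == goal and all(prove(p, seen | {goal}) for p in premises)
               for premises, conclusion in RULES)

print(prove("Criminal(Robert)"))      # True: the goal is established working backward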
Forward Chaining vs. Backward Chaining:
1. Forward chaining starts from known facts and applies inference rules to extract more data until it reaches the goal. Backward chaining starts from the goal and works backward through inference rules to find the facts that support the goal.
2. Forward chaining applies a breadth-first search strategy. Backward chaining applies a depth-first search strategy.
3. Forward chaining tests all the available rules. Backward chaining tests only the few rules that are required.
4. Forward chaining is suitable for planning, monitoring, control, and interpretation applications. Backward chaining is suitable for diagnostic, prescription, and debugging applications.
5. Forward chaining can generate an infinite number of possible conclusions. Backward chaining generates a finite number of possible conclusions.
6. Forward chaining is aimed at any conclusion. Backward chaining is aimed only at the required data.
RESOLUTION
▪ Resolution is used when several statements are given and we need to prove a conclusion from those statements. Unification is a key concept in proofs by resolution. Resolution is a single inference rule which can efficiently operate on sentences in conjunctive normal form or clausal form.
▪ Clause: A disjunction of literals is called a clause. A clause containing a single literal is known as a unit clause.
▪ Conjunctive Normal Form: A sentence represented as a conjunction of clauses is said to be in conjunctive normal form or CNF.
Steps for Resolution:
1. Convert the facts into first-order logic.
2. Convert the FOL statements into CNF.
3. Negate the statement to be proved (proof by contradiction).
4. Draw the resolution graph (using unification).
Resolution Algorithm
It is used as an inference mechanism.
Pre-processing steps:
1. Convert the given English sentences into predicate sentences.
2. Not all of these sentences will be in clausal form (CNF); convert any sentence that is not in clausal form into clausal form.
3. Give these sentences (clauses) as input to the resolution algorithm.
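The core resolution-refutation loop is easiest to see in the propositional case. The sketch below (clause encoding and names are my own, purely illustrative) proves a query by adding its negation to the KB and resolving until the empty clause appears.

from itertools import combinations

# A clause is a frozenset of literal strings; "~P" denotes the negation of "P".
def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 and c2 on one complementary pair."""
    return [(c1 - {lit}) | (c2 - {negate(lit)}) for lit in c1 if negate(lit) in c2]

def entails(kb_clauses, query_literal):
    """Proof by contradiction: add the negated query and search for the empty clause."""
    clauses = set(kb_clauses) | {frozenset({negate(query_literal)})}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:
                    return True          # empty clause derived: KB entails the query
                new.add(frozenset(r))
        if new <= clauses:
            return False                 # nothing new can be derived
        clauses |= new

# KB: P and (P -> Q), i.e. the clauses {P} and {~P, Q}; query: Q.
kb = {frozenset({"P"}), frozenset({"~P", "Q"})}
print(entails(kb, "Q"))                  # True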
TRUTH MAINTENANCE SYSTEMS (TMS)
▪ When assumptions change, a problem solver may end up with contradictory conclusions such as die(fido) and ¬die(fido); the earlier conclusion must then be withdrawn.
▪ A TMS is a mechanism for processing large collections of logical relations on propositional variables.
2. GENERATION OF EXPLANATIONS
▪ Solving problems is what problem solvers (PS) do.
▪ However, solutions alone are often not enough: the PS is also expected to provide an explanation.
▪ A TMS uses cached inferences for that purpose.
▪ A TMS is efficient: generating a cached inference once is cheaper than re-running the inference rules that produced it every time it is needed.
Example:
Q: Shall I have an AI experience after completing the CIT program?
A: Yes, because of the TMS course.
▪ There are different types of TMS that provide different ways of explaining conclusions (JTMS vs. ATMS).
▪ In this example, explaining conclusions in terms of their immediate predecessors works much better.
3. FINDING SOLUTIONS TO SEARCH PROBLEMS
Consider a graph with nodes A, B, C, D, E, where A is connected to B and C, B to D, C to E, and D to E. Each node must be assigned one of three values (1, 2, 3), and connected nodes must not share the same value. The constraints can be written as propositional clauses:
A1 or A2 or A3    not (A1 and B1)    not (A3 and C3)    not (D2 and E2)
B1 or B2 or B3    not (A2 and B2)    not (B1 and D1)    not (D3 and E3)
C1 or C2 or C3    not (A3 and B3)    not (B2 and D2)    not (C1 and E1)
D1 or D2 or D3    not (A1 and C1)    not (B3 and D3)    not (C2 and E2)
E1 or E2 or E3    not (A2 and C2)    not (D1 and E1)    not (C3 and E3)
To find a solution we can use search.
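A brute-force search over these constraints fits in a few lines of Python. The sketch below simply enumerates every assignment of the values 1-3 to the five nodes and keeps the consistent ones; a TMS-assisted search would additionally record why partial assignments fail so the same work is not repeated.

from itertools import product

# The edge list is read off the not(Xi and Yi) clauses above.
nodes = ["A", "B", "C", "D", "E"]
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "E"), ("D", "E")]

solutions = []
for values in product([1, 2, 3], repeat=len(nodes)):
    assignment = dict(zip(nodes, values))
    if all(assignment[x] != assignment[y] for x, y in edges):   # connected nodes differ
        solutions.append(assignment)

print(len(solutions), solutions[0])   # number of consistent assignments and one example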
JUSTIFICATION-BASED TMS – EXAMPLE
Justifications are written with the syntax {(inlist), (outlist)}: a node is believed (IN) when, for some justification, every node in its inlist is IN and every node in its outlist is OUT.
Propositions and justifications:
A: Temperature >= 25   {(),(B)}
B: Temperature < 25
C: Not raining         {(),(D)}
D: Raining
E: Day                 {(),(F)}
F: Night
G: Nice weather        {(A,C),()}
H: Swim                {(E,G),()}
I: Contradiction       {(C),()}
▪ When the contradiction I is signalled, the dependency-directed backtracking system finds D in the outlist of C, so D is justified by the contradiction of C. To resolve the contradiction, the JTMS adds the premise X to the inlist of D, making D an assumed node:
X: Handle              {(),()}    // premise
D: Raining             {(X),()}
J: Read                {(D,E),()}
K: Contradiction       {(J),()}   // becomes tired
▪ The PS derives Read (J) as justified by D and E. After a while it becomes tired and stops reading, so the PS signals the contradiction K for J. Backtracking identifies the assumption E as faulty, because E is an antecedent of J while D is supported by the premise X in its inlist, which is always true.
▪ Node F is then brought in, since the backtracking works from the outlist of the contradicted node E, and F becomes justified:
F: Night               {(X),()}
Context: {(A, D, F), (B, C, E, G, H, I, J, K)}
▪ Finally, the PS provides L as a consequence of F:
L: Sleep               {(F),()}
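To make the in/out bookkeeping concrete, here is a very small Python sketch of the initial labeling of this example (before the contradictions are introduced). It covers only the simple labeling step; the dependency-directed backtracking shown above is not implemented, and a real JTMS labels nodes incrementally rather than by a naive fixpoint.

justifications = {
    "A": [((), ("B",))],        # Temperature >= 25, believed unless B is in
    "C": [((), ("D",))],        # Not raining, believed unless D is in
    "E": [((), ("F",))],        # Day, believed unless F is in
    "G": [(("A", "C"), ())],    # Nice weather if A and C are in
    "H": [(("E", "G"), ())],    # Swim if E and G are in
}
nodes = {"A", "B", "C", "D", "E", "F", "G", "H"}

in_nodes = set()
changed = True
while changed:                  # naive fixpoint over the justifications
    changed = False
    for node in nodes:
        for inlist, outlist in justifications.get(node, []):
            in_ok = set(inlist) <= in_nodes
            out_ok = not (set(outlist) & in_nodes)
            if in_ok and out_ok and node not in in_nodes:
                in_nodes.add(node)
                changed = True

print(sorted(in_nodes))         # ['A', 'C', 'E', 'G', 'H'] -> the agent can swim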