Chapter 7 - Logical Agents - Ahmed Guessoum

Chapter 7 of the course on AI focuses on logical agents, covering knowledge-based agents, propositional logic, and the Wumpus World as a case study. It discusses the structure and function of knowledge bases, inference procedures, and the importance of logical reasoning in AI. The chapter also outlines the syntax and semantics of propositional logic, including various logical connectives and their applications in reasoning about agent actions and environments.

Course: Introduction to AI

Prof. Ahmed Guessoum


The National Higher School of AI

Chapter 7

Logical Agents
Outline
• Knowledge-Based Agents
• The Wumpus World
• Logic
• Propositional Logic: A Very Simple Logic
 Syntax
 Semantics
 A Simple Knowledge Base
 A Simple Inference Procedure
• Propositional Theorem Proving
 Inference and Proofs
 Proof by Resolution
 Horn Clauses and Definite Clauses
 Forward and Backward Chaining
Outline
• Effective Propositional Model Checking
 A Complete Backtracking Algorithm
 Local Search Algorithms
 The Landscape of Random Sat Problems
• Agents Based on Propositional Logic
 The Current State of the World
 A Hybrid Agent
 Logical State Estimation
 Making Plans by Propositional Inference
On Knowledge Representation Power
• Humans have an internal representation of knowledge
and they reason with this knowledge.
• In AI, this approach to intelligence is embodied in
knowledge-based agents.
• The agents seen in Chapters 3 and 4 can move from one
state to another using domain-specific functions, but they
cannot reason more generally about the states. E.g., in the
8-puzzle, they cannot express that:
 two tiles cannot occupy the same space
• Also, an agent’s only choice for representing what it knows
about the current state is to list all possible concrete
states—hopeless in large environments

On Knowledge Representation Power
• CSPs represent states as assignments of values to variables
 some parts of the agent work in a domain-independent
way, allowing for more efficient algorithms.
• In this chapter (and following ones) logic is presented as a
general class of representations to support knowledge-based
agents.
• Logic combines knowledge representation with a general
mechanism for reasoning (inference).
• Knowledge-based agents can:
 accept new tasks in the form of explicitly described goals;
 achieve competence quickly by being told or learning new
knowledge about the environment; and
 adapt to changes in the environment by updating the
relevant knowledge.
Knowledge-Based Agents
• The central component of a knowledge-based agent is
its knowledge base (KB).
• A knowledge base is a set of sentences expressed in a
language called a knowledge representation
language.
• Axiom: a sentence taken as given without being derived
from other sentences.
• TELL: a way to add new sentences to the KB and
• ASK: a way to query the KB.
• Both operations may involve deriving new sentences
from old ones.
• The agent maintains a knowledge base, KB, which may
initially contain some background knowledge.
A Generic Knowledge-Based Agent
• When agent program called:
 TELLs the KB what it
perceives.
 ASKs the KB what action it
should perform (after
reasoning about the current
state of the world, about the
outcomes of possible action
sequences, etc.).
 TELLs the KB which action
was chosen, and the agent
executes it.
• The details of the
representation language are
hidden in the 3 functions.
• The details of inference
mechanisms hidden in TELL
and ASK.
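The TELL–ASK loop above can be sketched as follows. This is a minimal sketch, not the textbook's pseudocode: EchoKB and the make_… helpers are toy stand-ins for a real knowledge base and real sentence constructors.

```python
class EchoKB:
    """A toy KB that records sentences and always answers 'Forward'."""
    def __init__(self):
        self.sentences = []
    def tell(self, sentence):
        self.sentences.append(sentence)
    def ask(self, query):
        return 'Forward'

class KBAgent:
    def __init__(self, kb):
        self.kb, self.t = kb, 0
    def make_percept_sentence(self, percept, t):   # placeholder constructor
        return ('Percept', tuple(percept), t)
    def make_action_query(self, t):                # placeholder constructor
        return ('ActionQuery', t)
    def make_action_sentence(self, action, t):     # placeholder constructor
        return ('Did', action, t)
    def __call__(self, percept):
        # TELL the KB what the agent perceives
        self.kb.tell(self.make_percept_sentence(percept, self.t))
        # ASK the KB what action to perform
        action = self.kb.ask(self.make_action_query(self.t))
        # TELL the KB which action was chosen
        self.kb.tell(self.make_action_sentence(action, self.t))
        self.t += 1
        return action

agent = KBAgent(EchoKB())
print(agent(['None', 'None', 'None', 'None', 'None']))   # 'Forward'
```

The representation language is hidden inside the three make_… functions, and the inference machinery inside tell/ask, exactly as the slide describes.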
Declarative vs Procedural Knowledge
• We will mostly be concerned with stating the knowledge
that the agent needs to reason about its environment.
• The agent will add more and more such knowledge until
it knows how to operate in its environment.
• All this is done at the knowledge level; we will refer to
this as a declarative approach to system building.
• The implementation level is not our primary concern:
e.g.
 Is the geographical knowledge implemented as linked lists or
pixel maps?
 Does the agent reason using strings and symbols stored in
registers or by propagating noisy signals in a network of neurons?
• The procedural approach encodes desired behaviours
directly as program code.
The Wumpus World
• The wumpus world is a cave consisting of rooms
connected by passageways.
• Somewhere in the cave is the terrible wumpus, a
beast that eats anyone who enters its room.
• The wumpus can be shot by an agent, but the
agent has only one arrow.
• Some rooms contain bottomless pits that will trap
anyone who wanders into these rooms (except for
the wumpus, which is too big to fall in).
• There is a possibility of finding a heap of gold.
• Although the wumpus world is rather tame by
modern computer game standards, it illustrates
some important points about intelligence.
PEAS description of Wumpus World
• Performance measure:
 +1000 for climbing out of the cave with the gold,
 –1000 for falling into a pit or being eaten by the wumpus,
 –1 for each action taken and
 –10 for using up the arrow.
 The game ends either when the agent dies or when the
agent climbs out of the cave.
• Environment: A 4×4 grid of rooms.
 The agent always starts in the square labeled [1,1], facing to
the right.
 The locations of the gold and the wumpus are chosen
randomly, with a uniform distribution, from the squares other
than the start square.
 Each square other than the start can be a pit, with
probability 0.2.
The Wumpus World
PEAS description of Wumpus World
• Actuators:
 Agent can move Forward, TurnLeft by 90◦, or TurnRight by 90◦.
 Agent dies if it enters a square containing a pit or a live
wumpus. (It is safe, although smelly, to enter a square with a
dead wumpus.)
 If agent tries to move forward and bumps into a wall, then the
agent does not move.
 The action Grab can be used to pick up the gold if it is in the
same square as the agent.
 The action Shoot can be used to fire an arrow in a straight line
in the direction the agent is facing. The arrow continues until it
either hits (and hence kills) the wumpus or hits a wall.
 The agent has only one arrow, so only the first Shoot action has
any effect.
 Finally, the action Climb can be used to climb out of the cave,
but only from square [1,1].
PEAS description of Wumpus World
• Sensors: The agent has five sensors, each of which gives a
single bit of information:
 In the square containing the wumpus and in the directly (not
diagonally) adjacent squares, the agent will perceive a Stench.
 In the squares directly adjacent to a pit, the agent will perceive
a Breeze.
 In the square where the gold is, the agent will perceive a
Glitter.
 When an agent walks into a wall, it will perceive a Bump.
 When the wumpus is killed, it emits a horrible Scream that can
be perceived anywhere in the cave.
• The percepts will be given to the agent program in the form
of a list of five symbols;
E.g., if there is a stench and a breeze, but no glitter, bump, or
scream, the agent will get [Stench, Breeze, None, None, None].
The Wumpus World: reasoning
• Main challenge for agent in the environment:
its initial ignorance of the configuration of the environment
 logical reasoning is required to overcome it.
• Informal knowledge representation language used: writing
down symbols in a grid.
• The agent’s initial KB contains the rules of the environment,
as described previously;
 In particular, it knows that it is in [1,1] and that [1,1] is a
safe square; we denote that with an “A” and “OK,”
respectively, in square [1,1].
 The first percept is [None, None, None, None, None], so
the agent can conclude that its neighbouring squares,
[1,2] and [2,1], are free of dangers—they are OK.
• The reasoning continues…
The Wumpus World: Representation & Reasoning
Logic
• Recall that KBs consist of sentences.
• Sentences are expressed according to the syntax of the
representation language, which specifies all the sentences
that are well formed.
 Example: for the language of arithmetic “x + y = 4” is a
well-formed sentence, whereas “x4y+ =” is not.
• A logic must also define the semantics or meaning of
sentences.
• The semantics defines the truth of each sentence with
respect to each possible world.
 E.g. the semantics for arithmetic specifies that the
sentence “x + y = 4” is true in a world where x is 2 and
y is 2, but false in a world where x is 1 and y is 5.
• In standard logics, every sentence must be either true or
false in each possible world. (Fuzzy logic is different.)
Logic
• To be formal (and avoid a looser usage of the word “model” in
your textbook) we will adopt the following definitions and
reflect them in the sequel:
 A “possible world” (or interpretation) is an assignment
that fixes the truth of any sentence as true or false.
 A model is an interpretation that makes a sentence true.
• E.g. for sentence “x+y=4”, the possible models are all possible
assignments of real numbers to x and y that make the sentence
true.
• If a sentence 𝛼 is true in model m, we say that m satisfies 𝛼 or
sometimes m is a model of 𝛼.
• The notation M(𝛼) is used to mean the set of all models of 𝛼.
• Logical entailment between sentences: 𝛼 ⊨ 𝛽 states that
sentence 𝛼 entails sentence 𝛽 (i.e. 𝛽 logically follows from 𝛼).
Logical Entailment
• Formal definition of entailment:
𝛼 ⊨ 𝛽 if and only if, in every model of 𝛼, 𝛽 is also true.
• This can be written as: 𝛼 ⊨ 𝛽 if and only if M(𝛼) ⊆ M(𝛽) .
(Note the direction of the ⊆ here: if 𝛼 ⊨ 𝛽 , then 𝛼 is a
stronger assertion than 𝛽: it rules out more possible worlds.)
 E.g. (x = 0) ⊨ (xy = 0).
• Wumpus World:
 The agent has detected nothing in [1,1] and a breeze in
[2,1].
 At this point, KB = union of these percepts and the agent’s
knowledge of the rules of the wumpus world.
 Do the adjacent squares [1,2], [2,2], and [3,1] contain pits?
 There are 2^3 = 8 possible models for these squares.
Logical Entailment
• The KB is false in interpretations that contradict Agent’s
knowledge
 E.g., the KB is false in any interpretation in which [1,2]
contains a pit, because there is no breeze in [1,1].
 There are in fact just three models of KB (solid line in figure
below).
• Consider two possible conclusions:
𝛼1 = “There is no pit in [1,2]”
𝛼2 = “There is no pit in [2,2]”
• By inspection, we see that for every model of KB, 𝛼1 is also true.
Hence, KB ⊨ 𝛼1 : there is no pit in [1,2].
• We can also see that in some models of KB, 𝛼2 is false.
Hence, KB ⊭ 𝛼2 : the agent cannot conclude that there is no pit
in [2,2]. (Nor can it conclude that there is a pit in [2,2].)
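This entailment check can be reproduced by brute force over the eight models. A small sketch: the symbol names p12, p22, p31 and the kb predicate are this example's own encoding of the percepts and rules, not notation from the slides.

```python
from itertools import product

# The only unknowns are whether [1,2], [2,2] and [3,1] contain pits.
# The percepts (no breeze in [1,1], breeze in [2,1]) plus the breeze
# rule constrain these three symbols.
def kb(p12, p22, p31):
    no_breeze_11 = not p12     # ¬B1,1 forces no pit adjacent to [1,1]
    breeze_21 = p22 or p31     # B2,1 requires a pit adjacent to [2,1]
    return no_breeze_11 and breeze_21

# enumerate all 2^3 = 8 interpretations, keep the models of KB
models = [m for m in product([False, True], repeat=3) if kb(*m)]
print(len(models))                                  # 3 models of KB
print(all(not p12 for (p12, p22, p31) in models))   # True: KB |= alpha1
print(all(not p22 for (p12, p22, p31) in models))   # False: KB does not entail alpha2
```

Exactly as on the slide: three models survive, ¬P1,2 holds in all of them, and P2,2 varies, so nothing can be concluded about [2,2].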
Wumpus World & Possible Worlds
Logical Entailment and Inference
• The preceding example
 illustrates entailment, and
 shows how the definition of entailment can be applied to
derive conclusions—i.e. to carry out logical inference.
• Model checking: enumerating all possible interpretations to
check that 𝛼 is true in all models of 𝐾𝐵, i.e. that M(𝐾𝐵) ⊆ M(𝛼).
• If an inference algorithm i can derive 𝛼 from KB, we write
KB ⊢𝑖 𝛼 . (We say “𝛼 is derived from KB by i ”)
• An inference algorithm that derives only entailed sentences is
called sound (or truth-preserving).
i.e. i is sound if whenever 𝐾𝐵 ⊢𝑖 𝛼 it is also true that 𝐾𝐵 ⊨ 𝛼
• An inference algorithm is complete if it can derive any sentence
that is entailed.
i.e. i is complete if whenever 𝐾𝐵 ⊨ 𝛼 it is also true that 𝐾𝐵 ⊢𝑖 𝛼
Propositional Logic: Syntax
• Syntax defines the allowable sentences in any language.
• An atomic sentence consists of a single proposition
symbol. Each symbol stands for a proposition that can be
true or false.
• Proposition symbols start with an uppercase letter.
E.g., P, Q, R, W1,3 and North.
• Proposition symbols with fixed meanings: True and False.
• Complex sentences are constructed from simpler
sentences, using parentheses and logical connectives.
• There are five connectives in common use:
 ¬ (not). ¬ W1,3 is called the negation of W1,3
 A literal is either an atomic sentence (a positive
literal) or a negated atomic sentence (a negative
literal).
Propositional Logic: Syntax
 ∧ (and): A sentence whose main connective is ∧ is
called a conjunction; its parts are the conjuncts.
 ∨ (or): A sentence using ∨ is a disjunction; its parts
are the disjuncts
• ⇒ (implies): A sentence such as (W1,3∧ P3,1) ⇒ ¬W2,2 is
called an implication (or conditional). Its premise or
antecedent is (W1,3 ∧ P3,1), and its conclusion or
consequent is ¬W2,2.
 Implications are also known as rules or if–then
statements.
 The implication symbol is sometimes written in
other books as ⊃ or →.
 ⇔ (if and only if): A sentence connected by a ⇔
symbol is said to be a biconditional.
BNF of Propositional Logic

Logical operators have precedence from ¬ (highest),
then ∧, then ∨, then ⇒, and finally ⇔ (lowest).
Propositional Logic: Semantics
• The semantics defines the rules for determining the truth of a
sentence with respect to a particular model.
• In propositional logic, an interpretation simply fixes the truth
value (true or false) for every proposition symbol.
 If the sentences in the KB make use of the proposition
symbols P1,2, P2,2, and P3,1, then one possible interpretation
for KB is m1 = {P1,2=false, P2,2=false, P3,1=true}.
• The semantics for propositional logic specifies how to
compute the truth value of any sentence, given an
interpretation. This is done recursively, starting with the
atomic sentences, using the truth tables of the connectives.
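This recursive computation can be sketched as follows. Encoding sentences as nested tuples is this sketch's own convention, not notation from the slides.

```python
# Sentences are nested tuples such as ('and', 'P', ('not', 'Q'));
# an interpretation maps each proposition symbol to True/False.
def pl_true(sentence, model):
    """Recursively compute the truth value of a sentence in a model."""
    if isinstance(sentence, str):          # atomic proposition symbol
        return model[sentence]
    op, *args = sentence
    if op == 'not':
        return not pl_true(args[0], model)
    if op == 'and':
        return all(pl_true(a, model) for a in args)
    if op == 'or':
        return any(pl_true(a, model) for a in args)
    if op == 'implies':
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == 'iff':
        return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError(f'unknown connective {op}')

# R2: B1,1 <=> (P1,2 v P2,1), evaluated in the interpretation m1
m1 = {'B11': False, 'P12': False, 'P21': False}
print(pl_true(('iff', 'B11', ('or', 'P12', 'P21')), m1))   # True
```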
A simple knowledge base
A KB for the Wumpus World:
• Symbols for each [x,y] location:
 Px,y is true if there is a pit in [x, y].
 Wx,y is true if there is a wumpus in [x, y], dead or alive.
 Bx,y is true if the agent perceives a breeze in [x, y].
 Sx,y is true if the agent perceives a stench in [x, y].
• Sentences: Each sentence Ri is labelled for referencing purposes.
• There is no pit in [1,1]: R1 : ¬P1,1
• A square is breezy if and only if there is a pit in a neighbouring
square. To be stated for each square; (for now, just the relevant
sentences are included):
R2 : B1,1 ⇔ (P1,2 ∨ P2,1) R3 : B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
• Adding the breeze percepts for the first two squares visited in
the specific world the agent is in (see Figure 7.3(b))
R4 : ¬B1,1 R5 : B2,1
A simple inference procedure
• The goal now is to decide whether KB |= 𝛼 for some
sentence 𝛼.
• First inference algorithm: model-checking approach
(direct implementation of entailment definition): enumerate
the interpretations, and check that 𝛼 is true in every model
of KB.
• In Wumpus world example:
 7 relevant proposition symbols (at some point of the
game): B1,1, B2,1, P1,1, P1,2, P2,1, P2,2, and P3,1
 2^7 = 128 possible interpretations.
 Three of these are models of KB.
 In those 3 models, ¬P1,2 is true; hence there is no
pit in [1,2].
 But, P2,2 is true in 2 of the 3 models and false in 1, so we
cannot tell whether there is a pit in [2,2].
A simple inference procedure
A truth-table enumeration algorithm
for deciding propositional entailment
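A minimal sketch of the truth-table enumeration idea follows. This is not the book's TT-ENTAILS? pseudocode: sentences are encoded here as Python predicates over a model dictionary, a convention of this sketch only.

```python
from itertools import product

def tt_entails(kb, alpha, symbols):
    """Enumerate all 2^n interpretations; KB |= alpha iff alpha holds
    in every model of KB."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False      # found a model of KB where alpha fails
    return True

# The wumpus KB R1..R5 over the 7 relevant symbols
def iff(a, b): return a == b
kb = lambda m: (not m['P11']                                         # R1
                and iff(m['B11'], m['P12'] or m['P21'])              # R2
                and iff(m['B21'], m['P11'] or m['P22'] or m['P31'])  # R3
                and not m['B11']                                     # R4
                and m['B21'])                                        # R5
syms = ['B11', 'B21', 'P11', 'P12', 'P21', 'P22', 'P31']
print(tt_entails(kb, lambda m: not m['P12'], syms))   # True: no pit in [1,2]
print(tt_entails(kb, lambda m: not m['P22'], syms))   # False: cannot tell
```

The 2^7 = 128 interpretations are checked exhaustively, matching the O(2^n) time and O(n) space bounds stated below.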
Properties of TT-ENTAILS?
• The TT-ENTAILS? algorithm is
 Sound because it implements directly the definition
of entailment, and
 Complete because it works for any KB and 𝛼 and
always terminates. (There are only finitely many
interpretations to examine and it enumerates them)
• Complexity of TT-ENTAILS?
 If KB and 𝛼 contain n symbols in all, then there are
2^n interpretations. Thus, the time complexity of the
algorithm is O(2^n).
 Space complexity: O(n) because the enumeration is
depth-first.
Standard logical equivalences
Propositional Theorem Proving
• We have seen how entailment can be shown by model
checking: enumerating interpretations and showing that the
sentence must hold in all models.
• We will show how entailment can be done by theorem
proving: applying rules of inference directly to the KB
sentences to construct a proof of a given sentence without
consulting interpretations.
• If the number of interpretations is large but the length of the
proof is short, then theorem proving can be more efficient
than model checking.
• Logical equivalence: two sentences 𝛼 and β are logically
equivalent (written 𝛼 ≡ β)
 if they are true in the same set of models; equivalently,
 if each of them entails the other:
𝛼 ≡ β if and only if 𝛼 |= β and β |= 𝛼
Validity and Satisfiability
• A sentence is valid if it is true in all models. E.g., True,
P ∨ ¬P, 𝐴 ⟹ 𝐴, (𝐴 ∧ (𝐴 ⟹ 𝐵)) ⟹ 𝐵 are all valid sentences
• Valid sentences are also known as tautologies—they are
necessarily true. (Every valid sentence is logically equivalent to
True).
• Validity is connected to inference via the deduction
theorem: KB |= 𝛼 iff (𝐾𝐵 ⇒ 𝛼) is valid. (Proof as ex.)
• A sentence is satisfiable if it is true in some model: 𝐴 ∨ 𝐵, C
• A sentence is unsatisfiable if it is true in no model: 𝐴 ∧ ¬𝐴
• In the wumpus KB, (R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5), is satisfiable
because there are three models in which it is true.
• Satisfiability can be checked by enumerating the possible
models until one is found that satisfies the sentence
Satisfiability and proofs
• The problem of determining the satisfiability of sentences
in propositional logic (the SAT problem) was the first
problem proved to be NP-complete.
• Many problems in CS are satisfiability problems. E.g., all the
CSPs ask whether the constraints are satisfiable by some
assignment.
• 𝛼 is valid iff ¬𝛼 is unsatisfiable; contrapositively, 𝛼 is
satisfiable iff ¬𝛼 is not valid.
• Another useful result: 𝐾𝐵 |= 𝛼 iff (𝐾𝐵 ∧ ¬ 𝛼) is
unsatisfiable.
• Proving 𝛼 from 𝐾𝐵 by checking the unsatisfiability of
(𝐾𝐵 ∧¬ 𝛼) corresponds exactly to a proof by
contradiction (or proof by refutation).
• One assumes the sentence 𝛼 to be false and shows that this
leads to a contradiction with the known axioms 𝐾𝐵.
Inference and proofs
• Proof methods are of two types: (1) model checking
or (2) application of inference rules.
• An inference rule: a rule that can be applied as one
step in a proof— i.e. in a chain of conclusions that
leads to the desired goal.
• Example of inference rule: Modus Ponens:
   from α ⇒ β and α, infer β.
 if (WumpusAhead ∧ WumpusAlive) ⇒ Shoot and
WumpusAhead and WumpusAlive are given, then
Shoot can be inferred.
• Other inference rule: And-Elimination:
   from α ∧ β, infer β (and likewise α).
Inference and proofs
• By considering the possible truth values of 𝛼 and
β, one can show that Modus Ponens and And-
Elimination are sound.
• All the logical equivalences in Figure 7.11 can be
used as inference rules.
• One can use inference rules as operators in a
standard search algorithm.
• The application of inference rules typically
requires the translation of sentences into a
normal form.

Inference rules and equivalences in
the wumpus world:
• We start with the KB containing R1 to R5 and show how to
prove ¬P1,2 (there is no pit in [1,2]).
• First, we apply biconditional elimination to R2 to obtain
R6: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
• Then we apply And-Elimination to R6 to obtain
R7: (P1,2 ∨ P2,1) ⇒ B1,1
• Logical equivalence for contrapositives gives
R8: ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)
• Apply Modus Ponens with R8 and the percept R4 (¬B1,1):
gives R9 : ¬(P1,2 ∨ P2,1)
• Finally, apply De Morgan’s rule, giving the conclusion
R10 : ¬P1,2 ∧ ¬P2,1
• That is, neither [1,2] nor [2,1] contains a pit.
Proofs as Search
• This proof was found by hand.
• Any of the search algorithms in Chapter 3 can be applied to
find a sequence of steps that constitutes a proof.
• A proof problem can be defined as follows:
 INITIAL STATE: the initial KB.
 ACTIONS: the set of actions consists of all the inference rules
applied to all the sentences that match the top half of the
inference rule.
 RESULT: the result of an action is to add the sentence in the
bottom half of the inference rule to the KB.
 GOAL: the goal is a state that contains the sentence we are
trying to prove.
• Searching for proofs is an alternative to enumerating models.
• In practice, finding a proof can be more efficient because the
proof can ignore irrelevant propositions.
Monotonicity
• A final property of logical systems is monotonicity: the set
of entailed sentences can only increase as information is
added to the KB.
• For any sentences 𝛼 and β, if KB |= 𝛼 then KB ∧ β |= 𝛼
• Suppose our KB contains the additional assertion β that
there are exactly eight pits in the Wumpus world.
• This knowledge might help the agent draw additional
conclusions, but it cannot invalidate any conclusion 𝛼
already inferred—such as the conclusion that there is no
pit in [1,2].
• Monotonicity means that inference rules can be applied
whenever suitable premises are found in the KB—the
conclusion of the rule must follow regardless of what else
is in the KB.
Proof by Resolution

Proof by Resolution
• Resolution: a single inference rule that yields a complete
inference algorithm when coupled with any complete
search algorithm.
• Consider the agent reaching square [1,2] in Figure 7.4(a).
• Percepts: [Stench, None, None, None, None].
• We then add the following facts (among others) to the KB:
𝑅11 : ¬𝐵1,2 and 𝑅12 : 𝐵1,2 ⇔ (𝑃1,1 ∨ 𝑃2,2 ∨ 𝑃1,3 )
• From these 2 sentences we get:
𝑅13 : ¬𝑃2,2 and 𝑅14 : ¬𝑃1,3 (and ¬𝑃1,1 but already
known) i.e. no pit in [2,2] or [1,3]
• Recall R3 : B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1) and R5 : B2,1
• Apply biconditional elimination, => elimination, and AND-
elimination to R3, followed by Modus Ponens with R5 yields:
R15 : P1,1 ∨ P2,2 ∨ P3,1
i.e. there is a pit in [1,1], [2,2], or [3,1]
Proof by Resolution
• First application of the resolution rule: the literal ¬𝑃2,2 in
𝑅13 resolves with the literal 𝑃2,2 in 𝑅15 to give the resolvent
𝑅16 : 𝑃1,1 ∨ 𝑃3,1
In English: if there’s a pit in one of [1,1], [2,2], or [3,1] and it’s
not in [2,2], then it’s in [1,1] or [3,1].
• Likewise, ¬𝑃1,1 in 𝑅1 resolves with 𝑃1,1 in 𝑅16 giving
𝑅17 : 𝑃3,1
In English: if there’s a pit in [1,1] or [3,1] and it’s not in [1,1],
then it’s in [3,1].
• The last two inference steps are examples of the unit
resolution inference rule:
   from 𝑙1 ∨ ⋯ ∨ 𝑙𝑘 and 𝑚, infer 𝑙1 ∨ ⋯ ∨ 𝑙𝑖−1 ∨ 𝑙𝑖+1 ∨ ⋯ ∨ 𝑙𝑘
where each 𝑙 is a literal and 𝑙𝑖 and 𝑚 are complementary
literals (i.e. they negate each other).
Resolution
• A clause is a disjunction of literals.
• The unit resolution rule takes a clause and a literal and
produces a new clause.
• The unit resolution rule can be generalized to the full
resolution rule:
   from 𝑙1 ∨ ⋯ ∨ 𝑙𝑘 and 𝑚1 ∨ ⋯ ∨ 𝑚𝑛, where 𝑙𝑖 and 𝑚𝑗 are
complementary literals, infer
   𝑙1 ∨ ⋯ ∨ 𝑙𝑖−1 ∨ 𝑙𝑖+1 ∨ ⋯ ∨ 𝑙𝑘 ∨ 𝑚1 ∨ ⋯ ∨ 𝑚𝑗−1 ∨ 𝑚𝑗+1 ∨ ⋯ ∨ 𝑚𝑛
• E.g., from 𝑃1,1 ∨ 𝑃3,1 and ¬𝑃1,1 ∨ ¬𝑃2,2 , infer 𝑃3,1 ∨ ¬𝑃2,2 .
• The resolution inference rule is sound (Proof straightforward.)
• A resolution-based theorem prover can, for any sentences KB
and α in propositional logic, decide whether 𝐾𝐵 |= α
(completeness of resolution).
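A single resolution step can be sketched on clauses represented as sets of literals. The '~' prefix for negation is this sketch's own convention.

```python
def negate(lit):
    """Complement of a literal: P <-> ~P."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """All resolvents of two clauses (sets of literals)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:    # found complementary literals
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

# P1,1 v P3,1 resolved with ~P1,1 v ~P2,2 yields P3,1 v ~P2,2,
# matching the example on the slide
print(resolve(frozenset({'P11', 'P31'}), frozenset({'~P11', '~P22'})))
```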
Conjunctive Normal Form
• The resolution rule applies only to clauses (i.e. disjunctions
of literals).
• But completeness is ensured because: Every sentence of
propositional logic is logically equivalent to a conjunction of
clauses.
• A sentence expressed as a conjunction of clauses is said to
be in conjunctive normal form or CNF.
Converting a sentence in propositional logic to CNF
• E.g. convert B1,1 ⟺ (P1,2 ∨ P2,1) into CNF:
1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α).
(B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧(¬(P1,2 ∨ P2,1) ∨ B1,1)
Conjunctive Normal Form
3. To have only literals, we “move ¬ inwards” by repeated
application of the following equivalences:
¬(¬α) ≡ α (double-negation elimination)
¬(α ∧ β) ≡ (¬α ∨ ¬β) (De Morgan)
¬(α ∨ β) ≡ (¬α ∧ ¬β) (De Morgan)
By one application of the last rule we obtain:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
4. We distribute ∨ over ∧ wherever possible:
(¬B1,1∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)

• The original sentence is now in CNF, as a conjunction of


three clauses

49
A resolution algorithm
• Inference procedures based on resolution work by using the
principle of proof by contradiction.
• To show that 𝐾𝐵 |= 𝛼 we show that (𝐾𝐵 ∧ ¬ 𝛼) is
unsatisfiable.
• This is done by proving a contradiction.
1. (KB ∧ ¬ 𝛼) is converted into CNF.
2. Resolution is applied to the resulting clauses: each pair that
contains complementary literals is resolved to produce a
new clause.
3. Add the new clause to the KB if it is not already there.
4. Continue the process until one of two things happens:
 there are no new clauses that can be added, in which
case KB does not entail 𝛼; or,
 two clauses resolve to yield the empty clause, in which
case KB entails 𝛼.
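The refutation loop above can be sketched as follows. This is a simplified rendering of the PL-RESOLUTION idea; the clause representation (frozensets of string literals, '~' for negation) is this sketch's own.

```python
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def pl_resolution(clauses):
    """Return True iff the clause set is unsatisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:              # empty clause: contradiction found
                    return True
                new.add(r)
        if new <= clauses:             # no new clauses: satisfiable
            return False
        clauses |= new

# KB = (B1,1 <=> (P1,2 v P2,1)) ∧ ¬B1,1, converted to CNF clauses
kb = [frozenset({'~B11', 'P12', 'P21'}),
      frozenset({'~P12', 'B11'}),
      frozenset({'~P21', 'B11'}),
      frozenset({'~B11'})]
# to prove alpha = ¬P1,2, add ¬alpha = P1,2 and check unsatisfiability
print(pl_resolution(kb + [frozenset({'P12'})]))   # True: KB |= ¬P1,2
```

Termination is guaranteed: there are finitely many clauses over a finite set of literals, so the loop must saturate.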
A resolution algorithm
• The empty clause—a disjunction of no disjuncts—is
equivalent to False because a disjunction is true only if at
least one of its disjuncts is true.
• An empty clause represents a contradiction since it arises
only from resolving two complementary unit clauses such
as P and ¬P.
Wumpus world:
• When the agent is in [1,1], there is no breeze, so there
can be no pits in neighbouring squares; so:
KB = R2 ∧ R4 = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1
• We want to prove 𝛼 which is, say, ¬ P1,2.
• When we convert (KB ∧ ¬ 𝛼) into CNF, we obtain the
clauses at the top of the following figure:
Wumpus resolution example
The above resolution example could have been
more efficient. Start from the clauses ¬P1,2 ∨ B1,1,
¬B1,1, and P1,2 (the negated goal):
 Resolving ¬P1,2 ∨ B1,1 with ¬B1,1 yields ¬P1,2.
 Resolving ¬P1,2 with P1,2 yields the empty clause [].
• Contradiction, so ¬P1,2 is true.
• Efficiency of resolution will be discussed.
Horn clauses and definite clauses
• Resolution is complete which makes it a very important
inference method.
• However, in many practical situations, the full power of
resolution is not needed.
• Certain restrictions on the form of sentences used in the KBs
allow for the use of a more restricted and efficient inference
algorithm.
• The definite clause is one such restricted form. It is a
disjunction of literals of which exactly one is positive. E.g.
 The clause ¬L1,1 ∨ ¬Breeze ∨ B1,1 is a definite clause,
 ¬B1,1 ∨ P1,2 ∨ P2,1 is not a definite clause
• Horn clause: a disjunction of literals of which at most one
is positive (Slightly more general than definite clause).
• All definite clauses are Horn clauses, as are clauses with no
positive literals; these are called goal clauses.
Horn clauses and definite clauses
• Note that Horn clauses are closed under resolution: i.e. resolving
two Horn clauses produces a Horn clause.
• KBs of only definite clauses have advantages:
 Every definite clause can be written as an implication whose
premise is a conjunction of positive literals and whose
conclusion is a single positive literal.
 E.g. ¬L1,1 ∨ ¬Breeze ∨ B1,1 can be written as the implication
L1,1 ∧ Breeze ⇒ B1,1
 the premise is called the body and the conclusion is called
the head.
 A sentence consisting of a single positive literal, such as L1,1,
is called a fact. E.g. L1,1 (equivalent to True ⇒ L1,1)
 Inference with Horn clauses can be done through the forward
chaining and backward chaining algorithms which are the
basis for logic programming.
 Deciding entailment with Horn clauses can be done in time
that is linear in the size of the knowledge base.
A Grammar for CNF
Forward Chaining Algorithm
• The forward-chaining algorithm PL-FC-ENTAILS?(KB, q)
determines if a single proposition symbol q —the query— is
entailed by a KB of definite clauses.
• It begins from known facts (positive literals) in the KB.
• If all the premises of an implication are known, then its
conclusion is added to the set of known facts.
E.g., if L1,1 and Breeze are known and
(L1,1 ∧ Breeze) ⇒ B1,1 is in the KB, then B1,1 can be added.
• This process continues until the query q is added or until no
further inferences can be made.
• This forward chaining algorithm runs in linear time.
• It is easy to see that it is sound: every inference is
essentially an application of Modus Ponens.
• Forward chaining is also complete: every entailed atomic
sentence will be derived.
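The algorithm can be sketched with a premise-counting agenda. This is a simplified rendering of the PL-FC-ENTAILS? idea; the clause encoding (a pair of premise list and conclusion, facts having empty premises) is this sketch's own.

```python
from collections import deque

def pl_fc_entails(clauses, q):
    """Forward chaining over definite clauses: is q entailed?"""
    # count[i] = number of premises of clause i not yet known true
    count = {i: len(prem) for i, (prem, _) in enumerate(clauses)}
    inferred = set()
    agenda = deque(concl for prem, concl in clauses if not prem)  # facts
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(clauses):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:     # all premises known: add conclusion
                    agenda.append(concl)
    return False

# the slide's example: L1,1 ∧ Breeze => B1,1, with both facts known
clauses = [([], 'L11'), ([], 'Breeze'), (['L11', 'Breeze'], 'B11')]
print(pl_fc_entails(clauses, 'B11'))   # True
print(pl_fc_entails(clauses, 'P12'))   # False
```

Each clause's counter is decremented at most once per premise symbol, which is what makes the run time linear in the size of the KB.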
Forward Chaining Algorithm
Example with the
Forward Chaining Algorithm
Forward Chaining
• Forward chaining is an example of the general concept of
data-driven reasoning.
• In data-driven reasoning, the focus of attention starts
with the known data.
• It can be used within an agent to derive conclusions from
incoming percepts, often without a specific query in mind.
 E.g., the wumpus agent might TELL its percepts to the
KB using an incremental forward-chaining algorithm in
which new facts can be added to the agenda to initiate
new inferences.
• In humans, a certain amount of data-driven reasoning
occurs as new information arrives.
 E.g., if one is indoors and hears rain starting to fall, it
might occur to them that the picnic will be canceled.
Backward Chaining
• The backward-chaining algorithm works backward
from the query.
 If the query q is known to be true, then no work
is needed.
 Otherwise, the algorithm finds those implications
in the knowledge base whose conclusion is q.
 If all the premises of one of those implications can
be proved true (by backward chaining), then q is
true.
• When applied to the query Q in Figure 7.16, it works
back down the graph until it reaches a set of known
facts, A and B, that forms the basis for a proof.
Example with the
Backward Chaining Algorithm
Backward Chaining
• The algorithm is essentially identical to the AND-OR-
GRAPH-SEARCH algorithm in Figure 4.11.
• As with forward chaining, an efficient
implementation of Backward Chaining runs in linear
time.
• Backward chaining is a form of goal-directed
reasoning. It is useful for answering specific
questions such as
 “Is the battery faulty?”
 “Can I cook this dish?”
• Often, the cost of backward chaining is much less
than linear in the size of the KB, because the
process touches only relevant facts.
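The backward-chaining steps above can be sketched recursively. This is a simplified sketch, not the book's code; the clause set mirrors the Figure 7.16 example in which A and B are the known facts, and the clause encoding is this sketch's own.

```python
def bc_entails(clauses, q, visited=None):
    """Goal-directed proof of q over definite clauses
    (premise list, conclusion); 'visited' guards against cycles."""
    visited = visited or set()
    if q in visited:
        return False
    for prem, concl in clauses:
        # a clause proves q if it concludes q and all its premises
        # can themselves be proved by backward chaining
        if concl == q and all(bc_entails(clauses, p, visited | {q})
                              for p in prem):
            return True
    return False

clauses = [([], 'A'), ([], 'B'),
           (['A', 'B'], 'L'), (['A', 'L'], 'M'),
           (['B', 'L'], 'M'), (['L', 'M'], 'P'), (['P'], 'Q')]
print(bc_entails(clauses, 'Q'))   # True: Q <- P <- L, M <- ... <- A, B
print(bc_entails(clauses, 'Z'))   # False
```

The recursion works back down from the query Q until it bottoms out at the facts A and B, touching only clauses relevant to the goal.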
Agents Based on Propositional Logic
• Aim here is to construct a wumpus agent based on PL.
• First step: enable the agent to deduce all it can about
the state of the world given its percept history.
•  write down a complete logical model of the effects of
actions.
• Consider the problem of deducing the current state of the
wumpus world.
• We start with a large collection of sentences of the
following form:
   B1,1 ⇔ (P1,2 ∨ P2,1),   S1,1 ⇔ (W1,2 ∨ W2,1), …
• The agent knows there is exactly one wumpus, expressed
in two parts. First, that there is at least one wumpus:
W1,1 ∨ W1,2 ∨ … ∨ W4,3 ∨ W4,4
Agents Based on Propositional Logic
• Then, that there is at most one wumpus: i.e. for each pair of
locations, we add a sentence saying that at least one of them
must be wumpus-free:
¬W1,1 ∨ ¬W1,2
¬W1,1 ∨ ¬W1,3
・・・

¬W4,3 ∨ ¬W4,4
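Generating these pairwise sentences mechanically can be sketched as follows; the W11-style symbol names and the clause encoding are this sketch's own conventions.

```python
from itertools import combinations

# "at least one wumpus": one big disjunction over all 16 squares;
# "at most one wumpus": for every pair of squares, at least one is
# wumpus-free (clauses as frozensets of literals, '~' marking negation)
squares = [(x, y) for x in range(1, 5) for y in range(1, 5)]
at_least_one = {f'W{x}{y}' for x, y in squares}
at_most_one = [frozenset({f'~W{x1}{y1}', f'~W{x2}{y2}'})
               for (x1, y1), (x2, y2) in combinations(squares, 2)]

print(len(at_most_one))   # C(16, 2) = 120 pairwise clauses
```

The quadratic blow-up (120 clauses just to say "at most one wumpus") is one reason propositional encodings of the wumpus world grow large.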
• We associate time stamps with the different percepts (Stench,
Breeze, etc.) to avoid contradictions such as having ¬Stench in
the KB and then Stench in the KB some time later.
• The percept is supplied to MAKE-PERCEPT-SENTENCE in Fig. 7.1.
• So there is no problem in adding Stench^4 to the KB, rather than
Stench, neatly avoiding any contradiction with ¬Stench^3.
• Symbols associated with permanent aspects of the world need
no time superscript; they are sometimes called atemporal
variables.
Agents Based on Propositional Logic
• E.g., the initial KB includes
L1,1^0: the agent is in square [1, 1] at time 0,
FacingEast^0, HaveArrow^0, and WumpusAlive^0.
• We use the word fluent (in the sense of flowing) to refer to
an aspect of the world that changes.
• We can connect stench and breeze percepts directly to the
properties of the squares where they are experienced
through the location fluent as follows.
For any time step t and any square [x, y], we assert:
Lx,y^t ⇒ (Breeze^t ⇔ Bx,y)      Lx,y^t ⇒ (Stench^t ⇔ Sx,y)
• We need axioms that allow the agent to keep track of fluents
such as Lx,y^t.
• We need to write down the transition model of the
wumpus world as a set of logical sentences that capture
how fluents change as a result of the agent's actions.
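The percept axioms above are instantiated for every square and every time step. A small generator makes the pattern explicit; again, the string syntax (`=>` for ⇒, `<=>` for ⇔) is illustrative.

```python
# Emit the axioms tying timestamped Breeze/Stench percepts to the
# atemporal properties of the square the agent occupies at time t.

def percept_axioms(x, y, t):
    """The two percept axioms for square [x, y] at time step t."""
    return [f"L{x},{y}^{t} => (Breeze^{t} <=> B{x},{y})",
            f"L{x},{y}^{t} => (Stench^{t} <=> S{x},{y})"]

# One pair of axioms per square per time step:
kb = [ax for t in range(3)          # time steps 0, 1, 2
         for x in range(1, 5)
         for y in range(1, 5)
         for ax in percept_axioms(x, y, t)]
print(len(kb))   # 3 time steps * 16 squares * 2 percepts = 96
```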
Agents Based on Propositional Logic
• First, we need proposition symbols for the occurrences of
actions. These will also be indexed by time
 e.g. Forward^0: the agent executes the Forward action at
time 0.
• By convention, percept at time step t happens first, then
action for time step t, then transition to time step t+1.
• Need to write effect axioms that specify the outcome of an
action at the next time step.
• E.g. if the agent is at location [1,1] facing east at time 0 and
goes Forward, the result is that the agent is in square [2,1]
and no longer in [1,1]:
L1,1^0 ∧ FacingEast^0 ∧ Forward^0 ⇒ (L2,1^1 ∧ ¬L1,1^1)
• We need one such sentence for each possible time step, for
each of the 16 squares, and each of the four orientations.
• We also need similar sentences for the other actions: Grab,
Shoot, Climb, TurnLeft, and TurnRight.
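Generating these effect axioms mechanically shows how quickly they multiply. The sketch below handles only the Forward action and, to stay short, ignores walls (where a real encoding would produce a Bump instead of a move); names and syntax are illustrative assumptions.

```python
# Generate Forward effect axioms for the naive per-time-step encoding.

ORIENTATIONS = {"East": (1, 0), "West": (-1, 0),
                "North": (0, 1), "South": (0, -1)}

def forward_effect_axiom(x, y, facing, t):
    """Effect axiom for moving Forward from [x, y] facing `facing` at time t.

    Simplification: wall collisions (Bump) are ignored in this sketch.
    """
    dx, dy = ORIENTATIONS[facing]
    nx, ny = x + dx, y + dy
    return (f"L{x},{y}^{t} & Facing{facing}^{t} & Forward^{t} "
            f"=> (L{nx},{ny}^{t+1} & ~L{x},{y}^{t+1})")

print(forward_effect_axiom(1, 1, "East", 0))

# For T time steps, Forward alone needs T * 16 squares * 4 orientations axioms:
T = 10
print(T * 16 * 4)   # 640
```

The count (640 axioms for a single action over 10 steps) is what motivates the more compact successor-state axioms discussed later.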
The remaining slides are
required reading.
Agents Based on Propositional Logic
• Suppose the agent decides to move Forward at time 0
and asserts this fact into its knowledge base.
• Given the previous effect axiom and the initial
assertions about the state at time 0, the agent can
deduce that it is in [2, 1] (at time 1).
• So ASK(KB, L2,1^1) = true. Good!
• Problem: ASK(KB, HaveArrow^1) returns false: the
agent cannot prove it STILL has the arrow. (Nor can it
prove that it does not have it!)
• The reason: the effect axiom fails to state what
remains unchanged as the result of an action.
• The need to do this gives rise to the frame problem.
Agents Based on Propositional Logic
Possible solution to the frame problem: Add frame axioms
explicitly asserting all the propositions that remain the same.
 e.g., for each time t we would have
Forward^t ⇒ (HaveArrow^t ⇔ HaveArrow^{t+1})
Forward^t ⇒ (WumpusAlive^t ⇔ WumpusAlive^{t+1})
・・・
Drawback:
• The proliferation of frame axioms is remarkably inefficient.
• In a world with m different actions and n fluents, the set of
frame axioms will be of size O(mn).
• This manifestation of the frame problem is sometimes called
the representational frame problem.
• This problem is significant because the real world has very
many fluents.
Agents Based on Propositional Logic
• Fortunately for us humans, each action typically
changes just a small number k of the total fluents: the
world exhibits locality.
• So we need to define the transition model with a set of
axioms of size O(mk) rather than O(mn).
• There is also an inferential frame problem: the problem
of projecting forward the results of a t-step plan of
action in time O(kt) rather than O(nt).
• Solution to the problem: change one’s focus from
writing axioms about actions to writing axioms about
fluents.
• For each fluent F, have an axiom that defines the truth
value of F^{t+1} in terms of fluents (including F itself) at
time t and the actions that may have occurred at time t.
Agents Based on Propositional Logic
• The truth value of F^{t+1} can be set in one of two ways:
 either the action at time t causes F to be true at t+1, or
 F was already true at time t and the action at time t
does not cause it to be false.
• An axiom of this form is called a successor-state axiom.
It has this schema:
F^{t+1} ⇔ ActionCausesF^t ∨ (F^t ∧ ¬ActionCausesNotF^t)
• One of the simplest successor-state axioms is the one for
HaveArrow:
HaveArrow^{t+1} ⇔ (HaveArrow^t ∧ ¬Shoot^t)
• More elaborate axioms are needed for the agent's location, e.g.:
L1,1^{t+1} ⇔ (L1,1^t ∧ (¬Forward^t ∨ Bump^{t+1})) ∨
(L1,2^t ∧ (South^t ∧ Forward^t)) ∨ (L2,1^t ∧ (West^t ∧ Forward^t))
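Because each successor-state axiom is a biconditional, it can be read as an update rule: the fluent's next value is a function of its current value and the current action. The sketch below evaluates the HaveArrow axiom over a sequence of actions, treating fluents as Python booleans; the action sequence is an illustrative assumption.

```python
# Evaluate the HaveArrow successor-state axiom step by step:
#   HaveArrow^{t+1} <=> (HaveArrow^t & ~Shoot^t)

def next_have_arrow(have_arrow, action):
    """Truth value of HaveArrow at t+1, given its value and the action at t."""
    return have_arrow and action != "Shoot"

have_arrow = True                      # HaveArrow^0: the agent starts armed
for action in ["Forward", "TurnRight", "Shoot", "Forward"]:
    have_arrow = next_have_arrow(have_arrow, action)
print(have_arrow)   # False: once shot, the arrow never comes back
```

Unlike the effect-axiom encoding, this single rule also settles what *doesn't* change: any action other than Shoot leaves HaveArrow as it was.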
Agents Based on Propositional Logic
• Given
 a complete set of successor-state axioms and
 the other axioms listed at the beginning of this section,
the agent will be able to ASK and answer any answerable
question about the current state of the world.
• E.g. initial sequence of percepts and actions:
¬Stench^0 ∧ ¬Breeze^0 ∧ ¬Glitter^0 ∧ ¬Bump^0 ∧ ¬Scream^0 ; Forward^0
¬Stench^1 ∧ Breeze^1 ∧ ¬Glitter^1 ∧ ¬Bump^1 ∧ ¬Scream^1 ; TurnRight^1
¬Stench^2 ∧ Breeze^2 ∧ ¬Glitter^2 ∧ ¬Bump^2 ∧ ¬Scream^2 ; TurnRight^2
¬Stench^3 ∧ Breeze^3 ∧ ¬Glitter^3 ∧ ¬Bump^3 ∧ ¬Scream^3 ; Forward^3
¬Stench^4 ∧ ¬Breeze^4 ∧ ¬Glitter^4 ∧ ¬Bump^4 ∧ ¬Scream^4 ; TurnRight^4
¬Stench^5 ∧ ¬Breeze^5 ∧ ¬Glitter^5 ∧ ¬Bump^5 ∧ ¬Scream^5 ; Forward^5
Stench^6 ∧ ¬Breeze^6 ∧ ¬Glitter^6 ∧ ¬Bump^6 ∧ ¬Scream^6
Agents Based on Propositional Logic
• At this point, we have ASK(KB, L1,2^6) = true: the agent knows
where it is.
• Also, ASK(KB, W1,3) = true and ASK(KB, P3,1) = true:
the agent has found the wumpus and one of the pits.
• Asking whether a square is OK to move into (i.e. contains
neither a pit nor a live wumpus):
OKx,y^t ⇔ ¬Px,y ∧ ¬(Wx,y ∧ WumpusAlive^t)
• A serious problem remains: need to confirm that all necessary
preconditions of an action hold for it to have its intended
effect.
• BUT many unusual exceptions could cause an action to fail.
• Specifying all these exceptions is called the qualification
problem.
• No complete solution within logic; this is left to the system
designers' good judgment in deciding how detailed they want to be.
Slides based on the textbook
• Russell, S. and Norvig, P. (2020). Artificial Intelligence:
A Modern Approach (4th Edition). Pearson Education Limited.