Chapter 3

1. Atomic Sentences
An atomic sentence (or atom) is a basic, indivisible statement that asserts a specific fact
about objects using a predicate symbol and terms. It has the following form:
 Form: Predicate(Term1, Term2, ...)
Here:
 The predicate symbol refers to a relation.
 The terms refer to the objects involved in that relation.
Example
An atomic sentence like:
 Brother(Richard, John)
asserts that Richard the Lionheart is the brother of King John, under an intended
interpretation where Brother represents the brotherhood relation and Richard and John
refer to specific individuals.
Atomic sentences can also use complex terms—terms formed by applying functions to
objects. For example:
 Married(Father(Richard), Mother(John))
This states that Richard’s father is married to John’s mother, with Father and Mother as
functions mapping each person to their respective parents.
Truth in a Model
An atomic sentence is true in a given model if the relation denoted by the predicate holds
among the objects referred to by the arguments. For instance, if the model verifies that
Brother(Richard, John) holds, then this atomic sentence is true in that model.
2. Complex Sentences
Complex sentences are constructed by combining atomic sentences with logical
connectives, similar to propositional logic. These connectives include:
 ¬ (negation)
 ∧ (conjunction)
 ∨ (disjunction)
 ⇒ (implication)
 ⇔ (biconditional)
Examples
Given the model from Figure 8.2, here are some complex sentences and their
interpretations:
1. ¬Brother(LeftLeg(Richard), John)
o This sentence states that Richard’s left leg is not John’s brother. Here, ¬ negates
the relation.
2. Brother(Richard, John) ∧ Brother(John, Richard)
o This asserts that Richard is John’s brother and John is Richard’s brother,
assuming a mutual or symmetric relationship.
3. King(Richard) ∨ King(John)
o This disjunction states that either Richard is a king, or John is a king, or both.
4. ¬King(Richard) ⇒ King(John)
o This implication means that if Richard is not a king, then John must be a king.
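The truth conditions above can be made concrete with a small sketch (the model, relation names, and the `implies` helper are assumptions for illustration): relations are sets of tuples over the domain's objects, and the connectives map directly onto Boolean operators.

```python
# A toy model: the Brother relation as a set of pairs, King as a set.
brother = {("Richard", "John"), ("John", "Richard")}
king = {"John"}

def Brother(x, y):
    return (x, y) in brother

def King(x):
    return x in king

def implies(p, q):
    # Material implication: p ⇒ q is equivalent to ¬p ∨ q.
    return (not p) or q

print(Brother("Richard", "John"))                                  # atomic: True
print(Brother("Richard", "John") and Brother("John", "Richard"))   # ∧: True
print(King("Richard") or King("John"))                             # ∨: True
print(implies(not King("Richard"), King("John")))                  # ⇒: True
```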

Quantifiers :
In first-order logic, quantifiers allow us to generalize statements over collections of objects,
rather than specifying each object individually. There are two main types of quantifiers in
first-order logic:
1. Universal Quantification (∀)
The universal quantifier (∀), read as “for all,” is used to assert that a statement applies to all
objects in a domain.
Syntax and Semantics
The general form of a universally quantified sentence is:
 ∀ x P(x), where P(x) is some predicate.
This statement is true if P(x) holds for every possible assignment of x in the model’s domain.
Example
To express the statement “All kings are persons,” we write:
 ∀ x (King(x) ⇒ Person(x))

This reads as: “For all objects x, if x is a king, then x is a person.” If x is not a king, the implication King(x) ⇒ Person(x) is automatically true (due to the nature of implication in logic, where any implication with a false premise is true).


2. Existential Quantification (∃)
The existential quantifier (∃), read as “there exists,” asserts that a statement is true for at
least one object in the domain.
Syntax and Semantics
The general form of an existentially quantified sentence is:
 ∃ x P(x), where P(x) is some predicate.
This statement is true if P(x) holds for at least one possible assignment of x in the model’s
domain.
Example
To express the statement “There exists a crown on King John’s head,” we write:
 ∃ x (Crown(x) ∧ OnHead(x, John))
This means “There exists at least one object x such that x is a crown and x is on John’s
head.”
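Both quantifiers can be pictured as loops over the model’s domain. In this sketch (the domain and relations are made-up examples), ∀ corresponds to `all()` over an implication and ∃ to `any()` over a conjunction:

```python
domain = {"John", "Richard", "Crown1"}
king = {"John"}
person = {"John", "Richard"}
crown = {"Crown1"}
on_head = {("Crown1", "John")}

def implies(p, q):
    return (not p) or q

# ∀x (King(x) ⇒ Person(x)): the implication must hold for every object
forall = all(implies(x in king, x in person) for x in domain)

# ∃x (Crown(x) ∧ OnHead(x, John)): some object must satisfy both conjuncts
exists = any(x in crown and (x, "John") in on_head for x in domain)

print(forall, exists)  # True True

# Pitfall preview: ∀x (King(x) ∧ Person(x)) wrongly demands that EVERY
# object be a king and a person, so it fails on Crown1.
too_strong = all(x in king and x in person for x in domain)
print(too_strong)  # False
```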
Common Quantifier Pitfalls
1. Confusing Implication with Conjunction:
o Writing ∀ x (King(x) ∧ Person(x)) instead of ∀ x (King(x) ⇒ Person(x)) would
mean “All objects are kings and persons,” which does not express the intended
meaning.
2. Using ∃ with Implication:
o Writing ∃ x (Crown(x) ⇒ OnHead(x, John)) is too weak, as this statement is true as long as any object is not a crown. Instead, ∃ x (Crown(x) ∧ OnHead(x, John)) captures the intended meaning.


Nested Quantifiers
Complex statements often require multiple quantifiers:
1. ∀ x ∃ y (Loves(x, y)) – “Everyone loves someone.”
2. ∃ y ∀ x (Loves(x, y)) – “There exists someone who is loved by everyone.”
The order of quantifiers is important: changing it alters the meaning.
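Nested quantifiers translate into nested loops, and swapping them swaps the loop nesting. This sketch (the Loves facts are invented for illustration) makes the difference in meaning concrete:

```python
people = {"Alice", "Bob", "Carol"}
loves = {("Alice", "Bob"), ("Bob", "Carol"), ("Carol", "Bob")}

# ∀x ∃y Loves(x, y): everyone loves someone (possibly different someones)
everyone_loves_someone = all(
    any((x, y) in loves for y in people) for x in people
)

# ∃y ∀x Loves(x, y): one single person is loved by everyone
someone_loved_by_all = any(
    all((x, y) in loves for x in people) for y in people
)

print(everyone_loves_someone, someone_loved_by_all)  # True False
```

Bob would need to love himself for the second sentence to hold here, which shows the two readings are genuinely different.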
Relationships Between ∀ and ∃

The universal and existential quantifiers are interconnected through negation. Statements using ∀ can often be converted to statements using ∃ by applying De Morgan’s laws:
 ∀ x ¬P(x) is equivalent to ¬∃ x P(x).
 ∃ x ¬P(x) is equivalent to ¬∀ x P(x).
This allows universal and existential quantifiers to be rewritten in terms of each other, which
can be useful for logical proofs and simplifications.
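The two equivalences can be spot-checked on any finite domain (the domain and predicate below are arbitrary choices for illustration):

```python
domain = range(5)
P = lambda x: x % 2 == 0  # an arbitrary predicate

lhs1 = all(not P(x) for x in domain)   # ∀x ¬P(x)
rhs1 = not any(P(x) for x in domain)   # ¬∃x P(x)

lhs2 = any(not P(x) for x in domain)   # ∃x ¬P(x)
rhs2 = not all(P(x) for x in domain)   # ¬∀x P(x)

print(lhs1 == rhs1, lhs2 == rhs2)  # True True
```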

Equality :
In first-order logic, equality provides a way to specify that two terms refer to the same
object or entity in a model. This helps express precise relationships between objects and is
represented by the equality symbol =.
Using Equality in First-Order Logic
1. Equality Statements:
o An equality statement like Father(John) = Henry means that the object referred
to by Father(John) is the same as the one referred to by Henry. For this
statement to be true, both terms must point to the same object in the model’s
interpretation.
2. Inequality (or Non-Equality) Statements:
o Negation of equality (¬= or simply ≠) allows us to assert that two terms do not
refer to the same object. For example, x ≠ y means that x and y refer to
different objects.
o Using this, we can distinguish between multiple instances of an entity in a more
complex structure.
Example: Ensuring Multiple Entities are Distinct
Suppose we want to say that "Richard has at least two brothers." The statement ∃ x, y
(Brother(x, Richard) ∧ Brother(y, Richard)) alone is insufficient because x and y could both
refer to the same brother. To prevent this, we add ∧ x ≠ y to ensure x and y refer to
different individuals:
 ∃ x, y (Brother(x, Richard) ∧ Brother(y, Richard) ∧ x ≠ y)
This sentence specifies:
 There exist two distinct objects x and y such that both are brothers of Richard, and
they are not the same person.
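The need for the x ≠ y conjunct shows up directly when the quantifiers are unrolled over candidate bindings (the facts here are invented):

```python
# Brother(John, Richard) asserted twice still names only ONE individual.
brothers = ["John", "John"]

# ∃x,y Brother(x,R) ∧ Brother(y,R): satisfied even by x = y = John
weak = any(True for x in brothers for y in brothers)

# ... ∧ x ≠ y: requires two genuinely distinct bindings
strong = any(x != y for x in brothers for y in brothers)

print(weak, strong)  # True False

# With a genuinely distinct second brother, the stronger sentence holds.
print(any(x != y for x in ["John", "Geoffrey"] for y in ["John", "Geoffrey"]))  # True
```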

Alternative semantics :
In standard first-order logic, stating that “Richard’s brothers are John and Geoffrey” requires
more than simply listing them as brothers; we also need to rule out the possibility of
additional brothers and clarify that John and Geoffrey are distinct. This can get
cumbersome:
 To capture that Richard’s brothers are exactly John and Geoffrey, we write:
Brother(John, Richard)∧Brother(Geoffrey, Richard)∧(John≠Geoffrey)∧∀x(Brother(
x, Richard)⇒(x=John∨x=Geoffrey))
This is a precise statement but is verbose compared to natural language. To address this,
alternative semantics, often called database semantics, are sometimes used, especially in
systems where we have definitive, complete knowledge of all the relevant facts, such as in
databases and logic programming.
Key Components of Database Semantics
1. Unique-Names Assumption:
o Each constant symbol (like "John" or "Geoffrey") refers to a unique object, so
“John” and “Geoffrey” are inherently distinct without needing to state that
explicitly.
2. Closed-World Assumption:
o Anything not explicitly stated to be true is assumed false. If we state only “John
and Geoffrey are Richard’s brothers,” we assume that Richard has no other
brothers.
3. Domain Closure:
o All entities are represented by existing symbols, and no additional unnamed
entities exist in the model.
With these assumptions in place, Brother(John, Richard) ∧ Brother(Geoffrey, Richard)
would automatically imply that John and Geoffrey are Richard’s only brothers. This makes it
simpler to represent complete knowledge, like a defined set of relationships, without
needing to add exclusion clauses.
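Under database semantics, queries reduce to table lookups: absence from the table means false, rather than unknown. A minimal sketch (the table contents and the name Edmund are made up):

```python
# The complete "Brother of Richard" table under database semantics.
brothers_of_richard = {"John", "Geoffrey"}

def is_brother_of_richard(x):
    # Closed-world assumption: anything not listed is false.
    return x in brothers_of_richard

print(is_brother_of_richard("John"))    # True: explicitly stored
print(is_brother_of_richard("Edmund"))  # False by assumption, not assertion
```

Under standard FOL semantics, the second query would instead be unknown unless ¬Brother(Edmund, Richard) were explicitly derivable.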
Database Semantics vs. Standard Semantics
 Database Semantics is ideal when:
o We have full, complete knowledge (e.g., a database with all entries).
o Every object and fact is known with certainty, avoiding ambiguity.
 Standard First-Order Semantics is better for:
o Open-world scenarios, where new objects or relations might exist outside what
we currently know.
Operator Precedence
In FOL, operators are applied in the following order of precedence:
1. ¬ (Negation)
2. = (Equality)
3. ∧ (Conjunction)
4. ∨ (Disjunction)
5. ⇒ (Implication)
6. ⇔ (Biconditional)

Using FOL :
In first-order logic (FOL), we have a structured language to represent and reason about
knowledge systematically. Here’s a breakdown of the key concepts:
1. Defining a Domain
 Domain refers to the specific part of the world that our knowledge base (KB) is
concerned with. For example, this could be a domain of family relationships,
numbers, sets, or even a fictional scenario like the wumpus world.
 By defining a domain, we clarify the context for the knowledge and assertions we
add.
2. TELL/ASK Interface for Knowledge Bases
 TELL: Used to add sentences or assertions to the KB. These are the "facts" or "rules"
that we know to be true in our defined domain.
 ASK: Used to query the KB to check if certain information can be derived or logically
follows from what we have told the KB.
3. Assertions in FOL
 Assertions are facts added using TELL. For example:
o TELL(KB, King(John)) adds the fact that John is a king to the KB.
o TELL(KB, Person(Richard)) adds that Richard is a person.
o TELL(KB, ∀ x (King(x) ⇒ Person(x))) asserts that all kings are persons.
 Once these facts are added, they form part of the knowledge from which the KB can
derive answers to queries.
4. Queries in FOL
 Queries are questions asked to the KB using ASK. These can be simple or quantified:
o For example, ASK(KB, King(John)) would return true because King(John) is
explicitly in the KB.
o ASK(KB, Person(John)) would also return true, since the rule ∀ x (King(x) ⇒
Person(x)) implies that if John is a king, then John is a person.
5. Quantified Queries and Binding Lists
 FOL allows for quantified queries, such as ASK(KB, ∃ x Person(x)), asking if there exists
some x for which Person(x) is true. The answer might simply be true if the KB can
confirm its truth but won’t specify which individuals satisfy the query.
 ASKVARS helps when we need specific values for variables that satisfy the query. For
example, ASKVARS(KB, Person(x)) might yield {x/John, x/Richard}, showing that both
John and Richard satisfy the query.
o This output, {x/John, x/Richard}, is known as a substitution or binding list,
which maps variables to specific values that make the query true.
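The TELL/ASK/ASKVARS interface can be sketched over ground facts (the class and method names below are assumptions for illustration, not a standard API):

```python
class KB:
    """A toy knowledge base of ground facts stored as tuples."""
    def __init__(self):
        self.facts = set()

    def tell(self, pred, *args):
        self.facts.add((pred,) + args)

    def ask(self, pred, *args):
        return (pred,) + args in self.facts

    def askvars(self, pred):
        # Return one binding list {x/value} per matching fact.
        return [{"x": fact[1]} for fact in self.facts if fact[0] == pred]

kb = KB()
kb.tell("King", "John")
kb.tell("Person", "John")
kb.tell("Person", "Richard")

print(kb.ask("King", "John"))                          # True
print(sorted(b["x"] for b in kb.askvars("Person")))    # ['John', 'Richard']
```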
6. Limitations in General FOL

 In first-order logic, not all queries will have specific bindings. For instance, if the KB contains King(John) ∨ King(Richard), then asking ASK(KB, ∃ x King(x)) would return true (since there exists at least one king), but there’s no single binding for x that makes the query true without further specification.
Kinship Domain :
The kinship domain in first-order logic provides a structured way to represent family
relationships and reason about them through a set of defined rules and relationships. Here’s
a breakdown of how it’s constructed:
1. Objects and Predicates in the Domain
 The objects are people, and we use predicates to represent characteristics and
relationships. For instance:
o Unary predicates describe properties of individuals, like Male(x) and Female(x).
o Binary predicates represent relationships between two people, such as
Parent(x, y), Sibling(x, y), Spouse(x, y), etc.
2. Defining Relationships with Axioms
 We use axioms to define relationships between people and properties of individuals.
For instance:

o Mother: To define a mother, we express that a person’s mother is their female parent: ∀m,c Mother(c)=m ⇔ Female(m) ∧ Parent(m,c)
o Husband: A husband is defined as a male spouse: ∀w,h Husband(h,w) ⇔ Male(h) ∧ Spouse(h,w)
o Male/Female Exclusivity: Ensures a person cannot be both male and female: ∀x Male(x) ⇔ ¬Female(x)
o Parent/Child Inverse Relationship: This states that if someone is a parent, then the other is their child, and vice versa: ∀p,c Parent(p,c) ⇔ Child(c,p)
o Grandparent: Defined as the parent of one’s parent: ∀g,c Grandparent(g,c) ⇔ ∃p Parent(g,p) ∧ Parent(p,c)
o Sibling: Defined as another child of the same parent, ensuring that siblings are distinct: ∀x,y Sibling(x,y) ⇔ (x≠y) ∧ ∃p Parent(p,x) ∧ Parent(p,y)
These rules, or axioms, provide the foundational relationships in the kinship domain,
allowing us to derive new relationships and facts.
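The sibling axiom can be applied directly to a handful of ground Parent facts (the names below are invented for illustration):

```python
# Ground facts: Parent(p, c) pairs.
parent = {("Laura", "Anne"), ("Laura", "Tom"), ("Jim", "Anne"), ("Jim", "Tom")}
people = {name for pair in parent for name in pair}

def sibling(x, y):
    # Sibling(x,y) ⇔ (x ≠ y) ∧ ∃p Parent(p,x) ∧ Parent(p,y)
    return x != y and any((p, x) in parent and (p, y) in parent for p in people)

print(sibling("Anne", "Tom"))   # True: they share parent Laura
print(sibling("Anne", "Anne"))  # False: the x ≠ y conjunct rules this out
print(sibling("Tom", "Anne"))   # True: symmetry follows from the definition
```

Note that the last line illustrates the theorem discussed next: symmetry is derived, not separately asserted.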
3. Definitions and Theorems
 Axioms are foundational definitions, like those we’ve seen, which provide the basic
facts from which conclusions can be derived.
 Theorems are statements that logically follow from axioms. For example:
o Symmetry of Siblinghood: The statement that sibling relationships are symmetric: ∀x,y Sibling(x,y) ⇔ Sibling(y,x)
This is a theorem, as it logically follows from the sibling definition rather than being an initial axiom.
 Reasoning Efficiency: Including theorems within a knowledge base can speed up
reasoning by providing readily derived truths, rather than re-proving them from
axioms every time.
4. Partial Definitions
 Some predicates, like Person(x), may lack a complete definition. Instead, partial specifications can be given, such as: ∀x Person(x) ⇒ ...
 This flexibility allows us to use concepts without having to exhaustively define them.
5. Specific Facts in the Knowledge Base
 Individual facts, like Male(Jim) or Spouse(Jim, Laura), represent specific instances
within the kinship domain and enable targeted reasoning. If a particular relationship
is missing, we might have to add more axioms to fill any logical gaps in the model.

Numbers, sets and lists :


Numbers: Natural Numbers and the Peano Axioms
The theory of natural numbers is constructed starting with a minimal set of elements:
1. Basic Definitions:
o Predicate: NatNum (identifies natural numbers).
o Constant: 0 (represents the number zero).
o Function: S (successor function, where S(n) represents the next natural number
after n).
2. Peano Axioms:
o Natural Number Definition:
 NatNum(0): states that 0 is a natural number.
 ∀ n NatNum(n) ⇒ NatNum(S(n)): if n is a natural number, then S(n) is
also a natural number.
 This axiom generates the sequence of natural numbers: 0, S(0) (1),
S(S(0)) (2), and so forth.
o Successor Function Properties:
 ∀ n 0 ≠ S(n): 0 is not the successor of any number.
 ∀ m, n m ≠ n ⇒ S(m) ≠ S(n): the successor function is injective, meaning
no two different numbers have the same successor.
3. Addition Definition:
o Using the successor function, addition is defined recursively:
 ∀ m NatNum(m) ⇒ +(0, m) = m: adding 0 to any number m results in m.
 ∀ m, n NatNum(m) ∧ NatNum(n) ⇒ +(S(m), n) = S(+(m, n)): adding S(m)
to n is the same as incrementing the sum of m and n.
 This recursively builds addition through repeated application of
the successor function.
o Infix Notation:
 + is used in infix notation for readability (e.g., m + 0 instead of +(m, 0)).
Once addition is defined, multiplication, exponentiation, and more advanced number
theory can be derived as recursive operations built on addition.
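The Peano construction can be sketched by encoding each natural number as nested applications of the successor function S to 0 (the tuple encoding and helper names are illustration choices, not part of the theory):

```python
ZERO = 0

def S(n):
    # Successor: wraps a numeral in one more application of S.
    return ("S", n)

def add(m, n):
    # +(0, n) = n ;  +(S(m), n) = S(+(m, n))
    if m == ZERO:
        return n
    return S(add(m[1], n))

def to_int(n):
    # Helper: count the nested successors to read off an ordinary integer.
    count = 0
    while n != ZERO:
        n = n[1]
        count += 1
    return count

two = S(S(ZERO))          # S(S(0)) represents 2
three = S(S(S(ZERO)))     # S(S(S(0))) represents 3
print(to_int(add(two, three)))  # 5
```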
Sets: Basic Set Theory
The domain of sets is fundamental in both mathematical and logical reasoning. Sets are
defined by axioms that outline how elements relate to sets, and how sets interact with each
other.
1. Core Elements:
o Constant: { } (the empty set).
o Predicate: Set (used to identify sets).
o Binary Predicates:
 x ∈ s (membership: x is an element of set s).
 s1 ⊆ s2 (subset: s1 is a subset of s2).
o Functions:
 s1 ∩ s2 (intersection of s1 and s2).
 s1 ∪ s2 (union of s1 and s2).
 {x|s} (adjoining an element x to a set s).
2. Set Axioms:
o Set Construction:
 ∀ s Set(s) ⇔ (s = { }) ∨ (∃ x, s2 Set(s2) ∧ s = {x|s2}): a set is either the
empty set or constructed by adjoining an element to another set.
o Empty Set:
 ¬∃ x, s {x|s} = { }: the empty set has no elements.
o Redundant Elements:
 ∀ x, s x ∈ s ⇔ s = {x|s}: adjoining an element already in a set does not
alter the set.
o Membership:
 ∀ x, s x ∈ s ⇔ ∃ y, s2 (s = {y|s2} ∧ (x = y ∨ x ∈ s2)): recursively defines
membership.
o Subset:
 ∀ s1, s2 s1 ⊆ s2 ⇔ (∀ x x ∈ s1 ⇒ x ∈ s2): a set is a subset of another if
all elements of the first set are also in the second.
o Set Equality:
 ∀ s1, s2 (s1 = s2) ⇔ (s1 ⊆ s2 ∧ s2 ⊆ s1): two sets are equal if each is a
subset of the other.
o Intersection and Union:
 ∀ x, s1, s2 x ∈ (s1 ∩ s2) ⇔ (x ∈ s1 ∧ x ∈ s2): x is in the intersection if
it's in both sets.
 ∀ x, s1, s2 x ∈ (s1 ∪ s2) ⇔ (x ∈ s1 ∨ x ∈ s2): x is in the union if it's in
either set.
These axioms form a foundation for further operations in set theory.
Lists: Ordered Collections with Duplicates
Lists differ from sets in that they are ordered and can contain duplicate elements. Lists are
often represented in Lisp notation.
1. Basic Elements:
o Constant: Nil (represents an empty list).
o Functions:
 Cons: adds an element to the front of a list.
 Append: concatenates two lists.
 First and Rest: retrieve the first element and the remaining elements of a
list, respectively.
o Predicate:
 Find: checks if an element exists in a list.
 List?: identifies lists.
2. List Syntax:
o The empty list [] is equivalent to Nil.
o Cons(x, y), where y is a nonempty list, is often written [x|y].
o A list of elements [A, B, C] corresponds to Cons(A, Cons(B, Cons(C, Nil))).
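The list vocabulary can be sketched with Python tuples standing in for Cons cells (the representation is an illustration choice; the theory itself is representation-independent):

```python
Nil = None  # the empty list

def Cons(x, y):
    return (x, y)

def First(lst):
    return lst[0]

def Rest(lst):
    return lst[1]

def Find(x, lst):
    # True if x occurs anywhere in the list (recursive membership).
    return lst is not Nil and (First(lst) == x or Find(x, Rest(lst)))

# [A, B, C] corresponds to Cons(A, Cons(B, Cons(C, Nil)))
abc = Cons("A", Cons("B", Cons("C", Nil)))
print(First(abc), First(Rest(abc)))      # A B
print(Find("C", abc), Find("D", abc))    # True False
```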
Syntactic Sugar
Both numbers and sets make use of syntactic sugar—notation that makes expressions more
readable but doesn't alter meaning. For example:
 Writing m + n as an infix operation instead of +(m, n).
 Using {} for the empty set and familiar symbols for union, intersection, and subset
relations.
Building Larger Theories
From these foundational elements, it’s possible to construct complex mathematical
theories:
 Numbers: Build up from addition to multiplication, exponentiation, and properties
like primes.
 Sets: Define operations and relations that lead into advanced set theory.
 Lists: Create ordered sequences with repetitive elements, useful for data structures
and algorithms.

Wumpus world :
To do….
Knowledge Engineering :
Knowledge engineering in first-order logic (FOL) is the process of constructing a knowledge
base for an AI system to make logical inferences in a specific domain. This involves
identifying relevant concepts and relations, defining a vocabulary, encoding rules, and
validating the system’s output. The goal is to develop a formal representation that allows
the system to understand, reason, and answer questions within that domain effectively.
Here’s a breakdown of the knowledge engineering process in FOL, outlined in seven main
steps:
1. Identify the Task:
o The first step is to define the scope of the knowledge base, such as the types of
questions it needs to answer and the necessary information it will need. For
example, in the Wumpus World, we might decide that the system only needs to
answer queries about the location of pits and the wumpus, or we might include
more complex tasks like action selection.
2. Assemble Relevant Knowledge:
o The knowledge engineer collects domain-specific knowledge, either from
personal expertise or by consulting domain experts. This process, called
knowledge acquisition, ensures a thorough understanding of the domain’s
essential facts and concepts. For example, in a Wumpus World-like game,
knowledge about game rules, adjacency definitions, and how agents interact
with elements like pits or smells would be collected.
3. Decide on Vocabulary:
o This step involves defining the vocabulary of predicates, functions, and
constants to be used in the knowledge base. These vocabulary choices are
central to the ontology of the knowledge base, which defines the fundamental
types of objects and relationships. For example, we might decide to represent
Pit as a unary predicate or define the agent’s orientation as a function rather
than a predicate. The chosen ontology determines how different elements will
interact logically.
4. Encode General Domain Knowledge:
o In this phase, the knowledge engineer writes axioms to capture the logical
relationships within the vocabulary terms. These axioms formalize the
knowledge and allow experts to verify that the logical structures are accurate.
For example, rules governing adjacency or movement in the Wumpus World
are encoded here. If any errors or missing terms are identified, the ontology
may need to be revised.
5. Encode Specific Problem Instances:
o This step involves entering specific facts relevant to a particular instance of the
problem. In a game setting, this might include the current layout of the
environment, such as the known locations of pits or the initial position of the
wumpus. For logical agents, problem-specific information can be derived from
sensor inputs; for other knowledge bases, it’s supplied as data.
6. Pose Queries to the Inference System:
o With the knowledge base set up, the next step is to perform logical inference
to answer specific queries based on the encoded knowledge. For instance, a
query in the Wumpus World might ask if a certain square is safe to move into.
The system uses the encoded axioms and specific facts to derive answers,
avoiding the need for explicit procedural coding.
7. Debug the Knowledge Base:
o In this final step, the knowledge engineer checks if the knowledge base
provides correct answers. Errors in the knowledge base could result from
missing or incorrect axioms. For example, if an agent cannot deduce that a
particular square is free of pits when it should, the cause could be a missing or
improperly defined adjacency rule. Debugging often involves tracing reasoning
steps to identify gaps in logic or incorrect assumptions.
Example: Applying Knowledge Engineering to an Electronic Circuit Domain
To understand this better, we can apply the seven steps to a new domain, such as electronic
circuits. Here, the process would start with identifying tasks (e.g., diagnosing faults),
gathering relevant circuit knowledge, defining vocabulary (e.g., Resistor, Voltage), encoding
general circuit principles, adding specifics for a given circuit design, querying the system for
diagnoses, and iteratively debugging until accurate inferences are achieved.
Step-by-Step Knowledge Engineering Process for Electronic Circuits
1. Identify the Task:
o The primary tasks are analyzing the circuit’s functionality and structure. For
instance, we might want to know if a circuit performs addition correctly or
determine the outputs based on given inputs. Questions about connectivity
(e.g., identifying which gates are connected to specific inputs) and feedback
loops are also relevant. Other factors like timing, cost, or power consumption
are not addressed here, as they would require more detailed knowledge.
2. Assemble Relevant Knowledge:
o Digital circuits consist of wires and gates (e.g., AND, OR, XOR, and NOT gates).
Signals travel along wires to the gates’ input terminals, and each gate generates
an output signal. For functionality analysis, it’s sufficient to understand
connections between terminals without specifying paths, physical properties,
or component costs.
3. Decide on Vocabulary:
o To represent components and connections, we define predicates, functions,
and constants:
 Gate and Circuit Identification: Gates are represented as objects (e.g.,
X1, X2), with predicates like Gate(X1) to label objects as gates, and
Type(X1) = XOR to specify a gate’s type (AND, OR, XOR, or NOT).
 Terminals: Each gate or circuit has input and output terminals,
represented with functions like In(1, X1) for the first input terminal of
gate X1 and Out for output terminals.
 Connections: The predicate Connected(Out(1, X1), In(1, X2)) specifies a
connection between terminals.
 Signal States: Signal values are represented by 1 (on) and 0 (off), with
Signal(t) giving the value at terminal t.
4. Encode General Knowledge of the Domain:
 The ontology is formalized using general rules (axioms) to capture behavior and
relationships:
1. Signal Consistency: Connected terminals have the same signal value.
2. Binary Signals: Signals at terminals are either 1 or 0.
3. Connectivity Symmetry: Connections are commutative, meaning if terminal t1
is connected to t2, then t2 is connected to t1.
4. Gate Types: Gates are one of four types: AND, OR, XOR, or NOT.
5. AND Gate Logic: The output is 0 if any input is 0.
6. OR Gate Logic: The output is 1 if any input is 1.
7. XOR Gate Logic: The output is 1 if inputs are different.
8. NOT Gate Logic: The output is the opposite of its input.
9. Gate Arity: AND, OR, XOR gates have two inputs, one output; NOT gates have
one input, one output.
10. Circuit Arity: Circuits have specific numbers of input/output terminals, as defined by Arity.
11. Distinct Entities: Gates, terminals, signals, types, and Nothing are distinct entities.
12. Gate and Circuit Relationship: Every gate is also a circuit.
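Axioms 5–8 above, which fix the behavior of each gate type, can be sketched as a single signal function (the function name and encoding are assumptions for illustration):

```python
def gate_output(gate_type, inputs):
    """Signal at a gate's output terminal, given its input signals (0 or 1)."""
    if gate_type == "AND":
        return 0 if 0 in inputs else 1              # output 0 if any input is 0
    if gate_type == "OR":
        return 1 if 1 in inputs else 0              # output 1 if any input is 1
    if gate_type == "XOR":
        return 1 if inputs[0] != inputs[1] else 0   # 1 if the inputs differ
    if gate_type == "NOT":
        return 1 - inputs[0]                        # opposite of the input
    raise ValueError("gates are one of AND, OR, XOR, NOT")

print(gate_output("XOR", [1, 0]))  # 1
print(gate_output("AND", [1, 0]))  # 0
print(gate_output("NOT", [1]))     # 0
```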
Part - II
Inference in First-Order Logic:
Propositional vs. First-Order Inference, Unification and Lifting, Forward Chaining,
Backward Chaining, Resolution.

Propositional vs FOL :
In formal logic, propositional and first-order logic are two major systems used for inference
and reasoning. Here’s how they differ.
1. Propositional Logic
Propositional logic operates with statements or propositions that are either true or false,
without diving into the internal structure of those statements. A propositional logic
knowledge base contains atomic statements and logical connectives (AND, OR, NOT, etc.).
However, it lacks the ability to express relationships between objects or the concept of
"some" or "all."
For example:
 Propositional statement: "If John is a king and is greedy, then he is evil." (This could
be written as a single proposition P in propositional logic.)
2. First-Order Logic (FOL)
First-order logic, on the other hand, introduces quantifiers and relations between objects.
It’s more expressive than propositional logic because it allows statements about properties
of objects and their relationships.
Quantifiers in FOL

 Universal Quantifier (∀): States that something is true for all objects. For instance, ∀x (King(x) ∧ Greedy(x) ⇒ Evil(x)) says, "For all x, if x is a king and x is greedy, then x is evil."
 Existential Quantifier (∃): States that there exists an object for which a property holds. For example, ∃x Crown(x) ∧ OnHead(x, John) states that "There exists an x such that x is a crown and is on John's head."


Inference Rules for Quantifiers
Inference in FOL involves rules for manipulating quantifiers:
 Universal Instantiation (UI): This rule allows us to replace a universally quantified variable with a specific object (a "ground term"). For instance, from ∀x King(x) ∧ Greedy(x) ⇒ Evil(x), we can infer King(John) ∧ Greedy(John) ⇒ Evil(John) by substituting x with John.
 Existential Instantiation (EI): This allows substituting an existential quantifier with a new, unique constant (often called a "Skolem constant"). For example, from ∃x Crown(x) ∧ OnHead(x, John), we can infer Crown(C1) ∧ OnHead(C1, John), where C1 is a new constant, meaning "some specific but unnamed object."
Transition from FOL to Propositional Logic
While FOL is more expressive, inference can sometimes be simplified by transforming FOL
statements into propositional logic, removing quantifiers to focus on propositional
inference methods. By instantiating variables in FOL with specific terms, we convert general
statements to specific cases, facilitating propositional reasoning.

This transformation process is sometimes bypassed with direct FOL inference techniques
that manipulate sentences with quantifiers directly, providing a shortcut to avoid the
complexity of converting to propositional logic.
The reduction to propositional inference is a technique for simplifying first-order logic (FOL)
inference by converting it into propositional logic. This method, known as
propositionalization, allows us to leverage propositional inference techniques in FOL
contexts. Here’s a breakdown of how this process works:
1. Converting Universal Quantifiers
The main concept of propositionalization is that a universally quantified sentence in FOL can
be transformed into a set of ground instances by substituting the variables with all possible
ground terms (constant symbols or specific objects) from the knowledge base. Once all
possible substitutions are made, the universally quantified statement can be replaced by
these ground instances. This approach turns the universally quantified sentences into
specific propositional sentences that no longer have quantifiers.
For example:
 Given ∀x (King(x) ∧ Greedy(x) ⇒ Evil(x)), we substitute x with all constants in the
knowledge base, such as John and Richard, yielding:
o King(John) ∧ Greedy(John) ⇒ Evil(John)
o King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
Now, we can discard the original quantified sentence and treat the new statements as
propositions (King(John), Greedy(John), etc.).
2. Propositionalizing the Knowledge Base
After performing these substitutions, the knowledge base effectively becomes a
propositional knowledge base. At this point, we can use propositional inference techniques
to make logical conclusions, such as deducing Evil(John) if King(John) and Greedy(John) are
both true.
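The grounding step can be sketched as enumerating all substitutions of constants for the rule’s variables (the string-based representation is a naive illustration choice; real systems use structured terms):

```python
from itertools import product

constants = ["John", "Richard"]
rule_text = "King(x) ∧ Greedy(x) ⇒ Evil(x)"
variables = ["x"]

def ground(rule_text, variables, constants):
    """Produce one ground instance per assignment of constants to variables."""
    instances = []
    for combo in product(constants, repeat=len(variables)):
        text = rule_text
        for var, const in zip(variables, combo):
            text = text.replace(var, const)  # naive textual substitution
        instances.append(text)
    return instances

for instance in ground(rule_text, variables, constants):
    print(instance)
# King(John) ∧ Greedy(John) ⇒ Evil(John)
# King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
```

With k constants and a rule with n variables, this produces k^n ground instances, which is exactly why propositionalization blows up as the knowledge base grows.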
3. Handling Existential Quantifiers
Existential quantifiers (e.g., ∃x P(x)) can also be propositionalized by introducing a unique
constant for each existential statement, known as a Skolem constant. For instance, ∃x
Crown(x) ∧ OnHead(x, John) could be converted to Crown(C1) ∧ OnHead(C1, John), with C1
as a specific instance satisfying the existential condition.
4. Challenges with Infinite Ground-Term Substitutions
The propositionalization process becomes complicated when function symbols (like Father)
are present, as they can create infinitely many ground terms (e.g., Father(Father(John)),
Father(Father(Father(John))), etc.). This infinite set makes it difficult for propositional
algorithms to manage.
5. Herbrand’s Theorem
Jacques Herbrand’s theorem helps address the issue of infinite ground terms. The theorem
states that if a sentence is entailed by the first-order knowledge base, then there is a proof
using only a finite subset of the propositionalized knowledge base. In practice, we start by
generating ground terms up to a certain depth (no function nesting, then one layer of
nesting, and so on) until we find a propositional proof.
6. Semi-Decidability in First-Order Logic
While propositionalization is complete (i.e., we can prove any entailed sentence), first-order
logic is only semi-decidable. This means that while we can always prove statements that are
entailed, there is no guaranteed method to determine if a statement is not entailed. If a
proof doesn’t exist, the algorithm may run indefinitely, similar to the halting problem in
computation, where some problems have no definite end.
Unification and Lifting :
Unification
Unification is the process of finding a substitution that makes two logical expressions
identical. This substitution is essential for aligning premises with known facts in the
knowledge base, enabling logical inferences.
For example, consider a query Evil(x) with a knowledge base that includes:
 ∀x (King(x) ∧ Greedy(x) ⇒ Evil(x))
 King(John)
 Greedy(John)
To determine if Evil(John) is true, unification works by finding substitutions that make the premise of the implication (King(x) ∧ Greedy(x)) match facts in the knowledge base. Here, substituting {x/John} achieves this, allowing us to infer Evil(John).


Unification operates through an algorithm that tries to match variables in one expression
with constants or variables in another. Here are some cases of unification outcomes:
 UNIFY(Knows(John, x), Knows(John, Jane)) results in {x/Jane}.
 UNIFY(Knows(John, x), Knows(y, Bill)) yields {x/Bill, y/John}.
 If a variable clash occurs, like UNIFY(Knows(John, x), Knows(x, Elizabeth)), unification
fails because x cannot represent both John and Elizabeth.
Standardizing Apart: To avoid such clashes, variables in separate expressions are renamed
(standardized apart), so Knows(x, Elizabeth) could be rewritten as Knows(x17, Elizabeth),
allowing successful unification as {x/Elizabeth, x17/John}.
Most General Unifier (MGU): Often, multiple unifiers are possible, but the MGU is the most
flexible solution, imposing the fewest restrictions. For example, UNIFY(Knows(John, x),
Knows(y, z)) might yield {y/John, x/z}, which is more general than {y/John, x/John, z/John}.
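The unification algorithm itself can be sketched in a few lines. In this version (an illustrative simplification: the occur check is omitted, variables are lowercase strings, constants are capitalized strings, and compound terms like Knows(John, x) are tuples):

```python
def is_var(t):
    return isinstance(t, str) and t[0].islower()

def unify(a, b, subst=None):
    """Return a substitution dict making a and b identical, or False."""
    if subst is None:
        subst = {}
    if subst is False:
        return False
    if a == b:
        return subst
    if is_var(a):
        return unify_var(a, b, subst)
    if is_var(b):
        return unify_var(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is False:
                return False
        return subst
    return False  # mismatched constants or structures

def unify_var(var, term, subst):
    if var in subst:
        return unify(subst[var], term, subst)
    return {**subst, var: term}

# UNIFY(Knows(John, x), Knows(John, Jane)) → {x/Jane}
print(unify(("Knows", "John", "x"), ("Knows", "John", "Jane")))
# UNIFY(Knows(John, x), Knows(y, Bill)) → {x/Bill, y/John}
print(unify(("Knows", "John", "x"), ("Knows", "y", "Bill")))
# UNIFY(Knows(John, x), Knows(x, Elizabeth)) → fails without standardizing apart
print(unify(("Knows", "John", "x"), ("Knows", "x", "Elizabeth")))
```

Renaming the second x to x17 before the third call (standardizing apart) would make it succeed, as described above.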
Lifting: Generalized Modus Ponens
Lifting extends propositional rules like Modus Ponens to FOL by using variables and
unification, a concept known as Generalized Modus Ponens. This rule applies Modus
Ponens on statements with variables by finding unifying substitutions, allowing FOL
inferences without propositionalization.
For instance:
 Given the rule ∀x (King(x) ∧ Greedy(x) ⇒ Evil(x)), and facts King(John) and Greedy(y),
lifting allows us to apply the substitution {x/John, y/John} to infer Evil(John).
This lifted approach simplifies inferences by focusing only on the necessary substitutions to
answer the query, avoiding the inefficiency of propositionalizing the entire knowledge base
Structure of Generalized Modus Ponens
GMP works for atomic sentences pi, pi′, and q where:
 There exists a substitution θ that, when applied, makes each premise pi′ identical to a corresponding sentence pi in the knowledge base.
 Given this, we can apply θ to the conclusion q to infer a new fact.
In other words, if:
1. p1′, p2′, ..., pn′ (premises with substitution applied) match sentences p1, p2, ..., pn in the KB, and
2. (p1 ∧ p2 ∧ ... ∧ pn ⇒ q) is in the KB (a rule with a single positive literal, also known as a definite clause),
then we can conclude Subst(θ, q), which is q with the substitution θ applied.
Example Walkthrough
Let’s look at an example to see GMP in action:
1. Given premises and substitution:
o Premises in the KB:
 p1′=King(John)
 p2′=Greedy(John)
o Rule in KB: King(x)∧Greedy(x)⇒Evil(x)
o Goal: To derive Evil(John) using GMP.
2. Identify the substitution:
o We see that p1=King(x) matches p1′=King(John) if we substitute x with John.
o Similarly, p2=Greedy(x) matches p2′=Greedy(John) with the same
substitution.
o The substitution here is θ={x/John}
3. Apply substitution to the conclusion:
o Applying θ to the conclusion q=Evil(x) results in Evil(John)
4. Inference:
o Therefore, by GMP, we can infer Evil(John)
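The walkthrough above can be sketched for the simple one-variable case: try each object in the KB as a binding θ = {x/obj}, and derive the conclusion whenever every grounded premise is a known fact (the representation below is an assumption for illustration):

```python
facts = {("King", "John"), ("Greedy", "John"), ("King", "Richard")}

# Rule: King(x) ∧ Greedy(x) ⇒ Evil(x)
premises = [("King", "x"), ("Greedy", "x")]
conclusion = ("Evil", "x")

def apply_gmp(facts, premises, conclusion):
    """Derive conclusions whose grounded premises all appear in the facts."""
    derived = set()
    objects = {fact[1] for fact in facts}
    for obj in objects:
        theta = {"x": obj}                                   # candidate substitution
        grounded = [(pred, theta[var]) for pred, var in premises]
        if all(g in facts for g in grounded):                # all premises matched?
            derived.add((conclusion[0], theta[conclusion[1]]))
    return derived

print(apply_gmp(facts, premises, conclusion))  # {('Evil', 'John')}
```

Richard yields nothing because Greedy(Richard) is not in the KB, exactly as GMP requires every premise to match.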
Forward chaining :
