AI | Rules for First Order Inference
Last Updated: 08 Jan, 2024
The inference that John is evil—that is, that \{x / \text{John}\} answers the query \text{Evil}(x) —works as follows: using the rule that greedy kings are evil, find some x that is both a king and greedy, and infer that this x is evil. More generally, if there is some substitution that makes each of the conjuncts in the premise of the implication identical to sentences already in the knowledge base, then we can assert the conclusion of the implication after applying that substitution. The substitution \theta = \{x / \text{John}\} achieves exactly that in this situation.
Inference can be made to do even more work. Suppose that instead of knowing that John is greedy, we know that everyone is greedy: \forall y \; \text{Greedy}(y)
Because we already know that John is a king (given) and that John is greedy (because everyone is greedy), we would like to be able to conclude \operatorname{Evil}(\text{John}). To make this work, we need to find a substitution both for the variables in the implication sentence and for the variables in the knowledge-base sentences. In this case, applying the substitution \{x / \text{John}, y / \text{John}\} makes the implication premises \text{King}(x) and \text{Greedy}(x) identical to the knowledge-base sentences \operatorname{King}(\text{John}) and \text{Greedy}(y). As a result, we can infer the conclusion of the implication.
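The matching step described above can be sketched in code. The following is a simplified, illustrative Python sketch (the tuple representation of atoms and the `unify` helper are assumptions for this example, not a standard library API). It matches an implication premise against a ground fact, treating lowercase arguments as variables; the variable-to-variable case, such as matching \text{Greedy}(x) against \forall y \; \text{Greedy}(y), requires full unification, which is covered later.

```python
def unify(pattern, fact, theta=None):
    """Match an atomic sentence like ("King", "x") against a ground
    fact like ("King", "John"). Lowercase arguments are variables.
    Returns a substitution dict, or None if matching fails."""
    if theta is None:
        theta = {}
    if pattern[0] != fact[0] or len(pattern) != len(fact):
        return None                      # different predicate or arity
    for p_arg, f_arg in zip(pattern[1:], fact[1:]):
        if p_arg.islower():              # variable in the pattern
            if p_arg in theta and theta[p_arg] != f_arg:
                return None              # conflicting binding for the variable
            theta[p_arg] = f_arg
        elif p_arg != f_arg:
            return None                  # constant mismatch
    return theta

# King(x) matches King(John) under {x: John}
print(unify(("King", "x"), ("King", "John")))      # {'x': 'John'}
# Greedy(y) matches Greedy(John) under {y: John}
print(unify(("Greedy", "y"), ("Greedy", "John")))  # {'y': 'John'}
```

Collecting the bindings from both premises yields exactly the substitution \{x / \text{John}, y / \text{John}\} from the text.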
We call this inference process Generalized Modus Ponens because it can be summed up in a single inference rule:
For atomic sentences p_{i}, p_{i}^{\prime}, \text{ and } q, where there is a substitution \theta such that \operatorname{SUBST}\left(\theta, p_{i}^{\prime}\right)=\operatorname{SUBST}\left(\theta, p_{i}\right) for all i :
\frac{p_{1}^{\prime}, \quad p_{2}^{\prime}, \ldots, \quad p_{n}^{\prime}, \quad\left(p_{1} \wedge p_{2} \wedge \ldots \wedge p_{n} \Rightarrow q\right)}{\operatorname{SUBST}(\theta, q)}
The n + 1 premises of this rule are the n atomic sentences p_{i}^{\prime} and the one implication. The conclusion is the result of applying the substitution \theta to the consequent q. As an illustration, consider the following:
\begin{array}{ll} p_{1}^{\prime} \text { is } \operatorname{King}(\text{John}) & p_{1} \text { is } \operatorname{King}(x) \\ p_{2}^{\prime} \text { is } \operatorname{Greedy}(y) & p_{2} \text { is } \operatorname{Greedy}(x) \\ \theta \text { is }\{x / \text{John}, y / \text{John}\} & q \text { is } \operatorname{Evil}(x) \\ \operatorname{SUBST}(\theta, q) \text { is } \operatorname{Evil}(\text{John}) & \end{array}
It is straightforward to show that Generalized Modus Ponens is a sound inference rule. First, observe that, for any sentence p (whose variables are assumed to be universally quantified) and for any substitution \theta, Universal Instantiation gives
p \models \operatorname{SUBST}(\theta, p)
It holds in particular for a \theta that satisfies the conditions of the Generalized Modus Ponens rule. Thus, from p_{1}^{\prime}, \ldots, p_{n}^{\prime} we can infer
\operatorname{SUBST}\left(\theta, p_{1}^{\prime}\right) \wedge \ldots \wedge \operatorname{SUBST}\left(\theta, p_{n}^{\prime}\right)
and from the implication p_{1} \wedge \ldots \wedge p_{n} \Rightarrow q we can infer
\operatorname{SUBST}\left(\theta, p_{1}\right) \wedge \ldots \wedge \operatorname{SUBST}\left(\theta, p_{n}\right) \Rightarrow \operatorname{SUBST}(\theta, q)
The condition of Generalized Modus Ponens requires that \operatorname{SUBST}\left(\theta, p_{i}^{\prime}\right)=\operatorname{SUBST}\left(\theta, p_{i}\right) for all i, so the first of these two sentences matches the premise of the second exactly. Hence, \operatorname{SUBST}(\theta, q) follows by Modus Ponens.
Generalized Modus Ponens is a lifted version of Modus Ponens: it raises Modus Ponens from ground (variable-free) propositional logic to first-order logic. The rest of this chapter develops lifted versions of the forward chaining, backward chaining, and resolution algorithms introduced in Chapter 7. Lifted inference rules have an advantage over propositionalization: they make only those substitutions that are required to allow particular inferences to proceed.
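The efficiency claim can be made concrete with a small illustration (the knowledge base and variable names here are hypothetical). Propositionalization must instantiate a rule with k variables once for every combination of the knowledge base's constants, giving |\text{constants}|^{k} ground rules, whereas a lifted rule creates just the one binding the query needs:

```python
from itertools import product

# Hypothetical knowledge base with two constants and a rule over x and y
constants = ["John", "Richard"]
rule_vars = ["x", "y"]

# Propositionalization: enumerate every ground instantiation of the rule,
# whether or not it is ever used in a proof.
instances = [dict(zip(rule_vars, combo))
             for combo in product(constants, repeat=len(rule_vars))]
print(len(instances))  # 4 ground rules from one lifted rule

# Lifted inference: a single substitution, created on demand by matching
# the rule's premises against the query and the knowledge base.
theta = {"x": "John", "y": "John"}
```

With more constants or more variables per rule, the gap grows exponentially, which is why lifted algorithms scale where naive propositionalization does not.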