18CS753 AI Module 3
MODULE III
In this chapter and the next, we explore and discuss techniques for solving problems with incomplete
and uncertain models.
What is Reasoning?
✓ Reasoning is the act of deriving a conclusion from certain premises using a given
methodology.
✓ Reasoning is a process of thinking, logically arguing and drawing inferences.
➢ UNCERTAINTY IN REASONING
✓ The world is an uncertain place; knowledge is often imperfect, which causes uncertainty.
Therefore, reasoning must be able to operate under uncertainty.
✓ Uncertainty is a major problem in knowledge elicitation, especially when the expert's
knowledge must be encoded in rules.
✓ Uncertainty may lead to bad treatment in medicine or loss of money in business.
✓ The techniques that can be used to reason effectively even when a complete, consistent, and
constant model of the world is not available are discussed here. One of the examples, which
we call the ABC Murder story, clearly illustrates many of the main issues these techniques
must deal with.
✓ Let Abbott, Babbitt, and Cabot be suspects in a murder case. Abbott has an alibi (explanation/
defense), in the register of a respectable hotel in Albany. Babbitt also has an alibi, for his
brother-in-law testified that Babbitt was visiting him in Brooklyn at the time. Cabot pleads
an alibi too, claiming to have been attending a ski show at the time, but we have only his word
for that. So we believe that Abbott did not commit the crime, that Babbitt did not, and yet that
one of the three did. As new evidence arrives, each alibi may be called into question and our
beliefs must be retracted and revised; this is exactly the kind of belief revision that
nonmonotonic reasoning supports.
In order to do this, we must address several key issues, including the following:
1. How can the knowledge base be extended to allow inferences to be made on the basis of lack of
knowledge as well as on the presence of it?
2. How can the knowledge base be updated properly when a new fact is added to the system (or
when an old one is removed)?
The usual solution to this problem is to keep track of proofs, which are often called justifications.
3. How can knowledge be used to help resolve conflicts when there are several inconsistent
nonmonotonic inferences that could be drawn?
To do this, we require additional methods for resolving such conflicts in ways that are most appropriate
for the particular problem that is being solved.
• Default Reasoning
✓ This is a very common form of non-monotonic reasoning, in which conclusions are drawn
based on what is most likely to be true. There are two logic-based approaches to default
reasoning: non-monotonic logic and default logic.
1. Nonmonotonic Logic
- Provides a basis for default reasoning.
- It has already been defined. It says, "The truth of a proposition may change when new
information (axioms) is added, and the logic may be built to allow the statement to be
retracted."
- Non-monotonic logic is a predicate logic with one extension called the modal operator M,
which means "is consistent with everything we know". The purpose of M is to allow
statements of the form "M P", asserting that P is consistent with everything that is
believed. Consider, for example:
∀x, y : Related(x, y) ∧ M GetAlong(x, y) → WillDefend(x, y)
This should be read as "For all x and y, if x and y are related and if the fact that x gets along
with y is consistent with everything else that is believed, then conclude that x will
defend y".
2. Default Logic
- Default logic introduces a new inference rule: A : B
C
where A is known as the prerequisite, B as the justification, and C as the consequent.
Read the above inference rule as: " if A is provable and if it is consistent to assume B, then
conclude C ". The rule says that given the prerequisite, the consequent can be inferred,
provided it is consistent with the rest of the data.
- Example : Rule that "birds typically fly" would be represented as
bird(x) : flies(x)
flies (x)
which says " If x is a bird and the claim that x flies is consistent with what we know,
then infer that x flies".
Since all we know about Tweety is that Tweety is a bird, we infer that
Tweety flies.
- These inferences are used as the basis for computing the possible sets of extensions to the
knowledge base.
- Note that default rules are rules of inference; they are not wffs.
- Applying Default Rules :
While applying default rules, it is necessary to check their justifications for
consistency, not only with initial data, but also with the consequents of any other default
rules that may be applied. The application of one rule may thus block the application of
another. To solve this problem, the concept of default theory was extended.
- The idea behind non-monotonic reasoning is to reason with first-order logic, and if an
inference cannot be obtained, to use the set of default rules available within the first-order
formalism to draw new conclusions (see the sketch below).
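To make the consistency check concrete, here is a minimal Python sketch of our own (not from the text): it applies ground default rules of the form prerequisite : justification / consequent, using a crude string-based negation test as the consistency check, so that asserting the negation of a justification blocks the rule.

def apply_defaults(facts, rules):
    # facts: a set of ground literals; rules: (prerequisite, justification, consequent)
    extension = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, just, cons in rules:
            blocked = ('not ' + just) in extension  # is the justification inconsistent?
            if pre in extension and not blocked and cons not in extension:
                extension.add(cons)
                changed = True
    return extension

# "Birds typically fly": bird(x) : flies(x) / flies(x), instantiated for Tweety.
rules = [('bird(Tweety)', 'flies(Tweety)', 'flies(Tweety)')]
print(apply_defaults({'bird(Tweety)'}, rules))                       # default applies
print(apply_defaults({'bird(Tweety)', 'not flies(Tweety)'}, rules))  # default blocked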
3. Abduction
- Abduction means systematic guessing: "infer" an assumption from a conclusion.
- Definition: "Given two Wffs: A→B and B, for any expressions A and B, if it is consistent
to assume A, do so".
- It derives conclusions by applying implications in reverse.
- For example, the following formula:
∀x: RainedOn(x) → wet(x)
could be used "backwards" with a specific x:
if wet(Tree) then RainedOn(Tree)
This, however, would not be logically justified. We could say:
wet(Tree) ∧ CONSISTENT(rainedOn(Tree)) → rainedOn(Tree)
We could also attach probabilities, for example like this:
wet(Tree) → rainedOn(Tree) || 70%
wet(Tree) → morningDewOn(Tree) || 20%
wet(Tree) → sprinkledOn(Tree) || 10%
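A minimal Python sketch of this weighted form of abduction (our illustration; the idea of simply ranking candidate causes by their attached probabilities is an assumption of the sketch, not part of the text):

# Backward rules map an observed effect to candidate causes with probabilities.
backward_rules = {
    'wet(Tree)': [('rainedOn(Tree)', 0.70),
                  ('morningDewOn(Tree)', 0.20),
                  ('sprinkledOn(Tree)', 0.10)],
}

def abduce(observation):
    # Return candidate explanations for the observation, most plausible first.
    candidates = backward_rules.get(observation, [])
    return sorted(candidates, key=lambda c: c[1], reverse=True)

print(abduce('wet(Tree)'))  # rainedOn(Tree) is the best guess at 0.70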
4. Inheritance
- Consider the baseball knowledge base described in Chapter 4.
- The concept is: "An object inherits attribute values from all the classes of which it is a
member, unless doing so leads to a contradiction, in which case a value from a more
restricted class has precedence over a value from a broader class."
- These logical ideas provide a basis for describing this idea more formally and can write
its inheritable knowledge as rules in Default Logic.
- We can write a rule to account for the inheritance of a default value for the height of a
baseball player as:
Baseball-player(x): height (x, 6-1)
height (x, 6-1)
- Suppose we assert Pitcher(Three-Finger-Brown). This lets us conclude that Three-Finger-Brown
is a baseball player, and the rule above then allows us to conclude that his height is 6-1.
- If, on the other hand, we had asserted a conflicting value for Three-Finger-Brown's height,
and had an axiom saying that a person has only one height, then the justification of the
default rule would be inconsistent and the rule would not fire.
- A clearer approach is to say something like, “Adult males typically have a height of 5-10
unless they are abnormal in some way,” and encode the exception explicitly as part of the
justification of the default rule.
• Minimalist Reasoning
✓ The idea behind using minimal models as a basis for nonmonotonic reasoning about the world
is the following: “There are many fewer true statements than false ones. If something is true
and relevant it makes sense to assume that it has been entered into the knowledge base.
Therefore, assume that the only true statements are those that necessarily must be true in
order to maintain the consistency of the knowledge base.”
1. Closed World Assumption (CWA)
- The CWA says that the only objects that satisfy any predicate P are those that must. It is
reasonable when the knowledge base is known to be complete, but it can produce an
inconsistent extended knowledge base. Consider a knowledge base consisting of just:
A(Joe) ∨ B(Joe)
The CWA allows us to conclude both ¬A(Joe) and ¬B(Joe), since neither A nor B
must necessarily be true of Joe. But these conclusions contradict the disjunction, so the
resulting extended knowledge base is inconsistent.
- The problem is that we have assigned a special status to positive instances of
predicates, as opposed to negative ones. Specifically, the CWA forces completion of the
knowledge base by adding the negative assertion ¬P whenever it is consistent to do so.
But the assignment of a real world property to some predicate P and its complement to
the negation of P may be arbitrary. For example, suppose we define a predicate single
and create the following knowledge base:
Single (John)
Single (Mary)
Then, if we ask about Jane, the CWA will yield the answer ¬Single(Jane).
But now suppose we had chosen instead to use the predicate Married rather than
Single. Then the corresponding knowledge base would be
Married (John)
Married (Mary)
If we now ask about Jane, the CWA will yield the result ¬Married(Jane).
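A small Python sketch of this arbitrariness (ours; the helper name is illustrative). Under the CWA, any ground atom that is not provable from the knowledge base is assumed false, so the answer about Jane flips with the choice of predicate:

def cwa_holds(kb, atom):
    # Under the CWA, an atom not provable from the KB is assumed false.
    return atom in kb

kb_single = {'Single(John)', 'Single(Mary)'}
kb_married = {'Married(John)', 'Married(Mary)'}

print(cwa_holds(kb_single, 'Single(Jane)'))    # False -> conclude ¬Single(Jane)
print(cwa_holds(kb_married, 'Married(Jane)'))  # False -> conclude ¬Married(Jane)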
2. Circumscription
- Circumscription is a Nonmonotonic logic to formalize the common sense assumption.
Circumscription is a formalized rule of conjecture (guess) that can be used along with
the rules of inference of first order logic.
- Circumscription involves formulating rules of thumb with "abnormality" predicates
and then restricting the extension of these predicates, circumscribing them, so that they
apply only to those things to which they are currently known to apply.
➢ IMPLEMENTATION ISSUES
✓ Implementations typically divide the reasoning process into two parts:
- one, a problem solver that uses whatever mechanism it happens to have to draw
conclusions as necessary, and
- two, a truth maintenance system (TMS) whose job is to maintain the consistency of the
knowledge base.
✓ Search controls used are:
- Depth-first search
- Breadth-first search
➢ AUGMENTING A PROBLEM-SOLVER
✓ Suppose the knowledge base uses the usual PROLOG-style control structure, in which rules are
matched top to bottom, left to right. Then if we ask the question ?Suspect(x), the program will
first try Abbott and return Abbott as its answer. If we had also included the facts
RegisteredHotel(Abbott, Albany)
FarAway(Albany)
then the program would have failed to conclude that Abbott was a suspect, and it would
instead have located Babbitt and then Cabot.
• Dependency-Directed Backtracking
✓ Depth-first approach to nonmonotonic reasoning: We need to know a fact F, which can be
derived by making some assumption A, which seems believable. So we make assumption A,
derive F, and then derive some additional facts G and H from F. We later derive some other
facts M and N, but they are completely independent of A and F. Later, a new fact comes in that
invalidates A. We need to withdraw our proof of F, and also our proofs of G and H since they
depended on F. But what about M and N? They didn’t depend on F, so there is no logical need
to invalidate them. But if we use a conventional backtracking scheme, we have to back up
past conclusions in the order in which we derived them, so we have to backup past M and N,
thus undoing them, in order to get back to F,G, H and A. To get around this problem, we need
a slightly different notion of backtracking, one that is based on logical dependencies rather than
the chronological order in which decisions were made. We call this new method dependency-
directed backtracking.
✓ As an example, suppose we want to build a program that solves a fairly simple
problem: finding a time at which three busy people can all attend a meeting. One way to solve
such a problem is first to make an assumption that the meeting will be held on some
particular day, say Wednesday, and add that assumption to the database. Then proceed to find
a time, checking along the way for any inconsistencies in people’s schedules. If a conflict
arises, the statement representing the assumption must be discarded and replaced by another,
hopefully non-contradictory, one.
✓ This kind of solution can be handled by a straightforward tree search with chronological
backtracking. All assumptions, as well as the inferences drawn from them, are recorded at
the search node that created them. When a node is determined to represent a contradiction,
simply backtrack to the next node from which there remain unexplored paths. The
assumptions and their inferences will disappear automatically. The drawback to this approach
is illustrated in Figure below, which shows part of the search tree of a program that is trying to
schedule a meeting. To do so, the program must solve a constraint satisfaction problem to
find a day and time at which none of the participants is busy and at which there is a sufficiently
large room available.
Figure: Nondependency-Directed Backtracking
✓ In order to solve the problem, the system must try to satisfy one constraint at a time. Initially,
there is little reason to choose one alternative over another, so it decides to schedule the meeting
on Wednesday. That creates a new constraint that must be met by the rest of the solution. The
assumption that the meeting will be held on Wednesday is stored at the node it generated. Next
the program tries to select a time at which all participants are available. Among them, they have
regularly scheduled daily meetings at all times except 2:00. So 2:00 is chosen as the meeting
time. But it would not have mattered which day was chosen. Then the program discovers that on
Wednesday there are no rooms available. So it backtracks past the assumption that the day would
be Wednesday and tries another day, Tuesday. Now it must duplicate the chain of reasoning that
led it to choose 2:00 as the time, because that reasoning was lost when it backtracked to redo the
choice of day. This occurred even though that reasoning did not depend in any way on the
assumption that the day would be Wednesday. By withdrawing statements based on the order in
which they were generated by the search process rather than on the basis of responsibility for
inconsistency, we may waste a great deal of effort.
✓ If we want to use dependency-directed backtracking instead, so that we do not waste this effort,
then we need to do the following things:
- Associate with each node one or more justifications. Each justification corresponds to a
derivation process that led to the node. Each justification must contain a list of all the
nodes on which its derivation depended.
- Provide a mechanism that, when given a contradiction node and its justification, computes
the “no-good” set of assumptions that underlie the justification. The no-good set is defined
to be the minimal set of assumptions such that if you remove any element from the set, the
justification will no longer be valid and the inconsistent node will no longer be believed.
- Provide a mechanism for considering a no-good set and choosing an assumption to
retract.
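A minimal Python sketch of this bookkeeping (our illustration of the F/G/H/M/N scenario above; the dictionary-based representation is an assumption of the sketch): each derived fact records the set of assumptions its derivation depended on, so retracting an assumption withdraws only the conclusions that actually depend on it.

# Each fact maps to the set of assumptions its derivation depended on.
support = {
    'F': {'A'},              # F was derived from assumption A
    'G': {'A'}, 'H': {'A'},  # G and H were derived from F, hence depend on A
    'M': set(), 'N': set(),  # M and N are independent of A and F
}

def retract(assumption, support):
    # Keep only the facts whose derivations do not depend on the assumption.
    return {fact for fact, deps in support.items() if assumption not in deps}

print(retract('A', support))  # {'M', 'N'}: F, G, H withdrawn; M and N survive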
• Justification-Based Truth Maintenance Systems (JTMS)
✓ In a JTMS, each assertion is recorded together with the justifications that support belief in it.
Returning to the ABC Murder story, Abbott is initially the primary suspect: he is a beneficiary
of the victim and we do not currently believe his alibi. These are the assertions
we now believe, although we may change our beliefs later. We can represent these assertions
in shorthand as follows:
- Suspect Abbott (Abbott is the primary murder suspect.)
- Beneficiary Abbott (Abbott is a beneficiary of the victim.)
- Alibi Abbott (Abbott was at an Albany hotel at the time.)
In the notation of Default Logic, we can state the rule that produced it as
Beneficiary(x) : ¬Alibi(x)
Suspect(x)
Figure: A Justification
Figure above shows how these three facts would be represented in a dependency network,
which can be created as a result of applying the first rule of Figure: Backward Rules using
UNLESS. The assertion Suspect Abbott has an associated TMS justification. Each
justification consists of two parts: an IN-list and an OUT-list. In the figure, the assertions on
the IN-list are connected to the justification by + links, those on the OUT-list by - links. The
justification is connected by an arrow to the assertion that it supports. In the justification
shown, there is exactly one assertion in each list. Beneficiary Abbott is in the IN-list and Alibi
Abbott is in the OUT-list. Such a justification says that Abbott should be a suspect just when
it is believed that he is a beneficiary and it is not believed that he has an alibi.
More generally, assertions (usually called nodes) in a TMS dependency network are believed
when they have a valid justification. A justification is valid if every assertion in its IN-list is
believed and none of the assertions in its OUT-list is believed.
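This validity rule can be sketched in a few lines of Python (a sketch of ours, not a full TMS implementation; premises are modeled as justifications with empty IN- and OUT-lists, and the seen set crudely rejects circular support):

# A node is IN if it has at least one valid justification; a justification is
# valid when every IN-list node is believed and no OUT-list node is.
justifications = {
    'Suspect Abbott': [({'Beneficiary Abbott'}, {'Alibi Abbott'})],  # (IN-list, OUT-list)
    'Beneficiary Abbott': [(set(), set())],                          # a premise
}

def is_in(node, justs, seen=frozenset()):
    if node in seen:  # an unbroken chain of support back to itself is not well-founded
        return False
    for in_list, out_list in justs.get(node, []):
        if all(is_in(n, justs, seen | {node}) for n in in_list) and \
           not any(is_in(n, justs, seen | {node}) for n in out_list):
            return True
    return False

print(is_in('Suspect Abbott', justifications))     # True: no alibi is believed yet
justifications['Alibi Abbott'] = [(set(), set())]  # now the alibi is believed
print(is_in('Suspect Abbott', justifications))     # False: the justification is invalid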
Abbott was the primary suspect, but looking at the hotel register provided a valid reason to believe
Abbott’s alibi. Figure below shows the effect of adding such a justification to the network.
Now Suspect Abbott and Register Forged are OUT, and Alibi Abbott, Registered, and Far Away
are IN.
Figure: Changed Labeling
Babbitt will have a similar justification, based on the lack of belief that his brother-in-law
lied, as shown in Figure below. Now Suspect Babbitt and Lies B-I-L (Brother-In-Law) are OUT,
and Alibi Babbitt and Says So B-I-L are IN.
Figure: Babbitt’s Justification
Figure below illustrates the fact that the only support for the alibi of attending the ski
show is that Cabot is telling the truth about being there. The only support for his telling
the truth would be if we knew he was at the ski show. But this is a circular argument.
The task of a TMS is to disallow such arguments. In particular, if the support for a node
only depends on an unbroken chain of positive links (IN-list links) leading back to itself
then that node must be labeled OUT if the labeling is to be well-founded.
Figure: Cabot’s justification
Now we learn that Cabot was seen on television attending the ski tournament. Adding this
to the dependency network first illustrates the fact that nodes can have more than one
justification as shown in Figure below.
Suppose, in particular, that we choose to believe that Babbitt’s brother-in-law lied. What
should be the justification for that belief? Figure below shows a complete abductive
justification for the belief that Babbitt’s brother-in-law lied.
Figure: A Complete Abductive Justification
At this point, we have described the key reasoning operations that are performed by a JTMS:
- Consistent labeling
- Contradiction resolution
Also described a set of important reasoning operations that a JTMS does not perform,
including:
- Applying rules to derive conclusions
- Creating justifications for the results of applying rules
- Choosing among alternative ways of resolving a contradiction
- Detecting contradictions
All of these operations must be performed by the problem-solving program that is using the JTMS.
• Assumption-Based Truth Maintenance Systems (ATMS)
✓ In an ATMS, each node is labeled not IN or OUT but with the sets of assumptions (contexts)
under which it holds. We can think of the set of contexts defined by a set of assumptions as
forming a lattice, as shown in the figure below for a simple example with four assumptions.
Lines going upward indicate a subset relationship.
✓ The first thing this lattice does for us is to illustrate a simple mechanism by which
contradictions (inconsistent contexts) can be propagated so that large parts of the space of 2^n
contexts can be eliminated. Suppose that the context labeled {A2, A3} is asserted to be
inconsistent. Then all contexts that include it (i.e., those that are above it) must also be
inconsistent.
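This propagation is easy to sketch in Python (our illustration; representing contexts as frozensets is an assumption of the sketch):

from itertools import combinations

# All 2**4 = 16 contexts over four assumptions form the lattice.
assumptions = ['A1', 'A2', 'A3', 'A4']
contexts = [frozenset(c) for r in range(len(assumptions) + 1)
            for c in combinations(assumptions, r)]

# Marking {A2, A3} nogood eliminates every context that includes it.
nogood = frozenset({'A2', 'A3'})
consistent = [c for c in contexts if not nogood <= c]

print(len(contexts), '->', len(consistent))  # 16 -> 12: four contexts eliminated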
✓ As an example of how an ATMS-based problem-solver works, let’s return to the ABC Murder
story. Again, our goal is to find a primary suspect. We need the following assumptions:
▪ A1. Hotel register was forged.
▪ A2. Hotel register was not forged.
▪ A3. Babbitt’s brother-in-law lied.
▪ A4. Babbitt’s brother-in-law did not lie.
▪ A5. Cabot lied.
▪ A6. Cabot did not lie.
▪ A7. Abbott, Babbitt, and Cabot are the only possible suspects.
▪ A8. Abbott, Babbitt, and Cabot are not the only suspects.
✓ The problem-solver could then generate the nodes and associated justifications shown in the
first two columns of Figure below. In the figure, the justification for a node that corresponds to
a decision to make assumption N is shown as {N}. Justifications for nodes that correspond to the
result of applying reasoning rules are shown as the rule involved. Then the ATMS can assign
labels to the nodes as shown in the second two columns. The first shows the label that would be
generated for each justification taken by itself. The second shows the label (possibly containing
multiple contexts) that is actually assigned to the node given all its current justifications. These
columns are identical in simple cases, but they may differ in more complex situations as we see
for nodes 12, 13, and 14 of our example.
- Nodes may have several justifications if there are several possible reasons for believing
them. This is the case for nodes 12, 13, and 14.
- Recall that when we were using a JTMS, a node was labeled IN if it had at least one valid
justification. Using an ATMS, a node will end up being labeled with a consistent context if
it has at least one justification that can occur in a consistent context.
- The label assignment process is sometimes complicated. We describe it in more detail
below.
Suppose that the problem-solving program first created nodes 1 through 14, representing the various
dependencies among them without committing to which of them it currently believes. It can
indicate known contradictions by marking as no good the context:
- A, B, C are the only suspects; A, B, C are not the only suspects: {A7,A8}
CHAPTER 8
STATISTICAL REASONING
The previous chapter described several representation techniques that can be used to model belief
systems in which, at any given point, a particular fact is believed to be true, believed to be false, or
not considered one way or the other. For some problems, however, it is useful to be able to describe
beliefs that are not certain but for which there is some supporting evidence. Two classes of such
problems arise.
The first class contains problems in which there is genuine randomness in the world. Playing card
games such as bridge and blackjack is a good example of this class. Although in these problems it is
not possible to predict the world with certainty, some knowledge about the likelihood of various
outcomes is available, and we would like to be able to exploit it.
The second class contains problems that could be modeled using the techniques we described in the last
chapter. In these problems, the relevant world is not random. It behaves “normally” unless there is
some kind of exception. Many common sense tasks fall into this category, as do many expert reasoning
tasks such as medical diagnosis. For problems like this, statistical measures may serve a very useful
function as summaries of the world. We explore several techniques that can be used to augment
knowledge representation techniques with statistical measures that describe levels of evidence and
belief.
Bayes' theorem:
• Bayes' theorem is also known as Bayes' rule, Bayes' law, or Bayesian reasoning, which
determines the probability of an event with uncertain knowledge.
• In probability theory, it relates the conditional probability and marginal probabilities of two random
events.
• Bayes' theorem was named after the British mathematician Thomas Bayes. The Bayesian
inference is an application of Bayes' theorem, which is fundamental to Bayesian statistics.
• It is a way to calculate the value of P(B|A) with the knowledge of P(A|B).
• Bayes' theorem allows updating the probability prediction of an event by observing new information
of the real world.
Example: if the probability of cancer is related to one's age, then by using Bayes' theorem we can
determine the probability of cancer more accurately with the help of age.
Bayes' theorem can be derived using the product rule and the conditional probability of event A with
known event B:
P(A ∧ B) = P(A|B) P(B) (product rule)
P(A ∧ B) = P(B|A) P(A) (product rule, with the roles reversed)
Equating the right-hand sides and dividing by P(B) gives:
P(A|B) = P(B|A) P(A) / P(B) ...(a)
The above equation (a) is called Bayes' rule or Bayes' theorem. This equation is the basis of most
modern AI systems for probabilistic inference.
It shows the simple relationship between joint and conditional probabilities. Here,
P(A|B) is known as the posterior, which we need to calculate; it is read as the probability of
hypothesis A given that evidence B has occurred.
P(B|A) is called the likelihood, in which we consider that hypothesis is true, then we calculate the
probability of evidence.
P(A) is called the prior probability, the probability of the hypothesis before considering the evidence.
P(B) is called the marginal probability, the probability of the evidence.
In equation (a), in general, we can write P(B) = Σi P(Ai) P(B|Ai); hence Bayes' rule can be
written as:
P(Ai|B) = P(B|Ai) P(Ai) / Σk P(Ak) P(B|Ak)
where A1, A2, A3, ........, An is a set of mutually exclusive and exhaustive events.
Bayes' rule allows us to compute the single term P(B|A) in terms of P(A|B), P(B), and P(A). This is very
useful in cases where we have a good probability of these three terms and want to determine the fourth one.
Suppose we want to perceive the effect of some unknown cause and want to compute that cause; then
Bayes' rule becomes:
P(cause|effect) = P(effect|cause) P(cause) / P(effect)
Question: what is the probability that a patient has the disease meningitis, given a stiff neck?
Given Data:
A doctor is aware that disease meningitis causes a patient to have a stiff neck, and it occurs 80% of the
time. He is also aware of some more facts, which are given as follows:
Let a be the proposition that the patient has a stiff neck and b be the proposition that the patient has
meningitis, so we can calculate the following:
P(a|b) = 0.8
P(b) = 1/30000
P(a)= .02
P(b|a) = P(a|b) P(b) / P(a) = (0.8 × (1/30000)) / 0.02 ≈ 0.0013 ≈ 1/750
Hence, we can assume that 1 patient out of 750 patients has meningitis disease with a stiff neck.
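The same computation, as a one-line application of Bayes' rule in Python (our illustration):

def bayes(p_a_given_b, p_b, p_a):
    # P(b|a) = P(a|b) * P(b) / P(a)
    return p_a_given_b * p_b / p_a

p = bayes(0.8, 1 / 30000, 0.02)
print(p, '~ 1 in', round(1 / p))  # 0.00133... ~ 1 in 750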
Example-2:
Question: From a standard deck of playing cards, a single card is drawn. The probability that the
card is king is 4/52, then calculate posterior probability P(King|Face), which means the drawn face
card is a king card.
Solution:
Every king is a face card, so P(Face|King) = 1, and there are 12 face cards in a deck of 52, so
P(Face) = 12/52. Hence
P(King|Face) = P(Face|King) P(King) / P(Face) = (1 × 4/52) / (12/52) = 1/3

➢ CERTAINTY FACTORS AND RULE-BASED SYSTEMS
✓ Certainty factors provide a simple way of updating probabilities given new evidence.
✓ The basic idea is to add certainty factors to rules, and use these to calculate the measure of belief
in some hypothesis. So, we might have a rule such as:
IF has-spots(X)
AND has-fever(X)
THEN has-measles(X) CF 0.5
✓ Certainty factors are related to conditional probabilities, but are not the same. For one thing, we
allow certainty factors of less than zero to represent cases where some evidence tends to deny
some hypothesis. Rich and Knight discuss how certainty factors consist of two components: a
measure of belief and a measure of disbelief. However, here we'll assume that we only have
positive evidence and equate certainty factors with measures of belief.
✓ Suppose we have already concluded has-spots(fred) with certainty 0.3, and has-fever(fred) with
certainty 0.8. To work out the probability of has-measles(X) we need to take account both of the
certainties of the evidence and the certainty factor attached to the rule. The certainty associated
with the conjoined premise (has-spots(fred) AND has-fever(fred)) is taken to be the minimum of
the certainties attached to each (ie min(0.3, 0.8) = 0.3). The certainty of the conclusion is the
total certainty of the premises multiplied by the certainty factor of the rule (ie, 0.3 x 0.5 = 0.15).
✓ If we have another rule drawing the same conclusion (e.g., measles(X)) then we will need to
update our certainties to reflect this additional evidence. To do this we calculate the certainties
using the individual rules (say CF1 and CF2), then combine them to get a total certainty of (CF1
+ CF2 - CF1*CF2). The result will be a certainty greater than each individual certainty, but still
less than 1.
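The arithmetic just described is easy to capture in Python (our sketch; it assumes all certainty factors are positive, as stated above):

def rule_cf(premise_cfs, cf_of_rule):
    # Conjoined premises take the minimum certainty; the rule's CF scales it.
    return min(premise_cfs) * cf_of_rule

def combine(cf1, cf2):
    # Parallel rules for the same conclusion: CF1 + CF2 - CF1*CF2.
    return cf1 + cf2 - cf1 * cf2

measles = rule_cf([0.3, 0.8], 0.5)  # spots 0.3, fever 0.8, rule CF 0.5
print(measles)                      # 0.15
print(combine(measles, 0.2))        # with a second rule of CF 0.2 -> 0.32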
✓ The certainty factor CF[h,e] is defined in terms of two components.
1. MB[h,e] → a measure (between 0 and 1) of belief in hypothesis “h” given the evidence “e”.
• MB measures the extent to which the evidence supports the hypothesis.
• It is zero if the evidence fails to support the hypothesis.
2. MD[h,e] → a measure (between 0 and 1) of disbelief in hypothesis “h” given the evidence “e”.
• MD measures the extent to which the evidence supports the negation of the
hypothesis.
• It is zero if the evidence supports the hypothesis.
CF[h,e] = MB[h,e] − MD[h,e]
✓ The approach that we discuss here was used in the MYCIN system, which attempts to
recommend appropriate therapies for patients with bacterial infections. It interacts with the
physician to acquire the clinical data it needs. MYCIN is an example of an expert system, since
it performs a task normally done by a human expert. Here we concentrate on its use of rules
with attached certainty factors. A typical MYCIN rule reads: if the stain of the organism is
gram-positive, and the morphology of the organism is coccus, and the growth conformation of
the organism is clumps, then there is suggestive evidence (0.7) that the identity of the organism
is staphylococcus.
✓ Rules are actually represented internally in a LISP list structure. The rule we just saw would be
represented internally as
PREMISE: ($AND (SAME CNTXT GRAM GRAMPOS)
(SAME CNTXT MORPH COCCUS)
(SAME CNTXT CONFORM CLUMPS))
ACTION: (CONCLUDE CNTXT IDENT STAPHYLOCOCCUS TALLY 0.7)
➢ BAYESIAN NETWORKS
✓ Here, we describe an alternative approach known as Bayesian networks. The main idea is that
to describe the real world, it is not necessary to use a huge joint probability table in which we
list the probabilities of all conceivable combinations of events. Here, we can use a more local
representation in which we will describe clusters of events that interact.
✓ "A Bayesian network is a probabilistic graphical model which represents a set of variables and
their conditional dependencies using a directed acyclic graph."
✓ It is also called a Bayes network, belief network, decision network, or Bayesian model.
✓ Bayesian networks are probabilistic, because these networks are built from a probability
distribution, and also use probability theory for prediction and anomaly detection.
✓ Real world applications are probabilistic in nature, and to represent the relationship between
multiple events, we need a Bayesian network. It can also be used in various tasks
including prediction, anomaly detection, diagnostics, automated insight, reasoning, time
series prediction, and decision making under uncertainty.
A Bayesian network can be used for building models from data and experts' opinions, and it consists of
two parts:
o a directed acyclic graph, and
o a table of conditional probabilities.
A Bayesian network graph is made up of nodes and Arcs (directed links), where:
o Each node corresponds to a random variable, and a variable can be continuous or discrete.
o Arc or directed arrows represent the causal relationship or conditional probabilities between random
variables. These directed links or arrows connect the pair of nodes in the graph.
These links represent that one node directly influences the other node; if there is no directed link,
it means that the nodes are independent of each other.
Figure: A Bayesian network graph over random variables A, B, C, and D
o In the above diagram, A, B, C, and D are random variables represented by the nodes of
the network graph.
o If we are considering node B, which is connected with node A by a directed arrow, then
node A is called the parent of Node B.
o Node C is independent of node A.
The Bayesian network has mainly two components:
o the causal component, and
o the actual numbers (the conditional probabilities).
Each node in the Bayesian network has a conditional probability distribution P(Xi | Parents(Xi)), which
determines the effect of the parents on that node.
Bayesian network is based on Joint probability distribution and conditional probability. So let's first
understand the joint probability distribution:
If we have variables x1, x2, x3, ....., xn, then the probabilities of the different combinations of
x1, x2, x3, ....., xn are known as the joint probability distribution.
P[x1, x2, x3, ....., xn] can be written as the following product of conditional probabilities:
P[x1, x2, ....., xn] = P[x1 | x2, ....., xn] P[x2 | x3, ....., xn] ..... P[xn-1 | xn] P[xn]
In general, for each variable Xi we can write the equation as:
P(Xi | Xi-1, ....., X1) = P(Xi | Parents(Xi))
Example 1:
✓ Suppose that there are two events which could cause grass to be wet: either the sprinkler is on
or it is raining. Suppose also that the rain has a direct effect on the use of the sprinkler (namely,
that when it rains, the sprinkler is usually not turned on).
✓ For the sprinkler, rain, and grass example, we construct a directed acyclic
graph (DAG) that represents the causality relationships among the variables. The variables in
such a graph may be propositional (taking the values TRUE or FALSE) or they may be variables
that take on values of some other type, e.g., a specific disease, a body temperature, or a reading
taken by some diagnostic device.
✓ A Bayes net treats the above problem with two tools: a graphical model and a few tables. For the
above example, one possible representation is:
Figure: Bayesian network for the sprinkler example. Rain influences both Sprinkler and Grass wet,
and Sprinkler influences Grass wet; each node carries a probability table, e.g., P(Rain = T) = 0.2
and P(Rain = F) = 0.8.
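A runnable Python sketch of this network (ours). Only P(Rain = T) = 0.2 comes from the table above; the sprinkler and grass-wet probabilities are assumed for illustration. It uses the factorization P(R, S, W) = P(R) P(S|R) P(W|S,R) from the previous subsection and answers a query by enumeration:

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},   # P(S | R=T): rarely on in rain (assumed)
               False: {True: 0.40, False: 0.60}}  # P(S | R=F) (assumed)
P_wet = {(True, True): 0.99, (True, False): 0.90, # P(W=T | S, R) (assumed)
         (False, True): 0.80, (False, False): 0.00}

def joint(r, s, w):
    # P(R=r, S=s, W=w) = P(r) * P(s|r) * P(w|s,r)
    pw = P_wet[(s, r)]
    return P_rain[r] * P_sprinkler[r][s] * (pw if w else 1 - pw)

# Inference by enumeration: P(Rain = T | GrassWet = T)
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(num / den)  # about 0.36 with these numbers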
➢ FUZZY LOGIC
✓ Fuzzy set theory allows us to represent set membership as a possibility distribution, such as
the ones shown in Figure (a) below. For the set of tall people and the set of very tall people.
Notice how this contrasts with the standard Boolean definition for tall people shown in Figure
(b) below. In the latter, one is either tall or not and there must be a specific height that defines
the boundary. The same is true for very tall. In the former, one’s tallness increases with one’s
height until the value of 1 is reached.
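The contrast between the two definitions can be sketched in Python (ours; the breakpoints of 68 and 76 inches and the Boolean cutoff of 72 inches are illustrative assumptions):

def tall_fuzzy(height_in):
    # Membership in 'tall' rises linearly from 0 at 68 inches to 1 at 76 inches.
    if height_in <= 68:
        return 0.0
    if height_in >= 76:
        return 1.0
    return (height_in - 68) / 8.0

def tall_boolean(height_in, cutoff=72):
    # The crisp definition: a single height defines the boundary.
    return height_in >= cutoff

for h in (66, 70, 72, 75, 78):
    print(h, round(tall_fuzzy(h), 2), tall_boolean(h))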
➢ REVIEW QUESTIONS
1. What do you mean by uncertainty? Discuss briefly the approaches to deal with the same.
2. What are Non-Monotonic Reasoning Systems? Explain from the context of ABC murder story.
3. Explain the different logics for implementing nonmonotonic reasoning, along with the issues associated with them.
4. Discuss the importance of Truth Maintenance System (TMS)s and their variants (Types).
5. State Bayes' theorem and illustrate how it helps in reasoning under uncertainty.
6. Write a note on i) Rule based Systems ii) Certainty Factors.
7. What are the advantages of Bayesian Networks? Explain with an example.
8. Briefly discuss the way reasoning is done using i) Fuzzy Logic ii) Dempster Shafer Theory.