
Unit-4

Probabilistic Reasoning
• Non-Monotonic Reasoning, Default
Reasoning, Statistical Reasoning: Probability
and Bayes' theorem, Certainty factors and
Rule-based systems, Bayesian networks,
Dempster-Shafer theory & Fuzzy logic.
Probabilistic reasoning:

• Probabilistic reasoning is a way of knowledge representation in which
we apply the concept of probability to indicate the uncertainty in
knowledge.
• In probabilistic reasoning, we combine probability theory with logic
to handle uncertainty.
• In the real world, there are many scenarios where the certainty of
something is not confirmed, such as "It will rain today," "the behavior of
someone in some situation," or "the outcome of a match between two teams or
two players." These are probable sentences: we can assume they will
happen but cannot be sure, so here we use probabilistic reasoning.
Need of probabilistic reasoning in AI:
• When there are unpredictable outcomes.
• When the specifications or possibilities of predicates
become too large to handle.
• When an unknown error occurs during an
experiment.
In probabilistic reasoning, there are two ways to
solve problems with uncertain knowledge:
• Bayes' rule
• Bayesian statistics
Reasoning in Artificial intelligence

• Reasoning is the mental process of deriving
logical conclusions and making predictions from
available knowledge, facts, and beliefs. Or we can
say, "Reasoning is a way to infer facts from
existing data." It is a general process of thinking
rationally to find valid conclusions.
• In artificial intelligence, reasoning is essential
so that the machine can also think rationally like a
human brain, and can perform like a human.
Types of Reasoning

In artificial intelligence, reasoning can be divided into the
following categories:
• Deductive reasoning
• Inductive reasoning
• Abductive reasoning
• Common Sense Reasoning
• Monotonic Reasoning
• Non-monotonic Reasoning

• Note: Inductive and deductive reasoning are forms
of propositional logic.
1. Deductive reasoning

• Deductive reasoning is deducing new information from logically related known
information. It is a form of valid reasoning, which means the argument's
conclusion must be true when the premises are true.
• Deductive reasoning is a type of propositional logic in AI, and it requires various
rules and facts. It is sometimes referred to as top-down reasoning, and
contrasts with inductive reasoning.
• In deductive reasoning, the truth of the premises guarantees the truth of the
conclusion.
• Deductive reasoning mostly starts from general premises and moves to a specific
conclusion, as in the example below.
• Example:
• Premise-1: All humans eat veggies.
• Premise-2: Suresh is human.
• Conclusion: Suresh eats veggies.
2. Inductive Reasoning:

• Inductive reasoning is a form of reasoning that arrives at a conclusion from limited
sets of facts by the process of generalization. It starts with a series of specific
facts or data and reaches a general statement or conclusion.
• Inductive reasoning is a type of propositional logic, which is also known as cause-
effect reasoning or bottom-up reasoning.
• In inductive reasoning, we use historical data or various premises to generate a
generic rule, for which the premises support the conclusion.
• In inductive reasoning, premises provide probable support for the conclusion, so
the truth of the premises does not guarantee the truth of the conclusion.
• Example:
• Premise: All of the pigeons we have seen in the zoo are white.
• Conclusion: Therefore, we can expect all pigeons to be white.
3. Abductive reasoning:

• Abductive reasoning is a form of logical reasoning
which starts with single or multiple observations and then
seeks the most likely explanation or conclusion
for the observation.
• Abductive reasoning is an extension of deductive
reasoning, but in abductive reasoning, the premises do
not guarantee the conclusion.
• Example:
• Implication: The cricket ground is wet if it is raining.
• Axiom: The cricket ground is wet.
• Conclusion: It is raining.
4. Common Sense Reasoning

• Common sense reasoning is an informal form of reasoning,
which can be gained through experience.
• Common sense reasoning simulates the human ability to
make presumptions about events which occur every
day.
• It relies on good judgment rather than exact logic and
operates on heuristic knowledge and heuristic rules.
• Example:
• One person can be at only one place at a time.
• If I put my hand in a fire, it will burn.
• The above two statements are examples of common
sense reasoning which a human mind can easily
understand and assume.
5. Monotonic Reasoning:
• In monotonic reasoning, once a conclusion is drawn, it remains
the same even if we add other information to the existing information in
our knowledge base. In monotonic reasoning, adding knowledge does not
decrease the set of propositions that can be derived.
• To solve monotonic problems, we can derive valid conclusions from the
available facts only, and they will not be affected by new facts.
• Monotonic reasoning is not useful for real-time systems, because in real
time facts change, so we cannot use monotonic reasoning.
• Monotonic reasoning is used in conventional reasoning systems, and any
logic-based system is monotonic.
• Any form of theorem proving is an example of monotonic reasoning.
• Example:
• Earth revolves around the Sun.
• This is a true fact, and it cannot be changed even if we add another sentence
to the knowledge base, such as "The moon revolves around the earth" or "Earth is
not round."
5. Monotonic Reasoning:
Advantages of Monotonic Reasoning:
• In monotonic reasoning, each old proof always remains
valid.
• If we deduce some facts from the available facts, they
remain valid forever.
Disadvantages of Monotonic Reasoning:
• We cannot represent real-world scenarios using
monotonic reasoning.
• Hypothetical knowledge cannot be expressed with
monotonic reasoning, which means facts must be true.
• Since we can only derive conclusions from old proofs,
new knowledge from the real world cannot be added.
6. Non-monotonic Reasoning

• In non-monotonic reasoning, some conclusions may be invalidated if we add
more information to our knowledge base.
• A logic is said to be non-monotonic if some conclusions can be invalidated by
adding more knowledge to the knowledge base.
• Non-monotonic reasoning deals with incomplete and uncertain models.
• "Human perception of various things in daily life" is a general example of non-
monotonic reasoning.
• Example: Suppose the knowledge base contains the following knowledge:
• Birds can fly.
• Penguins cannot fly.
• Pitty is a bird.
• From the above sentences, we can conclude that Pitty can fly.
• However, if we add another sentence to the knowledge base, "Pitty is a
penguin", we conclude "Pitty cannot fly", which invalidates the above
conclusion.
6. Non-monotonic Reasoning
Advantages of Non-monotonic reasoning:
• For real-world systems such as Robot navigation,
we can use non-monotonic reasoning.
• In Non-monotonic reasoning, we can choose
probabilistic facts or can make assumptions.
Disadvantages of Non-monotonic Reasoning:
• In non-monotonic reasoning, the old facts may be
invalidated by adding new sentences.
• It cannot be used for theorem proving.
Types of Reasoning in AI
1. Probabilistic Reasoning in AI:
Probabilistic reasoning involves dealing with uncertainty and making decisions based on probabilities. AI
systems use statistical models to assess the likelihood of different outcomes and make informed choices.
2. Default Reasoning in AI:
Default reasoning in Artificial Intelligence is a type of non-monotonic reasoning where conclusions are drawn
based on default assumptions unless explicitly contradicted. It allows systems to make plausible inferences
in the absence of complete information.
3. Statistical Reasoning in AI:
Statistical reasoning involves the use of statistical methods to analyze data, identify patterns, and make
predictions. AI systems leverage statistical reasoning to learn from data and generalize knowledge.
4. Logical Reasoning in AI:
Logical reasoning in AI involves deducing conclusions from a set of given premises using logical rules. It follows
formal logic principles to ensure the validity of conclusions drawn by AI systems.
5. Automated Reasoning in AI:
Automated reasoning refers to the ability of AI systems to automatically derive conclusions or solutions from a
set of logical rules or knowledge. It includes processes like theorem proving and decision-making.
Bayes' theorem
• Bayes' theorem is also known as Bayes' rule, Bayes'
law, or Bayesian reasoning; it determines the
probability of an event with uncertain knowledge.
• In probability theory, it relates the conditional
probability and marginal probabilities of two random
events.
• Bayesian inference is an application of Bayes'
theorem, which is fundamental to Bayesian statistics.
What is Bayes' Theorem?

• Bayes' theorem (also known as the Bayes rule or Bayes law) is used to
determine the conditional probability of event A when event B has already
occurred.
• The general statement of Bayes' theorem is: "The conditional probability of
an event A, given the occurrence of another event B, is equal to the
product of the probability of B given A and the probability of A, divided by the
probability of event B." i.e.

P(A|B) = P(B|A)P(A) / P(B)

• where,
• P(A) and P(B) are the probabilities of events A and B
• P(A|B) is the probability of event A when event B happens
• P(B|A) is the probability of event B when A happens
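As a quick illustration, here is a minimal Python sketch of this formula. The disease-test numbers are made up for illustration and are not from the slides:

```python
def bayes(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Illustrative numbers: a test that detects a disease 90% of the time
# (P(pos|disease)), a 1% prevalence (P(disease)), and an 8% overall
# positive rate (P(pos)).
p_disease_given_pos = bayes(p_b_given_a=0.90, p_a=0.01, p_b=0.08)
print(p_disease_given_pos)  # 0.1125 -- a positive test still leaves much doubt
```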
Bayes Theorem Statement

• Bayes' theorem for n events is defined as follows.
• Let E1, E2, …, En be a set of events associated with
the sample space S, in which all the events E1,
E2, …, En have a non-zero probability of
occurrence. All the events E1, E2, …, En form a
partition of S. Let A be an event from space S for
which we have to find the probability; then, according
to Bayes' theorem,
• P(Ei|A) = P(Ei)P(A|Ei) / ∑ P(Ek)P(A|Ek)
• where the sum runs over k = 1, 2, 3, …, n
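A small Python sketch of this partitioned form. The machine/defect numbers are illustrative assumptions, not from the slides:

```python
def bayes_partition(priors, likelihoods, i):
    """P(E_i | A) = P(E_i) P(A|E_i) / sum_k P(E_k) P(A|E_k)."""
    evidence = sum(p * l for p, l in zip(priors, likelihoods))
    return priors[i] * likelihoods[i] / evidence

# Illustrative: three machines produce 50%, 30%, 20% of all items (priors),
# with defect rates 1%, 2%, 3% (likelihoods of A = "item is defective").
priors = [0.5, 0.3, 0.2]
likelihoods = [0.01, 0.02, 0.03]
print(bayes_partition(priors, likelihoods, 0))  # P(E1 | defective) ≈ 0.294
```

Note how the denominator is just the total probability of A over the partition, which is why the posteriors across all Ei sum to 1.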
Bayes Theorem Formula

• For any two events A and B, the formula for
Bayes' theorem is given by:

P(A|B) = P(B|A)P(A) / P(B)

where,
P(A) and P(B) are the probabilities of events A and B, and P(B) is never
equal to zero.
P(A|B) is the probability of event A when event B happens
P(B|A) is the probability of event B when A happens
Terms Related to Bayes Theorem
• After learning about Bayes' theorem in detail, let us understand some important terms
related to the concepts we covered in the formula and derivation.
• Hypotheses: The events E1, E2, …, En in the sample space are called hypotheses.
• Prior Probability: The prior probability is the initial probability of an event occurring before any
new data is taken into account. P(Ei) is the prior probability of hypothesis Ei.
• Posterior Probability: The posterior probability is the updated probability of an event after
considering new information. The probability P(Ei|A) is the posterior probability of
hypothesis Ei.
• Conditional Probability
• The probability of an event A based on the occurrence of another event B is
termed conditional probability.
• It is denoted as P(A|B) and represents the probability of A when event B has already
happened.
• Joint Probability
• When the probability of two or more events occurring together at the same time is
measured, it is called joint probability. For two events A and B, the joint
probability is denoted as P(A∩B).
• Random Variables
• Real-valued variables whose possible values are determined by random experiments are
called random variables. The probability of finding such variables is the experimental
probability.
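To make the joint and conditional probability terms concrete, here is a tiny Python sketch using two fair dice; the particular events are chosen only for illustration:

```python
from itertools import product

# Two fair dice. A = "sum is 8", B = "first die shows 3".
outcomes = list(product(range(1, 7), repeat=2))
A = {o for o in outcomes if sum(o) == 8}
B = {o for o in outcomes if o[0] == 3}

p = lambda s: len(s) / len(outcomes)
joint = p(A & B)            # P(A ∩ B) = 1/36 (only the outcome (3, 5))
conditional = joint / p(B)  # P(A | B) = (1/36) / (6/36) = 1/6
print(joint, conditional)
```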
Certainty factors (CFs)
• In artificial intelligence (AI), certainty factors (CFs) are numerical
values that indicate how likely a statement or event is to be true:
Range
• CFs range from -1.0 to +1.0.
Meaning
• A CF of -1.0 means the statement is definitely false, while a CF of +1.0
means the statement is definitely true. A CF of 0 means the agent has no
evidence about the condition either way.
Use
• CFs are used to represent uncertain or incomplete information in AI
systems. They allow for efficient inference in uncertain situations by
combining CFs from multiple rules, as sketched below.
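The slides do not give the combination formula, so the sketch below assumes the standard MYCIN-style parallel-combination rule, which is the usual way CFs from multiple rules about the same conclusion are merged:

```python
def combine_cf(cf1: float, cf2: float) -> float:
    """MYCIN-style parallel combination of two certainty factors in [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:
        # Two supporting pieces of evidence reinforce each other.
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        # Two disconfirming pieces of evidence reinforce the denial.
        return cf1 + cf2 * (1 + cf1)
    # Mixed evidence partially cancels.
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

print(combine_cf(0.6, 0.4))   # 0.76: support grows but never exceeds 1.0
print(combine_cf(0.6, -0.4))  # ≈ 0.33: conflicting evidence weakens the belief
```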
Bayesian networks
• A Bayesian belief network is a key technology for dealing with
probabilistic events and for solving problems that involve uncertainty. We can
define a Bayesian network as:
• "A Bayesian network is a probabilistic graphical model which represents a
set of variables and their conditional dependencies using a directed acyclic
graph."
• It is also called a Bayes network, belief network, decision network,
or Bayesian model.
• Bayesian networks are probabilistic, because these networks are built
from a probability distribution, and also use probability theory for
prediction and anomaly detection.
• Real-world applications are probabilistic in nature, and to represent the
relationships between multiple events, we need a Bayesian network. It can
also be used in various tasks including prediction, anomaly detection,
diagnostics, automated insight, reasoning, time series prediction,
and decision making under uncertainty.
Bayesian Network
• A Bayesian network can be used for building
models from data and expert opinions, and it
consists of two parts:
• Directed acyclic graph
• Table of conditional probabilities.
The generalized form of a Bayesian network that
represents and solves decision problems under
uncertain knowledge is known as an influence
diagram.
Example
• Each node corresponds to a random variable, and a variable can
be continuous or discrete.
• Arcs or directed arrows represent the causal relationships or
conditional probabilities between random variables. These directed
links or arrows connect pairs of nodes in the graph.
These links represent that one node directly influences the other
node; if there is no directed link, the nodes are
independent of each other.
– In the given diagram, A, B, C, and D are random variables
represented by the nodes of the network graph.
– If we consider node B, which is connected to node A by a
directed arrow, then node A is called the parent of node B.
– Node C is independent of node A.
Bayesian network
The Bayesian network has two main components:
• Causal component
• Actual numbers
• Each node in the Bayesian network has a conditional
probability distribution P(Xi | Parent(Xi)), which
determines the effect of the parents on that node.
• A Bayesian network is based on the joint probability
distribution and conditional probability.
Example
• Example: Harry installed a new burglar alarm at his home to detect
burglary. The alarm reliably responds to a burglary but
also responds to minor earthquakes. Harry has two neighbors,
David and Sophia, who have taken responsibility to inform Harry
at work when they hear the alarm. David always calls Harry when
he hears the alarm, but sometimes he gets confused with the phone
ringing and calls then too. On the other hand, Sophia likes to
listen to loud music, so sometimes she misses hearing the alarm.
Here we would like to compute the probability of the burglar alarm.
• Problem:
Calculate the probability that the alarm has sounded, but there is
neither a burglary nor an earthquake, and both David and
Sophia called Harry.
Solution:

• The Bayesian network for the above problem is given below. The network structure shows that
burglary and earthquake are the parent nodes of the alarm and directly affect the probability of
the alarm going off, while David's and Sophia's calls depend on the alarm probability.
• The network represents that our assumptions do not directly perceive the burglary and also do
not notice the minor earthquake, and the neighbors do not confer before calling.
• The conditional distributions for each node are given as a conditional probabilities table, or CPT.
• Each row in the CPT must sum to 1 because all the entries in the table represent an exhaustive
set of cases for the variable.
• In a CPT, a boolean variable with k boolean parents contains 2^k probabilities. Hence, if there are two
parents, the CPT will contain 4 probability values.

List of all events occurring in this network:

• Burglary (B)
• Earthquake (E)
• Alarm (A)
• David calls (D)
• Sophia calls (S)
Solution:

• We can write the events of the problem statement in
the form of the probability P[D, S, A, B, E], and can
rewrite this probability statement using the
joint probability distribution:
• P[D, S, A, B, E] = P[D | S, A, B, E] · P[S, A, B, E]
= P[D | S, A, B, E] · P[S | A, B, E] · P[A, B, E]
= P[D | A] · P[S | A, B, E] · P[A, B, E]
= P[D | A] · P[S | A] · P[A | B, E] · P[B, E]
= P[D | A] · P[S | A] · P[A | B, E] · P[B | E] · P[E]
= P[D | A] · P[S | A] · P[A | B, E] · P[B] · P[E]
(the last step holds because burglary and earthquake are independent events)
Solution:

Let's take the observed probability for the Burglary and


earthquake component:
• P(B= True) = 0.002, which is the probability of burglary.
• P(B= False)= 0.998, which is the probability of no
burglary.
• P(E= True)= 0.001, which is the probability of a minor
earthquake
• P(E= False)= 0.999, Which is the probability that an
earthquake not occurred.
• We can provide the conditional probabilities as per the
below tables:
Conditional probability table for Alarm A:
[The CPTs for Alarm, David's call, and Sophia's call were given as tables in the
original slides; the entries used below are P(A | ¬B, ¬E) = 0.001,
P(D | A) = 0.91, and P(S | A) = 0.75.]
Solution:
From the formula of the joint distribution, we can write the
problem statement in the form of a probability
distribution:
P(S, D, A, ¬B, ¬E) = P(S|A) · P(D|A) · P(A|¬B ∧ ¬E) · P(¬B) · P(¬E)
= 0.75 × 0.91 × 0.001 × 0.998 × 0.999
= 0.00068045.
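The same computation as a short Python sketch, using only the CPT entries quoted above (the variable names are ours, not from the slides):

```python
# Probabilities from the slides' CPTs (only the entries needed here).
P_B = {True: 0.002, False: 0.998}        # burglary
P_E = {True: 0.001, False: 0.999}        # earthquake
P_A_given_BE = {(False, False): 0.001}   # P(Alarm=T | B, E); other rows omitted
P_D_given_A = {True: 0.91}               # P(David calls=T | Alarm=T)
P_S_given_A = {True: 0.75}               # P(Sophia calls=T | Alarm=T)

# P(S, D, A, ¬B, ¬E) = P(S|A) · P(D|A) · P(A|¬B,¬E) · P(¬B) · P(¬E)
p = (P_S_given_A[True] * P_D_given_A[True]
     * P_A_given_BE[(False, False)] * P_B[False] * P_E[False])
print(p)  # ≈ 0.00068045
```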
Dempster-Shafer theory
• Uncertainty is a pervasive aspect of AI systems, as they
often deal with incomplete or conflicting information.
• Dempster–Shafer theory, named after its inventors
Arthur P. Dempster and Glenn Shafer, offers a
mathematical framework to represent and reason with
uncertain information.
• By utilizing belief functions, Dempster–Shafer theory
enables artificial intelligence systems to handle
imprecise and conflicting evidence, making it a
powerful tool in decision-making processes.
Dempster-Shafer theory
• In recent times, the scientific and engineering community has come to realize the
significance of incorporating multiple forms of uncertainty.
• This expanded perspective on uncertainty has been made feasible by notable
advancements in computational power within the field of artificial intelligence.
• As computational systems become more adept at handling intricate analyses, the
limitations of relying solely on traditional probability theory to encompass the
entirety of uncertainty have become apparent.
• Traditional probability theory falls short in its ability to effectively address
consonant, consistent, or arbitrary evidence without the need for additional
assumptions about probability distributions within a given set.
• Moreover, it fails to express the extent of conflict that may arise between different
sets of evidence.
• To overcome these limitations, Dempster-Shafer theory has emerged as a viable
framework, blending the concept of probability with the conventional
understanding of sets.
• Dempster-Shafer theory provides the means to handle diverse types of evidence,
and it incorporates various methods to account for conflicts when combining
multiple sources of information in the context of artificial intelligence.
Dempster-Shafer theory
This theory was developed for the
following reasons:
• Bayesian theory is only concerned with
single pieces of evidence.
• Bayesian probability cannot describe
ignorance.
Dempster Shafer Theory (DST)
• Dempster–Shafer theory (DST) is an evidence theory: it
combines all possible outcomes of the problem. Hence
it is used to solve problems where there may be a
chance that different pieces of evidence lead to
different results.
The uncertainty in this model is captured as follows:
• Consider all possible outcomes.
• Belief in a possibility is established by bringing
out supporting evidence.
• Plausibility makes evidence compatible with
possible outcomes.
The Uncertainty in this Model

• At its core, DST represents uncertainty using a mathematical object called
a belief function. This belief function assigns degrees of belief to various
hypotheses or propositions, allowing for a nuanced representation of
uncertainty. Three crucial points illustrate the nature of uncertainty within
this theory:
• Conflict: In DST, uncertainty arises from conflicting evidence or incomplete
information. The theory captures these conflicts and provides mechanisms
to manage and quantify them, enabling AI systems to reason effectively.
• Combination Rule: DST employs a combination rule known as Dempster's
rule of combination to merge evidence from different sources. This rule
handles conflicts between sources and determines the overall belief in
different hypotheses based on the available evidence (see the sketch below).
• Mass Function: The mass function, denoted as m(K), quantifies the belief
assigned to a set of hypotheses, denoted as K. It provides a measure of
uncertainty by allocating probabilities to various hypotheses, reflecting
the degree of support each hypothesis has from the available evidence.
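A minimal Python sketch of Dempster's rule of combination; the two witness mass functions are invented purely for illustration:

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule: m12(A) ∝ sum over B∩C=A of m1(B)·m2(C) for A ≠ ∅,
    normalized by 1 - K, where K is the total conflicting mass (B∩C = ∅)."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass assigned to the empty set
    return {k: v / (1 - conflict) for k, v in combined.items()}

# Illustrative: two witnesses over the frame {A, C, D} of murder suspects.
m1 = {frozenset("A"): 0.6, frozenset("ACD"): 0.4}  # witness 1 mostly blames A
m2 = {frozenset("C"): 0.5, frozenset("ACD"): 0.5}  # witness 2 mostly blames C
print(dempster_combine(m1, m2))
# {A}: ≈0.429, {C}: ≈0.286, {A,C,D}: ≈0.286 -- conflict mass 0.3 renormalized away
```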
Example

• Consider a scenario in artificial intelligence (AI) where an AI system is
tasked with solving a murder mystery using Dempster–Shafer theory. The
setting is a room with four individuals: A, B, C, and D. Suddenly, the lights
go out, and upon their return, B is discovered dead, having been stabbed
in the back with a knife. No one entered or exited the room, and it is
known that B did not commit suicide. The objective is to identify the
murderer.
• To address this challenge using Dempster–Shafer theory, we can explore
various possibilities:
• Possibility 1: The murderer could be either A, C, or D.
• Possibility 2: The murderer could be a combination of two individuals,
such as A and C, C and D, or A and D.
• Possibility 3: All three individuals, A, C, and D, might be involved in the
crime.
• Possibility 4: None of the individuals present in the room is the murderer.
Example
• To find the murderer using Dempster–Shafer theory, we can examine the evidence and assign measures of
plausibility to each possibility. We create a set of possible conclusions P with individual
elements {p1, p2, ..., pn}, where at least one element p must be true. These elements must
be mutually exclusive.
• By constructing the power set, which contains all possible subsets, we can analyze the evidence. For
instance, if P = {a, b, c}, the power set would
be {∅, {a}, {b}, {c}, {a,b}, {b,c}, {a,c}, {a,b,c}}, comprising 2^3 = 8 elements.
Mass function m(K)
• In Dempster–Shafer theory, the mass function m(K) represents the evidence for a hypothesis or subset K. It
denotes that evidence for {K or B} cannot be further divided into more specific beliefs for K and B.
Belief in K
• The belief in K, denoted as Bel(K), is calculated by summing the masses of the subsets of
K. For example, if K = {a, d, c}, Bel(K) would be calculated
as m(a) + m(d) + m(c) + m(a,d) + m(a,c) + m(d,c) + m(a,d,c).
Plausibility in K
• Plausibility in K, denoted as Pl(K), is determined by summing the masses of the sets that intersect
with K. It represents the cumulative evidence supporting the possibility of K being true. Pl(K) is
computed as m(a) + m(d) + m(c) + m(a,d) + m(d,c) + m(a,c) + m(a,d,c).
• By leveraging Dempster–Shafer theory in AI, we can analyze the evidence, assign masses to subsets of
possible conclusions, and calculate beliefs and plausibilities to infer the most likely murderer in this
murder mystery scenario.
Example
• There will be possible evidence by which we can find the murderer by the measure of plausibility.
Using the above example we can say:
Set of possible conclusions (P): {p1, p2, ..., pn}
where P is the set of possible conclusions and must be exhaustive, i.e. at least one pi must be
true, and the pi must be mutually exclusive.
The power set will contain 2^n elements, where n is the number of elements in the possible set.
For example:
If P = {a, b, c}, then the power set is given as
{∅, {a}, {b}, {c}, {a,b}, {b,c}, {a,c}, {a,b,c}} = 2^3 = 8 elements.
• Mass function m(K): It is an interpretation of m({K or B}), i.e. it means there is evidence for {K or B}
which cannot be divided among more specific beliefs for K and B.
• Belief in K: The belief in an element K of the power set is the sum of the masses of the elements which are
subsets of K. This can be explained through an example.
Let's say K = {a, d, c}
Bel(K) = m(a) + m(d) + m(c) + m(a,d) + m(a,c) + m(d,c) + m(a,d,c)
• Plausibility in K: It is the sum of the masses of the sets that intersect with K,
i.e. Pl(K) = m(a) + m(d) + m(c) + m(a,d) + m(d,c) + m(a,c) + m(a,d,c)
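A small Python sketch of Bel and Pl over an illustrative mass assignment; the mass values are invented, since the slides only give the formulas:

```python
def belief(mass: dict, K: frozenset) -> float:
    """Bel(K): total mass of all focal sets that are subsets of K."""
    return sum(m for s, m in mass.items() if s <= K)

def plausibility(mass: dict, K: frozenset) -> float:
    """Pl(K): total mass of all focal sets that intersect K."""
    return sum(m for s, m in mass.items() if s & K)

# Illustrative mass assignment over the frame {a, c, d}.
mass = {frozenset("a"): 0.4, frozenset("cd"): 0.3, frozenset("acd"): 0.3}
K = frozenset("ad")
print(belief(mass, K), plausibility(mass, K))  # 0.4 and 1.0
```

Note that Bel(K) ≤ Pl(K) always holds; the gap between them is the uncertainty interval mentioned in the advantages below.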
Dempster Shafer Theory
• Characteristics of Dempster Shafer Theory:
• Uncertainty Representation: DST is designed to handle situations where there
is uncertainty in the information, and it provides a way to represent and reason
with incomplete evidence.
• Combination of Evidence: DST allows for the combination of multiple sources of
evidence. It provides a rule, Dempster's rule of combination, to combine belief
functions from different sources.
• Decision-Making Ability: By deriving measures such as belief and
plausibility from the combined belief function, it helps in decision making.
• Advantages of Dempster Shafer Theory:
• As we add more information, the uncertainty interval reduces.
• DST has a much lower level of ignorance.
• Diagnostic hierarchies can be represented using it.
• A person dealing with such problems is free to think about the evidence.
• Disadvantages of Dempster Shafer Theory:
• The computational effort is high, as we have to deal with 2^n sets.
Fuzzy logic
• Fuzzy Logic Systems (FLS) produce acceptable but definite output in
response to incomplete, ambiguous, distorted, or inaccurate (fuzzy) input.
• Fuzzy Logic (FL) is a method of reasoning that resembles human
reasoning. The approach of FL imitates the way decisions are made by
humans, involving all intermediate possibilities between the digital values
YES and NO.
• The conventional logic block that a computer can understand takes precise
input and produces a definite output of TRUE or FALSE, which is
equivalent to a human's YES or NO.
• The inventor of fuzzy logic, Lotfi Zadeh, observed that unlike computers,
human decision making includes a range of possibilities between YES
and NO, such as:
• CERTAINLY YES, POSSIBLY YES, CANNOT SAY, POSSIBLY NO, CERTAINLY NO
• Fuzzy logic works on these levels of possibility of the input to achieve a
definite output.
Why Fuzzy Logic?

• Fuzzy logic is useful for commercial and
practical purposes.
• It can control machines and consumer
products.
• It may not give accurate reasoning, but
acceptable reasoning.
• Fuzzy logic helps to deal with uncertainty
in engineering.
ARCHITECTURE
• Its Architecture contains four parts :
• RULE BASE: It contains the set of rules and the IF-THEN conditions provided by the
experts to govern the decision-making system, on the basis of linguistic
information. Recent developments in fuzzy theory offer several effective methods
for the design and tuning of fuzzy controllers. Most of these developments reduce
the number of fuzzy rules.
• FUZZIFICATION: It is used to convert inputs i.e. crisp numbers into fuzzy sets. Crisp
inputs are basically the exact inputs measured by sensors and passed into the
control system for processing, such as temperature, pressure, rpm’s, etc.
• INFERENCE ENGINE: It determines the matching degree of the current fuzzy input
with respect to each rule and decides which rules are to be fired according to the
input field. Next, the fired rules are combined to form the control actions.
• DEFUZZIFICATION: It is used to convert the fuzzy sets obtained by the inference
engine into a crisp value. There are several defuzzification methods available and
the best-suited one is used with a specific expert system to reduce the error.
Fuzzy logic
Membership function
• Definition: A graph that defines how each point in the input space is mapped to a
membership value between 0 and 1. The input space is often referred to as the
universe of discourse or universal set (U), which contains all the possible elements
of concern in each particular application.
• There are largely three types of fuzzifiers:
• Singleton fuzzifier
• Gaussian fuzzifier
• Trapezoidal or triangular fuzzifier (sketched below)
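As an illustration, here is a minimal triangular membership function in Python; the temperature numbers are assumptions made for the example, not from the slides:

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function: feet at a and c, peak (value 1) at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)  # rising edge
    return (c - x) / (c - b)      # falling edge

# Illustrative fuzzy set "comfortable temperature", peaking at 22 °C.
for t in (16, 20, 22, 26):
    print(t, triangular(t, a=18, b=22, c=28))  # 0.0, 0.5, 1.0, ≈0.33
```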
What is Fuzzy Control?
• It is a technique to embody human-like thinking in a control system.
• It may not be designed to give accurate reasoning, but it is designed to give
acceptable reasoning.
• It can emulate human deductive thinking, that is, the process people use to infer
conclusions from what they know.
• Any uncertainties can be easily dealt with using fuzzy logic.
Example of a Fuzzy Logic System
Let us consider an air conditioning system with a five-level fuzzy logic system. This system adjusts the
temperature of the air conditioner by comparing the room temperature with the target temperature value, as
sketched below.
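A toy sketch of such a five-level controller in Python, reusing the `triangular` membership function sketched earlier; all level boundaries and output values are invented for illustration, not taken from the slides:

```python
# Five fuzzy levels over the error (room temperature - target temperature, °C),
# each paired with a singleton output rule (% cooling power).
levels = {
    "very_cold": ((-10, -6, -3), 0),
    "cold":      ((-6, -3, 0), 25),
    "ok":        ((-3, 0, 3), 50),
    "warm":      ((0, 3, 6), 75),
    "very_warm": ((3, 6, 10), 100),
}

def cooling_power(error: float) -> float:
    """Fuzzify the error, fire each rule to its membership degree, then
    defuzzify by the weighted average of the rule outputs."""
    num = den = 0.0
    for (a, b, c), out in levels.values():
        mu = triangular(error, a, b, c)  # membership function defined above
        num += mu * out
        den += mu
    return num / den if den else 50.0   # default mid output if nothing fires

print(cooling_power(2.0))  # ≈ 66.7: a warm room gets more cooling than a cool one
```

The weighted-average step here is one common defuzzification choice; as noted under DEFUZZIFICATION above, several methods exist and the best-suited one is chosen per system.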
Application

• It is used in the aerospace field for altitude control of spacecraft and
satellites.
• It has been used in automotive systems for speed control and traffic
control.
• It is used for decision-making support systems and personnel evaluation in
large company businesses.
• It has applications in the chemical industry for controlling pH, drying, and
chemical distillation processes.
• Fuzzy logic is used in natural language processing and various intensive
applications in artificial intelligence.
• Fuzzy logic is extensively used in modern control systems such as expert
systems.
• Fuzzy logic is used with neural networks as it mimics how a person would
make decisions, only much faster. This is done by aggregating data and
changing it into more meaningful data by forming partial truths as fuzzy
sets.
FLSs
Advantages of FLSs
• The mathematical concepts within fuzzy reasoning are very simple.
• You can modify an FLS by just adding or deleting rules, due to the flexibility of
fuzzy logic.
• Fuzzy logic systems can take imprecise, distorted, or noisy input information.
• FLSs are easy to construct and understand.
• Fuzzy logic is a solution to complex problems in all fields of life, including
medicine, as it resembles human reasoning and decision making.
Disadvantages of FLSs
• There is no systematic approach to fuzzy system design.
• They are understandable only when simple.
• They are suitable for problems which do not need high accuracy.
