AI Chapter 4 5 6 Combined Note
Unit - IV
1. What is an Expert System? Discuss the examples of Expert Systems.
Expert System is an interactive and reliable computer-based decision-making system
which uses both facts and heuristics to solve complex decision-making problems.
The purpose of an expert system is to solve the most complex issues in a specific
domain.
The Expert System in AI can resolve many issues which generally would require
a human expert.
Expert systems were the predecessors of current-day artificial intelligence,
deep learning, and machine learning systems.
Example:-
MYCIN: It was based on backward chaining and could identify various bacteria
that could cause acute infections. It could also recommend drugs based on the
patient's weight. It is one of the best-known Expert System examples.
PXDES: An Expert System used to predict the degree and type of lung cancer.
CaDet: An Expert System that can identify cancer at early stages.
Good Reliability: The Expert System in AI needs to be reliable, and it must not
make any mistakes.
Knowledge Base:-
What is Knowledge?
Data is a collection of facts. Information is organized data and facts
about the task domain.
Data, information, and past experience combined together are termed as
knowledge.
Knowledge representation
It is the method used to organize and formalize the knowledge in the knowledge
base. It is in the form of IF-THEN-ELSE rules.
Knowledge Acquisition
The success of any expert system majorly depends on the quality, completeness,
and accuracy of the information stored in the knowledge base.
The knowledge base is formed by readings from various experts, scholars, and
the Knowledge Engineers.
The knowledge engineer is a person with the qualities of empathy, quick learning,
and case analyzing skills.
He acquires information from the subject expert by recording, interviewing, and
observing him at work, etc.
He then categorizes and organizes the information in a meaningful way, in the
form of IF-THEN-ELSE rules, to be used by the inference engine.
The knowledge engineer also monitors the development of the ES.
Inference Engine:
Use of efficient procedures and rules by the Inference Engine is essential for
deducing a correct, flawless solution.
In case of knowledge-based ES, the Inference Engine acquires and manipulates
the knowledge from the knowledge base to arrive at a particular solution.
In case of rule based ES, it –
o Applies rules repeatedly to the facts, which are obtained from earlier rule
application.
o Adds new knowledge into the knowledge base if required.
o Resolves rules conflict when multiple rules are applicable to a particular
case.
Forward Chaining:-
It is a strategy of an expert system to answer the question, “What can happen
next?”
Here, the Inference Engine follows the chain of conditions and derivations and
finally deduces the outcome.
It considers all the facts and rules, and sorts them before concluding to a solution.
This strategy is followed for working on conclusion, result, or effect.
For example, prediction of share market status as an effect of changes in interest
rates.
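The rule-firing loop described above can be sketched in Python. This is a minimal, illustrative sketch, not a real inference engine: rules are assumed to be (premises, conclusion) pairs, and the rule and fact names are made up for the share-market example.

```python
# Minimal forward-chaining sketch: fire any rule whose premises are all
# known facts, and repeat until no new fact can be derived.
def forward_chain(rules, facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)   # derive a new fact
                changed = True
    return facts

# Illustrative rules for the share-market example from the notes.
rules = [
    ({"interest rates fall"}, "borrowing increases"),
    ({"borrowing increases"}, "share market rises"),
]
print(forward_chain(rules, {"interest rates fall"}))
# derives "borrowing increases" and then "share market rises"
```

Starting from the single fact "interest rates fall", the engine chains forward through both rules to reach the conclusion, answering "What can happen next?".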
Backward Chaining:
With this strategy, an expert system finds out the answer to the question, “Why
this happened?”
On the basis of what has already happened, the Inference Engine tries to find out
which conditions could have happened in the past for this result.
This strategy is followed for finding out cause or reason. For example, diagnosis
of blood cancer in humans.
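The reverse strategy can be sketched the same way. Again this is an illustrative sketch: to prove a goal, the engine looks for a rule that concludes it and recursively tries to prove that rule's premises (the medical rule and fact names are invented for the example).

```python
# Minimal backward-chaining sketch: work from the goal back to known facts.
def backward_chain(rules, facts, goal):
    if goal in facts:                       # goal is already a known fact
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(
            backward_chain(rules, facts, p) for p in premises
        ):
            return True                     # all premises proved recursively
    return False

# Illustrative diagnosis rule for the blood-cancer example from the notes.
rules = [
    ({"abnormal blood count", "positive biopsy"}, "blood cancer"),
]
facts = {"abnormal blood count", "positive biopsy"}
print(backward_chain(rules, facts, "blood cancer"))  # True
```

Here the engine starts from the hypothesis "blood cancer" and checks which past conditions would support it, answering "Why did this happen?".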
User Interface:-
User interface provides interaction between user of the ES and the ES itself.
The user interface generally uses Natural Language Processing so that it can be
used by a user who is well-versed in the task domain.
The user of the ES need not be necessarily an expert in Artificial Intelligence.
It explains how the ES has arrived at a particular recommendation.
The explanation may appear in the following forms –
o Natural language displayed on screen.
o Verbal narrations in natural language.
o Listing of rule numbers displayed on the screen.
The user interface makes it easy to trace the credibility of the deductions.
Agents use their actuators to run through a cycle of perception, thought, and action.
Examples of agents in general terms include:
1. Software: A software agent has file contents, keystrokes, and received network
packets that function as sensory input; it then acts on those inputs, displaying
the output on a screen.
2. Human: Humans have eyes, ears, and other organs that act as sensors, and hands,
legs, mouths, and other body parts act as actuators.
3. Robotic: Robotic agents have cameras and infrared range finders that act as
sensors, and various servos and motors perform as actuators.
11. How agent interacts with its environment? Also, list the rules for agents.
How agent interacts with its environment:-
Sensor: Sensor is a device which detects the change in the environment and
sends the information to other electronic devices. An agent observes its
environment through sensors.
Actuators: Actuators are the component of machines that converts energy into
motion. The actuators are only responsible for moving and controlling a system.
An actuator can be an electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can
be legs, wheels, arms, fingers, wings, fins, and display screen.
Rules:-
-Rule 1: An AI agent must be able to perceive the environment.
-Rule 2: The environmental observations must be used to make decisions.
-Rule 3: The decisions should result in action.
-Rule 4: The action taken by the AI agent must be rational. Rational actions are actions
that maximize performance and yield the best positive outcome.
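The four rules describe a perceive-decide-act cycle, which can be sketched as a tiny reflex agent in Python. The class and method names here are illustrative, not a standard API.

```python
# Sketch of the perceive-decide-act cycle implied by Rules 1-4.
class ReflexAgent:
    def __init__(self, rules):
        self.rules = rules                # percept -> action table

    def perceive(self, environment):      # Rule 1: observe the environment
        return environment["percept"]

    def decide(self, percept):            # Rule 2: observations drive decisions
        return self.rules.get(percept, "noop")

    def act(self, environment):           # Rules 3-4: decisions become actions
        return self.decide(self.perceive(environment))

# A vacuum-cleaner-style example: clean a dirty square, otherwise move on.
agent = ReflexAgent({"dirty": "clean", "clean": "move"})
print(agent.act({"percept": "dirty"}))  # clean
```

Rationality (Rule 4) here is only as good as the condition-action table; later agent types in these notes replace the fixed table with models, goals, and utilities.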
When we define an AI agent or rational agent, we can group its properties under
the PEAS representation model.
It is made up of four words:
P: Performance measure
E: Environment
A: Actuators
S: Sensors
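A PEAS description is just a structured list of the four components. The sketch below writes one out for a self-driving taxi, a common textbook example; the individual entries are illustrative, not exhaustive.

```python
# PEAS description of a self-driving taxi as a plain dictionary.
peas_taxi = {
    "Performance": ["safety", "speed", "legality", "passenger comfort"],
    "Environment": ["roads", "traffic", "pedestrians", "weather"],
    "Actuators":   ["steering", "accelerator", "brake", "horn"],
    "Sensors":     ["cameras", "GPS", "speedometer", "odometer"],
}

for component, values in peas_taxi.items():
    print(f"{component[0]}: {component} = {', '.join(values)}")
```

Listing the four components first makes it easier to judge later whether a chosen agent design is rational for that environment.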
These agents have a model, which is knowledge of the world, and based on the
model they perform actions.
Updating the agent state requires information about:
o How the world evolves
o How the agent's action affects the world.
3. Goal-based agents
The knowledge of the current state of the environment is not always sufficient
for an agent to decide what to do.
The agent needs to know its goal which describes desirable situations.
Goal-based agents expand the capabilities of the model-based agent by
having the "goal" information.
They choose an action, so that they can achieve the goal.
These agents may have to consider a long sequence of possible actions
before deciding whether the goal is achieved or not.
Such consideration of different scenarios is called searching and
planning, which makes an agent proactive.
4. Utility-based agent
These agents are similar to the goal-based agent but provide an extra
component of utility measurement which makes them different by
providing a measure of success at a given state.
Utility-based agents act based not only on goals but also on the best way to
achieve the goal.
The Utility-based agent is useful when there are multiple possible
alternatives, and an agent has to choose in order to perform the best
action.
The utility function maps each state to a real number to check how
efficiently each action achieves the goals.
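The "utility function maps each state to a real number" idea can be shown with a toy example. Everything here is illustrative: states are distances to a goal, and the agent picks the action whose resulting state has the highest utility.

```python
# Sketch of utility-based action selection: score the state each action
# would lead to, and pick the action with the highest score.
def choose_action(state, actions, result, utility):
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy model: the state is the distance to a goal at position 0.
def result(state, action):
    return state + {"forward": -1, "wait": 0, "back": 1}[action]

def utility(state):
    return -abs(state)   # states closer to the goal get higher utility

print(choose_action(3, ["forward", "wait", "back"], result, utility))  # forward
```

With three alternatives available, the utility function breaks the tie that a plain goal test ("am I at 0 yet?") could not: all three actions fail the goal test, but "forward" scores best.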
5. Learning agent
A learning agent in AI is the type of agent which can learn from its past
experiences, or it has learning capabilities.
It starts to act with basic knowledge and is then able to act and adapt
automatically through learning.
A learning agent has mainly four conceptual components, which are:
o Learning element: It is responsible for making improvements by
learning from the environment.
o Critic: The learning element takes feedback from the critic, which
describes how well the agent is doing with respect to a fixed
performance standard.
o Performance element: It is responsible for selecting external action
o Problem generator: This component is responsible for suggesting
actions that will lead to new and informative experiences.
Hence, learning agents are able to learn, analyze performance, and look
for new ways to improve the performance.
16. List and discuss the various features of environment.
1. Fully observable vs Partially Observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs sequential
7. Known vs Unknown
8. Accessible vs Inaccessible
2. Static vs Dynamic
If the environment can change itself while an agent is deliberating then such
environment is called a dynamic environment else it is called a static
environment.
Static environments are easy to deal with because an agent does not need to
keep observing the world while deciding on an action.
However, in a dynamic environment, agents need to keep looking at the world
before each action.
Taxi driving is an example of a dynamic environment whereas Crossword
puzzles are an example of a static environment.
3. Discrete vs Continuous
If in an environment there are a finite number of percepts and actions that can
be performed within it, then such an environment is called a discrete
environment else it is called continuous environment.
A chess game comes under a discrete environment as there is a finite number of
moves that can be performed.
A self-driving car is an example of a continuous environment.
4. Deterministic vs Stochastic
If an agent's current state and selected action can completely determine the
next state of the environment, then such environment is called a deterministic
environment.
A stochastic environment is random in nature and cannot be determined
completely by an agent.
In a deterministic, fully observable environment, agent does not need to
worry about uncertainty.
5. Single-agent vs Multi-agent
If only one agent is involved in an environment and is operating by itself, then
such an environment is called a single-agent environment.
However, if multiple agents are operating in an environment, then such an
environment is called a multi-agent environment.
The agent design problems in the multi-agent environment are different from
single agent environment.
6. Episodic vs sequential
In an episodic environment, there is a series of one-shot actions, and only the
current percept is required for the action.
However, in Sequential environment, an agent requires memory of past
actions to determine the next best actions.
7. Known vs Unknown
Known and unknown are not actually features of an environment; rather, they
describe an agent's state of knowledge for performing an action.
In a known environment, the results for all actions are known to the agent.
While in unknown environment, agent needs to learn how it works in order to
perform an action.
It is quite possible for a known environment to be partially observable and an
unknown environment to be fully observable.
8. Accessible vs Inaccessible
If an agent can obtain complete and accurate information about the environment's
state, then such an environment is called an accessible environment;
otherwise it is called inaccessible.
An empty room whose state can be defined by its temperature is an example
of an accessible environment.
Information about an event on earth is an example of Inaccessible
environment.
Unit – VI
Applications of NLP
Text and speech processing like-Voice assistants – Alexa, Siri, etc.
Text classification like Grammarly, Microsoft Word, and Google Docs
Information extraction like-Search engines like DuckDuckGo, Google
Chatbot and Question Answering like:- website bots
Language Translation like:- Google Translate
Text summarization
Disadvantages
For the training of an NLP model, a lot of data and computation are required.
Many issues arise for NLP when dealing with informal expressions, idioms, and
cultural jargon.
NLP results are sometimes inaccurate, and accuracy is directly
proportional to the accuracy of the data.
Many NLP systems are designed for a single, narrow job, since they cannot adapt
to new domains and have limited functionality.
1. Lexical Analysis In this step, the Lexical Analyzer categorizes the entire input
text into words, sentences, and paragraphs.
NLP identifies and analyzes the structure of words in the sentences.
2. Syntax Analysis This process is also called parsing. In this step, the Syntax
Analyzer will check the input text for grammatical errors.
In this step, the words are arranged in such a way that the relation between
them can be identified.
Syntactic Analyzer will reject any sentence in the input text which is not
correct.
3. Semantic Analysis In this step, NLP checks whether the text holds a meaning or
not.
It tries to decipher the accurate meaning of the text. The Semantic Analyzer will
reject a sentence like “dry water.”
4. Discourse Integration In this step, the meaning of a sentence is interpreted in
the context of the sentences that come before and after it.
5. Pragmatic Analysis In this step, the analyzed text is integrated with real-world
knowledge for extracting the actual meaning of the text.
Corpus
Natural language processing is a unique field that combines computer science,
data science, and linguistics all together to enable computers to understand and
use human languages.
From that perspective, corpus — Latin for body — is a term used to refer to a
body of text.
The plural form of the word is corpora.
This text can contain one or more languages and can be either in the form of
written or spoken languages.
Corpora can have a specific theme or can be generalized text.
Either way, corpora are used for statistical linguistic analysis and linguistic
computing.
Stemming
In natural language processing, stemming is a technique used to extract a word’s
origin by removing affixes such as prefixes and suffixes.
For example: “Flying” is a word and its suffix is “ing”; if we remove “ing”
from “Flying” then we get the base word or root word, which is “Fly”.
The main purpose of stemming is to give the algorithm the ability to look for and
extract useful information from a huge source, like the internet or big data.
Various algorithms are used to perform stemming, including:
o Lookup tables. A table that holds all possible variations of all words
(similar to a dictionary).
o Stripping suffixes. Remove suffixes from the word to construct its root
form.
o Stochastic modeling. An algorithm that understands suffixes'
grammatical rules and uses them to extract a word’s origin.
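The suffix-stripping approach can be sketched in a few lines of Python. This is a deliberately naive stemmer for illustration; real systems use algorithms such as the Porter stemmer, which apply much more careful rules.

```python
# Minimal suffix-stripping stemmer: remove the first matching suffix,
# keeping at least three characters of the stem.
SUFFIXES = ("ing", "ed", "es", "s")

def stem(word):
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

print(stem("flying"))   # fly
print(stem("jumped"))   # jump
print(stem("cats"))     # cat
```

The minimum-stem-length check prevents over-stripping short words (e.g. "sing" would otherwise lose its "ing" and become "s"), which hints at why production stemmers need more elaborate rules.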
Lemmatization
Although stemming is a good approach to extract word origins, sometimes
removing affixes is not enough to obtain the correct origin of a word.
For example, if I use a stemmer to get the origin of “paid”, it will give me
“pai”, which is incorrect.
Stemmers often fail when dealing with irregular words that don't follow the
standard grammar rules.
This is where lemmatization comes to help.
Lemmatization refers to extracting the original dictionary form of a word,
also known as the lemma. So, in our previous example, a lemmatizer will return
pay or paid based on the word's location in the sentence.
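The difference from stemming can be shown with a toy lookup-table lemmatizer for irregular forms. The table entries below are illustrative; real lemmatizers (such as WordNet-based ones) use full dictionaries plus part-of-speech information.

```python
# Toy lemmatizer: a lookup table for irregular forms that suffix
# stripping gets wrong, falling back to the word itself.
IRREGULAR = {"paid": "pay", "went": "go", "better": "good", "mice": "mouse"}

def lemmatize(word):
    return IRREGULAR.get(word, word)

print(lemmatize("paid"))  # pay  (a naive stemmer would produce "pai")
print(lemmatize("went"))  # go
```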
Tokenization
In natural language processing, tokenization is the process of chopping down a
sentence into individual words or tokens.
In the process of forming tokens, punctuation or special characters are often
removed entirely.
Tokens are constructed from a specific body of text to be used for statistical
analysis and processing.
It’s worth mentioning that a token doesn’t necessarily need to be one word; for
example, “rock ’n’ roll,” “3-D printer” are tokens, and they are constructed from
multiple words.
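A simple whitespace-and-punctuation tokenizer can be written with a regular expression. This is a sketch: as the note above says, real tokenizers also have to handle multi-word tokens like "rock 'n' roll", which this regex splits apart.

```python
import re

# Simple regex tokenizer: keep runs of letters/digits, drop punctuation.
def tokenize(sentence):
    return re.findall(r"[A-Za-z0-9]+", sentence)

print(tokenize("NLP is fun, isn't it?"))
# ['NLP', 'is', 'fun', 'isn', 't', 'it']
```

Note how the contraction "isn't" is split into two tokens; handling such cases well is exactly why dedicated tokenizers exist.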
N-grams
In text analysis tasks, n-grams refer to dividing the corpus into chunks of n words.
These chunks are often constructed by moving one word at a time.
When n = 1, we use the term unigram instead of 1-gram. When n = 2, we call them
bigrams, and when n = 3, trigrams.
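The one-word sliding window is a one-liner in Python (libraries such as NLTK provide an equivalent helper; this sketch is self-contained).

```python
# Build n-grams by sliding a window of n words one word at a time.
def ngrams(words, n):
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

words = "the quick brown fox".split()
print(ngrams(words, 2))
# [('the', 'quick'), ('quick', 'brown'), ('brown', 'fox')]
```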
Normalization
When we want to analyze text for any purpose, the analysis process can be much
more accurate if the text we are using is in a standard format.
Putting the text in a standard format is what’s called normalization.
For example, if we search within a text, it will be better if the entire text was in
either upper or lower case.
Normalization is often conducted after tokenizing a text and a query.
We may have two similar phrases that are not 100% the same, such as USA and
U.S.A. But you want your model to match these two terms regardless of the
small differences.
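A minimal normalization step for the USA / U.S.A. case above: lower-case the text and strip periods so the two variants compare equal. Real pipelines normalize much more (accents, unicode forms, whitespace), so treat this as a sketch.

```python
# Normalize a term so surface variants compare equal.
def normalize(term):
    return term.lower().replace(".", "")

print(normalize("U.S.A.") == normalize("USA"))  # True
```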
Stop Words
Stop words are those words which are filtered out before further processing of
text, since these words contribute little to overall meaning, given that they are
generally the most common words in a language.
For instance, "the," "and," and "a," while all required words in a particular
passage, don't generally contribute greatly to one's understanding of content.
As a simple example, the following pangram is just as legible if the stop words
are removed:
A quick brown fox jumps over the lazy dog.
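Filtering the pangram above can be sketched as follows. The stop-word list here is a tiny illustrative subset; NLP libraries ship much longer, language-specific lists.

```python
# Remove stop words from a sentence after light cleanup.
STOP_WORDS = {"a", "an", "the", "over", "and", "of"}

def remove_stop_words(text):
    words = [w.strip(".,").lower() for w in text.split()]
    return [w for w in words if w not in STOP_WORDS]

print(remove_stop_words("A quick brown fox jumps over the lazy dog."))
# ['quick', 'brown', 'fox', 'jumps', 'lazy', 'dog']
```

The filtered output still conveys the sentence's content, which is exactly the point of the pangram example.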
23. What is Game playing in AI? Discuss the approaches and techniques for the same.
Game playing is a popular application of artificial intelligence that involves the
development of computer programs to play games, such as chess, checkers, or
Go.
The goal of game playing in artificial intelligence is to develop algorithms that
can learn how to play games and make decisions that will lead to winning
outcomes.
Game playing in AI is an active area of research and has many practical
applications, including game development, education, and military training.
By simulating game playing scenarios, AI algorithms can be used to develop
more effective decision-making systems for real-world applications.
Example
One of the earliest examples of successful game playing AI is the chess program
Deep Blue, developed by IBM, which defeated the world champion Garry
Kasparov in 1997.
Since then, AI has been applied to a wide range of games, including two-player
games, multiplayer games, and video games.
Approaches
There are two main approaches to game playing in AI, rule-based systems and
machine learning-based systems.
Rule-based systems use a set of fixed rules to play the game.
Machine learning-based systems use algorithms to learn from experience and
make decisions based on that experience.
Technique
The most common search technique in game playing is Minimax search
procedure.
It is a depth-first, depth-limited search procedure. It is used for games like
chess and tic-tac-toe.
Minimax algorithm uses two functions –
o MOVEGEN : It generates all the possible moves that can be generated
from the current position.
o STATICEVALUATION : It returns a value representing the
goodness of a position from the viewpoint of the two players.
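Minimax with the two functions named above can be sketched in Python. The game tree below is a toy example (not a real game); MOVEGEN and STATICEVALUATION are passed in as plain functions.

```python
# Depth-limited minimax: MAX maximizes the evaluation, MIN minimizes it.
def minimax(state, depth, maximizing, movegen, static_evaluation):
    moves = movegen(state)
    if depth == 0 or not moves:          # depth limit or leaf position
        return static_evaluation(state)
    values = [minimax(m, depth - 1, not maximizing, movegen, static_evaluation)
              for m in moves]
    return max(values) if maximizing else min(values)

# Toy game tree: each state maps to its successor states; leaves map to [].
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"],
        "a1": [], "a2": [], "b1": [], "b2": []}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

value = minimax("root", 2, True,
                movegen=lambda s: tree[s],
                static_evaluation=lambda s: scores.get(s, 0))
print(value)  # 3: MAX picks the better of min(3, 5) = 3 and min(2, 9) = 2
```

Depth-limiting matters because for games like chess the full tree is far too large; the static evaluation stands in for the unexplored subtree below the limit.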
24. What are the advantages and drawbacks of game playing in AI?
Advantages:-
Advancement of AI:
Game playing has been a driving force behind the development of artificial intelligence
and has led to the creation of new algorithms and techniques that can be applied to other
areas of AI.
Research:
Game playing is an active area of research in AI and provides an opportunity to study
and develop new techniques for decision-making and problem-solving.
Real-world applications:
The techniques and algorithms developed for game playing can be applied to real-world
applications, such as robotics, autonomous systems, and decision support systems.
Disadvantages:-
Limited scope:
The techniques and algorithms developed for game playing may not be well-suited for
other types of applications and may need to be adapted or modified for different
domains.
Computational cost:
Game playing can be computationally expensive, especially for complex games such as
chess or Go, and may require powerful computers to achieve real-time performance.
Computer Vision
Robots can also see, and this is possible by one of the popular Artificial
Intelligence technologies named Computer vision.
Computer Vision plays a crucial role in all industries like health, entertainment,
medical, military, mining, etc.
Computer Vision is an important domain of Artificial Intelligence that helps in
extracting meaningful information from images, videos and visual inputs and take
action accordingly.
Edge Computing
Edge computing in robots is defined as a service provider of robot integration,
testing, design and simulation.
Edge computing in robotics provides better data management, lower connectivity
cost, better security practices, more reliable and uninterrupted connection.
Affective computing
Affective computing is a field of study that deals with developing systems that
can identify, interpret, process, and simulate human emotions.
Affective computing aims to endow robots with emotional intelligence, in the
hope that they can be given human-like capabilities of observation,
interpretation, and emotion expression.
Mixed Reality
Mixed Reality is also an emerging domain. It is mainly used in the field of
programming by demonstration (PbD).
PbD creates a prototyping mechanism for algorithms using a combination of
physical and virtual objects.
In our brain, there are billions of cells called neurons, which process
information in the form of electric signals.
External information/stimuli is received by the dendrites of the neuron, processed
in the neuron cell body, converted to an output and passed through the Axon to
the next neuron.
The next neuron can choose to either accept it or reject it depending on the
strength of the signal.
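The dendrite/cell-body/axon description maps directly onto a single artificial neuron, sketched below. The weights and threshold are illustrative numbers, not from any trained network.

```python
# A single artificial neuron: weighted inputs (dendrites) are summed in
# the cell body, then a threshold decides whether the signal is passed on.
def neuron(inputs, weights, threshold=0.5):
    signal = sum(x * w for x, w in zip(inputs, weights))  # cell body sum
    return 1 if signal >= threshold else 0                # fire or not

print(neuron([1, 0, 1], [0.4, 0.9, 0.3]))  # 1 (signal 0.7 >= 0.5)
```

The "accept or reject depending on the strength of the signal" behavior of the next neuron is exactly what the threshold models.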
29. Explain the types of AI.
Types of ANN:-