
THEORY FILE : Artificial Intelligence

(FULL NOTES BY SAHIL RAUNIYAR / PTU-CODER)

SUBJECT CODE: UGCA- 1945

BACHELOR OF COMPUTER APPLICATIONS

MAINTAINED BY: Prof. Sahil Kumar

COLLEGE ROLL NO: 226617

UNIVERSITY ROLL NO: 2200315

DEPARTMENT OF COMPUTER SCIENCE ENGINEERING

BABA BANDA SINGH BAHADUR ENGINEERING COLLEGE, FATEHGARH SAHIB

Program ➖ BCA
Course Name ➖ Artificial Intelligence (Theory)
Semester ➖ 6th

UNIT ➖ 01
●​ # Introduction: What is intelligence? Foundations of artificial intelligence (AI).
History of AI. AI problems: Toy Problems, Real-World Problems - Tic-Tac-Toe, Water
Jug, Question-Answering, 8-Puzzle, 8-Queens Problem. Formulating problems,
Searching for Solutions.

1. What is Intelligence? ➖

Intelligence refers to the ability to learn, understand, and apply knowledge to solve problems, adapt to new
situations, and perform complex tasks. It is commonly associated with humans and animals but can also be
demonstrated by machines.

Types of Intelligence

●​ Natural Intelligence: Exhibited by humans and animals. It involves reasoning, problem-solving, learning,
and decision-making.
●​ Artificial Intelligence (AI): The simulation of human intelligence in machines, enabling them to perform
tasks such as learning, reasoning, problem-solving, and understanding language.



2. Foundations of Artificial Intelligence (AI)


Artificial Intelligence (AI) is based on mathematics, computer science, logic, neuroscience, linguistics, and

psychology. The key foundations of AI include:

1. Mathematics

●​ Linear Algebra: Used in machine learning and neural networks.


●​ Probability and Statistics: Essential for decision-making and AI models (e.g., Bayesian Networks).

●​ Calculus: Used in optimization algorithms for AI.

2. Computer Science

●​ AI algorithms rely on data structures, programming, and computational complexity.


●​ Programming languages such as Python, Java, and Lisp are widely used for AI development.

3. Logic

●​ Boolean logic helps AI make decisions based on rules.


●​ Predicate logic is used in expert systems and knowledge representation.
4. Neuroscience

●​ AI is inspired by the human brain, leading to the development of artificial neural networks (ANNs).

5. Psychology and Cognitive Science

●​ AI systems try to simulate human learning and cognitive processes.

3. History of AI ➖

Early AI (1940s – 1950s)

●​ 1943: Warren McCulloch and Walter Pitts developed the first artificial neuron model.
●​ 1950: Alan Turing proposed the Turing Test to evaluate machine intelligence.
●​ 1956: John McCarthy coined the term Artificial Intelligence at the Dartmouth Conference.

Classical AI (1950s – 1970s)

●​ 1958: McCarthy developed LISP, one of the earliest AI programming languages.
●​ 1965: Joseph Weizenbaum developed ELIZA, an early chatbot.
●​ 1970s: AI faced the "AI Winter", a period of reduced funding due to high expectations and slow progress.

Modern AI (1980s – Present)

●​ 1980s: Introduction of expert systems for decision-making.
●​ 1990s: AI improved in robotics and search algorithms.
●​ 1997: IBM's Deep Blue defeated Garry Kasparov in chess.
●​ 2010s – Present: Deep learning, neural networks, and AI applications in self-driving cars, healthcare, and finance.
●​ 2016: Google's AlphaGo defeated a human Go champion.

4. AI Problems: Toy Problems vs. Real-World Problems ➖


1. Toy Problems

Toy problems are simplified AI problems used for research and education. They have clear rules, defined
states, and limited complexity.

Examples of Toy Problems:

●​ Tic-Tac-Toe
●​ Water Jug Problem
●​ 8-Puzzle Problem
●​ 8-Queens Problem
●​ Question-Answering Systems
2. Real-World Problems

Real-world problems are complex and require AI to handle large amounts of data, uncertainty, and real-time
processing.

Examples of Real-World Problems:

●​ Self-Driving Cars – Requires AI for perception, navigation, and decision-making.


●​ Medical Diagnosis – AI helps doctors detect diseases from medical images.
●​ Speech Recognition – AI is used in Siri, Alexa, and Google Assistant.

5. Examples of AI Problems ➖

1. Tic-Tac-Toe

●​ A two-player game played on a 3x3 grid.
●​ AI can be trained using the Minimax Algorithm, which searches for the best move by considering possible
outcomes.
●​ Used to teach game theory and decision-making in AI.

2. Water Jug Problem

●​ Given two jugs of different capacities (e.g., 4L and 3L), the goal is to measure exactly N liters of water.
●​ Can be solved using Breadth-First Search (BFS) or Depth-First Search (DFS).
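Not part of the original notes: a minimal Python sketch of solving the 4L/3L water-jug problem above with BFS. The target amount (2 litres) and the state encoding are assumptions chosen for illustration.

from collections import deque

def water_jug_bfs(cap_a=4, cap_b=3, target=2):
    # BFS over (jug_a, jug_b) states; returns the sequence of states reaching `target` litres.
    start = (0, 0)
    parents = {start: None}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == target or b == target:
            path, state = [], (a, b)
            while state is not None:          # reconstruct the path back to the start state
                path.append(state)
                state = parents[state]
            return list(reversed(path))
        successors = [
            (cap_a, b), (a, cap_b),           # fill either jug
            (0, b), (a, 0),                   # empty either jug
            (a - min(a, cap_b - b), b + min(a, cap_b - b)),  # pour A into B
            (a + min(b, cap_a - a), b - min(b, cap_a - a)),  # pour B into A
        ]
        for nxt in successors:
            if nxt not in parents:
                parents[nxt] = (a, b)
                queue.append(nxt)
    return None

print(water_jug_bfs())  # shortest sequence of (jug_a, jug_b) states ending with 2 litres in a jug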
3. Question-Answering System

●​ AI models analyze text input and provide accurate responses.
●​ Example: ChatGPT, Google Assistant, and IBM Watson use Natural Language Processing (NLP).

4. 8-Puzzle Problem

●​ A 3x3 sliding puzzle with one empty space where tiles need to be arranged in order.
●​ Solved using the A* Search Algorithm (heuristic-based).

5. 8-Queens Problem

●​ The goal is to place 8 queens on an 8x8 chessboard such that no two queens attack each other.
●​ Solved using Backtracking and Constraint Satisfaction algorithms.
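As an illustration of the backtracking approach just mentioned (not from the notes), here is a minimal Python sketch for the 8-Queens problem; the row-by-row placement scheme is an assumption of this sketch.

def solve(board=(), n=8):
    # board[r] holds the column of the queen already placed in row r.
    row = len(board)
    if row == n:
        return board                                    # all queens placed
    for col in range(n):
        if all(col != c and abs(col - c) != row - r     # no shared column or diagonal
               for r, c in enumerate(board)):
            result = solve(board + (col,), n)
            if result:
                return result
    return None                                         # dead end: backtrack

print(solve())  # e.g. (0, 4, 7, 5, 2, 6, 1, 3)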

6. Formulating AI Problems ➖
To solve AI problems, they must be well-defined using the following components:

1. Initial State

●​ The starting point of the problem (e.g., the initial configuration of a chessboard).
2. Actions

●​ Possible moves or operations an AI can take (e.g., moving a chess piece).

3. Transition Model

●​ Describes how actions change the state (e.g., moving left in an 8-puzzle).

4. Goal State

●​ The desired outcome of the problem (e.g., winning the game, arranging a puzzle).

5. Path Cost

●​ The cost associated with reaching a goal state (e.g., shortest path in navigation).
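To tie the five components together, here is a minimal sketch (not from the notes) of a problem definition in Python, reusing the 4L/3L water-jug domain; the class and method names are illustrative assumptions.

class WaterJugProblem:
    def initial_state(self):
        return (0, 0)                         # 1. Initial state: both jugs empty

    def actions(self, state):                 # 2. Actions available in any state
        return ["fill_a", "fill_b", "empty_a", "empty_b", "pour_ab", "pour_ba"]

    def result(self, state, action):          # 3. Transition model
        a, b = state
        if action == "fill_a":   return (4, b)
        if action == "fill_b":   return (a, 3)
        if action == "empty_a":  return (0, b)
        if action == "empty_b":  return (a, 0)
        if action == "pour_ab":
            t = min(a, 3 - b)
            return (a - t, b + t)
        t = min(b, 4 - a)                     # pour_ba
        return (a + t, b - t)

    def is_goal(self, state):
        return 2 in state                     # 4. Goal state: a jug holds exactly 2 litres

    def step_cost(self, state, action):
        return 1                              # 5. Path cost: one unit per action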

7. Searching for Solutions in AI ➖

AI search algorithms explore possible solutions to a problem by systematically navigating through a problem
space.

Types of Search Algorithms

1. Uninformed Search (Blind Search)

●​ Does not use prior knowledge about the problem.
●​ Examples:
○​ Breadth-First Search (BFS) – Explores all possible states at one level before moving deeper.
○​ Depth-First Search (DFS) – Explores one path fully before backtracking.

2. Informed Search (Heuristic Search)

●​ Uses heuristics (rules of thumb) to find solutions more efficiently.
●​ Examples:
○​ A* Algorithm – Uses a heuristic function to estimate the best path.
○​ Greedy Best-First Search – Expands the most promising node first.

3. Adversarial Search

●​ Used in games and competitive environments (e.g., chess, tic-tac-toe).
●​ Example:
○​ Minimax Algorithm – Maximizes the AI's chance of winning while minimizing the opponent's
chance.
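Not from the notes: a minimal Python sketch of the Minimax idea on an abstract two-level game tree; the leaf scores are illustrative assumptions.

def minimax(node, maximizing):
    # Return the best achievable score assuming both players play optimally.
    if isinstance(node, int):            # leaf: a terminal score for the maximizing player
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# MAX moves at the root; each inner list is a MIN node with two terminal outcomes.
game_tree = [[3, 5], [2, 9]]
print(minimax(game_tree, True))  # 3: MAX picks the branch whose worst-case reply is best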

Conclusion

Artificial Intelligence is a rapidly evolving field with strong foundations in mathematics, logic, and computer science. The
history of AI has seen many breakthroughs, from early rule-based systems to modern machine learning and deep
learning. AI problems range from simple toy problems like tic-tac-toe to complex real-world applications like
autonomous vehicles and medical diagnosis. By formulating AI problems correctly and applying efficient search
algorithms, AI can optimize decision-making, automate tasks, and enhance human capabilities in various fields.
●​ # Knowledge Representation: Propositional Logic, Propositional Theorem
Proving-Inference and Proofs, Proof by Resolution, Horn Clauses and definite


Clauses, Forward and Backward chaining; First order Logic, Inference in First
Order Logic.

Knowledge Representation in Artificial Intelligence ➖


Knowledge Representation (KR) is a fundamental aspect of Artificial Intelligence (AI) that focuses on
how knowledge can be structured, stored, and utilized for reasoning and decision-making. AI
systems require knowledge to understand, reason, and solve complex problems efficiently.

KR uses different logic-based and rule-based approaches to represent information about the world,
allowing AI to perform inference and derive conclusions.

1. Propositional Logic (PL) ➖

What is Propositional Logic?

Propositional Logic (PL) is a formal system in AI that represents facts and relationships in the form of
statements (propositions).

●​ A proposition is a declarative statement that is either true (T) or false (F) but not both.
●​ It does not consider the internal structure of statements but only the truth values.

Syntax of Propositional Logic

Propositional logic consists of:

1.​ Atomic Propositions (P, Q, R, ...) – Simple statements (e.g., "It is raining").
2.​ Logical Operators:
○​ Negation (¬): NOT operator
○​ Conjunction (∧): AND operator
○​ Disjunction (∨): OR operator
○​ Implication (→): IF-THEN relationship
○​ Biconditional (↔): IF AND ONLY IF relationship

Examples of Propositional Logic Statements

1.​ It is raining → The ground is wet.
○​ R → W
2.​ John studies AND he passes the exam.
○​ S ∧ P
3.​ If the power is out, then the computer is off.
○​ P → C
Truth Table for Logical Operators

P     Q     ¬P    P ∧ Q    P ∨ Q    P → Q    P ↔ Q
T     T     F     T        T        T        T
T     F     F     F        T        F        F
F     T     T     F        F        T        F
F     F     T     F        F        T        T
2. Propositional Theorem Proving ➖

What is Theorem Proving?

Theorem proving is a method used in AI to infer conclusions from given facts and logical
statements.

●​ Uses deductive reasoning to derive new truths.
●​ A theorem prover checks whether a given statement logically follows from a set of axioms
(known truths).

Inference and Proofs in Propositional Logic

Inference is the process of deriving new facts from existing facts and rules using logical reasoning.

1. Modus Ponens (Rule of Inference)

If P → Q is true and P is true, then Q must also be true.

●​ Example:
○​ If "It rains (R) → The ground is wet (W)" is true, and "It is raining (R)" is true, then
"The ground is wet (W)" is also true.

2. Modus Tollens

If P → Q is true and Q is false, then P must be false.

●​ Example:
○​ If "If the alarm is ringing (A) → There is a fire (F)" is true, and "There is no fire (¬F)" is
true, then "The alarm is not ringing (¬A)" must also be true.
3. Proof by Resolution ➖

Resolution is a powerful inference rule used in automated theorem proving and logic programming.

Steps of Proof by Resolution

1.​ Negate the statement to be proved and add it to the knowledge base.
2.​ Convert all statements into Conjunctive Normal Form (CNF).
3.​ Repeatedly apply the resolution rule to pairs of clauses.
4.​ If the empty clause (⊥) is derived, a contradiction exists and the original statement is proven true.

Example of Proof by Resolution

To prove Q from P ∨ Q and ¬P, add the negated goal ¬Q and resolve:

1.​ P ∨ Q
2.​ ¬P
3.​ ¬Q (negated goal)

Using resolution:

●​ Resolve (P ∨ Q) with ¬P: Q
●​ Resolve Q with ¬Q: ⊥ (Contradiction, so Q is proven true)
4. Horn Clauses and Definite Clauses ➖

Horn Clause

A Horn Clause is a clause with at most one positive literal.

Example:

●​ (¬P ∨ ¬Q ∨ R) can be rewritten as P ∧ Q → R.

Definite Clause

A Definite Clause is a Horn Clause with exactly one positive literal.

Example:

●​ ¬A ∨ B (which can be written as A → B).

Horn Clauses and Definite Clauses are widely used in logic programming and Prolog.
5. Forward and Backward Chaining ➖ 8

1. Forward Chaining (Data-Driven Reasoning)

●​ Starts with known facts and applies rules to infer new facts.
●​ Continues until the goal is reached.

Example:

●​ Given:
1.​ If Raining (R) → Then Wet ground (W)
2.​ R is true
●​ Inference: Since R → W and R is true, W must also be true.

2. Backward Chaining (Goal-Driven Reasoning)

●​ Starts with a goal and works backwards to check if known facts support the goal.
●​ Useful in Expert Systems and AI Planning.

Example:

●​ Goal: Prove that the ground is wet (W).
●​ Given Rules:
1.​ R → W
2.​ R is true.
●​ Since R is true, W must be true.
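Not from the notes: a minimal Python sketch of the forward-chaining loop over definite clauses (premises → conclusion). The fact and rule names are illustrative assumptions.

def forward_chain(facts, rules):
    # Repeatedly fire rules whose premises are all known until no new fact is added.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)        # fire the rule
                changed = True
    return facts

rules = [({"Raining"}, "WetGround"),         # Raining → WetGround
         ({"WetGround"}, "SlipperyRoad")]    # WetGround → SlipperyRoad
print(forward_chain({"Raining"}, rules))     # {'Raining', 'WetGround', 'SlipperyRoad'}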

6. First-Order Logic (FOL)


Propositional Logic (PL) is limited as it does not handle variables and relationships.​

First-Order Logic (FOL) extends PL by adding quantifiers and predicates.

Key Components of FOL

1.​ Constants: Specific objects (e.g., John, Apple).


2.​ Variables: General representations (e.g., x, y).

3.​ Predicates: Properties of objects (e.g., Human(John)).


4.​ Functions: Maps input to output (e.g., Father(John)).
5.​ Quantifiers:
○​ Universal Quantifier (∀) – "For all" (∀x, Human(x) → Mortal(x)).
○​ Existential Quantifier (∃) – "There exists" (∃x, Animal(x)).
7. Inference in First-Order Logic ➖

1. Unification

●​ The process of matching variables and terms to make expressions identical.


●​ Example:
○​ Given: Loves(x, Mary) and Loves(John, Mary),
○​ Unification: x = John.
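The matching step described above can be sketched in Python as follows; this is a minimal illustration, not from the notes, and the "lowercase strings are variables" convention (with no occurs-check) is an assumption of the sketch.

def is_variable(t):
    # Sketch convention: lowercase strings are variables, anything else is a constant or functor.
    return isinstance(t, str) and t[0].islower()

def unify(a, b, subst=None):
    # Return a substitution dict making a and b identical, or None on failure.
    if subst is None:
        subst = {}
    if a == b:
        return subst
    if is_variable(a):
        return unify_var(a, b, subst)
    if is_variable(b):
        return unify_var(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

def unify_var(var, term, subst):
    if var in subst:
        return unify(subst[var], term, subst)
    subst = dict(subst)
    subst[var] = term
    return subst

# Loves(x, Mary) unified with Loves(John, Mary) gives {'x': 'John'}
print(unify(("Loves", "x", "Mary"), ("Loves", "John", "Mary")))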

2. Resolution in FOL

●​ Converts statements into clausal form and applies resolution for inference.

3. Forward and Backward Chaining in FOL

●​ Used in rule-based systems and AI planning.


●​ Works similarly to propositional logic, but with predicates and variables.

Conclusion

Knowledge Representation in AI relies on logical formalisms like Propositional Logic (PL) and
First-Order Logic (FOL). Theorem proving methods like resolution, forward chaining, and
backward chaining enable AI systems to reason efficiently. AI applications such as Expert Systems,
Knowledge-Based Systems, and Natural Language Processing (NLP) depend on these logic-based
techniques to infer conclusions and solve problems.

HAPPY ENDING BY ➖ SAHIL RAUNIYAR & PTUCODER !! 😀

UNIT ➖ 02

●​ # Uncertain Knowledge and Reasoning: Basic probability, Bayes rule, Belief
networks, Default reasoning, Fuzzy sets and fuzzy logic.

Uncertain Knowledge and Reasoning in AI ➖


In many real-world AI applications, knowledge is incomplete, uncertain, or ambiguous. Traditional
logic-based reasoning (such as Propositional and First-Order Logic) assumes that all facts are either
true or false. However, in real-life scenarios, AI systems must handle uncertainty and make
probabilistic inferences.

To deal with uncertainty, AI uses various techniques like Probability Theory, Bayesian Networks,
Default Reasoning, and Fuzzy Logic.

1. Basic Probability in AI ➖

1.1 Probability Basics

Probability provides a mathematical framework for modeling uncertainty. It represents the likelihood
of an event occurring.

●​ The probability of an event (P) is always between 0 and 1:
○​ P(A) = 0 → Event A never occurs
○​ P(A) = 1 → Event A always occurs
○​ 0 < P(A) < 1 → Event A may occur with some likelihood
●​ The probabilities of an event and its complement must sum to 1:
○​ P(A) + P(¬A) = 1

1.2 Types of Probability



●​ Prior Probability P(A) → The probability of event A occurring before any new evidence is
considered.
●​ Conditional Probability P(A | B) → The probability of A occurring given that B has already
occurred.

1.3 Joint and Marginal Probabilities

●​ Joint Probability P(A ∩ B): The probability that both A and B occur together.
●​ Marginal Probability P(A): The probability of A, ignoring any other variable.

1.4 Law of Total Probability

For a set of mutually exclusive events {B1, B2, ..., Bn} that cover all possibilities:

P(A) = ∑ P(A | Bi) P(Bi)

2. Bayes' Rule in AI ➖

Bayes' Theorem is used to update probabilities based on new evidence. It is a foundational concept
in probabilistic reasoning.

P(A | B) = P(B | A) P(A) / P(B)

Where:

●​ P(A | B): Posterior probability → Probability of A given that B is true.
●​ P(B | A): Likelihood → Probability of B given that A is true.
●​ P(A): Prior probability → Probability of A before observing B.
●​ P(B): Marginal probability → Total probability of B occurring.

Example of Bayes' Rule in AI (Medical Diagnosis)

●​ A doctor wants to diagnose a disease (D) based on a test result (T).
●​ Let's say:
○​ P(D) = 0.01 (1% of people have the disease).
○​ P(T | D) = 0.95 (Test is positive if a person has the disease).
○​ P(T | ¬D) = 0.05 (Test is a false positive for 5% of healthy people).
○​ P(T) = P(T | D) P(D) + P(T | ¬D) P(¬D) = 0.95 × 0.01 + 0.05 × 0.99 = 0.059

Using Bayes' rule:

P(D | T) = P(T | D) P(D) / P(T) = (0.95 × 0.01) / 0.059 ≈ 0.16

So even with a positive test, the probability of actually having the disease is only about 16%.
This helps AI in medical diagnosis, spam filtering, and AI-based decision-making.



3. Belief Networks (Bayesian Networks) ➖

3.1 What are Belief Networks?

A Bayesian Belief Network (BBN) is a graphical model that represents probabilistic dependencies
among variables.

●​ Nodes represent random variables (e.g., "Rain", "Traffic", "Late for work").
●​ Edges represent dependencies between variables.

3.2 Example of a Bayesian Network

Consider a belief network with:

●​ Cloudy (C) → Rain (R) → Traffic (T)


●​ If it is cloudy, there is a chance of rain, which in turn affects traffic.
Conditional Probability Table (CPT)

[The CPT from the source is not reproduced; it lists P(R | C) and P(T | R) for each combination of parent values.]

Using Bayesian Networks, AI can predict the likelihood of being late given weather conditions.
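Not from the notes: a minimal Python sketch of inference in the Cloudy → Rain → Traffic chain by marginalizing out the parent variables. All probability numbers below are illustrative assumptions.

P_C = 0.5                                    # P(Cloudy)
P_R_given_C = {True: 0.8, False: 0.2}        # P(Rain | Cloudy)
P_T_given_R = {True: 0.7, False: 0.1}        # P(Traffic | Rain)

# Marginalize out Cloudy, then Rain, to get P(Rain) and P(Traffic).
P_R = sum(P_R_given_C[c] * (P_C if c else 1 - P_C) for c in (True, False))
P_T = sum(P_T_given_R[r] * (P_R if r else 1 - P_R) for r in (True, False))
print(round(P_R, 3), round(P_T, 3))          # P(Rain) = 0.5, P(Traffic) = 0.4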

4. Default Reasoning ➖

In real life, we often make decisions based on common-sense assumptions even with incomplete
information.

4.1 What is Default Reasoning?

●​ It allows AI systems to make reasonable assumptions in uncertain situations.
●​ AI assumes a default belief unless contradicted by evidence.

4.2 Example of Default Reasoning

●​ "Birds can fly" → Default belief.
●​ "Penguins are birds" → But penguins cannot fly (exception).
●​ AI assumes that a new bird can fly unless proven otherwise.

Default reasoning is useful in Expert Systems, Robotics, and Decision Support Systems.



5. Fuzzy Sets and Fuzzy Logic ➖

Classical logic is binary:

●​ True (1) or False (0).

However, in real life, decisions are often not absolute but rather partial (e.g., "Tall person", "Hot
weather").

5.1 What is Fuzzy Logic?

●​ Fuzzy Logic allows partial truths, where variables have a degree of membership between 0
and 1.
●​ Used for approximate reasoning in uncertain environments.

5.2 Fuzzy Sets and Membership Functions

A Fuzzy Set assigns a degree of membership between 0 and 1.

Example:

●​ Let X = {Short, Medium, Tall} define Height.
●​ Instead of saying "Height > 6ft = Tall", fuzzy logic assigns a membership value (e.g., a person
of height 5'10" might belong to "Tall" with degree 0.7).

5.3 Fuzzy Logic Operators

Fuzzy logic modifies classical logic operators:

1.​ Fuzzy AND (A ∧ B) → min(A, B).
2.​ Fuzzy OR (A ∨ B) → max(A, B).
3.​ Fuzzy NOT (¬A) → 1 - A.

5.4 Applications of Fuzzy Logic

●​ Washing Machines: Adjust wash time based on dirt level.
●​ Air Conditioning: Adjusts cooling based on temperature variation.
●​ AI and Robotics: Decision-making in uncertain environments.
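Not from the notes: a minimal Python sketch of a membership function for "Tall" together with the fuzzy operators listed above; the 160 cm and 190 cm breakpoints are assumptions chosen for the example.

def mu_tall(height_cm):
    # Degree of membership in the fuzzy set "Tall", rising linearly from 160 cm to 190 cm.
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30

def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b):  return max(a, b)
def fuzzy_not(a):    return 1 - a

tall = mu_tall(178)                      # ≈ 0.6
print(tall, fuzzy_and(tall, 0.8), fuzzy_not(tall))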

Conclusion
Handling uncertainty in AI is crucial for real-world decision-making. AI systems use:

1.​ Probability Theory to quantify uncertainty.


2.​ Bayes’ Rule to update beliefs with new evidence.
3.​ Belief Networks to model complex dependencies.
4.​ Default Reasoning to make assumptions in incomplete data.
5.​ Fuzzy Logic to deal with vague, imprecise situations.

These techniques power AI applications in Robotics, Medical Diagnosis, Weather Forecasting, and
Machine Learning!


●​ # Structured Knowledge: Associative Networks, Frame Structures,
Conceptual Dependencies and Scripts.

Structured Knowledge Representation in AI ➖


Structured Knowledge Representation refers to organizing knowledge in a systematic way to help AI
reason, understand, and make decisions efficiently. Unlike logical reasoning and probabilistic
methods, structured knowledge focuses on representing relationships between concepts, objects,
and events in a way that machines can process effectively.

Structured knowledge representation techniques include:

1.​ Associative Networks

2.​ Frame Structures
3.​ Conceptual Dependencies
4.​ Scripts


D
1. Associative Networks ➖

1.1 What is an Associative Network?

An Associative Network (Semantic Network) is a graph-based knowledge representation where:

●​ Nodes represent objects, concepts, or entities.
●​ Edges represent relationships between nodes.

Associative networks store meaningful relationships between concepts, making them useful in
Natural Language Processing (NLP), Expert Systems, and AI-based Knowledge Graphs.

1.2 Example of an Associative Network

Consider the following concepts:

●​ Dog is a Mammal
●​ Mammals breathe air
●​ Dogs bark

An associative network representation:

[Dog] --ISA--> [Mammal] --> (breathes air)
  |
  +--> (barks)
1.3 Types of Relationships in Associative Networks

●​ ISA (Is-a) → Defines class-subclass relationships.


○​ E.g., "A dog is a mammal."
●​ HAS-A (Has-a) → Defines property relationships.
○​ E.g., "A car has wheels."
●​ PART-OF (Part-of) → Defines components of an object.
○​ E.g., "An engine is part of a car."

1.4 Applications of Associative Networks

●​ Chatbots and Virtual Assistants (e.g., Siri, Alexa)


●​ Knowledge Graphs (e.g., Google's Knowledge Panel)

●​ Machine Translation (e.g., Google Translate)

2. Frame Structures

2.1 What is a Frame?

A Frame is a structured data representation used to describe objects, situations, or events. It
consists of:

●​ Slots (attributes or properties)
●​ Values (specific data for the slots)

2.2 Example of a Frame for a Dog

Frame: Dog
  Slot      Value
  ISA       Mammal
  Legs      4 (default)
  Sound     Bark
  Color     Varies

A frame is similar to a structured database record but is more flexible.

2.3 Frames with Default Values and Inheritance

Frames support:

●​ Default Values → Predefined values unless overridden.


○​ E.g., "Dogs usually have 4 legs."
●​ Inheritance → A child frame inherits properties from a parent frame.
○​ E.g., "German Shepherd" inherits from "Dog" but has a unique color.
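Not from the notes: a minimal Python sketch of frames with default values and ISA inheritance, using plain dictionaries; the frame and slot names are illustrative assumptions.

FRAMES = {
    "Dog":            {"isa": "Mammal", "legs": 4, "sound": "bark"},
    "Mammal":         {"breathes": "air"},
    "GermanShepherd": {"isa": "Dog", "color": "black and tan"},
}

def get_slot(frame, slot):
    # Look up a slot, falling back to the parent frame (ISA link) for inherited defaults.
    while frame is not None:
        if slot in FRAMES[frame]:
            return FRAMES[frame][slot]
        frame = FRAMES[frame].get("isa")     # climb the ISA hierarchy
    return None

print(get_slot("GermanShepherd", "legs"))    # 4, inherited from Dog
print(get_slot("GermanShepherd", "color"))   # 'black and tan', stored locally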
2.4 Applications of Frames

●​ AI-based Chatbots (Understanding Contextual Data)


●​ Expert Systems (Medical Diagnosis, Financial Analysis)
●​ Cognitive AI (Simulating Human Memory and Reasoning)

3. Conceptual Dependencies ➖

3.1 What is Conceptual Dependency?

Conceptual Dependency (CD) is a knowledge representation model that captures the meaning of
natural language sentences using a set of primitive concepts.

●​ It reduces language ambiguity by using a common representation for sentences with the
same meaning.
●​ It was developed by Roger Schank (1973) for Natural Language Processing (NLP) and
AI-based reasoning.

3.2 Basic Conceptual Dependency Notation

Concepts are represented using primitive actions (ACTs) such as:

●​ ATRANS → Transfer of possession (e.g., "John gave Mary a book.")
●​ PTRANS → Physical transfer (e.g., "The ball rolled across the floor.")
●​ PROPEL → Applying force (e.g., "John pushed the car.")
●​ MTRANS → Mental transfer (e.g., "John told Mary a secret.")

3.3 Example of Conceptual Dependency Representation

Sentence: "John gave Mary the book."

Conceptual Dependency Representation:

(John) ---ATRANS---> (book) ---TO---> (Mary)

●​ The "ATRANS" action represents transfer of possession.
●​ This eliminates language-dependent variations while retaining meaning.

3.4 Applications of Conceptual Dependencies

●​ Machine Translation (Removing Language Variations)


●​ AI-based Chatbots (Understanding Meaning Instead of Words)
●​ Story Generation in AI (Simulating Human Thought Processes)
4. Scripts ➖

4.1 What is a Script?

A Script is a structured sequence of events representing a typical real-world situation. It helps AI
understand context and predict future events.

●​ Proposed by Roger Schank in 1975.
●​ Useful for understanding stories, planning AI actions, and conversational AI.

4.2 Structure of a Script

A script consists of:

●​ Entry Conditions → Preconditions before an event.
●​ Roles → Participants in the script.
●​ Props → Objects involved.
●​ Scenes → Steps in the process.
●​ Results → Expected outcomes.

4.3 Example of a Restaurant Script

Script: RESTAURANT
●​ Entry Conditions: The customer is hungry and has money.
●​ Roles: Customer, Waiter, Cook, Cashier.
●​ Props: Tables, Menu, Food, Bill, Money.
●​ Scenes: Entering → Ordering → Eating → Paying and leaving.
●​ Results: The customer is no longer hungry and has less money; the restaurant earns money.

If AI understands the script, it can fill in missing details and make better predictions.

4.4 Applications of Scripts

●​ AI-Based Chatbots (Handling Conversations Intelligently)


●​ Story Understanding (AI-Generated Storytelling)
●​ Robotics (Planning and Executing Tasks)
Comparison of Structured Knowledge Representation Techniques ➖

[The comparison table from the source is not reproduced; the four techniques are contrasted in the conclusion below.]
Conclusion
Structured knowledge representation techniques help AI understand and reason about the world.
Each technique has its advantages:

1.​ Associative Networks → Relationship-based knowledge storage.
2.​ Frames → Object-based knowledge representation with default values.
3.​ Conceptual Dependencies → Language-independent understanding of meaning.
4.​ Scripts → Representing sequences of events for context-aware reasoning.

These techniques are widely used in AI applications like NLP, Expert Systems, Chatbots, Robotics,
and Knowledge Graphs, improving AI's ability to learn, reason, and interact with the world.

HAPPY ENDING BY ➖ SAHIL RAUNIYAR & PTUCODER !! 😀

UNIT ➖ 03
●​ # Uninformed Search strategies- Breadth-first search, Uniform-cost search,


Depth-first search, Depth-limited search, Iterative deepening depth-first search,
Bidirectional search, Comparing uninformed search strategies.

Uninformed Search Strategies in Artificial Intelligence ➖

1. Introduction to Uninformed Search

Uninformed search strategies (also called blind search strategies) are general-purpose search
methods that explore a problem space without any additional knowledge about the goal state other
than the problem definition. These strategies systematically explore the search space without using
heuristics.

Uninformed search algorithms include:

1.​ Breadth-First Search (BFS)
2.​ Uniform-Cost Search (UCS)
3.​ Depth-First Search (DFS)
4.​ Depth-Limited Search (DLS)
5.​ Iterative Deepening Depth-First Search (IDDFS)
6.​ Bidirectional Search

2. Breadth-First Search (BFS) ➖

2.1 Concept

Breadth-First Search (BFS) is a tree-based or graph-based search strategy that explores all the
nodes at one depth level before moving to the next level.

2.2 Algorithm Steps

1.​ Start from the root node (initial state) and add it to a queue.
2.​ Expand the first node (FIFO order) and generate its children.
3.​ Enqueue all the children into the queue.
4.​ Repeat the process until the goal state is found or the queue is empty.

2.3 Example

      A
     / \
    B   C
   / \   \
  D   E   F

●​ BFS explores: A → B → C → D → E → F (level-wise).

2.4 Properties

●​ Complete: Yes (finds a solution if one exists).
●​ Optimal: Yes, when all step costs are equal.
●​ Time complexity: O(b^d), Space complexity: O(b^d), where b is the branching factor and d is the depth of the shallowest goal.
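Not from the notes: a minimal Python sketch of BFS over the example tree above, returning the visit order; the adjacency dictionary mirrors the figure.

from collections import deque

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": [], "F": []}

def bfs(start):
    order, queue, seen = [], deque([start]), {start}
    while queue:
        node = queue.popleft()           # FIFO: expand shallowest node first
        order.append(node)
        for child in graph[node]:
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return order

print(bfs("A"))  # ['A', 'B', 'C', 'D', 'E', 'F']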

3. Uniform-Cost Search (UCS) ➖

3.1 Concept

Uniform-Cost Search (UCS) is a variation of BFS that considers path cost. Instead of exploring
nodes in order of depth, UCS expands the node with the lowest path cost (g(n)) first.

3.2 Algorithm Steps

1.​ Start with the initial node and assign it a cost of 0.
2.​ Expand the node with the smallest cost (priority queue).
3.​ Update costs of neighboring nodes and enqueue them.
4.​ Repeat until the goal node is reached.

3.3 Example

Consider a graph where edges represent costs:

        A
      /   \
   (4)     (2)
    B ----- C
       (3)

●​ UCS expands A → C → B (chooses the lowest cost path).

3.4 Properties

●​ Complete: Yes (if all step costs are positive).
●​ Optimal: Yes (expands nodes in order of total path cost).
●​ Time/Space complexity: O(b^(1 + ⌊C*/ε⌋)), where C* is the optimal cost and ε the smallest step cost.
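Not from the notes: a minimal Python sketch of uniform-cost search on the weighted example above; the graph dictionary mirrors the figure (A-B cost 4, A-C cost 2, B-C cost 3).

import heapq

graph = {"A": {"B": 4, "C": 2}, "B": {"C": 3}, "C": {"B": 3}}

def ucs(start, goal):
    frontier = [(0, start, [start])]          # priority queue ordered by path cost g(n)
    best = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nxt, step in graph[node].items():
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None

print(ucs("A", "B"))  # (4, ['A', 'B']): the direct edge beats the A→C→B route (cost 5)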

4. Depth-First Search (DFS) ➖

4.1 Concept

Depth-First Search (DFS) explores as deep as possible along a branch before backtracking.

4.2 Algorithm Steps

1.​ Start with the initial node.
2.​ Expand the first child (LIFO order using a stack).
3.​ Continue deepening until a goal is found or a dead-end is reached.
4.​ Backtrack and explore other branches.

4.3 Example

      A
     / \
    B   C
   / \   \
  D   E   F

●​ DFS explores: A → B → D → E → C → F.

4.4 Properties

●​ Complete: No (can get stuck in infinite-depth or cyclic spaces).
●​ Optimal: No.
●​ Time complexity: O(b^m), Space complexity: O(b·m), where m is the maximum depth.

5. Depth-Limited Search (DLS) ➖

5.1 Concept

DLS is a modified DFS that limits depth exploration to a predefined depth (L) to avoid infinite loops.

5.2 Algorithm Steps

1.​ Perform DFS but terminate when the depth limit L is reached.
2.​ If the goal is found before L, return success; otherwise, fail.

5.3 Example

For depth limit L = 2:

      A
     / \
    B   C
   / \   \
  D   E   F

●​ DLS explores A → B → C (stops at level 2).

5.4 Properties

●​ Complete: Only if the goal lies within the depth limit L.
●​ Optimal: No.
●​ Time complexity: O(b^L), Space complexity: O(b·L).

6. Iterative Deepening Depth-First Search (IDDFS) ➖

6.1 Concept

IDDFS combines the benefits of BFS and DFS by running DFS multiple times with increasing depth limits.

6.2 Algorithm Steps

1.​ Run DLS with L = 0.
2.​ Increase L and repeat DLS until the goal is found.

6.3 Example

For a goal at depth 3, IDDFS performs:

●​ DLS(0)
●​ DLS(1)
●​ DLS(2)
●​ DLS(3) → Goal Found

6.4 Properties

●​ Complete: Yes.
●​ Optimal: Yes, when all step costs are equal.
●​ Time complexity: O(b^d), Space complexity: O(b·d).
7. Bidirectional Search ➖

7.1 Concept

Bidirectional Search runs two searches simultaneously:

●​ One from the initial state (Forward Search).
●​ One from the goal state (Backward Search).

7.2 Algorithm Steps

1.​ Perform BFS (or DFS) from both directions.
2.​ If the paths meet, merge them to form the solution.

7.3 Example

A → B → C → D (Goal)
A ← B ← C ← D

●​ Searches from A to D and from D to A meet in the middle.

7.4 Properties

●​ Complete: Yes (when BFS is used in both directions).
●​ Optimal: Yes, for uniform step costs.
●​ Time/Space complexity: O(b^(d/2)), a large saving over O(b^d).

8. Comparing Uninformed Search Strategies ➖

Strategy         Complete         Optimal              Time                Space
BFS              Yes              Yes (unit costs)     O(b^d)              O(b^d)
UCS              Yes              Yes                  O(b^(1+⌊C*/ε⌋))     O(b^(1+⌊C*/ε⌋))
DFS              No               No                   O(b^m)              O(b·m)
DLS              If goal ≤ L      No                   O(b^L)              O(b·L)
IDDFS            Yes              Yes (unit costs)     O(b^d)              O(b·d)
Bidirectional    Yes              Yes (unit costs)     O(b^(d/2))          O(b^(d/2))
Conclusion ➖

Uninformed search strategies explore the search space without heuristics. Each has advantages:

●​ BFS and UCS are complete but require large memory.
●​ DFS is memory-efficient but may not find the best path.
●​ IDDFS balances BFS and DFS benefits.
●​ Bidirectional search is optimal for well-defined goals.

For large problems, heuristic-informed search (like A*) is preferred over uninformed search methods.
●​ # Informed (Heuristic) Search Strategies- Hill Climbing, Simulated Annealing,


Genetic Algorithm, Greedy best-first search, A* and optimal search, Memory
bounded heuristic search.

Informed (Heuristic) Search Strategies in Artificial Intelligence ➖


1. Introduction to Informed Search Strategies ➖
Informed search strategies, also known as heuristic search algorithms, use problem-specific
knowledge (heuristics) to find solutions more efficiently than uninformed search methods. A heuristic
function (h(n)) estimates the cost from a given node to the goal.

Heuristic search algorithms include:

1.​ Hill Climbing
2.​ Simulated Annealing
3.​ Genetic Algorithm
4.​ Greedy Best-First Search
5.​ A* Search and Optimal Search
6.​ Memory-Bounded Heuristic Search
O
2. Hill Climbing Algorithm ➖

2.1 Concept

Hill Climbing is a local search algorithm that starts from an initial state and moves towards
higher-valued states (states with better heuristic values). It is similar to climbing a hill, always taking
steps in the direction of increasing evaluation values.

2.2 Algorithm Steps

1.​ Start with an initial solution (state).
2.​ Evaluate neighbor states using a heuristic function.
3.​ Move to the best neighbor with a higher heuristic value.
4.​ Repeat until no better neighbor exists (local maximum).

2.3 Example

Starting at a state with value 3, hill climbing moves to a neighbor valued 5, then to one valued 6.
If no neighbor of that state is better, the algorithm stops at 6 (a local maximum) even if a higher
peak exists elsewhere in the search space.
2.4 Problems with Hill Climbing

1.​ Local Maxima - The algorithm may stop at a local peak instead of the global peak.
2.​ Plateau - A flat region with no change in heuristic value may cause the algorithm to get stuck.
3.​ Ridges - A narrow region where a greedy approach fails.

2.5 Variants of Hill Climbing

1.​ Steepest-Ascent Hill Climbing - Considers all neighbors and picks the best.
2.​ Stochastic Hill Climbing - Chooses randomly among better neighbors.
3.​ First-Choice Hill Climbing - Evaluates neighbors randomly and moves to the first improvement.
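Not from the notes: a minimal Python sketch of steepest-ascent hill climbing on a one-dimensional landscape; the value function and the x ± 1 neighborhood are illustrative assumptions.

def value(x):
    return -(x - 7) ** 2 + 50           # a single peak at x = 7

def hill_climb(x=0):
    while True:
        neighbors = [x - 1, x + 1]
        best = max(neighbors, key=value)
        if value(best) <= value(x):     # no better neighbor: stop at the (local) maximum
            return x
        x = best

print(hill_climb())  # 7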

3. Simulated Annealing ➖

3.1 Concept

Simulated Annealing (SA) is a probabilistic search algorithm that avoids getting stuck in local maxima
by allowing occasional bad moves. It is inspired by annealing in metallurgy, where materials are
heated and cooled to reach a stable state.

3.2 Algorithm Steps

1.​ Start with an initial solution and a high temperature (T).
2.​ Select a random neighbor.
3.​ If the new state is better, accept it.
4.​ If worse, accept it with a probability based on temperature: P = e^(-ΔE/T).
5.​ Decrease the temperature over time.
6.​ Repeat until the system cools down.

3.3 Benefits of Simulated Annealing

✅ Escapes local maxima​
✅ Finds near-optimal solutions​
✅ Works well for large problems
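Not from the notes: a minimal Python sketch of simulated annealing maximizing the same value() function used in the hill-climbing sketch; the starting temperature and cooling schedule are assumptions.

import math, random

def value(x):
    return -(x - 7) ** 2 + 50

def simulated_annealing(x=0, T=10.0, cooling=0.95, steps=500):
    for _ in range(steps):
        neighbor = x + random.choice([-1, 1])
        delta = value(neighbor) - value(x)
        # Always accept improvements; accept worse moves with probability e^(-|ΔE|/T).
        if delta > 0 or random.random() < math.exp(delta / T):
            x = neighbor
        T = max(T * cooling, 1e-6)       # cool down gradually
    return x

print(simulated_annealing())  # typically close to 7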



4. Genetic Algorithm (GA) ➖

4.1 Concept

Genetic Algorithms (GA) use biological evolution principles (mutation, crossover, selection) to find
optimal solutions. A GA maintains a population of solutions and evolves them over generations.

4.2 Algorithm Steps

1.​ Generate an initial population (random solutions).
2.​ Evaluate the fitness of each individual.
3.​ Select parents for reproduction (based on fitness).
4.​ Crossover (recombine) genes to create offspring.
5.​ Mutate offspring randomly to introduce diversity.
6.​ Repeat until a stopping condition is met.

4.3 Example: Solving a Traveling Salesman Problem (TSP)

●​ Chromosomes represent different city orderings.
●​ Fitness is the total distance traveled (shorter is fitter).
●​ Crossover swaps city sequences between parents.
●​ Mutation randomly changes the city order.

4.4 Advantages of GA

✅ Works well for complex optimization problems​
✅ Can escape local optima​
✅ Good for NP-hard problems
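Not from the notes: a minimal Python sketch of a genetic algorithm maximizing f(x) = x(31 − x) over 5-bit integers; the population size, mutation rate, and fitness function are illustrative assumptions.

import random

def fitness(x):
    return x * (31 - x)                      # peak at x = 15 or 16

def evolve(generations=30, pop_size=8):
    pop = [random.randint(0, 31) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)  # selection: keep the fitter half
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, 4)
            child = ((a >> cut) << cut) | (b & ((1 << cut) - 1))   # one-point bit crossover
            if random.random() < 0.1:
                child ^= 1 << random.randint(0, 4)                 # mutation: flip one bit
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # usually 15 or 16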

5. Greedy Best-First Search ➖

5.1 Concept

Greedy Best-First Search (GBFS) selects the most promising node based on the heuristic function
h(n) and expands it first.

5.2 Algorithm Steps

1.​ Initialize a priority queue with the start node.
2.​ Expand the node with the lowest h(n).
3.​ If the goal is found, return success.
4.​ Otherwise, add the node's children to the queue.
5.​ Repeat until a solution is found or no more nodes remain.

5.3 Example

For a graph with heuristic values:

        A(6)
       /    \
    B(4)    C(2)
            /  \
         D(3)  E(0) (Goal)

GBFS explores A → C → E because C has the lowest h(n).

5.4 Properties

❌ Not optimal (may take a suboptimal path)​
✅ Faster than BFS

6. A* Search and Optimal Search ➖

6.1 Concept

A* Search combines Greedy Best-First Search and Uniform-Cost Search using:

f(n) = g(n) + h(n)

where:

●​ g(n) = cost from the start to node n.
●​ h(n) = estimated cost from node n to the goal.
●​ f(n) = total estimated cost.

6.2 Algorithm Steps

1.​ Initialize a priority queue with the start node.
2.​ Expand the node with the lowest f(n).
3.​ If the goal is reached, return success.
4.​ Otherwise, add the node's children to the queue.
5.​ Repeat until a solution is found.

6.3 Example

For a graph with edge costs:

        A
      /   \
   (2)     (4)
    B       C
    |       |
   (3)     (2)
    D       E (Goal)

A* expands A → B → D → E as it minimizes f(n).

6.4 Properties

✅ Complete​
✅ Optimal (if h(n) is admissible and consistent)​
❌ Higher memory usage


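Not from the notes: a minimal Python sketch of A* on a small weighted graph. The graph and, in particular, the heuristic values h below are assumptions chosen for illustration; with this admissible heuristic the search returns the cheapest route A → C → E (cost 6).

import heapq

graph = {"A": {"B": 2, "C": 4}, "B": {"D": 3}, "C": {"E": 2}, "D": {"E": 4}, "E": {}}
h = {"A": 5, "B": 4, "C": 2, "D": 4, "E": 0}          # assumed admissible estimates to goal E

def a_star(start="A", goal="E"):
    frontier = [(h[start], 0, start, [start])]        # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for child, cost in graph[node].items():
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + h[child], new_g, child, path + [child]))
    return None

print(a_star())  # (['A', 'C', 'E'], 6)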

7. Memory-Bounded Heuristic Search ➖


7.1 Concept

A* requires high memory. Memory-bounded heuristic search addresses this issue by limiting memory
usage.

7.2 Types of Memory-Bounded Search

1.​ Iterative-Deepening A* (IDA*) - Uses depth-first search with cost limits.
2.​ Simplified Memory-Bounded A* (SMA*) - Uses a fixed memory limit.

7.3 Benefits

✅ Solves large problems with limited RAM.​
✅ Maintains heuristic efficiency with lower memory.


D
8. Comparing Heuristic Search Strategies ➖

Strategy                 Complete    Optimal                     Notes
Hill Climbing            No          No                          Fast, but can stop at local maxima
Simulated Annealing      No          Near-optimal                Escapes local maxima probabilistically
Genetic Algorithm        No          Near-optimal                Good for large, NP-hard problems
Greedy Best-First        No          No                          Fast, guided only by h(n)
A*                       Yes         Yes (admissible h)          High memory usage
IDA* / SMA*              Yes         Yes (admissible h)          Memory-bounded variants of A*

9. Conclusion ➖
Informed search strategies use heuristics to guide the search, making them more efficient than
uninformed search methods. Among them:

●​ A* is optimal and widely used.
●​ Genetic Algorithms are useful for optimization.
●​ Simulated Annealing helps avoid getting trapped in local optima.

These techniques enhance AI decision-making in real-world applications like robotics, pathfinding,
and game development.

HAPPY ENDING BY ➖ SAHIL RAUNIYAR & PTUCODER !! 😀

UNIT ➖ 04
●​ # Natural language processing: Grammars, Parsing. ➖
Natural Language Processing (NLP): Grammars and Parsing ➖
1. Introduction to Natural Language Processing (NLP) ➖
Natural Language Processing (NLP) is a subfield of Artificial Intelligence (AI) that focuses on enabling
machines to understand, interpret, and generate human language. It is used in various applications
such as machine translation, speech recognition, sentiment analysis, and chatbot development.

NLP involves two major components:

1.​ Natural Language Understanding (NLU) – Understanding the meaning, structure, and intent of
a sentence.
2.​ Natural Language Generation (NLG) – Generating human-like text based on structured data.

A key challenge in NLP is dealing with the ambiguity, variability, and complexity of human
O
language, which requires a well-defined set of grammars and parsing techniques to process and
analyze text efficiently.
C
2. Grammars in NLP ➖

A grammar is a set of rules that defines how words and phrases are structured in a language. It
provides the framework for syntax analysis, which helps machines understand the structure of
sentences.

2.1 Types of Grammars

1.​ Phrase Structure Grammar (PSG) – Represents sentences using hierarchical structures.
2.​ Dependency Grammar (DG) – Focuses on relationships between words rather than phrase
structures.
3.​ Context-Free Grammar (CFG) – Uses production rules to define how symbols can be replaced.

2.2 Context-Free Grammar (CFG)

A Context-Free Grammar (CFG) is widely used in NLP for parsing sentences. It consists of:

●​ Terminals (Σ) – Actual words (e.g., "dog", "runs", "quickly").


●​ Non-Terminals (V) – Categories of words (e.g., Noun, Verb, Sentence).
●​ Start Symbol (S) – The root of the grammar.
●​ Production Rules (P) – Rules that describe how symbols are expanded.
Example of a CFG:

S → NP VP
NP → Det N
VP → V NP
Det → "the" | "a"
N → "cat" | "dog"
V → "chased" | "saw"

A valid sentence:​
"The cat chased a dog"

2.3 Dependency Grammar

Instead of phrase structures, Dependency Grammar (DG) represents sentences as directed trees,
where words (nodes) are connected by grammatical relationships (edges).

Example:

Sentence: "The cat chased the mouse."

Dependency Structure:

chased
├── cat
│   └── The
└── mouse
    └── The

Each word depends on another, defining the sentence structure.

3. Parsing in NLP ➖
Parsing is the process of analyzing a sentence based on grammar rules to extract meaning and
structure. It is a fundamental step in syntactic analysis, which helps machines understand language by
breaking sentences into their components.

3.1 Types of Parsing

There are two main types of parsing techniques:


1.​ Top-Down Parsing – Starts with the highest-level structure (sentence) and works down to
words.
2.​ Bottom-Up Parsing – Starts with words and builds up to the sentence structure.

4. Parsing Techniques ➖

4.1 Top-Down Parsing

Top-down parsing begins with the start symbol and applies production rules recursively to derive the
sentence.

Example: Recursive Descent Parsing

Recursive descent parsing is a top-down parsing method that recursively expands non-terminals
until the input is matched.

Example Grammar:

S → NP VP
NP → Det N
VP → V NP

Sentence: "The cat saw a dog"

Steps:

1.​ S → NP VP
2.​ NP → Det N → (The cat)
3.​ VP → V NP → (saw a dog)

✅ Successfully matches the sentence.
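Not from the notes: a minimal Python sketch of a recursive-descent parser for the toy grammar above. The token-consuming helper functions are illustrative assumptions, not a general parsing library.

tokens = "the cat saw a dog".split()
pos = 0

def match(words):
    # Consume the next token if it belongs to the given word class.
    global pos
    if pos < len(tokens) and tokens[pos] in words:
        pos += 1
        return True
    return False

def parse_np():  # NP → Det N
    return match({"the", "a"}) and match({"cat", "dog"})

def parse_vp():  # VP → V NP
    return match({"saw", "chased"}) and parse_np()

def parse_s():   # S → NP VP, and all input must be consumed
    return parse_np() and parse_vp() and pos == len(tokens)

print(parse_s())  # True: "the cat saw a dog" is derived from S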



Advantages & Disadvantages

✅ Simple and easy to implement​
❌ Struggles with left-recursive rules (e.g., NP → NP Conj NP)

4.2 Bottom-Up Parsing

Bottom-up parsing starts with the words and builds up the structure until it reaches the start symbol.

Example: Shift-Reduce Parsing

Shift-Reduce Parsing builds the parse tree by:

1.​ Shifting input words onto a stack.
2.​ Reducing them using grammar rules.

Example: "The cat saw a dog"

Stack: [The] → [The, cat] → [NP] → [NP, saw] → [NP, saw, a] → [NP, saw, a, dog] → [NP, saw, NP] → [NP, VP] → [S]

✅ Successfully constructs the sentence.


Advantages & Disadvantages

✅ Handles left-recursive rules efficiently​
❌ Requires backtracking in ambiguous cases

4.3 Chart Parsing (Dynamic Programming Approach)

Chart parsing uses dynamic programming to store intermediate parsing results, preventing redundant
computations.

CYK Algorithm (Cocke-Younger-Kasami Algorithm)

The CYK algorithm is a bottom-up parser for context-free grammars in Chomsky Normal Form
(CNF).

Example:

S → NP VP
NP → Det N
VP → V NP
Det → "The" | "A"
N → "cat" | "dog"
V → "chased"

For the sentence "The cat chased a dog", CYK builds a table and determines if it matches the
grammar.

✅ Efficient for complex grammars​
❌ Requires CNF conversion
5. Comparing Parsing Techniques ➖

Technique              Direction     Strengths                           Limitations
Recursive Descent      Top-down      Simple to implement                 Fails on left-recursive rules
Shift-Reduce           Bottom-up     Handles left recursion              May need backtracking when ambiguous
CYK / Chart Parsing    Bottom-up     Efficient, avoids recomputation     Grammar must be in CNF

6. Applications of Grammars and Parsing in NLP ➖
1.​ Syntax Checking – Used in compilers and spell checkers.
2.​ Machine Translation – Helps in language conversion.
3.​ Speech Recognition – Transforms spoken words into structured sentences.
4.​ Chatbots & Virtual Assistants – Analyzes user queries for meaningful responses.
5.​ Text-to-Speech Systems – Converts structured text into natural speech.

7. Conclusion

Grammars and parsing are essential components of NLP that help machines understand and process
human language. Context-Free Grammars (CFGs) and Dependency Grammars (DGs) define
sentence structures, while parsing techniques like Recursive Descent, Shift-Reduce, and CYK
Parsing analyze them. These methods are widely used in speech recognition, machine translation,
and AI chatbots to improve language understanding and generation.
●​ # Pattern Recognition: Recognition and Classification Process-Decision Theoretic


Classification, Syntactic Classification; Learning Classification Patterns,
Recognizing and Understanding Speech.

Pattern Recognition: Recognition and Classification Process ➖


Pattern Recognition is a fundamental aspect of Artificial Intelligence (AI) and Machine Learning (ML),
focusing on the identification and classification of patterns in data. It is widely used in applications such
as image processing, speech recognition, medical diagnosis, and biometric authentication.

1. Introduction to Pattern Recognition ➖

1.1 What is Pattern Recognition?

Pattern Recognition is the process of identifying and categorizing input data based on predefined
patterns. It involves detecting similarities, regularities, and structures within data.

Example Applications:

●​ Face Recognition – Identifying individuals based on facial features.
●​ Speech Recognition – Converting spoken words into text.
●​ Handwriting Recognition – Converting handwritten text into digital text.
●​ Medical Diagnosis – Detecting diseases based on medical images.

2. Recognition and Classification Process

Pattern recognition systems follow a systematic process for recognizing and classifying patterns. The
key stages include:

2.1 Stages of Pattern Recognition

1.​ Data Collection – Gathering raw data from various sources (e.g., images, speech, text).
2.​ Preprocessing – Removing noise and normalizing the data.
3.​ Feature Extraction – Identifying key features that represent the data.
4.​ Classification – Assigning the data to predefined categories.
5.​ Post-processing – Refining the classification and improving accuracy.

Each stage plays a crucial role in ensuring accurate recognition and classification.

3. Decision Theoretic Classification ➖

3.1 Overview

Decision Theoretic Classification is a statistical approach to classification based on probability and
decision theory. It involves assigning an unknown pattern to the most probable class using Bayesian
decision rules.

3.2 Bayesian Decision Theory

Bayesian Decision Theory calculates the probability of a pattern belonging to a class and makes
decisions based on maximum likelihood estimation (MLE) or maximum a posteriori estimation
(MAP).

Bayes' Rule:

P(Ci | X) = P(X | Ci) P(Ci) / P(X)

Where:

●​ P(Ci | X) = Probability that class Ci is correct given data X.
●​ P(X | Ci) = Likelihood of data X given class Ci.
●​ P(Ci) = Prior probability of class Ci.
●​ P(X) = Probability of data X.

3.3 Decision Rule

A pattern X is classified into class Ci if:

P(Ci | X) > P(Cj | X) for all j ≠ i

This ensures that the most probable class is chosen.
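Not from the notes: a minimal Python sketch of this decision rule for two classes; the priors and likelihoods below are illustrative assumptions.

priors = {"flu": 0.3, "cold": 0.7}
likelihood = {"flu": 0.8, "cold": 0.2}     # P(observed symptom pattern X | class)

def classify():
    # P(X) cancels in the comparison, so compare P(X | Ci) * P(Ci) directly.
    scores = {c: likelihood[c] * priors[c] for c in priors}
    return max(scores, key=scores.get), scores

print(classify())  # ('flu', {'flu': 0.24, 'cold': 0.14})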


O
4. Syntactic Classification ➖

4.1 Overview

Syntactic Classification (or Structural Pattern Recognition) represents patterns as a combination of
simpler elements (primitives). It is based on the idea that complex patterns can be described
using grammatical rules.

4.2 Grammar-Based Approach

A pattern is described using a formal grammar, which consists of:

●​ Terminals – Basic elements of the pattern.
●​ Non-terminals – Higher-level abstractions.
●​ Production Rules – Define how patterns are constructed.

Example: Handwriting Recognition

Consider recognizing the letter "A" using a grammar whose primitives are strokes:

A → / \ (two diagonal lines) combined with — (a horizontal line)

This approach is useful in handwriting recognition, scene understanding, and bioinformatics.

4.3 Comparison of Decision Theoretic and Syntactic Classification

●​ Decision Theoretic Classification: works on numeric feature vectors and probability-based decision rules (e.g., Bayesian classifiers).
●​ Syntactic Classification: works on structural primitives and grammar rules, suited to patterns with clear internal structure (handwriting, scenes).
5. Learning Classification Patterns ➖

5.1 Supervised Learning

●​ Uses labeled data for training.
●​ Examples: Neural Networks, Decision Trees, Support Vector Machines (SVMs).
●​ Used in applications like spam detection, handwriting recognition.

5.2 Unsupervised Learning

●​ No labeled data; groups patterns into clusters.
●​ Example: K-Means Clustering.
●​ Used in market segmentation, anomaly detection.

5.3 Reinforcement Learning

●​ Learns based on rewards and penalties.
●​ Used in robotics, game AI, self-driving cars.

6. Recognizing and Understanding Speech ➖

Speech Recognition is a critical application of Pattern Recognition that involves converting spoken
language into text.

6.1 Steps in Speech Recognition

1.​ Acoustic Signal Processing – Capturing and filtering audio signals.
2.​ Feature Extraction – Identifying unique characteristics (e.g., Mel-Frequency Cepstral
Coefficients (MFCCs)).
3.​ Pattern Matching – Comparing input with stored patterns.
4.​ Language Modeling – Using grammar rules to refine recognition.
6.3 Challenges in Speech Recognition

●​ Accent Variation – Different pronunciations of words.


●​ Background Noise – External sounds affecting recognition.
●​ Homophones – Words that sound alike but have different meanings.

6.4 Applications of Speech Recognition

✅ Voice Assistants – Siri, Google Assistant, Alexa​


✅ Automated Customer Support – Chatbots, Call Center Automation​
✅ Medical Transcription – Converting doctors’ dictations into text

7. Conclusion

Pattern Recognition is essential for AI-driven applications, enabling machines to classify, analyze,
and interpret data efficiently.

●​ Decision Theoretic Classification uses probability-based models like Bayesian classifiers.
●​ Syntactic Classification relies on grammatical structures for recognition.
●​ Learning Classification Patterns involves supervised, unsupervised, and reinforcement
learning techniques.
●​ Speech Recognition applies these principles to convert spoken words into text.

The advancements in Machine Learning and Deep Learning continue to improve accuracy and
efficiency in pattern recognition applications like speech recognition, image analysis, and medical
diagnosis.
●​ # Expert System Architectures: Characteristics, Rule-Based System


Architectures, Nonproduction System Architectures, Knowledge Acquisition
and Validation.

Expert System Architectures ➖


1. Introduction to Expert Systems ➖
An Expert System (ES) is an AI-based software that simulates human expertise to solve complex
problems in a specific domain. These systems use a knowledge base and an inference engine to
make decisions similar to a human expert.

1.1 Characteristics of Expert Systems

Expert systems have the following key characteristics:

✅ High Performance – Provides accurate and fast solutions.​


✅ Symbolic Reasoning – Uses logical rules rather than numerical calculations.​
✅ Knowledge-Based – Stores domain-specific knowledge.​

D
✅ Decision Making – Provides solutions even with incomplete data.​
✅ Explainability – Justifies its decisions with reasoning.​
✅ Self-Learning – Some systems can update their knowledge over time.
O
1.2 Applications of Expert Systems
C
●​ Medical Diagnosis – MYCIN (for bacterial infections).
●​ Engineering Design – XCON (configuring computer systems).
●​ Business and Finance – Loan approval, fraud detection.
U

●​ Agriculture – Pest control and crop management.



2. Rule-Based System Architectures


A Rule-Based Expert System consists of rules that define how input data leads to conclusions. These
rules are typically written as IF-THEN statements.

2.1 Components of Rule-Based Systems



1.​ Knowledge Base – Stores facts and rules.


2.​ Inference Engine – Applies logical reasoning to reach conclusions.
3.​ User Interface – Allows users to interact with the system.
4.​ Explanation Facility – Provides reasoning behind decisions.
5.​ Knowledge Acquisition Module – Helps in adding or modifying rules.

2.2 Working of Rule-Based Expert Systems

The Knowledge Base contains rules such as:​

IF a patient has high fever AND sore throat THEN diagnose as the flu.
●​ The Inference Engine applies Forward Chaining or Backward Chaining to process rules and
infer conclusions.

2.3 Inference Mechanisms

●​ Forward Chaining (Data-Driven Reasoning)


○​ Starts from given facts and applies rules to reach a conclusion.
○​ Example: Diagnosing diseases based on symptoms.
●​ Backward Chaining (Goal-Driven Reasoning)
○​ Starts with a goal and works backward to find supporting facts.
○​ Example: Debugging a network issue by checking causes step-by-step.
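Not from the notes: a minimal Python sketch of backward (goal-driven) chaining over IF-THEN rules; the rule and fact names are illustrative assumptions based on the flu example above.

RULES = {
    "flu":        ["high_fever", "sore_throat"],   # IF high fever AND sore throat THEN flu
    "high_fever": ["temperature_above_39"],
}
FACTS = {"temperature_above_39", "sore_throat"}

def prove(goal):
    # A goal holds if it is a known fact, or every premise of some rule for it can be proved.
    if goal in FACTS:
        return True
    premises = RULES.get(goal)
    return premises is not None and all(prove(p) for p in premises)

print(prove("flu"))  # True: both premises can be established from the facts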

2.4 Advantages and Disadvantages of Rule-Based Systems

✅ Rules are explicit IF-THEN statements, so the system is easy to understand, modify, and explain.​
❌ Large rule bases become hard to maintain and handle uncertainty poorly.
D ➖
3. Nonproduction System Architectures
O
Unlike rule-based systems that rely on IF-THEN rules, nonproduction expert systems use alternative
C
approaches such as neural networks, case-based reasoning, and fuzzy logic.

3.1 Types of Nonproduction Architectures


U

1.​ Case-Based Reasoning (CBR)


○​ Solves problems by reusing past experiences (stored cases).
PT

○​ Example: Diagnosing car problems based on previous repairs.


2.​ Neural Networks
○​ Uses interconnected nodes to learn patterns and make predictions.
○​ Example: Handwriting recognition, medical diagnosis.
3.​ Fuzzy Logic Systems
○​ Handles uncertainty and imprecise data.
@

○​ Example: Controlling home appliances based on environmental conditions.


4.​ Hybrid Expert Systems
○​ Combines multiple AI techniques (rules + neural networks + fuzzy logic).
○​ Example: AI-powered stock market analysis.
4. Knowledge Acquisition and Validation ➖

4.1 Knowledge Acquisition

Knowledge acquisition is the process of collecting, organizing, and storing expert knowledge into the
system.

Methods of Knowledge Acquisition

●​ Interviews with Experts – Directly collecting knowledge from human experts.


●​ Observation of Expert Behavior – Studying how experts solve problems.
●​ Analysis of Documents & Reports – Extracting knowledge from books, articles, and
databases.

●​ Machine Learning Approaches – Automatically learning from data patterns.

4.2 Knowledge Validation

Once knowledge is acquired, it needs to be validated to ensure accuracy and reliability.

Techniques for Knowledge Validation

●​ Rule Testing – Checking if rules lead to correct conclusions.
●​ Simulation & Case Studies – Running the system on real-world problems.
●​ Expert Review – Asking domain experts to verify the system's decisions.
●​ Comparison with Human Experts – Measuring the system's performance against human
specialists.
C
5. Conclusion ➖

Expert systems are AI-driven problem-solving tools that use structured knowledge and inference
mechanisms. Rule-based architectures rely on IF-THEN statements, while nonproduction
architectures use methods like neural networks and fuzzy logic. Knowledge acquisition and
validation are critical to ensuring the system's reliability and effectiveness.

With advancements in machine learning and deep learning, modern expert systems are becoming
more accurate, adaptable, and capable of handling complex, real-world problems.

HAPPY ENDING BY ➖ SAHIL RAUNIYAR & PTUCODER !! 😀

Previous Year Questions Paper


BCA (Sem.–6)
ARTIFICIAL INTELLIGENCE
Subject Code : UGCA-1945
M.Code : 91689
Date of Examination : 02-07-2022

1. Write briefly : ➖

a. Define intelligence. What is the intelligent behaviour of a machine? Discuss various levels of artificial
intelligence.
b. Discuss the history of artificial intelligence briefly.
c. What is Bayesian reasoning? How does an expert system rank potentially true hypotheses? Give an
example.

D
d. What are a fuzzy set and a Membership function? What is the difference between a crisp set and a
fuzzy set? Determine possible fuzzy sets on the universe of discourse for man weights.
e. Write down the steps of breadth first search. Illustrate with examples.
O
f. Explain the process of memory bounded heuristic search.
g. Discuss the advantages of using context-free grammars in design of practical natural language
parsers.
C
h. Give a heuristic that a block-stacking program might use to solve problems of the form “stack block X
on block Y”.
U

i. What is an expert system shell? What are the fundamental characteristics of an expert system?
j. What is pattern recognition? Discuss its applications.
PT

ANSWERS

a. Define Intelligence. What is the Intelligent Behavior of a Machine? Discuss Various


Levels of Artificial Intelligence.
@

Intelligence refers to the ability to acquire knowledge, apply reasoning, adapt to new situations, learn
from experiences, solve problems, and understand complex ideas. It is often divided into various forms
such as logical reasoning, creativity, and problem-solving abilities.

Intelligent Behavior of a Machine: Intelligent behavior in a machine refers to the ability of the machine
to perform tasks that require human-like cognitive functions. These tasks include reasoning, learning,
perception, problem-solving, understanding natural language, and even emotional responses in some
cases.

Levels of Artificial Intelligence:


1.​ Artificial Narrow Intelligence (ANI): This is specialized AI that can perform a specific task. For
example, voice assistants like Siri or Alexa.
2.​ Artificial General Intelligence (AGI): This level of AI can perform any intellectual task that a
human can. It exhibits learning, reasoning, problem-solving, and understanding.
3.​ Artificial Superintelligence (ASI): This is a hypothetical AI that surpasses human intelligence in
every aspect—creativity, problem-solving, emotional intelligence, and even social intelligence.

b. Discuss the History of Artificial Intelligence Briefly.

The history of AI can be divided into key stages:

1.​ 1950s-1960s: The term "Artificial Intelligence" was coined by John McCarthy in 1956. Early

pioneers like Alan Turing developed foundational theories, including the Turing Test for machine
intelligence.
2.​ 1960s-1970s: Early AI research focused on solving mathematical problems, learning, and
reasoning, with systems such as the General Problem Solver (GPS) and ELIZA (a natural
language processing program).
3.​ 1980s: Expert systems became popular in commercial applications, marking a period of success

D
for rule-based reasoning and symbolic AI.
4.​ 1990s: Machine learning techniques gained prominence, with advances in statistical methods
and neural networks.
O
5.​ 2000s-present: The rise of deep learning, large datasets, and powerful computing led to
significant breakthroughs in computer vision, natural language processing, and autonomous
systems.

c. What is Bayesian Reasoning? How Does an Expert System Rank Potentially True
Hypotheses? Give an Example.

Bayesian Reasoning is a statistical method based on Bayes' Theorem, which describes how to update
the probability of a hypothesis based on new evidence. It is used to make decisions under uncertainty.

In an Expert System, Bayesian reasoning helps rank hypotheses based on their likelihood. The system
assigns probabilities to various hypotheses and updates them as new information is provided.

Example: Suppose an expert system is diagnosing a disease. It might have hypotheses like:

●​ Hypothesis 1: The patient has Flu.


●​ Hypothesis 2: The patient has Cold.

If the system receives evidence like fever (which is more likely for flu), it will update the probability of Flu
being true, while decreasing the probability of Cold.
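
A minimal Python sketch of this update using Bayes' theorem; the prior and likelihood numbers below are assumptions chosen only for demonstration, not real medical data:

# Illustrative Bayesian update for the flu-vs-cold example.
priors = {"Flu": 0.5, "Cold": 0.5}        # P(hypothesis) before any evidence (assumed)
likelihoods = {"Flu": 0.9, "Cold": 0.3}   # P(fever | hypothesis) (assumed)

# Bayes' theorem: P(H | fever) = P(fever | H) * P(H) / P(fever)
evidence = sum(likelihoods[h] * priors[h] for h in priors)
posteriors = {h: likelihoods[h] * priors[h] / evidence for h in priors}

print(posteriors)   # {'Flu': 0.75, 'Cold': 0.25} -> Flu is now ranked higher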

d. What are a Fuzzy Set and a Membership Function? What is the Difference Between a
Crisp Set and a Fuzzy Set? Determine Possible Fuzzy Sets on the Universe of Discourse
for Man Weights.
Fuzzy Set: A fuzzy set is a set in which elements have degrees of membership between 0 and 1. Unlike
traditional sets (crisp sets), where an element is either in or out of the set, in fuzzy sets, an element can
partially belong to the set.

Membership Function: A membership function is used to define the degree to which an element
belongs to a fuzzy set. It assigns a value between 0 and 1 for each element.

Crisp Set vs Fuzzy Set:

●​ A Crisp Set has binary membership: an element either belongs or does not belong to the set.
●​ A Fuzzy Set allows partial membership, where elements can belong to a set to varying degrees.

Example of Fuzzy Sets for Man Weights: Let’s say the universe of discourse for man weights is from
40kg to 120kg. The fuzzy sets might be:

●​ Lightweight: Membership function could be μ(weight) = (120 - weight)/80, for weights from 40kg
to 120kg.
●​ Heavyweight: Membership function could be μ(weight) = (weight - 40)/80, for weights from 40kg
to 120kg.
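
A small Python sketch of these two membership functions; clamping values outside 40–120 kg to the range [0, 1] is an added assumption for illustration:

def lightweight(weight):
    # mu(weight) = (120 - weight) / 80, clamped to [0, 1]
    return max(0.0, min(1.0, (120 - weight) / 80))

def heavyweight(weight):
    # mu(weight) = (weight - 40) / 80, clamped to [0, 1]
    return max(0.0, min(1.0, (weight - 40) / 80))

print(lightweight(60), heavyweight(60))   # 0.75 0.25 -> 60 kg is mostly "lightweight"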

e. Write Down the Steps of Breadth First Search. Illustrate with Example.
Breadth-First Search (BFS) is an algorithm used for traversing or searching tree or graph data
structures.
Steps:

1.​ Initialize a queue and enqueue the starting node.


2.​ While the queue is not empty:

○​ Dequeue a node.
○​ Visit and process the node.
○​ Enqueue all unvisited adjacent nodes.

3.​ Repeat until all nodes are visited.

Example:​
For a graph with nodes A, B, C, D, E:

      A
     / \
    B   C
    |   |
    D   E
●	Enqueue A. Dequeue and visit A, then enqueue its neighbours B and C.
●	Dequeue and visit B (enqueue D), then dequeue and visit C (enqueue E).
●	Dequeue and visit D, then E. Visiting order: A, B, C, D, E.
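
A short Python sketch of these steps on the same graph; the adjacency-list dictionary below is an assumed encoding of the example:

from collections import deque

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}

def bfs(start):
    visited, order = {start}, []
    queue = deque([start])             # step 1: enqueue the starting node
    while queue:                       # step 2: repeat until the queue is empty
        node = queue.popleft()         # dequeue a node
        order.append(node)             # visit and process it
        for neighbour in graph[node]:  # enqueue all unvisited adjacent nodes
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

print(bfs("A"))   # ['A', 'B', 'C', 'D', 'E']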

f. Explain the Process of Memory-Bounded Heuristic Search.

Memory-bounded heuristic search algorithms are designed to find a solution within a limited amount of
memory. These algorithms discard less promising paths or use memory more efficiently.

Steps:

1.​ Limited Memory: The algorithm only stores a subset of all possible states, prioritizing those that

seem most promising.
2.​ Heuristic Function: A heuristic guides the search towards likely solutions, based on available
knowledge.
3.​ Backtracking or Pruning: Memory-bound methods often prune or backtrack when memory
limitations are reached, focusing on critical areas of the search space.

g. Discuss the Advantages of Using Context-Free Grammars in the Design of Practical
Natural Language Parsers.
Advantages of Context-Free Grammars (CFG) in NLP Parsing:
1.​ Simplicity: CFG provides a clear and simple way to define the structure of sentences in a
language.
2.​ Generative Power: CFG can generate a wide variety of syntactically correct sentences.

3.​ Efficiency: Many natural language parsers use CFG-based algorithms, which can be efficiently
implemented and allow for parsing in polynomial time.
4.​ Language Independence: CFGs can be used for a variety of languages with relatively little
modification.

h. Give a Heuristic that a Block-Stacking Program Might Use to Solve Problems of the
Form “Stack Block X on Block Y.”

A possible heuristic could be:​


“Minimize the number of blocks that need to be moved”. In other words, the program should aim to
move only the necessary blocks to achieve the goal of stacking Block X on Block Y, without
unnecessary operations.

For example:

●​ If Block Y is already clear and on top of the stack, only Block X needs to be moved.
●​ If Block Y is blocked, the program needs to consider clearing it by moving other blocks first.
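
A rough Python sketch of this heuristic, assuming the world is encoded as a list of stacks with each stack listed bottom-to-top (that representation is an assumption made only for illustration):

def blocks_to_move(stacks, x, y):
    # Estimate the moves needed to stack X on Y: the blocks above X,
    # the blocks above Y, plus the move of X itself, unless X is already on Y.
    for stack in stacks:
        i = stack.index(x) if x in stack else -1
        if i > 0 and stack[i - 1] == y:
            return 0                                   # already stacked as required
    count = 0
    for stack in stacks:
        if x in stack:
            count += len(stack) - stack.index(x) - 1   # blocks sitting above X
        if y in stack:
            count += len(stack) - stack.index(y) - 1   # blocks sitting above Y
    return count + 1                                   # plus moving X itself

print(blocks_to_move([["Y", "A"], ["X"]], "X", "Y"))   # 2: clear A off Y, then move X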
i. What is an Expert System Shell? What are the Fundamental Characteristics of an
Expert System?

Expert System Shell: An expert system shell is a software framework that provides the basic
components for creating expert systems. It includes a knowledge base, inference engine, and user
interface, allowing users to build expert systems without having to program every part.

Fundamental Characteristics of an Expert System:

1.​ Knowledge Base: Contains facts and rules about the domain of expertise.
2.​ Inference Engine: Applies logical reasoning to the knowledge base to draw conclusions.
3.​ User Interface: Allows users to interact with the system and input data.
4.​ Explanation Facility: Explains how the system arrived at its conclusions.

5.​ Knowledge Acquisition: Mechanisms for acquiring new knowledge to update the system.

j. What is Pattern Recognition? Discuss Its Applications.

Pattern Recognition is the process of identifying patterns, trends, or regularities in data. It involves

classifying input data into predefined categories based on certain features.

Applications:

1.	Image and Speech Recognition: Recognizing faces, handwriting, or spoken words.
2.​ Medical Diagnosis: Identifying diseases based on symptoms or medical images.
3.​ Financial Fraud Detection: Detecting abnormal transactions that may indicate fraud.
4.​ Autonomous Vehicles: Recognizing road signs, pedestrians, and other vehicles.
SECTION–B

2. What are the different approaches in defining artificial intelligence? What characteristics must a
problem possess to be solved using artificial intelligence? Write a description of the 8-queens problem.

3. Discuss the syntax and semantics of propositional logic. List the rules of inference for propositional
logic. Consider the following facts and construct a step-by-step proof by resolution of the statement
“John likes peanuts”.
a. John likes all kinds of food.
b. Apple and vegetables are foods.
c. Anything anyone eats and is not killed by is food.
d. Anil eats peanuts and is still alive.

e. Harry eats everything that Anil eats.

4. How are objects related in frame-based systems? What are the ‘a-kind-of’ and ‘a-part-of’
relationships? Give examples.

5. Differentiate between informed search and uninformed search. Explain depth first search technique

with examples. Discuss the performance of this technique.

6. Explain the architecture of an expert system. What are rule-based system architecture and non-production system architecture?
7. Write about the various tasks in natural language processing in detail. What are the main difficulties in
natural language understanding?


2. Different Approaches in Defining Artificial Intelligence (AI) and Characteristics of Problems Solved by AI

Approaches in Defining AI: Artificial Intelligence has been defined in multiple ways, reflecting its
various aspects:

●​ Behavioral Approach: AI is the field of study that seeks to simulate human behavior. A machine
is said to exhibit intelligent behavior if it performs tasks or solves problems that typically require
human intelligence, such as decision-making and problem-solving.
●​ Functional Approach: AI is defined as any machine that can perform cognitive tasks like
reasoning, learning, perception, and problem-solving. This approach focuses on the


functionalities that AI systems should exhibit, irrespective of whether they mimic human
intelligence.
●​ Turing Test: Proposed by Alan Turing, this approach defines AI as the ability of a machine to
perform tasks indistinguishable from human responses in certain scenarios. If a machine can
convince a human observer that it is human, it is considered to have exhibited artificial
intelligence.
●​ Cognitive Approach: This approach defines AI based on the replication of human cognitive
processes. AI systems should mimic the thought processes, problem-solving, and
decision-making abilities of humans, and often include models of human cognition like neural
networks.
Characteristics of Problems Solved by AI: For a problem to be solvable by AI, it typically needs to
have the following characteristics:

●​ Complexity: The problem is often too complex for traditional algorithms to solve efficiently,
especially if the number of variables or possible states is large.
●​ Uncertainty: AI can handle problems involving incomplete or uncertain information, unlike
traditional deterministic systems.
●​ Adaptability: The problem should allow for solutions that can adapt or improve over time with
experience, such as learning from past mistakes.
●​ Decision-Making: The problem often involves making decisions based on past data, analysis, or
future predictions.
●​ Search Space: AI can navigate large and complex search spaces, often using heuristics or other
methods to reduce the space and find solutions.

8-Queens Problem: The 8-Queens Problem is a classic problem in artificial intelligence where the goal
is to place eight queens on a chessboard such that no two queens threaten each other. In chess,
queens can move horizontally, vertically, or diagonally, and the challenge is to place them on the board
without any two queens being in the same row, column, or diagonal. The problem can be solved using
search algorithms or constraint satisfaction techniques. The problem becomes more complex as the

number of queens increases, and finding an optimal solution involves exploring different configurations
of queen placements.
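
A compact Python sketch of the standard backtracking formulation (one queen per row, a column chosen for each row so that no column or diagonal is shared):

def solve_queens(n=8, placed=()):
    # placed[r] holds the column of the queen already put in row r
    if len(placed) == n:
        return placed                                   # all queens placed safely
    row = len(placed)
    for col in range(n):
        if all(col != c and abs(col - c) != row - r     # no shared column or diagonal
               for r, c in enumerate(placed)):
            solution = solve_queens(n, placed + (col,))
            if solution:
                return solution
    return None                                         # dead end: backtrack

print(solve_queens())   # one valid placement, e.g. (0, 4, 7, 5, 2, 6, 1, 3)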
3. Syntax and Semantics of Propositional Logic
Syntax of Propositional Logic: The syntax of propositional logic involves the symbols and rules for
constructing valid formulas. These include:

●​ Propositional Variables: These represent atomic statements (e.g., P, Q, R).



●​ Logical Connectives: These include:


○​ AND ( ∧ ): True if both operands are true.
○​ OR ( ∨ ): True if at least one operand is true.
○​ NOT ( ¬ ): Negation, changes the truth value.


○​ IMPLIES ( → ): True if the first operand implies the second.
○​ BICONDITIONAL ( ↔ ): True if both operands have the same truth value.
●​ Parentheses: Used to group parts of formulas to indicate order of operations.

Semantics of Propositional Logic: The semantics define the truth values of propositions:

●​ True (T): A statement is true.


●​ False (F): A statement is false. Each formula in propositional logic can be evaluated based on
the truth values assigned to its components.

Rules of Inference: The rules of inference in propositional logic allow one to derive conclusions from
premises. Some important rules include:

●​ Modus Ponens: If P → Q and P are true, then Q must be true.


●​ Modus Tollens: If P → Q and ¬Q are true, then ¬P must be true.
●​ Disjunctive Syllogism: If P ∨ Q is true and ¬P is true, then Q must be true.
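
Each of these rules can be checked mechanically by enumerating all truth values; a brief Python sketch verifying Modus Ponens and Modus Tollens:

from itertools import product

def implies(a, b):
    return (not a) or b

def is_valid(premises, conclusion):
    # A rule is valid if the conclusion holds in every row where all premises hold.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(premise(p, q) for premise in premises))

# Modus Ponens: from (P -> Q) and P, infer Q
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p], lambda p, q: q))          # True
# Modus Tollens: from (P -> Q) and not Q, infer not P
print(is_valid([lambda p, q: implies(p, q), lambda p, q: not q], lambda p, q: not p))  # True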
Step-by-step Proof by Resolution: Let’s resolve the statement "John likes peanuts" based on the
given facts:

1.​ Fact (a): John likes all kinds of food.


○​ This means if something is food, John likes it.
2.​ Fact (b): Apple and vegetable are foods.
○​ Apple and vegetable are in the set of foods.
3.​ Fact (c): Anything anyone eats and is not killed by is food.
○​ If Anil eats peanuts and survives, peanuts are food.
4.​ Fact (d): Anil eats peanuts and is still alive.
○​ By (c), peanuts are food.
5.​ Fact (e): Harry eats everything that Anil eats.
○​ Therefore, Harry eats peanuts.

By combining all the facts, we can conclude that since peanuts are food, and John likes all foods, John
likes peanuts.

4. Objects in Frame-Based Systems

In frame-based systems, objects are described using frames, which are data structures that represent
knowledge about objects or concepts. A frame contains slots (attributes) and values that define an
object’s properties and relationships. Frames can also have pointers to other frames (objects)
representing more complex concepts.

‘A-kind-of’ Relationship: This relationship defines hierarchical connections, where one object is a
subclass of another. For example, a “Dog” is a kind of “Animal.”

‘A-part-of’ Relationship: This relationship defines the part-whole association between objects. For
example, a “Wheel” is a part of a “Car.”
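
These two relationships can be sketched with nested dictionaries in Python; the frame and slot names below are illustrative assumptions, not a standard frame language:

frames = {
    "Animal": {"can": ["breathe", "move"]},
    "Dog":    {"a-kind-of": "Animal", "sound": "bark"},   # 'a-kind-of' link
    "Car":    {"parts": ["Wheel", "Engine"]},
    "Wheel":  {"a-part-of": "Car", "shape": "round"},     # 'a-part-of' link
}

def inherited(frame, slot):
    # Follow 'a-kind-of' links upward until the slot is found (simple inheritance).
    while frame is not None:
        if slot in frames[frame]:
            return frames[frame][slot]
        frame = frames[frame].get("a-kind-of")
    return None

print(inherited("Dog", "can"))   # ['breathe', 'move'] inherited from Animal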



5. Informed Search vs Uninformed Search and Depth-First Search (DFS)

Informed Search: Informed search algorithms use additional knowledge about the problem domain to
guide the search towards the goal more efficiently. An example is A* Search, which uses a heuristic to
estimate the cost from the current state to the goal.

Uninformed Search: Uninformed search algorithms, like Breadth-First Search (BFS), do not use any
domain-specific knowledge and explore the search space blindly.

Depth-First Search (DFS): DFS is an uninformed search algorithm that explores as far as possible
along each branch before backtracking. The algorithm follows the path from the start node to the
deepest node and backtracks when it reaches a dead end.

Example: For a graph with nodes A, B, C, D, DFS would explore one branch completely, say A → B →
D, before backtracking.

Performance:

●	Time Complexity: O(b^m), where b is the branching factor and m is the maximum depth of the search tree.
●	Space Complexity: O(b·m), since DFS only needs to store a single path from the root together with the unexpanded siblings along that path.

6. Architecture of an Expert System

An expert system typically has the following architecture:

1.​ Knowledge Base: A collection of facts and rules that represent expert knowledge.
2.​ Inference Engine: Applies logical reasoning to the knowledge base to infer conclusions or make
decisions.
3.​ User Interface: Facilitates interaction with the user, allowing input and output of data.
4.​ Explanation Module: Explains the reasoning behind decisions to users.
5.​ Knowledge Acquisition Module: Allows for the system to learn and update its knowledge base.

Rule-based System Architecture: This architecture relies on if-then rules (production rules) to make
decisions. Each rule has a premise (if part) and a conclusion (then part).

Non-production System Architecture: This type does not rely on rules. Instead, it uses knowledge
representation techniques like frames or semantic networks to represent and reason about knowledge.

7. Tasks in Natural Language Processing (NLP)

NLP Tasks:
●​ Tokenization: Breaking text into words or sentences.
●​ Part-of-Speech Tagging: Identifying the grammatical categories of words.
●​ Named Entity Recognition (NER): Identifying entities such as names, locations, and dates.

●​ Machine Translation: Translating text from one language to another.


●​ Sentiment Analysis: Determining the sentiment expressed in text.
●​ Speech Recognition: Converting spoken language into text.

●​ Text Summarization: Creating a summary of a longer text.

Difficulties in Natural Language Understanding:

●​ Ambiguity: Words can have multiple meanings depending on context.


●​ Complexity: Sentences can have complex structures, making them hard to parse.

●​ Context: Understanding the full meaning requires considering the context of a sentence or
conversation.
●​ Slang and Variability: Different dialects, slang, or informal language make understanding more
challenging.

HAPPY ENDING BY ➖ SAHIL RAUNIYAR & PTUCODER !! 😀



Previous Year Questions Paper


BCA (2016 Batch) (Sem.–6)
ARTIFICIAL INTELLIGENCE
Subject Code : BC-601
Paper ID : [B0223]

1. Write briefly :

a) Name any two problem characteristics.
b) Explain state search space.
c) Represent the following sentence using propositional logic : Every one is loyal to someone.
d) Explain Depth first search.
e) Explain CYC.

f) Explain disjunction with example.
g) Define natural language processing.
h) Explain scripts.
i) Explain knowledge representation.
j) What is Resolution?
ANSWERS



a) Name Any Two Problem Characteristics

1. Search Space: The search space refers to the set of all possible states or configurations that a
problem can have. It represents the environment in which an AI algorithm operates to find a solution.
The search space is explored by search algorithms, and it typically includes all possible moves or
decisions that can be made to solve the problem. The size and structure of the search space heavily
influence the efficiency of the search process.

2. Problem Formulation: Problem formulation refers to how a problem is defined and structured for an
AI system to solve. This involves specifying the initial state, the goal state, the set of possible actions,
and the transition model (how one state leads to another). A well-defined problem makes it easier to
apply algorithms like search or optimization techniques to find a solution.

b) Explain State Search Space ➖


A state search space is a collection of all possible configurations or states that can be reached during
the execution of an algorithm. It is represented as a graph or tree where:

●​ Nodes represent states or configurations of the problem.


●​ Edges represent the transitions between these states, which are determined by the actions or
decisions taken.
The state space can be visualized as a tree with branches that represent possible actions. In this tree,
the root node is the initial state, and the leaf nodes are the goal states. An AI algorithm, such as search
algorithms (e.g., BFS, DFS), traverses this search space to find a path from the initial state to the goal
state.

For example, in the 8-puzzle problem, the state space includes all possible configurations of the
puzzle's pieces. The algorithm explores the space to find a sequence of moves that results in the goal
configuration.


c) Represent the Following Sentence Using Propositional Logic: "Everyone is loyal to
someone."

Let the predicate be:

●​ L(x, y): x is loyal to y, where x and y represent individuals.

The sentence "Everyone is loyal to someone" can be represented in First-Order Logic (a more
expressive form of logic compared to propositional logic):

∀x ∃y L(x, y)

This means for every individual x, there exists an individual y such that x is loyal to y.
In Propositional Logic, it would be difficult to express this directly, since propositional logic doesn’t
handle quantifiers or individual variables. However, you could use specific constants for individuals,
such as:

●​ L(john, mary), L(susan, john), etc. But this representation would not cover the general form of
the sentence.

d) Explain Depth First Search (DFS)

Depth-First Search (DFS) is a search algorithm that explores as far down a branch as possible before
backtracking. It systematically explores the deepest unexplored nodes first and moves to the next node
only after all descendants of the current node have been visited.

Steps of DFS:

1.​ Start at the root node and push it onto the stack.
2.​ Pop a node from the stack and explore its unvisited neighbors.
3.​ Push unvisited neighbors onto the stack.
4.​ Repeat until the stack is empty or the goal is found.

Advantages:

●​ DFS is memory efficient because it only needs to store a single path from the root to the current
node.
●​ It can find a solution quickly if the solution is located deep in the search space.
Disadvantages:

●​ DFS can get stuck in infinite loops if the search space has cycles.
●​ It may not always find the shortest path or a solution quickly, especially if the solution is at a
shallow depth.

Example:​
For a tree:

      A
     / \
    B   C
   / \
  D   E
Starting from A, DFS might explore A → B → D before backtracking and exploring E, and then move to
C.
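
A minimal Python sketch of DFS with an explicit stack, run on the same tree; the adjacency-list dictionary is an assumed encoding of the example:

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": [], "D": [], "E": []}

def dfs(start):
    visited, order = set(), []
    stack = [start]                            # step 1: push the root node
    while stack:
        node = stack.pop()                     # step 2: pop a node
        if node not in visited:
            visited.add(node)
            order.append(node)                 # visit it
            for neighbour in reversed(graph[node]):
                stack.append(neighbour)        # push neighbours; leftmost explored first
    return order

print(dfs("A"))   # ['A', 'B', 'D', 'E', 'C']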
e) Explain CYC ➖
CYC is an AI project developed by Cycorp with the goal of encoding a large body of common-sense
knowledge into a machine-readable form. The system uses a knowledge base to simulate human
reasoning and understanding in areas that require common sense.

CYC's key idea is to create a semantic network where entities and their relationships are represented.
It contains millions of assertions (facts) about the world, such as:

●​ People need food to survive.


●​ If it rains, the ground gets wet.

The system can answer complex questions and solve problems by reasoning over this vast body of
knowledge. CYC is considered a knowledge-based system that provides a foundation for
commonsense reasoning and is used in many fields like natural language understanding and expert
systems.

f) Explain Disjunction with Example ➖


Disjunction is a logical operation that represents the logical OR between two propositions. It is true if at
least one of the propositions is true.

In Propositional Logic, the disjunction is denoted by the symbol ∨:

●​ P ∨ Q means "P or Q."

The truth table for disjunction is as follows:

P     Q     P ∨ Q
T     T       T
T     F       T
F     T       T
F     F       F

g) Define Natural Language Processing (NLP) ➖
Natural Language Processing (NLP) is a subfield of artificial intelligence that focuses on the
interaction between computers and human languages. It involves enabling machines to understand,

interpret, and generate human language in a way that is both meaningful and useful.

NLP tasks include: O


●​ Speech Recognition: Converting spoken words into text.
●​ Text Classification: Categorizing text into predefined categories (e.g., sentiment analysis, topic
detection).
●​ Named Entity Recognition (NER): Identifying entities such as names, dates, and locations in
text.
●​ Machine Translation: Automatically translating text from one language to another.

●​ Text Summarization: Generating a concise summary of a larger text.

NLP combines linguistics, computer science, and machine learning techniques to process and analyze
large amounts of natural language data.

h) Explain Scripts ➖
In AI, scripts refer to knowledge representations that capture the structure of events or situations. A
script is a structured framework for understanding and predicting typical sequences of events.

Example: A restaurant script might include the following typical sequence:

1.​ A customer enters the restaurant.


2.​ The customer is greeted by a host.
3.​ The customer orders food.
4.​ The food is served.
5.​ The customer pays the bill and leaves.

Scripts are used in natural language understanding and are helpful in understanding actions and events
in context. They are useful for tasks such as machine understanding of stories, event recognition, and
dialogue systems.

i) Explain Knowledge Representation ➖


Knowledge Representation (KR) is a field of AI that focuses on how to represent information about the
world in a way that machines can understand and use for reasoning. KR involves formalizing knowledge
using symbols and structures that machines can manipulate.

Common methods of KR include:

●​ Semantic Networks: Represent concepts and their relationships in a graph-like structure.


●​ Frames: Structured data representations that include slots for attributes and values, often used
in object-oriented systems.
●​ Production Rules: If-then rules that describe relationships and actions.
●​ Logic: Formal systems like propositional and first-order logic used for representing facts and

reasoning.

Effective knowledge representation is essential for building AI systems that can understand and reason
about the world.

j) What is Resolution?

Resolution is a rule of inference used in propositional logic and first-order logic for automated
theorem proving. It is a refutation-based approach that combines two clauses to derive a new clause.

The basic idea of resolution is to eliminate complementary literals (e.g., P and ¬P) from two clauses,
which produces a new clause that logically follows from the two original clauses.
Example: Given two clauses:

1.​ P ∨ Q
2.​ ¬P ∨ R

The resolution of these clauses would result in:



●​ Q ∨ R

Resolution is a fundamental technique in logic-based AI systems, especially in automated reasoning


and solving problems with propositional and predicate logic.
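
A tiny Python sketch of a single resolution step, representing each clause as a set of literal strings with a leading "~" for negation (this encoding is an assumption made only for illustration):

def resolve(clause1, clause2):
    # Cancel one pair of complementary literals and merge what remains.
    resolvents = []
    for literal in clause1:
        complement = literal[1:] if literal.startswith("~") else "~" + literal
        if complement in clause2:
            resolvents.append((clause1 - {literal}) | (clause2 - {complement}))
    return resolvents

print(resolve({"P", "Q"}, {"~P", "R"}))   # [{'Q', 'R'}]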
SECTION-B

2. What is artificial intelligence? Discuss three problems which have been solved by artificial
intelligence.

3. Explain various characteristics of production system in detail.

4. a) Discuss various applications of artificial intelligence.

b) Discuss important issues in design of a search problem.

5. Discuss various properties and approaches for knowledge representation.

6. Discuss weak slot and filler structures.

7. Explain the Morphological, Syntactic & Semantic phase of natural language processing.

ANSWERS ➖

2. What is Artificial Intelligence? Discuss Three Problems Solved by Artificial Intelligence

Artificial Intelligence (AI) refers to the branch of computer science that is concerned with creating
machines that can perform tasks that typically require human intelligence. These tasks include learning,
reasoning, problem-solving, language understanding, perception, and decision-making. AI systems can
analyze data, recognize patterns, make predictions, and even interact with humans through natural
language processing.
Three Problems Solved by Artificial Intelligence:

1.​ Game Playing (e.g., Chess, Go): AI has been successful in solving complex problems in
strategic games. Chess-playing AI, such as IBM's Deep Blue, defeated the world champion
Garry Kasparov in 1997. Later, Google's AlphaGo defeated the world champion in Go, a game
considered far more complex than chess. AI algorithms analyze a vast number of possible moves
and use heuristics to make the most strategic decisions.
2.​ Medical Diagnosis: AI has significantly contributed to medical fields by assisting in diagnosis
and treatment. IBM Watson for Oncology, for example, can analyze medical data, research
papers, and patient information to recommend personalized cancer treatment plans. AI models
also help in detecting diseases like breast cancer through image recognition techniques.

3.​ Autonomous Vehicles: Self-driving cars are one of the most well-known applications of AI.
Companies like Tesla and Waymo use AI systems to solve the problem of autonomous driving.
These systems rely on sensors, cameras, machine learning algorithms, and real-time
decision-making to navigate roads, avoid obstacles, and follow traffic rules without human
intervention.

3. Explain Various Characteristics of Production Systems in Detail

A production system is a framework used in AI for problem-solving. It is based on a set of rules and
includes three essential components:
●​ States: Represent the possible configurations of the problem.
●​ Operators (or Actions): Actions that change the state, moving from one state to another.
●​ Goal: The desired state that the system aims to reach.

The main characteristics of a production system include:

1.​ Rules (Production Rules): The production system operates by applying rules, typically in the
form of "if-then" statements. For example, "If the light is on, then the switch is closed."
2.​ State Space: The state space represents all possible states of the problem. The system
searches through the state space to find a path from the initial state to the goal state.
3.​ Control Strategy: The control strategy determines the order in which rules are applied. It can be:
○​ Forward Chaining: Starting from the initial state, the system applies rules to reach the
goal.

○​ Backward Chaining: Starting from the goal state, the system works backward to find the
conditions that led to the goal.
4.​ Inference Mechanism: The inference mechanism is the process of deriving new knowledge or
making decisions based on the production rules and current state. It is used to guide the system
toward the solution.
5.​ Conflict Resolution: Sometimes multiple rules can be applicable at the same time. The system

needs a conflict resolution strategy to decide which rule to apply first, such as choosing the rule
with the highest priority.
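
A small Python sketch of forward chaining over such if-then rules, reusing the light/switch rule above together with one extra rule and a starting fact that are assumed for illustration:

# Each rule is (set of condition facts, fact concluded when all conditions hold).
rules = [
    ({"light_is_on"}, "switch_is_closed"),        # the example rule above
    ({"switch_is_closed"}, "circuit_has_power"),  # assumed extra rule
]
facts = {"light_is_on"}                           # assumed starting fact

# Forward chaining: keep firing applicable rules until no new fact can be derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # {'light_is_on', 'switch_is_closed', 'circuit_has_power'}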
4. a) Discuss Various Applications of Artificial Intelligence
Applications of Artificial Intelligence are diverse and span across many industries. Some important
applications include:

1.​ Healthcare:

○​ AI is used in medical diagnosis, personalized treatment, drug discovery, and patient


monitoring. AI systems can analyze medical images (X-rays, MRIs), predict disease
outbreaks, and even assist in robotic surgeries.

2.​ Finance:
○​ AI is widely used for fraud detection, credit scoring, algorithmic trading, and customer
service (through chatbots). AI algorithms analyze financial data to make real-time
decisions and detect suspicious activities.
3.​ Customer Service:

○​ Chatbots and virtual assistants like Siri, Alexa, and Google Assistant use AI to provide
customer support. AI can answer questions, handle requests, and assist in
troubleshooting, improving customer experience.
4.​ Autonomous Vehicles:
○​ Self-driving cars use AI to navigate, detect objects, and make decisions. AI systems like
computer vision and machine learning enable vehicles to drive without human
intervention.
5.​ Natural Language Processing (NLP):
○​ AI is used in NLP for tasks such as machine translation, speech recognition, sentiment
analysis, and language generation. Google Translate and speech-to-text technologies are
prominent examples.
6.​ Manufacturing and Robotics:
○​ AI is used to optimize production processes, automate tasks, and improve quality control.
Industrial robots powered by AI can assemble products, inspect for defects, and perform
other tasks that would be dangerous or repetitive for humans.

4. b) Discuss Important Issues in Design of a Search Problem

Designing a search problem involves creating an environment that allows an AI system to efficiently
explore possible solutions. Key issues in the design of a search problem include:

1.​ State Representation:


○​ How to represent the possible configurations of the problem in a way that is
computationally feasible and captures all relevant information. This often involves defining

a clear structure for the state space (e.g., tree, graph, or matrix).
2.​ Choice of Search Strategy:
○​ Deciding on the search strategy (e.g., breadth-first search, depth-first search, A* search).
The strategy determines how the system explores the state space. Different strategies
have different strengths in terms of memory use, time complexity, and completeness.
3.​ Search Space Size:

○​ The size of the search space can be enormous. One challenge is to reduce the size of the
search space through pruning or using heuristics, especially in problems with a large
number of possible states, like in chess or the traveling salesman problem.
4.​ Heuristics:
○​ A well-designed heuristic can dramatically improve search efficiency by guiding the search
towards promising areas of the search space. It is important to define a good heuristic that
approximates the distance to the goal state.
5.​ Time and Space Complexity:
○​ The search algorithm must balance between time efficiency (speed of finding a solution)
and space efficiency (memory usage). Some algorithms can be very time-efficient but use
a lot of memory, while others may be more space-efficient but slower.
6.​ Handling Incompleteness or Uncertainty:
○​ Some problems involve uncertainty, such as when the state space is not fully observable,
or the actions do not always lead to predictable results. Designing a search problem to
handle uncertainty or incomplete information (e.g., in games like poker) is a key challenge.

5. Discuss Various Properties and Approaches for Knowledge Representation

Properties of Knowledge Representation:

1.​ Expressiveness: The knowledge representation scheme should be able to represent all relevant
knowledge of the domain, including facts, relationships, and rules.
2.​ Efficiency: It should allow for efficient processing and retrieval of information.
3.​ Scalability: The system should scale as the knowledge base grows.
4.​ Interoperability: It should be able to work with different systems or represent knowledge in a
way that can be easily shared across platforms.
5.​ Uncertainty Handling: It should be able to handle incomplete or uncertain information.

Approaches for Knowledge Representation:


1.​ Logic-Based Representation:
○​ Propositional and First-Order Logic: Used to represent facts, rules, and reasoning in
formal logical terms.
2.​ Semantic Networks:
○​ Graph-based representation where nodes represent concepts and edges represent
relationships between concepts.
3.​ Frames:
○​ Data structures that represent concepts with attributes (slots) and possible values. Similar
to object-oriented programming.
4.​ Production Rules:
○​ If-then rules that describe relationships or actions in a domain.
5.​ Ontology:
○​ A formal representation of a set of concepts within a domain and the relationships

between them, typically used for knowledge sharing.

6. Discuss Weak Slot and Filler Structures

Weak slot-and-filler structures are simple knowledge representation schemes in which knowledge is stored as a collection of objects, each described by a set of slots and the fillers that occupy them:

●	Slot: An attribute or relation of an object or concept. For example, a "Person" object might have slots such as "Age" and "Occupation".
●	Filler: The value placed in a slot, which may be a number, a symbol, or a pointer to another object. For example, the "Age" slot of a particular person might be filled with 25, or left unspecified if the information is incomplete.

They are called "weak" because the structure itself attaches no fixed meaning to slot names and imposes very few restrictions on what fillers may contain; the knowledge lies mainly in how the objects are linked together. Semantic networks and frames are the classic examples of weak slot-and-filler structures.

7. Explain the Morphological, Syntactic & Semantic Phases of Natural Language Processing

The process of Natural Language Processing (NLP) can be divided into three primary phases:

1.​ Morphological Phase:


○​ Morphology refers to the structure of words. In this phase, NLP systems analyze and
break down words into their smallest meaningful units, called morphemes. This process
involves tasks such as:
■​ Tokenization: Splitting text into words.
■​ Stemming: Reducing words to their root form (e.g., "running" → "run").
■​ Lemmatization: Converting words into their base form, considering the context
(e.g., "better" → "good").
2.​ Syntactic Phase:
○​ Syntax involves the structure of sentences and their grammatical components. In this
phase, NLP systems analyze the syntactic structure of a sentence to understand its
grammatical relationships. This includes tasks such as:
■​ Part-of-Speech Tagging: Identifying the grammatical role of each word (noun,
verb, adjective, etc.).
■​ Parsing: Analyzing the structure of a sentence to create a syntax tree, representing
the hierarchical relationships between words.
■​ Sentence Structure Analysis: Identifying subject-verb-object relationships and
ensuring sentence coherence.
3.​ Semantic Phase:
○​ Semantics focuses on understanding the meaning of the sentence. In this phase, NLP
systems attempt to extract the meaning from the words and their relationships in the
context of the sentence. Tasks include:
■​ Named Entity Recognition (NER): Identifying and classifying entities (names,
locations, dates).
■​ Word Sense Disambiguation: Determining the correct meaning of a word based

on its context.
■​ Sentiment Analysis: Identifying the sentiment or emotional tone of the text.

These three phases work together to process and understand natural language, from recognizing word
components to extracting meaning from full sentences.
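
A toy Python sketch of the morphological phase alone; the regex tokenizer and suffix-stripping "stemmer" below are deliberate simplifications for illustration, not a real morphological analyser:

import re

def tokenize(text):
    # Split the text into lowercase word tokens.
    return re.findall(r"[a-zA-Z]+", text.lower())

def crude_stem(word):
    # Naive suffix stripping, only to illustrate reducing a word towards its root.
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

tokens = tokenize("The dogs were running quickly.")
print(tokens)                           # ['the', 'dogs', 'were', 'running', 'quickly']
print([crude_stem(t) for t in tokens])  # ['the', 'dog', 'were', 'runn', 'quickly']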

HAPPY ENDING BY ➖ SAHIL RAUNIYAR & PTUCODER !! 😀