
Q.P. Code - 23MCA203

Second Semester M.C.A. Degree Examination, November/December 2024
(CBCS-New Scheme)
Computer Applications
Paper CPT 2.3.1 ARTIFICIAL INTELLIGENCE
Time: 3 Hours] [Max. Marks: 70
Instruction to Candidates: Answer any FIVE full questions from the following.
1. (a) What is an intelligent agent? Explain the types of agents. (7)

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

 Simple Reflex Agents

 They choose actions based only on the current percept.

 They are rational only if a correct decision can be made on the basis of the current percept alone.
 Their environment is completely observable.

Condition-Action Rule − It is a rule that maps a state (condition) to an action.

 We use rectangles to denote the current internal state of the agent’s decision process, and
ovals to represent the background information used in the process.

 Note that the description in terms of “rules” and “matching” is purely conceptual; actual
implementations can be as simple as a collection of logic gates implementing a Boolean
circuit.

 Simple reflex agents have the admirable property of being simple, but they turn out to be of
limited intelligence.
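A minimal Python sketch of a simple reflex agent, using the classic two-square vacuum world as an assumed illustration (the percept format and action names are assumptions, not part of the original answer):

# A minimal simple reflex agent for a hypothetical two-square vacuum world.
# The percept is (location, status); condition-action rules map it directly to an action.
def simple_reflex_vacuum_agent(percept):
    location, status = percept               # e.g. ("A", "Dirty")
    if status == "Dirty":                     # condition-action rule 1
        return "Suck"
    if location == "A":                       # condition-action rule 2
        return "Right"
    if location == "B":                       # condition-action rule 3
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # -> Left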

 Model-Based Reflex Agents

They use a model of the world to choose their actions and maintain an internal state. Model − knowledge about "how things happen in the world". Internal State − a representation of the unobserved aspects of the current state, built up from the percept history.

Updating the state requires information about:

 How the world evolves.

 How the agent's actions affect the world.
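A minimal Python sketch of a model-based reflex agent, extending the same assumed vacuum-world example; the internal state records which squares the agent believes are clean and is updated from the percept history and a simple model of the Suck action:

# A minimal model-based reflex agent sketch for the assumed two-square vacuum world.
class ModelBasedVacuumAgent:
    def __init__(self):
        self.believed_clean = {"A": False, "B": False}    # internal state

    def act(self, percept):
        location, status = percept
        # Update the internal state from the current percept.
        self.believed_clean[location] = (status == "Clean")
        if status == "Dirty":
            self.believed_clean[location] = True          # model: Suck leaves the square clean
            return "Suck"
        if all(self.believed_clean.values()):
            return "NoOp"                                 # every square believed clean
        return "Right" if location == "A" else "Left"     # go inspect the other square

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Dirty")))   # -> Suck
print(agent.act(("A", "Clean")))   # -> Right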

 Goal-Based Agents

They choose their actions in order to achieve goals. The goal-based approach is more flexible than a reflex agent because the knowledge supporting a decision is explicitly modeled, thereby allowing for modifications. Goal − a description of desirable situations.

 Knowing something about the current state of the environment is not always enough to decide what to do. For example, at a road junction, the taxi can turn left, turn right, or go straight on.

 The correct decision depends on where the taxi is trying to get to. In other words, as well as a current state description, the agent needs some sort of goal information that describes situations that are desirable, for example being at the passenger's destination.

 Utility-Based Agents

They choose actions based on a preference (utility) for each state. Goals are inadequate when:

 There are conflicting goals, of which only a few can be achieved.
 Goals have some uncertainty of being achieved, and the agent needs to weigh the likelihood of success against the importance of each goal.
 At this point, the reader may be wondering, "Is it that simple? We just build agents that maximize expected utility, and we're done?" It's true that such agents would be intelligent, but it's not simple.
 A utility-based agent has to model and keep track of its environment, tasks that have involved a great deal of research on perception, representation, reasoning, and learning.

 Learning Agents

 We have described agent programs with various methods for selecting actions. We have not,
so far, explained how the agent programs come into being.

 The performance element is what we have previously considered to be the entire agent: it
takes in percepts and decides on actions. The learning element uses feedback from the critic
on how the agent is doing and determines how the performance element should be modified
to do better in the future.

(b) Elaborate the steps of Simulated Annealing. (7)

2. (a) What is problem formulation? Explain the steps required to solve a problem. (7)

(b) Explain the steepest-ascent hill climbing technique. (7)

3. (a) Explain Breadth First Search with an example, along with its algorithm. (7)

(b) Develop A* search algorithm for AI applications. (7)

A* search is an informed search algorithm that efficiently finds the shortest path between a starting node and a goal node in a weighted graph. It combines the cost-so-far bookkeeping of Dijkstra's algorithm with the goal-directed guidance of a heuristic, as in greedy best-first search.

* Open Set: A set of nodes to be explored.


* Closed Set: A set of nodes that have already been explored.
* Parent Pointers: For each node, a pointer to its parent node, used to reconstruct the path.
* g(n): The cost of the path from the start node to node n.
* h(n): The estimated cost of the cheapest path from node n to the goal node (heuristic function).
* f(n) = g(n) + h(n): The total estimated cost of the path through node n.

Algorithm:
i) Initialization:
* Add the start node to the open set.
* Set its g(n) to 0 and f(n) to h(n).

ii) Loop:
* While the open set is not empty:
  * Find the node in the open set with the lowest f(n).
  * Remove that node from the open set and add it to the closed set.
  * If the node is the goal node, reconstruct and return the path.
  * For each neighbor of the current node:
    * If the neighbor is not in the open set or closed set:
      * Add the neighbor to the open set.
      * Set its parent to the current node.
      * Calculate its g(n), h(n), and f(n).
    * Else if the neighbor is in the open set, check if this path is better:
      * Calculate the tentative g-score through this node.
      * If this g-score is lower, update the neighbor's parent, g(n), and f(n).
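A minimal Python sketch of this procedure (the graph representation, heuristic values, and node names below are assumptions chosen for illustration):

import heapq

def a_star(graph, h, start, goal):
    """graph: dict mapping node -> {neighbor: edge_cost}; h: dict of heuristic estimates."""
    open_set = [(h[start], start)]          # priority queue ordered by f(n) = g(n) + h(n)
    g = {start: 0}                          # cost of best known path from the start
    parent = {start: None}                  # parent pointers for path reconstruction
    closed = set()
    while open_set:
        f, node = heapq.heappop(open_set)   # node in the open set with the lowest f(n)
        if node == goal:                    # goal reached: reconstruct and return the path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path)), g[goal]
        if node in closed:
            continue
        closed.add(node)
        for neighbor, cost in graph[node].items():
            tentative_g = g[node] + cost
            if neighbor not in g or tentative_g < g[neighbor]:   # better path found
                g[neighbor] = tentative_g
                parent[neighbor] = node
                heapq.heappush(open_set, (tentative_g + h[neighbor], neighbor))
    return None, float("inf")               # no path exists

# Assumed toy graph: S -> A -> G (cost 4) is cheaper than S -> G directly (cost 10).
graph = {"S": {"A": 1, "G": 10}, "A": {"G": 3}, "G": {}}
h = {"S": 4, "A": 2, "G": 0}
print(a_star(graph, h, "S", "G"))           # -> (['S', 'A', 'G'], 4)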

A* search is widely used in various AI applications, including:

* Pathfinding in games: Finding the shortest path for characters to move around a game world.
* Robotics: Planning robot motion and obstacle avoidance.
* AI planning: Generating plans to achieve goals.
* Image processing: Image segmentation and feature extraction.

4. (a) Discuss constraint satisfaction and solve the cryptarithmetic problem. (7)

(b) What are the advantages and disadvantages of different knowledge representations? (7)
Knowledge representation is a crucial aspect of artificial intelligence, enabling machines to
understand and reason about the world. Various techniques exist, each with its own strengths and
weaknesses. Here's a comparison of the most common ones:
1. Logical Representation:

Advantages:
* Precise and formal: Allows for rigorous reasoning and inference.
* Well-suited for tasks requiring deductive reasoning.
* Can handle complex knowledge bases.

Disadvantages:
* Can be complex and difficult to implement.
* Requires careful knowledge engineering to ensure consistency and completeness.
* May not be suitable for representing uncertain or incomplete information.

2. Semantic Networks:
Advantages:
* Intuitive and visually appealing: Easy to understand and visualize relationships between
concepts.
* Flexible: Can represent a wide range of knowledge, including hierarchical and associative
relationships.
* Efficient for certain types of reasoning, such as inheritance and classification.

Disadvantages:
* Can become complex and difficult to manage as the knowledge base grows.
* May not be suitable for representing complex logical reasoning.
* Can be ambiguous and difficult to interpret in some cases.

3. Frames:
Advantages:
* Structured and modular: Organizes knowledge into reusable units.
* Efficient for representing default values and inheritance hierarchies.
* Can be used to represent complex objects and their properties.

Disadvantages:
* Can be inflexible and difficult to modify.
* May not be suitable for representing uncertain or probabilistic knowledge.
* Can be challenging to reason with, especially when dealing with complex relationships.

4. Production Rules:
Advantages:
* Modular and flexible: Can be easily added, removed, or modified.
* Well-suited for representing procedural knowledge and control flow.
* Can be used to implement expert systems and decision support systems.

Disadvantages:
* Can be inefficient for large knowledge bases.
* May not be suitable for representing complex, hierarchical knowledge.
* Can be difficult to understand and debug.
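As an illustration of one of these schemes, here is a minimal Python sketch of a semantic network with is-a links and property inheritance (the Animal/Bird/Canary concepts are assumptions used only for illustration):

# A minimal semantic network sketch: nodes with "is-a" links and inherited properties.
network = {
    "Animal": {"is_a": None,     "properties": {"breathes": True}},
    "Bird":   {"is_a": "Animal", "properties": {"has_wings": True, "can_fly": True}},
    "Canary": {"is_a": "Bird",   "properties": {"colour": "yellow"}},
}

def lookup(concept, prop):
    """Inheritance-based reasoning: walk up the is-a links until the property is found."""
    while concept is not None:
        node = network[concept]
        if prop in node["properties"]:
            return node["properties"][prop]
        concept = node["is_a"]
    return None

print(lookup("Canary", "can_fly"))    # -> True (inherited from Bird)
print(lookup("Canary", "breathes"))   # -> True (inherited from Animal)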

5. (a) Consider the following sentences. Translate these sentences into formulas in predicate logic. (5)

(i) John likes all kinds of food.
    ∀x (Food(x) → Likes(John, x))

(ii) Apples are food.
    Food(Apples)

(iii) Chicken is food.
    Food(Chicken)

(iv) Anything anyone eats and isn't killed is food.
    ∀x ∀y (Eats(x, y) ∧ Alive(x) → Food(y))

(v) Bill eats peanuts and is still alive.
    Eats(Bill, Peanuts) ∧ Alive(Bill)

(vi) Sue eats everything Bill eats.
    ∀x (Eats(Bill, x) → Eats(Sue, x))

(b) Distinguish forward and backward reasoning with an example. (4)

Forward and backward reasoning are two fundamental techniques used in artificial intelligence to
draw conclusions from a set of facts and rules.

Forward Reasoning: Starts with initial facts and applies rules to derive new facts. This process continues until no more new facts can be derived or a specific goal is reached. It is similar to a domino effect, where one event triggers the next.

Example:
Rule: If it is raining, the ground is wet.
Fact: It is raining.

Conclusion: The ground is wet.

Backward Reasoning: Starts with a goal and works backward, identifying the conditions that must be true for the goal to hold. This process continues until a set of known initial facts is reached. It is similar to working backward from a solution to the initial problem.

Example:
* Goal: Prove that the suspect is guilty.
Rules:
* If the suspect had a motive and opportunity, they are guilty.
* If the suspect's fingerprints were found at the crime scene, they had an opportunity.

Process:
* To prove guilt, we need to prove motive and opportunity.
* To prove opportunity, we need to prove fingerprints were found.
* If we can find evidence that the suspect's fingerprints were at the crime scene, we can conclude
they are guilty.
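A minimal Python sketch of forward chaining over simple if-then rules (the second rule is an assumed extra rule, added only to show repeated rule firing):

# Forward chaining: repeatedly fire rules whose premises are all satisfied,
# until no new facts can be derived.
rules = [
    ({"it is raining"}, "the ground is wet"),             # premises -> conclusion
    ({"the ground is wet"}, "the grass is slippery"),     # assumed extra rule for illustration
]
facts = {"it is raining"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# -> {'it is raining', 'the ground is wet', 'the grass is slippery'}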

(c) Construct an AND-OR graph algorithm for a generic problem using the labeling procedure. (5)

6. (a) Create a frame for the person Anand, who is a chemistry professor at RD Women's College. His wife's name is Sangita, and they have two children, Rupa and Shipa. (7)
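A minimal sketch of how such frames might be represented as slot-and-filler structures (the slot names below are assumptions chosen for illustration):

# Each frame is a set of slots and fillers; related frames are linked through slot values.
frames = {
    "Anand": {
        "instance_of": "Person",
        "profession": "Chemistry Professor",
        "works_at": "RD Women's College",
        "wife": "Sangita",
        "children": ["Rupa", "Shipa"],
    },
    "Sangita": {
        "instance_of": "Person",
        "husband": "Anand",
        "children": ["Rupa", "Shipa"],
    },
}

print(frames["Anand"]["children"])   # -> ['Rupa', 'Shipa']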

(b) Write a note on adaptive learning. (7)

7. (a) Explain learning paradigm techniques. (7)

Learning paradigm techniques, or simply learning paradigms, refer to the various approaches and methodologies used to train machines to learn from data and experience, enabling them to make informed decisions, solve problems, and adapt to new situations. These techniques are fundamental to the field of artificial intelligence (AI) and machine learning (ML).

Key Learning Paradigms:

Supervised Learning: In this paradigm, the machine is trained on a labeled dataset, where each data
point is associated with a correct output or target value.
Techniques:
* Regression: Predicting a continuous numerical value (e.g., house price prediction).
* Classification: Assigning a class label to a data point (e.g., email spam detection).

Unsupervised Learning:
Here, the machine learns patterns from unlabeled data without explicit guidance.
Techniques:
* Clustering: Grouping similar data points together (e.g., customer segmentation).
* Dimensionality Reduction: Reducing the number of features in a dataset while preserving
essential information (e.g., principal component analysis).

Reinforcement Learning:
The machine learns through trial and error, interacting with an environment and receiving rewards or
penalties for its actions.
Techniques:
* Q-learning: Learning optimal actions based on a Q-value function.
* Policy Gradient Methods: Directly optimizing the policy function that maps states to actions.

Additional Techniques and Considerations:

* Deep Learning: A subset of machine learning that utilizes artificial neural networks with multiple layers to learn complex patterns.
* Ensemble Learning: Combining multiple models to improve overall performance.
* Meta-Learning: Learning to learn, where the model learns how to learn new tasks efficiently.
* Active Learning: Strategically selecting data points to label, maximizing learning efficiency.

By understanding these learning paradigms and techniques, you can effectively apply machine
learning to solve a wide range of real-world problems.
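A minimal sketch of the supervised classification paradigm, using scikit-learn's iris dataset and a logistic regression model purely as assumed examples:

# Supervised learning: fit a model on labeled data, then evaluate on unseen labeled data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                        # labeled data: features X, targets y
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)                # learn a mapping from X to y
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))     # accuracy on held-out examples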

(b) Describe the components of reinforcement learning. (7)

Reinforcement learning (RL) is a machine learning paradigm where an agent learns to make
decisions by interacting with an environment. The goal is to maximize a cumulative reward signal.
Here are the key components of reinforcement learning:

1. Agent:

* The decision-maker or learner.


* It perceives the environment's state and takes actions.
* It learns to optimize its behavior based on rewards and punishments.

2. Environment:

* The world the agent interacts with.


* It provides the agent with states and rewards.
* It can be deterministic or stochastic, meaning it can have fixed or random outcomes.

3. State:

* A specific situation or configuration of the environment.


* It represents the current condition of the environment.
* The agent perceives the state to make decisions.

4. Action:

* A move or choice made by the agent.


* It influences the transition from one state to another.
* The agent selects actions based on its policy.

5. Reward:

* A numerical value indicating the immediate outcome of an action.


* Positive rewards encourage actions that lead to desirable outcomes.
* Negative rewards discourage actions that lead to undesirable outcomes.

6. Policy:

* The agent's strategy for selecting actions in a given state.


* It maps states to actions.
* The goal of RL is to learn an optimal policy that maximizes cumulative reward.

7. Value Function:

* Estimates the expected future reward from a given state.


* It helps the agent evaluate the long-term consequences of actions.
* There are two types of value functions:
* State-value function: Estimates the expected return starting from a particular state.
* Action-value function (Q-function): Estimates the expected return from taking a particular action
in a particular state.

8. Model of the Environment (Optional):

* A model of the environment's dynamics.


* It predicts the next state and reward given the current state and action.
* It can be used for planning and simulating future scenarios.
By interacting with the environment, the agent learns to improve its policy, leading to better
decision-making and higher rewards over time.
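A minimal tabular Q-learning sketch tying these components together (the two-state environment, reward values, and hyper-parameters below are assumptions for illustration):

import random

# Agent, environment, states, actions, rewards, policy and Q-function in one small example.
states, actions = ["S0", "S1"], ["left", "right"]
Q = {(s, a): 0.0 for s in states for a in actions}        # action-value function (Q-function)
alpha, gamma, epsilon = 0.5, 0.9, 0.1                     # learning rate, discount, exploration

def step(state, action):
    """Environment model: moving 'right' from S0 reaches S1 and earns the only reward."""
    if state == "S0" and action == "right":
        return "S1", 1.0
    return "S0", 0.0

for episode in range(200):
    state = "S0"
    for _ in range(5):
        # Policy: epsilon-greedy over the current Q-values.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)          # environment transition and reward
        best_next = max(Q[(next_state, a)] for a in actions)
        # Q-learning update of the action-value estimate.
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print(max(actions, key=lambda a: Q[("S0", a)]))           # learned greedy action at S0 -> 'right'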

8. (a) Explain expert system shell. (7)
(b) What is MYCIN? Explain its architecture. (7)
