
AL3391 ARTIFICIAL INTELLIGENCE

MRS. PERSI PAMELA
ASSISTANT PROFESSOR
AI&DS
COURSE OBJECTIVES:
The main objectives of this course are to:
• Learn the basic AI approaches
• Develop problem-solving agents
• Perform logical and probabilistic reasoning
COURSE OUTCOMES:

At the end of this course, the students will be able to:


• CO1: Explain intelligent agent frameworks
• CO2: Apply problem solving techniques
• CO3: Apply game playing and CSP techniques
• CO4: Perform logical reasoning
• CO5: Perform probabilistic reasoning under uncertainty
UNIT I INTELLIGENT AGENTS

Introduction to AI – Agents and Environments – concept of rationality – nature of environments – structure of agents. Problem-solving agents – search algorithms – uninformed search strategies.

INTRODUCTION TO AI
What is Artificial Intelligence?
• One of the booming technologies of computer science
• Ready to create a new revolution by making intelligent machines
• It currently spans a variety of subfields,
• such as self-driving cars, playing chess, proving theorems, playing music, painting, etc.

Artificial intelligence is composed of two words, artificial and intelligence,
• where artificial means "man-made," and
• intelligence means "thinking power";
hence AI means "a man-made thinking power."
• "It is a branch of computer science by which we can create intelligent machines which can behave like humans, think like humans, and make decisions."

Machines will have human-based skills like:
• Learning
• Reasoning
• Problem solving

• Creating a machine with programmed algorithms which can work with its own intelligence

WHY ARTIFICIAL INTELLIGENCE?

• To solve real-world problems easily and accurately
Ex: health care, marketing, traffic issues, etc.
• To create our own personal assistants
Ex: Google Assistant, Alexa, Siri
• To create robots to work in the place of humans
Ex: military, education, medical
• AI opens a path for other new technologies, new devices, and new opportunities.

GOALS OF ARTIFICIAL INTELLIGENCE

• Replicate human intelligence
• Solve knowledge-intensive tasks
• An intelligent connection of perception and action
• Building a machine which can perform tasks that require human intelligence, such as:
• Proving a theorem
• Playing chess
• Planning a surgical operation
• Driving a car in traffic
• Creating a system which can exhibit intelligent behavior, learn new things by itself, demonstrate, explain, and advise its user.

ADVANTAGES OF AI
• High accuracy with fewer errors
• High speed
• High reliability
• Useful for risky areas
• Digital assistants
• Useful as a public utility

DISADVANTAGES OF ARTIFICIAL INTELLIGENCE

• High Cost
• Can't think out of the box
• No feelings and emotions
• Increase dependency on machines
• No Original Creativity

• Prerequisites
Fundamental knowledge of the following:
• any computer language such as C, C++, Java, Python, etc. (knowledge of Python will be an advantage)
• essential mathematics such as derivatives, probability theory, etc.

INTELLIGENT AGENTS:
TYPES OF AI AGENTS (STRUCTURE OF AGENTS):

• Agents can be grouped into five classes based on their degree of perceived intelligence and capability.
• All these agents can improve their performance and generate better actions over time.

• Simple reflex agent
• Model-based reflex agent
• Goal-based agent
• Utility-based agent
• Learning agent

SIMPLE REFLEX AGENT

• These agents take decisions on the basis of the current percepts (senses) and ignore the rest of the percept history.
• Works in a fully observable environment.
• Works on condition-action rules, which means it maps the current state to an action.
• Example: a room-cleaner agent works only if there is dirt in the room.
• Examples:
Automatic door:
• An automatic door uses a sensor to detect the presence of a person and opens the door accordingly.
• It doesn't remember if someone was there five seconds ago or plan for the door to stay open for a specific duration.
Vacuum cleaner robot:
• A simple vacuum cleaner robot might move forward until it detects a wall, then turn and move in another direction.
• It doesn't learn from its previous cleaning patterns or try to optimize its path.

Perception: the agent senses the current state of its environment using sensors.
Condition-action rules: it uses a set of predefined rules to determine what action to
take based on the perceived state.
No memory: it does not retain information about past states or actions.
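
A minimal Python sketch of a simple reflex agent for the vacuum example above; the percept format and condition-action rules here are hypothetical, for illustration only.

```python
# Simple reflex agent: maps the current percept directly to an action
# via condition-action rules; no percept history is kept.

def simple_reflex_vacuum(percept):
    location, status = percept        # e.g. ("A", "Dirty"); format assumed
    if status == "Dirty":             # condition -> action
        return "Suck"
    if location == "A":
        return "MoveRight"
    return "MoveLeft"

print(simple_reflex_vacuum(("A", "Dirty")))   # Suck
print(simple_reflex_vacuum(("A", "Clean")))   # MoveRight
```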

• Problems with the simple reflex agent design approach:
• They have very limited intelligence.
• They do not have knowledge of non-perceptual parts of the current state.
• The rule tables are mostly too big to generate and store.
• They are not adaptive to changes in the environment.

MODEL-BASED REFLEX AGENT

The model-based agent can work in a partially observable environment and track the situation.

A model-based agent has two important factors:

Model: knowledge about "how things happen in the world"; hence it is called a model-based agent.
Internal state: a representation of the current state based on percept history.

These agents have the model, "which is knowledge of the world," and they perform actions based on the model.
Examples

•Robotic vacuum cleaners:


These robots use sensors to detect obstacles and their surroundings, an
internal model to remember previously cleaned areas, and a set of rules
to navigate and avoid obstacles while cleaning.
•Self-driving cars:
These cars use sensors to perceive the environment (other vehicles,
pedestrians, road conditions), an internal model to predict how the
environment will change based on their actions (steering, braking,
accelerating), and a set of rules to determine how to navigate safely.
•Updating the agent state requires information about:
- How the world evolves
- How the agent's action affects the world.

How it works:
1. The agent receives a percept from its sensors.
2. The agent uses its internal model to predict how the environment will change based on the current percept and the actions it might take.
3. The agent applies a set of rules to determine the most appropriate action based on the current percept and the predicted future state.
4. The agent executes the chosen action through its actuators.
5. The agent updates its internal model based on the new percept and the executed action.
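
A minimal Python sketch of these five steps for the vacuum example; the internal state (a set of squares believed clean) and the rules are assumptions for illustration, not a standard API.

```python
# Model-based reflex agent: keeps an internal state updated from the
# percept history and consults it when choosing an action.

class ModelBasedVacuum:
    def __init__(self):
        self.cleaned = set()                  # internal model of the world

    def act(self, percept):
        location, status = percept            # step 1: receive the percept
        if status == "Clean":                 # step 5: update the model
            self.cleaned.add(location)
        else:
            self.cleaned.discard(location)
        if status == "Dirty":                 # steps 2-3: rules + model
            return "Suck"                     # step 4: action to execute
        if "B" not in self.cleaned:
            return "MoveToB"
        if "A" not in self.cleaned:
            return "MoveToA"
        return "NoOp"

agent = ModelBasedVacuum()
print(agent.act(("A", "Dirty")))   # Suck
print(agent.act(("A", "Clean")))   # MoveToB (A is remembered as clean)
```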
GOAL-BASED AGENTS

o The knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
o The agent needs to know its goal, which describes desirable situations.
o Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
o They choose an action so that they can achieve the goal.
o These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not.
o Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
EXAMPLES
o A self-driving car, for example, uses sensors to react to its immediate
environment (like other cars) but also plans a route to its destination,
considering factors like traffic and road closures, demonstrating goal-oriented
behavior.
o A delivery robot tasked with delivering packages to specific locations can be
considered a goal-based reflex agent.
o It uses sensors to perceive its current location, the location of the package,
and the presence of obstacles.
o It then uses a map (its internal model) to plan a route to the destination,
considering factors like shortest path and potential obstacles.
o Based on this planning, it chooses the best path and executes the
corresponding actions (e.g., moving forward, turning) to reach
the destination.
o If it encounters an unexpected obstacle, it can replan its route,
demonstrating flexibility in achieving its goal.
o Perception: The agent senses its environment, receiving information through sensors.
o Goal Representation: The agent has a predefined goal or set of goals that it is trying to
achieve.
o Model of the World: The agent maintains an internal model of the environment, which may
include information about the current state, possible actions, and their potential
consequences.
o Action Selection: The agent uses its goal and the model of the world to evaluate different
actions and select the one that is most likely to lead to the desired goal.
o Action Execution: The agent performs the chosen action in the environment.
o Iteration: The agent repeats these steps, continuously sensing, reasoning, and acting to make
progress towards its goal.
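
A small Python sketch of the search-and-plan step a goal-based agent performs: it uses its world model (a hypothetical map) to find an action sequence that reaches the goal.

```python
# Goal-based planning: search the world model for a path to the goal.
from collections import deque

WORLD = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}   # internal model

def plan_route(start, goal):
    """Breadth-first search through the model for a route to the goal."""
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path                  # the action sequence to execute
        for nxt in WORLD[path[-1]]:
            frontier.append(path + [nxt])
    return None                          # goal unreachable in the model

print(plan_route("A", "D"))   # ['A', 'B', 'D']
```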
UTILITY-BASED AGENTS

o These agents are similar to goal-based agents but provide an extra component of utility measurement, which makes them different by providing a measure of success at a given state.
o Utility-based agents act based not only on goals but also on the best way to achieve the goal.
o The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action to perform.
o The utility function maps each state to a real number to check how efficiently each action achieves the goals.
• Perception: The agent observes the environment and gathers information.
• Action: Based on the perception, the agent selects an action.
• Utility Function: A crucial element. It assigns a numerical value (utility) to
each possible action in a given state, reflecting how desirable the
outcome of that action is.
o Example 1: Smart Home Energy Management
o Scenario:
o A smart home system needs to manage energy consumption for heating and
cooling.
o Utility Function:
o Could consider factors like:
o Comfort Level: Higher temperature (for heating) or lower temperature (for cooling)
translates to higher utility.
o Energy Cost: Using less energy (and therefore spending less) has a higher utility.
o Carbon Footprint: Lowering the environmental impact is also a factor.
o How it works:
o The agent perceives the current room temperature and the current energy prices. It then evaluates different actions, such as:
o Turning the thermostat up or down.
o Adjusting the setpoint.
o Turning appliances on or off.
o Switching to a different energy source (if available).
o The agent selects the action that maximizes the combined utility, balancing comfort, cost, and environmental impact.
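
A toy Python sketch of this smart-home example; the candidate actions, their scores, and the weights are invented purely for illustration.

```python
# Utility-based choice: score each candidate action with a utility
# function and pick the action that maximizes it.

ACTIONS = {                      # action -> (comfort, cost, emissions)
    "heat_high": (0.9, 0.8, 0.7),
    "heat_low":  (0.6, 0.3, 0.2),
    "off":       (0.2, 0.0, 0.0),
}

def utility(comfort, cost, emissions, w=(1.0, 0.5, 0.3)):
    """Comfort raises utility; energy cost and carbon footprint lower it."""
    return w[0] * comfort - w[1] * cost - w[2] * emissions

best = max(ACTIONS, key=lambda a: utility(*ACTIONS[a]))
print(best)   # heat_low, under these made-up weights
```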
LEARNING AGENTS

o A learning agent in AI is the type of agent which can learn from its past experiences; it has learning capabilities.
o It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
o A learning agent has mainly four conceptual components:
o Learning element: responsible for making improvements by learning from the environment.
o Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
o Performance element: responsible for selecting external actions.
o Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
o Hence, learning agents are able to learn, analyze performance, and look for new ways to improve that performance.
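
A skeletal Python sketch of the four components; the value-averaging update in the learning element is one illustrative choice, not the only one.

```python
# The four conceptual components of a learning agent, as a skeleton.
import random

class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}   # learned knowledge

    def performance_element(self):
        """Select the external action currently believed best."""
        return max(self.values, key=self.values.get)

    def critic(self, reward):
        """Feedback measured against a fixed performance standard."""
        return reward

    def learning_element(self, action, feedback, lr=0.1):
        """Improve the agent's knowledge using the critic's feedback."""
        self.values[action] += lr * (feedback - self.values[action])

    def problem_generator(self):
        """Suggest an exploratory action for new, informative experience."""
        return random.choice(list(self.values))

agent = LearningAgent(["left", "right"])
action = agent.performance_element()
agent.learning_element(action, agent.critic(reward=1.0))
```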
Examples of Learning Agents:
•Spam filters:
These agents learn to classify emails as spam or not spam based on user feedback
and patterns in the email content.
•Recommendation systems:
E-commerce sites and streaming services use learning agents to suggest products or
content based on user preferences and past interactions.
•Healthcare:
Learning agents can analyze patient data to help with diagnosis, treatment planning,
and disease prediction.
•Virtual assistants:
Virtual assistants like Siri or Alexa learn from user interactions to provide more
personalized and efficient service.
Key characteristics of learning agents:
•Adaptability:
They can adjust their behavior based on experience and feedback.
•Learning from data:
They use machine learning techniques to analyze data and improve
their performance.
•Goal-oriented:
They can be designed to achieve specific objectives, such as
maximizing a reward or minimizing a cost.
•Dynamic environments:
They are well-suited for environments where conditions are constantly changing.
NATURE OF ENVIRONMENTS

o The environment is the task environment (problem) for which the rational agent is the solution. Any task environment is characterised on the basis of PEAS.
o Performance – the performance characteristic which would make the agent successful or not. For example, as per the previous example, a clean floor and optimal energy consumption might be performance measures.
o Environment – the physical characteristics and constraints expected. For example, wood floors, furniture in the way, etc.
o Actuators – the physical or logical constructs which would take action. For example, for the vacuum cleaner these are the suction pumps.
o Sensors – again, physical or logical constructs which would sense the environment.
RATIONAL AGENTS
o Rational agents could be physical agents like the one described above, or a program that operates in a non-physical environment like an operating system.
o Imagine a website-operator bot designed to scan Internet news sources and show the interesting items to its users, while selling advertising space to generate revenue.
An example PEAS description for a math e-learning system:

Agent: Math e-learning system
Performance: SLA-defined score on the test
Environment: Student, teacher, parents
Actuators: Computer display for exercises, corrections, feedback
Sensors: Keyboard, mouse
o Environments can further be classified into various buckets:
o Observable – full or partial? If the agent's sensors get full access, then it does not need to pre-store any information. Partial observability may be due to inaccuracy of sensors or incomplete information about the environment.
o Number of agents – The vacuum cleaner works in a single-agent environment, but for driverless taxis, every driverless taxi is a separate agent, hence a multi-agent environment.
o Deterministic – The number of unknowns in the environment which affect
the predictability of the environment. For example, floor space for cleaning is
mostly deterministic, the furniture is where it is most of the time but taxi
driving on a road is non-deterministic.
o Discrete – Does the agent respond when needed or does it have to
continuously scan the environment. Driver-less is continuous, online tutor is
discrete
o Static – How often does the environment change. Can the agent learn
about the environment and always do the same thing?
o Episodic – If the response to a certain percept is not dependent on the previous one, i.e. it is stateless (like static methods in Java), then it is an episodic environment. If the decision taken now influences future decisions, then it is a sequential environment.
Agents in artificial intelligence

o An AI system can be defined as the study of the rational agent and its
environment.
o The agents sense the environment through sensors and act on their
environment through actuators
o What is an Agent?
o An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting. An agent can be:
o Human-Agent: A human agent has eyes, ears, and other organs which
work for sensors and hand, legs, vocal tract work for actuators.
o Robotic Agent: A robotic agent can have cameras, infrared range
finder, NLP for sensors and various motors for actuators.
o Software Agent: A software agent can have keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
o Hence the world around us is full of agents such as thermostats, cellphones, cameras, and even we ourselves are agents.
o Before moving forward, we should first know about sensors, effectors, and
actuators.
o Sensor: Sensor is a device which detects the change in the environment and
sends the information to other electronic devices. An agent observes its
environment through sensors.
o Actuators: Actuators are the component of machines that converts energy
into motion. The actuators are only responsible for moving and controlling a
system. An actuator can be an electric motor, gears, rails, etc.
o Effectors: Effectors are the devices which affect the environment. Effectors
can be legs, wheels, arms, fingers, wings, fins, and display screen.
INTELLIGENT AGENTS:
o An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals.
o An intelligent agent may learn from the environment to achieve its goals.
o A thermostat is an example of an intelligent agent.
o The following are the main four rules for an AI agent:
o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observation must be used to make decisions.
o Rule 3: The decision should result in an action.
o Rule 4: The action taken by an AI agent must be a rational action.
o Rational agent:
o A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.
o A rational agent is said to perform the right things. AI is about creating rational agents that use game theory and decision theory for various real-world scenarios.
o For an AI agent, rational action is most important because in reinforcement learning, for each best possible action the agent gets a positive reward, and for each wrong action the agent gets a negative reward.
o Note: Rational agents in AI are very similar to intelligent agents.
RATIONALITY
The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of the following points:
• The performance measure which defines the success criterion.
• The agent's prior knowledge of its environment.
• The best possible actions that an agent can perform.
STRUCTURE OF AN AI AGENT

o The task of AI is to design an agent program which implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as:
o Agent = Architecture + Agent program
o The following are the main three terms involved in the structure of an AI agent:
o Architecture: the machinery that an AI agent executes on.
o Agent function: maps a percept sequence to an action, f: P* → A.
o Agent program: an implementation of the agent function. An agent program executes on the physical architecture to produce the function f.
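
A tiny Python sketch of the mapping f: P* → A, where the agent program consults the whole percept sequence seen so far; the percepts and the rule are illustrative.

```python
# Agent program implementing f: P* -> A (percept sequence to action).
percept_history = []            # P*: every percept received so far

def agent_program(percept):
    percept_history.append(percept)
    if percept == "obstacle":   # a toy condition on the latest percept
        return "turn"
    return "forward"

for p in ["clear", "clear", "obstacle"]:
    print(agent_program(p))     # forward, forward, turn
```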
PEAS REPRESENTATION

o PEAS is a type of model on which an AI agent works. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four words:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
o Here, the performance measure is the objective for the success of an agent's behavior.
o PEAS for self-driving cars:
o For a self-driving car, the PEAS representation will be:
o Performance: Safety, time, legal drive, comfort
o Environment: Roads, other vehicles, road signs, pedestrians
o Actuators: Steering, accelerator, brake, signal, horn
o Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar
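
The same PEAS description can be captured as plain data; the PEAS class below is a hypothetical helper for illustration, not a standard library API.

```python
# PEAS for the self-driving car, recorded as structured data.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer",
             "accelerometer", "sonar"],
)
print(self_driving_car.actuators)
```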
Agent environment in AI:

o An environment is everything in the world which surrounds the agent, but it is not a part of the agent itself.
o An environment can be described as a situation in which an agent is present.
FEATURES OF ENVIRONMENT
o Environment can have various features from the point of view
of an agent:
o Fully observable vs Partially Observable
o Static vs Dynamic
o Discrete vs Continuous
o Deterministic vs Stochastic
o Single-agent vs Multi-agent
o Episodic vs sequential
o Known vs Unknown
o Accessible vs Inaccessible
o Fully observable vs Partially observable:

o If an agent's sensors can sense or access the complete state of the environment at each point in time, then it is a fully observable environment; otherwise it is partially observable.
o A fully observable environment is easy, as there is no need to maintain an internal state to keep track of the history of the world.
o If an agent has no sensors in all environments, then such an environment is called unobservable.
o
o Deterministic vs Stochastic:

o If an agent's current state and selected action can completely determine the next state of the environment, then such an environment is called a deterministic environment.
o A stochastic environment is random in nature and cannot be determined completely by an agent.
o In a deterministic, fully observable environment, the agent does not need to worry about uncertainty.
o Episodic vs Sequential:

o In an episodic environment, there is a series of one-shot actions, and only the current percept is required for the action.
o However, in a sequential environment, an agent requires memory of past actions to determine the next best actions.
o Single-agent vs Multi-agent:

o If only one agent is involved in an environment and operates by itself, then such an environment is called a single-agent environment.
o However, if multiple agents are operating in an environment, then such an environment is called a multi-agent environment.
o The agent design problems in the multi-agent environment are different from the single-agent environment.
o Static vs Dynamic:

o If the environment can change itself while an agent is deliberating, then such an environment is called a dynamic environment; otherwise it is called a static environment.
o Static environments are easy to deal with because an agent does not need to keep looking at the world while deciding on an action.
o However, for a dynamic environment, agents need to keep looking at the world before each action.
o Taxi driving is an example of a dynamic environment whereas
Crossword puzzles are an example of a static environment.
o
o Discrete vs Continuous:

o If in an environment there is a finite number of percepts and actions that can be performed within it, then such an environment is called a discrete environment; otherwise it is called a continuous environment.
o A chess game comes under a discrete environment, as there is a finite number of moves that can be performed.
o A self-driving car is an example of a continuous environment.
o Known vs Unknown
o
o Known and unknown are not actually features of an environment, but rather the agent's state of knowledge to perform an action.
o In a known environment, the results of all actions are known to the agent, while in an unknown environment the agent needs to learn how it works in order to perform an action.
o It is quite possible for a known environment to be partially observable and for an unknown environment to be fully observable.
o Accessible vs Inaccessible:

o If an agent can obtain complete and accurate information about the state of the environment, then such an environment is called an accessible environment; otherwise it is called inaccessible.
o An empty room whose state can be defined by its temperature is an example of an accessible environment.
o Information about an event on Earth is an example of an inaccessible environment.
SEARCH ALGORITHMS IN ARTIFICIAL INTELLIGENCE:

o Problem-solving agents:
o In artificial intelligence, search techniques are universal problem-solving methods. Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result.
SEARCH ALGORITHM TERMINOLOGIES:

o Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
o Search space: represents the set of possible solutions which a system may have.
o Start state: the state from which the agent begins the search.
o Goal test: a function which observes the current state and returns whether the goal state is achieved or not.
o Search tree: a tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
o Actions: a description of all the available actions to the agent.
o Transition model: a description of what each action does, which can be represented as a transition model.
o Path cost: a function which assigns a numeric cost to each path.
o Solution: an action sequence which leads from the start node to the goal node.
o Optimal solution: if a solution has the lowest cost among all solutions, it is called an optimal solution.
PROPERTIES OF SEARCH ALGORITHMS:

o Completeness: A search algorithm is said to be complete if it is guaranteed to return a solution whenever at least one solution exists for any random input.
o Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all other solutions, then such a solution is said to be an optimal solution.
o Time complexity: a measure of the time needed for an algorithm to complete its task.
o Space complexity: the maximum storage space required at any point during the search, relative to the complexity of the problem.
TYPES OF SEARCH ALGORITHMS
UNINFORMED SEARCH ALGORITHMS:

o Uninformed search is a class of general-purpose search algorithms which operate in a brute-force way.
o Uninformed search algorithms have no additional information about the state or search space other than how to traverse the tree, so they are also called blind search.
o Breadth-first search
o Depth-first search
o Depth-limited search
o Iterative deepening depth-first search
o Uniform-cost search
o Bidirectional search
BREADTH-FIRST SEARCH:
o Breadth-first search is the most common search strategy for traversing a tree or graph.
o This algorithm searches breadthwise in a tree or graph.
o The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
o The breadth-first search algorithm is an example of a general graph-search algorithm.
o Breadth-first search is implemented using a FIFO queue data structure.
o Advantages:
o BFS will provide a solution if any solution exists.
o If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e. the one requiring the least number of steps.
o Disadvantages:
o It requires a lot of memory, since each level of the tree must be saved in memory to expand the next level.
o BFS needs a lot of time if the solution is far away from the root node.
o In the example tree structure (figure omitted), BFS traverses from the root node S to the goal node K. BFS traverses in layers, so it follows the path shown by the dotted arrow, and the traversed path will be:
o S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
o Time complexity: the time complexity of the BFS algorithm is given by the number of nodes traversed in BFS until the shallowest node, where d = the depth of the shallowest solution and b = the branching factor:
o T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)
o Space complexity: the space complexity of BFS is given by the memory size of the frontier, which is O(b^d).
o Completeness: BFS is complete; if the shallowest goal node is at some finite depth, then BFS will find a solution.
o Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.
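
A minimal Python implementation of BFS with a FIFO queue; the adjacency-list graph below is illustrative.

```python
# Breadth-first search: expand level by level using a FIFO queue.
from collections import deque

def bfs(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path                  # shallowest path to the goal
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "C": [], "D": ["K"], "K": []}
print(bfs(graph, "S", "K"))   # ['S', 'B', 'D', 'K']
```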
DEPTH-FIRST SEARCH
1. Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
2. It is called depth-first search because it starts from the root node and follows each path to its greatest-depth node before moving to the next path.
3. DFS uses a stack data structure for its implementation.

The process of the DFS algorithm is similar to the BFS algorithm.

Note: Backtracking is an algorithmic technique for finding all possible solutions using recursion.
o Advantage:
o DFS requires very little memory, as it only needs to store the stack of nodes on the path from the root node to the current node.
o It takes less time to reach the goal node than the BFS algorithm (if it traverses along the right path).
o Disadvantage:
o There is a possibility that many states keep re-occurring, and there is no guarantee of finding a solution.
o The DFS algorithm goes for deep-down searching, and it may sometimes go into an infinite loop.
o In the example (figure omitted), DFS starts searching from root node S and traverses A, then B, then D and E; after traversing E it backtracks the tree, as E has no other successor and the goal node has not yet been found.
o After backtracking, it traverses node C and then G, where it terminates as it has found the goal node.
o Time complexity: the time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:
o T(b) = 1 + b + b^2 + b^3 + ... + b^m = O(b^m)
o where m = the maximum depth of any node, which can be much larger than d (the shallowest solution depth).
o Space complexity: the DFS algorithm needs to store only a single path from the root node, hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
o Optimality: the DFS algorithm is non-optimal, as it may generate a large number of steps or a high cost to reach the goal node.
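
A minimal recursive Python DFS; the graph mirrors the S, A, B, D, E, C, G example just described.

```python
# Depth-first search: follow one path to its deepest node, then backtrack.
def dfs(graph, node, goal, visited=None):
    visited = visited if visited is not None else set()
    visited.add(node)
    if node == goal:
        return [node]
    for nxt in graph.get(node, []):
        if nxt not in visited:
            sub = dfs(graph, nxt, goal, visited)
            if sub:
                return [node] + sub
    return None                       # dead end: backtrack

graph = {"S": ["A", "C"], "A": ["B"], "B": ["D", "E"],
         "C": ["G"], "D": [], "E": [], "G": []}
print(dfs(graph, "S", "G"))   # ['S', 'C', 'G'], after backtracking from A's branch
```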
DEPTH-LIMITED SEARCH ALGORITHM:

o A depth-limited search algorithm is similar to depth-first search with a predetermined limit ℓ.
o Depth-limited search can solve the drawback of the infinite path in depth-first search.
o In this algorithm, the node at the depth limit is treated as if it has no further successor nodes.
o Depth-limited search can terminate with two conditions of failure:
o Standard failure value: indicates that the problem does not have any solution.
o Cutoff failure value: indicates that there is no solution for the problem within the given depth limit.
o Advantages:
o Depth-limited search is memory efficient.
o Disadvantages:
o Depth-limited search also has the disadvantage of incompleteness.
o Completeness: the DLS algorithm is complete if the solution is above the depth limit.
o Time complexity: the time complexity of the DLS algorithm is O(b^ℓ).
o Space complexity: the space complexity of the DLS algorithm is O(b×ℓ).
o Optimality: depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even if ℓ > d.
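
A Python sketch of depth-limited search that distinguishes the cutoff failure value (limit reached) from standard failure (no solution at all); the graph is illustrative.

```python
# Depth-limited search: a node at the depth limit is treated as if it
# had no successors; "cutoff" signals failure caused by the limit.
def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"
    cutoff = False
    for nxt in graph.get(node, []):
        result = dls(graph, nxt, goal, limit - 1)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff else None   # cutoff vs. standard failure

graph = {"S": ["A"], "A": ["B"], "B": ["G"], "G": []}
print(dls(graph, "S", "G", 3))   # ['S', 'A', 'B', 'G']
print(dls(graph, "S", "G", 2))   # cutoff
```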
UNIFORM-COST SEARCH ALGORITHM:

o Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph.
o This algorithm comes into play when a different cost is available for each edge.
o Goal: find a path to the goal node which has the lowest cumulative cost.
o It can be used to solve any graph/tree where the optimal cost is in demand.
o A uniform-cost search algorithm is implemented using a priority queue.
o It gives maximum priority to the lowest cumulative cost.
o Uniform-cost search is equivalent to the BFS algorithm if the path cost of all edges is the same.
o Advantages:
o Uniform-cost search is optimal because at every state the path with the least cost is chosen.
o Disadvantages:
o It does not care about the number of steps involved in searching, only about path cost. As a result, this algorithm may get stuck in an infinite loop.
o Completeness:
o Uniform-cost search is complete: if there is a solution, UCS will find it.
o Time complexity:
o Let C* be the cost of the optimal solution and ε the minimum cost of each step toward the goal node. Then the number of steps is C*/ε + 1 (we add +1 because we start from state 0 and end at C*/ε).
o Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
o Space complexity:
o By the same logic, the worst-case space complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
o Optimality:
o Uniform-cost search is always optimal, as it only selects a path with the lowest path cost.
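
A minimal Python uniform-cost search using heapq as the priority queue, ordered by cumulative path cost; the edge weights are illustrative.

```python
# Uniform-cost search: always expand the frontier node with the lowest
# cumulative path cost, using a priority queue.
import heapq

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]      # (cost so far, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for nxt, step in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

graph = {"S": [("A", 1), ("B", 5)], "A": [("G", 10)], "B": [("G", 2)]}
print(ucs(graph, "S", "G"))   # (7, ['S', 'B', 'G'])
```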
ITERATIVE DEEPENING DEPTH-FIRST SEARCH:
o The iterative deepening algorithm is a combination of the DFS and BFS algorithms.
o This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.
o It performs depth-first search up to a certain "depth limit" and keeps increasing the depth limit after each iteration until the goal node is found.
o This search algorithm combines the benefits of breadth-first search's fast search and depth-first search's memory efficiency.
o The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.
o Advantages:
o It combines the benefits of the BFS and DFS algorithms in terms of fast search and memory efficiency.
o Disadvantages:
o The main drawback of IDDFS is that it repeats all the work of the previous phase.
o 1st iteration: A
o 2nd iteration: A, B, C
o 3rd iteration: A, B, D, E, C, F, G
o 4th iteration: A, B, D, H, I, E, C, F, K, G
o In the fourth iteration, the algorithm will find the goal node.
o Completeness:
o This algorithm is complete if the branching factor is finite.
o Time complexity:
o If b is the branching factor and d is the depth of the goal, the worst-case time complexity is O(b^d).
o Space complexity:
o The space complexity of IDDFS is O(b×d).
o Optimality:
o The IDDFS algorithm is optimal if the path cost is a non-decreasing function of the depth of the node.
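
A Python sketch of IDDFS: a plain depth-limited DFS run repeatedly with an increasing limit; the graph mirrors the iteration example above (goal G is found at limit 2).

```python
# Iterative deepening: repeat depth-limited DFS, growing the limit by one
# each round, until the goal is found at its shallowest depth.
def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for nxt in graph.get(node, []):
        sub = dls(graph, nxt, goal, limit - 1)
        if sub:
            return [node] + sub
    return None

def iddfs(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):   # limits 0, 1, 2, ...
        path = dls(graph, start, goal, limit)
        if path:
            return path
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": [], "E": [], "F": [], "G": []}
print(iddfs(graph, "A", "G"))   # ['A', 'C', 'G'], found at limit 2
```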
BIDIRECTIONAL SEARCH ALGORITHM:
o The bidirectional search algorithm runs two simultaneous searches:
o one from the initial state, called the forward search, and one from the goal node, called the backward search, to find the goal node.
o Bidirectional search replaces one single search graph with two small subgraphs, one starting the search from the initial vertex and the other from the goal vertex.
o The search stops when these two graphs intersect each other.
o Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
o Advantages:
o Bidirectional search is fast.
o Bidirectional search requires less memory.
o Disadvantages:
o Implementation of the bidirectional search tree is difficult.
o In bidirectional search, one should know the goal state in advance.
o In the example search tree (figure omitted), the bidirectional search algorithm is applied. It divides one graph/tree into two subgraphs, traversing from node 1 in the forward direction and from goal node 16 in the backward direction.
o The algorithm terminates at node 9, where the two searches meet.
o Completeness: bidirectional search is complete if we use BFS in both searches.
o Time complexity: the time complexity of bidirectional search using BFS is O(b^(d/2)).
o Space complexity: the space complexity of bidirectional search is O(b^(d/2)).
o Optimality: bidirectional search is optimal.
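
A Python sketch of bidirectional BFS on an undirected graph, stopping where the two frontiers intersect; a small chain of nodes 1 to 6 stands in for the 1-to-16 example above.

```python
# Bidirectional search: one BFS from the start, one from the goal;
# stop as soon as the two frontiers meet and join the partial paths.
from collections import deque

def bidirectional_search(graph, start, goal):
    if start == goal:
        return [start]
    fwd = {start: [start]}             # forward paths from the start
    bwd = {goal: [goal]}               # backward paths from the goal
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        node = qf.popleft()            # expand the forward frontier
        for nxt in graph.get(node, []):
            if nxt in bwd:             # frontiers intersect here
                return fwd[node] + [nxt] + bwd[nxt][-2::-1]
            if nxt not in fwd:
                fwd[nxt] = fwd[node] + [nxt]
                qf.append(nxt)
        node = qb.popleft()            # expand the backward frontier
        for nxt in graph.get(node, []):
            if nxt in fwd:
                return fwd[nxt] + bwd[node][::-1]
            if nxt not in bwd:
                bwd[nxt] = bwd[node] + [nxt]
                qb.append(nxt)
    return None

graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 6], 6: [5]}
print(bidirectional_search(graph, 1, 6))   # [1, 2, 3, 4, 5, 6]
```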
