BFS, DFS, A*, Minimax Algorithms Explained

Experiment file for Mumbai University


Experiment 3

Aim: To implement BFS algorithm


Theory:
Breadth First Search or BFS for a Graph:
Breadth First Traversal (or Search) for a graph is similar to Breadth First Traversal of a tree. The only
catch is that, unlike trees, graphs may contain cycles, so we may reach the same node again. To avoid
processing a node more than once, we use a Boolean visited array. For simplicity, it is assumed that all
vertices are reachable from the starting vertex. BFS uses a queue to hold the discovered nodes that are
waiting to be processed.
Advantages of BFS:
1. If a solution exists, BFS is guaranteed to find it.
2. BFS never gets trapped exploring a blind alley (a dead-end path of unwanted nodes).
3. If there is more than one solution, BFS finds the one requiring the fewest steps.
Disadvantages of BFS:
1. Memory constraints: it stores all the nodes of the present level before moving to the next level.
2. If the solution is far from the root, it consumes a lot of time.
Applications of BFS:
1. Finding the shortest path (in an unweighted graph).
2. Checking whether a graph is bipartite.
3. Copying garbage collection (Cheney's algorithm).
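As an illustration of the shortest-path application above, here is a minimal sketch (not part of the original experiment; the function name `bfs_distances` and the dictionary-based graph are illustrative) that uses BFS to compute the minimum number of edges from a source to every reachable vertex:

```python
from collections import deque

def bfs_distances(graph, source):
    """Return a dict of minimum edge counts from source to each reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in dist:          # the first visit reaches v via a shortest path
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Same edges as the experiment's graph
graph = {0: [1, 2], 1: [2], 2: [0, 3], 3: [3]}
print(bfs_distances(graph, 2))  # {2: 0, 0: 1, 3: 1, 1: 2}
```

Because BFS explores nodes level by level, the first time a node is discovered is along a path with the fewest edges, which is why no later update is ever needed.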
Program:
from collections import defaultdict, deque

class Graph:
    def __init__(self):
        # Default dictionary to store the adjacency list
        self.graph = defaultdict(list)

    # Function to add an edge to the graph
    def addEdge(self, u, v):
        self.graph[u].append(v)

    # Function to print a BFS of the graph starting from vertex s
    def BFS(self, s):
        # Mark all the vertices as not visited (size the array so it covers
        # every vertex, including those that only appear as edge targets)
        n = max(max(self.graph),
                max(v for adj in self.graph.values() for v in adj)) + 1
        visited = [False] * n
        # Create a queue for BFS; mark the source as visited and enqueue it
        queue = deque([s])
        visited[s] = True
        while queue:
            # Dequeue a vertex and print it
            s = queue.popleft()
            print(s, end=" ")
            # Mark and enqueue every unvisited neighbour of the dequeued vertex
            for i in self.graph[s]:
                if not visited[i]:
                    queue.append(i)
                    visited[i] = True

# Build the example graph
g = Graph()
g.addEdge(0, 1)
g.addEdge(0, 2)
g.addEdge(1, 2)
g.addEdge(2, 0)
g.addEdge(2, 3)
g.addEdge(3, 3)

print("Following is Breadth First Traversal (starting from vertex 2)")
g.BFS(2)

Output:
Following is Breadth First Traversal (starting from vertex 2)
2 0 3 1

Conclusion: Thus, we learned about Breadth First Search and implemented it.
Experiment 4
Aim: To implement Depth First Search
Theory:
Depth First Search on a Graph:
Depth First Traversal (or Search) for a graph is similar to Depth First Traversal of a tree. The only
catch is that, unlike trees, graphs may contain cycles, so we may reach the same node again. To avoid
processing a node more than once, we use a Boolean visited array. DFS uses a stack (here, the recursion
call stack) to keep track of the nodes still to be explored.

Advantages of DFS:
1. The memory requirement is linear with respect to the number of nodes.
2. It often needs less time and space than BFS.
3. A solution may be found without exploring much of the search space.
Disadvantages of DFS:
1. It is not guaranteed to find a solution; it may descend forever down an unbounded branch.
2. If the cut-off depth is too small, the search may miss the solution and must be repeated, increasing the time taken.
3. The required search depth cannot be determined until the search has proceeded.
Applications of DFS:
1. Finding connected components.
2. Topological sorting.
3. Finding bridges of a graph.
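The theory notes that DFS uses a stack, while the program below relies on recursion (the call stack). As a hedged sketch, the same traversal can be written with an explicit stack; the function name `dfs_iterative` and the dictionary-based graph are illustrative, not part of the experiment:

```python
def dfs_iterative(graph, start):
    """DFS using an explicit stack instead of recursion."""
    visited, order = set(), []
    stack = [start]
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        # Push neighbours in reverse so the first-listed one is explored first,
        # matching the visiting order of the recursive version
        for neighbour in reversed(graph.get(v, [])):
            if neighbour not in visited:
                stack.append(neighbour)
    return order

graph = {0: [1, 2], 1: [2], 2: [0, 3], 3: [3]}
print(dfs_iterative(graph, 2))  # [2, 0, 1, 3]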
Program:
from collections import defaultdict

class Graph:
    def __init__(self):
        # Default dictionary to store the adjacency list
        self.graph = defaultdict(list)

    # Function to add an edge to the graph
    def addEdge(self, u, v):
        self.graph[u].append(v)

    # A recursive helper used by DFS
    def DFSUtil(self, v, visited):
        # Mark the current node as visited and print it
        visited.add(v)
        print(v, end=' ')
        # Recur for all the vertices adjacent to this vertex
        for neighbour in self.graph[v]:
            if neighbour not in visited:
                self.DFSUtil(neighbour, visited)

    # The function to do DFS traversal
    def DFS(self, v):
        # Create a set to store visited vertices
        visited = set()
        # Call the recursive helper function to print the DFS traversal
        self.DFSUtil(v, visited)

# Build the example graph
g = Graph()
g.addEdge(0, 1)
g.addEdge(0, 2)
g.addEdge(1, 2)
g.addEdge(2, 0)
g.addEdge(2, 3)
g.addEdge(3, 3)
print("Following is Depth First Traversal (starting from vertex 2)")
g.DFS(2)

Output:
Following is Depth First Traversal (starting from vertex 2)
2 0 1 3

Conclusion: Thus, we learned about Depth First Search and implemented it.
Experiment 5
Aim: To implement A* Algorithm
Theory:
A* Search Algorithm
A* is based on using heuristic methods to achieve optimality and completeness, and is a variant of the
best-first algorithm. It is one of the most successful search algorithms for finding the shortest path
between nodes in a graph. It is an informed search algorithm: it combines the cost of the path already
travelled, g(n), with a heuristic estimate of the remaining cost to the goal, h(n). Each time A* expands
a node, it calculates f(n) = g(n) + h(n) for each neighbouring node n, and then expands the node with
the lowest value of f(n).
Advantages:
1. It is optimal when the heuristic used is admissible.
2. It is one of the best heuristic search techniques.
3. It is used to solve complex search problems.
4. No other optimal algorithm is guaranteed to expand fewer nodes than A*.
Disadvantages:
1. It is complete only if the branching factor is finite and every action has a fixed cost.
2. The performance of A* depends on the accuracy of the heuristic function h(n).
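The experiment's program below selects the open node with the lowest f(n) by a linear scan. As a sketch of a common alternative (the function name `a_star` and the tuple layout are my own, not the experiment's), a binary heap can keep the open list ordered by f(n) = g(n) + h(n):

```python
import heapq

def a_star(adj, h, start, goal):
    """A* with a heap-based open list; entries are (f, g, node, path)."""
    open_heap = [(h[start], 0, start, [start])]
    best_g = {start: 0}                       # cheapest known cost to each node
    while open_heap:
        f, g, n, path = heapq.heappop(open_heap)
        if n == goal:
            return path, g
        for m, w in adj.get(n, []):
            g2 = g + w
            if g2 < best_g.get(m, float('inf')):   # found a cheaper path to m
                best_g[m] = g2
                heapq.heappush(open_heap, (g2 + h[m], g2, m, path + [m]))
    return None, float('inf')

# Same graph and heuristic as the experiment
adj = {'A': [('B', 1), ('C', 3), ('D', 7)],
       'B': [('D', 5)],
       'C': [('D', 12)]}
h = {'A': 1, 'B': 1, 'C': 1, 'D': 1}
print(a_star(adj, h, 'A', 'D'))  # (['A', 'B', 'D'], 6)
```

Each pop is O(log n) instead of a full scan of the open list, which matters as the open list grows.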
Program:
class Graph:
    def __init__(self, adjac_lis):
        self.adjac_lis = adjac_lis

    def get_neighbors(self, v):
        return self.adjac_lis[v]

    # Heuristic function: here it has equal values for all nodes
    def h(self, n):
        H = {'A': 1, 'B': 1, 'C': 1, 'D': 1}
        return H[n]

    def a_star_algorithm(self, start, stop):
        open_lst = set([start])
        closed_lst = set()
        # g_cost[n] holds the cost of the cheapest known path from start to n
        g_cost = {start: 0}
        # par[n] holds the predecessor of n on that path
        par = {start: start}

        while len(open_lst) > 0:
            n = None
            # Find the open node with the lowest value of f() = g() + h()
            for v in open_lst:
                if n is None or g_cost[v] + self.h(v) < g_cost[n] + self.h(n):
                    n = v

            if n is None:
                print('Path does not exist!')
                return None

            # If the current node is the goal, reconstruct the path
            # by following the parent links back to the start
            if n == stop:
                reconst_path = []
                while par[n] != n:
                    reconst_path.append(n)
                    n = par[n]
                reconst_path.append(start)
                reconst_path.reverse()
                print('Path found: {}'.format(reconst_path))
                return reconst_path

            # For all the neighbours of the current node
            for (m, weight) in self.get_neighbors(n):
                # If m is in neither open_lst nor closed_lst,
                # add it to open_lst and note n as its parent
                if m not in open_lst and m not in closed_lst:
                    open_lst.add(m)
                    par[m] = n
                    g_cost[m] = g_cost[n] + weight
                else:
                    # Otherwise update its cost if this path is cheaper,
                    # and reopen it if it was already closed
                    if g_cost[m] > g_cost[n] + weight:
                        g_cost[m] = g_cost[n] + weight
                        par[m] = n
                        if m in closed_lst:
                            closed_lst.remove(m)
                            open_lst.add(m)

            open_lst.remove(n)
            closed_lst.add(n)

        print('Path does not exist!')
        return None

adjac_lis = {
    'A': [('B', 1), ('C', 3), ('D', 7)],
    'B': [('D', 5)],
    'C': [('D', 12)]
}
graph1 = Graph(adjac_lis)
graph1.a_star_algorithm('A', 'D')

Output:
Path found: ['A', 'B', 'D']

Conclusion: Thus, we learned about A* Search Algorithm and implemented it.


Experiment 6
Aim: To implement Minimax Algorithm for Game Playing
Theory:
Minimax is a kind of backtracking algorithm used in decision making and game theory to find the
optimal move for a player, assuming that the opponent also plays optimally. It is widely used in
two-player turn-based games such as Tic-Tac-Toe, Backgammon, Mancala, and Chess. In Minimax
the two players are called the maximizer and the minimizer. The maximizer tries to get the highest
score possible, while the minimizer tries to do the opposite and get the lowest score possible. Every
board state has a value associated with it: if the maximizer has the upper hand in a given state, the
score of the board tends to be positive; if the minimizer has the upper hand, it tends to be negative.
The values of the board are calculated by heuristics that are unique to each type of game.
Advantages:
1. Minimax algorithm is a beneficial problem-solving algorithm that helps perform a thorough
assessment of the search space.
2. It makes it possible to implement decision making in Artificial Intelligence, which has further
given way to the development of new and smart machines, systems, and computers.
Disadvantages:

1. It has a huge branching factor, which makes the process of reaching the goal state slow.
2. Searching and evaluating unnecessary nodes or branches of the game tree degrades the overall
performance and efficiency of the engine.
3. Both the min and max players have many choices to decide between.
4. Exploring the entire tree is not possible, as there are restrictions of time and space.
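Before the full Java program, the core recursion can be sketched in a few lines of Python for a toy subtraction game. The rules here (remove 3, 4 or 5 from the pile; a player with no legal move loses) are illustrative and not a line-for-line port of the Java code:

```python
def minimax(state, maximizing):
    """Minimax for a subtraction game: each turn removes 3, 4 or 5 from the
    pile; a player with no legal move loses. The returned value is from the
    maximizer's point of view: +1 (maximizer wins) or -1 (maximizer loses)."""
    moves = [state - k for k in (3, 4, 5) if state - k >= 0]
    if not moves:
        # Terminal state: the player to move has lost
        return -1 if maximizing else 1
    if maximizing:
        return max(minimax(m, False) for m in moves)
    return min(minimax(m, True) for m in moves)

print(minimax(27, True))   # 1: the first player can force a win from 27
print(minimax(8, True))    # -1: from 8 the first player loses with best play
```

The maximizer takes the max over the minimizer's replies and vice versa, which is exactly the alternation the Java `maxValue`/`minValue` pair implements.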
Program:
import java.util.*;

final class Minimax {
    private Minimax() {}

    public static State minimaxDecision(State state) {
        return state.getActions().stream()
                .max(Comparator.comparing(Minimax::minValue)).get();
    }

    private static double maxValue(State state) {
        if (state.isTerminal()) {
            return state.getUtility();
        }
        return state.getActions().stream().map(Minimax::minValue)
                .max(Comparator.comparing(Double::valueOf)).get();
    }

    private static double minValue(State state) {
        if (state.isTerminal()) {
            return state.getUtility();
        }
        return state.getActions().stream().map(Minimax::maxValue)
                .min(Comparator.comparing(Double::valueOf)).get();
    }

    public static class State {
        final int state;
        final boolean firstPlayer;
        final boolean secondPlayer;

        public State(int state, boolean firstPlayer) {
            this.state = state;
            this.firstPlayer = firstPlayer;
            this.secondPlayer = !firstPlayer;
        }

        Collection<State> getActions() {
            List<State> actions = new LinkedList<>();
            if (state > 4) {
                actions.add(new State(state - 5, secondPlayer));
            }
            if (state > 3) {
                actions.add(new State(state - 4, secondPlayer));
            }
            if (state > 2) {
                actions.add(new State(state - 3, secondPlayer));
            }
            return actions;
        }

        boolean isTerminal() {
            return state < 3;
        }

        double getUtility() {
            return firstPlayer ? -1 : 1;
        }
    }
}

class Main {
    public static void main(String[] args) {
        System.out.println("Welcome to my minimax algorithm");
        boolean end = false;
        int val = 27;
        boolean first = true;
        while (!end) {
            System.out.println("Current position = " + val + ", Player one: " + first);
            Minimax.State s = new Minimax.State(val, true);
            Minimax.State decision = Minimax.minimaxDecision(s);
            val = decision.state;
            if (decision.isTerminal()) {
                end = true;
                System.out.println("Current position = " + val + ", Player one won: " + first);
                System.out.println("Game over");
            }
            first = !first;
        }
    }
}
Output:
Current position = 11, Player one: true
Current position = 8, Player one: false
Current position = 3, Player one: true
Current position = 0, Player one won: true
Game over
Conclusion: Thus, we learned about Minimax Algorithm in Game Playing and implemented it.
Experiment 7
Aim: To implement Minimax Algorithm in Tic Tac Toe
Theory:
Two players alternately put Xs and Os in the compartments of a figure formed by two vertical lines
crossing two horizontal lines, and each tries to get a row of three Xs or three Os before the opponent
does.
Game Tree:

Pseudocode:
function minimax(board, depth, isMaximizingPlayer):
    if current board state is a terminal state:
        return value of the board
    if isMaximizingPlayer:
        bestVal = -INFINITY
        for each move in board:
            value = minimax(board, depth+1, false)
            bestVal = max(bestVal, value)
        return bestVal
    else:
        bestVal = +INFINITY
        for each move in board:
            value = minimax(board, depth+1, true)
            bestVal = min(bestVal, value)
        return bestVal
Program:
# Python3 program to find the next optimal move for a player
player, opponent = 'x', 'o'

# Returns True if there are moves remaining on the board
def isMovesLeft(board):
    for i in range(3):
        for j in range(3):
            if board[i][j] == '_':
                return True
    return False

# Evaluates the board: +10 if player has won, -10 if opponent has won
def evaluate(b):
    # Check rows for X or O victory
    for row in range(3):
        if b[row][0] == b[row][1] and b[row][1] == b[row][2]:
            if b[row][0] == player:
                return 10
            elif b[row][0] == opponent:
                return -10
    # Check columns for X or O victory
    for col in range(3):
        if b[0][col] == b[1][col] and b[1][col] == b[2][col]:
            if b[0][col] == player:
                return 10
            elif b[0][col] == opponent:
                return -10
    # Check diagonals for X or O victory
    if b[0][0] == b[1][1] and b[1][1] == b[2][2]:
        if b[0][0] == player:
            return 10
        elif b[0][0] == opponent:
            return -10
    if b[0][2] == b[1][1] and b[1][1] == b[2][0]:
        if b[0][2] == player:
            return 10
        elif b[0][2] == opponent:
            return -10
    # Neither side has won
    return 0

def minimax(board, depth, isMax):
    score = evaluate(board)
    # Return the score if either side has won, or 0 if the board is full
    if score == 10 or score == -10:
        return score
    if not isMovesLeft(board):
        return 0
    if isMax:
        # Maximizer's move: try every empty cell and keep the best value
        best = -1000
        for i in range(3):
            for j in range(3):
                if board[i][j] == '_':
                    board[i][j] = player
                    best = max(best, minimax(board, depth + 1, not isMax))
                    board[i][j] = '_'
        return best
    else:
        # Minimizer's move: try every empty cell and keep the worst value
        best = 1000
        for i in range(3):
            for j in range(3):
                if board[i][j] == '_':
                    board[i][j] = opponent
                    best = min(best, minimax(board, depth + 1, not isMax))
                    board[i][j] = '_'
        return best

# Returns the best move for the player on the given board
def findBestMove(board):
    bestVal = -1000
    bestMove = (-1, -1)
    for i in range(3):
        for j in range(3):
            if board[i][j] == '_':
                # Make the move, evaluate it, then undo it
                board[i][j] = player
                moveVal = minimax(board, 0, False)
                board[i][j] = '_'
                if moveVal > bestVal:
                    bestMove = (i, j)
                    bestVal = moveVal
    print("The value of the best Move is :", bestVal)
    print()
    return bestMove

# Driver code
board = [
    ['x', 'o', 'x'],
    ['o', 'o', 'x'],
    ['_', '_', '_']
]
bestMove = findBestMove(board)
print("The Optimal Move is :")
print("ROW:", bestMove[0], " COL:", bestMove[1])

Output:
The value of the best Move is : 10
The Optimal Move is :
ROW: 2 COL: 2
Conclusion: Thus, we learned about Minimax Algorithm in Tic Tac Toe and implemented it.
Experiment 8
Aim: To perform a case study on Prolog Software
Theory:
Description of Prolog:
Prolog (PROgramming in LOGic) is a 5th generation programming language (5GL). It is similar to a
4GL, with the difference that 4GLs are used for specific purposes (e.g. SQL for database management),
while 5GLs are meant to make the computer solve problems for you: you specify what you want and the
conditions to be met, and the computer finds the solution. In a 3GL, the programmer supplies the
algorithm that solves the problem; in a 5GL, only the conditions and constraints are needed. Some
famous examples are Prolog, OPS5 and Mercury.
Prolog has its roots in first-order logic, a formal logic, and unlike many other programming
languages, Prolog is intended primarily as a declarative programming language: the program logic is
expressed in terms of relations, represented as facts and rules. A computation is initiated by running a
query over these relations.
Syntax: relation(entity1, entity2, ....kth entity).
Example:
friends(raju, mahesh).
singer(sonu).
odd_number(5).
Explanation:
These facts can be interpreted as :
raju and mahesh are friends.
sonu is a singer.
5 is an odd number.

Query:
Query 1: ?- singer(sonu).
Output: Yes.

Explanation: As our knowledge base contains


the above fact, so output was 'Yes', otherwise
it would have been 'No'.
Query 2: ?- odd_number(7).
Output: No.
Explanation: As our knowledge base does not
contain the above fact, so output was 'No'.
Key Features:
1. Unification: the basic idea is whether the given terms can be made to represent the same structure.
2. Backtracking: when a goal fails, Prolog traces backwards and tries to satisfy the previous goal in another way.
3. Recursion: recursion is the basis for any search in a Prolog program.
Advantages:
1. Easy to build database. Doesn’t need a lot of programming effort.
2. Pattern matching is easy. Search is recursion based.
3. It has built in list handling. Makes it easier to play with any algorithm involving lists.
Disadvantages:
1. LISP (a functional programming language) dominates over Prolog with respect to I/O features.
2. Sometimes input and output are not easy.

The applications of prolog are as follows:


1. Specification Language
2. Robot Planning
3. Natural language understanding
4. Machine Learning
5. Problem Solving
6. Intelligent Database retrieval
7. Expert System
8. Automated Reasoning

Conclusion: We understood what Prolog is and why to use it.


Experiment 9
Aim: To perform a case study on AI Planning Projects of IBM
Theory:
What is AI Planning:
Planning is a long-standing sub-area of Artificial Intelligence (AI). Planning is the task of finding a
procedural course of action for a declaratively described system to reach its goals while optimizing
overall performance measures. Automated planners find the transformations to apply in each given
state out of the possible transformations for that state. In contrast to the classification problem,
planners provide guarantees on the solution quality.
Why is it Important: Planning Applications in Industry
1. Automation is an emerging trend that requires efficient automated planning
2. Many applications of planning in industry (e.g. robots and autonomous systems, cognitive
assistants, cyber security, service composition)
How to Spot a Planning Problem
1. Declarative
o You want to find a procedural course of action for a declaratively described system to
reach its goals while optimizing overall performance measures.
2. Domain Knowledge can be elicited or learned over time
o Existing domain knowledge can/should be exploited for building the model
o Human involvement controllable. Humans build the model and can contribute to the
solution by introducing knowledge.
3. Favor consistency over learning transient behaviors
o There is structure in the problem that cannot be learned just by training
o When no large training data is available
o Changes in the problem can make previous data irrelevant

Advantages of AI Planning Techniques


1. When explainability is desired
a. When you want to be able to explain why a particular course of action was chosen
b. Assignment of responsibility/blame is essential for automation of processes (e.g.,
autonomous driving, medical expert systems)
2. Rapid prototyping: short time to solution
3. A variety of off-the-shelf planners is available, both IBM proprietary and open-source
4. Your problem may change frequently, even if only in small ways
5. There is no need to change the solver, only to tweak the model

Success Stories: When Planning Meets DL


In many real-life applications there is structure in the problem that cannot be learned with DL (there
are just not enough examples). Solving optimization problems with learning alone is hard, but
integrating planning techniques with heuristic guidance learned by DL has produced some of the most
famous success stories of AI to date.
1. The Go player AlphaGo uses planning (Monte Carlo tree search) with deep learning (heuristic
guidance) to select the next move
2. The cognitive assistant Viv (Samsung) uses a knowledge graph, planning, and deep learning to
answer complicated queries
Example AI Planning Projects in IBM

State projection via AI planning


Imagining the future helps anticipate and prepare for what is coming. This has great importance to
many, if not all, human endeavours. In this paper, we develop the Planning Projector system
prototype, which applies the plan-recognition-as-planning technique to both explain the observations
derived from analyzing relevant news and social media, and project a range of possible future state
trajectories for human review. Unlike the plan recognition problem, where a set of goals, and often a
plan library must be given as part of the input, the Planning Projector system takes as input the
domain knowledge, a sequence of observations derived from the news, a time horizon, and the
number of trajectories to produce. It then computes the set of trajectories by applying a planner
capable of finding a set of high-quality plans on a transformed planning problem. The Planning
Projector prototype integrates several components including:
(1) knowledge engineering: the process of encoding the domain knowledge from
domain experts
(2) data transformation: the problem of analyzing and transforming the raw data
into a sequence of observations
(3) trajectory computation: characterizing the future state projection problem and
computing a set of trajectories
(4) user interface: clustering and visualizing the trajectories. We evaluate our
approach qualitatively and conclude that the Planning Projector helps users
understand future possibilities so that they can make more informed
decisions.

Conclusion: We conducted a case study on how IBM AI is used for planning projects, including an
example project and a use case.
Experiment 10
Aim: To perform a case study on implementation of Bayesian Belief Networks (BBN)
Theory:
Bayesian Belief Networks allow you to construct a model with nodes and directed edges by clearly
outlining the relationships between variables.
Technically there is no training happening within BBN. We simply define how different nodes in the
network are linked together. Then observe how the probabilities change after passing some evidence
into specific nodes.
Bayesian Belief Network (BBN) is a Probabilistic Graphical Model (PGM) that represents a set of
variables and their conditional dependencies via a Directed Acyclic Graph (DAG).

There are many use cases for Bayesian Belief Networks, from helping to diagnose diseases to
real-time predictions of a race outcome.
You can also build BBNs to help you with marketing decisions. Say, I may want to know how likely
this article is to reach 10K views. Hence, I can build a BBN to tell me the probability of certain events
occurring, such as posting a link to this article on Twitter and then evaluating how this probability
changes as I get ten retweets.
At the end of the day, the possibilities are almost limitless, with the ability to generate real-time
predictions that automatically update the entire network as soon as new evidence is introduced.
Features of BBN:
1. Bayesian networks are a type of probabilistic graphical model comprised of nodes and
directed edges.
2. Bayesian network models capture both conditionally dependent and conditionally
independent relationships between random variables.
3. Models can be prepared by experts or learned from data, then used for inference to estimate
the probabilities for causal or subsequent events.
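A hand-computed two-node example may make these ideas concrete before the full weather model; the numbers below are invented for illustration and are not taken from the weather data:

```python
# Two-node network: Humidity -> Rain, with made-up conditional probabilities.
p_humid = {'high': 0.4, 'low': 0.6}        # P(Humidity)
p_rain_given = {'high': 0.5, 'low': 0.1}   # P(Rain = yes | Humidity)

# Marginal P(Rain = yes): sum over the humidity states
p_rain = sum(p_humid[h] * p_rain_given[h] for h in p_humid)
print(round(p_rain, 3))  # 0.26

# Posterior P(Humidity = high | Rain = yes) via Bayes' rule: evidence on the
# child node updates our belief about its parent
posterior = p_rain_given['high'] * p_humid['high'] / p_rain
print(round(posterior, 3))  # 0.769
```

This is the same propagation that PyBBN performs over the whole DAG when evidence is set on a node, just written out by hand for a single edge.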
BBN Python Example using Real Life Data
Data and Python library setup
1. Australian weather data from Kaggle: https://www.kaggle.com/datasets/jsphyg/weather-dataset-rattle-package
2. PyBBN for creating Bayesian Belief Networks
3. Pandas for data manipulation
4. NetworkX and Matplotlib for drawing graphs

import pandas as pd # for data manipulation


import networkx as nx # for drawing graphs
import matplotlib.pyplot as plt # for drawing graphs
# for creating Bayesian Belief Networks (BBN)
from pybbn.graph.dag import Bbn
from pybbn.graph.edge import Edge, EdgeType
from pybbn.graph.jointree import EvidenceBuilder
from pybbn.graph.node import BbnNode
from pybbn.graph.variable import Variable
from pybbn.pptc.inferencecontroller import InferenceController
# Set Pandas options to display more columns
pd.options.display.max_columns=50
# Read in the weather data csv
df=pd.read_csv('weatherAUS.csv', encoding='utf-8')
# Drop records where target RainTomorrow=NaN
df=df[pd.isnull(df['RainTomorrow'])==False]
# For other columns with missing values, fill them in with column mean
df=df.fillna(df.mean())
# Create bands for variables that we want to use in the model
df['WindGustSpeedCat']=df['WindGustSpeed'].apply(lambda x: '0.<=40' if x<=40 else '1.40-50' if 40<x<=50 else '2.>50')
df['Humidity9amCat']=df['Humidity9am'].apply(lambda x: '1.>60' if x>60 else '0.<=60')
df['Humidity3pmCat']=df['Humidity3pm'].apply(lambda x: '1.>60' if x>60 else '0.<=60')
# Show a snapshot of the data
df
# Create nodes by manually typing in probabilities
H9am = BbnNode(Variable(0, 'H9am', ['<=60', '>60']), [0.30658, 0.69342])
H3pm = BbnNode(Variable(1, 'H3pm', ['<=60', '>60']), [0.92827, 0.07173,
0.55760, 0.44240])
W = BbnNode(Variable(2, 'W', ['<=40', '40-50', '>50']), [0.58660, 0.24040, 0.17300])
RT = BbnNode(Variable(3, 'RT', ['No', 'Yes']), [0.92314, 0.07686,
0.89072, 0.10928,
0.76008, 0.23992,
0.64250, 0.35750,
0.49168, 0.50832,
0.32182, 0.67818])

# This function calculates a probability distribution to go into the BBN
# (note: it can handle up to 2 parents)
def probs(data, child, parent1=None, parent2=None):
    if parent1 == None:
        # Calculate probabilities
        prob = pd.crosstab(data[child], 'Empty', margins=False,
                           normalize='columns').sort_index().to_numpy().reshape(-1).tolist()
    elif parent1 != None:
        # Check if child node has 1 parent or 2 parents
        if parent2 == None:
            # Calculate probabilities
            prob = pd.crosstab(data[parent1], data[child], margins=False,
                               normalize='index').sort_index().to_numpy().reshape(-1).tolist()
        else:
            # Calculate probabilities
            prob = pd.crosstab([data[parent1], data[parent2]], data[child], margins=False,
                               normalize='index').sort_index().to_numpy().reshape(-1).tolist()
    else:
        print("Error in Probability Frequency Calculations")
    return prob

# Create nodes by using our earlier function to automatically calculate probabilities


H9am = BbnNode(Variable(0, 'H9am', ['<=60', '>60']), probs(df, child='Humidity9amCat'))
H3pm = BbnNode(Variable(1, 'H3pm', ['<=60', '>60']), probs(df, child='Humidity3pmCat',
parent1='Humidity9amCat'))
W = BbnNode(Variable(2, 'W', ['<=40', '40-50', '>50']), probs(df, child='WindGustSpeedCat'))
RT = BbnNode(Variable(3, 'RT', ['No', 'Yes']), probs(df, child='RainTomorrow',
parent1='Humidity3pmCat', parent2='WindGustSpeedCat'))
# Create Network
bbn = Bbn() \
.add_node(H9am) \
.add_node(H3pm) \
.add_node(W) \
.add_node(RT) \
.add_edge(Edge(H9am, H3pm, EdgeType.DIRECTED)) \
.add_edge(Edge(H3pm, RT, EdgeType.DIRECTED)) \
.add_edge(Edge(W, RT, EdgeType.DIRECTED))
# Convert the BBN to a join tree
join_tree = InferenceController.apply(bbn)
# Set node positions
pos = {0: (-1, 2), 1: (-1, 0.5), 2: (1, 0.5), 3: (0, -1)}
# Set options for graph looks
options = {
"font_size": 16,
"node_size": 4000,
"node_color": "white",
"edgecolors": "black",
"edge_color": "red",
"linewidths": 5,
"width": 5,}
# Generate graph
n, d = bbn.to_nx_graph()
nx.draw(n, with_labels=True, labels=d, pos=pos, **options)
# Update margins and print the graph
ax = plt.gca()
ax.margins(0.10)
plt.axis("off")
plt.show()
# Define a function for printing marginal probabilities
def print_probs():
    for node in join_tree.get_bbn_nodes():
        potential = join_tree.get_bbn_potential(node)
        print("Node:", node)
        print("Values:")
        print(potential)
        print('----------------')
# Use the above function to print marginal probabilities
print_probs()

# To add evidence of events that happened so probability distribution can be recalculated


def evidence(ev, nod, cat, val):
    ev = EvidenceBuilder() \
        .with_node(join_tree.get_bbn_node_by_name(nod)) \
        .with_evidence(cat, val) \
        .build()
    join_tree.set_observation(ev)

# Use above function to add evidence


evidence('ev1', 'H9am', '>60', 1.0)

# Print marginal probabilities


print_probs()

# Add more evidence


evidence('ev1', 'H3pm', '>60', 1.0)
evidence('ev2', 'W', '>50', 1.0)
# Print marginal probabilities
print_probs()
Conclusion: We learned that BBNs come with the ability to generate real-time predictions that
automatically update the entire network as soon as new evidence is introduced.
