CS3491 - AIML Lab Record
INDEX
Ex.No.  Date       Name of the Experiment
1A      20-01-23   Implementation of Uninformed search algorithms – BFS
1B      20-01-23   Implementation of Uninformed search algorithms – DFS
2A      03-02-23   Implementation of Informed search algorithms – A*
2B      03-02-23   Implementation of Informed search algorithms – memory-bounded A*
3       10-02-23   Implement naïve Bayes models
7       17-03-23   Build SVM models
10      31-03-23   Implement EM for Bayesian networks
11      31-03-23   Build simple NN models
12      21-04-23   Build deep learning NN models
Ex.No: 1A Implementation of Uninformed search algorithms (BFS)
Date:20-1-23
Aim :
To implement the uninformed search algorithm Breadth-First Search (BFS) using Python.
Algorithm:
Step i: Start by putting any one of the graph's vertices at the back of a queue.
Step ii: Take the front item of the queue and add it to the visited list.
Step iii: Create a list of that vertex's adjacent nodes. Add the ones which aren't in the visited list to the back of the queue.
Step iv: Repeat steps ii and iii until the queue is empty.
Program:
graph = {
  '5' : ['3','7'],
  '3' : ['2','4'],
  '7' : ['8'],
  '2' : [],
  '4' : ['8'],
  '8' : []
}

visited = []   # list to keep track of visited nodes
queue = []     # initialise the queue

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)
        print(m, end="")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
print("Breadth-First Search is")
bfs(visited, graph, '5')
Output:
Breadth-First Search is
537248
Result:
Thus, the program for the BFS algorithm using Python was implemented successfully.
Ex.No: 1B Implementation of Uninformed search algorithms (DFS)
Date:20-1-23
Aim :
To implement the uninformed search algorithm Depth-First Search (DFS) using Python.
Algorithm:
Step 1: Start by putting any one of the graph's vertices on top of a stack.
Step 2: Take the top item of the stack and add it to the visited list.
Step 3: Create a list of that vertex's adjacent nodes. Add the ones which aren't in the visited list to the top of the stack.
Step 4: Keep repeating steps 2 and 3 until the stack is empty.
Program:
graph = {
  '5' : ['3','7'],
  '3' : ['2','4'],
  '7' : ['8'],
  '2' : [],
  '4' : ['8'],
  '8' : []
}

visited = set()   # set to keep track of visited nodes

def dfs(visited, graph, node):
    if node not in visited:
        print(node, end="")
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
print("Depth-First Search")
dfs(visited, graph, '5')
Output:
Depth-First Search
532487
Result:
Thus, the program for the DFS algorithm using Python was implemented successfully.
Ex.No: 2A Implementation of Informed search algorithms (A*)
Date:3-2-23
Aim :
To implement the informed search algorithm A* using Python.
Algorithm:
Step 6: Check whether the successor node is already in the OPEN or CLOSED list; if it has not been in either list, add it to the OPEN list.
Step 7: Return to Step 2.
Program:
from collections import deque

class Graph:
    def __init__(self, adjacency_list):
        self.adjacency_list = adjacency_list

    def get_neighbors(self, v):
        return self.adjacency_list[v]

    def h(self, n):
        H = {
            'A': 1,
            'B': 1,
            'C': 1,
            'D': 1
        }
        return H[n]

    def a_star_algorithm(self, start_node, stop_node):
        open_list = set([start_node])
        closed_list = set([])
        g = {}
        g[start_node] = 0
        parents = {}
        parents[start_node] = start_node

        while len(open_list) > 0:
            n = None
            # pick the node in the open list with the lowest f(n) = g(n) + h(n)
            for v in open_list:
                if n == None or g[v] + self.h(v) < g[n] + self.h(n):
                    n = v

            if n == None:
                print('Path does not exist!')
                return None

            # if the goal is reached, reconstruct the path from the parents map
            if n == stop_node:
                reconst_path = []
                while parents[n] != n:
                    reconst_path.append(n)
                    n = parents[n]
                reconst_path.append(start_node)
                reconst_path.reverse()
                print('Path found: {}'.format(reconst_path))
                return reconst_path

            for (m, weight) in self.get_neighbors(n):
                if m not in open_list and m not in closed_list:
                    open_list.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                else:
                    if g[m] > g[n] + weight:
                        g[m] = g[n] + weight
                        parents[m] = n
                        if m in closed_list:
                            closed_list.remove(m)
                            open_list.add(m)

            open_list.remove(n)
            closed_list.add(n)

        print('Path does not exist!')
        return None

adjacency_list = {
    'A': [('B', 1), ('C', 3), ('D', 7)],
    'B': [('D', 5)],
    'C': [('D', 12)]
}
graph1 = Graph(adjacency_list)
graph1.a_star_algorithm('A', 'D')
Output:
Result:
Thus, the Python program for the A* algorithm was implemented successfully.
Ex.No: 2B Implementation of Informed search algorithms (memory-bounded A*)
Date:3-2-23
Aim :
To implement the memory-bounded A* search (Iterative Deepening A*) algorithm using Python.
Program:
def iterative_deepening_a_star(tree, heuristic, start, goal):
    threshold = heuristic[start][goal]
    while True:
        print("Iteration with threshold: " + str(threshold))
        distance = iterative_deepening_a_star_rec(tree, heuristic, start, goal, 0, threshold)
        if distance == float("inf"):
            # no node within the current threshold can reach the goal
            return -1
        elif distance < 0:
            print("Found the node we're looking for!")
            return -distance
        else:
            # grow the threshold to the smallest estimate that exceeded it
            threshold = distance

def iterative_deepening_a_star_rec(tree, heuristic, node, goal, distance, threshold):
    print("Visiting Node " + str(node))
    if node == goal:
        # a negative value signals that the goal has been reached
        return -distance
    estimate = distance + heuristic[node][goal]
    if estimate > threshold:
        print("Breached threshold with heuristic: " + str(estimate))
        return estimate
    min = float("inf")
    for i in range(len(tree[node])):
        if tree[node][i] != 0:
            t = iterative_deepening_a_star_rec(tree, heuristic, i, goal,
                                               distance + tree[node][i], threshold)
            if t < 0:
                return t
            elif t < min:
                min = t
    return min
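The record does not include the driver code for this listing. A minimal sketch, assuming a small hypothetical graph given as an adjacency (cost) matrix together with a heuristic matrix, could look like:

# Hypothetical 4-node graph: tree[i][j] is the edge cost from node i to node j (0 means no edge).
tree = [
    [0, 4, 3, 0],
    [0, 0, 0, 2],
    [0, 0, 0, 7],
    [0, 0, 0, 0],
]
# heuristic[i][j] is an admissible estimate of the remaining cost from node i to node j.
heuristic = [
    [0, 3, 2, 5],
    [0, 0, 0, 2],
    [0, 0, 0, 6],
    [0, 0, 0, 0],
]
print(iterative_deepening_a_star(tree, heuristic, 0, 3))   # search from node 0 to node 3

With these values the search should report a cost of 6 for the path 0 -> 1 -> 3.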
Output:
Result:
Thus, the Python program for the memory-bounded A* algorithm was implemented successfully.
Ex.No:3 Implement naïve Bayes models
Date:10-2-23
Aim :
To build a naïve Bayes model using the Gaussian naïve Bayes formula in Python.
Algorithm:
Step 1: Convert the given dataset into frequency tables.
Step 2: Generate Likelihood table by finding the probabilities of given features.
Step 3: Now, use Bayes' theorem to calculate the posterior probability.
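Because the features in the dataset below are continuous, the class-conditional likelihood of Step 2 is modelled with the Gaussian (normal) density, P(x | class) = (1 / (sqrt(2*pi) * stdev)) * exp(-(x - mean)^2 / (2 * stdev^2)); this is exactly what the calculate_probability() function in the program computes from the per-class mean and standard deviation.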
Program:
from math import sqrt
from math import pi
from math import exp

def separate_by_class(dataset):
    separated = dict()
    for i in range(len(dataset)):
        vector = dataset[i]
        class_value = vector[-1]
        if (class_value not in separated):
            separated[class_value] = list()
        separated[class_value].append(vector)
    return separated

def mean(numbers):
    return sum(numbers)/float(len(numbers))

def stdev(numbers):
    avg = mean(numbers)
    variance = sum([(x-avg)**2 for x in numbers]) / float(len(numbers)-1)
    return sqrt(variance)

def summarize_dataset(dataset):
    summaries = [(mean(column), stdev(column), len(column)) for column in zip(*dataset)]
    del(summaries[-1])
    return summaries

def summarize_by_class(dataset):
    separated = separate_by_class(dataset)
    summaries = dict()
    for class_value, rows in separated.items():
        summaries[class_value] = summarize_dataset(rows)
    return summaries

def calculate_probability(x, mean, stdev):
    exponent = exp(-((x-mean)**2 / (2 * stdev**2 )))
    return (1 / (sqrt(2 * pi) * stdev)) * exponent

def calculate_class_probabilities(summaries, row):
    total_rows = sum([summaries[label][0][2] for label in summaries])
    probabilities = dict()
    for class_value, class_summaries in summaries.items():
        probabilities[class_value] = summaries[class_value][0][2]/float(total_rows)
        for i in range(len(class_summaries)):
            mean, stdev, _ = class_summaries[i]
            probabilities[class_value] *= calculate_probability(row[i], mean, stdev)
    return probabilities

dataset = [[3.393533211,2.331273381,0],
           [3.110073483,1.781539638,0],
           [1.343808831,3.368360954,0],
           [3.582294042,4.67917911,0],
           [2.280362439,2.866990263,0],
           [7.423436942,4.696522875,1],
           [5.745051997,3.533989803,1],
           [9.172168622,2.511101045,1],
           [7.792783481,3.424088941,1],
           [7.939820817,0.791637231,1]]
summaries = summarize_by_class(dataset)
probabilities = calculate_class_probabilities(summaries, dataset[0])
print(probabilities)
Output:
{0: 0.05032427673372076, 1: 0.000115577183799457
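The recorded output stops at the raw class probabilities. The predicted class for a row is simply the key with the largest posterior value, so a prediction could be read off with, for example:

# pick the class with the highest posterior probability (not part of the recorded listing)
predicted_class = max(probabilities, key=probabilities.get)
print(predicted_class)

For the first row of the dataset above this evaluates to class 0.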
Result:
Thus, the naïve Bayes model using the Gaussian naïve Bayes formula was built and executed successfully in Python.
Ex.No: 4 Implement Bayesian networks
Date:17-2-23
Aim :
To construct a Bayesian network, define its conditional probability distributions, and validate the model using the pgmpy library in Python.
Algorithm:
Step 4: Define the structure of the network, that is, the causal relationships between all the variables.
Step 5: Define the probability rules governing the relationships between the variables.
Program:
# pip install pgmpy   (run once in the environment before executing the program)
from pgmpy.models import BayesianModel
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination
import numpy as np
bayesNet = BayesianModel()
bayesNet.add_node("M")
bayesNet.add_node("U")
bayesNet.add_node("R")
bayesNet.add_node("B")
bayesNet.add_node("S")
bayesNet.add_edge("M", "R")
bayesNet.add_edge("U", "R")
bayesNet.add_edge("B", "R")
bayesNet.add_edge("B", "S")
bayesNet.add_edge("R", "S")
cpd_A = TabularCPD('M', 2, values=[[.95], [.05]])
cpd_U = TabularCPD('U', 2, values=[[.85], [.15]])
cpd_H = TabularCPD('B', 2, values=[[.90], [.10]])
cpd_S = TabularCPD('S', 2, values=[[0.98, .88, .95, .6], [.02, .12, .05, .40]],
evidence=['R', 'B'], evidence_card=[2, 2])
cpd_R = TabularCPD('R', 2,
values=[[0.96, .86, .94, .82, .24, .15, .10, .05], [.04, .14, .06, .18, .76, .85,
.90, .95]],
evidence=['M', 'B', 'U'], evidence_card=[2, 2,2])
bayesNet.add_cpds(cpd_A, cpd_U, cpd_H, cpd_S, cpd_R)
bayesNet.check_model()
print("Model is correct.")
Output:
Model is correct.
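The listing imports VariableElimination but never uses it. As a possible follow-up (not part of the recorded program or output), the validated network could be queried for a posterior distribution, for example:

# query the network built above: posterior of S given that M is in state 1
solver = VariableElimination(bayesNet)
posterior_S = solver.query(variables=['S'], evidence={'M': 1})
print(posterior_S)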
Result:
Thus, the program for the Bayesian network was implemented successfully.
Ex.No: 5 Build Regression models
Date:24-2-23
Aim :
To build regression models (simple linear regression and a perceptron) using Python.
Algorithm:
Step 1: Initialize the parameters.
Step 2: Predict the value of a dependent variable by giving an independent
variable.
Step 3: Calculate the error in prediction for all data points.
Step 4: Calculate partial derivative w.r.t a0 and a1.
Step 5: Calculate the cost for each number and add them.
Step 6: Update the values of a0 and a1.
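Note that although the steps above describe an iterative (gradient-descent) fit, the listing below obtains the same line directly from the closed-form least-squares estimates, b = (Σxy − n·x̄·ȳ) / (Σx² − n·x̄²) and a = ȳ − b·x̄, computed from the running sums it accumulates.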
Program:
x = [14,16,27,42,39,50,83]
y = [2,5,7,9,10,13,20]
xy = 0
x2 = 0
xy_all = 0
x2_all = 0
x_all = 0
y_all = 0
for i in range(7):
    xy_all = xy_all + x[i]*y[i]
    x2_all = x2_all + x[i]*x[i]
    x_all = x_all + x[i]
    y_all = y_all + y[i]
x_bar = x_all/7
y_bar = y_all/7
b = (xy_all - 7*x_bar*y_bar)/(x2_all - 7*x_bar*x_bar)
a = (y_bar - b*x_bar)
print(b)
print(a)
print("Y=",b,"(X)+",a)
Perceptron:
x1 = [1,1,-1,-1]
x2 = [1,-1,1,-1]
t = [1,-1,-1,-1]
w1 = 0
w2 = 0
b = 0
yin = 0
del_w1 = 0
del_w2 = 0
del_b = 0
y = 0
breakpt = 1
yin = 0
alpha = 1
epoch = 1
cnt = 0
while(breakpt):
    cnt = 0
    for i in range(4):
        yin = b + (w1*x1[i]) + (w2*x2[i])
        if yin == 0:
            y = 0
        elif yin > 0:
            y = 1
        else:
            y = -1
        if y != t[i]:
            print("Not equal")
            del_b = alpha*t[i]
            del_w1 = del_b*x1[i]
            del_w2 = del_b*x2[i]
            b = del_b + b
            w1 = del_w1 + w1
            w2 = del_w2 + w2
        else:
            cnt = cnt + 1
    if cnt == 4:
        breakpt = 0
    epoch = epoch + 1
print("b=",b,"w1=",w1,"w2=",w2)
Output:
Perceptron:
Result:
Thus, the regression models (linear regression and perceptron) were implemented successfully using Python.
Ex.No: 6A Build decision tree
Date:3-3-23
Aim:
To implement the tree-based model (decision tree) using Python for the given set of data points.
Algorithm:
Step 2: On each iteration of the algorithm, iterate through every unused attribute of the set S and calculate the entropy (information gain) of that attribute.
Step 3: Then select the attribute which has the smallest entropy or, equivalently, the largest information gain.
Step 4: The set S is then split by the selected attribute to produce subsets of the data, and the algorithm continues to recur on each subset, considering only attributes never selected before.
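For a branch of a split that contains a fraction p of positive examples, the entropy used below is E = -p*log2(p) - (1-p)*log2(1-p) (taken as 0 when p is 0 or 1), and the total entropy of a feature is the size-weighted average of the entropies of its branches; these are the quantities accumulated in the entrophy and te lists of the listing.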
Program:
import math

length = {3:[2,0], 4:[1,3], 5:[2,2]}
gills = {'yes':[0,4], 'no':[5,1]}
beak = {'yes':[5,3], 'no':[0,2]}
teeth = {'many':[3,4], 'few':[2,1]}
features = ["length", "gills", "beak", "teeth"]
entrophy = []
te = []
n = 10
val = 0

# entropy of each branch of the "length" split
for i in length.keys():
    p = length[i][0]/(length[i][0]+length[i][1])
    if p == 0 or p == 1:   # condition restored: a pure branch has zero entropy
        e = 0
    else:
        e = -p*math.log(p,2)-((1-p)* math.log((1-p),2))
    entrophy = entrophy+[e]
# weighted (total) entropy of the "length" split
j = 0
for i in length.keys():
    t = length[i][0]+length[i][1]
    val += (t/n)*entrophy[j]
    j = j+1
te = te+[val]
print("Total Entrophy for Length", val)

val = 0
entrophy = []
for i in gills.keys():
    p = gills[i][0]/(gills[i][0]+gills[i][1])
    if p == 0 or p == 1:
        e = 0
    else:
        e = -p*math.log(p,2)-((1-p)* math.log((1-p),2))
    entrophy = entrophy+[e]
j = 0
for i in gills.keys():
    t = gills[i][0]+gills[i][1]
    val += (t/n)*entrophy[j]
    j = j+1
te = te+[val]
print("Total Entrophy for Gills", val)

val = 0
entrophy = []
for i in beak.keys():
    p = beak[i][0]/(beak[i][0]+beak[i][1])
    if p == 0 or p == 1:
        e = 0
    else:
        e = -p*math.log(p,2)-((1-p)* math.log((1-p),2))
    entrophy = entrophy+[e]
j = 0
for i in beak.keys():
    t = beak[i][0]+beak[i][1]
    val += (t/n)*entrophy[j]
    j = j+1
te = te+[val]
print("Total Entrophy for Beak", val)

val = 0
entrophy = []
for i in teeth.keys():
    p = teeth[i][0]/(teeth[i][0]+teeth[i][1])
    if p == 0 or p == 1:
        e = 0
    else:
        e = -p*math.log(p,2)-((1-p)* math.log((1-p),2))
    entrophy = entrophy+[e]
j = 0
for i in teeth.keys():
    t = teeth[i][0]+teeth[i][1]
    val += (t/n)*entrophy[j]
    j = j+1
te = te+[val]
print("Total Entrophy for Teeth", val)

# pick the feature whose split has the smallest total entropy
j = 0
minval = te[0]
for i in range(1,4):
    if te[i] < minval:
        minval = te[i]
print(minval)
for i in range(4):
    if minval == te[i]:
        break
print("Attribute selected for the split:", features[i])   # final report line assumed; not shown in the record
Output:
Total Entrophy for Length 0.7245112497836532
Result:
Thus, the program to implement the decision tree algorithm was executed successfully using Python.
Ex.No: 6B Build random forest
Date:3-3-23
Aim:
To implement the random forest algorithm using python.
Algorithm:
Step 1:First, start with the selection of random samples from a given dataset.
Step 2: Next, this algorithm will construct a decision tree for every sample. Then it will get the prediction result from every decision tree.
Step 3:In this step, voting will be performed for every predicted result.
Step 4:At last, select the most voted prediction result as the final prediction
result.
Program:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score

# load the Iris dataset from the UCI repository
path = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
headernames = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'Class']
dataset = pd.read_csv(path, names=headernames)
dataset.head()
# first row shown by dataset.head():  0  5.1  3.5  1.4  0.2  Iris-setosa

X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 4].values

# 30% of the 150 samples held out for testing (45 rows, matching the recorded confusion matrix)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)

classifier = RandomForestClassifier(n_estimators=50)   # number of trees assumed; not shown in the record
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)

result = confusion_matrix(y_test, y_pred)
print("Confusion Matrix:")
print(result)
result1 = classification_report(y_test, y_pred)
print("Classification Report:",)
print(result1)
result2 = accuracy_score(y_test, y_pred)
print("Accuracy:", result2)
Output:
Confusion Matrix:
[[14 0 0]
[ 0 18 1]
[ 0 0 12]]
Classification Report:
Accuracy: 0.9777777777777777
Result:
Thus, the tree-based models (decision tree and random forest) were successfully implemented using Python.
Ex.No: 7 Build SVM models
Date:17-03-23
Aim :
To build a Support Vector Machine (SVM) classification model using Python.
Algorithm:
Step 4: Split the X and y datasets into the training set and the test set.
Step 7: Predict the test set results.
Program:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, accuracy_score

dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values   # Age and Estimated Salary columns
y = dataset.iloc[:, 4].values
#print(dataset)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

classifier = SVC(kernel='linear', random_state=0)   # kernel assumed; not shown in the record
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)

cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)

# visualising the results (reconstructed; the record keeps only part of this plotting code)
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(X_set[:, 0].min() - 1, X_set[:, 0].max() + 1, 0.01),
                     np.arange(X_set[:, 1].min() - 1, X_set[:, 1].max() + 1, 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], label=j)
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
Output:
Result:
Thus, the SVM classification model was built and evaluated successfully using Python.
Ex.No: 8 Implement ensembling techniques
Date:24-3-23
Aim:
To implement the ensembling techniques like Bagging, Boosting and
Stacking using Python.
i) Bagging:
Algorithm:
Step 1:Create multiple datasets from the train dataset by selecting observations
with replacements
Step 3: Combine the predictions of all the base models to reach the final output.
Step 4:Bagging normally uses only one base model (XGBoost Regressor used in
the code below).
Program:
import pandas as pd
import xgboost as xgb
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

df = pd.read_csv("train_data.csv")
target = df["target"]
train = df.drop("target", axis=1)
X_train, X_test, y_train, y_test = train_test_split(train, target, test_size=0.20)   # split assumed

# bagging over a single type of base model (an XGBoost regressor)
# note: newer scikit-learn versions name this argument estimator instead of base_estimator
model = BaggingRegressor(base_estimator=xgb.XGBRegressor())
model.fit(X_train, y_train)
pred_final = model.predict(X_test)
print(mean_squared_error(y_test, pred_final))
Output:
4666
ii)Boosting:
Algorithm:
Step iv:Calculate errors using the predicted values and actual values.
Step vii:Make another model, make predictions using the new model in such a
way that errors made by the previous model are mitigated/corrected.
Step ix:The final model (strong learner) is the weighted mean of all the previous
models (weak learners).
Program:
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

df = pd.read_csv("train_data.csv")
target = df["target"]
train = df.drop("target", axis=1)
X_train, X_test, y_train, y_test = train_test_split(train, target, test_size=0.20)   # split assumed

model = GradientBoostingRegressor()
model.fit(X_train, y_train)
pred_final = model.predict(X_test)
print(mean_squared_error(y_test, pred_final))
Output:
4789
iii)Stacking:
Algorithm:
Step ii:A base model (say linear regression) is fitted on n-1 parts and predictions
are made for the nth part. This is done for each one of the n part of the train set.
Step iii:The base model is then fitted on the whole train dataset.
Step v:The Steps 2 to 4 are repeated for another base model which results in
another set of predictions for the train and test dataset.
Step vi:The predictions on train data set are used as a feature to build the new
model.
Step vii:This final model is used to make the predictions on test dataset
Program:
import pandas as pd
import xgboost as xgb
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

df = pd.read_csv("train_data.csv")
target = df["target"]
train = df.drop("target", axis=1)
X_train, X_test, y_train, y_test = train_test_split(train, target, test_size=0.20)   # split assumed

model_1 = LinearRegression()
model_2 = xgb.XGBRegressor()
model_3 = RandomForestRegressor()

# fit the base models; their predictions become the features for the final (meta) model, as in Steps ii-vi above
model_2.fit(X_train, y_train)
model_3.fit(X_train, y_train)
train_preds = pd.DataFrame({'xgb': model_2.predict(X_train), 'rf': model_3.predict(X_train)})
test_preds = pd.DataFrame({'xgb': model_2.predict(X_test), 'rf': model_3.predict(X_test)})

final_model = model_1
final_model.fit(train_preds, y_train)
pred_final = final_model.predict(test_preds)
print(mean_squared_error(y_test, pred_final))
Output:
4510
Result:
Thus, the above programs for ensembling techniques were successfully implemented using Python.
Ex.No: 9 Implement clustering algorithms
Date:24-3-23
Aim :
To implement the clustering algorithms K-means and K-medoids (PAM) using Python.
Algorithm:
Program:
K means clustering:
import math
x = [1, 1, 2, 2, 3, 5]
c1 = [1, 1.5]
c2 = [2, 1.5]
dist_c1 = []
dist_c2 = []
clust_1 = []
clust_2 = []
k=0
while k<2:
clust_1 = clust_2 = []
dist_c1 = dist_c2 = []
cx = cy = 0
for i in range(6):
else:
for i in range(len(clust_1)):
print(clust_1[i],end="")
cx = cx + x[clust_1[i]]
cy = cy + y[clust_1[i]]
c1 = [cx/len(clust_1), cy/len(clust_1)]
print()
cx = cy = 0
for i in range(len(clust_2)):
print(clust_2[i],end="")
cx = cx + x[clust_2[i]]
cy = cy + y[clust_2[i]]
c2 = [cx/len(clust_2), cy/len(clust_2)]
print()
k=k+1
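The K-means listing above is incomplete in the record (the y coordinate list and the distance/assignment steps are missing). As a self-contained cross-check, a minimal sketch using scikit-learn's KMeans on hypothetical 2-D points could look like:

import numpy as np
from sklearn.cluster import KMeans

# hypothetical 2-D points: the x values mirror the listing above, the y values are assumed
X = np.array([[1, 1], [1, 2], [2, 1], [2, 3], [3, 2], [5, 5]])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Labels:    ", kmeans.labels_)
print("Centroids: ", kmeans.cluster_centers_)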
Algorithm:
Step 5: Swap m and o, associate each data point to the closest medoid,
recompute the cost (sum of distances of points to their medoid)
Step 6: If the total cost of the configuration increased in the previous step, undo
the swap.
Program:
x = [2, 3, 3, 4, 6, 6, 7, 7, 8, 7]
y = [6, 4, 8, 7, 2, 4, 3, 4, 5, 6]
i = int(input())
j = int(input())
dist_c1 = []
dist_c2 = []
clust_c1 = []
clust_c2 = []
for k in range(10):
else:
q=0
print("Cluster 1: ",end="")
for i in range(len(clust_c1)):
print(clust_c1[i]," ",end="")
q = q + dist_c1[clust_c1[i]]
print()
print("Cluster 2: ",end="")
for i in range(len(clust_c2)):
print(clust_c2[i]," ",end="")
q = q + dist_c2[clust_c2[i]]
print()
print("Q = ",q)
i = int(input())
j = int(input())
dist_c1 = []
dist_c2 = []
clust_c1 = []
clust_c2 = []
for k in range(10):
else:
q=0
print("Cluster 1: ",end="")
for i in range(len(clust_c1)):
print(clust_c1[i]," ",end="")
q = q + dist_c1[clust_c1[i]]
print()
print("Cluster 2: ",end="")
for i in range(len(clust_c2)):
print(clust_c2[i]," ",end="")
q = q + dist_c2[clust_c2[i]]
print()
print("Q = ",q)
Output:
K means:
PAM:
Result:
Thus, the clustering algorithms (K-means and PAM) were implemented successfully using Python.
Ex.No: 10 Implement EM for Bayesian networks
Date:31-3-23
Aim :
To implement the Expectation-Maximization (EM) algorithm for learning the parameters of a Bayesian network using Python.
Algorithm:
Step 1:Create a Bayesian network object with the given structure and initial
values.
Step 3:For each node in the network, if it has no parents, use maximum
likelihood estimation to estimate its parameters.
Step 4:For each node with parents, initialize the parameter values using the MLE
estimates for each node’s conditional probability distribution.
Step 6: Update the parameter values for each node in the network.
Program:
import numpy as np
class BayesianNetwork:
self.structure = structure
self.values = values
parents = self.structure[node]
if parents == []:
else:
n_parents = len(parents)
n_values = len(self.values[node])
n_data = data.shape[0]
for i in range(n_parents):
parent = parents[i]
for j in range(n_values):
theta[j, i+1] = np.sum(data[mask, node]) / np.sum(mask)
while True:
for i in range(n_data):
x = data[i, node]
parents_x))
for j in range(n_parents):
parent_j = parents[j]
old_theta = np.copy(theta)
for j in range(n_values):
for k in range(n_parents):
if np.allclose(theta, old_theta):
break
self.values[node] = theta[:, 0]
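Only part of the class above survives in the record. As a self-contained illustration of the E-step and M-step that such a class applies per node, a minimal sketch of EM for a single hidden binary variable (a mixture of two biased coins, with hypothetical data) is:

import numpy as np

# hypothetical data: number of heads observed in 10 tosses per trial; which coin produced each trial is hidden
heads = np.array([5, 9, 8, 4, 7])
tosses = 10
theta_a, theta_b = 0.6, 0.5   # initial guesses for the two coin biases

for iteration in range(20):
    # E-step: responsibility that each trial came from coin A (uniform prior over the two coins)
    like_a = theta_a**heads * (1 - theta_a)**(tosses - heads)
    like_b = theta_b**heads * (1 - theta_b)**(tosses - heads)
    resp_a = like_a / (like_a + like_b)
    resp_b = 1 - resp_a
    # M-step: re-estimate each bias from the responsibility-weighted head counts
    theta_a = np.sum(resp_a * heads) / np.sum(resp_a * tosses)
    theta_b = np.sum(resp_b * heads) / np.sum(resp_b * tosses)

print("Estimated biases:", round(theta_a, 3), round(theta_b, 3))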
Output:
Result:
Thus, the EM algorithm for Bayesian networks was implemented successfully.
Ex.No: 11 Build simple NN models
Date:31-3-23
Aim :
To build a simple neural network model (a single-neuron network trained with back-propagation) using Python.
Algorithm:
Step 1: Take the inputs from the training dataset, performed some adjustments
based on their weights, and siphoned them via a method that computed the
output of the ANN.
Step 2: Compute the back-propagated error rate. In this case, it is the difference
between neuron’s predicted output and the expected output of the training
dataset.
Step 3: Based on the extent of the error obtained, we performed some minor weight adjustments using the Error Weighted Derivative formula.
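In a single-neuron network of this kind, the Error Weighted Derivative adjustment of Step 3 works out to adjustments = inputs.T dot (error * output * (1 - output)), where output * (1 - output) is the sigmoid derivative defined in the listing below.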
Program:
import numpy as np

class NeuralNetwork():
    def __init__(self):
        np.random.seed(1)
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        return x * (1 - x)

    def train(self, training_inputs, training_outputs, training_iterations):
        for iteration in range(training_iterations):
            output = self.think(training_inputs)
            error = training_outputs - output
            adjustments = np.dot(training_inputs.T, error * self.sigmoid_derivative(output))
            self.synaptic_weights += adjustments

    def think(self, inputs):
        inputs = inputs.astype(float)
        output = self.sigmoid(np.dot(inputs, self.synaptic_weights))
        return output

if __name__ == "__main__":
    neural_network = NeuralNetwork()
    print(neural_network.synaptic_weights)
    training_inputs = np.array([[0,0,1],
                                [1,1,1],
                                [1,0,1],
                                [0,1,1]])
    training_outputs = np.array([[0,1,1,0]]).T
    neural_network.train(training_inputs, training_outputs, 15000)   # iteration count assumed; not shown in the record
    print(neural_network.synaptic_weights)
    user_input_one = str(input("User Input One: "))
    user_input_two = str(input("User Input Two: "))
    user_input_three = str(input("User Input Three: "))
    print(neural_network.think(np.array([user_input_one, user_input_two,
                                         user_input_three])))
    print("Success")
Output:
Result:
Thus, the simple neural network model was built successfully using Python.
Ex.No: 12 Build deep learning NN models
Date:21-4-23
Aim :
To build a deep learning neural network model in Python (using the Keras framework).
Algorithm:
Program:
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense

data = pd.read_csv('diabetes.csv')
x = data.drop("Outcome", axis=1)
y = data["Outcome"]

# first model: one hidden layer
model = Sequential()
model.add(Dense(12, activation="relu"))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=10)   # 5 epochs as in the recorded output; batch size assumed

_, accuracy = model.evaluate(x, y)
print("Model accuracy: %.2f"% (accuracy*100))

predictions = (model.predict(x) > 0.5).astype(int)   # rounded to 0/1 class labels as in the recorded output
print(predictions[:, 0].tolist())

# second, deeper model (reconstructed: an extra hidden layer added to the same architecture)
model = Sequential()
model.add(Dense(12, activation="relu"))
model.add(Dense(8, activation="relu"))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"]) #compile model
model.fit(x, y, epochs=5, batch_size=10)
OUTPUT:
Epoch 1/5
Epoch 2/5
Epoch 3/5
Epoch 4/5
Epoch 5/5
[0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1,
1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1,
1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1,
0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0,
1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1,
0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0,
0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1,
1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0,
0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1,
1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0,
0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1,
1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1,
0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0,
0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1,
1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0,
0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0,
1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0,
0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,
1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0,
0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1,
1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0,
0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0]
Epoch 1/5
Epoch 2/5
Epoch 3/5
Epoch 4/5
Epoch 5/5
Result:
Thus, the deep learning neural network model was built successfully using the Keras framework in Python.