CH23723 AIML Lab
LABORATORY MANUAL
B.TECH CHEMICAL ENGINEERING
R2023
VISION OF INSTITUTION
MISSION OF INSTITUTION
i. To impart quality technical education imbibed with proficiency and humane values.
ii. To provide right ambience and opportunities for the students to develop into creative, talented
and globally competent professionals.
iii. To promote research and development in technology and management for the benefit of the
society.
VISION OF DEPARTMENT
MISSION OF DEPARTMENT
i. To provide a state-of-the-art learning environment that prepares students for the chemical
industries and for higher studies.
ii. To provide students the space to think, create and innovate through research.
PEOs
I. To produce employable graduates with the knowledge and competency in Chemical Engineering
complemented by the appropriate skills and attributes.
II. To produce creative and innovative graduates with design and soft skills to carry out various
problem solving tasks.
III. To enable the students to work in teams on multidisciplinary projects with effective
communication skills, individual, supportive and leadership qualities, and the right attitudes and
ethics.
IV. To produce graduates who possess interest in research and lifelong learning, and who
continuously strive to be at the forefront of technology.
1. Engineering Knowledge:
Apply the knowledge of mathematics, science, and engineering fundamentals to solve complex
chemical engineering problems.
2. Problem analysis:
Identify, formulate, review research literature, and analyze complex chemical engineering problems
reaching substantiated conclusions using first principles of mathematics, natural sciences and
engineering sciences.
3. Design/development of solutions:
Design solutions for complex chemical engineering problems, and design system components or processes
that meet the specified needs with appropriate consideration for public health and safety, and for
cultural, societal and environmental considerations.
8. Ethics:
Apply ethical principles and commit to professional ethics, responsibilities and norms of
chemical engineering practice.
10. Communication:
Communicate effectively on complex chemical engineering activities with the engineering community
and with society at large, such as, being able to comprehend and write effective reports and design
documentation, make effective presentations, and give and receive clear instructions.
PSO:
1. Graduates will be able to apply chemical engineering principles to design equipment and a process
plant.
2. They will be able to control and analyse chemical, physical and biological processes including the
hazards associated with these processes.
3. They will be able to develop mathematical models of real-world industrial problems and compute
solutions to dynamic processes.
2. Apply the machine learning concepts and algorithms in any suitable language of choice.
3. To equip students to develop process optimization models and build an Artificial Neural Network.
4. To develop conventional and hybrid models to solve chemical engineering problems by applying
machine learning techniques.
5. To learn about new optimization and dynamic simulation tools for solving chemical engineering
problems.
LIST OF EXPERIMENTS
3. For a given set of training data examples stored in a .CSV file, implement and demonstrate the
Candidate-Elimination algorithm to output a description of the set of all hypotheses consistent
with the training examples.
4. Write a program to demonstrate the working of the decision tree based ID3 algorithm.
Use an appropriate data set for building the decision tree and apply this knowledge to classify a
new sample.
6. Write a program to implement the naïve Bayesian classifier for a sample training data
set stored as a .CSV file. Compute the accuracy of the classifier, considering a few test data sets.
7. Apply EM algorithm to cluster a set of data stored in a .CSV file. Use the same data set for
clustering using k-Means algorithm. Compare the results of these two algorithms and comment
on the quality of clustering. You can add Java/Python ML library classes/API in the program.
8. Write a program to implement k-Nearest Neighbor algorithm to classify the iris data set. Print
both correct and wrong predictions. Java/Python ML library classes can be used.
9. Implement the non-parametric Locally Weighted Regression algorithm in order to fit data points.
Select an appropriate data set for your experiment and draw graphs.
10. Implement Gradient Boosting Algorithm to predict the yield of a chemical reaction based on
several input parameters, such as temperature, pressure, and concentrations of reactants.
COURSE OUTCOMES
4. To develop conventional and hybrid models to solve chemical engineering problems by applying
machine learning techniques.
5. To learn about new optimization and dynamic simulation tools for solving chemical engineering
problems.
CO PO MAPPING

CO \ PO   1   2   3   4   5   6   7   8   9   10  11  12
CO1       3   3   3   3   3   3   2   2   3   2   3   3
CO2       3   3   3   3   3   3   2   2   3   2   3   3
CO3       3   3   3   3   3   3   2   2   3   2   3   3
CO4       3   3   3   3   3   3   2   2   3   2   3   3
CO5       3   3   3   3   3   3   2   2   3   2   3   3
CO PSO MAPPING

CO \ PSO  1   2   3
CO1       3   2   3
CO2       3   2   3
CO3       3   2   3
CO4       3   2   3
CO5       3   2   3
• Before coming to the laboratory, understand the concept behind the experiment that you are going to
carry out.
• Keep the work area clean and properly arranged on the work bench.
• If any part of the computer is broken, report at once to the staff members / lab assistant.
• Take precautions to avoid fire accidents.
• No edible items are allowed inside the laboratory.
• Switch off the fans and lights while leaving the laboratory.
• In case of any medical emergency, report to the staff members / lab assistant.
• Do not use pen drives or other external hardware.
INDEX
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
def aStarAlgo(start_node, stop_node):
    open_set = set([start_node])
    closed_set = set()
    g = {start_node: 0}                 # store distance from starting node
    parents = {start_node: start_node}  # parents contains an adjacency map of all nodes
    while len(open_set) > 0:
        n = None
        for v in open_set:  # choose the node with the lowest f(n) = g(n) + h(n)
            if n is None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v
        if n != stop_node and Graph_nodes[n] is not None:
            for (m, weight) in get_neighbors(n):
                # for each node m, compare its distance from start i.e. g(m) to the distance from start through n node
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                else:
                    if g[m] > g[n] + weight:
                        g[m] = g[n] + weight  # update g(m)
                        parents[m] = n        # change parent of m to n
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        open_set.remove(n)
        closed_set.add(n)
    print('Path does not exist!')
    return None
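The listing above never defines Graph_nodes, H_dist, get_neighbors or heuristic, so the call below cannot run as printed. A minimal sketch follows; the topology, edge weights and heuristic values are illustrative assumptions, chosen only so that aStarAlgo('A', 'G') has a solvable instance:

Graph_nodes = {  # assumed adjacency list of (neighbor, weight) pairs; not from the manual
    'A': [('B', 2), ('E', 3)],
    'B': [('C', 1), ('G', 9)],
    'C': None,
    'E': [('D', 6)],
    'D': [('G', 1)]
}
H_dist = {'A': 11, 'B': 6, 'C': 99, 'D': 1, 'E': 7, 'G': 0}  # assumed heuristic values

def get_neighbors(v):
    return Graph_nodes[v] if Graph_nodes[v] is not None else []

def heuristic(n):
    return H_dist[n]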
aStarAlgo('A', 'G')
OUTPUT:
class Graph:
    def __init__(self, graph, heuristicNodeList, startNode):  # instantiate graph object with graph topology, heuristic values, start node
        self.graph = graph
        self.H = heuristicNodeList
        self.start = startNode
        self.parent = {}
        self.status = {}
        self.solutionGraph = {}
    def printSolution(self):
        print("FOR GRAPH SOLUTION, TRAVERSE THE GRAPH FROM THE START NODE:", self.start)
        print(" ")
        print(self.solutionGraph)
        print(" ")
    def computeMinimumCostChildNodes(self, v):  # Computes the Minimum Cost of child nodes of a given node v
        minimumCost = 0
        costToChildNodeListDict = {}
        costToChildNodeListDict[minimumCost] = []
        flag = True
        for nodeInfoTupleList in self.getNeighbors(v):  # iterate over all the sets of child node/s
            cost = 0
            nodeList = []
            for c, weight in nodeInfoTupleList:
                cost = cost + self.getHeuristicNodeValue(c) + weight
                nodeList.append(c)
            if flag == True:  # initialize Minimum Cost with the cost of the first set of child node/s
                minimumCost = cost
                costToChildNodeListDict[minimumCost] = nodeList  # set the Minimum Cost child node/s
                flag = False
            else:  # compare the current cost with the Minimum Cost so far
                if minimumCost > cost:
                    minimumCost = cost
                    costToChildNodeListDict[minimumCost] = nodeList  # set the Minimum Cost child node/s
        return minimumCost, costToChildNodeListDict[minimumCost]  # return Minimum Cost and its child node/s
    def aoStar(self, v, backTracking):  # AO* algorithm for a start node and backTracking status flag
        print("HEURISTIC VALUES :", self.H)
        print("SOLUTION GRAPH :", self.solutionGraph)
        print("PROCESSING NODE :", v)
        if self.getStatus(v) >= 0:  # if node v is not yet solved, compute its Minimum Cost child nodes
            minimumCost, childNodeList = self.computeMinimumCostChildNodes(v)
            self.setHeuristicNodeValue(v, minimumCost)
            self.setStatus(v, len(childNodeList))
            solved = True
            for childNode in childNodeList:
                self.parent[childNode] = v
                if self.getStatus(childNode) != -1:
                    solved = solved & False
            if solved == True:  # if the Minimum Cost nodes of v are solved, set the current node status as solved(-1)
                self.setStatus(v, -1)
                self.solutionGraph[v] = childNodeList  # update the solution graph with the solved nodes which may be a part of solution
            if v != self.start:  # check the current node is the start node for backtracking the current node value
                self.aoStar(self.parent[v], True)  # backtracking the current node value with backtracking status set to true
            if backTracking == False:  # if not backtracking, explore each Minimum Cost child node
                for childNode in childNodeList:
                    self.setStatus(childNode, 0)
                    self.aoStar(childNode, False)
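The listing omits the small accessor methods that aoStar and the driver code call (applyAOStar, getNeighbors, getStatus, setStatus, getHeuristicNodeValue, setHeuristicNodeValue). The sketches below follow the standard AO* lab implementation and should be read as reconstructions, not the manual's own text:

    def applyAOStar(self):  # start the AO* algorithm from the start node
        self.aoStar(self.start, False)

    def getNeighbors(self, v):  # return the AND/OR child node sets of a given node
        return self.graph.get(v, '')

    def getStatus(self, v):  # return the status of a given node (default 0: not yet visited)
        return self.status.get(v, 0)

    def setStatus(self, v, val):  # set the status of a given node
        self.status[v] = val

    def getHeuristicNodeValue(self, n):  # return the (revised) heuristic value of a node
        return self.H.get(n, 0)

    def setHeuristicNodeValue(self, n, value):  # set the revised heuristic value of a node
        self.H[n] = value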
h1 = {'A': 1, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
graph1 = {
'A': [[('B', 1), ('C', 1)], [('D', 1)]],
'B': [[('G', 1)], [('H', 1)]],
'C': [[('J', 1)]],
'D': [[('E', 1), ('F', 1)]],
'G': [[('I', 1)]]
}
G1= Graph(graph1, h1, 'A')
G1.applyAOStar()
G1.printSolution()
h2 = {'A': 1, 'B': 6, 'C': 12, 'D': 10, 'E': 4, 'F': 4, 'G': 5, 'H': 7} # Heuristic values of Nodes
graph2 = {                                    # Graph of Nodes and Edges
    'A': [[('B', 1), ('C', 1)], [('D', 1)]],  # Neighbors of Node 'A': B, C & D with respective weights
    'B': [[('G', 1)], [('H', 1)]],            # Neighbors are included in a list of lists
    'D': [[('E', 1), ('F', 1)]]               # Each sublist indicates "OR" nodes or "AND" nodes
}
G2 = Graph(graph2, h2, 'A')  # Instantiate Graph object with graph, heuristic values and start node
G2.applyAOStar()             # Run the AO* algorithm
G2.printSolution()           # Print the solution graph as output of the AO* algorithm
OUTPUT:
HEURISTIC VALUES : {'A': 1, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : B
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : G
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : B
HEURISTIC VALUES : {'A': 10, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 12, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : I
HEURISTIC VALUES : {'A': 12, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 0, 'J': 1, 'T': 3}
SOLUTION GRAPH : {'I': []}
PROCESSING NODE : G
HEURISTIC VALUES : {'A': 12, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1, 'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I']}
PROCESSING NODE : B
HEURISTIC VALUES : {'A': 12, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1, 'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1, 'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : C
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1, 'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1, 'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : J
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 0, 'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G'], 'J': []}
PROCESSING NODE : C
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 1, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 0, 'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G'], 'J': [], 'C': ['J']}
PROCESSING NODE : A
FOR GRAPH SOLUTION, TRAVERSE THE GRAPH FROM THE START NODE: A
{'I': [], 'G': ['I'], 'B': ['G'], 'J': [], 'C': ['J'], 'A': ['B', 'C']}
HEURISTIC VALUES : {'A': 1, 'B': 6, 'C': 12, 'D': 10, 'E': 4, 'F': 4, 'G': 5, 'H': 7}
SOLUTION GRAPH : {}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 11, 'B': 6, 'C': 12, 'D': 10, 'E': 4, 'F': 4, 'G': 5, 'H': 7}
SOLUTION GRAPH : {}
PROCESSING NODE : D
HEURISTIC VALUES : {'A': 11, 'B': 6, 'C': 12, 'D': 10, 'E': 4, 'F': 4, 'G': 5, 'H': 7}
SOLUTION GRAPH : {}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 11, 'B': 6, 'C': 12, 'D': 10, 'E': 4, 'F': 4, 'G': 5, 'H': 7}
SOLUTION GRAPH : {}
PROCESSING NODE : E
HEURISTIC VALUES : {'A': 11, 'B': 6, 'C': 12, 'D': 10, 'E': 0, 'F': 4, 'G': 5, 'H': 7}
SOLUTION GRAPH : {'E': []}
PROCESSING NODE : D
HEURISTIC VALUES : {'A': 11, 'B': 6, 'C': 12, 'D': 6, 'E': 0, 'F': 4, 'G': 5, 'H': 7}
SOLUTION GRAPH : {'E': []}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 7, 'B': 6, 'C': 12, 'D': 6, 'E': 0, 'F': 4, 'G': 5, 'H': 7}
SOLUTION GRAPH : {'E': []}
PROCESSING NODE : F
HEURISTIC VALUES : {'A': 7, 'B': 6, 'C': 12, 'D': 6, 'E': 0, 'F': 0, 'G': 5, 'H': 7}
SOLUTION GRAPH : {'E': [], 'F': []}
PROCESSING NODE : D
HEURISTIC VALUES : {'A': 7, 'B': 6, 'C': 12, 'D': 2, 'E': 0, 'F': 0, 'G': 5, 'H': 7}
SOLUTION GRAPH : {'E': [], 'F': [], 'D': ['E', 'F']}
PROCESSING NODE : A
FOR GRAPH SOLUTION, TRAVERSE THE GRAPH FROM THE START NODE: A
3. For a given set of training data examples stored in a .CSV file, implement and
demonstrate the Candidate-Elimination algorithm to output a description of
the set of all hypotheses consistent with the training examples.

import random
import csv
def g_0(n):
    return ("?",) * n

def s_0(n):
    return ('ɸ',) * n

def more_general(h1, h2):
    more_general_parts = []
    for x, y in zip(h1, h2):
        mg = x == "?" or (x != 'ɸ' and (x == y or y == 'ɸ'))
        more_general_parts.append(mg)
    return all(more_general_parts)

def fulfills(example, hypothesis):
    # an example fulfills a hypothesis when the hypothesis is more general than it
    return more_general(hypothesis, example)

def min_generalizations(h, x):
    h_new = list(h)
    for i in range(len(h)):
        if not fulfills(x[i:i+1], h[i:i+1]):
            h_new[i] = '?' if h[i] != 'ɸ' else x[i]
    return [tuple(h_new)]

def min_specializations(h, domains, x):
    results = []
    for i in range(len(h)):
        if h[i] == "?":
            for val in domains[i]:
                if x[i] != val:
                    h_new = h[:i] + (val,) + h[i+1:]
                    results.append(h_new)
        elif h[i] != 'ɸ':
            h_new = h[:i] + ('ɸ',) + h[i+1:]
            results.append(h_new)
    return results

def get_domains(examples):
    d = [set() for i in examples[0]]
    for x in examples:
        for i, xi in enumerate(x):
            d[i].add(xi)
    return [list(sorted(x)) for x in d]

def candidate_elimination(examples):
    domains = get_domains(examples)[:-1]
    G = set([g_0(len(domains))])
    S = set([s_0(len(domains))])
    i = 0
    print("\n G[{0}]:".format(i), G)
    print("\n S[{0}]:".format(i), S)
    for xcx in examples:
        i = i + 1
        x, cx = xcx[:-1], xcx[-1]  # Splitting data into attributes and decisions
        if cx == 'Y':  # x is positive example
            G = {g for g in G if fulfills(x, g)}
            S = generalize_S(x, G, S)
        else:  # x is negative example
            S = {s for s in S if not fulfills(x, s)}
            G = specialize_G(x, domains, G, S)
        print("\n G[{0}]:".format(i), G)
        print("\n S[{0}]:".format(i), S)
    return

def generalize_S(x, G, S):
    S_prev = list(S)
    for s in S_prev:
        if s not in S:
            continue
        if not fulfills(x, s):
            S.remove(s)
            Splus = min_generalizations(s, x)
            ## keep only generalizations that have a counterpart in G
            S.update([h for h in Splus if any([more_general(g, h) for g in G])])
            ## remove hypotheses less specific than any other in S
            S.difference_update([h for h in S if
                                 any([more_general(h, h1) for h1 in S if h != h1])])
    return S

def specialize_G(x, domains, G, S):
    G_prev = list(G)
    for g in G_prev:
        if g not in G:
            continue
        if fulfills(x, g):
            G.remove(g)
            Gminus = min_specializations(g, domains, x)
            ## keep only specializations that have a counterpart in S
            G.update([h for h in Gminus if any([more_general(h, s) for s in S])])
            ## remove hypotheses less general than any other in G
            G.difference_update([h for h in G if
                                 any([more_general(g1, h) for g1 in G if h != g1])])
    return G
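The listing never defines the examples variable consumed by the call below. A minimal loader is sketched here; the filename is an assumption, since the manual does not name the .CSV file:

with open('trainingexamples.csv') as csvFile:  # filename assumed; not given in the manual
    examples = [tuple(line) for line in csv.reader(csvFile)]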
candidate_elimination(examples)
OUTPUT:
G[3]: {('Sunny', '?', '?', '?', '?', '?'), ('?', '?', '?', '?', '?', 'Same'), ('?', 'Warm', '?', '?', '?', '?')}
G[4]: {('Sunny', '?', '?', '?', '?', '?'), ('?', 'Warm', '?', '?', '?', '?')}
S[4]: {('Sunny', 'Warm', '?', 'Strong', '?', '?')}
4. Write a program to demonstrate the working of the decision tree based ID3
algorithm. Use an appropriate data set for building the decision tree and apply
this knowledge to classify a new sample.
def infoGain(P, N):
    # entropy of a collection with P positive and N negative examples
    import math
    return -P / (P + N) * math.log2(P / (P + N)) - N / (P + N) * math.log2(N / (P + N))
    if countC[conceptVals[1]] == 0:
        tree = insertConcept(tree, addTo, conceptVals[0])
        return tree
    ClassEntropy = infoGain(countC[conceptVals[1]], countC[conceptVals[0]])
    Attr = {}
    for a in AttributeList:
        Attr[a] = list(set(data[a]))
    AttrCount = {}
    EntropyAttr = {}
    for att in Attr:
        for vals in Attr[att]:
            for c in conceptVals:
                iData = data[data[att] == vals]
                dataAtt = iData[iData[concept] == c]
                AttrCount[c] = dataAtt.shape[0]
            TotalInfo = AttrCount[conceptVals[1]] + AttrCount[conceptVals[0]]
            if AttrCount[conceptVals[1]] == 0 or AttrCount[conceptVals[0]] == 0:
                InfoGain = 0
            else:
                InfoGain = infoGain(AttrCount[conceptVals[1]], AttrCount[conceptVals[0]])
            EntropyAttr[att] = EntropyAttr.get(att, 0) + InfoGain * TotalInfo / data.shape[0]  # weighted entropy of the split on att
    Gain = {}
    for g in EntropyAttr:
        Gain[g] = ClassEntropy - EntropyAttr[g]
    Node = max(Gain, key=Gain.get)  # attribute with the highest information gain
def main():
    import pandas as pd
    data = pd.read_csv('id3.csv')
    AttributeList = list(data)[:-1]
    concept = str(list(data)[-1])
    conceptVals = list(set(data[concept]))
    tree = getNextNode(data, AttributeList, concept, conceptVals, {'root': 'None'}, 'root')
    print(tree)
    compute(tree)

main()
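The program reads id3.csv, but the manual does not reproduce that file. The snippet below writes a hypothetical PlayTennis-style file of the expected shape (categorical attribute columns followed by the concept column); all values are illustrative assumptions, not the manual's data:

import pandas as pd
# hypothetical sample rows; the manual's actual id3.csv is not shown
sample = pd.DataFrame({
    'Outlook':     ['Sunny', 'Sunny', 'Overcast', 'Rain', 'Rain'],
    'Temperature': ['Hot', 'Hot', 'Hot', 'Mild', 'Cool'],
    'Humidity':    ['High', 'High', 'High', 'High', 'Normal'],
    'Wind':        ['Weak', 'Strong', 'Weak', 'Weak', 'Strong'],
    'PlayTennis':  ['No', 'No', 'Yes', 'Yes', 'No'],
})
sample.to_csv('id3.csv', index=False)  # attributes first, concept column last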
OUTPUT:
Accuracy is : 0.75
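The next listing (training a neural network by backpropagation) begins mid-program: its imports, training data and sigmoid function are missing. The preamble below is a reconstruction; the X and y values are inferred from the Input and Actual Output printed under OUTPUT (the normalized rows [2,9], [1,5], [3,6] and targets [92], [86], [89]) and should be treated as such:

import numpy as np

X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)  # input features (reconstructed from OUTPUT)
y = np.array(([92], [86], [89]), dtype=float)        # targets (reconstructed from OUTPUT)
X = X / np.amax(X, axis=0)  # normalize each feature column to [0, 1]
y = y / 100                 # normalize targets to [0, 1]

def sigmoid(x):
    return 1 / (1 + np.exp(-x))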
def derivatives_sigmoid(x):
    return x * (1 - x)

epoch = 5000  # number of training iterations
lr = 0.1      # learning rate
wh = np.random.uniform(size=(2, 3))    # weights, input -> hidden (2 inputs, 3 hidden neurons)
bh = np.random.uniform(size=(1, 3))    # bias of the hidden layer
wout = np.random.uniform(size=(3, 1))  # weights, hidden -> output
bout = np.random.uniform(size=(1, 1))  # bias of the output layer

for i in range(epoch):
    # forward propagation
    hinp = np.dot(X, wh) + bh
    hlayer_act = sigmoid(hinp)
    outinp = np.dot(hlayer_act, wout) + bout
    output = sigmoid(outinp)
    # back propagation
    hiddengrad = derivatives_sigmoid(hlayer_act)
    outgrad = derivatives_sigmoid(output)
    EO = y - output            # error at the output layer
    d_output = EO * outgrad
    EH = d_output.dot(wout.T)  # error propagated back to the hidden layer
    d_hiddenlayer = EH * hiddengrad
    # weight updates (this step is absent from the original listing)
    wout += hlayer_act.T.dot(d_output) * lr
    wh += X.T.dot(d_hiddenlayer) * lr

print("Input: \n" + str(X))
print("Actual Output: \n" + str(y))
print("Predicted Output: \n", output)
OUTPUT:
Input:
[[0.66666667 1. ]
[0.33333333 0.55555556]
[1. 0.66666667]]
Actual Output:
[[0.92]
[0.86]
[0.89]]
Predicted Output:
[[0.69734296]
[0.68194708]
[0.69700956]]
import csv
import random
import math
def loadcsv(filename):
lines = csv.reader(open(filename, "r"))
dataset = list(lines)
for i in range(len(dataset)):
dataset[i] = [float(x) for x in dataset[i]]
return dataset
def mean(numbers):
return sum(numbers)/(len(numbers))
def stdev(numbers):
    avg = mean(numbers)
    v = 0
    for x in numbers:
        v += (x - avg)**2
    return math.sqrt(v/(len(numbers)-1))
def summarizeByClass(dataset):
separated = {}
    for i in range(len(dataset)):
        vector = dataset[i]
        if (vector[-1] not in separated):
            separated[vector[-1]] = []
        separated[vector[-1]].append(vector)
    summaries = {}
    for classValue, instances in separated.items():
        summaries[classValue] = [(mean(attribute), stdev(attribute)) for attribute in zip(*instances)][:-1]
    return summaries
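The main block below calls splitDataset, getPredictions and getAccuracy, none of which survive in the listing. The sketches that follow are reconstructions in the style of the well-known from-scratch Gaussian naive Bayes program; treat them as assumptions rather than the manual's exact code:

def splitDataset(dataset, splitRatio):
    # randomly split the rows into a training set and a test set
    trainSize = int(len(dataset) * splitRatio)
    trainSet = []
    copy = list(dataset)
    while len(trainSet) < trainSize:
        index = random.randrange(len(copy))
        trainSet.append(copy.pop(index))
    return [trainSet, copy]

def calculateProbability(x, mean, stdev):
    # Gaussian probability density of attribute value x
    exponent = math.exp(-(math.pow(x - mean, 2) / (2 * math.pow(stdev, 2))))
    return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent

def calculateClassProbabilities(summaries, inputVector):
    # multiply the per-attribute densities for each class
    probabilities = {}
    for classValue, classSummaries in summaries.items():
        probabilities[classValue] = 1
        for i in range(len(classSummaries)):
            mean, stdev = classSummaries[i]
            probabilities[classValue] *= calculateProbability(inputVector[i], mean, stdev)
    return probabilities

def predict(summaries, inputVector):
    # pick the class with the largest probability
    probabilities = calculateClassProbabilities(summaries, inputVector)
    bestLabel, bestProb = None, -1
    for classValue, probability in probabilities.items():
        if bestLabel is None or probability > bestProb:
            bestProb = probability
            bestLabel = classValue
    return bestLabel

def getPredictions(summaries, testSet):
    return [predict(summaries, testSet[i]) for i in range(len(testSet))]

def getAccuracy(testSet, predictions):
    correct = 0
    for i in range(len(testSet)):
        if testSet[i][-1] == predictions[i]:
            correct += 1
    return (correct / float(len(testSet))) * 100.0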
filename = 'pima-indians-diabetes.csv'
splitRatio = 0.67
dataset = loadcsv(filename)
trainingSet, testSet = splitDataset(dataset, splitRatio)
summaries = summarizeByClass(trainingSet)
predictions = getPredictions(summaries, testSet)
print("\nPredictions:\n", predictions)
accuracy = getAccuracy(testSet, predictions)
print('Accuracy ', accuracy)
OUTPUT:
Naive Bayes Classifier for concept learning problem
Split 14 rows into
Number of Training data: 12
Number of Test Data: 2
7. Apply EM algorithm to cluster a set of data stored in a .CSV file. Use the same
data set for clustering using k-Means algorithm. Compare the results of these
two algorithms and comment on the quality of clustering. You can add
Java/Python ML library classes/API in the program.
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.cluster import KMeans
import sklearn.metrics as sm
import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.mixture import GaussianMixture

l1 = [0, 1, 2]
def rename(s):
l2 = []
for i in s:
if i not in l2:
l2.append(i)
for i in range(len(s)):
pos = l2.index(s[i])
s[i] = l1[pos]
return s
iris = datasets.load_iris()
X = pd.DataFrame(iris.data, columns=['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width'])
y = pd.DataFrame(iris.target, columns=['Targets'])
def graph_plot(l, title, s, target):
    plt.subplot(l[0], l[1], l[2])
    if s == 1:
        plt.scatter(X.Sepal_Length, X.Sepal_Width, c=colormap[target], s=40)
    else:
        plt.scatter(X.Petal_Length, X.Petal_Width, c=colormap[target], s=40)
    plt.title(title)
plt.figure()
colormap = np.array(['red', 'lime', 'black'])
graph_plot([1, 2, 1],'sepal',1,y.Targets)
graph_plot([1, 2, 2],'petal',0,y.Targets)
plt.show()
def fit_model(modelName):
    model = modelName(3)  # 3 clusters / mixture components
    model.fit(X)
    plt.figure()
    colormap = np.array(['red', 'lime', 'black'])
    graph_plot([1, 2, 1], 'Real Classification', 0, y.Targets)
    if modelName == KMeans:
        m = 'Kmeans'
    else:
        m = 'Em'
    y1 = model.predict(X)
    graph_plot([1, 2, 2], m, 0, y1)
    plt.show()
    km = rename(y1)
    print("\nPredicted: \n", km)
    print("Accuracy ", sm.accuracy_score(y, km))
    print("Confusion Matrix ", sm.confusion_matrix(y, km))

fit_model(KMeans)
fit_model(GaussianMixture)
OUTPUT:
IRIS FEATURES :
['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
IRIS TARGET :
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2]
What EM thought:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 2 1 2 1
 1 1 1 2 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2]
Accuracy of EM is 0.9666666666666667
Confusion Matrix for EM is
[[50 0 0]
[ 0 45 5]
[ 0 0 50]]
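The k-NN listing that follows begins after its setup: the imports, train/test split and fitted classifier behind X_test, y_test and kn are missing. A minimal preamble is sketched below (absorbing the listing's own iris_dataset = load_iris() line); random_state=0 and n_neighbors=1 are assumptions, not values from the manual:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris_dataset = load_iris()
# split and classifier settings below are assumed:
X_train, X_test, y_train, y_test = train_test_split(iris_dataset["data"], iris_dataset["target"], random_state=0)
kn = KNeighborsClassifier(n_neighbors=1)
kn.fit(X_train, y_train)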
i = 1
x = X_test[i]
x_new = np.array([x])
print("\n XNEW \n", x_new)

for i in range(len(X_test)):
    x = X_test[i]
    x_new = np.array([x])
    prediction = kn.predict(x_new)
    print("\n Actual : {0} {1}, Predicted :{2}{3}".format(y_test[i], iris_dataset["target_names"][y_test[i]], prediction, iris_dataset["target_names"][prediction]))
print("\n TEST SCORE[ACCURACY]: {:.2f}\n".format(kn.score(X_test, y_test)))
OUTPUT:
[0]: [setosa]
[1]: [versicolor]
[2]: [virginica]
Target :
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2]
X TRAIN
[[5.9 3. 4.2 1.5]
[5.8 2.6 4. 1.2]
[6.8 3. 5.5 2.1]
[4.7 3.2 1.3 0.2]
[6.9 3.1 5.1 2.3]
[5. 3.5 1.6 0.6]
X TEST
[[5.8 2.8 5.1 2.4]
[6. 2.2 4. 1. ]
[5.5 4.2 1.4 0.2]
[7.3 2.9 6.3 1.8]
[5. 3.4 1.5 0.2]
[6.3 3.3 6. 2.5]
[5. 3.5 1.3 0.3]
[6.7 3.1 4.7 1.5]
[6.8 2.8 4.8 1.4]
[6.1 2.8 4. 1.3]
Y TRAIN
[1 1 2 0 2 0 0 1 2 2 2 2 1 2 1 1 2 2 2 2 1 2 1 0 2 1 1 1 1 2 0 0 2 1 0 0 1
 0 2 1 0 1 2 1 0 2 2 2 2 0 0 2 2 0 2 0 2 2 0 0 2 0 0 0 1 2 2 0 0 0 1 1 0 0
 1 0 2 1 2 1 0 2 0 2 0 0 2 0 2 1 1 1 2 2 1 1 0 1 2 2 0 1 1 1 1 0 0 0 2 1 2
 0]
XNEW
[[5. 2.9 1. 0.2]]
Predicted :[0]['setosa']
[0]['setosa']
Predicted :[0]['setosa']
Predicted :[0]['setosa']
Predicted :[0]['setosa']
Predicted :[0]['setosa']
[0]['setosa']
SCORE[ACCURACY]: 0.97
import numpy as np
import matplotlib.pyplot as plt
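The draw function below calls local_regression, which the listing omits. A reconstruction following the standard locally weighted regression formulation (weighted normal equations with a Gaussian kernel) is sketched here; treat it as an assumption rather than the manual's exact code:

def radial_kernel(x0, X, tau):
    # Gaussian weights that decay with distance from the query point x0
    return np.exp(np.sum((X - x0) ** 2, axis=1) / (-2 * tau * tau))

def local_regression(x0, X, Y, tau):
    x0 = np.r_[1, x0]                       # prepend the bias term to the query point
    X = np.c_[np.ones(len(X)), X]           # prepend the bias column to the data
    xw = X.T * radial_kernel(x0, X, tau)    # weight each sample by its kernel value
    beta = np.linalg.pinv(xw @ X) @ xw @ Y  # solve the weighted normal equations
    return x0 @ beta                        # prediction at x0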
def draw(tau):
prediction = [local_regression(x0, X, Y, tau) for x0 in domain]
plt.plot(X, Y, 'o', color='black')
plt.plot(domain, prediction, color='red')
plt.show()
X = np.linspace(-3, 3, num=1000)
domain = X
Y = np.log(np.abs(X ** 2 - 1) + .5)
draw(10)
draw(0.1)
OUTPUT:
(plots of the data points and the locally weighted regression fit for tau = 10 and tau = 0.1)
10. Implement Gradient Boosting Algorithm to predict the yield of a chemical reaction based on
several input parameters, such as temperature, pressure, and concentrations of reactants.
CODE:
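The manual ends before the code for this experiment. As a placeholder, here is a minimal sketch using scikit-learn's GradientBoostingRegressor on synthetic reaction data; every column name, data value and hyperparameter below is an assumption, not the manual's listing:

import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
n = 500
# hypothetical process conditions (units in comments are assumptions)
X = pd.DataFrame({
    'temperature': rng.uniform(300, 400, n),  # K
    'pressure': rng.uniform(1, 10, n),        # atm
    'conc_A': rng.uniform(0.1, 2.0, n),       # mol/L
    'conc_B': rng.uniform(0.1, 2.0, n),       # mol/L
})
# synthetic yield with an assumed nonlinear dependence plus noise
y = (0.2 * X['temperature'] + 5 * X['pressure'] + 30 * X['conc_A'] * X['conc_B']
     - 0.0002 * X['temperature']**2 + rng.normal(0, 2, n))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05, max_depth=3, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("Test RMSE:", mean_squared_error(y_test, pred) ** 0.5)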