DL record

This record documents the experiments completed in a course on neural networks and machine learning: designing neural networks for multiclass classification and for predicting house prices, building convolutional neural networks for image classification, one-hot encoding, word embeddings, recurrent neural networks for text classification, Bayesian networks for medical diagnosis, and ensemble methods such as Random Forest and boosting. Each experiment includes the aim, a description, and program code implemented with Python libraries such as Keras, TensorFlow, and scikit-learn.


WEEK – 3
3a) Design a neural network for classifying newswires (multiclass classification) using the Reuters dataset.
Aim: To design a neural network for classifying newswires (multiclass classification) using the Reuters dataset.
Description:
Multiclass classification is the classification of samples into more than two classes; classifying samples into exactly two categories is called binary classification. This experiment designs a neural network that classifies newswires from the Reuters dataset, published by Reuters in 1986, into 46 mutually exclusive topics using the Python library Keras. The problem is a typical example of single-label, multiclass classification: each newswire belongs to exactly one topic.
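As a quick illustration of what "single-label, multiclass" means, the sketch below uses made-up logits rather than output from the trained model: a 46-way softmax turns raw scores into one probability per topic, and the predicted class is the argmax.

import numpy as np

# Hypothetical raw scores (logits) for one newswire over the 46 topics
logits = np.random.randn(46)

# Softmax: exponentiate and normalize, giving one probability per class
probs = np.exp(logits) / np.sum(np.exp(logits))

print("Probabilities sum to:", probs.sum())   # 1.0
print("Predicted topic index:", np.argmax(probs))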
Program:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import reuters
from tensorflow.keras import layers, models

# Load the Reuters newswires (top 10,000 most frequent words)
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)

# Multi-hot encode each newswire as a 10,000-dimensional binary vector
def vectorize(seqs, dim=10000):
    out = np.zeros((len(seqs), dim))
    for i, s in enumerate(seqs):
        out[i, s] = 1.0
    return out
train_data, test_data = vectorize(train_data), vectorize(test_data)

# Build model: 46-way softmax output, one unit per topic
model = models.Sequential([
    layers.Dense(64, activation='relu', input_shape=(10000,)),
    layers.Dense(64, activation='relu'),
    layers.Dense(46, activation='softmax')])

# Compile, train, and evaluate model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_data, train_labels, epochs=5, validation_data=(test_data, test_labels))
test_loss, test_acc = model.evaluate(test_data, test_labels)

# Plot loss & accuracy
for metric in ['loss', 'accuracy']:
    plt.plot(history.history[metric], label='Train')
    plt.plot(history.history[f'val_{metric}'], label='Validation')
    plt.title(f'Model {metric.capitalize()}')
    plt.xlabel('Epoch')
    plt.legend()
    plt.show()
print('Test accuracy:', test_acc)

Output:



3b) Design a neural network for predicting house prices using the Boston Housing Price dataset.
Aim: To design a neural network for predicting house prices using the Boston Housing Price dataset.
Description:
A neural network is a method in artificial intelligence that teaches computers to process data in a way inspired by the human brain. It is a type of machine learning, called deep learning, that uses interconnected nodes, or neurons, arranged in layers: an input layer, one or more hidden layers, and an output layer. The basic training process has two stages: forward propagation, which computes a prediction from the inputs, and backpropagation, which propagates the error backwards to update the weights.
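To make the two stages concrete, here is a minimal sketch, with toy values, of one forward pass and one backpropagation update for a single linear neuron trained with a squared-error loss:

import numpy as np

# One sample with two features and its target (illustrative values)
x, y_true = np.array([0.5, 1.5]), 2.0
w, b, lr = np.array([0.1, 0.2]), 0.0, 0.01

# Forward propagation: compute the prediction and the loss
y_pred = w @ x + b
loss = (y_pred - y_true) ** 2

# Backpropagation: gradients of the loss, then a gradient-descent step
grad_w = 2 * (y_pred - y_true) * x
grad_b = 2 * (y_pred - y_true)
w, b = w - lr * grad_w, b - lr * grad_b
print(f"loss={loss:.4f}, updated w={w}, b={b:.4f}")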
Program:
import tensorflow as tf

# Load dataset
(train_x, train_y), (test_x, test_y) = tf.keras.datasets.boston_housing.load_data()

# Build & compile model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(13, activation='relu'),
    tf.keras.layers.Dense(6, activation='relu'),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])

# Train & evaluate
model.fit(train_x, train_y, epochs=5)
model.evaluate(test_x, test_y)

Output:



WEEK – 5:
5a) Use a pre-trained convolutional neural network (VGG16) for image classification.
Aim: To use a pre-trained convolutional neural network (VGG16) for image classification.
Description:
VGG16 is a 16-layer convolutional neural network developed by the Visual Geometry Group at the University of Oxford and pre-trained on the ImageNet dataset, which contains over a million images across 1,000 classes. It takes 224×224 RGB images as input and stacks small 3×3 convolutions with max pooling, followed by fully connected layers. Because its weights are already trained, it can classify new images directly or serve as a feature extractor for transfer learning.
Program:
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions

# Load VGG16 with weights pre-trained on ImageNet
m = VGG16(weights='imagenet')

# A random array stands in here for a real 224x224 RGB image
x = np.random.rand(1, 224, 224, 3) * 255
x = preprocess_input(x)

# Predict and decode the top-3 ImageNet labels
p = m.predict(x)
d = decode_predictions(p, top=3)[0]
for i, (_, l, s) in enumerate(d):
    print(f"{i + 1}: {l} ({s:.2f})")

Output:



5b) Build a Convolutional Neural Network for simple image classification (dogs and cats).
Aim: To build a Convolutional Neural Network for simple image classification (dogs and cats).
Description:
Image classification is one of the most interesting and useful applications of deep neural networks; convolutional neural networks let us automate the task of grouping similar images and organizing data without human supervision. A Convolutional Neural Network (CNN) works by applying convolutional layers, using operations such as conv2d to convolve learned filters (kernels) with the input images.
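As a quick illustration of the conv2d operation itself (random values stand in for a real image, and the layer's eight 3×3 filters are untrained):

import numpy as np
import tensorflow as tf

# A random 32x32 RGB "image"
img = np.random.rand(1, 32, 32, 3).astype('float32')

# One convolutional layer with 8 filters of size 3x3
conv = tf.keras.layers.Conv2D(8, (3, 3), activation='relu')
out = conv(img)
print(img.shape, '->', out.shape)   # (1, 32, 32, 3) -> (1, 30, 30, 8)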
Program:
import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.utils import to_categorical

# Load CIFAR-10 (it stands in here for a dogs-vs-cats dataset; 10 classes)
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Normalize pixel values and one-hot encode the labels
x_train, x_test = x_train / 255.0, x_test / 255.0
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

# Two convolution/pooling blocks followed by a dense classifier
m = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.2),
    Dense(10, activation='softmax')
])

m.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
m.fit(x_train, y_train, batch_size=32, epochs=3, validation_data=(x_test, y_test))
print(f'Test accuracy: {m.evaluate(x_test, y_test)[1]:.2f}')

Output:



WEEK – 6:
6. Implement one hot encoding of words or characters.
Aim: To implement one-hot encoding of words or characters.
Description:
One-hot encoding is a technique used to convert categorical data into a numerical format that machine learning models can understand. Instead of assigning arbitrary numbers to categories, it represents them as binary vectors in which only one position is "hot" (i.e., set to 1) and the rest are 0.
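Since the program below works at the character level, here is a brief word-level sketch of the same idea (the sentence and variable names are illustrative, not part of the recorded program):

from tensorflow.keras.preprocessing.text import Tokenizer
import numpy as np

sentence = ["deep learning is fun"]
tok = Tokenizer()                          # word-level by default
tok.fit_on_texts(sentence)
seq = tok.texts_to_sequences(sentence)[0]  # e.g. [1, 2, 3, 4]

# Each word becomes a binary vector with a single 1 at its index
vocab_size = len(tok.word_index) + 1
one_hot = np.zeros((len(seq), vocab_size))
one_hot[np.arange(len(seq)), seq] = 1.0
print(tok.word_index)
print(one_hot)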
Program:
from tensorflow.keras.preprocessing.text import Tokenizer
import numpy as np

# Sample text data
texts = ["hello", "world", "deep"]

# Tokenize at character level
tokenizer = Tokenizer(char_level=True)  # Character-level encoding
tokenizer.fit_on_texts(texts)

# Convert texts to sequences of character indices
sequences = tokenizer.texts_to_sequences(texts)

# Get vocabulary size
vocab_size = len(tokenizer.word_index) + 1  # +1 for padding token

# Define the one-hot encoding function
def one_hot_encode(sequences, dimension):
    results = np.zeros((len(sequences), dimension))
    for i, seq in enumerate(sequences):
        for char_index in seq:
            results[i, char_index] = 1.0
    return results

# One-hot encode sequences
one_hot_encoded_chars = one_hot_encode(sequences, vocab_size)
print("Original Sequences:", sequences)
print("One-Hot Encoded Characters:\n", one_hot_encoded_chars)

Output:



WEEK – 7:
7. Implement word embeddings for the IMDB dataset.
Aim: To implement word embeddings for the IMDB dataset.
Description:
Word embeddings represent each word in the vocabulary as a dense, low-dimensional vector of real numbers. Unlike one-hot vectors, which are sparse, high-dimensional, and treat every pair of words as equally different, embeddings are learned from data, so words that appear in similar contexts end up with similar vectors. In Keras, the Embedding layer implements this as a trainable lookup table: it takes integer word indices and returns the corresponding vectors, which are updated by backpropagation together with the rest of the model.
Program:
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Flatten, Dense
from tensorflow.keras.layers import Embedding
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Load and preprocess data
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=10000)
x_train = pad_sequences(x_train, maxlen=20)
x_test = pad_sequences(x_test, maxlen=20)

# Build model
model = Sequential([
    Embedding(10000, 8, input_length=20),
    Flatten(),
    Dense(1, activation='sigmoid')
])

# Compile and train
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=32,
          validation_split=0.2)

Output:



WEEK – 8:
8. Implement a Recurrent Neural Network for the IMDB movie review classification problem.
Aim: To implement a Recurrent Neural Network for the IMDB movie review classification problem.
Description:
A Recurrent Neural Network (RNN) is a type of neural network in which the output from the previous step is fed as input to the current step. In traditional neural networks all inputs and outputs are independent of each other, but to predict the next word of a sentence the previous words are required, so the network must remember them. An RNN solves this with a hidden state, also called a memory state, which carries information about the sequence seen so far. It reuses the same parameters at every step, performing the same operation on each input, which keeps the number of parameters small compared with other neural networks.
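A minimal NumPy sketch of the recurrence that produces the hidden state (random weights and illustrative sizes; a trained RNN learns W, U, and b):

import numpy as np

# h_t = tanh(W x_t + U h_{t-1} + b), with the same W, U, b at every step
rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), np.zeros(4)

h = np.zeros(4)                      # hidden (memory) state
for x_t in rng.normal(size=(5, 3)):  # a sequence of 5 input vectors
    h = np.tanh(W @ x_t + U @ h + b)
print(h)                             # the final state summarizes the sequence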
Program:
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN, Dense
from tensorflow.keras.preprocessing.sequence import pad_sequences

max_features = 10000   # vocabulary size
maxlen = 500           # cut reviews after 500 words
batch_size = 32

# Load and pad the IMDB reviews
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('pad sequences (samples x time)')
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)

# Embedding -> SimpleRNN -> sigmoid classifier
model = Sequential([
    Embedding(max_features, 32),
    SimpleRNN(32),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3, batch_size=batch_size, validation_split=0.2)
print('Test accuracy:', model.evaluate(x_test, y_test)[1])

Output:



WEEK – 9:
9. Consider a patient dataset. Apply a linear classification technique (SVM) to identify heart patients.
Aim: To apply a linear classification technique (SVM) on a patient dataset to identify heart patients.
Description:
Support Vector Machine (SVM) is one of the most popular supervised learning algorithms; it can be used for both classification and regression problems but is primarily used for classification. The goal of the SVM algorithm is to find the best line or decision boundary (a hyperplane) that segregates n-dimensional space into classes, so that new data points can be placed in the correct category. With a linear kernel, the decision rule reduces to the sign of w·x + b.
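A minimal sketch of the linear decision rule on made-up 2-D points (illustrative only, not the patient data used below):

import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 2], [2, 3], [3, 3], [6, 5], [7, 8], [8, 8]])
y = np.array([0, 0, 0, 1, 1, 1])
clf = SVC(kernel='linear').fit(X, y)

# The learned boundary is w.x + b = 0; the sign decides the class
w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, "b =", b)
print("decision value for [4, 4]:", w @ np.array([4, 4]) + b)
print("predicted class:", clf.predict([[4, 4]])[0])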
Program:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Generate synthetic dataset
np.random.seed(42)
df = pd.DataFrame(np.random.randint([30, 100, 150, 0, 0],
                                    [80, 180, 300, 2, 2], (200, 5)),
                  columns=['Age', 'RestBP', 'Chol', 'ExAng', 'AHD'])

# Prepare data
X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns='AHD'), df['AHD'], test_size=0.3, random_state=23)

# Fit the scaler on the training data only, then apply it to both splits
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Train & evaluate SVM
svm = SVC(kernel='linear', random_state=0).fit(X_train, y_train)
print("Train Acc:", svm.score(X_train, y_train),
      "Test Acc:", svm.score(X_test, y_test))

Output:



WEEK – 10:
10. Write a Python program to construct a Bayesian network considering medical
data. Use this model to demonstrate the diagnosis of heart patients using standard
Heart Disease Data Set.
Aim: To write a Python program to construct a Bayesian network considering medical data.
Use this model to demonstrate the diagnosis of heart patients using standard Heart Disease
Data Set.
Description:
A Bayesian Network is a probabilistic graphical model that represents a set of variables and
their conditional dependencies using a Directed Acyclic Graph (DAG). It is widely used in
medical diagnosis because it can effectively model the uncertain relationships between
symptoms, risk factors, and diseases.
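Before the full network, here is Bayes' rule itself with made-up numbers for a single disease and symptom, to show how evidence updates a prior:

# P(Disease | Symptom) = P(Symptom | Disease) * P(Disease) / P(Symptom)
p_disease = 0.01                   # prior probability of the disease
p_symptom_given_disease = 0.90     # sensitivity (assumed value)
p_symptom_given_healthy = 0.05     # false-positive rate (assumed value)

p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))
posterior = p_symptom_given_disease * p_disease / p_symptom
print(f"P(Disease | Symptom) = {posterior:.3f}")   # about 0.154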
Program:
#!pip install pgmpy
import numpy as np
import pandas as pd
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Generate synthetic dataset
np.random.seed(42)
size = 200
data = pd.DataFrame({
    'Age': np.random.randint(30, 80, size),
    'RestBP': np.random.randint(100, 180, size),
    'Fbs': np.random.randint(0, 2, size),
    'Sex': np.random.randint(0, 2, size),
    'ExAng': np.random.randint(0, 2, size),
    'AHD': np.random.randint(0, 2, size),
    'RestECG': np.random.randint(0, 2, size),
    'Thal': np.random.randint(0, 3, size),
    'Chol': np.random.randint(150, 300, size)
})

# Define Bayesian Network structure
model = BayesianModel([
    ('Age', 'RestBP'), ('Age', 'Fbs'), ('Sex', 'RestBP'),
    ('ExAng', 'RestBP'), ('RestBP', 'AHD'), ('Fbs', 'AHD'),
    ('AHD', 'RestECG'), ('AHD', 'Thal'), ('AHD', 'Chol')
])

# Train model
model.fit(data, estimator=MaximumLikelihoodEstimator)

# Inference
d_infer = VariableElimination(model)
q = d_infer.query(variables=['AHD'], evidence={'Chol': 200})
print(q)

Output:



WEEK – 11:
11. Implement Random Forest using scikit-learn, TensorFlow, and PyTorch.
Aim: To implement Random Forest using scikit-learn, TensorFlow, and PyTorch.
Description:
Random Forest is an ensemble learning technique that builds multiple decision trees and
merges their predictions for better accuracy and stability. It is widely used in classification
and regression problems. It works by creating multiple subsets of the training data through
bootstrap sampling and training a separate decision tree on each subset. During prediction,
the model aggregates the outputs of all decision trees using majority voting for classification
or averaging for regression.
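A minimal sketch of the aggregation step, using hypothetical predictions from three trees rather than a real trained forest:

import numpy as np

# Hypothetical class predictions of three trees for four samples
tree_preds = np.array([[0, 1, 1, 0],
                       [0, 1, 0, 0],
                       [1, 1, 1, 0]])

# Majority vote per sample: the class predicted by most trees wins
votes = np.round(tree_preds.mean(axis=0)).astype(int)
print(votes)   # [0 1 1 0]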
Program:
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Generate a synthetic dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Train a Random Forest model
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)

# Make predictions
y_pred = rf.predict(X_test)

# Evaluate accuracy
print(f"Random Forest Accuracy: {accuracy_score(y_test, y_pred):.4f}")
Output:



WEEK – 12:
12. Implement Boosting algorithms using scikit-learn, TensorFlow, and PyTorch.
Aim: To implement Boosting algorithms using scikit-learn, TensorFlow, and PyTorch.
Description:
Boosting is a powerful ensemble learning technique that improves the accuracy of weak
learners (simple models) by combining multiple models sequentially. It focuses on
misclassified data points, giving them higher importance in subsequent iterations. This results
in a strong model that performs significantly better than individual weak models.
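A minimal sketch of the reweighting idea with made-up labels and predictions (real boosting libraries derive the update from the weak learner's error rate; this only shows the direction of the update):

import numpy as np

y_true = np.array([1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1])   # a weak learner's hypothetical output

weights = np.full(len(y_true), 1 / len(y_true))   # start uniform
miss = (y_true != y_pred)
weights[miss] *= 2.0                 # emphasize the misclassified points
weights /= weights.sum()             # renormalize
print(weights)                       # mistakes now carry more weight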
Program:
import torch
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Generate dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Train Gradient Boosting model (scikit-learn)
gbm = GradientBoostingClassifier(n_estimators=50)
gbm.fit(X_train, y_train)

# Convert the test set to PyTorch tensors (demonstrates interoperability;
# the prediction below still uses the scikit-learn model)
X_test_tensor = torch.tensor(X_test, dtype=torch.float32)
y_test_tensor = torch.tensor(y_test, dtype=torch.float32)

# Predictions
y_pred = gbm.predict(X_test)

# Accuracy
print(f"Gradient Boosting Accuracy: {accuracy_score(y_test, y_pred):.4f}")
Output:
