
PRACTICAL FILE

SESSION: 2023-24

Advanced Deep Learning LAB


(AIML308P)

III Year, VI Sem

Submitted to:                                  Submitted by:

Name: Mr. Shubhankit Sudhakar                  Name: Kanishk Mishra
Designation: Assistant Professor               Enrollment No.: 01618011621

Department of Artificial Intelligence


Delhi Technical Campus, Greater Noida
INDEX

S.NO.  PROGRAM NAME                                                              DATE OF EXPERIMENT   DATE OF SUBMISSION   SIGN.
1      Implement multilayer perceptron algorithm for MNIST handwritten digit classification
2      Design a neural network for classifying movie reviews (binary classification) using the IMDB dataset
3      Design a neural network for classifying newswires (multi-class classification) using the Reuters dataset
4      Design a neural network for predicting house prices using the Boston Housing price dataset
5      Build a convolutional neural network for MNIST handwritten digit classification
6      Build a convolutional neural network for simple image (dogs and cats) classification
7      Use a pre-trained convolutional neural network (VGG-16) for image classification
8      Implement one-hot encoding of words or characters
9      Implement word embeddings for the IMDB dataset
10     Implement an RNN for the IMDB movie review classification problem
11     Image classification: build a deep learning model that can classify images into different categories, such as animals, cars, or buildings
EXPERIMENT 1

Aim: Implement multilayer perceptron algorithm for MNIST hand written Digit
Classification.

Theory:

● Multilayer Perceptron (MLP): MLP is a class of feedforward artificial neural


network. It consists of at least three layers of nodes: an input layer, a hidden layer, and
an output layer. Except for the input nodes, each node is a neuron that uses a nonlinear
activation function. MLP utilizes a supervised learning technique called
backpropagation for training.
● MNIST Dataset: The MNIST database (Modified National Institute of Standards and
Technology database) is a large database of handwritten digits that is commonly used
for training various image processing systems. The database is also widely used for
training and testing in the field of machine learning.

Solution:
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.utils import to_categorical

# Load MNIST dataset


(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Normalize the input images


x_train = x_train / 255.0
x_test = x_test / 255.0

# Flatten the images


x_train = x_train.reshape(-1, 28 * 28)
x_test = x_test.reshape(-1, 28 * 28)

# Convert labels to one-hot encoded vectors


y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# Build the MLP model


model = Sequential([
Dense(128, activation='relu', input_shape=(784,)),
Dense(64, activation='relu'),
Dense(10, activation='softmax')
])

# Compile the model


model.compile(optimizer='adam', loss='categorical_crossentropy',
metrics=['accuracy'])

# Train the model


model.fit(x_train, y_train, epochs=10, batch_size=32,
validation_data=(x_test, y_test))

# Evaluate the model


test_loss, test_accuracy = model.evaluate(x_test, y_test)
print(f'Test loss: {test_loss}')
print(f'Test accuracy: {test_accuracy}')
import matplotlib.pyplot as plt

# Train the model and collect training history


history = model.fit(x_train, y_train, epochs=10, batch_size=32,
validation_data=(x_test, y_test))

# Plot training history: loss


plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Loss Curves')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

# Plot training history: accuracy


plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Accuracy Curves')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
Output:
Epoch 7/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - accuracy:
0.9966 - loss: 0.0110 - val_accuracy: 0.9775 - val_loss: 0.1375
Epoch 8/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - accuracy:
0.9971 - loss: 0.0092 - val_accuracy: 0.9753 - val_loss: 0.1381
Epoch 9/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - accuracy:
0.9972 - loss: 0.0087 - val_accuracy: 0.9772 - val_loss: 0.1359
Epoch 10/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - accuracy:
0.9967 - loss: 0.0099 - val_accuracy: 0.9797 - val_loss: 0.1269
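
To sanity-check the trained network on a single example, a minimal prediction sketch (assuming the model and the preprocessed x_test/y_test from the solution above are still in memory):

import numpy as np

sample = x_test[:1]                    # one flattened test image, shape (1, 784)
probs = model.predict(sample)          # softmax probabilities, shape (1, 10)
print('Predicted digit:', np.argmax(probs, axis=1)[0])
print('True digit:', np.argmax(y_test[:1], axis=1)[0])   # y_test is one-hot encoded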
EXPERIMENT 2

Aim: Design a neural network for classifying movie reviews (Binary Classification) using
IMDB dataset.
Theory:

● Neural Network: Neural networks are a set of algorithms, modeled loosely after the
human brain, that are designed to recognize patterns. They interpret sensory data through
a kind of machine perception, labeling or clustering raw input.
● Binary Classification: Binary or binomial classification is the task of classifying the
elements of a given set into two groups on the basis of a classification rule.
● IMDB Dataset: The IMDB dataset is a set of 50,000 highly polarized reviews from the
Internet Movie Database. They are split into 25,000 reviews for training and 25,000
reviews for testing, each set consisting of 50% negative and 50% positive reviews.

Solution:
import numpy as np
from tensorflow.keras.datasets import imdb
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, Flatten
from tensorflow.keras.preprocessing import sequence

# Set the parameters for loading the dataset


max_features = 10000 # Number of words to consider as features
maxlen = 500  # Maximum sequence length (cut texts off after this number of words)

# Load the IMDB dataset


(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)

# Preprocess the data (pad sequences to ensure uniform length)


x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)

# Define the model architecture


model = Sequential([
Embedding(max_features, 32, input_length=maxlen),
Flatten(),
Dense(64, activation='relu'),
Dense(1, activation='sigmoid')
])

# Compile the model


model.compile(optimizer='adam', loss='binary_crossentropy',
metrics=['accuracy'])

# Train the model


model.fit(x_train, y_train, epochs=5, batch_size=32,
validation_data=(x_test, y_test))

# Evaluate the model


test_loss, test_accuracy = model.evaluate(x_test, y_test)
print(f'Test loss: {test_loss}')
print(f'Test accuracy: {test_accuracy}')
import matplotlib.pyplot as plt

# Train the model and collect training history


history = model.fit(x_train, y_train, epochs=10, batch_size=32,
validation_data=(x_test, y_test))

# Plot training history: loss


plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Loss Curves')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

# Plot training history: accuracy


plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Accuracy Curves')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

Output:
Epoch 8/10
782/782 ━━━━━━━━━━━━━━━━━━━━ 6s 8ms/step - accuracy:
1.0000 - loss: 5.8364e-06 - val_accuracy: 0.8632 - val_loss: 0.8073
Epoch 9/10
782/782 ━━━━━━━━━━━━━━━━━━━━ 6s 8ms/step - accuracy:
1.0000 - loss: 3.8537e-06 - val_accuracy: 0.8640 - val_loss: 0.8306
Epoch 10/10
782/782 ━━━━━━━━━━━━━━━━━━━━ 7s 9ms/step - accuracy:
1.0000 - loss: 2.3734e-06 - val_accuracy: 0.8638 - val_loss: 0.8557
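
For inspection, a small sketch that decodes one padded review back into words (assuming x_train from the solution above; indices 0-2 are reserved in the Keras IMDB data, so an offset of 3 maps indices back to words):

word_index = imdb.get_word_index()
reverse_word_index = {index: word for word, index in word_index.items()}
decoded = ' '.join(reverse_word_index.get(i - 3, '?') for i in x_train[0] if i >= 3)
print(decoded[:300])   # first few hundred characters of the decoded review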
EXPERIMENT 3

Aim: Design a neural network for classifying news wires (Multi class classification) using
Reuters dataset.

Theory:

● Neural Network: Neural networks are a set of algorithms, modeled loosely after the
human brain, that are designed to recognize patterns. They interpret sensory data
through a kind of machine perception, labeling or clustering raw input.
● Multi-class Classification: Multi-class or multinomial classification is the problem of
classifying instances into one of three or more classes.
● Reuters Dataset: The Reuters dataset is a set of short newswires and their topics,
published by Reuters in 1986. It’s a simple, widely used toy dataset for text
classification. There are 46 different topics; some topics are more represented than
others, but each topic has at least 10 examples in the training set.

Solution:
import numpy as np
from tensorflow.keras.datasets import reuters
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Embedding, Flatten
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical

# Set the parameters for loading the dataset


max_features = 10000 # Number of words to consider as features
maxlen = 200  # Maximum sequence length (cut texts off after this number of words)
batch_size = 32

# Load the Reuters dataset


(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=max_features, test_split=0.2)

# Preprocess the data (pad sequences to ensure uniform length)


x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)

# One-hot encode the labels


num_classes = np.max(y_train) + 1
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)

# Define the model architecture


model = Sequential([
Embedding(max_features, 128, input_length=maxlen),
Flatten(),
Dense(64, activation='relu'),
Dropout(0.5),
Dense(num_classes, activation='softmax')
])

# Compile the model


model.compile(optimizer='adam', loss='categorical_crossentropy',
metrics=['accuracy'])

# Train the model


model.fit(x_train, y_train, epochs=10, batch_size=batch_size,
validation_data=(x_test, y_test))

# Evaluate the model


test_loss, test_accuracy = model.evaluate(x_test, y_test)
print(f'Test loss: {test_loss}')
print(f'Test accuracy: {test_accuracy}')
import matplotlib.pyplot as plt

# Train the model and collect training history


history = model.fit(x_train, y_train, epochs=10, batch_size=batch_size,
validation_data=(x_test, y_test))

# Plot training history: loss


plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Loss Curves')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

# Plot training history: accuracy


plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Accuracy Curves')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

Output:
Epoch 8/10
281/281 ━━━━━━━━━━━━━━━━━━━━ 5s 19ms/step - accuracy:
0.9537 - loss: 0.1214 - val_accuracy: 0.7124 - val_loss: 2.1270
Epoch 9/10
281/281 ━━━━━━━━━━━━━━━━━━━━ 5s 18ms/step - accuracy:
0.9576 - loss: 0.1118 - val_accuracy: 0.7039 - val_loss: 2.1985
Epoch 10/10
281/281 ━━━━━━━━━━━━━━━━━━━━ 5s 19ms/step - accuracy:
0.9544 - loss: 0.1278 - val_accuracy: 0.7079 - val_loss: 2.1972
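
A quick single-sample check (hedged sketch using the model, x_test and y_test from the solution above; since the labels were one-hot encoded, np.argmax recovers the topic index):

pred = model.predict(x_test[:1])                      # shape (1, num_classes)
print('Predicted topic index:', np.argmax(pred, axis=1)[0])
print('True topic index:', np.argmax(y_test[0]))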
EXPERIMENT 4

Aim: Design a neural network for predicting house price using Boston Housing price
dataset.

Theory:

● Neural Network: Neural networks are a set of algorithms, modeled loosely after the
human brain, that are designed to recognize patterns. They interpret sensory data
through a kind of machine perception, labeling or clustering raw input.
● Boston Housing Price Dataset: The Boston Housing dataset is derived from
information collected by the U.S. Census Service concerning housing in the area of
Boston, Massachusetts. It was obtained from the StatLib archive and has been used
extensively throughout the literature to benchmark algorithms.

Solution:
import numpy as np
from tensorflow.keras.datasets import boston_housing
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt

# Load the Boston Housing dataset


(x_train, y_train), (x_test, y_test) = boston_housing.load_data()

# Standardize the input features


scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train)
x_test_scaled = scaler.transform(x_test)

# Define the model architecture


model = Sequential([
Dense(64, activation='relu',
input_shape=(x_train_scaled.shape[1],)),
Dropout(0.5),
Dense(32, activation='relu'),
Dropout(0.5),
Dense(1)
])
# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')

# Train the model


history = model.fit(x_train_scaled, y_train, epochs=100, batch_size=32,
validation_split=0.2, verbose=0)

# Evaluate the model


test_loss = model.evaluate(x_test_scaled, y_test)
print(f'Test loss: {test_loss}')

# Make predictions
predictions = model.predict(x_test_scaled)

# Plot training history: loss


plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Loss Curves')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

Output:
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 20.9702
Test loss: 24.972816467285156
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step
array([[ 8.239275 ], [15.192989 ], [18.558802 ], [29.45781  ],
       [23.854698 ], [12.648457 ], [23.888613 ], [19.909472 ], [17.175808 ],
       [16.581171 ], [15.793694 ], [16.692366 ], [15.752364 ], [37.596016 ],
       [14.654897 ], [17.384644 ], [22.42454  ], [18.738693 ], [15.301411 ],
       [24.503408 ], [10.921483 ], [12.033248 ], [17.442163 ], [11.698573 ], ...])
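
Because mean squared error is hard to interpret directly, a short sketch (using the predictions and y_test already computed above) that reports the mean absolute error in the dataset's units of thousands of dollars:

mae = np.mean(np.abs(predictions.flatten() - y_test))
print(f'Test MAE: {mae:.2f} (thousands of dollars)')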
EXPERIMENT 5

Aim: Build a Convolutional Neural Network for MNIST Hand written digit classification.

Theory:

● Convolutional Neural Network (CNN): CNNs are a class of deep learning models,
most commonly applied to analyzing visual imagery. They are designed to
automatically and adaptively learn spatial hierarchies of features from the training
data.
● MNIST Dataset: The MNIST database (Modified National Institute of Standards and
Technology database) is a large database of handwritten digits that is commonly used
for training various image processing systems. The database is also widely used for
training and testing in the field of machine learning.

Solution:
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sn
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import math
import datetime
import platform

# Input data files are available in the read-only "../input/" directory


# For example, running this (by clicking Run or pressing Shift+Enter) will list all files under the input directory

import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
train = pd.read_csv('/kaggle/input/digit-recognizer/train.csv')
test = pd.read_csv('/kaggle/input/digit-recognizer/test.csv')
train.head()
train.info(), train.shape
test.info(), test.shape
X = train.iloc[:, 1:785]
y = train.iloc[:, 0]

X_test = test.iloc[:, 0:784]


# WARNING: running t-SNE on the full data set takes a while.
X_tsn = X/255

from sklearn.manifold import TSNE


tsne = TSNE()

tsne_res = tsne.fit_transform(X_tsn)
plt.figure(figsize=(14, 12))
plt.scatter(tsne_res[:,0], tsne_res[:,1], c=y, s=2)
plt.xticks([])
plt.yticks([])
plt.colorbar();
from sklearn.model_selection import train_test_split
X_train, X_validation, y_train, y_validation = train_test_split(X, y,
test_size = 0.2,random_state = 1212)
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_validation:', X_validation.shape)
print('y_validation:', y_validation.shape)
x_train_re = X_train.to_numpy().reshape(33600, 28, 28)
y_train_re = y_train.values
x_validation_re = X_validation.to_numpy().reshape(8400, 28, 28)
y_validation_re = y_validation.values
x_test_re = test.to_numpy().reshape(28000, 28, 28)
print('x_train:', x_train_re.shape)
print('y_train:', y_train_re.shape)
print('x_validation:', x_validation_re.shape)
print('y_validation:', y_validation_re.shape)
print('x_test:', x_test_re.shape)
# Save image parameters to constants that we will use later for data re-shaping and model training.
(_, IMAGE_WIDTH, IMAGE_HEIGHT) = x_train_re.shape
IMAGE_CHANNELS = 1

print('IMAGE_WIDTH:', IMAGE_WIDTH);
print('IMAGE_HEIGHT:', IMAGE_HEIGHT);
print('IMAGE_CHANNELS:', IMAGE_CHANNELS);
pd.DataFrame(x_train_re[0])
plt.imshow(x_train_re[0], cmap=plt.cm.binary)
plt.show()
numbers_to_display = 100
num_cells = math.ceil(math.sqrt(numbers_to_display))
plt.figure(figsize=(20,20))
for i in range(numbers_to_display):
plt.subplot(num_cells, num_cells, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(x_train_re[i], cmap=plt.cm.binary)
plt.xlabel(y_train_re[i])
plt.show()
x_train_with_chanels = x_train_re.reshape(
x_train_re.shape[0],
IMAGE_WIDTH,
IMAGE_HEIGHT,
IMAGE_CHANNELS
)

x_validation_with_chanels = x_validation_re.reshape(
x_validation_re.shape[0],
IMAGE_WIDTH,
IMAGE_HEIGHT,
IMAGE_CHANNELS
)

x_test_with_chanels = x_test_re.reshape(
x_test_re.shape[0],
IMAGE_WIDTH,
IMAGE_HEIGHT,
IMAGE_CHANNELS
)
print('x_train_with_chanels:', x_train_with_chanels.shape)
print('x_validation_with_chanels:', x_validation_with_chanels.shape)
print('x_test_with_chanels:', x_test_with_chanels.shape)
x_train_normalized = x_train_with_chanels / 255
x_validation_normalized = x_validation_with_chanels / 255
x_test_normalized = x_test_with_chanels / 255
# Let's check just one row of the 0th image to see the color channel values after normalization.
x_train_normalized[0][10]
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Convolution2D(
input_shape=(IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS),
kernel_size=5,
filters=8,
strides=1,
activation=tf.keras.activations.relu,
kernel_initializer=tf.keras.initializers.VarianceScaling()
))

model.add(tf.keras.layers.MaxPooling2D(
pool_size=(2, 2),
strides=(2, 2)
))

model.add(tf.keras.layers.Convolution2D(
kernel_size=5,
filters=16,
strides=1,
activation=tf.keras.activations.relu,
kernel_initializer=tf.keras.initializers.VarianceScaling()
))

model.add(tf.keras.layers.MaxPooling2D(
pool_size=(2, 2),
strides=(2, 2)
))

model.add(tf.keras.layers.Flatten())

model.add(tf.keras.layers.Dense(
units=128,
activation=tf.keras.activations.relu
));

model.add(tf.keras.layers.Dropout(0.2))

model.add(tf.keras.layers.Dense(
units=10,
activation=tf.keras.activations.softmax,
kernel_initializer=tf.keras.initializers.VarianceScaling()
))
model.summary()
tf.keras.utils.plot_model(
model,
show_shapes=True,
show_layer_names=True,
)
adam_optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

model.compile(
optimizer=adam_optimizer,
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy']
)
log_dir=".logs/fit/" +
datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback =
tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)

training_history = model.fit(
x_train_normalized,
y_train_re,
epochs=10,
validation_data=(x_validation_normalized, y_validation_re),
callbacks=[tensorboard_callback]
)

print("The model has successfully trained")


plt.xlabel('Epoch Number')
plt.ylabel('Loss')
plt.plot(training_history.history['loss'], label='training set')
plt.plot(training_history.history['val_loss'], label='validation set')
plt.legend()
plt.xlabel('Epoch Number')
plt.ylabel('Accuracy')
plt.plot(training_history.history['accuracy'], label='training set')
plt.plot(training_history.history['val_accuracy'], label='validation set')
plt.legend()
%%capture
train_loss, train_accuracy = model.evaluate(x_train_normalized,
y_train_re)
print('Train loss: ', train_loss)
print('Train accuracy: ', train_accuracy)
%%capture
validation_loss, validation_accuracy = model.evaluate(x_validation_normalized, y_validation_re)
print('Validation loss: ', validation_loss)
print('Validation accuracy: ', validation_accuracy)
model_name = 'digits_recognition_cnn.h5'
model.save(model_name, save_format='h5')
loaded_model = tf.keras.models.load_model(model_name)
predictions_one_hot = loaded_model.predict([x_validation_normalized])
print('predictions_one_hot:', predictions_one_hot.shape)
# Predictions in form of one-hot vectors (arrays of probabilities).
pd.DataFrame(predictions_one_hot)
# Let's extract the predictions with the highest probabilities and see which digits were actually recognized.
predictions = np.argmax(predictions_one_hot, axis=1)
pd.DataFrame(predictions)
plt.imshow(x_validation_normalized[0].reshape((IMAGE_WIDTH,
IMAGE_HEIGHT)), cmap=plt.cm.binary)
plt.show()
numbers_to_display = 196
num_cells = math.ceil(math.sqrt(numbers_to_display))
plt.figure(figsize=(15, 15))

for plot_index in range(numbers_to_display):


predicted_label = predictions[plot_index]
plt.xticks([])
plt.yticks([])
plt.grid(False)
color_map = 'Greens' if predicted_label == y_validation_re[plot_index] else 'Reds'
plt.subplot(num_cells, num_cells, plot_index + 1)

plt.imshow(x_validation_normalized[plot_index].reshape((IMAGE_WIDTH,
IMAGE_HEIGHT)), cmap=color_map)
plt.xlabel(predicted_label)

plt.subplots_adjust(hspace=1, wspace=0.5)
plt.show()
confusion_matrix = tf.math.confusion_matrix(y_validation_re,
predictions)
f, ax = plt.subplots(figsize=(9, 7))
sn.heatmap(
confusion_matrix,
annot=True,
linewidths=.5,
fmt="d",
square=True,
ax=ax
)
plt.show()
predictions_one_hot = loaded_model.predict([x_test_normalized])
print('predictions_one_hot:', predictions_one_hot.shape)
pd.DataFrame(predictions_one_hot)
plt.imshow(x_test_normalized[0].reshape((IMAGE_WIDTH, IMAGE_HEIGHT)),
cmap=plt.cm.binary)
plt.show()
test_pred = pd.DataFrame( loaded_model.predict([x_test_normalized]))
test_pred = pd.DataFrame(test_pred.idxmax(axis = 1))
test_pred.index.name = 'ImageId'
test_pred = test_pred.rename(columns = {0: 'Label'}).reset_index()
test_pred['ImageId'] = test_pred['ImageId'] + 1

test_pred.head()
Output:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 24, 24, 8) 208
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 12, 12, 8) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 8, 8, 16) 3216
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 4, 4, 16) 0
_________________________________________________________________
flatten (Flatten) (None, 256) 0
_________________________________________________________________
dense (Dense) (None, 128) 32896
_________________________________________________________________
dropout (Dropout) (None, 128) 0
_________________________________________________________________
dense_1 (Dense) (None, 10) 1290
=================================================================
Total params: 37,610
Trainable params: 37,610
Non-trainable params: 0
_________________________________________________________________

Epoch 7/10
1050/1050 [==============================] - 7s 7ms/step - loss: 0.0332
- accuracy: 0.9891 - val_loss: 0.0505 - val_accuracy: 0.9869
Epoch 8/10
1050/1050 [==============================] - 7s 7ms/step - loss: 0.0272
- accuracy: 0.9911 - val_loss: 0.0467 - val_accuracy: 0.9873
Epoch 9/10
1050/1050 [==============================] - 7s 7ms/step - loss: 0.0246
- accuracy: 0.9920 - val_loss: 0.0493 - val_accuracy: 0.9869
Epoch 10/10
1050/1050 [==============================] - 7s 7ms/step - loss: 0.0218
- accuracy: 0.9925 - val_loss: 0.0462 - val_accuracy: 0.9882
The model has successfully trained
Train loss: 0.009457636624574661
Train accuracy: 0.997083306312561
Validation loss: 0.04620610177516937
Validation accuracy: 0.9882143139839172
EXPERIMENT 6

Aim: Build a convolutional neural network for simple image (dogs and cats)
Classification.

Theory:

● Convolutional Neural Network (CNN): CNNs are a class of deep learning models,
most commonly applied to analyzing visual imagery. They are designed to
automatically and adaptively learn spatial hierarchies of features from the training
data.
● Image Classification: Image classification is the process of predicting the class of an
input image. In this case, the classes are ‘dog’ and ‘cat’.

Solution:
import os
import keras
import tensorflow as tf
import random

from keras.models import Sequential


from keras.layers import Dense, Conv2D, MaxPool2D, Flatten, Input, Rescaling
from keras.preprocessing import image_dataset_from_directory

import numpy as np
from os import path
import shutil

import matplotlib.pyplot as plt


from PIL import Image
from zipfile import ZipFile

data_path = 'train.zip'

with ZipFile(data_path, 'r') as zip:


zip.extractall()
print('The data set has been extracted.')
data_path = 'train'
# number of images in the dataset
img_list = os.listdir(data_path)
len(img_list)
# number of dogs and number of cats
cats_images = [img for img in img_list if img.startswith('cat')]
dogs_images = [img for img in img_list if img.startswith('dog')]
len(cats_images), len(dogs_images)
# mix cats and dogs
np.random.shuffle(cats_images)
np.random.shuffle(dogs_images)

# divide into samples


threshold = int(0.8 * len(cats_images))
train_cats, test_cats = np.split(cats_images, [threshold])
train_dogs, test_dogs = np.split(dogs_images, [threshold])
print(len(train_cats), len(test_cats), len(train_dogs), len(test_dogs))
# structure
base_dir = f"train/dogs_vs_cats_dataset"
if not path.exists(base_dir):
os.mkdir(base_dir)

train_dir = path.join(base_dir, 'train')


test_dir = path.join(base_dir, 'test')
for d in [train_dir, test_dir]:
if not path.exists(d):
os.mkdir(d)

train_dog_dir = path.join(train_dir, 'dog')


train_cat_dir = path.join(train_dir, 'cat')
test_dog_dir = path.join(test_dir, "dog")
test_cat_dir = path.join(test_dir, "cat")
for d in [train_dog_dir, train_cat_dir, test_dog_dir, test_cat_dir]:
if not path.exists(d):
os.mkdir(d)
# moving images for training
for i in range(threshold):
src_path_cats = path.join(data_path, train_cats[i])
src_path_dogs = path.join(data_path, train_dogs[i])

dest_path_cats = path.join(train_cat_dir, train_cats[i])


dest_path_dogs = path.join(train_dog_dir, train_dogs[i])

shutil.move(src_path_cats, dest_path_cats)
shutil.move(src_path_dogs, dest_path_dogs)

# moving images for testing


for i in range(threshold, len(cats_images)):
src_path_cats = path.join(data_path, test_cats[i - threshold])
src_path_dogs = path.join(data_path, test_dogs[i - threshold])

dest_path_cats = path.join(test_cat_dir, test_cats[i - threshold])


dest_path_dogs = path.join(test_dog_dir, test_dogs[i - threshold])

shutil.move(src_path_cats, dest_path_cats)
shutil.move(src_path_dogs, dest_path_dogs)

print(f'Number of cats in {train_cat_dir}: ', len(os.listdir(train_cat_dir)))
print(f'Number of cats in {test_cat_dir}: ', len(os.listdir(test_cat_dir)))
print(f'Number of dogs in {train_dog_dir}: ', len(os.listdir(train_dog_dir)))
print(f'Number of dogs in {test_dog_dir}: ', len(os.listdir(test_dog_dir)))
# examples of cat images
fig, ax = plt.subplots(2,4,figsize=(20,12))
for i in range(8):
img_filename = random.choice(train_cats)
img_path = os.path.join(train_cat_dir, img_filename)
img = Image.open(img_path)
ax[i//4, i%4].imshow(img)
ax[i//4, i%4].axis('on')
# Example of Dog Images
fig, ax = plt.subplots(2,4,figsize=(20,12))
for i in range(8):
img_filename = random.choice(train_dogs)
img_path = os.path.join(train_dog_dir, img_filename)
img = Image.open(img_path)
ax[i//4, i%4].imshow(img)
ax[i//4, i%4].axis('on')
BATCH_SIZE = 64
WIDTH = 224
HEIGHT = 224
CHANNELS = 3

# training data generator


train_ds = image_dataset_from_directory(
train_dir,
labels='inferred',
label_mode='binary',
color_mode='rgb',
image_size=(HEIGHT,WIDTH),
batch_size=BATCH_SIZE
)

# validation data generator


test_ds = image_dataset_from_directory(
test_dir,
labels='inferred',
label_mode='binary',
color_mode='rgb',
image_size=(HEIGHT,WIDTH),
batch_size=BATCH_SIZE
)
model = keras.Sequential(
[
Rescaling(1./255, input_shape=(HEIGHT, WIDTH, CHANNELS)),

Conv2D(64, (3,3), activation='relu'),


MaxPool2D(2,2),

Conv2D(128, (3,3), activation='relu'),


MaxPool2D(2,2),

Conv2D(256, (3,3), activation='relu'),


Conv2D(256, (3,3), activation='relu'),
MaxPool2D(2,2),

Flatten(),
Dense(256, activation='relu'),
Dense(256, activation='relu'),
Dense(1, activation='sigmoid')
]
)
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.prefetch(buffer_size=AUTOTUNE)
test_ds = test_ds.prefetch(buffer_size=AUTOTUNE)

from keras.optimizers import Adam


model.compile(optimizer=Adam(learning_rate=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])

model.summary()
history = model.fit(train_ds, validation_data=test_ds, epochs=30)
plt.plot(history.history["accuracy"])
plt.plot(history.history['val_accuracy'])
plt.plot(history.history['loss'])

plt.title("Model accuracy")
plt.ylabel("Accuracy")
plt.xlabel("era")
plt.legend(["accuracy", "val_accuracy", "loss"])
print(history.history.keys())
plt.show()
print("Training Accuracy: ", model.evaluate(train_ds, verbose=None)[1])
print("Validation Accuracy: ", model.evaluate(test_ds,
verbose=None)[1])
from keras.preprocessing import image

prediction_img_paths = []
prediction_img_paths.extend(path.join(train_cat_dir,
random.choice(train_cats)) for _ in range(4))
prediction_img_paths.extend(path.join(train_dog_dir,
random.choice(train_dogs)) for _ in range(4))

prediction_imgs = []

for img in prediction_img_paths:


img = image.load_img(img, target_size=(224,224))
img = np.asarray(img)
prediction_imgs.append(img)

print(prediction_img_paths)

labels = {0: 'cat', 1: 'dog'}  # class indices follow alphabetical folder order ('cat' before 'dog')

fig, ax = plt.subplots(2,4,figsize=(20,12))

for i, img in enumerate(prediction_imgs):


img = tf.expand_dims(img, 0)
output = model.predict(img)
print(output)
Output:

Model: "sequential"
313/313 ━━━━━━━━━━━━━━━━━━━━ 45s 144ms/step - accuracy:
0.9941 - loss: 0.0168 - val_accuracy: 0.7218 - val_loss: 89.6814
Epoch 29/30
313/313 ━━━━━━━━━━━━━━━━━━━━ 45s 144ms/step - accuracy:
0.9945 - loss: 0.0197 - val_accuracy: 0.7096 - val_loss: 113.2724
Epoch 30/30
313/313 ━━━━━━━━━━━━━━━━━━━━ 45s 142ms/step - accuracy:
0.9965 - loss: 0.0157 - val_accuracy: 0.7038 - val_loss: 255.2161
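
The gap between training accuracy (above 99%) and validation accuracy (around 70%) indicates heavy overfitting. One common remedy is data augmentation; a hedged sketch of augmentation layers that could be placed right after the Rescaling layer (layer names assume a recent Keras version):

from keras.layers import RandomFlip, RandomRotation, RandomZoom

augmentation = keras.Sequential([
    RandomFlip('horizontal'),     # mirror images left/right
    RandomRotation(0.1),          # rotate by up to ~10% of a full turn
    RandomZoom(0.1),              # zoom in/out by up to 10%
])
# In the model definition above, this block would sit between Rescaling(...) and the first Conv2D layer.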
EXPERIMENT 7

Aim: Use a pre-trained convolutional neural network (VGG-16) for image classification.

Theory:

● Convolutional Neural Network (CNN): CNNs are a class of deep learning models,
most commonly applied to analyzing visual imagery. They are designed to
automatically and adaptively learn spatial hierarchies of features from the training
data.
● VGG-16: VGG-16 is a convolutional neural network model proposed by K. Simonyan
and A. Zisserman from the University of Oxford in the paper “Very Deep
Convolutional Networks for Large-Scale Image Recognition”. The model achieves
92.7% top-5 test accuracy in ImageNet, which is a dataset of over 14 million images
belonging to 1000 classes.
● Image Classification: Image classification is the process of predicting the class of an
input image.

Solution:
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input, decode_predictions
import numpy as np

# Load VGG16 model, including pre-trained weights


model = VGG16(weights='imagenet')

# Load an image file to test, resizing it to 224x224 pixels (required by this model)
img = image.load_img('C:\\Users\\shubh\\Downloads\\th.jpeg', target_size=(224, 224))

# Convert the image file to a numpy array


x = image.img_to_array(img)
# Add a fourth dimension (since Keras expects a list of images)
x = np.expand_dims(x, axis=0)

# Normalize the input image's pixel values to the range used when training the neural network
x = preprocess_input(x)

# Run the image through the deep neural network to make a prediction
predictions = model.predict(x)

# Look up the names of the predicted classes


predicted_classes = decode_predictions(predictions, top=9)

import PIL
from PIL import Image
img = Image.open('C:\\Users\\shubh\\Downloads\\th.jpeg')
print(img.show())
print("This is an image of:")

for imagenet_id, name, likelihood in predicted_classes[0]:


print(" - {}: {:2f} likelihood".format(name, likelihood))

Output:

Original Image -

1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 219ms/step

None

This is an image of:

- lakeside: 0.493748 likelihood

- valley: 0.200450 likelihood

- fountain: 0.091733 likelihood


- cliff: 0.056482 likelihood

- dam: 0.045679 likelihood

- castle: 0.034875 likelihood

- promontory: 0.020665 likelihood

- seashore: 0.015780 likelihood

- breakwater: 0.008363 likelihood


EXPERIMENT 8

Aim: Implement one-hot encoding of words or characters.

Theory:

● One-Hot Encoding: One-hot is a group of bits among which the legal combinations of
values are only those with a single high (1) bit and all the others low (0). In natural
language processing, one-hot encoding is a commonly used method to represent words
or characters as vectors where each word or character is represented as a binary vector
with all zeros except for a single one.

Solution:
import numpy as np
samples = ['The cat sat on the mat.', 'The dog ate my homework.']
# Build an index of all tokens in the data.
token_index = {}
for sample in samples:
# Tokenize the samples via the `split` method.
# In real life, we would also strip punctuation and special characters from the samples.
for word in sample.split():
if word not in token_index:
# Assign a unique index to each unique word. Note that we don't attribute index 0 to anything.
token_index[word] = len(token_index) + 1

# Vectorize the samples. We will only consider the first `max_length` words in each sample.
max_length = 10

# This is where we store our results:


results = np.zeros((len(samples), max_length, max(token_index.values())
+ 1))
for i, sample in enumerate(samples):
for j, word in list(enumerate(sample.split()))[:max_length]:
index = token_index.get(word)
results[i, j, index] = 1.
print(results.shape)
print(results[0])
#Character level one-hot encoding (toy example)
import string

samples = ['The cat sat on the mat.', 'The dog ate my homework.']
characters = string.printable # All printable ASCII characters.
token_index = dict(zip(characters, range(1, len(characters) + 1)))

max_length = 50
results = np.zeros((len(samples), max_length, max(token_index.values())
+ 1))
for i, sample in enumerate(samples):
for j, character in enumerate(sample[:max_length]):
index = token_index.get(character)
results[i, j, index] = 1.

print(results.shape)
from tensorflow.keras.preprocessing.text import Tokenizer

samples = ['The cat sat on the mat.', 'The dog ate my homework.']

# Create a tokenizer, configured to only take into account the top 1,000 most common words
tokenizer = Tokenizer(num_words=1000)

# Build the word index


tokenizer.fit_on_texts(samples)
# Turn strings into lists of integer indices.
sequences = tokenizer.texts_to_sequences(samples)
# You could also directly get the one-hot binary representations.
# Note that vectorization modes other than one-hot encoding are supported!
one_hot_results = tokenizer.texts_to_matrix(samples, mode='binary')
# This is how you can recover the word index that was computed
word_index = tokenizer.word_index
print(f"Found {len(word_index)} unique tokens: {word_index}")
samples = ['The cat sat on the mat.', 'The dog ate my homework.']
# Store our words as vectors of size 1000. Note that if you have close to 1000 words (or more),
# you will start seeing many hash collisions, which will decrease the accuracy of this encoding method.
dimensionality = 1000
max_length = 10
results = np.zeros((len(samples), max_length, dimensionality))
for i, sample in enumerate(samples):
for j, word in list(enumerate(sample.split()))[:max_length]:
# Hash the word into a "random" integer index between 0 and 1000
index = abs(hash(word)) % dimensionality
results[i, j, index] = 1.
print(results.shape)

Output:
(2, 10, 11)
[[0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
(2, 50, 101)
Found 9 unique tokens: {'the': 1, 'cat': 2, 'sat': 3, 'on': 4, 'mat':
5, 'dog': 6, 'ate': 7, 'my': 8, 'homework': 9}
(2, 10, 1000)
EXPERIMENT 9

Aim: Implement word embeddings for IMDB dataset.

Theory:

● Word Embeddings: Word embeddings are a type of word representation that allows
words with similar meaning to have a similar representation. They are a distributed
representation for text that is perhaps one of the key breakthroughs for the impressive
performance of deep learning methods on challenging natural language processing
problems.
● IMDB Dataset: The IMDB dataset is a set of 50,000 highly polarized reviews from
the Internet Movie Database. They are split into 25,000 reviews for training and
25,000 reviews for testing, each set consisting of 50% negative and 50% positive
reviews.

Solution:
import numpy as np
import itertools
import os.path
import keras
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout
from keras.layers import Embedding
#from keras.layers.convolutional import Conv1D, MaxPooling1D
from keras.preprocessing import sequence
from sklearn import decomposition
import matplotlib.pyplot as plt

np.random.seed(2018)
vocab_size = 2000
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=vocab_size)
vocab = set(itertools.chain.from_iterable(X_train))
print(len(vocab))
# pad dataset to a maximum review length in words
max_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_length)
y_train = keras.utils.to_categorical(y_train, 2)
y_test = keras.utils.to_categorical(y_test, 2)

# id to word
word_to_id = keras.datasets.imdb.get_word_index()
word_to_id["<PAD>"] = 0
word_to_id["<START>"] = 1
word_to_id["<UNK>"] = 2
id_to_word = {value:key for key,value in word_to_id.items()}

# define the model


model = Sequential()
model.add(Embedding(vocab_size, 100, input_length=max_length))
#model.add(Conv1D(filters=32, kernel_size=8, activation='relu'))
#model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(10, activation='relu'))
model.add(Dense(2, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam',
metrics=['accuracy'])
print(model.summary())

if os.path.exists("model_weights.h5"):
model.load_weights("model_weights.h5")
else:
# fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test),
epochs=4, batch_size=64, verbose=2)
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print('Accuracy: %f' % (accuracy*100))
model.save_weights("model_weights.h5")

# plot
weights = model.layers[0].get_weights()[0]
pca = decomposition.PCA(n_components=2)
pca.fit(weights.T)
fig, ax = plt.subplots()
ax.scatter(pca.components_[0], pca.components_[1])
for i in vocab:
word = id_to_word[i]
ax.annotate(word, (pca.components_[0, i],pca.components_[1, i]))
fig.savefig('embedding.png')
plt.show()

Output:

EXPERIMENT 10
Aim: Implement a RNN for IMDB movie review classification problem.

Theory:

● Recurrent Neural Network (RNN): RNNs are a class of artificial neural networks
where connections between nodes form a directed graph along a temporal sequence,
which allows them to exhibit temporal dynamic behavior. Unlike feedforward neural
networks, RNNs can use their internal state (memory) to process sequences of inputs
(a short NumPy sketch of one recurrence step follows this list).
● IMDB Dataset: The IMDB dataset is a set of 50,000 highly polarized reviews from
the Internet Movie Database. They are split into 25,000 reviews for training and
25,000 reviews for testing, each set consisting of 50% negative and 50% positive
reviews.
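
To make the recurrence concrete, a minimal NumPy sketch of a single SimpleRNN step with toy dimensions (illustrative only; the Keras solution below uses the built-in layer):

import numpy as np

input_dim, hidden_dim = 32, 16
x_t = np.random.randn(input_dim)                     # input at time step t
h_prev = np.zeros(hidden_dim)                        # previous hidden state

W = np.random.randn(hidden_dim, input_dim) * 0.01    # input-to-hidden weights
U = np.random.randn(hidden_dim, hidden_dim) * 0.01   # hidden-to-hidden (recurrent) weights
b = np.zeros(hidden_dim)

# One recurrence step: the new state mixes the current input with the previous state.
h_t = np.tanh(W @ x_t + U @ h_prev + b)
print(h_t.shape)   # (16,)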

Solution:
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

from scipy import stats


from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding
from keras.layers import SimpleRNN,Dense,Activation
(X_train, Y_train), (X_test, Y_test) = imdb.load_data(path="imdb.npz", num_words=None, skip_top=0,
    maxlen=None, start_char=1, seed=13, oov_char=2, index_from=3)
print("Type: ", type(X_train))
print("Type: ", type(Y_train))
#Exploratory Data Analysis(EDA)
print("Y train values: ",np.unique(Y_train))
print("Y test values: ",np.unique(Y_test))
unique,counts = np.unique(Y_train,return_counts=True)
print("Y train distribution: ", dict(zip(unique,counts)))
unique,counts = np.unique(Y_test,return_counts=True)
print("Y test distribution: ", dict(zip(unique,counts)))
print(X_train[0])
review_len_train = []
review_len_test = []
for i,j in zip(X_train,X_test):
review_len_train.append(len(i))
review_len_test.append(len(j))
print("min: ", min(review_len_train), "max: ", max(review_len_train))
print("min: ", min(review_len_test), "max: ", max(review_len_test))
sns.distplot(review_len_train,hist_kws={"alpha":0.3});
sns.distplot(review_len_test,hist_kws={"alpha":0.3});
print("Train mean: ",np.mean(review_len_train))
print("Train median: ",np.median(review_len_train))
print("Train mode: ",stats.mode(review_len_train))
# number of words
word_index = imdb.get_word_index()
print(type(word_index))
print("length of word_index: ",len(word_index))
for keys,values in word_index.items():
if values == 1:
print(keys)
def whatItSay(index=24):
reverse_index = dict([(value,key) for (key,value) in
word_index.items()])
decode_review = " ".join([reverse_index.get(i-3, "!") for i in
X_train[index]])
print(decode_review)
print(Y_train[index])
return decode_review

decoded_review = whatItSay()
decoded_review = whatItSay(5)
#Preprocess
num_words = 15000
(X_train,Y_train),(X_test,Y_test) = imdb.load_data(num_words=num_words)
maxlen=130
X_train = pad_sequences(X_train, maxlen=maxlen)
X_test = pad_sequences(X_test, maxlen=maxlen)
print("X train shape: ",X_train.shape)
print(X_train[5])
for i in X_train[0:10]:
print(len(i))
decoded_review = whatItSay(5)
#Construct RNN Model
rnn = Sequential()
rnn.add(Embedding(num_words, 32, input_length=len(X_train[0])))  # num_words = 15000
rnn.add(SimpleRNN(16, input_shape=(num_words, maxlen), return_sequences=False, activation="relu"))
rnn.add(Dense(1)) #flatten
rnn.add(Activation("sigmoid")) #using sigmoid for binary classification

print(rnn.summary())
rnn.compile(loss="binary_crossentropy", optimizer="rmsprop", metrics=["accuracy"])
history = rnn.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=5, batch_size=128, verbose=1)
score = rnn.evaluate(X_test,Y_test)
print("accuracy:", score[1]*100)
plt.figure()
plt.plot(history.history["accuracy"],label="Train");
plt.plot(history.history["val_accuracy"],label="Test");
plt.title("Accuracy")
plt.ylabel("Accuracy")
plt.xlabel("Epochs")
plt.legend()
plt.show();

plt.figure()
plt.plot(history.history["loss"],label="Train");
plt.plot(history.history["val_loss"],label="Test");
plt.title("Loss")
plt.ylabel("Loss")
plt.xlabel("Epochs")
plt.legend()
plt.show()
Output:
EXPERIMENT 11

Aim: Image Classification: Building a deep learning model that can classify images into
different categories, such as animals, cars, or buildings.

Theory:

● Deep Learning: Deep learning is a machine learning technique that teaches computers
to do what comes naturally to humans: learn by example. Deep learning is a key
technology behind driverless cars, enabling them to recognize a stop sign, or to
distinguish a pedestrian from a lamppost.
● Image Classification: Image classification is the process of predicting the class of an
input image. The classes in this case are ‘animals’, ‘cars’, and ‘buildings’.

Solution:
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, datasets
import matplotlib.pyplot as plt

# Load CIFAR-10 dataset


(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()

# Normalize pixel values to between 0 and 1


train_images, test_images = train_images / 255.0, test_images / 255.0

# Define class names


class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']

# Show some images from the dataset


plt.figure(figsize=(10, 10))
for i in range(25):
plt.subplot(5, 5, i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i][0]])
plt.show()

# Build the deep learning model


model = models.Sequential([
layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32,
3)),
layers.MaxPooling2D((2, 2)),
layers.Conv2D(64, (3, 3), activation='relu'),
layers.MaxPooling2D((2, 2)),
layers.Conv2D(64, (3, 3), activation='relu'),
layers.Flatten(),
layers.Dense(64, activation='relu'),
layers.Dense(10) # Output layer with 10 units (one for each class)
])

# Compile the model


model.compile(optimizer='adam',

loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])

# Train the model


history = model.fit(train_images, train_labels, epochs=10,
validation_data=(test_images, test_labels))

# Evaluate the model


test_loss, test_acc = model.evaluate(test_images, test_labels,
verbose=2)
print('\nTest accuracy:', test_acc)
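
Because the final Dense(10) layer outputs raw logits (the loss is configured with from_logits=True), a small hedged sketch for turning one prediction into class probabilities and a readable label:

logits = model.predict(test_images[:1])
probs = tf.nn.softmax(logits).numpy()[0]
print('Predicted class:', class_names[np.argmax(probs)])
print('Confidence:', probs.max())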
Output:
