
Ex. No. 3: Apply CNN on a computer vision problem (Image Classification)


Aim
To write a program for simple image classification using a Convolutional Neural Network (CNN).

Algorithm
1. Import the necessary libraries: TensorFlow, Keras, Matplotlib, and NumPy.
2. Load the CIFAR-10 dataset into X_train, y_train, X_test, and y_test.
3. Reshape the y_train and y_test labels from 2D to 1D using reshape(-1,).
4. Create a list classes containing the 10 class names (airplane, automobile, bird, etc.).
5. Define a function plot_sample(X, y, index) to visualize a sample image with its label.
6. Normalize the training and testing images by dividing the pixel values by 255.
7. Define a sequential ANN model with a flatten layer, two dense layers with ReLU activation, and an output dense layer with softmax activation.
8. Compile the ANN model using the SGD optimizer, sparse categorical cross-entropy loss, and accuracy as the metric.
9. Train the ANN model on the training data for 5 epochs using ann.fit().
10. Predict the test labels using the ANN model with ann.predict(), and convert the predicted probabilities to class labels using np.argmax().
11. Print the classification report to evaluate the ANN model's performance.
12. Define a sequential CNN model with two convolutional layers, each followed by max pooling, then a flatten layer, a dense layer with ReLU activation, and an output dense layer with softmax activation.
13. Compile the CNN model using the Adam optimizer, sparse categorical cross-entropy loss, and accuracy as the metric.
14. Train the CNN model on the training data for 10 epochs using cnn.fit().
15. Evaluate the CNN model's performance on the test data using cnn.evaluate().
16. Predict the test labels using the CNN model with cnn.predict(), and convert the predicted probabilities to class labels using np.argmax().
17. Plot a sample image from the test set and display its predicted and actual class labels using plot_sample().
Program
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
import numpy as np
#Load the dataset
(X_train, y_train), (X_test,y_test) = datasets.cifar10.load_data()
X_train.shape
X_test.shape
y_train.shape
y_train[:5]
y_train = y_train.reshape(-1,)
y_train[:5]
y_test = y_test.reshape(-1,)
classes = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
#Let's plot some images to see what they are
def plot_sample(X, y, index):
    plt.figure(figsize=(15, 2))
    plt.imshow(X[index])
    plt.xlabel(classes[y[index]])
plot_sample(X_train, y_train, 0)
plot_sample(X_train, y_train, 1)
#Normalizing the training data
X_train = X_train / 255.0
X_test = X_test / 255.0
#Build simple artificial neural network for image classification
ann = models.Sequential([
    layers.Flatten(input_shape=(32, 32, 3)),
    layers.Dense(3000, activation='relu'),
    layers.Dense(1000, activation='relu'),
    layers.Dense(10, activation='softmax')
])
ann.compile(optimizer='SGD',
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])
ann.fit(X_train, y_train, epochs=5)
from sklearn.metrics import confusion_matrix, classification_report
import numpy as np
y_pred = ann.predict(X_test)
y_pred_classes = [np.argmax(element) for element in y_pred]
print("Classification Report: \n", classification_report(y_test, y_pred_classes))
#Now let us build a convolutional neural network to train our images
cnn = models.Sequential([
    layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])
cnn.compile(optimizer='adam',
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])
cnn.fit(X_train, y_train, epochs=10)
cnn.evaluate(X_test,y_test)
y_pred = cnn.predict(X_test)
y_pred[:5]
y_classes = [np.argmax(element) for element in y_pred]
y_classes[:5]
y_test[:5]
plot_sample(X_test, y_test,3)
classes[y_classes[3]]
classes[y_test[3]]
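Note: the program imports confusion_matrix but only prints a classification report for the ANN. As an optional extension (not executed in the recorded output below), the same evaluation can be sketched for the CNN predictions, assuming the y_test and y_classes variables already computed above:

# Optional sketch: classification report for the CNN, mirroring step 11 of the algorithm
print("CNN Classification Report:\n", classification_report(y_test, y_classes))
# Optional sketch: confusion matrix, rows are true classes and columns are predicted classes
print("CNN Confusion Matrix:\n", confusion_matrix(y_test, y_classes))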
Dataset
In this notebook, we classify the small images of the CIFAR-10 dataset from tensorflow.keras.datasets. There are a total of 10 classes, as listed in the classes variable of the program above, and we use a CNN for the classification. A quick way to visualize one sample per class is sketched below.
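The following is a minimal sketch (not part of the recorded program or output) that plots the first training image of each of the 10 classes; it assumes the X_train, y_train, and classes variables defined in the program above:

import numpy as np
import matplotlib.pyplot as plt

# Show the first training image carrying each of the 10 CIFAR-10 labels
plt.figure(figsize=(15, 2))
for class_id, class_name in enumerate(classes):
    idx = np.where(y_train == class_id)[0][0]   # first image with this label
    plt.subplot(1, 10, class_id + 1)
    plt.imshow(X_train[idx])
    plt.title(class_name, fontsize=8)
    plt.axis('off')
plt.show()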

Output
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170498071/170498071 ━━━━━━━━━━━━━━━━━━━━ 3s 0us/step
(50000, 32, 32, 3)
(10000, 32, 32, 3)
(50000, 1)
array([[6],
[9],
[9],
[4],
[1]], dtype=uint8)
array([6, 9, 9, 4, 1], dtype=uint8)

/usr/local/lib/python3.10/dist-packages/keras/src/layers/reshaping/flatten.py:37:
UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer.
When using Sequential models, prefer using an `Input(shape)` object as the first
layer in the model instead.
super().__init__(**kwargs)
Epoch 1/5
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 123s 79ms/step - accuracy: 0.3044 - loss: 1.9267
Epoch 2/5
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 125s 80ms/step - accuracy: 0.4262 - loss: 1.6375
Epoch 3/5
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 174s 111ms/step - accuracy: 0.4513 - loss: 1.5578
Epoch 4/5
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 152s 79ms/step - accuracy: 0.4735 - loss: 1.4973
Epoch 5/5
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 139s 78ms/step - accuracy: 0.4931 - loss: 1.4404
<keras.src.callbacks.history.History at 0x780adc0c44c0>
313/313 ━━━━━━━━━━━━━━━━━━━━ 8s 24ms/step
Classification Report:
               precision    recall  f1-score   support

           0       0.46      0.59      0.52      1000
           1       0.70      0.44      0.54      1000
           2       0.33      0.45      0.38      1000
           3       0.45      0.15      0.22      1000
           4       0.51      0.17      0.25      1000
           5       0.48      0.24      0.32      1000
           6       0.65      0.32      0.43      1000
           7       0.33      0.77      0.46      1000
           8       0.58      0.66      0.62      1000
           9       0.43      0.70      0.54      1000

    accuracy                           0.45     10000
   macro avg       0.49      0.45      0.43     10000
weighted avg       0.49      0.45      0.43     10000
/usr/local/lib/python3.10/dist-packages/keras/src/layers/convolutional/
base_conv.py:107: UserWarning: Do not pass an `input_shape`/`input_dim`
argument to a layer. When using Sequential models, prefer using an
`Input(shape)` object as the first layer in the model instead.
super().__init__(activity_regularizer=activity_regularizer, **kwargs)
Epoch 1/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 68s 42ms/step - accuracy: 0.3870 - loss: 1.6907
Epoch 2/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 78s 40ms/step - accuracy: 0.6074 - loss: 1.1264
Epoch 3/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 84s 41ms/step - accuracy: 0.6607 - loss: 0.9717
Epoch 4/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 80s 40ms/step - accuracy: 0.6945 - loss: 0.8762
Epoch 5/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 81s 39ms/step - accuracy: 0.7257 - loss: 0.7938
Epoch 6/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 83s 40ms/step - accuracy: 0.7459 - loss: 0.7291
Epoch 7/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 62s 39ms/step - accuracy: 0.7566 - loss: 0.6929
Epoch 8/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 80s 39ms/step - accuracy: 0.7783 - loss: 0.6406
Epoch 9/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 64s 41ms/step - accuracy: 0.7906 - loss: 0.5962
Epoch 10/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 59s 38ms/step - accuracy: 0.8067 - loss: 0.5557
<keras.src.callbacks.history.History at 0x780ae1a43c70>
313/313 ━━━━━━━━━━━━━━━━━━━━ 5s 15ms/step - accuracy: 0.7062 - loss: 0.9154
[0.9223742485046387, 0.6991999745368958]
313/313 ━━━━━━━━━━━━━━━━━━━━ 4s 13ms/step
array([[4.2512249e-05, 3.3172386e-05, 5.8466976e-04, 7.0738745e-01,
2.7108756e-06, 2.8956893e-01, 9.1810612e-04, 1.1516520e-06,
1.4545125e-03, 6.6952089e-06],
[4.9608148e-04, 7.6288164e-02, 1.7859613e-06, 1.6527525e-06,
1.4528252e-08, 9.1592893e-09, 1.4008779e-11, 5.3073839e-09,
9.2313397e-01, 7.8228062e-05],
[6.0474116e-02, 7.5931917e-03, 6.5373996e-04, 1.2751481e-03,
7.0194037e-05, 2.8914108e-04, 8.2432043e-06, 4.0714402e-04,
9.2629021e-01, 2.9387930e-03],
[9.1190392e-01, 3.6921064e-04, 1.2633701e-02, 3.8694736e-04,
2.4883706e-02, 1.2769634e-05, 8.2563762e-05, 9.8198061e-06,
4.9633831e-02, 8.3518993e-05],
[7.4620795e-07, 1.2373547e-04, 2.5005124e-03, 1.6088123e-02,
7.1768898e-01, 3.0196949e-03, 2.6041889e-01, 1.2911181e-06,
1.5753969e-04, 4.8948618e-07]], dtype=float32)
[3, 8, 8, 0, 4]
array([3, 8, 8, 0, 6], dtype=uint8)

'airplane'
'airplane'
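Note: the UserWarning messages in the output indicate that recent Keras versions prefer an explicit Input object over passing input_shape to the first layer. A minimal sketch of the same CNN declared in that style (an alternative form, not the program that produced the output above), reusing the layers and models imports from the program:

cnn = models.Sequential([
    layers.Input(shape=(32, 32, 3)),   # explicit Input avoids the input_shape UserWarning
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])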
Result

Thus the program for simple image classification using a Convolutional Neural Network was written and executed successfully.
