DL Lab Experiments 2

dl-lab-experiments

August 11, 2024

1 3. Implement a feed-forward neural network with three hidden layers for classification on the CIFAR-10 dataset
[ ]: import tensorflow as tf
from keras import models, layers
from keras.datasets import cifar10
from keras.utils import to_categorical

[ ]: # Load the CIFAR-10 dataset
(X_train, y_train), (X_test, y_test) = cifar10.load_data()

[ ]: # Normalize pixel values to be between 0 and 1
X_train, X_test = X_train / 255.0, X_test / 255.0

[ ]: print(f'X_train shape: {X_train.shape}\ny_train shape: {y_train.shape}')
print(f'X_test shape: {X_test.shape}\ny_test shape: {y_test.shape}')

X_train shape: (50000, 32, 32, 3)
y_train shape: (50000, 1)
X_test shape: (10000, 32, 32, 3)
y_test shape: (10000, 1)

[ ]: # Convert labels to one-hot encoding
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
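
Each CIFAR-10 label is an integer from 0 to 9; to_categorical expands it into a length-10 one-hot vector. A minimal sketch of the transformation (the label value 3 is just an illustrative choice):

[ ]: # Label 3 becomes the one-hot row [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(to_categorical([3], 10))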

[ ]: print(f'X_train shape: {X_train.shape}\ny_train shape: {y_train.shape}')
print(f'X_test shape: {X_test.shape}\ny_test shape: {y_test.shape}')

X_train shape: (50000, 32, 32, 3)
y_train shape: (50000, 10)
X_test shape: (10000, 32, 32, 3)
y_test shape: (10000, 10)

[ ]: # Define the model
model = models.Sequential()

# Flatten the 32x32x3 input for the fully connected layers
model.add(layers.Flatten(input_shape=(32, 32, 3)))

# Three hidden layers with ReLU activation
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(128, activation='relu'))

# Output layer with softmax activation for 10-class classification
model.add(layers.Dense(10, activation='softmax'))

[ ]: # Compile the model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

[ ]: # Display the model summary
model.summary()

Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) (None, 3072) 0

dense (Dense) (None, 512) 1573376

dense_1 (Dense) (None, 256) 131328

dense_2 (Dense) (None, 128) 32896

dense_3 (Dense) (None, 10) 1290

=================================================================
Total params: 1738890 (6.63 MB)
Trainable params: 1738890 (6.63 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
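
As a sanity check on the summary above, each Dense layer contributes inputs * units + units parameters (weights plus biases); a minimal sketch reproducing the reported totals:

[ ]: # Weights plus biases for each Dense layer: inputs * units + units
layer_params = [3072*512 + 512, 512*256 + 256, 256*128 + 128, 128*10 + 10]
print(layer_params)       # [1573376, 131328, 32896, 1290]
print(sum(layer_params))  # 1738890, matching the summary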

[ ]: # Train the model
history = model.fit(X_train, y_train, epochs=15,
                    validation_data=(X_test, y_test))

Epoch 1/15
1563/1563 [==============================] - 13s 5ms/step - loss: 1.8592 -
accuracy: 0.3278 - val_loss: 1.6996 - val_accuracy: 0.3868
Epoch 2/15
1563/1563 [==============================] - 7s 4ms/step - loss: 1.6749 -
accuracy: 0.3996 - val_loss: 1.6449 - val_accuracy: 0.4069
Epoch 3/15
1563/1563 [==============================] - 7s 5ms/step - loss: 1.5880 -
accuracy: 0.4330 - val_loss: 1.5731 - val_accuracy: 0.4367
Epoch 4/15
1563/1563 [==============================] - 7s 4ms/step - loss: 1.5302 -
accuracy: 0.4517 - val_loss: 1.5233 - val_accuracy: 0.4575
Epoch 5/15
1563/1563 [==============================] - 6s 4ms/step - loss: 1.4919 -
accuracy: 0.4660 - val_loss: 1.5045 - val_accuracy: 0.4708
Epoch 6/15
1563/1563 [==============================] - 7s 5ms/step - loss: 1.4612 -
accuracy: 0.4775 - val_loss: 1.5188 - val_accuracy: 0.4588
Epoch 7/15
1563/1563 [==============================] - 7s 4ms/step - loss: 1.4365 -
accuracy: 0.4868 - val_loss: 1.4675 - val_accuracy: 0.4818
Epoch 8/15
1563/1563 [==============================] - 7s 4ms/step - loss: 1.4100 -
accuracy: 0.4952 - val_loss: 1.4518 - val_accuracy: 0.4887
Epoch 9/15
1563/1563 [==============================] - 7s 4ms/step - loss: 1.3890 -
accuracy: 0.5034 - val_loss: 1.4769 - val_accuracy: 0.4796
Epoch 10/15
1563/1563 [==============================] - 7s 5ms/step - loss: 1.3650 -
accuracy: 0.5095 - val_loss: 1.4687 - val_accuracy: 0.4818
Epoch 11/15
1563/1563 [==============================] - 11s 7ms/step - loss: 1.3455 -
accuracy: 0.5212 - val_loss: 1.4320 - val_accuracy: 0.4926
Epoch 12/15
1563/1563 [==============================] - 9s 6ms/step - loss: 1.3239 -
accuracy: 0.5271 - val_loss: 1.4626 - val_accuracy: 0.4830
Epoch 13/15
1563/1563 [==============================] - 8s 5ms/step - loss: 1.3052 -
accuracy: 0.5326 - val_loss: 1.5024 - val_accuracy: 0.4751
Epoch 14/15
1563/1563 [==============================] - 7s 4ms/step - loss: 1.2937 -
accuracy: 0.5364 - val_loss: 1.4824 - val_accuracy: 0.4759
Epoch 15/15
1563/1563 [==============================] - 6s 4ms/step - loss: 1.2708 -
accuracy: 0.5444 - val_loss: 1.4649 - val_accuracy: 0.4905

[ ]: # Evaluate the model
score = model.evaluate(X_test, y_test)

313/313 [==============================] - 1s 3ms/step - loss: 1.4649 -
accuracy: 0.4905
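
evaluate returns the loss followed by the compiled metrics, so score here is a [loss, accuracy] pair; a small sketch that unpacks it:

[ ]: test_loss, test_acc = score
print(f'Test loss: {test_loss:.4f}, test accuracy: {test_acc:.4f}')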

[ ]: import matplotlib.pyplot as plt
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
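
The same history object also records the losses ('loss' and 'val_loss', since validation_data was passed to fit); a companion sketch plotting them the same way:

[ ]: plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()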

2 4. Analyzing the impact of optimization and weight initialization techniques on neural networks
[ ]: import tensorflow as tf
import numpy as np
from keras import models, layers, optimizers
from keras.datasets import cifar10
from keras.utils import to_categorical

[ ]: (X_train,y_train), (X_test,y_test) = cifar10.load_data()

Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170498071/170498071 [==============================] - 6s 0us/step

[ ]: X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0

[ ]: X_train.shape

[ ]: (50000, 32, 32, 3)

[ ]: y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

[ ]: # Xavier (Glorot) initialization
model1 = models.Sequential()
model1.add(layers.Flatten(input_shape=(32, 32, 3)))
model1.add(layers.Dense(256, activation='relu', kernel_initializer='glorot_uniform'))
model1.add(layers.Dense(256, activation='relu', kernel_initializer='glorot_uniform'))
model1.add(layers.Dense(10, activation='softmax', kernel_initializer='glorot_uniform'))

[ ]: # Kaiming (He) initialization
model2 = models.Sequential()
model2.add(layers.Flatten(input_shape=(32, 32, 3)))
model2.add(layers.Dense(256, activation='relu', kernel_initializer='he_normal'))
model2.add(layers.Dense(128, activation='relu', kernel_initializer='he_normal'))
model2.add(layers.Dense(10, activation='softmax', kernel_initializer='he_normal'))

[ ]: # With a dropout layer
model3 = models.Sequential()
model3.add(layers.Flatten(input_shape=(32, 32, 3)))
model3.add(layers.Dense(256, activation='relu', kernel_initializer='glorot_uniform'))
model3.add(layers.Dropout(0.25))
model3.add(layers.Dense(128, activation='relu'))
model3.add(layers.Dense(10, activation='softmax'))

[ ]: # With batch normalization
model4 = models.Sequential()
model4.add(layers.Flatten(input_shape=(32, 32, 3)))
# Linear Dense layer: batch normalization acts on the pre-activations,
# then ReLU follows, so no activation is set on the Dense layer itself
model4.add(layers.Dense(256))
model4.add(layers.BatchNormalization())
model4.add(layers.Activation('relu'))
model4.add(layers.Dense(10, activation='softmax'))

[ ]: sgd_optimizer = optimizers.SGD(learning_rate=0.01, momentum=0.9)

model1.compile(optimizer=sgd_optimizer,
               loss='categorical_crossentropy',
               metrics=['accuracy'])
print(model1.summary())
Xavier_history = model1.fit(X_train, y_train, epochs=15, batch_size=32,
                            validation_split=0.2)
Xavier_score = model1.evaluate(X_test, y_test, batch_size=32)
print(Xavier_score)

Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) (None, 3072) 0

dense (Dense) (None, 256) 786688

dense_1 (Dense) (None, 256) 65792

dense_2 (Dense) (None, 10) 2570

=================================================================
Total params: 855050 (3.26 MB)
Trainable params: 855050 (3.26 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
None
Epoch 1/15
1250/1250 [==============================] - 11s 5ms/step - loss: 1.8755 -
accuracy: 0.3203 - val_loss: 1.8879 - val_accuracy: 0.3179
Epoch 2/15
1250/1250 [==============================] - 7s 6ms/step - loss: 1.7116 -
accuracy: 0.3821 - val_loss: 1.7003 - val_accuracy: 0.3947
Epoch 3/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.6399 -
accuracy: 0.4110 - val_loss: 1.6479 - val_accuracy: 0.4036
Epoch 4/15
1250/1250 [==============================] - 7s 6ms/step - loss: 1.5987 -
accuracy: 0.4257 - val_loss: 1.6595 - val_accuracy: 0.4075
Epoch 5/15
1250/1250 [==============================] - 6s 5ms/step - loss: 1.5721 -
accuracy: 0.4345 - val_loss: 1.6283 - val_accuracy: 0.4223
Epoch 6/15
1250/1250 [==============================] - 6s 5ms/step - loss: 1.5427 -
accuracy: 0.4479 - val_loss: 1.6270 - val_accuracy: 0.4249
Epoch 7/15
1250/1250 [==============================] - 8s 6ms/step - loss: 1.5200 -
accuracy: 0.4543 - val_loss: 1.6115 - val_accuracy: 0.4251
Epoch 8/15
1250/1250 [==============================] - 8s 6ms/step - loss: 1.5047 -
accuracy: 0.4614 - val_loss: 1.5817 - val_accuracy: 0.4326
Epoch 9/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.4787 -
accuracy: 0.4676 - val_loss: 1.6318 - val_accuracy: 0.4289
Epoch 10/15
1250/1250 [==============================] - 6s 5ms/step - loss: 1.4709 -
accuracy: 0.4707 - val_loss: 1.5859 - val_accuracy: 0.4364
Epoch 11/15
1250/1250 [==============================] - 7s 6ms/step - loss: 1.4467 -
accuracy: 0.4812 - val_loss: 1.6028 - val_accuracy: 0.4405
Epoch 12/15
1250/1250 [==============================] - 6s 5ms/step - loss: 1.4289 -
accuracy: 0.4861 - val_loss: 1.5789 - val_accuracy: 0.4499
Epoch 13/15
1250/1250 [==============================] - 6s 5ms/step - loss: 1.4155 -
accuracy: 0.4927 - val_loss: 1.5831 - val_accuracy: 0.4351
Epoch 14/15
1250/1250 [==============================] - 4s 4ms/step - loss: 1.4029 -
accuracy: 0.4967 - val_loss: 1.5780 - val_accuracy: 0.4483
Epoch 15/15
1250/1250 [==============================] - 6s 5ms/step - loss: 1.3904 -
accuracy: 0.5012 - val_loss: 1.5149 - val_accuracy: 0.4675
313/313 [==============================] - 1s 3ms/step - loss: 1.5040 -
accuracy: 0.4652
[1.5040026903152466, 0.4652000069618225]

[ ]: sgd_optimizer = optimizers.SGD(learning_rate=0.01, momentum=0.9)

model2.compile(optimizer=sgd_optimizer,
               loss='categorical_crossentropy',
               metrics=['accuracy'])
print(model2.summary())
Kaiming_history = model2.fit(X_train, y_train, epochs=15, batch_size=32,
                             validation_split=0.2)
Kaiming_score = model2.evaluate(X_test, y_test, batch_size=128)
print(Kaiming_score)

Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten_1 (Flatten) (None, 3072) 0

dense_3 (Dense) (None, 256) 786688

dense_4 (Dense) (None, 128) 32896

dense_5 (Dense) (None, 10) 1290

=================================================================
Total params: 820874 (3.13 MB)
Trainable params: 820874 (3.13 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
None
Epoch 1/15
1250/1250 [==============================] - 6s 5ms/step - loss: 1.8932 -
accuracy: 0.3129 - val_loss: 1.8377 - val_accuracy: 0.3323
Epoch 2/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.7308 -
accuracy: 0.3795 - val_loss: 1.7861 - val_accuracy: 0.3530
Epoch 3/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.6724 -
accuracy: 0.4001 - val_loss: 1.6722 - val_accuracy: 0.3971
Epoch 4/15
1250/1250 [==============================] - 7s 5ms/step - loss: 1.6274 -
accuracy: 0.4169 - val_loss: 1.6683 - val_accuracy: 0.4111
Epoch 5/15
1250/1250 [==============================] - 6s 4ms/step - loss: 1.5974 -
accuracy: 0.4286 - val_loss: 1.6234 - val_accuracy: 0.4245
Epoch 6/15
1250/1250 [==============================] - 6s 5ms/step - loss: 1.5605 -
accuracy: 0.4411 - val_loss: 1.6466 - val_accuracy: 0.4192
Epoch 7/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.5411 -
accuracy: 0.4503 - val_loss: 1.5860 - val_accuracy: 0.4396
Epoch 8/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.5239 -
accuracy: 0.4536 - val_loss: 1.7091 - val_accuracy: 0.4008
Epoch 9/15
1250/1250 [==============================] - 7s 5ms/step - loss: 1.5152 -
accuracy: 0.4567 - val_loss: 1.5768 - val_accuracy: 0.4461
Epoch 10/15
1250/1250 [==============================] - 10s 8ms/step - loss: 1.4989 -
accuracy: 0.4622 - val_loss: 1.5809 - val_accuracy: 0.4467
Epoch 11/15
1250/1250 [==============================] - 7s 6ms/step - loss: 1.4892 -
accuracy: 0.4690 - val_loss: 1.5822 - val_accuracy: 0.4462
Epoch 12/15
1250/1250 [==============================] - 7s 6ms/step - loss: 1.4619 -
accuracy: 0.4761 - val_loss: 1.5610 - val_accuracy: 0.4498
Epoch 13/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.4540 -
accuracy: 0.4814 - val_loss: 1.5693 - val_accuracy: 0.4469
Epoch 14/15
1250/1250 [==============================] - 6s 5ms/step - loss: 1.4407 -
accuracy: 0.4827 - val_loss: 1.5491 - val_accuracy: 0.4594
Epoch 15/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.4353 -
accuracy: 0.4843 - val_loss: 1.5748 - val_accuracy: 0.4488
79/79 [==============================] - 0s 3ms/step - loss: 1.5467 - accuracy:
0.4579
[1.54667329788208, 0.4578999876976013]

[ ]: sgd_optimizer = optimizers.SGD(learning_rate=0.01, momentum=0.9)

model3.compile(optimizer=sgd_optimizer,
               loss='categorical_crossentropy',
               metrics=['accuracy'])
print(model3.summary())
dropout_history = model3.fit(X_train, y_train, epochs=15, batch_size=32,
                             validation_split=0.2)
dropout_score = model3.evaluate(X_test, y_test, batch_size=128)
print(dropout_score)

Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten_2 (Flatten) (None, 3072) 0

dense_6 (Dense) (None, 256) 786688

dropout (Dropout) (None, 256) 0

dense_7 (Dense) (None, 128) 32896

dense_8 (Dense) (None, 10) 1290

=================================================================
Total params: 820874 (3.13 MB)
Trainable params: 820874 (3.13 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
None
Epoch 1/15
1250/1250 [==============================] - 9s 5ms/step - loss: 2.0002 -
accuracy: 0.2613 - val_loss: 1.8789 - val_accuracy: 0.3180
Epoch 2/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.8879 -
accuracy: 0.3088 - val_loss: 1.8183 - val_accuracy: 0.3398
Epoch 3/15
1250/1250 [==============================] - 7s 6ms/step - loss: 1.8400 -
accuracy: 0.3270 - val_loss: 1.7877 - val_accuracy: 0.3499
Epoch 4/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.7914 -
accuracy: 0.3453 - val_loss: 1.7410 - val_accuracy: 0.3655
Epoch 5/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.7679 -
accuracy: 0.3561 - val_loss: 1.7466 - val_accuracy: 0.3774
Epoch 6/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.7416 -
accuracy: 0.3689 - val_loss: 1.6831 - val_accuracy: 0.3946
Epoch 7/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.7284 -
accuracy: 0.3720 - val_loss: 1.6930 - val_accuracy: 0.3958
Epoch 8/15
1250/1250 [==============================] - 6s 5ms/step - loss: 1.7120 -
accuracy: 0.3805 - val_loss: 1.6957 - val_accuracy: 0.3885
Epoch 9/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.7019 -
accuracy: 0.3849 - val_loss: 1.6806 - val_accuracy: 0.4138
Epoch 10/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.6828 -
accuracy: 0.3924 - val_loss: 1.6229 - val_accuracy: 0.4289
Epoch 11/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.6775 -
accuracy: 0.3933 - val_loss: 1.6534 - val_accuracy: 0.4150
Epoch 12/15
1250/1250 [==============================] - 6s 4ms/step - loss: 1.6607 -
accuracy: 0.3996 - val_loss: 1.6522 - val_accuracy: 0.4134
Epoch 13/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.6433 -
accuracy: 0.4073 - val_loss: 1.6365 - val_accuracy: 0.4357
Epoch 14/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.6384 -
accuracy: 0.4091 - val_loss: 1.6059 - val_accuracy: 0.4261
Epoch 15/15
1250/1250 [==============================] - 5s 4ms/step - loss: 1.6356 -
accuracy: 0.4095 - val_loss: 1.6004 - val_accuracy: 0.4362
79/79 [==============================] - 0s 5ms/step - loss: 1.5753 - accuracy:
0.4425
[1.575345754623413, 0.4424999952316284]

[ ]: sgd_optimizer = optimizers.SGD(learning_rate=0.01, momentum=0.9)

model4.compile(optimizer=sgd_optimizer,
               loss='categorical_crossentropy',
               metrics=['accuracy'])
print(model4.summary())
BN_history = model4.fit(X_train, y_train, epochs=15, batch_size=128,
                        validation_split=0.2)
BN_score = model4.evaluate(X_test, y_test, batch_size=128)
print(BN_score)

Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten_3 (Flatten) (None, 3072) 0

dense_9 (Dense) (None, 256) 786688

batch_normalization (BatchNormalization) (None, 256) 1024

activation (Activation) (None, 256) 0

dense_10 (Dense) (None, 10) 2570

=================================================================
Total params: 790282 (3.01 MB)
Trainable params: 789770 (3.01 MB)
Non-trainable params: 512 (2.00 KB)
_________________________________________________________________
None
Epoch 1/15
313/313 [==============================] - 4s 7ms/step - loss: 1.7435 -
accuracy: 0.3929 - val_loss: 1.9077 - val_accuracy: 0.3212
Epoch 2/15
313/313 [==============================] - 3s 9ms/step - loss: 1.5787 -
accuracy: 0.4527 - val_loss: 2.0264 - val_accuracy: 0.3129
Epoch 3/15
313/313 [==============================] - 2s 8ms/step - loss: 1.4976 -
accuracy: 0.4791 - val_loss: 1.7964 - val_accuracy: 0.3873
Epoch 4/15
313/313 [==============================] - 2s 5ms/step - loss: 1.4604 -
accuracy: 0.4917 - val_loss: 1.7981 - val_accuracy: 0.3686
Epoch 5/15
313/313 [==============================] - 2s 5ms/step - loss: 1.4433 -
accuracy: 0.4959 - val_loss: 1.6776 - val_accuracy: 0.4188
Epoch 6/15
313/313 [==============================] - 2s 5ms/step - loss: 1.4147 -
accuracy: 0.5046 - val_loss: 1.6728 - val_accuracy: 0.4098
Epoch 7/15
313/313 [==============================] - 2s 8ms/step - loss: 1.3822 -
accuracy: 0.5164 - val_loss: 1.6894 - val_accuracy: 0.4140
Epoch 8/15
313/313 [==============================] - 2s 8ms/step - loss: 1.3502 -
accuracy: 0.5282 - val_loss: 1.6909 - val_accuracy: 0.4284
Epoch 9/15
313/313 [==============================] - 4s 11ms/step - loss: 1.3272 -
accuracy: 0.5384 - val_loss: 1.6696 - val_accuracy: 0.4245
Epoch 10/15
313/313 [==============================] - 2s 6ms/step - loss: 1.3183 -
accuracy: 0.5387 - val_loss: 1.7419 - val_accuracy: 0.4188
Epoch 11/15
313/313 [==============================] - 2s 5ms/step - loss: 1.2873 -
accuracy: 0.5481 - val_loss: 1.6096 - val_accuracy: 0.4474
Epoch 12/15
313/313 [==============================] - 2s 5ms/step - loss: 1.2667 -
accuracy: 0.5526 - val_loss: 1.8200 - val_accuracy: 0.4028
Epoch 13/15
313/313 [==============================] - 2s 5ms/step - loss: 1.2532 -
accuracy: 0.5611 - val_loss: 2.0636 - val_accuracy: 0.3672
Epoch 14/15
313/313 [==============================] - 2s 5ms/step - loss: 1.2398 -
accuracy: 0.5660 - val_loss: 1.7374 - val_accuracy: 0.4327
Epoch 15/15
313/313 [==============================] - 2s 6ms/step - loss: 1.2253 -
accuracy: 0.5719 - val_loss: 1.8646 - val_accuracy: 0.3977
79/79 [==============================] - 0s 4ms/step - loss: 1.8478 - accuracy:
0.4013
[1.8477566242218018, 0.40130001306533813]

[ ]: import matplotlib.pyplot as plt

plt.plot(Xavier_history.history['val_accuracy'], label='Xavier Initialization')
plt.plot(Kaiming_history.history['val_accuracy'], label='Kaiming Initialization')
plt.plot(dropout_history.history['val_accuracy'], label='Dropout')
plt.plot(BN_history.history['val_accuracy'], label='Batch Normalization')
plt.xlabel('Epoch')
plt.ylabel('Validation Accuracy')
plt.legend()
plt.show()
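
For a direct comparison of final test performance, the four evaluate results collected above can also be printed side by side; a minimal sketch:

[ ]: results = [('Xavier', Xavier_score), ('Kaiming', Kaiming_score),
           ('Dropout', dropout_score), ('BatchNorm', BN_score)]
for name, score in results:
    print(f'{name:10s} test loss: {score[0]:.4f}  test accuracy: {score[1]:.4f}')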
