The document loads the MNIST dataset and defines an autoencoder model that compresses the input images to a lower-dimensional representation and then reconstructs them. It trains the autoencoder for 50 epochs, records the training and validation loss, encodes and decodes the test images, and visualizes the originals alongside their reconstructions.
Uploaded by VAISHNAVI

import numpy as np

import matplotlib.pyplot as plt


from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.datasets import mnist

# Load the MNIST dataset


(x_train, _), (x_test, _) = mnist.load_data()

# Preprocess the data


x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
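The preprocessing above scales pixel values to [0, 1] and flattens each 28x28 image into a 784-vector. A minimal NumPy-only sketch of the same two steps on a small dummy batch (the array here is synthetic, standing in for MNIST):

```python
import numpy as np

# Dummy batch standing in for MNIST images: values 0-255, shape (4, 28, 28)
batch = np.random.randint(0, 256, size=(4, 28, 28)).astype('float32')

# Same preprocessing as above: scale to [0, 1], then flatten to 784-vectors
batch = batch / 255.0
flat = batch.reshape((len(batch), np.prod(batch.shape[1:])))

print(flat.shape)  # (4, 784)
```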

# Define the autoencoder model


input_img = Input(shape=(784,))
encoded = Dense(32, activation='relu')(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)

autoencoder = Model(input_img, decoded)


autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# Train the autoencoder


autoencoder.fit(x_train, x_train, epochs=50, batch_size=256, shuffle=True,
validation_data=(x_test, x_test))

# Encode and decode the test images


decoded_imgs = autoencoder.predict(x_test)
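Calling `predict` on the full autoencoder yields reconstructions, not the 32-dimensional codes themselves. To inspect the codes, a separate encoder sub-model can be built over the same layers. A minimal sketch (the architecture is rebuilt standalone here for illustration; in the script above, reuse the existing `input_img` and `encoded` tensors, so the encoder shares the trained weights):

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Same architecture as above (untrained in this standalone sketch)
input_img = Input(shape=(784,))
encoded = Dense(32, activation='relu')(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)

# An encoder model that maps images to their 32-dimensional latent codes;
# it shares layers with the autoencoder, so no extra training is needed
encoder = Model(input_img, encoded)

codes = encoder.predict(np.random.rand(5, 784).astype('float32'))
print(codes.shape)  # (5, 32)
```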

# Visualize original and reconstructed images


n = 10 # Number of images to display
plt.figure(figsize=(20, 4))
for i in range(n):
    # Original images
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # Reconstructed images
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

plt.show()

Epoch 1/50
235/235 [==============================] - 5s 14ms/step - loss: 0.2795 - val_loss: 0.1920
Epoch 2/50
235/235 [==============================] - 3s 13ms/step - loss: 0.1743 - val_loss: 0.1583
Epoch 3/50
235/235 [==============================] - 3s 13ms/step - loss: 0.1473 - val_loss: 0.1359
...
Epoch 50/50
235/235 [==============================] - 3s 14ms/step - loss: 0.0926 - val_loss: 0.0915
313/313 [==============================] - 1s 3ms/step
