
Assignment 1

The document outlines a machine learning workflow using TensorFlow and Keras to classify apple quality from numeric fruit features. It covers data preprocessing, model building, training, and evaluation, reaching a test accuracy of approximately 88.5%, and concludes by plotting training and validation accuracy over the epochs.


import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np

# Mount Google Drive so the dataset CSV can be read from it
from google.colab import drive
drive.mount('/content/drive')

Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).

import pandas as pd
from sklearn.model_selection import train_test_split

# Load the dataset and encode the label: 'good' -> 1, 'bad' -> 0
df = pd.read_csv('/content/drive/MyDrive/apple_quality.csv')
mapping = {'good': 1, 'bad': 0}
df['Quality'] = df['Quality'].map(mapping)

# Fill missing values, keep the first 4000 rows, and cast everything to float32
df.fillna(1, inplace=True)
df = df[:4000]
df = df.astype(np.float32)

tensor = tf.convert_to_tensor(df)  # full-frame tensor (not used in the steps below)

# Feature matrix and binary target
numeric_columns = ['Size', 'Weight', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity']
X = df[numeric_columns].values
y = df['Quality'].values

# Hold out 20% of the rows for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

len(X_train)

3200
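The features are fed to the network unscaled. One optional refinement, not used in the original notebook, would be to standardize them using statistics from the training split before fitting the model; a minimal sketch:

from sklearn.preprocessing import StandardScaler

# Optional: standardize features with training-split statistics only
# (illustrative addition; the assignment itself trains on the raw values)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)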

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# A small feed-forward classifier: two hidden ReLU layers and a sigmoid output
model = Sequential()
model.add(Dense(units=64, activation='relu', input_dim=X_train.shape[1]))
model.add(Dense(units=20, activation='relu'))
model.add(Dense(units=1, activation='sigmoid'))

# Binary cross-entropy matches the single sigmoid output; track accuracy during training
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
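As an optional sanity check, not shown in the original notebook, the layer shapes and parameter counts can be printed before training:

# With the 6 numeric features above: 6*64+64 = 448, 64*20+20 = 1300, 20*1+1 = 21,
# i.e. 1,769 trainable parameters in total
model.summary()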

history = model.fit(X_train, y_train, epochs=40, batch_size=32, validation_split=0.2)

Epoch 1/40
80/80 [==============================] - 1s 7ms/step - loss: 0.5541 - accuracy: 0.7266 - val_loss: 0.4837 - val_accuracy: 0.7750
Epoch 2/40
80/80 [==============================] - 0s 4ms/step - loss: 0.4482 - accuracy: 0.7906 - val_loss: 0.4256 - val_accuracy: 0.8047
Epoch 3/40
80/80 [==============================] - 0s 4ms/step - loss: 0.4083 - accuracy: 0.8102 - val_loss: 0.3998 - val_accuracy: 0.8203
Epoch 4/40
80/80 [==============================] - 0s 4ms/step - loss: 0.3867 - accuracy: 0.8246 - val_loss: 0.3871 - val_accuracy: 0.8234
Epoch 5/40
80/80 [==============================] - 0s 4ms/step - loss: 0.3718 - accuracy: 0.8309 - val_loss: 0.3796 - val_accuracy: 0.8281
Epoch 6/40
80/80 [==============================] - 0s 4ms/step - loss: 0.3622 - accuracy: 0.8305 - val_loss: 0.3670 - val_accuracy: 0.8406
Epoch 7/40
80/80 [==============================] - 0s 4ms/step - loss: 0.3527 - accuracy: 0.8383 - val_loss: 0.3713 - val_accuracy: 0.8219
Epoch 8/40
80/80 [==============================] - 0s 4ms/step - loss: 0.3478 - accuracy: 0.8398 - val_loss: 0.3653 - val_accuracy: 0.8297
Epoch 9/40
80/80 [==============================] - 0s 4ms/step - loss: 0.3427 - accuracy: 0.8398 - val_loss: 0.3656 - val_accuracy: 0.8281
Epoch 10/40
80/80 [==============================] - 0s 4ms/step - loss: 0.3376 - accuracy: 0.8402 - val_loss: 0.3546 - val_accuracy: 0.8328
Epoch 11/40
80/80 [==============================] - 0s 4ms/step - loss: 0.3334 - accuracy: 0.8449 - val_loss: 0.3530 - val_accuracy: 0.8203
Epoch 12/40
80/80 [==============================] - 0s 4ms/step - loss: 0.3334 - accuracy: 0.8445 - val_loss: 0.3523 - val_accuracy: 0.8344
Epoch 13/40
80/80 [==============================] - 0s 4ms/step - loss: 0.3260 - accuracy: 0.8496 - val_loss: 0.3507 - val_accuracy: 0.8344
Epoch 14/40
80/80 [==============================] - 0s 4ms/step - loss: 0.3217 - accuracy: 0.8508 - val_loss: 0.3523 - val_accuracy: 0.8266
Epoch 15/40
80/80 [==============================] - 0s 4ms/step - loss: 0.3199 - accuracy: 0.8512 - val_loss: 0.3477 - val_accuracy: 0.8266
Epoch 16/40
80/80 [==============================] - 0s 3ms/step - loss: 0.3156 - accuracy: 0.8578 - val_loss: 0.3512 - val_accuracy: 0.8250
Epoch 17/40
80/80 [==============================] - 0s 3ms/step - loss: 0.3135 - accuracy: 0.8578 - val_loss: 0.3536 - val_accuracy: 0.8250
Epoch 18/40
80/80 [==============================] - 0s 3ms/step - loss: 0.3114 - accuracy: 0.8605 - val_loss: 0.3482 - val_accuracy: 0.8297
Epoch 19/40
80/80 [==============================] - 0s 3ms/step - loss: 0.3075 - accuracy: 0.8609 - val_loss: 0.3427 - val_accuracy: 0.8313
Epoch 20/40
80/80 [==============================] - 0s 3ms/step - loss: 0.3050 - accuracy: 0.8648 - val_loss: 0.3448 - val_accuracy: 0.8297
Epoch 21/40
80/80 [==============================] - 0s 2ms/step - loss: 0.3050 - accuracy: 0.8668 - val_loss: 0.3414 - val_accuracy: 0.8203
Epoch 22/40
80/80 [==============================] - 0s 2ms/step - loss: 0.2997 - accuracy: 0.8684 - val_loss: 0.3459 - val_accuracy: 0.8219
Epoch 23/40
80/80 [==============================] - 0s 2ms/step - loss: 0.2979 - accuracy: 0.8691 - val_loss: 0.3457 - val_accuracy: 0.8297
Epoch 24/40
80/80 [==============================] - 0s 2ms/step - loss: 0.2983 - accuracy: 0.8687 - val_loss: 0.3515 - val_accuracy: 0.8297
Epoch 25/40
80/80 [==============================] - 0s 3ms/step - loss: 0.2946 - accuracy: 0.8711 - val_loss: 0.3367 - val_accuracy: 0.8297
Epoch 26/40
80/80 [==============================] - 0s 3ms/step - loss: 0.2926 - accuracy: 0.8695 - val_loss: 0.3420 - val_accuracy: 0.8250
Epoch 27/40
80/80 [==============================] - 0s 3ms/step - loss: 0.2890 - accuracy: 0.8664 - val_loss: 0.3311 - val_accuracy: 0.8344
Epoch 28/40
80/80 [==============================] - 0s 3ms/step - loss: 0.2884 - accuracy: 0.8746 - val_loss: 0.3315 - val_accuracy: 0.8313
Epoch 29/40
80/80 [==============================] - 0s 3ms/step - loss: 0.2881 - accuracy: 0.8723 - val_loss: 0.3388 - val_accuracy: 0.8328
Epoch 30/40

# Evaluate on the held-out 20% test split
test_loss, test_accuracy = model.evaluate(X_test, y_test)
print(f'Test Accuracy: {test_accuracy}')

25/25 [==============================] - 0s 1ms/step - loss: 0.2708 - accuracy: 0.8850


Test Accuracy: 0.8849999904632568
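Overall accuracy can be complemented with per-class precision and recall by thresholding the sigmoid outputs at 0.5; this extra step is not in the original notebook and is included only as a sketch:

from sklearn.metrics import classification_report

# Sigmoid probabilities converted to hard 0/1 predictions
y_prob = model.predict(X_test)
y_pred = (y_prob.ravel() > 0.5).astype(int)
print(classification_report(y_test.astype(int), y_pred, target_names=['bad', 'good']))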

import matplotlib.pyplot as plt

# Plot training vs. validation accuracy recorded in the History object
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
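The same History object also records the loss values, so a companion loss plot (not part of the original notebook) can be drawn the same way:

# Plot training vs. validation loss from the same training run
plt.figure()
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()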
