AngadKumar_21CS012_Pattern Recognition
Project Title:
Building a Medical Image Classification System:
Classifying X-rays and MRIs into 'Normal' and
'Abnormal'.
Step 1: Installing the Required Libraries

pip install tensorflow keras scikit-learn matplotlib opencv-python

Step 2: Loading and Preprocessing the Data
We'll start by loading the dataset.
For this project, you can use a directory structure
that includes two subfolders: 'normal' and 'abnormal,'
each containing labeled images of X-rays or MRIs.
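Before loading anything, it can help to confirm that the folder layout matches this description. The snippet below is a small, hypothetical sanity check (data_dir is the same path used in the loading code that follows); it simply counts the images found in each class folder.

import os

data_dir = 'kaggle/ct-to-mri-cgan'  # Path to your dataset
for category in ['normal', 'abnormal']:
    path = os.path.join(data_dir, category)
    n_images = len(os.listdir(path)) if os.path.isdir(path) else 0
    print(f"{category}: {n_images} images")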
import numpy as np
import cv2
import os
from sklearn.model_selection import train_test_split
!pip install kaggle
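# Hypothetical download step (not shown in the original listing): after installing
# the kaggle CLI, fetch the dataset into the 'kaggle' folder used below; replace
# <owner>/<dataset> with the actual dataset slug and configure your Kaggle API
# token (~/.kaggle/kaggle.json) first.
!kaggle datasets download -d <owner>/<dataset> -p kaggle --unzip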
# Paths
data_dir = 'kaggle/ct-to-mri-cgan'  # Path to your dataset
categories = ['normal', 'abnormal']
img_size = 224  # Image size for resizing
# Load images and preprocess
def load_images(data_dir, categories):
    data = []
    labels = []
    for category in categories:
        path = os.path.join(data_dir, category)
        class_num = categories.index(category)
        for img in os.listdir(path):
            try:
                img_array = cv2.imread(os.path.join(path, img))  # Load image
                img_array = cv2.cvtColor(img_array, cv2.COLOR_BGR2GRAY)  # Convert to grayscale
                resized_array = cv2.resize(img_array, (img_size, img_size))  # Resize image
                data.append(resized_array)
                labels.append(class_num)
            except Exception:
                pass  # Skip unreadable files
    return np.array(data), np.array(labels)

X, y = load_images(data_dir, categories)

# Normalize and reshape images for CNN
X = X / 255.0  # Normalize pixel values
X = X.reshape(-1, img_size, img_size, 1)  # Add channel dimension
X = np.repeat(X, 3, axis=-1)  # Repeat the grayscale channel so the shape matches VGG16's 3-channel input (Step 4)

# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Step 3: Data Augmentation

To make our model more robust, we'll apply data augmentation. This increases the variety of the training data by applying transformations like rotation, shifting, and zooming to the images.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.1,   # shift and zoom ranges are assumptions; the
    height_shift_range=0.1,  # original listing is cut off after rotation_range
    zoom_range=0.1
)

Step 4: Transfer Learning with VGG16

Next, we use VGG16, a powerful pre-trained CNN model, as our feature extractor. We'll replace the top layers of VGG16 with our own custom layers for binary classification ('normal' or 'abnormal').
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D

# Load pre-trained VGG16 model (without top layers)
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(img_size, img_size, 3))

# Add custom classification layers on top of the VGG16 base
# (the original listing is cut off here; the frozen base, layer sizes, and the
# optimizer below are reconstructions/assumptions)
for layer in base_model.layers:
    layer.trainable = False  # Use VGG16 purely as a feature extractor

x = GlobalAveragePooling2D()(base_model.output)
x = Dense(128, activation='relu')(x)
output = Dense(1, activation='sigmoid')(x)  # Binary output: 'normal' vs 'abnormal'
model = Model(inputs=base_model.input, outputs=output)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

Step 5: Training the Model

Once the model is compiled, we proceed to train it using the augmented training data. We also evaluate its performance on the test data to avoid overfitting.

# Train the model
history = model.fit(datagen.flow(X_train, y_train, batch_size=32), validation_data=(X_test, y_test), epochs=10)

# Evaluate model performance on test data
test_loss, test_acc = model.evaluate(X_test, y_test)
print(f"Test Accuracy: {test_acc}")

Step 6: Model Evaluation

After training, we'll check the model's performance by printing a classification report and generating a confusion matrix.

from sklearn.metrics import classification_report, confusion_matrix

# Predict test data (the 0.5 threshold turns sigmoid probabilities into class labels)
y_pred = (model.predict(X_test) > 0.5).astype(int).ravel()

print(classification_report(y_test, y_pred, target_names=categories))
print(confusion_matrix(y_test, y_pred))
We plot the training and validation accuracy and loss over the
epochs to visually inspect the model's performance during
training.
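Since matplotlib is installed in Step 1 and model.fit() above returns a History object, a minimal plotting sketch might look like the following (the figure layout is just one possible choice):

import matplotlib.pyplot as plt

# Plot the accuracy and loss curves stored in the History object returned by model.fit()
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))

ax1.plot(history.history['accuracy'], label='train')
ax1.plot(history.history['val_accuracy'], label='validation')
ax1.set_xlabel('Epoch')
ax1.set_ylabel('Accuracy')
ax1.legend()

ax2.plot(history.history['loss'], label='train')
ax2.plot(history.history['val_loss'], label='validation')
ax2.set_xlabel('Epoch')
ax2.set_ylabel('Loss')
ax2.legend()

plt.tight_layout()
plt.show()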
1. Training Output
During the training process, the model will output key metrics
such as accuracy and loss for both the training and validation
datasets. This information is displayed for each epoch.
Epoch 1/20
32/32 [==============================] - 12s 350ms/step - loss: 0.6152 - accuracy: 0.6710 - val_loss: 0.5784 - val_accuracy: 0.7146
Epoch 2/20
32/32 [==============================] - 9s 280ms/step - loss: 0.5411 - accuracy: 0.7225 - val_loss: ...
...
Epoch 7/20
32/32 [==============================] - 9s 278ms/step - loss: 0.2441 - accuracy: 0.9023 - val_loss: 0.2443 - val_accuracy: 0.9136
...
Epoch 13/20
32/32 [==============================] - 9s 278ms/step - loss: 0.2331 - accuracy: 0.9081 - ...

Training Accuracy: Indicates how well the model is learning the training dataset.
Validation Accuracy: Indicates how well the model generalizes to unseen data (validation data).
Loss: Shows how well the model is minimizing errors for both the training and validation data.

2. Model Evaluation Output

After the model is trained, it will be evaluated on the test dataset to check its performance. This output will include the test accuracy, classification report, and confusion matrix.

Test Accuracy

The overall accuracy score achieved by the model on the test data:

Test Accuracy: 0.923 (or 92.3%)
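The classification report and confusion matrix themselves are not reproduced above. As an optional extra, the confusion matrix from Step 6 can also be rendered as a small heatmap with matplotlib; this is a sketch that assumes y_test and y_pred from the evaluation code in Step 6:

import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

# Visualize the confusion matrix (class 0 = 'normal', class 1 = 'abnormal')
cm = confusion_matrix(y_test, y_pred)
plt.imshow(cm, cmap='Blues')
plt.xticks([0, 1], ['normal', 'abnormal'])
plt.yticks([0, 1], ['normal', 'abnormal'])
plt.xlabel('Predicted label')
plt.ylabel('True label')
for i in range(2):
    for j in range(2):
        plt.text(j, i, cm[i, j], ha='center', va='center')
plt.title('Confusion matrix')
plt.show()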