FA I - Unit5

The document outlines a step-by-step process for developing and deploying AI-based machine vision applications using Python, focusing on image classification tasks such as distinguishing between cats and dogs and detecting plant diseases. It details the necessary libraries, model building, training, evaluation, and deployment processes, including the use of Convolutional Neural Networks (CNNs) and Flask for web deployment. The guide emphasizes the importance of data preprocessing, model tuning, and potential improvements for accuracy and real-time predictions.

FAI Unit V

Development and deployment of AI-based Machine Vision applications using Python

Introduction: Developing a model follows a sequential process that includes various phases. In each phase there are multiple activities to be performed.

Below is the step-by-step process for developing an AI-based machine vision application.

Development Workflow:

Define the Problem and Determine the Goals

Collect and Preprocess the data

Choose a Model

Train the Model

Evaluate the Model

Tune the Model

Deploy and Monitor the Model

Let's take an example and build a model that classifies whether a given image is a cat or a dog.

Project Objective: image classification problem (cats vs. dogs).

Step 1: Install Dependencies

pip install tensorflow numpy matplotlib opencv-python scikit-learn tensorflow-datasets

Libraries and Their Purpose

 tensorflow: Used for deep learning and training the CNN model.
 numpy: Helps in handling numerical computations.
 matplotlib: Used for data visualization.
 opencv-python: Helps in image processing (resizing, reading images, etc.).
 scikit-learn: Provides tools for data preprocessing and evaluation.
 tensorflow-datasets: Contains pre-built datasets (like cats vs. dogs).
Step 2: Import Necessary Libraries

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt

Explanation

 tensorflow as tf: The main deep learning library.


 Sequential: A linear stack of layers for building the model.
 Conv2D: A convolutional layer used for feature extraction from images.
 MaxPooling2D: Reduces spatial dimensions, helping in feature selection.
 Flatten: Converts the 2D image data into a 1D array for the dense layer.
 Dense: A fully connected layer, used for classification.
 Dropout: Prevents overfitting by randomly setting a fraction of inputs to zero.
 ImageDataGenerator: Augments images to improve model generalization.
 matplotlib.pyplot: Used for visualizing training progress.

Step 3: Load and Preprocess Data

import tensorflow_datasets as tfds

# Load dataset
dataset, info = tfds.load("cats_vs_dogs", as_supervised=True, with_info=True)

# Split dataset
train_data, test_data = dataset['train'].take(20000), dataset['train'].skip(20000)

Explanation

 tensorflow_datasets as tfds: Allows access to the pre-built "cats_vs_dogs" dataset.


 tfds.load("cats_vs_dogs"): Downloads the dataset.
 as_supervised=True: Returns dataset with (image, label) pairs.
 with_info=True: Returns dataset details (image shape, classes, etc.).
 train_data.take(20000): Uses the first 20,000 images for training.
 train_data.skip(20000): Uses the remaining images for testing.

Preprocessing (Resizing and Normalizing)

IMG_SIZE = (128, 128)

def preprocess(image, label):
    image = tf.image.resize(image, IMG_SIZE) / 255.0  # Normalize (0-1)
    return image, label

train_data = train_data.map(preprocess).batch(32).shuffle(1000)
test_data = test_data.map(preprocess).batch(32)

Explanation

 IMG_SIZE = (128, 128): Resizes all images to 128×128 pixels.


 tf.image.resize(image, IMG_SIZE): Ensures consistency in image sizes.
 image / 255.0: Normalizes pixel values (0-255) to (0-1) for better learning.
 .map(preprocess): Applies preprocessing to all images.
 .batch(32): Groups images into batches of 32 for training.
 .shuffle(1000): Randomly shuffles the dataset to prevent model bias.
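The effect of the normalization step can be checked in isolation. A minimal sketch with NumPy, using a random 8-bit array as a stand-in for a decoded photo:

```python
import numpy as np

# Fake 8-bit RGB image standing in for a decoded photo (hypothetical data).
img = np.random.randint(0, 256, size=(128, 128, 3), dtype=np.uint8)

# Same normalization as in preprocess(): scale 0-255 down to 0-1.
normalized = img.astype(np.float32) / 255.0

print(normalized.min() >= 0.0 and normalized.max() <= 1.0)  # True
```

Keeping inputs in the 0-1 range keeps gradients well-scaled during training.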

Step 4: Build the CNN Model

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)),
    MaxPooling2D(2, 2),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(2, 2),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(1, activation='sigmoid')  # Binary classification (cats vs dogs)
])

Explanation

 Sequential([]): Builds a neural network layer by layer.


 Conv2D(32, (3,3), activation='relu', input_shape=(128, 128, 3)):

o Uses 32 filters of size 3×3 to detect patterns.


o relu: Activation function that removes negative values.
o input_shape=(128,128,3): Accepts RGB images of 128×128 pixels.

 MaxPooling2D(2,2): Reduces image size (Downsamples 2×2).


 Conv2D(64, (3,3), activation='relu'): Second convolutional layer with 64 filters.
 Flatten(): Converts 2D image features into a 1D vector.
 Dense(128, activation='relu'): Fully connected layer with 128 neurons.
 Dropout(0.5): Randomly deactivates 50% of neurons to prevent overfitting.
 Dense(1, activation='sigmoid'): Outputs 0 or 1 (cat or dog).
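With 'valid' padding (the Keras default), the feature-map sizes after each layer can be computed by hand. A small sketch tracing the shapes through the model above:

```python
def conv_out(size, kernel=3):
    # 'valid' convolution: each spatial dimension shrinks by kernel - 1.
    return size - (kernel - 1)

def pool_out(size, pool=2):
    # Max pooling with stride 2 floors the division.
    return size // pool

s = 128                    # input: 128x128x3
s = pool_out(conv_out(s))  # after Conv2D(32) + MaxPooling2D -> 63
s = pool_out(conv_out(s))  # after Conv2D(64) + MaxPooling2D -> 30
flat = s * s * 64          # Flatten(): 30 * 30 * 64 = 57600 features
print(s, flat)             # 30 57600
```

This is the vector length the Dense(128) layer receives from Flatten().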
Step 5: Compile and Train the Model

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

Explanation

 optimizer='adam': Optimizes weight updates for better learning.


 loss='binary_crossentropy': The appropriate loss for two classes (cats vs. dogs).
 metrics=['accuracy']: Tracks accuracy during training.

history = model.fit(train_data, validation_data=test_data, epochs=5)

 model.fit(): Trains the model.


 epochs=5: Runs training for 5 full passes over the training data.

Step 6: Evaluate the Model

loss, acc = model.evaluate(test_data)
print(f"Test Accuracy: {acc:.2f}")

Explanation

 model.evaluate(test_data): Tests the model on unseen images.


 print(f"Test Accuracy: {acc:.2f}"): Displays the accuracy to 2 decimal places.

Plot Training Performance

plt.plot(history.history['accuracy'], label='train_accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.legend()
plt.show()

 Visualizes how the accuracy changes over epochs.

Step 7: Make Predictions

import numpy as np
import cv2

# Load a test image
image_path = "test_cat.jpg"
image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; model was trained on RGB
image = cv2.resize(image, (128, 128)) / 255.0
image = np.expand_dims(image, axis=0)  # Reshape for model

# Predict
prediction = model.predict(image)
print("Predicted Class:", "Dog" if prediction[0][0] > 0.5 else "Cat")

Explanation

 cv2.imread(image_path): Reads an image.


 cv2.resize(image, (128, 128)): Resizes it.
 image / 255.0: Normalizes the pixel values.
 np.expand_dims(image, axis=0): Adds batch dimension for prediction.
 model.predict(image): Predicts the class.
 If the output > 0.5, it’s a dog, otherwise it’s a cat.
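The sigmoid output and the 0.5 decision threshold can be illustrated with the stdlib alone (the logit values below are illustrative, not real model outputs):

```python
import math

def sigmoid(x):
    # Squashes any real-valued logit into the (0, 1) range.
    return 1.0 / (1.0 + math.exp(-x))

def label_for(logit, threshold=0.5):
    # Same decision rule as the prediction step: > 0.5 means "Dog".
    return "Dog" if sigmoid(logit) > threshold else "Cat"

print(label_for(2.0))   # Dog  (sigmoid(2.0) is about 0.88)
print(label_for(-1.5))  # Cat  (sigmoid(-1.5) is about 0.18)
```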

Step 8: Save and Deploy

model.save("cat_dog_classifier.h5")

 Saves the trained model for future use.

from tensorflow.keras.models import load_model

new_model = load_model("cat_dog_classifier.h5")

 Loads the model back for inference.

Key Notes

 We used CNNs for image classification.
 The model was trained on the Cats vs. Dogs dataset.
 We visualized and evaluated the model's performance.
 Predictions were made on new images loaded with OpenCV.
 The trained model was saved and reloaded for deployment.
Project Objective:

Build an Automated Plant Disease Detection System using Python. This system will use machine learning (specifically deep learning) to classify diseases from images of plant leaves.

Step-by-Step Process: Automated Plant Disease Detection

1. Set Up the Development Environment

Start by setting up the necessary tools and libraries:

Install Required Libraries:

You'll need the following libraries:

 TensorFlow for deep learning.


 Keras for building the neural network model.
 OpenCV for image processing.
 Flask (optional, for deploying the app).
 Matplotlib for visualization.

Run the following command to install the libraries:

pip install tensorflow opencv-python flask numpy matplotlib

2. Collect the Dataset

You'll need a dataset with images of plant leaves and their respective diseases. You can
use the PlantVillage dataset, which is available on Kaggle, or you can use the
TensorFlow datasets.

You can also find similar datasets like:

 PlantVillage dataset on Kaggle


 TensorFlow Datasets (TFDS)

The dataset typically consists of several categories (e.g., healthy leaves, disease classes
like 'powdery mildew', 'leaf spot', etc.).
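A dataset organized as one folder per class can be sanity-checked before training. A stdlib-only sketch that builds a mock layout (the class names here are hypothetical) and discovers the classes the way a folder-based loader would:

```python
import os
import tempfile

# Build a mock dataset layout: one sub-folder per class (hypothetical names).
root = tempfile.mkdtemp()
for cls in ["healthy", "leaf_spot", "powdery_mildew"]:
    os.makedirs(os.path.join(root, cls))

# Class discovery: the sorted folder names double as the label set.
classes = sorted(os.listdir(root))
print(classes)  # ['healthy', 'leaf_spot', 'powdery_mildew']
```

Sorting keeps the class-to-index mapping stable across runs.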

3. Preprocess the Data


Image Preprocessing:

For the images to be fed into the neural network, they need to be resized, normalized,
and augmented.

Here's how you can process the data:

1. Resize Images: Resize each image to the required input size (e.g., 224x224 for
MobileNetV2).
2. Normalization: Normalize pixel values to a range of [0, 1] by dividing the pixel
values by 255.
3. Augmentation: Apply image augmentation (random rotations, flips, etc.) to help
the model generalize better.
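Flips and rotations can be applied with NumPy alone. A minimal augmentation sketch, with a random array standing in for a real leaf photo:

```python
import numpy as np

img = np.random.rand(224, 224, 3)  # stand-in for a normalized leaf image

flipped = np.fliplr(img)      # horizontal flip
rotated = np.rot90(img, k=1)  # 90-degree rotation

# Augmented copies keep the same shape, so they feed the same model input.
print(flipped.shape == img.shape, rotated.shape == img.shape)  # True True
```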

Code for Image Preprocessing:

import os
import cv2
import numpy as np

def preprocess_images(image_folder, target_size=(224, 224)):
    images = []
    labels = []

    # Each sub-folder name is a class; sort for a stable label order.
    for label_index, label in enumerate(sorted(os.listdir(image_folder))):
        label_folder = os.path.join(image_folder, label)
        for img_name in os.listdir(label_folder):
            img_path = os.path.join(label_folder, img_name)
            img = cv2.imread(img_path)
            img = cv2.resize(img, target_size)  # Resize to match model input
            img = img / 255.0                   # Normalize to [0,1] range
            images.append(img)
            labels.append(label_index)  # Integer label, as sparse_categorical_crossentropy expects

    return np.array(images), np.array(labels)

4. Build the Deep Learning Model

We'll use a Convolutional Neural Network (CNN) model for image classification. You
can use a pre-trained model like MobileNetV2 for transfer learning, which will allow
you to fine-tune it on your dataset for plant disease detection.

Code to Build the Model:

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

# Load pre-trained MobileNetV2 model (without the top classification layers)
base_model = MobileNetV2(weights='imagenet', include_top=False,
                         input_shape=(224, 224, 3))

# Freeze base model layers
base_model.trainable = False

# Build the model
model = models.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(512, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(38, activation='softmax')  # Assuming 38 classes of plant diseases
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

5. Train the Model

Split the dataset into training and validation sets. You can use Keras
ImageDataGenerator for loading the images in batches and performing on-the-fly data
augmentation.

Code to Train the Model:

# Split dataset into training and validation sets
train_images, train_labels = preprocess_images('path_to_train_data')
validation_images, validation_labels = preprocess_images('path_to_validation_data')

# Train the model
history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(validation_images, validation_labels))

6. Evaluate the Model

After training the model, evaluate its performance using the validation dataset.

Code to Evaluate the Model:


# Evaluate the model on the validation dataset
test_loss, test_acc = model.evaluate(validation_images, validation_labels, verbose=2)
print("Test accuracy:", test_acc)

7. Save the Model

Once you're happy with the model's performance, save it to disk so that it can be used
for predictions in the future.

Code to Save the Model:

# Save the trained model
model.save('plant_disease_model.h5')

8. Predict New Images (Plant Disease Detection)

You can now use the model to predict the disease of new plant leaves by passing an
image to the model.

Code to Predict the Disease:

import numpy as np
import cv2

def predict_disease(image_path):
    # Load and preprocess the image
    img = cv2.imread(image_path)
    img_resized = cv2.resize(img, (224, 224))
    img_array = np.expand_dims(img_resized, axis=0) / 255.0  # Normalize

    # Predict the disease
    prediction = model.predict(img_array)
    predicted_class = np.argmax(prediction, axis=1)
    return int(predicted_class[0])

# Example usage
disease_class = predict_disease('new_leaf_image.jpg')
print(f"Predicted disease class: {disease_class}")
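The np.argmax step simply picks the class with the highest softmax score. A small sketch with made-up logits for a 5-class case (the real model has 38 outputs):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for a 5-class model.
logits = np.array([0.2, 1.5, 0.1, 3.0, 0.4])
probs = softmax(logits)
predicted_class = int(np.argmax(probs))
print(predicted_class)  # 3
```

Because softmax is monotonic, argmax over the probabilities matches argmax over the raw logits.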

9. Deploy the Model with Flask (Optional)

If you'd like to create a simple web application where users can upload an image of a
plant leaf, and the model will predict the disease, you can use Flask for deployment.
Code for Flask Deployment:

1. Create Flask app (app.py):

from flask import Flask, request, jsonify
import numpy as np
import cv2
from tensorflow.keras.models import load_model

# Load the trained model
model = load_model('plant_disease_model.h5')

app = Flask(__name__)

@app.route('/')
def index():
    return "Welcome to the Plant Disease Detection System!"

@app.route('/predict', methods=['POST'])
def predict():
    if 'file' not in request.files:
        return jsonify({'error': 'No file part'}), 400

    file = request.files['file']
    if file.filename == '':
        return jsonify({'error': 'No selected file'}), 400

    # Save the uploaded image
    img_path = 'uploads/' + file.filename
    file.save(img_path)

    # Predict the disease (predict_disease from Step 8 must be defined or imported here)
    disease_class = predict_disease(img_path)

    return jsonify({'predicted_class': str(disease_class)})

if __name__ == '__main__':
    app.run(debug=True)

2. Create a folder for image uploads (uploads/):

mkdir uploads

3. Run Flask App:

python app.py

You can now upload an image via the /predict endpoint and get a prediction.
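The multipart/form-data body that Flask's request.files parses can be built with the stdlib alone. A sketch of the client side (not a full HTTP client; the field and file names are just examples):

```python
import uuid

def build_multipart(field_name, filename, file_bytes):
    # Hand-build a multipart/form-data body, as Flask's request.files expects.
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n"
        "\r\n"
    ).encode() + file_bytes + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

body, content_type = build_multipart("file", "leaf.jpg", b"fake-jpeg-bytes")
# This body could then be POSTed to the /predict endpoint with urllib.request.
```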
10. Improvements and Extensions

 Enhance the Model: If accuracy is not sufficient, consider fine-tuning the model
using a larger dataset or a more complex model like EfficientNet.
 Real-time Prediction: Use a webcam or live video feed to detect plant diseases
in real-time.
 Mobile Deployment: You can convert the model to TensorFlow Lite and deploy
it on mobile devices for on-the-go predictions.

This step-by-step guide walks you through creating an automated plant disease
detection system. By following these steps, you can use deep learning to help farmers
and gardeners detect plant diseases early and take appropriate actions to protect crops.

You can further enhance the system by adding more diseases, improving the
model's accuracy, and deploying it as a web or mobile app.
