
A

Project Report
On
PHOTO EDITING USING MACHINE LEARNING
Submitted in partial fulfillment of requirements for the degree of
Bachelor of Technology
in
Computer Science & Engineering
by
Vaibhavi Pathak
(2100040100079)
Vanshika Pachauri
(2100040100080)
Yash Gola
(2100040100083)
Under the guidance of
Er. Alok Singh Jadaun

Computer Science & Engineering


Raja Balwant Singh Engineering Technical Campus, Bichpuri
Affiliated to Dr. A.P.J. Abdul Kalam Technical University (Formerly known as U.P.T.U.),
Lucknow
DECLARATION
We declare that the project work presented in this report, entitled “PHOTO EDITING USING MACHINE LEARNING”, carried out under the guidance of Er. Alok Singh Jadaun and submitted to the Computer Science and Engineering Department, Raja Balwant Singh Engineering Technical Campus, Agra, affiliated to Dr. A.P.J. Abdul Kalam Technical University (formerly known as U.P.T.U.), Lucknow, in the academic session 2024-2025 for the award of the Bachelor of Technology degree in Computer Science and Engineering, is our original work. We have not plagiarized this work or submitted it for the award of any other degree.

May, 2025
Place: Agra

Vaibhavi Pathak
(2100040100079)

Vanshika Pachauri
(2100040100080)

Yash Gola
(2100040100083)

CERTIFICATE

This is to certify that the project entitled “PHOTO EDITING USING MACHINE LEARNING” has been submitted by Vaibhavi Pathak (2100040100079), VIIIth Sem., Vanshika Pachauri (2100040100080), VIIIth Sem., and Yash Gola (2100040100083), VIIIth Sem., in partial fulfilment of the degree of Bachelor of Technology in Computer Science & Engineering of Raja Balwant Singh Engineering Technical Campus, affiliated to Dr. A.P.J. Abdul Kalam Technical University (formerly known as U.P.T.U.), Lucknow, in the academic session 2024-2025.

May, 2025
Place: Agra

(Dr. Lavkush Sharma)                         (Er. Alok Singh Jadaun)
HOD, CSE                                     Guide & Assistant Professor, CSE Dept.

ACKNOWLEDGEMENT

Apart from our own efforts, the success of a project depends largely on the encouragement and guidance of many others. We take this opportunity to express our gratitude to the people who have been instrumental in the successful completion of this project.

We would like to express our deep and sincere gratitude to our project guide, Er. Alok Singh Jadaun (Assistant Professor, CSE), who gave us his full support and encouraged us to work on innovative and challenging projects in the educational field.

We extend our gratitude to Dr. Lavkush Sharma, Head of the Department of Computer Science and Engineering, for encouraging us to aim for the highest peak and for providing us the opportunity to prepare this project.

We are grateful to Dr. Brajesh Kumar Singh (Director Academics) and Dr. Pankaj Gupta (Director Finance & Admin.), Raja Balwant Singh Engineering Technical Campus, Bichpuri, Agra, for providing us facilities and constant encouragement. We are also grateful to all the faculty members of the Department of Computer Science and Engineering for their deliberations and honest concerns.

Finally, we are grateful to our parents and friends for their constant support throughout this project work; without them, this work would have been a distant reality. We also place on record our indebtedness to those who have directly or indirectly lent their helping hands in this endeavor.

Vaibhavi Pathak (2100040100079)


Vanshika Pachauri (2100040100080)
Yash Gola (2100040100083)

Abstract

Photo Editing Using Machine Learning


This project presents a photo editing tool that uses machine learning to offer users a seamless way to enhance images. Key features include image quality enhancement, background removal, image classification, and object removal. The tool identifies and categorizes objects within a photo, allowing precise edits and modifications; backgrounds can be removed easily, and the object removal feature lets users erase unwanted objects from an image.

The front-end is built using React.js for interactivity and Tailwind CSS for styling and responsiveness. On the back-end, a Python framework (Flask or Django) is used to manage the machine learning models. Image processing tasks are handled with OpenCV, and machine learning models perform classification and enhancement. Advanced models such as CNNs are employed for background removal and smart object removal.

The future scope of the photo editing web application includes integrating more advanced AI models for real-time editing, adding features like automated style transfer and content-aware scaling, and improving the efficiency of background removal and object detection. Incorporating generative AI to create entirely new content from user inputs is also a promising direction.

Table of Contents

Cover Page
Declaration
Certificate
Acknowledgement
Abstract
List of Figures
List of Abbreviations
1. INTRODUCTION, OBJECTIVE & SCOPE
   1.1 INTRODUCTION
   1.2 OBJECTIVES
   1.3 SCOPE
2. REVIEW OF LITERATURE
3. MATERIALS & METHODS TO BE USED
   3.1 PROJECT CATEGORY
   3.2 TECHNIQUES TO BE USED
   3.3 PARALLEL TECHNIQUES AVAILABLE
   3.4 HARDWARE AND SOFTWARE RESOURCE REQUIREMENT
4. PROPOSED METHODOLOGY
   4.1 PROPOSED ALGORITHM
   4.2 SYSTEM ARCHITECTURE AND FLOW CHART
   4.3 RESULT AND DISCUSSION
5. TESTING TECHNOLOGIES AND SECURITY MECHANISM
6. LIMITATION AND DELIMITATIONS
7. CONCLUSION
8. BIBLIOGRAPHY
   8.1 REFERENCES
   8.2 SNAPSHOTS
   8.3 APPENDIX
   8.4 CURRICULUM VITAE

LIST OF FIGURES

4.1 Flow Chart of Photo Editing Tool
4.2 System Architecture of Photo Editing Tool
4.3 Level-1 Dataflow Diagram of Photo Editing Tool
LIST OF ABBREVIATIONS

AI: Artificial Intelligence
ML: Machine Learning
RGB-D: Red Green Blue - Depth
CNN: Convolutional Neural Network
ResNet: Residual Neural Network
ILSVRC: ImageNet Large Scale Visual Recognition Challenge
COCO: Common Objects in Context
USB: Universal-Scale Object Detection Benchmark
SSD: Single Shot Detector; Solid State Drive
R-CNN: Region-based Convolutional Neural Networks
MATLAB: Matrix Laboratory
MSD: Mean of Squared Differences
SMD: Square of Mean Differences
GAN: Generative Adversarial Network
SVM: Support Vector Machine
RNN: Recurrent Neural Network
CIFAR-10: Canadian Institute for Advanced Research-10 (Dataset)
CIFAR-100: Canadian Institute for Advanced Research-100 (Dataset)
VGG: Visual Geometry Group
FCN: Fully Convolutional Networks
PASCAL VOC: PASCAL Visual Object Classes Challenge
API: Application Programming Interface
MVT: Model-View-Template
DOM: Document Object Model
CSS: Cascading Style Sheets
HTML: Hyper Text Markup Language
GPU: Graphics Processing Unit
SRGAN: Super-Resolution Generative Adversarial Network
CUDA: Compute Unified Device Architecture
HDD: Hard Disk Drive
JPEG: Joint Photographic Experts Group
PNG: Portable Network Graphics
MODNet: Light-Weight Matting Objective Decomposition Network
YOLO: You Only Look Once
ESRGAN: Enhanced Super-Resolution Generative Adversarial Network
Real-ESRGAN: Real Enhanced Super-Resolution Generative Adversarial Network
TLS: Transport Layer Security
RBAC: Role-Based Access Control
GDPR: General Data Protection Regulation
CCPA: California Consumer Privacy Act
CHAPTER-1
INTRODUCTION, OBJECTIVE & SCOPE

1.1 INTRODUCTION

This project is about creating a smart photo editing tool that uses artificial intelligence (AI)
to help people improve and edit their pictures quickly and easily. The goal is to make photo
editing simple for everyone, whether they are beginners or professionals, by using advanced
technology to do the hard work automatically. With this tool, users can enhance image
quality, remove backgrounds, detect and label objects in photos, and erase things they don’t
want in their pictures — all with just a few clicks.

The tool can automatically adjust things like brightness, contrast, and colors to make photos
look better. It can also remove the background from an image, making it easy to replace it
with something else or leave it transparent. In addition, it can recognize different objects in a
photo and tell what they are, which is useful for organizing or editing specific parts of the
image. If there’s something in the photo that the user wants to get rid of — like a person,
object, or mark — the tool can erase it and fill in the space so it looks natural.

Overall, this AI-powered photo editor is designed to save time and effort. It helps users
create professional-looking images without needing special skills or software. Whether
someone is editing photos for social media, online shopping, design work, or just for fun,
this tool makes the process faster, smarter, and more accessible to everyone.

1.2 OBJECTIVE

The main goal of this project is to build an easy-to-use and smart photo editing tool that
helps people edit images faster and better, without needing advanced skills. This tool will
use artificial intelligence to handle complex editing tasks and can be used in many areas like
social media, e-commerce, design, and personal use.
Key objectives include:

- Automatically improving the quality of photos (brightness, contrast, sharpness, etc.)
- Quickly removing and replacing backgrounds in images
- Detecting and identifying objects in photos to allow focused or selective editing
- Erasing unwanted objects from pictures while keeping the rest of the image looking natural
- Making professional-level editing accessible to everyone through a simple interface

1.3 SCOPE

The scope of this project covers the development and integration of key features that leverage machine learning models for image processing. These include:

- Image Quality Enhancement: Automatically adjusts image resolution, brightness, contrast, and sharpness.
- Background Removal: Isolates and removes backgrounds, allowing for replacement with custom settings or images.
- Image Classification: Identifies and categorizes objects within the image, enabling precise edits.
- Object Removal: Erases unwanted objects while intelligently filling in the space with surrounding pixels for seamless editing.

The tool can be utilized across multiple domains, such as:

- E-commerce: Improving product images by removing backgrounds and enhancing quality.
- Social media: Quick and easy photo editing before sharing.
- Photography: Offering professional photographers an automated solution for post-production editing.

This project will focus on implementing these features efficiently while maintaining high accuracy, aiming to reduce the time and complexity involved in photo editing tasks.

CHAPTER-2
REVIEW OF LITERATURE

Andreas Eitel et al. [1], created a system to help robots recognize objects using RGB-D data,
combining color images and depth information. They used two Convolutional Neural
Networks (CNNs)—one for color and one for depth—and combined their outputs with a
technique called late fusion. To handle real-world imperfections, they introduced depth
encoding (making depth data usable by CNNs) and data augmentation (adding noise to
training data). Their method outperformed existing systems, improving object recognition
accuracy, even in noisy environments.

Rui Sun et al. [2], reviewed techniques for image edge detection, essential for tasks like
object detection and segmentation. Traditional methods, like Sobel and Canny, detect edges
by identifying sharp pixel changes but struggle with noise and precision. Modern deep
learning methods, like convolutional neural networks (CNNs), improve accuracy by
recognizing complex patterns but are computationally demanding. Key advancements include
multi-scale feature fusion (combining features from different scales), codec networks
(preserving image size while enhancing edges), and network reconstruction (capturing
boundaries more effectively). The paper highlights challenges like model complexity and
large data requirements, with future research aiming for simpler, efficient models.

Kaiming He et al. [3], introduced a residual learning approach to make training very deep
neural networks easier and more effective. Instead of learning new features directly, the
network learns the difference (residual) between the input and the desired output. They used
"shortcut connections," which let layers skip directly to the next, helping to avoid problems
like "degradation," where deeper networks perform worse. Their method, called residual
networks (ResNets), achieved state-of-the-art results on the ImageNet dataset with up to 152
layers, reducing the error rate to 3.57% and winning the ILSVRC 2015 competition. ResNets
also performed well on other tasks like object detection and segmentation, proving their
usefulness across various computer vision challenges.
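To make the residual idea concrete, here is a minimal sketch of a residual block in Keras (part of the TensorFlow stack this project already uses). The filter count and input size are illustrative, not values from the paper:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    """A minimal residual block: the conv layers learn F(x), and the
    shortcut adds the input back so the block outputs F(x) + x."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([y, shortcut])      # the "shortcut connection"
    return layers.Activation("relu")(y)

# Example: stack one block on a dummy 32x32 RGB input
inputs = tf.keras.Input(shape=(32, 32, 3))
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
x = residual_block(x)
model = tf.keras.Model(inputs, x)
```

Because the shortcut carries the input forward unchanged, gradients can flow directly through the addition, which is what lets very deep stacks of such blocks train without the degradation problem described above.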

Yosuke Shinya [4], introduced a new benchmark called the Universal-Scale object detection
Benchmark (USB) to improve how object detection systems are evaluated. Unlike current
benchmarks like COCO, which focus on specific object sizes and domains, USB combines
datasets like COCO, Waymo Open Dataset, and Manga109-s to include a wider range of
object sizes and image types. This helps create models that work better in diverse, real-world
scenarios. The authors proposed fairer training and evaluation protocols, ensuring results are
comparable across different resource levels. Tests with 15 detection methods showed that
some models perform well on COCO but struggle with other datasets, proving the need for a
broader benchmark like USB.

Subhani Shaik [5], explored advancements in object detection using a deep learning method
called Single Shot Detector (SSD), known for its speed and accuracy. Unlike older methods
like R-CNN, SSD detects objects by processing an image just once, dividing it into sections
and predicting objects using bounding boxes. The authors improved SSD by using depth-wise
separable convolutions to speed up detection and default boxes to better detect small objects.
The model works well in tasks like video surveillance, autonomous vehicles, and facial
recognition, achieving over 80% accuracy. Tested on various objects, it shows strong
performance in real-time detection, making it a fast and effective solution for many industries.

Balasaheb Patil [6], introduced a method for removing unwanted objects from images using
a process called "image inpainting." This technique fills missing parts of an image by
analyzing surrounding colors and textures to create a natural-looking result. The method uses
small image patches to compare textures and fill gaps efficiently. Edge detection and
segmentation help locate object boundaries and determine the areas to fill. If the initial result
isn't perfect, the algorithm can detect and repaint faulty regions. Tested in MATLAB, the
method showed effective object removal and natural results, making digital image editing
faster and more accurate.

Lei Zhang et al. [7], developed an improved method for image inpainting, which removes
unwanted objects and fills the gaps to look natural. Previous methods often made mistakes by
choosing mismatched image patches, leading to unnatural results. Their method uses two
measurements—MSD (Mean of Squared Differences) and SMD (Square of Mean
Differences)—to select the best matching patch for filling, reducing errors. Tests showed

smoother, more realistic restorations compared to older techniques, though it takes more
processing time. This method improves the quality of image restoration and is useful for tasks
like photo editing and film effects. Future plans include using deep learning techniques like
GANs for further improvement.

M. Naveenkumar et al. [8], provided an overview of OpenCV, a popular open-source library


for image processing and computer vision tasks. It covers various applications like motion
detection, face recognition, edge detection, and video processing on mobile devices. The
paper discusses how OpenCV's tools and algorithms can solve real-time problems, making it a
versatile library for enhancing image-based tasks in different platforms. Additionally, it
highlights the practical applications of OpenCV, including in security and mobile technology.

M. Manoj Krishna et al. [9], explored the application of AlexNet, a deep convolutional
neural network (CNN), to classify images from the ImageNet database. The network uses
multiple convolutional layers for automatic feature extraction, identifying patterns such as
edges and textures without manual intervention. It achieves robust performance even on
cropped images, showcasing its ability to generalize across variations in input. AlexNet's
architecture, consisting of stacked convolutional and fully connected layers, benefits from
GPU acceleration, enabling efficient processing of large datasets. The research highlights the
superiority of deep learning models over traditional techniques, making CNNs a reliable tool
for large-scale image classification tasks.

Abhinav N Patil [10], used Convolutional Neural Networks (CNNs) with Keras and
TensorFlow to classify cat and dog images, achieving 97.3% accuracy. The CNN can
recognize objects accurately even if the images are rotated or scaled. It learns automatically
using multiple layers, so there's no need for manual feature extraction. Using GPUs speeds up
training, making the model more efficient. The study highlights how CNNs are ideal for real-
world image recognition tasks, handling large datasets and complex patterns easily.

Debani Prasad Mishra et al. [11], explored using Support Vector Machines (SVMs) and
Artificial Neural Networks (ANNs) to classify celebrity faces for authentication and security.
The process involves preprocessing images for clarity and extracting features for
classification. While effective on smaller datasets, this method struggles with larger, more

complex datasets. The study highlights that deep learning models, like CNNs, are better suited
for image recognition due to their scalability, flexibility, and higher accuracy.

Peipei Zhang et al. [12], discussed advanced deep learning techniques to improve image
quality, especially in low-light conditions. They used models like CNNs, RNNs, and GANs to
adjust brightness, reduce noise, and enhance texture. A key innovation was the scalable
auxiliary generation network, which improves image quality using special loss functions for
consistency. The study also tackles challenges like edge filtering and defogging, showing
significant improvements in image clarity and detail. Their methods can be used in fields like
computer vision and surveillance, and the lightweight model is suitable for real-time mobile
applications.

Xueyang Fu et al. [13], presented DerainNet, a deep convolutional neural network (CNN) designed to remove rain streaks from single images. Unlike previous methods that focus on videos, DerainNet works on individual images by learning the difference between rainy and clean image details. It is trained on the high-detail layer of images, which simplifies rain removal while keeping important details intact. Tested on both synthetic and real-world images, DerainNet outperforms other methods in image quality and speed. It also includes an image enhancement step for better results in heavy rain, making it useful for applications like object tracking and image improvement in bad weather.

Syed Waqas Zamir et al. [14], introduced MIRNet, a deep learning model designed for
improving image restoration tasks like denoising, super-resolution, and low-light
enhancement. Its main innovation is multi-scale feature extraction, which helps preserve fine
details and capture broader context. The model uses advanced techniques like multi-scale
residual blocks and dual attention units to improve performance. Tests on various datasets
show that MIRNet outperforms other methods, making it suitable for practical applications in
areas like photography, surveillance, and medical imaging, where high-quality image
restoration is crucial.

Karen Simonyan et al. [15], explored how increasing the depth of convolutional neural
networks (CNNs) improves image recognition accuracy. They introduced VGG networks with
depths ranging from 11 to 19 layers and small 3x3 convolution filters. These deeper networks

performed better without adding too much computational cost, achieving state-of-the-art
results in the 2014 ImageNet Challenge. The VGG model also works well on other datasets,
showing its versatility and making it a strong choice for image classification tasks.

Junhui Liang et al. [16], investigated how background removal affects the performance of
neural networks in fashion image classification and segmentation tasks. The authors found
that removing background elements improved classification accuracy in shallow networks
like VGG16, particularly when training from scratch, showing up to a 5% improvement.
However, for deeper models and segmentation tasks, background removal didn’t significantly
enhance performance and even conflicted with data augmentation methods like CutMix and
MixUp. The study concludes that background removal is beneficial for shallow networks in
classification but has limited benefits for deeper models and segmentation tasks.

Gao Huang et al. [17], introduced DenseNet, a deep learning architecture that improves
feature propagation and parameter efficiency through dense connectivity. Unlike traditional
CNNs, DenseNet connects every layer to all subsequent layers, ensuring better gradient flow
and feature reuse. DenseNet’s structure reduces the number of parameters required while
maintaining high accuracy, as demonstrated in benchmarks like CIFAR-10, CIFAR-100, and
ImageNet. The DenseNet-BC variant further improves efficiency with bottleneck layers and
compression. DenseNet achieves state-of-the-art results while using fewer parameters than
models like ResNet, making it a key advancement in deep learning for image classification.

Jonathan Long et al. [18], proposed fully convolutional networks (FCNs) for dense pixel-
wise prediction tasks, particularly in semantic segmentation. FCNs allow deep classification
networks like AlexNet, VGG, and GoogLeNet to process images of arbitrary sizes and
produce pixel-level segmentations. By fine-tuning classification networks to support
convolutional operations, the authors achieved improved segmentation performance with a
significant reduction in computational time. The FCN models excelled on datasets like
PASCAL VOC and NYUDv2, outperforming traditional segmentation methods and achieving
a 62.7% mean intersection-over-union (IU) on PASCAL VOC.

Simon Jegou et al. [19], adapted DenseNet for semantic segmentation tasks by creating Fully
Convolutional DenseNets (FC-DenseNets), also known as Tiramisu. The model features two

paths: a downsampling path for feature extraction and an upsampling path for recovering
image details. Dense Blocks in both paths facilitate efficient feature reuse, reducing the
number of parameters and speeding up training. The model achieved state-of-the-art
performance on urban scene datasets like CamVid and Gatech, outperforming methods that
used more parameters. FC-DenseNets proved to be efficient, accurate, and easy to train,
improving image segmentation tasks.

Olaf Ronneberger et al. [20], introduced U-Net, a deep learning architecture designed for
accurate segmentation of biomedical images. U-Net’s U-shaped design includes a contracting
path for capturing context and an expanding path for precise localization. This allows U-Net
to perform well with limited annotated data, making it ideal for biomedical applications. The
network combines high-resolution features from the contracting path with upsampled features
for more detailed segmentations. The model’s success is supported by extensive data
augmentation, particularly through elastic deformations, which helps U-Net generalize well
with small datasets.

CHAPTER 3
MATERIALS & METHODS TO BE USED

3.1 MATERIALS & METHODS (TECHNICAL DETAILS)

3.1.1 Project Category


This photo editing tool is a computer vision project built using machine learning algorithms in Python.

3.1.2 Technologies to be used


3.1.2.1 Language
Python: Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, notably through its use of significant whitespace. It provides constructs that enable clear programming on both small and large scales.

JavaScript: JavaScript is a versatile, lightweight programming language widely


used to create interactive and dynamic features on websites. It is a cornerstone
of web development alongside HTML and CSS, enabling developers to enhance
user interfaces and provide dynamic experiences such as form validation,
animations, and real-time updates. JavaScript operates primarily on the client
side but can also be used on the server side through platforms like Node.js,
making it a full-stack language.

3.1.2.2 Libraries and Frameworks


TensorFlow: TensorFlow is an open-source library developed by Google for
building and training machine learning and deep learning models. It provides a
flexible platform for numerical computation and supports a wide range of tasks,
including neural networks, natural language processing, and computer vision.
TensorFlow is highly scalable, making it suitable for both small applications and
large-scale deployments. It offers intuitive APIs for beginners and advanced
tools for professionals, facilitating efficient model development and deployment

across various devices.

Keras: Keras is a high-level, open-source deep learning library built on top of


frameworks like TensorFlow. It provides an intuitive and user-friendly interface
for designing and training neural networks. Known for its simplicity and
modularity, Keras allows developers to quickly build models using prebuilt
layers and components, making it ideal for beginners and rapid prototyping.
Despite its ease of use, it is powerful enough to support advanced research and
complex deep learning tasks.

Scikit-learn: Scikit-learn is a popular open-source machine learning library


built on Python, offering simple and efficient tools for data analysis and
modeling. It provides a wide range of algorithms for supervised and
unsupervised learning, such as regression, classification, clustering, and
dimensionality reduction. Built on top of NumPy, SciPy, and Matplotlib, Scikit-
learn is known for its clean API, scalability, and seamless integration with other
Python libraries, making it ideal for both beginners and experts in data science
and machine learning.

Django: Django is a high-level, open-source web framework for Python,


designed to facilitate the rapid development of secure and scalable web
applications. It follows the Model-View-Template (MVT) architectural pattern
and comes with built-in features like an ORM, authentication, admin interface,
and routing, which reduce development time. Known for its "batteries-included"
approach, Django provides all the tools needed to build robust web applications,
making it a popular choice for developers.

React.js: React.js is a popular open-source JavaScript library developed by


Facebook for building fast, interactive, and reusable user interfaces, particularly
for single-page applications. It uses a component-based architecture, enabling
developers to create modular and maintainable code. React’s virtual DOM
optimizes rendering performance, making it highly efficient. Known for its
flexibility and ability to integrate with other libraries or frameworks, React is
widely used for creating dynamic and scalable front-end applications.

Tailwind CSS: Tailwind CSS is a popular utility-first CSS framework that makes
it easy to build modern, responsive websites quickly. Instead of writing custom
CSS, developers use pre-designed utility classes directly in their HTML to style
elements. This approach encourages consistency, speeds up development, and
keeps codebases cleaner. Tailwind also offers powerful features like responsive
design support, dark mode, and easy customization through configuration files,
making it a flexible choice for both small and large projects.

3.1.3 Parallel Techniques Available

1. Object Removal
- Patch-Based Inpainting: Parallelizing the inpainting of different regions using techniques like texture synthesis.
- Convolution-Based Parallelism: Applying parallelized convolution filters for artifact reduction or content replacement.
- GANs for Inpainting: Utilizing pre-trained Generative Adversarial Networks (GANs) with parallel GPU execution to fill in missing regions.

2. Background Removal
- Mask Generation Parallelism: Applying segmentation models in parallel to generate masks that separate the background from the foreground.
- Matrix Manipulations: Using parallel matrix operations to identify and subtract background elements efficiently.
- Deep Learning Models: Employing parallelized networks like DeepLab or MODNet for real-time background removal.

3. Image Enhancement
- Filter-Based Parallelism: Distributing filter operations like sharpening, smoothing, or noise reduction across processors.
- Histogram Equalization: Parallel computation of histograms for image contrast enhancement.
- GANs and Neural Networks: Using parallelized enhancement models like SRGAN for super-resolution tasks.
- Wavelet Transform: Applying parallel wavelet transformations for image denoising or texture enhancement.

Tools and Frameworks Supporting Parallel Processing:

- OpenCV: Offers multi-threaded operations for image processing tasks.
- TensorFlow/PyTorch: Provides GPU acceleration for neural network-based techniques.
- CUDA/OpenCL: Enables parallel computation on GPUs for high-performance image processing.
- Dask/NumPy: Useful for parallelizing array and matrix operations.

Parallelization ensures faster and more efficient processing, especially for large-scale or real-time applications, as the sketch below illustrates.
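As an illustration, here is a minimal sketch that distributes an OpenCV denoising filter across CPU cores using Python's multiprocessing module. The input folder, file pattern, and pool size are hypothetical assumptions, not details from the report:

```python
import glob
from multiprocessing import Pool

import cv2  # OpenCV

def denoise(path: str) -> str:
    """Read one image, apply non-local-means denoising, write the result."""
    img = cv2.imread(path)
    out = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)
    out_path = path.replace(".jpg", "_denoised.jpg")
    cv2.imwrite(out_path, out)
    return out_path

if __name__ == "__main__":
    files = glob.glob("images/*.jpg")   # hypothetical input folder
    with Pool(processes=4) as pool:     # pool size is illustrative
        results = pool.map(denoise, files)
    print(f"Processed {len(results)} images")
```

Each worker handles a different image, so a batch job speeds up roughly in proportion to the number of cores; the same pattern applies to sharpening, smoothing, or mask generation.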

3.2. Hardware and Software Requirements and their Specifications


3.2.1. Hardware Requirements:
i. 4 GB RAM
ii. 512 GB storage (either HDD or SSD)
iii. Intel Core i3 or newer generation processor

3.2.2. Software Requirements:

i. Platform independent: runs on Windows (7 or later) or macOS.
ii. Python (any version that supports TensorFlow)

CHAPTER 4
PROPOSED METHODOLOGY

4.1 Proposed Algorithm


Step 1: Preprocessing
1. Input Image: Accept the input image in any supported format (e.g., JPEG, PNG).
2. Image Resizing: Resize the image to a predefined resolution for consistency, preserving the aspect ratio (e.g., 512x512).
3. Color Normalization: Normalize image pixel values to improve ML model performance (scale to [0, 1] or [-1, 1]); a sketch of this step follows.
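Below is a minimal preprocessing sketch using OpenCV and NumPy, assuming the 512x512 target size mentioned above; the letterbox padding and normalization range are illustrative choices, not requirements stated in the report:

```python
import cv2
import numpy as np

def preprocess(path: str, size: int = 512) -> np.ndarray:
    """Load an image, resize it to fit a size x size canvas while
    preserving the aspect ratio, and scale pixel values to [0, 1]."""
    img = cv2.imread(path)                      # BGR, uint8
    h, w = img.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(img, (int(w * scale), int(h * scale)))
    # Pad the short side so the output is square without distortion
    canvas = np.zeros((size, size, 3), dtype=np.uint8)
    canvas[: resized.shape[0], : resized.shape[1]] = resized
    return canvas.astype(np.float32) / 255.0    # normalize to [0, 1]
```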

Step 2: Background Removal

1. Model Selection: Use a pretrained segmentation model (e.g., U-Net, DeepLab, or MODNet).
2. Segmentation:
   - Pass the input image through the model to generate a binary mask for the foreground.
   - The mask should identify the subject (foreground) vs. the background.
3. Background Removal:
   - Use the binary mask to separate the foreground.
   - Replace the background with a transparent or custom image, as in the sketch below.
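A minimal sketch of the mask-application step, assuming a binary foreground mask (values 0/255, same height and width as the image) has already been produced by one of the segmentation models above:

```python
import cv2
import numpy as np

def remove_background(img_bgr: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Turn a BGR image plus a binary foreground mask into a BGRA image
    whose background pixels are fully transparent."""
    bgra = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2BGRA)
    bgra[:, :, 3] = mask   # alpha channel: 255 = keep, 0 = transparent
    return bgra

# Usage (mask assumed to come from the segmentation model):
# out = remove_background(cv2.imread("photo.jpg"), mask)
# cv2.imwrite("photo_no_bg.png", out)  # PNG preserves the alpha channel
```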

Step 3: Object Removal

1. Object Detection:
   - Use a pretrained object detection model (e.g., YOLO, Faster R-CNN, or DETR) to detect objects in the image.
   - Allow users to select the object(s) they want to remove.
2. Inpainting:
   - After the object is removed, use an inpainting model to fill the removed area.
   - Ensure the model considers contextual information from surrounding pixels to maintain image consistency (see the sketch below).
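As a lightweight stand-in for the learned inpainting models mentioned above, OpenCV's classical cv2.inpaint can fill a selected region from surrounding pixels. A hedged sketch, where the mask marking the object to erase is assumed to come from the detection/selection step (the rectangle coordinates are purely illustrative):

```python
import cv2
import numpy as np

def erase_object(img_bgr: np.ndarray, object_mask: np.ndarray) -> np.ndarray:
    """Fill the masked region (255 = pixels to remove) using Telea's
    inpainting algorithm, which propagates surrounding colors inward."""
    return cv2.inpaint(img_bgr, object_mask, inpaintRadius=3,
                       flags=cv2.INPAINT_TELEA)

# Usage: a rectangular mask over the unwanted object
img = cv2.imread("photo.jpg")
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[100:200, 150:300] = 255   # illustrative coordinates
cv2.imwrite("photo_clean.jpg", erase_object(img, mask))
```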

Step 4: Image Enhancement

1. Low-Level Enhancements:
   - Noise Reduction: Apply denoising using models like DnCNN.
   - Color Correction: Use histogram equalization or ML-based models for color adjustments (see the sketch below).
2. Super-Resolution:
   - Upscale the image using a super-resolution model (e.g., ESRGAN, Real-ESRGAN) to improve clarity and sharpness.
3. Style Enhancement (Optional):
   - Use GANs (e.g., StyleGAN) to apply artistic styles or improve aesthetics.
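A minimal sketch of the classical color-correction path using CLAHE (contrast-limited adaptive histogram equalization) on the lightness channel; the clip limit and tile size are illustrative defaults, not values from the report:

```python
import cv2

def enhance_contrast(img_bgr):
    """Apply CLAHE to the L channel in LAB color space so contrast
    improves without distorting the colors themselves."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    merged = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(merged, cv2.COLOR_LAB2BGR)
```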

Step 5: Post-Processing
1. Final Adjustments:
   - Allow users to fine-tune brightness, contrast, saturation, etc. (see the sketch below).
   - Provide preview functionality to evaluate changes.
2. Output Format:
   - Convert the enhanced image back to the desired output format (e.g., JPEG, PNG).
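One way the fine-tuning step could look, sketched with Pillow's ImageEnhance module (an assumption; the report does not name the library used for these adjustments). Factors of 1.0 leave the image unchanged:

```python
from PIL import Image, ImageEnhance

def adjust(path, brightness=1.0, contrast=1.0, saturation=1.0):
    """Apply user-chosen brightness/contrast/saturation factors."""
    img = Image.open(path)
    img = ImageEnhance.Brightness(img).enhance(brightness)
    img = ImageEnhance.Contrast(img).enhance(contrast)
    img = ImageEnhance.Color(img).enhance(saturation)  # 'Color' = saturation
    return img

# Usage: slightly brighten and boost contrast, then export as PNG
adjust("edited.jpg", brightness=1.1, contrast=1.2).save("final.png")
```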

Step 6: User Interaction

1. User Interface:
   - Provide a simple interface for selecting functionalities: background removal, object removal, and image enhancement.
   - Enable undo/redo options for user changes.
2. Save and Share:
   - Allow users to save the edited image locally or share it via supported platforms.

4.2 Flow Chart, System Architecture and Data Flow Diagram


4.2.1 Flow Chart
The flowchart in fig. 4.1 represents a user-centric image processing workflow. It begins with uploading a user image and selecting one of three actions: background removal, object removal, or image enhancement. The chosen action is applied using a model's algorithm, and the processed image is saved in the database. Users can download the output to their system or temporarily store it in the software database to process another image. The flow concludes with either processing another image or exiting the system.

[Flowchart image: Start → Upload User Image → Select Action (Background Remove / Image Enhancement / Object Remove) → save the user image in the database → process the image with the model's algorithm → save the output image in the database → download the image to the user's system, or temporarily save it in the software database and process another image → Exit]
Fig. 4.1 Flow Chart of Photo Editing tool

4.2.2 System Architecture

[System architecture diagram: hardware (desktop or mobile) runs the front-end web app built with React.js and Tailwind CSS (UX/UI design); an uploaded image is sent through the software API to the Python back-end, where the ML engine applies the model and stores the result in the database]
Fig. 4.2 System Architecture of Photo Editing tool

A system architecture is the conceptual model that defines the structure, behaviour, and other views of a system. An architecture description is a formal description and representation of a system, organized in a way that supports reasoning about the structures and behaviours of the system, as shown in fig. 4.2.
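To make the hand-off between the React front-end and the Python back-end concrete, here is a minimal sketch of the API layer, assuming the Flask option mentioned in the abstract; the route name and the process_image dispatcher are hypothetical names introduced for illustration:

```python
import io

import cv2
import numpy as np
from flask import Flask, request, send_file

app = Flask(__name__)

def process_image(img, action):
    """Hypothetical dispatcher: route the image to the ML model that
    implements the selected action (enhance, remove_bg, remove_object)."""
    return img  # placeholder

@app.route("/api/edit", methods=["POST"])
def edit():
    """Receive an uploaded image, apply the selected action, return the result."""
    action = request.form.get("action", "enhance")
    data = np.frombuffer(request.files["image"].read(), np.uint8)
    img = cv2.imdecode(data, cv2.IMREAD_COLOR)
    result = process_image(img, action)
    ok, buf = cv2.imencode(".png", result)
    return send_file(io.BytesIO(buf.tobytes()), mimetype="image/png")

if __name__ == "__main__":
    app.run(debug=True)
```

The front-end posts the image and the chosen action as multipart form data and receives the edited PNG back, matching the upload/send/store flow shown in fig. 4.2.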

4.2.3 Dataflow Diagram

[Level-1 dataflow diagram: the user sends the original image to the photo editing tool; the tool exchanges images with the tool algorithm, which returns adjusted and edited images; the original, adjusted, and edited images are stored in the database]
Fig. 4.3 Level - 1 Dataflow Diagram of Photo Editing Tool


The diagram in fig. 4.3 represents the data flow of the photo editing tool. Key entities involved
are: User, Image Processing Tool, Tool Algorithm, and Database. The user uploads an
original image to the tool. The tool’s algorithm processes the image for editing or
adjustment. Adjusted images may undergo further edits, creating a finalized edited image.
The database stores the original, adjusted, and edited images for reference and retrieval.

The system ensures efficient management of image data across different stages of
processing.

CHAPTER 5

TESTING TECHNOLOGIES & SECURITY MECHANISMS

5.1. Testing Technology


5.1.1 Functional Testing
Functional testing validates that the features of the photo editing tool work as intended.

- Image Upload and Preview: Verifies that users can upload images of supported formats (e.g., JPG, PNG) and preview them in the editor.
- Editing Features: Tests the basic and advanced editing functions, such as:
  - Crop, Rotate, and Resize: Ensures these tools operate accurately and efficiently.
  - Filters and Effects: Confirms that filters are applied correctly to images.
  - Text and Stickers: Checks for smooth placement, scaling, and removal of text and stickers.
- Machine Learning Features: Validates the proper functioning of ML-based features, such as:
  - Auto-enhancement (brightness, contrast, sharpness).
  - Background removal or replacement.
  - Style transfer (artistic effects applied to images).
- Save and Export Options: Ensures edited images can be saved in various resolutions and formats without losing quality.
5.1.2 Performance Testing

- Load Testing: Simulates multiple users editing images simultaneously to ensure the tool performs well under load.
- Response Time: Measures the time taken to apply ML-based filters, particularly for high-resolution images.
- Memory Usage: Tests for efficient memory consumption, ensuring the tool does not crash or lag under typical use cases.

5.1.3 Usability Testing

Usability testing focuses on the user experience to ensure ease of use and intuitiveness.

- Interface Design: Tests whether the user interface is clear, visually appealing, and accessible for all users.
- Error Handling: Verifies that appropriate error messages are displayed when unsupported actions occur, such as uploading an incompatible file format.
5.1.4 Security Testing
Security testing ensures the tool and its data are protected against vulnerabilities.

- File Upload Security: Tests for protection against malicious file uploads by restricting file types and sizes.
- Data Encryption: Verifies that images and user data are encrypted during upload, processing, and download.
- Session Security: Ensures secure user sessions, with mechanisms like token-based authentication.
5.1.5 Compatibility Testing
Compatibility testing ensures the tool works seamlessly across different environments.

- Browser Compatibility: Tests the performance on browsers such as Chrome, Firefox, Safari, and Edge.
- Device Compatibility: Verifies usability and responsiveness on desktops, tablets, and mobile devices.

5.1.6 Regression Testing
Whenever a new feature or update is added, regression testing ensures previously
existing features are not affected.

5.2 Security Mechanisms


5.2.1 User Data Protection
1. Data Encryption
   - All user-uploaded images and personal data are encrypted using AES-256 encryption during storage and transmission.
   - HTTPS with TLS (Transport Layer Security) ensures secure data transfer between client and server.
2. Access Control
   - Users are authenticated using secure login methods (e.g., OAuth 2.0 or JWT).
   - Role-based access control (RBAC) restricts access to specific features or administrative functions.
3. Data Privacy
   - Temporary storage of user data is implemented, with automatic deletion after a specific duration.
   - The tool complies with privacy regulations such as GDPR and CCPA.

5.2.2 Secure File Handling
1. File Validation
   - Uploaded files are scanned to detect potential malicious content.
   - Only supported image formats (e.g., JPG, PNG) are allowed (see the sketch below).
2. Size and Type Restrictions
   - Strict limits on file size prevent buffer overflow or denial-of-service attacks.
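A minimal sketch of how the file-validation rules above could be enforced in the Flask back-end; the extension whitelist and the 10 MB cap are illustrative assumptions, not limits stated in the report:

```python
from flask import Flask, request, abort

app = Flask(__name__)
app.config["MAX_CONTENT_LENGTH"] = 10 * 1024 * 1024  # reject bodies over 10 MB

ALLOWED_EXTENSIONS = {"jpg", "jpeg", "png"}
MAGIC_PREFIXES = (b"\xff\xd8\xff", b"\x89PNG")  # JPEG and PNG signatures

def is_allowed(filename: str, header: bytes) -> bool:
    """Check both the extension and the file's magic bytes, so a renamed
    executable cannot slip through as an image."""
    ext_ok = ("." in filename
              and filename.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS)
    magic_ok = any(header.startswith(m) for m in MAGIC_PREFIXES)
    return ext_ok and magic_ok

@app.route("/api/upload", methods=["POST"])
def upload():
    f = request.files.get("image")
    if f is None or not is_allowed(f.filename, f.stream.read(4)):
        abort(400, "Unsupported or unsafe file")
    f.stream.seek(0)  # rewind after peeking at the header
    # ... hand off to the editing pipeline ...
    return {"status": "ok"}
```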
5.2.3 Machine Learning Model Security
1. Model Integrity
   - Store ML models securely using hashed versioning to prevent tampering.
   - Access to the model APIs is restricted to authenticated users.
2. Adversarial Attack Mitigation
   - Protect against adversarial examples by implementing robust preprocessing and defensive distillation techniques.
3. Regular Audits
   - Perform periodic security audits to detect vulnerabilities or biases in ML models.

CHAPTER 6

LIMITATIONS & DELIMITATIONS

6.1 LIMITATIONS:

The limitations of this photo editing tool highlight some of the technical and practical
challenges associated with using machine learning in image processing. While the tool offers a
range of advanced features, there are still certain areas where performance may not be ideal.

One major limitation lies in the accuracy of the machine learning models. Although the tool
is designed to perform tasks like background removal, image enhancement, and style transfer
automatically, it may not always produce perfect results—especially when dealing with
complex images, such as those with detailed or cluttered backgrounds, or very low-quality
images. The AI models may misinterpret certain areas of the image, leading to less-than-
optimal output.

Another limitation is the dependence on image quality. The tool works best with high-
resolution, clear images. If the input image is blurry, low-resolution, or heavily compressed, the
machine learning models may struggle to process it accurately. This can reduce the quality and
usefulness of the editing results.

Processing time is also a concern. Tasks like applying filters or performing style transfer,
especially on high-resolution images, can take a significant amount of time to complete. This
can affect the user experience, particularly when quick results are needed or when editing large
batches of photos.

In terms of system performance, the tool may require high-performance hardware—


especially for running complex machine learning tasks. Users with older or lower-end devices
might face slow performance or may not be able to access the full set of features efficiently.

The scope of the machine learning features is also currently limited. While the tool includes
powerful capabilities like automatic enhancement and background removal, many advanced
features—such as full object recognition, detailed segmentation, or 3D rendering—have not
been implemented due to limitations in time, resources, and project complexity.

Lastly, there is the issue of bias in machine learning models. These models are trained on
publicly available datasets, which may not represent all types of images or users fairly. This
could lead to uneven performance, such as less accurate results for certain demographics or
environments, particularly in tasks like face detection or object classification.

6.2 DELIMITATIONS:

The delimitations define the intentional boundaries set during the planning and development of
this project. These choices were made to keep the project focused, practical, and achievable
within the available time and resources.

One of the main delimitations is the scope of features. The tool was purposely designed to
focus on essential photo editing tasks, such as cropping, rotating, applying basic filters, and a
few AI-driven features like background removal and image enhancement. More advanced
functions—like 3D editing, animation, or video processing—were excluded to maintain the
project’s simplicity and usability.

The target audience for the tool also helped shape its features. It is intended for general users
with basic to moderate editing needs. As a result, highly professional features, such as detailed
layer editing, manual masking, or precise color grading, were left out in favor of a more
streamlined and user-friendly experience.

Regarding file format support, the tool works only with common image formats such as JPG,
PNG, and JPEG. It does not support specialized formats like RAW, PSD, or TIFF, as handling
such files would require more complex processing and integration with advanced libraries or
third-party tools.

Another key delimitation is in the machine learning model customization. The AI models used

are based on general datasets and are not customized for specific industries or domains (such as
medical imaging or satellite photos). Tailoring models to such specific needs was considered
beyond the scope of this project.

In terms of security and privacy, only basic protection measures have been implemented to safeguard users' images during uploading, processing, and downloading. More advanced security standards, such as end-to-end encryption or compliance with specific regulations, were not included.
Finally, the tool currently lacks integration with external services like cloud storage platforms
(e.g., Google Drive, Dropbox). This means users cannot directly upload from or save to these
platforms, limiting options for easy sharing and backup. Including these features was
considered a future enhancement, beyond the current project scope.

CHAPTER 7

CONCLUSION

The conclusion of this project highlights how the use of machine learning in photo editing represents a major step forward in digital image processing. This tool goes beyond traditional editing software by providing users with smart, automated features—such as background removal, object removal, and image enhancement—that would normally require time, effort, and technical skill if done manually. These advanced features are powered by machine learning models that help simplify complex editing tasks, making the tool more efficient, accurate, and user-friendly.

A key strength of this project is its ability to make high-level editing accessible to users of all skill levels. The clean and intuitive interface allows even beginners to achieve professional-quality results without needing to understand the underlying technology. At the same time, the system is powerful enough to be useful for more experienced users who need fast and effective editing tools.

Another important aspect is the focus on security and reliability. By including strong testing practices and security features, the project ensures that users' data and images are handled safely during upload, processing, and download. This builds user trust and improves the overall experience, especially in an era where data privacy is a top concern.

The project is also designed to be scalable and adaptable. Its modular structure means that more features can be added in the future as needed, such as real-time editing, 3D image manipulation, or the ability to save and share images using cloud platforms. This flexibility ensures that the tool can grow alongside changes in user needs and advances in technology.

In summary, the project demonstrates how machine learning can revolutionize digital tools by automating complex tasks, improving accuracy, and enhancing creativity. It serves not only as a practical solution for modern photo editing needs but also as a strong foundation for future developments in AI-driven creative applications. This work shows that with the right technology and thoughtful design, photo editing can be faster, easier, and more powerful than ever before.

CHAPTER 8

BIBLIOGRAPHY

8.1 References

[1] Andreas Eitel, Jost Tobias Springenberg, Luciano Spinello, Martin Riedmiller, Wolfram
Burgard, (2022) Multimodal Deep Learning for Robust RGB-D Object Recognition,
arXiv:1507.06821v2

[2] Rui Sun, Tao Lei, Xiaogang Du, Asoke K Nandi, (2022) Survey of Image Edge Detection,
Front. Sig. Proc. 2:826967, DOI: 10.3389/frsip.2022.826967

[3] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, (2015) Deep Residual Learning for
Image Recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), 2016, pp. 770-778

[4] Yosuke Shinya, (2022) USB: Universal-Scale Object Detection Benchmark,


arXiv:2103.14027v3

[5] Subhani Shaik, (2023) Real-time Object Detection Using Deep Learning, Volume 38,
DOI: 10.9734/jamcs/2023/v38i81787

[6] Balasaheb Patil, Kumbhar Megha, Jagtap Tejashree, Jadhav Asha, (2016) Object Removal
from Digital Image in image Inpainting using Novel Framework, ISSN (Print): 2347-2820 V-
4 I-2

[7] Lei Zhang, Minhui Chang, (2020) An image inpainting method for object removal based
on difference degree constraint, DOI: 10.1007/s11042-020-09835-0

[8] Naveenkumar Mahamkali, Vadivel Ayyasamy, (2015) OpenCV for Computer Vision
Applications, Proceedings of National Conference on Big Data and Cloud Computing
(NCBDC’15)

[9] M Manoj Krishna, M Neelima, M Harshali, M Venu Gopala Rao, (2018) Image classification using Deep learning, International Journal of Engineering & Technology, DOI: 10.1109/ICCSEA49143.2020.9132851

[10] Abhinav N Patil, (2021) Image Recognition Using Machine Learning, International
Journal of Engineering Applied Sciences and Technology, 2021 Vol. 6, Issue 1, ISSN No.
2455-2143

[11] Debani Prasad Mishra, Sanhita Mishra, Smrutisikha Jena, (2024) Image classification using machine learning, Indonesian Journal of Electrical Engineering and Computer Science, Vol. 31, No. 3, pp. 1551-1558, ISSN: 2502-4752, DOI: 10.11591/ijeecs.v31.i3.pp1551-1558

[12] Peipei Zhang, (2022) Image Enhancement Method Based on Deep Learning, Hindawi
Mathematical Problems in Engineering,Volume 2022, Article ID 6797367, 9 pages, DOI:
10.1155/2022/6797367

[13] Xueyang Fu, Jiabin Huang, Xinghao Ding, Yinghao Liao, John Paisley, (2017) Clearing
the Skies: A deep network architecture for single-image rain removal, arXiv:1609.02087v2

[14] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, Ling Shao, (2020) Learning Enriched Features for Real Image Restoration and Enhancement, arXiv:2003.06792v2

[15] Karen Simonyan, Andrew Zisserman, (2015) Very Deep Convolutional Networks for Large-Scale Image Recognition, Visual Geometry Group, Department of Engineering Science, University of Oxford, Published as a conference paper at ICLR

[16] Junhui Liang, Ying Liu, Vladimir Vlassov, (2024) The Impact of Background Removal
on Performance of Neural Networks for Fashion Image Classification and Segmentation,
arXiv:2308.09764v2

[17] Gao Huang, Zhuang Liu, Laurens van der Maaten, (2018) Densely Connected
Convolutional Networks, arXiv:1608.06993v5

[18] Jonathan Long, Evan Shelhamer, Trevor Darrell, (2015) Fully Convolutional Networks
for Semantic Segmentation, arXiv:1411.4038v2

[19] Simon Jegou, Michal Drozdzal, David Vazquez, Adriana Romero, Yoshua Bengio, (2017) The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation, arXiv:1611.09326v3

[20] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, (2015) U-Net: Convolutional
Networks for Biomedical Image Segmentation, arXiv:1505.04597v1

8.2 Snapshots

Snapshot 1: Performing Background Removal

Snapshot 2: Background Removed

Snapshot 3: Performing Object Removal

Snapshot 4: Selecting Object to remove that object

Snapshot 5: Object Removed

8.3 Appendix

8.4 Curriculum Vitae

Prof. (Dr.) Lavkush Sharma (H.O.D)

Dr. Lavkush Sharma is currently serving as Head of the Training and Placement Cell and Associate Professor in the Post Graduate Department of Computer Science & Engineering at Raja Balwant Singh Engineering Technical Campus, Bichpuri, Agra. He has more than 15 years of teaching experience and has taught subjects such as Cryptography, Distributed Systems, Compiler Design, and Computer Programming. He has participated in many international and national conferences in India and has been actively involved in organizing various workshops, conferences, and faculty development programs at the institute.
Academic Qualification: B. Tech, M. Tech. (Computer Science), Ph.D.
Designation with Department: Associate Professor (Computer Science and Engineering)
Contact No.: +91-9917050044
Email: [email protected]
Specialization: Computer Science & Engineering
Experience: 16 Years
Present Area of work: Machine Learning, Cryptography & N/W Security.
Research Articles/Publications/Memberships:
- Research Articles Published: 05
- Papers Presented in Conferences: 05
- B.Tech Research Projects Guided: 30
- Member of Institution of Engineers (India) Ltd., Kolkata
- International Journals: 06
- International Conferences: 06
- National Conferences: 08
- Book Chapters: 01

Journals/Academic Achievements:
- Conferences/Workshops Attended: 10
- UGC NET Qualified
- International Conferences Organized: 02
- Lectures Delivered: 01
- National Conferences Organized: 01
- Workshops/Seminars Attended: 20

Er. Alok Singh Jadaun is currently serving as Assistant Professor in the Post Graduate Department of Computer Science & Engineering at Raja Balwant Singh Engineering Technical Campus, Bichpuri, Agra. He obtained his B.Tech. degree in Computer Science and Engineering from U.P.T.U. with First Division in 2009, and his Master of Technology (M.Tech.) degree in Computer Science & Engineering from Bhagwant University, Ajmer, with First Division in 2014. He has nine years and seven months of experience and is presently engaged in research and development activities in the areas of Data Structures, Cryptography and Network Security, Computer Networks, and Distributed Systems.

Academic Qualification: B.Tech., M.Tech.


Designation with Department: Assistant Professor, Computer Sci. & Engineering
Contact No.: 9639125689, 7906813603
Email: [email protected]
Specialization: Data Structures, Computer Networks, Cryptography and Network Security, Software Engineering, Object Oriented Programming
Experience: 9 Years 7 Months
Present Area of Work: Data Structures, Distributed Systems, Computer Networks, Software Engineering, and IoT.

Research Articles/Publications/Memberships:
- Research Articles Published: 02
- Papers Presented in Conferences: 05
- B.Tech. Research Projects Guided: 10

Journals/Academic Achievements: Conferences/FDP/Workshops Attended: 12

Vaibhavi Pathak is currently pursuing a Bachelor of Technology in Computer Science and Engineering at Raja Balwant Singh Engineering Technical Campus (RBSETC), Bichpuri, Agra. She has a strong interest in front-end web development, data structures and algorithms (DSA), and Google Cloud computing, with a commendable CGPA of 6.7. She has developed good practical knowledge in building websites using HTML, CSS, JavaScript, and React.js, and enjoys designing user-friendly web pages and making them responsive and functional. She has built a solid foundation in front-end development by working on various small projects and learning through self-practice, and has a keen interest in improving her programming and problem-solving skills. She has solved over 250 DSA problems on LeetCode, which has helped her understand different algorithms and write efficient code, and she regularly practices coding to keep improving her logic and speed.

She completed internships at Codsoft and NexGen Dev, where she got the opportunity to work on real-life tasks, contribute to projects, and understand how teamwork and deadlines play an important role in software development. She also has a specialization in Google Cloud computing, with over 12 Google Cloud certifications.

Academic Qualifications
Bachelor of Technology (Pursuing)
Email: [email protected]
Phone: +91 8810122722
Location: Agra, India
LinkedIn: www.linkedin.com/in/vaibhavi-pathak-b68223164
GitHub: https://github.com/vaibhavi2402

PLAGIARISM SKETCH

CHAPTER 1
[Plagiarism report snapshot]

CHAPTER 2
[Plagiarism report snapshot]