
A PROJECT REPORT ON

IMAGE DEHAZING USING GMAN ALGORITHM


Submitted in partial fulfillment of the requirements for the award of the degree

of

BACHELOR OF TECHNOLOGY
in

ELECTRONICS AND COMMUNICATION ENGINEERING


Under the guidance of

Dr. C. KAVITHA, M.E., Ph.D.,


Associate Professor & HOD, Department of Electronics and Communication Engineering

By

K. JYOTHEESH 21751A0452
K. CHARAN 21751A0455
K. VENU 21751A0456
P. VARUN KUMAR 21751A04A0

SREENIVASA INSTITUTE OF TECHNOLOGY AND MANAGEMENT STUDIES,
CHITTOOR-517127, A.P.
(Autonomous)

(Approved by AICTE, New Delhi & Affiliated to JNTUA, Ananthapuramu)

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
MAY (2021-2025)
SREENIVASA INSTITUTE OF TECHNOLOGY AND
MANAGEMENT STUDIES, CHITTOOR-517127, A.P.
(Autonomous)
(Approved by AICTE, New Delhi & Affiliated to JNTUA, Ananthapuramu)

BONAFIDE CERTIFICATE
This is to certify that the project work entitled “IMAGE DEHAZING USING GMAN
ALGORITHM” is a bonafide work of
K. JYOTHEESH 21751A0452
K. CHARAN 21751A0455
K. VENU 21751A0456
P. VARUN KUMAR 21751A04A0

submitted to the Department of Electronics and Communication Engineering, in partial
fulfillment of the requirements for the award of the degree of BACHELOR OF
TECHNOLOGY in ELECTRONICS AND COMMUNICATION ENGINEERING from
Jawaharlal Nehru Technological University Anantapur, Ananthapuramu.

Signature of the Supervisor
Dr. C. KAVITHA, M.E., Ph.D.,
Associate Professor,
Department of Electronics and Communication Engineering,
Sreenivasa Institute of Technology and Management Studies, Chittoor, A.P.

Signature of the Head of the Department
Dr. C. KAVITHA, M.E., Ph.D.,
Associate Professor & HOD,
Department of Electronics and Communication Engineering,
Sreenivasa Institute of Technology and Management Studies, Chittoor, A.P.
Submitted for University Examination (Viva-Voce) held on……………………

INTERNAL EXAMINER EXTERNAL EXAMINER

ACKNOWLEDGEMENT
A project of this magnitude would not have been possible without the guidance and co-ordination of
many people. We are fortunate to have had top-quality people to help, support and guide us at every
step towards our goal.

Our team is deeply grateful to the Chairman, Sri K. Ranganadham Garu, for his encouragement and
stalwart support. We are also extremely indebted to the Secretary, Sri D.K. Badri Narayana Garu, for
his constant support.

We express our sincere thanks to our Academic Advisor, Dr. K.L. Narayana, M.Tech., Ph.D. Further,
we would like to express our profound gratitude to our Principal, Dr. N. Venkatachalapathi, M.Tech.,
Ph.D., for providing all possible facilities throughout the completion of our project work.

We express our sincere thanks to our Dean (Academics), Dr. M. Saravanan, M.E., Ph.D. We also
express our sincere thanks to our Head of the Department, Dr. C. Kavitha, Ph.D., for her co-operation
and valuable suggestions towards the completion of the project work. We express our sincere thanks
to our guide, Dr. C. Kavitha, Associate Professor, for offering us the opportunity to do this work
under her guidance.

We would like to extend our gratitude to our project coordinator, Mr. S. Ashmad, M.Tech., Assistant
Professor, for his valuable support.

We express our sincere thanks to all other teaching and non-teaching staff of our department for their
direct and indirect support during our project work. Last but not least, we dedicate this work to our
parents and the Almighty, who have been with us throughout and helped us to overcome the hard times.

K. JYOTHEESH 21751A0452
K. CHARAN 21751A0455
K. VENU 21751A0456
P. VARUN KUMAR 21751A04A0
DEPARTMENT OF ELECTRONICS AND COMMUNICATION
ENGINEERING VISION AND MISSION

INSTITUTE VISION:
To emerge as a Centre of Excellence for Learning and Research in the domains of engineering,
computing and management.

INSTITUTE MISSION:
IM1: Provide congenial academic ambience with necessary infrastructure and learning resources.
IM2: Ignite the students to acquire self-reliance in state-of-the-art technologies.
IM3: Inculcate confidence to face and experience new challenges from industry and society.
IM4: Foster enterprising spirit among students.
IM5: Work collaboratively with Technical Institutes / Universities / Industries of National and
International repute.

DEPARTMENT VISION:
To become a centre of excellence in Electronics and Communication Engineering and provide the
necessary skills to the students to meet the challenges of industry and society.

DEPARTMENT MISSION:
M1: Provide congenial academic ambience with necessary infrastructure and learning
resources.
M2: Inculcate confidence to face and experience new challenges from industry and society.
M3: Ignite the students to acquire self-reliance in state-of-the-art technologies.
M4: Foster enterprising spirit among students.

PROGRAM EDUCATIONAL OBJECTIVES (PEOs):


PEO1: Have in-depth knowledge through life-long learning to conceptualize, critically analyze
and add value in the areas of business management.
PEO2: Have lateral thinking enabling simple solutions for complex managerial problems.
PEO3: Ignite the passion for entrepreneurship.
PEO4: Inculcate a spirit of ethical and social commitment in the personal and professional life to
add value to the society.
PROGRAM OUTCOMES (POs):
Computer Applications Graduates will be able to
PO1. Computational Knowledge: Apply knowledge of computing fundamentals, computing
specialization, mathematics, and domain knowledge appropriate for the computing
specialization to the abstraction and conceptualization of computing models from defined
problems and requirements.

PO2. Problem Analysis: Identify, formulate, research literature, and solve complex computing
problems reaching substantiated conclusions using fundamental principles of mathematics,
computing sciences, and relevant domain disciplines.

PO3. Design/Development of Solutions: Design and evaluate solutions for complex
computing problems, and design and evaluate systems, components, or processes that meet
specified needs with appropriate consideration for public health and safety, cultural, societal,
and environmental considerations.

PO4. Conduct Investigations of Complex Computing Problems: Use research-based


knowledge and research methods including design of experiments, analysis and interpretation
of data, and synthesis of the information to provide valid conclusions.

PO5. Modern Tool Usage: Create, select, adapt and apply appropriate techniques, resources,
and modern computing tools to complex computing activities, with an understanding of the
limitations.

PO6. Societal and Environmental Concern: Understand and assess societal, environmental,
health, safety, legal, and cultural issues within local and global contexts, and the consequential
responsibilities relevant to professional computing practice.

PO7. Innovation and Entrepreneurship: Identify a timely opportunity and use innovation to
pursue that opportunity to create value and wealth for the betterment of the individual and
society at large.

PO8. Professional Ethics: Understand and commit to professional ethics and cyber
regulations, responsibilities, and norms of professional computing practice.

PO9. Individual and Team Work: Function effectively as an individual and as a member or
leader in diverse teams and in multidisciplinary environments.
PO10. Communication Efficacy: Communicate effectively with the computing community,
and with society at large, about complex computing activities by being able to comprehend and
write effective reports, design documentation, make effective presentations, and give and
understand clear instructions.

PO11. Project management and finance: Demonstrate knowledge and understanding of the
computing and management principles and apply these to one’s own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments

PO12. Life-long Learning: Recognize the need, and have the ability, to engage in independent
learning for continual development as a computing professional
PROGRAM SPECIFIC OUTCOMES (PSOs):
On successful completion of the program, the undergraduates will be able to
PSO1: Apply core and functionary management skills for professional growth and business
evaluation.
PSO2: Adapt to dynamic changes in an environment relevant to professional managerial
practice and entrepreneurship as emerging leaders.
Course Outcomes for project work

On completion of the project work, we will be able to:


CO1. Demonstrate in-depth knowledge on the project topic.
CO2. Identify, analyze and formulate complex problem chosen for project work to attain
substantiated conclusions.
CO3. Design solutions to the chosen project problem.
CO4. Undertake investigation of project problem to provide valid conclusions.
CO5. Use the appropriate techniques, resources and modern engineering tools necessary for
project work.
CO6. Apply project results for sustainable development of the society.
CO7. Understand the impact of project results in the context of environmental sustainability.
CO8. Understand professional and ethical responsibilities while executing the project work.
CO9. Function effectively as individual and a member in the project team.
CO10. Develop communication skills, both oral and written for preparing and presenting project
report.
CO11. Demonstrate knowledge and understanding of cost and time analysis required for
carrying out the project.
CO12. Engage in lifelong learning to improve knowledge and competence in the chosen area of
the project.
CO – PO MAPPING

COs/ POs PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12

CO1 3
CO2 3
CO3 3
CO4 3
CO5 3
CO6 3
CO7 3
CO8 3
CO9 3
CO10 3
CO11 3
CO12 3
COs 3 3 3 3 3 3 3 3 3 3 3 3
ABSTRACT

This project presents a model integrated with a Global Memory Attention Network (GMAN) for
image dehazing. The primary objective is to enhance the visual quality of hazy images by
accurately classifying them as either hazy or clear. GMAN introduces a global attention
mechanism, enabling the model to focus on significant features across the entire image, thereby
improving classification accuracy. The dataset comprises various hazy and clear images,
facilitating robust training and evaluation of the model. The proposed approach demonstrates
significant potential in real-world applications, such as improving visibility in outdoor
photography, enhancing computer vision tasks, and aiding automated surveillance systems. The
expected output will enable users to determine the quality of input images, promoting
advancements in image processing and analysis.

Keywords: Image dehazing, Global Memory Attention Network (GMAN), hazy image
classification, feature extraction, computer vision.
TABLE OF CONTENTS

Chapter No. Title

ACKNOWLEDGEMENT
DECLARATION
ABSTRACT

LIST OF TABLES

CHAPTER 1 INTRODUCTION
1.1 Motivation
1.2 Object of the project
1.3 Project Introduction
1.4 Problem Statement
1.5 Scope

CHAPTER 2 LITERATURE REVIEW


2.1 Related work

CHAPTER 3 SYSTEM ANALYSIS

3.1 Existing System


3.2 Disadvantages
3.3 Proposed system
3.4 Advantages

CHAPTER 4 REQUIREMENT ANALYSIS

4.1 Functional and non-functional requirements


4.2 Hardware Requirements
4.3 Software Requirements
4.4 Architecture

CHAPTER 5 SYSTEM DESIGN


5.1 Introduction of Input design
5.2 UML Diagram

CHAPTER 6 IMPLEMENTATION AND RESULTS

6.1 Modules
6.2 Systems
6.3 Algorithms (GMAN)
6.4 Output screens
6.5 Code

CHAPTER 7 SYSTEM STUDY & TESTING

7.1 Feasibility study

CHAPTER 8 CONCLUSION

CHAPTER 9 FUTURE ENHANCEMENT

REFERENCES
CHAPTER 1

1. INTRODUCTION

1.1 Motivation:

The proliferation of digital imagery has revolutionized various fields, but the presence of hazy images
can significantly hinder visual clarity and interpretation. This project is motivated by the necessity to
enhance image quality, particularly in challenging environments like outdoor photography and
automated surveillance. Hazy images often contain crucial information that is obscured by atmospheric
conditions, leading to difficulties in object detection and analysis. By developing a model that integrates
a Global Memory Attention Network (GMAN), we aim to address these challenges by accurately
classifying hazy images and improving their visual quality. The potential applications span multiple
industries, including security, environmental monitoring, and image analysis, making this research both
timely and impactful. Ultimately, this project seeks to contribute to the advancement of image
processing technologies, enhancing users' ability to interpret and utilize visual data effectively.

1.2 Object of the Project

The primary objective of this project is to develop a robust image dehazing model utilizing a Global
Memory Attention Network (GMAN) that effectively classifies hazy images as either clear or hazy. By
employing a global attention mechanism, the model aims to enhance the focus on significant features
within images, leading to improved classification accuracy and visual quality. Additionally, the project
aims to create a comprehensive dataset comprising various hazy and clear images to facilitate effective
training and evaluation of the model. We seek to demonstrate the model's effectiveness in real-world
applications, showcasing its potential to improve visibility in outdoor photography, enhance computer
vision tasks, and support automated surveillance systems. Ultimately, this research aims to establish a
foundation for future advancements in image dehazing technologies and promote better image quality in
diverse practical scenarios.
1.3 Project Introduction

This project presents a cutting-edge approach to image dehazing through the integration of a Global Memory
Attention Network (GMAN). As the demand for high-quality visual data grows, the challenges posed by
hazy images become increasingly evident, particularly in applications like outdoor photography and
surveillance. The primary aim of this research is to enhance the visual quality of images by accurately
classifying them as either hazy or clear. By leveraging the global attention mechanism of GMAN, the
model can effectively focus on significant features throughout the image, thereby improving classification
accuracy. The dataset used for this study consists of a variety of hazy and clear images, enabling
comprehensive training and evaluation.

1.4 Problem Statement

Hazy images significantly degrade visual quality and hinder the performance of various computer
vision applications, including image recognition, autonomous driving, and surveillance. Traditional
image processing techniques often struggle to effectively remove haze and restore clarity, resulting in
loss of critical information. This project aims to develop a robust image dehazing solution using a
Global Memory Attention Network (GMAN). The objective is to accurately classify input images as
either hazy or clear, enabling automated image enhancement. This will enhance visibility, improve
analysis accuracy, and facilitate better decision-making in various real-world scenarios.

1.5 Scope

The scope of this project encompasses the development and implementation of an image dehazing
model integrated with a Global Memory Attention Network (GMAN). This research will involve the
creation of a diverse dataset comprising various hazy and clear images, ensuring robust training and
evaluation of the model's performance. The project will focus on the application of advanced machine
learning techniques to enhance the visual quality of images, aiming for accuracy in classification and
effective feature extraction. Furthermore, the scope extends to exploring the model's applicability in
real-world scenarios, including outdoor photography, computer vision tasks, and automated
surveillance systems. By addressing the challenges associated with hazy images, the project aspires to
contribute to the broader field of image processing, offering solutions that can improve clarity and
usability in various applications. Ultimately, the project aims to pave the way for future research and
innovation in this area.

CHAPTER 2

LITERATURE REVIEW
2.1 Related Work

[1] Narasimhan S G, Nayar S K. Vision and the Atmosphere[J]. International Journal of Computer
Vision, 2002, 48(3):233-254.
The paper addresses the challenges faced by computer vision systems in adverse weather conditions like
haze, fog, rain, hail, and snow. It explores the visual effects of these conditions, leveraging knowledge of
atmospheric optics to enhance image analysis. By employing scattering models, the study develops methods
to recover three-dimensional structures and scene properties from images taken in poor visibility. It also
models the chromatic effects of atmospheric scattering, establishing geometric constraints on color changes
due to weather variations. Ultimately, algorithms are proposed for computing fog or haze color, depth
segmentation, and restoring "clear day" scene colors from affected images.
[2] Stark J A, Fitzgerald W J. An alternative algorithm for adaptive histogram equalization[J].
Graphical Models and Image Processing, 1996, 58(2): 180-185.
This paper introduces an adaptive image-contrast enhancement scheme based on a generalized approach to
histogram equalization (HE). While traditional HE improves image contrast, it can be overly aggressive for
certain applications. The proposed method utilizes a "cumulation function" to create grey level mappings
from local histograms, allowing for diverse effects by adjusting the function's form. This framework
facilitates discussions on various HE modifications. By varying one or two parameters, the method can
produce different levels of contrast enhancement, ranging from leaving the image unchanged to achieving
complete adaptive equalization.
[3] Stark J A. Adaptive image contrast enhancement using generalizations of histogram
equalization[J]. IEEE Transactions on Image Processing, 2000, 9(5): 889-896.
This paper proposes a scheme for adaptive image-contrast enhancement based on a generalization of
histogram equalization (HE). HE is a useful technique for improving image contrast, but its effect is too
severe for many purposes. However, dramatically different results can be obtained with relatively minor
modifications. A concise description of adaptive HE is set out, and this framework is used in a discussion of
past suggestions for variations on HE. A key feature of this formalism is a “cumulation function,” which is
used to generate a grey level mapping from the local histogram. By choosing alternative forms of
cumulation function one can achieve a wide variety of effects. A specific form is proposed. Through the
variation of one or two parameters, the resulting process can produce a range of degrees of contrast
enhancement, at one extreme leaving the image unchanged, at another yielding full adaptive equalization
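To make the adaptive equalization idea in [2] and [3] concrete, the short sketch below applies OpenCV's contrast-limited adaptive histogram equalization (CLAHE) to the lightness channel of an image. The file names and the clipLimit/tileGridSize values are illustrative placeholders, not settings taken from the cited papers.

# Illustrative CLAHE sketch; file names and parameters are placeholders.
import cv2

img = cv2.imread("hazy_sample.jpg")                       # BGR image
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)                # enhance lightness only
l, a, b = cv2.split(lab)

# clipLimit bounds local contrast amplification; tileGridSize sets the local regions
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)

enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("hazy_sample_clahe.jpg", enhanced)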
[4] Rahman Z, Jobson D J, Woodell G A. Multi-scale retinex for color image enhancement[C]//
Proceedings of the International Conference on Image Processing (ICIP). IEEE, 1996: 1003-1006, vol. 3.
In order to restore image color and enhance contrast of remote sensing image without suffering from color
cast and insufficient detail enhancement, a novel improved multi-scale retinex with color restoration
(MSRCR) image enhancement algorithm based on Gaussian filtering and guided filtering was proposed in
this paper. Firstly, multi-scale Gaussian filtering functions were used to deal with the original image to
obtain the rough illumination components. Secondly, accurate illumination components were acquired by
using the guided filtering functions. Then, combining with four-direction Sobel edge detector, a self-
adaptive weight selection nonlinear image enhancement was carried out. Finally, a series of evaluation metrics
such as mean, MSE, PSNR, contrast and information entropy were used to assess the enhancement
algorithm.

[5] R. T. Tan. Visibility in bad weather from a single image. In Proc. CVPR, 2008.
Bad weather conditions like fog and haze reduce visibility due to particles in the atmosphere that absorb and
scatter light. Traditional computer vision methods often require multiple input images under varying
conditions, which can be impractical. This study introduces a single-image automated method based on two
observations: clear-day images exhibit higher contrast and airlight varies smoothly with distance. A cost
function is developed using Markov random fields, allowing optimization through techniques like graph
cuts or belief propagation. This method is effective for both color and grayscale images and does not
necessitate geometrical information from the input image.
CHAPTER 3

SYSTEM ANALYSIS

3.1 Existing System

Existing systems for image dehazing typically employ traditional methods like dark channel prior,
histogram equalization, and guided filtering. These techniques rely on hand-crafted features and
assumptions about haze formation, often leading to suboptimal results in complex scenes. Recent
advancements have introduced machine learning approaches, primarily utilizing deep learning models
for haze removal. However, many of these models are limited in handling varying haze levels and
image conditions. Existing systems also struggle with real-time processing and require extensive
training data, which can hinder their applicability in dynamic environments, highlighting the need for
more efficient and effective solutions.
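For reference, the dark channel prior mentioned above can be sketched in a few lines of NumPy and OpenCV. This is a simplified, textbook-style version with no guided-filter refinement of the transmission map; the patch size, omega, and t0 values are common defaults rather than settings used or evaluated in this project.

# Minimal dark-channel-prior dehazing sketch (no transmission refinement).
# Patch size, omega and t0 are common textbook choices, not project settings.
import numpy as np
import cv2

def dehaze_dcp(img, patch=15, omega=0.95, t0=0.1):
    img = img.astype(np.float64) / 255.0
    kernel = np.ones((patch, patch), np.uint8)
    # Dark channel: per-pixel minimum over colour channels, then a local minimum filter
    dark = cv2.erode(img.min(axis=2), kernel)
    # Atmospheric light: mean colour of the brightest 0.1% of dark-channel pixels
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate and scene radiance recovery
    t = 1 - omega * cv2.erode((img / A).min(axis=2), kernel)
    t = np.clip(t, t0, 1)[..., None]
    J = np.clip((img - A) / t + A, 0, 1)
    return (J * 255).astype(np.uint8)

# Example: cv2.imwrite("dehazed.jpg", dehaze_dcp(cv2.imread("hazy.jpg")))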

3.2 Disadvantages

1. Limited Accuracy: Many traditional methods rely on heuristics and may not effectively handle
varying haze levels, leading to inconsistent results.

2. Computational Complexity: Some existing models are computationally intensive, requiring


significant processing power and time, which is not feasible for real-time applications.
3. Dependency on Prior Knowledge: Many techniques depend heavily on prior knowledge of the
scene, which may not always be available or accurate.
4. Noise Sensitivity: Existing systems often struggle with noise and can produce artifacts, degrading
image quality further.
5. Low Generalization: Many models perform poorly on unseen data or diverse environments,
limiting their applicability in real-world scenarios.

3.3 Proposed System:

The proposed system utilizes a Global Memory Attention Network (GMAN) for effective image
dehazing. GMAN employs a global attention mechanism to emphasize critical image areas, improving
classification performance. The system processes images through a series of layers that enhance
feature extraction and classification, culminating in a classification layer that predicts the image
quality. This architecture aims to provide accurate and efficient dehazing, significantly benefiting
applications in photography, surveillance, and automated vision systems.

3.4 Advantages:

1. Enhanced Image Clarity: Significantly improves the visual quality of hazy images, making them
clearer and more vibrant.
2. Accurate Classification: Utilizes the advanced GMAN architecture for precise identification of
hazy versus clear images.
3. Real-Time Processing: Capable of processing images quickly, making it suitable for applications
requiring instant feedback.
4. Versatile Applications: Applicable in various fields such as photography, surveillance, and
autonomous vehicles, enhancing visibility in adverse conditions.

5. User-Friendly: Simple interface allowing users to easily upload and classify images without
technical expertise.

CHAPTER 4
REQUIREMENT ANALYSIS

4.1 Functional and non-functional requirements

Requirements analysis is a critical process that determines whether a system or software
project will succeed. Requirements are generally split into two types: functional and non-functional
requirements.
4.1.1 Functional Requirements: These are the requirements that the end user specifically
demands as basic facilities that the system should offer. All these functionalities need to be
incorporated into the system as part of the contract. They are represented or stated in
the form of the input to be given to the system, the operation performed, and the output expected. They
are basically the requirements stated by the user, which one can see directly in the final product,
unlike the non-functional requirements.
Examples of functional requirements:
1) Authentication of user whenever he/she logs into the system
2) The system should allow the user to upload a hazy image and return the dehazed result.
3) A verification email is sent to the user whenever he/she registers for the first time on some
software system.
4.1.2 Non-functional requirements: These are basically the quality constraints that the
system must satisfy according to the project contract. The priority or extent to which these factors
are implemented varies from one project to another. They are also called non-behavioral requirements.
They basically deal with issues like:
 Portability
 Security
 Maintainability
 Reliability
 Scalability
 Performance
 Reusability
 Flexibility
Examples of non-functional requirements:
1) Emails should be sent with a latency of no greater than 12 hours from such an activity.
2) The processing of each request should be done within 10 seconds.
3) The site should load within 3 seconds when the number of simultaneous users is greater than 10,000.
4.2 Hardware Requirements
Operating system : Windows 7 or above
RAM : 8 GB
Hard disk or SSD : More than 500 GB
Processor : Intel 3rd generation or higher, or AMD Ryzen
4.3 Software Requirements:

Software : Python 3.10 or higher
IDE : Visual Studio Code
Framework : Flask
IDE/Workbench : PyCharm
Technology : Python 3.6+
Server Deployment : XAMPP Server
Database : MySQL
4.4 Architecture
CHAPTER 5
SYSTEM DESIGN
5.1 Introduction of Input Design:
In an information system, input is the raw data that is processed to produce output. During input
design, the developers must consider the input devices such as PC, MICR, OMR, etc. The
quality of system input therefore determines the quality of system output. Well-designed input forms and
screens have the following properties:

 It should serve a specific purpose effectively, such as storing, recording, and retrieving information.
 It ensures proper completion with accuracy.
 It should be easy to fill in and straightforward.
 It should focus on the user's attention, consistency, and simplicity.

All these objectives are achieved by applying knowledge of basic design principles to input design.

Objectives for Input Design:


The objectives of input design are −

 To design data entry and input procedures


 To reduce input volume
 To design source documents for data capture or devise other data capture methods
 To design input data records, data entry screens, user interface screens, etc.
 To use validation checks and develop effective input controls.

Output Design:

The design of output is the most important task of any system. During output design, developers
identify the type of outputs needed, and consider the necessary output controls and prototype report
layouts.

The objectives of output design are:

 To develop output designs that serve the intended purpose and eliminate the
production of unwanted output.

 To deliver the appropriate quantity of output.


 To form the output in appropriate format and direct it to the right person.
 To make the output available on time for making good decisions.

5.2 UML Diagrams:

UML stands for Unified Modelling Language. UML is a standardized general-purpose modelling
language in the field of object-oriented software engineering. The standard is managed, and was
created by, the Object Management Group.
The goal is for UML to become a common language for creating models of object-oriented computer
software. In its current form UML comprises two major components: a meta-model and a
notation. In the future, some form of method or process may also be added to, or associated with,
UML.
The Unified Modelling Language is a standard language for specifying, visualizing, constructing
and documenting the artefacts of software systems, as well as for business modelling and other non-
software systems.
The UML represents a collection of best engineering practices that have proven successful in the
modelling of large and complex systems.
The UML is a very important part of developing object-oriented software and the software
development process. The UML uses mostly graphical notations to express the design of software
projects.

5.3 USE CASE DIAGRAM


 A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram
defined by and created from a Use-case analysis.

 Its purpose is to present a graphical overview of the functionality provided by a system in


terms of actors, their goals (represented as use cases), and any dependencies between those
use cases.

 The main purpose of a use case diagram is to show what system functions are performed for
which actor. Roles of the actors in the system can be depicted.
CLASS DIAGRAM
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of
static structure diagram that describes the structure of a system by showing the system's classes,
their attributes, operations (or methods), and the relationships among the classes. It explains which
class contains which information.

SEQUENCE DIAGRAM
 A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram
that shows how processes operate with one another and in what order.

 It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called event
diagrams, event scenarios, and timing diagrams
DEPLOYMENT DIAGRAM
A deployment diagram represents the deployment view of a system. It is related to the component
diagram because the components are deployed using deployment diagrams. A deployment
diagram consists of nodes, which are nothing but the physical hardware used to deploy the
application.

ACTIVITY DIAGRAM:
Activity diagrams are graphical representations of workflows of stepwise activities and actions
with support for choice, iteration and concurrency. In the Unified Modelling Language, activity
diagrams can be used to describe the business and operational step-by-step workflows of
components in a system. An activity diagram shows the overall flow of control.
COMPONENT DIAGRAM:
A component diagram, also known as a UML component diagram, describes the organization and
wiring of the physical components in a system. Component diagrams are often drawn to help model
implementation details and double-check that every aspect of the system's required function is
covered by planned development.

ER DIAGRAM:
An Entity–relationship model (ER model) describes the structure of a database with the help of a
diagram, which is known as Entity Relationship Diagram (ER Diagram). An ER model is a design or
blueprint of a database that can later be implemented as a database. The main components of E-R
model are: entity set and relationship set.
An ER diagram shows the relationship among entity sets. An entity set is a group of similar entities
and these entities can have attributes. In terms of DBMS, an entity is a table or attribute of a table in
database, so by showing relationship among tables and their attributes, ER diagram shows the
complete logical structure of a database. Let’s have a look at a simple ER diagram to understand this
concept.

DFD DIAGRAM:
A Data Flow Diagram (DFD) is a traditional way to visualize the information flows within a system.
A neat and clear DFD can depict a good amount of the system requirements graphically. It can be
manual, automated, or a combination of both. It shows how information enters and leaves the system,
what changes the information and where information is stored. The purpose of a DFD is to show the
scope and boundaries of a system as a whole. It may be used as a communications tool between a
systems analyst and any person who plays a part in the system that acts as the starting point for
redesigning a system.

Context-Level (Level 0) DFD: data flows between the user, the dataset, and the system.

Level 1 Diagram

Level 2 Diagram
CHAPTER 6

IMPLEMENTATION AND RESULTS

6.1 Modules

1. Users can upload a dataset, which is a crucial initial step for the system to work with relevant data.
This dataset likely contains historical information or examples that the system will use for its
predictions.
2. Users have the capability to view the dataset they've uploaded. This feature helps users confirm
the data they've provided and ensures transparency in the process.
3. Users need to input specific values or parameters into the system to request predictions or results.
These input values likely correspond to the variables or features in the dataset.
6.2 System

 Take the Dataset: The system accepts and processes the dataset provided by the user. This
dataset forms the foundation for building the predictive model.
 Preprocessing: Before training a predictive model, the system preprocesses the dataset. This
includes handling missing data, data cleaning, and feature extraction. Preprocessing ensures that
the data is in a suitable format for modeling.
 Training: The system uses machine learning techniques and Python modules to train a model
based on the preprocessed dataset. The model learns patterns and relationships within the data,
allowing it to make predictions.
 Generate Results: Once the model is trained, the system can generate results based on user input
values. These results typically indicate whether the input data corresponds to a specific condition,
event, or prediction.
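A minimal sketch of this preprocess-and-train flow is given below. It assumes paired hazy and clear images stored under the static/dataset folders referenced in the code of Section 6.5; the simple two-layer model, batch size, and epoch count are placeholders rather than the exact configuration used to produce the reported results.

# Hypothetical tf.data training sketch for paired hazy/clear images.
# Folder names follow the paths used in the Flask code; the model and
# hyper-parameters are illustrative only.
import tensorflow as tf

IMG_SIZE = (384, 384)

def load_pair(hazy_path, clear_path):
    def _load(path):
        img = tf.io.read_file(path)
        img = tf.image.decode_jpeg(img, channels=3)
        img = tf.image.resize(img, IMG_SIZE, antialias=True)
        return img / 255.0
    return _load(hazy_path), _load(clear_path)

hazy_files = sorted(tf.io.gfile.glob("static/dataset/haze/*.jpg"))
clear_files = sorted(tf.io.gfile.glob("static/dataset/clear_images/*.jpg"))

train_ds = (tf.data.Dataset.from_tensor_slices((hazy_files, clear_files))
            .map(load_pair, num_parallel_calls=tf.data.AUTOTUNE)
            .batch(4)
            .prefetch(tf.data.AUTOTUNE))

# Any image-to-image model with matching input/output shapes can be plugged in here.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu",
                           input_shape=(*IMG_SIZE, 3)),
    tf.keras.layers.Conv2D(3, 3, padding="same", activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")
model.fit(train_ds, epochs=10)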

6.3 Algorithms:
6.3.1 GMAN:

The Global Memory Attention Network (GMAN) is a specialized neural network architecture
designed to address challenges in tasks such as image dehazing, object detection, and other computer
vision problems. Its innovative approach focuses on effectively managing and utilizing information
from both global context and local features within the input data.

Key Components of GMAN


 Global Memory Module: This component retains contextual information across different spatial
locations in an image, allowing the model to understand and utilize broader contextual features
rather than relying solely on local information.
 Attention Mechanism: GMAN employs attention mechanisms to selectively focus on important
features in the input data. This helps in enhancing relevant information while diminishing the
influence of irrelevant or noisy data.
 Multi-Scale Feature Extraction: GMAN utilizes multi-scale feature extraction to gather
information from various resolutions. This ensures that both fine details and broader contexts are
captured effectively.
 Fusion of Features: The model combines features from different levels (global and local) to
make more informed predictions, leading to improved accuracy and robustness in its outputs.

Working Mechanism
The working of GMAN can be broken down into several steps:
o Input Processing: The model receives an input image (or data) which is processed through a
series of convolutional layers to extract feature maps.
o Feature Extraction:
 Local Features: Extracted using standard convolutional operations.
 Global Features: Captured through pooling operations that reduce the spatial dimensions,
allowing the model to retain only the essential context.
o Attention Calculation: The attention mechanism calculates weights for different features based
on their importance. A typical formulation is a softmax over per-feature relevance scores:

\(a_i = \exp(f(x_i)) \big/ \sum_{j=1}^{n} \exp(f(x_j)), \quad i = 1, \dots, n\)

where:
 \(x\) is the input feature.
 \(f(x)\) is a function (often a neural network) that computes the relevance of each feature.
 \(n\) is the total number of features.
o Global Memory Update: The global memory is updated based on the attention scores,
allowing the model to incorporate important information from both local and global features:

\(M_t = \alpha M_{t-1} + \beta A_t\)

where:
 \(M_t\) is the updated memory at time \(t\).
 \(M_{t-1}\) is the previous memory state.
 \(A_t\) is the current attention-enhanced feature.
 \(\alpha\) and \(\beta\) are learnable parameters that control the contribution of past memory and
current features, respectively.

Output Generation: The final prediction is generated by combining the processed features
with the updated memory, often using fully connected layers or other classifiers.
Advantages
 Robustness: By leveraging global context, GMAN can handle variations in input data more
effectively.
 Efficiency: The attention mechanism allows the model to focus on relevant information, reducing
computational overhead.
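The sketch below shows one way the ingredients described above (a pooled global context, softmax attention weights \(a_i\), and the memory update \(M_t = \alpha M_{t-1} + \beta A_t\)) could be wired together in Keras. It is an interpretation of this chapter's description, not the exact architecture of the trained model.

# Hedged sketch of a global-memory attention block, following the description above.
import tensorflow as tf
from tensorflow.keras import layers

class GlobalMemoryAttention(layers.Layer):
    """Reweights local features with a softmax over a pooled global context and
    blends the result into a running memory: M_t = alpha * M_{t-1} + beta * A_t."""

    def __init__(self, channels, **kwargs):
        super().__init__(**kwargs)
        self.score = layers.Dense(channels)  # f(x): relevance score per channel
        self.alpha = self.add_weight(name="alpha", shape=(), initializer="ones")
        self.beta = self.add_weight(name="beta", shape=(), initializer="ones")

    def call(self, features, memory):
        # Global context: average-pool the spatial dimensions -> (batch, channels)
        context = tf.reduce_mean(features, axis=[1, 2])
        # Attention weights a_i = softmax(f(x))_i over channels
        weights = tf.nn.softmax(self.score(context), axis=-1)
        attended = features * weights[:, None, None, :]        # A_t
        return self.alpha * memory + self.beta * attended      # M_t

# Minimal usage: the attention block applied between two convolutional stages.
inputs = layers.Input(shape=(384, 384, 3))
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
x = GlobalMemoryAttention(64)(x, x)   # first step: memory initialised from the features
outputs = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)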

Results:
Epoch Training Loss Validation Loss Time Taken (s)
0 0.0303 0.0077 1304.77
1 0.0068 0.0063 1173.96
2 0.0061 0.0060 1176.28
3 0.0056 0.0058 1175.94
4 0.0055 0.0060 1172.94
5 0.0054 0.0055 1179.77
6 0.0053 0.0057 1181.57
7 0.0053 0.0055 1174.03
8 0.0051 0.0054 1183.21
9 0.0051 0.0053 1180.96

The table presents the training and validation losses recorded over ten epochs during model training.
Each row corresponds to a specific epoch, indicating the training loss, validation loss, and the time
taken for that epoch in seconds. Training loss measures how well the model fits the training data,
while validation loss evaluates its performance on unseen data, helping to monitor overfitting. Over
the epochs, both losses generally decrease, indicating that the model improves its performance. The
data also highlights the computational time required for each epoch, providing insights into training
efficiency and resource usage.
The diagram displays the training and validation loss over ten epochs, illustrating the model's
performance during training. The blue line represents the training loss, which significantly decreases
from epoch 0, indicating effective learning. The orange line shows the validation loss, which also
decreases but at a slower rate.
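For reference, a curve like the one described can be regenerated from the per-epoch values in the table with a few lines of Matplotlib:

# Plot the training/validation loss values reported in the table above.
import matplotlib.pyplot as plt

epochs = range(10)
train_loss = [0.0303, 0.0068, 0.0061, 0.0056, 0.0055,
              0.0054, 0.0053, 0.0053, 0.0051, 0.0051]
val_loss = [0.0077, 0.0063, 0.0060, 0.0058, 0.0060,
            0.0055, 0.0057, 0.0055, 0.0054, 0.0053]

plt.plot(epochs, train_loss, label="Training loss")
plt.plot(epochs, val_loss, label="Validation loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.savefig("loss_curve.png")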
CODE

from flask import Flask, url_for, redirect, render_template, request, session, send_from_directory
import mysql.connector, os
import pandas as pd
from sklearn.preprocessing import LabelEncoder
import numpy as np
import joblib
import tensorflow as tf
import matplotlib.pyplot as plt
from werkzeug.utils import secure_filename

app = Flask(__name__)
app.secret_key = 'admin'

# MySQL connection (XAMPP server, database "network")
mydb = mysql.connector.connect(
    host="localhost",
    user="root",
    password="",
    port="3306",
    database='network'
)

mycursor = mydb.cursor()

# Execute an INSERT/UPDATE query and commit the change
def executionquery(query, values):
    mycursor.execute(query, values)
    mydb.commit()
    return

# Run a parameterised SELECT query and return all rows
def retrivequery1(query, values):
    mycursor.execute(query, values)
    data = mycursor.fetchall()
    return data

# Run a SELECT query with optional parameters and return all rows
def retrivequery2(query, params=None):
    mycursor.execute(query, params)  # Use params in the execute statement
    data = mycursor.fetchall()
    return data
##########################################################

# Home Route
@app.route('/')
def home():
    return render_template('index.html', section='home', message="Welcome to the Home page!")

# Gallery Route
@app.route('/gallery')
def gallery():
    return render_template('index.html', section='gallery', message="Welcome to the Gallery!")

# About Route
@app.route('/about')
def about():
    return render_template('index.html', section='about', message="Learn more About us!")

# Contact (registration) Route
@app.route('/contact', methods=['GET', 'POST'])
def contact():
    if request.method == "POST":
        name = request.form['name']
        phone = request.form['phone']
        email = request.form['email']
        password = request.form['password']
        c_password = request.form['c_password']

        # Validate passwords
        if password != c_password:
            return render_template('index.html', section='contact', message="Confirm password does not match!")

        # Check if the email already exists
        query = "SELECT email FROM users WHERE email = %s"
        email_data = retrivequery2(query, (email,))

        # Create a list of existing emails
        email_data_list = [i[0] for i in email_data]

        if email in email_data_list:
            return render_template('index.html', section='contact', message="Email already exists!")

        # Insert new user into the database
        query = "INSERT INTO users (name, email, password, phone) VALUES (%s, %s, %s, %s)"
        values = (name, email, password, phone)  # Include phone number here
        executionquery(query, values)

        # Redirect to the login route instead of rendering
        return redirect(url_for('login', message="Successfully Registered!"))

    return render_template('index.html', section='contact')

# Login Route
@app.route('/login', methods=['GET', 'POST'])
def login():
    if request.method == "POST":
        email = request.form['email']
        password = request.form['password']
        query = "SELECT email FROM users"
        email_data = retrivequery2(query)
        email_data_list = []
        for i in email_data:
            email_data_list.append(i[0])

        if email in email_data_list:
            query = "SELECT name, password FROM users WHERE email = %s"
            values = (email, )
            password__data = retrivequery1(query, values)
            if password == password__data[0][1]:
                global user_email
                user_email = email
                name = password__data[0][0]
                session['name'] = name
                print(f"User name: {name}")
                return render_template('home.html', section='home', message=f"Welcome to Home page, {name}!")
        return render_template('index.html', section='login', message="Invalid credentials!")

    return render_template('index.html', section='login')

####################################################################
@app.route('/mainhome')
def mainhome():
    return render_template('home.html')

# @app.route('/upload', methods=["GET", "POST"])
# def upload():
#     if request.method == "POST":
#         files = request.files['files']
#         print(111111, files)
#         # file = request.files['file']
#         # df = pd.read_csv(file, encoding='latin1')
#         # df = df.to_html()
#         # return render_template('upload.html', df=df)
#     return render_template('upload.html')

# Path to the folder containing images
dataset_1 = r'static\dataset\clear_images'
# dataset_2 = r'static\dataset\haze'

@app.route('/view_data', methods=["GET", "POST"])
def view_data():
    if request.method == "POST":
        image_files_1 = [f for f in os.listdir(dataset_1) if f.endswith('.jpg')]
        # image_files_2 = [f for f in os.listdir(dataset_2) if f.endswith('.jpg')]
        return render_template('view_data.html',
                               image_files_1=image_files_1,
                               # image_files_2=image_files_2
                               )
    return render_template('view_data.html')

# Image Uploads
UPLOAD_FOLDER = 'static/uploads'
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER

# Ensure the uploads folder exists
os.makedirs(UPLOAD_FOLDER, exist_ok=True)

# Read an image from disk, resize it to the model's input size and scale it to [0, 1]
def load_and_preprocess_image(img_path):
    img = tf.io.read_file(img_path)
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, size=(384, 384), antialias=True)
    img = img / 255.0
    img = tf.expand_dims(img, axis=0)
    return img

# Load the trained model and run it on a single image
def predict_dehazed_image(model_path, img_path):
    # Load the model from the SavedModel format
    model = tf.saved_model.load(model_path)
    input_img = load_and_preprocess_image(img_path)
    dehazed_img = model(input_img)  # Assuming the model's call signature is correct
    return input_img, dehazed_img

@app.route('/prediction', methods=["GET", "POST"])
def prediction():
    result = None
    if request.method == "POST":
        myfile = request.files['image']
        filename = secure_filename(myfile.filename)
        mypath = os.path.join(app.config['UPLOAD_FOLDER'], filename)
        myfile.save(mypath)

        model_path = r'model/trained_model'  # Path to the exported SavedModel directory
        input_img, dehazed_img = predict_dehazed_image(model_path, mypath)

        # Save the input and dehazed images
        tf.keras.preprocessing.image.save_img(os.path.join(app.config['UPLOAD_FOLDER'], 'input_image.jpg'), input_img[0])
        tf.keras.preprocessing.image.save_img(os.path.join(app.config['UPLOAD_FOLDER'], 'dehazed_image.jpg'), dehazed_img[0])

        result = 1
    return render_template('prediction.html', result=result)

if __name__ == '__main__':
    app.run(debug=True)
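When the script above is run directly (for example as python app.py, assuming the file is saved under that name), Flask starts its development server at http://127.0.0.1:5000/ by default, where the registration, login, dataset viewing, and prediction pages described in the output screens section can be accessed.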
CHAPTER 7

SYSTEM STUDY AND TESTING

7.1 System Testing


The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality of
components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising
software with the intent of ensuring that the software system meets its requirements and user
expectations and does not fail in an unacceptable manner. There are various types of tests; each test
type addresses a specific testing requirement.

7.2 Unit testing


Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly, and that program inputs produce valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the
application and is done after the completion of an individual unit before integration. This is structural
testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at
component level and test a specific business process, application, and/or system configuration. Unit
tests ensure that each unique path of a business process performs accurately to the documented
specifications and contains clearly defined inputs and expected results.
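As a concrete example of a unit test for this project, the sketch below checks the output shape and value range of the load_and_preprocess_image helper from Section 6.5. It assumes the helper has been factored into an importable module (hypothetically named preprocessing.py here) and that a sample image exists at the placeholder path.

# Hedged unit-test sketch (pytest style) for the preprocessing helper in Section 6.5.
# Assumes the helper lives in an importable module, hypothetically "preprocessing.py",
# and that a sample image exists at the placeholder path below.
import tensorflow as tf
from preprocessing import load_and_preprocess_image

def test_preprocess_shape_and_range():
    img = load_and_preprocess_image("static/uploads/sample.jpg")
    assert img.shape == (1, 384, 384, 3)         # batch of one 384x384 RGB image
    assert float(tf.reduce_min(img)) >= 0.0      # pixel values scaled to [0, 1]
    assert float(tf.reduce_max(img)) <= 1.0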

7.2.2 Integration testing


Integration tests are designed to test integrated software components to determine if they actually run
as one program. Testing is event driven and is more concerned with the basic outcome of screens or
fields. Integration tests demonstrate that although the components were individually satisfactory, as
shown by successful unit testing, the combination of components is correct and consistent.
Integration testing is specifically aimed at exposing the problems that arise from the combination of
components.
Software integration testing is the incremental integration testing of two or more integrated software
components on a single platform to produce failures caused by interface defects.

The task of the integration test is to check that components or software applications, e.g. components
in a software system or, one step up, software applications at the company level, interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.

7.2.2.1 Acceptance Testing

User Acceptance Testing is a critical phase of any project and requires significant participation by
the end user. It also ensures that the system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.

7.2.3 Functional testing


Functional tests provide systematic demonstrations that functions tested are available as specified by
the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or special
test cases. In addition, systematic coverage pertaining to identified business process flows, data fields,
predefined processes, and successive processes must be considered for testing. Before functional
testing is complete, additional tests are identified and the effective value of current tests is
determined.

7.2.4 White Box Testing


White Box Testing is testing in which the software tester has knowledge of the inner
workings, structure and language of the software, or at least its purpose. It is used to
test areas that cannot be reached from a black-box level.
7.2.5 Black Box Testing
Black Box Testing is testing the software without any knowledge of the inner workings, structure or
language of the module being tested. Black box tests, as most other kinds of tests, must be written
from a definitive source document, such as a specification or requirements document. It is testing in
which the software under test is treated as a black box: you cannot “see” into it. The test provides
inputs and responds to outputs without considering how the software works.

Test objectives

 All field entries must work properly.

 Pages must be activated from the identified link.

 The entry screen, messages and responses must not be delayed.

Features to be tested

 Verify that the entries are of the correct format

 No duplicate entries should be allowed

 All links should take the user to the correct page.


 Test cases – Model building:

S.No | Test case | I/O | Expected O/T | Actual O/T | P/F
1 | Read the dataset's path | Dataset needs to be read | Dataset read successfully | Dataset fetched successfully | P (F if not)
2 | Registration | Valid username, email, password | Verify that the registration form accepts valid user inputs and successfully creates a new account | User is successfully registered and an account is created | P (F if not)
3 | Login | Valid username and password | Verify that users can log in with valid credentials | User is successfully logged in and redirected to the dashboard | P (F if not)
4 | New dehazed image | Input related to the dataset | Output as predicted new dehazed image | Output as predicted new dehazed image | P (F if not)
6.4 OUTPUT SCREENS:

HOME PAGE:

This interface enables users to register and log in, and provides access to the model home page and
the prediction and training pages.

ABOUT PAGE:

This page describes the project, which uses machine learning models for image dehazing,
emphasizing interpretability and accuracy.


Gallery page:

REGISTRATION:

This page allows users to register for services, ensuring secure access by requiring personal
details and password confirmation. It provides a user-friendly interface for creating a secure
account.
Login Page

This page provides a secure login interface for users to access the prediction account using their
email and password.

Home page

This interface enables users to select dataset uploads, model training, prediction, and evaluation to
achieve precise predictive outcomes.
Upload page

This page allows users to upload datasets for prediction, enabling model training and evaluation
for accurate results.

Prediction page

This page collects the user's input (a hazy image) and returns the predicted dehazed output.
CHAPTER 8

CONCLUSION

In conclusion, the integration of the Global Memory Attention Network (GMAN) for image
dehazing presents a significant advancement in enhancing the visual quality of hazy images. By
leveraging a global attention mechanism, the model effectively focuses on essential features,
leading to improved classification accuracy between hazy and clear images. The diverse dataset
used for training and evaluation ensures that the model is robust and adaptable to various real-world
scenarios. This approach not only enhances visibility in outdoor photography but also contributes to
critical applications in computer vision and automated surveillance systems. Ultimately, the
successful implementation of this model promotes advancements in image processing and analysis,
providing users with a reliable tool to assess and improve image quality. As the demand for high-
quality visual content continues to grow, this research underscores the importance of innovative
solutions like GMAN in the field of image enhancement and classification.
CHAPTER 9
FUTURE ENHANCEMENT
Future enhancements for the image dehazing model integrated with the Global Memory Attention
Network (GMAN) could focus on optimizing the model for real-time applications. This includes
refining the architecture to reduce computational complexity and increase processing speed
without compromising accuracy, enabling deployment in mobile devices and embedded systems.
Additionally, exploring the integration of generative adversarial networks (GANs) could further
enhance image quality by learning to generate more realistic and visually appealing outputs.
Expanding the dataset to include a wider variety of environmental conditions and lighting
scenarios will improve the model's robustness and generalizability. Incorporating additional image
enhancement techniques, such as contrast adjustment and noise reduction, may also enhance the
overall visual quality of the output. Finally, implementing user-friendly interfaces for both desktop
and mobile platforms can facilitate accessibility, allowing a broader audience to leverage this
technology in various practical applications, such as outdoor photography, surveillance, and
autonomous driving.
REFERENCES

[1] Narasimhan S G, Nayar S K. Vision and the Atmosphere[J]. International Journal of Computer
Vision, 2002, 48(3):233-254.

[2] Stark J A, Fitzgerald W J. An alternative algorithm for adaptive histogram equalization[J].


Graphical Models and Image Processing, 1996, 58(2): 180-185.

[3] Stark J A. Adaptive image contrast enhancement using generalizations of histogram


equalization[J]. IEEE Transactions on Image Processing, 2000, 9(5): 889-896.

[4] Rahman Z, Jobson D J, Woodell G A. Multi-scale retinex for color image enhancement[C]//
Proceedings of the International Conference on Image Processing (ICIP). IEEE, 1996: 1003-1006, vol. 3.
[5] R. T. Tan. Visibility in bad weather from a single image. In Proc. CVPR, 2008

[6] R. Fattal. Single image dehazing. ACM Transactions on Graphics, 27(3), 2008

[7] K. He, J. Sun, and X. Tang. Single image haze removal using dark channel prior. In Proc. CVPR,
2009.

[8] K. He, J. Sun, and X. Tang. Single image haze removal using dark channel prior. IEEE TPAMI
33(12): 2341-2353, 2011.

[9] K. He, J. Sun, and X. Tang. Guided image filtering. In Proc. ECCV, pages 1-14, 2010.

[10] K. He, J. Sun and X. Tang. Guided image filtering. IEEE TPAMI, 35(6): 1397-1409, 2013.
EVALUATION RUBRICS FOR PROJECT WORK:

Rubric (CO): Excellent (Wt = 3) / Good (Wt = 2) / Fair (Wt = 1)

Selection of Topic (CO1): Excellent – Select a latest topic through complete knowledge of facts and concepts. / Good – Select a topic through partial knowledge of facts and concepts. / Fair – Select a topic through improper knowledge of facts and concepts.

Analysis and Synthesis (CO2): Excellent – Thorough comprehension through analysis/synthesis. / Good – Reasonable comprehension through analysis/synthesis. / Fair – Improper comprehension through analysis/synthesis.

Problem Solving (CO3): Excellent – Thorough comprehension about what is proposed in the literature papers. / Good – Reasonable comprehension about what is proposed in the literature papers. / Fair – Improper comprehension about what is proposed in the literature.

Literature Survey (CO4): Excellent – Extensive literature survey with standard references. / Good – Considerable literature survey with standard references. / Fair – Incomplete literature survey with substandard references.

Usage of Techniques & Tools (CO5): Excellent – Clearly identified and has complete knowledge of techniques & tools used in the project work. / Good – Identified and has sufficient knowledge of techniques & tools used in the project work. / Fair – Identified and has inadequate knowledge of techniques & tools used in the project work.

Project work impact on Society (CO6): Excellent – Conclusion of project work has strong impact on society. / Good – Conclusion of project work has considerable impact on society. / Fair – Conclusion of project work has feeble impact on society.

Project work impact on Environment (CO7): Excellent – Conclusion of project work has strong impact on environment. / Good – Conclusion of project work has considerable impact on environment. / Fair – Conclusion of project work has feeble impact on environment.

Ethical attitude (CO8): Excellent – Clearly understands ethical and social practices. / Good – Moderate understanding of ethical and social practices. / Fair – Insufficient understanding of ethical and social practices.

Independent Learning (CO9): Excellent – Did literature survey and selected topic with a little guidance. / Good – Did literature survey and selected topic with considerable guidance. / Fair – Selected a topic as suggested by the supervisor.

Oral Presentation (CO10): Excellent – Presentation in logical sequence with key points, clear conclusion and excellent language. / Good – Presentation with key points, conclusion and good language. / Fair – Presentation with insufficient key points and improper conclusion.
Title of the project: IMAGE DEHAZING USING GMAN ALGORITHM

Name of the students: K JYOTHEESH 21751A0452


K CHARAN 21751A0455
K VENU 21751A0456

P VARUN KUMAR 21751A04A0

Name of the Guide & Designation: Dr. C. KAVITHA, M.E, Ph.D.,

TABLE 1: OUTCOME ATTAINED AND ITS JUSTIFICATION

PO Justification

PO1 The knowledge of image dehazing and deep learning techniques was gained through this project work.

PO2 Analyzed the problems of haze-degraded image quality and its effect on visibility and computer vision tasks.

PO3 Designed an image dehazing model based on the GMAN architecture.

PO4 We used research-based data to provide valid conclusions.

PO5 We implemented our work with appropriate techniques, good resources and modern engineering tools to uplift the project. Python, TensorFlow and Flask were used to build and deploy the model.

PO6 We designed our project to improve visibility in applications such as outdoor photography and surveillance, benefiting society.

PO7 This solution enhances image clarity and classification accuracy, creating value in real-world applications.

PO8 We followed the ethical principles.

PO9 We worked on this project, functioning effectively as members of the project team.

PO10 Oral and written communication skills were improved while planning, implementing and executing the entire project, up to the submission of the report.

PO11 We demonstrated our knowledge and understanding of the cost and time analysis required for carrying out the project.

PO12 Facilitated ourselves in lifelong learning to improve technical knowledge and competence in the chosen area of the project.
