Intelligent Wildlife Deterrent System

This document outlines a project focused on developing an intelligent wild-animal deterrent system to protect crops from wildlife intrusion using deep learning techniques. The system employs various convolutional neural network models for accurate animal detection and triggers ultrasonic alarms to repel animals without harm, while also sending real-time notifications to farmers. The project aims to enhance agricultural productivity and promote sustainable coexistence between agriculture and wildlife.

Front page
Bonafide page i
Declaration page ii
Acknowledgement page iii

ABSTRACT

The protection of crops from damage by wild animals poses a significant challenge for farmers worldwide. To address this issue while promoting non-violent coexistence with wildlife, this project focuses on the development and implementation of an intelligent wild-animal deterrent system. We employed various deep learning techniques, training several convolutional neural network models, including VGG16, ResNet50, DenseNet121, EfficientNetB0, EfficientNetB1, EfficientNetB2, Xception, InceptionV3, MobileNetV2, NASNetMobile, and NASNetLarge, using transfer learning and fine-tuning, and by integrating night-vision images into a dataset comprising 13 wild animal classes commonly associated with farmland intrusion. Upon detection of an animal, the system triggers an ultrasonic alarm to repel the specific animal without causing harm. Furthermore, real-time notifications containing an image of the detected animal, its type, and a timestamp are sent to the farmer's mobile device, enabling prompt action to protect crops.

In the pursuit of effective crop protection, this project emphasizes both accuracy and humane deterrence. The use of deep learning, specifically the NASNetLarge model, ensures robust animal detection, even in the challenging lighting conditions commonly encountered in agricultural environments. The inclusion of night-vision images enriches the dataset, enhancing the model's ability to discern animals accurately. The integration of an ultrasonic alarm system offers a non-invasive method of repelling animals, minimizing crop damage without resorting to physical harm. Moreover, the real-time notifications empower farmers with timely information, enabling swift responses to potential threats. Overall, this project represents a comprehensive solution for mitigating crop damage by wild animals while fostering sustainable coexistence between agriculture and wildlife.

TABLE OF CONTENTS

CHAPTER NO    TITLE    PAGE NO

ABSTRACT IV

LIST OF FIGURES VII

LIST OF ABBREVIATIONS VIII

LIST OF TABLES IX

1 INTRODUCTION 1
1.1 Overview 1
1.2 Wildlife Intrusion and Agricultural Damage 2
1.3 Traditional Deterrents and Limitations 3
1.4 The Need for Innovative Humane Solutions 7
1.5 Objectives of The Project 10
1.6 Organization of The Report 11

2 LITERATURE SURVEY 12

3 SYSTEM ANALYSIS 19
3.1 Existing System 19
3.2 Proposed System 20

4 SYSTEM REQUIREMENTS 21
4.1 Software Requirements 21
4.2 Hardware Requirements 21
4.3 About the Software 22

5 SYSTEM DESIGN 28
5.1 System Architecture 28
5.2 Use Case Diagram 28
5.3 Data Flow Diagram 30
5.4 Modules and Functionalities 32
5.5 Algorithms and Techniques 38

6 EXPERIMENTAL RESULTS 44
6.1 Results and Discussion 44
6.2 Performance Measures 45

7 CONCLUSION AND FUTURE WORK 49

APPENDIX A - Screenshots 50

APPENDIX B – Source code 53

REFERENCES 90

LIST OF FIGURES

FIGURE NO    TITLE    PAGE NO

1.3.1 Electric Fence 4

1.3.2 Barbed Wire Fence 4

1.3.3 Mesh Fence 5

1.3.4 Scarecrow 6

5.1.1 System Architecture 28

5.2.1 Use Case Diagram 29

5.3.1 Level 0 DFD 30

5.3.2 Level 1 DFD 31

5.3.3 Level 2 DFD 31

5.5.1 NASNetLarge Architecture 40

6.1.1 Experimental Results 44

6.2.1 Precision, Recall, F1 Score 47

6.2.2 Confusion Matrix 48

LIST OF ABBREVIATIONS

DCNN Deep Convolutional Neural Network

SGD Stochastic Gradient Descent

DFD Data Flow Diagram

LIST OF TABLES

TABLE NO    NAME OF THE TABLE    PAGE NO

5.4.1 Model Training Results 34

CHAPTER 1

INTRODUCTION

1.1 OVERVIEW

The "Wild Animals Deterrent System for Crop Protection" project represents a pioneering endeavor in the realm of agricultural technology, harnessing the power of deep convolutional neural networks (DCNN) to address the longstanding issue of wildlife intrusion in farm environments. By employing a sophisticated classification algorithm, this system endeavors to accurately detect a diverse range of animal species depicted in images captured within agricultural landscapes. Leveraging advanced techniques such as transfer learning, fine-tuning, and night vision simulation, the system enhances its
detection capabilities, ensuring robust performance across varying lighting
conditions and environmental contexts. Upon identifying an intruding animal,
the system deploys specific sound signals tailored to each species, effectively
repelling them from the crops while prioritizing non-invasive and humane
methods of wildlife management. Additionally, real-time notifications
containing images of the detected animal, along with its type and timestamp, are
sent to the farmer's mobile device, enabling prompt action and informed
decision-making. This innovative approach not only mitigates crop damage but
also fosters harmonious coexistence between agriculture and wildlife, aligning
with the principles of sustainable farming practices. Through its integration of
deep learning technology, intelligent sound emission, and real-time
communication with farmers, the Wild Animals Deterrent System emerges as a
vital tool in enhancing agricultural productivity, resilience, and environmental
stewardship in the face of wildlife-related challenges.

1.2 WILDLIFE INTRUSION AND AGRICULTURAL DAMAGE

Wildlife intrusion into agricultural lands has emerged as a significant challenge for farmers globally, resulting in substantial economic losses and
posing threats to food security. Various wild animals, including deer, wild
boars, monkeys, and elephants, frequently invade farmlands in search of food,
causing extensive damage to crops. This intrusion disrupts the growth cycle of
plants, leading to reduced yields and financial strain on farmers. For instance,
elephants can trample entire fields, while smaller animals like boars and
monkeys can uproot plants and consume valuable produce. The impact is not
limited to crop damage; the presence of wildlife in farmlands also causes
secondary issues such as soil degradation and the spread of diseases.
Additionally, the unpredictability of these intrusions makes it challenging for
farmers to plan their cultivation and harvest schedules effectively. Traditional
methods of deterring wildlife, such as fencing and scarecrows, have proven to
be insufficient and often expensive to maintain. These methods also fail to
address the root cause of the problem and can sometimes harm the animals,
leading to ethical concerns. The financial burden on farmers is further
exacerbated by the costs associated with repairing damaged infrastructure and
replanting lost crops. Moreover, the psychological stress faced by farmers due
to the constant threat of wildlife intrusion cannot be underestimated. This issue
is particularly acute in regions where agriculture is the primary source of
livelihood, and the loss of crops directly impacts the socio-economic stability of
farming communities. In response to these challenges, there is an urgent need
for innovative, humane, and cost-effective solutions that can protect crops
without harming the wildlife. Implementing advanced technologies, such as
deep learning for wildlife detection and IoT-based deterrent systems, offers a
promising path forward. These technologies can provide real-time monitoring
and response mechanisms, enabling farmers to safeguard their crops more
efficiently. By addressing wildlife intrusion through such advanced methods, we
can move towards a sustainable coexistence between agriculture and wildlife,
ensuring both food security and biodiversity conservation.

1.3 TRADITIONAL DETERRENTS AND LIMITATIONS

Traditional methods for deterring wildlife from agricultural areas have been in use for many years, with a variety of techniques employed to keep
animals away from crops. These methods generally include physical barriers,
visual deterrents, and chemical repellents. While each of these techniques has
been used with varying degrees of success, they often fall short in effectively
managing wildlife intrusion due to the intelligence and adaptability of many
animal species. Understanding the types of traditional deterrents and their
limitations is crucial for evaluating their effectiveness and for identifying areas
where modern technologies could offer improvements.

1.3.1 Fencing:

Fencing is one of the most commonly used traditional methods for protecting crops from wildlife. It functions as a physical barrier designed to
prevent animals from accessing agricultural fields. Different types of fencing
have been developed to address various challenges, and their effectiveness can
vary based on the type used, the species targeted, and the specific environmental
conditions.

 Electric Fences: These fences use an electric current to deter animals by delivering a mild shock upon contact. They are designed to make the animals associate the fence with discomfort, thus discouraging them from crossing it. Electric fences are often used to manage species such as deer and wild boars.

FIGURE 1.3.1 ELECTRIC FENCE
 Barbed Wire Fences: Barbed wire fences are constructed with
sharp wire strands intended to cause discomfort or injury if animals attempt to
breach the barrier. They are generally used for larger animals like cattle and
deer, and are noted for their ability to withstand physical pressure.

FIGURE 1.3.2 BARBED WIRE FENCE

 Mesh Fences: Made from various mesh sizes, these fences aim to
prevent animals from passing through by creating a physical obstruction. Mesh
fences can be tailored in height and mesh size to accommodate different types
of animals, including smaller species.

FIGURE 1.3.3 MESH FENCE

Despite their widespread use, traditional fencing methods have several limitations. Electric fences require regular maintenance to ensure that the
electric current remains effective, as animals can sometimes break through or
bypass the fence if it is not properly maintained. Barbed wire fences, while
durable, may still be breached by particularly strong or persistent animals, and
can also pose risks of injury to both animals and humans. Mesh fences, though
effective against smaller animals, often require additional reinforcement to be
effective against larger species and may suffer from wear and tear over time.

1.3.2 Scarecrows:

Scarecrows have been used for centuries as visual deterrents to keep birds and smaller animals away from crops. They work by mimicking human
presence or creating visual disturbances that are intended to scare away wildlife.
Although they are simple and cost-effective, scarecrows face significant
challenges in maintaining their deterrent effect. These are fixed in place and
designed to resemble human figures or other deterrents. They are generally
simple constructions made from materials like straw or fabric.

FIGURE 1.3.4 SCARECROW

The primary limitation of scarecrows is their tendency to become less effective over time. Wildlife, particularly intelligent and adaptive species, can
become habituated to the presence of scarecrows, learning that they pose no real
threat. As a result, the deterrent effect diminishes, and the scarecrows need to be
regularly updated or repositioned to retain their efficacy.

1.3.3 Chemical Repellents:

Chemical repellents are designed to make agricultural areas unappealing to wildlife by using substances that either simulate the presence of predators or
cause irritation. These repellents can offer temporary protection but come with
their own set of challenges.

 Predator Urine: This method involves spreading the scent of predators in the environment to deter prey animals. It relies on the animals' natural fear of predators to keep them away from crops.
 Irritant Chemicals: These chemicals cause discomfort or irritation
when animals come into contact with them, thereby discouraging them from
entering treated areas.

Chemical repellents often require frequent reapplication, especially after rain or other weather conditions, to remain effective. This necessity increases
labor and costs. Additionally, some chemicals can be harmful to non-target
species and the environment, raising concerns about their overall impact.
Furthermore, wildlife may develop tolerance to specific chemicals over time,
reducing their effectiveness. Traditional wildlife deterrents such as fencing,
scarecrows, and chemical repellents have been used to manage wildlife
intrusion in agricultural areas. Each method has specific types and applications,
but they also face notable limitations, especially with intelligent and adaptable
wildlife. Understanding these traditional approaches' advantages and challenges
is essential for developing more effective and humane strategies for wildlife
management.

1.4 THE NEED FOR INNOVATIVE AND HUMANE SOLUTIONS

As agricultural practices evolve and the pressure on farmland increases, the need for effective, humane solutions to manage wildlife intrusion has
become more critical. Traditional deterrents, while offering some level of
protection, often fall short in addressing the complexities of wildlife behavior
and the environmental impact of these methods. Therefore, there is a growing
necessity for innovative approaches that not only enhance effectiveness but also
adhere to principles of humane treatment towards wildlife.

Modern agriculture faces numerous challenges, with wildlife intrusion being a significant issue. Wildlife damage to crops can result in substantial
economic losses for farmers, disrupt agricultural productivity, and negatively
impact local ecosystems. Traditional methods, such as fencing, scarecrows, and
chemical repellents, have been widely used to address these problems.
However, these solutions are often limited in their effectiveness due to the
adaptability and intelligence of many wildlife species. For instance, animals

7
such as deer and wild boars have demonstrated remarkable adaptability to
traditional deterrents. Fencing, while providing a physical barrier, may be
breached by persistent or determined animals, especially if the fence is not
properly maintained. Scarecrows, although effective initially, tend to lose their
deterrent effect as animals become accustomed to them. Chemical repellents,
while effective to some extent, often require frequent reapplication and can
have adverse effects on the environment and non-target species.

To overcome these challenges, there is a need for innovative solutions that combine advanced technology with humane principles. One promising area
of development is the integration of deep learning and computer vision
technologies. These technologies enable real-time detection and identification
of wildlife, allowing for a more targeted and precise approach to deterrence. For
example, deep learning models trained to recognize specific animal species can
trigger tailored deterrent measures that are more effective and less likely to
cause harm. Another innovative approach involves the use of non-invasive
deterrents, such as ultrasonic alarms and motion-activated devices. These
technologies can be designed to emit sound frequencies or visual signals that
are disruptive to wildlife but do not cause physical harm. By leveraging
technology to create adaptive and dynamic deterrent systems, farmers can
enhance their ability to manage wildlife intrusion without resorting to harmful
or ineffective methods.
Ensuring humane treatment of wildlife is a critical consideration in the
development of new deterrent solutions. Traditional methods often have
unintended consequences, such as causing injury or stress to animals, which can
lead to ethical concerns and negative environmental impacts. Innovative
solutions must prioritize the well-being of wildlife while effectively addressing
the issues faced by farmers. Humane deterrence strategies focus on minimizing harm and stress to animals. For instance, ultrasonic deterrents can be designed
to be unpleasant to wildlife without causing physical harm. Similarly, visual
deterrents can be made to simulate predator presence or other threats in a way
that discourages animals from entering agricultural areas but does not cause
injury.

Implementing innovative and humane solutions for wildlife management offers several benefits. Firstly, these approaches can improve the effectiveness
of deterrence by addressing the limitations of traditional methods. Real-time
detection and targeted deterrents ensure that interventions are timely and
relevant to the specific wildlife species involved. This precision increases the
likelihood of successful deterrence and reduces the potential for damage to
crops. Secondly, humane solutions contribute to ethical wildlife management
practices. By avoiding physical harm and stress, these methods align with
principles of animal welfare and conservation. This alignment is not only
important for ethical reasons but also helps build positive relationships between
farmers and conservationists. Finally, the integration of advanced technologies
and humane approaches can lead to more sustainable and long-term solutions
for wildlife management. By focusing on innovation and humaneness, farmers
can better manage wildlife intrusion while preserving the health of local
ecosystems and maintaining ethical standards.

The need for innovative and humane solutions in wildlife management is driven by the limitations of traditional methods and the growing challenges
faced by modern agriculture. By embracing advanced technologies and
prioritizing the well-being of wildlife, it is possible to develop effective
deterrent systems that address both practical and ethical concerns.

1.5 OBJECTIVES OF THE PROJECT:

The objectives of the "Wild Animals Deterrent System for Crop Protection" project are aimed at addressing the challenges posed by wildlife
intrusion in agricultural settings and developing innovative solutions to enhance
crop protection and promote coexistence between agriculture and wildlife. The
primary objective is to design and implement a robust Deep Convolutional
Neural Network (DCNN) classification algorithm capable of accurately
detecting various species of wildlife depicted in images captured within
agricultural landscapes. Additionally, the project aims to integrate sound
emission technology into the system to emit specific sound signals tailored to
each species of wildlife detected, serving as a non-invasive and humane method
of wildlife repellent. Furthermore, enabling real-time notification and
communication between the system and farmers or agronomists is a key
objective, empowering farmers to make informed decisions and undertake
timely actions to protect their crops. The project also seeks to conduct thorough
assessments and evaluations of the system's performance, refining and
optimizing it for maximum effectiveness and practicality. Ultimately, by
promoting sustainable agriculture practices and reducing reliance on harmful
control measures, the project contributes to the preservation of biodiversity and
ecosystem health while ensuring agricultural productivity and livelihoods.

1.6 ORGANIZATION OF THE REPORT:

Chapter 1 has the overview about the project and introduction to the
project along with summary. Chapter 2 deals with the literature survey of the
related applications along with the summary. Chapter 3 explains the overview
of the proposed system, existing system, disadvantages of existing system,
along with its summary. Chapter 4 deals with the system software and its
requirements. Chapter 5 proposes the overview of the project design, system
architecture design and data flow, implementation, list of modules with its
description. Chapter 6 gives the experimental results of how the output is
obtained and the performance of the project. Chapter 7 presents the conclusion and future work.

CHAPTER 2

LITERATURE SURVEY

[1] Creating Alert Messages Based on Wild Animal Activity Detection Using Hybrid Deep Neural Networks
Author Name: B. Natarajan, R. Elakkiya, R. Bhuvaneswari, Kashif
Saleem, Dharminder Chaudhary, Syed Husain Samsudeen
Published Year: 26 June 2023

Abstract
The issue of animal attacks is increasingly concerning for rural populations
and forestry workers. To track the movement of wild animals, surveillance cameras
and drones are often employed. However, an efficient model is required to detect the
animal type, monitor its locomotion and provide its location information. Alert
messages can then be sent to ensure the safety of people and foresters. While
computer vision and machine learning-based approaches are frequently used for
animal detection, they are often expensive and complex, making it difficult to
achieve satisfactory results. This paper presents a Hybrid Visual Geometry Group
(VGG)−19+ Bidirectional Long Short-Term Memory (Bi-LSTM) network to detect
animals and generate alerts based on their activity. These alerts are sent to the local
forest office as a Short Message Service (SMS) to allow for immediate response. The
proposed model exhibits great improvements in model performance, with an average
classification accuracy of 98%, a mean Average Precision (mAP) of 77.2%, and a
Frame Per Second (FPS) of 170. The model was tested both qualitatively and
quantitatively using 40,000 images from three different benchmark datasets with 25
classes and achieved a mean accuracy and precision of above 98%. This model is a
reliable solution for providing accurate animal-based information and protecting
human lives.

[2] Smart Agriculture Land Crop Protection Intrusion Detection
Using Artificial Intelligence
Author Name: Kiruthika S, Sakthi P, Sanjay K, Vikraman N, Premkumar
T, Yoganantham R, and Raja M
Published Year: July 2023

Abstract
Human-wildlife conflict is the term used to describe when human activity
results in a negative outcome for people, their resources, wild animals, or their
habitat. Human population growth encroaches on wildlife habitat, resulting in a
decrease in resources. In particular habitats, there are numerous forms of human and
domesticated animal death or injury as a result of conflict. Farmers and the animals
that invade farmland suffer greatly as a result. Our project’s primary objective is to
lessen human-animal conflict and loss. The embedded system and image processing
technique are utilized in the project. Python is used to perform image processing
techniques like segmentation, statistical and feature extraction using expectation
maximization, and classification using CNN. The classification is used to determine
whether the land is empty or if animals are present. A buzzer sound is produced, a
light electric current is passed to the fence, and a message alerting the farmer to the
animal’s entry into the farmland is transmitted. This prevents the animal from
entering the field and enables the landowner to take the necessary steps to get the
animal back to the forest. The result is sent serially to the controller from the control board.

[3] An Accurate and Fast Animal Species Detection System for
Embedded Devices
Author Name: Mai Ibraheam, Kin Fun Li, Fayez Gebali
Published Year: 03 March 2023

Abstract
Encounters between humans and wildlife often lead to injuries, especially in
remote wilderness regions, and highways. Therefore, animal detection is a vital
safety and wildlife conservation component that can mitigate the negative impacts of
these encounters. Deep learning techniques have achieved the best results compared
to other object detection techniques; however, they require many computations and
parameters. A lightweight animal species detection model based on YOLOv2 was
proposed. It was designed as a proof of concept and as a first step toward building a real-
time mitigation system with embedded devices. Multi-level features merging is
employed by adding a new pass-through layer to improve the feature extraction
ability and accuracy of YOLOv2. Moreover, the two repeated 3×3 convolutional
layers in the seventh block of the YOLOv2 architecture are removed to reduce
computational complexity, and thus increase detection speed without reducing
accuracy. Animal species detection methods based on regular Convolutional Neural
Networks (CNNs) have been widely applied; however, these methods are difficult to
adapt to geometric variations of animals in images. Thus, a modified YOLOv2 with
the addition of deformable convolutional layers (DCLs) was proposed to resolve this
issue. Our experimental results show that the proposed model outperforms the
original YOLOv2 by 5.0% in accuracy and 12.0% in speed. Furthermore, our
analysis shows that the modified YOLOv2 model is more suitable for deployment
than YOLOv3 and YOLOv4 on embedded devices.

[4] Real Time Protection of Farmlands from Animal Intrusion
Author Name: R Sumathi, P Raveena, P Rakshana, P Nigila, P
Mahalakshmi
Published Year: 18 August 2022

Abstract
Crop vandalization by animals is becoming an area of concern nowadays. When an animal enters the land, farmers lose their crops, property, and livestock. It erodes the time and efforts of farmers. They are also affected economically due to the loss of crops. Conflicts between humans and animals keep putting lives in danger. Methods like electrocution cause intense pain to animals, sometimes leading to their death. An effective system for preventing animal intrusion is increasingly necessary. To address this problem, we implement a system that provides real-time, accurate, and adaptive visibility of farmlands. Surveillance of farmlands is carried out and, when animals are encountered, they are categorized using the YOLO algorithm and corrective actions are taken depending on the type of intruder present. Finally, farmers and forest officials are supplied with geo-locations and images of the intruder. If the presence of animals is still detected after a few seconds, strong repellents are used as a backup. As a result, the proposed technology successfully drives away animals without killing them and reduces human-animal conflict, as it does not require human participation.

[5] Animal Intrusion Detection in Farming Area using YOLOv5
Approach
Author Name: Normaisharah Mamat, Mohd Fauzi Othman, Fitri Yakub
Published Year: 09 January 2023

Abstract
Animal intrusion in the farming area causes significant losses in agriculture.
It threatens not only the safety of farmers but also contributes to crop damage.
Providing effective solutions for human-animal conflict is now one of the most
significant challenges all over the world. Therefore, early detection of animal
intrusion via automated methods is essential. Recent deep learning-based methods
have become popular in solving these problems by generating high detection ability.
In this study, the YOLOv5 method is proposed to detect four categories of animals
commonly involved in farming intrusion areas. YOLOv5 can generate high accuracy
in detection using cross stage partial network (CSP) as a backbone. This network is
employed to extract the beneficial characteristics from an input image. The results of
the implementation of this method show that it can detect animal intrusion very
effectively and improve the accuracy of detection by nearly 94% mAP. The results
demonstrate that the proposed models meet and reach state-of-the-art results for these
problems.

[6] Development of Animal-Detection System using Modified CNN
Algorithm
Author Name: Sheik Mohammed. S, T. Sheela, T. Muthumanickam
Published Year: 16 January 2023

Abstract
In the present scenario, almost all crop cultivation in farmlands is likely to be damaged by the intrusion of animals such as wild boars, elephants, buffaloes, and birds. This causes huge losses to farmers, yet it is quite impossible to stay alert in the farm field around the clock to protect the crops. To surmount this problem, a prototype for animal intrusion detection has been designed using a modified CNN algorithm to efficiently detect the existence of animal intrusion in the crop field. It provides an alert signal while averting the animal without injuries. This paper proposes a system that includes a PIR sensor, a thermal imaging camera, a GSM module, and Hologram connectivity with a Raspberry Pi module. A modified CNN algorithm is used to validate the captured animal image and then alert the user. Absolute crop protection from animal trespass is guaranteed, thereby protecting farmers from huge losses.

[7] Efficient Wildlife Intrusion Detection System using Hybrid
Algorithm
Author Name: Divya Meena, Hari Krishna P, Chakka Naga Venkata
Jahnavi, Patri Lalithya Manasa, J Sheela
Published Year: 29 December 2022

Abstract
Human-wildlife conflict arises when the needs and behavior of animals have
a detrimental influence on humans or when humans have a negative impact on the
needs of wildlife. The primary causes of Man-Wildlife Conflicts include agricultural
expansion, human settlement, livestock overgrazing, deforestation, illegal grass
gathering, and poaching. Each year, human-animal conflict in human habitats causes
a massive loss of resources and puts lives in jeopardy. As the global human population
continues to force wildlife out of their natural habitats, conflicts are unavoidable,
which is why habitat loss is one of the most prevalent dangers to endangered animals.
So, it is necessary to detect animals and identify the animal detected to reduce the
effects of human-animal conflict. This research study has developed a hybrid
algorithm, which classifies animal images into multiple groups using YOLO v5 (You
only look once) combined with CNN. The proposed system distinguishes whether the
animal is in human environment or not, and then reliably distinguishes which animal
class it belongs to using CNN. The model has been tested through its paces on a
variety of tasks in analysis to define how well it performs in various scenarios. The
system is being fine-tuned with the goal of attaining the most accurate results
possible in recognizing and decreasing hazards posed by animal invasions into
human land. These experimental findings show that the yolov5 coalescing technique
paired with CNN can properly categorize animals in habitats, with a 92.5% accuracy
from the proposed model.

CHAPTER 3

SYSTEM ANALYSIS

3.1 EXISTING SYSTEM:

Existing systems for crop protection against wildlife intrusion encompass a range of approaches, including traditional methods and emerging technologies.
Traditional methods such as physical barriers (e.g., fences), chemical repellents,
and scare devices have been widely used but often lack real-time detection
capabilities and may result in harm to both wildlife and crops. Emerging
technologies, as highlighted in the literature survey, include IoT-based animal
classification systems using convolutional neural networks (CNNs) and IoT-based
acoustic classification systems. While these technologies show promise in
providing real-time detection and classification of wildlife intrusion, they may still
face limitations such as scalability, cost, and efficiency in practical implementation.

LIMITATIONS OF EXISTING SYSTEM:

 Lack of Real-Time Detection: Both traditional methods and emerging technologies may suffer from a lack of real-time detection capabilities, leading to delayed responses and increased crop damage.
 Harmful to Wildlife: Certain traditional methods and emerging technologies
may pose risks to wildlife by causing harm or injury, contradicting principles of
ethical wildlife management.
 High Maintenance Requirements: Traditional methods often require frequent
maintenance, increasing operational costs and labor requirements for farmers.
Emerging technologies may also require specialized expertise and ongoing
maintenance.
 Scalability and Cost: Both traditional and emerging technologies may face
challenges in scalability and cost-effectiveness, particularly for small-scale farmers
with limited resources, hindering widespread adoption and effectiveness.
3.2 PROPOSED SYSTEM:

The "Wild Animals Deterrent System for Crop Protection" project introduces an innovative approach to mitigate crop damage caused by wildlife
intrusion in agricultural settings. Leveraging advanced technologies such as
deep learning and intelligent sound emission, our system offers real-time
detection and humane deterrence of wild animals, ensuring the safety and
integrity of crops without causing harm to wildlife. The system utilizes a deep
convolutional neural network (DCNN) classification algorithm, specifically a
NASNetLarge model, for accurate detection and recognition of various animal
species from images captured by cameras installed in farm landscapes.
Additionally, an ultrasonic alarm sound is emitted to repel detected animals
without causing physical harm. Real-time notifications, including images of the
detected animal, its species, and timestamp, are sent to the farmer's mobile
device for prompt action. Night vision simulation is incorporated to enhance
accuracy, enabling effective detection even in low-light conditions.
ADVANTAGES OF PROPOSED SYSTEM:

 Enhanced Accuracy and Reliability: The utilization of deep learning techniques, coupled with night vision simulation, ensures high accuracy in animal detection and recognition, reducing false positives and negatives.
 Humane Deterrence: Unlike traditional methods that may harm wildlife, our
system employs intelligent sound emission to repel animals without causing
physical harm, aligning with ethical wildlife management principles.
 Real-time Response: The system offers real-time detection and notification
capabilities, enabling farmers to take prompt action to protect their crops from
wildlife intrusion, thereby minimizing crop damage.
 Scalability and Cost-effectiveness: Our system is designed to be scalable and
cost-effective, making it accessible to farmers across different scales of operation.
Additionally, it reduces the need for costly and labor-intensive maintenance
associated with traditional methods, enhancing overall efficiency and profitability.
CHAPTER 4

SYSTEM REQUIREMENTS

To be used efficiently, all applications need certain hardware components or other software resources to be present on the device. These prerequisites are known as system requirements and are often used as a guideline rather than an absolute rule. Most software defines two sets of system requirements: minimum and recommended. With increasing demand for higher processing power and resources in newer versions of software, system requirements tend to increase over time. Industry analysts suggest that this trend plays a bigger part in driving upgrades to existing computer systems than technological advancements. A second meaning of the term system requirements is a generalization of this first definition, giving the requirements to be met in the design of a system or sub-system. Typically, an organization starts with a set of business requirements and then derives the system requirements from there.

4.1 SOFTWARE REQUIREMENTS:

Operating System: Windows 7 or higher

For Web Client: Google Chrome, Edge, Safari, Mozilla Firefox, etc

Additional Requirement: Stable Internet Connection

4.2 HARDWARE REQUIREMENTS:

Processor: Intel Core i3 or higher

RAM: 4.00GB or higher

Storage: 100 GB or higher


4.3 ABOUT THE SOFTWARE:

4.3.1 PYTHON:

Python is a widely used general-purpose, high-level programming language. It was created by Guido van Rossum in 1991 and further developed
by the Python Software Foundation. It was designed with an emphasis on code
readability, and its syntax allows programmers to express their concepts in
fewer lines of code. Python is a programming language that lets you work
quickly and integrate systems more efficiently.

There are two major Python versions, Python 2 and Python 3, and the two are quite different. The reasons for Python's increasing popularity include its emphasis on code readability, shorter code, and ease of writing. Programmers can express logical concepts in fewer lines of code compared to languages such as C++ or Java. Python supports multiple programming paradigms, including object-oriented, imperative, functional, and procedural programming. There are built-in functions for almost all of the frequently used concepts, and its philosophy is "simplicity is best".

4.3.1.1 Language Features:

 Interpreted

 There are no separate compilation and execution steps like C and C++

 Directly run the program from the source code

 Internally, Python converts the source code into an intermediate form called bytecode, which is then translated into the native language of the specific computer to run it
 No need to worry about linking and loading with libraries, etc

 Platform Independent

 Python programs can be developed and executed on multiple operating system platforms
 Python can be used on Linux, Windows, Macintosh, Solaris and many
more
 Free and Open Source

 Redistributable

 High-level Language

In Python, no need to take care about low-level details such as managing the
memory used by the program.

 Simple

 Closer to English language; Easy to Learn

 More emphasis on the solution to the problem rather than the syntax

 Embeddable

 Python can be used within a C/C++ program to give scripting capabilities to the program's users.
 Robust

 Exception handling features

 Memory management techniques inbuilt

 Rich Library Support

The Python Standard Library is very vast. Known as the "batteries included" philosophy of Python, it can help with various tasks involving regular expressions, documentation generation, unit testing, threading, databases, web browsers, CGI, email, XML, HTML, WAV files, cryptography, GUIs and many more.

Besides the standard library, there are various other high-quality libraries, such as the Python Imaging Library, an amazingly simple image manipulation library. Many software products also make use of Python.

Python has been successfully embedded in a number of software products as a scripting language. GNU Debugger uses Python as a pretty printer
to show complex structures such as C++ containers. Python has also been used
in artificial intelligence. Python is often used for natural language processing
tasks.

4.3.1.2 Current Applications of Python:

 A number of Linux distributions use installers written in Python; for example, Ubuntu uses the Ubiquity installer.

 Python has seen extensive use in the information security industry, including in exploit development.
 Raspberry Pi, a single-board computer, uses Python as its principal user programming language.
 Python is now being used in game development as well.

4.3.2 FLASK:

Flask is a Python framework that allows us to build web applications. It was developed by Armin Ronacher. Flask's framework is more explicit than Django's and is also easier to learn because it has less base code to implement a simple web application. A web application framework is a collection of modules and libraries that helps the developer write applications without writing low-level code such as protocols, thread management, etc. Flask is based on the WSGI (Web Server Gateway Interface) toolkit and the Jinja2 template engine.

4.3.2.1 Setting Up The Project Structure:

 Create a couple of folders and files within flaskapp/ to keep the web app organized.
 Within flaskapp/, create a folder, app/, to contain all your files. Inside app/, create a folder static/; this is where you'll put the web app's images, CSS, and JavaScript files, so create folders for each of those. Additionally, create another folder, templates/, to store the app's web templates. Create an empty Python file, routes.py, for the application logic, such as URL routing. And no project is complete without a helpful description, so create a README.md file as well.

4.3.2.2 Working:

1. A user issues a request for a domain's root URL/ to go to its home page.

2. app.py maps the URL/ to a Python function.

3. The Python function finds a web template living in the templates/ folder.

4. A web template will look in the static/ folder for any images, CSS, or
JavaScript files it needs as it renders to HTML.
5. Rendered HTML is sent back to app.py.

6. app.py sends the HTML back to the browser, as sketched in the minimal example below.
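
The following is a minimal, hypothetical sketch of this request flow; the file and template names are illustrative and not taken from the project code.

# app.py - minimal Flask app illustrating the request flow described above
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def home():
    # Flask looks up templates/index.html; images, CSS, and JavaScript
    # referenced by the template are served from the static/ folder.
    return render_template("index.html")

if __name__ == "__main__":
    app.run(debug=True)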

4.3.3 KERAS:

Keras runs on top of open-source machine learning libraries such as TensorFlow, Theano, or the Cognitive Toolkit (CNTK). Theano is a Python library used for fast
numerical computation tasks. TensorFlow is the most famous symbolic math
library used for creating neural networks and deep learning models. TensorFlow
is very flexible and its primary benefit is distributed computing. CNTK is a deep learning framework developed by Microsoft. It uses libraries such as Python,
C#, C++ or standalone machine learning toolkits. Theano and TensorFlow are
very powerful libraries but difficult to understand for creating neural networks.

Keras is based on a minimal structure that provides a clean and easy way to create deep learning models on top of TensorFlow or Theano. Keras is designed to let deep learning models be defined quickly, which makes it an optimal choice for deep learning applications.
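
As a brief, hypothetical sketch of how concisely a model can be defined in Keras (the layer sizes and class count below are illustrative, not the project's actual architecture):

# A small image classifier defined in a few lines of Keras
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(224, 224, 3)),        # RGB input images
    layers.Conv2D(32, 3, activation="relu"),  # feature extraction
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(13, activation="softmax"),   # e.g. 13 animal classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()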

4.3.3.1 Features:

 Keras leverages various optimization techniques to make its high-level neural network API easier to use and more performant. It supports the following features:
 Consistent, simple and extensible API.

 Minimal structure, easy to achieve the result without any frills.

 It supports multiple platforms and backends.

 It is a user-friendly framework which runs on both CPU and GPU.

 Highly scalable computation.

4.3.3.2 Benefits:

 Keras is a highly powerful and dynamic framework that offers the following advantages:
 Larger community support.

 Keras neural networks are written in Python which makes things simpler.

 Keras supports both convolution and recurrent networks.

 Deep learning models are built from discrete components that can be combined in many ways.
4.3.4 TENSORFLOW:
TensorFlow is an open-source software library. TensorFlow was
originally developed by researchers and engineers working on the Google Brain
Team within Google's Machine Intelligence research organization for the
purposes of conducting machine learning and deep neural networks research,
but the system is general enough to be applicable in a wide variety of other
domains as well. Let us first try to understand what the word TensorFlow
actually means.
TensorFlow is basically a software library for numerical computation
using data flow graphs where nodes in the graph represent mathematical
operations. Edges in the graph represent the multidimensional data arrays
(called tensors) communicated between them. (Please note that tensor is the
central unit of data in TensorFlow).
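
A tiny illustrative example of this graph-of-tensors idea, written in TensorFlow 2's eager style (the values are arbitrary):

# Nodes are operations; edges carry tensors between them
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # a 2x2 tensor
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
c = tf.matmul(a, b)    # matrix-multiplication op consuming two tensors
d = tf.reduce_sum(c)   # reduction op producing a scalar tensor
print(c.numpy(), d.numpy())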

4.3.4.1 TensorFlow APIs:

TensorFlow provides multiple APIs (Application Programming Interfaces). These can be classified into two major categories:

 Low-level API, which offers complete programming control and is recommended for machine learning researchers, since it provides fine-grained control over the models. TensorFlow Core is the low-level API of TensorFlow.

 High-level API, which is built on top of TensorFlow Core. It is easier to learn and use than TensorFlow Core, and makes repetitive tasks easier and more consistent between different users. tf.contrib.learn is an example of a high-level API.

CHAPTER 5

SYSTEM DESIGN

5.1 SYSTEM ARCHITECTURE:

An architectural diagram is a diagram of a system that is used to abstract the overall outline of the software system and the relationships, constraints, and
boundaries between components. It is an important tool as it provides an overall
view of the physical deployment of the software system and its evolution
roadmap. The system architecture of the proposed system is shown in Figure 5.1.1 below.

FIGURE 5.1.1 SYSTEM ARCHITECTURE

5.2 USE CASE DIAGRAM:

A use case diagram at its simplest is a representation of a user's interaction with the system that shows the relationship between the user and the different use cases in which the user is involved. A use case diagram can identify the different types of users of a system and the different use cases, and will often be accompanied by other types of diagrams as well. The use cases are represented by either circles or ellipses.

The main purpose of a use case diagram is to portray the dynamic aspect
of a system. It accumulates the system's requirements, which include both internal as well as external influences. It involves persons, use cases, and the various elements accountable for the implementation of the use case diagram. It represents how an entity from the external environment
can interact with a part of the system.

FIGURE 5.2.1 USE CASE DIAGRAM

5.3 DATA FLOW DIAGRAM:

A Data Flow Diagram shows the path data takes as it moves through the system. It shows how data is processed when such data is valid and also specifies what happens when such data is invalid.

It uses defined symbols like rectangles, circles and arrows, plus short
text labels, to show data inputs, outputs, storage points and the routes between
each destination. Data flowcharts can range from simple, even hand-drawn
process overviews, to in-depth, multi-level DFDs that dig progressively deeper
into how the data is handled. They can be used to analyze an existing system or
model a new one. Like all the best diagrams and charts, a DFD can often
visually say things that would be hard to explain in words, and they work for
both technical and nontechnical audiences.

0th LEVEL

FIGURE 5.3.1 0th LEVEL DFD

1st LEVEL

FIGURE 5.3.2 1st LEVEL DFD

2nd LEVEL

FIGURE 5.3.3 2nd LEVEL DFD

5.4 MODULES AND FUNCTIONALITIES:

 Dataset Preparation

 Night Vision Simulation

 Model Training

 Model Selection and UI Integration

 Alert Sound Generation

 Alert Message Generation

5.4.1 Dataset Preparation:

Data quality analysis has to be done to ensure our dataset is well-suited for machine learning tasks. We began by visualizing samples from each class to
gain an initial understanding of the dataset’s structure. We then verified image
integrity by checking for corrupted or empty images, preventing potential issues
during training. Next, we assessed the dimensions of the images to ensure
uniformity, identifying any inconsistencies. We also evaluated the resolution of
the images, flagging those that did not meet our quality standards. Duplicate
images were detected and removed by generating and comparing image hashes
to eliminate redundancy. Finally, we analyzed class distribution to identify any
imbalances in the dataset, ensuring a balanced representation of each class.
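
A simplified sketch of the duplicate-removal step described above, using file-content hashes; the directory layout and file extension are assumptions, not taken from the project code.

# Remove exact-duplicate images by comparing content hashes
import hashlib
from pathlib import Path

def remove_duplicates(dataset_dir):
    seen, removed = set(), 0
    for img_path in sorted(Path(dataset_dir).rglob("*.jpg")):
        digest = hashlib.md5(img_path.read_bytes()).hexdigest()
        if digest in seen:
            img_path.unlink()   # delete the redundant copy
            removed += 1
        else:
            seen.add(digest)
    return removed

# Example usage: print(remove_duplicates("dataset/"))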

Data augmentation further enhances the robustness and diversity of the dataset by generating additional samples per class through various image
manipulation techniques. These techniques include rotation, flipping, brightness
adjustment, Gaussian blur, and histogram equalization, among others. By
augmenting the dataset with variations of existing images, we introduce
diversity and complexity, enabling the model to learn from a broader range of scenarios and improve its generalization ability. Ultimately, the combination of
data cleaning and augmentation techniques optimizes the quality, diversity, and
size of the dataset, laying a solid foundation for effective model training and
performance in tasks such as image classification and object detection.
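
A hedged sketch of such augmentations using OpenCV and NumPy; the parameter values are illustrative rather than the project's actual settings.

# Illustrative augmentation helpers: rotation, flip, brightness, blur, equalization
import cv2
import numpy as np

def augment(image):
    h, w = image.shape[:2]
    rotated = cv2.warpAffine(
        image, cv2.getRotationMatrix2D((w / 2, h / 2), 15, 1.0), (w, h))
    flipped = cv2.flip(image, 1)                              # horizontal flip
    bright = cv2.convertScaleAbs(image, alpha=1.0, beta=40)   # brightness shift
    blurred = cv2.GaussianBlur(image, (5, 5), 0)              # Gaussian blur
    ycrcb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])         # equalize luma channel
    equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    return [rotated, flipped, bright, blurred, equalized]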

5.4.2 Night Vision Simulation:

Night vision simulation is a critical component in preparing image datasets for machine learning tasks, particularly in scenarios where low-light
conditions may be encountered. Our approach to night vision simulation
involves the application of various techniques to simulate the effect of night
vision on input images, thereby enhancing the robustness and adaptability of our
models to real-world conditions. These techniques include partial inversion,
manual brightness adjustment, green tint application, noise addition, gamma
correction, and vignetting. By incorporating these techniques, we aim to mimic
the characteristics of night vision imagery, such as reduced visibility, altered
color perception, and increased noise levels, ensuring that our models can
effectively operate under challenging lighting conditions. Additionally, we have
generated night vision images without a green tint to accommodate scenarios
where black and white night vision simulation is desired, providing flexibility
and versatility in our approach.
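
A simplified sketch of the green-tint variant of this simulation, assuming OpenCV and NumPy; the gamma, noise, and tint choices are illustrative.

# Approximate a night-vision look: darken via gamma, add noise, tint green
import cv2
import numpy as np

def simulate_night_vision(image, gamma=1.8, noise_sigma=12.0):
    img = np.power(image.astype(np.float32) / 255.0, gamma)   # gamma correction
    gray = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_BGR2GRAY)
    noise = np.random.normal(0, noise_sigma, gray.shape)      # sensor-style noise
    gray = np.clip(gray.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    night = np.zeros_like(image)
    night[:, :, 1] = gray                                     # green channel only
    return night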

Data splitting is a fundamental step in the machine learning pipeline, enabling the assessment of model performance and generalization ability. In our
methodology, we divide the dataset into training, validation, and test sets to
facilitate model training and evaluation. We leverage the scikit-learn
‘train_test_split’ function to partition images into their respective directories,
ensuring a balanced distribution of classes across sets. By carefully splitting the
dataset, we can train our models on a subset of the data, validate their
performance on a separate subset, and ultimately evaluate their accuracy and
robustness on a held-out test set. This systematic approach to data splitting
enables reliable assessments of the model’s performance and generalization
ability in real-world scenarios.
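
A hedged sketch of this split using scikit-learn's train_test_split with stratification; the paths, labels, and ratios are illustrative.

# Stratified 70/15/15 split of image paths and labels
from sklearn.model_selection import train_test_split

# Hypothetical inputs: one path and one class label per image
image_paths = ([f"dataset/deer/img_{i}.jpg" for i in range(50)]
               + [f"dataset/boar/img_{i}.jpg" for i in range(50)])
labels = ["deer"] * 50 + ["boar"] * 50

train_paths, temp_paths, train_labels, temp_labels = train_test_split(
    image_paths, labels, test_size=0.30, random_state=42, stratify=labels)
val_paths, test_paths, val_labels, test_labels = train_test_split(
    temp_paths, temp_labels, test_size=0.50, random_state=42,
    stratify=temp_labels)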

5.4.3 Model Training:

Effective model training and evaluation are pivotal for achieving high
performance in deep learning applications. Our approach involved training a
range of models using transfer learning to leverage pre-existing knowledge from
large-scale datasets. Specifically, we fine-tuned VGG16, ResNet50,
DenseNet121, EfficientNet (B0, B1, B2), Xception, InceptionV3,
MobileNetV2, NASNetMobile, and NASNetLarge to our dataset. We began
with basic versions of these models and assessed their performance based on
accuracy and loss. The following table summarizes the performance metrics of
the trained models:

MODEL            LOSS (%)    ACCURACY (%)
VGG16            44          87
ResNet50         48          87.7
DenseNet121      37          93
EfficientNetB0   23          93.4
EfficientNetB1   23          93.6
EfficientNetB2   19          94
Xception         22          93.8
InceptionV3      30          91.8
MobileNetV2      52          87
NASNetMobile     39          91.6
NASNetLarge      24          95.8

TABLE 5.4.1 MODEL TRAINING RESULTS
We decided to focus on NASNetLarge, Xception, and EfficientNetB2
for subsequent fine-tuning based on their superior performance metrics, with
each demonstrating high accuracy and relatively low loss in the initial
evaluations. During this phase, we experimented with various hyperparameters,
such as learning rates, optimizers, number of layers and epoch numbers, to
optimize the models' performance. The objective was to enhance predictive
accuracy and minimize loss further. The retraining process revealed that
NASNetLarge consistently outperformed the other models, achieving an
impressive final accuracy of 96% and a significantly reduced loss of 19%.
These metrics were further validated using a hold-out test set to ensure
robustness and reliability.
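
A hedged sketch of this transfer-learning and fine-tuning setup using the Keras applications API; the head layers, dropout rate, and learning rates are illustrative, not the project's exact hyperparameters.

# Transfer learning with a pre-trained NASNetLarge backbone
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 13
base = keras.applications.NASNetLarge(
    weights="imagenet", include_top=False, input_shape=(331, 331, 3))
base.trainable = False  # freeze the backbone for the initial training phase

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Fine-tuning phase: unfreeze the backbone and retrain at a lower learning rate
base.trainable = True
model.compile(optimizer=keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])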

5.4.4 Model Selection and UI Integration:

Model selection is a critical step in developing a high-performing deep learning system, as it involves identifying the model that best meets the
performance criteria for a given task. We conducted a comprehensive evaluation
using a hold-out test set to assess various performance metrics. These metrics
included loss, accuracy, precision, recall, F1 score, and the confusion matrix.
Loss and accuracy provided a fundamental measure of model performance.
Precision, recall, and F1 score offered insights into the model's effectiveness in
identifying and classifying instances correctly. The confusion matrix was used
to visualize the model’s performance across different classes, highlighting areas
of strength and potential improvement. The NASNetLarge model demonstrated
superior performance across these metrics, achieving high accuracy and low
loss. Its high precision, recall, and F1 score further validated its effectiveness.
The confusion matrix revealed minimal misclassifications, underscoring the
model's reliability in accurately classifying the images. As a result,
NASNetLarge was chosen as the final model, ensuring optimal performance and
robustness for deployment in real-world scenarios.
Furthermore, we leverage the model to make predictions on a demo
dataset, allowing for a practical comparison between true and predicted labels
alongside corresponding images. This interactive demonstration provides
stakeholders with a tangible understanding of the model's effectiveness and
enables them to assess its performance in real-world scenarios. To facilitate
real-world usage, we integrate the model with a Flask application, serving as a
user interface for seamless interaction with the deployed model. The Flask app
enables users to submit input images for prediction and displays the model's
output in an intuitive and user-friendly manner. This integration involves
creating routes to handle user requests, processing input data, making
predictions using the trained model, and presenting results back to the user
interface for visualization, ensuring a smooth and efficient user experience.
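
A hedged sketch of what such a prediction route might look like; the endpoint name, model path, and class list are illustrative placeholders rather than the project's actual code.

# Flask route that accepts an uploaded image and returns the predicted class
import os
import numpy as np
from flask import Flask, request, jsonify
from tensorflow import keras
from tensorflow.keras.preprocessing import image as keras_image

app = Flask(__name__)
model = keras.models.load_model("models/nasnetlarge_final.h5")  # assumed path
CLASS_NAMES = ["boar", "deer", "elephant", "monkey"]             # illustrative subset
os.makedirs("uploads", exist_ok=True)

@app.route("/predict", methods=["POST"])
def predict():
    upload = request.files["image"]
    path = os.path.join("uploads", "input.jpg")
    upload.save(path)
    img = keras_image.load_img(path, target_size=(331, 331))
    x = np.expand_dims(keras_image.img_to_array(img) / 255.0, axis=0)
    probs = model.predict(x)[0]
    return jsonify({"animal": CLASS_NAMES[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})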

5.4.5 Alert Sound Generation:

Sound generation and integration play a crucial role in our Wild Animals
Deterrent System for Crop Protection, enhancing its effectiveness in deterring
wildlife intrusion while minimizing harm to both animals and crops. To
accomplish this, we first define animal classes along with their corresponding
frequencies, ensuring that the generated sound stimuli are tailored to each
specific species. We then implement a function using NumPy to generate
sinusoidal waveforms with the specified frequencies and durations, enabling
precise control over the characteristics of the generated sounds. These generated
sound files are saved in WAV format for each animal class and stored in the
'animal_sounds' directory, ensuring accessibility and ease of integration with the
overall system architecture.
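
A simplified sketch of this waveform generation, assuming NumPy for synthesis and SciPy for WAV output; the class-to-frequency mapping and durations are illustrative.

# Generate per-class deterrent tones as WAV files in animal_sounds/
import os
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 96000  # high sample rate keeps ultrasonic tones below the Nyquist limit
ANIMAL_FREQUENCIES = {"deer": 22000, "boar": 25000, "monkey": 18000}  # Hz, illustrative

def generate_tone(frequency, duration=5.0):
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    wave = 0.5 * np.sin(2 * np.pi * frequency * t)   # sinusoidal waveform
    return (wave * 32767).astype(np.int16)           # 16-bit PCM samples

os.makedirs("animal_sounds", exist_ok=True)
for animal, freq in ANIMAL_FREQUENCIES.items():
    wavfile.write(f"animal_sounds/{animal}.wav", SAMPLE_RATE, generate_tone(freq))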

In the integration phase, upon the detection of an animal intrusion, our system seamlessly triggers the playback of the corresponding sound file to
distract and repel the animal effectively. This integration process is achieved through the coordination of detection algorithms or sensors, which identify the
presence of animals within the monitored area. Upon detection, the system
initiates the playback of the appropriate sound file associated with the detected
animal class. By synchronizing the detection and sound generation components
of our system, we create a responsive and dynamic deterrent mechanism that
effectively mitigates wildlife intrusion while minimizing the need for physical
barriers or harmful repellents. This integrated approach not only enhances the
efficacy of our crop protection system but also promotes coexistence between
humans and wildlife by providing a humane and non-invasive solution to
wildlife management challenges in agricultural environments.

5.4.6 Alert Message Generation:

The integration of alert message generation and SMS notification functionalities within our Wild Animals Deterrent System for Crop Protection
significantly enhances its ability to promptly inform farmers of potential
wildlife intrusions, enabling timely response and intervention. Upon detecting
an animal within the monitored area, the alert message generation module
automatically creates a comprehensive notification containing crucial
information such as the type of animal detected, timestamp of detection, and
possibly an image of the detected animal for visual confirmation. Furthermore,
the message can be customized to include additional details such as the precise
location of the intrusion, sensor information, and any other relevant metadata,
providing farmers with comprehensive insights into the incident.

To ensure immediate notification, the system leverages an SMS gateway
or service to transmit the alert message to the registered mobile number of the
farmer. Through integration with the SMS service provider's API, the module
enables seamless and programmatically controlled message transmission,
guaranteeing swift and reliable delivery of notifications to farmers regardless of
their location. This direct communication channel facilitates prompt action by
farmers, allowing them to assess the situation and implement appropriate
measures to mitigate crop damage and minimize losses. By combining alert
message generation with SMS notification capabilities, our system enhances the
responsiveness and effectiveness of wildlife intrusion detection and
management, ultimately contributing to improved crop protection and
sustainable agriculture practices.
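
The sketch below illustrates this flow with a generic HTTP-based SMS gateway; the endpoint and parameter names are placeholders rather than a specific provider's API, and the prototype in Appendix B actually delivers the alert (with the captured image) through a Telegram bot instead.

# Sketch of alert composition and dispatch; the gateway URL and fields are assumptions.
import requests
from datetime import datetime

def send_intrusion_alert(animal_class, farmer_phone):
    timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    message = f'Alert: {animal_class} detected in the field at {timestamp}.'
    # Replace the placeholder endpoint and payload with the chosen provider's API.
    response = requests.post('https://sms-gateway.example.com/send',
                             data={'to': farmer_phone, 'text': message})
    response.raise_for_status()
    return message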

5.5 ALGORITHMS AND TECHNIQUES:

5.5.1 NASNetLarge:

The NASNetLarge model, short for Neural Architecture Search Network
Large, represents a significant advancement in deep learning architecture,
particularly in the realm of image classification. Developed by researchers at
Google, NASNetLarge leverages a process known as Neural Architecture
Search (NAS), which automates the design of neural networks. By employing
NAS, the model can discover and construct optimal architectures that often
outperform manually designed networks. This automation has led to the creation
of a highly efficient and powerful model that has shown remarkable results in
various image classification tasks, including our wildlife detection system.

NASNetLarge comprises a sophisticated and intricate architecture
designed to maximize performance while maintaining computational efficiency.
The core idea behind NASNetLarge is the use of a search space that defines
possible neural network architectures, coupled with a reinforcement learning
controller that explores this space to find the most effective configurations. This
search space includes various building blocks, such as convolutional layers,
pooling layers, and activation functions, which are combined in numerous ways
to form different architectures.

The NASNetLarge model is built upon two primary components:
Normal Cells and Reduction Cells. Normal Cells preserve the spatial
dimensions of the input, performing standard convolutions and other operations
to extract features. Reduction Cells, on the other hand, reduce the spatial
dimensions, typically by a factor of two, allowing the network to increase its
depth and capture more abstract features. These cells are stacked together in a
repetitive manner, forming a deep and hierarchical structure that excels at
learning complex patterns in images.

One of the standout features of NASNetLarge is its use of separable
convolutions, which decompose standard convolutions into depthwise and
pointwise convolutions. This decomposition significantly reduces the number of
parameters and computational cost, making the model more efficient without
compromising performance. Additionally, the model incorporates batch
normalization and ReLU activations to enhance training stability and
convergence.
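
The parameter saving can be illustrated with a small Keras comparison; the layer sizes below are arbitrary examples, not NASNetLarge's actual dimensions.

# Standard vs. depthwise separable convolution on a 128-channel feature map.
from tensorflow.keras.layers import Conv2D, SeparableConv2D, Input
from tensorflow.keras.models import Model

inp = Input(shape=(64, 64, 128))
standard = Model(inp, Conv2D(256, 3, padding='same')(inp))
separable = Model(inp, SeparableConv2D(256, 3, padding='same')(inp))

print(standard.count_params())   # 3*3*128*256 + 256 = 295,168 parameters
print(separable.count_params())  # 3*3*128 + 128*256 + 256 = 34,176 parameters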

The training process of NASNetLarge is designed to fully exploit its
sophisticated architecture. We employed transfer learning to leverage the pre-
trained weights of NASNetLarge, which were initially trained on the ImageNet
dataset—a large and diverse collection of images spanning 1000 classes. This
pre-training provides a robust foundation for our specific task, enabling the
model to generalize well even with a limited dataset.

During transfer learning, the final classification layer of NASNetLarge
was replaced with a custom dense layer tailored to our specific problem. This
custom layer uses a softmax activation function to output probabilities for each
class, allowing for accurate identification of categories in our dataset. To further
enhance performance, the entire network was fine-tuned by training it on our
dataset with a reduced learning rate. This fine-tuning process allows the model
to adapt its pre-trained features to the specific characteristics of our dataset,
improving accuracy and robustness.
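
The setup described above corresponds to the following condensed sketch, consistent with the training notebook in Appendix B (the L2 regularizer on the intermediate dense layer is omitted here for brevity).

# Transfer learning head on top of the ImageNet-pretrained NASNetLarge base.
from tensorflow.keras.applications import NASNetLarge
from tensorflow.keras.layers import Input, GlobalMaxPool2D, Dropout, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD

base = NASNetLarge(weights='imagenet', include_top=False,
                   input_tensor=Input(shape=(256, 256, 3)))
base.trainable = True                         # fine-tune the whole network at a small learning rate

x = GlobalMaxPool2D()(base.output)
x = Dropout(0.5)(x)
x = Dense(64, activation='relu')(x)
outputs = Dense(13, activation='softmax')(x)  # one probability per wild animal class

model = Model(base.input, outputs)
model.compile(optimizer=SGD(learning_rate=0.01),
              loss='categorical_crossentropy', metrics=['accuracy'])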

FIGURE 5.5.1 NASNETLARGE ARCHITECTURE

5.5.2 SGD Optimization:

Stochastic Gradient Descent (SGD) is a fundamental optimization
algorithm widely used in training machine learning models, particularly neural
networks. It is popular due to its simplicity, efficiency, and effectiveness in
minimizing the loss function by iteratively updating the model parameters.

The core idea behind SGD is to update the model parameters in the
direction that minimizes the loss function, gradually converging towards the
optimal solution. Unlike batch gradient descent, which computes the gradient of
the loss function with respect to all training samples, SGD updates the
parameters using a single randomly selected training sample or a small subset of
samples (mini-batch) at each iteration. This stochastic sampling of training data
introduces noise into the parameter updates, which helps the optimization
process escape local minima and saddle points and enables faster convergence.

SGD iteratively updates the parameters using the computed gradient for
a predefined number of iterations (epochs) or until convergence criteria are met.
The learning rate is a critical hyperparameter in SGD, as it determines the step
size of the parameter updates. Choosing an appropriate learning rate is essential
to ensure stable convergence and prevent oscillations or divergence during
training.
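
The update rule can be written as θ ← θ − η·∇L(θ), where η is the learning rate. The toy NumPy sketch below performs one such mini-batch update for a linear model; it is only an illustration, not the Keras optimizer used in our training.

# One mini-batch SGD step on a toy linear model y = w*x + b with squared-error loss.
import numpy as np

def sgd_step(params, grads, learning_rate=0.01):
    return [p - learning_rate * g for p, g in zip(params, grads)]

w, b = 0.5, 0.0
x = np.array([1.0, 2.0])                 # mini-batch inputs
y = np.array([3.0, 5.0])                 # mini-batch targets
y_hat = w * x + b
grad_w = np.mean(2 * (y_hat - y) * x)    # gradient of mean squared error w.r.t. w
grad_b = np.mean(2 * (y_hat - y))        # gradient of mean squared error w.r.t. b
w, b = sgd_step([w, b], [grad_w, grad_b])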

Despite its simplicity, SGD has several variants and extensions to
improve its performance and stability. These include momentum, adaptive
learning rate methods (e.g., AdaGrad, RMSprop, Adam), and learning rate
schedules, which dynamically adjust the learning rate during training based on
predefined criteria.

SGD is a versatile and widely used optimization algorithm that forms
the basis for training neural networks. Its simplicity, efficiency, and
effectiveness make it a cornerstone of modern optimization techniques.

5.5.3 Categorical Cross-entropy Loss:

Categorical Cross-Entropy Loss, also known as Softmax Cross-Entropy
Loss, is a widely used loss function in machine learning, especially in
classification tasks where the output belongs to multiple classes. Its primary
function is to quantify the disparity between the true distribution of classes and
the distribution predicted by the model.

The core concept of categorical cross-entropy loss revolves around
measuring the difference between the actual probability distribution of classes
and the distribution predicted by the model. Essentially, it penalizes the model
for deviations from the true distribution, with more severe penalties incurred for
larger discrepancies.
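
For a single sample with one-hot true label y and predicted probabilities p, the loss is −Σ y_i · log(p_i). A small NumPy illustration follows; a three-class example is used for readability, while our problem has 13 classes.

# Categorical cross-entropy for one sample: lower when the true class gets high probability.
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    y_pred = np.clip(y_pred, eps, 1.0)          # avoid log(0)
    return -np.sum(y_true * np.log(y_pred))

y_true = np.array([0, 1, 0])                                         # true class is class 1
print(categorical_cross_entropy(y_true, np.array([0.1, 0.8, 0.1])))  # ~0.22, correct and confident
print(categorical_cross_entropy(y_true, np.array([0.7, 0.2, 0.1])))  # ~1.61, mostly wrong prediction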

During training, the categorical cross-entropy loss provides crucial
feedback to the model, guiding it towards assigning higher probabilities to the
correct classes and lower probabilities to incorrect classes. This iterative process
of minimizing the loss drives the model to learn effective representations of the
input data, ultimately leading to improved classification performance.

Overall, categorical cross-entropy loss serves as a fundamental
component in training classification models, facilitating the optimization
process by quantifying the disparity between predicted and true class
distributions and guiding the model towards more accurate predictions.

5.5.4 Early Stopping:

Early stopping is a vital technique in deep learning that helps prevent
overfitting during the training process. Overfitting occurs when a model
performs exceptionally well on training data but fails to generalize to new,
unseen data. To address this, early stopping monitors the model’s performance
on a validation set and halts the training once the performance ceases to
improve. In our project, the EarlyStopping callback was employed to monitor
the validation loss. If the validation loss did not decrease for a specified number
of epochs (referred to as the patience parameter), training was terminated. This
approach ensures that the model maintains a balance between underfitting and
overfitting, providing a model that generalizes well to new data. The result is a
more efficient training process, saving computational resources by not
extending training unnecessarily.

5.5.5 ReduceLROnPlateau:

The ReduceLROnPlateau callback is another powerful tool used to
optimize the learning rate during the training of deep learning models. Learning
rate is a critical hyperparameter that influences how quickly a model converges
to the optimal solution. If the learning rate is too high, the model might
converge too quickly to a suboptimal solution. Conversely, a very low learning
rate can make the training process excessively slow. ReduceLROnPlateau
monitors a performance metric, such as validation loss, and reduces the learning
rate when the metric stops improving. This adaptive adjustment ensures that the
learning rate is optimal throughout the training process. In our project,
implementing this callback helped in fine-tuning the model's parameters more
effectively during the later stages of training, leading to improved model
performance and stability. By reducing the learning rate when needed, the
model was able to achieve higher accuracy and better generalization on the
validation set.
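
Both callbacks are configured in the training notebook (Appendix B) as follows; the patience values and the 0.95 reduction factor are the ones used in our runs.

# Callback configuration used during training.
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

early_stopping = EarlyStopping(monitor='val_loss', patience=3,
                               restore_best_weights=True)      # stop once val_loss stalls for 3 epochs
scheduler = ReduceLROnPlateau(monitor='val_loss', patience=2,
                              factor=0.95, min_lr=1e-5)        # shrink the learning rate on plateaus

# Both are passed to model.fit(...) through the callbacks argument,
# together with a ModelCheckpoint that saves the weights after each epoch.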

CHAPTER 6

EXPERIMENTAL RESULTS

6.1 RESULTS AND DISCUSSION:

In our experimental setup, we utilized the NASNetLarge model for
image classification with a custom architecture built upon its base. The final
model was trained using a dataset of 6,516 images for training, 1,405 images for
testing, and 1,397 images for validation, with a data split ratio of 7:1.5:1.5. The
image data was preprocessed using a rescaling factor of 1/255, and a batch size
of 16 was employed during training. The model architecture included global
max pooling, dropout for regularization, and fully connected dense layers,
concluding with a softmax output layer for multiclass classification. Training
was executed with early stopping, reducing learning rate on plateau, and model
checkpoint callbacks to optimize performance and prevent overfitting. The
optimizer used was SGD with a learning rate of 0.01, and categorical cross-
entropy was employed as the loss function. The model's performance metrics
were evaluated on the hold-out validation set. The final model achieved an
impressive accuracy of 96% with a categorical cross-entropy loss of 0.19, demonstrating its robustness in
classifying the 13 distinct classes present in the dataset.

FIGURE 6.1.1 EXPERIMENTAL RESULTS

6.2 PERFORMANCE MEASURES:

6.2.1 Precision:

Precision is defined as the ratio of correctly classified positive samples (True Positives) to the total number of samples classified as positive, whether classified correctly or incorrectly:

Precision = TP / (TP + FP)

o TP - True Positive
o FP - False Positive
o Precision is low when the denominator (TP + FP) is much larger than the numerator (TP).
o Precision is high when most of the samples classified as positive are in fact positive, i.e., TP dominates TP + FP.

Hence, precision helps us to gauge the reliability of the machine learning model when it classifies a sample as positive.

6.2.2 Recall:

Recall, also known as the true positive rate (TPR), is the percentage of data samples that a machine learning model correctly identifies as belonging to the class of interest (the "positive class") out of the total samples of that class:

Recall = TP / (TP + FN)

Recall in Multi-class Classification

Recall as a metric is not limited to binary classifiers; it can also be used when there are more than two classes. In multi-class classification, recall is calculated as:

Recall = True Positives across all classes / (True Positives + False Negatives across all classes)

6.2.3 Recall vs Precision:

For imbalanced classification problems, recall and precision are both better-suited metrics than relying on accuracy alone. However, that does not mean they are equally important. In a specific situation, you may want to maximize either recall or precision at the cost of the other. It is usually difficult to achieve high recall and high precision at the same time, so improving one tends to come at some cost to the other. In many cases, you want to take both metrics into account and find an optimal blend by using the F1 score.

F1 = 2 * (precision * recall) / (precision + recall)

6.2.4 F1 Score:

The goal of the F1 score is to combine the precision and recall metrics
into a single metric. At the same time, the F1 score has been designed to work
well on imbalanced data.

F1 score formula
The F1 score is defined as the harmonic mean of precision and recall. As a short reminder, the harmonic mean is an alternative to the more common arithmetic mean and is often useful when averaging rates. In the F1 score, we compute the average of precision and recall; since both are rates, the harmonic mean is a logical choice. The F1 score formula is shown here:

FIGURE 6.2.1 PRECISION, RECALL, F1 SCORE

6.2.5 Confusion Matrix:

The confusion matrix is a matrix used to evaluate the performance of classification models for a given set of test data. It can only be determined if the true values for the test data are known. The matrix itself is easy to understand, but the related terminology may be confusing. Since it shows the errors in the model's performance in the form of a matrix, it is also known as an error matrix. Some features of the confusion matrix are given below:

• For 2 prediction classes, the matrix is a 2×2 table; for 3 classes, it is a 3×3 table, and so on.
• The matrix is divided into two dimensions, predicted values and actual values, along with the total number of predictions.
• Predicted values are the values output by the model, and actual values are the true values for the given observations.

A confusion matrix contains the following cases:

• True Negative: the model predicted No, and the actual value was also No.
• True Positive: the model predicted Yes, and the actual value was also Yes.
• False Negative: the model predicted No, but the actual value was Yes; this is also called a Type-II error.
• False Positive: the model predicted Yes, but the actual value was No; this is also called a Type-I error.

FIGURE 6.2.2 CONFUSION MATRIX
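
These metrics and the confusion matrix are computed from the predicted and true class indices at the end of the training notebook using scikit-learn; a toy example of the same calls is shown below.

# Toy example of the scikit-learn metric calls used for evaluation.
from sklearn.metrics import precision_score, recall_score, f1_score, confusion_matrix

y_true = [0, 0, 1, 1, 2, 2]     # ground-truth class indices
y_pred = [0, 1, 1, 1, 2, 0]     # model predictions

print(precision_score(y_true, y_pred, average='macro'))  # per-class precision, averaged
print(recall_score(y_true, y_pred, average='macro'))     # per-class recall, averaged
print(f1_score(y_true, y_pred, average='macro'))         # per-class F1 scores, averaged
print(confusion_matrix(y_true, y_pred))                  # rows: actual class, columns: predicted class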


CHAPTER 7

CONCLUSION AND FUTURE WORK

In conclusion, the development and implementation of the Wild Animals
Deterrent System for Crop Protection represent a significant step forward in
addressing the challenges posed by wildlife intrusion in agricultural settings.
Through the integration of advanced technologies such as deep learning-based
animal detection, sound generation, and alert notification systems, our solution
offers a comprehensive and effective approach to safeguarding crops while
minimizing harm to wildlife. By leveraging techniques such as transfer learning,
data augmentation, and night vision simulation, we have demonstrated the
potential for enhanced accuracy and robustness in detecting and repelling wild
animals.

While our project represents a significant advancement in crop
protection technology, there are several avenues for future research and
development to further enhance its efficacy and versatility. One potential
direction is the exploration of additional sensor modalities, such as infrared
imaging or acoustic sensors, to augment the detection capabilities of the system
and improve its performance under diverse environmental conditions.
Furthermore, incorporating advanced machine learning techniques, such as
reinforcement learning or ensemble methods, could enable the system to adapt
and optimize its strategies dynamically based on evolving environmental factors
and wildlife behaviors. Additionally, extending the functionality of the system
to include predictive analytics and decision support tools could empower
farmers with actionable insights. Overall, continued innovation and
collaboration in this field hold the promise of delivering more sophisticated and
sustainable solutions for crop protection and wildlife management in
agricultural landscapes.
APPENDIX – A

SCREENSHOTS

APPENDIX – B

SOURCE CODE

animalpred.ipynb

# Import necessary libraries


import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras import regularizers
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import NASNetLarge
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, GlobalMaxPool2D, Input, Dropout
from tensorflow.keras.optimizers import SGD
from sklearn.metrics import classification_report, confusion_matrix

from google.colab import drive


drive.mount('/content/drive')

import zipfile
# Extract dataset from zip file
zf = zipfile.ZipFile('/content/drive/MyDrive/animaldataset.zip', "r")
zf.extractall()

import pathlib
# Display and check class names in training, testing and validation dataset
train_path = '../content/animaldataset/train'
data_dir = pathlib.Path(train_path)
class_names = np.array(sorted([item.name for item in data_dir.glob('*')]))
print(class_names)

test_path = '../content/animaldataset/test'
data_dir1 = pathlib.Path(test_path)
class_names = np.array(sorted([item.name for item in data_dir1.glob('*')]))
print(class_names)

val_path = '../content/animaldataset/val'
data_dir2 = pathlib.Path(val_path)
class_names = np.array(sorted([item.name for item in data_dir2.glob('*')]))
print(class_names)

import random
import matplotlib.pyplot as plt
import os

# Display random images from the training set


plt.figure(figsize=(10, 10))
plt.subplots_adjust(wspace=0.5, hspace=0.5)

for i in range(36):
ax = plt.subplot(6, 6, i+1)
random_class = random.randint(0, 12)
folder_path = train_path + '/' + class_names[random_class]

random_image_path = folder_path + '/' + (random.sample(os.listdir(folder_path),
1)[0])
image = plt.imread(random_image_path)
plt.axis('off')
plt.title(class_names[random_class], fontsize = 8, fontweight = 'bold')
plt.imshow(image, cmap='gray')

# Display random images from the testing set


plt.figure(figsize=(10, 10))
plt.subplots_adjust(wspace=0.5, hspace=0.5)

for i in range(36):
ax = plt.subplot(6, 6, i+1)
random_class = random.randint(0, 12)
folder_path = test_path + '/' + class_names[random_class]
random_image_path = folder_path + '/' + (random.sample(os.listdir(folder_path),
1)[0])
image = plt.imread(random_image_path)
plt.axis('off')
plt.title(class_names[random_class], fontsize = 8, fontweight = 'bold')
plt.imshow(image, cmap='gray')

# Display random images from the validation set


plt.figure(figsize=(10, 10))
plt.subplots_adjust(wspace=0.5, hspace=0.5)

for i in range(36):
ax = plt.subplot(6, 6, i+1)
random_class = random.randint(0, 12)

folder_path = val_path + '/' + class_names[random_class]
random_image_path = folder_path + '/' + (random.sample(os.listdir(folder_path),
1)[0])
image = plt.imread(random_image_path)
plt.axis('off')
plt.title(class_names[random_class], fontsize = 8, fontweight = 'bold')
plt.imshow(image, cmap='gray')

# Define constants for image size and batch size


IMAGE_SIZE = (256, 256)
BATCH_SIZE = 16

# Create data generator


datagen = ImageDataGenerator(
rescale=1./255
)

# Data generator for training images


train_generator = datagen.flow_from_directory(
data_dir,
target_size=IMAGE_SIZE,
batch_size=BATCH_SIZE,
class_mode='categorical',
shuffle=True
)

# Data generator for testing images


test_generator = datagen.flow_from_directory(
data_dir1,

target_size=IMAGE_SIZE,
batch_size=BATCH_SIZE,
class_mode='categorical',
shuffle=False
)

# Data generator for validation images


val_generator = datagen.flow_from_directory(
data_dir2,
target_size=IMAGE_SIZE,
batch_size=BATCH_SIZE,
class_mode='categorical',
shuffle=False
)

# Define the base model (NASNetLarge)


base_model = NASNetLarge(weights='imagenet', include_top=False,
input_tensor=Input(shape=(256, 256,3)))
base_model.trainable=True

# Function to build a model on top of the base model


def build_model(base_model):
x = base_model.output
x = GlobalMaxPool2D()(x)
x = Dropout(0.5)(x)
x = Dense(64, activation='relu', kernel_regularizer=regularizers.l2(0.01))(x)
x = Dense(13, activation='softmax')(x)

model = Model(inputs=base_model.input, outputs=x)

return model

# Build the model on top of the base model


model= build_model(base_model)

# Early stopping, learning rate scheduler and model check point callbacks
early_stopping = EarlyStopping(monitor='val_loss', patience=3,
restore_best_weights=True)
scheduler = ReduceLROnPlateau(monitor='val_loss', patience=2, min_lr=1e-5,
factor=0.95)
checkpoint = ModelCheckpoint('weights_epoch{epoch:02d}.weights.h5',
save_weights_only=True)

# Compile the model


model.compile(optimizer=SGD(learning_rate=0.01, clipnorm=1.0),
loss='categorical_crossentropy',
metrics=['accuracy'])

# Train the model


import warnings

# Ignore all warnings


warnings.filterwarnings("ignore")
history = model.fit(train_generator,
epochs=10,
validation_data=val_generator,
shuffle=True,
callbacks=[early_stopping, scheduler, checkpoint])

# Evaluate the model on the test set
loss, accuracy = model.evaluate(test_generator, steps=len(test_generator))

# Print the results


print(f"Loss: {loss}")
print(f"Accuracy: {accuracy}")

predictions = model.predict(test_generator)  # Adjust model.predict call for your framework
y_pred_labels = [idx for idx in predictions.argmax(axis=1)]
true_labels = test_generator.classes

print("Classification Report:")
print(classification_report(true_labels, y_pred_labels))

conf_matrix = confusion_matrix(true_labels, y_pred_labels)


print("Confusion Matrix:")
print(conf_matrix)

# Extract demo dataset from zip file


zf = zipfile.ZipFile('/content/drive/MyDrive/demo.zip', "r")
zf.extractall()

# Path to demo folder


demo_folder = '../content/demo'

# Load and preprocess each image


images = []
true_labels = []

# Obtain class labels and indices
class_indices = {}
labels = []
for i, class_folder in enumerate(sorted(os.listdir(demo_folder))):
class_indices[class_folder] = i
labels.append(class_folder)

# Iterate through each image in the demo folder


for class_folder in os.listdir(demo_folder):
class_folder_path = os.path.join(demo_folder, class_folder)
if os.path.isdir(class_folder_path):
for image_file in os.listdir(class_folder_path):
image_path = os.path.join(class_folder_path, image_file)
image = load_img(image_path, target_size=IMAGE_SIZE)
image = img_to_array(image) / 255.0 # Rescale to [0, 1]
images.append(image)
true_labels.append(class_folder)

# Convert lists to numpy arrays


images = np.array(images)

# Make predictions
predicted_probabilities = model.predict(images)
predicted_classes = np.argmax(predicted_probabilities, axis=1)

# Plot images with true and predicted labels


plt.figure(figsize=(20, 20))
for i, image in enumerate(images):

true_label = true_labels[i]
predicted_label = labels[predicted_classes[i]]

plt.subplot(5, 5, i+1)
plt.imshow(image)
plt.axis('off')
plt.title(f'True: {true_label}\nPredicted: {predicted_label}')

plt.show()

# Save the model


model.save("animalpredNASLv2.keras")

import shutil
shutil.move("animalpredNASLv2.keras", "/content/drive/MyDrive/")

app.py

from flask import Flask, render_template, request, session, redirect, flash,


send_from_directory
from flask_sqlalchemy import SQLAlchemy
from werkzeug.utils import secure_filename
from sqlalchemy import text
import bcrypt
import os
import re
import cv2
import pygame
import asyncio

import base64
import numpy as np
from keras.models import load_model
from telegram import Bot
from datetime import datetime

app = Flask(__name__)
UPLOAD_FOLDER = 'captures'
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///database.db'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.config['SECRET_KEY'] = 'your_secret_key'
db = SQLAlchemy(app)

# Database model
class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(100), nullable=False)
email = db.Column(db.String(100), unique=True, nullable=False)
phone = db.Column(db.String(15), unique=True, nullable=False)
password = db.Column(db.String(100), unique=True, nullable=False)

#Check email format


regex = r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b'

#Check password strength format


decimal= r"^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[@$!%*?&])[A-Za-z\d@$!%*?&]{8,}$"

# Routes
@app.route('/')
def index():
return redirect('/login')

# Login route
@app.route('/login', methods =['GET', 'POST'])
def login():
if (request.method == 'POST'):
credential = request.form.get('credential')
password = request.form.get('password')

# Check if credential is email or phone number


if re.fullmatch(regex, credential):
query = "SELECT * FROM User WHERE email = :credential"
else:
query = "SELECT * FROM User WHERE phone = :credential"

account = db.session.execute(text(query), {'credential': credential}).fetchone()

# Check if account exists and password matches with encrypted password


if account and bcrypt.checkpw(password.encode(), account.password):
session["user_id"] = account[1]
return redirect('main')

else:
flash('Wrong username or password', 'error')
return render_template("login.html")

# Register route
@app.route('/register', methods =['GET', 'POST'])
def register():
if (request.method == 'POST'):
name = request.form.get('name')
email = request.form.get('email')
phone = request.form.get('phone')
Password1 = request.form.get('password')
repassword = request.form.get('confirmpassword')

#Check password strength


correctpass=re.fullmatch(decimal, Password1)
if not correctpass:
flash('Password not strong enough','error')

#Confirm password
else:
if (Password1!=repassword):
flash("Passwords don't match", 'error')
else:
#Check if already registered
account = db.session.execute(text("SELECT * FROM User WHERE email= :email"), {'email': email}).fetchone()
if account:
flash('User already registered', 'error')

return redirect('login')

#Register with encrypted password


else:
Password1 = Password1.encode('utf-8')
Password1 = bcrypt.hashpw(Password1, bcrypt.gensalt())
new_user = User(name=name, email=email, phone=phone ,
password=Password1)
db.session.add(new_user)
db.session.commit()
return redirect('login')

return render_template("register.html")

# Log out route


@app.route('/logout')
def logout():
session.pop('user_id', None)
return redirect('login')

# Load the trained model


model = load_model('animalpred.h5')

# Animal classes
animal_classes = ['bear', 'bison', 'deer', 'dhole', 'elephant', 'fox', 'langur',
'leopard', 'macaque', 'rabbit', 'sloth bear', 'tiger', 'wild boar']

# Function to detect animal from the image

def detect_animal(image):
img = cv2.resize(image, (256, 256))
img = img.astype('float32') / 255.0
img = np.expand_dims(img, axis=0)

# Perform animal detection


prediction = model.predict(img)
predicted_class_index = np.argmax(prediction)
confidence_score = prediction[0][predicted_class_index]
animal_class = animal_classes[predicted_class_index]  # Map the predicted index to its class name
return animal_class, confidence_score

# Function to play alert sound


def play_alert_sound(animal_class):
if animal_class == "human":
return
else:
# Load the corresponding sound file from the animal_sounds directory
sound_file = os.path.join('animal_sounds', f'{animal_class}.wav')
pygame.mixer.init()
pygame.mixer.music.load(sound_file)
pygame.mixer.music.play()

async def send_message_with_image(api_token, chat_id, image_path, text):


bot = Bot(token=api_token)
with open(image_path, 'rb') as image_file:
# Send the image with caption

await bot.send_photo(chat_id=chat_id, photo=image_file, caption=text)
print("Message sent successfully!")

def detect_human(image):
# Load the pre-trained face detection model
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades +
'haarcascade_frontalface_default.xml')
# Convert the image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Detect faces in the grayscale image
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
minSize=(30, 30))
# Check if any faces are detected
return len(faces) > 0

@app.route('/main', methods=['GET', 'POST'])


def main():
if request.method == 'POST':
if 'file' in request.files:
file = request.files['file']
filename = secure_filename(file.filename)
file_path = os.path.join(app.config['UPLOAD_FOLDER'], filename)
file.save(file_path)
img = cv2.imread(file_path)
animal_class, confidence = detect_animal(img)
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
API_TOKEN = ''
CHAT_ID = ''

TEXT = f'Detected Animal: {animal_class}\nTimestamp: {timestamp}'
asyncio.run(send_message_with_image(API_TOKEN, CHAT_ID, file_path,
TEXT))
play_alert_sound(animal_class)
# Render the main.html template with the detected animal class and uploaded image filename
return render_template('main.html', animal_class=animal_class,
image_filename=filename)

else:
# Check if image data is received from webcam
image_data = request.form.get('image')
# Decode base64 image data
image_data = np.frombuffer(base64.b64decode(image_data.split(",")[1]),
np.uint8)
image = cv2.imdecode(image_data, cv2.IMREAD_COLOR)
# Perform human detection first
is_human = detect_human(image)
if not is_human:
# Perform animal detection
animal_class, confidence = detect_animal(image)  # Assuming detection function returns class and confidence

if confidence > 0.8: # Check if confidence score is above threshold


timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
API_TOKEN = ''
CHAT_ID = ''
TEXT = f'Detected Animal: {animal_class}\nTimestamp: {timestamp}'
counter = 1
while True:
filename = f'webcam_capture_{counter}.jpg'
filepath = os.path.join(app.config['UPLOAD_FOLDER'], filename)
if not os.path.exists(filepath):
break
counter += 1
with open(filepath, 'wb') as f:
f.write(image_data)
asyncio.run(send_message_with_image(API_TOKEN, CHAT_ID,
filepath, TEXT))
play_alert_sound(animal_class)
# Render the main.html template with the detected animal class and captured frame filename
return render_template('main.html', animal_class=animal_class,
image_filename=filename)
else:
# Confidence score below threshold, no further processing
return render_template('main.html', animal_class=None,
image_filename=None) # Update template accordingly
# If it's a GET request or the file upload failed, simply render the main.html template
return render_template('main.html')

@app.route('/captures/<filename>')
def uploaded_file(filename):
return send_from_directory(app.config['UPLOAD_FOLDER'], filename)

if __name__ == '__main__':
with app.app_context():
db.create_all()
app.run(debug=True)
main.html

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Animal Detection</title>
<link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
</head>
<body>
<div id="mySidenav" class="sidenav">
<a href="javascript:void(0)" class="closebtn"
onclick="closeNav()">&times;</a>
<a href="/logout">Log out</a>
</div>
<span style="font-size:30px;cursor:pointer;vertical-align: top;position:fixed;"
onclick="openNav()">&#9776;</span>
<div class="container">
<header class="header">
<h1 id="title">
Animal Detection
</h1>
</header>
<div class="webcam">
<div class="form-group">
<label>Webcam</label><br>
<button class="webcam-button" onclick="openWebcam()">Open
Webcam</button><br>
<video id="webcam" width="512" height="512" autoplay
style="display:none;"></video>
<canvas id="canvas" style="display:none;"></canvas>
<button onclick="closeWebcam()">Close Webcam</button>
</div>
</div>

<form id="upload-form" action="/main" method="POST"


enctype="multipart/form-data">

<div class="form-group">
<!-- Display detected animal class-->
<h2>Detected Animal: <span id="detected_animal"> {{ animal_class }}
</span></h2>
</div>

<div class="form-group">
<!-- Display uploaded image if available -->
<img id="captured_image" src="{{ url_for('uploaded_file',
filename=image_filename) }}" alt="Uploaded Image">
</div>

<div class="form-group">
<label for="file">Upload Image</label><br>
<input type="file" name="file" id="image" accept="image/*"
onchange="previewImage(this);">
<div style="display: flex; justify-content: center;">
<img id="preview" src="#" alt="Preview Image" style="display: none;
max-width: 300px; max-height: 300px;">
</div>

<br>
<button type="submit">Upload Image</button>
</div>
</form>

<script>
let videoStream; // Variable to store the video stream

function openNav() {
document.getElementById("mySidenav").style.width = "250px";
}

function closeNav() {
document.getElementById("mySidenav").style.width = "0";
}

function previewImage(input) {
var preview = document.getElementById('preview');
if (input.files && input.files[0]) {
var reader = new FileReader();
reader.onload = function (e) {
preview.src = e.target.result;
preview.style.display = 'block';
};
reader.readAsDataURL(input.files[0]);
}
}

function openWebcam() {

const video = document.getElementById('webcam');
const canvas = document.getElementById('canvas');

navigator.mediaDevices.getUserMedia({ video: true })


.then(stream => {
videoStream = stream; // Store the stream in a global variable
video.srcObject = stream;
video.onloadedmetadata = () => {
video.play();
video.style.display = 'block'; // Display the video stream
captureFrames(); // Start capturing frames
};
})
.catch(error => {
console.error('Error accessing webcam:', error);
});
}

function captureFrames() {
const video = document.getElementById('webcam');
const canvas = document.getElementById('canvas');

const context = canvas.getContext('2d');


canvas.width = video.videoWidth;
canvas.height = video.videoHeight;

// Continuously capture frames


videoStream.captureInterval = setInterval(() => {
context.drawImage(video, 0, 0, canvas.width, canvas.height);

const imageData = canvas.toDataURL('image/jpeg');
console.log("Captured image data:", imageData); // Debug captured data

// Send captured image data for detection


sendImageDataForDetection(imageData);
}, 4000); // Adjust the interval as needed
}

function updateDisplay(htmlResponse) {
console.log("Updating display with HTML response:", htmlResponse);
// Create a temporary div element to hold the HTML response
const tempDiv = document.createElement('div');
tempDiv.innerHTML = htmlResponse;

// Extract the detected animal class and image filename from the HTML
response
const animalClassElement = tempDiv.querySelector('#detected_animal');
const uploadedImageElement =
tempDiv.querySelector('#captured_image');

console.log("Extracted animal class element:", animalClassElement);


console.log("Extracted uploaded image element:",
uploadedImageElement);

// Update the display with the detected animal class and uploaded image
document.getElementById('detected_animal').innerHTML =
animalClassElement.innerHTML;
// Update the source of the uploaded image
document.getElementById('captured_image').src =

uploadedImageElement.src;
}

function sendImageDataForDetection(imageData) {
// Create a new FormData object
const formData = new FormData();
formData.append('image', imageData);

// Send the FormData object using fetch


fetch('/main', {
method: 'POST',
body: formData
})
.then(response => response.text()) // Assuming the server responds with the
animal class and image filename
.then(htmlResponse => {
// Update the display based on the rendered HTML response
console.log("Received HTML response from server:", htmlResponse);
updateDisplay(htmlResponse);
})
.catch(error => {
console.error('Error:', error);
});
}

function closeWebcam() {
const video = document.getElementById('webcam');
if (videoStream && videoStream.captureInterval) {
clearInterval(videoStream.captureInterval); // Stop capturing frames

videoStream.getTracks().forEach(track => track.stop()); // Stop the video
stream
video.srcObject = null; // Remove the video source (stream)
}
video.style.display = 'none'; // Hide the video element
canvas.style.display = 'none'; // Hide the canvas element
}
</script>
</div>
</body>
</html>

login.html

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Login</title>
<link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
</head>
<body>
<div class="container">
<header class="header">
<h1 id="title">
Login
</h1>
</header>
<form action="/login" method="POST">
<div class="form-group">
<label for="credential">Email / Phone Number</label>
<input type="text" name="credential" id="credential"
class="formControl" placeholder="Email / Password" required>
</div>

<div class="form-group">
<label for="password">Password</label>
<input type="password" name="password" id="password"
class="formControl" placeholder="Password" required>
</div>

<div class="form-group">
<button type="submit" id="login" class="btn">LOGIN</button>
</div>

<div class="form-group">
<label for="newaccount">Don't have an account?</label>
<a href="/register" class="btn">REGISTER</a>
</div>
{% with messages = get_flashed_messages(with_categories=true) %}
{% if messages %}
{% for category, message in messages %}
<div class= "{{ category }}">
<span class="closebtn">&times;</span>
<p3>
{{ message }}</p3>
{% endfor %}
</div>

{% endif %}
{% endwith %}
</form>
<script>
var close = document.getElementsByClassName("closebtn");
var i;

for (i = 0; i < close.length; i++) {


close[i].onclick = function(){
var div = this.parentElement;
div.style.opacity = "0";
setTimeout(function(){ div.style.display = "none"; }, 600);
}
}
</script>
</div>
</body>
</html>

register.html

<!-- app/templates/register.html -->


<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Register</title>
<link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">

</head>
<body>
<div class="container">
<header class="header">
<h1 id="title">
Register
</h1>
</header>
<form action="/register" method="POST">

<div class="form-group">
<label for="name">Name</label>
<input type="text" name="name" id="name" class="formControl"
placeholder="Name" required>
</div>

<div class="form-group">
<label for="email">Email</label>
<input type="email" name="email" id="email" class="formControl"
placeholder="Email" required>
</div>

<div class="form-group">
<label for="phone">Phone Number</label>
<input type="tel" name="phone" id="phone" class="formControl"
placeholder="Phone Number" required>
</div>

<div class="form-group">

<label for="password">Password</label>
<input type="password" id="password" name="password"
class="formControl" placeholder="Password" required>
<p>Password should be at least 8 characters long and contain at least one number, one special character, one lowercase letter and one uppercase letter</p>
</div>

<div class="form-group">
<label for="confirmpassword">Confirm Password</label>
<input type="password" id="confirmpassword"
name="confirmpassword" class="formControl" placeholder="Confirm Password"
required>
</div>

<div class="form-group">
<button type="submit" id="register" class="btn">REGISTER</button>
</div>

{% with messages = get_flashed_messages(with_categories=true) %}


{% if messages %}
{% for category, message in messages %}
<div class= "{{ category }}">
<span class="closebtn">&times;</span>
<p3>
{{ message }}</p3>
{% endfor %}
</div>
{% endif %}
{% endwith %}

<script>
var close = document.getElementsByClassName("closebtn");
var i;

for (i = 0; i < close.length; i++) {


close[i].onclick = function(){
var div = this.parentElement;
div.style.opacity = "0";
setTimeout(function(){ div.style.display = "none"; }, 600);
}
}
</script>
</form>
</div>
</body>
</html>

style.css

@import url('https://fonts.googleapis.com/css2?family=Poppins&display=swap');

*,*::before,*::after{
box-sizing: border-box;
}

body{
font-family: 'Poppins',sans-serif;
font-size: 1rem;
font-weight: 100;
line-height: 1.4;
color: #FFFFFF;
}

body::before{
content: '';
position: fixed;
top: 0;
left: 0;
height: 100%;
width: 100%;
z-index: -1;
background-color:#B1A285 ;
background-size: cover;
background-repeat: no-repeat;
background-position: center;
}

.container{
width: 100%;
margin: 0 auto 0 auto;
padding:1.8rem 1rem;
}

form{
background: #FBFBF8;
border-radius: 0.25rem;
}

@media (min-width: 480px){

form{
padding: 2.5rem;
}
}

.formControl{
display: block;
width: 100%;
height: 2.375rem;
padding: 0.375rem 0.75rem;
color: #1A1D20;
background-color: #FFF;
background-clip: padding-box;
border: 1px solid #696767;
border-radius: 0.25rem;
transition: border-color 0.15s ease-in-out,box-shadow 0.15s ease-in-out;
}

.formControl:focus{
border-color:#696767;
outline: 0;
box-shadow:0 0 0 0.2rem #21212240;
}

.form-group{
margin: 0 auto 1.25rem auto;
padding: 0.25rem;
}

input,button{
margin: 0;
font-family: inherit;
font-size: inherit;
line-height: inherit;
}

a{
text-decoration: solid;
text-align: center;
}

label,h2{
color:#535151;
display: flex;
align-items: center;
font-size: 1.125rem;
margin-bottom: 0.5rem;
font-weight: bold;
}

#title{
color: #FFFFFF;
font-weight: 600;
text-shadow: 2px 2px 2px #00000040;
}

h1{
font-weight: 400;

line-height: 1.2;
}

p3{
font-size: 1.125rem;
color:#FFFFFF;
padding-left: 10px;
}

h1,p,p3,ol,li{
margin-top: 0;
margin-bottom: 0.5rem;
}

.btn{
display: block;
width: 100%;
padding:0.5rem 0.75rem;
background: #B1A285;
color: inherit;
border-radius: 15px;
cursor: pointer;
outline: none;
text-transform: uppercase;
font-size: 1.5rem;
color: #201F1F;
border: none;
}

.webcam{
background: #FBFBF8;
border-radius: 0.25rem;
padding: 2.5rem;
}

ol,li{
font-size: 1.125rem;
color:#0F0F0F;
}

p{
font-size: 1.125rem;
color:#645944;
font-weight: bold;
}

.sidenav{
height: 100%;
width: 0;
position: fixed;
z-index: 1;
top: 0;
left: 0;
background-color: #FBFBF8;
overflow-x: hidden;
transition: 0.5s;
padding-top: 60px;
}

.sidenav a{
padding: 8px 8px 8px 32px;
text-decoration: none;
font-size: 25px;
text-align: left;
color: #000000;
display: block;
transition: 0.3s;
}

.sidenav a:hover{
color: #FCFBF4;
background-color: #3F3737;
}

.sidenav .closebtn{
position: absolute;
top: 0;
right: 25px;
font-size: 36px;
margin-left: 50px;
}

#main{
transition: margin-left .5s;
padding: 16px;
}

@media screen and (max-height: 450px){
.sidenav {padding-top: 15px;}
.sidenav a {font-size: 18px;}
}

.message {
margin-bottom: 10px;
}

.message p {
margin: 5px 0;
color: #0F0F0F;
font-weight: 100;
}

.message strong {
color: #171718;
}

@media (min-width:800px){
.container{
max-width: 760px;
}
}

.error {
border-radius: 15px;
background-color: #F44336;
color: #FFFFFF;

opacity: 1;
transition: opacity 0.6s;
}

.closebtn {
margin-left: 15px;
color: #FFFFFF;
font-weight: bold;
float: right;
font-size: 22px;
line-height: 20px;
cursor: pointer;
transition: 0.3s;
padding-right: 10px;
}
