Intelligent Wildlife Deterrent System

Bonafide page
Declaration page
Acknowledgement page
ABSTRACT
TABLE OF CONTENTS
ABSTRACT
LIST OF TABLES
1 INTRODUCTION
1.1 Overview
1.2 Wildlife Intrusion and Agricultural Damage
1.3 Traditional Deterrents and Limitations
1.4 The Need for Innovative Humane Solutions
1.5 Objectives of the Project
1.6 Organization of the Report
2 LITERATURE SURVEY
3 SYSTEM ANALYSIS
3.1 Existing System
3.2 Proposed System
4 SYSTEM REQUIREMENTS
4.1 Software Requirements
4.2 Hardware Requirements
4.3 About the Software
5 SYSTEM DESIGN
5.1 System Architecture
5.2 Use Case Diagram
5.3 Data Flow Diagram
5.4 Modules and Functionalities
5.5 Algorithms and Techniques
6 EXPERIMENTAL RESULTS
6.1 Results and Discussion
6.2 Performance Measures
APPENDIX A - Screenshots
REFERENCES
LIST OF FIGURES
1.3.4 Scarecrow
LIST OF ABBREVIATIONS
LIST OF TABLES
CHAPTER 1
INTRODUCTION
1.1 OVERVIEW
1.2 WILDLIFE INTRUSION AND AGRICULTURAL DAMAGE
1.3 TRADITIONAL DETERRENTS AND LIMITATIONS:
1.3.1 Fencing:
FIGURE 1.3.1 ELECTRIC FENCE
Barbed Wire Fences: Barbed wire fences are constructed with
sharp wire strands intended to cause discomfort or injury if animals attempt to
breach the barrier. They are generally used for larger animals like cattle and
deer, and are noted for their ability to withstand physical pressure.
Mesh Fences: Made from various mesh sizes, these fences aim to
prevent animals from passing through by creating a physical obstruction. Mesh
fences can be tailored in height and mesh size to accommodate different types
of animals, including smaller species.
FIGURE 1.3.3 MESH FENCE
1.3.2 Scarecrows:
FIGURE 1.3.4 SCARECROW
Animals such as deer and wild boars have demonstrated remarkable adaptability to
traditional deterrents. Fencing, while providing a physical barrier, may be
breached by persistent or determined animals, especially if the fence is not
properly maintained. Scarecrows, although effective initially, tend to lose their
deterrent effect as animals become accustomed to them. Chemical repellents,
while effective to some extent, often require frequent reapplication and can
have adverse effects on the environment and non-target species.
1.4 THE NEED FOR INNOVATIVE HUMANE SOLUTIONS:
Innovative deterrents aim to minimize harm and stress to animals. For instance, ultrasonic deterrents can be designed
to be unpleasant to wildlife without causing physical harm. Similarly, visual
deterrents can be made to simulate predator presence or other threats in a way
that discourages animals from entering agricultural areas but does not cause
injury.
1.5 OBJECTIVES OF THE PROJECT:
1.6 ORGANIZATION OF THE REPORT:
Chapter 1 gives an overview of the project and an introduction to it, along
with a summary. Chapter 2 deals with the literature survey of related
applications, along with a summary. Chapter 3 explains the proposed system,
the existing system, and the disadvantages of the existing system. Chapter 4
deals with the system software and hardware requirements. Chapter 5 presents
the project design: the system architecture, data flow, implementation, and the
list of modules with their descriptions. Chapter 6 gives the experimental
results, how the output is obtained, and the performance of the project.
Chapter 7 presents the conclusion.
CHAPTER 2
LITERATURE SURVEY
Abstract
The issue of animal attacks is increasingly concerning for rural populations
and forestry workers. To track the movement of wild animals, surveillance cameras
and drones are often employed. However, an efficient model is required to detect the
animal type, monitor its locomotion and provide its location information. Alert
messages can then be sent to ensure the safety of people and foresters. While
computer vision and machine learning-based approaches are frequently used for
animal detection, they are often expensive and complex, making it difficult to
achieve satisfactory results. This paper presents a Hybrid Visual Geometry Group
(VGG)−19+ Bidirectional Long Short-Term Memory (Bi-LSTM) network to detect
animals and generate alerts based on their activity. These alerts are sent to the local
forest office as a Short Message Service (SMS) to allow for immediate response. The
proposed model exhibits great improvements in performance, with an average
classification accuracy of 98%, a mean Average Precision (mAP) of 77.2%, and a
frame rate of 170 Frames Per Second (FPS). The model was tested both qualitatively and
quantitatively using 40,000 images from three different benchmark datasets with 25
classes and achieved a mean accuracy and precision of above 98%. This model is a
reliable solution for providing accurate animal-based information and protecting
human lives.
[2] Smart Agriculture Land Crop Protection Intrusion Detection
Using Artificial Intelligence
Author Name: Kiruthika S, Sakthi P, Sanjay K, Vikraman N, Premkumar
T, Yoganantham R, and Raja M
Published Year: July 2023
Abstract
Human-wildlife conflict is the term used to describe when human activity
results in a negative outcome for people, their resources, wild animals, or their
habitat. Human population growth encroaches on wildlife habitat, resulting in a
decrease in resources. In particular habitats, conflict causes many forms of death or
injury to humans and domesticated animals. Farmers, and the animals that invade
farmland, suffer greatly as a result. Our project's primary objective is to
lessen human-animal conflict and loss. The embedded system and image processing
technique are utilized in the project. Python is used to perform image processing
techniques like segmentation, statistical and feature extraction using expectation
maximization, and classification using CNN. The classification is used to determine
whether the land is empty or if animals are present. A buzzer sound is produced, a
light electric current is passed to the fence, and a message alerting the farmer to the
animal’s entry into the farmland is transmitted. This prevents the animal from
entering the field and enables the landowner to take the necessary steps to get the
animal back to the forest. The result is sent serially from the control board to the
controller.
[3] An Accurate and Fast Animal Species Detection System for
Embedded Devices
Author Name: Mai Ibraheam, Kin Fun Li, Fayez Gebali
Published Year: 03 March 2023
Abstract
Encounters between humans and wildlife often lead to injuries, especially in
remote wilderness regions, and highways. Therefore, animal detection is a vital
safety and wildlife conservation component that can mitigate the negative impacts of
these encounters. Deep learning techniques have achieved the best results compared
to other object detection techniques; however, they require many computations and
parameters. A lightweight animal species detection model based on YOLOv2 was
proposed. It was designed as a proof of concept and as a first step toward building a
real-time mitigation system with embedded devices. Multi-level features merging is
employed by adding a new pass-through layer to improve the feature extraction
ability and accuracy of YOLOv2. Moreover, the two repeated 3×3 convolutional
layers in the seventh block of the YOLOv2 architecture are removed to reduce
computational complexity, and thus increase detection speed without reducing
accuracy. Animal species detection methods based on regular Convolutional Neural
Networks (CNNs) have been widely applied; however, these methods are difficult to
adapt to geometric variations of animals in images. Thus, a modified YOLOv2 with
the addition of deformable convolutional layers (DCLs) was proposed to resolve this
issue. Our experimental results show that the proposed model outperforms the
original YOLOv2 by 5.0% in accuracy and 12.0% in speed. Furthermore, our
analysis shows that the modified YOLOv2 model is more suitable for deployment
than YOLOv3 and YOLOv4 on embedded devices.
[4] Real Time Protection of Farmlands from Animal Intrusion
Author Name: R Sumathi, P Raveena, P Rakshana, P Nigila, P
Mahalakshmi
Published Year: 18 August 2022
Abstract
Crop vandalization by animals is becoming an area of concern nowadays.
When an animal enters the land, farmers lose their crops, property, and livestock.
It erodes the time and efforts of farmers, who are also affected economically due
to the loss of crops. Conflicts between humans and animals keep putting lives in
danger. Methods like electrocution cause intense pain to animals, sometimes
leading to their death. An effective system for preventing animal intrusion is
therefore increasingly necessary. To address this problem, we implement a system
that provides accurate and adaptive real-time visibility of farmlands. Surveillance
of farmlands is carried out and, when animals are encountered, they are
categorized using the YOLO algorithm and corrective actions are taken depending
on the type of intruder present. Finally, farmers and forest officials are supplied
with geo-locations and images of the intruder. If the presence of animals is still
detected after a few seconds, strong repellents are used as a backup. As a result,
the proposed technology successfully drives away animals without killing them
and reduces human-animal conflict, as it does not require human participation.
[5] Animal Intrusion Detection in Farming Area using YOLOv5
Approach
Author Name: Normaisharah Mamat, Mohd Fauzi Othman, Fitri Yakub
Published Year: 09 January 2023
Abstract
Animal intrusion in the farming area causes significant losses in agriculture.
It threatens not only the safety of farmers but also contributes to crop damage.
Providing effective solutions for human-animal conflict is now one of the most
significant challenges all over the world. Therefore, early detection of animal
intrusion via automated methods is essential. Recent deep learning-based methods
have become popular in solving these problems by generating high detection ability.
In this study, the YOLOv5 method is proposed to detect four categories of animals
commonly involved in farming intrusion areas. YOLOv5 can generate high accuracy
in detection using cross stage partial network (CSP) as a backbone. This network is
employed to extract the beneficial characteristics from an input image. The results of
the implementation of this method show that it can detect animal intrusion very
effectively, achieving nearly 94% mAP. The results demonstrate that the proposed
model reaches state-of-the-art results for these
problems.
[6] Development of Animal-Detection System using Modified CNN
Algorithm
Author Name: Sheik Mohammed. S, T. Sheela, T. Muthumanickam
Published Year: 16 January 2023
Abstract
In the present scenario, almost all crop cultivation in farmlands is likely to be
damaged by the intrusion of animals like wild boars, elephants, buffaloes, birds,
etc. This may cause huge losses to the farmers, yet it is quite impossible to stay
alert in the farm field around the clock to protect the crops. To
surmount the above problem, a prototype for animal intrusion detection has been
designed using a modified CNN algorithm to efficiently detect the existence of
animal intrusion in the crop field. It provides an alert signal to indicate while averting
the animal with no injuries. This paper proposes a system that includes a PIR sensor,
Thermal Imaging camera, GSM module and hologram connected with the Raspberry
Pi module. A Modified CNN algorithm is used to validate the captured animal image
and later alert the user. Absolute crop protection is guaranteed from animal trespass,
thereby protecting farmers from huge losses.
[7] Efficient Wildlife Intrusion Detection System using Hybrid
Algorithm
Author Name: Divya Meena, Hari Krishna P, Chakka Naga Venkata
Jahnavi, Patri Lalithya Manasa, J Sheela
Published Year: 29 December 2022
Abstract
Human-wildlife conflict arises when the needs and behavior of animals have
a detrimental influence on humans or when humans have a negative impact on the
needs of wildlife. The primary causes of Man-Wildlife Conflicts include agricultural
expansion, human settlement, livestock overgrazing, deforestation, illegal grass
gathering, and poaching. Each year, human-animal conflict in human habitats causes
a massive loss of resources and puts lives in jeopardy. As the global human population
continues to force wildlife out of their natural habitats, conflicts are unavoidable,
which is why habitat loss is one of the most prevalent dangers to endangered animals.
So, it is necessary to detect animals and identify the animal detected to reduce the
effects of human-animal conflict. This research study has developed a hybrid
algorithm, which classifies animal images into multiple groups using YOLO v5 (You
only look once) combined with CNN. The proposed system distinguishes whether the
animal is in human environment or not, and then reliably distinguishes which animal
class it belongs to using CNN. The model has been put through its paces on a
variety of tasks to determine how well it performs in various scenarios. The
system is being fine-tuned with the goal of attaining the most accurate results
possible in recognizing and decreasing hazards posed by animal invasions into
human land. These experimental findings show that the YOLOv5 technique
paired with CNN can properly categorize animals in habitats, with a 92.5% accuracy
from the proposed model.
CHAPTER 3
SYSTEM ANALYSIS
CHAPTER 4
SYSTEM REQUIREMENTS
For Web Client: Google Chrome, Edge, Safari, Mozilla Firefox, etc.
4.3.1 PYTHON:
There are two major Python versions, Python 2 and Python 3, and the two are
quite different. Python's popularity keeps growing because of its emphasis on
code readability, shorter code, and ease of writing: programmers can express
logical concepts in fewer lines of code than in languages such as C++ or Java.
Python supports multiple programming paradigms, including object-oriented,
imperative, functional, and procedural programming, and there are built-in
functions for almost all frequently used concepts. Its philosophy is "Simplicity
is the best".
Interpreted: There are no separate compilation and execution steps, unlike C
and C++.
Platform Independent: Python can be used on Linux, Windows, Macintosh,
Solaris and many more system platforms.
Free and Open Source: Python is free to use and redistributable.
High-level Language: In Python, there is no need to take care of low-level
details such as managing the memory used by the program.
Simple: More emphasis is placed on the solution to the problem rather than on
the syntax.
Embeddable: Python code can be embedded within programs written in other
languages.
Besides the standard library, there are various other high-quality libraries,
such as the Python Imaging Library, an amazingly simple image manipulation
library, and many widely used software products are built with Python.
4.3.2 FLASK:
4.3.2.1 Setting Up The Project Structure:
Create a couple of folders and files within flaskapp/ to keep the web app
organized. Within flaskapp/, create a folder, app/, to contain all your files.
Inside app/, create a folder static/; this is where you'll put your web app's
images, CSS, and JavaScript files, so create folders for each of those.
Additionally, create another folder, templates/, to store the app's web
templates. Create an empty Python file routes.py for the application logic,
such as URL routing. And no project is complete without a helpful
description, so create a README.md file as well.
4.3.2.2 Working:
1. A user issues a request for a domain's root URL / to go to its home page.
2. Flask finds the Python function that is mapped to the requested URL.
3. The Python function finds a web template living in the templates/ folder.
4. A web template will look in the static/ folder for any images, CSS, or
JavaScript files it needs as it renders to HTML.
5. Rendered HTML is sent back to app.py.
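A minimal sketch of this flow under the folder layout described above (the
index.html template name is an assumed placeholder):

# routes.py - a minimal sketch of Flask URL routing
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def home():
    # Steps 2-4: this function is mapped to the root URL and renders a
    # template from templates/, which pulls assets from static/.
    return render_template('index.html')

if __name__ == '__main__':
    app.run(debug=True)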
4.3.3 KERAS:
Keras is built on a minimal structure that provides a clean and easy way to
create deep learning models on top of TensorFlow or Theano. It is designed to
let developers define deep learning models quickly, which makes Keras an
optimal choice for deep learning applications.
4.3.3.1 Features:
4.3.3.2 Benefits:
Keras is a highly powerful and dynamic framework that comes with the
following advantages:
Larger community support.
Keras neural networks are written in Python, which makes things simpler.
Deep learning models are discrete components that you can combine in
many ways.
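A brief sketch of this composability (the layer sizes and the 10-class output
are illustrative assumptions, not values from this project):

from keras.models import Sequential
from keras.layers import Dense, Dropout

# Discrete layer components combined into a model
model = Sequential([
    Dense(128, activation='relu', input_shape=(64,)),  # feature layer
    Dropout(0.2),                                      # regularization component
    Dense(10, activation='softmax'),                   # classification head
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])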
4.3.4 TENSORFLOW:
TensorFlow is an open-source software library. TensorFlow was
originally developed by researchers and engineers working on the Google Brain
Team within Google‘s Machine Intelligence research organization for the
purposes of conducting machine learning and deep neural networks research,
but the system is general enough to be applicable in a wide variety of other
domains as well. Let us first try to understand what the word TensorFlow
actually means.
TensorFlow is basically a software library for numerical computation
using data flow graphs where nodes in the graph represent mathematical
operations. Edges in the graph represent the multidimensional data arrays
(called tensors) communicated between them. (Please note that tensor is the
central unit of data in TensorFlow).
4.3.4.1 TensorFlow APIs:
A high-level API is built on top of TensorFlow Core; it is easier to learn and
use than TensorFlow Core, and it makes repetitive tasks easier and more
consistent between different users. tf.contrib.learn is an example of a
high-level API.
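A small sketch of the dataflow idea in modern TensorFlow's eager mode (the
values are illustrative):

import tensorflow as tf

# Nodes are operations; the edges between them carry tensors.
a = tf.constant([[1.0, 2.0]])        # 1x2 tensor
b = tf.constant([[3.0], [4.0]])      # 2x1 tensor
c = tf.matmul(a, b)                  # matrix-multiplication node
print(c.numpy())                     # [[11.]]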
CHAPTER 5
SYSTEM DESIGN
5.2 USE CASE DIAGRAM:
A use case diagram will often be accompanied by other types of diagrams as
well. The use cases are represented by either circles or ellipses.
The main purpose of a use case diagram is to portray the dynamic aspect
of a system. It accumulates the system's requirements, including both internal
and external influences. It involves persons, use cases, and the other elements
accountable for implementing the diagram, and it represents how an entity
from the external environment can interact with a part of the system.
5.3 DATA FLOW DIAGRAM:
It uses defined symbols like rectangles, circles and arrows, plus short
text labels, to show data inputs, outputs, storage points and the routes between
each destination. Data flowcharts can range from simple, even hand-drawn
process overviews, to in-depth, multi-level DFDs that dig progressively deeper
into how the data is handled. They can be used to analyze an existing system or
model a new one. Like all the best diagrams and charts, a DFD can often
visually say things that would be hard to explain in words, and they work for
both technical and nontechnical audiences.
0th LEVEL
1st LEVEL
2nd LEVEL
5.4 MODULES AND FUNCTIONALITIES:
Dataset Preparation
Model Training
Data augmentation exposes the model to varied scenarios and improves its
generalization ability. Ultimately, the combination of
data cleaning and augmentation techniques optimizes the quality, diversity, and
size of the dataset, laying a solid foundation for effective model training and
performance in tasks such as image classification and object detection.
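A hedged sketch of such an augmentation pipeline with Keras (the transform
ranges are illustrative assumptions, not the project's exact settings):

from keras.preprocessing.image import ImageDataGenerator

# Augmentation exposes the model to varied views of each training image
train_datagen = ImageDataGenerator(
    rescale=1./255,           # normalize pixel values
    rotation_range=20,        # random rotations
    zoom_range=0.2,           # random zoom
    horizontal_flip=True)     # mirror images

train_generator = train_datagen.flow_from_directory(
    'animaldataset/train', target_size=(256, 256),
    batch_size=32, class_mode='categorical')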
Effective model training and evaluation are pivotal for achieving high
performance in deep learning applications. Our approach involved training a
range of models using transfer learning to leverage pre-existing knowledge from
large-scale datasets. Specifically, we fine-tuned VGG16, ResNet50,
DenseNet121, EfficientNet (B0, B1, B2), Xception, InceptionV3,
MobileNetV2, NASNetMobile, and NASNetLarge to our dataset. We began
with basic versions of these models and assessed their performance based on
accuracy and loss. The following table summarizes the performance metrics of
the trained models:
Model             Loss (%)   Accuracy (%)
VGG16             44         87
ResNet50          48         87.7
DenseNet121       37         93
EfficientNetB0    23         93.4
EfficientNetB1    23         93.6
EfficientNetB2    19         94
Xception          22         93.8
InceptionV3       30         91.8
MobileNetV2       52         87
NASNetMobile      39         91.6
NASNetLarge       24         95.8
TABLE 5.4.1 MODEL TRAINING RESULTS
We decided to focus on NASNetLarge, Xception, and EfficientNetB2
for subsequent fine-tuning based on their superior performance metrics, with
each demonstrating high accuracy and relatively low loss in the initial
evaluations. During this phase, we experimented with various hyperparameters,
such as learning rates, optimizers, number of layers and epoch numbers, to
optimize the models' performance. The objective was to enhance predictive
accuracy and minimize loss further. The retraining process revealed that
NASNetLarge consistently outperformed the other models, achieving an
impressive final accuracy of 96% and a significantly reduced loss of 19%.
These metrics were further validated using a hold-out test set to ensure
robustness and reliability.
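A hedged sketch of this transfer-learning setup (the input size is
NASNetLarge's ImageNet default; the 13-class head and the SGD settings are
illustrative assumptions):

from keras.applications import NASNetLarge
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model
from keras.optimizers import SGD

# Start from ImageNet weights and freeze the pre-trained feature extractor
base = NASNetLarge(weights='imagenet', include_top=False,
                   input_shape=(331, 331, 3))
base.trainable = False

x = GlobalAveragePooling2D()(base.output)
outputs = Dense(13, activation='softmax')(x)    # one unit per animal class
model = Model(base.input, outputs)

model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])

During fine-tuning, upper layers of the base network can be unfrozen and
retrained at a lower learning rate.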
Sound generation and integration play a crucial role in our Wild Animals
Deterrent System for Crop Protection, enhancing its effectiveness in deterring
wildlife intrusion while minimizing harm to both animals and crops. To
accomplish this, we first define animal classes along with their corresponding
frequencies, ensuring that the generated sound stimuli are tailored to each
specific species. We then implement a function using NumPy to generate
sinusoidal waveforms with the specified frequencies and durations, enabling
precise control over the characteristics of the generated sounds. These generated
sound files are saved in WAV format for each animal class and stored in the
'animal_sounds' directory, ensuring accessibility and ease of integration with the
overall system architecture.
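A hedged sketch of this step (the frequency map, amplitude, and sample rate
are illustrative assumptions, not the project's exact values):

import os
import numpy as np
from scipy.io import wavfile

animal_frequencies = {'elephant': 400, 'wild boar': 1200, 'deer': 2000}  # Hz, assumed
SAMPLE_RATE = 44100

def generate_tone(frequency, duration=3.0):
    # Sinusoidal waveform with the given frequency and duration
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    return (0.5 * np.sin(2 * np.pi * frequency * t) * 32767).astype(np.int16)

os.makedirs('animal_sounds', exist_ok=True)
for animal, freq in animal_frequencies.items():
    wavfile.write(os.path.join('animal_sounds', f'{animal}.wav'),
                  SAMPLE_RATE, generate_tone(freq))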
The system operates through the coordination of detection algorithms or
sensors, which identify the
presence of animals within the monitored area. Upon detection, the system
initiates the playback of the appropriate sound file associated with the detected
animal class. By synchronizing the detection and sound generation components
of our system, we create a responsive and dynamic deterrent mechanism that
effectively mitigates wildlife intrusion while minimizing the need for physical
barriers or harmful repellents. This integrated approach not only enhances the
efficacy of our crop protection system but also promotes coexistence between
humans and wildlife by providing a humane and non-invasive solution to
wildlife management challenges in agricultural environments.
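The play_alert_sound function invoked in Appendix B could be sketched as
follows (the playsound package is an assumption; any audio backend would do):

from playsound import playsound

def play_alert_sound(animal_class):
    # Play the WAV file generated for the detected class
    playsound(f'animal_sounds/{animal_class}.wav')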
5.5.1 NASNetLarge:
The NASNetLarge model is built upon two primary components:
Normal Cells and Reduction Cells. Normal Cells preserve the spatial
dimensions of the input, performing standard convolutions and other operations
to extract features. Reduction Cells, on the other hand, reduce the spatial
dimensions, typically by a factor of two, allowing the network to increase its
depth and capture more abstract features. These cells are stacked together in a
repetitive manner, forming a deep and hierarchical structure that excels at
learning complex patterns in images.
Stochastic Gradient Descent (SGD) is used to optimize the model's
parameters, improving accuracy and robustness.
The core idea behind SGD is to update the model parameters in the
direction that minimizes the loss function, gradually converging towards the
optimal solution. Unlike batch gradient descent, which computes the gradient of
the loss function with respect to all training samples, SGD updates the
parameters using a single randomly selected training sample or a small subset of
samples (mini-batch) at each iteration. This stochastic sampling of training data
introduces noise into the parameter updates, which helps the optimization
process escape local minima and saddle points and enables faster convergence.
SGD iteratively updates the parameters using the computed gradient for
a predefined number of iterations (epochs) or until convergence criteria are met.
The learning rate is a critical hyperparameter in SGD, as it determines the step
size of the parameter updates. Choosing an appropriate learning rate is essential
to ensure stable convergence and prevent oscillations or divergence during
training.
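A minimal NumPy illustration of the mini-batch update described above (toy
linear-regression data; the learning rate and batch size are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                    # toy features
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=1000)

w = np.zeros(3)
lr, batch = 0.1, 32                               # step size and mini-batch size
for epoch in range(20):
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch):
        idx = order[start:start + batch]          # random mini-batch
        grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)  # MSE gradient
        w -= lr * grad                            # parameter update
print(w)                                          # approaches w_true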
5.5.5 ReduceLROnPlateau:
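ReduceLROnPlateau is a Keras callback that lowers the learning rate once a
monitored metric stops improving, helping training settle into a minimum. A
brief sketch, with parameter values mirroring the training code in Appendix B:

from keras.callbacks import ReduceLROnPlateau

# Shrink the learning rate by 5% whenever validation loss has not
# improved for 2 epochs, down to a floor of 1e-5.
scheduler = ReduceLROnPlateau(monitor='val_loss', patience=2,
                              factor=0.95, min_lr=1e-5)
# Passed to training via: model.fit(..., callbacks=[scheduler])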
CHAPTER 6
EXPERIMENTAL RESULTS
6.2 PERFORMANCE MEASURES:
6.2.1 Precision:
Precision measures the fraction of positive predictions that are correct:
Precision = TP / (TP + FP)
where TP is the number of True Positives and FP the number of False
Positives. Precision is low when the denominator TP + FP is much larger
than TP, and high when most positive predictions are true positives.
6.2.2 Recall:
Recall can be used with more than two classes. In multi-class classification,
recall is calculated per class as:
Recall = TP / (TP + FN)
where FN is the number of False Negatives.
6.2.4 F1 Score:
The goal of the F1 score is to combine the precision and recall metrics
into a single metric. At the same time, the F1 score has been designed to work
well on imbalanced data.
F1 score formula
The F1 score is defined as the harmonic mean of precision and recall. As
a short reminder, the harmonic mean is an alternative metric for the more
common arithmetic mean. It is often useful when computing an average rate. In
the F1 score, we compute the average of precision and recall. They are both
rates, which makes it a logical choice to use the harmonic mean. The F1 score
formula is shown here:
F1 = 2 × (Precision × Recall) / (Precision + Recall)
FIGURE 6.2.1 PRECISION, RECALL, F1 SCORE
For two prediction classes, the confusion matrix is a 2×2 table; for three
classes, it is a 3×3 table, and so on. The matrix has two dimensions,
predicted values and actual values, along with the total number of predictions.
Predicted values are the values predicted by the model, and actual values are
the true values for the given observations.
True Negative: the model predicted No, and the actual value was also No.
True Positive: the model predicted Yes, and the actual value was also Yes.
False Negative: the model predicted No, but the actual value was Yes; this is
also called a Type-II error.
False Positive: the model predicted Yes, but the actual value was No; this is
also called a Type-I error.
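These measures can be computed directly with scikit-learn, as in this hedged
sketch (the label vectors are toy values):

from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, f1_score)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # actual values (toy data)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # predicted values (toy data)

print(confusion_matrix(y_true, y_pred))   # [[TN, FP], [FN, TP]]
print(precision_score(y_true, y_pred))    # TP / (TP + FP)
print(recall_score(y_true, y_pred))       # TP / (TP + FN)
print(f1_score(y_true, y_pred))           # harmonic mean of precision and recall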
APPENDIX – A
SCREENSHOTS
APPENDIX – B
SOURCE CODE
animalpred.ipynb
import zipfile
# Extract dataset from zip file
zf = zipfile.ZipFile('/content/drive/MyDrive/animaldataset.zip', "r")
zf.extractall()
import pathlib
import numpy as np  # used below for np.array
# Display and check class names in training, testing and validation dataset
train_path = '../content/animaldataset/train'
data_dir = pathlib.Path(train_path)
class_names = np.array(sorted([item.name for item in data_dir.glob('*')]))
print(class_names)
test_path = '../content/animaldataset/test'
data_dir1 = pathlib.Path(test_path)
class_names = np.array(sorted([item.name for item in data_dir1.glob('*')]))
print(class_names)
val_path = '../content/animaldataset/val'
data_dir2 = pathlib.Path(val_path)
class_names = np.array(sorted([item.name for item in data_dir2.glob('*')]))
print(class_names)
import random
import matplotlib.pyplot as plt
import os
for i in range(36):
    ax = plt.subplot(6, 6, i+1)
    random_class = random.randint(0, 12)
    folder_path = train_path + '/' + class_names[random_class]
    random_image_path = folder_path + '/' + (random.sample(os.listdir(folder_path), 1)[0])
    image = plt.imread(random_image_path)
    plt.axis('off')
    plt.title(class_names[random_class], fontsize=8, fontweight='bold')
    plt.imshow(image, cmap='gray')
for i in range(36):
    ax = plt.subplot(6, 6, i+1)
    random_class = random.randint(0, 12)
    folder_path = test_path + '/' + class_names[random_class]
    random_image_path = folder_path + '/' + (random.sample(os.listdir(folder_path), 1)[0])
    image = plt.imread(random_image_path)
    plt.axis('off')
    plt.title(class_names[random_class], fontsize=8, fontweight='bold')
    plt.imshow(image, cmap='gray')
for i in range(36):
    ax = plt.subplot(6, 6, i+1)
    random_class = random.randint(0, 12)
    folder_path = val_path + '/' + class_names[random_class]
    random_image_path = folder_path + '/' + (random.sample(os.listdir(folder_path), 1)[0])
    image = plt.imread(random_image_path)
    plt.axis('off')
    plt.title(class_names[random_class], fontsize=8, fontweight='bold')
    plt.imshow(image, cmap='gray')
target_size=IMAGE_SIZE,
batch_size=BATCH_SIZE,
class_mode='categorical',
shuffle=False
)
return model
# Early stopping, learning rate scheduler and model checkpoint callbacks
early_stopping = EarlyStopping(monitor='val_loss', patience=3,
                               restore_best_weights=True)
scheduler = ReduceLROnPlateau(monitor='val_loss', patience=2, min_lr=1e-5,
                              factor=0.95)
checkpoint = ModelCheckpoint('weights_epoch{epoch:02d}.weights.h5',
                             save_weights_only=True)
# Evaluate the model on the test set
loss, accuracy = model.evaluate(test_generator, steps=len(test_generator))
# classification_report requires scikit-learn (import added for completeness)
from sklearn.metrics import classification_report
print("Classification Report:")
print(classification_report(true_labels, y_pred_labels))
# Obtain class labels and indices
class_indices = {}
labels = []
for i, class_folder in enumerate(sorted(os.listdir(demo_folder))):
    class_indices[class_folder] = i
    labels.append(class_folder)
# Make predictions
predicted_probabilities = model.predict(images)
predicted_classes = np.argmax(predicted_probabilities, axis=1)
for i, image in enumerate(images):  # loop header reconstructed from context
    true_label = true_labels[i]
    predicted_label = labels[predicted_classes[i]]
    plt.subplot(5, 5, i+1)
    plt.imshow(image)
    plt.axis('off')
    plt.title(f'True: {true_label}\nPredicted: {predicted_label}')
plt.show()
import shutil
shutil.move("animalpredNASLv2.keras", "/content/drive/MyDrive/")
app.py
import base64
import asyncio  # used for the Telegram coroutine below
import cv2
import numpy as np
from datetime import datetime
from flask import Flask, render_template, request, redirect, flash, send_from_directory
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import text
from keras.models import load_model
from telegram import Bot
app = Flask(__name__)
UPLOAD_FOLDER = 'captures'
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///database.db'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.config['SECRET_KEY'] = 'your_secret_key'
db = SQLAlchemy(app)
# Database model
class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(100), nullable=False)
    email = db.Column(db.String(100), unique=True, nullable=False)
    phone = db.Column(db.String(15), unique=True, nullable=False)
    password = db.Column(db.String(100), unique=True, nullable=False)
# Routes
@app.route('/')
def index():
    return redirect('/login')

# Login route
@app.route('/login', methods=['GET', 'POST'])
def login():
    if request.method == 'POST':
        credential = request.form.get('credential')
        password = request.form.get('password')
        # the credential check that this else pairs with is elided
        # in the original listing
    else:
        flash('Wrong username or password', 'error')
    return render_template("login.html")
# Register route
@app.route('/register', methods=['GET', 'POST'])
def register():
    if request.method == 'POST':
        name = request.form.get('name')
        email = request.form.get('email')
        phone = request.form.get('phone')
        password1 = request.form.get('password')
        repassword = request.form.get('confirmpassword')
        # Confirm password
        if password1 != repassword:
            flash("Passwords don't match", 'error')
        else:
            # Check if already registered
            account = db.session.execute(
                text("SELECT * FROM User WHERE email = :email"),
                {'email': email}).fetchone()
            if account:
                flash('User already registered', 'error')
            # user creation elided in the original listing
            return redirect('login')
    return render_template("register.html")
# Animal classes
animal_classes = ['bear', 'bison', 'deer', 'dhole', 'elephant', 'fox', 'langur',
'leopard', 'macaque', 'rabbit', 'sloth bear', 'tiger', 'wild boar']
def detect_animal(image):
    img = cv2.resize(image, (256, 256))
    img = img.astype('float32') / 255.0
    img = np.expand_dims(img, axis=0)
    # model prediction and class lookup elided in the original listing
async def send_message_with_image(api_token, chat_id, image_path, text):
    # signature reconstructed from the call site below; creation of the Bot
    # and opening of image_file are elided in the original listing
    await bot.send_photo(chat_id=chat_id, photo=image_file, caption=text)
    print("Message sent successfully!")
def detect_human(image):
# Load the pre-trained face detection model
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades +
'haarcascade_frontalface_default.xml')
# Convert the image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Detect faces in the grayscale image
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
minSize=(30, 30))
# Check if any faces are detected
return len(faces) > 0
        TEXT = f'Detected Animal: {animal_class}\nTimestamp: {timestamp}'
        asyncio.run(send_message_with_image(API_TOKEN, CHAT_ID, file_path, TEXT))
        play_alert_sound(animal_class)
        # Render the main.html template with the detected animal class and
        # uploaded image filename
        return render_template('main.html', animal_class=animal_class,
                               image_filename=filename)
    else:
        # Check if image data is received from webcam
        image_data = request.form.get('image')
        # Decode base64 image data
        image_data = np.frombuffer(base64.b64decode(image_data.split(",")[1]),
                                   np.uint8)
        image = cv2.imdecode(image_data, cv2.IMREAD_COLOR)
        # Perform human detection first
        is_human = detect_human(image)
        if not is_human:
            # Perform animal detection; the detection function returns
            # the class and a confidence score
            animal_class, confidence = detect_animal(image)
@app.route('/captures/<filename>')
def uploaded_file(filename):
    return send_from_directory(app.config['UPLOAD_FOLDER'], filename)

if __name__ == '__main__':
    with app.app_context():
        db.create_all()
    app.run(debug=True)
main.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Animal Detection</title>
<link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
</head>
<body>
<div id="mySidenav" class="sidenav">
<a href="javascript:void(0)" class="closebtn"
onclick="closeNav()">×</a>
<a href="/logout">Log out</a>
</div>
<span style="font-size:30px;cursor:pointer;vertical-align: top;position:fixed;"
onclick="openNav()">☰</span>
<div class="container">
<header class="header">
<h1 id="title">
Animal Detection
</h1>
</header>
<div class="webcam">
<div class="form-group">
<label>Webcam</label><br>
<button class="webcam-button" onclick="openWebcam()">Open
Webcam</button><br>
<video id="webcam" width="512" height="512" autoplay
style="display:none;"></video>
<canvas id="canvas" style="display:none;"></canvas>
<button onclick="closeWebcam()">Close Webcam</button>
</div>
</div>
<div class="form-group">
<!-- Display detected animal class-->
<h2>Detected Animal: <span id="detected_animal"> {{ animal_class }}
</span></h2>
</div>
<div class="form-group">
<!-- Display uploaded image if available -->
<img id="captured_image" src="{{ url_for('uploaded_file',
filename=image_filename) }}" alt="Uploaded Image">
</div>
<div class="form-group">
<label for="file">Upload Image</label><br>
<input type="file" name="file" id="image" accept="image/*"
onchange="previewImage(this);">
<div style="display: flex; justify-content: center;">
<img id="preview" src="#" alt="Preview Image" style="display: none;
max-width: 300px; max-height: 300px;">
</div>
<br>
<button type="submit">Upload Image</button>
</div>
</form>
<script>
let videoStream; // Variable to store the video stream
function openNav() {
document.getElementById("mySidenav").style.width = "250px";
}
function closeNav() {
document.getElementById("mySidenav").style.width = "0";
}
function previewImage(input) {
var preview = document.getElementById('preview');
if (input.files && input.files[0]) {
var reader = new FileReader();
reader.onload = function (e) {
preview.src = e.target.result;
preview.style.display = 'block';
};
reader.readAsDataURL(input.files[0]);
}
}
function openWebcam() {
const video = document.getElementById('webcam');
const canvas = document.getElementById('canvas');
function captureFrames() {
const video = document.getElementById('webcam');
const canvas = document.getElementById('canvas');
const imageData = canvas.toDataURL('image/jpeg');
console.log("Captured image data:", imageData); // Debug captured data
function updateDisplay(htmlResponse) {
console.log("Updating display with HTML response:", htmlResponse);
// Create a temporary div element to hold the HTML response
const tempDiv = document.createElement('div');
tempDiv.innerHTML = htmlResponse;
// Extract the detected animal class and image filename from the HTML
response
const animalClassElement = tempDiv.querySelector('#detected_animal');
const uploadedImageElement =
tempDiv.querySelector('#captured_image');
// Update the display with the detected animal class and uploaded image
document.getElementById('detected_animal').innerHTML =
animalClassElement.innerHTML;
// Update the source of the uploaded image
document.getElementById('captured_image').src =
uploadedImageElement.src;
}
function sendImageDataForDetection(imageData) {
// Create a new FormData object
const formData = new FormData();
formData.append('image', imageData);
function closeWebcam() {
const video = document.getElementById('webcam');
if (videoStream && videoStream.captureInterval) {
clearInterval(videoStream.captureInterval); // Stop capturing frames
videoStream.getTracks().forEach(track => track.stop()); // Stop the video
stream
video.srcObject = null; // Remove the video source (stream)
}
video.style.display = 'none'; // Hide the video element
canvas.style.display = 'none'; // Hide the canvas element
}
</script>
</div>
</body>
</html>
login.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Login</title>
<link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
</head>
<body>
<div class="container">
<header class="header">
<h1 id="title">
Login
</h1>
</header>
<form action="/login" method="POST">
<div class="form-group">
<label for="credential">Email / Phone Number</label>
<input type="text" name="credential" id="credential"
class="formControl" placeholder="Email / Password" required>
</div>
<div class="form-group">
<label for="password">Password</label>
<input type="password" name="password" id="password"
class="formControl" placeholder="Password" required>
</div>
<div class="form-group">
<button type="submit" id="login" class="btn">LOGIN</button>
</div>
<div class="form-group">
<label for="newaccount">Don't have an account?</label>
<a href="/register" class="btn">REGISTER</a>
</div>
{% with messages = get_flashed_messages(with_categories=true) %}
{% if messages %}
{% for category, message in messages %}
<div class="{{ category }}">
<span class="closebtn">&times;</span>
<p3>{{ message }}</p3>
</div>
{% endfor %}
{% endif %}
{% endwith %}
</form>
<script>
var close = document.getElementsByClassName("closebtn");
var i;
register.html
</head>
<body>
<div class="container">
<header class="header">
<h1 id="title">
Register
</h1>
</header>
<form action="/register" method="POST">
<div class="form-group">
<label for="name">Name</label>
<input type="text" name="name" id="name" class="formControl"
placeholder="Name" required>
</div>
<div class="form-group">
<label for="email">Email</label>
<input type="email" name="email" id="email" class="formControl"
placeholder="Email" required>
</div>
<div class="form-group">
<label for="phone">Phone Number</label>
<input type="tel" name="phone" id="phone" class="formControl"
placeholder="Phone Number" required>
</div>
<div class="form-group">
<label for="password">Password</label>
<input type="password" id="password" name="password"
class="formControl" placeholder="Password" required>
<p>Password should be a minimum of 8 characters long and have at least a
number, a special character, a lowercase letter and an uppercase letter</p>
</div>
<div class="form-group">
<label for="confirmpassword">Confirm Password</label>
<input type="password" id="confirmpassword"
name="confirmpassword" class="formControl" placeholder="Confirm Password"
required>
</div>
<div class="form-group">
<button type="submit" id="register" class="btn">REGISTER</button>
</div>
<script>
var close = document.getElementsByClassName("closebtn");
var i;
style.css
@import url('https://fonts.googleapis.com/css2?family=Poppins&display=swap');
*,*::before,*::after{
box-sizing: border-box;
}
body{
font-family: 'Poppins',sans-serif;
font-size: 1rem;
font-weight: 100;
line-height: 1.4;
color: #FFFFFF;
}
body::before{
content: '';
position: fixed;
top: 0;
left: 0;
height: 100%;
width: 100%;
z-index: -1;
background-color:#B1A285 ;
background-size: cover;
background-repeat: no-repeat;
background-position: center;
}
.container{
width: 100%;
margin: 0 auto 0 auto;
padding:1.8rem 1rem;
}
form{
background: #FBFBF8;
border-radius: 0.25rem;
padding: 2.5rem;
}
.formControl{
display: block;
width: 100%;
height: 2.375rem;
padding: 0.375rem 0.75rem;
color: #1A1D20;
background-color: #FFF;
background-clip: padding-box;
border: 1px solid #696767;
border-radius: 0.25rem;
transition: border-color 0.15s ease-in-out,box-shadow 0.15s ease-in-out;
}
.formControl:focus{
border-color:#696767;
outline: 0;
box-shadow:0 0 0 0.2rem #21212240;
}
.form-group{
margin: 0 auto 1.25rem auto;
padding: 0.25rem;
}
input,button{
margin: 0;
font-family: inherit;
font-size: inherit;
line-height: inherit;
}
a{
text-decoration: solid;
text-align: center;
}
label,h2{
color:#535151;
display: flex;
align-items: center;
font-size: 1.125rem;
margin-bottom: 0.5rem;
font-weight: bold;
}
#title{
color: #FFFFFF;
font-weight: 600;
text-shadow: 2px 2px 2px #00000040;
}
h1{
font-weight: 400;
line-height: 1.2;
}
p3{
font-size: 1.125rem;
color:#FFFFFF;
padding-left: 10px;
}
h1,p,p3,ol,li{
margin-top: 0;
margin-bottom: 0.5rem;
}
.btn{
display: block;
width: 100%;
padding:0.5rem 0.75rem;
background: #B1A285;
color: inherit;
border-radius: 15px;
cursor: pointer;
outline: none;
text-transform: uppercase;
font-size: 1.5rem;
color: #201F1F;
border: none;
}
.webcam{
background: #FBFBF8;
border-radius: 0.25rem;
padding: 2.5rem;
}
ol,li{
font-size: 1.125rem;
color:#0F0F0F;
}
p{
font-size: 1.125rem;
color:#645944;
font-weight: bold;
}
.sidenav{
height: 100%;
width: 0;
position: fixed;
z-index: 1;
top: 0;
left: 0;
background-color: #FBFBF8;
overflow-x: hidden;
transition: 0.5s;
padding-top: 60px;
}
.sidenav a{
padding: 8px 8px 8px 32px;
text-decoration: none;
font-size: 25px;
text-align: left;
color: #000000;
display: block;
transition: 0.3s;
}
.sidenav a:hover{
color: #FCFBF4;
background-color: #3F3737;
}
.sidenav .closebtn{
position: absolute;
top: 0;
right: 25px;
font-size: 36px;
margin-left: 50px;
}
#main{
transition: margin-left .5s;
padding: 16px;
}
@media screen and (max-height: 450px){
.sidenav {padding-top: 15px;}
.sidenav a {font-size: 18px;}
}
.message {
margin-bottom: 10px;
}
.message p {
margin: 5px 0;
color: #0F0F0F;
font-weight: 100;
}
.message strong {
color: #171718;
}
@media (min-width:800px){
.container{
max-width: 760px;
}
}
.error {
border-radius: 15px;
background-color: #F44336;
color: #FFFFFF;
opacity: 1;
transition: opacity 0.6s;
}
.closebtn {
margin-left: 15px;
color: #FFFFFF;
font-weight: bold;
float: right;
font-size: 22px;
line-height: 20px;
cursor: pointer;
transition: 0.3s;
padding-right: 10px;
}
REFERENCES
International Conference on Inventive Research in Computing Applications
(ICIRCA), Coimbatore, India, 2022, pp. 536-542, doi:
10.1109/ICIRCA54612.2022.9985684.
[8] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for
Large-Scale Image Recognition," arXiv preprint arXiv:1409.1556, 2014.
[9] K. He, X. Zhang, S. Ren and J. Sun, "Deep Residual Learning for Image
Recognition," 2016 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 770-778, doi:
10.1109/CVPR.2016.90.
[10] G. Huang, Z. Liu, L. Van Der Maaten and K. Q. Weinberger, "Densely
Connected Convolutional Networks," 2017 IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 2017, pp.
2261-2269, doi: 10.1109/CVPR.2017.243.
[11] M. Tan and Q. Le, "EfficientNet: Rethinking Model Scaling for
Convolutional Neural Networks," in International Conference on Machine
Learning (ICML), 2019, pp. 6105-6114.
[12] F. Chollet, "Xception: Deep Learning with Depthwise Separable
Convolutions," in 2017 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), Honolulu, HI, USA, 2017 pp. 1800-1807.
doi: 10.1109/CVPR.2017.195
[13] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens and Z. Wojna, "Rethinking
the Inception Architecture for Computer Vision," 2016 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA,
2016, pp. 2818-2826, doi: 10.1109/CVPR.2016.308.
[14] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L. Chen,
"MobileNetV2: Inverted Residuals and Linear Bottlenecks," in 2018
IEEE/CVF Conference on Computer Vision and Pattern Recognition
(CVPR), Salt Lake City, UT, USA, 2018 pp. 4510-4520. doi:
10.1109/CVPR.2018.00474
[15] B. Zoph, V. Vasudevan, J. Shlens and Q. Le, "Learning Transferable
Architectures for Scalable Image Recognition," in 2018 IEEE/CVF
Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake
City, UT, USA, 2018 pp. 8697-8710.
doi: 10.1109/CVPR.2018.00907
[16] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand,
M. Andreetto and H. Adam, "MobileNets: Efficient Convolutional Neural
Networks for Mobile Vision Applications," arXiv preprint arXiv:1704.04861,
2017.
[17] C. Szegedy, et al., "Going deeper with convolutions," in 2015 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), Boston,
MA, USA, 2015 pp. 1-9.
doi: 10.1109/CVPR.2015.7298594