Final Document - Vehicle Accident Detection

The project report outlines a Vehicle Accident Detection and Rescue System developed by Keerthivasan K as part of a Master's degree in Computer Applications. It proposes a deep learning model for efficient accident detection, achieving 95% accuracy and enabling immediate alerts to emergency services via SMTP. The system aims to reduce response times in accidents, ultimately saving lives by providing timely medical assistance.


VEHICLE ACCIDENT DETECTION

AND RESCUE SYSTEM

A PROJECT REPORT
Submitted by

KEERTHIVASAN K
(6127236220215)
In partial fulfillment of the requirement
for the award of the degree
of

MASTER OF COMPUTER APPLICATION

in

DEPARTMENT OF COMPUTER APPLICATION

THE KAVERY ENGINEERING COLLEGE


(An Autonomous Institution, affiliated to Anna University Chennai and Approved by AICTE,
New Delhi)

MECHERI, SALEM-636453

JULY 2025

THE KAVERY ENGINEERING COLLEGE

MECHERI, SALEM - 636453

BONAFIDE CERTIFICATE

Certified that this project report titled “VEHICLE ACCIDENT DETECTION AND RESCUE SYSTEM” is the bonafide work of KEERTHIVASAN K (612723622015), who carried out the project under my supervision. Certified further that, to the best of my knowledge, the work reported herein does not form part of any other project report or dissertation on the basis of which a degree or award was conferred on an earlier occasion on this or any other candidate.

SIGNATURE
Mr. S. A. CHENNAKESAVAN, MCA, M.Phil.,
Assistant Professor / Head,
Department of Computer Application,
The Kavery Engineering College,
Mecheri, Salem-636453.

SIGNATURE
Mr. S. M. VIVIYAN RICHARDS, MCA,
Assistant Professor,
Department of Computer Application,
The Kavery Engineering College,
Mecheri, Salem-636453.

Submitted for VIVA-VOCE Examination held on

Internal Examiner External Examiner

DECLARATION

I declare that the project report on “VEHICLE ACCIDENT DETECTION AND RESCUE SYSTEM” is the result of original work done by me and that, to the best of my knowledge, similar work has not been submitted to “ANNA UNIVERSITY CHENNAI” for the requirement of the Degree of Master of Computer Application. This project report is submitted in partial fulfilment of the requirement for the award of the Degree of Master of Computer Application.

SIGNATURE

KEERTHIVASAN K

Place :
Date :

ACKNOWLEDGEMENT

We wish to express our sincere gratitude to our Advisor Dr. A. K. NATESAN, our Chairman Thiru A. ANBALAGAN, and our Secretary Prof. S. K. ELANGOVAN for providing immense facilities at our institution.

We would like to acknowledge our Principal Dr. V. DURAISAMY, M.E., Ph.D., FIE, for fostering a supportive learning environment that encourages collaboration and teamwork. Your leadership has set a positive example for all of us and has created a culture of academic excellence.

We wish to extend our heartfelt gratitude to our Head of the Department Mr. S. A. CHENNAKESAVAN, MCA, M.Phil., for your support and guidance throughout the completion of the thesis.

We are highly indebted and extend our heartfelt thanks to our Project Guide Mr. M. VIVIYAN RICHARDS, MCA, Assistant Professor, for his valuable ideas, encouragement, and supportive guidance throughout the project.

We wish to extend our sincere thanks to all faculty members of our MASTER OF COMPUTER APPLICATION department for guiding us and providing valuable feedback along the way for the successful completion of this project.

We would like to express our sincere gratitude to all the students in our group for their hard work, dedication, and cooperation throughout the preparation and editing stages of the manuscript.

We would like to thank all those who have contributed to the success
of our college project. Your support and guidance have been truly appreciated,
and we could not have completed this project without your help.

ABSTRACT

Accident detection plays an important role in ensuring road safety, and an immediate response can significantly reduce the loss of lives. A deep learning model is proposed for efficient accident detection. The initial phase involves the careful curation of a diverse dataset to train models that can autonomously recognize accidents, ensuring precise detection through rigorous optimization. Trained models, adept at discerning nuances, trigger critical responses, including immediate alerts and coordination with emergency services, thereby enhancing effectiveness in critical situations. A crucial feature is alerting via the Simple Mail Transfer Protocol (SMTP). In the event of an accident, the system seamlessly interfaces with SMTP, enabling swift communication and prompt notification of the emergency services. The proposed model provides an accuracy of 95%, representing a higher level of accuracy in detecting accidents. Thus, the integrated approach represents a significant leap in road accident detection.

PROJECT COMPLETION CERTIFICATE

TABLE OF CONTENTS

CHAPTER NO.    TITLE    PAGE NO.

ABSTRACT VI
1 1.1 INTRODUCTION 1
1.2 OBJECTIVE 3
1.3 OVERVIEW OF THE PROJECT 3
2 SYSTEM ANALYSIS 5
2.1 EXISTING SYSTEM 5

2.2 PROPOSED SYSTEM 6

2.3 SYSTEM REQUIREMENTS 7


2.3.1 HARDWARE REQUIREMENTS 7
2.3.2 SOFTWARE REQUIREMENTS 7
3 LITERATURE SURVEY 8
4 SYSTEM IMPLEMENTATION 10
4.1 IMPLEMENTATION 10
4.2 MODULE 10
4.3 MODULES 10
4.4 MODULES DESCRIPTION 10
4.4.1 DATA COLLECTION MODULE 10
4.4.2 VIDEO/IMAGE PRE-PROCESSING MODULE 11
4.4.3 CNN CLASSIFIER MODULE 11
4.4.4 ACCIDENT DETECTION BASED ON VEHICLE TRACKING 12
4.4.5 ALERT SYSTEM MODULE 12
5 SOFTWARE DEVELOPMENT 13
5.1 SOFTWARE ENVIRONMENT 13
5.1.1 PYTHON 13
5.1.2 CHARACTERISTICS OF PYTHON 14
5.1.3 APPLICATIONS OF PYTHON 15
5.1.4 PYTHON – OVERVIEW 16
5.1.5 HISTORY OF PYTHON 16

5.1.6 PYTHON FEATURES 17
5.1.7 PYTHON ENVIRONMENT SETUP 17
5.1.8 DOWNLOAD AND INSTALL PYTHON 17
5.1.9 WINDOWS INSTALLATION 18
5.1.10 SETTING UP PATH 18
5.1.11 VERIFY THE INSTALLATION 21
5.1.12 SETTING PATH AT WINDOWS 22
5.1.13 PYTHON ENVIRONMENT VARIABLES 22
5.1.14 INTEGRATED DEVELOPMENT ENVIRONMENT 23
5.1.15 PYTHON BASIC-SYNTAX 23
5.1.16 FIRST PYTHON PROGRAM 23
5.1.17 SETUP VISUAL STUDIO CODE FOR PYTHON 24
5.1.18 SETTING UP VISUAL STUDIO CODE 24
5.2 EXTENSIONS 25
5.2.1 INSTALL PYTHON EXTENSION 26
5.3 INTRODUCTION TO TKINTER 26
5.3.1 BASIC TKINTER WIDGETS 29
5.3.2 TKINTER PROGRAMMING 31
5.3.3 STANDARD ATTRIBUTES 32
5.3.4 GEOMETRY MANAGEMENT 32
6 SYSTEM DESIGN 37
6.1 SYSTEM ARCHITECTURE 37
6.2 DATA FLOW DIAGRAM 37
6.3 USE CASE DIAGRAM 39
6.4 DATASET DESIGN 40
7 SYSTEM DEVELOPMENT 41
7.1 INPUT AND OUTPUT DESIGN 41
7.1.1 INPUT DESIGN 41
7.1.2 OBJECTIVES 41
7.1.3 OUTPUT DESIGN 42
7.2 SYSTEM STUDY 43
7.2.1 FEASIBILITY STUDY 43
7.2.2 ECONOMICAL FEASIBILITY 43
7.2.3 TECHNICAL FEASIBILITY 43
7.2.4 SOCIAL FEASIBILITY 44

7.3 SYSTEM TESTING 44
7.4 TYPES OF TESTS 44
7.4.1 UNIT TESTING 44
7.4.2 INTEGRATION TESTING 45
7.4.3 FUNCTIONAL TESTING 45
7.4.4 SYSTEM TESTING 46
7.4.5 WHITE BOX TESTING 46
7.4.6 BLACK BOX TESTING 46
8 APPENDICES 48
8.1 SCREENSHOTS 48
8.2 SOURCE CODE 54
9 CONCLUSION 74
10 FUTURE WORK 75
11 REFERENCE 76

TABLE OF FIGURES

FIGURE NO.    TITLE    PAGE NO.

1 Python-Website 19
2 Install Python 19
3 Complete Install 20
4 Run CMD 21
5 Command Prompt 21
6 Visual Studio Code 25
7 Visual Studio Code Python Extension 26
8 Fundamental Structure of Tkinter program 28
9 Simple Tkinter Windows 32
10 System Architecture 37
11 Video Footage 40
12 Frame Image 40
13 Inside Label Images 40
14 Run Python Main.py 48
15 Main GUI 48
16 Select Video Source 49
17 Video File 49
18 Video Frame Analysis 50
19 Accident Detection 50
20 Accident Detected with Accuracy Prediction 51
21 Send SMS Process 51
22 View Image Frame 52
23 View Image Frame Prediction 52
24 View Inside Label Image 1 53
25 View Inside Label Image 2 53

CHAPTER - 1

1.1 INTRODUCTION

In urban areas, accidents are a common phenomenon. Many of them can be handled easily, but some occur during the night when visibility is quite low; in such cases it is difficult for an ambulance driver to identify the accident spot from phone calls made by citizens. If the driver knows the precise spot of the accident, the travel time between the spot and the hospital is significantly reduced. The main objective of this project is to help reduce the time factor in the case of accidents. There are many cases where an accident occurs during the night and the person involved is unconscious; it could then take hours for someone to find out and inform the authorities. Saving such precious time will indeed save lives. In connection with this concept, an experimental setup is constructed that can detect accidents automatically without any human help. This project also presents a driver assistance system used for lane departure of vehicles, together with an analysis of its working and stability with respect to changes in driver behavior. The design of the driver assistance system was developed from the purview of a co-driver system, which is an automatic system. The vehicle steering assist controller is designed using a driver model in order to take the driver's intentions into account, in particular during curve negotiation. This approach minimizes controller intervention while the driver is awake and steers properly. Usually, information flows through the interface from human to machine but not so often in the reverse direction. In this model, however, the system has an architecture in which bi-directional information transfer occurs across the control interface, allowing the human to use the interface to simultaneously exert control and extract information.

Every year the lives of approximately 1.3 million people are cut short as a result of a road traffic crash. Between 20 and 50 million more people suffer non-fatal injuries, with many incurring a disability as a result of their injury. With the increase in population and in the number of vehicles on the road, it has become more important than ever to develop effective methods for detecting accidents and responding to them quickly. The goal of traffic accident detection is to reduce response time and ensure that medical attention is provided to those who need it as quickly as possible. There are several methods used for traffic accident detection. One of the most common is video surveillance. Video cameras can be placed at intersections or other areas with high traffic volume to detect accidents. When an accident occurs, the cameras can send an alert to emergency services or other relevant parties. This method has proven effective in detecting accidents quickly and accurately. However, despite the numerous measures being taken to improve road monitoring technologies, such as CCTV cameras at road intersections and radars commonly placed on highways that capture instances of over-speeding cars, many lives are lost due to the lack of timely accident reports, which results in delayed medical assistance for the victims. Current traffic management technologies rely heavily on human perception of the captured footage. This takes a substantial amount of effort from the human operators and does not support any real-time feedback to spontaneous events. The field of vehicular accident detection has become one of the most prevalent uses of computer vision to provide first aid on time, without the need for a human operator to monitor an event. In India, CNNs for accident detection are gaining popularity due to the increasing number of accidents on roads. The technology can be used to monitor high-traffic areas, such as intersections and highways, and quickly detect any accidents that occur. By doing so, emergency services can be dispatched to the scene faster, potentially reducing the severity of injuries and saving lives. Generally, research in the area of accident detection focuses on computer vision based and sensor based models. There are a few research publications discussing multimodal accident detection, but these have high computation overhead and could be damaged during the accident.

1.2 OBJECTIVE

The field of vehicular accident detection has become one of the most
prevalent uses of computer vision to provide first-aid on time, without the need
for a human operator to monitor an event. In India, CNN for accident detection
is gaining popularity due to the increasing number of accidents on roads. The
technology can be used to monitor high-traffic areas, such as intersections and
highways, and quickly detect any accidents that occur. By doing so, emergency
services can be dispatched to the scene faster, potentially reducing the severity
of injuries and saving lives.

1.3 OVERVIEW OF THE PROJECT

This project presents a driver assistance system used for lane departure of vehicles, together with an analysis of its working and stability with respect to changes in driver behavior. The design of the driver assistance system was developed from the purview of a co-driver system, which is an automatic system. The vehicle steering assist controller is designed using a driver model in order to take the driver's intentions into account, in particular during curve negotiation. This approach minimizes controller intervention while the driver is awake and steers properly. Usually, information flows through the interface from human to machine but not so often in the reverse direction. In this model, however, the system has an architecture in which bi-directional information transfer occurs across the control interface, allowing the human to use the interface to simultaneously exert control and extract information. In urban areas accidents are a common phenomenon; many of them can be handled easily, but some occur during the night when visibility is quite low, and in such cases it is difficult for an ambulance driver to identify the accident spot from phone calls made by citizens. If the driver knows the precise spot of the accident, the travel time between the spot and the hospital is significantly reduced.

CHAPTER - 2

SYSTEM ANALYSIS

2.1 EXISTING SYSTEM

This research project uses automatic accident detection. It is made up of a sensor, GPS, and GSM unit installed in the car that locates the accident and transmits the location data to a primary server unit that holds the database for all the surrounding hospitals. When an ambulance is dispatched to the scene of the accident, it not only transports the patient to the hospital but also keeps an eye on critical indicators like temperature and pulse rate, sending the information to the appropriate hospital. In addition, via radio frequency transmission, traffic light signals in the ambulance's path would be controlled to give it a clear way. This will cut down on how long it takes the ambulance to get to the hospital.

DISADVANTAGES:

• This system helps in detecting accidents in a very short period of time, basically within a few seconds, and sends the basic information to the first aid centre in a message including the time and location of the accident.
• If there is no casualty and assistance is not required, the message sending process can be terminated using the switch provided in the device.
• This application provides, in the most feasible way, a workable solution to the poor emergency facilities provided for road accidents.

2.2 PROPOSED SYSTEM

The main objective of the project is to detect accidents using a CNN. Our attempt is to develop an accurate and robust system for detecting an accident and reaching the emergency service. The images are segregated into a training set and a testing set. The next step is to develop a CNN model with four activation layers, two dense layers, two Conv2D layers, and two max pooling layers. The developed CNN model is used to classify the input images as accident or non-accident according to the specified features. With this, further intimation is provided to the emergency service to reach the site only if an accident is detected. The intimation involves sending a clipped image of the accident and the auto-detected location to the nearest emergency service. Firstly, we tackle the challenge of image processing by converting videos into individual frames. This facilitates faster processing and enhances accuracy. As part of preprocessing, we convert these frames into grayscale images and resize them, ensuring uniformity and ease of analysis.
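The layer arrangement described above can be sketched in Keras. This is a hypothetical outline, not the report's actual source code (which appears in Section 8.2); the input shape, filter counts, and dense-layer width are illustrative assumptions.

```python
# Hypothetical sketch of the proposed CNN: two Conv2D layers, two
# MaxPooling2D layers, and two Dense layers, ending in a sigmoid unit
# for the binary accident / non-accident decision.
# The 128x128 grayscale input and the filter counts are assumptions.
from tensorflow.keras import layers, models

def build_model(input_shape=(128, 128, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # accident probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

A sigmoid output with binary cross-entropy is the usual choice for a two-class problem like this; the per-frame output can then be thresholded to decide whether to trigger an alert.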

ADVANTAGES:

• It's adept at swiftly classifying input images, accurately determining


whether an accident has occurred. Upon detecting an accident, the system
triggers an alert, promptly notifying emergency services through email.
• This alert includes a clipped image of the accident scene and its
geolocation, enabling emergency responders to reach the site swiftly.
• The overarching goal is to create a robust and efficient road accident
detection system leveraging deep learning techniques.

2.3 SYSTEM REQUIREMENTS

2.3.1 HARDWARE REQUIREMENTS

• System : Pentium IV 2.4 GHz.


• Hard Disk : 500 GB.
• Monitor : 15 VGA Colour.
• Mouse : Logitech.
• RAM : 4 GB.

2.3.2 SOFTWARE REQUIREMENTS

• Operating System : Windows-10/11 (64-bit).


• Frontend : Tkinter 1.3.5
• Backend : Python 3.10 (64-bit)
• Dataset : CCTV Video Dataset
• IDE Tools : Visual Studio Code 1.7

CHAPTER - 3

LITERATURE SURVEY

[1]. ROAD ACCIDENT DETECTION USING DEEP LEARNING

Authors: Vaishnavi Metkari, Rucha Patil And Narayani Shelke

The global increase in road accidents presents a pressing issue, resulting in millions of fatalities annually and substantial economic burdens on societies worldwide. With big data applications proliferating across industries and research domains, leveraging diverse data sources such as social media, CCTV cameras, and medical records has become pivotal. In this context, the focus shifts towards employing deep learning and CNN techniques for image analysis, specifically targeting road accident detection. We have used frontend tech stacks like Next.js, Tailwind CSS, and TypeScript, and backend technologies like Flask, Python, and MongoDB for the website. Through experimentation and evaluation, our model achieves an accuracy of 77.27 percent, demonstrating promising results for enhancing road safety. Future work involves refining the model with additional datasets and maintaining and modifying user-friendly interfaces for widespread deployment. This research contributes to the advancement of intelligent surveillance systems, offering valuable insights into accident detection and prevention.

[2]. AUTOMATIC VEHICLE ACCIDENT DETECTION AND MESSAGING SYSTEM

Authors: S. Parameswaran, P. Anusuya And M. Dhivya

Technology development has increased traffic hazards, and road accidents have risen due to a lack of emergency facilities. Our paper provides a solution to this problem. Dangerous driving can be detected using an accelerometer in a car alarm application, used as a crash or rollover detector for the vehicle during or after an accident. The accelerometer receives the signal, which is used to recognize a severe accident. In this paper, when the vehicle meets with an accident or rolls over, the vibration sensor detects the signal and sends it to an ATMEGA 8A controller. The microcontroller then uses GSM to send an alert message to the police control room or a rescue team. The police can then trace the location via GPS after receiving the information. After confirming the location, the necessary action is taken. During the accident, if the person is not injured or there is no serious threat to anyone's life, the alert message can be stopped by the driver with a switch provided, in order to avoid wasting the rescue team's time. The accident is thus detected by means of a vibration sensor.

[3]. ACCIDENT DETECTION AND ALERT SYSTEM

Authors: C K Gomathy

Road accident rates are very high nowadays, especially for two-wheelers. Timely medical aid can help in saving lives. This system aims to alert the nearby medical centre about the accident to provide immediate medical aid. The accelerometer attached to the vehicle senses the tilt of the vehicle, and a heartbeat sensor on the user's body senses abnormality of the heartbeat to gauge the seriousness of the accident. The system thus makes the decision and sends the information to the smartphone connected to the accelerometer through GSM and GPS modules. The Android application on the mobile phone sends text messages to the nearest medical centre and friends. The application also shares the exact location of the accident, which can save time.

CHAPTER - 4

SYSTEM IMPLEMENTATION

4.1 IMPLEMENTATION

Implementation is the stage of the project when the theoretical design is turned into a working system. Thus it can be considered the most critical stage in achieving a successful new system and in giving the user confidence that the new system will work and be effective. The implementation stage involves careful planning, investigation of the existing system and its constraints on implementation, designing of methods to achieve changeover, and evaluation of changeover methods.

4.2 MODULE

4.3 MODULES:

• Data Collection Module


• Video/ Image Pre-processing Module
• CNN Classifier Module
• Accident Detection Based On Vehicle Tracking
• Alert System Module

4.4 MODULES DESCRIPTION:

4.4.1 DATA COLLECTION MODULE:

Compile a sizable collection of accident videos and live footage. These pictures may include and exclude accident shots. Resize images to a standard size appropriate for CNN input and adjust pixel values to a standard scale. Expand the dataset to boost its quantity and variety; techniques like rotation, flipping, and cropping may be used for this. Each picture has a label based on the class it belongs to (accident / no accident). Splitting the data into training, validation, and testing sets is known as data splitting; an 80/20 split could be typical. Model architecture selection: for image classification tasks, the CNN sequential model architecture is employed. Model training: set random weights at the beginning of the selected CNN architecture, then train the model on the training dataset. Make use of optimization strategies such as backpropagation and mini-batch gradient descent. To avoid overfitting, keep an eye on the model's performance on the validation set.
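The data-splitting step above can be sketched with the standard library alone. The 80/10/10 ratios and the (image_path, label) pair format are illustrative assumptions, not taken from the report's source code.

```python
import random

def split_dataset(samples, train=0.8, val=0.1, seed=42):
    """Shuffle and split a list of (image_path, label) pairs into
    training, validation, and test sets."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_val = int(n * val)
    return (shuffled[:n_train],                     # training set
            shuffled[n_train:n_train + n_val],      # validation set
            shuffled[n_train + n_val:])             # test set
```

Shuffling before splitting matters here: consecutive frames from the same video are highly correlated, so an unshuffled split would leak near-duplicate frames between sets and inflate the measured accuracy.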

4.4.2 VIDEO/ IMAGE PRE-PROCESSING MODULE:

This phase is responsible for processing video data within the system. Its
main task is to read the video data and extract individual image frames from the
video. In the context of accident detection, this module plays a crucial role as it
allows the subsequent modules to analyze each frame for the occurrence of an
accident. Video data can be encoded in different formats or configurations, and
for the system to function properly, it requires homogeneous data in a consistent
format and configuration. The colour conversion module addresses this issue by
converting the video data to the RGB format. RGB (Red, Green, and Blue) is a
commonly used colour model in digital imaging where each pixel is represented
by the intensities of these three primary colors.
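The per-frame preprocessing described above and in the proposed system (convert to grayscale, resize, scale pixel values) can be sketched as follows. In practice frames would typically be read and resized with OpenCV (`cv2.VideoCapture`, `cv2.resize`); the NumPy-only version here shows the underlying math, and the 128x128 target size is an assumption.

```python
import numpy as np

TARGET = (128, 128)  # assumed CNN input size

def to_grayscale(frame_rgb):
    """Weighted sum of R, G, B channels (ITU-R BT.601 luma weights)."""
    return frame_rgb @ np.array([0.299, 0.587, 0.114])

def resize_nearest(gray, size=TARGET):
    """Nearest-neighbour resize; cv2.resize would normally be used."""
    h, w = gray.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return gray[rows][:, cols]

def preprocess(frame_rgb):
    gray = to_grayscale(frame_rgb)
    small = resize_nearest(gray)
    return small / 255.0  # scale pixels to [0, 1] for the CNN
```

Each extracted frame would pass through `preprocess` before being fed to the classifier, so every input reaches the model with the same shape and value range.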

4.4.3 CNN CLASSIFIER MODULE:

There are various smart pre-trained CNNs with the capability of transfer learning; these require only the training and testing datasets at their input layer. The architectures of the networks differ in terms of internal layers and techniques used. The proposed model has four convolution layers. Each layer is followed by a max pooling layer, which is connected to a flattening layer. There are then two dense layers connected by successive dropouts of 0.5 and finally a normalisation layer. The use of a CNN for accident detection involves several steps. First, the algorithm is trained on a large dataset of images that represent different types of accidents, such as collisions, pedestrians being hit, or vehicles overturning. The algorithm is then able to recognize these patterns when presented with real-time footage from CCTV cameras. Each frame of the video is run through the CNN model, which calculates the probability of an accident in that frame.

4.4.4 ACCIDENT DETECTION BASED ON VEHICLE TRACKING:

Ensuring that an accident detection and alert system reliably detects


accidents, issues timely alerts, and efficiently manages updates relevant to the
destination are all part of testing the system for emergency scenarios. Creating
test scenarios to gauge timely alerts, detection accuracy, and reducing false
positives and negatives are all part of the process. Utilizing accident data from
real and simulated scenarios on a variety of routes, a test environment is set up.
It is critical to confirm throughout testing that incidents are appropriately classified, alerts are sent out on time, and users are appropriately informed of route updates. This includes assessing the system's performance in handling route updates, measuring the interval between accident detection and alert transmission, and identifying accidents from data.

4.4.5 ALERT SYSTEM MODULE:

When the CNN classifier flags a frame as an accident, the alert system module notifies the nearest emergency service through the Simple Mail Transfer Protocol (SMTP). The alert includes a clipped image of the accident scene and the auto-detected location, enabling emergency responders to reach the site swiftly.
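The SMTP alerting described in the abstract and the proposed system can be sketched with Python's standard library. The addresses, server, and credentials below are placeholders, and only the message construction is exercised here; `send_alert` shows how transmission would work but is not invoked.

```python
# Hypothetical sketch of the SMTP alert. All addresses, the server
# host, and the credentials are placeholder assumptions.
import smtplib
from email.message import EmailMessage

def build_alert(image_bytes, location, to_addr="rescue@example.com"):
    """Construct an alert email carrying the clipped accident frame
    and the auto-detected location."""
    msg = EmailMessage()
    msg["Subject"] = "Accident detected"
    msg["From"] = "detector@example.com"
    msg["To"] = to_addr
    msg.set_content(f"Accident detected near: {location}")
    # Attach the clipped frame showing the accident scene
    msg.add_attachment(image_bytes, maintype="image",
                       subtype="jpeg", filename="accident.jpg")
    return msg

def send_alert(msg, host="smtp.example.com", port=587,
               user="", password=""):
    """Transmit the alert over SMTP with STARTTLS."""
    with smtplib.SMTP(host, port) as server:
        server.starttls()
        server.login(user, password)
        server.send_message(msg)
```

Separating message construction from transmission keeps the alert path testable without a live mail server.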

CHAPTER - 5
SOFTWARE DEVELOPMENT

5.1 SOFTWARE ENVIRONMENT

5.1.1 PYTHON:

Python is a very popular general-purpose interpreted, interactive, object-oriented, high-level programming language. Python is a dynamically-typed and garbage-collected language. It was created by Guido van Rossum during 1985-1990. Like Perl, Python source code is also available under the GNU General Public License (GPL). Python supports multiple programming paradigms, including procedural, object-oriented, and functional programming. Python's design philosophy emphasizes code readability with the use of significant indentation.

WHY TO LEARN PYTHON?

Python is consistently rated as one of the world's most popular programming languages. Python is fairly easy to learn, so if you are starting to learn any programming language then Python could be a great choice. Today various schools, colleges, and universities teach Python as their primary programming language. There are many other good reasons which make Python the top choice of any programmer:

• Python is open source, which means it is available free of cost.
• Python is simple and easy to learn.
• Python is versatile and can be used to create many different things.
• Python has powerful development libraries, including for AI, ML, etc.
• Python is much in demand and commands a high salary.

Python is a must for students and working professionals who want to become great software engineers, especially when they are working in the web development domain. Some of the key advantages of learning Python are listed below:

• Python is Interpreted − Python is processed at runtime by the


interpreter. You do not need to compile your program before executing it.
This is similar to PERL and PHP.

• Python is Interactive − You can actually sit at a Python prompt and


interact with the interpreter directly to write your programs.

• Python is Object-Oriented − Python supports Object-Oriented style or


technique of programming that encapsulates code within objects.

• Python is a Beginner's Language − Python is a great language for the


beginner-level programmers and supports the development of a wide
range of applications from simple text processing to WWW browsers to
games.

5.1.2 CHARACTERISTICS OF PYTHON:

Following are important characteristics of Python Programming:-

• It supports functional and structured programming methods as well as


OOP.
• It can be used as a scripting language or can be compiled to byte-code for
building large applications.
• It provides very high-level dynamic data types and supports dynamic type
checking.
• It supports automatic garbage collection.
• It can be easily integrated with C, C++, COM, ActiveX, CORBA, and
Java.

5.1.3 APPLICATIONS OF PYTHON:

The latest release of Python is 3.x. As mentioned before, Python is one of the most widely used languages on the web. A few of its strengths are listed here:

• Easy-to-learn − Python has few keywords, simple structure, and a


clearly defined syntax. This allows the student to pick up the language
quickly.

• Easy-to-read − Python code is more clearly defined and visible to the


eyes.

• Easy-to-maintain − Python's source code is fairly easy-to-maintain.

• A broad standard library − Python's bulk of the library is very portable


and cross-platform compatible on UNIX, Windows, and Macintosh.

• Interactive Mode − Python has support for an interactive mode which


allows interactive testing and debugging of snippets of code.

• Portable − Python can run on a wide variety of hardware platforms and


has the same interface on all platforms.

• Extendable − You can add low-level modules to the Python interpreter.


These modules enable programmers to add to or customize their tools to
be more efficient.

• Databases − Python provides interfaces to all major commercial


databases.

• GUI Programming − Python supports GUI applications that can be


created and ported to many system calls, libraries and windows systems,
such as Windows MFC, Macintosh, and the X Window system of Unix.

5.1.4 PYTHON – OVERVIEW:

Python is a high-level, interpreted, interactive, and object-oriented scripting language. Python is designed to be highly readable. It uses English keywords frequently whereas other languages use punctuation, and it has fewer syntactical constructions than other languages.

• Python is Interpreted − Python is processed at runtime by the


interpreter. You do not need to compile your program before executing it.
This is similar to PERL and PHP.

• Python is Interactive − You can actually sit at a Python prompt and


interact with the interpreter directly to write your programs.

• Python is Object-Oriented − Python supports Object-Oriented style or


technique of programming that encapsulates code within objects.

• Python is a Beginner's Language − Python is a great language for the


beginner-level programmers and supports the development of a wide
range of applications from simple text processing to WWW browsers to
games.

5.1.5 HISTORY OF PYTHON:

Python was developed by Guido van Rossum in the late eighties and early
nineties at the National Research Institute for Mathematics and Computer
Science in the Netherlands.

Python is derived from many other languages, including ABC, Modula-3, C, C++, Algol-68, SmallTalk, and Unix shell and other scripting languages.

Python is copyrighted. Like Perl, Python source code is now available under the
GNU General Public License (GPL).

5.1.6 PYTHON FEATURES:

Apart from the above-mentioned features, Python has a big list of good features; a few are listed below:

• It supports functional and structured programming methods as well as


OOP.
• It can be used as a scripting language or can be compiled to byte-code for
building large applications.
• It provides very high-level dynamic data types and supports dynamic type
checking.
• It supports automatic garbage collection.
• It can be easily integrated with C, C++, COM, ActiveX, CORBA, and
Java.
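As a small illustrative sketch of the dynamic typing and runtime type checking listed above (the variable names are arbitrary, not from the project code):

```python
# A variable is just a name bound to an object; its type can change at runtime.
value = 42
print(type(value).__name__)   # int

value = "forty-two"           # rebinding the same name to a str is legal
print(type(value).__name__)   # str

# Dynamic type checking: operations are validated at runtime, not compile time.
try:
    value + 1                 # str + int raises TypeError only when executed
except TypeError as err:
    print("TypeError caught:", err)
```

Objects that are no longer referenced, such as the original integer 42 above, are reclaimed automatically by the garbage collector.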

5.1.7 PYTHON - ENVIRONMENT SETUP:

As we learned in the introduction, Python is a free, open-source, and cross-platform language, so it can run on multiple OS platforms like Windows, Linux, Mac, etc.

We will now learn how to install Python or set up a Python development environment on different OS platforms like Windows, Mac, and Linux machines.

5.1.8 DOWNLOAD AND INSTALL PYTHON:

Before you start the Python installation, first verify whether Python is already installed on your machine. Nowadays, most devices come with Python preinstalled.

To verify the Python installation, open the command prompt (cmd.exe) or Terminal and type the command python --version.

5.1.9 WINDOWS INSTALLATION:

Here are the steps to install Python on Windows machine.

• Open a Web browser and go to https://www.python.org/downloads/.

• Follow the link for the Windows installer python-XYZ.msi file where
XYZ is the version you need to install.

• To use this installer python-XYZ.msi, the Windows system must support Microsoft Installer 2.0. Save the installer file to your local machine and then run it to find out if your machine supports MSI.

• Run the downloaded file. This brings up the Python install wizard, which
is really easy to use. Just accept the default settings, wait until the install
is finished, and you are done.

5.1.10 SETTING UP PATH:

Programs and other executable files can be in many directories, so operating systems provide a search path that lists the directories that the OS searches for executables.

The path is stored in an environment variable, which is a named string maintained by the operating system. This variable contains information available to the command shell and other programs.

The path variable is named PATH in Unix or Path in Windows (Unix is case-sensitive; Windows is not).

To install Python on a Windows machine, visit the Python download URL to download and install the latest Python version. When you open the URL, it will automatically detect your OS and display the download link for your operating system, as shown below.

Fig.1 Python-Website

When you click on the Download Python 3.8.3 button, it will download the python-3.8.3.exe file for the 32-bit version. If you want the 64-bit version, visit the Python for Windows page and download the appropriate 64-bit installer.

Fig.2 Install Python

If you choose the Install Now option, it will install Python in the default location (C:\Users\{UserName}\AppData\Local\Programs\Python\Python38) with default settings. If you want to customize the Python installation folder location and features, choose the Customize installation option. Select the Add Python 3.8 to PATH option so that you can execute Python from any path.

After completing the Python installation, you will see the success message window as shown below; click on the Close button to close the setup wizard.

Fig.3 Complete Install

5.1.11 VERIFY THE INSTALLATION:

After completing the Python installation on your machine, you can verify it by opening the command prompt and typing the python --version command.

If Python is installed successfully, it will display the version of Python installed on your machine as shown below.

To verify the installation, open the Run window, type cmd, and press Enter:

Fig.4 Run CMD

In the Command Prompt, type python command as follows:

Fig.5 Command Prompt


5.1.12 SETTING PATH AT WINDOWS:

To add the Python directory to the path for a particular session in Windows, at the command prompt type path %path%;C:\Python and press Enter.

Note − C:\Python is the path of the Python directory.

5.1.13 PYTHON ENVIRONMENT VARIABLES:

Here are important environment variables, which can be recognized by Python:

Sr.No. Variable & Description

1 PYTHONPATH

It has a role similar to PATH. This variable tells the Python interpreter
where to locate the module files imported into a program. It should
include the Python source library directory and the directories containing
Python source code. PYTHONPATH is sometimes preset by the Python
installer.

2 PYTHONSTARTUP

It contains the path of an initialization file containing Python source code. It is executed every time you start the interpreter. It is named .pythonrc.py in Unix and contains commands that load utilities or modify PYTHONPATH.

3 PYTHONCASEOK

It is used in Windows to instruct Python to find the first case-insensitive match in an import statement. Set this variable to any value to activate it.

4 PYTHONHOME

It is an alternative module search path. It is usually embedded in the PYTHONSTARTUP or PYTHONPATH directories to make switching module libraries easy.
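These variables can also be inspected from within Python itself. A small sketch (whether PYTHONPATH is set depends on your machine; the variable names are the standard ones listed above):

```python
import os
import sys

# Directories from PYTHONPATH (if set) are prepended to the module search path.
python_path = os.environ.get("PYTHONPATH")
print("PYTHONPATH:", python_path if python_path else "<not set>")

# sys.path is the effective module search path the interpreter actually uses.
for directory in sys.path[:5]:
    print("search path entry:", directory)
```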

5.1.14 INTEGRATED DEVELOPMENT ENVIRONMENT:

You can run Python from a Graphical User Interface (GUI) environment as
well, if you have a GUI application on your system that supports Python.

• Windows − PythonWin is the first Windows interface for Python and is an IDE with a GUI.

If you are not able to set up the environment properly, then you can take help
from your system admin. Make sure the Python environment is properly set up
and working perfectly fine.

5.1.15 PYTHON - BASIC SYNTAX:

The Python syntax defines a set of rules that are used to create Python
statements while writing a Python Program. The Python Programming
Language Syntax has many similarities to Perl, C, and Java Programming
Languages. However, there are some definite differences between the
languages.

5.1.16 FIRST PYTHON PROGRAM:

Let us execute a Python "Hello, World!" program in different modes of programming.

Python - Interactive Mode Programming
Invoke the Python interpreter from the command line by typing python at the command prompt, as follows:

>>> print ("Hello, World!")

If you are running an older version of Python, such as Python 2.4.x, you would need to use the print statement without parentheses, as in print "Hello, World!". However, in Python version 3.x, this produces the following result.

Hello, World!
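Python can also run the same program in script mode, where the statements are saved in a .py file and the file is passed to the interpreter. A self-contained sketch that simulates this from within Python (the temporary file stands in for a hypothetical hello.py):

```python
import os
import subprocess
import sys
import tempfile

# Write a one-line script to a temporary file, mimicking "hello.py".
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write('print("Hello, World!")\n')
    script_path = f.name

# Run it with the same interpreter, as "python hello.py" would from the shell.
result = subprocess.run([sys.executable, script_path],
                        capture_output=True, text=True)
print(result.stdout.strip())  # Hello, World!

os.remove(script_path)  # clean up the temporary script
```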

5.1.17 SETUP VISUAL STUDIO CODE FOR PYTHON:

Visual Studio Code is a lightweight source code editor, often called VS Code. VS Code runs on your desktop and is available for Windows, macOS, and Linux. It comes with many features such as IntelliSense, code editing, and extensions that allow you to edit Python source code effectively. The best part is that VS Code is open-source and free. Besides the desktop version, VS Code also has a browser version that you can use directly in your web browser without installing it.

5.1.18 SETTING UP VISUAL STUDIO CODE:

To set up the VS Code, you follow these steps:


• First, navigate to the VS Code official website and download the VS code
based on your platform (Windows, macOS, or Linux).
• Second, launch the setup wizard and follow the steps.
Once the installation completes, you can launch the VS code application:

Fig.6 Visual Studio Code

Visual Studio Code is a lightweight but powerful source code editor which runs
on your desktop and is available for Windows, macOS and Linux. It comes with
built-in support for JavaScript, TypeScript and Node.js and has a rich ecosystem
of extensions for other languages and runtimes (such as C++, C#, Java, Python,
PHP, Go, .NET). Begin your journey with VS Code with these introductory
videos.

5.2 EXTENSIONS:

VS Code extensions let third parties add support for additional:


• Languages - C++, C#, Go, Java, Python
• Tools - ESLint, JSHint, PowerShell
• Debuggers - PHP XDebug.
• Keymaps - Vim, Sublime Text, IntelliJ, Emacs, Atom, Brackets, Visual
Studio, Eclipse

5.2.1 INSTALL PYTHON EXTENSION:

To make VS Code work with Python, you need to install the Python extension from the Visual Studio Marketplace.
The following picture illustrates the steps:

Fig.7 Visual Studio Code to Python Extension

• First, click the Extensions tab and second, type the python keyword in the search input.
• Third, click the Python extension. It’ll show detailed information on the
right pane.
• Finally, click the Install button to install the Python extension.

5.3 INTRODUCTION TO TKINTER:

Graphical User Interface (GUI) is a form of user interface which allows users to interact with computers through visual indicators, using items such as icons, menus, windows, etc. It has advantages over the Command Line Interface (CLI), where users interact with computers by writing commands using the keyboard only, and whose usage is more difficult than a GUI.

Modern computer applications are user-friendly. User interaction is not restricted to console-based I/O. They have a more ergonomic graphical user interface (GUI) thanks to high speed processors and powerful graphics hardware. These applications can receive inputs through mouse clicks and can enable the user to choose from alternatives with the help of radio buttons, dropdown lists, and other GUI elements (or widgets).

Such applications are developed using one of the various graphics libraries available. A graphics library is a software toolkit having a collection of classes that define the functionality of various GUI elements. These graphics libraries are generally written in C/C++. Many of them have been ported to Python in the form of importable modules.

What is Tkinter?

Tkinter is the inbuilt python module that is used to create GUI applications. It
is one of the most commonly used modules for creating GUI applications in
Python as it is simple and easy to work with. You don’t need to worry about the
installation of the Tkinter module separately as it comes with Python already. It
gives an object-oriented interface to the Tk GUI toolkit.
Some other Python Libraries available for creating our own GUI applications
are,

• Kivy
• Python Qt
• wxPython

Among all of these, Tkinter is the most widely used.


Here are some common use cases for Tkinter:
1. Creating windows and dialog boxes: Tkinter can be used to create windows
and dialog boxes that allow users to interact with your program. These can
be used to display information, gather input, or present options to the user.

2. Building a GUI for a desktop application: Tkinter can be used to create the
interface for a desktop application, including buttons, menus, and other
interactive elements.
3. Adding a GUI to a command-line program: Tkinter can be used to add a
GUI to a command-line program, making it easier for users to interact with
the program and input arguments.
4. Creating custom widgets: Tkinter includes a variety of built-in widgets,
such as buttons, labels, and text boxes, but it also allows you to create your
own custom widgets.
5. Prototyping a GUI: Tkinter can be used to quickly prototype a GUI,
allowing you to test and iterate on different design ideas before committing
to a final implementation.

What are Widgets?

Widgets in Tkinter are the elements of a GUI application which provide various controls (such as Labels, Buttons, ComboBoxes, CheckBoxes, MenuBars, RadioButtons and many more) for users to interact with the application.

Fig.8 Fundamental structure of tkinter program

5.3.1 BASIC TKINTER WIDGETS:

S.No. Widget & Description

1 Button
The Button widget is used to display buttons in your application.

2 Canvas
The Canvas widget is used to draw shapes, such as lines, ovals, polygons
and rectangles, in your application.

3 Checkbutton
The Checkbutton widget is used to display a number of options as
checkboxes. The user can select multiple options at a time.

4 Entry
The Entry widget is used to display a single-line text field for accepting
values from a user.

5 Frame
The Frame widget is used as a container widget to organize other widgets.

6 Label
The Label widget is used to provide a single-line caption for other widgets.
It can also contain images.

7 Listbox
The Listbox widget is used to provide a list of options to a user.

8 Menubutton
The Menubutton widget is used to display menus in your application.

9 Menu
The Menu widget is used to provide various commands to a user. These
commands are contained inside Menubutton.

10 Message
The Message widget is used to display multiline text fields for accepting
values from a user.

11 Radiobutton
The Radiobutton widget is used to display a number of options as radio
buttons. The user can select only one option at a time.

12 Scale
The Scale widget is used to provide a slider widget.

13 Scrollbar
The Scrollbar widget is used to add scrolling capability to various widgets,
such as list boxes.

14 Text
The Text widget is used to display text in multiple lines.

15 Toplevel
The Toplevel widget is used to provide a separate window container.

16 Spinbox
The Spinbox widget is a variant of the standard Tkinter Entry widget, which
can be used to select from a fixed number of values.

17 PanedWindow
A PanedWindow is a container widget that may contain any number of panes, arranged horizontally or vertically.

18 LabelFrame
A labelframe is a simple container widget. Its primary purpose is to act as a
spacer or container for complex window layouts.

5.3.2 Tkinter PROGRAMMING:

Tkinter is the standard GUI library for Python. Python when combined with
Tkinter provides a fast and easy way to create GUI applications. Tkinter
provides a powerful object-oriented interface to the Tk GUI toolkit.

Creating a GUI application using Tkinter is an easy task. All you need to do is
perform the following steps-

• Import the Tkinter module.


• Create the GUI application main window.
• Add one or more of the above-mentioned widgets to the GUI
application.
• Enter the main event loop to take action against each event triggered by
the user.

EXAMPLE:
import tkinter
top = tkinter.Tk()
# Code to add widgets will go here...
top.mainloop()

This would create the following window:

Fig.9 Simple Tkinter Windows

5.3.3 STANDARD ATTRIBUTES:

Let us take a look at how some of their common attributes.such as sizes, colors
and fonts are specified.

• Dimensions
• Colors
• Fonts
• Anchors
• Relief styles
• Bitmaps
• Cursors

5.3.4 GEOMETRY MANAGEMENT:

All Tkinter widgets have access to specific geometry management methods, which have the purpose of organizing widgets throughout the parent widget area. Tkinter exposes the following geometry manager classes: pack, grid, and place.

• The pack() Method − This geometry manager organizes widgets in
blocks before placing them in the parent widget.
• The grid() Method − This geometry manager organizes widgets in a
table-like structure in the parent widget.
• The place() Method − This geometry manager organizes widgets by
placing them in a specific position in the parent widget.

Python - place() method in Tkinter:

The Place geometry manager is the simplest of the three general geometry managers provided in Tkinter. It allows you to explicitly set the position and size of a window, either in absolute terms or relative to another window. You can access the place manager through the place() method, which is available for all standard widgets. It is usually not a good idea to use place() for ordinary window and dialog layouts; it's simply too much work to get things working as they should. Use the pack() or grid() managers for such purposes.

Syntax:

widget.place(relx = 0.5, rely = 0.5, anchor = CENTER)
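A minimal sketch of this syntax in use (the window contents are illustrative, not from the project code). The builder is wrapped in a function so the window is only created when explicitly requested:

```python
import tkinter as tk

def build_placed_window():
    # Center a button in a 200x200 window using relative coordinates:
    # relx/rely are fractions of the parent's size, anchor fixes the
    # widget's center point at that position.
    root = tk.Tk()
    root.geometry("200x200")
    button = tk.Button(root, text="Centered")
    button.place(relx=0.5, rely=0.5, anchor=tk.CENTER)
    return root

# To display the window, call: build_placed_window().mainloop()
```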

Python - grid() method in Tkinter:

The Grid geometry manager puts the widgets in a 2-dimensional table. The
master widget is split into a number of rows and columns, and each “cell” in
the resulting table can hold a widget. The grid manager is the most flexible of
the geometry managers in Tkinter. If you don’t want to learn how and when to
use all three managers, you should at least make sure to learn this one.
Consider the following example-

Creating this layout using the pack manager is possible, but it takes a number
of extra frame widgets, and a lot of work to make things look good. If you use
the grid manager instead, you only need one call per widget to get everything
laid out properly. Using the grid manager is easy. Just create the widgets, and
use the grid method to tell the manager in which row and column to place
them. You don’t have to specify the size of the grid beforehand; the manager
automatically determines that from the widgets in it.
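The workflow described above can be sketched as follows; the two-row form is a hypothetical example, not part of the project UI:

```python
import tkinter as tk

def build_form():
    # One grid() call per widget: the manager infers a 2x2 table
    # from the row/column indices, no size declared up front.
    root = tk.Tk()
    tk.Label(root, text="First:").grid(row=0, column=0)
    tk.Entry(root).grid(row=0, column=1)
    tk.Label(root, text="Second:").grid(row=1, column=0)
    tk.Entry(root).grid(row=1, column=1)
    return root

# To display the form, call: build_form().mainloop()
```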

Python-pack() method in Tkinter:

The Pack geometry manager packs widgets relative to the earlier widget.
Tkinter literally packs the entire widgets one after the other in a window. We
can use options like fill, expand, and side to control this geometry manager.
Compared to the grid manager, the pack manager is somewhat limited, but
it’s much easier to use in a few, but quite common situations:

• Put a widget inside a frame (or any other container widget), and have it
fill the entire frame
• Place a number of widgets on top of each other
• Place a number of widgets side by side
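The three situations above can be sketched in one short example (widget names are illustrative):

```python
import tkinter as tk

def build_packed_window():
    root = tk.Tk()
    # Stack widgets top-to-bottom; the text area fills remaining space.
    tk.Label(root, text="Header").pack(side=tk.TOP)
    tk.Text(root).pack(fill=tk.BOTH, expand=True)
    # Place two buttons side by side along the left edge.
    tk.Button(root, text="OK").pack(side=tk.LEFT)
    tk.Button(root, text="Cancel").pack(side=tk.LEFT)
    return root

# To display the window, call: build_packed_window().mainloop()
```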

Python - Binding function in Tkinter:

Tkinter is a GUI (Graphical User Interface) module that is widely used in desktop applications. It comes along with Python, but you can also install it externally with the help of the pip command. It provides a variety of widget classes and functions with the help of which one can make a GUI more attractive and user-friendly in terms of both looks and functionality.
We can bind Python's functions and methods to an event, and we can also bind these functions to any particular widget.

What is bind?

The basic definition of the word bind is stick together or cause to stick
together in a single mass. Similarly, Tkinter bind is used to connect an event
passed in the widget along with the event handler. The event handler is the
function that gets invoked when the events take place.

widget.bind(sequence=None, func=None, add=None)

The sequence argument describes what event we expect, and the func argument is a function to be called when that event happens to the widget. If there was already a binding for that event for this widget, normally the old callback is replaced with func, but you can preserve both callbacks by passing add='+'. The events can be bound to an event handler using the bind function at different levels.

1. Instance-level binding

One can bind an event to one specific widget. To bind an event of a widget,
call the .bind() method on that widget. widget.bind(event, event handler)

• Event – occurrence caused by the user that might reflect changes.


• Event Handler – function in your application that gets invoked when
the event takes place.

• Bind – configuring an event handler (python function) that is called
when an event occurs to a widget.
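Instance-level binding can be sketched as follows; the widget and handler names are hypothetical:

```python
import tkinter as tk

def on_click(event):
    # Event handler: invoked with an Event object describing the occurrence,
    # including the click coordinates within the widget.
    print("clicked at", event.x, event.y)

def build_bound_window():
    root = tk.Tk()
    label = tk.Label(root, text="Click me")
    label.pack()
    # Instance-level binding: only this label reacts to left-button clicks.
    label.bind("<Button-1>", on_click)
    return root

# To display the window, call: build_bound_window().mainloop()
```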

2. Class-level binding

One can bind an event to all widgets of a class. For example, you might set up
all Button widgets to respond to middle mouse button clicks by changing back
and forth between English and Japanese labels. bind_class is a method
available to all widgets and simply calls the Tk bind command again, however
not with the instance name, but the widget class name.

w.bind_class(className, sequence=None, func=None, add=None)

The basic working of .bind_class is the same as the .bind function.

3. Application-level binding

One can set up a binding so that a certain event calls a handler no matter what
widget has the focus or is under the mouse.

w.bind_all(sequence=None, func=None, add=None)

Like .bind(), but applies to all widgets in the entire application.

CHAPTER - 6

SYSTEM DESIGN

6.1 SYSTEM ARCHITECTURE

Fig.10 System Architecture

6.2 DATA FLOW DIAGRAM:

1. The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on this data, and the output data generated by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system process, the data used by the process, the external entities that interact with the system, and the information flows in the system.
3. The DFD shows how information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
4. A DFD may be used to represent a system at any level of abstraction. It may be partitioned into levels that represent increasing information flow and functional detail.

[DFD nodes: User, Live Video/CCTV, Video Processing, Spatio-Temporal Model, Extracted Features, YOLO Video Frames (CNN), Accident Detection, Alert Notification]
6.3 USE CASE DIAGRAM:

A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and created from a use-case analysis. Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between those use cases. The main purpose of a use case diagram is to show what system functions are performed for which actor. Roles of the actors in the system can be depicted.

[Use case diagram: actor User interacts with the use cases Live Video/CCTV, Spatio-Temporal Model, Extracted Features, CNN-YOLO Model Process, Video Frames, and Accident Detection]

6.4 DATASET DESIGN

Video Footage:

Fig.11 Video Footage


Frame_Image:

Fig.12 Frame Image


Inside_Label_images:

Fig.13 Inside Label images

CHAPTER - 7

SYSTEM DEVELOPMENT

7.1 INPUT AND OUTPUT DESIGN

7.1.1 INPUT DESIGN

The input design is the link between the information system and the user. It comprises developing the specifications and procedures for data preparation and the steps necessary to put transaction data into a usable form for processing. This can be achieved by instructing the computer to read data from a written or printed document, or it can occur by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling the errors, avoiding delay, avoiding extra steps and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considers the following things:

• What data should be given as input?


• How the data should be arranged or coded?
• The dialog to guide the operating personnel in providing input.

7.1.2 OBJECTIVES

1. Input Design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the correct direction to the management for getting correct information from the computerized system.

2. It is achieved by creating user-friendly screens for the data entry to handle large volume of data. The goal of designing input is to make data entry easier and to be free from errors.

3. When the data is entered, it will be checked for its validity. Data can be entered with the help of screens. Appropriate messages are provided as and when needed so that the user will not be left in confusion. Thus the objective of input design is to create an input layout that is easy to follow.
7.1.3 OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design it is determined how the information is to be displayed for immediate need and also as hard copy output. It is the most important and direct source of information to the user. Efficient and intelligent output design improves the system's relationship with the user and helps in decision-making.

1. Designing computer output should proceed in an organized, well thought out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy to use and effective. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.

2. Select methods for presenting information.

3. Create document, report, or other formats that contain information produced by the system.

The output form of an information system should accomplish one or more of the
following objectives.
• Convey information about past activities, current status or projections of
the Future.
• Signal important events, opportunities, problems, or warnings.
• Trigger an action.
• Confirm an action.

7.2 SYSTEM STUDY
7.2.1 FEASIBILITY STUDY:-

The feasibility of the project is analyzed in this phase and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the company.
• Economical Feasibility
• Technical Feasibility
• Social Feasibility
7.2.2 ECONOMICAL FEASIBILITY:-

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. Thus the developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

7.2.3 TECHNICAL FEASIBILITY:-

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not have a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.

7.2.4 SOCIAL FEASIBILITY:-

This aspect of the study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but instead must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.

7.3 SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests. Each test type addresses a specific testing requirement.

7.4 TYPES OF TESTS:-

7.4.1 UNIT TESTING:-

Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.

7.4.2 INTEGRATION TESTING:-

Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.

7.4.3 FUNCTIONAL TESTING:-


Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system
documentation, and user manuals.
Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.

7.4.4 SYSTEM TESTING

System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.

7.4.5 WHITE BOX TESTING

White Box Testing is a testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

7.4.6 BLACK BOX TESTING

Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is a testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.

Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test
phase of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.

Test Strategy and Approach
Field testing will be performed manually and functional tests will be written
in detail.

Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.

Features to be tested
• Verify that the entries are of the correct format
• No duplicate entries should be allowed
• All links should take the user to the correct page.

Integration Testing:
Software integration testing is the incremental integration testing of two or
more integrated software components on a single platform to produce failures
caused by interface defects. The task of the integration test is to check that
components or software applications, e.g. components in a software system or,
one step up, software applications at the company level, interact without error.
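
An interface-level integration test for this project might check the handoff between the source selector and the detector. The classes below are simplified stand-ins (not the project's actual `VcdUI`/`VehicleCrash` code), but the `set_source` interface and the "Video File"/"Live-Camera" options mirror the listings in Chapter 8:

```python
# Simplified detector component exposing the same set_source interface.
class Detector:
    def __init__(self):
        self.source = None

    def set_source(self, source):
        self.source = source

# Selector component: map a combo-box option to the detector's video source.
def choose_source(detector, option, file_path=None):
    if option == "Video File":
        detector.set_source(file_path)
    elif option == "Live-Camera":
        detector.set_source(0)  # default camera index

# Integration check: the two components interact through set_source
# without error, which is what the interface test is meant to catch.
det = Detector()
choose_source(det, "Live-Camera")
assert det.source == 0
choose_source(det, "Video File", "crash.mp4")
assert det.source == "crash.mp4"
```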

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

Acceptance Testing:
User Acceptance Testing is a critical phase of any project and requires
significant participation by the end user. It also ensures that the system meets the
functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

CHAPTER - 8

APPENDICES

8.1 SCREENSHOT

Run: python main.py

Fig.14 Run: python main.py

Main GUI:

Fig.15 Main GUI

Select Video Source:

Fig.16 Select Video Source

Select Video File:

Fig.17 Video File

Video Frame Analysis:

Fig.18 Video Frame Analysis

Accident Detection:

Fig.19 Accident Detection

Find Accident Detected with Accuracy Prediction:

Fig.20 Find Accident Detected with Accuracy Prediction

Send SMS Process:

Fig.21 Send SMS Process

View Image Frame:

Fig.22 View Image Frame

View Image Frame Prediction:

Fig.23 View Image Frame Prediction

View Inside Label Image 1:

Fig.24 View Inside Label Image 1

View Inside Label Image 2:

Fig.25 View Inside Label Image 2

8.2 SOURCE CODE

Vcd_ui.py

# Import the following modules
import ctypes
import tkinter as tk
from tkinter import ttk, filedialog

from PIL import Image, ImageTk
from PIL.ImageTk import PhotoImage

import vehicle_crash_detection

ctypes.windll.shcore.SetProcessDpiAwareness(1)


class VcdUI:

    def __init__(self, root):
        self.root = root  # create root window
        self.root.state('zoomed')
        self.root.title("Vehicle Accident Detector")
        self.root.config(bg="#277a36")
        self.title_bar_icon = PhotoImage(file="resources/icon/vehicle_crash_black.png")
        self.root.iconphoto(False, self.title_bar_icon)

        # create the main content frame
        self.content = tk.Frame(root, bg="#277a36", width=400)
        self.content.pack(side='right', fill='both', expand=True)

        # Load the icons and resize them to the desired size
        self.icon_white = ImageTk.PhotoImage(
            Image.open("resources/icon/vehicle_crash_white.png").resize((60, 60)))
        self.icon_black = ImageTk.PhotoImage(
            Image.open("resources/icon/vehicle_crash_black._32.png").resize((60, 60)))

        self.title_label = tk.Label(self.content, text=" Vehicle Accident Detector",
                                    bg=root["bg"], font=('Cascadia Code Bold', 20),
                                    fg="black", compound="left", image=self.icon_black)
        # Keep a reference so the icon is not garbage-collected
        self.title_label.image = self.title_bar_icon
        self.title_label.pack(side="top", anchor="n", padx=10, pady=10)

        self.detections_update_label = tk.Label(self.content, text="",
                                                bg=root["bg"],
                                                font=('Cascadia Code Bold', 20),
                                                fg="white")
        self.detections_update_label.pack(side="bottom", anchor="s", padx=10, pady=60)

        self.source = "No Video Source Provided Yet!"

        # create the sidebar container frame
        self.sidebar = tk.Frame(root, bg='#000000', width=25)
        self.sidebar.pack(side='left', fill='y')

        # Create a frame for the white border
        self.border_frame = tk.Frame(self.sidebar, bg='white', width=2)
        self.border_frame.pack(side='right', fill='y')

        # Create a Label widget to display the sidebar icon
        self.sidebar_icon_label = tk.Label(self.sidebar, image=self.icon_white,
                                           bg='#000000')
        self.sidebar_icon_label.pack(side='top', pady=10)

        # create the sidebar buttons
        self.sidebar_button1 = tk.Button(self.sidebar, text='Accident Detection',
                                         width=25, height=2, fg="white", bg="#000000",
                                         font=('Cascadia Code', 10))
        self.sidebar_button1.pack()

        self.sidebar_button2 = tk.Button(self.sidebar, text='Records',
                                         command=self.open_image_viewer,
                                         width=25, height=2, fg="white", bg="#000000",
                                         font=('Cascadia Code', 10))
        self.sidebar_button2.pack()

        # Create a label for the combo box
        self.combo_label = tk.Label(self.content, text="Select a Video Source:",
                                    fg="white", bg=root["bg"],
                                    font=('Cascadia Code', 12))
        self.combo_label.pack(side="top", anchor="n", padx=10, pady=10)

        self.combo_box = ttk.Combobox(self.content,
                                      values=["Video File", "Live-Camera"])
        # Bind the handle_combobox() function to the "<<ComboboxSelected>>" event
        self.combo_box.bind("<<ComboboxSelected>>", self.handle_combobox)
        self.combo_box.pack(side="top", anchor="n", padx=10, pady=10)

        # Create a BooleanVar object to track the detection state
        self.var = tk.BooleanVar()

        # Create the detection toggle button
        self.button1 = tk.Button(self.content, text="Detection \nOFF", width=18,
                                 height=3, fg="white", bg="#000000",
                                 command=self.toggle, font=('Cascadia Code', 9))
        self.button1.place(relx=0, rely=0.30, anchor="w", x=50, y=-80)

        # Create an instance of the VehicleCrash class and load the model
        self.vc = vehicle_crash_detection.VehicleCrash(
            self.detections_update_label, self.content, self.button1)
        self.vc.load_model()

    # open a file as video source
    def open_file(self):
        file_path = filedialog.askopenfilename()
        source = str(file_path)
        self.vc.set_source(source)
        return source

    # open a camera as video source
    def open_camera(self):
        source = 0
        self.vc.set_source(source)
        return source

    def handle_combobox(self, event):
        value = event.widget.get()
        if value == "Video File":
            self.open_file()
        elif value == "Live-Camera":
            self.open_camera()

    def clear_frame(self):
        # Destroy everything except the combo box, buttons and labels
        # (e.g. the video canvas left behind by a detection run)
        keep_classes = [ttk.Combobox, tk.Button, tk.Label]
        for widget in self.content.winfo_children():
            if type(widget) not in keep_classes:
                widget.destroy()

    def toggle(self):
        # Toggle the state of the variable
        self.var.set(not self.var.get())
        # Set the button text to "On" or "Off" depending on the state
        if self.var.get():
            self.button1.config(text="Detection \nON")
            self.vc.run_detection()
        else:
            self.button1.config(text="Detection \nOFF")
            self.vc.stop_detection()
            self.detections_update_label.configure(text="")
            self.clear_frame()

    def open_image_viewer(self):
        self.root.withdraw()  # hide the current window
        # Create a new window for the image viewer
        image_viewer_window = tk.Toplevel(self.root)
        image_viewer_window.title("Image Viewer")
        # Create an instance of the ImageViewer class
        from image_data_viewer import ImageViewer
        ImageViewer(image_viewer_window)


if __name__ == '__main__':
    root = tk.Tk()
    app = VcdUI(root)
    root.mainloop()

vehicle_crash_detection.py:

# Import the following modules
import datetime
import functools
import threading
import time
import tkinter as tk
from tkinter import ttk
from tkinter.ttk import Style

import cv2
import numpy as np
import PIL
from PIL import Image, ImageTk  # ensures PIL.Image / PIL.ImageTk are loaded

import tensorflow as tf
from object_detection.utils import label_map_util

import email_alert
import sms_alert

# ----------------------- Vehicle Crash Detection -----------------------

# Populated by VehicleCrash.load_model()
detect_fn = None


class VehicleCrash:
    ''' This class implements the vehicle crash detector functionality;
    it is used by the VcdUI class. '''

    PATH_TO_SAVED_MODEL = "inference_graph\\saved_model"
    category_index = label_map_util.create_category_index_from_labelmap(
        "label_map.pbtxt", use_display_name=True)

    def __init__(self, detections_update_label, content, button1):
        self.detections_update_label = detections_update_label
        self.content = content
        self.source = None
        self.running = False
        self.button1 = button1
        self.count = 0
        self.i = 0

    def set_source(self, source):
        self.source = source

    # visualise_on_image() draws the label box when a vehicle crash is detected
    def visualise_on_image(self, frame, image, bboxes, labels, scores, thresh):
        (h, w, d) = image.shape
        current_datetime = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        for bbox, label, score in zip(bboxes, labels, scores):
            if score > thresh:
                xmin, ymin = int(bbox[1] * w), int(bbox[0] * h)
                xmax, ymax = int(bbox[3] * w), int(bbox[2] * h)
                cv2.rectangle(image, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
                cv2.putText(image, f"{label}: {int(score * 100)} %", (xmin, ymin),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
                self.count += 1
                print(self.count)
                if self.count == 5:
                    # Save the full frame and the region inside the label box
                    label_box_image = frame[ymin:ymax, xmin:xmax]
                    cv2.imwrite("outputs/frame_img/vcd_frame" +
                                str(current_datetime) + str(self.i) + ".jpg", image)
                    # Resize the cropped region to the desired size
                    image_size = (1920, 1080)  # width, height
                    resized_image = cv2.resize(label_box_image, image_size)
                    # Apply a 3x3 sharpening filter to the resized image
                    kernel = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]])
                    sharpened_image = cv2.filter2D(resized_image, -1, kernel)
                    # Save the sharpened image with high quality
                    png_quality = 100
                    cv2.imwrite("outputs/inside_label_img/vcd_inlabel" +
                                str(current_datetime) + str(self.i) + ".png",
                                sharpened_image,
                                [int(cv2.IMWRITE_JPEG_QUALITY), png_quality])
                if self.count == 20:
                    print("Vehicle_Accident_Detected")
                    # Run the alert pipeline on a background thread
                    perform_label_detected_func = threading.Thread(
                        target=self.perform_label_detected)
                    perform_label_detected_func.start()
                    self.i += 1
                    # Reset the count variable once the alerts are triggered
                    self.count = 0
                break
        return image

    # update_progress() updates the progress bar while loading the model
    def update_progress(self, progress, value):
        progress['value'] = value
        progress.update()

    # perform_label_detected() triggers the alert systems (email and SMS)
    # when a vehicle accident is detected
    def perform_label_detected(self):
        self.detections_update_label.configure(
            text="Vehicle Accident has been Detected")
        em = email_alert.Email(self.source)
        em.run_mail()
        self.detections_update_label.configure(
            text="Email Alert Sent Successfully to nearby Hospital, "
                 "Police Station and RTO")
        time.sleep(0.5)
        sm = sms_alert.Sms(self.source)
        sm.run_sms()
        self.detections_update_label.configure(
            text="SMS Alert Sent Successfully to nearby Hospital, "
                 "Police Station and RTO")
        time.sleep(0.5)
        self.detections_update_label.configure(
            text="Email and SMS Alert Sent Successfully to nearby Hospital, "
                 "Police Station and RTO")
        time.sleep(0.5)
        self.detections_update_label.configure(text="")

    # load_model() loads the saved model when the application is started
    @functools.lru_cache(maxsize=None)
    def load_model(self):
        style = Style()
        # Available themes: 'winnative', 'clam', 'alt', 'default',
        # 'classic', 'vista', 'xpnative'
        style.theme_use('alt')
        style.configure("Horizontal.TProgressbar", troughcolor='white',
                        background='black', thickness=30)
        # create a progress bar
        progress = ttk.Progressbar(self.content, orient=tk.HORIZONTAL,
                                   style="Horizontal.TProgressbar", length=300,
                                   mode='determinate')
        progress.pack(pady=200, side="top", anchor="s")
        self.detections_update_label.configure(text="Loading 0%")
        self.update_progress(progress, 0)
        time.sleep(0.1)
        print("Loaded 10% saved model ...")
        self.update_progress(progress, 10)
        self.detections_update_label.configure(text="Loading .10%")
        time.sleep(0.1)
        global detect_fn
        print("Loading saved model ...")
        detect_fn = tf.saved_model.load(self.PATH_TO_SAVED_MODEL)
        print("Loaded 50% saved model ...")
        self.detections_update_label.configure(text="Loading ....50%")
        self.update_progress(progress, 50)
        time.sleep(1.5)
        print("Model Loaded!")
        self.detections_update_label.configure(text="Loading .........100%")
        self.update_progress(progress, 100)
        time.sleep(1.5)
        self.detections_update_label.configure(text="")
        time.sleep(0.1)
        progress.destroy()
        return detect_fn

    # close_canvas() closes the canvas which displays the detection output
    def close_canvas(self, canvas):
        canvas.destroy()
        self.content.update()

    # run_detection() is called when the detection button is clicked; it
    # detects the occurrence of a vehicle crash in the given video source
    def run_detection(self):
        self.running = True
        while self.running:
            print("Video Source : ", self.source)
            video_capture = cv2.VideoCapture(self.source)
            start_time = time.time()
            canvas = tk.Canvas(self.content, width=1000, height=600)
            canvas.pack(side="top", anchor="n", padx=10, pady=40)
            frame_width = int(video_capture.get(3))
            frame_height = int(video_capture.get(4))
            size = (frame_width, frame_height)
            # Initialize the video writer for the annotated output
            result = cv2.VideoWriter('outputs/detection_video/det_vid.mp4',
                                     cv2.VideoWriter_fourcc('m', 'p', '4', 'v'),
                                     15, size)
            while True:
                ret, frame = video_capture.read()
                if not ret:
                    self.close_canvas(canvas)
                    self.stop_detection()
                    self.button1.config(text="Detection \nOFF")
                    self.detections_update_label.configure(text="")
                    self.source = "Video Source"
                    print('Unable to read video / Video ended')
                    self.detections_update_label.configure(
                        text="Unable to read video / Video ended")
                    break
                frame = cv2.flip(frame, 1)
                image_np = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                # The input needs to be a tensor; convert it using
                # tf.convert_to_tensor. The model expects a batch of images,
                # so also add an axis with tf.newaxis.
                input_tensor = tf.convert_to_tensor(image_np)[tf.newaxis, ...]
                # Pass the frame through the detector
                detections = detect_fn(input_tensor)
                # Set detection parameters
                score_thresh = 0.92  # Minimum threshold for object detection
                max_detections = 1
                # All outputs are batch tensors: convert to numpy arrays and
                # take index [0] to remove the batch dimension. We are only
                # interested in the first max_detections results.
                scores = detections['detection_scores'][0, :max_detections].numpy()
                bboxes = detections['detection_boxes'][0, :max_detections].numpy()
                labels = detections['detection_classes'][
                    0, :max_detections].numpy().astype(np.int64)
                labels = [self.category_index[n]['name'] for n in labels]
                # Display detections
                self.visualise_on_image(frame, frame, bboxes, labels, scores,
                                        score_thresh)
                # Convert the frame to a Tkinter-compatible format
                frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                image = PIL.Image.fromarray(frame)
                image = image.resize((1000, 600))
                photo = PIL.ImageTk.PhotoImage(image)
                # Compute frames per second
                end_time = time.time()
                fps = int(1 / (end_time - start_time))
                start_time = end_time
                # Update the canvas with the new frame and FPS text
                canvas.create_image(0, 0, image=photo, anchor=tk.NW)
                canvas.create_text(
                    50, video_capture.get(cv2.CAP_PROP_FRAME_HEIGHT) + 25,
                    text=f"FPS: {fps}", font=("Arial", 14), fill="red",
                    anchor=tk.NW)
                canvas.update()
                # Write the annotated frame to the output video
                result.write(frame)
            video_capture.release()

    # stop_detection() ends an already running run_detection() loop;
    # it is triggered by clicking the detection button again
    def stop_detection(self):
        self.running = False
        self.count = 0
        print("Detection Stopped")
CHAPTER - 9

CONCLUSION

The system was developed to detect accidents from CCTV footage quickly and
accurately. One important consideration in an accident detection project is the
trade-off between accuracy and speed: while a highly accurate CNN model may
detect accidents more reliably, it may also require more computational resources
and take longer to process input data. It is therefore important to balance
accuracy and speed based on the specific needs of the project. Overall, the
proposed CNN classification model for accident detection has the potential to
improve road safety and reduce the human and economic costs of traffic accidents
by automating the detection of accidents and enabling faster emergency response.
In future, the model can be enhanced to identify how many people were involved
in the accident.

CHAPTER - 10

FUTURE WORK

• Enhanced sensor integration: sensors for comprehensive environmental
and impact data collection.
• AI-driven response: integration of AI algorithms for proactive accident
prevention and adaptive responses.
• Smart communication: development of direct communication interfaces
with emergency services and hospitals for seamless support.

CHAPTER - 11

REFERENCE

[1] J. White, C. Thompson, H. Turner, B. Dougherty, and D. C. Schmidt,
"WreckWatch: Automatic traffic accident detection and notification with
smartphones," Mobile Networks and Applications, vol. 16, no. 3, pp. 285–303,
2021.

[2] M. S. Amin, J. Jalil, and M. B. I. Reaz, "Accident detection and reporting
system using GPS, GPRS and GSM technology," in Proc. IEEE Conference on
Informatics, Electronics & Vision (ICIEV), pp. 640–643, 2022.

[3] S. Kamijo, Y. Matsushita, K. Ikeuchi, and M. Sakauchi, "Traffic monitoring
and accident detection at intersections," IEEE Trans. on Intelligent
Transportation Systems, vol. 1, no. 2, pp. 108–118, 2019.

[4] Y.-K. Ki and D.-Y. Lee, "A traffic accident recording and reporting model
at intersections," IEEE Trans. on Intelligent Transportation Systems, vol. 8,
no. 2, pp. 188–194, 2020.

[5] C. Thompson, J. White, B. Dougherty, A. Albright, and D. C. Schmidt,
"Using smartphones to detect car accidents and provide situational awareness
to emergency responders," Institute for Computer Sciences, Social Informatics
and Telecommunications Engineering, 2021.

[6] J. Harrald and T. Jefferson, "Shared situational awareness in emergency
management mitigation and response," in Proc. 40th Annual Hawaii International
Conference on System Sciences (HICSS 2007), IEEE, p. 23, 2022.

[7] W. Wei and F. Hanbo, "Traffic accident automatic detection and remote
alarm device," in Proc. International Conference on Electric Information and
Control Engineering (ICEICE), pp. 910–913, 2023.

[8] M. Fogue, P. Garrido, F. J. Martinez, J.-C. Cano, C. T. Calafate, and
P. Manzoni, "Automatic accident detection: Assistance through communication
technologies and vehicles," IEEE Vehicular Technology Magazine, vol. 7, no. 3,
pp. 90–100, 2019.

[9] T. Andersson and P. Värbrand, "Decision support tools for ambulance
dispatch and relocation," Journal of the Operational Research Society, vol. 58,
no. 2, pp. 195–201, 2020.

[10] L. Brotcorne, G. Laporte, and F. Semet, "Ambulance location and
relocation models," European Journal of Operational Research, vol. 147, no. 3,
pp. 451–463, 2021.

[11] O. Vermesan, Internet of Things: Converging Technologies for Smart
Environments. River Publishers, 2022.

[12] "Road Crash Statistics", Asirt.org, 2023. [Online]. Available:
http://asirt.org/initiatives/informing-road users/road-safety-facts/roadcrash-statistics.

[13] A. App and P. LLC, "Auto Accident App on the App Store", App Store, 2019.
[Online]. Available: https://itunes.apple.com/ca/app/auto accident-app/id515255099?l=fr.

[14] "Auto Accident App - Murphy Battista LLP", Murphy Battista LLP, 2020.
[Online]. Available: http://www.murphybattista.com/autoaccident- app.

[15] "Accident Report for Android", Appsgalery.com, 2021. [Online].
Available: http://www.appsgalery.com/apps/accident report-34136.

[16] A. Fanca and H. Valean, "Accident reporting and guidance system with
automatic detection of the accident," in Proc. 20th International Conference
on System Theory, Control and Computing (ICSTCC), October 13–15, Sinaia,
Romania, IEEE, 2022.

[17] M. Bhokare, S. Kaulkar, A. Khinvasara, A. Agrawal, and Y. K. Sharma,
"An algorithmic approach for detecting car accidents using smartphone,"
International Journal of Research in Advent Technology, vol. 2, no. 4,
pp. 151–154, April 2023, E-ISSN: 2321-9637.

[18] A. Puşcaşiu, A. Fanca, and H. Vălean, "Tracking and localization system
using Android mobile phones," in Proc. 2019 IEEE International Conference on
Automation, Quality and Testing, Robotics, THETA 20th edition,
ISBN: 978-1-4673-8691-3, CFP16AQTUSB.

[19] E. Nasr, E. Kfoury, and D. Khoury, "An IoT approach to vehicle accident
detection, reporting and navigation," International Multidisciplinary
Conference on Engineering Technology (IMCET), IEEE, 2020.

[20] Hari Sankar S, Jayadev K, Suraj B, and Aparna P, "A comprehensive
solution to road traffic accident detection and ambulance management,"
International Conference on Advances in Electrical, Electronic and System
Engineering, IEEE, Nov. 2021.

