VEHICLE ACCIDENT DETECTION
A PROJECT REPORT
Submitted by
KEERTHIVASAN K
(6127236220215)
In partial fulfillment of the requirement
for the award of the degree
of
in
MECHERI, SALEM-636453
JULY 2025
THE KAVERY ENGINEERING COLLEGE
BONAFIDE CERTIFICATE
SIGNATURE
Mr. S. A. CHENNAKESAVAN, MCA, M.Phil.,
Assistant Professor / Head,
Department of Computer Application,
The Kavery Engineering College,
Mecheri, Salem-636453.

SIGNATURE
Mr. S. M. VIVIYAN RICHARDS, MCA,
Assistant Professor,
Department of Computer Application,
The Kavery Engineering College,
Mecheri, Salem-636453.
DECLARATION
SIGNATURE
KEERTHIVASAN K
Place :
Date :
ACKNOWLEDGEMENT
We are deeply indebted and extend our heartfelt thanks to our project
guide Mr. M. VIVIYAN RICHARDS, MCA, Assistant Professor, for his
valuable ideas, encouragement and supportive guidance throughout the
project.
We would like to thank all those who have contributed to the success
of our college project. Your support and guidance have been truly appreciated,
and we could not have completed this project without your help.
ABSTRACT
PROJECT COMPLETION CERTIFICATE
TABLE OF CONTENTS
ABSTRACT VI
1 1.1 INTRODUCTION 1
1.2 OBJECTIVE 3
1.3 OVERVIEW OF THE PROJECT 3
2 SYSTEM ANALYSIS 5
2.1 EXISTING SYSTEM 5
5.1.6 PYTHON FEATURES 17
5.1.7 PYTHON ENVIRONMENT SETUP 17
5.1.8 DOWNLOAD AND INSTALL PYTHON 17
5.1.9 WINDOWS INSTALLATION 18
5.1.10 SETTING UP PATH 18
5.1.11 VERIFY THE INSTALLATION 21
5.1.12 SETTING PATH AT WINDOWS 22
5.1.13 PYTHON ENVIRONMENT VARIABLES 22
5.1.14 INTEGRATED DEVELOPMENT ENVIRONMENT 23
5.1.15 PYTHON BASIC-SYNTAX 23
5.1.16 FIRST PYTHON PROGRAM 23
5.1.17 SETUP VISUAL STUDIO CODE FOR PYTHON 24
5.1.18 SETTING UP VISUAL STUDIO CODE 24
5.2 EXTENSIONS 25
5.2.1 INSTALL PYTHON EXTENSION 26
5.3 INTRODUCTION TO TKINTER 26
5.3.1 BASIC TKINTER WIDGETS 29
5.3.2 TKINTER PROGRAMMING 31
5.3.3 STANDARD ATTRIBUTES 32
5.3.4 GEOMETRY MANAGEMENT 32
6 SYSTEM DESIGN 37
6.1 SYSTEM ARCHITECTURE 37
6.2 DATA FLOW DIAGRAM 37
6.3 USE CASE DIAGRAM 39
6.4 DATASET DESIGN 40
7 SYSTEM DEVELOPMENT 41
7.1 INPUT AND OUTPUT DESIGN 41
7.1.1 INPUT DESIGN 41
7.1.2 OBJECTIVES 41
7.1.3 OUTPUT DESIGN 42
7.2 SYSTEM STUDY 43
7.2.1 FEASIBILITY STUDY 43
7.2.2 ECONOMICAL FEASIBILITY 43
7.2.3 TECHNICAL FEASIBILITY 43
7.2.4 SOCIAL FEASIBILITY 44
7.3 SYSTEM TESTING 44
7.4 TYPES OF TESTS 44
7.4.1 UNIT TESTING 44
7.4.2 INTEGRATION TESTING 45
7.4.3 FUNCTIONAL TESTING 45
7.4.4 SYSTEM TESTING 46
7.4.5 WHITE BOX TESTING 46
7.4.6 BLACK BOX TESTING 46
8 APPENDICES 48
8.1 SCREENSHOTS 48
8.2 SOURCE CODE 54
9 CONCLUSION 74
10 FUTURE WORK 75
11 REFERENCE 76
TABLE OF FIGURES
1 Python-Website 19
2 Install Python 19
3 Complete Install 20
4 Run CMD 21
5 Command Prompt 21
6 Visual Studio Code 25
7 Visual Studio Code to python Extension 26
8 Fundamental Structure of Tkinter program 28
9 Simple Tkinter Windows 32
10 System Architecture 37
11 Video Footage 40
12 Frame Image 40
13 Inside Label Images 40
14 Run Python Main.py 48
15 Main GUI 48
16 Select Video Source 49
17 Video File 49
18 Video Frame Analysis 50
19 Accident Detection 50
20 Accident Detected with Accuracy Prediction 51
21 Send SMS Process 51
22 View Image Frame 52
23 View Image Frame Prediction 52
24 View Inside Label Image 1 53
25 View Inside Label Image 2 53
CHAPTER - 1
1.1 INTRODUCTION
In urban areas, accidents are a common phenomenon. Many of them can be
handled easily, but some occur at night when visibility is low; in such cases it
is difficult for an ambulance driver to identify the accident spot from the phone
calls made by citizens. If the driver knows the precise spot of the accident, the
travel time between the spot and the hospital is significantly reduced. The main
objective of this project is to reduce this time factor in the case of accidents.
There are many cases where an accident occurs at night and the victim is
unconscious, so it can take hours for someone to find out and inform the
authorities. Saving such precious time can save lives. In connection with this
concept, an experimental setup is constructed that can detect accidents
automatically, without any human help.
This project also presents a driver assistance system that handles lane
departure of vehicles and analyses its working and stability with respect to
changes in driver behaviour. The driver assistance system was developed from
the perspective of a co-driver system, which is an automatic system. The vehicle
steering assist controller is designed using a driver model in order to take the
driver's intentions into account, particularly during curve negotiation. This
approach minimizes controller intervention while the driver is awake and
steering properly. Usually, information flows through the interface from human
to machine, but not so often in the reverse direction. In this model, however, the
system has an architecture in which bi-directional information transfer occurs
across the control interface, allowing the human to use the interface to
simultaneously exert control and extract information.
Every year the lives of approximately 1.3 million people are cut short as a
result of a road traffic crash. Between 20 and 50 million more people suffer
non-fatal injuries, with many incurring a disability as a result of their injury.
With the increase in population and in the number of vehicles on the road, it
has become more important than ever to develop effective methods for
detecting accidents and responding to them quickly. The goal of traffic accident
detection is to reduce response time and ensure that medical attention is
provided to those who need it as quickly as possible. There are several methods
used for traffic accident detection. One of the most common is video
surveillance. Video cameras can be placed at intersections or other areas with
high traffic volume to detect accidents. When an accident occurs, the cameras
can send an alert to emergency services or other relevant parties. This method
has proven effective in detecting accidents quickly and accurately. However,
despite the numerous measures being taken to improve road monitoring
technologies, such as CCTV cameras at road intersections and radars on
highways that capture instances of over-speeding cars, many lives are lost due
to the lack of timely accident reports, which delays the medical assistance given
to the victims. Current traffic management technologies rely heavily on human
perception of the captured footage. This takes a substantial amount of effort
from the human operators and does not support real-time feedback to
spontaneous events. Vehicular accident detection has become one of the most
prevalent uses of computer vision to provide first aid on time, without the need
for a human operator to monitor an event. In India, CNN-based accident
detection is gaining popularity due to the increasing number of road accidents.
The technology can be used to monitor high-traffic areas, such as intersections
and highways, and quickly detect any accidents that occur. Emergency services
can then be dispatched to the scene faster, potentially reducing the severity of
injuries and saving lives. Generally, research in the area of accident detection
focuses on computer vision based and sensor based models. There are a few
research publications discussing multimodal accident detection, but these
systems have high computation overhead and their sensors could be damaged
during the accident.
1.2 OBJECTIVE
Vehicular accident detection has become one of the most prevalent uses of
computer vision to provide first aid on time, without the need for a human
operator to monitor an event. In India, CNN-based accident detection is gaining
popularity due to the increasing number of road accidents. The technology can
be used to monitor high-traffic areas, such as intersections and highways, and
quickly detect any accidents that occur. By doing so, emergency services can
be dispatched to the scene faster, potentially reducing the severity of injuries
and saving lives.
This project presents a driver assistance system that handles lane departure
of vehicles and analyses its working and stability with respect to changes in
driver behaviour. The driver assistance system was developed from the
perspective of a co-driver system, which is an automatic system. The vehicle
steering assist controller is designed using a driver model in order to take the
driver's intentions into account, particularly during curve negotiation. This
approach minimizes controller intervention while the driver is awake and
steering properly. Usually, information flows through the interface from human
to machine, but not so often in the reverse direction. In this model, however,
the system has an architecture in which bi-directional information transfer
occurs across the control interface, allowing the human to use the interface to
simultaneously exert control and extract information. In urban areas, accidents
are a common phenomenon. Many of them can be handled easily, but some
occur at night when visibility is low; in such cases it is difficult for an
ambulance driver to identify the accident spot from the phone calls made by
citizens. If the driver knows the precise spot of the accident, the travel time
between the spot and the hospital is significantly reduced.
CHAPTER - 2
SYSTEM ANALYSIS
2.1 EXISTING SYSTEM
DISADVANTAGES:
• This system detects accidents within a very short period of time, basically
within a few seconds, and sends the basic information, including the time
and location of the accident, to the first-aid centre in a message.
• If there is no casualty and assistance is not required, the message-sending
process can be terminated using the switch provided in the device.
• This application provides, in the most feasible way, an optimal solution to
the poor emergency facilities available for road accidents.
2.2 PROPOSED SYSTEM
The main objective of the project is to predict accidents using a CNN. Our
attempt is to develop an accurate and robust system for detecting an accident
and reaching the emergency service. The images are segregated into a training
set and a testing set. The next step is to develop a CNN model with four
activation layers, two dense layers, two convolution (Conv2D) layers and two
max-pooling layers. The developed CNN model is used to classify the input
images as accident or non-accident according to the specified features. With
this, further intimation is provided to the emergency service to reach the site
only if an accident is detected. The intimation involves sending a clipped image
of the accident and the auto-detected location to the nearest emergency service.
Firstly, we tackle the challenge of image processing by converting videos into
individual frames. This facilitates faster processing and enhances accuracy. As
part of preprocessing, we convert these frames into grayscale images and resize
them, ensuring uniformity and ease of analysis.
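As an illustration of the model described above, a minimal Keras sketch is shown below; the 250x250 grayscale input size, filter counts and layer widths are illustrative assumptions rather than values taken from this report.

import tensorflow as tf
from tensorflow.keras import layers, models

# Two Conv2D layers, two max-pooling layers, two dense layers and four
# activation functions in total (relu, relu, relu, sigmoid), as described above.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(250, 250, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid')   # accident (1) vs. non-accident (0)
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])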
ADVANTAGES:
2.3 SYSTEM REQUIREMENTS
CHAPTER - 3
LITERATURE SURVEY
Technological development has increased traffic hazards, and road
accidents claim many lives due to the lack of emergency facilities. Our paper
provides a solution to this problem. Dangerous driving can be detected using an
accelerometer in a car alarm application; it is used as a crash or roll-over
detector for the vehicle during or after an accident. The accelerometer receives
the signal, which is used to recognize a severe accident. In this paper, when a
vehicle meets with an accident or rolls over, the vibration sensor detects the
signal and sends it to an ATMEGA 8A controller. A GSM module sends an alert
message from the microcontroller to the police control room or a rescue team.
The police can then trace the location through GPS after receiving the
information, and after confirming the location the necessary action is taken.
During the accident, if the person is not injured or there is no serious threat to
anyone's life, the alert message can be stopped by the driver using the switch
provided, in order to avoid wasting the rescue team's time. Thus the accident is
detected by means of a vibration sensor.
Authors: C K Gomathy
Road accident rates are very high nowadays, especially for two-wheelers.
Timely medical aid can help in saving lives. This system aims to alert the
nearby medical centre about the accident so that immediate medical aid can be
provided. An accelerometer attached to the vehicle senses the tilt of the vehicle,
and a heartbeat sensor on the user's body senses abnormality of the heartbeat to
gauge the seriousness of the accident. The system then makes the decision and
sends the information to a smartphone connected to the accelerometer through
GSM and GPS modules. The Android application on the mobile phone sends
text messages to the nearest medical centre and to friends. The application also
shares the exact location of the accident, which saves time.
CHAPTER - 4
SYSTEM IMPLEMENTATION
4.1 IMPLEMENTATION
4.2 MODULE
4.3 MODULES:
The dataset is divided into training and testing sets, a process known as data
splitting; an 80/20 split is typical. Model architecture selection: for image
classification tasks, a sequential CNN architecture is employed. Model training:
initialize the selected CNN architecture with random weights and train the
model on the training dataset, using optimization strategies such as
backpropagation and mini-batch gradient descent. To avoid overfitting, monitor
the model's performance on the validation set.
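A short sketch of this split-and-train step is given below; the frames and labels arrays and the model object are assumed to come from the preprocessing and model-building sketches elsewhere in this report.

from sklearn.model_selection import train_test_split

# 80/20 train/test split of the preprocessed frames and their labels
X_train, X_test, y_train, y_test = train_test_split(frames, labels,
                                                    test_size=0.2, random_state=42)

# Mini-batch training; a held-out validation split is monitored to catch overfitting
history = model.fit(X_train, y_train,
                    epochs=20, batch_size=32,
                    validation_split=0.1)
model.evaluate(X_test, y_test)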
This phase is responsible for processing video data within the system. Its
main task is to read the video data and extract individual image frames from the
video. In the context of accident detection, this module plays a crucial role as it
allows the subsequent modules to analyze each frame for the occurrence of an
accident. Video data can be encoded in different formats or configurations, and
for the system to function properly, it requires homogeneous data in a consistent
format and configuration. The colour conversion module addresses this issue by
converting the video data to the RGB format. RGB (Red, Green, and Blue) is a
commonly used colour model in digital imaging where each pixel is represented
by the intensities of these three primary colors.
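A minimal OpenCV sketch of this frame-extraction and conversion step is shown below; the file name, target size and the additional grayscale conversion are assumptions used for illustration.

import cv2

capture = cv2.VideoCapture("accident_clip.mp4")     # assumed input video
frames = []
while True:
    ret, frame = capture.read()
    if not ret:                                     # no more frames in the video
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)    # OpenCV reads BGR; convert to RGB
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # grayscale copy for the classifier
    frames.append(cv2.resize(gray, (250, 250)))     # uniform size for the CNN
capture.release()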
There are various smart pre-trained CNNs with transfer-learning
capability, so only training and testing datasets are required at the input layer.
The architectures of these networks differ in terms of their internal layers and
the techniques used. The proposed model has four convolution layers, each
followed by a max-pooling layer, which is connected to a flattening layer. There
are then two dense layers separated by successive dropouts of 0.5, and finally a
normalisation layer. The use of a CNN for accident detection involves several
steps. First, the algorithm is trained on a large dataset of images that represent
different types of accidents, such as collisions, pedestrians being hit, or vehicles
overturning. The algorithm is then able to recognize these patterns when
presented with real-time footage from CCTV cameras. Each frame of the video
is run through the CNN model, which calculates the probability of an accident
in that frame.
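For example, the per-frame scoring could look like the following sketch, where the frames list and the trained model come from the earlier sketches and the 0.5 threshold is an assumption.

import numpy as np

THRESHOLD = 0.5
for gray in frames:
    x = gray.reshape(1, 250, 250, 1) / 255.0          # add batch dimension and normalise
    probability = float(model.predict(x, verbose=0)[0][0])
    if probability > THRESHOLD:
        print(f"Accident detected with probability {probability:.2f}")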
CHAPTER - 5
SOFTWARE DEVELOPMENT
5.1.1 PYTHON:
Python is a must for students and working professionals who want to become
great software engineers, especially when they are working in the web
development domain. Some of the key advantages of learning Python are listed
below:
5.1.3 APPLICATIONS OF PYTHON:
The latest release of Python is 3.x. As mentioned before, Python is one of the
most widely used languages on the web. A few of its applications are listed
below:
5.1.4 PYTHON – OVERVIEW:
Python was developed by Guido van Rossum in the late eighties and early
nineties at the National Research Institute for Mathematics and Computer
Science in the Netherlands.
Python is copyrighted. Its source code is available under the Python Software
Foundation License, which is GPL-compatible.
5.1.6 PYTHON FEATURES:
Apart from the above-mentioned features, Python has a big list of good features;
a few are listed below.
Before you start the Python installation, first verify whether Python is already
installed on your machine. Nowadays, most devices come with Python
preinstalled.
5.1.9 WINDOWS INSTALLATION:
• Follow the link for the Windows installer python-XYZ.msi file where
XYZ is the version you need to install.
• Run the downloaded file. This brings up the Python install wizard, which
is really easy to use. Just accept the default settings, wait until the install
is finished, and you are done.
The path variable is named as PATH in Unix or Path in Windows (Unix is case
sensitive; Windows is not).
Fig.1 Python-Website
When you click on the Download Python 3.8.3 button, it downloads the
python-3.8.3.exe file for the 32-bit version. If you want the 64-bit version, visit
the Python for Windows downloads page and download the appropriate 64-bit
installer.
If you choose the Install Now option, Python is installed in the default location
(C:\Users\{UserName}\AppData\Local\Programs\Python\Python38) with the
default settings. If you want to customize the installation folder location and
features, choose the Customize installation option.
Select the Add Python 3.8 to PATH option so that you can execute Python from
any path.
After completing the python installation, you will see the success message
window like as shown below, and click on the Close button to close the setup
wizard.
5.1.11 VERIFY THE INSTALLATION:
To verify the installation, open the Run window, type cmd and press Enter; at
the command prompt, python --version confirms that Python is installed.
To add the Python directory to the path for a particular session in Windows,
issue a command of the form
set path=%path%;C:\Users\{UserName}\AppData\Local\Programs\Python\Python38
at the command prompt. Python also recognizes the following environment
variables:
1 PYTHONPATH
It has a role similar to PATH. This variable tells the Python interpreter
where to locate the module files imported into a program. It should
include the Python source library directory and the directories containing
Python source code. PYTHONPATH is sometimes preset by the Python
installer.
2 PYTHONSTARTUP
It contains the path of an initialization file containing Python source code,
which is executed every time you start the interpreter.
3 PYTHONCASEOK
It is used on Windows to instruct Python to find the first case-insensitive
match in an import statement.
4 PYTHONHOME
It is an alternative module search path, usually embedded in the
PYTHONSTARTUP or PYTHONPATH directories to make switching module
libraries easy.
You can run Python from a Graphical User Interface (GUI) environment as
well, if you have a GUI application on your system that supports Python.
If you are not able to set up the environment properly, then you can take help
from your system admin. Make sure the Python environment is properly set up
and working perfectly fine.
The Python syntax defines a set of rules that are used to create Python
statements while writing a Python Program. The Python Programming
Language Syntax has many similarities to Perl, C, and Java Programming
Languages. However, there are some definite differences between the
languages.
Python - Interactive Mode Programming
You can invoke the Python interpreter from the command line by typing python
at the command prompt. If you are running an older version of Python, such as
Python 2.4.x, you need to use the print statement without parentheses, as in
print "Hello, World!". In Python version 3.x, however, this produces the
following result:
Hello, World!
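For reference, the Python 3 form of this first program is a single statement:

print("Hello, World!")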
Visual Studio Code is a lightweight source code editor, often called VS Code.
VS Code runs on your desktop and is available for Windows, macOS, and
Linux. It comes with many features, such as IntelliSense, code editing, and
extensions, that allow you to edit Python source code effectively. The best part
is that VS Code is open source and free. Besides the desktop version, VS Code
also has a browser version that you can use directly in your web browser
without installing anything.
Fig.6 Visual Studio Code
Visual Studio Code is a lightweight but powerful source code editor which runs
on your desktop and is available for Windows, macOS and Linux. It comes with
built-in support for JavaScript, TypeScript and Node.js and has a rich ecosystem
of extensions for other languages and runtimes (such as C++, C#, Java, Python,
PHP, Go, and .NET).
5.2 EXTENSIONS:
5.2.1 INSTALL PYTHON EXTENSION:
To make VS Code work with Python, you need to install the Python extension
from the Visual Studio Marketplace.
The following picture illustrates the steps:
• First, click the Extensions tab and second, type the python keyword on
the search input.
• Third, click the Python extension. It’ll show detailed information on the
right pane.
• Finally, click the Install button to install the Python extension.
A Graphical User Interface (GUI) is a form of user interface that allows users
to interact with computers through visual indicators such as icons, menus, and
windows. It has advantages over the Command Line Interface (CLI), where
users interact with computers by typing commands on the keyboard only and
whose usage is more difficult than a GUI.
These days, most computer applications provide a graphical user
interface (GUI) thanks to high-speed processors and powerful graphics
hardware. These applications can receive inputs through mouse clicks and can
enable the user to choose from alternatives with the help of radio buttons,
dropdown lists, and other GUI elements (or widgets).
What is Tkinter?
Tkinter is the built-in Python module used to create GUI applications. It
is one of the most commonly used modules for creating GUI applications in
Python, as it is simple and easy to work with. You do not need to install the
Tkinter module separately, as it comes with Python already. It gives an
object-oriented interface to the Tk GUI toolkit.
Some other Python Libraries available for creating our own GUI applications
are,
• Kivy
• Python Qt
• wxPython
2. Building a GUI for a desktop application: Tkinter can be used to create the
interface for a desktop application, including buttons, menus, and other
interactive elements.
3. Adding a GUI to a command-line program: Tkinter can be used to add a
GUI to a command-line program, making it easier for users to interact with
the program and input arguments.
4. Creating custom widgets: Tkinter includes a variety of built-in widgets,
such as buttons, labels, and text boxes, but it also allows you to create your
own custom widgets.
5. Prototyping a GUI: Tkinter can be used to quickly prototype a GUI,
allowing you to test and iterate on different design ideas before committing
to a final implementation.
5.3.1 BASIC TKINTER WIDGETS:
1 Button
The Button widget is used to display buttons in your application.
2 Canvas
The Canvas widget is used to draw shapes, such as lines, ovals, polygons
and rectangles, in your application.
3 Checkbutton
The Checkbutton widget is used to display a number of options as
checkboxes. The user can select multiple options at a time.
4 Entry
The Entry widget is used to display a single-line text field for accepting
values from a user.
5 Frame
The Frame widget is used as a container widget to organize other widgets.
6 Label
The Label widget is used to provide a single-line caption for other widgets.
It can also contain images.
7 Listbox
The Listbox widget is used to provide a list of options to a user.
8 Menubutton
The Menubutton widget is used to display menus in your application.
9 Menu
The Menu widget is used to provide various commands to a user. These
commands are contained inside Menubutton.
10 Message
The Message widget is used to display multiline text fields for accepting
values from a user.
11 Radiobutton
The Radiobutton widget is used to display a number of options as radio
buttons. The user can select only one option at a time.
12 Scale
The Scale widget is used to provide a slider widget.
13 Scrollbar
The Scrollbar widget is used to add scrolling capability to various widgets,
such as list boxes.
14 Text
The Text widget is used to display text in multiple lines.
15 Toplevel
The Toplevel widget is used to provide a separate window container.
16 Spinbox
The Spinbox widget is a variant of the standard Tkinter Entry widget, which
can be used to select from a fixed number of values.
17 PanedWindow
A PanedWindow is a container widget that may contain any number of
panes, arranged horizontally or vertically.
18 LabelFrame
A labelframe is a simple container widget. Its primary purpose is to act as a
spacer or container for complex window layouts.
Tkinter is the standard GUI library for Python. Python when combined with
Tkinter provides a fast and easy way to create GUI applications. Tkinter
provides a powerful object-oriented interface to the Tk GUI toolkit.
Creating a GUI application using Tkinter is an easy task. All you need to do is
perform the following steps:
• Import the tkinter module.
• Create the GUI application's main window.
• Add one or more widgets to the application.
• Enter the main event loop so the application can respond to user events.
EXAMPLE:
import tkinter
top = tkinter.Tk()
# Code to add widgets will go here...
top.mainloop()
This would create the following window.
Let us take a look at how some of their common attributes, such as sizes, colors
and fonts, are specified:
• Dimensions
• Colors
• Fonts
• Anchors
• Relief styles
• Bitmaps
• Cursors
• The pack() Method − This geometry manager organizes widgets in
blocks before placing them in the parent widget.
• The grid() Method − This geometry manager organizes widgets in a
table-like structure in the parent widget.
• The place() Method − This geometry manager organizes widgets by
placing them in a specific position in the parent widget.
The Place geometry manager is the simplest of the three general geometry
managers provided in Tkinter. It allows you to explicitly set the position and
size of a window, either in absolute terms or relative to another window. You
can access the place manager through the place() method, which is available for
all standard widgets. It is usually not a good idea to use place() for ordinary
window and dialog layouts; it is simply too much work to get things working as
they should. Use the pack() or grid() managers for such purposes.
Syntax: widget.place(place_options), for example widget.place(x=50, y=20).
The Grid geometry manager puts the widgets in a 2-dimensional table. The
master widget is split into a number of rows and columns, and each “cell” in
the resulting table can hold a widget. The grid manager is the most flexible of
the geometry managers in Tkinter. If you don’t want to learn how and when to
use all three managers, you should at least make sure to learn this one.
Consider the following example-
Creating this layout using the pack manager is possible, but it takes a number
of extra frame widgets, and a lot of work to make things look good. If you use
the grid manager instead, you only need one call per widget to get everything
laid out properly. Using the grid manager is easy. Just create the widgets, and
use the grid method to tell the manager in which row and column to place
them. You don’t have to specify the size of the grid beforehand; the manager
automatically determines that from the widgets in it.
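As a small illustration, the two-row form below is laid out with grid(); the label texts are assumptions.

import tkinter as tk

root = tk.Tk()
# Each widget names its own row and column; the grid size is inferred automatically.
tk.Label(root, text="Name").grid(row=0, column=0)
tk.Entry(root).grid(row=0, column=1)
tk.Label(root, text="Email").grid(row=1, column=0)
tk.Entry(root).grid(row=1, column=1)
root.mainloop()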
The Pack geometry manager packs widgets relative to the earlier widget.
Tkinter literally packs the entire widgets one after the other in a window. We
can use options like fill, expand, and side to control this geometry manager.
Compared to the grid manager, the pack manager is somewhat limited, but
it is much easier to use in a few quite common situations (a short sketch
follows the list below):
• Put a widget inside a frame (or any other container widget), and have it
fill the entire frame
• Place a number of widgets on top of each other
• Place a number of widgets side by side
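A minimal pack() sketch covering these cases; the widget texts are assumptions.

import tkinter as tk

root = tk.Tk()
# A frame that fills the whole window ...
frame = tk.Frame(root, bg="lightgrey")
frame.pack(fill='both', expand=True)
# ... two buttons stacked on top of each other ...
tk.Button(frame, text="Top").pack(side='top')
tk.Button(frame, text="Bottom").pack(side='top')
# ... and two buttons placed side by side.
tk.Button(frame, text="Left").pack(side='left')
tk.Button(frame, text="Right").pack(side='left')
root.mainloop()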
Using events and binding functions to them, one can make a GUI more
attractive and user-friendly in terms of both looks and functionality.
We can bind Python functions and methods to an event, and we can bind these
functions to any particular widget.
We can bind Python’s Functions and methods to an event as well as we can
bind these functions to any particular widget.
What is bind?
The basic definition of the word bind is to stick together, or cause to stick
together, in a single mass. Similarly, Tkinter's bind is used to connect an event
on a widget with an event handler; the event handler is the function that is
invoked when the event takes place.
1. Instance-level binding
One can bind an event to one specific widget. To bind an event of a widget,
call the .bind() method on that widget: widget.bind(event, event_handler).
(See the sketch after this list.)
• Bind – configuring an event handler (python function) that is called
when an event occurs to a widget.
2. Class-level binding
One can bind an event to all widgets of a class. For example, you might set up
all Button widgets to respond to middle mouse button clicks by changing back
and forth between English and Japanese labels. bind_class is a method
available to all widgets and simply calls the Tk bind command again, however
not with the instance name, but the widget class name.
3. Application-level binding
One can set up a binding so that a certain event calls a handler no matter what
widget has the focus or is under the mouse.
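A minimal sketch of instance-level binding; the label text and handler name are assumptions.

import tkinter as tk

root = tk.Tk()
label = tk.Label(root, text="Click me")
label.pack(padx=20, pady=20)

def on_click(event):
    # event.x and event.y give the click position inside the widget
    label.config(text=f"Clicked at ({event.x}, {event.y})")

label.bind("<Button-1>", on_click)   # <Button-1> is the left mouse button
root.mainloop()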
CHAPTER - 6
SYSTEM DESIGN
3. DFD shows how the information moves through the system and how it is
modified by a series of transformations. It is a graphical technique that
depicts information flow and the transformations that are applied as data
moves from input to output.
4. DFD is also known as bubble chart. A DFD may be used to represent a
system at any level of abstraction. DFD may be partitioned into levels
that represent increasing information flow and functional detail.
Data flow: User → Live Video/CCTV → Video Processing → YOLO Video
Frames (CNN) → Accident Detection → Alert Notification
6.3 USE CASE DIAGRAM:
Use case diagram: the User provides Live Video/CCTV input; the
Spatio-Temporal Model extracts features; the CNN-YOLO model processes the
video frames and performs Accident Detection.
6.4 DATASET DESIGN
Video Footage:
CHAPTER - 7
SYSTEM DEVELOPMENT
7.1 INPUT AND OUTPUT DESIGN
7.1.1 INPUT DESIGN
The input design is the link between the information system and the user. It
comprises developing the specifications and procedures for data preparation,
and the steps necessary to put transaction data into a usable form for
processing. This can be achieved by having the computer read data from a
written or printed document, or by having people key the data directly into the
system. The design of input focuses on controlling the amount of input required,
controlling errors, avoiding delay, avoiding extra steps and keeping the process
simple. The input is designed in such a way that it provides security and ease of
use while retaining privacy. Input design considered the following things:
7.1.2 OBJECTIVES
3. When the data is entered, it is checked for validity. Data can be entered with
the help of screens. Appropriate messages are provided as and when needed, so
that the user is never left in a maze of confusion. Thus the objective of input
design is to create an input layout that is easy to follow.
7.1.3 OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and
presents the information clearly. In any system, the results of processing are
communicated to the users and to other systems through outputs. In output
design it is determined how the information is to be displayed for immediate
need, as well as the hard-copy output. It is the most important and direct source
of information to the user. Efficient and intelligent output design improves the
system's relationship with the user and helps in decision-making.
The output form of an information system should accomplish one or more of the
following objectives.
• Convey information about past activities, current status or projections of
the Future.
• Signal important events, opportunities, problems, or warnings.
• Trigger an action.
• Confirm an action.
7.2 SYSTEM STUDY
7.2.1 FEASIBILITY STUDY:-
7.2.2 ECONOMICAL FEASIBILITY:-
This study is carried out to check the economic impact that the system
will have on the organization. The amount of funds that the company can pour
into the research and development of the system is limited, so the expenditures
must be justified. The developed system is well within the budget, and this was
achieved because most of the technologies used are freely available; only the
customized products had to be purchased.
7.2.3 TECHNICAL FEASIBILITY:-
This study is carried out to check the technical feasibility, that is, the
technical requirements of the system. Any system developed must not place a
high demand on the available technical resources, as this would in turn place
high demands on the client. The developed system must have modest
requirements, as only minimal or no changes are required for implementing this
system.
7.2.4 SOCIAL FEASIBILITY:-
7.4.1 UNIT TESTING:-
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly and that program inputs produce valid
outputs. All decision branches and internal code flow should be validated. It is
the testing of individual software units of the application; it is done after the
completion of an individual unit and before integration. This is structural
testing that relies on knowledge of the unit's construction and is invasive. Unit
tests perform basic tests at the component level and test a specific business
process, application, and/or system configuration. Unit tests ensure that each
unique path of a business process performs accurately to the documented
specifications and contains clearly defined inputs and expected results.
processes must be considered for testing. Before functional testing is complete,
additional tests are identified and the effective value of current tests is determined.
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.
White box testing is a testing in which the software tester has knowledge
of the inner workings, structure and language of the software, or at least its
purpose. It is used to test areas that cannot be reached from a black-box level.
Black box testing is testing the software without any knowledge of the
inner workings, structure or language of the module being tested. Black box
tests, like most other kinds of tests, must be written from a definitive source
document, such as a specification or requirements document. It is a testing in
which the software under test is treated as a black box: you cannot "see" into it.
The test provides inputs and responds to outputs without considering how the
software works.
Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test
phase of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.
Test Strategy and Approach
Field testing will be performed manually and functional tests will be written
in detail.
Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.
Features to be tested
• Verify that the entries are of the correct format
• No duplicate entries should be allowed
• All links should take the user to the correct page.
Integration Testing:
Software integration testing is the incremental integration testing of two or
more integrated software components on a single platform to produce failures
caused by interface defects. The task of the integration test is to check that
components or software applications, e.g. components in a software system or -
one step up - software applications at the company level - interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
Acceptance Testing:
User Acceptance Testing is a critical phase of any project and requires
significant participation by the end user. It also ensures that the system meets the
functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
CHAPTER - 8
APPENDICES
8.1 SCREENSHOT
Main GUI:
Select Video Source:
Video Frame Analysis:
Accident Detection:
Find Accident Detected with Accuracy Prediction:
View Image Frame:
View Inside Label Image 1:
8.2 SOURCE CODE
Vcd_ui.py
import tkinter as tk
from tkinter import PhotoImage, filedialog
import ctypes

from PIL import Image, ImageTk

import vehicle_crash_detection

# Make the UI sharp on high-DPI Windows displays
ctypes.windll.shcore.SetProcessDpiAwareness(1)


class VcdUI:
    def __init__(self, root):
        self.root = root            # keep a reference to the main window
        self.root.state('zoomed')
        self.root.config(bg="#277a36")

        # Icon shown in the title bar
        self.title_bar_icon = PhotoImage(file="resources/icon/vehicle_crash_black.png")
        self.root.iconphoto(False, self.title_bar_icon)

        # White and black sidebar icons
        self.icon_white = Image.open("resources/icon/vehicle_crash_white.png").resize((60, 60))
        self.icon_white = ImageTk.PhotoImage(self.icon_white)
        self.icon_black = Image.open("resources/icon/vehicle_crash_black._32.png").resize((60, 60))
        self.icon_black = ImageTk.PhotoImage(self.icon_black)
        self.title_label.image = self.title_bar_icon
        # ... (title label, sidebar and content widget definitions not shown in this listing)
            fg="white", )
        self.sidebar.pack(side='left', fill='y')
        self.border_frame.pack(side='right', fill='y')

        # Create a Label widget to display the image
        self.sidebar_icon_label.pack(side='top', pady=10)
        self.sidebar_button1.pack()
        self.sidebar_button2 = tk.Button(self.sidebar,
                                         text='Records', command=self.open_image_viewer,
                                         width=25, height=2, fg="white", bg="#000000",
        self.combo_box.bind("<<ComboboxSelected>>", self.handle_combobox)
        self.var = tk.BooleanVar()

        # Detection back-end; it updates the status label, the content area and
        # the toggle button created above
        self.vc = vehicle_crash_detection.VehicleCrash(self.detections_update_label,
                                                       self.content, self.button1)
        self.vc.load_model()
    # Function to open a file as the video source
    def open_file(self):
        global source
        file_path = filedialog.askopenfilename()
        source = str(file_path)
        self.vc.set_source(source)
        return source

    # Use the default camera (device index 0) as the video source
    def open_camera(self):
        global source
        source = 0
        self.vc.set_source(source)
        return source

    # Switch between the file and camera sources based on the combobox value
    def handle_combobox(self, event):
        value = event.widget.get()
        if value == "Video File":   # assumed option label; the original branch is not shown
            self.open_file()
        else:
            self.open_camera()

    # Remove every widget currently shown in the content area
    def clear_frame(self):
        for widget in self.content.winfo_children():   # loop header assumed; only its body is shown
            widget.destroy()

    # Toggle detection on/off
    def toggle(self):
        self.var.set(not self.var.get())
        # Set the button text to "On" or "Off" depending on the state of the variable
        if self.var.get():
            self.button1.config(text="Detection \nON")
            self.vc.run_detection()
        else:
            self.button1.config(text="Detection \nOFF")
            self.vc.stop_detection()
            self.detections_update_label.configure(text="")
            self.clear_frame()

    def open_image_viewer(self):
        self.root.withdraw()  # hide the current window
        image_viewer_window = tk.Toplevel(self.root)
        image_viewer_window.title("Image Viewer")
        image_viewer_instance = ImageViewer(image_viewer_window)  # ImageViewer: records browser (defined elsewhere)


if __name__ == '__main__':
    root = tk.Tk()
    app = VcdUI(root)
    root.mainloop()
vehicle_crash_detection.py:
import threading
import time
import datetime
import functools

import PIL
import PIL.Image
import PIL.ImageTk
import cv2
import numpy as np
import tensorflow as tf
import tkinter as tk
from tkinter.ttk import Style

# Assumed import: label_map_util comes from the TensorFlow Object Detection API
# and provides create_category_index_from_labelmap(), used below.
from object_detection.utils import label_map_util

import email_alert
import sms_alert


''' This class represents the vehicle crash detector functionalities; it is called
when using the VcdUI class '''


class VehicleCrash:
    # Constructor signature inferred from the call in Vcd_ui.py
    def __init__(self, detections_update_label, content, button1):
        self.detections_update_label = detections_update_label
        self.content = content
        self.source = None
        self.running = False
        self.button1 = button1
        self.count = 0      # number of consecutive frames flagged as an accident
        self.i = 0          # running index used in the saved file names

    # Set the video source (a file path or the camera index 0); header assumed
    def set_source(self, source):
        self.source = source

    # Exported detection model and its label map
    PATH_TO_SAVED_MODEL = "inference_graph\\saved_model"
    category_index = label_map_util.create_category_index_from_labelmap("label_map.pbtxt",
                                                                        use_display_name=True)
    # Per-detection frame handling; the enclosing method definition and the lines
    # computing label_box_image, image_size and sharpened_image are not shown.
        (h, w, d) = image.shape
        current_datetime = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        self.count += 1
        print(self.count)
        # After 5 consecutive positive frames, save the full frame to disk
        if (self.count == 5):
            cv2.imwrite("outputs/frame_img/vcd_frame" +
                        str(current_datetime) + str(self.i) + ".jpg", image)
        # Resize and sharpen the region inside the detected label box, then save it
        resized_image = cv2.resize(label_box_image, image_size)
        kernel = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]])  # 3x3 sharpening filter
        png_quality = 100
        cv2.imwrite("outputs/inside_label_img/vcd_inlabel" +
                    str(current_datetime) + str(self.i) + ".png",
                    sharpened_image,
                    [int(cv2.IMWRITE_JPEG_QUALITY), png_quality])
        # After 20 consecutive positive frames, raise the alert in a background thread
        if (self.count == 20):
            print("Vehicle_Accident_Detected")
            perform_label_detected_func = threading.Thread(target=self.perform_label_detected)
            perform_label_detected_func.start()
            self.i += 1
            self.count = 0
            break
        return image
    # update_progress() is used to update the progress bar value while the model is
    # loading (method header inferred from the calls in load_model below)
    def update_progress(self, progress, value):
        progress['value'] = value
        progress.update()

    # Send the e-mail and SMS alerts once an accident has been confirmed
    def perform_label_detected(self):
        em = email_alert.Email(self.source)
        em.run_mail()
        self.detections_update_label.configure(
        time.sleep(0.5)
        sm = sms_alert.Sms(self.source)
        sm.run_sms()
        self.detections_update_label.configure(
        time.sleep(0.5)
        self.detections_update_label.configure(
        time.sleep(0.5)
        self.detections_update_label.configure(text="")

    detect_fn = ""

    # Load the saved detection model once and cache the result
    @functools.lru_cache(maxsize=None)
    def load_model(self):
        style = Style()
        style.theme_use('alt')
        # windows themes: ('winnative','clam','alt','default','classic','vista','xpnative')
        style.configure("Horizontal.TProgressbar", troughcolor='white',
                        background='black', thickness=30)
        # ... (progress bar construction not fully shown)
                        mode='determinate')
        self.detections_update_label.configure(text="Loading 0%")
        self.update_progress(progress, 0)
        time.sleep(0.1)
        self.update_progress(progress, 10)
        self.detections_update_label.configure(text="Loading .10%")
        time.sleep(0.1)
        global detect_fn
        detect_fn = tf.saved_model.load(self.PATH_TO_SAVED_MODEL)
        self.detections_update_label.configure(text="Loading ....50%")
        self.update_progress(progress, 50)
        time.sleep(1.5)
        print("Model Loaded!")
        self.detections_update_label.configure(text="Loading .........100%")
        self.update_progress(progress, 100)
        time.sleep(1.5)
        self.detections_update_label.configure(text="")
        time.sleep(0.1)
        progress.destroy()
        return detect_fn
    # Close the video canvas and refresh the content area (method header assumed)
    def close_canvas(self, canvas):
        canvas.destroy()
        self.content.update()

    # It is used to detect the occurrence of a vehicle crash from the given video source
    def run_detection(self):
        self.running = True
        while self.running:
            video_capture = cv2.VideoCapture(self.source)
            start_time = time.time()
            frame_width = int(video_capture.get(3))
            frame_height = int(video_capture.get(4))
            # fps = int(video_capture.get(5))

            # Write the annotated detection video to disk at 15 FPS
            result = cv2.VideoWriter('outputs/detection_video/det_vid.mp4',
                                     cv2.VideoWriter_fourcc('m', 'p', '4', 'v'),
                                     15,
                                     size)
            while True:
                if not ret:
                    # End of the video source: reset the UI and stop detecting
                    self.close_canvas(canvas)
                    self.stop_detection()
                    self.button1.config(text="Detection \nOFF")
                    self.detections_update_label.configure(text="")
                    break
                frame = cv2.flip(frame, 1)

                # Run the saved model on the current frame
                detections = detect_fn(input_tensor)
                max_detections = 1
                # Convert to numpy arrays, and take index [0] to remove the batch dimension.
                labels = detections['detection_classes'][0, :max_detections].numpy().astype(np.int64)

                # Display detections
                image = PIL.Image.fromarray(frame)
                photo = PIL.ImageTk.PhotoImage(image)
                # perform_label_detected(labels, scores, score_thresh)
                end_time = time.time()
                start_time = end_time
                canvas.create_text(50, video_capture.get(cv2.CAP_PROP_FRAME_HEIGHT) + 25, text=f"FPS: {fps}",
                canvas.update()
                result.write(frame)
            video_capture.release()

    # Stop the detection loop and reset the positive-frame counter
    def stop_detection(self):
        self.running = False
        self.count = 0
        print("Detection Stopped")
CHAPTER - 9
CONCLUSION
CHAPTER - 10
FUTURE WORK
CHAPTER - 11
REFERENCE
[2] Md. S. Amin, J. Jalil, and M. B. I. Reaz, "Accident detection and reporting
system using GPS, GPRS and GSM technology," in Proc. IEEE International
Conference on Informatics, Electronics & Vision (ICIEV), pp. 640–643, 2022.
[4] Y.-K. Ki and D.-Y. Lee, "A traffic accident recording and reporting model
at intersections," IEEE Trans. on Intelligent Transportation Systems, vol. 8, no.
2, pp. 188–194, 2020.
[5] Chris Thompson, Jules White, Brian Dougherty, Adam Albright, and
Douglas C. Schmidt, "Using Smartphones to Detect Car Accidents and Provide
Situational Awareness to Emergency Responders," Institute for Computer
Sciences, Social Informatics and Telecommunications Engineering, 2021.
[7] W. Wei and F. Hanbo, "Traffic accident automatic detection and remote
alarm device," in Proc. of International Conference on Electric Information and
Control Engineering (ICEICE), pp. 910–913, 2023.
[8] M. Fogue, P. Garrido, F. J. Martinez, J.-C. Cano, C. T. Calafate, and
P. Manzoni, "Automatic accident detection: Assistance through communication
technologies and vehicles," IEEE Vehicular Technology Magazine, vol. 7, no. 3,
pp. 90–100, 2019.
[13] A. App and P. LLC, "Auto Accident App in the App Store," App Store,
2019. [Online]. Available: https://itunes.apple.com/ca/app/auto-accident-
app/id515255099?l=fr.
[14] "Auto Accident App - Murphy Battista LLP," Murphy Battista LLP, 2020.
[Online]. Available: http://www.murphybattista.com/autoaccident-app.
[16] Alexandra Fanca and Honoriu Valean, "Accident Reporting and Guidance
System with automatic detection of the accident," 20th International Conference
on System Theory, Control and Computing (ICSTCC), October 13–15, Sinaia,
Romania, IEEE, 2022.
[17] M. Bhokare, S. Kaulkar, A. Khinvasara, A. Agrawal, and Y. K. Sharma,
"An Algorithmic Approach for Detecting Car Accidents using Smartphone,"
International Journal of Research in Advent Technology, vol. 2, no. 4, April
2023, pp. 151–154, E-ISSN: 2321-9637.
[19] Elie Nasr, Elie Kfoury and David Khoury, "An IoT approach to vehicle
accident detection, reporting and navigation," International Multidisciplinary
Conference on Engineering Technology (IMCET), IEEE, 2020.