A Project Presentation On Real Time Object Detection in Autonomous Driving

The document is a project presentation on real-time object detection in autonomous driving. It was presented by 3 students and guided by their professor. The presentation includes an introduction to object detection, the problem statement of detecting and tracking generic objects in real-time, the proposed work which involves extracting frames from video and applying techniques like background subtraction, segmentation, and classification to detect objects. The implementation steps involve filtering frames, extracting foreground using background subtraction, running the images through a convolutional neural network to detect blobs and features, and saving output frames. The result was object recognition on sample frames from the video.


A Project Presentation
on
Real Time Object Detection in Autonomous Driving
Presented By
Baliram Kumar Gupta (1509710037)
Deepanker Gupta (1509710043)
Madhur Gupta (1509710061)

Guided By
Mr. Ajeet Kumar Bharti
Assistant Professor, CSE Department

Department of Computer Science & Engineering


Galgotias College of Engineering & Technology, Greater Noida
Dr. A.P.J. Abdul Kalam Technical University, Lucknow, Uttar Pradesh
Outline
• Introduction

• Background

• Problem Statement

• Proposed Work

• System Design

• Implementation

• Result

• Conclusion

• References
Introduction

• Object Detection is the process of finding instances of real-world objects, such as faces, bicycles, and buildings, in images or videos.

• Object detection algorithms typically use extracted features and learning algorithms to recognize instances of an object category.

• It is commonly used in applications such as image retrieval, security, surveillance, and advanced driver assistance systems (ADAS).

• We can detect objects using a variety of models, including:

I. Deep learning object detection

II. Feature-based object detection


• In this project, we focus on moving object detection and classification using a camera placed outside a car.

• In many autonomous driving systems, object detection is itself one of the most important prerequisites to autonomous navigation, as this task is what allows the car controller to account for obstacles when considering possible future trajectories.
Problem Statement

The objective is to detect and track generic objects in real time. In real life, we require rich information about the surroundings: we need to understand how objects are moving with respect to the camera, and it also helps to recognize the interactions between objects. For example, in the case of a self-driving car, knowledge of the interactions between pedestrians helps predict pedestrian behaviour accurately. This prediction eventually helps the self-driving car make intelligent choices on a crowded road.
Background
• Traditionally, computer vision approaches primarily used Joint Probabilistic Data Association (JPDA) filters and Multiple Hypothesis Tracking (MHT). Most of these approaches are not suitable for real-time applications such as autonomous navigation, as the related problems are intractable.

• Additional information about individual objects can be obtained by incorporating motion models to predict the potential position of an object in future frames, an interaction model to understand interactions between multiple objects in a frame, and occlusion handling to track objects when they are occluded by other objects in a frame.
Challenges During Object Detection
• Illumination Variation
Changes due to motion of the light source, different times of day, reflection from bright surfaces, weather in outdoor scenes, and partial or complete blockage of the light source by other objects.

• Moving Object Appearance Changes
As the object moves in 3D space, its 2D projection changes, which can yield misleading data.

• Occlusion
The object may be occluded by other objects in the scene. In this case, some parts of the object may be camouflaged or simply hidden behind other objects.

• Complex Background
The background may be highly textured or dynamic: some regions of the background may contain movement, which makes background subtraction difficult.
• Shadow
A dynamic shadow, cast by a moving object, critically affects accurate detection of the moving object, since it has the same motion properties as the object and is tightly connected to it.
• Problems Related to the Camera
Many factors related to the video acquisition system, such as the acquisition method, compression technique, and stability of the camera (or sensor), can directly affect the quality of a video sequence. Noise is another factor that can severely degrade the quality of image sequences.
• Non-rigid Object Deformation
In some cases, different parts of a moving object may have different movements in terms of speed and orientation, for instance a walking dog wagging its tail or a moving tank rotating its turret. When dealing with such moving objects, most algorithms detect the different parts as different moving objects.
Background Subtraction

• As the name suggests, background subtraction is the process of separating out foreground objects from the background in a sequence of video frames.

• Background subtraction is a widely used approach for detecting moving objects from static cameras.

• The fundamental logic is to detect moving objects from the difference between the current frame and a reference frame, called the "background image"; this method is known as the FRAME DIFFERENCE METHOD.
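As a sketch, the frame difference method can be written in a few lines of NumPy; the threshold value `th=25` is an illustrative choice, not one fixed by the slides.

```python
import numpy as np

def frame_difference(prev_frame, curr_frame, th=25):
    """Flag pixels where |frame_i - frame_(i-1)| exceeds the threshold Th."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > th).astype(np.uint8) * 255

# Two synthetic 4x4 greyscale frames: a bright pixel "moves" one step right.
prev = np.zeros((4, 4), dtype=np.uint8)
prev[1, 1] = 200
curr = np.zeros((4, 4), dtype=np.uint8)
curr[1, 2] = 200
mask = frame_difference(prev, curr)
# Both the vacated and the newly occupied pixel register as motion.
```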
Background Subtraction Techniques

• Running Gaussian Average

• Histograms

• Mixture of Gaussians

• Kernel Density Estimators

• Mean Shift Based Estimation


Basic Method of Background Subtraction

• Frame Difference:
|frame_i − frame_(i−1)| > Th

• Very sensitive to the threshold Th

Drawbacks of the Basic Model

• It does not provide an explicit method to choose the threshold.

• Major: based on a single value, it cannot cope with multimodal background distributions.
Running Gaussian Average

• Fit one Gaussian distribution (µ, σ) over the histogram of pixel values.

• At test time, flag foreground if |F − µ| > Th, where Th can be chosen as kσ.

• It does not cope with multimodal backgrounds.
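A minimal per-pixel sketch of the running Gaussian average; the learning rate `alpha` and the factor `k` below are illustrative parameters, not values given in the slides.

```python
import numpy as np

def update_gaussian(mu, var, frame, alpha=0.05):
    """Per-pixel running average of mean and variance (one Gaussian per pixel)."""
    f = frame.astype(np.float64)
    mu = alpha * f + (1 - alpha) * mu
    var = alpha * (f - mu) ** 2 + (1 - alpha) * var
    return mu, var

def foreground_mask(mu, var, frame, k=2.5):
    """Foreground where |F - mu| > k * sigma (the slide's Th = k*sigma test)."""
    return np.abs(frame.astype(np.float64) - mu) > k * np.sqrt(var)

# Learn a static, noisy background, then show a bright blob.
rng = np.random.default_rng(0)
mu = np.full((8, 8), 100.0)
var = np.full((8, 8), 4.0)
for _ in range(50):
    mu, var = update_gaussian(mu, var, 100 + rng.normal(0, 2, (8, 8)))

test_frame = np.full((8, 8), 100.0)
test_frame[3:5, 3:5] = 200.0          # the moving object
mask = foreground_mask(mu, var, test_frame)
```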
Mean-Shift Based Estimation

• A gradient-ascent method able to detect the modes of a multimodal distribution together with their covariance matrices.

• Iterative; the step size decreases towards convergence.

• The mean shift vector:
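The formula image on this slide did not survive extraction; for reference, the standard mean shift vector for a kernel with profile $g$ and bandwidth $h$ over points $x_i$ is:

$$ m(x) = \frac{\sum_{i=1}^{n} x_i \, g\!\left(\left\lVert \tfrac{x - x_i}{h} \right\rVert^2\right)}{\sum_{i=1}^{n} g\!\left(\left\lVert \tfrac{x - x_i}{h} \right\rVert^2\right)} - x $$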
Comparison of Various Background Subtraction Techniques
Background Subtraction
Original frame:

Result of Background Subtractor MOG:


Result of Background Subtractor GMG:
Types of Segmentation

1. Thresholding Method
• Select an initial threshold value, typically the mean 8-bit value of the original image.
• Divide the original image into two portions:
pixel values less than or equal to the threshold (background);
pixel values greater than the threshold (foreground).
• Find the mean values of the two new images.
• Calculate the new threshold by averaging the two means.
• If the difference between the previous threshold value and the new threshold value is below a specified limit, you are finished; otherwise, apply the new threshold to the original image and repeat.
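The iterative procedure above (essentially the classic ISODATA thresholding scheme) can be sketched as follows; the stopping limit of 0.5 is an illustrative choice.

```python
import numpy as np

def iterative_threshold(image, limit=0.5):
    """Start at the global mean, split into background and foreground,
    and re-average the two class means until the threshold stabilises."""
    th = image.mean()
    while True:
        bg_mean = image[image <= th].mean()
        fg_mean = image[image > th].mean()
        new_th = (bg_mean + fg_mean) / 2.0
        if abs(new_th - th) < limit:
            return new_th
        th = new_th

# Bimodal toy image: dark background (30) with a small bright object (220).
img = np.full((10, 10), 30.0)
img[4:7, 4:7] = 220.0
th = iterative_threshold(img)          # converges near (30 + 220) / 2
```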
2. Cluster-Based Method:
• Read the image.
• Convert the image from RGB colour space to L*a*b* colour space.
• Classify the colours in a*b* space using k-means clustering.
• Create images that segment the original image by colour.
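A toy version of the clustering step, using plain k-means written in NumPy. For brevity it clusters raw RGB pixels; the slides cluster in a*b* space, so the RGB simplification (and the deterministic initialisation) is ours.

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Plain k-means: assign each pixel to its nearest centre, then
    recompute each centre as the mean of its assigned pixels."""
    # Deterministic init from evenly spaced points (adequate for a sketch).
    centres = points[np.linspace(0, len(points) - 1, k).astype(int)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = points[labels == j].mean(axis=0)
    return labels, centres

# Toy "image": left half reddish pixels, right half bluish pixels.
img = np.zeros((4, 8, 3))
img[:, :4] = [200.0, 30.0, 30.0]
img[:, 4:] = [30.0, 30.0, 200.0]
pixels = img.reshape(-1, 3)
labels, centres = kmeans(pixels, k=2)
seg = labels.reshape(4, 8)     # per-pixel cluster index = segmentation map
```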

3. Region-Based Method:
In this method, different objects are separated by perceptual boundaries derived from neighbourhood features. It is most often texture-based: textures are considered as instantiations of underlying stochastic processes and analysed under the assumption that stationarity and ergodicity hold. Region-based features are extracted and used to define "classes".
Comparison of Various Segmentation Methods
Classification Methods

The extracted moving regions may contain different kinds of objects of various colours, shapes, and textures. Some classification approaches are:

a.) Shape-based classification:
Different shape information of motion regions, such as representations of points, boxes, and blobs, is available for classifying moving objects.

b.) Motion-based classification:
Non-rigid articulated object motion shows a periodic property.

c.) Colour-based classification:
Unlike many other image features, colour is relatively constant under viewpoint changes and easy to acquire.

d.) Texture-based classification:
It counts the occurrences of gradient orientations in localized portions of an image, then computes the data on a dense grid of uniformly spaced cells and uses overlapping local contrast normalization for better accuracy.
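A minimal sketch of the texture descriptor in (d): a magnitude-weighted histogram of gradient orientations for a single cell (the HOG idea). The bin count and the L2 normalisation here are illustrative choices.

```python
import numpy as np

def orientation_histogram(patch, bins=9):
    """One HOG-style cell: histogram of gradient orientations over
    0-180 degrees, weighted by gradient magnitude, then L2-normalised
    (the local contrast normalisation mentioned above)."""
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Vertical stripes: all gradients are horizontal, so the histogram mass
# lands in the first orientation bin.
patch = np.tile([0.0, 0.0, 255.0, 255.0], (8, 2))
hist = orientation_histogram(patch)
```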
Proposed Work

1. The compressed video is acquired from the camera and converted into raw format.
2. Frames are extracted from this raw-format video, and the set of frames is processed for noise removal.
3. After the noise-removal stage, the loss is compared with a threshold to check whether the image is reliable. If it is not, we re-iterate step 2.
4. Otherwise, we apply background subtraction techniques to obtain the foreground image.
5. Segmentation is applied to the foreground image thus obtained.
6. We then apply classification techniques on this output image to classify the object into one of the many classes.
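The six steps can be sketched end to end in NumPy. The mean-filter denoiser and the size-based `classify` rule below are hypothetical stand-ins for the noise-removal stage and the ConvNet classifier, whose exact parameters the deck does not fix.

```python
import numpy as np

def denoise(frame):
    # Steps 2-3: a 3x3 mean filter standing in for the noise-removal stage.
    h, w = frame.shape
    padded = np.pad(frame.astype(np.float64), 1, mode="edge")
    return sum(padded[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

def subtract_background(frame, background, th=20):
    # Step 4: frame differencing against a reference background image.
    return np.abs(frame - background) > th

def classify(region_size):
    # Step 6: hypothetical size-based rule standing in for the classifier.
    return "vehicle" if region_size > 8 else "pedestrian"

background = np.full((16, 16), 50.0)
frame = background.copy()
frame[5:9, 5:9] = 200.0                  # a moving object enters the scene
mask = subtract_background(denoise(frame), denoise(background))
label = classify(int(mask.sum()))
```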
System Design
Implementation

• Back up the surveillance video to a USB drive or secondary storage.
• Extract frames from the video and convert these frames into RGB format.
• Use a median/Kalman filter to denoise each noisy image.
• Extract the foreground image from the background image using the step above.
• Run the images through the proposed ConvNet architecture and the desired algorithm to detect blobs and features in the images.
• Remove any occlusion or blur in the image and apply other smoothing techniques to obtain the final output.
• Save the resultant frames back to the drive.
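The denoising step can be illustrated with a plain 3x3 median filter (the Kalman part of the median/Kalman filtering mentioned above is omitted in this sketch):

```python
import numpy as np

def median_filter(img, size=3):
    """Replace each pixel with the median of its size x size neighbourhood;
    removes salt-and-pepper noise while largely preserving edges."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    windows = np.stack([padded[dy:dy + h, dx:dx + w]
                        for dy in range(size) for dx in range(size)])
    return np.median(windows, axis=0)

# A flat grey frame corrupted by one salt pixel and one pepper pixel.
frame = np.full((6, 6), 120.0)
frame[2, 2] = 255.0   # salt
frame[4, 4] = 0.0     # pepper
clean = median_filter(frame)
```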
Result
• The result of object recognition on the sample data set is evident in the following video snippet.
• The segmentation applied to the extracted frame can also be seen in the image below.
• We have applied 3 types of segmentation on the target image:
• Colour-based segmentation
• Region-based segmentation
• Cluster-based segmentation
Conclusion
• In this presentation, we have presented a novel approach to segment and classify the moving objects in a surveillance video.

• We compress the video in HEVC format and convert it into HSV or greyscale.

• We apply a Gaussian algorithm for background subtraction.

• We have used 3 methods for outlining the segmented region, namely region-based segmentation, colour-based segmentation, and cluster-based segmentation.

• We have used a Convolutional Neural Network for classification of the recognised objects.

• We use libraries like TensorFlow, Keras, and OpenCV (cv2).


References

• Dhara Trambadiya and Chintan Varnagar, "A Review on Moving Object Detection and Tracking Methods", National Conference on Emerging Trends in Computer, Electrical & Electronics (ETCEE-2015), International Journal of Advance Engineering and Research Development (IJAERD), e-ISSN: 2348-4470, print-ISSN: 2348-6406.

• Shikha Mangal and Ashavani Kumar, "Real Time Moving Object Detection for Video Surveillance Based on Improved GMM", International Journal of Advanced Technology and Engineering Exploration, Vol. 4(26), ISSN (Print): 2394-5443, ISSN (Online): 2394-7454.

• Poonam Khare, "Literature Survey on the Various Methods of Object Detection in Video Surveillance Systems", International Research Journal of Engineering and Technology (IRJET), e-ISSN: 2395-0056, p-ISSN: 2395-0072.
References (Cont.)

• S. Arun Inigo and P. Suresh, "General Study on Moving Object Segmentation Methods for Video", International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), Volume 1, Issue 8, ISSN: 2278-1323.

• Yuan-Ting Hu, Jia-Bin Huang, and Alexander G. Schwing, "Unsupervised Video Object Segmentation using Motion Saliency-Guided Spatio-Temporal Propagation", arXiv:1809.01125v1 [cs.CV], 4 September 2018.

• Mizanur Rahman, Mhafuzul Islam, and Jon Calhoun, "Real-Time Pedestrian Detection Approach With An Efficient Data Communication Bandwidth Strategy", submitted for presentation at the Transportation Research Board 98th Annual Meeting and for publication in Transportation Research Record, August 1, 2018.
References (Cont.)

• Geethu Miriam Jacob and Sukhendu Das, "Moving Object Segmentation in Jittery Videos by Stabilizing Trajectories Modeled in Kendall's Shape Space", arXiv:1808.04551v1 [cs.CV], 14 August 2018.

• Anton Mitrokhin, Cornelia Fermuller, Chethan Parameshwara, and Yiannis Aloimonos, "Event-based Moving Object Detection and Tracking", arXiv:1803.04523v2 [cs.CV], 23 July 2018.

• Yang Li, Guangcan Liu, and Shengyong Chen, "Detection of Moving Object in Dynamic Background Using Gaussian Max-Pooling and Segmentation Constrained RPCA", arXiv:1709.00657v1 [cs.CV], 3 September 2017.

• Hamed R. Tavakoli and Jorma Laaksonen, "Towards Instance Segmentation with Object Priority: Prominent Object Detection and Recognition", arXiv:1704.07402v2 [cs.CV], 4 August 2017.
References (Cont.)

• Mehran Yazdi and Thierry Bouwmans, "New Trends on Moving Object Detection in Video Images Captured by a Moving Camera: A Survey", Computer Science Review, Elsevier, 2018. HAL Id: hal-01724322, https://hal.archives-ouvertes.fr/hal-01724322, submitted 6 March 2018.

• Payal Ranipa and Kapildev Naina, "Real Time Moving Object Tracking In Video Processing", International Journal of Engineering Research and General Science, Volume 3, Issue 1, January-February 2015, ISSN 2091-2730.

• Urvasi Sharma and Tripti Sharma (ECE Department, Mody University, Lakshmangarh, India), "Efficient Object Detection with its Enhancement", in Proceedings of Computing, Communication and Automation, May 2015.

• Xie Yong, "Improved Gaussian Mixture Model in Video Motion Detection", Journal of Multimedia, Vol. 8, No. 5, October 2013.
Thank you
