
sustainability

Article
Review on Lane Detection and Tracking Algorithms of
Advanced Driver Assistance System
Swapnil Waykole, Nirajan Shiwakoti * and Peter Stasinopoulos

School of Engineering, RMIT University, Melbourne, VIC 3000, Australia; [email protected] (S.W.);
[email protected] (P.S.)
* Correspondence: [email protected]

Abstract: Autonomous vehicles and advanced driver assistance systems are predicted to provide higher safety and reduce fuel and energy consumption and road traffic emissions. Lane detection and tracking are key features of the advanced driver assistance system. Lane detection is the process of detecting white lines on the roads. Lane tracking is the process of assisting the vehicle to remain in the desired path, and it controls the motion model by using previously detected lane markers. There are limited studies in the literature that provide state-of-the-art findings in this area. This study reviews previous studies on lane detection and tracking algorithms by performing a comparative qualitative analysis of the algorithms to identify gaps in knowledge. It also summarizes some of the key data sets used for testing algorithms and the metrics used to evaluate them. It is found that complex road geometries such as clothoid roads are less investigated, with many studies focused on straight roads. The complexity of lane detection and tracking is compounded by challenging weather conditions, vision (camera) quality, unclear lane markings and unpaved roads. Further, occlusion due to overtaking vehicles, high speed and high illumination effects also pose a challenge. The majority of the studies have used custom data sets for model testing. As this field continues to grow, especially with the development of fully autonomous vehicles in the near future, it is expected that more reliable and robust lane detection and tracking algorithms will be developed and tested with real-time data sets.

Keywords: lane detection; lane tracking system; sensors; advanced driver assistance system (ADAS); lane departure warning system

Citation: Waykole, S.; Shiwakoti, N.; Stasinopoulos, P. Review on Lane Detection and Tracking Algorithms of Advanced Driver Assistance System. Sustainability 2021, 13, 11417. https://doi.org/10.3390/su132011417

Academic Editors: Young-Ji Byon, Feng Chen and Meng Guo

Received: 1 September 2021; Accepted: 9 October 2021; Published: 15 October 2021

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Autonomous passenger vehicles are a direct implementation of transportation-related autonomous robotics research. They are also known as self-driving vehicles or driverless vehicles. Shakey the robot (1966–1972) is the first documented autonomous mobile robot [1]. It was developed by Stanford Research Institute's Artificial Intelligence Centre and was capable of sensing the environment, thinking, planning, and navigating. In basic settings, vision-based lane tracking and obstacle avoidance sparked interest in autonomous vehicles [2]. In the early 1990s, the Royal Armament Research and Development Establishment in the United Kingdom created two vehicles for obstacle-free navigation on and off the road [3]. In the United States, the first operations of autonomous driving in realistic settings date back to Carnegie Mellon University's NavLab in the early 1990s [4]. The vehicle developed by NavLab was operated at very low speeds due to the limited computational power available at the time. Early US research projects also included the California PATH project, which developed the automated highway [5]. Vehicle steering was automated with manual longitudinal control in the "No Hands Across America" project [6]. In early 2000, CyberCars, one of several European projects, began developing technologies based on automated transport [7]. The announcement of the Defence Advanced Research Projects Agency (DARPA) grand challenge in 2003 generated research interest in



autonomous cars. Following that, in 2006, the DARPA urban challenge was performed in
a controlled situation with a variety of autonomous and human-operated vehicles. Since
then, many manufacturers, including Audi, BMW, Bosch, Ford, GM, Lexus, Mercedes, Nissan, Tesla, Volkswagen, Volvo and Google, have launched self-driving vehicle projects in collaboration with universities [8]. Google's self-driving car has been tested over 500 thousand kilometres, and Google has begun building prototypes of its own cars [9]. A
completely autonomous vehicle would be expected to drive to a chosen location without
any expectation of shared control with the driver, including safety-critical tasks.
The performance of lane detection and tracking depends on the well-developed roads
and their lane markings, so smart cities are also a prominent factor in autonomous vehicle
research. The idea of a smart city is often linked with an eco-city or a sustainable city,
both of which seek to enhance the quality of municipal services while lowering their costs.
Smart cities’ primary goal is to balance technological innovation with the economic, social,
and environmental problems that tomorrow's cities face. Greater closeness between government and people is required in smart cities that embrace the circular economy's
concepts [10]. The way materials and goods flow around people and their demands will
alter, as will the structure of cities. Several car manufacturers such as Tesla and Audi
have already launched autonomous vehicle marketing for private use. Soon, society
will be influenced by autonomous vehicles’ spread to urban transport systems [11]. The
development of smart cities with the introduction of connected and autonomous vehicles
could potentially transform cities and guide long-term urban planning [10].
Autonomous vehicles and Advanced Driver Assistance Systems (ADAS) are predicted
to provide a higher degree of safety and reduce fuel and energy consumption and road
traffic emissions. ADAS is implemented for safe and efficient driving and offers many driver assistance features, such as forward collision warning or safe lane change assistance [12]. Research shows that most accidents occur because of driver error, and ADAS can reduce both accidents and driver workload. If there is a likelihood
of an accident, ADAS can take the necessary action to avoid it [13]. Lane departure warning
(LDW), which utilizes lane detection and tracking algorithms, is an essential feature of the
ADAS. The LDW warns the driver when a vehicle crosses white lane lines unintentionally
and controls the vehicle by bringing it back into the desired safe path. Three types of
approaches for lane detection are usually discussed in the existing literature: learning-
based approach, features-based approach, and model-based approach [13–18] (detailed
analysis are presented in Section 3.2). Many challenges and issues have been highlighted
in the literature regarding LDW systems, such as changing visibility conditions, variation in images, and lane appearance diversity [17]. Since different countries use various lane markers, lane detection and tracking algorithms must also generalize across different marking styles.

1.1. Objectives and Scope of the Study


There are limited studies that provide state-of-the-art lane detection and tracking algorithms for ADAS. This review paper aims to comprehensively review the previous literature on lane detection and tracking for ADAS and identify gaps in knowledge for future research. The review compares different lane detection and tracking algorithms and
analyses different datasets used to verify the algorithms and metrics used to evaluate the
algorithms. Specifically, the review identifies and classifies the existing lane detection and
tracking algorithms under three themes: features-based, learning-based and model-based,
which provides a systematic approach towards understanding the key characteristics of
lane detection and tracking algorithms in the literature. Some patented works by vehicle
manufacturers under these three categories are also reviewed to acknowledge growing
commercialisation interests in this field of study. However, given the large number of
patents by educational institutions, research groups and vehicle manufacturers, a detailed
review of patented works is outside the scope of this study. This systematic review is
expected to assist researchers working in this area by delivering current advancements
made in lane detection and tracking for ADAS and the challenges to overcome in the future for robust lane detection and tracking systems.
The structure of the paper is as follows. Section 2 provides an overview of the methodology adopted for the literature review. It is then followed by a detailed literature review that includes a brief introduction to sensors used in the ADAS and an analysis of the existing literature on lane detection and tracking algorithms. Section 4 presents the discussions, followed by conclusions in Section 5.

2. Methodology
Literature was gathered from electronic databases. The databases included "ISI Web of Science", "Science Direct", "Scopus", "Google Scholar" and "IEEE Xplore". The keywords used for the search were "Lane detection algorithms", "Lane tracking algorithms", "Lane departure warning algorithms", "Advanced driver assistance system", "Lane change tracking", "Vehicle tracking", "Vehicle tracking sensors", and "Automated lane change", or a combination of these words (Figure 1). We also searched for patented works. Patents published from 2006 to 2020 were searched using the terms "Lane detection and tracking," "Lane departure warning," and "Advance driver assistance system" in "Google Scholar" and "PubMed". As mentioned in Section 1.1, the objective of the patent search is to acknowledge growing commercialisation interests in this field of study rather than to provide a detailed review of the patented works. As such, only a sample of patented works from vehicle manufacturers was discussed. The period studied for the literature is the past two decades, as lane tracking and detection is an emerging field that has gained momentum post-2000. Only English language publications were considered for the review, as they are widely accessible to global readers. Relevant publications in the reference lists of the collected literature further improved the search procedure. The lane detection and tracking algorithms were investigated under the three approaches that have been commonly referred to in the literature (features-based, learning-based and model-based, as shown in Figure 1). The existing databases were analyzed to identify the availability of datasets for future research. The lane detection criteria, calculation of the detection rate and accuracy of the algorithms that have been adopted in the literature are also reviewed.

 
Figure 1. Flowchart showing the methodology adopted for the review.
3. Literature Review
A comparison of the different sensors used in ADAS is presented first. It is then followed by an in-depth review of algorithms used for lane detection and tracking, including the patented works.

3.1. Sensors Used in the ADAS


ADAS deploys different sensor fusion systems to guide the vehicle (Figure 2). In the literature, three types of sensors used in ADAS have been identified: Light Amplification by Stimulated Emission of Radiation (LASER) based sensors, Radio Detection And Ranging (RADAR) based sensors, and vision-based sensors, as described below. Table 1 shows the Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis of LASER, RADAR and vision-based sensors.

Table 1. SWOT analysis of sensors that are used in ADAS.

LASER based sensors. Relative velocity: derivative of range. Measured distance: time of flight. Strengths: reliable for automatic car parking and collision mitigation. Weaknesses: vulnerable to dirty lenses and reduced target reflection. Application: collision warning, assisted automatic parking. Opportunities: gives warning for excessive load or strain. Threats: failure due to inclement weather. Perceived energy: emitted laser waves, 600–1000 (nanometers). Recognizing vehicle: resolved via spatial segmentation and motion.

RADAR based sensors. Relative velocity: frequency. Measured distance: time of flight. Strengths: suitable for collision mitigation and adaptive cruise control. Weaknesses: vulnerable and sometimes fails in extreme weather conditions. Application: collision warning, assisted automatic parking. Opportunities: better accuracy and requires no attention. Threats: inappropriate and difficult implementation by non-professionals. Perceived energy: emitted radio signal wave (millimeters). Recognizing vehicle: resolved via tracking.

Vision based sensors. Relative velocity: derivative of range. Measured distance: model parameter. Strengths: readily available and affordable in the automobile sector. Weaknesses: vulnerable to extreme weather conditions and sometimes fails to work in the night time. Application: collision warning, assisted automatic parking. Opportunities: low cost, passive non-invasive sensors and low operating power. Threats: less effective in bad weather, complex illumination and shadow. Perceived energy: ambient visible light. Recognizing vehicle: resolved via motion and appearance.

3.1.1. LASER Based Sensors


Laser scanners and light detection and ranging (LIDAR) are the common laser-based sensors. In this technology, a transmitter and a receiver are paired, and the impulse light of electromagnetic waves is recorded through them. Near-infrared (800–950 nm) and wavelengths above 1500 nm of the electromagnetic spectrum are used [19]. By estimating the time of flight, the distance between the sensor and the reflecting target is calculated. It may not be possible to derive the relative velocity of a moving object directly, so it is obtained by taking the derivative of range with respect to time. These types of sensors are used for multiple target tracking. The drawbacks of these sensors are vulnerability to dirty lenses and the inadequacy of the reflecting target. Besides, these sensors may be too sensitive to weather conditions. These types of sensors are reliable for automatic car parking and collision mitigation [19,20].
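The two relations above are simple enough to state in code. The following is a minimal illustrative sketch (not taken from any of the reviewed systems): range from the pulse round-trip time, and relative velocity as a finite-difference derivative of range.

```python
# Illustrative sketch: range from laser time of flight, and relative
# velocity approximated as the derivative of range over time.
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(round_trip_time_s: float) -> float:
    """Distance to target from a pulsed-laser round-trip time."""
    return C * round_trip_time_s / 2.0

def relative_velocity(prev_range_m: float, curr_range_m: float, dt_s: float) -> float:
    """Finite-difference estimate of relative velocity (m/s); negative = closing."""
    return (curr_range_m - prev_range_m) / dt_s

# Example: a 0.5 microsecond round trip corresponds to ~75 m.
r1 = range_from_tof(0.50e-6)
r2 = range_from_tof(0.49e-6)          # next scan, 50 ms later
v = relative_velocity(r1, r2, 0.05)   # ~ -30 m/s, target approaching
```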

3.1.2. RADAR Based Sensors


RADAR sensors can detect objects in haze, dust, rain, and snow up to 200 m ahead. Through the radio detection and ranging process, these sensors emit strong radio waves through the transmitter and receive them back through the receiver, similar to laser-based sensors. The distance between the sensor and the object is calculated by the time of flight. Another advantage is that the frequency shift between the emitted wave and its Doppler echo can be measured, which provides the object's velocity. To map the movements of aircraft, these kinds of sensors are often used in the aviation and defence manufacturing sectors. In the automobile sector, two types of sensors are used: long-range sensors, which operate in the 77–81 GHz spectrum, and short-range sensors, which operate between 21.65–26.65 GHz. In extreme weather conditions, these sensors are very vulnerable and sometimes fail to work. These kinds of sensors are used for collision mitigation and adaptive cruise control [19,20].
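As an illustration of the Doppler relation just described, the sketch below assumes a 77 GHz long-range carrier; it is not code from the cited works.

```python
# Illustrative sketch: relative speed from the Doppler shift between the
# emitted radar wave and its echo, v = f_d * c / (2 * f_c).
C = 299_792_458.0  # speed of light, m/s

def speed_from_doppler(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Positive Doppler shift means the target is approaching."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# Example: a 10 kHz Doppler shift on a 77 GHz long-range radar
# corresponds to roughly 19.5 m/s of closing speed.
print(speed_from_doppler(10e3))
```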

3.1.3. Vision-Based Sensors

These types of sensors come under the passive sensors category, which means they do not emit any waves. To assess the presence, orientation and accuracy of the surrounding objects, vision sensors use images. Vision sensors use a mixture of image acquisition and image processing, and multi-point inspection is carried out using a single sensor. Two types of sensors are used in a vision-based system: the first is a monocular camera, and the second is a stereo vision camera. These sensors do not directly derive the range and velocity of the objects, and as such, a sophisticated signal procedure is used to derive these parameters. These sensors are readily available and affordable in the automobile sector. For traffic signal analysis and lane change assistance, these kinds of sensors are applicable. The main drawback is vulnerability to extreme weather conditions and sometimes failing to work at nighttime [19,20].

 
Figure 2. Sensors fusion to guide autonomous vehicle, adapted and reprinted from ref. [21].

3.2. Lane Detection and Tracking Algorithms

In this section, we have conducted a comparison and analysis of algorithms in three different categories according to the approaches used: the features-based approach, the model-based approach and the learning-based approach.
The feature-based approach uses edges and local visual characteristics of interest, such as gradient, colour, brightness, texture, orientation, and variations, which are relatively insensitive to road shapes but sensitive to illumination effects. The model-based approaches apply global road models to fit low-level features; they are more robust against illumination effects but sensitive to road shapes [13,14]. Geometric parameters are used in the model-based approach for lane detection [16–18]. The learning-based approach consists of two stages: training and classification. The training process uses previously known errors and system properties to construct a model, e.g., program variables. The classification phase then applies the trained model to the user set of properties and outputs those that are more likely to be correlated with the error, ordered by their probability of fault disclosure [19]. In the following sections, we describe the three approaches used in the literature in detail. This is followed by summary tables (Tables 2–5) that present the key features of these algorithms and their strengths, weaknesses, and future prospects.

3.2.1. Features-Based Approach (Image and Sensor-Based Lane Detection and Tracking)
Image and sensor-based lane detection and tracking decision-making processes depend on the sensors attached to the vehicle and the camera output. In this approach, the image frames are pre-processed, and a lane detection algorithm is applied to detect the lane. The sensor values are then used to decide the path to be followed based on the lane markings [22,23].
Kuo et al. [24] implemented a vision-based lane-keeping system. The proposed
system obtains the vehicle position following the lane and controls the vehicle to be in
the desired path. The steps involved in the lane-keeping system are inverse perspective
mapping, detection of lane scope features and reconstruction of the lane markings. The
main drawback of the system is that the performance is reduced when the vehicle is driving
in a tunnel.
Kang et al. [25] proposed a kinematic-based fault-tolerant mechanism to detect the
lane even if the camera cannot provide the road image due to malfunction or environmental
constraints. In the absence of camera input, the lane is predicted using the kinematic model
by taking the parameters such as the length and speed of the vehicle. The camera input is
given as a clothoid cubic polynomial curve road model. In the absence of camera input, the
lane coefficients of the clothoid model will be available. A lane restoration scheme is used to
overcome this loss based on a multi-rate state estimator obtained from the kinematic lateral
motion model in the clothoid function. The predicted lane is based on the past curvature
rate and road curvature. The results show that the proposed method can maintain the lane
for 3 s without camera input. The developed algorithm was simulated using CarSim and Simulink. It was also tested in a HYUNDAI Motors Tucson test vehicle equipped with an AutoBox from dSPACE.
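To make the lane representation concrete, the following sketch evaluates a cubic-polynomial (clothoid-approximation) lane model. The coefficient naming (lateral offset, heading, curvature, curvature rate) follows the common convention and is an assumption here, not the exact parameterization of Kang et al. [25].

```python
# Minimal sketch of a cubic-polynomial (clothoid approximation) lane model.
# Coefficient naming is an assumption, not the authors' exact parameterization.

def clothoid_lane_y(x_m, offset, heading, curvature, curvature_rate):
    """Lateral lane position at longitudinal distance x_m ahead of the vehicle."""
    return (offset
            + heading * x_m
            + 0.5 * curvature * x_m ** 2
            + (curvature_rate / 6.0) * x_m ** 3)

# When the camera drops out, the last known curvature and curvature rate can
# keep predicting the lane over a short horizon (the paper reports about 3 s).
predicted = [clothoid_lane_y(x, 0.1, 0.01, 2e-3, 1e-5) for x in range(0, 60, 10)]
```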
Borkar et al. [26] proposed a lane detection and tracking method using inverse projective mapping (IPM) to create a bird's-eye view of the road, a Hough transform for detecting candidate lanes, and a Kalman filter to track the lane. The road image is converted to grayscale, followed by temporal blurring. The lanes are detected by identifying pairs of parallel lines separated by a distance. The IPM images are converted to binary, and a Hough transform is performed on the binary image, which is then divided into two halves. To determine the center of the line, a one-dimensional matched filter is applied to each sample. The pixel with a large correlation that exceeds the threshold is selected as the center of the lane. The Kalman filter is used to track the lane, taking the lane orientation and the difference between the current and previous frames. A Firewire camera is used to capture the image of the road. The proposed algorithm provides better accuracy on isolated highways and metro highways, and the accuracy is in the range of 86% on city roads. The improved performance is due to the usage of the Kalman filter to track the lane.
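A minimal sketch of the IPM step with OpenCV is shown below; the four source points are hypothetical, camera-dependent calibration values rather than those used by Borkar et al. [26].

```python
# Illustrative inverse perspective mapping (IPM) with OpenCV: warp a road
# trapezoid in the camera image into a top-down (bird's eye) rectangle.
import cv2
import numpy as np

def birds_eye_view(frame: np.ndarray) -> np.ndarray:
    h, w = frame.shape[:2]
    # Hypothetical trapezoid on the road surface (image coordinates).
    src = np.float32([[w * 0.45, h * 0.62], [w * 0.55, h * 0.62],
                      [w * 0.90, h * 0.95], [w * 0.10, h * 0.95]])
    # Destination rectangle in the top-down view.
    dst = np.float32([[w * 0.25, 0], [w * 0.75, 0],
                      [w * 0.75, h], [w * 0.25, h]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, M, (w, h))
```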
Sun et al. [27] proposed a lane detection mechanism that considers multiple frames, in contrast with single-frame methods, together with an inertial unit and a classifier. The Hough transform is employed to extract line segments from the lane markings, and the Hough space is used to store the line segments with an associated probability value. The initially assigned probability value changes due to error and vehicle movement. A Kalman filter is applied to smooth the line segments in Hough space. The inertial measurement unit (IMU) values are used to align the previous line segments in the Hough space. The validity of the line segments is determined using a convolutional neural network. Lane detection is performed by considering the line segments with a high probability value. The analysis of the method using the Caltech dataset provides accuracy in the range of 95% to 97%. Lane detection under different environmental conditions, such as sunlight, rain and high values of sunlight and rainfall, shows performance in the range of 72% to 87%. The system is implemented using an NVIDIA GTX1050ti GPU, an OV10650 camera, and an Epson G320 IMU.
Lu et al. [28] proposed a lane detection algorithm for urban traffic scenarios in which the road is well-constructed, flat and of equal width. The road model is constructed using feature line pairs (FLPs); the FLPs are detected using a Kalman filter, and a regression diagnostic technique determines the road model from the FLPs. The results show that the time taken to detect the road parameters is 11 ms. The proposed method is implemented using C++ on a 1.33 GHz AMD processor-based personal computer with a single camera and a Matrox Meteor RGB/PPB digitizer, and deployed in THMR-V (Tsinghua Mobile Robot V).
Zhang and Shi [29] proposed a lane detection method for detecting lanes at night. The Sobel and Canny operators detect the edges of the lanes. Gradients exceeding a certain threshold are labelled as edge points. The histogram region with higher brightness is labelled as the lane boundary, and the low-valued histogram region is labelled as the road. The accuracy of the proposed method is high even in the presence of noise from car headlights, rear lights and road contour signs.
Borkar et al. [30] proposed a layered approach to detect the lane at night. A region of interest is specified in the captured image of the road. The image is converted to greyscale for further processing. Temporal blurring is applied to obtain continuous long lane lines. Depending on the characteristics of the neighboring pixels, an adaptive threshold is used to segment the objects. The image is divided into left and right halves, and a Hough transform is performed on each half to determine the straight lines. The final process deals with fitting all the straight lines. A Firewire S400 (400 Mbps) color camera in VGA resolution (640 × 480) at 30 fps is used to capture the video, which is fed to MATLAB, and lanes are detected in an offline manner. The performance of the proposed method is good in isolated highway and metro highway scenarios. With moderate traffic, the accuracy of detecting the lanes is reduced to 80 percent.
Priyadarshini et al. [31] proposed a lane detection system that detects the lane during the daytime. The captured video is converted to a grayscale image. A Gaussian filter is applied to remove the noise. The Canny edge detection algorithm is used to detect the edges. To identify the length of the lane, a Hough transform is applied. The proposed method is simulated using a Raspberry Pi-based robot with a camera and ultrasonic sensors to determine the distance to neighbouring vehicles.
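This grayscale, Gaussian blur, Canny and Hough chain is a classic pipeline; a compact OpenCV sketch is given below, with illustrative thresholds that are assumptions rather than the paper's values.

```python
# Compact sketch of the grayscale -> Gaussian blur -> Canny -> Hough chain.
# All thresholds are illustrative assumptions.
import cv2
import numpy as np

def detect_lane_lines(frame_bgr: np.ndarray):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # suppress sensor noise
    edges = cv2.Canny(blurred, 50, 150)              # gradient-based edge map
    # Probabilistic Hough transform returns finite segments (x1, y1, x2, y2).
    return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=20)
```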
The survey by Hong et al. [32] discussed video processing techniques to determine lane illumination changes in the region of interest for straight roads. The survey highlights the methodologies involved, such as choosing the proper color space and determining the region of interest. Once the intended image is captured, a color segmentation operation is performed using region splitting and clustering schemes. This is followed by applying a merging algorithm to suppress the noise in the image.
A color-based lane detection and representative line extraction algorithm was proposed by Park et al. [33]. The captured image in RGB format is converted to grayscale, followed by binary image conversion. The purpose of the binary image conversion is to remove the shadows in the captured image. The lanes in the image are detected using the Canny algorithm based on the colour feature. The direction and intensity are determined after removing the noise using a Gaussian filter. The images are smoothed by applying a median filter. The lanes in the image are considered the region of interest, and a Hough transform is applied to confirm the accuracy of the lanes in the region of interest. The experiment was performed during the daytime. The results show that the lane detection rate is more than 93%.
El Hajjouji et al. [34] proposed a hardware architecture for detecting straight lane lines using the Hough transform. The CORDIC (Coordinate Rotation Digital Computer) algorithm calculates the gradient and phase from the captured image; the output of the CORDIC block is the gradient norm and its angle with respect to the x-axis of the image. The norm and angles are compared with the threshold obtained from the region of interest. The Hough transform is applied to the outcome of the comparator module, and the relation between the Hough space and the angle is determined. Noise is removed by the Hough transform voting procedure. Finally, the output is obtained as the slope of the straight line. The algorithm is implemented on the Virtex-5 ML505 platform. The algorithm was tested on a variety of images with varying illumination and different road conditions, such as urban streets, highways, occlusion, poor line paintings, and day and night scenarios. The algorithm provides a detection rate of 92%.
Samadzadegan et al. [35] proposed a lane detection methodology based on a circular arc or parabolic geometric method. The RGB colour image is converted to an intensity image that contains a specific range of values. A three-layer pyramid image is constructed using the bi-cubic interpolation method. Among the three layers of the region of interest, the first-layer pixels undergo a randomized Hough transformation to determine the curvature and orientation features, followed by genetic algorithm optimisation. The process is repeated for the remaining two layers. The outcomes obtained in the lower layers are the features of the lane and are used to determine the lanes in the region of interest. The results show a performance drop in lane detection when entering a tunnel region and when lane markings are occluded by the shadow of another vehicle.
Cheng et al. [36] proposed a hierarchical lane detection system to detect the lanes on structured and unstructured roads. The system classifies the environment into structured and unstructured based on feature extraction, which depends on the color of the lane marking. The connected component labelling method is applied to determine the feature objects. During the training phase, supervised learning is performed, and the objects are manually classified as left lane, right lane and no lane markings. The image is classified as structured or unstructured based on the vote value associated with the weights. The lanes on structured roads are detected by eliminating the moving vehicles in the lane image, followed by lane recognition considering the angle of inclination and the starting points of the lane markings. The lane coherence verification module compares the lane width of the current frame with that of the previous frame to determine the lanes. For unstructured roads, the following steps are performed: mean shift segmentation, which determines the road surface by comparison with the surroundings to capture the variation in colors and texture, and region merging and boundary smoothing, which prunes unnecessary boundary lines and neglects regions smaller than the threshold. The boundary is selected based on the posterior probability of each set of candidates. The simulation results show that around 0.11 s is needed to identify structured or unstructured roads. The system achieves an accuracy of 97% in lane detection.
Han et al. [37] proposed LIDAR sensor-based road boundary detection and tracking for both structured and unstructured roads. The LIDAR is used to obtain polar coordinates. The line segments are obtained from the height and pitch of the LIDAR. Information such as roadsides, curbs, sidewalks and buildings is obtained from the line segments. The road slope and width are obtained by merging two line segments. The road is tracked using the nearest neighbor filter to estimate the state of the target. The algorithm was tested in a real vehicle equipped with LIDAR, GPS and an IMU. The road boundary detection accuracy is 95% for structured and 92% for unstructured roads.
Le et al. [38] proposed a method to detect pedestrian lanes with no lane markings under different illumination conditions. The first stage of the proposed system is vanishing point estimation, which works based on votes of local orientations from coloured edge pixels. The next stage is the determination of the sample region of the lane from the vanishing point. To achieve higher robustness towards different illuminations, an illumination-invariant space is used. Finally, the lanes are detected using the appearance and shape information from the input image. A greedy algorithm is applied, which helps to determine the connectivity between the lanes in each iteration of the input image. The proposed model was tested on input images of both indoor and outdoor environments. The results show that the lane detection accuracy is 95%.
Wang et al. [39] proposed a lane detection system for straight and curved road scenarios. The region of interest, set as 60 m, is determined from the captured image and divided into a straight region and a curve region. The near-field region is approximated as a straight line, and the far-field region is approximated as a curve. An improved Hough transform is applied to detect the straight line. The curve is determined in the far-field region using the least-squares curve fitting method. The WAT902H2 camera model is used to capture the image of the road. The results show that the time taken to determine the straight and curved lanes is 60–80 ms, compared to 70–100 ms in existing works, and the accuracy is around 92–93%. The error rate when bending to the left or right direction is from −0.85 to 5.20% for different angles.
Yeniaydin [40] proposed a lane detection algorithm based on camera and 2D LIDAR input data. The camera obtains the bird's eye view of the road, and the LIDAR detects the location of objects. The proposed method consists of the steps below:
• Obtain the camera and 2D LIDAR data.
• Perform a segmentation operation on the LIDAR data to determine groups of objects, based on the distance among different points (see the sketch after this list).
• Map the groups of objects to the camera data.
• Turn the pixels of the groups of objects into camera data. This is done by forming a rectangular region of interest; straight lines are drawn from the location of the camera to the corners of the region of interest, and a convex polygon algorithm determines the background and occluded regions of the image.
• Apply lane detection to the binary image to detect the lanes.
The proposed approach shows better accuracy compared with traditional methods for distances of less than 9 m.
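A minimal sketch of the distance-based segmentation step is given below; the 0.5 m gap threshold is an illustrative assumption, not a value from the paper.

```python
# Illustrative distance-based segmentation: consecutive scan points closer
# than a gap threshold are grouped into one object.
import math

def segment_scan(points, max_gap_m=0.5):
    """points: list of (x, y) LIDAR returns ordered by scan angle."""
    groups, current = [], [points[0]]
    for prev, curr in zip(points, points[1:]):
        if math.dist(prev, curr) <= max_gap_m:
            current.append(curr)       # same object as the previous return
        else:
            groups.append(current)     # gap detected: close the group
            current = [curr]
    groups.append(current)
    return groups
```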
Kemsaram et al. [41] proposed a deep learning-based approach for detecting lanes, objects and free space. The Nvidia platform comes with an SDK (software development kit) with inbuilt options for object detection, lane detection and free space detection. The object detection module loads the image and applies transformations to it to detect different objects. The lane detection framework uses the LaneNet pipeline on the images; the lanes are assigned numbers from left to right. For each frame, the lane detection framework determines the lane markings, and the lane detection function creates the pixel coordinates (x, y) for each lane marking. The free space module can identify the free space on the surface in front of the vehicle. The proposed method is implemented in C++ and runs in real time on the Nvidia Drive PX 2 platform. The time taken to determine the lane falls under 6 to 9 ms.

3.2.2. Model-Based Approach (Robust Lane Detection and Tracking)


Lee and Moon [42] proposed a robust lane detection and tracking system. The system's main aim is to detect and track the lane under different environmental conditions, such as clear sky, rain and snow, during both morning and night. The proposed system consists of three phases, namely initialization, lane detection, and lane tracking. In the initialization phase, the road region is captured and pre-processed into a low-resolution image. The edges are extracted, and the image is split into left-half and right-half regions. Intersection points are formed from both regions, and these intersection points are mostly found near the vanishing point. Once the number of vanishing points becomes greater than the threshold, the regions above and below the vanishing points are removed. In the lane marking detection phase, the lane marking is determined in a rectangular region of interest. The image is converted into greyscale, edge lines are detected, and line segments are extracted. The hierarchical agglomerative clustering method is used for the color image. A line segment is distinguished from surrounding vehicles, shadows, trees, and buildings by using its frequency in the region of interest; other disturbances are not continuous compared to the real lane marking, and they can be identified by comparing consecutive frames. In the lane tracking phase, lane tracking is achieved from the modified region of interest. Multiple pairs of lanes with the same weight are considered, and the smallest is chosen. Lanes that are not detected are predicted by using a Kalman filter. The system was tested using C++ and the OpenCV library on Ubuntu 14. There is scope for improving the algorithm in the night scenario.
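The Kalman prediction step used to bridge missed detections can be written compactly. Below is a bare-bones sketch with the state chosen as lateral offset and offset rate; the noise settings are illustrative assumptions, not the authors' values.

```python
# Bare-bones Kalman predictor for bridging frames where lane markings are
# missed. State = [lateral offset, offset rate]; noise values are assumptions.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity transition (per frame)
H = np.array([[1.0, 0.0]])               # only the offset is measured
Q = np.eye(2) * 1e-3                     # process noise
R = np.array([[0.05]])                   # measurement noise

def kalman_step(x, P, z=None):
    x = F @ x                             # predict state
    P = F @ P @ F.T + Q                   # predict covariance
    if z is not None:                     # update only when a lane is detected
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return x, P
```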
Son et al. [43] proposed a robust multi-lane detection and tracking algorithm to determine the lane accurately under different road conditions, such as poor road markings, obstacles and guardrails. An adaptive threshold is used to extract strong lane features from images that are not clear. The next step is to extract the erroneous lane features and apply the random sample consensus algorithm to prevent false lane detection. The selected lanes are verified using a lane classification algorithm. The advantage of this approach is that no prior knowledge of the lane geometry is required. The scope for improvement is the detection of false lanes under different urban driving scenarios.
Li et al. [44] proposed a real-time robust lane detection method consisting of three stages: lane marking extraction, geometric model estimation, and tracking of the key points of the geometric model. In the lane extraction process, the lane width is chosen according to the standards followed in the country. The gradient of each pixel is used to estimate the edge points of the lane marking.
Son et al. [45] proposed a method that uses the illumination property of lanes, as it is a challenge to detect the lane and keep it on track under different conditions. The methodology involves determining the vanishing point, for which the bottom half of the image is analyzed using a Canny edge detector and Hough transform. The second step involves determining white or yellow lanes based on the illumination property. The white and yellow lanes are used to obtain the binary image of the lane. The lanes are labelled, and their angles are made to intercept the y-axis; if there is a match, they are grouped to determine long lanes.
Chae et al. [46] proposed an autonomous lane changing system consisting of three
modules: perception, motion planning, and control. The surrounding vehicles are detected
using LIDAR sensor input. In motion planning, the vehicle determines the mode such as
lane-keeping or lane change, followed by the desired motion that is planned considering the
safety of surrounding vehicles. A linear quadratic regulator (LQR) based model predictive
control is used for longitudinal acceleration and deciding the steering angle. The stochastic
model predictive control is used for lateral acceleration.
Chen et al. [47] proposed a deep convolutional neural network to detect lane markings. The modules involved in the lane detection process are lane marking generation, grouping, and lane model fitting. The lane grouping process involves forming a cluster comprising neighbouring pixels that belong to the same lane, represented as a single label, and connecting the labels, called super marking. The next step, lane model fitting, uses a 3rd order polynomial to represent straight and curved lanes. The simulation is done on the CamVid dataset. The setup requires high-end systems for training, and the algorithm was evaluated for only minimal real-time situations. The authors proposed a Global Navigation Satellite System (GNSS) based lane-keeping assistance system, which calculates the target steering angle using a model predictive controller. The advantage of the approach is that the lane can be estimated from GNSS when it is not visible due to environmental constraints. The steering angle and acceleration are modelled using a first-order lag system, and model predictive control is used to control the lateral movement of the vehicle. The proposed system was simulated, and prototype testing was conducted in a real vehicle, an OUTLANDER PHEV (Mitsubishi Motors Corporation). The results show that the lane is followed with a minimal lateral error of about 0.19 m. The drawback of the approach is that the time delay of GNSS has an impact on oscillation in the steering; hence, the GNSS time delay should be kept minimal compared to the steering time delay.
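As an illustration of the 3rd-order polynomial lane-model fitting step, here is a least-squares sketch with made-up sample points; it is not the authors' implementation.

```python
# Illustrative least-squares fit of a 3rd-order polynomial lane model
# through grouped lane-marking pixel coordinates (sample points made up).
import numpy as np

def fit_lane(xs, ys, order=3):
    """Fit y = c3*x^3 + c2*x^2 + c1*x + c0 through lane-marking coordinates."""
    return np.polyfit(xs, ys, order)

coeffs = fit_lane([0, 10, 20, 30, 40], [0.0, 0.2, 0.9, 2.1, 3.9])
lane_at_25m = np.polyval(coeffs, 25)   # evaluate the fitted lane at 25 m ahead
```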
Lu et al. [48] proposed a lane detection approach using Gaussian distribution random sample consensus (G-RANSAC). The process involves converting the image to a bird's eye view to expose all the lane characteristics. The next step is using a ridge detector to extract the features of lane points and removing noise points using an adaptable neural network. The ridge features are extracted from the gray images, which provides better results in the presence of vehicle shadow and minimal illumination in the environment. Finally, the lanes are detected using the RANSAC approach. The RANSAC algorithm considers the confidence level of ridge points in distinguishing the lanes from noise. The proposed algorithm was tested under four different illumination conditions: normal illumination and good pavement, intense illumination and shadow interruption, normal illumination and sign-on-the-ground interruption, and poor illumination and vehicle interference. The algorithm achieved 99.02%, 96.92%, 96.65% and 91.61% true-positive rates, respectively.
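To show the core idea behind the RANSAC step (omitting the Gaussian confidence weighting of ridge points that distinguishes G-RANSAC), here is a plain RANSAC line-fit sketch:

```python
# Plain RANSAC line fit: repeatedly fit on random minimal samples and keep
# the model with the most inliers. Iteration count and tolerance are
# illustrative assumptions.
import random

def ransac_line(points, iters=200, tol=0.1):
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if x1 == x2:
            continue                        # skip degenerate vertical sample
        m = (y2 - y1) / (x2 - x1)           # candidate slope
        b = y1 - m * x1                     # candidate intercept
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers
```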

3.2.3. Learning-Based Approach (Predictive Controller Lane Detection and Tracking)


Bian et al. [49] implemented a lane-keeping assistance system (LKAS) with two switchable assistance modes: a conventional Lane Departure Prevention (LDP) mode and a Lane-Keeping Co-pilot (LK Co-pilot) mode. The LKAS is designed to achieve better reliability. The LDP mode is activated if a lane departure is detected; a lateral offset is used as the lane-departure metric to determine whether to trigger the LDP or not. The LK Co-pilot mode is activated if the driver does not intend to change lane; this mode helps the driver follow the expected trajectory based on the driver's dynamic steering input. Care should be taken to set the threshold accurately and adequately; otherwise, false lane detections would increase.
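A toy version of this mode-switching logic is sketched below; the 0.3 m trigger threshold is a made-up illustrative value, underscoring the paper's point that the threshold must be set carefully.

```python
# Toy mode-switching logic for the two-mode LKAS described above.
# The 0.3 m threshold is an illustrative assumption, not the paper's value.
def lkas_mode(lateral_offset_m: float, driver_signals_lane_change: bool) -> str:
    if driver_signals_lane_change:
        return "OFF"                # driver intends to change lane: do not assist
    if abs(lateral_offset_m) > 0.3:
        return "LDP"                # departure detected: steer back to the lane
    return "LK_COPILOT"             # otherwise follow the expected trajectory
```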
Wang et al. [50] proposed a lane-changing strategy for autonomous vehicles using deep
reinforcement learning. The parameters which are considered for the reward are delay and
traffic on the road. The decision to switch lanes depends on improving the reward by interact-
ing with the environment. The proposed approach is tested under accident and non-accident
scenarios. The advantage of this approach is collaborative decision making in lane changing.
Fixed rules may not be suitable for heterogeneous environmental or traffic scenarios.
Wang et al. [51] proposed a reinforcement learning-based lane change controller. Two types of control are adopted, namely longitudinal and lateral control. A car-following model, namely the intelligent driver model, is chosen for the longitudinal controller. The lateral controller is implemented with reinforcement learning. The reward function is based on yaw rate, acceleration, and the time to change the lane. To overcome static rules, a Q-function approximator is proposed to achieve a continuous action space. The proposed system is tested in a custom-made simulation environment. Extensive simulation is expected to test the efficiency of the approximator function under different real-time scenarios.
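For concreteness, a reward of the kind described (penalizing yaw rate, acceleration and lane-change time) might look like the following sketch; the weights are assumptions, not the paper's values.

```python
# Illustrative lane-change reward shaped by yaw rate, lateral acceleration
# and maneuver duration. All weights are assumptions.
def lane_change_reward(yaw_rate, lat_accel, change_time_s, completed):
    r = 10.0 if completed else 0.0      # bonus for finishing the maneuver
    r -= 0.5 * abs(yaw_rate)            # penalize harsh steering
    r -= 0.5 * abs(lat_accel)           # penalize uncomfortable acceleration
    r -= 0.1 * change_time_s            # penalize slow lane changes
    return r
```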
Suh et al. [52] implemented a real-time probabilistic and deterministic lane changing
motion prediction system which works under complex driving scenarios. They designed
and tested the proposed system on both a simulation and real-time basis. A hyperbolic
tangent path is chosen for the lane-change maneuver. The lane changing process is initiated if the clearance distance is greater than the minimum safe distance, taking into account the positions of other vehicles. A safe driving envelope constraint is maintained to check the availability of
nearby vehicles in different directions. A stochastic model predictive controller is used to
calculate the steering angle and acceleration from the disturbances. The disturbance values
are obtained from experimental data. The usage of advanced machine learning algorithms
could improve the currently developed system’s reliability and performance.
Gopalan et al. [53] proposed a lane detection system to detect the lane accurately under different conditions, such as a lack of prior knowledge of the road geometry and lane appearance variation due to changes in environmental conditions, independent of vehicle speed. The modules of the proposed system are lane detection and tracking. The basic approach used for lane detection is to classify lane markings from non-lane markings using labelled training samples. A pixel hierarchy feature descriptor method is proposed to identify the correlation between the lane and its surroundings. A machine learning-based boosting algorithm is used to identify the most relevant features; the advantage of the boosting algorithm is its adaptive way of increasing or decreasing the weightage of the samples. The lane tracking process is performed when no knowledge about the motion pattern of the lane markings is available. Lane tracking is achieved by using particle filters to track each of the lane markings and understand the cause of the variation. The variance is calculated for different parameters, such as the initial position of the lane, the motion of the vehicle, changes in road geometry and the traffic pattern. The variance associated with these parameters is used to track the lane under different environmental conditions. The proposed learning-based system provides better performance under different scenarios. A point to consider is the assumption of a flat road; the flat road image was chosen to avoid the sudden appearance and disappearance of the lane. The proposed system is implemented at the simulation level.
To summarize the progress made in lane detection and tracking as discussed in this section, Table 2 presents the key steps involved in the three approaches for lane detection and tracking, along with remarks on their general characteristics. Tables 3–5 then present a summary of the data used, strengths, drawbacks, key findings and future prospects of the key studies that have adopted the three approaches in the literature.

Table 2. A summary of methods used for lane detection and tracking with general remarks.

Image and sensor-based lane detection and tracking. Steps: (a) image frames are pre-processed; (b) a lane detection algorithm is applied; (c) the sensor values are used to track the lanes. Tools used: (a) camera; (b) sensors. Data used: sensor values. Classification: feature-based approach. Remarks: frequent calibration is required for accurate decision making in a complex environment.

Predictive controller for lane detection and control. Steps: machine learning techniques (e.g., neural networks). Tools used: (a) model predictive controller; (b) reinforcement learning algorithms. Data used: data obtained from the controller. Classification: learning-based approach. Remarks: reinforcement learning with a model predictive controller could be a better choice to avoid false lane detection.

Robust lane detection and tracking. Steps: (a) capture an image through a camera; (b) use an edge detector to extract the features of the image; (c) determine the vanishing point. Tools used: robust lane detection model algorithms. Data used: real-time data. Classification: model-based approach. Remarks: provides better results in different environmental conditions; camera quality plays an important role in determining lane markings.

Table 3. A comprehensive summary of lane detection and tracking algorithms.

[24] Method used: inverse perspective mapping is applied to convert the image to a bird's eye view. Advantages: minimal error and quick detection of the lane. Drawbacks: the algorithm performance drops when driving in a tunnel due to fluctuation in the lighting conditions. Results: the lane detection error is 5%, the cross-track error is 25% and the lane detection time is 11 ms. Tools used: fisheye dashcam, inertial measurement unit and ARM processor-based computer. Future prospects: enhancing the algorithm to suit complex road scenarios and low-light conditions. Data: obtained by using a model car running at a speed of 100 m/s. Reason for drawbacks: performance drops in determining the lane if the vehicle is driving in a tunnel or on roads without proper lighting; the complex environment creates unnecessary tilt, causing some inaccuracy in lane detection.

[25] Method used: kinematic motion model to determine the lane with minimal parameters of the vehicle. Advantages: no need for parameterization of the vehicle with variables like cornering stiffness and inertia; prediction of the lane even in the absence of camera input for around 3 s. Drawbacks: the algorithm's suitability for different environmental situations has not been considered. Results: lateral error of 0.15 m in the absence of the camera image. Tools used: Mobileye camera, CarSim and MATLAB/Simulink, AutoBox from dSPACE. Future prospects: trying the fault-tolerant model in a real vehicle. Data: test vehicle.

[26] Method used: usage of inverse mapping for the creation of a bird's eye view of the environment. Advantages: improved accuracy of lane detection in the range of 86% to 96% for different road types. Drawbacks: performance under different vehicle speeds and inclement weather conditions not considered. Results: the algorithm requires 0.8 s to process a frame; higher accuracy when more than 59% of lane markers are visible. Tools used: Firewire color camera, MATLAB. Future prospects: real-time implementation of the work. Data: highways and streets around Atlanta.

[27] Method used: Hough transform to extract the line segments; usage of a convolutional neural network-based classifier to determine the confidence of line segments. Advantages: tolerant to noise. Drawbacks: on the custom dataset, the performance drops compared to the Caltech dataset. Results: for the urban scenario, the proposed algorithm provides accuracy greater than 95%; the accuracy obtained in lane detection in the custom setup is 72% to 86%. Tools used: OV10650 camera and Epson G320 IMU. Future prospects: performance improvement is a future consideration. Data: Caltech dataset and custom dataset. Reason for drawbacks: the device specification and calibration play an important role in capturing the lane.

[28] Method used: feature-line-pairs (FLP) along with a Kalman filter for road detection. Advantages: faster detection of lanes; suitable for a real-time environment. Drawbacks: testing the algorithm's suitability under different environmental conditions could be done. Results: around 4 ms to detect the edge pixels, 80 ms to detect all the FLPs, 1 ms to extract the road model with Kalman filter tracking. Tools used: C++; camera and a Matrox Meteor RGB/PPB digitizer. Future prospects: robust tracking and improved performance in urban dense traffic. Data: test robot.

[29] Method used: dual thresholding algorithm for pre-processing; the edge is detected by a single direction gradient operator; a noise filter is used to remove the noise. Advantages: the lane detection algorithm is insensitive to headlights, rear lights, cars and road contour signs. Drawbacks: detection of straight lanes only. Results: the algorithm detects the straight lanes during the night. Tools used: camera with RGB channel. Future prospects: suitability of the algorithm for different types of roads during the night to be studied. Data: custom dataset.

[30] Method used: determination of the region of interest and conversion to a binary image via an adaptive threshold. Advantages: better accuracy. Drawbacks: the algorithm needs changes for checking its suitability for daytime lane detection. Results: 90% accuracy during night time on isolated highways. Tools used: Firewire S400 camera and MATLAB. Future prospects: geometric transformation of the image for increasing the accuracy, and intensity normalization. Data: custom dataset. Reason for drawbacks: the constraints and assumptions considered do not suit the daytime.

[31] Method used: Canny edge detector algorithm is used to detect the edges of the lanes. Advantages: performance of the proposed system is better. Results: the Hough transform improves the output of the lane tracker. Tools used: Raspberry Pi-based robot with camera and sensors. Future prospects: simulation of the proposed method using a Raspberry Pi-based robot with a monocular camera and radar-based sensors to determine the distance between neighboring vehicles. Data: custom data.

[32] Method used: video processing technique to determine the lane illumination changes in the region of interest. Results: robust performance. Tools used: simulator, vision-based vehicle. Future prospects: determine the lane illumination changes in the region of interest for curved roads.

[33] Method used: a colour-based lane detection and representative line extraction algorithm is used. Advantages: better accuracy in the daytime. Drawbacks: the algorithm needs changes to be tested in different scenarios. Results: the lane detection rate is more than 93%. Tools used: MATLAB. Future prospects: there is scope to test the algorithm in the night time. Data: custom data. Reason for drawbacks: unwanted noise reduces the performance of the algorithm.

[34] Method used: proposed hardware architecture for detecting straight lane lines using the Hough transform. Advantages: the proposed algorithm provides better accuracy for occlusion and poor line paintings. Drawbacks: computational complexity and high cost of the HT (Hough transform). Results: the algorithm was tested under various road conditions such as urban streets and highways and provides a detection rate of 92%. Tools used: Virtex-5 ML505 platform. Future prospects: the algorithm needs to be tested under different weather conditions. Data: custom.

[35] Method used: proposed a lane detection methodology based on a circular arc or parabolic geometric method. Advantages: the video sensor improves the performance of lane marking detection. Drawbacks: performance dropped in lane detection when entering the tunnel region. Results: experiments performed with different road scenes provided better results. Tools used: maps, video sensors, GPS. Future prospects: the proposed method can be tested with previously available data. Data: custom. Reason for drawbacks: due to low illumination.

[36] Method used: proposed a hierarchical lane detection system to detect the lanes on structured and unstructured roads. Advantages: quick detection of lanes. Results: the system achieves an accuracy of 97% in lane detection. Tools used: MATLAB. Future prospects: the algorithm can be tested on isolated highways and urban roads.

[37] Method used: LIDAR sensor-based boundary detection and tracking. Advantages: regardless of road types, ... Drawbacks: difficult to track lane boundaries for ... Results: the road boundary ... Data: test vehicle. Future prospects: algorithm needs to test ...
boundary detection and Regardless of road types, boundaries for Test vehicle Algorithm needs to test
detection accuracy is 95% Low contract arbitrary
[37] Y tracking method for algorithm detect accurate unstructured roads because with LIDAR, with RADAR based and Custom data
for structured roads and road shape
structured and lane boundaries. of low contract, arbitrary GPS and IMU. vision-based sensors.
92% for unstructured roads.
unstructured roads. road shape
Sustainability 2021, 13, 11417 15 of 29

Table 3. Cont.

Data
Simulation

Sources Method Used Advantages Drawbacks Results Tool Used Future Prospects Data Reason for Drawbacks
Real

Proposed a method to
Robust performance for There is scope for
detect the pedestrian The result shows that the New dataset of
pedestrian lane detection More challenging for indoor structured roads
[38] Y lanes under different lane detection accuracy is MATLAB 2000 images Complex environment
under unstructured and outdoor environment. with different
illumination conditions 95%. (custom)
environment. speeds limit
with no lane markings.
The proposed system is
implemented using an
improved Hough Robust performance for a
transform, which campus road, in which the Performance drops due to Test vehicle and
[39] Y Y —— ——- Custom data Low illumination
pre-process different light road does not have lane low intensity of light MATLAB
intensity road images markings.
and convert it to the polar
angle constraint area.
The proposed approach Proposed method
A lane detection Computational and
shows better accuracy need to test with software based Fusion of
algorithm based on experimental results show
[40] Y —— compared with the RADAR and analysis and camera and 2D —–
camera and 2D LIDAR the method significantly
traditional methods for vision-based MATLAB LIDAR data
input data. increases accuracy.
distance less than 9 m. sensors data
The Nvidia tool comes with
A deep learning-based SDK (software Complex road
Monocular camera with The time taken to C++ and NVidia’s
approach for detecting development kit) with scenario with
[41] Y advance driver assistance determine the lane falls drive PX2 KITT —-
lanes, object and free inbuild options for object different high
system is costly. under 6 to 9 ms. platform
space. detection, lane detection intensity of light.
and free space.
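Several of the methods above ([24,26]) construct a bird's-eye view via inverse perspective mapping before searching for lane pixels. The following is a minimal sketch of that step using OpenCV; the four source points are hypothetical placeholders, as real values come from the camera's mounting height, pitch and calibration.

```python
import cv2
import numpy as np

def birds_eye_view(frame, src_pts, out_size=(400, 600)):
    """Inverse perspective mapping: warp a road trapezoid in the camera image
    to a rectangular top-down view."""
    w, h = out_size
    dst_pts = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    M = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, M, (w, h))

# Hypothetical trapezoid (bottom-left, bottom-right, top-right, top-left)
# for a 1280x720 camera; real points depend on calibration.
example_src = [(180, 700), (1100, 700), (740, 450), (540, 450)]
```

In the warped view, lane markings appear approximately parallel, which simplifies the curve fitting and distance estimates reported by these papers.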

Table 4. A comprehensive summary of learning-based (model predictive controller) lane detection and tracking.

| Source | Method | Advantages | Drawbacks | Result | Tools used | Future prospects | Data | Reason for drawbacks |
|---|---|---|---|---|---|---|---|---|
| [42] | Gradient cue, color cue and line clustering are used to verify the lane markings. | The proposed method works better under different weather conditions such as rainy and snowy environments. | The suitability of the algorithm for multi-lane detection of lane curvature is to be studied. | Except for rainy conditions during the day, the proposed system provides better results. | C++ and OpenCV on the Ubuntu operating system; hardware: dual ARM Cortex-A9 processors. | — | 48 video clips from the USA and Korea. | Since the road environment may not be predictable, false detections occur. |
| [43] | Extraction of lanes from the captured image; a random sample consensus (RANSAC) algorithm is used to eradicate errors in lane detection. | Multi-lane detection even with poor lane markings; no prior knowledge about the lane is required. | Urban driving scenario quality has to be improved on the Cordova 2 dataset, since the curb of the sidewalk is perceived as a lane. | Evaluated on the Caltech lane datasets consisting of four urban driving scenarios (Cordova 1, Cordova 2, Washington 1, Washington 2), with a total of 1224 frames containing 4172 lane markings. | MATLAB. | Real-time implementation of the proposed algorithm. | Data from South Korean roads and the Caltech dataset. | IMU sensors could be incorporated to avoid false detection of lanes. |
| [44] | A rectangular detection region is formed on the image; edge points of the lane are extracted using a threshold algorithm; a modified Bresenham line voting space is used to detect lane segments. | Robust lane detection method using a monocular camera on roads with proper lane markings. | Performance drops when the road is not flat. | On the Cordova 2 dataset, the false detection rate is higher, around 38%; the algorithm performs well under different road geometries such as straight, curved, polyline and complex. | Software-based performance analysis on the Caltech dataset for different urban driving scenarios; hardware implementation on the Tuyou autonomous vehicle. | — | Caltech and custom-made dataset. | False detections occurred due to difficulties in image capturing; more training or the inclusion of sensors for live dataset collection would help to mitigate this. |
| [45] | Vanishing points are detected based on a voting map; the distinct property of lane colour is used to obtain an illumination-invariant lane marker, and the main lane is finally found using clustering methods. | Under various illumination conditions, the lane detection rate of the algorithm is on average 93%. | There remains scope to test the algorithm in the daytime under inclement weather conditions. | Overall, the method processes a frame within 33 ms. | Software-based analysis. | Need to reduce computational complexity by using the vanishing point and an adaptive ROI for every frame. | Custom data based on real time. | — |
| [46] | Sharp-curve lane detection from the input image based on hyperbola fitting; the input image is converted to grayscale, and the features, namely the left edge, right edge and extreme points of the lanes, are calculated. | Better accuracy for sharp-curve lanes. | The suitability of the algorithm for different road geometries is yet to be studied. | Lane detection accuracy of around 97%; the average time taken to detect the lane is 20 ms. | Custom-made simulator, C/C++ and Visual Studio. | — | Custom data. | — |
| [47] | Accurate and robust vanishing-point detection method for unstructured roads. | Accurate and robust performance for unstructured roads. | Difficult to obtain a robust vanishing point for lane detection in unstructured scenes. | Vanishing-point accuracy ranges between 80.9% and 93.6% for different scenarios. | Unmanned ground vehicle and mobile robot. | Future scope for structured roads with different scenarios. | Custom data. | Complex background interference and unclear road markings. |
| [48] | Lane detection approach using Gaussian distribution random sample consensus (G-RANSAC); a ridge detector extracts the features of lane points, and an adaptable neural network removes noise. | Provides better results in the presence of vehicle shadow and minimal illumination of the environment. | — | Tested under four illumination conditions ranging from intense to poor, providing lane detection accuracies of 95%, 92%, 91% and 90%, respectively. | Software-based analysis. | The proposed method needs to be tested at various times, such as day and night. | Test vehicle. | — |
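Rows [43] and [48] above both rely on RANSAC-style consensus fitting to suppress outlier lane pixels. The sketch below is a plain RANSAC line fit in NumPy, offered as a simplified stand-in for those stages; the iteration count and inlier tolerance are illustrative assumptions.

```python
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=2.0, seed=0):
    """Fit a line to candidate lane pixels (x, y) while ignoring outliers."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        dx, dy = x2 - x1, y2 - y1
        norm = np.hypot(dx, dy)
        if norm < 1e-9:
            continue
        # Perpendicular distance of every point to the candidate line.
        dist = np.abs(dy * (pts[:, 0] - x1) - dx * (pts[:, 1] - y1)) / norm
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refinement y = a*x + b on the inlier set.
    x, y = pts[best_inliers, 0], pts[best_inliers, 1]
    a, b = np.polyfit(x, y, 1)
    return a, b, best_inliers
```

The G-RANSAC variant in [48] additionally weights the sampling using a Gaussian distribution over the lane points, which this generic sketch does not reproduce.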

Table 5. A comprehensive summary of robust lane detection and tracking.

| Source | Method used | Advantages | Drawbacks | Result | Tools used | Future prospects | Data | Reason for drawbacks |
|---|---|---|---|---|---|---|---|---|
| [49] | Inverse perspective mapping is applied to convert the image to a bird's-eye view. | Quick detection of the lane. | The algorithm's performance drops due to fluctuations in the lighting conditions. | The lane detection error is 5%; the cross-track error is 25%; lane detection time is 11 ms. | Fisheye dashcam; inertial measurement unit; ARM processor-based computer. | Enhancing the algorithm for complex road scenarios and low-light conditions. | Data obtained using a model car running at a speed of 1 m/s. | The complex environment creates unnecessary tilt, causing some inaccuracy in lane detection. |
| [50] | Deep learning-based reinforcement learning is used for decision making in the lane changeover; the reward for decision making is based on parameters such as traffic efficiency. | Cooperative decision-making process involving a reward function comparing the delay of a vehicle and the traffic. | Validation is expected to check the accuracy of the lane-changing algorithm in heterogeneous environments. | The performance is fine-tuned based on cooperation for both accident and non-accident scenarios. | Custom-made simulator. | Dynamic selection of the cooperation coefficient under different traffic scenarios. | Newell car-following model. | — |
| [51] | Reinforcement learning-based approach for decision making using a Q-function approximator. | Decision-making process involving a reward function comprising yaw rate, yaw acceleration and lane-changing time. | More testing is needed to check the efficiency of the approximator function under different real-time conditions. | The reward functions are used to learn the lane in a better way. | Custom-made simulator. | Testing the efficiency of the proposed approach under different road geometries and traffic conditions; testing the feasibility of the custom reinforcement learning with fuzzy logic for image input and controller action based on the current situation. | — | More parameters could be considered for the reward function. |
| [52] | Probabilistic prediction for complex driving scenarios; deterministic and probabilistic prediction of the traffic of other vehicles is used to improve robustness. | Usage of deterministic and probabilistic prediction of other vehicles' traffic to improve robustness. | Analysis of the efficiency of the system under real-time noise is challenging; testing under different scenarios. | Robust decision making compared to the deterministic method; lower probability of collision. | MATLAB/Simulink and CarSim; real-time setup: Hyundai-Kia Motors K7, Mobileye camera system, MicroAutoBox II, Delphi radars, IBEO laser scanner. | The algorithm to be modified for suitability for real-time monitoring. | Custom dataset (collection of data using a test vehicle). | — |
| [53] | Pixel hierarchy used for the occurrence of lane markings; detection of the lane markings using a boosting algorithm; tracking of lanes using a particle filter. | Detection of the lane without prior knowledge of the road model and vehicle speed. | Vehicle inertial sensors, GPS information and a geometry model could further improve performance under different environmental conditions. | Improved performance by using support vector machines and artificial neural networks on the image. | Machine with a 4-GHz processor capable of working on approximately 240 × 320 images at 15 frames per second. | Testing the efficiency of the algorithm using the Kalman filter. | Custom data. | Calibration of the sensors needs to be maintained. |
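Rows [50] and [51] describe reinforcement-learning lane-change decision making driven by reward functions. The toy sketch below shows the tabular Q-learning update that underlies such approaches; the state/action sizes, learning rate and reward weights are hypothetical illustrations, and the cited works use richer state representations and function approximators rather than a table.

```python
import numpy as np

# Toy tabular Q-learning for lane-change decisions (keep, change left, change right).
N_STATES, N_ACTIONS = 100, 3
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma = 0.1, 0.95                 # learning rate, discount factor (assumed)

def reward(yaw_rate, lane_change_time, delay):
    """Illustrative reward mixing comfort and efficiency terms, loosely
    following the yaw-rate / lane-change-time / delay criteria in [50,51]."""
    return -0.5 * abs(yaw_rate) - 0.2 * lane_change_time - 0.3 * delay

def q_update(s, a, r, s_next):
    # Standard temporal-difference update toward the bootstrapped target.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
```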

Based on the review, some of the key observations from Tables 3–5 are summarized below:
• Frequent calibration is required for accurate decision making in a complex environment.
• Reinforcement learning with model predictive control could be a better choice to avoid false lane detection.
• Model-based approaches (robust lane detection and tracking) provide better results in different environmental conditions. Camera quality plays an important role in determining lane marking.
• The algorithm's performance depends on the type of filter used, and the Kalman filter is the most commonly used for lane tracking (a minimal tracking sketch is given after this list).
• In a vision-based system, image smoothing is the initial lane detection and tracking stage, and it plays a vital role in increasing the system's performance.
• External disturbances such as weather conditions, vision quality, shadow and blazing, and internal disturbances such as too-narrow, too-wide or unclear lane markings, drop the algorithm's performance.
• The majority of researchers (>90%) have used custom datasets for research.
• Monocular, stereo and infrared cameras have been used to capture images and videos. The algorithm's accuracy depends on the type of camera used, and a stereo camera gives better performance than a monocular camera.
• The lane markers can be occluded by a nearby vehicle while it is overtaking.
• There is an abrupt change in illumination as the vehicle gets out of a tunnel. Sudden changes in illumination affect the image quality and drop the system's performance.
• The results show that the lane detection and tracking efficiency rate under dry and light rain conditions is near 99% in most scenarios. However, the efficiency of lane marking detection is significantly affected by heavy rain conditions.
• It has been seen that the performance of the system drops due to unclear and degraded lane markings.
• IMU (inertial measurement unit) and GPS are examples of sensors that help to improve the distance-measurement performance of RADAR and LIDAR.
• One of the biggest problems with today's ADAS is that changes in environmental and weather conditions have a major effect on the system's performance.
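As noted above, the Kalman filter is the tracker most often paired with per-frame detectors. Below is a minimal sketch of such a tracker, assuming a constant-velocity model over two lane-line parameters (lateral offset and slope); the frame period and noise covariances are illustrative guesses rather than values from the reviewed papers.

```python
import numpy as np

dt = 1.0 / 30.0                               # assumed camera frame period
F = np.array([[1, 0, dt, 0],                  # state: [offset, slope,
              [0, 1, 0, dt],                  #         d_offset, d_slope]
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)     # only offset and slope are measured
Q = 1e-3 * np.eye(4)                          # process noise (assumed)
R = 5e-2 * np.eye(2)                          # measurement noise (assumed)

x = np.zeros(4)                               # initial state
P = np.eye(4)                                 # initial covariance

def kf_step(z):
    """One predict/update cycle for the per-frame detection z = [offset, slope]."""
    global x, P
    x = F @ x                                 # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                             # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ y                             # update
    P = (np.eye(4) - K @ H) @ P
    return x
```

Smoothing the detections in this way is what lets trackers bridge short dropouts, e.g., when a marking is briefly occluded by an overtaking vehicle.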

3.3. Patented Works

According to patent family size, Toyota has the greatest number of patent works (521), followed by Ford (406), General Motors (GM) (353), Honda Motor (284) and Uber (245). Six of the top ten companies are from the United States, while four are from Asia. From a patent standpoint, Europe seems to be lagging behind in the battle for ADAS, and the patents published in China and other Asian countries for lane detection and tracking are invented in universities. Only the Google and General Motors patent portfolios have a high technical relevance score among the top ten patent manufacturers. On the other hand, all portfolios have an above-average market coverage score, indicating that their manufacturers believe their inventions are valuable enough to protect globally; this highlights the significance and promise that companies perceive in autonomous driving. A detailed review of the patent works is beyond the scope of this study. However, given the commercial nature of lane detection and tracking, a sample of patented works, especially from vehicle manufacturers, that align with the three approaches (feature-based, learning-based and model-based) is presented in Table 6.
Some of the key observations from Table 6 are:

• By following the method of image- and sensor-based lane detection, separate courses are calculated for precisely two of the lane markings to be tracked, with a set of binary parameters indicating the allocation of the determined offset values to one of the two separate courses [54].
• By following the robust lane detection and tracking method, after a fixed number of computing cycles, the most probable hypothesis is calculated, for which the difference between the predicted courses of the lane markings to be tracked and the courses of the recognized lane markings is lowest [55].
• A parametric estimation method, in particular a maximum likelihood method, is used to assign the calculated offset values to each of the separate courses of the lane markings to be tracked [56].
• Only those two lane markers that refer to the left and right lane boundaries of the vehicle's own lane are applied to the tracking procedure [57].
• The positive and negative ratios of the extracted characteristics of the frame are used to assess the system's correctness. The degree of accuracy is enhanced by including the judgment in all extracted frames [58].
• At the present calculation cycle, the lane change assistance calculates a target control amount comprising a feed-forward control using a target curvature of a track for changing the host vehicle's lane [59].
• Signal-analysis details from mounted sensors are used to determine whether a collision between the host vehicle and any other vehicle is likely to occur, allowing action to be taken to avoid the accident [60].
• Two kinds of issues are often seen and corrected in dewarped perspective images: a stretching effect at the periphery region of a wide-angle image dewarped by rectilinear projection, and duplicate images of objects in the area where the left and right camera views overlap [61].
• The object identification system examines the pixels in order to identify objects that have not previously been identified in the 3D environment [62].

Table 6. Summary of patents for lane detection and tracking algorithms.

| Country | Patent No. | Assignee | Method | Key Finding | Approach | Inventor(s) |
|---|---|---|---|---|---|---|
| USA | US20170068862A1 | Aptiv Technologies Ltd. | Camera-based vision driver assistance system. | State estimation and separate progression. | Feature-based | Mirko Mueter, Kun Zhao |
| USA | US9384394B2 | Toyota Motor Corporation | Generates accurate lane estimation using course map information and LIDAR sensors. | Centre of the lane and multiple lanes. | Model-based | Avdhut Joshi and Michael James |
| USA | US20020095246A1 | Nissan Motor Co., Ltd. | The controller is designed in such a way that it detects lanes by controlling the steering angle when the vehicle moves out of the desired track. | Measures the output of the signal. | Learning-based | Hiroshi Kawazoe |
| Europe | EP1143398A3 | Panasonic Corporation | Proposed an extraction method using the Hough transform to detect the lanes on the opposite side of the road. | Determines the maximum value of the accumulators. | Feature-based | Atsushi Lisaka, Mamoru Kaneko and Nobohiko Yasui |
| China | CN105205500A | Beijing University of Post and Telecommunication | Computer-graphics and vision-based technology with multi-target filtering and sorter training. | Multi-target tracking and a cascade classifier with high detection processing speed. | Model-based | Zhitong, H. and Yuefeng, Z. |
| Japan | JP6589941B2 | Not available | Developed a steering-assist device for lane detection and tracking under periphery monitoring. | The relative position of the host vehicle and its relation with the lane is identified. | Model-based | Shota Fujii |
| USA | US10336326 | Ford Global Technologies LLC | Proposed a deep learning-based front-facing camera lane detection method. | Extracted features of lane boundaries with the help of a camera mounted at the front. | Feature-based | Alexandru Mihai, Tejaswi Koduri, Vidya Nariyambut Marali, Kyle J Carey |
| USA | US9834143B2 | GM Global Technology Operations LLC | An improved perspective view is produced via a new camera imaging surface model and other distortion-correcting techniques. | The main objective is to improve the perspective view at the front of the vehicle for lane detection and tracking. | Feature-based | Wende Zhang, Jinsong Wang, Kent S. Lybecker, Jeffrey S. Piasecki, Bakhtiar Brian Litkouhi, Ryan M. Frakes |
| USA | US20170323179A1 | Uber Technologies Inc. | Sensor-fusion data processing technique used for surrounding object detection and lane detection. | Generates 3D environmental data through sensor fusion to guide the autonomous vehicle. | Learning-based | Carlos Vallespi-Gonzalez |

4. Discussion
Based on the review of studies on lane detection and tracking in Section 3.2, it can be observed that there are limited data sets in the literature that researchers have used to test lane detection and tracking algorithms. Based on the literature review, a summary of the key data sets used in the literature or available to researchers is presented in Table 7, which shows some of their key features, strengths and weaknesses. It is expected that in future, more data sets may become available to researchers as this field continues to grow, especially with the development of fully autonomous vehicles. As per a statistical survey of research papers published between 2000 and 2020, almost 42% of researchers mainly focused on Intrusion Detection System (IDS) metrics to evaluate the performance of the algorithms. This may be because the efficiency and effectiveness of IDS are better when compared to the point clustering comparison, Gaussian distribution, spatial distribution and key points estimation methods. The verification of the performance of algorithms for a lane detection and tracking system is done against a ground-truth data set. There are four possibilities: true positive (TP), false negative (FN), false positive (FP) and true negative (TN), as shown in Table 8. There are many metrics available for the evaluation of performance, but the most common are accuracy, precision, F-score, Dice similarity coefficient (DSC) and receiver operating characteristic (ROC) curves. Table 9 provides the common metrics and the associated formulas used for the evaluation of the algorithms.

Table 7. A summary of datasets that have been used in the literature for verification of the algorithms.

| Dataset | Features | Strength | Weakness |
|---|---|---|---|
| CULane [63] | 55 h of video; 133,235 extracted frames; 88,880 training, 9675 validation and 34,680 test frames. | Unseen or occluded lane markings are annotated manually with a cubic spline. | Except for four lane markings, others are not annotated. |
| Caltech [64] | 10 h of 640 × 480, 30 Hz video of regular traffic in an urban environment; 250,000 frames; 350,000 bounding boxes annotated with occlusion and temporal correspondence. | Entire dataset annotated; testing data (set06–set10) and training data (set00–set05), each about 1 GB, are provided. | Not applicable for all types of road geometries and weather conditions. |
| Custom data (collection of data using a test vehicle) | Not applicable. | Available according to the requirements. | Time-consuming and highly expensive. |
| DIML [65] | Multimodal dataset: Sony Cyber-shot DSC-RX100 camera, 5 different photometric variation pairs. RGB-D dataset: more than 200 indoor/outdoor scenes; Kinect v2 and ZED stereo cameras provide the RGB-D frames. Lane dataset: 470 video sequences of downtown and urban roads. Emotion recognition dataset (CAER): more than 13,000 videos and 13,000 annotated videos. CoVieW18 dataset: untrimmed video samples; 90,000 YouTube video URLs. | Different scenarios have been covered, such as traffic jams, pedestrians and obstacles. | Datasets for different weather conditions and for lanes with no markings are missing. |
| KITTI [66] | Contains stereo, optical flow, visual odometry, etc.; an object detection dataset with monocular images and bounding boxes; 7481 training images and 7518 test images. | Evaluation of orientation estimation in bird's-eye view; applicable for real-time object detection and 3D tracking; evaluation metrics provided. | Only up to 15 cars and 30 pedestrians were considered while capturing images; applicable for rural and highway road datasets. |
| TuSimple [67] | Training: 3222 annotated vehicles at 20 frames per second for 1074 clips from 25 videos. Testing: 269 video clips. Supplementary data: 5066 images with the position and velocity of vehicles marked by range sensors. | Lane detection challenge, velocity estimation challenge and ground truths have been provided. | A calibration file for lane detection has not been provided. |
| UAH [68] | Raw real-time data: raw GPS and raw accelerometers. Processed data as continuous variables: pro lane detection, pro vehicle detection and pro OpenStreetMap data. Processed data as events: lane-change events and inertial events. Semantic information: semantic final and semantic online. | More than 500 min of naturalistic driving with processed semantic information provided. | Limited accessibility to the research community. |
| BDD100K [69] | 100,000 videos covering more than 1000 h; road object detection, drivable area, segmentation and full-frame semantic segmentation. | IMU data, timestamps and localization have been included in the dataset. | Data for unstructured roads has not been covered. |
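As one concrete example of working with these releases, the sketch below parses TuSimple-style ground-truth files, assuming the benchmark's published label format (one JSON object per line, with `lanes`, `h_samples` and `raw_file` fields); the field names should be verified against the release actually downloaded.

```python
import json

def load_tusimple_labels(path):
    """Read TuSimple-style ground truth: 'lanes' holds per-lane x-coordinates
    (negative where the lane is absent), 'h_samples' the shared y-coordinates,
    and 'raw_file' the frame path."""
    records = []
    with open(path) as f:
        for line in f:
            entry = json.loads(line)
            points_per_lane = [
                [(x, y) for x, y in zip(lane, entry["h_samples"]) if x >= 0]
                for lane in entry["lanes"]
            ]
            records.append({"frame": entry["raw_file"], "lanes": points_per_lane})
    return records
```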

Table 8. Performance metrics for verification of lane detection and tracking algorithms, compiled from ref. [70].

| Possibility | Condition 1 | Condition 2 |
|---|---|---|
| True positive | Ground truth exists | The algorithm detects lane markers. |
| False positive | No ground truth exists | The algorithm detects lane markers. |
| False negative | Ground truth exists in the image | The algorithm does not detect lane markers. |
| True negative | No ground truth exists in the image | The algorithm does not detect anything. |

Table 9. A summary of the equations of the metrics used for evaluation of the performance of the algorithms, compiled from refs. [71,72].

| Sr. no | Metric | Formula * |
|---|---|---|
| 1 | Accuracy (A) | A = (TP + TN) / (TP + TN + FP + FN) |
| 2 | Detection rate (DR) | DR = TP / (TP + FN) |
| 3 | False positive rate (FPR) | FPR = FP / (FP + TN) |
| 4 | False negative rate (FNR) | FNR = FN / (FN + TP) |
| 5 | True negative rate (TNR) | TNR = TN / (TN + FP) |
| 6 | Precision | Precision = TP / (TP + FP) |
| 7 | F-measure | F-measure = (2 × Precision × Recall) / (Precision + Recall) |
| 8 | Error rate | Error = (FP + FN) / (TP + TN + FP + FN) |

* Where TP = true positive, i.e., ground truth exists and the algorithm detects the lane marking; FP = false positive, i.e., the algorithm detects a marking where no ground truth exists; TN = true negative, i.e., no ground truth exists and nothing is detected; FN = false negative, i.e., the algorithm fails to detect an existing lane marking.

If the database is balanced, the accuracy rate should accurately reflect the algorithm's global output. The precision reflects the goodness of positive forecasts: the greater the precision, the lower the number of "false alarms." The recall, also called the true positive rate (TPR), is the ratio of positive instances that are correctly detected by the algorithm; therefore, the higher the recall, the higher the algorithm's quality in detecting positive instances. The F1-score is the harmonic mean of precision and recall, and since they are combined into a concise metric, it can be used for comparing algorithms. The harmonic mean is used rather than the arithmetic mean because it is more sensitive to low values. Hence, an algorithm has a satisfactory F1 score only if it has both high precision and high recall. These parameters can be estimated as unique metrics for each class or as the algorithm's overall metrics [73].
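For concreteness, the sketch below computes the headline metrics of Table 9 from frame-level counts; the counts in the usage example are made-up illustrations.

```python
def detection_metrics(tp, fp, fn, tn):
    """Accuracy, recall (detection rate), precision and F1 from raw counts,
    following the standard formulas in Table 9."""
    accuracy  = (tp + tn) / (tp + tn + fp + fn)
    recall    = tp / (tp + fn)            # detection rate / TPR
    precision = tp / (tp + fp)
    f1        = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "recall": recall,
            "precision": precision, "f1": f1}

# Hypothetical evaluation: 920 correct detections, 30 false alarms, 50 misses.
print(detection_metrics(tp=920, fp=30, fn=50, tn=0))
```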
Table 10 shows the SWOT analysis of different approaches used for lane detection and
tracking algorithms. The use of a Learning-based approach (model predictive controller) is
considered an emerging approach for lane detection and tracking because it is computa-
tionally more efficient than the other two approaches, and it provides reasonable results
in real-time scenarios. However, the risk of mismatching lanes and performance drop in
inclement weather conditions are the drawback of the learning-based approach. Feature-
based approach, while time-consuming, can provide better performance in optimization of
lane detection and tracking. However, this approach poses challenges in handling high
illumination or shadows. Image and sensor-based lane detection and tracking approaches
have been used widely in lane detection and tracking patents.

Table 10. SWOT analysis of different approaches used for lane detection and tracking algorithms.

| Methods | Strength | Weakness | Opportunities | Threats |
|---|---|---|---|---|
| Feature-based approach | Feature extraction is used to determine false lane markings. | Time-consuming. | Better performance in optimization. | Less effective for complex illumination and shadow. |
| Learning-based approach | Easy and reliable method. | Performance drops due to inclement weather. | Computationally more efficient. | Mismatching lanes. |
| Model-based approach | Camera quality improves system performance. | Expensive and time-consuming. | Robust performance for the lane detection model. | Difficult to mount a sensor fusion system for complex geometry. |

In addition, from the literature synthesis, several gaps in knowledge are identified and are presented in Table 11. The literature review shows that clothoid- and hyperbola-shaped roads are largely ignored in lane detection and tracking algorithms because of the complexity of the road structure and the unavailability of datasets. Likewise, much work has already been done on the pavement markings of structured roads compared with unstructured roads (Figure 3). Most studies focus on straight roads. It is to be noted that unstructured roads are found in residential areas, hilly areas and forest areas. Much research has previously considered the daytime, while night and rainy conditions are less studied. From the literature, it is observed that, in terms of speed-flow conditions, speed levels of 40 km/h to 80 km/h have been researched, while high speed (above 80 km/h) has received less attention. Further, occlusion due to overtaking vehicles or other objects (Figure 4) and high illumination also pose a challenge for lane detection and tracking. These issues should be addressed to move from level 3 automation (partial driving) to level 5 (fully autonomous) driving. Also, new databases for more testing of algorithms are needed, as researchers are constrained by the unavailability of datasets. There is, however, the prospect of using synthetic sensor data generated with a test vehicle or by designing driving scenarios through a driving simulator app available in commercial software. A sketch of the clothoid-style lane model mentioned above is given after this paragraph.
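For reference, the clothoid road shapes noted as under-studied are commonly approximated in lane modelling by a third-order polynomial in the distance ahead of the vehicle, y(x) ≈ y0 + θ·x + (c0/2)·x² + (c1/6)·x³, where c0 is the curvature and c1 the curvature rate. The sketch below fits this model by least squares to bird's-eye-view lane points; it is a generic illustration under that standard approximation, not the formulation of any single reviewed paper, and the sample points are hypothetical.

```python
import numpy as np

def fit_clothoid_lane(xs, ys):
    """Least-squares fit of the cubic clothoid approximation to lane points
    (xs = longitudinal distance ahead, ys = lateral offset)."""
    a3, a2, a1, a0 = np.polyfit(xs, ys, deg=3)   # highest degree first
    return {"offset": a0, "heading": a1,
            "curvature": 2.0 * a2,               # c0
            "curvature_rate": 6.0 * a3}          # c1

# Hypothetical points along a gently curving lane boundary.
xs = np.linspace(0.0, 40.0, 20)
ys = 1.8 + 0.01 * xs + 0.5e-3 * xs**2
print(fit_clothoid_lane(xs, ys))
```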

Table 11. Lane detection under different conditions to identify the gaps in knowledge.

| Sources | Straight | Clothoid | Hyperbola | Structured | Unstructured | Day | Night | Rain | Speed |
|---|---|---|---|---|---|---|---|---|---|
| [26] Borkar et al. (2009) | √ | – | – | √ | – | √ | – | – | – |
| [28] Lu et al. (2002) | √ | – | – | √ | – | √ | – | – | – |
| [29] Zhang & Shi (2009) | √ | – | – | √ | – | – | √ | – | – |
| [32] Hong et al. (2018) | √ | – | – | √ | – | √ | – | – | – |
| [33] Park, H. et al. (2018) | √ | – | – | √ | – | √ | – | – | Low (40 km/h) & high (80 km/h) |
| [34] El Hajjouji et al. (2020) | √ | – | – | √ | – | √ | √ | – | 120 km/h |
| [35] Samadzadegan et al. (2006) | – | – | √ | √ | – | √ | √ | – | – |
| [36] Cheng et al. (2010) | √ | √ | – | √ | – | √ | – | – | – |
| [40] Yeniaydin et al. (2019) | √ | – | √ | – | √ | √ | – | – | – |
| [41] Kemsaram et al. (2019) | √ | – | √ | – | √ | – | – | – | – |
| [43] Son et al. (2019) | √ | – | √ | √ | – | √ | – | – | – |
| [47] Chen et al. (2018) | √ | √ | – | √ | – | √ | – | – | – |
| [52] Suh et al. (2018) | √ | – | √ | √ | – | √ | – | – | 60–80 km/h |
| [53] Gopalan et al. (2012) | √ | – | √ | √ | √ | √ | – | – | – |
| [74] Wu et al. (2008) | √ | – | – | √ | – | √ | – | – | 40 km/h |
| [75] Liu & Li (2019) | √ | – | √ | √ | – | √ | √ | – | – |
| [76] Han et al. (2019) | √ | – | – | – | √ | √ | √ | – | 30–50 km/h |
| [77] Tominaga et al. (2019) | – | – | – | √ | – | √ | – | – | 80 km/h |
| [78] Chen Z et al. (2019) | √ | – | – | √ | – | √ | – | – | – |
| [79] Feng et al. (2008) | √ | – | √ | √ | – | √ | √ | √ | 120 km/h |

  
Figure 3. Efficiency of the unstructured road is affected by shadow, heavy rain, low or high illumi‐
Figure 3. Efficiency of the unstructured road is affected by shadow, heavy rain, low or high illumination.
  
Figure 4. Challenge in lane marking detection: a vehicle stops in or occludes the nearby lane.

Lane markings are usually yellow and white, although reflector lanes are designated with other colors, and the number of lanes and their width vary per country. The presence of shadows can cause problems with vision clarity, and surrounding cars may obstruct the lane markings. Likewise, there is a dramatic shift in lighting as the car exits a tunnel, so excessive light also has an impact on visual clarity. Under weather conditions such as rain, fog and snow, the visibility of the lane markings decreases, and visibility may be further reduced in the evening. These difficulties in lane recognition and tracking lead to a drop in the performance of lane detection and tracking algorithms. Therefore, the development of a reliable lane detection system remains a challenge.

5. Conclusions
Over the last decade, many researchers have researched ADAS. This field continues to
grow, as fully autonomous vehicles are predicted to enter the market soon [80,81]. There
are limited studies in the literature that provide the state of the art in lane detection and
tracking algorithms and their evaluation. To fill this gap, in this study, we
have provided a comprehensive review of different methods of lane detection and tracking
algorithms. In addition, we presented a summary of different data sets that researchers
have used to test the algorithms, along with the approaches for evaluating the performance
of the algorithms. Further, a summary of patented works has also been provided.
The use of a learning-based approach is gaining popularity because it is computationally
more efficient and provides reasonable results in real-time scenarios. The unavailability
of rigorous and varied datasets to test the algorithms has been a constraint for
researchers. However, using synthetic sensor data generated with a test vehicle
or driving scenarios designed in a vehicle simulator app available in commercial software
has opened the door for testing algorithms. Likewise, the following areas need more
investigations in future:
• lane detection and tracking under different complex geometric road design models,
e.g., hyperbola and clothoid
• achieving high reliability in detecting and tracking the lane under different weather
conditions and at different speeds, and
• lane detection and tracking for the unstructured roads
This study aimed to comprehensively review previous literature on lane detection and
tracking for ADAS and identify gaps in knowledge for future research. This is important
because limited studies provide state-of-the-art lane detection and tracking algorithms for
ADAS and a holistic overview of works in this area. The quantitative assessment of
mathematical models and parameters is beyond the scope of this work. It is anticipated
that this review paper will be a valuable resource for the researchers intending to develop
reliable lane detection and tracking algorithms for emerging autonomous vehicles in future.

Author Contributions: Investigation, data collection, methodology, writing—original draft prepara-


tion, S.W.; Supervision, writing—review and editing, N.S.; Supervision, writing—review and editing,
P.S. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Acknowledgments: The first author would like to acknowledge the Government of India, Ministry
of Social Justice & Empowerment, for providing full scholarship to pursue PhD study at RMIT
University. We want to thank the three anonymous reviewers whose constructive comments helped
to improve the paper further.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Nilsson, N.J. Shakey the Robot; Sri International Menlo Park: California, CA, USA, 1984.
2. Tsugawa, S.; Yatabe, T.; Hirose, T.; Matsumoto, S. An Automobile with Artificial Intelligence. In Proceedings of the 6th
International Joint Conference on Artificial Intelligence, Tokyo, Japan, 20 August 1979.
3. Blackman, C.P. The ROVA and MARDI projects. In Proceedings of the IEEE Colloquium on Advanced Robotic Initiatives in the
UK, London, UK, 17 April 1991; pp. 5/1–5/3.

4. Thorpe, C.; Herbert, M.; Kanade, T.; Shafter, S. Toward autonomous driving: The CMU Navlab. II. Architecture and systems.
IEEE Expert. 1991, 6, 44–52. [CrossRef]
5. Horowitz, R.; Varaiya, P. Control design of an automated highway system. Proc. IEEE 2000, 88, 913–925. [CrossRef]
6. Pomerleau, D.A.; Jochem, T. Rapidly Adapting Machine Vision for Automated Vehicle Steering. IEEE Expert. 1996, 11, 19–27.
[CrossRef]
7. Parent, M. Advanced Urban Transport: Automation Is on the Way. Intell. Syst. IEEE 2007, 22, 9–11. [CrossRef]
8. Lari, A.Z.; Douma, F.; Onyiah, I. Self-Driving Vehicles and Policy Implications: Current Status of Autonomous Vehicle Develop-
ment and Minnesota Policy Implications. Minn. J. Law Sci. Technol. 2015, 16, 735.
9. Urmson, C. Green Lights for Our Self-Driving Vehicle Prototypes. Available online: https://blog.google/alphabet/self-driving-
vehicle-prototypes-on-road/ (accessed on 30 September 2021).
10. Campisi, T.; Severino, A.; Al-Rashid, M.A.; Pau, G. The Development of the Smart Cities in the Connected and Autonomous
Vehicles (CAVs) Era: From Mobility Patterns to Scaling in Cities. Infrastructures 2021, 6, 100. [CrossRef]
11. Severino, A.; Curto, S.; Barberi, S.; Arena, F.; Pau, G. Autonomous Vehicles: An Analysis both on Their Distinctiveness and the
Potential Impact on Urban Transport Systems. Appl. Sci. 2021, 11, 3604. [CrossRef]
12. Aly, M. Real time Detection of Lane Markers in Urban Streets. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium,
Eindhoven, The Netherlands, 4–6 June 2008; pp. 7–12. [CrossRef]
13. Bar Hillel, A.; Lerner, R.; Levi, D.; Raz, G. Recent progress in road and lane detection: A survey. Mach. Vis. Appl. 2014, 25, 727–745.
[CrossRef]
14. Ying, Z.; Li, G.; Zang, X.; Wang, R.; Wang, W. A Novel Shadow-Free Feature Extractor for Real-Time Road Detection. In
Proceedings of the 24th ACM International Conference on Multimedia, Amsterdam, The Netherlands, 15–19 October 2016.
15. Jothilashimi, S.; Gudivada, V. Machine Learning Based Approach. 2016. Available online: https://www.sciencedirect.com/
topics/computer-science/machine-learning-based-approach (accessed on 20 August 2021).
16. Zhou, S.; Jiang, Y.; Xi, J.; Gong, J.; Xiong, G.; Chen, H. A novel lane detection based on geometrical model and Gabor filter. In
Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010; pp. 59–64.
17. Zhao, H.; Teng, Z.; Kim, H.; Kang, D. Annealed Particle Filter Algorithm Used for Lane Detection and Tracking. J. Autom. Control
Eng. 2013, 1, 31–35. [CrossRef]
18. Paula, M.B.; Jung, C.R. Real-Time Detection and Classification of Road Lane Markings. In Proceedings of the 2013 XXVI
Conference on Graphics, Patterns and Images, Arequipa, Peru, 5–8 August 2013.
19. Kukkala, V.K.; Tunnell, J.; Pasricha, S.; Bradley, T. Advanced Driver-Assistance Systems: A Path toward Autonomous Vehicles. In
IEEE Consumer Electronics Magazine; IEEE: Eindhoven, The Netherlands, 2018; Volume 7, pp. 18–25. [CrossRef]
20. Yenkanchi, S. Multi Sensor Data Fusion for Autonomous Vehicles; University of Windsor: Windsor, ON, Canada, 2016.
21. Synopsys.com. What Is ADAS (Advanced Driver Assistance Systems)?—Overview of ADAS Applications|Synopsys. 2021.
Available online: https://www.synopsys.com/automotive/what-is-adas.html (accessed on 12 October 2021).
22. McCall, J.C.; Trivedi, M.M. Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation. In
IEEE Transactions on Intelligent Transportation Systems; IEEE: Eindhoven, The Netherlands, 2006; Volume 7, pp. 20–37. [CrossRef]
23. Veit, T.; Tarel, J.; Nicolle, P.; Charbonnier, P. Evaluation of Road Marking Feature Extraction. In Proceedings of the 2008 11th
International IEEE Conference on Intelligent Transportation Systems, Beijing, China, 12–15 October 2008; pp. 174–181.
24. Kuo, C.Y.; Lu, Y.R.; Yang, S.M. On the Image Sensor Processing for Lane Detection and Control in Vehicle Lane Keeping Systems.
Sensors 2019, 19, 1665. [CrossRef]
25. Kang, C.M.; Lee, S.H.; Kee, S.C.; Chung, C.C. Kinematics-based Fault-tolerant Techniques: Lane Prediction for an Autonomous
Lane Keeping System. Int. J. Control Autom. Syst. 2018, 16, 1293–1302. [CrossRef]
26. Borkar, A.; Hayes, M.; Smith, M.T. Robust lane detection and tracking with ransac and Kalman filter. In Proceedings of the 2009
16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3261–3264. [CrossRef]
27. Sun, Y.; Li, J.; Sun, Z. Multi-Stage Hough Space Calculation for Lane Markings Detection via IMU and Vision Fusion. Sensors
2019, 19, 2305. [CrossRef]
28. Lu, J.; Ming Yang, M.; Wang, H.; Zhang, B. Vision-based real-time road detection in urban traffic, Proc. SPIE 4666. In Real-Time
Imaging VI; SPIE: Bellingham, WA, USA, 2002. [CrossRef]
29. Zhang, X.; Shi, Z. Study on lane boundary detection in night scene. In Proceedings of the 2009 IEEE Intelligent Vehicles
Symposium, Xi’an, China, 3–5 June 2009; pp. 538–541. [CrossRef]
30. Borkar, A.; Hayes, M.; Smith, M.T.; Pankanti, S. A layered approach to robust lane detection at night. In Proceedings of the 2009
IEEE Workshop on Computational Intelligence in Vehicles and Vehicular Systems, Nashville, TN, USA, 30 March–2 April 2009;
pp. 51–57. [CrossRef]
31. Priyadharshini, P.; Niketha, P.; Saantha Lakshmi, K.; Sharmila, S.; Divya, R. Advances in Vision based Lane Detection Algo-
rithm Based on Reliable Lane Markings. In Proceedings of the 2019 5th International Conference on Advanced Computing &
Communication Systems (ICACCS), Coimbatore, India, 15–16 March 2019; pp. 880–885. [CrossRef]
32. Hong, G.-S.; Kim, B.-G.; Dorra, D.P.; Roy, P.P. A Survey of Real-time Road Detection Techniques Using Visual Color Sensor. J.
Multimed. Inf. Syst. 2018, 5, 9–14. [CrossRef]
33. Park, H. Implementation of Lane Detection Algorithm for Self-driving Vehicles Using Tensor Flow. In International Conference on
Innovative Mobile and Internet Services in Ubiquitous Computing; Springer: Cham, Switzerland, 2018; pp. 438–447.

34. El Hajjouji, I.; Mars, S.; Asrih, Z.; Mourabit, A.E. A novel FPGA implementation of Hough Transform for straight lane detection.
Eng. Sci. Technol. Int. J. 2020, 23, 274–280. [CrossRef]
35. Samadzadegan, F.; Sarafraz, A.; Tabibi, M. Automatic Lane Detection in Image Sequences for Vision-based Navigation Purposes.
ISPRS Image Eng. Vis. Metrol. 2006. Available online: https://www.semanticscholar.org/paper/Automatic-Lane-Detection-in-
Image-Sequences-for-Samadzadegan-Sarafraz/55f0683190eb6cb21bf52c5f64b443c6437b38ea (accessed on 12 August 2021).
36. Cheng, H.-Y.; Yu, C.-C.; Tseng, C.-C.; Fan, K.-C.; Hwang, J.-N.; Jeng, B.-S. Environment classification and hierarchical lane
detection for structured and unstructured roads. Comput. Vis. IET 2010, 4, 37–49. [CrossRef]
37. Han, J.; Kim, D.; Lee, M.; Sunwoo, M. Road boundary detection and tracking for structured and unstructured roads using a 2D
lidar sensor. Int. J. Automot. Technol. 2014, 15, 611–623. [CrossRef]
38. Le, M.C.; Phung, S.L.; Bouzerdoum, A. Lane Detection in Unstructured Environments for Autonomous Navigation Systems.
In Asian Conference on Computer Vision; Cremers, D., Reid, I., Saito, H., Yang, M.H., Eds.; Springer: Cham, Switzerland, 2015.
[CrossRef]
39. Wang, J.; Ma, H.; Zhang, X.; Liu, X. Detection of Lane Lines on Both Sides of Road Based on Monocular Camera. In Proceedings
of the 2018 IEEE International Conference on Mechatronics and Automation (ICMA), Changchun, China, 5–8 August 2018;
pp. 1134–1139.
40. YenIaydin, Y.; Schmidt, K.W. Sensor Fusion of a Camera and 2D LIDAR for Lane Detection. In Proceedings of the 2019 27th Signal
Processing and Communications Applications Conference (SIU), Sivas, Turkey, 24–26 April 2019; pp. 1–4.
41. Kemsaram, N.; Das, A.; Dubbelman, G. An Integrated Framework for Autonomous Driving: Object Detection, Lane Detection,
and Free Space Detection. In Proceedings of the 2019 Third World Conference on Smart Trends in Systems Security and
Sustainablity (WorldS4), London, UK, 30–31 July 2019; pp. 260–265. [CrossRef]
42. Lee, C.; Moon, J.-H. Robust Lane Detection and Tracking for Real-Time Applications. IEEE Trans. Intell. Transp. Syst. 2018, 19, 1–6.
[CrossRef]
43. Son, Y.; Lee, E.S.; Kum, D. Robust multi-lane detection and tracking using adaptive threshold and lane classification. Mach. Vis.
Appl. 2018, 30, 111–124. [CrossRef]
44. Li, Q.; Zhou, J.; Li, B.; Guo, Y.; Xiao, J. Robust Lane-Detection Method for Low-Speed Environments. Sensors 2018, 18, 4274.
[CrossRef]
45. Son, J.; Yoo, H.; Kim, S.; Sohn, K. Real-time illumination invariant lane detection for lane departure warning system. Expert Syst.
Appl. 2014, 42. [CrossRef]
46. Chae, H.; Jeong, Y.; Kim, S.; Lee, H.; Park, J.; Yi, K. Design and Vehicle Implementation of Autonomous Lane Change Algorithm
based on Probabilistic Prediction. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems
(ITSC), Maui, HI, USA, 4–7 November 2018; pp. 2845–2852. [CrossRef]
47. Chen, P.R.; Lo, S.Y.; Hang, H.M.; Chan, S.W.; Lin, J.J. Efficient Road Lane Marking Detection with Deep Learning. In Proceedings
of the 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP), Shanghai, China, 19–21 November 2018; pp.
1–5.
48. Lu, Z.; Xu, Y.; Shan, X. A lane detection method based on the ridge detector and regional G-RANSAC. Sensors 2019, 19, 4028.
[CrossRef]
49. Bian, Y.; Ding, J.; Hu, M.; Xu, Q.; Wang, J.; Li, K. An Advanced Lane-Keeping Assistance System with Switchable Assistance
Modes. IEEE Trans. Intell. Transp. Syst. 2019, 21, 385–396. [CrossRef]
50. Wang, G.; Hu, J.; Li, Z.; Li, L. Cooperative Lane Changing via Deep Reinforcement Learning. arXiv 2019, arXiv:1906.08662.
51. Wang, P.; Chan, C.Y.; de La Fortelle, A. A reinforcement learning based approach for automated lane change maneuvers. In
Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1379–1384.
52. Suh, J.; Chae, H.; Yi, K. Stochastic model-predictive control for lane change decision of automated driving vehicles. IEEE Trans.
Veh. Technol. 2018, 67, 4771–4782. [CrossRef]
53. Gopalan, R.; Hong, T.; Shneier, M.; Chellappa, R. A learning approach towards detection and tracking of lane markings. IEEE
Trans. Intell. Transp. Syst. 2012, 13, 1088–1098. [CrossRef]
54. Mueter, M.; Zhao, K. Method for Lane Detection. US20170068862A1. 2015. Available online: https://patents.google.com/patent/
US20170068862A1/en (accessed on 12 August 2021).
55. Joshi, A. Method for Generating Accurate Lane Level Maps. US9384394B2. 2013. Available online: https://patents.google.com/
patent/US9384394B2/en (accessed on 12 August 2021).
56. Kawazoe, H. Lane Tracking Control System for Vehicle. US20020095246A1. 2001. Available online: https://patents.google.com/
patent/US20020095246 (accessed on 12 August 2021).
57. Lisaka, A. Lane Detection Sensor and Navigation System Employing the Same. EP1143398A3. 1996. Available online: https:
//patents.google.com/patent/EP1143398A3/en (accessed on 12 August 2021).
58. Zhitong, H.; Yuefeng, Z. Vehicle Detecting Method Based on Multi-Target Tracking and Cascade Classifier Combination.
CN105205500A. 2015. Available online: https://patents.google.com/patent/CN105205500A/en (accessed on 12 August 2021).
59. Fujii, S. Steering Support Device. JP6589941B2, 2019. Patentimages.storage.googleapis.com. 2021. Available online: https:
//patentimages.storage.googleapis.com/0b/d0/ff/978af5acfb7b35/JP6589941B2.pdf (accessed on 12 August 2021).
60. Gurghian, A.; Koduri, T.; Nariyambut Murali, V.; Carey, K. Lane Detection Systems and Methods. US10336326B2. 2016. Available
online: https://patents.google.com/patent/US10336326B2/en (accessed on 12 August 2021).

61. Zhang, W.; Wang, J.; Lybecker, K.; Piasecki, J.; Brian Litkouhi, B.; Frakes, R. Enhanced Perspective View Generation in a Front
Curb Viewing System Abstract. US9834143B2. 2014. Available online: https://patents.google.com/patent/US9834143B2/en
(accessed on 12 August 2021).
62. Vallespi-Gonzalez, C. Object Detection for an Autonomous Vehicle. US20170323179A1. 2016. Available online: https://patents.
google.com/patent/US20170323179A1/en (accessed on 12 August 2021).
63. Cu Lane Dataset. Available online: https://xingangpan.github.io/projects/CULane.html (accessed on 13 April 2020).
64. Caltech Pedestrian Detection Benchmark. Available online: http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/
(accessed on 13 April 2020).
65. Lee, E. Digital Image Media Lab. Diml.yonsei.ac.kr. 2020. Available online: http://diml.yonsei.ac.kr/dataset/ (accessed on 13
April 2020).
66. Cvlibs.net. The KITTI Vision Benchmark Suite. Available online: http://www.cvlibs.net/datasets/kitti/ (accessed on 27 April
2020).
67. Tusimple/Tusimple-Benchmark. Available online: https://github.com/TuSimple/tusimple-benchmark/tree/master/doc/
velocity_estimation (accessed on 15 April 2020).
68. Romera, E.; Luis, M.; Arroyo, L. Need Data for Driver Behavior Analysis? Presenting the Public UAH-Drive Set. In Proceedings
of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems, Rio de Janeiro, Brazil, 1–4 November 2016.
69. BDD100K Dataset. Available online: https://mc.ai/bdd100k-dataset/ (accessed on 2 April 2020).
70. Kumar, A.M.; Simon, P. Review of Lane Detection and Tracking Algorithms in Advanced Driver Assistance System. Int. J. Comput.
Sci. Inf. Technol. 2015, 7, 65–78. [CrossRef]
71. Hamed, T.; Kremer, S. Computer and Information Security Handbook, 3rd ed.; Elesevier: Amsterdam, The Netherlands, 2017; p. 114.
72. Precision and Recall. Available online: https://en.wikipedia.org/wiki/Precision_and_recall (accessed on 13 January 2021).
73. Fiorentini, N.; Losa, M. Long-Term-Based Road Blackspot Screening Procedures by Machine Learning Algorithms. Sustainability
2020, 12, 5972. [CrossRef]
74. Wu, S.J.; Chiang, H.H.; Perng, J.W.; Chen, C.J.; Wu, B.F.; Lee, T.T. The heterogeneous systems integration design and implementa-
tion for lane keeping on a vehicle. IEEE Trans. Intell. Transp. Syst. 2008, 9, 246–263. [CrossRef]
75. Liu, H.; Li, X. Sharp Curve Lane Detection for Autonomous Driving. Comput. Sci. Eng. 2019, 21, 80–95. [CrossRef]
76. Han, J.; Yang, Z.; Hu, G.; Zhang, T.; Song, J. Accurate and robust vanishing point detection method in unstructured road scenes. J.
Intell. Robot. Syst. 2019, 94, 143–158. [CrossRef]
77. Tominaga, K.; Takeuchi, Y.; Tomoki, U.; Kameoka, S.; Kitano, H.; Quirynen, R.; Berntorp, K.; Di Cairano, S. GNSS Based Lane
Keeping Assist System via Model Predictive Control. 2019. Available online: https://doi.org/10.4271/2019-01-0685 (accessed on
9 September 2021).
78. Chen, Z.; Liu, Q.; Lian, C. PointLaneNet: Efficient end-to-end CNNs for Accurate Real-Time Lane Detection. In Proceedings of
the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 2563–2568. [CrossRef]
79. Feng, Y.; Rong-ben, W.; Rong-hui, Z. Research on Road Recognition Algorithm Based on Structure Environment for ITS. In
Proceedings of the 2008 ISECS International Colloquium on Computing, Communication, Control, and Management, Guangzhou,
China, 3–4 August 2008; pp. 84–87. [CrossRef]
80. Nieuwenhuijsen, J.; de Almeida Correia, G.H.; Milakis, D.; van Arem, B.; van Daalen, E. Towards a quantitative method to
analyze the long-term innovation diffusion of automated vehicles technology using system dynamics. Transp. Res. Part C Emerg.
Technol. 2018, 86, 300–327. [CrossRef]
81. Stasinopoulos, P.; Shiwakoti, N.; Beining, M. Use-stage life cycle greenhouse gas emissions of the transition to an autonomous
vehicle fleet: A System Dynamics approach. J. Clean. Prod. 2021, 278, 123447. [CrossRef]
