Article
Review on Lane Detection and Tracking Algorithms of
Advanced Driver Assistance System
Swapnil Waykole, Nirajan Shiwakoti * and Peter Stasinopoulos
School of Engineering, RMIT University, Melbourne, VIC 3000, Australia; [email protected] (S.W.);
[email protected] (P.S.)
* Correspondence: [email protected]
Abstract: Autonomous vehicles and advanced driver assistance systems are predicted to provide higher safety and reduce fuel and energy consumption and road traffic emissions. Lane detection and tracking are key features of an advanced driver assistance system. Lane detection is the process of detecting lane markings, typically painted white or yellow lines, on the road. Lane tracking is the process of assisting the vehicle to remain in the desired path, and it controls the motion model by using previously detected lane markers. There are limited studies in the literature that provide state-of-the-art findings in this area. This study reviews previous studies on lane detection and tracking algorithms by performing a comparative qualitative analysis of algorithms to identify gaps in knowledge. It also summarizes some of the key data sets used for testing algorithms and the metrics used to evaluate the algorithms. It is found that complex road geometries such as clothoid roads are less investigated, with many studies focused on straight roads. The complexity of lane detection and tracking is compounded by challenging weather conditions, vision (camera) quality, unclear lane markings and unpaved roads. Further, occlusion due to overtaking vehicles, high speeds and high illumination effects also poses a challenge. The majority of the studies have used custom data sets for model testing.
As this field continues to grow, especially with the development of fully autonomous vehicles in the near future, it is expected that more reliable and robust lane detection and tracking algorithms will be developed and tested with real-time data sets.

Keywords: lane detection; lane tracking system; sensors; advanced driver assistance system (ADAS); lane departure warning system
autonomous cars. Following that, in 2006, the DARPA Urban Challenge was conducted in a controlled situation with a variety of autonomous and human-operated vehicles. Since then, many manufacturers, including Audi, BMW, Bosch, Ford, GM, Lexus, Mercedes, Nissan, Tesla, Volkswagen, Volvo and Google, have launched self-driving vehicle projects in collaboration with universities [8]. Google's self-driving car project has travelled 500 thousand kilometres in testing and has begun building prototypes of its own cars [9]. A completely autonomous vehicle would be expected to drive to a chosen location without any expectation of shared control with the driver, including safety-critical tasks.
The performance of lane detection and tracking depends on well-developed roads and their lane markings, so smart cities are also a prominent factor in autonomous vehicle research. The idea of a smart city is often linked with an eco-city or a sustainable city, both of which seek to enhance the quality of municipal services while lowering their costs. Smart cities' primary goal is to balance technological innovation with the economic, social, and environmental problems that tomorrow's cities face. Greater closeness between government and citizens is required in smart cities that embrace the circular economy's concepts [10]. The way materials and goods flow around people and their demands will alter, as will the structure of cities. Several car manufacturers, such as Tesla and Audi, have already begun marketing vehicles with autonomous capabilities for private use. Soon, society will be influenced by the spread of autonomous vehicles into urban transport systems [11]. The development of smart cities with the introduction of connected and autonomous vehicles could potentially transform cities and guide long-term urban planning [10].
Autonomous vehicles and Advanced Driver Assistance Systems (ADAS) are predicted to provide a higher degree of safety and reduce fuel and energy consumption and road traffic emissions. ADAS is implemented for safe and efficient driving and offers many driver assistance features, such as forward collision warning and safe lane change assistance [12]. Research shows that most accidents occur because of driver errors, and ADAS can reduce accidents and the workload of the driver. If there is a likelihood of an accident, ADAS can take the necessary action to avoid it [13]. Lane departure warning (LDW), which utilizes lane detection and tracking algorithms, is an essential feature of ADAS. The LDW warns the driver when a vehicle crosses lane lines unintentionally and controls the vehicle by bringing it back into the desired safe path. Three types of approaches for lane detection are usually discussed in the existing literature: the learning-based approach, the features-based approach, and the model-based approach [13–18] (a detailed analysis is presented in Section 3.2). Many challenges and issues have been highlighted in the literature regarding LDW systems, such as changing visibility conditions, variation in images, and lane appearance diversity [17]. Since different countries use different lane markers, developing lane detection and tracking algorithms that generalize across regions remains a challenge.
Figure 1. Flowchart showing the methodology adopted for the review.
3. Literature Review
A comparison of the different sensors used in ADAS is presented first. This is followed by an in-depth review of algorithms used for lane detection and tracking, including the patented works.
Figure 2. Sensors fusion to guide autonomous vehicle, adapted and reprinted from ref. [21].
3.2.1. Features-Based Approach (Image and Sensor-Based Lane Detection and Tracking)
Image and sensor-based lane detection and tracking decision-making processes depend on the sensors attached to the vehicle and the camera output. In this approach, the image frames are pre-processed, and a lane detection algorithm is applied to determine lane tracking. The sensor values are then used to decide on the path to be followed based on the detected lane markings [22,23].
Kuo et al. [24] implemented a vision-based lane-keeping system. The proposed system obtains the vehicle position following the lane and controls the vehicle to keep it on the desired path. The steps involved in the lane-keeping system are inverse perspective mapping, detection of lane scope features and reconstruction of the lane markings. The main drawback of the system is that performance is reduced when the vehicle is driving in a tunnel.
Kang et al. [25] proposed a kinematics-based fault-tolerant mechanism to detect the lane even if the camera cannot provide the road image due to malfunction or environmental constraints. In the absence of camera input, the lane is predicted using a kinematic model that takes parameters such as the length and speed of the vehicle. The camera input is given as a clothoid cubic polynomial curve road model; when the camera input is lost, the lane coefficients of the clothoid model are no longer available. A lane restoration scheme is used to overcome this loss, based on a multi-rate state estimator obtained from the kinematic lateral motion model in the clothoid function. The predicted lane is based on the past curvature rate and road curvature. The results show that the proposed method can maintain the lane for 3 s without camera input. The developed algorithm was simulated using CarSim and Simulink, and it was tested in a test vehicle, a HYUNDAI Motors Tucson equipped with an AutoBox from dSPACE.
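For context, road models of this kind typically approximate the lane boundary ahead of the vehicle with a third-order polynomial derived from the clothoid; a generic form (the exact parameterization in [25] may differ) is

    y(x) = y_0 + \tan(\theta)\,x + \frac{C_0}{2}\,x^2 + \frac{C_1}{6}\,x^3

where y_0 is the lateral offset, \theta the relative heading angle, C_0 the curvature and C_1 the curvature rate. When the camera input is lost, it is the updates to these coefficients that stop, which is the loss the multi-rate state estimator is designed to compensate for.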
Borkar et al. [26] proposed a lane detection and tracking method using inverse perspective mapping (IPM) to create a bird's-eye view of the road, a Hough transform for detecting candidate lanes and a Kalman filter to track the lane. The road image is converted to grayscale, followed by temporal blurring. Applying IPM yields a bird's-eye view of the image. The lanes are detected by identifying pairs of parallel lines separated by a distance. The IPM images are converted to binary, and a Hough transform is performed on the binary image, which is then divided into two halves. To determine the center of the line, a one-dimensional matched filter is applied to each sample, and the pixel with a large correlation that exceeds the threshold is selected as the center of the lane. The Kalman filter is used to track the lane, taking the lane orientation and the difference between the current and previous frames. A Firewire camera is used to capture the image of the road. The proposed algorithm provides better accuracy on isolated highways and metro highways, and the accuracy is around 86% on city roads. The improved performance is due to the use of the Kalman filter to track the lane.
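Two of these building blocks can be sketched compactly. The snippet below is a minimal illustration using OpenCV, not a reproduction of [26]: it shows the perspective warp that produces the bird's-eye view, and a scalar Kalman predict-correct step that smooths a detected lane offset across frames. The calibration points and noise values are assumptions.

    import cv2
    import numpy as np

    def birds_eye_view(frame, src_pts, dst_pts, size):
        # Inverse perspective mapping: warp the road plane to a top-down view.
        H = cv2.getPerspectiveTransform(src_pts, dst_pts)
        return cv2.warpPerspective(frame, H, size)

    def kalman_update(x, P, z, q=1e-3, r=4.0):
        # Scalar predict-correct step for a lane offset estimate (pixels).
        # x, P: previous estimate and its variance; z: this frame's measurement;
        # q, r: process and measurement noise (illustrative values).
        P = P + q                # predict: uncertainty grows between frames
        K = P / (P + r)          # Kalman gain
        x = x + K * (z - x)      # correct with the new measurement
        P = (1.0 - K) * P
        return x, P

    # Illustrative calibration: four road-plane points in the camera image (src)
    # mapped to a rectangle in the bird's-eye image (dst).
    src = np.float32([[220, 300], [420, 300], [600, 470], [40, 470]])
    dst = np.float32([[100, 0], [540, 0], [540, 480], [100, 480]])

In [26] the Kalman state also carries the lane orientation; the scalar version above only illustrates the predict-correct cycle that makes the tracking robust to per-frame detection noise.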
Sun et al. [27] proposed a lane detection mechanism that considers multiple frames, in contrast with single-frame methods, together with an inertial measurement unit (IMU) and a neural network classifier. The Hough transform is employed to extract line segments from lane markings, and the Hough space is used to store the line segments with an associated probability value. The initially assigned probability value changes due to error and vehicle movement. A Kalman filter is applied to smooth the line segments in Hough space, and the IMU values are used to align the previous line segments in the Hough space. Lane detection is determined by considering the line segments with a high probability value, and the confidence of the line segments is determined using a convolutional neural network. Analysis of the method using the Caltech dataset provides accuracy in the range of 95% to 97%. Lane detection under different environmental conditions, such as sunlight and rain, and with high levels of sunlight and rainfall, shows performance in the range of 72% to 87%. The system is implemented using an NVIDIA GTX1050ti GPU, an OV10650 camera, and an Epson G320 IMU.
Lu et al. [28] proposed a lane detection algorithm for urban traffic scenarios in which the road is well-constructed, flat and of equal width. The road model is constructed using feature line pairs (FLPs); the FLPs are detected using a Kalman filter, and a regression diagnostic technique determines the road model from the FLPs. The results show that the time taken to detect the road parameters is 11 ms. The proposed method is implemented using C++ on a 1.33 GHz AMD processor-based personal computer with a single camera and a Matrox Meteor RGB/PPB digitizer, and deployed in THMR-V (Tsinghua Mobile Robot V).
Zhang and Shi [29] proposed a lane detection method for detecting lanes at night. The Sobel and Canny operators detect the edges of the lanes, and gradients exceeding a certain threshold are labelled as edge points. Histogram regions with higher brightness are labelled as lane boundaries, and low-valued regions as road surface. The accuracy of the proposed method is high even in the presence of noise from car head and rear lights and road contour signs.
Borkar et al. [30] proposed a layered approach to detect lanes at night. A region of interest is specified in the captured image of the road, and the image is converted to grayscale for further processing. Temporal blurring is applied to obtain the continuous lanes of the long line. Depending on the characteristics of the neighboring pixels, adaptive thresholding is used to extract candidate lane objects. The image is split into left and right halves, and a Hough transform is performed on each half to determine the straight lines. The final process deals with fitting all the straight lines. A Firewire S400 (400 Mbps) color camera in VGA resolution (640 × 480) at 30 fps is used to capture the video, which is fed to MATLAB, and lanes are detected in an offline manner. The performance of the proposed method is good in isolated highway and metro highway scenarios. With moderate traffic, the accuracy of detecting the lanes is reduced to 80 percent.
Priyadarshini et al. [31] proposed a lane detection system that detects the lane during the daytime. The captured video is converted to grayscale images, and a Gaussian filter is applied to remove noise. The Canny edge detection algorithm is used to detect the edges, and a Hough transform is applied to identify the length of the lane. The proposed method is simulated using a Raspberry Pi-based robot with a camera and ultrasonic sensors to determine the distance between neighbouring vehicles.
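This classic pipeline can be sketched compactly with OpenCV; the thresholds and kernel size below are illustrative assumptions rather than the values used in [31].

    import cv2
    import numpy as np

    def detect_lane_segments(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # grayscale conversion
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # Gaussian noise removal
        edges = cv2.Canny(blurred, 50, 150)              # gradient-based edge map
        # Probabilistic Hough transform: finite segments as (x1, y1, x2, y2).
        return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                               minLineLength=40, maxLineGap=25)

Each returned segment is a pair of endpoints, from which the length and orientation of candidate lane markings can be measured.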
The survey by Hong et al. [32] discussed video processing techniques to determine lanes under illumination changes in the region of interest for straight roads. The survey highlights the methodologies involved, such as choosing the proper color space and determining the region of interest. Once the intended image is captured, a color segmentation operation is performed using region splitting and clustering schemes. This is followed by applying a merging algorithm to suppress the noise in the image.
A color-based lane detection and representative line extraction algorithm is proposed by Park et al. [33]. The captured image in RGB format is converted to grayscale, followed by binary image conversion; the purpose of the binary conversion is to remove the shadows in the captured image. The lanes in the image are detected using the Canny algorithm based on color features. The direction and intensity are determined after removing noise with a Gaussian filter, and the images are smoothed by applying a median filter. The lanes in the image are considered the region of interest, and a Hough transform is applied to confirm the accuracy of the lanes in the region of interest. The experiment was performed during the daytime, and the results show that the lane detection rate is more than 93%.
El Hajjouji et al. [34] proposed a hardware architecture for detecting straight lane lines using the Hough transform. The CORDIC (Coordinate Rotation Digital Computer) algorithm calculates the gradient magnitude and phase from the captured image; the output of the CORDIC block is the norm and the angle with respect to the x-axis of the image. The norm and angles are compared with a threshold obtained from the region of interest. The Hough transform is applied to the output of the comparator module, and the relation between the Hough space and the angle is determined. Noise is removed by the Hough transform voting procedure. Finally, the output is obtained as the slope of the straight line. The algorithm is implemented on the Virtex-5 ML505 platform and was tested on a variety of images with varying illumination and different road conditions, such as urban streets, highways, occlusion, poor line paintings, and day and night scenarios. The algorithm provides a detection rate of 92%.
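The norm-and-angle computation performed by the CORDIC block can be illustrated with a floating-point software sketch of CORDIC's vectoring mode; the hardware in [34] operates in fixed point, and the snippet assumes the input vector lies in the right half-plane (x > 0).

    import math

    def cordic_vectoring(x, y, n_iter=16):
        # Rotate (x, y) onto the x-axis in n_iter micro-rotations of tan = 2^-i,
        # accumulating the applied angle; assumes x > 0 (right half-plane).
        angle = 0.0
        for i in range(n_iter):
            d = 1.0 if y > 0 else -1.0      # always rotate toward the x-axis
            x, y = x + d * y * 2.0 ** -i, y - d * x * 2.0 ** -i
            angle += d * math.atan(2.0 ** -i)
        # Each micro-rotation scales the vector by sqrt(1 + 2^-2i); divide it out.
        gain = math.prod(math.sqrt(1 + 2.0 ** (-2 * i)) for i in range(n_iter))
        return x / gain, angle               # (norm, phase in radians)

Because each iteration uses only shifts and additions plus a constant gain correction, the scheme maps naturally onto FPGA fabric such as the Virtex-5 platform used here.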
Samadzadegan et al. [35] proposed a lane detection methodology based on a circular arc or parabolic geometric method. The RGB colour image is converted to an intensity image that contains a specific range of values. A three-layer pyramid image is constructed using the bi-cubic interpolation method. Among the three layers of the region of interest, the first layer's pixels undergo a randomized Hough transformation to determine curvature and orientation features, followed by genetic algorithm optimisation; the process is then repeated for the remaining two layers. The outcomes obtained in the lower layers are the features of the lane and are used to determine the lanes in the region of interest. The results show that there is a performance drop in lane detection when entering a tunnel and under occlusion of lane markings due to the shadow of another vehicle.
Cheng et al. [36] proposed a hierarchical lane detection system to detect lanes on structured and unstructured roads. The system classifies the environment into structured and unstructured based on feature extraction, which depends on the color of the lane marking. The connected component labelling method is applied to determine the feature objects. During the training phase, supervised learning is performed, with objects manually classified as left lane, right lane and no lane marking. The image is classified as structured or unstructured based on the vote value associated with the weights. Lanes on structured roads are detected by eliminating the moving vehicles in the lane image, followed by lane recognition considering the angle of inclination and starting points of the lane markings. A lane coherence verification module compares the lane width of the current frame with that of the previous frame to determine the lanes. For unstructured roads, the following steps are performed: mean shift segmentation, which determines the road surface by comparing it with the surroundings in terms of variation in colors and texture; and region merging and boundary smoothing, which prunes unnecessary boundary lines and neglects regions smaller than a threshold. The boundary is selected based on the posterior probability of each set of candidates. The simulation results show that around 0.11 s is needed to identify structured or unstructured roads. The system achieves an accuracy of 97% in lane detection.
Han et al. [37] proposed a LIDAR sensor-based road boundary detection and tracking method for both structured and unstructured roads. The LIDAR is used to obtain polar coordinates, and line segments are obtained from the height and pitch of the LIDAR. Information such as roadsides, curbs, sidewalks and buildings is obtained from the line segments. The road slope and width are obtained by merging two line segments. The road is tracked using a nearest neighbor filter to estimate the state of the target. The algorithm was tested in a real vehicle equipped with LIDAR, GPS and an IMU. The road boundary detection accuracy is 95% for structured and 92% for unstructured roads.
Le et al. [38] proposed a method to detect pedestrian lanes with no lane markings under different illumination conditions. The first stage of the proposed system is vanishing point estimation, which works on votes of local orientations from colored edge pixels; the point receiving the most votes is selected as the vanishing point. The next stage is the determination of a sample region of the lane from the vanishing point. To achieve higher robustness to different illuminations, an illumination-invariant color space is used. Finally, the lanes are detected using the appearance and shape information from the input image. A greedy algorithm is applied to determine the connectivity between the lanes in each iteration over the input image. The proposed model was tested on input images of both indoor and outdoor environments. The results show that the lane detection accuracy is 95%.
Wang et al. [39] proposed a lane detection system for straight and curved road scenarios. A region of interest extending 60 m ahead is defined in the captured image and divided into a straight region and a curve region: the near-field region is approximated as a straight line, and the far-field region is approximated as a curve. An improved Hough transform is applied to detect the straight line, and the curve in the far-field region is determined using the least-squares curve fitting method. The WAT902H2 camera model is used to capture the image of the road. The results show that the time taken to determine the straight and curved lane is 60–80 ms, compared to 70–100 ms in existing works, and the accuracy is around 92–93%. The error rate in bends to the left or right is from −0.85% to 5.20% for different angles.
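The least-squares step can be illustrated with NumPy's polynomial fit; the quadratic order and the sample points below are assumptions for illustration, not the exact formulation of [39].

    import numpy as np

    def fit_far_field_curve(xs, ys, order=2):
        # Least-squares polynomial fit x = f(y) of candidate lane pixels,
        # with y the longitudinal (row) coordinate in the image.
        return np.poly1d(np.polyfit(ys, xs, order))

    # Illustrative points on a gently curving lane boundary.
    ys = np.array([470.0, 420.0, 370.0, 320.0, 270.0])
    xs = np.array([310.0, 305.0, 297.0, 286.0, 272.0])
    curve = fit_far_field_curve(xs, ys)
    x_at_lookahead = curve(300.0)   # lateral position at row y = 300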
Yeniaydin [40] proposed a lane detection algorithm based on camera and 2D LIDAR input data. The camera provides a bird's-eye view of the road, and the LIDAR detects the location of objects. The proposed method consists of the steps mentioned below:
model. In the lane extraction process, the lane width is chosen according to the standards followed in the country. The gradient of each pixel is used to estimate the edge points of the lane marking.
Son et al. [45] proposed a method that uses the illumination property of lanes under different conditions, as it is a challenge to detect the lane and keep the vehicle on track under varying conditions. The methodology involves determining the vanishing point, for which the bottom half of the image is analyzed using a Canny edge detector and a Hough transform. The second step identifies white or yellow lanes based on their illumination properties, and the white and yellow lanes are used to obtain a binary image of the lane. The lane segments are labelled, and their angles and y-axis intercepts are computed; segments that match are grouped to form long lanes.
Chae et al. [46] proposed an autonomous lane changing system consisting of three modules: perception, motion planning, and control. The surrounding vehicles are detected using LIDAR sensor input. In motion planning, the vehicle determines the mode, such as lane-keeping or lane change, followed by the desired motion, which is planned considering the safety of surrounding vehicles. A linear quadratic regulator (LQR)-based model predictive control is used for longitudinal acceleration and for deciding the steering angle, and stochastic model predictive control is used for lateral acceleration.
Chen et al. [47] proposed a deep convolutional neural network to detect lane markings. The modules involved in the lane detection process are lane marking generation, grouping, and lane model fitting. The lane grouping process forms clusters of neighbouring pixels belonging to the same lane, represented by a single label, and connects the labels into a so-called super marking. The subsequent lane model fitting step uses a 3rd-order polynomial to represent straight and curved lanes. The simulation is performed on the CamVid dataset. The setup requires high-end systems for training, and the algorithm is evaluated for a minimal real-time situation.

In another study, the authors proposed a Global Navigation Satellite System (GNSS)-based lane-keeping assistance system, which calculates the target steering angle using a model predictive controller. The advantage of the approach is that the lane can be estimated from GNSS when it is not visible due to environmental constraints. The steering angle and acceleration are modelled using a first-order lag system, and model predictive control is used to control the lateral movement of the vehicle. The proposed system was simulated, and prototype testing was conducted in a real vehicle, an OUTLANDER PHEV (Mitsubishi Motors Corporation). The results show that the lane is followed with a minimal lateral error of about 0.19 m. The drawback of the approach is that the time delay of GNSS has an impact on oscillation in the steering; hence, the GNSS time delay should be kept minimal compared to the steering time delay.
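As an aside, a first-order lag of the kind used here for the steering and acceleration dynamics discretizes to a one-line update; the time constant and sample period below are assumed placeholders, not values from the study.

    def first_order_lag(x, u, tau=0.3, dt=0.01):
        # Discrete first-order lag: state x approaches command u with
        # time constant tau (s); dt is the sample period (s).
        return x + (dt / tau) * (u - x)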
Lu et al. [48] proposed a lane detection approach using Gaussian distribution random sample consensus (G-RANSAC). The process involves converting the image to a bird's-eye view to observe all the lane characteristics. The next step uses a ridge detector to extract the features of lane points and removes noise points using an adaptive neural network. The ridge features are extracted from the gray images, which provides better results in the presence of vehicle shadows and minimal illumination in the environment. Finally, the lanes are detected using the RANSAC approach, which considers the confidence level of ridge points in separating lanes from noise. The proposed algorithm was tested under four different illumination conditions: normal illumination and good pavement; intense illumination and shadow interruption; normal illumination and sign-on-the-ground interruption; and poor illumination and vehicle interference. The algorithm achieved 99.02%, 96.92%, 96.65% and 91.61% true-positive rates, respectively.
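A bare-bones RANSAC line fit over extracted ridge points is sketched below; the Gaussian confidence weighting that distinguishes G-RANSAC in [48] is omitted, and the iteration count and inlier tolerance are illustrative.

    import numpy as np

    def ransac_line(points, n_iter=200, tol=2.0, seed=0):
        # points: (N, 2) array of candidate lane (ridge) pixels.
        rng = np.random.default_rng(seed)
        best_line, best_inliers = None, np.empty((0, 2))
        for _ in range(n_iter):
            i, j = rng.choice(len(points), size=2, replace=False)
            p1, p2 = points[i], points[j]
            d = p2 - p1
            norm = np.hypot(d[0], d[1])
            if norm < 1e-9:
                continue
            # Perpendicular distance of every point to the candidate line.
            dist = np.abs(d[0] * (points[:, 1] - p1[1])
                          - d[1] * (points[:, 0] - p1[0])) / norm
            inliers = points[dist < tol]
            if len(inliers) > len(best_inliers):
                best_line, best_inliers = (p1, p2), inliers
        return best_line, best_inliers

As described above, [48] additionally weights candidate points by their confidence rather than using a hard inlier count, which improves robustness to noise points that survive the neural network filter.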
lane-departure metric to determine whether to trigger the LDP or not. The LK Co-pilot mode is activated if the driver does not intend to change the lane; this mode helps the driver follow the expected trajectory based on the driver's dynamic steering input. Care should be taken to set the threshold accurately and adequately; otherwise, false lane detections will increase.
Wang et al. [50] proposed a lane-changing strategy for autonomous vehicles using deep reinforcement learning. The parameters considered for the reward are delay and traffic on the road. The decision to switch lanes depends on improving the reward by interacting with the environment. The proposed approach was tested under accident and non-accident scenarios. The advantage of this approach is collaborative decision making in lane changing, since fixed rules may not be suitable for heterogeneous environmental or traffic scenarios.
Wang et al. [51] proposed a reinforcement learning-based lane change controller. Two types of controllers are adopted, namely longitudinal and lateral control. A car-following model, namely the intelligent driver model, is chosen for the longitudinal controller, while the lateral controller is implemented by reinforcement learning. The reward function is based on yaw rate, acceleration, and the time taken to change the lane. To overcome static rules, a Q-function approximator is proposed to achieve a continuous action space. The proposed system was tested in a custom-made simulation environment; extensive simulation is still needed to test the efficiency of the approximator function under different real-time scenarios.
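Purely as an illustration (the exact reward shaping in [51] is not reproduced here), such a reward can be expressed as a completion bonus minus weighted comfort and duration penalties.

    def lane_change_reward(yaw_rate, lat_accel, duration,
                           w_yaw=1.0, w_acc=0.5, w_time=0.1, bonus=10.0):
        # Hypothetical shaping: reward completing the lane change, penalize
        # aggressive yaw rate and lateral acceleration, and penalize slowness.
        # All weights here are assumptions for illustration only.
        return (bonus - w_yaw * abs(yaw_rate)
                      - w_acc * abs(lat_accel)
                      - w_time * duration)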
Suh et al. [52] implemented a real-time probabilistic and deterministic lane changing motion prediction system that works under complex driving scenarios. They designed and tested the proposed system both in simulation and in real time. A hyperbolic tangent path is chosen for the lane-change maneuver. The lane changing process is initiated if the clearance distance is greater than the minimum safe distance, taking the positions of other vehicles into account. A safe driving envelope constraint is maintained to check for nearby vehicles in different directions. A stochastic model predictive controller is used to calculate the steering angle and acceleration under disturbances, with disturbance values obtained from experimental data. The use of advanced machine learning algorithms could improve the currently developed system's reliability and performance.
Gopalan et al. [53] proposed a lane detection system to detect the lane accurately under different conditions, such as a lack of prior knowledge of the road geometry and lane appearance variation due to changes in environmental conditions, and independently of vehicle speed. The modules of the proposed system are lane detection and tracking. The basic approach used for lane detection is to classify lane markings from non-lane markings using labelled training samples. A pixel hierarchy feature descriptor method is proposed to identify the correlation between the lane and its surroundings, and a machine learning-based boosting algorithm is used to identify the most relevant features. The advantage of the boosting algorithm is its adaptive way of increasing or decreasing the weight of the samples. Lane tracking is performed without prior knowledge of the motion pattern of the lane markings; it is achieved by using particle filters to track each of the lane markings and understand the cause of the variation. The variance is calculated for different parameters, such as the initial position of the lane, the motion of the vehicle, changes in road geometry and the traffic pattern, and the variance associated with these parameters is used to track the lane under different environmental conditions. The proposed learning-based system provides better performance under different scenarios. A point to consider is the assumption of a flat road; the flat road image was chosen to avoid the sudden appearance and disappearance of the lane. The proposed system is implemented at the simulation level.
To summarize the progress made in lane detection and tracking as discussed in this section, Table 2 shows the key steps involved in the three approaches for lane detection and tracking, along with remarks on their general characteristics. It is followed by Tables 3–5, which present a summary of the data used, strengths, drawbacks, key findings and future prospects of the key studies that have adopted the three approaches in the literature.
Table 2. A summary of methods used for lane detection and tracking with general remarks.

Table 3. A summary of studies adopting the features-based (image and sensor-based) approach. Each entry lists the method used, advantages, drawbacks, results, tools used, future prospects, data (simulation/real) and, where stated, the reason for the drawbacks.
[24] Method: inverse perspective mapping applied to convert the image to a bird's-eye view. Advantages: minimal error and quick detection of the lane. Drawbacks: the algorithm performance drops when driving in a tunnel due to fluctuation in the lighting conditions. Results: the lane detection error is 5%, the cross-track error is 25% and the lane detection time is 11 ms. Tools: fisheye dashcam, inertial measurement unit and ARM processor-based computer. Future prospects: enhancing the algorithm for complex road scenarios and low-light conditions. Data: real, obtained using a model car. Reason for drawbacks: performance drops in tunnels and where there is no proper lighting; the complex environment creates unnecessary tilt, causing some inaccuracy in lane detection.
[25] Method: kinematic motion model to determine the lane with minimal vehicle parameters. Advantages: no need for parameterization of the vehicle with variables like cornering stiffness and inertia; prediction of the lane even in the absence of camera input for around 3 s. Drawbacks: the algorithm's suitability for different environmental situations has not been considered. Results: lateral error of 0.15 m in the absence of the camera image. Tools: Mobileye camera, CarSim and MATLAB/Simulink, AutoBox from dSPACE. Future prospects: trying the fault-tolerant model in a real vehicle. Data: real (test vehicle).
[26] Method: use of inverse perspective mapping for the creation of a bird's-eye view of the environment. Advantages: improved accuracy of lane detection, in the range of 86% to 96% for different road types. Drawbacks: performance under different vehicle speeds and inclement weather conditions not considered. Results: the algorithm requires 0.8 s to process a frame; higher accuracy when more than 59% of lane markers are visible. Tools: Firewire color camera, MATLAB. Future prospects: real-time implementation of the work. Data: real (highways and streets around Atlanta).
[27] Method: Hough transform to extract the line segments; a convolutional neural network-based classifier to determine the confidence of line segments. Advantages: tolerant to noise. Drawbacks: performance on the custom dataset drops compared to the Caltech dataset. Results: for the urban scenario, the proposed algorithm provides accuracy greater than 95%; the lane detection accuracy in the custom setup is 72% to 86%. Tools: OV10650 camera and Epson G320 IMU. Future prospects: performance improvement. Data: simulation and real (Caltech dataset and custom dataset). Reason for drawbacks: device specification and calibration play an important role in capturing the lane.
[28] Method: feature line pairs (FLPs) along with a Kalman filter for road detection. Advantages: faster detection of lanes; suitable for real-time environments. Drawbacks: suitability under different environmental conditions remains to be tested. Results: around 4 ms to detect the edge pixels, 80 ms to detect all the FLPs, 1 ms to extract the road model with Kalman filter tracking. Tools: C++; camera and a Matrox Meteor RGB/PPB digitizer. Future prospects: robust tracking and improved performance in dense urban traffic. Data: real (test robot).
[29] Method: dual thresholding algorithm for pre-processing; edges detected by a single-direction gradient operator; a noise filter used to remove noise. Advantages: the lane detection algorithm is insensitive to headlights, rear lights, cars and road contour signs. Drawbacks: detection of straight lanes only. Results: the algorithm detects straight lanes during the night. Tools: camera with RGB channel. Future prospects: suitability of the algorithm for different types of roads at night to be studied. Data: custom dataset.
[30] Method: determination of the region of interest and conversion to a binary image via adaptive thresholding. Advantages: better accuracy. Drawbacks: the algorithm needs changes for checking its suitability for daytime lane detection. Results: 90% accuracy during night on isolated highways. Tools: Firewire S400 camera and MATLAB. Future prospects: geometric transformation of the image for increasing accuracy, and intensity normalization. Data: custom dataset. Reason for drawbacks: the constraints and assumptions considered do not suit the daytime.
[31] Method: Canny edge detector algorithm used to detect the edges of the lanes. Advantages: good performance of the proposed system. Results: the Hough transform improves the output of the lane tracker. Tools: Raspberry Pi-based robot with camera and sensors. Future prospects: simulation of the proposed method using a Raspberry Pi-based robot with a monocular camera and radar-based sensors to determine the distance between neighbouring vehicles. Data: custom data.
[32] Method: video processing technique to determine the lanes' illumination change in the region of interest. Results: robust performance. Tools: vision-based vehicle simulator. Future prospects: determining the lanes' illumination changes in the region of interest for curved roads.
[33] Method: a colour-based lane detection and representative line extraction algorithm. Advantages: better accuracy in the daytime. Drawbacks: the algorithm needs changes to be tested in different scenarios. Results: the lane detection rate is more than 93%. Tools: MATLAB. Future prospects: there is scope to test the algorithm at night. Data: simulation and real (custom data). Reason for drawbacks: unwanted noise reduces the performance of the algorithm.
[34] Method: proposed hardware architecture for detecting straight lane lines using the Hough transform. Advantages: better accuracy under occlusion and poor line paintings. Drawbacks: computational complexity and high cost of the Hough transform. Results: tested under various road conditions such as urban streets and highways; the algorithm provides a detection rate of 92%. Tools: Virtex-5 ML505 platform. Future prospects: the algorithm needs to be tested under different weather conditions. Data: custom.
[35] Method: lane detection methodology based on a circular arc or parabolic geometric method. Advantages: the video sensor improves the performance of lane marking detection. Drawbacks: performance drops in lane detection when entering a tunnel. Results: experiments performed with different road scenes provided better results. Tools: maps, video sensors, GPS. Future prospects: the proposed method can be tested with previously available data. Data: custom. Reason for drawbacks: low illumination.
[36] Method: hierarchical lane detection system to detect the lanes on structured and unstructured roads. Advantages: quick detection of lanes. Results: the system achieves an accuracy of 97% in lane detection. Tools: MATLAB. Future prospects: the algorithm can be tested on isolated highways and urban roads.
[37] Method: LIDAR sensor-based boundary detection and tracking for structured and unstructured roads. Advantages: the algorithm detects accurate lane boundaries regardless of road type. Drawbacks: difficult to track lane boundaries on unstructured roads because of low contrast and arbitrary road shape. Results: the road boundary detection accuracy is 95% for structured roads and 92% for unstructured roads. Tools: test vehicle with LIDAR, GPS and IMU. Future prospects: the algorithm needs to be tested with RADAR-based and vision-based sensors. Data: custom data. Reason for drawbacks: low contrast, arbitrary road shape.
[38] Method: a method to detect pedestrian lanes under different illumination conditions with no lane markings. Advantages: robust performance for pedestrian lane detection in unstructured environments. Drawbacks: more challenging for indoor and outdoor environments. Results: the lane detection accuracy is 95%. Tools: MATLAB. Future prospects: there is scope for structured roads with different speed limits. Data: new dataset of 2000 images (custom). Reason for drawbacks: complex environment.
[39] Method: an improved Hough transform, which pre-processes road images of different light intensities and converts them to the polar angle constraint area. Advantages: robust performance on a campus road without lane markings. Drawbacks: performance drops at low light intensity. Tools: test vehicle and MATLAB. Data: simulation and real (custom data). Reason for drawbacks: low illumination.
[40] Method: a lane detection algorithm based on camera and 2D LIDAR input data. Advantages: better accuracy compared with traditional methods for distances of less than 9 m. Results: computational and experimental results show the method significantly increases accuracy. Tools: software-based analysis and MATLAB. Future prospects: the proposed method needs to be tested with RADAR and vision-based sensor data. Data: fusion of camera and 2D LIDAR data.
[41] Method: a deep learning-based approach for detecting lanes, objects and free space. Advantages: the NVIDIA tool comes with an SDK (software development kit) with inbuilt options for object detection, lane detection and free space. Drawbacks: a monocular camera with an advanced driver assistance system is costly. Results: the time taken to determine the lane falls under 6 to 9 ms. Tools: C++ and NVIDIA's Drive PX2 platform. Future prospects: complex road scenarios with different high light intensities. Data: KITTI.
Table 4. A comprehensive summary of learning-based model predictive controller lane detection and tracking. Entries follow the same structure as Table 3.
[49] Method: inverse perspective mapping applied to convert the image to a bird's-eye view. Advantages: quick detection of the lane. Drawbacks: the algorithm performance drops due to fluctuation in the lighting conditions. Results: the lane detection error is 5%, the cross-track error is 25% and the lane detection time is 11 ms. Tools: fisheye dashcam, inertial measurement unit, ARM processor-based computer. Future prospects: enhancing the algorithm for complex road scenarios and low-light conditions. Data: obtained using a model car running at a speed of 1 m/s. Reason for drawbacks: the complex environment creates unnecessary tilt, causing some inaccuracy in lane detection.
Table 5. Cont. Entries follow the same structure as Table 3.
[50] Method: deep learning-based reinforcement learning used for decision making in the lane change; the reward for decision making is based on parameters like traffic efficiency. Advantages: cooperative decision-making process, with a reward function comparing the delay of a vehicle and traffic. Drawbacks: validation is needed to check the accuracy of the lane changing algorithm in heterogeneous environment scenarios. Results: the performance is expected to be fine-tuned based on the cooperation for both accident and non-accident scenarios. Tools: custom-made simulator, Newell car-following model. Future prospects: dynamic selection of the cooperation coefficient under different traffic.
[51] Method: reinforcement learning-based approach for decision making using a Q-function approximator. Advantages: decision-making process involving a reward function comprising yaw rate, yaw acceleration and lane changing time. Drawbacks: need for more testing to check the efficiency of the approximator function and its suitability under different real-time conditions. Results: the reward functions are used to learn the lane in a better way. Tools: custom-made simulator. Future prospects: testing the efficiency of the proposed approach under different road geometries and traffic conditions; testing the feasibility of the custom reinforcement learning with fuzzy logic for image input and controller action based on the current situation. Reason for drawbacks: more parameters could be considered for the reward function.
[52] Method: probabilistic and deterministic prediction for the complex driving scenario; deterministic and probabilistic prediction of the traffic of other vehicles to improve robustness. Advantages: testing under different scenarios. Drawbacks: analysis of the efficiency of the system under real-time noise is challenging. Results: robust decision making compared to the deterministic method; lower probability of collision. Tools: MATLAB/Simulink and CarSim; real-time setup comprising a Hyundai-Kia Motors K7, Mobileye camera system, MicroAutoBox II, Delphi radars and an IBEO laser scanner. Future prospects: the algorithm is to be modified for suitability for real-time monitoring. Data: custom dataset (collection of scenario data using a test vehicle).
[53] Method: pixel hierarchy applied to the occurrence of lane markings; detection of the lane markings using a boosting algorithm; tracking of lanes using a particle filter. Advantages: detection of the lane without prior knowledge of the road model and vehicle speed. Drawbacks: use of the vehicle's inertial sensors, GPS information and a geometry model could further improve performance under different environmental conditions. Results: improved performance by using support vector machines and artificial neural networks on the image. Tools: machine with a 4-GHz processor capable of working on images of approximately 240 × 320 at 15 frames per second. Future prospects: testing the efficiency of the algorithm by using the Kalman filter. Data: custom data. Reason for drawbacks: calibration of the sensors needs to be maintained.
Based on the review, some of the key observations from Tables 3–5 are summarized
below:
• Frequent calibration is required for accurate decision making in a complex environment.
• Reinforcement learning with the model predictive control could be a better choice to
avoid false lane detection.
• Model-based approaches (robust lane detection and tracking) provide better results
in different environmental conditions. Camera quality plays an important role in
determining lane marking.
• The algorithm’s performance depends on the type of filter used, and the Kalman filter
is mostly used for lane tracking.
• In a vision-based system, image smoothing is the initial stage of lane detection and tracking and plays a vital role in increasing the system's performance.
• External disturbances, such as weather conditions, vision quality, shadows and glare, and internal disturbances, such as too narrow, too wide or unclear lane markings, degrade algorithm performance.
• The majority of researchers (>90%) have used custom datasets for research.
• Monocular, stereo and infrared cameras have been used to capture images and videos.
The algorithm’s accuracy depends on the type of camera used, and a stereo camera
gives better performance than a monocular camera.
• The lane markers can be occluded by a nearby vehicle during overtaking.
• There is an abrupt change in illumination as the vehicle gets out of a tunnel. Sudden
changes in illumination affect the image quality and drop the system performance.
• The results show that the lane detection and tracking efficiency rate under dry and
light rain conditions is near 99% in most scenarios. However, the efficiency of lane
marking detection is significantly affected by heavy rain conditions.
• It has been seen that the performance of the system drops due to unclear and degraded
lane markings.
• IMU (inertial measurement unit) and GPS are examples of sensors that help improve the distance-measurement performance of RADAR and LIDAR.
• One of the biggest problems with today’s ADAS is that changes in environmental and
weather conditions have a major effect on the system’s performance.
• Following the image and sensor-based lane detection method, separate courses are calculated for precisely two of the lane markings to be tracked, with a set of binary parameters indicating the allocation of the determined offset values to one of the two separate courses [54].
• Following the robust lane detection and tracking method, after a fixed number of computing cycles a most probable hypothesis is calculated, for which the difference between the predicted courses of the tracked lane markings and the courses of the recognized lane markings is lowest [55].
• A parametric estimation method, in particular a maximum likelihood method, is
used to assign the calculated offset values to each of the separate courses of the lane
markings to be tracked [56].
• Only those two-lane markers that refer to the left and right lane boundaries of the
vehicle’s own lane are applied to the tracking procedure [57].
• The positive and negative ratios of the extracted characteristics of the frame are used
to assess the system’s correctness. The degree of accuracy is enhanced by including
the judgment in all extracted frames [58].
• At a present calculation cycle, the lane change assistance calculates a target control
amount comprising a feed-forward control using a target curvature of a track for
changing the host vehicle’s lane [59].
• Additional sensing signals are analyzed to determine if a collision between the host vehicle and any other vehicle is likely to occur, allowing action to be taken to avoid the accident [60].
• There are two kinds of issues that are often seen and corrected in dewarped perspective images: a stretching effect at the periphery of a wide-angle image dewarped by rectilinear projection, and duplicate images of objects in the area where the left and right camera views overlap [61].
• The object identification system examines the pixels in order to identify the object that
has not previously been identified in the 3D Environment [62].
4. Discussion
Based on the review of studies on lane detection and tracking in Section 3.2, it can be
observed that there are limited data sets in the literature that researchers have used to test
lane detection and tracking algorithms. Based on the literature review, a summary of the
key data sets used in the literature or available to the researchers is presented in Table 7,
which shows some of the key features, strengths, and weaknesses. It is expected that in
future, more data sets may be available for the researchers as this field continues to grow,
especially with the development of fully autonomous vehicles. As per a statistical survey of research papers published between 2000 and 2020, almost 42% of researchers mainly focused on the Intrusion Detection System (IDS) metric to evaluate the performance of the algorithms. This may be because the efficiency and effectiveness of IDS are better when compared to the Point Clustering Comparison, Gaussian Distribution, Spatial Distribution and Key Points Estimation methods. The verification of the performance of lane detection and tracking algorithms is done against ground truth data sets. There are four possibilities: true positive (TP), false negative (FN), false positive (FP) and true negative (TN), as shown in Table 8. Many metrics are available for the evaluation of performance, but the most common are accuracy, precision, F-score, Dice similarity coefficient (DSC) and receiver operating characteristic (ROC) curves. Table 9 provides the common metrics and the associated formulas used for the evaluation of the algorithms.
Table 7. A summary of datasets that have been used in the literature for verification of the algorithms.
Table 8. Performance metrics for verification of lane detection and tracking algorithms, compiled from ref. [70].
Table 9. A summary of the equations of metrics used for evaluation of the performance of the algorithm, compiled from refs. [71,72].
If the database is balanced, the accuracy rate should accurately reflect the algorithm's global output. The precision reflects the goodness of positive predictions: the greater the precision, the lower the number of "false alarms." The recall, also called the true positive rate (TPR), is the ratio of positive instances that are correctly detected by the algorithm; therefore, the higher the recall, the higher the algorithm's quality in detecting positive instances. The F1-score is the harmonic mean of precision and recall, and since they are combined into a concise metric, it can be used for comparing algorithms. The harmonic mean is used rather than the arithmetic mean because it is more sensitive to low values. Hence, an algorithm has a satisfactory F1 score only if it has both high precision and high recall. These parameters can be estimated as individual metrics for each class or as the algorithm's overall metrics [73].
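For reference, the standard definitions of these quantities, of the kind compiled in Table 9, are:

    \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
    \mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
    \mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
    F_1 = 2\,\frac{\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}

Here TP, TN, FP and FN are the counts defined in Table 8.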
Table 10 shows the SWOT analysis of different approaches used for lane detection and tracking algorithms. The learning-based approach (model predictive controller) is considered an emerging approach for lane detection and tracking because it is computationally more efficient than the other two approaches and provides reasonable results in real-time scenarios. However, the risk of mismatching lanes and performance drops in inclement weather conditions are drawbacks of the learning-based approach. The feature-based approach, while time-consuming, can provide better performance in the optimization of lane detection and tracking; however, it poses challenges in handling high illumination or shadows. Image and sensor-based lane detection and tracking approaches have been used widely in lane detection and tracking patents.
Table 10. SWOT analysis of different approaches used for lane detection and tracking algorithms.
In addition, from the literature synthesis, several gaps in knowledge are identified and presented in Table 11. The literature review shows that clothoid- and hyperbola-shaped roads are largely ignored in lane detection and tracking algorithms because of the complexity of the road structure and the unavailability of datasets. Likewise, much more work has been done on pavement marking for structured roads than for unstructured roads (Figure 3). Most studies focus on straight roads. It is to be noted that unstructured roads are found in residential areas, hilly areas and forest areas. Much research has previously considered daytime conditions, while night and rainy conditions are less studied. From the literature, it is observed that, in terms of speed flow conditions, speeds of 40 km/h to 80 km/h have been researched previously, while high speeds (above 80 km/h) have received less attention. Further, occlusion due to overtaking vehicles or other objects (Figure 4) and high illumination also pose a challenge for lane detection and tracking. These issues should be addressed to move from level 3 automation to level 5 (fully autonomous) driving. Also, new databases for more testing of algorithms are needed, as researchers are constrained by the unavailability of datasets. There is, however, the prospect of using synthetic sensor data generated by using a test vehicle or by designing driving scenarios through driving simulator applications available in commercial software.
Table 11. Lane detection under different conditions to identify the gaps in knowledge.
Road geometry: straight (St), clothoid (Cl), hyperbola (Hy); pavement marking: structured (S), unstructured (U); weather condition: day (D), night (N), rain (R); speed. A dash (–) indicates the condition was not covered.

[26] Borkar et al. (2009): St; S; N; –
[28] Lu et al. (2002): St; S; D; –
[29] Zhang & Shi (2009): St; S; N; –
[32] Hong et al. (2018): St; S; D; –
[33] Park et al. (2018): St; S; D; low (40 km/h) and high (80 km/h)
[34] El Hajjouji et al. (2019): St; S; D, N; 120 km/h
[35] Samadzadegan et al. (2006): Hy; S; D, N; –
[36] Cheng et al. (2010): St, Cl; S; D; –
[40] Yeniaydin et al. (2019): St, Hy; U; D; –
[41] Kemsaram et al. (2019): St, Hy; U; –; –
[43] Son et al. (2019): St, Hy; S; D; –
[47] Chen et al. (2018): St, Cl; S; D; –
[52] Suh et al. (2019): St, Hy; S; D; 60–80 km/h
[53] Gopalan et al. (2018): St, Hy; S, U; D; –
[74] Wu et al. (2008): St; S; D; 40 km/h
[75] Liu & Li (2018): St; S; D, N, R; –
[76] Han et al. (2019): St; S, U; D; 30–50 km/h
[77] Tominaga et al. (2019): –; S; D; 80 km/h
[78] Chen Z et al. (2019): St, Cl; S; D, N; –
[79] Feng et al. (2019): St, Hy; S; D, N, R; 120 km/h
Figure 3. Efficiency of the unstructured road is affected by shadow, heavy rain, low or high illumination.
Figure 4. Challenge in lane marking detection: vehicles stopping in or occluding the nearby lane.
Lane markings are usually yellow and white, although reflector lanes are designated with other colors. The number of lanes and their width vary per country. Due to the existence of shadows, there may be problems with vision clarity, and surrounding cars may obstruct the lane markings. Likewise, there is a dramatic shift in lighting as the car exits a tunnel; as a result, excessive light has an impact on visual clarity. Due to different weather conditions, such as rain, fog, and snow, the visibility of the lane markings decreases, and in the evening visibility may be reduced further. These difficulties in lane recognition and tracking lead to a drop in the performance of lane detection and tracking algorithms. Therefore, the development of a reliable lane detection system remains a challenge.
5. Conclusions
Over the last decade, many researchers have investigated ADAS. This field continues to grow, as fully autonomous vehicles are predicted to enter the market soon [80,81]. There are limited studies in the literature that provide the state-of-the-art in lane detection and tracking algorithms and the evaluation of these algorithms. To fill this gap, in this study we
have provided a comprehensive review of different methods of lane detection and tracking
algorithms. In addition, we presented a summary of different data sets that researchers
have used to test the algorithms, along with the approaches for evaluating the performance
of the algorithms. Further, a summary of patented works has also been provided.
The use of a learning-based approach is gaining popularity because it is computationally more efficient and provides reasonable results in real-time scenarios. The unavailability of rigorous and varied datasets to test the algorithms has been a constraint for researchers. However, the availability of synthetic sensor data, generated by using a test vehicle or by designing driving scenarios in commercial simulation software, has opened the door for testing algorithms. Likewise, the following areas need more investigation in future:
investigations in future:
• lane detection and tracking under different complex geometric road design models, e.g., hyperbola and clothoid (a brief sketch of these models follows this list),
• achieving high reliability in detecting and tracking lanes under different weather conditions and driving speeds, and
• lane detection and tracking for unstructured roads.
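To make the first direction concrete, the sketch below expresses the two road-geometry models named in the list: a clothoid, whose curvature varies linearly with arc length and whose centerline is obtained by integrating the heading angle, and the hyperbola-pair lane model often written in image coordinates as u = k/(v − h) + b(v − h) + c. All function names and parameter values are illustrative assumptions, not taken from any reviewed algorithm.

```python
# Illustrative sketch (assumptions only, not from any reviewed study) of
# the two complex road-geometry models mentioned above.
import numpy as np

def clothoid_points(kappa0: float, c: float, length: float, n: int = 200):
    """Sample a clothoid: curvature varies linearly, kappa(s) = kappa0 + c*s.

    The centerline is recovered by numerically integrating the heading
    angle theta(s) = kappa0*s + 0.5*c*s**2 (initial heading assumed zero).
    """
    s = np.linspace(0.0, length, n)
    theta = kappa0 * s + 0.5 * c * s**2
    ds = s[1] - s[0]
    x = np.cumsum(np.cos(theta)) * ds  # simple Riemann integration
    y = np.cumsum(np.sin(theta)) * ds
    return x, y

def hyperbola_lane(v: np.ndarray, k: float, b: float, c: float, horizon: float):
    """Hyperbola-pair lane model in image coordinates: for image row v,
    the lane-boundary column is u = k/(v - horizon) + b*(v - horizon) + c."""
    dv = v - horizon
    return k / dv + b * dv + c

# Example: a transition curve whose curvature grows from zero, and one
# lane boundary evaluated on image rows below an assumed horizon row 200.
x, y = clothoid_points(kappa0=0.0, c=1e-4, length=200.0)
rows = np.arange(250, 480, dtype=float)
u = hyperbola_lane(rows, k=4000.0, b=0.2, c=320.0, horizon=200.0)
print(x[-1], y[-1], u[0])
```

Fitting such models, rather than straight lines, is what allows a tracker to follow sharp curves and transition segments; straight-road assumptions fail precisely where lane keeping matters most.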
This study aimed to comprehensively review the previous literature on lane detection and tracking for ADAS and identify gaps in knowledge for future research. This is important because limited studies provide a state-of-the-art overview of lane detection and tracking algorithms for ADAS and a holistic view of works in this area. The quantitative assessment of mathematical models and parameters is beyond the scope of this work. It is anticipated that this review paper will be a valuable resource for researchers intending to develop reliable lane detection and tracking algorithms for emerging autonomous vehicles in the future.
References
1. Nilsson, N.J. Shakey the Robot; SRI International: Menlo Park, CA, USA, 1984.
2. Tsugawa, S.; Yatabe, T.; Hirose, T.; Matsumoto, S. An Automobile with Artificial Intelligence. In Proceedings of the 6th
International Joint Conference on Artificial Intelligence, Tokyo, Japan, 20 August 1979.
3. Blackman, C.P. The ROVA and MARDI projects. In Proceedings of the IEEE Colloquium on Advanced Robotic Initiatives in the
UK, London, UK, 17 April 1991; pp. 5/1–5/3.
4. Thorpe, C.; Hebert, M.; Kanade, T.; Shafer, S. Toward autonomous driving: The CMU Navlab. II. Architecture and systems.
IEEE Expert. 1991, 6, 44–52. [CrossRef]
5. Horowitz, R.; Varaiya, P. Control design of an automated highway system. Proc. IEEE 2000, 88, 913–925. [CrossRef]
6. Pomerleau, D.A.; Jochem, T. Rapidly Adapting Machine Vision for Automated Vehicle Steering. IEEE Expert. 1996, 11, 19–27.
[CrossRef]
7. Parent, M. Advanced Urban Transport: Automation Is on the Way. Intell. Syst. IEEE 2007, 22, 9–11. [CrossRef]
8. Lari, A.Z.; Douma, F.; Onyiah, I. Self-Driving Vehicles and Policy Implications: Current Status of Autonomous Vehicle Develop-
ment and Minnesota Policy Implications. Minn. J. Law Sci. Technol. 2015, 16, 735.
9. Urmson, C. Green Lights for Our Self-Driving Vehicle Prototypes. Available online: https://blog.google/alphabet/self-driving-
vehicle-prototypes-on-road/ (accessed on 30 September 2021).
10. Campisi, T.; Severino, A.; Al-Rashid, M.A.; Pau, G. The Development of the Smart Cities in the Connected and Autonomous
Vehicles (CAVs) Era: From Mobility Patterns to Scaling in Cities. Infrastructures 2021, 6, 100. [CrossRef]
11. Severino, A.; Curto, S.; Barberi, S.; Arena, F.; Pau, G. Autonomous Vehicles: An Analysis both on Their Distinctiveness and the
Potential Impact on Urban Transport Systems. Appl. Sci. 2021, 11, 3604. [CrossRef]
12. Aly, M. Real time Detection of Lane Markers in Urban Streets. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium,
Eindhoven, The Netherlands, 4–6 June 2008; pp. 7–12. [CrossRef]
13. Bar Hillel, A.; Lerner, R.; Levi, D.; Raz, G. Recent progress in road and lane detection: A survey. Mach. Vis. Appl. 2014, 25, 727–745.
[CrossRef]
14. Ying, Z.; Li, G.; Zang, X.; Wang, R.; Wang, W. A Novel Shadow-Free Feature Extractor for Real-Time Road Detection. In
Proceedings of the 24th ACM International Conference on Multimedia, Amsterdam, The Netherlands, 15–19 October 2016.
15. Jothilashimi, S.; Gudivada, V. Machine Learning Based Approach. 2016. Available online: https://www.sciencedirect.com/
topics/computer-science/machine-learning-based-approach (accessed on 20 August 2021).
16. Zhou, S.; Jiang, Y.; Xi, J.; Gong, J.; Xiong, G.; Chen, H. A novel lane detection based on geometrical model and Gabor filter. In
Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010; pp. 59–64.
17. Zhao, H.; Teng, Z.; Kim, H.; Kang, D. Annealed Particle Filter Algorithm Used for Lane Detection and Tracking. J. Autom. Control
Eng. 2013, 1, 31–35. [CrossRef]
18. Paula, M.B.; Jung, C.R. Real-Time Detection and Classification of Road Lane Markings. In Proceedings of the 2013 XXVI
Conference on Graphics, Patterns and Images, Arequipa, Peru, 5–8 August 2013.
19. Kukkala, V.K.; Tunnell, J.; Pasricha, S.; Bradley, T. Advanced Driver-Assistance Systems: A Path toward Autonomous Vehicles. In
IEEE Consumer Electronics Magazine; IEEE: Piscataway, NJ, USA, 2018; Volume 7, pp. 18–25. [CrossRef]
20. Yenkanchi, S. Multi Sensor Data Fusion for Autonomous Vehicles; University of Windsor: Windsor, ON, Canada, 2016.
21. Synopsys.com. What Is ADAS (Advanced Driver Assistance Systems)?—Overview of ADAS Applications|Synopsys. 2021.
Available online: https://www.synopsys.com/automotive/what-is-adas.html (accessed on 12 October 2021).
22. McCall, J.C.; Trivedi, M.M. Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation. In
IEEE Transactions on Intelligent Transportation Systems; IEEE: Piscataway, NJ, USA, 2006; Volume 7, pp. 20–37. [CrossRef]
23. Veit, T.; Tarel, J.; Nicolle, P.; Charbonnier, P. Evaluation of Road Marking Feature Extraction. In Proceedings of the 2008 11th
International IEEE Conference on Intelligent Transportation Systems, Beijing, China, 12–15 October 2008; pp. 174–181.
24. Kuo, C.Y.; Lu, Y.R.; Yang, S.M. On the Image Sensor Processing for Lane Detection and Control in Vehicle Lane Keeping Systems.
Sensors 2019, 19, 1665. [CrossRef]
25. Kang, C.M.; Lee, S.H.; Kee, S.C.; Chung, C.C. Kinematics-based Fault-tolerant Techniques: Lane Prediction for an Autonomous
Lane Keeping System. Int. J. Control Autom. Syst. 2018, 16, 1293–1302. [CrossRef]
26. Borkar, A.; Hayes, M.; Smith, M.T. Robust lane detection and tracking with ransac and Kalman filter. In Proceedings of the 2009
16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3261–3264. [CrossRef]
27. Sun, Y.; Li, J.; Sun, Z. Multi-Stage Hough Space Calculation for Lane Markings Detection via IMU and Vision Fusion. Sensors
2019, 19, 2305. [CrossRef]
28. Lu, J.; Yang, M.; Wang, H.; Zhang, B. Vision-based real-time road detection in urban traffic. In Real-Time Imaging VI, Proc.
SPIE 4666; SPIE: Bellingham, WA, USA, 2002. [CrossRef]
29. Zhang, X.; Shi, Z. Study on lane boundary detection in night scene. In Proceedings of the 2009 IEEE Intelligent Vehicles
Symposium, Xi’an, China, 3–5 June 2009; pp. 538–541. [CrossRef]
30. Borkar, A.; Hayes, M.; Smith, M.T.; Pankanti, S. A layered approach to robust lane detection at night. In Proceedings of the 2009
IEEE Workshop on Computational Intelligence in Vehicles and Vehicular Systems, Nashville, TN, USA, 30 March–2 April 2009;
pp. 51–57. [CrossRef]
31. Priyadharshini, P.; Niketha, P.; Saantha Lakshmi, K.; Sharmila, S.; Divya, R. Advances in Vision based Lane Detection Algo-
rithm Based on Reliable Lane Markings. In Proceedings of the 2019 5th International Conference on Advanced Computing &
Communication Systems (ICACCS), Coimbatore, India, 15–16 March 2019; pp. 880–885. [CrossRef]
32. Hong, G.-S.; Kim, B.-G.; Dorra, D.P.; Roy, P.P. A Survey of Real-time Road Detection Techniques Using Visual Color Sensor. J.
Multimed. Inf. Syst. 2018, 5, 9–14. [CrossRef]
33. Park, H. Implementation of Lane Detection Algorithm for Self-driving Vehicles Using Tensor Flow. In International Conference on
Innovative Mobile and Internet Services in Ubiquitous Computing; Springer: Cham, Switzerland, 2018; pp. 438–447.
34. El Hajjouji, I.; Mars, S.; Asrih, Z.; Mourabit, A.E. A novel FPGA implementation of Hough Transform for straight lane detection.
Eng. Sci. Technol. Int. J. 2020, 23, 274–280. [CrossRef]
35. Samadzadegan, F.; Sarafraz, A.; Tabibi, M. Automatic Lane Detection in Image Sequences for Vision-based Navigation Purposes.
ISPRS Image Eng. Vis. Metrol. 2006. Available online: https://www.semanticscholar.org/paper/Automatic-Lane-Detection-in-
Image-Sequences-for-Samadzadegan-Sarafraz/55f0683190eb6cb21bf52c5f64b443c6437b38ea (accessed on 12 August 2021).
36. Cheng, H.-Y.; Yu, C.-C.; Tseng, C.-C.; Fan, K.-C.; Hwang, J.-N.; Jeng, B.-S. Environment classification and hierarchical lane
detection for structured and unstructured roads. Comput. Vis. IET 2010, 4, 37–49. [CrossRef]
37. Han, J.; Kim, D.; Lee, M.; Sunwoo, M. Road boundary detection and tracking for structured and unstructured roads using a 2D
lidar sensor. Int. J. Automot. Technol. 2014, 15, 611–623. [CrossRef]
38. Le, M.C.; Phung, S.L.; Bouzerdoum, A. Lane Detection in Unstructured Environments for Autonomous Navigation Systems.
In Asian Conference on Computer Vision; Cremers, D., Reid, I., Saito, H., Yang, M.H., Eds.; Springer: Cham, Switzerland, 2015.
[CrossRef]
39. Wang, J.; Ma, H.; Zhang, X.; Liu, X. Detection of Lane Lines on Both Sides of Road Based on Monocular Camera. In Proceedings
of the 2018 IEEE International Conference on Mechatronics and Automation (ICMA), Changchun, China, 5–8 August 2018;
pp. 1134–1139.
40. Yeniaydin, Y.; Schmidt, K.W. Sensor Fusion of a Camera and 2D LIDAR for Lane Detection. In Proceedings of the 2019 27th Signal
Processing and Communications Applications Conference (SIU), Sivas, Turkey, 24–26 April 2019; pp. 1–4.
41. Kemsaram, N.; Das, A.; Dubbelman, G. An Integrated Framework for Autonomous Driving: Object Detection, Lane Detection,
and Free Space Detection. In Proceedings of the 2019 Third World Conference on Smart Trends in Systems Security and
Sustainablity (WorldS4), London, UK, 30–31 July 2019; pp. 260–265. [CrossRef]
42. Lee, C.; Moon, J.-H. Robust Lane Detection and Tracking for Real-Time Applications. IEEE Trans. Intell. Transp. Syst. 2018, 19, 1–6.
[CrossRef]
43. Son, Y.; Lee, E.S.; Kum, D. Robust multi-lane detection and tracking using adaptive threshold and lane classification. Mach. Vis.
Appl. 2018, 30, 111–124. [CrossRef]
44. Li, Q.; Zhou, J.; Li, B.; Guo, Y.; Xiao, J. Robust Lane-Detection Method for Low-Speed Environments. Sensors 2018, 18, 4274.
[CrossRef]
45. Son, J.; Yoo, H.; Kim, S.; Sohn, K. Real-time illumination invariant lane detection for lane departure warning system. Expert Syst.
Appl. 2014, 42. [CrossRef]
46. Chae, H.; Jeong, Y.; Kim, S.; Lee, H.; Park, J.; Yi, K. Design and Vehicle Implementation of Autonomous Lane Change Algorithm
based on Probabilistic Prediction. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems
(ITSC), Maui, HI, USA, 4–7 November 2018; pp. 2845–2852. [CrossRef]
47. Chen, P.R.; Lo, S.Y.; Hang, H.M.; Chan, S.W.; Lin, J.J. Efficient Road Lane Marking Detection with Deep Learning. In Proceedings
of the 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP), Shanghai, China, 19–21 November 2018; pp.
1–5.
48. Lu, Z.; Xu, Y.; Shan, X. A lane detection method based on the ridge detector and regional G-RANSAC. Sensors 2019, 19, 4028.
[CrossRef]
49. Bian, Y.; Ding, J.; Hu, M.; Xu, Q.; Wang, J.; Li, K. An Advanced Lane-Keeping Assistance System with Switchable Assistance
Modes. IEEE Trans. Intell. Transp. Syst. 2019, 21, 385–396. [CrossRef]
50. Wang, G.; Hu, J.; Li, Z.; Li, L. Cooperative Lane Changing via Deep Reinforcement Learning. arXiv 2019, arXiv:1906.08662.
51. Wang, P.; Chan, C.Y.; de La Fortelle, A. A reinforcement learning based approach for automated lane change maneuvers. In
Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1379–1384.
52. Suh, J.; Chae, H.; Yi, K. Stochastic model-predictive control for lane change decision of automated driving vehicles. IEEE Trans.
Veh. Technol. 2018, 67, 4771–4782. [CrossRef]
53. Gopalan, R.; Hong, T.; Shneier, M.; Chellappa, R. A learning approach towards detection and tracking of lane markings. IEEE
Trans. Intell. Transp. Syst. 2012, 13, 1088–1098. [CrossRef]
54. Mueter, M.; Zhao, K. Method for Lane Detection. US20170068862A1. 2015. Available online: https://patents.google.com/patent/
US20170068862A1/en (accessed on 12 August 2021).
55. Joshi, A. Method for Generating Accurate Lane Level Maps. US9384394B2. 2013. Available online: https://patents.google.com/
patent/US9384394B2/en (accessed on 12 August 2021).
56. Kawazoe, H. Lane Tracking Control System for Vehicle. US20020095246A1. 2001. Available online: https://patents.google.com/
patent/US20020095246 (accessed on 12 August 2021).
57. Iisaka, A. Lane Detection Sensor and Navigation System Employing the Same. EP1143398A3. 1996. Available online: https:
//patents.google.com/patent/EP1143398A3/en (accessed on 12 August 2021).
58. Zhitong, H.; Yuefeng, Z. Vehicle Detecting Method Based on Multi-Target Tracking and Cascade Classifier Combination.
CN105205500A. 2015. Available online: https://patents.google.com/patent/CN105205500A/en (accessed on 12 August 2021).
59. Fujii, S. Steering Support Device. JP6589941B2. 2019. Available online: https://patentimages.storage.googleapis.com/0b/d0/ff/
978af5acfb7b35/JP6589941B2.pdf (accessed on 12 August 2021).
60. Gurghian, A.; Koduri, T.; Nariyambut Murali, V.; Carey, K. Lane Detection Systems and Methods. US10336326B2. 2016. Available
online: https://patents.google.com/patent/US10336326B2/en (accessed on 12 August 2021).
61. Zhang, W.; Wang, J.; Lybecker, K.; Piasecki, J.; Brian Litkouhi, B.; Frakes, R. Enhanced Perspective View Generation in a Front
Curb Viewing System Abstract. US9834143B2. 2014. Available online: https://patents.google.com/patent/US9834143B2/en
(accessed on 12 August 2021).
62. Vallespi-Gonzalez, C. Object Detection for an Autonomous Vehicle. US20170323179A1. 2016. Available online: https://patents.
google.com/patent/US20170323179A1/en (accessed on 12 August 2021).
63. CULane Dataset. Available online: https://xingangpan.github.io/projects/CULane.html (accessed on 13 April 2020).
64. Caltech Pedestrian Detection Benchmark. Available online: http://www.vision.caltech.edu/Image_Datasets/CaltechPedestrians/
(accessed on 13 April 2020).
65. Lee, E. Digital Image Media Lab. Diml.yonsei.ac.kr. 2020. Available online: http://diml.yonsei.ac.kr/dataset/ (accessed on 13
April 2020).
66. Cvlibs.net. The KITTI Vision Benchmark Suite. Available online: http://www.cvlibs.net/datasets/kitti/ (accessed on 27 April
2020).
67. Tusimple/Tusimple-Benchmark. Available online: https://github.com/TuSimple/tusimple-benchmark/tree/master/doc/
velocity_estimation (accessed on 15 April 2020).
68. Romera, E.; Bergasa, L.M.; Arroyo, R. Need Data for Driver Behavior Analysis? Presenting the Public UAH-DriveSet. In Proceedings
of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems, Rio de Janeiro, Brazil, 1–4 November 2016.
69. BDD100K Dataset. Available online: https://mc.ai/bdd100k-dataset/ (accessed on 2 April 2020).
70. Kumar, A.M.; Simon, P. Review of Lane Detection and Tracking Algorithms in Advanced Driver Assistance System. Int. J. Comput.
Sci. Inf. Technol. 2015, 7, 65–78. [CrossRef]
71. Hamed, T.; Kremer, S. Computer and Information Security Handbook, 3rd ed.; Elsevier: Amsterdam, The Netherlands, 2017; p. 114.
72. Precision and Recall. Available online: https://en.wikipedia.org/wiki/Precision_and_recall (accessed on 13 January 2021).
73. Fiorentini, N.; Losa, M. Long-Term-Based Road Blackspot Screening Procedures by Machine Learning Algorithms. Sustainability
2020, 12, 5972. [CrossRef]
74. Wu, S.J.; Chiang, H.H.; Perng, J.W.; Chen, C.J.; Wu, B.F.; Lee, T.T. The heterogeneous systems integration design and implementa-
tion for lane keeping on a vehicle. IEEE Trans. Intell. Transp. Syst. 2008, 9, 246–263. [CrossRef]
75. Liu, H.; Li, X. Sharp Curve Lane Detection for Autonomous Driving. Comput. Sci. Eng. 2019, 21, 80–95. [CrossRef]
76. Han, J.; Yang, Z.; Hu, G.; Zhang, T.; Song, J. Accurate and robust vanishing point detection method in unstructured road scenes. J.
Intell. Robot. Syst. 2019, 94, 143–158. [CrossRef]
77. Tominaga, K.; Takeuchi, Y.; Tomoki, U.; Kameoka, S.; Kitano, H.; Quirynen, R.; Berntorp, K.; Di Cairano, S. GNSS Based Lane
Keeping Assist System via Model Predictive Control. 2019. Available online: https://doi.org/10.4271/2019-01-0685 (accessed on
9 September 2021).
78. Chen, Z.; Liu, Q.; Lian, C. PointLaneNet: Efficient end-to-end CNNs for Accurate Real-Time Lane Detection. In Proceedings of
the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 2563–2568. [CrossRef]
79. Feng, Y.; Rong-ben, W.; Rong-hui, Z. Research on Road Recognition Algorithm Based on Structure Environment for ITS. In
Proceedings of the 2008 ISECS International Colloquium on Computing, Communication, Control, and Management, Guangzhou,
China, 3–4 August 2008; pp. 84–87. [CrossRef]
80. Nieuwenhuijsen, J.; de Almeida Correia, G.H.; Milakis, D.; van Arem, B.; van Daalen, E. Towards a quantitative method to
analyze the long-term innovation diffusion of automated vehicles technology using system dynamics. Transp. Res. Part C Emerg.
Technol. 2018, 86, 300–327. [CrossRef]
81. Stasinopoulos, P.; Shiwakoti, N.; Beining, M. Use-stage life cycle greenhouse gas emissions of the transition to an autonomous
vehicle fleet: A System Dynamics approach. J. Clean. Prod. 2021, 278, 123447. [CrossRef]