How Simulation Helps Autonomous Driving

Abstract—Safety and cost are two important concerns for the development of autonomous driving technologies. From academic research to commercial applications of autonomous driving vehicles, sufficient simulation and real-world testing are required. In general, large-scale testing is first conducted in a simulation environment, and the learned driving knowledge is then transferred to the real world, so how to adapt driving knowledge learned in simulation to reality becomes a critical issue. However, the virtual simulation world differs from the real world in many aspects, such as lighting, textures, vehicle dynamics, and agents' behaviors, which makes it difficult to bridge the gap between the virtual and real worlds. This gap is commonly referred to as the reality gap (RG). In recent years, researchers have explored various approaches to address the reality gap issue, which can be broadly classified into three categories: transferring knowledge from simulation to reality (sim2real), learning in digital twins (DTs), and learning by parallel intelligence (PI) technologies. In this paper, we consider solutions based on sim2real, DTs, and PI technologies, and review important applications and innovations in the field of autonomous driving. Meanwhile, we present the state of the art from the views of algorithms, models, and simulators, and elaborate the development process from sim2real to DTs and PI. The presentation also illustrates the far-reaching effects and challenges in the development of sim2real, DTs, and PI in autonomous driving.

Index Terms—autonomous driving, sim2real, digital twins, parallel intelligence, reality gap.

This work was supported in part by the National Natural Science Foundation of China under Grant 62273135 and in part by the Natural Science Foundation of Hubei Province in China under Grant 2021CFB460. (Corresponding author: Long Chen.) Xuemin Hu, Shen Li, and Tingyu Huang are with the School of Artificial Intelligence, Hubei University, Wuhan, Hubei, 430062, China (e-mail: [email protected], lishen [email protected], [email protected]). Bo Tang is with the Department of Electrical and Computer Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA (e-mail: [email protected]). Rouxing Huai is with the Beijing Huairou Academy of Parallel Sensing, Beijing, 101499, China (e-mail: [email protected]). Long Chen is with the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100864, China, and also with Waytous Inc., Beijing, 100083, China (e-mail: [email protected]).

Fig. 1. Leveraging computer simulations to improve real-world autonomous driving performance using three types of techniques: sim2real, digital twins (DTs), and parallel intelligence (PI). Sim2real mainly focuses on improvement at the algorithm level. DTs are more concerned with scenario modeling and interaction. As a higher-level technology, PI, which has three-level functions including description, prediction, and prescription, tends to construct mixed datasets and parallel computation of both artificial and physical scenarios.
I. INTRODUCTION

Autonomous driving, as an important part of future intelligent transportation, has great potential in alleviating traffic congestion and avoiding traffic accidents caused by human factors. In the past decades, researchers' efforts in this direction have increased, and many academic achievements have been made in autonomous driving [1]–[5].

The application of autonomous driving in industry demands significant effort, especially for the planning and control tasks [6]–[9]. It is necessary to consider not only the safety performance at the algorithm level, but also the cost at the real-vehicle level, such as the price of high-precision sensors, radars, and cameras, as well as the collision damage to real vehicles. The consequences of directly applying an immature algorithm to a real vehicle are immeasurable. Therefore, an experimental process is usually conducted in a high-fidelity simulator first and then deployed in the real environment, which can greatly reduce research and development costs. However, the simulation and real environments can be completely different, and there are always gaps between the simulated and real worlds, such as lighting, textures, vehicle dynamics, and agents' behaviors, which are usually referred to as the "reality gap (RG)" [10].

In order to bridge the gap between simulation and reality, many different methods have been proposed [11].
At a high level, these methods are mainly divided into three categories: i) transferring knowledge from simulation to reality, ii) learning in digital twins, and iii) learning by parallel intelligence technologies, which are shown in Fig. 1.

Sim2real refers to the process of transferring the strategies and knowledge learned from the simulated world to the real world to bridge the RG. In the field of autonomous driving, the core idea of sim2real is to train autonomous driving systems in a simulation environment and then apply them to real-world vehicles. In order to address the RG caused by factors such as uneven sampling in the real world, too many physical parameters, insufficient expert experience, and imperfect dynamics models, sim2real has gradually been developed using six kinds of methods: curriculum learning, meta-learning, knowledge distillation, robust reinforcement learning, domain randomization, and transfer learning. The top part of Fig. 1 shows these methods and their relationships. Each method has its unique way of dealing with the RG problem. For example, domain randomization randomizes environmental parameters so that the parameters in the simulated world can cover those in the real world, making the strategies and knowledge learned from the simulated world applicable in the real world. However, the computational cost of sim2real is still a challenge, especially when dealing with complex and dynamic environments. The computational requirements for simulating complex environments are difficult to meet, which limits the scalability of sim2real methods.

Unlike sim2real methods, digital twin (DT)-based methods aim to construct a mapping of real-world physical entities in a simulation environment using data from sensors and physical models, so as to reflect the entire lifecycle of the corresponding physical entities [12]. In autonomous driving, DTs are typically used for multi-scale modeling of the environment and vehicles. As shown in the middle part of Fig. 1, data from real sensors are used to achieve motion planning of the twin body and the physical vehicle in virtual scenes through data interaction between the driving data analysis model and virtual reality. Additional expert experience guides the modeling process to further narrow the RG. Moreover, the DT system is continuously learned and optimized through real-time data updates and interactions between the virtual and real twins to keep improving the accuracy of the model. To achieve better virtual simulation effects, researchers combine virtual reality technologies with DTs. Augmented reality (AR) and mixed reality (MR) technologies can provide better interactivity and visualization for digital twins. Users can interact with and control the DT model using AR and MR technologies to achieve better physical simulation, visualization, and manipulation. In addition, AR and MR technologies can enhance DTs through real-time object tracking and virtualization, resulting in better interactive experiences and more advanced interactive applications.

Sim2real-based methods play a crucial role in adapting autonomous driving vehicles to the complexities of the real world by transferring learned knowledge from simulated environments, while DT-based methods allow autonomous driving vehicles to learn knowledge by synchronizing data from both the real and simulated worlds. In recent years, some researchers have utilized parallel intelligence (PI) [13] to reduce the RG. As a more advanced technology, PI combines the advantages of both sim2real and DT methods and can achieve better management and control of complex systems. Unlike DTs, which only have a descriptive function, PI has three functions, including descriptive intelligence, predictive intelligence, and prescriptive intelligence, which are shown in the bottom part of Fig. 1. In parallel intelligence techniques, researchers typically construct an artificial system that is mapped to a physical system to learn knowledge and give feedback to the physical system. The artificial system learns knowledge from the data collected from the two systems through computational experiments for evaluation, and then the knowledge is applied to the real system through parallel execution with highly real-time virtual-real interaction and online feedback.

In all three categories of methods, researchers need to construct a virtual world. Therefore, some researchers devote themselves to developing simulators for autonomous driving and robotics, such as AirSim [14] and CARLA [15]. The mismatch between real and simulated settings can be minimized by providing training data and experience in these simulators, and robot agents can then be deployed to the real world through sim2real methods.

This study surveys methods, applications, and the development of bridging the reality gap in autonomous driving. To the best of our knowledge, this is the first survey that focuses on dealing with the RG from the perspectives of sim2real, digital twins, and parallel intelligence. In conclusion, our contributions are summarized in the following three aspects.
• A taxonomy of the literature is presented from the perspectives of sim2real, digital twins, and parallel intelligence, where DT and PI methods in particular are reviewed to address the reality gap issue for the first time.
• Methods and applications for handling the reality gap between simulation and reality in autonomous driving are comprehensively reviewed in this paper.
• Discussions of key challenges and opportunities are presented in this paper, offering insights for developing new sim2real, DT, and PI methods in autonomous driving.

The remainder of this paper is organized as follows. In Section II, we introduce the methodologies and applications of sim2real. Technologies and applications of digital twins, as well as the guiding technologies AR and MR, in autonomous driving are introduced in Section III. In Section IV, parallel intelligence technologies for autonomous driving are presented. In Section V, we introduce the simulators that are used to implement the above related technologies. Future works and challenges are summarized in Section VI, and conclusions are drawn in the final section.

II. SIMULATION TO REALITY TRANSFER

Autonomous driving algorithms require extensive testing before commercial application. For safety and cost considerations, most current research on new algorithms, especially reinforcement learning-based methods, has focused on simulation environments [16], [17]. However, vehicle agents trained to near-human levels in the simulation environment still face the reality gap when they are deployed in the real world.
A. Curriculum learning

Curriculum learning (CL) [37] is a training strategy in which the model accumulates knowledge by initially learning simple tasks before moving on to more complex ones. It optimizes the sequence in which agents accumulate experience so as to speed up the training process and improve effectiveness. The task of sequencing the samples is tedious, so Kumar et al. propose the concept of Self-Paced Learning [38], where the curriculum is dynamically determined to adjust to the learning pace of the learner. A relevant method for object detection in autonomous driving tasks is proposed by Soviany et al. [39], which is shown in Fig. 2. The model trains a target detector on source images that usually come from the simulation environment, and then allows it to learn tasks from easy to hard levels by a self-paced learning method. Finally, it reaches the goal of predicting the images in the real domain in order to bridge the RG. However, adaptive methods need some prior knowledge and require manual design [40]. In recent years, methods combining curriculum learning and reinforcement learning have been widely developed [41]. For example, Florensa et al. [42] propose a reinforcement learning-based curriculum learning method, which does not require prior knowledge and uses reverse training, allowing the agent to directly start the curriculum from an applicable initial state. Nonetheless, this static approach sometimes does not work well in complex autonomous driving tasks.

Fig. 2. Curriculum self-paced learning approach for object detection in autonomous driving [39].

To address complex driving tasks in dynamic scenarios, Qiao et al. [18] propose a dynamic idea named Automatically Generated Curriculum (AGC). They use deep reinforcement learning to develop a strategy and use candidate sets to generate the curriculum, which can optimize the traffic efficiency at complex intersections and reduce the training time. However, this approach of pre-training on other tasks and transferring the knowledge is not very effective in different driving situations. Bae et al. [19] propose a curriculum strategy with self-taught scoring functions, which effectively estimates the roll and sideslip angles of the vehicle in different driving situations. Meanwhile, Song et al. [20] propose a high-speed autonomous overtaking system based on three-stage curriculum learning, using the same task-specific curriculum learning to train end-to-end neural networks, and experimental results prove that the method has better overtaking performance. Anzalone et al. [21] propose a reinforcement learning method with five stages for end-to-end autonomous driving, following the idea of curriculum learning to continuously increase the learning difficulty for the agents at each stage so that they learn complex behaviors.
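The staged training loop shared by these curriculum methods can be summarized in a short sketch. This is an illustrative outline only: the `make_env` constructor with a `difficulty` argument and the agent interface (`act`, `update`) are hypothetical placeholders, not the setup of any of the cited works.

```python
# Sketch of stage-based curriculum training for a driving agent.
# `make_env(difficulty=...)` and the agent's act()/update() methods are assumed
# placeholders; in practice, difficulty could control traffic density, weather,
# or initial-state hardness.
def train_with_curriculum(agent, make_env, stages=(0.2, 0.4, 0.6, 0.8, 1.0),
                          episodes_per_stage=200):
    for difficulty in stages:                      # easy -> hard, as in CL
        env = make_env(difficulty=difficulty)
        for _ in range(episodes_per_stage):
            obs, done = env.reset(), False
            while not done:
                action = agent.act(obs)
                next_obs, reward, done, info = env.step(action)
                agent.update(obs, action, reward, next_obs, done)
                obs = next_obs
    return agent
```

Self-paced variants such as [38], [39] replace the fixed stage list with a schedule derived from the model's own losses.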
TABLE I. TYPICAL SIM2REAL MODELS IN AUTONOMOUS DRIVING.

B. Meta-learning

The concept of meta-learning [43], which means learning to learn, has gained significant attention in recent years. The essence of meta-learning is that the model is expected to gain prior experience from other similar tasks and then be able to learn new knowledge more quickly. Meta-learning has been shown to have clear advantages in several scenarios, such as single-task learning [44], multitask learning [45], few-shot scenarios [46], Neural Architecture Search (NAS) [47], etc. Finn et al. [48] propose a model-agnostic meta-learning algorithm, compatible with any model trained by gradient descent, named Model-Agnostic Meta-Learning (MAML). The model requires only a small amount of data to achieve fast convergence, and it addresses the previous drawback of focusing only on the initialization parameters.

One of the challenges in traditional reinforcement learning is the inefficiency of data exploration. In order to solve this problem, Sæmundsson et al. [49] formulate meta-learning as a latent variable model and enable knowledge to be transferred across robotic systems, automatically inferring relationships between tasks from the data based on probabilistic ideas. In contrast, Nagabandi et al. [22] propose an online adaptive learning approach for high-capacity dynamics models, where they implement the algorithm on a vehicle to solve the sim2real problem by using model-based reinforcement learning with online adaptation while training a dynamics prior model using meta-learning. Jaafra et al. [23] propose an adaptive meta-learning method based on an embedded adaptive meta reinforcement learning method with a neural network controller, which enables fast and efficient iterations for changing tasks and extends RL to urban autonomous driving tasks in the CARLA simulator. In addition, to reduce the high cost of the labeled datasets required to train a model with high performance, Kar et al. [24] propose the concept of Meta-Sim, as shown in Fig. 3, which is a learning model for generating synthetic driving scenarios with the goal of acquiring images through a graphics engine and their corresponding realistic images that bridge the distribution gap between simulation and reality. Meta-Sim also optimizes a meta objective by automatically learning to synthesize labeled datasets related to downstream tasks.
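As a concrete toy of the learning-to-learn idea, the sketch below meta-trains a shared initialization over a family of simple regression tasks. For brevity it uses a first-order, Reptile-style update (moving the initialization toward task-adapted weights) rather than MAML's second-order gradient; the task family, step sizes, and model are illustrative assumptions, not the setup of the cited works.

```python
# Toy first-order meta-learning (Reptile-style) on 1-D linear regression tasks:
# the meta-learned initialization `theta` adapts to a new task in a few steps.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is a random linear function y = a*x + b observed with noise."""
    a, b = rng.uniform(-2, 2, size=2)
    x = rng.uniform(-1, 1, size=20)
    return x, a * x + b + 0.01 * rng.normal(size=20)

def adapt(theta, x, y, lr=0.1, steps=5):
    """Inner loop: a few squared-error gradient steps on one task."""
    w, b = theta
    for _ in range(steps):
        err = w * x + b - y
        w -= lr * 2 * np.mean(err * x)
        b -= lr * 2 * np.mean(err)
    return np.array([w, b])

theta = np.zeros(2)                                  # meta-parameters (shared init)
for _ in range(2000):                                # outer loop over sampled tasks
    x, y = sample_task()
    theta += 0.05 * (adapt(theta, x, y) - theta)     # move init toward adapted weights
```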
C. Knowledge distillation

The concept of knowledge distillation was first introduced by Hinton et al. [50]. The main idea is that small student models learn from and are supervised by large teacher models for knowledge transfer. Knowledge distillation, as an effective technique for deep neural model compression, has been widely used in different areas of artificial intelligence [51], including speech recognition, visual recognition, and natural language processing.

The knowledge transfer from the teacher to the student is a critical component of knowledge distillation, but it is also a challenge to learn entirely from real data for teacher models.
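The teacher-to-student transfer described above is usually implemented as a soft-target loss. The sketch below shows the standard temperature-scaled formulation from [50] in PyTorch; the student and teacher networks themselves are placeholders to be supplied by the user.

```python
# Standard knowledge-distillation loss: the student matches the teacher's
# temperature-softened predictions in addition to the ground-truth labels.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                   # T^2 restores the gradient scale
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# usage with a frozen teacher:
#   loss = distillation_loss(student(x), teacher(x).detach(), y)
```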
D. Robust reinforcement learning

Robust reinforcement learning (RL) was proposed as a new paradigm for RL in [53] to represent the insensitivity of control systems to characteristic perturbations. The approach is defined based on H∞ control theory and explicitly considers input disturbances and modeling errors to achieve robustness in RL against uncertainties that affect actions and states in the system, such as physical parameter changes.

In the course of reinforcement learning research, classical techniques for improving robustness have been shown to only prevent common situations and to be inconsistent across different simulation environments, so robust adversarial reinforcement learning (RARL) [54] is proposed to improve the agents' behaviors by training an adversary to effectively counter system perturbations. As shown in Fig. 5, a constrained policy for highway entrance ramps in autonomous driving is proposed by He et al. [30]. They model the environment of a highway intersection as an adversarial agent to constrain vehicle behaviors, and use the adversarial agent and white-box adversarial attack techniques to simulate or generate adversarial environmental disturbances and observational perturbations so as to bridge the RG. In addition, the authors also propose an observation-based adversarial reinforcement learning that influences the observations by incorporating a Bayesian optimization black-box attack method to make the agent efficiently approximate the optimal adversarial perturbation strategy [55]. Pan et al. [31] also propose a risk-adversarial learning approach, which incorporates a risk-averse mechanism by playing a game between risk-averse and risk-seeking agents while modeling risk as a value function. Experimental results show that this method triggers far fewer catastrophic events than classical reinforcement learning methods.

Fig. 5. Decision-making framework based on robust RL for highway on-ramp merging in autonomous driving [30].

Most of the existing literature treats RARL as a zero-sum simultaneous game with a Nash equilibrium, which may ignore the sequential manner of RL deployment, produce overly conservative agents, and lead to training instability. To handle this issue, Huang et al. [56] present a new sequential robust RL formulation for both single-agent robot control and multi-agent highway merging tasks, where they propose a general Stackelberg game model called RRL-Stack and develop a Stackelberg policy gradient algorithm to solve RRL-Stack, which performs better than general RL methods.
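The adversarial training pattern behind RARL [54] and the works above can be outlined as follows. This is a schematic sketch under assumed interfaces (an environment whose step() accepts a disturbance, and two learnable agents); it is not the implementation of [30], [31], or [56].

```python
# Schematic robust adversarial RL: alternate between training the driving policy
# (protagonist) and an adversary that injects worst-case disturbances and is
# rewarded by the negative of the protagonist's reward (zero-sum setup).
def rarl_train(env, protagonist, adversary, iterations=100, episodes_per_phase=10):
    for _ in range(iterations):
        _run(env, protagonist, adversary, episodes_per_phase, train="protagonist")
        _run(env, protagonist, adversary, episodes_per_phase, train="adversary")

def _run(env, protagonist, adversary, episodes, train):
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            action = protagonist.act(obs)
            disturbance = adversary.act(obs)          # e.g., force or sensor noise
            next_obs, reward, done, _ = env.step(action, disturbance)
            if train == "protagonist":
                protagonist.update(obs, action, reward, next_obs, done)
            else:
                adversary.update(obs, disturbance, -reward, next_obs, done)
            obs = next_obs
```

The sequential Stackelberg formulation of [56] departs from this simultaneous zero-sum setup by letting the leader anticipate the follower's response.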
E. Domain randomization

In this line of work, authors train a generic model for different target domains. For some parameters of interest such as friction and mass, the method proposed in [59] trains a generic control strategy through a dynamics model with calibrated parameters, and then analyzes the parameters using an online recognition system. Besides, Yue et al. [32] propose a new method of domain randomization and pyramid consistency to learn models with high generalization ability using consistency-enforced training across domains. The overall experiments show that the effectiveness of domain randomization is much better than that of existing methods.

Kontes et al. [33] consider more complex road and high-speed driving situations by training several variants of the complex problem, in which the authors use domain randomization to complete the sim2real transfer and transfer the learned driving strategies to real vehicles. In addition, to perform randomized simulation of visual features in the source domain, the method in [60] generates robust motion trajectories with randomized dynamics models and acts on real robots to guide them to learn multiple motor skills. Pouyanfar et al. [34] propose a dynamic domain randomization, which differs from static domain randomization and includes the randomization of dynamic objects. The proposed framework based on domain randomization for collision-free autonomous driving is shown in Fig. 6. It generates a simulated world by simulating vehicle driving to collect images and corresponding steering angles, and eventually predicts future steering angles in real images through scenario randomization, which solves the end-to-end collision-free driving problem using real data.
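A minimal sketch of the domain randomization recipe discussed above is given below: simulator parameters are resampled every episode so that, at deployment time, the real world looks like just another draw from the training distribution. The parameter names, ranges, and the configurable `make_sim` factory are illustrative assumptions, not the API of any particular simulator.

```python
# Domain randomization sketch: resample physics/rendering parameters per episode.
import random

def sample_sim_params():
    return {
        "friction": random.uniform(0.5, 1.2),
        "vehicle_mass_kg": random.uniform(1200.0, 1800.0),
        "sun_altitude_deg": random.uniform(10.0, 90.0),    # lighting variation
        "texture_seed": random.randrange(10_000),           # visual appearance
        "sensor_noise_std": random.uniform(0.0, 0.05),
    }

def train_with_domain_randomization(make_sim, agent, episodes=10_000):
    for _ in range(episodes):
        sim = make_sim(**sample_sim_params())   # a new random "domain" each episode
        obs, done = sim.reset(), False
        while not done:
            action = agent.act(obs)
            next_obs, reward, done, _ = sim.step(action)
            agent.update(obs, action, reward, next_obs, done)
            obs = next_obs
    return agent
```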
Fig. 7. Transfer learning-based driving model including two stages with the
transferred weights technique [36].
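As a generic illustration of the transferred-weights idea sketched in Fig. 7 (and not the implementation of [36]), the PyTorch snippet below reuses a backbone pre-trained in a first stage, freezes it, and fine-tunes only a new output head on real-world data; the checkpoint path and the steering-regression head are hypothetical.

```python
# Two-stage transfer learning sketch: freeze a pre-trained backbone and
# fine-tune a new head (e.g., steering-angle regression) on real data.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18()
# stage 1 (hypothetical): weights learned on simulated driving data
# backbone.load_state_dict(torch.load("sim_pretrained.pth"))

for param in backbone.parameters():
    param.requires_grad = False                        # keep transferred features fixed
backbone.fc = nn.Linear(backbone.fc.in_features, 1)    # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
criterion = nn.MSELoss()

def finetune_step(images, steering):                   # stage 2: one step on real data
    optimizer.zero_grad()
    loss = criterion(backbone(images).squeeze(1), steering)
    loss.backward()
    optimizer.step()
    return loss.item()
```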
TABLE II
AUTONOMOUS DRIVING APPLICATION STATISTICS FOR DIGITAL TWINS.

| Reference | Approach | Twin objects | Application | Year |
|---|---|---|---|---|
| Shikata et al. [90] | MBD | Vehicles | EV charging design and test | 2019 |
| Szalai et al. [76] | MBD, VPG | Vehicles, traffic | ADAS test, mixed-reality application | 2020 |
| Wu et al. [92] | MBD | Environment | Model-based RL in autonomous driving | 2021 |
| Yu et al. [87] | MBD, CPS | Vehicles, environment | ADAS design and test, V2X | 2022 |
| Schwarz et al. [93] | CPS | Vehicles, environment | ADAS design and test | 2010 |
| Eleonora et al. [94] | CPS | Vehicles | AGV logistics action test | 2017 |
| Chen et al. [95] | CPS | Drivers | Drivers' safety behaviors analysis | 2018 |
| Ge et al. [97] | CPS | Vehicles | ADAS design and test, V2X | 2019 |
| Veledar et al. [98] | CPS | Vehicles | ADAS design and test | 2019 |
| Liu et al. [100] | CPS | Vehicles | ADAS design and test | 2021 |
| Culley et al. [101] | VPG, MBD | Vehicles, environment | ADAS design and test | 2020 |
| Fremont et al. [102] | VPG | Environment, vehicles | Formal test based on scenario | 2020 |
| Voogd et al. [103] | Transfer learning | Vehicles | Reinforcement learning for autonomous driving | 2022 |

MBD: model-based design, CPS: cyber-physical systems, VPG: virtual proving ground.
It solves two problems of traditional simulators [87], [88]: 1) simulation testing is not equivalent to real end-to-end testing, and 2) factors such as weather, climate, and lighting are not fully covered. The digital twin paradigm is able to generate high-fidelity complex virtual environments and sensor models that approximate real data to realize a complete, comprehensive, accurate, and reliable digital twin entity. Table II provides a review of digital twins for autonomous driving applications.

In the field of autonomous driving, a digital twin system is usually constructed with three parts [87]: 1) the digital twin of the sensor model, 2) the digital twin of 3D digital maps, and 3) the logical twin corresponding to traffic flow control. Researchers typically design task-specific digital twin systems based on the above points. The digital twin system for automated guided vehicles enables the autonomous transportation of driven robots among three predefined tracks of processing stations through digital twin modeling, where the digital twin is only used as a dynamic simulation of discrete events without involving complex dynamic driving control techniques [94]. Rassõlkin et al. [96] propose a digital twin system for autonomous electric vehicles. They use MATLAB to model the sensors and vehicle models of the ISEAUTO vehicle and then combine data from the real devices and virtual sensors with a machine learning program.

In recent years, many different frameworks have been proposed.
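The virtual-real synchronization at the heart of such digital twin systems can be illustrated with a deliberately simple sketch: a virtual vehicle state is advanced by a dynamics model and continuously corrected by real measurements. The one-dimensional model and the correction gain below are illustrative assumptions, not part of the cited frameworks.

```python
# Minimal digital-twin synchronization loop: predict with a model, correct with
# real sensor data, and expose the fused state for analysis and planning.
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float = 0.0    # longitudinal position [m]
    v: float = 0.0    # speed [m/s]

class VehicleDigitalTwin:
    def __init__(self, gain: float = 0.3):
        self.state = VehicleState()
        self.gain = gain                 # how strongly measurements correct the twin

    def predict(self, accel: float, dt: float) -> None:
        """Advance the virtual state with a simple point-mass model."""
        self.state.x += self.state.v * dt
        self.state.v += accel * dt

    def correct(self, measured_x: float, measured_v: float) -> None:
        """Blend in real sensor measurements (a crude observer step)."""
        self.state.x += self.gain * (measured_x - self.state.x)
        self.state.v += self.gain * (measured_v - self.state.v)
```

In a full system the point-mass model would be replaced by calibrated vehicle dynamics and the blending step by a proper state estimator.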
TABLE III
COMPARISON OF SELECTED VIRTUAL-TO-REAL WORKS ON CONTROL OF AUTONOMOUS DRIVING WITH REAL DATA.

| Reference | System | Method | Application | Year |
|---|---|---|---|---|
| Wang et al. [125] | iHorizon | Parallel driving | Cooperative control of multi-vehicle autonomous driving | 2017 |
| Wang et al. [117] | ATS | Parallel control | Parallel traffic management framework | 2010 |
C. Parallel driving and parallel testing

The development of autonomous driving poses significant challenges to current vehicle and transportation systems. Wang et al. [125] propose three elements required for future connected autonomous driving: physical vehicles, human drivers, and cognitive attributes. Based on the ACP theory, the authors project these three elements onto three parallel worlds and create a parallel driving framework. As shown in Fig. 13, the three parallel worlds are interconnected. Each vehicle in the real world is assigned controlling functions as well as an ADAV module which is responsible for communication with the artificial world and other vehicles. This module also provides driving information for human drivers. Facilitated by the ADAV module, parallel driving enables joint responses to complex autonomous driving scenarios, with human drivers conditionally participating to ensure system safety. Furthermore, Liu et al. [120] integrate "digital quadruplets", including the physical vehicle, the descriptive vehicle, the predictive vehicle, and the prescriptive vehicle, in parallel driving. Based on the description of digital quadruplets, three virtual vehicles, which are defined as three "guardian angels" for the physical vehicle, play different roles to make intelligent vehicles safer and more reliable in complex scenarios. Autonomous driving methods based on parallel driving can learn drivers' behavior in response to different real and simulated scenarios through the integrated multi-ADAV modules so as to bridge the RG.

In the research of parallel driving, Wang et al. [125] propose the initial concept of parallel testing, in which a cyclic updating method is used to address the RG problem and verify the performance of autonomous driving. As shown in Fig. 14, the cyclic updating method of co-evolution between the real testing ground and the parallel virtual testing ground enhances the interaction of virtual reality and the authenticity of the scenarios, so tests in the virtual testing ground can reflect the performance of autonomous driving. Following this research, Li et al. [119] complete the framework of parallel testing, which emphasizes the real-time data interaction between real and virtual scenarios and the diversity of testing scenarios.
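One way to read the cyclic updating idea is as a loop in which virtual scenarios and real test results co-evolve. The sketch below is only an interpretation of that loop, with the scoring, testing, and perturbation functions left as user-supplied stand-ins; it does not reproduce the methods of [119] or [125].

```python
# Interpretation of cyclic virtual-real co-evolution in parallel testing:
# hard virtual scenarios are replayed on the real testing ground, and real
# failures (plus variations of them) are fed back into the virtual set.
def parallel_testing_cycle(virtual_scenarios, run_virtual_test, run_real_test,
                           perturb, rounds=10):
    for _ in range(rounds):
        scored = sorted(virtual_scenarios, key=run_virtual_test)      # low score = hard
        hardest = scored[: max(1, len(scored) // 10)]
        real_failures = [s for s in hardest if not run_real_test(s)]  # replay in reality
        virtual_scenarios = virtual_scenarios + real_failures \
                            + [perturb(s) for s in real_failures]     # co-evolution
    return virtual_scenarios
```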
D. Parallel planning and parallel control

Planning is one of the important parts of autonomous driving systems. To address the problem of emergency traffic scenarios in the real world for self-driving vehicles, Chen et al. [131] propose the method of parallel planning. As shown in Fig. 15, the method involves modeling emergency traffic scenarios.
TABLE IV
SIMULATORS AND RELATED DESCRIPTIONS FOR AUTONOMOUS DRIVING.

| Simulator | Description | Characteristics |
|---|---|---|
| AirSim [14] | High-fidelity platform developed with Unreal Engine | High-fidelity environment and car models |
| Gazebo [134] | Open-source robot simulator | Strong physical modeling capabilities |
| CARLA [15] | Unreal Engine-based open-source autopilot simulator | High-fidelity sensor data |
| SUMO [137] | Open-source traffic simulation platform | Generates complex traffic systems |
| Apollo [140] | Large-scale datasets consisting of video and 3D point clouds | Data with high-precision attitude information |
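For reference, connecting to one of these simulators takes only a few lines. The sketch below assumes CARLA's Python API (around version 0.9.x) with a server already running on localhost:2000; it merely spawns a vehicle and hands it to the built-in autopilot as a starting point for the sim2real and DT experiments discussed above.

```python
# Minimal CARLA client: spawn a vehicle and let the built-in autopilot drive it.
import random
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

blueprint = random.choice(world.get_blueprint_library().filter("vehicle.*"))
spawn_point = random.choice(world.get_map().get_spawn_points())
vehicle = world.spawn_actor(blueprint, spawn_point)

try:
    vehicle.set_autopilot(True)
    for _ in range(1000):
        world.wait_for_tick()                 # follow the simulator clock
        print(vehicle.get_transform().location)
finally:
    vehicle.destroy()
```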
VI. FUTURE WORKS AND CHALLENGES

(1) There is currently no unified way to quantify the reality gap of the sim2real process. In this case, we need to consider a way to quantify and represent the reality gap in the current autonomous driving field and compare different environments and applications, which is conducive to further research. Moreover, quantifying the RG helps in deriving relevant algorithms to reduce it and achieve the leap from the virtual to the real world.

(2) As a solution for training generalized reinforcement learning models, sim2real needs a lot of data to support its training process. We expect to use more real data, but real data are difficult to obtain compared with simulated data. Real data are often difficult and challenging to collect, classify, and generalize to a standard distribution.

(3) The digital twin is nowadays mainly used as an applied technology in the fields of robotics and autonomous driving to achieve relevant tasks directly by building digital twin entities or environments. However, the specific methodology and interpretation of the related characteristics of digital twins are scarce in the relevant literature. It is still a difficult task to achieve a comprehensive digital twin. With the help of meta-learning, the digital twin can obtain prior learning experience through the previous twin in the processes of environment interaction or simulated object synchronization, leading to an iterative optimization process.

(4) The models and algorithms of DT technologies are separated, and there is no comprehensive method to combine and evaluate them. For the model, environmental parameters, and some random noise in DTs, there is a lack of a synchronous sim2real method that can be considered as a dynamic process to improve the generalization of the transfer process.

(5) Parallel intelligence technologies improve autonomous driving systems' performance, handle big data, and solve the RG problem through multi-unit parallel computing and execution. With the expansion of the computing scale and the increase of functional requirements in the autonomous driving field, parallel intelligence technologies need to meet the requirements of ever-increasing computing scale and complexity.

(6) Parallel intelligence can help autonomous driving systems learn knowledge from the data generated by the physical and artificial systems, but the generated data exhibit characteristics of multi-modality, high volume, and redundancy. Therefore, how to perform efficient data analysis to filter out useless data and make a precise representation of the physical system are problems for PI-based autonomous driving systems.

VII. CONCLUSION

Autonomous driving research requires a significant amount of real-world data to train reliable and robust algorithms. On one hand, using real data in extreme and scarce scenarios often leads to a high cost, so simulated data are typically used to meet the requirement. On the other hand, there is always a gap between the simulated and real worlds, so it is necessary to further investigate methods for transferring knowledge from simulation to reality. In this paper, we comprehensively review the state of the art of sim2real, DTs, and PI. In sim2real, we mainly introduce the methodologies and related applications
from the view of different categories; for DTs, we focus on the virtual technologies they depend on, and then introduce the common frameworks and applications in autonomous driving. Subsequently, the parallel intelligence theory and technologies, including parallel learning, parallel vision, and parallel driving, in autonomous driving are reviewed. To demonstrate the simulation environments for realizing sim2real, DT, and PI methods, we also summarize existing autonomous driving simulators. In addition, we present existing challenges for the future development of autonomous driving with sim2real, DTs, and PI in this paper.

REFERENCES

[1] X. Hu, L. Chen, B. Tang, D. Cao, H. He, Dynamic path planning for autonomous driving on various roads with avoidance of static and moving obstacles, Mechanical Systems and Signal Processing 100 (2018) 482–500.
[2] W. Zhang, A robust lateral tracking control strategy for autonomous driving vehicles, Mechanical Systems and Signal Processing 150 (2021) 1–15.
[3] L. Chen, Y. Li, C. Huang, B. Li, Y. Xing, D. Tian, L. Li, Z. Hu, X. Na, Z. Li, S. Teng, C. Lv, J. Wang, D. Cao, N. Zheng, F.-Y. Wang, Milestones in autonomous driving and intelligent vehicles: Survey of surveys, IEEE Transactions on Intelligent Vehicles 8 (2) (2023) 1046–1056. doi:10.1109/TIV.2022.3223131.
[4] L. Chen, Y. Li, C. Huang, Y. Xing, D. Tian, L. Li, Z. Hu, S. Teng, C. Lv, J. Wang, D. Cao, N. Zheng, F.-Y. Wang, Milestones in autonomous driving and intelligent vehicles—part 1: Control, computing system design, communication, hd map, testing, and human behaviors, IEEE Transactions on Systems, Man, and Cybernetics: Systems (2023) 1–17. doi:10.1109/TSMC.2023.3276218.
[5] L. Chen, S. Teng, B. Li, X. Na, Y. Li, Z. Li, J. Wang, D. Cao, N. Zheng, F.-Y. Wang, Milestones in autonomous driving and intelligent vehicles—part ii: Perception and planning, IEEE Transactions on Systems, Man, and Cybernetics: Systems (2023) 1–15. doi:10.1109/TSMC.2023.3283021.
[6] X. Hu, B. Tang, L. Chen, S. Song, X. Tong, Learning a deep cascaded neural network for multiple motion commands prediction in autonomous driving, IEEE Transactions on Intelligent Transportation Systems 22 (12) (2022) 7585–7596.
[7] Y. Liu, X. Ji, K. Yang, X. He, X. Na, Y. Liu, Finite-time optimized robust control with adaptive state estimation algorithm for autonomous heavy vehicle, Mechanical Systems and Signal Processing 139 (2020) 1–22.
[8] S. Teng, L. Chen, Y. Ai, Y. Zhou, Z. Xuanyuan, X. Hu, Hierarchical interpretable imitation learning for end-to-end autonomous driving, IEEE Transactions on Intelligent Vehicles 8 (1) (2023) 673–683. doi:10.1109/TIV.2022.3225340.
[9] S. Teng, X. Hu, P. Deng, B. Li, Y. Li, Y. Ai, D. Yang, L. Li, Z. Xuanyuan, F. Zhu, L. Chen, Motion planning for autonomous driving: The state of the art and future perspectives, IEEE Transactions on Intelligent Vehicles (2023) 1–21. doi:10.1109/TIV.2023.3274536.
[10] A. Kadian, J. Truong, A. Gokaslan, A. Clegg, E. Wijmans, S. Lee, M. Savva, S. Chernova, D. Batra, Sim2real predictivity: Does evaluation in simulation predict real-world performance?, IEEE Robotics and Automation Letters 5 (4) (2020) 6670–6677.
[11] K. Kang, S. Belkhale, G. Kahn, P. Abbeel, S. Levine, Generalization through simulation: Integrating simulated and real data into deep reinforcement learning for vision-based autonomous flight, in: 2019 International Conference on Robotics and Automation (ICRA), IEEE, 2019, pp. 6008–6014.
[12] L. Edington, N. Dervilis, A. B. Abdessalem, D. Wagg, A time-evolving digital twin tool for engineering dynamics applications, Mechanical Systems and Signal Processing 188 (2023) 1–16.
[13] X. Wang, L. Li, Y. Yuan, P. Ye, F.-Y. Wang, Acp-based social computing and parallel intelligence: Societies 5.0 and beyond, CAAI Transactions on Intelligence Technology 1 (4) (2016) 377–393.
[14] S. Shah, D. Dey, C. Lovett, A. Kapoor, Airsim: High-fidelity visual and physical simulation for autonomous vehicles, in: Field and Service Robotics: Results of the 11th International Conference, Springer, 2018, pp. 621–635.
[15] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, V. Koltun, Carla: An open urban driving simulator, in: Conference on Robot Learning, PMLR, 2017, pp. 1–16.
[16] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, D. Wierstra, Continuous control with deep reinforcement learning, arXiv preprint arXiv:1509.02971 (2015).
[17] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, K. Kavukcuoglu, Asynchronous methods for deep reinforcement learning, in: International Conference on Machine Learning, PMLR, 2016, pp. 1928–1937.
[18] Z. Qiao, K. Muelling, J. M. Dolan, P. Palanisamy, P. Mudalige, Automatically generated curriculum based reinforcement learning for autonomous vehicles in urban environment, in: 2018 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2018, pp. 1233–1238.
[19] J. Bae, T. Kim, W. Lee, I. Shim, Curriculum learning for vehicle lateral stability estimations, IEEE Access 9 (2021) 89249–89262.
[20] Y. Song, H. Lin, E. Kaufmann, P. Dürr, D. Scaramuzza, Autonomous overtaking in gran turismo sport using curriculum reinforcement learning, in: 2021 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2021, pp. 9403–9409.
[21] L. Anzalone, P. Barra, S. Barra, A. Castiglione, M. Nappi, An end-to-end curriculum learning approach for autonomous driving scenarios, IEEE Transactions on Intelligent Transportation Systems 23 (10) (2022) 19817–19826.
[22] A. Nagabandi, I. Clavera, S. Liu, R. S. Fearing, P. Abbeel, S. Levine, C. Finn, Learning to adapt in dynamic, real-world environments through meta-reinforcement learning, arXiv preprint arXiv:1803.11347 (2018).
[23] Y. Jaafra, A. Deruyver, J. L. Laurent, M. S. Naceur, Context-aware autonomous driving using meta-reinforcement learning, in: 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA), IEEE, 2019, pp. 450–455.
[24] A. Kar, A. Prakash, M.-Y. Liu, E. Cameracci, J. Yuan, M. Rusiniak, D. Acuna, A. Torralba, S. Fidler, Meta-sim: Learning to generate synthetic datasets, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 4551–4560.
[25] M. R. U. Saputra, P. P. De Gusmao, Y. Almalioglu, A. Markham, N. Trigoni, Distilling knowledge from a deep pose regressor network, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 263–272.
[26] A. Zhao, T. He, Y. Liang, H. Huang, G. Van den Broeck, S. Soatto, Sam: Squeeze-and-mimic networks for conditional visual driving policy learning, in: Conference on Robot Learning, PMLR, 2021, pp. 156–175.
[27] L. Zhang, R. Dong, H.-S. Tai, K. Ma, Pointdistiller: Structured knowledge distillation towards efficient and compact 3d detection, arXiv preprint arXiv:2205.11098 (2022).
[28] C. Sautier, G. Puy, S. Gidaris, A. Boulch, A. Bursuc, R. Marlet, Image-to-lidar self-supervised distillation for autonomous driving data, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 9891–9901.
[29] J. Li, H. Dai, Y. Ding, Self-distillation for robust lidar semantic segmentation in autonomous driving, in: Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVIII, Springer, 2022, pp. 659–676.
[30] X. He, B. Lou, H. Yang, C. Lv, Robust decision making for autonomous vehicles at highway on-ramps: A constrained adversarial reinforcement learning approach, IEEE Transactions on Intelligent Transportation Systems (2022).
[31] X. Pan, D. Seita, Y. Gao, J. Canny, Risk averse robust adversarial reinforcement learning, in: 2019 International Conference on Robotics and Automation (ICRA), IEEE, 2019, pp. 8522–8528.
[32] X. Yue, Y. Zhang, S. Zhao, A. Sangiovanni-Vincentelli, K. Keutzer, B. Gong, Domain randomization and pyramid consistency: Simulation-to-real generalization without accessing target domain data, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 2100–2110.
[33] G. D. Kontes, D. D. Scherer, T. Nisslbeck, J. Fischer, C. Mutschler, High-speed collision avoidance using deep reinforcement learning and domain randomization for autonomous vehicles, in: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), IEEE, 2020, pp. 1–8.
[34] S. Pouyanfar, M. Saleem, N. George, S.-C. Chen, Roads: Randomization for obstacle avoidance and driving in simulation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019, pp. 0–0.
[35] J. Kim, C. Park, End-to-end ego lane estimation based on sequential transfer learning for self-driving cars, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2017, pp. 30–38.
[36] S. Akhauri, L. Y. Zheng, M. C. Lin, Enhanced transfer learning for autonomous driving with systematic accident simulation, in: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2020, pp. 5986–5993.
[37] Y. Bengio, J. Louradour, R. Collobert, J. Weston, Curriculum learning, in: Proceedings of the 26th Annual International Conference on Machine Learning, 2009, pp. 41–48.
[38] M. Kumar, B. Packer, D. Koller, Self-paced learning for latent variable models, Advances in Neural Information Processing Systems 23 (2010).
[39] P. Soviany, R. T. Ionescu, P. Rota, N. Sebe, Curriculum self-paced learning for cross-domain object detection, Computer Vision and Image Understanding 204 (2021) 103166.
[40] T. Matiisen, A. Oliver, T. Cohen, J. Schulman, Teacher–student curriculum learning, IEEE Transactions on Neural Networks and Learning Systems 31 (9) (2019) 3732–3740.
[41] S. Narvekar, B. Peng, M. Leonetti, J. Sinapov, M. E. Taylor, P. Stone, Curriculum learning for reinforcement learning domains: A framework and survey, The Journal of Machine Learning Research 21 (1) (2020) 7382–7431.
[42] C. Florensa, D. Held, M. Wulfmeier, M. Zhang, P. Abbeel, Reverse curriculum generation for reinforcement learning, in: Conference on Robot Learning, PMLR, 2017, pp. 482–495.
[43] J. Schmidhuber, Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... hook, Ph.D. thesis, Technische Universität München (1987).
[44] S. Thrun, L. Pratt, Learning to learn: Introduction and overview, Learning to Learn (1998) 3–17.
[45] M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, B. Shillingford, N. De Freitas, Learning to learn by gradient descent by gradient descent, Advances in Neural Information Processing Systems 29 (2016).
[46] J. Snell, K. Swersky, R. Zemel, Prototypical networks for few-shot learning, Advances in Neural Information Processing Systems 30 (2017).
[47] E. Real, A. Aggarwal, Y. Huang, Q. V. Le, Regularized evolution for image classifier architecture search, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, 2019, pp. 4780–4789.
[48] C. Finn, P. Abbeel, S. Levine, Model-agnostic meta-learning for fast adaptation of deep networks, in: International Conference on Machine Learning, PMLR, 2017, pp. 1126–1135.
[49] S. Sæmundsson, K. Hofmann, M. P. Deisenroth, Meta reinforcement learning with latent variable gaussian processes, arXiv preprint arXiv:1803.07551 (2018).
[50] G. Hinton, O. Vinyals, J. Dean, Distilling the knowledge in a neural network, arXiv preprint arXiv:1503.02531 (2015).
[51] A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, Communications of the ACM 60 (6) (2017) 84–90.
[52] Z. Xu, Y.-C. Hsu, J. Huang, Training shallow and thin networks for acceleration via knowledge distillation with conditional adversarial networks, arXiv preprint arXiv:1709.00513 (2017).
[53] J. Morimoto, K. Doya, Robust reinforcement learning, Neural Computation 17 (2) (2005) 335–359.
[54] L. Pinto, J. Davidson, R. Sukthankar, A. Gupta, Robust adversarial reinforcement learning, in: International Conference on Machine Learning, PMLR, 2017, pp. 2817–2826.
[55] X. He, H. Yang, Z. Hu, C. Lv, Robust lane change decision making for autonomous vehicles: An observation adversarial reinforcement learning approach, IEEE Transactions on Intelligent Vehicles (2022).
[56] P. Huang, M. Xu, F. Fang, D. Zhao, Robust reinforcement learning as a stackelberg game via adaptively-regularized adversarial training, arXiv preprint arXiv:2202.09514 (2022).
[57] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, P. Abbeel, Domain randomization for transferring deep neural networks from simulation to the real world, in: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2017, pp. 23–30.
[58] F. Sadeghi, S. Levine, Cad2rl: Real single-image flight without a single real image, arXiv preprint arXiv:1611.04201 (2016).
[59] W. Yu, J. Tan, C. K. Liu, G. Turk, Preparing for the unknown: Learning a universal policy with online system identification, arXiv preprint arXiv:1702.02453 (2017).
[60] I. Mordatch, K. Lowrey, E. Todorov, Ensemble-cio: Full-body dynamic motion planning that transfers to physical humanoids, in: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2015, pp. 5307–5314.
[61] K. Weiss, T. M. Khoshgoftaar, D. Wang, A survey of transfer learning, Journal of Big Data 3 (1) (2016) 1–40.
[62] D. Isele, A. Cosgun, Transferring autonomous driving knowledge on simulated and real intersections, arXiv preprint arXiv:1712.01106 (2017).
[63] E. Tzeng, C. Devin, J. Hoffman, C. Finn, P. Abbeel, S. Levine, K. Saenko, T. Darrell, Adapting deep visuomotor representations with weak pairwise constraints, in: Algorithmic Foundations of Robotics XII: Proceedings of the Twelfth Workshop on the Algorithmic Foundations of Robotics, Springer, 2020, pp. 688–703.
[64] H. B. Ammar, E. Eaton, P. Ruvolo, M. Taylor, Online multi-task learning for policy gradient methods, in: International Conference on Machine Learning, PMLR, 2014, pp. 1206–1214.
[65] S. Sharma, J. E. Ball, B. Tang, D. W. Carruth, M. Doude, M. A. Islam, Semantic segmentation with transfer learning for off-road autonomous driving, Sensors 19 (11) (2019) 2577.
[66] C. Schwarz, Z. Wang, The role of digital twins in connected and automated vehicles, IEEE Intelligent Transportation Systems Magazine 14 (6) (2022) 41–51.
[67] M. Singh, E. Fuenmayor, E. P. Hinchy, Y. Qiao, N. Murray, D. Devine, Digital twin: Origin to future, Applied System Innovation 4 (2) (2021) 36.
[68] Y. Vered, S. J. Elliott, The use of digital twins to remotely update feedback controllers for the motion control of nonlinear dynamic systems, Mechanical Systems and Signal Processing 185 (2023) 1–17.
[69] T. Fei, X. Bin, Q. Qinglin, C. Jiangfeng, J. Ping, Digital twin modeling, Journal of Manufacturing Systems 64 (2022) 372–389.
[70] B. Mark, C. Adrian, L. Gun, et al., A survey of augmented reality, Foundations and Trends® in Human–Computer Interaction 8 (2-3) (2015) 73–272.
[71] R. T. Azuma, A survey of augmented reality, Presence: Teleoperators & Virtual Environments 6 (4) (1997) 355–385.
[72] E. Costanza, A. Kunz, M. Fjeld, Mixed reality: A survey, Springer, 2009.
[73] W. Hoenig, C. Milanes, L. Scaria, T. Phan, M. Bolas, N. Ayanian, Mixed reality for robotics, in: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2015, pp. 5382–5387.
[74] P. Lindemann, T.-Y. Lee, G. Rigoll, An explanatory windshield display interface with augmented reality elements for urban autonomous driving, in: 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), IEEE, 2018, pp. 36–37.
[75] S. Su, X. Zeng, S. Song, M. Lin, H. Dai, W. Yang, C. Hu, Positioning accuracy improvement of automated guided vehicles based on a novel magnetic tracking approach, IEEE Intelligent Transportation Systems Magazine 12 (4) (2018) 138–148.
[76] M. Szalai, B. Varga, T. Tettamanti, V. Tihanyi, Mixed reality test environment for autonomous cars using unity 3d and sumo, in: 2020 IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI), IEEE, 2020, pp. 73–78.
[77] M. R. Zofka, M. Essinger, T. Fleck, R. Kohlhaas, J. M. Zöllner, The sleepwalker framework: Verification and validation of autonomous vehicles by mixed reality lidar stimulation, in: 2018 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), IEEE, 2018, pp. 151–157.
[78] X. Gao, X. Wu, S. Ho, T. Misu, K. Akash, Effects of augmented-reality-based assisting interfaces on drivers' object-wise situational awareness in highly autonomous vehicles, in: 2022 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2022, pp. 563–572.
[79] Z. Wang, K. Han, P. Tiwari, Augmented reality-based advanced driver-assistance system for connected vehicles, in: 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, 2020, pp. 752–759.
[80] R. Williams, J. A. Erkoyuncu, T. Masood, et al., Augmented reality assisted calibration of digital twins of mobile robots, IFAC-PapersOnLine 53 (3) (2020) 203–208.
[81] T. Tettamanti, M. Szalai, S. Vass, V. Tihanyi, Vehicle-in-the-loop test environment for autonomous driving with microscopic traffic simulation, in: 2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES), IEEE, 2018, pp. 1–6.
[82] R. Mitchell, J. Fletcher, J. Panerati, A. Prorok, Multi-vehicle mixed-reality reinforcement learning for autonomous multi-lane driving, arXiv preprint arXiv:1911.11699 (2019).
[83] R. Moezzi, D. Krcmarik, H. Bahri, J. Hlava, Autonomous vehicle control based on hololens technology and raspberry pi platform: An educational perspective, IFAC-PapersOnLine 52 (27) (2019) 80–85.
[84] X. Tu, J. Autiosalo, A. Jadid, K. Tammi, G. Klinker, A mixed reality interface for a digital twin based crane, Applied Sciences 11 (20) (2021) 9480.
[85] S. H. Choi, K.-B. Park, D. H. Roh, J. Y. Lee, M. Mohammed, Y. Ghasemi, H. Jeong, An integrated mixed reality system for safety-aware human-robot collaboration using deep learning and digital twin generation, Robotics and Computer-Integrated Manufacturing 73 (2022) 102258.
[86] K. Lalik, S. Flaga, A real-time distance measurement system for a digital twin using mixed reality goggles, Sensors 21 (23) (2021) 7870.
[87] B. Yu, C. Chen, J. Tang, S. Liu, J.-L. Gaudiot, Autonomous vehicles digital twin: A practical paradigm for autonomous driving system development, Computer 55 (9) (2022) 26–34.
[88] E. Salvato, G. Fenu, E. Medvet, F. A. Pellegrino, Crossing the reality gap: A survey on sim-to-real transferability of robot controllers in reinforcement learning, IEEE Access 9 (2021) 153171–153187.
[89] Y. Laschinsky, K. von Neumann-Cosel, M. Gonter, C. Wegwerth, R. Dubitzky, A. Knoll, Evaluation of an active safety light using virtual test drive within vehicle in the loop, in: 2010 IEEE International Conference on Industrial Technology, IEEE, 2010, pp. 1119–1112.
[90] H. Shikata, T. Yamashita, K. Arai, T. Nakano, K. Hatanaka, H. Fujikawa, Digital twin environment to integrate vehicle simulation and physical verification, SEI Technical Review 88 (2019) 18–21.
[91] V. Dygalo, A. Keller, A. Shcherbin, Principles of application of virtual and physical simulation technology in production of digital twin of active vehicle safety systems, Transportation Research Procedia 50 (2020) 121–129.
[92] J. Wu, Z. Huang, P. Hang, C. Huang, N. De Boer, C. Lv, Digital twin-enabled reinforcement learning for end-to-end autonomous driving, in: 2021 IEEE 1st International Conference on Digital Twins and Parallel Intelligence (DTPI), IEEE, 2021, pp. 62–65.
[93] C. Schwarz, K. Moran, Digital map enhancements of electronic stability control, SAE International, Warrendale, PA, USA, SAE Tech. Paper (2010) 0148–7191.
[94] E. Bottani, A. Cammardella, T. Murino, S. Vespoli, et al., From the cyber-physical system to the digital twin: the process development for behaviour modelling of a cyber guided vehicle in m2m logic, XXII Summer School Francesco Turco - Industrial Systems Engineering (2017) 1–7.
[95] X. Chen, E. Kang, S. Shiraishi, V. M. Preciado, Z. Jiang, Digital behavioral twins for safe connected cars, in: Proceedings of the 21th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, 2018, pp. 144–153.
[96] A. Rassõlkin, T. Vaimann, A. Kallaste, V. Kuts, Digital twin for propulsion drive of autonomous electric vehicle, in: 2019 IEEE 60th International Scientific Conference on Power and Electrical Engineering of Riga Technical University (RTUCON), IEEE, 2019, pp. 1–4.
[97] Y. Ge, Y. Wang, R. Yu, Q. Han, Y. Chen, Research on test method of autonomous driving based on digital twin, in: 2019 IEEE Vehicular Networking Conference (VNC), IEEE, 2019, pp. 1–2.
[98] O. Veledar, V. Damjanovic-Behrendt, G. Macher, Digital twins for dependability improvement of autonomous driving, in: Systems, Software and Services Process Improvement: 26th European Conference, EuroSPI 2019, Edinburgh, UK, September 18–20, 2019, Proceedings 26, Springer, 2019, pp. 415–426.
[99] Y. Liu, Z. Wang, K. Han, Z. Shou, P. Tiwari, J. H. Hansen, Sensor fusion of camera and cloud digital twin information for intelligent vehicles, in: 2020 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2020, pp. 182–187.
[100] S. Liu, B. Yu, J. Tang, Q. Zhu, Towards fully intelligent transportation through infrastructure-vehicle cooperative autonomous driving: Challenges and opportunities, in: 2021 58th ACM/IEEE Design Automation Conference (DAC), IEEE, 2021, pp. 1323–1326.
[101] J. Culley, S. Garlick, E. G. Esteller, P. Georgiev, I. Fursa, I. Vander Sluis, P. Ball, A. Bradley, System design for a driverless autonomous racing vehicle, in: 2020 12th International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP), IEEE, 2020, pp. 1–6.
[102] D. J. Fremont, E. Kim, Y. V. Pant, S. A. Seshia, A. Acharya, X. Bruso, P. Wells, S. Lemke, Q. Lu, S. Mehta, Formal scenario-based testing of autonomous vehicles: From simulation to the real world, in: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), IEEE, 2020, pp. 1–8.
[103] K. Voogd, J. P. Allamaa, J. Alonso-Mora, T. D. Son, Reinforcement learning from simulation to real world autonomous driving using digital twin, arXiv preprint arXiv:2211.14874 (2022).
[104] H. Xiong, Z. Wang, G. Wu, Y. Pan, Design and implementation of digital twin-assisted simulation method for autonomous vehicle in car-following scenario, Journal of Sensors 2022 (2022).
[105] S. Almeaibed, S. Al-Rubaye, A. Tsourdos, N. P. Avdelidis, Digital twin analysis to promote safety and security in autonomous vehicles, IEEE Communications Standards Magazine 5 (1) (2021) 40–46.
[106] A. Niaz, M. U. Shoukat, Y. Jia, S. Khan, F. Niaz, M. U. Raza, Autonomous driving test method based on digital twin: A survey, in: 2021 International Conference on Computing, Electronic and Electrical Engineering (ICE Cube), IEEE, 2021, pp. 1–7.
[107] S.-H. Wang, C.-H. Tu, J.-C. Juang, Automatic traffic modelling for creating digital twins to facilitate autonomous vehicle development, Connection Science 34 (1) (2022) 1018–1037.
[108] X. Xu, K. Liu, P. Dai, B. Chen, Enabling digital twin in vehicular edge computing: A multi-agent multi-objective deep reinforcement learning solution, arXiv preprint arXiv:2210.17386 (2022).
[109] G. Rong, B. H. Shin, H. Tabatabaee, Q. Lu, S. Lemke, M. Možeiko, E. Boise, G. Uhm, M. Gerow, S. Mehta, et al., Lgsvl simulator: A high fidelity simulator for autonomous driving, in: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), IEEE, 2020, pp. 1–6.
[110] Z. Hu, S. Lou, Y. Xing, X. Wang, D. Cao, C. Lv, Review and perspectives on driver digital twin and its enabling technologies for intelligent vehicles, IEEE Transactions on Intelligent Vehicles (2022).
[111] F.-Y. Wang, Parallel system methods for management and control of complex systems, Control and Decision 19 (2004) 485–489.
[112] K. M. Alam, A. El Saddik, C2ps: A digital twin architecture reference model for the cloud-based cyber-physical systems, IEEE Access 5 (2017) 2050–2062.
[113] G. Bhatti, H. Mohan, R. R. Singh, Towards the future of smart electric vehicles: Digital twin technology, Renewable and Sustainable Energy Reviews 141 (2021) 110801.
[114] F.-Y. Wang, Toward a revolution in transportation operations: Ai for complex systems, IEEE Intelligent Systems 23 (6) (2008) 8–13.
[115] L. Chen, X. Hu, B. Tang, D. Cao, Parallel motion planning: Learning a deep planning model against emergencies, IEEE Intelligent Transportation Systems Magazine 11 (1) (2018) 36–41.
[116] K. Wang, C. Gou, N. Zheng, J. M. Rehg, F.-Y. Wang, Parallel vision for perception and understanding of complex scenes: methods, framework, and perspectives, Artificial Intelligence Review 48 (2017) 299–329.
[117] F.-Y. Wang, Parallel control and management for intelligent transportation systems: Concepts, architectures, and applications, IEEE Transactions on Intelligent Transportation Systems 11 (3) (2010) 630–638.
[118] J. Lu, Q. Wei, T. Zhou, Z. Wang, F.-Y. Wang, Event-triggered near-optimal control for unknown discrete-time nonlinear systems using parallel control, IEEE Transactions on Cybernetics (2022).
[119] L. Li, X. Wang, K. Wang, Y. Lin, J. Xin, L. Chen, L. Xu, B. Tian, Y. Ai, J. Wang, et al., Parallel testing of vehicle intelligence via virtual-real interaction, Science Robotics 4 (28) (2019) eaaw4106.
[120] T. Liu, X. Wang, Y. Xing, Y. Gao, B. Tian, L. Chen, Research on digital quadruplets in cyber-physical-social space-based parallel driving, Chinese Journal of Intelligent Science and Technology 1 (1) (2019) 40–51.
[121] J. Yang, X. Wang, Y. Zhao, Parallel manufacturing for industrial metaverses: A new paradigm in smart manufacturing, IEEE/CAA Journal of Automatica Sinica 9 (12) (2022) 2063–2070.
[122] L. Li, Y.-L. Lin, D.-P. Cao, N.-N. Zheng, F.-Y. Wang, Parallel learning - a new framework for machine learning, Acta Automatica Sinica 43 (1) (2017) 1–8.
[123] W. Zhang, K. Wang, Y. Liu, Y. Lu, F.-Y. Wang, A parallel vision approach to scene-specific pedestrian detection, Neurocomputing 394 (2020) 114–126.
[124] W. Zheng, K. Wang, F.-Y. Wang, A novel background subtraction algorithm based on parallel vision and bayesian gans, Neurocomputing 394 (2020) 178–200.
[125] F.-Y. Wang, N.-N. Zheng, D. Cao, C. M. Martinez, L. Li, T. Liu, Parallel driving in cpss: A unified approach for transport automation and vehicle intelligence, IEEE/CAA Journal of Automatica Sinica 4 (4) (2017) 577–587.
[126] L. Chen, Q. Wang, X. Lu, D. Cao, F.-Y. Wang, Learning driving models from parallel end-to-end driving data set, Proceedings of the IEEE 108 (2) (2019) 262–273.
[127] T. Liu, X. Yang, H. Wang, X. Tang, L. Chen, H. Yu, F.-Y. Wang, Digital quadruplets for cyber-physical-social systems based parallel driving: From concept to applications, arXiv preprint arXiv:2007.10799 (2020).
Xuemin Hu is currently an Associate Professor with the School of Artificial Intelligence, Hubei University, Wuhan, China. He received the B.S. degree in Biomedical Engineering from Huazhong University of Science and Technology and the Ph.D. degree in Signal and Information Processing from Wuhan University in 2007 and 2012, respectively. He was a visiting scholar at the University of Rhode Island, Kingston, RI, USA, from November 2015 to May 2016. His areas of interest include computer vision, machine learning, motion planning, and autonomous driving.

Shen Li received the B.S. degree in Computer Science and Technology from Liaoning University in 2018. Since September 2022, he has been pursuing his Master's degree in the School of Artificial Intelligence, Hubei University, Wuhan, China. His areas of interest include deep learning and autonomous driving.

Tingyu Huang received the B.S. degree in Communication Engineering from Hunan Institute of Science and Technology in 2022. Since September 2022, she has been pursuing her Master's degree in the School of Artificial Intelligence, Hubei University, Wuhan, China. Her areas of interest include deep learning and autonomous driving.

Bo Tang is an Associate Professor in the Department of Electrical and Computer Engineering at Worcester Polytechnic Institute. Prior to this, he was an Assistant Professor in the Department of Electrical and Computer Engineering at Mississippi State University. He received the Ph.D. degree in electrical engineering from the University of Rhode Island (Kingston, RI) in 2016. His research interests lie in the general areas of bio-inspired artificial intelligence (AI), AI security, edge AI, and their applications in Cyber-Physical Systems (e.g., wireless networks, autonomous vehicles, and power systems). He is currently an Associate Editor for IEEE Transactions on Neural Networks and Learning Systems.

Rouxing Huai, currently an Advisory Scientist at Beijing Huairou Academy of Parallel Sensing, is a senior researcher in mechatronics, intelligent systems, color optics, and parallel optical fields. He received his Ph.D. in Computer and Systems Engineering in the USA in 1990 and worked at the University of Arizona for 21 years on Robotics and Intelligent Systems for Manufacturing, Space Exploration, and Optical Applications.

Long Chen is currently a Professor with the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China. His research interests include autonomous driving, robotics, and artificial intelligence, where he has contributed more than 100 publications. He received the IEEE Vehicular Technology Society 2018 Best Land Transportation Paper Award, the IEEE Intelligent Vehicles Symposium 2018 Best Student Paper Award and Best Workshop Paper Award, the IEEE Intelligent Transportation Systems Society 2021 Outstanding Application Award, the IEEE Conference on Digital Twins and Parallel Intelligence 2021 Best Paper and Outstanding Paper Awards, and the IEEE International Conference on Intelligent Transportation Systems 2021 Best Paper Award. He serves as an Associate Editor for the IEEE Transactions on Intelligent Transportation Systems, the IEEE/CAA Journal of Automatica Sinica, the IEEE Transactions on Intelligent Vehicles, and the IEEE Technical Committee on Cyber-Physical Systems.