
2024 IEEE 9th International Conference for Convergence in Technology (I2CT)

Pune, India. Apr 5-7, 2024

Autonomous Navigation Algorithm for Planetary Rovers Based on Multimodality

Hrishikesh H Pillai
College of Engineering Trivandrum
APJ Abdul Kalam Technological University
Thiruvananthapuram, Kerala, India
[email protected]

Malavika R Saji
College of Engineering Trivandrum
APJ Abdul Kalam Technological University
Thiruvananthapuram, Kerala, India
[email protected]

Lal Priya P S
College of Engineering Trivandrum
APJ Abdul Kalam Technological University
Thiruvananthapuram, Kerala, India
[email protected]

Abstract—Planetary rovers are essential robotic exploration devices that are vital for the investigation of extraterrestrial environments, where they analyze the atmospheric conditions and terrain. These rovers must endure intense acceleration, withstand severe environmental conditions, and maintain their functionality for a prolonged period. These robotic explorers operate in distant and hostile environments where human intervention is not feasible. In the quest to surmount these challenges, they are required to function autonomously and rely on state-of-the-art technologies. This article proposes two multimodal architectures, one which fuses object detection and semantic segmentation, and the other which fuses monocular depth estimation and semantic segmentation, to drive the rover autonomously. The proposed multimodal architectures are robust and enable the rover to navigate autonomously in highly uncertain and unpredictable environments. The results show that implementing the proposed architectures can successfully achieve the rover's autonomous navigation.

Index Terms—Multimodal, semantic segmentation, monocular depth estimation, object detection, stereo, depth slicing, Hadamard product.

I. INTRODUCTION

A rover is a device for exploring planetary surfaces, specifically crafted to traverse the challenging terrains of planets, moons, and other celestial bodies within our solar system. The inception of rover missions is attributable to the quest to expand our exploration beyond the Earth and unravel the mysteries of the universe. They are crafted with precise engineering and are equipped with a plethora of instruments. They play a crucial role in planetary exploration by providing valuable insights into distant worlds. These intrepid machines have found a home on planets such as Mars, where NASA's trailblazing rovers such as Spirit, Opportunity, Curiosity, and Perseverance have etched their names into the archives of space exploration [1].

Since the rovers are deployed using spacecraft, they must possess a compact design. Moreover, following deployment, these rovers encounter many challenges, including uneven terrains, extreme temperatures, high levels of acceleration, vast pressure variations, dust exposure, corrosion, and cosmic ray exposure. To traverse challenging terrains, the rovers are equipped with wheels specially designed for the purpose [2]. Moreover, the onboard computing system of the rover must be capable of withstanding these challenging conditions. In addition, real-time control of these rovers from the Earth faces challenges due to the relatively slow speed of communication. Hence, the rover requires decision-making capabilities with minimal interaction with the Earth, especially for tasks such as navigation and data acquisition [3].

Autonomous decision-making involves the rover making decisions independently, without any human intervention [4]. By making it autonomous, productivity and lifespan can be increased, as the rover can navigate by itself in unforeseen circumstances [5]. To overcome these challenges, two multimodality-based architectures using state-of-the-art technologies to establish autonomy are proposed in this work. The first architecture combines depth estimation, semantic segmentation, and object detection models to accurately determine the next phase of the trajectory for the rover. Although more complex, it provides a more precise maneuver for the rovers. The second architecture is much simpler for the onboard system to process and uses more advanced state-of-the-art deep learning methods.

The contributions of this paper encompass the development of the following:

• Multimodal deep learning-based architectures designed for autonomous navigation of rovers.
• Novel algorithms for the autonomous navigation of rovers.
• Architectures that offer a completely autonomous navigation system, developed by incorporating state-of-the-art techniques in depth estimation, semantic segmentation, and object detection to effectively handle the navigation.

This article is organized as follows. Section II focuses on past developments and related works. Section III discusses the

proposed architectures and their block diagrams. Section IV includes the experimental results with the proposed architectures. To conclude, Section V includes the remarks and the future scope.

II. RELATED WORKS

The early navigation systems of rovers relied on commands from the Earth, specifying the intended direction of movement. Subsequently, the control transitioned to a semi-autonomous mode in which the rover, using hazard avoidance software, could navigate while successfully avoiding obstacles in its way. The controllers on the Earth were able to plan the routes based on the images sent by the rover. The rover's navigation system underwent further enhancement, and it could choose its path using the Autonomous Exploration for Gathering Increased Science (AEGIS) software [6]. The latest navigation systems use a combination of cameras, Light Detection and Ranging (LiDAR) sensors, and other sensors to navigate autonomously. Even though these sensors are present, the Earth-based team provides overall mission planning and high-level guidance.

An approach for computing the path of a rover using only passive vision data, called fast stereo-based Visual Odometry (VO), is proposed in [7]. A faster and simplified feature detector/descriptor is proposed for effective and accurate path estimation in tasks such as autonomous planetary exploration. Simultaneous Localisation and Mapping (SLAM) algorithms have been proposed using various kinds of sensors, but the vision sensor needs to be extensively studied due to the large amount of information it provides. A system that utilizes a Kalman filter to fuse data from VO and an Inertial Measurement Unit (IMU) for estimating the position and orientation of a Mars rover is detailed in [8]. The rover's linear and angular rates are provided by the IMU. Concurrently, VO tracks distinct scene features in stereo imagery utilizing a maximum likelihood motion estimation algorithm to assess the rover's movement between consecutive stereo image pairs.

A comprehensive overview of the latest research in the field of autonomous driving, including lateral, longitudinal, and integrated control techniques, is presented in [9]. The major benefits of autonomous driving, such as improved fuel efficiency, reduced traffic congestion, and increased safety, are depicted. Some of the key challenges of autonomous vehicles, such as the need for robust perception and control algorithms, the difficulty of handling complex and uncertain environments, and the need for effective human-machine interfaces, are also described. A novel semi-direct VO algorithm based on monocular depth estimation is depicted in [10]. The algorithm addresses the scale ambiguity limitation of monocular VO systems, which are attractive for small mobile platforms because monocular cameras are low-cost and lightweight. In [11], the semantic information from RGB data is used to harness the strengths of both visual and laser data for extracting depth information and semantic details within the image.

A simulation platform where drive torques and steering torques of wheels serve as inputs, producing rover attitude and wheel-terrain forces as outputs, is developed in [12]. It introduced a method reliant on virtual point detection to identify the contact relationship and assess the geometric parameters between the wheels and the terrain. Enhanced precision in geometric parameters is achieved through static balance considerations. Bekker's terramechanics was employed to solve the forces between the wheels and the terrain. To navigate through unfamiliar terrain, collision avoidance is paramount. Addressing this, a socially aware multi-agent collision avoidance system, leveraging deep learning, is introduced in [13].

Different from most research, this study focuses on multimodal architectures for autonomous navigation of planetary rovers.

III. MULTIMODAL ARCHITECTURE

This section describes the two architectures proposed in this work for the autonomous navigation of planetary rovers.

A. Learning-based Architecture-1

Learning-based architecture-1 is the stereo vision-based model, which integrates depth estimation, semantic segmentation, and object detection to navigate accurately through unforeseen situations in planetary terrains. Even though this architecture is more complex, it provides a much more accurate and precise autonomous navigation capability to the rovers. The block diagram of the proposed learning-based architecture-1 is shown in Fig. 1. It has a stereo module, a semantic segmentation module, an object detection module, and a path-planning algorithm. The pseudo-code of the learning-based navigation algorithm for the proposed architecture-1 is described in Algorithm 1.

Fig. 1. Learning based architecture-1

1) Stereo Module: The proposed architecture leverages a stereo-based depth estimation system for calculating the depth of obstacles and craters on the planetary surface. The architecture uses the onboard stereo cameras to capture the left and right frames of the images and uses stereo vision techniques, along with the inputs from the object detection module, to estimate the depth of artifacts present in front of the rover.
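The depth computation in this module reduces to the standard stereo relation Z = f·B/d. The following is a minimal sketch of that conversion, assuming a calibrated rig with known focal length and baseline; the function name and the example constants are illustrative, not the authors' implementation.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Convert a disparity map (pixels) to metric depth via Z = f * B / d.

    disparity_px : 2D array of disparities from a stereo matcher
    focal_px     : focal length in pixels (from calibration)
    baseline_m   : distance between the two cameras in meters
    """
    disparity = np.asarray(disparity_px, dtype=np.float32)
    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0  # zero or negative disparity means no match
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Depth of an obstacle at the center (Xi, Yi) of a detected bounding box:
# depth_m = depth_from_disparity(disp, focal_px=700.0, baseline_m=0.12)[Yi, Xi]
```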

Algorithm 1 Learning-based navigation algorithm I
1: Input: Bounding box coordinates (x_i, y_i), (x_j, y_j) and classes cls_i, segmented image, rover dimensions, threshold limit for obstacle avoidance distance D_th, clearance Clr
2: for cls_i do
3:   Calculate center point (X_i, Y_i)
4:   Calculate depth from rover to (X_i, Y_i) using stereo vision
5: end for
6: if distance of obstacle in front D_f ≤ D_th then
7:   for cls_i do
8:     Calculate distance between obstacles D_ibt using bounding boxes and stereo vision
9:     if D_ibt > rover dimension + Clr then
10:      Flag Traversible_i
11:    end if
12:  end for
13:  for Traversible_i do
14:    TrPath = Path(distance, angle of rotation)
15:  end for
16:  OptimizedPath = Min(TrPath)
17:  Traverse OptimizedPath
18: end if

2) Semantic Segmentation Module: Segmentation plays a crucial role in identifying the boundaries and shapes of obstacles, craters, and various features on the lunar or Martian surface. The system essentially equips the rover with a sophisticated set of eyes that can analyze its surroundings in detail. Though provided with an obstacle detection system, segmentation on top of obstacle detection enables a much more precise maneuver as and when the situation demands. This mapped information becomes invaluable for the rover to plan its route effectively, ensuring a safe and smooth journey even in challenging landscapes.

3) Object Detection Module: The proposed architecture employs a state-of-the-art object detection algorithm that enables the detection of obstacles and valleys in the lunar or Martian terrain through bounding boxes. Although it provides only a rough estimate of the specifics of the environment, this bounding box information, along with the stereo depth estimator, is further used by the navigation algorithm to estimate the position of the features. This enables the navigation algorithm to accurately determine an optimal path for navigation.

4) Path Planning Algorithm: The proposed path planning algorithm uses a multimodal approach, taking in information from the segmentation, depth estimation, and object detection modules. The object detection module, along with the depth estimation module, enables the rover to identify the distance of various features on the planetary surface, while the segmentation module enables precise identification of the boundaries of the features. Based on the identified features, and also accounting for the rover dimensions, the algorithm identifies the best next step to be taken to navigate without encountering any obstacles. Furthermore, in the case of intricate maneuvers, the system takes into account the information from the segmented image of the terrain to identify a safe and optimal maneuver.
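To make the control flow of Algorithm 1 concrete, the sketch below renders its decision loop in Python. It assumes the detection and stereo modules already supply per-obstacle lateral positions and depths in metric units; the data layout, cost function, and helper names are assumptions for illustration only.

```python
import math

def plan_next_move(obstacles, rover_width, clearance, d_threshold):
    """Sketch of Algorithm 1: pick the cheapest traversable gap between obstacles.

    obstacles   : per-detection dicts with 'x' (lateral position, m) and
                  'depth' (m), sorted left to right, as assumed to come from
                  the object detection and stereo modules
    rover_width : rover dimension relevant to fitting through a gap (m)
    clearance   : safety margin Clr (m)
    d_threshold : obstacle-avoidance trigger distance D_th (m)
    """
    nearest = min((o["depth"] for o in obstacles), default=math.inf)
    if nearest > d_threshold:
        return None  # nothing within D_th; keep the current heading

    candidates = []
    for left, right in zip(obstacles, obstacles[1:]):
        gap = right["x"] - left["x"]                 # D_ibt between obstacles
        if gap > rover_width + clearance:            # flag Traversible_i
            mid_x = (left["x"] + right["x"]) / 2.0
            depth = min(left["depth"], right["depth"])
            angle = math.atan2(mid_x, depth)         # rotation toward the gap
            cost = math.hypot(mid_x, depth) + abs(angle)  # TrPath cost
            candidates.append({"cost": cost, "rotate": angle,
                               "target": (mid_x, depth)})

    if not candidates:
        return None  # no traversable gap at this instant
    return min(candidates, key=lambda c: c["cost"])  # OptimizedPath = Min(TrPath)
```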

B. Distributed Learning-Based Control

Since rovers traverse uneven and unpredictable terrains, the control of each wheel depends on its point of contact and its position at any point in time. To account for this variability, a distributed control is proposed wherein each wheel is controlled independently from control signals generated by the central path planning algorithm, thereby providing a more precise and optimal control of the rover even in extremely uneven terrains.
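Purely as a sketch of how such a scheme might be organized, the snippet below has the central planner broadcast a speed-and-curvature command while each wheel unit resolves its own setpoint from locally sensed contact conditions. All field and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class WheelCommand:
    wheel_id: int
    drive_velocity: float  # m/s setpoint for this wheel
    steer_angle: float     # rad, resolved for this wheel's geometry

def distribute(speed, curvature, wheel_states):
    """Fan a central (speed, curvature) command out to per-wheel units.

    Each wheel unit scales the commanded speed by its own locally sensed
    contact factor, so a wheel on loose or uneven ground reacts differently.
    """
    commands = []
    for w in wheel_states:
        steer = curvature * w["lateral_offset"]          # crude per-wheel steer
        velocity = speed * w.get("contact_factor", 1.0)  # local terrain adaption
        commands.append(WheelCommand(w["id"], velocity, steer))
    return commands
```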
C. Learning-based Architecture-2

This architecture presents a streamlined version of autonomous navigation which uses cutting-edge deep learning techniques for path planning. The simplified architecture is shown in Fig. 2, and it uses a monocular depth estimation module and a semantic segmentation module to create a depth map of the environment, based on which the rover can find the optimal traversable path. The pseudo-code of the learning-based navigation algorithm for the proposed architecture-2 is described in Algorithm 2.

Fig. 2. Learning based architecture-2

1) Monocular Depth Estimation: Monocular depth estimation involves the challenge of predicting the depth value (distance relative to the camera) for each pixel based on a single RGB image. The proposed architecture uses state-of-the-art models to estimate the depth from a single RGB camera image, which is much more efficient compared to having a multi-camera depth estimation setup or using expensive hardware including LiDAR sensors. Although the high-level accuracy of those methods is compromised, given the merits of monocular depth estimation, it provides an optimal analysis of the depth information from a single camera.

2) Semantic Segmentation Module: As in learning-based architecture-1, a similar semantic segmentation model is employed to meticulously analyze the environment. Subsequently, the output of this model is utilized for processing depth maps.
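As one way to exercise such a module, the snippet below runs the publicly released MiDaS small model from PyTorch Hub on a single frame, following the model's documented usage; the image path is a placeholder, and this is a usage sketch rather than the exact onboard pipeline.

```python
import cv2
import torch

# Load MiDaS (small variant) from the official hub; downloads weights once.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("lunar_frame.png"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transform(img))            # inverse relative depth
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False).squeeze().cpu().numpy()
# Larger values mean closer surfaces; thresholding this map drives navigation.
```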

3) Depth Slicing Module: The Hadamard product [14] of the depth maps and the semantic segmentation masks, which yields a matrix where each element is the product of the corresponding elements of the input matrices, is first taken to obtain a traversable region estimate. Specifically, if A represents the depth map matrix and B represents the semantic segmentation mask matrix, then the Hadamard product C is calculated element-wise as in (1):

C_ij = A_ij × B_ij    (1)

where C_ij is the value at the i-th row and j-th column of the resulting matrix. The output matrix C contains elements that represent the combined influence of both depth information and semantic segmentation at each specific location in the terrain. In the context of rover navigation, the Hadamard product output thus combines information about the depth of the terrain with semantic information about the objects present.

The Hadamard product of the depth maps and the semantic segmentation masks is further fed into the depth slicing module, which estimates the obstacles, their features, and their relative depths. Based on the threshold limit set for the traversable region, the rover is able to identify the optimal path at the current instant. This data is fed into the distributed learning-based control algorithm, which further controls the rover, enabling autonomous navigation.
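In code, Eq. (1) and the subsequent slicing reduce to a few array operations. The sketch below assumes a MiDaS-style inverse relative depth map, where larger values mean closer surfaces, and a binary ground mask; the names and conventions are illustrative.

```python
import numpy as np

def traversable_mask(depth_map, seg_mask, s_threshold):
    """Eq. (1) followed by depth slicing.

    depth_map   : HxW array A of inverse relative depths (larger = closer)
    seg_mask    : HxW binary array B, 1 for ground pixels, 0 for obstacles
    s_threshold : traversable threshold S from Algorithm 2
    """
    fused = depth_map * seg_mask                    # Hadamard product C = A * B
    return (seg_mask == 1) & (fused < s_threshold)  # exclude near obstacles
```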
Algorithm 2 Learning-based navigation algorithm II
1: Input: Relative depth values d_i, segmented image, traversable threshold S
2: Create the depth map
3: for each pixel in depth map do
4:   if d_i < S then
5:     Flag Traversible_i
6:   end if
7: end for
8: for Traversible_i do
9:   TrPath = Path(distance, angle of rotation)
10: end for
11: OptimizedPath = Min(TrPath)
12: Traverse OptimizedPath
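A vectorized reading of Algorithm 2 could score candidate headings directly from the traversable mask, as in the following sketch; the near-field weighting, band width, and field-of-view mapping are assumptions rather than the paper's exact cost function.

```python
import numpy as np

def choose_heading(traversable, fov_rad, band_frac=0.125):
    """Score candidate headings by how much traversable ground each one sees.

    traversable : HxW boolean mask from the depth slicing module
    fov_rad     : horizontal field of view of the camera (rad)
    Returns the rotation angle (rad) toward the most open column band.
    """
    h, w = traversable.shape
    near_field = traversable[h // 2:, :]           # weight the nearby terrain
    col_scores = near_field.sum(axis=0).astype(float)
    band = max(1, int(w * band_frac))
    smoothed = np.convolve(col_scores, np.ones(band), mode="same")
    best_col = int(np.argmax(smoothed))            # widest open corridor
    return (best_col - w / 2.0) / w * fov_rad      # image column -> rotation
```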
IV. EXPERIMENTAL RESULTS

The experimental analysis encompasses monocular and stereo depth estimation, planetary obstacle detection, and semantic segmentation of a lunar dataset.

A. Monocular Depth Estimation

The proposed learning-based navigation architectures have explored two state-of-the-art models for monocular depth estimation, namely Mixing Datasets for Zero-shot Cross-dataset Transfer (MiDaS) and Zero-shot Transfer by Combining Relative and Metric Depth (ZoeDepth).

1) Mixing Datasets for Zero-shot Cross-dataset Transfer: The MiDaS model was used to estimate the depth of objects from the lunar dataset, as shown in Fig. 3. As the distance of the obstacle varies, the model gives a corresponding inverse relative depth map, also shown in Fig. 3. Since the MiDaS model outputs inverse relative depth values, fine calibration and curve fitting are required to calculate the exact metric depth. Since only a relative scale is available, a close approximation threshold is set for the navigation system.

Fig. 3. Depth estimation on the lunar dataset using MiDaS

2) Zero-shot Transfer by Combining Relative and Metric Depth: While MiDaS gave optimal performance, the latest ZoeDepth model for monocular depth estimation was also tested on the lunar dataset, and the output is shown in Fig. 4. Unlike its counterparts, ZoeDepth incorporates both relative and metric depth estimation. Despite some shortcomings in the calibration of the metric depth estimation during experimental testing, the model exhibited impressive performance comparable to MiDaS. Since a threshold-driven depth estimator is employed, the relative depth values suffice for the development of the proposed architecture. However, metric depth with more fine-tuning would provide more precise control.

Fig. 4. Depth estimation on the lunar dataset using ZoeDepth

B. Planetary Obstacle Detection

The lunar or Martian surface poses many challenges, including craters and uneven rocks. Although the suspension of current rovers can cross over obstacles of twice their wheel size, providing an optimal and energy-efficient path that further improves the navigation system is imperative. The proposed architecture uses object detection systems to analyze and identify the position of rocks, craters, and artifacts on the extraterrestrial surface mapped onto a 2D image. Later, with the help of stereo or monocular depth estimation, this is remapped to a representative 3D approximation regarding the depth and position of the artifacts. The approach was tested with the state-of-the-art YOLOv8 model, which was trained and fine-tuned on lunar datasets to identify the rocks, craters, and artifacts present on the lunar surface.
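For reference, inference with a fine-tuned YOLOv8 detector through the ultralytics package looks roughly as follows; the weight and image file names are placeholders, and the box centers are the quantities handed to the depth module.

```python
from ultralytics import YOLO

# Hypothetical fine-tuned weights; YOLO("yolov8n.pt") would load a base model.
model = YOLO("lunar_yolov8.pt")
results = model("lunar_frame.png")          # run detection on one image

for box in results[0].boxes:
    cls_id = int(box.cls)                   # class index (rock, crater, ...)
    x1, y1, x2, y2 = box.xyxy[0].tolist()   # bounding box corners
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2   # center passed to depth estimation
    print(model.names[cls_id], (cx, cy), float(box.conf))
```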

1) Object Detection on Lunar Surface: The proposed architecture for object detection utilizes a YOLOv8 model that underwent extensive training across multiple epochs to ensure its ability to accurately identify craters, rocks, and other artifacts on the lunar surface under various lighting conditions, ranging from well-lit scenarios to darker environments. This training was conducted using a combination of real and synthetically generated lunar images. The results displayed in Fig. 5 demonstrate that the module has effectively detected and marked the craters, rocks, and artifacts found on the lunar surface using bounding boxes.

Fig. 5. Illustrations of the obstacle detection

The model has demonstrated promising results, indicating the potential for further enhancement through training for a higher number of epochs and hyperparameter tuning.
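A typical fine-tuning run with the ultralytics API is sketched below; the dataset configuration file, base weights, and epoch count stand in for the authors' actual training setup.

```python
from ultralytics import YOLO

# "lunar.yaml" is a hypothetical dataset config listing the train/val image
# folders and the class names (rock, crater, artifact).
model = YOLO("yolov8n.pt")                  # start from pretrained weights
model.train(data="lunar.yaml", epochs=100, imgsz=640)
metrics = model.val()                       # evaluate on the validation split
```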
C. Semantic Segmentation

The semantic segmentation module provides a finer understanding of the characteristics of the terrain and the obstacles in the path of the rover, which further improves the navigation capabilities.

1) Dataset: Although semantic segmentation provides a deeper understanding of the lunar terrain, the scarcity of datasets for training such segmentation models severely limits its usability. Hence, in this study, this gap is addressed by creating a dataset consisting of 1000 manually annotated lunar images, encompassing both real and photo-realistic images sourced from [15]. This dataset facilitates the training of cutting-edge segmentation models, enabling a deeper understanding of the features present on the lunar surface. A sample dataset is shown in Fig. 6.

Fig. 6. Sample dataset of lunar images

2) Semantic Segmentation on Lunar Surface: The generated dataset was employed to train the YOLOv8 segmentation model for 100 epochs, yielding promising outcomes applicable to planetary navigation. The YOLOv8 architecture seamlessly integrates segmentation and detection processes within a unified pipeline, enhancing the refinement of results through detection. For the proposed system, segmentation of rocks and craters was carried out, with the outcomes exemplified in a sample displayed in Fig. 7.

Fig. 7. Semantic segmentation on lunar images
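Extracting instance masks from a fine-tuned YOLOv8 segmentation model, for example to build the binary ground mask B used in Eq. (1), can be sketched as follows; the weights file is hypothetical.

```python
from ultralytics import YOLO

# Hypothetical fine-tuned segmentation weights (a "-seg" YOLOv8 variant).
seg_model = YOLO("lunar_yolov8-seg.pt")
result = seg_model("lunar_frame.png")[0]

if result.masks is not None:
    masks = result.masks.data.cpu().numpy()  # N x H x W, at network resolution
    obstacle = masks.max(axis=0)             # union of rock/crater instances
    ground = 1.0 - obstacle                  # 1 where no obstacle was detected
    # resize `ground` to the depth map's shape before using it as B in Eq. (1)
```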

D. Stereo Depth Estimation

Stereo depth estimation is a traditional method used in robotics and past planetary rover missions for navigation and control. The stereo images could also be used as a reference to visualize the planetary terrain and drive the rover manually from ground stations. The proposed system also implements a traditional stereo-based depth estimation system that provides a depth map of the lunar surface, which can further be incorporated with the object detection and segmentation modules to navigate the unknown terrain accurately. The proposed system has been tested on daily objects on the Earth to validate its performance on a calibrated stereo setup in the laboratory. Further, the direct disparity and depth maps were generated from a representative lunar stereo dataset sourced from the National Aeronautics and Space Administration (NASA) Ames Research Center database [16]. A sample stereo image is illustrated in Fig. 8, with its corresponding direct disparity and depth maps displayed in Fig. 9 and Fig. 10, respectively.

Fig. 8. Sample stereo images from NASA database [16]

Fig. 9. Disparity map

Fig. 10. Depth map
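One common way to produce such a disparity map is semi-global block matching in OpenCV, sketched below; the matcher parameters are typical starting values, not the settings used in the paper, and the image paths are placeholders.

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; these parameters would need tuning for the
# NASA polar stereo imagery.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                blockSize=5, P1=8 * 5 ** 2, P2=32 * 5 ** 2)
disparity = matcher.compute(left, right).astype("float32") / 16.0  # fixed-point
# depth = focal_px * baseline_m / disparity  (as in the stereo module above)
```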
The outputs from the modules are fed into the path-planning algorithm, which determines the most suitable control actions for the rover. The proposed system employs a distributed control approach, wherein a central control signal is disseminated to computing units for each drive module. These units then execute the necessary actions based on the received control signal. This design ensures robust and optimal control for applications in planetary exploration devices.

V. CONCLUSIONS AND FUTURE SCOPE

This article introduces two autonomous navigation architectures designed for planetary rovers. One approach integrates stereo depth estimation with advanced object detection and segmentation models to navigate unfamiliar terrains, while the other combines monocular depth estimation with semantic segmentation for near-optimal navigation across extraterrestrial landscapes. These multimodal architectures leverage cutting-edge deep learning algorithms, potentially advancing autonomous navigation and control for planetary rovers. Future endeavors will involve constructing a prototype rover hardware setup to validate the proposed architectures experimentally.

REFERENCES

[1] B. Rothrock, R. Kennedy, C. Cunningham, J. Papon, M. Heverly, and M. Ono, "SPOC: Deep learning-based terrain classification for Mars rover missions," in Proceedings of AIAA SPACE 2016, pp. 1–12.
[2] Y. Kuroda, T. Teshima, Y. Sato, and T. Kubota, "Mobility performance evaluation of planetary rover with similarity model experiment," in Proceedings of the 2004 IEEE International Conference on Robotics and Automation (ICRA '04), vol. 2, 2004, pp. 2098–2103.
[3] L. Tai, S. Li, and M. Liu, "A deep-network solution towards model-less obstacle avoidance," in Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016, pp. 2759–2764.
[4] J. Choi, K. Park, M. Kim, and S. Seok, "Deep reinforcement learning of navigation in a complex and crowded environment with a limited field of view," in Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 5993–6000.
[5] N. Gadkar, S. Das, S. Chakraborty, and S. K. Mishra, "Static obstacle avoidance for rover vehicles using model predictive controller," in Proceedings of the 2022 International Conference on IoT and Blockchain Technology (ICIBT), 2022, pp. 1–6.
[6] M. Pfeiffer, M. Schaeuble, J. Nieto, R. Siegwart, and C. Cadena, "From perception to decision: A data-driven approach to end-to-end motion planning for autonomous ground robots," in Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 1527–1533.
[7] A. Cumani and A. Guiducci, "Fast stereo-based visual odometry for rover navigation," WSEAS Transactions on Circuits and Systems, vol. 7, no. 7, 2008.
[8] D. M. Helmick, Y. Cheng, D. S. Clouse, L. H. Matthies, and S. I. Roumeliotis, "Path following using visual odometry for a Mars rover in high-slip environments," in Proceedings of the 2004 IEEE Aerospace Conference, 2004, pp. 772–789.
[9] H. Rizk, A. Chaibet, and A. Kribèche, "Model-based control and model-free control techniques for autonomous vehicles: A technical survey," Applied Sciences, vol. 13, no. 11, 2023.
[10] S. Guo, J. Guo, and C. Bai, "Semi-direct visual odometry based on monocular depth estimation," in Proceedings of the 2019 IEEE International Conference on Unmanned Systems (ICUS), 2019, pp. 720–724.
[11] L. Gao, J. Ding, W. Liu, H. Piao, Y. Wang, X. Yang, and B. Yin, "A vision-based irregular obstacle avoidance framework via deep reinforcement learning," in Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021, pp. 9262–9269.
[12] X. Tian and H. Ju, "Modeling and simulation for lunar rover based on terramechanics and multibody dynamics," in Proceedings of the 32nd Chinese Control Conference, 2013, pp. 8687–8692.
[13] Y. F. Chen, M. Everett, M. Liu, and J. P. How, "Socially aware motion planning with deep reinforcement learning," in Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 1343–1350.
[14] W. Feng, L. Ding, R. Zhou, C. Xu, H. Yang, H. Gao, G. Liu, and Z. Deng, "Learning-based end-to-end navigation for planetary rovers considering non-geometric hazards," IEEE Robotics and Automation Letters, 2023.
[15] "Artificial lunar landscape dataset," https://www.kaggle.com/datasets/romainpessia/artificial-lunar-rocky-landscape-dataset, accessed: 2023-09-30.
[16] NASA Ames Research Center, "Polar stereo dataset," https://ti.arc.nasa.gov/dataset/IRG-PolarDB, Nov. 1, 2017.
