Toward Wheeled Mobility on Vertically Challenging
Terrain: Platforms, Datasets, and Algorithms
Anonymous Author(s)
Affiliation
Address
email
Most conventional wheeled robots can only move in flat environments and simply divide their planar workspaces into free spaces and obstacles. Deeming obstacles non-traversable significantly limits wheeled robots' mobility in real-world, non-flat, off-road environments, where parts of the terrain (e.g., steep slopes or rugged boulders) would be treated as non-traversable obstacles. Our work is motivated by such limitations and aims at expanding the mobility of these widely available wheeled robot platforms so that they can venture into vertically challenging environments, which would otherwise be deemed non-traversable or require specialized hardware.

Figure 1: The Verti-Wheelers: Conventional Wheeled Vehicles Moving through Vertically Challenging Terrain.
Thanks to recent advances in machine learning, data-driven approaches have been leveraged to improve robot mobility [1]. Learning from data removes the need to build analytical models of the environment, such as vehicle-terrain or human-robot interactions, and alleviates the burden of crafting delicate cost functions [2] or tuning unintuitive parameters [3]. We therefore hypothesize that data-driven approaches are one avenue toward enabling enhanced wheeled mobility on previously impossible, vertically challenging terrain.
Considering that most ground robots are wheeled, with either no suspension or only passive suspension systems, and considering the potential of machine learning methods, we develop wheeled platforms, large-scale datasets, and both classical and data-driven algorithms to facilitate robot mobility on vertically challenging terrain. We present an open-source design of two wheeled robot platforms, the Verti-Wheelers (VW), which are representative of the majority of existing conventional ground mobile robot platforms, and hypothesize that conventional wheeled robots can also navigate many vertically challenging environments (Figure 1). We identify seven desiderata for their hardware and achieve all of them in our design: All-Wheel Drive (D1), Independent Suspensions (D2), Differential Lock (D3), Low/High Gear (D4), Wheel Speed / RPM Sensing (D5), Ground Speed Sensing (D6), and Actuated Perception (D7). We collect two datasets to facilitate future data-driven mobility research. Finally, we develop three algorithms to autonomously drive wheeled robots over vertically challenging terrain: an Open-Loop (OL), a classical Rule-Based (RB), and a data-driven Behavior Cloning (BC) approach.
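To make the platforms' actuation channels concrete, the sketch below summarizes a plausible low-level command bundle implied by D3, D4, D7, and the teleoperation signals; all names here are our own illustration, not part of the released design.

```python
from dataclasses import dataclass

@dataclass
class VWCommand:
    """Hypothetical low-level command bundle for a Verti-Wheeler.

    Field names are illustrative; they mirror the actuation channels
    implied by the desiderata and the teleoperation data streams.
    """
    linear_velocity: float   # v: commanded linear velocity (signed)
    steering_angle: float    # ω: commanded steering angle (signed)
    lock_front_diff: bool    # D3: lock/release the front differential
    lock_rear_diff: bool     # D3: lock/release the rear differential
    low_gear: bool           # D4: True = low gear (crawling), False = high gear
    camera_pitch: float      # D7: target pitch for the actuated camera

# Example: a slow crawl with both differentials locked for maximum traction.
cmd = VWCommand(0.3, 0.0, lock_front_diff=True, lock_rear_diff=True,
                low_gear=True, camera_pitch=-0.2)
```

Such a bundle would then be serialized to the onboard micro-controller, which drives the corresponding servos and the drivetrain.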
For the mechanical components in D1 to D4, we base our platforms on two off-the-shelf, all-wheel-drive, off-road vehicle chassis from Traxxas: one with two axles and four wheels, and one with three axles and six wheels. D1 and D2 are therefore achieved. We use an Arduino Mega micro-controller to lock/unlock the front and rear differentials (D3) and to switch between low and high gear (D4) through three servos. For D5, we install four magnetic sensors, on the front and rear axles of our Verti-4-Wheeler (V4W) and on the front and middle axles of our Verti-6-Wheeler (V6W), with eight magnets per wheel to sense wheel rotation. For D6, we install a Crazyflie Flow deck v2 sensor on the chassis facing downward, providing not only 2D ground speed (x and y) but also the distance between the sensor and the ground (z). We choose an Azure Kinect RGB-D camera for its high-resolution depth perception at close range. For D7, we add a servo-actuated tilt joint for the camera; we use a complementary filter to estimate the camera orientation and a PID controller to regulate the camera pitch angle. An NVIDIA Jetson AGX Orin and a Xavier NX provide onboard CPU and GPU computation, and the Arduino Mega micro-controller interfaces all low-level sensors and actuators. The mechanical and electrical components of both the V4W and V6W are shown in Figure 2.
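The camera-stabilization loop for D7 can be sketched as follows: a complementary filter fuses gyro and accelerometer readings into a pitch estimate, and a PID controller drives the tilt servo. This is a minimal illustration with placeholder gains and variable names, not our released implementation.

```python
import math

def complementary_filter(pitch_prev, gyro_rate, accel, dt, alpha=0.98):
    """Fuse integrated gyro rate with the accelerometer's gravity direction."""
    ax, ay, az = accel                                  # m/s^2
    pitch_accel = math.atan2(-ax, math.hypot(ay, az))   # pitch from gravity
    pitch_gyro = pitch_prev + gyro_rate * dt            # pitch from gyro integration
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel

class PID:
    """Textbook PID regulator; the gains below are placeholders, not tuned values."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One 50 Hz control tick: estimate the camera pitch, then command the tilt
# servo toward a desired pitch (e.g., keeping the terrain ahead in view).
pid = PID(kp=2.0, ki=0.1, kd=0.05)
pitch = complementary_filter(pitch_prev=0.0, gyro_rate=0.01,
                             accel=(0.5, 0.0, 9.8), dt=0.02)
servo_cmd = pid.step(error=0.0 - pitch, dt=0.02)        # desired pitch = 0.0 rad
```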
Considering the difficulty of representing surface topography and modeling complex vehicle dynamics, and the recent success of data-driven mobility [1], we collect two datasets with the two wheeled robots on our custom-built testbed. We reconfigure the testbed multiple times, and both robots are manually driven through different vertically challenging terrain. We collect the following data streams from the onboard sensors and human teleoperation commands: RGB (1280 × 720 × 3) and depth (512 × 512) images i, wheel speed w (4D float vector for four wheels), ground speed g (relative movement indicators along ∆x and ∆y and displacement along z, along with two binary reliability indicators for speeds and displacement), differential release/lock d (2D binary vector for the front and rear differentials), low/high gear switch s (1D binary value), linear velocity v (scalar float), and steering angle ω (scalar float). Each dataset D is therefore D = {i_t, w_t, g_t, d_t, s_t, v_t, ω_t}_{t=1}^{N}, where N is the total number of data frames.

Figure 2: Components of the Verti-Wheelers.
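To make the frame layout concrete, one time step of D can be represented as below. The array shapes follow the text; the container names and the exact packing of the ground-speed channels are our own assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VWFrame:
    """One time step t of D = {i_t, w_t, g_t, d_t, s_t, v_t, ω_t} (names assumed)."""
    rgb: np.ndarray           # i_t (RGB), shape (720, 1280, 3), uint8
    depth: np.ndarray         # i_t (depth), shape (512, 512), float32
    wheel_speed: np.ndarray   # w_t, shape (4,), float32, one entry per sensed wheel
    ground_speed: np.ndarray  # g_t: (Δx, Δy, z, speed_ok, z_ok), shape (5,)
    diff_lock: np.ndarray     # d_t, shape (2,), {0,1} for front/rear differentials
    low_gear: int             # s_t, {0,1} gear switch
    velocity: float           # v_t, teleoperated linear velocity command
    steering: float           # ω_t, teleoperated steering command

# A dataset is then a time-ordered sequence of N such frames:
# dataset: list[VWFrame], with len(dataset) == N.
```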
We deploy all three methods, i.e., OL, RB, and BC (BC4 and BC6, learned with the V4W and V6W datasets, respectively), on three test courses of increasing difficulty (Figure 3). Table 1 reports the number of successful trials and the mean traversal time with variance. BC4 achieves the best performance among all methods.

Figure 3: Custom-Built Testbed with V6W and V4W and Example Traversals by the Three Algorithms (OL, RB, and BC).
Table 1: Number of Successful Trials (Out of 10) and Mean Traversal Time (of Successful Trials in
Seconds) with Variance
V6W V4W
OL RB BC6 BC4 OL RB BC6 BC4
Easy 5 (20.7 ± 1.7) 8 (19.2 ± 3.9) 9 (13.8 ± 8.2) 10 (11.6 ± 1.9) 6 (17.7 ± 3.8) 6 (13.4 ± 2.5) 7 (17.2 ± 6.7) 9 (14.1 ± 7.7)
Medium 6 (15.4 ± 0.9) 9 (14.8 ± 2.2) 9 (14.6 ± 11.2) 10 (13.6 ± 2.3) 4 (15.6 ± 14.2) 6 (12.9 ± 1.8) 3 (19.2 ± 10.6) 8 (13.7 ± 1.6)
Difficult 3 (24.1 ± 2.6) 6 (14.3 ± 1.9) 6 (15.7 ± 18.5) 9 (14.9 ± 2.9) 3 (19.7 ± 29.4) 5 (16.8 ± 20.5) 3 (23.3 ± 43.4) 7 (14.9 ± 8.2)
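The description above leaves the BC architecture unspecified; as a generic illustration of how BC4/BC6 could be trained from D, the PyTorch sketch below regresses the teleoperated commands (v, ω) from the depth image with a mean-squared-error loss. The architecture and hyperparameters are placeholders, not the networks evaluated above.

```python
import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    """Illustrative behavior-cloning policy: depth image -> (v, ω)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(              # input: 1 x 512 x 512 depth
            nn.Conv2d(1, 16, 5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 2)               # outputs (v, ω)

    def forward(self, depth):
        return self.head(self.encoder(depth))

policy = BCPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One supervised step on a batch of (depth, action) pairs drawn from D.
depth = torch.randn(8, 1, 512, 512)   # stand-in for i_t (depth channel)
action = torch.randn(8, 2)            # stand-in for (v_t, ω_t)
loss = loss_fn(policy(depth), action)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```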
During the demonstration session at the workshop, we will showcase our physical VW platforms and demonstrate their autonomous crawling and navigation capabilities on vertically challenging terrain with a small-scale rock testbed.
75 References
76 [1] X. Xiao, B. Liu, G. Warnell, and P. Stone. Motion planning and control for mobile robot
77 navigation using machine learning: a survey. Autonomous Robots, 46(5):569–597, 2022.
78 [2] X. Xiao, T. Zhang, K. M. Choromanski, T.-W. E. Lee, A. Francis, J. Varley, S. Tu, S. Singh,
79 P. Xu, F. Xia, S. M. Persson, L. Takayama, R. Frostig, J. Tan, C. Parada, and V. Sindhwani.
80 Learning model predictive controllers with real-time attention for real-world navigation. In
81 Conference on robot learning. PMLR, 2022.
82 [3] X. Xiao, Z. Wang, Z. Xu, B. Liu, G. Warnell, G. Dhamankar, A. Nair, and P. Stone. Appl:
83 Adaptive planner parameter learning. Robotics and Autonomous Systems, 154:104132, 2022.