<h1 align="center">AirSLAM: An Efficient and Illumination-Robust Point-Line Visual SLAM System</h1>
<p align="center"><strong>
<a href = "https://scholar.google.com/citations?user=-p7HvCMAAAAJ&hl=zh-CN">Kuan Xu</a><sup>1</sup>,
<a href = "https://github.com/yuefanhao">Yuefan Hao</a><sup>2</sup>,
<a href = "https://scholar.google.com/citations?user=XcV_sesAAAAJ&hl=en">Shenghai Yuan</a><sup>1</sup>,
<a href = "https://sairlab.org/team/chenw/">Chen Wang</a><sup>2</sup>,
<a href = "https://scholar.google.com.sg/citations?user=Fmrv3J8AAAAJ&hl=en">Lihua Xie</a><sup>1</sup>
</strong></p>
<p align="center"><strong>
<a href = "https://www.ntu.edu.sg/cartin">1: Centre for Advanced Robotics Technology Innovation (CARTIN), Nanyang Technological University</a><br>
<a href = "https://sairlab.org/">2: Spatial AI & Robotics (SAIR) Lab, Computer Science and Engineering, University at Buffalo</a><br>
</strong></p>
<p align="center"><strong>
<a href = "https://arxiv.org/pdf/2408.03520">📄 [Arxiv]</a> |
<a href = "https://xukuanhit.github.io/airslam/">💾 [Project Site]</a> |
<a href = "https://youtu.be/5OcR5KeO5nc">🎥 [Youtube]</a> |
<a href = "https://www.bilibili.com/video/BV1rJY7efE9x">🎥 [Bilibili]</a>
<!-- 📖 [OpenAccess] -->
</strong></p>
### :scroll: AirSLAM has dual modes (V-SLAM & VI-SLAM), upgraded from [AirVO (IROS23)](https://github.com/sair-lab/AirSLAM/releases/tag/1.0)
<p align="middle">
<img src="figures/system_arch.jpg" width="600" />
</p>
**AirSLAM** is an efficient visual SLAM system designed to tackle both short-term and long-term illumination
challenges. Our system adopts a hybrid approach that combines deep learning techniques for feature detection and matching with traditional backend optimization methods. Specifically, we propose a unified convolutional neural network (CNN) that simultaneously extracts keypoints and structural lines. These features are then associated, matched, triangulated, and optimized in a coupled manner. Additionally, we introduce a lightweight relocalization pipeline that reuses the built map, where keypoints, lines, and a structure graph are used to match the query frame with the map. To enhance the applicability of the proposed system to real-world robots, we deploy and accelerate the feature detection and matching networks using C++ and NVIDIA TensorRT. Extensive experiments conducted on various datasets demonstrate that our system outperforms other state-of-the-art visual SLAM systems in illumination-challenging environments. Efficiency evaluations show that our system can run at a rate of 73Hz on a PC and 40Hz on an embedded platform.
**Video**
<p align="middle">
<a href="https://youtu.be/5OcR5KeO5nc" target="_blank"><img src="figures/title.JPG" width="600" border="10"/></a>
</p>
## :eyes: Updates
* [2024.08] We release the code and paper for AirSLAM.
* [2023.07] AirVO was accepted by IROS 2023.
* [2022.10] We release the code and paper for AirVO. The code for AirVO can now be found [here](https://github.com/sair-lab/AirSLAM/tree/airvo_iros).
## :checkered_flag: Test Environment
### Dependencies
* OpenCV 4.2
* Eigen 3
* Ceres 2.0.0
* G2O (tag:20230223_git)
* TensorRT 8.6.1.6
* CUDA 12.1
* Python
* ROS Noetic
* Boost
### Docker (Recommended)
```bash
docker pull xukuanhit/air_slam:v4
docker run -it --env DISPLAY=$DISPLAY --volume /tmp/.X11-unix:/tmp/.X11-unix --privileged --runtime nvidia --gpus all --volume ${PWD}:/workspace --workdir /workspace --name air_slam xukuanhit/air_slam:v4 /bin/bash
```
## :book: Data
The data for mapping should be organized in the Autonomous Systems Lab (ASL) dataset format shown below (IMU data is optional):
```
dataroot
├── cam0
│ └── data
│ ├── t0.jpg
│ ├── t1.jpg
│ ├── t2.jpg
│ └── ......
├── cam1
│ └── data
│ ├── t0.jpg
│ ├── t1.jpg
│ ├── t2.jpg
│ └── ......
└── imu0
└── data.csv
```
After the map is built, the relocalization requires only monocular images. Therefore, you only need to place the query images in a folder.
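Before launching the mapper, it can save time to verify that your dataset actually matches the layout above. The following is a minimal sketch of such a check; `check_asl_layout` is a hypothetical helper written for this README, not part of AirSLAM.

```python
from pathlib import Path

def check_asl_layout(dataroot, stereo=True, require_imu=False):
    """Check that `dataroot` follows the ASL layout described above.

    Returns a list of problems; an empty list means the layout looks usable.
    """
    root = Path(dataroot)
    problems = []
    cams = ["cam0", "cam1"] if stereo else ["cam0"]
    for cam in cams:
        img_dir = root / cam / "data"
        if not img_dir.is_dir():
            problems.append(f"missing directory: {img_dir}")
        elif not any(img_dir.glob("*.jpg")) and not any(img_dir.glob("*.png")):
            problems.append(f"no images found in {img_dir}")
    # IMU data is optional for V-SLAM but needed for VI-SLAM.
    imu_csv = root / "imu0" / "data.csv"
    if require_imu and not imu_csv.is_file():
        problems.append(f"missing {imu_csv} (required for VI-SLAM)")
    return problems
```

Run it with `require_imu=True` when you intend to use the VI-SLAM mode.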
## :computer: Build
```
cd ~/catkin_ws/src
git clone https://github.com/sair-lab/AirSLAM.git
cd ../
catkin_make
source ~/catkin_ws/devel/setup.bash
```
## :running: Run
The launch files for VO/VIO, map optimization, and relocalization are placed in [VO folder](launch/visual_odometry), [MR folder](launch/map_refinement), and [Reloc folder](launch/relocalization), respectively. Before running them, you need to modify the corresponding configurations according to your data path and the desired map-saving path. The following is an example of mapping, optimization, and relocalization with the EuRoC dataset.
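If you edit the launch files repeatedly (e.g. to sweep over several sequences), it can be convenient to script the path changes instead of editing by hand. The sketch below assumes the paths are exposed as `<param name="..." value="..."/>` entries in the launch XML; `set_launch_param` is a hypothetical helper, so check the actual launch files and adapt it if they use `<arg>` or a different tag.

```python
import xml.etree.ElementTree as ET

def set_launch_param(launch_path, name, value):
    """Rewrite the `value` attribute of every <param> whose `name` matches.

    Assumes a roslaunch XML file with <param name="..." value="..."/> tags,
    possibly nested inside <node> elements.
    """
    tree = ET.parse(launch_path)
    changed = False
    for param in tree.getroot().iter("param"):
        if param.get("name") == name:
            param.set("value", str(value))
            changed = True
    if not changed:
        raise KeyError(f"no <param name={name!r}> found in {launch_path}")
    tree.write(launch_path)
```

For example, `set_launch_param("vo_euroc.launch", "dataroot", "/data/euroc/MH_01_easy/mav0")` would point the mapping launch file at a new sequence before calling `roslaunch`.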
### Mapping
**1**: Change "dataroot" in [VO launch file](launch/visual_odometry/vo_euroc.launch) to your own data path. For the EuRoC dataset, "mav0" needs to be included in the path.
**2**: Change "saving_dir" in the same file to the path where you want to save the map and trajectory. **It must be an existing folder.**
**3**: Run the launch file:
```
roslaunch air_slam vo_euroc.launch
```
### Map Optimization
**1**: Change "map_root" in [MR launch file](launch/map_refinement/mr_euroc.launch) to your own map path.
**2**: Run the launch file:
```
roslaunch air_slam mr_euroc.launch
```
### Relocalization
**1**: Change "dataroot" in [Reloc launch file](launch/relocalization/reloc_euroc.launch) to your own query data path.
**2**: Change "map_root" in the same file to your own map path.
**3**: Run the launch file:
```
roslaunch air_slam reloc_euroc.launch
```
### Other datasets
The [launch folder](launch) and [config folder](configs) provide the launch files and configuration files for the other datasets used in the paper. To run AirSLAM on your own dataset, you need to create your own camera file, configuration file, and launch file.
## :writing_hand: TODO List
- [x] Initial release. :rocket:
- [ ] Support more GPUs and development environments
- [ ] Support SuperGlue as the feature matcher
- [ ] Optimize the TensorRT acceleration of PLNet
## :pencil: Citation
```bibtex
@article{xu2024airslam,
title = {{AirSLAM}: An Efficient and Illumination-Robust Point-Line Visual SLAM System},
author = {Xu, Kuan and Hao, Yuefan and Yuan, Shenghai and Wang, Chen and Xie, Lihua},
journal = {arXiv preprint arXiv:2408.03520},
year = {2024},
url = {https://arxiv.org/abs/2408.03520},
code = {https://github.com/sair-lab/AirSLAM},
}
@inproceedings{xu2023airvo,
title = {{AirVO}: An Illumination-Robust Point-Line Visual Odometry},
author = {Xu, Kuan and Hao, Yuefan and Yuan, Shenghai and Wang, Chen and Xie, Lihua},
booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year = {2023},
url = {https://arxiv.org/abs/2212.07595},
code = {https://github.com/sair-lab/AirVO},
video = {https://youtu.be/YfOCLll_PfU},
}
```