Visual SLAM system for challenging illumination conditions
AirSLAM is an efficient and illumination-robust visual SLAM system designed for robots operating in challenging lighting conditions. It targets researchers and developers in robotics and computer vision who need robust SLAM capabilities, offering improved performance over existing methods in variable illumination.
How It Works
AirSLAM employs a hybrid approach, combining deep learning for feature extraction with traditional optimization. It utilizes a unified CNN to extract both keypoints and structural lines, which are then coupled for association, matching, triangulation, and optimization. A lightweight relocalization pipeline reuses the map, incorporating keypoints, lines, and a structure graph for frame matching. This point-line feature fusion and coupled optimization are key to its robustness and efficiency.
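To make the point-line coupling concrete, here is a minimal sketch of line association by keypoint voting, in the spirit of the description above. All types and names (Keypoint, Line, MatchLine, point_to_line_b) are hypothetical illustrations, not AirSLAM's actual API: a line in one frame inherits candidate matches from the keypoints that lie on it.

```cpp
#include <iostream>
#include <map>
#include <vector>

// Hypothetical, simplified structures for illustration only;
// not AirSLAM's actual data types.
struct Keypoint {
  float x, y;
  int match_id;  // index of the matched keypoint in the other frame, -1 if none
};
struct Line {
  std::vector<int> point_ids;  // indices of keypoints lying on this line
};

// Associate a line across frames by voting: each matched keypoint on the
// line votes for the line (in the other frame) that its match lies on.
int MatchLine(const Line& line_a, const std::vector<Keypoint>& points_a,
              const std::map<int, int>& point_to_line_b) {
  std::map<int, int> votes;  // candidate line id in frame B -> vote count
  for (int pid : line_a.point_ids) {
    int m = points_a[pid].match_id;
    if (m < 0) continue;  // keypoint has no match in frame B
    auto it = point_to_line_b.find(m);
    if (it != point_to_line_b.end()) ++votes[it->second];
  }
  int best_line = -1, best_votes = 0;
  for (const auto& [line_id, count] : votes) {
    if (count > best_votes) { best_line = line_id; best_votes = count; }
  }
  return best_line;  // -1 if no candidate received a vote
}

int main() {
  // Two of the three keypoints on the line are matched, and both matches
  // fall on line 7 in the other frame, so the line is associated with 7.
  std::vector<Keypoint> points_a = {{10, 20, 0}, {30, 40, 1}, {50, 60, -1}};
  std::map<int, int> point_to_line_b = {{0, 7}, {1, 7}};
  Line line_a{{0, 1, 2}};
  std::cout << "matched line id: "
            << MatchLine(line_a, points_a, point_to_line_b) << "\n";
}
```

The real system couples points and lines more deeply (joint triangulation and optimization, as noted above), but the voting idea shows why keypoint matches make line association cheap and robust.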
Quick Start & Requirements
Setup involves either pulling the Docker image (docker pull xukuanhit/air_slam:v4) or building from source within a ROS Noetic workspace (git clone ...; cd ../; catkin_make; source .../setup.bash).
Highlighted Details
Maintenance & Community
The project is associated with Nanyang Technological University and the University at Buffalo. Updates are regularly posted on the GitHub repository.
Licensing & Compatibility
The repository does not explicitly state a license. The code is provided for research purposes. Commercial use would require clarification.
Limitations & Caveats
The project is still under active development; its TODO list includes support for more GPUs and environments as well as alternative feature matchers. Custom datasets require manually creating configuration and launch files.
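As a rough illustration of what such a configuration involves, the sketch below reads pinhole camera intrinsics from a YAML file with OpenCV's cv::FileStorage. The file name and key names (custom_camera.yaml, fx, fy, cx, cy) are assumptions for the example, not the project's actual schema; consult the repository's bundled configs when adapting a dataset.

```cpp
#include <iostream>
#include <opencv2/core.hpp>

// Hypothetical custom-dataset config reader; the key layout below is an
// assumption for illustration, not AirSLAM's actual configuration schema.
int main() {
  cv::FileStorage fs("custom_camera.yaml", cv::FileStorage::READ);
  if (!fs.isOpened()) {
    std::cerr << "could not open custom_camera.yaml\n";
    return 1;
  }
  double fx, fy, cx, cy;  // pinhole intrinsics
  fs["fx"] >> fx;
  fs["fy"] >> fy;
  fs["cx"] >> cx;
  fs["cy"] >> cy;
  fs.release();
  std::cout << "loaded intrinsics: fx=" << fx << " fy=" << fy
            << " cx=" << cx << " cy=" << cy << "\n";
  return 0;
}
```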