Shuaifeng Jiao, Zhiwen Zeng, Zhuoqun Su, Xieyuanli Chen, Zongtan Zhou*, Huimin Lu
- [2025-03] Code released.
- [2025-02] Submitted to IROS 2025.
This repository introduces the Lunar Exploration Simulator System (LESS), a lunar surface simulation system; the LunarSeg dataset, which provides RGB-D data for segmenting both positive and negative lunar obstacles; and a novel two-stage segmentation network, LuSeg.
Our accompanying video is available at Demo
Please cite the corresponding paper:
@article{jiao2024luseg,
title={LuSeg: Efficient Negative and Positive Obstacles Segmentation via Contrast-Driven Multi-Modal Feature Fusion on the Lunar},
author={Shuaifeng Jiao and Zhiwen Zeng and Zhuoqun Su and Xieyuanli Chen and Zongtan Zhou and Huimin Lu},
journal={arXiv preprint arXiv:2503.11409},
year={2025}
}
The LESS system integrates a high-fidelity lunar terrain model, a customizable rover platform, and a multi-modal sensor suite, and it supports the Robot Operating System (ROS) to enable realistic data generation and the validation of autonomous rover perception algorithms. LESS provides a scalable platform for developing and validating perception algorithms in extraterrestrial environments. This open-source framework is designed for high extensibility, allowing researchers to integrate additional sensors or customize terrain models according to the specific requirements of their applications.
You can collect multi-modal data in the LESS system according to your needs.
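If you drive the simulated rover through ROS, recording time-synchronized RGB-D pairs can be as simple as the Python sketch below. The topic names are placeholders (not taken from the LESS documentation) and must be replaced with the topics published in your own setup.

```python
# Minimal sketch of recording synchronized RGB-D pairs from LESS via ROS.
# The topic names below are placeholders -- replace them with the topics
# configured in your LESS setup.
import rospy
import message_filters
import numpy as np
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()
frame_id = 0

def callback(rgb_msg, depth_msg):
    """Save one time-synchronized RGB-D pair to disk."""
    global frame_id
    rgb = bridge.imgmsg_to_cv2(rgb_msg, desired_encoding="bgr8")
    depth = bridge.imgmsg_to_cv2(depth_msg, desired_encoding="passthrough")
    cv2.imwrite("rgb_%06d.png" % frame_id, rgb)
    np.save("depth_%06d.npy" % frame_id, depth)  # keep raw depth values
    frame_id += 1

rospy.init_node("less_rgbd_recorder")
rgb_sub = message_filters.Subscriber("/camera/color/image_raw", Image)    # placeholder topic
depth_sub = message_filters.Subscriber("/camera/depth/image_raw", Image)  # placeholder topic
sync = message_filters.ApproximateTimeSynchronizer(
    [rgb_sub, depth_sub], queue_size=10, slop=0.05)
sync.registerCallback(callback)
rospy.spin()
```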
To install LESS on your workstation and learn more about the system, please refer to the LESS_install.
LuSeg is a novel two-stage segmentation method that maintains the semantic consistency of multi-modal features via our proposed Contrast-Driven Fusion Module (CDFM). Stage I performs single-modal training using only RGB images as input. Stage II performs multi-modal training with both RGB and depth images: the RGB encoder from Stage I is frozen, the output of the depth encoder is aligned with the output of this frozen RGB encoder, and both feature streams are fed into the CDFM. The output of Stage II is the final result of LuSeg.
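The sketch below illustrates the Stage II logic described above in PyTorch: the Stage I RGB encoder is frozen, the depth features are aligned with the frozen RGB features, and both are fused by the CDFM before decoding. The module names, the cosine-similarity alignment term, and the overall structure are illustrative assumptions, not the actual implementation in `train_TS.py`.

```python
# Simplified sketch of Stage II training logic, assuming hypothetical
# RGBEncoder, DepthEncoder, CDFM, and Decoder modules.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LuSegStageII(nn.Module):
    def __init__(self, rgb_encoder, depth_encoder, cdfm, decoder):
        super().__init__()
        self.rgb_encoder = rgb_encoder      # pre-trained in Stage I
        self.depth_encoder = depth_encoder  # trained in Stage II
        self.cdfm = cdfm                    # Contrast-Driven Fusion Module
        self.decoder = decoder
        # Freeze the Stage I RGB encoder so that only the depth branch,
        # the fusion module, and the decoder are updated in Stage II.
        for p in self.rgb_encoder.parameters():
            p.requires_grad = False

    def forward(self, rgb, depth):
        with torch.no_grad():
            f_rgb = self.rgb_encoder(rgb)
        f_depth = self.depth_encoder(depth)

        # Alignment term keeping depth features semantically consistent with
        # the frozen RGB features (cosine similarity is an illustrative choice).
        align_loss = 1.0 - F.cosine_similarity(
            f_rgb.flatten(1), f_depth.flatten(1), dim=1).mean()

        fused = self.cdfm(f_rgb, f_depth)
        logits = self.decoder(fused)
        return logits, align_loss
```

During training, such an alignment term would typically be added to the standard segmentation cross-entropy loss with a weighting factor.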
LunarSeg is a dataset of lunar obstacles containing both positive and negative instances. It was collected using the LESS system and is available for download at Google Drive.
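A minimal PyTorch `Dataset` for LunarSeg might look like the following sketch. The `rgb/`, `depth/`, and `label/` sub-directory layout is an assumption for illustration; check the downloaded archive for the actual structure.

```python
# Minimal dataset sketch, assuming a layout of <split>/rgb, <split>/depth,
# <split>/label with matching file names -- adjust to the real structure.
import os
import numpy as np
from PIL import Image
from torch.utils.data import Dataset

class LunarSegDataset(Dataset):
    def __init__(self, root, split="train"):
        self.rgb_dir = os.path.join(root, split, "rgb")
        self.depth_dir = os.path.join(root, split, "depth")
        self.label_dir = os.path.join(root, split, "label")
        self.names = sorted(os.listdir(self.rgb_dir))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        rgb = np.asarray(Image.open(os.path.join(self.rgb_dir, name)))
        depth = np.asarray(Image.open(os.path.join(self.depth_dir, name)))
        label = np.asarray(Image.open(os.path.join(self.label_dir, name)))
        return rgb, depth, label
```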
The pre-trained weights of LuSeg can be downloaded from here.
We train and evaluate our code with Python 3.7, CUDA 12.1, and PyTorch 2.3.1.
You can download our pretrained weights to reproduce our results, or you can train the LuSeg model on the LunarSeg dataset by running the following commands:
# Stage I
python train_RGB.py --data_dir /your/path/to/LunarSeg/ --batch_size 4 --gpu_ids 0
# Stage II
python train_TS.py --data_dir /your/path/to/LunarSeg/ --batch_size 4 --gpu_ids 0 --rgb_dir /your/path/to/LunarSeg/StageI/trained_rgb/weight/
You can evaluate the LuSeg model on the LunarSeg dataset by running the following command:
python run_demo_lunar.py --data_dir /your/path/to/LunarSeg/test/ --batch_size 2 --gpu_ids 0 --rgb_dir /your/path/to/LunarSeg/StageI/trained_rgb/weight/ --model_dir /your/path/to/LunarSeg/StageII/trained_ts/weight/
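For reference, segmentation results of this kind are usually scored with per-class IoU. The sketch below shows one common way to compute it and is not necessarily identical to the metric code used in `run_demo_lunar.py`.

```python
# Per-class IoU over integer label maps (a common segmentation metric).
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """pred, gt: integer label maps of the same shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union if union > 0 else float("nan"))
    return ious
```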
We would like to express our sincere gratitude for the following open-source work, which has been immensely helpful in the development of LuSeg.
- InconSeg InconSeg: Residual-Guided Fusion With Inconsistent Multi-Modal Data for Negative and Positive Road Obstacles Segmentation.
This project is free software made available under the MIT License. For details see the LICENSE file.