- About
- Structure
- Usage
- Datasets
- Citation
- License
- Acknowledgments
- Note
This library implements a brain-inspired multimodal hybrid neural network (MHNN) for robot place recognition. The MHNN encodes and integrates multimodal cues from both conventional and neuromorphic sensors. Specifically, to encode different sensory cues, we build various neural networks of spatial view cells, place cells, head direction cells, and time cells. To integrate these cues, we design a multiscale liquid state machine that can process and fuse multimodal information effectively and asynchronously by using diverse neuronal dynamics and bio-inspired inhibitory circuits.
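For orientation only, the sketch below shows one way such a pipeline could be wired together in PyTorch: per-modality encoders produce feature codes, a fixed recurrent reservoir (liquid-state-machine style) fuses them over time, and a linear readout scores candidate places. All module names, dimensions, and the fusion rule are illustrative assumptions and do not reflect the actual MHNN implementation in src/model.

```python
# Illustrative sketch only: per-modality encoders plus a fixed recurrent reservoir
# (liquid-state-machine style) for fusion. The names, dimensions, and dynamics are
# assumptions, not the MHNN code in src/model.
import torch
import torch.nn as nn

class ReservoirFusion(nn.Module):
    """Fixed random recurrent reservoir that mixes concatenated modality codes over time."""
    def __init__(self, in_dim, res_dim=256, leak=0.3):
        super().__init__()
        self.w_in = nn.Parameter(0.1 * torch.randn(in_dim, res_dim), requires_grad=False)
        self.w_res = nn.Parameter(0.05 * torch.randn(res_dim, res_dim), requires_grad=False)
        self.leak = leak

    def forward(self, x_seq):
        # x_seq: (time, batch, in_dim); returns the final reservoir state (batch, res_dim).
        state = x_seq.new_zeros(x_seq.size(1), self.w_res.size(0))
        for x_t in x_seq:
            pre = x_t @ self.w_in + state @ self.w_res
            state = (1.0 - self.leak) * state + self.leak * torch.tanh(pre)
        return state

class MultimodalPlaceNet(nn.Module):
    """Encodes three example sensory streams, fuses them, and scores candidate places."""
    def __init__(self, vis_dim, event_dim, head_dim, n_places, code_dim=64, res_dim=256):
        super().__init__()
        self.vis_enc = nn.Sequential(nn.Linear(vis_dim, code_dim), nn.ReLU())      # frame-based cue
        self.event_enc = nn.Sequential(nn.Linear(event_dim, code_dim), nn.ReLU())  # neuromorphic cue
        self.head_enc = nn.Sequential(nn.Linear(head_dim, code_dim), nn.ReLU())    # heading cue
        self.fusion = ReservoirFusion(in_dim=3 * code_dim, res_dim=res_dim)
        self.readout = nn.Linear(res_dim, n_places)

    def forward(self, vis_seq, event_seq, head_seq):
        # Each input: (time, batch, feature_dim).
        codes = torch.cat(
            [self.vis_enc(vis_seq), self.event_enc(event_seq), self.head_enc(head_seq)], dim=-1)
        return self.readout(self.fusion(codes))

# Example call with random data (shapes are arbitrary):
net = MultimodalPlaceNet(vis_dim=512, event_dim=128, head_dim=8, n_places=100)
logits = net(torch.randn(20, 4, 512), torch.randn(20, 4, 128), torch.randn(20, 4, 8))
```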
- src
  - model: contains the MHNN model
  - tools: contains utility functions
  - config: contains the configuration files
  - main: contains the demo
- Step 1, set up the running environment
- Step 2, download the datasets
- Step 3, download the code:
  git clone https://github.com/cognav/neurogpr.git
- Step 4, run the demo in the 'main' folder:
  python main_mhnn.py --config_file ../config/corridor_setting.ini
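The demo reads its settings from an INI file such as corridor_setting.ini. The snippet below is a minimal sketch of inspecting such a file with Python's built-in configparser; the sections and keys it prints depend entirely on the actual file, and none are assumed here.

```python
# Minimal sketch: list the contents of an INI configuration file
# (e.g. ../config/corridor_setting.ini). No particular sections or keys are assumed.
import configparser

cfg = configparser.ConfigParser()
cfg.read("../config/corridor_setting.ini")
for section in cfg.sections():
    for key, value in cfg[section].items():
        print(f"{section}.{key} = {value}")
```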
The following libraries are needed for running the demo.
- Python 3.7.4
- Torch 1.11.0
- Torchvision 0.12.0
- Numpy 1.21.5
- Scipy 1.7.1
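A quick way to confirm that the installed versions match the ones listed above is to print them from Python, for example:

```python
# Print installed versions to compare against the tested ones listed above.
import sys, torch, torchvision, numpy, scipy
print("python      ", sys.version.split()[0])   # tested with 3.7.4
print("torch       ", torch.__version__)        # tested with 1.11.0
print("torchvision ", torchvision.__version__)  # tested with 0.12.0
print("numpy       ", numpy.__version__)        # tested with 1.21.5
print("scipy       ", scipy.__version__)        # tested with 1.7.1
```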
The datasets include four groups of data collected in different environments.
- The Room, Corridor, and THU-Forest datasets are available on Zenodo (https://zenodo.org/record/7827108#.ZD_ke3bP0ds).
- The public Brisbane-Event-VPR dataset is available on Zenodo (https://zenodo.org/record/4302805#.ZD8puXbP0ds).
Fangwen Yu, Yujie Wu, Songchen Ma, Mingkun Xu, Hongyi Li, Huanyu Qu, Chenhang Song, Taoyi Wang, Rong Zhao and Luping Shi. Brain-inspired multimodal hybrid neural network for robot place recognition. Science Robotics (accepted).
MIT License
If you have any questions, please contact us.
If the parameter settings differ, the results may not be consistent; you may need to modify the settings or adjust the code accordingly.