
Diversity-aware Crowd Model for Robust Robot Navigation in Human Populated Environment

This repository contains the implementation and experiment code for our paper Diversity-aware Crowd Model for Robust Robot Navigation in Human Populated Environment.

Introduction

Robot navigation in human-populated environments is challenging due to the diversity of human behaviors and the unpredictability of human paths. Existing Reinforcement Learning (RL)-based methods often rely on simulators that lack sufficient diversity in human behavior, resulting in navigation policies that overfit to specific behaviors and perform poorly in unseen environments. To address this, we propose a diversity-aware crowd model based on RL, employing Constrained Variational Exploration (VE) with a Mutual Information (MI)-based auxiliary reward to capture fine-grained behavioral diversity. The proposed model leverages a Centralized Training Decentralized Execution (CTDE) paradigm, which ensures stable exploration in multi-agent settings. Training against the proposed diversity-aware crowd model yields robust robot navigation policies capable of handling diverse unseen scenarios. Extensive simulation and real-world experiments demonstrate the superior performance of our approach in achieving diverse crowd behaviors and enhancing robot navigation robustness. These findings highlight the potential of our method to advance safe and efficient robot operation in complex dynamic environments.
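To make the MI-based auxiliary reward concrete: a common way to realize an MI term between a latent behavior code z and visited states (as in DIAYN-style skill discovery) is to reward agents for making their code identifiable from their states via a learned discriminator. The sketch below is a conceptual illustration under that assumption, not the implementation in this repository; the `Discriminator` network and the reward shape are placeholders.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """q_phi(z | s): predicts the latent behavior code z from a state s.
    Hypothetical stand-in for the network an MI-based reward would need."""
    def __init__(self, state_dim: int, num_codes: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_codes),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)  # logits over latent codes

def mi_auxiliary_reward(disc: Discriminator, state: torch.Tensor,
                        z: torch.Tensor, num_codes: int) -> torch.Tensor:
    """Variational lower bound on I(z; s):
    r_aux = log q_phi(z | s) - log p(z), with p(z) uniform over codes."""
    log_q = torch.log_softmax(disc(state), dim=-1)          # [batch, num_codes]
    log_q_z = log_q.gather(-1, z.unsqueeze(-1)).squeeze(-1)  # log q(z|s) per sample
    log_p_z = -torch.log(torch.tensor(float(num_codes)))     # log of uniform prior
    return log_q_z - log_p_z
```

Adding this term to the task reward encourages distinct latent codes to produce distinguishable behaviors, which is the fine-grained diversity the abstract refers to.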

Key features

- **Constrained Variational Exploration**
- **Centralized Training Decentralized Execution**

For more details, please refer to our paper: Diversity-aware Crowd Model for Robust Robot Navigation in Human Populated Environment.

Installation

Install the package

```
conda create -n harl python=3.8
conda activate harl
# Install pytorch>=1.9.0 (CUDA>=11.0) manually
git clone https://github.com/wujiaxu/robust_robot_navi.git
cd robust_robot_navi
pip install -e .
```
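Since PyTorch is installed manually, it is worth confirming the install before moving on. This is a generic sanity check, not specific to this repository:

```python
# Verify the manual PyTorch install (run inside the activated harl env).
import torch
print(torch.__version__)          # expect >= 1.9.0
print(torch.cuda.is_available())  # expect True for CUDA >= 11.0 builds
```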

Install Environment Dependencies

Install the dependencies for the HARL Gym environments (see the HARL repository for details).

Resolve Dependencies

After the installation above, run the following commands to pin compatible dependency versions.

```
pip install gym==0.21.0
pip install pyglet==1.5.0
pip install importlib-metadata==4.13.0
```
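A quick check that the pinned versions took effect (again a generic snippet, nothing repo-specific):

```python
# Confirm the pinned gym and pyglet versions are the ones imported.
import gym
import pyglet
print(gym.__version__)  # expect 0.21.0
print(pyglet.version)   # expect 1.5.0
```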

Usage

Training on Existing Environments

Create your preferred config:

```
python script/create_config.py
```

After modifying train_all.sh or train_select.sh to add the path to your config, run:

```
bash script/train_select.sh
```
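As a rough mental model of this step (purely hypothetical; consult the actual shell scripts, and note that the entry point and paths below are placeholders, not the repo's real names), train_select.sh presumably iterates over the config paths you added and launches one training run per config:

```python
# Hypothetical Python equivalent of the train_select.sh loop.
# "script/train.py" and the config path are illustrative placeholders.
import subprocess

config_paths = [
    "configs/my_crowd_config.yaml",  # a config produced by create_config.py
]

for cfg in config_paths:
    subprocess.run(["python", "script/train.py", "--config", cfg], check=True)
```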

Citation

Our implementation is based on HARL, developed at Peking University and BIGAI. If you find our paper or this repository helpful in your research or project, please consider citing the following works:

```
@article{JMLR:v25:23-0488,
  author  = {Yifan Zhong and Jakub Grudzien Kuba and Xidong Feng and Siyi Hu and Jiaming Ji and Yaodong Yang},
  title   = {Heterogeneous-Agent Reinforcement Learning},
  journal = {Journal of Machine Learning Research},
  year    = {2024},
  volume  = {25},
  number  = {32},
  pages   = {1--67},
  url     = {http://jmlr.org/papers/v25/23-0488.html}
}

@inproceedings{liu2024maximum,
  title     = {Maximum Entropy Heterogeneous-Agent Reinforcement Learning},
  author    = {Jiarong Liu and Yifan Zhong and Siyi Hu and Haobo Fu and Qiang Fu and Xiaojun Chang and Yaodong Yang},
  booktitle = {The Twelfth International Conference on Learning Representations},
  year      = {2024},
  url       = {https://openreview.net/forum?id=tmqOhBC4a5}
}
```
