AllocNet is a lightweight learning-based trajectory optimization framework.
Authors: Yuwei Wu, Xiatao Sun, Igor Spasojevic, and Vijay Kumar from the Kumar Lab.
Video Links: YouTube
Related Paper: Wu, Y., Sun, X., Spasojevic, I., and Kumar, V., 2023. Learning Optimal Trajectories for Quadrotors. arXiv preprint arXiv:2309.15191.
If this repo helps your research, please cite our paper:
@misc{wu2023learning,
      title={Learning Optimal Trajectories for Quadrotors},
      author={Yuwei Wu and Xiatao Sun and Igor Spasojevic and Vijay Kumar},
      year={2023},
      eprint={2309.15191},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}
- Dataset: The raw point cloud dataset is from M3ED
- Front-end Path Planning: We use the OMPL planning library
- Planning Modules and Visualization: We use modules from GCOPTER
The repo has been tested on Ubuntu 20.04 with a ros-desktop-full installation.
Follow the official guidance to install ROS, then install OMPL:
sudo apt install libompl-dev
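For reference, a minimal sketch of the ROS step, assuming ROS Noetic (the ROS 1 distribution matching Ubuntu 20.04) and that the ROS apt sources are already configured per the official wiki:

# Noetic is an assumption based on the Ubuntu 20.04 target
sudo apt install ros-noetic-desktop-full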
Download libtorch and put it into the planner folder: GPU version, or CPU version
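A minimal sketch of this step, assuming libtorch 2.0.1 and that the planner package sits at AllocNet/src/planner (the version, CUDA flavor, and destination are assumptions; pick the build matching your hardware from pytorch.org):

# Example: CPU build; substitute the GPU archive for CUDA setups
wget -O libtorch.zip https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.0.1%2Bcpu.zip
unzip libtorch.zip -d AllocNet/src/planner/   # unpacks to .../planner/libtorch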
We use OSQP to solve quadratic programs; install it with:
git clone -b release-0.6.3 https://github.com/osqp/osqp.git
cd osqp
git submodule init
git submodule update
mkdir build && cd build
cmake ..
sudo make install
cd ../..
git clone https://github.com/robotology/osqp-eigen.git
cd osqp-eigen
mkdir build && cd build
cmake ..
sudo make install
git clone git@github.com:yuwei-wu/AllocNet.git && cd AllocNet/src
wstool init && wstool merge utils.rosinstall && wstool update
catkin build
The default mode is the GPU version. To switch to the CPU version, navigate to line 29 of the 'learning_planning.cpp' file and replace 'device(torch::kGPU)' with 'device(torch::kCPU)', then recompile the code for the change to take effect.
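Equivalently, a one-line sketch of the same edit, assuming the file lives at src/planner/src/learning_planning.cpp (the path is an assumption; adjust it to the file's actual location in the repo):

# Swap the libtorch device flag from GPU to CPU, then rebuild
sed -i 's/device(torch::kGPU)/device(torch::kCPU)/' src/planner/src/learning_planning.cpp
catkin build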
source devel/setup.bash
roslaunch planner learning_planning.launch
Click the 2D Nav Goal button in RViz to trigger planning.
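Alternatively, a goal can be published from the command line; a sketch assuming the planner subscribes to RViz's default goal topic /move_base_simple/goal in a "world" frame (both topic and frame are assumptions):

# Publish a single goal pose; unset fields default to zero
rostopic pub -1 /move_base_simple/goal geometry_msgs/PoseStamped \
  '{header: {frame_id: "world"}, pose: {position: {x: 5.0, y: 0.0, z: 1.0}, orientation: {w: 1.0}}}'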
network/
│
├── config/                  - Configuration files for training and testing
│
├── utils/                   - Utility functions and classes
│   └── learning/            - Contains network classes and layers
│
├── sample_<...>.py          - Scripts for sampling data
├── train_minsnap_<...>.py   - Scripts for training
├── test_minsnap_<...>.py    - Scripts for testing
└── ts_conversion_<...>.py   - Scripts for converting models to TorchScript
- Ubuntu 20.04 / Windows 11
- If using WSL2 with the simulation running in Windows, please add
export WSL_HOST_IP=$(cat /etc/resolv.conf | grep nameserver | awk '{print $2}')
to your .bashrc file to allow communication between Windows and the subsystem.
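A sketch of how the variable is typically consumed, assuming the ROS master runs on the Windows side on the default port (the topology and port are assumptions):

# Point this WSL2 shell at the ROS master on the Windows host
export ROS_MASTER_URI=http://$WSL_HOST_IP:11311
export ROS_IP=$(hostname -I | awk '{print $1}')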
- Python 3.8
- CUDA 11.7
- Install Ubuntu packages
sudo apt-get install python3-dev python3-venv
- Create a virtual environment
python3 -m venv venv
- Activate the virtual environment
source venv/bin/activate
- Install the requirements
pip install wheel
pip install numpy==1.24.2
pip install -r requirements.txt
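A quick sanity check of the environment, assuming PyTorch is pulled in by requirements.txt:

# Confirm the interpreter version and that torch sees the CUDA runtime
python --version    # expect Python 3.8.x
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"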
iris: https://github.com/rdeits/iris-distro
Follow its instructions to install; you may need to modify iris-distro/CMakeLists.txt.
For AMD CPUs, if you encounter a core dump, please refer to the instructions in this issue:
https://github.com/rdeits/iris-distro/issues/81
pip install -U kaleido
- For training, please run
python train_minsnap_<model_configuration>.py
- For testing, please run
python test_minsnap_<model_configuration>.py
- For converting the learned model to TorchScript, please run
python ts_conversion_<model_configuration>.py
For any technical issues, please contact Yuwei Wu ([email protected], [email protected]).