Joint Keypoint and Object Detection
This is a complementary repository to our paper YOLOPoint. The code is built on top of pytorch-superpoint and YOLOv5.
- python >= 3.8
- pytorch >= 1.10
- accelerate >= 1.14 (needed only for training)
- rospy (only for deployment with ROS)
$ pip install -r requirements.txt
Hugging Face Accelerate is a wrapper used mainly for multi-GPU and half-precision training. Before training, you must configure Accelerate with:
$ accelerate config
To save time during data loading, images are resized and saved beforehand such that height and width are divisible by 32.
YOLOPoint/
├── datasets/
│   ├── coco/
│   │   ├── images480/
│   │   │   ├── train/
│   │   │   └── val/
│   │   └── labels/
│   │       ├── train/
│   │       └── val/
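The dimension rounding described above can be sketched as follows (a minimal illustration; the actual resizing lives in the repository's data-loading code, and this function name is hypothetical):

```python
def round_to_multiple_of_32(width, height):
    # Round each dimension down to the nearest multiple of 32
    # (clamped to at least 32) so strided convolutions produce
    # integer-sized feature maps, e.g. 650x481 -> 640x480.
    new_w = max(32, (width // 32) * 32)
    new_h = max(32, (height // 32) * 32)
    return new_w, new_h
```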
- Adjust your config files as needed before launching the training script.
- Use the provided weights to generate pseudo-ground-truth keypoint labels:
$ python export_homography.py
- The following command trains YOLOPointS and saves weights to logs/my_experiment/checkpoints:
$ accelerate launch train.py --config configs/coco.yaml --exper_name my_experiment --model YOLOPoint --version s
- Launch TensorBoard to view the training logs:
$ sh logs/my_experiment/run_th.sh
You will first want to configure your config.yaml file.
$ cd src
$ python demo.py --config configs/inference.yaml
First build the package and start a roscore:
$ catkin build yolopoint
$ roscore
You can either stream images from a directory or subscribe to a ROS topic. To visualize object bounding boxes and tracked points, set the --visualize flag.
$ rosrun yolopoint demo_ROS.py src/configs/kitti_inference.yaml directory '/path/to/image/folder' --visualize
Alternatively, you can publish bounding boxes and points and visualize them in another node. Before publishing, the keypoint descriptor vectors are flattened into a single vector; a listener node then unflattens them.
$ rosrun yolopoint demo_ROS.py src/configs/kitti_inference.yaml ros '/image/message/name' --publish
$ rosrun yolopoint demo_ROS_listener.py '/image/message/name'
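The flatten/unflatten round trip can be sketched as below (a minimal illustration only; the actual message layout is defined by the package, and these helper names and the descriptor dimension are hypothetical):

```python
def flatten_descriptors(descriptors):
    # descriptors: list of per-keypoint descriptor lists, all of
    # equal length. Concatenate them into one flat vector so they
    # fit a single flat message field.
    return [value for desc in descriptors for value in desc]

def unflatten_descriptors(flat, dim=256):
    # Reverse step in the listener: split the flat vector back into
    # per-keypoint descriptors of length `dim`.
    return [flat[i:i + dim] for i in range(0, len(flat), dim)]
```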
Example for evaluating homography estimation:
$ python export_homography.py --config configs/hpatches.yaml
$ python evaluation_hpatches.py path/to/weights.pt -homo
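Homography estimation on HPatches is commonly scored by the corner error between the estimated and ground-truth homographies; assuming that is the metric used here, it can be sketched as follows (helper names are hypothetical):

```python
def warp_point(H, x, y):
    # Apply a 3x3 homography H (row-major nested lists) to point (x, y).
    d = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / d,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / d)

def mean_corner_error(H_est, H_gt, width, height):
    # Warp the four image corners with both homographies and average
    # the Euclidean distances between the warped corner pairs.
    corners = [(0, 0), (width, 0), (width, height), (0, height)]
    errors = []
    for x, y in corners:
        xe, ye = warp_point(H_est, x, y)
        xg, yg = warp_point(H_gt, x, y)
        errors.append(((xe - xg) ** 2 + (ye - yg) ** 2) ** 0.5)
    return sum(errors) / len(errors)
```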