- First, train a YOLOv8-pose model or use pretrained weights, then export the weights file to ONNX format following the instructions here. I recommend the pretrained YOLOv8l-pose model: it is accurate enough, although a bit slow.
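As a sketch of the export step, assuming the `ultralytics` package is installed, the pretrained weights can be exported with its CLI (`yolov8l-pose.pt` is the checkpoint name ultralytics uses for the large pose model):

```shell
# Download (if needed) and export the pretrained YOLOv8l-pose weights to ONNX;
# this writes yolov8l-pose.onnx next to the .pt file.
yolo export model=yolov8l-pose.pt format=onnx
```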
- Second, prepare a video for inference.
- After that, you can run the following command and watch the result:
  `python demo.py -iv demo.mp4`
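For reference, a minimal sketch of what a demo script like this typically does with an exported pose model: read video frames, letterbox-resize them to the model's square input, normalize to NCHW float32, and feed them to the ONNX session. The function names and the 640-pixel input size are assumptions, not taken from the actual `demo.py`:

```python
import numpy as np

def letterbox(frame: np.ndarray, size: int = 640) -> np.ndarray:
    """Resize an HxWx3 frame to size x size, preserving aspect ratio and
    padding with gray (nearest-neighbor resize in pure NumPy)."""
    h, w = frame.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = frame[ys][:, xs]
    canvas = np.full((size, size, 3), 114, dtype=frame.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas

def to_model_input(frame: np.ndarray, size: int = 640) -> np.ndarray:
    """HxWx3 uint8 frame -> 1x3xsize x size float32 tensor in [0, 1]."""
    img = letterbox(frame, size).astype(np.float32) / 255.0
    return img.transpose(2, 0, 1)[None]  # HWC -> NCHW with batch dim

# Running the exported model needs onnxruntime and a real video file,
# so it is shown here without being executed:
#   import cv2, onnxruntime as ort
#   sess = ort.InferenceSession("yolov8l-pose.onnx")
#   cap = cv2.VideoCapture("demo.mp4")
#   ok, frame = cap.read()
#   outputs = sess.run(None, {sess.get_inputs()[0].name: to_model_input(frame)})
```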
- DD-Net: A Double-feature Double-motion Network - Link GitHub
- Convert Keras to Onnx - Link