This repository contains the official PyTorch implementation of:
SOMA: Solving Optical Marker-Based MoCap Automatically
Nima Ghorbani and Michael J. Black
Paper | Supp.Mat. | Video | Project website | Poster
SOMA automatically transforms raw marker-based mocap point clouds (black dots in the back) into solved SMPL-X bodies and labeled markers (colored dots).
Notes from our own installation (on a Slurm cluster):
- Make sure to run on Python 3.7, since some packages are not compatible with newer versions.
- To use your own moshpp marker vertex id mapping, replace `marker_vids.py` before the make step (see the MoSh++ section below).
- When running on Slurm:
  - Run `module load eigen tbb` and skip the `sudo apt install` steps.
  - Add `--user` after the `python setup.py` commands if you encounter write-permission issues.
- For the mesh install, point the build at your Boost include dir (`$BOOST_INCLUDE`), i.e. `BOOST_INCLUDE_DIRS=$BOOST_INCLUDE make all`, or `BOOST_ROOT=$BOOST_ROOT make all`.
- For the moshpp install:
  - Change line 4 of `moshpp/src/moshpp/scan2mesh/mesh_distance/sample2meshdist.h` to `#include "Eigen/Core"` (a sed one-liner for this edit follows this list).
  - Copy the changed mosh python files (see the `cp` snippet in the MoSh++ section below), then build and install:
    ```
    cd moshpp/src/moshpp/scan2mesh/mesh_distance
    make
    cd ../../../..
    python setup.py install --user
    ```
- For Blender, either load it from a module or rename the uncompressed `bpy-2.83-20200908` folder to `bpy`.
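A sed sketch of the `sample2meshdist.h` fix above, assuming line 4 of the file holds the include to be replaced:

```bash
# Overwrite line 4 of the header with the quoted Eigen include.
sed -i '4s|.*|#include "Eigen/Core"|' moshpp/src/moshpp/scan2mesh/mesh_distance/sample2meshdist.h
```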
SOMA was originally developed with Python 3.7 and PyTorch 1.8.2 LTS on Ubuntu 20.04.2 LTS. Below we prepare the Python environment using Anaconda; however, we use the plain pip package manager to install dependencies.
```bash
sudo apt install libatlas-base-dev
sudo apt install libpython3.7
sudo apt install libtbb2

conda create -n soma python=3.7
conda install -c conda-forge ezc3d

pip3 install torch==1.8.2+cu102 torchvision==0.9.2+cu102 torchaudio==0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
```
ezc3d is installed through conda above because its installation is currently not supported by pip.
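To sanity-check the environment, a minimal check (assuming the installs above succeeded):

```bash
# Activate the environment and confirm the pinned PyTorch build sees the GPU.
conda activate soma
python -c "import torch, ezc3d; print(torch.__version__, torch.cuda.is_available())"
```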
Assuming that you have already cloned this repository to your local drive, go to the root directory of the SOMA code and run:

```bash
pip install -r requirements.txt
python setup.py develop
```

If you encounter permission issues on Slurm, add --user to the end of the python setup.py command.
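As a quick check that the develop install worked (the package name `soma` is assumed from this repository's setup):

```bash
python -c "import soma; print('soma importable')"
```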
Copy the precompiled smpl-fast-derivatives into your python site-packages folder, i.e. anaconda3/envs/soma/lib/python3.7/site-packages. The final directory should look like anaconda3/envs/soma/lib/python3.7/site-packages/psbody/smpl.
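A sketch of the copy step; `$SMPL_FAST_DIR` is a hypothetical placeholder for wherever you extracted the precompiled download:

```bash
# Adjust both paths to your machine; the layout below mirrors the expected result.
SITE_PACKAGES=$HOME/anaconda3/envs/soma/lib/python3.7/site-packages
cp -r "$SMPL_FAST_DIR/psbody" "$SITE_PACKAGES/"
ls "$SITE_PACKAGES/psbody/smpl"   # verify the final layout
```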
Install the psbody.mesh library following the instructions at https://github.com/MPI-IS/mesh: clone the mesh repository and run python setup.py install from inside the soma conda environment (when in doubt, simply follow the mesh repository's own instructions).
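Putting this together with the Boost note above, a sketch (assuming your Boost headers live at `$BOOST_INCLUDE`; on Slurm, `module load eigen tbb` first):

```bash
# Build and install psbody.mesh; BOOST_ROOT=$BOOST_ROOT make all is the alternative form.
git clone https://github.com/MPI-IS/mesh.git
cd mesh
BOOST_INCLUDE_DIRS=$BOOST_INCLUDE make all
python setup.py install --user   # --user only if you hit write-permission issues
cd ..
```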
To use the rendering capabilities, first install an instance of Blender 2.83 LTS on your machine. Afterward, uncompress the precompiled bpy-2.83 package, keeping the outer bpy-2.83xxx folder, copy it into your python site-packages folder, i.e. anaconda3/envs/soma/lib/python3.7/site-packages, and rename it to bpy (e.g. bpy-2.83-20200908 becomes bpy).
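A sketch of that step; the archive name is assumed from the build mentioned above:

```bash
# Extract the precompiled bpy build, then copy it into site-packages under the name bpy.
SITE_PACKAGES=$HOME/anaconda3/envs/soma/lib/python3.7/site-packages
tar -xf bpy-2.83-20200908.tar.gz          # hypothetical archive name
cp -r bpy-2.83-20200908 "$SITE_PACKAGES/bpy"
python -c "import bpy; print(bpy.app.version)"   # verify
```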
Last but not least, the current SOMA code relies on the MoSh++ mocap solver. Before the make step, replace `marker_vids.py` (and the related files) with your own marker vertex id mapping:

```bash
cp support_data/Test-Leyang/marker_vids.py moshpp/src/moshpp/marker_layout/marker_vids.py
cp support_data/Test-Leyang/create_marker_layout_for_mocaps.py moshpp/src/moshpp/marker_layout/create_marker_layout_for_mocaps.py
cp support_data/Test-Leyang/chmosh.py moshpp/src/moshpp/chmosh.py
```
Please install MoSh++ following the guidelines in its repository.
The codebase has multiple main parts, which we explain in the Tutorials:
- Run SOMA On MoCap Point Cloud Data
- Label Priming an Unknown Marker Layout
- SOMA Ablative Studies
- Solve Already Labeled MoCaps With MoSh++
Please cite the following paper if you use this code directly or indirectly in your research/projects:
```bibtex
@inproceedings{SOMA:ICCV:2021,
  title = {{SOMA}: Solving Optical Marker-Based MoCap Automatically},
  author = {Ghorbani, Nima and Black, Michael J.},
  booktitle = {Proceedings of IEEE/CVF International Conference on Computer Vision (ICCV)},
  month = oct,
  year = {2021},
  doi = {},
  month_numeric = {10}
}
```
Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions and any accompanying documentation before you download and/or use the SOMA data and software (the "Data & Software"), including software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of this repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.
The code in this repository is developed by Nima Ghorbani while at Max-Planck Institute for Intelligent Systems, Tübingen, Germany.
If you have any questions you can contact us at [email protected].
For commercial licensing, contact [email protected].