Commit: cuda 11 configuration
zhan-xu committed Jul 20, 2021
1 parent 490b0a3 commit c5f5c98
Showing 2 changed files with 61 additions and 13 deletions.
24 changes: 12 additions & 12 deletions README.md
@@ -3,6 +3,9 @@ This is the code repository implementing the paper "RigNet: Neural Rigging for Articulated Characters"
**[2020.11.23]** There is now a great add-on for Blender based on our work,
implemented by @[pKrime](https://github.com/pKrime). Please check the Github [link](https://github.com/pKrime/brignet), and the video [demo](https://www.youtube.com/watch?v=ueLlS3IoeGY&feature=youtu.be).

**[2021.07.20]** Another add-on for Blender,
implemented by @[L-Medici](https://github.com/L-Medici). Please check the Github [link](https://github.com/L-Medici/Rignet_blender_addon).

## Dependency and Setup

The project is developed on Ubuntu 16.04 with cuda 10.0 and cudnn 7.6.3.
@@ -11,24 +14,21 @@ On both platforms, we suggest using a conda virtual environment.

#### For Linux user

**[2021.07.20]** I have tested the code on Ubuntu 18.04 with cuda 11.0. The following commands have been updated, which install pytorch 1.7.1 and pytorch_geometric 1.7.2.

```
conda create --name rignet_cuda11 python=3.6
conda activate rignet_cuda11
```

Some necessary libraries include:

```
pip install numpy scipy matplotlib tensorboard open3d==0.9.0 opencv-python
pip install "rtree>=0.8,<0.9"
pip install trimesh[easy]
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=11.0 -c pytorch
pip install --no-index torch-scatter torch-sparse torch-cluster -f https://pytorch-geometric.com/whl/torch-1.7.1+cu110.html
pip install torch-geometric
```
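The wheel index URL passed to `-f` above encodes the torch version and CUDA tag, so the same pattern covers other combinations. A minimal sketch (the helper name `pyg_wheel_index` is ours for illustration, not part of the repo or of pytorch_geometric):

```python
def pyg_wheel_index(torch_version, cuda_tag):
    """Compose the pytorch-geometric wheel index URL for a torch/CUDA combo.

    `cuda_tag` is e.g. "cu110" for CUDA 11.0, "cu101" for CUDA 10.1,
    or "cpu" for CPU-only builds.
    """
    return "https://pytorch-geometric.com/whl/torch-{}+{}.html".format(
        torch_version, cuda_tag)

# The index used by the commands above:
print(pyg_wheel_index("1.7.1", "cu110"))
```

The tag must match the CUDA version pytorch itself was built against, otherwise the compiled extensions fail to load at import time.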

@@ -41,7 +41,7 @@ The code has been tested on Windows 10 with cuda 10.1. The most important differ


## Quick start
We provide a script for quick start. First download our trained models from [here](https://umass-my.sharepoint.com/:u:/g/personal/zhanxu_umass_edu/EYKLCvYTWFJArehlo3-H2SgBABnY08B4k5Q14K7H1Hh0VA).
Put the checkpoints folder into the project folder.

Check and run quick_start.py. We provide some examples in this script.
@@ -62,10 +62,10 @@ running maya_save_fbx.py provided by us in Maya using mayapy. (To use numpy in m
Our dataset ModelsResource-RigNetv1 has 2,703 models.
We split it into 80% for training (2,163 models), 10%
for validation (270 models), and 10% for testing (270 models).
All models in fbx format can be downloaded [here](https://umass-my.sharepoint.com/:u:/g/personal/zhanxu_umass_edu/EVgpX4uZEVNLu8OjX9JRaFYBzOjfm4znndui29evdEfs-g).

To use this dataset in this project, some pre-processing is required.
We put the pre-processed data [here](https://umass-my.sharepoint.com/:u:/g/personal/zhanxu_umass_edu/EaUH-2lI6-xOrJ0N9fDbZOABREt4ryEtQ64wmELF5SReTg), which consists of several sub-folders.

* obj: all meshes in OBJ format.
* rig_info: we store the rigging information in a txt file. Each txt file has four blocks:
  * Lines starting with "joint" define a joint with its 3D position. Each joint line has four elements: joint_name, X, Y, and Z.
  * The line starting with "root" defines the name of the root joint.
  * Lines starting with "hier" define the hierarchy of the skeleton. Each hierarchy line has two elements: a parent joint name and one of its child joint names. One parent joint can have multiple children.
  * Lines starting with "skin" define the skinning weights. Each skinning line follows the format vertex_id, bind_joint_name_1, bind_weight_1, bind_joint_name_2, bind_weight_2, ... The vertex_id follows the vertex order in the obj files in the above obj folder.
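The rig_info layout described above is simple enough to parse with a few lines of Python. The sketch below is illustrative (the two-joint sample text and the joint names `hips`/`spine` are made up for the example, not taken from the dataset):

```python
def parse_rig_info(text):
    """Parse the rig txt format: joint / root / hier / skin lines."""
    joints, hier, skins = {}, [], {}
    root = None
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "joint":
            # joint_name followed by X, Y, Z
            joints[parts[1]] = tuple(float(x) for x in parts[2:5])
        elif parts[0] == "root":
            root = parts[1]
        elif parts[0] == "hier":
            # (parent_name, child_name) pairs
            hier.append((parts[1], parts[2]))
        elif parts[0] == "skin":
            # vertex_id, then alternating joint names and weights
            vid = int(parts[1])
            names, weights = parts[2::2], parts[3::2]
            skins[vid] = {n: float(w) for n, w in zip(names, weights)}
    return joints, root, hier, skins

sample = """joint hips 0.0 0.5 0.0
joint spine 0.0 0.8 0.0
root hips
hier hips spine
skin 0 hips 0.7 spine 0.3"""
joints, root, hier, skins = parse_rig_info(sample)
```

The repo's own `Info` class (used in `compute_pretrain_attn.py`) reads the same format; this sketch only shows the structure of the file.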
50 changes: 49 additions & 1 deletion geometric_proc/compute_pretrain_attn.py
@@ -17,6 +17,7 @@
import os
import glob
import time
import copy
import trimesh
import numpy as np
import open3d as o3d
@@ -187,24 +188,59 @@ def shoot_rays(mesh, origins, ray_dir, debug=False, model_id=None):
    return all_hit_pos, all_hit_ori_id, all_hit_ori


def normalize_mesh_rig(mesh, rig):
    # normalize mesh: translate and scale vertices into the unit box
    mesh_v = np.asarray(mesh.vertices)
    dims = [max(mesh_v[:, 0]) - min(mesh_v[:, 0]),
            max(mesh_v[:, 1]) - min(mesh_v[:, 1]),
            max(mesh_v[:, 2]) - min(mesh_v[:, 2])]
    scale = 1.0 / max(dims)
    pivot = np.array([(min(mesh_v[:, 0]) + max(mesh_v[:, 0])) / 2, min(mesh_v[:, 1]),
                      (min(mesh_v[:, 2]) + max(mesh_v[:, 2])) / 2])
    mesh_v[:, 0] -= pivot[0]
    mesh_v[:, 1] -= pivot[1]
    mesh_v[:, 2] -= pivot[2]
    mesh_v *= scale
    mesh.vertices = o3d.utility.Vector3dVector(mesh_v)

    # normalize rig: apply the same translation and scale to every joint
    for k, v in rig.joint_pos.items():
        rig.joint_pos[k] -= pivot
        rig.joint_pos[k] *= scale
    this_level = [rig.root]
    while this_level:
        next_level = []
        for node in this_level:
            node.pos = (np.array(node.pos) - pivot) * scale
            node.pos = (node.pos[0], node.pos[1], node.pos[2])
            for ch in node.children:
                next_level.append(ch)
        this_level = next_level

    return mesh, rig
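The normalization above scales the model so its largest bounding-box dimension becomes 1 and places the pivot at the bottom-center of the bounding box. The same math can be checked on plain numpy arrays without open3d (a standalone sketch with a made-up two-vertex "mesh"):

```python
import numpy as np

v = np.array([[-2.0, 0.0, -1.0],
              [ 2.0, 4.0,  1.0]])  # toy "mesh" with two vertices
dims = v.max(axis=0) - v.min(axis=0)                    # bbox extents [4, 4, 2]
scale = 1.0 / dims.max()                                # 0.25
pivot = np.array([(v[:, 0].min() + v[:, 0].max()) / 2,  # x: bbox center
                  v[:, 1].min(),                        # y: bbox bottom
                  (v[:, 2].min() + v[:, 2].max()) / 2]) # z: bbox center
v_norm = (v - pivot) * scale
```

After this, the largest extent is exactly 1 and the minimum y is 0, i.e. the character "stands" on the ground plane; the rig joints must be transformed with the identical `pivot` and `scale` so skeleton and mesh stay aligned.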

if __name__ == '__main__':
    start_id = int(sys.argv[1])
    end_id = int(sys.argv[2])
    subsampling = True  # decimate mesh to speed up

    ray_per_sample = 14  # number of rays shot from each joint
    dataset_folder = "/media/zhanxu/4T/ModelResource_RigNetv1_preproccessed/"
    #dataset_folder = "/home/zhanxu/Proj/RigNet_public/quick_start/tigran/"

    remesh_obj_folder = os.path.join(dataset_folder, "obj_remesh/")
    info_folder = os.path.join(dataset_folder, "rig_info/")
    res_folder = os.path.join(dataset_folder, "pretrain_attention/")
    model_list = np.loadtxt(os.path.join(dataset_folder, "model_list.txt"), dtype=int)
    #model_list = np.array([1, 2, 3, 4, 5, 6, 7], dtype=np.int)

    for model_id in model_list[start_id:end_id]:
        print(model_id)
        mesh = o3d.io.read_triangle_mesh(os.path.join(remesh_obj_folder, '{:d}.obj'.format(model_id)))
        rig_info = Info(os.path.join(info_folder, '{:d}.txt'.format(model_id)))
        mesh, rig_info = normalize_mesh_rig(mesh, rig_info)
        mesh_ori = copy.deepcopy(mesh)
        vtx_ori = np.asarray(mesh.vertices)

        if subsampling:
            mesh = mesh.simplify_quadric_decimation(3000)
@@ -228,4 +264,16 @@ def shoot_rays(mesh, origins, ray_dir, debug=False, model_id=None):
            else:
                id_nn = np.argwhere(np.sum(dist[np.argwhere(all_hit_ori_id == joint_id).squeeze(), :], axis=0) > 0).squeeze(1)
            attn[id_nn] = True

        # optional visualization of the attention vertices for debugging
        # vis = o3d.visualization.Visualizer()
        # vis.create_window()
        # mesh_ls = o3d.geometry.LineSet.create_from_triangle_mesh(mesh_ori)
        # mesh_ls.colors = o3d.utility.Vector3dVector([[0.8, 0.8, 0.8] for i in range(len(mesh_ls.lines))])
        # vis.add_geometry(mesh_ls)
        # pcd = o3d.geometry.PointCloud(points=o3d.utility.Vector3dVector(vtx_ori[np.argwhere(attn).squeeze()]))
        # pcd.paint_uniform_color([1.0, 0.0, 0.0])
        # vis.add_geometry(pcd)
        # vis.run()
        # vis.destroy_window()

        np.savetxt(os.path.join(res_folder, '{:d}.txt'.format(model_id)), attn, fmt='%d')
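The `np.savetxt(..., fmt='%d')` call writes the per-vertex attention mask as one 0/1 integer per line, so reading it back for pretraining is a one-line `loadtxt`. A small round-trip sketch (the temporary file and the four-element mask are made up for the example):

```python
import os
import tempfile
import numpy as np

attn = np.array([True, False, True, True])        # toy per-vertex mask
path = os.path.join(tempfile.mkdtemp(), "1.txt")
np.savetxt(path, attn, fmt='%d')                  # same call the script uses
loaded = np.loadtxt(path).astype(bool)            # back to the same boolean mask
```

Because the file is plain text with one value per line, the mask stays aligned with the vertex order of the corresponding obj file in `obj_remesh`.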
