Ruowen Zhao1,2*,
Junliang Ye1,2*,
Zhengyi Wang1,2*,
Guangce Liu2,
Yiwen Chen3,
Yikai Wang1,
Jun Zhu1,2†
*Equal Contribution.
†Corresponding authors.
1Tsinghua University,
2ShengShu,
3S-Lab, Nanyang Technological University,
All of the meshes above are generated by DeepMesh. DeepMesh generates high-quality meshes conditioned on a given point cloud with an auto-regressive transformer.
- [3/20] 🔥🔥We released the pretrained weights of DeepMesh (0.5B).
Our environment has been tested on Ubuntu 22 and CUDA 11.8 with A100, A800, and A6000 GPUs.
- Clone our repo and create the conda environment
git clone https://github.com/zhaorw02/DeepMesh.git && cd DeepMesh
conda env create -f environment.yaml
conda activate deepmesh
- Download the pretrained model weights
pip install -U "huggingface_hub[cli]"
huggingface-cli login
huggingface-cli download zzzrw/DeepMesh --local-dir ./
# Note: if you want to use your own point cloud, make sure normals are included.
# The input should be a .npy file of shape (N, 6), where N is the number of points: the first 3 columns are the xyz coordinates and the last 3 columns are the normals.
# Generate all obj/ply in your folder
CUDA_VISIBLE_DEVICES=0 torchrun --nproc-per-node=1 --master_port=12345 sample.py \
--model_path "your_model_path" \
--steps 90000 \
--input_path examples \
--output_path mesh_output \
--repeat_num 4 \
--temperature 0.5
# Generate the specified obj/ply in your folder
CUDA_VISIBLE_DEVICES=0 torchrun --nproc-per-node=1 --master_port=22345 sample.py \
--model_path "your_model_path" \
--steps 90000 \
--input_path examples \
--output_path mesh_output \
--repeat_num 4 \
--uid_list "wand1.obj,wand2.obj,wand3.ply" \
--temperature 0.5
# Or
bash sample.sh
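If you want to prepare a custom input yourself, the snippet below is a minimal sketch (using only NumPy; the file name and point count are our own choices, not part of the repo) of how to build an `(N, 6)` point cloud with the normals in the last 3 columns. It samples a unit sphere, where the normal at each point conveniently equals the normalized position:

```python
import numpy as np

def make_sphere_cloud(n_points=4096, seed=0):
    """Return an (n_points, 6) float32 array: columns 0-2 are xyz
    coordinates on the unit sphere, columns 3-5 are unit normals."""
    rng = np.random.default_rng(seed)
    # Normalized Gaussian samples are uniform on the sphere.
    xyz = rng.normal(size=(n_points, 3))
    xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)
    normals = xyz.copy()  # for a unit sphere, normal == position
    return np.concatenate([xyz, normals], axis=1).astype(np.float32)

cloud = make_sphere_cloud()
np.save("sphere_pointcloud.npy", cloud)  # move into your --input_path folder
print(cloud.shape)  # (4096, 6)
```

For a real mesh you would sample surface points and their normals instead (e.g. with a mesh-processing library), but the saved array must keep this same `(N, 6)` layout.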
- Please refer to our project page for more examples.
- Release of pre-training code (truncated sliding training).
- Release of post-training code (DPO).
- Release of larger model (1B version).
Our code is based on these wonderful repos:
@article{zhao2025deepmesh,
title={DeepMesh: Auto-Regressive Artist-mesh Creation with Reinforcement Learning},
author={Zhao, Ruowen and Ye, Junliang and Wang, Zhengyi and Liu, Guangce and Chen, Yiwen and Wang, Yikai and Zhu, Jun},
journal={arXiv preprint arXiv:2503.15265},
year={2025}
}