A Chinese version of this README is available (中文版本).

Unique3D

Official implementation of Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image

Kailu Wu, Fangfu Liu, Zhihan Cai, Runjie Yan, Hanyang Wang, Yating Hu, Yueqi Duan, Kaisheng Ma

The Gradio Demo is currently unstable due to a memory leak; we are working on a fix, as well as on a stable Huggingface Demo.

If the Gradio Demo unfortunately hangs or is very crowded, you can use the Online Demo, which is free to try (registration invitation code: aiuni24). Note that the Online Demo differs slightly from the Gradio Demo: inference is slower and generation is less stable, but the material quality is better.

High-fidelity and diverse textured meshes generated by Unique3D from single-view wild images in 30 seconds.

More features

The repo is still under construction; thanks for your patience.

  • Upload weights.
  • Local gradio demo.
  • Detailed tutorial.
  • Huggingface demo.
  • Detailed local demo.
  • Comfyui support.
  • Windows support.
  • Docker support.
  • More stable reconstruction with normal.
  • Training code release.

Preparation for inference

Linux System Setup.

conda create -n unique3d python=3.11  # the env needs a Python interpreter; 3.11 is an assumption, match it to requirements.txt
conda activate unique3d
pip install -r requirements.txt
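
After installing, a quick sanity check can confirm that the environment sees a GPU before you download the weights. This is a minimal sketch, assuming PyTorch is among the pinned requirements (typical for this kind of diffusion pipeline):

import torch  # assumption: PyTorch is installed via requirements.txt

# Multi-view diffusion is impractically slow on CPU, so verify a CUDA
# device is visible before proceeding.
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))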

Interactive inference: run your local gradio demo.

  1. Download the weights from huggingface spaces, and extract them to ckpt/* so the layout matches the tree below (a small layout check is sketched after these steps).

Unique3D
├── ckpt
│   ├── controlnet-tile/
│   ├── image2normal/
│   ├── img2mvimg/
│   ├── realesrgan-x4.onnx
│   └── v1-inference.yaml

  2. Run the interactive inference locally.
python app/gradio_local.py --port 7860
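
Before launching, you can verify that the extracted weights match the layout above. This is a hypothetical helper, not part of the repository; the expected entries are copied directly from the tree shown:

from pathlib import Path

# Entries expected under ckpt/, taken from the tree above.
EXPECTED = [
    "controlnet-tile",
    "image2normal",
    "img2mvimg",
    "realesrgan-x4.onnx",
    "v1-inference.yaml",
]

ckpt = Path("ckpt")
missing = [name for name in EXPECTED if not (ckpt / name).exists()]
if missing:
    raise SystemExit(f"Missing checkpoint entries: {missing}")
print("Checkpoint layout looks complete.")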

Tips to get better results

  1. Unique3D is sensitive to the facing direction of the input image. Due to the distribution of the training data, orthographic front-facing images of subjects in a rest pose generally lead to good reconstructions.
  2. Images with occlusions produce worse reconstructions, since the four generated views cannot cover the complete object. Images with fewer occlusions lead to better results.
  3. Pass in as high-resolution an image as possible when resolution is under your control (see the sketch after this list).
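
In the spirit of these tips, a small preprocessing pass can flag low-resolution inputs and center the subject on a square canvas before inference. This is an illustrative sketch only, not part of the official pipeline; the 512 px threshold, the white background, and the helper name are assumptions:

from PIL import Image

def check_and_pad(path: str, min_side: int = 512) -> Image.Image:
    # Hypothetical helper: the 512 px threshold and white padding are
    # illustrative choices, not values from the Unique3D pipeline.
    img = Image.open(path).convert("RGB")
    w, h = img.size
    if min(w, h) < min_side:
        print(f"Warning: input is {w}x{h}; higher resolution usually yields better textures.")
    # Pad to a square canvas so the subject stays centered.
    side = max(w, h)
    canvas = Image.new("RGB", (side, side), (255, 255, 255))
    canvas.paste(img, ((side - w) // 2, (side - h) // 2))
    return canvas

padded = check_and_pad("input.png")  # "input.png" is a placeholder path
padded.save("input_square.png")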

Acknowledgement

We have borrowed extensively from several open-source repositories. Many thanks to the authors for sharing their code.

Collaborations

Our mission is to create a 4D generative model with 3D concepts. This is just our first step, and the road ahead is still long, but we are confident. We warmly invite you to join the discussion and explore potential collaborations in any capacity. If you're interested in connecting or partnering with us, please don't hesitate to reach out via email ([email protected]).

Citation

If you find Unique3D helpful, please cite our report:

@misc{wu2024unique3d,
      title={Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image}, 
      author={Kailu Wu and Fangfu Liu and Zhihan Cai and Runjie Yan and Hanyang Wang and Yating Hu and Yueqi Duan and Kaisheng Ma},
      year={2024},
      eprint={2405.20343},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
