
[WACV 2025] Implementation of RGB2Point: 3D Point Cloud Generation from Single RGB Images



Overview

RGB2Point was officially accepted to WACV 2025. It takes a single, unposed RGB image and generates a 3D point cloud. See the paper for more details.

Code

RGB2Point has been tested on Ubuntu 22 and Windows 11. Python 3.9+ and PyTorch 2.0+ are required.

Dependencies

Assuming PyTorch 2.0+ with CUDA is installed, run:

pip install timm
pip install accelerate
pip install wandb
pip install open3d
pip install scikit-learn
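
After installing the dependencies, a quick sanity check (a minimal sketch, assuming the packages above installed cleanly) can confirm that PyTorch sees the GPU:

import torch
import timm
import open3d as o3d

# Confirm the PyTorch version and whether CUDA is available, as expected by training.
print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
print(f"timm {timm.__version__}, Open3D {o3d.__version__}")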

Training

python train.py

Training Data

Please download 1) the point cloud data zip file, 2) the rendered images, and 3) the train/test filename lists.

Next, set the paths of the downloaded files in the training code: 1) at line 36, 2) at line 38, and 3) at lines 14 and 16, as in the sketch below.
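
For illustration only, the edited lines might look like the following; the variable names are placeholders and may not match the actual names used in the training code.

# Hypothetical path assignments -- names are placeholders, not the repository's actual variables.
train_split_path = "/data/rgb2point/train_files.txt"   # 3) train filenames (around line 14)
test_split_path = "/data/rgb2point/test_files.txt"     # 3) test filenames (around line 16)
point_cloud_zip = "/data/rgb2point/point_clouds.zip"   # 1) point cloud data zip (around line 36)
rendered_image_dir = "/data/rgb2point/renderings"      # 2) rendered images (around line 38)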

Pretrained Model

Download the model trained on the Chair, Airplane, and Car categories of ShapeNet:

https://drive.google.com/file/d/1Z5luy_833YV6NGiKjGhfsfEUyaQkgua1/view?usp=sharing
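
As a rough sanity check (the checkpoint filename below is a placeholder for whatever the Google Drive file is saved as, and its exact contents are an assumption), the downloaded weights can be inspected with PyTorch before running inference:

import torch

# Load the downloaded checkpoint on CPU; replace the filename with the actual downloaded file.
checkpoint = torch.load("rgb2point_pretrained.pth", map_location="cpu")
# Show whether it is a bare state_dict or a wrapped dictionary of tensors/metadata.
keys = list(checkpoint.keys()) if isinstance(checkpoint, dict) else []
print(type(checkpoint).__name__, keys[:5])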

Inference

python inference.py

Change image_path and save_path in inference.py accordingly.
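
Since Open3D is already a dependency, the generated point cloud can be viewed with a short script like the one below. This is a minimal sketch: the output filename and format are assumptions, so adjust the path to whatever save_path produces (read_point_cloud handles common formats such as .ply and .pcd).

import open3d as o3d

# Load the point cloud written by inference.py; replace the path with your save_path.
pcd = o3d.io.read_point_cloud("output.ply")
print(pcd)  # reports the number of points that were loaded

# Open an interactive viewer to inspect the predicted geometry.
o3d.visualization.draw_geometries([pcd])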

Reference

If you find this paper and code useful in your research, please consider citing:

@article{lee2024rgb2point,
  title={RGB2Point: 3D Point Cloud Generation from Single RGB Images},
  author={Lee, Jae Joong and Benes, Bedrich},
  journal={arXiv preprint arXiv:2407.14979},
  year={2024}
}
