Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images. In ECCV2018.

The code and pre-trained model will be released in a few days!

Pixel2Mesh

This repository contains the TensorFlow implementation for the following paper:

Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images (ECCV2018)

Nanyang Wang*, Yinda Zhang*, Zhuwen Li*, Yanwei Fu, Wei Liu, Yu-Gang Jiang. (*Equal Contribution)

The code is based on the gcn framework. For the Chamfer loss, we include the CUDA implementation of Fan et al.
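As a CPU reference, the symmetric Chamfer distance that the CUDA kernels compute can be sketched in a few lines of NumPy. This is only an illustrative sketch for small point clouds, not the implementation used in this repository:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3).

    For every point, find the squared distance to its nearest neighbour in
    the other cloud, and average over both directions.
    """
    # Pairwise squared distances via broadcasting, shape (N, M).
    diff = p[:, None, :] - q[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

The O(N*M) memory cost is why the repository uses dedicated CUDA kernels instead of a dense pairwise matrix.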

Project Page

The project page is available at http://bigvid.fudan.edu.cn/pixel2mesh

Dependencies

Our code has been tested with Python 2.7, TensorFlow 1.3.0, TFLearn 0.3.2, and CUDA 8.0 on Ubuntu 14.04.

Installation

git clone https://github.com/nywang16/Pixel2Mesh.git
cd Pixel2Mesh
python setup.py install

Running the demo

python test_nn.py data/examples/plane.png

Run the testing script; the output mesh is saved to data/examples/plane.obj

Input image, output mesh.

Dataset

We used the ShapeNet dataset for 3D models and the rendered views from 3D-R2N2.
When using the provided data, make sure to respect the ShapeNet license.

The complete set of training data is available at https://drive.google.com/file/d/1Z8gt4HdPujBNFABYrthhau9VZW10WWYe/view?usp=sharing. Download it into the data/ folder and extract it:

cd Pixel2Mesh/data
tar -xzf ShapeNetTrain.tar

The training/testing split can be found in data/train_list.txt and data/test_list.txt.

Each file is named in the syntheticID_modelID_renderID.dat format; the data processing script will be released soon.

Each .dat file in the provided data contains:

  • The rendered image from 3D-R2N2, resized to 224x224 with a white background.
  • The point cloud (with vertex normals) sampled from ShapeNet, transformed into the camera coordinate frame using the camera parameters from the rendering dataset.

Input image, ground truth point cloud.

Training

python train_nn.py

You can change the training data, learning rate, and other parameters by editing train_nn.py.

Citation

If you use this code for your research, please consider citing:

@inProceedings{wang2018pixel2mesh,
  title={Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images},
  author={Nanyang Wang and Yinda Zhang and Zhuwen Li and Yanwei Fu and Wei Liu and Yu-Gang Jiang},
  booktitle={ECCV},
  year={2018}
}
