
ngp_pl

Advertisement: stay tuned to my channel; I will be uploading CUDA tutorials soon, as well as a stream about this implementation!

Instant-ngp (only NeRF) in pytorch+cuda trained with pytorch-lightning (high quality with high speed). This repo aims to provide a concise pytorch interface to facilitate future research. I would be grateful if you share it (and a citation is highly appreciated)!

progress.mp4
lego_trainval_5min_PNSR35.73.mp4

💻 Installation

This implementation has strict requirements due to dependencies on other libraries. If you encounter installation problems caused by a hardware/software mismatch, I'm afraid there are no plans to support other platforms (contributions are welcome).

Hardware

  • OS: Ubuntu 20.04
  • NVIDIA GPU with Compute Capability >= 75 and memory > 8GB (tested with an RTX 2080 Ti), CUDA 11.3 (might work with older versions)

Software

  • Clone this repo with git clone https://github.com/kwea123/ngp_pl

  • Python>=3.8 (installation via anaconda is recommended; use conda create -n ngp_pl python=3.8 to create a conda environment, and activate it with conda activate ngp_pl)

  • Python libraries

    • Install pytorch by pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
    • Install tinycudann following their instruction (compilation and pytorch extension)
    • Install apex following their instruction
    • Install core requirements by pip install -r requirements.txt
  • Cuda extension: Upgrade pip to >= 22.1 and run pip install models/csrc/
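The steps above can be condensed into a single shell sketch. This is only a summary of the instructions listed here, assuming a conda setup and CUDA 11.3 wheels; tinycudann and apex still need to be installed per their own instructions, and the cu113 index URL should be adjusted to match your CUDA version.

```shell
# Create and activate the environment (conda assumed)
conda create -n ngp_pl python=3.8 -y
conda activate ngp_pl

# Clone the repo
git clone https://github.com/kwea123/ngp_pl
cd ngp_pl

# PyTorch with CUDA 11.3 wheels
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113

# tinycudann and apex: follow their own installation instructions (not shown here)

# Core requirements
pip install -r requirements.txt

# CUDA extension (pip >= 22.1 required)
pip install --upgrade "pip>=22.1"
pip install models/csrc/
```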

📚 Data preparation

Download preprocessed datasets from NSVF.

🔑 Training

Quickstart: python train.py --root_dir <path/to/lego> --exp_name Lego

It will train the lego scene for 20k steps (each step with 8192 rays) and perform one test at the end. Training should finish within about 5 minutes (saving the test images is slow; add --no_save_test to disable it). The test PSNR is shown at the end.
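The PSNR reported at test time is the standard peak signal-to-noise ratio: for images normalized to [0, 1], PSNR = -10 · log10(MSE) in dB. A minimal sketch of the metric (not the repo's actual evaluation code):

```python
import math

def psnr(mse: float, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB, from the mean squared error
    between a rendered image and its ground truth."""
    return 10.0 * math.log10(max_val ** 2 / mse)

# An MSE of 1e-4 between rendered and ground-truth pixels
# corresponds to roughly 40 dB PSNR.
print(psnr(1e-4))
```

Higher is better; the ~35 dB reached on Lego in 5 minutes means a per-pixel MSE of about 3e-4.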

More options can be found in opt.py.

🔎 Testing

Use test.ipynb to generate images. A pretrained Lego model is available here

Comparison with torch-ngp and the paper

I compared the quality (average testing PSNR on Synthetic-NeRF) and the inference speed (on the Lego scene) vs. the concurrent work torch-ngp (default settings) and the paper, all trained for about 5 minutes:

Method             avg PSNR  FPS
torch-ngp          31.46     18.2
mine               32.38     36.2
instant-ngp paper  33.18     60

As for quality, mine is slightly better than torch-ngp's, but the results might fluctuate across different runs.

As for speed, mine is faster than torch-ngp, but still only half as fast as instant-ngp. Speed depends on the scene (if most of the scene is empty, rendering is faster).
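FPS figures are easier to compare as per-frame latency (1000 / FPS). A quick conversion for the numbers above (values copied from the table; the frame resolution is whatever each method renders for Lego):

```python
def ms_per_frame(fps: float) -> float:
    """Convert frames per second to milliseconds per frame."""
    return 1000.0 / fps

for name, fps in [("torch-ngp", 18.2), ("mine", 36.2), ("instant-ngp paper", 60.0)]:
    print(f"{name}: {ms_per_frame(fps):.1f} ms/frame")
```

So "half as fast" here means roughly 28 ms vs. 17 ms per frame.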


Left: torch-ngp. Right: mine.

More details are in the following section.

Benchmarks

To run benchmarks, use the scripts under benchmarking.

The following are my results:

Synthetic-NeRF

        Mic     Ficus   Chair   Hotdog  Materials  Drums   Ship    Lego    AVG
PSNR    35.23   33.64   34.78   36.76   28.77      25.61   29.57   34.69   32.38
FPS     40.81   34.02   49.80   25.06   20.08      37.77   15.77   36.20   32.44

Synthetic-NSVF

        Wineholder  Steamtrain  Toad    Robot   Bike    Palace  Spaceship  Lifestyle  AVG
PSNR    31.06       35.65       34.49   36.23   36.99   36.36   35.48      33.96      35.03
FPS     47.07       75.17       50.42   64.87   66.88   28.62   35.55      22.84      48.93

Tanks and Temples

        Ignatius  Truck   Barn    Caterpillar  Family  AVG
PSNR    28.90     28.21   28.92   26.30        33.77   29.22
*FPS    10.04     7.99    16.14   10.91        6.16    10.25

*Evaluated on test-traj

BlendedMVS

        *Jade   *Fountain  Character  Statues  AVG
PSNR    25.69   26.91      30.16      26.93    27.42
**FPS   26.02   21.24      35.99      19.22    25.61

*I manually switched the background from black to white, so the numbers aren't directly comparable to those in the papers.

**Evaluated on test-traj
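The AVG columns above are plain arithmetic means over the scenes. A quick sanity check for the Synthetic-NeRF rows (values copied from the table):

```python
# Per-scene results for Synthetic-NeRF, in table order:
# Mic, Ficus, Chair, Hotdog, Materials, Drums, Ship, Lego
psnr = [35.23, 33.64, 34.78, 36.76, 28.77, 25.61, 29.57, 34.69]
fps  = [40.81, 34.02, 49.80, 25.06, 20.08, 37.77, 15.77, 36.20]

avg_psnr = sum(psnr) / len(psnr)
avg_fps = sum(fps) / len(fps)
print(round(avg_psnr, 2), round(avg_fps, 2))  # → 32.38 32.44
```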

TODO

  • support custom dataset
