A pytorch implementation of instant-ngp, as described in _Instant Neural Graphics Primitives with a Multiresolution Hash Encoding_.
Note: The NeRF performance is far from instant due to the current naive raymarching implementation (sketched below).
SDF | NeRF |
---|---|
*(demo rendering)* | *(demo rendering)* |
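To see why the naive approach is slow: it samples every ray at uniform depths and queries the network at every sample, with no occupancy grid to skip empty space. A minimal sketch of such a renderer (the function name and `model` interface are illustrative, not this repo's actual API):

```python
import torch

def naive_render_rays(model, rays_o, rays_d, near=0.1, far=5.0, n_samples=128):
    """Naive volume rendering: dense uniform sampling along every ray.
    Assumes model(x) returns (sigma, rgb) for 3D points x (illustrative only)."""
    t = torch.linspace(near, far, n_samples, device=rays_o.device)        # [S]
    pts = rays_o[:, None, :] + rays_d[:, None, :] * t[None, :, None]      # [N, S, 3]

    # One network query per sample -- the bottleneck that CUDA raymarching
    # with empty-space skipping is meant to reduce.
    sigma, rgb = model(pts.reshape(-1, 3))
    sigma = sigma.reshape(-1, n_samples)                                  # [N, S]
    rgb = rgb.reshape(-1, n_samples, 3)                                   # [N, S, 3]

    # Standard alpha compositing of the volume rendering integral.
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma * delta)                               # [N, S]
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    weights = alpha * trans                                               # [N, S]
    return (weights[..., None] * rgb).sum(dim=1)                          # [N, 3] colors
```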
As the official pytorch extension tinycudann has been released, the implementations in this repo can be used as modular alternatives. The performance and speed of these modules are not guaranteed to be on-par with tinycudann, so we also support using tinycudann as the backbone via the `--tcnn` flag.
Later development will focus on reproducing the NeRF inference speed.
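For reference, plugging in the tinycudann backbone looks roughly like the sketch below. The configuration values are illustrative defaults from the paper, not necessarily what the `--tcnn` code path passes:

```python
import torch
import tinycudann as tcnn

# Multiresolution hash encoding + fully-fused MLP from the official extension.
encoding = tcnn.Encoding(
    n_input_dims=3,
    encoding_config={
        "otype": "HashGrid",
        "n_levels": 16,
        "n_features_per_level": 2,
        "log2_hashmap_size": 19,
        "base_resolution": 16,
        "per_level_scale": 2.0,
    },
)
network = tcnn.Network(
    n_input_dims=encoding.n_output_dims,
    n_output_dims=16,  # e.g., 1 density + 15 geometry features
    network_config={
        "otype": "FullyFusedMLP",
        "activation": "ReLU",
        "output_activation": "None",
        "n_neurons": 64,
        "n_hidden_layers": 1,
    },
)

x = torch.rand(2**14, 3, device="cuda")  # points in [0, 1]^3
h = network(encoding(x))                 # runs internally in fp16
```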
- Fully-fused MLP
  - basic pytorch binding of the original implementation
- HashGrid Encoder (a pure-pytorch sketch of the lookup follows this list)
  - basic pytorch CUDA extension
  - fp16 support
- Experiments
  - SDF
    - baseline
    - better SDF calculation (especially for non-watertight meshes)
  - NeRF
    - baseline (although much slower)
    - ray marching in CUDA
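For intuition about what the HashGrid Encoder computes, here is a slow, single-level, pure-pytorch sketch of the lookup; the actual extension fuses all levels into one CUDA kernel and supports fp16. All names below are illustrative:

```python
import torch

# Spatial-hash primes from the instant-ngp paper.
PRIMES = torch.tensor([1, 2654435761, 805459861])

def hash_coords(coords, log2_T):
    """Hash integer grid coordinates [..., 3] into [0, 2^log2_T)."""
    h = coords[..., 0] * PRIMES[0] ^ coords[..., 1] * PRIMES[1] ^ coords[..., 2] * PRIMES[2]
    return h % (2 ** log2_T)

def hash_encode_one_level(x, table, resolution, log2_T):
    """x: [N, 3] points in [0, 1); table: [2^log2_T, F] learnable features.
    Returns trilinearly interpolated features of shape [N, F]."""
    xs = x * resolution
    x0 = xs.floor().long()   # [N, 3] lower corner of the enclosing grid cell
    w = xs - x0.float()      # [N, 3] fractional position inside the cell
    feats = 0.0
    for corner in range(8):  # visit the 8 corners of the cell
        offset = torch.tensor([(corner >> d) & 1 for d in range(3)])
        idx = hash_coords(x0 + offset, log2_T)                               # [N]
        weight = torch.prod(torch.where(offset.bool(), w, 1.0 - w), dim=-1)  # [N]
        feats = feats + weight[:, None] * table[idx]
    return feats

# Tiny near-zero init of the feature table, as in the paper.
table = torch.randn(2**19, 2) * 1e-4
x = torch.rand(4, 3)
print(hash_encode_one_level(x, table, resolution=64, log2_T=19).shape)  # torch.Size([4, 2])
```

The multiresolution encoder simply concatenates such features over geometrically growing resolutions before feeding them to the MLP.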
Update logs:
- 2.15: add the official tinycudann as an alternative backend.
- 2.10: add cuda_raymarching; training and inference are faster, but quality is currently worse.
- 2.6: add support for RGBA image.
- 1.30: fixed atomicAdd() to use __half2 in the HashGrid Encoder's backward pass; fp16 training speed is now as expected!
- 1.29:
  - finished an experimental binding of the fully-fused MLP.
  - replaced SHEncoder with a CUDA implementation.
- 1.26: add fp16 support for HashGrid Encoder (requires CUDA >= 10 and GPU ARCH >= 70 for now...).
Install the dependencies:

```bash
pip install -r requirements.txt
```

Tested on Ubuntu with torch 1.10 and CUDA 11.3.
We use the same data format as instant-ngp, e.g., armadillo and fox. Please download and put them under `./data`.
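To sanity-check a downloaded scene, you can inspect its transforms file. The keys below (`camera_angle_x`, `frames`, `file_path`, `transform_matrix`) follow the standard instant-ngp / NeRF-synthetic layout; a given dataset may carry extra intrinsics:

```python
import json

# Peek at the NeRF-style camera file shipped with each scene.
with open("data/fox/transforms.json") as f:
    meta = json.load(f)

print("camera_angle_x:", meta.get("camera_angle_x"))
print("number of frames:", len(meta["frames"]))
frame = meta["frames"][0]
print("first image path:", frame["file_path"])
print("camera-to-world matrix:", frame["transform_matrix"])  # 4x4 nested list
```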
The first run will take some time to compile the CUDA extensions.
```bash
# SDF experiment
bash scripts/run_sdf.sh

# NeRF experiment
bash scripts/run_nerf.sh

python train_nerf.py data/fox/transforms.json --workspace trial_nerf # fp32 mode
python train_nerf.py data/fox/transforms.json --workspace trial_nerf --fp16 # fp16 mode (pytorch amp)
python train_nerf.py data/fox/transforms.json --workspace trial_nerf --fp16 --tcnn # fp16 mode + official tinycudann
python train_nerf.py data/fox/transforms.json --workspace trial_nerf --fp16 --ff # fp16 mode + fully-fused MLP
python train_nerf.py data/fox/transforms.json --workspace trial_nerf --fp16 --ff --cuda_raymarching # (experimental) fp16 mode + fully-fused MLP + cuda raymarching
```
- Credits to Thomas Müller for the amazing tiny-cuda-nn and instant-ngp:

  ```bibtex
  @misc{tiny-cuda-nn,
      author = {Thomas M\"uller},
      year   = {2021},
      note   = {https://github.com/nvlabs/tiny-cuda-nn},
      title  = {Tiny {CUDA} Neural Network Framework}
  }

  @article{mueller2022instant,
      title   = {Instant Neural Graphics Primitives with a Multiresolution Hash Encoding},
      author  = {Thomas M\"uller and Alex Evans and Christoph Schied and Alexander Keller},
      journal = {arXiv:2201.05989},
      year    = {2022},
      month   = jan
  }
  ```
- The framework of NeRF is adapted from nerf_pl:

  ```bibtex
  @misc{queianchen_nerf,
      author = {Quei-An, Chen},
      title  = {Nerf_pl: a pytorch-lightning implementation of NeRF},
      url    = {https://github.com/kwea123/nerf_pl/},
      year   = {2020},
  }
  ```