nerf_pl (v1.0.1)

Unofficial implementation of NeRF (Neural Radiance Fields) using pytorch-lightning.

Official implementation: nerf

Reference pytorch implementation: nerf-pytorch

Features

  • Multi-GPU training: training on 8 GPUs finishes within 1 hour for the synthetic dataset!

Installation

Hardware

  • OS: Ubuntu 18.04
  • NVIDIA GPU with CUDA>=10.1 (tested with 1 RTX2080Ti)

Software

  • Clone this repo by git clone --recursive https://github.com/kwea123/nerf_pl
  • Python 3.8 (installation via anaconda is recommended, use conda create -n nerf_pl python=3.8 to create a conda environment and activate it by conda activate nerf_pl)
  • Python libraries
    • Install core requirements by pip install -r requirements.txt
    • Install torchsearchsorted by cd torchsearchsorted then pip install .

Training

Please see each subsection for training on different datasets. Available training datasets:

  • Blender (Realistic Synthetic 360)

Blender

Data download

Download nerf_synthetic.zip from here

Training model

Run (example)

python train.py \
   --dataset_name blender \
   --root_dir $BLENDER_DIR \
   --N_importance 64 --img_wh 400 400 --noise_std 0 \
   --num_epochs 16 --batch_size 1024 \
   --optimizer adam --lr 5e-4 \
   --lr_scheduler steplr --decay_step 2 4 8 --decay_gamma 0.5 \
   --exp_name exp

These parameters are chosen to best mimic the training settings in the original repo. See opt.py for all configurations.

Pretrained model and log

Download the pretrained model and training log in release.

Comparison with other repos

| Repo        | GPU mem in GB (train) | Speed (1 step) |
| ----------- | --------------------- | -------------- |
| Original    | 8.5                   | 0.177s         |
| Ref pytorch | 6.0                   | 0.147s         |
| This repo   | 3.2                   | 0.12s          |

The speed is measured on 1 RTX 2080 Ti. A detailed profile can be found in the release. Training memory is largely reduced because the original repo loads the whole dataset onto the GPU at the beginning, whereas this repo only transfers one batch to the GPU at each step.
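The memory difference can be illustrated with a toy sketch (illustrative only, not the repo's actual data pipeline; `peak_resident` is a made-up helper):

```python
# Sketch: contrast preloading a whole ray dataset on the GPU (original repo)
# with streaming one batch per step (this repo). The "device" here is just
# an abstraction; we count how many rays are resident at once.

def peak_resident(n_rays, batch_size, preload):
    """Return the peak number of rays resident on the device at once."""
    if preload:
        # original repo: all rays moved to the GPU up front
        return n_rays
    # this repo: only the current batch lives on the GPU
    return min(batch_size, n_rays)

# e.g. 100 images of 400x400 rays, batch_size 1024
n_rays = 100 * 400 * 400
print(peak_resident(n_rays, 1024, preload=True))   # 16000000
print(peak_resident(n_rays, 1024, preload=False))  # 1024
```

With per-step batching, peak device memory scales with the batch size rather than the dataset size, which matches the reduction shown in the table.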

Testing

See test.ipynb for a simple view synthesis and depth prediction on 1 image.

Use eval.py to create the whole sequence of moving views. E.g.

python eval.py \
   --root_dir $BLENDER \
   --dataset_name blender --scene_name lego \
   --img_wh 400 400 --N_importance 64 --ckpt_path $CKPT_PATH

It will create the folder results/{dataset_name}/{scene_name}, run inference on all test data, and finally create a gif from the rendered images.
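A minimal sketch of the output layout described above (hypothetical helper; the actual path handling lives in eval.py):

```python
import os

# Build the "results/{dataset_name}/{scene_name}" folder that eval.py
# is described as creating (sketch only, not the repo's code).
def output_dir(dataset_name, scene_name):
    return os.path.join("results", dataset_name, scene_name)

print(output_dir("blender", "lego"))  # results/blender/lego on POSIX
```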

Example of lego scene using pretrained model, shown at 1/2 scale: (PSNR=31.39, paper=32.54)

lego

Example of fern scene using pretrained model, shown at 1/2 scale:

fern

Notes on differences with the original repo

  • The learning rate in the original repo decays every training step; here it decays by epoch, i.e. it changes only at the end of each epoch.
  • The validation image for LLFF dataset is chosen as the last image here, whereas the original repo chooses every 8th image.
  • The rendering spiral path is slightly different from the original repo (I use approximate values to simplify the code).
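The epoch-wise decay in the first bullet can be sketched as follows (an assumption mirroring the `--decay_step 2 4 8 --decay_gamma 0.5` example above; `lr_at_epoch` is a hypothetical helper, not code from this repo):

```python
# Step decay by epoch: the learning rate is constant within an epoch and
# is multiplied by gamma once each decay-step epoch has been reached.

def lr_at_epoch(base_lr, epoch, decay_steps=(2, 4, 8), gamma=0.5):
    """Learning rate used during `epoch` (0-indexed)."""
    n_decays = sum(1 for s in decay_steps if epoch >= s)
    return base_lr * gamma ** n_decays

print(lr_at_epoch(5e-4, 0))  # 0.0005
print(lr_at_epoch(5e-4, 2))  # 0.00025
print(lr_at_epoch(5e-4, 9))  # 6.25e-05
```

This is the same schedule shape as PyTorch's MultiStepLR with milestones [2, 4, 8] and gamma 0.5, stepped once per epoch, whereas the original repo applies an exponential decay every training step.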

TODO

  • Test multi-GPU training for LLFF data with only 1 validation image across 8 GPUs.

About

NeRF (Neural Radiance Fields) and NeRF in the Wild using pytorch-lightning

License

MIT (two MIT licenses found: LICENSE and LICENSE_bmild)