Equivariant Neural Rendering

Authors: Elias Dubbeldam, Aniek Eijpe, Robin Sasse, Oline Ranum, Orestis Gorgogiannis

This repository contains the code and blogpost for our reproduction and extension of Equivariant Neural Rendering (ICML 2020). The work presents a framework for learning implicit neural scene representations directly from images; the framework is able to render a scene from a single image. We present models trained on rotations only, translations only, and roto-translations. For an in-depth discussion of our work, see the blogpost.


Figure: Novel view synthesis rendered from a single image.

Code structure

| Directory | Description |
| --- | --- |
| demos/ | Notebooks used to analyze training runs and results, and to render scenes. |
| src/configs/ | Configuration files to run the training experiments. |
| src/enr/ | Source code used for equivariant neural rendering. Adapted from the repo of the original paper. |
| src/imgs/ | Location where produced images are stored. |
| src/scripts/ | Files to run that reproduce the results. |
| src/train_results/ | Location where all trained models and their (intermediate) results are stored. |
| blogpost.md | Report introducing the original work and discussing our novel contribution and analysis. |

Installation & download

First, create the conda environment and download Blender:

conda env create -f env_nr.yml
conda activate nr

# Download Blender 3.5.1
wget https://ftp.nluug.nl/pub/graphics/blender/release/Blender3.5/blender-3.5.1-linux-x64.tar.xz

# Unpack
tar -xvf blender-3.5.1-linux-x64.tar.xz
rm ./blender-3.5.1-linux-x64.tar.xz

# Move and rename for shorter commands
mv ./blender-3.5.1-linux-x64 ./src/enr/data/demo/blender
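If the unpack and move succeeded, the Blender binary can be checked from the repository root (a quick sanity check, assuming the commands above were run as shown):

# Print the Blender version from its new location
./src/enr/data/demo/blender/blender --version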

Finally, download the ShapeNet Core dataset; the subset comprised of chair models can be found in folder 03001627.

Usage

Creating data

After following the installation instructions above, one can run the scripts in src/scripts/ to create the data and reproduce the training runs and results. To create the necessary data, run the following:

python src/enr/data/create_data.py --blender_dir <blender location> --obj_dir <location of blender objects to read> \
--output_dir <location to store the data> --transformation <rotation, translation or roto-translation>

Depending on the --transformation flag, this produces images from the objects in obj_dir with Blender. See the ReadMe about the data for more information.
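For illustration, a concrete invocation could look as follows. The Blender directory matches the installation step above; the object and output directories are placeholders for wherever the ShapeNet chairs were downloaded and where the rendered images should be stored:

python src/enr/data/create_data.py --blender_dir ./src/enr/data/demo/blender \
--obj_dir <path to ShapeNet folder 03001627> --output_dir <path to store rendered images> \
--transformation rotation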

Training a model

To train a model, run the following:

python src/scripts/experiments.py config.json

See the configuration files in src/configs/ for detailed training options, such as training on rotations, translations or roto-translations, single- or multi-GPU training, and more. To reproduce our results, use the config.json files as they are saved in src/train_results/<timestamp>_<id>; that is, change id, path_to_data and path_to_test_data to the paths where the data was created following the instructions in the ReadMe about the data.
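As a sketch of that reproduction workflow (the run directory name below is a placeholder; substitute one of the directories actually present in src/train_results/):

# Start from the saved configuration of a previous run
cp src/train_results/<timestamp>_<id>/config.json config.json
# Edit id, path_to_data and path_to_test_data in config.json, then train
python src/scripts/experiments.py config.json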

Evaluate results and produce renderings

To evaluate a model, run the following:

python src/scripts/evaluate_psnr.py <path_to_model> <path_to_data>

By default, models are stored in src/train_results/. The path to the data depends on where the user has saved it; see the configuration files in src/configs/ for more information.
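For example, assuming the evaluated model is the best_model.pt checkpoint of a finished run and the data path points at the test set created with the data script (both paths are placeholders):

python src/scripts/evaluate_psnr.py src/train_results/<timestamp>_<id>/best_model.pt <path to test data>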

One can create gifs that render the scene from different viewpoints by running the following:

python src/scripts/render_gif.py --img_path <source_img_path> --model_path <model_path>

Models are stored in src/train_results/<timestamp>_<id>/best_model.pt; the path to the source image depends on where the user has saved their data.
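A concrete call might look like the following, where the source image path is a placeholder for any rendered view produced by the data-creation step:

python src/scripts/render_gif.py --img_path <path to a rendered source image> \
--model_path src/train_results/<timestamp>_<id>/best_model.pt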

Analyze results and explore the model

Finally, the model can be analyzed and used to render scenes using the notebooks in demos/. The notebook demos/analyze_training.ipynb shows the loss curves of the different training runs. The notebook EquivariantNR.ipynb shows how to use a trained model to infer a scene representation from a single image and how to use this representation to render novel views.

An in-depth discussion and analysis of the results can be found in the blogpost.

License

As this code is built upon the repository apple/ml-equivariant-neural-rendering, the same license applies. That is, this project is licensed under the Apple Sample Code License.
