conda env create -f environment.yml
./download_demo_data.sh
conda activate nex
python train.py -scene data/crest_demo -model_dir crest -http
tensorboard --logdir runs/
We provide environment.yml
to help you set up a conda environment.
conda env create -f environment.yml
Download: Shiny dataset.
We provide 2 directories named shiny
and shiny_extended
.
shiny
contains the benchmark scenes used to report the scores in our paper.
shiny_extended
contains additional challenging scenes used on our project page and in the video.
Download: Undistorted front-facing dataset
For the real forward-facing dataset, NeRF is trained on the raw images, which may contain lens distortion, but we use the undistorted images provided by COLMAP.
However, you can try running other scenes from Local Light Field Fusion (e.g., airplant) without any changes to the dataset files. In this case, the images are not automatically undistorted.
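As a sketch, training on such an LLFF scene might look like the following. The scene path and model name are assumptions based on where you extract the dataset; adjust them to your setup.

```shell
# Hypothetical example: train on an LLFF scene (airplant) with no
# dataset changes. The images are NOT automatically undistorted here.
# data/airplant is an assumed path; point -scene at your extracted scene.
python train.py -scene data/airplant -model_dir airplant -http
```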
We slightly modified the file structure of the Spaces dataset in order to determine the plane placement and to split the train/test sets.
Run with the paper's config
python train.py -scene ${PATH_TO_SCENE} -model_dir ${MODEL_TO_SAVE_CHECKPOINT} -http
For a GPU or GPUs with less memory (e.g., a single RTX 2080 Ti), you can run with the following command:
python train.py -scene ${PATH_TO_SCENE} -model_dir ${MODEL_TO_SAVE_CHECKPOINT} -http -layers 12 -sublayers 6 -hidden 256
Note that if your GPU runs out of memory, you can try reducing the number of layers, sublayers, and sampled rays.
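For example, a lower-memory run might shrink the MPI further than the command above. This is only a sketch: the specific values are assumptions, not tuned settings, and the flag for reducing sampled rays is omitted here (check train.py's argument parser for its exact name).

```shell
# Hypothetical lower-memory configuration: fewer planes and a smaller
# MLP than the RTX 2080 Ti settings shown above. Quality will drop as
# these values shrink; treat them as a starting point, not a recipe.
python train.py -scene ${PATH_TO_SCENE} -model_dir ${MODEL_TO_SAVE_CHECKPOINT} -http \
  -layers 8 -sublayers 4 -hidden 128
```

The number of sampled rays per step can also be lowered via the corresponding train.py flag to trade training speed for memory.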
To generate a WebGL viewer and a video result:
python train.py -scene ${scene} -model_dir ${MODEL_TO_SAVE_CHECKPOINT} -predict -http
To generate a video that matches the real forward-facing rendering path, add the -nice_llff
argument, or -nice_shiny
for the Shiny dataset.
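Putting the pieces together, a full prediction run on a Shiny scene might look like the sketch below. The scene path (data/shiny/lab) and model directory are assumptions; substitute your own.

```shell
# Hypothetical example: generate the WebGL viewer and a video for a
# Shiny scene, rendering along the real forward-facing path.
# data/shiny/lab is an assumed path to one extracted Shiny scene.
python train.py -scene data/shiny/lab -model_dir lab -predict -http -nice_shiny
```

For an LLFF forward-facing scene, swap -nice_shiny for -nice_llff.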