
NeRF-pytorch

NeRF (Neural Radiance Fields) is a method that achieves state-of-the-art results for synthesizing novel views of complex scenes. Below are some videos generated by this repository (pre-trained models are provided below):

(example result videos)

This project is a faithful PyTorch reimplementation of NeRF that reproduces the results while running 1.3x faster. The code is based on the authors' TensorFlow implementation here and has been tested to match it numerically.

Installation

git clone https://github.com/yenchenlin/nerf-pytorch.git

cd nerf-pytorch

pip install -r requirements.txt

Dependencies

  • PyTorch 1.4
  • matplotlib
  • numpy
  • imageio
  • imageio-ffmpeg
  • configargparse

The LLFF data loader requires ImageMagick.

You will also need the LLFF code (and COLMAP) set up to compute poses if you want to run on your own real data.
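
For your own captures, this pose-computation step is normally done by running LLFF's imgs2poses.py script on the scene directory. The exact invocation below is an assumption based on the LLFF README, so double-check it there; the path is a placeholder:

python imgs2poses.py <path_to_your_scene_dir>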

How To Run?

Quick Start

Download data for the two example datasets: lego and fern

bash download_example_data.sh

To train a low-res lego NeRF:

python run_nerf.py --config configs/lego.txt

After training for 100k iterations (~4 hours on a single 2080 Ti), you can find the following video at logs/lego_test/lego_test_spiral_100000_rgb.mp4.

To train a low-res fern NeRF:

python run_nerf.py --config configs/fern.txt

After training for 200k iterations (~8 hours on a single 2080 Ti), you can find the following videos at logs/fern_test/fern_test_spiral_200000_rgb.mp4 and logs/fern_test/fern_test_spiral_200000_disp.mp4.

More Datasets

To play with other scenes presented in the paper, download the data here. Place the downloaded datasets according to the following directory structure:

├── configs
│   ├── ...
│
├── data
│   ├── nerf_llff_data
│   │   └── fern
│   │   └── flower  # downloaded llff dataset
│   │   └── horns   # downloaded llff dataset
│   │   └── ...
│   ├── nerf_synthetic
│   │   └── lego
│   │   └── ship    # downloaded synthetic dataset
│   │   └── ...

To train NeRF on different datasets:

python run_nerf.py --config configs/{DATASET}.txt

replace {DATASET} with trex | horns | flower | fortress | lego | etc.
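
Each configs/{DATASET}.txt is a configargparse file whose keys correspond to the command-line flags of run_nerf.py. As a rough illustrative sketch (the values below are assumptions, not copied from the repository; the shipped config files are authoritative), an LLFF-style config looks roughly like:

expname = fern_test
basedir = ./logs
datadir = ./data/nerf_llff_data/fern
dataset_type = llff
factor = 8
use_viewdirs = True
N_samples = 64
N_importance = 64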


To test NeRF trained on different datasets:

python run_nerf.py --config configs/{DATASET}.txt --render_only

replace {DATASET} with trex | horns | flower | fortress | lego | etc.

Pre-trained Models

You can download the pre-trained models here. Place the downloaded directories in ./logs so they can be tested later. See the following directory structure for an example:

├── logs
│   ├── fern_test
│   ├── flower_test  # downloaded logs
│   ├── trex_test    # downloaded logs
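
To sanity-check a downloaded model, you can open one of its checkpoint files directly with PyTorch. A minimal sketch, assuming the checkpoints follow an <iteration>.tar naming (verify the actual file name inside the downloaded directory):

import torch

# Load a downloaded NeRF checkpoint on the CPU and list what it stores.
# The path and file name below are assumptions; adjust them to your download.
ckpt = torch.load('./logs/fern_test/200000.tar', map_location='cpu')
print(list(ckpt.keys()))  # e.g. the saved network and optimizer state dicts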

Reproducibility

Tests that ensure all functions and the training loop match the results of the official implementation are contained in a separate branch, reproduce. You can check it out and run the tests:

git checkout reproduce

py.test

Method

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

Ben Mildenhall*1, Pratul P. Srinivasan*1, Matthew Tancik*1, Jonathan T. Barron2, Ravi Ramamoorthi3, Ren Ng1

1UC Berkeley, 2Google Research, 3UC San Diego

* denotes equal contribution

A neural radiance field is a simple fully connected network (weights are ~5MB) trained to reproduce input views of a single scene using a rendering loss. The network directly maps from spatial location and viewing direction (5D input) to color and opacity (4D output), acting as the "volume" so we can use volume rendering to differentiably render new views.

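To make the compositing step concrete, here is a minimal PyTorch sketch of how the densities and colors predicted along one ray can be alpha-composited into a pixel color. It is an illustration of the volume-rendering idea, not the repository's actual code, and all function and variable names are made up for the sketch:

import torch

def composite_ray(rgb, sigma, deltas):
    # rgb:    (N_samples, 3) colors predicted by the MLP at samples along one ray
    # sigma:  (N_samples,)   volume densities ("opacity") predicted by the MLP
    # deltas: (N_samples,)   distances between consecutive samples along the ray
    alpha = 1.0 - torch.exp(-torch.relu(sigma) * deltas)   # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)      # accumulated transmittance
    trans = torch.cat([torch.ones(1), trans[:-1]])         # the first sample is fully visible
    weights = alpha * trans                                # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)             # composited pixel color

Because every operation above is differentiable, a rendering loss computed on such composited colors can be backpropagated straight into the network weights.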
