This is an educational repository implementing NeRF for novel view synthesis from scratch. The primary dataset is NeRF Synthetic, which consists of synthetically rendered images of objects captured from various angles. The main goal of this repository is to demonstrate the technical concepts involved in training NeRF and rendering novel views.
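At the heart of NeRF's rendering step is alpha compositing of predicted colors and densities along each camera ray. A minimal NumPy sketch of that compositing (an illustrative example, not the repository's actual code) for a single ray:

```python
import numpy as np

def composite_ray(rgb, sigma, deltas):
    """Alpha-composite per-sample colors along one ray.

    rgb:    (N, 3) predicted colors per sample
    sigma:  (N,)   predicted densities per sample
    deltas: (N,)   distances between adjacent samples
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alpha = 1.0 - np.exp(-sigma * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans                      # (N,)
    return (weights[:, None] * rgb).sum(axis=0)  # (3,) final pixel color
```

In the actual model this runs batched over all rays in a PyTorch tensor, but the per-ray math is the same.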
1. Clone the repository:
   git clone https://github.com/CodeKnight314/NeRF-Pytorch-Implementation.git

2. Create and activate a virtual environment (optional but recommended):
   python -m venv nerf-env
   source nerf-env/bin/activate

3. Change into the project directory:
   cd NeRF-Pytorch-Implementation/

4. Install the required packages:
   pip install -r requirements.txt
Use the vol_render.py script to render novel views from trained NeRF weights.

Arguments:
--weight_path: Path to the trained NeRF weights.
--output_path: Directory to save rendered images.
--img_h: Height of the rendered image.
--img_w: Width of the rendered image.

Example:
python vol_render.py --weight_path ./dir/weights --output_path ./data/output --img_h HEIGHT --img_w WIDTH
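The --img_h and --img_w flags set the pixel grid of rays cast for each rendered view. A minimal sketch of pinhole-camera ray generation for such a grid (an assumed illustration; the helper name, focal parameter, and OpenGL-style convention are not taken from the script itself):

```python
import numpy as np

def generate_rays(img_h, img_w, focal, c2w):
    """Build one camera ray per pixel for an img_h x img_w render.

    focal: focal length in pixels
    c2w:   (4, 4) camera-to-world transform
    """
    j, i = np.meshgrid(np.arange(img_h), np.arange(img_w), indexing="ij")
    # Pixel coordinates -> camera-space directions (camera looks down -z)
    dirs = np.stack([(i - img_w * 0.5) / focal,
                     -(j - img_h * 0.5) / focal,
                     -np.ones_like(i, dtype=float)], axis=-1)
    # Rotate directions into world space; all rays share the camera origin
    rays_d = dirs @ c2w[:3, :3].T
    rays_o = np.broadcast_to(c2w[:3, 3], rays_d.shape)
    return rays_o, rays_d
```

Each of the img_h * img_w rays is then sampled and composited by the volume renderer to produce one pixel.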
Use the train.py script to train the NeRF model on the NeRF Synthetic dataset.

Arguments:
--root: Root directory for images (requires train and val split).
--lr: Initial learning rate of the NeRF model.
--epochs: Total number of training epochs.
--save: Output directory for saving model weights and images.
--num_steps: Number of samples per generated ray.
--size: Desired output size of the rendered image.

Example:
python NeRF-Pytorch-Implementation/train.py --root lego/ --lr 5e-4 --epochs 16 --save Outputs/ --num_steps 192 --size 128
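The --num_steps flag controls how many points are sampled along each ray. A common choice in NeRF training is stratified sampling, which jitters one sample per evenly spaced depth bin; a hedged sketch of that idea (assumed here, not necessarily the script's exact scheme):

```python
import numpy as np

def stratified_samples(near, far, num_steps, rng=None):
    """Draw num_steps depths per ray: one uniform sample in each depth bin."""
    if rng is None:
        rng = np.random.default_rng()
    edges = np.linspace(near, far, num_steps + 1)  # bin boundaries in [near, far]
    lower, upper = edges[:-1], edges[1:]
    # Jitter uniformly within each bin so training sees varied depths
    return lower + (upper - lower) * rng.random(num_steps)
```

The jitter keeps the sample positions varied across iterations, so the network is queried over the whole depth range rather than at a fixed grid.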