NeRF

Assessing NeRF's Efficacy in 3D Model Reconstruction: A Comparative Analysis with Blender

In 2020, NeRF revolutionized 3D reconstruction by optimizing a neural radiance field to synthesize photorealistic novel views of scenes with complex geometry and appearance. In a previous project, I explored 3D reconstruction using Multi-View Stereo (MVS) and Structure from Motion (SfM), which yielded satisfactory results; NeRF, however, achieved state-of-the-art results using nothing more than a fully connected (non-convolutional) deep network.
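That fully connected network maps a 3D position and viewing direction to color and volume density, and a key ingredient from the paper is the sinusoidal positional encoding applied to its inputs so the MLP can represent high-frequency detail. Here is a minimal NumPy sketch of that encoding (the function name and default frequency count of 10, the paper's choice for positions, are my own framing):

```python
import numpy as np

def positional_encoding(p, num_freqs=10):
    """NeRF-style encoding: gamma(p) = (sin(2^k * pi * p), cos(2^k * pi * p))
    for k = 0..num_freqs-1, applied elementwise to each coordinate."""
    p = np.asarray(p, dtype=np.float64)
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi   # 2^k * pi
    angles = p[..., None] * freqs                   # (..., dims, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*p.shape[:-1], -1)           # (..., dims * 2 * num_freqs)

x = np.array([0.5, -0.25, 1.0])
print(positional_encoding(x).shape)  # (60,) -- 3 coords * 2 * 10 freqs
```

The encoded vector, concatenated with (or replacing) the raw coordinates, is what actually enters the MLP; the paper uses 10 frequencies for position and 4 for viewing direction.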

Inspired by my experience with Blender through the "Intro to Blender" course at UofR's Studio X, I aimed to enhance my Blender models using NeRF. I set out to determine how closely I could replicate a 3D Blender model by rendering images from randomly sampled camera poses, recording each camera's intrinsic and extrinsic parameters, and feeding them into a NeRF model. This project focuses on constructing a vanilla NeRF model from scratch, based on the original paper, "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis" (2020).
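For context, the two camera quantities mentioned above are: the intrinsic matrix K (focal length and principal point, which NeRF needs to cast rays through pixels) and the extrinsic camera-to-world matrix (where the camera sits and where it points). The sketch below shows one standard way to build both with NumPy; the function names and the "look-at" parameterization are my own choices for illustration, not the project's actual pipeline:

```python
import numpy as np

def intrinsics_from_blender(focal_mm, sensor_width_mm, width_px, height_px):
    """Pinhole intrinsics K for a Blender-style camera (horizontal sensor fit).
    Assumes square pixels and a principal point at the image center."""
    f_px = focal_mm / sensor_width_mm * width_px  # focal length in pixels
    return np.array([[f_px, 0.0,  width_px / 2.0],
                     [0.0,  f_px, height_px / 2.0],
                     [0.0,  0.0,  1.0]])

def look_at_extrinsics(cam_pos, target, up=(0.0, 0.0, 1.0)):
    """4x4 camera-to-world matrix with the camera at cam_pos looking at target.
    Uses the OpenGL/Blender convention: the camera looks down its -Z axis."""
    cam_pos, target, up = (np.asarray(v, dtype=np.float64)
                           for v in (cam_pos, target, up))
    forward = target - cam_pos
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    c2w = np.eye(4)
    c2w[:3, 0], c2w[:3, 1], c2w[:3, 2] = right, true_up, -forward
    c2w[:3, 3] = cam_pos
    return c2w
```

Sampling `cam_pos` randomly on a sphere around the model and calling `look_at_extrinsics` for each render is one simple way to generate the pose set a NeRF needs.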

It's worth noting that more recent NeRF variants, such as Instant-NGP, use a multiresolution hash encoding of trainable features so that a much smaller MLP suffices. Additionally, for purely visual reconstruction, 3D Gaussian Splatting has been shown to outperform NeRF in both rendering quality and speed. Nonetheless, this project serves as a valuable exercise in understanding the underlying mechanics of the groundbreaking NeRF paper that transformed 3D reconstruction.
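To give a flavor of the Instant-NGP idea: instead of feeding encoded coordinates through a deep MLP, it looks up trainable features from hash tables indexed by a simple spatial hash of integer grid coordinates. A sketch of that hash function (the primes come from the Instant-NGP paper; the table size here is an illustrative choice):

```python
import numpy as np

def spatial_hash(coords, table_size=2**14):
    """Instant-NGP style spatial hash: XOR of each integer grid coordinate
    multiplied by a large prime, taken modulo the hash-table size.
    Multiplication deliberately wraps around in unsigned integer arithmetic."""
    primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)
    coords = np.asarray(coords, dtype=np.uint64)
    h = np.bitwise_xor.reduce(coords * primes, axis=-1)
    return int(h % np.uint64(table_size))

print(spatial_hash([3, 7, 11]))
```

The hashed index selects a small learned feature vector per grid level; a tiny MLP then decodes the concatenated features, which is what makes Instant-NGP orders of magnitude faster to train than the original NeRF.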