Learning Disentangled Features for NeRF-based Face Reconstruction
Peizhi Yan
Rabab Ward
Dan Wang
Qiang Tang
Shan Du


Abstract

The 3D-aware parametric face model HeadNeRF can render photo-realistic face images. However, it has two limitations: (1) it reconstructs a face by per-image fitting, which is slow and prone to overfitting; (2) it lacks explicit 3D geometry information, which makes it challenging to use a semantic facial-parts-based loss. This paper presents a 3D-aware face reconstruction learning framework tailored for HeadNeRF that addresses both limitations. To address the first, we train a face encoder network that directly predicts the disentangled features for face reconstruction. For the second, we introduce a lightweight semantic face segmentation network and a facial-parts-based loss function that improve reconstruction accuracy and quality. Our experiments show that the proposed method substantially reduces reconstruction time while improving reconstruction accuracy.
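To illustrate the idea behind a facial-parts-based loss, here is a minimal sketch: the photometric error is reweighted per semantic region (eyes, mouth, skin, etc.) predicted by a segmentation network. The function name, mask format, and weights below are hypothetical assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def parts_based_loss(rendered, target, part_masks, part_weights):
    """Parts-weighted photometric loss (illustrative sketch).

    rendered, target: (H, W, 3) float arrays in [0, 1]
    part_masks:       dict part_name -> (H, W) boolean mask from a
                      semantic face segmentation network
    part_weights:     dict part_name -> importance weight, letting
                      perceptually critical parts (e.g. eyes, mouth)
                      contribute more to the loss
    """
    # Per-pixel squared error summed over RGB channels -> (H, W)
    sq_err = np.sum((rendered - target) ** 2, axis=-1)
    loss = 0.0
    for name, mask in part_masks.items():
        if mask.any():
            # Mean error within this part, scaled by its weight
            loss += part_weights.get(name, 1.0) * sq_err[mask].mean()
    return loss
```

In practice such a loss would be written in an autodiff framework so gradients flow back to the encoder; the numpy version above only shows the weighting scheme.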


More Results

Varying Rendering Views

Varying Expressions

Varying Appearance


Flowchart



Paper and Supplementary Material

Available soon.