DehazeNeRF: Haze Removal using Neural Radiance Fields | 3DV 2024

Wei-Ting Chen, Wang Yifan, Sy-Yen Kuo, Gordon Wetzstein

Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields.

ABSTRACT

Neural radiance fields (NeRFs) have demonstrated state-of-the-art performance for 3D computer vision tasks, including novel view synthesis and 3D shape reconstruction. However, these methods fail in adverse weather conditions. To address this challenge, we introduce DehazeNeRF as a framework that robustly operates in hazy conditions. DehazeNeRF extends the volume rendering equation by adding physically realistic terms that model atmospheric scattering. By parameterizing these terms using suitable networks that match the physical properties, we introduce effective inductive biases, which, together with the proposed regularizations, allow DehazeNeRF to demonstrate successful multi-view haze removal, novel view synthesis, and 3D shape reconstruction where existing approaches fail.
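For background, the physically realistic scattering terms mentioned above are rooted in the classical Koschmieder haze formation model. The equation below is the standard 2D image-space form of that model, given here for intuition only; DehazeNeRF instead folds analogous terms into the volume rendering integral rather than applying them per image:

$I(\mathbf{x}) = J(\mathbf{x})\, t(\mathbf{x}) + A\,\bigl(1 - t(\mathbf{x})\bigr), \qquad t(\mathbf{x}) = e^{-\beta\, d(\mathbf{x})},$

where $I$ is the observed hazy image, $J$ the haze-free scene radiance, $A$ the atmospheric light, $\beta$ the scattering coefficient, and $d(\mathbf{x})$ the scene depth.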



FILES

  • Technical paper and supplement (link to arXiv)
  • Code (coming soon)

CITATION

W. Chen, W. Yifan, S. Kuo, G. Wetzstein, DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields, 3DV 2024

@inproceedings{Chen2024dehazenerf,
author = {W. Chen and W. Yifan and S. Kuo and G. Wetzstein},
title = {DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields},
booktitle = {3DV},
year = {2024},
}

OVERVIEW AND RESULTS

Overview of DehazeNeRF.
Given a set of hazy images, our method augments the standard NeRF pipeline (gray) with a haze module (yellow) that explicitly models the scattering phenomenon using the atmospheric light and a scattering coefficient. During training, we render the hazy reconstruction as a composition of surface and haze (sketched below) and compare it against the input hazy images to jointly optimize the learnable parameters (in green). During inference, we use only the surface module (gray) to render clear views.
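A minimal NumPy sketch of this surface-plus-haze composition along one ray is given below. This is our illustration under the standard emission-absorption volume rendering model, not the released implementation; all names (render_hazy_ray, sigma_s, airlight, ...) are assumptions.

import numpy as np

def render_hazy_ray(sigma, color, sigma_s, airlight, deltas):
    """Composite surface radiance and haze along one ray with N samples.

    sigma    : (N,)  clear-scene (surface) density
    color    : (N,3) surface radiance at each sample
    sigma_s  : (N,)  haze scattering coefficient at each sample
    airlight : (3,)  global atmospheric light A
    deltas   : (N,)  distances between adjacent samples
    """
    # Total extinction at each sample combines surface density and haze.
    alpha = 1.0 - np.exp(-(sigma + sigma_s) * deltas)
    # Transmittance up to each sample (exclusive prefix product).
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = trans * alpha
    # Each sample emits a mix of surface color and airlight, in proportion
    # to the two extinction terms.
    frac = sigma_s / np.maximum(sigma + sigma_s, 1e-10)
    emission = (1.0 - frac)[:, None] * color + frac[:, None] * airlight
    return (weights[:, None] * emission).sum(axis=0)  # hazy RGB

Setting sigma_s to zero recovers standard NeRF alpha compositing, which corresponds to the clear-view rendering performed by the surface module at inference.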
Our method successfully removes heterogeneous haze from the synthesized views, achieving higher appearance fidelity than baseline methods. The reconstructed geometry is more accurate, less noisy, and more detailed.

Our method also demonstrates superior performance on captured data, recovering haze-free scenes with more image detail and colors closer to the ground truth.

RELATED PROJECTS

You may also be interested in related projects on neural scene representations, such as:

  • Chan et al. EG3D. CVPR 2022 (link)
  • Lindell et al. BACON: Band-limited Coordinate Networks. CVPR 2022 (link)
  • Sitzmann et al. Implicit Neural Representations with Periodic Activation Functions. NeurIPS 2020 (link)
  • Mildenhall et al. NeRF. ECCV 2020 (link)
  • Sitzmann et al. Scene Representation Networks. NeurIPS 2019 (link)