Non-line-of-sight Imaging with Partial Occluders and Surface Normals | TOG 2019

Felix Heide, Matthew O'Toole, Kai Zang, David B. Lindell, Steven Diamond, Gordon Wetzstein

A new approach to non-line-of-sight imaging that estimates partial occlusions in the hidden volume along with surface normals.

ABSTRACT

Imaging objects obscured by occluders is a significant challenge for many applications. A camera that could “see around corners” could improve the navigation and mapping capabilities of autonomous vehicles or make search-and-rescue missions more effective. Time-resolved single-photon imaging systems have recently been shown to record optical information from which the shape and reflectance of objects hidden from the camera's line of sight can be estimated. However, existing non-line-of-sight (NLOS) reconstruction algorithms model only a restricted set of light transport effects in the hidden scene. We introduce a factored NLOS light transport representation that accounts for partial occlusions and surface normals. Based on this model, we develop a factorization approach for inverse time-resolved light transport and demonstrate high-fidelity NLOS reconstructions of challenging scenes, both in simulation and with an experimental NLOS imaging system.
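The time-resolved forward model underlying this class of methods can be illustrated with a toy sketch. The code below is a hypothetical, simplified confocal forward model, not the paper's implementation: each hidden-scene point deposits intensity into the time bin matching its round-trip travel time, attenuated by radiometric falloff and a Lambertian foreshortening term from the surface normal. The paper additionally factors in partial occlusion between scene points, which this sketch omits; all names, units, and parameters are illustrative assumptions.

```python
import numpy as np

C = 1.0       # speed of light in scene units per time bin (assumption)
N_BINS = 64   # number of time bins in each transient histogram (assumption)

def transient_forward(wall_pts, scene_pts, albedo, normals):
    """Toy confocal transient forward model (illustrative only).

    For each wall (scan) point, every hidden-scene point contributes to the
    time bin matching its round-trip travel time, weighted by 1/r^4 falloff
    and a Lambertian cosine term from the scene point's surface normal.
    """
    transients = np.zeros((len(wall_pts), N_BINS))
    for i, w in enumerate(wall_pts):
        d = scene_pts - w                             # wall point -> scene point
        r = np.linalg.norm(d, axis=1)                 # one-way distance
        t_bin = np.round(2.0 * r / C).astype(int)     # confocal round-trip bin
        # Lambertian foreshortening: cosine between the surface normal and
        # the direction from the scene point back toward the wall.
        cos_term = np.clip(np.sum(-d * normals, axis=1) / r, 0.0, None)
        contrib = albedo * cos_term / np.maximum(r, 1e-6) ** 4
        valid = t_bin < N_BINS
        np.add.at(transients[i], t_bin[valid], contrib[valid])
    return transients
```

In this linear view, reconstruction amounts to inverting the transport operator for per-point albedo from measured transients; the paper's factored representation extends the unknowns to include occlusion and surface-normal terms.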

FILES

  • technical paper (link)
  • technical paper supplement (link)
  • SIGGRAPH 2019 presentation slides (below)


Datasets

Download the captured data here: link

It took a lot of effort to build and calibrate this hardware setup and to capture these data. Feel free to use our datasets in your own projects, but please acknowledge our work by citing the following papers:

  • Matthew O’Toole, Felix Heide, David B. Lindell, Kai Zang, Steven Diamond, and Gordon Wetzstein. 2017. Reconstructing transient images from single-photon sensors. In Proc. CVPR. (link)
  • Matthew O’Toole, David B. Lindell, and Gordon Wetzstein. 2018. Confocal non-line-of-sight imaging based on the light-cone transform. Nature 555, 7696, 338. (link)
  • Felix Heide, Matthew O’Toole, Kai Zang, David B. Lindell, Steven Diamond, and Gordon Wetzstein. 2018. Non-line-of-sight imaging with partial occluders and surface normals. ACM Trans. Graph. (link)
  • David B. Lindell, Gordon Wetzstein, and Matthew O’Toole. 2019. Wave-based non-line-of-sight imaging using fast f-k migration. ACM Trans. Graph. (SIGGRAPH) 38, 4, 116. (link)


CITATION

Felix Heide, Matthew O’Toole, Kai Zang, David B. Lindell, Steven Diamond, Gordon Wetzstein. 2019. Non-line-of-sight Imaging with Partial Occluders and Surface Normals. ACM Trans. Graph. (presented at SIGGRAPH)

BibTeX

@article{Heide:2019:OcclusionNLOS,
  author  = {Felix Heide and Matthew O'Toole and Kai Zang and David B. Lindell and Steven Diamond and Gordon Wetzstein},
  title   = {Non-line-of-sight Imaging with Partial Occluders and Surface Normals},
  journal = {ACM Trans. Graph.},
  year    = {2019}
}

Related Projects

You may also be interested in related projects in which we have developed non-line-of-sight imaging systems:

  • Metzler et al. 2021. Keyhole Imaging. IEEE Trans. Computational Imaging (link)
  • Lindell et al. 2020. Confocal Diffuse Tomography. Nature Communications (link)
  • Young et al. 2020. Non-line-of-sight Surface Reconstruction using the Directional Light-cone Transform. CVPR (link)
  • Lindell et al. 2019. Wave-based Non-line-of-sight Imaging using Fast f-k Migration. ACM SIGGRAPH (link)
  • Heide et al. 2019. Non-line-of-sight Imaging with Partial Occluders and Surface Normals. ACM Transactions on Graphics (presented at SIGGRAPH) (link)
  • Lindell et al. 2019. Acoustic Non-line-of-sight Imaging. CVPR (link)
  • O’Toole et al. 2018. Confocal Non-line-of-sight Imaging based on the Light-cone Transform. Nature (link)

and direct line-of-sight or transient imaging systems:

  • Bergman et al. 2020. Deep Adaptive LiDAR: End-to-end Optimization of Sampling and Depth Completion at Low Sampling Rates. ICCP (link)
  • Nishimura et al. 2020. 3D Imaging with an RGB camera and a single SPAD. ECCV (link)
  • Heide et al. 2019. Sub-picosecond photon-efficient 3D imaging using single-photon sensors. Scientific Reports (link)
  • Lindell et al. 2018. Single-Photon 3D Imaging with Deep Sensor Fusions. ACM SIGGRAPH (link)
  • O’Toole et al. 2017. Reconstructing Transient Images from Single-Photon Sensors. CVPR (link)


Acknowledgments

The authors thank James Harris for fruitful discussions. D.B.L. is supported by a Stanford Graduate Fellowship in Science and Engineering. G.W. is supported by a Terman Faculty Fellowship and a Sloan Fellowship. Additional funding was generously provided by the National Science Foundation (CAREER Award IIS 1553333), the DARPA REVEAL program, the ARO (Grant W911NF-19-1-0120), the Center for Automotive Research at Stanford (CARS), and by the KAUST Office of Sponsored Research through the Visual Computing Center CCF grant.