Learned Large Field-of-View Imaging With Thin-Plate Optics | SIGGRAPH Asia 2019

Yifan (Evan) Peng*, Qilin Sun*, Xiong Dun*, Gordon Wetzstein, Wolfgang Heidrich, Felix Heide

We propose a lens design and learned reconstruction architecture that provide an order of magnitude increase in field of view for computational imaging using only a single thin-plate lens element.

ABSTRACT

Typical camera optics consist of a system of individual elements that are designed to compensate for the aberrations of a single lens. Recent computational cameras shift some of this correction task from the optics to post-capture processing, reducing the imaging optics to only a few optical elements. However, these systems only achieve reasonable image quality by limiting the field of view (FOV) to a few degrees — effectively ignoring severe off-axis aberrations with blur sizes of several hundred pixels.

In this paper, we propose a lens design and learned reconstruction architecture that lift this limitation and provide an order of magnitude increase in field of view using only a single thin-plate lens element. Specifically, we design a lens that produces spatially shift-invariant point spread functions over the full FOV, tailored to the proposed reconstruction architecture. We achieve this with a mixture PSF, consisting of a peak and a low-pass component, which provides residual contrast instead of a small spot size as in traditional lens designs. To perform the reconstruction, we train a deep network on data captured with a display lab setup, eliminating the need for manual acquisition of training data in the field. We assess the proposed method in simulation and experimentally with a prototype camera system. We compare our system against existing single-element designs, including an aspherical lens and a pinhole, and against a complex multi-element lens, validating high-quality large field-of-view (53-deg) imaging performance using only a single thin-plate element.
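To make the mixture-PSF image formation concrete, the following sketch (our illustration, not the authors' code) simulates a measurement through a shift-invariant PSF composed of a sharp central peak plus a broad low-pass tail; the peak weight, kernel size, tail width, and noise level are assumed values, not parameters from the paper.

# Minimal sketch: blur a latent sharp image with an assumed "mixture" PSF
# (sharp peak + broad low-pass tail) and add sensor noise. A learned network,
# trained on display-captured pairs in the paper, would then invert this map.
import numpy as np
from scipy.signal import fftconvolve

def mixture_psf(size=63, peak_weight=0.3, tail_sigma=12.0):
    """Delta-like peak plus a broad Gaussian tail, normalized to unit energy."""
    yy, xx = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    tail = np.exp(-(xx**2 + yy**2) / (2.0 * tail_sigma**2))
    tail /= tail.sum()
    psf = (1.0 - peak_weight) * tail
    psf[size // 2, size // 2] += peak_weight      # high-intensity central peak
    return psf / psf.sum()

def simulate_measurement(sharp_image, psf, read_noise=0.01):
    """Convolve a latent sharp image with the mixture PSF and add noise."""
    blurred = fftconvolve(sharp_image, psf, mode="same")
    return np.clip(blurred + read_noise * np.random.randn(*blurred.shape), 0, 1)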


CITATION

Y. Peng, Q. Sun, X. Dun, G. Wetzstein, W. Heidrich, F. Heide. “Learned Large Field-of-View Imaging With Thin-Plate Optics,” ACM SIGGRAPH Asia (Transactions on Graphics 38(6)), 2019.

BibTeX

@article{Peng:2019:LearnedLargeFovImaging,
author = {Y. Peng and Q. Sun and X. Dun and G. Wetzstein and W. Heidrich and F. Heide},
title = {{Learned Large Field-of-View Imaging With Thin-Plate Optics}},
journal = {ACM Trans. Graph. (SIGGRAPH Asia)},
volume = {38},
number = {6},
year = {2019},
}

* This project is a collaboration among Stanford University, King Abdullah University of Science and Technology, and Princeton University.

SPOTLIGHTS

Overview: Photograph of the prototype thin-plate lens and example capture results. We design a compact-form-factor lens using one (or two) optimized refractive surfaces on a thin substrate.

Optics: Schematic of the aperture partitioning approach. The radial position of a virtual aperture (specified by its offset from the optical axis and visualized in different colors) is determined by the position of the stop (on the left), which in turn depends on the incident ray direction. The synthetic spot distributions for three incident directions are shown as insets, each exhibiting a sharp peak that matches the PSF design goal.
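As a rough illustration of this geometry (not taken from the paper): the chief ray for field angle theta, passing through a stop placed a distance d in front of the lens plane, crosses that plane at a radial offset of roughly d * tan(theta), which sets the position of the corresponding virtual aperture. The stop distance below is an assumed value; the field angles follow from the 53-deg full FOV quoted above.

# Illustrative sketch of virtual-aperture offset vs. incident angle.
import numpy as np

def virtual_aperture_offset(theta_deg, d_stop_mm=10.0):
    """Radial offset (mm) of the virtual aperture center on the lens plane."""
    return d_stop_mm * np.tan(np.radians(theta_deg))

for theta in (0.0, 13.0, 26.5):   # up to the half FOV (~26.5 deg)
    print(f"{theta:5.1f} deg -> {virtual_aperture_offset(theta):5.2f} mm offset")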

Result: Our optimization results in a dual-mixture point spread function that is nearly invariant to the incident angle, exhibiting a high-intensity peak and a broad, nearly constant tail. We show the sensor measurement and image reconstruction under natural lighting conditions, demonstrating that the proposed deep image recovery effectively removes the aberrations and haze introduced by the thin-plate optics.
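Because the mixture PSF preserves residual contrast through its sharp peak, even a classical deconvolution recovers some detail; the paper's deep network, trained on display-captured pairs, goes well beyond such a baseline. The sketch below is a naive Wiener deconvolution under an assumed constant SNR, shown only to illustrate why the peak component matters; it is not the paper's reconstruction.

# Naive Wiener-deconvolution baseline with a known, centered, odd-sized PSF.
import numpy as np

def wiener_deconvolve(measurement, psf, snr=100.0):
    # Embed the PSF in a full-size kernel with its center at the array origin,
    # so the FFT-domain division introduces no extra phase shift.
    kernel = np.zeros_like(measurement)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.rfft2(kernel)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter, assumed SNR
    est = np.fft.irfft2(np.fft.rfft2(measurement) * W, s=measurement.shape)
    return np.clip(est, 0.0, 1.0)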

Related Projects

You may also be interested in related projects, where we apply the idea of Deep Optics, i.e. end-to-end optimization of optics and image processing, to other applications, such as image classification, extended depth-of-field imaging, super-resolution imaging, and optical computing.

  • Wetzstein et al. 2020. AI with Optics & Photonics. Nature (review paper, link)
  • Martel et al. 2020. Neural Sensors. ICCP & TPAMI 2020 (link)
  • Dun et al. 2020. Learned Diffractive Achromat. Optica 2020 (link)
  • Metzler et al. 2020. Deep Optics for HDR Imaging. CVPR 2020 (link)
  • Chang et al. 2019. Deep Optics for Depth Estimation and Object Detection. ICCV 2019 (link)
  • Peng et al. 2019. Large Field-of-view Imaging with Learned DOEs. SIGGRAPH Asia 2019 (link)
  • Chang et al. 2018. Hybrid Optical-Electronic Convolutional Neural Networks with Optimized Diffractive Optics for Image Classification. Scientific Reports (link)
  • Sitzmann et al. 2018. End-to-end Optimization of Optics and Image Processing for Achromatic Extended Depth-of-field and Super-resolution Imaging. ACM SIGGRAPH 2018 (link)