Learning Spatially Varying Pixel Exposures for Motion Deblurring | ICCP 2022

Cindy M. Nguyen, Julien N.P. Martel, Gordon Wetzstein

Per-pixel exposures, implemented on a programmable sensor, that strike the best tradeoff between denoising short exposures and deblurring long ones.

ABSTRACT

Computationally removing the motion blur introduced by camera shake or object motion in a captured image remains a challenging task in computational photography. Deblurring methods are often limited by the fixed global exposure time of the image capture process: the post-processing algorithm must either deblur a longer exposure that contains relatively little noise or denoise a short exposure that intentionally removes the opportunity for blur at the cost of increased noise. We present a novel approach that leverages spatially varying pixel exposures for motion deblurring using next-generation focal-plane sensor–processors, along with an end-to-end design of these exposures and a machine learning–based motion-deblurring framework. We demonstrate in simulation and with a physical prototype that learned spatially varying pixel exposures (L-SVPE) can successfully deblur scenes while recovering high-frequency detail. Our work illustrates the promising role that focal-plane sensor–processors can play in the future of computational imaging.
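The tradeoff described in the abstract can be illustrated with a short, self-contained simulation. The following is a toy sketch, not the paper's code: it integrates a stack of sharp frames into either a long exposure (low noise, heavy motion blur) or a short exposure (sharp but noisy); the frame count and noise parameters are arbitrary placeholders.

    # Toy illustration of the exposure tradeoff (not the paper's code).
    # `frames` stands in for a sharp video clip of shape (T, H, W) with values in [0, 1];
    # the noise parameters below are arbitrary placeholders.
    import numpy as np

    def simulate_exposure(frames, num_frames, read_noise_std=0.01, photons_per_frame=500):
        """Integrate the first `num_frames` frames and add shot + read noise."""
        clean = frames[:num_frames].mean(axis=0)     # more frames -> more motion blur
        scale = photons_per_frame * num_frames       # more light  -> better SNR
        shot = np.random.poisson(clean * scale) / scale
        return shot + np.random.normal(0.0, read_noise_std, clean.shape)

    frames = np.random.rand(9, 64, 64)          # stand-in for a real video clip
    short_exp = simulate_exposure(frames, 1)    # sharp but noisy
    long_exp = simulate_exposure(frames, 9)     # clean but motion blurred

Longer integration collects more photons, so shot noise shrinks relative to the signal, while averaging over more frames increases blur for any moving content; learned per-pixel exposures aim to pick the better regime at each pixel.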

FILES

  • Technical paper (arXiv)
  • Project Page (link)
  • Code (coming soon)


CITATION

C.M. Nguyen, J.N.P. Martel, G. Wetzstein, Learning Spatially Varying Pixel Exposures for Motion Deblurring, IEEE International Conference on Computational Photography (ICCP), 2022.

@inproceedings{nguyen2022learning,
  author    = {Nguyen, Cindy M. and Martel, Julien N. P. and Wetzstein, Gordon},
  title     = {Learning Spatially Varying Pixel Exposures for Motion Deblurring},
  booktitle = {IEEE International Conference on Computational Photography (ICCP)},
  year      = {2022},
}

L-SVPE - Pipeline
Pipeline Architecture. Given a motion-blurred video captured at 30 frames per second, our method simulates a single-snapshot capture of the scene using a spatially varying coded exposure. The coded sensor image is then interpolated to produce a multi-channel image in which each channel corresponds to a single exposure length. This multi-channel image is reconstructed by a decoding network. During training, a loss is computed between the reconstructed image and the ground-truth (sharp, noise-free) frame, and the error is backpropagated to both the exposure lengths and the weights of the decoding network to improve the overall reconstruction.
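As a rough illustration of this training loop, the PyTorch sketch below forms a coded capture from a video with learnable per-pixel exposure lengths, decodes it with a small placeholder network, and backpropagates an L1 loss to both the exposures and the decoder weights. The soft sigmoid shutter, the channel repetition standing in for the interpolation step, and the two-layer decoder are assumptions made for brevity, not the paper's implementation.

    # Minimal sketch of the training loop described in the caption above (assumptions noted).
    import torch
    import torch.nn as nn

    T, H, W = 9, 64, 64                                  # frames per snapshot, sensor size
    exposure = nn.Parameter(torch.full((H, W), 0.5))     # learned per-pixel exposure length
    decoder = nn.Sequential(                             # placeholder for the decoding network
        nn.Conv2d(T, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    opt = torch.optim.Adam([exposure, *decoder.parameters()], lr=1e-4)

    def coded_capture(video, exposure):
        """Differentiably integrate each pixel over its own (soft) exposure length."""
        t = torch.linspace(0, 1, video.shape[1]).view(1, -1, 1, 1)
        mask = torch.sigmoid((exposure - t) * 50.0)      # soft "shutter still open" weights
        return (video * mask).sum(1, keepdim=True) / mask.sum(1, keepdim=True)

    video = torch.rand(1, T, H, W)                       # sharp frames of a moving scene
    sharp_gt = video[:, T // 2:T // 2 + 1]               # ground-truth target frame
    coded = coded_capture(video, exposure)
    multi = coded.repeat(1, T, 1, 1)                     # crude stand-in for interpolation
    loss = nn.functional.l1_loss(decoder(multi), sharp_gt)
    opt.zero_grad(); loss.backward(); opt.step()

In the actual pipeline, the interpolation step reconstructs one channel per exposure length from the coded mosaic rather than simply repeating the coded image, and the decoder is a full reconstruction network.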


L-SVPE - Results
Comparisons against varying exposures. We compare our method, L-SVPE, against top-performing baselines, which include global exposures (such as a Short exposure) and other spatially varying pixel exposures. Burst Average is the average of all frames, without processing by a reconstruction network. The Short baseline is a short exposure processed by a reconstruction network trained to scale and denoise. The Quad Bilinear and Scatter baselines use a quad exposure arrangement (Long, Medium, Medium, Short) with different interpolation methods. Full is our theoretical upper bound, which uses information from the Long, Medium, and Short global exposures for reconstruction. L-SVPE successfully recovers the following high-frequency details (from top row to bottom row): the seams of the jacket, the sharp edges of the license plate number, the lettering on the block free of color artifacts, and the fencing in front of the white car.
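For reference, the snippet below sketches how a quad (Long, Medium, Medium, Short) exposure arrangement could be tiled over a sensor and how one exposure channel could be bilinearly upsampled, roughly in the spirit of the Quad Bilinear baseline; the relative exposure values and names are illustrative assumptions, not the paper's code.

    # Illustrative quad exposure tiling and bilinear channel upsampling (assumed values).
    import torch
    import torch.nn.functional as F

    H, W = 64, 64
    coded = torch.rand(H, W)                       # stand-in for a coded sensor image
    tile = torch.tensor([[1.0, 0.5],               # Long   | Medium
                         [0.5, 0.25]])             # Medium | Short  (relative lengths)
    exposure_map = tile.repeat(H // 2, W // 2)     # 2x2 pattern over the sensor (for clarity)

    def upsample(subsampled):
        """Bilinearly interpolate one exposure channel back to full resolution."""
        x = subsampled[None, None].float()
        return F.interpolate(x, size=(H, W), mode="bilinear", align_corners=False)[0, 0]

    long_ch = upsample(coded[0::2, 0::2])          # pixels captured with the Long exposure
    short_ch = upsample(coded[1::2, 1::2])         # pixels captured with the Short exposure
    # the two Medium samples per tile would be handled similarly before upsampling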

Related Projects

You may also be interested in related projects focusing on programmable sensors and snapshot HDR imaging:

  • So et al., MantissaCam, arXiv 2022 (link)
  • Martel et al., Neural Sensors, IEEE Trans. PAMI / ICCP 2020 (link)
  • Wetzstein et al., Inference in artificial intelligence with deep optics and photonics, Nature 2020 (link)


Acknowledgements

This project was supported in part by NSF Award 1839974, an NSF Graduate Research Fellowship (DGE-1656518), a PECASE from the ARL, and SK Hynix. We thank Piotr Dudek for providing the SCAMP-5 sensor and for related discussions. We also thank Mark Nishimura, Alexander Bergman, and Nitish Padmanaban for thoughtful discussions.