Factored Occlusion AR Display | IEEE VR 2020

Brooke Krajancich*, Nitish Padmanaban*, Gordon Wetzstein

A novel, computational approach to obtaining hard-edge mutual occlusion in optical see-through augmented reality, using only a single spatial light modulator.

ABSTRACT

Occlusion is a powerful visual cue that is crucial for depth perception and realism in optical see-through augmented reality (OST-AR). However, existing OST-AR systems additively overlay physical and digital content with beam combiners – an approach that does not easily support mutual occlusion, resulting in virtual objects that appear semi-transparent and unrealistic. In this work, we propose a new type of occlusion-capable OST-AR system. Rather than additively combining the real and virtual worlds, we employ a single digital micromirror device (DMD) to merge the respective light paths in a multiplicative manner. This unique approach allows us to simultaneously block light incident from the physical scene on a pixel-by-pixel basis while also modulating the light emitted by a light-emitting diode (LED) to display digital content. Our technique builds on mixed binary/continuous factorization algorithms to optimize time-multiplexed binary DMD patterns and their corresponding LED colors to approximate a target augmented reality (AR) scene. In simulations and with a prototype benchtop display, we demonstrate hard-edge occlusions, plausible shadows, and also gaze-contingent optimization of this novel display mode, which only requires a single spatial light modulator.

CITATION

Krajancich, B., Padmanaban, N., & Wetzstein, G. (2020). Factored Occlusion: Single Spatial Light Modulator Occlusion-Capable Optical See-Through Augmented Reality Display. IEEE Transactions on Visualization and Computer Graphics.
doi: 10.1109/TVCG.2020.2973443

BibTeX

@article{Krajancich:2020:Occlusion,
author={Krajancich, Brooke and Padmanaban, Nitish and Wetzstein, Gordon},
journal={IEEE Transactions on Visualization and Computer Graphics},
title={Factored Occlusion: Single Spatial Light Modulator Occlusion-Capable Optical See-Through Augmented Reality Display},
year={2020},
volume={PP},
number={99},
pages={1--1}
}

Demonstration of factored occlusion for rendering hard-edge occlusion. (left) Unable to block light from the scene, a conventional beamsplitter configuration produces largely transparent renderings. (right) In comparison, the composition captured with our approach shows significant improvements to both light blocking and color fidelity.


Illustration of the principle of a single-SLM occlusion-capable AR display. Each pixel of the DMD can be flipped to one of two states. (a) In one state, the mirror reflects light of the physical scene, R, toward the user. (b) In the other state, the mirror reflects the light of an LED, L, toward the user. By quickly flipping the state of the mirror during the integration time of the user’s photoreceptors, this display is capable of optimizing a set of DMD pixel states and LED color values to display a target scene, O, with mutual occlusions between physical and digital content.
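The image formation model described above can be sketched in a few lines of code. This is an illustrative reimplementation, not the authors' code: over T subframes, each DMD mirror either passes scene light R (state 0) or reflects the global LED color L (state 1), and the eye's temporal integration averages the subframes into the perceived image O.

```python
import numpy as np

def perceived_image(R, masks, led_colors):
    """R: (H, W, 3) scene radiance; masks: (T, H, W) binary DMD states;
    led_colors: (T, 3) global LED color per subframe.
    Returns the temporally averaged image seen by the viewer."""
    T = masks.shape[0]
    O = np.zeros_like(R)
    for M, L in zip(masks, led_colors):
        M = M[..., None]              # broadcast mirror state over color channels
        O += M * L + (1.0 - M) * R    # LED color where mirror is on, scene light elsewhere
    return O / T                      # temporal average over the T subframes

# Toy example: a single subframe whose mirrors fully occlude the left
# half of a gray scene and replace it with a red LED color.
R = np.full((4, 4, 3), 0.5)
mask = np.zeros((1, 4, 4))
mask[0, :, :2] = 1.0
L = np.array([[1.0, 0.0, 0.0]])
O = perceived_image(R, mask, L)
```

Note that hard-edge occlusion falls out of the model directly: wherever the mirror is flipped toward the LED, no scene light reaches the eye for that subframe.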


Multiple subframes (binary DMD patterns and global LED illumination states) can be combined using time-multiplexing to form RGB images within a scene. The Rhino image is formed from 48 subframes.
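The mixed binary/continuous factorization can be sketched as an alternating optimization (function and variable names here are ours, not the paper's): with everything else fixed, each pass exactly minimizes the reconstruction error first over one subframe's binary mirror states, then over that subframe's continuous LED color, so the error is non-increasing from step to step.

```python
import numpy as np

def factorize(O, R, T=8, iters=25, seed=0):
    """Find T binary DMD masks and LED colors whose temporal average
    approximates the target AR scene O in front of scene radiance R."""
    rng = np.random.default_rng(seed)
    H, W, _ = O.shape
    masks = rng.integers(0, 2, size=(T, H, W)).astype(float)
    leds = rng.random((T, 3))

    def contrib(t):  # subframe t's contribution to the temporal sum
        M = masks[t][..., None]
        return M * leds[t] + (1.0 - M) * R

    total = sum(contrib(t) for t in range(T))
    for _ in range(iters):
        for t in range(T):
            rest = total - contrib(t)
            want = T * O - rest  # contribution that would zero the error
            # Binary update: per pixel, pick whichever of LED color or
            # scene light lands closer to the wanted contribution.
            err_led = np.sum((leds[t] - want) ** 2, axis=-1)
            err_scene = np.sum((R - want) ** 2, axis=-1)
            masks[t] = (err_led < err_scene).astype(float)
            # Continuous update: least-squares LED color over the pixels
            # whose mirrors point at the LED, clipped to the valid range.
            on = masks[t] > 0
            if on.any():
                leds[t] = np.clip(want[on].mean(axis=0), 0.0, 1.0)
            total = rest + contrib(t)
    return masks, leds, total / T  # masks, colors, reconstructed image

# Toy example: occlude the left half of a gray scene with opaque red.
R = np.full((8, 8, 3), 0.5)
O = R.copy()
O[:, :4] = [1.0, 0.0, 0.0]
masks, leds, recon = factorize(O, R, T=4)
```

In practice the number of subframes trades reconstruction quality against DMD bandwidth; the Rhino result above uses 48 of them.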


Gaze-contingent optimization, enabled by eye tracking, can locally improve the rendering of the fixated object (indicated by the dashed white circle) by assigning a lower penalty to rendering errors in the viewer's periphery.
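A simple way to realize such a penalty is a per-pixel weight map centered on the gaze point; the Gaussian falloff and the sigma value below are illustrative choices, not the paper's exact weighting.

```python
import numpy as np

def foveal_weights(H, W, gaze_yx, sigma=0.15):
    """Per-pixel penalty map: highest at the gaze point and falling off
    with eccentricity, so peripheral errors are penalized less."""
    ys, xs = np.mgrid[0:H, 0:W]
    d2 = ((ys - gaze_yx[0]) / H) ** 2 + ((xs - gaze_yx[1]) / W) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

W_map = foveal_weights(64, 64, gaze_yx=(32, 32))
# In the factorization, the per-pixel squared error is multiplied by
# W_map before choosing mirror states and solving for LED colors, which
# concentrates the limited subframe budget on the fixated region.
```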

Related Projects

You may also be interested in related projects from our group on next-generation near-eye displays and wearable technology:

  • Y. Peng et al. “Neural Holography with Camera-in-the-loop Training”, ACM SIGGRAPH 2020 (link)
  • R. Konrad et al. “Gaze-contingent Ocular Parallax Rendering for Virtual Reality”, ACM Transactions on Graphics 2020 (link)
  • B. Krajancich et al. “Optimizing Depth Perception in Virtual and Augmented Reality through Gaze-contingent Stereo Rendering”, ACM SIGGRAPH Asia 2020 (link)
  • B. Krajancich et al. “Factored Occlusion: Single Spatial Light Modulator Occlusion-capable Optical See-through Augmented Reality Display”, IEEE TVCG, 2020 (link)
  • N. Padmanaban et al. “Autofocals: Evaluating Gaze-Contingent Eyeglasses for Presbyopes”, Science Advances 2019 (link)
  • K. Rathinavel et al. “Varifocal Occlusion-Capable Optical See-through Augmented Reality Display based on Focus-tunable Optics”, IEEE TVCG 2019 (link)
  • N. Padmanaban et al. “Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays”, PNAS 2017 (link)
  • R. Konrad et al. “Accommodation-invariant Computational Near-eye Displays”, ACM SIGGRAPH 2017 (link)
  • R. Konrad et al. “Novel Optical Configurations for Virtual Reality: Evaluating User Preference and Performance with Focus-tunable and Monovision Near-eye Displays”, ACM SIGCHI 2016 (link)
  • F.C. Huang et al. “The Light Field Stereoscope: Immersive Computer Graphics via Factored Near-Eye Light Field Display with Focus Cues”, ACM SIGGRAPH 2015 (link)

 

ACKNOWLEDGEMENTS

B.K. was supported by a Stanford Knight-Hennessy Fellowship. N.P. was supported by the National Science Foundation (NSF) Graduate Research Fellowship Program. G.W. was supported by an Okawa Research Grant and a Sloan Fellowship. Other funding for the project was provided by NSF (award numbers 1553333 and 1839974) and Intel. The authors would also like to thank Dr. Julien Martel for assistance with constructing the benchtop prototype.