Accommodation-invariant Near-eye Displays | SIGGRAPH 2017

Robert Konrad, Nitish Padmanaban, Keenan Molner, Emily A. Cooper, Gordon Wetzstein

A new VR/AR display technology that produces visual stimuli that are invariant to the accommodation state of the eye. Accommodation is then driven by stereoscopic cues rather than retinal blur, reducing the mismatch between the vergence and accommodation states of the eyes.

Accommodation-invariant Computational Near-eye Displays

ABSTRACT

Although emerging virtual and augmented reality (VR/AR) systems can produce highly immersive experiences, they can also cause visual discomfort, eyestrain, and nausea. One of the sources of these symptoms is a mismatch between vergence and focus cues. In current VR/AR near-eye displays, a stereoscopic image pair drives the vergence state of the human visual system to arbitrary distances, but the accommodation, or focus, state of the eyes is optically driven towards a fixed distance. In this work, we introduce a new display technology, dubbed accommodation-invariant (AI) near-eye displays, to improve the consistency of depth cues in near-eye displays. Rather than producing correct focus cues, AI displays are optically engineered to produce visual stimuli that are invariant to the accommodation state of the eye. The accommodation system can then be driven by stereoscopic cues, and the mismatch between vergence and accommodation state of the eyes is significantly reduced. We validate the principle of operation of AI displays using a prototype display that allows for the accommodation state of users to be measured while they view visual stimuli using multiple different display modes.

FILES

  • technical paper (pdf)
  • technical paper supplement (pdf)
  • presentation slides (slideshare)


CITATION

R. Konrad, N. Padmanaban, K. Molner, E. A. Cooper, G. Wetzstein. “Accommodation-invariant Computational Near-eye Displays”, ACM SIGGRAPH (Transactions on Graphics 36, 4), 2017.

BibTeX

@article{Konrad:2017:AIDisplays,
  author = {R. Konrad and N. Padmanaban and K. Molner and E. A. Cooper and G. Wetzstein},
  title = {{Accommodation-invariant Computational Near-eye Displays}},
  journal = {ACM Trans. Graph. (SIGGRAPH)},
  volume = {36},
  number = {4},
  year = {2017},
}

Photograph of the prototype display. Top: a stereoscopic near-eye display is table-mounted so that an autorefractor can record the user’s accommodative response to the presented visual stimulus. Each arm comprises a high-resolution liquid crystal display (LCD), a series of focusing lenses, a focus-tunable lens, and a NIR/visible beam splitter that allows the optical path to be shared with the autorefractor. The interpupillary distance is adjustable via a translation stage. Bottom: a custom printed circuit board intercepts the signals between the LCD and its driver board so that a microcontroller can synchronize the focus-tunable lens with the strobed backlight.
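To make the synchronization concrete, below is a minimal Python sketch of the timing relationship such a board has to maintain in a multi-plane mode: assuming a sinusoidal focal sweep of the tunable lens, it computes when within one sweep period the backlight would be strobed so that each target focal plane is shown in focus. The sweep range, sweep frequency, and sinusoidal drive are illustrative assumptions rather than measured parameters of the prototype.

import numpy as np

# Timing sketch (illustrative, not the authors' firmware): assume the tunable
# lens follows a sinusoidal focal sweep and find the instants within one sweep
# period at which the sweep crosses each target focal plane, i.e., when the
# backlight would be strobed in a multi-plane accommodation-invariant mode.

def strobe_times(planes_d, d_min=0.0, d_max=4.0, sweep_hz=60.0):
    """Strobe times (s) within one sweep period for each focal plane.

    planes_d     : target focal planes in diopters, e.g. [1.0, 3.0]
    d_min, d_max : assumed sweep range of the tunable lens in diopters
    sweep_hz     : assumed sweep frequency, matched to the display refresh
    """
    period = 1.0 / sweep_hz
    mid = 0.5 * (d_max + d_min)   # sweep midpoint (D)
    amp = 0.5 * (d_max - d_min)   # sweep amplitude (D)
    times = {}
    for d in planes_d:
        # Sinusoidal sweep: d(t) = mid + amp * sin(2*pi*t/period).
        phase = np.arcsin(np.clip((d - mid) / amp, -1.0, 1.0))
        # Each plane is crossed twice per period (rising and falling sweep).
        t_rise = (phase / (2 * np.pi)) * period % period
        t_fall = ((np.pi - phase) / (2 * np.pi)) * period % period
        times[d] = (t_rise, t_fall)
    return times

if __name__ == "__main__":
    for d, (t1, t2) in strobe_times([1.0, 3.0]).items():
        print(f"{d:.1f} D plane: strobe at {t1 * 1e3:.2f} ms and {t2 * 1e3:.2f} ms")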


Overview of relevant depth cues. Vergence and accommodation are oculomotor cues whereas binocular disparity and retinal blur are visual cues. In normal viewing conditions, disparity drives vergence and blur drives accommodation. However, these cues are cross-coupled, so there are conditions under which blur-driven vergence or disparity-driven accommodation occur. Accommodation-invariant displays use display point spread function engineering to facilitate disparity-driven accommodation. This is illustrated by the red arrows.


Captured point spread functions of the green display channel. The plots show one-dimensional slices of captured PSFs at several different locations (top) and depths for conventional, accommodation-invariant (AI) continuous, AI 2-plane, and AI 3-plane display modes. Whereas the conventional PSFs quickly blur out away from the focal plane at 1 D (1 m), the shape of the accommodation-invariant PSFs remains almost constant throughout the entire range. Multi-plane AI PSFs are focused at the respective planes, but not in between.
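To build intuition for why the swept PSFs remain nearly constant, the following Python sketch uses a simple geometric-optics model (not the paper’s calibration): the conventional mode produces a defocus disk whose width grows with the accommodation error in diopters, while a continuous focal sweep time-averages those disks over the swept depth range. The pupil size, blur scaling, and sweep range are illustrative assumptions.

import numpy as np

PUPIL_MM = 4.0                    # assumed pupil diameter
X = np.linspace(-0.5, 0.5, 1001)  # 1-D retinal coordinate (arbitrary units)

def pillbox(width):
    """1-D slice through a defocus disk (pillbox) PSF of the given width."""
    w = max(width, 1e-3)          # clamp so an in-focus PSF stays finite
    psf = (np.abs(X) <= w / 2).astype(float)
    return psf / psf.sum()

def conventional_psf(accommodation_d, image_d=1.0):
    # Blur width scales with the accommodation error |A - D_image| in diopters.
    return pillbox(0.1 * PUPIL_MM * abs(accommodation_d - image_d))

def ai_psf(accommodation_d, sweep=(0.0, 4.0), samples=64):
    # Continuous focal sweep: time-average the PSF over the swept image depths.
    depths = np.linspace(*sweep, samples)
    psf = sum(pillbox(0.1 * PUPIL_MM * abs(accommodation_d - d)) for d in depths)
    return psf / psf.sum()

if __name__ == "__main__":
    for a in [0.5, 1.0, 2.0, 3.0]:            # accommodation states (D)
        conv = conventional_psf(a)
        ai = ai_psf(a)
        print(f"A = {a:.1f} D: conventional support {(conv > 0).sum()} px, "
              f"AI half-max width {(ai > 0.5 * ai.max()).sum()} px")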


Comparing deconvolution methods. A target image (top left) creates a sharply focused image only at a single plane, but the perceived blur when the eye accommodates at other distances is severe (top right, focused at 25 cm). Accommodation-invariant displays provide a depth-invariant PSF (center inset) but require the target image to be deconvolved prior to display. We compare two deconvolution methods: inverse filtering and constrained optimization (row 2). The latter provides a baseline for the best possible results, whereas inverse filtering produces near-optimal results in real time. Photographs of the prototype (row 4) match simulations (row 3).
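As a rough sketch of the real-time pre-correction path, assuming the captured depth-invariant PSF is available, the Python snippet below applies a regularized (Wiener-style) inverse filter in the Fourier domain and clips the result to the displayable range. The constrained-optimization baseline is not reproduced here, and the regularization weight and synthetic PSF in the example are placeholders.

import numpy as np

def inverse_filter(target, psf, eps=1e-2):
    """Deconvolve `target` by `psf` with a regularized (Wiener-style) inverse.

    target : 2-D image with values in [0, 1]
    psf    : 2-D point spread function, no larger than the image
    eps    : regularization weight; larger values curb noise amplification
    """
    # Zero-pad the PSF to the image size and center it at the origin.
    k = np.zeros_like(target)
    k[:psf.shape[0], :psf.shape[1]] = psf / psf.sum()
    k = np.roll(k, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

    K = np.fft.fft2(k)
    T = np.fft.fft2(target)
    # conj(K) / (|K|^2 + eps) approximates 1/K while limiting amplification
    # at frequencies where the PSF transfers little energy.
    precorrected = np.real(np.fft.ifft2(np.conj(K) / (np.abs(K) ** 2 + eps) * T))
    # Displayed images must stay in range, so clip (the paper's constrained
    # optimization enforces this constraint explicitly instead).
    return np.clip(precorrected, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((256, 256))                   # stand-in target image
    y, x = np.mgrid[-7:8, -7:8]
    psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))    # stand-in Gaussian PSF
    out = inverse_filter(img, psf)
    print(out.shape, float(out.min()), float(out.max()))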


Photographic results of several display modes for six different focal settings. The conventional mode (top row) provides a sharp image at a single depth plane. Accommodation-invariant displays with a continuous focal sweep equalize the PSF over the entire depth range (second row), but the full image resolution cannot be restored even with deconvolution. Multi-plane AI displays optimize image resolution for a select number of depths, here shown for two (third row) and three (fourth row) planes.


Accommodative gain in the first user study. Each panel shows the individual (black lines) and average (blue lines) accommodative responses to the oscillating stimulus (red lines) for each display mode (conventional, AI, and dynamic focus). In the conventional mode, the virtual image distance was fixed at 1.3 m. Data are shown for 3 cycles after a 0.5-cycle buffer at the start of each trial. The ordinate indicates the accommodative and stimulus distance with the mean distance subtracted out; this accounts for individual offsets in each user’s accommodative measures. Inset histograms show the distribution of gains for each condition.
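One plausible way to compute the accommodative gain summarized in the inset histograms, sketched below in Python under the assumption of a sinusoidal stimulus (the study’s exact analysis and stimulus parameters are in the paper and not reproduced here): project the mean-subtracted stimulus and response onto a sinusoid at the stimulus frequency and take the ratio of the fitted amplitudes. The sampling rate and 0.1 Hz stimulus frequency in the example are illustrative.

import numpy as np

def sinusoid_amplitude(signal, t, freq_hz):
    """Least-squares amplitude of a sinusoid at freq_hz contained in `signal`."""
    A = np.column_stack([np.sin(2 * np.pi * freq_hz * t),
                         np.cos(2 * np.pi * freq_hz * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return np.hypot(coef[0], coef[1])

def accommodative_gain(response_d, stimulus_d, t, freq_hz):
    """Gain = response amplitude / stimulus amplitude at the stimulus frequency."""
    resp = response_d - np.mean(response_d)   # remove individual offsets
    stim = stimulus_d - np.mean(stimulus_d)
    return sinusoid_amplitude(resp, t, freq_hz) / sinusoid_amplitude(stim, t, freq_hz)

if __name__ == "__main__":
    # Synthetic example: 0.1 Hz stimulus, attenuated and noisy response.
    t = np.arange(0, 30, 1 / 30)                        # 30 s sampled at 30 Hz
    stim = 1.3 + 0.5 * np.sin(2 * np.pi * 0.1 * t)      # diopters
    resp = 1.5 + 0.3 * np.sin(2 * np.pi * 0.1 * t - 0.4) \
               + 0.05 * np.random.randn(t.size)
    print(f"gain ~= {accommodative_gain(resp, stim, t, 0.1):.2f}")  # about 0.6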

Related Projects

You may also be interested in related projects from our group on next-generation near-eye displays and wearable technology:

  • Y. Peng et al. “Neural Holography with Camera-in-the-loop Training”, ACM SIGGRAPH 2020 (link)
  • R. Konrad et al. “Gaze-contingent Ocular Parallax Rendering for Virtual Reality”, ACM Transactions on Graphics 2020 (link)
  • B. Krajancich et al. “Optimizing Depth Perception in Virtual and Augmented Reality through Gaze-contingent Stereo Rendering”, ACM SIGGRAPH Asia 2020 (link)
  • B. Krajancich et al. “Factored Occlusion: Single Spatial Light Modulator Occlusion-capable Optical See-through Augmented Reality Display”, IEEE TVCG, 2020 (link)
  • N. Padmanaban et al. “Autofocals: Evaluating Gaze-Contingent Eyeglasses for Presbyopes”, Science Advances 2019 (link)
  • K. Rathinavel et al. “Varifocal Occlusion-Capable Optical See-through Augmented Reality Display based on Focus-tunable Optics”, IEEE TVCG 2019 (link)
  • N. Padmanaban et al. “Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays”, PNAS 2017 (link)
  • R. Konrad et al. “Accommodation-invariant Computational Near-eye Displays”, ACM SIGGRAPH 2017 (link)
  • R. Konrad et al. “Novel Optical Configurations for Virtual Reality: Evaluating User Preference and Performance with Focus-tunable and Monovision Near-eye Displays”, ACM SIGCHI 2016 (link)
  • F.C. Huang et al. “The Light Field Stereoscope: Immersive Computer Graphics via Factored Near-Eye Light Field Display with Focus Cues”, ACM SIGGRAPH 2015 (link)