Gaze-Contingent Ocular Parallax Rendering for VR | TOG 2020

Robert Konrad, Anastasios Angelopoulos, Gordon Wetzstein

A new gaze-contingent rendering mode for VR/AR that renders perceptually correct ocular parallax, improving depth perception and perceptual realism.

ABSTRACT

Immersive computer graphics systems strive to generate perceptually realistic user experiences. Current-generation virtual reality (VR) displays are successful in accurately rendering many perceptually important effects, including perspective, disparity, motion parallax, and other depth cues. In this paper we introduce ocular parallax rendering, a technology that accurately renders small amounts of gaze-contingent parallax capable of improving depth perception and realism in VR. Ocular parallax describes the small, depth-dependent image shifts on the retina that are created as the eye rotates. The effect occurs because the centers of rotation and projection of the eye are not the same. We study the perceptual implications of ocular parallax rendering by designing and conducting a series of user experiments. Specifically, we estimate perceptual detection and discrimination thresholds for this effect and demonstrate that it is clearly visible in most VR applications. Additionally, we show that ocular parallax rendering provides an effective ordinal depth cue and that it improves the impression of realistic depth in VR.

Link to arXiv

CITATION

R. Konrad, A. Angelopoulos, G. Wetzstein, “Gaze-Contingent Ocular Parallax Rendering for Virtual Reality”, in ACM Trans. Graph., 39 (2), 2020.

BibTeX
@article{Konrad:2019:OcularParallax,
  author  = {Konrad, Robert and Angelopoulos, Anastasios and Wetzstein, Gordon},
  title   = {Gaze-Contingent Ocular Parallax Rendering for Virtual Reality},
  journal = {ACM Trans. Graph.},
  volume  = {39},
  number  = {2},
  year    = {2020}
}


User Experiment Results

Detection and Discrimination Thresholds for Ocular Parallax in VR: Two experiments were conducted to estimate perceptual thresholds using an HTC Vive Pro head-mounted display presenting stimuli of the form shown on the left. A red surface subtending 2° of visual angle was completely occluded by a noisy gray surface in front of it (top left). The relative distances of these surfaces were varied across the conditions described in the text. Detection thresholds (bottom left) and discrimination thresholds (bottom right) were estimated from psychometric functions fitted to the recorded subject data (top right).
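
The exact fitting procedure is described in the paper; as a rough illustration of this kind of analysis, the sketch below fits a cumulative Gaussian psychometric function to proportion-detected data and reads off a detection threshold. The data values and the 75% criterion are assumptions for illustration, not the study's numbers.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def psychometric(x, mu, sigma):
        # Cumulative Gaussian: probability of reporting the stimulus as seen.
        return norm.cdf(x, loc=mu, scale=sigma)

    # Illustrative data only: stimulus magnitude (e.g., parallax in arcmin)
    # vs. proportion of "detected" responses.
    x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    p = np.array([0.10, 0.25, 0.55, 0.85, 0.98])

    (mu, sigma), _ = curve_fit(psychometric, x, p, p0=[2.0, 1.0])

    # Detection threshold at an assumed 75% criterion.
    threshold = norm.ppf(0.75, loc=mu, scale=sigma)
    print(f"estimated detection threshold: {threshold:.2f} arcmin")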

Effect of Ocular Parallax on Ordinal Depth Perception: Subjects viewed monocular stimuli consisting of two distinctly textured surfaces separated by 1 or 2 diopters (top) and were asked which one was closer. The proportions of correct responses, averaged across subjects per condition, are plotted on the bottom. Subjects performed significantly better with ocular parallax rendering enabled than with conventional rendering. However, they also performed slightly better than with conventional rendering when ocular parallax was reversed, indicating that the extra-retinal signal of eye rotation may not be crucial for depth perception. Significance is indicated at the p ≤ 0.05, 0.01, and 0.001 levels with ⋆, ⋆⋆, and ⋆⋆⋆, respectively.

Evaluating Perceptual Realism: Subjects viewed a 3D scene consisting of targets that are randomly distributed in depth but do not occlude one another (left). This stimulus was presented with either conventional, ocular parallax, or reversed ocular parallax rendering, and subjects were asked to indicate which rendering mode provided a stronger impression of realistic depth. Results of pairwise comparisons between these rendering modes show the percentage of times the first member of the pair was chosen over the second (right). Rendering with correct and with reversed ocular parallax each conveyed a stronger impression of depth than conventional rendering, but no difference was observed when they were compared against one another. This result indicates that the relative magnitudes of depth-dependent motion velocities matter more for perceptual realism than their direction. Significance is indicated at the p ≤ 0.05, 0.01, and 0.001 levels with ⋆, ⋆⋆, and ⋆⋆⋆, respectively.

Ocular Parallax

Ocular Parallax Overview: The centers of rotation and projection of the eyes are not the same. As a consequence, small amounts of parallax are created in the retinal image as we fixate on different objects in the scene. The nodal points of the eye, representing the centers of projection, are shown as small blue circles on the left, along with a ray diagram illustrating the optical mechanism of ocular parallax. Simulated retinal images that include the falloff of acuity in the periphery of the visual field are shown on the right. When the user fixates on the candle in the center of the scene (center; the red circle indicates the fixation point), the bottle is partly occluded by the candle. As the gaze moves to the left, ocular parallax reveals the bottle behind the candle (right).
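
To get a feel for the magnitude of the effect, the following back-of-the-envelope sketch uses a small-angle, translating-viewpoint model. The roughly 6 mm offset between the center of rotation and the front nodal point is a commonly cited schematic-eye approximation, not a value taken from the paper.

    import numpy as np

    # Assumed offset between the eye's center of rotation C and its front
    # nodal point N along the gaze direction; the exact number depends on
    # the schematic eye model.
    D_CN = 0.006  # meters

    def ocular_parallax_arcmin(theta_deg, z_near, z_far):
        """Relative angular shift between points at depths z_near and z_far
        (meters) after an eye rotation of theta_deg degrees."""
        # Lateral translation of the nodal point caused by the rotation.
        delta = D_CN * np.sin(np.radians(theta_deg))
        # Depth-dependent parallax of a translating center of projection
        # (small-angle approximation).
        parallax_rad = delta * (1.0 / z_near - 1.0 / z_far)
        return np.degrees(parallax_rad) * 60.0

    # Example: a 10 degree gaze shift with objects at 0.5 m and 2 m yields
    # a few arcminutes of parallax.
    print(f"{ocular_parallax_arcmin(10.0, 0.5, 2.0):.2f} arcmin")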

The Optics of the Eye: Illustration of a schematic eye, including the front and rear nodal points N and N′, the center of rotation C, and the anterior vertex of the cornea V. The nodal points are two parameters of a thick-lens model that refracts light rays as depicted; the front nodal point acts as the effective center of projection of the eye. The exact locations of these points depend on the schematic eye model used.
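
For concreteness, commonly cited approximate positions of these points (here for Gullstrand's exact schematic eye; other models and sources differ slightly) can be tabulated in a few lines:

    # Approximate axial positions behind the corneal vertex V, in mm.
    # Illustrative values only; they vary across schematic eye models.
    N_FRONT = 7.08   # front nodal point N
    N_REAR  = 7.33   # rear nodal point N'
    C_ROT   = 13.5   # center of rotation C (a commonly cited approximation)

    # The lever arm between the center of rotation and the effective center
    # of projection is what produces ocular parallax.
    print(f"C-to-N offset: {C_ROT - N_FRONT:.2f} mm")  # about 6.4 mm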

Ocular Parallax Rendering

Ocular Parallax Rendering Implementation: Illustration of the parameters used to compute the nodal point N of each eye, defined with respect to that eye's center of rotation C, from the fixation point F estimated by the eye tracker. The precise locations of these nodal points are required for calculating the view and projection matrices in the rendering pipeline.
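
A minimal sketch of the corresponding view-matrix update is given below, assuming the per-eye center of rotation C and the tracked fixation point F are expressed in world coordinates and that the C-to-N distance is a fixed schematic-eye constant. The matching off-axis update of the projection matrix, which keeps the image plane registered with the physical display, is omitted for brevity.

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    def nodal_point_offset(C, F, d_cn=0.0064):
        """World-space offset from the center of rotation C to the front
        nodal point N, pointing toward the fixation point F. The magnitude
        d_cn (meters) is an assumed schematic-eye value."""
        return d_cn * normalize(F - C)

    def ocular_parallax_view(head_view, C, F):
        """Shift a conventional per-eye view matrix so the center of
        projection sits at the gaze-dependent nodal point N instead of C."""
        T = np.eye(4)
        T[:3, 3] = -nodal_point_offset(C, F)  # camera moves by +offset
        return head_view @ T

    # Example: left eye of a standing user fixating a point about 1 m away.
    C = np.array([-0.032, 1.6, 0.0])   # half of a 64 mm IPD
    F = np.array([0.2, 1.55, -1.0])    # fixation point from the eye tracker
    view = ocular_parallax_view(np.eye(4), C, F)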

Related Projects

You may also be interested in related projects from our group on perceptual aspects of near-eye displays:

  • B. Krajancich et al. “Optimizing Depth Perception in Virtual and Augmented Reality through Gaze-contingent Stereo Rendering”, ACM SIGGRAPH Asia 2020 (link)
  • N. Padmanaban et al. “Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays”, PNAS 2017 (link)

and other next-generation near-eye display and wearable technology:

  • Y. Peng et al. “Neural Holography with Camera-in-the-loop Training”, ACM SIGGRAPH 2020 (link)
  • B. Krajancich et al. “Factored Occlusion: Single Spatial Light Modulator Occlusion-capable Optical See-through Augmented Reality Display”, IEEE TVCG, 2020 (link)
  • N. Padmanaban et al. “Autofocals: Evaluating Gaze-Contingent Eyeglasses for Presbyopes”, Science Advances 2019 (link)
  • K. Rathinavel et al. “Varifocal Occlusion-Capable Optical See-through Augmented Reality Display based on Focus-tunable Optics”, IEEE TVCG 2019 (link)
  • R. Konrad et al. “Accommodation-invariant Computational Near-eye Displays”, ACM SIGGRAPH 2017 (link)
  • R. Konrad et al. “Novel Optical Configurations for Virtual Reality: Evaluating User Preference and Performance with Focus-tunable and Monovision Near-eye Displays”, ACM SIGCHI 2016 (link)
  • F.C. Huang et al. “The Light Field Stereoscope: Immersive Computer Graphics via Factored Near-Eye Light Field Display with Focus Cues”, ACM SIGGRAPH 2015 (link)