Towards Attention-aware Foveated Rendering | SIGGRAPH 2023

Brooke Krajancich, Petr Kellnhofer, Gordon Wetzstein

Existing perceptual models used in foveated graphics neglect the effects of visual attention distribution. We introduce the first attention-aware model of contrast sensitivity and motivate the development of future foveation models, demonstrating that tolerance for foveation is significantly higher when the user is concentrating on a task in the fovea.

 

Best Paper Honorable Mention Award!

SIGGRAPH 2023 - 3 Min Overview

ABSTRACT

Foveated graphics is a promising approach to solving the bandwidth challenges of immersive virtual and augmented reality displays by exploiting the falloff in spatial acuity in the periphery of the visual field. However, the perceptual models used in these applications neglect the effects of higher-level cognitive processing, namely the allocation of visual attention, and thus overestimate sensitivity in the periphery in many scenarios. Here, we introduce the first attention-aware model of contrast sensitivity. We conduct user studies to measure contrast sensitivity under different attention distributions and show that sensitivity in the periphery drops significantly when the user is required to allocate attention to the fovea. We motivate the development of future foveation models with another user study and demonstrate that tolerance for foveation in the periphery is significantly higher when the user is concentrating on a task in the fovea. Analysis of our model predicts significantly higher bandwidth savings than those afforded by current models. As such, our work forms the foundation for attention-aware foveated graphics techniques.

FILES

CITATION

B. Krajancich, P. Kellnhofer, G. Wetzstein, “Towards Attention-aware Foveated Rendering”, in ACM Trans. Graph., 42 (4), 2023.

BibTeX
@article{krajancich2023attention,
  author  = {Krajancich, Brooke and Kellnhofer, Petr and Wetzstein, Gordon},
  title   = {Towards Attention-aware Foveated Rendering},
  journal = {ACM Trans. Graph.},
  volume  = {42},
  number  = {4},
  year    = {2023}
}

The First Attention-aware CSF Model

Shifting attention to the fovea: We investigated the effect of modulating the amount of attention allocated to a contrast discrimination task in the periphery by forcing attention to the fovea with a visually demanding task. Specifically, users were required to multi-task: reporting the color of the letter “T” in the fovea while also discriminating the orientations of Gabor patches in the periphery (to measure the CSF).

Contrast thresholds increase significantly: We compare the standard approach to measuring contrast sensitivity, where little attention is directed to the fovea (“low”), to scenarios where part or most of the attention is directed there (“medium” and “high”), as determined by the difficulty of the foveal task. We show that in these conditions, peripheral contrast discrimination thresholds rise significantly, and that this effect grows with eccentricity.



A Model for Attention-aware Contrast Sensitivity (CSF): We introduce the first attention-aware contrast sensitivity model. Note that contrast sensitivity is the inverse of the contrast threshold, so the medium and high attention conditions fall below the original CSF, since we are less sensitive in these conditions. See the paper for results validating this fit with additional stimuli and at different display luminances.
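The paper gives the actual parametric fit; as a rough illustration only (all function names, constants, and the attenuation form below are hypothetical, not the authors' model), one can think of an attention-aware CSF as a baseline eccentricity-dependent sensitivity scaled down by a factor that grows with both the foveal attention demand and the eccentricity, consistent with the measured threshold elevations:

```python
import math

def base_sensitivity(eccentricity_deg, peak=200.0, falloff=0.05):
    """Hypothetical baseline CSF: sensitivity decays exponentially with
    eccentricity. Constants are illustrative, not the paper's fit."""
    return peak * math.exp(-falloff * eccentricity_deg)

def attention_aware_sensitivity(eccentricity_deg, attention_load):
    """Scale the baseline down as more attention is held in the fovea.
    attention_load in [0, 1]: 0 ~ the 'low' condition, 1 ~ 'high'.
    The attenuation grows with eccentricity, mirroring the finding that
    threshold elevation increases toward the periphery."""
    attenuation = 1.0 + 0.02 * attention_load * eccentricity_deg
    return base_sensitivity(eccentricity_deg) / attenuation

def contrast_threshold(eccentricity_deg, attention_load):
    """Contrast threshold is the inverse of contrast sensitivity."""
    return 1.0 / attention_aware_sensitivity(eccentricity_deg, attention_load)
```

Under this sketch, thresholds at a given eccentricity rise as the foveal task gets harder, and the elevation factor itself grows toward the periphery, which is the qualitative behavior the user studies report.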

Visual Attention and Perception

Visual attention refers to a set of cognitive operations that help us selectively process the vast amounts of information with which we are confronted, allowing us to focus on a certain location or aspect of the visual scene while ignoring others. Spatial visual attention is often modeled as a “zoom” or “variable-power lens”: the attended region can be adjusted in size, but because the brain has limited processing capacity, spatial attention imposes a trade-off between the size of the attended region and processing efficiency (illustrated above).
Overestimating contrast sensitivity (CSF) in the periphery: Several studies have demonstrated that increasing the amount of attention allocated to a visual task can enhance performance. Conversely, dividing attention between tasks reduces this enhancement, affecting contrast sensitivity, visual acuity, and the speed of information accrual. However, all existing models of contrast sensitivity and visual acuity are built on experiments in which users are asked to direct most of their visual attention to the discrimination task in the periphery. Thus, for most real-world and VR/AR scenarios, where our attention is allocated at our gaze position, we are likely overestimating the capabilities of human perception in the periphery.

Applications in Foveated Graphics

Motivating new cognitively-foveated approaches: We motivate the development of future foveation models with another user study. We use the same visually demanding task in the fovea, but replace the Gabor patches with images. We foveate one side at some intensity (foveation slope) and ask users to discriminate which side is more visually degraded. Using an adaptive staircase procedure, we determine the foveation slope at which the user can no longer discriminate the foveated image from the full-resolution one.
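Adaptive staircases are standard psychophysics rather than anything specific to this paper; purely as an illustration (the observer model, step size, and stopping rule below are hypothetical, not the study's actual parameters), a 1-up/2-down staircase over foveation slope, which converges near the 70.7%-correct point of the psychometric function, might look like this:

```python
import math
import random

def p_correct(slope, threshold=0.5, spread=0.08):
    """Hypothetical 2AFC psychometric function: chance (0.5) when the
    foveation is imperceptible, approaching 1.0 for strong foveation."""
    return 0.5 + 0.5 / (1.0 + math.exp(-(slope - threshold) / spread))

def staircase(threshold=0.5, start=1.0, step=0.05, n_trials=400, seed=1):
    """1-up/2-down staircase: two consecutive correct discriminations
    lower the foveation slope (milder degradation), one error raises it.
    The slope oscillates around the ~70.7%-correct point, estimated here
    from the last few reversals."""
    rng = random.Random(seed)
    slope, streak, last_dir, reversals = start, 0, 0, []
    for _ in range(n_trials):
        correct = rng.random() < p_correct(slope, threshold)
        if correct:
            streak += 1
            if streak == 2:                    # 2-down: reduce foveation
                streak = 0
                slope = max(0.0, slope - step)
                if last_dir == +1:
                    reversals.append(slope)
                last_dir = -1
        else:
            streak = 0
            slope += step                      # 1-up: increase foveation
            if last_dir == -1:
                reversals.append(slope)
            last_dir = +1
    tail = reversals[-8:]
    return sum(tail) / len(tail)               # tolerable-foveation estimate
```

Running `staircase()` against this simulated observer returns a slope estimate close to the observer's true discrimination threshold, which is the quantity the study measures per attention condition.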

Higher tolerable foveation: We demonstrate that tolerance for foveation in the periphery is significantly higher when the user is concentrating on a task in the fovea. Moreover, the state-of-the-art visible difference predictor FovVideoVDP predicts that artifacts should be visible in the “medium” and “high” attention conditions. We modify this metric to use our attention-aware CSF model and show that it then more accurately predicts the measured foveation tolerances in each condition.

Potential for higher bandwidth savings: We provide a theoretical analysis of the compression gain factors this model may enable for foveated graphics applications when used to allocate resources such as bandwidth, showing the potential for significant additional savings.
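To convey the intuition behind such an analysis (this is a toy model, not the paper's derivation: the sampling-density falloff, field size, and slope values below are all illustrative assumptions), one can compare the relative sample count needed under a conservative foveation slope against the steeper slope tolerated when attention is held in the fovea:

```python
def relative_samples(slope, max_ecc=40.0, n=4000):
    """Approximate relative sample count over a 1D visual field of
    half-width max_ecc degrees, assuming (hypothetically) that the
    required sampling density falls off as 1/(1 + slope * eccentricity).
    Computed by midpoint numerical integration."""
    de = max_ecc / n
    return sum(1.0 / (1.0 + slope * (i + 0.5) * de) for i in range(n)) * de

def compression_gain(baseline_slope, attention_aware_slope):
    """Bandwidth gain factor from the steeper foveation slope tolerated
    when attention is concentrated in the fovea."""
    return relative_samples(baseline_slope) / relative_samples(attention_aware_slope)
```

With these made-up numbers, a tolerated slope of 0.12 instead of a baseline 0.05 yields a gain factor of roughly 1.5x; the point is only that any increase in tolerable slope translates directly into fewer samples, and hence less bandwidth, to transmit.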

Related Projects

You may also be interested in related projects from our group on perceptual aspects of near-eye displays:

  • B. Krajancich et al. “A Perceptual Model for Eccentricity-dependent Spatio-temporal Flicker Fusion and its Applications to Foveated Graphics”, SIGGRAPH 2021 (link)
  • R. Konrad et al. “Gaze-contingent Ocular Parallax Rendering for Virtual Reality”, ACM Transactions on Graphics 2020 (link)
  • B. Krajancich et al. “Optimizing Depth Perception in Virtual and Augmented Reality through Gaze-contingent Stereo Rendering”, ACM SIGGRAPH Asia 2020 (link)
  • N. Padmanaban et al. “Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays”, PNAS 2017 (link)

and other next-generation near-eye display and wearable technology:

  • Y. Peng et al. “Neural Holography with Camera-in-the-loop Training”, ACM SIGGRAPH 2020 (link)
  • B. Krajancich et al. “Factored Occlusion: Single Spatial Light Modulator Occlusion-capable Optical See-through Augmented Reality Display”, IEEE TVCG, 2020 (link)
  • N. Padmanaban et al. “Autofocals: Evaluating Gaze-Contingent Eyeglasses for Presbyopes”, Science Advances 2019 (link)
  • K. Rathinavel et al. “Varifocal Occlusion-Capable Optical See-through Augmented Reality Display based on Focus-tunable Optics”, IEEE TVCG 2019 (link)
  • R. Konrad et al. “Accommodation-invariant Computational Near-eye Displays”, ACM SIGGRAPH 2017 (link)
  • R. Konrad et al. “Novel Optical Configurations for Virtual Reality: Evaluating User Preference and Performance with Focus-tunable and Monovision Near-eye Displays”, ACM SIGCHI 2016 (link)
  • F.C. Huang et al. “The Light Field Stereoscope: Immersive Computer Graphics via Factored Near-Eye Light Field Display with Focus Cues”, ACM SIGGRAPH 2015 (link)

 

ACKNOWLEDGEMENTS

The project was supported by a Stanford Knight-Hennessy Fellowship and Samsung. The authors would also like to thank Robert Konrad, Nitish Padmanaban and Justin Gardner for helpful discussions and advice.