Towards a Machine-learning Approach for Sickness Prediction in 360° Stereoscopic Videos | IEEE VR 2018

Nitish Padmanaban*, Timon Ruban*, Vincent Sitzmann, Anthony M. Norcia, and Gordon Wetzstein

Using machine learning approaches to automate the process of determining sickness ratings for a given 360° stereoscopic video.

ABSTRACT

Virtual reality systems are widely believed to be the next major computing platform. There are, however, some barriers to adoption that must be addressed, such as that of motion sickness – which can lead to undesirable symptoms including postural instability, headaches, and nausea. Motion sickness in virtual reality occurs as a result of moving visual stimuli that cause users to perceive self-motion while they remain stationary in the real world. There are several contributing factors to both this perception of motion and the subsequent onset of sickness, including field of view, motion velocity, and stimulus depth. We verify first that differences in vection due to relative stimulus depth remain correlated with sickness. Then, we build a dataset of stereoscopic 3D videos and their corresponding sickness ratings in order to quantify their nauseogenicity, which we make available for future use. Using this dataset, we train a machine learning algorithm on hand-crafted features (quantifying speed, direction, and depth as functions of time) from each video, learning the contributions of these various features to the sickness ratings. Our predictor generally outperforms a naïve estimate, but is ultimately limited by the size of the dataset. However, our result is promising and opens the door to future work with more extensive datasets. This and further advances in this space have the potential to alleviate developer and end user concerns about motion sickness in the increasingly commonplace virtual world.

FILES

CITATION

Padmanaban, N., Ruban, T., Sitzmann, V., Norcia, A. M., & Wetzstein, G. (2018). Towards a Machine-learning Approach for Sickness Prediction in 360° Stereoscopic Videos. IEEE Transactions on Visualization and Computer Graphics.
doi: 10.1109/TVCG.2018.2793560

BibTeX

@article{Padmanaban:2018:Sickness,
author={Padmanaban, Nitish and Ruban, Timon and Sitzmann, Vincent and Norcia, Anthony M. and Wetzstein, Gordon},
journal={IEEE Transactions on Visualization and Computer Graphics},
title={Towards a Machine-Learning Approach for Sickness Prediction in 360$^\circ$ Stereoscopic Videos},
year={2018},
volume={24},
number={4},
pages={1594--1603}
}

A sample set of frames showing the feature extraction. From left to right: 1) the video frame, 2) the disparity map, 3) the optical flow field, and 4) the disparity, vertical flow, and horizontal flow plotted on 3D axes.


An illustration of the feature calculation for a single frame. The horizontal and vertical components of the optical flow, plus the disparity, are each averaged, binned, or summarized using PCA.
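The per-frame summarization described in the caption can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the function name `summarize_frame`, the bin count, and the choice of a top-singular-value summary as the PCA step are all assumptions.

```python
import numpy as np

def summarize_frame(flow_u, flow_v, disparity, n_bins=8):
    """Collapse per-pixel motion/depth maps into a small per-frame feature vector.

    flow_u, flow_v: horizontal and vertical optical-flow components (H x W)
    disparity:      per-pixel disparity map (H x W)
    Returns a 1-D feature vector: per-field mean, per-field normalized
    histogram, and one PCA-style summary of the stacked fields.
    """
    feats = []
    for field in (flow_u, flow_v, disparity):
        feats.append(field.mean())                  # global average of the field
        hist, _ = np.histogram(field, bins=n_bins)  # binned distribution
        feats.extend(hist / field.size)             # normalized bin counts
    # PCA-style summary (assumed form): top singular value of the
    # mean-centered (u, v, d) samples, one row per pixel.
    stacked = np.stack([f.ravel() for f in (flow_u, flow_v, disparity)], axis=1)
    stacked = stacked - stacked.mean(axis=0)
    feats.append(np.linalg.svd(stacked, compute_uv=False)[0])
    return np.array(feats)
```

Concatenating such vectors over time would give the per-video, time-varying features (speed, direction, and depth) that the predictor is trained on.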