Conference Awards Committee IEEE VR 2024

Conference Awards Chairs

  • Frank Steinicke ‒ Universität Hamburg, Germany
  • Shi-Min Hu ‒ Tsinghua University, China
  • Kiyoshi Kiyokawa ‒ Nara Institute of Science and Technology, Japan
  • Luciana Nedel ‒ Federal University of Rio Grande do Sul, Brazil
  • Missie Smith ‒ Meta Reality Labs, USA
Congratulations to the IEEE VR 2024 Award Winners!

Best Papers & Honorable Mentions

The IEEE VR Best Paper Awards honor exceptional papers published and presented at the IEEE VR conference. During the review process, the program committee chairs choose approximately 3% of submissions to receive an award. Among these chosen submissions, the separate Conference Awards Selection Committee selects the best to receive a Best Paper Award (ca. 1% of total submissions), while a selection of the remaining submissions receives an Honorable Mention Award. Papers that receive an award are marked in the program, and authors receive a certificate at the conference.

Best Papers

VR.net: A Real-world Large-scale Dataset for Virtual Reality Motion Sickness Research (Journal: P1422)

Elliott Wen, The University of Auckland; Chitralekha Gupta, National University of Singapore; Prasanth Sasikumar, National University of Singapore; Mark Billinghurst, University of South Australia; James Wilmott, Meta; Emily Skow, Meta; Arindam Dey, Meta; Suranga Nanayakkara, The University of Auckland

This paper introduces VR.net, a dataset comprising 165 hours of gameplay video from 100 real-world games spanning 10 genres, evaluated by 500 participants. The dataset assigns 24 motion sickness-related labels to each video frame, extracted automatically from the 3D engines' rendering pipelines. VR.net's substantial scale, accuracy, and diversity present unmatched opportunities for VR motion sickness research and beyond.
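
To make the per-frame labeling concrete, here is a minimal sketch of how such a dataset might be consumed; the file layout, column names, and CSV schema are assumptions for illustration, not the released VR.net format.

```python
# Hypothetical loader for a VR.net-style label file in which every
# video frame carries 24 motion sickness-related labels.
import csv

def iter_labeled_frames(label_csv_path):
    """Yield (frame_id, labels) pairs, one per video frame."""
    with open(label_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            frame_id = row.pop("frame_id")  # assumed key column
            labels = {name: float(v) for name, v in row.items()}
            assert len(labels) == 24  # 24 labels per frame
            yield frame_id, labels
```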

Redirection Strategy Switching: Selective Redirection Controller for Dynamic Environment Adaptation (Journal: P1873)

Ho Jung Lee, Yonsei University; Sang-Bin Jeon, Yonsei University; Yong-Hun Cho, Korea University; In-Kwon Lee, Yonsei University

Selective Redirection Controller (SRC) is a novel approach to Redirected Walking (RDW) that dynamically switches between four redirection controllers (S2C, TAPF, ARC, SRL) based on the user's environment. Unlike traditional methods, SRC, trained by reinforcement learning, switches controllers in real time to optimize the RDW experience. Simulations and user studies demonstrate a significant reduction in resets compared to conventional controllers, and heat-map visualizations of SRC's decision making show that it effectively exploits the advantages of each strategy. The result is a more immersive and seamless RDW experience that showcases SRC's contextual design.
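
The switching idea can be sketched as follows; the controller stubs, state encoding, and policy interface below are placeholders for illustration, not the authors' implementation.

```python
# Minimal sketch of SRC-style controller switching: each frame, a
# learned policy picks one of four RDW controllers, which then
# computes the redirection gain applied to the user.
from typing import Callable, Dict

State = dict  # assumed encoding of user pose + physical-space features

def s2c(state: State) -> float:   # steer-to-center (stub gain)
    return 1.1

def tapf(state: State) -> float:  # thresholded APF (stub gain)
    return 1.2

def arc(state: State) -> float:   # alignment-based (stub gain)
    return 0.9

def srl(state: State) -> float:   # learned redirection (stub gain)
    return 1.0

CONTROLLERS: Dict[str, Callable[[State], float]] = {
    "S2C": s2c, "TAPF": tapf, "ARC": arc, "SRL": srl,
}

def step(policy: Callable[[State], str], state: State) -> float:
    """The RL policy selects a controller; the controller computes
    the redirection gain actually applied this frame."""
    name = policy(state)  # e.g. returns "ARC"
    return CONTROLLERS[name](state)
```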

The Differential Effects of Multisensory Attentional Cues on Task Performance in VR Depending on the Level of Cognitive Load and Cognitive Capacity (Journal: P1040)

Sihyun Jeong, KAIST; Jinwook Kim, KAIST; Jeongmi Lee, KAIST

Devising attentional cues that optimize VR task performance has become crucial. We investigated how the effects of attentional cues on task performance are modulated by the levels of cognitive load and cognitive capacity. Participants engaged in dual tasks under different levels of cognitive load while an attentional cue (visual, tactile, or visuotactile) was presented. The results showed that multisensory attentional cues are generally more effective than unisensory cues in enhancing task performance, but the benefit of multisensory cues increases with higher cognitive load and lower cognitive capacity. These findings provide practical implications for designing attentional cues to enhance VR task performance.

Evaluating Text Reading Speed in VR Scenes and 3D Particle Visualizations (Journal: P1072)

Johannes Novotny, VRVis Zentrum für Virtual Reality und Visualisierung; David H. Laidlaw, Brown University

We report on the effects of text size and display parameters on reading speed and legibility in three state-of-the-art VR displays. Two are head-mounted displays, and one is Brown’s CAVE-like YURT. Our two perception experiments uncover limits where reading speed declines as the text size approaches the so-called critical print sizes (CPS) of individual displays. We observe an inverse correlation between display resolution and CPS, revealing hardware-specific limitations on legibility beyond display resolution, making CPS an effective benchmark for VR devices. Additionally, we report on the effects of text panel placement, orientation, and occlusion-reducing rendering methods on reading speeds in volumetric particle visualization.

Projection Mapping under Environmental Lighting by Replacing Room Lights with Heterogeneous Projectors (Journal: P1121)

Masaki Takeuchi, Osaka University; Hiroki Kusuyama, Osaka University; Daisuke Iwai, Osaka University; Kosuke Sato, Osaka University

Projection mapping (PM) typically requires a dark environment to achieve high-quality projections, limiting its practicality. In this paper, we overcome this limitation by replacing conventional room lighting with heterogeneous projectors. These projectors replicate environmental lighting by selectively illuminating the scene, excluding the projection target. Our contributions include a distributed projector optimization framework designed to effectively replicate environmental lighting and the incorporation of a large-aperture projector to reduce high-luminance emitted rays and hard shadows. Our findings demonstrate that our projector-based lighting system significantly enhances the contrast and realism of PM results.

Swift-Eye: Towards Anti-blink Pupil Tracking for Precise and Robust High-Frequency Near-Eye Movement Analysis with Event Cameras (Journal: P1220)

Tongyu Zhang, Shandong University; Yiran Shen, Shandong University; Guangrong Zhao, School of Software; Lin Wang, HKUST, GZ; Xiaoming Chen, Beijing Technology and Business University; Lu Bai, Shandong University; Yuanfeng Zhou, Shandong University

In this paper, we propose Swift-Eye, an offline pupil estimation and tracking framework that is precise and robust, supporting high-frequency near-eye movement analysis even when the pupil region is partially occluded. Swift-Eye builds on emerging event cameras to capture high-speed eye movement at high temporal resolution. A series of bespoke components then generates high-quality near-eye movement video at frame rates above one kilohertz and handles occlusion of the pupil caused by involuntary blinks. In extensive evaluations on EV-Eye, a large-scale public dataset for event-camera-based eye tracking, Swift-Eye shows high robustness against significant occlusion.

Best Papers - Honorable Mentions

Robust Dual-Modal Speech Keyword Spotting for XR Headsets (Journal: P1317)

Zhuojiang Cai, Beihang University; Yuhan Ma, Beihang University; Feng Lu, Beihang University

While speech interaction finds widespread utility within the Extended Reality (XR) domain, conventional vocal speech keyword spotting systems continue to grapple with formidable challenges, including suboptimal performance in noisy environments, impracticality in situations requiring silence, and susceptibility to inadvertent activations when others speak nearby. These challenges, however, can potentially be surmounted through the cost-effective fusion of voice and lip movement information. Consequently, we propose a novel vocal-echoic dual-modal keyword spotting system for XR headsets. Experimental results demonstrate the promising performance of this dual-modal system across various challenging scenarios.

Instant Segmentation and Fitting of Excavations in Subsurface Utility Engineering (Journal: P1561)

Marco Stranner, Institute for Computer Graphics and Vision; Philipp Fleck, Graz University of Technology; Dieter Schmalstieg, Graz University of Technology; Clemens Arth, Graz University of Technology

AR for subsurface utility engineering (SUE) has benefited from recent advances in sensing hardware. In this work, we present a novel approach to automate the process of aligning existing SUE databases with measurements taken during excavation works, with the potential to correct the deviation from the as-planned to as-built documentation. Our segmentation algorithm performs infrastructure segmentation based on the live capture of an excavation on site. Our fitting approach correlates the inferred position and orientation with the existing digital plan and registers the as-planned model into the as-built state. We show the results of our proposed method on both synthetic data and a set of real excavations.

Analyzing user behaviour patterns in a cross-virtuality immersive analytics system (Journal: P1631)

Mohammad Rajabi Seraji, Simon Fraser University; Parastoo Piray, Simon Fraser University; Vahid Zahednejad, Simon Fraser University; Wolfgang Stuerzlinger, Simon Fraser University

Motivated by the recently discovered benefits of Cross-virtuality Immersive Analytics (XVA) systems, we developed HybridAxes, which allows users to transition seamlessly between the desktop and a virtual environment. Our user study shows that users prefer AR for exploratory tasks and the desktop for detailed tasks, indicating that these modes of an XVA system complement each other in enhancing the data analysis experience. Despite minor challenges in mode-switching, the system was well-received for its user-friendliness and engagement. Our research offers design insights, valuable directions for future cross-virtuality visual analytics systems, and identifies potential areas for further study.

Modeling the Impact of Head-Body Rotations on Audio-Visual Spatial Perception for Virtual Reality Applications (Journal: P1307)

Edurne Bernal-Berdun, Universidad de Zaragoza - I3A; Mateo Vallejo, Universidad de Zaragoza - I3A; Qi Sun, New York University; Ana Serrano, Universidad de Zaragoza; Diego Gutierrez, Universidad de Zaragoza

Proper synchronization of visual and auditory feedback is crucial for perceiving a coherent and immersive virtual reality (VR) experience. We investigate how audio-visual offsets and rotation velocities impact users' directional localization acuity during natural head-body rotations. Using psychometric functions, we model perceptual disparities and identify offset detection thresholds. Results show that target localization accuracy is affected by perceptual audio-visual disparities during head-body rotations when there is stimuli-head relative motion. We showcase with a VR game how a compensatory approach based on our study can enhance localization accuracy by up to 40%, and we provide guidelines for VR content creation.
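
As a worked illustration of the threshold-fitting step, a cumulative Gaussian can be fit to detection responses and the detection threshold read off the fitted curve; the data points and parameter values below are invented for illustration only, not the paper's measurements.

```python
# Fit a psychometric function (cumulative Gaussian) to detection rates
# over audio-visual offsets and estimate a 75%-detection threshold.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

offsets = np.array([0, 10, 20, 30, 40, 60])                # ms (made up)
p_detect = np.array([0.05, 0.20, 0.50, 0.70, 0.85, 0.97])  # (made up)

(mu, sigma), _ = curve_fit(psychometric, offsets, p_detect, p0=(20, 10))
threshold_75 = mu + sigma * norm.ppf(0.75)  # offset detected 75% of time
print(f"75% detection threshold: {threshold_75:.1f} ms")
```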

With or Without You: Effect of Contextual and Responsive Crowds on VR-based Crowd Motion Capture (Journal: P1727)

Tairan Yin, INRIA; Ludovic Hoyet, Inria; Marc Christie, IRISA; Marie-Paule R. Cani, Ecole Polytechnique, IP Paris; Julien Pettré, Inria

Capturing real crowd motions is challenging. VR helps by immersing users in simulated or motion-capture-based crowds. Users' motions can extend the crowd size using Record-and-Replay (2R), but these methods have limitations affecting data quality. We introduce contextual crowds, combining crowd simulation and 2R for consistent data. We present two strategies: Replace-Record-Replay (3R), where simulated agents are replaced by user data, and Replace-Record-Replay-Responsive (4R), where agents gain responsive capabilities. Evaluated in VR-replicated real-world scenarios, these paradigms yield more natural user behaviors, enhancing captured crowd data consistency.

Towards Co-operative Beaming Displays: Dual Steering Projectors for Extended Projection Volume and Head Orientation Range (Journal: P2043)

Hiroto Aoki, The University of Tokyo; Takumi Tochimoto, Tokyo Institute of Technology; Yuichi Hiroi, Cluster Inc.; Yuta Itoh, The University of Tokyo

This study tackles trade-offs in existing near-eye displays (NEDs) by introducing a beaming display with dual steering projectors. While the traditional NED faces challenges in size, weight, and user limitations, the beaming display separates the NED into a steering projector (SP) and a passive headset. To overcome the issues of a single SP, dual projectors are distributed to extend the supported head orientation range. A geometric model and calibration method for multiple projectors are proposed. The prototype achieves a precision of 1.8–5.7 mm and a delay of 14.46 ms at 1 m, projects images onto the passive headset's 20 mm × 30 mm area, and supports multiple users with improved presentation features.

BiRD: Using Bidirectional Rotation Gain Differences to Redirect Users during Back-and-forth Head Turns in Walking (Journal: P2113)

Sen-Zhe Xu, Tsinghua University; Fiona Xiao Yu Chen, Tsinghua University; Ran Gong, Tsinghua University; Fang-Lue Zhang, Victoria University of Wellington; Song-Hai Zhang, Tsinghua University

Redirected walking (RDW) facilitates user navigation within expansive virtual spaces despite the constraints of limited physical space. It exploits discrepancies between human visual and proprioceptive sensations, known as gains, to remap virtual onto physical environments. In this paper, we explore how to apply rotation gain while the user is walking: when the user reciprocates a previous head rotation, a gain makes them rotate by a different angle, steering them toward a desired physical direction. To apply such gains imperceptibly based on this Bidirectional Rotation gain Difference (BiRD), we conduct measurement and verification experiments on the detection thresholds of rotation gain for reciprocating head rotations during walking. Unlike previous rotation gains, which were measured while users turned in place (standing or sitting), BiRD is measured during walking. Our study offers a critical assessment of the acceptable range of rotational mapping differences across rotational orientations during walking, contributing an effective tool for redirecting users in virtual environments.
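
The core mechanism, applying a gain only on the return leg of a back-and-forth head turn, can be sketched as follows; the gain values and sign conventions are illustrative placeholders, not the thresholds measured in the paper.

```python
# Sketch of a BiRD-style bidirectional rotation gain: the outbound head
# turn is rendered 1:1, while the reciprocating (return) turn is
# amplified or compressed to steer the user's physical heading.
def virtual_yaw_delta(physical_delta, prev_turn_sign, steer_sign,
                      gain_up=1.2, gain_down=0.85):
    """Return the virtual yaw change for a physical yaw change."""
    reversing = physical_delta * prev_turn_sign < 0  # back-and-forth turn
    if not reversing:
        return physical_delta  # no gain on the outbound turn
    # Amplify or compress the return turn depending on which physical
    # direction the user should be steered toward.
    gain = gain_up if physical_delta * steer_sign > 0 else gain_down
    return gain * physical_delta
```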

Try This for Size: Multi-Scale Teleportation in Immersive Virtual Reality (Journal: P1016)

Tim Weissker, RWTH Aachen University; Matthis Franzgrote, RWTH Aachen University; Torsten Wolfgang Kuhlen, RWTH Aachen University

We present three novel teleportation-based techniques that enable users to adjust their own scale while traveling through virtual environments. Our approaches build on the extension of known teleportation workflows and suggest specifying scale adjustments either simultaneously with, as a connected second step after, or separately from the user's new horizontal position. The results of a two-part user study with 30 participants indicate that the simultaneous and connected specification paradigms are both suitable candidates for effective and comfortable multi-scale teleportation with nuanced individual benefits. Scale specification as a separate mode, on the other hand, was considered less beneficial.

Stepping into the Right Shoes: The Effects of User-Matched Avatar Ethnicity and Gender on Sense of Embodiment in Virtual Reality (Journal: P1126)

Tiffany D. Do, University of Central Florida; Camille Isabella Protko, University of Central Florida; Ryan P. McMahan, University of Central Florida

In many consumer VR applications, users embody predefined characters that offer minimal customization options, frequently emphasizing storytelling over choice. We investigated whether matching a user's ethnicity and gender with their virtual self-avatar affects their sense of embodiment in VR. A 2x2 experiment with diverse participants (n=32) showed that matching ethnicity increased the overall sense of embodiment, irrespective of gender, impacting the senses of appearance, response, and ownership. Our findings highlight the significance of avatar-user alignment for a more immersive VR experience.

Breaking the Isolation: Exploring the Impact of Passthrough in Shared Spaces on Player Performance and Experience in VR Exergames (Journal: P1232)

Zixuan Guo, Xi'an Jiaotong-Liverpool University; Hongyu Wang, Xi'an Jiaotong-Liverpool University; Hanxiao Deng, Xi'an Jiaotong-Liverpool University; Wenge Xu, Birmingham City University; Nilufar Baghaei, University of Queensland; Cheng-Hung Lo, Xi'an Jiaotong-Liverpool University; Hai-Ning Liang, Xi'an Jiaotong-Liverpool University

VR exergames boost physical activity but face challenges in shared spaces due to the presence of bystanders. Passthrough in VR enhances players' environmental awareness, offering a promising solution. This work explores its impact on player performance and experience under two factors: Space (Office vs. Corridor) and Passthrough Function (With vs. Without). Results show that Passthrough improves performance and awareness while reducing immersion, especially benefiting players with higher self-consciousness. Players typically favor open spaces when social acceptability is a concern, and Passthrough alleviates such concerns in both shared space types. Our findings offer insights for designing VR experiences in shared environments.

Spatial Contraction Based on Velocity Variation for Natural Walking in Virtual Reality (Journal: P2112)

Sen-Zhe Xu, Tsinghua University; Kui Huang, Tsinghua University; Cheng-Wei Fan, Tsinghua University; Song-Hai Zhang, Tsinghua University

Virtual Reality (VR) offers an immersive 3D digital environment, but enabling natural walking sensations without the constraints of physical space remains a technological challenge. This paper introduces "Spatial Contraction (SC)", an innovative VR locomotion method inspired by the phenomenon of Lorentz contraction in Special Relativity. Like the Lorentz contraction, SC contracts the virtual space along the user's velocity direction in response to velocity variation: the virtual space contracts more when the user's speed is high, whereas minimal or no contraction happens at low speeds. We provide a virtual space transformation method for spatial contraction and optimize the user experience for smoothness and stability. Through SC, VR users can effectively traverse a longer virtual distance with a shorter physical walk. Unlike locomotion gains, SC introduces no inconsistency between the user's proprioception and visual perception. SC is a general locomotion method with no special requirements on VR scenes. Live user studies in various virtual scenarios demonstrate that SC significantly reduces both the number of resets and the physical walking distance users need to cover, and further experiments show that SC can be integrated with translation gain.
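
A minimal sketch of the velocity-dependent contraction idea, assuming a hypothetical contraction curve; the paper's optimized mapping is not reproduced here.

```python
# Contract virtual space along the walking direction as speed grows, so
# one physical step covers more virtual distance at high speed.
import numpy as np

def contraction_factor(speed, v0=0.4, k=0.5):
    """Factor <= 1 that shrinks as speed (m/s) exceeds v0 (assumed curve)."""
    return 1.0 if speed <= v0 else 1.0 / (1.0 + k * (speed - v0))

def virtual_step(physical_step, velocity):
    """Map a physical displacement to a virtual one under contraction."""
    speed = np.linalg.norm(velocity)
    if speed == 0.0:
        return physical_step  # no contraction when standing still
    direction = velocity / speed
    along = np.dot(physical_step, direction) * direction
    across = physical_step - along
    # Space contracted by factor c means the same physical step spans a
    # 1/c longer virtual distance along the walking direction.
    return along / contraction_factor(speed) + across
```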

Best Posters & Honorable Mentions

The IEEE VR Best Poster Awards honor exceptional posters published and presented at the IEEE VR conference. The best poster committee consists of three distinguished members chosen by the Conference Awards Committee and the Poster Chairs; this committee selects the best posters. Posters that receive an award are marked in the program, and authors receive a certificate at the conference.

Best Posters

Aging Naturally: Virtual reality nature vs real-world nature's effects on executive functioning and stress recovery in older adults (ID: P1107)

Sara LoTemplio, Colorado State University; Sharde Johnson, Colorado State University; Michaela Rice, Colorado State University; Rachel Masters, Colorado State University; Sara-Ashley Collins, Colorado State University; Joshua Hofecker, Colorado State University; Jordan Rivera, Colorado State University; Dylan Schreiber, Colorado State University; Victoria Interrante, University of Minnesota; Francisco Raul Ortega, Colorado State University; Deana Davalos, Colorado State University

The development of Alzheimer's disease and related dementias (ADRD) is characterized by a decline in executive functioning (EF), and stress increases the risk of ADRD onset. Previous work has shown that spending time in nature, or in virtual reality nature, can improve EF and stress recovery in younger adults. Yet little work has examined whether these benefits extend to older adults. We examine how spending time in either nature or an equivalently designed virtual reality natural environment affects EF and stress in older adults, compared to a lab control condition.

Tremor Stabilization for Sculpting Assistance in Virtual Reality (ID: P1106)

Layla Erb, Augusta University; Jason Orlosky, Augusta University

This paper presents an exploration of assistive technology for virtual reality (VR) art, such as sculpting and ceramics. For many artists, tremors from Parkinsonian diseases can interfere with molding, carving, cutting, and modeling different mediums when creating new sculptures. To help address this, we have developed a system that algorithmically stabilizes tremors to enhance the artistic experience for creators with physical impairments or movement disorders. In addition, we present a real-time sculpting application that allows us to measure differences between sculpting actions and a target object or shape.

Investigating Incoherent Depth Perception Features in Virtual Reality using Stereoscopic Impostor-Based Rendering (ID: P1041)

Kristoffer Waldow, TH Köln; Lukas Decker, TH Köln; Martin Mišiak, University of Würzburg; Arnulph Fuhrmann, TH Köln; Daniel Roth, Technical University of Munich; Marc Erich Latoschik, University of Würzburg

Depth perception is essential for our daily experiences, aiding in orientation and interaction with our surroundings. Virtual Reality allows us to decouple depth cues, mainly binocular disparity and motion parallax. With fully mesh-based rendering methods, these cues are unproblematic because they originate from the object's underlying geometry. However, manipulating motion parallax, as in stereoscopic impostor-based rendering, raises questions about visual errors and perceived 3-dimensionality. We therefore conducted a user experiment to investigate how varying object sizes affect such visual errors and perceived 3-dimensionality, revealing a significant negative correlation and suggesting new assumptions about visual quality.

How Long Do I Want to Fade Away? The Duration of Fade-To-Black Transitions in Target-Based Discontinuous Travel (Teleportation) (ID: P1121)

Matthias Wölwer, University of Trier; Benjamin Weyers, Trier University; Daniel Zielasko, University of Trier

A fade-to-black animation enhances the transition during teleportation, yet its duration, one of the central parameters, has not been systematically explored. To fill this gap, we conducted a small study to determine a preferred duration. We find a short duration of 0.3 s to be the average preference, contrasting with durations previously used in the literature. This research contributes to the systematic parameterization of discontinuous travel.

The Influence of Metaverse Environment Design on Learning Experiences in Virtual Reality Classes: A Comparative Study (ID: P1305)

Valentina Uribe, Universidad de los Andes; Vivian Gómez, Universidad de los Andes; Pablo Figueroa, Universidad de los Andes

In this study, we investigate learning and the quality of the classroom experience by conducting classes in four metaverse environments: Workrooms, Spatial, Mozilla Hubs, and Arthur. Using questionnaires, we analyze how factors such as avatars, spatial layout, mobility, and additional functions influence concentration, usability, presence, and learning. Despite minimal differences in learning outcomes, significant variations in classroom experience emerged. In particular, metaverses with restricted movement and functions showed heightened immersion, concentration, and presence. Additionally, our findings underscore the beneficial influence of avatars featuring lifelike facial expressions on the overall learning experience.

Brain Dynamics of Balance Loss in Virtual Reality and Real-world Beam Walking (ID: P1141)

Amanda Studnicki, University of Florida; Ahmed Rageeb Ahsan, University of Florida; Eric Ragan, University of Florida; Daniel P. Ferris, University of Florida

Virtual reality (VR) aims to replicate the sensation of a genuine experience through the integration of realism, presence, and embodiment. In this study, we used mobile electroencephalography to quantify differences in anterior cingulate brain activity, an area involved in error monitoring, with and without VR during a challenging balance task to discern the factors contributing to VR's perceptual shortcomings. We found a major delay in the anterior cingulate response to self-generated loss of balance in VR compared to the real world. We also found a robust response in the anterior cingulate when loss of balance was generated by external disturbance.

Best Poster - Honorable Mentions

Evaluation of Monocular and Binocular Contrast Sensitivity on Virtual Reality Head-Mounted Displays (ID: P1298)

Khushi Bhansali, Cornell University; Miguel Lago, U.S. FDA; Ryan Beams, Food and Drug Administration; Chumin Zhao, CDRH, United States Food and Drug Administration

Virtual reality (VR) creates an immersive experience by rendering a pair of graphical views on a head-mounted display (HMD). However, image quality assessment on VR HMDs has been primarily limited to monocular optical bench measurements on a single eyepiece. We begin to bridge the gap between monocular and binocular image quality evaluation by developing a WebXR test platform to perform human observer experiments. Specifically, monocular and binocular contrast sensitivity functions (CSFs) are obtained using varied interpupillary distance (IPD) conditions. A combination of optical image quality characteristics and binocular summation can potentially predict the binocular contrast sensitivity on VR HMDs.

AgileFingers: Authoring AR Character Animation Through Hierarchical and Embodied Hand Gestures (ID: P1288)

Yue Lin, The Hong Kong University of Science and Technology (Guangzhou); Yudong Huang, The Hong Kong University of Science and Technology (Guangzhou); David Yip, The Hong Kong University of Science and Technology; Zeyu Wang, The Hong Kong University of Science and Technology (Guangzhou)

We present AgileFingers, a hand gesture-based solution for authoring AR character animation on a mobile device. Our work initially categorizes four major types of animals under Vertebrata. We conducted a formative study on how users construct hierarchical relationships in full-body skeletal animation and on potential hand structure mapping rules. Informed by the study, we developed a hierarchical segmented control system that enables novice users to manipulate full-body 3D characters sequentially with unimanual gestures. Our user study reveals the ease of use, intuitiveness, and high adaptability of the AgileFingers system across various characters.

Repeat Body-Ownership Illusions in Commodity Virtual Reality (ID: P1215)

Pauline W Cha, Davidson College; Tabitha C. Peck, Davidson College

Virtual self-avatars have been shown to produce the Proteus effect; however, limited work investigates the subjective sense of embodiment using commodity virtual reality systems. In this work, we present results from a pilot experiment in which participants were given a self-avatar in a simple virtual experience while wearing a cardboard head-mounted display. Participants then repeated the experience five days later. Overall, subjective embodiment scores were similar to those reported for experiences using higher-fidelity systems. However, the subjective sense of embodiment decreased significantly from trial one to trial two.

Evaluation of Shared-Gaze Visualizations for Virtual Assembly Tasks (ID: P1060)

Daniel Alexander Delgado, University of Florida; Jaime Ruiz, University of Florida

Shared-gaze visualizations (SGV) allow collocated collaborators to understand each other's attention and intentions while working jointly in an augmented reality setting. However, prior work has overlooked users' control and privacy over how gaze information is shared between collaborators. In this abstract, we examine two methods for visualizing shared gaze between collaborators: gaze-hover and gaze-trigger. We compare the methods with existing solutions through a paired-user evaluation study in which participants complete a virtual assembly task. Finally, we contribute an understanding of user perceptions, preferences, and design implications of shared-gaze visualizations in augmented reality.

Redirected Walking vs. Omni-Directional Treadmills: An Evaluation of Presence (ID: P1301)

Raiffa Syamil, University of Central Florida; Mahdi Azmandian, Sony Interactive Entertainment; Sergio Casas-Yrurzum, University of Valencia; Pedro Morillo, University of Valencia; Carolina Cruz-Neira, University of Central Florida

Omni-Directional Treadmills (ODTs) and Redirected Walking (RDW) both seem suitable for eliciting presence through a full-body walking experience; however, each presents unique mechanisms that can affect users' presence, comfort, and overall preference. To measure this effect, we conducted a counterbalanced within-subjects user study with 20 participants. Participants wore a wireless VR headset and experienced a tour of a virtual art museum, once using RDW and once using a passive, slip-based ODT. Both solutions elicited similar levels of presence; however, RDW was perceived as more natural and was the participants' preferred choice.

Designing Non-Humanoid Virtual Assistants for Task-Oriented AR Environments (ID: P1094)

Bettina Schlager, Columbia University; Steven Feiner, Columbia University

In task-oriented Augmented Reality (AR), humanoid Embodied Conversational Agents can enhance the feeling of social presence and reduce mental workload. Yet, such agents can also introduce social biases and lead to distractions. This presents a challenge for AR applications that require the user to concentrate mainly on a task environment. To address this, we introduce a non-humanoid virtual assistant designed for minimal visual intrusion in AR. Our approach aims to enhance a user's focus on the tasks they need to perform. We explain our design choices based on previously published guidelines and describe our prototype implemented for an optical see-through headset.

Best Research Demos & Honorable Mentions

The IEEE VR Best Research Demo Awards honor exceptional research demos published and presented at the IEEE VR conference. The IEEE VR Research Demonstration Chairs rank the accepted demos and recommend approximately 10% of all demos for an award. The best research demo committee consists of three distinguished members chosen by the Conference Awards Committee Chairs and the Research Demonstration Chairs. This committee selects one demo for the Best Research Demo Award and one for the Honorable Mention Award. The corresponding authors receive a certificate at the conference.

Best Research Demo

Navigating Realities: Assessing Cross-Reality Transitions Through a Spatial Memory Game in VR and AR Environments (ID: PO1015)

Nico Feld, Trier University; Pauline Bimberg, University of Trier; Benjamin Weyers, Trier University; Daniel Zielasko, University of Trier

This tech demo offers an immersive exploration of the most prominent scene transitions within the Reality-Virtuality Continuum (RVC). It delves into the seamless integration of real and virtual worlds, showcasing a spectrum of environments ranging from entirely real to fully virtual and various transitions to switch between them. Our innovative approach centers around an engaging cross-environmental spatial memory game. This game is not just a playful experience but a carefully crafted task d...

Best Research Demo - Honorable Mention

GPT-VR Nexus: ChatGPT-Powered Immersive Virtual Reality Experience (ID: PO1060)

Jiangong Chen, Pennsylvania State University; Tian Lan, George Washington University; Bin Li, Pennsylvania State University

The fusion of generative Artificial Intelligence (AI) like ChatGPT and Virtual Reality (VR) can unlock new interaction capabilities through natural language. We introduce GPT-VR Nexus, a novel framework creating a truly immersive VR experience driven by an underlying generative AI engine. It employs a two-step prompt strategy and robust post-processing procedures, without fine-tuning the complex AI model. Our experimental results show quick responses to various user audio requests/inputs.

Best 3DUI Contest Demos & Honorable Mentions

The IEEE VR Best 3DUI Contest Submission Awards honor exceptional 3DUI contest submissions published and presented at the IEEE VR conference. The 3DUI Contest Chairs select one submission for the Best 3DUI Contest Submission Award and one for the Honorable Mention Award. The final decision is based on a combination of the reviews' scores, scores from experts testing the contest submissions during the conference, and the audience scores; the team with the highest combined score wins. Authors receive a certificate at the conference.

Best 3DUI Contest Demo

Best 3DUI Contest Demo - Honorable Mention

Beyond Euclid: An Educational Virtual Reality Journey into Spherical Geometry (ID: 1002)

Agapi Chrysanthakopoulou, University of Patras; Theofilos Chrysikopoulos, University of Patras; Leandros Nikolaos Arvanitopoulos, University of Patras; Kostantinos Moustakas, University of Patras

Best Doctoral Consortium Paper

The IEEE VR Best Doctoral Consortium (DC) Paper Awards honor exceptional DC papers published and presented at the IEEE VR conference. The best DC paper committee consists of three distinguished members chosen by the Conference Awards Committee Chairs and the DC Chairs. The DC Chairs recommend 20% of all DC papers for the award, and the committee selects one of these papers for the Best DC Paper Award. The DC paper that receives the award is marked in the program, and the author receives a certificate at the conference.

Best Doctoral Consortium Paper

Toward Realistic 3D Avatar Generation with Dynamic 3D Gaussian Splatting for AR/VR Communication (ID: 1098)

Author: Hail Song, Korea Advanced Institute of Science and Technology, Republic of Korea
Mentor: Jason Orlosky

Realistic avatars are fundamental for immersive experiences in Augmented Reality (AR) and Virtual Reality (VR) environments. In this work, we introduce a novel approach to avatar generation that combines 3D Gaussian Splatting with the parametric body model SMPL. This methodology overcomes the inefficiencies of traditional image/video-based avatar creation, which is often slow and requires substantial computing resources. Using 3D Gaussian Splatting to represent human avatars offers realistic, real-time rendering for AR/VR applications. We also conducted preliminary tests to verify the quality of avatar representation using 3D Gaussian Splatting. These tests, presented alongside outcomes from existing methods, demonstrate the potential of this research to contribute significantly to the creation of realistic avatars in the future. Additionally, we present several key discussions, essential for developing and evaluating the system, that provide valuable insights for future research.

Best Paper Presentations & Honorable Mentions

The IEEE VR Best Presentation Awards honor excellent, interesting, and stimulating presentations of research papers at the IEEE VR conference. During the conference, the audience can vote for each presentation they think deserves an award. Approximately 3% of presentations, those with the highest number of votes, receive an award. Among these, the top 1% by number of votes receive a Best Presentation Award, while the remaining presentations receive an Honorable Mention Award.

Best Paper Presentations

The Effects of Auditory, Visual, and Cognitive Distractions on Cybersickness in Virtual Reality (ID: P3014)

Rohith Venkatakrishnan, School of Computing, Clemson University, USA; Roshan Venkatakrishnan, School of Computing, Clemson University, USA; Balagopal Raveendranath, Department of Psychology, Clemson University, USA; Dawn M. Sarno, Department of Psychology, Clemson University, USA; Andrew C. Robb, School of Computing, Clemson University, USA; Wen-Chieh Lin, Department of Computer Science, National Yang Ming Chiao Tung University, Taiwan; Sabarish V. Babu, School of Computing, Clemson University, USA

Cybersickness (CS) is one of the challenges that has hindered the widespread adoption of Virtual Reality (VR). Consequently, researchers continue to explore novel means to mitigate the undesirable effects associated with this affliction, one that may require a combination of remedies as opposed to a solitary stratagem. Inspired by research probing the use of distractions as a means to control pain, we investigated the efficacy of this countermeasure against CS, studying how the introduction of time-gated distractions affects this malady during a virtual experience featuring active exploration. Downstream of this, we discuss how other aspects of the VR experience are affected by this intervention. We discuss the results of a between-subjects study manipulating the presence, sensory modality, and nature of periodic and short-lived (5-12 seconds) distractor stimuli across 4 experimental conditions: (1) no distractors (ND); (2) auditory distractors (AD); (3) visual distractors (VD); (4) cognitive distractors (CD). Two of these conditions (VD and AD) formed a yoked control design wherein every matched pair of ‘seers’ and ‘hearers’ was periodically exposed to distractors that were identical in terms of content, temporality, duration, and sequence. In the CD condition, each participant had to periodically perform a 2-back working memory task, the duration and temporality of which were matched to the distractors presented in each matched pair of the yoked conditions. These three conditions were compared to the baseline ND group featuring no distractions. Results indicated that reported sickness levels were lower in all three distraction groups than in the control group. The intervention also increased the amount of time users were able to endure the VR simulation, without causing detriments to spatial memory or virtual travel efficiency. Overall, it appears that it may be possible to make users less consciously aware of and bothered by the symptoms of CS, thereby reducing its perceived severity.

The Effects of Secondary Task Demands on Cybersickness in Active Exploration Virtual Reality Experiences (ID: P1769)

Rohith Venkatakrishnan, Clemson University; Roshan Venkatakrishnan, Clemson University; Balagopal Raveendranath, Clemson University; Ryan Canales, Clemson University; Dawn M. Sarno, Clemson University; Andrew Robb, Clemson University; Wen-Chieh Lin, National Yang Ming Chiao Tung University; Sabarish V. Babu, Clemson University

During navigation, users often engage in additional tasks that require attentional resources. This work investigated how the attentional demands of secondary tasks performed during exploration affect cybersickness in virtual reality. We manipulated a secondary task's demand across two levels and studied its effects on sickness in two provocative experiences. Results revealed that increased secondary task demand generally exacerbated sickness levels, further vitiating spatial memory and navigational performance. In light of research demonstrating the use of distractions to counteract sickness, our results suggest the existence of a threshold beyond which distractions can reverse from being sickness-reducing to sickness-inducing.

Best Paper Presentation - Honorable Mentions

BOXRR-23: 4.7 Million Motion Capture Recordings from 105,000 XR Users (ID: P1654)

Vivek C Nair, UC Berkeley; Wenbo Guo, Purdue University; Rui Wang, Carnegie Mellon University; James F. O'Brien, UC Berkeley; Louis Rosenberg, Unanimous AI; Dawn Song, UC Berkeley

Extended reality (XR) devices such as the Meta Quest and Apple Vision Pro have seen a recent surge in attention, with motion tracking "telemetry" data lying at the core of nearly all XR and metaverse experiences. Researchers are just beginning to understand the implications of this data for security, privacy, usability, and more, but currently lack large-scale human motion datasets to study. The BOXRR-23 dataset contains 4,717,215 motion capture recordings, voluntarily submitted by 105,852 XR device users from over 50 countries. BOXRR-23 is over 200 times larger than the largest existing motion capture research dataset and uses a new, highly efficient and purpose-built XR Open Recording (XROR) file format.

Investigating the Effects of Avatarization and Interaction Techniques on Near-field Mixed Reality Interactions with Physical Components (ID: P1482)

Roshan Venkatakrishnan, Clemson University; Rohith Venkatakrishnan, Clemson University; Ryan Canales, Clemson University; Balagopal Raveendranath, Clemson University; Christopher Pagano, Clemson University; Andrew Robb, Clemson University; Wen-Chieh Lin, National Yang Ming Chiao Tung University; Sabarish V. Babu, Clemson University

Mixed reality experiences typically involve users interacting with a combination of virtual and physical components. To understand how such interactions can be improved, we investigated how avatarization, the physicality of the interacting components, and interaction techniques affect the user experience. Results indicate that accuracy is higher when the components are virtual rather than physical, owing to the increased salience of task-relevant information. Furthermore, the relationship between avatarization and interaction technique dictates how usable mixed reality interactions are deemed to be. This study provides key insights for optimizing mixed reality interactions toward immersive and effective user experiences.

EyeShadows: Peripheral Virtual Copies for Rapid Gaze Selection and Interaction (ID: P1153)

Jason Orlosky, Augusta University; Chang Liu, Kyoto University Hospital; Kenya Sakamoto, Osaka University; Ludwig Sidenmark, University of Toronto; Adam Mansour, Augusta University

In this paper, we present EyeShadows, an eye gaze-based selection system that takes advantage of peripheral copies of items that allow for quick selection and manipulation of an object or corresponding menus. This method is compatible with a variety of different selection tasks and controllable items, avoids the Midas touch problem, does not clutter the virtual environment, and is context sensitive. We have implemented and refined this selection tool for VR and AR, including testing with optical and video see-through displays. We demonstrate that EyeShadows can also be used for a wide range of AR and VR applications, including manipulation of sliders or analog elements.
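
A toy sketch of the peripheral-copy idea follows; the ring layout and hit radius are invented parameters for illustration, not EyeShadows' actual design.

```python
# Lay out small copies ("shadows") of selectable items around the
# current gaze point, then select whichever copy the gaze lands on.
import math

def layout_shadows(items, gaze_xy, ring_radius=0.08):
    """Place one copy per item on a ring around the gaze point."""
    shadows = []
    for i, item in enumerate(items):
        angle = 2 * math.pi * i / len(items)
        pos = (gaze_xy[0] + ring_radius * math.cos(angle),
               gaze_xy[1] + ring_radius * math.sin(angle))
        shadows.append((pos, item))
    return shadows

def pick(shadows, gaze_xy, hit_radius=0.02):
    """Return the item whose shadow the gaze currently falls on."""
    for (x, y), item in shadows:
        if math.hypot(gaze_xy[0] - x, gaze_xy[1] - y) < hit_radius:
            return item
    return None
```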

An Empirical Evaluation of the Calibration of Auditory Distance Perception under Different Levels of Virtual Environment Visibilities (ID: P1669)

Wan-Yi Lin, National Yang Ming Chiao Tung University; Rohith Venkatakrishnan, University of Florida; Roshan Venkatakrishnan, University of Florida; Sabarish V. Babu, Clemson University; Christopher Pagano, Clemson University; Wen-Chieh Lin, National Yang Ming Chiao Tung University

We investigated whether perceptual learning through carryover effects of calibration occurs at different levels of a virtual environment's visibility. Users performed an auditory depth judgment task over several trials in which they walked to where they perceived an aural sound to be. This task was performed sequentially in the pretest, calibration, and posttest phases. Feedback on the perceptual accuracy of distance estimates was provided only in the calibration phase. We found that auditory depth estimates, obtained using an absolute measure, can be calibrated to become more accurate, and that environments visible enough to reveal their extent may contain visual information that users attune to in scaling aurally perceived depth.


©IEEE VR Conference 2024, Sponsored by the IEEE Computer Society and the Visualization and Graphics Technical Committee