
IEEE VR 2025 Conference Awards
Awards Chairs
- Kiyoshi Kiyokawa ‒ Nara Institute of Science and Technology, Japan
- Frank Steinicke ‒ Universität Hamburg, Germany
- Stefania Serafin ‒ Aalborg University, Denmark
- Amine Chellali ‒ Université d'Evry Paris-Saclay, France
Best Papers & Honorable Mention for Best Papers
The IEEE VR Best Paper Awards honor exceptional papers published and presented at the IEEE VR conference. During the review process, the program committee chairs will choose approximately 3% of submissions to receive an award. Among these chosen submissions, the separate Conference Awards Selection Committee will select the best submissions to receive a Best Paper Award (ca. 1% of total submissions), while a selection of the remaining submissions will receive an Honorable Mention Award. Papers that receive an award will be marked in the program, and authors will receive a certificate at the conference.
Best Papers
Seeing is not Thinking: Testing Capabilities of VR to Promote Perspective-Taking
Eugene Kukshinov, University of Waterloo; Federica Gini, University of Trento; Anchit Mishra, University of Waterloo; Nick Bowman, Syracuse University; Brendan Rooney, University College Dublin; Lennart E. Nacke, University of Waterloo
Virtual Reality (VR) offers immersive experiences and is often considered a tool for fostering perspective-taking (PT) by enabling users to see through another's eyes. However, our study reveals that VR's commonly assumed ability to facilitate PT is overstated. In a 2x2 design, we examined the effects of point of view (first-person, 1PP vs. third-person, 3PP) and a PT task on users' PT, understood as thinking about others. Results showed that PT significantly improved only when participants were explicitly tasked with considering another's perspective. These findings highlight the importance of intentional design, emphasizing targeted tasks over mere viewpoint shifts to achieve authentic perspective-taking in VR.
Peripheral Teleportation: A Rest Frame Design to Mitigate Cybersickness During Virtual Locomotion
Tongyu Nie, University of Minnesota; Courtney Hutton Pospick, University of Minnesota; Ville Cantory, University of Minnesota; Danhua Zhang, University of Minnesota; Jasmine Joyce DeGuzman, University of Central Florida; Victoria Interrante, University of Minnesota; Isayas Berhe Adhanom, Texas State University; Evan Suma Rosenberg, University of Minnesota
Mitigating cybersickness can improve the usability of VR and increase its adoption. We propose peripheral teleportation, a novel technique that creates a rest frame (RF) in the user's peripheral vision. Specifically, the peripheral region is rendered by a pair of RF cameras whose transforms are updated by the user's physical motion. We apply alternating teleportations or snap turns to the RF cameras to keep them close to the current viewpoint. We compared peripheral teleportation with a black FOV restrictor and an unrestricted control condition. The results showed that peripheral teleportation significantly reduced discomfort and enabled participants to stay immersed longer.
Look at the Sky: Sky-aware Efficient 3D Gaussian Splatting in the Wild
Yuze Wang, Beihang University; Junyi Wang, Shandong University; Ruicheng Gao, Beihang University; Yansong Qu, Xiamen University; Wantong Duan, Beihang University; Shuo Yang, Beihang University; Yue Qi, Beihang University
Photographs taken in unconstrained tourist environments often present challenges for accurate scene reconstruction due to variable appearances and transient occlusions. With the advancement of 3DGS, some methods have enabled real-time rendering by reconstructing scenes from unconstrained photo collections. However, the rapid convergence of 3DGS is misaligned with the slower convergence of the neural network-based appearance encoder and transient mask predictor, hindering reconstruction efficiency. To address this, we propose a novel sky-aware framework for in-the-wild scene reconstruction. Extensive experiments on multiple datasets demonstrate that the proposed framework outperforms existing methods in novel view and appearance synthesis.
Setting the Stage: Using Virtual Reality to Assess the Effects of Music Performance Anxiety in Pianists
Nicalia Elise Thompson, Goldsmiths; Xueni Pan, Goldsmiths; Maria Herrojo Ruiz, Goldsmiths
Music Performance Anxiety (MPA) is highly prevalent among musicians and often debilitating. We systematically assessed the effect of MPA on performance, physiology, and anxiety ratings using virtual reality (VR) to induce heightened MPA levels in pianists. Twenty pianists completed a performance task under two conditions: a public "audition" and a private "studio" rehearsal. Participants experienced VR pre-performance settings before transitioning to live piano performance in the real world. Pianists in the Audition condition experienced higher somatic anxiety and improved their performance accuracy over time, with a reduced error rate. Additionally, their performances were faster and featured increased note intensity.
Can I Get There? Negotiated User-to-User Teleportations in Social VR
Miao Wang, Beihang University; Wentong Shu, Beihang University; Yi-Jun Li, Tsinghua University; Wanwan Li, University of South Florida
The growing adoption of social virtual reality platforms underscores the importance of safeguarding personal VR space to maintain user privacy and security. Teleportation facilitates user engagement but can also inadvertently intrude upon personal VR space. This paper introduces three innovative negotiated teleportation techniques designed to secure user-to-user teleportation and protect personal space privacy, all under a unified small-group development framework. We conducted a user study with 20 participants who performed social tasks within a virtual campus environment. The findings demonstrate that our techniques significantly enhance privacy protection and alleviate anxiety associated with unwanted proximity in social VR.
How Collaboration Context and Personality Traits Shape the Social Norms of Human-to-Avatar Identity Representation
Seoyoung Kang, KAIST; Boram Yoon, KI-ITC ARRC, KAIST; Kangsoo Kim, University of Calgary; Jonathan Gratch, University of Southern California; Woontack Woo, KAIST
As avatars evolve from digital representations into identity extensions, they enable unprecedented self-expression beyond physical limitations. Our survey of 150 participants investigated social norms surrounding avatar modifications in various contexts. Results show modifications are viewed more favorably from a partner's perspective, especially for changeable attributes, but are less acceptable in professional settings. Individuals with high self-monitoring are more resistant to changes, while those with higher Machiavellian traits show greater acceptance. These findings suggest creating context-sensitive customization options that balance core identity elements with personality-driven preferences while respecting social norms.
Avatars, Should We Look at Them Directly or through a Mirror?: Effects of Avatar Display Method on Sense of Embodiment and Gaze
Kizashi Nakano
In this study, using a head-mounted display with an extended downward field of view (FOV) and an eye-tracking function, we investigated how the method of presenting the avatar body, which cannot be easily viewed with a limited downward FOV, affects the sense of embodiment, presence, task difficulty, the speed of reaching tasks, and head behavior patterns.
Best Papers - Honorable Mentions
Enhancing Patient Acceptance of Robotic Ultrasound through Conversational Virtual Agent and Immersive Visualizations
Tianyu Song, Technical University of Munich; Felix Pabst, Technical University of Munich; Ulrich Eck, Technical University of Munich; Nassir Navab, Technical University of Munich
Robotic ultrasound systems can enhance medical diagnostics, but patient acceptance is a challenge. We propose a system combining an AI-powered conversational virtual agent with three mixed reality visualizations to improve trust and comfort. The virtual agent, powered by a large language model, engages in natural conversations and guides the ultrasound robot, enhancing interaction reliability. The visualizations include augmented reality, augmented virtuality, and fully immersive virtual reality, each designed to create patient-friendly experiences. A user study demonstrated significant improvements in trust and acceptance, offering valuable insights for designing mixed reality and virtual agents in autonomous medical procedures.
Self-Similarity Beats Motor Control in Augmented Reality Body Weight Perception
Marie Luisa Fiedler, University of Würzburg; Mario Botsch, TU Dortmund University; Carolin Wienrich, University of Würzburg; Marc Erich Latoschik, University of Würzburg
Our work examines how self-similarity and motor control affect sense of embodiment, self-identification, and body weight perception in Augmented Reality (AR). In a 2x2 mixed design experiment, 60 participants interacted with synchronously or independently moving virtual humans, either with self-similar or generic appearance, across two AR sessions. Results show that self-similarity enhances sense of embodiment, self-identification, and weight estimation accuracy, while motor control effects were weaker than in similar VR studies. Participants' body weight, self-esteem, and body shape concerns also impacted estimates. These findings deepen understanding of AR body weight perception, highlighting real-world coherence as a key factor.
Investigating the Impact of Video Pass-Through Embodiment on Presence and Performance in Virtual Reality
Kristoffer Waldow, TH Köln; Constantin Kleinbeck, Technical University of Munich; Arnulph Fuhrmann, TH Köln; Daniel Roth, Technical University of Munich
Creating a compelling sense of presence and embodiment can enhance the user experience in VR. Traditional approaches use personalized or video self-avatars but require external hardware and focus mainly on hand representations. This paper introduces video Pass-Through Embodiment (PTE), a method leveraging per-eye depth maps from Head-Mounted Displays to integrate users' real bodies into VR environments without additional hardware. In a study with 40 participants performing a sorting task, PTE enhanced presence and embodiment despite minor visual artifacts, with no negative impact on performance, cognitive load, or VR sickness. PTE therefore offers a practical alternative to avatar-based methods in VR.
Illuminating the Scene: How Virtual Environments and Learning Modes Shape Film Lighting Mastery in Virtual Reality
Zheng Wei, The Hong Kong University of Science and Technology; Jia Sun, The Hong Kong University of Science and Technology (Guangzhou); Junxiang Liao, The Hong Kong University of Science and Technology (Guangzhou); Lik-Hang Lee, The Hong Kong Polytechnic University; Chan In Devin Sio, The Hong Kong Polytechnic University; Pan Hui, The Hong Kong University of Science and Technology; Huamin Qu, The Hong Kong University of Science and Technology; Wai Tong, Texas A&M University; Xian Xu, The Hong Kong University of Science and Technology
This study explores the impact of virtual environments and collaboration on learning film lighting techniques in VR education. A 3×2 factorial experiment with 36 participants examined three environments: baseline, dynamic beach, and familiar office. Results revealed the beach heightened engagement but increased frustration for individuals, while team learning in the office reduced frustration and improved collaboration. Team-based learning excelled in the baseline, while individuals performed better in challenging settings. These findings offer insights into optimizing VR environments to enhance learning outcomes in both individual and collaborative settings.
Reaction Time as a Proxy for Presence in Mixed Reality Environments with Break-In Presence
Yasra Chandio, University of Massachusetts Amherst; Victoria Interrante, University of Minnesota; Fatima Muhammad Anwar, University of Massachusetts Amherst
Distractions in mixed reality (MR) environments affect presence, reaction time, cognitive load, and Break in Presence (BIP), where attention shifts from the virtual to the real world. While prior work has examined these factors individually, their relationship remains underexplored in MR, where users engage with real and virtual stimuli. We present a model examining how congruent and incongruent distractions affect these constructs. In a within-subject study (N=54), participants performed image-sorting tasks under different distraction conditions. Our results show that incongruent distractions increase cognitive load, slow reaction times, and elevate BIP frequency, with presence mediating these effects.
HIPS - A Surgical Virtual Reality Training System for Total Hip Arthroplasty (THA) with Realistic Force Feedback
Mario Lorenz, Chemnitz University of Technology; Maximilian Kaluschke, University of Bremen; Annegret Melzer, Chemnitz University of Technology; Nina Pillen, Youse GmbH; Magdalena Sanrow, Youse GmbH; Andrea Hoffmann, Chemnitz University of Technology; Dennis Schmidt, FAKT Software GmbH; Andre Dettmann, Chemnitz University of Technology; Angelika Bullinger, Chemnitz University of Technology; Jerome Perret, Haption GmbH; Gabriel Zachmann, University of Bremen
VR training is essential for enhancing patient safety in surgical education. Simulators require visual realism and accurate haptic feedback. Although some surgical simulators are available, authentic training tools are limited in high-force areas like total hip arthroplasty (THA). This research introduces an innovative VR simulation of five steps of THA that, for the first time, provides realistic haptic feedback. The simulation employs a novel haptic hammering device, the enhanced Virtuose 6D from Haption, alongside novel algorithms for collision detection, haptic rendering, and material removal. A study involving 17 surgeons across various skill levels confirmed the realism, practicality, and user-friendliness of our pioneering methods.
MineVRA: Exploring the Role of Generative AI-Driven Content Development in XR Environments through a Context-Aware Approach
Lorenzo Stacchio, University of Macerata; Emanuele Balloni, Università Politecnica delle Marche; Marina Paolanti, University of Macerata; Emanuele Frontoni, University of Macerata; Primo Zingaretti, Università Politecnica delle Marche; Roberto Pierdicca, Università Politecnica delle Marche
The convergence of AI, Computer Vision, Graphics, and Extended Reality (XR) drives innovation in immersive environments. A key challenge is creating personalized 3D assets, which is traditionally a manual, time-consuming process. Generative AI (GenAI) offers a promising solution for automated, context-aware content creation. This paper introduces MineVRA, a Human-In-The-Loop XR framework integrating GenAI for adaptive 3D asset generation. A user study compared GenAI-generated objects to Sketchfab assets in immersive contexts. Results highlight GenAI's potential to complement traditional libraries, offering design insights for human-centered XR environments and advancing efficient, personalized 3D content creation.
Fov-GS: Foveated 3D Gaussian Splatting for Dynamic Scenes
Runze Fan, Beihang University; Jian Wu, Beihang University; Xuehuai Shi, Nanjing University of Posts and Telecommunications; Lizhi Zhao, Beihang University; Qixiang Ma, Beihang University; Lili Wang, Beihang University
3D Gaussian Splatting-based methods can achieve photo-realistic rendering at over 100 fps in static scenes, but the speed drops below 10 fps in monocular dynamic scenes. Foveated rendering provides a possible solution to accelerate rendering without compromising visual perceptual quality. However, 3DGS is not directly compatible with foveated rendering. In this paper, we propose Fov-GS, a foveated 3D Gaussian splatting method for rendering dynamic scenes in real time. Experiments demonstrate that our method not only achieves higher rendering quality in the foveal and salient regions compared to the SOTA methods but also dramatically improves rendering performance, achieving up to an 11.33× speedup.
Am I (Not) a Ghost? Leveraging Affordances to Study the Impact of Avatar/Interaction Coherence on Embodiment and Plausibility in Virtual Reality
Florian Dufresne, Arts et Métiers Institute of Technology; Charlotte Dubosc, Arts et Métiers Institute of Technology; Titouan Lefrou, Arts et Métiers Institute of Technology; Geoffrey Gorisse, Arts et Métiers Institute of Technology; Olivier Christmann, Arts et Métiers Institute of Technology
Understanding how avatars suggest possibilities for action, namely affordances, has attracted attention in the human-computer interaction community. Indeed, avatars' aesthetic features may signify false affordances that conflict with users' expectations and impact avatar plausibility. The proposed research initially aimed to explore the use of such affordances to investigate the impact of congruence manipulations on the sense of embodiment. However, participants' behavior appeared to be driven by other processes that took precedence over the perception of the affordances. We unexpectedly manipulated the internal congruence across repeated exposures, causing a rupture in plausibility and significantly lowering embodiment scores and performance.
AR Fitness Dog: The Effects of a User-Mimicking Interactive Virtual Pet on User Experience and Social Presence in Physical Exercise
Hyeongil Nam, University of Calgary; Kisub Lee, Hanyang University; Jong-Il Park, Hanyang University; Kangsoo Kim, University of Calgary
This paper explores the impact of an augmented reality (AR) virtual dog that is physically present and mimics user behavior on exercise experiences in solo and group (pair) settings. A human-subject experiment compared three conditions: a mimicking virtual dog, a randomly behaving virtual dog, and no virtual dog. Results revealed that the mimicking virtual dog significantly enhanced the solo exercise experience by fostering companionship and improved group cohesion in pair settings by acting as a social facilitator. These findings underscore the potential of behavior-mimicking virtual pets to enhance exercise experiences and inform the development of AR-based fitness applications.
Detection Thresholds for Replay and Real-Time Discrepancies in VR Hand Redirection
Kiyu Tanaka, The University of Tokyo; Takuto Nakamura, The University of Tokyo; Keigo Matsumoto, The University of Tokyo; Hideaki Kuzuoka, The University of Tokyo; Takuji Narumi, The University of Tokyo
Hand redirection can modify perception and movement by providing real-time corrections to motor feedback. In the context of motor learning, observing replays of movements can enhance motor function. The application of hand redirection to these replays by making movements appear larger or smaller than they actually are has the potential to improve motor function. We conducted two psychophysical experiments to evaluate how much discrepancy between replayed and actual movements can go unnoticed by users, both with hand redirection (N=20) and without (N=18). Our findings reveal that the detection threshold for discrepancies in replayed movements is significantly different from that for real-time discrepancies.
Saliency-aware Foveated Path Tracing for Virtual Reality Rendering
Yang Gao, Beihang University; Wencan Li, Beihang University; Shiyu Liang, Beihang University; Aimin Hao, Beihang University; Xiaohui Tan, Capital Normal University
Foveated rendering reduces computational load by distributing resources based on the human visual system. However, traditional eccentricity-based foveation methods cannot account for the complex behavior of visual attention, which is one reason for lower perceived quality. In this study, we introduce a pipeline that incorporates ocular attention through visual saliency. Based on saliency, our approach facilitates the real-time production of high-quality images using path tracing. To further augment image quality, an adaptive filtering process is employed to reduce artifacts in non-foveal regions. Our experiments show that our approach delivers superior performance in terms of both quantitative metrics and perceived quality.
It's My Fingers' Fault: Investigating the Effect of Shared Avatar Control on Agency and Responsibility Attribution
Xiaotong Li, The University of Tokyo; Yuji Hatada, The University of Tokyo; Takuji Narumi, The University of Tokyo
Previous studies introduced a system known as 'virtual co-embodiment,' where control over a single avatar is shared between two people. We investigate how this experience influences people's agency and responsibility attribution through: (1) explicit agency questionnaires, (2) implicit intentional binding (IB) effects, (3) responsibility attribution measured through financial gain/loss distribution, and (4) interviews. Agency questionnaires indicated that losing control over the fingers' movements negatively affected the agency over both the fingers and the entire upper limb. IB demonstrated that participants felt greater causality for failed attempts, yet they were reluctant to take responsibility and accept financial penalties.
ReLive: Walking into Virtual Reality Spaces from Video Recordings of One's Past Can Increase the Experiential Detail and Affect of Autobiographical Memories
Valdemar Danry, MIT; Eli Villa, MIT; Samantha W. T. Chan, MIT Media Lab; Pattie Maes, MIT Media Lab
With the rapid development of advanced machine learning methods for spatial reconstruction, it becomes important to understand the psychological and emotional impacts of such technologies on autobiographical memories. In a within-subjects study, we found that allowing users to walk through old spaces reconstructed from their videos significantly enhances their sense of traveling into past memories, increases the vividness of those memories, and boosts their emotional intensity compared to simply viewing videos of the same past events. These findings highlight that, regardless of the technological advancements, the immersive experience of VR can profoundly affect memory phenomenology and emotional engagement.
VASA-Rig: Audio-Driven 3D Facial Animation with 'Live' Mood Dynamics in Virtual Reality
Ye Pan, Shanghai Jiao Tong University; Chang Liu, Shanghai Jiao Tong University; Sicheng Xu, Microsoft Research Asia; Shuai Tan, Shanghai Jiao Tong University; Jiaolong Yang, Microsoft Research Asia
Audio-driven 3D facial animation is key to enhancing realism and interactivity in the metaverse. While existing methods excel at 2D talking head videos, they lack adaptability to 3D environments. We present VASA-Rig, which advances lip-audio synchronization, facial dynamics, and head movements. Using a novel rig parameter-based emotional talking face dataset, our Latents2Rig model transforms 2D facial animations into 3D. Unlike mesh-based models, VASA-Rig outputs 174 Metahuman rig parameters, ensuring compatibility with industry-standard pipelines. Experiments show that VASA-Rig surpasses state-of-the-art methods in realism and accuracy, offering a robust solution for 3D animation in interactive virtual environments.
Multimodal Neural Acoustic Fields for Immersive Virtual Reality
Guansen Tong, UNC Chapel Hill; Johnathan Chi-Ho Leung, UNC Chapel Hill; Xi Peng, UNC Chapel Hill; Haosheng Shi, UNC Chapel Hill; Liujie Zheng, UNC Chapel Hill; Shengze Wang, UNC Chapel Hill; Arryn Carlos O'Brien, UNC Chapel Hill; Ashley Paula-Ann Neall, UNC Chapel Hill; Grace Fei, UNC Chapel Hill; Martim Gaspar, UNC Chapel Hill; Praneeth Chakravarthula, UNC Chapel Hill
We propose multimodal neural acoustic fields for synthesizing spatial sound and creating immersive auditory experiences from novel viewpoints and unseen environments. Extending neural radiance fields to acoustics, our hybrid transformer-convolutional network captures scene reverberation and generates spatial sound using sparse audio-visual inputs. By representing spatial acoustics, our method enhances presence in augmented and virtual reality. Validated on synthetic and real-world data, it outperforms existing methods in spatial audio quality and in capturing nonlinear effects such as reverberation. Studies confirm significant improvements in audio perception for immersive mixed reality applications.
From Novelty to Knowledge: A Longitudinal Investigation of the Novelty Effect on Learning Outcomes in Virtual Reality
Joomi Lee, University of Arkansas; Chen (Crystal) Chen, University of Miami; Aryabrata Basu, University of Arkansas at Little Rock
This study examines the novelty effect of immersive virtual reality (VR) and its influence on long-term learning. A three-wave longitudinal study revealed that while novelty initially boosted user engagement and exploration, it significantly waned over time. Importantly, learning outcomes steadily improved, indicating that the novelty effect may obscure VR's sustained educational benefits. These findings highlight VR's potential as a powerful educational tool and provide guidelines for mitigating the novelty effect in long-term learning strategies, ensuring that immersive environments can support meaningful, lasting learning outcomes beyond initial excitement.