
IEEE VR 2025 Conference Awards
Awards Chairs
- Kiyoshi Kiyokawa ‒ Nara Institute of Science and Technology, Japan
- Frank Steinicke ‒ Universität Hamburg, Germany
- Stefania Serafin ‒ Aalborg University, Denmark
- Amine Chellali ‒ Université d'Evry Paris-Saclay, France
Best Papers & Honorable Mention for Best Papers
The IEEE VR Best Paper Awards honor exceptional papers published and presented at the IEEE VR conference. During the review process, the program committee chairs will choose approximately 3% of submissions to receive an award. Among these chosen submissions, the separate Conference Awards Selection Committee will select the best submissions to receive a Best Paper Award (ca. 1% of total submissions), while a selection of the remaining submissions receives an Honorable Mention Award. Papers that receive an award will be marked in the program, and authors will receive a certificate at the conference.
Best Papers
Seeing is not Thinking: Testing Capabilities of VR to Promote Perspective-Taking
Eugene Kukshinov, University of Waterloo; Federica Gini, University of Trento; Anchit Mishra, University of Waterloo; Nick Bowman, Syracuse University; Brendan Rooney, University College Dublin; Lennart E. Nacke, University of Waterloo
Virtual Reality (VR) offers immersive experiences and is often considered a tool for fostering perspective-taking (PT) by enabling users to see through another's eyes. However, our study reveals that VR's commonly assumed ability to facilitate PT is overstated. In a 2x2 design, we examined the effects of point of view (first-person, 1PP vs. third-person, 3PP) and a PT task on users' PT, understood as thinking about others. Results showed that PT significantly improved only when participants were explicitly tasked with considering another's perspective. These findings highlight the importance of intentional design, emphasizing targeted tasks over mere viewpoint shifts to achieve authentic perspective-taking in VR.
Peripheral Teleportation: A Rest Frame Design to Mitigate Cybersickness During Virtual Locomotion
Tongyu Nie, University of Minnesota; Courtney Hutton Pospick, University of Minnesota; Ville Cantory, University of Minnesota; Danhua Zhang, University of Minnesota; Jasmine Joyce DeGuzman, University of Central Florida; Victoria Interrante, University of Minnesota; Isayas Berhe Adhanom, Texas State University; Evan Suma Rosenberg, University of Minnesota
Mitigating cybersickness can improve the usability of VR and increase its adoption. We propose peripheral teleportation, a novel technique that creates a rest frame (RF) in the user's peripheral vision. Specifically, the peripheral region is rendered by a pair of RF cameras whose transforms are updated by the user's physical motion. We apply alternating teleportations or snap turns to the RF cameras to keep them close to the current viewpoint. We compared peripheral teleportation with a black FOV restrictor and an unrestricted control condition. The results showed that peripheral teleportation significantly reduced discomfort and enabled participants to stay immersed longer.
Look at the Sky: Sky-aware Efficient 3D Gaussian Splatting in the Wild
Yuze Wang, Beihang University; Junyi Wang, Shandong University; Ruicheng Gao, Beihang University; Yansong Qu, XMU; Wantong Duan, Beihang University; Shuo Yang, Beihang University; Yue Qi, Beihang University
Photographs taken in unconstrained tourist environments often present challenges for accurate scene reconstruction due to variable appearances and transient occlusions. With the advancement of 3D Gaussian Splatting (3DGS), some methods have enabled real-time rendering by reconstructing scenes from unconstrained photo collections. However, the rapid convergence of 3DGS is misaligned with the slower convergence of the neural network-based appearance encoder and transient mask predictor, hindering reconstruction efficiency. To address this, we propose a novel sky-aware framework for in-the-wild scene reconstruction. Extensive experiments on multiple datasets demonstrate that the proposed framework outperforms existing methods in novel view and appearance synthesis.
Setting the Stage: Using Virtual Reality to Assess the Effects of Music Performance Anxiety in Pianists
Nicalia Elise Thompson, Goldsmiths; Xueni Pan, Goldsmiths; Maria Herrojo Ruiz, Goldsmiths
Music Performance Anxiety (MPA) is highly prevalent among musicians and often debilitating. We systematically assessed the effect of MPA on performance, physiology, and anxiety ratings using virtual reality (VR) to induce heightened MPA levels in pianists. Twenty pianists completed a performance task under two conditions: a public 'audition' and a private 'studio' rehearsal. Participants experienced VR pre-performance settings before transitioning to live piano performance in the real world. Pianists in the Audition condition experienced higher somatic anxiety and improved in performance accuracy over time, with a reduced error rate. Additionally, their performances were faster and featured increased note intensity.
Can I Get There? Negotiated User-to-User Teleportations in Social VR
Miao Wang, Beihang University; Wentong Shu, Beihang University; Yi-Jun Li, Tsinghua University; Wanwan Li, University of South Florida
The growing adoption of social virtual reality platforms underscores the importance of safeguarding personal VR space to maintain user privacy and security. Teleportation facilitates user engagement but can also inadvertently intrude upon personal VR space. This paper introduces three innovative negotiated teleportation techniques designed to secure user-to-user teleportation and protect personal space privacy, all under a unified small-group development framework. We conducted a user study with 20 participants who performed social tasks within a virtual campus environment. The findings demonstrate that our techniques significantly enhance privacy protection and alleviate anxiety associated with unwanted proximity in social VR.
How Collaboration Context and Personality Traits Shape the Social Norms of Human-to-Avatar Identity Representation
Seoyoung Kang, KAIST; Boram Yoon, KI-ITC ARRC, KAIST; Kangsoo Kim, University of Calgary; Jonathan Gratch, University of Southern California; Woontack Woo, KAIST
As avatars evolve from digital representations into identity extensions, they enable unprecedented self-expression beyond physical limitations. Our survey of 150 participants investigated social norms surrounding avatar modifications in various contexts. Results show modifications are viewed more favorably from a partner's perspective, especially for changeable attributes, but are less acceptable in professional settings. Individuals with high self-monitoring are more resistant to changes, while those with higher Machiavellian traits show greater acceptance. These findings suggest creating context-sensitive customization options that balance core identity elements with personality-driven preferences while respecting social norms.
Avatars, Should We Look at Them Directly or through a Mirror?: Effects of Avatar Display Method on Sense of Embodiment and Gaze
Kizashi Nakano, The University of Tokyo; Takuji Narumi, The University of Tokyo
In this study, using a head-mounted display with an extended downward field of view (FOV) and an eye-tracking function, we investigated how the method of presenting the avatar body, which cannot be easily viewed with a limited downward FOV, affects the sense of embodiment, presence, task difficulty, reaching speed, and head behavior patterns.
Best Paper - Honorable Mentions
Enhancing Patient Acceptance of Robotic Ultrasound through Conversational Virtual Agent and Immersive Visualizations
Tianyu Song, Technical University of Munich; Felix Pabst, Technical University of Munich; Ulrich Eck, Technical University of Munich; Nassir Navab, Technical University of Munich
Robotic ultrasound systems can enhance medical diagnostics, but patient acceptance is a challenge. We propose a system combining an AI-powered conversational virtual agent with three mixed reality visualizations to improve trust and comfort. The virtual agent, powered by a large language model, engages in natural conversations and guides the ultrasound robot, enhancing interaction reliability. The visualizations include augmented reality, augmented virtuality, and fully immersive virtual reality, each designed to create patient-friendly experiences. A user study demonstrated significant improvements in trust and acceptance, offering valuable insights for designing mixed reality and virtual agents in autonomous medical procedures.
Self-Similarity Beats Motor Control in Augmented Reality Body Weight Perception
Marie Luisa Fiedler, University of Würzburg; Mario Botsch, TU Dortmund University; Carolin Wienrich, University of Würzburg; Marc Erich Latoschik, University of Würzburg
Our work examines how self-similarity and motor control affect sense of embodiment, self-identification, and body weight perception in Augmented Reality (AR). In a 2x2 mixed design experiment, 60 participants interacted with synchronously or independently moving virtual humans, either with self-similar or generic appearance, across two AR sessions. Results show that self-similarity enhances sense of embodiment, self-identification, and weight estimation accuracy, while motor control effects were weaker than in similar VR studies. Participants' body weight, self-esteem, and body shape concerns also impacted estimates. These findings deepen understanding of AR body weight perception, highlighting real-world coherence as a key factor.
Investigating the Impact of Video Pass-Through Embodiment on Presence and Performance in Virtual Reality
Kristoffer Waldow, TH Köln; Constantin Kleinbeck, Technical University of Munich; Arnulph Fuhrmann, TH Köln; Daniel Roth, Technical University of Munich
Creating a compelling sense of presence and embodiment can enhance the user experience in VR. Traditional approaches use personalized or video self-avatars but require external hardware and focus mainly on hand representations. This paper introduces video Pass-Through Embodiment (PTE), a method leveraging per-eye depth maps from Head-Mounted Displays to integrate users' real bodies into VR environments without additional hardware. In a study with 40 participants performing a sorting task, PTE enhanced presence and embodiment despite minor visual artifacts, with no negative impact on performance, cognitive load, or VR sickness. PTE therefore offers a practical alternative to avatar-based methods in VR.
Illuminating the Scene: How Virtual Environments and Learning Modes Shape Film Lighting Mastery in Virtual Reality
Zheng Wei, The Hong Kong University of Science and Technology; Jia Sun, The Hong Kong University of Science and Technology (Guangzhou); Junxiang Liao, The Hong Kong University of Science and Technology (Guangzhou); Lik-Hang Lee, The Hong Kong Polytechnic University; Chan In Devin Sio, The Hong Kong Polytechnic University; Pan Hui, The Hong Kong University of Science and Technology; Huamin Qu, The Hong Kong University of Science and Technology; Wai Tong, Texas A&M University; Xian Xu, The Hong Kong University of Science and Technology
This study explores the impact of virtual environments and collaboration on learning film lighting techniques in VR education. A 3×2 factorial experiment with 36 participants examined three environments: a baseline, a dynamic beach, and a familiar office. Results revealed the beach heightened engagement but increased frustration for individuals, while team learning in the office reduced frustration and improved collaboration. Team-based learning excelled in the baseline, while individuals performed better in challenging settings. These findings offer insights into optimizing VR environments to enhance learning outcomes in both individual and collaborative settings.
Reaction Time as a Proxy for Presence in Mixed Reality Environments with Break-In Presence
Yasra Chandio, University of Massachusetts Amherst; Victoria Interrante, University of Minnesota; Fatima Muhammad Anwar, University of Massachusetts Amherst
Distractions in mixed reality (MR) environments affect presence, reaction time, cognitive load, and Break in Presence (BIP), where attention shifts from the virtual to the real world. While prior work has examined these factors individually, their relationship remains underexplored in MR, where users engage with real and virtual stimuli. We present a model examining how congruent and incongruent distractions affect these constructs. In a within-subject study (N=54), participants performed image-sorting tasks under different distraction conditions. Our results show that incongruent distractions increase cognitive load, slow reaction times, and elevate BIP frequency, with presence mediating these effects.
HIPS - A Surgical Virtual Reality Training System for Total Hip Arthroplasty (THA) with Realistic Force Feedback
Mario Lorenz, Chemnitz University of Technology; Maximilian Kaluschke, University of Bremen; Annegret Melzer, Chemnitz University of Technology; Nina Pillen, Youse GmbH; Magdalena Sanrow, Youse GmbH; Andrea Hoffmann, Chemnitz University of Technology; Dennis Schmidt, FAKT Software GmbH; Andre Dettmann, Chemnitz University of Technology; Angelika Bullinger, Chemnitz University of Technology; Jérôme Perret, Haption GmbH; Gabriel Zachmann, University of Bremen
VR training is essential for enhancing patient safety in surgical education. Simulators require visual realism and accurate haptic feedback. Although some surgical simulators are available, authentic training tools are limited in high-force areas like total hip arthroplasty (THA). This research introduces an innovative VR simulation of five steps of THA that, for the first time, provides realistic haptic feedback. The simulation employs a novel haptic hammering device, the enhanced Virtuose 6D from Haption, alongside novel algorithms for collision detection, haptic rendering, and material removal. A study involving 17 surgeons across various skill levels confirmed the realism, practicality, and user-friendliness of our pioneering methods.
MineVRA: Exploring the Role of Generative AI-Driven Content Development in XR Environments through a Context-Aware Approach
Emiliano Santarnecchi, Harvard Medical School, Precision Neuroscience and Neuromodulation Program, Network Control Laboratory at Massachusetts General Hospital, Boston, MA, USA, and Horizon Neuroscience, Boston, MA, USA; Emanuele Balloni, Università Politecnica delle Marche; Marina Paolanti, University of Macerata; Emanuele Frontoni, University of Macerata; Lorenzo Stacchio, University of Macerata; Primo Zingaretti, UNIVPM; Roberto Pierdicca, Università Politecnica delle Marche
The convergence of AI, Computer Vision, Graphics, and Extended Reality (XR) drives innovation in immersive environments. A key challenge is creating personalized 3D assets, which is traditionally a manual, time-consuming process. Generative AI (GenAI) offers a promising solution for automated, context-aware content creation. This paper introduces MineVRA, a Human-In-The-Loop XR framework integrating GenAI for adaptive 3D asset generation. A user study compared GenAI-generated objects to Sketchfab assets in immersive contexts. Results highlight GenAI's potential to complement traditional libraries, offering design insights for human-centered XR environments and advancing efficient, personalized 3D content creation.
Fov-GS: Foveated 3D Gaussian Splatting for Dynamic Scenes
Runze Fan, Beihang University; Jian Wu, Beihang University; Xuehuai Shi, Nanjing University of Posts and Telecommunications; Lizhi Zhao, Beihang University; Qixiang Ma, Beihang University; Lili Wang, Beihang University
3D Gaussian Splatting-based methods can achieve photo-realistic rendering with speeds of over 100 fps in static scenes, but the speed drops below 10 fps in monocular dynamic scenes. Foveated rendering provides a possible solution to accelerate rendering without compromising visual perceptual quality. However, 3DGS and foveated rendering are not directly compatible. In this paper, we propose Fov-GS, a foveated 3D Gaussian splatting method for rendering dynamic scenes in real time. Experiments demonstrate that our method not only achieves higher rendering quality in the foveal and salient regions compared to the SOTA methods but also dramatically improves rendering performance, achieving up to an 11.33× speedup.
Am I (Not) a Ghost? Leveraging Affordances to Study the Impact of Avatar/Interaction Coherence on Embodiment and Plausibility in Virtual Reality
Florian Dufresne, Arts et Métiers Institute of Technology; Charlotte Dubosc, Arts et Métiers Institute of Technology; Titouan Lefrou, Arts et Métiers Institute of Technology; Geoffrey Gorisse, Arts et Métiers Institute of Technology; Olivier Christmann, Arts et Métiers Institute of Technology
Understanding how avatars suggest possibilities for action, namely affordances, has attracted attention in the human-computer interaction community. Indeed, avatars' aesthetic features may signify false affordances conflicting with users' expectations and impacting avatar plausibility. The proposed research initially aimed at exploring the use of such affordances to investigate the impact of congruence manipulations on the sense of embodiment. However, it appeared that participants' behavior was driven by other processes that took precedence over the perception of the affordances. We unexpectedly manipulated the internal congruence across repeated exposures, causing a rupture in plausibility and significantly lowering embodiment scores and performance.
AR Fitness Dog: The Effects of a User-Mimicking Interactive Virtual Pet on User Experience and Social Presence in Physical Exercise
Hyeongil Nam, University of Calgary; Kisub Lee, Hanyang University; Jong-Il Park, Hanyang University; Kangsoo Kim, University of Calgary
This paper explores the impact of an augmented reality (AR) virtual dog that is physically present and mimics user behavior on exercise experiences in solo and group (pair) settings. A human-subject experiment compared three conditions: a mimicking virtual dog, a randomly behaving virtual dog, and no virtual dog. Results revealed that the mimicking virtual dog significantly enhanced the solo exercise experience by fostering companionship and improved group cohesion in pair settings by acting as a social facilitator. These findings underscore the potential of behavior-mimicking virtual pets to enhance exercise experiences and inform the development of AR-based fitness applications.
Detection Thresholds for Replay and Real-Time Discrepancies in VR Hand Redirection
Kiyu Tanaka, The University of Tokyo; Takuto Nakamura, The University of Tokyo; Keigo Matsumoto, The University of Tokyo; Hideaki Kuzuoka, The University of Tokyo; Takuji Narumi, The University of Tokyo
Hand redirection can modify perception and movement by providing real-time corrections to motor feedback. In the context of motor learning, observing replays of movements can enhance motor function. The application of hand redirection to these replays by making movements appear larger or smaller than they actually are has the potential to improve motor function. We conducted two psychophysical experiments to evaluate how much discrepancy between replayed and actual movements can go unnoticed by users, both with hand redirection (N=20) and without (N=18). Our findings reveal that the detection threshold for discrepancies in replayed movements is significantly different from that for real-time discrepancies.
Saliency-aware Foveated Path Tracing for Virtual Reality Rendering
Yang Gao, Beihang University; Wencan Li, Beihang University; Shiyu Liang, Beihang University; Aimin Hao, Beihang University; Xiaohui Tan, Capital Normal University
Foveated rendering reduces computational load by distributing resources based on the human visual system. However, traditional foveation methods based on eccentricity cannot account for the complex behavior of visual attention. This is one of the reasons for lower perceived quality. In this study, we introduce a pipeline that incorporates ocular attention through visual saliency. Based on saliency, our approach facilitates the real-time production of high-quality images utilizing path tracing. To further augment image quality, an adaptive filtering process is employed to reduce artifacts in non-foveal regions. Our experiments prove that our approach has superior performance both in terms of quantitative metrics and perceived quality.
It's My Fingers' Fault: Investigating the Effect of Shared Avatar Control on Agency and Responsibility Attribution
Xiaotong Li, The University of Tokyo; Yuji Hatada, The University of Tokyo; Takuji Narumi, the University of Tokyo
Previous studies introduced a system known as 'virtual co-embodiment,' where control over a single avatar is shared between two people. We investigate how this experience influences people's agency and responsibility attribution through: (1) explicit agency questionnaires, (2) implicit intentional binding (IB) effects, (3) responsibility attribution measured through financial gain/loss distribution, and (4) interviews. Agency questionnaires indicated that losing control over the fingers' movements negatively affected the agency over both the fingers and the entire upper limb. IB demonstrated that participants felt greater causality for failed attempts, yet they were reluctant to take responsibility and accept financial penalties.
ReLive: Walking into Virtual Reality Spaces from Video Recordings of One's Past Can Increase the Experiential Detail and Affect of Autobiographical Memories
Valdemar Danry, MIT; Eli Villa, MIT; Samantha W. T. Chan, MIT Media Lab; Pattie Maes, MIT Media Lab
With the rapid development of advanced machine learning methods for spatial reconstruction, it becomes important to understand the psychological and emotional impacts of such technologies on autobiographical memories. In a within-subjects study, we found that allowing users to walk through old spaces reconstructed from their videos significantly enhances their sense of traveling into past memories, increases the vividness of those memories, and boosts their emotional intensity compared to simply viewing videos of the same past events. These findings highlight that, regardless of the technological advancements, the immersive experience of VR can profoundly affect memory phenomenology and emotional engagement.
VASA-Rig: Audio-Driven 3D Facial Animation with 'Live' Mood Dynamics in Virtual Reality
Ye Pan, Shanghai Jiao Tong University; Chang Liu, Shanghai Jiao Tong University; Sicheng Xu, Microsoft Research Asia; Shuai Tan, Shanghai Jiao Tong University; Jiaolong Yang, Microsoft Research Asia
Audio-driven 3D facial animation is key to enhancing realism and interactivity in the metaverse. While existing methods excel at 2D talking head videos, they lack adaptability to 3D environments. We present VASA-Rig, which advances lip-audio synchronization, facial dynamics, and head movements. Using a novel rig parameter-based emotional talking face dataset, our Latents2Rig model transforms 2D facial animations into 3D. Unlike mesh-based models, VASA-Rig outputs 174 Metahuman rig parameters, ensuring compatibility with industry-standard pipelines. Experiments show that VASA-Rig surpasses state-of-the-art methods in realism and accuracy, offering a robust solution for 3D animation in interactive virtual environments.
Multimodal Neural Acoustic Fields for Immersive Virtual Reality
Guansen Tong, UNC Chapel Hill; Johnathan Chi-Ho Leung, UNC Chapel Hill; Xi Peng, UNC Chapel Hill; Haosheng Shi, UNC Chapel Hill; Liujie Zheng, UNC Chapel Hill; Shengze Wang, UNC Chapel Hill; Arryn Carlos O'Brien, UNC Chapel Hill; Ashley Paula-Ann Neall, UNC Chapel Hill; Grace Fei, UNC Chapel Hill; Martim Gaspar, UNC Chapel Hill; Praneeth Chakravarthula, UNC Chapel Hill
We propose multimodal neural acoustic fields for synthesizing spatial sound and creating immersive auditory experiences from novel viewpoints and unseen environments. Extending neural radiance fields to acoustics, our hybrid transformer-convolutional network captures scene reverberation and generates spatial sound using sparse audio-visual inputs. By representing spatial acoustics, our method enhances presence in augmented and virtual reality. Validated on synthetic and real-world data, it outperforms existing methods in spatial audio quality and nonlinear effects like reverberation. Studies confirm significant improvement in audio perception for immersive mixed reality applications.
From Novelty to Knowledge: A Longitudinal Investigation of the Novelty Effect on Learning Outcomes in Virtual Reality
Joomi Lee, University of Arkansas; Chen (Crystal) Chen, University of Miami; Aryabrata Basu, University of Arkansas at Little Rock
This study examines the novelty effect of immersive virtual reality (VR) and its influence on long-term learning. A three-wave longitudinal study revealed that while novelty initially boosted user engagement and exploration, it significantly waned over time. Importantly, learning outcomes steadily improved, indicating that the novelty effect may obscure VR's sustained educational benefits. These findings highlight VR's potential as a powerful educational tool and provide guidelines for mitigating the novelty effect in long-term learning strategies, ensuring that immersive environments can support meaningful, lasting learning outcomes beyond initial excitement.
Best Posters & Honorable Mention for Best Poster
The IEEE VR Best Poster Awards honor exceptional posters published and presented at the IEEE VR conference. The best poster committee consists of three distinguished members chosen by the Conference Awards Committee and the Poster Chairs, who select the best posters based on the two-page abstract and the poster presentation during the conference. Posters that receive an award will be marked in the program, and authors will receive a certificate at the conference.
Best Posters
Do You Feel Better? The Impact of Embodying Photorealistic Avatars with Ideal Body Weight on Attractiveness and Self-Esteem in Virtual Reality (Booth ID: 1246)
Lena Holderrieth, University of Würzburg; Erik Wolf, University of Würzburg; Marie Luisa Fiedler, University of Würzburg; Mario Botsch, TU Dortmund University; Marc Erich Latoschik, University of Würzburg; Carolin Wienrich, University of Würzburg
Body weight issues can manifest in low self-esteem through a negative body image or the feeling of unattractiveness. To explore potential interventions, this pilot study examined whether embodying a photorealistically personalized avatar with enhanced attractiveness affects self-esteem. Participants in the manipulation group adjusted their avatar's body weight to their self-defined ideal, while a control group used unmodified avatars. To confirm the manipulation, we measured the avatars' perceived attractiveness. Results showed that participants found avatars at their ideal weight significantly more attractive, confirming an effective manipulation. Further, the ideal-weight group showed a clear trend towards higher self-esteem post-exposure.
Proprioception Drift in Virtual Reality: An Experiment with an Unrealistically Long Leg (Booth ID: 1254)
Valentin Vallageas, Imaging and Orthopedics Research Laboratory; David R. Labbe, École de technologie supérieure; Rachid Aissaoui, École de technologie supérieure
Embodiment refers to the sensation of owning, controlling, and perceiving a virtual or artificial body as one's own. This study investigates how embodying an avatar with a leg twice its normal length affects proprioception, with congruent or incongruent visuotactile stimuli. Preliminary results (n = 10) show that participants experienced embodiment with a lengthened virtual leg, regardless of congruence of stimuli. A proprioceptive drift of 31.2 cm toward the virtual foot was observed. These findings extend research on upper-body proprioception to include virtual lower-limb deformations.
Best Poster - Honorable Mentions
How Embodied Conversational Agents Should Enter Your Space? (Booth ID: 1019)
Junyeong Kum, Pusan National University; Myungho Lee, Pusan National University
Embodied conversational agents (ECAs) capable of nonverbal behaviors have been developed to address the limitations of voice-only assistants. Research has explored their use in augmented reality (AR), suggesting they may soon interact with us more naturally in physical spaces. However, the question of how they should enter the user's space when summoned remains under-explored. In this paper, we focused on the plausibility of ECAs' entrance into the user's field of view in AR. We analyzed its impact on users' perceived social presence and the functionality of the agent. Our results indicated that the plausibility of the action significantly affected social presence and had a marginal effect on perceived functionality.
Edge Vision AI Co-Processing for Dynamic Context Awareness in Mixed Reality (Booth ID: 1242)
Alex Orsholits, The University of Tokyo; Manabu Tsukada, The University of Tokyo
Spatial computing is evolving towards leveraging data streaming for computationally demanding applications, facilitating a shift to lightweight, untethered, and standalone devices. These devices are ideal candidates for co-processing, where real-time scene context understanding and low-latency data streaming are fundamental for general-purpose Mixed Reality (MR) experiences. This poster demonstrates and evaluates a scalable approach to augmented contextual understanding in MR by implementing edge AI co-processing through a Hailo-8 AI accelerator, a low-power ARM-based single board computer (SBC), and the Magic Leap 2 AR headset. The resulting inferences are streamed back to the headset for spatial reprojection into the user’s vision.
Dynamic and Modular Thermal Feedback for Interactive 6DoF VR (Booth ID: 1205)
Sophie Villenave, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumière Lyon 2, LIRIS, UMR5205, ENISE; Pierre Raimbaud, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumière Lyon 2, LIRIS, UMR5205, ENISE; Guillaume Lavoué, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumière Lyon 2, LIRIS, UMR5205, ENISE
Thermal effects in Virtual Reality (VR) can provide sensory information when interacting with objects, or create a thermal ambiance for a Virtual Environment (VE). They are critical to a range of applications, such as firefighting training or thermal comfort simulation. Existing ambient thermal feedback systems have revealed limitations: some lack proper sensory characterization, limit movements and interactions, and can be difficult to replicate. In this context, our contribution is a reproducible room-scale system able to provide dynamic ambient thermal feedback for 6 degrees of freedom (6DoF) VR experiences. We present its psychophysical study on thermal sensation, latency, and noise (n=10).
Best Demo & Honorable Mention for Best Demo
The IEEE VR Best Demo Awards honor exceptional research demos published and presented at the IEEE VR conference. The IEEE VR Demo Chairs rank the accepted demos and recommend approximately 10% of all demos for an award. The best demo committee for IEEE VR consists of three distinguished members chosen by the Conference Awards Committee Chairs and the Demo Chairs. This committee selects one of the demos for the Best Demo Award and one for the Honorable Mention Award. The corresponding authors will receive a certificate at the conference.
Best Research Demo
Demonstrating WAVY: a hand wearable with force and vibrotactile feedback for multimodal interaction in Virtual Reality
Hall: CEZEMBRE, Booth ID: 15
Marion Pontreau, Université Paris-Saclay, CEA, List; Céphise Louison, CEA, List; Pierre-Henri Oréfice, CEA, List; Sylvain Bouchigny, CEA, List; Thanh-Loan Sarah Le, Institut des Systèmes Intelligents et de Robotique; David Gueorguiev, Institute of Neuroscience; Sabrina Panëels, CEA, List
While the public has access to audio and visual stimuli in virtual reality (VR) thanks to affordable and light headsets, the haptic sense (i.e., the sense of touch) is often lacking or restricted to controllers' vibrations. Moreover, commercial haptic wearables remain too expensive for the mass market. Thus, we designed an affordable and light hand wearable that combines tracking, multi-point vibrations, and force feedback for the palm and the fingers. The wearable is demonstrated in a VR scenario.
Best Research Demo - Honorable Mention
Demonstration of VirtuEleDent: A Compact XR Tooth-Cutting Training System Using a Physical EMR-based Dental Handpiece and Teeth Model
Hall: SURCOUF, Booth ID: 12
Yuhui Wang, Tohoku University; Kazuki Takashima, Shibaura Institute of Technology; Masamitsu Ito, Wacom Co., Ltd.; Takeshi Kobori, Wacom Co., Ltd. EMR Technology; Tomo Asakura, Wacom Co., Ltd.; Kazuyuki Fujita, Tohoku University; Hong Guang, Tohoku University; Yoshifumi Kitamura, Tohoku University
Dental cutting is a crucial skill for dental students. However, current VR dental cutting training systems rely on bulky and costly haptic devices, which reduce opportunities for individual practice. Moreover, the maximum reaction force of an active haptic device limits the range of tooth hardness that can be reproduced. We propose a compact XR tooth-cutting training system, VirtuEleDent, that employs a passive haptic approach using a 3D-printed physical teeth model and a three-dimensionally tracked handpiece. Their spatial relationship is accurately rendered in the virtual environment of a mobile head-mounted display (HMD), providing users with realistic haptic sensations during virtual tooth-cutting exercises. Our tracking platform is operated using electromagnetic resonance (EMR) stylus technology and consists of a digitizer (i.e., tracking board) and a handpiece device. A customized EMR stylus unit (i.e., resonance coil) and an inertial measurement unit (IMU) sensor are installed inside the handpiece, allowing for precise measurement of its tip's 3D position and orientation. This setup enables the learner to physically manipulate dexterous handpieces on the teeth model while experiencing virtual tooth-cutting in the HMD. This is a companion demo to the IEEE VR 2025 Conference paper: "VirtuEleDent: A Compact XR Tooth-Cutting Training System Using a Physical EMR-based Dental Handpiece and Teeth Model." To watch a video about VirtuEleDent, please visit ZvHZ6IEAhyM.
Power wheelchair driving: a multisensory simulator using VR to learn in rehabilitation centers
Hall: CEZEMBRE, Booth ID : 5
Fabien Grzeskowiak, Inria Centre Bretagne Atlantique; Emilie Leblong, Pôle Saint Hélier rehabilitation center; Sébastien Thomas, Inria; François Pasteau, Univ Rennes, INSA; Louise Devigne, IRISA, UMR 6074; Sylvain Guegan, Univ Rennes, INSA Rennes, LGCGM; Marie Babel, Univ Rennes, INSA Rennes, Inria, CNRS, IRISA
Power wheelchairs (PWCs) significantly enhance mobility for individuals with disabilities but are often challenging to master, requiring extensive training. Traditional training methods can be risky, resource-intensive, and difficult to implement. Virtual reality (VR) offers a safer, customizable, and effective alternative, as demonstrated in rehabilitation contexts. We developed a multisensory VR simulator incorporating vestibular feedback to enhance the sense of presence and mitigate cybersickness. Our studies show effective skill transfer from the virtual environment to real-world PWC use, validated with actual users in clinical trials. This demonstration highlights the capabilities of our mechanical simulator, featuring navigation scenes from previous trials and ongoing work with virtual agents. The simulator's immersive and adaptive design addresses the challenges of PWC training, offering a practical and innovative solution for clinicians and users alike.
Best 3DUI Contest & Honorable Mention
The IEEE VR Best 3DUI Contest Submission Awards honor exceptional 3DUI contest submissions published and presented at the IEEE VR conference. The 3DUI contest chairs select one of the submissions for the Best 3DUI Contest Submission Award and one for the Honorable Mention Award. The final decision is based on a combination of the review scores, scores from experts testing the contest submissions during the conference, and audience scores; the team with the highest combined score wins. Authors will receive a certificate at the conference.
Best 3DUI Contest Demo
From Dystopia to Eutopia: Transforming Urban Environments Through Collaborative Decision-Making in Arsinoe VR (ID: 8622)
Giorgos Ganias, National and Kapodistrian University of Athens; Christos Lougiakis, National and Kapodistrian University of Athens; Anastasis Niarchos, Athena Research Center; Akrivi Katifori, Athena Research Center; Dimitra Petousi, Athena Research Center; Marina Stergiou, Athena Research Center; Gabriel Gourdoglou, Athena Research Center; Maria Roussou, National and Kapodistrian University of Athens; Yannis Ioannidis, National and Kapodistrian University of Athens and Athena Research Center
Best 3DUI Contest Demo - Honorable Mention
Future Planet: How the present can influence the future (ID: 8361)
Moritz Treu, University of Applied Sciences Hamburg; Jendrik Roland Ludwig, University of Applied Sciences Hamburg; Jonas Merlin Christmann, University of Applied Sciences Hamburg; Manon Lemonnier, Arts et Métiers Institute of Technology; Roman Buaillon, Arts et Métiers Institute of Technology; Olivier Raoux, Arts et Métiers Institute of Technology; Eike Langbehn, University of Applied Sciences Hamburg; Jean-Rémy Chardonnet, Arts et Métiers Institute of Technology
Best DC Paper & Honorable Mention for Best DC Paper
The IEEE VR Best Doctoral Consortium (DC) Paper Awards honor exceptional DC papers published and presented at the IEEE VR conference. The best DC paper committee consists of three distinguished members chosen by the Conference Awards Committee Chairs and the DC chairs. The DC chairs recommend 20% of all DC papers for an award, and the committee selects one of these DC papers for the Best DC Paper Award and one to receive an Honorable Mention Award. DC papers that receive an award will be marked in the program, and authors will receive a certificate at the conference.
Best Doctoral Consortium Paper
Towards Comprehensible and Expressive Teleportation Techniques in Immersive Virtual Environments
Author: Daniel Rupp, RWTH Aachen University
Teleportation, a popular navigation technique in virtual environments, is favored for its efficiency and reduction of cybersickness but presents challenges such as reduced spatial awareness and limited navigational freedom compared to continuous techniques. I would like to focus on three aspects that advance our understanding of teleportation in both the spatial and the temporal domain: 1) an assessment of different parametrizations of common mathematical models used to specify the teleportation target location, and their influence on teleportation distance and accuracy; 2) extensions of teleportation capabilities to improve navigational freedom, comprehensibility, and accuracy; and 3) an adaptation of teleportation to the time domain, mediating temporal disorientation. The results will enhance the expressivity of existing teleportation interfaces and provide validated alternatives to their steering-based counterparts.
Best Doctoral Consortium Paper - Honorable Mention
Gaze-Based Viewport Control in VR
Author: Hock Siang Lee, Lancaster University
Head-Mounted Displays (HMDs) allow users to explore Virtual Reality (VR) environments via extensive head and body movements. In this work, we introduce novel gaze-based techniques for viewport control. These allow users to explore VR environments without moving their head, body, or hands, and without any external controllers other than an eye tracker. The techniques have been evaluated and compared against traditional alternatives in an abstract study and a video-watching study, demonstrating comparable performance, task load, cybersickness, and user preference. Future research seeks to improve their applicability in a variety of real-world settings, necessitating investigations into how they affect hand-eye coordination and their impact on interactions and interface design. Combined, these should address the need for more accessible and ergonomic exploration methods in VR, particularly for users with limited mobility or those in confined spaces.
Best XR Gallery Exhibits
The IEEE VR Best XR Gallery Awards honor exceptional art projects published and exhibited at the IEEE VR conference. The award committee this year consists of members of Ars Electronica, who will carefully consider every project in terms of its novelty, impact, and technical mastery. Two awards will be given: one Honorable Mention and the Best XR Gallery Award. Projects that receive an award will be marked in the program, and authors will receive a certificate at the conference.
Best XR Gallery Exhibit
Exhibit: Imagraph
Imagraph is an optical apparatus that projects video onto closed eyelids, produced as a medium that mediates two primordial attitudes: projecting an image and closing the senses. Participants close their eyes and wait for the video to be projected. A video prepared in advance is played after being tinted bluish by the spectral compensation for the 'blood-red' unique to the participant's own flesh. Here, the lid is also the medium for the very object it is trying to block off. The intended colors, their placements, and their motion can be sent, which realizes animation. However, this in no way implies the triumph of projection over closing the senses. In the privileged posture of the closed eye, the light of the video shares texture and fuses with the visual image of the unconscious. Multiple versions were produced, including a version in which the eyelids of participants lying down are connected by optical fiber to a suspended display pixel on the ceiling, and another version in which a laser projector projects images directly onto the eyelids of participants looking forward.

Best XR Gallery - Honorable Mention
Exhibit: Edges of Identity: A Reverse Turing Test
In an era where artificial intelligence is fundamentally changing our perception of reality, 'Edges of Identity: A Reverse Turing Test' presents a fascinating reversal of the classic Turing Test. This immersive Virtual Reality (VR) installation challenges visitors, as the only human among advanced AI systems, to disguise themselves, raising profound questions about the nature of intelligence, identity, and reality.
