
Research Demos
Research Demos Schedule (Timezone: Saint-Malo, France UTC+1)

Event | Day | Time | Room
---|---|---|---
Research Demo Fast Forward Session | Monday 10th | 15:15 - 16:15 | Chateaubriand
Research Demo Booths Open | Monday 10th | 16:15 - 17:15 | Surcouf / Cézembre
Research Demo Booths Open | Tuesday 11th | 9:30 - 10:00, 13:15 - 14:00, 16:15 - 17:15 | Surcouf / Cézembre
Research Demo Booths Open | Wednesday 12th | 9:30 - 10:00, 13:15 - 14:00 | Surcouf / Cézembre
Demonstration of Petting Interaction with Breathing Cat Using Mid-Air Ultrasound Haptic Feedback
Hall: SURCOUF, Booth ID : 11
Juro Hosoi, The University of Tokyo ; Yuki Ban, The University of Tokyo ; Shinichi Warisawa, The University of Tokyo
This study introduces a petting interaction system with a virtual breathing cat, utilizing mid-air ultrasound haptic feedback to enhance the realism of pet experiences in virtual reality (VR). Previous research has focused mainly on the tactile presentation of fur texture, and interaction with realistic virtual animals has not been fully investigated. Building on this prior work, the proposed system employs airborne ultrasound tactile displays (AUTDs) and a hand-tracking camera to deliver real-time, non-contact haptic feedback simulating not only the soft fur texture but also the breathing motion. By dynamically controlling the ultrasound focal point's position and intensity, the system reproduces the tactile sensations of fur stroking and rhythmic respiratory motion. The visual component features a 3D cat model synchronized with breathing cycles, situated in a familiar domestic setting. The user experience design focused on creating a calming interaction, with tactile feedback adjusting based on hand movement patterns.
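A minimal sketch of how the breathing and stroking components described above could be combined into a single focal-point command; the update constants, gain values, and the AUTD interface are assumptions for illustration, not the authors' implementation:

```python
import math

BREATH_PERIOD_S = 3.0   # assumed respiratory cycle length
BASE_INTENSITY = 0.4    # assumed baseline focal intensity (0..1)
BREATH_GAIN = 0.3       # contribution of the breathing motion
STROKE_GAIN = 0.3       # contribution of the fur-stroking texture

def focal_command(t, palm_pos, palm_speed):
    """Return (position, intensity) for the ultrasound focal point.

    palm_pos   -- tracked palm position from the hand-tracking camera
    palm_speed -- palm speed in m/s, used to modulate the fur texture
    """
    breath = 0.5 * (1.0 + math.sin(2.0 * math.pi * t / BREATH_PERIOD_S))
    stroke = min(palm_speed / 0.5, 1.0)  # saturate at 0.5 m/s
    intensity = BASE_INTENSITY + BREATH_GAIN * breath + STROKE_GAIN * stroke
    return palm_pos, min(intensity, 1.0)

# A hypothetical autd.set_focus(position, intensity) call would then drive
# the airborne ultrasound tactile display with this command each frame.
```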
Demonstration of Directional Force Feedback in Virtual Reality Tennis using Dual-Flywheel Haptics
Hall: SURCOUF, Booth ID : 15
Allyson Chen, University of California, San Diego ; Xuan Gedney, University of California San Diego ; Jasmine Roberts, University of California, San Diego
This demo introduces a haptic feedback device for tennis simulation, designed to provide realistic physical sensations of ball impact through interaction with a virtual racket model. The system integrates with a Unity-based virtual setup where users are visually immersed in a virtual tennis court environment. Unlike previous systems, this device incorporates a mechanical interface that replicates the sensation of a tennis ball striking the racket using a dual flywheel system. This approach follows a one-controller, one-function paradigm, where each input device is dedicated to a specific function rather than serving as a general-purpose controller. This targeted design enhances realism and improves user immersion by providing precise, specialized feedback rather than distributing multiple functionalities across a single device. Furthermore, this system aligns with augmented virtuality, as it brings physical sensations into an otherwise fully virtual environment, reinforcing the sense of realism and presence.
FirstModulAR: Open Source Modular Augmented Reality Interfaces for First Responders
Hall: CEZEMBRE, Booth ID : 1
Regis Kopper, Iowa State University ; Jeronimo Grandi, Augusta University ; Erin Argo, Augusta University ; Jason Jerald, NextGen Interactions ; Rich Bennett, NextGen Interactions ; Connor Shipway, NextGen Interactions
First responders operate in high-pressure environments where decisions must be made quickly and accurately. However, the increasing availability of real-time data presents a double-edged sword: while potentially valuable, the sheer volume of information can lead to cognitive overload, compromising situational awareness and performance. The FirstModulAR project seeks to address these challenges by developing modular Augmented Reality (AR) interfaces specifically tailored to public safety. By working closely with first responders, the project focuses on creating reusable AR modules that improve situational awareness and decision-making across diverse scenarios. This paper provides an overview of the system architecture, the modular design approach, and the integration of AR solutions into realistic public safety scenarios.
StimulHeat: a Clip-On, Low-Energy and Wireless Device for Thermal Feedback when Grasping in VR
Hall: CEZEMBRE, Booth ID : 11
Matthieu Mesnage, INSA Lyon, Ecole Centrale Lyon, Universite Claude Bernard Lyon 1, CPE Lyon, CNRS, INL UMR 5270, 69100 ; Sophie Villenave, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumiere Lyon 2, LIRIS, UMR5205, ENISE ; Pierre Raimbaud, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumiere Lyon 2, LIRIS, UMR5205, ENISE ; Guillaume Lavoué, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumiere Lyon 2, LIRIS, UMR5205, ENISE ; Bertrand Massot, INSA Lyon, Ecole Centrale Lyon, Universite Claude Bernard Lyon 1, CPE Lyon, CNRS, INL UMR 5270
Adding thermal feedback to a Virtual Reality (VR) experience enhances user immersion and the resulting feeling of presence. Our device, StimulHeat, provides a wireless, low-power, and easy-to-use solution for delivering hot and cold sensations in VR, particularly when grasping objects. This demonstration highlights its integration with Valve Index Controllers, though it can be adapted to various controllers or tangible interfaces for both augmented and mixed reality experiences.
RAGatar: Enhancing LLM-driven Avatars with RAG for Knowledge-Adaptive Conversations in Virtual Reality
Hall: CEZEMBRE, Booth ID : 10
Alexander Marquardt, Institute of Visual Computing ; David Golchinfar, University of Applied Sciences Bonn-Rhein-Sieg ; Daryoush Vaziri, University of Applied Sciences
We present a virtual reality system that enables users to seamlessly switch between general conversations and domain-specific knowledge retrieval through natural interactions with AI-driven avatars. By combining MetaHuman technology with self-hosted large language models and retrieval-augmented generation, our system demonstrates how immersive AI interactions can enhance learning and training applications where both general communication and expert knowledge are required.
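A minimal sketch of the switch between general conversation and domain-specific retrieval, assuming a generic embedding model and a self-hosted chat-completion endpoint; the routing threshold, index layout, and function names are illustrative rather than the authors' pipeline:

```python
import numpy as np

def embed(text):
    """Placeholder for any sentence-embedding model (assumed, not specified in the demo)."""
    raise NotImplementedError

def chat(prompt):
    """Placeholder for the self-hosted LLM's completion call (assumed)."""
    raise NotImplementedError

def answer(user_utterance, doc_vectors, doc_texts, threshold=0.35, k=3):
    """Route between small talk and retrieval-augmented answers."""
    q = embed(user_utterance)
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    top = np.argsort(sims)[::-1][:k]
    if sims[top[0]] < threshold:  # nothing relevant retrieved: general conversation
        return chat(user_utterance)
    context = "\n\n".join(doc_texts[i] for i in top)
    return chat(f"Answer using only this context:\n{context}\n\nQuestion: {user_utterance}")
```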
Demo: Automated Multiparty Conversations Analysis in VR Massive Multiplayer Online Games Environments
Hall: CEZEMBRE, Booth ID : 9
Riccardo Calcagno, Fondazione Bruno Kessler ; Elio Salvadori, Fondazione Bruno Kessler ; Oscar Mayora, Fondazione Bruno Kessler Research Institute ; Giovanna Varni, University of Trento ; Luca Turchet, University of Trento
The study of social interactions in Massively Multiplayer Online Games (MMOGs) has largely focused on chat communication and speech content analysis. This paper introduces a novel tool designed to analyze social interactions among MMOG players by leveraging Conversation Analysis methodologies to systematically assess multi-party interactions during 6DOF Virtual Reality sessions. The Multiparty Conversation Analysis (MCA) tool can be applied in several MMOG contexts where social analysis is needed. In this demo paper, we present a specific application of the tool focusing on VR MMOG-based therapy for autistic teens. A video demo can be seen at: U4A4vkjUrvM
Demo: Out-Of-Body Experience induced after Virtual Embodiment
Hall: CEZEMBRE, Booth ID : 8
Pierre Bourdin-Kreitz, Universitat Oberta de Catalunya ; Andreu Castano Jimenez, Universitat Pompeu Fabra
Out-of-body experiences (OBEs), where individuals perceive themselves as separated from their physical bodies, hold significant potential for research in psychology and neuroscience. We present a demonstration of a system designed to induce OBEs through immersive technology and virtual body ownership illusions (VBOI). Our approach uses personalized avatars, synchronized visuomotor and visuotactile stimulation, and real-time interaction to create strong VBOI, followed by the induction of OBEs via controlled camera movements. The system is lightweight, user-friendly, and offers better ecological validity, opening new avenues for studying mind-body relationships in controlled environments. Applications include advancing research on self-consciousness, embodiment, and disembodiment, as well as exploring therapeutic interventions for conditions such as PTSD and anxiety. In future research, we wish to explore virtual OBEs in more realistic contexts, such as VR car crash scenarios.
Endomersion: An Immersive Remote Guidance and Feedback System for Robot-Assisted Minimally Invasive Surgery
Hall: CEZEMBRE, Booth ID : 7
Mats Ellenberg, TUD Dresden University of Technology ; Katja Krug, TUD Dresden University of Technology ; Yichen Fan, TUD Dresden University of Technology ; Jens Krzywinski, Technische Universitat Dresden ; Raimund Dachselt, TUD Dresden University of Technology ; Rayan Younis, Medical Faculty and University Hospital Carl Gustav Carus, TUD Dresden University of Technology ; Martin Wagner, TUD Dresden University of Technology ; Juergen Weitz, TUD Dresden University of Technology ; Ariel Rodriguez, NCT Dresden ; Gregor Just, NCT/UCC, Carl Gustav Carus Faculty of Medicine, TU Dresden ; Sebastian Bodenstedt, NCT Dresden ; Stefanie Speidel, National Center for Tumor Diseases (NCT)
The current practice of surgical telementoring involves remote experts providing guidance via telephone or video conferences. Especially in minimally invasive surgery (MIS), this greatly limits their insight into the procedure and inhibits effective deictic referencing. To address this, we present Endomersion, an immersive remote guidance and feedback system that allows verbal and non-verbal communication through audio chat, pointing, and telestration, enables remote control of the endoscope camera, and offers an immersive 3D workspace supporting the remote expert's understanding of the on-site operation. The system was developed with feedback from surgeons and tested in actual operating room settings. In this demonstration, users take on the role of the remote expert, communicating with the on-site surgeon and remotely controlling the endoscopic camera attached to a robot arm.
P.E.T.R.A: Persuasive Environment for Tracking and Regulating Arousal and Valence in Virtual Reality
Hall: CEZEMBRE, Booth ID : 6
Fabio Vito Papapicco, Universita degli Studi di Bari Aldo Moro ; Francesca Cuzzo, Universita degli Studi di Bari Aldo Moro ; Veronica Rossano, University of Bari
The rising prevalence of gambling, particularly in digital formats, demands innovative approaches to understanding and mitigating Gambling Disorder (GD). Virtual Reality (VR), with its immersive and interactive potential, offers a unique platform to explore the complex interplay of psychological and environmental factors contributing to impulsive gambling behaviors. This research introduces P.E.T.R.A. (Persuasive Environment for Tracking and Regulating Arousal), a novel VR-based environment designed as an amusement park simulation. P.E.T.R.A. combines elements inspired by electronic gambling machines (EGMs) with gamification mechanics to study and modulate emotional arousal and valence. Using the Circumplex Model of Emotion, P.E.T.R.A. tracks and analyzes affective states in real time, providing insights into impulsive decision-making. Preliminary results validate P.E.T.R.A.'s capability to evoke realistic emotional responses and inform future interventions, particularly for cognitive behavioral therapy (CBT). This work aims to demonstrate the potential of VR technologies for GD research, prevention, and therapeutic applications.
Power wheelchair driving: a multisensory simulator using VR to learn in rehabilitation centers
Hall: CEZEMBRE, Booth ID : 5
Fabien Grzeskowiak, Inria Centre Bretagne Atlantique ; Emilie Leblong, Pole Saint Helier, rehabilitation center ; Sébastien THOMAS, INRIA ; Francois Pasteau, Univ Rennes, INSA ; Louise Devigne, Irisa-UMR6074 ; Sylvain GUEGAN, Univ Rennes, INSA Rennes, LGCGM ; Marie Babel, Univ Rennes, INSA Rennes, Inria, CNRS, IRISA
Power wheelchairs (PWCs) significantly enhance mobility for individuals with disabilities but are often challenging to master, requiring extensive training. Traditional training methods can be risky, resource-intensive, and difficult to implement. Virtual reality (VR) offers a safer, customizable, and effective alternative, as demonstrated in rehabilitation contexts. We developed a multisensory VR simulator incorporating vestibular feedback to enhance the sense of presence and mitigate cybersickness. Our studies show effective skill transfer from the virtual environment to real-world PWC use, validated with actual users in clinical trials. This demonstration highlights the capabilities of our mechanical simulator, featuring navigation scenes from previous trials and ongoing work with virtual agents. The simulator's immersive and adaptive design addresses the challenges of PWC training, offering a practical and innovative solution for clinicians and users alike.
Real-Time Decoder and Player for MPEG V-DMC Dynamic Meshes
Hall: SURCOUF, Booth ID : 18
Aleksei Martemianov, Nokia Technologies ; Emre Aksu, Nokia ; Lauri Ilola, Nokia Technologies ; Lukasz Kondrad, Nokia
This paper presents a real-time decoder and player for 3D dynamic meshes, compressed using the MPEG V-DMC standard (Video-based Dynamic Mesh Coding). MPEG V-DMC efficiently compresses dynamic 3D meshes, enabling smooth playback of high-resolution meshes with minimal latency. We demonstrate that the V-DMC standard is highly effective for representing dynamic meshes, offering significant data compression while maintaining high visual quality. Our implementation further shows that MPEG V-DMC can be successfully deployed on simple, resource-constrained devices, making it ideal for a wide range of applications, including volumetric media streaming, augmented reality (AR), and virtual reality (VR). By leveraging modern web technologies, we provide a platform-independent solution that broadens the accessibility of high-quality XR content.
Beaming AR: A Compact Environment-Based Display System for Battery-Free Augmented Reality
Hall: SURCOUF, Booth ID : 19
Hiroto Aoki, The University of Tokyo ; Yuta Itoh, The University of Tokyo
Beaming AR demonstrates a new approach to augmented reality (AR) that fundamentally rethinks the conventional all-in-one head-mounted display (HMD) paradigm. Instead of integrating power-hungry components into headwear, our system relocates projectors, processors, and power sources to a compact environment-mounted unit, allowing users to wear only lightweight, battery-free glasses. Our demonstration features a 300 mm × 300 mm bench-top setup that combines steerable laser projection with co-axial infrared marker tracking. The tracking system uses retroreflective markers and follows the same optical path as the display beam, enabling precise alignment between real-world motion and projected AR content. Conference attendees can experience this technology firsthand by wearing passive eyewear and observing AR content in free space. This proof-of-concept implementation shows how environmental hardware offloading could lead to more practical and comfortable AR displays for extended use.
Conversational Virtual Agent and Immersive Visualizations for Robotic Ultrasound
Hall: SURCOUF, Booth ID : 3
Tianyu Song, Technical University of Munich ; Felix Pabst, Technical University of Munich ; Ulrich Eck, Technical University of Munich ; Nassir Navab, Technische Universitat Munchen
Robotic ultrasound systems offer significant potential for improving diagnostic precision but face challenges in patient acceptance due to the lack of human interaction. We present a novel system that integrates a conversational virtual agent, powered by a large language model, with two immersive visualization modalities: augmented reality (AR) and virtual reality (VR). The virtual agent engages patients through natural dialogue, utilizing real-time speech-to-text and text-to-speech systems to provide clear guidance and reassurance throughout the procedure. The AR visualization allows patients to remain aware of the robot while interacting with the virtual assistant, whereas the VR visualization fully immerses patients in a virtual environment where the robot is hidden, offering a more relaxed experience. This demo showcases how combining adaptive communication with immersive environments can bridge the gap between robotic automation and patient-centered care in medical procedures. A video of this demo can be found here: 88axBCUFsLM
Immersive AI-Powered Language Learning Experience in Virtual Reality: A Gamified Environment for Japanese Learning
Hall: SURCOUF, Booth ID : 14
Qingzhu Zhang, University of California, Berkeley
This work presents an AI-driven language learning system that combines immersive VR technology and gamification to explore alternative approaches in language education. Developed for Japanese language learners, the system creates a simulated Tokyo environment where learners engage with AI-powered virtual characters to complete tasks, such as purchasing items at convenience stores and seeking directions. It provides dynamic vocabulary assistance and adaptive feedback to support practical communication skills through authentic interactions. The system adjusts task complexity and vocabulary difficulty according to user performance, while reinforcing language acquisition through contextualized conversations. The implementation explores how the integration of artificial intelligence with virtual environments may contribute to computer-assisted language learning.
Demonstration of Visual Presentation Method for Paranormal Phenomena Through Binocular Rivalry Induced by Dichoptic Color Differences
Hall: SURCOUF, Booth ID : 9
Kai Guo, Graduate School of Frontier Sciences ; Juro Hosoi, The University of Tokyo ; Yuki Shimomura, The University of Tokyo ; Yuki Ban, The University of Tokyo ; Shinichi Warisawa, The University of Tokyo
Paranormal visual effects, such as spirits and miracles, are frequently depicted in visual games and media design. However, current methods do not express paranormal experiences as aspects of the sixth sense. We propose utilizing binocular rivalry to provide a new visual presentation method by displaying different images in each eye. In this study, we developed a color selection method based on experimentally gathered data to choose a secondary color to display to the other eye when given an initial input color. We then applied this method to create two demonstrations. Our approach aims to deliver a more realistic visual experience of paranormal phenomena, thereby enhancing user satisfaction while simultaneously simplifying the design process for designers who utilize our proposed method.
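To make the dichoptic structure concrete, here is a minimal sketch of presenting one color per eye; the hue-shift stand-in below is only a placeholder for the experimentally derived color-selection method described above:

```python
import colorsys

def secondary_color(rgb, hue_shift=0.5):
    """Placeholder mapping from the input color to the other eye's color.

    The demo derives this mapping from experimental data; a fixed hue
    shift is used here purely to illustrate the structure.
    """
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + hue_shift) % 1.0, s, v)

def dichoptic_pair(input_rgb):
    """Return the colors to render in the left-eye and right-eye passes."""
    return input_rgb, secondary_color(input_rgb)

left, right = dichoptic_pair((0.9, 0.2, 0.2))
# The target object is drawn with `left` in the left-eye render pass and
# `right` in the right-eye pass, inducing binocular rivalry.
```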
Virtual Reality Antarctic Weather Station Repair for Informal STEM Learning
Hall: SURCOUF, Booth ID : 8
Kevin Ponto, University of Wisconsin - Madison ; Ross Tredinnick, University of Wisconsin - Madison ; Sarah Gagnon, University of Wisconsin, Madison ; David Gagnon, University of Wisconsin
Developed in collaboration with polar science experts, this virtual reality game immerses users in the life of a scientist maintaining weather stations in Antarctica. Designed for the Meta Quest 3, the game challenges players to complete realistic repair tasks reimagined as engaging tactile puzzles. As players tackle these challenges, a virtual assistant provides guidance, not only assisting with gameplay mechanics but also offering valuable insights into the real-world science behind their actions. This interactive experience aims to captivate players aged ten and above, offering a unique blend of education and entertainment. A preview of the experience can be found at cQpikAcYwek.
Drawn Together: A Collocated Mixed Reality Sketching and Annotation Experience
Hall: SURCOUF, Booth ID : 20
Andrew Rukangu, University of Georgia ; Ethan Bowmar, University of Georgia ; Sun Joo (Grace) Ahn, University of Georgia ; Beshoy Morkos, University of Georgia ; Kyle Johnsen, University of Georgia
We present Drawn Together, a 3D immersive mixed-reality sketching and annotation tool designed for collaboration research. The system supports a variety of 3D content creation ranging from mid-air, free-form line drawing to voxel-based drawings and mannequins that can be flexibly posed. Groups of users can engage in synchronous, collocated design, education, art, and play aligned against a pass-through video backdrop to afford physical references and annotations. A number of input options are also presented, including standard motion controllers, a tracked stylus, and ring-augmented hand tracking.
LLM-powered Text Entry in Virtual Reality
Hall: SURCOUF, Booth ID : 5
Yan Ma, Stony Brook University ; Tony Li, Stony Brook University ; Zhi Li, Stony Brook University ; Xiaojun Bi, Stony Brook University
Large language models (LLMs) have demonstrated exceptional performance across various language-related tasks, offering significant potential for enhancing text entry in Virtual Reality (VR). We introduce an LLM-powered text entry system for VR, which integrates multiple input modalities and utilizes a fine-tuned LLM as a keyboard decoder. The LLM-based decoder achieved 93.1% top-1 decoding accuracy on a word gesture typing dataset and 95.4% on tap typing, highlighting its potential for VR text entry applications. Our demonstration shows how LLMs can support tap typing and word-gesture typing through raycasting and joystick-based inputs, potentially accommodating various user preferences and enhancing the adaptability of VR text input methods.
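A minimal sketch of the decoding idea for tap typing, assuming a planar QWERTY layout and a generic `llm(prompt)` completion call; the real system uses a fine-tuned model rather than this prompt format:

```python
# Keyboard key centers in arbitrary layout units (QWERTY rows).
QWERTY = {
    "q": (0, 0), "w": (1, 0), "e": (2, 0), "r": (3, 0), "t": (4, 0),
    "y": (5, 0), "u": (6, 0), "i": (7, 0), "o": (8, 0), "p": (9, 0),
    "a": (0.5, 1), "s": (1.5, 1), "d": (2.5, 1), "f": (3.5, 1), "g": (4.5, 1),
    "h": (5.5, 1), "j": (6.5, 1), "k": (7.5, 1), "l": (8.5, 1),
    "z": (1, 2), "x": (2, 2), "c": (3, 2), "v": (4, 2), "b": (5, 2),
    "n": (6, 2), "m": (7, 2),
}

def nearest_key(x, y):
    return min(QWERTY, key=lambda k: (QWERTY[k][0] - x) ** 2 + (QWERTY[k][1] - y) ** 2)

def decode_taps(taps, llm):
    """taps: list of (x, y) raycast or joystick hit points in keyboard units."""
    noisy = "".join(nearest_key(x, y) for x, y in taps)
    prompt = (f"A user tap-typed the noisy key sequence '{noisy}' on a QWERTY "
              f"keyboard. Return the most likely intended word.")
    return llm(prompt)
```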
From 2D to 3D: Redesigning the Desktop Metaphor in Virtual Reality
Hall: SURCOUF, Booth ID : 4
Mingzhu Cui, Yonsei University ; Sangwon Lee, Yonsei University ; Junyoung Kim, Yonsei University
Since the 1980s, scholars have proposed Natural User Interfaces (NUI) as the future of interaction, with VR HMD technology making 3D environments and hand gestures feasible. However, much of today's VR research focuses on algorithms or conceptual ideas, and industry solutions often extend traditional 2D WIMP GUIs. In response, this study presents a new VR desktop paradigm incorporating object, metaphor, physics, embodiment, spatial context, and magic. A prototype featuring a music player and calculator demonstrated high usability and immersion. Moreover, workspace customization emerges as a valuable feature for personalization, yielding positive outcomes in terms of spatial memory and task re-immersion. This underscores the potential of VR interfaces to offer not only novel interaction techniques but also enhanced cognitive and experiential benefits for users, suggesting avenues for future research on broader applications.
Room Connection Techniques in Virtual Reality for Walking in Impossible Spaces
Hall: SURCOUF, Booth ID : 7
Ana Rebelo, NOVA LINCS, NOVA School of Science and Technology ; Pedro A. Ferreira, NOVA School of Science and Technology ; Rui Nóbrega, Faculdade de Ciencias e Tecnologia, Universidade Nova de Lisboa
This work demonstrates three room-connection techniques (portals, corridors, and central hubs) that can be used in virtual environments to create "impossible spaces". These spaces use overlapping areas to maximize available physical space, enabling walking even in constrained spaces. In this experiment, a user can navigate through a five-room apartment of about 106.25 m² by walking within a physical play area of about 6.25 m² (2.5 × 2.5 meters). The primary contribution of this work is demonstrating how these room-connection techniques can be applied to dynamically adapt virtual environments to fit small physical spaces, such as those commonly available to VR users at home. A video showcasing this demo can be found at the following link: a_4kcC0VOuY.
Interoperable and Open Source Solution for Anchoring Augmented Reality Content to the Real World
Hall: CEZEMBRE, Booth ID : 2
Sylvain Buche, Orange Labs ; Ingo Feldmann, Fraunhofer HHI ; Patrick Harms, Nuremberg Institute of Technology ; Hugo Kruber, IRT b-com ; Jeremy Lacoche, Orange Labs ; Stéphane Louis Dit Picard, Orange ; Sylvain Renault, Fraunhofer Heinrich-Hertz-Institut ; Jerome Royan, IRT b-com
This research demonstration presents an open source end-to-end solution for anchoring virtual content in the real world across various platforms, enabled by the interoperable specifications of the ETSI Augmented Reality Framework (ARF) Industry Specification Group (ISG). The solution relies on authoring tools developed for Unity, and the resulting AR applications can be deployed on platforms such as iOS, Android, and VisionOS. A validation scenario for data center maintenance is detailed to illustrate the solution. Various examples based on the glTF format are also provided to show the potential of this novel architecture.
Focus360: Guiding User Attention in Immersive Videos for VR
Hall: CEZEMBRE, Booth ID : 19
Paulo Vitor Santana da Silva, Advanced Knowledge Center in Immersive Technologies (AKCIT) ; Lucas Lima Neves, Advanced Knowledge Center in Immersive Technologies (AKCIT) ; Rafael Alves Gois, Advanced Knowledge Center in Immersive Technologies (AKCIT) ; Diogo Fernandes, Advanced Knowledge Center in Immersive Technologies (AKCIT) ; Rafael Sousa, Advanced Knowledge Center in Immersive Technologies (AKCIT) ; Arlindo Galvão, Advanced Knowledge Center in Immersive Technologies (AKCIT)
This demo introduces Focus360, a system designed to enhance user engagement in 360° VR videos by guiding attention to key elements within the scene. Using natural language descriptions, the system identifies important elements and applies a combination of visual effects to guide attention seamlessly. At the demonstration venue, participants can experience a 360° Safari Tour, showcasing the system's ability to improve user focus while maintaining an immersive experience.
Multi-resolution audiovisual representation of sound fields recorded with a 7th-order ambisonic microphone array
Hall: CEZEMBRE, Booth ID : 18
Francois Salmon, Noise Makers
As part of the development of an ambisonic microphone array, it is of interest to determine the spatial accuracy required to capture and reproduce a sound field over headphones. A 7th-order ambisonic microphone array consisting of 480 MEMS microphones has been developed to capture sound fields with high resolution. This research demonstration allows us to experience the drawbacks of reducing the spatial accuracy of an ambisonic representation obtained with such a prototype. It allows listening to the captured sound field at different resolutions and visualising the resulting accuracy in terms of energy over the sphere. The visual information is overlaid on 360 videos to help identify the sound sources around the listener. Such a development can find applications in the context of remote monitoring and will enable the study of interactions between auditory and visual cues in the context of Virtual Reality experiences.
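For reference, an order-N ambisonic representation carries (N+1)^2 channels, so the 7th-order capture has 64; reducing the spatial resolution amounts to keeping only the lower-order channels. A minimal sketch, assuming ACN channel ordering in a NumPy array (the prototype's actual processing chain is not shown here):

```python
import numpy as np

def truncate_order(hoa, target_order):
    """Keep only the channels of an HOA signal up to `target_order`.

    hoa: array of shape (num_samples, (N+1)**2) in ACN channel ordering.
    """
    keep = (target_order + 1) ** 2
    return hoa[:, :keep]

# Example: a 7th-order recording has 64 channels; truncating to 1st order
# keeps the 4 channels W, Y, Z, X (ACN 0-3).
hoa7 = np.zeros((48000, 64))
foa = truncate_order(hoa7, 1)
assert foa.shape[1] == 4
```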
Vision Language Model-Based Solution for Obstruction Attack in AR: A Meta Quest 3 Implementation
Hall: CEZEMBRE, Booth ID : 4
Yanming Xiu, Duke University ; Maria Gorlatova, Duke University
Obstruction attacks in Augmented Reality (AR) pose significant challenges by obscuring critical real-world objects. This work demonstrates the first implementation of obstruction detection on a video see-through head-mounted display (HMD), the Meta Quest 3. Leveraging a vision language model (VLM) and a multi-modal object detection model, our system detects obstructions by analyzing both raw and augmented images. Due to limited access to raw camera feeds, the system employs an image-capturing approach using Oculus casting, capturing a sequence of images and identifying the raw image among them. Our implementation showcases the feasibility of effective obstruction detection in AR environments and highlights future opportunities for improving real-time detection through enhanced camera access.
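A minimal sketch of the raw-versus-augmented comparison described above, assuming a generic multimodal call `vlm(prompt, images)`; the actual system additionally uses a multi-modal object detector and recovers raw frames via Oculus casting:

```python
def detect_obstruction(raw_frame, augmented_frame, vlm):
    """Ask a vision language model whether AR content hides important real objects."""
    prompt = (
        "Image 1 is the raw camera view; image 2 is the same view with virtual "
        "AR content rendered on top. List any important real-world objects "
        "(signs, people, vehicles, screens) that the virtual content obscures "
        "in image 2, or reply 'none'."
    )
    answer = vlm(prompt, [raw_frame, augmented_frame])
    return None if answer.strip().lower() == "none" else answer
```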
Using Virtual Reality to Raise Awareness of Communication Challenges Faced by Individuals with Hearing Loss
Hall: CEZEMBRE, Booth ID : 17
Joanna Luberadzka, Eurecat ; Enric Guso, Eurecat ; Umut Sayin, Eurecat ; Adan Garriga, Eurecat
Hearing loss is typically managed with hearing aids, which amplify sound based on the user's hearing profile. However, unlike glasses for vision, hearing aids cannot address the reduced time-frequency resolution of the auditory system. To raise awareness of the challenges faced by hearing aid users, particularly in complex acoustic environments, we use virtual reality (VR). We integrate hearing loss and hearing aid simulators into a VR application, allowing users to experience the altered soundscape in an immersive way.
Designing Fractal Expressions for Immersive Aesthetic Experiences
Hall: CEZEMBRE, Booth ID : 20
Christian Geiger, University of Applied Sciences Duesseldorf ; Emil Gerhardt, University of Applied Sciences Duesseldorf ; Mitja Saer, University of Applied Sciences Duesseldorf
Aesthetic experiences are closely related to human emotions and experiences as well as the perception of aesthetic objects and phenomena. We consider interaction with generated immersive worlds to be an immersive aesthetic experience if it is designed independently of any functional use and can be experienced in an aesthetically pleasing way. The possibilities of generative techniques such as AI and other algorithms and visualization through the integration of virtual reality open up new aesthetic possibilities. We present the results of a collaboration between scientists and designers/artists in the development of a framework for three-dimensional fractals [5] and its application in various scenarios in the field of festivals, performances and exhibitions.
XiloVR: Simulation for Teaching and Technique Preservation
Hall: SURCOUF, Booth ID : 2
Samuel Costa, Federal University of Rio Grande do Norte ; Bianca Miranda, Federal University of Rio Grande do Norte ; Stefane de Assis Orichuela, Federal University of Rio Grande do Norte ; Alyson Souza, Federal University of Rio Grande do Norte
This demonstration introduces a novel approach to exploring the art of woodcut printing through an interactive Virtual Reality (VR) environment, leveraging the educational potential of immersive technologies to promote art and culture. In this context, we developed a physical prototype designed to replace the primary functionalities of hand-based user interaction typically performed with VR controllers. Instead, the system utilizes representations of woodcut production tools, tracked by a sensor, to enable interaction within the immersive environment. The system aims to provide a practical, educational, and immersive experience in artistic creation, innovating teaching methodologies and ensuring the preservation of woodcut printing knowledge for future generations. By combining physical and digital elements, this solution bridges technological accessibility with cultural heritage promotion, offering an engaging and transformative learning platform.
My Co-worker ChatGPT: Development of an XR Application for Embodied Artificial Intelligence in Work Environments
Hall: CEZEMBRE, Booth ID : 3
Philipp Krop, University of Wurzburg ; David Obremski, University of Wurzburg ; Astrid Carolus, University of Wurzburg ; Marc Latoschik, University of Wurzburg ; Carolin Wienrich, University of Wurzburg
With recent developments in spatial computing, work contexts might shift to augmented reality. Embodied AIs - virtual conversational agents backed by AI systems - have the potential to enhance these contexts and open up more communication channels than just text. To support knowledge transfer from virtual agent research to the general populace, we developed My CoWorker ChatGPT - an interactive demo where employees can try out various embodied AIs in a virtual office or, using augmented reality, in their own office. We use state-of-the-art speech synthesis and body-scanning technology to create believable and trustworthy AI assistants. The demo was shown at multiple events throughout Germany, where it was well received and sparked fruitful conversations about the possibilities of embodied AI in work contexts.
I hear, see, speak & do: Bringing Multimodal Information Processing to Intelligent Virtual Agents for Natural Human-AI Communication
Hall: SURCOUF, Booth ID : 12
Ke Li, University of Hamburg ; Fariba Mostajeran, University of Hamburg ; Sebastian Rings, Universitat Hamburg ; Lucie Kruse, Universitat Hamburg ; Susanne Schmidt, University of Canterbury ; Michael Arz, University of Hamburg ; Erik Wolf, University of Hamburg ; Frank Steinicke, Universitat Hamburg
We present an XR framework with a streamlined workflow for creating and interacting with intelligent virtual agents (IVAs) with multimodal information processing capabilities using commercially available AI tools and cloud services such as large language and vision models. The system supports: (i) the integration of high-quality, customizable virtual 3D human models for visual representations of IVAs and (ii) multimodal communication with generative AI-driven IVAs in immersive XR.
Fashion Beneath the Skin - a Fashion Exhibition Experience in Social Virtual Reality
Hall: SURCOUF, Booth ID : 6
Karolina Wylezek, CWI ; Irene Viola, Centrum Wiskunde en Informatica (CWI) ; Pablo Cesar, Centrum Wiskunde & Informatica (CWI) ; Jack Jansen, Centrum Wiskunde & Informatica ; Suzanne Mulder, Centraal Museum Utrecht ; Dylan Eno, Dylan Eno ; Wytze Koppelman, Netherlands Institute for Sound and Vision ; Marta Franceschini, European Fashion Heritage Association ; Marco Rendina, European Fashion Heritage Association ; Ninke Bloemberg, Centraal Museum Utrecht
Social VR allows users to interact with each other and explore virtual spaces together. This demo presents a social VR fashion museum, where visitors engage with fashion artefacts. The experience spans three different spaces designed for different goals. This application, created following a human-centred approach, explores how visitors interact with each other and the exhibits, considering their 3D volumetric representation and the changing environmental context throughout the experience.
Demonstrating WAVY: a hand wearable with force and vibrotactile feedback for multimodal interaction in Virtual Reality
Hall: CEZEMBRE, Booth ID : 15
Marion Pontreau, Université Paris-Saclay, CEA, List ; Céphise Louison, CEA, List ; Pierre-Henri Oréfice, CEA, List ; Sylvain Bouchigny, CEA, List ; Thanh-loan Sarah LE, Institut des Systèmes Intelligents et de Robotique ; David Gueorguiev, Institute of Neuroscience ; Sabrina Panëels, CEA, List
While the public has access to audio and visual stimuli in virtual reality (VR) thanks to affordable and light headsets, the haptic sense (i.e. the sense of touch) is often lacking or restricted to controllers' vibrations. Moreover, commercial haptic wearables remain too expensive for the mass market. Thus, we designed an affordable and light hand wearable that combines tracking, multi-point vibrations, and force feedback for the palm and the fingers. The wearable is demonstrated in a VR scenario.
LLM-powered Gaussian Splatting in VR interactions
Hall: CEZEMBRE, Booth ID : 14
Haotian Mao, Shanghai Jiao Tong University ; Zhuoxiong Xu, Shanghai Jiao Tong University ; Siyue Wei, Shanghai Jiao Tong University ; Yule Quan, Shanghai Jiao Tong University ; Nianchen Deng, Shanghai AI Lab ; Xubo Yang, Shanghai Jiao Tong University
Recent advances in radiance field rendering, particularly 3D Gaussian Splatting (3DGS), have demonstrated significant potential for VR content creation, offering both high-quality rendering and an efficient production pipeline. However, current physics-based interaction systems for 3DGS are limited to either simplistic, unrealistic simulations or require substantial user input for complex scenes, largely due to the lack of scene comprehension. In this demonstration, we present a highly realistic interactive VR system powered by large language models (LLMs). After object-aware GS reconstruction, we prompt GPT-4o to analyze the physical properties of objects in the scene, which then guide physical simulations that adhere to real-world phenomena. Additionally, we design a GPT-assisted GS inpainting module to complete the areas occluded by manipulated objects. To facilitate rich interaction, we introduce a computationally efficient physical simulation framework through a PBD-based unified interpolation method, which supports various forms of physical interactions. In our research demonstrations, we reconstruct a variety of scenes enhanced by the LLM's understanding, showcasing how our VR system can support complex, realistic interactions without additional manual design or annotation.
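A minimal sketch of the property-estimation step, assuming a generic `llm` completion call that returns JSON; the prompt wording and parameter set are illustrative, not the demo's exact interface to GPT-4o:

```python
import json

def estimate_properties(object_names, scene_description, llm):
    """Ask the LLM for plausible physical parameters of reconstructed objects."""
    prompt = (
        f"Scene: {scene_description}\n"
        f"For each object in {object_names}, return JSON mapping the object name to "
        "{'mass_kg': float, 'friction': float, 'stiffness': float, 'deformable': bool} "
        "with physically plausible values."
    )
    return json.loads(llm(prompt))

# The returned parameters would then configure the PBD-based simulation,
# e.g. stiffness for soft bodies and friction for contact handling.
```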
GRIPPY: A VR Grip Controller for Combating Sarcopenia in Elderly
Hall: CEZEMBRE, Booth ID : 16
Hoi Lam Leong, University of Nottingham Ningbo China ; Jiang WU, University of Nottingham, Ningbo, China ; Dr. Yoke Chin Lai, University of Nottingham Ningbo China ; Loic Faulon, University of Nottingham Ningbo China ; Xu Sun, University of Nottingham Ningbo China ; Boon Giin Lee, University of Nottingham Ningbo China
As the global population ages, sarcopenia (age-related muscle decline) demands innovative solutions. This paper introduces GRIPPY, a VR grip controller that transforms basic handgrip exercises into immersive, gamified tasks. By integrating sensor-based interaction and VR gameplay, older adults can strengthen grip while maintaining higher motivation. A preliminary test with three participants (ages 68-80) revealed key usability insights, underscoring GRIPPY's potential for enhanced rehabilitation. Future work will refine its design and validate long-term efficacy.
Xareus: a Framework to Create Interactive Applications without Coding
Hall: SURCOUF, Booth ID : 13
Lysa Gramoli, Univ. Rennes, INSA Rennes, Inria, CNRS, IRISA ; Florian Nouviale, Univ. Rennes, INSA Rennes, Inria, CNRS, IRISA ; Adrien Reuzeau, Univ. Rennes, Inria, CNRS, IRISA ; Alexandre Audinot, Univ. Rennes, INSA Rennes, Inria, CNRS, IRISA ; Mathieu Risy, Univ. Rennes, INSA Rennes, Inria, CNRS, IRISA ; Tangui Marchand-Guerniou, Univ. Rennes, INSA Rennes, Inria, CNRS, IRISA ; Mae Mavromatis, Univ. Rennes, INSA Rennes, Inria, CNRS, IRISA ; Bruno Arnaldi, Univ. Rennes, INSA Rennes, Inria, CNRS, IRISA ; Valérie Gouranton, Univ. Rennes, INSA Rennes, Inria, CNRS, IRISA
Creating interactive XR applications is a complex task. It involves people with different backgrounds, which can lead to communication problems, and a lot of coding that is rarely formalized, which leads to a lack of reusability. Furthermore, the domain expert cannot be directly involved in the creation process. Therefore, we propose Xareus, a framework designed to simplify and accelerate the creation of interactive applications with little coding. To help domain experts and developers, Xareus includes several features to make virtual objects interactive, manage virtual humans, and create a scenario using a graphical interface or VR interaction. Our framework is compatible with the Unity Engine and suitable for various fields such as training, video games, or industry. During the demo, participants will have the opportunity to test these features. A video can be found here: iqdU3-202As
Advancing Critical Care Skills: Immersive VR Training Powered by Real-World Patient Data
Hall: CEZEMBRE, Booth ID : 13
Luisa Theelke, TUM University Hospital, Technical University Munich ; Diana Beksultanow, Technical University of Munich ; Lydia Marquardt, Technical University of Munich ; Philipp Gulde, Technical University of Munich ; Lisa Vallines, Siemens Healthineers ; Daniel Roth, Technical University of Munich
VR simulations are becoming essential for staff training in clinical care, specifically in environments that cannot be trained well during regular operations, such as critical care. This paper presents a novel system that aims to further close the gap between simulation and reality by integrating real patient data into immersive training scenarios in the context of acute care. Focused on sepsis recognition, the VR simulation introduces trainees to clinical routines and trains them to make informed decisions while observing evolving patient conditions. By engaging with dynamic disease progression, it fosters understanding of critical conditions in a time-sensitive context. Preliminary feedback from a pilot assessment with nursing professionals highlighted its value for trainees and its potential to enhance preparedness and decision-making skills in real-world scenarios.
Digital Time Machine: A Virtual Reality Reconstruction of the Southwestern Zeitgeist Through the Lens of Clark Hulings' Artistic Legacy
Hall: CEZEMBRE, Booth ID : 12
Jeffrey Price, The University of Texas at Dallas ; Hamida Khatri, The University of Texas at Dallas ; Brandon Coffey, The University of Texas at Dallas ; Chris Gauthier, The University of Texas at Dallas ; Jacqueline Garza, The University of Texas at Dallas ; Evan Barreiro, Evan Barreiro Designs
This research demo showcases the "Digital Time Machine," a virtual reality (VR) experience designed to transport users to 1974 Chimayo, New Mexico, as captured through the artistry of Clark Hulings. By integrating photorealistic VR environments, artistic stylization, and user-centered design, the project demonstrates the potential of immersive technology to deepen cultural appreciation, historical understanding, and artistic engagement. Developed in collaboration with the New Mexico Museum of Art Vladem Contemporary, this project exemplifies how VR can bridge temporal and spatial divides, allowing museum audiences to step into the world of Hulings' iconic painting.
Virtual Exhibition as a Portal to Authentic Art Experiences: Exploring the Immersive Reproduction of Exhibition in Practice
Hall: SURCOUF, Booth ID : 17
Yuanyuan Yin, University of Southampton ; Lian Pan, University of Southampton ; Xiao Wei, University of Southampton ; Ruohan Tang, University of Southampton ; Christopher O'Connor, University of Southampton
Exhibitions are traditionally constrained by fixed times and locations, limiting access for a wider audience. Immersive reproduction, leveraging 3D reconstruction and virtual reality technologies, offers a way to transcend these temporal and spatial limitations. While previous research has largely focused on integrating mixed-reality elements within physical museum spaces, treating VR as a supplementary tool, our research positions VR as the primary "portal" to authentic artistic experiences. This demo showcases the complete process of immersive reproduction for Following the Fish, an exhibition featured at the 18th Venice International Architecture Biennale in 2023, offering valuable insights and references for future digital archiving and immersive reproduction of exhibitions. The reproduced immersive exhibition was constructed at a 1:1 scale of the physical exhibition. When the available space permits (i.e., when the space is equal to or larger than the original venue), visitors can walk through the virtual environment, recreating an authentic artistic experience.
Demonstration of VirtuEleDent: A Compact XR Tooth-Cutting Training System Using a Physical EMR-based Dental Handpiece and Teeth Model
Hall: SURCOUF, Booth ID : 10
Yuhui Wang, Tohoku University ; Kazuki Takashima, Shibaura Institute of Technology ; Masamitsu Ito, Wacom Co.,Ltd. ; Takeshi Kobori, Wacom Co., Ltd. EMR Technology ; Tomo Asakura, Wacom Co., Ltd. ; Kazuyuki Fujita, Tohoku University ; Hong Guang, Tohoku University ; Yoshifumi Kitamura, Tohoku University
Dental cutting is a crucial skill for dental students. However, current VR dental cutting training systems rely on bulky and costly haptic devices, which reduce opportunities for individual practice. Moreover, the limitations imposed by the maximum reaction force of an active haptic device impact the range of tooth hardness that can be reproduced. We propose a compact XR tooth-cutting training system, VirtuEleDent, that employs a passive haptic approach using a 3D-printed physical teeth model and a three-dimensionally tracked handpiece. Their spatial relationship is accurately rendered in the virtual environment of a mobile head-mounted display (HMD), providing users with realistic haptic sensations during virtual tooth-cutting exercises. Our tracking platform is operated using electromagnetic resonance (EMR) stylus technology and consists of a digitizer (i.e., tracking board) and a handpiece device. A customized EMR stylus unit (i.e., resonance coil) and an inertial measurement unit (IMU) sensor are installed inside the handpiece, allowing for precise measurement of its tip's 3D position and orientation. This setup enables the learner to physically manipulate dexterous handpieces on the teeth model while experiencing virtual tooth-cutting in the HMD. This is a companion demo to the IEEE VR 2025 conference paper "VirtuEleDent: A Compact XR Tooth-Cutting Training System Using a Physical EMR-based Dental Handpiece and Teeth Model." To watch a video about VirtuEleDent, please visit ZvHZ6IEAhyM.
Flexible Virtual Lenses to Magnify Virtual Environments Locally Without Losing Context Information
Hall: SURCOUF, Booth ID : 16
Julien Ducrocq, Nara Institute of Science and Technology ; Yutaro Hirao, Nara Institute of Science and Technology ; Monica Perusquia-Hernandez, Nara Institute of Science and Technology ; Hideaki Uchiyama, Nara Institute of Science and Technology ; Kiyoshi Kiyokawa, Nara Institute of Science and Technology
We introduce virtual lenses that magnify image regions locally without losing context elements. We developed a unified formalism to design virtual lenses of any closed shape that can be moved interactively on a screen and within a virtual reality (VR) application in real time. Moreover, our lenses are versatile: the user can change their parameters to modify the appearance of the magnified image region dynamically. These lenses are suitable for several applications, including virtual tourism and video surveillance. Our demo showcases an implementation of a lens changing shape in real time on a non-VR monitor, followed by a proof-of-concept VR version where the user can move a lens with a VR controller to deform remote textures at different distances.
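A minimal sketch of the local-magnification idea for the simplest closed shape, a circle; the demo's unified formalism covers arbitrary closed shapes and runs in real time, so this NumPy remap is only an illustration:

```python
import numpy as np

def lens_sample_coords(h, w, center, radius, magnification):
    """Per-pixel source coordinates for a circular magnifying lens.

    Pixels inside the lens sample closer to the lens center (magnified);
    pixels outside keep their own coordinates, preserving context.
    """
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = xs - center[0], ys - center[1]
    dist = np.sqrt(dx ** 2 + dy ** 2)
    scale = np.where(dist < radius, 1.0 / magnification, 1.0)
    return center[0] + dx * scale, center[1] + dy * scale

# The (src_x, src_y) maps feed a texture lookup / image remap step
# (e.g. cv2.remap or a shader) to render the locally magnified view.
```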
ARES: Augmented Reality and AI Assistance Technologies for Safety and Efficiency Optimization in Explosive Ordnance Exploration
Hall: SURCOUF, Booth ID : 1
Paul Chojecki, Fraunhofer Heinrich Hertz Institute HHI ; David Przewozny, Fraunhofer Heinrich Hertz Institute HHI ; Mustafa Lafci, Fraunhofer Institute for Telecommunications ; Mykyta Kovalenko, Fraunhofer Heinrich Hertz Institute HHI ; Pia Packmohr, UseTree GmbH ; Hoa van Thanh, UseTree GmbH ; Laura Dreßler, UseTree GmbH ; Wolfgang Süß, SENSYS GmbH ; Stefan Huber, UseTree GmbH ; Sebastian Bosse, Fraunhofer HHI
Unexploded ordnance (UXO) detection remains a critical challenge in past and present conflict zones. Magnetometer surveys are a key method for identifying UXO, but require precise, systematic scanning and expert interpretation. The ARES project leverages Augmented Reality (AR) and Artificial Intelligence (AI) to enhance UXO exploration by improving quality, efficiency, and safety. Using AR glasses, the system guides users through survey areas, assisting with lane alignment, walking speed, and maintaining magnetometer stability. AI-driven processing transforms sparse magnetometer data into dense magnetic maps, while suspected UXO points are directly visualized in the AR display. These features streamline on-site navigation and analysis, offering an intuitive decision-support system for field experts while maintaining reliance on human expertise.
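As a rough illustration of the map-densification step, here is a simple interpolation baseline that grids sparse magnetometer readings; the ARES project itself uses AI-driven processing for this, so the sketch only shows the data flow, not the project's method:

```python
import numpy as np
from scipy.interpolate import griddata

def dense_magnetic_map(positions, readings_nt, grid_res=0.25):
    """positions: (N, 2) survey coordinates in metres; readings_nt: (N,) field values in nT."""
    x = np.arange(positions[:, 0].min(), positions[:, 0].max(), grid_res)
    y = np.arange(positions[:, 1].min(), positions[:, 1].max(), grid_res)
    gx, gy = np.meshgrid(x, y)
    dense = griddata(positions, readings_nt, (gx, gy), method="cubic")
    return gx, gy, dense  # dense map, e.g. for visualization in the AR display
```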