The official banner for the IEEE Conference on Virtual Reality + User Interfaces, comprising a kiwi wearing a VR headset overlaid on an image of Mount Cook and a braided river.

Posters

Monday posters

Collaboration, Virtual Humans and Social Applications

PaintBranch: Asynchronous Collaborative Art in Virtual Reality (Booth ID: 1001)

Ana David, Instituto Superior Técnico, University of Lisbon; Daniele Giunchi, University College London; Stuart James, University College London (UCL); Anthony Steed, University College London; Augusto Esteves, Instituto Superior Técnico, University of Lisbon

We describe the development of PaintBranch, a virtual reality prototype designed to support asynchronous collaborative art. By incorporating version control (VC), PaintBranch aims to promote creative idea generation and reduce conflicts during collaboration. In a user study, eight participants were organized into four pairs and worked asynchronously for a week, with each participant completing four painting sessions. We analyzed the collaboration patterns and feature usage that emerged. Results indicated that experienced artists used these features effectively to meet collaborative and personal goals.

Enhancing Asynchronous Learning in Immersive Environments: Exploring Baseline Modalities for Avatar-Based AR Guidance (Booth ID: 1002)

Anjela Mayer, Karlsruhe Institute of Technology; Ines Miguel-Alonso, University of Burgos; Jean-Rémy Chardonnet, Arts et Metiers Institute of Technology; Andrés Bustillo, Universidad de Burgos; Jivka Ovtcharova, Institute for Information Management in Engineering

This study investigates baseline modalities for evaluating Augmented Reality (AR) avatar guidance in asynchronous collaboration on spatially complex tasks. A formative study with three participants compared smartphone video, HoloLens video, and AR avatars across usability, collaboration, learning, and spatial awareness. Results suggest smartphone video as a reliable baseline due to usability and familiarity. Avatars showed potential for enhancing spatial awareness, task engagement, and learning outcomes but require interface improvements. Despite the small sample size, this study offers insights into immersive technologies for industrial training and collaboration.

Emotion-Aware Personalized Co-Speech Motion Generation (Booth ID: 1003)

Jiayi Zhao, Beijing Institute of Technology; Nan Gao, Institute of Automation, Chinese Academy of Sciences; Dongdong Weng, Beijing Institute of Technology; Yihua Bao, Beijing Institute of Technology; Xiaoqiang Liu, Kuaishou Technology; Yan Zhou, Kuaishou Technology

Co-speech motion generation is a challenging task requiring alignment between motion, speech rhythm, and semantics. To improve adaptability, style control mechanisms, including emotion perception and personalized motion generation, are needed. We propose a hybrid emotion-aware personalized co-speech motion generation system, consisting of an LLM-based emotion-aware module and a diffusion-based rhythmic motion generation module, ensuring consistency between motion and speech emotion. Additionally, we introduce a style transfer module to adapt the generated motions to the speaker’s style. Experimental results show the system generates stylized and emotionally consistent motions, enhancing the realism and stylization of virtual humans.

Comparison of Calculation Modes for Virtual Co-embodiment: Movement Average and Vector Summation (Booth ID: 1004)

Marie Michael Morita, Ritsumeikan University; Yuji Fujihara, Ritsumeikan University; Tetsuro Nakamura, Ritsumeikan University; Miki Matsumuro, Cornell University; Fumihisa Shibata, Ritsumeikan University; Asako Kimura, Ritsumeikan University; Norimichi Kitagawa, Ritsumeikan University

When two individuals control a virtual avatar simultaneously (virtual co-embodiment), their movement performance can, somewhat surprisingly, improve compared to when they control an avatar alone. This study investigated whether movement performance is affected by how individuals' movements are mapped onto the avatar: by averaging the positions of each individual's body parts or by summing the movement vectors of each individual's body parts. The experiment, in which participants reached a target with the avatar’s hand under these two types of co-embodiment, showed that vector summation was more effective in improving body movements than position averaging.
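
The two calculation modes contrasted above can be summarized concisely. The sketch below is a minimal illustration of the general idea, not the authors' implementation; the function names, the per-frame update, and the placeholder coordinates are assumptions.

```python
import numpy as np

def average_mode(pos_a, pos_b):
    """Avatar hand position as the mean of the two users' hand positions."""
    return (np.asarray(pos_a) + np.asarray(pos_b)) / 2.0

def vector_sum_mode(avatar_pos, delta_a, delta_b):
    """Avatar hand moves by the sum of both users' movement vectors for this frame."""
    return np.asarray(avatar_pos) + np.asarray(delta_a) + np.asarray(delta_b)

# Example: both users move their hands 2 cm forward in one frame.
print(average_mode([0.0, 0.0, 0.30], [0.0, 0.0, 0.34]))                    # midpoint of the two hands
print(vector_sum_mode([0.0, 0.0, 0.32], [0, 0, 0.02], [0, 0, 0.02]))       # combined displacement
```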

Danmaku Avatar: Enabling Co-viewing Experiences in Virtual Reality via Danmaku (Booth ID: 1005)

Xiaofeng Dou, ShanghaiTech University; Jiahe Dong, ShanghaiTech University; Shuhao Zhang, ShanghaiTech University; Qian Zhu, The Hong Kong University of Science and Technology; Quan Li, ShanghaiTech University

With the evolution of social interaction needs, danmaku comments on 2D screens have emerged in Asia, offering users a virtual co-viewing experience. With advancements in virtual reality (VR) technology, there is an increasing demand for similar co-viewing experiences in VR settings. In this study, we designed the Danmaku Avatar System to explore the adaptation of danmaku into VR environments to enhance shared viewing experiences. Comparative experiments showed that danmaku avatars substantially improved social presence and increased the desire for communication. These findings lay the groundwork for further refining danmaku representations in VR, advancing virtual social interactions and immersive viewing experiences.

Realtime-Rendering of Dynamic Scenes with Neural Radiance Fields (Booth ID: 1006)

Kilian L. Krause, Fraunhofer HHI; Wieland Morgenstern, Fraunhofer HHI; Anna Hilsmann, Fraunhofer HHI; Peter Eisert, Fraunhofer HHI

Digitizing the humans and scenes around us is a challenging research topic, with results finding increasingly widespread use in modern applications. This paper addresses the challenge of real-time rendering of dynamic 3D scenes. The proposed approach utilizes Instant Neural Graphics Primitives (iNGP), training a radiance field for each individual frame. To achieve playback rates suitable for interactive VR experiences, we developed efficient data storage and loading methods. We present a modular framework for playing real-time free-viewpoint videos on a conventional desktop PC, showcasing research advancements in dynamic radiance fields.

Enhancing Resilience through AI-Driven Virtual Reality (Booth ID: 1007)

Talya Maness, Reichman University; Anat Klomek Brunstein, Reichman University; Doron Friedman, Reichman University

We introduce AI-enhanced virtual humans to address the shortage of human professionals in mental health. Specifically, our system enables users to learn and practice resilience skills by assisting a distressed virtual human, utilizing the established Resilience Plan Intervention (RPI). One hundred participants underwent a brief training session in the RPI and then applied their newly acquired resilience strategies in simulated interactions with a distressed individual, conducted through either VR or traditional role-playing. The results revealed significant enhancements in reported resilience for both groups, accompanied by significant increases in reported levels of cognitive flexibility, emotion regulation, and help-seeking abilities.

Who Will They Choose?: Avatar Selection Without a Gender-Race-Match (Booth ID: 1008)

Alice Guth, Davidson College; Tabitha C. Peck, Davidson College

Avatars in virtual spaces often serve as extensions of users. Avatar choice depends on context, with users typically selecting avatars they identify with. Race and gender can influence avatar selection. This study explores avatar choice when a gender-race-matched avatar is unavailable. We examine whether participants are more likely to choose a race-matched or a gender-matched avatar in the absence of a gender-race-matched option. Results show that all participants selected avatars matching at least one aspect of their identity. When forced to choose, participants favored gender-matched avatars over race-matched ones, suggesting gender may be more central to one's identity during avatar selection.

Exploring the Effects of Embodied Agents' Verbal and Nonverbal Dominance on Decision-Making: A Study Design (Booth ID: 1009)

Taeyeon Kim, Pusan National University; Hyeongil Nam, University of Calgary; Ahmad A Fouad, University of Calgary; Kangsoo Kim, University of Calgary; Myungho Lee, Pusan National University

As Artificial Intelligence and Virtual Reality evolve, intelligent virtual agents are shifting from mere information retrieval to significantly influencing users’ cognition and decision-making processes as social actors. This study examines the impact of dominance, a critical element of social interaction, by incorporating it into agents through verbal and non-verbal cues. We present our preliminary research framework, which includes both the experimental design and the development of the agent system. The findings of this study are anticipated to offer valuable insights into the dynamics of user interactions with virtual agents and their broader implications.

When Faces Are Masked: Exploring Emotional Expression Through Body Postures in Virtual Reality (Booth ID: 1010)

Inas Redjem, Univ Rennes; Julien Cagnoncle, Univ Rennes; Arnaud Huaulmé, Univ Rennes; Alexandre Audinot, Univ Rennes; Florian Nouviale, Univ Rennes; Mathieu Risy, Univ Rennes; Valérie Gouranton, Univ Rennes; Estelle Michinov, Univ Rennes; Pierre Jannin, Univ Rennes

As simulation advances in healthcare training, understanding how body-only signals convey emotions in virtual environments is crucial, particularly with masked virtual agents. This study involved 41 nursing students evaluating 16 faceless fear and surprise postures to assess their realism and the emotion conveyed. Although such postures are well recognized in 2D human representations, only three of the 16 were correctly identified by more than 50% of participants when displayed on a 3D virtual agent. These results highlight the impact of virtual agent design on emotion recognition and the need for rigorous testing and refinement to improve emotional expressiveness and realism.

Exploring Human Reactions to Telepresence Drones: A User Study on Safety and Trust Using A Simulated Drone in a VR Environment (Booth ID: 1011)

Shihui Xu, Waseda University; Like Wu, Waseda University; Wenjie Liao, Waseda University; Shigeru Fujimura, Waseda University

Telepresence using drones is a promising technology. This paper presents a user study within a simulated Virtual Reality (VR) remote collaboration system to shed light on users' perceived safety and trust regarding telepresence drones. We manipulated drone operation variables, including drone failure, flight speed, and distance restriction, to assess their impact on user perceptions. Quantitative and qualitative analysis shows that drone failures significantly reduce safety and trust for both local and remote users. Flight speed affects safety and trust for local workers but only impacts safety for remote experts. Distance restrictions enhance both safety and trust.

Illusion Worlds: Deceptive UI Attacks in Social VR (Booth ID: 1012)

Junhee Lee, Kwangwoon University; Hwanjo Heo, ETRI; Seungwon Woo, ETRI; Minseok Kim, Kwangwoon University; Jongseop Kim, Kwangwoon University; Jinwoo Kim, Kwangwoon University

Social Virtual Reality (VR) platforms have surged in popularity, yet their security risks remain underexplored. This paper presents four novel UI attacks that covertly manipulate users into performing harmful actions through deceptive virtual content. Implemented on VRChat and validated in an IRB-approved study with 30 participants, these attacks demonstrate how deceptive elements can mislead users into malicious actions without their awareness. To address these vulnerabilities, we propose MetaScanner, a proactive countermeasure that rapidly analyzes objects and scripts in virtual worlds, detecting suspicious elements within seconds.

Willingness to Interact: Social Resources Facilitate Pulling Actions toward Social Avatars in Virtual Reality (Booth ID: 1013)

Jaejoon Jeong, Chonnam National University; Hwaryung Lee, Chonnam National University; Ji-eun Shin, Chonnam National University; Daeun Kim, Chonnam National University; Sei Kang, Chonnam National University; Gun A. Lee, University of South Australia; Soo-Hyung Kim, Chonnam National University; Hyung-Jeong Yang, Chonnam National University; Seungwon Kim, Chonnam National University

This study examined whether individuals with richer real-world social resources exhibit a greater willingness to interact with social avatars in a virtual environment (VE). Employing a psychological approach, we assessed willingness to interact by measuring response times for pulling or pushing avatars. The results revealed that participants with richer social resources performed faster pulling actions toward social avatars, indicating a heightened willingness to engage. Notably, these effects were specific to social targets (i.e., avatars) and were not observed with non-social targets (i.e., a flag).

Statistical Blendshape Calculation and Analysis for Graphics Applications (Booth ID: 1014)

Shuxian Li, Lenovo Research; Tianyue Wang, Lenovo Research; Yiqiang Yan, Lenovo Research; Chris Twombly, Lenovo Research

Real-time facial avatar animation is widely used in entertainment, office, business, and other fields, where blendshapes have become a common industry animation solution. We independently developed an accurate blendshape prediction system for low-power VR applications using a webcam. First, feature vectors are extracted through affine transformation and segmentation. Using further transformation and regression analysis, we created statistical models with significant predictive power. Post-processing, including smoothing, filtering, and nonlinear transformations, further improves response stability. We achieved accuracy similar to ARKit 6. Our model has low computational requirements while delivering a consistent, accurate, and smooth visual experience.
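
The post-processing stage mentioned above (smoothing and filtering of predicted weights) is often realized with a simple recursive filter. The sketch below is a generic illustration under that assumption, not the authors' pipeline; the blendshape name and the smoothing factor are placeholders.

```python
def smooth_blendshapes(frames, alpha=0.3):
    """Exponentially smooth per-frame blendshape weight dictionaries.

    frames: iterable of dicts mapping blendshape name -> weight in [0, 1].
    alpha:  smoothing factor; lower values give more stability, higher values
            give faster response.
    """
    state = {}
    for weights in frames:
        for name, value in weights.items():
            prev = state.get(name, value)
            state[name] = alpha * value + (1.0 - alpha) * prev
        yield dict(state)

# Example: a noisy "jawOpen" prediction is damped into a smoother trajectory.
raw = [{"jawOpen": w} for w in (0.10, 0.80, 0.20, 0.75, 0.25)]
for smoothed in smooth_blendshapes(raw):
    print(round(smoothed["jawOpen"], 3))
```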

Augmented Hide-and-Seek: Evaluating Spatial Decision-Making and Player Experience in a Multiplayer Location-based Game (Booth ID: 1015)

Yasas Sri Wickramasinghe, University of Canterbury; Heide Lukosch, University of Canterbury; James Everett, Niantic Aotearoa NZ; Stephan Lukosch, University of Canterbury

This paper presents a study on the impact of enabling remote gameplay via Augmented Reality (AR) on spatial decision-making and player experience in a hide-and-seek game. We designed a remote multiplayer handheld-based AR game and evaluated how it influences players’ spatial decision-making strategies, engagement, and gameplay experience. In a study with 60 participants, we compared remote gameplay in our AR game to traditional hide-and-seek. Our findings show that AR enhances gameplay by improving spatial interactions, decision-making, and collaboration. Despite navigation challenges, AR has the potential to foster player engagement and social interaction, contributing to the design of future AR games and beyond.

Common Ground: Establishing Group Awareness for Cross-Virtuality Collaborative Geospatial Analysis (Booth ID: 1016)

Lauren Gold, Arizona State University; Flemming Laursen, Arizona State University; Krutik Pandya, Arizona State University; Kathryn E Powell, Arizona State University; Robert LiKamWa, Arizona State University

Virtual Reality (VR) and desktop users struggle to coordinate due to scale differences when analyzing geospatial data. This work explores multiscale workspace awareness techniques in geospatial analysis scenarios, such as scale-responsive cursors and vistas. A study (n = 16) compared Common Ground to current collaboration tools. The results reveal that 34% of users struggled to maintain awareness of their collaborator's focus with current tools, dropping to 9% with Common Ground, and 32% struggled to know precisely which features their collaborator referenced, decreasing to 15% with Common Ground. A follow-up with 12 geospatial practitioners provided insights into real-world applications of our system.

Editable Body: Interactive Adaptation of Avatar Control Schemes and Body Structures (Booth ID: 1017)

Shuto Takashita, The University of Tokyo; Taiga Suzuki, Information Somatics Lab, The University of Tokyo; Naoki Tanaka, Information Somatics Lab, The University of Tokyo; Masahiko Inami, Information Somatics Lab, The University of Tokyo

We present the concept of an “Editable Body,” enabling users to modify their avatar’s structure and control scheme in real time. In our VR game, users embody a robot avatar whose mapping to their real limbs can be freely edited and whose structure can be augmented with a third arm. We investigated how this system affects participants’ perceptions of body identity, body-related expression, and impressions of body-editing technologies. Our findings indicate that real-time, user-driven body editing leads to greater acceptance of identity expression through non-innate bodies, fosters a more fluid view of the body, and increases openness to body modification technologies.

Lie Detection in Social VR Using Multimodal Data (Booth ID: 1018)

Clarence Chi San Cheung, Hong Kong University of Science and Technology; Ahmad Alhilal, Aalto University; Kriti Agrawal, Birla Institute of Technology and Science, Pilani; Zhizhuo Yin, The Hong Kong University of Science and Technology (Guangzhou); Reza Hadi Mogavi, University of Waterloo; Pan Hui, The Hong Kong University of Science and Technology

Trust is a key element in human relationships and is vital for the success of any interaction. Establishing trust in social VR is important for promoting collaboration, communication, and social interaction. In our research, we investigate the perception and feasibility of using lie detection techniques in virtual reality to build trust. Our findings suggest that it is possible to detect deception in virtual reality using sensor-detected cues, and that the feature is generally desired. Using a random forest, gaze-based features alone could reach 87% accuracy in detecting lies. We also propose design considerations for future research.
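
The abstract reports that a random forest over gaze-based features alone reached 87% accuracy. A minimal sketch of that kind of classifier is shown below; the feature names and the synthetic placeholder data are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical gaze features per statement: fixation duration, saccade rate,
# pupil diameter change, and blink rate (synthetic placeholder data).
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)   # 0 = truthful, 1 = deceptive

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```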

How Embodied Conversational Agents Should Enter Your Space? (Booth ID: 1019)

Junyeong Kum, Pusan National University; Myungho Lee, Pusan National University

Embodied conversational agents (ECAs) capable of nonverbal behaviors have been developed to address the limitations of voice-only assistants. Research has explored their use in augmented reality (AR), suggesting they may soon interact with us more naturally in physical spaces. However, the question of how they should enter the user's space when summoned remains under-explored. In this paper, we focused on the plausibility of ECAs' entering action into the user's field of view in AR. We analyzed its impact on users' perceived social presence and functionality of the agent. Our results indicated that the plausibility of the action significantly affected social presence and had a marginal effect on perceived functionality.

From Robots to Creatures: The Influence of Pedagogical Agent Design on Student Motivation and Learning in IVR Education (Booth ID: 1020)

Linjing Sun, University of Nottingham Ningbo China; Boon Giin Lee, University of Nottingham Ningbo China; Matthew Pike, University of Nottingham Ningbo China; David Chieng, University of Nottingham Ningbo; Sen Yang, University of Nottingham Ningbo China

Pedagogical agents (PAs) have been extensively employed in immersive virtual reality (IVR) educational settings, yet their specific attributes are not thoroughly investigated. This study explores the impact of PA appearance and knowledge delivery methods on student learning in IVR environments. The findings reveal that the flying robot PA is regarded as the most credible, while cute creature PAs increase motivation but cause some distraction. The integrated text and audio multi-modality approach resulted in the best knowledge retention, though some participants experienced a slower learning pace. These insights are significant for designing and using PAs in IVR educational settings.

SentioScape: Facilitating Effective Cross-Cultural Communication through Emotional Mixed Reality (Booth ID: 1021)

Jack F Brophy, Keio University; Dunya Chen, KMD, Keio University; Tanner Person, Keio University Graduate School of Media Design; Kouta Minamizawa, Keio University Graduate School of Media Design; Giulia Barbareschi, Keio University

Increasing global migration has led to the rapid growth of interactions between organizations, groups, and individuals from various cultures. As such, knowing how to decode communication involving conversational partners from other cultures is essential in avoiding misunderstandings and needless conflicts. We present SentioScape, a Mixed Reality (MR) system for aiding emotional expression and understanding to foster more effective cross-cultural communication. 20 participants took part in an experiment to evaluate the system, with preliminary results revealing some positive trends that suggest the system has potential as a tool for creating more effective emotional communication between high- and low-context communicators.

Locomotion, Navigation and Redirection

winDirect: Studying the Effect of a Wind-Based Haptic Bracelet on Presence and the Detectability of Hand Redirection (Booth ID: 1022)

Ulrike Kulzer, Saarland University, Saarland Informatics Campus; André Zenner, Saarland University & DFKI, Saarland Informatics Campus; Donald Degraen, University of Canterbury

We developed a wind-based wearable haptic feedback device called winDirect to investigate whether multimodal stimuli can be used to disguise hand redirection (HR) by increasing users' corresponding perceptual detection thresholds (DTs). Our investigation was motivated by two previous works, which showed that multimodal stimuli increase presence and indicated that an increased feeling of embodiment raises the DTs of HR. Contrary to our expectations, we found that integrating multimodal stimuli did not guarantee increased HR DTs, even when it increased presence, highlighting the need to study the relationship between presence and HR more deeply.

Low-cost DIY 16 Directions of Movement Walk-in-Place Interface for VR Applications (Booth ID: 1023)

Hyuntaek Park, Daegok Middle School; Hyunwoo Cho, University of South Australia; Sang-Min Choi, Gyeongsang National University; Suwon Lee, Gyeongsang National University

Traditional VR locomotion systems often rely on costly treadmills or sophisticated motion capture technologies, limiting accessibility for broader audiences. This study introduces a highly affordable do-it-yourself walk-in-place interface for VR applications. The proposed interface is built using widely available materials, such as cardboard and aluminum foil, integrated with touch sensors and Bluetooth communication. The interface supports 16 directions of movement and can be constructed for approximately $50. This study demonstrates the potential of budget-friendly solutions to enhance VR. The proposed approach broadens access to immersive VR experiences and bridges the gap between functionality and affordability of VR systems.
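
How 16 movement directions might be derived from a handful of sensor readings is not detailed in the abstract. The sketch below is a generic illustration that quantizes an estimated heading into 16 sectors; the function, its inputs, and the sector convention are assumptions, not the authors' design.

```python
import math

def heading_to_direction_index(dx, dy, sectors=16):
    """Quantize a 2D heading vector into one of `sectors` discrete directions.

    Returns an index in [0, sectors), where 0 corresponds to "forward" (+y)
    and indices increase clockwise.
    """
    angle = math.atan2(dx, dy)          # 0 rad = forward, clockwise positive
    if angle < 0:
        angle += 2 * math.pi
    width = 2 * math.pi / sectors
    return int((angle + width / 2) // width) % sectors

# Example: a slight right-forward lean maps to sector 1 (one 22.5-degree step clockwise).
print(heading_to_direction_index(0.4, 0.9))
```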

Walking in Space: Immersive Storytelling in Space Exploration VR Museum (Booth ID: 1024)

Pedro A. Ferreira, NOVA School of Science and Technology; Ana Rita Rebelo, NOVA LINCS, NOVA School of Science and Technology; Rui Nóbrega, Universidade Nova de Lisboa, Lisboa, Portugal

Virtual Reality (VR) is widely used in museums to offer unique exhibition experiences. Navigation in virtual museums is typically achieved through teleportation. While teleportation bypasses physical space limitations, this technique differs from natural walking in the real world. Walking provides more immersive experiences but usually requires a large physical area. To address this, we present a space exploration VR museum where users can walk through the history of humanity's space exploration within just a 2.5 x 2.5 m physical space by employing overlapping spaces. This approach enables immersive storytelling and accessible virtual museum experiences, with preliminary results supporting its effectiveness.

Vision-Based Tracking System via Sparse Artificial Marker (Booth ID: 1025)

Zhe Xie, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology; Yue Liu, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology; Dong Li, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology

Localization in large-scale environments is fundamental for high-degree-of-freedom augmented reality (AR) and virtual reality (VR) applications. However, achieving stable and cost-effective wide-area tracking remains challenging, as existing methods often increase system cost and complexity when scaled to large spaces. This paper proposes a vision-based tracking system that encodes spatial relationships between markers to generate a planar layout. We propose a BiLSTM-based network for efficient marker recognition and introduce a robust rematching algorithm to enhance recognition accuracy. The proposed approach scales effectively with sparse markers while maintaining a high recognition rate.

Friction Sensation in Redirected Walking Using a Rotating Handrail (Booth ID: 1026)

Yuto Ohashi, Graduate School of Science and Technology, Nara Institute of Science and Technology; Keigo Matsumoto, The University of Tokyo; Yutaro Hirao, Nara Institute of Science and Technology; Monica Perusquia-Hernandez, Nara Institute of Science and Technology; Nobuchika Sakata, Ryukoku University; Hideaki Uchiyama, Nara Institute of Science and Technology; Kiyoshi Kiyokawa, Nara Institute of Science and Technology

We propose a walker-type system designed to enhance visual gain in redirected walking (RDW) within virtual reality (VR) by providing dynamic haptic feedback through a rotating circular handrail. Two experiments were conducted to investigate whether haptic feedback in RDW can expand the range of visual gain. The handrail yielded an approximately 23% increase in the visual gain threshold when the slide gain was set to 2.69, and a 21% increase when the slide gain was set to 1.00, highlighting the influence of haptic feedback. The results demonstrate that this device can enhance the user’s perception of movement in VR by coupling visual and haptic cues.

A Comparative Study on Locomotion Methods and Distance Perception in Immersive Virtual Reality (Booth ID: 1027)

Razeen Hussain, University of Genoa; Manuela Chessa, University of Genoa; Fabio Solari, University of Genoa

Accurately perceiving distances in virtual reality (VR) remains a challenge due to discrepancies between real-world and VR spatial experiences. This study compares four VR locomotion methods—Teleport, Joystick, Arms Swinging, and Real Walking—by evaluating distance perception, spatial orientation, and user experience. In an exploratory within-subject study, participants navigated a virtual environment, estimated distances, and completed various questionnaires (SSQ, IPQ, SUS, NASA TLX, and a comparison questionnaire). Results showed a mismatch between user preference and performance: while Arms Swinging was the least preferred method, it provided the most accurate distance estimation and spatial orientation.

Advancing Cybersickness Prediction in Immersive Virtual Reality Using Pre-Trained Large Foundation Models (Booth ID: 1028)

Ripan Kumar Kundu, University of Missouri-Columbia; Khaza Anuarul Hoque, University of Missouri

Existing ML/DL methods for predicting VR cybersickness require massive amounts of high-quality data and long training times, and they lack transferability to new VR environments. To address this, we propose a novel approach using zero-shot and few-shot learning mechanisms to leverage the knowledge of pre-trained large foundation models (TimeGPT and Chronos). Validated on two open-source VR cybersickness datasets, Simulations 2021 and APAL Head 2019, our fine-tuned TimeGPT model outperforms traditional DL models, achieving superior accuracy and significantly reduced training times compared to Transformer models trained from scratch.

Inducing Unintentional Positional Drift (UPD) in Virtual Reality via Physical Rotations and the Illusion of Leaning (Booth ID: 1029)

Zubin Datta Choudhary, University of Central Florida; Ferran Argelaguet Sanz, Inria; Gerd Bruder, University of Central Florida; Greg Welch, University of Central Florida

Virtual Reality (VR) users often turn their bodies during experiences. Virtual navigation techniques use rotations and forward translation to simulate movement. Despite being designed for stationary use, these techniques can cause Unintentional Positional Drift (UPD), impacting user safety and VR experiences. We conducted a human-subject study, approved by our university ethics board, with 20 participants performing repetitive rotation tasks. Our study focused on intentionally inducing UPD via physical rotations by adding an offset to the VR camera's roll angle, creating a visual illusion of "leaning" or "banking." Our results show that camera roll offsets induced UPD along participants' initial left-right axis under specific conditions.

Multi-sensory Simulation of Wind Sensation (MSSWS): An Approach of Reducing Motion Sickness in Passive Virtual Driving (Booth ID: 1030)

Yuan Yue, Shandong University; Chao Zhou, Institute of Software, Chinese Academy of Sciences; Tangjun Qu, Shandong University; Baiqiao Zhang, Shandong University; Juan Liu, School of Mechanical, Electrical & Information Engineering; Junhao Wang, Peking University; Tianren Luo, Institute of Software; Yulong Bian, Shandong University

This paper introduces a method to reduce motion sickness (MS) during passive motion in virtual reality (VR), such as virtual driving. The method improves users' sense of embodiment (SoE) by simulating airflow to align vestibular, visual, auditory, and tactile cues. We developed a Wind Simulation Helmet to provide airflow around the head and created a virtual motorcycle driving environment with matching visual and auditory cues. A preliminary experiment helped determine helmet parameters, followed by a formal experiment to test the effect of wind sensation on motion sickness. Results show that wind simulation effectively 1) improves participants' proprioception and SoE, and 2) reduces motion sickness risk during passive driving.

“I look like a gorilla, but don’t move like one!”: Impact of Avatar-Locomotion Congruence in Virtual Reality (Booth ID: 1031)

Omar Khan, University of Calgary; Hyeongil Nam, University of Calgary; Kangsoo Kim, University of Calgary

As virtual reality (VR) continues to expand, particularly in social VR platforms and immersive gaming, understanding the factors that shape user experience is becoming increasingly important. Avatars and locomotion methods both play central roles in influencing user experience in VR. However, little is known about the impact of congruence between these two factors. We conducted a user study with 30 participants, employing two avatar types (human and gorilla) and two locomotion methods (human-like arm-swing and gorilla-like arm-roll), to assess the effects of avatar-locomotion congruence. Our results indicate that congruent avatar-locomotion conditions enhance avatar identification and user experience.

Jumping-At-Air: Jumping without Touching the Ground in Virtual Reality (Booth ID: 1032)

Liuyang Chen, NetEase (Hangzhou) Network Co., Ltd; Gaoqi He, East China Normal University; Changbo Wang, School of Computer Science and Technology

We propose Jumping-At-Air, where the avatar jumps in the virtual world and lands at a point higher in the air while the user jumps in the real world, and can then jump again from mid-air. The core idea is to first create a pair of pressure-sensing shoes based on the ESP32 chip, equipped with pressure sensors in the soles, to accurately detect the jumping stage. Then, through a carefully designed equation with two adjustable parameters, the user's jump trajectory in the real world is redirected to drive the jumps of the avatar in the virtual world. The redirected jumping curve is differentiable and continuous throughout its domain and conforms to the definition of a parabola.
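
The abstract mentions an equation with two adjustable parameters that redirects the real jump into the avatar's parabolic jump, but does not give its form. The sketch below shows one plausible parameterization (a gain on jump height and a gain on flight time) purely as an assumption, not the authors' equation.

```python
def redirected_jump_height(t, real_peak_height, height_gain=1.5, duration_gain=1.2,
                           real_duration=0.5):
    """Map a real jump onto a longer, higher virtual parabola.

    t:                time since take-off in seconds
    real_peak_height: measured peak of the user's real jump (m)
    height_gain:      parameter 1, scales the apex of the virtual jump
    duration_gain:    parameter 2, stretches the virtual flight time
    The returned curve is a parabola, smooth over its whole domain.
    """
    virtual_peak = height_gain * real_peak_height
    virtual_duration = duration_gain * real_duration
    half = virtual_duration / 2.0
    # Parabola with apex (half, virtual_peak) and zeros at t = 0 and t = virtual_duration.
    return max(0.0, virtual_peak * (1.0 - ((t - half) / half) ** 2))

# Example: sample the virtual trajectory of a 25 cm real jump.
for t in (0.0, 0.15, 0.3, 0.45, 0.6):
    print(round(redirected_jump_height(t, real_peak_height=0.25), 3))
```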

User Experience and Presence

Pedaling into Presence: Evaluating Presence and User Experience in VR Cycling vs. Controller Use (Booth ID: 1033)

Jennifer Brade, Professorship Production Systems and Processes; Alexander Kögel, Professorship of Ergonomics and Innovation Management; Franziska Klimant, Professorship Production Systems and Processes; Martin Dix, Professorship Production Systems and Processes

This study examined two input methods—real bicycle and gaming controller—in a virtual bike tour. Participants engaged in a virtual bike tour using both methods while assessing presence and user experience. The bicycle method notably enhanced sense of physical space, engagement, and ecological validity, resulting in a more immersive experience. While pragmatic qualities showed no significant differences, the bicycle method excelled in hedonic qualities. The findings indicate that realistic physical interaction boosts presence and appeal, suggesting future research on different movement types.

Exploring User-Specific Variations in Vestibular Stimulation to Reduce Motion Sickness in VR (Booth ID: 1034)

Ye-Seom Jin, Konkuk University; BoYu Gao, Jinan University; HyungSeok Kim, Konkuk University

Motion sickness is regarded as one of the most important limitations of Virtual Reality (VR) applications. This study investigates the effects of vertical vibrations and additional rotational vestibular stimulation on alleviating discomfort and enhancing immersion. We conducted pilot and main studies to examine the threshold of stimulation intensity required to reduce motion sickness. Experimental results revealed that the effect of vestibular stimuli varied based on user characteristics such as gender, age, and vision, highlighting the need for personalized stimulation. The initial findings offer suggestions for developing more comfortable and immersive VR experiences.

Creating Virtual Environments with 3D Gaussian Splatting: A Comparative Study (Booth ID: 1035)

Shi Qiu, CUHK; Binzhu Xie, The Chinese University of Hong Kong; Qixuan Liu, CUHK; Pheng Ann Heng, The Chinese University of Hong Kong

3D Gaussian Splatting (3DGS) has recently emerged as an innovative and efficient 3D representation technique. While its potential for extended reality (XR) applications is frequently highlighted, its practical effectiveness remains underexplored. In this work, we examine three distinct 3DGS-based approaches for virtual environment (VE) creation, leveraging their unique strengths for efficient and visually compelling scene representation. By conducting a comparative study, we evaluate the feasibility of 3DGS in creating immersive VEs, identify its limitations in XR applications, and discuss future research and development opportunities.

Towards Cross Reality: A Comparison of World-in-Miniature and Virtual Reality Switch (Booth ID: 1036)

Damla Welti, ETH Zurich; Mathieu Lutfallah, ETH Zurich; Long Cheng, Innovation Center Virtual Reality; Andreas Kunz, ETH Zurich

Visualization of complex architectural and interior design proposals demands spatial comprehension. Mixed Reality (MR) is particularly effective in architectural visualization because it overlays virtual models onto existing real environments. However, this approach alone restricts users to the physical constraints of their surroundings. We developed two approaches to augment MR with additional perspectives from physically inaccessible viewpoints: the World-in-Miniature (WIM) solution and the Cross Reality (CR) solution. The WIM solution provides a scaled-down 3D replica of the virtual environment, whereas the CR solution enables users to switch to Virtual Reality and teleport to physically inaccessible viewpoints.

Temperature Matters: Thermal Feedback for Awe Experiences in VR (Booth ID: 1037)

Alexander Marquardt, Institute of Visual Computing; Marvin Lehnort, Institute of Visual Computing; Melissa Steininger, University Hospital Bonn; Ernst Kruijff, Bonn-Rhein-Sieg University of Applied Sciences; Kiyoshi Kiyokawa, Nara Institute of Science and Technology; Monica Perusquia-Hernandez, Nara Institute of Science and Technology

This work explores how thermal feedback can enhance awe experiences in virtual reality (VR). We developed a custom thermal feedback system integrated into a VR headset that delivers temperature sensations to the user's face while viewing vast scenes of snow-covered mountains and desert canyons. Our results show that thermal feedback significantly enhanced presence measures while influencing specific components of awe, particularly those related to physical sensations.

The Effects of Content-Aware Textured Reverse Motion Flow on Cybersickness and User Experience (Booth ID: 1038)

ByeongSun Hong, DeepXRLab; Qimeng Zhang, Korea University; Gerard Jounghyun Kim, Korea University; Jun Ryu, Korea University

Cybersickness remains a major difficulty in virtual reality (VR). The discrepancy in motion information between visual and vestibular feedback causes sensory conflict, a widely accepted cause of cybersickness. Previous research has shown that VR sickness is reduced by augmenting the content with motion patterns in the opposite direction of the virtual motion. However, this can also cause significant content intrusion. This poster aims to mitigate the latter by applying content-aware textured motion patterns instead of simple white animated feature points. Positive results were obtained, with a higher preference for the proposed visualization while showing a similar degree of sickness reduction.

I’ve Got a Feeling: Sentiment analysis for collaborative performance in VR (Booth ID: 1039)

Bibek Khattri, Birmingham City University; Paweenuch Chantratita, Birmingham City University; Maite Frutos-Pascual, Birmingham City University; Ian Williams, Birmingham City University

Sentiment analysis is a common method for evaluating the meaning of text. While commonplace, its application to assessing collaborative teamwork is limited. We present an exploratory study of sentiment analysis as a useful companion to established metrics of user collaboration in VR. We report results from paired users (N = 14, 7 pairs) completing a VR brainstorming task and apply sentiment analysis to their conversations. We report on how the sentiment measures map to the Team Workload Questionnaire (TWLQ) and illustrate how this could be a valuable metric for supporting an enriched evaluation of collaborative VR.

Exploring Display Parameters Associated with Cybersickness Using Electroencephalography (Booth ID: 1040)

Yasufumi Nakata, Keio University; Mayuka Otsuki, Keio University; Miki Nagai, Keio University; Ryosuke Yamamoto, Keio University; Minori Sakai, Keio University; Atsushi Aoyama, Keio University

VR offers immersive experiences but often causes cybersickness, hindering application. Here, we analyzed electroencephalographic (EEG) data for VR-induced discomfort associated with display parameters. Increased discomfort correlates with higher low-frequency and lower mid-frequency EEG signal power, indicating changes in autonomic activity and cognition. Previous studies have not thoroughly examined the neural impact of individual display settings. Our findings reveal the complex interplay between the display settings and EEG patterns. Optimizing the settings can mitigate discomfort and improve VR experiences, promoting wider application of VR technology.

Case Studies of Remarkable VR Motion Sickness Resistance: A Judo Practitioner and a Commuter (Booth ID: 1041)

Gang Li, University of Bath

Among 41 participants, two males exhibited no motion sickness (MS) at all during a VR-based cognitive discrimination task. The first, a judo practitioner with 16 years of mid-air balance control training, and the second, a commuter accustomed to reading while commuting, showed suppressed beta activity between cognitive and sensorimotor domains in our EEG analysis, associated with better cognitive behavioral performance. The judo practitioner demonstrated stronger suppression and superior cognitive behavioral performance. These findings raise the question of whether beta activity suppression between cognitive and sensorimotor cortical domains reflects compensatory reticular neuron activation in the deep brainstem under a proposed MS habituation mechanism.

Comparing Pass-Through Quality of Mixed Reality Devices: A User Experience Study During Real-World Tasks (Booth ID: 1042)

Francesco Vona, University of Applied Sciences Hamm-Lippstadt; Julia Schorlemmer, Immersive Reality Lab, University of Applied Sciences Hamm-Lippstadt; Michael Stern, University of Applied Sciences Hamm-Lippstadt; Navid Ashrafi, University of Applied Sciences Hamm-Lippstadt; Maurizio Vergari, Technische Universität Berlin; Tanja Kojic, Technische Universität Berlin; Jan-Niklas Voigt-Antons, Hamm-Lippstadt University of Applied Sciences

In extended reality, “pass-through” enables users to view their real-world surroundings via cameras on the headset, displaying live video inside the device. This study compared the pass-through quality of three devices: Apple Vision Pro, Meta Quest 3, and Varjo XR-3. Thirty-one participants performed two tasks—reading a text and solving a puzzle—while using each headset with the pass-through feature activated. Participants then rated their experiences, focusing on workload and cybersickness. Results showed that the Apple Vision Pro outperformed the Meta Quest 3 and Varjo XR-3, receiving the highest ratings for pass-through quality.

Evaluating Usability, Cognitive Load, and Presence in VR Tutorials: A Preliminary Study (Booth ID: 1043)

Isaac Taylor, Ontario Tech University; Hamed Tadayyoni, Ontario Tech University; Fabian Gualdron, Ontario Tech University; Alvaro Quevedo, Ontario Tech University; Pejman Mirza-Babaei, University of Ontario Institute of Technology

The adoption of virtual reality (VR) in education, training, and health care, among other fields, has exposed many non-VR users to immersive experiences. While VR provides immersion with various degrees of realism, several tasks are not properly represented due to the technology's limitations. For example, multiple actions require game controllers that fail to represent real-world actions. This paper evaluates the effects of onboarding techniques on usability, cognitive load, and presence in commercial video games and serious games developed for research. Our preliminary results highlight the different impacts caused by the freedom and proper representation of tasks in the tutorials.

Towards Optimized Real-time Cybersickness Detection framework Using Deep Learning for Standalone Virtual Reality Headsets (Booth ID: 1044)

Md Jahirul Islam, Kennesaw State University; Rifatul Islam, Kennesaw State University

Cybersickness (CS) affects roughly 60-95% of users exposed to immersive VR, creating an obstacle to comfortable experiences. Deep learning models can predict CS but require powerful computing resources, making them unsuitable for computationally constrained standalone VR (SVR) headsets, and relying on external computing resources increases complexity. To address these gaps, we developed a framework for SVR devices in which we trained deep learning models on physiological data, optimized the models, and deployed them on the SVR headset to predict CS in real time (during immersion) with minimal inference time. Our findings introduce future directions for real-time cybersickness prediction on SVR devices.
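
The abstract does not name the optimization step used before on-device deployment; one common option is post-training quantization. The sketch below shows dynamic int8 quantization in PyTorch as an assumed illustration, with a placeholder classifier standing in for the authors' model.

```python
import torch
import torch.nn as nn

# A small placeholder cybersickness classifier over a window of physiological features.
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 3),          # e.g., none / mild / severe sickness
)

# Dynamic int8 quantization shrinks the Linear layers and speeds up CPU inference,
# which is one way such a model could fit onto a standalone headset.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

window = torch.randn(1, 64)     # one synthetic feature window
print(quantized(window))
```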

A Machine Learning Approach to Expanding the Degrees of Freedom on Phone-Based Head Mounted Displays (Booth ID: 1045)

Andrew J. Jones, Davidson College; Raghuram Ramanujan, Davidson College; Tabitha C. Peck, Davidson College

Phone-based head mounted displays (HMDs) make participation in virtual reality (VR) easily accessible. However, phone-based HMDs can only track the user's head orientation, excluding head position. This not only stifles the user's immersion in the virtual environment, but also risks simulator sickness. To mitigate this limitation, we propose a fully-connected neural network that predicts the user's head position given the head orientation history and other information about the user's movement that can be tracked from phone sensors. To analyze performance, the model iterates over the test dataset, predicting the entirety of the dataset's positional displacement history.
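
A minimal sketch of a fully connected network of the kind described above, mapping a short window of head-orientation history to a predicted head-position offset, is shown below. The layer sizes, window length, and quaternion input encoding are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

WINDOW = 10          # assumed number of past orientation samples
FEATURES = 4         # orientation as a quaternion (w, x, y, z) per sample

class HeadPositionPredictor(nn.Module):
    """Fully connected network: orientation history -> 3D head-position offset."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WINDOW * FEATURES, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 3),   # predicted (x, y, z) displacement
        )

    def forward(self, history):
        # history: (batch, WINDOW, FEATURES) -> flatten into one feature vector
        return self.net(history.flatten(start_dim=1))

model = HeadPositionPredictor()
dummy = torch.randn(2, WINDOW, FEATURES)   # two synthetic orientation windows
print(model(dummy).shape)                  # torch.Size([2, 3])
```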

Real-Time Flow State Analysis in Game: A Non-Intrusive System Based on Eye Feature Detection (Booth ID: 1046)

Qi Wu, Communication University of China; Di Zhang, Communication University of China; Vincent Nourrit, IMT Atlantique; Jean-Louis de Bougrenet, IMT Atlantique; Long Ye, Communication University of China

Flow is a key indicator of deep immersion, but existing detection methods rely on intrusive physiological devices, limiting real-time analysis. To overcome this, we used Tobii eye-tracking glasses to record eye movement data during gaming sessions, identifying patterns correlated with flow states. We developed a non-intrusive system with a custom algorithm to detect flow in real time, achieving 85.3% accuracy and a 412 ms detection delay. This system provides valuable insights into the relationship between flow and game design, enhancing the understanding of flow's biophysical mechanisms.

A Research on Unconscious Visual Effects for Enhancing Concentration in Virtual Reality (Booth ID: 1047)

Tomokazu Ishikawa, Toyo University; Takaaki Ueno, Toyo University

This study investigated unconscious visual effects in VR for enhancing concentration. Four peripheral vision effects (blur, chromatic aberration, mosaic, peripheral vignetting) were examined through two experiments. Twenty participants determined detection thresholds, then 15 tested concentration effects via typing tasks. Results showed blur and peripheral vignetting significantly improved typing accuracy and reduced keystroke intervals. Subjective evaluations supported these findings, indicating subtle visual modifications can enhance concentration without conscious awareness. The research demonstrates potential for strategically designed visual interventions in VR that optimize cognitive performance.

Leveraging Real-Time EEG Neurofeedback in VR for Personalized Interventions in Exposure Therapy (Booth ID: 1048)

Íris Peixoto, Instituto Politécnico de Setúbal; Joana Silva Cerqueira, Universidade NOVA de Lisboa - Faculdade de Ciências e Tecnologias; André Antunes, NOVA University of Lisbon; Anna Letournel, Setúbal Polytechnic University; Rui Neves Madeira, NOVA University of Lisbon

Innovative therapeutic tools are essential for conditions like anxiety and food aversion. This study integrates EEG and ECG signals with a VR-based serious game for exposure therapy. Preliminary results show consistent detection of EEG biomarkers, such as N200, during VR tasks and evidence of neural habituation. Despite challenges with ECG artifacts, the successful integration of EEG and VR highlights the feasibility of real-time neurofeedback. Our approach shows promising results toward personalized therapeutic interventions by combining VR’s immersive qualities with the potential of neurofeedback-driven adaptability, advancing exposure therapy based on physiological feedback.

Eye-Tracking Driven Personalized 360° Panorama Content Generation (Booth ID: 1049)

Guotao Wang, Beihang University; Chenglizhao Chen, China University of Petroleum; Aimin Hao, Beihang University

Personalized 360° content generation faces significant challenges due to the lack of adaptation to user preferences during interaction. Traditional methods rely on static content or post-session analysis, which limits dynamic engagement. In this paper, we propose a novel approach that integrates gaze tracking with dynamic content generation to create personalized 360° experiences. By analyzing gaze trajectories and user preferences, our approach adjusts content based on the user’s visual focus, leading to a more immersive experience. Experimental results show that our approach enhances content relevance and user engagement compared to existing methods, demonstrating the potential of gaze-driven personalized content in VR.

Prototyping Therapeutic Gaming: from Semi-Immersive to Fully Immersive Games for Parkinson’s Disease (Booth ID: 1050)

Rui Neves Madeira, Instituto Politécnico de Setúbal; Pedro Albuquerque Santos, Lisbon School of Engineering (ISEL), Politécnico de Lisboa (IPL); Diogo Pinto, Polytechnic University of Setubal

Parkinson’s Disease (PD) is a neurodegenerative disorder where physical exercise is crucial for symptom management and quality of life. This paper presents prototypes supporting therapeutic exercises for PD patients: the Wall Game, utilizing Kinect, and the Object Sorting Game, using Meta Quest. These prototypes target motor coordination, balance, and cognitive engagement by integrating physiotherapy exercises into game scenarios. A user study with 30 participants evaluated usability, cognitive demand, and therapeutic potential. Results showed high usability and low workload scores, confirming these games' feasibility as therapeutic tools. Future work will validate them with PD patients and refine gameplay based on feedback.

HeartFortress - an Affective VR Game Adapting to Player Emotions (Booth ID: 1051)

Paweł Jemioło, AGH University of Krakow; Adrian Kuśmierek, AGH University of Krakow

This study explores affective computing in virtual reality (VR) gaming, focusing on adapting gameplay to players' arousal levels. By utilizing real-time physiological signals, such as heart rate and electrodermal activity, the VR game HeartFortress dynamically adjusts its mechanics. Observations suggest that the adaptive version of the game may enhance flow - a state of deep involvement in an activity - and increase physiological responsiveness, indicating heightened arousal. While preliminary, these findings highlight the potential of affective-driven adaptations in interactive systems, offering insights for gaming, training, and therapeutic applications.
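
A minimal sketch of the kind of arousal-driven adaptation described above is given below; the normalization ranges, thresholds, and step size are chosen purely for illustration and are not taken from the game.

```python
def arousal_index(heart_rate_bpm, eda_microsiemens,
                  hr_rest=65.0, hr_max=120.0, eda_rest=2.0, eda_max=12.0):
    """Combine normalized heart rate and electrodermal activity into a 0-1 arousal score."""
    hr_norm = min(max((heart_rate_bpm - hr_rest) / (hr_max - hr_rest), 0.0), 1.0)
    eda_norm = min(max((eda_microsiemens - eda_rest) / (eda_max - eda_rest), 0.0), 1.0)
    return 0.5 * hr_norm + 0.5 * eda_norm

def adapt_difficulty(current_difficulty, arousal, target=0.5, step=0.1):
    """Nudge difficulty down when arousal overshoots the target, up when it undershoots."""
    if arousal > target + 0.1:
        return max(0.0, current_difficulty - step)
    if arousal < target - 0.1:
        return min(1.0, current_difficulty + step)
    return current_difficulty

# Example: an aroused player (high heart rate and EDA) gets a slightly easier wave.
print(adapt_difficulty(0.6, arousal_index(heart_rate_bpm=110, eda_microsiemens=9.0)))
```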

Buenas: Giving Everyone a Seat at the Study Group Table (Booth ID: 1052)

Raquel T. Cabrera-Araya, Texas A&M University; Siyu Huang, Purdue University; Mohammad Nadim, Texas A&M University - Central Texas; Edgar Javier Rojas-Muñoz, Texas A&M University; Anitha Chennamaneni, Texas A & M University Central Texas; Walter Murphy, Texas A&M University-Central Texas; Voicu Popescu, Purdue University

Study groups are crucial in education to foster collaboration and motivation. Remote students, at greater risk of isolation, often miss this peer-driven environment. This paper presents Buenas, an Extended Reality (XR) system enabling remote students to participate in on-campus study groups seamlessly. Local students interact with projected representations of remote peers seated at a conference table, while remote students use XR headsets to integrate the group into their physical surroundings. A user study comparing Buenas to conventional videoconferencing showed significant improvements in engagement, presence, connectedness, and rapport through both objective and subjective measures.

Do We Need 3D to See? Impact of Dimensionality of the Virtual Environment on User Attention (Booth ID: 1053)

Teresa Matos, FEUP; Daniel Mendes, FEUP/INESCTEC; João Tiago Jacob, FEUP; A. Augusto Sousa, FEUP/INESCTEC; Rui Rodrigues, FEUP/INESC TEC

Virtual Reality allows users to experience realistic environments in an immersive and controlled manner, which is particularly beneficial for contexts where the real scenario is not easily or safely accessible. The choice between 360° content and 3D models impacts outcomes such as perceived quality and computational cost, but can also affect user attention. This study explores how attention manifests in VR using a 3D model or a 360° image rendered from that model during visuospatial tasks. User tests revealed no significant difference in workload or cybersickness between these types of content, while the sense of presence was reportedly higher in the 3D environment.

Investigating the Effects of Limited Field of View and Resolution in Optical See-Through Augmented Reality in the Context of Immersive Analytics (Booth ID: 1054)

Julia Hertel, University of Hamburg; Jan Nicolai Synwoldt, Universität Hamburg; Frank Steinicke, Universität Hamburg

Currently, optical see-through augmented reality HMDs (OST AR HMDs) still have significant display limitations. This work presents a user study conducted with an OST AR HMD to investigate how two of these limitations - a small field of view and low resolution - affect task performance, user experience, workload, and simulator sickness in the context of immersive analytics. The Magic Leap 2 was used to simulate the display properties of other commonly used HMDs. The results suggest that a limited field of view negatively impacts user experience and workload, with the difference between the Magic Leap 2's field of view and that of the two generations of the Microsoft HoloLens being particularly striking.

A teleclinic for Neuro-psychological Assessment in the Metaverse (Booth ID: 1055)

Yannick Prié, Nantes Université; Toinon Vigier, Nantes Université; Hélène Bonneville, LS2N UMR 6004, CNRS, Nantes Université; Manon Georges, Nantes Université; Didier Acier, Nantes Université; Solène Thévenet, CHU Nantes; Samuel Bulteau, Centre Hospitalier Universitaire de Nantes

Metaverse technologies enable new tele-medicine services and applications. Our interdisciplinary team conducted a user-centered design process and implemented an immersive teleclinic for clinicians to carry out neuro-psychological testing sessions with distant patients. A session is a mix of discussions and desk-based or independent test activities, accompanied and monitored by the clinician with various dashboards. A preliminary evaluation focused on usability, presence, and acceptability with healthy participants playing the roles of patients and practitioners. All participants could see themselves using the system; practitioners were more reluctant, due to usability issues and the insufficient quality of current technology for observing patients.

Foveated VR Rendering System for Large 3D Meshes (Booth ID: 1056)

Huadong Zhang, Rochester Institute of Technology; Chao Peng, Rochester Institute of Technology

This paper presents a novel virtual reality (VR) system for real-time foveated rendering of large-scale 3D meshes comprising 100 million triangles. As VR demands high frame rates and stereo rendering, existing desktop-based approaches face computational challenges. We incorporate foveal focus into the level-of-detail (LOD) selection, preserving high geometric detail in the foveal region and reducing detail in peripheral areas. A hierarchical patch-based mesh structure enables precise rendering load estimation and adaptive LOD adjustments. Constrained optimization with a fixed total load budget runs in parallel on the GPU. Evaluations show our VR system outperforms state-of-the-art solutions in visual fidelity and performance.
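
The budgeted, gaze-dependent LOD selection described above can be illustrated with a simple greedy scheme: patches nearest the gaze are refined first until a fixed load budget is exhausted. The sketch below is such an illustration under assumed per-LOD costs, not the authors' constrained optimization or GPU implementation.

```python
import math

def select_lods(patches, gaze_dir, budget, costs=(1, 4, 16)):
    """Assign a level of detail (0 = coarse .. 2 = fine) to each mesh patch.

    patches:  list of (patch_id, view_direction) unit vectors toward each patch
    gaze_dir: unit vector of the foveal gaze direction
    budget:   fixed total rendering load that may not be exceeded
    costs:    assumed per-patch load of each LOD
    """
    def angle(v):
        dot = sum(a * b for a, b in zip(v, gaze_dir))
        return math.acos(max(-1.0, min(1.0, dot)))

    lods = {pid: 0 for pid, _ in patches}
    load = costs[0] * len(patches)
    # Refine patches in order of increasing angular distance from the gaze.
    for pid, view in sorted(patches, key=lambda p: angle(p[1])):
        for lod in (1, 2):
            extra = costs[lod] - costs[lods[pid]]
            if load + extra > budget:
                break
            lods[pid], load = lod, load + extra
    return lods

patches = [("near", (0.0, 0.0, 1.0)), ("mid", (0.3, 0.0, 0.95)), ("far", (0.9, 0.0, 0.43))]
print(select_lods(patches, gaze_dir=(0.0, 0.0, 1.0), budget=24))
```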

“Did you hear that?”: Investigating localized spatial audio effects on enhancing immersive and affective user experiences in virtual reality (Booth ID: 1057)

Ifigeneia Mavridou, Tilburg University; Ellen Seiss, Bournemouth University; Giuseppe Ugazio, University of Geneva; Mark Harpster, Bongiovi Acoustics Labs; Phillip Brown, Tilburg University; Sophia Cox, Emteq Labs; Christine Erie, Bongiovi Acoustics Labs; David Lopez Jr, Bongiovi Acoustics Labs; Ryan Copt, Bongiovi Acoustics Labs; Charles Nduka, Emteq Ltd; James Hughes, Bongiovi Acoustics Labs; Joseph Butera III, Bongiovi Acoustics Labs; Daniel N Weiss, Bongiovi Acoustics Labs

This study examines the effects of software-based localised audio enhancement in VR on spatial audio, immersion, and affective responses. Sixty-eight participants took part in a VR game (Job Simulator) and reported improved sound quality, immersion, and localization under enhanced audio conditions. Physiological measures revealed significant effects on emotional valence, highlighting enhanced audio as a cost-effective way to improve auditory involvement and emotional engagement in VR without additional hardware.

Tuesday posters

Ethical and Psychological Aspects

Perceived benefits and risks of information disclosure in augmented reality (Booth ID: 1101)

Gal Hadad, University of Haifa; Amber Maimon, University of Haifa; Ofer Arazy, University of Haifa; Joel Lanir, The University of Haifa

By merging digital content with the physical environment, AR offers unique social interaction opportunities alongside distinct risks. This study explores perceived benefits and risks of sharing personal info in AR social contexts. Analysis of participant interviews, conducted after presenting three AR scenarios, revealed benefits like targeted communication, networking opportunities, and fostering shared values, varying by scenario. Participants also identified risks like prejudgement, harassment, and diminished emotional connection, also shown to be context-dependent. These findings demonstrate the dynamic interplay between advantages and vulnerabilities of self-disclosure in AR, highlighting the need for systems addressing these concerns.

Mindfulness Anywhere: Meditating in the Virtual World (Booth ID: 1102)

Yuyang Jiang, Computational Media and Art; Dehan JIA, The Hong Kong University of Science and Technology (Guangzhou); Zeyu Yang, HKUST(Guangzhou); Ling Li, The Hong Kong University of Science and Technology (Guangzhou); Yuyang Wang, The Hong Kong University of Science and Technology (Guangzhou); Pan Hui, The Hong Kong University of Science and Technology

With the rapid advancement of Virtual Reality (VR) technology, immersive meditation experiences have become a promising tool for enhancing mental well-being. However, many existing VR meditation environments lack personalization and realistic integration, limiting their effectiveness. This study evaluated personalized virtual relaxation environments using a comparative research design. Results showed that the experimental group, using generative virtual meditation spaces, experienced significantly more positive emotional states. Future work will explore the impact of VR meditation through fNIRS measurements of prefrontal cortex activity, aiming to understand how personalized virtual environments influence psychological and neural responses.

Does DEI Still Matter?: A Survey of VR Researchers at IEEE VR (Booth ID: 1103)

Aleshia Hayes, University of North Texas; John Quarles, University of Texas at San Antonio; Greg Welch, University of Central Florida; Dylan Fox, Cornell Tech

This study explores perspectives on DEIA among researchers at IEEE VR. Fourteen participants expressed sustained commitment to DEIA, noting the underrepresentation of women, BIPOC, individuals with disabilities, and researchers from developing countries or low socioeconomic backgrounds. Participants perceived high costs, lack of accessibility features (e.g., subtitles, mobility support), and insufficient family resources (e.g., childcare) as barriers. Factors listed as potential deterrents to supporting DEIA included fear of retaliation, financial constraints, structural challenges, and difficulties identifying diverse representatives. Participants supported reporting demographic statistics, strategic planning, and promoting DEIA among speakers.

SecretVR: Differential Privacy Defense Against Membership Inference Privacy Attacks in Virtual Reality (Booth ID: 1104)

Ripan Kumar Kundu, University of Missouri-Columbia; Khaza Anuarul Hoque, University of Missouri

The convergence of AI and VR technologies (AI VR) offers innovative applications but raises privacy concerns due to the sensitive nature of the data involved. This work demonstrates that AI VR models are vulnerable to membership inference attacks (MIA) and leak users' information with a high success rate. To address this, we propose the SecretVR framework, which uses a differential privacy (DP)-enabled privacy-preserving mechanism to defend against MIA. Evaluated on seven AI VR models and three different datasets, our approach reduces MIA success rates by up to 32% while maintaining high model utility and classification accuracy.
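
The poster does not specify which DP mechanism SecretVR uses; purely as background on how calibrated noise trades utility for privacy, a textbook Gaussian mechanism can be sketched in Python as below (function and parameter names are our own illustration):

```python
import numpy as np

def gaussian_mechanism(query_result, sensitivity, epsilon, delta):
    """Release a query result with (epsilon, delta)-differential privacy by
    adding Gaussian noise calibrated to the query's L2 sensitivity."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = np.random.normal(0.0, sigma, size=np.shape(query_result))
    return query_result + noise

# Example: privatize a hypothetical aggregate gaze feature with sensitivity 0.1
private_value = gaussian_mechanism(np.array([0.42]), sensitivity=0.1,
                                   epsilon=1.0, delta=1e-5)
```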

Preserving Privacy in VR Telemetry Data (Booth ID: 1105)

Jayasri Sai Nikitha Guthula, University of Arkansas at Little Rock; Hadi Rashid, University of Arkansas at Little Rock; Jan P Springer, University of Arkansas at Little Rock; Aryabrata Basu, University of Arkansas at Little Rock

Telemetry data is essential for optimizing VR systems but poses privacy risks due to re-identification vulnerabilities. This study introduces a privacy-preserving framework using Wasserstein GANs with Gradient Penalty (WGAN-GP) to generate synthetic datasets that retain the statistical integrity of real data while mitigating re-identification risks. Differentially Private Stochastic Gradient Descent (DP-SGD) enhances privacy by introducing controlled noise during model training. The proposed approach effectively balances data utility and privacy, enabling secure analysis of VR telemetry data for performance optimization and user experience improvement, paving the way for safer and more ethical VR data practices.
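
For readers unfamiliar with WGAN-GP, the gradient-penalty term that enforces the critic's Lipschitz constraint can be sketched in PyTorch roughly as follows; this is the standard textbook formulation, not the authors' code, and telemetry-specific shapes are omitted:

```python
import torch

def gradient_penalty(critic, real, fake):
    """WGAN-GP penalty: push the critic's gradient norm toward 1 on points
    interpolated between real and synthetic telemetry samples (batch, features)."""
    alpha = torch.rand(real.size(0), 1, device=real.device)
    interp = (alpha * real + (1.0 - alpha) * fake).requires_grad_(True)
    scores = critic(interp)
    grads, = torch.autograd.grad(outputs=scores, inputs=interp,
                                 grad_outputs=torch.ones_like(scores),
                                 create_graph=True)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```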

A Serious Immersive VR Game Designed for a Better Magnetic Resonance Imaging Experience for Children (Booth ID: 1106)

Iresh Jayasundara Mudiyanselage, University of Oulu; Eemeli Häyrynen, University of Oulu; Severi Pitkänen, University of Oulu; Paula Alavesa, University of Oulu; Sirpa Kekäläinen, University of Oulu; Katherine J. Mimnaugh, University of Oulu; Tarja Pölkki, Research Unit of Health Sciences and Technology, University of Oulu

Children often experience anxiety and fear when undergoing Magnetic Resonance Imaging (MRI), leading to challenges in completing the procedure. This study presents a Virtual Reality application designed to simulate the MRI experience, aimed at reducing anxiety and preparing children for the procedure. The VR application was evaluated through a pilot study with parents and medical experts, as ethical constraints prevented testing with children. The results indicated high usability, and positive feedback regarding the realism of the MRI sounds and environments. Participants emphasized the potential for the application to prepare both children and their parents for the MRI process.

Training and Education

Development of Exercise Intervention Program for Gait and Posture Improvement in Extended-Reality Environment (Booth ID: 1107)

Hiroki Terashima, Shizuoka University; Koji Kanda, Shizuoka University; Aozora Shimao, Shizuoka University; Kenichiro Fujita, Kengoro medical group; Takuhiro Mizuno, Alpha Code Inc; Shogo Ishikawa, Shizuoka University; Shinya Kiriyama, Shizuoka University

In this study, we developed an exercise intervention program in an XR environment that is customized for each individual and enables self-training of gait and posture to promote health; 17 subjects practiced the program. Experts evaluated the gait videos recorded before and after the training and confirmed that all subjects improved their gait. The experts' evaluations provided useful suggestions for controlling information presentation during the XR experience, and their comments yielded new insights, such as differences in the improvement effect depending on the subjects' physical characteristics.

VR Trainee: a Virtual Reality Training Tool for the Energy Industry (Booth ID: 1108)

Alberto Carvalho, Fraunhofer Portugal AICOS; Waldir Moreira PhD, Fraunhofer Portugal AICOS; Filipe Sousa, Fraunhofer Portugal AICOS; Isabel Alexandre, Instituto Universitário de Lisboa (ISCTE-IUL)

Operating hazardous machinery often requires comprehensive training, which can be costly, time-consuming, and lead to delays. To address these challenges, we introduce VR Trainee, a Virtual Reality-based tool designed for electrical substation workers. VR Trainee allows users to interact with objects in a safe, controlled environment, enhancing training effectiveness for new maintenance employees. Equipped with inertial sensors, it gathers data to provide real-time feedback related to users’ actions. Initial validation shows significant skill improvements, thanks to trial-and-error opportunities and clear instructions. Our work demonstrates VR’s potential to streamline learning and boost user motivation.

Folding Reality: Learning Spatial Computing through VR Interactive Paper Folding Game (Booth ID: 1109)

Mengyao Guo, Harbin Institute of Technology, Shenzhen; Shunan Zhang, School of Architecture and Design; Zhenran Xu, Harbin Institute of Technology; Yuyang Jiang, Computational Media and Art; Yikun Fang, Royal College of Art

We introduce \"Folding Reality\" for conducting research on training spatial computing in virtual reality scenarios. It is designed for children and Large Multimodal Models (LMMs). We aim to understand how creativity emerges during the 2D to 3D folding process through interactive data gathered from children's engagement with the game. We will then use this data to train LMMs, providing them an opportunity to enhance their spatial computing abilities through interactive paper folding. Our goal is to create a learning environment where both children and LMM can explore and understand spatial relationships, advancing research in this field.

SynchroDexterity: Rapid Non-Dominant Hand Skill Acquisition with Synchronized Guidance in Mixed Reality (Booth ID: 1110)

Ryudai Inoue, Waseda University; Qi Feng, Waseda Research Institute for Science and Engineering; Shigeo Morishima, Waseda Research Institute for Science and Engineering

Acquiring fine motor skills for the non-dominant hand is challenging yet crucial, particularly when the dominant hand is injured. While traditional mirror therapy has proven effective, it often requires extended practice and can impose significant cognitive demands. This paper presents a novel augmented reality system that leverages inverted visual feedback from the dominant hand's real-time movements, coupled with a synchronized instructional video, to facilitate efficient training for the non-dominant hand. Experimental results demonstrate that this approach significantly enhances non-dominant hand skills in a short period while maintaining a low cognitive load.

An Augmented reality platform for medical self-diagnosis training (Booth ID: 1111)

Aran Aharoni, Ben Gurion University; Guy Lavy, Ben Gurion University; Ilan Vol, Ben Gurion University; Shachar Maidenbaum, Ben Gurion University

Diagnosing medical conditions is critical for medical personnel. Current training tools include textbooks or descriptive scenarios that lack interactivity and offer limited hands-on experience or expensive scenarios with actors. Interactive virtual simulations hold great promise, but are limited by a lack of tangibleness and realism. We suggest that augmented reality may be a key addition to the training toolbox, specifically augmenting the user’s body, thus providing a tangible platform for interaction. We developed such a self-diagnosis training platform and performed basic usability testing. We found that users could successfully use the platform, with performance correlated with self-reported medical knowledge.

Immersive Virtual Reality for Vocational Training: A Case Study in Bakery Sales Training (Booth ID: 1112)

Forouzan Farzinnejad, Coburg University of Applied Sciences and Arts; Navid Khezrian, Coburg University of Applied Sciences and Arts; Seyedmasih Tabaei, Coburg University of Applied Sciences and Arts; Jens Grubert, Coburg University of Applied Sciences and Arts

This study presents a Virtual Reality vocational training system for bakery sales, offering a potentially cost-effective and immersive solution to traditional training methods. The VR platform is designed to simulate realistic customer interactions, product handling, and transactions, which are believed to support trainees in developing skills, building confidence, and preparing for real-world tasks. A study with ten participants yielded a System Usability Scale score of 72.5, indicating positive usability feedback. This work explores VR's potential for vocational training through immersive simulations that support skill development, confidence-building, and real-world readiness.

Evaluating Compliance, Acceptance, and Exercise Intensity of a Custom Virtual Reality Exergame for Hospitalized Children with Cancer: A Pilot Study (Booth ID: 1113)

Jos Deforges, University of Rennes; Alexandre Vu, University of Rennes; Virginie Gandemer, University Hospital of Rennes; Jacinthe Bonneau-Lagacherie, University Hospital of Rennes; Fanny Drouadenne, University Hospital of Rennes; Steven Gastinger, University of Rennes; Benoit Bideau, University of Rennes; Catherine Soladie, CentraleSupelec; Amélie Rebillard, University of Rennes

Exercise improves health outcomes in hospitalized children with cancer but is often limited by various barriers. Virtual Reality (VR)-based exergames offer a promising solution. This study evaluated the acceptance, compliance, and intensity of a custom VR-based exergame program for pediatric cancer patients in a hospital setting. Thirteen children participated, achieving a compliance rate of 68% and an average intensity of 4.92 METs, meeting recommended moderate-to-vigorous physical activity (PA) levels. The findings suggest that this VR-based exergame is a promising and engaging intervention to promote exercise in children with cancer during treatment, with minimal risk of cybersickness.

Virtual Reality Learning Optimization: Immersive Debriefing with Review and Redo for Effective, Targeted Training (Booth ID: 1114)

Kelly Minotti, Université Paris Saclay; Guillaume Loup, Paris-Saclay; Amine Chellali, Université d'Evry Paris Saclay; Marie-helene Ferrer, The French Armed Forces Biomedical Research Institute; Samir Otmane, Université d'Evry, Université Paris Saclay

In VR training, debriefing is as crucial as the simulation phase. With the growing adoption of these pedagogical tools, defining optimal educational approaches to maximize benefits for trainers and learners becomes essential. However, despite their benefits, VR-adapted debriefing methods still need to be explored. This paper presents an adaptable, all-in-one immersive debriefing module. It includes a complete system for recording, reviewing, and redoing actions. An ongoing study explores the redo module's effect in dynamic training scenarios. This module could enhance learning and reinforce immersive debriefing systems' interest and relevance.

Schrödinger's Beat: An XR Rhythm Game for Learning Quantum Computing (Booth ID: 1115)

Brady Phelps, Ohio University; Chang Liu, Ohio University; Chad Mourning, Ohio University

Schrödinger's Beat aims to make quantum education intuitive, fun, and active. This research iterates on existing rhythm game concepts and introduces quantum computing principles. Users follow a brief tutorial before engaging with the main application, where they use their hands to smash a series of quantum gates in time with the rhythm and purposefully dodge the gates they want applied to their qubit. The goal is to smash unwanted gates while dodging the correct ones so that the qubit state matches a goal state. Users gain familiarity with quantum concepts while staying active and interacting with intuitive 3D representations of quantum computing concepts.

Industrial Augmented Reality – Concept for an AR-AI-Station for Component Recovery from Washing Machines for Circular Economy (Booth ID: 1116)

Mario Lorenz, Chemnitz University of Technology; Martin Okoniewski, Chemnitz University of Technology; Arwa Own, Chemnitz University of Technology; Sebastian Knopp, Chemnitz University of Technology

Using Augmented Reality in the production of home appliances is an understudied field, and its use in the circular economy even more so. Likewise, applying Artificial Intelligence to validate that an AR-instructed task was actually carried out correctly is an underexplored area. In this work we present a novel concept for using AR to recover parts from returned washing machines in a circular economy approach, validating that the disassembly is carried out correctly using a YOLO AI model. A remaining challenge is the adaptability of the AR instructions, which need to change depending on the varying condition of the recovered parts.

Towards a Virtual Reality-Based Educational System for Teaching Single-Layer Perceptron Concepts (Booth ID: 1117)

Navid Khezrian, Coburg University of Applied Sciences and Arts; Forouzan Farzinnejad, Coburg University of Applied Sciences and Arts; Jens Grubert, Coburg University of Applied Sciences and Arts

We present a Virtual Reality (VR) educational system for teaching single-layer perceptron concepts, offering an immersive and interactive approach to enhance traditional learning methods. The VR platform gamifies neural network training by simulating a logical OR gate in a factory-inspired environment, aiming to bridge abstract theories with practical applications. In a user study with eight participants, the system received positive but also critical feedback for its interactivity and usability, achieving a System Usability Scale score of 64.69. As an initial study on teaching machine learning in VR, this work highlights VR's potential to enhance comprehension and engagement through interactive experiences.

How Should I React? Learning to Face Chemical Risks with Virtual Reality (Booth ID: 1118)

Pierre Raimbaud, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumière Lyon 2, LIRIS, UMR5205, ENISE; Matthieu Blanchard, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumière Lyon 2, LIRIS, UMR5205, ENISE; Eliott Zimmermann, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumière Lyon 2, LIRIS, UMR5205, ENISE; Sophie Villenave, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumière Lyon 2, LIRIS, UMR5205, ENISE; Guillaume Lavoué, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumière Lyon 2, LIRIS, UMR5205, ENISE; Janine Jongbloed, Laboratoire ECP, Université Lumière Lyon 2, 86 rue Pasteur, F-69365; Rawad Chaker, Laboratoire ECP, Université Lumière Lyon 2, 86 rue Pasteur, F-69365; Matthieu Mesnage, INSA Lyon, Ecole Centrale Lyon, Universite Claude Bernard Lyon 1, CPE Lyon, CNRS, INL UMR 5270, 69100; Bertrand Massot, INSA Lyon, Ecole Centrale Lyon, Universite Claude Bernard Lyon 1, CPE Lyon, CNRS, INL UMR 5270; Jean-Pierre Cloarec, Ecole Centrale Lyon, Universite Claude Bernard Lyon 1, CPE Lyon, CNRS, INL UMR 5270, 69100; Jean-Louis Leclercq, Ecole Centrale Lyon, Universite Claude Bernard Lyon 1, CPE Lyon, CNRS, INL UMR 5270, 69100; Valentin Midez, Univ Lyon, INSA Lyon, CNRS, LIRIS, UMR5205, F-69621; Audrey Serna, Univ Lyon, INSA Lyon, CNRS, LIRIS, UMR5205, F-69621; Élise Lavoué, Université Jean Moulin Lyon 3, iaelyon school of Management, INSA Lyon, CNRS, LIRIS, UMR5205, F-69621

Our project proposes to use Virtual Reality (VR) to train chemists to react to incidents. It explores the effects of multisensoriality on learners' sense of presence and immersion in VR, and the effects of visualising (in or out of VR) behavioural and physiological indicators on learners' reflexivity. Implementing multisensoriality, robustly capturing behavioural and physiological data, and designing relevant indicators are our main challenges. Three situations involving chemical incidents have been developed, each engaging the learner to react accordingly. We present the first implementations of this VR training environment and the results of our first user studies.

Integrating Real-Time ECG Data into Virtual Reality for Enhanced Medical Training (Booth ID: 1119)

Francisco Díaz-Barrancas, University of Extremadura; Daniel Flores-Martin, Extremadura Supercomputing Center; Dr. Javier Berrocal, University of Extremadura; Dr. Juan C. Peguero, University of Extremadura; Pedro J. Pardo, University of Extremadura

Healthcare trainees often encounter challenges such as high-pressure scenarios, risks to patient safety, limited exposure to specific cases, and restricted access to advanced tools. Virtual Reality (VR) addresses these issues by offering a controlled, immersive environment for repeated, risk-free practice. This work presents a VR training system that allows trainees to engage with real-time ECG data, observe live procedures remotely, and provide feedback. The approach ensures patient anonymity through encrypted data, and the system connects theoretical learning with practical application, demonstrating its effectiveness in improving clinical skills and providing a flexible, secure training option for settings with limited patient access.

XR Cricothyroidotomy and Intraosseous Access: A Preliminary Study on Usability and Presence between VR/MR and Hand Tracking/Controller Interactions (Booth ID: 1120)

Dale Button, Durham College; Alvaro Quevedo, Ontario Tech University; Adam Dubrowski, Ontario Tech University

Consumer-level extended reality (XR) devices and available software development tools have sparked content creation in training and education. Current research highlights the benefits of using XR for learning, as well as gaps pertaining to motor skills, since consumer-level devices lack the proper task and instrument representation essential for developing transferable skills. Our paper presents a preliminary study on usability, cognitive load, and presence using an off-the-shelf XR device to better understand the user-experience effects of combining virtual and mixed reality with hand-tracking and controller inputs, which will inform future work on designing effective, properly represented interactions.

User Experience Impact of various User Interfaces on VR Laparoscopy Tasks: A Preliminary Study (Booth ID: 1121)

Bill Ko, Ontario Tech University; Alvaro Quevedo, Ontario Tech University; David Rojas, University of Toronto; Bill Kapralos, Ontario Tech University; Rory Windrim, University of Toronto

Recent advances in immersive technologies have facilitated the development of highly realistic and cost-effective computer-based simulations (CBS), which has greatly influenced health professions training. Unlike high-end simulators that require specialized devices, consumer-level virtual reality (VR) controllers and 3D-printed custom interfaces facilitate interactions with CBSs. However, given the lack of proper task representation, many questions remain regarding their effects on the user experience. This paper presents a preliminary study of usability, cognitive load, and presence comparing virtual laparoscope manipulation using a keyboard and mouse, a gamepad, VR controllers, and a 3D-printed laparoscopic controller.

VR Breastfeeding: A Preliminary User Experience Study comparing Head and Eye Tracking Interactions (Booth ID: 1122)

Gabrielle Hollaender, Ontario Tech University; Alvaro Quevedo, Ontario Tech University; Jennifer Abbass-Dick, Ontario Tech University; Adam Dubrowski, Ontario Tech University

Breastfeeding is an essential activity for newborns and mothers, with low rates becoming a major health concern. While various breastfeeding programs aim to increase breastfeeding success and the desire to breastfeed using electronic health (eHealth) resources, only in-person classes provide hands-on experience with mock-up models. Such resources lack the immersion and proper representation achievable using virtual reality (VR). This paper focuses on better understanding the perceived educational and user-experience value of a VR breastfeeding latching prototype, and on how user interactions leveraging eye and head tracking impact usability and cognitive load.

Immersive Fire Training with Live Expert Support and Embodied Interaction with a Real Extinguisher (Booth ID: 1123)

Lucia Tallero, Nokia; Ester Gonzalez-Sosa, Nokia; Baldomero Rodríguez Árbol, Nokia; Alvaro Villegas, Nokia

We developed an immersive training environment for mastering the use of a fire extinguisher, with the possibility of receiving feedback from a remote expert. To facilitate communication and knowledge sharing, we provide bidirectional audio and event synchronization. As additional novel features, we allow the user to see and interact with their own body and real fire extinguisher using deep learning and color-based semantic segmentation. We also allow the expert to monitor the trainee by rendering a 2D video of their egocentric view. Preliminary results show the benefit of receiving live feedback and the potential of using a real fire extinguisher.

Safe Place VR: A Guided Imagery-Based Virtual Reality Trauma Stabilization Technique (Booth ID: 1124)

Ruyun Dai, Hanyang University; Jimoon Kim, Hanyang University; Kibum Kim, Hanyang University

One of the techniques used in the trauma stabilization stage is the 'safe place', which aims to establish a sense of safety by guiding patients to create vivid mental imagery. Existing research has highlighted both the advantages and potential of using VR for trauma intervention, as well as some limitations. In this study, we present 'Safe Place VR'. Specifically, we provide three pre-set virtual environments, three different time-of-day lighting settings, and several comfort items representing various senses that may be linked to positive emotions or memories, guiding patients to create their own safe place within the VR environment.

Virtual embodiment reduces fear of movement after knee surgery (Booth ID: 1125)

Tony Donegan, IDIBAPS, University of Barcelona; Beñat Amestoy, IDIBAPS, University of Barcelona; Sergi Sastre, Hospital Clinic de Barcelona; Andrés Combalia, Hospital Clinic de Barcelona, University of Barcelona; Maria Victoria Sanchez-Vives, IDIBAPS, ICREA

Virtual embodiment and virtual exercise are promising adjuncts to treatment for orthopedic patients, who face physical and psychological barriers during rehabilitation. This randomized controlled trial evaluated the impact of VR on post-surgical knee recovery. Patients (n=47) were assigned to either conventional rehabilitation or conventional plus daily 20-minute VR training, which utilized embodied avatars for movement visualization. The VR group demonstrated a substantial reduction in kinesiophobia scores at 4 weeks post-surgery (p=0.043). No significant differences were observed in quadriceps strength, knee range of motion, and disability. These findings support VR's potential for addressing psychological barriers in rehabilitation.

Virtual Reality Application for OCD Assessment (Booth ID: 1126)

Ruya Ilkin SULUTAS, Bournemouth University; Antoine RICHEROL, Saint Cyr Écoles de Coëtquidan; Eloi de GARIDEL-THORON, Saint Cyr Écoles de Coëtquidan; Fred Charles, Bournemouth University; Ellen Seiss, Bournemouth University

This paper provides an overview of a fully implemented Virtual Reality (VR) application which aims to support the assessment of the most common subtypes of Obsessive Compulsive Disorder (OCD): cleaning and checking behaviours. The VR application consists of a tool for the therapist to select differing levels of tasks for participants to fulfil within an interactive 3D virtual kitchen. Participants' assessment takes place at the task-completion level, whilst their behaviour is analysed through continuous recording of physiological measures and self-report questionnaires.

Roots of Evolution: An Immersive VR Journey Through Plant Evolution Using Digital Storytelling in Botany (Booth ID: 1127)

Nour Boulahcen, Trinity College Dublin; Yifan Chen, Trinity College Dublin

The public’s growing awareness of sustainability and plant diversity makes teaching plant evolution a fascinating and emerging challenge for learners at every level. Roots of Evolution is an immersive Virtual Reality museum using digital storytelling to explore plant evolution interactively. Inspired by the “Map of Plants” by Dominic Walliman, this project aims to simplify complex concepts and foster curiosity through interactive multi-modal content, an intelligent AI agent, and a narrative-driven approach to 3D immersive media. Future expansions aim to include broader botanical topics and user testing to refine its educational impact, showcasing the potential of immersive technology and digital storytelling in science education.

DecibelDefender: Using Virtual Reality for Promoting Sustainable Hearing Habits at Concerts and Festivals (Booth ID: 1128)

Ali Adjorlu, Aalborg University Copenhagen; August Brandt Juul, Aalborg University; Malte Bach Hansen, Aalborg University; Mikkel Julius Hansen, Aalborg University; Peter Ousager Andersen, Aalborg University; Sophie Vindal Larsen, Aalborg University

Noise-induced hearing loss is a growing concern among young adults attending concerts and festivals, where prolonged exposure to high-decibel environments can cause irreversible damage. DecibelDefender is a multiplayer VR experience designed to promote sustainable hearing habits by immersing users in a simulated concert environment. Participants experience progressive hearing loss and engage in interactive activities emphasizing the importance of hearing protection during concerts. Initial results show an impact on behavioral intentions, with participants demonstrating greater preference for protective measures after trying the VR experience. This study highlights VR's potential to foster empathy and encourage preventive health behaviors.

V-CURRENTS: Virtual Collaborative Underwater Realities for Engagement and Novel Teaching Spaces (Booth ID: 1129)

Alexander Klippel, Wageningen University and Research; Jeroen Hubert, Wageningen University and Research; Jan Oliver Wallgrün, Independent Researcher; Joseph Henry, Triton Oceanic Exploration Society; Thomas Bruhn, Wageningen University and Research; Timon Verduijn, Wageningen University and Research; Tinka Murk, Wageningen University and Research

Ocean literacy--the understanding of how oceans influence the Earth’s ecosystems, their critical role in sustaining life on Earth, and the impact of human activities on the ocean--is essential for making informed decisions to address fundamental environmental problems. However, the inaccessibility of oceans poses special challenges for educators and decision makers. We describe our work on V-CURRENTS, a platform for creating underwater virtual reality field trips as an approach to address some of these challenges and promote ocean literacy through immersive technologies and experiences. We sketch key components of the platform as well as exemplary application scenarios.

Metabook: A System to Automatically Generate Interactive AR Storybooks to Improve Children’s Reading Interest (Booth ID: 1130)

Yibo WANG, The Hong Kong University of Science and Technology (Guangzhou); Yuanyuan MAO, The Hong Kong University of Science and Technology (Guangzhou); Shi-Ting Ni, Computational Media and Arts, The Hong Kong University of Science and Technology (Guangzhou); Zeyu Wang, The Hong Kong University of Science and Technology (Guangzhou); Pan Hui, The Hong Kong University of Science and Technology (Guangzhou)

We propose Metabook, a system to automatically generate interactive AR storybooks to improve children’s reading interest. Metabook introduces a story-to-3D-book generation scheme and a 3D avatar that combines multiple AI models as a reading companion. Our user study shows that Metabook can significantly increase children’s interest in reading. Teachers acknowledged Metabook’s effectiveness in enhancing reading enthusiasm by connecting verbal and visual thinking, expressing high expectations for its future potential in education.

NeuroCR: Integrating Mixed Reality with Desktop Medical Imaging Software for Neurologists Performing Pre-surgical Evaluations of Epilepsy Patients (Booth ID: 1131)

Brody Wells, University of Calgary; Nanjia Wang, University of Calgary; Samuel Wiebe, University of Calgary; Colin Bruce Josephson, University of Calgary; Farnaz Sinaei, University of Calgary; Frank Maurer, University of Calgary

We present our prototype system, developed via a participatory design process with neurologists, demonstrating how Cross Reality technology can integrate the immersive, stereoscopic visualization and interaction benefits of eXtended Reality tools with traditional desktop medical imaging software. Our system enables clinicians to work simultaneously with both systems, with changes to the data synchronized between them. To address the clinical use-case, we adapted interaction techniques to make cross-sectional analysis more intuitive than on a flat screen. Feedback from the neurologists indicates that a Cross Reality system has the potential to enhance their current workflows.

Augmented Reality-Assisted Guide Plate Positioning in Lumbar Surgery Using Apple Vision Pro (Booth ID: 1132)

Hua Ma, Beijing United-Imaging Research Institute of Intelligent Imaging; Chongyan Sun, Beijing United-Imaging Research Institute of Intelligent Imaging; Zhenyu Li, Beijing United-Imaging Research Institute of Intelligent Imaging; Jiahao Chen, Beijing United-Imaging Research Institute of Intelligent Imaging; Runting Li, Peking University Third Hospital; Ziyuan Wang, Peking University Third Hospital; Yun Tian, Peking University Third Hospital; Tengjiao Zhu, Peking University Third Hospital; Zhen Qian, Beijing United-Imaging Research Institute of Intelligent Imaging

We propose an AR system using Apple Vision Pro (AVP) to assess the accuracy of guide plate positioning in lumbar surgery. It leverages preoperative CT data and integrates fiducial marker tracking to automate the initial alignment of a virtual vertebra with the intraoperative anatomy. Additionally, we developed a graphical interface for the AVP platform, which allows manual adjustment of the pose and visualization of the virtual vertebra. The system was tested on 3D-printed lumbar phantoms, an ex vivo pig lumbar specimen, and two in vivo pig surgeries. Results demonstrated accurate guide plate alignment and superior visualization, validating the potential of AR systems in improving surgical precision in lumbar procedures.

An asymmetric VR system to configure and practice low-vision aids for social interactions in clinical settings (Booth ID: 1133)

Johanna Delachambre, Université Côte d'Azur, Inria; Hui-Yin Wu, Université Côte d'Azur, Inria; Monica Di Meo, CHU Pasteur; Frédérique Lagniez, CHU Pasteur; Christine Morfin Bourlat, CHU Pasteur; Stéphanie Baillif, CHU Pasteur; Eric Castet, Aix Marseille Univ, CNRS, CRPN; Pierre Kornprobst, Université Côte d'Azur, Inria

Patients with visual impairment often rely on low-vision aids (LVAs) such as magnifiers to perform near-vision tasks, with rehabilitation programs traditionally focusing on these activities. XR technologies offer opportunities to address broader needs, including social interactions, which also require guidance and training. We present a system leveraging VR to enable realistic testing and training of LVAs within immersive scenarios. Through an observational study involving patients and orthoptists, we illustrate how this approach expands traditional care practices, integrating emerging technologies, facilitating personalization, and enabling efficient training of LVAs under conditions otherwise challenging to replicate in clinical settings.

MetaCourt: Leveraging the Metaverse for Immersive Court Interpreting Education with Self-Determination Theory (Booth ID: 1134)

Chan In Devin SIO, The Hong Kong Polytechnic University; Yao Yao, Xi'an Jiaotong University; Rui Xie, Beijing Foreign Studies University; Xian Wang, The Hong Kong Polytechnic University; Chi Sun, The Hong Kong Polytechnic University; Yu Hin TANG, The Hong Kong Polytechnic University; Choi Chun Ifan CHEUNG, The Hong Kong Polytechnic University; Andrew K.F. Cheung, Hong Kong Polytechnic University; Lik-Hang Lee, Hong Kong Polytechnic University

Court interpreting is vital for justice, yet traditional training methods often lack hands-on engagement and rely on uninspiring trial-and-error techniques. We developed MetaCourt, an immersive virtual environment for court interpreting education to address this. Through participatory design, we created a system evaluated by 21 interpreting students using the PENS model based on self-determination theory. Results showed high autonomy, intuitive controls, and presence, with improved fluency in VR compared to traditional methods. Surveys (NASA-TLX, IPQ) indicated reduced mental workload and strong usability. This study pioneers integrating VR with PENS for court interpreting, offering significant advantages in training effectiveness.

LinkForm: a Multipurpose VR-based Remote Skill-transfer Robotic System (Booth ID: 1135)

Abdullah Iskandar, Telkom University; Hala Aburajouh, Qatar University; Kodai Fuchino, Waseda University; Osama Halabi, Qatar University; Faisal Al-jaber, Qatar University; Mohammed Al-Sada, Qatar University; Tatsuo Nakajima, Waseda University

Physical interaction plays a vital role in various skill-transfer scenarios. Despite its importance, the majority of systems focus on visual and auditory modalities in immersive training environments. Therefore, we present LinkForm, a novel VR-based robotic skill-transfer system comprising a VR environment and a robotic system. LinkForm enables a coach to remotely train a user, conveying visual, verbal, and physical guidance and interactions to the remote trainee. Preliminary evaluation results show statistically significant improvements in participants' performance after using LinkForm. Participants also showed strong interest in training with LinkForm, suggesting various remote-training scenarios.

An Evaluation of Text-Only Versus Text-3D Model Instructions in Augmented Reality Training (Booth ID: 1136)

Eoghan Paul Hynes, Technological University of the Shannon, Midlands and Midwest; Ronan Flynn, Athlone Institute of Technology; Niall Murray, Athlone Institute of Technology

Augmented reality (AR) exhibits potential for delivering training instructions. Trainee learning and transfer can be influenced by the different extraneous cognitive loads implicit in text and graphical instruction formats. Our research evaluated this influence using a GoCube™ training procedure delivered in AR. Learning, transfer and user experience were evaluated using mental rotations, physiological ratings, eye gaze, facial expressions, memory recall and questionnaire responses. Results showed that trainees using text-only instruction were significantly faster in training and recall than their counterparts using a combined text and 3D-model instruction format. This correlated with mental rotation baselines, gaze shifts, and cognitive load.

A Virtual Objective Structured Clinical Examination (OSCE) platform: Implementation and Feasibility Testing (Booth ID: 1137)

Mathias DELAHAYE, Hôpitaux Universitaires de Genève; Andrea Nathaly Neher, Universität Bern; Tanja BIRRENBACH, Inselspital; Florian B. NEUBAUER, Institut für Medizinische Lehre; Christoph BERENDONK, Institut für Medizinische Lehre; Thomas SAUTER, Inselspital; Oliver KANNAPE, Hôpitaux Universitaires de Genève

We present both a technical description of the platform we developed to reproduce the gold-standard assessment of practical clinical skills, the Objective Structured Clinical Examination (OSCE), and the results of a feasibility study conducted with 5th-year medical students (N=33) in a virtual twin of their official examination. The results indicate that participants appreciated the simulation’s functionality and ease of use, and highlighted the platform’s potential as a training resource.

Visualization and Rendering

Human-in-the-Loop Gaussian Model Enhancement with Mobile Robotic Re-Capture (Booth ID: 1138)

Xiaonuo Dongye, Beijing Institute of Technology; Hanzhi Guo, Beijing Institute of Technology; Yihua Bao, Beijing Institute of Technology; Haiyan Jiang, Singapore Institute of Technology; Dongdong Weng, Beijing Institute of Technology

3D Gaussian Splatting can create virtual models with captured real-world images using a camera mounted on a mobile robotic arm. However, the Gaussian model quality deteriorates when parts of the object that have not been previously captured are exposed in VR interaction. This poster presents a human-in-the-loop Gaussian model enhancement method, comprising pre-training, viewpoint labeling, robotic image re-capture, and Gaussian model enhancement. By incorporating newly re-captured images, the enhanced models achieve improved image similarity metrics compared to the pre-trained ones, with minimal optimization time. This method facilitates continuous quality improvement, making Gaussian models more adaptable for VR applications.

CvhSlicer 2.0: Immersive and Interactive Visualization of Chinese Visible Human Data in XR Environments (Booth ID: 1139)

Yue Qiu, The Chinese University of Hong Kong; Yuqi Tong, The Chinese University of Hong Kong; Yu Zhang, The Chinese University of Hong Kong; Qixuan Liu, The Chinese University of Hong Kong; Jialun Pei, The Chinese University of Hong Kong; Shi Qiu, The Chinese University of Hong Kong; Pheng Ann Heng, The Chinese University of Hong Kong; Chi-Wing Fu, The Chinese University of Hong Kong

The study of human anatomy through advanced visualization techniques is crucial for medical research and education. In this work, we introduce CvhSlicer 2.0, an innovative XR system designed for immersive and interactive visualization of the Chinese Visible Human (CVH) dataset. Particularly, our proposed system operates entirely on a commercial XR headset, offering a range of visualization and interaction tools for dynamic 2D and 3D data exploration. By conducting comprehensive evaluations, our CvhSlicer 2.0 demonstrates strong capabilities in visualizing anatomical data, enhancing user engagement and improving educational effectiveness. A demo video is available at CfR72S_0N-4.

DimSplat: A Real-Time Diminished Reality System for Revisiting Environments Using Gaussian Splats in Mobile WebXR (Booth ID: 1140)

Kristoffer Waldow, TH Köln; Jonas Scholz, TH Köln; Arnulph Fuhrmann, TH Köln

Diminished Reality (DR) allows for the selective removal or transformation of real-world objects from a user’s field of view. In certain cases, it becomes essential to revisit environments and observe them in their original state. While smartphone and Web-based approaches provide intuitive and convenient AR platforms, they also face challenges such as limited resources. This work introduces a DR system using Gaussian splats for real-time object removal on WebXR-enabled devices. We achieve up to 60 FPS while maintaining high visual fidelity, validated with different image quality metrics, offering a scalable solution for revisiting and manipulating temporally altered environments.

ImmersiveDepth: A Hybrid Approach for Monocular Depth Estimation from 360-Degree Images Using Tangent Projection and Multi-Model Integration (Booth ID: 1141)

Sarshar Dorosti, Ulster University; Xiaosong Yang, Bournemouth University

ImmersiveDepth is a hybrid framework designed to tackle challenges in Monocular Depth Estimation (MDE) from 360-degree images, specifically spherical distortions, occlusions, and texture inconsistencies. By integrating tangent image projection, a combination of convolutional neural networks (CNNs) and transformer models, and a novel multi-scale alignment process, ImmersiveDepth achieves seamless and precise depth predictions. Evaluations on diverse datasets show an average 37% reduction in RMSE compared to Depth Anything V2 and a 25% accuracy boost in low-light conditions over MiDaS v3.1. ImmersiveDepth thus establishes a robust solution for immersive technologies, autonomous systems, and 3D reconstruction.

Night Sky Explorer VR (Booth ID: 1142)

Maxim Spur, Ecole Nationale d'Ingénieurs de Brest; Philippe DEVERCHERE, ScotopicLabs; Olivier Augereau, ENIB; Edna Hernández González, UBO

We present a Virtual Reality application that immersively visualizes nighttime skyglow and its sources of light pollution. Utilizing Unity with Cesium and OpenStreetMap geospatial data, the system integrates calibrated all-sky images processed with the Sky Quality Camera software to provide natural and luminance views of the night sky. By projecting road maps and location markers onto a surrounding sphere using inverse stereographic projection, users can intuitively explore the correlation between observed skyglow and its sources. This tool offers educational and exploratory potential, highlighting the impact of artificial light pollution on the night sky in an immersive and realistic context.
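
As a rough illustration of the inverse stereographic projection mentioned above, the Python sketch below maps a planar map coordinate onto a unit sphere; the coordinate conventions (unit sphere, projection point at the north pole) are our own assumption, not necessarily the application's.

```python
import numpy as np

def inverse_stereographic(x, y):
    """Map a planar map coordinate (x, y) onto the unit sphere, so that flat
    road-map features can be draped over a surrounding sky sphere."""
    d = 1.0 + x * x + y * y
    return np.array([2.0 * x / d,
                     2.0 * y / d,
                     (x * x + y * y - 1.0) / d])
```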

AvatarMirror: Rendering Restoration for Real-time Immersive Presence System (Booth ID: 1143)

Jia Li, Lenovo Research; Yan Liu, Lenovo Research; Nan Gao, Institute of Automation, Chinese Academy of Sciences; Ke Shang, Lenovo Research

Human-centered immersive computing is gaining traction in VR and AR. We establish a real-time immersive presence system called AvatarMirror that enables arbitrary characters to look at themselves instantly in novel stereo views. First, stereo matching is implemented to acquire multi-view depth priors, based on which the human topology is reconstructed through raycasting from a novel perspective. Then, the color information from the reference views is adaptively blended to perform the human rendering. Furthermore, we propose an RR-Net to conduct rendering restoration. AvatarMirror realizes real-time, high-fidelity immersive presence with multi-threaded scheduling and CUDA acceleration.

Optimization Strategies for Standalone Virtual Reality Experiences: the virtual reconstruction of the city of Aquinum (Booth ID: 1144)

Bruno Rodriguez-Garcia, University of Burgos; Giuseppe Ceraudo, University of Salento; Laura Corchia, University of Salento; Lucio Tommaso De Paolis, University of Salento

This study presents an optimized 3D modeling procedure for the development of immersive Virtual Reality (iVR) experiences. It has been tested by conducting the virtual reconstruction of the Roman city of Aquinum (Italy), a key archaeological site. The resulting 3D model demonstrates a high Level of Detail (LoD) across a total area of 17,570 m² (2,470 m² of which are explorable) with a file size of only 46.2 MB, incorporating just six unique textures. This research not only delivers an accurate reconstruction of the city of Aquinum but also introduces an effective optimization method based on open-source tools.

Rendering Reality: Measuring Time Efficiency in Virtual Human Creation (Booth ID: 1145)

Or Butbul, Coastal Carolina University; Oyewole Oyekoya, City University of New York - Hunter College

Recent advancements in real-time rendering technology have significantly improved the detail of virtual humans. Higher resolution textures, physically-based hair and skin materials, and other innovations have increased their realism. However, this has also increased render times and complexity. While hardware improvements alleviate some issues, limited access to high-performance hardware restricts creators on lower-performance systems from using advanced rendering techniques. This study explores the relationship between render times at different levels of detail and the perceived realism of avatars. By comparing these levels, we establish a relative realism scale and identify suitable levels of detail for specific virtual experiences.

Color Display on Moving Object Trajectories Using High-Speed Projection (Booth ID: 1146)

Arisa Kohtani, Institute of Science Tokyo; Shio Miyafuji, Institute of Science Tokyo; Hideki Koike, Institute of Science Tokyo

We propose a method to display target colors along the trajectories of moving objects using high-speed projection. A high-speed projector displays multiple frames consisting of target-color frames and a single complementary-color frame. Along the motion trajectories of the objects, the afterimage effect is selectively disrupted, so the target color appears prominently, while color perception is maintained when the objects stop. This method enables novel interactions by dynamically adapting visual effects to the motion of objects.
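
One way to picture the frame scheme is to choose the single complementary frame so that the time average of the projected frames is neutral; the Python sketch below is our own simplification in linear RGB, not the authors' calibration procedure.

```python
import numpy as np

def complementary_frame(target_rgb, n_target_frames, neutral=0.5):
    """Pick the color of the single complementary frame so that averaging it
    with n_target_frames copies of the target color yields a neutral gray.
    Values are clipped to [0, 1], so highly saturated targets are only
    approximately neutralized."""
    target = np.asarray(target_rgb, dtype=float)
    comp = (n_target_frames + 1) * neutral - n_target_frames * target
    return np.clip(comp, 0.0, 1.0)

# Example: one target frame of a muted red, one complementary frame
print(complementary_frame([0.8, 0.3, 0.3], n_target_frames=1))  # -> [0.2, 0.7, 0.7]
```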

Exploring Indirect Relations between Topics in Augmented Reality to Inform the Design of a Neuroscience Experiment (Booth ID: 1147)

Boyu Xu, Utrecht University; Lynda Hardman, CWI; Wolfgang Hürst, Utrecht University

Neuroscientists analyse publications to inform experiment design. Exploring direct relations between topics, such as brain diseases and regions, aids this process. Brain diseases may also connect indirectly to regions through topics such as mental processes. We aim to establish whether exploring indirect relations helps design experiments. Using a user-centred design approach, we interview neuroscientists to establish the usefulness of exploring indirect relations, specify functionality, and design a corresponding visualisation. Nine neuroscientists indicated the visualisation is suitable to present the functionality, the functionality is useful to explore indirect relations, and exploring indirect relations is useful to design experiments.

Extended Reality System for Robotic Learning from Human Demonstration (Booth ID: 1148)

Isaac Ngui, University of Illinois Urbana-Champaign; Courtney McBeth, University of Illinois Urbana-Champaign; Grace He, University of Illinois Urbana-Champaign; André Corrêa Santos, Insper; Luciano P Soares, Insper; Marco Morales, University of Illinois Urbana-Champaign; Nancy Amato, University of Illinois Urbana-Champaign

Many tasks are intuitive for humans but difficult to encode algorithmically when utilizing a robot. Robotic systems often benefit from learning from expert demonstrations, wherein operators move the robot along trajectories. Often, using a physical robot to provide these demonstrations may be difficult or unsafe. Extended reality provides a natural setting for demonstrating trajectories while bypassing safety concerns or modifying existing solution trajectories. We propose the Robot Action Demonstration in Extended Reality (RADER) system for learning from demonstration approaches. We additionally present its application to a state-of-the-art approach and show comparable results between using a physical robot and our system.

Cuing Multiple-Targets for Visual Search in Virtual Reality (Booth ID: 1149)

Brendan Kelley, Colorado State University; Ryan P. McMahan, Virginia Tech; Christopher Wickens, Colorado State University; Benjamin A. Clegg, Montana State University; Francisco Raul Ortega, Colorado State University

Visual search is a common task, especially given the high amount of spatial information we process visually. To aid in searching an environment for targets, various cues have been developed and implemented for augmented reality (AR) and virtual reality (VR) head-mounted displays (HMDs). A variety of different designs have emerged from prior literature including the gaze line, 2D wedge, and 3D arrow, each with unique and different design characteristics. However, many of these designs are not evaluated beyond their initial design proposals. Results favored the gaze line cue for search time, accuracy, and reported mental effort, potentially highlighting the benefit of having both direction and location information embedded into the cue.

Impact of Adding, Removing and Modifying Driving and Non-Driving Related Information on Trust in Autonomous Vehicles (Booth ID: 1150)

Thi Thanh Hoa TRAN, IMT Atlantique; Bruce H Thomas, University of South Australia; Guillaume Moreau, IMT Atlantique; James A. Walsh, University of South Australia; Etienne Peillard, IMT Atlantique

Integrating Autonomous Vehicles (AVs) into daily life requires enhancing user trust and acceptance. While prior research demonstrated that Augmented Reality (AR) visualizations improve trust by adding driving-related information, the effects of removing or modifying unrelated information remain unclear. This study examines six AR visualization strategies within AVs, including the addition, modification, and removal of information related or unrelated to driving tasks. Using a custom-developed driving simulator that emulates AR, we evaluated their impact on trust, technology acceptance, and situational awareness. Results indicate that AR visualizations can significantly enhance trust in AVs, whether adding, modifying, or removing information.

GaussianShopVR: Facilitating Immersive 3D Authoring using Gaussian Splatting in VR (Booth ID: 1151)

Yulin Shen, The Hong Kong University of Science and Technology (Guangzhou); Boyu Li, The Hong Kong University of Science and Technology (Guangzhou); Jiayang Huang, The Hong Kong University of Science and Technology (Guangzhou); Zeyu Wang, The Hong Kong University of Science and Technology (Guangzhou)

3D Gaussian Splatting has attracted much attention for its ability to quickly create digital replicas of real-life scenes and its compatibility with traditional rendering pipelines. However, it remains a challenge to edit 3DGS in a flexible and controllable manner. We propose GaussianShopVR, a system that leverages VR user interfaces to specify target areas to achieve flexible and controllable editing of reconstructed 3DGS. In addition, selected areas can provide 3D information to generative AI models to facilitate the editing. GaussianShopVR integrates object hierarchy management while keeping the backpropagated gradient flow to allow local editing with context information.

Learning Cubic Field Representation from A Single Panorama for Virtual Reality (Booth ID: 1153)

Wenjie Chang, University of Science and Technology of China; Hao Ai, The Hong Kong University of Science and Technology (Guangzhou Campus); Tianzhu Zhang, University of Science and Technology of China; Lin Wang, HKUST

Panoramic images provide comprehensive scene information and are suitable for VR applications. Obtaining corresponding depth maps is essential for achieving immersive and interactive experiences. We propose a novel method that learns a cubic field composed of multiple MPIs from a single panoramic image for depth estimation. The entire pipeline is trained using photometric loss calculated from rendered views within a self-supervised learning approach. Experiments demonstrate the superior performance of CUBE360 and highlight its effectiveness in downstream applications.

LIVE-GS: LLM Powers Interactive VR by Enhancing Gaussian Splatting (Booth ID: 1154)

Haotian Mao, Shanghai Jiao Tong University; Zhuoxiong Xu, Shanghai Jiao Tong University; Siyue Wei, Shanghai Jiao Tong University; Yule Quan, Shanghai Jiao Tong University; Nianchen Deng, Shanghai AI Lab; Xubo Yang, Shanghai Jiao Tong University

We propose LIVE-GS, a highly realistic interactive Gaussian splatting system in VR setting powered by LLM. Our pipeline supports reconstructions and physically-based interactions in VR, integrating object-aware reconstruction, GPT-assisted inpainting, and a computationally efficient simulation framework. To improve the scene understanding, we prompt GPT-4o to analyze the physical properties of objects in the scene, thereby guiding physical simulations to align with real-world phenomena. Our experimental results demonstrate that with the assistance of LLM's understanding and enhancement of scenes, our VR system can support complex and realistic interactions without requiring additional manual design or annotation.

Towards an Expanded Eyebox for a Wide-Field-of-View Augmented Reality Near-eye Pinlight Display with 3D Pupil Localization (Booth ID: 1155)

Xinxing Xia, Shanghai University; Zheye Yu, Shanghai University; Dongyu Qiu, Singapore Institute of Technology; Andrei State, University of North Carolina at Chapel Hill; Tat-Jen Cham, Nanyang Technological University; Frank Guan, Singapore Institute of Technology; Henry Fuchs, University of North Carolina at Chapel Hill

A head-worn optical-see-through near-eye display (NED) is crucial for augmented reality (AR), enabling simultaneous perception of virtual and real imagery. Current AR NEDs face trade-offs among field of view (FOV), eyebox size, and form factor. We designed an enhanced pinlight AR NED using a novel approach to capture the pupil’s 3D location and compute a modulation pattern on the display. In our design, an eye-tracking camera rig captures stereoscopic views to calculate a display pattern, modulating light beams for precise imagery. Our compact prototype achieves wide FOV, large eyebox, and experimental results confirm its spatial and temporal consistency.

Ray-based Multiscale Spherical Grid for Egocentric Viewing (Booth ID: 1156)

Weichao Song, College of Computer and Information Science, Southwest University; Bingyao Huang, Southwest University

Virtual reality (VR) and augmented reality (AR) demand real-time, high-quality egocentric viewing. Neural rendering can generate extremely high-quality novel views but requires expensive training and renders too slowly. To address these challenges, this paper proposes a method named multi-scale spherical tensor decomposition (MSTD). Our method can reconstruct scenes from multi-view or omnidirectional images, outperforming baseline methods in training and rendering time as well as rendering quality. Its ability to represent egocentric neural 3D scenes also opens up further applications in related domains such as data storage, VR, and AR.

In-situ Ultrasound Guided Closed Long Bone Shaft Fracture Treatments (Booth ID: 1157)

Wenqing Yan, Tsinghua University; Haowei Li, Tsinghua University; Shiye Huang, Beijing Tsinghua Changgung Hospital; Long Qian, Medivis. Inc.; Hui Ding, Tsinghua University; Guangzhi WANG, Tsinghua University; Zhe Zhao, Tsinghua University

Closed Reduction and Internal Fixation (CRIF) is the preferred treatment for closed long bone shaft fractures, but frequent X-ray imaging reduces efficiency and increases radiation exposure. Intra-operative ultrasound offers real-time, radiation-free bone imaging but is challenging for orthopedic surgeons. To address this, we developed an Augmented Reality (AR) navigation system with in-situ ultrasound visualization, combining on-device infrared tool tracking via an AR headset's depth sensor with remote rendering for accurate, low-latency images. A standard operating procedure for surgical application was also proposed. Clinical trials with six patients showed a significant reduction in fluoroscopy dose and operation time.

Wednesday posters

Haptic Feedback

4-DOF Haptic Rendering for Hysteroscopic Surgery Simulation (Booth ID: 1201)

Lei He, Beihang University; Mingbo Hu, Beihang University; Hongyu Wu, Beihang University; Shuai Li, Beihang University; Hong Qin, Stony Brook University; Aimin Hao, Beihang University

Haptic rendering and soft-body deformation are important parts of virtual surgery. In hysteroscopic surgery, the movement of the surgical tool is constrained by the fixed entry point, and no well-developed algorithm exists for these surgical scenes. We propose a 4-degree-of-freedom algorithm suitable for hysteroscopic surgery that provides stable feedback forces and realistic soft-body deformation. Our energy-based algorithm for rigid-soft-body haptic interaction can synchronize iterations running at different update frequencies. Experimental results show that users can operate in real time with high stability and fidelity.

Haptic Shape Discrimination in Virtual Environments Using Force Direction (Booth ID: 1202)

Lida Ghaemi Dizaji, University of Calgary; Yaoping Hu, University of Calgary; Mahdiyeh Sadat Moosavi, Arts et Metiers Institute of Technology; Frederic Merienne, Arts et Metiers Institute of Technology

Shape discrimination of objects relies on sensory and contextual cues. While existing studies have explored cues for shape discrimination, an underexplored question is what minimal haptic cue (one kind of sensory cue) is sufficient for such discrimination alongside contextual cues in virtual environments (VEs). This study examined whether changes in force direction, as a haptic cue, could provide this sufficiency. The results confirmed this sufficiency for discrimination under certain conditions, implying the potential of applying force direction to simplify the design of haptic cues for VE applications.

Understanding User Perception of Haptic Illusions (Booth ID: 1203)

Josué JV Vincent, ESIEA; Théo COMBE, Caplogy; Noreen Izza Arshad, Universiti Teknologi PETRONAS; Aylen Ricca, Univ. Evry, Paris-Saclay

This research investigates the realism of virtual touch through haptic devices, comparing high-fidelity haptic feedback (force-based) with pseudo-haptics (visual illusions of touch) in the context of training. This preliminary study involves the design and development of a virtual reality (VR) prototype and an experimental evaluation in which participants performed an object manipulation task under two conditions (force feedback and visuo-haptic feedback) to assess weight estimation and self-performance. Results show that haptic feedback enhances realism and presence in VR, but pseudo-haptic feedback offers a cost-effective alternative for less precision-critical tasks, with potential for refinement.

Hand Redirection Thresholds in a Shooting Game (Booth ID: 1204)

Mathieu Lutfallah, ETH Zurich; Alessandro Biella, ETH Zurich; Andreas Kunz, ETH Zurich

This study investigates the combined effects of time pressure and interactivity on hand redirection perception in virtual reality during haptic retargeting, using an immersive shooting game as the experimental context. A total of 27 participants interacted with a haptic proxy under two experimental conditions: varying the speed of approaching enemies and the distance between the physical and virtual objects. The results revealed a high perceived normality of interactions, with over 50% of responses rated as normal across all phases of the experiment. These findings highlight the positive influence of engagement on the perceived realism of haptic interactions.

Dynamic and Modular Thermal Feedback for Interactive 6DoF VR (Booth ID: 1205)

Sophie Villenave, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumière Lyon 2, LIRIS, UMR5205, ENISE; Pierre Raimbaud, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumière Lyon 2, LIRIS, UMR5205, ENISE; Guillaume Lavoué, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumière Lyon 2, LIRIS, UMR5205, ENISE

Thermal effects in Virtual Reality (VR) can provide sensory information when interacting with objects or create a thermal ambiance for a Virtual Environment (VE). They are critical to a range of applications, such as firefighting training or thermal comfort simulation. Existing ambient thermal feedback systems have limitations: some lack proper sensory characterization, limit movements and interactions, or are difficult to replicate. In this context, our contribution is a reproducible room-scale system able to provide dynamic ambient thermal feedback for 6-degree-of-freedom VR experiences. We present its psychophysical study of thermal sensation, latency, and noise (n=10).

Impact of Virtual Self-Touch on the Embodiment of Avatars with Different Body Sizes (Booth ID: 1206)

Kotaro Okada, Osaka University; Kazuhiro Matsui, Osaka Electro-Communication University; Ruu Fujii, Osaka University; Ryoma Kojima, Osaka University; Keita Atsuumi, Hiroshima City University; Hiroaki Hirai, Osaka University; Atsushi Nishikawa, Osaka University

This study explores the impact of our virtual self-touch (VST) method, in which participants touch an avatar as a substitute body and receive haptic feedback, on updating the body schema as an intervention in motor control strategies. VST was tested with ten participants using avatars of normal size and avatars with elongated forearms. The effectiveness of the method was evaluated using perceptual drift and reaching tasks, comparing VST to an arm swing task (AST). The results showed that VST enhanced perceptual drift and affected motor control strategy more than AST, suggesting its role in altering the body schema. This effect on motor control strategies points toward applications in rehabilitation.

Lightweight Wearable Fingertip Haptic Device with Tangential Force Feedback based on Finger Nail Stimulation (Booth ID: 1207)

Yunxiu Xu, Institute of Science Tokyo; Siyu Wang, Institute of Science Tokyo; Shoichi Hasegawa, Institute of Science Tokyo

We introduce a lightweight wearable haptic device providing tangential force feedback through fingernail stimulation. The device uses a ring structure on the fingernail with three miniature motors: two for string-based feedback and one with an arc-shaped pin. The design maintains finger pad sensitivity by placing actuators on the nail side while delivering 2-DOF force feedback. Weighing 5.24 g, the device generates nine force patterns by combining string tension and pin pressure. User studies evaluated direction discrimination and the perception of weights and friction coefficients.

Enhancing Visuo-Haptic Coherency by Manipulating Fingertip Contact Tilt (Booth ID: 1208)

Hiroki Ota, Nara Institute of Science and Technology; Yutaro Hirao, Nara Institute of Science and Technology; Monica Perusquia-Hernandez, Nara Institute of Science and Technology; Hideaki Uchiyama, Nara Institute of Science and Technology; Kiyoshi Kiyokawa, Nara Institute of Science and Technology; Maud Marchal, Univ. Rennes, INSA, IRISA, Inria

This study investigates the manipulation of fingertip contact plane tilt to enhance the visuo-haptic coherency of virtual objects. We designed a compact device that adjusts fingertip contact plane tilt to simulate diverse 3D shapes. Through user studies, the device demonstrated significant improvements in haptic realism, enjoyment, and operability compared to conventional methods. These findings highlight the potential of fingertip contact plane manipulation as a novel approach for advancing virtual reality (VR) haptic interfaces, enabling more immersive and realistic experiences.

A Flexible Vibrotactile Feedback System for Rapid Prototyping (Booth ID: 1209)

Carlos Paniagua, Nara Institute of Science and Technology; Hiroki Ota, Nara Institute of Science and Technology; Yutaro Hirao, Nara Institute of Science and Technology; Monica Perusquia-Hernandez, Nara Institute of Science and Technology; Hideaki Uchiyama, Nara Institute of Science and Technology; Kiyoshi Kiyokawa, Nara Institute of Science and Technology

Tape-Tics is a flexible, modular vibrotactile feedback system integrating LEDs and vibration motors. Users can adjust vibration intensity, LED colors, and configurations via Bluetooth Low Energy (BLE), enabling operation based on predefined settings. Built with flexible printed circuit (FPC) technology, Tape-Tics adapts to curved surfaces, enhancing its versatility in education, research, and art. In a hands-on workshop, participants assembled and operated Tape-Tics to explore vibrotactile technology. Using the GUI controller, they configured vibration patterns and developed unique applications. Surveys showed that participants' understanding of haptics improved significantly, and they rated Tape-Tics highly as an educational tool.

Interaction and Interfaces

Unfolding Complex Knowledge Graphs in Large Mixed Reality Space (Booth ID: 1210)

Zhongyuan Yu, Technische Universität Dresden; Julian Baader, Technische Universität Dresden; Matthew McGinity, Technische Universität Dresden

With the advent of mixed reality technologies, low-cost mobile head-mounted displays such as the Meta Quest series have become widely affordable. These devices come with markerless inside-out tracking capabilities and onboard operating systems and can therefore be used in large spaces for interactive data exploration. In this work, we introduce design considerations, concepts, and methods to enhance the interactive exploration and authoring of knowledge graph datasets within architectural-scale mixed reality environments. We have developed a prototype showcasing the potential of our concept. Moving forward, we plan to continue developing the system and conduct user studies to further evaluate our concepts.

Multi-modal Interaction with Virtual Agents: A Pick-up and Placement Case Study (Booth ID: 1212)

Jalal Safari Bazargani, Sejong University; Soo-Mi Choi, Sejong University

This paper presents a multi-modal human-agent interaction framework that integrates hand tracking, voice commands, user intention interpretation, and scene understanding using large language and vision models for object pick-up and placement tasks in VR. Scene understanding integrates object tags and ChatGPT-4V's object detection. User commands are processed via the LLaMA-3.2 model, which interprets user intention and identifies task and reference objects and their spatial relationships for task execution. An initial experiment with 15 participants showed that the multimodal approach achieved the fastest task completion times and highest user satisfaction. While hand-only interaction was comparable, voice-only interaction was significantly slower.
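
As a concrete illustration of the intent-interpretation step, the sketch below shows one way a language model's reply could be turned into a structured pick-and-place command. The prompt wording, JSON schema, and function names are hypothetical; the authors' pipeline may differ.

    import json

    PROMPT_TEMPLATE = (
        "You control a VR pick-and-place agent. Given the user's command and the visible "
        "object tags, reply with JSON containing 'action' (pick_up or place), "
        "'target_object', 'reference_object', and 'spatial_relation'.\n"
        "Visible objects: {objects}\nUser command: {command}"
    )

    def parse_intent(llm_reply: str) -> dict:
        # Turn the model's JSON reply into a structured command; fall back gracefully.
        try:
            intent = json.loads(llm_reply)
        except json.JSONDecodeError:
            return {"action": None}
        keys = ("action", "target_object", "reference_object", "spatial_relation")
        return {k: intent.get(k) for k in keys}

    # Hypothetical reply for "put the mug next to the laptop".
    reply = '{"action": "place", "target_object": "mug", "reference_object": "laptop", "spatial_relation": "next_to"}'
    print(parse_intent(reply))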

Optimized Sensor Position Detection: Improving Visual Sensor Setups for Hand Tracking in VR (Booth ID: 1213)

Melissa Steininger, University Hospital Bonn; Anna Jansen, University Hospital Bonn; Kristian Welle, University Hospital Bonn; Björn Krüger, University Hospital Bonn

Hand tracking plays an important role in many Virtual Reality (VR) applications, enabling natural user interactions. Achieving precise tracking is often challenged by occlusion and suboptimal sensor placement. To address these challenges, we developed the Sensor Positioning Simulator, a versatile tool designed to optimize sensor placement. To demonstrate its utility, we simulated scenes from the VIRTOSHA project, a VR-based surgical training platform. Evaluations show that the tool effectively positioned sensors to achieve maximum hand surface visibility and full hand movement area coverage, even in occlusion-heavy environments. Future developments include support for animated simulations and validation through real-world experiments.

OrbitText: A Hybrid Prediction System for Efficient and Effortless Text Entry in Virtual Reality (Booth ID: 1214)

Zihao Li, Beijing Engineering Research Center of Mixed Reality and Advanced Display; Dongdong Weng, Beijing Engineering Research Center of Mixed Reality and Advanced Display,School of Optics and Photonics; Xiaonuo Dongye, Beijing Engineering Research Center of Mixed Reality and Advanced Display; Jie Hao, Beijing Engineering Research Center of Mixed Reality and Advanced Display; Xiangyu Qi, Beijing Engineering Research Center of Mixed Reality and Advanced Display

Text entry in virtual reality (VR) is a common task; however, it is often hindered by low typing efficiency and physical fatigue. To address these challenges, we present OrbitText, a novel text entry system for VR environments. OrbitText features a dual-handed, minimal-motion 3D interface and integrates an innovative typing prediction model that combines large language models (LLMs) with n-gram models. Our design aims to minimize physical strain while enhancing typing efficiency. User studies demonstrate that OrbitText significantly outperforms mainstream virtual keyboards, achieving higher efficiency, reduced fatigue, and a lower perceived task load.
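
The abstract does not detail how the LLM and n-gram predictions are combined; one common fusion strategy is linear interpolation of the two next-word distributions, sketched below with made-up probabilities. The weighting and vocabulary are assumptions for illustration only.

    def fuse_predictions(llm_probs, ngram_probs, lam=0.7):
        # Interpolate next-word probabilities: lam weights the LLM, (1 - lam) the n-gram model.
        vocab = set(llm_probs) | set(ngram_probs)
        fused = {w: lam * llm_probs.get(w, 0.0) + (1 - lam) * ngram_probs.get(w, 0.0) for w in vocab}
        total = sum(fused.values()) or 1.0
        return {w: p / total for w, p in fused.items()}

    # Illustrative distributions for the prefix "see you ...".
    llm = {"soon": 0.5, "later": 0.3, "tomorrow": 0.2}
    ngram = {"later": 0.6, "soon": 0.3, "there": 0.1}
    top = sorted(fuse_predictions(llm, ngram).items(), key=lambda kv: -kv[1])[:3]
    print(top)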

Human-Scene Interaction Data Generation with Virtual Environment using User-Centric Scene Graph (Booth ID: 1215)

Taewook Ha, KAIST; Selin Choi, KAIST; Seonji Kim, KAIST; Dooyoung Kim, KAIST; Woontack Woo, KAIST

We present a new approach using Virtual Reality to capture human-scene interaction data within 3D environments. Traditional methods required costly physical setups with real furniture, making it difficult to collect both interaction and 3D scene data comprehensively. Our solution implements a user-centric scene graph with a hierarchical structure that captures both user interactions and 3D scene elements. The resulting dataset contains real human interaction data, dynamic scene information, synthetic sensor data, and scene graph annotations. This work provides a foundation for developing Mixed Reality experiences using 3D scene graphs with human interaction data.

Physiological Signals Quality in 6DoF Virtual Reality: Preliminary Results (Booth ID: 1216)

Matthieu Blanchard, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumière Lyon 2, LIRIS, UMR5205, ENISE; Sophie Villenave, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumière Lyon 2, LIRIS, UMR5205, ENISE; Tristan Habémont, INSA Lyon, Ecole Centrale Lyon, Universite Claude Bernard Lyon 1, CPE Lyon, CNRS, INL UMR 5270; Bertrand Massot, INSA Lyon, Ecole Centrale Lyon, Universite Claude Bernard Lyon 1, CPE Lyon, CNRS, INL UMR 5270; Pierre Raimbaud, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumière Lyon 2, LIRIS, UMR5205, ENISE; Guillaume Lavoué, Ecole Centrale de Lyon, CNRS, INSA Lyon, Universite Claude Bernard Lyon 1, Université Lumière Lyon 2, LIRIS, UMR5205, ENISE

Physiological data are increasingly used in Virtual Reality (VR) to analyze the user experience. However, when users move and interact in VR, the quality of physiological data degrades. This study evaluates the impact of movements and sensor positions on signal quality. Three positions for EDA and two for ECG were compared across 36 subjects performing a series of different gestures in a within-subjects design. We observed that movements requiring the hand to be closed significantly degraded EDA signals, and that the palm position offered better robustness. For ECG, the position closest to the heart gave the best results.

Autonomous Hand Gesture Recognition Solution for Embedded Automotive Systems (Booth ID: 1217)

Nicolas LEVI-VALENSI, Aix-Marseille Université; Antoine HP MORICE, Aix-Marseille Université; William WILMOT, Stellantis; Jocelyn Monnoyer, Stellantis; Dr Julien MAROT, Aix-Marseille Université

We investigated how to design a software pipeline to recognize hand gestures from images captured inside a vehicle cockpit. Two architectures are compared. Frequency descriptors derived from the Fourier transform were used solely for image outline analysis.
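
Fourier descriptors of an outline are a standard technique; the numpy sketch below shows the textbook computation on a closed contour, normalized for translation and scale. It illustrates the general idea only and is not claimed to match the authors' pipeline.

    import numpy as np

    def fourier_descriptors(contour, n_coeffs=16):
        # contour: (N, 2) array of (x, y) points along a closed outline.
        z = contour[:, 0] + 1j * contour[:, 1]    # complex representation of the outline
        coeffs = np.fft.fft(z)
        coeffs[0] = 0.0                           # drop the DC term -> translation invariance
        norm = np.abs(coeffs[1]) or 1.0           # scale by the first harmonic -> scale invariance
        return np.abs(coeffs)[1:n_coeffs + 1] / norm

    # Example on a synthetic circular outline (a hand contour would come from segmentation).
    theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    print(fourier_descriptors(circle)[:4])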

Visual assistance in VR-based robot control: towards a reproducible evaluation scenario (Booth ID: 1218)

Marine Desvergnes, CeRCA, Université de Poitiers; Cecile R. Scotto, CeRCA, Université de Poitiers; Kathleen Belhassein, PPRIME, Université de Poitiers; Célestin Préault, CESI Lineact; Jean Pylouster, CeRCA, Université de Poitiers; Pierre Laguillaumie, PPRIME, Université de Poitiers; Jean-Pierre Gazeau, PPRIME, Université de Poitiers; Ludovic Le Bigot, CeRCA, Université de Poitiers; Nicolas Louveton, CeRCA, Université de Poitiers

Immersive technologies enhance industrial applications by creating virtual representations of manufacturing environments and providing visual assistance for tasks. Most studies are based on ad hoc scenarios, making comparisons across studies difficult. In this research, 99 participants controlled a remote industrial robot using a VR headset in a reproducible maintenance task. They performed the task under three conditions: no assistance (control), text-based assistance, or attentional cueing (highlighting objects). We manipulated task difficulty (easy vs. difficult) and assessed performance based on completion time and failure rate. Results showed that visual assistance significantly shortened task completion time; the implications of these findings are discussed.

Towards the Fusion of Gaze and Micro-Gestures (Booth ID: 1219)

Kagan Taylor, Lancaster University; Haopeng Wang, Lancaster University; Florian Weidner, Lancaster University; Hans Gellersen, Lancaster University

Gaze and hand-based micro-gestures have been widely studied individually, each offering unique advantages as input modalities, but their combination remains under-explored. In this work, we integrate gaze with thumb-to-finger micro-gestures, discussing their complementary strengths to enable expressive and efficient interactions. We examine two strategies for combining gaze and micro-gestures, assess their strengths and weaknesses, and present examples such as mouse emulation and application shortcuts. Finally, we outline the potential and challenges of gaze and micro-gestures to enhance interaction across diverse contexts, and discuss future directions. This work begins the exploration of gaze and micro-gestures for novel input techniques.

Effects of Confidence Visualization in Automated Assembly Instructions (Booth ID: 1220)

Kazuma Mori, Nara Institute of Science and Technology; Isidro M. Butaslac III, Nara Institute of Science and Technology; Taishi Sawabe, Nara Institute of Science and Technology; Hirokazu Kato, Nara Institute of Science and Technology; Alexander Plopski, TU Graz

Automatic AR-based work support systems promise to provide instructions without user interaction when the current step is completed. However, progress estimation failures can cause confusion and reduce trust. To support understanding of instruction reliability, we display the confidence of each step. To explore its effects on user interaction, we conducted a Wizard-of-Oz style user study with 12 participants assembling a LEGO model with and without confidence visualization, where confidence levels were predefined. While the visualization did not improve performance, participants preferred having it, especially for low-confidence steps. These findings show design possibilities for future automated instruction systems.

Accelerating the Development of Machine Voice User Interfaces with Immersive Environments (Booth ID: 1221)

Polina Häfner, Karlsruhe Institute of Technology; Frithjof Anton Ingemar Eisenlohr, Karlsruhe Institute of Technology; Felix Longge Michels, Karlsruhe Institute of Technology; Abhijit Karande, EES Beratungsgesellschaft mbH; Michael Grethler, EES Beratungsgesellschaft mbH

This paper proposes a conceptual framework leveraging VR technologies to develop large language model (LLM)-based Voice User Interfaces (VUIs) for industrial applications, addressing key adoption challenges. By integrating virtual twins and immersive environments, the framework supports essential development steps, including data collection, LLM training, usability testing, and iterative design validation. The study shows how VR enables realistic simulations, accelerates development, and reduces the dependency on physical prototypes in VUI development, offering improvements in efficiency and user-centered design. Initial findings of the proof-of-concept study highlight the framework's potential to address industrial requirements.

Hands vs. Controllers: Comparing User Interactions in Virtual Reality Shopping Environments (Booth ID: 1222)

Francesco Vona, University of Applied Sciences Hamm-Lippstadt; Julia Schorlemmer, Immersive Reality Lab, University of Applied Sciences Hamm-Lippstadt; Jessica Stemann, University of Applied Sciences Hamm-Lippstadt; Sebastian Fischer, University of Applied Sciences Hamm-Lippstadt; Jan-Niklas Voigt-Antons, Hamm-Lippstadt University of Applied Sciences

Virtual reality (VR) enables users to simulate real-life situations in immersive environments. Interaction methods significantly shape user experience, particularly in high-fidelity simulations mimicking real-world tasks. This study evaluates two primary VR interaction techniques—hand-based and controller-based—through virtual shopping tasks in a simulated supermarket with 40 participants. Hand-based interaction was preferred for its natural, immersive qualities and alignment with real-world gestures but faced usability challenges, including limited haptic feedback and grasping inefficiencies. In contrast, controller-based interaction offered greater precision and reliability, making it more suitable for tasks requiring fine motor skills.

Full-Body Interaction in Mixed Reality using 3D Pose and Shape Estimation (Booth ID: 1223)

Hong Son Nguyen, Korea University; Andrew Chalmers, Victoria University of Wellington; DaEun Cheong, Korea University; Myoung Gon Kim, Korea University; Taehyun James Rhee, University of Melbourne; JungHyun Han, Korea University

This paper presents a pipeline that estimates a user's 3D pose and shape to facilitate a user’s full-body interaction with virtual objects in a mixed reality environment. The usability and effectiveness of the pipeline are demonstrated through a user study.

Influence of ball reception on visual tracking performance of soccer players in a virtual reality Multiple Player Tracking task (Booth ID: 1224)

Alexandre Vu, M2S - EA7470, Univ Rennes 2, Inria; Anthony Sorel, M2S - EA7470, Univ Rennes 2, Inria; Benoit Bideau, M2S - EA7470, Univ Rennes 2, Inria; Richard Kulpa, M2S - EA7470, Univ Rennes 2, Inria

The multitasking aspect of soccer is rarely considered to study the relationship between expertise and visual attention. This study examined the ability of 24 soccer players to visually track teammates and opponents in simulated game situations within a CAVE, with or without the requirement to intercept a virtual ball. No significant decrease in visual tracking performance was observed when an actual motor interception was required. Additionally, no significant differences were found between participants competing at the national level and those at lower levels. Further research is needed to better understand how visual attention contributes to expertise in soccer.

Exploring Interaction Paradigms for Performing Medical Image Segmentation Tasks in Virtual Reality (Booth ID: 1225)

Zachary C Jones, Concordia University; Simon Drouin, École de Technologie Supérieure; Marta Kersten-Oertel, Concordia University

Virtual reality (VR) facilitates immersive visualization of and interaction with complex medical images. It remains unclear, however, which VR input schemes and devices are optimal for these tasks. In this study, we perform a 12-person study to investigate user performance and experience while performing medical image segmentation with two control schemes: keyboard and mouse (KBM) and VR motion controllers (MC). Our results showed that motion controllers are faster in smaller segmentation tasks and offer a more pleasant user experience; however, no significant difference in user accuracy was observed.

An implementation of MagicCraft: Generating Interactive 3D Objects and Their Behaviors from Text for Commercial Metaverse Platforms (Booth ID: 1226)

Ryutaro Kurai, Cluster, Inc.; Takefumi Hiraki, Cluster Metaverse Lab; Yuichi Hiroi, Cluster Metaverse Lab; Yutaro Hirao, Nara Institute of Science and Technology; Monica Perusquia-Hernandez, Nara Institute of Science and Technology; Hideaki Uchiyama, Nara Institute of Science and Technology; Kiyoshi Kiyokawa, Nara Institute of Science and Technology

Metaverse platforms are rapidly evolving to provide immersive spaces. However, the generation of dynamic and interactive 3D objects remains challenging due to the need for advanced 3D modeling and programming skills. We present MagicCraft, a system that generates functional 3D objects from natural language prompts. MagicCraft uses generative AI models to manage the entire content creation pipeline: converting user text descriptions into images, transforming images into 3D models, predicting object behavior, and assigning necessary attributes and scripts. It also provides an interactive interface for users to refine generated objects by adjusting features like orientation, scale, seating positions, and grip points.
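
The generation pipeline described above is staged (text to image, image to 3D, behavior prediction, attribute assignment). The sketch below only shows how such stages might be chained; every stage function is a stub standing in for a model call and is not MagicCraft's actual API.

    from dataclasses import dataclass, field
    from typing import Optional

    def text_to_image(prompt):
        return b"<image bytes>"            # stub for a text-to-image model

    def image_to_3d(image):
        return "generated_object.glb"      # stub for an image-to-3D model

    def predict_behaviors(prompt):
        return ["grab", "sit"]             # stub for behavior prediction

    @dataclass
    class GeneratedObject:
        prompt: str
        image: Optional[bytes] = None
        mesh_path: Optional[str] = None
        behaviors: list = field(default_factory=list)
        attributes: dict = field(default_factory=dict)

    def generate_object(prompt):
        # Chain the content-creation stages and attach attributes users can later refine.
        obj = GeneratedObject(prompt=prompt)
        obj.image = text_to_image(prompt)
        obj.mesh_path = image_to_3d(obj.image)
        obj.behaviors = predict_behaviors(prompt)
        obj.attributes = {"scale": 1.0, "orientation": (0, 0, 0), "grip_points": []}
        return obj

    print(generate_object("a wooden rocking chair"))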

Evaluating the Impact of Sonification in an Immersive Analytics Environment Using Real-World Geophysical Datasets (Booth ID: 1227)

Disha Sardana, Virginia Tech; Lee Lisle, Virginia Tech; Denis Gracanin, Virginia Tech; Ivica Ico Bukvic, Virginia Tech; Kresimir Matkovic, VRVis Research Center; Greg Earle, Virginia Tech

This study evaluates the role of sonification in immersive analytics using real-world geophysical datasets. A between-subject experiment in a mixed-reality environment compared audio-visual and visual-only scenarios with 50 participants. The study analyzed key performance metrics, including pattern identification, confidence, task responses, workload (NASA Task Load Index), and usability (SUS questionnaire). Results show that sonification enhances pattern detection and user confidence in analytics tasks. This research highlights sonification's benefits and limitations, offering insights into optimizing audio for immersive systems to support complex data exploration.

Event-Driven Lighting for Immersive Attention Guidance (Booth ID: 1228)

Alexander Gao, University of Maryland, College Park; Xijun Wang, The University of Maryland, College Park; Geonsun Lee, University of Maryland; William Barber Smith Chambers, University of Maryland; Niall L. Williams, University of Maryland, College Park; Yi-Ling Qiao, University of Maryland, College Park; Shengjie Xu, University of Maryland, College Park; Ming C Lin, University of Maryland at College Park

In immersive VR environments, users often miss events occurring outside their field of view. To address this, we propose a framework that guides user attention by dynamically adjusting environment lighting based on an input text script and 3D scene. Validated through a user study, our system improves users' recall of virtual environments.
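
A minimal sketch of the idea of script-driven light emphasis follows: a light near a scripted event is boosted while the event is active and ambient lighting is dimmed. The data structures and intensity values are illustrative assumptions; the actual system derives its cues from a text script and the 3D scene.

    from dataclasses import dataclass

    @dataclass
    class ScriptedEvent:
        start: float        # seconds into the experience
        duration: float     # how long the emphasis lasts
        position: tuple     # world-space location of the event

    def light_levels(event, t, base=1.0, boost=2.5, dim=0.6):
        # Return (intensity near the event, ambient intensity) at time t.
        active = event.start <= t <= event.start + event.duration
        return (boost, base * dim) if active else (base, base)

    # Example: an event behind the user at t = 12 s, emphasized for 3 s.
    ev = ScriptedEvent(start=12.0, duration=3.0, position=(0.0, 1.5, -4.0))
    for t in (11.0, 12.5, 16.0):
        print(t, light_levels(ev, t))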

Investigating Visualization and Control for Assisting Gesture Interaction in Virtual Reality (Booth ID: 1229)

Zhaomou Song, University of Cambridge; John J Dudley, University of Cambridge; Per Ola Kristensson, University of Cambridge

Gesture interaction systems continuously predict user gestural inputs to allow rapid access to functions. It is essential to provide additional assistance when the system makes a mistake. We introduce two alternative assistance strategies, a contextual menu and visual-based guidance, that prompt the user with additional information when the system's uncertainty exceeds a threshold. We compared the proposed approaches with a fixed menu in a user study with 18 participants. Participants universally praised the fixed menu for its high controllability. Visual guidance was also preferred by many participants for its simplicity and intuitiveness, especially among those seeking to understand the system.

Designing a Mixed Reality Library Experience to Engage Students with Open-Source Literacy Content for Primary School Students (Booth ID: 1230)

Aleshia Hayes, University of North Texas; Tania Heap, University of North Texas; Deborah Cockerham, University of North Texas; Lauren Eutsler, University of North Texas; Ruth West, University of North Texas; Haseeb Abdul, University of North Texas; Erika Knapp, University of North Texas; Neureka Kandu, University of North Texas

Highly immersive, interactive VR is engaging and effective for education and training across users, contexts, and domains. However, VR tools are not recommended for early elementary school children. Here we report on the participatory design of three novel mixed reality displays and a mixed reality application, “The Readerverse,” in the context of reading proficiency, to demonstrate the affordances of mixed reality for early elementary education. This mixed reality library allows students to open virtual books that transport them into the book world with content-specific learning experiences. This paper discusses pilot testing and iterative design with stakeholders.

Perception and Cognition

Evaluating Depth Perception in Augmented Reality X-Ray Vision (Booth ID: 1243)

Matthew Zhang, Rhodes College; Nate Phillips, Rhodes College

Depth perception in AR describes the ability to perceive depth from AR-generated objects. This functionality enables an emergent capability: x-ray vision, or visualizing objects past an occluding surface. To evaluate x-ray vision's feasibility, we propose experiments that depict a virtual object beyond a solid wall. In one condition the occlusion is mitigated with a semi-transparent virtual window, while in the other no such effect is presented. Depth estimates will be measured using triangulation by walking. Our motivation is to understand discrepancies between the perceived distances of virtual and real-world objects so that AR systems can accurately display objects in the real world.

Pupil-Based Prediction of Affect in VR: A Machine Learning Approach (Booth ID: 1244)

Agata Szymańska, Jagiellonian University; Paweł Jemioło, AGH University of Krakow; Beata Pacula-Leśniak, Jagiellonian University; Michał Kuniecki, Institute of Psychology, Faculty of Philosophy, Jagiellonian University in Krakow

Adapting VR scenarios in real time based on users’ emotional responses is a promising advancement for developers. However, most VR environments are either "blind" to emotions or depend on intrusive, contact-based methods. This study introduces a novel, non-invasive approach for predicting arousal and valence using pupil reactivity and gaze features. Data from 105 participants (ages 18–30) viewing 120 emotional images via a VR headset with eye tracking informed machine learning models. These models, leveraging features like mean and standard deviation of pupil data, achieved F1 scores of 0.64 (valence) and 0.73 (arousal), highlighting potential real-world applications in VR development.
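
A small scikit-learn sketch of the described approach (summary statistics of pupil traces feeding a classifier evaluated with F1) follows, using synthetic stand-in data; the real feature set and models are richer than this.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    def pupil_features(trace):
        # Summary statistics of one trial's pupil-diameter trace (1D array).
        return np.array([trace.mean(), trace.std(), trace.max() - trace.min()])

    rng = np.random.default_rng(0)
    traces = rng.normal(3.5, 0.4, size=(200, 300))   # stand-in: 200 trials x 300 samples
    labels = rng.integers(0, 2, size=200)            # stand-in binary arousal labels

    X = np.vstack([pupil_features(t) for t in traces])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print("F1:", round(f1_score(y_te, clf.predict(X_te)), 2))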

Warming the Ice: The Role of Social Touch and Physical Warmth on First Impressions in Virtual Reality (Booth ID: 1245)

Beatrice Biancardi, CESI LINEACT; Mukesh Barange, CESI LINEACT; Matthieu Vigilant, Université Paris Cité; Pierre-Louis Lefour, Université Paris Cité; Laurence Chaby, Université Paris Cité

First impressions are critical in shaping social interactions, with perceptions of interpersonal warmth playing a central role in fostering positive judgments. Social touch, such as a handshake, conveys both interpersonal and physical warmth, potentially influencing impressions and social proximity. Using VR and haptic technologies, we explore how handshake temperature (warm or cold) affects first impressions and interpersonal distance during interactions with a virtual agent. We aim to discuss the implications for designing more engaging and realistic human-agent interactions, with a focus on the role of haptic feedback in fostering positive impressions.

Do You Feel Better? The Impact of Embodying Photorealistic Avatars with Ideal Body Weight on Attractiveness and Self-Esteem in Virtual Reality (Booth ID: 1246)

Lena Holderrieth, University of Würzburg; Erik Wolf, University of Würzburg; Marie Luisa Fiedler, University of Würzburg; Mario Botsch, TU Dortmund University; Marc Erich Latoschik, University of Würzburg; Carolin Wienrich, University of Würzburg

Body weight issues can manifest in low self-esteem through a negative body image or a feeling of unattractiveness. To explore potential interventions, this pilot study examined whether embodying a photorealistically personalized avatar with enhanced attractiveness affects self-esteem. Participants in the manipulation group adjusted their avatar's body weight to their self-defined ideal, while a control group used unmodified avatars. To confirm the manipulation, we measured the perceived attractiveness of the avatars. Results showed that participants found avatars at their ideal weight significantly more attractive, confirming an effective manipulation. Further, the ideal-weight group showed a clear trend towards higher self-esteem post-exposure.

Dynamic VR Modulation with EEG Feedback for Psychedelic Simulation (Booth ID: 1247)

Leon Lange, UC San Diego; Jacob Yenney, UC San Diego; Ying Choon Wu, UC San Diego

This study explores approaches to combining real-time electroencephalography (EEG) with VR to generate synthetic psychedelic experiences. Using a passive brain-computer interface (BCI) that dynamically adjusts virtual content, we aim to elicit neurocognitive states similar to those that characterize psychedelic trips, including attenuation of self-referential thought and diminished engagement of the default mode network. Preliminary results indicate that this type of personalized VR experience can modulate cortical activity and elicit unique states of altered consciousness. By mimicking hallucinatory experiences, we aim in the future to replicate the cognitive and emotional benefits of psychedelic agents without the associated risks.

Luminous Atmospheres 360: Comparing VR HMDs and UHD Screens for Reproducing Architectural Daylight Atmospheres (Booth ID: 1248)

Michèle Atié, Nantes Université, ENSA Nantes, Ecole Centrale Nantes, CNRS, AAU-CRENAU, UMR 1563; Céline Drozd, Nantes Université, ENSA Nantes, Ecole Centrale Nantes, CNRS, AAU-CRENAU, UMR 1563; Toinon Vigier, Université de Nantes; Daniel Siret, Nantes Université, ENSA Nantes, Ecole Centrale Nantes, CNRS, AAU-CRENAU, UMR 1563

This paper studies the potential of VR HMDs to replicate subjective impressions of luminous atmospheres using 360° photographs. We compare the perception of luminous atmospheres in two settings—VR HMD and UHD screen—against real iconic buildings. Evaluations were conducted through three distinct experiments: in seven real indoor spaces of iconic buildings, using 360° photographs of these spaces viewed in a VR HMD, and using 2D images of these spaces displayed on a UHD screen. Results indicate that VR HMDs are a promising tool for studying luminous atmospheres, providing a perception closer to real-world conditions.

Effectiveness of MR to enhance drivers’ perception of turnability (Booth ID: 1249)

Antoine HP MORICE, Aix-Marseille Université; Nicolas SALVAGE, Aix-Marseille Université; Emilie DUBUISSON, Aix-Marseille Université

The automotive industry faces the challenge of determining whether Mixed Reality (MR) displays, which are expensive, outperform traditional head-down displays in depicting Advanced Driver Assistance Systems (ADAS). We designed a novel ecological interface for an ADAS to enhance drivers' perception of turnability - an affordance specifying the possibility of crossing a narrow orthogonal alleyway following a forward constant-curvature turn. We augmented the perception of turnability with configural (head-down) and conformal (MR) displays. Overall, both displays improved drivers' decision-making. However, the configural display not only performed better but was also rated as more usable than the conformal display by the drivers.

The Effect of Cognitive Load on Visual Search Tasks in Multisensory Immersive Environments (Booth ID: 1250)

Jorge Pina, Universidad de Zaragoza; Edurne Bernal-Berdun, Universidad de Zaragoza - I3A; Daniel Martin, Universidad de Zaragoza; Sandra Malpica, Centro Universitario de la Defensa, I3A; Carmen Real, Universidad de Zaragoza; Pablo Armañac, Universidad de Zaragoza; Jesus Lazaro, Universidad de Zaragoza; Alba Martin, Universidad de Zaragoza; Belen Masia, Universidad de Zaragoza; Ana Serrano, Universidad de Zaragoza

Cognitive load, the mental effort required to process information and perform tasks, affects user performance and engagement. Understanding its impact in immersive, multisensory experiences is critical for task-oriented applications like training and education. We studied the effect of cognitive load on user performance in a visual search task, performed on its own or alongside a secondary auditory task. This effect was tested for two different search areas: 90° and 360°. Results show a decline in task performance with increased cognitive load, but with different trends between the 90° and 360° cases.

Encoding Multimodal Scenes in a Virtual City: How the Multifaceted Self Shapes Naturalistic Episodic Memory (Booth ID: 1251)

Delphine Yeh, Université Paris Cité; Sylvain Penaud, Université Paris Cité; Alexandre Gaston-Bellegarde, Université Paris Cité; Pascale Piolino, Université Paris Cité

The self-reference effect (SRE) suggests that encoding new material in episodic memory (EM) is enhanced when information is closely linked to the Self. However, traditional approaches to the SRE have neglected the interplay between multiple facets of the Self and naturalistic, multisensory EM contexts. The present study investigates the respective and joint contributions of the minimal and narrative Selves to the SRE during the encoding of naturalistic, multisensory daily events in a virtual city, experienced while embodied in a first-person avatar. Preliminary findings suggest that the narrative Self plays a predominant role in enhancing EM encoding under ecological conditions.

Attention's Substates: Unlocking Adaptive VR through EEG Insights (Booth ID: 1252)

Yobbahim J. Vite, University of Calgary; Yaoping Hu, University of Calgary

To enable adaptive virtual reality (VR) systems, this study explored differentiating three substates of attention: orienting (OR), alerting (AL), and cognitive maintenance (CM). Human participants performed a task within a VR-based cockpit to evoke these substates while their brain activity was recorded as electroencephalographic (EEG) data, from which features were extracted. Statistical analyses revealed significant differences in these features among the substates. Among five machine-learning models compared, naïve Bayes and k-nearest neighbors achieved similarly high accuracies (77%–95%) in classifying the features. These findings imply the potential of the features for informing the design of adaptive VR-based systems.
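
The model comparison reported above (naïve Bayes vs. k-nearest neighbors on EEG-derived features) can be expressed compactly with scikit-learn, as in the sketch below on synthetic stand-in features; the real features come from the recorded EEG.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 20))       # stand-in EEG features: 300 epochs x 20 features
    y = rng.integers(0, 3, size=300)     # three substates: OR, AL, CM

    models = {
        "naive Bayes": GaussianNB(),
        "k-nearest neighbors": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
        print(f"{name}: {acc:.2f}")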

Brain Activity and Decision-Making with Haptic Cues (Booth ID: 1253)

Stanley Tarng, University of Calgary; Yaoping Hu, University of Calgary

In this study, we investigated the relationship between behavioral and brain responses during haptic interactions in a virtual environment. Using a helicopter aerial firefighting simulation, ten participants interacted with co-located and dis-located haptic cues. Behavioral data, including reaction times, were analyzed using the drift-diffusion model (DDM) alongside event-related potentials (ERPs), focusing on N200 and P300 features. Bland-Altman analyses revealed potential agreement between DDM drift rates and ERP feature slopes, suggesting that internal cognitive processes can be inferred behaviorally. The results indicate that behavioral models such as the DDM may reduce the need for direct brain measurements, with implications for haptic interface design.
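
Bland-Altman analysis itself is straightforward; the numpy/matplotlib sketch below computes the mean difference and 95% limits of agreement between two measurement series, here with synthetic values standing in for DDM drift rates and ERP feature slopes.

    import numpy as np
    import matplotlib.pyplot as plt

    def bland_altman(a, b):
        # Agreement between two measurement series: mean difference and 95% limits.
        mean, diff = (a + b) / 2.0, a - b
        md, sd = diff.mean(), diff.std(ddof=1)
        return mean, diff, md, (md - 1.96 * sd, md + 1.96 * sd)

    rng = np.random.default_rng(2)
    drift = rng.normal(1.0, 0.3, 30)                # synthetic stand-in values
    erp_slope = drift + rng.normal(0.0, 0.1, 30)    # correlated second measure

    mean, diff, md, (lo, hi) = bland_altman(drift, erp_slope)
    plt.scatter(mean, diff)
    for y in (md, lo, hi):
        plt.axhline(y, linestyle="--")
    plt.xlabel("Mean of the two measures")
    plt.ylabel("Difference")
    plt.show()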

Proprioception Drift in Virtual Reality: An Experiment with an Unrealistically Long Leg (Booth ID: 1254)

Valentin Vallageas, Imaging and orthopedics research laboratory; David R Labbe, Ecole de technologie superieure; Rachid Aissaoui, École de technologie supérieure

Embodiment refers to the sensation of owning, controlling, and perceiving a virtual or artificial body as one's own. This study investigates how embodying an avatar with a leg twice its normal length affects proprioception, with congruent or incongruent visuotactile stimuli. Preliminary results (n = 10) show that participants experienced embodiment with a lengthened virtual leg, regardless of congruence of stimuli. A proprioceptive drift of 31.2 cm toward the virtual foot was observed. These findings extend research on upper-body proprioception to include virtual lower-limb deformations.

The Effect of Elbow and Shoulder Supports on Detection Threshold of Hand Redirection (Booth ID: 1255)

Gaku Fukui, The University of Tokyo; Maki Ogawa, the University of Tokyo; Keigo Matsumoto, The University of Tokyo; Takuto Nakamura, The University of Tokyo; Takuji Narumi, the University of Tokyo; Hideaki Kuzuoka, The University of Tokyo

Hand Redirection (HR) techniques subtly alter a user's real hand movement so that it deviates from its virtual representation, enhancing passive haptics in virtual environments. However, noticeable discrepancies can reduce immersion. Inspired by redirected walking research demonstrating that knee supports can influence the perception of manipulation, this study explores whether shoulder and elbow supports can expand the detection threshold (DT) of horizontal HR. A user study revealed that shoulder supports significantly increase the DT, offering a practical, stimulus-free method to enhance HR. These findings highlight the potential of leveraging joint constraints to improve immersion without relying on external stimuli.

Perception of Time in Virtual Reality with Different Sensory Stimulation (Booth ID: 1256)

Hyunjo Bang, Konkuk University; Mingyu Kwon, Konkuk University; Seunghan Lee, Konkuk University; HyungSeok Kim, Konkuk University

This paper investigates the phenomenon of perceptual distortion of time caused by the stimulation of the senses in a virtual environment. We examine how time perception changes with sensory stimuli at varying intervals, focusing on visual, auditory, tactile, and visual-auditory stimuli, as well as the impact of complex visual event frequencies. Participants' time perception before and during the experiment was compared. The analysis showed that visual stimuli induced time compression, and auditory stimuli induced time dilation. Time dilation was more effectively induced than time compression. Based on these results, personalizing the time interval of sensory stimuli may maximize the time distortion effect.

Exploring the Virtual Continuum to tackle Parkinson’s Disease: AR vs VR in Therapeutic Gaming (Booth ID: 1258)

João Viana, NOVA School of Science and Technology, NOVA University; Mariana Romão Cachapa, Polytechnic University of Setubal; Rui Neves Madeira, Instituto Politécnico de Setúbal; Pedro Albuquerque Santos, Lisbon School of Engineering (ISEL), Politécnico de Lisboa (IPL); Patrícia Macedo, Escola Superior de Tecnologia de Setúbal (ESTSetúbal), Inst Politécnico de Setúbal

The effective management of Parkinson's disease (PD) increasingly relies on innovative approaches to motivate patients to maintain physical activity. Augmented reality (AR) and virtual reality (VR) have been shown to be immersive tools that enhance patient engagement and encourage movement. This study examines a fruit-catching game in both AR and VR formats, exploring the virtual continuum to determine which version is more beneficial for therapeutic use in tackling PD. The results of this study inform the future development of more personalized, accessible, and impactful serious games tailored to support PD rehabilitation.

Miscellaneous Topics

Exploring Decoupled Generation for Enhancing VR Interaction and Control (Booth ID: 1231)

Chong CHENG, Hong Kong University of Science and Technology (Guangzhou); Qinzheng Zhou, National Key Laboratory of Multispectral Information Processing; Jianfeng Zhang, NUS; Shiya Tsang, The Hong Kong University of Science and Technology; Hao Wang, HKUST

This paper presents Decoupled3D, a novel framework for 3D object generation with decoupled components. Decoupled3D uniquely decouples and optimizes object component meshes from a single image, producing not only an overall shape for the object but also independent models of its components. Decoupled3D serves as a plug-and-play module applicable to various object images and existing 3D reconstruction methods, offering high flexibility and scalability. Experimental results suggest its potential in digital creation and virtual reality applications. Survey results indicate that participants believe our proposed Decoupled3D offers superior control and design capabilities.

Immersive Engineering Toolkit for Deep Geothermal Energy (Booth ID: 1232)

Victor Häfner, Karlsruhe Institute of Technology; Polina Häfner, Karlsruhe Institute of Technology; Felix Longge Michels, Karlsruhe Institute of Technology; Florian Bauer, Karlsruhe Institute of Technology; Michael Grethler, Karlsruhe Institute of Technology

Geothermal energy offers a sustainable and reliable energy source. However, VR engineering methods have seen limited application in this field. This work introduces an Immersive Engineering Toolkit, which leverages an open-source VR engine to address challenges across the geothermal lifecycle, such as drill site planning, subsurface visualization, engineering simulations, and public engagement. By integrating heterogeneous data formats and enabling collaborative environments, the toolkit aims to enhance cost-efficiency, safety, decision-making and stakeholder communication. By unifying scientific, engineering, and societal aspects, the toolkit aims to provide a holistic approach to accelerate the adoption of deep geothermal energy systems.

Industrial Augmented Reality – Feasibility of AI-based Validation of AR Instructed Component Recovery for Washing Machines in Circular Economy (Booth ID: 1233)

Arwa Own, Chemnitz University of Technology; Sebastian Knopp, Chemnitz University of Technology; Mario Lorenz, Chemnitz University of Technology

Using Artificial Intelligence (AI) to validate that tasks are performed correctly when instructed through Augmented Reality (AR) is a rather unexplored area. Focusing on the removal of components during the disassembly of washing machines in the context of the circular economy, we developed an AR system that uses AI to detect correctly executed tasks. We present preliminary results of our novel approach using the YOLOv8 Nano model to detect eight components. Even with very limited training data, we achieved detection precisions of over 90% for seven of the eight components, proving the general feasibility of our approach.

Extending Gaussian Splatting to Audio: Optimizing Audio Points for Novel-view Acoustic Synthesis (Booth ID: 1234)

Masaki Yoshida, Hokkaido University; Ren Togo, Hokkaido University; Takahiro Ogawa, Hokkaido University; Miki Haseyama, Hokkaido University

This paper proposes a novel method to extend Gaussian Splatting (3DGS) to the audio domain, enabling novel-view acoustic synthesis solely using audio data. While recent advancements in 3DGS have significantly improved novel-view synthesis in the visual domain, its application to audio has been overlooked, despite the critical role of spatial audio for immersive AR/VR experiences. Our method addresses this gap by constructing an audio point cloud from audio at source viewpoints and rendering spatial audio at arbitrary viewpoints. Experimental results show that our method outperforms existing approaches relying on audio-visual information, demonstrating the feasibility of extending 3DGS to audio.

AReplica: User-Centered Design of an Augmented Reality Portion Size Estimation tool for Dietary Intake Assessment (Booth ID: 1235)

Nina Rosa, Wageningen University and Research; Esther Kok, Wageningen University and Research; Els Siebelink, Wageningen University and Research; Michelle van Alst, Wageningen University and Research; Travis Masterson, Pennsylvania State University; Alexander Klippel, Wageningen University and Research

Portion size estimation is an essential component of dietary intake assessment, but people generally struggle with such estimation. Augmented reality methods have been developed to aid in this, but it is unclear how selected features are intended to improve accuracy or create a positive user experience. In this paper, we present a storyboard for a new augmented reality approach, AReplica. We explain why each feature was chosen and list necessary future research.

Developing an XR-Integrated Digital Twin for Hydrogen Pipeline Monitoring and Navigation (Booth ID: 1236)

Nanjia Wang, University of Calgary; Muskan Sarvesh, University of Calgary; Brody Wells, University of Calgary; Bryson Lawton, University of Calgary; minseok ryan kang, University of Calgary; Kangsoo Kim, University of Calgary; Frank Maurer, University of Calgary

This paper presents an immersive extended reality (XR) based digital twin (DT) system designed for a hydrogen pipeline test facility, combining real-time data visualization with interactive virtual navigation. Employing 3D pipeline models and a point-and-teleport locomotion method with orientation specifications, the system enables efficient exploration of the physical pipeline facility's virtual counterpart. Preliminary feedback from field experts highlights the system's potential to provide effective navigation and reduce cognitive fatigue through the visualization of the sensor data.

Deconfounded Human Skeleton-based Action Recognition via Causal Intervention (Booth ID: 1237)

Yanan Liu, Yunnan University; Yanqiu Li, The University of Melbourne; Hao Zhang, Yunnan University; Qianhan Tang, Yunnan University; Dan Xu, Yunnan University

Human action recognition has vast research prospects in the field of virtual reality, particularly for the skeleton modality, which is robust to background noise. However, recent methods overlook a dataset bias in which customized, subject-specific sub-actions introduce spurious correlations between actions and predicted labels. In this work, we build a skeleton causal graph to formulate these causalities, causally intervene on the confounded recognition model via backdoor adjustment, and design a novel causal-intervention-based pipeline, CI-GCN, for skeleton-based action recognition that deconfounds the model and improves recognition accuracy, achieving state-of-the-art results on three public action datasets.

Virtual Reality Benchmark for Edge Caching Systems (Booth ID: 1238)

Nader Alfares, Pennsylvania State University; George Kesidis, Pennsylvania State University

We introduce a Unity-based benchmark for evaluating Virtual Reality (VR) delivery systems that use edge-cloud caching. As VR evolves, meeting strict latency and Quality of Experience (QoE) requirements is critical. Traditional cloud architectures often struggle to meet these demands. With edge computing, resources are brought closer to users in an effort to reduce latency and improve QoE. However, challenges arise from changing fields of view (FoVs) and synchronization requirements. We address the lack of suitable benchmarks and propose a framework that simulates multiuser VR scenarios while logging users' interactions with objects within their FoVs, supporting research in optimizing edge caching and other edge-cloud functions for VR streaming.
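
The kind of per-frame FoV trace such a benchmark could log is sketched below in Python for clarity (the benchmark itself is Unity-based); the field-of-view test and record format are illustrative assumptions.

    import json, math, time
    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        obj_id: str
        position: tuple   # (x, y, z) world-space position

    def in_fov(head_pos, forward, obj_pos, half_angle_deg=45.0):
        # Rudimentary test: is the object within half_angle_deg of the (unit) view direction?
        to_obj = [o - h for o, h in zip(obj_pos, head_pos)]
        norm = math.sqrt(sum(c * c for c in to_obj)) or 1.0
        cos_angle = sum(f * c for f, c in zip(forward, to_obj)) / norm
        return cos_angle >= math.cos(math.radians(half_angle_deg))

    def log_frame(user_id, head_pos, forward, objects, log_file):
        visible = [o.obj_id for o in objects if in_fov(head_pos, forward, o.position)]
        log_file.write(json.dumps({"t": time.time(), "user": user_id, "visible": visible}) + "\n")

    # Example frame: one user looking down -z at two objects.
    objs = [SceneObject("statue", (0, 0, -3)), SceneObject("lamp", (5, 0, 1))]
    with open("fov_trace.jsonl", "w") as f:
        log_frame("user01", (0, 0, 0), (0, 0, -1), objs, f)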

Authoring framework for industrial XR digital twin and autonomous agent: a proof of concept (Booth ID: 1239)

Alexandre Courallet, CESI; David Baudry, CESI; Vincent Havard, CESI

Extended Reality-Interfaced Digital Twins (XR-DTs) can be used in a wide range of industrial applications, including training, simulation, and the design and monitoring of industrial systems. However, interacting with and controlling autonomous agents, whether virtual or real, through XR-DT can be challenging, and existing solutions often address only a certain level of collaboration or limited use cases. To address these limitations, this paper presents a no-code scenario authoring solution for XR-DTs called NEURONES as well as a demonstration scenario.

Evaluating Adjustment and Proficiency Disparities in Virtual Reality (Booth ID: 1240)

Mohammad Jahed Murad Sunny, University of Arkansas at Little Rock; Jan P Springer, University of Arkansas at Little Rock; Aryabrata Basu, University of Arkansas at Little Rock

Our study explores the relationship between VR experience, 3D gaming expertise, and performance metrics like task completion time, accuracy, and workload. Analysis reveals that combined expertise in VR and 3D gaming improves efficiency, reduces workload, and enhances accuracy. Notably, users with multidimensional experience consistently perform better with higher accuracy and lower task-load scores, demonstrating the value of VR proficiency. Interestingly, physical space requirements remain low across all levels, highlighting VR’s accessibility.

LLMs on XR (LoXR): Performance Evaluation of LLMs Executed Locally on Extended Reality Devices (Booth ID: 1241)

Xinyu Liu, King Abdullah University of Science and Technology; Dawar Khan, King Abdullah University of Science and Technology; Omar Mena, King Abdullah University of Science and Technology; Donggang Jia, King Abdullah University of Science and Technology (KAUST); Alexandre Kouyoumdjian, King Abdullah University of Science and Technology; Ivan Viola, King Abdullah University of Science and Technology

We evaluate 17 LLMs across four XR devices—Magic Leap 2, Meta Quest 3, Vivo X100s Pro, and Apple Vision Pro—assessing performance on key metrics: consistency, processing speed, and battery consumption. Our experimental setup examines 68 model-device pairs under varying string lengths, batch sizes, and thread counts, providing insights into trade-offs for real-time XR applications. Our findings offer guidance for optimizing LLM deployment on XR devices and establish a foundation for future research in this rapidly evolving field.
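
A measurement harness of the kind implied by the abstract (sweeping model-prompt-batch combinations and recording latency) might look like the sketch below; run_model is a placeholder for a real on-device inference call and is not the paper's code.

    import time
    from itertools import product

    def run_model(model_name, prompt, batch_size):
        # Placeholder for an on-device LLM inference call.
        time.sleep(0.01 * batch_size)   # stand-in for real compute
        return "response"

    def benchmark(models, prompt_lengths, batch_sizes, repeats=3):
        results = []
        for model, n_chars, batch in product(models, prompt_lengths, batch_sizes):
            prompt = "x" * n_chars
            latencies = []
            for _ in range(repeats):
                t0 = time.perf_counter()
                run_model(model, prompt, batch)
                latencies.append(time.perf_counter() - t0)
            results.append({"model": model, "prompt_chars": n_chars, "batch": batch,
                            "mean_latency_s": sum(latencies) / repeats})
        return results

    for row in benchmark(["model-a", "model-b"], [64, 512], [1, 4]):
        print(row)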

Edge Vision AI Co-Processing for Dynamic Context Awareness in Mixed Reality (Booth ID: 1242)

Alex Orsholits, The University of Tokyo; Manabu Tsukada, The University of Tokyo

Spatial computing is evolving towards leveraging data streaming for computationally demanding applications, facilitating a shift to lightweight, untethered, and standalone devices. These devices are ideal candidates for co-processing, where real-time scene context understanding and low-latency data streaming are fundamental for general-purpose Mixed Reality (MR) experiences. This poster demonstrates and evaluates a scalable approach to augmented contextual understanding in MR by implementing edge AI co-processing through a Hailo-8 AI accelerator, a low-power ARM-based single board computer (SBC), and the Magic Leap 2 AR headset. The resulting inferences are streamed back to the headset for spatial reprojection into the user’s vision.

©IEEE VR Conference 2025, Sponsored by the IEEE Computer Society and the Visualization and Graphics Technical Committee