The official banner for the IEEE Conference on Virtual Reality and 3D User Interfaces, comprising a kiwi wearing a VR headset overlaid on an image of Mount Cook and a braided river.

Doctoral Consortium

Accepted Students

Conversational Agents for Natural Human-Computer Interaction in Virtual Reality

Author: Matteo Bosco, TU Wien

With the emergence of large language models (LLMs), conversational agents (CAs) have gained significant attention across various domains, including virtual reality (VR). Although LLMs are capable of simulating human dialogue with impressive realism, interactions with LLM-powered CAs still appear unnatural and lack a human-like feel. This is primarily due to several key challenges, including limitations in the accuracy of speech recognition and the quality of speech synthesis. Furthermore, CAs need more autonomy and more natural turn-taking abilities to enable realistic and engaging interactions in VR. This position paper outlines my PhD research by discussing my motivation for improving CAs in VR environments, summarizing my previous work, and presenting my planned research directions.

Bridging the Gap Between Real and Virtual Touch: Multisensory Avatars for Enhanced Training in Virtual Reality

Author: Manel Boukli Hacene, Paris Saclay University

This position paper presents our ongoing PhD work exploring the integration of interpersonal haptic interactions into immersive VR training systems for first responders. The research focuses on enhancing user engagement, emotional resilience, and decision-making in high-stress scenarios by leveraging advanced haptic technologies, such as vibrotactile feedback and electromuscular stimulation. The primary goal is to model and simulate haptic interactions, such as touch sensations perceived by individuals in the real world, enhancing the realism of virtual training. This paper details the research objectives and the ongoing work and highlights the critical gaps this work aims to address in the current VR training landscape.

Impact of Mixed Reality on Spatial Perception and Affordances

Author: Eva Di Noia, Airbus Helicopters

This study examines the impact of mixed reality environments on spatial behavior. In a first study, we investigated a navigation task, specifically a goal-directed task. Participants performed this task under both real-world and mixed reality conditions, navigating through doors of varying widths. The main objective was to evaluate how the technological limitations of mixed reality headsets (including restricted field of view, resolution, and refresh rate) influence navigation behaviors. Two hypotheses were tested: (1) mixed reality modifies behavioral adjustments compared to a real-world environment, and (2) distance perception in mixed reality is underestimated, leading to compensatory behavioral adjustments, particularly in deceleration and shoulder rotation strategies. The key variables analyzed included deceleration threshold (the distance at which participants reduce their speed before passing through the door) and shoulder rotation threshold (the distance at which they begin adopting a lateral gait). The results aim to shed light on the effects of immersive environments on spatial perception and motor adaptation, with implications for the design of navigation interfaces in mixed reality systems.

Supporting Asynchronous Collaboration in Mixed Reality with Large Language Models (LLM)

Author: Giuseppina Pinky Kathlea Diatmiko, IMT Atlantique

While prior work has demonstrated the benefits of asynchronous collaboration in Mixed Reality (MR) for tasks like education and industrial training, it remains under-investigated compared to synchronous collaboration. Previous asynchronous MR systems rely mainly on recording and replaying collaborators’ actions, which raises many challenges in terms of interactivity, understanding, and the processing speed of the recorded data. My PhD project investigates how Large Language Models (LLMs) can be used to improve asynchronous collaboration by creating a virtual surrogate which can interact on behalf of a person who is not available at a particular time. This paper presents the research approach, the application context and the first prototype developed as part of this project, as well as the future work planned for the upcoming years.

CoboDeck Pro: Advanced Encounter-Typed Haptic Device for Collaborative Architectural Design in Walkable VR

Author: Mohammad Ghazanfari, TU Wien

Haptic feedback has consistently shown its potential to enhance task performance in virtual environments. In architectural design, the ability to sketch and conceptualize directly in Virtual Reality (VR), rather than relying on traditional methods, offers a transformative shift in the design process. Integrating haptic feedback into this process can further improve design practices. This dissertation introduces CoboDeck Pro, an advanced haptic device designed to simulate structural deformation, enabling more informed decision-making during early design stages. Additionally, the research addresses multi-user haptic interactions to foster interdisciplinary collaboration while reducing system costs per user. The proposed platform integrates into architectural design frameworks, making the process more collaborative, interactive, and responsive.

Innovative XR Approaches for Behavioral and Cognitive Therapy

Author: Marta Goyena, Universidad Politécnica de Madrid

The increasing deployment of immersive technologies is opening up opportunities in the field of behavioral and cognitive therapies. In particular, eXtended Reality (XR) technologies enable therapies that would otherwise require moving users to a remote location to be conducted without the need to travel to the therapy site. This is especially useful when the therapy is aimed at treating phobias or training attention skills. Specifically, this PhD aims to research innovative therapy sessions by incorporating XR-based technologies, assessing their effectiveness in treating conditions like phobias, enhancing cognitive abilities, and improving the accessibility of therapy for individuals with psychosocial difficulties, such as intellectual disabilities.

Exploring Behavioral Dynamics to Enhance Collective Intelligence in Virtual Environments

Author: Tristan Lannuzel, CESI-LINEACT

Collective intelligence (CI) is a predictive measure of a group's ability to perform a wide variety of tasks. It is an essential concept for understanding team dynamics and enhancing team performance. While extensively studied in traditional environments such as face-to-face settings or online interactions, CI remains underexplored in immersive Virtual Reality (VR). This thesis has three goals: (1) to analyze how CI manifests in VR through verbal and non-verbal indicators, (2) to design real-time feedback systems that enhance CI in VR environments, and (3) to apply these findings to immersive educational platforms to improve collaboration and learning.

Gaze-Based Viewport Control in VR

Author: Hock Siang Lee, Lancaster University

Head-Mounted Displays (HMDs) allow users to explore Virtual Reality (VR) environments via extensive head and body movements. In this work, we introduce novel gaze-based techniques for viewport control. These allow users to explore VR environments without moving their head, body, or hands, and without any external controllers other than an eye-tracker. The techniques have been evaluated and compared against traditional alternatives in an abstract study and a video-watching study, demonstrating comparable performance, task load, cybersickness, and user preference. Future research seeks to improve their applicability in a variety of real-world settings, necessitating investigations into how they affect hand-eye coordination and their impact on interactions and interface design. Combined, these should address the need for more accessible and ergonomic exploration methods in VR, particularly for users with limited mobility or those in confined spaces.

Beyond Avatars: Designing Bodily Interfaces in Virtual Environments

Author: Xiang Li, University of Cambridge

Our bodies serve as the interface between physical and virtual realities, mediating our perceptions and actions within immersive environments, and enabling the potential for novel experiences or “superpowers.” My PhD research investigates how bodily interfaces can be redesigned to augment user behavior in virtual reality (VR). Specifically, I explore avatar morphologies, transitioning from continuous to discrete forms, such as swarm bodies, to improve the manipulation of multiple objects simultaneously. Additionally, I examine on-body menus, focusing on how the spatial relationship between menu positions and different body landmarks influences user preferences. My ongoing work also addresses microgestures for VR, analyzing their application in text editing and object selection. Moving forward, my research will further investigate control mechanisms for discrete avatars and explore new ways to enhance users’ perception and control of their virtual bodies. Ultimately, my dissertation aims to advance more immersive interactions in virtual environments by leveraging the potential of bodily interfaces.

Neck Muscles in Head-Mounted Displays Interaction

Author: Guanlin Li, Lancaster University

Head movement is central to head-mounted display (HMD) interaction. Neck muscles drive head movement and play a crucial role in the body's proprioception. However, few studies have investigated how neck muscles can affect HMD interaction. This PhD research aims to explore and leverage knowledge of the neck muscles to improve the HMD interaction experience. So far, we have successfully leveraged this understanding of the neck muscles to improve head amplification. Our preliminary study results show that the novel head amplification technique can reduce neck discomfort without inducing significant motion sickness. However, further research is necessary to explore how other interaction techniques and methods can be refined and improved. This PhD research introduces a novel direction for HMD interaction: by drawing on insight into the neck muscles, we have the potential to further improve the HMD interaction experience.

Enhancing Learner Engagement and Attention in XR Environments: Metrics and Strategies

Author: Carlos-Andres Lievano-Taborda, Université Paris-Saclay, CNRS, LISN

This study explores the roles of Engagement and Attention (E/A) in eXtended Reality (XR) learning environments, emphasizing their importance in enhancing educational outcomes. Through a comprehensive review, we identify prevalent methods for measuring E/A, including post-test evaluations, summative assessments, and real-time monitoring techniques. Our findings reveal a gap in the integration of these cognitive states and highlight the limited focus on collaborative learning scenarios, emphasizing the need for multi-method approaches to assess E/A in XR. To address these gaps, we propose research directions that combine user-centered design and pedagogical strategies to enhance E/A in XR learning. By incorporating gamification, structured lesson plans, and interactive content, future XR learning environments can mitigate distractions and foster immersive and personalized learning experiences. This paper sets the foundation for optimizing E/A measurement and improvement strategies in diverse educational contexts.

IMMERCOG: Impact of extreme environments on the operation of critical systems

Author: Elena Lopez-Contreras, ISAE-SUPAERO

Remote operation technologies play a critical role in fields such as medicine, defense, and space exploration, yet existing research often relies on controlled laboratory simulations that fail to capture the complexities of real-world environments. This doctoral project, IMMERCOG, investigates the impact of extreme conditions—such as underwater environments and real flight—on the psychophysiological states and performance of operators controlling remote systems. This research bridges the gap between controlled studies and real-world applications, aiming to advance human-machine interaction (HMI) for remote operations under stress. The project involves three experimental campaigns conducted using the same interface: a laboratory study to establish baseline operator performance using motion simulation, underwater teleoperation at varying depths to evaluate stress and cognitive impact, and in-flight teleoperation to address the challenges of controlling robots from a moving aircraft. Preliminary results highlight significant learning effects across tasks and notable performance differences between manual and autonomous control modes. These findings provide actionable insights for designing adaptive HMIs tailored to extreme conditions, contributing to the development of robust and efficient teleoperation systems.

Exploring Environmental Customization in Virtual Reality Mindful Movement to Reduce Anxiety and Stress in University Students

Author: Samantha Monk, University of Windsor

University students face high stress and anxiety, impacting mental health and academic performance. This study investigates the effects of customizable virtual reality environments during mindful movement exercises on reducing stress and anxiety. Grounded in Self-Determination and Stress-Coping Theories, the research includes three phases: evaluating psychological and physiological outcomes, exploring user experiences, and collaborating with stakeholders to integrate VR into student wellness programs. Findings aim to demonstrate the benefits of VR customization for mental well-being, offering scalable solutions for student wellness and advancing VR applications in mental health.

Using VR Embodiment and Co-embodiment to Improve Our Lives

Author: Yuke Pi, Goldsmiths, University of London

Owning another body in Virtual Reality (VR) has been proven to trigger changes in participants' social cognition, attitude, and behaviour. This PhD thesis aims to explore the capacity of VR embodiment and ways to use it to improve our lives. In study 1, we investigated embodied time travel in behaviour change to address the climate crisis and found that embodied VR experience triggered a more pronounced response in participants' perceived influence on climate action and engagement in pro-environment behaviours. Study 2 plans to explore the effect of co-embodiment (sharing an avatar with another entity) on motor skills learning. We built an initial prototype where participants could learn Tai Chi moves with an AI teacher, and plan to investigate whether co-embodiment could enhance learning.

Navigating Impossible Spaces in Virtual Reality For Seamless Walking Experiences in Small Physical Spaces

Author: Ana Rita Rebelo, NOVA LINCS, NOVA School of Science and Technology

In Virtual Reality (VR), navigating small physical spaces often relies on controller-based techniques, such as teleportation or joystick movement, due to the limited space available for walking. However, walking-based techniques can enhance immersion by enabling more natural movement. This position paper presents research that employs the concept of “impossible spaces” to enable walking in small physical spaces. Three room-connection techniques – portals, corridors, and central hubs – are used to create impossible spaces by overlapping and adapting multiple virtual areas, maximizing the use of limited physical space. Our previous user studies show that all three techniques are viable for connecting rooms in VR within a play area of about 2.5 x 2.5 meters. Portals provide a flexible solution, as they can be placed in the middle of a room, corridors offer a seamless and natural transition between spaces, and central hubs simplify navigation in complex layouts by creating a central room that connects to all other rooms. The primary contribution of this work is to make walking in VR accessible for all users by demonstrating how these room-connection techniques can dynamically adapt virtual environments to fit small physical spaces, such as those commonly available to VR users at home.

Exploring Virtual Photogrammetry Techniques and Applications For Advancement of Digital Twin Generation

Author: Jacob Rubinstein, University of Maryland Baltimore County

We explore challenges and opportunities in the use of photogrammetry for digital twin generation: namely, we aim to improve our understanding of what makes good input data for photogrammetry and to quantify the traits of various photogrammetry processes. We propose the use of virtual photogrammetry - utilizing synthetic 2D images rendered from pre-existing 3D models as input - to aid in this goal. Our approach aims to create a pipeline for generating datasets of synthetic images which can be used to evaluate and improve camera pose/intrinsics estimation as well as to assess the impact of errors on 3D reconstruction accuracy. By leveraging the advantages of this synthetic data, we aim to evaluate the resilience and accuracy of photogrammetry systems, leading to higher-quality results from non-virtual photogrammetry in the future.

Towards Comprehensible and Expressive Teleportation Techniques in Immersive Virtual Environments

Author: Daniel Rupp, RWTH Aachen University

Teleportation, a popular navigation technique in virtual environments, is favored for its efficiency and reduction of cybersickness but presents challenges such as reduced spatial awareness and limited navigational freedom compared to continuous techniques. I would like to focus on three aspects that advance our understanding of teleportation in both the spatial and the temporal domain: 1) assessing different parametrizations of common mathematical models used to specify the teleportation target location, and their influence on teleportation distance and accuracy; 2) extending teleportation capabilities to improve navigational freedom, comprehensibility, and accuracy; and 3) adapting teleportation to the time domain, mediating temporal disorientation. The results will enhance the expressivity of existing teleportation interfaces and provide validated alternatives to their steering-based counterparts.

Enhancing Accessibility in XR Games for Users with Autism Spectrum Disorder

Author: Marc Soler-Bages, Universitat de Barcelona

Extended Reality (XR) devices, encompassing Virtual and Augmented Reality, have rapidly grown in popularity across industries such as entertainment, education, and healthcare. However, accessibility barriers remain prevalent, especially for individuals with Autism Spectrum Disorder (ASD), due to a lack of inclusive design considerations. This position paper advocates for targeted research on the accessibility of XR applications for ASD users, highlighting the importance of developing evidence-based guidelines to enhance usability and inclusion. By addressing this underexplored area, the paper aims to contribute to the creation of practical solutions that empower individuals with ASD to fully benefit from XR technologies.

The Proteus Effect in Virtual Reality: Examining Stereotype Threat through Gender Cues

Author: Agata Szymanska, Jagiellonian University

The Proteus Effect and stereotype threat represent two critical phenomena within virtual reality (VR) and social psychology research. The Proteus Effect suggests that embodying virtual avatars can alter self-perception and behavior, while stereotype threat, arising from awareness of negative stereotypes, can impair performance. This project investigates the interplay between these phenomena. Using VR environments, we aim to elicit stereotype threat and assess its impact on cognitive performance and physiological responses. By measuring electrodermal activity (EDA) and heart rate (HR) during working memory tasks, this research seeks to provide objective insights into the psychological and physiological mechanisms of stereotype threat in immersive settings.

The Portal as a Transition Between the Real and the Virtual

Author: Kristof Timmerman, AP University of Applied Sciences and Arts

Our lives are increasingly shifting to the digital. Artists too have made attempts - successful or not - to attract audiences in virtual worlds. This evolution is irreversible, not to replace physical experiences, but to create new forms and tap into new audiences. To make these experiences valuable, it will be necessary to make all those involved - artists, performers, spectators - feel part of these virtual worlds. The portal, the access to this experience, plays a crucial role here. How can the transition between the real and the virtual be constructed in such a way that those involved feel part of the virtual? Concepts such as storytelling, interaction, presence and immersion are of great importance. Throughout my PhD research, I am conducting a series of experiments that combine these elements in various ways, aiming to develop portals in the form of performances and installations that bridge these two realms seamlessly.

Detection of an Isolated User's (Inter)Actions during an Augmented Reality Guided Procedure: an Application in Space Medicine

Author: Frederick George Vickery, Lab-STICC UMR 6285 / ENIB

With recent developments in Augmented Reality (AR) and Artificial Intelligence (AI) technology, many new options have become available to overcome previously encountered problems (accurately detecting human activity, adaptive AR guidance, etc.). When considering a collaborative scenario in which a 'distant' user in an isolated location needs assistance from a 'local' user in a control centre, many different solutions can be considered. One solution is to provide AR guidance for the isolated user with adjustable instructions from the local user, who is immersed in a virtual representation of the isolated user's environment. This allows the local user to quickly understand the problem and the current state of the procedure, enabling them to provide the next steps proactively or to rearrange the steps in the current procedure. During my thesis work, our aim is to study how to detect the interactions of an isolated user in order to create a more detailed virtual environment, improving the quality and prioritization of the actions instructed by the local user. This research subject has become more feasible with recent developments in AR technology, in Human Activity Recognition (HAR) for lightweight or non-intrusive equipment, and in application domain models that represent the environment using semantic information.

Developing Virtual Reality Rehabilitation Tools for PTSD

Author: Ilan Vol, Ben Gurion University

The world is full of conflicts which cause widespread psychological trauma, leading to conditions such as post-traumatic stress disorder (PTSD). While traditional therapies offer some relief, the global mental health crisis, limited access to care, technical limits, and lack of personalization hinder effective treatment. Virtual Reality (VR) is emerging as a promising tool to address these challenges. Building upon existing virtual exposure therapy tools like BRAVEMIND, I will adapt and expand its capabilities to align with the specific needs of civilians affected by the Israeli-Palestinian conflict. Additionally, I will develop a novel VR tool that focuses on Place Attachment theory, facilitating emotional processing. Ultimately, this research seeks to contribute to the development of accessible and effective mental health interventions, particularly for individuals who have experienced trauma and loss, as part of my wider thesis in which I am developing extended reality tools for rehabilitation.

Understanding and Leveraging Head Movement as Input for Interaction

Author: Haopeng Wang, Lancaster University

Head movement is an intuitive and fundamental input modality for interactions. This PhD investigates head movement as an input modality for improving interaction efficiency and user experience. It encompasses three core components: 1) a systematic literature review analysing existing head interaction techniques, 2) the development and evaluation of head interaction techniques for head-mounted displays (HMDs), and 3) the generalisation of design principles derived from HMD-based research to other interfaces. The design of interaction techniques integrates fundamental knowledge of head movement kinematics, the natural coordination of the head with the eyes and hands, and control-display mappings. This research contributes to understanding head movement as an input modality and novel interaction techniques that harness its potential to enhance user interfaces across a range of devices.

XR Visualizations for Exploring 3D Dynamics

Author: Zhongyuan Yu, Technische Universität Dresden

Dynamic 3D information is all around us in everyday life. When digitized, dynamic 3D data find extensive application in domains such as education, archaeology, immersive journalism, VR therapy, and art. For example, analyzing 3D human motions and movements can offer valuable insights into human behavior and movement patterns. With the growing affordability of tracking and capturing devices, acquiring 3D data has become increasingly accessible. At the same time, recent advances in VR and AR headsets make it possible for users to immerse themselves in computer-generated virtual environments (VEs), which are well suited to exploring captured dynamic 3D datasets due to their inherent complexity and three-dimensional nature. However, since mixed reality technology is just emerging, current systems are not yet fully prepared to harness all its benefits for comprehensive dynamic 3D data exploration. In my dissertation project, I aim to explore the creative potential of XR visualizations for exploring complex dynamic 3D datasets. My dissertation aims to enrich XR visualization designs and interactive prototypes to better support 3D data exploration tasks in mixed reality.

Sponsors

Special: Inria

Silver: InterDigital, Google

Bronze: MiddleVR, HITLab NZ, Immersion, Qualcomm, Huawei, Meta, AFXR, Lab-STICC, GuestXR, ENSAM, Haption, EuroXR, INSA, Institut de Neurociències (Universitat de Barcelona), SHARESPACE, Région Bretagne, Univ Rennes, Orange, CLARTE, Inami Monnai Lab, VRSJ, CESI

©IEEE VR Conference 2025, Sponsored by the IEEE Computer Society and the Visualization and Graphics Technical Committee