Doctoral Consortium
Key Information
The Doctoral Consortium will be held on Saturday, March 21 in room 325D. All times below are given in Korean local time (UTC+9).
Key information for presenters and mentors:
- Each presentation includes a 10-minute talk + 5-minute Q&A.
- All presenters and mentors must be present at their scheduled session and are encouraged to attend as much of the Doctoral Consortium as possible.
- After each session, mentors and students can use room 327 for breakout discussions and further mentoring during breaks and lunch.
- Because some mentors cannot attend the conference in person, we recommend that students take the initiative to reach out to their allocated mentors and arrange separate online meetings for mentoring.
- The venue for the DC track is room 325D.
- The presentations and mentoring at the DC mark the start of a collaboration, and we strongly recommend that presenters and mentors hold periodic meetings to deepen it.
- In the following schedule, each entry lists the mentor first, followed by the associated mentee.
Schedule - Saturday, March 21
| Session (UTC+9) | ID | Title | Mentor's Name | Mentor's Affiliation | Mentee's Name | Mentee's Affiliation |
|---|---|---|---|---|---|---|
| Session 1 (8:45am - 10:00am) | 1062 | [DC] VR Ecology: Relational Haptic Co-Creation for Kinaesthetic Creativity in XR | Jen-Shuo Liu | Columbia University | Vollmer, S.C. | Department of Computational Arts, York University, Toronto, Canada |
| | 1224 | [DC] Investigating Eye-Body Coupling and Gaze Adaptation in Virtual Reality | Jean Botev | VR/AR Lab, University of Luxembourg | Anvari, Taravat | Institute for Psychology, University of Muenster, Muenster, Germany |
| | 1238 | [DC] Toward Adaptive and Accessible XR Through User and Context Modeling | Alejandro Martin Gomez | University of Arkansas | Wu, Zhiqing | Computational Media and Arts Thrust, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China |
| | 1240 | [DC] From Being There to Acting There: A Motor Learning Perspective on Reconceptualizing VR Interaction | Regis Kopper | Iowa State University | Xiao, Cleo | Department of Computer Science, University of Copenhagen, Copenhagen, Denmark |
| | 1246 | [DC] Improving XR Training and Collaboration Through Shared, Self, and Mediated Awareness | Rashid Ali | Department of Engineering Science, University West, Sweden | Catarina Gonçalves Fidalgo | Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States & Instituto Superior Técnico, University of Lisbon, Lisbon, Portugal |
| Session 2 (10:30am - 11:45am) | 1249 | [DC] Beyond Visual Dominance: Haptics as Social Cues for Enhanced Multi-User Experiences in VR | Yan Hu | Blekinge Institute of Technology, Sweden | Jang, Hyuckjin | Graduate School of Culture Technology, KAIST, Daejeon, Korea, Republic of |
| | 1258 | [DC] Adaptive Visual Complexity in VR Training: An Eye-Tracking Approach to Detecting and Regulating Embodied Cognitive Load | Tim Dwyer | Monash University, Australia | Nasri, Mahsa | Northeastern University, Boston, Massachusetts, United States |
| | 1260 | [DC] Enhancing Accessibility and User Experience of Virtual Reality Locomotion for Older Adults | Daniel A. Muñoz | Hong Kong Baptist University | Chong, Kit-Ying Angela | Systems Design Engineering/TAG lab, University of Waterloo, Waterloo, Ontario, Canada |
| | 1263 | [DC] Immersive disorientation - Visualizing Dolly Zoom through Counter-Invariant Perception in Cinematic Virtual Reality | Shohei Mori | University of Stuttgart | Fong, Andrew | School of Creative Media, City University of Hong Kong, Hong Kong |
| | 1265 | [DC] Virtual Reality for Mental Health: Toward Bioadaptive Narrative and Sensory Feedback Systems | Majed Elwardy | Blekinge Institute of Technology | Olofsson, Max | Gothenburg University, Institute of Neuroscience and Physiology, Gothenburg, Sweden |
| Session 3 (1:15pm - 2:30pm) | 1267 | [DC] Towards Universal Access: Building a Cross-Device Mixed Reality Ecosystem | Richard (Rick) Skarbez | La Trobe University | Becerril Palma, Paulina | Department of Information Science and Media Studies, University of Bergen, Bergen, Norway |
| | 1268 | [DC] Visual Perception Enhancement via Transient Visual Cue in Immersive Virtual Reality | Tim Weissker | RWTH Aachen University | Kim, DongHoon | Computer Science, Utah State University, Logan, Utah, United States |
| | 1274 | [DC] Augmented Co-Embodiment for Motor Skill Learning with Held Tools in Virtual Reality | Steve Feiner | Columbia University | Morell, Jean | CAOR, Mines Paris, Paris, France |
| | 1276 | [DC] Virtual Reality and Beyond: Exploring the Design and User Experience of Augmented Social Touch in VR | Sasha Alexdottir | Department of NCCA, Faculty of Media, Science and Technology, Bournemouth University, UK | Tietenberg, Julius | Faculty of Computer Science / Department of Human-centered Computing and Cognitive Science (HCCS) / Entertainment Computing Group, University of Duisburg-Essen, Duisburg, Germany |
| | 1278 | [DC] Personality and Affective States in Virtual Reality: A Multi-Study Program on Awe, Acute Stress, and Trust | Manuela Chessa | University of Genoa, Italy | Steininger, Melissa | Department of Epileptology, University Hospital Bonn, Bonn, Germany |
| Session 4 (3:00pm - 4:15pm) | 1279 | [DC] Mixed Reality for Psychological Resilience: A Conceptual Framework for Rescue Training in combination with ACT | Ali Haskins | University of Central Florida | Kastner, Kevin | CeMOS – Research and Transfer Center, Technical University of Applied Sciences Mannheim, Mannheim, Germany |
| | 1287 | [DC] Self-Adaptive 3D User Interfaces | Jinghui Hu | Lancaster University | Argo, Erin | Augusta University, Augusta, Georgia, United States |
| | 1291 | [DC] Agency-Preserving AI Mediation (APAM) Supporting Self-Directed Learning in Augmented Reality | Ye Pan | Shanghai Jiao Tong University | Schwertfeger, Sharmen | School of Computer & Cyber Sciences, Augusta University, Augusta, Georgia, United States |
| | 1298 | [DC] Road-map to Efficient Attention Guided Augmented Reality User Interfaces: From Controlled Environments to the Wild | Jeanine Stefanucci | University of Utah | Ahmed, Tanim | Computer Science/IRLab, Iowa State University, Ames, Iowa, United States |
DC1062: [DC] VR Ecology: Relational Haptic Co-Creation for Kinaesthetic Creativity in XR
Mentor
Jen-Shuo Liu / Columbia University
Dr. Liu is an AR/VR researcher and engineer focused on visual computing and HCI. He designs perception-based interfaces for teleoperation and maintenance while developing algorithms for HDR and immersive video processing. He holds a Ph.D. from Columbia University, where he focused on AR/VR precueing systems. His research is published in venues including IEEE TVCG, TIP, and ISMAR. An active community contributor, Jen-Shuo serves on the CHI Late-Breaking Work Program Committee, and as both the Publicity & Communications Chair and Pitch-Your-Lab Chair for IEEE ISMAR.
VR Ecology investigates relational presence in XR co-creation: how an AI partner can stay legible and negotiable within embodied making. Instead of prompts or speech, the AI uses a small ProTactile-inspired set of haptic micro-signals (orientation, offer/handoff, slow/confirm). An end-to-end WebXR prototype streams hand and gesture data, drives haptic output, and records trace-memory logs with human/AI/system provenance for replay and annotation. Planned within-subject studies compare visual-only, haptic-only, and combined cues, measuring coordination timing (turn-taking, interruption, response latency), task/process outcomes, and validated self-report of flow and agency; capability-staging contingency matrices keep evaluations comparable as the system evolves.
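To make the trace-memory logging concrete, here is a minimal Python sketch of a provenance-tagged event log in the spirit of the prototype described above. The class names and the micro-signal vocabulary are illustrative assumptions, not the project's actual implementation (which is a WebXR system):

```python
from dataclasses import dataclass, field
from enum import Enum
from time import time
from typing import List, Optional

class Source(Enum):
    HUMAN = "human"
    AI = "ai"
    SYSTEM = "system"

class Signal(Enum):
    ORIENTATION = "orientation"      # hypothetical names, loosely after the
    OFFER_HANDOFF = "offer_handoff"  # ProTactile-inspired set listed above
    SLOW_CONFIRM = "slow_confirm"

@dataclass
class TraceEvent:
    source: Source    # provenance: human, AI, or system
    signal: Signal
    timestamp: float = field(default_factory=time)

@dataclass
class TraceMemory:
    events: List[TraceEvent] = field(default_factory=list)

    def record(self, source: Source, signal: Signal) -> None:
        self.events.append(TraceEvent(source, signal))

    def replay(self, source: Optional[Source] = None):
        """Yield events for replay/annotation, optionally filtered by provenance."""
        for e in self.events:
            if source is None or e.source == source:
                yield e

# Log an AI-initiated handoff offer, then replay only the AI's events.
log = TraceMemory()
log.record(Source.AI, Signal.OFFER_HANDOFF)
for event in log.replay(Source.AI):
    print(event)
```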
DC1224: [DC] Investigating Eye-Body Coupling and Gaze Adaptation in Virtual Reality
Mentor
Jean Botev / VR/AR Lab, University of Luxembourg
Dr. Botev is the head of the VR/AR Lab and the Collaborative and Socio-Technical Systems (COaST) research group at the University of Luxembourg. His research focuses on collaborative distributed virtual environments, novel interaction techniques, and human-centered mediated reality at the intersection of physical and digital spaces. He has published over 100 peer-reviewed articles and papers and serves on the editorial board of Empathic Computing. Dr. Botev also serves on the steering committees of IEEE ACSOS and ACM MMVE. As a Principal Investigator, he has led national and international projects, including the EU H2020 FET Open project ChronoPilot, DELICIOS, and VR BIAS. Dr. Botev is actively engaged in mentoring, doctoral training, and public engagement, regularly giving invited talks and organizing outreach activities that promote ethical and human-centered immersive technologies.
Gaze plays a central role in guiding human movement, yet most VR locomotion systems still treat gaze and motion as separate signals. Our research examines how these modalities interact during natural and redirected walking, and how this coupling changes under altered visual–motor contingencies. We first showed that integrating binocular gaze with sparse VR tracking enhances full-body pose estimation. Building on this foundation, we now analyze high-frequency gaze behaviour during both short- and long-term exposure to redirected walking (RDW) to investigate whether gaze strategies adapt over time. The overall goal of our research is to characterize how gaze–locomotion coupling reorganizes under RDW and to determine whether gaze remains informative for understanding walking direction throughout adaptation. This paper summarizes our completed and ongoing work and outlines key questions for discussion during the Doctoral Consortium.
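As a concrete illustration of one way such coupling can be quantified, the sketch below estimates the lag at which gaze yaw best predicts walking heading via lagged cross-correlation. The sampling rate, lag window, and method are assumptions for illustration, not the authors' actual analysis pipeline:

```python
import numpy as np

def coupling_lag(gaze_yaw, heading_yaw, fs=120, max_lag_s=1.0):
    """Lag (in seconds) at which gaze yaw best correlates with heading yaw.

    gaze_yaw, heading_yaw: 1-D arrays of angles (radians) sampled at fs Hz,
    each longer than the lag window. A positive lag means gaze leads the body,
    as expected during natural walking.
    """
    # Unwrap to avoid spurious jumps at the +/- pi boundary, then z-score.
    g = np.unwrap(np.asarray(gaze_yaw, float))
    h = np.unwrap(np.asarray(heading_yaw, float))
    g = (g - g.mean()) / g.std()
    h = (h - h.mean()) / h.std()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    # For lag k, pair gaze at time t with heading at time t + k.
    corr = [np.corrcoef(g[max(0, -k):len(g) - max(0, k)],
                        h[max(0, k):len(h) - max(0, -k)])[0, 1]
            for k in lags]
    best = int(lags[int(np.argmax(corr))])
    return best / fs, float(np.max(corr))
```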
DC1238: [DC] Toward Adaptive and Accessible XR Through User and Context Modeling
Mentor
Alejandro Martin Gomez / University of Arkansas
I am an Assistant Professor in the Electrical Engineering and Computer Science Department and a member of the Institute for Integrative and Innovative Research at the University of Arkansas. Before receiving this appointment, I worked as an Assistant Research Professor in the Computer Science Department at Johns Hopkins University. I earned my Ph.D. in Computer Science from the Chair for Computer-Aided Medical Procedures and Augmented Reality (CAMP) at the Technical University of Munich, and hold master’s and bachelor’s degrees in Electronic Engineering from the Autonomous University of San Luis Potosi and the Technical Institute of Aguascalientes, Mexico. My interests focus on understanding the properties of human sensory and cognitive systems to create intuitive and transferable extended reality experiences. My research integrates fundamental concepts of visual perception to design perceptually-aware extended reality systems that can be combined with robotics for education, training, and interventional medicine.
As XR technologies become increasingly integrated into everyday applications, designing interfaces that dynamically adapt to diverse users, tasks, and environments is critical. Traditional XR systems often assume static layouts and uniform user capabilities, but in real-world use, users move between contexts, switch tasks, and face varying physical and cognitive demands, which can disrupt usability. My research investigates adaptive interaction in XR, focusing on dynamically adjusting spatial layouts, input strategies, feedback mechanisms, and interaction logic to respond to users’ physical, cognitive, and environmental conditions. My PhD research examines age-related differences, develops hand redirection techniques to optimize interaction in supine postures in VR, and explores adaptive mixed-reality learning environments that tune task complexity and visual feedback based on cognitive load.
DC1240: [DC] From Being There to Acting There: A Motor Learning Perspective on Reconceptualizing VR Interaction
Mentor
Regis Kopper / Iowa State University
Dr. Regis Kopper is an assistant professor of Computer Science at Iowa State University. He researches extended reality (XR) interfaces and user experience (UX) for immersive interactive systems, with a focus on interaction design, simulation, and training, particularly in the design and evaluation of highly effective 3D user interfaces and their application in critical areas such as public safety and healthcare. His work has been recognized with awards such as the Best Paper Award at the IEEE Symposium on 3D User Interfaces and the IEEE Virtual Reality’s 3D User Interfaces Grand Prize. More recently, Dr. Kopper has expanded his research to investigate the potential of immersive technology for delivering equitable, sustainable, and accessible healthcare, reflecting his dedication to creating innovative solutions with global impact. His research has been funded by prestigious organizations, including the DoD, NSF, NIH, and NIST. Before joining Iowa State, Dr. Kopper held faculty appointments at Duke University and the University of North Carolina at Greensboro, as well as a postdoctoral appointment at the University of Florida. He holds a B.A. and M.S. in Computer Science from PUCRS, Brazil, and a Ph.D. in Computer Science from Virginia Tech.
Virtual Reality promises embodied superpowers that defy physical laws, creating a sensorimotor gap between fixed biological limits and unbounded virtual capabilities. While the field employs various evaluation methods, my systematic review critiques the application of subjective concepts like presence, revealing that its usage is often theoretically underspecified for validating interaction quality. Addressing this, I shift focus from the subjective feeling of being there to the computational mechanisms of acting there. I conceptualize VR interaction as a sensorimotor mapping and propose a design space grounded in motor learning theory, distinguishing between motor adaptation (recalibrating internal models) and de novo learning (constructing novel policies). Finally, I employ Bayesian computational modeling to decode the learning dynamics of mappings in the design space. By simulating users as Bayesian agents, this work aims to explain the acquisition of virtual capabilities, contributing to a transition from heuristic trial-and-error to predictive computational modeling.
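The following minimal sketch illustrates the flavor of Bayesian agent described above: a Gaussian belief over a perturbed sensorimotor gain, updated trial by trial with a conjugate (Kalman-style) rule, producing a motor-adaptation learning curve. The model form, parameter values, and function names are illustrative assumptions, not the dissertation's actual model:

```python
import numpy as np

def bayesian_adaptation(true_gain, trials=100, obs_noise=0.2,
                        prior_mean=1.0, prior_var=1.0, rng=None):
    """Simulate a Bayesian agent recalibrating its internal model of a
    perturbed sensorimotor gain (motor adaptation, not de novo learning).

    Each trial the agent observes a noisy outcome of the true mapping and
    performs a conjugate Gaussian update of its belief.
    """
    rng = rng or np.random.default_rng(0)
    mean, var = prior_mean, prior_var
    estimates = []
    for _ in range(trials):
        obs = true_gain + rng.normal(0, obs_noise)   # noisy visual feedback
        k = var / (var + obs_noise ** 2)             # Kalman gain: trust in evidence
        mean = mean + k * (obs - mean)               # posterior mean shifts
        var = (1 - k) * var                          # posterior uncertainty shrinks
        estimates.append(mean)
    return np.array(estimates)

# A 1.5x gain perturbation: the belief converges along a learning curve.
curve = bayesian_adaptation(true_gain=1.5)
```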
DC1246: [DC] Improving XR Training and Collaboration Through Shared, Self, and Mediated Awareness
Mentor
Rashid Ali / Department of Engineering Science, University West, Sweden
Rashid Ali is currently a Senior Lecturer (Associate Professor) at the Department of Engineering Science, University West, Sweden. He has a strong academic and industry background, focusing on cutting-edge technologies in wireless networks and artificial intelligence (AI). His research interests include federated learning, reinforcement learning, and their applications in wireless networks, sensors, and intelligent systems. He actively supervises Ph.D. students in sensors and AI, and participates in industrial-academic collaborations. Rashid also holds a Higher Education Pedagogy Certificate from Gothenburg University, underlining his dedication to quality teaching and student engagement. He completed his Ph.D. in Information and Communication Engineering at Yeungnam University, South Korea (2019).
Extended Reality (XR) creates new possibilities for training and collaboration by merging digital information with the physical space. However, sustaining awareness - the ongoing understanding of what others (or oneself) are doing, seeing, and needing - remains challenging. My PhD explores how awareness emerges in XR training spaces, and how it can be maintained between people, within oneself, and in interaction with intelligent agents.
DC1249: [DC] Beyond Visual Dominance: Haptics as Social Cues for Enhanced Multi-User Experiences in VR
Mentor
Yan Hu / Blekinge Institute of Technology, Sweden
Dr. Yan Hu is an associate professor in the Department of Computer Science at Blekinge Institute of Technology, Sweden, where she leads the Game and Interactive Systems team. She received her PhD in Computer Science from Blekinge Institute of Technology in 2017 and began her VR research during her postdoctoral studies. Her research interests include human-computer interaction, XR interaction, and biometric research in behavior and interaction. She has co-authored publications in TVCG, IEEE VR, and ISMAR. Yan currently supervises two PhD students as the main supervisor and two as a co-supervisor in the area of XR.
This research investigates the potential of haptic patterns serving as a social cue that supports users’ social cognition in visually demanding virtual reality (VR) environments. While prior work in social VR has primarily focused on visual and auditory information, this research aims to examine how vibrotactile patterns can represent relational or group-relevant information and influence users’ social understanding. Building on multisensory social cognition theories, the research aims to establish design principles for haptic-based social signals and explore how they complement visual cues in social VR contexts.
DC1258: [DC] Adaptive Visual Complexity in VR Training: An Eye-Tracking Approach to Detecting and Regulating Embodied Cognitive Load
Mentor
Tim Dwyer / Monash University, Australia
Prof. Tim Dwyer is Australia’s foremost Information Visualisation researcher with more than 20 papers at the premier conference for this discipline, CORE A*-ranked IEEE VIS – with numerous Best Paper and Honourable Mention for Best Paper awards. He has publications on topics in data visualisation, visual analytics and human computer interaction in ACM CHI, ACM UIST, IEEE VR, IEEE PacificVis, EuroVis, Interact, Graph Drawing, Diagrams, IEEE TVCG, WWW and others. He has over 100 peer-reviewed papers total, with over 9,000 citations and an H-index of 56 (Google Scholar, February 2026). In addition to best paper awards at IEEE VIS he has paper awards at ACM CHI, ACM ISS, ACM AVI, IEEE PacificVis and IEEE BDVA.
Virtual Reality (VR) provides an effective medium for training complex spatiotemporal procedures. However, current systems rarely account for individual differences in cognitive load during embodied interactions. This doctoral research investigates how integrated eye-tracking in VR headsets can be used both to detect and to regulate learners’ embodied cognitive states. First, I will develop an instrument that combines low-level ocular metrics (e.g., pupil dilation, fixation) with higher-level gaze behaviors (e.g., gaze entropy, scanpath organization) to model embodied cognitive load in spatiotemporal VR tasks. Using these models, I will design and evaluate a visually adaptive VR training environment that adjusts visual complexity, saliency, and scene structure in real time based on inferred user state. Two planned user studies will (1) validate the embodied cognition instrument and train real-time classifiers, and (2) test whether state-contingent visual adaptation improves learning and user experience. This work aims to contribute eye-tracking-based models and design guidelines for adaptive VR training.
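For concreteness, here is one common way to compute the higher-level gaze metrics mentioned above. The sketch derives stationary and transition gaze entropy from fixation positions over a spatial grid; the grid size and the load interpretation are assumptions for illustration, not the instrument proposed in the abstract:

```python
import numpy as np

def gaze_entropies(fixations, grid=(8, 8)):
    """Stationary and transition gaze entropy over a spatial grid.

    fixations: (N, 2) array of normalized (x, y) fixation positions in [0, 1).
    Higher stationary entropy means fixations spread across more of the scene;
    higher transition entropy means less structured scanpaths. Both are often
    read as markers of elevated load, though interpretations vary by task.
    """
    fixations = np.asarray(fixations, float)
    rows, cols = grid
    cells = (np.clip((fixations[:, 1] * rows).astype(int), 0, rows - 1) * cols
             + np.clip((fixations[:, 0] * cols).astype(int), 0, cols - 1))

    # Stationary entropy: Shannon entropy of the fixation distribution.
    p = np.bincount(cells, minlength=rows * cols).astype(float)
    p /= p.sum()
    stationary = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))

    # Transition entropy: entropy of first-order cell-to-cell transitions,
    # weighted by how often each cell is visited.
    trans = np.zeros((rows * cols, rows * cols))
    for a, b in zip(cells[:-1], cells[1:]):
        trans[a, b] += 1
    row_sums = trans.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(row_sums > 0, trans / row_sums, 0.0)
        logt = np.where(t > 0, np.log2(t), 0.0)
    transition = float((p * -(t * logt).sum(axis=1)).sum())
    return stationary, transition
```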
DC1260: [DC] Enhancing Accessibility and User Experience of Virtual Reality Locomotion for Older Adults
Mentor
Daniel A. Muñoz / Hong Kong Baptist University
From 2018 to 2025 he has been recognised in The Australian newspaper Research Magazine as Australia’s foremost Computer Graphics researcher based on his publications in premier computer graphics journals. In 2023 he was profiled in an Australian photographic feature article.
Virtual Reality (VR) applications geared towards Older Adults (OAs) are increasing in both medical and leisure contexts. However, OAs are repeatedly excluded from technological design, despite the significant global growth of the aging population. The result is designs that are not age-friendly, contributing to frustration when trying to use VR and eventually leading to non-adoption. VR locomotion is one of the many aspects that may pose barriers to the adoption of VR by OAs. VR locomotion refers to the ways an individual navigates within a virtual environment and is fundamental to any VR experience. Most VR systems are designed with younger users in mind; for older users, these designs are either transferred directly or at best minimally adapted, further rendering VR an inaccessible platform for OAs. This PhD work aims to improve the accessibility of VR experiences for OAs by addressing VR locomotion. Mixed and participatory methods will be used to focus on 1) unique OA needs in VR locomotion, 2) mental models of VR interactive mechanisms, and 3) suitability criteria for evaluating VR locomotion techniques.
DC1263: [DC] Immersive disorientation - Visualizing Dolly Zoom through Counter-Invariant Perception in Cinematic Virtual Reality
Mentor
Shohei Mori / University of Stuttgart
Since 2014 he has been one of the founders of the new research area of Immersive Analytics, which explores information visualisation using emerging immersive display and interaction technologies. His work combines expertise in algorithms, optimisation, and interaction design and evaluation. His career has spanned industry and academia. He has developed techniques for network and set visualisation and multidimensional scaling implemented in commercial and open-source software, including Microsoft Visual Studio.
Cinematic virtual reality (CVR) is a relatively new academic discipline within the field of VR studies. Many areas remain unexplored, notably cinematography for CVR. My doctoral research focuses on camera movement in CVR, specifically the dolly zoom. While this well-established technique creates visual distortion through camera movement in traditional cinema, it cannot be applied directly in CVR due to optical limitations. Thus, the research explores the possibility of representing the perceptual distortion of the dolly zoom supported by rigorous scientific principles. This paper consists of three parts. First, it introduces the background and research questions of the proposed research. Second, it discusses the theories and methodology outlining the project. Third, it recapitulates the work done and proposes further work to complete the research. The key focus of the research is simulating dolly zoom effects by manipulating 3DoF projection parameters and creating an inverted scaling shader for 6DoF experiences. It will contribute to the development of CVR by introducing a new research direction in cinematography, a field that is currently understudied in scholarly research.
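The flat-screen dolly zoom that the project seeks to re-create perceptually has a simple closed form: to keep the subject's on-screen size constant while the camera moves, the field of view is solved from the frustum width at the subject plane. A minimal sketch of that standard textbook relation (not the author's shader):

```python
import math

def dolly_zoom_fov(initial_fov_deg, initial_dist, new_dist):
    """FOV that keeps the subject's on-screen size constant while the
    camera dollies from initial_dist to new_dist (the classic dolly zoom).

    Derivation: the frustum width at the subject plane is
    w = 2 * d * tan(fov / 2); holding w fixed and solving for fov gives
    fov' = 2 * atan(w / (2 * d')).
    """
    half = math.radians(initial_fov_deg) / 2
    w = 2 * initial_dist * math.tan(half)   # subject-plane width to preserve
    return math.degrees(2 * math.atan(w / (2 * new_dist)))

# Dolly in from 10 m to 5 m: a 60-degree FOV must widen to about 98.2
# degrees, warping the background while the subject stays the same size.
print(dolly_zoom_fov(60.0, 10.0, 5.0))
```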
DC1265: [DC] Virtual Reality for Mental Health: Toward Bioadaptive Narrative and Sensory Feedback Systems
Mentor
Majed Elwardy / Blekinge Institute of Technology
Majed Elwardy is a university lecturer and researcher in computer science. He earned his Ph.D. from Blekinge Institute of Technology (BTH), Karlskrona, Sweden, in 2025. He previously worked as a research assistant at the Signal Processing and Information Systems (SPIS) Laboratory at Sabanci University, Turkey, and served as a teaching assistant in the Electronics Engineering and Mathematics department at Sabanci University. Prior to that, he was a teaching and research assistant in the Electronics and Communications Engineering department (ECE) at Mansoura University, Egypt. Majed’s current research focuses on extended reality (XR) applications and video quality assessment, with additional interests in statistical signal processing, machine learning, virtual reality, and brain–computer interfaces. He holds a B.Sc. in Communications and Information Engineering from Mansoura University (2011) and an M.Sc. in Electronics Engineering from Sabanci University (2016).
This research investigates how virtual reality, physiological biofeedback, and conversational AI can be integrated to create adaptive mental health interventions. Traditional care faces significant scalability challenges, prompting a need for digital alternatives. The proposed system operates through two primary modes: a sensory mode and a narrative mode. To stabilize arousal and provide grounding, the sensory component utilizes experiences such as naturalistic environments, cyberdelic visuals, spatial audio, and haptic feedback. Conversely, the narrative component employs a generative AI agent to offer guided reflection and supportive dialogue. Transitions between these modes are driven by real-time physiological data, including heart rate variability, EEG, and EDA activity. When the system detects markers of high stress or increased cognitive load, it prioritizes sensory grounding. Once indicators suggest the user is stable or receptive, it shifts toward narrative exploration. By integrating these modalities, the project seeks to establish design principles for scalable, consumer-oriented digital tools that respond dynamically to individual needs in real-time.
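As a toy illustration of the bioadaptive arbitration described above, the sketch below switches between sensory and narrative modes from two physiological markers, with a hysteresis band so the session does not oscillate. All thresholds and signal choices are placeholders, not values from the proposed system:

```python
from enum import Enum

class Mode(Enum):
    SENSORY = "sensory"      # grounding: nature scenes, spatial audio, haptics
    NARRATIVE = "narrative"  # generative-AI guided reflection and dialogue

def select_mode(hrv_rmssd_ms: float, eda_peaks_per_min: float,
                current: Mode) -> Mode:
    """Illustrative arbitration rule in the spirit of the proposed system.

    Low HRV and frequent EDA responses are read here as stress markers,
    triggering sensory grounding; recovery shifts the session back toward
    narrative exploration. Thresholds are invented for this sketch.
    """
    stressed = hrv_rmssd_ms < 25 or eda_peaks_per_min > 8
    recovered = hrv_rmssd_ms > 40 and eda_peaks_per_min < 4
    if stressed:
        return Mode.SENSORY
    if recovered:
        return Mode.NARRATIVE
    return current  # hysteresis: stay put between the two bands

mode = select_mode(hrv_rmssd_ms=22.0, eda_peaks_per_min=10.0,
                   current=Mode.NARRATIVE)  # -> Mode.SENSORY
```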
DC1267: [DC] Towards Universal Access: Building a Cross-Device Mixed Reality Ecosystem
Mentor
Richard (Rick) Skarbez / La Trobe University
Dr. Richard (Rick) Skarbez is a computer scientist, a Senior Lecturer at La Trobe University in Melbourne, Australia, and an experienced XR researcher (although he hates the term XR). His research focuses on understanding the psychological, social, and cultural impacts of emerging technologies, particularly virtual reality (VR) but also augmented and mixed reality (AR/MR), artificial intelligence (AI), and related topics under the umbrella of human-computer interaction (HCI). His research has received several awards and honorable mentions, including the 2018 IEEE VGTC best dissertation award (sole honorable mention) and a best paper award (Top 1%) at ACM CHI 2025.
While mixed reality (MR) has seen significant recent advancements, it remains a technology with entry barriers. Recent studies on universal access in MR, including my previous Ph.D. research, have shown an increasing interest in the use of non-dedicated devices to broaden access and improve user experience for users who, for various reasons, prefer alternatives to head-mounted displays (HMDs). Non-dedicated MR describes multi-purpose devices that can also be used to deliver MR experiences, such as smartphones, tablets, or projectors. This position paper covers theoretical work on universal access to MR with a focus on access and user experience using non-dedicated devices. I describe my two previous projects: a systematic review and a co-creation workshop with older adults. In addition, I describe my ongoing research on seamless cross-device interactions through universal input methods. Last, I describe the current progress and planned next steps, alongside questions to discuss at the Doctoral Consortium at IEEE VR 2026.
DC1268: [DC] Visual Perception Enhancement via Transient Visual Cue in Immersive Virtual Reality
Mentor
Tim Weissker / RWTH Aachen University
Tim Weissker is a senior scientist for virtual reality at the Visual Computing Institute of RWTH Aachen University, Germany, where he is permanently appointed for conducting independent research in the broad area of virtual reality and 3D user interfaces. He leads the Virtual Reality Research Group at the institute, which conducts a diverse mixture of foundational as well as application-oriented research in the field. Tim’s research interests include a large variety of topics on effective, efficient, and comfortable user interaction in both single- and multi-user 3D virtual environments, all of which leverage the unique potential that virtual reality systems offer beyond the mere replication of real-world scenarios.
Virtual Reality (VR) offers users a realistic sense of presence by simulating environments rich in visual stimuli. To provide a practical and effective experience, it is essential to guide a user's behavior and perception within the VR environment as intended by the creator. Highlighting is a widely used method to increase the perception of a specific object by making it stand out from its surroundings, but it can disrupt the context or the immersive quality of the VR environment. This consortium paper presents research questions to address this issue and experimental plans to investigate a perception-enhancing method that preserves immersion.
DC1274: [DC] Augmented Co-Embodiment for Motor Skill Learning with Held Tools in Virtual Reality
Mentor
Steve Feiner / Columbia University
Steve Feiner (Ph.D., Brown, ’87) is the Wang Family Professor of Computer Science at Columbia University, where he directs the Computer Graphics and User Interfaces Lab. He has been doing AR and VR research for over 30 years, designing and evaluating novel 3D interaction and visualization techniques, creating the first outdoor AR system using a see-through head-worn display and GPS, and pioneering experimental applications of AR and VR to fields as diverse as tourism, journalism, assembly, maintenance, construction, dentistry, and medicine. Steve is a Fellow of the ACM and the IEEE, a member of the ACM SIGCHI Academy and the IEEE VR Academy, and the recipient of the ACM SIGCHI 2018 Lifetime Research Award, the IEEE ISMAR 2017 Career Impact Award, and the IEEE VGTC 2014 Virtual Reality Career Award. He and his colleagues have won the IEEE ISMAR 2022 and 2019 Impact Paper Awards, the ISWC 2017 Early Innovator Award, and the ACM UIST 2010 Lasting Impact Award.
Virtual reality (VR) is a promising tool for developing new pedagogical methods for motor skill learning, as it enables more varied concurrent visual feedback. Co-embodiment allows two people to share control of a single avatar. This technique has become popular in virtual reality for studying the relationship between the sense of agency and embodiment of the co-controlled avatar. Recently, some studies have begun to propose motor skill learning methods based on virtual co-embodiment. I propose to improve and extend these methods along three main topics. (Q1) Investigate whether additional concurrent feedback alongside virtual co-embodiment could improve learning performance. (Q2) Extend the results on embodiment and co-embodiment to an external held object, such as a crafting tool. (Q3) Propose more efficient co-embodiment methods for motor skill learning that do not mobilise a human instructor. Our objective is to improve motor skill learning performance in craftsmanship, especially when using held tools, paying particular attention to the conscious and unconscious learning processes.
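A common building block in co-embodiment studies is a weighted blend of the two users' joint rotations. The sketch below shows a minimal version using quaternion slerp with a single global weight; the weighting scheme and function names are illustrative assumptions rather than the method proposed here:

```python
import numpy as np

def slerp(q0, q1, w):
    """Spherical linear interpolation between unit quaternions (x, y, z, w)."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0:                      # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                 # nearly parallel: lerp and renormalize
        q = q0 + w * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - w) * theta) * q0 + np.sin(w * theta) * q1) / np.sin(theta)

def co_embodied_pose(learner_joints, instructor_joints, instructor_weight):
    """Blend two users' joint rotations into one shared avatar pose.

    instructor_weight in [0, 1]: 0 = full learner control, 1 = full
    instructor control. Co-embodiment studies typically vary this weight
    to trade agency against guidance; here it is one global scalar.
    """
    return [slerp(ql, qi, instructor_weight)
            for ql, qi in zip(learner_joints, instructor_joints)]

# 50/50 shared control of a single wrist joint (identity vs. 90-degree yaw):
blended = co_embodied_pose([[0, 0, 0, 1]],
                           [[0, 0.7071, 0, 0.7071]], 0.5)
```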
DC1276: [DC] Virtual Reality and Beyond: Exploring the Design and User Experience of Augmented Social Touch in VR
Mentor
Sasha Alexdottir / Department of NCCA, Faculty of Media, Science and Technology, Bournemouth University, UK
Sasha Alexdottir is a researcher at Bournemouth University’s National Centre for Computer Animation (NCCA), UK. Since 2021, she has specialised in Phantom Touch in Virtual Reality - a phenomenon in which individuals experience tactile sensations in VR without the use of haptic devices or external stimuli. Alexdottir leads the research into Phantom Touch in VR and its practical applications, developing a method that enables individuals to feel pseudo-haptic touch sensations within immersive environments in VR. Her work has attracted international attention and has been featured on BBC News and other global media outlets. Beyond this research, she studies social communication and interaction in the metaverse, player psychology in social VR, and avatar embodiment. Alexdottir also advocates for the responsible expansion of virtual metaverse spaces - through presentations, seminars, and public talks, she explores how virtual environments can be designed to become safer and more inclusive while preserving users’ freedom of expression.
This paper highlights the potential of exploring mediated social touch in virtual reality beyond the recreation of face-to-face interactions. It motivates my proposed research structure for identifying key design parameters of augmented social touch and investigating the resulting immersive, social, and emotional user experience.
DC1278: [DC] Personality and Affective States in Virtual Reality: A Multi-Study Program on Awe, Acute Stress, and Trust
Mentor
Manuela Chessa / University of Genoa, Italy
Manuela Chessa is Associate Professor in Computer Science at the Department of Informatics, Bioengineering, Robotics, and Systems Engineering of the University of Genoa, where she received her Ph.D. in Bioengineering in 2009. Her research interests are focused on the study and development of natural human-machine interfaces based on virtual, augmented, and extended reality and on the perceptual and cognitive aspects of interaction in VR, AR, and XR. Her background is in the study of biological and artificial vision systems and in the development of bioinspired models. Her current research focuses on integrating these two aspects to improve XR, enhance its effectiveness, and achieve societal impact. She is interested in developing XR solutions that can effectively adapt to the cognitive and physical status of the users. She is involved in several national and international projects to develop XR and HCI solutions for rehabilitation tasks, for teaching and training. She is the Principal Investigator of the Perception & Interaction Lab at DIBRIS - PILab (https://pilab.unige.it/). She is author and co-author of more than 120 peer-reviewed scientific papers, 3 patents, and serves in the program and organizing committee of the major VR/AR/XR conferences.
This doctoral research investigates how personality traits shape affective experiences and physiological responses in Virtual Reality (VR). While VR is widely used to elicit emotions, induce stress, and support social interaction, individual differences are often treated as noise rather than a central focus. Building on the Big Five model, this cumulative thesis examines personality-affect relationships across three complementary studies that share a common methodological core: immersive VR or interactive 3D environments, brief personality assessment, and biosensors. The first study, already completed, explores awe in VR across diverse scenes and cultures, revealing strong effects of scene type and presence but a more complex pattern for personality and physiology than expected. The second study examines acute stress in a VR nursing training scenario, asking how personality moderates stress responses, performance, and acceptance of VR compared to traditional skills lab training. The third study focuses on avatar-mediated trust in collaborative VR, investigating how personality traits influence trust in virtual partners and associated physiological responses. Together, these studies uncover personality-specific affective and physiological patterns in VR and derive design implications for awe experiences, training systems, and avatar design.
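Methodologically, the question of whether personality moderates stress responses maps onto a trait-by-condition interaction test. A minimal sketch on simulated data (variable names, effect sizes, and the stress index are invented for illustration, not the study's actual analysis):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: does trait neuroticism moderate the VR-vs-skills-lab
# difference in a physiological stress index? (Simulated, for illustration.)
rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "condition": rng.choice(["vr", "skills_lab"], n),
    "neuroticism": rng.normal(0, 1, n),  # standardized trait score
})
df["stress"] = (0.4 * (df.condition == "vr")
                + 0.3 * df.neuroticism
                + 0.5 * (df.condition == "vr") * df.neuroticism  # moderation
                + rng.normal(0, 1, n))

# The trait-by-condition interaction term is the moderation test.
model = smf.ols("stress ~ condition * neuroticism", data=df).fit()
print(model.summary().tables[1])
```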
DC1279: [DC] Mixed Reality for Psychological Resilience: A Conceptual Framework for Rescue Training in combination with ACT
Mentor
Ali Haskins / University of Central Florida
Ali Haskins, Ph.D., serves as Director of Development and Operations for the Virtual Experience Research Accelerator (VERA), an NSF‑funded initiative advancing large‑scale VR human‑subjects research, under development at the University of Central Florida. She leads cross‑functional teams that support software development, legal and policy frameworks, and business development strategies that will enable VERA’s growth and impact. Ali brings extensive experience in UX consulting for industry, academic, and government clients, along with prior roles in industry research analytics and engineering education at Academic Analytics and Virginia Tech. She holds bachelor’s, master’s, and doctoral degrees in Industrial & Systems Engineering from Virginia Tech with a focus in Human-Computer Interaction.
The digital transformation offers novel training opportunities for emergency and rescue teams. Traditional simulations in volunteer services face limitations, often lacking either visual realism or essential physical feedback. This PhD project addresses this gap by proposing a Mixed Reality training model. By combining physical training dummies with responsive virtual patients, the system allows for the simultaneous practice of procedural skills and mental resilience under realistic psychological stress. The goal is to integrate technical proficiency with enhanced mental well-being using Acceptance and Commitment Therapy principles.
DC1287: [DC] Toward Self-Adaptive 3D User Interfaces
Mentor
Jinghui Hu / Lancaster University
Jinghui Hu is a Senior Research Associate at Lancaster University working on the ERC-funded GEMINI project with Prof. Hans Gellersen. She received her PhD from the University of Cambridge in the Intelligent Interactive Systems Group, supervised by Prof. Per Ola Kristensson, and holds an MSc in Human–Computer Interaction from University College London. Jinghui's research analyses and models multimodal human behaviour, including gaze, head movement, and bodily action, in virtual and augmented reality environments, with the aim of understanding human behaviour patterns and building human-centred intelligent interactive systems. Jinghui has published widely in leading venues including IEEE VR, TVCG, ISMAR, ACM CHI, TOCHI and ETRA. Her contributions range from foundational modelling work to interaction techniques and multimodal XR interfaces.
3D User Interfaces for Augmented and Virtual Reality operate in highly dynamic, embodied, and context-dependent environments, where fixed interaction rules and static layouts are insufficient. While many XR systems incorporate runtime adaptation or heuristic optimization, these approaches differ fundamentally from true closed-loop adaptation as defined in Adaptive Software research. This work lays foundational ground for defining Adaptive 3D User Interfaces by grounding interface adaptation in established self-adaptive systems principles. We identify core interface properties—form, content, placement, and interaction—as primary candidates for adaptation. We present a controlled, ontology-driven adaptive interface framework that enables reproducible evaluation of adaptive behavior. Finally, we outline an experimental protocol for assessing user performance, preference, and trust under adaptive interface conditions.
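The closed-loop adaptation that this work grounds itself in is commonly formalized in self-adaptive systems research as a MAPE-K loop (Monitor, Analyze, Plan, Execute over shared Knowledge). Below is a minimal sketch adapting one of the named interface properties, placement, under a hypothetical context signal and bounds; the signal, thresholds, and property values are assumptions, not the paper's framework:

```python
from dataclasses import dataclass

@dataclass
class Knowledge:
    """Shared knowledge base of the MAPE-K loop."""
    panel_distance_m: float = 1.5   # adaptable 'placement' property
    min_d: float = 0.5              # ontology-style bounds on adaptation
    max_d: float = 3.0

def monitor(sensors) -> dict:
    """Collect runtime context (here: a hypothetical walking-speed signal)."""
    return {"walk_speed": sensors["walk_speed"]}

def analyze(ctx, k: Knowledge) -> bool:
    """Detect a violated adaptation goal: panels too close while walking."""
    return ctx["walk_speed"] > 0.8 and k.panel_distance_m < 2.0

def plan(k: Knowledge) -> float:
    """Choose a new placement satisfying the goal within the bounds."""
    return min(k.max_d, 2.5)

def execute(k: Knowledge, new_distance: float) -> None:
    k.panel_distance_m = new_distance  # would reposition the 3D panel in XR

k = Knowledge()
ctx = monitor({"walk_speed": 1.2})
if analyze(ctx, k):
    execute(k, plan(k))
print(k.panel_distance_m)  # 2.5: panels pushed back while the user walks
```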
DC1291: [DC] Designing Agency-Preserving AI Mediation Supporting Self-Directed Learning in Augmented Reality
Mentor
Ye Pan / Shanghai Jiao Tong University
Ye Pan is an Associate Professor at the John Hopcroft Center, Shanghai Jiao Tong University. Her research interests include AR/VR, avatars and characters, 3D animation, human–computer interaction, and computer graphics. She previously worked as an Associate Research Scientist at Disney Research Los Angeles and received her Ph.D. from University College London.
AI-mediated assistance is increasingly embedded in learning contexts, yet many systems prioritize efficiency and responsiveness in ways that displace learner control. While prior work on adaptive learning, explainable AI, and intelligent tutoring has examined transparency and personalization, learner agency is typically treated as a secondary design constraint rather than a primary interactional objective. This research investigates how AI-mediated assistance can be designed to preserve learner agency during self-directed learning. It proposes Agency-Preserving AI Mediation (APAM), a design framework that emphasizes negotiable control over when and how assistance is enacted. Using augmented reality (AR) as an interactional medium, the work examines how agency can be made observable through learner action rather than inferred cognitive state. Context-Enhanced Learning (CEL) supports coherent, context-sensitive assistance without asserting learner ability or performance. This work contributes an interaction-centered perspective on agency preservation in immersive AI-mediated learning systems.
DC1298: [DC] Road-map to Efficient Attention Guided Augmented Reality User Interfaces: From Controlled Environments to the Wild
Mentor
Jeanine Stefanucci / University of Utah
Jeanine Stefanucci, PhD is a professor in the Department of Psychology with an adjunct appointment in the School of Computing at the University of Utah. She earned her PhD at the University of Virginia in 2006. Her area of specialization is human factors and engineering psychology. Her research program investigates how and whether emotional, physiological, and physical states of the body have an influence on how we see, think about, and navigate our environments. She conducts this research in natural, outdoor settings, indoors in hallways or buildings, and in virtual environments (both immersive and desktop). She strives to make her work apply to issues beyond psychology, as evidenced by her many publications that cross disciplines. She believes the applicability of her work has been integral in securing continued funding since obtaining her first academic position 20 years ago.
This doctoral consortium paper discusses my roadmap towards efficient attention-guided Augmented Reality interfaces for public safety. My main goal is to evaluate the effect of AR interface placement and interaction design on performance, workload, and situational awareness under a dual-task configuration. Currently, I am working on how different AR display placements under dual-task configuration contribute to trade-offs between performance and workload. I aim to evaluate core AR UI modules, such as information display, path guidance, annotations, and alerts, using a dual-task evaluation toolkit, and work with first responders to validate the findings from the lab in a real-life scenario. The goal of this roadmap is to recommend practical design considerations and empirically validated AR interfaces to assist first responders in maintaining performance during dynamic scenarios.