Doctoral Consortium

The Doctoral Consortium will be held on Saturday, 16 March 2024, in the Sorcerer's Apprentice Ballroom. All times below are given in the local time of Orlando, Florida, USA (EDT, UTC-4).

Here is the key information for presenters and mentors:

  • Each presentation includes a 10-minute talk + 5-minute Q&A.
  • All presenters and mentors need to be present at the scheduled session and are encouraged to attend as much of the doctoral consortium as possible.
  • After each session, the mentors and students can use the Break/Lunch time for further breakout discussions and mentoring.
  • Some students may have a pair of mentors. Because some mentors cannot attend the conference in person, we recommend that the students take the initiative to reach out to allocated mentors and arrange a separate online meeting for mentoring.
  • The venue for the DC track is Sorcerer's Apprentice 2.
  • The final mentor allocation will be released soon.
  • The presentations and mentoring at the DC mark the start of collaborations, and we strongly recommend that presenters and mentors hold periodic meetings to deepen these collaborations.

Schedule - 16 March 2024, Saturday
08:20 - 08:30 am Welcome
08:30 - 10:00 am Presentations 1-6 (10-min talk + 5-min Q&A for each presentation)
Jinwook Kim - Jan Springer
Hyeongil Nam - Jens Grubert
Siamak Ahmadzadeh Bazzaz - Jan Springer
Muhammad Twaha Ibrahim - Jan Springer
Xueqi Wang - Frank Guan
Jiachen Liang - Frank Guan
10:00 - 10:30 am Break (breakout with mentors)
10:30 am - 12:00 pm Presentations 7-12 (10-min talk + 5-min Q&A for each presentation)
Seoyoung Kang - Bruce Thomas
Assem Kroma - Bruce Thomas
Kristen Grinyer - Bruce Thomas
Rachel Masters - Jens Grubert
Sunday Ubur - Shohei Mori
Dahlia Musa - Shohei Mori
12:00 - 1:30 pm Lunch
1:30 - 3:30 pm Presentations 13-20 (10-min talk + 5-min Q&A for each presentation)
Dong Woo Yoo - Guillaume Moreau
Brett Benda - Steve Feiner
Seonji Kim - Dieter Schmalstieg
Hail Song - Jason Orlosky
Dongyun Han - Steve Feiner
Jennifer Cremer - Dieter Schmalstieg
Elham Mohammadrezaei - Guillaume Moreau
Nikitha Donekal Chandrashekar - Guillaume Moreau
3:30 - 4:00 pm Break (breakout with mentors)
4:00 - 5:00 pm Presentations 21-24 (10-min talk + 5-min Q&A for each presentation)
Danah Omary - Isaac Cho
Jingyi Zhang - Frank Guan
Tomáš Nováček - Isaac Cho
Ryan Canales - Isaac Cho
5:00 - 5:30 pm Breakout with mentors

Accepted Students

The application of Digital Twins in Sustainable Urban Planning: from data acquisition to 3D virtualization (ID: 1109)

Author: Siamak Ahmadzadeh Bazzaz, IUSS Pavia University, Italy
Mentor: Jan Springer

This research explores the role of Digital Twins (DT) in enhancing Sustainable Urban Planning, especially in climate-resilient regions. It integrates DT for real-time data acquisition, processing, and 3D virtualization, focusing on climate change adaptation and environmental risk mitigation. The study combines a literature review, a conceptual framework, case studies, and data collection, showcasing a comprehensive approach to applying DT in urban planning. At the core of this research is the innovative use of DT for dynamic urban management, enabling informed decision-making and improved resilience. Challenges such as integrating DT into existing urban infrastructures and ensuring user accessibility are identified, highlighting areas for further innovation. Looking forward, the research anticipates DT's growing role in urban planning, suggesting potential for broader adoption and further technological integration. This work positions DT as a transformative tool for sustainable and resilient urban development in the face of climate challenges.

Remapped Interaction Detection by Naive Users in Virtual Reality (ID: 1095)

Author: Brett Benda, University of Florida, United States
Mentor: Steve Feiner

Remapped interactions have often been evaluated through the lens of detection: if these techniques are used at undetectable levels, they do not risk distracting the user, breaking immersion, or increasing cybersickness. However, research has relied heavily on methods that induce behavior and grant knowledge to participants in ways that do not reflect actual end users. The goal of this dissertation is to evaluate the detection of these techniques by naive users, who do not know the techniques are being used and are focused on other tasks, in order to guide the use of these techniques outside the laboratory.

Animating Interactive Virtual Humans (ID: 1112)

Author: Ryan Canales, Clemson University, United States
Mentor: Isaac Cho

Immersive virtual worlds, such as in video games, have captivated the general public for years. Virtual reality (VR) takes immersion further, placing users inside virtual environments and allowing them to interact with virtual objects and even socialize with others in a shared virtual space. The effectiveness of VR is heavily influenced by the quality of virtual interactions and characters. This research proposal aims to address some of the challenges of creating engaging virtual experiences by focusing on the motion of virtual humans.

Scan2Twin: Virtual Reality for Enhanced Anatomical Investigation (ID: 1113)

Author: Jennifer Cieliesz Cremer, University of Florida, United States
Mentor: Dieter Schmalstieg

Interpreting 3D information from 2D slices of data points is a notoriously difficult process, especially in the medical field with its use of scan imaging. This dissertation aims to apply and assess the advantages of virtual reality (VR) for exploring volumetric visualizations of medical images and to enable efficient generation of explicit, unambiguous surface models. I will evaluate the effects of stereoscopic viewing on different methods of volumetric data visualization, experiment with tool development to increase 3D modeling accessibility, and conduct an expert review of the overall system design, which we refer to as Scan2Twin.

Understanding the impact of the Fidelity of Multimodal Interactions in XR based Training Simulators on Cognitive Load (ID: 1120)

Author: Nikitha Donekal Chandrashekar, Virginia Tech, United States
Mentor: Guillaume Moreau, Steve Feiner

As eXtended Reality (XR) technologies evolve, the integration of multimodal interfaces becomes a defining factor in immersive experiences. Existing literature highlights a lack of consensus regarding the impact of these interfaces in XR environments. The uncanny valley effect, which can be amplified by multimodal stimuli, poses a potential challenge of increased cognitive load due to dissonance between user expectations and reality. My research pivots on the observation that current studies often overlook a crucial factor: the fidelity of the stimuli presented to users. The main goal of my research is to answer the question of how multimodal interactions in XR-based applications impact the cognitive load experienced by the user. To address this gap, I employ a comprehensive human-computer interaction (HCI) research approach involving frameworks, theories, user studies, and guidelines. The goal is to systematically investigate the interplay of stimulus fidelity and cognitive load in XR, aiming to offer insights for the design of Audio-Visual-Haptic (AVH) interfaces.

Supporting Complex Interactions in Mobile Virtual Reality (ID: 1114)

Author: Kristen Grinyer, Carleton University, Canada
Mentor: Bruce Thomas, Jens Grubert

Virtual reality (VR) is poised to become a ubiquitous technology in the coming years. However, its high cost barrier to adoption prevents user groups of lower socioeconomic status from accessing the technology, leaving them out of what may become a predominant computing paradigm in the next decade. Low-cost mobile VR has the potential to democratize VR access but offers limited interaction capabilities due to low-fidelity hardware and few input options. Mobile VR cannot currently support most VR applications or offer effective VR experiences. This doctoral consortium proposal outlines my past, current, and future work aiming to investigate and develop low-cost interaction solutions for mobile VR that support complex interactions, and outlines topics for future discussion.

Perception of Visual Variables on Various Tiled Wall-Sized Display Settings in Immersive Environment (ID: 1102)

Author: Dongyun Han, Utah State University, United States
Mentor: Steve Feiner, Dieter Schmalstieg

Immersive analytics refers to carrying out visual analytics in interactive virtual environments (IVEs). As immersive technologies have rapidly matured with commercial success, researchers have sought to leverage their potential to aid human data understanding and sensemaking processes. Unlike the limited information space available in 2D display settings, users in IVEs can use 3D immersive space as an information space by creating as many visualizations as they want. Despite growing knowledge of its benefits, many challenges remain for immersive analytics to be effectively deployed for various applications. Effective use of data visualizations in IVEs requires a better understanding of how people perceive, interpret, and process visualized information. This paper discusses my research questions and presents my research progress.

Dynamic Spatially Augmented Reality on Deformable Surfaces (ID: 1119)

Author: Muhammad Twaha Ibrahim, UC Irvine, United States
Mentor: Jan Springer

Prior work on spatially augmented reality using multiple projectors has focused primarily on rigid objects, while work on projection-based displays on deformable, dynamic objects has focused only on small-scale, single-projector systems. Tracking a deformable surface and updating projections on it in real time is a significantly challenging task, even for single-projector systems. For my dissertation, I am developing the first end-to-end solution for achieving a real-time, seamless display on deformable surfaces using multiple projectors without requiring any prior knowledge of the surface shape or device calibration parameters. Challenges in achieving such a system include accurate calibration of the devices despite a continuously deforming surface, real-time tracking of the changing surface shape using multiple cameras, and real-time warping and blending of the projected imagery to create a seamless display. This work has tremendous applications in mobile and expeditionary systems where environmental factors (e.g., wind, vibrations, suction) cannot be avoided. One can create large displays on tent walls in remote, austere military or emergency operations in minutes to support large-scale command and control, mission rehearsal, large-scale visualization, or training operations. It can also be used to create displays on mobile and inflatable objects for trade shows, events, and touring edutainment applications.

Investigating Avatar Facial Expressions and Collaboration Dynamics for Social Presence in Avatar-Mediated XR Remote Communication (ID: 1104)

Author: Seoyoung Kang, KAIST, Republic of Korea
Mentor: Bruce Thomas, Yue Li (Remote)

In avatar-mediated remote communication, reflecting one's facial expressions in avatars remains challenging but essential in facilitating effective communication and fostering social presence. From our prior studies, we underscored the significance of emotion-based facial expressions in avatar-mediated remote communications within eXtended Reality (XR). Our findings indicated that during emotional conversations as well as informative speeches, activating the emotion-based blendshapes resulted in a level of social presence and communication quality comparable to activating all facial blendshapes. As we move forward, our research aims to investigate avatar facial expression considering XR contexts, with an emphasis on the challenges stemming from XR device constraints and the dynamic contexts of remote collaboration. We aspire to devise adaptive guidelines for avatar facial expression, leveraging emotional cues using XR devices. Additionally, we plan to systematically analyze the implications of remote collaboration's varying scenarios, formulating guidelines for adaptive avatar representation that account for different collaborative environments and scenarios. To enhance social presence and communication experiences among users, we are preparing for a series of user studies, aiming to validate the effectiveness of our approaches and prototype systems.

Exploring and Designing VR Locomotion Method based on Bio-signal for Hands-free Context and its Improvement (ID: 1022)

Author: Jinwook Kim, KAIST, Republic of Korea
Mentor: Jan Springer, Qi Sun (Remote)

Along with advances in hand-tracking algorithms, various interactions based on hand motion are being actively developed. However, if too many functions are concentrated on the hand, it may burden the hand and interrupt immersion. Therefore, we focused on designing a locomotion method, which is essential for exploring the broad virtual world efficiently and comfortably. We utilized bio-signals (i.e., eye tracking and EEG) to reduce the load on the hand. First, we compared the usability and efficiency of our methods to hand-tracking-based locomotion methods. The results showed that our methods are suitable for hands-free VR contexts. We also found that the EEG-based method provides an enhanced experience when used with eye tracking. To improve this effect, we are currently working on research that develops user-friendly Steady-State Visual Evoked Potential (SSVEP) stimuli that suit the VR HMD format. From our research, we aim to propose design guidelines for presenting appropriate locomotion methods depending on the various contexts in the virtual environment for an enhanced hands-free VR experience.

Experience Graph using Spatio-Temporal Scene Data for Replaying Mixed Reality Interaction (ID: 1096)

Author: Seonji Kim, KAIST, Republic of Korea
Mentor: Dieter Schmalstieg, Qi Sun (Remote)

This study proposes an experience graph, a Mixed Reality (MR) experience replaying method for recording and playing spatially adaptive interactions in a mixed reality environment. Many MR immersive interactions are experienced in a space different from the originally recorded space, so it is important to experience the interaction adaptively in the local space. As a solution to collisions and mismatches during interactions in the local space, there is research on scene generation and adaptive object placement, but changes to interactions that depend on the space, and the resulting transformation of the experience, have not been considered. This study structures spatial information and dependent interactions into an experience graph and proposes a spatially adaptive replay method that minimizes interaction deformation. Recorded interactions are evaluated quantitatively through loss rate and transformation rate; played interactions are evaluated qualitatively through user experiments; and, finally, indicators important to the user experience are reflected in the experience graph. The results of this study will contribute to experiencing immersive spatial MR in which users actively record and share spatial experiences beyond the limitations of time and space.

The Essentials of XR Prototyping for Non-Tech Professionals: Taxonomy and Low-Fidelity Tools to Aid Prototyping Decisions (ID: 1110)

Author: Assem Kroma, Carleton University, Canada
Mentor: Bruce Thomas, Mark Billinghurst (Remote)

This paper delves into the complexities of Extended Reality (XR), exploring the challenges in XR design, directing user attention, managing cybersickness, and the overall integrative nature of these issues in the context of prototyping. We offer an overview of the current state of XR and identify key areas where novice and non-technical professionals face challenges. The paper concludes with a proposed research agenda focused on three main themes and potential research directions. This work aims to enhance the design process and facilitate rapid prototyping, enabling non-technical professionals to create more effective and engaging XR experiences.

Playful and participatory cultural heritage experience using extended reality technologies (ID: 1123)

Author: Jiachen Liang, Xi’an Jiaotong-Liverpool University, China
Mentor: Frank Guan

Creative design and applications based on Cultural Heritage (CH) content have gradually become a new way of communicating and interpreting cultural values. The proposed research aims to improve the CH experience using extended reality (XR) technologies by investigating the digital affordances of XR technologies that can support the design of playful CH experience, and exploring appropriate ways to enable user-generated content for participatory CH experience. This is an interdisciplinary project that sits at the intersection of human-computer interaction (HCI), XR, and CH. A practice-based approach will be adopted to conduct research in the wild, designing and developing novel XR interfaces and interaction techniques for CH experiences. The combined design thinking and computational approach will allow an in-depth understanding of playful and participatory CH experience using XR technologies, providing empirical, artifact, and theoretical contributions to the fields of HCI, XR, and CH.

Virtual Reality Nature Environment Designs for Mental Health (ID: 1116)

Author: Rachel Masters, Colorado State University, United States
Mentor: Jens Grubert

Shinrin-yoku, or forest bathing, is a therapeutic nature immersion practice shown to reduce stress and restore mental resources. In a world where the negative effects of sustained stress are an increasingly prevalent issue, connection to nature is important. However, many populations who encounter the most stress are largely isolated from nature, like people working long hours in cities, nursing home residents, or hospital patients. For these populations, virtual reality (VR) nature immersion is a promising supplement for when nature is not accessible. In order to design optimal VR nature environments (VNEs), we must first understand why and how VNEs can be restorative. This is a current area of research, and one critical gap that the proposed research seeks to understand is how the different components of VNEs and their interactions can be optimally designed for short and long term stress reduction and attention restoration. The proposed research does a deeper exploration into the designs of plants and water for optimally restorative VR green and blue spaces.

Reinforcement Learning for Context-aware and Adaptive Lighting Design using Extended Reality: Impacts on Human Emotions and Behaviors (ID: 1115)

Author: Elham Mohammadrezaei, Virginia Tech, United States
Mentor: Guillaume Moreau

In the interconnected world of smart built environments, Extended Reality (XR) emerges as a transformative technology that can enhance user experiences through personalized lighting systems. The integration of XR with deep reinforcement learning for adaptive lighting design (ALD) has potential to optimize visual experiences while addressing real-time data analysis, XR system complexities, and integration challenges with building automation systems. The research centers on developing an XR-based ALD system that dynamically responds to user preferences, thereby positively impacting human emotions, behaviors, and overall well-being.

3D Measurement System for Wound Care (ID: 1121)

Author: Dahlia Musa, New Jersey Institute of Technology, United States
Mentor: Shohei Mori

Wounds can be challenging to treat and often require periodic measurements to determine progression. Healthcare providers typically measure wounds (e.g., pressure injuries, venous ulcers, and diabetic foot ulcers) using rulers or images; however, these measurements tend to be subjective and inaccurate. We are developing a system that measures and tracks wound progression from 3-dimensional (3D) scans. In this research, we are evaluating our 3D software in comparison to wound measurement methods commonly used by providers. Our 3D software may facilitate the measurement, documentation, and visualization of wounds to support clinicians’ treatment decisions and improve patient outcomes.

Multimodal VR/MR Simulation with Empathic Agents for Nursing Education (ID: 1106)

Author: Hyeongil Nam, Hanyang University, Republic of Korea
Mentor: Jens Grubert

In the field of healthcare education, including medical and nursing education, realistic and interactive simulations of various clinical scenes are required to mimic real-life situations, necessitating more than just a procedural experience. There has been a recent surge in demand for virtual/mixed reality (VR/MR) due to its advantages in providing diverse simulation environments. However, several challenges still remain to be addressed to advance to the next level of VR/MR-based clinical simulation. In this paper, I introduce the research that I have conducted on creating immersive clinical nursing scenes in VR/MR-based learning and training simulation systems, focusing on key elements such as multi-modal interaction, virtual agents, and learner performance assessment. Furthermore, I will discuss future research aimed at fostering collaboration in various healthcare fields, including nursing, and developing VR/MR simulation systems that enable learners to experience and learn from personalized clinical situations. These topics and questions will be further discussed at the IEEE VR 2024 Doctoral Consortium.

Precise Hand Tracking using Multiple Optical Sensors (ID: 1111)

Author: Tomáš Nováček, Faculty of Information Technology, CTU, Czech Republic
Mentor: Isaac Cho

Virtual and extended reality are among the fastest-growing fields today. The visual quality, the headset size, the latency: it all improves with every new device. However, the user interface is still falling behind. After several decades during which we mainly used a mouse and keyboard as controllers for personal computers, we finally can, and should, use other methods of input. We can even use parts of our body other than our hands to control virtual worlds; for example, we can track the eyes for additional input or run around the virtual world with our own feet. This doctoral thesis deals with hand-tracking controllers for virtual reality that do not require the user to hold or wear any device to interact with the virtual world. The goal is to create an intuitive controller using optical sensors that can provide a more natural user interface. We propose and implement a set of algorithms to fuse the tracking data from multiple optical hand-tracking sensors, providing greater precision and tracking possibilities than a single optical sensor.

Customizable Multi-Modal Mixed Reality Framework (ID: 1057)

Author: Danah Omary, University of North Texas, United States
Mentor: Isaac Cho

Mixed Reality (MR) has the potential to be used not only in the entertainment and work industries, but also for assistive technology. Mixed Reality is suited for assistive technology because it utilizes forms of physical feedback to the human senses besides vision while still making use of visual feedback. We propose a glove-based MR system framework that will use finger and hand movement tracking along with tactile feedback so that blind and visually impaired (BVI) users can interact tactilely with virtual objects. In addition to touch, our proposed framework will include robust interactions through other modalities, such as a custom voice assistant and an audio interface, as well as visual interfaces tailored to the visual needs of BVI users. Through the various modalities of interaction in our proposed framework, BVI users will be able to obtain a more detailed sense of virtual objects from any 3D model, and their experiences will not be limited by vision. The customizable features and modalities available in our proposed system framework will allow for a more individual experience that can be tailored to the varied needs of BVI users as well as general users.

Best Doctoral Consortium Paper

Toward Realistic 3D Avatar Generation with Dynamic 3D Gaussian Splatting for AR/VR Communication (ID: 1098)

Author: Hail Song, Korea Advanced Institute of Science and Technology, Republic of Korea
Mentor: Jason Orlosky

Realistic avatars are fundamental for immersive experiences in Augmented Reality (AR) and Virtual Reality (VR) environments. In this work, we introduce a novel approach for avatar generation, combining 3D Gaussian Splatting with the parametric body model SMPL. This methodology overcomes the inefficiencies of traditional image/video-based avatar creation, which is often slow and requires substantial computing resources. The integration of 3D Gaussian Splatting for representing human avatars offers realistic, real-time rendering for AR/VR applications. We also conducted preliminary tests to verify the quality of avatar representation using 3D Gaussian Splatting. These tests, displayed alongside outcomes from existing methods, demonstrate the potential of this research to significantly contribute to the creation of realistic avatars in the future. Additionally, we present several key discussions that are essential for developing and evaluating the system and that provide valuable insights for future research.

Enhancing Accessibility and Emotional Expression in Educational Extended Reality for Deaf and Hard of Hearing: A User-centric Investigation (ID: 1117)

Author: Sunday David Ubur, Virginia Tech, United States
Mentor: Shohei Mori

My research focuses on addressing accessibility challenges for the Deaf and Hard of Hearing (DHH) population, specifically within Extended Reality (XR) applications for education. Emphasizing improved communication, the study aims to enhance accessibility and emotional expression in XR environments. Objectives include identifying best practices and guidelines for designing accessible XR, exploring prescriptive and descriptive aspects of accessibility design, and integrating emotional aspects for DHH users. The three-phase research plan includes a literature review, development of an accessible XR system, and user studies. Anticipated contributions involve enhanced support for DHH individuals in XR and a deeper understanding of human-computer interaction aspects.

User exploratory learning in a Virtual Reality museum (ID: 1122)

Author: Xueqi Wang, Xi'an Jiaotong-Liverpool University, China
Mentor: Frank Guan

Museums are actively digitizing collections to store, distribute, and share them with more audiences. The adoption of interactive technologies is promoting the spread of culture through creative narratives that support education and entertainment. Virtual Reality (VR) museums are an extension of physical museums that present content in digital forms, such as photographs of museum archives. Current extended reality technologies afford the creation of interactive cultural heritage learning experiences with 3D assets. This project aims to obtain an in-depth understanding of users' experiential learning in VR museums. We will investigate appropriate ways to engage users in learning cultural heritage and creating personalized experiences. This is an interdisciplinary project that sits at the intersection of human-computer interaction (HCI), museums, and education. The research will provide insights and guidelines for the design and development of VR museums for effective experiential learning.

Data-Driven Expertise Assessment in XR: Analyzing Multimodal Sensor Data for Psychomotor and Cognitive Tasks (ID: 1092)

Author: Dong Woo Yoo, Northeastern University, United States
Mentor: Guillaume Moreau, Jason Orlosky

In this doctoral research, a novel approach is developed for the real-time assessment of user expertise, focusing on skill-based and cognitive tasks within Extended Reality (XR) environments. This approach combines advanced user modeling with state-of-the-art sensors and includes an in-depth analysis of gaze behavior and physiological responses. A distinctive aspect of this research is the real-time analysis of multimodal sensor data, which provides deeper and more precise insights into user skills and cognitive abilities. The goal is to advance the field of expertise assessment by introducing a nuanced and dynamic perspective that is suitable for a variety of applications across different domains. This research aims to establish new standards in the assessment of user expertise, meeting the evolving needs of modern educational and professional settings.

Actor Takeover of Animated Characters (ID: 1107)

Author: Jingyi Zhang, University College London, United Kingdom
Mentor: Frank Guan

With the rise in accessibility of consumer-grade virtual reality headsets, social virtual reality applications have caught widespread attention. One expects the virtual world to be lively with characters. However, current technology falls short of generating intelligent virtual agents, so creating a densely populated environment offering a variety of interactions is out of reach. This position paper introduces a takeover system that enables a single actor to seamlessly take over control of multiple virtual characters in a social virtual reality environment. In our experiments, we expect our system to enhance the perceived social presence of the scene and to uncover differences in perception between computer-driven agents and human-controlled avatars when interacting with multiple characters in the same environment.
