
Posters

Posters (Timezone: Orlando, Florida, USA, UTC-4)
Monday Posters (Sorcerer's Apprentice Ballroom). Talk with the authors: 9:45‑10:15, 13:00‑13:30, 15:00‑15:30, 17:00‑17:30
Tuesday Posters (Sorcerer's Apprentice Ballroom). Talk with the authors: 9:45‑10:15, 13:00‑13:30, 15:00‑15:30, 17:00‑17:30
Wednesday Posters (Sorcerer's Apprentice Ballroom). Talk with the authors: 9:45‑10:15, 13:00‑13:30, 15:00‑15:30, 17:00‑17:30

Monday Posters

Talk with the authors: 9:45‑10:15, 13:00‑13:30, 15:00‑15:30, 17:00‑17:30, Room: Sorcerer's Apprentice Ballroom

Effects of constant and sinusoidal display lag on sickness during active exposures to virtual reality (ID: P1002)

Stephen Palmisano, University of Wollongong; Vanessa Morrison, University of Wollongong; Robert Allison, York University; Rodney Davies, University of Wollongong; Juno Kim, University of New South Wales

When we move our heads in virtual reality (VR), display lag creates differences between our virtual and physical head pose (DVP). This study examined whether objective estimates of these DVP could be used to predict the sickness caused by different types of lag. We found that adding constant and time-varying lag to simulations generated similar levels of sickness, with all added lag conditions producing more severe sickness than our baseline control. Consistent with the DVP hypothesis, the spatial magnitude and temporal dynamics of the DVP were both found to predict cybersickness severity during active HMD VR.

Display lag effects on postural stability and cybersickness during active exposures to HMD virtual reality (ID: P1003)

Stephen Palmisano, University of Wollongong; Shao Yang Chia, University of Wollongong

This study examined whether a person's spontaneous postural sway before, and their head-movements during, exposure to virtual reality (VR) predicts their experiences of cybersickness. We compared the stability of head and body movements made by 50 HMD users to the sickness they experienced during VR simulations with different amounts of display lag. Consistent with Postural Instability Theory, we found that: 1) naturally unstable participants were significantly more likely to become sick during these laggy simulations; and 2) the severity of this sickness depended on the spatial magnitude and the temporal dynamics of their head movements during active HMD VR.

Learning Personalized Agent for Real-Time Face-to-Face Interaction in VR (ID: P1005)

Xiaonuo Dongye, Beijing Institute of Technology; Dongdong Weng, Beijing Institute of Technology; Haiyan Jiang, Beijing Institute of Technology; Pukun Chen, Beijing Institute of Technology

Interactive agents in virtual reality are anticipated to make decisions and provide feedback based on the user's inputs. Despite recent advancements in large language models (LLMs), employing LLMs for decision-making in real-time face-to-face interactions and delivering personalized feedback remains challenging. To address this, our proposed system involves generating and labeling symbolic data, pre-training a real-time network, collecting personalized data, and fine-tuning the network. Utilizing inputs such as interaction distances, head orientations, and hand poses, the agents can provide personalized feedback. User experiments show significant advantages in both pragmatic and hedonic aspects over LLM-based agents, suggesting potential applications across diverse interactive domains.

Deep-Texture: A Foldable Haptic Ring for Shape and Texture Rendering in Virtual Reality (ID: P1008)

Youjin Sung, KAIST; DongKyu Kwak, KAIST; Taeyeon Kim, KAIST; Woontack Woo, KAIST; Sang Ho Yoon, KAIST

In this paper, we present Deep-Texture, a foldable device that renders shape and texture in Virtual Reality. We devised Deep-Texture to achieve effective haptic feedback with lightweight hardware by combining the basic sensations of shape and texture. By integrating the frequency changes of a linear resonant actuator with a 1-bar mechanism, we propose a novel haptic interaction device for immersive VR experiences. Pilot test results show that the device enhances realism while maintaining usability. By open-sourcing Deep-Texture, we aim to empower a wider range of users to engage with and benefit from haptic technology, ultimately lowering the barriers to entry.

Towards Continuous Patient Care with Remote Guided VR-Therapy (ID: P1010)

Julian Kreimeier, Friedrich-Alexander University Erlangen-Nürnberg; Hannah Schieber, Friedrich-Alexander University; Noah Lewis, Friedrich-Alexander University Erlangen-Nürnberg; Max Smietana, Friedrich-Alexander University Erlangen-Nürnberg; Juliane Reithmeier, Friedrich-Alexander-Universität Erlangen-Nürnberg; Vlad Cnejevici, Friedrich-Alexander University Erlangen-Nürnberg; Prathik Prasad, Friedrich-Alexander University Erlangen-Nürnberg; Abdallah Eid, Friedrich-Alexander University Erlangen-Nürnberg; Maximilian Maier, Kinfinity; Daniel Roth, Technical University of Munich

Hand motor impairments heavily impact people's independence and overall well-being. Physiotherapy plays a crucial role after surgical interventions. Given the shortage of personnel and therapy session availability, enabling support and monitoring in the absence of the physiotherapist is a key future direction of medical care. Virtual reality has been shown to support rehabilitation. An individualized and motivating rehabilitation process is crucial to support the affected person until full recovery. We present a prototype of a VR rehabilitation system that allows the medical expert to control exercise planning and receive a detailed report on the patient's success.

Fovea Prediction Model in VR (ID: P1012)

Daniele Giunchi, University College London; Riccardo Bovo, Imperial College London; Nitesh Bhatia, Imperial College London; Thomas Heinis, Imperial College London; Anthony Steed, University College London

We propose a lightweight deep learning approach for gaze estimation that represents the visual field as three distinct regions: fovea, near, and far peripheral. Each region is modelled using a gaze parameterization based on angle-magnitude, latitude, or a combination of angle-magnitude-latitude. We evaluated how accurately these representations can predict a user's gaze across the visual field when trained on data from VR headsets. Our experiments confirmed that the latitude model generates gaze predictions with superior accuracy, with an average latency compatible with the demanding real-time functionalities of an untethered device. We also generated an ensemble model that outperforms the individual models at comparable latency.
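
To make the parameterizations above concrete, here is a minimal sketch (ours, not the authors' code) that converts a 2D gaze offset into the angle-magnitude form and bins it into the three regions; the 5° and 30° region boundaries and all function names are illustrative assumptions.

```python
import numpy as np

def to_angle_magnitude(gaze_xy):
    """Convert a 2D gaze offset (x, y), e.g. in degrees of visual
    angle from the fovea, into an (angle, magnitude) pair."""
    x, y = gaze_xy
    angle = np.arctan2(y, x)      # direction of the offset, in radians
    magnitude = np.hypot(x, y)    # eccentricity of the gaze point
    return angle, magnitude

def region_of(magnitude, fovea_deg=5.0, near_deg=30.0):
    """Assign a gaze sample to the fovea / near / far peripheral region.
    The 5-degree and 30-degree boundaries are illustrative assumptions."""
    if magnitude < fovea_deg:
        return "fovea"
    return "near" if magnitude < near_deg else "far"

angle, mag = to_angle_magnitude((12.0, 5.0))
print(angle, mag, region_of(mag))
```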

Exploring the Gap between Real and Virtual Nature (ID: P1014)

Katherine Hartley, University of Florida; Victoria Interrante, University of Minnesota

We describe the design and preliminary findings of an experiment that aims to elucidate the cost of omitting accurate haptic and olfactory stimulation from VR nature immersion experiences. Across four separate sessions, participants are immersed in virtual urban and forest environments, while seated both outside in real nature and indoors, with the correspondence of virtual environment to real environment and the order of presentation counterbalanced. We compare multiple restorative outcome measures between the four conditions, including physiological data (EDA and HR), subjective surveys of stress and contentment, and objective performance on tests of visual and auditory attentional resources.

DepBoxia: Depth Perception Training in Boxing, an Immersive Approach (ID: P1024)

Yen-Ru Chen, National Tsing Hua University; Tsung-Hsun Tsai, National Tsing Hua University; Tica Lin, Harvard University; Calvin Ku, National Tsing Hua University; Min-Chun Hu, National Tsing Hua University; Hung-Kuo Chu, National Tsing Hua University

Depth perception is crucial in novice boxer training for understanding punch timing and distance. Traditional methods rely on coach support and teamwork, leading to a high entry barrier for novice boxers to train whenever and wherever they want. To address this, we propose an immersive training system using virtual reality (VR) that integrates visual and audio guidance, specifically designed to enhance depth perception in boxing for novice boxers. Our rigorous experiment shows significant improvements in accuracy and reaction time. We introduce the system to professional boxing coaches, enabling them to integrate a systematic approach to training novice boxers' depth perception.

4D Facial Capture Pipeline Incorporating Progressive Retopology Approach (ID: P1028)

Zeyu Tian, Beijing Institute of Technology; Dongdong Weng, Beijing Institute of Technology; Hui Fang, Beijing Institute of Technology; Hanzhi Guo, Beijing Institute of Technology; Yihua Bao, Beijing Institute of Technology

The pipeline for creating high-fidelity facial models often utilizes multi-view stereo techniques for reconstruction. However, the subsequent step of retopology often involves intricate manual work, limiting the extension of facial capture systems towards 4D acquisition. This paper proposes a facial 4D capture pipeline based on high-speed cameras. We employ standard multi-view stereo techniques for 3D reconstruction. Non-linear deformations of facial expressions are decoupled from rigid movements of the skull using QR code markers. Additionally, a progressive automated retopology approach is introduced for batch processing. Results demonstrate that our system can capture continuous facial motion sequences with detailed 3D models.

Influence of Prior Acquaintance on the Shared VR Experience (ID: P1029)

Esen Küçüktütüncü, Institute of Neurosciences of the University of Barcelona; Ramon Oliver, Institute of Neurosciences of the University of Barcelona; Mel Slater, Institute of Neurosciences of the University of Barcelona

Exploring social dynamics in virtual reality (VR) at a qualitative level holds great potential for improved applications. Here we examined the influence of prior acquaintance on how people interacted with each other in VR. Groups of 3 or 4 participants, represented by realistic look-alike avatars, engaged in discussions on predefined themes. There were two conditions: (1) groups of individuals with prior connections and (2) strangers. Questionnaire responses revealed that pre-existing acquaintance fostered a stronger sense of copresence and greater sentiment compared to the strangers group. This insight is crucial for optimizing the design and dynamics of VR interactions.

Towards an Altered Body Image Through the Exposure to a Modulated Self in Virtual Reality (ID: P1033)

Erik Wolf, University of Würzburg; Carolin Wienrich, University of Würzburg; Marc Erich Latoschik, University of Würzburg

Self-exposure using modulated embodied avatars in virtual reality (VR) may support a positive body image. However, further investigation is needed to address methodological challenges and to understand the concrete effects, including their quantification. We present an iteratively refined paradigm for studying the tangible effects of exposure to a modulated self in VR. Participants perform body-centered movement tasks in front of a virtual mirror, encountering their photorealistically personalized embodied avatar with increased, decreased, or unchanged body size. Additionally, we propose different body size estimation tasks conducted in reality and VR before and after exposure to assess participants' putative elicited perceptual adaptations.

More than Fitness: User Perceptions and Hopes for Getting Fit in Virtual Reality (ID: P1034)

Aleshia Hayes, University of North Texas; Veronica Johnson, University of North Texas; Deborah Cockerham, University of North Texas

Fitness activities are linked to health, cognitive acuity, and the brain's ability to respond to a stimulus. This article reports on a mixed-methods investigation of user experiences of a commercial off-the-shelf virtual reality fitness experience. A total of 74 visitors to a southern science museum, spanning from under 18 to 64 years of age, participated in a VR fitness experience and completed a questionnaire and a semi-structured interview. Participants reported experiencing social presence with pre-recorded fitness instructors and positive usability, and a majority predicted that VR fitness could improve cognitive acuity. This pilot indicates a cross-generational interest in VR fitness tools.

Enhanced Reconstruction of Interacting Hands for Immersive Embodiment (ID: P1037)

Yu Miao, Beijing Institute of Technology; Yu Han, Beijing Institute of Technology; Yi Xiao, China Academy of Aerospace Science and Innovation; Yue Liu, Beijing Institute of Technology

Interacting hands reconstruction serves as a channel for natural user engagement in virtual reality, providing realistic embodiment that markedly elevates the immersive experience. However, accurate prediction of the spatial relations between two hands remains challenging due to the severe occlusion and homogeneous appearance of hands. This paper presents a spatial relationship refinement method for hand reconstruction, employing a module to yield the 3D relative translation between hands and a novel loss function to limit hand mesh penetration. Our method achieves state-of-the-art performance on the InterHand2.6M dataset, offering considerable potential for interacting hands reconstruction to enhance embodiment in virtual reality.

ExpressionAuth: Utilizing Avatar Expression Blendshapes for Behavioral Biometrics in VR (ID: P1038)

Tussoun Jitpanyoyos, Shizuoka University; Yuya Sato, Shizuoka University; Soshi Maeda, Shizuoka University; Masakatsu Nishigaki, Shizuoka University; Tetsushi Ohki, Shizuoka University

As interest in Virtual Reality (VR) continues to rise, head-mounted displays (HMDs) are actively being developed. Current user authentication methods in HMDs require the use of a virtual keyboard, which has low usability and is prone to shoulder surfing attacks. This paper introduces ExpressionAuth, a novel authentication method which uses the face tracking capabilities available in certain HMDs to verify the identity of the user. ExpressionAuth leverages the smile as the expression for user verification. ExpressionAuth has the potential to be a secure and usable biometric, achieving an EER as low as 0.00178 and an AUC of up to 0.999 in our experiments.

Evaluating the Feasibility of Using Augmented Reality for Tooth Preparation (ID: P1039)

Takuya Kihara, Johns Hopkins University; Andreas Keller, Technical University of Munich; Takumi Ogawa, Tsurumi University; Mehran Armand, Johns Hopkins University; Alejandro Martin-Gomez, Johns Hopkins University

Tooth preparation is a fundamental treatment technique to restore oral function in prosthodontic dentistry. This technique is complicated as it requires the preparation of an abutment while simultaneously predicting the ideal shape. We explore the feasibility of using Augmented Reality (AR) Head-Mounted Displays (HMDs) to assist dentists during tooth preparation using two different visualization techniques. A user study (N=24) revealed that AR is effective for angle adjustment, and reduces the occurrence of over-reduction. These results suggest that AR can be used to assist physicians during these procedures and has the potential to enhance the accuracy and safety of prosthodontic treatment.

Best Poster Award

Investigating Incoherent Depth Perception Features in Virtual Reality using Stereoscopic Impostor-Based Rendering (ID: P1041)

Kristoffer Waldow, TH Köln; Lukas Decker, TH Köln; Martin Mišiak, University of Würzburg; Arnulph Fuhrmann, TH Köln; Daniel Roth, Technical University of Munich; Marc Erich Latoschik, University of Würzburg

Depth perception is essential for our daily experiences, aiding in orientation and interaction with our surroundings. Virtual Reality allows us to decouple depth cues, mainly represented through binocular disparity and motion parallax. With fully mesh-based rendering methods, these cues are not problematic, as they originate from the object's underlying geometry. However, manipulating motion parallax, as in stereoscopic impostor-based rendering, raises questions about visual errors and perceived 3-dimensionality. Therefore, we conducted a user experiment to investigate how varying object sizes affect such visual errors and perceived 3-dimensionality, revealing a significant negative correlation and prompting new assumptions about visual quality.

Facial Feature Enhancement for Immersive Real-Time Avatar-Based Sign Language Communication using Personalized CNNs (ID: P1042)

Kristoffer Waldow, TH Köln; Arnulph Fuhrmann, TH Köln; Daniel Roth, Technical University of Munich

Facial recognition is crucial in sign language communication. Especially for virtual reality and avatar-based communication, enhanced facial features have the potential to better integrate the deaf and hard-of-hearing community by improving speech comprehension and empathy. However, current methods lack precision in capturing nuanced expressions. To address this, we present a real-time solution that utilizes personalized Convolutional Neural Networks (CNNs) to capture intricate facial details, such as tongue movement and individually puffed cheeks. Our system's classification models offer easy expansion and integration into existing facial recognition systems via UDP network broadcasting.

Comparatively testing the effect of reality modality on spatial memory (ID: P1046)

Leon Mayrose, Ben Gurion University; Shachar Maidenbaum, Ben Gurion University

Virtual and augmented reality hold great potential for understanding spatial cognition. However, it is unclear what effect reality modality has on our perception of and interaction with our spatial surroundings. Here, participants performed a spatial memory task using passthrough augmented reality in the real world and in a virtual environment reconstructed by scanning the real environment. We found no significant differences by reality modality for subjective measures such as reported immersion, difficulty, enjoyment, and cybersickness, nor did we find objective differences in performance. These results suggest limited effects on spatial tasks and are promising for transfer between virtual and augmented scenarios.

PUTree: A Photorealistic Large-Scale Virtual Benchmark for Forest Training (ID: P1047)

Yawen Lu, Purdue University; Yunhan Huang, Purdue University; Su Sun, Purdue University; Songlin Fei, Purdue University; Yingjie Victor Chen, Purdue University

Forest systems play an important role in mitigating anthropogenic climate change and regulating the global climate. However, due to difficulties in collecting wild data and a lack of forestry expertise, the availability of large-scale forest datasets is very limited. In this work, we establish a new virtual forest dataset named PUTree. Our goal is to create a larger, more photo-realistic, and diverse dataset as a powerful training resource for wild forest scenes. Early experimental results demonstrate its validity as a new forest benchmark for the evaluation of tree detection and segmentation algorithms, and its potential in broad application scenarios.

Effectiveness of Visual Acuity Test in VR vs Real World (ID: P1048)

Sarker Monojit Asish, Florida Polytechnic University; Roberto Enrique Salazar, University of Louisiana at Lafayette; Arun K Kulshreshth, University of Louisiana at Lafayette

Virtual Reality (VR) devices have opened a new dimension for merging technology and healthcare in an immersive and exciting way to test eye vision. Visual acuity is a person's capacity to perceive small details. An optometrist or ophthalmologist determines a visual acuity score following a vision examination. In this work, we explored how recent VR devices could be utilized to conduct visual acuity tests. We used two Snellen charts to examine eye vision in VR, similar to testing in a doctor's office. We found that VR could be utilized to conduct preliminary eye vision tests.

Effects of moving task condition on improving operational performance with slight delay (ID: P1049)

Yakumo Miwa, Nagoya Institute of Technology; Kenji Funahashi, Nagoya Institute of Technology; Koji Tanida, Faculty of Science and Engineering; Shinji Mizuno, Aichi Institute of Technology

Based on reviews and prior papers, we hypothesized that an appropriate delay in an operational system would improve its operational performance. Our experiments showed that performance improved with a slight delay. Sensory evaluation also confirmed that subjects felt supported even though there was no actual force support. Another experiment confirmed that depth movement restriction, and the ratio of the virtual tool's on-screen movement to that of the input device, affected performance improvement. We also investigated how differences in task conditions, i.e., movement distance and target area size, affect performance improvement.

Cross-Reality Attention Guidance on the Light Field Display (ID: P1050)

Chuyang Zhang, Keio University; Kai Kunze, Keio University Graduate School of Media Design

We present a cross-reality collaboration system with visual attention guidance that allows a user in VR to accurately share their view with users in the real world. The VR user can remotely decide the display content of a light field display oriented to multiple real-world users by manipulating the camera in the virtual world. The VR user's focus depth is estimated and then used to adjust the focal plane of the light field display. Our system can improve collaboration in multi-user scenarios, especially those in which one host user is delivering information to others, such as in classrooms and museums.

Training a Neural Network on Virtual Reality Devices: Challenges and Limitations (ID: P1054)

Francisco Díaz-Barrancas, Justus Liebig University; Daniel Flores-Martín, University of Extremadura; Javier Berrocal, University of Extremadura

The processing power of Virtual Reality (VR) devices is constantly growing. However, few applications take full advantage of these capabilities. Machine learning algorithms have shown promise in enabling an immersive and personalized experience for VR device users, so it is attractive to run these algorithms directly on the devices themselves, without needing other external resources. In this work, a Neural Network (NN) is trained for real-time image classification on different VR devices. The results show the feasibility of using VR devices for NN training without compromising the quality of the interaction, simply and without external resources.
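
As a rough illustration of the kind of lightweight on-device training the abstract describes, the following is a minimal sketch assuming a deliberately small PyTorch model; the architecture, sizes, and random stand-in data are our placeholders, not the authors' setup.

```python
import torch
import torch.nn as nn

# A deliberately small classifier, sized so that training could plausibly
# run on a standalone VR device's mobile chipset (sizes are assumptions).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One on-device training step over a mini-batch of images."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with random stand-in data (a real app would use captured frames).
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
print(train_step(images, labels))
```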

Exploring Influences of Appearance and Voice Realism of Virtual Humans on Communication Effectiveness in Social Virtual Reality (ID: P1056)

Yu Han, Beijing Engineering Research Center of Mixed Reality and Advanced Display; Yu Miao, Beijing Engineering Research Center of Mixed Reality and Advanced Display; Hao Sha, Beijing Institute of Technology; Yue Liu, Beijing Institute of Technology

Virtual humans play a crucial role in elevating user experiences in Virtual Reality (VR). Despite ongoing technological advancements, achieving highly realistic virtual humans with entirely natural behaviors remains a challenge. This paper explores how the appearance and voice realism of virtual humans influence communication effectiveness within social VR scenarios. Our preliminary results indicate the significant impact of alterations in appearance and voice realism on communication effectiveness. We observe that a cross-modal realism mismatch between appearance and voice can impede effective communication. This research provides valuable insights for designing virtual humans and improving the quality of social communication in VR environments.

Exploring Virtual Reality for Religious Education in Real-World Settings (ID: P1059)

Sara Wolf, Julius-Maximilians-Universität Würzburg; Ilona Nord, Julius-Maximilians-Universität Würzburg; Jörn Hurtienne, Julius-Maximilians-Universität Würzburg

Research and design of virtual reality (VR) applications for educational contexts often focus on science-related subjects and evaluate knowledge acquisition while overlooking other subjects like religious education or what actually happens when VR is used in real-world settings. Our article combines both and presents two VR applications, Blessed Spaces and VR Pastor, designed for individual experiences in (Protestant) religious education. We deployed the applications in a real-world setting, an out-of-school learning centre. Surprisingly, the applications mediated social engagement between students. Our findings challenge traditional notions of social experiences in VR-supported education and call for more research in real-world settings.

Best Poster Honorable Mention

Evaluation of Shared-Gaze Visualizations for Virtual Assembly Tasks (ID: P1060)

Daniel Alexander Delgado, University of Florida; Jaime Ruiz, University of Florida

Shared-gaze visualizations (SGV) allow collocated collaborators to understand each other's attention and intentions while working jointly in an augmented reality setting. However, prior work has overlooked users' control and privacy over how gaze information is shared between collaborators. In this abstract, we examine two methods for visualizing shared gaze between collaborators: gaze-hover and gaze-trigger. We compare the methods with existing solutions through a paired-user evaluation study in which participants complete a virtual assembly task. Finally, we contribute an understanding of user perceptions, preferences, and design implications of shared-gaze visualizations in augmented reality.

Effects on Size Perception by Changing Dynamic Invisible Body Size (ID: P1061)

Ryota Kondo, Keio University; Maki Sugimoto, Keio University; Hideo Saito, Keio University

When only virtual hands and feet move synchronously with an observer's movements, body ownership of an invisible body between them is induced. However, it is unclear whether body ownership is also induced when the size of the invisible body is changed. In this study, we investigated whether body ownership is induced in large or small invisible bodies and whether size perception changes with the size of the invisible body. The results showed that body ownership was induced even when the size of the invisible body was changed, but size perception did not change.

Can't touch this? Why vibrotactile feedback matters in educational VR (ID: P1062)

Fabian Froehlich, NYU

This study investigates the relationship between vibrotactile feedback and the sense of presence in VR. The inquiry focuses on corrective and reinforcing feedback in STEM learning outcomes using a VR environment called [blinded]. In a randomized within-subject design experiment (N=68), participants were assigned to a vibrotactile and a non-vibrotactile condition. We hypothesized that participants in the vibrotactile condition would report higher sense-of-presence ratings compared to the non-haptic condition. Results indicate that vibrotactile feedback increases the sense of presence and impacts metacognition. Participants who received corrective feedback as a vibrotactile stimulus were more likely to underestimate their actual test performance.

Exploring the efficient and hedonic shopping: A Comparative Study of in-game VR Stores (ID: P1063)

Yang Zhan, Waseda University; Yiming Sun, Waseda University; Tatsuo Nakajima, Waseda University

Shopping in Virtual Reality (VR) has become popular in recent years since it provides immersive experiences. However, there is insufficient understanding of how efficient and hedonic features in VR stores concurrently affect the user experience. This work aimed to address this gap by integrating a 2D user interface store and a 3D diegetic store into a VR game for comparative analysis. We explore the effects of efficient and hedonic factors on users' perception and experiences. Results from a within-subject study (N=14) revealed that the diegetic store surpasses the 2D store in offering hedonic features, providing suggestions for VR store designs.

Pinhole Occlusion: Enhancing Soft-edge Occlusion Using a Dynamic Pinhole Array (ID: P1064)

Xiaodan Hu, NAIST; Yan Zhang, Shanghai Jiao Tong University; Monica Perusquia-Hernandez, Nara Institute of Science and Technology; Yutaro Hirao, Nara Institute of Science and Technology; Hideaki Uchiyama, Nara Institute of Science and Technology; Kiyoshi Kiyokawa, Nara Institute of Science and Technology

Systems with occlusion capabilities have gained interest in augmented reality, vision augmentation, and image processing. To address the challenge of creating a precise yet lightweight occlusion system, we introduce a novel architecture to tackle occlusion blurriness due to defocusing. Our approach, utilizing a dynamic pinhole array on a transmissive spatial light modulator positioned between the eye and the occlusion layer, offers adaptive pinhole patterns, gaze-contingent functionality, and the potential to reduce visual artifacts. Our preliminary result demonstrates that, with the focal plane at 1.8 m, an occlusion placed at 4 cm can be observed sharply through a 4.3 mm aperture.

Comparative Efficacy of 2D and 3D Virtual Reality Games in American Sign Language Learning (ID: P1065)

Jindi Wang, Durham University; Ioannis Ivrissimtzis, Durham University; Zhaoxing Li, University of Southampton; Lei Shi, Newcastle University

Extensive research on sign language has aimed to enhance communication between hearing individuals and the deaf community. With ongoing advancements in virtual reality and gamification, researchers are exploring their application in sign language learning. This study compares the impact of 2D and 3D games on American Sign Language (ASL) learning, using questionnaires to assess user experience. Results show that 3D games enhance engagement, attractiveness, usability, and efficiency, although user performance remains similar in both environments. The findings suggest the potential of 3D game-based approaches to improve ASL learning experiences while also identifying areas for enhancing dependability and clarity in 3D environments.

Cybersickness Lies in the Eye of the Observer - Pupil Diameter as a Potential Indicator of Motion Sickness in Virtual Reality? (ID: P1068)

Katharina Margareta Theresa Pöhlmann, KITE-Toronto Rehabilitation Institute; Aalim Makani, Toronto Metropolitan University; Raheleh Saryazdi, Trent University; Behrang Keshavarz, The KITE Research Institute

Cybersickness is a widespread problem for many users of Virtual Reality systems. Changes in pupil diameter have been suggested as potential physiological correlates of cybersickness, but the relationship remains vague. Here, we further investigated how pupil diameter changes in relation to cybersickness by engaging participants in a passive locomotion through an outer-space environment. Participants who experienced sickness showed greater variance in pupil diameter compared to non-sick participants, whereas average pupil diameter did not differ. Our results suggest that irregular pupillary rhythms may be a potential correlate of cybersickness, which could be used to objectively identify cybersickness.

Never Tell The Trick: Covert Interactive Mixed Reality System for Immersive Theatre (ID: P1069)

Chanwoo Lee, Imperial College London; Kyubeom Shim, Sogang University; Sanggyo Seo, Sogang University; Gwonu Ryu, Department of Art & Technology; Yongsoon Choi, Sogang University

This study explores the integration of Ultra-Wideband (UWB) technology into Mixed Reality (MR) systems for immersive theatre. Addressing the limitations of existing technologies like Microsoft Kinect and HTC Vive, the research focuses on overcoming challenges in robustness to occlusion, tracking volume, and cost efficiency in prop tracking. Utilizing UWB, the immersive MR system expands the scope of performance art by enabling larger tracking areas and more reliable, cheaper multi-prop tracking, and by reducing occlusion issues. Preliminary user tests demonstrate meaningful improvements in immersive experience, promising new possibilities for Extended Reality (XR) theatre and performance art.

Enhancing Virtual Walking in Lying Position: Upright Perception by Changing Self-Avatar's Posture (ID: P1070)

Junya Nakamura, Toyohashi University of Technology; Michiteru Kitazaki, Toyohashi University of Technology

We aimed to decrease the visual-proprioceptive conflict in experiencing virtual walking in a lying posture. An optic flow depicting an avatar standing up was presented before virtual walking, which was induced by radial optic flow and foot vibrations. The walking sensation and telepresence slightly increased with the standing-up optic flow, but the effect did not reach statistical significance. Participants felt that their posture was more closely matched to the walking avatar with the standing-up optic flow compared to the no-animation condition. These results highlight the potential of posture-informed VR design to improve user experiences in situations with visual-proprioceptive conflict.

StreamSpace: A Framework for Window Streaming in Collaborative MR Environments (ID: P1072)

Daniele Giunchi, University College London; Riccardo Bovo, Imperial College London; Nels Numan, University College London; Anthony Steed, University College London

We introduce StreamSpace, a framework for exploring screen-based collaborative MR experiences, focusing on the streaming, integration, and layout of screen content in MR environments. Utilizing Unity and Ubiq, the framework allows users to engage with, reposition, and resize uniquely identified screens within a user-centric virtual space. Building on Ubiq's WebRTC capabilities, our framework enables real-time streaming and transformations through peer-to-peer communication. Key features of StreamSpace include distributed streaming, automated screen layout, and flexible privacy settings for virtual screens. With StreamSpace, we aim to provide a foundational basis for research on screen-based collaborative MR applications.

Can Brain Stimulation Reduce VR motion sickness in Healthy Young Adults During an Immersive Relaxation Application? A Study of tACS (ID: P1074)

Gang Li, University of Glasgow; Ari Billig, SyncVR Medical; Chao Ping Chen, Shanghai Jiao Tong University; Katharina Margareta Theresa Pöhlmann, KITE-Toronto Rehabilitation Institute

This study marks the first exploration of whether non-invasive transcranial alternating current stimulation (tACS) of the left parietal cortex can reduce VR motion sickness (VRMS) induced by a commercial VR relaxation app. Two VRMS conditions were examined in 36 healthy young adults: 1) pure VRMS without a moving platform; 2) VRMS with a side-to-side rotary chair. Participants underwent three counterbalanced tACS protocols in the beta frequency band (sham, treatment, and control). Contrary to our hypothesis, the treatment protocol did not significantly reduce VRMS in either condition. Given the protocol's success in our previous tACS study, we discuss potential factors hindering the replication of our earlier results.

Super-Resolution AR?: Enhanced Image Visibility for AR Imagery? (ID: P1078)

Hyemin Shin, Korea University; Hanseob Kim, Korea University; DongYun Joo, Korea University; Gerard Jounghyun Kim, Korea University

In AR applications, there may be situations in which the visual target is not clearly visible or legible because it is too far away or too small. The unclear part of the imagery can be captured and magnified, but image quality can still suffer from aliasing artifacts due to the limited resolution. This poster proposes applying deep learning-based upscaling to enhance the low-resolution images. We developed a prototype system that can capture an image and upscale and present it to the user. The pilot study demonstrated that upscaled imagery improved image clarity, the ability to find hidden information more quickly, and user experience.
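
One way to prototype such a deep learning-based upscaling step is OpenCV's dnn_superres module (shipped with opencv-contrib-python); the abstract does not specify the authors' model, so the EDSR network, its model file, and the file names below are our assumptions.

```python
import cv2

# Load a pre-trained super-resolution network (EDSR here; the model file
# "EDSR_x4.pb" must be downloaded separately and its use is an assumption).
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")
sr.setModel("edsr", 4)      # algorithm name and upscale factor

low_res = cv2.imread("captured_region.png")   # the cropped AR capture
high_res = sr.upsample(low_res)               # 4x upscaled result
cv2.imwrite("upscaled_region.png", high_res)
```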

u-DFOV: User-Activated Dynamic Field of View Restriction for Managing Cybersickness and Task Performance (ID: P1079)

Yechan Yang, Korea University; Hanseob Kim, Korea University; Gerard Jounghyun Kim, Korea University

Dynamic field-of-view restriction is an effective way of mitigating cybersickness by modulating the amount of visual information during virtual navigation. However, in the presence of an interactive task for which visibility is important, it can impede task performance. This poster examines the efficiency of users manually engaging the dynamic field-of-view restriction to control and mitigate cybersickness while performing interactive tasks. A comparative experiment has shown that the user-activated method reduced cybersickness as much as the automatic method while also achieving significantly higher task performance and usability despite the manual control.

PianoFMS: Real-time Evaluation of Cybersickness by Keyboard Fingering (ID: P1080)

Yechan Yang, Korea University; Hanseob Kim, Korea University; Jungha Kim, Korea University; Gerard Jounghyun Kim, Korea University

Various measurement tools and methods have been developed to assess cybersickness induced in virtual environments, e.g., using a controller, a dial device, or verbal input. In this poster, we propose PianoFMS, a cybersickness measurement tool that allows users to directly input absolute scores using five piano keys without tampering with the visual content. A preliminary study revealed that the levels of cybersickness measured using the dial and PianoFMS were similar, and each exhibited a significant correlation with the conventional post-experiment questionnaire scores. However, PianoFMS exhibited a markedly enhanced level of usability in comparison to the dial.

VR Interface vs Desktop to convey Quality of Outerwear: a comparative study (ID: P1082)

Dario Gentile, Polytechnic University of Bari; Francesco Musolino, Polytechnic University of Bari; Michele Fiorentino, Polytechnic University of Bari; Fabio Vangi, Polytechnic University of Bari

Conveying the quality of garments through digital media as effectively as in a physical store is demanding. This study proposes the design of a VR interface that aims to convey outerwear quality, featuring a 3D model animated through physical simulation. To measure the effectiveness of this interface, we tested it against its desktop counterpart, which featured the product's photo gallery, in a within-subject analysis with 50 users. Results show that the perceived quality of products changes between the experiences. Moreover, in VR, visual content was found to be more significant for quality assessment than written information.

Development of Force Display Using Pneumatic Actuators for Efficient Conveyance of Emotion (ID: P1083)

Nagisa Ito, The University of Tokyo; Hiroyuki Umemura, National Institute of Advanced Industrial Science and Technology; Kunihiro Ogata, National Institute of Advanced Industrial Science and Technology; Kenta Kimura, National Institute of Advanced Industrial Science and Technology

This study investigated the emotional impact of varying force in haptic feedback during gripping. Previous studies have not focused on how grip strength influences emotional expression, despite its known use in conveying feelings. We developed a haptic device to present grip-like haptic presentations and conducted an experiment (N=17) to evaluate the emotions elicited by different haptic presentations. The study found that force, speed, and presentation pattern influence both valence (positive or negative emotion) and arousal (intensity of emotion). The results also indicated that haptic feedback communicates not only anger and disgust but also happiness and surprise.

Mid-air Imaging Based on Truncated Cylindrical Array Plate (ID: P1084)

Junpei Sano, The University of Electro-Communications; Naoya Koizumi, Department of Informatics

This paper presents a mid-air imaging optical system consisting of two-dimensionally arranged truncated cylindrical optical elements. The proposed system aims to reduce the impact of stray light and improve the limited viewing range of mid-air images in micromirror array plates, an existing mid-air imaging optical system. In this study, we used ray tracing to assess mid-air images formed by our proposed optical system. The results show that our method is practical in terms of the invisibility of stray light and the brightness of the image when viewed from an angle.

Multitasking with Graphical Encoding Visualization of Numerical Values in Virtual Reality (ID: P1085)

Amal Hashky, University of Florida; Benjamin Rheault, University of Florida; Ahmed Rageeb Ahsan, University of Florida; Lauren Newman, University of Florida; Eric Ragan, University of Florida

This study evaluates the influence of various visual representations of numerical values on users' ability to multitask in virtual reality. We designed a game-like VR simulation where users had to complete one main task while maintaining the status of other subtasks. Supplemental visualizations showed the risk status of the subtasks depending on the experimental condition, using different visual data encodings: position, brightness, color, and area. We collected preliminary data (n=18) on participant performance during the experiment and subjective ratings afterward. The results showed that the intervention rate significantly differed between the four visual encodings, with the position-based version having the lowest rate.

Alleviating the Uncanny Valley Problem in Facial Model Mapping Using Direct Texture Transfer (ID: P1086)

Kaylee Andrews, Augusta University; Jeffrey L Benson Jr., Augusta University; Jason Orlosky, Augusta University

Though facial models for telepresence have made significant progress in recent years, most model reconstruction techniques still suffer from artifacts or deficiencies that result in the uncanny valley problem when used for real-time communication. In this paper, we propose an optimized approach that makes use of direct texture transfer and reduces the inconsistencies present in many facial modeling algorithms. By mapping the source texture from a 2D image to a rough 3D facial mesh, detailed features are preserved, while still allowing a 3D perspective view of the face. Moreover, we accomplish this in real time with a single, monocular camera.
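
A common building block for this kind of direct texture transfer is projecting the rough 3D mesh's vertices into the source image to obtain per-vertex texture coordinates. The sketch below illustrates that step under a pinhole camera assumption; the intrinsics, example values, and function name are ours, not the authors' pipeline.

```python
import numpy as np

def project_vertices_to_uv(vertices, K, image_size):
    """Project 3D mesh vertices (N, 3) into a 2D image using a pinhole
    intrinsics matrix K, returning normalized UV texture coordinates.
    Assumes the vertices are already in the camera coordinate frame."""
    w, h = image_size
    pixels = (K @ vertices.T).T              # (N, 3) homogeneous pixels
    pixels = pixels[:, :2] / pixels[:, 2:3]  # perspective divide
    uv = pixels / np.array([w, h])           # normalize to [0, 1]
    return np.clip(uv, 0.0, 1.0)

# Toy example: three vertices, simple intrinsics for a 640x480 image.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
verts = np.array([[0.0, 0.0, 1.0], [0.1, 0.05, 1.2], [-0.1, -0.05, 0.9]])
print(project_vertices_to_uv(verts, K, (640, 480)))
```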

Embracing Tradition Through Technology: The Mixed Reality Calligraphy Studying Environment (ID: P1092)

Yi Wang, Beijing Jiaotong University; Ze Gao, Hong Kong University of Science and Technology

This article introduces an innovative Mixed Reality (MR) system designed explicitly for calligraphy learning and practice. In traditional calligraphy study and practice, learners must prepare numerous copies of calligraphy works and character templates, and they often rely on in-person oral guidance from teachers. However, with the progress of digital technology, we envision leveraging MR wearable glasses combined with image capture and analysis techniques to assist calligraphy learners in improving their practice at more flexible times.

Best Poster Honorable Mention

Designing Non-Humanoid Virtual Assistants for Task-Oriented AR Environments (ID: P1094)

Bettina Schlager, Columbia University; Steven Feiner, Columbia University

In task-oriented Augmented Reality (AR), humanoid Embodied Conversational Agents can enhance the feeling of social presence and reduce mental workload. Yet, such agents can also introduce social biases and lead to distractions. This presents a challenge for AR applications that require the user to concentrate mainly on a task environment. To address this, we introduce a non-humanoid virtual assistant designed for minimal visual intrusion in AR. Our approach aims to enhance a user's focus on the tasks they need to perform. We explain our design choices based on previously published guidelines and describe our prototype implemented for an optical see-through headset.

A Virtual Reality Musical Instrument Integrated with a Remote Playing Robot System (ID: P1097)

Zhonghao Zhu, Beijing Institute of Technology; Weizhi Nai, Jilin University; Xin Wang, Jilin University; Yue Liu, Beijing Institute of Technology

Music education and performance are often constrained by geographical limitations. To evaluate the effectiveness of remote music performance, we designed a virtual reality musical instrument system with a remote playing robot. The system comprises a virtual Irish tin whistle playing system and a remote playing robot system. In the virtual playing system, sensors capture the performer's gestures, translating them into corresponding tin whistle playing commands and producing the music. The playing robot is constructed with mechanical hands, and a data transmission module is programmed to facilitate communication between the virtual playing system and the robot, enabling remote simulated performances.

Sensory Feedback in a Serious Gaming Environment and Virtual Reality for Training Upper Limb Amputees (ID: P1098)

Reidner Santos Cavalcante, Federal University of Uberlândia; Edgard Afonso Lamounier Jr., Federal University of Uberlândia; Alcimar Soares, Faculty of Electrical Engineering, Federal University of Uberlândia; Aya Gaballa, Qatar University; John Cabibihan, Qatar University

In this article, the authors present a system based on Immersive Virtual Reality and Serious Games for training upper limb amputees to use prostheses with tactile feedback. Using EMG signal processing, users can control the opening and closing of a virtual prosthesis, just like in real life. Tactile feedback improves the sensation of touch. Tests were carried out with separate groups, with and without sensory feedback, involving amputee and non-amputee volunteers. Users who received haptic feedback demonstrated improved performance compared to those who did not.

Do You XaRaoke? Immersive Realistic Singing Experience with Embodied Singer (ID: P1101)

Germán Calcedo, University of Groningen; Ester Gonzalez-Sosa, Nokia; Diego González Morín, Nokia; Pablo Perez, Nokia; Alvaro Villegas, Nokia

We have developed an immersive karaoke experience that allows users to sing their favorite songs in front of a simulated audience. The karaoke experience is set within an immersive stadium scene with a simulated audience and stage lights that synchronize with the song's beats and the displayed lyrics. Unlike commercial VR karaoke solutions, users can even see their real bodies as video-based self-avatars through the use of a deep learning network, and they sing into a real microphone without using VR controllers. Preliminary results from a subset of 17 participants validate the developed prototype and provide insights for future improvements.

Towards Optimized Cybersickness Prediction for Computationally Constrained Standalone Virtual Reality Devices (ID: P1102)

Md Jahirul Islam, Kennesaw State University; Rifatul Islam, Kennesaw State University

Cybersickness, affecting 60-95% of VR users, poses a challenge for immersive experiences. Research using multimodal data, like pupillometry and heart rate, to predict cybersickness with complex machine-learning models often requires external computing resources (e.g., cloud servers), which is impractical for standalone VR devices (SVRs), as network lag and processing limitations can introduce latency during immersion, exacerbating cybersickness. We propose a novel approach that minimizes the computational cost of cybersickness prediction models through hyper-parameter tuning and reducing training parameters while maintaining prediction accuracy. Our method significantly improves training and inference time, paving the way for optimized prediction frameworks on resource-constrained SVRs.
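
A minimal sketch of the underlying idea, comparing a "full" and a reduced-parameter model on parameter count and inference time; the feature size, hidden widths, and timing harness are our illustrative assumptions, not the authors' models.

```python
import time
import torch
import torch.nn as nn

def make_model(hidden):
    """Cybersickness regressor over a multimodal feature vector
    (e.g. pupillometry + heart-rate features); feature size is assumed."""
    return nn.Sequential(nn.Linear(16, hidden), nn.ReLU(), nn.Linear(hidden, 1))

def inference_ms(model, runs=1000):
    """Average single-sample inference time in milliseconds."""
    x = torch.randn(1, 16)
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(runs):
            model(x)
    return (time.perf_counter() - start) / runs * 1000

for hidden in (256, 32):   # a "full" vs a reduced-parameter variant
    m = make_model(hidden)
    n_params = sum(p.numel() for p in m.parameters())
    print(f"hidden={hidden}: {n_params} params, {inference_ms(m):.3f} ms/inference")
```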

Enhancing Body Ownership of Avian Avatars in Virtual Reality through Multimodal Haptic Feedback (ID: P1103)

Ziqi Wang, University of the Arts London; Ze Gao, Hong Kong University of Science and Technology

This paper uses multimodal haptic feedback delivered through wearable devices to enhance users' body ownership in virtual reality. In this case, the human is transformed into a bird, which belongs to the category of beyond-real transformations in virtual reality interactions. For body transformation, wearable retractable straps help people mimic the movement mechanics of avian bodies; for space transformation, inflatable cushions and blowers simulate the air resistance and lift, oxygen deprivation, and temperature decrease during an avian avatar's take-off. The system aims to establish realistic haptic feedback fidelity to enhance the user's body ownership.

Effect of Ambulatory Conditions and Virtual Locomotion Techniques on Distance Estimation and Motion Sickness of a Navigated VR Environment (ID: P1105)

Aidan Morris, College of the Holy Cross; Michael Vail, College of the Holy Cross; Gabriel Hanna, College of the Holy Cross; Anurag Rimzhim, College of the Holy Cross

We present preliminary results from a 2 × 2 between-subjects experiment. Our two independent variables were ambulatory-restrictive (i.e., without locomotion) postural condition (sitting vs. standing) and virtual navigation technique (steering vs. teleporting). Participants navigated a complex virtual environment comprising outdoor and indoor areas for 10 minutes. We found that teleporting may result in less online distance estimation error than steering. Motion sickness was lower while teleporting than steering, and when sitting than standing. Teleporting also resulted in better system usability than steering. We discuss the results' implications for VR usability.

Best Poster Award

Tremor Stabilization for Sculpting Assistance in Virtual Reality (ID: P1106)

Layla Erb, Augusta University; Jason Orlosky, Augusta University

This paper presents an exploration of assistive technology for virtual reality (VR) art, such as sculpting and ceramics. For many artists, tremors from Parkinsonian diseases can interfere with molding, carving, cutting, and modeling different mediums when creating new sculptures. To help address this, we have developed a system that algorithmically stabilizes tremors to enhance the artistic experience for creators with physical impairments or movement disorders. In addition, we present a real-time sculpting application that allows us to measure differences between sculpting actions and a target object or shape.
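
The abstract does not detail the stabilization algorithm; one common approach is to low-pass filter the tracked controller pose, e.g. with an exponential moving average as sketched below (the class name and parameters are ours, not necessarily the authors' method).

```python
import numpy as np

class PoseSmoother:
    """Exponential moving average over controller positions. A small
    alpha suppresses high-frequency tremor at the cost of some lag."""
    def __init__(self, alpha=0.15):
        self.alpha = alpha
        self.state = None

    def update(self, raw_position):
        raw_position = np.asarray(raw_position, dtype=float)
        if self.state is None:
            self.state = raw_position
        else:
            self.state = self.alpha * raw_position + (1 - self.alpha) * self.state
        return self.state

# Toy example: a steady hand motion with simulated tremor added.
smoother = PoseSmoother(alpha=0.15)
rng = np.random.default_rng(0)
for t in range(5):
    raw = np.array([t * 0.01, 0.0, 0.0]) + rng.normal(0, 0.003, 3)
    print(smoother.update(raw))
```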

Designing Indicators to Show a Robot's Physical Vision Capability (ID: P1108)

Hong Wang, University of South Florida; Tam Do, College of Engineering; Zhao Han, University of South Florida

In human-robot interaction (HRI), studies show humans can mistakenly assume that robots and humans have the same field of view, possessing an inaccurate mental model of a robot. This misperception is problematic during collaborative HRI tasks where robots might be asked to complete impossible tasks involving out-of-view objects. In this initial work, we aim to align humans' mental models of robots by exploring the design of field-of-view indicators in augmented reality (AR). Specifically, we rendered nine such indicators from the head to the task space, and we plan to register them onto the real robot and conduct human-subjects studies.

Guiding Gaze: Comparing Cues for Visual Search (ID: P1112)

Brendan Kelley, Colorado State University; Christopher Wickens, Colorado State University; Benjamin A. Clegg, Colorado State University; Amelia C. Warden, Colorado State University; Francisco Raul Ortega, Colorado State University

Visual search tasks are commonplace in daily life. In cases where the time and accuracy of the search are critical (such as first responder, crisis, or military scenarios), augmented reality (AR) visual cueing is potentially beneficial. Three cue conditions (3D Arrow, 2D Wedge, and Gaze Lines) were tested in a visual search task against a baseline no-cue condition. Results show that any cue is better than none; however, the Gaze Line design produced the lowest search time and greatest accuracy.

Eye direction control and reduction of discomfort by vection in HMD viewing of panoramic images (ID: P1113)

Seitaro Inagaki, Nagoya Institute of Technology; Kenji Funahashi, Nagoya Institute of Technology

We have previously proposed an “eye direction exaggeration method” that facilitates rearward visibility by exaggerating the angle of the eye direction when viewing panoramic images with an HMD in a seated position. However, the exaggeration sometimes increased discomfort such as VR sickness. In this study, we improved the exaggeration method and tried to reduce discomfort by presenting horizontally moving particles to stably induce vection.

Collaborative Motion Modes in Serious Game Using Virtual Co-embodiment: A Pilot Study on Usability and Agency (ID: P1114)

Xiongju Sun, Xi'an Jiaotong-Liverpool University; Xiaoyi Xue, Xi'an Jiaotong-Liverpool University; Yangyang He, Xi'an Jiaotong-Liverpool University; Jingjing Zhang, Xi'an Jiaotong-Liverpool University

With increasing attention to collaboration in Serious Games, particularly in immersive virtual environments, the novel approach of virtual co-embodiment (e.g., one avatar controlled by multiple users) explored in recent studies has the potential to contribute to research on collaborative motion in multiplayer games. This pilot study investigates the usability and agency of collaborative motion modes in a virtual cycling game under a virtual co-embodiment system. Results showed that these new collaborative modes yielded higher perceived usability. Moreover, based on quantitative and qualitative findings, co-embodiment modes in this virtual serious game might enhance users' sense of agency.

AIsop: Exploring Immersive VR Storytelling Leveraging Generative AI (ID: P1116)

Elia Gatti, University College London; Daniele Giunchi, University College London; Nels Numan, University College London; Anthony Steed, University College London

We introduce AIsop, a system that autonomously generates VR storytelling experiences using generative artificial intelligence (AI). AIsop crafts unique stories by leveraging state-of-the-art Large Language Models (LLMs) and employs Text-To-Speech (TTS) technology for narration. Further enriching the experience, a visual representation of the narrative is produced through a pipeline that pairs LLM-generated prompts with diffusion models, rendering visuals for clusters of sentences in the story. Our evaluation encompasses two distinct use cases: the narration of pre-existing content and the generation of entirely new narratives. AIsop highlights the myriad research prospects spanning its technical architecture and user engagement.
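
A skeleton of such an LLM-to-TTS-to-diffusion pipeline might look as follows; all three stage functions are hypothetical stubs standing in for model or service calls, not AIsop's actual API, and the cluster size is our assumption.

```python
# Skeleton of an LLM -> TTS -> diffusion storytelling pipeline. The three
# stage functions below are hypothetical placeholders, not AIsop's API.

def generate_story(prompt: str) -> list[str]:
    """Ask an LLM for a story and split it into sentences (stubbed)."""
    return ["Once upon a time, a fox found a glowing lantern.",
            "It carried the lantern deep into the forest.",
            "By morning, the whole forest shimmered with light."]

def narrate(sentence: str) -> bytes:
    """Send one sentence to a TTS service, returning audio (stubbed)."""
    return b""

def render_visual(sentences: list[str]) -> str:
    """Turn a cluster of sentences into a diffusion-model prompt and
    return the path of the rendered image (stubbed)."""
    return "scene.png"

story = generate_story("Tell a short fable about a fox.")
cluster_size = 2  # sentences sharing one rendered backdrop, per the abstract
for i in range(0, len(story), cluster_size):
    cluster = story[i:i + cluster_size]
    image = render_visual(cluster)
    audio = [narrate(s) for s in cluster]
    # A VR client would now display `image` while playing `audio` in order.
```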

Development and Evaluation of an AR News Interface for Efficient Information Access (ID: P1117)

Masaru Tanaka, NHK Science & Technology Research Laboratories; Hiroyuki Kawakita, Japan Broadcasting Corporation; Takuya Handa, NHK Science & Technology Research Laboratories

In this study, we developed a user interface (UI) for augmented reality (AR) glasses designed to allow users to browse news articles easily and efficiently. The UI uses natural language processing and dimensionality reduction techniques to place articles optimally within a virtual space. We compared this UI with two other AR-based interfaces in a user study with 13 participants, and the results show that the proposed interface reduced the time required to browse articles as well as the cognitive load of the activity.

Progress Observation in Augmented Reality Assembly Tutorials Using Dynamic Hand Gesture Recognition (ID: P1119)

Tania Kaimel, Graz University of Technology; Ana Stanescu, Graz University of Technology; Peter Mohr, Graz University of Technology; Dieter Schmalstieg, Graz University of Technology; Denis Kalkofen, Graz University of Technology

We propose a proof-of-concept augmented reality assembly tutorial application that uses a video-see-through headset to guide the user through assembly instruction steps. It is solely controlled by observing the user's physical interactions with the workpiece. The tutorial progresses automatically, making use of hand gesture classification to estimate the progression to the next instruction. For dynamic hand gesture classification, we integrate a neural network module to classify the user's hand movement in real time. We evaluate the learned model used in our application to provide insights into the performance of implicit gestural interactions.
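
As an illustration of the real-time classification loop described above, here is a minimal sketch; the window length, feature layout (21 hand joints × 3 coordinates), gesture count, network, and confidence threshold are all our assumptions.

```python
import torch
import torch.nn as nn

# A tiny classifier over a sliding window of tracked hand keypoints:
# 30 frames x 21 joints x 3 coordinates (window and classes are assumptions).
WINDOW, FEATURES, N_GESTURES = 30, 21 * 3, 5
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(WINDOW * FEATURES, 128),
    nn.ReLU(),
    nn.Linear(128, N_GESTURES),
)

def classify(window):
    """Classify one (WINDOW, FEATURES) tensor of tracked hand poses."""
    with torch.no_grad():
        logits = model(window.unsqueeze(0))
        prob, idx = torch.softmax(logits, dim=1).max(dim=1)
    return idx.item(), prob.item()

# Simulated tracker output; a tutorial would advance on a confident match.
gesture, confidence = classify(torch.randn(WINDOW, FEATURES))
if confidence > 0.8:
    print(f"advance to next step (gesture {gesture})")
```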

Best Poster Award

How Long Do I Want to Fade Away? The Duration of Fade-To-Black Transitions in Target-Based Discontinuous Travel (Teleportation) (ID: P1121)

Matthias Wölwer, University of Trier; Benjamin Weyers, Trier University; Daniel Zielasko, University of Trier

A fade-to-black animation smooths the transition during teleportation, yet its duration has not been systematically explored even though it is one of the technique's central parameters. To fill this gap, we conducted a small study to determine a preferred duration. We find a short duration of 0.3 s to be the average preference, contrasting with the durations used previously in the literature. This research contributes to the systematic parameterization of discontinuous travel.
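
For readers who want to experiment with the parameter, a minimal sketch of driving a fade-to-black with the preferred 0.3 s duration follows; the frame pacing and linear ramp are assumptions, since the poster only reports the duration.

```python
import time

# Sketch of a 0.3 s fade-to-black before a teleport. In an engine,
# `alpha` would set the opacity of a full-screen black overlay.
FADE_DURATION = 0.3   # seconds, the average preference found in the study

def fade_alpha(elapsed: float, duration: float = FADE_DURATION) -> float:
    """Linear 0 -> 1 fade, clamped once fully black."""
    return min(elapsed / duration, 1.0)

start = time.monotonic()
while (elapsed := time.monotonic() - start) < FADE_DURATION:
    alpha = fade_alpha(elapsed)   # apply to the overlay here
    time.sleep(1 / 90)            # crude ~90 Hz frame pacing
# teleport the user here, then run the same ramp in reverse to fade in
```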

Real-time shader-based shadow and occlusion rendering in AR (ID: P1122)

Agapi Chrysanthakopoulou, University of Patras; Kostantinos Moustakas, University of Patras

We present novel methods designed to elevate the realism of augmented reality (AR) applications focusing specifically on optical see-through devices. Our work integrates shadow rendering methods for multiple light sources and dynamic occlusion culling techniques. By creating custom surface shaders we can manage multiple light sources in real-time, augmenting depth perception and spatial coherence. Furthermore, the dynamic occlusion culling system handles occluded objects, ensuring a more convincing and seamless user experience. Several cases and methods are presented with their results for various lighting and spatial conditions, promising a more enhanced and immersive user experience in various AR domains.

Identifying Markers of Immersion Using Auditory Event-Related EEG Potentials in Virtual Reality with a Novel Protocol for Manipulating Task Difficulty (ID: P1125)

Michael Ramirez, Universidad Escuela Colombiana de Ingeniería Julio Garavito; Hamed Tadayyoni, Ontario Tech University; Heather McCracken, Ontario Tech University; Alvaro Quevedo, Ontario Tech University; Bernadette A. Murphy, Ontario Tech University

Immersion is defined as the degree to which the senses are engaged with the virtual environment. Recent studies have investigated the role of difficulty (challenge immersion) by correlating auditory event-related potentials (ERPs) with task difficulty. This study introduces a novel experimental protocol for studying immersion in which confounding variables other than difficulty are held constant by choosing a VR jigsaw puzzle as the task, whose difficulty is adjusted only by the number of pieces. By introducing two new metrics consistent with those in the literature, this work shows promise for auditory ERPs as markers of immersion.

Challenges in the Production of a Mixed Reality Theater Dance Performance (ID: P1316)

Daniel Neves Coelho, Curvature Games; Eike Langbehn, University of Applied Sciences Hamburg

Virtual and augmented reality systems are becoming more and more popular in artistic performances. Most of these experiences are based on 360-videos or single-user applications. We produced a mixed reality theater dance performance. An audience of 30 people wore XR-headsets during a theater show. The headsets were networked and shared the same tracking space. Three performers (singer, musician, dancer) performed live and their performance was motion captured and transferred in real-time into the virtual environment. In this poster, we report our technical setup and discuss the challenges that this entails.

Evaluation of Augmented Reality for Collaborative Environments (ID: P1310)

John Dallas Cast, Johns Hopkins University; Alejandro Martin-Gomez, Johns Hopkins University; Mathias Unberath, Johns Hopkins University

We present ClimbAR, an open-source, collaborative, real-time, augmented reality application running natively on the Hololens 2 that allows climbers to virtually and collaboratively set climbing holds in their physical environments to better understand and plan their routes. We present the qualitative results of demonstrating ClimbAR at two climbing gyms as well as the quantitative results of analyzing the spatial alignment accuracy of its core synchronization framework, SynchronizAR, through a proto-user study. We find an average rotational alignment error of 12.83 degrees and an average translational alignment error of 3.85 centimeters when using SynchronizAR for collaborative layout tasks involving two users.
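
For context on the reported numbers, the sketch below shows one standard way such alignment errors can be computed from a reference pose and an estimated pose; the example matrices are illustrative, not data from the study.

```python
import numpy as np

# Rotational error as the angle of the relative rotation, translational
# error as the Euclidean distance between positions (meters -> cm).
def rotation_error_deg(R_ref: np.ndarray, R_est: np.ndarray) -> float:
    R_rel = R_ref.T @ R_est
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

def translation_error_cm(t_ref: np.ndarray, t_est: np.ndarray) -> float:
    return float(100.0 * np.linalg.norm(t_ref - t_est))

th = np.radians(10)                                  # toy 10-degree misalignment
R_est = np.array([[np.cos(th), -np.sin(th), 0.0],
                  [np.sin(th),  np.cos(th), 0.0],
                  [0.0,         0.0,        1.0]])
print(rotation_error_deg(np.eye(3), R_est))          # ~10.0 degrees
print(translation_error_cm(np.zeros(3), np.array([0.02, 0.03, 0.0])))  # ~3.6 cm
```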

Diffusion Attack: Leveraging Stable Diffusion for Naturalistic Image Attacking (ID: P1142)

Qianyu Guo, Purdue University; Jiaming Fu, Purdue University; Yawen Lu, Purdue University; Dongming Gan, Polytechnic Institute

In Virtual Reality (VR), adversarial attacks remain a significant security threat. Most deep learning-based methods for physical and digital adversarial attacks focus on enhancing attack performance by crafting adversarial examples that contain large printable distortions that are easy for human observers to identify. However, attackers rarely impose limitations on the naturalness and comfort of the generated attack image's appearance, resulting in noticeable and unnatural attacks. To address this challenge, we propose a framework that incorporates style transfer to craft adversarial inputs of natural styles that exhibit minimal detectability and maximum natural appearance, while maintaining superior attack capabilities.

Plausible and Diverse Human Hand Grasping Motion Generation (ID: P1344)

Xiaoyuan Wang, IRISA; Yang Li, East China Normal University; Changbo Wang, Department of Software Science and Technology; Marc Christie, IRISA

Techniques to grasp targeted objects in realistic and diverse ways find many applications in computer graphics, robotics, and VR. This study generates diverse grasping motions while keeping plausible final grasps for human hands. We first build on a Transformer-based VAE to encode diverse reaching motions into a latent representation denoted GMF, and then train an MLP-based cVAE to learn the grasping affordance of targeted objects. Finally, through learning a denoising process, we condition GMF on affordance to generate grasping motions for the targeted object. We identify areas for improvement in our results and will address them in future work.

DVIO - Distributed Visual Inertial Odometry in a Multi-user Environment (ID: P1212)

Mathieu Lutfallah, ETH Zurich; Juyi Zhang, ETH Zurich; Andreas Kunz, ETH Zurich

Head-mounted displays typically use a visual inertial odometry system, which relies on the headset's camera combined with Inertial Measurement Units. While effective, this setup fails if the camera is obstructed or if the environment lacks features. Traditional recalibration methods like place recognition often fall short in such settings. Addressing this, the paper proposes a novel distributed tracking method that uses the positions of other users. This approach creates a network or "daisy chain" of user locations, enhancing position tracking accuracy. It serves as both an alternative and a supplement to the standard system, ensuring precise location tracking for all users.
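
The core idea lends itself to a toy sketch: when one user's own tracking degrades, their world position can be re-anchored from another user whose tracking is still valid, plus the observed relative offset. The sketch below illustrates that chaining idea under simplifying assumptions (positions only, no orientation or uncertainty); it is not the paper's algorithm.

```python
import numpy as np

# Toy "daisy chain": user A's camera is obstructed, but user B still
# tracks reliably and observes A at a relative offset in the world frame.
def reanchor(pose_b: np.ndarray, offset_b_to_a: np.ndarray) -> np.ndarray:
    """Recover A's world position from B's pose and the observed offset."""
    return pose_b + offset_b_to_a

pose_b = np.array([2.0, 0.0, 1.0])      # B's tracked world position
offset = np.array([0.5, 0.0, -0.2])     # A as observed from B
print(reanchor(pose_b, offset))         # fallback estimate for A
```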

AccompliceVR: Lending Assistance to Immersed Users by Adding a Generic Collaborative Layer (ID: P1216)

Anthony Steed, University College London

The current model of development for virtual reality applications is that a single application is responsible for constructing the complete immersive experience. If the application is collaborative, that application must implement the functionality for sharing. We present AccompliceVR, an overlay application that adds a collaboration layer to applications running on SteamVR. Using the Ubiq software, we can add avatars controlled by remote users as an overlay into any running app used by a local user. Remote users can see video of the local user. We demonstrate this in some common SteamVR games.

Enacting Molecular Interactions in VR: Preliminary relationships between visual navigation and learning outcomes (ID: P1174)

Julianna C Washington, Southern Methodist University; Prajakt Pande, Southern Methodist University; Praveen Ramasamy, Danish Technological Institute; Morten E. Moeller, University College Copenhagen; Biljana Mojsoska, Roskilde University

Twenty-three undergraduates participated in a pre-post quasi-experimental single-group study involving an immersive VR simulation which allowed them to embody (i.e. become) a biomolecule and enact/experience its molecular interactions at a microscopic level using actions and gestures. Based on initial data analyses from this study, the present poster reports preliminary findings on the relationships between the participants' visual navigation, and conceptual as well as affective learning outcomes.

Autonomous avatar for customer service training VR system (ID: P1149)

Takenori Hara, Dai Nippon Printing Co., Ltd.

By immersing trainees in a virtual space and conducting customer service training with customer avatars, physical training facilities are no longer required and customer service training costs can be reduced. Furthermore, since there is no travel time to a training facility, trainees can easily participate even from remote locations. However, the production cost of customer avatars that behave according to training scenarios has emerged as a new barrier to practical deployment. We therefore conducted a preliminary implementation experiment with a customer avatar that acts autonomously by incorporating an LLM, and we report the findings and problems we encountered.

Tuesday Posters

Talk with the authors: 9:45‑10:15, 13:00‑13:30, 15:00‑15:30, 17:00‑17:30, Room: Sorcerer's Apprentice Ballroom

Using immersive video to recall significant musical experiences in elderly population with intellectual disability (ID: P1127)

Pablo Perez, Nokia; Marta Orduna, Nokia Spain; María Nava-Ruiz, Fundación Juan XXIII; Javier Martín-Boix, Fundación Juan XXIII

The Comodia Elderly project studies the use of immersive video in an elderly population with intellectual disability. Participants in the project took part in live concerts, which were recorded with a 180-degree stereoscopic camera. The recordings were then viewed in successive virtual reality video sessions. Preliminary results show a high level of spatial and social presence, assessed both through adapted questionnaires and through external observation of observable signs.

Target Selection with Avatars in Mixed Reality (ID: P1128)

Eric DeMarbre, Carleton University; Robert J Teather, Carleton University

This poster presents a Fitts' law experiment evaluating the effects of using an avatar in mixed and virtual reality selection tasks. The avatar had little to no impact on efficiency and surprisingly lowered overall accuracy in both the 2D plane and depth. However, the avatar also reduced variability in depth selection. Avatar design and specific MR hardware parameters may significantly impact efficiency, especially compared to VR devices.

Georeferenced 360-Degree Photos for Enhancing Navigation and Interaction within Virtual Electric Power Substations (ID: P1130)

Gabriel Fernandes Cyrino, Federal University of Uberlândia; Claudemir José Alves, Federal University of Uberlândia; Gerson Flávio Mendes de Lima, Federal University of Uberlândia; Edgard Afonso Lamounier Jr., Federal University of Uberlândia; Alexandre Cardoso, Federal University of Uberlândia; Ana Marotti, Eletrobras; Ricardo Oliveira, Eletrobras

This work introduces a methodology for improving virtual navigation and interaction within power substations by employing georeferenced 360-degree photos. The objective is to swiftly update the virtual model to the current state of the field since, in most cases, incorporating such amendments is not feasible during the reconstruction of virtual environments. This update speed is critical for such systems. Preliminary results demonstrate successful updating rates, enabling engineers to make rapid decisions. The proposed methodology is expected to improve the efficiency and dependability of step-by-step activities, while also reducing the time and costs associated with system maintenance.

Exploring the Impact of Virtual Human and Symbol-based Guide Cues in Immersive VR on Real-World Navigation Experience (ID: P1131)

Omar Khan, University of Calgary; Anh Nguyen, University of Calgary; Michael Francis, University of Calgary; Kangsoo Kim, University of Calgary

In this paper, we explore how navigation performance and experience in a real-world indoor environment are impacted after learning the route from various guide cues in a replicated immersive virtual environment. A guide system, featuring two distinct audiovisual guide representations—a human agent guide and a symbol-based guide—was developed and evaluated through a preliminary user study. The results do not show significant differences between the two guide conditions, but offer insight into the user-perceived confidence and enjoyment of the real-world navigation task after experiencing the route in immersive virtual reality. We discuss the results and directions of future research.

Effects of Nonverbal Communication of Virtual Agents on Social Pressure and Encouragement in VR (ID: P1132)

Pascal Martinez Pankotsch, University of Würzburg; Sebastian Oberdörfer, University of Würzburg; Marc Erich Latoschik, University of Würzburg

Our study investigated how virtual agents impact users in challenging VR environments, exploring if nonverbal animations affect social pressure, positive encouragement, and trust in 30 female participants. Despite showing signs of pressure and support during the experimental trials, we could not find significant differences in post-exposure measurements of social pressure and encouragement, interpersonal trust, and well-being. While inconclusive, the findings suggest potential, indicating the need for further research with improved animations and a larger sample size for validation.

Flowing with Zen: Exploring Empowering the Dissemination of Intangible Cultural Heritage via Immersive Mixed Reality Spaces (ID: P1133)

Wenchen Guo, Peking University; Guoyu Sun, Communication University of China; Wenbo Zhao, Communication University of China; Zhirui Chen, University of Chinese Academy of Social Sciences; Menghan Shi, Lancaster University; Weiyue Lin, Peking University

Zen is a treasure of the world's intangible cultural heritage (ICH), but nowadays it is facing difficulties in dissemination. This paper presents an immersive experience space Flowing with Zen by integrating HCI technology and MR. Audiences can explore and interact with the four scenarios, as well as meditate, and experience Zen philosophy. The pilot study shows that the MR space not only evokes users' interest and participation but also deepens their empathy or reflections. This innovative way of combining MR, ICH, and UX enhances the accessibility of Zen, and it may open up a new mode of dissemination for ICH.

Lessons Learned in Designing Racially Diverse Androgynous Avatars (ID: P1134)

Camille Isabella Protko, University of Central Florida; Ryan P. McMahan, University of Central Florida; Tiffany D. Do, University of Central Florida

As virtual reality technology evolves, avatars play a crucial role in user representation, yet options for gender-diverse individuals remain limited. The goal of this research was to develop 14 racially diverse androgynous avatars using the design guidelines recommended in prior work. However, a perceptual experiment involving 68 participants revealed unexpected results, as most avatars were perceived as predominantly masculine. Additionally, our results yielded discrepancies in perceived gender across different racial identities, as the same design process resulted in widely varying perceptions. Despite these challenges, our research highlights the importance of continued exploration to improve the creation of inclusive avatars.

Sexual Presence in Virtual Reality: A Psychophysiological Exploration (ID: P1135)

Sara Saint-Pierre Cote, École de technologie supérieure

The increasing use of immersive technologies for sexual purposes raises questions about their capacity to enhance a unique aspect of presence—Sexual Presence (SP). Investigating this phenomenon hinges on our ability to measure it accurately. This paper improves our understanding of SP by identifying potential quantitative electroencephalography variables associated with SP. Twelve heterosexual cisgender males were exposed to virtual scenarios featuring sexual content performed by a Virtual Character (VC). After viewing, participants completed a Sexual Presence questionnaire. A correlation was observed between self-reported SP and the alpha band activity in the frontal and parietal regions.
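
As background on the kind of variable involved, the sketch below estimates alpha-band (8-12 Hz) power from a single EEG channel with Welch's method; the sampling rate and the synthetic signal are assumptions, and the study's actual processing may differ.

```python
import numpy as np
from scipy.signal import welch

fs = 256                                   # assumed sampling rate (Hz)
eeg = np.random.randn(fs * 60)             # stand-in for 1 min of EEG

# Power spectral density, then integrate over the alpha band.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
alpha = (freqs >= 8) & (freqs <= 12)
alpha_power = np.trapz(psd[alpha], freqs[alpha])
print(f"alpha-band power: {alpha_power:.4f}")
```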

NavigAR: Enhancing Localized Space Navigation using Augmented Reality (ID: P1138)

Sahil Deshpande, Indraprastha Institute of Information Technology - Delhi (IIITD); Rahul Ajith, Indraprastha Institute of Information Technology - Delhi (IIITD); Yaksh Patel, Indraprastha Institute of Information Technology - Delhi (IIITD); Anmol Srivastava, Indraprastha Institute of Information Technology Delhi

In this study, we use Augmented Reality (AR) to project and let users interact with a 3D-replicated model of a fenced space to understand how AR can improve navigation and enhance space retention by improving wayfinding via recall. We recreated a fenced space in 3D using Blender. Users can see their current position and, using onscreen buttons, see routes to their destinations. We found that users could associate the 3D models of the buildings with their real-world counterparts. We observed that users were better able to navigate the campus after using the application.

Exploring Radiance Field Content Generation for Virtual Reality (ID: P1139)

Asif Sijan, University of Minnesota Duluth; Peter Willemsen, University of Minnesota Duluth

Generating content for virtual reality takes effort, and expanding how content can be generated has the potential to broaden participation well beyond virtual reality researchers and programmers. This work explores current content acquisition and generation techniques for replicating real-world 3D scenes and objects, to better understand their practical use and application. The techniques explored focus on the recent emergence of radiance field-based methods and their potential to replace previous scene- and object-generation techniques.

Best Poster Award

Brain Dynamics of Balance Loss in Virtual Reality and Real-world Beam Walking (ID: P1141)

Amanda Studnicki, University of Florida; Ahmed Rageeb Ahsan, University of Florida; Eric Ragan, University of Florida; Daniel P. Ferris, University of Florida

Virtual reality (VR) aims to replicate the sensation of a genuine experience through the integration of realism, presence, and embodiment. In this study, we used mobile electroencephalography to quantify differences in anterior cingulate brain activity, an area involved in error monitoring, with and without VR during a challenging balance task to discern the factors contributing to VR's perceptual shortcomings. We found a major delay in the anterior cingulate response to self-generated loss of balance in VR compared to the real world. We also found a robust response in the anterior cingulate when loss of balance was generated by external disturbance.

Investigating the Impact of Virtual Avatars and Owner Gender on Virtual Partner Selection in Avatar-based Interactions (ID: P1144)

Anh Nguyen, University of Calgary; Seoyoung Kang, KAIST; Woontack Woo, KAIST; Kangsoo Kim, University of Calgary

As real-life social interactions transition to virtual environments using Virtual/Augmented Reality (VR/AR) technologies, understanding how gender representation in virtual avatars affects choices and behaviors becomes increasingly relevant. In this paper, we investigate the combined impact of avatars' gender representation and their owner's gender on virtual partner selection in physical, intellectual, social, and romantic scenarios. We introduce our preliminary research plan, outlining both the study design and system development. The research will contribute to our understanding of social dynamics and gender effects in avatar-mediated interactions.

An exploration on modeling haptic reaction time of 3D interactive tasks within virtual environments (ID: P1145)

Stanley Tarng, University of Calgary; Yaoping Hu, University of Calgary

Virtual environments (VEs) utilize haptic cues - e.g., vibrotactile and force feedback - to facilitate user interactions in 3D tasks. Existing studies reported maximum likelihood estimation (MLE) for integrating cues of different modalities; MLE integration, however, does not apply to cues of the same modality. Although proportional likelihood estimation (PLE) was able to integrate same-modality cues for the parameter of task accuracy, its applicability to other task parameters like reaction time remained unclear. This feasibility study thus compared MLE and PLE for integrating haptic cues for reaction time. PLE was found to be applicable for modeling haptic reaction time.

HiLoTEL: Virtual Reality Robot Interface-Based Human-in-the-Loop Task Execution and Learning in the Physical World Through Its Digital Twin (ID: P1148)

Amanuel Ergogo, SANO Centre for Computational Personalized Medicine; Diego Dall'Alba, SANO Centre for Computational Personalized Medicine; Przemysław Korzeniowski, Sano Centre for Computational Medicine

HiLoTEL is a flexible virtual-reality framework for executing and learning tasks in virtual and physical environments. It enables human experts to collaborate with learning agents and intervene when necessary through human-in-the-loop imitation learning. HiLoTEL reduces the need to carry out repetitive tasks, providing the user with an intuitive supervision interface. The system is tested on a pick-and-place task, considering both teleoperated and passthrough interaction modalities. The results show that HiLoTEL improves success rates while maintaining human-level completion time and providing users with 71% hands-free supervision time, thus enabling effective human-robot collaboration.

Design and Analysis of Interaction Method to Adjust Magnification Function Using Microgestures in VR or AR Applications (ID: P1154)

Hao Sun, Beijing Institute of Technology; Shining Ma, Beijing Institute of Technology; Mingwei Hu, Beijing Institute of Technology; Weitao Song, Beijing Institute of Technology; Yue Liu, Beijing Institute of Technology

Among the various interaction modes in VR/AR, microgestures have distinct advantages over controllers, reducing fatigue and improving efficiency. In this paper, we propose a microgesture set that utilizes the number of fingers as the input instruction for magnification adjustment in VR environments. To evaluate the effectiveness of the proposed microgestures, a series of tasks was designed across three scenes. The results revealed a significant improvement in completion time and subjective measures compared to controller performance. These findings offer valuable insights for future gesture design, contributing to the development of more efficient and user-friendly interaction techniques in VR.
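
A sketch of the input mapping follows: the detected finger count selects a magnification level. The specific zoom factors are assumptions for illustration; the poster does not report the exact mapping.

```python
# Hypothetical finger-count -> magnification mapping; the values are
# illustrative, not the levels used in the study.
ZOOM_BY_FINGERS = {1: 1.0, 2: 2.0, 3: 4.0, 4: 8.0}

def magnification(finger_count: int, current: float) -> float:
    """Return the zoom selected by the microgesture, or keep the current one."""
    return ZOOM_BY_FINGERS.get(finger_count, current)

zoom = 1.0
for detected in (2, 3, 5):          # 5 fingers: unrecognized, no change
    zoom = magnification(detected, zoom)
    print(zoom)
```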

Voicing Your Emotion: Integrating Emotion and Identity in Cross-Modal 3D Facial Animations (ID: P1156)

Wenfeng Song, Beijing Information Science and Technology University; Zhenyu Lv, Beijing Information Science and Technology University; Xuan Wang, Beijing Information Science & Technology University; Xia Hou, Beijing Information Science and Technology University

Speech-driven 3D facial animation is widely applied in VR, yet capturing intricate expressiveness remains a challenge. Addressing this deficiency, we present a method tailored to produce 3D facial expressions that resonate deeply with emotion and identity, guided by speech and user-provided prompt words. Our key insight is an emotion-identity fusion mechanism: a pre-trained self-reconstruction codebook, meticulously crafted from a wide array of emotional facial movements, serves as an expressive motion benchmark. Using this foundation, prompt words are seamlessly transformed into vivid facial representations. Our approach is a robust tool for crafting 3D talking avatars, rich in emotional depth and distinctive identity.

Using VR in a Two-Month University Course (ID: P1158)

Jose Joskowicz, Facultad de Ingenieria; Fabricio Gonzalez, Quantik; Inés Urrestarazu, Facultad de Ciencias Económicas

This paper describes the experience of using VR in a two-month university course. The experience took place in the School of Economics, University of the Republic, Uruguay. Fourteen students and the professors attended the “Accounting in integrated management systems” class using Meta Quest 2 VR headsets during seven 1-hour sessions, one session per week. Different aspects were analyzed during the sessions, including audiovisual quality, comfort, sickness, immersion, presence, fatigue, cognitive load, and the usefulness of the technology for academic purposes.

Investigating Situated Learning Theory through an Augmented Reality Mobile Assistant for Everyday STEM Learning (ID: P1161)

Abhishek Mayuresh Kulkarni, University of Florida; Cecelia Albright, University of Florida; Pratik Kamble, University of Florida; Sharon Lynn Chu, University of Florida

Situated learning theory (SLT) suggests learning should take place within authentic contexts to be effective. Research in virtual and augmented reality (AR) tends to use SLT to ground its work, yet the effectiveness of SLT for grounding designs is still questionable. This work investigates situated learning through an AR mobile application called Objectica that seeks to teach STEM (Science, Technology, Engineering, Mathematics) concepts using authentic everyday objects. A between-subjects study compared Objectica with a version not grounded in SLT. Initial results show no significant differences in the effectiveness of the two versions, questioning the impact of SLT in educational app design.

Orienting response is modulated by the human-likeness and realism of the virtual proposer. Exploratory study with physiological measurement. (ID: P1166)

Radoslaw Sterna, Jagiellonian University in Kraków; Joanna Pilarczyk, Institute of Psychology, Faculty of Philosophy, Jagiellonian University in Kraków; Agata Szymańska, Institute of Psychology, Faculty of Philosophy, Jagiellonian University; Jakub Szczugieł, Jagiellonian University in Kraków; Magdalena Igras-Cybulska, AGH UST; Michał Kuniecki, Institute of Psychology, Faculty of Philosophy, Jagiellonian University in Kraków

This poster presents an exploratory analysis investigating the impact of a virtual character's realism and human-likeness on participants' Orienting Response (OR), indexed by heart rate (HR) and skin conductance response (SCR). Fifty-nine participants, wearing an HMD, watched video recordings of virtual characters and humans differing in behavioral realism (movement and gaze) while their physiological responses were measured. Findings highlight a significant influence of behavioral realism on both indices of the Orienting Response (deeper HR deceleration and stronger SCR), which, in the case of SCR, is further modulated by the human-likeness of the virtual proposer.

The validation of a Polish version of Co-Presence and Social Presence Scale. (ID: P1167)

Radoslaw Sterna, Jagiellonian University in Kraków; Natalia Lipp, Sano Center for Computational Personalised Medicine; Agnieszka Strojny, Institute of Applied Psychology Faculty of Management and Social Communication, Jagiellonian University, Kraków, Poland; Michał Kuniecki, Institute of Psychology, Faculty of Philosophy, Jagiellonian University in Kraków; Paweł Strojny, Institute of Applied Psychology, Faculty of Management and Social Communication, Jagiellonian University in Kraków

This study validates the Polish translation of the Co-presence and Social Presence scale, confirming strong reliability and validity through robust internal consistency and favorable confirmatory factor analysis fit indices. Convergent validity results align with expectations, although correlation magnitudes are lower than anticipated. Examining discriminant validity, an unexpected weak positive correlation with eeriness challenges the initial expectation of a negative relationship. Furthermore, moderate to low correlations between co-presence and presence emphasize their distinct yet related nature.

XR Slate: XR Swiping and Layout Adjustment for Text Entry (ID: P1169)

Theodore Okamura, University of North Carolina at Greensboro; Regis Kopper, University of North Carolina at Greensboro

In the expansive realm of virtual reality (VR) and digital interfaces, this project introduces XR Slate, an innovative text entry method that addresses the challenges associated with the prevailing ray-casting approach. XR Slate mitigates ray-casting's difficulty in precisely hitting specific keys by progressively refining the virtual keyboard through intuitive swiping gestures, eliminating unwanted keys and enhancing the accessibility and user-friendliness of text entry.

Evaluating NeRF Fidelity using Virtual Environments (ID: P1170)

Ander J Talley, Mississippi State University; Adam Jones, Mississippi State University

Neural Radiance Fields (NeRF) are a promising form of 3D reconstruction that utilizes sparse 2D imagery to recreate synthetic and real scenes. Since NeRF was first developed, numerous methods and techniques have built upon the original algorithm to improve its accuracy, fidelity, and speed. We aim to evaluate the fidelity of a reconstructed scene, as well as the quality of the reconstruction, using virtual environments.
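
One common fidelity measure in this setting is per-image PSNR against a ground-truth render, sketched below; using PSNR here is our assumption for illustration, since the abstract does not name its metrics.

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray,
         max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two images in [0, max_val]."""
    mse = np.mean((reference - reconstruction) ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(max_val**2 / mse))

gt = np.random.rand(480, 640, 3)                        # stand-in GT render
nerf = np.clip(gt + 0.05 * np.random.randn(*gt.shape), 0, 1)
print(f"PSNR: {psnr(gt, nerf):.2f} dB")
```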

Prototyping Autonomous Vehicle Lane Detection for Snow in VR (ID: P1171)

Tatiana Ortegon Sarmiento, Université du Québec à Trois-Rivières; Alvaro Uribe Quevedo, Ontario Tech University; Sousso Kelouwani, Université du Québec à Trois-Rivières; Patricia Paderewski Rodriguez, Universidad de Granada; Francisco Gutierrez Vela, Universidad de Granada

Autonomous vehicles (AVs) are gaining momentum, and features such as autopilot are becoming widespread among consumer vehicles. AVs track the road in order to drive correctly; when they fail, however, the driver must take over. Lane detection is a must-have feature for AVs and has been investigated extensively. Nevertheless, most approaches have gaps in extreme weather, such as snowy winters. The scarcity of snow datasets compounds this, as most datasets cover only mild scenarios. This paper presents the prototype of a virtual reality digital twin that will allow training lane detection using synthetic data that would otherwise be difficult to recreate in real life.

First Steps in Constructing an AI-Powered Digital Twin Teacher: Harnessing Large Language Models in a Metaverse Classroom (ID: P1175)

Marco Fiore, Polytechnic University of Bari; Michele Gattullo, Polytechnic University of Bari; Marina Mongiello, Polytechnic University of Bari

This study proposes a ground-breaking idea at the intersection of artificial intelligence and virtual education: the creation of an AI-powered digital twin instructor in a Metaverse-based classroom using Large Language Models. We aim to build a teacher avatar capable of dynamic interactions with students, tailored teaching approaches, and contextual responses inside a virtual world. The research aims to address two major issues for both students and teachers: the digital twin can provide feedback to resolve doubts about course content and material, and it can improve student management and allow teachers to answer the trickiest questions raised by students.

Merging Blockchain and Augmented Reality for an Immersive Traceability Platform (ID: P1176)

Marco Fiore, Polytechnic University of Bari; Michele Gattullo, Polytechnic University of Bari; Marina Mongiello, Polytechnic University of Bari; Antonio E. Uva, Polytechnic Institute of Bari

The demand for ethically sourced and safe products has surged, prompting industries to adopt intricate traceability systems. Blockchain technology, renowned for its decentralized and immutable ledger, revolutionizes traceability by ensuring data integrity and transparency in supply chains. However, complexities within supply chains often obfuscate meaningful insights for consumers. This paper explores leveraging Augmented Reality to enhance Blockchain-based traceability systems. By integrating AR, consumers can seamlessly access traceability information through QR codes, presented via optimized 3D models. This immersive approach fosters trust by visually demonstrating product quality. The architecture combines QR codes, Vuforia markers, and Blockchain, ensuring data security and immutability.

Using Machine Learning to Classify EEG Data Collected With or Without Haptic Feedback During a Simulated Drilling Task (ID: P1178)

Michael Ramirez, Universidad Escuela Colombiana de Ingeniería Julio Garavito; Heather McCracken, Ontario Tech University; Brianna L Grant, Ontario Tech University; Alvaro Quevedo, Ontario Tech University; Paul Yielder, Ontario Tech University; Bernadette A. Murphy, Ontario Tech University

Simulation environments (SEs) are becoming important tools that can be leveraged to implement training protocols and educational resources. Electroencephalography (EEG) is used to compare the effects of different types of feedback in SEs, but it can be challenging to know which aspects represent the impact of that feedback on neural processing. In this study, machine learning approaches were applied to differentiate the neural circuitry associated with haptic and non-haptic feedback in a simulated drilling task. The EEG was analyzed based on the extraction and selection of different types of features. Trials with haptic feedback were correctly distinguished from those without haptic feedback.

Gender Identification of VR Users by Machine Learning Tracking Data (ID: P1181)

Qidi J. Wang, University of Central Florida; Ryan P. McMahan, University of Central Florida

Gender identification of virtual reality (VR) users by machine learning tracking data could afford personalized experiences, including mitigation of expected human factors or psychology issues. While much research has recently been conducted to identify individual users given their VR tracking data, little research has investigated gender identification. Furthermore, nearly all prior studies have only considered positions and rotations of all the devices. We present a systematic investigation of different combinations and spatial representations of VR tracked devices for predicting a user's gender. Our results indicate head rotations are integral to gender identification while head positions are surprisingly not as important.
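
The general shape of such a pipeline — summary features from tracking streams fed to an off-the-shelf classifier — can be sketched as below; the synthetic data, feature choice, and classifier are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def summarize(track: np.ndarray) -> np.ndarray:
    """Mean and std per channel of a (frames, channels) tracking stream."""
    return np.concatenate([track.mean(axis=0), track.std(axis=0)])

rng = np.random.default_rng(0)
# 40 fake users: head rotation as per-frame Euler angles (frames, 3).
X = np.stack([summarize(rng.normal(size=(900, 3))) for _ in range(40)])
y = rng.integers(0, 2, size=40)            # binary labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:30], y[:30])
print("held-out accuracy:", clf.score(X[30:], y[30:]))
```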

TacPoint: Influence of Partial Visuo-Tactile Feedback on Sense of Embodiment in Virtual Reality (ID: P1182)

Jingjing Zhang, University of Liverpool; Mengjie Huang, Xi'an Jiaotong-Liverpool University; Yonglin Chen, City University of Macau; Xiaoyi Xue, Xi'an Jiaotong-Liverpool University; Kailun Liao, Xi'an Jiaotong-Liverpool University; Rui Yang, Xi'an Jiaotong-Liverpool University; Jiajia Shi, Kunshan Rehabilitation Hospital

The use of Virtual Reality in medical rehabilitation has broadened to incorporate visual and tactile feedback, yet how patients can use tangible objects to induce better perceptions in VR remains unexplored. We investigated how partial visuo-tactile feedback influences users' embodiment and propose a design idea named TacPoint (a red point mark) that connects the physical and virtual worlds. The results showed that stronger embodiment illusions were induced in the TacPoint session than in sessions without feedback during virtual interactions. TacPoint could help patients attain embodiment easily, increasing the positive VR experience for further rehabilitation training.

Exploring Trustful AI Augmentation with Virtual Evacuation Study (ID: P1183)

Ruohan Wang, Marvin Ridge High School; Aidong Lu, University of North Carolina at Charlotte

This project created a virtual evacuation study to explore people's trust in AI in dangerous situations. Our study adopts a virtual maze-like scenario with automatic AI augmentation and three levels of fire simulation, where participants must make the final decision. We collected performance data and revised the metrics of KUSIV3 for participants to report their trust in AI in post-questionnaires. Our results show that levels of trust in AI are generally positive and can be affected by dangerous conditions. The participant-reported and measured trust levels also demonstrate a loose but consistent correlation.

Testing Virtualized Future Technologies Using Stressful Simulations (ID: P1184)

Nicole Kosoris, Georgia Institute of Technology

This work used Virtual Reality (VR) to prototype and test novel devices for First Responders, investigating using intentionally stressful scenarios to better differentiate between designs. Using a mixed methods approach, researchers designed and built calm and stressful scenarios for a traffic stop. Researchers began with qualitative methods, confirmed via survey, to determine the factors First Responders identified as most stressful in a traffic stop. Visibility was identified as the most critical design consideration. Comparisons were then done between low and high-visibility displays; participants responded significantly more negatively to the low-visibility display when in a simulated high-stress scenario.

VR Reconstruction of Amazonian Geoglyphs Using LiDAR Data (ID: P1185)

Fang Wang, University of Missouri; Albert Zhou, University of Missouri; Robert S. Walker, University of Missouri; Mike Sturm, University of Missouri; Amith Nalmas, University of Missouri; Scottie D Murrell, University of Missouri

This work presents an interactive Virtual Reality (VR) application to enhance education on Amazonian geoglyphs. Many Amazonian archaeological sites are difficult to reach and visualize as they are hidden by dense rainforest foliage. Using a combination of VR technology and LiDAR (Light Detection and Ranging) data, we generate scale models of geoglyphs without the current foliage. We use this model to create a reconstruction of the historical usage of these earthworks, based on current archaeological and anthropological interpretations. The experience is enhanced by using reconstructions of artifacts commonly found in the vicinity of these geoglyphs.

Mind-Body TaoRelax: Relieving Stress through Immersive Virtual Reality Relaxation Training in a Taoist Atmosphere (ID: P1186)

Hao Yan, Southeast University; Ding Ding, Southeast University; Zhuying Li, Southeast University

Virtual reality technology is providing a new method for psychological relaxation training, but most systems focus primarily on meditation as a singular relaxation method. We propose a novel virtual reality relaxation system called TaoRelax. Building upon traditional Taoist culture, the system integrates both meditation and progressive muscle relaxation training. It combines electromyography (EMG) as a means of feedback and assessment during training, providing users with a dual experience of relaxation for both body and mind. Our preliminary results indicate that after training in our system, participants experienced a significant reduction in psychological stress and a notable improvement in meditation abilities.

Light Field Transmission Fusing Image Super-Resolution and Selective Quality Patterns (ID: P1187)

Wei Zheng, Beijing Technology and Business University; Xiaoming Chen, Beijing Technology and Business University; Zongyou Yu, Beijing Technology and Business University; Zeke Zexi Hu, The University of Sydney; Yuk Ying Chung, The University of Sydney

Light field imaging has revolutionized immersive experiences and virtual reality applications. However, the vast amount of data generated by light field imaging presents significant challenges in terms of storage and transmission. In this study, we propose a novel approach for light field storage and transmission. Our approach leverages image super-resolution, which can be utilized to significantly reduce the light field data to be transmitted while maintaining reasonable light field viewing quality. Moreover, we have devised selective transmission patterns that align with human viewing patterns, enhancing the overall efficiency of light field transmission.

A Taxonomy for Guiding XR Prototyping Decisions by the Non-Tech-Savvy (ID: P1188)

Assem Kroma, Carleton University; Robert J Teather, Carleton University

Within XR design, prototypes play an essential role in materializing, communicating, and evaluating concepts. Yet selecting the appropriate prototype proves intuitive for seasoned designers but daunting for beginners and the non-tech-savvy. Addressing the right challenges when deciding on a prototyping method is vital for a constructive outcome, and taxonomies emerge as powerful tools in this context. We forge a holistic taxonomy based on various prototyping taxonomies in the contexts of XR and other disciplines. We further validate it by surveying existing XR prototyping work. Next, we will further validate it through a series of steps common in such research.

Emot Act AR: Tailoring Content through User Emotion and Activity Analysis (ID: P1191)

Somaiieh Rokhsaritalemi, Sejong University; Abolghasem Sadeghi-Niaraki, Sejong University; Soo-Mi Choi, Sejong University

In an era characterized by immersive content delivery, the convergence of augmented reality (AR) and user behavior data stands out as a transformative synergy. This paper introduces "Emot Act AR," an innovative system designed to dynamically tailor virtual content to user data, incorporating user emotions and activities as intrinsic components. Through sensing devices such as cameras and advanced AI models, the system dynamically customizes virtual elements, such as virtual flowers, in both static and dynamic settings. Additionally, the system uses user activity data to activate messaging avatars that encourage exercise.

Enhanced Techniques to Implement Jumping-Over-Down and Jumping-At-Air using Pressure-sensing Shoes in Virtual Reality (ID: P1192)

Liuyang Chen, East China Normal University; Liuyang Chen, NetEase, Inc.; Gaoqi He, East China Normal University; Changbo Wang, Depart of Software Science and Technology

Enhancing the experience of jumping in virtual reality is interesting and useful for various applications. This work includes three parts: Jumping Detection, Jumping-Over-Down, and Jumping-at-Air. A specially designed pair of pressure-sensing shoes is used for jumping detection; this design substantially reduces the cost of detecting jumping behaviors while ensuring high precision. The Jumping-Over-Down motion allows users to control a virtual avatar to perform a realistic downward jump in the VR environment, while the Jumping-at-Air motion represents a type of super-real jumping that resembles the double-jump feature found in many video games: from the hovering position, the user can execute another jump.

Robotic VR Massage System: Physical Care Robot with Out-of-body Experience in Virtual Karesansui (ID: P1193)

Naoya Harada, Aoyama Gakuin University; Michiteru Kitazaki, Toyohashi University of Technology; Ryosuke Tasaki, Aoyama Gakuin University

We have developed a robotic VR system that combines a physical care robot with visual massage therapy in virtual environments. The robotic finger generates a periodic pressing pattern designed using learning data from skilled therapists. The virtual massage movement is seamlessly synchronized with the movements of the physical care robot, controlled in the real world using a highly sensitive force sensor on the robot end-effector. This creates an out-of-body experience in the third-person perspective. Furthermore, the robotic VR massage system provides a healing impression through the design of Karesansui, a Japanese dry garden related to Zen.

Analysis of the association between metaverse-based engineering education and environment, social and governance platforms (ID: P1199)

Jihyung Kim, Pohang University of Science and Technology (POSTECH); Sungeun Park, POSTECH; Ju Hong Park, Pohang University of Science and Technology

This paper proposes a metaverse-based engineering education method intended to serve as a platform that supports sustainable environmental, social, and governance (ESG) goals, in line with social movements to reduce energy consumption and carbon footprint by replacing in-person activities. To verify the proposed method, we measured potential carbon footprint reduction as an environmental factor and immersion as social and governance factors. In the experiment, the average rating across all items was about 4.51, and the five highest-rated items concerned feeling separated from the real world owing to the strong sense of immersion and presence evoked by the environment.

Influence of Personality and Communication Behavior of a Conversational Agent on User Experience and Social Presence in Augmented Reality (ID: P1200)

Katerina Georgieva Koleva, Technische Universität Berlin; Maurizio Vergari, Technische Universität Berlin; Tanja Kojic, Technische Universität Berlin; Sebastian Möller, Technische Universität Berlin; Jan-Niklas Voigt-Antons, University of Applied Sciences Hamm-Lippstadt

A virtual embodiment can benefit conversational agents, but it is unclear how their personalities and non-verbal behavior influence User Experience and Social Presence in Augmented Reality (AR). We asked 30 users to converse with a virtual assistant that gives recommendations about city activities. The participants interacted with two different personalities: Sammy, a cheerful blue mouse, and Olive, a serious green human-like agent. Each was presented with two body languages - happy/friendly and annoyed/unfriendly. We report how agent representation and humor affect aspects of User Experience, and find that body language is significant in the evaluation and perception of the AR agent.

Experimental Immersive 3D Camera Setup for Mobile Phones (ID: P1201)

Christian Aulbach, Julius-Maximilians-Universität Würzburg; Felix Scheuerpflug, Julius-Maximilians-Universität Würzburg; Daniel Pohl, immerVR GmbH; Sebastian von Mammen, University of Würzburg

Immersive media is becoming popular due to increased consumer access to virtual reality (VR) headsets. Mass adoption of 3D and immersive media may require devices as convenient as 2D phone cameras. Smartphones now feature wide-angle cameras with up to a 150° field of view, approaching the 180° field of view of VR180 stereo photos. In this work, we use two smartphones with 123° wide-angle cameras and create a new app to capture immersive media in an equirectangular format for VR consumption. Our work is a proof of concept that deploys current mobile devices to approximate VR180 stereo photos with the smartphones available today.

AR Simulations in VR: The Case for Environmental Awareness (ID: P1202)

Ryleigh Byrne, Duke University; Zhehan Qu, Duke University; Christian Fronk, Duke University; Sangjun Eom, Duke University; Tim Scargill, Duke University; Maria Gorlatova, Duke University

Augmented reality (AR) simulations in virtual reality (VR) offer controlled conditions, fewer hardware limitations, and access to diverse settings. However, simulations in VR must replicate the effects of environmental context in AR. Here, we examine perceived virtual content transparency under varying environment illuminance, and conduct a user study identifying discrepancies between AR and a standard simulation in VR. Results show perceived transparency remains high across all illuminance levels tested in VR, but is reduced at low illuminance in AR. This illustrates the impact of environment properties on the efficacy of AR simulations in VR, and motivates development of context-aware simulations.

VR-Engine: Training Post-Earthquake Damages in Building Structure (ID: P1203)

Shah Rukh Humayoun, San Francisco State University; Jenna Wong, San Francisco State University; Khanh Nguyen, San Francisco State University; Purva Zinjarde, San Francisco State University; Prathiba Ramesh, San Francisco State University

The use of the latest advancements in technology, such as virtual reality (VR), can increase student engagement in classrooms and improve their learning skills in Science, Technology, Engineering, and Mathematics (STEM) fields. It is therefore time to explore in depth how we can use VR more effectively for different real-life applications in STEM fields. To this end, we developed a VR environment, called VR-Engine, to train civil engineering students on post-earthquake damage to building structures. VR-Engine enables students to move around a 3D building structure to learn about and diagnose different kinds of damage in a post-earthquake scenario.

Exploring User Preferences of VR Relaxation Experiences: A Comparative Mixed-Methods Study (ID: P1204)

Magdalena Igras-Cybulska, AGH UST; Radoslaw Sterna, Jagiellonian University in Kraków; Grzegorz Łukawski, Kielce University of Technology; Gabriela Wrońska, AGH University of Science and Technology

This comparative study presents the results of a user experience investigation conducted during the use of four relaxing VR applications, distinguished by style and interactivity (Fujii, Real VR Fishing, TRIPP, Nature Treks VR), tested by 12 participants in a balanced order, within a within-subjects scenario. Both immersion and anxiety were measured. Semi-structured individual interviews were conducted to better understand user preferences. Among the most frequently suggested features, personalization and voluntary microinteractions appeared to be the most preferred.

EndovasculAR: Utility of Mixed Reality to Segment Large Displays in Surgical Settings (ID: P1206)

Griffin J Hurt, University of Pittsburgh; Talha Khan, University of Pittsburgh; Michael Ryan Kann, University of Pittsburgh School of Medicine; Edward Andrews, University of Pittsburgh Medical Center; Jacob Biehl, University of Pittsburgh

Mixed reality (MR) holds potential for transforming endovascular surgery by enhancing information delivery. This advancement could significantly alter surgical interfaces, leading to improved patient outcomes. Our research utilizes MR technology to transform physical monitor displays inside the operating room (OR) into holographic windows. We aim to reduce cognitive load on surgeons by counteracting the split attention effect and enabling ergonomic display layouts. Our research is tackling key design challenges, including hands-free interaction, and occlusion management in densely crowded ORs. We are conducting studies to understand user behavior changes when people consult information on holographic windows compared to conventional displays.

Exploring the Chameleon Effect in Metaverse Environments for Enabling Creative Communication (ID: P1208)

Ai Nakada, Toyota Central R&D Labs., Inc.; Takayoshi Yoshimura, Toyota Central R&D Labs., Inc.; Hiroyuki Sakai, Toyota Central R&D Labs., Inc.; Satoshi Nakagawa, The University of Tokyo; Tomohiro Tanikawa, The University of Tokyo; Yasuo Kuniyoshi, The University of Tokyo

In this study, we explore the `Chameleon Effect' within metaverse environments and its impact on creative communication. This experiment investigates how avatar mimicry in online interactions influences creativity. Participants were divided into Mimic and Original conditions, participating in brainstorming tasks, notably the creative tasks such as the Alternative Uses Task. The study assesses the quality of ideas generated—focusing on Fluency, Flexibility, Elaboration, and Originality—and participant impressions of their avatars. Our findings suggest significant implications for enhancing creativity in virtual settings, providing insights into the future of collaborative innovation in the metaverse.

Visual Attention and Virtual Human Facial Animations in Virtual Reality (VR): An Eye-Tracking Study (ID: P1209)

Shu Wei, University of Oxford; Daniel Freeman, University of Oxford; Aitor Rovira, University of Oxford

We set out to understand the visual attention and perceptions of virtual humans in individuals vulnerable to paranoia in a VR eye-tracking study. In a factorial between-groups design, 122 individuals with elevated paranoia experienced a virtual lift ride with virtual humans that varied in facial animation (static or animated) and expression (neutral or positive). Facial animation (p=0.053) led to a significant reduction in co-presence. Positive expressions (p-adj=0.046) significantly decreased the visual attention to virtual humans when their faces were static. Our results indicate that virtual human programming could influence user perception and visual behaviours for people with mistrust.

Predictive Task Guidance with Artificial Intelligence in Augmented Reality (ID: P1210)

Benjamin Rheault, University of Florida; Shivvrat Arya, The University of Texas at Dallas; Akshay Vyas, University of Texas at Dallas; Rohith Peddi, The University of Texas at Dallas; Brett Benda, University of Florida; Vibhav Gogate, University of Texas at Dallas; Nicholas Ruozzi, University of Texas at Dallas; Yu Xiang, University of Texas at Dallas; Eric Ragan, University of Florida

We put forward an augmented reality (AR) system using artificial intelligence (AI) to guide users through real-world procedures. The system uses the cameras of the headset to recognize objects and their positions in real time. It constructs a model to predict: (i) the task being completed, (ii) the step of the task the user is on, and (iii) whether they have made any errors. By updating this model in real time as the user completes the task, the system automatically updates the instructions and cues provided to the user. This system represents a step towards ubiquitous task guidance via AR.

Design of experiment for evaluation of anomalous environments for product visualization in virtual reality (ID: P1213)

Francesco Musolino, Polytechnic University of Bari; Dario Gentile, Politecnico di Bari; Fabio Vangi, Polytechnic University of Bari; Michele Fiorentino, Polytechnic Institute of Bari

The virtual reality metaverse is a candidate to be the future platform for retail. Nevertheless, knowledge about designing virtual environments and their influence on products is limited in the literature. The common “brick and mortar” virtualization of shops as a realistic digital twin is a reductive approach compared to more creative experiences. This work explores novel paradigms by evaluating “anomalous environments” that deviate from the ordinary conception. Different anomalies are evaluated within 8 scenes promoting an existing line of futuristic-looking tracksuits. An experiment with 42 subjects shows that anomalous environments, especially when aligned with the product's storytelling, can increase product likeability, interest, and perceived valorization.

Best Poster Honorable Mention

Repeat Body-Ownership Illusions in Commodity Virtual Reality (ID: P1215)

Pauline W Cha, Davidson College; Tabitha C. Peck, Davidson College

Virtual self-avatars have been shown to produce the Proteus effect; however, limited work investigates the subjective sense of embodiment using commodity virtual reality systems. In this work, we present results from a pilot experiment in which participants were given a self-avatar in a simple virtual experience while wearing a cardboard head-mounted display. Participants then repeated the experience five days later. Overall, subjective embodiment scores are similar to those reported in experiences using higher-fidelity systems. However, the subjective sense of embodiment significantly decreased from trial one to trial two.

Optimization of a Standalone VR Experience through User Perception: A Journey to the Hidden Spaces of the Burgos Cathedral (ID: P1217)

David Checa, Universidad de Burgos; Mario Alaguero Alaguero, Universidad de Burgos

This project offers a groundbreaking virtual reality (VR) experience that allows visitors to explore inaccessible areas of the Burgos Cathedral. Employing photogrammetry techniques, these spaces have been digitally recreated in high detail. The application, designed for standalone VR devices, has undergone meticulous optimization, achieved by analyzing visitor attention patterns and enhancing resolution in frequently observed areas to ensure a rich and seamless experience. This initiative not only significantly improves the visitor experience but also serves as a model for the dissemination of cultural heritage through VR.

A Calibration Interface for 3D Gaze Depth Disambiguation in Virtual Environments (ID: P1218)

Cameron Boyd, Augusta University; Mia Thompson, Augusta University; Maddie Smith, Augusta University; Jason Orlosky, Augusta University

In Augmented and Virtual Reality, accurate eye tracking is a requirement for many applications. Though state-of-the-art algorithms have enabled sub-degree accuracy for line-of-sight tracking, one remaining problem is that depth tracking, i.e., the calculation of the gaze intersection at various depths, is still inaccurate. In this paper, we propose a 3D calibration method that accounts for gaze depth in addition to line of sight. By taking advantage of 3D calibration points and modeling the relationship between gaze inaccuracy and depth, we show that we can improve depth calculations and better determine the 3D position of gaze intersections in virtual environments.
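
To make the idea concrete, here is a minimal sketch, under invented calibration data, of fitting a depth-correction curve from 3D calibration points; the quadratic model and all values are illustrative assumptions, not the paper's method:

```python
# Minimal sketch: fit a correction from vergence-estimated gaze depth to
# true target depth using calibration targets at known distances.
import numpy as np

# (estimated_depth, true_depth) pairs in meters, gathered while the user
# fixates calibration targets (values are made up for illustration).
est = np.array([0.45, 0.9, 1.6, 2.8, 4.5])
true = np.array([0.50, 1.0, 2.0, 3.0, 5.0])

coeffs = np.polyfit(est, true, deg=2)   # quadratic depth-error model

def corrected_depth(estimated):
    """Map a raw gaze-depth estimate to a calibrated depth."""
    return np.polyval(coeffs, estimated)

print(corrected_depth(1.2))  # calibrated depth for a new gaze sample
```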

Insights from the Participatory Design of a Virtual Reality Spatial Navigation Application for Veterans with a History of TBI (ID: P1219)

Lal 'Lila' Bozgeyikli, University of Arizona; Evren Bozgeyikli, University of Arizona; Ryu Kevin Funakoshiya, University of Arizona; Quan Le, University of Arizona; Kyle Walker, University of Arizona; L. Matthew Law, Phoenix VA Health Care System; Daniel R. Griffiths, Phoenix VA Health Care System; Arne Ekstrom, University of Arizona; Jonathan Lifshitz, Phoenix VA Health Care System

We present a virtual reality (VR) application for potential use in the cognitive rehabilitation of Veterans with a history of traumatic brain injury (TBI). Our application offers exploration of various virtual environments. We discuss notable aspects of our design and implementation, as well as insights from the participatory design process with lived-experience Veteran consultants.

HMD VR Experience Motivates Exercise Intention in Patients with Type 2 Diabetes (ID: P1220)

Sun-Jeong Kim, Hallym University; Dong-Soo Shin, Hallym University; Ohk-Hyun Ryu, Chuncheon Sacred Heart Hospital; Shin-Jeong Kim, Hallym University; Yongsoon Park, Chuncheon Sacred Heart Hospital; Yong-Jun Choi, Hallym University

Patients are expected to manage their chronic conditions, but very few exercise regularly in addition to taking their medications. We therefore developed a 3-minute HMD VR program to educate patients about the importance of exercise. The control group (n=30) received usual care, while the experimental group (n=30) experienced virtual reality and completed the Simulator Sickness Questionnaire (SSQ) and an exercise-intention assessment. No patients experienced VR sickness, and the experimental group had a statistically significantly higher intention to exercise for diabetes management. Older adults found the VR experience exciting and novel. In the diabetic foot experience, a color change alone was enough to motivate them.

The Influence of Extended Reality and Virtual Characters' Embodiment Levels on User Experience in Well-Being Activities (ID: P1221)

Tanja Kojic, Technische Universität Berlin; Maurizio Vergari, Technische Universität Berlin; Marco Podratz, Technische Universität Berlin; Sebastian Möller, Technische Universität Berlin; Jan-Niklas Voigt-Antons, University of Applied Sciences Hamm-Lippstadt

Millions of people have seen their daily habits transform, reducing physical activity and leading to mental health issues. This study explores how virtual characters impact motivation for well-being. Three prototypes with cartoon, robotic, and human-like avatars were tested by 22 participants. Results show that animated virtual avatars, especially with extended reality, boost motivation, enhance comprehension of activities, and heighten presence. Multiple output modalities, like audio and text, with character animations, improve the user experience. Notably, the cartoon-like character evoked positive responses. This research highlights virtual characters' potential to engage individuals in daily well-being activities.

Multimodal Exploration of Terrain with Immersive Interactions (ID: P1223)

Shamima Yasmin, Eastern Washington University

Virtual reality (VR) lets users explore objects in an immersive environment. Users freely interact with objects in a virtual environment (VE), where they can assign physical properties to objects, make them animate or inanimate, and have them respond according to their underlying characteristics. This research explores geological topography using VR. Users studied a terrain model in a multimodal audio-visual VE with two types of audio-enhanced modes: "Continuous" and "Sporadic." Initial findings showed that users preferred multimodal exploration of terrain models over the traditional unimodal, vision-only version.

Career XRcade Framework: Student-driven Collaboration Processes to Develop Learning Environments for Immersive Career Exploration (ID: P1224)

Zahra Khaleghian, Arizona State University; Robert LiKamWa, Arizona State University

The Career XRcade (CXR) Framework is our platform that allows college students to create educational immersive learning experiences that prepare high school students to explore diverse career paths. These virtual journeys through various career paths offer students a medium to understand real-world job environments and demands. We have demonstrated the use of our framework through the creation of two virtual worlds, each showcasing five career paths in the cybersecurity and esports industries. Our proposed user studies will gather data and insights from our stakeholders, aiming to measure the effectiveness of such a framework in academic settings.

Primitive-based Model Reduction for Huge 3D Meshes of Industrial Plants (ID: P1225)

Haruki Hattori, The University of Tokyo; Yutaka Ohtake, The University of Tokyo; Tatsuya Yatagawa, Hitotsubashi University

This study addresses model reduction for huge 3D meshes of industrial plants, which typically consist of millions of triangles. A plant mesh is converted into a set of primitive shapes, including spheres, cuboids, cylinders, cones, tori, and more complicated extruded and quadratic surfaces. To achieve this, we partition the input mesh using geometric features, and each part is converted to a primitive that minimizes the surface deviation. By storing each primitive in a leaf node of a bounding volume hierarchy (BVH), our system achieves more than 90% model reduction and interactive rendering performance for several plant models.
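
The primitive-per-leaf organization can be sketched as follows; the types and the median-split construction are illustrative assumptions rather than the authors' code:

```python
# Minimal sketch: fitted primitives replace mesh parts and are organized
# in a bounding volume hierarchy via a median split on the longest axis.
from dataclasses import dataclass
from typing import List, Optional, Tuple
import numpy as np

@dataclass
class Primitive:
    kind: str                               # "sphere", "cuboid", "cylinder", ...
    aabb: Tuple[np.ndarray, np.ndarray]     # (min corner, max corner)

@dataclass
class BVHNode:
    aabb: Tuple[np.ndarray, np.ndarray]
    left: Optional["BVHNode"] = None
    right: Optional["BVHNode"] = None
    leaf: Optional[Primitive] = None

def build_bvh(prims: List[Primitive]) -> BVHNode:
    """Recursively split the primitive set; each leaf holds one primitive."""
    if len(prims) == 1:
        return BVHNode(aabb=prims[0].aabb, leaf=prims[0])
    centers = np.array([(p.aabb[0] + p.aabb[1]) / 2 for p in prims])
    axis = int(np.argmax(centers.max(0) - centers.min(0)))  # longest axis
    order = np.argsort(centers[:, axis])
    half = len(prims) // 2
    left = build_bvh([prims[i] for i in order[:half]])
    right = build_bvh([prims[i] for i in order[half:]])
    aabb = (np.minimum(left.aabb[0], right.aabb[0]),
            np.maximum(left.aabb[1], right.aabb[1]))
    return BVHNode(aabb=aabb, left=left, right=right)

root = build_bvh([Primitive("sphere", (np.zeros(3), np.ones(3))),
                  Primitive("cuboid", (np.full(3, 2.0), np.full(3, 3.0)))])
```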

PPVR: A Privacy-preserving Approach for User Behaviors in VR (ID: P1227)

Ruoxi Sun, CSIRO's Data61; Hanwen Wang, The University of Adelaide; Minhui Xue, CSIRO's Data61; Hsiang-Ting Chen, University of Adelaide

The increasing popularity of immersive environments creates a pressing challenge of reconciling service providers' desire to collect user behavior data with users' privacy concerns. In this study, we investigate the use of differential privacy (DP) algorithms in VR applications to enable statistical analysis of 3D spatial motion data while protecting against re-identification attacks. We assessed the efficacy of DP on both the original 3D spatial data and its cumulative heat map representation. This study underscores the utility of the DP algorithm in the context of 3D body motion data and highlights its potential for widespread adoption in diverse VR applications.
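
For intuition, the following sketch applies the standard Laplace mechanism to a positional heat map; the bin count, sensitivity assumption, and epsilon are illustrative choices, not necessarily the authors' exact setup:

```python
# Minimal sketch: aggregate 2D motion samples into a heat map, then add
# Laplace noise so that releasing the map satisfies differential privacy.
import numpy as np

def dp_heatmap(positions, bins=32, extent=((-1.0, 1.0), (-1.0, 1.0)),
               epsilon=1.0):
    """Histogram 2D positions and perturb each cell with Laplace noise
    (assumes add/remove sensitivity of 1: one sample changes one cell)."""
    hist, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                bins=bins, range=extent)
    noise = np.random.laplace(scale=1.0 / epsilon, size=hist.shape)
    return np.clip(hist + noise, 0.0, None)   # clamp negative counts

# Example with synthetic head positions (x, z) in meters.
samples = np.random.uniform(-1.0, 1.0, size=(500, 2))
private_map = dp_heatmap(samples, epsilon=0.5)
```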

Developing a VR Meditation Program for College Students to Use on Campus with a Massage Chair (ID: P1228)

Fang Wang, University of Missouri; Scottie D Murrell, University of Missouri; Albert Zhou, University of Missouri; Kanishka Peryala, University of Missouri, Columbia

College students often experience high levels of stress due to academic commitments and associated factors. To explore the potential benefits of VR in stress management for students while they are on campus, we developed a VR meditation application for use with a massage chair. The application utilizes hand-tracking technology and is designed to be used comfortably in conjunction with a massage chair, providing a convenient and accessible way to help students manage stress while on campus. We plan to further develop the application and conduct further studies on the effects of this approach on students' mental health.

TeleAutoHINTS: A Virtual or Augmented Reality (VR/AR) System for Automated Tele-Neurologic Evaluation of Acute Vertigo (ID: P1230)

Haochen Wei, Johns Hopkins University; Justin Bolsey, Johns Hopkins Hospital; Edward Kuwera, Johns Hopkins Hospital; Peter Kazanzides, Johns Hopkins University; Kemar E Green, Johns Hopkins University School of Medicine

Annually, millions of patients in the United States visit the emergency room (ER) due to symptoms of vertigo or dizziness. Rapidly distinguishing between benign causes and more severe conditions requires performing and interpreting a three-step bedside head and eye movement assessment known as HINTS (head impulse, nystagmus, and test of skew). A shortage of experts trained in conducting and interpreting this test exists. Consequently, we developed TeleAutoHINTS, comprising (1) a HoloLens 2-based head and eye tracking platform for automated tele-sensing and (2) an interconnected interface for visualization and analysis. We tested and evaluated TeleAutoHINTS on three subjects.

Hitting a Brick Wall: Passive Haptic Feedback for Control Elements in Virtual Reality (ID: P1286)

Markus Tatzgern, Salzburg University of Applied Sciences; David Fasching, Salzburg University of Applied Sciences; Vivian Gómez, Universidad de los Andes; Sebastian Calvache, Universidad de los Andes; Pablo Figueroa, Universidad de los Andes

The experience and performance of Virtual Reality (VR) simulations and training applications can be improved by utilizing haptic feedback when interacting with typical control elements of machines, such as buttons and sliders. To experience haptic feedback with a consumer-grade head-mounted display (HMD), built-in hand-tracking solutions can be used. However, these solutions may suffer from tracking issues that lead to imprecise interactions. Hence, we designed a controller-based hand input method that allows users to perceive haptic feedback with typical handheld controllers when interacting with virtual control elements, while at the same time benefiting from precise controller-based tracking.

Precueing Compound Tasks in Virtual and Augmented Reality (ID: P1356)

Ahmed Rageeb Ahsan, University of Florida; Andrew W. Tompkins, University of Florida; Eric Ragan, University of Florida; Jaime Ruiz, University of Florida; Ryan P. McMahan, University of Central Florida

This paper addresses the challenge of determining the quantity and composition of visual task cues for effective precueing. AR and VR can seamlessly overlay additional visual information to help people work more efficiently and reduce errors, but the effectiveness of such task assistance depends on the amount of information given. Prior research has found benefits of supplemental visual cues that hint at upcoming steps, though many experiments have predominantly focused on simplified tasks. We present an experiment assessing different visual cues in VR to test a user's ability to harness distinct precueing information.

A Scoping Review on Immersive Technologies in the Oil and Gas Industry (ID: P1330)

Muskan Sarvesh, University of Calgary; Mehdi Marzban, University of Calgary; Minseok Ryan Kang, University of Calgary; Simon S. Park, University of Calgary; Ron Hugo, University of Calgary; Kangsoo Kim, University of Calgary

This paper explores the role of immersive technologies, such as Augmented Reality, Virtual Reality, Mixed Reality, Digital Twins, and Building Information Modeling, in the oil and gas industry. Through a comprehensive scoping review, we address key research questions related to the types of immersive technologies used in the oil and gas pipeline industry and their application scenarios, such as training, testing, maintenance, and monitoring. The review highlights the potential of immersive technologies, identifies research gaps, and suggests future directions.

SIGMA: An Open-Source Interactive System for Mixed-Reality Task Assistance Research - Extended Abstract (ID: P1259)

Dan Bohus, Microsoft Research; Sean Andrist, Microsoft Research; Nick C. W. Saw, Microsoft Research; Ann Paradiso, Microsoft Research; Ishani Chakraborty, Microsoft; Mahdi Rad, Microsoft

We introduce an open-source system called SIGMA (short for "Situated Interactive Guidance, Monitoring, and Assistance") as a platform for conducting research on task-assistive agents in mixed-reality scenarios. The system leverages the sensing and rendering affordances of a head-mounted mixed reality device in conjunction with large language and vision models to guide users step by step through procedural tasks. By open-sourcing the system, we aim to lower the barrier to entry, accelerate research in this space, and chart a path towards community-driven end-to-end evaluation of large language, vision, and multimodal models in the context of real-world interactive applications.

[EXTENDED] Interactive Data Fusion of Neural Radiance Fields for Facility Inspection in Virtual Reality (ID: P1324)

Ke Li, Deutsches Elektronen-Synchrotron (DESY); Susanne Schmidt, Universität Hamburg; Tim Rolff, Universität Hamburg; Reinhard Bacher, Deutsches Elektronen Synchrotron DESY; Wim Leemans, Deutsches Elektronen Synchrotron; Frank Steinicke, Universität Hamburg

Real-world industrial facilities often contain complex equipment that requires extensive inspection and maintenance. We present a virtual reality framework that supports virtual facility inspection and maintenance tasks by using neural radiance field (NeRF) models to replicate, store, and visualize the appearance of complex industrial facilities. To overcome the performance bottleneck of VR NeRF rendering, we present two interactive data fusion techniques that can merge a NeRF model with its corresponding CAD model through contextualized visualizations. Technical benchmarking results and preliminary expert feedback are presented as an initial evaluation of our framework.

Virtual Day: a VR Game for the Evaluation of Prospective Memory in Older Adults (ID: P1300)

Maryam Alimardani, Tilburg University; Guido Morera, Tilburg University; Alexandra Hering, Tilburg University

Prospective memory (PM), the ability to remember to perform tasks in the future, is vital for maintaining functional independence in older adults. This paper introduces “Virtual Day”, a novel Virtual Reality (VR) game designed to assess PM in an immersive and realistic environment. We report a feasibility study where Virtual Day was compared to its clinical counterpart, a digital board game. Results indicate a preference for Virtual Day, noting it as more engaging and enjoyable, despite being more challenging. Follow-up development will build on these findings to use VR as a potential tool for cognitive interventions to promote healthy aging.

Framework for Social XR: Navigating Challenges, Ethics, and Opportunities for Authentic Community Engagement (ID: P1281)

Stella Doukianou, University of the Arts; Vali Lalioti, University of the Arts London

Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) redefine community engagement, posing challenges in user experience, privacy, security, inclusivity, and authentic interactions. This work proposes strategies for authentic community interactions in virtual realms, presenting a framework for ethical engagement through AR/VR/MR. Assessing industry trends, we categorize XR potential and challenges into Users, Technology, and XR experiences, emphasizing research gaps and opportunities for addressing social goals. We review and propose strategies to navigate the ethical considerations, inclusive design practices, and user satisfaction metrics to foster genuine connections across boundaries.

Development of a Touchable VR Planetarium for Both Sighted and Visually Impaired People (ID: P1093)

Kota Suzuki, Kogakuin University; Keita Ushida, Kogakuin University; Qiu Chen, Kogakuin University

The authors proposed and developed a "touchable" VR planetarium. The user wears a VR headset and "touches" the stars with the controllers. Because we cannot touch the stars in reality, this application provides users with additional value and a new planetarium experience. As this feature is valuable for visually impaired people experiencing the starry sky, the authors also implemented support functions to assist them. In a trial, visually impaired users experienced the starry sky with the support functions and rated the VR planetarium as a valuable application.

Best Poster Award

Aging Naturally: Virtual reality nature vs real-world nature's effects on executive functioning and stress recovery in older adults (ID: P1107)

Sara LoTemplio, Colorado State University; Sharde Johnson, Colorado State University; Michaela Rice, Colorado State University; Rachel Masters, Colorado State University; Sara-Ashley Collins, Colorado State University; Joshua Hofecker, Colorado State University; Jordan Rivera, Colorado State University; Dylan Schreiber, Colorado State University; Victoria Interrante, University of Minnesota; Francisco Raul Ortega, Colorado State University; Deana Davalos, Colorado State University

The development of Alzheimer's disease and related dementias (ADRD) is characterized by a decline in executive functioning (EF), and the onset risk of ADRD is increased by stress. Previous work has shown that spending time in nature, or in virtual reality nature, can improve EF and stress recovery in younger adults. Yet little work has examined whether these benefits extend to older adults. We examine how spending time in either nature or an equivalently designed virtual reality natural environment affects EF and stress in older adults compared to a lab control condition.

Multiple Level of Details Neural Implicit Surface Representation for Unconstrained Viewpoint Rendering (ID: P1340)

Zhihua Chen, East China University of Science & Technology; Yuhang Li, East China University of Science & Technology; Lei Dai, East China University of Science & Technology; Ping Li, The Hong Kong Polytechnic University; Lei Zhu, The Hong Kong University of Science and Technology (Guangzhou); Bin Sheng, Shanghai Jiao Tong University

To mitigate artifacts and performance degradation in neural implicit representation visualization for unconstrained viewpoint rendering, we propose a feature voxel grid-based neural representation architecture. This approach flexibly encodes the implicit surface with multiple levels of detail, facilitating high-quality rendering with dynamic switching between detail levels. Additionally, we implement a range-limited strategy to concentrate on sampling valid areas while excluding undefined areas. Our results in unconstrained viewpoint scenarios demonstrate the effectiveness of our method. This work extends the capabilities of neural implicit representations, broadening their potential applications beyond previously defined limitations.
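
A minimal sketch of the multi-level feature grid idea, with invented grid sizes and feature dimensions, shows how per-level features can be interpolated and concatenated, and how dropping fine levels switches detail; this illustrates the general mechanism, not the paper's architecture:

```python
# Minimal sketch: multi-resolution feature voxel grids queried by
# trilinear interpolation; truncating fine levels lowers the detail level.
import numpy as np

levels = [np.random.randn(8, 8, 8, 4),      # coarse feature grid
          np.random.randn(32, 32, 32, 4)]   # fine feature grid

def sample(grid, p):
    """Trilinearly interpolate a feature grid at point p in [0,1]^3."""
    n = grid.shape[0] - 1
    x = np.clip(np.asarray(p) * n, 0, n - 1e-6)
    i, f = x.astype(int), x - x.astype(int)
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                out = out + w * grid[i[0] + dx, i[1] + dy, i[2] + dz]
    return out

def features(p, max_level=None):
    """Concatenate per-level features; fewer levels means less detail."""
    return np.concatenate([sample(g, p) for g in levels[:max_level]])

coarse_only = features((0.3, 0.5, 0.7), max_level=1)
full_detail = features((0.3, 0.5, 0.7))
```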

Study of Cross-Reality Interfaces with Avatar Redirection to Improve Desktop Presentations To Headset-Immersed Audiences (ID: P1357)

Jason Wolfgang Woodworth, University of Louisiana at Lafayette; Christoph W Borst, University of Louisiana at Lafayette

We study cross-reality desktop interfaces that allow a desktop user, with a 3D avatar, to present lecture-style content to a headset-immersed audience. Recent work on VR meeting spaces for presentations suggests that some presenters prefer to use desktop interfaces for comfort or other factors. We designed and compared interfaces to understand design tradeoffs. A key result is that the SDesk interface, offering an audience perspective and eye-tracked gaze gestures, is preferred over a standard VR-simulating desktop interface.

Wednesday Posters

Talk with the authors: 9:45‑10:15, 13:00‑13:30, 15:00‑15:30, 17:00‑17:30, Room: Sorcerer's Apprentice Ballroom

Virtual Streamer with Conversational and Tactile Interaction (ID: P1231)

Vaishnavi Josyula, The University of Texas at Dallas; Sowresh MecheriSenthil, The University of Texas at Dallas; Abbas Khawaja, The University of Texas at Dallas; Jose M Garcia, The University of Texas at Dallas; Ayush Bhardwaj, The University of Texas at Dallas; Ashish Pratap, The University of Texas at Dallas; Jin Ryong Kim, The University of Texas at Dallas

This paper introduces an approach to an interactive virtual streamer that provides conversational and tactile interactions, offering insights into levels of immersion and personalization for the user. User interaction is categorized into various types, such as the spatial nature of the virtual streamer's stream, along with conversational and tactile interactions with the streamer. We deploy the virtual streamer in a VR environment and evaluate the efficiency of user interactions with the virtual streamer via a scenario-based assessment.

Affordance Maintenance-based 3D Scene Synthesis for Immersive Mixed Reality (ID: P1233)

Haiyan Jiang, Beijing Institute of Technology; Dongdong Weng, Beijing Institute of Technology; Xiaonuo Dongye, Beijing Institute of Technology

With the explosion of head-mounted displays onto the commercial market, immersive scenes are being used in many applications. As these applications are usually used indoors, users can suffer from safety and immersion issues when immersed in a virtual scene. Therefore, we propose a 3D synthesis method that synthesizes the 3D scene according to the affordances of physical objects and the intention of the user. The results show that virtual objects in synthetic scenes have the same affordances as physical objects in the real world and can provide an interaction medium and feedback to the situated user while avoiding collisions.

An AR-Based Multi-User Learning Environment for Anatomy Seminars (ID: P1236)

Danny Schott, Otto-von-Guericke University; Jonas Mandel, Otto-von-Guericke-Universität Magdeburg; Florian Heinrich, Otto-von-Guericke-Universität Magdeburg; Lovis Schwenderling, Otto-von-Guericke-University; Matthias Kunz, Clinic for Cardiology and Angiology; Rüdiger Braun-Dullaeus, Clinic for Cardiology and Angiology; Christian Hansen, Faculty of Computer Science

Understanding the intricate and rapid changes in shape during embryonic formation is vital for medical students. Using the example of embryonic human heart development, we introduce an AR-based multi-user approach to enhance understanding and foster a participatory learning environment. Through a user-centered approach, we created a prototype accommodating two player roles and enabling multi-modal inputs to encourage dynamic group discussions. We invited four anatomy experts to evaluate three system configurations in an interdisciplinary workshop to assess integration feasibility into anatomy seminars. The gathered data and qualitative feedback indicate the potential of our collaborative concept for integration into the medical curriculum.

Evaluating User Perception Toward Pose-Changed Avatar in Remote Dissimilar Spaces (ID: P1237)

Taehei Kim, KAIST; Hyeshim Kim, KAIST; Jeongmi Lee, KAIST; Sung-Hee Lee, Korea Advanced Institute of Science and Technology

In dissimilar telepresence environments, where spaces differ in size and furniture layout, researchers have proposed methods that adjust avatar movement to deviate from the user's while still adapting to the physical environment. From this perspective, this poster investigates users' perceptions of and preferences toward their adjusted avatars. In particular, we propose an experiment focusing on standing and sitting, the most common poses when talking to others. We describe the system setup and initial results that show how users perceive their avatars when their pose category is changed to avoid physical conflicts.

Working with XR in Public: Effects on Users and Bystanders (ID: P1241)

Verena Biener, Coburg University of Applied Sciences and Arts; Snehanjali Kalamkar, Coburg University of Applied Sciences and Arts; John J Dudley, University of Cambridge; Jinghui Hu, University of Cambridge; Per Ola Kristensson, University of Cambridge; Jörg Müller, University of Bayreuth; Jens Grubert, Coburg University

Recent commercial virtual and augmented reality (VR/AR) devices have been promoted as tools for knowledge work. Their ability to display virtual screens can be especially helpful in mobile contexts, in which users might not have access to multiple screens, for example, when working on a train. As using such devices in public is still uncommon, this motivates our study to better understand the implications of using AR and VR for work in public, on users and bystanders. Therefore, we present initial results of a study in a university cafeteria comparing three different systems: a laptop with a single screen; a laptop combined with an optical see-through AR headset; and a laptop combined with an immersive VR headset.

360° Storytelling for Immersive Teaching Online and in the Classroom for Secondary and Tertiary Education (ID: P1242)

Lloyd Spencer Davis, University of Otago; Wiebke Finkler, University of Otago; Wei Hong Lo, University of Otago; Mary Rabbidge, Otago Boys Highschool; Lei Zhu, University of Otago; Stefanie Zollmann, University of Otago

In this work, we present the findings of a study designed to investigate the impact of 360° videos on student engagement and learning outcomes in both secondary and tertiary educational contexts. The research focused on two distinct scenarios: Teaching Science to Secondary School Students and Hybrid University Courses integrating On-campus and Distant Students. The study employed a multifaceted approach, combining video production, test protocols, and evaluations to assess the efficacy of 360° videos.

Side-By-Side Might Win: Occlusion Negatively Affects The Performance Of Augmented Reality Task Instructions (ID: P1243)

Robin Wiethüchter, ETH Zürich; Saikishore Kalloori, ETH Zürich; David Lindlbauer, Carnegie Mellon University

We compare different representations for Augmented Reality instructions for a combined pick & place and drawing task. Participants completed the task with instructions presented as 3D AR overlays, as 3D AR offset (side-by-side), or as 2D panels. Results indicate that participants preferred the 3D offset instructions, especially compared to 3D overlays. Our results stand in contrast to prior work where 3D overlays were preferred. Our work points to the need for the community to better define benchmarks and standardized tests to create guidelines for when to use which type of AR representation.

Gaze Pattern Genius: Gaze-Driven VR Interaction using Unsupervised Domain Adaption (ID: P1244)

Kexin Wang, Beihang University; Yang Gao, Beihang University; Wenfeng Song, Beijing Information Science and Technology University; Yuecheng Li, Beihang University; Aimin Hao, Beihang University

This poster advocates shifting VR interaction to gaze-driven interaction, a more intuitive alternative to traditional controls like VR controllers or gestures. Our focus is on enhancing neural network recognition accuracy, especially with limited user-specific gaze data. We introduce a novel framework for capturing gaze gesture patterns and propose a template dataset concept to boost neural training. Our domain adaptation model, blending template depth with sparse authentic user data, consistently excels in recognizing gaze patterns across diverse users. Empirical user studies confirm that gaze-driven interactions not only elevate VR experiences but also redefine immersive VR control dynamics.

AR-based Dynamic Sandbox for Hydraulic Earth-rock Dam Break Simulation (ID: P1246)

Huiwen Liu, China Institute of Water Resources and Hydropower Research; Yuan Wang, Beihang University; Dawei Zhang, China Institute of Water Resources and Hydropower Research

VR/AR-based hydraulic scenes hold significant value for visualizing geological disasters. However, the river dynamics models prevalent in computational fluid dynamics (CFD) and hydraulics rely largely on pure numerical simulation and lack intuitive visual representation. In contrast, current AR-based physical simulation methods often fail to incorporate hydraulic parameters adequately, falling short of the accuracy needed for real hydraulic models. This paper proposes integrating a precise numerical river dynamics model into a visual fluid-solid interaction simulation, achieving a balance between a realistic visual simulation of dam breaks and numerical accuracy aligned with river dynamics models, while maintaining high computational efficiency.

Asymmetric VR Chores: A Social Presence Preliminary Study (ID: P1249)

Stephen Saunders, Ontario Tech University; Alvaro Quevedo, Ontario Tech University; Winnie Sun, Ontario Tech University; Sheri Anne Horsburgh, Ontario Tech University

VR applications for health care that focus on daily activities, targeting either physical or cognitive interventions, continue to gain momentum as a result of consumer-level devices. However, interactions have mainly focused on single-user experiences, with little research on VR and asymmetric gameplay, in which players play the game using different media. This article presents a preliminary study on how adding asymmetry to a house-chores VR game positively influences social presence when playing the game. Our motivation is driven by research indicating that asymmetry in games increases communication and social connection.

Enhancing the Immersive Experience of the Yijing in Claborate-Style Painting through Virtual Reality (ID: P1250)

Yuting Cheng, Xi'an Jiaotong-Liverpool University; Mengjie Huang, Xi'an Jiaotong-Liverpool University; Jiashu Yang, Xi'an Jiaotong-Liverpool University; Wenxin Sun, Xi'an Jiaotong-Liverpool University

Chinese claborate-style painting is a traditional painting style that depicts objects with delicate brushstrokes of color. Virtual reality (VR) can create an immersive virtual environment, a representation that brings a new direction to the exhibition of paintings. In order to convey the yijing (artistic conception) of claborate-style painting in VR, this paper investigates how to enhance the user's feeling of yijing in VR and sets up two paths in VR to provide the user with a more immersive experience, offering a new reference for future VR design practices for Chinese paintings.

A Bio-Inspired Computational Model for the 3D Motion Estimation (ID: P1251)

Di Zhang, Communication University of China; Ping Lu, Beijing University of Posts and Telecommunications; Qi Wu, Communication University of China; Long Ye, Communication University of China

Stereoscopic motion perception is pivotal for human-world interaction and has sparked advancements in artificial vision technologies. We introduce the Stereo Motion Perception Model (SMPM), a bio-inspired computational model designed to emulate the process of stereoscopic motion perception. The SMPM extracts temporal, spatial, and shape features from stereoscopic video stimuli, enabling accurate motion perception. The performance of the SMPM is tested and compared with human perception. The results indicate that the SMPM effectively estimates target motion in both simple and complex scenes, showing high consistency with human stereo motion perception. Future implications of the bio-inspired model are discussed.

Controlling Experience for Interaction Techniques in Virtual Reality Exergames (ID: P1252)

Wenxin Sun, Xi'an Jiaotong-Liverpool University; Mengjie Huang, Xi'an Jiaotong-Liverpool University; Chenxin Wu, Xi'an Jiaotong-Liverpool University; Wendi Wang, Xi'an Jiaotong-Liverpool University; Rui Yang, Xi'an Jiaotong-Liverpool University; Jiajia Shi, Kunshan Rehabilitation Hospital

Virtual reality (VR) has become an essential platform for upper limb rehabilitation by introducing VR interaction techniques (e.g., hand gestures, controllers, and tangible tools) into traditional rehabilitation apparatus. The controlling experience plays a significant role in upper limb exergames and is highly relevant to rehabilitation outcomes. This study focused on users' controlling experience when integrating VR interaction techniques with traditional rehabilitation tables (commonly employed in clinics and homes). The study revealed that tangible tools and controllers contributed to a better controlling experience and are recommended as suitable options for upper limb exergames when building VR rehabilitation applications.

Visual Complexity in VR: Implications for Cognitive Load (ID: P1253)

Maximilian Rettinger, Technical University of Munich; Xuefei Jiang, Technical University of Munich; Jiaxin Yang, Technical University of Munich; Gerhard Rigoll, Technical University of Munich

Given the potential of Virtual Reality for education and training, it is essential to implement it as effectively as possible. For this reason, we investigate the impact of virtual environments with different complexity levels on the cognitive load. We compare the methods 1) No-Room, 2) Empty-Room, 3) Filled-Room, and 4) Animated-Room. In a within-subjects study (n=40), participants completed a letter recall and a mental rotation task to assess the cognitive load. The results show significant differences in the letter recall task, indicating that the cognitive load is also lower in the rooms with less complexity.

Cholesteric Liquid Crystal Based Reconfigurable Optical Combiner for Head-Mounted Display Application (ID: P1256)

Yuanjie Xia, University of Glasgow; Haobo Li, University of Glasgow; Marija Vaskeviciute, University of Glasgow; Daniele Faccio, University of Glasgow; Affar Karimullah, University of Glasgow; Hadi Heidari, University of Glasgow; Rami Ghannam, University of Glasgow

Recent advancements in Head-Mounted Displays (HMDs) showcase their potential to replace traditional displays. The upcoming generation of HMDs necessitates a seamless transition between Augmented Reality (AR) and Virtual Reality (VR) modes. We introduce an innovative Cholesteric Liquid Crystal (CLC) based optical combiner for HMDs. This approach enables the display device to switch between AR, VR and transparent modes via temperature modulation. We demonstrated the reconfigurability of the tunable optical combiners using 532 nm laser sources. Our findings demonstrate that CLC-based optical combiners can switch between the three modes at corresponding temperatures, paving the way for versatile applications in future HMDs.

Multi-3D-Models Registration-Based Augmented Reality Instructions for Assembly (ID: P1257)

Seda Tuzun Canadinc, Webster University; Wei Yan, Texas A&M University

The BRICKxAR (Multi 3D Models / M3D) prototype offers markerless, in-situ, step-by-step, and highly accurate Augmented Reality (AR) assembly instructions for large or small part assembly. The prototype employs multiple assembly phases of deep-learning-trained, 3D-model-based AR registration coupled with a step count. This ensures object recognition and tracking persist while the model updates at each step, even if a part's location is not visible to the AR camera. The use of phases simplifies complex assembly instructions. The testing and heuristic evaluation findings indicate that BRICKxAR (M3D) provides robust instructions for assembly, with promising applicability at different scales and in different scenarios.

Semantic Web-Enabled Intelligent VR Tutoring System for Engineering Education: A Theoretical Framework (ID: P1258)

Victor Häfner, Karlsruhe Institute of Technology; Tengyu Li, Karlsruhe Institute of Technology; Felix Longge Michels, Karlsruhe Institute of Technology; Polina Häfner, Karlsruhe Institute of Technology; Haoran Yu, Karlsruhe Institute of Technology; Jivka Ovtcharova, Karlsruhe Institute of Technology

An unmet demand exists for personalized virtual reality (VR) training that adapts to users' abilities and performance. The primary issue lies in the high cost of setting up and maintaining virtual environments, resulting in limited flexibility for training content updates. To address this, we incorporate Semantic Web technology into VR training and explore the integration of the intelligent tutoring system. Finally, we propose a framework for a Semantic Web-Enabled Intelligent VR Tutoring System, especially for engineering education.

Virtual Reality (VR)-Based Training Tool for Positive Reinforcement and Communication in Autistic Children (ID: P1260)

Ashayla Williams, Purdue University Northwest; Magesh Chandramouli, Purdue University Northwest

This paper integrates a novel virtual reality (VR) framework with eye-tracking technology to enhance social communication skills in children diagnosed with Autism Spectrum Disorder (ASD). ASD often leads to a 'nonsocial bias,' hindering the perception of social cues. Leveraging social motivation theory, the framework employs positive reinforcement, integrating restricted interest objects (RIs) to enhance social attention and interaction. This approach signifies a significant advancement in using VR to address ASD-related social communication challenges. We believe the results from this research have the potential to enhance the overall quality of life of autistic children by helping them acquire essential social skills.

Therapet: Designing an Interactive XR Therapeutic Pet (ID: P1261)

Jordan Glendinning, Abertay University; Haocheng Yang, Abertay University; Andrew Hogue, Ontario Tech University; Patrick C. K. Hung, Ontario Tech University; Ruth Falconer, Abertay University

As Extended Reality (XR) technologies continue to mature, they offer ever more diverse, complex, and impactful experiences. We present "Therapet", an augmented reality (AR) application designed for the HoloLens 2. Therapet presents an interactive therapeutic pet avatar, using volumetric video recordings and traditionally animated models, with which users can interact by feeding, playing fetch, and petting. We conducted a pilot study to investigate usability and user preferences, revealing positive user experiences and insights into the use of volumetric video avatars. Furthermore, we discuss lessons learned for creating a successful AR therapeutic pet application.

A Methodology for Predicting User Curvature Perception of 3D Objects (ID: P1262)

Nayan N Chawla, University of Central Florida; Joseph LaViola, University of Central Florida; Ryan P. McMahan, University of Central Florida

One's perception of an object's curvature affects its perceived appeal, realism, and even distance. However, studies indicate that curvature perception often differs from an object's mathematically defined curvature, and no alternative for predicting curvature perception exists. We present two pairwise-comparison studies in which participants selected the objects perceived to have more curvature. The results indicate that some objects are perceived to have significantly more curvature than others, yielding distinct perceptual clusters. We then demonstrate that traditional curvature measures poorly predict curvature perception and present a novel methodology that our results show to be a capable indicator of how a 3D object's curvature will be perceived.

The Impact of Virtual Agent Locomotion and Emotional Postures on Social Perception and Brain Activity in Mixed Reality (ID: P1263)

Zhuang Chang, The University of Auckland; Jiashuo Cao, The University of Auckland; Kunal Gupta, The University of Auckland; Huidong Bai, The University of Auckland; Mark Billinghurst, Auckland Bioengineering Institute

When people interact with virtual characters, the character's non-verbal cues can change the user's perception of and engagement with it. In this work, we investigate the impact of virtual agent locomotion and emotional posture on user engagement, as measured by subjective questionnaires and EEG signals. We found the EEG-based engagement index was significantly lower in the postural agent condition, which was opposite to the questionnaire-based engagement result. From these results, we differentiate social engagement from task engagement and discuss their influence on overall perceived engagement.

[EXTENDED] Immersive Rover Control and Obstacle Detection based on Extended Reality and Artificial Intelligence (ID: P1264)

Sofia Coloma, University of Luxembourg; Alexandre Frantz, University of Luxembourg; Dave van der Meer, University of Luxembourg; Ernest Skrzypczyk, SnT; Andrej Orsula, University of Luxembourg; Miguel Olivares-Mendez, SnT - University of Luxembourg

Lunar exploration has become a key focus, driving scientific and technological advances. Ongoing missions are deploying rovers to the Moon's surface, targeting the far side and the south pole. However, these terrains pose challenges, emphasizing the need for precise obstacle and resource detection to avoid mission risks. This work proposes a novel system that integrates extended reality and artificial intelligence to teleoperate lunar rovers. It is capable of autonomously detecting rocks and recreating an immersive 3D virtual environment of the robot's location. The system has been validated in a lunar laboratory to observe its advantages over traditional 2D-based teleoperation approaches.

Effects of Ankle Tendon Electrical Stimulation on Detection Threshold of Redirected Walking (ID: P1265)

Takashi Ota, The University of Tokyo; Keigo Matsumoto, The University of Tokyo; Kazusa Aoyama, Gunma University; Tomohiro Amemiya, The University of Tokyo; Takuji Narumi, The University of Tokyo; Hideaki Kuzuoka, The University of Tokyo

Redirected walking (RDW) is a method for exploring virtual spaces larger than physical spaces while preserving a natural walking sensation. We proposed a locomotion method that applies ankle tendon electrical stimulation (TES), which is known to induce a body sway and tilt sensation, to RDW. We evaluated how TES affects the detection threshold (DT), which is the maximum gain at which visual manipulation is not noticed. The results indicated that the DT was expanded when TES was applied to induce the body tilt sensation in the same direction as the RDW's visual manipulation.
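
Detection thresholds of this kind are often estimated with adaptive staircase procedures; the sketch below is a generic illustration with a simulated observer (the threshold, slope, and step sizes are invented), not the authors' psychophysical protocol:

```python
# Minimal sketch: an up-down staircase that raises the rotation gain while
# the manipulation goes unnoticed and lowers it when noticed.
import math
import random

def present_trial(gain, true_threshold=1.3, slope=10.0):
    """Simulated observer (stand-in for a real RDW trial): the chance of
    noticing the manipulation rises smoothly around a hidden threshold."""
    p_notice = 1.0 / (1.0 + math.exp(-slope * (gain - true_threshold)))
    return random.random() < p_notice

def estimate_dt(initial_gain=1.0, step=0.2, reversals_needed=8):
    """Average the gains at the last few staircase reversals as the DT."""
    gain, direction, reversal_gains = initial_gain, +1, []
    while len(reversal_gains) < reversals_needed:
        new_direction = -1 if present_trial(gain) else +1
        if new_direction != direction:
            reversal_gains.append(gain)
            step = max(step / 2, 0.01)   # refine near the threshold
        direction = new_direction
        gain = max(1.0, gain + direction * step)
    return sum(reversal_gains[-4:]) / 4

print(estimate_dt())
```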

Multimodal Interaction with Gaze and Pressure Ring in Mixed Reality (ID: P1266)

Zhimin Wang, Beihang University; JingYi Sun, Beihang University; Mingwei Hu, Beijing Institute of Technology; Maohang Rao, Beihang University; Yangshi Ge, School of Computer Science and Engineering; Weitao Song, Beijing Institute of Technology; Feng Lu, Beihang University

Controller-free augmented reality devices currently rely on gesture interaction, which can cause arm fatigue and limit hand mobility. This paper proposes a multimodal interaction technique that combines gaze with a pressure ring worn on the index finger. The proposed technique eliminates the need to capture the hands directly with a camera, allowing for more flexible hand movements and reducing fatigue. The experiment conducted in this study demonstrates the effectiveness of the pressure ring in enhancing interaction efficiency.

Audio-driven Talking Face Video Generation with Emotion (ID: P1267)

Jiadong Liang, Beihang University; Feng Lu, Beihang University

Vivid talking face generation has potential applications in virtual reality. Existing methods can generate talking faces that are synchronized with the audio but typically ignore the accurate expression of emotions. In this paper, we propose an advanced two-step framework to synthesize talking face videos with vivid emotional appearances. The first step generates emotional fine-grained landmarks, including the normalized landmarks, gaze, and head pose. In the second step, we map the facial landmarks to latent key points, which are then fed into the pre-trained model to generate high-quality face images. Extensive experiments demonstrate the effectiveness of our method.

Neuromorphic-Enabled Implementation of Extremely Low-Power Gaze Estimation (ID: P1269)

Zhipeng Sui, Tsinghua University; Weihua He, Tsinghua University; Fei Liang, Huawei Technologies Co Ltd; Yongxiang Feng, Huawei Technologies Co Ltd; Xiaobao Wei, Chinese Academy of Sciences Institute of Automation; Qiushuang Lian, Tsinghua University; Ziyang Zhang, Huawei Technologies Co Ltd.; Guoqi Li, Chinese Academy of Sciences Institute of Automation; Wenhui Wang, Tsinghua University

Event cameras have great potential in the field of eye tracking, but current event-based gaze estimation suffers from complex imaging settings and reliance on the RGB modality. We propose a novel architecture for completely event-based, low-power spiking gaze estimation using the signal of only one eye. Our architecture employs a wake-up module to judge the state of the inputs and then enters one of three modules, hibernation, a lightweight SNN segmentation network, or an image-processing module, to obtain the gaze estimation results. We show that our method performs better in terms of accuracy and power consumption on Angelopoulos's dataset.
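
The three-way dispatch can be sketched as follows; the event-count criterion, the thresholds, and the module stubs are placeholder assumptions for illustration, not the proposed architecture's internals:

```python
# Minimal sketch: route each event frame to hibernation, lightweight image
# processing, or the full spiking network, based on a cheap activity check.
import numpy as np

def run_snn_segmentation(frame):
    """Stub for the spiking segmentation network (not implemented here)."""
    return np.array([0.0, 0.0])          # (yaw, pitch) gaze angles

def refine_with_image_processing(frame, last_gaze):
    """Stub for the lightweight image-processing module."""
    return last_gaze

def gaze_step(event_frame, last_gaze, low=50, high=5000):
    activity = int(event_frame.sum())    # number of events this frame
    if activity < low:                   # eye nearly still: reuse last gaze
        return last_gaze, "hibernate"
    if activity > high:                  # saccade or blink: full network
        return run_snn_segmentation(event_frame), "snn"
    return refine_with_image_processing(event_frame, last_gaze), "image"

frame = (np.random.rand(260, 346) < 0.01).astype(np.uint8)  # toy events
gaze, mode = gaze_step(frame, last_gaze=np.zeros(2))
```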

Exploring Strategies for Crafting Emotional Facial Animations in Virtual Characters (ID: P1270)

Hyewon Song, Yonsei University; Seongmin Lee, Yonsei University; Sanghoon Lee, Yonsei University

Emotional portrayal by virtual characters is a critical aspect of communication in virtual reality (VR). Our investigation focuses on understanding the factors that influence the depiction of emotions in these characters. Our findings suggest that emotional representation in virtual characters is highly influenced by factors such as facial expressions, head movements, and overall appearance. Surprisingly, despite being a central point in previous studies, lip-syncing seems to have less impact on conveying emotions. These insights from our study hold promising potential for the VR community, enabling the development of virtual characters capable of expressing a wide range of emotions.

Perceived Realism in 360° Video-Based vs. 3D Modeled Virtual Reality Environment (ID: P1271)

Ibrahim Itani, Deakin University; Kaja Antlej, Deakin University; Michael Mortimer, Deakin University; Ben Horan, Deakin University

Virtual reality impacts users differently depending on the visualization environments. This study compares the impact of two visualization styles in virtual environments on perceived realism and perception. An experiment was conducted using a high-fidelity 360° video-based environment and a modeled three-dimensional (3D) environment. The results show the 360° video-based environment produced a higher perceived realism rating and a greater sense of reality. However, no impact was observed on users' perception of the depicted environments. Users' perceptions were positive, indicating an engaging experience with a great effect on opinions regarding the stigma of Geelong's post-industrial manufacturing status.

Physical OOP: Spatial Program Physical Objects Interactions (ID: P1272)

Xiaoyan Wei, The University of Adelaide; Zijian Yue, The University of Adelaide; Hsiang-Ting Chen, University of Adelaide

Spatial computing has the potential to revolutionise the way we interact with the world around us. However, the current generation of development tools has not yet fully adapted to this shift, creating a large perceptual distance between abstract variables and the physical objects they represent. We introduce the Physical Object-Oriented Programming Environment (PhyOOP). PhyOOP empowers users to capture various states of physical objects through live video streams from cameras and insert these visual representations into their code. Users can augment physical objects by attaching executable code snippets to them, enabling opportunistic execution triggered by camera observations.
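
A hypothetical illustration of the concept, with invented names that are not PhyOOP's actual API, might look like this:

```python
# Minimal sketch: attach executable snippets to physical objects so that
# code runs opportunistically when a camera observes a matching state.
from typing import Callable, Dict, List

class PhysicalObject:
    def __init__(self, label: str):
        self.label = label
        self.handlers: Dict[str, List[Callable]] = {}

    def on(self, state: str, handler: Callable) -> None:
        """Register a code snippet to run when the object enters a state."""
        self.handlers.setdefault(state, []).append(handler)

    def observe(self, detected_state: str) -> None:
        """Called by the vision pipeline whenever the camera recognizes
        the object; triggers any snippets registered for that state."""
        for handler in self.handlers.get(detected_state, []):
            handler()

mug = PhysicalObject("mug")
mug.on("empty", lambda: print("Reminder: refill the mug."))
mug.observe("empty")   # fired by a (hypothetical) camera detection
```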

Intrinsic Omnidirectional Video Decomposition (ID: P1273)

Rong-Kai Xu, Beijing Institute of Technology; Fang-Lue Zhang, Victoria University of Wellington; Lei Zhang, Beijing Institute of Technology

Intrinsic decomposition of omnidirectional video is a challenging task. We propose a method that provides temporally consistent decomposition results. Leveraging the 360-degree scene representation, we maintain a global point cloud to propagate and reuse similar inter-frame content and establish temporal constraints, which elevate the quality of frame-wise decomposition while maintaining inter-frame coherence. By optimizing the proposed objective function, our method achieves a precise separation of reflectance and shading components. Comprehensive experiments demonstrate that our approach outperforms existing intrinsic decomposition methods. Our method also holds promise for various video manipulation applications.

3D printing guide plate model construction method for Orthopedic surgery based on Virtual Reality (ID: P1274)

Wenjun Tan, Northeastern University; Pinan Lu, Northeastern University; Qingya Li, Northeastern University; Jinzhu Yang, Northeastern University; TianLong Ji, The First Hospital of China Medical University

This work studies the key technologies of surgical simulation and surgical guide plate design on the basis of VR and constructs a VR prototype system for surgical guide plate design. First, this work delves into the study of mesh simplification algorithms. Second, a design methodology for surgical guide plates based on triangular mesh filling is proposed. Building upon this foundation, a closed surgical guide plate mesh model is constructed by adding triangular surfaces to the mesh. All key algorithms are integrated and optimized to develop a prototype system for VR surgical guide plate design.

Social Simon Effect between Two Adjacent Avatars in VR (ID: P1275)

Xiaotong Li, The University of Tokyo; Yuji Hatada, The University of Tokyo; Takuji Narumi, The University of Tokyo

This study examined the Social Simon Effect (SSE), in which each individual in a joint action automatically activates representations of the other's behavior in the motor system, between two adjacent avatars in virtual reality, and investigated the impact of the avatar's visual representation on the SSE. The results showed that the SSE was induced when the co-actor's avatar was displayed in full body or was entirely transparent, but it was absent when only the two hands were visible.

Throwing in Virtual Reality: Performance and Preferences Across Input Device Configurations (ID: P1276)

Amirpouya Ghasemaghaei, University of Central Florida; Yahya Hmaiti, University of Central Florida; Mykola Maslych, University of Central Florida; Esteban Segarra Martinez, University of Central Florida; Joseph LaViola, University of Central Florida

Throwing is an underexplored interaction metaphor in virtual reality (VR), and achieving accurate and natural results remains a considerable challenge. We conducted a preliminary investigation of participants' VR throwing performance, measuring their accuracy and preferences across various input device configurations and throwable object types. Participants were tasked with throwing different throwable objects toward random targets using one of the five throwing input configurations we developed. Our work is relevant to researchers and developers aiming to improve throwing interactions in VR. We demonstrate that on-body tracking leads to the highest throwing accuracy, whereas most participants preferred a controller-derived configuration.
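
One common ingredient of VR throwing implementations is estimating release velocity from the last few tracked hand or controller samples; the least-squares sketch below is a generic illustration of that idea, not necessarily one of the five configurations studied:

```python
# Minimal sketch: fit position = v * t + p0 over the last few samples;
# the per-axis slope is a smoother velocity estimate than a single frame.
import numpy as np

def release_velocity(positions, timestamps):
    """positions: (N, 3) samples just before release; returns m/s."""
    t = np.asarray(timestamps) - timestamps[0]
    A = np.vstack([t, np.ones_like(t)]).T
    return np.array([np.linalg.lstsq(A, np.asarray(positions)[:, k],
                                     rcond=None)[0][0] for k in range(3)])

# Example with three tracked samples (meters, seconds).
pos = [(0.00, 1.20, 0.00), (0.02, 1.25, 0.10), (0.05, 1.32, 0.21)]
ts = [0.000, 0.011, 0.022]
print(release_velocity(pos, ts))
```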

I look, but I don't see it: Inattentional Blindness as an Evaluation Metric for Augmented-Reality (ID: P1277)

Nayara Faria, Virginia Tech; Brian Williams, Virginia Tech Transportation Institute; Joseph L Gabbard, Virginia Tech

As vehicles increasingly incorporate augmented reality (AR) into head-up displays, assessing their safety while driving becomes vital. AR, blending real and synthetic scenes, can cause inattentional blindness (IB), where crucial information is missed despite users looking directly at it. Traditional HCI methods, centered on task time and accuracy, fall short in evaluating AR's impact on human performance in safety-critical contexts. Our real-road user study with AR-enabled vehicles focuses on inattentional blindness as a key assessment metric. The results underline the importance of including IB in AR evaluations, extending to other safety-critical sectors such as manufacturing and healthcare.

Exploring the Impacts of Interactive Tasks on Redirected Walking in Virtual Reality (ID: P1278)

Jieun Lee, Gwangju Institute of Science and Technology; Aya Ataya, Gwangju Institute of Science and Technology; SeungJun Kim, Gwangju Institute of Science and Technology

Reorientation in virtual reality (VR) manipulates user orientation within a small physical space to support travel in a large virtual space. Reorientation can be applied to alter the user's orientation in VR content as the user rotates to engage with the content. This study integrated different types of interactions with reorientation to reduce the discrepancy between virtual and real rotation without causing user discomfort. We exceeded the rotation gain threshold to examine the enhancement of reorientation performance in 48 participants. The results indicate that engagement in interactive tasks effectively prevents users from discerning the reorientation manipulation without inducing VR sickness.
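
For reference, reorientation is typically driven by a rotation gain that amplifies the user's physical head rotation in the virtual environment; the sketch below illustrates the basic mechanism, with an arbitrary gain value rather than the thresholds used in the study:

```python
# Minimal sketch: apply a rotation gain so the virtual camera turns
# faster than the user's head, steering the user within the real space.
def redirected_yaw(prev_virtual_yaw, real_yaw_delta, gain=1.4):
    """Amplify the user's physical yaw change by `gain` in the VE."""
    return prev_virtual_yaw + gain * real_yaw_delta

virtual_yaw = 0.0
for real_delta in [2.0, 1.5, 3.0]:   # per-frame head yaw changes, degrees
    virtual_yaw = redirected_yaw(virtual_yaw, real_delta)
print(virtual_yaw)                   # 9.1 degrees for 6.5 degrees of real yaw
```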

A Comparative Analysis of Text Readability on Display Layouts in Virtual Reality (ID: P1280)

Samiha Sultana, BRAC University; Md. Ashraful Alam, BRAC University

Curved displays, featuring layouts such as semi-circular and fully circular designs, offer superior visual experiences compared to flat screens due to their alignment with human vision. This study assesses user experience in VR with different display layouts, comparing curved designs (quarter circle, semicircle, three-fourths circle, full circle) and flat screens. It focuses on reading comprehension across 8 display types. The study finds that shorter display layouts are preferred for their faster reading speeds compared to longer curved and flat displays. The results offer key insights for designing and optimizing curved displays in virtual reality applications.

Integrating Spatial Design with Reality: An AR Game Scene Generation Approach (ID: P1283)

Jia Liu, Nara Institute of Science and Technology; Renjie Zhang, Nara Institute of Science and Technology; Taishi Sawabe, Nara Institute of Science and Technology; Yuichiro Fujimoto, Nara Institute of Science and Technology; Masayuki Kanbara, Nara Institute of Science and Technology; Hirokazu Kato, Nara Institute of Science and Technology

AR games can offer a distinct and immersive experience set apart from traditional games. When creating AR games, one of the most formidable challenges faced by designers lies in the unpredictability of real-world environments, making it difficult to seamlessly integrate the virtual world with the player's surroundings. Our approach addresses this challenge by converting the designer's design and the player's real environment into scene graphs, and then integrating the two graphs to determine the placement of virtual objects. In addition, we provide a user-friendly interface for designers to quickly visualize inspirations, and an integration module for naturally arranging virtual objects.

OnStage: Design and Development of a Performance and Practice Simulation System (ID: P1284)

Chong Cao, Beihang University; Yulu Lu, School of New Media Art and Design

Traditional dance practice uses mirrors or videos to observe movements and facial expressions, which has limitations due to its reliance on a single viewpoint. Meanwhile, the transition from dance training rooms to stage performances involves significant environmental changes, including limited rehearsal time and limited opportunities for costume adjustments. In this paper, we present a dance performance and practice simulation system that allows dancers to observe dance movements from multiple perspectives, providing them with an advance preview of their performance in a formal setting.

CovisitVM: Cross-Reality Virtual Museum Visiting (ID: P1285)

Xuansheng Xia, Xi'an Jiaotong-Liverpool University; Yue Li, Xi'an Jiaotong-Liverpool University; Hai-Ning Liang, Xi'an Jiaotong-Liverpool University

Virtual Reality Head-Mounted Displays (VR HMDs) are the main way for users to immerse themselves in a virtual environment and interact with its virtual objects. The experiences of those around VR HMD users, and their effects on HMD users' experiences, have not been well studied. In this work, we invited participants to engage in a cross-reality virtual museum visit. With low, medium, and high degrees of non-HMD user involvement, they could incrementally observe, navigate within, and interact with the virtual museum. Our study provides insights into the design of engaging multiuser VR experiences and cross-reality collaborations.

MemoryVR: Collecting and Sharing Memories in Personal Virtual Museums (ID: P1287)

Jiachen Liang, Xi'an Jiaotong-Liverpool University; Yue Li, Xi'an Jiaotong-Liverpool University; Xueqi Wang, Xi'an Jiaotong-Liverpool University; Ziyue Zhao, Xi'an Jiaotong-Liverpool University; Hai-Ning Liang, Xi'an Jiaotong-Liverpool University

We present MemoryVR, a virtual museum system designed to preserve and share personal memories. This system enables users to create customized virtual museums within a spatial enclosure, providing an immersive and enriched way to experience personal memories. We invited participants to use MemoryVR to create their own personal virtual museums and visit those created by others. Results from evaluation studies showed a positive impact of MemoryVR on their experience of memories. Participants reported that their experiences within the personal virtual museums were fulfilling, invoking a sense of ritual, ownership, curiosity, and engagement.

Best Poster Honorable Mention

AgileFingers: Authoring AR Character Animation Through Hierarchical and Embodied Hand Gestures (ID: P1288)

Yue Lin, The Hong Kong University of Science and Technology (Guangzhou); Yudong Huang, The Hong Kong University of Science and Technology (Guangzhou); David Yip, The Hong Kong University of Science and Technology; Zeyu Wang, The Hong Kong University of Science and Technology (Guangzhou)

We present AgileFingers, a hand gesture-based solution for authoring AR character animation on a mobile device. Our work first categorizes four major types of animals within Vertebrata. We conducted a formative study on how users construct hierarchical relationships in full-body skeletal animation and on potential hand-structure mapping rules. Informed by the study, we developed a hierarchical segmented control system that enables novice users to manipulate full-body 3D characters sequentially with unimanual gestures. Our user study reveals the ease of use, intuitiveness, and high adaptability of the AgileFingers system across various characters.

Exploring Interactive Gestures with Voice Assistant on HMDs in Social Situations (ID: P1289)

Xin Yi, Tsinghua University; Yan Kong, CS; Xueze Kang, CS; Shuning Zhang, Tsinghua University; Xueyang Wang, Tsinghua University; Yuntao Wang, Tsinghua University; Yu Tian, China Astronaut Research and Training Center; Hewu Li, Tsinghua University

Voice assistants (VAs) are becoming increasingly popular in VR and MR. However, speaking to a VA in social situations (e.g., during a conversation with others) may lead to misunderstanding and embarrassment. In this paper, we propose CollarMic, a technique that allows users to perform a speak-to-shoulder gesture to indicate whether they are addressing the VA. Through a brainstorming session with 10 experts, we first collected a total of 62 conversation-switching gestures for social situations that leveraged different body parts (e.g., head and hand). We then narrowed down these gestures through voting and interviews. Our findings highlight the speak-to-shoulder gesture as the one that best suits our technique.

Augmented Reality Guidance for Numerical Control Program Setups (ID: P1290)

Constantin Kleinbeck, Friedrich-Alexander Universität Erlangen-Nürnberg; Tobias Hassel, Friedrich-Alexander Universität Erlangen-Nürnberg; Julian Kreimeier, Friedrich-Alexander University Erlangen-Nürnberg; Daniel Roth, Technical University of Munich

Setting up a new numerical control (NC) program requires a time-consuming and error-prone validation process. Complications arise because diagnostic information such as trajectories is commonly presented as abstract numerical values. We devised, implemented, and evaluated an Augmented Reality (AR) NC-program setup assistance system. Interviews with machine operators identified common issues during program setup, and we designed and evaluated visual guidance methods accordingly. Our findings indicate that the AR path preview system can decrease error detection time, increase detection rate, and enhance user confidence. Users quickly noticed errors in paths, while errors related to milling depth were slower to detect.

A Case Study on User Centered Design for VR Based Training in Health (ID: P1291)

Pablo Figueroa, Universidad de los Andes; Carlos Rivera, Hospital Militar Central; Patricia Casas, Hospital Militar Central; Jorge E López, Hospital Militar Central; Daniel Mora, Hospital Militar Central; Ivan Gómez, Hospital Militar Central; Kelly Katherine Peñaranda, Universidad de Los Andes

We present GoMi-VR, a Virtual Reality (VR) environment for training physicians in their duties during the first minute of a baby's delivery. Our development process allowed us to discuss requirements remotely, create rapid prototypes, test them with experts, identify improvements, and repeat the process in a timely manner.

Best Poster Honorable Mention

Evaluation of Monocular and Binocular Contrast Sensitivity on Virtual Reality Head-Mounted Displays (ID: P1298)

Khushi Bhansali, Cornell University; Miguel Lago, U.S. FDA; Ryan Beams, Food and Drug Administration; Chumin Zhao, CDRH, United States Food and Drug Administration

Virtual reality (VR) creates an immersive experience by rendering a pair of graphical views on a head-mounted display (HMD). However, image quality assessment on VR HMDs has been primarily limited to monocular optical bench measurements on a single eyepiece. We begin to bridge the gap between monocular and binocular image quality evaluation by developing a WebXR test platform to perform human observer experiments. Specifically, monocular and binocular contrast sensitivity functions (CSFs) are obtained using varied interpupillary distance (IPD) conditions. A combination of optical image quality characteristics and binocular summation can potentially predict the binocular contrast sensitivity on VR HMDs.
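
One classical way to relate monocular and binocular sensitivity, which the abstract's mention of binocular summation suggests, is quadratic (root-sum-of-squares) summation. A minimal sketch with illustrative numbers, not the authors' model:

```python
import numpy as np

def binocular_sensitivity(cs_left, cs_right):
    """Quadratic (root-sum-of-squares) binocular summation: with equal
    monocular sensitivities this predicts a sqrt(2) binocular gain."""
    return np.sqrt(np.asarray(cs_left) ** 2 + np.asarray(cs_right) ** 2)

# Illustrative monocular contrast sensitivities at three spatial frequencies.
print(binocular_sensitivity([30.0, 80.0, 40.0], [28.0, 75.0, 42.0]))
```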

Visuo-vestibular Congruency Impacts on Player Experiences in Virtual Reality (ID: P1299)

Benjamin Williams, Staffordshire University; Christopher Headleand, Staffordshire University

Exposure to Virtual Reality is typically multisensory, and incorporates information from several sensory modalities. Within this context, the congruency between cross-modal feedback plays a crucial role in enhancing user experience and minimising potential discomfort. Whilst some studies have investigated the visuo-vestibular relationship in VR exposure, the exploration of congruency has often been limited in scope. In this paper, vestibular congruency in VR is investigated in detail, with a focus on its significance in recreational activities. Participants were tasked with a virtual driving activity in motion-actuated environments, with three conditions altering the degree of correlation between the visual and vestibular senses.

Best Poster Honorable Mention

Redirected Walking vs. Omni-Directional Treadmills: An Evaluation of Presence (ID: P1301)

Raiffa Syamil, University of Central Florida; Mahdi Azmandian, Sony Interactive Entertainment; Sergio Casas-Yrurzum, University of Valencia; Pedro Morillo, University of Valencia; Carolina Cruz-Neira, University of Central Florida

Omni-Directional Treadmills (ODT) and Redirected Walking (RDW) both seem suitable for eliciting presence through a full-body walking experience; however, each relies on unique mechanisms that can affect users' presence, comfort, and overall preference. To measure this effect, we conducted a counterbalanced within-subjects user study with 20 participants. Participants wore a wireless VR headset and experienced a tour of a virtual art museum, once using RDW and once using a passive, slip-based ODT. Both solutions elicit similar levels of presence; however, RDW was perceived as more natural and was the participants' preferred choice.

Can a Novel Virtual Reality Simulator, Developed for Standalone HMDs, Effectively Prepare Patients for an MRI Examination? (ID: P1303)

Yue Yang, Johns Hopkins University; Emmanuel A Corona, Stanford University; Bruce Daniel, Stanford University; Christoph Leuze, Stanford University

Magnetic Resonance Imaging (MRI) examinations can be scary for pediatric or claustrophobic patients. Virtual reality (VR) simulation has shown the potential to prepare patients for MRI and alleviate anxiety. However, existing VR simulations either use low-quality mobile headsets or require a complex setup, limiting immersion level and practicality. No existing simulation incorporates interactive elements to train patients to hold still. We have designed a novel VR-MRI simulator for standalone HMDs that offers high-quality and interactive MRI simulation. After comparing it with standard preparatory material, we conclude that our VR-MRI could be more engaging, satisfying, and effective for MRI preparation.

Best Poster Award

The Influence of Metaverse Environment Design on Learning Experiences in Virtual Reality Classes: A Comparative Study (ID: P1305)

Valentina Uribe, Universidad de los Andes; Vivian Gómez, Universidad de los Andes; Pablo Figueroa, Universidad de los Andes

In this study, we investigate learning and the quality of the classroom experience by conducting classes in four metaverse environments: Workrooms, Spatial, Mozilla Hubs, and Arthur. Using questionnaires, we analyze how factors such as avatars, spatial layout, mobility, and additional functions influence concentration, usability, presence, and learning. Despite minimal differences in learning outcomes, significant variations in classroom experience emerged. In particular, metaverses with restricted movement and functions showed heightened immersion, concentration, and presence. Additionally, our findings underscore the beneficial influence of avatars with lifelike facial expressions on the overall learning experience.

Towards user-centered interaction in immersive visualization for preoperative surgical planning in complex malformations. A mental model elicitation approach: Work in progress (ID: P1306)

Carlos J Latorre-Rojas, Universidad Militar Nueva Granada; Alexander Rozo-Torres, Universidad Militar Nueva Granada; Javier A. Luzon, Akershus University Hospital; Wilson J. Sarmiento, Universidad Militar Nueva Granada

Extended Reality (XR) technologies have opened new possibilities for image visualization in healthcare, but eliciting user requirements for XR development is challenging. Human-computer interaction methods can help obtain user mental models, which remain largely unexplored in XR for health. An ongoing project aims to address this gap by studying the skills and backgrounds of radiologists and surgeons using the same XR tool in preoperative surgical planning for complex malformations: eliciting mental models, performing a technical review, and proposing potential interaction models or concepts. The work aims to contribute insights for designing more intuitive, immersive medical visualization tools.

AnyTracker: Tracking Pose for Any Object in Videos (ID: P1308)

Wenbo Li, Zhejiang University; Zhaoyang Huang, The Chinese University of Hong Kong; Yijin Li, Zhejiang University; Yichen Shen, Zhejiang University; Shuo Chen, Zhejiang University; Zhaopeng Cui, Zhejiang University; Hongsheng Li, The Chinese University of Hong Kong; Guofeng Zhang, Computer Science College

Object pose estimation is crucial for AR and video editing, but current methods are limited to RGB-D videos or require prior scanning, making them unsuitable for internet videos. We propose AnyTracker, tracking 6-DoF poses of casual objects in RGB videos, which utilizes GroupGOTR to estimate correspondence between reference and query images for a group of query points. It incorporates the TransGRU module and a geometry module for iterative refinement of correspondences. During inference, AnyTracker employs Active Bundle Adjustment (ABA) to establish feature tracks based on correspondences and estimate per-frame object pose. AnyTracker achieves superior accuracy in correspondence and pose estimation.
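
The learned components (GroupGOTR, TransGRU, ABA) are beyond a short sketch, but the final step, recovering a 6-DoF pose from tracked 2D-3D correspondences, resembles classical PnP. An illustrative OpenCV sketch under that assumption, with synthesized correspondences standing in for the tracker's output:

```python
import numpy as np
import cv2

# Pinhole intrinsics (assumed known).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

# Placeholder 3D points on the object; in a real pipeline the 2D points
# would come from the learned tracker. Here we project through a known
# ground-truth pose so the sketch is self-consistent and runnable.
object_points = np.random.rand(20, 3) + [0.0, 0.0, 4.0]
rvec_gt = np.array([0.1, -0.2, 0.05])
tvec_gt = np.array([0.3, -0.1, 1.0])
image_points, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K, None)

# Robust PnP recovers the per-frame 6-DoF object pose.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the recovered pose
    print("recovered t:", tvec.ravel(), "vs ground truth:", tvec_gt)
```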

3D Pano Inpainting: Building a VR Environment from a Single Input Panorama (ID: P1309)

Shivam Asija, California Polytechnic State University; Edward Du, California Polytechnic State University San Luis Obispo; Nam Nguyen, California Polytechnic State University, San Luis Obispo; Stefanie Zollmann, University of Otago; Jonathan Ventura, California Polytechnic State University

Creating 360-degree 3D content is challenging because it requires either a multi-camera rig or a collection of many images taken from different perspectives. Our approach aims to generate a 360° VR scene from a single panoramic image using a learning-based inpainting method adapted for panoramic content. We introduce a pipeline capable of transforming an equirectangular panoramic RGB image into a complete 360° 3D virtual reality scene represented as a textured mesh, which is easily rendered on a VR headset using standard graphics rendering pipelines. We qualitatively evaluate our results on a synthetic dataset consisting of 360 panoramas in indoor scenes.
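
The geometric core of such a pipeline, back-projecting an equirectangular image with per-pixel depth onto a sphere, can be sketched directly; the inpainting network and mesh triangulation are omitted here, and the depth map is a placeholder:

```python
import numpy as np

def equirect_to_points(depth):
    """Back-project an HxW equirectangular depth map to 3D points.
    Pixel (u, v) maps to a longitude/latitude direction on the unit
    sphere, scaled by the per-pixel depth."""
    h, w = depth.shape
    u = (np.arange(w) + 0.5) / w          # [0, 1) across the panorama
    v = (np.arange(h) + 0.5) / h
    lon = (u - 0.5) * 2.0 * np.pi         # [-pi, pi)
    lat = (0.5 - v) * np.pi               # (pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    return dirs * depth[..., None]        # (H, W, 3) vertex positions

points = equirect_to_points(np.full((256, 512), 2.0))  # 2 m everywhere -> sphere
```

Triangulating neighboring pixels into faces and attaching the panorama as a texture would then yield a renderable mesh.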

Effects of Transcranial Direct-Current Stimulation on Hand Redirection (ID: P1311)

Shogo Tachi, The University of Tokyo; Keigo Matsumoto, The University of Tokyo; Maki Ogawa, The University of Tokyo; Ayumu Yamashita, The University of Tokyo; Takuji Narumi, The University of Tokyo; Kaoru Amano, The University of Tokyo

Hand redirection (HR) is a technique for manipulating the perceived position and shape of an object by shifting the virtual hand's position and posture away from those of the physical hand. This study assessed the detection thresholds (DTs) for horizontal HR during intervention on the left dorsolateral prefrontal cortex (DLPFC) using transcranial direct current stimulation (tDCS). Based on a user study, our findings revealed that DTs for leftward shifts significantly increased in the sham condition, likely due to habituation. Interestingly, anodal tDCS stimulation effectively mitigated this increase in DTs. These results suggest that tDCS may reset the effects of HR habituation or enhance attention to HR.
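
The HR manipulation itself is commonly implemented as an offset between the physical and virtual hand that grows with reach progress; a generic sketch of such a warp follows (this illustrates HR in general, not the study's tDCS protocol):

```python
import numpy as np

def redirect_hand(physical_pos, start, target_offset, reach_length):
    """Incremental hand redirection: the virtual hand drifts from the
    physical hand by a fraction of target_offset that grows with
    progress along the reach, keeping the shift below detection."""
    progress = np.clip(np.linalg.norm(physical_pos - start) / reach_length,
                       0.0, 1.0)
    return physical_pos + progress * target_offset

start = np.zeros(3)
physical = np.array([0.15, 0.0, 0.20])   # hand partway through a 0.5 m reach
offset = np.array([0.03, 0.0, 0.0])      # 3 cm horizontal shift at full reach
print(redirect_hand(physical, start, offset, 0.5))
```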

Developing a Multimodal Clinical Nursing Simulation with a Virtual Preceptor in AR (ID: P1314)

Hyeongil Nam, Hanyang University; Ji-Young Yeo, Hanyang University; Kisub Lee, Hanyang University; Kangsoo Kim, University of Calgary; Jong-Il Park, Hanyang University

Nursing education plays a crucial role in preparing future nurses to provide high-quality patient care. To ensure its effectiveness, it is imperative to instruct nursing students in discerning patient signs/symptoms using a range of sensory modalities. Moreover, pairing nursing students with expert preceptors offers a highly effective means of providing valuable feedback and mentorship. In this paper, we propose a novel Augmented Reality (AR) simulation framework for effective clinical nursing education with multimodal signs/symptoms and a preceptor agent to provide guidance/instruction. Using the proposed framework, we developed an AR nurse training system that resembles practical clinical nursing environments.

Design of Time-Continuous Rating Interfaces for Collecting Empathic Responses in VR, and Initial Evaluation with VR180 Video Viewing (ID: P1315)

Md Istiak Jaman Ami, University of Louisiana at Lafayette; Jason Wolfgang Woodworth, University of Louisiana at Lafayette; Christoph W Borst, University of Louisiana at Lafayette

We design and assess interfaces for time-continuous self-reporting of empathic responses in VR, using dimensions of empathic concern and personal distress. Visual styles included Circular Rating Indicators (CRI), Color Dials (CD), and Interactive Curving Faces (Frowny). Input types included desktop knobs and VR controller touchpads. We considered the intuitiveness, effectiveness, and intrusiveness of designs with viewers of a stereo 180° video. Initial results highlight the CRI and Frowny as promising choices, calling for future examination of their effectiveness.

Understanding How Interaction Experiences Factor into Security Perception of Payment Authentication in Virtual Reality (ID: P1317)

Jingjie Li, University of Edinburgh; Sunpreet Singh Arora, Visa Research; Kassem Fawaz, University of Wisconsin-Madison; Younghyun Kim, University of Wisconsin-Madison; Can Liu, Visa Research; Sebastian Meiser, University of Lübeck; Mohsen Minaei, Visa Research; Maliheh Shirvanian, Netflix; Kim Wagner, Visa Research

Users are embracing the rapid development of virtual reality (VR) technology in everyday settings, including payment, which makes user authentication necessary. Despite this need, there is limited understanding of how users' unique experiences in VR contribute to their perception of security. To investigate this question, we designed payment-authentication probes, embedded in the routine payment flow of a VR game, to provoke participants' reactions from multiple angles.

The Effects of Colored Environmental Surroundings in Virtual Reality (ID: P1318)

Deyrel Diaz, Clemson University; Andrew Robb, Clemson University; Sabarish V. Babu, Clemson University; Christopher Pagano, Clemson University

Research has shown that environmental cues affect long-term memory and spatial cognition, but there is still a lack of understanding of the exact characteristics that produce these effects. We conducted a virtual reality within-subjects repeated measures study on 51 participants to test color congruency. Participants saw and studied 20 objects, then completed object recall and placement tasks in a recall room with a congruent or incongruent color. The objective and subjective data we gathered suggest that congruent color conditions influenced long-term memory and speed for recalled objects. Object size was also shown to influence spatial cognition and long-term memory.

Initial investigations into information retention and perception on Virtual Human Race (ID: P1319)

Deyrel Diaz, Clemson University; Samaneh Zamanifard, Clemson University; Matias Volonte, Clemson University; Andrew Duchowski, Clemson University

Virtual humans have long been studied in the field of embodied conversational agents. Most studies have focused on understanding the verbal and non-verbal cues required of a virtual agent for relationship building, trust, and credibility. Some studies have even gone as far as looking into characteristics like clothes, accessories, and race to see what effects they may have on the interlocutor. We seek feedback on an investigative study where we look to better understand how avatar race may affect not only previously investigated affect, but also information retention and eye gaze behavior. We discuss the technical design and research methodology.

The Role of Haptic Feedback in Enhancing Technical Skills Acquisition and Transfer in an Immersive Simulator: A User Study (ID: P1320)

Intissar Cherif, Univ Evry, Université Paris Saclay; Amine Chellali, Univ Evry, Université Paris Saclay; Mohamed Chaouki Babahenini, University of Biskra; Samir Otmane, Université d'Evry, Université Paris Saclay

We study the impact of haptic feedback on basic technical skills transfer from VR to the real world. Twenty-four volunteers were divided into two training groups (haptic and no-haptic) and a control group. The training groups learned to perform a "Ring Transfer" task in a VR simulator, and all participants performed pre-, post-, and retention tests on a similar physical setup. Results show that skill transfer was observed for both training groups but not for the control group. The haptic group also improved their performance compared to the no-haptic group, but the difference was not significant.

[EXTENDED] The Restorative Influence of Virtual Reality Environment Design (ID: P1322)

Jalynn Blu Nicoly, Colorado State University; Rachel Masters, Colorado State University; Vidya D Gaddy, Colorado State University; Victoria Interrante, University of Minnesota; Francisco Raul Ortega, Colorado State University

We aim to explore whether beauty in moving and still virtual environments (VEs) contributes to restorativeness. We hypothesized that the moving forest environment would be the most restorative and the abstract art environment the least restorative. The Perceived Restorativeness Scale (PRS) and the positive affect subscale of the Zuckerman Inventory of Personal Reactions (ZIPERS) showed significantly greater restorativeness in the moving forest condition than in the control condition. Additionally, the PRS indicated significantly higher restorativeness in the moving forest condition than in the abstract art condition.

Propagation as Data (PaD): Neural Phase Hologram Generation with Variable Distance Support (ID: P1333)

Jun Yeong Cha, Kyung Hee University; Hyunmin Ban, Kyung Hee University; Seungmi Choi, Kyung Hee University; Hui Yong Kim, Kyung Hee University

Most neural network models for generating phase holograms developed so far are trained and validated for only a single propagation distance. Consequently, if the distance is altered, model performance tends to decline dramatically. To address this, we introduce a novel approach called 'Propagation as Data (PaD)'. Unlike conventional methods, our proposed model does not include the propagation process in the neural network; instead, we pre-calculate propagation kernels and use them as conditioning data. Experimental results demonstrate that our model can consistently generate high-quality phase holograms across a range of distances with a single model.
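
The angular spectrum transfer function for a given distance can be precomputed exactly as the name 'Propagation as Data' suggests; how the kernel is fed to the network (e.g., as extra real/imaginary input channels) is our assumption. A minimal sketch:

```python
import numpy as np

def propagation_kernel(shape, pitch, wavelength, z):
    """Angular spectrum transfer function H(fx, fy) for distance z;
    evanescent frequencies (beyond 1/wavelength) are zeroed out."""
    h, w = shape
    fy = np.fft.fftfreq(h, d=pitch)
    fx = np.fft.fftfreq(w, d=pitch)
    fx, fy = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - fx**2 - fy**2
    return np.where(arg > 0,
                    np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))),
                    0.0)

# Kernels for several distances, e.g., stacked as conditioning channels
# (8 um pixel pitch, 520 nm wavelength -- illustrative values).
kernels = [propagation_kernel((1024, 1024), 8e-6, 520e-9, z)
           for z in (0.05, 0.10, 0.20)]
```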

Generating Look-Alike Avatars: Perception of Head Shape, Texture Fidelity and Head Orientation of Other People (ID: P1153)

Kwame Agyemang Baffour, Graduate Center; Oyewole Oyekoya, City University of New York - Hunter College

This research seeks to determine the influence of head shape, texture fidelity and head orientation of a look-alike avatar on perception of likeability and visual realism, especially when judged by other people. Two textured look-alike avatars were generated using: (i) three-dimensional (3D) stereophotogrammetry; and (ii) 3D face reconstruction from a single full-face image. Participants compared three different head orientations (0°, 45°, 90°) of the look-alike avatars' textured heads to their corresponding head silhouettes. Results suggest that participants prefer geometrically accurate photorealistic avatars of other people due to the accuracy of the head shape and texture fidelity.

You're Hired! Effect of Virtual Agents' Social Status and Social Attitudes on Stress Induction in Virtual Job Interviews (ID: P1339)

Celia Kessassi, IMT Atlantique; Cédric Dumas, IMT Atlantique; Caroline G. L. Cao, University of Illinois; Mathieu Chollet, University of Glasgow

Virtual reality offers new possibilities for social skills training: it allows users to become immersed in and train for various social situations, including job interviews. In this paper, we investigate the effect of virtual recruiters' social status and social attitudes on participants' stress during a job interview. Results show that negative recruiter attitudes led to higher subjective stress than neutral attitudes, and that participants with high social anxiety reacted differently to positive feedback than participants with low social anxiety. The mechanisms of social stress induction in virtual reality are complex and deserve further study.

[EXTENDED] Eye Tracking Performance in Mobile Mixed Reality (ID: P1323)

Satyam Awasthi, University of California, Santa Barbara; Vivian Ross, University of California, Santa Barbara; Sydney Lim, University of California, Santa Barbara; Michael Beyeler, University of California, Santa Barbara; Tobias Höllerer, University of California, Santa Barbara

Implementing and evaluating eye tracking across multiple platforms and use cases can be challenging due to the lack of standardized metrics and measurements. Additionally, existing calibration methods and accuracy measurements often do not account for the common scenarios of walking and scanning in mobile AR settings. We conducted user studies evaluating eye tracking on the Magic Leap One, the HoloLens 2, and the Meta Quest Pro. Our results reveal that the degree to which locomotion influenced eye tracking performance depended on the headset, with the HoloLens 2, which features a retractable visor, displaying the greatest decrease in accuracy during locomotion.

The Influence of Perspective on Training Effects in Virtual Reality Public Speaking Training (ID: P1337)

Fumitaka Ueda, Nara Institute of Science and Technology; Yuichiro Fujimoto, Nara Institute of Science and Technology; Taishi Sawabe, Nara Institute of Science and Technology; Masayuki Kanbara, Nara Institute of Science and Technology; Hirokazu Kato, Nara Institute of Science and Technology

A third-person perspective in virtual reality (VR) based public speaking training enables trainees to observe themselves through self-avatars, potentially enhancing their public speaking skills. This study investigates the influence of perspective on the training effects, i.e., changes in audience evaluations before and after training. In the experiment, VR job interview training was conducted for five days under three perspective conditions. Mock interviews were also performed before and after training and were assessed by external raters. The results indicate that the training effects were significantly higher in the Front condition regarding verbal communication skills and the overall impression of the interview.

Enhancing Immersion in Virtual Reality: Cost-Efficient Spatial Audio Generation for Panoramic Videos (ID: P1347)

Di Zhang, Communication University of China; Jiaxin Shi, Communication University of China; Long Ye, Communication University of China

This paper presents a novel system for generating 5.1.4-format spatial audio for home theater scenes from panoramic video content. Spatial audio is an integral part of virtual reality; however, current virtual reality videos lack systematic spatial audio production methods, and traditional spatial audio production is expensive and complex. To address this problem, the system provides an efficient and cost-effective way for audio producers to match audio with panoramic visuals. Finally, this study provides a user interface; taking human auditory properties into account, the system is optimized to generate 5.1.4 spatial surround audio that enhances the listener's immersion.

Toward Optimized AR-based Human-Robot Interaction Ergonomics: Modeling and Predicting Interaction Comfort (ID: P1336)

Yunqiang Pei, University of Electronic Science and Technology of China; Bowen Jiang, University of Electronic Science and Technology of China; Kaiyue Zhang, University of Electronic Science and Technology of China; Ziyang Lu, University of Electronic Science and Technology of China; Mingfeng Zha, University of Electronic Science and Technology of China; Guoqing Wang, University of Electronic Science and Technology of China; Zhitao Liu, University of Electronic Science and Technology of China; Ning Xie, University of Electronic Science and Technology of China; Yang Yang, University of Electronic Science and Technology of China; Heng Tao Shen, Tongji University

Augmented Reality in Human-Robot Interaction (AR-HRI) boosts user experience. The key challenge is refining interaction methods to minimize discomfort and enhance quality. This AR-HRI study uses Galvanic Skin Response (GSR) to predict and improve user comfort. User studies tested interaction strategies in an AR environment. A machine learning model, developed from the GSR data, predicted comfort levels and informed strategy changes. Comfort metrics were visualized every second on a HoloLens 2, creating an AR-HRI comfort system. The method improved user comfort, provided a new AR-HRI metric, and highlighted future research opportunities.
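
The abstract does not detail the model; a minimal sketch of the general recipe, windowed GSR features fed to an off-the-shelf classifier, with all feature choices and sampling parameters assumed for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gsr_features(signal, fs=4, window_s=5):
    """Slice a GSR trace into windows and extract simple features
    (mean level, variability, crude peak count per window)."""
    n = fs * window_s
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    rising = np.diff(windows, axis=1) > 0
    peaks = np.sum(rising[:, :-1] & ~rising[:, 1:], axis=1)  # local maxima
    return np.column_stack([windows.mean(1), windows.std(1), peaks])

# Synthetic stand-ins for recorded GSR traces and per-window comfort labels.
rng = np.random.default_rng(0)
X = gsr_features(rng.normal(2.0, 0.3, 4000))
y = rng.integers(0, 2, len(X))  # 0 = comfortable, 1 = uncomfortable (placeholder)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))  # per-window comfort predictions
```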

Visual Perception in VR Training: Impact of Information Transfer Methods (ID: P1091)

Maximilian Rettinger, Technical University of Munich; Michael Hug, Technical University of Munich; Hassan Kamel, Technical University of Munich; Yashita Saxena, Technical University of Munich; Gerhard Rigoll, Technical University of Munich

Virtual Reality (VR) has great potential for education and training, but it is not fully understood how to make VR training as effective as possible. An earlier investigation indicated that a combination of auditory and textual information is the best way to present training instructions and explanations to the user. However, is this combination also suitable when the training content is visualized in the virtual environment? We conducted a within-subjects study to investigate this by comparing users' gaze attention across four information transfer methods. The results are consistent with previous findings that the auditory-visual combination is the most appropriate of the methods compared.

Distribution-Shifting: Improved Phase Hologram Processing with Novel Phase Distortion Metric (ID: P1335)

Seungmi Choi, Kyung Hee University; Jun Yeong Cha, Kyung Hee University; Hyunmin Ban, Kyung Hee University; Kwan-Jung Oh, ETRI; Hyunsuk Ko, Hanyang University; Hui Yong Kim, Kyung Hee University

Due to the lack of suitable phase distortion metrics, the optimization and evaluation of phase holograms have relied on numerical reconstruction (NR) domain metrics. However, the need for NR during hologram processing leads to more intricate designs and a higher computational burden. In this paper, we introduce a distribution-shifting (DS) algorithm that enables optimizing or measuring distortions directly in the phase domain while accounting for 2π-periodicity and shift-invariance. Experimental results with various noise types demonstrate a strong correlation between phase-domain metrics with DS and their NR-domain counterparts. We believe that our DS metric could facilitate direct optimization approaches in various phase-hologram processing techniques.
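
The DS algorithm itself is not spelled out in the abstract, but the two invariances it targets can be illustrated directly. Below is our own construction (not necessarily the authors') of a phase distance that wraps differences into [-π, π) and removes the best global phase shift:

```python
import numpy as np

def wrap(x):
    """Wrap angles into [-pi, pi)."""
    return (x + np.pi) % (2.0 * np.pi) - np.pi

def phase_distance(phi_a, phi_b):
    """RMS phase error respecting 2*pi periodicity and invariant to a
    global phase shift (which leaves the reconstruction unchanged)."""
    d = wrap(phi_a - phi_b)
    shift = np.angle(np.mean(np.exp(1j * d)))  # circular-mean global shift
    return np.sqrt(np.mean(wrap(d - shift) ** 2))

rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, (256, 256))
print(phase_distance(phi, wrap(phi + 1.3)))  # ~0: global shift is ignored
```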

Using Virtual Reality to Promote Pro-environmental Consumption (ID: P1338)

Ou Li, Hangzhou Normal University; Wenchao Su, Hangzhou Normal University; Yan Shi, Hangzhou Normal University

Virtual reality (VR) has emerged as an effective method for encouraging eco-friendly behaviors. The present study aimed to investigate the effectiveness and underlying mechanisms of VR in promoting pro-environmental consumption. The results indicated that, when compared to traditional mediums such as 2D video and printed material, VR can improve individuals' pro-environmental consumption. This favorable effect of VR was found to be mediated by the inclusion of nature in self (INS). Additionally, this study also demonstrated that individuals' green values act as the boundary condition.


©IEEE VR Conference 2024, Sponsored by the IEEE Computer Society and the Visualization and Graphics Technical Committee