The official banner for the IEEE Conference on Virtual Reality + User Interfaces, comprising a kiwi wearing a VR headset overlaid on an image of Mount Cook and a braided river.

Papers -- Tentative Program

Monday, March 27, 2023, Shanghai UTC+8
Tracking 10:15 - 11:15 Room A
Collaboration 10:15 - 11:15 Room B
Agents 10:15 - 11:15 Room C
Locomotion 1 13:30 - 14:30 Room B
Audio 13:30 - 14:30 Room C
Rendering 1 14:45 - 15:45 Room A
Cybersickness and Social Emotional 14:45 - 15:45 Room B
Rendering 2 16:00 - 17:00 Room C
360 Video, 3D Video and Applications 16:00 - 17:00 Room A
Locomotion 2 16:00 - 17:00 Room B
Tuesday, March 28, 2023, Shanghai UTC+8
Gaze, Haptics and Foveated Rendering 8:30 - 9:30 Room A
Cybersickness 1 8:30 - 9:30 Room B
Interaction 1 8:30 - 9:30 Room C
Gaze 14:00 - 15:00 Room B
Interaction 2 14:00 - 15:00 Room C
Accessibility and Applications 15:15 - 16:15 Room A
Displays 15:15 - 16:15 Room B
Medical 16:30 - 17:30 Room B
Haptics 16:30 - 17:30 Room C
Wednesday, March 29, 2023, Shanghai UTC+8
Social Emotional 8:30 - 9:30 Room A
Perception 1 8:30 - 9:30 Room B
Multimodal and Haptics 11:00 - 12:00 Room A
Gestures and Interaction 11:00 - 12:00 Room B
Education and Medical 11:00 - 12:00 Room C
Displays and Haptics 14:00 - 15:00 Room B
Agents and Perception 14:00 - 15:00 Room C
Perception 2 15:15 - 16:15 Room A
InfoVis and Text Entry 15:15 - 16:15 Room B

Session: Tracking

Monday, March 27, 2023, 10:15, Shanghai UTC+8, Room A

Session Chair: Guofeng Zhang

Simultaneous Scene-independent Camera Localization and Category-level Object Pose Estimation via Multi-level Feature Fusion

Conference

Junyi Wang, Yue Qi

In this paper, we focus on simultaneous scene-independent camera localization and category-level object pose estimation using a unified learning framework, consisting of a localization branch called SLO-LocNet, a pose estimation branch called SLO-ObjNet, a feature fusion module for feature sharing between the two tasks, and two decoders. Three fusion modules, covering an image fusion module in SLO-LocNet, a geometry fusion module in SLO-ObjNet, and a task fusion module between SLO-LocNet and SLO-ObjNet, are designed to promote feature sharing across the two tasks. Experiments on both scene-independent localization and category-level pose estimation datasets demonstrate performance superior to existing methods.

SCP-SLAM: Accelerating DynaSLAM with Static Confidence Propagation

Conference

Mingfei Yu, Lei Zhang, Wufan Wang, Jiahui Wang

This paper proposes SCP-SLAM, which accelerates DynaSLAM by running the CNN only on keyframes and propagating static confidence through the other frames in parallel. The proposed static confidence characterizes moving-object features by the residual of the inter-frame geometry transformation, which can be computed quickly. Our method combines the effectiveness of a CNN with the efficiency of static confidence in a tightly coupled manner. Extensive experiments on the publicly available TUM and Bonn RGB-D dynamic benchmark datasets demonstrate the efficacy of the method. Compared with DynaSLAM, it achieves an average tenfold acceleration while retaining comparable localization accuracy.
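
To make the static-confidence idea concrete, here is a small sketch of how a per-feature confidence could be computed from inter-frame reprojection residuals. The function names, the Gaussian decay, and the assumption of per-feature depth are illustrative, not the authors' implementation.

```python
# Sketch: features consistent with the estimated camera motion yield small
# reprojection residuals (likely static); moving-object features yield large ones.
import numpy as np

def static_confidence(pts_prev, pts_curr, R, t, K, depth_prev, sigma=2.0):
    """Score matched features in [0, 1]: high = likely static."""
    ones = np.ones((len(pts_prev), 1))
    # Back-project previous-frame pixels to 3D camera space using their depths.
    rays = (np.linalg.inv(K) @ np.hstack([pts_prev, ones]).T).T
    X = rays * depth_prev[:, None]
    # Apply the inter-frame rigid transform and project into the current frame.
    X_curr = (R @ X.T).T + t
    proj = (K @ X_curr.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    # Residual = distance between predicted and observed feature positions.
    residual = np.linalg.norm(proj - pts_curr, axis=1)
    return np.exp(-(residual / sigma) ** 2)

# Demo: identity motion, so both features score as static (confidence ~1).
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.array([[300.0, 200.0], [340.0, 260.0]])
print(static_confidence(pts, pts, np.eye(3), np.zeros(3), K, np.array([2.0, 3.0])))
```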

AR-MoCap: Using Augmented Reality to Support Motion Capture Acting

Conference

Alberto Cannavò, Filippo Gabriele Pratticò, Alberto Bruno, Fabrizio Lamberti

This paper aims to demonstrate how Augmented Reality (AR) can help actors when shooting mocap scenes. To this purpose, we devised a system named AR-MoCap that actors can use to rehearse a scene in AR on the real set before actually shooting it. Through an optical see-through head-mounted display, an actor can see, e.g., the digital characters of other actors wearing mocap suits overlaid in real time on their bodies. Experimental results showed that, compared to the traditional approach based on physical props and other cues, the devised system can help actors position themselves and direct their gaze while shooting the scene, while also improving spatial and social presence, as well as perceived effectiveness.

Cross-View Visual Geo-Localization for Outdoor Augmented Reality

Conference

Niluthpol Chowdhury Mithun, Kshitij Minhas, Han-Pang Chiu, Taragay Oskiper, Mikhail Sizintsev, Supun Samarasekera, Rakesh Kumar

Precise estimation of global orientation and location is critical to ensure a compelling outdoor Augmented Reality (AR) experience. We address the problem by cross-view matching of query ground images to a geo-referenced satellite image database, proposing a transformer neural network model and a modified ranking loss. Experiments on benchmark cross-view geo-localization datasets show that our model achieves state-of-the-art performance. We also present an approach to extend the single image query-based localization approach utilizing temporal information from a navigation pipeline for continuous geo-localization. Experiments on several real-world video sequences demonstrate that our approach enables high-precision and stable AR insertion.

LiDAR-aid Inertial Poser: Large-scale Human Motion Capture by Sparse Inertial and LiDAR Sensors

Journal

Yiming Ren, Chengfeng Zhao, Yannan He, Peishan Cong, Han Liang, Jingyi Yu, Lan Xu, Yuexin Ma

We propose a multi-sensor fusion method for capturing challenging 3D human motions with accurate consecutive local poses and global trajectories in large-scale scenarios, using only a single LiDAR and four IMUs, which are convenient to set up and lightweight to wear. Moreover, we collect a LiDAR-IMU multi-modal mocap dataset, LIPD, with diverse human actions in long-range scenarios. Extensive quantitative and qualitative experiments on LIPD and other open datasets demonstrate the capability of our approach for compelling motion capture in large-scale scenarios, outperforming other methods by a clear margin. We will release our code and captured dataset to stimulate future research.

Session: Collaboration

Monday, March 27, 2023, 10:15, Shanghai UTC+8, Room B

Session Chair: Hai-Ning Liang

Comparing Visual Attention with Leading and Following Virtual Agents in a Collaborative Perception-Action Task in VR

Conference

Sai-Keung Wong, Matias Volonte, Kuan-yu Liu, Elham Ebrahimi, Sabarish V. Babu

This paper presents a within-subjects study investigating the effects of leading and following behaviors on users' visual attention when collaborating with a virtual agent (VA) on transportation tasks. There were two conditions: leader VA (LVA) and follower VA (FVA). The leader gave instructions to the follower to perform actions. In the FVA condition, users played the leader role, while in the LVA condition they played the follower role. The preliminary results revealed significant differences in users' visual attention behaviors between the follower and leader VA conditions during the transportation tasks.

Towards an Understanding of Asymmetric Collaborative Visualization on Problem-solving

Conference

Wai Tong, Meng Xia, Kam Kwai Wong, Doug Bowman, Ting-Chuen Pong, Huamin Qu, Yalong Yang

With the ability to access various computing devices, such as Virtual Reality (VR) head-mounted displays, we aimed to understand the user experience of collaborative visualization in a distributed asymmetric setting (i.e., PC-VR across different locations). To inform designs, we first conducted a formative study with 12 pairs of participants. We then improved our asymmetric design based on the key findings from the first study. Ten pairs of participants experienced the enhanced PC-VR and PC-PC conditions in a follow-up study. We found that a well-designed asymmetric collaboration system can be as effective as a symmetric system. Participants who used the PC perceived less mental demand and effort in the PC-VR condition than in the PC-PC condition.

MAGIC: Manipulating Avatars and Gestures to Improve Remote Collaboration

Conference

Catarina Gonçalves Fidalgo, Mauricio Sousa, Daniel Mendes, Rafael Kuffner dos Anjos, Daniel Medeiros, Karan Singh, Joaquim Jorge

Remote collaboration in virtual environments has become pervasive in many fields. Within this context, users often collaboratively interact with virtual 3D models. However, discussing shared 3D content face-to-face can be challenging due to ambiguities, occlusions, and different viewpoints. To address this challenge, we introduce MAGIC, a novel approach for understanding pointing gestures in a face-to-face shared 3D space. MAGIC distorts the remote user's gestures to correctly reflect them in the local user's reference space when face-to-face. Results suggest that MAGIC significantly improves pointing agreement in face-to-face collaboration settings, improving co-presence and awareness of interactions performed in the shared space.

Effects of Collaborative Training Using Virtual Co-embodiment on Motor Skill Learning

Journal

Daiki Kodama, Takato Mizuho, Yuji Hatada, Takuji Narumi, Michitaka Hirose

Many VR systems that enable users to observe and follow a teacher's movements from a first-person perspective have been reported to be useful for motor skill learning. However, learners using these methods experience weak agency because they must consciously follow the teacher's movements, which hinders motor skill retention. To address this problem, we propose applying virtual co-embodiment, in which two users are immersed in the same virtual avatar, as a method that elicits strong agency during motor skill learning. The experiment showed that learning in virtual co-embodiment with the teacher improves learning efficiency compared with sharing the teacher's perspective or learning alone.

Using Virtual Replicas to Improve Mixed Reality Remote Collaboration

Journal

Huayuan Tian, Gun A. Lee, Huidong Bai, Mark Billinghurst

In this paper, we explore how virtual replicas can enhance MR remote collaboration with a 3D reconstruction of the task space, and study how they can serve as a spatial cue to improve MR remote collaboration. Our approach segments the foreground manipulable objects in the local environment and creates virtual replicas of them. The remote user can manipulate the replicas to explain the task and guide the partner, who can then rapidly and accurately understand the remote expert's intentions and instructions. Our user study found that manipulating virtual replicas was more efficient than drawing 3D annotations in MR remote collaboration. We report and discuss the findings and limitations of our system and study, and directions for future research.

Session: Agents

Monday, March 27, 2023, 10:15, Shanghai UTC+8, Room C

Session Chair: Chong Cao

Exploring the Social Influence of Virtual Humans Unintentionally Conveying Conflicting Emotions

Conference

Zubin Choudhary, Nahal Norouzi, Austin Erickson, Ryan Schubert, Gerd Bruder, Greg Welch

In this paper, we present a human-subjects study aimed at understanding the impact of conflicting facial and vocal emotional expressions. Specifically, we explored three levels of emotional valence (unhappy, neutral, and happy) expressed in both visual (facial) and aural (vocal) forms. We also investigated three levels of head scale (down-scaled, accurate, and up-scaled) to evaluate whether head scale affects user interpretation of the conveyed emotion. We found significant effects of different multimodal expressions on happiness and trust perception, while no significant effect was observed for head scale. We discuss the relationships, implications, and guidelines for social applications that aim to leverage multimodal social cues.

Studying Avatar Transitions in Augmented Reality: Effect of Visual Transformation and Physical Action

Conference

Riku Otono, Adélaïde Genay, Monica Perusquia-Hernandez, Naoya Isoyama, Hideaki Uchiyama, Martin Hachet, Anatole Lécuyer, Kiyoshi Kiyokawa

We explore how applying smooth visual transitions at the moment of an avatar change can help maintain the sense of embodiment (SoE) and benefit the general user experience. To this end, we implemented an AR system allowing users to embody a regular-shaped avatar that can be transformed into a muscular one through a visual effect. The avatar's transformation can be triggered either by the user through physical action (``active'' transition) or launched automatically by the system (``passive'' transition). The results showed that visual effects controlled by the user when changing their avatar's appearance can benefit their experience by preserving the SoE and intensifying the Proteus effects.

Animation Fidelity in Self-Avatars: Impact on User Performance and Sense of Agency

Conference

Haoran Yun, Jose Luis Ponton, Carlos Andujar, Nuria Pelechano

In this paper, we study the impact of the avatar's animation fidelity on different tasks. We compare three animation techniques: two using inverse kinematics (IK) to reconstruct the pose from six trackers, and a third using a motion capture system with 17 inertial sensors. Our results show that animation quality affects the Sense of Embodiment. Inertial-based MoCap performs significantly better at mimicking body poses. Surprisingly, the IK-based solutions using fewer sensors outperformed MoCap in tasks requiring accurate positioning, which we attribute to the higher latency and positional drift of the MoCap end-effectors.

Fully Automatic Blendshapes Generation for Stylized Characters

Conference

Jingying Wang, Yilin Qiu, Keyu Chen, Yu Ding, Ye Pan

We leverage deep learning and feature transfer to realize deformation transfer, generating blendshapes for target avatars based on given sources. We propose a variational autoencoder (VAE) to extract the latent space of the avatars, and then use a multilayer perceptron (MLP) to translate between the latent spaces of the source and target avatars. By decoding the latent codes of different blendshapes, we obtain blendshapes for the target avatars with the same semantics as those of the source. We also demonstrate that our method can be applied to identity and expression transfer for stylized characters with different topologies.
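
The transfer path described above can be pictured with a minimal PyTorch sketch; the linear stand-ins for the VAE encoder and decoder, the dimensions, and the names are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

V, DIM = 5000, 64                      # assumed: V mesh vertices, 64-D latent space

src_encoder = nn.Linear(3 * V, DIM)    # stand-in for the source avatar's VAE encoder
tgt_decoder = nn.Linear(DIM, 3 * V)    # stand-in for the target avatar's VAE decoder
mapper = nn.Sequential(                # MLP translating between the two latent spaces
    nn.Linear(DIM, 128), nn.ReLU(), nn.Linear(128, DIM))

def transfer_blendshape(src_blendshape):
    z_src = src_encoder(src_blendshape)    # encode the source blendshape
    z_tgt = mapper(z_src)                  # map it into the target latent space
    return tgt_decoder(z_tgt)              # decode as target-avatar vertex offsets

offsets = transfer_blendshape(torch.randn(1, 3 * V))
print(offsets.shape)                       # torch.Size([1, 15000])
```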

PACE: Data-Driven Virtual Agent Interaction in Dense and Cluttered Environments

Journal

James F Mullen Jr, Dinesh Manocha

We present PACE, a novel method for modifying motion-captured virtual agents to interact with and move throughout dense, cluttered 3D scenes. Our approach changes a given motion sequence of a virtual agent as needed to adjust to the obstacles and objects in the environment. We compare our method with prior motion generation techniques and highlight its benefits with a perceptual study, in which human raters preferred our method, and with physical plausibility metrics, on which our method performed well. We have integrated our system with Microsoft HoloLens and demonstrate its benefits in real-world scenes. Our project website is available at https://gamma.umd.edu/pace/.

Session: Locomotion 1

Monday, March 27, 2023, 13:30, Shanghai UTC+8, Room B

Session Chair: Songhai Zhang

Designing Viewpoint Transition Techniques in Multiscale Virtual Environments

Conference

Jong-In Lee, Paul Asente, Wolfgang Stuerzlinger

Viewpoint transitions have been shown to improve users' spatial orientation and help them build a cognitive map when navigating an unfamiliar virtual environment. Previous work has investigated transitions in single-scale virtual environments, focusing on trajectories and continuity. We extend previous work on simple transitions to an in-depth investigation of transition techniques in multiscale virtual environments (MVEs). We identify challenges in navigating MVEs with nested structures and assess how different transition techniques affect spatial understanding and usability. Through two user studies, we investigated transition trajectories, interactive control of transition movement, and speed modulation in a nested MVE.

Assisted walking-in-place: Introducing assisted motion to walking-by-cycling in embodied Virtual Reality

Journal

Yann Moullec, Justine Saint-Aubert, Mélanie Cogne, Anatole Lécuyer

We investigated the use of a motorized bike to support the walk of an avatar in Virtual Reality. Our approach assists a walking-in-place technique called walking-by-cycling with a motorized bike, providing participants with a compelling walking experience while reducing their perceived effort. We conducted a study which showed that assisted walking-by-cycling induced more ownership, agency, and walking sensation than a static simulation. It also induced levels of ownership and walking sensation similar to those of active walking-by-cycling, but with less perceived effort, which supports the use of our approach in situations where users cannot or do not want to exert much effort while walking in embodied VR.

Monte-Carlo Redirected Walking: Gain Selection Through Simulated Walks

Journal

Ben J. Congdon, Anthony Steed

We present Monte-Carlo Redirected Walking (MCRDW), a gain selection algorithm for redirected walking. MCRDW applies the Monte-Carlo method to redirected walking by simulating a large number of simple virtual walks, then inversely applying redirection to the virtual paths. Different gain levels and directions are applied, producing differing physical paths. Each physical path is scored, and the results are used to select the best gain level and direction. We provide a simple example implementation and a simulation-based study for validation. In our study, when compared with the next-best technique, MCRDW reduced the incidence of boundary collisions by over 50% while also reducing total rotation and position gain.
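
The following toy sketch illustrates the flavor of Monte-Carlo gain selection, restricted to a curvature gain in a square tracked space. The random-walk model, the scoring function, and the gain grid are assumptions for illustration, not the paper's implementation.

```python
import random, math

def simulate_virtual_walk(steps=50, step_len=0.1):
    """Random-walk model of the user's near-future virtual path."""
    x, y, heading, path = 0.0, 0.0, 0.0, []
    for _ in range(steps):
        heading += random.gauss(0.0, 0.15)          # small heading drift per step
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        path.append((x, y, heading))
    return path

def apply_curvature(path, gain, step_len=0.1):
    """Inverse-map a virtual path to a physical path under a curvature gain
    (radians of injected rotation per metre walked)."""
    x, y, heading, prev, phys = 0.0, 0.0, 0.0, 0.0, []
    for vx, vy, vheading in path:
        heading += (vheading - prev) + gain * step_len
        prev = vheading
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        phys.append((x, y))
    return phys

def score(phys_path, half_extent=2.0):
    """Penalize steps that leave a square tracked space centred at the origin."""
    return -sum(1 for x, y in phys_path
                if abs(x) > half_extent or abs(y) > half_extent)

def select_gain(gains=(-0.3, -0.15, 0.0, 0.15, 0.3), n_walks=200):
    walks = [simulate_virtual_walk() for _ in range(n_walks)]
    # Pick the gain with the best average score over all simulated walks.
    return max(gains, key=lambda g:
               sum(score(apply_curvature(w, g)) for w in walks) / n_walks)

print(select_gain())
```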

A Systematic Literature Review of Virtual Reality Locomotion Taxonomies

Invited Journal

Lisa Marie Prinz, Tintu Mathew, Benjamin Weyers

The change of the user's viewpoint in an immersive virtual environment, called locomotion, is one of the key components of a virtual reality interface. The effects of locomotion, such as simulator sickness or disorientation, depend on the specific design of the locomotion method and can influence task performance as well as the overall acceptance of the virtual reality system. Thus, it is important that a locomotion method achieves the intended effects. The complexity of this task has increased with the growing number of locomotion methods and design choices in recent years. Locomotion taxonomies are classification schemes that group multiple locomotion methods and can aid in the design and selection of locomotion methods. Like locomotion methods themselves, multiple locomotion taxonomies exist, each with a different focus and, consequently, a different possible outcome. However, there is little research that focuses on locomotion taxonomies. We performed a systematic literature review to provide an overview of existing locomotion taxonomies and an analysis of decision criteria such as impact, common elements, and use cases for locomotion taxonomies. We aim to support future research on the design, choice, and evaluation of locomotion taxonomies, and thereby on virtual reality locomotion.

Revisiting Walking-in-Place by Introducing Step-Height Control, Elastic Input, and Pseudo-Haptic Feedback

Invited Journal

Yutaro Hirao, Takuji Narumi, Ferran Argelaguet, Anatole Lécuyer

Walking-in-place (WIP) is a locomotion technique that enables users to 'walk infinitely' through vast virtual environments using walking-like gestures within a limited physical space. This paper investigates alternative interaction schemes for WIP, addressing successively the control, input, and output of WIP. First, we introduce a novel height-based control to increase the achievable virtual speed. Second, we introduce a novel input system for WIP based on elastic, passive strips. Third, we introduce pseudo-haptic feedback as a novel output for WIP meant to alter walking sensations. The results of a series of user studies show, first, that height- and frequency-based control of WIP can facilitate higher virtual speed with greater efficacy and ease than frequency-based control alone. Second, an upward elastic input system can yield stable virtual speed control, although excessively strong elastic forces may impair usability and user experience. Finally, a pseudo-haptic approach can improve the perceived realism of virtual slopes. Taken together, our results suggest that, for future VR applications, there is value in further research into alternative interaction schemes for walking-in-place.

Session: Audio

Monday, March 27, 2023, 13:30, Shanghai UTC+8, Room C

Session Chair: Yue Li

Lightweight Scene-aware Rain Sound Simulation for Interactive Virtual Environments

Conference

Haonan Cheng, Shiguang Liu, Jiawan Zhang

This paper proposes a lightweight sound synthesis method for generating scene-aware rain sounds at interactive rates while reducing memory requirements. First, an additive synthesis method in the frequency domain, based on an exponential moving average, is designed to extend and modify pre-computed basic rain sounds. Second, an efficient binaural rendering method based on a set of Near-Field Transfer Functions is proposed to simulate 3D perception coherent with the visual scene. Various results demonstrate that the proposed method dramatically improves the performance of rain sound generation synchronized with the visual scene in terms of memory and speed.
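
As a rough illustration of exponential-moving-average smoothing in frequency-domain additive synthesis, here is a minimal sketch; the partial frequencies, band count, and smoothing factor are assumptions, not the paper's precomputed rain sounds.

```python
import numpy as np

SR = 44100
freqs = np.linspace(200.0, 8000.0, 32)      # 32 sinusoidal partials (assumed)

def synthesize(targets_per_block, block=1024, alpha=0.05):
    """targets_per_block: desired per-partial amplitudes, shape (B, 32)."""
    amp = np.zeros_like(freqs)
    phase = np.zeros_like(freqs)
    out = []
    for target in targets_per_block:
        # Exponential moving average keeps amplitude changes click-free.
        amp = (1.0 - alpha) * amp + alpha * target
        t = np.arange(block) / SR
        inc = 2.0 * np.pi * freqs[:, None] * t[None, :]
        out.append((amp[:, None] * np.sin(phase[:, None] + inc)).sum(axis=0))
        # Advance phases so consecutive blocks stay continuous.
        phase = (phase + 2.0 * np.pi * freqs * block / SR) % (2.0 * np.pi)
    return np.concatenate(out)

# Example: ramp the rain intensity up over 100 blocks (~2.3 s of audio).
targets = np.linspace(0.0, 0.02, 100)[:, None] * np.random.rand(100, 32)
audio = synthesize(targets)
```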

ConeSpeech: Exploring Directional Speech Interaction for Multi-Person Remote Communication in Virtual Reality

Journal

Yukang Yan, Haohua Liu, Yingtian Shi, Jingying Wang, Ruici Guo, Zisu Li, Xuhai Xu, Chun Yu, Yuntao Wang, Yuanchun Shi

We present ConeSpeech, a virtual reality (VR) based multi-user remote communication technique that enables users to selectively speak to target listeners without distracting bystanders. With ConeSpeech, the user looks at the target listener, and only listeners within a cone-shaped area along that direction can hear the speech. We conducted a user study to determine the modality for controlling the cone-shaped delivery area. We then implemented the technique and evaluated its performance in three typical multi-user communication tasks by comparing it to two baseline methods. Results show that ConeSpeech balanced the convenience and flexibility of voice communication.
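
A minimal sketch of the kind of cone test such a technique implies; the half-angle and the geometry are assumptions for illustration, not ConeSpeech's actual parameters.

```python
import numpy as np

def can_hear(speaker_pos, gaze_dir, listener_pos, half_angle_deg=30.0):
    """True if the listener falls inside the speech cone along the gaze direction."""
    to_listener = np.asarray(listener_pos, float) - np.asarray(speaker_pos, float)
    dist = np.linalg.norm(to_listener)
    if dist == 0.0:
        return True
    cos_angle = np.dot(to_listener / dist,
                       np.asarray(gaze_dir, float) / np.linalg.norm(gaze_dir))
    # Inside the cone iff the angle to the gaze axis is below the half-angle.
    return cos_angle >= np.cos(np.radians(half_angle_deg))

print(can_hear((0, 0, 0), (0, 0, 1), (0.3, 0, 2.0)))   # True: inside the cone
print(can_hear((0, 0, 0), (0, 0, 1), (3.0, 0, 1.0)))   # False: off to the side
```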

The Design Space of the Auditory Representation of Objects and their Behaviours in Virtual Reality for Blind People

Journal

João Guerreiro, Yujin Kim, Rodrigo Nogueira, SeungA Chung, André Rodrigues, Uran Oh

VR is typically designed around the visual experience, posing major challenges for blind people to understand and interact with the environment. We propose a design space for augmenting objects and their behaviours in VR with an audio representation. It is intended to support designers in creating accessible experiences by explicitly considering alternatives to visual feedback. We recruited 16 blind users and explored the design space under two scenarios in the context of boxing (defend and attack). This exploration resulted in multiple engaging approaches and revealed shared preferences but no one-size-fits-all solution, suggesting the need to understand the consequences of each design choice and its impact on the individual user experience.

Emotional Voice Puppetry

Journal

Ye Pan, Ruisi Zhang, Shengran Cheng, Shuai Tan, Yu Ding, Kenny Mitchell, Xubo Yang

The paper presents emotional voice puppetry, an audio-based facial animation approach for portraying characters with vivid emotional changes. The lip motion and the surrounding facial areas are controlled by the content of the audio, and the facial dynamics are established by the category and intensity of the emotion. Our approach is distinctive in that it accounts for perceptual validity as well as geometry, rather than relying on purely geometric processes. Another highlight of our approach is its generalizability to multiple characters. User studies demonstrate the effectiveness of our approach both qualitatively and quantitatively.

Persuasive Vibrations: Effects of Speech-Based Vibrations on Persuasion, Leadership, and Co-Presence During Verbal Communication in VR

Conference

Justine Saint-Aubert, Ferran Argelaguet Sanz, Claudio Pacchierotti, Marc J-M Macé, Amir Amedi, Anatole Lécuyer

Our paper investigates how tactile feedback consisting of vibrations synchronized with speech can influence persuasion, co-presence, and leadership in VR. In a first experiment, participants listened to two speaking virtual agents, and the speech of one agent was augmented with vibrotactile feedback. In a second experiment, participants talked to two agents, and their own speech was or was not augmented with vibrotactile feedback. Interestingly, the results show that vibrotactile feedback improves the co-presence, persuasiveness, and leadership of agents when listening to them. It also improves co-presence when speaking, and participants perceive their own speech as more persuasive.

Session: Rendering 1

Monday, March 27, 2023, 14:45, Shanghai UTC+8, Room A

Session Chair: Xin Yang

ShadowMover: Automatically Projecting Real Shadows onto Virtual Object

Journal

Piaopiao Yu, Jie Guo, Fan Huang, Zhenyu Chen, Chen Wang, Yan Zhang, Yanwen Guo

Inserting 3D virtual objects into real-world images has many applications in photo editing and augmented reality. We present the first end-to-end solution to fully automatically project real shadows onto virtual objects for outdoor scenes. In our method, we introduce the Shifted Shadow Map, a new shadow representation that encodes the binary mask of shifted real shadows after inserting virtual objects in an image. Based on the shifted shadow map, we propose a CNN-based shadow generation model named ShadowMover which first predicts the shifted shadow map for an input image and then automatically generates plausible shadows on any inserted virtual object.

Add-on Occlusion: Turning Off-the-Shelf Optical See-through Head-mounted Displays Occlusion-capable

Journal

Yan Zhang, Xiaodan Hu, Kiyoshi Kiyokawa, Xubo Yang

Occlusion-capable optical see-through head-mounted displays (OC-OSTHMDs) have been actively developed in recent years, and virtual content is demonstrated to have higher quality when displayed with occlusion patterns. However, requiring this specific type of OSTHMD keeps the appealing feature from wide application. In this paper, we propose a novel approach for realizing occlusion on common OSTHMDs: a wearable device with per-pixel occlusion capability is designed, and off-the-shelf OSTHMDs are upgraded to be occlusion-capable by attaching the device in front of their optical combiners. We built a prototype with a HoloLens 1. The proposed system is expected to enable a universal implementation of occlusion in augmented reality (AR).

NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields

Journal

Liangchen Song, Anpei Chen, Zhong Li, Zhang Chen, Lele Chen, Junsong Yuan, Yi Xu, Andreas Geiger

An efficient framework, named NeRFPlayer, has been developed for fast reconstruction, compact modeling, and streamable rendering of 4D spatiotemporal spaces in VR. The framework decomposes the 4D space into three categories: static, deforming, and new areas, each represented by a separate neural field. A feature streaming scheme based on hybrid representations is used to model the neural fields efficiently. Our method is evaluated on dynamic scenes captured by single hand-held cameras and multi-camera arrays, achieving comparable or superior rendering performance in terms of quality and speed, with reconstruction taking 10 seconds per frame.

Integrating Both Parallax and Latency Compensation into Video See-Through Head-Mounted Display

Journal

Atsushi Ishihara, Hiroyuki Aga, Yasuko Ishihara, Hirotake Ichikawa, Hidetaka Kaji, Koichi Kawasaki, Daita Kobayashi, Toshimi Kobayashi, Ken Nishida, Takumi Hamasaki, Hideto Mori, Yuki Morikubo

This study incorporates both parallax and latency compensation methods into a video see-through head-mounted display, to realize edge-preserving occlusion. To reconstruct captured images, we reproject the images captured by the color camera to the user's eye position, using depth maps. We fill the disocclusion areas using cached depth maps estimated in previous frames, instead of relying upon computationally heavy inpainting procedures. For occlusion, we refine the edges of the depth maps using both infrared masks and color-guided filters. For latency compensation, we propose a two-phase temporal warping method. It is found to be not only fast but also spatially correct for static scenes.

GeoSynth: A Photorealistic Synthetic Indoor Dataset for Scene Understanding

Journal

Brian Pugh, Davin Chernak, Salma Jiddi

Deep learning has revolutionized many scene perception tasks over the past decade. Some of these improvements can be attributed to the development of large labeled datasets. The creation of such datasets can be an expensive, time-consuming, and imperfect process. To address these issues, we introduce GeoSynth, a diverse photorealistic synthetic dataset for indoor scene understanding tasks. Each GeoSynth exemplar contains rich labels including segmentation, geometry, camera parameters, surface material, lighting, and more. We demonstrate that supplementing real training data with GeoSynth can significantly improve network performance on perception tasks, like semantic segmentation.

Session: Cybersickness and Social Emotional

Monday, March 27, 2023, 14:45, Shanghai UTC+8, Room B

Session Chair: Yue Liu

Cybersickness, Cognition, & Motor Skills: The Effects of Music, Gender, and Gaming Experience

Journal

Panagiotis Kourtesis, Rayaan Amir, Josie Linnell, Ferran Argelaguet, Sarah MacPherson

This paper examines the effects of cybersickness on cognitive, motor, and reading performance in VR, and evaluates the mitigating effects of music on cybersickness as well as the roles of gender and of the user's computing, VR, and gaming experience. Joyful and calming music substantially decreased the intensity of nausea-related symptoms, but only joyful music significantly reduced overall cybersickness intensity. Cybersickness decreased verbal working memory performance and pupil size, and slowed reaction time and reading speed. Gaming experience was negatively correlated with cybersickness. Cybersickness intensity did not differ between female and male participants with comparable gaming experience.

Effect of Frame Rate on User Experience, Performance, and Simulator Sickness in Virtual Reality

Journal

Jialin Wang, Rongkai Shi, Wenxuan Zheng, Weijie Xie, Dominic Kao, Hai-Ning Liang

This work fills a gap in our understanding of how the frame rate of virtual reality (VR) head-mounted displays (HMDs) affects users' experience, performance, and simulator sickness (SS) symptoms. We report the findings of a study with two VR applications that compared four frame rates (60, 90, 120, and 180 frames per second (fps)). Our results show that 120 fps is an important threshold: at 120 fps and above, users tend to experience milder SS symptoms without a significant negative effect on their experience and performance. Higher frame rates (e.g., 120 and 180 fps) can ensure better user performance than lower rates.

Intentional Head-Motion Assisted Locomotion for Reducing Cybersickness

Invited Journal

Zehui Lin, Xiang Gu, Sheng Li, Zhiming Hu, Guoping Wang

We present an efficient locomotion technique that can reduce cybersickness by aligning the visually and vestibularly induced self-motion illusions. Our locomotion technique stimulates proprioception consistent with the visual sense through intentional head motion, which includes both the head's translational movement and yaw rotation. A locomotion event is triggered by the hand-held controller together with an intended physical head motion. Based on our method, we further explore the connections between the level of cybersickness and the velocity of self-motion through a series of experiments. We first conduct Experiment 1 to investigate the cybersickness induced by different translation velocities using our method, and then conduct Experiment 2 to investigate the cybersickness induced by different angular velocities. Our user studies from these two experiments reveal a new finding on the correlation between translation/angular velocities and the level of cybersickness. Cybersickness is greatest at the lowest velocity using our method, and the statistical analysis also indicates a possible U-shaped relation between translation/angular velocity and cybersickness degree. Finally, we conduct Experiment 3 to evaluate our method against other commonly used locomotion approaches, i.e., joystick-based steering and teleportation. The results show that our method can significantly reduce cybersickness compared with joystick-based steering and achieves higher presence compared with teleportation. These advantages demonstrate that our method can be an optional locomotion solution for immersive VR applications using only commercially available HMD suites.

Mitigation of VR Sickness during Locomotion with a Motion-Based Dynamic Vision Modulator

Invited Journal

Guanghan Zhao, Jason Orlosky, Steven Feiner, Photchara Ratsamee, Yuki Uranishi

In virtual reality, VR sickness resulting from continuous locomotion via controllers or joysticks is still a significant problem. In this paper, we present a set of algorithms to mitigate VR sickness that dynamically modulate the user’s field of view by modifying the contrast of the periphery based on movement, color, and depth. In contrast with previous work, this vision modulator is a shader that is triggered by specific motions known to cause VR sickness, such as acceleration, strafing, and linear velocity. Moreover, the algorithm is governed by delta velocity, delta angle, and average color of the view. We ran two experiments with different washout periods to investigate the effectiveness of dynamic modulation on the symptoms of VR sickness, in which we compared this approach against baseline and pitch-black field-of-view restrictors. Our first experiment made use of a just-noticeable-sickness design, which can be useful for building experiments with a short washout period.

The Effects of Spatial Complexity on Narrative Experience in Space-Adaptive AR Storytelling

Invited Journal

Jae-eun Shin, Boram Yoon, Dooyoung Kim, Woontack Woo

A critical yet unresolved challenge in designing space-adaptive narratives for Augmented Reality (AR) is to provide consistently immersive user experiences anywhere, regardless of physical features specific to a space. To this end, we present a comprehensive analysis of a series of user studies investigating how the size, density, and layout of real indoor spaces affect users playing Fragments, a space-adaptive AR detective game. Based on these studies, we assert that the moderate levels of traversability and visual complexity afforded by counteracting combinations of size and complexity benefit narrative experience. To confirm our argument, we combined the experimental data of the studies (n=112) to compare how five different spatial complexity conditions impact narrative experience when applied to contrasting room sizes. Results show that whereas factors of narrative experience are rated significantly higher in relatively simple settings for a small space, they are less affected by complexity in a large space. Ultimately, we establish guidelines on the design and placement of space-adaptive augmentations in location-independent AR narratives to compensate for the lack or excess of affordances in various real spaces and enhance user experiences therein.

Session: Rendering 2

Monday, March 27, 2023, 16:00, Shanghai UTC+8, Room C

Session Chair: Jie Guo

Level-of-Detail AR: Dynamically Adjusting Augmented Reality Level of Detail Based on Visual Angle

Conference

Abby Wysopal, Vivian Ross, Joyce E Passananti, Kangyou Yu, Brandon Huynh, Tobias Höllerer

We present a Level-of-Detail AR mechanism that dynamically renders textual and interactable content based on the visual angle an application subtends, taking into account legibility, interactability, and viewability. When tested, our mechanism functioned as intended out of the box on 44 of the 45 standard user-interface Unity prefabs in Microsoft's Mixed Reality Toolkit. We additionally evaluated the mechanism's impact on task performance, user distance, and subjective satisfaction through a mixed-design user study with 45 participants. Statistical analysis of our results revealed significant task-dependent differences in user performance between the modes. User satisfaction was consistently higher for the Level-of-Detail AR condition.
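
To make the visual-angle idea concrete, here is a small sketch of angle computation and level-of-detail thresholding; the detail tiers and threshold values are invented for illustration and are not the paper's.

```python
import math

def visual_angle_deg(object_width_m, distance_m):
    """Angle the object subtends at the eye, in degrees."""
    return math.degrees(2.0 * math.atan2(object_width_m / 2.0, distance_m))

def select_lod(object_width_m, distance_m):
    angle = visual_angle_deg(object_width_m, distance_m)
    if angle < 1.0:
        return "icon"          # too small: text would be illegible
    if angle < 5.0:
        return "summary"       # show the title only, hide fine-grained controls
    return "full"              # legible and interactable at this distance

print(select_lod(0.4, 10.0))   # a 0.4 m panel seen from 10 m -> "summary"
```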

Where to Render: Studying Renderability for IBR of Large-Scale Scenes

Conference

Zimu Yi, Ke Xie, Jiahui Lyu, Minglun Gong, Hui Huang

In this work, we introduce the concept of Renderability, which predicts the quality of image-based rendering (IBR) results at any given viewpoint and view direction. Consequently, the renderability values evaluated for the 5D camera parameter space form a field, which effectively guides viewpoint/trajectory selection for IBR, especially for challenging large-scale 3D scenes.

Delta Path Tracing for Real-Time Global Illumination in Mixed Reality

Conference

Yang Xu, Yuanfa Jiang, Shibo Wang, Kang Li, Guohua Geng

Visual coherence between real and virtual objects is important in mixed reality. However, the illumination change produced by inserted virtual objects is difficult to compute in real time due to heavy computational demands. In this work, we propose delta path tracing (DPT), which computes only the radiance blocked by virtual objects from the light sources at the primary hit points of path tracing. DPT reduces the number of direct illumination queries and avoids rendering the scene twice, improving performance. We implement DPT using hardware-accelerated ray tracing on modern GPUs, and the results demonstrate that our method produces plausible visual coherence between real and virtual objects at real-time frame rates.
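
A self-contained toy version of the blocked-radiance query performed at primary hit points, with a single virtual sphere occluder and a point light (Lambertian shading; an illustration, not the authors' GPU implementation). The composited pixel would be the captured pixel minus this delta, so the real scene never needs to be rendered twice.

```python
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """Standard ray-sphere intersection (direction must be unit length)."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-4 else None

def blocked_direct_light(hit_point, normal, light_pos, light_intensity,
                         sphere_center, sphere_radius, albedo=0.8):
    """Direct radiance at a real-surface hit point that the virtual sphere blocks."""
    to_light = light_pos - hit_point
    dist = np.linalg.norm(to_light)
    wi = to_light / dist
    # Trace the shadow ray against *virtual* geometry only: real-world
    # occlusion is already baked into the captured camera image.
    t = ray_hits_sphere(hit_point, wi, sphere_center, sphere_radius)
    if t is None or t > dist:
        return 0.0          # light not blocked by the virtual object
    cos_theta = max(np.dot(normal, wi), 0.0)
    return albedo / np.pi * light_intensity * cos_theta / dist**2

p = np.array([0.0, 0.0, 0.0]); n = np.array([0.0, 1.0, 0.0])
print(blocked_direct_light(p, n, np.array([0.0, 3.0, 0.0]), 50.0,
                           np.array([0.0, 1.5, 0.0]), 0.5))
```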

Style-aware Augmented Virtuality Embeddings (SAVE)

Conference

Johannes Hoster, Dennis Ritter, Kristian Hildebrand

We present an augmented virtuality (AV) pipeline that enables the user to interact with real-world objects through stylised representations which match the VR scene and thereby preserve immersion. It consists of three stages: First, the object of interest is reconstructed from images and corresponding camera poses recorded with the VR headset, or alternatively a retrieval model finds a fitting mesh from the ShapeNet dataset. Second, a style transfer technique adapts the mesh to the VR game scene in order to preserve consistent immersion. Third, the stylised mesh is superimposed on the real object in real time to ensure interactivity even if the real object is moved. Our pipeline serves as a proof of concept for style-aware AV embeddings.

LFACon: Introducing Anglewise Attentions to Light Field Space in No-reference Quality Assessment

Journal

Qiang Qu, Xiaoming Chen, Yuk Ying Chung, Weidong Cai

Compared to 2D image assessment, light field image quality assessment (LFIQA) needs to consider not only the image quality in the spatial domain but also the quality consistency in the angular domain. In this paper, we propose the novel concept of 'anglewise attention' by introducing a multihead self-attention mechanism to the angular domain of a light field image (LFI). This mechanism better captures LFI quality. Based on the proposed anglewise attention, we further propose our light field attentional convolutional neural network (LFACon) as an LFIQA metric. Our experimental results show that the proposed LFACon metric significantly outperforms the state-of-the-art LFIQA metrics.
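
Schematically, anglewise attention amounts to self-attention across the angular (view) dimension of a light field. A minimal PyTorch sketch under assumed shapes (a 7x7 angular grid with 128-D per-view features) follows; the real LFACon network is considerably richer.

```python
import torch
import torch.nn as nn

# (batch, angular views, feature dim): one feature vector per sub-aperture view.
B, A, D = 2, 49, 128                      # assumed: 7x7 angular grid, 128-D features
features = torch.randn(B, A, D)

attn = nn.MultiheadAttention(embed_dim=D, num_heads=4, batch_first=True)
out, weights = attn(features, features, features)   # self-attention across views

print(out.shape)       # torch.Size([2, 49, 128]): attended per-view features
print(weights.shape)   # torch.Size([2, 49, 49]): view-to-view attention weights
```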

Session: 360 Video, 3D Video and Applications

Monday, March 27, 2023, 16:00, Shanghai UTC+8, Room A

Session Chair: Fei Hou

Introducing 3D Thumbnails to Access 360-Degree Videos in Virtual Reality

Journal

Alissa Vermast, Wolfgang Hürst

Interfaces for accessing datasets of 360-degree videos in VR almost always use 2D thumbnails to represent them, even though the data is inherently three-dimensional. In a comparative study, we examined whether spherical and cube-shaped 3D thumbnails provide a better user experience and are more effective at conveying the high-level subject matter of a video or at supporting search for a specific item within it. Results show that traditional 2D equirectangular projections still performed better for high-level classification tasks but were outperformed by spherical thumbnails when participants had to search for details within the videos.

Masked360: Enabling Robust 360-degree Video Streaming with Ultra Low Bandwidth Consumption

Journal

Zhenxiao Luo, Baili Chai, Zelong Wang, Miao Hu, Di Wu

We propose a practical neural-enhanced 360-degree video streaming framework called Masked360, which can significantly reduce bandwidth consumption and achieve robustness against packet loss. In Masked360, instead of transmitting the complete video frame, the video server transmits only a masked low-resolution version of each video frame, reducing bandwidth significantly. The client then reconstructs the original 360-degree video frames with a lightweight neural network model. To further improve the quality of video streaming, we also propose a set of optimization techniques, such as complexity-based patch selection, a quarter masking strategy, redundant patch transmission, and enhanced model training methods.

Wavelet-Based Fast Decoding of 360 Videos

Journal

Colin Groth, Sascha Fricke, Susana Castillo, Marcus Magnor

In this paper, we propose a wavelet-based video codec that enables real-time playback of high-resolution 360-degree videos. Our codec streams the relevant content directly from the drive and decodes the video in a viewport-dependent manner. Due to its specific design and its exploitation of the wavelet transform for both intra- and inter-frame transforms, our codec's decoding performance is up to 272% faster than that of state-of-the-art video codecs. Finally, we demonstrate how our wavelet-based codec can be used directly in conjunction with foveation for further performance increases.

CaV3: Cache-assisted Viewport Adaptive Volumetric Video Streaming

Conference

Junhua Liu, Boxiang Zhu, Fangxin Wang, Yili Jin, Wenyi Zhang, Zihan Xu, Shuguang Cui

Volumetric video (VV) has recently emerged as a new form of video application providing a photorealistic, immersive 3D viewing experience. Existing work has mostly focused on predicting the viewport for tiling-based adaptive VV streaming, which, however, has only a limited effect on resource savings. We argue that the content repeatability in the viewport can be further leveraged and, for the first time, propose a client-side cache-assisted strategy that buffers VV tiles expected to reappear in the near future, reducing redundant content transmission. Extensive evaluation on the dataset confirms the superiority of CaV3, which outperforms the state-of-the-art algorithm by 15.6%-43% in viewport prediction and 13%-40% in system utility.

Scaling VR Video Conferencing

Conference

Mallesham Dasari, Edward Lu, Michael W Farb, Nuno Pereira, Ivan Liang, Anthony Rowe

Virtual reality platforms are being challenged to support live performances, sporting events, and conferences with thousands of users across seamless virtual worlds. Current systems struggle to meet these demands, which has led to high-profile events with groups of users isolated in parallel sessions. To this end, we present an architecture that supports hundreds of users in a single virtual environment. We leverage the property of spatial locality with two key optimizations: 1) a Quality of Service scheme that prioritizes traffic based on users' locality, and 2) a resource manager that allocates client connections across multiple servers based on user proximity. Through extensive evaluations, we demonstrate the scalability of our platform.
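
One way to picture locality-based traffic prioritization is a per-peer update rate that falls off with distance; the rate tiers and distance cut-offs below are invented for illustration and are not the paper's actual scheme.

```python
import math

def update_rate_hz(listener_pos, speaker_pos,
                   near=5.0, far=30.0, max_hz=30.0, min_hz=1.0):
    """Send frequent avatar/audio updates for nearby users, sparse ones for far users."""
    d = math.dist(listener_pos, speaker_pos)
    if d <= near:
        return max_hz
    if d >= far:
        return min_hz
    # Linear fall-off between the near and far radii.
    f = (d - near) / (far - near)
    return max_hz + f * (min_hz - max_hz)

print(update_rate_hz((0, 0, 0), (3, 0, 0)))    # 30 Hz: same conversation group
print(update_rate_hz((0, 0, 0), (50, 0, 0)))   # 1 Hz: distant spectator
```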

Session: Locomotion 2

Monday, March 27, 2023, 16:00, Shanghai UTC+8, Room B

Session Chair: Miao Wang

Tell Me Where To Go: Voice Controlled Hands-Free Locomotion for Virtual Reality Systems

Conference

Jan Niklas Hombeck, Henrik Voigt, Timo Heggemann, Rabi R. Datta, Kai Lawonn

As locomotion is an important factor in improving Virtual Reality (VR) immersion and usability, research in this area has been, and continues to be, crucial to the success of VR applications. In recent years, a variety of techniques have been developed and evaluated, ranging from abstract control, vehicle, and teleportation techniques to more realistic techniques based on motion, gestures, and gaze. However, in hands-free scenarios, for example to increase the overall accessibility of an application or in medical settings under sterile conditions, most of the aforementioned techniques cannot be applied. This is where speech comes in as an intuitive means of navigation.

Investigating Guardian Awareness Techniques to Promote Safety in Virtual Reality

Conference

Sixuan Wu, Jiannan Li, Mauricio Sousa, Tovi Grossman

Virtual Reality (VR) can immerse users in a virtual world while providing little awareness of the physical environment. Current VR technologies use predefined guardians to set safety boundaries, but bystanders cannot perceive these boundaries and may collide with VR users if they enter the guardian area. We investigate four techniques to help bystanders avoid invading guardians. These techniques include augmented reality overlays and visual, auditory, and haptic alerts indicating a bystander's distance from the guardian. Our findings suggest the techniques effectively keep participants clear of the safety boundaries. With augmented reality overlays, participants could avoid guardians in less time, and haptic alerts caused fewer distractions.

Redirected Walking Based on Historical User Walking Data

Conference

Cheng-Wei Fan, Sen-Zhe Xu, Peng Yu, Fang-Lue Zhang, Song-Hai Zhang

This paper proposes a novel Redirected Walking (RDW) method that improves the effect of real-time unrestricted RDW by analyzing and utilizing the user's historical walking data. Using the weighted directed graph obtained from the user's historical walking data, we update the scores of different reachable poses and guide the user to the optimal target pose. Since simulation experiments have been shown to be effective in many previous RDW studies, we also provide a method to simulate user walking trajectories and generate a dataset. Experiments show that our method outperforms multiple state-of-the-art methods in various environment layouts.

Gaining the High Ground: Teleportation to Mid-Air Targets in Immersive Virtual Environments

Journal

Tim Weissker, Pauline Bimberg, Aalok Shashidhar Gokhale, Torsten Wolfgang Kuhlen, Bernd Froehlich

We present three teleportation techniques that enable the user to travel not only to ground-based but also to mid-air targets. The techniques differ in the extent to which elevation changes are integrated into the conventional target selection process. Elevation can be specified either simultaneously, as a connected second step, or separately from horizontal movements. A user study indicated a trade-off between the simultaneous method with the highest accuracy and the two-step method with the lowest task load. The separate method was least suitable on its own. Based on our findings, we define initial design guidelines for mid-air navigation techniques.

FREE-RDW: A Multiuser Redirected Walking Method for Supporting Nonforward Steps

Journal

Tianyang Dong, Tieqi Gao, Yinyan Dong, Liming Wang, Kefan Hu, Jing Fan

Redirected walking (RDW) algorithms supporting non-forward steps can enrich the movement directions available during users' virtual roaming. In addition, non-forward motions have a greater curvature gain, which can be used to reduce resets in RDW. This paper presents a new multi-user RDW method supporting non-forward steps, which adds sideward and backward steps to extend virtual reality (VR) locomotion. Our method adopts a user collision avoidance strategy based on optimal reciprocal collision avoidance (ORCA) and optimizes it as a linear programming problem to obtain the optimal velocity for users. The experiments show that our method performs well in virtual scenes with forward and non-forward steps.

Session: Gaze, Haptics and Foveated Rendering

Tuesday, March 28, 2023, 8:30, Shanghai UTC+8, Room A

Session Chair: Sheng Li

Locomotion-aware Foveated Rendering

Conference

Xuehuai Shi, Lili Wang, Jian Wu, Wei Ke, Chan-Tong Lam

We collect and analyze the viewing motion of different locomotion methods, describe the effects of these viewing motions on the human visual system's (HVS) sensitivity, and identify the advantages these effects may bring to foveated rendering. We then propose the locomotion-aware foveated rendering method (LaFR), which leverages these advantages to further accelerate foveated rendering. LaFR achieves perceptual visual quality similar to that of conventional foveated rendering while providing up to a 1.6x speedup.

Power, Performance, and Image Quality Tradeoffs in Foveated Rendering

Conference

Rahul Singh, Muhammad Huzaifa, Jeffrey Liu, Anjul Patney, Hashim Sharif, Yifan Zhao, Sarita Adve

In this paper, we study the tradeoffs between fixed foveated rendering (FFR), gaze-tracked foveated rendering (TFR), and conventional rendering. We provide the first comprehensive study of their relative feasibility in practical systems with limited battery life and computational budget. We show that TFR, with the added cost of the gaze tracker, can often be more expensive than FFR. Thus, we co-design a gaze-tracked foveated renderer, describing approximations for eye tracking that provide up to a 9x speedup in runtime with about a 20x improvement in energy efficiency on a mobile GPU. Overall, with our technique, TFR becomes feasible compared to FFR, resulting in up to 1.25x faster frame times while also reducing total energy consumption by over 40%.

Privacy-preserving datasets of eye-tracking samples with applications in XR

Journal

Brendan David-John, Kevin Butler, Eakta Jain

XR technology has advanced significantly in recent years and will enable the future of work, education, socialization, and entertainment. Eye-tracking data supports XR interaction, animating virtual avatars, and rendering optimizations. While eye tracking enables beneficial applications, it also introduces a privacy risk by enabling re-identification of users. We applied privacy definitions of k-anonymity and plausible deniability (PD) to eye-tracking sample datasets and evaluated them against differential privacy (DP). Our results suggest that both PD and DP produced practical privacy-utility trade-offs between re-identification and activity classification accuracy, while k-anonymity performed best at retaining utility for gaze prediction.
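
For intuition, here is a simplified sketch of k-anonymity applied to gaze traces: each released trace is shared by at least k users, so re-identification can do no better than guessing among k. The random grouping stands in for the similarity-based aggregation a real mechanism would use; it is an illustration, not the paper's exact method.

```python
import numpy as np

def k_anonymize_gaze(samples, k=3):
    """samples: (n_users, n_timesteps, 2) gaze angles (yaw, pitch) in degrees."""
    n_users = samples.shape[0]
    order = np.random.permutation(n_users)   # stand-in for similarity-based grouping
    released = samples.copy()
    groups = [order[i:i + k] for i in range(0, n_users, k)]
    if len(groups) > 1 and len(groups[-1]) < k:
        last = groups.pop()                  # fold any remainder into the last group
        groups[-1] = np.concatenate([groups[-1], last])
    for group in groups:
        # Every user in the group releases the same averaged trace.
        released[group] = samples[group].mean(axis=0)
    return released

gaze = np.random.randn(9, 100, 2) * 5.0
print(k_anonymize_gaze(gaze, k=3).shape)     # (9, 100, 2)
```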

Analyzing the Effect of Diverse Gaze and Head Direction on Facial Expression Recognition with Photo-Reflective Sensors Embedded in a Head-Mounted Display

Invited Journal

Fumihiko Nakamura, Masaaki Murakami, Katsuhiro Suzuki, Masaaki Fukuoka, Katsutoshi Masai, Maki Sugimoto

Embedded photo-reflective sensors have been used as a facial expression recognition technique for head-mounted display (HMD) users. In this paper, we investigate how gaze and face directions affect facial expression recognition using these embedded photo-reflective sensors. First, we collected a dataset of five facial expressions (Neutral, Happy, Angry, Sad, Surprised) while participants looked in diverse directions by moving 1) the eyes and 2) the head. Using the dataset, we analyzed the effect of gaze and face directions by constructing facial expression classifiers in five ways and evaluating the classification accuracy of each classifier. The results revealed that a single classifier that learned the data for all gaze points achieved the highest classification performance. We then investigated which facial parts were affected by gaze and face direction. The results showed that gaze directions affected the upper facial parts, while face directions affected the lower facial parts. In addition, by removing the bias of facial expression reproducibility, we investigated the pure effect of gaze and face directions under three conditions. The results showed that, in terms of gaze direction, building classifiers for each direction significantly improved classification accuracy. However, in terms of face direction, there were only slight differences between the classifier conditions. Our experimental results imply that multiple classifiers corresponding to multiple gaze and face directions improve facial expression recognition accuracy, but that collecting data on the vertical movement of gaze and face is a practical solution for improving facial expression recognition accuracy.

When Tangibles become Deformable: Studying Pseudo-Stiffness Perceptual Thresholds in a VR Grasping Task

Journal

Elodie Bouzbib, Claudio Pacchierotti, Anatole Lécuyer

Pseudo-haptic techniques leverage users' visual dominance over haptics to alter their perception, but they are limited by perceptual thresholds. In this paper, we estimate perceptual thresholds for pseudo-stiffness in a VR grasping task. We conducted a user study (n = 15) to estimate whether compliance can be induced in non-compressible objects and to what extent. Our results show that (1) compliance can be induced in rigid objects and that (2) pseudo-haptics can simulate stiffness beyond 24 N/cm (from gummy bears up to rigid objects). Pseudo-stiffness efficiency is (3) enhanced by an object's scale and (4) correlated with input force. Our results open novel opportunities to simplify the design of haptic interfaces and to extend the haptic properties of props in VR.

Session: Cybersickness 1

Tuesday, March 28, 2023, 8:30, Shanghai UTC+8, Room B

Session Chair: Rob Lindeman

You Make Me Sick! The Effect of Stairs on Presence, Cybersickness, and Perception of Embodied Conversational Agents

Conference

Samuel Ang, Amanda Fernandez, Michael Rushforth, John Quarles

Virtual reality (VR) has many applications involving an embodied conversational agent (ECA), yet VR remains inaccessible to many users due to cybersickness: a collection of negative symptoms such as nausea and headache. Many factors are believed to affect cybersickness, but little is known about how such factors may influence users' opinions of ECAs. In our study, participants completed a navigation task and a conversation in Spanish with a virtual airport customs agent, first traversing either hallways or staircases. We collected ratings of cybersickness, presence, and the ECA, along with heart rate and galvanic skin response. Results indicate that staircases increased cybersickness and reduced perceived realism, but increased presence.

LiteVR: Interpretable and Lightweight Cybersickness Detection using Explainable AI

Conference

Ripan Kumar Kundu, Rifatul Islam, John Quarles, Khaza Anuarul Hoque

Cybersickness is a common ailment associated with virtual reality (VR) user experiences. In recent years, a plethora of research has proposed automated methods based on machine learning (ML) and deep learning (DL) to detect cybersickness. However, most of these detection methods are computationally intensive black boxes: they are poorly understood and impractical to deploy on standalone, energy-constrained VR head-mounted displays (HMDs). In this work, we present LiteVR, an explainable artificial intelligence (XAI)-based framework for cybersickness detection that explains the model's outcome while reducing feature dimensionality and overall computational cost.

An EEG-based Experiment on VR Sickness and Postural Instability While Walking in Virtual Environments

Conference

Carlos Alfredo Tirado Cortes, Chin-Teng Lin, Tien-Thong Nguyen Do, Hsiang-Ting Chen

This paper studies VR sickness and postural instability while the user walks in an immersive virtual environment using an electroencephalogram (EEG) headset and a full-body motion capture system. The experiment induced VR sickness by gradually increasing the translation gain beyond the user's detection threshold. The participants with VR sickness showed a reduction of alpha power, a phenomenon previously linked to a higher workload and efforts to maintain postural control. In contrast, those without VR sickness exhibited brain activities previously linked to fine cognitive-motor control. The EEG result suggests that participants with VR sickness could maintain their postural stability at the cost of a higher cognitive workload.

Like a Rolling Stone: Effects of Space Deformation During Linear Acceleration on Slope Perception and Cybersickness

Conference

Tongyu Nie, Isayas Berhe Adhanom, Evan Suma Rosenberg

The decoupled relationship between optical and inertial information in VR is commonly acknowledged as a major factor contributing to cybersickness. We observe that a slope naturally affords acceleration: the gravito-inertial force we experience when accelerating freely on a slope has the same relative direction and approximately the same magnitude as the gravity we experience when standing on the ground. In this paper, we present a novel space deformation technique that deforms the virtual environment to replicate the structure of a slope whenever the user accelerates virtually, restoring the relationship between optical and inertial information.
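
One plausible reading of this mapping, sketched below under our own assumptions (the paper may use a different formulation), pitches the virtual ground so that real gravity stands in for the gravito-inertial force felt during free acceleration on a slope:

    import math

    def slope_pitch_deg(virtual_accel_mps2, g=9.81):
        # For a stationary user, pitching the scene by atan(a/g) makes the
        # direction of real gravity relative to the visual ground match the
        # gravito-inertial force of accelerating at a on level ground.
        return math.degrees(math.atan2(virtual_accel_mps2, g))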

Enhanced Theta Activity in the Left Parietal Cortex May Defend Against VR Motion Sickness Attacks: A Pilot Study of EEG

Conference

Gang Li, Katharina Margareta Theresa Pöhlmann, Mark McGill, Chao Ping Chen, Stephen Anthony Brewster, Frank Pollick

VR motion sickness (VRMS) seriously hampers the adoption of VR-based services for daily life, such as VR healthcare and education. Based on existing datasets from a previous study, this paper investigates the differences in brainwaves between young adults who are relatively resistant and susceptible to VRMS. We found enhanced theta activity in the left parietal cortex in VRMS-resistant individuals (N=10) compared to VRMS-susceptible individuals (N=10). This finding offers new hypotheses regarding how to reduce VRMS by enhancing brain function per se (e.g., via non-invasive transcranial electrostimulation techniques) without the need to redesign existing VR content.

Session: Interaction 1

Tuesday, March 28, 2023, 8:30, Shanghai UTC+8, Room C

Session Chair: Huidong Bai

Toward Intuitive Acquisition of Fully Occluded VR Objects Through Direct Grab From a Disocclusion Mini-map

Conference

Mykola Maslych, Yahya Hmaiti, Ryan Ghamandi, Paige Leber, Ravi Kiran Kattoju, Jacob Belga, Joseph LaViola

Standard selection techniques such as ray casting fail when virtual objects are partially or fully occluded. In this paper, we present two novel approaches that combine cone-casting, world-in-miniature, and grasping metaphors to disocclude objects in the representation local to the user. Through a within-subject study where we compared 4 selection techniques across 3 levels of object occlusion, we found that our techniques outperformed an alternative one that also focuses on maintaining the spatial relationships between objects. We discuss application scenarios and future research directions for these types of selection techniques.
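
The cone-casting building block of such techniques fits in a few lines; the following sketch (ours, with assumed array shapes and threshold, not the paper's implementation) returns the objects inside a selection cone, nearest the cone axis first:

    import numpy as np

    def cone_cast(origin, direction, centers, half_angle_deg=15.0):
        # centers: (N, 3) object centers; returns indices of objects inside
        # the cone, sorted by angular offset from the cone axis.
        d = direction / np.linalg.norm(direction)
        v = centers - origin
        dist = np.maximum(np.linalg.norm(v, axis=1), 1e-9)
        ang = np.degrees(np.arccos(np.clip((v @ d) / dist, -1.0, 1.0)))
        hits = np.where(ang <= half_angle_deg)[0]
        return hits[np.argsort(ang[hits])]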

Measuring the Effect of Stereo Deficiencies on Peripersonal Space Pointing

Conference

Anil Ufuk Batmaz, Moaaz Hudhud Mughrabi, Mine Sarac, Mayra Donaji Barrera Machuca, Wolfgang Stuerzlinger

State-of-the-art Virtual Reality (VR) and Augmented Reality (AR) headsets rely on single-focal stereo displays. For objects away from the focal plane, such displays create a vergence-accommodation conflict (VAC), potentially degrading user interaction performance. In this paper, we study how the VAC affects pointing at targets within arm's reach using virtual-hand and raycasting interaction in current stereo display systems. We conducted a user study with eighteen participants; the results indicate that participants were faster and had higher throughput in the constant-VAC condition. We hope that our results enable designers to choose more efficient interaction methods in virtual environments.
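
To make the conflict concrete: the VAC is commonly quantified as the dioptric difference between the display's fixed focal plane and the target's vergence distance. A minimal sketch (the 1.3 m focal plane is an assumed, illustrative value):

    def vac_diopters(focal_plane_m, target_m):
        # Accommodation is locked to the focal plane while vergence follows
        # the target; the conflict is their difference in diopters (1/m).
        return abs(1.0 / focal_plane_m - 1.0 / target_m)

    print(vac_diopters(1.3, 0.45))  # peripersonal target: ~1.45 D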

AR Interfaces for Disocclusion--A Comparative Study

Conference

Shuqi Liao, Yuqi Zhou, Voicu Popescu

An important application of augmented reality (AR) is the design of interfaces that reveal parts of the real world to which the user does not have line of sight. The design space for such interfaces is vast, with many options for integrating the visualization of the occluded parts of the scene into the user's main view. This paper compares four AR interfaces for disocclusion: X-ray, Cutaway, Picture-in-picture, and Multiperspective. The interfaces are compared in a within-subjects study (N = 33) over four tasks: counting dynamic spheres, pointing to the direction of an occluded person, finding the closest object to a given object, and finding pairs of matching numbers.

Warpy: Contextual and Multi-view Indirect 3D Curve Sketching in Augmented Reality

Conference

Rawan Alghofaili, Cuong Nguyen, Vojtěch Krs, Nathan Carr, Radomir Mech, Lap-Fai Yu

We propose Warpy, an environment-aware 3D curve drawing tool for mobile AR. Our system enables users to draw freeform curves from a distance in AR by combining 2D-to-3D sketch inference with geometric proxies. Geometric proxies can be obtained via 3D scanning or chosen from a list of pre-defined primitives. Warpy also provides a multi-view mode that lets users sketch a curve from multiple viewpoints, which is useful when the target curve cannot fit within the camera's field of view. We conducted two user studies and found that Warpy is a viable tool for helping users create complex and large curves in AR.
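
The core 2D-to-3D lifting step can be illustrated with the simplest proxy, a plane: each stroke sample's camera ray is intersected with the proxy to obtain a 3D curve point. This is our own simplified sketch, not Warpy's actual inference pipeline:

    import numpy as np

    def lift_stroke_to_plane(ray_origins, ray_dirs, plane_point, plane_normal):
        # ray_origins, ray_dirs: (N, 3) camera rays, one per 2D stroke sample.
        # Assumes no ray is parallel to the proxy plane.
        n = plane_normal / np.linalg.norm(plane_normal)
        t = ((plane_point - ray_origins) @ n) / (ray_dirs @ n)
        return ray_origins + ray_dirs * t[:, None]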

How Do I Get There? Overcoming Reachability Limitations of Constrained Industrial Environments in Augmented Reality Applications

Conference

Daniel Bambusek, Zdenek Materna, Michal Kapinus, Vitezslav Beran, Pavel Smrž

The paper presents an approach for handheld AR in constrained industrial environments, where it might be hard or even impossible to reach certain poses within a workspace in order to see or interact with digital content in applications such as visual robot programming, robotic program visualization, or workspace annotation. To overcome this limitation, we propose temporarily switching to a non-immersive VR view that allows the user to see the virtual counterpart of the workspace from any angle and distance, with the viewpoint controlled through a unique combination of on-screen controls complemented by the physical motion of the handheld device.

Session: Gaze

Tuesday, March 28, 2023, 14:00, Shanghai UTC+8, Room B

Session Chair: Kaan Akşit

Exploring 3D Interaction with Gaze Guidance in Augmented Reality

Conference

Yiwei Bao, Jiaxi Wang, Zhimin Wang, Feng Lu

To explore the potential of hand-eye coordination techniques in AR, we investigate whether gaze can help with object selection and translation under 3D occlusions. To this end, we develop new methods with gaze guidance for 3D interaction in AR. The user study we conducted shows that our methods not only improve the effectiveness of occluded object selection but also significantly alleviate arm fatigue in the depth translation task. To reduce the burden of gaze calibration, we also propose an implicit online calibration method, which achieves better accuracy than a standard 9-point calibration without interfering with users.

A Large-Scale Study of Proxemics and Gaze in Groups

Conference

Mark Roman Miller, Cyan DeVeaux, Eugy Han, Nilam Ram, Jeremy N. Bailenson

Scholars who study nonverbal behavior have focused an incredible amount of work on proxemics and mutual gaze. To date, however, such studies in VR have largely reported the behavior of small numbers of participants over short periods of time. In this experimental field study, we analyze the proxemics and gaze of 232 participants across two experimental studies, each of whom contributed up to about 240 minutes of tracking data during eight weekly 30-minute social virtual reality sessions. Participants' nonverbal behaviors changed in conjunction with manipulations of the environment and over time, and showed a range of individual and pair differences.
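
Measures like these are typically derived from head position and orientation streams. A sketch of two standard quantities, interpersonal distance and mutual gaze (the 30-degree threshold is our assumption, not the study's parameter):

    import numpy as np

    def proxemics_and_mutual_gaze(p1, f1, p2, f2, thresh_deg=30.0):
        # p1, p2: head positions (3,); f1, f2: head forward vectors (3,).
        def angle_to(p_from, fwd, p_to):
            v = p_to - p_from
            c = np.dot(fwd, v) / (np.linalg.norm(fwd) * np.linalg.norm(v))
            return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
        distance = np.linalg.norm(p2 - p1)
        mutual = (angle_to(p1, f1, p2) < thresh_deg and
                  angle_to(p2, f2, p1) < thresh_deg)
        return distance, mutual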

Exploring Enhancements towards Gaze Oriented Parallel Views in Immersive Tasks

Conference

Theophilus Teo, Kuniharu Sakurada, Maki Sugimoto

We explore enhancements to singular or asynchronous tasks by utilizing parallel views. We developed three prototypes, comprising fixed, symmetric, and gaze-oriented parallel views. We conducted a user study comparing each prototype against traditional VR in three tasks: object search and interaction in (1) a simple environment and (2) a complex environment, and (3) an object distance estimation task. We found that parallel views improved multi-embodiment, while each technique helped different tasks. Traditional VR provided a clean interface, thus improving spatial presence, mental effort, and user performance. However, participants' feedback highlighted the usefulness and lower physical effort of parallel views for solving complicated tasks.

MoPeDT: A Modular Head-Mounted Display Toolkit to Conduct Peripheral Vision Research

Conference

Matthias Albrecht, Lorenz Assländer, Harald Reiterer, Stephan Streuber

We introduce MoPeDT: Modular Peripheral Display Toolkit, a freely available, flexible, reconfigurable, and extendable headset to conduct peripheral vision research. MoPeDT can be built with a 3D printer and off-the-shelf components. It features multiple spatially configurable near-eye display modules and full 3D tracking inside and outside the lab. With our system, researchers and designers may easily develop and prototype novel peripheral vision interaction and visualization techniques. We demonstrate the versatility of our headset with several possible applications for spatial awareness, balance, interaction, feedback, and notifications.

Leveling the Playing Field: A Comparative Reevaluation of Unmodified Eye-Tracking as an Input and Interaction Modality for VR

Journal

Ajoy Savio Fernandes, T. Scott Murdison, Michael J Proulx

Here we establish a much-needed baseline for evaluating eye-tracking interactions in AR/VR targeting and selection tasks, including both traditional standards and those more aligned with AR/VR interactions today. In a targeting and button-press selection task, we compared completely unadjusted, cursor-less eye tracking to controller and head tracking, both of which had cursors. Unmodified eye tracking, without any form of cursor or feedback, outperformed head tracking and performed comparably to the controller in throughput and in subjective ratings. Eye tracking, with even minor sensible interaction design modifications, has tremendous potential for reshaping interactions in next-generation AR/VR head-mounted displays.
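
Throughput here refers to the Fitts'-law effective measure; for reference, a minimal sketch of the standard ISO 9241-9 style computation (variable names are ours):

    import numpy as np

    def throughput_bps(distance_m, movement_times_s, endpoint_offsets_m):
        # Effective width from the endpoint scatter along the task axis,
        # effective index of difficulty, then throughput in bits/second.
        w_e = 4.133 * np.std(endpoint_offsets_m, ddof=1)
        id_e = np.log2(distance_m / w_e + 1.0)
        return id_e / np.mean(movement_times_s)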

Session: Interaction 2

Tuesday, March 28, 2023, 14:00, Shanghai UTC+8, Room C

Session Chair: Akihiro Matsuura

Examining the Fine Motor Control Ability of Linear Hand Movement in Virtual Reality

Conference

Xin Yi, Xueyang Wang, Jiaqi Li, Hewu Li

We conducted three user studies to progressively examine users' fine motor control in 3D linear hand movement tasks. In Study 1, we examined participants' behavioural patterns when drawing straight lines using both the controller and the bare hand. Results showed that the exhibited stroke length tended to be longer than perceived. In Study 2, we further tested the effect of different visual references and found that providing only a virtual table yielded higher input precision and user preference. In Study 3, we repeated Study 2 in real dragging and scaling tasks and verified the generalizability of the above findings.

Exploring the Effects of Augmented Reality Notification Type and Placement in AR HMD while Walking

Conference

Hyunjin Lee, Woontack Woo

Augmented reality helps users easily take in information while walking by providing virtual information in front of their eyes. However, it remains unclear how to present AR notifications considering the expected user reaction to interruption. Therefore, we investigated appropriate placement methods, dividing notifications into high and low types. We found that a display-fixed coordinate system yielded faster responses for high notification types, whereas a body-fixed coordinate system resulted in faster walking speed for low ones. Furthermore, the high types had better notification performance at the bottom position, whereas the low types showed enhanced walking performance at the right.

Evaluating Augmented Reality Landmark Cues and Frame of Reference Displays with Virtual Reality

Journal

Yu Zhao, Jeanine Stefanucci, Sarah Creem-Regehr, Bobby Bodenheimer

Head-mounted augmented reality (AR) displays preview future navigation systems for walking travel across various application domains, such as search and rescue or commuting, but designing them remains an open problem. Here, we investigate two design choices such AR navigation systems can make: (1) whether to use AR cues to indicate landmarks, and (2) how to convey navigation instructions, and we examine their effects on spatial knowledge acquisition. We found that a world-fixed frame of reference resulted in better spatial learning when no landmarks were cued; adding AR landmark cues marginally improved spatial learning in the screen-fixed condition. Our findings have implications for the design of future cognition-driven navigation systems.

A Lack of Restraint: Comparing Virtual Reality Interaction Techniques for Constrained Transport Seating

Journal

Graham Wilson, Mark McGill, Daniel Medeiros, Stephen Anthony Brewster

Standalone Virtual Reality (VR) headsets can now be used in cars, trains, and planes. However, the spaces around transport seating are constrained by other seats, walls, and passengers, leaving users with little space in which to interact safely and acceptably. They therefore cannot use most commercial VR applications, which are designed for unobstructed 1-2 m² home environments. In this paper, we conducted a gamified user study to test whether three at-a-distance interaction techniques from the literature could be adapted to support common VR movement inputs identified from commercial games, and so equalise the interaction capabilities of at-home and constrained users.

Give Me a Hand: Improving the Effectiveness of Near-field Augmented Reality Interactions By Avatarizing Users' End Effectors

Journal

Roshan Venkatakrishnan, Rohith Venkatakrishnan, Balagopal Raveendranath, Christopher Pagano, Andrew Robb, Wen-Chieh Lin, Sabarish V. Babu

We investigated whether avatarizing users' end-effectors (hands) improved their interaction performance on a near-field, obstacle avoidance, object retrieval task. We employed a 3 (augmented hand representation) × 2 (density of obstacles) × 2 (size of obstacles) × 2 (virtual light intensity) multi-factorial design, manipulating the presence/absence and anthropomorphic fidelity of augmented self-avatars, across three experimental conditions: (1) No-Augmented Avatar; (2) Iconic-Augmented Avatar; (3) Realistic Augmented Avatar. Our findings seem to indicate that interaction performance may improve when users are provided with a visual representation of the AR system's interacting layer in the form of an augmented self-avatar.

Session: Accessibility and Applications

Tuesday, March 28, 2023, 15:15, Shanghai UTC+8, Room A

Session Chair: Debbie Ding

Using Smartphones as Assistive Displays to AR HMDs to Enhance the AR Reading Experience

Conference

Sunyoung Bang, Woontack Woo

The reading experience on current augmented reality (AR) head-mounted displays is often impeded by the devices' low perceived resolution, translucency, and small field of view. To address this issue, we explore the use of smartphones as assistive displays to AR HMDs. To validate the feasibility of our approach, we conducted a user study comparing a smartphone-assisted hybrid interface against an HMD-only interface for two different text lengths. The results demonstrate that the hybrid interface yields a lower task load regardless of text length, although it does not improve task performance. Furthermore, the hybrid interface provides a better experience in terms of user comfort, visual fatigue, and perceived readability.

Evoking empathy with visually impaired people through an augmented reality embodiment experience

Conference

Renan Guarese, Emma Pretty, Haytham Fayek, Fabio Zambetta, Ron van Schyndel

To promote empathy with people who have disabilities, we propose a multi-sensory interactive experience that allows sighted users to embody having a visual impairment whilst using assistive technologies. The experiment involves blindfolded sighted participants interacting with sonification methods to locate targets in a real kitchen. We enquired with the blind community about the perceived benefits of increasing such empathy. We gathered sighted people's empathy with the BVI community, as self-reported by sighted participants and as perceived by blind participants. We re-tested sighted people's empathy after the experiment and found that their empathetic responses significantly increased.

Optimizing Product Placement for Virtual Stores

Conference

Wei Liang, Luhui Wang, Xinzhe Yu, Changyang Li, Rawan Alghofaili, Yining Lang, Lap-Fai Yu

The recent popularity of consumer-grade virtual reality devices has enabled users to experience immersive shopping in virtual environments. As in a real-world store, the placement of products in a virtual store should appeal to shoppers, yet creating such placements manually is time-consuming, tedious, and non-trivial. This work therefore introduces a novel approach for automatically optimizing product placement in virtual stores. Our approach considers product exposure and spatial constraints, applying an optimizer to search for optimal product placement solutions. We conducted qualitative scene rationality and quantitative product exposure experiments with users to validate our approach.
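
Abstractly, the task is a search over assignments of products to shelf slots under an exposure objective. The sketch below is a generic simulated-annealing stand-in (our own; the paper's actual optimizer, objective, and constraints may differ), assuming one product per slot:

    import math, random

    def optimize_placement(n_slots, exposure, iters=5000, t0=1.0):
        # exposure(product_idx, slot_idx) -> float; searches a one-to-one
        # product-to-slot assignment by annealed pairwise swaps.
        assign = list(range(n_slots))
        random.shuffle(assign)
        score = lambda a: sum(exposure(p, s) for p, s in enumerate(a))
        cur = score(assign)
        best, best_score = assign[:], cur
        for k in range(iters):
            i, j = random.sample(range(n_slots), 2)
            assign[i], assign[j] = assign[j], assign[i]
            new = score(assign)
            temp = max(t0 * (1.0 - k / iters), 1e-6)
            if new >= cur or random.random() < math.exp((new - cur) / temp):
                cur = new
                if new > best_score:
                    best, best_score = assign[:], new
            else:
                assign[i], assign[j] = assign[j], assign[i]  # revert the swap
        return best, best_score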

A survey on remote assistance and training in Mixed Reality environments

Journal

Catarina Gonçalves Fidalgo, Yukang Yan, Hyunsung Cho, Mauricio Sousa, David Lindlbauer, Joaquim Jorge

The recent pandemic, war, and oil crises have caused many to reconsider their need to travel for education, training, and meetings. Mixed Reality (MR) offers opportunities to improve remote assistance and training, as it opens the way to increased spatial clarity and a larger interaction space. We contribute a survey of remote assistance and training in MR environments through a systematic literature review, providing a deeper understanding of current approaches, benefits, and challenges. We analyze 62 articles and contextualize our findings in a taxonomy, identifying the main gaps and opportunities in this research area.

Evaluating the Effects of Virtual Reality Environment Learning on Subsequent Robot Teleoperation in an Unfamiliar Building

Journal

Karl Eisenträger, Judith Haubner, Jennifer Brade, Wolfgang Einhäuser, Alexandra Bendixen, Sven Winkler, Philipp Klimant, Georg Jahn

We compared three methods of preparing for tasks performed by teleoperating a robot in a building. One group studied a floorplan, a second explored a VR reconstruction of the building from a normal-sized avatar's perspective, and a third explored the VR reconstruction from a giant-sized avatar's perspective. The giant VR and floorplan conditions took less learning time than normal VR. Both VR methods significantly outperformed the floorplan in an orientation task. Navigation was quicker in the giant perspective than in the normal perspective and the floorplan. We conclude that both normal and giant perspectives in VR are viable for preparing teleoperation in unfamiliar environments.

Session: Displays

Tuesday, March 28, 2023, 15:15, Shanghai UTC+8, Room B

Session Chair: Frank Guan

HoloBeam: Paper-Thin Near-Eye Displays

Conference

Kaan Akşit, Yuta Itoh

Our work, HoloBeam, marks a new milestone for near-eye displays. In our design, a custom holographic projector populates a micro-volume located at some distance (1-2 meters) with multiple planes of images. Users view magnified copies of these images from this small volume with the help of an eyepiece that is either a Holographic Optical Element (HOE) or a set of lenses. Our HoloBeam prototypes demonstrate the thinnest AR glasses to date, with submillimeter thickness (e.g., 120 µm). In addition, the prototypes achieve near-retinal resolution (24 cycles per degree) with a 70-degree-wide field of view.

Extended Depth-of-Field Projector using Learned Diffractive Optics

Conference

Yuqi Li, Qiang Fu, Wolfgang Heidrich

We jointly design a DOE for light phase modulation and a convolutional neural network for projector compensation. The designed extended depth of field (EDOF) computational projector can achieve high light throughput and real-time performance. We demonstrate that this learned optics compares favorably to baselines with conventional projectors, and the learned compensation network outperforms previous state-of-the-art compensation methods in terms of both computational efficiency and compensation quality. We implement a laboratory prototype of the computational EDOF projector equipped with the learned DOE, and evaluate it with real display experiments on depth-varying and tilted projection surfaces.

Proposal for an aerial display using dynamic projection mapping on a distant flying screen

Conference

Masatoshi Iuchi, Yuito Hirohashi, Hiromasa Oku

In this study, we propose a method for an aerial display. The method uses a high-speed gaze control system and a laser display to perform projection mapping on a distant screen suspended from a flying drone. A prototype system was developed, and performance evaluation and application experiments were conducted. In the performance evaluation, it was confirmed that the system was capable of controlling the gaze over a wide range and of performing dynamic projection mapping on a distant object 200 m away. In the application experiments, dynamic projection mapping was successfully performed on a screen attached to a drone in flight at a distance of approximately 36 m, demonstrating the effectiveness of the proposed method.

CompenHR: Efficient Full Compensation for High-resolution Projector

Conference

Yuxi Wang, Haibin Ling, Bingyao Huang

Full compensation aiming to find a projector input image for canceling the geometric and photometric distortions is a practical task of projector-camera systems. To address the issue that learning-based compensation methods for high-resolution setups are impractical due to the long training time and high memory cost, this paper proposes a practical full compensation solution. We design an attention-based grid refinement network to improve geometric correction quality and integrate a novel sampling scheme into an end-to-end compensation network to alleviate computation. Furthermore, we construct a benchmark dataset for high-resolution projector full compensation. The experiments demonstrate clear advantages in both efficiency and quality.

A compact photochromic occlusion-capable see-through display with holographic lenses

Conference

Chun Wei Ooi, Yuichi Hiroi, Yuta Itoh

This paper presents a compact photochromic occlusion-capable optical see-through (OST) display design using multilayer, wavelength-dependent holographic optical lenses (HOLs). Our approach employs a single digital micromirror device (DMD) to form both the occlusion mask with UV light and a virtual image with visible light in a time-multiplexed manner. We demonstrate our proof-of-concept system in a bench-top setup and assess the appearance and contrast of the displayed image. We also suggest potential improvements to the current prototype to encourage the community to explore this occlusion approach.

Session: Medical

Tuesday, March 28, 2023, 16:30, Shanghai UTC+8, Room B

Session Chair: Bruce Gu

Remapping Control in VR for Patients with AMD

Conference

Michael Nitsche, Blaire Bosley, Susan A. Primo, Jisu Park, Daniel Carr

Age-related Macular Degeneration (AMD) is the leading cause of vision loss among persons over 50. We present a two-part interface consisting of a VR-based visualization for AMD patients and an interconnected doctor interface to optimize this VR view. It focuses on remapping imagery to provide customized image optimizations. The system allows doctors to generate a tailored, patient-specific VR visualization. We pilot tested the doctor interface (n=10) with eye care professionals. The results indicate the potential of VR-based eye care for doctors to help visually-impaired patients, but also show a necessary training phase to establish new technologies in vision rehabilitation.

Design and Development of a Mixed Reality Acupuncture Training System

Conference

Qilei Sun, Jiayou Huang, Haodong Zhang, Paul Craig, Lingyun Yu, Eng Gee Lim

This paper examines the use of mixed reality to enhance Chinese acupuncture training through a virtual acupuncture simulator. The simulator, developed for this study, allows practitioners to practice needling on virtual acupuncture content with their bare hands. The system provides a safe and natural environment for developing acupuncture skills, muscle memory, and memory of acupuncture points. The study also presents the results of a comparative user evaluation assessing the system's viability as a training tool, revealing improved spatial understanding, learning, and dexterity in acupuncture practice. The results demonstrate the potential of mixed reality for improving therapeutic medicine.

Evaluation of AR visualization approaches for catheter insertion into the ventricle cavity

Journal

Mohamed Benmahdjoub, Abdullah Thabit, Marie-Lise C. van Veelen, Wiro J. Niessen, Eppo B. Wolvius, Theo van Walsum

The way virtual data is presented in AR-guided surgical navigation plays an important role in correct spatial perception of the virtual overlay. This study compares various visualization modalities for catheter insertion in external ventricular drain and ventricular shunt procedures. We investigate (1) 2D approaches (smartphone, 2D window) and (2) 3D approaches (a fully aligned patient model, and a model adjacent to the patient and rotationally aligned) using an optical see-through display. 32 participants performed 20 AR-guided insertions per approach. The results show more accurate insertions using the 3D approaches, which were also preferred over the 2D approaches.

A Video-Based Augmented Reality System for Human-in-the-Loop Muscle Strength Assessment of Juvenile Dermatomyositis

Journal

Kanglei Zhou, Ruizhi Cai, Yue Ma, Qingqing Tan, Xinning Wang, Jianguo Li, Hubert P. H. Shum, Frederick W. B. Li, Song Jin, Xiaohui Liang

Juvenile dermatomyositis (JDM) is characterized by skin rashes and muscle weakness. The Childhood Myositis Assessment Scale (CMAS) is commonly used to measure the degree of muscle involvement for diagnosis. However, human raters are subject to personal bias, and automatic action quality assessment (AQA) algorithms cannot guarantee 100% accuracy. Therefore, we propose a video-based augmented reality system for human-in-the-loop muscle strength assessment of children with JDM. Our core insight is to visualize the AQA results as a virtual character facilitated by a 3D animation dataset, so that users can compare the real-world patient and the virtual character to verify the AQA results. The experimental results verify the effectiveness of our system.

CardioGenesis4D: Interactive Morphological Transitions of Embryonic Heart Development in a Virtual Learning Environment

Journal

Danny Schott, Matthias Kunz, Tom Wunderling, Florian Heinrich, Rüdiger Braun-Dullaeus, Christian Hansen

In the embryonic human heart, complex dynamic shape changes take place in a short period, making this development difficult to visualize. We present an immersive learning environment that enables the understanding of these morphological transitions through hand interactions. In a user study, we examined usability, perceived task load, and sense of presence; we also assessed knowledge gain and obtained feedback from domain experts. Students and professionals rated the application as usable, and our results show that interactive learning content should consider features for different learning styles. Our work previews how VR can be integrated into a cardiac embryology education curriculum.

Session: Haptics

Tuesday, March 28, 2023, 16:30, Shanghai UTC+8, Room C

Session Chair: Yaoping Hu

Providing 3D Guidance and Improving the Music-Listening Experience in Virtual Reality Shooting Games Using Musical Vibrotactile Feedback

Conference

Yusuke Yamazaki, Shoichi Hasegawa

We aim to improve the experience of virtual reality (VR) shooting games by employing a 3D haptic guidance method using necklace-type and belt-type haptic devices. These devices modulate vibrations, generated from and synchronized with musical signals, according to the azimuth and height of a target in 3D space. We evaluated the method's potential with a task in which participants were asked to shoot a randomly spawned target moving in 3D VR space. The results suggest the proposed method can guide players toward the target's location using only tactile stimuli. Modulated musical vibrations also enhanced the music-listening experience.
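
A simplified sketch of the modulation idea (ours, not the authors' signal chain; the cosine gain law is an assumption) scales a music-synchronized vibration envelope per actuator so that intensity peaks on the actuator nearest the target's direction:

    import numpy as np

    def spatialize_vibration(envelope, target_azimuth_rad, actuator_azimuths_rad):
        # envelope: (T,) music-synchronized amplitude; returns (n_actuators, T)
        # per-actuator drive signals for a necklace- or belt-type array.
        offsets = np.asarray(actuator_azimuths_rad) - target_azimuth_rad
        gains = np.clip(np.cos(offsets), 0.0, 1.0)
        return np.outer(gains, envelope)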

Investigating Noticeable Hand Redirection in Virtual Reality using Physiological and Interaction Data

Conference

Martin Feick, Kora Persephone Regitz, Anthony Tang, Tobias Jungbluth, Maurice Rekrut, Antonio Krüger

Hand redirection is effective so long as the introduced offsets are not noticeably disruptive to users. We investigate the use of physiological and interaction data to detect movement discrepancies between a user's real and virtual hand. We ran a study with 22 participants, collecting EEG, ECG, EDA, RSP, and interaction data. Our results suggest that EEG and interaction data can be used to detect visuo-motor discrepancies, whereas ECG and RSP seem to suffer from inconsistencies. Our findings show that participants quickly adapt to large discrepancies, and suggest that there is no absolute threshold for possible non-detectable discrepancies.

A Haptic Stimulation-Based Training Method to Improve the Quality of Motor Imagery EEG Signal in VR

Conference

Shiwei Cheng, Jieming Tian

With the emergence of brain-computer interface (BCI) technology and virtual reality (VR), improving the quality of motor imagery (MI) electroencephalogram (EEG) signals has become a key issue for MI BCI applications in VR. In this paper, we propose enhancing MI EEG signal quality through haptic stimulation training. We designed first-person and third-person perspective scenes in VR, and the experimental results showed that participants' left- and right-hand MI EEG quality improved compared with that before training. We also implemented a VR-BCI application system in which participants' average classification accuracy increased after training.

RemoteTouch: Enhancing Immersive 3D Video Communication with Hand Touch

Conference

Yizhong Zhang, Zhiqi Li, Sicheng Xu, Chong Li, Jiaolong Yang, Xin Tong, Baining Guo

We present a method to enhance the immersive experience of 3D video communication by adding hand-touch capability. Participants can reach their hands out to the screen and clap hands as if they were separated only by a pane of glass. The key challenge is that the hand is invisible to the cameras when close to the screen. We present a dual representation of the user's hand and a distance-based fusion method for realistic hand rendering, so that the hand remains visible to the remote user throughout the touching process. Our experiments demonstrate that our method provides a consistent hand-contact experience between remote users and improves the immersive experience of 3D video communication.
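
The distance-based fusion can be pictured as a blend weight driven by the hand's distance to the screen; a simplified sketch with assumed near/far bounds (the paper's actual representations and fusion rule are richer):

    def camera_mesh_weight(hand_to_screen_m, near=0.05, far=0.30):
        # 1.0: render the camera-tracked hand; 0.0: render the alternate
        # near-screen hand representation; linear crossfade in between.
        t = (hand_to_screen_m - near) / (far - near)
        return min(max(t, 0.0), 1.0)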

CoboDeck: A Large-Scale Haptic VR System Using a Collaborative Mobile Robot

Conference

Soroosh Mortezapoor, Khrystyna Vasylevska, Emanuel Vonach, Hannes Kaufmann

We present CoboDeck - our proof-of-concept immersive virtual reality haptic system with free walking support. It provides prop-based encounter-type haptic feedback with a mobile robotic platform. Intended for use as a design tool for architects, it enables the user to directly and intuitively interact with virtual objects like walls, doors, or furniture. A collaborative robotic arm mounted on an omnidirectional mobile platform can present a physical prop that matches the position and orientation of a virtual counterpart anywhere in large virtual and real environments. Thus, the user can naturally interact with a virtual object with bare hands or any body part and simultaneously encounter it in real physical space.

Session: SocialEmotional

Wednesday, March 29, 2023, 8:30, Shanghai UTC+8, Room A

Session Chair: Zeyu Wang

Virtual reality in supporting charitable giving: The role of vicarious experience, existential guilt, and need for stimulation

Conference

Ou Li, Han Qiu

Although a growing number of charities have used virtual reality (VR) for fundraising activities, there is relatively little academic research in this area. The purpose of this study is to investigate the underlying mechanism of VR in supporting charitable giving. We found that VR charitable appeals increase actual money donations when compared to the traditional two-dimensional (2D) format and that this effect is achieved through a serial mediating effect of vicarious experience and existential guilt. Findings also identify the need for stimulation as a boundary condition, indicating that those with a higher (vs. lower) need for stimulation were more (vs. less) affected by the mediating mechanism of VR charitable appeals on donations.

Measuring Interpersonal Trust towards Virtual Humans with a Virtual Maze Paradigm

Journal

Jinghuai Lin, Johrine Cronjé, Ivo Käthner, Paul Pauli, Marc Erich Latoschik

This work proposes a validated behavioural tool to measure interpersonal trust towards a specific virtual social interaction partner in social VR. The task of the users (the trustors) is to navigate through a maze in virtual reality, where they can interact with a virtual human (the trustee). In the validation study, participants interacting with a trustworthy avatar asked for advice more often than those interacting with an untrustworthy avatar, indicating that the paradigm is sensitive to the differences in interpersonal trust and can be used to measure trust towards virtual humans.

The Dating Metaverse: Why We Need to Design Consent in Social VR

Journal

Douglas Zytko, Jonathan Chan

We present a participatory design study about how consent to interpersonal behavior can be designed in social VR to prevent harm. VR dating applications are used as the context of study. Through design workshops with potential VR daters (n=18) we elucidate nonconsensual experiences that should be prevented and designs for exchanging consent in VR. We position consent as a valuable lens for designing preventative solutions to harm in social VR by reframing harm as unwanted experiences that happen because of the absence of mechanics to support users in giving and denying agreement to a virtual experience before it occurs.

Role-Exchange Playing: An Exploration of Role-Playing Effects for Anti-Bullying in Immersive Virtual Environments

Invited Journal

Xiang Gu, Sheng Li, Kangrui Yi, Xiaojuan Yang, Huiling Liu, Guoping Wang

Role-playing is widely used in many areas, such as psychotherapy and behavior change. However, few studies have explored the possible effects of playing multiple roles in a single role-playing process. We propose a new role-playing paradigm, called role-exchange playing, in which a user plays two opposite roles successively in the same simulated event for better cognitive enhancement. We designed an experiment with this novel role-exchange playing strategy in immersive virtual environments, choosing school bullying as the scenario. A total of 234 middle and high school students were enrolled in the mixed-design experiment. From the user study, we found that through role-exchange, students developed more morally correct opinions about bullying, as well as increased empathy and willingness to engage in supportive behavior. They also showed increased commitment to stop bullying others. Our role-exchange paradigm can achieve a better effect than traditional role-playing methods in situations where participants have no prior experience of the roles they play. Therefore, using role-exchange playing in immersive virtual environments to educate minors can help prevent them from bullying others in the real world. Our study indicates a positive significance for the moral education of teenagers. Role-exchange playing may also have the potential to be extended to applications such as counseling, therapy, and crime prevention.

Session: Perception 1

Wednesday, March 29, 2023, 8:30, Shanghai UTC+8, Room B

Session Chair: Yong Hu

Empirically Evaluating the Effects of Eye Height and Self-Avatars on Dynamic Passability Affordances in Virtual Reality

Conference

Ayush Bhargava, Roshan Venkatakrishnan, Rohith Venkatakrishnan, Hannah Solini, Kathryn Lucaites, Andrew Robb, Christopher Pagano, Sabarish V. Babu

Self-avatars have been shown to affect the perception of oneself and of environmental spatial properties. However, most virtual experiences use a generic self-avatar that does not fit the proportions of the user's body. This can negatively affect affordance judgments that are relative to the user's size, such as reachability and maneuverability, which is especially relevant when a task requires the user to maneuver around moving objects, as in games. It is therefore necessary to understand how differently sized self-avatars affect affordance judgments in dynamic virtual environments. As such, we investigated how a shorter, taller, or matched self-avatar affects judgments when passing through dynamic gaps.

Manipulation of Motion Parallax Gain Distorts Perceived Distance and Object Depth in Virtual Reality

Conference

Xue Teng, Robert Allison, Laurie M Wilcox

Virtual reality (VR) is distinguished by the rich, multimodal, immersive sensory information and affordances it provides to the user. However, when moving about an immersive virtual world, the visual display often conflicts with other sensory cues due to design choices, the nature of the simulation, or system limitations (for example, impoverished vestibular motion cues during acceleration in racing games). Given that conflicts between sensory cues have been associated with disorientation or discomfort, and could theoretically distort spatial perception, it is important that we understand how and when they manifest in the user experience.

How Virtual Hand Representations Affect the Perceptions of Dynamic Affordances in Virtual Reality

Journal

Roshan Venkatakrishnan, Rohith Venkatakrishnan, Balagopal Raveendranath, Christopher Pagano, Andrew Robb, Wen-Chieh Lin, Sabarish V. Babu

We investigated how different virtual hand representations affect users' perceptions of dynamic affordances using a collision-avoidance object retrieval task. We employed a 3 (virtual end-effector representation) × 13 (frequency of moving doors) × 2 (target object size) multi-factorial design, manipulating the input modality and its concomitant virtual end-effector representation across three experimental conditions: (1) Controller; (2) Controller-hand; (3) Glove. We find that representing the end-effector as hands tends to increase embodiment but can also come at the cost of performance, or an increased workload due to a discordant mapping between the virtual representation and the input modality used.

Inward VR: Toward a Qualitative Method for Investigating Interoceptive Awareness in VR

Journal

Alexander Haley, Don Thorpe, Alex Pelletier, Svetlana Yarosh, Daniel F. Keefe

VR can produce powerful illusions of being in another place or inhabiting another body, and theories of presence and embodiment provide valuable guidance to VR designers. However, VR can also be used to develop an awareness of one's own body (i.e., interoceptive awareness); here, design guidelines and evaluative techniques are less clear. To address this, we present a qualitative methodology, including a reusable codebook, for adapting the Multidimensional Assessment of Interoceptive Awareness conceptual framework to explore interoceptive awareness in VR. We report results from an exploratory study (n=21) applying this method to understand the interoceptive awareness experiences of VR users.

Analysis of the Saliency of Color-Based Dichoptic Cues in Optical See-Through Augmented Reality

Invited Journal

Austin Erickson, Gerd Bruder, Gregory F. Welch

In a future of pervasive augmented reality (AR), AR systems will need to be able to efficiently draw or guide the attention of the user to visual points of interest in their physical-virtual environment. Since AR imagery is overlaid on top of the user's view of their physical environment, these attention guidance techniques must not only compete with other virtual imagery, but also with distracting or attention-grabbing features in the user's physical environment. Because of the wide range of physical-virtual environments that pervasive AR users will find themselves in, it is difficult to design visual cues that “pop out” to the user without performing a visual analysis of the user's environment, and changing the appearance of the cue to stand out from its surroundings. In this paper, we present an initial investigation into the potential uses of dichoptic visual cues for optical see-through AR displays, specifically cues that involve having a difference in hue, saturation, or value between the user's eyes. These types of cues have been shown to be preattentively processed by the user when presented on other stereoscopic displays, and may also be an effective method of drawing user attention on optical see-through AR displays. We present two user studies: one that evaluates the saliency of dichoptic visual cues on optical see-through displays, and one that evaluates their subjective qualities. Our results suggest that hue-based dichoptic cues or “Forbidden Colors” may be particularly effective for these purposes, achieving significantly lower error rates in a pop out task compared to value-based and saturation-based cues.

Session: Multimodal and Haptics

Wednesday, March 29, 2023, 11:00, Shanghai UTC+8, Room A

Session Chair: Haonan Cheng

GroundFlow: Liquid-based Haptics for Simulating Fluid on the Ground in Virtual Reality

Journal

Ping-Hsuan Han, Tzu-Hua Wang, Chien-Hsing Chou

Most haptic devices simulate feedback in dry environments such as a living room, prairie, or city, whereas water-related environments such as rivers, beaches, and swimming pools remain much less explored. In this paper, we present GroundFlow, a liquid-based haptic floor system for simulating fluid on the ground in VR. We discuss design considerations and propose a system architecture and interaction design. We conduct two user studies to inform the design of a multiple-flow feedback mechanism, develop three applications to explore its potential uses, and discuss the limitations and challenges thereof to inform VR developers and haptic practitioners.

Wind comfort and emotion can change by the cross-modal presentation of audio-visual stimuli of indoor and outdoor environments

Conference

Kenichi Ito, Juro Hosoi, Yuki Ban, Takayuki Kikuchi, Kyosuke Nakagawa, Hanako Kitagawa, Chizuru Murakami, Yosuke Imai, Shinichi Warisawa

Previous research on wind stimuli for relaxation has overlooked the impact of multisensory stimuli on wind comfort. Our study aimed to investigate whether audio-visual stimuli in virtual environments can alter the effects of wind on comfort and emotions. We measured wind comfort and emotion when participants experienced outdoor and indoor virtual environments through virtual reality. Results indicate that the virtual environment of an outdoor meadow and natural wind sound significantly improved wind comfort, openness of wind, and emotional state. Simulated natural wind also reduced mental stress compared to a condition without wind, as shown by questionnaires and biometric data.

Eat, Smell, See: Investigating Cross-Modal Correspondence when Eating in VR

Journal

Florian Weidner, Jana Elena Maier, Wolfgang Broll

Integrating taste in AR/VR applications has various promising use cases - from social eating to the treatment of disorders. We present the results of a user study in which participants were confronted with congruent and incongruent visual and olfactory stimuli while eating a tasteless food product in VR. With our results, we highlight challenges that arise when trying to influence perception, show that vision does not always dominate or guide perception, and emphasize that tri-modal incongruency hampers perception enormously. We discuss our data within the context of multisensory integration, AR/VR human-food interaction, and basic human perception.

Modified Egocentric Viewpoint for Softer Seated Experience in Virtual Reality

Journal

Miki Matsumuro, Shohei Mori, Yuta Kataoka, Fumiaki Igarashi, Fumihisa Shibata, Asako Kimura

We aimed to change the perceived haptic features of a chair by shifting the position and angle of the users' viewpoints in virtual reality. To enhance the seat softness, we shifted the virtual viewpoint using an exponential formula soon after a user's bottom contacted the seat surface. We also manipulated the viewpoint to change the flexibility of the virtual backrest. Subjective evaluations confirmed that participants perceived the seat as softer and the backrest more flexible than the actual ones, though significant changes resulted in discomfort.
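
One way to read the exponential viewpoint shift (our sketch; the depth and time constant are assumed values) is an asymptotic sink of the camera after contact, mimicking the compression of a soft cushion:

    import math

    def seat_sink_m(t_since_contact_s, depth_m=0.03, tau_s=0.25):
        # Viewpoint sinks toward depth_m with time constant tau_s:
        # d(t) = depth * (1 - exp(-t / tau)).
        return depth_m * (1.0 - math.exp(-t_since_contact_s / tau_s))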

Upper Body Thermal Referral and Tactile Masking for Localized Feedback

Journal

Hyungki Son, Haokun Wang, Yatharth Singhal, Jin Ryong Kim

This paper investigates the effects of thermal referral and tactile masking illusions for achieving localized thermal feedback. The first experiment uses sixteen vibrotactile actuators with four thermal actuators to explore the thermal distribution on the user's back. The results confirm that localized thermal feedback can be achieved through cross-modal thermo-tactile interaction on the user's back. A second experiment validates our approach in VR. The results show that our thermal referral with tactile masking achieves faster response times and better location accuracy with fewer thermal actuators.

Session: Gestures and Interaction

Wednesday, March 29, 2023, 11:00, Shanghai UTC+8, Room B

Session Chair: Boyu Gao

Real-Time Recognition of In-Place Body Actions and Head Gestures using Only a Head-Mounted Display

Conference

Jingbo Zhao, Mingjun Shao, Yaojun Wang, Ruolin Xu

We present a unified two-stream 1-D convolutional neural network (CNN) for recognition of body actions when a user performs walking-in-place (WIP) and for recognition of head gestures when a user stands still wearing only an HMD. The present method does not require specialized hardware and/or additional tracking devices other than an HMD and can recognize a significantly larger number of body actions and head gestures than other existing methods. The utility of the method is demonstrated through a virtual locomotion task, which shows that the present body action interface is reliable in recognizing actions for virtual locomotion.
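
For a concrete picture of a unified two-stream 1-D CNN over HMD tracking data, here is a hedged PyTorch sketch (layer sizes, window length, and the position/orientation stream split are our assumptions, not the paper's architecture):

    import torch
    import torch.nn as nn

    class TwoStream1DCNN(nn.Module):
        def __init__(self, n_classes=12, channels_per_stream=3):
            super().__init__()
            def stream():
                return nn.Sequential(
                    nn.Conv1d(channels_per_stream, 32, kernel_size=7, padding=3),
                    nn.ReLU(), nn.MaxPool1d(2),
                    nn.Conv1d(32, 64, kernel_size=5, padding=2),
                    nn.ReLU(), nn.AdaptiveAvgPool1d(1))
            self.pos_stream = stream()   # HMD position (x, y, z) over time
            self.ori_stream = stream()   # HMD orientation (yaw, pitch, roll)
            self.classifier = nn.Linear(128, n_classes)

        def forward(self, pos, ori):     # both: (batch, 3, window)
            feats = torch.cat([self.pos_stream(pos).flatten(1),
                               self.ori_stream(ori).flatten(1)], dim=1)
            return self.classifier(feats)

    model = TwoStream1DCNN()
    logits = model(torch.randn(8, 3, 90), torch.randn(8, 3, 90))  # (8, 12)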

Skeleton-based Human Action Recognition via Large-kernel Attention Graph Convolutional Network

Journal

Yanan Liu, Hao Zhang, Yanqiu Li, Kangjian He, Dan Xu

Skeleton-based human action recognition has broad application prospects in the field of virtual reality. Notably, recent works learn spatio-temporal patterns via graph convolution operators. Still, stacked graph convolutions play a marginal role in modeling long-range dependencies. In this work, we introduce a skeleton large-kernel attention operator (SLKA), which enlarges the receptive field and improves channel adaptability without adding much computational burden. Further, we designed a novel recognition architecture called the spatiotemporal large-kernel attention graph convolution network (LKA-GCN). Ultimately, our LKA-GCN achieves state-of-the-art performance on three action datasets.

GestureSurface: VR sketching through assembling scaffold surface with non-dominant hand

Journal

Xinchi Xu, Yang Zhou, Bingchan Shao, Guihuan Feng, Chun Yu

3D sketching provides an immersive drawing experience for design. However, the lack of depth perception cues in VR makes it difficult to draw accurate strokes. To handle this, we introduce gesture-based scaffolding to guide strokes. We conducted a gesture-design study and propose GestureSurface, a bi-manual interface in which the non-dominant hand performs gestures to create scaffolding while the dominant hand draws with the controller. Since the dominant hand is occupied, reducing the idleness of the non-dominant hand through gestural input increases efficiency and fluency. We evaluated GestureSurface in a 20-person user study and found it had high efficiency and low fatigue.

Comparing Different Grasping Visualizations for Object Manipulation in VR using Controllers

Journal

Giorgos Ganias, Christos Lougiakis, Akrivi Katifori, Maria Roussou, Yannis Ioannidis, Ioannis Panagiotis

Visualizing grasping when users interact with virtual objects using handheld controllers in VR is underexplored. We present an experiment with 38 participants comparing three grasping visualizations: Auto-Pose, where the hand automatically adjusts to the object upon grasping; Simple-Pose, where the hand closes fully when selecting the object; and Disappearing-Hand, where the hand becomes invisible after selecting an object and turns visible again after positioning it on the target. Measuring the effects on user performance, embodiment, and preference showed that the perceived sense of embodiment is stronger with the Auto-Pose, which is also generally preferred by users.

Kine-Appendage: Enhancing Freehand VR Interaction Through Transformations of Virtual Appendages

Invited Journal

Yang Tian, Hualong Bai, Shengdong Zhao, Chi-Wing Fu, Chun Yu, Haozhao Qin, Qiong Wang, Pheng-Ann Heng

Kinesthetic feedback, the feeling of restriction or resistance when hands contact objects, is essential for natural freehand interaction in VR. However, inducing kinesthetic feedback with mechanical hardware can be cumbersome and hard to control in commodity VR systems. We propose the kine-appendage concept to compensate for the loss of kinesthetic feedback in virtual environments: a virtual appendage is added to the user's avatar hand; when the appendage contacts a virtual object, it exhibits transformations (rotation and deformation); when it disengages from the contact, it recovers its original appearance. A proof-of-concept kine-appendage technique, BrittleStylus, was designed to enhance isomorphic typing. Our empirical evaluations demonstrated that (i) BrittleStylus significantly reduced the uncorrected error rate of naive isomorphic typing from 6.53% to 1.92% without compromising typing speed; (ii) BrittleStylus induced a sense of kinesthetic feedback on par with that induced by pseudo-haptic (+ visual cue) methods; and (iii) participants preferred BrittleStylus over pseudo-haptic (+ visual cue) methods because of not only good performance but also fluent hand movements.

Session: Education and Medical

Wednesday, March 29, 2023, 11:00, Shanghai UTC+8, Room C

Session Chair: Ali Adjorlu

Investigating Spatial Representation of Learning Content in Virtual Reality Learning Environments

Conference

Manshul Belani, Harsh Vardhan Singh, Aman Parnami, Pushpendra Singh

We discuss the spatial representation of learning content in VR. The first study examines the effect of four placements of learning content in a VR laser-cutting lesson with 42 participants: world-anchored (a TV screen in the environment), user-anchored (a panel anchored to the user's controller or HMD), and object-anchored (a panel anchored to the object of interest). While knowledge gain, transfer, and cognitive load did not differ significantly, the object-anchored placement scored significantly better than the TV-screen and HMD conditions on three user experience scales. In the second study, 22 participants chose among these four placements in the VR environment, followed by semi-structured interviews to understand their preferences.

Virtual Optical Bench: Teaching Spherical Lens Layout in VR with Real-Time Ray Tracing

Conference

Martin Bellgardt, Sebastian Pape, David Gilbert, Marcel Prochnau, Georg König, Torsten Wolfgang Kuhlen

We present the virtual optical bench, an application that lets users explore spherical lens layouts in virtual reality (VR). We implemented a numerically accurate simulation of optical systems using Nvidia OptiX, as well as a prototypical VR application, which we then evaluated in an expert review with 6 optics experts. Based on their feedback, we re-implemented our VR application in Unreal Engine 4. The re-implementation has since been actively used for teaching optical layouts, where we performed a qualitative evaluation with 18 students. We show that our virtual optical bench achieves good usability and is perceived to enhance the understanding of course contents.
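
The application's simulation is full, numerically accurate ray tracing; for a flavor of the optics being taught, here is a much simpler paraxial sketch using ray-transfer (ABCD) matrices for thin spherical lenses (our own illustration, not the application's solver):

    import numpy as np

    def thin_lens(f_m):
        return np.array([[1.0, 0.0], [-1.0 / f_m, 1.0]])

    def free_space(d_m):
        return np.array([[1.0, d_m], [0.0, 1.0]])

    # A paraxial ray is (height y, angle theta); matrices compose right-to-
    # left in propagation order: 50 mm of air, an f = 50 mm lens, 100 mm of air.
    system = free_space(0.100) @ thin_lens(0.050) @ free_space(0.050)
    ray_in = np.array([0.010, 0.0])   # 10 mm off axis, parallel to the axis
    print(system @ ray_in)            # resulting (height, angle)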

How to maximise Spatial Presence: Design Guidelines for a Virtual Learning Environment for school use

Journal

Marc Bastian Rieger, Björn Risch

Research on learning with and in immersive virtual reality (VR) continues to grow, yielding more insights into how immersive learning works. A major hurdle that hinders the use of immersive digital media in schools is the lack of guidelines for designing VR learning environments for practical use. Using a design-based research approach, we explored guidelines for creating VR learning content for tenth-grade students at a German secondary school and recreated a real-world, out-of-school VR learning space that can be used for hands-on instruction. Over several microcycles, we investigated how to maximise the experience of spatial presence in a VR learning environment.

ImTooth: Neural Implicit Tooth for Dental Augmented Reality

Journal

Hai Li, Hongjia Zhai, Xingrui Yang, Zhirong Wu, Yihao Zheng, Haofan Wang, Jianchao Wu, Hujun Bao, Guofeng Zhang

We propose a simple and accurate neural-implicit model-driven dental AR system, named ImTooth, adapted for HoloLens 2. Based on the modeling capabilities and differentiable optimization properties of state-of-the-art neural implicit representations, our system fuses reconstruction and registration in a single network, greatly simplifying existing dental AR solutions and enabling reconstruction, registration, and interaction. Experiments show that our method can reconstruct high-precision models and accomplish accurate registration. It is also robust to weak, repeating, and inconsistent textures. We also show that our system can be easily integrated into dental diagnostic and therapeutic procedures, such as bracket placement guidance.

MD-Cave: An Immersive Visualization Workbench for Radiologists

Invited Journal

Shreeraj Jadhav, Arie E. Kaufman

The MD-Cave is an immersive analytics system that provides enhanced stereoscopic visualizations to support visual diagnoses performed by radiologists. The system harnesses contemporary paradigms in immersive visualization and 3D interaction, which are better suited for investigating 3D volumetric data. We retain practicality through efficient utilization of desk space and comfort for radiologists in terms of frequent long duration use. MD-Cave is general and incorporates: (1) high resolution stereoscopic visualizations through a surround triple-monitor setup, (2) 3D interactions through head and hand tracking, (3) and a general framework that supports 3D visualization of deep-seated anatomical structures without the need for explicit segmentation algorithms. Such a general framework expands the utility of our system to many diagnostic scenarios. We have developed MD-Cave through close collaboration and feedback from two expert radiologists who evaluated the utility of MD-Cave and the 3D interactions in the context of radiological examinations. We also provide evaluation of MD-Cave through case studies performed by an expert radiologist and concrete examples on multiple real-world diagnostic scenarios, such as pancreatic cancer, shoulder-CT, and COVID-19 Chest CT examination.

Session: Displays and Haptics

Wednesday, March 29, 2023, 14:00, Shanghai UTC+8, Room B

Session Chair: Dangxiao Wang

Realistic Defocus Blur for Multiplane Computer-Generated Holography

Conference

Koray Kavaklı, Yuta Itoh, Hakan Urey, Kaan Akşit

This paper introduces a new multiplane CGH computation method to reconstruct artifact-free, high-quality holograms with natural-looking defocus blur. We introduce a new targeting scheme and a new loss function: while the targeting scheme accounts for defocused parts of the scene at each depth plane, the loss function analyzes focused and defocused parts of the reconstructed images separately. Our method supports phase-only CGH calculations using various iterative and non-iterative CGH techniques. We achieve our best image quality using a modified gradient-descent-based optimization recipe with a constraint inspired by the double phase method. We validate our method experimentally using our proof-of-concept holographic display.
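
The separate treatment of focused and defocused regions can be illustrated with a short sketch. The following is a minimal, hypothetical loss in PyTorch, assuming per-plane target intensities and a binary focus mask; the function and tensor names are illustrative assumptions, not the authors' implementation.

```python
import torch

def multiplane_defocus_loss(recon, target, focus_mask, alpha=0.5):
    """Hypothetical loss in the spirit of the abstract (not the authors' code).

    recon, target: (D, H, W) reconstructed / target intensity stacks, one
    slice per depth plane. focus_mask: (D, H, W) binary mask marking pixels
    meant to be sharp at each plane; its complement marks defocused regions.
    """
    sq_err = (recon - target) ** 2
    defocus_mask = 1.0 - focus_mask
    # Analyze focused and defocused parts separately, then blend them.
    focused = (sq_err * focus_mask).sum() / focus_mask.sum().clamp(min=1.0)
    defocused = (sq_err * defocus_mask).sum() / defocus_mask.sum().clamp(min=1.0)
    return alpha * focused + (1.0 - alpha) * defocused
```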

Shadowless Projection Mapping using Retrotransmissive Optics

Journal

Kosuke Hiratani, Daisuke Iwai, Yuta Kageyama, Parinya Punpongsanon, Takefumi Hiraki, Kosuke Sato

This paper presents a shadowless projection mapping system for interactive applications in which the target surface is frequently occluded from the projector by the user's body. We propose a delay-free optical solution to this critical problem: as the primary technical contribution, we apply a large-format retrotransmissive plate to project images onto the target surface from wide viewing angles. We also tackle technical issues unique to the proposed shadowless principle, such as stray light and touch detection. We implement a proof-of-concept prototype and validate the proposed techniques through experiments.

Off-Axis Layered Displays: Hybrid Direct-View/Near-Eye Mixed Reality with Focus Cues

Journal

Christoph Ebner, Peter Mohr, Tobias Langlotz, Yifan (Evan) Peng, Dieter Schmalstieg, Gordon Wetzstein, Denis Kalkofen

This work introduces off-axis layered displays, the first approach to stereoscopic direct-view displays with support for focus cues. Off-axis layered displays combine a head-mounted display with a direct-view display to provide focus cues. We present a pipeline for the real-time rendering of off-axis display patterns to explore the novel display architecture. Additionally, we build two prototypes that combine a head-mounted display with a stereoscopic and with a monoscopic direct-view display, respectively. Furthermore, we show how extending off-axis layered displays with an attenuation layer and with eye tracking improves image quality. We thoroughly analyze each component and present examples captured through our prototypes.

Perceptually-guided dual-mode virtual reality system for motion-adaptive display

Journal

Hui Zeng, Rong Zhao

The development of high-quality virtual reality (VR) devices poses great challenges for display panel fabrication, real-time rendering, and data transfer. To address this issue, we introduce a dual-mode VR system based on the spatio-temporal perception characteristics of human vision. Built on a novel optical architecture, the system switches display modes according to the user's perceptual requirements in different display scenes, adaptively adjusting spatial and temporal resolution within a given display budget and thus providing users with the optimal visual perception quality.

Dynamic Redirection for VR Haptics with a Handheld Stick

Journal

Yuqi Zhou, Voicu Popescu

This paper proposes a general handheld stick haptic redirection method that allows the user to experience complex shapes with haptic feedback through both tapping and extended contact, such as in contour tracing. As the user extends the stick to make contact with a virtual object, the contact point with the virtual object and the targeted contact point with the physical object are continually updated, and the virtual stick is redirected to synchronize the virtual and real contacts. Redirection is applied either just to the virtual stick, or to both the virtual stick and hand. A user study (N = 26) confirms the effectiveness of the proposed redirection method.
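
A minimal sketch of the kind of contact-synchronizing interpolation the abstract describes might look like the following; it reflects our reading of the abstract, not the authors' code, and all names and the blending scheme are illustrative assumptions.

```python
import numpy as np

def redirected_tip(real_tip, virtual_contact, physical_contact, start_dist):
    """Sketch of stick redirection: as the real stick tip closes the distance
    to the physical contact point, the rendered virtual tip is progressively
    offset so that the virtual and real contacts coincide at touchdown."""
    real_tip = np.asarray(real_tip, dtype=float)
    offset = (np.asarray(virtual_contact, dtype=float)
              - np.asarray(physical_contact, dtype=float))
    dist = np.linalg.norm(np.asarray(physical_contact, dtype=float) - real_tip)
    # Progress runs from 0 (at start_dist away) to 1 (at contact).
    progress = np.clip(1.0 - dist / start_dist, 0.0, 1.0)
    return real_tip + progress * offset  # position of the rendered virtual tip
```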

Session: Agents and Perception

Wednesday, March 29, 2023, 14:00, Shanghai UTC+8, Room C

Session Chair: Ye Pan

A study of the influence of AR on the perception, comprehension and projection levels of situation awareness

Conference

Camille Truong-Allié, Martin Herbeth, Alexis Paljic

We examine how Augmented Reality (AR) impacts users' situation awareness (SA) of elements secondary to an AR-assisted main task. These elements can still provide relevant information, so a good understanding of users' awareness of them is valuable. To this end, we measured SA of secondary elements in an industrial workshop during AR-assisted pedestrian navigation. We compared SA across three guidance conditions: a paper map, a virtual path, and a virtual path with virtual cues about secondary elements. We adapted an existing SA measure to a real-world environment and found that, in our setting, the use of AR decreased users' SA of secondary elements, mainly at the perception level.

The Impact of Avatar and Environment Congruence on Plausibility, Embodiment, Presence, and the Proteus Effect in Virtual Reality

Journal

David Mal, Erik Wolf, Nina Döllinger, Carolin Wienrich, Marc Erich Latoschik

We investigated the impact of avatar and environment types on VR-related qualia and the Proteus effect. Participants embodied an avatar in either sports or business attire in a semantically congruent or incongruent environment while performing exercises in virtual reality. Avatar-environment congruence significantly affected the avatar's plausibility but not the sense of embodiment or spatial presence. A significant Proteus effect emerged only for participants who reported a strong feeling of (virtual) body ownership, indicating that a strong sense of having and owning a virtual body is key to facilitating the Proteus effect.

A Systematic Review on the Visualization of Avatars and Agents in AR & VR

Journal

Florian Weidner, Gerd Boettcher, Chenyao Dao, Luljeta Sinani, Stephanie Arevalo Arboleda, Christian Kunert, Christoph Gerhardt, Wolfgang Broll, Alexander Raake

Augmented Reality (AR) and Virtual Reality (VR) are moving from the lab toward consumers, especially through social applications, which require visual representations of humans and intelligent entities. Our work investigates the effects of rendering style and visible body parts in AR and VR through a systematic literature review. We analyzed 72 papers that compare various avatar representations. We discuss and synthesize our results within the context of today's AR and VR ecosystem, provide guidelines for practitioners, and identify promising research opportunities to encourage future work on avatars and agents in AR/VR environments.

Volumetric Avatar Reconstruction with Spatio-Temporally Offset RGBD Cameras

Conference

Gareth Rendle, Adrian Kreskowski, Bernd Froehlich

RGBD cameras can capture users' actions for reconstruction of volumetric avatars that allow rich interaction between telepresence parties in VR. This work presents a system design enabling volumetric avatar reconstruction at increased frame rates. We overcome the limited frame rate of commodity RGBD cameras by dividing cameras into two spatio-temporally offset groups and implementing a real-time reconstruction pipeline to fuse the temporally offset RGBD streams. Comparisons against capture configurations possible with the same number of cameras indicate that using spatially and temporally offset RGBD cameras is beneficial, allowing increased reconstruction frame rates and scene coverage while producing temporally consistent avatars.
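
The timing idea behind the two camera groups can be sketched as follows, assuming two groups of cameras running at 30 Hz; the numbers and names are illustrative, not the authors' implementation.

```python
def interleaved_capture_schedule(fps=30.0, n_groups=2, n_periods=4):
    """Illustrative trigger schedule: offsetting group g by g / (n_groups * fps)
    interleaves the streams, raising the effective capture rate to
    n_groups * fps without requiring faster cameras."""
    period = 1.0 / fps
    schedule = [(g, i * period + g * period / n_groups)
                for i in range(n_periods) for g in range(n_groups)]
    return sorted(schedule, key=lambda s: s[1])  # (group, trigger time in s)
```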

Effects of the Visual Fidelity of Virtual Environments on Presence, Context-dependent Forgetting, and Source-monitoring Error

Journal

Takato Mizuho, Takuji Narumi, Hideaki Kuzuoka

The present study examined two effects caused by alternating between virtual environment (VE) and real environment (RE) experiences: "context-dependent forgetting" and "source-monitoring errors." The former is that memories learned in VEs are more easily recalled in VEs than in REs, and vice versa. The latter is that memories learned in VEs and REs are easily confused, making it difficult to identify the source of a memory. We hypothesized that the visual fidelity of VEs is responsible for these effects. We found that the level of visual fidelity significantly affected the sense of presence, but not context-dependent forgetting or source-monitoring errors.

Session: Perception 2

Wednesday, March 29, 2023, 15:15, Shanghai UTC+8, Room A

Session Chair: Shuo Yan

Body and Time: Virtual Embodiment and its effect on Time Perception

Journal

Fabian Unruh, David H.V. Vogel, Maximilian Landeck, Jean-Luc Lugrin, Marc Erich Latoschik

This article explores the relation between one's own body and the perception of time in a novel Virtual Reality (VR) experiment explicitly fostering user activity. Forty-eight participants randomly experienced different degrees of embodiment: i) without an avatar (low), ii) with hands (medium), and iii) with a high-quality avatar (high). Participants had to repeatedly activate a virtual lamp and estimate the duration of time intervals as well as judge the passage of time. Our results show a significant effect of embodiment on time perception: time passes more slowly in the low-embodiment condition than in the medium and high conditions.

Comparing the Effects of Visual Realism on Size Perception in VR versus Real World Viewing through Physical and Verbal Judgments

Journal

Ignatius Alex Wijayanto, Sabarish V. Babu, Christopher Pagano, Jung-Hong Chuang

Virtual Reality (VR) is well known for its use in interdisciplinary applications and research. These applications vary in their visual representation and may require accurate size perception for task performance. However, the relationship between size perception and visual realism in VR has not yet been explored. In this contribution, we conducted an empirical evaluation using a between-subjects design across four conditions of visual realism, examining size perception of target objects in the same virtual environment. Our results showed that participants' size perception was accurate in both realistic and non-photorealistic conditions, suggesting that visual invariants in the environment provide meaningful information for size judgments.

Can I Squeeze Through? Effects of Self-Avatars and Calibration in a Person-Plus-Virtual-Object System on Perceived Lateral Passability in VR

Journal

Ayush Bhargava, Rohith Venkatakrishnan, Roshan Venkatakrishnan, Kathryn Lucaites, Hannah Solini, Andrew Robb, Christopher Pagano, Sabarish V. Babu

Self-avatars and virtual object interactions give rise to affordance-based challenges, as dynamic surface information such as compression and stickiness is absent in VR. This effect is amplified when interacting with virtual objects, whose weight and inertial feedback are often mismatched. We therefore investigated how the absence of dynamic surface properties affects lateral passability judgments with virtual handheld objects, in the presence or absence of self-avatars. Results show that participants can account for the missing dynamic information when given self-avatars, but rely on an internal body schema of a compressed body depth in their absence.

A Study of Change Blindness in Immersive Environments

Journal

Daniel Martin, Xin Sun, Diego Gutierrez, Belen Masia

Human performance is poor at detecting certain changes in a scene, a phenomenon known as change blindness. Although the exact reasons for this effect are not yet completely understood, there is a consensus that it is due to our constrained attention and memory capacity. In this work, we present a study of change blindness in immersive 3D environments. We devise two experiments: first, we analyze how different change properties may affect change blindness; we then further explore its relation to the capacity of visual working memory and analyze the influence of the number of changes.

Exploring Plausibility and Presence in Mixed Reality Experiences

Journal

Franziska Westermeier, Larissa Brübach, Marc Erich Latoschik, Carolin Wienrich

Our study investigates the impact of incongruencies at different information processing layers (i.e., the sensation/perception and cognition layers) in Mixed Reality (MR) and their effects on plausibility as well as spatial and overall presence. In a simulated maintenance application, participants performed operations in a randomized 2×2 design, experiencing either VR (congruent sensation/perception) or AR (incongruent sensation/perception). By inducing cognitive incongruence through power outages without a traceable cause, we aimed to explore the relationship between perceived cause and effect. Our results indicate that the effects of the power outages on perceived plausibility and spatial presence ratings differ significantly between VR and AR.

Continuous VR Weight Illusion by Combining Adaptive Trigger Resistance and Control-Display Ratio Manipulation

Conference

Carolin Stellmacher, André Zenner, Oscar Javier Ariza Nunez, Ernst Kruijff, Johannes Schöning

We studied a novel combination of a hardware-based technique and a software-based pseudo-haptic approach to achieve a continuous VR weight illusion. While a modified VR controller renders adaptive trigger resistance during grasping, a manipulation of the control-display (C/D) ratio induces a sense of weight during lifting. In a psychophysical study, we tested our combined approach against the individual rendering techniques. Our findings show that participants were significantly more sensitive to smaller weight differences with the combined weight simulation and determined weight differences faster. Our work demonstrates the meaningful benefit of combining physical and virtual methods for virtual weight perception.
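
To make the two cues concrete, here is a hedged sketch of how a C/D-ratio manipulation and a trigger-resistance setting might be combined; the mapping constants and function names are illustrative assumptions, not the parameters used in the study.

```python
def render_weight(real_lift_m, virtual_mass_kg):
    """Illustrative combination of the two cues from the abstract.

    Pseudo-haptics: a C/D ratio below 1 makes the virtual hand and object
    rise more slowly than the real hand, which is perceived as added weight.
    Hardware: heavier objects are given more trigger resistance on grasp.
    All constants below are made up for illustration.
    """
    cd_ratio = max(0.6, 1.0 - 0.1 * virtual_mass_kg)
    virtual_lift_m = real_lift_m * cd_ratio              # displayed displacement
    trigger_resistance = min(1.0, 0.2 + 0.15 * virtual_mass_kg)  # normalized 0..1
    return virtual_lift_m, trigger_resistance
```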

Session: InfoVis and TextEntry

Wednesday, March 29, 2023, 15:15, Shanghai UTC+8, Room B

Session Chair: Chunyi Chen

iARVis: Mobile AR Based Declarative Information Visualization Authoring, Exploring and Sharing

Conference

Junjie Chen, Chenhui Li, Sicheng Song, Changbo Wang

We present iARVis, a proof-of-concept toolkit for creating, experiencing, and sharing mobile AR-based information visualization environments, which ordinarily require low-level programming expertise and lengthy manual encoding to construct. We present a declarative approach to defining the AR environment, including how information is automatically positioned, laid out, and interacted with, to improve construction efficiency. We also provide advanced features such as hot-reload, persistence, and continuity to ensure convenient creation for designers and seamless experiences for users. To demonstrate its viability and extensibility, we evaluate iARVis through several use cases along with performance evaluations and expert reviews.
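
As a rough illustration of what a declarative specification might look like (the abstract does not show the actual schema, so every key below is a hypothetical stand-in rather than the toolkit's API):

```python
# Hypothetical declarative widget description in the spirit of iARVis;
# the toolkit's real schema is not given in the abstract.
poster_chart = {
    "anchor": {"type": "image-marker", "target": "conference_poster"},
    "layout": {"placement": "right-of-anchor", "width_m": 0.4},
    "content": [
        {"type": "bar-chart", "data": "citations.json",
         "x": "year", "y": "count"},
        {"type": "caption", "text": "Citations per year"},
    ],
    "interactions": {"tap": "expand-details"},
}
```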

Comparing Scatterplot Variants for Temporal Trends Visualization in Immersive Virtual Environments

Conference

Carlos Quijano-Chavez, Carla M. Dal Sasso Freitas, Luciana Nedel

Trends are changes in variables or attributes over time. Interpreting tendencies requires observing the behavior of lines or points regarding increments, decrements, or reversals. Previous work assessed scatterplot variants such as Animation, Small Multiples, and Overlaid Trails, comparing the effectiveness of trend representation on large and small displays. In this work, we study how best to enable analysts to explore and perform temporal-trend tasks with these same techniques in immersive virtual environments. Results show that Overlaid Trails are the fastest overall, followed by Animation and Small Multiples, while accuracy is task-dependent. We also report results from interaction measures and questionnaires.

Text Input for Non-Stationary XR Workspaces: Investigating Tap and Word-Gesture Keyboards in Virtual and Augmented Reality

Journal

Florian Kern, Florian Niebling, Marc Erich Latoschik

We evaluated two text input techniques for non-stationary XR workspaces on virtual reality (VR) and video see-through augmented reality (VST AR) displays. Our study with 64 participants showed that the XR display and the input technique both have a strong impact on text entry performance. We found that tap keyboards had higher usability, better user experience, and lower task load than swipe keyboards in VR and VST AR. Both input techniques were faster in VR than in VST AR, and the tap keyboard was fastest in VR. Participants showed a significant learning effect. Our reference implementation is publicly available for replication and reuse.

CrowbarLimbs: A Fatigue-Reducing Virtual Reality Text Entry Metaphor

Journal

Muhammad Abu Bakar, Yu-Ting Tsai, Hao-Han Hsueh, Elena Carolina Li

We present CrowbarLimbs, a novel virtual reality text entry metaphor with two deformable extended virtual limbs. CrowbarLimbs helps users place their hands and arms in a comfortable posture, effectively reducing physical fatigue in the hands, wrists, and elbows, while achieving text entry speed, accuracy, and system usability comparable to previous selection-based methods. We found that the shapes of the CrowbarLimbs significantly affect fatigue ratings and text entry speed. Furthermore, placing the virtual keyboard near the user, at roughly half their height, significantly affects text entry speed.

I Can't See That! Considering the Readability of Small Objects in Virtual Environments

Journal

Jacob Young, Nadia Pantidi, Matthew Wood

Interacting with and interpreting small objects in virtual reality remains an issue, particularly in the replication of real-world tasks. We propose three techniques for improving the usability and readability of small objects: i) expanding them in place, ii) expanding a separate replica, and iii) showing a large readout of the object's state. We conducted a user study comparing each technique's usability, induced presence, and effect on knowledge retention, and conclude that scaling the area of interest may not be enough to improve the usability of information-bearing objects, whereas text readouts can aid task completion at the cost of knowledge retention.

Conference Sponsors

Special

Lujiazui

Diamond

Platinum

Baidu

Gold

SenseTime
Unity China
XImmerse
Vivo

Silver

GritWorld

Bronze

ImageDerivative
Yuanjing (Alibaba)
RaysEngine (Alibaba)
S-Dream
VRIH
EVIS
Kanjing
Lianying

Doctoral Consortium Sponsors

Supporters

Tencent Learn
Lenovo
Qualcomm
Liangfengtai
HGMT

Host

SJTU

Co-Host

ZJU NUIST

Supporting Associations

CCF-VR
CSIG-VR
CGS-VCC
CVRVT
MIA
SIGA
SJMC



© IEEE VR Conference 2023, Sponsored by the IEEE Computer Society and the Visualization and Graphics Technical Committee