The official banner for the IEEE Conference on Virtual Reality + User Interfaces, featuring a kiwi wearing a VR headset overlaid on an image of Mount Cook and a braided river.

IEEE VR 2023 Conference Awards

Best Papers
Honorable Mentions
Best Poster
Honorable Mention
Best Demo
Honorable Mention
3DUI Contest
Best 3DUI
Honorable Mention
Doctoral Consortium
Best Presentation
Paper Presentation
Best Paper Presentation

Best Papers

Effects of Collaborative Training Using Virtual Co-embodiment on Motor Skill Learning

Daiki Kodama, Takato Mizuho, Yuji Hatada, Takuji Narumi, Michitaka Hirose

Many VR systems that enable users to observe and follow a teacher's movements from a first-person perspective have been reported to be useful for motor skill learning. However, learners using these methods feel a weak sense of agency because they must consciously follow the teacher's movements, which hinders motor skill retention. To address this problem, we propose applying virtual co-embodiment, in which two users are immersed in the same virtual avatar, as a method that elicits a strong sense of agency during motor skill learning. The experiment showed that learning in virtual co-embodiment with the teacher improves learning efficiency compared with sharing the teacher's perspective or learning alone.

Evaluating the Effects of Virtual Reality Environment Learning on Subsequent Robot Teleoperation in an Unfamiliar Building

Karl Eisenträger, Judith Haubner, Jennifer Brade, Wolfgang Einhäuser, Alexandra Bendixen, Sven Winkler, Philipp Klimant, Georg Jahn

We compared three methods of preparing for tasks performed by teleoperating a robot in a building. One group studied a floorplan, a second explored a VR reconstruction of the building from a normal-sized avatar's perspective, and a third explored the VR reconstruction from a giant-sized avatar's perspective. Giant VR and the floorplan took less learning time than normal VR. Both VR methods significantly outperformed the floorplan in an orientation task. Navigation was quicker from the giant perspective than from the normal perspective or with the floorplan. We conclude that both the normal and giant perspectives in VR are viable for preparing for teleoperation in unfamiliar environments.

How End-effector Representations Affect the Perceptions of Dynamic Affordances in Virtual Reality

Roshan Venkatakrishnan, Rohith Venkatakrishnan, Balagopal Raveendranath, Christopher Pagano, Andrew Robb, Wen-Chieh Lin, Sabarish V. Babu

We investigated how different virtual hand representations affect users' perceptions of dynamic affordances using a collision-avoidance object retrieval task. We employed a 3 (virtual end-effector representation) X 13 (frequency of moving doors) X 2 (target object size) multi-factorial design, manipulating the input modality and its concomitant virtual end-effector representation across three experimental conditions: (1) Controller; (2) Controller-hand; (3) Glove. We find that representing the end-effector as hands tends to increase embodiment but can also come at the cost of performance or an increased workload, due to a discordant mapping between the virtual representation and the input modality used.

Integrating Both Parallax and Latency Compensation into Video See-Through Head-Mounted Display

Atsushi Ishihara, Hiroyuki Aga, Yasuko Ishihara, Hirotake Ichikawa, Hidetaka Kaji, Koichi Kawasaki, Daita Kobayashi, Toshimi Kobayashi, Ken Nishida, Takumi Hamasaki, Hideto Mori, Yuki Morikubo

This study incorporates both parallax and latency compensation methods into a video see-through head-mounted display to realize edge-preserving occlusion. To reconstruct captured images, we reproject the images captured by the color camera to the user's eye position using depth maps. We fill the disocclusion areas using cached depth maps estimated in previous frames, instead of relying on computationally heavy inpainting procedures. For occlusion, we refine the edges of the depth maps using both infrared masks and color-guided filters. For latency compensation, we propose a two-phase temporal warping method, which proves not only fast but also spatially correct for static scenes.
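The depth-based reprojection the abstract describes can be illustrated with a toy example: back-project a camera pixel to a 3D point using its depth, translate it into the eye frame, and project it back. This is a simplified sketch under assumed conditions (single pixel, pure horizontal camera-to-eye offset, shared intrinsics); it is not the authors' implementation.

```python
def reproject_pixel(u, v, depth, fx, fy, cx, cy, baseline_x):
    """Warp one color-camera pixel to the user's eye viewpoint using its
    depth value. Assumes a pure horizontal camera-to-eye offset (baseline_x)
    and shared pinhole intrinsics -- a toy model for illustration."""
    # Back-project the pixel into a 3D point in the camera frame.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    # Shift the point into the eye frame (translation only, no rotation).
    x_eye = x - baseline_x
    # Project back into the eye's image plane.
    u_eye = fx * x_eye / z + cx
    v_eye = fy * y / z + cy
    return u_eye, v_eye
```

Pixels with no valid depth (the disoccluded regions) cannot be warped this way, which is where the cached depth maps from previous frames come in.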

Paper - Honorable Mentions

Gaining the High Ground: Teleportation to Mid-Air Targets in Immersive Virtual Environments

Tim Weissker, Pauline Bimberg, Aalok Shashidhar Gokhale, Torsten Wolfgang Kuhlen, Bernd Froehlich

We present three teleportation techniques that enable the user to travel not only to ground-based but also to mid-air targets. The techniques differ in the extent to which elevation changes are integrated into the conventional target selection process. Elevation can be specified either simultaneously, as a connected second step, or separately from horizontal movements. A user study indicated a trade-off between the simultaneous method with the highest accuracy and the two-step method with the lowest task load. The separate method was least suitable on its own. Based on our findings, we define initial design guidelines for mid-air navigation techniques.

Measuring Interpersonal Trust towards Virtual Humans with a Virtual Maze Paradigm

Jinghuai Lin, Johrine Cronjé, Ivo Käthner, Paul Pauli, Marc Erich Latoschik

This work proposes a validated behavioural tool to measure interpersonal trust towards a specific virtual social interaction partner in social VR. The task of the users (the trustors) is to navigate through a maze in virtual reality, where they can interact with a virtual human (the trustee). In the validation study, participants interacting with a trustworthy avatar asked for advice more often than those interacting with an untrustworthy avatar, indicating that the paradigm is sensitive to the differences in interpersonal trust and can be used to measure trust towards virtual humans.

PACE: Data-Driven Virtual Agent Interaction in Dense and Cluttered Environments

James F Mullen Jr, Dinesh Manocha

We present PACE, a novel method for modifying motion-captured virtual agents to interact with and move throughout dense, cluttered 3D scenes. Our approach changes a given motion sequence of a virtual agent as needed to adjust to the obstacles and objects in the environment. We compare our method with prior motion-generation techniques and highlight its benefits with a perceptual study, in which human raters preferred our method, and with physical plausibility metrics, on which our method performed better. We have integrated our system with Microsoft HoloLens and demonstrate its benefits in real-world scenes. Our project website is available at

Give Me a Hand: Improving the Effectiveness of Near-field Augmented Reality Interactions By Avatarizing Users' End Effectors

Roshan Venkatakrishnan, Rohith Venkatakrishnan, Balagopal Raveendranath, Christopher Pagano, Andrew Robb, Wen-Chieh Lin, Sabarish V. Babu

We investigated whether avatarizing users' end-effectors (hands) improved their interaction performance on a near-field, obstacle avoidance, object retrieval task. We employed a 3 (Augmented hand representation) X 2 (density of obstacles) X 2 (size of obstacles) X 2 (virtual light intensity) multi-factorial design, manipulating the presence/absence and anthropomorphic fidelity of augmented self-avatars, across three experimental conditions: (1) No-Augmented Avatar; (2) Iconic-Augmented Avatar; (3) Realistic Augmented Avatar. Our findings seem to indicate that interaction performance may improve when users are provided with a visual representation of the AR system's interacting layer in the form of an augmented self-avatar.

Evaluation of AR visualization approaches for catheter insertion into the ventricle cavity

Mohamed Benmahdjoub, Abdullah Thabit, Marie-Lise C. van Veelen, Wiro J. Niessen, Eppo B. Wolvius, Theo van Walsum

The way virtual data is presented in AR-guided surgical navigation plays an important role in correct spatial perception of the virtual overlay. This study compares various visualization modalities for catheter insertion in external ventricular drain and ventricular shunt procedures. Using an optical see-through (OST) display, we investigate (1) 2D approaches (smartphone, 2D window) and (2) 3D approaches (a fully aligned patient model, and a model adjacent to the patient that is rotationally aligned). 32 participants performed 20 AR-guided insertions per approach. The results show more accurate insertions using the 3D approaches, with a higher preference compared to the 2D approaches.

CardioGenesis4D: Interactive Morphological Transitions of Embryonic Heart Development in a Virtual Learning Environment

Danny Schott, Matthias Kunz, Tom Wunderling, Florian Heinrich, Rüdiger Braun-Dullaeus, Christian Hansen

In the embryonic human heart, complex dynamic shape changes take place over a short period, making this development difficult to visualize. We present an immersive learning environment that enables the understanding of morphological transitions through hand interactions. In a user study, we examined usability, perceived task load, and sense of presence; we also assessed knowledge gain and obtained feedback from domain experts. Students and professionals rated the application as usable, and our results show that interactive learning content should consider features for different learning styles. Our work previews how VR can be integrated into a cardiac embryology education curriculum.

Paper - Nominees

GroundFlow: Liquid-based Haptics for Simulating Fluid on the Ground in Virtual Reality

Ping-Hsuan Han, Tzu-Hua Wang, Chien-Hsing Chou

Most haptic devices simulate feedback in dry environments such as living rooms, prairies, or cities; water-related environments such as rivers, beaches, and swimming pools remain less explored. In this paper, we present GroundFlow, a liquid-based haptic floor system for simulating fluid on the ground in VR. We discuss design considerations and propose a system architecture and interaction design. We conduct two user studies to assist in designing a multiple-flow feedback mechanism, develop three applications to explore the potential uses of the mechanism, and consider the limitations and challenges thereof to inform VR developers and haptic practitioners.

ShadowMover: Automatically Projecting Real Shadows onto Virtual Object

Piaopiao Yu, Jie Guo, Fan Huang, Zhenyu Chen, Chen Wang, Yan Zhang, Yanwen Guo

Inserting 3D virtual objects into real-world images has many applications in photo editing and augmented reality. We present the first end-to-end solution to fully automatically project real shadows onto virtual objects for outdoor scenes. In our method, we introduce the Shifted Shadow Map, a new shadow representation that encodes the binary mask of shifted real shadows after inserting virtual objects in an image. Based on the shifted shadow map, we propose a CNN-based shadow generation model named ShadowMover which first predicts the shifted shadow map for an input image and then automatically generates plausible shadows on any inserted virtual object.

A Study of Change Blindness in Immersive Environments

Daniel Martin, Xin Sun, Diego Gutierrez, Belen Masia

Human performance is poor at detecting certain changes in a scene, a phenomenon known as change blindness. Although the exact reasons for this effect are not yet completely understood, there is a consensus that it is due to our constrained attention and memory capacity. In this work, we present a study of change blindness using immersive 3D environments. We devise two experiments: in the first, we focus on analyzing how different change properties may affect change blindness. We then further explore its relation to the capacity of our visual working memory and analyze the influence of the number of changes.

Evoking empathy with visually impaired people through an augmented reality embodiment experience

Renan Guarese, Emma Pretty, Haytham Fayek, Fabio Zambetta, Ron van Schyndel

To promote empathy with people who have disabilities, we propose a multi-sensory interactive experience that allows sighted users to embody having a visual impairment while using assistive technologies. The experiment involves blindfolded sighted participants interacting with sonification methods to locate targets in a real kitchen. We enquired with the blind community about the perceived benefits of increasing such empathy. We gathered sighted people's self-reported empathy with the BVI community, and blind people's perception of that empathy, from sighted and blind respondents respectively. We re-tested sighted people's empathy after the experiment and found that their empathetic responses significantly increased.

Shadowless Projection Mapping using Retrotransmissive Optics

Kosuke Hiratani, Daisuke Iwai, Yuta Kageyama, Parinya Punpongsanon, Takefumi Hiraki, Kosuke Sato

This paper presents a shadowless projection mapping system for interactive applications in which a target surface is frequently occluded from a projector with a user's body. We propose a delay-free optical solution for this critical problem. Specifically, as the primary technical contribution, we apply a large format retrotransmissive plate to project images onto the target surface from wide viewing angles. We also tackle technical issues unique to the proposed shadowless principle such as stray light and touch detection. We implement a proof-of-concept prototype and validate the proposed techniques through experiments.

ConeSpeech: Exploring Directional Speech Interaction for Multi-Person Remote Communication in Virtual Reality

Yukang Yan, Haohua Liu, Yingtian Shi, Jingying Wang, Ruici Guo, Zisu Li, Xuhai Xu, Chun Yu, Yuntao Wang, Yuanchun Shi

We present ConeSpeech, a virtual reality (VR) based multi-user remote communication technique that enables users to selectively speak to target listeners without distracting bystanders. With ConeSpeech, the user looks at the target listener, and only listeners within a cone-shaped area in that direction can hear the speech. We conducted a user study to determine the modality for controlling the cone-shaped delivery area. We then implemented the technique and evaluated its performance in three typical multi-user communication tasks by comparing it to two baseline methods. Results show that ConeSpeech balanced the convenience and flexibility of voice communication.

Assisted walking-in-place: Introducing assisted motion to walking-by-cycling in embodied Virtual Reality

Yann Moullec, Justine Saint-Aubert, Mélanie Cogne, Anatole Lécuyer

We investigated the use of a motorized bike to support the walk of an avatar in Virtual Reality. Our approach assists a walking-in-place technique called walking-by-cycling with a motorized bike in order to provide participants with a compelling walking experience while reducing their perceived effort. We conducted a study which showed that "assisted walking-by-cycling" induced more ownership, agency, and walking sensation than a static simulation. It also induced levels of ownership and walking sensation similar to those of active walking-by-cycling, but with less perceived effort, which promotes the use of our approach in situations where users cannot or do not want to exert much effort while walking in embodied VR.

Exploring Plausibility and Presence in Mixed Reality Experiences

Franziska Westermeier, Larissa Brübach, Marc Erich Latoschik, Carolin Wienrich

Our study investigates the impact of incongruencies on different information processing layers (i.e., the sensation/perception and cognition layers) in Mixed Reality (MR) and their effects on plausibility, spatial presence, and overall presence. In a simulated maintenance application, participants performed operations in a randomized 2x2 design, experiencing either VR (congruent sensation/perception) or AR (incongruent sensation/perception). By inducing cognitive incongruence through power outages lacking a traceable cause, we aimed to explore the relationship between perceived cause and effect. Our results indicate that the effects of the power outages differ significantly in the perceived plausibility and spatial presence ratings between VR and AR.

Upper Body Thermal Referral and Tactile Masking for Localized Feedback

Hyungki Son, Haokun Wang, Yatharth Singhal, Jin Ryong Kim

This paper investigates the effects of thermal referral and tactile masking illusions to achieve localized thermal feedback. The first experiment uses sixteen vibrotactile actuators with four thermal actuators to explore the thermal distribution on the user's back. The result confirms that localized thermal feedback can be achieved through cross-modal thermo-tactile interaction on the user's back. The second experiment validates our approach in VR. The results show that our thermal referral with tactile masking achieves better response times and location accuracy with fewer thermal actuators.

LiDAR-aid Inertial Poser: Large-scale Human Motion Capture by Sparse Inertial and LiDAR Sensors

Yiming Ren, Chengfeng Zhao, Yannan He, Peishan Cong, Han Liang, Jingyi Yu, Lan Xu, Yuexin Ma

We propose a multi-sensor fusion method for capturing challenging 3D human motions with accurate consecutive local poses and global trajectories in large-scale scenarios, using only a single LiDAR and four IMUs, which are set up conveniently and worn lightly. Moreover, we collect a LiDAR-IMU multi-modal mocap dataset, LIPD, with diverse human actions in long-range scenarios. Extensive quantitative and qualitative experiments on LIPD and other open datasets demonstrate the capability of our approach for compelling motion capture in large-scale scenarios, outperforming other methods by an obvious margin. We will release our code and captured dataset to stimulate future research.

Using Virtual Replicas to Improve Mixed Reality Remote Collaboration

Huayuan Tian, Gun A. Lee, Huidong Bai, Mark Billinghurst

In this paper, we explore how virtual replicas can enhance MR remote collaboration with a 3D reconstruction of the task space and study how they can work as a spatial cue to improve collaboration. Our approach segments the foreground manipulable objects in the local environment and creates virtual replicas of them. The remote user can manipulate them to explain the task and guide the partner, who can then rapidly and accurately understand the remote expert's intentions and instructions. Our user study found that manipulating virtual replicas was more efficient than 3D annotation drawing in MR remote collaboration. We report and discuss the findings and limitations of our system and study, and directions for future research.

GeoSynth: A Photorealistic Synthetic Indoor Dataset for Scene Understanding

Brian Pugh, Davin Chernak, Salma Jiddi

Deep learning has revolutionized many scene perception tasks over the past decade. Some of these improvements can be attributed to the development of large labeled datasets. The creation of such datasets can be an expensive, time-consuming, and imperfect process. To address these issues, we introduce GeoSynth, a diverse photorealistic synthetic dataset for indoor scene understanding tasks. Each GeoSynth exemplar contains rich labels including segmentation, geometry, camera parameters, surface material, lighting, and more. We demonstrate that supplementing real training data with GeoSynth can significantly improve network performance on perception tasks, like semantic segmentation.

Body and Time: Virtual Embodiment and its effect on Time Perception

Fabian Unruh, David H.V. Vogel, Maximilian Landeck, Jean-Luc Lugrin, Marc Erich Latoschik

This article explores the relation between one's own body and the perception of time in a novel Virtual Reality (VR) experiment explicitly fostering user activity. Forty-eight participants randomly experienced different degrees of embodiment: i) without an avatar (low), ii) with hands (medium), and iii) with a high-quality avatar (high). Participants had to repeatedly activate a virtual lamp and estimate the duration of time intervals as well as judge the passage of time. Our results show a significant effect of embodiment on time perception: time passes slower in the low embodiment condition than in the medium and high conditions.

GestureSurface: VR sketching through assembling scaffold surface with non dominant hand

Xinchi Xu, Yang Zhou, Bingchan Shao, Guihuan Feng, Chun Yu

3D sketching provides an immersive drawing experience for design. However, the lack of depth perception cues in VR makes it difficult to draw accurate strokes. To handle this, we introduce gesture-based scaffolding to guide strokes. We conducted a gesture-design study and propose GestureSurface, a bi-manual interface in which the non-dominant hand performs gestures to create scaffolding while the dominant hand draws with a controller. When the dominant hand is occupied, reducing the idleness of the non-dominant hand through gestural input increases efficiency and fluency. We evaluated GestureSurface in a 20-person user study and found it had high efficiency and low fatigue.

Virtual reality in supporting charitable giving: The role of vicarious experience, existential guilt, and need for stimulation

Ou Li, Han Qiu

Although a growing number of charities have used virtual reality (VR) for fundraising activities, there is relatively little academic research in this area. The purpose of this study is to investigate the underlying mechanism of VR in supporting charitable giving. We found that VR charitable appeals increase actual money donations when compared to the traditional two-dimensional (2D) format and that this effect is achieved through a serial mediating effect of vicarious experience and existential guilt. Findings also identify the need for stimulation as a boundary condition, indicating that those with a higher (vs. lower) need for stimulation were more (vs. less) affected by the mediating mechanism of VR charitable appeals on donations.

Effects of the Visual Fidelity of Virtual Environments on Presence, Context-dependent Forgetting, and Source-monitoring Error

Takato Mizuho, Takuji Narumi, Hideaki Kuzuoka

The present study examined two effects caused by alternating VE and RE experiences: "context-dependent forgetting" and "source-monitoring errors." The former effect is that memories learned in VEs are more easily recalled in VEs than in REs, and vice versa. The source-monitoring error is that memories learned in VEs and REs are easily confused, making it difficult to identify the source of a memory. We hypothesized that the visual fidelity of VEs is responsible for these effects. We found that the level of visual fidelity significantly affected the sense of presence, but not context-dependent forgetting or source-monitoring errors.

Best Poster

VRScroll: A Shape-Changing Device for Precise Sketching in Virtual Reality

Wen Ying: University of Virginia; Seongkook Heo: University of Virginia

Studies have shown that incorporating physical surfaces in VR can improve sketching performance by allowing users to feel the contact and rest their pens on the surface. However, using physical objects or devices to reproduce the diverse shapes of virtual model surfaces can be challenging. We propose VRScroll, a shape-changing device that physically renders the shape of a virtual surface for users to sketch on with a pen. By providing users with passive haptic feedback and constrained movement, VRScroll has the potential to facilitate precise sketching on virtual surfaces.

Poster - Honorable Mention

Tomato Presence: Virtual Hand Ownership with a Disappearing Hand

Anthony Steed: University College London; Vit Drga: University College London

Tomato presence is a term coined by Owlchemy Labs to refer to the observation that players of their game Job Simulator can experience 'hand presence' over an object that is not their hand. When playing the game, if a player grabs an object, their virtual hand disappears, leaving only the grabbed object. This seems to conflict with current theories of how users react to visual/proprioceptive mismatch of their embodiment. We ran a hand ownership experiment in which we implemented a standard object grasp and the disappearing-hand grasp. We show that, on a body-ownership questionnaire, there is evidence that users feel ownership over a disappearing virtual hand. We also confirm that most users do not report that their hand disappeared.

A High-Dynamic-Range Mesh Screen VR Display by Combining Frontal Projection and Retinal Projection

Kazushi Kinjo: Osaka University; Daisuke Iwai: Osaka University; Kosuke Sato: Osaka University

We propose a high-dynamic-range virtual reality (VR) display that can represent glossy materials by combining conventional frontal projection mapping with retinal projection light passing through a microporous-plate screen. In the prototype system, the retinal projection could be superimposed on the frontal projection and increase the luminance of only the specular highlight in the image. This paper also reports that the retinal projection presentation is approximately six hundred times brighter than that of the frontal projection.

The Effects of Avatar Personalization and Human-Virtual Agent Interactions on Self-Esteem

Wei Jie Dominic Koek: Nanyang Technological University; Vivian Hsueh Hua Chen: Nanyang Technological University

Extant literature has suggested that VR may be a potential avenue to enhance self-esteem. However, understanding of the underlying technological mechanisms and their corresponding effects on the self is still not comprehensive. To address this research gap, the current study designed a series of social interactions in VR where participants (N = 171) embodied either a personalized or non-personalized avatar and had either positive or negative interactions with a virtual agent. Findings showed that participants who embodied a personalized avatar experienced a positive change in self-esteem from pre- to post-simulation, regardless of the virtual agent interaction quality.

Best Demo

A Novel Piezo-Based Technology for Haptic Feedback for XR

Rolf Simon Adelsberger: Sensoryx; Alberto Calatroni: Sensoryx; Salar Shahna: Sensoryx

We present a novel technology for tactile haptic feedback tailored to Virtual Reality (VR) and Augmented Reality (AR) experiences. Unlike piezoelectric benders, eccentric rotating masses (ERMs), and linear resonant actuators (LRAs), we present an application of piezoelectric motors aimed at providing realistic tactile haptic feedback.

Demo - Honorable Mention

Dynamic Scene Adjustment for Player Engagement in VR Game

Zhitao Liu: Center for Future Media, the School of Computer Science and Engineering, UESTC, Chengdu, China; Yi Li: Center for Future Media, the School of Computer Science and Engineering, UESTC, Chengdu, China; Ning Xie: Center for Future Media, the School of Computer Science and Engineering, UESTC, Chengdu, China; YouTeng Fan: Center for Future Media, the School of Computer Science and Engineering, UESTC, Chengdu, China; Haolan Tang: Center for Future Media, the School of Computer Science and Engineering, University of Electronic Science and Technology of China; Wei Zhang: AVIC Chengdu Aircraft Design & Research Institute

Virtual reality (VR) produces a highly realistic simulated environment with controllable environment variables. This paper proposes a Dynamic Scene Adjustment (DSA) mechanism based on the user interaction status and performance, which aims to adjust the VR experiment variables to improve the user's game engagement. We combined the DSA mechanism with a musical rhythm VR game. The experimental results show that the DSA mechanism can improve the user's game engagement (task performance).

Spatially Augmented Reality on Non-rigid Dynamic Surfaces

Aditi Majumder: UCI; Muhammad Twaha Ibrahim: UC Irvine

We will demonstrate a spatially augmented reality system on non-rigid dynamic surfaces such as stretchable fabrics. Using a single projector and an RGB-D camera, our system automatically adapts the projection to the changing shape of the non-rigid surface. Such systems have applications in domains such as art, design, entertainment, and medicine. In particular, we will demonstrate the use of the system as a surgical guidance system for cleft palate surgery using a Simulare cleft model.

Best 3DUI

A Continuous Authentication Technique for XR Utilizing Time-Based One Time Passwords, Haptics, and Kinetic Activity

Jeronimo Grandi: University of North Carolina at Greensboro, USA; Jerry Terrell: University of North Carolina at Greensboro, USA; Kadir Baturalp Lofca: University of North Carolina at Greensboro, USA; Carlos Manuel Ruiz Valencia: University of North Carolina at Greensboro, USA; Regis Kopper: University of North Carolina at Greensboro, USA;

3DUI - Honorable Mention

VR authentication through 3D key block building

Romain FOURNIER: University of Strasbourg, France; Benjamin Freeling: University of Strasbourg, France; Miguel Gervilla: University of Strasbourg, France; Martin Heitz: University of Strasbourg, France; Paul Viville: University of Strasbourg, France; Kévin Berenger: University of Strasbourg, France; Flavien Lecuyer: University of Strasbourg, France; Antonio Capobianco: University of Strasbourg, France;

Pathword: A 3D Identity Authentication Interface Based on Connection Trajectory

Han Yang: Beijing University of Posts and Telecommunications, China; Yuxuan Fan: Beijing University of Posts and Telecommunications, China; Haopai Shi: Beijing University of Posts and Telecommunications, China; Yanning Jin: Beijing University of Posts and Telecommunications, China; Tiemeng Li: Beijing University of Posts and Telecommunications, China;

DC - Honorable Mention

Supporting Embodied Sensemaking in Immersive Environment

Yidan Zhang
Monash University

Immersive Record and Replay for Lively Virtual Environments

Klara Brandstätter
University College London

Limb Motion Guidance in Extended Reality

Xingyao Yu
University of Stuttgart

DC - Best Presentation

Fostering Well-Being with Virtual Reality Applications

Nadine Wagner
University of Bremen

Best Paper Presentation

Monte-Carlo Redirected Walking: Gain Selection Through Simulated Walks

Ben J. Congdon, Anthony Steed

We present Monte-Carlo Redirected Walking (MCRDW), a gain selection algorithm for redirected walking. MCRDW applies the Monte-Carlo method to redirected walking by simulating a large number of simple virtual walks, then inversely applying redirection to the virtual paths. Different gain levels and directions are applied, producing differing physical paths. Each physical path is scored, and the results are used to select the best gain level and direction. We provide a simple example implementation and a simulation-based study for validation. In our study, when compared with the next-best technique, MCRDW reduced the incidence of boundary collisions by over 50% while reducing total rotation and position gain.
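The Monte-Carlo idea the abstract describes can be sketched in a few lines: simulate many random virtual walks, map each to a physical path under a candidate gain, count boundary violations, and pick the gain that scores best. The walk model, scoring rule, and every constant below are hypothetical stand-ins for illustration, not the authors' implementation.

```python
import math
import random

def score_gain(rot_gain, n_walks=200, steps=20, step_len=0.5, room=5.0, rng=None):
    """Score a rotation gain by simulating random virtual walks, mapping them
    to physical paths under the gain, and counting room-boundary violations."""
    rng = rng or random.Random(0)  # fixed seed: deterministic scoring
    violations = 0
    for _ in range(n_walks):
        x = y = heading = 0.0
        for _ in range(steps):
            # A virtual turn; the physical turn is scaled by 1/rot_gain.
            heading += rng.uniform(-0.5, 0.5) / rot_gain
            x += step_len * math.cos(heading)
            y += step_len * math.sin(heading)
            if abs(x) > room or abs(y) > room:
                violations += 1
                break
    return violations

def best_rotation_gain(candidates):
    """Pick the candidate gain with the fewest simulated boundary violations."""
    return min(candidates, key=score_gain)
```

A caller would evaluate a small set of candidate gains each frame, e.g. `best_rotation_gain([0.8, 1.0, 1.25])`, and apply the winner.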

A Lack of Restraint: Comparing Virtual Reality Interaction Techniques for Constrained Transport Seating

Graham Wilson, Mark McGill, Daniel Medeiros, Stephen Anthony Brewster

Standalone Virtual Reality (VR) headsets can now be used in cars, trains and planes. However, the spaces around transport seating are constrained by other seats, walls and passengers, leaving users little space in which to interact safely and acceptably. Therefore, they cannot use most commercial VR applications, which are designed for unobstructed 1-2 m² home environments. In this paper, we conducted a gamified user study to test whether three at-a-distance interaction techniques from the literature could be adapted to support common VR movement inputs identified from commercial games, and so equalise the interaction capabilities of at-home and constrained users.

Continuous VR Weight Illusion by Combining Adaptive Trigger Resistance and Control-Display Ratio Manipulation

Carolin Stellmacher, André Zenner, Oscar Javier Ariza Nunez, Ernst Kruijff, Johannes Schöning

We studied a novel combination of a hardware-based technique and a software-based pseudo-haptic approach to achieve a continuous VR weight illusion. While a modified VR controller renders adaptive trigger resistance during grasping, a manipulation of the control-display ratio (C/D ratio) induces a sense of weight during lifting. In a psychophysical study, we tested our combined approach against the individual rendering techniques. Our findings show that participants were significantly more sensitive to smaller weight differences in the combined weight simulations and determined weight differences faster. Our work demonstrates the meaningful benefit of combining physical and virtual methods for virtual weight perception.
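The pseudo-haptic half of this idea can be illustrated simply: scaling the virtual hand's displayed lift by a C/D ratio below 1 makes a lifted object appear to lag behind the real hand, which reads as weight. The mass-to-ratio mapping and all constants below are hypothetical, chosen only to illustrate the mechanism, not taken from the paper.

```python
def virtual_hand_height(real_dy, cd_ratio):
    """Pseudo-haptic weight: the virtual hand's vertical displacement is the
    real displacement scaled by the control-display (C/D) ratio."""
    return real_dy * cd_ratio

def cd_ratio_for_mass(mass_kg, min_ratio=0.6, max_ratio=1.0, max_mass=2.0):
    """Map object mass to a C/D ratio: heavier objects get a smaller ratio,
    so the virtual hand rises more slowly (hypothetical linear mapping)."""
    m = min(max(mass_kg, 0.0), max_mass)  # clamp to the supported range
    return max_ratio - (max_ratio - min_ratio) * (m / max_mass)
```

For example, lifting a 1 kg object 10 cm with this mapping displays an 8 cm virtual lift, while a massless object tracks the real hand exactly.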

Conference Sponsors


Lujiazui, Baidu, SenseTime, Unity China, XImmerse, Vivo, GritWorld, ImageDerivative, Yuanjing (Alibaba), RaysEngine (Alibaba), S-Dream, VRIH, Evis, Kanjing, Lianying

Doctoral Consortium Sponsors


Tencent Learn, Lenovo, Qualcomm, Liangfengtai, HGMT

Supporting Associations

CCF-VR, CSIG-VR, CGS-VCC, CVRVT, MIA, SIGA

Code of Conduct

© IEEE VR Conference 2023, Sponsored by the IEEE Computer Society and the Visualization and Graphics Technical Committee