Image: the Belém Tower in Lisbon with a view of the Tagus river, and the logo of IEEE VR Lisbon 2021 with the motto "Make virtual reality diverse and accessible".

Papers

Monday, March 29, Lisbon WEST, UTC+1
Augmented Reality 12:00 - 13:00
VR/AR Displays 12:00 - 13:00
Emotion and Cognition 14:00 - 15:00
Holographic and Inertial Displays 14:00 - 15:00
Embodiment 16:30 - 17:30
Visualization 16:30 - 17:30
Tuesday, March 30, Lisbon WEST, UTC+1
Collaboration 08:30 - 09:30
Multimodal Interfaces 08:30 - 09:30
Security and Drone Teleoperation 11:00 - 12:00
Embedded and Surround Videos 11:00 - 12:00
Virtual Humans and Agents 13:00 - 14:00
Hands, Gestures and Grasping 13:00 - 14:00
Plausibility, Presence and Social VR 17:00 - 18:00
Wednesday, March 31, Lisbon WEST, UTC+1
Accessible VR 08:30 - 09:30
Haptics 08:30 - 09:30
Redirected Locomotion 13:00 - 14:00
Selection and Manipulation 15:30 - 16:30
Training and Learning 17:00 - 18:00
Pen-based and Hands-free Interaction 17:00 - 18:00
Thursday, April 1, Lisbon WEST, UTC+1
Locomotion 08:30 - 09:30
Rendering and Texture Mapping 08:30 - 09:30
Tracking, Vision and Sound 12:00 - 13:00
Perception 14:00 - 15:00
VR Applications 14:00 - 15:00

Session: Augmented Reality

Monday, March 29, 12:00, Lisbon WEST UTC+1

Session Chair: Dieter Schmalstieg

Virbela Location: Auditorium A

3D Curve Creation on and around Physical Objects with Mobile AR

Invited Journal

Hui Ye: School of Creative Media, City University of Hong Kong, China; Kin Chung Kwan: Department of Computer and Information Science, University of Konstanz, Germany; Hongbo Fu: School of Creative Media, City University of Hong Kong, China.

When creating 3D curves on and around physical objects with mobile AR, tracking might be less robust or lost due to camera occlusion or textureless scenes. This motivates us to study how to achieve natural interaction with minimum tracking errors in this case. We contribute an elicitation study on input point and phone grip, and a quantitative study on tracking errors. We present a system for direct 3D drawing with a mobile phone as a 3D pen, interactive correction of 3D curves with tracking errors in mobile AR, and explore two applications of in-situ 3D drawing and direct 3D measurement.

Augmented Reality for Maritime Navigation Assistance - Egocentric Depth Perception in Large Distance Outdoor Environments

Conference

Julia Hertel: Universität Hamburg; Frank Steinicke: Universität Hamburg

Augmented reality (AR) provides potential for navigation assistance interfaces in maritime contexts by displaying information directly in the user's field of view. Therefore, it is crucial to understand how egocentric distances of displayed objects are perceived and how different design attributes influence depth estimation. While previous work mainly focused on depth perception in short-distance indoor environments, this paper presents a perceptual matching task in a wide outdoor environment. Our results suggest a distance overestimation across all tested distances and significant influences of (i) shape, (ii) coloration, and (iii) relation to the floor. Additionally, we explored potential design implications in a pilot study on a ship.

Do we still need physical monitors? An evaluation of the usability of AR virtual monitors for productivity work

Conference

Leonardo Pavanatto: Virginia Tech; Chris North: Virginia Tech; Doug Bowman: Virginia Tech; Richard Stoakley: Microsoft Corp.; Carmen Badea: Microsoft Corp.

Physical monitors require space, lack flexibility, and can become expensive and less portable in large setups. Virtual monitors can be subject to technological limitations such as lower resolution and field of view. We investigate the impacts of using virtual monitors on a current state-of-the-art augmented reality headset for conducting productivity work. We conducted a user study that compared physical monitors, virtual monitors, and a hybrid combination of both in terms of performance, accuracy, comfort, focus, preference, and confidence. Results show that virtual monitors are a feasible approach, albeit with inferior usability and performance, while hybrid was a middle ground.

DreamStore: A Data Platform for Enabling Shared Augmented Reality

Conference

Meraj Ahmed Khan: The Procter & Gamble Company; Arnab Nandi: The Ohio State University

The natural mode of AR user interaction triggers backend queries implicitly based on what is in the user's view at any instant, generating queries in excess of the device frame rate. Ensuring a smooth user experience in such a scenario requires a systemic solution that exploits the unique characteristics of AR workloads. We propose DreamStore, a data platform that treats AR queries as first-class queries and builds view-maintenance and large-scale analytics infrastructure around this design choice. Through performance experiments on large-scale and query-intensive AR workloads on DreamStore, we show the advantages and capabilities of our proposed platform.

Evaluating the Potential of Glanceable AR Interfaces for Authentic Everyday Uses

Conference

Feiyu Lu: Virginia Tech; Doug Bowman: Virginia Tech

In the near future, augmented reality (AR) glasses are envisioned to become the next-generation personal computing platform. However, it remains unclear how we could enable unobtrusive and easy information access in AR without distracting users, while being acceptable to use at the same time. To address this question, we implemented two prototypes based on the Glanceable AR paradigm. We conducted two separate studies to evaluate our designs. We found that users appreciated the Glanceable AR approach in authentic everyday use cases. They found it less distracting or intrusive than existing devices, and would like to use the interface on a daily basis if the form factor of the AR headset was more like eyeglasses.

Story CreatAR: a Toolkit for Spatially-Adaptive Augmented Reality Storytelling

Conference

Abbey Singh: Dalhousie University; Ramanpreet Kaur: Dalhousie University; Peter Haltner: Dalhousie University; Matthew Peachey: Dalhousie University; Mar Gonzalez-Franco: Microsoft Research; Joseph Malloch: Dalhousie University; Derek Reilly: Dalhousie University

Headworn Augmented Reality (AR) and Virtual Reality (VR) displays are an exciting new medium for locative storytelling. Authors face challenges planning and testing the placement of story elements when the story is experienced in multiple locations or the environment is large or complex. We present Story CreatAR, the first locative AR/VR authoring tool that integrates spatial analysis techniques. Story CreatAR is designed to help authors think about, experiment with, and reflect upon spatial relationships between story elements, and between their story and the environment. We motivate and validate our design through developing different locative AR/VR stories with several authors.

Session: VR/AR Displays

Monday, March 29, 12:00, Lisbon WEST UTC+1

Session Chair: Daisuke Iwai

Virbela Location: Auditorium B

Visual Complexity and Scene Recognition: How Low Can You Go?

Conference

Joshua Peter Handali: University of Liechtenstein; Johannes Schneider: University of Liechtenstein; Michael Gau: Karlsruhe Institute of Technology; Valentin Holzwarth: University of Liechtenstein; Jan vom Brocke: University of Liechtenstein

Visual realism in Virtual Environments (VEs) increases immersion but also comes at a cost. As the extent of visual realism relates to the level of visual complexity, we investigate the impact of visual complexity on users' spatial orientation in a VE based on a real-world place. Visual complexity is varied by adding cartographic visual elements, namely a map overlay and 3D buildings, resulting in a 2x2 factorial within-participants study. Participants were asked to map their VE location to a real-world location. Our findings show that the addition of either visual element improved spatial orientation, while their combination added only a slight further improvement.

Beaming Displays

Journal

Yuta Itoh: Tokyo Institute of Technology; Takumi Kaminokado: Tokyo Institute of Technology; Kaan Akşit: University College London

We present beaming displays, a new type of near-eye display system that uses a projector and an all-passive wearable headset. We modify an off-the-shelf projector with additional lenses and install it in the environment to beam images from a distance to the passive wearable headset. The beaming projection system tracks the current position of the wearable headset to project distortion-free images with correct perspectives. In our system, the wearable headset guides the beamed images to the user's retina, where they are perceived as an augmented scene within the user's field of view.

DeProCams: Simultaneous Relighting, Compensation and Shape Reconstruction for Projector-Camera Systems

Journal

Bingyao Huang: Stony Brook University; Haibin Ling: Stony Brook University

Image-based relighting, projector compensation and depth/normal reconstruction are three important tasks of projector-camera systems (ProCams) and spatial augmented reality (SAR). Although they share a similar pipeline of finding projector-camera image mappings, they have traditionally been addressed independently, sometimes with different prerequisites, devices and sampling images. We propose a novel end-to-end trainable model named DeProCams to explicitly learn the photometric and geometric mappings of ProCams, and once trained, DeProCams can be applied simultaneously to the three tasks. By solving the three tasks in a unified model, DeProCams waives the need for additional optical devices, radiometric calibrations, and structured light.

Lenslet VR: Thin, Flat and Wide-FOV Virtual Reality Display Using Fresnel Lens and Lenslet Array

Journal

Kiseung Bang: Seoul National University; Youngjin Jo: Seoul National University; Minseok Chae: Seoul National University; Byoungho Lee: Seoul National University

We propose a new thin and flat VR display design using a Fresnel lenslet array, a Fresnel lens, and a polarization-based optical folding technique. The proposed optical system has a wide FOV of 102°×102°, a wide eye-box of 8.8 mm, and an ergonomic eye-relief of 20 mm while having a few millimeters of system thickness. We demonstrate an 8.8-mm-thick VR glasses prototype and experimentally verify that the proposed VR display system has the expected performance while having a glasses-like form factor.

Who Are Virtual Reality Headset Owners? A Survey and Comparison of Headset Owners and Non-Owners

Conference

Jonathan Kelly: Iowa State University; Lucia Cherep: Iowa State University; Alex Lim: Iowa State University; Taylor A Doty: Iowa State University; Stephen B. Gilbert: Iowa State University

Researchers can readily recruit head-mounted display (HMD) owners to participate remotely. However, HMD owners recruited online may differ from the university community that typically participates in virtual reality research. HMD owners (n=220) and non-owners (n=282) were surveyed through two online work sites and an undergraduate pool. Participants completed demographics and measures of HMD use, video game use, spatial ability, and motion sickness. In the context of the populations sampled, the results provide a characterization of HMD owners, a snapshot of commonly owned HMDs, a comparison between owners and non-owners, and a comparison among online workers and undergraduates.

Session: Emotion and Cognition

Monday, March 29, 14:00, Lisbon WEST UTC+1

Session Chair: Victoria Interrante

Virbela Location: Auditorium A

Self-Illusion: A Study on Cognition of Role-Playing in Immersive Virtual Environments

Invited Journal

Sheng Li: Peking University; Xiang Gu: Peking University; Kangrui Yi: Peking University; Yanlin Yang: Peking University; Guoping Wang: Peking University; Dinesh Manocha: University of Maryland.

We present the design and results of an experiment investigating the occurrence of self-illusion and its contribution to realistic behavior consistent with a virtual role in virtual environments. Self-illusion is a generalized illusion about one's self in cognition, eliciting a sense of being associated with a role in a virtual world, despite sure knowledge that this role is not the actual self. We validate and measure self-illusion through an experiment where each participant occupies a non-human perspective and plays a non-human role. There is some evidence that self-illusion can be considered a novel psychological component of presence.

VR-based Student Priming to Reduce Anxiety and Increase Cognitive Bandwidth

Conference

Daniel Hawes: Carleton University; Ali Arya: Carleton University

Recent research indicates that many post-secondary students feel overwhelming anxiety, negatively impacting academic performance and overall well-being. In this paper, based on multidisciplinary literature analysis and innovative ideas in cognitive science, learning models, and emerging technologies, we introduce a theoretical framework that shows how and when priming activities can be introduced into learning cycles to reduce anxiety and improve cognitive availability. This framework proposes a Virtual Reality based priming approach that uses games and meditative interventions. Our results show the potential of this approach, compared to no-priming scenarios, for reducing anxiety, and the significance of VR gaming for improving cognitive bandwidth.

Exploiting Object-of-Interest Information to Understand Attention in VR Classrooms

Conference

Efe Bozkir: University of Tübingen; Philipp Stark: University of Tübingen; Hong Gao: University of Tübingen; Lisa Hasenbein: University of Tübingen; Jens-Uwe Hahn: Hochschule der Medien Stuttgart; Enkelejda Kasneci: University of Tübingen; Richard Göllner: University of Tübingen

Developments in computer graphics and hardware technology enable easy access to VR headsets. The immersion provided by VR may soon help to create realistic digital alternatives to conventional classrooms. Until now, however, students' behaviors in immersive virtual environments have not been investigated in depth. This work studies students' attention by exploiting object-of-interest information using eye tracking under different classroom manipulations, particularly students' sitting positions, visualization styles of avatars, and various hand-raising behaviors of peer-learners. We show that such manipulations affect students' attention. Our research may contribute to understanding how visual attention relates to social dynamics in virtual classrooms.

Evidence of Racial Bias Using Immersive Virtual Reality: Analysis of Head and Hand Motions During Shooting Decisions

Journal

Tabitha C. Peck: Davidson College; Jessica J Good: Davidson College; Katharina Seitz: Davidson College

Shooter bias is the tendency to shoot more quickly at unarmed Black suspects than at unarmed White suspects. This research investigates the efficacy of shooter bias simulation studies in an immersive virtual scenario and presents results from a user study (N=99) investigating shooter and racial bias. More nuanced head and hand motion analysis was able to predict participants' racial shooting accuracy and implicit racism scores. We discuss how these nuanced measures can be used to detect behavior changes in body-swap illusions, as well as the implications of this work for racial justice and police brutality.

Virtual Morality: Using Virtual Reality to Study Moral Behavior in Extreme Accident Situations

Conference

Giulia Benvegnù: University of Padova; Patrik Pluchino: University of Padova; Luciano Gamberini: University of Padova

Virtual Reality (VR) has recently been employed to study moral dilemmas in driving contexts, in order to collect people’s preferences during unavoidable collisions and inform the design of Autonomous Vehicles (AVs). However, little is known about the experience of being the driver acting in such situations rather than being in an AV that chooses for you. We present a case study that uses VR to investigate emotional reactions and behavior in human and autonomous driving modes. Our findings showed increased arousal, negative valence, and perceived responsibility when participants faced moral dilemmas as drivers. Instead, in scenarios that did not involve killing someone, being in an AV was judged less pleasant than being the actual driver.

Don't Worry be Happy - Using virtual environments to induce emotional states measured by subjective scales and heart rate parameters

Conference

Jan-Niklas Voigt-Antons: Technische Universität Berlin; Robert Spang: Technische Universität Berlin; Tanja Kojic: Technische Universität Berlin; Luis Meier: Technische Universität Berlin; Maurizio Vergari: Technische Universität Berlin; Sebastian Möller: Technische Universität Berlin

Advancing technology and higher availability of Virtual Reality (VR) devices sparked its application in various research fields. For instance, health-related research showed that simulated nature environments in VR could reduce arousal and increase valence levels. This study investigates how the amount of possible interactivity influences the presence in nature environments and consequences on arousal and valence. After inducing fear (high arousal and low valence) through a VR-horror game, it was tested how participants recovered if they played a VR-nature game with either no, limited, or extensive interaction. The horror game proved to be a valid stimulus for inducing high arousal and low valence with a successful manipulation check.

Session: Holographic and Inertial Displays

Monday, March 29, 14:00, Lisbon WEST UTC+1

Session Chair: Dirk Reiners

Virbela Location: Auditorium B

Unident: Providing Impact Sensations on Handheld Objects via High-Speed Change of the Rotational Inertia

Conference

Shuntaro Shimizu: The University of Tokyo; Takeru Hashimoto: The University of Tokyo; Shigeo Yoshida: The University of Tokyo; Reo Matsumura: karakuri products Inc.; Takuji Narumi: The University of Tokyo; Hideaki Kuzuoka: The University of Tokyo

We propose Unident, a handheld proxy capable of providing impact sensations by changing its rotational inertia at high speed. Unident can provide impact sensations at a high frequency with low latency and low power consumption. In the first experiment, we demonstrated that Unident can physically provide an impact sensation applied to a handheld object by analyzing the pressure on the user's palm. The second experiment showed that Unident can provide impact sensations of various magnitudes depending on the amount of rotational inertia to be changed. In the user study, Unident provided more realistic impact sensations than vibrotactile feedback.

Proximity Effect Correction for Fresnel Holograms on Nanophotonic Phased Arrays

Conference

Xuetong Sun: University of Maryland College Park; Yang Zhang: University of Maryland; Po-Chun Huang: University of Maryland; Niloy Acharjee: University of Maryland, College Park; Mario Dagenais: University of Maryland; Martin Peckerar: University of Maryland; Amitabh Varshney: University of Maryland College Park

The Nanophotonic Phased Array (NPA), a new type of holographic display, affords several advantages over other holographic display technologies. However, the thermal phase modulation of the NPA makes it susceptible to the thermal proximity effect where heating one pixel affects nearby pixels. Proximity effect correction (PEC) methods have been proposed for 2D Fourier holograms but not for Fresnel holograms at user-specified depths. We present a PEC method for the NPA for Fresnel holograms and validate it through simulations. Our method is not only effective in correcting the proximity effect of 2D images at desired depths but can also leverage the fast refresh rate of the NPA to display 3D scenes with time-division multiplexing.

Realistic 3D Swept-Volume Display with Hidden-Surface Removal Using Physical Materials

Conference

Ray Asahina: Tokyo Institute of Technology; Takashi Nomoto: Tokyo Institute of Technology; Takatoshi Yoshida: Massachusetts Institute of Technology; Yoshihiro Watanabe: Tokyo Institute of Technology

Swept-volume displays can provide accurate physical cues for depth perception. However, their texture reproduction quality is limited because they employ high-speed projectors with low bit-depth and low resolution. To address this limitation while retaining their advantages, we propose a novel swept-volume display that incorporates physical materials as screens. Physical materials such as wool and felt are directly used to reproduce textures on a displayed 3D surface. Furthermore, we introduce adaptive pattern generation based on viewpoint tracking for hidden-surface removal. Our algorithm leverages the ray-tracing concept and can run at high speed on a GPU.

Revealable Volume Displays: 3D Exploration of Mixed-Reality Public Exhibitions

Conference

Fatma Ben Guefrech: Université de Lille; Florent Berthaut: Université de Lille; Patricia Plénacoste: Université de Lille; Yvan Peter: Université Lille 1; Laurent Grisoni: Université de Lille

We present a class of mixed-reality displays, which we call Revealable Volume Displays (RVDs), that allow for the 3D exploration of content in public exhibitions. They allow visitors to reveal information placed freely inside or around protected artefacts, visible to all, using their reflection in the panel. We first discuss the implementation of RVDs, providing both projector-based and mobile versions. We then present a design space that describes the interaction possibilities they offer. Drawing on insights from a field study during a first exhibition, we finally propose and evaluate techniques for facilitating 3D exploration with RVDs.

DCGH: Dynamic Computer Generated Holography for Speckle-Free, High Fidelity 3D Displays

Conference

Vincent R Curtis: University of North Carolina-Chapel Hill; Nicholas William Caira: University of North Carolina-Chapel Hill; Jiayi Xu: University of North Carolina-Chapel Hill; Asha Gowda Sata: University of North Carolina-Chapel Hill; Nicolas C Pegard: University of North Carolina at Chapel Hill

Computer Generated Holography (CGH) is a promising technique for synthesizing 3D images. However, CGH displays that modulate coherent light with a static 2D pattern can only render a small subset of all the possible illumination patterns where speckle noise is omnipresent. Here, we introduce Dynamic CGH, a new holographic technique that modulates light both spatially and temporally with globally optimized patterns, enabling 3D light sculpting with many more degrees of control. Experimental results obtained with a high-speed Digital Micromirror Device (DMD) show that DCGH yields speckle-free 3D images with improved resolution and contrast, successfully addressing the shortcomings of single-frame holography.

Session: Embodiment

Monday, March 29, 16:30, Lisbon WEST UTC+1

Session Chair: John Quarles

Virbela Location: Auditorium A

Adapting Virtual Embodiment through Reinforcement Learning

Invited Journal

Thibault Porssut: Immersive Interaction Research Group, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Yawen Hou: Immersive Interaction Research Group, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Olaf Blanke: Laboratory of Cognitive Neuroscience, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Bruno Herbelin: Laboratory of Cognitive Neuroscience, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Ronan Boulic: Immersive Interaction Research Group, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland.

In Virtual Reality, having a virtual body opens a wide range of possibilities, such as partially manipulating the displayed avatar movement through some distortion to make the overall experience more enjoyable and effective (e.g., training, exercising, rehabilitation). However, an excessive distortion may become noticeable and break the feeling of being embodied in the avatar. We therefore propose a method that takes advantage of Reinforcement Learning (RL) to efficiently adapt our system to each individual. We show through a controlled experiment with subjects that the RL method finds a more robust detection threshold compared to the adaptive staircase method.

Impact of Information Placement and User Representations in VR on Performance and Embodiment

Invited Journal

Sofia Seinfeld: Universitat Politècnica de Catalunya, University of Bayreuth; Tiare Feuchtner: Aarhus University; Johannes Pinzek: University of Bayreuth; Jörg Müller: University of Bayreuth.

We evaluated the processing of visual stimuli depending on their location in 3D space and the experienced user representation during interaction (i.e., hands, controllers, keyboard). Participants executed a motor task in VR, while simultaneously detecting visual stimuli appearing at different locations. We found that independently of how the user is represented, detection performance is higher when visual stimuli are presented near the user's body, while actively engaged in a motor task. Further, motor performance and embodiment are enhanced by controllers and hands, in contrast to keyboard control. This study contributes to better understanding the detectability of visual content in VR.

The Embodiment of Photorealistic Avatars Influences Female Body Weight Perception in Virtual Reality

Conference

Erik Wolf: University of Würzburg, Department of Computer Science, HCI Group; Nathalie Merdan: University of Würzburg; Nina Döllinger: Julius-Maximilians-Universität Würzburg; David Mal: University of Würzburg, Department of Computer Science, HCI Group; Carolin Wienrich: University of Würzburg; Mario Botsch: TU Dortmund University; Marc Erich Latoschik: University of Würzburg, Department of Computer Science, HCI Group

In our work, we compared body weight perception of 56 female participants in VR. They either (a) embodied a photorealistic, non-personalized virtual human and performed body movements or (b) only observed it performing the same movements without embodying it. Afterward, participants had to estimate the virtual human's body weight. Additionally, we considered the influence of the participants' BMI on the estimations and captured the participants' feelings of presence and embodiment. Participants embodying the virtual human estimated the body weight significantly lower. Furthermore, the estimations of body weight were significantly predicted by the participant's BMI with embodiment, but not without.

The influence of hand visualization in tool-based motor-skills training, a longitudinal study

Conference

Aylen Ricca: IBISC, Univ Evry, Université Paris-Saclay; Amine Chellali: IBISC, Univ Evry, Université Paris-Saclay; Samir Otmane: IBISC, Univ Evry, Université Paris-Saclay

In this work, we study how the user's hand representation impacts the training of tool-based motor skills in immersive VR. We created a VR trainer for a tool-based task, and conducted a user study (N=26) to evaluate how the hand visualization can influence participants' learning performance. Two groups of participants were trained in the simulator under one of the two experimental conditions: presence/absence of their virtual hands' representation, while a control group received no training. The results show that while both training groups improve their performance compared to the control group, no significant impact of the hand visualization is observed.

The Influence of Avatar Representation on Interpersonal Communication in Virtual Social Environments

Journal

Sahar Aseeri: University of Minnesota; Victoria Interrante: University of Minnesota

Current avatar representations used in immersive VR applications lack features that may support natural behaviors and effective communication among individuals. This work investigates the impact of the visual and nonverbal cues afforded by three different types of avatar representations in social context tasks. The avatar types we compared are: No_Avatar (HMD and controllers only), Scanned_Avatar (wearing an HMD), and Real_Avatar (video-see-through). We used subjective and objective measures to assess the quality of interpersonal communication. The findings provide novel insight into how a user’s experience in a social VR scenario is affected by the type of avatar representation provided.

Session: Visualization

Monday, March 29, 16:30, Lisbon WEST UTC+1

Session Chair: Jian Chen

Virbela Location: Auditorium B

Visual Quality of 3D Meshes With Diffuse Colors in Virtual Reality: Subjective and Objective Evaluation

Invited Journal

Yana Nehmé: CNRS, LIRIS, France; Florent Dupont: CNRS, LIRIS, France; Jean-Philippe Farrugia: CNRS, LIRIS, France; Patrick Le Callet (Fellow, IEEE): CNRS, LS2N, France; Guillaume Lavoué (Senior Member, IEEE): CNRS, LIRIS, France.

3D graphics with appearance attributes are well-known for providing a high degree of realism and allowing six-degrees-of-freedom interactions in immersive environments. These data are subject to processing operations, resulting in a quality loss of the final rendered scenes. Thus, both subjective and objective studies are needed to understand and predict this visual loss. In this work, we introduce a large dataset of 480 animated meshes with diffuse colors, produced in virtual reality. Our dataset allowed us to explore the factors influencing subjective opinions. Based on these findings, we propose the first quality metric for meshes with diffuse colors, which works entirely in the mesh domain.

Comparing and Combining Virtual Hand and Virtual Ray Pointer Interactions for Data Manipulation in Immersive Analytics

Journal

Jorge Wagner: Federal University of Rio Grande do Sul; Wolfgang Stuerzlinger: Simon Fraser University; Luciana Nedel: Federal University of Rio Grande do Sul (UFRGS)

In this work, we evaluate two standard interaction techniques for Immersive Analytics, virtual hands and virtual ray pointers, together with a third option, which seamlessly integrates both without explicit mode switching. Our findings indicate that both virtual hands and ray-casting result in similar performance, workload ratings, and even interactivity patterns. Yet, the mixed mode significantly reduced completion times by 23% for the most demanding task, at the cost of a 5% decrease in overall success rates. Also, most participants clearly preferred being able to choose the best interaction technique for each low-level task.

Exploring the SenseMaking Process through Interactions and fNIRS in Immersive Visualization

Journal

Alexia Galati: University of North Carolina at Charlotte; Riley J Schoppa: UNC Charlotte; Aidong Lu: University of North Carolina at Charlotte

Theories of cognition inform our decisions when designing human-computer interfaces, and immersive systems enable us to examine these theories. This work explores the sensemaking process in an immersive environment by studying both internal and external user behaviors with a classical visualization problem. We assessed how the interface layout and the task challenge level influenced users' interactions, how these interactions changed over time, and how they influenced task performance. We found that increased interactions and cerebral hemodynamic responses were associated with more accurate performance, especially on cognitively demanding trials, and we discuss sensemaking from the perspective of embodied and distributed cognition.

Sensemaking Strategies with the Immersive Space to Think

Conference

Lee Lisle: Virginia Tech; Kylie Davidson: Virginia Tech; Chris North: Virginia Tech; Doug Bowman: Virginia Tech; Edward J.K. Gitre: Virginia Tech

The process of sensemaking is a cognitively intensive task that involves foraging through and extracting information from large sets of documents. A recent approach, the Immersive Space to Think (IST), allows analysts to read, mark up documents, and use immersive 3D space to organize and label collections of documents. We observed seventeen novice analysts perform a sensemaking task in order to understand how users utilize the features of IST to extract meaning from large text-based datasets. We found three different layout strategies they employed to create meaning with the documents we provided, and found patterns of interaction and organization that can inform future improvements to the IST approach.

Visualizing Planetary Spectroscopy through Immersive On-site Rendering

Conference

Lauren Gold: Arizona State University; Alireza Bahremand: Arizona State University; Connor Richards: Arizona State University; Justin Hertzberg: Arizona State University; Kyle Sese: Arizona State University; Alexander A Gonzalez: Hamilton High School; Zoe Purcell: Arizona State University; Kathryn E Powell: Northern Arizona University; Robert LiKamWa: Arizona State University

Planetary Visor is our virtual reality tool to visualize orbital and rover-based datasets from the ongoing traverse of the NASA Curiosity rover in Gale Crater. Data from orbital spectrometers provide insight about the composition of planetary terrains. Meanwhile, Curiosity rover data provide fine-scaled localized information about Martian geology. By visualizing the intersection of the orbiting instrument's field of view with the rover-scale topography, and providing interactive navigation controls, Visor constitutes a platform for users to intuitively understand the scale and context of the Martian geologic data under scientific investigation.

Session: Collaboration

Tuesday, March 30, 08:30, Lisbon WEST UTC+1

Session Chair: Thierry Duval

Virbela Location: Auditorium A

Collaborative Work in Augmented Reality: A Survey

Invited Journal

Mickael Sereno: Université Paris-Saclay, CNRS, Inria, LRI, France; Xiyao Wang: Université Paris-Saclay, CNRS, Inria, LRI, France; Lonni Besançon: Linköpings Universitet; Michael J Mcguffin: Ecole de technologie supérieure, Université du Québec; Tobias Isenberg: Université Paris-Saclay, CNRS, Inria, LRI, France.

Multi-user AR has received limited attention from researchers, even though AR has been in development for more than two decades. We present the state of existing work at the intersection of AR and Computer-Supported Collaborative Work, by combining a systematic survey approach with an exploratory, opportunistic literature search. We categorize 65 papers along the dimensions of space, time, users’ role symmetry, users’ technology symmetry, and output and input modalities. We derive design considerations for collaborative AR environments, and identify research topics that deserve further investigation.

A VR/AR Environment for Multi-User Liver Anatomy Education

Conference

Danny Schott: Otto-von-Guericke University; Patrick Saalfeld: Otto-von-Guericke University; Gerd Schmidt: Otto-von-Guericke University; Fabian Joeres: Otto-von-Guericke University; Christian Boedecker: University Medicine of the Johannes Gutenberg-University; Florentine Huettl: University Medicine of the Johannes Gutenberg-University; Hauke Lang: University Medicine of the Johannes Gutenberg-University; Tobias Huber: University Medicine of the Johannes Gutenberg-University; Bernhard Preim: Otto-von-Guericke University; Christian Hansen: Otto-von-Guericke University

We present a VR/AR multi-user prototype of a learning environment for liver anatomy education. Our system supports various learning scenarios, in which users can participate in VR, AR, or via desktop PCs. A virtual organ library was created using nineteen liver datasets. As part of a user study with five surgery lecturers and five medical students, we evaluated usability and presence. A total of 435 individual statements were recorded and summarized into 49 statements. The results show that our prototype is usable, induces presence, and could potentially support the teaching of liver anatomy and surgery in the future.

Group Navigation for Guided Tours in Distributed Virtual Environments

Journal

Tim Weissker: Bauhaus-Universitaet Weimar; Bernd Froehlich: Bauhaus-Universität Weimar

Group navigation can be an invaluable tool for performing guided tours in distributed virtual environments. We derived that group navigation techniques should be comprehensible for the guide and attendees, assist in obstacle avoidance, and allow the creation of meaningful spatial arrangements. To meet these requirements, we developed a group navigation technique based on short-distance teleportation and performed an initial usability evaluation. After navigating with groups of up to 10 users through a virtual museum, participants indicated that our technique is easy to learn for guides, comprehensible also for attendees, non-nauseating for both roles, and therefore well-suited for performing guided tours.

StuckInSpace: Exploring the Difference Between Two Mediums of Play in a Multi-Modal Virtual Reality Game

Conference

Yoan-Daniel Grigorov Malinov: University of Southampton; David Millard: University of Southampton; Tom Blount: University of Southampton

Multi-modal co-located VR games are a cost-effective way of adding a second (or more) player to the normally solitary VR experience. This paper introduces "StuckInSpace", a game used to test whether including a second player through a phone or PC affects the immersion and co-presence of the participants. Quantitative data from the experiment show that the medium does not affect these variables, while the qualitative data reveal why. The analysis yields a number of design recommendations for future research in this field and shows that adding a second player in this way is a viable approach.

Quality of Service Impact on Edge Physics Simulations for VR

Journal

Sebastian J Friston: University College London; Elias J Griffith: University of Liverpool; David Swapp: University College London; Iheanyi Caleb Irondi: University of Liverpool; Fred P. M. Jjunju: University of Liverpool; Ryan J Ward: University of Liverpool; Alan Marshall: University of Liverpool; Anthony Steed: University College London

Mobile HMDs sacrifice performance for ergonomics and power efficiency, meaning mobile applications must also sacrifice complexity, or offload it. A common approach is render streaming, but latency requirements make VR a particularly challenging use case. In our paper we re-examine scene-graph streaming, but in the context of edge-computing. With 'edge-physics', latency sensitive loops run locally, tasks that hit the power-wall of mobile CPUs are off-loaded, while improving GPUs are leveraged to maximum effect. We investigate the potential of edge physics by implementing a prototype and evaluating its response to varying quality of service.

Session: Multimodal Interfaces

Tuesday, March 30, 08:30, Lisbon WEST UTC+1

Session Chair: Takuji Narumi

Virbela Location: Auditorium B

Impossible Staircase: Vertically Real Walking in an Infinite Virtual Tower

Conference

Jen-Hao Cheng: National Taiwan University; Yi Chen: National Taiwan University; Ting-Yi Chang: National Taiwan University; Hsu-En Lin: National Taiwan University; Po-Yao (Cosmos) Wang: National Taiwan University; Lung-Pan Cheng: National Taiwan University

We present Impossible Staircase, a real-walking virtual reality system that allows users to climb an infinite virtual tower. Our set-up consists of a one-level scaffold and a lifter. A user climbs up the scaffold by real walking on a stairway while wearing a head-mounted display, and is imperceptibly reset to the ground level by the lifter. By repeating this process, the user perceives an illusion of climbing an infinite number of levels. We built a working system and demonstrated it with a 15-min experience. With the working system, we conducted user studies to gain deeper insights into vertical motion simulation and vertical real walking in virtual reality.

TapID: Rapid Touch Interaction in Virtual Reality using Wearable Sensing

Conference

Manuel Meier: ETH Zürich; Paul Streli: ETH Zürich; Andreas Rene Fender: ETH; Christian Holz: ETH Zürich

In this paper, we bring rapid touch interaction on surfaces to Virtual Reality. Current systems capture input with cameras to track controllers and hands in mid-air, but cannot detect touch input from the user. We present TapID, a wrist-based inertial sensing system to detect touch events on surfaces—the input modality common on phones and tablets. TapID reliably detects input events and identifies the finger used for touch, which we combine with optically tracked hand poses to trigger input in VR. We conclude with a series of applications that complement hand tracking with touch input.

Does Virtual Odor Representation Influence the Perception of Olfactory Intensity and Directionality in VR?

Conference

Shou-En Tsai: Department of Computer Science; Wan-Lun Tsai: National Cheng Kung University; Tse-Yu Pan: National Tsing Hua University; Chia-Ming Kuo: CityChaser; Min-Chun Hu: National Tsing Hua University

Visual stimuli have been shown to dominate human perception among multiple senses in virtual environments. If visual stimuli can be used to guide the olfactory sense in VR, the design of the olfactory display can be simpler while still providing a more diverse olfactory experience. In this work, we propose a portable olfactory display. An experimental study was conducted to investigate visual-olfactory human perception, i.e., how the virtual odor representation shown in VR influences human perception of the real odor produced by the proposed olfactory display.

BouncyScreen: Physical Enhancement of Pseudo-Force Feedback

Conference

Yuki Onishi: Tohoku University; Kazuki Takashima: Tohoku University; Kazuyuki Fujita: Tohoku University; Yoshifumi Kitamura: Tohoku University

We explore BouncyScreen, an actuated 1D display system that enriches indirect interaction with a virtual object through a pseudo-haptic mechanism enhanced by the screen's physical movements. We configured a prototype of BouncyScreen with a flat screen mounted on a mobile robot, which physically moves in accordance with the virtual object. Our weight discrimination study showed that BouncyScreen offers pseudo-force feedback identical to that of the vision-based pseudo-haptic technique. The results of the follow-up weight magnitude estimation study revealed different characteristics of users' perceived weight magnitude depending on interaction style, as well as an enhancement of the reality of the interaction and the sense of presence.

Optimal time window for the integration of spatial audio-visual information in virtual environments

Conference

Jiacheng Liu: University College London; Vit Drga: University College London; Ifat Yasin: University College London

This study investigated audio-visual integration in a virtual environment. Two tasks were used, an auditory localization task and a detection task (judgement of audio-visual synchrony). The short-duration auditory stimuli (35-ms spatialized sound) and long-duration auditory stimuli (600-ms non-spatialized sound followed by 35 ms of spatialized sound) were presented between -60 and +60 degrees azimuth, with the visual stimulus presented synchronously/asynchronously with respect to the start of the auditory stimulus. Auditory localization errors and audio-visual synchrony detection reveal the effects of underlying neural mechanisms that can be harnessed to optimize audio-visual experiences in virtual environments.

Session: Security and Drone Teleoperation

Tuesday, March 30, 11:00, Lisbon WEST UTC+1

Session Chair: Stefanie Zollmann

Virbela Location: Auditorium A

A Rate-based Drone Control with Adaptive Origin Update in Telexistence

Conference

Di Zhang: Nanjing University of Posts and Telecommunications; Chi-Man Pun: University of Macau; Hao Gao: Nanjing University of Posts and Telecommunications; Feng Xu: Tsinghua University

A new form of telexistence is achieved by recording videos with a camera on an uncrewed aerial vehicle (UAV) and playing the videos to a user via a head-mounted display (HMD). User studies demonstrate that, compared with other telexistence solutions and widely used joystick-based solutions, our solution largely reduces the workload and saves time and moving distance for the user.

The Impact of Virtual Reality and Viewpoints in Body Motion Based Drone Teleoperation

Conference

Matteo Macchini: EPFL; Manana Lortkipanidze: EPFL; Fabrizio Schiano: EPFL; Dario Floreano: EPFL

Operating telerobotic systems can be a challenging task. Body-Machine Interfaces represent a promising resource as they leverage intuitive body motion and gestures. Virtual Reality and first-person view perspectives can increase the user’s sense of presence in avatars, however, few studies concern the teleoperation of non-anthropomorphic robots. Our experiments on a non-anthropomorphic drone show that VR correlates with the spatial presence dimension, whereas viewpoints affect embodiment. Spontaneous body motion is affected by these conditions in terms of variability, amplitude, and robot correlates, suggesting that BoMIs for robotic teleoperation should carefully consider the use of VR and the choice of the viewpoint.

Using Siamese Neural Networks to Perform Cross-System Behavioral Authentication in Virtual Reality

Conference

Robert Miller: Clarkson University; Natasha Kholgade Banerjee: Clarkson University; Sean Banerjee: Clarkson University

We provide an approach that uses behavioral biometrics to perform cross-system, high-assurance authentication of users in VR environments. Traditional PIN- or password-based credentials can be breached by malicious impostors, or be handed over by an intended user of a VR system to a confederate. We use Siamese neural networks to characterize systematic differences between data across pairs of dissimilar VR systems. We report equal error rates (EERs) ranging from 1.38% to 3.86% for authentication, and identification accuracies ranging from 87.82% to 98.53%, using a dataset consisting of 41 users performing ball-throwing with 3 VR systems.

VR-Spy: A Side-Channel Attack on Virtual Key-Logging in VR Headsets

Conference

Abdullah Al Arafat: University of Central Florida; Zhishan Guo: University of Central Florida; Amro Awad: North Carolina State University

In Virtual Reality, users typically interact with the virtual world using a virtual keyboard to access online accounts. Hence, it becomes imperative to understand the security of virtual keystrokes. In this paper, we present VR-Spy, a virtual keystrokes recognition method using WiFi signals. To the best of our knowledge, this is the first work that uses WiFi signals to recognize virtual keystrokes in VR headsets. VR-Spy leverages signal processing techniques to extract the patterns related to the keystrokes from the variations of WiFi signals. We implement VR-Spy using two Commercially Off-The-Shelf devices, a transmitter, and a receiver. Finally, VR-Spy achieves a virtual keystrokes recognition accuracy of 69.75%.

A privacy-preserving approach to streaming eye-tracking data

Journal

Brendan David-John: University of Florida; Diane Hosfelt: Mozilla; Kevin Butler: University of Florida; Eakta Jain: University of Florida

Eye-tracking technology is being increasingly integrated into mixed reality devices. Although critical applications are being enabled, there are significant possibilities for violating user privacy expectations. We propose a framework that incorporates gatekeeping via the design of the application programming interface and via software-implemented privacy mechanisms. Our results indicate that these mechanisms can reduce the rate of identification from as much as 85% to as low as 30%. The impact of introducing these mechanisms is less than 1.5° of error in gaze prediction. Our approach is the first to support privacy-by-design in the flow of eye-tracking data within mixed reality use cases.

Session: Embedded and Surround Videos

Tuesday, March 30, 11:00, Lisbon WEST UTC+1

Session Chair: Rob Lindeman

Virbela Location: Auditorium B

Assessment of the Simulator Sickness Questionnaire for Omnidirectional Videos

Conference

Ashutosh Singla: TU Ilmenau; Steve Göring: TU Ilmenau; Dominik Keller: TU Ilmenau; Rakesh Rao Ramachandra Rao: TU Ilmenau; Stephan Fremerey: TU Ilmenau; Alexander Raake: TU Ilmenau

The SSQ is the most widely used questionnaire for the assessment of simulator sickness. Since the SSQ, with its 16 questions, was not designed for 360° video related studies, our research hypothesis in this paper was that it may be simplified to enable more efficient testing for 360° video. Hence, we evaluate the SSQ based on six different previously conducted studies. We derive reduced sets of questions using PCA for each test. Pearson correlation is analysed to compare the relation between all obtained reduced questionnaires as well as two further variants of the SSQ reported in the literature, namely the VRSQ and CSQ. Our analysis suggests that a minimum of 9 out of 16 questions yields sufficiently high agreement with the initial SSQ.

Camera Space Synthesis of Motion Effects Emphasizing a Moving Object in 4D films

Conference

Sangyoon Han: Pohang University of Science and Technology (POSTECH); Gyeore Yun: POSTECH; Seungmoon Choi: Pohang University of Science and Technology (POSTECH)

One of the most frequent effects in four-dimensional (4D) films is the object-based motion effect, which refers to the vestibular stimulus generated by a motion chair to emphasize a moving object of interest displayed on the screen. This paper presents an algorithm for synthesizing convincing object-based motion effects automatically from a given object motion trajectory. Our method creates motion effects that simultaneously express the translation and rotation of an object in the motion platform’s limited workspace and DoFs while considering visual perception. The experimental results indicate that our method can generate compelling object-based motion effects that better enhance the 4D film viewing experience than the previous methods.

A Log-Rectilinear Transformation for Foveated 360-degree Video Streaming

Journal

David Li: University Of Maryland; Ruofei Du: Google; Adharsh Babu: University of Maryland College Park; Camelia D. Brumar: University of Maryland; Amitabh Varshney: University of Maryland College Park

With the rapidly increasing resolutions of 360° cameras, head-mounted displays, and live-streaming services, streaming high-resolution panoramic videos over limited-bandwidth networks is becoming a critical challenge. Foveated video streaming can address this rising challenge in the context of eye-tracking-equipped virtual reality head-mounted displays. However, conventional log-polar foveated rendering suffers from a number of visual artifacts such as aliasing and flickering. In this paper, we introduce a new log-rectilinear transformation that incorporates summed-area table filtering and off-the-shelf video codecs to enable foveated streaming of 360° videos suitable for VR headsets with built-in eye-tracking.

Comparing the Neuro-Physiological Effects of Cinematic Virtual Reality with 2D Monitors

Conference

Ruochen Cao: University of South Australia; Lena Zou-Williams: University of South Australia; Andrew Cunningham: University of South Australia; James A. Walsh: University of South Australia; Mark Kohler: University of Adelaide; Bruce H Thomas: University of South Australia

In this work, we explore if the immersion afforded by Virtual Reality can improve the cognitive integration of information in Cinematic Virtual Reality (CVR). We conducted a user study examining participants' cognitive activities when consuming visual information of emotional and emotionally neutral scenes in a non-CVR environment (i.e. a monitor) versus a CVR environment (i.e. a head-mounted display). Cortical response was recorded using electroencephalography. We found that participants had greater early visual attention with neutral emotions in CVR environments, and showed higher overall alpha power in CVR environments. The use of CVR did not significantly affect participants' recall performance.

Video Content Representation to Support the Hyper-reality Experience in Virtual Reality

Conference

Hyerim Park: KAIST; Woontack Woo: KAIST

In this paper, we investigate a video content representation method to provide a hyper-reality experience of the narrative's world in virtual reality. We reflect the time and place settings of the video content in virtual reality and have participants watch the video in four different virtual reality environments. As a result, we reveal that reflecting the narrative's environment settings in the virtual reality environment significantly improves spatial presence and narrative engagement. We also confirm a positive correlation between spatial presence and narrative engagement, including sub-scales such as emotional engagement and narrative presence.

Session: Virtual Humans and Agents

Tuesday, March 30, 13:00, Lisbon WEST UTC+1

Session Chair: Ronan Boulic

Virbela Location: Auditorium A

Climaxing VR Character with Scene-Aware Aesthetic Dress Synthesis

Conference

Sifan Hou: Beijing Institute of Technology; Yujia Wang: Beijing Institute of Technology; Wei Liang: Beijing Institute of Technology; Bing Ning: Beijing Institute of Fashion Technology

In this paper, we propose the new problem of synthesizing an appropriate dress for a virtual character based on an analysis of the scenario where he/she shows up. We propose a pipeline to tackle this scenario-aware dress synthesis problem. Firstly, given a scene, our approach predicts a dress code from the high-level information extracted from the scene, consisting of season, occasion, and scene category. Then our approach tunes the dress details to fit the aesthetic criteria and the virtual character's attributes. The tuning process is implemented as an optimization of a cost function. We carried out experiments to validate the efficacy of the proposed approach. The perceptual study results show that our approach performs well.

Mobile, Egocentric Human Body Motion Reconstruction Using Only Eyeglasses-mounted Cameras and a Few Body-worn Inertial Sensors

Conference

Young-Woon Cha: University of North Carolina at Chapel Hill; Husam Shaik: University of North Carolina at Chapel Hill; Qian Zhang: University of North Carolina at Chapel Hill; Fan Feng: University of North Carolina at Chapel Hill; Andrei State: University of North Carolina at Chapel Hill; Adrian Ilie: University of North Carolina at Chapel Hill; Henry Fuchs: UNC Chapel Hill

We envision a convenient telepresence system available to users anywhere, anytime, requiring displays and sensors embedded in commonly worn items such as eyeglasses, wristwatches, and shoes. To that end, we present a standalone real-time system for the dynamic 3D capture of a person, relying only on cameras embedded into a head-worn device, and on Inertial Measurement Units (IMUs) worn on the wrists and ankles. We demonstrate our system by reconstructing various human body movements. We captured an egocentric visual-inertial 3D human pose dataset, which we plan to make publicly available for training and evaluating similar methods.

Output-Sensitive Avatar Representations for Immersive Telepresence

Invited Journal

Adrian Kreskowski: Virtual Reality and Visualization Research Group, Bauhaus-Universität Weimar; Stephan Beck: Virtual Reality and Visualization Research Group, Bauhaus-Universität Weimar; Bernd Froehlich: Virtual Reality and Visualization Research Group, Bauhaus-Universität Weimar.

We propose a system design and implementation for output-sensitive reconstruction, transmission and rendering of volumetric avatars (3D video avatars) in distributed virtual environments. In our immersive telepresence system, users are captured by multiple RGBD sensors connected to a server that performs geometry reconstruction based on viewing feedback from remote telepresence parties. This feedback and reconstruction loop enables visibility-aware level-of-detail reconstruction of volumetric avatars regarding geometry and texture data. Our evaluation reveals that our approach leads to a significant reduction of reconstruction times, network bandwidth requirements and round-trip times in many situations.

The Impact of Avatar Appearance, Perspective and Context on Gait Variability and User Experience in Virtual Reality

Conference

Markus Wirth: Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU); Stefan Gradl: Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU); Georg Felix Prosinger: Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU); Felix Kluge: Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU); Daniel Roth: Computer Aided Medical Procedures and Augmented Reality; Bjoern M Eskofier: Friedrich-Alexander-Universität Erlangen-Nürnberg

Gait supervision plays an important role in the diagnosis, analysis, and rehabilitation of motor impairments. In particular for rehabilitation, several applications in virtual reality (VR) exist that show outcomes similar to in vivo therapy. However, the equivalence of the underlying gait characteristics has not yet been assessed. We analyzed the influence of different avatar appearances, environments, and perspectives on gait parameters in VR during different walking tasks. Results show that gait stability is significantly impacted by VR exposure, that walking tasks influence gait behavior differently in VR than in vivo, and thus that equivalence of gait characteristics in VR may not be blindly assumed.

Virtual Co-Embodiment: Evaluation of the Sense of Agency while Sharing the Control of a Virtual Body among Two Individuals

Invited Journal

Rebecca Fribourg*: Inria, Univ Rennes, CNRS, IRISA; Nami Ogawa*: The University of Tokyo; Ludovic Hoyet: Inria, Univ Rennes, CNRS, IRISA; Ferran Argelaguet: Inria, Univ Rennes, CNRS, IRISA; Takuji Narumi: The University of Tokyo / JST PREST; Michitaka Hirose: The University of Tokyo; Anatole Lécuyer: Inria, Univ Rennes, CNRS, IRISA.

In this paper, we introduce a concept called "virtual co-embodiment", which enables a user to share their virtual avatar with another entity (e.g., another user, robot, or autonomous agent). We describe a proof-of-concept in which two users can be immersed from a first-person perspective in a virtual environment and can have complementary levels of control (total, partial, or none) over a shared avatar. In addition, we conducted an experiment to investigate the influence of users' level of control over the shared avatar and prior knowledge of their actions on the users' sense of agency and motor actions.

Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents

Conference

Uttaran Bhattacharya: University of Maryland; Nick Rewkowski: University of Maryland; Abhishek Banerjee: University of Maryland, College Park; Pooja Guhan: University of Maryland College Park; Aniket Bera: University of Maryland at College Park; Dinesh Manocha: University of Maryland

We present Text2Gestures, a transformer-based learning method to interactively generate emotive full-body gestures for virtual agents aligned with text inputs. Our method generates emotionally expressive gestures by utilizing the relevant affective features for body expressions. We train our network on the MPI Emotional Body Expressions Database and observe state-of-the-art performance in generating emotive gestures. We conducted a web-based user study in which around 91% of participants rated our generated gestures as at least plausible on a five-point Likert scale. The emotions perceived by the participants from the gestures are also strongly positively correlated with the corresponding intended emotions.

Session: Hands, Gestures and Grasping

Tuesday, March 30, 13:00, Lisbon WEST UTC+1

Session Chair: Ryan P McMahan

Take me to the event:

Virbela Location: Auditorium B (MAP)
Watch the recorded video stream: HERE
Discord Channel: Open in Browser, Open in App (Participants only)

Capacitive Sensing for Improving Contact Rendering with Tangible Objects in VR

Invited Journal

Xavier de Tinguy: Univ Rennes, INSA, IRISA, Inria, CNRS; Claudio Pacchierotti: CNRS, Univ Rennes, Inria, IRISA; Anatole Lécuyer: Univ Rennes, Inria, CNRS, IRISA; Maud Marchal: Univ Rennes, INSA, IRISA, Inria, CNRS.

Standard tracking systems cannot properly track the interaction between the user's hand and a tangible object. Because they cannot detect when the hand is actually touching the tangible, the reconstructed relative position of the virtual elements is often flawed by a remaining virtual gap or interpenetration. We therefore combine tracking information from a tangible object instrumented with capacitive sensors with an optical tracking system to improve contact rendering when interacting with tangibles in VR. A human-subject study shows that combining capacitive sensing with optical tracking significantly improves the visuohaptic synchronization and immersion of the VR experience.

Passing a Non-verbal Turing Test: Evaluating Gesture Animations Generated from Speech

Conference

Manuel Rebol: American University; Christian Gütl: Graz University of Technology; Krzysztof Pietroszek: American University

We propose a data-driven technique for generating gestures directly from speech. Our approach is based on the application of Generative Adversarial Neural Networks (GANs) to model the correlation rather than causation between speech and gestures. This approach approximates neuroscience findings on how non-verbal communication and speech are correlated. We animate the generated gestures on a virtual character and evaluate the gestures in a user study. The study shows that users are not able to distinguish between the generated and the recorded gestures. Moreover, users are able to identify our synthesized gestures as related or not related to a given utterance.

GestOnHMD: Enabling Gesture-based Interaction on the Surface of Low-cost VR Head-Mounted Display

Journal

Taizhou Chen: City University of Hong Kong; Lantian Xu: City University of Hong Kong; Xianshan Xu: City University of Hong Kong; Kening Zhu: City University of Hong Kong

We present GestOnHMD, a gesture-based interaction technique that leverages the stereo microphones in a commodity smartphone to detect scratching gestures on three surfaces of a mobile VR headset. We first proposed a set of user-defined gestures on different surfaces of the Google Cardboard, taking user preferences and signal detectability into consideration. We then constructed a data set containing the acoustic signals of 18 users performing these on-surface gestures. Using this data set, we trained a three-step deep-learning classification pipeline for gesture detection and recognition. Lastly, we conducted a series of online participatory-design sessions to collect a set of user-defined gesture-referent mappings for applications that could potentially benefit from GestOnHMD.

Freehand Grasping: An Analysis of Grasping for Docking Tasks in Virtual Reality

Conference

Andreea Dalia Blaga: Birmingham City University; Maite Frutos-Pascual: Birmingham City University; Chris Creed: Birmingham City University; Ian Williams: Birmingham City University

Natural interaction such as freehand grasping is still a significant challenge in VR due to the dexterous versatility of human grasping actions. Currently, the design considerations for creating freehand grasping interactions in VR are drawn from the body of knowledge on real object grasping. While this may be suitable for some applications, recent work has shown that users grasp virtual objects differently than they grasp real objects, revealing a gap in knowledge about grasp patterns in VR. We present an elicitation study in which participants grasp virtual objects, categorised by shape, in a mixed docking task. Our results can be taken forward into parameterising grasp types for developing intuitive grasp models.

Stable Hand Pose Estimation under Tremor via Graph Neural Network

Conference

Zhiying Leng: Beihang University; Jiaying Chen: State Key Laboratory of Virtual Reality Technology and Systems; Hubert Shum: Durham University; Frederick Li: Durham University; Xiaohui Liang: Beihang University

Hand pose estimation is a fundamental task in VR/AR applications. Tremor motion leads to estimates that deviate from the user's intention. We present a novel Graph Neural Network for stable hand pose estimation under tremor. First, we propose a constraint adjacency matrix to mine a suitable graph topology, modeling the spatial-temporal constraints of joints and outputting a precise 3D tremor hand pose. Then, we devise a tremor compensation module to obtain a stable hand pose, exploiting the constraint between control points and the tremor hand pose. Extensive experiments show that our method achieves good performance.

Mid-Air Finger Sketching for Tree Modelling

Conference

Fanxing Zhang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; Zhihao Liu: Chinese Academy of Sciences; Zhanglin Cheng: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; Oliver Deussen: Chinese Academy of Sciences; Baoquan Chen: Peking University; Yunhai Wang: Shandong University

We explore the use of mid-air finger 3D sketching in VR for tree modeling. We present a hybrid approach that integrates freehand 3D sketches with an automatic population of branch geometries. The user only needs to draw a few 3D strokes in mid-air to define the envelope of the foliage and main branches. Our algorithm then automatically generates a full 3D tree model based on these stroke inputs. We demonstrate the ease-of-use, efficiency, and flexibility in tree modeling and overall shape control. We perform user studies and show a variety of realistic tree models generated instantaneously from 3D finger sketching.

Session: Plausibility, Presence and Social VR

Tuesday, March 30, 17:00, Lisbon WEST UTC+1

Session Chair: Mar Gonzalez Franco

Take me to the event:

Virbela Location: Auditorium A (MAP)
Watch the recorded video stream: HERE
Discord Channel: Open in Browser, Open in App (Participants only)

Effects of Language Familiarity in Simulated Natural Dialogue with a Virtual Crowd of Digital Humans on Emotion Contagion in Virtual Reality

Conference

Matias Volonte: Clemson University; Chang Chun Wang: National Chiao Tung University; Elham Ebrahimi: UNC Wilmington; Yu Chun Hsu: Multimedia Engineering at National Chiao Tung University; Kuan-yu Liu: Multimedia Engineering at National Chiao Tung University; Sai-Keung Wong: National Chiao Tung University; Sabarish V. Babu: Clemson University

We compared the emotional impact caused by a crowd of affective virtual humans (VHs) that communicated in the users' native or foreign language. We evaluated the users' reactions to a crowd of VHs that exhibited distinct emotions: Positive, Negative, and Neutral. A Mixed condition included VHs showing Positive, Negative, and Neutral emotions. Users collected items from a digital market and interacted with the VHs using natural speech. There were three language conditions: one group in the USA interacting in English, and two groups in Taiwan interacting in English (foreign) or Mandarin (native). Findings revealed that language familiarity enhanced users' emotions.

Disturbance and Plausibility in a Virtual Rock Concert

Conference

Alejandro Beacco: Universitat de Barcelona; Ramon Oliva: Universitat de Barcelona; Carlos Cabreira: Universitat de Barcelona; Jaime Gallego: Universitat de Barcelona; Mel Slater: Universitat de Barcelona

A performance by the rock band Dire Straits was rendered in virtual reality, using computer vision techniques to extract the appearance and movements of the band from video, and crowd simulation for the audience. An online pilot study was conducted where participants experienced the scenario and freely wrote about their experience. The documents produced were analyzed using sentiment analysis. The results showed that some participants were disturbed by the accompanying virtual audience that surrounded them, while others enjoyed the performance. The results point to a profound level of Plausibility of the scenario in both cases.

Influence of Interactivity and Social Environments on User Experience and Social Acceptability in Virtual Reality

Conference

Maurizio Vergari: Technische Universität Berlin; Tanja Kojic: Technische Universität Berlin; Francesco Vona: Politecnico di Milano; Franca Garzotto: Politecnico di Milano; Sebastian Möller: Technische Universität Berlin; Jan-Niklas Voigt-Antons: Technische Universität Berlin

Nowadays, Virtual Reality (VR) technology can potentially be used everywhere. Nevertheless, it is still uncommon to see VR devices in public settings. In these contexts, unaware bystanders in the surroundings might influence the User Experience (UX) and create concerns about the social acceptability of this technology. This paper investigates the influence of Social Environments and degree of interactivity on User Experience and social acceptability. Four Social Environments were simulated using 360° videos, and two VR games were developed with two levels of interactivity. Findings indicate that Social Environments and degree of interactivity should be taken into account when designing VR applications.

The Most Social Platform Ever? A Survey about Activities & Motives of Social VR Users

Conference

Philipp Sykownik: University of Duisburg-Essen; Linda Graf: University of Duisburg-Essen; Christoph Zils: University of Duisburg-Essen; Maic Masuch: University of Duisburg-Essen

We present online survey results on the activities and usage motives of social virtual reality users. We found that most users use social VR applications to satisfy diverse social needs. The second most frequently mentioned categories of activities and motives relate to experiential aspects such as entertainment activities. Another important category relates to the self, such as personal growth. Further, our results indicate that while social VR provides a superior social experience compared to traditional digital social spaces, users still desire better and affordable tracking technology, increased sensory immersion, and further improvements to social features.

Toward Understanding the Effects of Virtual Character Appearance on Avoidance Movement Behavior

Conference

Christos Mousas: Purdue University; Alexandros Fabio Koilias: University of the Aegean; Banafsheh Rekabdar: Southern Illinois University; Dominic Kao: Purdue University; Dimitris Anastasiou: Southern Illinois University

This virtual reality study assessed the impact of virtual character appearance on participants' avoidance movement behavior. In each condition, one of five different virtual characters (classified as mannequin, human, cartoon, robot, or zombie) was studied. Based on the collected measurements (avoidance movement behavior and self-reported ratings), we examined the effects of virtual character appearance on avoidance movement behavior. The results indicate that the appearance of the virtual characters did affect avoidance movement behavior as well as some of the other examined measures.

Session: Accessible VR

Wednesday, March 31, 08:30, Lisbon WEST UTC+1

Session Chair: Simon Hoermann

Take me to the event:

Virbela Location: Auditorium A (MAP)
Watch the recorded video stream: HERE
Discord Channel: Open in Browser, Open in App (Participants only)

VR based Power Wheelchair Simulator: Usability Evaluation through a Clinically Validated Task with Regular Users

Conference

Guillaume Vailland: INSA; Louise Devigne: INSA; Francois Pasteau: Univ Rennes, INSA; Florian Nouviale: Univ Rennes, INSA; Bastien Fraudet: Pôle Saint Hélier, rehabilitation center; Emilie Leblong: Pôle Saint Hélier, rehabilitation center; Marie Babel: Univ Rennes, INSA; Valérie Gouranton: Univ Rennes, INSA

Power wheelchairs are one of the main solutions for people with reduced mobility to maintain autonomy. However, safely driving a power wheelchair is a difficult task that requires training, and training methods based on real-life situations are often too complex to implement and unsuitable for some people with complex disabilities. In this context, we developed a Virtual Reality based power wheelchair simulator. In this paper, we present a clinical study in which 29 regular power wheelchair users were asked to complete a driving task under two conditions: in virtual reality and in real life. The objective is to compare performance between the two conditions and to evaluate the Quality of Experience provided by our simulator.

The Impact of Implicit and Explicit Feedback on Performance and Experience during VR-Supported Motor Rehabilitation

Conference

Negin Hamzeheinejad: University of Würzburg, Department of Computer Science, HCI Group; Daniel Roth: Computer Aided Medical Procedures and Augmented Reality; Samantha Monty: Department of Computer Science, HCI Group; Julian Breuer: Neurologisches interdisziplinäres Behandlungszentrum; Anuschka Rodenberg: Neurologisches interdisziplinäres Behandlungszentrum; Marc Erich Latoschik: Department of Computer Science, HCI Group

Patients with motor impairments may benefit greatly from sophisticated VR-supported rehabilitation methods. However, it is crucial to investigate the essential mechanisms that maximize experience, performance, and therapy outcome. This paper examines the impact of explicit feedback (visual and auditory cues) and implicit feedback (mirror neuron stimulation through the virtual trainer) on performance and user experience. Our results show that feedback improved performance, objectively assessed by the applied support force of a gait robot. Additionally, VR improved enjoyment and satisfaction. Implicit feedback, i.e., adapted motion synchrony by the virtual trainer, led to higher mental demand, indicating potentially increased neural activity.

Using Fuzzy Logic to Involve Individual Differences for Predicting Cybersickness during VR Navigation

Conference

Yuyang Wang: Arts et Métiers ParisTech; Jean-Rémy Chardonnet: Arts et Métiers, Institut Image; Frederic Merienne: Arts et Metiers; Jivka Ovtcharova: Karlsruhe Institute of Technology

Many studies show that individual differences affect users' susceptibility to cybersickness in VR. Based on fuzzy logic theory, we developed a model that incorporates three individual differences (age, gaming experience, and ethnicity); for validation, we correlated the corresponding outputs with the scores obtained from the simulator sickness questionnaire (SSQ) in a simple navigation scenario. Our work provides insights for establishing customized VR navigation experiences that account for individual differences.
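
For readers unfamiliar with fuzzy-logic modelling, the sketch below illustrates the general idea of mapping individual-difference inputs to a susceptibility score through membership functions and rules. It is a minimal, hypothetical Python example using only two of the three inputs; the value ranges, membership functions, and rules are our assumptions, not the authors' model.

    def tri(x, a, b, c):
        """Triangular membership function peaking at b on the interval [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def susceptibility(age, gaming_hours_per_week):
        # Fuzzify the inputs (illustrative, assumed ranges).
        young = tri(age, 10, 20, 35)
        older = tri(age, 25, 60, 90)
        novice = tri(gaming_hours_per_week, -1, 0, 5)
        expert = tri(gaming_hours_per_week, 3, 15, 40)

        # Two toy rules (min acts as fuzzy AND): older novices are more
        # susceptible, young experts are less susceptible.
        high = min(older, novice)
        low = min(young, expert)

        # Defuzzify with a weighted average of the rule activations.
        if high + low == 0:
            return 0.5
        return (high * 0.9 + low * 0.1) / (high + low)

    print(susceptibility(age=55, gaming_hours_per_week=1))   # close to 0.9
    print(susceptibility(age=22, gaming_hours_per_week=12))  # close to 0.1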

Head Up Visualization of Spatial Sound Sources in Virtual Reality for Deaf and Hard-of-Hearing People

Conference

Mohammadreza Mirzaei: Vienna University of Technology; Peter Kán: Vienna University of Technology; Hannes Kaufmann: Vienna University of Technology

This paper presents a novel method for the visualization of 3D spatial sounds in Virtual Reality (VR) for Deaf and Hard-of-Hearing (DHH) people. Our method enhances traditional VR devices with additional haptic and visual feedback, which aids spatial sound localization. The proposed system automatically analyses 3D sound from a VR application and indicates the direction of sound sources to the user via two vibro-motors and two Light-Emitting Diodes (LEDs). Our study results suggest that DHH participants could complete sound-related VR tasks significantly faster in the LED and haptic+LED conditions than with haptic-only feedback.

Multi-modal Spatial Object Localization in Virtual Reality for Deaf and Hard-of-Hearing People

Conference

Mohammadreza Mirzaei: Vienna University of Technology; Peter Kán: Vienna University of Technology; Hannes Kaufmann: Vienna University of Technology

In this paper, we propose a novel omni-directional particle visualization method and evaluate multi-modal presentation methods in Virtual Reality (VR) for Deaf and Hard-of-Hearing (DHH) persons: audio, visual, haptic, and a combination of them (AVH). Additionally, we compare the results with those of hearing persons. Our user studies show that both DHH and hearing persons could complete VR tasks significantly faster using AVH. We also found that DHH persons can complete visual-related VR tasks faster than hearing persons when using our newly proposed visualization method. Our qualitative and quantitative evaluation indicates that both groups preferred the AVH method.

Session: Haptics

Wednesday, March 31, 08:30, Lisbon WEST UTC+1

Session Chair: Gabriel Zachmann

Take me to the event:

Virbela Location: Auditorium B (MAP)
Watch the recorded video stream: HERE
Discord Channel: Open in Browser, Open in App (Participants only)

Floor-vibration VR: Mitigating Cybersickness Using Whole-body Tactile Stimuli in Highly Realistic Vehicle Driving Experiences

Journal

Sungchul Jung: University of Canterbury; Richard Chen Li: University of Canterbury; Ryan Douglas McKee: University of Canterbury; Mary C. Whitton: UNC Chapel Hill; Robert W. Lindeman: University of Canterbury

Starting from sensory conflict theory, we posit that if a vertically vibrating floor delivers vestibular stimuli that minimally match the vibration characteristics of a scenario, the conflict between the visual and vestibular senses will be reduced and, thus, the incidence and/or severity of cybersickness will also be reduced. We implemented a realistic off-road vehicle simulator and programmed the floor to generate vibrations similar to those experienced in real off-road driving. Comparing data (presence, SSQ, self-rated discomfort levels, HR, GSR, pupil size) between participants in the Vibration group and the No-Vibration group, we found that the vibration significantly reduced measures (SSQ and GSR) of cybersickness.

Combining Dynamic Passive Haptics and Haptic Retargeting for Enhanced Haptic Feedback in Virtual Reality

Journal

André Zenner: Saarland University, Saarland Informatics Campus; Kristin Ullmann: Saarland University, Saarland Informatics Campus; Antonio Krüger: Saarland University, Saarland Informatics Campus

For immersive proxy-based haptics in virtual reality, the challenges of similarity and colocation must be solved. To do so, we propose to combine the techniques of Dynamic Passive Haptic Feedback and Haptic Retargeting, which have each been shown to be successful in tackling these challenges individually but have not previously been studied in combination. In two proof-of-concept experiments focused on the perception of virtual weight distribution inside a stick, we validate that their combination can solve the challenges of similarity and colocation better than either technique alone.

EncounteredLimbs: A Room-scale Encountered-type Haptic Presentation using Wearable Robotic Arms

Conference

Arata Horie: The University of Tokyo; Mhd Yamen Saraiji: Keio University; Zendai Kashino: University of Tokyo; Masahiko Inami: University of Tokyo

In this work, we present EncounteredLimbs, a novel wearable approach to presenting a user with encountered-type haptic feedback. We realize this feedback using a wearable robotic limb that holds a plate with which the user can interact anywhere in their room-scale environment. A technical evaluation of the implemented system showed that it provides a stiffness of over 25 N/m and slant angle errors under 3°. Three user studies show the limitations of haptic slant perception in humans and the quantitative and qualitative effectiveness of the current prototype system. We conclude the paper by discussing various potential applications and possible improvements that could be made to the system.

Tactile Perceptual Thresholds of Electrovibration in VR

Journal

Lu Zhao: Beijing Institute of Technology; Yue Liu: Beijing Institute of Technology; Weitao Song: Beijing Institute of Technology

To produce high-fidelity haptic feedback in virtual environments, various haptic devices and tactile rendering methods have been explored, and perception deviation between a virtual environment and a real environment has been investigated. However, the tactile sensitivity for touch perception in a virtual environment has not been fully studied. This paper investigates users’ perceptual thresholds when they are immersed in a virtual environment by utilizing electrovibration tactile feedback and by generating tactile stimuli with different waveform, frequency and amplitude characteristics. We believe that our work on tactile perceptual thresholds can promote future research that focuses on creating a favorable haptic experience for VR applications.

Unscripted Retargeting: Reach Prediction for Haptic Retargeting in Virtual Reality

Conference

Aldrich Clarence: Monash University; Jarrod Knibbe: University of Melbourne; Maxime Cordeil: Monash University; Michael Wybrow: Monash University

Research is exploring novel ways of adding haptics to VR. One popular technique is haptic retargeting, where real and virtual hands are decoupled to enable the reuse of physical props. However, this technique requires the system to know the users’ intended interaction target, or requires additional hardware for prediction. We explore software-based reach prediction as a means of facilitating responsive, unscripted retargeting. We trained a Long Short-Term Memory network on users’ reach trajectories to predict intended targets. We achieved an accuracy of 81.1% at approximately 65% of movement. This could enable haptic retargeting during the last 35% of movement. We discuss the implications for possible physical proxy locations.
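
As an illustration of the kind of reach-prediction model described above, the following is a minimal PyTorch sketch of an LSTM classifier over partial 3D hand trajectories. The network shape, input features, and number of candidate targets are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class ReachPredictor(nn.Module):
        """Predicts the intended target from a partial reach trajectory."""
        def __init__(self, num_targets=4, hidden_size=64):
            super().__init__()
            # Each time step: a 3D hand position (x, y, z); sizes are illustrative.
            self.lstm = nn.LSTM(input_size=3, hidden_size=hidden_size, batch_first=True)
            self.classifier = nn.Linear(hidden_size, num_targets)

        def forward(self, trajectories):
            # trajectories: (batch, time_steps, 3)
            _, (h_n, _) = self.lstm(trajectories)
            return self.classifier(h_n[-1])  # logits over candidate targets

    # Example: predict from the first 65% of a 100-sample reach (dummy data).
    model = ReachPredictor()
    partial_reach = torch.randn(1, 65, 3)
    target_logits = model(partial_reach)
    predicted_target = target_logits.argmax(dim=1)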

Session: Redirected Locomotion

Wednesday, March 31, 13:00, Lisbon WEST UTC+1

Session Chair: Evan Suma Rosenberg

Take me to the event:

Virbela Location: Auditorium A (MAP)
Watch the recorded video stream: HERE
Discord Channel: Open in Browser, Open in App (Participants only)

Adjusting Relative Translation Gains According to Space Size in Redirected Walking for Mixed Reality Mutual Space Generation

Conference

Dooyoung Kim: KAIST; Jae-eun Shin: KAIST; Jeongmi Lee: KAIST; Woontack Woo: KAIST

We propose the concept of relative translation gains, a novel Redirected Walking (RDW) method to create a mutual movable space between the Augmented Reality (AR) host's reference space and the Virtual Reality (VR) client's space. Our method adjusts the remote client's walking speed along each axis of the VR space to modify the movable area without coordinate distortion. We estimate the relative translation gain threshold according to the reference space size. Our study showed that for remote clients connected to a larger reference space, relative translation gains can be increased to utilize a VR space bigger than their real space.
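
In the redirected walking literature, a translation gain is conventionally defined as the ratio of virtual to physical translation; the per-axis ("relative") variant described above can be sketched roughly as follows (our notation, not necessarily the authors' exact formulation):

    g_T = \frac{\lVert \mathbf{T}_{\mathrm{virtual}} \rVert}{\lVert \mathbf{T}_{\mathrm{real}} \rVert},
    \qquad \mathbf{T}_{\mathrm{virtual}} = (g_x T_x,\; g_z T_z)

where g_x and g_z scale the client's physical movement independently along the two horizontal axes, stretching or compressing the usable virtual area without distorting its coordinate system.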

ARC: Alignment-based Redirection Controller for Redirected Walking in Complex Environments

Journal

Niall L. Williams: University of Maryland, College Park; Aniket Bera: University of Maryland at College Park; Dinesh Manocha: University of Maryland

We present a novel redirected walking controller based on alignment that allows the user to explore large, complex virtual environments, while minimizing the number of collisions in the physical environment. Our alignment-based redirection controller (ARC) steers the user such that their proximity to obstacles in the physical environment matches the proximity to obstacles in the virtual environment as closely as possible. We introduce a new metric to measure the relative environment complexity and characterize the difference in navigational complexity between the physical and virtual environments. ARC significantly outperforms state-of-the-art controllers and works in arbitrary environments without any user input.

Detection Thresholds with Joint Horizontal and Vertical Gains in Redirected Jumping

Conference

Yi-Jun Li: Beihang University; De-Rong Jin: Beihang University; Miao Wang: Beihang University; Junlong Chen: Beihang University; Frank Steinicke: Universität Hamburg; Shi-Min Hu: BNRist, Tsinghua University; Qinping Zhao: Beihang University

Redirected jumping (RDJ) is a locomotion technique that allows users to explore a virtual space larger than the available physical space by imperceptibly manipulating users' virtual viewpoints according to different gains. To understand how humans perceive distance manipulation when more than one gain is used, we explored joint horizontal and vertical gains during two-legged takeoff jumping. We estimated and analyzed horizontal and vertical detection thresholds by conducting a user study, fitting the data to two-dimensional psychometric functions, and visualizing the fitted 3D plots. We also designed redirected-jumping-based games to demonstrate the effectiveness of RDJ.
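
As background on the threshold-estimation procedure mentioned above, detection thresholds in redirection studies are commonly obtained by fitting a sigmoid psychometric function to the proportion of trials in which participants report noticing the manipulation and reading off the gains at chosen criterion levels (often 25% and 75%). A generic one-dimensional form, in our notation rather than the paper's exact two-dimensional formulation, is

    P(\text{detected} \mid g) = \frac{1}{1 + e^{-(g - \mu)/\sigma}}

where \mu is the point of subjective equality and \sigma controls the slope; the two-dimensional case studied in the paper extends g to a pair of horizontal and vertical gains.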

Dynamic Density-based Redirected Walking Towards Multi-user Virtual Environments

Conference

Tianyang Dong: Zhejiang University of Technology; Yue Shen: Zhejiang University of Technology; Tieqi Gao: Zhejiang University of Technology; Jing Fan: Zhejiang University of Technology

Boundary conflicts during real walking in multi-user VR applications are related to the density of users in the physical space. To decrease boundary conflicts, this paper presents a novel method of dynamic density-based redirected walking for multi-user virtual environments. The method dynamically adjusts the user distribution toward a desired state with high center density and low boundary density through a density force, which is generated by the difference between the standard density and the actual density. The experimental results show that our method can reduce conflicts by about 30% compared with other existing multi-user redirected walking methods.

LiveObj: Object Semantics-based Viewport Prediction for Live Mobile Virtual Reality Streaming

Journal

Xianglong Feng: Rutgers University; Zeyang Bao: Rutgers University; Sheng Wei: Rutgers University

Virtual reality (VR) video streaming has been gaining popularity recently as a new form of multimedia that provides users with an immersive viewing experience. However, the high volume of data for the 360-degree video frames creates significant bandwidth challenges. We develop a live viewport prediction mechanism, namely LiveObj, by detecting the objects in the video based on their semantics. The detected objects are then tracked to infer the user's viewport in real time by employing a reinforcement learning algorithm. Our evaluations based on 48 users watching 10 VR videos demonstrate high prediction accuracy and significant bandwidth savings achieved by LiveObj.

Walking Outside the Box: Estimation of Detection Thresholds for Non-Forward Steps

Conference

Yong-Hun Cho: Yonsei University; Daehong Min: Yonsei University; Jin-Suk Huh: Yonsei University; Se-Hee Lee: Yonsei University; June-Seop Yoon: Yonsei University; In-Kwon Lee: Yonsei University

Redirected walking maps a virtual path to a real path with unnoticeable distortion for efficient space usage. To hide the distortion from the user, detection thresholds have been measured for forward steps. In addition to forward steps, adding non-forward steps can expand VR locomotion in any direction. In this work, we measure the translation and curvature detection thresholds for non-forward steps. The results show translation thresholds similar to those for forward steps and wider thresholds for curvature. Having non-forward steps in the redirected walking arsenal can add freedom to virtual world design and lead to efficient space usage.

Session: Selection and Manipulation

Wednesday, March 31, 15:30, Lisbon WEST UTC+1

Session Chair: Rob Teather

Take me to the event:

Virbela Location: Auditorium A (MAP)
Watch the recorded video stream: HERE
Discord Channel: Open in Browser, Open in App (Participants only)

Exploring Input Approximations for Control Panels in Virtual Reality

Conference

Markus Tatzgern: Salzburg University of Applied Sciences; Christoph Birgmann: Salzburg University of Applied Sciences

We explore input approximations of real hand actions for manipulating buttons, toggles, knobs, and sliders using typical handheld VR controllers. We use Oculus Quest controllers, which rely on capacitive sensing, to create basic, approximate hand gestures. We demonstrate the potential of our designs by comparing approximate hand gestures against the controller's joystick, which allows fine motor thumb input, and a baseline ray-casting interaction. A detailed analysis of our interaction designs using the Framework for Interactive Fidelity Analysis (FIFA) allows us to discuss differences between input approximations and real-world hand manipulations.

Disocclusion Headlight for Selection Assistance in VR

Conference

Lili Wang: Beihang University; Jianjun Chen: Beihang University; Qixiang Ma: Beihang University; Voicu Popescu: Purdue University

We introduce the disocclusion headlight, a method for VR selection assistance based on alleviating occlusions at the center of the user's field of view. The user's visualization of the VE is modified to reduce overlap between objects. This way, selection candidate objects have larger image footprints, which facilitates selection. The modification is confined to the center of the frame, with continuity to the periphery of the frame, which is rendered conventionally. We tested our method on three selection tasks, comparing it to the alpha cursor and the flower cone methods. Our method showed significant advantages in terms of shorter task completion times and fewer selection errors.

Evaluating the Effects of Non-isomorphic Rotation on 3D Manipulation Tasks in Mixed Reality Simulation

Invited Journal

Zihan Gao: Harbin Engineering University; Huiqiang Wang: Harbin Engineering University, Peng Cheng Laboratory, China; Hongwu Lv: Harbin Engineering University; Moshu Wang: Harbin Engineering University; Yifan Qi: Communication University of China.

Non-isomorphic rotation has been considered an effective approach for 3D rotation tasks. However, it is not clear whether non-isomorphic rotation can benefit 6-DOF manipulation tasks. Our paper extends the study of non-isomorphic rotation from rotation-only tasks to 6-DOF manipulation tasks and analyzes the collected data using a 2-component model. With a mixed reality simulation approach, we also explore whether the environment (AR or VR) has an impact on manipulation tasks. We find that only dynamic non-isomorphic rotation is significantly faster than isomorphic rotation, and that environmental visual realism can help improve user experience.
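
For orientation, a non-isomorphic rotation technique maps physical rotation to virtual rotation through a gain, roughly

    \theta_{\mathrm{virtual}} = g \cdot \theta_{\mathrm{physical}}

where g = 1 corresponds to the isomorphic case; in dynamic variants the gain is a function of angular speed, g = g(\omega), so slow, precise rotations remain close to 1:1 while fast rotations are amplified. This is the general form of such techniques, not necessarily the exact amplification curve evaluated in the paper.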

Scene-Context-Aware Indoor Object Selection and Movement in VR

Conference

Miao Wang: Beihang University; Ziming Ye: Beihang University; Jinchuan Shi: Beihang University; Yongliang Yang: University of Bath

VR applications such as interior design typically require accurate and efficient selection and movement of indoor objects. In this paper, we present an indoor object selection and movement approach by taking into account scene contexts such as object semantics and interrelations. This provides more intelligence and guidance to the interaction, and greatly enhances user experience. We evaluate our proposals by comparing them with traditional approaches in different interaction modes based on controller, head pose, and eye gaze. We demonstrate our findings via a furniture arrangement application.

Evaluating Object Manipulation Interaction Techniques in Mixed Reality: Tangible User Interfaces and Gesture

Conference

Evren Bozgeyikli: University of Arizona; Lila Bozgeyikli: University of Arizona

Tangible user interfaces (TUIs) have been widely studied in computer, virtual reality, and augmented reality systems. However, there have been few evaluations of TUIs in wearable mixed reality (MR). We evaluated three object manipulation techniques in wearable MR: (1) space-multiplexed, identically formed TUI (physical cube); (2) time-multiplexed TUI (tangible controller); (3) hand gesture. The interaction techniques were compared in a user study with 42 participants. Results revealed that the tangible cube and the controller were comparable to each other, while both were superior to the hand gesture in terms of user experience, performance, and presence.

FixationNet: Forecasting Eye Fixations in Task-Oriented Virtual Environments

Journal

Zhiming Hu: Peking University; Andreas Bulling: University of Stuttgart; Sheng Li: Peking University; Guoping Wang: Computer Science and Technology

Human visual attention in immersive virtual reality (VR) is key for many important applications, such as gaze-contingent rendering and gaze-based interaction. However, prior works typically focused on free-viewing conditions that have limited relevance for practical applications. We first collect eye tracking data of 27 participants performing a visual search task in four immersive VR environments. Then we provide a comprehensive analysis of the collected data and reveal correlations between users' eye fixations and other factors. Based on the analysis, we propose FixationNet -- a novel learning-based model to forecast users' eye fixations in the near future in VR.

Session: Training and Learning

Wednesday, March 31, 17:00, Lisbon WEST UTC+1

Session Chair: Mary C. Whitton

Take me to the event:

Virbela Location: Auditorium A (MAP)
Watch the recorded video stream: HERE
Discord Channel: Open in Browser, Open in App (Participants only)

The Effect of Pitch in Auditory Error Feedback for Fitts’ Tasks in Virtual Reality Training Systems

Conference

Anil Ufuk Batmaz: Simon Fraser University; Wolfgang Stuerzlinger: Simon Fraser University

We examine the effect of different pitches for auditory error feedback on pointing performance. The results of our first study demonstrate that high-pitch error feedback significantly decreases user performance in terms of time and throughput. In a second study, we evaluated adaptive sound feedback, i.e., we increased the pitch with the error rate, while asking subjects to execute the task “as fast/as precise/as fast and precise as possible”. Results showed that adaptive sound feedback decreases the error rate for “as fast as possible” task execution without affecting the time. Our results inform the design of various VR systems.

Correction of Avatar Hand Movements Supports Learning of a Motor Skill

Conference

Klemen Lilija: University of Copenhagen; Søren Kyllingsbæk: University of Copenhagen; Kasper Hornbæk: University of Copenhagen

Learning to move the hands in particular ways is essential in many training and leisure virtual reality applications, yet challenging. Existing techniques that support learning of motor movement in virtual reality rely on external cues such as arrows showing where to move or transparent hands showing the target movement. We propose a technique where the avatar’s hand movement is corrected to be closer to the target movement. This embeds guidance in the user’s avatar, instead of in external cues and minimizes visual distraction. Through two experiments, we found that such movement guidance improves the short-term retention of the target movement when compared to a control condition without guidance.

Diegetic Tool Management in a Virtual Reality Training Simulation

Conference

Patrick Dickinson: University of Lincoln; Andrew Cardwell: University of Lincoln; Adrian Parke: University of the West of Scotland; Kathrin Gerling: KU Leuven; John C Murray: University of Hull

Researchers have suggested that diegetic interfaces can enhance users’ sense of presence and immersion in virtual reality. We present a study (N = 58) in which we compare participants’ experiences of diegetic and non-diegetic interfaces, in a prototype VR CSI training application. Contrary to expectations, we do not find evidence that participants’ sense of presence is elevated when using the diegetic interface; however, we suggest that this may be due to reported higher levels of perceived workload. We conclude by discussing the relationship between diegetic interface design and interaction fidelity, and highlighting trade-offs between fidelity and engagement.

The Effects of Cognitive Load on Engagement in a Virtual Reality Learning Environment

Conference

Jhon Bueno Vesga: University of Missouri; Xinhao Xu: University of Missouri; Hao He: University of Missouri

Engagement has traditionally been linked to presence in desktop-based virtual reality learning environments. The main purpose of this study was to examine whether individual dimensions of cognitive load (mental demand, effort, and frustration level) can be used in addition to factors like presence and self-efficacy to predict students' cognitive engagement. The results of the study confirmed presence and self-efficacy as significant predictors of students' engagement. A three-step hierarchical regression analysis also revealed that two of the three individual dimensions of cognitive load (effort and frustration level) were significant predictors of students' engagement.

Virtual Reality Public Speaking Training: Experimental Evaluation of Direct Feedback Technology Acceptance

Conference

Fabrizio Palmas: Technical University Munich; Ramona Rainert: University of Augsburg; Jakub Edward Cichor: Technical University of Munich; David A. Plecher: Technical University; Gudrun Klinker: Technical University of Munich

Virtual Reality Speech Training (VR-ST) helps trainees develop presentation skills and practice their application in the real world. Another benefit is direct feedback based on gamification principles. It is not yet clear if direct feedback is accepted by participants. We investigated how direct feedback in a VR-ST affects the participants' technology acceptance based on the Technology Acceptance Model (TAM). Our study compares a VR-ST with direct feedback (n=100) with a simulation-based VR-ST (n=100). The results show that direct feedback offers benefits to trainees by improving technology acceptance. Further results show that VR-ST is generally more accepted by participants without public speaking anxiety.

Session: Pen-based and Hands-free Interaction

Wednesday, March 31, 17:00, Lisbon WEST UTC+1

Session Chair: Doug Bowman

Take me to the event:

Virbela Location: Auditorium B (MAP)
Watch the recorded video stream: HERE
Discord Channel: Open in Browser, Open in App (Participants only)

Blink-Suppressed Hand Redirection

Conference

André Zenner: Saarland University, Saarland Informatics Campus; Kora Persephone Regitz: Saarland University, Saarland Informatics Campus; Antonio Krüger: Saarland University, Saarland Informatics Campus

We present Blink-Suppressed Hand Redirection (BSHR), the first body warping technique that makes use of blink-induced change blindness, to study the feasibility and detectability of hand redirection based on blink suppression. In a psychophysical experiment, we verify that unnoticeable blink-suppressed hand redirection is possible and derive corresponding detection thresholds. Our findings also indicate that the range of unnoticeable BSHR can be increased by combining blink-suppressed instantaneous hand shifts with continuous warping. As an additional contribution, we derive detection thresholds for Cheng et al.'s (2017) body warping technique that does not leverage blinks.

Flashpen: A High-Fidelity and High-Precision Multi-Surface Pen for Virtual Reality

Conference

Hugo Romat: ETH Zurich; Andreas Rene Fender: ETH; Manuel Meier: ETH Zürich; Christian Holz: ETH Zürich

Digital pen interaction has become a first-class input modality for precision tasks such as writing, annotating, and drawing. In Virtual Reality, however, input is largely detected using cameras, which do not come close to the fidelity we achieve with analog handwriting or to the spatial resolution required for fine-grained on-surface input. We present FlashPen, a digital pen for VR whose sensing principle affords accurately digitizing handwriting and fine-grained 2D input for manipulation. We combine absolute camera tracking with relative motion sensing from an optical flow sensor. In this paper, we describe our prototype, a user study, and several application prototypes.

Design and Evaluation of a Free-Hand VR-based Authoring Environment for Automated Vehicle Testing

Conference

Sevinc Eroglu: RWTH Aachen University; Frederic Stefan: Ford Motor Company; Alain M.R. Chevalier: Ford Research Center Aachen; Daniel Roettger: Ford Motor Company; Daniel Zielasko: University of Trier; Torsten Wolfgang Kuhlen: RWTH Aachen University; Benjamin Weyers: Trier University

We propose a VR authoring environment that enables engineers to design road networks and traffic scenarios for automated vehicle testing based on free-hand interaction. We present a 3D interaction technique for the efficient placement and selection of virtual objects that is employed on a 2D panel. We conducted a comparative user study in which our interaction technique outperformed existing approaches regarding precision and task completion time. Furthermore, we demonstrate the effectiveness of the system by a qualitative user study with domain experts.

Hands-free interaction in immersive virtual reality: A systematic review

Journal

Pedro Monteiro: INESC TEC; Guilherme Gonçalves: INESC TEC; Hugo Coelho: INESC TEC; Miguel Melo: INESC TEC; Maximino Bessa: Universidade de Trás-os-Montes e Alto Douro

Hands are the most important tool for interacting with immersive virtual environments, and they should remain available for the most critical tasks. For example, a surgeon in VR should keep his/her hands on the instruments and be able to perform secondary tasks without a disruptive event. In such situations, the hands are not available for interaction. This systematic review surveys the literature and provides a quantitative and qualitative analysis of which hands-free interfaces are used, the interaction tasks performed, the metrics used for evaluation, the results of such evaluations, and research opportunities.

Comparative Evaluation of Digital Writing and Art in Real and Immersive Virtual Environments

Conference

Roshan Venkatakrishnan: Clemson University; Rohith Venkatakrishnan: Clemson University; Sabarish V. Babu: Clemson University; Yu-Shuen Wang: National Chiao Tung University

With Virtual Reality increasingly being applied in educational contexts where writing and note taking is crucial, it is important to study how well humans can perform these tasks in VR. In a between-subjects evaluation, we studied participants' fine motor coordination in real and virtual settings, further examining the effects of providing a virtual self avatar on task performance. Overall, it appears that while writing and artistic activities can be successfully supported in VR applications using specialized input devices, the accuracy in performing such tasks is compromised, highlighting the need for developments that support such fine motor tasks in VR.

Session: Locomotion

Thursday, April 1, 08:30, Lisbon WEST UTC+1

Session Chair: Adalberto Simeone

Take me to the event:

Virbela Location: Auditorium A (MAP)
Watch the recorded video stream: HERE
Discord Channel: Open in Browser, Open in App (Participants only)

Evaluation of Body-centric Locomotion with Different Transfer Functions in Virtual Reality

Conference

BoYu Gao: Jinan University; Zijun Mai: Jinan University; Huawei Tu: La Trobe University; Henry Been-Lirn Duh: La Trobe University

Transfer functions are an important determinant of body-centric locomotion methods. However, little is known about the effects of transfer functions on virtual locomotion with different body parts. In this work, we selected four typical transfer functions and four common body parts from existing work and conducted an experiment to evaluate their effects on virtual locomotion in VR. We present objective (task time, final position error, and rate of failed trials) and subjective (User Experience Questionnaire - Short) evaluation results. Based on the results, we provide implications for designing body-centric locomotion with different transfer functions in VR.

I Feel More Engaged When I Move!: Deep Learning-based Backward Movement Detection and its Application

Conference

Seungwon Paik: Ajou University; Youngseung Jeon: Ajou University; Patrick C. Shih: Indiana University Bloomington; Kyungsik Han: Ajou University

We present the development of a prediction model for forward/backward movement while considering a user's orientation and the verification of the model's effectiveness. We built a deep learning-based model by collecting sensor data on the movement of the user's head, waist, and feet. We developed three realistic VR scenarios that involve backward movement, set three conditions (controller-based, treadmill-based, and model-based) for movement, and evaluated user experience in each condition through a study of 36 participants. The results of our study demonstrated that movement support through modeling is possible, suggesting its potential for use in many VR applications.

Larger Step Faster Speed: Investigating Gesture-based Locomotion in Place with Different Walking Speed in Virtual Reality

Conference

Pingchuan Ke: City University of Hong Kong; Kening Zhu: City University of Hong Kong

We investigated gesture-amplitude-based walking-speed control for locomotion in place (LIP) in VR. Study 1 showed that, compared to tapping and goose-stepping, the gesture of marching in place was preferred by users while sitting and standing. With the recorded data, we trained a classification model for LIP speed control based on users' leg/foot marching gestures. We then compared the marching-in-place speed-control technique with controller-based teleportation on a target-reaching task. We found no significant difference between the two conditions in terms of accuracy. More importantly, marching in place yielded significantly higher user ratings in naturalness, realness, and engagement.

Spherical World in Miniature: Exploring the Tiny Planets Metaphor for Discrete Locomotion in Virtual Reality

Conference

David Englmeier: LMU Munich; Wanja Sajko: LMU Munich; Andreas Butz: LMU Munich

We explore the concept of a Spherical World in Miniature (SWIM) for discrete locomotion in Virtual Reality (VR). A SWIM wraps a planar WIM around a physically embodied sphere. It thereby implements a tangible Tiny Planet metaphor that can be rotated and moved, enabling scrolling, scaling, and avatar teleportation. In a lab study (N=20), we compare the SWIM to a planar WIM using VR controllers. We test both concepts in a navigation task and also investigate the effects of two different screen sizes. Despite its less direct geometrical transformation, our results show that the SWIM performed superior in most evaluations.

Towards Sneaking as a Playful Input Modality for Virtual Environments

Conference

Sebastian Cmentowski: University of Duisburg-Essen; Andrey Krekhov: University of Duisburg-Essen; André Zenner: Saarland University, Saarland Informatics Campus; Daniel Kucharski: University of Duisburg-Essen; Jens Krueger: University of Duisburg-Essen

In our work, we explore the potential of sneaking as a playful input modality for virtual environments. To this end, we discuss possible sneaking-based gameplay mechanisms and develop three technical approaches, including precise foot-tracking and two abstraction levels. Our evaluation reveals the potential of sneaking-based interactions in IVEs, offering unique challenges and thrilling gameplay. For these interactions, precise tracking of individual footsteps is unnecessary, as a more abstract approach focusing on the players' intention offers the same experience while providing more comprehensible feedback. Based on these findings, we discuss the broader potential and individual strengths of our gait-centered interactions.

Session: Rendering and Texture Mapping

Thursday, April 1, 08:30, Lisbon WEST UTC+1

Session Chair: Taehyun Rhee

Take me to the event:

Virbela Location: Auditorium B (MAP)
Watch the recorded video stream: HERE
Discord Channel: Open in Browser, Open in App (Participants only)

Magnoramas: Magnifying Dioramas for Precise Annotations in Asymmetric 3D Teleconsultation

Conference

Kevin Yu: Research Group MITI; Alexander Winkler: Technical University of Munich; Frieder Pankratz: LMU; Marc Lazarovici: Institut für Notfallmedizin; Dirk Wilhelm: Research Group MITI; Ulrich Eck: Computer Aided Medical Procedures and Augmented Reality; Daniel Roth: Computer Aided Medical Procedures and Augmented Reality; Nassir Navab: Technische Universität München

We introduce Magnoramas, an interaction method for creating supernaturally precise annotations on virtual objects. We evaluated Magnoramas in a collaborative context in a simplified clinical scenario. Teleconsultation was performed between a remote expert, immersed in a 3D reconstruction and embodied by an avatar in Virtual Reality, and a local user in Augmented Reality. The results show that Magnoramas significantly improve the precision of annotations while preserving usability and perceived presence measures compared to the baseline method. By additionally hiding the physical world while keeping the Magnorama, users can intentionally lower their perceived social presence and focus on their tasks.

DSNet: Deep Shadow Network for Illumination Estimation

Conference

Yuan Xiong: Beihang University; Hongrui Chen: Beihang University; Jingru Wang: Beihang University; Zhe Zhu: Duke University; Zhong Zhou: Beihang University

Illumination consistency is important for modeling and rendering in virtual reality. In 3D reconstruction and Mixed Reality (MR) fusion, the appearance of a large-scale outdoor scene may change frequently, and the illumination inconsistency in photographs makes it challenging to fit them to an existing model. To tackle this problem, this paper proposes a novel approach that precisely estimates the illumination of the input image and collaboratively utilizes illumination-based data augmentation for optimization. We show that illumination simulation can improve the performance of visual applications. Experimental results validate the effectiveness of the proposed approach and show its superiority over the state of the art.

Instant Panoramic Texture Mapping with Semantic Object Matching for Large-Scale Urban Scene Reproduction

Journal

Jinwoo Park: KAIST; Ikbeom Jeon: KAIST; Sung-eui Yoon: KAIST; Woontack Woo: KAIST

This paper proposes a novel panoramic texture mapping-based rendering system for real-time, photorealistic reproduction of large-scale urban scenes at a street level. To provide users with free walk-through experiences in global urban streets, our system effectively covers large-scale scenes by using sparsely sampled panoramic street-view images and simplified scene models, which are easily obtainable from open databases. Our key concept is to extract semantic information from the given street-view images and to deploy it in proper intermediate steps of the suggested pipeline, which results in enhanced rendering accuracy and performance time. Furthermore, our method supports real-time semantic 3D inpainting to handle occluded and untextured areas, which appear often when the user's viewpoint dynamically changes.

Bidirectional Shadow Rendering for Interactive Mixed 360° Videos

Conference

Lili Wang: Beihang University; Hao Wang: Beihang University; Danqing Dai: Beihang University; Jiaye Leng: Beihang University; Xiaoguang Han: Shenzhen Research Institute of Big Data

In this paper, we provide a bidirectional shadow rendering method to render shadows between real and virtual objects in 360° videos in real time. We construct a 3D scene approximation from the current output viewpoint to approximate the nearby real scene geometry in the video. Then, we propose a ray casting based algorithm to determine the shadow regions cast on the virtual objects by the real objects. After that, we introduce an object-aware shadow mapping method to cast shadows from virtual objects onto real objects. Finally, we use a shadow intensity estimation algorithm to determine the shadow intensity of virtual and real objects, obtaining shadows consistent with the input 360° video.

Real-Time Omnidirectional Stereo Rendering: Generating 360° Surround-View Panoramic Images for Comfortable Immersive Viewing

Journal

Thomas Marrinan: University of St. Thomas; Michael E. Papka: Argonne National Laboratory

Omnidirectional stereo images (surround-view panoramas) can provide an immersive experience for pre-generated content, but typically suffer from binocular misalignment in the peripheral vision as a user's view direction approaches the north / south pole of the projection sphere. This paper presents a real-time geometry-based approach for omnidirectional stereo rendering that fits into the standard rendering pipeline. Our approach includes tunable parameters that enable pole merging -- a reduction in the stereo effect near the poles that can minimize binocular misalignment. Results from a user study indicate that pole merging reduces visual fatigue and discomfort associated with binocular misalignment without inhibiting depth perception.
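To make the pole-merging idea concrete, here is a minimal Python sketch of one possible falloff: the per-eye offset is scaled down linearly between two tunable latitude angles so that the stereo effect vanishes at the poles. The axis convention, falloff shape, and parameter names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pole_merged_eye_offset(view_dir, ipd, merge_start_deg=60.0, merge_end_deg=80.0):
    """Per-eye offset (half the interpupillary distance) reduced toward the poles.

    view_dir is a unit vector with the y-axis pointing toward the north pole;
    merge_start_deg / merge_end_deg are tunable latitudes where merging begins
    and where the stereo effect is fully removed (assumed parameters).
    """
    latitude = np.degrees(np.arcsin(np.clip(abs(view_dir[1]), 0.0, 1.0)))
    # linear falloff: 0 below merge_start_deg, 1 at or above merge_end_deg
    t = np.clip((latitude - merge_start_deg) / (merge_end_deg - merge_start_deg), 0.0, 1.0)
    return 0.5 * ipd * (1.0 - t)
```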

Session: Tracking, Vision and Sound

Thursday, April 1, 12:00, Lisbon WEST UTC+1

Session Chair: Bobby Bodenheimer

Take me to the event:

Virbela Location: Auditorium A (MAP)
Watch the recorded video stream: HERE
Discord Channel: Open in Browser, Open in App (Participants only)

Spatial Anchor Based Indoor Asset Tracking

Conference

Wennan He: The Australian National University; Mingze Xi: CSIRO; Henry Gardner: The Australian National University; Ben Swift: Australian National University; Matt Adcock: CSIRO

Indoor asset tracking is an essential task in many areas of industry, such as shipping and warehousing. Widely used indoor asset tracking technologies are costly and typically require supporting infrastructure to communicate with active tags on the assets. This paper presents SABIAT, an indoor asset tracking technique for augmented reality (AR) that continuously tracks the approximate location of an asset using spatial anchors and, when needed, the precise location of that asset using fiducial markers. We have applied our SABIAT technique to build a demonstrator system, AR-IPS, to show how assets can be tracked and located inside a large, multi-level building.
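The coarse-to-fine lookup described above can be pictured with a short, hypothetical sketch: spatial anchors provide an always-available approximate pose, and a fiducial-marker detection overrides it when precision is needed. The helper names (anchor_store, detect_marker) are placeholders, not part of the SABIAT system.

```python
from typing import Callable, Dict, Optional, Tuple

Pose = Tuple[float, float, float]  # simplified position-only pose

def locate_asset(asset_id: str,
                 anchor_store: Dict[str, Pose],
                 detect_marker: Callable[[str], Optional[Pose]]) -> Optional[Pose]:
    """Return the best available estimate of an asset's location.

    anchor_store holds the approximate pose associated with the nearest spatial
    anchor; detect_marker returns a precise fiducial-based pose when the marker
    is currently visible, otherwise None. Both are hypothetical placeholders.
    """
    precise = detect_marker(asset_id)      # precise pose, only when a marker is seen
    if precise is not None:
        return precise
    return anchor_store.get(asset_id)      # otherwise fall back to the coarse anchor pose
```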

Event-Based Near-Eye Gaze Tracking Beyond 10,000 Hz

Journal

Anastasios Angelopoulos: University of California, Berkeley; Julien N.P. Martel: Stanford University; Amit Pal Singh Kohli: University of California, Berkeley; Jorg Conradt: KTH Stockholm; Gordon Wetzstein: Stanford University

We developed a hybrid frame- and event-based near-eye gaze tracking system offering updates beyond 10,000 Hz with an accuracy of 0.45-1.75 degrees for fields of view from 45-95 degrees. We hope this new technology enables a new generation of efficient, ultra-low-latency gaze-contingent rendering and display techniques for VR/AR.

Learning Acoustic Scattering Fields for Dynamic Interactive Sound Propagation

Conference

Zhenyu Tang: University of Maryland; Hsien-Yu Meng: University of Maryland; Dinesh Manocha: University of Maryland

We present a novel hybrid sound propagation algorithm for interactive applications. Our approach is designed for dynamic scenes and uses a neural network-based learned scattered field representation along with ray tracing to generate specular, diffuse, diffraction, and occlusion effects efficiently. We use geometric deep learning to approximate the acoustic scattering field using spherical harmonics. We use a large 3D dataset for training, and compare its accuracy with the ground truth generated using an accurate wave-based solver. We demonstrate its interactive performance by generating plausible sound effects in dynamic scenes with diffraction and occlusion effects.
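As a rough illustration of a spherical-harmonic field representation (not the paper's learned model), the sketch below evaluates a direction-dependent quantity from a truncated set of complex spherical-harmonic coefficients; the coefficient packing and truncation order are assumptions.

```python
from scipy.special import sph_harm

def evaluate_sh_field(coeffs, azimuth, polar, order):
    """Evaluate a truncated spherical-harmonic expansion in one direction.

    coeffs: (order + 1)**2 complex coefficients packed as (n, m) with
            n = 0..order and m = -n..n (an assumed layout).
    azimuth in [0, 2*pi), polar in [0, pi].
    """
    value = 0.0 + 0.0j
    i = 0
    for n in range(order + 1):
        for m in range(-n, n + 1):
            value += coeffs[i] * sph_harm(m, n, azimuth, polar)
            i += 1
    return value.real  # a real-valued field has conjugate-symmetric coefficients
```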

SuperPlane: 3D Plane Detection and Description from a Single Image

Conference

Weicai Ye: Zhejiang University; Hai Li: Zhejiang University; Tianxiang Zhang: Beijing Institute of Spacecraft System Engineering; Xiaowei Zhou: Zhejiang University; Hujun Bao: Zhejiang University; Guofeng Zhang: Zhejiang University

We present SuperPlane, a novel end-to-end network for simultaneous plane detection and description, designed to tackle challenging conditions in matching problems. Through applications in image-based localization and augmented reality, SuperPlane demonstrates the strong power of plane matching in challenging scenarios.

EllSeg: An Ellipse Segmentation Framework for Robust Gaze Tracking

Journal

Rakshit Sunil Kothari: Rochester Institute of Technology; Aayush Kumar Chaudhary: Rochester Institute of Technology; Reynold Bailey: Rochester Institute of Technology; Jeff B Pelz: Rochester Institute of Technology; Gabriel Jacob Diaz: Rochester Institute of Technology

Ellipse fitting, an essential component in video oculography, is performed on previously segmented eye parts generated using computer vision techniques. Occlusions due to eyelid shape, camera position, or eyelashes break ellipse fitting algorithms that rely on edge segments. We propose training a convolutional neural network to segment entire elliptical structures and demonstrate that such a framework is robust to occlusions and offers superior pupil and iris tracking performance (at least a 10% and 24% increase in pupil and iris center detection rate, respectively, within a two-pixel error margin) compared to standard eye parts segmentation on multiple publicly available synthetic segmentation datasets.
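For context, the generic downstream step that consumes such a segmentation is a plain ellipse fit to the predicted region, e.g., with OpenCV; the sketch below shows only that conventional step, not the paper's EllSeg network, and the binary-mask input format is an assumption.

```python
import cv2
import numpy as np

def ellipse_from_mask(mask: np.ndarray):
    """Fit an ellipse to a binary mask of a full elliptical structure (e.g., pupil).

    Returns ((cx, cy), (axis1, axis2), angle_deg), or None when the mask has
    too few pixels for cv2.fitEllipse (which needs at least 5 points).
    """
    ys, xs = np.nonzero(mask > 0)                      # pixel coordinates of the region
    if len(xs) < 5:
        return None
    pts = np.stack([xs, ys], axis=1).astype(np.float32)
    return cv2.fitEllipse(pts)
```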

Session: Perception

Thursday, April 1, 14:00, Lisbon WEST UTC+1

Session Chair: J. Adam Jones

Take me to the event:

Virbela Location: Auditorium A (MAP)
Watch the recorded video stream: HERE
Discord Channel: Open in Browser, Open in App (Participants only)

Promoting Reality Awareness in Virtual Reality Through Proxemics

Conference

Daniel Medeiros: University of Glasgow; Rafael Kuffner dos Anjos: University College London; Nadia Pantidi: Victoria University of Wellington; Kun Huang: Victoria University of Wellington; Mauricio Sousa: University of Toronto; Craig Anslow: Victoria University of Wellington; Joaquim P Jorge: INESC-ID

Head-mounted virtual reality (VR) systems render people oblivious to their surroundings, placing them in potentially unsafe situations. Existing research proposed alert-based solutions to address this. Our work takes a different angle. We focus on: (i) exploring alerts that make VR users aware of non-immersed bystanders in non-critical contexts; (ii) understanding how best to make non-immersed bystanders noticed while maintaining the user's presence in VR. To this end, we leveraged proxemics, perception channels, and push/pull approaches, evaluated via two user studies. Our findings indicate a strong preference for maintaining immersion and for combining audio/visual cues with push and pull notification techniques that evolve dynamically based on proximity.

Egocentric Distance Judgments in Full-Cue Video-See-Through VR Conditions are No Better than Distance Judgments to Targets in a Void

Conference

Koorosh Vaziri: University of Minnesota; Maria Bondy: University of Minnesota; Amanda Bui: University of Minnesota; Victoria Interrante: University of Minnesota

We report the results of an experiment that provides new insight into the extent to which, and conditions under which, scene detail affects spatial perception accuracy in VR applications. Using a custom-built video-see-through HMD, participants judged distances in a real-world outdoor environment under three different conditions of detail reduction: raw camera view, Sobel-filtered camera view, and complete background subtraction, plus a control condition of unmediated real-world viewing. We found no significant difference in distance walked between the three VST conditions, despite significant differences in ratings of visual and experiential realism, suggesting a sole reliance on angular declination to the target, independent of context.

Revisiting Distance Perception with Scaled Embodied Cues in Social Virtual Reality

Conference

Zubin Choudhary: University of Central Florida; Matt Gottsacker: University of Central Florida; Kangsoo Kim: University of Central Florida; Ryan Schubert: University of Central Florida; Jeanine Stefanucci: University of Utah; Gerd Bruder: University of Central Florida; Greg Welch: University of Central Florida

In this paper we investigate how the perception of avatar distance is changed based on two means for scaling embodied social cues: visual head scale and verbal volume scale. We conducted a human-subject study employing a mixed factorial design with two Social VR avatar representations (full-body, head-only) as a between factor as well as three visual head scales and three verbal volume scales (up-scaled, accurate, down-scaled) as within factors. We found that visual head scale had a significant effect on distance judgments, while verbal volume scales did not. We discuss the interactions between the factors and implications for Social VR.

Temporal Availability of Ebbinghaus Illusions on Perceiving and Interacting with 3D Objects in a Contextual Virtual Environment

Conference

Russell Todd: University of Wyoming; Qin Zhu: University of Wyoming; Amy Banic: University of Wyoming

Contextual illusions, such as the Ebbinghaus illusion, can potentially be used to improve or hinder reach-to-grasp interaction in a virtual environment. It remains unknown how a sudden, or dynamic, change of surrounding features impacts the perception of, and subsequent action towards, the object. We conducted a series of experiments to evaluate the effects of a 3D Ebbinghaus illusion with dynamic surrounding features on the task of reaching to grasp a 3D object in an immersive virtual environment. An innovative 3D perceptual judgment task was implemented, and the kinematics of the reach-to-grasp task were recorded while we experimentally manipulated the visual gain and loss of the 3D contextual inducers, the participant's virtual hand, and the 3D contextual object.

The Effect of Feedback on Estimates of Reaching Ability in Virtual Reality

Conference

Holly C Gagnon: University of Utah; Taren Rohovit: University of Utah; Hunter Finney: University of Utah; Yu Zhao: Vanderbilt University; John Franchak: University of California, Riverside; Jeanine Stefanucci: University of Utah; Sarah Creem-Regehr: University of Utah; Bobby Bodenheimer: Vanderbilt University

We evaluated judgments of two action capabilities – reaching out and up – within a virtual environment (VE). We assessed whether feedback from reaching improved judgments and if recalibration from feedback differed across reaching behaviors. In feedback trials, participants viewed targets that were farther or closer than their actual reach, decided whether the target was reachable, and then reached out to the target to receive visual feedback. For both behaviors, reach was initially overestimated, and then adjustments decreased to become more accurate with feedback. This study establishes a straightforward methodology that can be used for calibration of actions in VEs.

Empirically Evaluating the Effects of Perceptual Information Channels on Size Perception of Tangibles in Near-field Virtual Reality

Conference

Alexandre Gomes de Siqueira: University of Florida; Rohith Venkatakrishnan: Clemson University; Roshan Venkatakrishnan: Clemson University; Ayush Bhargava: Key Lime Interactive; Kathryn Lucaites: Clemson University; Hannah Solini: Clemson University; Moloud Nasiri: Clemson University; Andrew Robb: Clemson University; Christopher Pagano: Clemson University; Brygg Ullmer: Clemson University; Sabarish V. Babu: Clemson University

The success of applications combining tangibles and VR often depends on how accurately size is perceived. Research has shown that visuo-haptic perceptual information is important in size perception. However, it is unclear how these sensory-perceptual channels are affected by immersive virtual environments that incorporate tangibles. We conducted a between-subjects study evaluating the accuracy of size perception across three experimental conditions (Vision-only, Haptics-only, Vision and Haptics). Overall, participants consistently over-estimated the size of dials regardless of the type of perceptual information. Our results also revealed an increased efficiency in reporting size over time most pronounced in the visuo-haptic condition.

Session: VR Applications

Thursday, April 1, 14:00, Lisbon WEST UTC+1

Session Chair: Bret Jackson

Take me to the event:

Virbela Location: Auditorium B (MAP)
Watch the recorded video stream: HERE
Discord Channel: Open in Browser, Open in App (Participants only)

Using Virtual Reality to Support Acting in Motion Capture with Differently Scaled Characters

Conference

Robin Kammerlander: Department of Speech, Music and Hearing; Andre Pereira: KTH Royal Institute of Technology; Simon Alexanderson: KTH Royal Institute of Technology

Motion capture is a well-established technology for capturing actors' performances within the entertainment industry. Instead of detailed sets, costumes, and props, actors play in empty spaces wearing tight suits. Often, their co-actors are imaginary, replaced by placeholder props, or even out of scale with their virtual counterparts. We propose using a combination of virtual reality and motion capture technology to bring differently proportioned characters into a shared collaborative virtual environment. The results show that our proposed platform enhances actors' feelings of body ownership and immersion, changing their performances and narrowing the gap between virtual performances and the final intended animations.

VR System for the Restoration of Broken Cultural Artifacts on the Example of a Funerary Monument

Conference

Patrick Saalfeld: Otto-von-Guericke University; Claudia Böttcher: Förderverein Dom zu Magdeburg e.V.; Fabian Klink: Department of Mechanical Engineering; Bernhard Preim: Otto-von-Guericke University

We present a VR system that supports the restoration of broken cultural artifacts. As a case study, we demonstrate this approach on a funerary monument. Among the challenges of this monument are its large number of fragments (415), missing pieces that prevent a full reconstruction, and fragments that vary strongly in size. Our system offers a configurable, self-arranging fragment wall, which supports the user in organizing fragments and identifying relevant ones. We implemented two sets of manipulation techniques for rough alignment and precise assembly. The iterative development was accompanied by a restorer who reconstructed the monument within 14 sessions and 21 hours.

Work Surface Arrangement Optimization Driven by Human Activity

Conference

Jingjing Liu: Beijing Institute of Technology; Wei Liang: Beijing Institute of Technology; Bing Ning: Beijing Institute of Fashion Technology; Ting Mao: Beijing Institute of Technology

In this paper, we aim to guide people in accomplishing a personalized task, work surface organization, in a mixed reality (MR) environment. Through the cameras mounted on an MR device, e.g., a HoloLens, we first capture a person's daily activities in the real scene as they use the work surface. From such activities, we model the individual's behavioral habits and apply them to optimize the arrangement of the work surface. A cost function is defined for the optimization, considering both general arrangement rules and habitual human behavior. The optimized arrangement is suggested to the user by augmenting the real scene with the virtual arrangement. To evaluate the effectiveness of our approach, we conducted experiments on a variety of scenes.

SPinPong - Virtual Reality Table Tennis Skill Acquisition using Visual, Haptic and Temporal Cues

Journal

Erwin Wu: Tokyo Institute of Technology; Mitski Piekenbrock: Tokyo Institute of Technology; Takuto Nakamura: Tokyo Institute of Technology; Hideki Koike: Tokyo Institute of Technology

Returning strong spin shots in table tennis requires judging the spin and deciding the racket pose within a second, which is very difficult. We therefore introduce an intuitive training system for acquiring this specific skill using different kinds of information in VR, such as visual cues, haptic devices, and time distortion. In a first study, we obtain insights about augmentation for training spin shots. The training system, based on a VR game, is then improved by adding new conditions using different cues. Finally, we performed a detailed experiment to study the effect of each condition both quantitatively and qualitatively, providing an entry point for VR table tennis training.

Text Entry in Virtual Environments using Speech and a Midair Keyboard

Journal

Jiban Krishna Adhikary: Michigan Technological University; Keith Vertanen: Michigan Technological University

We investigate text input in virtual reality using hand-tracking and speech. Our system visualizes users' hands, allowing typing on an auto-correcting midair keyboard. It supports speaking a sentence and correcting errors by selecting alternative words proposed by a speech recognizer. Using only the keyboard, participants wrote at 11 words-per-minute at a 1.2% error rate. Speaking and correcting was faster and more accurate at 28 words-per-minute and a 0.5% error rate. Participants achieved this despite half of sentences containing an out-of-vocabulary word. For sentences with only in-vocabulary words, performance was even faster at 36 words-per-minute.

Conference Sponsors

Diamond

Virbela Logo

Gold

Instituto Superior Técnico
Immersive Learning Research Network

Silver

Qualcomm Logo

Bronze

Vicon Logo
Hitlab logo
Microsoft Logo
Appen Logo
Facebook Reality Labs Logo
XR Bootcamp Logo

Supporter

GPCG Logo
Inesc-id Logo
NVIDIA Logo

Doctoral Consortium Sponsors

NSF Logo
Fakespace Logo

Conference Partner

CIO Applications Europe Website


Code of Conduct

© IEEEVR Conference