2018 IEEE VR

March 18th - 22nd

In Cooperation with the German Association for Electrical, Electronic and Information Technologies: VDE

IEEE Computer Society
IEEE

Exhibitors and Supporters

Diamond

National Science Foundation

Gold

VICON

Digital Projection

Gold Awards

NVIDIA

Silver

ART

Bronze

Haption

MiddleVR

VR-ON

VISCON

BARCO

Ultrahaptics

WorldViz

Disney Research

Microsoft

Non-Profit

Computer Network Information Center, Chinese Academy of Sciences

Sponsor for Research Demo

KUKA

Other Sponsors

Magic Leap

Some changes to the program below may still be necessary. Authors, please notify the program chairs if you have special constraints.


Session 1: Avatars and Virtual Humans

Tuesday, March 20th, 10:30 AM - 12:00 PM, Grosser Saal

Chair: Rick Skarbez

The effect of realistic appearance of virtual characters in immersive environments - does the character’s personality play a role?

Katja Zibrek, Elena Kokkinara, Rachel McDonnell

TVCG

Abstract: Virtual characters that appear almost photo-realistic have been shown to induce negative responses from viewers in traditional media, such as film and video games. This effect, described as the uncanny valley, is the reason why realism is often avoided when the aim is to create an appealing virtual character. In Virtual Reality, there have been few attempts to investigate this phenomenon and the implications of rendering virtual characters with high levels of realism on user enjoyment. In this paper, we conducted a large-scale experiment on over one thousand members of the public in order to gather information on how virtual characters are perceived in interactive virtual reality games. We were particularly interested in whether different render styles (realistic, cartoon, etc.) would directly influence appeal, or if a character’s personality was the most important indicator of appeal. We used a number of perceptual metrics such as subjective ratings, proximity, and attribution bias in order to test our hypothesis. Our main result shows that affinity towards virtual characters is a complex interaction between the character’s appearance and personality, and that realism is in fact a positive choice for virtual characters in virtual reality.

Investigating the Effects of Anthropomorphic Fidelity of Self-Avatars on Near Field Depth Perception in Immersive Virtual Environments

Elham Ebrahimi, Leah Hartman, Andrew Robb, Christopher Pagano, Sabarish V. Babu

Conference

Abstract: Immersive virtual environments (IVEs) are becoming more accessible and more widely utilized for training. While research has demonstrated that self-avatars can enhance one's sense of presence and improve distance perception, the effects of self-avatar fidelity on near-field distance estimations have yet to be investigated. This study tested the effect of three levels of avatar fidelity on the accuracy of distance estimations in the near field. Performance with a virtual avatar was also compared to real-world performance. The results suggest that reach estimations become more accurate as the visual fidelity of the avatar increases, with accuracy for high-fidelity avatars approaching real-world performance as compared to the low-fidelity and end-effector conditions.

Simulating Movement Interactions between Avatars & Agents in Virtual Worlds using Human Motion Constraints

Sahil Narang, Andrew Best, Dinesh Manocha

Conference

Abstract: We present an interactive algorithm to generate plausible full-body movements for human-like agents interacting with other agents or avatars in a virtual environment. Our approach takes into account high-dimensional human motion constraints and bio-mechanical constraints to compute collision-free trajectories for each agent. Compared to prior methods, our formulation reduces artefacts that arise in dense scenarios and close interactions, and results in smoother and more plausible locomotive behaviors. Our approach also allows the user to interact with the agents from a first-person perspective in immersive settings. We conduct extensive user evaluations which demonstrate the perceptual benefits of our algorithm.

Any “Body” There?: Avatar Visibility Effects in a Virtual Reality Game

Jean-Luc Lugrin, Philipp Krop, Maximilian Ertl, Bianka Weisz, Maximilian Rück, Sebastian Stierstorfer, Richard Klüpfel, Nina Schmidt, Johann Schmitt, Marc Erich Latoschik

Conference

Abstract: This article presents an experiment exploring the possible impact of an avatar's body-part visibility on player experience and performance using current Virtual Reality (VR) gaming platform capacities. In contrast to the expected outcome from non-game VR contexts, our results did not reveal significant differences with an avatar presenting an increasing number of visible body parts. This tends to confirm the strong performance aspect of action-based games, whereby control efficiency and enemy awareness are paramount and could overcome the perceptual, behavioural or emotional effects of avatar embodiment.

Empirical Evaluation of Virtual Human Conversational and Affective Animations on Visual Attention in Virtual Reality

Matias Volonte, Andrew Robb, Andrew Duchowski, Sabarish V. Babu

Conference

Abstract: Creating realistic animations of virtual humans remains comparatively complex and expensive. This research explores the degree to which animation fidelity affects users' gaze behavior during interactions with virtual reality training simulations that include a virtual human. Participants were divided into three conditions, wherein the virtual patient either: 1) was not animated; 2) played idle animations; or 3) played idle animations, looked at the participant when speaking, and moved his lips when speaking. Participants' gazes were recorded during the simulation to determine whether animation fidelity affected their visual attention.


Session 2: Augmented Reality

Tuesday, March 20th, 10:30 AM - 12:00 PM, Kleiner Saal A

Chair: Yuta Itoh

An Evaluation of Bimanual Gestures on the Microsoft HoloLens

Nikolas Chaconas, Tobias Höllerer

Conference

Abstract: We developed and evaluated two-handed gestures on the Microsoft HoloLens. We conducted a comparative user study with 48 users, recording multiple performance metrics as well as user preferences. Results indicate that in spite of problems due to field-of-view limitations, certain two-handed techniques perform comparably to the one-handed baseline technique in terms of accuracy and time. Furthermore, the best-performing two-handed technique outdid all other techniques in terms of overall user preference, demonstrating that bimanual gesture interactions can serve a valuable role in the UI toolbox on head-worn AR devices such as the HoloLens.

Interacting with Distant Objects in Augmented Reality

Matt Whitlock, Ethan Hanner, Jed R. Brubaker, Shaun Kane, Danielle Albers Szafir

Conference

Abstract: Augmented reality applications can leverage the full space of an environment to create immersive experiences. As objects move farther away, the efficacy and usability of different interaction modalities may change. We conducted an empirical study to measure trade-offs between three interaction modalities (multimodal voice, embodied freehand gestures, and handheld devices) at distances ranging from 8 to 16 feet. Despite comparable performance between embodied gestures and device-mediated interactions, participants perceived embodied gestures as significantly more efficient and usable than device-mediated interactions. Our findings offer considerations for designing efficient and intuitive interactions in room-scale AR applications.

Driver Behavior and Performance with Augmented Reality Pedestrian Collision Warning: An Outdoor User Study

Hyungil Kim, Joe Gabbard, Alexandre Miranda Anon, Teruhisa Misu

TVCG

Abstract: This article investigates the effects of visual warning presentation methods on human performance in augmented reality (AR) driving. An experimental user study was conducted in a parking lot where participants drove a test vehicle while braking for any cross traffic with assistance from AR visual warnings presented on a monoscopic and volumetric head-up display (HUD). Results showed that monoscopic displays can be as effective as volumetric displays for human performance in AR braking tasks. The experiment also demonstrated the benefits of conformal graphics, which are tightly integrated into the real world, such as their ability to guide drivers' attention and their positive consequences on driver behavior and performance. These findings suggest that conformal graphics presented via monoscopic HUDs can enhance driver performance by leveraging the effectiveness of monocular depth cues. The proposed approaches and methods can be used and further developed by future researchers and practitioners to better understand driver performance in AR as well as inform usability evaluation of future automotive AR applications.

Drone-Augmented Human Vision: Exocentric Control for Drones Exploring Hidden Areas

Okan Erat, Werner Alexander Isop, Denis Kalkofen, Dieter Schmalstieg

TVCG

Abstract: Drones allow exploring dangerous or impassable areas safely from a distant point of view. However, flight control from an egocentric view in narrow or constrained environments can be challenging. Arguably, an exocentric view would afford a better overview and, thus, more intuitive flight control of the drone. Unfortunately, such an exocentric view is unavailable when exploring indoor environments. This paper investigates the potential of drone-augmented human vision, i.e., of exploring the environment and controlling the drone indirectly from an exocentric viewpoint. If used with a see-through display, this approach can simulate X-ray vision to provide a natural view into an otherwise occluded environment. The user’s view is synthesized from a three-dimensional reconstruction of the indoor environment using image-based rendering. This user interface is designed to reduce the cognitive load of the drone’s flight control. The user can concentrate on the exploration of the inaccessible space, while flight control is largely delegated to the drone’s autopilot system. We assess our system with a first experiment showing how drone-augmented human vision supports spatial understanding and improves natural interaction with the drone.

Design and Assessment of a Collaborative 3D Interaction Technique for Handheld Augmented Reality

Jerônimo Gustavo Grandi, Henrique Galvan Debarba, Iago Berndt, Luciana Nedel, Anderson Maciel

Conference

Abstract: We present the design of a handheld-based interface for collaborative manipulation of 3D objects in mobile augmented reality. The interface creates a shared medium where several users can interact through their points of view and simultaneously manipulate 3D virtual augmentations. We evaluated our solution in two parts. First, we assessed the interface in single-user mode in three conditions: touch gestures, device movements, and hybrid. Then, we conducted a study to understand and classify the strategies that arise when partners are free to organize the task between themselves. Furthermore, we compared the effect of simultaneous manipulations with the individual approach.


Session 3: Body and Mind

Tuesday, March 20th, 10:30 AM - 12:00 PM, Kleiner Saal B

Chair: Tabitha Peck

Egocentric Mapping of Body Surface Constraints

Eray Molla, Henrique Galvan Debarba, Ronan Boulic

TVCG-Invited

Abstract: The relative location of human body parts often materializes the semantics of on-going actions, intentions and even emotions expressed, or performed, by a human being. However, traditional methods of performance animation fail to correctly and automatically map the semantics of performer postures involving self-body contacts onto characters with different sizes and proportions. Our method proposes an egocentric normalization of the body-part relative distances to preserve the consistency of self-contacts for a large variety of human-like target characters. Egocentric coordinates are character independent and encode the whole posture space, i.e., they ensure the continuity of the motion with and without self-contacts. We can transfer classes of complex postures involving multiple interacting limb segments by preserving their spatial order without depending on temporal coherence. The mapping process exploits a low-cost constraint relaxation technique relying on analytic inverse kinematics; thus, we can achieve online performance animation. We demonstrate our approach on a variety of characters and compare it with the state of the art in online retargeting with a user study. Overall, our method performs better than the state of the art, especially when the proportions of the animated character deviate from those of the performer.

Performance-Driven Dance Motion Control of a Virtual Partner Character

Christos Mousas

Conference

Abstract: Taking advantage of motion capture and display technologies, a method giving a user the ability to control the dance motions of a virtual partner in an immersive setup was developed and is presented in this paper. The method utilizes a dance motion dataset containing the motion of both dancers (leader and partner). A hidden Markov model (HMM) was used to learn the structure of the dance motions. The HMM allows the user to improvise dance motions at runtime. A few corrective steps were also implemented to ensure that the partner character's motions appear natural. A user study was conducted to assess the naturalness of the synthesized motion as well as the control that the user has over the partner character's synthesized motion.
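
The runtime control can be pictured as HMM filtering: infer the most likely joint dance state from the leader's observed move, then play the partner clip associated with that state. Below is a minimal sketch with made-up states, probabilities, and clip names, not the paper's trained model:

```python
import numpy as np

# Made-up HMM over paired dance segments: hidden states are joint
# (leader, partner) motion classes; observations are the leader's moves.
STATES = ["basic", "turn", "dip"]
A = np.array([[0.7, 0.2, 0.1],   # state transition probabilities
              [0.3, 0.5, 0.2],
              [0.4, 0.3, 0.3]])
B = np.array([[0.8, 0.1, 0.1],   # P(observed leader move | hidden state)
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
PARTNER_CLIP = {"basic": "partner_basic", "turn": "partner_turn",
                "dip": "partner_dip"}   # hypothetical motion clips

def step(belief, leader_obs):
    """One HMM filtering step: predict with A, weight by the leader's
    observed move, and return the partner clip of the likeliest state."""
    belief = (belief @ A) * B[:, leader_obs]
    belief /= belief.sum()
    return belief, PARTNER_CLIP[STATES[int(np.argmax(belief))]]

belief = np.array([1.0, 0.0, 0.0])         # dance starts in "basic"
belief, clip = step(belief, leader_obs=1)  # leader initiates a turn
print(clip)  # -> partner_turn
```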

Exercise Intensity-driven Level Design

Biao Xie, Yongqi Zhang, Haikun Huang, Elisa Ogawa, Tongjian You, Lap-Fai Yu

TVCG

Abstract: Games and experiences designed for virtual or augmented reality usually require the player to move physically to play. This poses a substantial challenge for level designers because the player's physical experience in a level needs to be considered; otherwise the level may turn out to be too exhausting or not challenging enough. This paper presents a novel approach to optimize level designs by considering the physical challenge imposed upon the player in completing a level of a motion-based game. A game level is represented as an assembly of chunks characterized by the exercise intensity levels they impose on players. We formulate game level synthesis as an optimization problem, where the chunks are assembled in a way that achieves an optimized level of intensity. To allow the synthesis of game levels of varying lengths, we solve the trans-dimensional optimization problem with a Reversible-jump Markov chain Monte Carlo technique. We demonstrate that our approach can be applied to generate game levels for different types of motion-based virtual reality games. A user evaluation validates the effectiveness of our approach in generating levels with the desired amount of physical challenge.
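
To make the chunk-assembly idea concrete, here is a toy trans-dimensional sampler with birth, death, and swap moves over assumed chunk intensities; the paper's reversible-jump formulation, cost terms, and proposal design are considerably richer:

```python
import math
import random

# Toy chunk library: each chunk carries an assumed exercise-intensity score
# (made-up values, not the paper's measured intensities).
CHUNKS = {"corridor": 1.0, "duck_section": 3.0, "punch_arena": 5.0}

def cost(level, target):
    """Distance between the level's total intensity and the target."""
    return abs(sum(CHUNKS[c] for c in level) - target)

def synthesize(target=20.0, iters=5000):
    level = [random.choice(list(CHUNKS))]
    for _ in range(iters):
        prop, move = list(level), random.random()
        if move < 0.4 and len(prop) > 1:            # "death": drop a chunk
            prop.pop(random.randrange(len(prop)))   # (changes dimension)
        elif move < 0.8:                            # "birth": insert a chunk
            prop.insert(random.randrange(len(prop) + 1),
                        random.choice(list(CHUNKS)))
        else:                                       # swap a chunk in place
            prop[random.randrange(len(prop))] = random.choice(list(CHUNKS))
        # Metropolis acceptance on the intensity cost; the Jacobian/proposal
        # ratios of a full reversible-jump sampler are omitted here.
        if random.random() < min(1.0, math.exp(cost(level, target) - cost(prop, target))):
            level = prop
    return level

print(synthesize())  # e.g. a chunk sequence whose intensities sum near 20
```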

Lucid Virtual Dreaming: Antecedents and Consequents of Virtual Lucidity during Virtual Threat

Jordan Quaglia, Andrew Holecek

Conference

Abstract: We report the first empirical findings on virtual lucidity (VL), a new construct similar to lucidity during dreaming, but regarding awareness that one is having a virtual experience. VL concerns the depth and breadth of this awareness, as well as the extent to which it affords regulatory monitoring and control. Results indicated that higher VL predicted lower fear, but not less enjoyment, during a virtual reality (VR) threat scenario of walking on, and being asked to step off, a wooden plank seemingly high above a city. We discuss VL's validity, its relation to presence, and how it may inform VR development and application.

“Do you feel in control?”: Towards Novel Approaches to Characterise, Manipulate and Measure the Sense of Agency in Virtual Environments

Camille Jeunet, Louis Albert, Ferran Argelaguet Sanz, Anatole Lécuyer

TVCG

Abstract: While the Sense of Agency (SoA) has so far been predominantly characterised in VR as a component of the Sense of Embodiment, other communities (e.g., in psychology or neurosciences) have investigated the SoA from a different perspective, proposing complementary theories. Yet, despite the acknowledged potential benefits of catching up with these theories, a gap remains. This paper first aims to contribute to filling this gap by introducing a theory according to which the SoA can be divided into two components, the feeling and the judgment of agency, and relies on three principles, namely the principles of priority, exclusivity and consistency. We argue that this theory could provide insights on the factors influencing the SoA in VR systems. Second, we propose novel approaches to manipulate the SoA in controlled VR experiments (based on these three principles) as well as to measure the SoA, and more specifically its two components (based on neurophysiological markers, using electroencephalography, EEG). We claim that these approaches would enable us to deepen our understanding of the SoA in VR contexts. Finally, we validate these approaches in an experiment. Our results (N=24) suggest that our approach was successful in manipulating the SoA, as the modulation of each of the three principles induced significant decreases of the SoA (measured using questionnaires). In addition, we recorded participants' EEG signals during the VR experiment, and neurophysiological markers of the SoA, potentially reflecting the feeling and judgment of agency specifically, were revealed. Our results also suggest that the users' profile, more precisely their locus of control, influences their levels of immersion and SoA.


Session 4: Active Haptics

Tuesday, March 20th, 2:00 PM - 3:15 PM, Kleiner Saal A

Chair: Miguel Otaduy

Force Rendering and Its Evaluation of a Friction-based Walking Sensation Display for a Seated User

Ginga Kato, Yoshihiro Kuroda, Kiyoshi Kiyokawa, Haruo Takemura

TVCG

Abstract: Most existing locomotion devices that represent the sensation of walking target a user who is actually performing a walking motion. Here, we attempted to represent the walking sensation, especially a kinesthetic sensation and advancing feeling (the sense of moving forward) while the user remains seated. To represent the walking sensation using a relatively simple device, we focused on the force rendering and its evaluation of the longitudinal friction force applied on the sole during walking. Based on the measurement of the friction force applied on the sole during actual walking, we developed a novel friction force display that can present the friction force without the influence of body weight. Using performance evaluation testing, we found that the proposed method can stably and rapidly display friction force. Also, we developed a virtual reality (VR) walk-through system that is able to present the friction force through the proposed device according to the avatar’s walking motion in a virtual world. By evaluating the realism, we found that the proposed device can represent a more realistic advancing feeling than vibration feedback.

The Effect of Haptic Prediction Accuracy on Presence

Dominik Gall, Marc Erich Latoschik

Conference

Abstract: This paper reports on the effect of visually-anchored prediction accuracy of haptic information on the perceived presence of virtual environments. The study revealed increased presence for high prediction accuracy and decreased presence for low prediction accuracy, while perceptual binding still occurred. The results indicate a significant correlation between prediction accuracy of haptic information and the perceived realness and presence of a virtual environment, which gives rise to a discussion of models for the dissociative symptom of derealisation.

Enhancing the stiffness perception of tangible objects in Mixed Reality using wearable haptics

Xavier de Tinguy, Claudio Pacchierotti, Maud Marchal, Anatole Lécuyer

Conference

Abstract: We study the combination of tangible objects and wearable haptics for improving the display of stiffness sensations in virtual and augmented reality environments. Since tangible objects enable users to feel the general shape of objects and wearable haptic devices can generate varying tactile sensations, we propose to combine these two approaches to augment the perceived stiffness of tangible objects by providing timely tactile stimuli at the fingers. We present five use cases and a user study, showing that such haptic stimulation can alter the perceived stiffness of real objects, even when the tactile stimuli are not delivered at the contact point.

Effect of Electrical Stimulation Haptic Feedback on Perceptions of Softness-Hardness and Stickiness while Touching a Virtual Object

Vibol Yem, Kevin Vu, Yuki Kon, Hiroyuki Kajimoto

Conference

Abstract: We developed a 3D virtual reality system combined with finger-motion capture and electrical stimulation devices for inducing force sensation in the fingertip. We conducted two experiments to evaluate our electrical stimulation method and analyzed the effects of electrical stimulation on perception. The first experiment confirmed that participants could distinguish between the directions of the illusory force sensation, reporting whether the stimulation flexed their index finger forward or extended it backward. The second experiment examined the effects of the electric current itself on the intensity of their perception of the softness, hardness and stickiness of a virtual object.


Session 5: Cybersickness

Tuesday, March 20th, 4:00 PM - 5:30 PM, Grosser Saal

Chair: Torsten Kuhlen

Towards a Machine-learning Approach for Sickness Prediction in 360 Stereoscopic Videos

Nitish Padmanaban, Timon Ruban, Vincent Sitzmann, Anthony Norcia, Gordon Wetzstein

TVCG

Abstract: Virtual reality systems are widely believed to be the next major computing platform. There are, however, some barriers to adoption that must be addressed, such as that of motion sickness – which can lead to undesirable symptoms including postural instability, headaches, and nausea. Motion sickness in virtual reality occurs as a result of moving visual stimuli that cause users to perceive self-motion while they remain stationary in the real world. There are several contributing factors to both this perception of motion and the subsequent onset of sickness, including field of view, motion velocity, and stimulus depth. We verify first that differences in vection due to relative stimulus depth remain correlated with sickness. Then, we build a dataset of stereoscopic 3D videos and their corresponding sickness ratings in order to quantify their nauseogenicity, which we make available for future use. Using this dataset, we train a machine learning algorithm on hand-crafted features (quantifying speed, direction, and depth as functions of time) from each video, learning the contributions of these various features to the sickness ratings. Our predictor generally outperforms a naive estimate, but is ultimately limited by the size of the dataset. However, our result is promising and opens the door to future work with more extensive datasets. This and further advances in this space have the potential to alleviate developer and end user concerns about motion sickness in the increasingly commonplace virtual world.
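
In spirit, the predictor maps per-video features to ratings. Below is a minimal sketch with invented feature values and plain ridge regression standing in for the learner (the abstract does not specify the exact model):

```python
import numpy as np

# Invented per-video features (e.g., mean speed, direction change rate,
# mean stimulus depth, aggregated over time) and sickness ratings; these
# stand in for the paper's dataset and hand-crafted features.
X = np.array([[1.2, 0.3, 4.0],
              [3.5, 1.1, 1.5],
              [0.8, 0.2, 6.0],
              [2.9, 0.9, 2.0]])       # rows: videos, columns: features
y = np.array([2.0, 7.5, 1.0, 6.0])   # subjective sickness ratings

# Ridge-regularized least squares as the simplest stand-in learner (the
# paper does not prescribe this particular model).
lam = 0.1
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def predict_sickness(features):
    return float(features @ w)

print(predict_sickness(np.array([2.0, 0.5, 3.0])))  # rating for a new video
```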

Spatial Updating and Simulator Sickness during Steering and Jumping in Immersive Virtual Environments

Tim Weissker, André Kunert, Bernd Froehlich, Alexander Kulik

Conference

Abstract: Many head-mounted display applications implement a range-restricted variant of teleportation called jumping, in which users only teleport to locations in the currently visible part of the scene. In this paper, we present a classification scheme for teleportation techniques and present the results of a user study that compared jumping to steering with respect to spatial updating and simulator sickness. Our results show that a majority of participants (75%) achieved similar spatial updating accuracies in both conditions. Furthermore, jumping induced significantly less simulator sickness, which altogether justifies it as an alternative to steering for the exploration of immersive virtual environments.

Visually-Induced Motion Sickness Reduction via Static and Dynamic Rest Frames

Zekun Cao, Regis Kopper, Jason Jerald

Conference

Abstract: We utilize rest frames to alleviate visually-induced motion sickness and report the results of two multi-day within-subjects studies with 44 subjects who used virtual travel to navigate the environment. Results show that a virtual environment with a static or dynamic rest frame allowed users to travel through more waypoints before stopping due to discomfort compared to a virtual environment without a rest frame. Further, a virtual environment with a static rest frame was also found to result in more real-time reported comfort than when there was no rest frame.

Cybersickness-Provoking Virtual Reality Alters Brain Signals of Persons with Multiple Sclerosis

Imtiaz Muhammad Arafat, Sharif Mohammad Shahnewaz Ferdous, John Quarles

Conference

Abstract: This study investigates and compares brain signals between persons with and without Multiple Sclerosis (MS) when exposed to cybersickness-provoking Virtual Reality (VR). Cybersickness is a set of discomforts commonly triggered by VR exposure. Although cybersickness has been studied for decades, populations with neurological disabilities, such as MS, have remained minimally studied. MS can disrupt communication between neurons (signal-carrying nerve cells) from different areas of the brain. Cybersickness also can affect brain signals; for example, frequency powers may change due to cybersickness. This study investigates the combination of MS and cybersickness in terms of brain signals.

Effects of Latency Jitter on Simulator Sickness in a Search Task

Jan-Philipp Stauffert, Florian Niebling, Marc Erich Latoschik

Conference

Abstract: Low latency is a fundamental requirement for Virtual Reality systems to reduce the potential risks of cybersickness. Contrary to uniform latency degradation, the influence of latency jitter on user experience is not well researched. We research the impact of latency jitter on cybersickness. Test subjects were given a search task in Virtual Reality; one group experienced artificially added latency jitter. The effects of the introduced latency jitter were measured based on self-reports using the simulator sickness questionnaire (SSQ) and by taking physiological measurements. We found a significant increase in self-reported simulator sickness.


Session 6: Locomotion & Walking

Tuesday, March 20th, 4:00 PM - 5:30 PM, Kleiner Saal A

Chair: Niels Nilsson

Locomotive Recalibration and Prism Adaptation of Children and Teens in Immersive Virtual Environments

Haley Alexander Adams, Gayathri Narasimham, John Rieser, Sarah Creem-Regehr, Jeanine Stefanucci, Bobby Bodenheimer

TVCG

Abstract: As virtual reality expands in popularity, an increasingly diverse audience is gaining exposure to immersive virtual environments (IVEs). A significant body of research has demonstrated how perception and action work in such environments, but most of this work has been done studying adults. Less is known about how physical and cognitive development affect perception and action in IVEs, particularly as applied to preteen and teenage children. Accordingly, in the current study we assess how preteens (children aged 8-12 years) and teenagers (children aged 15-18 years) respond to mismatches between their motor behavior and the visual information presented by an IVE. Over two experiments, we evaluate how these individuals recalibrate their actions across functionally distinct systems of movement. The first experiment analyzed forward walking recalibration after exposure to an IVE with either increased or decreased visual flow. Visual flow during normal bipedal locomotion was manipulated to be either twice or half as fast as the physical gait. The second experiment leveraged a prism throwing adaptation paradigm to test the effect of recalibration on throwing movement. In the first experiment, our results show no differences across age groups, although subjects generally experienced a post-exposure effect of shortened distance estimation after experiencing visually faster flow and longer distance estimation after experiencing visually slower flow. In the second experiment, subjects generally showed the typical prism adaptation behavior of a throwing after-effect error. The error lasted longer for preteens than for older children. Our results have implications for the design of virtual systems with children as a target audience.

Inducing Compensatory Changes in Gait similar to External Perturbations using an Immersive Head Mounted Display

Lara Riem, Jacob A Van Dehy, Tanya Onushko, Scott Beardsley

Conference

Abstract: We studied dynamic balance during ambulation through an immersive VR environment on a treadmill mounted to a motion base. Pseudorandom roll perturbations were applied visually to a virtual bridge with (VP trials) and without (V trials) a corresponding physical displacement of the motion base. Significant differences were observed between unperturbed and perturbed walking in the VR environment for average step length, width, and margin of stability for both V and VP trials. The results demonstrate that visual perturbations provided in an immersive virtual environment can induce compensatory changes in gait during treadmill walking that are consistent with a physical perturbation.

Collision avoidance behavior between walkers: global and local motion cues

Sean Dean Lynch, Richard Kulpa, Laurentius Antonius Meerhoff, Julien Pettré, Armel Cretual, Anne-Hélène Olivier

TVCG-Invited

Abstract: Daily activities require agents to interact with each other, such as during collision avoidance. The nature of the visual information that is used for a collision-free interaction requires further understanding. We aimed to manipulate the nature of visual information in two forms: global and local information appearances. Sixteen healthy participants navigated towards a target in an immersive computer-assisted virtual environment (CAVE) using a joystick. A moving passive obstacle crossed the participant's trajectory perpendicularly at various pre-defined risk-of-collision distances. The obstacle was presented with one of five virtual appearances, associated with global motion cues (i.e., a cylinder or a sphere) or local motion cues (i.e., only the legs or the trunk). A full-body virtual walker, showing both local and global motion cues, was used as a reference condition. The final crossing distance was affected by the global motion appearances; however, appearance had no qualitative effect on motion adaptations. These findings contribute towards further understanding what information people use when interacting with others.

Effect of virtual human gaze behaviour during an orthogonal collision avoidance walking task

Sean Lynch, Julien Pettré, Julien Bruneau, Richard Kulpa, Armel Cretual, Anne-Hélène Olivier

Conference

Abstract: We present a study, performed in virtual reality, on the effect of gaze during collision avoidance between two walkers. In such a situation, gaze is believed to convey future path intentions and to be part of the nonverbal negotiation to achieve avoidance collaboratively. Seventeen participants were immersed in a virtual environment and instructed to navigate across a virtual space and to avoid a virtual character that would appear from either side. The character either gazed towards the participant or not. We discuss the possible exploitation of the results to improve the design of virtual characters for populated virtual environments and user interaction.

You Shall Not Pass: Non-Intrusive Feedback for Virtual Walls in VR Environments with Room-Scale Mapping

Mette Boldt, Michael Bonfert, Inga Lehne, Melina Cahnbley, Kim Korsching, Ioannis Bikas, Stefan Finke, Martin Hanci, Valentin Kraft, Boxuan Liu, Tram Nguyen, Alina Panova, Ramneek Singh, Alexander Steenbergen, Rainer Malaka, Jan David Smeddinck

Conference

Abstract: Room-scale mapping facilitates natural locomotion in VR, but it creates a problem when encountering virtual walls. In traditional video games, player avatars can simply be prevented from moving through walls, which is not possible in VR. Instead, we propose a combination of auditory, visual, and vibrotactile feedback for wall collisions. This solution can be implemented with standard game engine features, does not require any additional hardware or sensors, and is independent of application concept and narrative. A between-group study with 46 participants showed that a large majority of players without the feedback did pass through virtual walls, while 87% of the participants with the feedback refrained from walking through walls.


Session 7: 3D hand Interaction and Physics

Tuesday, March 20th, 4:00 PM - 5:30 PM, Kleiner Saal B

Chair: Maud Marchal

Effects of Hand Representations for Typing in Virtual Reality

Jens Grubert, Lukas Witzani, Eyal Ofek, Michel Pahud, Matthias Kranz, Per Ola Kristensson

Conference

Abstract: Text entry is a challenge for Virtual Reality (VR) applications. VR enables new capabilities, impossible in the real world, such as an unobstructed view of the keyboard, without occlusion by the user’s physical hands. We study the effects of four hand representations (no hand, inverse kinematic model, fingertip visualization and video inlay) on typing in VR using a desktop keyboard with 24 participants. We found that the fingertip visualization and video inlay both resulted in statistically significant lower text entry error rates compared to no hand or inverse kinematic model representations. We found no statistical differences in text entry speed.

Text Entry in Immersive Head-Mounted Display-based Virtual Reality using Standard Keyboards

Jens Grubert, Lukas Witzani, Eyal Ofek, Michel Pahud, Matthias Kranz, Per Ola Kristensson

Conference

Abstract: We study the performance of desktop and touchscreen keyboards for use in VR applications. We analyze a total of 24 hours of typing data in VR from 24 participants and find that novice users are able to retain about 60% of their typing speed on a desktop keyboard and about 45% of their typing speed on a touchscreen keyboard. We find no significant learning effects, indicating that users can transfer typing skills quickly into VR. Further, repositioning the virtual hands and keyboard away from their physical locations does not adversely affect performance for desktop keyboard typing.

Effects of Image Size And Structural Complexity On Time And Precision of Hand Movements in Head Mounted Virtual Reality

Anil Ufuk Batmaz, Michel de Mathelin, Birgitta Dresp-Langley

Conference

Abstract: In this study, the time and precision of index fingertip movements under varying conditions of structural complexity and image size were investigated in virtual reality (VR). Subjects followed a complex and a simple structure of small, medium, and large size in VR with the index finger of either hand, from right to left and from left to right. It is concluded that both size and structural complexity critically influence task execution in VR when no tactile feedback from object to finger is generated. Individual learning curves should be monitored from the beginning of training, as suggested by the individual speed-precision analyses.

Efficient Physics-Based Implementation for Realistic Hand-Object Interaction in Virtual Reality

Markus Höll, Markus Oberweger, Clemens Arth, Vincent Lepetit

Conference

Abstract: We propose an efficient physics-based method for dexterous interaction between a real hand and virtual objects in Virtual Reality environments. Our method is based on the Coulomb friction model, and we show how to efficiently implement it in a commodity VR engine for real-time performance. This model enables very convincing simulations of many types of actions such as pushing, pulling, grasping, or even dexterous manipulations such as spinning objects between fingers, without restrictions on the objects' shapes or hand poses. Because it is an analytic model, we do not require any prerecorded data, in contrast to previous methods.
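
The heart of a Coulomb model is the friction cone: the tangential contact force may not exceed the friction coefficient times the normal force. Here is a minimal sketch of that projection, with an assumed coefficient; the paper's engine-level solver is necessarily more elaborate:

```python
import numpy as np

MU = 0.6  # assumed Coulomb friction coefficient (illustrative value)

def coulomb_contact_force(f, normal):
    """Project a tentative fingertip contact force into the Coulomb
    friction cone: the tangential component may not exceed MU times the
    normal component. Sticking forces pass through; sliding forces are
    clamped to the cone surface."""
    n = normal / np.linalg.norm(normal)
    f_n = max(np.dot(f, n), 0.0)        # normal part (no adhesion)
    f_t = f - f_n * n                   # tangential (friction) part
    t_mag = np.linalg.norm(f_t)
    if t_mag > MU * f_n and t_mag > 0.0:
        f_t *= MU * f_n / t_mag         # sliding: clamp to the cone
    return f_n * n + f_t

print(coulomb_contact_force(np.array([1.0, 0.2, 0.0]),
                            np.array([0.0, 1.0, 0.0])))
```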

Soft Hand Simulation for Smooth and Robust Natural Interaction

Mickeal Verschoor, Daniel Lobo, Miguel A. Otaduy

Conference

Abstract: Natural hand-based interaction should feature hand motion that adapts smoothly to the tracked user’s motion, reacts robustly to contact with objects in a virtual environment, and enables dexterous manipulation of these objects. In our work, we enable all these properties thanks to an efficient soft hand simulation model. This model integrates an articulated skeleton, nonlinear soft tissue and frictional contact, to provide the realism necessary for natural interaction. As a result, we accomplish hand simulation as an asset that can be connected to diverse input tracking devices, and seamlessly integrated in game engines for fast deployment in VR applications.


Session 8: Social VR

Wednesday, March 21st, 9:00 AM - 10:30 AM, Grosser Saal

Chair: Andrew Robb

Developing and Proving a Framework for Reaction Time Experiments in VR to Objectively Measure Social Interaction with Virtual Agents

Carolin Wienrich, Richard Gross, Felix Kretschmer, Gisela Müller-Plath

Conference

Abstract: Social Inhibition of Return (sIOR) is a small reaction time (RT) phenomenon (20 ms) hitherto assumed to be evoked by a real person doing an experimental task together with another person. In experiments in which one of the persons is replaced by a virtual agent displayed as a video on a 2D TV screen, an sIOR could not be found. We transferred this experimental setting into virtual reality using consumer VR hardware. We found that a) with our VR setup we were able to reliably measure such small RT effects in VR, and b) our virtual agent could indeed evoke an sIOR.

Social VR: How Personal Space is Affected by Virtual Agents’ Emotions

Andrea Bönsch, Sina Radke, Heiko Overath, Laura Marie Asch, Jonathan Wendt, Tom Vierjahn, Ute Habel, Torsten Wolfgang Kuhlen

Conference

Abstract: Personal space (PS) is a key element in social interactions affecting the interpersonal distance. Its dynamic size depends on numerous factors and its violation evokes discomfort.

We contribute to this research by investigating the influence of facial emotions of approaching virtual agents (VAs) on an individual's PS. In a CAVE, 27 German males (age range 18-30 years) were approached by either a single VA or a group of three VAs showing an angry or happy facial expression. PS preferences were influenced by both the emotion and by the number of VAs: larger distances were kept to angry VAs than to happy VAs; single VAs were allowed closer compared to groups of VAs. These findings provide more insight into PS in virtual environments.

Social Presence and Cooperation in Large-Scale Multi-User Virtual Reality: The Relevance of Social Interdependence for Location-Based Environments

Carolin Wienrich, Kristina Schindler, Nina Döllinger, Simon Nicolas Kock, Ole Traupe

Conference

Abstract: Location-based VR entertainment increasingly offers multi-user scenarios. Aiming at design principles promoting sustained customer motivation, we studied the impact of (positive) social interdependence (IDP) on various team-experiential parameters during a multi-user VR adventure. Effects were assessed via a control-group design. We found that social IDP increased social presence and cooperation among participants. Additionally, behavioral involvement (part of presence) and various other experiential aspects and affective evaluations during the adventure were positively influenced. Thus, we successfully addressed one goal of location-based VR hosts to scientifically establish design principles for social and collective adventures.

Beyond Replication: Augmenting Social Behaviors in Multi-User Virtual Realities

Daniel Roth, Constantin Kleinbeck, Tobias Feigl, Christopher Mutschler, Marc Erich Latoschik

Conference

Abstract: We present a novel approach for the augmentation of social behaviors in virtual reality (VR). We designed three visual transformations for behavioral phenomena of social interactions: eye contact, joint attention, and grouping. Participants were represented as simplified pillar avatars and explored a virtual museum either with or without the augmentations in groups of five, using a large-scale tracking environment. Our results indicate that our approach can significantly increase social presence and that the augmented experience appears more thought-provoking. Furthermore, the augmentations seem to affect the actual behavior of participants.

Influences on the Elicitation of Interpersonal Space with Virtual Humans

David M. Krum, Sin-Hwa Kang, Thai Phan

Conference

Abstract: We examined whether haptic priming (presenting an illusion of virtual human touch at the beginning of the virtual experience) and different locomotion techniques (either joystick or physical walking) might affect proxemic behavior in human users while interacting with virtual humans. The study results suggest that locomotion techniques can alter proxemic behavior in significant ways. Haptic priming did not appear to impact proxemic behavior, but did increase rapport and other subjective social measures. This suggests that designers and developers of immersive training systems should carefully consider the impact of even simple design and fidelity choices on trainee reactions in social interactions.


Session 9: Rendering

Wednesday, March 21st, 9:00 AM - 10:30 AM, Kleiner Saal A

Chair: Dirk Reiners

HySAR: Hybrid Material Rendering by an Optical See-Through Head-Mounted Display with Spatial Augmented Reality Projection

Takumi Hamasaki, Yuta Itoh, Yuichi Hiroi, Daisuke Iwai, Maki Sugimoto

TVCG

Abstract: Spatial augmented reality (SAR) pursues realism in rendering materials and objects. To advance this goal, we propose a hybrid SAR (HySAR) that combines a projector with optical see-through head-mounted displays (OST-HMD). In an ordinary SAR scenario with co-located viewers, the viewers perceive the same virtual material on physical surfaces. In general, the material consists of two components: a view-independent (VI) component such as diffuse reflection, and a view-dependent (VD) component such as specular reflection. The VI component is static over viewpoints, whereas the VD should change for each viewpoint even if a projector can simulate only one viewpoint at one time. In HySAR, a projector only renders the static VI components. In addition, the OST-HMD renders the dynamic VD components according to the viewer’s current viewpoint. Unlike conventional SAR, the HySAR concept theoretically allows an unlimited number of co-located viewers to see the correct material over different viewpoints. Furthermore, the combination enhances the total dynamic range, the maximum intensity, and the resolution of perceived materials. With proof-of-concept systems, we demonstrate HySAR both qualitatively and quantitatively with real objects. First, we demonstrate HySAR by rendering synthetic material properties on a real object from different viewpoints. Our quantitative evaluation shows that our system increases the dynamic range by 2.24 times and the maximum intensity by 2.12 times compared to an ordinary SAR system. Second, we replicate the material properties of a real object by SAR and HySAR, and show that HySAR outperforms SAR in rendering VD specular components.
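
The VI/VD split maps naturally onto a classic Phong-style decomposition: the projector carries the diffuse term, identical for all viewers, while each HMD adds its own specular term. Below is a sketch with illustrative vectors and constants, not the paper's calibrated rendering pipeline:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def split_shading(n, l, v, kd, ks, shininess):
    """Split a Phong material into the two HySAR channels: the projector
    shows the view-independent diffuse term (same for every viewer), and
    each OST-HMD adds the view-dependent specular term for its own view
    direction v. All vectors and constants here are illustrative."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    diffuse = kd * max(np.dot(n, l), 0.0)                 # projector channel
    r = 2.0 * np.dot(n, l) * n - l                        # mirror direction
    specular = ks * max(np.dot(r, v), 0.0) ** shininess   # HMD channel
    return diffuse, specular

proj, hmd = split_shading(n=np.array([0.0, 0.0, 1.0]),
                          l=np.array([0.0, 1.0, 1.0]),
                          v=np.array([1.0, 0.0, 1.0]),
                          kd=0.6, ks=0.4, shininess=32)
print(proj, hmd)  # shared projector intensity vs. per-viewer specular add-on
```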

Real-Time Re-textured Geometry Modeling Using Microsoft HoloLens

Samuel Dong, Tobias Höllerer

Conference

Abstract: We implemented live-textured geometry model creation with immediate coverage feedback visualizations in AR on the Microsoft HoloLens. A user walking and looking around a physical space can create a textured model of the space, ready for remote exploration and AR collaboration. Out of the box, a HoloLens builds a triangle mesh of the environment while scanning and being tracked in a new environment. The mesh contains vertices, triangles, and normals, but not color. We take the video stream from the color camera and use it to color a UV texture to be mapped to the mesh. Due to the limited graphics memory of the HoloLens, we use a fixed-size texture. Since the mesh generation dynamically changes in real time, we use an adaptive mapping scheme that evenly distributes every triangle of the dynamic mesh onto the fixed-size texture and adapts to new geometry without compromising existing color data. Occlusion is also considered. The user can walk around their environment and continuously fill in the texture while growing the mesh in real-time. We describe our texture generation algorithm and illustrate benefits and limitations of our system with example modeling sessions. Having first-person immediate AR feedback on the quality of modeled physical infrastructure, both in terms of mesh resolution and texture quality, helps the creation of high-quality colored meshes with this standalone wireless device and a fixed memory footprint in real-time.
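
The mapping scheme can be approximated as a uniform grid atlas: every triangle receives an equal, fixed-size slot, so newly scanned geometry never displaces existing texels' budget. A simplified sketch of the assignment follows (the adaptive re-layout and texel copying the paper describes are omitted):

```python
import math

def atlas_uvs(num_triangles):
    """Give every mesh triangle an equal slot in a uniform grid atlas over
    a fixed-size texture. Each triangle maps to a right triangle covering
    half its square cell; the unused half acts as padding against bleeding.
    Re-laying-out and copying existing texels when the grid grows is
    omitted in this sketch."""
    cols = math.ceil(math.sqrt(num_triangles))
    w = 1.0 / cols                      # cell size in UV space
    uvs = []
    for i in range(num_triangles):
        r, c = divmod(i, cols)
        u0, v0 = c * w, r * w
        uvs.append([(u0, v0), (u0 + w, v0), (u0, v0 + w)])
    return uvs

print(atlas_uvs(5)[4])  # UV slot assigned to the fifth triangle
```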

Profiling Distributed Virtual Environments by Tracing Causality

Sebastian J Friston, Elias J Griffith, David Swapp, Alan Marshall, Anthony Steed

Conference

Abstract: In this paper we explore a new technique to profile distributed virtual environments. Profiling is a key part of the optimisation process, but techniques based on static analysis or passing meta-data have difficulty following causality in concurrent and distributed systems. Our technique is based on taking hashes of the system state in order to abstract away platform-specific details, facilitating causality tracing across process, machine and even semantic boundaries. Across three case studies, we demonstrate the efficacy of this approach, and how it supports a variety of metrics for comprehensively benchmarking distributed virtual environments.
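
The key idea is that a hash of the system state is a platform-independent identifier: if the same hash appears in two logs, the corresponding events touched the same logical state. A minimal sketch with hypothetical trace records:

```python
import hashlib

def state_hash(state_bytes):
    """Hash a snapshot of (part of) the system state. The hash abstracts
    away platform-specific details: equal hashes in logs from different
    processes or machines identify the same logical state."""
    return hashlib.sha1(state_bytes).hexdigest()[:12]

# Hypothetical trace records from two hosts; matching hashes let a profiler
# stitch a causal chain across process and machine boundaries.
log_sim    = ("t=1.00", "simulate", state_hash(b"world-state #42"))
log_render = ("t=1.03", "render",   state_hash(b"world-state #42"))
if log_sim[2] == log_render[2]:
    print("render at t=1.03 consumed the state produced by simulate at t=1.00")
```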

Software-based Visual Aberration Correction for HMDs

Feng Xu, Dayang Li

Conference

Abstract: In this paper, we propose a real-time image pre-correction technique to correct the aberrations purely by software. Users can take off their own glasses and enjoy the virtual reality (VR) experience through an ordinary HMD freely and comfortably. Furthermore, as our technique is not related to hardware, it is compatible with all the current commercial HMDs. Our technique is based on the observation that refractive errors mainly cause the ideal retinal image to be convolved with certain kernels. We therefore pre-correct the image on the display to maximize the similarity between the convolved retinal image and the ideal image.
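
One standard way to realize such pre-correction is regularized inverse filtering in the frequency domain. The sketch below uses Wiener-style deconvolution on a toy image and blur kernel as an illustration; the authors' actual solver and kernel estimation may differ:

```python
import numpy as np

def precorrect(image, psf, nsr=0.01):
    """Wiener-style pre-correction: compute a display image whose
    convolution with the eye's estimated point spread function (psf)
    approximates the ideal image. nsr is an assumed noise-to-signal
    regularizer; handling the display's limited dynamic range (the hard
    part in practice) is reduced to a simple clamp here."""
    K = np.fft.fft2(psf, s=image.shape)
    I = np.fft.fft2(image)
    D = I * np.conj(K) / (np.abs(K) ** 2 + nsr)  # regularized inverse filter
    return np.clip(np.real(np.fft.ifft2(D)), 0.0, 1.0)

ideal = np.zeros((64, 64)); ideal[24:40, 24:40] = 1.0  # toy target image
blur = np.ones((5, 5)) / 25.0                          # toy defocus kernel
display_img = precorrect(ideal, blur)
print(display_img.shape)
```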

BrightView: Increasing Perceived Brightness of Optical See-Through Head-Mounted Displays through Unnoticeable Incident Light Reduction

Shohei Mori, Sei Ikeda, Alexander Plopski, Christian Sandor

Conference

Abstract: Optical See-Through Head-Mounted Displays (OST-HMDs) lose the visibility of virtual contents under bright environment illumination due to their see-through nature. We demonstrate how liquid crystal (LC) filters attached to an OST-HMD work for increasing the perceived brightness of virtual contents without impacting the brightness of the real scene. In our psychophysical experiments in three scenes, participants were asked to compare the magnitude of brightness changes of both real and virtual objects, before and after dimming the LC filter. The results show that the participants felt increases in the brightness of virtual objects while they were less conscious of reductions of the real scene luminance.


Session 10: Multimodality: Sound, Olfactory, and Gustatory Displays

Wednesday, March 21st, 9:00 AM - 10:30 AM, Kleiner Saal B

Chair: Evan Suma

Midair Ultrasound Fragrance Rendering

Keisuke Hasegawa, Liwei Qiu, Hiroyuki Shinoda

TVCG

Abstract: We propose a system that controls the spatial distribution of odors in an environment by generating electronically steerable ultrasound-driven narrow air flows. The proposed system is designed not only to remotely present a preset fragrance to a user, but also to provide applications that would be conventionally inconceivable, such as: 1) fetching the odor of a generic object placed at a location remote from the user and guiding it to his or her nostrils, or 2) nullifying the odor of an object near a user by carrying it away before it reaches his or her nostrils (Fig. 1). These are all accomplished with an ultrasound-driven air stream serving as an airborne carrier of fragrant substances. The flow originates from a point in midair located away from the ultrasound source and travels while accelerating and maintaining its narrow cross-sectional area. These properties differentiate the flow from conventional jet- or fan-driven flows and contribute to achieving a midair flow. In our system, we employed a phased array of ultrasound transducers so that the traveling direction of the flow could be electronically and instantaneously controlled. In this paper, we describe the physical principle of odor control, the system construction, and experiments conducted to evaluate remote fragrance presentation and fragrance tracking.
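
Electronic steering of a phased array comes down to choosing per-transducer phase delays so that all emissions arrive at the focal point in phase. Here is a sketch under textbook assumptions (illustrative array layout, 40 kHz transducers); the flow-generation details in the paper go well beyond this:

```python
import numpy as np

C = 346.0  # approximate speed of sound in air at room temperature, m/s
F = 40e3   # assumed 40 kHz airborne ultrasound transducers

def focus_phases(transducer_pos, focal_point):
    """Phase delay per transducer so that all emissions arrive at the focal
    point in phase; re-solving for a new focal point steers the beam (and
    hence the air flow) electronically."""
    d = np.linalg.norm(transducer_pos - focal_point, axis=1)  # path lengths
    return (2.0 * np.pi * F / C) * d % (2.0 * np.pi)

# Illustrative 10 x 10 planar array, 16 cm on a side, in the z = 0 plane.
xs, ys = np.meshgrid(np.linspace(-0.08, 0.08, 10), np.linspace(-0.08, 0.08, 10))
array_pos = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
phases = focus_phases(array_pos, np.array([0.0, 0.0, 0.3]))  # focus 30 cm away
print(phases.shape)  # one phase per transducer
```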

New Thermal Taste Actuation Technology for Future Multisensory Virtual Reality and Internet

Kasun Karunanayaka, Nurafiqah Johari, Kevin Stanley Bielawski, Surina Hariri, Hanis Camelia, Adrian David Cheok

TVCG

Abstract: Today’s virtual reality (VR) applications such as gaming, multisensory entertainment, remote dining, and online shopping are mainly based on audio, visual, and haptic interactions between the human and the virtual world. Integrating the sense of taste into VR is difficult since we are dependent on chemical-based taste delivery systems. This paper presents the “Thermal Taste Machine”, a new digital taste actuation technology that can effectively produce and modify thermal taste sensations on the tongue. It modifies the temperature of the surface of the tongue within a short period of time (from 25°C to 40°C while heating and from 25°C to 10°C while cooling). We tested this device on human subjects and described the experience of thermal taste using 20 known (taste and non-taste) sensations. Our results suggested that rapidly heating the tongue produces sweetness, fatty/oiliness, electric taste, and warmness, and reduces sensitivity to metallic taste. Similarly, participants reported that cooling the tongue produces a mint taste, pleasantness, and coldness. By conducting another user study on the perceived sweetness of sucrose solutions after thermal stimulation, we found that heating the tongue significantly enhances the intensity of sweetness for both thermal tasters and non-thermal tasters. We also found that faster temperature rises on the tongue produce more intense sweet sensations for thermal tasters. This technology will be useful in two ways: first, it can produce taste sensations without using chemicals for individuals who are sensitive to thermal taste; second, the temperature rise of the device can be used as a way to enhance the intensity of sweetness. We believe that this technology can be used to digitally produce and enhance taste sensations in future virtual reality applications. The key novelties of this paper are as follows: 1) development of a thermal taste actuation technology for stimulating the human taste receptors; 2) characterization of the thermal taste produced by the device using taste-related and non-taste-related sensations; 3) research on enhancing the intensity of sweetness for sucrose solutions using thermal stimulation; and 4) research on how different speeds of heating affect the intensity of sweetness produced by thermal stimulation.

Diffraction Kernels for Interactive Sound Propagation in Dynamic Environments

Atul Rungta, Carl Schissler, Nicholas Rewkowski, Ravish Mehra, Dinesh Manocha

TVCG

Abstract: We present a novel method to generate plausible diffraction effects for interactive sound propagation in dynamic scenes. Our approach precomputes a diffraction kernel for each dynamic object in the scene and combines them with interactive ray tracing algorithms at runtime. A diffraction kernel encapsulates the sound interaction behavior of individual objects in the free field and we present a new source placement algorithm to significantly accelerate the precomputation. Our overall propagation algorithm can handle highly-tessellated or smooth objects undergoing rigid motion. We have evaluated our algorithm’s performance on different scenarios with multiple moving objects and demonstrate the benefits over prior interactive geometric sound propagation methods.
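
At runtime, a precomputed kernel acts roughly like a band- and angle-dependent gain applied to diffracted ray paths. The values below are invented for illustration; the paper's kernels are precomputed per object in the free field and are richer than this simple lookup:

```python
import numpy as np

# Invented stand-in for a precomputed diffraction kernel of one object:
# attenuation per octave band as a function of diffraction angle (radians).
ANGLES = np.linspace(0.0, np.pi / 2, 8)
KERNEL = {125:  np.linspace(1.0, 0.7, 8),   # low frequencies bend easily
          1000: np.linspace(1.0, 0.4, 8),
          8000: np.linspace(1.0, 0.1, 8)}   # high frequencies shadow hard

def diffracted_gain(band_hz, angle):
    """Runtime lookup: attenuate a ray grazing the object by the kernel
    entry for its frequency band and diffraction angle."""
    return float(np.interp(angle, ANGLES, KERNEL[band_hz]))

print(diffracted_gain(1000, 0.5))  # gain at 1 kHz when bending 0.5 rad
```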

Spatially Perturbed Collision Sounds Attenuate Perceived Causality in 3D Launching Events

Duotun Wang, James Kubricht, Yixin Zhu, Wei Liang, Song-Chun Zhu, Chenfanfu Jiang, Hongjing Lu

Conference

Abstract: When a moving object causes another to move, a launching event is perceived. However, when the stationary object’s motion is delayed—or is accompanied by a collision sound—launching impressions change. Previous research has exclusively utilized 2D displays to examine launching impressions, yet it remains unclear whether people are equally sensitive to spatiotemporal collision properties in the real world. Our study examines whether previous findings in audiovisual causal perception extend to 3D virtual environments by perturbing the position of collision sounds. Results demonstrate that people can localize sound position from auditory input and launching impressions attenuate with spatial perturbation.

Effects of Unaugmented Periphery and Vibrotactile Feedback on Proxemics with Virtual Humans in AR

Myungho Lee, Gerd Bruder, Tobias Höllerer, Greg Welch

TVCG

Abstract: In this paper, we investigate factors and issues related to human locomotion behavior and proxemics in the presence of a real or virtual human in augmented reality (AR). First, we discuss a unique issue with current-state optical see-through head-mounted displays, namely the mismatch between a small augmented visual field and a large unaugmented periphery, and its potential impact on locomotion behavior in close proximity of virtual content. We discuss a potential simple solution based on restricting the field of view to the central region, and we present the results of a controlled human-subject study. The study results show objective benefits for this approach in producing behaviors that more closely match those that occur when seeing a real human, but also some drawbacks in overall acceptance of the restricted field of view. Second, we discuss the limited multimodal feedback provided by virtual humans in AR, present a potential improvement based on vibrotactile feedback induced via the floor to compensate for the limited augmented visual field, and report results showing that benefits of such vibrations are less visible in objective locomotion behavior than in subjective estimates of co-presence. Third, we investigate and document significant differences in the effects that real and virtual humans have on locomotion behavior in AR with respect to clearance distances, walking speed, and head motions. We discuss potential explanations for these effects related to social expectations, and analyze effects of different types of behaviors including idle standing, jumping, and walking that such real or virtual humans may exhibit in the presence of an observer.


Session 11: Immersion & Embodiment

Wednesday, March 21st, 11:00 AM - 12:30 PM, Grosser Saal

Chair: Gerd Bruder

The Impact of Avatar Personalization and Immersion on Virtual Body Ownership, Presence, and Emotional Response

Thomas Waltemate, Dominik Gall, Daniel Roth, Mario Botsch, Marc Erich Latoschik

TVCG

Abstract: This article reports the impact of the degree of personalization and individualization of users’ avatars as well as the impact of the degree of immersion on typical psychophysical factors in embodied Virtual Environments. We investigated if and how virtual body ownership (including agency), presence, and emotional response are influenced depending on the specific look of users’ avatars, which varied between (1) a generic hand-modeled version, (2) a generic scanned version, and (3) an individualized scanned version. The latter two were created using a state-of-the-art photogrammetry method providing a fast 3D-scan and post-process workflow. Users encountered their avatars in a virtual mirror metaphor using two VR setups that provided a varying degree of immersion, (a) a large screen surround projection (L-shape part of a CAVE) and (b) a head-mounted display (HMD). We found several significant as well as a number of notable effects. First, personalized avatars significantly increase body ownership, presence, and dominance compared to their generic counterparts, even if the latter were generated by the same photogrammetry process and hence could be valued as equal in terms of the degree of realism and graphical quality. Second, the degree of immersion significantly increases the body ownership, agency, as well as the feeling of presence. These results substantiate the value of personalized avatars resembling users’ real-world appearances as well as the value of the deployed scanning process to generate avatars for VR-setups where the effect strength might be substantial, e.g., in social Virtual Reality (VR) or in medical VR-based therapies relying on embodied interfaces. Additionally, our results also strengthen the value of fully immersive setups which, today, are accessible for a variety of applications due to the widely available consumer HMDs.

In Limbo: The Effect of Gradual Visual Transition between Real and Virtual on Virtual Body Ownership Illusion and Presence

Sungchul Jung, Pamela J. Wisniewski, Charles E Hughes

Conference

Abstract: We present a study of the relative effects of gradual versus instantaneous transitions between one's own body and a virtual surrogate body, and between one's real-world environment and a virtual environment. The approach uses a stereo camera attached to an HMD to provide the illusions of virtual body ownership and spatial presence in VR. We conducted the study in a static environment similar to the traditional rubber-hand experiment platform.

The Effect of Gender Body-Swap Illusions on Working Memory and Stereotype Threat

Tabitha C. Peck, My Doan, Kimberly Anne Bourne, Jessica J Good

TVCG

Abstract: The underrepresentation of women in technical and STEM fields is a well-known problem, and stereotype threatening situations have been linked to the inability to recruit and retain women into these fields. Virtual reality enables the unique ability to perform body-swap illusions, and research has shown that these illusions can change participant behavior. Characteristically people take on the traits of the avatar they are embodying. We hypothesized that female participants embodying male avatars when a stereotype threat was made salient would demonstrate stereotype lift. We tested our hypothesis through a between-participants user study in an immersive virtual environment by measuring working memory. Our results support that stereotype threat can be induced in an immersive virtual environment, and that stereotype lift is possible with fully-immersive body-swap illusions. Additionally, our results suggest that participants in a gender-swapped avatar without an induced stereotype threat have significantly impaired working memory; however, this impairment is lifted when a threat is made salient. We discuss possible theories as to why a body-swap illusion from a female participant into a male avatar would only increase working memory impairment when not under threat, as well as applications and future research directions. Our results offer additional insight into understanding the cognitive effects of body-swap illusions, and provide evidence that virtual reality may be an applicable tool for decreasing the gender gap in technology.

Studying the Sense of Embodiment in VR Shared Experiences

Rebecca Fribourg, Ferran Argelaguet Sanz, Ludovic Hoyet, Anatole Lécuyer

Conference

Abstract: We present an experiment where users were embodied in an anthropomorphic virtual avatar, while performing a virtual version of the well-known whac-a-mole game, and evaluate if the presence of another user influences the sense of embodiment in virtual reality. Our results show that users were faster, and accordingly more engaged, in performing the task when sharing the virtual environment, in particular for the more competitive tasks. Also, users experienced comparable levels of embodiment both when immersed alone or with another user.

NotifiVR: Exploring Interruptions and Notifications in Virtual Reality

Sarthak Ghosh, Lauren Winston, Nishant Panchal, Philippe Kimura-Thollander, Jeff Hotnog, Douglas Cheong, Gabriel Reyes, Gregory D Abowd

TVCG

Abstract: The proliferation of high resolution and affordable virtual reality (VR) headsets is quickly making room-scale VR experiences available in our homes. Most VR experiences strive to achieve complete immersion by creating a disconnect from the real world. However, due to the lack of a standardized notification management system and minimal context awareness in VR, an immersed user may face certain situations such as missing an important phone call (digital scenario), tripping over wandering pets (physical scenario), or losing track of time (temporal scenario). In this paper, we present the results of 1) a survey across 61 VR users to understand common interruptions and scenarios that would benefit from some form of notifications; 2) a design exercise with VR professionals to explore possible notification methods; and 3) an empirical study on the noticeability and perception of 5 different VR interruption scenarios across 6 modality combinations (e.g., audio, visual, haptic, audio + haptic, visual + haptic, and audio + visual) implemented in Unity and presented using the HTC Vive headset. Finally, we combine key learnings from each of these steps along with participant feedback to present a set of observations and recommendations for notification design in VR.


Session 12: Training

Wednesday, March 21st, 11:00 AM - 12:30 PM, Kleiner Saal A

Chair: Benjamin Lok

[TVCG invited] The Implementation and Validation of a Virtual Environment for Training Powered Wheelchair Manoeuvres

Nigel W. John, Serban R. Pop, Thomas W. Day, Panagiotis D. Ritsos, Christopher J. Headleand

TVCG-Invited

Abstract: Navigating a powered wheelchair and avoiding collisions is often a daunting task for new wheelchair users. It takes time and practice to gain the coordination needed to become a competent driver, and this can be even more of a challenge for someone with a disability. We present a cost-effective virtual reality (VR) application that takes advantage of consumer-level VR hardware. The system can be easily deployed in an assessment centre or for home use, and does not depend on a specialized high-end virtual environment such as a Powerwall or CAVE. This paper reviews previous work that has used virtual environment technology for training tasks, particularly wheelchair simulation. We then describe the implementation of our own system and the first validation study, carried out with thirty-three able-bodied volunteers. The study results indicate that, at a significance level of 5%, there is an improvement in driving skills from the use of our VR system. We thus have the potential to develop the competency of a wheelchair user whilst avoiding the risks inherent to training in the real world. However, the occurrence of cybersickness is a particular problem in this application that will need to be addressed.

A Comparison of Virtual and Physical Training Transfer of Bimanual Assembly Tasks

Mara Murcia-López, Anthony Steed

TVCG

Abstract: As we explore the use of consumer virtual reality technology for training applications, there is a need to evaluate its validity compared to more traditional training formats. In this paper, we present a study that compares the effectiveness of virtual training and physical training for teaching a bimanual assembly task. In a between-subjects experiment, 60 participants were trained to solve three 3D burr puzzles in one of six conditions comprised of virtual and physical training elements. In the four physical conditions, training was delivered via paper- and video-based instructions, with or without the physical puzzles to practice with. In the two virtual conditions, participants learnt to assemble the puzzles in an interactive virtual environment, with or without 3D animations showing the assembly process. After training, we conducted immediate tests in which participants were asked to solve a physical version of the puzzles. We measured performance through success rates and assembly completion times during testing. We also measured training times as well as subjective ratings on several aspects of the experience. Our results show that the performance of virtually trained participants was promising. A statistically significant difference was not found between virtual training with animated instructions and the best-performing physical condition (in which physical blocks were available during training) for the last and most complex puzzle in terms of success rates and testing times. Performance in retention tests two weeks after training was generally not as good as expected for all experimental conditions. We discuss the implications of the results and highlight the validity of virtual reality systems in training.

WoaH: A Virtual Reality Work-at-height Simulator

Cédric Di Loreto, Jean-Rémy Chardonnet, Julien Ryard, Alain Rousseau

Conference

Abstract: We present WoaH, a virtual reality work-at-height simulator aimed at easily detecting susceptibility to vertigo among future workers, and at training for a typical work-at-height engineering operation. The simulator includes a real ladder synchronized in position with a virtual one placed high above the ground in a virtual environment, a dynamic platform, and an HMD. We conducted a first user study evaluating our simulator in terms of cybersickness, perceived realism, and anxiety, and testing whether vibratory cues could enhance the level of anxiety felt. Results indicate that WoaH generates anxiety as expected and is perceived as realistic. Adding vibrations had a significant impact on perceived realism but not on electro-dermal activity.

Towards Joint Attention Training for Children with ASD - A VR Game Approach and Eye Gaze Exploration

Chao Mei, Bushra Tasnim Zahed, Lee Mason, John Quarles

Conference

Abstract: A deficit in joint attention is an early predictor of Autism Spectrum Disorder (ASD) in children. Training of joint attention has been a significant topic in ASD intervention. We propose a novel training approach using a Customizable Virtual Human (CVH) and a Virtual Reality (VR) game. We developed a CVH in an educational game - Imagination Drums - and conducted a user study on adolescents with high-functioning ASD. The study results showed that the CVH led participants to gaze less at areas irrelevant to the game's storyline (i.e., the background), but, surprisingly, also provided evidence that participants react more slowly to the CVH's joint attention bids compared with Non-Customizable Virtual Humans.

Synthesizing Personalized Training Programs for Improving Driving Habits via Virtual Reality

Yining Lang, Wei Liang, Fang Xu, Yibiao Zhao, Lap-Fai Yu

Conference

Abstract: Our work introduces an approach for synthesizing personalized training programs with traffic events to improve driving habits via Virtual Reality. To achieve this goal, our approach identifies a user's improper driving habits as they drive through a virtual city using a Logitech driving controller and a FOVE eye-tracking Virtual Reality headset. It then synthesizes a pertinent training program to help improve the user's driving skills based on the discovered improper habits. The experiments demonstrate that our approach is effective at improving improper driving habits.


Session 13: Redirected Walking

Wednesday, March 21st, 2:00 PM - 3:15 PM, Kleiner Saal A

Chair: Mary Whitton

You Spin my Head Right Round: Threshold of Limited Immersion for Rotation Gains in Redirected Walking

Patric Schmitz, Julian Romeo Hildebrandt, André Calero Valdez, Leif Kobbelt, Martina Ziefle

TVCG

Abstract: In virtual environments, the space that can be explored by real walking is limited by the size of the tracked area. To enable unimpeded walking through large virtual spaces in small real-world surroundings, redirection techniques are used. These unnoticeably manipulate the user's virtual walking trajectory. It is important to know how strongly such techniques can be applied without the user noticing the manipulation or getting cybersick. Previously, this was estimated by measuring a detection threshold (DT) in highly controlled psychophysical studies, which experimentally isolate the effect but do not aim for perceived immersion in the context of VR applications. While these studies suggest that only relatively low degrees of manipulation are tolerable, we claim that, besides establishing detection thresholds, it is important to know when the user's immersion breaks. We hypothesize that the degree of unnoticed manipulation is significantly different from the detection threshold when the user is immersed in a task. We conducted three studies: a) to devise an experimental paradigm to measure the threshold of limited immersion (TLI), b) to measure the TLI for slowly decreasing and increasing rotation gains, and c) to establish a baseline of cybersickness for our experimental setup. For rotation gains greater than 1.0, we found that immersion breaks quite late after the gain is detectable. However, for gains less than 1.0, some users reported a break of immersion even before established detection thresholds were reached. Apparently, the developed metric measures an additional quality of user experience. This article contributes to the development of effective spatial compression methods by utilizing the break of immersion as a benchmark for redirection techniques.
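
For readers unfamiliar with rotation gains, the sketch below shows the standard formulation: a gain scales the head-tracked rotation before it is applied to the virtual camera. This is a minimal illustration of the general technique under assumed names and values, not the authors' implementation.

```python
# Minimal sketch of a rotation gain in redirected walking (illustrative only).
# A gain > 1.0 rotates the virtual scene faster than the physical head turn,
# so the user physically turns less than they perceive; a gain < 1.0 does the opposite.

def apply_rotation_gain(prev_yaw_deg, curr_yaw_deg, virtual_yaw_deg, gain):
    """Map this frame's physical yaw change to a scaled virtual yaw change."""
    physical_delta = curr_yaw_deg - prev_yaw_deg
    return virtual_yaw_deg + gain * physical_delta

# Example: with a gain of 1.3, a 10-degree physical turn appears as 13 degrees.
print(apply_rotation_gain(0.0, 10.0, 0.0, 1.3))  # 13.0
```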

Analyses of Gait Parameters of Younger & Older Adults during (Non-)Isometric Virtual Walking

Omar Janeh, Gerd Bruder, Frank Steinicke, Alessandro Gulberti, Monika Poetter-Nerger

TVCG-Invited

Abstract: Understanding real walking in virtual environments (VEs) is essential for immersive experiences, allowing users to move through VEs in the most natural way. Previous studies have shown that basic implementations of real walking in virtual spaces, in which head-tracked movements are mapped isometrically to a VE, are not perceived as entirely natural. Instead, users judge a virtual walking velocity as more natural when it is slightly increased compared to their physical locomotion. However, these findings have in most cases been reported only for young persons, e.g., students, whereas older adults are clearly underrepresented in such studies. Our study investigates the effects of (non-)isometric mappings between physical movements and virtual motions in the VE on walking biomechanics across generations. Three primary domains (pace, base of support, and phase) of spatio-temporal parameters were identified to evaluate gait performance. The results show that the older adults walked very similarly in the real and virtual environments in the pace and phase domains, which differs from results found in younger adults. In contrast, the results indicate differences in base-of-support parameters. For non-isometric mappings we found an increased divergence of gait parameters in all domains, correlating with the up- or down-scaled velocity of visual self-motion feedback.

I Can See on my Feet While Walking: Sensitivity to Translation Gains with Visible Feet

Lucie Kruse, Eike Langbehn, Frank Steinicke

Conference

Abstract: To our knowledge, all previous experiments on identifying detection thresholds for redirected walking were conducted without a visual self-representation of the user, i.e., without showing the user's feet. We address the question of whether a virtual self-representation of the user's feet changes the detection thresholds for translation gains. Furthermore, we consider the influence of the type of virtual environment. We therefore conducted an experiment with three different conditions. The results show a significant difference between the two types of environment. Our findings suggest that the virtual environment matters more for manipulation detection than the visual self-representation of the user's feet.

Experiencing an Invisible World War I Battlefield Through Narrative-Driven Redirected Walking in Virtual Reality

Run Yu, Zachary Duer, Todd Ogle, Doug Bowman, Thomas Tucker, David Hicks, Dongsoo Choi, Zach Bush, Huy Ngo, Phat Nguyen, Xindi Liu

Conference

Abstract: Redirected walking techniques have the potential to provide natural locomotion while users experience large virtual environments. However, when using redirected walking in small physical workspaces, disruptive overt resets are often required. We describe the design of an educational virtual reality experience in which users physically walk through virtual tunnels representative of the World War I battle of Vauquois. Walking in only a 15- by 5-foot tracked space, users are redirected through subtle, narrative-driven resets to walk through a tunnel nearly 50 feet in length. This work contributes approaches and lessons that can be used to provide a seamless and natural virtual reality walking experience in highly constrained physical spaces.


Session 14: Applications

Thursday, March 22nd, 9:00 AM - 10:30 AM, Grosser Saal

Chair: Maki Sugimoto

Fluid Sketching: Immersive Sketching Based on Fluid Flow

Sevinc Eroglu, Sascha Gebhardt, Patric Schmitz, Dominik Rausch, Torsten Wolfgang Kuhlen

Conference

Abstract: Fluid artwork refers to works of art based on the aesthetics of fluid motion, such as smoke photography, ink injection into water, and paper marbling. Inspired by such types of art, we created Fluid Sketching as a novel medium for creating 3D fluid artwork in immersive virtual environments. It allows artists to draw 3D fluid-like sketches and manipulate them via six-degrees-of-freedom input devices. Different brush-stroke settings are available, varying the characteristics of the fluid. Artists can shape the drawn sketch by directly interacting with it, either with their hand or by blowing into the fluid. We rely on particle advection via curl noise as a fast procedural method for animating the fluid flow.
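
As a rough illustration of the curl-noise advection mentioned above, the sketch below advects 2D particles along the perpendicular gradient of a scalar potential, which is divergence-free by construction. The trigonometric potential stands in for a real noise function; everything here is an assumption for illustration, not the authors' code.

```python
import math

# Sketch of curl-noise particle advection: velocity is the perpendicular
# gradient of a scalar potential psi, which makes the field divergence-free.
def psi(x, y):
    return math.sin(1.7 * x) * math.cos(2.3 * y)  # stand-in for a noise function

def curl_velocity(x, y, eps=1e-4):
    dpsi_dx = (psi(x + eps, y) - psi(x - eps, y)) / (2 * eps)
    dpsi_dy = (psi(x, y + eps) - psi(x, y - eps)) / (2 * eps)
    return dpsi_dy, -dpsi_dx  # v = (d(psi)/dy, -d(psi)/dx)

def advect(particles, dt=0.016):
    # Step each particle forward along the fluid-like velocity field.
    out = []
    for x, y in particles:
        vx, vy = curl_velocity(x, y)
        out.append((x + dt * vx, y + dt * vy))
    return out
```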

Automatic Furniture Arrangement Using Greedy Cost Minimization

Peter Kán, Hannes Kaufmann

Conference

Abstract: We present a novel method for fast generation of furniture arrangements in interior scenes. Our method exploits the benefits of optimization-based approaches for global aesthetic rules and the advantages of procedural approaches for the local arrangement of small objects. We generate the furniture arrangements for a given room in two steps: we first optimize the selection and arrangement of furniture objects in a room with respect to aesthetic and functional rules. In the second step, procedural methods are applied locally in a stochastic fashion to generate important scene details. We demonstrate the feasibility of our method in a user study.
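
A minimal sketch of greedy cost minimization in this spirit: propose a random perturbation of the layout and keep it only if the total cost decreases. The cost terms below are placeholders, not the paper's aesthetic and functional rules.

```python
import random

# Sketch of greedy cost minimization over a furniture layout (illustrative).
def cost(layout):
    # Placeholder cost; a real system would score overlap, wall distance,
    # walkable paths, and other aesthetic/functional rules.
    return sum(x * x + y * y for x, y in layout)

def greedy_arrange(layout, iterations=1000, step=0.1):
    best, best_cost = list(layout), cost(layout)
    for _ in range(iterations):
        candidate = list(best)
        i = random.randrange(len(candidate))
        x, y = candidate[i]
        candidate[i] = (x + random.uniform(-step, step),
                        y + random.uniform(-step, step))
        c = cost(candidate)
        if c < best_cost:  # greedy: accept only strict improvements
            best, best_cost = candidate, c
    return best
```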


Session 15: Navigation

Thursday, March 22nd, 9:00 AM - 10:30 AM, Kleiner Saal A

Chair: Anatole Lecuyer

Interactive Exploration Assistance for Immersive Virtual Environments based on Object Visibility and Viewpoint Quality

Sebastian Freitag, Benjamin Weyers, Torsten Wolfgang Kuhlen

Conference

Abstract: During free exploration of an unknown virtual scene, users often miss important parts, resulting in incomplete environment knowledge. Existing approaches to address this, such as maps, trails, or guided tours, either do not actually ensure exploration success or take control away from the user. Therefore, we present an interactive exploration assistance interface that guides users to interesting, unvisited regions upon request, supplementing their own free exploration. It is based on an automated analysis of visibility and viewpoint quality and is therefore applicable to many different scenes without human supervision or input. We show the effectiveness of the approach in a user study.

RST 3D: A Comprehensive Gesture Set for Multitouch 3D Navigation

Alexander Kulik, André Kunert, Magdalena Keil, Bernd Froehlich

Conference

Abstract: We present a comprehensive multitouch input mapping for multiscale 3D navigation. In contrast to prior work, our technique offers explicit control over 3D rotation, 3D translation, and uniform scaling with manipulative gestures that do not require graphical widgets. Our proposed technique is consistent with the established RST mapping (rotation, scaling, translation) for 2D multitouch input and follows suggestions from prior work on multitouch 3D interaction. Our implementation includes a rendering technique that can reduce perceptual conflicts of 3D touch input on stereoscopic displays. We also report on two user studies that informed our interaction design and confirmed its usability.

Efficient VR and AR Navigation through Multiperspective Occlusion Management

Meng-Lin Wu, Voicu Popescu

TVCG-Invited

Abstract: Immersive navigation in virtual reality (VR) and augmented reality (AR) leverages physical locomotion through pose tracking of the head-mounted display. While this navigation modality is intuitive, regions of interest in the scene may suffer from occlusion and require significant viewpoint translation. Moreover, limited physical space and user mobility need to be taken into consideration. Some regions of interest may require viewpoints that are physically unreachable without less intuitive methods such as walking in-place or redirected walking. We propose a novel approach for increasing navigation efficiency in VR and AR using multiperspective visualization. Our approach samples occluded regions of interest from additional perspectives, which are integrated seamlessly into the user’s perspective. This approach improves navigation efficiency by bringing simultaneously into view multiple regions of interest, allowing the user to explore more while moving less. We have conducted a user study that shows that our method brings significant performance improvement in VR and AR environments, on tasks that include tracking, matching, searching, and ambushing objects of interest.

Saliency in VR: How do people explore virtual environments?

Vincent Sitzmann, Ana Serrano, Amy Pavel, Maneesh Agrawala, Diego Gutierrez, Belen Masia, Gordon Wetzstein

TVCG

Abstract: Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention. Whereas a body of recent work has focused on modeling saliency in desktop viewing conditions, VR is very different from these conditions in that viewing behavior is governed by stereoscopic vision and by the complex interaction of head orientation, gaze, and other kinematic constraints. To further our understanding of viewing behavior and saliency in VR, we capture and analyze gaze and head orientation data of 169 users exploring stereoscopic, static omni-directional panoramas, for a total of 1980 head and gaze trajectories for three different viewing conditions. We provide a thorough analysis of our data, which leads to several important insights, such as the existence of a particular fixation bias, which we then use to adapt existing saliency predictors to immersive VR conditions. In addition, we explore other applications of our data and analysis, including automatic alignment of VR video cuts, panorama thumbnails, panorama video synopsis, and saliency-based compression.

Rapid, Continuous Movement Between Nodes as an Accessible Virtual Reality Locomotion Technique

M.P. Jacob Habgood, David Moore, David Wilson, Sergio Alapont

Conference

Abstract: The Chantry is a PlayStation VR game based on the life and work of Edward Jenner, the 18th-century scientist credited with the discovery of vaccination. This paper presents a node-based locomotion system for virtual reality which allows the player to move between predefined node positions using a rapid, continuous, linear motion. An evaluation was undertaken to compare motion sickness and presence for this locomotion technique against commonly used teleportation-based and continuous walking approaches. Contrary to intuition, we show that rapid movement speeds reduce players' feelings of motion sickness compared to continuous movement at normal walking speeds.
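
The locomotion idea is easy to picture as code: a short, constant-speed linear interpolation between predefined node positions. The sketch below is a schematic reading of the technique; the speed and frame rate are assumed values, not those of the game.

```python
import math

# Sketch of rapid node-to-node locomotion: constant-speed linear motion
# between predefined node positions (speed and frame time are assumptions).
def move_between_nodes(start, end, speed=20.0, frame_dt=1 / 90):
    length = math.dist(start, end)
    steps = max(1, round(length / (speed * frame_dt)))
    for i in range(steps + 1):
        t = i / steps
        yield tuple(s + t * (e - s) for s, e in zip(start, end))

# Example: camera positions along a 10 m move at 20 m/s, sampled at 90 Hz.
for pos in move_between_nodes((0.0, 0.0, 0.0), (10.0, 0.0, 0.0)):
    pass  # feed each position to the camera rig
```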


Session 16: Passive Haptics

Thursday, March 22nd, 9:00 AM - 10:30 AM, Kleiner Saal B

Chair: Rob Lindeman

Evaluating Remapped Physical Reach for Hand Interactions with Passive Haptics in Virtual Reality

Dustin T. Han, Mohamed Suhail Mohamed Yousuf Sait, Eric Ragan

TVCG

Abstract: Virtual reality often uses motion tracking to incorporate physical hand movements into interaction techniques for selection and manipulation of virtual objects. To increase realism and allow direct hand interaction, real-world physical objects can be aligned with virtual objects to provide tactile feedback and physical grasping. However, unless a physical space is custom configured to match a specific virtual reality experience, the ability to perfectly match the physical and virtual objects is limited. Our research addresses this challenge by studying methods that allow one physical object to be mapped to multiple virtual objects that can exist at different virtual locations in an egocentric reference frame. We study two such techniques: one that introduces a static translational offset between the virtual and physical hand before a reaching action, and one that dynamically interpolates the position of the virtual hand during a reaching motion. We conducted two experiments to assess how the two methods affect reaching effectiveness, comfort, and ability to adapt to the remapping techniques when reaching for objects with different types of mismatches between physical and virtual locations. We also present a case study to demonstrate how the hand remapping techniques could be used in an immersive game application to support realistic hand interaction while optimizing usability. Overall, the translational technique performed better than the interpolated reach technique and was more robust for situations with larger mismatches between virtual and physical objects.
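
The two remapping strategies can be summarized in a few lines. The sketch below uses 1D reach distances for brevity (vectors behave the same way componentwise) and is an interpretation of the described techniques, not the study's code.

```python
# Sketch of the two hand-remapping strategies (1D distances for brevity).

def static_offset_hand(physical_hand, offset):
    # Constant translational offset between physical and virtual hand,
    # introduced before the reaching action begins.
    return physical_hand + offset

def interpolated_hand(physical_hand, start, physical_target, virtual_target):
    # Blend in the target mismatch as the reach progresses, so the virtual
    # hand lands on the virtual object exactly when the physical hand
    # reaches the physical prop. Assumes start != physical_target.
    progress = (physical_hand - start) / (physical_target - start)
    progress = max(0.0, min(1.0, progress))
    return physical_hand + progress * (virtual_target - physical_target)
```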

Ascending and Descending in Virtual Reality: Simple and Safe System using Passive Haptics

Ryohei Nagao, Keigo Matsumoto, Takuji Narumi, Tomohiro Tanikawa, Michitaka Hirose

TVCG

Abstract: This paper presents a novel interactive system that provides users with virtual reality (VR) experiences wherein they feel as if they are ascending or descending stairs through passive haptic feedback. The passive haptic stimuli are provided by small bumps under the users' feet; these stimuli represent the edges of the stairs in the virtual environment. The visual stimuli of the stairs and shoes, provided by head-mounted displays, evoke a visuo-haptic interaction that modifies a user's perception of the floor shape. Our system enables users to experience all types of stairs, such as half-turn and spiral stairs, in a VR setting. We conducted a preliminary user study and two experiments to evaluate the proposed technique. The preliminary user study investigated the effectiveness of the basic idea behind the proposed technique for the case of a user ascending stairs. The results demonstrated that the passive haptic feedback produced by the small bumps enhanced the user's feeling of presence and sense of ascending. We subsequently performed an experiment to investigate an improved viewpoint manipulation method and the interaction of the manipulation and haptics for both the ascending and descending cases. The experimental results demonstrated that the participants felt presence and perceived a steep stair gradient under the condition of haptic feedback and viewpoint manipulation based on the characteristics of actual stair-walking data. However, these results also indicated that the proposed system may not be as effective in providing a sense of descending stairs without an optimization of the haptic stimuli. We then redesigned the shape of the small bumps and evaluated the design in a second experiment. The results indicated that the best shape for presenting haptic stimuli is a right-triangle cross-section in both the ascending and descending cases. Although it is necessary to install the small protrusions in a determined direction, using this optimized shape enhanced the user's feeling of the presence of the stairs and the sensation of walking up and down.

Cognitive and Touch Performance Effects of Mismatched 3D Physical and Visual Perceptions

Jason Hochreiter, Salam Daher, Gerd Bruder, Greg Welch

Conference

Abstract: In a controlled human-subject study we investigated the effects of mismatched physical and visual perception on cognitive load, performance (touch accuracy and response time), and usability in an augmented reality touch task by varying physical fidelity (matching vs. non-matching shape) and visual mechanism (projector vs. HMD). Participants touched visual targets on four corresponding physical-visual representations of a human head. The cognitive load task required target size estimations during a concurrent (secondary) counting task. Results indicated higher performance, lower cognitive load, and increased usability when participants touched a matching physical head-shaped surface and with projector-based visuals.

MRTouch: Adding Touch Input to Head-Mounted Mixed Reality

Robert Xiao, Julia Schwarz, Nick Throm, Andrew D Wilson, Hrvoje Benko

TVCG

Abstract: We present MRTouch, a novel multitouch input solution for head-mounted mixed reality systems. Our system enables users to reach out and directly manipulate virtual interfaces affixed to surfaces in their environment, as though they were touchscreens. Touch input offers precise, tactile and comfortable user input, and naturally complements existing popular modalities, such as voice and hand gesture. Our research prototype combines both depth and infrared camera streams together with real-time detection and tracking of surface planes to enable robust finger-tracking even when both the hand and head are in motion. Our technique is implemented on a commercial Microsoft HoloLens without requiring any additional hardware or any user or environmental calibration. Through our performance evaluation, we demonstrate high input accuracy with an average positional error of 5.4 mm and a 95% button size of 16 mm, across 17 participants, 2 surface orientations and 4 surface materials. Finally, we demonstrate the potential of our technique to enable on-world touch interactions through 5 example applications.
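
A schematic reduction of the touch decision: once a surface plane is tracked, a fingertip counts as touching when it lies within a small distance of that plane. The threshold and plane representation are assumptions for illustration, not details from the paper.

```python
# Sketch: classify a tracked fingertip as touching a detected plane
# (n . p + d = 0) when its distance to the plane is below a small threshold.
def is_touch(fingertip, plane_normal, plane_d, threshold=0.01):
    # plane_normal is assumed unit length; the 1 cm threshold is illustrative.
    dist = abs(sum(n * p for n, p in zip(plane_normal, fingertip)) + plane_d)
    return dist <= threshold
```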

The Critical Role of Self-Contact for Embodiment in Virtual Reality

Sidney Bovet, Henrique Galvan Debarba, Bruno Herbelin, Eray Molla, Ronan Boulic

TVCG

Abstract: With the broad range of motion capture devices available on the market, it is now commonplace to directly control the limb movement of an avatar during immersion in a virtual environment. Here, we study how the subjective experience of embodying a full-body controlled avatar is influenced by motor alterations and self-contact mismatches. Self-contact is in particular a strong source of passive haptic feedback, and we assume it brings a clear benefit in terms of embodiment. To evaluate this hypothesis, we experimentally manipulate self-contacts and the displacement of the virtual hand relative to the body. We introduce these body posture transformations to experimentally reproduce the imperfect or incorrect mapping between real and virtual bodies, with the goal of quantifying the limits of acceptance for distorted mapping on reported body ownership and agency. We first describe how we exploit egocentric coordinate representations to perform a motion capture ensuring that real and virtual hands coincide whenever the real hand is in contact with the body. Then, we present a pilot study that focuses on quantifying our sensitivity to visuo-tactile mismatches. The results are then used to design our main study with two factors, offset (for self-contact) and amplitude (for movement amplification). Our main result shows that subjects' embodiment remains strong even when an artificially amplified movement of the hand is performed, provided that correct self-contacts are ensured.


Session 17: Selection and Pointing

Thursday, March 22nd, 11:00 AM - 12:30 PM, Grosser Saal

Chair: Wolfgang Stürzlinger

Transferability of Spatial Maps: Augmented Versus Virtual Reality Training

Nicko Reginio Caluya, Alexander Plopski, Jayzon Ty, Christian Sandor, Takafumi Taketomi, Hirokazu Kato

Conference

Abstract: Work space simulations help trainees acquire skills necessary to perform their tasks efficiently without disrupting the workflow, forgetting important steps during a procedure, or the location of important information. This training can be conducted in Augmented and Virtual Reality (AR, VR). However, thus far, it is unclear which training (AR/VR) achieves better results in terms of positive training transfer. We compare the effectiveness of AR and VR for spatial memory training in a control-room scenario, where 16 participants have to memorize the location of buttons and information displays. Results of our within-subject study show that VR outperformed AR in short-term memory tests, but performed worse in memory transfer tests.

User Preference for SharpView-Enhanced Virtual Text during Non-Fixated Viewing

Trey Cook, Nate Phillips, Kristen L Massey, Alexander Plopski, Christian Sandor, J. Edward Swan

Conference

Abstract: For optical see-through head-mounted displays, the mismatch between the display’s focal length and the observed scene inadvertently prevents users from simultaneously focusing on the presented computer graphics and the scene, leading to eye strain as users refocus between the two. It has been shown that it is possible to ameliorate the out-of-focus blur for images with a known focus distance by applying an algorithm called SharpView. In this study we investigate whether users report increased clarity when SharpView is applied to a text-based image. Our results indicate that there is a significant user preference for SharpView-enhanced images in non-fixated viewing.

Visual Perception of Real World Depth Map Resolution for Mixed Reality

Lohit Dev Petikam, Andrew Chalmers, Taehyun James Rhee

Conference

Abstract: Compositing virtual objects into photographs with known real world geometry is a common task in mixed reality (MR) applications. This enables mutual illumination effects, such as lighting, shadowing, and occlusions between real and virtual objects. We investigate the relationship between real world depth fidelity and visual quality in MR rendering. Through user experiments we independently evaluate the noticeability of multiple composition artifacts that occur with approximate depth. The findings are used to inform trade-off decisions for optimising depth acquisition in MR applications.

Human Compensation Strategies for Orientation Drifts

Tobias Feigl, Christopher Mutschler, Michael Philippsen

Conference

Abstract: No-Pose tracking systems rely on a single sensor at the user's head to determine its position. They estimate the head orientation with inertial sensors and analyze the body motion to compensate for their drift. However, with orientation drift, VR users implicitly lean their heads and bodies sideways. Hence, to determine the sensor drift and to adjust the orientation of the VR display, there is a need to understand and consider both the user's head and body orientations. We study the effects of head orientation drift on the user's absolute head and body orientations when walking in VR: how much drift accumulates over time, how users experience and tolerate it, and what strategies they apply to compensate for larger drifts.

Simulated Reference Frame: A Cost-Effective Solution to Improve Spatial Orientation in VR

Thinh Nguyen-Vo, Bernhard E. Riecke, Wolfgang Stuerzlinger

Conference

Abstract: Though recent technological advances offer a high level of photo-realism, locomotion in VR is still restricted because people might not perceive their self-motion as they would in the real world. Previous research has identified the roles reference frames play in retaining spatial orientation. Here, we propose using visually overlaid rectangular boxes, simulating reference frames, to give users a better sense of spatial orientation in VR. Results showed that a simulated reference frame has significant effects on participants' completion time and travel distance in the task, suggesting that adding a simulated reference frame to VR applications might be a cost-effective solution to the spatial disorientation problem in VR.


Session 18: Hardware & Tracking

Thursday, March 22nd, 11:00 AM - 12:30 PM, Kleiner Saal A

Chair: Henry Fuchs

Fabricating Diminishable Visual Markers for Geometric Registration in Projection Mapping

Hirotaka Asayama, Daisuke Iwai, Kosuke Sato

TVCG-Invited

Abstract: We propose a visual marker embedding method for the pose estimation of a projection surface to correctly map projected images onto the surface. Assuming that the surface is fabricated by a full-color or multi-material three-dimensional (3D) printer, we propose to automatically embed visual markers on the surface with mechanical accuracy. The appearance of the marker is designed such that the marker is detected by infrared cameras even when printed on a non-planar surface, while its appearance can be diminished by the projection to be as imperceptible as possible to human observers. The marker placement is optimized using a genetic algorithm to maximize the number of valid viewpoints from which the pose of the object can be estimated correctly using a stereo camera system. We also propose a radiometric compensation technique to quickly diminish the marker appearance. Experimental results confirm that the poses of projection objects are correctly estimated while the appearance of the markers is diminished to an imperceptible level. At the same time, we confirmed the limitations of the current method: only one object can be handled, and pose estimation is not performed at interactive frame rates. Finally, we demonstrate that the proposed technique works successfully for various surface shapes and target textures.

Widening Viewing Angles of Automultiscopic Displays using Refractive Inserts

Geng Lyu, Xukun Shen, Taku Komura, Kartic Subr, Lijun Teng

TVCG

Abstract: Displays that can portray environments as seen from multiple views are known as multiscopic displays. Amongst these displays, some enable realistic perception of 3D environments without the need for cumbersome mounts or fragile head-tracking algorithms. Such automultiscopic displays function by carefully controlling the distribution of emitted light over space, direction (angle) and time so that even a static image (lightfield) can encode parallax across viewing directions. These displays allow simultaneous observation by multiple viewers, each perceiving 3D from their own (correct) perspective. Currently, the illusion can only be effectively maintained over a narrow range of viewing angles. In this paper, we propose and analyze a simple solution to widen the range of viewing angles for automultiscopic displays that use parallax barriers. We propose the use of a refractive medium, with a high refractive index, between the display and parallax barriers. The inserted medium warps the exitant lightfield in a way that increases the potential viewing angle. We evaluate the consequences of this warp and demonstrate that there is a 93% increase in the effective viewing angle.
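
The optical intuition can be stated with Snell's law. Assuming the insert has refractive index n and exits into air, a ray travelling at interior angle θ_in leaves at a larger exterior angle θ_out, so the same barrier geometry serves a wider fan of viewing directions (the exact widening, such as the 93% reported above, depends on the chosen index and display geometry):

```latex
n \sin\theta_{\mathrm{in}} = \sin\theta_{\mathrm{out}}
\quad\Longrightarrow\quad
\theta_{\mathrm{out}} = \arcsin\!\left(n \sin\theta_{\mathrm{in}}\right) > \theta_{\mathrm{in}}
\quad \text{for } n > 1.
```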

Augmented Reality Driving Using Semantic Geo-Registration

Han-Pang Chiu, Varun Murali, Ryan Villamil, Gregory Drew Kessler, Supun Samarasekera, Rakesh Kumar

Conference

Abstract: We propose a new approach that utilizes semantic information to register 2D monocular video frames to the world using 3D geo-referenced data, for augmented reality driving applications. The geo-registration process uses our predicted vehicle pose to generate a rendered depth map for each frame, allowing 3D graphics to be convincingly blended with the real world view. We also estimate absolute depth values for dynamic objects, up to 120 meters, based on the rendered depth map. This process also creates opportunistic global heading measurements to improve pose estimates. We evaluate our system on a driving vehicle for producing realistic augmentations.

Cascaded 3D Full-body Pose Regression from Single Depth Image at 100 FPS

Shihong Xia, Zihao Zhang, Le Su

Conference

Abstract: This paper presents a novel cascaded 3D full-body pose regression method to estimate accurate pose from a single depth image at 100 fps. The key idea is to train cascaded regressors based on the Gradient Boosting algorithm using a pre-recorded human motion capture database. By incorporating a hierarchical kinematic model of human pose into the learning procedure, we can directly estimate accurate 3D joint angles instead of joint positions. The biggest advantage of this model is that bone lengths are preserved during the whole 3D pose estimation procedure, which leads to more effective features and higher pose estimation accuracy.

Coded Light Based Extensible Optical Tracking System

Dong Li, Danli Wang, Dongdong Weng, Yue Li, Hang Xun, Yihua Bao

Conference

Abstract: In this paper, an extensible optical tracking system is proposed which can distinguish the signals from different base stations and support the simultaneous operation of multiple base stations. Furthermore, we designed an encoding scheme that generates independent codes for up to 32 base stations and proposed a high-speed decoding method. Experiments demonstrate that the system has high tracking accuracy and low system latency. Users can adjust the number and layout of base stations according to actual demand, which greatly improves the flexibility of the system and benefits the development of low-cost, large-scale optical tracking equipment.


Session 19: 360° and Panoramic Videos

Thursday, March 22nd, 1:45 PM - 3:00 PM, Grosser Saal

Chair: Sei Ikeda

Parallax360: Stereoscopic 360° Scene Representation for Head-Motion Parallax

Bicheng Luo, Feng Xu, Christian Richardt, Jun-hai Yong

TVCG

Abstract: We propose a novel 360° scene representation for converting real scenes into stereoscopic 3D virtual reality content with head-motion parallax. Our image-based scene representation enables efficient synthesis of novel views with six degrees-of-freedom (6-DoF) by fusing motion fields at two scales: (1) disparity motion fields carry implicit depth information and are robustly estimated from multiple laterally displaced auxiliary viewpoints, and (2) pairwise motion fields enable real-time flow-based blending, which improves the visual fidelity of results by minimizing ghosting and view transition artifacts. Based on our scene representation, we present an end-to-end system that captures real scenes with a robotic camera arm, processes the recorded data, and finally renders the scene in a head-mounted display in real time (more than 40 Hz). Our approach is the first to support head-motion parallax when viewing real 360° scenes. We demonstrate compelling results that illustrate the increased visual experience and hence sense of immersion achieved with our approach compared to widely-used stereoscopic panoramas.
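
As a rough sketch of flow-based blending between two captured views: warp both images along a precomputed pairwise motion field, then cross-fade by the view weight. The array layout, names, and omitted bounds handling are assumptions; this is not the authors' renderer.

```python
# Sketch of flow-based blending between two views (bounds checks omitted).
# flow01[y][x] holds the per-pixel motion vector from view 0 toward view 1;
# t in [0, 1] is the interpolation weight between the two viewpoints.
def blend_pixel(img0, img1, flow01, x, y, t):
    u, v = flow01[y][x]
    x0, y0 = int(x + t * u), int(y + t * v)              # warp view 0 forward
    x1, y1 = int(x - (1 - t) * u), int(y - (1 - t) * v)  # warp view 1 backward
    return (1 - t) * img0[y0][x0] + t * img1[y1][x1]     # cross-fade by t
```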

The Effect of Transition Type in Multi-View 360° Media

Andrew MacQuarrie, Anthony Steed

TVCG

Abstract: 360° images and video have become extremely popular formats for immersive displays, due in large part to the technical ease of content production. While many experiences use a single camera viewpoint, an increasing number of experiences use multiple camera locations. In such multi-view 360° media (MV360M) systems, a visual effect is required when the user transitions from one camera location to another. This effect can take several forms, such as a cut or an image-based warp, and the choice of effect may impact many aspects of the experience, including issues related to enjoyment and scene understanding. To investigate the effect of transition types on immersive MV360M experiences, a repeated-measures experiment was conducted with 31 participants. Wearing a head-mounted display, participants explored four static scenes, for which multiple 360° images and a reconstructed 3D model were available. Three transition types were examined: a teleport, a linear move through a 3D model of the scene, and an image-based transition using a Möbius transformation. The metrics investigated included spatial awareness, users' movement profiles, transition preference, and the subjective feeling of moving through the space. Results indicate that there was no significant difference between transition types in terms of spatial awareness, while significant differences were found for users' movement profiles, with participants taking 1.6 seconds longer to select their next location following a teleport transition. The model and Möbius transitions were significantly better at creating the feeling of moving through the space. Preference also differed significantly, with the model and teleport transitions preferred over Möbius transitions. Our results indicate that trade-offs between transitions will require content creators to think carefully about which aspects they consider most important when producing MV360M experiences.
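
For context, a Möbius transformation warps the (complex) image plane as f(z) = (az + b)/(cz + d). The sketch below interpolates from the identity map toward an illustrative target warp; the coefficients are invented for illustration and are not taken from the paper.

```python
# Sketch of a Mobius-transformation warp for view transitions (illustrative).
def mobius(z: complex, a: complex, b: complex, c: complex, d: complex) -> complex:
    assert a * d - b * c != 0, "coefficients must define an invertible map"
    return (a * z + b) / (c * z + d)

def transition_warp(z: complex, t: float) -> complex:
    # t = 0 gives the identity map; t = 1 gives the (example) target warp.
    a = (1 - t) * complex(1, 0) + t * complex(0.95, 0.05)
    b = t * complex(0.10, 0.00)
    c = t * complex(0.00, 0.02)
    d = complex(1, 0)
    return mobius(z, a, b, c, d)
```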

Generating VR Live Videos with Tripod Panoramic Rig

Feng Xu, Tianqi Zhao, Bicheng Luo, Qionghai Dai

Conference

Abstract: We propose an end-to-end system that records a scene using a tripod panoramic rig and broadcasts 360° stereo panorama videos in real time. The system performs a panorama stitching technique that pre-computes three stitching-seam candidates for dynamic seam switching during live broadcasting. This technique achieves high frame rates (>30 fps) with minimal foreground cut-off and temporal jittering artifacts. Stereo vision quality is also better preserved by the proposed weighting-based image alignment scheme. We demonstrate the effectiveness of our approach on a variety of videos of live events.

Detection Thresholds for Rotation and Translation Gains in 360° Video-based Telepresence Systems

Jingxin Zhang, Eike Langbehn, Dennis Krupke, Nicholas Katzakis, Frank Steinicke

TVCG

Abstract: Telepresence systems have the potential to overcome limits and distance constraints of the real world by enabling people to remotely visit and interact with each other. However, current telepresence systems usually lack natural ways of supporting interaction and exploration of remote environments (REs). In particular, single webcams for capturing the RE provide only a limited illusion of spatial presence, and movement control of mobile platforms in today's telepresence systems is often restricted to simple interaction devices. One of the main challenges of telepresence systems is to allow users to explore an RE in an immersive, intuitive and natural way, e.g., by real walking in the user's local environment (LE), thus controlling the motions of the robot platform in the RE. However, the LE in which the user's motions are tracked usually provides a much smaller interaction space than the RE. In this context, redirected walking (RDW) is a very suitable approach to solve this problem. However, so far no previous work has explored if and how RDW can be used in video-based 360° telepresence systems. In this article, we conducted two psychophysical experiments in which we quantified how much humans can be unknowingly redirected onto virtual paths in the RE that differ from the physical paths they actually walk in the LE. Experiment 1 introduces a discrimination task between local and remote translations, and Experiment 2 analyzes the discrimination between local and remote rotations. In Experiment 1, participants performed straightforward translations in the LE that were mapped to straightforward translations in the RE shown as 360° videos to which we applied different gains. Then, participants had to estimate whether the remotely perceived translation was faster or slower than the actual physically performed translation. Similarly, in Experiment 2, participants performed rotations in the LE that were mapped to virtual rotations in a 360° video-based RE to which we applied different gains. Again, they had to estimate whether the remotely perceived rotation was smaller or larger than the actual physically performed rotation. Our results show that participants are not able to reliably discriminate the difference between physical motion in the LE and virtual motion from the 360° video RE when virtual translations are down-scaled by 5.8% or up-scaled by 9.7%, and when virtual rotations are about 12.3% less or 9.2% more than the corresponding physical rotations in the LE.
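
Read operationally, the thresholds above bound how far a telepresence system can scale motion before users notice. A minimal sketch, assuming the translation thresholds reported in the abstract:

```python
# Sketch: clamp a translation gain to the detection thresholds reported above
# (down-scaling by 5.8% and up-scaling by 9.7% went unnoticed on average).
GAIN_MIN = 1.0 - 0.058
GAIN_MAX = 1.0 + 0.097

def remote_translation(local_delta_m, desired_gain):
    gain = max(GAIN_MIN, min(GAIN_MAX, desired_gain))  # stay below detection
    return gain * local_delta_m
```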

Gaze-aware streaming solutions for the next generation of mobile VR experiences

Pietro Lungaro, Rickard Sjoberg, Alfredo Jos Fanghella Valero, Ashutosh Mittal, Konrad Tollmar

TVCG

Abstract: This paper presents a novel approach to content delivery for video streaming services. It exploits information from connected eye-trackers embedded in the next generation of VR Head Mounted Displays (HMDs). The proposed solution aims to deliver high visual quality, in real time, around the user's fixation points, while lowering the quality everywhere else. The goal of the proposed approach is to substantially reduce the overall bandwidth requirements for supporting VR video experiences while delivering high levels of user-perceived quality. The prerequisites for achieving these results are: (1) mechanisms that can cope with different degrees of latency in the system and (2) solutions that support fast adaptation of video quality in different parts of a frame, without requiring a large increase in bitrate.

A novel codec configuration, capable of supporting near-instantaneous video quality adaptation in specific portions of a video frame, is presented. The proposed method exploits in-built properties of HEVC encoders, and while it introduces a moderate amount of error, these errors are undetectable by users. Fast adaptation is the key to enabling gaze-aware streaming and its reduction in bandwidth.

A testbed implementing gaze-aware streaming, together with a prototype HMD with an in-built eye tracker, is presented and was used for testing with real users. The studies quantified the bandwidth savings achievable by the proposed approach and characterized the relationship between Quality of Experience (QoE) and network latency. The results showed that up to 83% less bandwidth is required to deliver high QoE levels to users, as compared to conventional solutions.
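
One way to picture the approach: encode regions near the fixation point at high quality and degrade quality with angular distance from gaze. The tiling, radii, and quality levels below are assumptions for illustration, not the paper's codec configuration.

```python
import math

# Sketch of gaze-aware quality assignment per tile (illustrative values).
def tile_quality(tile_center_deg, gaze_deg, fovea_radius_deg=10.0):
    dist = math.dist(tile_center_deg, gaze_deg)  # angular distance in degrees
    if dist <= fovea_radius_deg:
        return "high"    # full quality around the fixation point
    if dist <= 3 * fovea_radius_deg:
        return "medium"  # transition band
    return "low"         # aggressive compression in the periphery
```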


Session 20: Learning and Educational VR

Thursday, March 22nd, 1:45 PM - 3:00 PM, Kleiner Saal A

Chair: Kyle Johnsen

Neurophysiology of Visual-Motor Learning during a Simulated Marksmanship Task in Immersive Virtual Reality

Jillian Clements, Regis Kopper, David J Zielinski, Hrishikesh Rao, Marc A Sommer, Elayna Kirsch, Boyla Mainsah, Leslie Collins, Lawrence G. Appelbaum

Conference

Abstract: Virtual reality (VR) systems offer flexible control of an environment, along with precise tracking of movement. VR can also be used in conjunction with neurophysiological monitoring, such as EEG, to record neural activity as users perform a task. In this study, we combine these elements to explore the biological mechanisms that underlie motor learning during a multi-day dynamic marksmanship training regimen. Over the course of the 3 days, shot accuracy improved while reaction times became significantly faster. More negative EEG amplitudes over the visual cortices correlated with better shooting performance as measured by accuracy and reaction times, indicating that visual-system plasticity underlies behavioral learning in this task.

Evaluating Multiple Levels of an Interaction Fidelity Continuum on Performance and Learning in Near-Field Training Simulations

Ayush Bhargava, Jeffrey W Bertrand, Anand K. Gramopadhye, Kapil Chalil Madathil, Sabarish V. Babu

TVCG

Abstract: With the costs of head-mounted displays (HMDs) and tracking technology decreasing rapidly, various virtual reality applications are being widely adopted for education and training. Hardware advancements have enabled replication of real-world interactions in virtual environments to a large extent, paving the way for commercial-grade applications that provide a safe and risk-free training environment at a fraction of the cost. But this also mandates the need to develop more intrinsic interaction techniques and to empirically evaluate them in a more comprehensive manner. Although there exists a body of previous research that examines the benefits of selected levels of interaction fidelity on performance, few studies have investigated the constituent components of fidelity along an Interaction Fidelity Continuum (IFC) with several system instances and their respective effects on performance and learning in the context of a real-world skills training application. Our work describes a large experiment conducted over several years that utilizes bimanual interaction metaphors at six discrete levels of interaction fidelity to teach basic precision metrology concepts in a near-field spatial interaction task in VR. A combined analysis performed on the data compares and contrasts the six conditions and their overall effects on performance and learning outcomes, eliciting patterns in the results between the discrete application points on the IFC. With respect to some performance variables, results indicate that simpler, restrictive interaction metaphors and the highest-fidelity metaphors perform better than medium-fidelity interaction metaphors. In light of these results, a set of general guidelines is created for developers of spatial interaction metaphors in immersive virtual environments for precise fine motor skills training simulations.

Active Assembly Guidance with Online Video Parsing

Bin Wang, Guofeng Wang, Andrei Sharf, Yangyan Li, Fan Zhong, Xueying Qin, Daniel CohenOr, Baoquan Chen

Conference

Abstract: In this paper, we introduce an online video-based system that actively assists users in assembly tasks. The system guides and monitors the assembly process by providing instructions and feedback on possibly erroneous operations, enabling easy and effective guidance in AR/MR applications. The core of our system is an online video-based assembly parsing method that can understand the assembly process. Our method exploits knowledge of the participating parts to significantly alleviate the parsing problem. To further constrain the search space and understand the observed assembly activity, we introduce a tree-based global-inference technique. Our key idea is to incorporate part-interaction rules as powerful constraints.

Teacher-Guided Educational VR: Assessment of Live and Prerecorded Teachers Guiding Virtual Field Trips

Christoph W. Borst, Nicholas Lipari, Jason Wolfgang Woodworth

Conference

Abstract: We present a VR field trip framework, Kvasir-VR, and assess its two approaches to teacher-guided content. In one approach, networked student groups are guided by a live teacher captured as live-streamed depth camera imagery. The second approach is a standalone (non-networked) version allowing students to individually experience the field trip based on depth camera recordings of the same teacher. Both approaches were tested at two high schools using a VR environment that teaches students about solar energy production via tours of a solar plant.

Immersive Visualization of Abstract Information: An Evaluation on Dimensionally-Reduced Data Scatterplots

Jorge A. Wagner, Marina F. Rey, Carla M.D.S. Freitas, Luciana Nedel

Conference

Abstract: In this work, we evaluate the use of an HMD-based environment for the exploration of multidimensional data, represented in 3D scatterplots as a result of dimensionality reduction (DR). We present a new model of this problem, accounting for the two factors whose interplay determines the impact on overall task performance: the difference in errors introduced by performing dimensionality reduction to 2D or 3D, and the difference in human perception errors under different visualization conditions. This two-step framework offers a simple approach to estimate the benefits of using an immersive 3D setup for a particular dataset.


Session 21: Visual Perception

Thursday, March 22nd, 1:45 PM - 3:00 PM, Kleiner Saal B

Chair: Victoria Interrante

Yea Big, Yea High: A 3D User Interface for Surface Selection by Progressive Refinement in Virtual Environments

Bret Jackson, Brighten Jelke, Gabriel Brown

Conference

Abstract: We present Yea Big, Yea High, a 3D user interface for surface selection in virtual environments. The interface extends previous selection interfaces that support exploratory visualization and 3D modeling. While these systems primarily focus on selecting single objects, Yea Big, Yea High allows users to select part of a surface mesh, a common task for data analysis, model editing, or annotation. The selection can be progressively refined by indicating a region-of-interest between a user’s hands. We describe the design of the interface and key limitations. We present findings from a case study exploring design choices and use of the system.

Analysis of Proximity-Based Multimodal Feedback for 3D Selection in Immersive Virtual Environments

Oscar Javier Ariza Nunez, Gerd Bruder, Nicholas Katzakis, Frank Steinicke

Conference

Abstract: Interaction tasks in VR, such as 3D selection of objects, often suffer from reduced performance due to missing or different feedback provided by VR systems compared to real-world interactions. In this paper, we analyzed the effects of visual, auditory, and tactile feedback while users perform 3D object selections, by comparing binary and continuous proximity-based feedback approaches in which stimulus intensities depend on spatiotemporal relations between the input device and the virtual target object. We conducted a Fitts' law experiment and evaluated the different feedback approaches. The results show that the feedback types affect the ballistic and correction phases of the selection movement and significantly influence user performance.
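
The binary versus continuous distinction reduces to how stimulus intensity is computed from the device-target distance. A hedged sketch with assumed distances and ramp shape:

```python
# Sketch of binary vs. continuous proximity-based feedback intensity.
def binary_feedback(distance_m, threshold_m=0.05):
    # Stimulus switches on only once the input device is within the threshold.
    return 1.0 if distance_m <= threshold_m else 0.0

def continuous_feedback(distance_m, max_distance_m=0.30):
    # Intensity ramps up linearly as the device approaches the target.
    return max(0.0, min(1.0, 1.0 - distance_m / max_distance_m))
```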

Pointing at Wiggle 3D Displays

Michael Ortega, Wolfgang Stuerzlinger

Conference

Abstract: We present two new pointing techniques for wiggle 3D displays (2D projection of 3D content with automatic (rotatory) motion parallax). These techniques take advantage of the cursor’s current position or the user’s gaze direction for collocating the wiggle rotation center and potential targets. We evaluate the performance of the techniques with a novel methodology that integrates 3D distractors into the ISO-9241-9 standard task. The experimental results indicate that the new techniques are significantly more efficient than standard pointing techniques in wiggle 3D displays. Given that we observed no performance variation for different targets, our new techniques seem to negate any interaction performance penalties of wiggle 3D displays.

Perception of Redirected Pointing Precision in Immersive Virtual Reality

Henrique Galvan Debarba, Jad-Nicolas Elie Khoury, Sami Perrin, Bruno Herbelin, Ronan Boulic

Conference

Abstract: We investigate the self-attribution of distorted pointing movements in immersive virtual reality. Participants had to complete a multi-directional pointing task in which the visual feedback of the tapping finger could be deviated in order to increase or decrease the motor size of a target relative to its visual size. Participants were then asked whether the seen movement was equivalent to the movement they had performed. We show that participants are often unaware of the movement manipulation, and tend to self-attribute manipulated movements that make the task easier more often than movements that have not been distorted.

Performance Envelopes of In-Air Direct and Smartwatch Indirect Control for Head-Mounted Augmented Reality

Dennis Wolf, John J Dudley, Per Ola Kristensson

Conference

Abstract: The scarcity of established input methods for augmented reality (AR) head-mounted displays (HMD) motivates us to investigate the performance envelopes of two easily realisable solutions: indirect cursor control via a smartwatch and direct control by in-air touch. Indirect cursor control via a smartwatch has not been previously investigated for AR HMDs. We evaluate these two techniques in three fundamental user interface actions: target acquisition, goal crossing, and circular steering. We find that in-air is faster than smartwatch (p<0.001) for target acquisition and circular steering. We observe, however, that in-air selection can lead to discomfort after extended use and suggest that smartwatch control offers a complementary alternative.