Posters

On-the-fly Simulator of Tabletop Light-field 3-D Displays Powered by a Game Engine

Shunsuke Yoshida

Abstract: This paper proposes a method for visually simulating glasses-free tabletop light-field 3-D displays using a popular game engine. Light-field 3-D displays generally entail several trade-offs among screen optics and projection parameters, and because these parameters are tied to hardware configurations, physical cut-and-try prototyping and well-controlled fabrication are practically difficult. To improve the final visuals of such 3-D displays, we developed a simulator based on a popular game engine. By using its built-in functions efficiently, the simulator could be developed in a more agile and intuitive manner than coding from scratch, and various parameters could be examined in real time.

Tetrahedral Mesh Visualization in a Game Engine

Kuocheng Wang, Kishore Adimulam, Thenkurussi Kesavadas

Abstract: A tetrahedral mesh is used to perform finite element analysis (FEA) of surgical cuts in a medical simulator. Visualizing a tetrahedral mesh is an important challenge in constructing a realistic simulator. As game engines become increasingly popular, many companies are beginning to develop their own engines or to use existing ones for software development. However, game engines such as Unity and Unreal do not accept volume meshes. Moreover, during the cutting process the topology of the tetrahedral mesh changes, making visualization progressively more difficult. In this paper, we present a procedure to prepare a tetrahedral mesh for a surgery simulator. We also show an example of cutting a sphere by deleting tetrahedral elements.
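
The abstract leaves the mesh-preparation details open; as a rough illustration, the sketch below shows one standard way to make a tetrahedral mesh renderable in a surface-based engine such as Unity or Unreal: extract the boundary triangles (faces owned by exactly one tetrahedron) and rebuild them whenever elements are deleted by a cut. The example data are hypothetical.

```python
from collections import Counter

def boundary_faces(tets):
    """Extract the surface triangles of a tetrahedral mesh.
    A face shared by two tetrahedra is interior; a face appearing
    exactly once lies on the boundary and is what a surface-based
    game engine can actually render."""
    face_count = Counter()
    for a, b, c, d in tets:
        for face in ((a, b, c), (a, b, d), (a, c, d), (b, c, d)):
            face_count[tuple(sorted(face))] += 1
    return [f for f, n in face_count.items() if n == 1]

# Two tetrahedra sharing face (1, 2, 3); after deleting elements
# along a cut, simply recompute the boundary to update the render mesh.
tets = [(0, 1, 2, 3), (1, 2, 3, 4)]
print(boundary_faces(tets))  # six boundary faces; (1, 2, 3) is interior
```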

Mixed Reality Collaboration Between Human-Agent Teams

Thai Phan, Wolfgang Hoenig, Nora Ayanian

Abstract: Collaboration between two or more geographically dispersed teams has applications in research and training. In many cases, specialized devices such as robots may need to be shared between the collaborating groups, yet it would be expensive or even impossible to collocate them at a single physical location. We describe the design of a mixed reality test bed that allows dispersed humans and physically embodied agents to collaborate within a single virtual environment. We demonstrate our approach using Unity’s networking architecture as well as open-source robot software and hardware. In our scenario, a total of 3 humans and 6 drones must move through a narrow doorway while avoiding collisions in the physical spaces as well as the virtual space.

An Approach to Embodiment and Interactions with Digital Entities in Mixed-Reality Environments

Mohamed Handosa, Hendrik Schulze, Denis Gracanin, Matthew Tucker, Mark Manuel

Abstract: The advances in mixed reality (MR) technologies provide an opportunity to support the deployment and use of MR for training and education. We describe an approach that extends the functionality of the Microsoft HoloLens device to support a wider range of embodied interactions by making use of the Microsoft Kinect V2 device. The embodied interactions can support novel interaction scenarios, especially within the context of training and skills development, thereby removing or reducing the need for training equipment.

RIDERS: Road Inspection & Driver Simulation

Mauricio Roberto Veronez, Luiz Gonzaga Jr., Fabiane Bordin, Lucas Kupssinskü, Gabriel Lanzer Kannenberg, Tiago Duarte, Leonardo G. Santana, Jean Luca de Fraga, Demetrius Nunes Alves, Fernando Pinho Marson

Abstract: The main goal of this paper is to evaluate the use of a low-cost immersive driving simulator to improve teaching and learning in an undergraduate Transport Infrastructure course. The driving simulator was developed in a virtual reality environment to assist both the teaching of engineering and research on road safety. An experiment was conducted in the Transport Infrastructure 1 course for Civil Engineering students at a Brazilian university. The students developed a geometric design of a road, which was subsequently modeled in 3D and loaded into the simulator; the students then drove a vehicle in the immersive simulator on the same road they had designed. The usability of the system was assessed with the System Usability Scale (SUS). In an evaluation with 52 users, we obtained a SUS score of 73, indicating above-average usability and demonstrating that the immersive system is a suitable complementary tool for learning transport infrastructure.

Gaze Direction in a Virtual Environment Via a Dynamic Full-image Color Effect

Mason Andrew Smith, Ann McNamara

Abstract: For developers of immersive 360-degree virtual environments, directing the viewer’s gaze towards Points of Interest (POIs) is a challenge. Limited research exists testing the effectiveness of various gaze direction techniques. However, there is a lack of empirical research evaluating full-image color effects or “color grading” designed to direct the viewer’s gaze. We developed a novel VR gaze-directing stimulus using dynamic real-time image effects and tested its effectiveness in a user study. The stimulus was influenced by color psychology research and chosen by an informal pilot study. Results suggest that the stimulus encouraged participants to direct their gaze back towards POIs. In the majority of subjects who encountered the stimulus, their gaze was successfully directed back to POIs within a few seconds. While the task of holding viewer gaze in VR remains a challenge, this experiment has uncovered new information about the potential of color effect-based VR gaze direction.

Inverse Virtual Reality: Intelligence-Driven Mutually Mirrored World

Zhenliang Zhang, Benyang Cao, Jie Guo, Dongdong Weng, Yue Liu, Yongtian Wang

Abstract: With the integration of artificial intelligence into virtual reality, a new branch of virtual reality emerges, which we call inverse virtual reality (IVR). A typical IVR system contains both an intelligence-driven virtual reality and the physical reality, thus constructing an intelligence-driven mutually mirrored world. We propose the concept of IVR and describe the definition, structure, and implementation of a typical IVR system. A parallel living environment is proposed as a typical application of IVR, which reveals that IVR has significant potential to extend the human living environment.

Intraosseous Access Simulator in Newborns VR System

Sergio Medina-Papagayo, Byron Perez-Gutierrez, Lizeth Vega-Medina, Hernando Leon-Rodriguez, Norman Jaimes, Claudia Alarcon, Alvaro Joffre Uribe Quevedo

Abstract: Simulators for medical procedures are important tools for learning, teaching, and training, allowing practice in a wide range of life-like scenarios. In the field of vascular access, virtual reality has been emerging as a complement to address the current problems stemming from the low availability and high cost of simulation manikins for training. Intraosseous access in newborns is an invasive, fast, and effective medical procedure of high complexity, employed in critically ill newborns as an option for vascular access after peripheral access has failed. Vein vasoconstriction, present in neonatal shock and cardiorespiratory arrest among other life-threatening conditions, makes other forms of venous access difficult. Intraosseous access requires proper handling of the related equipment and knowledge, which is only possible with continuous training that cannot be done on real patients. Mastering this technique is required to preserve the patient’s life, achieve a good recovery, and reduce the risk of infection or even death. This paper presents the development of a newborn intraosseous access simulator for training, covering the required steps of the procedure. To increase the immersion of the simulation, a force-feedback haptic device simulates the needle insertion through the leg tissues down to the bone using a biomechanical tissue model, which is the most important skill to be developed in this procedure, and a consumer head-mounted display provides a stereo view of the operating room to give the user depth perception when approaching the patient’s leg. Our preliminary results were evaluated by a medical expert in terms of the prototype’s usability.

3D Touch-and-drag: Gesture-free 3D Manipulation with Finger Tracking

Thomas Jung, Patrick Bauer

Abstract: In this study, we define a new modeling technique called 3D touch-and-drag, wherein users select vertices by simply approaching them with a 3D cursor such as a forefinger. Operations are finished by removing the 3D cursor from a line or plane in 3D space. These lines or planes constrain the modeling operations, as is the case when using 3D widgets. User tests demonstrated that there was no significant difference in movement time between moving a sphere along a line using a pinch gesture and using the proposed technique. Since it is easier to select a vertex by just approaching it, compared with performing a pinch gesture, we believe that 3D touch-and-drag is more efficient than current techniques, while being just as precise.

AR in a Large Area through Instance Recognition with Hybrid Sensors

Takahiro Kashima, Osamu Tsukahara, Ken Miyamoto

Abstract: This paper presents the concept of instance recognition combining access points and vision for realizing augmented reality in a large indoor environment. The proposed study aims to reduce the computational cost and the mismatches that occur when the database for instance recognition is large and includes similar textures. The proposed method consists of database construction and instance recognition over that database. The database construction process pairs images with access points so that images can be managed through the access points. The instance recognition process then finds the best fit by incorporating both access points and vision. The evaluation results show that the proposed method requires 68% less computation time and achieves 10% higher recognition accuracy than our previous work using vision alone.
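
The abstract only outlines the hybrid lookup; the following Python sketch (with hypothetical field names and a toy similarity function) illustrates the general idea of filtering the image database by access-point overlap before running the much costlier visual matching.

```python
def ap_filter(query_aps, database, min_overlap=1):
    """Cheap pre-filter: keep only database images that were paired with
    at least `min_overlap` of the access points visible to the query."""
    return [e for e in database if len(query_aps & e["aps"]) >= min_overlap]

def recognize(query_aps, query_desc, database, visual_score):
    """Hybrid lookup: filter by access points first, then pick the best
    visual match among the (much smaller) candidate set."""
    candidates = ap_filter(query_aps, database) or database  # fall back if empty
    return max(candidates, key=lambda e: visual_score(query_desc, e["desc"]))

# Toy demo with 2-D "descriptors" and dot-product similarity (all hypothetical):
db = [{"id": "lobby",    "aps": {"ap1"},        "desc": (1.0, 0.0)},
      {"id": "corridor", "aps": {"ap2", "ap3"}, "desc": (0.0, 1.0)}]
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
print(recognize({"ap2"}, (0.1, 0.9), db, dot)["id"])  # -> corridor
```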

A study of cybersickness and sensory conflict theory using a motion-coupled virtual reality system

Adrian K. T. Ng, Leith K. Y. Chan, Henry Y. K. Lau

Abstract: Sensory conflict theory attempts to explain cybersickness in virtual reality (VR) systems through the mismatch between the visual and vestibular senses. This study examined whether coupling motion sensations to the visual stimulus in a VR setting could reduce discomfort. A motion-coupled VR system was used, in which a motion platform provides motion that supplements the visual stimulus from the head-mounted display. Participants experienced programmed visual and motion yaw rotations while viewing a virtual apartment. Three conditions were tested with respect to how the motion and visual stimuli synchronise with each other: purely visual, motion synchronised with visual, and a visually-levelled frame of reference. Results showed that providing matching visual-motion stimuli decreased the misery score (MISC) of cybersickness and increased the joyfulness score (JOSC) of participants’ subjective feeling.

The Effect of Immersion on Emotional Responses to Film Viewing in a Virtual Environment

AELEE KIM, Minha Chang, Yeseul Choi, Sohyeon Jeon, Kyoung-Min Lee

Abstract: In this study, we explore how immersion affects people’s sense of emotions in a virtual environment. The primary goals of this study are to analyze the possible use of virtual reality (VR) as an affective medium and research the relationship between immersion and emotion.

To investigate these objectives, we compared two viewing conditions (HMD vs. No-HMD) and applied two types of emotional content (horror and empathy) to examine whether the level of immersion could influence emotional responses. The results showed that viewers who watched the horror movie using an HMD felt more scared than those in the No-HMD condition. However, there were no significant emotional differences between the HMD and No-HMD conditions in the groups exposed to the empathy content. Given these results, we may assume that the effect of an immersive viewing experience on emotional responses in VR is deeply related to the degree of arousal and strong perceptual cues. The horror movie used in this study included the intense visual and audio stimuli of the typical horror film format. In contrast, viewers experienced less stimulating perceptual input when watching the empathetic movie.

In conclusion, VR undoubtedly elicits a more immersive experience and greater emotional responses to the horror film. This study has confirmed the efficacy of VR as an emotional amplifier and successfully demonstrated the important association between immersion and emotion in VR.

Concept for Rendering Optimizations for Full Human Field of View HMDs

Daniel Pohl, Nural Choudhury, Markus Achtelik

Abstract: To enable high immersion for virtual reality head-mounted displays (HMDs), a wide field of view of the display is required. Today’s consumer solutions are mostly around 90 to 110 degrees field of view. The full human field of view for both eyes together has been measured to be between 200 and 220 degrees. Prototypes of HMDs with such properties have been shown. As the rendering workload increases with more pixels to fill the field of view, we propose a novel rendering method optimized for HMDs that cover the full human field of view. We target lower end HMDs where the cost of eye tracking would increase the price too much. Our method works without eye tracking, making use of certain human vision properties that appear once the full human field of view is covered. We achieve almost twice the rendering performance using our method.

Using Pico Projectors With Spatial Contextual Awareness To Create Augmented Knowledge Spaces For Interdisciplinary Engineering Teams

Matthias Merk, Isabel Leber, Gabriela Tullius, Peter Hertkorn

Abstract: Engineers in the research project “Digital Product Life-Cycle” use a graph-based design language to model all aspects of the product they are working on. This abstract model is the basis for all further investigations, developments, and implementations. Particularly at early stages of development, collaborative decision making is very important. We propose a semantic augmented knowledge space, realized through mixed reality technology, to support engineering teams. To this end, we present an interaction prototype consisting of a pico projector and a camera. In our usage scenario, engineers augment different artefacts in a virtual working environment. Our prototype comprises both an interaction concept and a technical concept. To realise implicit and natural interactions, we conducted two prototype tests: (1) a test with a low-fidelity prototype and (2) a Wizard-of-Oz test. As a result, we present a prototype with interaction selection using augmentation spotlighting and an interaction zoom acting as a semantic zoom.

Real-time 3D Face Reconstruction and Gaze Tracking for Virtual Reality

Shu-Yu Chen, Lin Gao, Yu-kun Lai, Paul Rosin, Shihong Xia

Abstract: With the rapid development of virtual reality (VR) technology, VR glasses, a.k.a. head-mounted displays (HMDs), are widely available, allowing immersive 3D content to be viewed. A natural need for truly immersive VR is to allow bidirectional communication: the user should be able to interact with the virtual world using facial expressions and eye gaze, in addition to traditional means of interaction. Typical application scenarios include VR conferencing and virtual roaming, where ideally users are able to see other users’ expressions and have eye contact with them in the virtual world. Despite significant achievements in recent years in the reconstruction of 3D faces from RGB or RGB-D images, it remains a challenge to reliably capture and reconstruct 3D facial expressions, including eye gaze, when the user is wearing VR glasses, because the majority of the face is occluded, especially the areas around the eyes, which are essential for recognizing facial expressions and eye gaze. In this paper, we introduce a novel real-time system that is able to capture and reconstruct the 3D faces of users wearing HMDs and robustly recover eye gaze. We demonstrate the effectiveness of our system using live capture, and more results are shown in the accompanying video.

Phase-Aligned Foveated Rendering for Virtual Reality Headsets

Eric Lee Turner, Haomiao Jiang, Damien Saint Macary, Behnam Bastani

Abstract: We propose a novel method of foveated rendering for virtual reality, targeting head-mounted displays with large fields of view or high pixel densities. Our foveation method removes motion-induced flicker in the periphery by aligning the rendered pixel grid to the virtual scene content during rasterization and upsampling. This method dramatically reduces detectability of motion artifacts in the periphery without complex interpolation or anti-aliasing algorithms.

Voice Conversion System Based on Deep Neural Network Capable of Parallel Computation

Kunihiko Sato, Jun Rekimoto

Abstract: Voice conversion (VC) algorithms modify the speech of a particular speaker to resemble that of another speaker. Many existing virtual reality (VR) and augmented reality (AR) systems make it possible to change the appearance of users, and if VC is added, users can also change their voice. State-of-the-art VC methods employ recurrent neural networks (RNNs), including long short-term memory (LSTM) networks, for generating converted speech. However, it is difficult for RNNs to perform parallel computation because the computation at each timestep depends on the result of the previous timestep, which prevents them from operating in real time. In contrast, we propose a novel VC approach based on a dilated convolutional neural network (Dilated CNN), a deep neural network model that allows for parallel computation. We adapted the Dilated CNN model to perform convolutions in both the forward and reverse directions to ensure successful learning. In addition, to ensure the model can be parallelized during both the training and inference phases, we developed a model architecture that predicts all output values from the input speech alone and does not rely on predicted values for the next input. The results demonstrate that the proposed VC approach has a faster conversion rate than state-of-the-art methods, while slightly improving speech quality and maintaining speaker similarity.
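
No architecture details are given in the abstract; the PyTorch sketch below shows only the generic ingredient the approach relies on: a stack of dilated 1-D convolutions over spectral-feature frames, whose outputs for all timesteps can be computed in parallel, unlike an RNN. All dimensions, layer counts, and feature choices are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DilatedVC(nn.Module):
    """Sketch of a dilated-CNN voice converter. Input: a batch of
    spectral-feature sequences shaped (batch, features, time). With
    symmetric padding each layer sees both earlier and later frames,
    loosely mirroring the paper's forward-and-reverse convolutions."""
    def __init__(self, feat_dim=40, channels=64, layers=4):
        super().__init__()
        blocks, in_ch = [], feat_dim
        for i in range(layers):
            d = 2 ** i                     # dilation doubles per layer: 1, 2, 4, 8
            blocks += [nn.Conv1d(in_ch, channels, kernel_size=3,
                                 dilation=d, padding=d), nn.ReLU()]
            in_ch = channels
        self.net = nn.Sequential(*blocks)
        self.out = nn.Conv1d(channels, feat_dim, kernel_size=1)

    def forward(self, x):                  # x: (B, F, T)
        return self.out(self.net(x))       # converted features, same length T

x = torch.randn(2, 40, 100)                # 2 utterances, 40 features, 100 frames
print(DilatedVC()(x).shape)                # torch.Size([2, 40, 100])
```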

An Investigation of Head Motion and Perceptual Motion Cues' Influence on User Depth Perception of Augmented Reality Neurosurgical Simulators

Hamza Ghandorh, Roy Eagleson, Sandrine DeRibaupierre

Abstract: Training and planning for neurosurgery place many demands on junior neurosurgeons, including perceptual capacities. An effective method of deliberate training is to replicate the required procedures using neurosurgical simulation tools and to visualize a three-dimensional (3D) workspace. However, Augmented Reality (AR) neurosurgical simulators fall short for a variety of reasons, including users' distance underestimation. Few investigations have addressed improving users' depth perception in AR systems with perceptual motion cues in neurosurgical simulation tools for planning purposes. In this poster, we report a user study on whether head motion and perceptual motion cues have an influence on users' depth perception.

Virtual Content Creation Using Dynamic Omnidirectional Texture Synthesis

Chih-Fan Chen, Evan Suma Rosenberg

Abstract: We present a dynamic omnidirectional texture synthesis (DOTS) approach for generating real-time virtual reality content captured using a consumer-grade RGB-D camera. Compared to a single fixed-viewpoint color map, view-dependent texture mapping (VDTM) techniques can reproduce finer detail and replicate dynamic lighting effects that become especially noticeable with head tracking in virtual reality. However, VDTM is very sensitive to errors such as missing data or inaccurate camera pose estimation, both of which are commonplace for objects captured using consumer-grade RGB-D cameras. DOTS overcomes these limitations: our proposed optimization can synthesize a high-resolution view-dependent texture map for any virtual camera location. Synthetic textures are generated by uniformly sampling a spherical virtual camera set surrounding the virtual object, thereby enabling efficient real-time rendering for all potential viewing directions.
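
The abstract specifies uniform sampling of a spherical virtual camera set but not the sampling scheme; as a hedged illustration, here is one common way to place approximately uniform virtual cameras on a sphere (a Fibonacci lattice). The paper's actual scheme may differ.

```python
import math

def sphere_cameras(n, radius=1.0):
    """Approximately uniform virtual-camera positions on a sphere around
    the object, via the Fibonacci lattice; each camera looks at the
    object's center."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    pts = []
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n          # height, strictly inside (-1, 1)
        r = math.sqrt(1.0 - y * y)             # ring radius at this height
        theta = golden * i
        pts.append((radius * r * math.cos(theta),
                    radius * y,
                    radius * r * math.sin(theta)))
    return pts

cams = sphere_cameras(64)  # 64 viewpoints covering all viewing directions
print(len(cams), cams[0])
```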

An AR-Guided System for Fast Image-Based Modeling of Indoor Scenes

Daniel Andersen, Voicu Popescu

Abstract: We present a system that enables a novice user to acquire a large indoor scene in minutes as a collection of images that are sufficient for five degrees-of-freedom virtual navigation by image morphing. The user walks through the scene wearing an augmented reality head-mounted display (AR HMD) enhanced with a panoramic video camera. The AR HMD visualizes a 2D grid partitioning of a dynamically generated floor plan, which guides the user to acquire a panorama from each grid cell. The panoramas are registered offline using both AR HMD tracking data and structure-from-motion tools. Feature correspondences are established between neighboring panoramas. The resulting panoramas and correspondences support interactive rendering via image morphing with any view direction and from any viewpoint on the acquisition plane.

Augmented Reality System for Aiding Mild Alzheimer Patients and Caregivers

Keynes Masayoshi Kanno, Edgard Afonso Lamounier Jr., Alexandre Cardoso, Ederaldo José Lopes, Gerson Flávio Mendes de Lima

Abstract: Alzheimer's Disease (AD) has become ever more prominent within the area of healthcare. Studies show that it is the most common cause of dementia in older adults. In addition, a large percentage of patients and their caregivers still face enormous challenges concerning treatment and the performance of everyday tasks. Forgetfulness and impaired location awareness are recurrent symptoms. At the same time, smartphones have become a more common feature of their daily lives. Therefore, this paper presents a mobile application to help individuals diagnosed in the early stages of Alzheimer's Disease to identify objects and people. In addition, it can track the location of an AD individual, since they frequently become disoriented and lost. The application proposes an accessible interface, based on Augmented Reality techniques, that uses speech commands for different features: time reminders for taking medicine, identification of which medicine to take, and recognition of people from photos, among others. Tests revealed a promising interface using voice recognition and demonstrated the feasibility of locating individuals through the caregiver's mobile application.

Virtual Buzzwire: Assessment of a Prototype VR Game for Stroke Rehabilitation

Chris Christou, Despina Michael-Grigoriou, Dimitris Sokratous

Abstract: We created a VR version of the Buzzwire children’s toy as part of a project to develop tools for the assessment and rehabilitation of upper-body motor skills in people with dexterity impairment after stroke. In two pilot studies, participants wearing an HMD used a hand-held wand with precision tracking to traverse virtual ‘wires’. In the first study, we compared able-bodied participants’ performance with and without binocular viewing to establish a connection with previous experiments using physical versions of the game. Furthermore, we show that our extended measures could also discern differences between subjects’ dominant and non-dominant hands. In a second study, we assessed the usability of the system on a small sample of subjects with post-stroke hemiparesis. There was positive acceptance of the technology, with no fatigue or nausea, and the measurements highlighted differences between the hemiparetic and unaffected hand.

Augmented Reality Visualization of Joint Movements for Physical Examination and Rehabilitation

Henrique Galvan Debarba, Marcelo Elias de Oliveira, Alexandre Lädermann, Sylvain Chagué, Caecilia Charbonnier

Abstract: We present a visualization tool for human motion analysis in augmented reality. Our tool builds upon our previous work on joint biomechanical modelling for kinematic analysis, based on optical motion capture and personalized anatomical reconstruction of joint structures from medical imaging. It provides healthcare professionals with in situ visualization of joint movements, where bones are accurately rendered as a holographic overlay on the subject, as if the user had “X-ray vision”, in real time as the subject performs the movement.

A User-based Comparison of Two Augmented Reality Glasses

Elisa Maria Klose, Ludger Schmidt

Abstract: We present a scenario-based laboratory study with 40 participants comparing two augmented reality (AR) glasses in a travelling scenario. In a within-subject design, the binocular Epson Moverio BT-200 and the monocular Vuzix M100 were compared with regard to performance, acceptance, workload, and preference. While performance was equal with both glasses, the Epson Moverio BT-200 received higher acceptance and lower workload ratings and was preferred by the majority of the participants. The findings provide knowledge on human factors in the usage of AR glasses.

Effect of reclining angle on the perception of horizontal plane for HMD users

Hideki Kawai, Hiroki Hara, Yasuyuki Yanagida

Abstract: In recent years, head mounted displays (HMDs) have become widely used among general users thanks to the release of low-cost and high-performance versions. It is expected that virtual reality (VR) experiences will often be performed in a relaxed posture, while sitting on a reclining seat at home or in the supine position on a bed in rehabilitation institutions. However, there is no clear knowledge on the subject of how humans perceive the horizontal plane of a virtual world while using HMDs in reclining postures. In this study, we investigated how the angle of the subjective horizontal plane changes in relation to the angle of the upper body in a reclining seat. As a result, we found that the angle of the perceived horizontal plane changes depending on the angle of the upper body, and the physical direction of gravity has little effect on this perception.

AirwayVR: Learning Endotracheal Intubation in Virtual Reality

Pavithra Rajeswaran, Na-Teng Hung, Praveen Kumar, Thenkurussi Kesavadas, John Vozenilek

Abstract: Endotracheal intubation is a procedure in which a tube is passed through the mouth into the trachea (windpipe) to maintain an airway and provide artificial respiration. This lifesaving procedure, utilized in many clinical situations, requires complex psychomotor skills. Healthcare providers need significant training and experience to acquire skills necessary for a quick and atraumatic endotracheal intubation to prevent complications. However, medical professionals have limited training platforms and opportunities to be trained on this procedure. In this poster, we present a virtual reality based simulation trainer for intubation training. This VR based intubation trainer provides an environment for healthcare professionals to assimilate these complex psychomotor skills while also allowing a safe place to practice swift and atraumatic intubation. User survey results are presented demonstrating that VR is a promising platform to train medical professionals effectively for this procedure.

What Can VR Systems Tell Sports Players? Reaction-based Analysis of Baseball Batters in Virtual and Real Worlds

Mariko Isogawa, Dan Mikami, Takehiro Fukuda, Naoki Saijo, Kosuke Takahashi, Hideaki Kimata, Makio Kashino

Abstract: This study aims at ascertaining the applicability of a virtual reality environment (VRE) to sports training. Hitting an incoming object is one of the most common actions in various ball games, in which players are required to move to a suitable position and hit the object in a split second; this is a complicated task requiring spatio-temporal reaction to the object. Due to this complexity, how a VRE can serve as a training environment still remains an open question. In the work reported in this paper, we investigated the idea of substituting a VRE for an actual environment for training on the task of hitting a baseball. By focusing on the batter’s temporal behavior with real and virtual environments, we clarified factors that contribute to the batter’s reaction. This helped us understand how training VREs can be effectively utilized and the VRE requirements needed for sports training.

Preliminary Environment Mapping for Redirected Walking

Christian Hirt, Markus Zank, Andreas Kunz

Abstract: Redirected walking applications allow a user to explore large virtual environments in a smaller physical space by employing so-called redirection techniques. To further improve the immersion of a virtual experience, path planner algorithms were developed which choose redirection techniques based on the current position and orientation of the user. In order to ensure a reliable performance, planning algorithms depend on accurate position tracking using an external tracking system. However, the disadvantage of such a tracking method is the time-consuming preparation of the physical environment which renders the system immobile. A possible solution to eliminate this dependency is to replace the external tracking system with a state-of-the-art inside-out tracker based on the concept of Simultaneous Localization and Mapping (SLAM). In this paper, we present an approach in which we attach a commercially available SLAM device to a head-mounted display to track the head motion of a user. From sensor recordings of the device, we construct a map of the surrounding environment for future processing in an existing path planner for redirected walking.

Effect of Environment Size on Curvature Redirected Walking Thresholds

Anh Nguyen, Yannick Rothacher, Peter Brugger, Bigna Lenggenhager, Andreas Kunz

Abstract: Redirected walking (RDW) refers to a number of techniques that enable users to explore a virtual environment larger than the real physical space. These techniques are based on the introduction of a mismatch in rotation, translation and curvature between the virtual and real trajectories, quantified as rotational, translational and curvature gains. When these gains are applied within certain thresholds, the manipulation is unnoticeable and immersion is maintained. Existing studies on RDW thresholds reported a wide range of threshold values. These differences could be attributed to many factors such as individual differences, walking speed, or environment settings.

In this paper, we propose a study to investigate one of the environment settings that could potentially influence curvature RDW thresholds: the environment size. A detailed description of the study is provided; an adaptive two-alternative forced choice (2AFC) method is used to identify the detection thresholds.
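
As a rough sketch of what such an adaptive procedure can look like (the study's exact rules and step sizes are not given in the abstract), the following Python snippet implements a simple 1-up/1-down staircase that converges on a detection threshold; `respond` stands in for a participant's 2AFC answer and is simulated here.

```python
import random

def run_staircase(respond, start=0.10, step=0.02, n_reversals=8):
    """Adaptive 1-up/1-down staircase: lower the gain after a detection,
    raise it after a miss, and estimate the threshold as the mean gain
    at the reversal points."""
    gain, direction, reversals = start, None, []
    while len(reversals) < n_reversals:
        detected = respond(gain)            # True if the manipulation was noticed
        new_dir = -1 if detected else +1    # noticed -> lower the gain, else raise it
        if direction is not None and new_dir != direction:
            reversals.append(gain)          # record the gain at each reversal
        direction = new_dir
        gain = max(0.0, gain + new_dir * step)
    return sum(reversals) / len(reversals)

# Simulated observer with a (hypothetical) true threshold of 0.12:
print(round(run_staircase(lambda g: g > 0.12 or random.random() < 0.1), 3))
```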

A Comparative Study of the Learning Outcomes and Experience of VR in Education

Yoana Slavova, Mu Mu

Abstract: Virtual Reality (VR) is believed to have a pivotal role in transforming teaching and learning in higher education. The novelty factor and full immersion in a virtual environment can undoubtedly improve students’ attention. However, it is still unclear how the use of VR would impact the learning experience and outcomes with respect to knowledge acquisition. We conducted a comparative study on students’ performance in a standardised assessment when course content is delivered using VR and conventional lecture slides. Results show that improvement in social interaction and productivity tools in VR are essential for its greater impact in higher education.

Evaluation of Hand Gesture Annotation in Remote Collaboration Using Augmented Reality

Shohei Yamada, Naiwala Chandrasiri

Abstract: In this research, we devised a system that conveys work instructions to a worker by using the hand gestures of a remote helper. Using augmented reality, the system can display the helper’s hand model as if it actually existed in front of the worker. To evaluate the usefulness of the proposed system, we conducted comparative experiments on remote work support, comparing instruction annotations using the conventional method of drawn lines with the proposed method of hand gesture instructions. No significant difference was found between the two methods in terms of how easy the instructions were to understand. However, working time with hand gesture instructions was on average 20 seconds (19%) shorter than with the conventional method.

Using Industrial Robots as Haptic Devices for VR-Training

Sebastian Knopp, Mario Lorenz, Luigi Pelliccia, Philipp Klimant

Abstract: Many VR-training applications require the integration of haptics, e.g., for surgical training. However, surgical VR-training is still limited to minimally invasive surgeries. For surgeries where high forces occur, like hip replacement, no VR-training applications have been developed. One cause is the lack of appropriate haptic devices that can deliver high forces. Novel industrial collaborative robots can provide high forces, but they lack control interfaces that allow them to be used as haptic devices. We present four approaches for using these robots as general, multi-purpose haptic input and output devices. The implemented approach was integrated into a VR hip-replacement training application. An initial assessment demonstrates the general feasibility of our solution.

Augmentation of road surfaces with subsurface utility model projections

Stéphane Côté, Alexandra Mercier

Abstract: Subsurface utility work planning would benefit from augmented reality. Unfortunately, the exact pipe location is rarely known, which produces unreliable augmentations. We propose an augmentation technique that drapes 2D pipe maps onto the road surface and aligns them with corresponding features in the physical world using a pre-captured 3D mesh. The resulting augmentations are more likely to be displayed at the true pipe locations.

Object Size Perception in Immersive Virtual Reality: Avatar Realism Affects the Way We Perceive

Nami Ogawa, Takuji Narumi, Michitaka Hirose

Abstract: How does the representation of an embodied avatar influence the way in which a human perceives the scale of a virtual environment? It has been shown that the scale of the external environment is perceived relative to the size of one’s body. However, the influence of avatar realism on perceived scale has not been investigated, despite the fact that it is common to embody avatars of various representations, from iconic to realistic. This study examined how avatar realism affects perceived graspable object sizes as the size of the avatar hand changes. In the experiment, we manipulated the realism (high, medium, and low) and size (veridical and enlarged) of the avatar hand, and measured the perceived size of a cube. The results showed that the size of the cube was perceived to be smaller when the avatar hand was enlarged, for all degrees of realism of the hand. However, the enlargement of the avatar hand had a greater influence on the perceived cube size for the highly realistic avatar than for the medium- and low-realism conditions. This study sheds new light on the importance of avatar representation in the field of three-dimensional user interfaces, showing how it can affect the manner in which we perceive the scale of a virtual environment.

Physics-Inspired Input Method for Near-Field Mixed Reality Applications Using Latent Active Correction

Zhenliang Zhang, Yue Li, Dongdong Weng, Yue Liu, Yongtian Wang

Abstract: Calibration accuracy is one of the most important factors affecting the user experience in mixed reality applications. For a typical mixed reality system built on an optical see-through head-mounted display (OST-HMD), a key problem is how to guarantee the accuracy of hand-eye coordination by decreasing the instability of the eye and the HMD in long-term use. In this paper, we propose a real-time latent active correction (LAC) algorithm to decrease hand-eye calibration errors accumulated over time. Experimental results show that we can successfully apply the LAC algorithm to physics-inspired virtual input methods.

Evaluation of Hand-Based Interaction for Near-Field Mixed Reality with Optical See-Through Head-Mounted Displays

Zhenliang Zhang, Benyang Cao, Dongdong Weng, Yue Liu, Yongtian Wang, Hua Huang

Abstract: Hand-based interaction is one of the most widely used interaction modes in applications based on optical see-through head-mounted displays (OST-HMDs). In this paper, two such interaction modes, gesture-based interaction (GBI) and physics-based interaction (PBI), are developed within a mixed reality system to evaluate the advantages and disadvantages of different interaction modes for near-field mixed reality. The experimental results show that PBI leads to better user performance in terms of work efficiency in the proposed tasks; a t-test confirmed that the difference in efficiency between the interaction modes is statistically significant.

HangerOVER: Mechanism of Controlling the Hanger Reflex Using Air Balloon for HMD Embedded Haptic Display

Yuki Kon, Takuto Nakamura, Vibol Yem, Hiroyuki Kajimoto

Abstract: The Hanger Reflex is a phenomenon in which the head rotates unintentionally when it is sandwiched by a wire hanger. The reflex is effectively generated by pressing on specific points, and can be reproduced by pressing with an actuator. We propose the HangerOVER, an HMD-embedded haptic display that can provide both force and motion senses using the Hanger Reflex. In this paper, we designed a mechanism of controlling the Hanger Reflex using air balloons for HMD embedded haptic display, and confirmed the occurrence of the Hanger Reflex.

Impact of Alignment Point Distance Distribution on SPAAM Calibration of Optical See-Through Head-Mounted Displays

Kenneth Moser, Mohammed Safayet Arefin, J. Edward Swan

Abstract: The use of Optical See-Through Head-Mounted Displays (OST-HMDs) for presenting Augmented Reality experiences has become more common, due to the increasing availability of lower-cost head-worn device options. Despite this growth, commercially available OST hardware remains devoid of the integrated eye-tracking cameras necessary for automatically calibrating user-specific view parameters, leaving manual calibration methods as the most consistently viable option across display types. The Single Point Active Alignment Method (SPAAM) is currently the most-cited manual calibration technique, due to the relaxation of user constraints with respect to allowable motion during the calibration process. This work presents the first formal study directly investigating the effects that alignment point distribution imposes on SPAAM calibration accuracy and precision. A user experiment, employing a single expert user, is presented, in which SPAAM calibrations are performed under each of five conditions. Four of the conditions cross alignment distance (arm's length, room scale) with user pose (sitting, standing). The fifth condition is a control, in which the user is replaced with a rigidly mounted camera to remove the effect of noise from uncontrollable postural sway. The final experimental results show no significant impact on calibration due to user pose (sitting, standing). The control condition also did not differ from the user-produced calibration results, suggesting that postural sway was not a significant factor. However, both the user and control conditions show significant improvement using arm's-length alignment points over room-scale alignments, with an order-of-magnitude difference in eye-location estimate error between conditions.
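
SPAAM's core computation, not restated in the abstract, is a standard direct linear transform: each screen-to-world alignment contributes two linear constraints on a 3x4 projection matrix, which is recovered by homogeneous least squares. A minimal numpy sketch (the synthetic check at the end uses made-up numbers):

```python
import numpy as np

def spaam_calibrate(world_pts, screen_pts):
    """Recover the 3x4 projection from N >= 6 SPAAM alignments
    (3D world point <-> 2D screen point): the solution is the right
    singular vector of the stacked constraint matrix with the
    smallest singular value."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, screen_pts):
        P = [X, Y, Z, 1.0]
        A.append(P + [0.0] * 4 + [-u * p for p in P])
        A.append([0.0] * 4 + P + [-v * p for p in P])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)  # projection, defined up to scale

# Synthetic check: recover a known projection from 6 alignments.
P_true = np.array([[800., 0, 320, 0], [0, 800, 240, 0], [0, 0, 1, 0.5]])
world = np.random.rand(6, 3) + [0, 0, 2]          # points in front of the eye
proj = np.c_[world, np.ones(6)] @ P_true.T
screen = proj[:, :2] / proj[:, 2:]
P_est = spaam_calibrate(world, screen)
print(np.allclose(P_est / P_est[2, 3] * P_true[2, 3], P_true, atol=1e-4))
```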

Real-Time Marker-Based Finger Tracking with Neural Networks

Dario Pavllo, Thibault Porssut, Bruno Herbelin, Ronan Boulic

Abstract: Hands in virtual reality applications represent our primary means of interacting with the environment. Although marker-based motion capture with inverse kinematics (IK) works for body tracking, it is less reliable for fingers, which are often occluded from the cameras’ view. Many computer vision and virtual reality applications circumvent the problem by using an additional system (e.g., inertial trackers). We explore an alternative solution that tracks hands and fingers using solely a camera-based motion capture system with active markers, combined with machine learning techniques. Our finger animation is performed by a predictive model based on neural networks, trained on a movement dataset acquired from several subjects with a complementary (inertial) capture system. The system is as efficient as a traditional IK algorithm, provides a natural reconstruction of postures, and handles occlusions.

Augmented Reality-based Personalized Virtual Operative Anatomy for Neurosurgical Guidance and Training

Weixin Si, Xiangyun Liao, Qiong Wang, Pheng-Ann Heng

Abstract: This paper presents a novel augmented reality (AR) interactive environment for neurosurgical training. Compared with traditional virtual reality based neurosurgical simulators, our system provides a more natural and intuitive interaction style for surgeons. To achieve holographic visualization of a virtual brain on a 3D-printed skull (the workspace), the first step is to reconstruct the personalized anatomical structure from segmented MR imaging. Then, tailored to the computational power of the HoloLens, we employ the mass-spring method to model the mechanical response of the brain. After that, a precise registration method maps the virtual-real spatial information, overlaying the virtual operative brain on the workspace. In addition, a bimanual haptic interface is integrated into our simulator, bringing it closer to real neurosurgery. In experiments, we validate the accuracy of our registration method as well as the validity of the developed simulator. The results demonstrate that our simulator provides high-accuracy augmented visualization and deep immersion for novice surgeons.

Selecting Invisible Objects

Junwei Sun, Wolfgang Stuerzlinger

Abstract: We augment 3D user interfaces with a new technique that enables users to select objects that are invisible from the current viewpoint. We present a layer-based method for selecting invisible objects that works for arbitrary objects and scenes. A user study shows that with our new technique users can easily select hidden objects.
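
The abstract does not detail the layer mechanism; one plausible reading, sketched below with toy data, is to collect every object along the selection ray and index objects by depth layer so that occluded ones stay reachable. This is an illustrative guess, not the paper's verified algorithm.

```python
def pick_layers(hits):
    """Sort every object the selection ray intersects by distance;
    index i then addresses the i-th depth layer, so objects occluded
    from the current viewpoint remain selectable."""
    return [obj for _, obj in sorted(hits)]

# Toy example: three objects stacked along one selection ray.
hits = [(2.0, "wall"), (0.5, "chair"), (1.2, "table behind chair")]
layers = pick_layers(hits)
print(layers[1])  # one "next layer" press past the chair selects the hidden table
```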

Mobile AR In and Out: Towards Delay-based Modeling of Acoustic Scenes

Cumhur Erkut, Jonas Holfelt, Stefania Serafin

Abstract: We have previously presented an augmented reality (AR) audio application, where scattering delay networks efficiently generate and organize a reverberator, based on room geometry scanned by an AR device. The application allowed for real-time processing and updating of reflection path geometry and provided a proof-of-concept for plausible audio-spatial registration of a virtual object in real environments. Here we present our ongoing work that aims to extend the simulation to outdoor scenes by using the Waveguide Web, instead of the original formulation with the Scattering Delay Networks. The current implementation is computationally more demanding, but has a potential to provide more accurate second-order reflections, and therefore, better registering of audio-visual AR scenes.

VR-Assisted vs Video-Assisted Teacher Training

Jean-Luc Lugrin, Sebastian Oberdörfer, Alice Wittmann, Christian Seufert, Silke Grafe, Marc Erich Latoschik

Abstract: This paper compares teacher training in Virtual Reality (VR) to traditional approaches based on video analysis and reflection. Our VR-assisted teacher training targets classroom management (CM) skills, using a low-cost collaborative immersive VR platform. First results reveal a significant improvement using the VR approach.

Rendering of pressure and textures using wearable haptics in immersive VR environments

Giovanni Spagnoletti, Leonardo Meli, Tommaso Lisini Baldi, Guido Gioioso, Claudio Pacchierotti, Domenico Prattichizzo

Abstract: Haptic systems have only recently started to be designed with wearability in mind. Compact, unobtrusive, inexpensive, easy-to-wear, and lightweight haptic devices enable researchers to provide compelling touch sensations to multiple parts of the body, significantly increasing the applicability of haptics in many fields, such as robotics, rehabilitation, gaming, and immersive systems. In this respect, wearable haptics has a great potential in the fields of virtual and augmented reality. Being able to touch virtual objects in a wearable and unobtrusive way may indeed open new exciting avenues for the fields of haptics and VR. This work presents a novel wearable haptic system for immersive virtual reality experiences. It conveys the sensation of touching objects made of different materials, rendering pressure and texture stimuli through a moving platform and a vibrotactile motor. The device is composed of two platforms: one placed on the nail side of the finger and one in contact with the finger pad, connected by three cables. One small servomotor controls the length of the cables, moving the platform towards or away from the fingertip. One voice coil actuator, embedded in the platform, provides vibrotactile stimuli to the user.

Light Virtual Reality systems for the training of conditionally automated vehicle drivers

Daniele Sportillo, Alexis Paljic, Luciano Ojeda, Philippe Fuchs, Vincent Roussarie

Abstract: In conditionally automated vehicles, drivers can engage in secondary activities while traveling to their destination. However, drivers are required to respond appropriately, in a limited amount of time, to a take-over request when the system reaches its functional boundaries. In this context, Virtual Reality systems represent a promising training and learning tool to properly familiarize drivers with the automated vehicle and allow them to interact with the novel equipment involved. In this study, the effectiveness of a head-mounted display (HMD)-based training program for acquiring interaction skills in automated cars was compared to a user manual and a fixed-base simulator. Results show that the training system affects take-over performance as evaluated in a test drive in a high-end driving simulator. Moreover, self-reported measures indicate that the HMD-based training is preferred over the other systems.

Casting Virtual Shadows Based on Brightness Induction for Optical See-Through Displays

Shinnosuke Manabe, Sei Ikeda, Asako Kimura, Fumihisa Shibata

Abstract: This paper proposes a novel method for casting virtual shadows on real surfaces with an optical see-through head-mounted display, without any extra physical filter devices. Instead, the method presents shadows as the result of brightness induction: we place a texture of the real scene with a certain transparency around the shadow area to amplify the luminance of the surrounding area. To make this amplification unnoticeable, the transparency of the surrounding region is gradually increased with distance from the shadow region. In an experiment with 23 participants, we confirmed that users tend to perceive the shadow region as darker than a non-shadow area under conditions where a circular virtual shadow is placed on a flat surface.
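
As a toy illustration of the falloff the method describes (the exact profile is unspecified in the abstract; a linear ramp and all numeric values are assumptions), the opacity of the luminance-amplifying surround could be computed as:

```python
def surround_alpha(dist, shadow_radius=0.10, falloff=0.30):
    """Opacity of the luminance-amplifying surround as a function of
    distance (in scene units) from the shadow center: zero inside the
    shadow, strongest at its edge, fading linearly to transparent so
    the amplification stays unnoticeable."""
    if dist <= shadow_radius:
        return 0.0                    # the shadow itself is not brightened
    return max(0.0, 1.0 - (dist - shadow_radius) / falloff)

for d in (0.05, 0.10, 0.15, 0.30, 0.50):
    print(d, round(surround_alpha(d), 2))   # 0.0, 0.0, 0.83, 0.33, 0.0
```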

Model Retrieval by 3D Sketching in Immersive Virtual Reality

Daniele Giunchi, Stuart James, Anthony Steed

Abstract: We describe a novel method for searching 3D model collections using free-form sketches within a virtual environment as queries. As opposed to traditional Sketch Retrieval, our queries are drawn directly onto an example model. Using immersive virtual reality the user can express their query through a sketch that demonstrates the desired structure, color and texture.
Unlike previous sketch-based retrieval methods, users remain immersed within the environment without relying on textual queries or 2D projections which can disconnect the user from the environment. We show how a convolutional neural network (CNN) can create multi-view representations of colored 3D sketches. Using such a descriptor representation, our system is able to rapidly retrieve models and in this way, we provide the user with an interactive method of navigating large object datasets. Through a preliminary user study we demonstrate that by using our VR 3D model retrieval system, users can perform quick and intuitive search. Using our system users can rapidly populate a virtual environment with specific models from a very large database, and thus the technique has the potential to be broadly applicable in immersive editing systems.

Hybrid Decision Support System for Traffic Engineers

Manuela Uhr, Joachim Nitschke, Jingxin Zhang, Frank Steinicke

Abstract: We present a hybrid system combining immersive and non-immersive technology for traffic engineering experts in the cooperative process of construction site planning and decision-making. We exploit a four-sided CAVE setup, which allows a 360° view of the actual real-world location, while an interactive tabletop positioned in the center of the CAVE provides a map-based 2D planning view. By selecting different locations on the digital map, the 360° environment changes with respect to the selected spot. The tabletop can display more detailed data, e.g., traffic flow and traffic light programs. In the described setup, group decisions can be made more effectively than in current workplace situations. In preliminary focus group discussions, we received positive feedback, and we plan to expand the system’s features in the future.

Towards Mobile 3D Telepresence using Head-worn Devices and Dual-purpose Screens

Shoaib R Soomro, Osman Eldes, Hakan Urey

Abstract: Head-mounted displays and augmented reality headsets are emerging as the future of human-computer interaction. Such devices can display high-resolution 3D images and use on-board cameras to capture the surroundings of the user. However, capturing the user who is wearing the device, to facilitate 3D telepresence, is not possible with such headsets. Here we propose and demonstrate a new integrated platform that provides a mobile 3D telepresence experience using a head-worn device and a dual-purpose passive screen. At the core of this telepresence architecture is a portable multi-layered passive screen that facilitates stereoscopic 3D display using a pair of head-worn projectors and, at the same time, captures multi-perspective views of the user on a head-worn camera through reflections from the screen. The screen contains retroreflective material for stereo image display and an array of convex mirrors for 3D capture. The 3D telepresence is demonstrated using an experimental setup in which a local user wearing the developed head-worn device perceives 3D images on the dual-purpose screen, while the captured perspective views of that user are rendered as stereo viewpoints and shown to a remote user on a virtual reality headset.

Evaluation of Environment-Independent Techniques for 3D Position Marking in Augmented Reality

Wallace S Lages, Yuan Li, Doug Bowman

Abstract: Specifying 3D positions in the real world is an essential step to create augmented reality content. However, this task can be challenging when information about the depth or geometry of the target location is not available. In this paper, we evaluate alternative techniques for 3D pointing at a distance without knowledge about the environment. We present the results of two studies evaluating the accuracy of techniques based on geometry and human depth perception. We find that geometric methods provide higher accuracy but may suffer from low precision due to pointing errors. We propose a solution that increases precision by combining multiple samples to obtain a better estimate of the target position.
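
The geometric pointing methods the study evaluates are not spelled out in the abstract; a typical building block is triangulating a target from two pointing rays cast from different positions. The numpy sketch below computes the midpoint of the common perpendicular of two rays; multiple such estimates can then be combined (averaging is assumed here as the combination rule).

```python
import numpy as np

def ray_midpoint(o1, d1, o2, d2, eps=1e-9):
    """Best point estimate from two pointing rays: the midpoint of their
    common perpendicular. Returns None for near-parallel rays."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b, w = d1 @ d2, o1 - o2
    denom = 1.0 - b * b
    if denom < eps:                        # rays nearly parallel: unstable
        return None
    t1 = (b * (d2 @ w) - (d1 @ w)) / denom
    t2 = ((d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Two rays that intersect at (1, 0, 0):
p = ray_midpoint(np.array([0., 0., 0.]), np.array([1., 0., 0.]),
                 np.array([0., 1., 0.]), np.array([1., -1., 0.]))
print(p)  # [1. 0. 0.]
# Averaging several such estimates damps individual pointing errors:
# target = np.mean(np.stack(valid_estimates), axis=0)
```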

Towards Revisiting Passability Judgments in Real and Immersive Virtual Environments

Ayush Bhargava, Kathryn Lucaites, Leah Hartman, Hannah Solini, Jeffrey W. Bertrand, Andrew Robb, Christopher Pagano, Sabarish V. Babu

Abstract: Every task we perform in our day-to-day lives requires us to make judgments about size, distance, depth, etc. The same is true for tasks in an immersive virtual environment (IVE). Increasingly, Virtual Reality (VR) applications are being developed for training and entertainment, many of which require the user to determine whether they can pass through an opening. Typically, people determine their ability to pass through an aperture by comparing the width of their shoulders to the width of the opening. Thus, judgments of size and distance in an IVE are necessary for accurate judgments of passability. In this experiment, we empirically evaluate how passability judgments in an IVE, viewed through a Head-Mounted Display (HMD), compare to judgments made in the real world. An exact-to-scale virtual replica of the room and apparatus was used for the VR condition. Results indicate that the accuracy of passability judgments in the IVE is comparable to that in the real world.

Towards Situated Knee Trajectory Visualization for Self Analysis in Cycling

Oral Kaplan, Goshiro Yamamoto, Takafumi Taketomi, Yasuhide Yoshitake, Alexander Plopski, Christian Sandor, Hirokazu Kato

Abstract: Inflammation, stiffness, and swelling are frequently reported symptoms of patellar tendinitis among cyclists, making knee pain a consistently observed overuse injury in cycling. In this paper, we investigate the applicability of a knee trajectory visualization to self-analysis, for increasing awareness of movement patterns leading to injuries. We briefly explain overuse injuries and patellar instability, describe the experiments we conducted with cyclists to gather requirements, and finally illustrate an augmented reality concept. We also show two different types of visualization, one conventional and one video-based, together with participant opinions, and discuss how situated visualizations can be utilized to improve self-awareness of injury causes.

Adopting the Roll Manipulation for Redirected Walking

Tatsuki Yamamoto, Keigo Matsumoto, Takuji Narumi, Tomohiro Tanikawa, Michitaka Hirose

Abstract: The contribution of this paper is to propose a novel Redirected Walking (RDW) technique that applies a manipulation gain in the roll direction. RDW is a technique that enables users to explore a large Virtual Environment (VE) while walking within a physically limited space by manipulating their virtual vision. Thus far, studies have determined the detection thresholds for translation, rotation, and curvature gains, but these thresholds are limited to the yaw direction. In contrast, in other research areas, movements in the yaw and roll directions have sometimes been addressed at the same time. In the present study, we investigated the detection threshold of an inclination gain in the roll direction. We applied an inclination gain that increased gradually while the participants walked 3 meters straight ahead. The results showed that users can detect an inclination gain of 1.93° on the left-hand side and 1.39° on the right-hand side.

Teach Me a Story: An Augmented Reality Application for Teaching History in Middle School

Barbara Schiavi, Franck Gechter, Céline Gechter, Albert Rizzo

Abstract: Augmented Reality (AR) is now more and more widespread in many application fields. Thanks to recent progress in computing power, AR can now be used widely in education without any expensive additional devices. This paper presents feedback from an experimental protocol using an AR application as additional support for a History lesson in secondary school. Even though the technical part plays a lead role in the student experience, the most challenging issue is the choice of the lesson itself, which must satisfy several, sometimes contradictory, requirements.

Olfactory Display Based on Sniffing Action

Shingo Kato, Takamichi Nakamoto

Abstract: An olfactory display is a device that provides various scents to a user. Such devices are expected to be applied to VR, since olfactory stimuli influence human emotion and enhance the user experience. One of the main problems with conventional olfactory displays is that the odorants emitted from the device not only reach the nose but also spread into the ambient air, so that the user experience may be altered by the remaining odor. To solve this problem, we have developed a newly structured olfactory display that utilizes the human respiratory action. In this method, a DC fan is driven to create an odor stream in front of the nostril; thus, odor enters the nostril only when the user sniffs. The odor plume generation is based on the combination of a surface acoustic wave (SAW) atomizer with a micro-dispensing valve. We fabricated a prototype and then evaluated the waste odor emitted into the air using a commercially available gas detector. It was demonstrated that the new structure reduces waste odor emission into the air compared to the conventional method.

Deep Localization on Panoramic Images

Atsutoshi Hanasaki, Hideaki Uchiyama, Atsushi Shimada, Rin-ichiro Taniguchi

Abstract: We propose a framework for estimating a camera pose with respect to a panoramic image. The proposed framework comprises two main processes: deep neural network (DNN) based location estimation, and intensity-based registration for homography estimation. In a reference panoramic image, view locations, each of which corresponds to a certain region in the image, are first defined. Inside each view-location region, training images are synthesized by applying various transformations to images cropped locally within the region. Finally, the mapping function from each synthesized image to its location is trained with the DNN. At localization time, the view location of an input image is first estimated using the trained DNN. Then, the pose is computed by intensity-based registration between the input image and the panoramic image. With this framework, an accurate homography of an input image with respect to a reference panoramic image can be computed. In the evaluation, we investigate the performance of our framework compared with a Bag of Visual Words (BoVW) based method.
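
As a rough sketch of the second stage, intensity-based registration of the query image against the panorama region returned by the DNN could be done with OpenCV's ECC alignment; this is our illustration under assumed inputs (grayscale float32 images), not the authors' code.

```python
import cv2
import numpy as np

def refine_homography(pano_region, query):
    """Estimate the 3x3 homography aligning `query` to `pano_region`
    (both grayscale float32 arrays) by maximizing the ECC criterion."""
    warp = np.eye(3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(pano_region, query, warp,
                                   cv2.MOTION_HOMOGRAPHY, criteria, None, 5)
    return warp
```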

Biomechanical Parameters Under Curvature Gains and Bending Gains in Redirected Walking

Keigo Matsumoto, Ayaka Yamada, Anna Nakamura, Yasushi Uchimura, Keitaro Kawai, Tomohiro Tanikawa

Abstract: In this study, we examined the effect on walking biomechanics when the curvature of the walking path in virtual space is changed while the actual walking path remains constant. Curvature gains and bending gains were used to change the virtual walking path. We found a significant difference in most biomechanical parameters when curvature manipulation and bending manipulation were applied compared with the case in which they were not. The results also suggest that some parameters depend on vision, or on disagreement between vision and the other senses.

A Preliminary Investigation of the Effects of Discrete Virtual Rotation on Cybersickness

Andreas Ryge, Casper Vollmers, Jonatan Hvass, Lars Koreska Andersen, Jon Ram Bruun-Pedersen, Niels Christian Nilsson, Rolf Nordahl

Abstract: Most virtual reality (VR) applications require the user to travel through the virtual environment (VE). However, some users are susceptible to cybersickness, and this issue is particularly prominent if the user is physically stationary while virtually moving. One approach to minimizing cybersickness is to rotate the user in discrete steps. This poster presents a between-subjects study (n=42) comparing this approach to smooth virtual rotation. The results revealed a statistically significant increase in self-reported sickness after exposure to the VE in both conditions. No statistically significant differences between the two conditions were found.

A Path-based Attention Guiding Technique for Assembly Environments with Target Occlusions

Patrick Renner, Jonas Blattgerste, Thies Pfeiffer

Abstract: An important use case of augmented reality-based assistance systems is supporting users in search tasks by guiding their attention towards the relevant targets. This has been shown to reduce search time and errors, such as wrongly picked items or false placements. The optimization of attention-guiding techniques is thus one active area of research in AR-based assistance.

In this paper, we address the problem of attention guiding in domains with occluded targets. We propose and evaluate a variant of a line-based approach and show that it improves upon two existing approaches in a newly designed evaluation scenario.

Using vertex displacements to distort virtual bodies and objects while preserving visuo-tactile congruency during touch

Marius Rubo, Matthias Gamer

Abstract: Distorting virtual bodies during virtual body illusions is a key element in several fields of basic research and clinical application. This technique typically results in a loss of visuo-tactile congruency during touch, which is usually concealed by restricting participant movement. We present a method based on vertex displacement that preserves visuo-tactile congruency during touch while allowing a visually distorted virtual body and/or object to move more freely.
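
One plausible reading of the method, sketched below as our own assumption rather than the authors' code: displacement along vertex normals is attenuated around the currently touched vertex, so the visible contact point stays where the real touch is felt.

```python
import numpy as np

def displace_preserving_touch(vertices, normals, magnitude, touch_idx,
                              falloff=0.15):
    """Displace vertices along their normals to distort the body, but
    fade the displacement to zero at the touched vertex so visual and
    tactile contact points remain congruent.

    vertices, normals: (n, 3) arrays; magnitude: (n,) per-vertex scale;
    touch_idx: index of the vertex currently being touched."""
    d = np.linalg.norm(vertices - vertices[touch_idx], axis=1)
    weight = 1.0 - np.exp(-d / falloff)   # 0 at the touch point, -> 1 away
    return vertices + normals * (magnitude * weight)[:, None]
```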

Using cybersickness indicators to adapt navigation in virtual reality: a pre-study

Jeremy Plouzeau, Jean-Rémy Chardonnet, Frederic Merienne

Abstract: We propose an innovative method for navigating a virtual environment that adapts the acceleration parameters to users in real time in order to reduce cybersickness. Indeed, for most navigation interfaces the navigation parameters of rate-control devices are still fixed in advance, and inappropriate parameter settings may lead to strong sickness, making the application unusable. Past research found that accelerations in particular should not be set too high. Here, we define the accelerations as a function of a cybersickness indicator: electrodermal activity (EDA). A pre-study was conducted to test the effectiveness of our approach and showed promising results: cybersickness tends to decrease with our adaptive navigation method.
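
A minimal sketch of such an adaptation rule, under our own assumptions (the abstract does not give the exact mapping): navigation acceleration is scaled down as EDA rises above the user's resting baseline.

```python
def adapted_acceleration(base_accel, eda_now, eda_baseline, gain=0.5):
    """Scale the navigation acceleration down as electrodermal activity
    rises above the user's resting baseline; `gain` controls how
    aggressively the system reacts (placeholder value)."""
    stress = max(eda_now - eda_baseline, 0.0) / eda_baseline
    return base_accel / (1.0 + gain * stress)
```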

Personal Perspective: Using Modified World Views to Overcome Real-Life Limitations in Virtual Reality

Adrian H. Hoppe, Florian van de Camp, Rainer Stiefelhagen

Abstract: Virtual Reality opens up new possibilities, as it allows users to overcome real-life limitations and create novel experiences. When interacting with other people, it is beneficial to share a common viewpoint. We modify the virtual world to allow face-to-face interaction with another person while still retaining an optimal point of view on the presented data. This is done by adapting the virtual environment independently for each user, using translation, rotation, and scaling. The presented modification of the world offers a natural solution to the problems of collaborative content analysis. It is therefore beneficial for human-human interaction scenarios that support cooperative work.

A Threefold Approach for Precise and Efficient Locomotion in Virtual Environments with Varying Accessibility

Thomas Arnskov, Anders Elmholdt, Kristian Hagemann Jensen, Nicklas Kristoffersen, Jonas Litvinas, Frederik Waldhausen, Niels Christian Nilsson, Rolf Nordahl, Stefania Serafin

Abstract: This poster details the design and evaluation of Locomotion3 - a framework that allows users to freely alternate between real walking, walking-in-place (WIP), and a skateboard metaphor depending on whether navigation requires efficiency, precision, or both. The user study compared the framework to WIP locomotion and the skateboard metaphor and found that Locomotion3 achieves a balance between the efficiency of the skateboard metaphor and the precision of WIP locomotion.

Spatial Asynchronous Visuo-Tactile Stimuli influence Ownership of Virtual Wings

Anastasia Andreasen, Niels Christian Nilsson, Stefania Serafin

Abstract: This poster describes a within-subjects study of the virtual body ownership (VBO) illusion using the anatomically similar but morphologically different body of a virtual bat. Participants experienced visuo-tactile stimulation of their arms while seeing an object touch the wing of the bat. The mapping between the real and virtual touch points varied across three conditions: no spatial deviation between visual and tactile input, 50% deviation, and 70% deviation. The results suggest that the degree of experienced VBO varies across the conditions. The illusion was broken in the absence of visuo-tactile stimuli.

Agency Enhances Body Ownership Illusion of Being a Virtual Bat

Anastasia Andreasen, Niels Christian Nilsson, Stefania Serafin

Abstract: This poster describes a within-subjects study of agency's influence on virtual body ownership (VBO) using the anatomically similar but morphologically different body of a virtual bat. Participants were exposed to flight under four conditions: voluntary movement through the virtual environment (VE) with the avatar present, voluntary movement through the VE with the avatar absent, voluntary limb movement without movement through the VE, and finally involuntary movement of the avatar through the VE. The results suggest that agency enhances the VBO illusion most when participants have full control during flight locomotion.

Using EEG to Decode Subjective Levels of Emotional Arousal during an Immersive VR Roller Coaster Ride

Felix Klotzsche, Alberto Mariola, Simon Hofmann, Vadim V. Nikulin, Arno Villringer, Michael Gaebler

Abstract: Emotional arousal is a key component of a user’s experience in immersive virtual reality (VR). Subjective and highly dynamic in nature, emotional arousal involves the whole body and particularly the brain. However, it has been difficult to relate subjective emotional arousal to an objective, neurophysiological marker—especially in naturalistic settings. We tested the association between continuously changing states of emotional arousal and oscillatory power in the brain during a VR roller coaster experience. We used novel spatial filtering approaches to predict self-reported emotional arousal from the electroencephalogram (EEG) signal of 38 participants. Periods of high vs. low emotional arousal could be classified with accuracies significantly above chance level. Our results are consistent with prior findings regarding emotional arousal in less naturalistic settings. We demonstrate a new approach to decode states of subjective emotional arousal from continuous EEG data in an immersive VR experience.
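
The paper's spatial-filtering approach is more sophisticated, but a bare-bones version of the decoding idea (our sketch, assuming epoched EEG and binary arousal labels) is to classify log band power per channel:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def bandpower_features(epochs, fs, lo=8.0, hi=13.0):
    """Log alpha-band power per channel for each epoch.
    epochs: (n_epochs, n_channels, n_samples) array, fs in Hz."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, epochs, axis=-1)
    return np.log(np.mean(filtered ** 2, axis=-1))

# clf = LinearDiscriminantAnalysis().fit(bandpower_features(X_tr, fs), y_tr)
# acc = clf.score(bandpower_features(X_te, fs), y_te)
```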

Effects of Visual Realism and Moving Detail on Cybersickness

Matti Pouke, Arttu Tiiro, Steven M. LaValle, Timo Ojala

Abstract: In this study we compare two levels of visual detail - modern realistic graphics and detail reduction through cel-shading - in terms of the cybersickness experienced during virtual movement along a preprogrammed path within a scene depicting a real-world outdoor museum, viewed with an Oculus CV1. The cybersickness experience was quantified with the Fast Motion Sickness (FMS) scale and the Simulator Sickness Questionnaire (SSQ). We found weak evidence that realistic graphics are more sickness-inducing. Also, FMS scores peaked whenever a participant was entering a building.

Towards Standardization of Medical Trials using Virtual Experimenters

Zachariah Jestyn Inks, Matias Volonte, Sarah C. Beadle, Bjoern Horing, Andrew Robb, Sabarish V. Babu

Abstract: We describe a system for experiment standardization and medication distribution in medical trials using a virtual human. In our system, a virtual experimenter explains the experiment, medication, procedure, and risks involved through a large screen display. The participant is able to interact with and ask questions of the virtual experimenter through a touch-screen interface on an additional monitor. During the interaction, the virtual experimenter presents the participant with a pill, which is physically dispensed by a custom-made Arduino-based dispenser. We conducted an initial user evaluation of the system using a placebo response protocol and a perceived pain scale. In the study, participants submerged their hand in a hot water bath before and after interacting with the system and reported their perceived pain response. The system presented either a virtual human or a text interface, and either dispensed a pill described as an analgesic or provided no medication. Through this system, we propose the potential use of virtual humans as a method to provide a consistent and standardized interaction between participant and experimenter, while maintaining the benefits of social interaction in medication trials.

The Effect of Immersive Displays on Situation Awareness in Virtual Environments for Aerial Firefighting Air Attack Supervisor Training

Rory Michael Scott Clifford, Humayun Khan, Simon Hoermann, Mark Billinghurst, Robert Lindeman

Abstract: Situation Awareness (SA) is an essential skill in Air Attack Supervision (AAS) for aerial wildfire firefighting. The display types used for Virtual Reality Training Systems (VRTS) afford different visual SA depending on the field of view (FoV) as well as the sense of presence users can obtain in the virtual environment. We conducted a study with 36 participants to evaluate SA acquisition in three display types: a high-definition TV (HDTV), an Oculus Rift Head-Mounted Display (HMD), and a 270° cylindrical simulation projection display called the SimPit. We found a significant difference between the HMD and the HDTV, as well as between the SimPit and the HDTV, for the three levels of SA.

The Effects of Olfactory Stimulation and Active Participation on Food Cravings in Virtual Reality

Nikita Mae B. Tuanquin, Simon Hoermann, Carl Petersen, Robert Lindeman

Abstract: Previous work on combining Virtual Reality (VR) and visual cue-exposure therapy has shown promising results that suggest its potential as a tool to support treatments for eating disorders. Visual food cues can elicit cravings in people no matter where they get their exposure from (e.g., photograph, real world, or virtual world). However, there is little work on the influence of olfactory stimuli in VR. Consequently, we investigated the effects of olfactory stimuli and VR on food cravings. In particular, we examined the hypothesis that olfactory interaction with food in VR can further increase food cravings. The results of this study show that VR can elicit similar effects as exposure to traditional cues when compared to a neutral baseline. Furthermore, the added olfactory cues increased food cravings and the urge to eat the presented food, which also increased when participants were allowed to interact with the virtual food. In conclusion, VR was shown to have considerable potential to be a valid alternative to traditional food-exposure interventions.

Movement Visualizer for Networked Virtual Reality Platforms

Omar Shaikh, Yilu Sun, Andrea Stevenson Won

Abstract: We describe the design, deployment and testing of a module to track and graphically represent user movement in a collaborative virtual environment. This module allows for the comparison of ground-truth user/observer ratings of the affective qualities of an interaction with automatically generated representations of the participants’ movements in real time. In this example, we generate three charts visible both to participants and external researchers. Two display the sum of the tracked movements of each participant, and a third displays a “synchrony visualizer”, or a correlation coefficient based on the relationship between the two participants’ movements. Users and observers thus see a visual representation of “nonverbal synchrony” as it evolves over the course of the interaction. We discuss this module in the context of other applications beyond synchrony.

Heterogeneous, Distributed Mixed Reality Applications: A Concept

Pablo Figueroa, Tiberio Hernández, Frederic Merienne, Jean-Rémy Chardonnet, Jose Dorado, Jesús Sebastián López Pacheco

Abstract: This poster formulates the concept of heterogeneous distributed mixed reality (HDMR) applications in order to state some interesting research questions in this domain. HDMR applications give synchronous access to shared virtual worlds, from diverse mixed reality (MR) hardware, and at similar levels of functionality. We show the relationship between HDMR and previous concepts, state challenges in their development, and illustrate this concept and its challenges with an example.

Memory Task Performance across Augmented and Virtual Reality

Peter Willemsen, Edward Downs, William Jaros, Charles McGregor, Maranda Berndt, Alexander Passofaro

Abstract: As commodity virtual reality and augmented reality hardware becomes more accessible, the opportunity to use these systems for learning and training will increase. This study provides an exploratory look at performance differences for a simple memory-matching task across four different technologies that could easily be used for learning and training. We compare time and number of attempts to successfully complete a memory-matching game across virtual reality, augmented reality, a large touchscreen table-top display, and a real environment. The results indicate that participants took more time to complete the task in both the augmented reality and real conditions; these two conditions were statistically different from the two fastest conditions, virtual reality and the table-top touch display.

Human Identification using Neural Network-Based Classification of Periodic Behaviors in Virtual Reality

Duc-Minh Pham

Abstract: Many techniques help computer systems or devices identify their users, both to protect privacy, personal information, and sensitive data, and to provide appropriate treatment, advertisements, or benefits. With a passcode, password, fingerprint, or iris, people must explicitly perform required activities such as typing a code, presenting their eyes, or placing their fingers on a scanner. Those solutions suit high-security scenarios such as executing banking transactions and unlocking personal phones. In other systems, such as gaming machines and collaborative frameworks, which prioritize user experience and convenience, it would be better if a user profile could be collected and built implicitly. Among those systems, virtual reality (VR) is a new trend: a platform supporting not only fully immersive experiences for gamers but also a collaborative environment for students, researchers, and others. Current VR systems can track user physical activities via trackable devices such as the HMD and VR controllers, so we aim to use virtual reality itself as identification equipment. In virtual reality, we can easily recreate an invariant condition at any time, so that people are more likely to replicate their behaviors without external influences. We therefore investigate whether VR users can be classified based on their periodic interactions with virtual objects. We collect the position and direction of the user's head or hands while they perform a task and build a classification model from those data using a convolutional neural network. An experiment exploring the capability of the proposed technique yielded encouraging results, with a highest accuracy of 90.92%; identification in VR is hence potentially applicable. In the future, we plan a large-scale experiment with a larger group of participants to examine the strength of our method.
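
The abstract does not specify the network, but a small 1-D CNN over fixed-length windows of tracked pose samples is one natural instantiation; the sketch below (PyTorch, our assumptions throughout) treats each pose component as an input channel and each user as a class.

```python
import torch
import torch.nn as nn

class MotionIdentifier(nn.Module):
    """1-D CNN classifying users from windows of head/hand pose data;
    input: (batch, n_channels, n_samples), where channels hold position
    and direction components of the tracked devices."""
    def __init__(self, n_channels, n_users):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_users))

    def forward(self, x):
        return self.net(x)   # per-user logits
```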

Immersing Web3D Furniture into Real Interior Images

Chao Wangchao, Shuang Liang, Jiayuan Jia

Abstract: Platforms for interior DIY (do-it-yourself) design should offer a sufficiently realistic look and lightweight manual operation in interior modeling; flexibility and adaptability for online editing are also essential. Current purely Web3D or purely image-based platforms hardly meet these goals. This paper therefore presents a lightweight and immersive solution that interactively edits virtual 3D furniture into captured 2D interior pictures, places the furniture at optimal locations automatically, and renders it in real time so that it harmonizes with the real interior pictures in terms of geometric layout and lighting. Our contributions are: (1) interactive cuboid modeling from camera-captured interior pictures, without loss of realism and with little manual operation; (2) a lightweight automatic furniture-arrangement method with enough flexibility and adaptability for online editing; (3) lightweight image-based lighting (IBL) and physically based rendering (PBR) to make virtual furniture blend into real interior images more authentically. Compared with existing online systems, this solution can provide low-cost, convenient, pervasive online services for interior DIY design over the mobile Internet.

Path Prediction using LSTM Network for Redirected Walking

Yong-Hun Cho, Dong-Yong Lee, In-Kwon Lee

Abstract: Redirected walking enables an immersive walking experience in a limited-sized room. To apply redirected walking efficiently and minimize the number of resets, an accurate path prediction algorithm is required. We propose a data-driven path prediction model using a Long Short-Term Memory (LSTM) network. User path data was collected via a path-exploration experiment in a maze-like environment and fed into the LSTM network. Our algorithm can predict a user's future path based on the user's past position and facing-direction data. We compare our path prediction results with actual user data and show that our model can accurately predict a user's future path.
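
A minimal version of such a predictor, sketched in PyTorch under our own assumptions (2-D position plus facing angle in, a future position offset out):

```python
import torch
import torch.nn as nn

class PathPredictor(nn.Module):
    """LSTM mapping a window of past (x, y, facing) samples to a
    predicted future (x, y) offset."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, seq):            # seq: (batch, time, 3)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])   # predict from the last hidden state
```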

The Relationship between Visual Attention and Simulator Sickness - A Driving Simulation Study

Anne Hösch, Sandra Poeschl, Florian Weidner, Roberto Walter, Nicola Doering

Abstract: Although visual attention cues are of particular importance for driving simulation tasks, research on the relationship between visual attention and simulator sickness is scarce. This exploratory study investigates this relation with a laboratory study in a fixed-base driving simulator (N = 36). No significant correlation between visual attention and simulator sickness was found, but the direction of the relation shows a negative tendency.

Smart adaptation of BIM for virtual reality, depending on building project actors’ needs: the nursery case

Frederic Merienne, Tiberio Hernández, Pablo Figueroa, Pierre Raimbaud, Florence Danglade, Ruding Lou

Abstract: Nowadays, virtual reality (VR) is widely used in the AEC (architecture, engineering, and construction) industry. One crucial issue is how to reuse Building Information Modeling (BIM) models in VR applications. This paper presents an approach for smart adaptation of BIM models for use in VR scenes, following the needs expressed by building project actors. The main adaptation consists of filtering the BIM data to keep only what is necessary for VR, according to the user's objectives. Moreover, the VR system should be chosen according to the intended use of the VR model. This approach is applied to a case study of a nursery building project.

Illusory body ownership between different body parts: Synchronization of right thumb and right arm

Ryota Kondo, Yamato Tani, Maki Sugimoto, Kouta Minamizawa, Masahiko Inami, Michiteru Kitazaki

Abstract: Illusory body ownership can be induced by visual-tactile stimulation or visual-motor synchronicity. We aimed to test whether a right thumb could be remapped to a virtual right arm and illusory body ownership of the virtual arm induced through synchronous movements of the right thumb and the virtual right arm. We presented the virtual right arm in synchronization with movements of a participant’s right thumb on a head-mounted display (HMD). We found that the participants felt as though their right thumb became the right arm, and that the right arm belonged to their own body.

Real-time control operation support of unstable system by visual feedback

Tomohiro Ichiyama, Atsushi Matsubayashi, Yasutoshi Makino, Hiroyuki Shinoda

Abstract: In this paper, we show that an inverted pendulum can be stabilized manually even when a user knows neither the physical characteristics nor the current state of the pendulum. We display two markers: one indicates the current position of the base of the pendulum, and the other indicates the target position where the base should be located 0.3 seconds later. Subjects can stabilize the pendulum for a significantly longer time than when viewing the real pendulum directly, just by chasing the target marker.

Scope of Manipulability Sharing: a Case Study for Sports Training

Yoshiyuki Tanaka, Tadayoshi Aoyama, Mitsuhisa Shiokawa

Abstract: Recently, advanced information and communication technology and robotics have been used to develop sports-like game applications and low-cost interface devices, such as Wii Sports, to encourage exercising in a room. Such applications can provide easy-to-understand visual feedback for players by using a virtual reality head-mounted display. However, they do not provide quantitative information about the body forms required during sports training to improve players' performance and skill. The purpose of this study is to develop an intelligent scope of manipulability sharing that considers the dynamic change in the human body form (structure) during sports motion, providing an evaluation result based on manipulability theory to both the player and the instructor in real time. Evaluation tests with a prototype system using smart glasses and a Kinect sensor were conducted to verify the effectiveness of the proposed scope of manipulability sharing in pitching and batting motions.

Behavioral Simulation of Passengers in a Waiting Hall

Shaohua Liu, Xiyuan Song, Hao Jiang, Ming Shi, Tianlu Mao

Abstract: In this paper, we introduce a behavioral decision and execution method to simulate crowded passengers in a waiting hall. The method, as well as its simulation framework, is designed for the special purpose of passenger safety investigation. It supports the simulation of both regular crowded-passenger behaviors and emergency passenger behavior. Scenarios with different timetables and density-control measures can easily be set up and simulated for safety purposes.

Simulator Sick but still Immersed: A Comparison of Head-Object Collision Handling and their Impact on Fun, Immersion, and Simulator Sickness

Peter Ziegler, Daniel Roth, Andreas Knote, Michael Kreuzer, Sebastian von Mammen

Abstract: We compared three techniques for handling head-object collisions in room-scale virtual reality (VR). We developed a game whose mechanics induce such collisions, which we addressed either (1) not at all, (2) by fading the screen to black, or (3) by restricting translation, i.e., correcting the virtual offset in such a way that no penetration occurred. We measured these conditions' impact on simulator sickness, fun, and perceived immersion. We found that the translation-restricted method yielded the greatest immersion but also contributed the most to simulator sickness.
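
Condition (3) can be sketched as follows (our simplification; `scene_collides` is a hypothetical scene query): the rendered head position simply keeps the last collision-free offset instead of penetrating geometry.

```python
def resolve_head_collision(prev_pos, desired_pos, scene_collides):
    """Return the head position to render: the desired (tracked) position
    if it is collision-free, otherwise the last valid position, so no
    penetration is ever shown."""
    return desired_pos if not scene_collides(desired_pos) else prev_pos
```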

Space Tentacles - Integrating Multimodal Input into a VR Adventure Game

Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik

Abstract: Multimodal interfaces for Virtual Reality (VR), e.g., based on speech and gesture input/output (I/O), often exhibit complex system architectures. Tight coupling between the required I/O processing stages, the underlying scene representation, and the simulator system's flow of control tends to result in high development and maintenance costs. This paper presents a maintainable solution for realizing such interfaces by means of a cherry-picking approach: a reusable multimodal I/O processing platform is combined with the simulation and rendering capabilities of the Unity game engine, allowing developers to exploit the game engine's superior API usability and tool support. The approach is illustrated through the development of a multimodal VR adventure game called Space Tentacles.

Gaze Guidance in Immersive Environments

Steve Grogorick, Georgia Albuquerque, Marcus Magnor

Abstract: We investigate the efficiency of five different gaze guidance techniques for immersive environments, probing our peripheral vision’s sensitivity to different stimuli embedded in complex, real-world panorama still images. We conducted extensive user studies for a commercially available headset as well as in a custom-built dome projection environment. The dome enables us to create true 360° visual immersion at high-resolution, akin to what may be expected of future-generation VR headsets. Evaluation with high-quality eye tracking shows that local luminance modulation as proposed by Bailey et al. is the most effective technique, eliciting saccades to the target region with up to 40% success rate within the first second.

Design of a Virtual Reality and Haptic Setup Linking Arousals in Training Scenarios: a Preliminary Stage

Konstantinos Koumaditis, Francesco Chinello, Sarune Venckute

Abstract: Using Virtual Reality (VR) to realise immersive training environments is not a new concept; investigating arousal in immersive environments, however, is. By arousal, we denote general physical and psychological activation which, in the form of anxiety and stress for example, can affect trainees' performance. In this work, we describe the setup design for a two-phase explorative experiment linking arousal and performance during training in a VR environment. To do so we use a well-crafted, previously appraised VR puzzle game, questionnaires (i.e., the NASA Task Load Index), and sensors (skin conductance response / pulse). The experiment will involve participants from the public who will be trained in two predefined processes of varying difficulty.

Knowledge Spaces in VR: Intuitive Interfacing with a Multiperspective Hypermedia Environment

Peter Gerjets, Martin Lachmair, Johannes Lohmann, Martin V. Butz

Abstract: Virtual reality technologies, along with motion-based input devices, allow for the design of innovative interfaces between learners and digital knowledge resources. These interfaces might facilitate knowledge work in educational and scientific contexts. Compared to 2D interfaces, immersive 3D environments provide greater flexibility regarding interface design; however, so far no general, theory-driven, and validated design principles are available. Seeing that complex learning environments can foster the development of various cognitive abilities, such as multiperspective reasoning skills (MPRS), such design principles are highly desirable. Using multiperspective hypermedia environments (MHEs) as a testbed, the presented project aims to identify and evaluate design principles derived from cognitive science. We will create and study interactive, immersive 3D interfaces to MHEs using virtual reality technology. To evaluate the developed system, we will contrast the acquisition of MPRS in 2D and 3D learning environments. We expect the developed design principles to be directly applicable to enhancing the accessibility of other knowledge environments.

Smart choices for deviceless and device-based manipulation in Immersive Virtual Reality

Fabio Marco Caputo, Daniel Mendes, Alessia Bonetti, Giacomo Saletti, Andrea Giachetti

Abstract: The choice of a suitable method for object manipulation is one of the most critical aspects of virtual environment design. It has been shown that different environments or applications might benefit from direct manipulation approaches, while others might be more usable with indirect ones, exploiting, for example, three dimensional virtual widgets. When it comes to mid-air interactions, the success of a manipulation technique is not only defined by the kind of application but also by the hardware setup, especially when specific restrictions exist. In this paper we present an experimental evaluation of different techniques and hardware for mid-air object manipulation in immersive virtual environments (IVE). We compared task performances using both deviceless and device-based tracking solutions, combined with direct and widget-based approaches. We also tested, in the case of freehand manipulation, the effects of different visual feedback, comparing the use of a realistic virtual hand rendering with a simple cursor-like visualization.

Light Projection-Induced Illusion for Controlling Object Color

Ryo Akiyama, Goshiro Yamamoto, Toshiyuki Amano, Takafumi Taketomi, Alexander Plopski, Christian Sandor, Hirokazu Kato

Abstract: Using projection mapping, we can control the appearance of real-world objects by projecting colored light onto them. Because a projector can only add illumination to the scene, only a limited color gamut can be presented through projection mapping. In this paper we describe how the controllable color gamut can be extended by accounting for human perception and visual illusions. In particular, we induce color constancy to control what color space observers will perceive. In this paper, we explain the concept of our approach, and show first results of our system.

Collaborative Production Line Planning with Augmented Fabrication

Doris Aschenbrenner, Meng Li, Radoslaw Dukalski, Jouke Verlinden, Stephan Lukosch

Abstract: The project “Factory-in-a-day” aims at reducing the installation time of a new hybrid robot-human production line from the weeks or months that current industrial systems take down to one day. The ability to rapidly install (and reconfigure) production lines where robots work alongside humans will strongly reduce operating costs and open a range of new opportunities for industry. In this paper, we explore a method of collaborative fabrication planning with the help of Augmented Reality as part of the Augmented Fabrication concept. To plan a new production line, two co-located workers at the factory wear Microsoft HoloLens head-mounted displays and thus share a common visual context on the planned positions of the robots and production machines. They are assisted by an external remote expert, connected via the Internet, who is virtually co-located. We developed three different visualizations of the state of the local collaboration and plan to compare them in a user study.

COP: A New Continuous Packing Layout for 360 VR Videos

Qikai Pei, Juan Guo, Haiwen Lu, Guilong Ma, Wensong Li, Xinyu Zhang

Abstract: We present a new projection format and packing layout for 360 VR videos using octahedron mapping. A spherical video is projected onto an octahedron, where the upper and lower hemispheres correspond to the upper and lower halves of the octahedron, respectively. The four regular triangles in each half of the octahedron are transformed into isosceles right triangles and packed into a square while keeping adjacent edges adjacent. The two equal squares resulting from the upper and lower halves of the octahedron are placed side by side, adjoined by a common edge. This generates a rectangle with a 2:1 aspect ratio. We demonstrate that our new projection format and layout have advantages in uniformity of pixel density, internal continuity, and rectangle aspect ratio when encoding 360 VR videos.
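
COP's exact triangle packing differs in detail, but it builds on the standard octahedral mapping of directions to a square, sketched here (y-up convention, our assumptions):

```python
import numpy as np

def octa_encode(direction):
    """Map a 3-D direction to [0, 1]^2 via octahedron projection: the
    upper hemisphere fills the inner diamond, the lower hemisphere is
    folded outward into the corners."""
    v = np.asarray(direction, dtype=float)
    v = v / np.sum(np.abs(v))           # project onto the octahedron
    x, y, z = v
    if y < 0.0:                         # fold the lower hemisphere
        x, z = ((1.0 - abs(z)) * np.copysign(1.0, x),
                (1.0 - abs(x)) * np.copysign(1.0, z))
    return np.array([x, z]) * 0.5 + 0.5
```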

Immersive Virtual Fieldwork: Advances for the Petroleum Industry

Luiz Gonzaga Jr., Mauricio Roberto Veronez, Gabriel Lanzer Kannenberg, Demetrius Nunes Alves, Caroline Lessio Cazarin, Leonardo Campos Inocencio, Fernando Pinho Marson, Jean Luca de Fraga, Leonardo G. Santana, Fabiane Bordin, Laís Vieira de Souza, Francisco Manoel Wohnrath Tognoli

Abstract: Laser scanning and photogrammetry techniques have been broadly adopted by the Oil & Gas industry for modeling petroleum reservoir analogues. Beyond the benefits of the digital data itself, the computer systems employed by geoscientists for interpretation and modeling tasks provide high-quality rendering, point-cloud surface meshes, and photo-realistic textured models. However, these systems commonly use 2-D displays: the 3-D models and information are projected onto the screen, providing limited visualization and a restrictive toolset for interpretation. This work proposes to break this paradigm by developing a fully immersive system capable of virtually teleporting geoscientists to the fieldwork site and providing a complete toolset for outcrop interpretation. The system has been evaluated and validated by geologists with different skills and has emerged as a useful and attractive toolset for the Oil & Gas industry.

User performance of VR-based tissue dissection under the effects of force models and tracing speeds

Fernando Trejo, Yaoping Hu

Abstract: Significant research effort has been devoted to the development of force models that estimate soft-tissue biomechanical responses, finding application in virtual reality (VR) based surgery training simulation. Nonetheless, the effects of force models on user performance of surgical tasks at different translation speeds are still unclear. Thus, this work evaluated the effects of a simple Weibull and a realistic Analytic force model on 10 naïve human subjects performing 1 degree-of-freedom (DOF) brain-tissue dissection tasks on a VR simulator at speeds of 0.10, 1.27, and 2.54 cm/s. Relying on 4 objective and 5 subjective performance metrics, two-way and one-way ANOVA analyses showed that a realistic force model such as the Analytic model is required to lessen the workload perceived by users only at a low dissection speed of 0.10 cm/s, like that observed in neurosurgery. It was also found that dissections performed at 0.10 cm/s demand more refined manual skills than those at higher speeds. This finding is consistent with the lengthy training curricula required to master surgical skills.

Do Textures and Global Illumination Influence the Perception of Redirected Walking Based on Translational Gain?

Kristoffer Waldow, Arnulph Fuhrmann, Stefan Michael Grünvogel

Abstract: For locomotion in virtual environments (VE), the method of redirected walking (RDW) enables users to explore large virtual areas within a restricted physical space by (almost) natural walking. The trick behind this method is to manipulate the virtual camera in a user-undetectable manner that leads to a change in the user's movements. If the virtual camera is manipulated too strongly, the user notices the manipulation and reacts accordingly. We studied human perception of RDW under different levels of realism in rendering the virtual scene.

An exploration on the integration of vibrotactile and force cues for 3D interactive tasks

Stanley Tarng, Aida Erfanian, Yaoping Hu, Frederic Merienne

Abstract: Vibrotactile and force cues of the haptic modality are increasingly used to facilitate interactive tasks in three-dimensional (3D) virtual environments (VE). While maximum likelihood estimation (MLE) explains the integration of multi-sensory cues in many studies, an existing work yielded mean and amplitude mismatches when using MLE to interpret the integration of vibrotactile and force cues. To investigate these mismatches, we proposed mean-shifted MLE and conducted a study comparing MLE and mean-shifted MLE. Mean-shifted MLE shares the same additive assumption about the cues as MLE, but takes into account the mean differences between the cues. In a VE, the study replicated the visual scene, the 3D interactive task, and the cues from the existing work. All participants in the study were biased to rely on the vibrotactile cue for their task, departing from the unbiased reliance on both cues in the existing work. After validating the replications, we applied MLE and mean-shifted MLE to interpret the integration of the vibrotactile and force cues. As in the existing work, MLE failed to explain the mean mismatch. Mean-shifted MLE remedied this mismatch, but the amplitude mismatch remained. Further examination revealed that the integration of vibrotactile and force cues might violate the additive assumption of MLE and mean-shifted MLE. This sheds light on modeling the integration of vibrotactile and force cues to aid 3D interactive tasks within VEs.
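
For reference, standard MLE fuses two cues by precision weighting; the mean-shifted variant described above can be sketched by offsetting one cue's mean before fusion (our reading of the abstract):

```python
def mle_estimate(s_vib, var_vib, s_force, var_force, shift=0.0):
    """Precision-weighted fusion of a vibrotactile and a force cue.
    With shift=0 this is standard MLE; a nonzero shift offsets the
    force cue's mean first (mean-shifted MLE, as we read it)."""
    w_vib = (1.0 / var_vib) / (1.0 / var_vib + 1.0 / var_force)
    return w_vib * s_vib + (1.0 - w_vib) * (s_force + shift)
```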

A Method of View-dependent Stereoscopic Projection on Curved Screen

Juan Liu, Hanchao Li, Lu Zhao, Siwei Zhao, Guowen Qi, Yulong Bian, Xiangxu Meng, Chenglei Yang

Abstract: In this paper, we present a method of view-dependent stereoscopic projection on a curved screen. It allows the user to walk around with a correct perspective view of the virtual scene, consistent with his or her location. To solve the problem of distortion and drift of virtual objects when projecting view-dependent scene images on a curved screen, we apply a dynamic parallax adjustment to the stereoscopic images according to the viewpoint. User evaluation shows that our proposed approach is effective in improving the visual experience.

A Multisensory Virtual Environment for OSH Training

Mina Tahsiri, Glyn Lawson, Che Abdullah, Tessa Roper

Abstract: This paper presents a multisensory, low-cost virtual training simulator developed in Unity 3D with the aim of improving the effectiveness of Occupational Safety and Health (OSH) training. The prototype system provides heat and smell feedback driven by an Arduino microcontroller and triggered based on the proximity of the avatar to receptive colliders within the Virtual Environment (VE). The prototype enables the creation of bespoke virtual representations using the 3D scanning function of the Google Tango device, making multisensory VE OSH training a feasible and versatile approach in the near future.

Touchless Haptic Feedback for VR Rhythm Games

Orestis Georgiou, Craig Jeffrey, Ziyuan Chen, Bao Xiao Tong, Shing Hei Chan, Boyin Yang, Adam Harwood, Tom Carter

Abstract: Haptics is an important part of the VR space as seen by the plethora of haptic controllers available today. Recent advancements have enabled touchless haptic feedback through the use of focused ultrasound thereby removing the need for a controller. Here, we present the world’s first mid-air haptic rhythm game in VR and describe the reasoning behind its interface and gameplay, and in particular, how these were enabled by the effective use of state-of-the-art ultrasonic haptic technology.

Touchless Haptic Feedback for Supernatural VR Experiences

Jonatan Martinez, Daniel Griffiths, Valerio Biscione, Orestis Georgiou, Tom Carter

Abstract: Haptics is an important part of the VR space as seen by the plethora of haptic controllers available today. By using a novel ultrasonic haptic device, we developed and integrated mid-air haptic sensations without the need to wear or hold any equipment in a VR game experience. The compelling experience combines visual, audio and haptic stimulation in a supernatural narrative in which the user takes on the role of a wizard apprentice. By using different haptified patterns we could generate a wide range of sensations which mimic supernatural interactions (wizard spells). We detail our methodology and briefly discuss our findings and future work.

Reducing VR Sickness through Peripheral Visual Effects

Helmut Buhler, Sebastian Misztal, Jonas Schild

Abstract: This paper proposes and evaluates two novel visual effects that can be applied to Virtual Reality (VR) applications to reduce VR sickness with head-mounted displays (HMD). Unlike other techniques that pursue the same goal, our approach allows a user to move continuously through a virtual environment without reducing the perceived field of view (FOV). A within-subjects study with 18 users compares reported sickness between the two effects and a baseline. The results show lower mean sickness for the two novel effects; however, the difference is not statistically significant across all users, replicating the large variability in individual reactions found in previous studies. In summary, reducing optical flow in peripheral vision is a promising approach. Future potential lies in adjusting visual effect parameters to maximize impact for large user groups.

Comparing VR and Non-VR Affordances of a Push Broom

Noah Edward Miller, Peter Willemsen, Robert Feyen

Abstract: This study explores how VR controller interfaces affect how participants hold a virtual push broom in VR. We aim to understand how the affordances available with current VR controllers and a custom broom VR controller impact user hand placement in a visual VR broom task. We compare hand placement in two VR conditions against hand placement holding a real push broom. Our goal is to understand the roles that controllers have on recreating physically accurate actions in VR training scenarios. The results from this initial pilot show an effect of the broom controller condition but also that the order in which some of the conditions were presented to subjects affected the way subjects held the VR and real push brooms in subsequent actions. Future work will continue to explore how controller affordance may impact the role of training in VR.

Head-to-Body-Pose Classification in No-Pose VR Tracking Systems

Tobias Feigl, Christopher Mutschler, Michael Philippsen

Abstract: Pose tracking does not yet work reliably in large-scale interactive multi-user VR. Our novel head orientation estimation combines a single inertial sensor located at the user's head with inaccurate positional tracking. We exploit the fact that users tend to walk in their viewing direction and classify head and body motion to estimate heading drift. This enables low-cost, long-term stable head orientation estimation. We evaluate our method and show that it sustains immersion.
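
The drift-correction idea might look like the following sketch (ours, with placeholder parameters): whenever the user is walking, the IMU yaw is nudged toward the walking direction derived from positional tracking.

```python
import math

def correct_yaw_drift(imu_yaw, p0, p1, dt, speed_threshold=0.5, alpha=0.02):
    """Nudge the IMU yaw estimate toward the walking direction computed
    from two recent positions p0, p1 ((x, y) tuples), exploiting that
    users tend to look where they walk. Angles are in radians."""
    speed = math.hypot(p1[0] - p0[0], p1[1] - p0[1]) / dt
    if speed > speed_threshold:
        walk_heading = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
        err = math.atan2(math.sin(walk_heading - imu_yaw),
                         math.cos(walk_heading - imu_yaw))
        imu_yaw += alpha * err            # small correction per update
    return imu_yaw
```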

Immersive Visual Analysis to Explore Mystery at Wildlife Preserve

Bo Sun, Aleksandr William Fritz, Wei Xu

Abstract: In this paper, we aim to extend our work on the VAST Challenge at last year's IEEE Visualization conference, where we used visual analytics to investigate an environmental problem: why a beloved local bird, the Rose-Crested Blue Pipit, is declining. Given the large-scale, multi-dimensional datasets on chemical releases, we develop immersive visual analytics to find connections between manufacturers and sensor readings, and eventually to discover the potential reason for the bird's decline.

Redirected Scene Rotation for Immersive Movie Experiences

Travis Stebbins, Eric Ragan

Abstract: Virtual reality (VR) allows for immersive and natural viewing experiences; however, these often expect users to be standing and able to physically turn and move easily. Seated VR applications, specifically immersive 360-degree movies, must be appropriately designed to facilitate user comfort and prevent sickness. Our research explores a scene-rotation-based method for redirecting a viewer's gaze and its effectiveness given three parameter adjustments: rotation delay, rotation speed, and angle threshold. The technique may be useful in the development of future immersive movie or VR experiences. From the research, we expect to discover which parameters prove most effective at redirecting a viewer's gaze in an immersive movie experience. We present preliminary developments and an informal usability evaluation to collect participant feedback about preference, comfort, and sickness.
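
One way the three parameters could interact per frame, sketched under our own assumptions (the defaults are placeholders, not the study's values):

```python
import math

def redirect_step(gaze_offset_deg, time_beyond_threshold_s, dt,
                  angle_threshold=30.0, rotation_speed=10.0, delay_s=2.0):
    """Scene rotation (degrees) to apply this frame: once the gaze has
    been more than `angle_threshold` off-center for `delay_s`, rotate
    the scene back toward the gaze at `rotation_speed` deg/s."""
    if abs(gaze_offset_deg) <= angle_threshold:
        return 0.0
    if time_beyond_threshold_s < delay_s:
        return 0.0
    step = min(rotation_speed * dt, abs(gaze_offset_deg))
    return math.copysign(step, -gaze_offset_deg)
```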

Reverse Disability Simulation in a Virtual Environment

Tanvir Irfan Chowdhury, Sharif Mohammad Shahnewaz Ferdous, Tabitha C. Peck, John Quarles

Abstract: Disability Simulation (DS) is an approach used to modify attitudes regarding people with disabilities. DS places people without disabilities in situations designed to let them experience a disability. In this research we investigate reverse disability simulation (RDS) in a virtual reality environment. In an RDS, people with disabilities perform tasks that are made easier in the virtual environment than in the real world. We hypothesized that putting people with disabilities in an RDS would increase confidence and enable efficient task completion. To investigate this hypothesis, we conducted a within-subjects experiment in which participants performed a virtual “kicking a ball” task in two different conditions: a normal condition without RDS (i.e., the same difficulty as in the real world) and an easy condition with RDS (i.e., physically easier than the real world but visually the same). The results from our study suggest that RDS increased participants' confidence.

Comparing VR Display with Conventional Displays for User Evaluation Experiences

Quinate Chioma Ihemedu-Steinke, Gerrit Meixner, Michael Weber

Abstract: The adoption of virtual reality in industrial sectors other than gaming is spreading by the day. Many people are still sceptical regarding the potential and advantages of virtual reality, because these have not been made obvious enough to convince them. We investigated how virtual reality affects the concentration, involvement, and enjoyment of users during evaluation sessions. Eighty-four participants drove a virtual automated driving simulator with and without the Oculus Rift CV1. The experiment showed statistically significant results for all variables: virtual reality enables better concentration, involvement, and enjoyment compared with conventional displays.

Immersive Robot-Assisted Virtual Reality Therapy for Neurologically-Caused Gait Impairments

Negin Hamzeheinejad, Samantha Straka, Dominik Gall, Franz Weilbach, Marc Erich Latoschik

Abstract: This paper presents an immersive Virtual Reality (VR) therapy system for gait rehabilitation after neurological impairments, e.g., caused by accidents or strokes. The system aims to increase patients' motivation to perform repeated exercises by providing stimulating virtual exercise environments, with the final goal of increasing therapy efficiency and effectiveness. Instead of simply working out on immobile stationary devices, patients can walk through and explore a stimulating virtual world. Patients are immersed in the virtual environments using a Head-Mounted Display (HMD). Walking patterns are captured by motion sensors attached to the patients' feet to synchronize locomotion speed between the real and virtual worlds. A user-centered design process evaluated usability, user experience, and feasibility to confirm the overall goals of the system before any sensitive clinical trials with impaired patients start. Overall, the results demonstrated encouraging user experience and acceptance, and the system did not induce any unwanted side effects, e.g., nausea or cybersickness.

Investigating the Reason for Increased Postural Instability in Virtual Reality for Persons with Balance Impairments

Sharif Mohammad Shahnewaz Ferdous, Tanvir Irfan Chowdhury, Imtiaz Muhammad Arafat, John Quarles

Abstract: The objective of this study is to investigate how different visual components of Virtual Reality (VR), such as field of view, frame rate, and display resolution affect postural stability in VR. Although previous studies identified these visual components as some of the primary factors that differ significantly in VR from reality, the effect of each component on postural stability in VR is yet unknown. While most people experience postural instability in VR, it is worse for people with balance impairments (BIs). This is likely because they depend more on their visual cues to maintain postural stability. Therefore, we conducted a within-subject study with ten people with balance impairments due to Multiple Sclerosis (MS). In each condition, we varied one component and kept all other components fixed. Each participant explored the virtual environment (VE) in a controlled fashion to make sure that the effect of the visual components was consistent for all participants. Results from our study suggest that decreased field of view and frame rate have significant effects on postural stability, but the effect of display resolution is inconclusive. Therefore, VR systems targeting people with balance impairments should focus on improving field of view and frame rate rather than display resolution.

Force Push: Exploring Expressive Gesture-to-Force Mappings for Indirect 3D Object Manipulation

Run Yu, Doug Bowman

Abstract: We present Force Push, a hyper-natural gesture-to-action mapping for object manipulation in virtual reality (VR). It maps hand gestures derived from human-human interaction to physics-driven movement of an object and uses expressive features of gestures to enhance controllability. An initial user study shows both the performance and broader user experience qualities of Force Push as compared to a traditional direct control mapping.

A Calibration Method for On-vehicle AR-HUD System Using Mixed Reality Glasses

Nianchen Deng, YanQing Zhou, Jiannan Ye, Xubo Yang

Abstract: Calibration is a key step for on-vehicle AR-HUD systems to ensure that the augmented information is correctly viewed by the driver. State-of-the-art calibration methods require setting up spatial tracking devices or attaching markers to the vehicle, which is time-consuming and error-prone. In this paper, we present a novel multi-viewpoint calibration method for AR-HUD that uses only mixed reality glasses such as the HoloLens. The full calibration process can be completed in one minute and provides highly precise calibration results, with no markers attached to the vehicle.

VR Touch Museum

Yuchen Zhao, Regis Kopper, Maurizio Forte

Abstract: In recent years, digital technology has become ubiquitous in museums. It has changed the ways museums document, preserve, and present cultural heritage. We are now exploring whether there are ways to provide more historical context for a displayed object and make an exhibition more immersive. We therefore built a project called “The Virtual Reality Touch Museum” and ran an experiment to test whether such a museum performs better on presence and learning achievement. As the results show, our VR Touch Museum was outstanding in presence, but more research is necessary to verify how effective it is for learning.

Pop the Feed Filter Bubble: Making Reddit Social Media a VR Cityscape

Rhema Linder, Alexandria M Stacy, Nic Lupfer, Andruid Kerne, Eric Ragan

Abstract: On Reddit, users from tens of thousands of communities create and promote internet content, including pictures, videos, news, memes, and creative writing. However, as with most social media feeds, subscribing to a very small subset of the available content creates filter bubbles. These bubbles, while created unintentionally, skew perceptions of reality. This phenomenon provides an impetus for researchers to design techniques for breaking out of filter bubbles. Virtual reality provides opportunities for new environments that contextualize social media among multiple perspectives. We present one solution to the filter bubble problem: Blue Link City, which enables contextualized exploration of Reddit.

Please Don’t Puke: Early Detection of Severe Motion Sickness in VR

Courtney Hutton, Shelby Ziccardi, Julio A Medina, Evan Suma Rosenberg

Abstract: Motion sickness is a potentially debilitating side effect experienced by certain users of virtual reality systems. Unexpected results from a user study on redirected walking suggest that there is a need to quickly identify participants who have an extremely low tolerance for virtual motion manipulations and remove them from the experience. In this poster, we investigate the use of a previously introduced “fast motion sickness” measure to identify potential outliers with heightened levels of sensitivity. This work demonstrates a promising experimental methodology and suggests possible shared characteristics among users in this group.

Redirected Walking in Irregularly Shaped Physical Environments with Dynamic Obstacles

Haiwei Chen, Samantha Chen, Evan Suma Rosenberg

Abstract: Redirected walking is a Virtual Reality (VR) locomotion technique that enables the exploration of a large virtual environment (VE) within a small physical space via real walking. Thus far, the physical environment has generally been assumed to be rectangular, static, and free of obstacles. However, it is unlikely that real-world locations that may be used for virtual reality fulfill these constraints. In addition, accounting for dynamic obstacles such as people helps increase user safety when the view of the physical world is occluded by a head-mounted display. In this work, we present the use of a planning algorithm and its initial implementation that redirects the user in an irregularly shaped physical environment with dynamically moving obstacles. This technique represents an important step towards the use of redirected walking in more dynamic, real-world environments.

Vive Tracking Alignment and Correction Made Easy

Alex Peer, Peter Ullrich, Kevin Ponto

Abstract: The alignment of virtual and real coordinate spaces is a general problem in virtual reality research, as misalignments may influence experiments that depend on the correct representation or registration of objects in space. This work proposes an automated alignment and correction procedure for the HTC Vive tracking system, using three Vive Trackers arranged to define the desired origin and axes in real space. The proposed technique should facilitate the alignment of real and virtual scenes and the automatic correction of a source of error in the Vive tracking system shown to cause misalignments on the order of tens of centimeters. An initial proof-of-concept simulation on recorded data demonstrates a significant reduction of error.
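
A minimal sketch of the frame construction, assuming three tracker positions p0, p1, p2 (numpy arrays in tracking coordinates) where p0 marks the desired origin, p0→p1 the desired x axis, and p2 a point in the desired xz plane; this is not the authors' code:

    import numpy as np

    def frame_from_trackers(p0, p1, p2):
        """Return a 4x4 transform mapping tracking space into the desired frame."""
        x = p1 - p0
        x /= np.linalg.norm(x)
        v = p2 - p0
        y = np.cross(v, x)            # normal of the x/v plane becomes the y axis
        y /= np.linalg.norm(y)
        z = np.cross(x, y)
        R = np.stack([x, y, z])       # rows: desired axes in tracking coordinates
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = -R @ p0            # translate so p0 maps to the origin
        return T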

A Realtime Virtual Grasping System for Manipulating Complex Objects

Hao Tian, Changbo Wang, Xinyu Zhang

Abstract: With the introduction of new VR/AR devices, realistic and fast interaction within virtual environments becomes increasingly appealing. The challenge, however, is to make interactions with virtual objects accurately reflect interactions with physical objects in real time. In this paper, we present a virtual grasping system for multi-fingered hands manipulating complex objects. Human-like grasping postures and realistic grasping motions guarantee a physically plausible appearance for hand grasping. Our system does not require any pre-captured motion data and is fast enough to allow real-time interaction when grasping complex objects.

Investigating a Sparse Peripheral Display in a Head-Mounted Display for VR Locomotion

Abraham M. Hashemian, Alexandra J Kitson, Thinh Nguyen-Vo, Hrvoje Benko, Wolfgang Stuerzlinger, Bernhard E. Riecke

Abstract: Head-mounted displays (HMDs) provide immersive experiences for virtual reality. However, their field of view (FOV) is still relatively small compared to that of the human eye, which adding sparse peripheral displays (SPDs) could address. We designed a new SPD, SparseLightVR2, which increases the HMD’s FOV to 180° horizontally. We evaluated SparseLightVR2 in a study (N=29) comparing three conditions: 1) no SPD, where the peripheral display (PD) was inactive; 2) extended SPD, where the PD provided visual cues consistent with and extending the HMD’s main screen; and 3) counter-vection SPD, where the PD’s visuals were flipped horizontally during VR travel to provide optic flow in the direction opposite to the travel. Participants experienced passive motion along a linear path and reported introspective measures such as the sensation of self-motion. Results showed that, compared to no SPD, both the extended and counter-vection SPDs provided a more natural experience of motion, while the extended SPD also enhanced vection intensity and the believability of movement. Yet visually induced motion sickness (VIMS) was not affected by display condition. To investigate the reason behind these non-significant results, we conducted a follow-up study in which users increased peripheral counter-vection visuals on the central HMD screen until they nulled out vection. Our results suggest that extending HMDs through SPDs enhances vection, naturalness, and the believability of movement without increasing VIMS, but that reversed SPD motion cues might not be strong enough to reduce vection and VIMS.

Evaluation of Optical See-Through Head-Mounted Displays in Training for Critical Care and Trauma

Ehsan Azimi, Alexander Winkler, Emerson Tucker, Long Qian, Manyu Sharma, Jayfus Tucker Doswell, Nassir Navab, Peter Kazanzides

Abstract: One major cause of preventable death is a lack of proper skills for providing critical care. Conventional training for advanced emergency medical procedures is often limited to a verbal block of instructions and/or an instructional video. In this study, we evaluate the benefits of using an optical see-through head-mounted display (OST-HMD) for training of caregivers in an emergency medical environment. A rich user interface was implemented that provides 3D visual aids including images, text and tracked 3D overlays for each task. A user study with 20 participants was conducted for two medical tasks, where each subject received conventional training for one task and HMD training for the other task. Our results indicate that using a mixed reality HMD is more engaging, improves the time-on-task, and increases the confidence level of users.

Towards evaluating the effects of stereoscopic viewing and haptic interaction on perception-action coordination

David Brickler, Sabarish V. Babu, Jeffrey W Bertrand, Ayush Bhargava

Abstract: This paper details the results of an initial empirical evaluation examining how stereoscopic viewing and haptic feedback affect fine motor actions in a pick-and-place task, similar to the peg-transfer task in the FLS training curriculum for laparoscopic surgical training. In a between-subjects experiment, we examined the effects of stereoscopic viewing and simulated tactile feedback on the number of collisions and the time to complete the task during participants’ fine motor actions in the near field. We found that stereo and haptic feedback contributed to task performance in different ways. Specifically, the mean time to complete the trials was significantly higher in the absence of tactile feedback than when it was present, and the mean number of collisions was significantly higher in the presence of stereo than in its absence.

Tracking a Consumer HMD with a Third Party Motion Capture System

Henrique Galvan Debarba, Marcelo Elias de Oliveira, Alexandre Lädermann, Sylvain Chagué, Caecilia Charbonnier

Abstract: We describe a calibration procedure for tracking consumer head-mounted displays (HMDs) with a third-party tracking solution. The calibration consists of registering the center of projection of the rendering hardware to a third-party tracked object attached to it, and is performed by matching motion datasets from the HMD’s built-in and the third-party tracking solutions. We demonstrate this calibration with an augmented reality optical see-through HMD, where the correctness of the alignment is critical for visually matching a real object with a virtual overlay. We measured a mean error of 3 mm (SD = 1 mm) in the projected overlay image for objects at a distance of 70 cm.
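
The core registration step can be sketched as a least-squares rigid alignment (Kabsch algorithm) between time-synchronized position samples from the two systems; time synchronization and the projection-center offset are omitted in this sketch:

    import numpy as np

    def kabsch(A, B):
        """Find R, t minimizing ||R @ A + t - B|| over paired Nx3 samples.

        A: positions from the HMD's built-in tracking; B: the same instants
        as seen by the third-party system.
        """
        ca, cb = A.mean(axis=0), B.mean(axis=0)
        H = (A - ca).T @ (B - cb)                  # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cb - R @ ca
        return R, t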

Immersive Exploration of OSGi-based Software Systems in Virtual Reality

Martin Misiak, Doreen Seider, Sascha Zur, Arnulph Fuhrmann, Andreas Schreiber

Abstract: We present an approach for exploring OSGi-based software systems in virtual reality. We employ an island metaphor that represents every module as a distinct island. The resulting island system is displayed within the confines of a virtual table, where users can explore the software visualization at multiple levels of granularity by performing intuitive navigational tasks. Our approach allows users to gain a first overview of the complexity of an OSGi-based software system by interactively exploring its modules as well as the dependencies between them.
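
The data behind such a visualization is essentially a bundle dependency graph; a hypothetical sketch, assuming the manifests mapping has already been resolved from the bundles' Import-Package headers:

    import networkx as nx

    def dependency_graph(manifests):
        """manifests: dict mapping bundle name -> list of imported bundle
        names (hypothetical, pre-resolved input)."""
        g = nx.DiGraph()
        for bundle, imports in manifests.items():
            g.add_node(bundle, size=len(imports))   # island size from coupling
            for dep in imports:
                g.add_edge(bundle, dep)             # bridge between islands
        return g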

A Calibration Method for Large-Scale Projection Based Floor Display System

Chun Xie, Hidehiko Shishido, Yoshinari Kameda, Kenji Suzuki, Itaru Kitahara

Abstract: We propose a calibration method for deploying a large-scale projection-based floor display system. In our system, multiple projectors are installed on the ceiling of a large indoor space, such as a gymnasium, to achieve a large projection area on the floor. The projection results suffer from both perspective distortion and lens distortion. In this paper, we use projector-camera systems, in which a camera is mounted on each projector, together with the “straight lines have to be straight” methodology to calibrate our projection system. Unlike conventional approaches, our method does not use any calibration board and imposes no requirements on the overlap between the projections and the cameras’ fields of view.
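
As an illustration of the plumb-line idea, one can fit a single radial distortion coefficient so that undistorted samples of each detected line become collinear; a real system would also fit higher-order terms and the distortion center, and this is not the authors' full pipeline:

    import numpy as np
    from scipy.optimize import minimize_scalar

    def undistort(pts, k1, center):
        d = pts - center
        r2 = (d ** 2).sum(axis=1, keepdims=True)
        return center + d * (1.0 + k1 * r2)

    def straightness(pts):
        """Sum of squared distances of points to their best-fit line (via PCA)."""
        c = pts - pts.mean(axis=0)
        _, s, _ = np.linalg.svd(c, full_matrices=False)
        return s[1] ** 2                   # energy off the principal direction

    def fit_k1(lines, center):
        """lines: list of Nx2 arrays, each sampling one physically straight edge."""
        cost = lambda k1: sum(straightness(undistort(l, k1, center)) for l in lines)
        # Search bounds depend on the pixel units of the line samples.
        return minimize_scalar(cost, bounds=(-1e-6, 1e-6), method='bounded').x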

A Framework for Virtual 3D Manipulation of Face in Video

Jungsik Park, Jong-Il Park

Abstract: This paper presents a framework that enables a user to manipulate his or her face shape three-dimensionally in video. Existing face manipulation applications and methods have limitations: they operate on a single photo, manipulate only in the image domain, or permit only limited deformation. In the proposed framework, the face is tracked in video using landmark tracking and by fitting a 3D morphable face model to the image; the face model is then deformed according to user input with a mesh deformation method and rendered onto the camera preview with texture taken from the frame image. Unlike conventional applications and research on face manipulation, the proposed framework thus allows the user to perform free-form 3D deformation of the face in video and to view the deformed face from various viewpoints.
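
One early step in such a pipeline can be sketched as recovering head pose by matching 2D landmarks to corresponding points on a generic 3D face model via PnP; full morphable-model fitting and mesh deformation are beyond this sketch, and both point sets below are hypothetical inputs:

    import numpy as np
    import cv2

    def head_pose(landmarks_2d, model_points_3d, frame_size):
        """landmarks_2d: Nx2 image points; model_points_3d: matching Nx3
        points on a neutral face mesh (hypothetical inputs)."""
        h, w = frame_size
        f = w  # common focal-length guess when the camera is uncalibrated
        K = np.array([[f, 0, w / 2], [0, f, h / 2], [0, 0, 1]], dtype=np.float64)
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(model_points_3d, dtype=np.float64),
            np.asarray(landmarks_2d, dtype=np.float64),
            K, distCoeffs=None, flags=cv2.SOLVEPNP_ITERATIVE)
        return rvec, tvec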

HIPS - A Virtual Reality Hip Prosthesis Implantation Simulator

Maximilian Kaluschke, Rene Weller, Gabriel Zachmann, Luigi Pelliccia, Mario Lorenz, Philipp Klimant, Johannes P. G. Atze, Falk Möckel, Sebastian Knopp

Abstract: We present the first VR training simulator for hip replacement surgery. We solved the main challenge of this task – delivering high, stable forces during the milling process while simultaneously providing very sensitive feedback – by using an industrial robot for force output and by developing a novel massively parallel haptic rendering algorithm with support for material removal.
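
The abstract does not detail the algorithm; a toy sketch of voxel-based milling with penalty forces conveys the flavor (the real system parallelizes this massively and renders forces at haptic rates):

    import numpy as np

    def mill_step(voxels, voxel_size, origin, tip_pos, tip_radius, stiffness=500.0):
        """voxels: 3D bool array of remaining material. Returns the feedback force."""
        idx = np.argwhere(voxels)                          # occupied voxel indices
        centers = origin + (idx + 0.5) * voxel_size        # world-space centers
        d = centers - tip_pos
        dist = np.linalg.norm(d, axis=1)
        hit = dist < tip_radius                            # voxels inside the tool tip
        # Penalty force: each penetrated voxel pushes the tool outward.
        pen = (tip_radius - dist[hit])[:, None]
        dirs = d[hit] / (dist[hit][:, None] + 1e-9)
        force = -stiffness * (pen * dirs).sum(axis=0) if hit.any() else np.zeros(3)
        voxels[tuple(idx[hit].T)] = False                  # remove milled material
        return force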