Posters

Group A


ID: A1
A Full-Body Motion Calibration and Retargeting for Intuitive Object Manipulation in Immersive Virtual Environments

Brandon Wilson, University of Houston-Victoria
Matthew Bounds, University of Houston-Victoria
Alireza Tavakkoli, University of Houston-Victoria
Presenter Brandon Wilson
Abstract: In this paper, a system is proposed to combine small finger movements with the large-scale body movements captured from a motion capture system. The strength of the proposed work over previous research is in the real-time and natural interactions that the virtual hands have with their environment. By conforming to physics, the virtual hands feel like virtual extensions of one’s own hands. This provides a higher degree of immersion and interactivity when compared to more traditional virtual reality systems.


ID: A2
See what I see: concepts to improve the social acceptance of HMDs

Daniel Pohl, Intel Corporation
Carlos Fernandez de Tejada Quemada, Intel Corporation
Presenter Daniel Pohl
Abstract: Mobile virtual reality solutions are nowadays widely available and affordable for many smartphones, by adding a case with attached lenses around the phone to create a head-mounted display. Using these in public places, or at social gatherings where the head-mounted display is passed around, can lead to problems of social acceptance, as the surrounding people are not aware of what the virtual reality user is seeing and doing. We address this problem by adding a second, front-facing screen to the head-mounted display, and we build and evaluate two prototypes for this usage.


ID: A3
A Simplified Inverse Kinematic Approach for Embodied VR Applications

Daniel Roth, Human-Computer Interaction, Institute for Computer Science, University of Würzburg
Jean-Luc Lugrin, Human-Computer Interaction, Institute for Computer Science, University of Würzburg
Julia Büser, Institute of Media and Imaging Technology, TH Köln
Gary Bente, Communications, Arts, and Sciences, Michigan State University
Arnulph Fuhrmann, Institute of Media and Imaging Technology, TH Köln, Cologne
Marc Erich Latoschik, Human-Computer Interaction, Institute for Computer Science, University of Würzburg
Presenter Daniel Roth
Abstract: In this paper, we compare a full body marker set with a reduced rigid body marker set supported by inverse kinematics. We measured system latency, illusion of virtual body ownership, and task load in an applied scenario for inducing acrophobia. While not showing a significant change in body ownership or task performance, results do show that latency and task load are reduced when using the rigid body inverse kinematics solution. The approach therefore has the potential to improve virtual reality experiences.


ID: A4
Space-sharing AR Interaction on Multiple Mobile Devices with a Depth Camera

Yuki Kaneto, Saitama University
Takashi Komuro, Saitama University
Presenter Takashi Komuro
Abstract: We propose a markerless augmented reality system that works on multiple mobile devices. The relative positions and orientations of the devices and their individual motions are estimated from 3D information obtained by depth cameras attached to the devices. To estimate the relative positions and orientations of the devices, the system generates 2D images by looking down from above at the 3D scene obtained by the depth cameras, performs 2D registration using template matching, and obtains a transformation matrix. Using the proposed system, we created an application that enables multiple users to interact with the same virtual object.
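
As a companion to this abstract, here is a minimal sketch of the kind of 2D registration step it describes, assuming the top-down views of the depth data are already rendered into NumPy arrays; the names (ncc, register_topdown) and the normalized-cross-correlation matcher are our own illustrative choices, not the authors' implementation, which also recovers rotation as part of a full transformation matrix.

import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equal-sized patches.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def register_topdown(reference, template, search=20):
    # Slide `template` over `reference` within +/-`search` pixels of
    # the centered position and return the best (dy, dx) translation.
    h, w = template.shape
    cy = (reference.shape[0] - h) // 2
    cx = (reference.shape[1] - w) // 2
    best_score, best_offset = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = cy + dy, cx + dx
            if y0 < 0 or x0 < 0 or y0 + h > reference.shape[0] or x0 + w > reference.shape[1]:
                continue  # skip placements that fall outside the image
            score = ncc(reference[y0:y0 + h, x0:x0 + w], template)
            if score > best_score:
                best_score, best_offset = score, (dy, dx)
    return best_offset, best_score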


ID: A5
Animated self-avatars for motor rehabilitation applications that are biomechanically accurate, low-latency and easy to use

Mikael Dallaire-Côté, École de technologie supérieure
Philippe Charbonneau, École de technologie supérieure
Sara St-Pierre Côté, École de technologie supérieure
Rachid Aissaoui, École de technologie supérieure
David R. Labbe, École de technologie supérieure
Presenter Mikael Dallaire-Côté
Abstract: The emerging use of self-avatars for physical and motor rehabilitation leads to specific requirements for their real-time animation that combine properties from the fields of computer graphics and of biomechanics. We present and validate a method for animating a self-avatar in real-time that allows for high-fidelity representation of whole-body kinematics using anatomical and reproducible bone-segment definition. The method requires little setup time and has low motion-to-photon latency.


ID: A6
Monochrome Glove: A Robust Real-Time Hand Gesture Recognition Method by using a Fabric Glove with Design of Structured Markers

Hidetoshi Ishiyama, Cygames, Inc.
Shuichi Kurabayashi, Cygames, Inc.
Presenter Hidetoshi Ishiyama
Abstract: This paper presents a method for recognizing human-hand postures in real time, even when environmental lighting cannot be configured appropriately. The key technology is a monochrome glove that is patterned with augmented reality markers on its palm and is designed with a structured marker on each finger. Because the glove uses only white for the design of its patterns, it can achieve robust gesture recognition under natural lighting by using a single camera to track a hand wearing the glove. The extensive experiments we conducted demonstrate the accuracy, efficiency, and robustness of our gesture recognition method.


ID: A7
Using Chromo-coded Light Fields for Augmented Reality

Ian Schillebeeckx, Washington University in St. Louis
Robert Pless, Washington University in St. Louis
Presenter Ian Schillebeeckx
Abstract: This poster considers AR opportunities made possible by chromo-coded light fields, created by materials like lenticular arrays whose appearance varies by viewing angle. Chromo-coded light fields use color to create geometric cues, making it cheaper, faster and more accurate to measure object pose. For high-end applications like image-guided surgery, the color cues make it possible to accurately measure the pose of small objects like a scalpel. Because lenticular arrays are cheap and the color cues simplify the computation, they support new possibilities for augmented reality using smartphones and arbitrary objects.


ID: A8
Evaluation of Hands-Free HMD-Based Navigation Techniques for Immersive Data Analysis

Daniel Zielasko, Visual Computing Institute, RWTH Aachen University
Sven Horn, Visual Computing Institute, RWTH Aachen University
Sebastian Freitag, Visual Computing Institute, RWTH Aachen University
Benjamin Weyers, Visual Computing Institute, RWTH Aachen University
Torsten W. Kuhlen, Visual Computing Institute, RWTH Aachen University
Presenter Daniel Zielasko
Abstract: To use the full potential of immersive data analysis when wearing a head-mounted display, the user has to be able to navigate through the spatial data. We collected, developed and evaluated five different hands-free navigation methods that are usable while seated at the analyst’s usual workplace. All methods meet the requirements of being easy to learn and inexpensive to integrate into existing workplaces. We conducted a user study with 23 participants, which showed that a body-leaning metaphor and an accelerometer pedal metaphor performed best within the given task.


ID: A9
Automatic Generation of World in Miniatures for Realistic Architectural Immersive Virtual Environments

Andrea Bönsch, Visual Computing Institute, RWTH Aachen University
Sebastian Freitag, Visual Computing Institute, RWTH Aachen University
Torsten W. Kuhlen, Visual Computing Institute, RWTH Aachen University
Presenter Andrea Bönsch
Abstract: Orientation and wayfinding in architectural Immersive Virtual Environments (IVEs) are non-trivial tasks. World in Miniatures (WIMs) are an established approach to gaining survey knowledge about the scene and information about the user’s relation to it. However, for large-scale scenes, scaling and occlusion issues diminish their benefits. Furthermore, the lack of standardized information regarding scene decompositions hampers presenting self-contained scene extracts. Therefore, we present an automatic WIM generation workflow for arbitrary indoor and outdoor IVEs to provide users with meaningfully selected and scaled extracts of the IVE and corresponding context information. Additionally, a 3D user interface is provided to manipulate the WIM.


ID: A10
Combining Eye Tracking with Optimizations for Lens Astigmatism in modern wide-angle HMDs

Daniel Pohl, Intel Corporation
Xucong Zhang, Max Planck Institute for Informatics
Andreas Bulling, Max Planck Institute for Informatics
Presenter Daniel Pohl
Abstract: Virtual Reality hit the consumer market with affordable HMDs. However, it quickly becomes apparent that the resolution of the built-in display panels still needs to increase substantially. To meet the resulting higher performance demands, eye tracking can be used for foveated rendering. However, as there are lens distortions in HMDs, there are further possibilities to increase performance with smarter rendering approaches. We present a new system that optimizes rendering for lens astigmatism and combines this with foveated rendering through eye tracking. Depending on the current eye gaze, this delivers a rendering speed-up of up to 20%.


ID: A11
Fast and Accurate Relocalization for Keyframe-Based SLAM Using Geometric Model Selection

Atsunori Moteki, Fujitsu Laboratories Ltd.
Nobuyasu Yamaguchi, Fujitsu Laboratories Ltd.
Ayu Karasudani, Fujitsu Laboratories Ltd.
Toshiyuki Yoshitake, Fujitsu Laboratories Ltd.
Presenter Atsunori Moteki
Abstract: We propose a relocalization method for keyframe-based SLAM that enables real-time and accurate recovery from tracking failures. To realize an AR-based application in a real-world situation, not only accurate camera tracking but also fast and accurate relocalization from tracking failure is required. The proposed relocalization method selects between two algorithms adaptively, depending on the relative camera pose between the current frame and a target keyframe. In addition, it estimates the degree of false matches to speed up RANSAC-based model estimation. We demonstrate the effectiveness of our method through an evaluation on a public tracking dataset.
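
One concrete way an estimated degree of false matches speeds up RANSAC, shown here as a minimal sketch (our illustration of the standard iteration-count formula, not the authors' code): the required number of sampling iterations follows directly from the estimated inlier ratio.

import math

def ransac_iterations(inlier_ratio, sample_size, confidence=0.99):
    # Iterations needed so that, with the given confidence, at least
    # one sampled minimal set contains only inliers.
    w = min(max(inlier_ratio, 1e-6), 1.0 - 1e-9)
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - w ** sample_size))

# Example: with 80% estimated inliers a 4-point model needs ~9
# iterations; with 30% inliers it needs ~567.
print(ransac_iterations(0.8, 4))  # -> 9
print(ransac_iterations(0.3, 4))  # -> 567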


ID: A12
Psychophysical Influence on Temperature Perception by Mixed-Reality Visual Stimulation

Satoshi Hashiguchi, Ritsumeikan University
Fumihisa Shibata, Ritsumeikan University
Asako Kimura, Ritsumeikan University
Presenter Satoshi Hashiguchi
Abstract: In Mixed-Reality (MR) space, the visual appearance of a real object can be changed by superimposing a virtual object on it. We defined the changes in the visual information of a real object in MR space as “MR visual stimulation” and examined its influence on the haptic sense. Specifically, we verified the influence MR visual stimulation has on the perceived position of temperature perception. In the experiment, we presented MR visual stimulation and temperature stimulation in different positions. Our results demonstrate that temperature perception is strongly affected by visual stimulation.


ID: A13
VR Device Time – Hi-precision Time Management By Synchronizing Times Between Devices and Host PC Through USB –

Ryugo Kijima, Gifu University
Katsuya Yamaguchi, Gifu University
Presenter Katsuya Yamaguchi
Abstract: In a virtual reality system, accurate time management is necessary to maintain the correct relation between the displayed image and the user’s motion through latency compensation. The timestamp concept is effective in managing the timing of display and sensing devices. For time stamping, a unified time axis is necessary among the devices and the PC. In this paper, a time synchronization system via USB was developed. The prototype was shown to achieve accuracy below 20 µs among multiple sensors.
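
For readers unfamiliar with how such a unified time axis is established, here is a minimal sketch of the classic four-timestamp offset estimation that host–device synchronization schemes typically build on, assuming a roughly symmetric link delay; this is the textbook NTP-style formulation, not necessarily the exact protocol used in this work.

def clock_offset(t0, t1, t2, t3):
    # t0: host send time, t1: device receive time,
    # t2: device reply time, t3: host receive time.
    # Returns (device clock offset relative to host, round-trip delay),
    # assuming the outbound and return link delays are equal.
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Example: device clock 5 units ahead, one-way delay 2 units.
print(clock_offset(100, 107, 108, 105))  # -> (5.0, 4.0)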


ID: A14
bioSync: Wearable Haptic I/O Device for Synchronous Kinesthetic Interaction

Jun Nishida, University of Tsukuba
Kenji Suzuki, University of Tsukuba
Presenter Jun Nishida
Abstract: This paper presents a synchronous kinesthetic interaction through haptic input/output based on biosignal measurement and stimulation. Users are able to bi-directionally transmit kinesthetic experiences such as rigidity of joints or exertion of muscles. Such interaction would be very important in the fields of rehabilitation and sports training. In this study, we introduce a set of wearable devices that is capable of both electromyogram (EMG) measurement and electrical muscle stimulation (EMS) simultaneously on the same muscle by using common electrodes. We propose a new method for discharging the residual potential in order to enable fast simultaneous operation (40 Hz).


ID: A15
Effect of Head Mounted Display Latency on Human Stability During Quiescent Standing on One Foot

Soma Kawamura, Gifu University
Ryugo Kijima, Gifu University
Presenter Soma Kawamura
Abstract: The purpose of this study is to reveal the effects of small latencies on head-mounted display users. The subjects in this study were asked to simply and stably stand on a force plate on one foot. The speed of body sway was measured at several lags, from 1 ms to 66 ms, using an Oculus Rift DK2. The results showed that the sway speed increased monotonically with latency. The sense of balance is regarded as a relatively direct index of the quality of a VR system, including the effects of lag.


ID: A16
Depth Perception in Mirrors: The Effects of Video Based Augmented Reality in Driver’s Side View Mirrors

Valerie Kane, Virginia Polytechnic Institute and State University
Missie Smith, Virginia Polytechnic Institute and State University
Gary Burnett, University of Nottingham
Joseph Gabbard, Virginia Polytechnic Institute and State University
David Large, University of Nottingham
Presenter Valerie Kane
Abstract: This study explored the effects that augmented reality graphics have on a driver’s distance estimation/depth perception in an augmented side view mirror. The study was conducted both inside a simulator and outside in a parking lot. Sixteen participants took part in the study, 8 in the simulator and 8 in the test vehicle outside. Distance judgments were compared across four side view mirror conditions in both the simulator and outdoor scenarios. Results show some opportunities for AR to improve depth judgments, but further analysis is necessary.


ID: A17
Extra-normal interactions in mediated virtual environments: an investigation of an audio-visual crossed-sense modality

Benjamin Outram, Keio University
Masashi Nakatani, Keio University
Kouta Minamizawa, Keio University
Presenter Benjamin I. Outram
Abstract: Unusual crossed-sense couplings can be exploited to engineer non-realistic but beneficial VEs providing high degrees of interaction and presence, and new avenues for enhancing the experience of single- and multi-user VEs. We report users’ reactions to VEs in which the sound of their voice is represented via a frequency-to-color-mapped visualization. Users reported a sense of interaction and enjoyment, but a preliminary user study has yet to show a clear link with presence. We discuss the next stages of the research, which will investigate how mixed-sense modalities contribute to co-presence and collaboration in multi-user environments.


ID: A18
Evaluation of Hand and Stylus Based Calibration for Optical See-Through Head-Mounted Displays Using Leap Motion

Kenneth Moser, Mississippi State University
Edward Swan, Mississippi State University
Presenter Kenneth Moser
Abstract: Next-generation OST HMDs promise to include a variety of integrated and on-board sensors. In particular, hand tracking cameras, such as the Leap Motion, show potential for facilitating intuitive calibration procedures accessible to researchers, developers, and novice users alike. We evaluate hand and stylus based OST calibration utilizing tracking data from a Leap Motion. Our findings show that the performance of both methods is comparable to results from prior studies using standard environment-centric methods. Also, while our hand based calibration improved by using more contextual reticle designs, calibrations performed with a stylus yielded the most accurate and precise results overall.


ID: A19
Reducing Application-Stage Latencies of Interprocess Communication Techniques for Real-Time Interactive Systems

Jan-Philipp Stauffert, University of Würzburg
Florian Niebling, University of Würzburg
Marc Erich Latoschik, University of Würzburg
Presenter Marc Erich Latoschik
Abstract: This paper analyzes latency jitter caused by typical interprocess communication (IPC) techniques commonly found in today’s systems used for VR. We use four different implementations on a Linux kernel as well as on a real-time (RT) Linux kernel to further assess whether an RT variant of a multiuser multiprocess operating system can prevent latency spikes and how this behavior applies to different programming languages and IPC techniques. We found that Linux RT can limit latency jitter at the cost of throughput for certain implementations. Further, coarse-grained concurrency should be employed to avoid the accumulation of scheduler latencies.


ID: A20
Analysis in Support of Realistic Timing in Animated Fingerspelling

Nkenge Wheatland, University of California, Riverside
Ahsan Abdullah, University of California, Davis
Michael Neff, University of California, Davis
Sophie Jörg, Clemson University
Victor Zordan, Clemson University
Presenter Nkenge Wheatland
Abstract: American Sign Language (ASL) fingerspelling is the act of spelling a word letter-by-letter when a specific sign does not exist to represent it. Synthesizing intelligible ASL, which includes fingerspelling, is important to create signing virtual characters for training and communicating in virtual environments or further applications. The rhythm and speed of fingerspelling play a large role in how well fingerspelling is understood. Using motion capture technologies, we record fingerspelling and analyze timing information about letters in the words. Our goal is to identify fingerspelling timing information and use it to create fingerspelling animations that are natural and understandable.


ID: A21
Effects of Vibrotactile Stimulation During Virtual Sandboarding

Stine Lind, Aalborg University
Lui Thomsen, Aalborg University
Mie Egeberg, Aalborg University
Niels Christian Nilsson, Aalborg University
Stefania Serafin, Aalborg University
Rolf Nordahl, Aalborg University
Presenter Niels Christian Nilsson
Abstract: This poster details a within-subjects study (n=17) investigating the effects of vibrotactile stimulation on illusory self-motion, presence and perceived realism during an interactive sandboarding simulation. Vibrotactile feedback was delivered using a low-frequency audio transducer mounted underneath the board. The study compared three conditions: no vibration, constant vibration and dynamic vibration. The results suggest that constant vibrotactile feedback led to significantly more compelling self-motion illusions and a higher degree of perceived realism than the condition devoid of vibrotactile feedback. No significant differences were found between the two conditions involving vibrotactile stimulation.


ID: A22
Estimation of Detection Thresholds for Audiovisual Rotation Gains

Niels Christian Nilsson, Aalborg University
Evan Suma, USC Institute for Creative Technologies
Rolf Nordahl, Aalborg University
Mark Bolas, USC Institute for Creative Technologies
Stefania Serafin, Aalborg University
Presenter Niels Christian Nilsson
Abstract: Redirection techniques allow users to explore large virtual environments on foot while remaining within a limited physical space. However, research has primarily focused on redirection through manipulation of visual stimuli. We describe a within-subjects study (n=31) exploring if participants’ ability to detect differences between real and virtual rotations is influenced by the addition of sound that is spatially aligned with its virtual source. The results revealed similar detection thresholds for conditions involving moving audio, static audio, and no audio. This may be viewed as an indication of visual dominance during scenarios such as the one used for the current study.


ID: A23
Low-Cost Raycast-based Coordinate System Registration for Consumer Depth Cameras

Dennis Wiebusch, Universität Würzburg
Martin Fischbach, Universität Würzburg
Florian Niebling, Universität Würzburg
Marc Erich Latoschik, Universität Würzburg
Presenter Dennis Wiebusch
Abstract: We present four raycast-based techniques that determine the transformation between a depth camera’s coordinate system and the coordinate system defined by a rectangular surface. In addition, the surface’s dimensions are measured. In contrast to other approaches, these techniques limit additional hardware requirements to commonly available, low-cost artifacts and focus on simple non-laborious procedures. A preliminary study examining our Kinect v2-based proof of concept revealed promising first results. The utilized software is available as an open-source project.
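
As background for the geometry involved, here is a minimal sketch of how a rigid transform between a depth camera’s frame and a rectangular surface’s frame can be built once three corners of the surface have been located (e.g., by raycasts); the function name and interface are our own illustrative assumptions, not the published procedure.

import numpy as np

def surface_frame(origin, corner_x, corner_y):
    # origin, corner_x, corner_y: 3D corner points in the camera frame,
    # with corner_x along the surface's x edge and corner_y along its
    # y edge. Returns a 4x4 camera-from-surface transform and the
    # surface's width and height.
    x = corner_x - origin
    y = corner_y - origin
    width, height = np.linalg.norm(x), np.linalg.norm(y)
    x = x / width
    z = np.cross(x, y / height)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)  # re-orthogonalized y axis
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, origin
    return T, width, height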


ID: A24
A Handy System for Natural Composition of CG and Real Scene with Real-time Reflection of Lighting Changes

Hirofumi Morioka, NHK (Japan Broadcasting Corporation)
Hidehiko Okubo, NHK (Japan Broadcasting Corporation)
Hideki Mitsumine, NHK (Japan Broadcasting Corporation)
Presenter Hirofumi Morioka
Abstract: A handy system is presented for creating in real time a natural television image composed of a real scene and a computer graphics object. The light positions and colors (RGB values) in the real scene are estimated and then applied to the object by using simple equipment. Objective and subjective experiments demonstrated the effectiveness of this system. An enhanced algorithm is also presented that improves the accuracy of light estimation.


ID: A25
Supporting Path Switching for Non-Player Characters in a Virtual Environment

Mingze Xi, The University of Newcastle, Australia
Shamus P. Smith, The University of Newcastle, Australia
Presenter Mingze Xi
Abstract: Realistic non-player characters (NPCs) are an important component of virtual environments. However, generating NPCs with human-like behaviour can be difficult. A previously proposed pipeline used evacuation simulators to generate domain-relevant NPC behaviour for virtual environments. However, the NPCs generated through this process had static behaviours and could not dynamically change pre-computed evacuation paths. In this paper, we extend this pipeline with an approach that supports path switching for NPCs in four situations and demonstrate the pipeline in a virtual environment modelled on a large real building. A scalability test on the large building showed that overall evacuation time in both the source evacuation simulator and the target virtual environment was consistent regardless of building size and complexity. Three test cases demonstrate path switching for NPCs, with increasing evacuation success rates across the different situations. The reuse of static paths to enable dynamic NPC behaviour supports ongoing work to develop low-cost and realistic training systems using game engine technology.


ID: A26
RayOnPlane: A Translation Technique Minimizing Gesture Size

Philipp Tiefenbacher, TUM, Munich
Clemens Techmer, TUM, Munich
Gerhard Rigoll, TUM, Munich
Presenter Philipp Tiefenbacher
Abstract: In this work, we propose a device-based manipulation technique named RayOnPlane, which maintains its ease of use even as the work space size increases. We compare this technique to the state-of-the-art device-based manipulation technique HOMER-S. An experiment incorporating different work space sizes indicates comparable performance in completion time, while minimizing gesture size as well as user frustration and physical strain.


ID: A27
Comparison of Mobile Touch Interfaces for Object Identification and Troubleshooting Tasks in Augmented Reality

Philipp Tiefenbacher, TUM, Munich
Jan Gillich, TUM, Munich
Paul Schott, TUM, Munich
Gerhard Rigoll, TUM, Munich
Presenter Philipp Tiefenbacher
Abstract: This work adapts common HMD interfaces for use on a hand-held device. The proposed interfaces focus on easy interaction on the mobile device and independence of the provided content from the user’s view. We compare two AR interface techniques for object identification and three AR interfaces for troubleshooting. The results show that exocentric AR interfaces outperform egocentric ones with respect to completion time, walking distance and number of interactions.


ID: A28
Olfactory display using surface acoustic wave device and micropumps for wearable applications

Kazuki Hashimoto, Tokyo Institute of Technology
Takamichi Nakamoto, Tokyo Institute of Technology
Presenter Kazuki Hashimoto
Abstract: Olfaction is expected to add realism and a sense of immersion to multimedia content. To this end, olfactory displays, gadgets that present scents to one or more users, have been developed. A wearable olfactory display has the advantage of reducing odorant diffusion into the atmosphere and is suitable for virtual reality applications. In this study, we developed a portable olfactory display using a surface acoustic wave (SAW) device and micropumps. In an experiment using a quartz crystal microbalance (QCM) gas sensor, we confirmed that the olfactory display can present an odorant with the intended intensity.


ID: A29
Measurement of wind direction perception by the entire head

Takuya Nakano, Meijo University
Yasuyuki Yanagida, Meijo University
Presenter Takuya Nakano
Abstract: In many cases, when we present video to users, we present sound as well; stimulating both sight and hearing improves the users’ sensation of presence. Recently, several VR systems have also used wind to enhance presence. Because wind provides a non-contact sensation, users do not need to wear additional devices, so such systems do not hinder presence. Some studies conclude that presenting wind and video at the same moment improves presence. However, in these studies, wind sources are arranged rather sparsely, and it is doubtful whether a precise wind environment can be reproduced with such a sparse arrangement. Therefore, we examined the properties of wind direction perception over the entire head.


ID: A30
A multi-modal interactive tablet with tactile feedback, rear and lateral operation for maximum front screen visibility

Itsuo Kumazawa, Tokyo Institute of Technology
Shu Yano, Tokyo Institute of Technology
Souma Suzuki, Tokyo Institute of Technology
Shunsuke Ono, Tokyo Institute of Technology
Presenter Itsuo Kumazawa
Abstract: When we use a tablet-style handheld device such as a smartphone as part of a virtual reality system, its most outstanding feature, the touch screen dominating most of the front face, should be incorporated into the system effectively and beneficially. For example, the visual information displayed on the screen can be merged with the surrounding or background scenes, and intuitive touch operation can be performed when the scenario suits it. However, if interaction is limited to touch, finger operation on the front screen must be performed even in unsuitable scenarios, and the fingers or hands occluding the visual information disturb the immersive experience. To deal with this situation, we propose a multi-modal interactive tablet that uses cameras, an accelerometer, a track ball and pressure sensors implemented on its rear and sides for operations that ensure visibility. Pressing and ball-rotating operations on the rear and sides, together with tactile feedback generated by voice-coil-based actuators, assist and guide the multi-modal interaction. The effectiveness of this multimodality, with rear and side operation and tactile feedback, is evaluated in an experiment.


ID: A31
Head Mounted Projection for Enhanced Gaze in Social Interactions

David Krum, USC Institute for Creative Technologies
Sin-Hwa Kang, USC Institute for Creative Technologies
Thai Phan, USC Institute for Creative Technologies
Lauren Dukes, Google, Inc.
Mark Bolas, USC Institute for Creative Technologies
Presenter David M. Krum
Abstract: Projected displays can present life-sized imagery of a virtual human character that can be seen by multiple observers. However, typical projected displays can only render that virtual human from a single viewpoint, regardless of whether head tracking is employed. This results in the virtual human being rendered from an incorrect perspective for most individuals. This could cause perceptual miscues, such as the “Mona Lisa” effect, causing the virtual human to appear as if it is simultaneously gazing and pointing at all observers regardless of their location. This may be detrimental to training scenarios in which all trainees must accurately assess where the virtual human is looking or pointing. We discuss our investigations into the presentation of eye gaze using REFLCT, a previously introduced head mounted projective display. REFLCT uses head tracked, head mounted projectors and retroreflective screens to present personalized, perspective correct imagery to multiple users without the occlusion of a traditional head mounted display. We examined how head mounted projection for enhanced presentation of eye gaze might facilitate or otherwise affect social interactions during a multi-person guessing game of “Twenty Questions.”


ID: A32
Automatic Identification of Rigidly Linked 6DoF Sensors

Jake Fountain, The University of Newcastle, Australia
Shamus P. Smith, The University of Newcastle, Australia
Presenter Jake Fountain
Abstract: We present techniques for automatically identifying relationships between rigidly linked 6DoF and 3DoF sensors belonging to different sensor systems. The techniques allow for subsequent automatic alignment of the sensor systems, increasing the usability of modular sensor systems. Two techniques are presented and analysed, in simulation and in a case study, for performance under varying noise and latency conditions. Good results were achieved, with each sensor identified correctly in at least 60% of estimates, four times better than random selection. After a sample collection period of around 5 seconds, the matching is performed in less than 5 ms and is scalable to noisier systems by using more samples. Our methods represent a key step in creating highly accessible modular multi-device 3D systems.
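
A minimal sketch of one way such identification can work, assuming both systems deliver synchronized position streams: correlate a rotation- and offset-invariant signal (here, per-step speed) between every sensor pair and match on the highest correlation. The names and the speed-correlation criterion are our illustrative assumptions, not necessarily the paper’s exact techniques.

import numpy as np

def speed(positions, dt):
    # positions: (N, 3) samples taken every dt seconds.
    # Returns N-1 per-step speed magnitudes (invariant to the
    # coordinate frame, so comparable across sensor systems).
    return np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt

def match_sensors(system_a, system_b, dt):
    # system_a, system_b: lists of (N, 3) position streams sampled at
    # the same instants. For each sensor in A, returns the index of
    # the most strongly correlated sensor in B. Assumes every stream
    # contains some motion (a constant stream has undefined correlation).
    matches = []
    for pa in system_a:
        sa = speed(pa, dt)
        scores = [np.corrcoef(sa, speed(pb, dt))[0, 1] for pb in system_b]
        matches.append(int(np.argmax(scores)))
    return matches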


ID: A33
Influence by others’ opinions: social pressure from agents in immersive virtual environments

Christos Kyrlitsias, GET Lab, Department of Multimedia and Graphic Arts, Cyprus University of Technology
Despina Michael, GET Lab, Department of Multimedia and Graphic Arts, Cyprus University of Technology
Presenter Christos Kyrlitsias
Abstract: Virtual Reality is used in fields of cognitive science to study participants’ reactions. In such cases, the existence of other avatars in the virtual environment is a crucial factor. In this study we investigate whether agents have social influence on participants by performing the Asch conformity experiment (1951) in an immersive virtual environment. The findings demonstrate that participants’ response times were affected by the judgments of agents existing in the virtual environment.


ID: A34
SharpView: Improved Clarity of Defocussed Content on Optical See-Through Head-Mounted Displays

Kohei Oshima, Nara Institute of Science and Technology
Kenneth Moser, Mississippi State University
Damien Rompapas, Nara Institute of Science and Technology
J. Edward Swan II, Mississippi State University
Sei Ikeda, Ritsumeikan University
Goshiro Yamamoto, Nara Institute of Science and Technology
Takafumi Taketomi, Nara Institute of Science and Technology
Christian Sandor, Nara Institute of Science and Technology
Hirokazu Kato, Nara Institute of Science and Technology
Presenter Kenneth Moser
Abstract: Augmented reality (AR) systems, which utilize optical see-through (OST) head-mounted displays (HMDs), are becoming more commonplace in the consumer market, especially with the current availability of several consumer-level options and the promise of additional, more advanced, devices on the horizon. Despite their growing popularity, the current generation of OST HMDs is still prone to a number of issues which diminish their utility. Our work investigates one of these issues: the visual blur caused by simultaneously viewing AR and environmental objects at differing focal distances. In this paper, we investigate the impact of focus blur on user perception of AR content and also present a novel technique, termed SharpView, for mitigating these effects. Our experimental results reveal that the SharpView method correctly models the expected level of focus blur perceived by the user and provides a more perceptually correct image than an uncorrected view. Our findings also show that even though users are sensitive to the perceptual effects of focal disparity, their performance in an AR matching task was not negatively impacted even in the presence of significant blur.


ID: A35
The Effect of Multi-Sensory Cues on Performance and Experience During Walking in Immersive Virtual Environments

Mi Feng, Worcester Polytechnic Institute
Arindam Dey, HIT Lab Australia, University of Tasmania
Robert Lindeman, HIT Lab NZ
Presenter Mi Feng
Abstract: To examine the effects of multi-sensory cues during nonfatiguing walking in immersive virtual environments, we selected sensory cues including movement wind, directional wind, footstep vibration, and footstep sounds, and investigated their influence and interaction with each other. We developed a virtual reality system with non-fatiguing walking interaction and low-latency, multi-sensory feedback, and used it to conduct two successive experiments measuring user experience and performance through a triangle-completion task. We noticed some positive effects due to the addition of footstep vibration on task performance, and saw significant improvement in reported user experience due to the added wind and vibration cues.


ID: A36
Spatial Consistency Perception in Optical- and Video-See-Through Head-Mounted Augmentations

Alexander Plopski, Osaka University
Kenneth R. Moser, Mississippi State University
Kiyoshi Kiyokawa, Osaka University
J. Edward Swan II, Mississippi State University
Haruo Takemura, Osaka University
Presenter Kenneth R. Moser
Abstract: Correct spatial alignment is an essential requirement for convincing augmented reality experiences. Registration error, caused by a variety of systematic, environmental, and user influences, decreases the realism and utility of head-mounted display AR applications. Focus is often given to rigorous calibration and prediction methods seeking to entirely remove misalignment error between virtual and real content. Unfortunately, producing perfect registration is often simply not possible. Our goal is to quantify the sensitivity of users to registration error in these systems, and to identify acceptability thresholds at which users can no longer distinguish between the spatial positioning of virtual and real objects. We simulate both video see-through and optical see-through environments using a projector system and experimentally measure user perception of virtual content misalignment. Our results indicate that users are less sensitive to rotational errors overall and that translational accuracy is less important in optical see-through systems than in video see-through.


ID: A37
Through the Eyes of a Bystander: The Promise and Challenges of VR as a Bullying Prevention Tool

Kelly McEvoy, Ultraflex Systems
Oyewole Oyekoya, Clemson University
Adrienne Holz Ivory, Virginia Tech
James D. Ivory, Virginia Tech
Presenter James D. Ivory
Abstract: Two studies explored the potential of virtual reality (VR) in bystander-focused bullying prevention campaigns. An experiment compared responses to three versions of a bullying scenario in which users (N = 78) were placed in the perspective of a bystander: customized VR, non-customized VR, and video. Measures included empathy, attitudes toward bullying victims and bullying, anticipated future behavior, presence, and other perceptions of bullying. The only significant effects observed were on feelings of empathy, with scores in the video condition higher than in the other two conditions, and on perceptions of bullying as a problem in participants’ schools, again with scores highest in the video condition. These results were further explored in a follow-up qualitative focus group study (N = 10). Findings from both studies suggest that to elicit empathy-related responses, VR simulations should use photorealistic graphics, employ interactive features, and make customization prominent and carefully tailored. Lessons learned could inform the use of virtual reality in future campaigns.


ID: A38
OST Rift: Temporally Consistent Augmented Reality with a Consumer Optical See-Through Head-Mounted Display

Yuta Itoh, Technische Universität München
Jason Orlosky, Osaka University
Kiyoshi Kiyokawa, Osaka University
Manuel Huber, Technische Universität München
Gudrun Klinker, Technische Universität München
Presenter Yuta Itoh
Abstract: We present an off-the-shelf, low-latency Optical See-Through Head-Mounted Display (OST-HMD) for Augmented Reality (AR). Temporally consistent visualization is crucial for realizing immersive AR experiences. This is challenging since it requires both accurate head-tracking and low-latency rendering of AR content. Building a system which meets both constraints usually requires expertise in computer vision/graphics and expensive display hardware. This work demonstrates that such high spatio-temporal fidelity is achievable with commodity hardware available today. We build a custom OST-HMD system that consists of a virtual reality HMD, i.e., the Oculus Rift DK2, and half-mirror optics, and adapt the rendering pipeline in order to integrate the OST-HMD calibration framework. An evaluation with a user-perspective camera shows that the system achieves mean temporal error of


ID: A39
Acting Together: Joint Pedestrian Road Crossing in an Immersive Virtual Environment

Yuanyuan Jiang, The University of Iowa
Elizabeth O’Neal, The University of Iowa
Junghum Paul Yon, The University of Iowa
Luke Franzen, The University of Iowa
Pooya Rahimian, The University of Iowa
Jodie Plumert, The University of Iowa
Joseph Kearney, The University of Iowa
Presenter Yuanyuan Jiang
Abstract: We investigated how two people jointly coordinate their decisions and actions in a co-occupied, large-screen virtual environment. The task for participants was to physically cross a virtual road with continuous traffic without getting hit by a car. Participants performed this task either alone or with another person. We found that pairs often crossed the same gap together and closely synchronized their movements when crossing. Pairs also chose larger gaps than individuals to accommodate the extra time needed to cross through gaps together. These results reveal how two people interact and coordinate their behaviors in performing whole-body, joint motions. This study also provides a foundation for future studies examining joint actions in shared VEs where participants are represented by graphic avatars.


ID: A40
Exploring the Perception of Co-Location Errors during Tool Interaction in Visuo-Haptic Augmented Reality

Ulrich Eck, University of South Australia
Liem Hoang, Nara Institute of Science and Technology
Christian Sandor, Nara Institute of Science and Technology
Goshiro Yamamoto, Nara Institute of Science and Technology
Takafumi Taketomi, Nara Institute of Science and Technology
Hirokazu Kato, Nara Institute of Science and Technology
Hamid Laga, University of South Australia
Presenter Liem Hoang
Abstract: Co-located haptic feedback in mixed and augmented reality environments can improve realism and user performance, but it also requires careful system design and calibration. In this poster, we determine the thresholds for perceiving co-location errors through two psychophysics experiments in a typical fine-motor manipulation task. In these experiments we simulate the two fundamental ways of implementing VHAR systems: first, attaching a real tool; second, augmenting a virtual tool. We determined the just-noticeable co-location errors for position and orientation in both experiments and found that users are significantly more sensitive to co-location errors with virtual tools. Our overall findings are useful for designing visuo-haptic augmented reality workspaces and calibration procedures.


ID: A41
The Rainbow Marker: An AR Marker with Planar Light Probe based on Structural Color Pattern Matching

Yuki Uranishi, Kyoto University
Masataka Imura, Kwansei Gakuin University
Tomohiro Kuroda, Kyoto University
Presenter Yuki Uranishi
Abstract: This paper proposes The Rainbow Marker, a planar marker for estimating the direction of a light source using a structural color. A structural color is a color produced by microscopically structured surfaces that vary in appearance according to the viewpoint, the direction and the spectrum of the light source. The proposed marker contains a planar material which causes structural coloration. The direction of the light source is estimated by structural color pattern matching between an input pattern and referential color patterns. In this paper, two types of the marker were implemented, with a grating sheet and with a holographic sheet, to demonstrate that the proposed method is applicable in the field of augmented reality.


ID: A42
Groupnect: Integrating Group Interaction into Large Display System

Hao Jiang, Institute of Computing Technology, Chinese Academy of Sciences
Chang Gao, Institute of Computing Technology, Chinese Academy of Sciences
Tianlu Mao, Institute of Computing Technology, Chinese Academy of Sciences
Hui Li, Sichuan University
Zhaoqi Wang, Institute of Computing Technology, Chinese Academy of Sciences
Presenter Hao Jiang
Abstract: Large display systems have been successfully applied in virtual reality domains because they can provide a full sense of immersion through a large visual space and high display resolution. However, only a few users can interact with these systems using pen-like or marker-based devices, and user experience and application modes are constrained in many areas. In this paper, we propose a novel application framework called “Groupnect”, which gives users a unique experience of group interaction with a large display system. Using optical tracking and 3D gesture recognition technologies, our approach can automatically recognize gesture-based control signals for 12 users simultaneously, and the backend system can trigger corresponding actions in real time. We conduct a user study and compare the results with a standard interaction mode. The results demonstrate that our approach greatly increases recorded objective activity and subjective effort, and that the physical and mental participation of users is promoted by Groupnect. This indicates great potential for designing novel applications in the entertainment, education and training areas.


Group B


ID: B1
Measurement of Head Mounted Display’s Latency in Rotation and Side Effect Caused by Lag Compensation by Simultaneous Observation – An Example Result Using Oculus Rift DK2

Ryugo Kijima, Gifu University
Kento Miyajima, Gifu University
Presenter Kento Miyajima
Abstract: Latency is an important specification of a Head-Mounted Display (HMD). In this paper, we propose measurement methods to evaluate the average latency as well as the effects and side effects of lag compensation. The Oculus Rift DK2 was selected for measurement. The results showed that the average latency of a DK2 without compensation was about 26.3 ms, and with full compensation it was 1 ms. The remaining part of the dynamic response was observed and evaluated by subtracting the measured latency from the observed trajectory. The side effect of Timewarp was observed as a spike-like angular error.


ID: B2
Speaking Haptics: Proactive Haptic Articulation for Intercommunication in Virtual Environments

Victor Adriel Oliveira, Universidade Federal do Rio Grande do Sul
Anderson Maciel, Universidade Federal do Rio Grande do Sul
Luciana Nedel, Federal University of Rio Grande do Sul (UFRGS)
Presenter Luciana Nedel
Abstract: Communication is crucial in collaborative tasks. Multimodal strategies are commonly applied to complement, reinforce and disambiguate information exchange. However, although multimodal communication is commonplace in Collaborative Virtual Environments (CVEs), the proactive use of touch for intercommunication is surprisingly neglected regardless of its importance for communication. In this paper, we look to elements present in speech articulation to introduce proactive haptic articulation as a novel approach for communication in CVEs. We defend the hypothesis that elements present in natural language, when added to the design of a vibrotactile vocabulary, should provide an expressive medium for intercommunication. Moreover, we hypothesize that the ability to render tactile cues to a teammate will encourage users to adapt a given vocabulary spontaneously during its use. We implemented a case study around a collaborative puzzle task to demonstrate the use of such a vocabulary. Results show that proactive haptic articulation provided a way for participants to autonomously and dynamically adapt the provided tactile vocabulary to meet their communication needs during the task.


ID: B3
Faster Feedback for Remote Scene Viewing with Pan-Tilt Stereo Camera

Yi Ren, University of North Carolina at Chapel Hill
Henry Fuchs, University of North Carolina at Chapel Hill
Presenter Yi Ren
Abstract: We demonstrate a remote scene viewing system for telepresence purposes. The system is based on a pan-tilt stereo camera that captures stereo video and transfers it to a remote user over a network. On the user end, the live stereo video is processed and displayed in a Head-Mounted Display. Faster feedback is achieved through latency compensation: using a wider field-of-view, higher-resolution camera, the appropriate subset of the image is selected and displayed. We introduce the hardware configuration and software framework of the system and a method to calculate the homography between the camera image space and the user head image space. The perceived latency of the system is estimated to be 50-100 ms.
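
For the homography mentioned above, a minimal sketch under the common simplifying assumption that the correction is a pure rotation between the capture-time pose and the current head pose: in that case the image-space mapping is H = K R K^-1. The function names are ours; the authors’ exact formulation may differ.

import numpy as np

def rotation_homography(K, R_capture, R_head):
    # K: 3x3 camera intrinsics. R_capture, R_head: 3x3 world-from-camera
    # rotations at capture time and at display time. Returns the 3x3
    # homography mapping captured pixels into the user's current view.
    R = R_head.T @ R_capture  # head-from-capture relative rotation
    return K @ R @ np.linalg.inv(K)

def warp_point(H, u, v):
    # Map pixel (u, v) through homography H (projective normalization).
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]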


ID: B4
Effects of field of regard and stereoscopy and the Validity of MR simulation for Visual Analysis of scientific data

Wallace S. Lages, Virginia Tech
Bireswar Laha, Stanford University
Wesley Miller, Brown University
Johannes Novotny, Brown University
John J. Socha, Virginia Tech
David H. Laidlaw, Brown University
Doug Bowman, Virginia Tech
Presenter Wallace Lages
Abstract: We report the findings of a study designed to evaluate the effect of stereopsis and field of regard (FOR) in two different mixed reality (MR) simulation platforms: a head-mounted display (HMD) and a CAVE. We compared the performance of participants on two levels of stereopsis (mono and stereo) and two levels of FOR (90 degrees and 270 degrees) using a variety of scientific visualization tasks. Among the findings, we observed that not all effects were consistent between the platforms. Stereo alone, or in combination with higher FOR, improved completion time on both platforms. However, adding stereo alone reduced participants’ accuracy in the CAVE while improving it on the HMD. Our findings extend prior knowledge on the contribution of visual fidelity components and suggest potential limits on MR simulation between platforms.


ID: B5
Exploring Social Presence Transfer in Real-Virtual Human Interaction

Salam Daher, University of Central Florida
Kangsoo Kim, University of Central Florida
Myungho Lee, University of Central Florida
Andrew Raij, University of Central Florida
Ryan Schubert, University of Central Florida
Jeremy Bailenson, Stanford University
Greg Welch, University of Central Florida
Presenter Salam Daher
Abstract: We explore whether a peripheral observation of apparent mutual social presence between a real human (RH) and a virtual human (VH) can in turn increase a subject’s sense of social presence with the VH. In other words, we explore whether social presence can “transfer” from one RH-VH interaction to another. Specifically we carried out an experiment where human subjects were asked to play a game with a VH. Approximately half of the subjects were exposed to a brief but apparently engaging conversation between an RH and the VH as they entered the game room. For the subjects exposed to the brief RH-VH interaction, both an emotional connection and the attentional allocation dimension of social presence for the VH were found to be significantly higher compared to those who were not. We describe the motivation, the experiment, and the results.


ID: B6
Visual Feedback to Improve the Accessibility of Head Mounted Displays for Persons with Balance Impairments

Sharif Mohammad Shahnewaz Ferdous, University of Texas at San Antonio
Imtiaz Muhammad Arafat, University of Texas at San Antonio
John Quarles, University of Texas at San Antonio
Presenter Sharif Mohammad Shahnewaz Ferdous
Abstract: The objective of this research is to improve the accessibility of Head-Mounted Displays (HMDs) for users with balance impairments while they are in immersive Virtual Environments (VEs). Previous research has shown that most users experience some imbalance in a fully immersive VE. However, this imbalance is significantly worse in users with balance deficits. Thus, this research aims to determine an effective visual feedback technique that improves the balance of persons using VEs, thereby improving the accessibility of HMDs. To do so, we conducted a study with seven users without impairment and seven users with balance impairments due to Multiple Sclerosis (MS). We investigated how a static reference frame (SRF) (e.g., a cross-hair always rendered in the same position on the user’s display screen) impacts participants’ balance in VR. Results indicate that an SRF significantly improves balance in VR for users with MS. Based on these results, we propose guidelines for designing more accessible VEs for persons with balance impairments.


ID: B7
Anchoring 2D Gesture Annotations in Augmented Reality

Benjamin Nuernberger, University of California, Santa Barbara
Kuo-Chin Lien, University of California, Santa Barbara
Tobias Hollerer, University of California, Santa Barbara
Matthew Turk, University of California, Santa Barbara
Presenter Benjamin Nuernberger
Abstract: Augmented reality enhanced collaboration systems often allow users to draw 2D gesture annotations onto video feeds to help collaborators to complete physical tasks. This works well for static cameras, but for movable cameras, perspective effects cause problems when trying to render 2D annotations from a new viewpoint in 3D. In this paper, we present a new approach towards solving this problem by using gesture enhanced annotations. By first classifying which type of gesture the user drew, we show that it is possible to render annotations in 3D in a way that conforms more to the original intention of the user than with traditional methods. We first determined a generic vocabulary of important 2D gestures for remote collaboration by running an Amazon Mechanical Turk study with 88 participants. Next, we designed a novel system to automatically handle the top two 2D gesture annotations—arrows and circles. Arrows are handled by identifying their anchor points and using surface normals for better perspective rendering. For circles, we designed a novel energy function to help infer the object of interest using both 2D image cues and 3D geometric cues. Results indicate that our approach outperforms previous methods in terms of better conveying the original drawing’s meaning from different viewpoints.


ID: B8
New Hybrid Projection to Widen the Vertical Field of View with Large Screen to Improve the Perception of Personal Space in Architectural Project Review

Sabah Boustila, Université de Strasbourg
Antonio Capobianco, Université de Strasbourg
Olivier Génevaux, Université de Strasbourg
Dominique Bechmann, Université de Strasbourg
Presenter Sabah Boustila
Abstract: In architectural project review, the perception of the near surrounding ground is important for the evaluation of the virtual environment (VE). This near surrounding ground is missing when using wall screens. To address this problem we suggest increasing the Vertical Field of View (VFoV). One solution is the use of rendering approaches such as non-planar projection. However, simply increasing the FoV of the rendering leads to much distortion in the VE, which is not suitable for architectural project review. We propose a new hybrid projection combining a perspective projection in the center of the screen with a cylindrical-like projection at the top and bottom borders. In this way, we increase the VFoV without incurring large deformations, preserving the perception of distances while allowing the surrounding ground to be seen. We also report the results of an experiment we conducted to evaluate distance perception with our projection.


ID: B9
Vestibulohaptic passive stimulation for a walking sensation

Yasushi Ikei, Tokyo Metropolitan University
Shunki Kato, Tokyo Metropolitan University
Kohei Komase, Tokyo Metropolitan University
Shogo Imao, Tokyo Metropolitan University
Sho Sakurai, Tokyo Metropolitan University, The University of Tokyo
Tomohiro Amemiya, NTT
Michiteru Kitazaki, Toyohashi University of Technology
Koichi Hirota, The University of Electro-Communications
Presenter Sho Sakurai
Abstract: This paper describes passive stimulation of the body to evoke a walking sensation, using a vestibular and haptic device while the user’s real body is seated. It imparts a pseudo body image to the user through the user’s real (physical) body, as part of the virtual reality (VR) display system. The created walking sensation was evaluated along nine factors to analyze its complex nature.


ID: B10
Is this bridge safe? Evaluation of Audiovisual Cues for a Walk on a Small Bridge Over a Canyon.

Erik Sikström, Aalborg University Copenhagen
Niels Nilsson, Aalborg University Copenhagen
Amalia De Goetzen, Aalborg University Copenhagen
Stefania Serafin, Aalborg University Copenhagen
Presenter Erik Sikström
Abstract: This paper presents two within-subjects studies (n=23) exploring how different combinations of visual and auditory feedback influence perceived realism, virtual self-perception and the experience of safety during walks on a virtual platform suspended over a canyon. In the first study, the frequency factor of the footstep sounds was altered and the visual appearance was changed between a newly built wooden bridge and an old bridge with a weaker structure and broken planks. In the second study, the sounds of creaking wood were added to the footstep sounds in half of the trials and compared against footsteps without creaking sounds. Moreover, the frequency factor of the footstep sounds was again manipulated between trials, but the visual appearance of the bridge was limited to the model of the old broken bridge.


ID: B11
Avatar Realism and Social Interaction Quality in Virtual Reality

Daniel Roth, Human-Computer Interaction, Institute for Computer Science, University of Würzburg
Jean-Luc Lugrin, Human-Computer Interaction, Institute for Computer Science, University of Würzburg
Dmitri Galakhov, Institute of Media and Imaging Technology, TH Köln
Arvid Hofmann, Media- and Communication Psychology, Department of Psychology, University of Cologne
Gary Bente, Communications, Arts, and Sciences, Michigan State University
Marc Erich Latoschik, Human-Computer Interaction, Institute for Computer Science, University of Würzburg
Arnulph Fuhrmann, Institute of Media and Imaging Technology, TH Köln
Presenter Daniel Roth
Abstract: In this paper, we describe an experimental method to investigate the effects of reduced social information and behavioral channels in immersive virtual environments with full-body avatar embodiment. We compared physical-based and verbal-based social interactions in the real world (RW) and in virtual reality (VR). Participants were represented by abstract avatars that did not display gaze, facial expressions or social cues from appearance. Our results show significant differences in terms of presence and physical performance. However, differences in effectiveness in the verbal-based task were not present. Participants appear to efficiently compensate for missing social and behavioral cues by shifting their attention to other behavioral channels.


ID: B12
Supporting Multiple Immersive Configurations Using a Shape-Changing Display

Anthony Steed, University College London
Presenter Anthony Steed
Abstract: Immersive displays for virtual reality systems can be roughly classified into spatially immersive displays (such as CAVE-like displays or large-screen simulators) and head-mounted displays. The former type is usually static in spatial configuration and configured to support a small group of users. The latter supports only a single user. We propose a new class of actuated, reconfigurable display that can support both small groups and individual users: in particular, we suggest a robotic display that can change shape. The display can change shape to support different usage conditions, and can also move rapidly to give a larger apparent field of view for an individual user. We explore the potential advantages of a display that can move independently from its user(s), and we present a prototype that demonstrates some of the potential use scenarios.


ID: B13
Using Virtual Environments to Evaluate Assumptions of the Human Visual System

Eric Palmer, Purdue University
Aaron Michaux, Purdue University
Zygmunt Pizlo, Purdue University
Presenter Eric Palmer
Abstract: Virtual reality applications provide an opportunity to test human vision in well-controlled scenarios that would be difficult or impossible to generate in real physical spaces. This paper presents a study intended to evaluate the importance of possible assumptions made by the human visual system. Using a CAVE simulation, participants viewed and counted virtual furniture objects under a variety of experimental manipulations. The assumption of uprightness, tested against inversion and known as the “gravity constraint,” proved to be significant (p < 0.001). Monocular vs. binocular vision was also shown to be an important factor (p = 0.01), while color vs. grayscale did not significantly affect task performance (p = 0.16). When the binocular cue and the assumption about the direction of gravity are included, the scene reconstruction produced by our computer vision model is reliable. The model can detect and count symmetrical objects in a real 3D scene and then recover their 3D shapes.


ID: B14
Acoustic Redirected Walking with Auditory Cues by Means of Wave Field Synthesis

Malte Nogalski, University of Applied Sciences Hamburg
Wolfgang Fohl, University of Applied Sciences Hamburg
Presenter Malte Nogalski
Abstract: We present an experiment to identify detection thresholds for acoustic redirected walking by means of a wave field synthesis system. The most natural way to navigate an avatar through an immersive virtual environment (IVE) is to copy the tracked physical movements of the user. Redirected walking offers an approach to tackle the discrepancy between the potentially infinite IVE and the generally limited physical space or tracking area by applying manipulations, such as rotations or translations, to the IVE in the form of gains on the user’s movements. 39 blindfolded test subjects performed 2777 constant-stimulus trials with various amounts of rotation and curvature gains. The test subjects were divided into four groups with different knowledge of the experiment; one group performed two-alternative forced-choice tasks, while the others could give feedback freely. The detection thresholds depended strongly on the group, i.e., on the subjects’ knowledge of the experiment. For the most relevant test group, the 25% detection threshold was reached at gains that up-scaled rotations by 5%, down-scaled them by 37.5%, and bent a straight path into a circle with a radius of 5.71 meters. Almost no signs of simulator sickness were observed.
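
For readers unfamiliar with these gains, the reported thresholds can be restated in the common notation (our arithmetic from the abstract's numbers, not the authors' wording): a rotation gain $g_R = \theta_{\mathrm{virtual}} / \theta_{\mathrm{real}}$, so the thresholds correspond to $g_R = 1.05$ (up-scaling) and $g_R = 1 - 0.375 = 0.625$ (down-scaling); and a curvature gain $g_C = 1/r = 1/5.71\,\mathrm{m} \approx 0.175\,\mathrm{m}^{-1}$, meaning a full $360^\circ$ of injected rotation accumulates over $2\pi \cdot 5.71 \approx 35.9$ m of virtually straight walking.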


ID: B15
A methodology for reducing the time to generate virtual electric substations

Alexandre Carvalho, Universidade Federal de Uberlândia
Leandro Mattioli, Universidade Federal de Uberlândia
Camilo Barreto, Universidade Federal de Uberlândia
Milton Miranda Neto, Universidade Federal de Uberlândia
Paulo Prado, Companhia Energética de Minas Gerais – CEMIG
Gerson Flavio Mendes de Lima, Universidade Federal de Uberlândia
Edgard Lamounier, Universidade Federal de Uberlândia
Alexandre Cardoso, Universidade Federal de Uberlândia
Presenter Alexandre Cardoso
Abstract: One of the great challenges in the development of Virtual Reality applications is to create the illusion of being in a different space. This is especially true for critical systems in engineering, such as the control and monitoring of power substations. In this work, an electric power company with more than 50 electric substations is a research partner. The time needed to model all these substations, with the required level of photorealism, is therefore an essential issue. To address it, a methodology is presented. First, the methodology proposes a protocol for acquiring data on field components (CAD files, satellite images, manufacturer sheets, etc.) to model electric components faithfully in terms of dimensions and angles. Next, rules such as cable-connector positioning and monitoring of the polygon count (low-poly) are established. In addition, since each electric substation has circuit arrangements composed of different electric components, a pattern recognition tool has been developed to extract information from basic 2D plans in order to generate automatic positioning of components within a virtual substation. Also, considering the need for real-time control and monitoring of the electric system, a set of interface templates is provided to support direct access to data from the supervisory system (SCADA) without loss of the immersion and navigation that are imperative for Virtual Reality applications. Experiments have shown that this initiative reduces the mental effort of employees when operating the system. In the very first trials to generate a virtual electric substation, our research team spent a great deal of time and work. After the establishment of the proposed methodology, results show that the time to generate new substations has been reduced by roughly 83%.


ID: B16
Induction of Linear and Circular Vection in Real and Virtual Worlds

Bobby Bodenheimer, Vanderbilt University
Yiming Wang, Vanderbilt University
Divine Maloney, University of the South
John Rieser, Vanderbilt University
Presenter Bobby Bodenheimer
Abstract: Vection is the illusion of self-motion, usually induced by a visual stimulus. It is important in virtual reality because inducing it in motion simulations can lead to improved experiences. In this poster we examine linear and circular vection in commodity-level head-mounted displays. We compare the experience of circular vection induced through a real-world stimulus, an optokinetic drum, with that experienced through a virtual stimulus. With virtual stimuli, we also compare circular vection with linear horizontal and linear vertical vection. Finally, we examine circular and linear vection in more naturalistic virtual environments. Linear vection was induced more rapidly than any other type, but circular vection occurred more rapidly with a real-world stimulus than with a virtual one. Our results have practical application and can inform the design of virtual reality systems that use head-mounted display technology and wish to establish vection.


ID: B17
Detecting Movement Patterns from Inertial Data of a Mobile Head-Mounted-Display for Navigation via Walking-in-Place

Thies Pfeiffer, Center of Excellence Cognitive Interaction Technology
Aljoscha Schmidt, Bielefeld University
Patrick Renner, Center of Excellence Cognitive Interaction Technology
Presenter Patrick Renner
Abstract: While the display quality and rendering performance of Head-Mounted Displays (HMDs) have increased, the interaction capabilities of these devices are still very limited or rely on expensive technology. Current experiences offered for mobile HMDs often stick to looking around from a fixed viewpoint, automatic or gaze-triggered movement, or flying techniques. We developed an easy-to-use walking-in-place technique that requires no additional hardware to enable basic navigation, such as walking, running, or jumping, in virtual environments. Our approach is based on the analysis of data from the inertial unit embedded in mobile HMDs. In a first prototype realized for the Samsung Gear VR, we detect steps and jumps. A user study shows that users new to virtual reality easily pick up the method. In comparison to a classic input device, study participants using our walking-in-place technique felt more present in the virtual environment and preferred our method for exploring the virtual world.
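
As a sketch of how steps might be detected from HMD inertial data (illustrative only; the function name, thresholds, and the exact detector used in the paper are our assumptions), a simple peak detector on the acceleration magnitude with a refractory period already captures the basic idea:

    def detect_steps(accel_mag, fs, threshold=1.5, min_interval=0.25):
        # accel_mag: sequence of acceleration magnitudes in g (gravity removed)
        # fs: sampling rate in Hz; min_interval: shortest plausible step period (s)
        refractory = int(min_interval * fs)
        steps, last = [], -refractory
        for i in range(1, len(accel_mag) - 1):
            is_peak = (accel_mag[i] >= accel_mag[i - 1]
                       and accel_mag[i] >= accel_mag[i + 1])
            if is_peak and accel_mag[i] > threshold and i - last >= refractory:
                steps.append(i)   # step event at sample index i
                last = i
        return steps

A jump could be flagged analogously as a brief drop of the magnitude toward zero (free fall) followed by a landing spike.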


ID: B18
Bringing Basic Accessibility Features to Virtual Reality Context

Mauro Teófilo, SIDIA, Samsung Research Institute, Manaus, AM, Brazil
Josiane Nascimento, SIDIA, Samsung Research Institute, Manaus, AM, Brazil
Jonathan Santos, SIDIA, Samsung Research Institute, Manaus, AM, Brazil
Yves Jacques, SIDIA, Samsung Research Institute, Manaus, AM, Brazil
André Souza, The University of Alabama
Daniel Nogueira, SIDIA, Samsung Research Institute, Manaus, AM, Brazil
Presenter Mauro Teófilo
Abstract: Virtual reality is an experience, often computer-generated, that presents immersive environments with which users can interact. Since 2014, the spread of VR technology has created a demand for content as well as new interaction paradigms. One key aspect of these new paradigms is that they must consider the use of virtual reality by visually impaired people. In HCI, accessibility features are special computer functions that help people with disabilities use technology more easily. This paper introduces basic Virtual Reality (VR) scenarios for accessibility tools such as zooming, negative colors, auto reading, text-to-speech, subtitles, context-based cursors, and so on. The proposed solutions were designed based on accessibility features already in use on other platforms.


ID: B19
Immersion at Scale: Researcher’s Guide to Ecologically Valid Mobile Experiments

Soo Youn Oh, Stanford University
Ketaki Shriram, Stanford University
Bireswar Laha, Stanford University
Shawnee Baughman, Stanford University
Elise Ogle, Stanford University
Jeremy Bailenson, Stanford University
Presenter Soo Youn Oh
Abstract: While there have been hundreds of psychological studies using virtual reality (VR) over the past few decades, those studies have almost exclusively been conducted in laboratory settings using small samples of college students with little demographic variance. Hence, the generalizability of the results is limited, as not all findings will apply outside the college demographic. In this paper, we present our mobile VR project (Immersion at Scale), in which we conduct VR experiment sessions in naturalistic settings (e.g., local events, museums, etc.). On average, we were able to collect data from 20-25 people in each 4-hour data collection session. We discovered a number of obstacles and opportunities in bringing VR out into the field. Thus, we focus not on experimental stimuli and results but on methodological guidelines based on our iterative design improvements from pilot testing.


ID: B20
Discovering Educational Augmented Reality Math Applications by Prototyping with Elementary-School Teachers

Iulian Radu, Georgia Institute of Technology
Betsy McCarthy, WestEd
Yvonne Kao, WestEd
Presenter Iulian Radu
Abstract: In recent years, augmented reality (AR) applications for children’s entertainment have been gaining popularity, and educational organizations are increasingly interested in applying this technology to children’s educational games. In this paper we describe our collaboration with teachers and game designers to explore the educational potential of AR technology. This paper specifically investigates the following questions: Which mathematics curriculum topics should technological innovations address in Grade 1-3 classrooms? Which of these topics are suitable for AR games? And how can we facilitate an efficient dialogue between educators and game designers?


ID: B21
Improving the Curvature Manipulation Technique for Redirected Walking Using Passive Haptic Cues

Keigo Matsumoto, Faculty of Engineering, The University of Tokyo
Yuki Ban, Graduate School of Information Science and Technology, The University of Tokyo
Takuji Narumi, Graduate School of Information Science and Technology, The University of Tokyo
Tomohiro Tanikawa, Graduate School of Information Science and Technology, The University of Tokyo
Michitaka Hirose, Graduate School of Information Science and Technology, The University of Tokyo
Presenter Keigo Matsumoto
Abstract: We propose a method for improving the effects of manipulation in redirected walking (RDW) by using passive haptic cues. In particular, we focus on a curvature manipulation technique and develop an RDW system that displays the visual representation of a flat wall while, in reality, the user touches a curved wall. Using this system, we conduct an experiment to investigate the effects of our redirection techniques, and the results show that the proposed method using passive haptic cues can redirect users more effectively than conventional techniques that rely only on visual manipulation.
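
To illustrate the kind of visual-physical mismatch involved (a simplified geometric sketch under our own assumptions, not the authors' implementation): if the physical wall is an arc of radius R centered on the tracking origin and the virtual wall is flat at depth d, a touch point can be mapped by unrolling arc length onto the flat wall:

    import math

    def flatten_wall_contact(px, pz, R, d):
        # (px, pz): physical touch point on a wall arc of radius R
        phi = math.atan2(px, pz)    # angular position of the touch on the arc
        xv = R * phi                # unroll arc length onto the flat virtual wall
        return (xv, d)              # virtual-wall contact point at depth d

The hand avatar is then drawn at the virtual contact point, so vision reports a flat wall while proprioception and touch follow the curved one.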


ID: B22
Mechanism of Inhibitory Effect of Cathodal Current Tongue Stimulation on Five Basic Tastes

Satoru Sakurai, Osaka University
Kazuma Aoyama, Osaka University
Nobuhisa Miyamoto, Osaka University
Makoto Mizukami, Osaka University
Masahiro Furukawa, Osaka University
Taro Maeda, Osaka University
Hideyuki Ando, Osaka University
Presenter Satoru Sakurai
Abstract: The mechanism by which cathodal current stimulation exerts inhibitory effects on the five basic tastes is revealed herein. The objective of this paper is to achieve inhibition of sweetness by cathodal electrical stimulation of the tongue, which has not been reported to date, although inhibition of salty and umami perception has been documented. By focusing on the electrophoresis of ions generated by the dissolution of taste-inducing substances in water, this paper indicates how human gustation is inhibited by electrical stimulation, a key addition to the knowledge base for achieving control of all five basic tastes.


ID: B23
Evaluating the Effects of Image Persistence on Dynamic Target Acquisition in Low Frame Rate Virtual Environments

David Zielinski, Duke University
Hrishikesh Rao, Duke University
Nick Potter, Duke University
Lawrence Appelbaum, Duke University
Regis Kopper, Duke University
Presenter David J. Zielinski
Abstract: Here we explore a visual display technique for low frame rate virtual environments called low persistence (LP). It involves displaying each rendered frame for a single display refresh and blanking the screen while waiting for the next frame to be generated. To better understand the LP technique, we conducted a user study evaluating user performance and learning during a dynamic target acquisition task. The task involved the acquisition of targets moving along several different trajectories, modeled after shotgun trap shooting. The results of our study indicate that the LP condition approaches high frame rate performance for certain classes of target trajectories. Interestingly, we also see that learning is consistent across conditions, indicating that it may not always be necessary to train under a visually high frame rate system.
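
A minimal sketch of the presentation scheme as described (the callback names and rates are our assumptions, not from the paper): render at a low simulation rate, show each new frame for exactly one display refresh, then blank until the next frame is due:

    import time

    def low_persistence_loop(render, show, blank, sim_fps=15, display_hz=60):
        # render(): produce the next frame (slow, runs at sim_fps)
        # show(frame): present a frame; blank(): switch to a black screen
        frame_period = 1.0 / sim_fps
        refresh = 1.0 / display_hz
        while True:
            t0 = time.time()
            frame = render()
            show(frame)               # visible for a single refresh...
            time.sleep(refresh)
            blank()                   # ...then black until the next frame
            remaining = frame_period - (time.time() - t0)
            if remaining > 0:
                time.sleep(remaining)

The conventional alternative, full persistence, would simply keep the stale frame on screen until the new one arrives.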


ID: B24
A Tour Guiding System of Historical Relics Based on Augmented Reality

Wei Xiaodong, Beijing Institute of Technology
Weng Dongdong, Beijing Institute of Technology
Liu Yue, Beijing Institute of Technology
Wang Yongtian, Beijing Institute of Technology
Presenter Xiaodong Wei
Abstract: Yuanmingyuan is a relic park in which only a few cultural relics remain as a result of historical looting and burning, leaving most of the park’s scenic spots with little to see. To address this issue, we propose a game-based guidance system for Yuanmingyuan, featuring a time-travel game called MAGIC-EYES built with Augmented Reality technology. Six interaction modes are designed in the proposed system to guide tourists to a specified place. The results of a pilot study show that the proposed guidance system significantly improves the tourist experience.


ID: B25
Casting Shadows: Ecological Interface Design for Augmented Reality Pedestrian Collision Warning

Hyungil Kim, Virginia Tech
Jessica Isleib, Virginia Tech
Joseph Gabbard, Virginia Tech
Presenter Hyungil Kim
Abstract: Ecological interface design (EID) has the opportunity to complement current approaches for augmented reality (AR) interface design by considering human-environment interaction and leveraging the inherent benefit of AR interfaces: conformal graphics. This work applies EID to design a novel interface for pedestrian collision warning for an automotive AR head-up display (HUD). Our initial usability evaluation shows potential benefits of incorporating EID into AR interface design.


ID: B26
The Effect of Realism on the Virtual Hand Illusion

Lorraine Lin, School of Computing, Clemson University
Sophie Jörg, School of Computing, Clemson University
Presenter Lorraine Lin
Abstract: The virtual hand illusion is a body ownership illusion that occurs in a virtual environment. Previous studies reached different conclusions on how the realism of controllable virtual hand models affects the intensity of the perceived illusion. We compare participants’ responses to virtual impacts and threats when using hand models with different levels of realism. Our findings indicate that an illusion can occur for any model, but that in direct comparison the effect is weakest for a non-anthropomorphic block model and strongest for a realistic human hand model. We furthermore find that reactions to our experiments vary considerably between participants.


ID: B27
Evaluating two alternative walking in place interfaces for virtual reality gaming

Christian Toft, Aalborg University
Niels Nilsson, Aalborg University
Rolf Nordahl, Aalborg University
Stefania Serafin, Aalborg University
Presenter Rolf Nordahl
Abstract: This study investigates sliding as a walking-in-place (WIP) method for virtual reality navigation using the Wizdish, a novel WIP device built for home use. Two WIP methods, sliding and marching, were compared in terms of naturalness, presence, and surface difference. The sliding technique used on the Wizdish was found to be significantly more disruptive to the experience than marching. This could be due to the size of the Wizdish restricting the user’s stride, or to a longer acclimatization time.


ID: B28
Disguising Rotational Gain for Redirected Walking in Virtual Reality: Effect of Visual Density

Anders Paludan, Aalborg University
Niels Nilsson, Aalborg University
Rolf Nordahl, Aalborg University
Stefania Serafin, Aalborg University
Presenter Rolf Nordahl
Abstract: In virtual reality environments that allow users to walk freely, the area of the virtual environment (VE) is constrained to the size of the tracking area. This problem can be partially circumvented by using redirection techniques; one such technique, referred to as rotational gain, involves rotating the user more or less in the virtual world than in the physical world. This paper seeks to further investigate this area by examining the effect of visual density in the VE.
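
As a sketch of how rotational gain is typically applied per frame (class and parameter names are illustrative; this is not the study's code):

    class RotationalGainRedirector:
        def __init__(self, gain=1.1):
            self.gain = gain        # >1: virtual world turns faster than the head
            self.prev_yaw = None    # last tracked head yaw (radians)
            self.offset = 0.0       # accumulated injected rotation

        def update(self, tracked_yaw):
            # inject the extra (gain - 1) share of each tracked yaw change
            # (angle wraparound handling omitted for brevity)
            if self.prev_yaw is not None:
                self.offset += (self.gain - 1.0) * (tracked_yaw - self.prev_yaw)
            self.prev_yaw = tracked_yaw
            return tracked_yaw + self.offset   # yaw to apply to the virtual camera

Keeping the gain below the user's detection threshold is what lets the physical path curve away from the tracking boundary without the manipulation being noticed.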


ID: B29
De-escalation Training in an Augmented Virtuality Space

Charles Hughes, University of Central Florida
Kathleen Ingraham, University of Central Florida
Presenter Charlie Hughes
Abstract: This poster describes the TeachLivE paradigm, its application to de-escalation training, especially for law enforcement personnel, and the realization of this in a four-walled augmented virtuality space. Emphasis is placed on how the resulting physical presence increases co-presence and social presence, leading to an immersive and effective learning environment.


ID: B30
FaceBo: Facial and Body Tracking for Faithful Synthesis of Avatar

Jean-Luc Lugrin, University of Würzburg
Daniel Roth, University of Cologne
David Zilch, University of Würzburg
Gary Bente, University of Cologne
Marc Erich Latoschik, University of Würzburg
Presenter Marc Erich Latoschik
Abstract: This paper introduces a low-cost framework capable of combining both real-time markerless face and body tracking for faithful avatar embodiment in Virtual Reality (VR). We discuss suitable hardware and software solutions and present a first prototype. This work lays the technological basis for further research on the importance of the appearance and behavioral realism of avatars, e.g., for the illusion of virtual body ownership, for social interactions in VR, as well as for VR entertainment applications (immersive games or movies).


ID: B31
An Intelligent Multimodal Mixed Reality Real-Time Strategy Game

Sascha Link, University of Würzburg
Berit Barkschat, University of Würzburg
Chris Zimmerer, University of Würzburg
Martin Fischbach, University of Würzburg
Dennis Wiebusch, University of Würzburg
Jean-Luc Lugrin, University of Würzburg
Marc Erich Latoschik, University of Würzburg
Presenter Dennis Wiebusch
Abstract: This paper presents a mixed reality tabletop role-playing game with a novel combination of interaction styles and gameplay mechanics. Our contribution extends previous approaches by abandoning the traditional turn-based gameplay in favor of simultaneous real-time interaction. The increased cognitive and physical load during the simultaneous control of multiple game characters is counteracted by two features: First, certain game characters are equipped with AI-driven capabilities to become semi-autonomous virtual agents. Second, (groups of) these agents can be instructed by high-level commands via a multimodal (speech and gesture) interface.


ID: B32
Redirected Head Gaze to Support AR Meetings Distributed Over Heterogeneous Environments

Taeheon Kim, Georgia Institute of Technology
Ashwin Kachhara, Georgia Institute of Technology
Blair MacIntyre, Georgia Institute of Technology
Presenter Taeheon Kim
Abstract: We demonstrate a method for redirecting the gaze of virtual avatars in distributed augmented reality (AR) meetings. As social cues are a necessity for effective communication, our method tries to preserve gaze awareness, one of the key elements of a face-to-face meeting. When using AR to bring multiple sites together in a distributed meeting, with different numbers of participants and physical arrangements across sites, gaze awareness is maintained regardless of the seating topology. By maintaining gaze, we hope to enhance the presence of remote attendees and improve communication among the users, making meetings in AR a practical option for teleconferencing.


ID: B33
Integrating Videos with LIDAR Scans for Virtual Reality

Giang Bui, University of Missouri
Brittany Morago, University of Missouri
Truc Le, University of Missouri
Kevin Karsch, University of Missouri
Zheyu Lu, University of Missouri
Ye Duan, University of Missouri
Presenter Ye Duan
Abstract: LIDAR range scans can be used to quickly create accurate 3D models for virtual reality and as a basis to visualize sets of photographs, videos, and virtual objects in a cohesive environment. We demonstrate how to register a variety of 2D imagery with a range scan to construct photo-realistic models and to extract walking people captured in videos and model them in a 3D space. We also present a method for determining the sun position from a set of stitched photographs in order to apply correct lighting to virtual objects placed amongst real world data.


ID: B34
Virtual Energy Center for Teaching Alternative Energy Technologies

Christoph Borst, University of Louisiana at Lafayette
Kary Ritter, University of Louisiana at Lafayette
Terrence Chambers, University of Louisiana at Lafayette
Presenter Christoph W. Borst
Abstract: We overview the Virtual Energy Center (VEC), a VR environment that models a real energy facility to enable virtual field trips and self-guided exploration. VEC is augmented by visual guides and educational content to teach students about concentrating solar power technology. A teacher physically near the student can appear in the scene via depth camera imagery, allowing the teacher to walk around in a classroom setting and assist students. Work in progress includes streaming the depth images over a network to allow students to virtually meet expert guides from the real facility. We summarize these features, some interaction-related challenges, and ongoing testing.


ID: B35
Simultaneous Mapping and Redirected Walking for ad hoc Free Walking in Virtual Environments

Thomas Nescher, ETH Zurich
Markuz Zank, ETH Zurich
Andreas Kunz, ETH Zurich
Presenter Thomas Nescher
Abstract: This paper presents an approach that combines redirected walking with a low-cost, user-worn tracking approach based on simultaneous localization and mapping (SLAM). That is, learning the environment and its walkable area, tracking the user’s viewpoint, and applying redirected walking are all done on the fly, without any prior setup, without preparing a room, and without setting up a tracking system. This allows ad hoc free walking in virtual environments, even within dynamic and cluttered physical rooms where the walkable area is of arbitrary shape.


ID: B36
Perceptual Space Warping: Preliminary Exploration

Alex Peer, University of Wisconsin-Madison
Kevin Ponto, University of Wisconsin-Madison
Presenter Alex Peer
Abstract: Distance has been shown to be incorrectly estimated in virtual environments relative to the same estimation tasks in a real environment. This work describes a preliminary exploration of Perceptual Space Warping, which influences perceived distance in virtual environments by using a vertex shader to warp geometry. Empirical tests demonstrate significant effects, but of smaller magnitude than expected. This raises further questions about the complex interactions between the presentation and perception of space in a virtual environment.
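
As an illustration of the general idea (a minimal sketch; the actual warp function and shader in the paper are not specified in the abstract), a space warp could displace each vertex along its ray from the eye before projection:

    import numpy as np

    def warp_vertices(verts, eye, gain=1.2):
        # verts: (N, 3) world-space vertex positions; eye: (3,) viewpoint.
        # gain > 1 pushes geometry away along the view ray so it appears
        # farther; gain < 1 pulls it nearer.
        rays = verts - eye
        return eye + gain * rays

In practice the displacement would run per vertex in a vertex shader, so only the perceived space changes while the simulated scene stays fixed.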


ID: B37
Evaluation of the Effect of a Virtual Avatar’s Representation on Distance Perception in Immersive Virtual Environments

Dimitar Valkov, University of Münster
John Martens, University of Münster
Klaus Hinrichs, University of Münster
Presenter Dimitar Valkov
Abstract: It is well known that distance estimation in IVEs suffers from compression when viewed from an egocentric perspective with an HMD. While previous research indicates that providing the user with an avatar with high geometric and motion fidelity may alleviate this problem, little work has been done to investigate which properties of the avatar’s representation influence distance estimation. In this poster we report the results of an evaluation of users’ distance perception with different avatar representations. Our results indicate that the anthropometric fidelity of the avatar has a stronger effect on distance perception than its visual fidelity.


ID: B38
Depth-based 3D Gesture Multi-Level Radial Menu for Virtual Object Manipulation

Matthew M. Davis, Virginia Tech
Joseph L. Gabbard, Virginia Tech
Doug A. Bowman, Virginia Tech
Dennis Gracanin, Virginia Tech
Presenter Matthew M. Davis
Abstract: In this work, we present a depth-based solution to multi-level menus for selection and manipulation of virtual objects using free-hand gestures. Navigation between and through menus is performed using three gesture states that utilize X, Y translations of the finger with boundary crossing. Although presented in a single context, this menu structure can be applied to a myriad of domains requiring several levels of menu data, and serves to supplement existing and emerging menu design for augmented, virtual, and mixed-reality applications.


ID: B39
Progressive Feedback Point Cloud Rendering for Virtual Reality Display

Ross Tredinnick, University of Wisconsin – Madison
Kevin Ponto, University of Wisconsin – Madison
Markus Broecker, University of Wisconsin – Madison
Presenter Kevin Ponto
Abstract: Previous approaches to rendering large point clouds on immersive displays have generally created a trade-off between interactivity and quality. While these approaches have been quite successful for desktop environments when interaction is limited, virtual reality systems are continuously interactive, which forces users to suffer through either low frame rates or low image quality. This paper presents a novel approach to this problem through a progressive feedback-driven rendering algorithm. This algorithm uses reprojections of past views to accelerate the reconstruction of the current view. The presented method is tested against previous methods, showing improvements in both rendering quality and interactivity.
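
To give a flavor of the feedback step (a simplified sketch under our own assumptions; the paper's algorithm also has to manage rendering budgets, occlusion, and hole filling), points drawn in the previous frame can be reprojected into the new camera pose to seed the current view:

    import numpy as np

    def reproject_points(world_pts, view, K):
        # world_pts: (N, 3) points that survived the previous frame
        # view: 4x4 world-to-camera matrix of the *current* pose
        # K: 3x3 camera intrinsics
        pts_h = np.hstack([world_pts, np.ones((len(world_pts), 1))])
        cam = (view @ pts_h.T).T[:, :3]          # world -> camera space
        visible = cam[:, 2] > 0                  # keep points in front of camera
        proj = (K @ cam[visible].T).T
        pix = proj[:, :2] / proj[:, 2:3]         # perspective divide
        return pix, cam[visible, 2]              # pixel coords and depths

Newly fetched points then only need to refine the regions the reprojection left empty or stale, which is what keeps the display interactive.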


ID: B40
Using Projection AR to Add Design Studio Pedagogy to a CS Classroom

Blair MacIntyre, Georgia Institute of Technology
Dingtian Zhang, Georgia Institute of Technology
Ryan Jones, Georgia Institute of Technology
Amber Solomon, Georgia Institute of Technology
Elizabeth DiSalvo, Georgia Institute of Technology
Mark Guzdial, Georgia Institute of Technology
Presenter Ryan Jones
Abstract: We use projection augmented reality to add design-studio learning models to the classroom of an introductory Media Computation computer science class. Students do classwork using an enhanced version of Pythy that captures students’ work and displays it around the room. We leverage the Microsoft RoomAlive Toolkit to construct a room-scale augmented reality. The system “pins” students’ work to the walls, where teachers and students can see and discuss it. We hope that the system will foster collaboration, support STEM learning experiences that encourage creativity, and help build a strong peer learning environment.


ID: B41
A Low-cost, Low-latency Approach to Dynamic Immersion in Occlusive Head-Mounted Displays

Robert Lindeman, HIT Lab NZ, University of Canterbury
Presenter Rob Lindeman
Abstract: We introduce a method for dynamically controlling the level of immersion provided by HMDs. We replace the cowling around typical ski-goggle-style HMDs with LCD panels whose transparency can be controlled by very simple stand-alone circuitry or a micro-controller, varying how much of the real world is visible in the user’s periphery. This allows users to see objects in their immediate surroundings (e.g., the keyboard and mouse), can be used to counter cybersickness by providing natural cues, and introduces no added latency into the system.


ID: B42
A Realistic Walking Model for Enhancing Redirection in Virtual Reality

Courtney Hutton, Occidental College
Evan Suma, USC Institute for Creative Technologies
Presenter Courtney Hutton
Abstract: Redirected walking algorithms require the prediction of human motion in order to effectively steer users away from the boundaries of the physical space. While a virtual walking trajectory may be represented using straight lines connecting waypoints of interest, this simple model does not accurately represent typical user behavior. In this poster, we present a more realistic walking model for use in real-time virtual environments that employ redirection techniques. We implemented the model within a framework that can be used for simulation of redirected walking within different virtual and physical environments.