2019 IEEE VR Osaka

March 23rd - 27th

IEEE Computer Society IEEE VRSJ

Sponsors


Diamond

Osaka International Convention Center

Platinum

DELL + Intel Japan
Mercari
National Science Foundation
OSAKA CONVENTION & TOURISM BUREAU

Gold


Tateishi Science and Technology Foundation

The Telecommunications Advancement Foundation

Silver


DAQRI

Bronze

BARCO
Huawei Japan
Knowledge Service Network
Mozilla Corporation
Osaka Electro-Communication University
SenseTime Japan

Flower / Misc

GREE, Inc.
KYOHRITSU ELECTRONIC INDUSTRY Co.,Ltd.
Beijing Nokov Science & Technology Co., Ltd.
PoSTMEDIA
SoftCube Corporation
Sumitomo Electric Industries
Vicon

Exhibitors

Advanced Realtime Tracking (ART)
Archivetips
China's State Key Laboratory of Virtual Reality Technology and Systems
Computer Network Information Center, Chinese Academy of Sciences
Creact
Crescent
DELL + Intel Japan
Fujitsu
Fun Life Inc.
Haption
Kyohritsu
Nihon Binary Co., Ltd.
NIST - Public Safety Communications Research
Nokov
Optitrack Japan, Ltd.
PhaseSpace
QD Laser, Inc.
Qualisys
Solidray Co.,Ltd.
WESTUNITIS Co., Ltd.

Supporters


IEEE Kansai Section

Society for Information Display Japan Chapter

VR Consortium

The Institute of Systems, Control and Information Engineers

Human Interface Society

The Japanese Society for Artificial Intelligence

The Visualization Society of Japan

Information Processing Society of Japan

The Robotics Society of Japan

Japan Society for Graphic Science

The Japan Society of Mechanical Engineers

Japanese Society for Medical and Biological Engineering

The Institute of Image Information and Television Engineers

The Society of Instrument and Control Engineers

The Institute of Electronics, Information and Communication Engineers

The Institute of Electrical Engineers of Japan

The Society for Art and Science

Japan Ergonomics Society

The Japanese Society of Medical Imaging


Papers

We received a record number of paper submissions. We will have around 140 paper talks in 4 parallel sessions.

Some changes to the program below may still be necessary. Authors, please notify the program chairs if you have special constraints.

Monday, March 25th

Tuesday, March 26th

Wednesday, March 27th


Session 1: 360 Video 1

Monday, March 25th, 10:15 - 11:30, Room A

Chair: Stephen DiVerdi

Motion parallax for 360 RGBD video

Ana Serrano (Universidad de Zaragoza), Incheol Kim (Universidad de Zaragoza), Zhili Chen (Adobe Research), Stephen DiVerdi (Adobe), Diego Gutierrez (Universidad de Zaragoza), Aaron Hertzmann (Adobe), Belen Masia (Universidad de Zaragoza)

TVCG

Abstract: We present a method for adding parallax and real-time playback of 360 videos in Virtual Reality headsets. In current video players, the playback does not respond to translational head movement, which reduces the feeling of immersion, and causes motion sickness for some viewers. Given a 360 video and its corresponding depth (provided by current stereo 360 stitching algorithms), a naive image-based rendering approach would use the depth to generate a 3D mesh around the viewer, then translate it appropriately as the viewer moves their head. However, this approach breaks at depth discontinuities, showing visible distortions, whereas cutting the mesh at such discontinuities leads to ragged silhouettes and holes at disocclusions. We address these issues by improving the given initial depth map to yield cleaner, more natural silhouettes. We rely on a three-layer scene representation, made up of a foreground layer and two static background layers, to handle disocclusions by propagating information from multiple frames for the first background layer, and then inpainting for the second one. Our system works with input from many of today’s most popular 360 stereo capture devices (e.g., Yi Halo or GoPro Odyssey), and works well even if the original video does not provide depth information. Our user studies confirm that our method provides a more compelling viewing experience with respect to traditional viewers that do not support parallax, increasing immersion while reducing discomfort and nausea.
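
The naive image-based-rendering baseline that the abstract starts from (lifting each equirectangular pixel to a 3D point via its depth, then shifting it by the viewer's head translation) can be sketched as below. This is only an illustration of that baseline, not the paper's layered method; the array shapes and the head-offset vector are assumptions.

```python
import numpy as np

def equirect_to_points(depth, head_offset=(0.0, 0.0, 0.0)):
    """Lift an equirectangular depth map (H x W, metres) to 3D points around
    the viewer, then shift them by a translational head offset. A naive sketch
    of depth-based parallax for 360 RGBD video, not the paper's layered method."""
    h, w = depth.shape
    # Pixel centres -> spherical angles (longitude, latitude).
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Spherical -> Cartesian, scaled by the per-pixel depth.
    x = depth * np.cos(lat) * np.sin(lon)
    y = depth * np.sin(lat)
    z = depth * np.cos(lat) * np.cos(lon)
    pts = np.stack([x, y, z], axis=-1)
    # Translating the viewer is equivalent to shifting the scene points.
    return pts - np.asarray(head_offset, dtype=float)

# Usage: a synthetic 2 m-deep panorama viewed 10 cm to the viewer's right.
pts = equirect_to_points(np.full((256, 512), 2.0), head_offset=(0.1, 0.0, 0.0))
```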

MegaParallax: Casual 360° Panoramas with Motion Parallax

Tobias Bertel (University of Bath), Neill DF Campbell (University of Bath), Christian Richardt (University of Bath)

TVCG

Abstract: The ubiquity of smart mobile devices, such as phones and tablets, enables users to casually capture 360° panoramas with a single camera sweep to share and relive experiences. However, panoramas lack motion parallax as they do not provide different views for different viewpoints. The motion parallax induced by translational head motion is a crucial depth cue in daily life. Alternatives, such as omnidirectional stereo panoramas, provide different views for each eye (binocular disparity), but they also lack motion parallax as the left and right eye panoramas are stitched statically. Methods based on explicit scene geometry reconstruct textured 3D geometry, which provides motion parallax, but suffers from visible reconstruction artefacts. The core of our method is a novel multi-perspective panorama representation, which can be casually captured and rendered with motion parallax for each eye on the fly. This provides a more realistic perception of panoramic environments which is particularly useful for virtual reality applications. Our approach uses a single consumer video camera to acquire 200–400 views of a real 360° environment with a single sweep. By using novel-view synthesis with flow-based blending, we show how to turn these input views into an enriched 360° panoramic experience that can be explored in real time, without relying on potentially unreliable reconstruction of scene geometry. We compare our results with existing omnidirectional stereo and image-based rendering methods to demonstrate the benefit of our approach, which is the first to enable casual consumers to capture and view high-quality 360° panoramas with motion parallax.

Deep Learning-Based Approach For Automatic VR Image Upright Adjustment

Raehyuk Jung (KAIST), Aiden Seung Joon Lee (Brown University), Amirsaman Ashtari (KAIST), Jean-Charles Bazin (KAIST)

Conference

Abstract: Recent spherical VR cameras can capture VR images with a 360° field of view at a low cost, and thus have greatly democratized and simplified the acquisition of real-world VR contents. However, in practice, when the camera orientation is not straight, the acquired VR image is not vertically aligned, and thus appears tilted when displayed on a VR headset, which diminishes the quality of the VR experience. To overcome this problem, we present a deep learning based approach that can automatically estimate the orientation of a VR image and return its upright version. In contrast to existing methods, our approach does not require the presence of lines, vanishing points or horizon in the image, and thus can be applied on a wide range of scenes. Experiments on real datasets with quantitative and qualitative evaluations, as well as comparison with recent state-of-the-art methods, have successfully confirmed the validity of our approach.

Dense 3D Scene Reconstruction from Multiple Spherical Images for 3-DoF+ VR Applications

Thiago Lopes Trugillo da Silveira (Federal University of Rio Grande do Sul), Claudio R Jung (UFRGS)

Conference

Abstract: We propose a novel method for estimating the 3D geometry of indoor scenes based on multiple spherical images. Our technique produces a dense depth map registered to a reference view so that depth-image-based-rendering (DIBR) techniques can be explored for providing three-degrees-of-freedom plus (3-DoF+) immersive experience to augmented/mixed/virtual reality (AR/MR/VR) users. The core of our method is to explore large displacement optical flow algorithms to obtain point correspondences, and use cross-checking and geometric constraints to detect and remove bad matches related to deformations or occlusions. We show that selecting a subset of the best dense matches leads to better pose estimates than traditional approaches based on sparse feature matching, and explore a weighting scheme to obtain the depth maps. Finally, we adapt a fast image-guided filter to the spherical domain for enforcing local spatial consistency, improving the 3D estimates. Experimental results indicate that our method quantitatively outperforms competitive approaches on computer-generated images and synthetic data under noisy correspondences and camera poses. Also, we show that the estimated depth maps obtained from only a few real spherical captures of the scene are capable of producing coherent synthesized binocular stereoscopic views by using traditional DIBR methods.


Session 2: Haptics and Perception

Monday, March 25th, 10:15 - 11:30, Room B

Chair: Ann McNamara

Remapped Physical-Virtual Interfaces with Bimanual Haptic Retargeting

Brandon J Matthews (University of South Australia), Bruce H Thomas (University of South Australia), G Stewart Von Itzstein (University of South Australia), Ross Smith (University of South Australia)

Conference

Abstract: This paper proposes a novel interface for virtual reality in which physical interface components are mapped to multiple virtual counterparts using haptic retargeting illusions. This gives virtual reality interfaces the ability to have correct haptic sensations for many virtual buttons although in the physical space there is only one. This is a generic system that can be applied to areas including design, interaction tasks, product prototype development and interactive games in virtual reality. The system presented extends existing retargeting algorithms to support bi-manual interactions. A new warp algorithm, called interface warp, was developed to support remapped virtual reality user interfaces. Through an experimental user study, we explore the effects of bimanual retargeting and the interface warp technique on task response time, errors, presence and perceived manipulation, compared to unimanual (single handed) retargeting and other existing warp techniques. The experiment was conducted to explore the operating parameters of the system, which demonstrated faster task response time and fewer errors for the interface warp technique and showed no significant impact of bimanual interactions.

FleXeen: Visually Manipulating Perceived Fabric Bending Stiffness in Spatial Augmented Reality

Parinya Punpongsanon, Daisuke Iwai, Kosuke Sato

TVCG-Invited

Abstract: It has been suggested that the appearance of fabric motion affects the human perception of its bending stiffness. This paper presents a novel spatial augmented reality (SAR), or projection mapping, approach that can visually manipulate the perceived bending stiffness of a fabric. In particular, we propose a flow enhancement method that changes apparent fabric motion based on a simple optical flow analysis technique rather than complex physical simulations for interactive applications. Through a psychophysical experiment, we investigated the relationship between the magnification factor of our flow enhancement and perceived bending stiffness of fabric. Furthermore, we constructed a prototype application system that allows a user to control the stiffness of a fabric without changing the actual physical fabric. Through an evaluation of the prototype, we confirmed that, on average, the proposed technique could manipulate the perceived stiffness for various materials (i.e., cotton, polyester, and mixed cotton and linen) at an average accuracy of 90.3%.

Effects of Stereoscopic Viewing and Haptic Feedback, Sensory-Motor Congruence and Calibration on Near-Field Fine Motor Perception-Action Coordination in Virtual Reality

David Brickler (Clemson University), Jeffrey W. Bertrand (Clemson University), Sabarish V. Babu (Clemson University)

Conference

Abstract: We present an empirical evaluation on how stereoscopic viewing and haptic feedback differentially affect fine motor perception-action coordination in a pick-and-place task in Virtual Reality (VR), similar to the peg transfer task from the surgical training curriculum. The factors considered were stereoscopic viewing, haptic feedback, sensory-motor congruence and mismatch, and calibration on perception-action coordination in near field fine motor task performance in VR. Quantitative measures of placement error, distance, collision, and time to complete trials were recorded and analyzed. Overall, we found that participants’ manual dexterous task performance was enhanced in the presence of both stereoscopic viewing and haptic feedback. However, we found that time to complete task was greatly enhanced by the presence of haptic feedback, and economy and efficiency of movement of the end effector as well as the manipulated object was enhanced by the presence of both haptic feedback and stereoscopic viewing. Whereas, number of collisions and placement accuracy were greatly enhanced by the presence of stereoscopic viewing in near-field fine motor perception-action coordination. Our research additionally shows that mismatch in sensory-motor stimuli can detrimentally affect the number of collisions, and efficiency of end effector and object movements in near-field fine motor activities, and can be further negatively affected by the absence of haptic feedback and stereoscopic viewing. In spite of reduced cue situations in VR, and the absence or presence of stereoscopic viewing and haptic feedback, we found that participants tend to calibrate or adapt their perception-action coordination rapidly with a set of at least 5 trials.

Visual Manipulation for Underwater Drag Force Perception in Immersive Virtual Environments

HyeongYeop Kang (Korea University), Geonsun Lee (Korea University), JungHyun Han (Korea University)

Conference

Abstract: In this paper, we propose to reproduce drag forces in a virtual underwater environment. To this end, we first compute the drag forces to be exerted on human limbs in a physically correct way. Adopting a pseudo-haptic approach that generates visual discrepancies between the real and virtual limb motions, we compute the extent of drag forces that are applied to the virtual limbs and can be naturally perceived. Through two tests, our drag force simulation method is compared with others. The results prove that our method is effective in reproducing the sense of being immersed in water. Our study can be utilized for various types of virtual underwater applications such as scuba diving training and aquatic therapy.
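
For context, the physically correct drag force mentioned in the abstract is usually the quadratic drag equation, evaluated per limb segment and opposing its velocity. The sketch below is a minimal stand-in with placeholder density, drag-coefficient and cross-section values, not the authors' simulation.

```python
def drag_force(velocity, rho=1000.0, c_d=1.0, area=0.02):
    """Quadratic drag F = 0.5 * rho * C_d * A * v**2, directed against motion.
    rho: fluid density (kg/m^3, ~1000 for water); c_d: drag coefficient;
    area: limb cross-section (m^2). All parameter values here are illustrative."""
    speed = (velocity[0]**2 + velocity[1]**2 + velocity[2]**2) ** 0.5
    if speed == 0.0:
        return (0.0, 0.0, 0.0)
    magnitude = 0.5 * rho * c_d * area * speed**2
    return tuple(-magnitude * v / speed for v in velocity)

# A hand sweeping at 0.5 m/s along x experiences roughly 2.5 N of opposing drag.
print(drag_force((0.5, 0.0, 0.0)))
```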

Estimating Detection Thresholds for Desktop-Scale Hand Redirection in Virtual Reality

André Zenner (German Research Center for Artificial Intelligence (DFKI), Saarland Informatics Campus), Antonio Krüger (German Research Center for Artificial Intelligence (DFKI), Saarland Informatics Campus)

Conference

Abstract: Virtual reality (VR) interaction techniques like haptic retargeting offset the user’s rendered virtual hand from the real hand location to redirect the user’s physical hand movement. This paper explores the order of magnitude of hand redirection that can be applied without the user noticing it. By deriving lower-bound estimates of detection thresholds, we quantify the range of unnoticeable redirection for the three basic redirection dimensions horizontal, vertical and gain-based hand warping. In a two-alternative forced choice (2AFC) experiment, we individually explore these three hand warping dimensions each in three different scenarios: a very conservative scenario without any distraction and two conservative but more realistic scenarios that distract users from the redirection. Additionally, we combine the results of all scenarios to derive robust recommendations for each redirection technique. Our results indicate that within a certain range, desktop-scale VR hand redirection can go unnoticed by the user, but that this range is narrow. The findings show that the virtual hand can unnoticedly be displaced horizontally or vertically by up to 4.5° in either direction respectively. This allows for a range of ca. 9°, in which users cannot reliably detect applied redirection. For our gain-based hand redirection technique, we found that gain factors between g = 0.88 and g = 1.07 can go unnoticed, which corresponds to a user grasping up to 13.75% further or up to 6.18% less far than in virtual space. Our findings are of value for the development of VR applications that aim to redirect users in an undetectable manner, such as for haptic retargeting.
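
A minimal model of the three warping dimensions studied (horizontal and vertical angular offsets, plus a gain on reach) might look like the sketch below. The rotation-about-origin formulation and the parameter names are assumptions; the paper's exact warp functions are not reproduced here.

```python
import numpy as np

def redirect_hand(real_hand, origin, yaw_deg=0.0, pitch_deg=0.0, gain=1.0):
    """Offset the rendered virtual hand relative to the tracked real hand:
    'gain' scales reach along the movement direction, while yaw/pitch rotate
    the hand position about the origin (horizontal/vertical warping).
    A sketch of the general idea, not the authors' exact formulation."""
    v = (np.asarray(real_hand, float) - np.asarray(origin, float)) * gain
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    ry = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],      # horizontal warp
                   [0.0, 1.0, 0.0],
                   [-np.sin(yaw), 0.0, np.cos(yaw)]])
    rx = np.array([[1.0, 0.0, 0.0],                      # vertical warp
                   [0.0, np.cos(pitch), -np.sin(pitch)],
                   [0.0, np.sin(pitch), np.cos(pitch)]])
    return np.asarray(origin, float) + rx @ ry @ v

# E.g. a 4.5 degree horizontal offset combined with a 0.9 reach gain:
print(redirect_hand([0.0, 0.0, 0.4], [0.0, 0.0, 0.0], yaw_deg=4.5, gain=0.9))
```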


Session 3: Redirected Walking

Monday, March 25th, 10:15 - 11:30, Room C

Chair: Evan Suma Rosenberg

A General Reactive Algorithm for Redirected Walking using Artificial Potential Functions

Jerald Thomas (University of Minnesota), Evan Suma Rosenberg (University of Minnesota)

Conference

Abstract: Redirected walking enables users to locomote naturally within a virtual environment that is larger than the available physical space. These systems depend on steering algorithms that continuously redirect users within limited real world boundaries. While a majority of the most recent research has focused on predictive algorithms, it is often necessary to utilize reactive approaches when the user’s path is unconstrained. Unfortunately, previously proposed reactive algorithms assume a completely empty space with convex boundaries and perform poorly in complex real world spaces containing obstacles. To overcome this limitation, we present Push/Pull Reactive (P2R), a novel algorithm that uses an artificial potential function to steer users away from potential collisions. We also introduce three new reset strategies and conducted an experiment to evaluate which one performs best when used with P2R. Simulation results demonstrate that the proposed approach outperforms the previous state-of-the-art reactive algorithm in non-convex spaces with and without interior obstacles.
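
The general idea of artificial-potential-function steering can be illustrated with a small 2D sketch: boundary and obstacle samples each contribute an inverse-distance repulsive vector, and their sum gives the direction toward which the user should be redirected. This is only in the spirit of P2R; the actual potential function, sampling scheme and gain selection in the paper are not reproduced here.

```python
import numpy as np

def apf_steering_vector(user_pos, obstacle_points, falloff=1.0):
    """Sum inverse-distance repulsive forces from sampled boundary/obstacle
    points; the resulting vector points away from nearby geometry and can be
    used to choose the redirection (rotation/curvature gain) direction.
    Illustrative only -- P2R's actual potential function may differ."""
    user = np.asarray(user_pos, float)
    force = np.zeros(2)
    for p in obstacle_points:
        d = user - np.asarray(p, float)
        dist = np.linalg.norm(d)
        if dist > 1e-6:
            force += (d / dist) * (falloff / dist) ** 2
    return force

# A user near the east wall of a 5 m x 5 m room is pushed back toward the centre.
wall_samples = [(5.0, y) for y in np.linspace(0, 5, 11)]
print(apf_steering_vector((4.5, 2.5), wall_samples))
```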

Real-time Optimal Planning for Redirected Walking Using Deep Q-Learning

Dong-Yong Lee (Yonsei University), Yong-Hun Cho (Yonsei University), In-Kwon Lee (Yonsei University)

Conference

Abstract: This work presents a novel control algorithm of redirected walking called steer-to-optimal-target (S2OT) for effective real-time planning in redirected walking. S2OT is a method of redirection estimating the optimal steering target that can avoid the collision on the future path based on the user’s virtual and physical paths. We design and train the machine learning model for estimating optimal steering target through reinforcement learning, especially, using the technique called Deep Q-Learning. S2OT significantly reduces the number of resets caused by collisions between user and physical space boundaries compared to well-known algorithms such as steer-to-center (S2C) and Model Predictive Control Redirection (MPCred). The results are consistent for any combinations of room-scale and large-scale physical spaces and virtual maps with or without predefined paths. S2OT also has a fast computation time of 0.763 msec per redirection, which is sufficient for redirected walking in real-time environments. We also present the possibility to adjust the weight between the number of resets and the degree of rotational redirection by changing the design of the reward function used in S2OT.
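
For readers unfamiliar with Deep Q-Learning, the temporal-difference target such a steering-target network is trained against is sketched below. The reward values in the example (a small bonus for avoiding a reset, a large penalty when one occurs) are assumptions consistent with the abstract, not the paper's reward function.

```python
import numpy as np

def dqn_target(reward, next_q_values, done, gamma=0.99):
    """Standard Bellman target y = r + gamma * max_a' Q(s', a') used to train
    a Deep Q-Network; 'done' marks episode ends (e.g. a reset occurred)."""
    return reward + (0.0 if done else gamma * float(np.max(next_q_values)))

# Example: a step that avoided a reset (small positive reward) vs. one that
# triggered a reset (large penalty, episode terminates).
print(dqn_target(+0.1, np.array([0.2, 0.5, 0.3]), done=False))
print(dqn_target(-10.0, np.array([0.0, 0.0, 0.0]), done=True))
```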

Effects of Tracking Area Shape and Size on Artificial Potential Field Redirected Walking

Justin Messinger (Miami University), Eric Hodgson (Miami University), Eric R Bachmann (Miami University)

Conference

Abstract: Immersive Virtual Environment systems that utilize Head Mounted Displays and a large tracking area have the advantage of being able to use natural walking as a locomotion interface. In such systems, difficulties arise when the virtual world is larger than the tracking area and users approach area boundaries. Redirected walking (RDW) is a technique that distorts the correspondence between physical and virtual world motion to imperceptibly steer users away from boundaries and obstacles, including other co-immersed users. Recently, a RDW algorithm was proposed based on the use of artificial potential fields (APF), in which walls and obstacles repel the user. APF-RDW effectively supports multiple simultaneous users and, unlike other RDW algorithms, can easily account for tracking area dimensions and room shape when generating steering instructions. This work investigates the performance of a refined APF-RDW algorithm in different-sized tracking areas and in irregularly shaped rooms, as compared to a Steer-to-Center (STC) algorithm and an un-steered control condition. Data are presented for both single-user and multi-user scenarios. Results show the ability of APF-RDW to steer effectively in irregular, concave-shaped tracking areas such as L-shaped rooms or crosses, along with scalable multi-user support, and better performance than STC algorithms in almost all conditions.

Multi-User Redirected Walking and Resetting Using Artificial Potential Fields

Eric R Bachmann (Miami University), Eric Hodgson (Miami University), Cole Hoffbauer (Miami University), Justin Messinger (Miami University)

TVCG

Abstract: Head-mounted displays (HMDs) and large area position tracking systems can enable users to navigate virtual worlds through natural walking. Redirected walking (RDW) imperceptibly steers immersed users away from physical world obstacles and allows them to explore unbounded virtual worlds while walking in a limited physical space. In cases of imminent collisions with physical obstacles or other users, resetting techniques can reorient into open space. This work introduces new RDW and resetting algorithms based on the use of artificial potential fields that “push” users away from obstacles and other users. Data from human subject experiments indicate that these methods reduce potential single-user resets by 66% compared to previous techniques. A live multi-user study demonstrates the viability of the algorithm with up to 3 concurrent users, and simulation results with up to 8 simultaneous users indicate that the algorithm scales efficiently and is effective with larger groups.

Shrinking Circles: Adaptation to Increased Curvature Gain in Redirected Walking

Luke Bölling (University of Münster), Niklas Stein (University of Muenster), Frank Steinicke (Universität Hamburg), Markus Lappe (University of Muenster)

TVCG

Abstract: Real walking is the most natural way to locomote in virtual reality (VR), but a confined physical walking space limits its applicability. Redirected walking (RDW) is a collection of techniques to solve this problem. One of these techniques aims to imperceptibly rotate the user’s view of the virtual scene in order to steer her along a confined path whilst giving the impression of walking in a straight line in a large virtual space. Measurement of perceptual thresholds for the detection of such a modified curvature gain have indicated a radius that is still larger than most room sizes. Since the brain is an adaptive system and thresholds usually depend on previous stimulations, we tested if prolonged exposure to an immersive virtual environment (IVE) with increased curvature gain produces adaptation to that gain and modifies thresholds such that, over time, larger curvature gains can be applied for RDW. Therefore, participants first completed a measurement of their perceptual threshold for curvature gain. In a second session, the same participants were exposed to an IVE with a constant curvature gain in which they walked between two targets for about 20 minutes. Afterwards, their perceptual thresholds were measured again. The results show that the psychometric curves shifted after the exposure session and perceptual thresholds for increased curvature gain further increased. The increase of the detection threshold suggests that participants adapt to the manipulation and stronger curvature gains can be applied in RDW, and therefore improves its applicability in such situations.
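
Curvature gain is commonly expressed via the radius of the physical circle a user walks while perceiving a straight virtual path; the helper below converts between injected rotation per metre and that radius. The numeric values in the example are arbitrary placeholders, not thresholds reported in the paper.

```python
import math

def curvature_radius(gain_deg_per_meter):
    """A curvature gain expressed as injected rotation per metre walked maps to
    a physical circle of radius r = 1 / gain (gain in radians per metre)."""
    return 1.0 / math.radians(gain_deg_per_meter)

def required_gain(room_radius_m):
    """Inverse: rotation (degrees) that must be injected per metre so the user's
    physical path fits inside a circle of the given radius."""
    return math.degrees(1.0 / room_radius_m)

# E.g. confining a 'straight' virtual walk to a 5 m-radius physical circle
# requires injecting roughly 11.5 degrees of rotation per metre walked.
print(curvature_radius(2.6), required_gain(5.0))
```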


Session 4: Rendering

Monday, March 25th, 10:15 - 11:30, Room D

Chair: Mark Billinghurst

Tile Pair-based Adaptive Multi-Rate Stereo Shading

Yazhen Yuan, Rui Wang, Hujun Bao

TVCG-Invited

Abstract: The work proposes a new stereo shading architecture that enables adaptive shading rates and automatic shading reuse among triangles and between two views. The proposed pipeline presents several novel features. First, the present sort-middle/bin shading is extended to tile pair-based shading to rasterize and shade pixels at two views simultaneously. A new rasterization algorithm utilizing epipolar geometry is then proposed to schedule tile pairs and perform rasterization at stereo views efficiently. Second, this work presents an adaptive multi-rate shading framework to compute shading on pixels at different rates. A novel tile-based screen space cache and a new cache reuse shader are proposed to perform such multi-rate shading across triangles and views. The results show that the newly proposed method outperforms the standard sort-middle shading and the state-of-the-art multi-rate shading by achieving considerably lower shading costs and memory bandwidths.

Real-Time Rendering of Stereo-Consistent Contours

Dejing He (Zhejiang University), Rui Wang (Zhejiang University), Hujun Bao (Zhejiang University)

Conference

Abstract: Line drawing is an important method to depict the shapes of objects concisely. The combination of line drawing and stereo rendering, the stereo line drawing, not only conveys shape efficiently but also provides viewers with the visual experience of the stereoscopic 3D world. Contours are the most important lines to draw. However, due to their view-dependent nature, contours must be rendered consistently for two eyes; otherwise they will induce binocular rivalry and viewing discomfort. This paper proposes a novel solution to draw stereo-consistent contours in real time. We derive from the concept of epipolar-slidability introduced in previous work that the stereo continuity of contour points along the epipolar curve can be evaluated by extreme points on the trajectory of corresponding viewpoints of contour points, and further extend the contour continuity test via an image space search rather than sampling multiple viewpoints. Overlaps are handled appropriately by projecting contour points and extreme points in image space using a per-pixel linked list. Results show that the proposed method has a much lower cost than previous works, and therefore enables real-time editing for users, such as changing camera viewpoints, editing object geometry, tweaking parameters to show contours with different details, etc.

Hybrid Mono-Stereo Rendering in Virtual Reality

Laura Fink (University Erlangen-Nuremberg), Nora Hensel (University Erlangen-Nuremberg), Daniela Markov-Vetter (University of Rostock), Christoph Weber (University Erlangen-Nuremberg), Oliver Staadt (University of Rostock), Marc Stamminger (University Erlangen-Nuremberg)

Conference

Abstract: Rendering for Head Mounted Displays (HMD) causes a doubled computational effort, since serving the human stereopsis requires the creation of one image for the left and one for the right eye. The difference in this image pair, called binocular disparity, is an important cue for depth perception and the spatial arrangement of surrounding objects. Findings in the context of the human visual system (HVS) have shown that especially in the near range of an observer, binocular disparities have a high significance. But as the disparity converges to a simple geometric shift with rising distance, its importance as a depth cue also declines exponentially. In this paper, we exploit this knowledge about human perception by rendering objects fully stereoscopically only up to a chosen distance, and monoscopically from there on. Doing so, we obtain three distinct images which are synthesized into a new hybrid stereoscopic image pair (HSIP) which reasonably approximates a conventionally rendered stereoscopic image pair (CSIP). The method has the potential to reduce the amount of rendered primitives easily to nearly 50% and thus significantly lower frame times. Besides a detailed analysis of the introduced formal error and how to deal with occurring artifacts, we evaluated the perceived quality of the VR experience in a comprehensive user study with nearly 50 participants. The results show that participants were generally not able to distinguish between the shown HSIPs and the CSIPs. An in-depth analysis is given on how the participants reached their decisions and how they subjectively rated their VR experience.
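
The scheduling step implied by the abstract (render geometry stereoscopically only up to a distance threshold, monoscopically beyond it, and reuse the mono image for both eyes) can be sketched as below. The Drawable type and the cutoff value are hypothetical; the paper's renderer and its handling of artifacts are not modelled.

```python
from dataclasses import dataclass

@dataclass
class Drawable:
    name: str
    distance: float  # distance from the viewer in metres

def schedule_hybrid_render(drawables, stereo_cutoff=8.0):
    """Split the scene into a near set (rendered once per eye) and a far set
    (rendered once from a central eye and reused for both views), which is the
    source of the roughly halved primitive count described in the abstract."""
    near = [d for d in drawables if d.distance <= stereo_cutoff]
    far = [d for d in drawables if d.distance > stereo_cutoff]
    return {"left_eye": near, "right_eye": near, "mono_center": far}

scene = [Drawable("cockpit", 0.6), Drawable("building", 40.0), Drawable("tree", 12.0)]
print({k: [d.name for d in v] for k, v in schedule_hybrid_render(scene).items()})
```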

Optimised Molecular Graphics on the HoloLens

Christoph Müller (Universität Stuttgart), Matthias Braun (Universität Stuttgart), Thomas Ertl

Conference

Abstract: The advent of modern and affordable augmented reality headsets like the Microsoft HoloLens has sparked new interest in using virtual and augmented reality technology in the analysis of molecular data. For all visualisation in immersive, mixed reality scenarios, a sufficiently high rendering speed is an important factor, which raises the issue of the limited processing power available on fully untethered devices for handling computationally expensive visualisations. Recent research shows that the space-filling model of even small data sets from the protein data bank (PDB) cannot be rendered at desirable frame rates on the HoloLens. In this work, we report on how to improve the rendering speed of atom-based visualisation of proteins and how the rendering of more abstract representations of the molecules compares against it.

Real-Time Continuous Level of Detail Rendering of Point Clouds

Markus Schütz (Institute of Visual Computing & Human-Centered Technology), Katharina Krösl (TU Wien), Michael Wimmer (TU Wien)

Conference

Abstract: Real-time rendering of large point clouds requires acceleration structures that reduce the number of points that are drawn on screen. State-of-the art algorithms group and render points in hierarchically organized chunks with varying extent and density, which results in sudden changes of density from one level of detail to another, as well as noticeable popping artifacts when additional chunks are blended in or out. These popping artifacts are especially noticeable at lower levels of detail, and consequently in virtual reality where high performance requirements impose a reduction in detail. We propose a continuous level of detail method that exhibits gradual rather than sudden changes in density. Our method continuously recreates a down-sampled vertex buffer from the full point cloud, based on camera orientation, position, and distance to the camera, in a point-wise rather than chunk-wise fashion and at speeds up to 18 million points per millisecond. As a result, additional details are blended in or out in a less noticeable and significantly less irritating manner as compared to the state of the art. The improved acceptance of our method was successfully evaluated in a virtual reality user study.
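
A point-wise continuous-LOD filter in the spirit of the abstract might look like the following: each point survives with a probability that falls off smoothly with distance to the camera, so density changes gradually instead of per chunk. The inverse-square falloff is an assumption for illustration, not the authors' exact criterion.

```python
import numpy as np

def continuous_lod_filter(points, cam_pos, base_density=1.0, rng=None):
    """Keep each point with probability ~ 1/d^2 (clamped to [0, 1]) so the
    down-sampled vertex buffer thins out smoothly with distance instead of in
    discrete chunks. 'points' is an (N, 3) array; returns the surviving subset."""
    rng = np.random.default_rng() if rng is None else rng
    d = np.linalg.norm(points - np.asarray(cam_pos, float), axis=1)
    keep_prob = np.clip(base_density / np.maximum(d, 1e-3) ** 2, 0.0, 1.0)
    return points[rng.random(len(points)) < keep_prob]

# Usage: down-sample a synthetic 100k-point cloud seen from the origin.
cloud = np.random.default_rng(0).uniform(-50, 50, size=(100_000, 3))
print(len(continuous_lod_filter(cloud, cam_pos=(0, 0, 0))))
```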


Session 5: Audio

Monday, March 25th, 14:15 - 15:30, Room A

Chair: Tabitha Peck

Haptic Force Guided Sound Synthesis in Multimodal Virtual Reality (VR) Simulation for Rigid-Fluid Interaction

Haonan Cheng (Tianjin University), Shiguang Liu (Tianjin University)

Conference

Abstract: With the increasing demand for user immersion in VR, researchers are increasingly taking an interest in different interactions in multimodal simulation, while rigid-fluid interaction has been neglected for a long time. One core issue of the rigid-fluid interaction in multimodal VR system is how to balance the algorithm efficiency, result authenticity and result synchronization. Since the sampling rate of audio is far greater than visual and haptic modalities, sound synthesis for a multimodal VR system is more difficult than visual simulation and haptic rendering, which still remains an open challenge until now. Therefore, this paper focuses on developing an efficient sound synthesis method tailored for a multimodal system. Existing physical-based sound synthesis method simplified the sound model which greatly damaged the sound authenticity. To improve the result authenticity while ensuring real time performance and result synchronization, we propose a novel haptic force guided granular sound synthesis method tailored for sounding in multimodal VR systems. To the best of our knowledge, this is the first step that exploits haptic force feedback from the tactile channel for guiding sound synthesis in a multimodal VR system. Specifically, we propose a modified spectral granular sound synthesis method, which can ensure real time simulation and improve the result authenticity as well. Then, to balance the algorithm efficiency and result synchronization, we design a multi-force (MF) granulation algorithm which avoids repeated analysis of fluid particle motion and thereby improves the synchronization performance. Various results show that the proposed sound synthesis method effectively overcomes the limitations of existing methods in terms of audio modality, which has great potential to provide powerful technological support for building a more immersive for multimodal VR system.

Adaptive Sampling for Sound Propagation

Chakravarty Reddy Alla Chaitanya (Microsoft Research), John Snyder (Microsoft Research), Keith Godin (Microsoft), Derek Nowrouzezahrai (McGill University), Nikunj Raghuvanshi (Microsoft Research)

TVCG

Abstract: Precomputed sound propagation supports player motion at runtime by sampling acoustics from numerous probe locations in a scene. For each probe, a costly 3D numerical simulation must be performed offline and the resulting field encoded for runtime rendering of dynamic sources. Prior work samples probes on a uniform grid, requiring high global density to resolve narrow spaces. Our adaptive sampling approach varies sampling density based on a novel “local diameter” measure of the size of the space surrounding a given point, evaluated by stochastically tracing paths in a scene. We employ this measure in a probe layout technique that smoothly adapts resolution to eliminate under-sampling in nooks, narrow corridors and stairways, while coarsening appropriately in more open areas. Coupled with a new runtime interpolation method based on a radial weighting over geodesic paths, we achieve smooth acoustic effects respecting scene boundaries as the source or listener moves, unlike prior visibility-based methods. We demonstrate consistent quality improvement over prior work with cost held fixed.
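
The "local diameter" idea (estimate how open the space around a candidate probe is by shooting random rays and aggregating their free path lengths, then sampling probes more densely where that value is small) can be illustrated on a 2D occupancy grid as below. The grid representation, step size and use of a median are assumptions made for the sketch, not the paper's estimator.

```python
import numpy as np

def local_diameter(occupancy, probe, n_rays=64, max_dist=50.0, step=0.25):
    """Estimate the size of the space around 'probe' in a 2D occupancy grid
    (True = solid) by marching rays in random directions until they hit
    geometry, then taking a robust statistic of the free path lengths.
    Smaller values would call for denser acoustic probe placement."""
    rng = np.random.default_rng(0)
    angles = rng.uniform(0.0, 2.0 * np.pi, n_rays)
    lengths = []
    for a in angles:
        direction = np.array([np.cos(a), np.sin(a)])
        t = 0.0
        while t < max_dist:
            p = np.asarray(probe, float) + t * direction
            ix, iy = int(p[0]), int(p[1])
            if not (0 <= ix < occupancy.shape[0] and 0 <= iy < occupancy.shape[1]) \
               or occupancy[ix, iy]:
                break
            t += step
        lengths.append(t)
    return 2.0 * float(np.median(lengths))  # diameter ~ twice the typical free path

room = np.zeros((40, 40), dtype=bool)
room[20, :] = True  # a wall splitting the room -> smaller local diameter nearby
print(local_diameter(room, probe=(10.0, 10.0)))
```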

Audio-Material Reconstruction for Virtualized Reality Using a Probabilistic Damping Model

Auston Baker Sterling (UNC Chapel Hill), Nicholas Rewkowski (UNC Chapel Hill), Roberta L. Klatzky (Carnegie Mellon), Ming C Lin (University of Maryland)

TVCG

Abstract: Modal sound synthesis has been used to create realistic sounds from rigid-body objects, but requires accurate real-world material parameters. These material parameters can be estimated from recorded sounds of an impacted object, but external factors can interfere with accurate parameter estimation. We present a novel technique for estimating the damping parameters of materials from recorded impact sounds that probabilistically models these external factors. We represent the combined effects of material damping, support damping, and sampling inaccuracies with a generative model, then use maximum likelihood estimation to fit a damping model to recorded data. This technique greatly reduces the human effort needed and does not require the precise object geometry or the exact hit location. We validate the effectiveness of this technique with a comprehensive analysis of a synthetic dataset and a perceptual study on object identification. We also present a study establishing human performance on the same parameter estimation task for comparison.
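
Modal sound synthesis represents each mode as an exponentially damped sinusoid, and the estimation task is to recover the damping coefficient from a recording. The sketch below generates one synthetic mode and recovers its damping with a simple log-linear fit to the amplitude envelope; this is a deliberately crude baseline, not the probabilistic model with support damping and sampling error that the paper proposes.

```python
import numpy as np

def damped_mode(freq_hz, damping, fs=44100, dur=0.5, amp=1.0):
    """One modal partial a(t) = A * exp(-d t) * sin(2*pi*f t)."""
    t = np.arange(int(fs * dur)) / fs
    return amp * np.exp(-damping * t) * np.sin(2.0 * np.pi * freq_hz * t), t

def estimate_damping(signal, t):
    """Crude damping estimate: log-linear fit to the amplitude envelope taken
    from the analytic signal (FFT-based Hilbert transform). The paper instead
    fits a generative model that also accounts for support damping and
    sampling inaccuracies."""
    n = len(signal)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    envelope = np.abs(np.fft.ifft(np.fft.fft(signal) * h))
    trim = slice(n // 20, -(n // 20))  # drop edge effects of the transform
    slope, _ = np.polyfit(t[trim], np.log(envelope[trim] + 1e-12), 1)
    return -slope

sig, t = damped_mode(440.0, damping=8.0)
print(estimate_damping(sig, t))  # should recover a value close to 8.0
```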

Immersive Spatial Audio Reproduction for VR/AR Using Room Acoustic Modelling from 360 Images

Hansung Kim (University of Surrey), Luca Remaggi (University of Surrey), Philip J.B. Jackson (University of Surrey), Adrian Hilton (University of Surrey)

Conference

Abstract: Recent progress in Virtual Reality (VR) and Augmented Reality (AR) allows us to experience various VR/AR applications in our daily life. In order to maximise the immersiveness of the user in VR/AR environments, a plausible spatial audio reproduction synchronised with visual information is essential. In this paper, we propose a simple and efficient system to estimate room acoustics for plausible reproduction of spatial audio using 360 cameras for VR/AR applications. A pair of 360 images is used for room geometry and acoustic property estimation. A simplified 3D geometric model of the scene is estimated by depth estimation from captured images and semantic labelling using a convolutional neural network (CNN). The real environment acoustics are characterised by frequency-dependent acoustic predictions of the scene. Spatially synchronised audio is reproduced based on the estimated geometric and acoustic properties in the scene. The reconstructed scenes are rendered with synthesised spatial audio as VR/AR content. The results of estimated room geometry and simulated spatial audio are evaluated against the actual measurements and audio calculated from ground-truth Room Impulse Responses (RIRs) recorded in the rooms.


Session 6: Collaboration

Monday, March 25th, 14:15 - 15:30, Room B

Chair: Steven Feiner

Characterizing Asymmetric Collaborative Interactions in Virtual and Augmented Realities

Jerônimo Gustavo Grandi (Federal University of Rio Grande do Sul (UFRGS)), Henrique Galvan Debarba (IT University of Copenhagen), Anderson Maciel (Federal University of Rio Grande do Sul)

Conference

Abstract: We present an assessment of asymmetric interactions in Collaborative Virtual Environments (CVEs). In our asymmetric setup, two co-located users interact with virtual 3D objects, one in immersive Virtual Reality (VR) and the other in mobile Augmented Reality (AR). We conducted a study with 36 participants to evaluate performance and collaboration aspects of pair work, and compare it with two symmetric scenarios, either with both users in immersive VR or mobile AR. To perform this experiment, we adopt a collaborative AR manipulation technique from literature and develop and evaluate a VR manipulation technique of our own. Our results indicate that pairs in asymmetric VR-AR achieved significantly better performance than the AR symmetric condition, and similar performance to VR symmetric. Regardless of the condition, pairs had similar work participation indicating a high cooperation level even when there is a visualization and interaction asymmetry among the participants.

Immersive Telepresence and Remote Collaboration using Mobile and Wearable Devices

Jacob Young (University of Otago), Tobias Langlotz (University of Otago), Matthew Cook (University of Otago), Steven Mills (University of Otago), Holger Regenbrecht (University of Otago)

TVCG

Abstract: The mobility and ubiquity of mobile head-mounted displays make them a promising platform for telepresence research as they allow for spontaneous and remote use cases not possible with stationary hardware. In this work we present a system that provides immersive telepresence and remote collaboration on mobile and wearable devices by building a live spherical panoramic representation of a user’s environment that can be viewed in real time by a remote user who can independently choose the viewing direction. The remote user can then interact with this environment as if they were actually there through intuitive gesture-based interaction. Each user can obtain independent views within this environment by rotating their device, and their current field of view is shared to allow for simple coordination of viewpoints. We present several different approaches to create this shared live environment and discuss their implementation details, individual challenges, and performance on modern mobile hardware; by doing so we provide key insights into the design and implementation of next generation mobile telepresence systems, guiding future research in this domain. The results of a preliminary user study confirm the ability of our system to induce the desired sense of presence in its users.

Multi-Ray Jumping: Comprehensible Group Navigation for Collocated Users in Immersive Virtual Reality

Tim Weissker (Bauhaus-Universität Weimar), Alexander Kulik (Bauhaus-Universität Weimar), Bernd Froehlich (Bauhaus-Universität Weimar)

Conference

Abstract: The collaborative exploration of virtual environments requires support for joint group navigation instead of only providing navigation capabilities for individuals. In this paper, we focus on short-range teleportation techniques (jumping) for groups of collocated users wearing head-mounted displays. We review opportunities and challenges of three naive group jumping concepts in a pilot study. The derived requirements for comprehensible group jumping highlight the need to foster awareness of ongoing navigation actions and to make their effects precisely predictable for each passenger. Moreover, the navigator needs support for estimating the target locations of all group members. We propose a novel Multi-Ray Jumping technique to meet these requirements and report on a formal user study (N = 22) on two-user jumping to investigate its benefits in comparison to naive group jumping. The results indicate that Multi-Ray Jumping decreases spatial confusion for passengers, increases planning accuracy for navigators and reduces cognitive load for both.
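
Group jumping generally means applying the navigator's teleport transform to every collocated passenger so that the group's spatial formation is preserved in the virtual world. A minimal version of that bookkeeping is sketched below; the actual Multi-Ray technique adds per-passenger target rays and previews that are not modelled here.

```python
import numpy as np

def group_jump(user_positions, navigator_idx, target, yaw_deg=0.0):
    """Teleport a group: rotate the formation about the navigator by yaw_deg,
    then translate so the navigator lands on 'target'. Positions are 2D floor
    coordinates; passengers keep their offsets relative to the navigator."""
    users = np.asarray(user_positions, float)
    nav = users[navigator_idx]
    yaw = np.radians(yaw_deg)
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw), np.cos(yaw)]])
    offsets = (users - nav) @ rot.T
    return offsets + np.asarray(target, float)

# The navigator (index 0) jumps 5 m forward; the passenger's 1 m offset is kept.
print(group_jump([(0.0, 0.0), (1.0, 0.0)], 0, (0.0, 5.0)))
```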

Dataspace: A Reconfigurable Hybrid Reality Environment for Collaborative Information Analysis

Marco Cavallo (IBM Research), Mishal Dolakia (IBM Research), Matous Havlena (IBM Research), Ken Ocheltree (IBM Research), Mark Podlaseck (IBM Research)

Conference

Abstract: Immersive environments have gradually become standard for visualizing and analyzing large or complex datasets that would otherwise be cumbersome, if not impossible, to explore through smaller scale computing devices. However, this type of workspace often proves to possess limitations in terms of interaction, flexibility, cost and scalability. In this paper we introduce a novel immersive environment called Dataspace, which features a new combination of heterogeneous technologies and methods of interaction towards creating a better team workspace. Dataspace provides 15 high-resolution displays that can be dynamically reconfigured in space through robotic arms, a central table where information can be projected, and a unique integration with augmented reality (AR) and virtual reality (VR) headsets and other mobile devices. In particular, we contribute novel interaction methodologies to couple the physical environment with AR and VR technologies, enabling visualization of complex types of data and mitigating the scalability issues of existing immersive environments. We demonstrate through four use cases how this environment can be effectively used across different domains and reconfigured based on user requirements. Finally, we compare Dataspace with existing technologies, summarizing the trade-offs that should be considered when attempting to build better collaborative workspaces for the future.


Session 7: Training and Learning

Monday, March 25th, 14:15 - 15:30, Room C

Chair: Gudrun Klinker

Scale - Unexplored Opportunities for Immersive Technologies in Place-based Learning

Jiayan Zhao (The Pennsylvania State University), Alexander Klippel (The Pennsylvania State University)

Conference

Abstract: Immersive technologies have the potential to overcome physical limitations and virtually deliver field site experiences, for example, into the classroom. Yet, little is known about the features of immersive technologies that contribute to successful place-based learning. Immersive technologies afford embodied experiences by mimicking natural embodied interactions through a user’s egocentric perspective. Additionally, they allow for beyond reality experiences integrating contextual information that cannot be provided at actual field sites. The current study singles out one aspect of place-based learning: Scale. In an empirical evaluation, scale was manipulated as part of two immersive virtual field trip (iVFT) experiences in order to disentangle its effect on place-based learning. Students either attended an actual field trip (AFT) or experienced one of two iVFTs using a head-mounted display. The iVFTs either mimicked the actual field trip or provided beyond reality experiences offering access to the field site from an elevated perspective using pseudo-aerial 360° imagery. Results show that students with access to the elevated perspective had significantly better scores, for example, on their spatial situation model (SSM). Our findings provide first results on how an increased (geographic) scale, which is accessible through an elevated perspective, boosts the development of SSMs. The reported study is part of a larger immersive education effort. Inspired by the positive results, we discuss our plan for a more rigorous assessment of scale effects on both self- and objectively assessed performance measures of spatial learning.

Virtual Classmates: Embodying Historical Learners Messages as Learning Companions in a VR Classroom through Comment Mapping

Meng-Yun Liao (National Chiao Tung University), Ching Ying Sung (National Taiwan University), Hao-Chuan Wang (UC Davis), Wen-Chieh Lin (National Chiao Tung University)

Conference

Abstract: Online learning platforms such as MOOCs have been prevalent sources of self-paced learning to people nowadays. However, the lack of peer accompaniment and social interaction may increase learners’ sense of isolation and loneliness. Prior studies have shown the positive effects of visualizing peer students’ appearances with virtual avatars or virtualized online learners in VR learning environments. In this work, we propose to build virtual classmates, which were constructed by synthesizing previous learners’ messages (time-anchored comments). Configurations of virtual classmates, such as the number of classmates participating in a VR class and the behavioral features of the classmates, can also be adjusted. To build the characteristics of virtual classmates, we propose a technique called comment mapping to aggregate prior online learners’ comments to shape virtual classmates’ behaviors. We conduct a study with 100 participants to evaluate the effects of the virtual classmates built with and without the comment mapping and the number of virtual classmates rendered in VR. The findings of our study suggest design implications for developing virtual classmates in VR environments.

iVRNote: Design, Creation and Evaluation of an Interactive Note-Taking Interface for Study and Reflection in VR Learning Environments

Yi-Ting Chen (National Chiao Tung University), Chi-Hsuan Hsu (National Chiao Tung University), Chih-Han Chung (Computer Science), Yu-Shuen Wang (National Chiao Tung University), Sabarish V. Babu (Clemson University)

Conference

Abstract: In this contribution, we design, implement and evaluate the pedagogical benefits of a novel interactive note-taking interface (iVRNote) in VR for the purpose of learning and reflection on lectures. In future VR learning environments, students would have challenges in taking notes when they wear a head mounted display (HMD). To solve this problem, we installed a digital tablet on the desk and provided several tools in VR to facilitate the learning experience. Specifically, we track the stylus’ position and orientation in the real world and then render a virtual stylus in VR. In other words, when students see a virtual stylus somewhere on the desk, they can reach out with their hand for the physical stylus. The information provided will also enable them to know where they will draw or write before the stylus touches the tablet. Since the presented iVRNote featuring our note-taking system is a digital environment, we also enable students to save effort in taking extensive notes by providing several functions, such as post-editing and picture taking, so that they can pay more attention to lectures in VR. We also record the time of each stroke on the note to help students review a lecture. They can select a part of their note to relearn the corresponding segment in a virtual online lecture. Figures and the accompanying video demonstrate the feasibility of the presented iVRNote system. To evaluate the system, we conducted a user study with 20 participants to assess the preference and pedagogical benefits of the iVRNote interface. The feedback provided by the participants was overall positive and indicated that the iVRNote interface could be potentially effective in VR learning experiences.

Creating a Stressful Decision Making Environment for Aerial Firefighter Training in Virtual Reality

Rory Michael Scott Clifford (University of Canterbury), Simon Hoermann (University of Canterbury), Sungchul Jung (University of Canterbury), Mark Billinghurst (University of South Australia), Robert W. Lindeman (University of Canterbury)

Conference

Abstract: The decisions made by an Air Attack Supervisor (AAS) helicopter co-pilot or aerial firefighter have critical and immediate impacts on their working environments. However, it is difficult to always make the best decisions due to many task-based stressors. Real-world training exercises have limitations such as safety, cost, time and difficulty in reproducing events, which make them infeasible for frequent training. Virtual Reality (VR) offers training opportunities, but it is challenging to create a virtual environment with a level of stress analogous to that experienced in the real world. In this paper, we investigate the use of a multi-user, collaborative, projection-based multi-sensory (vision, audio, tactile) VR high-fidelity system to produce a realistic situated training environment for practicing firefighting scenarios. We focus on a comparison between our VR training system and a real-world field training exercise, where we examine whether the system can induce a comparable level of stress in the participants. We conducted the study with real trainee AAS firefighters using Heart-Rate Variability (HRV) as a physiological measure, along with validated questionnaires, to determine the effectiveness of the system. Our results show that there were no significant differences between the VR training exercise and the real-world exercise in terms of the stress level, measured by HRV.

Ad-hoc study on soldiers calibration procedures in Virtual Reality

Jean-Daniel Taupiac (Univ. Montpellier), Nancy Rodriguez (Univ. Montpellier), Olivier Strauss (Univ. Montpellier), Martin Rabier (Capgemini Technology Services)

Conference

Abstract: French Army infantrymen are equipped today with a combat system called FELIN, which includes an infrared sighting device, the IR sight. One of the first manipulations learned by the soldier is the IR sight calibration. This procedure consists of navigating inside the IR sight’s menus using a remote control added to the rifle’s handle. The infantryman then shoots five cartridges at a target placed at a defined distance. After that, he has to go in front of the target in order to verify whether the mean impact point is indeed at the aimed point. Facing the target, he must calculate the correction values he should apply to correct any potential shift between the rifle and the sight axes. He then has to go back to his rifle to enter the correction values into the sight. The soldier repeats the procedure, reshooting five cartridges, until the mean impact point matches the aimed point. Currently, before the first attempts on the shooting range, this procedure is trained through learning software reproducing the sight’s interface in 2D. Learners practice on the software until they make no mistakes. When they succeed, they receive a procedure certificate and are allowed to experience the real situation on the shooting range. In this paper, we present an ad-hoc study of a learning tool in Virtual Reality for training on the FELIN IR sight calibration procedure. This prototype uses an HTC Vive headset, an off-the-shelf rifle support for Vive controllers and an off-the-shelf bipod. This material combination is intended to allow the trainee to take the prone shooting position. We assigned the trigger of one controller to the trigger of the rifle, and the trackpad of the other one to the remote control interface. The goal was to determine whether the passive haptics of these off-the-shelf supports could bring a satisfying level of fidelity. We carried out experiments with learners from a French Infantry School. First, we asked learners and instructors to assess the system, in order to validate or invalidate the prototype’s design choices. We mainly focused on the fidelity of the passive haptics, but we also analyzed the implemented locomotion method to see if users felt cybersickness or eyestrain. A teleportation zone was placed on the side of the shooting range to allow the learner to move to and return from the target. Finally, we wanted to see if learners tended to make more mistakes using the prototype and whether they had a better awareness of their mistakes. We also analyzed whether they were more intrinsically motivated when they used the Virtual Reality prototype and whether they felt more confident after its use. User tests revealed good assessments for the sight and its menus. The locomotion method was also well rated, and users did not feel significant cybersickness or eyestrain. This study also revealed some passive haptics fidelity improvements needed for the rifle and the remote control. Results also showed an attractive added value of Virtual Reality for this specific use case: it improved the learners’ intrinsic motivation and learning efficiency, and helped to identify specific mistake types not detected by the traditional learning software. Nevertheless, results did not show significant effects on mistake awareness or learners’ self-confidence. This work opens interesting prospects. An improved version of the prototype is necessary in order to correct fidelity issues. Other points could also be explored through future experiments, such as the impact of Virtual Reality on less experienced learners, and the effects of Virtual Reality on learning transfer for this use case.


Session 8: Visualization Tools and Applications

Monday, March 25th, 14:15 - 15:30, Room D

Chair: Michele Fiorentino

IATK: An Immersive Analytics Toolkit

Maxime Cordeil (Monash University), Andrew Cunningham, Benjamin Bach (Edinburgh University), Christophe Hurter (Ecole National de l’Aviation Civile), Bruce H Thomas (University of South Australia), Kim Marriott (Monash University), Tim Dwyer (Monash University)

Conference

Abstract: We introduce IATK, the Immersive Analytics Toolkit, a software package for Unity that allows interactive authoring and exploration of data visualization in immersive environments. The design of IATK was informed by interdisciplinary expert-collaborations as well as visual analytics applications and iterative refinement over several years. IATK allows for easy assembly of visualizations through a grammar of graphics that a user can configure in a GUI—in addition to a dedicated visualisation API that supports the creation of novel immersive visualization designs and interactions. IATK is designed with scalability in mind, allowing visualisation and fluid responsive interactions in the order of several million points at a usable frame rate. This paper outlines our design requirements, IATK’s framework design and technical features, its user interface, as well as application examples.

GeoGate: Correlating Geo-Temporal Datasets Using an Augmented Reality Space-Time Cube and Tangible Interactions

Seung Youb Ssin (University of South Australia), James A Walsh (University of South Australia), Ross Smith, Andrew Cunningham (University of South Australia), Bruce H Thomas (University of South Australia)

Conference

Abstract: This paper introduces GeoGate, an augmented reality tabletop system that presents an extension of the Space-Time Cube whilst utilizing a ring-shaped tangible user interface to explore correlations between entities in multiple datasets. The goal of GeoGate is to correlate multiple datasets containing location data, which for our implementation are from the maritime domain. In particular, our approach in GeoGate seeks to find geo-temporal associations between trajectory data recorded from a global positioning system, and light data extracted from night-time satellite images. We utilize a tabletop system displaying a traditional 2D map in conjunction with a Microsoft HoloLens to present a focused view of the data with a novel Augmented Reality extension of the Space-Time Cube. To validate GeoGate, we present the results of a user study comparing GeoGate with the existing 2D desktop approach used in a normal desktop environment. The outcomes of the user study from both the quantitative and qualitative results show that GeoGate’s approach reduces mistakes in the interpretation of the correlations between various datasets.

Developing Virtual Reality Visualizations for Unsteady Flow Analysis of Dinosaur Track Formation using Scientific Sketching

Johannes Novotny (Brown University), Joshua Tveite (Brown University), Morgan Turner (Brown University), Stephen Gatesy (Brown University), Peter L Falkingham (Liverpool John Moores University), David H. Laidlaw (Brown University)

TVCG

Abstract: We present the results of a two-year design study on developing virtual reality (VR) flow visualization tools for the analysis of dinosaur track creation in a malleable substrate. Using the Scientific Sketching methodology, we combined input from illustration artists, visualization experts, and domain scientists to create novel visualization methods. By iteratively improving visualization concepts at multiple levels of abstraction, we helped domain scientists gain insights into the relationship between dinosaur foot movements and substrate deformations. We involved over 20 art and computer science students from a VR design course in a rapid visualization sketching cycle, guided by our paleontologist collaborators through multiple critique sessions. This allowed us to explore a wide range of potential visualization methods and select the most promising ones for actual implementation. Our resulting visualization methods provide paleontologists with effective tools to analyze their data through particle, pathline, and time-surface visualizations. We also introduce a set of visual metaphors to compare foot motion in relation to substrate deformation using pathsurfaces. This is one of the first large-scale projects using Scientific Sketching as a development methodology. We discuss how the research questions of our collaborators evolved during the sketching and prototyping phases. Finally, we provide lessons learned and usage considerations for Scientific Sketching based on the experiences gathered during this project.

Non-Contact Thermo-Visual Augmentation by IR-RGB Projection

Daisuke Iwai, Mei Aoki, Kosuke Sato

TVCG-Invited

Abstract: This paper presents an approach for non-contact haptic augmentation with spatial augmented reality (SAR). We construct a thermo-visual projection system by combining a standard RGB projector and a fabricated infrared (IR) projector. The primary contribution of this paper is that we conduct thorough psychophysical experiments to investigate a design guideline for spatiotemporal projection patterns for both RGB and IR projectors to render a warm object with high presence. We develop application systems to evaluate the validity of the proposed system and design guideline. The evaluation results demonstrate that the proposed system can render warm objects with significantly higher presence than a standard SAR system.

Exploration of an EEG-Based Cognitively Adaptive Training System in Virtual Reality

Arindam Dey (University of Queensland), Alex Chatburn (University of South Australia), Mark Billinghurst (University of South Australia)

Conference

Abstract: Virtual Reality (VR) has been demonstrated to be effective in various training scenarios across multiple domains such as education, health, and defense. However, most of those applications are not adaptive to the real-time cognitive or subjectively experienced load placed on the trainee as a result of training. In this paper, we explore a cognitively adaptive training system based on real-time measurement of task-relevant alpha activity in the brain through the use of a 32-channel mobile Electroencephalography (EEG) system. We used this measurement to adapt the difficulty of the task to an ideal level in order to challenge our participants, and thus theoretically induce the best performance gains as a result of training. Our system required participants to select target objects, and the complexity of the task adapted to the alpha activity in the brain. Fourteen participants undertook our training and completed ~20 levels of increasing complexity. Our study identified significant differences in brain activity in response to increasing levels of task complexity, but found that response time did not change as a function of task difficulty. Collectively, we interpret this spread of findings to indicate the brain's ability to compensate for higher task load without affecting behaviourally measured visuomotor performance.
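
The adaptation loop described above can be pictured as a simple controller that nudges task difficulty whenever band-limited alpha power drifts outside a calibrated target range. The sketch below is purely illustrative: the thresholds, step size, and the assumed direction of the alpha-workload relationship are invented for the example and are not taken from the paper.

```python
def adapt_difficulty(level, alpha_power, target_low, target_high,
                     min_level=1, max_level=20):
    """One step of a bang-bang difficulty controller driven by EEG alpha power.
    Here higher task-related alpha is (illustratively) read as spare capacity,
    so the task is made harder; lower alpha backs the difficulty off."""
    if alpha_power > target_high and level < max_level:
        return level + 1
    if alpha_power < target_low and level > min_level:
        return level - 1
    return level
```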


Session 9: Perception of Depth and Space

Monday, March 25th, 15:45 - 17:00, Room B

Chair: Bret Jackson

An Evaluation of Depth and Size Perception on Spherical Fish Tank Virtual Reality Display

Qian Zhou (University of British Columbia), Georg Hagemann (University of British Columbia), Dylan Brodie Fafard (University of Saskatchewan), Ian Stavness (University of Saskatchewan), Sidney S Fels (University of British Columbia)

TVCG

Abstract: Fish Tank Virtual Reality (FTVR) displays create a compelling 3D spatial effect by rendering to the perspective of the viewer with head-tracking. Combining FTVR with a spherical display enhances the 3D experience with unique properties of the spherical screen, such as its enclosing shape, consistent curved surface, and borderless views from all angles around the display. The ability to generate a strong 3D effect on a spherical display with head-tracked rendering is promising for increasing users' performance in 3D tasks. An unanswered question is whether these natural affordances of spherical FTVR displays can improve spatial perception in comparison to traditional flat FTVR displays. To investigate this question, we conducted an experiment examining whether users perceive the depth and size of virtual objects better on a spherical FTVR display than on a flat FTVR display in two tasks. On a depth-ranking task, users achieved significantly better depth accuracy with the spherical display (1 cm) than with the flat display (6.5 cm). Likewise, their performance on a size-matching task was also significantly better, with a size error of 2.3 mm on the spherical display compared to 3.1 mm on the flat display. Furthermore, the perception of size constancy was stronger on the spherical display than on the flat display. This study indicates that the natural affordances provided by the spherical form factor improve depth and size perception in 3D compared to a flat display. We believe that spherical FTVR displays have potential as a 3D virtual environment to provide better task performance for various 3D applications such as 3D design, scientific visualization, and virtual surgery.

Virtual Objects Look Farther on the Sides: The Anisotropy of Distance Perception in Virtual Reality

Etienne Peillard (Inria), Thomas Thebaud (Inria), Jean-Marie Normand (Ecole Centrale de Nantes), Ferran Argelaguet Sanz (Inria), Guillaume Moreau (Ecole Centrale de Nantes), Anatole Lécuyer (Inria)

Conference

Abstract: The topic of distance perception has been widely investigated in Virtual Reality (VR). However, the vast majority of previous work has focused on the perception of distances to objects placed in front of the observer. What happens, then, when the observer looks to the side? In this paper, we study differences in distance estimation when comparing objects placed in front of the observer with objects placed to their side. Through a series of four experiments (n=85), we assessed participants' distance estimation under different experimental conditions. In particular, we considered the placement of visual stimuli in the field of view, users' exploration behavior, as well as the presence of depth cues. For all experiments, a two-alternative forced choice (2AFC) standardized psychophysical protocol was employed, in which the main task was to determine which stimulus seemed to be the farthest. In summary, our results showed that the orientation of virtual stimuli with respect to the user introduces a distance perception bias: objects placed to the sides are systematically perceived as farther away than objects in front. In addition, we observed that this bias increases with the angle and appears to be independent of both the position of the object in the field of view and the quality of the virtual scene. This work sheds new light on one of the specificities of VR environments with respect to the wider subject of visual space theory in real environments. Our study paves the way for future experiments evaluating the anisotropy of distance perception and its impact on VR applications.

Distance Judgments to On- and Off-Ground Objects in Augmented Reality

Carlos Salas (Vanderbilt University), Grant D. Pointon (University of Utah), Haley Alexander Adams (Vanderbilt University), Jeanine Stefanucci (University of Utah), Sarah Creem-Regehr (University of Utah), William B Thompson (University of Utah), Bobby Bodenheimer (Vanderbilt University)

Conference

Abstract: Augmented reality (AR) technologies have the potential to provide individuals with unique training and visualizations, but the effectiveness of these applications may be influenced by users’ perceptions of the distance to AR objects. Perceived distances to AR objects may be biased if these objects do not appear to be attached to the ground plane. The current work compared distance judgments of AR targets presented on the ground versus off the ground when no additional AR depth cues, such as shadows, were available. We predicted that without additional information for height off the ground, observers would perceive the off-ground objects as placed on the ground, but at farther distances. Furthermore, this bias should be exaggerated when targets were viewed with one eye rather than two. In our experiment, participants judged the absolute egocentric distance to various cubes presented on or off the ground with an action-based measure, blind walking. We found that observers walked farther for off-ground AR objects and that this effect was exaggerated when participants viewed off-ground objects with monocular vision compared to binocular vision. However, we also found that the restriction of binocular cues influenced participants’ distance judgments for on-ground AR objects. Our results suggest that distances to off-ground AR objects are perceived differently than on-ground AR objects and that the elimination of binocular cues further influences how users perceive these distances.

Mitigating Incorrect Perception of Distance in Virtual Reality through Personalized Rendering Manipulation

Alex Peer (UW - Madison), Kevin Ponto (University of Wisconsin - Madison)

Conference

Abstract: Viewers of virtual reality appear to have an incorrect sense of space when performing blind directed-action tasks, such as blind walking or blind throwing. It has been shown that various manipulations can influence this incorrect sense of space, and that the degree of misperception varies by person. It follows that one could measure the degree of misperception an individual experiences and generate some manipulation to correct for it, though it is not clear that correct behavior in a specific blind directed action task leads to correct behavior in all tasks in general. In this work, we evaluate the effectiveness of correcting perceived distance in virtual reality by first measuring individual perceived distance through blind throwing, then manipulating sense of space using a vertex shader to make things appear more or less distant, to a degree personalized to the individual’s perceived distance. Two variants of the manipulation are explored. The effects of these personalized manipulations are evaluated first when performing the same blind throwing task used to calibrate the manipulation, then using two perceptual matching tasks, which differ from blind directed action tasks by allowing full visual feedback when objects, or the participants themselves, move through space.

Orientation Perception in Real and Virtual Environments

Adam Jones (University of Mississippi), Jonathan E Hopper (University of Mississippi), Mark Bolas (Microsoft), David M. Krum (University of Southern California)

TVCG

Abstract: Spatial perception in virtual environments has been a topic of intense research. Arguably, the majority of this work has focused on distance perception. However, orientation perception is also an important factor. In this paper, we systematically investigate allocentric orientation judgments in both real and virtual contexts over the course of four experiments. A pattern of sinusoidal judgment errors known to exist in 2D perspective displays is found to persist in immersive virtual environments. This pattern also manifests itself in a real world setting using two differing judgment methods. The findings suggest the presence of a radial anisotropy that persists across viewing contexts. Additionally, there is some evidence to suggest that observers have multiple strategies for processing orientations but further investigation is needed to fully describe this phenomenon. We also offer design suggestions for 3D user interfaces that may require users to perform orientation judgments.


Session 10: Rendering and Simulation

Monday, March 25th, 15:45 - 17:00, Room C

Chair: Benjamin Weyers

Temporal Resolution Multiplexing: Exploiting the limitations of spatio-temporal vision for more efficient VR rendering

Gyorgy Denes (University of Cambridge), Kuba Maruszczyk (University of Cambridge), George Ash (University of Cambridge), Rafal K. Mantiuk (University of Cambridge)

TVCG

Abstract: Rendering in virtual reality (VR) requires substantial computational power to generate 90 frames per second at high resolution with good-quality antialiasing. The video data sent to a VR headset requires high bandwidth, achievable only on dedicated links. In this paper we explain how rendering requirements and transmission bandwidth can be reduced using a conceptually simple technique that integrates well with existing rendering pipelines. Every even-numbered frame is rendered at a lower resolution, and every odd-numbered frame is kept at high resolution but is modified in order to compensate for the previous loss of high spatial frequencies. When the frames are seen at a high frame rate, they are fused and perceived as high-resolution and high-frame-rate animation. The technique relies on the limited ability of the visual system to perceive high spatio-temporal frequencies. Despite its conceptual simplicity, correct execution of the technique requires a number of non-trivial steps: display photometric temporal response must be modeled, flicker and motion artifacts must be avoided, and the generated signal must not exceed the dynamic range of the display. Our experiments, performed on a high-frame-rate LCD monitor and OLED-based VR headsets, explore the parameter space of the proposed technique and demonstrate that its perceived quality is indistinguishable from full-resolution rendering. The technique is an attractive alternative to reprojection and resolution reduction of all frames.
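
The frame-pair idea can be sketched in a few lines of numpy, under the simplifying assumption that the eye temporally averages two consecutive frames shown at a high frame rate; the paper's additional steps (modelling the display's photometric temporal response and avoiding flicker and motion artifacts) are deliberately omitted here, and the function names are illustrative.

```python
import numpy as np

def box_lowpass(frame, factor=2):
    """Crude low-pass proxy: box-downsample then nearest-neighbour upsample.
    `frame` is a float RGB image in [0, 1] of shape (H, W, 3); H and W are
    assumed divisible by `factor`."""
    h, w, c = frame.shape
    small = frame.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

def trm_frame_pair(frame, factor=2):
    """Split one full-resolution frame into a low-resolution 'even' frame and a
    compensated 'odd' frame whose temporal average approximates the original."""
    even = box_lowpass(frame, factor)              # cheap frame: low spatial frequencies only
    odd = np.clip(2.0 * frame - even, 0.0, 1.0)    # reinstate the high frequencies lost in `even`
    return even, odd
```

The clipping step reflects the constraint mentioned in the abstract that the generated signal must not exceed the dynamic range of the display.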

Intuitive VR Exploration Assistance through Automatic Occlusion Removal

Lili Wang (Beihang University), Jian Wu (Beihang University), Xuefeng Yang (Beihang University), Voicu Popescu (Purdue University)

TVCG

Abstract: In the navigation of virtual reality scenes, occlusion often makes it necessary to transport the viewpoint over large distances, which poses a great challenge in terms of exploration time and the limited physical space. This paper presents a novel disocclusion exploration (DE) approach to improve the efficiency of VR navigation. The disocclusion exploration automatically detects both lateral and vertical depth discontinuity points by analyzing the depth image rendered from the current output viewpoint. From these points, disocclusion portals and new assistant viewpoints are constructed, which determine the disocclusion region made visible through the portal. The disocclusion matrices for the vertices in both the disocclusion region and the frustum of the output viewpoint are computed, taking into account the positions of the portal, the assistant viewpoint, and the output viewpoint. The disocclusion region is then disoccluded using a vertex-based multi-perspective rendering method. We conducted a user study on tasks that include searching and chasing in VR scenes. The results show that our method improves the efficiency of VR navigation.
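
The depth-discontinuity detection step could, for illustration, look like the following numpy sketch: a simple threshold on depth differences between neighbouring pixels of the rendered depth image. The threshold value and function names are assumptions, not the paper's implementation.

```python
import numpy as np

def depth_discontinuities(depth, threshold=0.5):
    """Flag pixels whose depth jumps sharply relative to a neighbour.
    `depth` is an (H, W) float array of view-space depths; `threshold` is in
    scene units and would need tuning per scene."""
    lateral = np.abs(np.diff(depth, axis=1)) > threshold   # jumps between horizontal neighbours
    vertical = np.abs(np.diff(depth, axis=0)) > threshold  # jumps between vertical neighbours
    mask = np.zeros_like(depth, dtype=bool)
    mask[:, :-1] |= lateral
    mask[:, 1:] |= lateral
    mask[:-1, :] |= vertical
    mask[1:, :] |= vertical
    return mask
```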

Material Surface Reproduction and Perceptual Deformation with Projection Mapping for Car Interior Design

Takuro Takezawa (Osaka University), Daisuke Iwai (Osaka University), Kosuke Sato (Osaka university), Toshihiro Hara (Mazda Motor Corporation), Yusaku Takeda (Mazda Motor Corporation), Kenji Murase (Mazda Motor Corporation)

Conference

Abstract: Car interior design, such as dashboard design, broadly consists of two parts. One is shape design, in which 2D drawing, 3D modeling, and evaluation with full-scale mockups are iterated, a process that takes a massive amount of time and cost. The other is material design, such as surface property tuning, in which designers compare material samples. This approach, however, limits the number of material samples that can be compared and does not allow samples of interest to be applied to whole mockups in early phases. In this paper, we apply projection mapping to accelerate the design process by altering the appearance of the surfaces of projected objects, enabling various shape and material evaluations in early phases. Our proposed system uses multiple projectors, one of which is a 4K projector, to reproduce fine leather surfaces. Utilizing physiological and psychological depth cues, the system allows the user to perceive the projected mockup as deformed. Psychological experiments confirm that users perceive the deformation and form controlled impressions of leather reproduced with certain parameters. In addition, we discuss the usability of the proposed system as a support system for car interior design.

A Deep Learning-based Framework for Intersectional Traffic Simulation and Editing

Huikun Bi, Tianlu Mao, Zhaoqi Wang, Zhigang Deng

TVCG-Invited

Abstract: Most existing traffic simulation methods have focused on simulating vehicles on freeways or city-scale urban networks. However, relatively little research has been done to simulate intersectional traffic, despite its obvious importance in real-world traffic phenomena. In this paper we propose a novel deep learning-based framework to simulate and edit intersectional traffic. Specifically, based on an in-house collected intersectional traffic dataset, we employ a combination of a convolutional neural network (CNN) and a recurrent neural network (RNN) to learn the patterns of vehicle trajectories in intersectional traffic. Besides simulating novel intersectional traffic, our method can be used to edit existing intersectional traffic. Through many experiments as well as comparative user studies, we demonstrate that the results of our method are visually indistinguishable from ground truth and outperform those of other methods.

Simulating Water Resistance for Virtual Underwater Experience Using Visual Delay Effect

Eun-Cheol Lee (Yonsei University), Yong-Hun Cho (Yonsei University), In-Kwon Lee (Yonsei University)

Conference

Abstract: In this paper, we propose a new visual effect to add presence to the user's own motion in a virtual underwater environment. To do this, we simulate the resistance of the water by delaying the arm movements of the user's avatar. This delay effect was realized with two methods: physics simulation and a recovery force. Experimental results show that the combination of physics simulation and recovery force maximizes the user's presence, satisfaction, and immersion.


Session 11: Sensing and Capture

Monday, March 25th, 15:45 - 17:00, Room D

Chair: David Krum

Mo2Cap2: Real-time Mobile 3D Motion Capture with a Cap-mounted Fisheye Camera

Weipeng Xu (MPII), Avishek Chatterjee (MPII), Michael Zollhoefer (Stanford University), Helge Rhodin (EPFL), Pascal Fua (EPFL), Hans-Peter Seidel (MPII), Christian Theobalt (Max Planck Institute for Informatics)

TVCG

Abstract: We propose the first real-time system for the egocentric estimation of 3D human body pose in a wide range of unconstrained everyday activities. This setting has a unique set of challenges, such as mobility of the hardware setup, and robustness to long capture sessions with fast recovery from tracking failures. We tackle these challenges based on a novel lightweight setup that converts a standard baseball cap to a device for high-quality pose estimation based on a single cap-mounted fisheye camera. From the captured egocentric live stream, our CNN based 3D pose estimation approach runs at 60Hz on a consumer-level GPU. In addition to the lightweight hardware setup, our other main contributions are: 1) a large ground truth training corpus of top-down fisheye images and 2) a disentangled 3D pose estimation approach that takes the unique properties of the egocentric viewpoint into account. As shown by our evaluation, we achieve lower 3D joint error as well as better 2D overlay than the existing baselines.

SLAMCast: Large-Scale, Real-Time 3D Reconstruction and Streaming for Immersive Multi-Client Live Telepresence

Patrick Stotko (University of Bonn), Stefan Krumpen (University of Bonn), Matthias Hullin (University of Bonn), Michael Weinmann (University of Bonn), Reinhard Klein (University of Bonn)

TVCG

Abstract: Real-time 3D scene reconstruction from RGB-D sensor data, as well as the exploration of such data in VR/AR settings, has seen tremendous progress in recent years. The combination of both these components into telepresence systems, however, comes with significant technical challenges. All approaches proposed so far are extremely demanding on input and output devices, compute resources and transmission bandwidth, and they do not reach the level of immediacy required for applications such as remote collaboration. Here, we introduce what we believe is the first practical client-server system for real-time capture and many-user exploration of static 3D scenes. Our system is based on the observation that interactive frame rates are sufficient for capturing and reconstruction, and real-time performance is only required on the client side to achieve lag-free view updates when rendering the 3D model. Starting from this insight, we extend previous voxel block hashing frameworks by introducing a novel thread-safe GPU hash map data structure that is robust under massively concurrent retrieval, insertion and removal of entries on a thread level. We further propose a novel transmission scheme for volume data that is specifically targeted to Marching Cubes geometry reconstruction and enables a 90% reduction in bandwidth between server and exploration clients. The resulting system poses very moderate requirements on network bandwidth, latency and client-side computation, which enables it to rely entirely on consumer-grade hardware, including mobile devices. We demonstrate that our technique achieves state-of-the-art representation accuracy while providing, for any number of clients, an immersive and fluid lag-free viewing experience even during network outages.
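
For readers unfamiliar with voxel block hashing, the mapping the abstract extends can be illustrated with a toy, single-threaded spatial hash on the CPU; the prime constants are the ones commonly used in the spatial-hashing literature, and this sketch says nothing about the paper's actual thread-safe GPU data structure.

```python
def block_hash(bx, by, bz, table_size=2**20):
    """Map integer voxel-block coordinates to a bucket index using the
    XOR-of-primes hash common in spatial hashing."""
    p1, p2, p3 = 73856093, 19349663, 83492791
    return ((bx * p1) ^ (by * p2) ^ (bz * p3)) % table_size

def world_to_block(x, y, z, voxel_size=0.01, block_dim=8):
    """Integer coordinates of the block of block_dim^3 voxels containing a point."""
    side = voxel_size * block_dim
    return (int(x // side), int(y // side), int(z // side))

key = world_to_block(0.42, -0.13, 1.07)
bucket = block_hash(*key)                        # where a fixed-size GPU table would file this block
tsdf_blocks = {}                                 # a plain Python dict stands in for the hash map
tsdf_blocks.setdefault(key, [0.0] * (8 ** 3))    # allocate a TSDF block on first touch
```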

Temporal Upsampling of Depth Maps Using a Hybrid Camera

Ming-Ze Yuan, Lin Gao, Hongbo Fu, Shihong Xia

TVCG-Invited

Abstract: In recent years, consumer-level depth cameras have been adopted for various applications. However, they often produce depth maps at only a moderately high frame rate (approximately 30 frames per second), preventing them from being used for applications such as digitizing human performance involving fast motion. On the other hand, low-cost, high-frame-rate video cameras are available. This motivates us to develop a hybrid camera that consists of a high-frame-rate video camera and a low-frame-rate depth camera and to allow temporal interpolation of depth maps with the help of auxiliary color images. To achieve this, we develop a novel algorithm that reconstructs intermediate depth maps and estimates scene flow simultaneously. We test our algorithm on various examples involving fast, non-rigid motions of single or multiple objects. Our experiments show that our scene flow estimation method is more precise than a tracking-based method and the state-of-the-art techniques.

Realistic Procedural Plant Modeling from Multiple View Images

Jianwei Guo, Shibiao Xu, Dong-Ming Yan, Zhanglin Cheng, Marc Jaeger, Xiaopeng Zhang

TVCG-Invited

Abstract: In this paper, we describe a novel procedural modeling technique for generating realistic plant models from multi-view photographs. The realism is enhanced via visual and spatial information acquired from images. In contrast to previous approaches that heavily rely on user interaction to segment plants or recover branches in images, our method automatically estimates an accurate depth map of each image and extracts a 3D dense point cloud by exploiting an efficient stereophotogrammetry approach. Taking this point cloud as a soft constraint, we fit a parametric plant representation to simulate the plant growth progress. In this way, we are able to combine real data (photos and 3D point clouds) analysis with rule-based procedural plant modeling. We demonstrate the robustness of the proposed approach by modeling a variety of plants with complex branching structures and significant self-occlusions. We also demonstrate that the proposed framework can be used to reconstruct ground-covering plants, such as bushes and shrubs which have gained little attention in the literature. The effectiveness of our approach is validated by visually and quantitatively comparing with the state-of-the-art approaches.

Mask-off: Synthesizing Face Images in the Presence of Head-mounted Displays

Yajie Zhao (Institute for Creative Technologies), Qingguo Xu (University of Kentucky), Weikai Chen (USC Institute for Creative Technologies), Jun Xing (Institute for Creative Technologies), Chao Du (University of Kentucky), Xinyu Huang (North Carolina Central University), Ruigang Yang (University of Kentucky)

Conference

Abstract: Wearable VR/AR devices provide users with a fully immersive experience in a virtual environment, opening up possibilities to reshape entertainment and telepresence. While body language is a crucial element of effective communication, wearing a head-mounted display (HMD) can severely hinder eye contact and block facial expressions. We present a novel headset-removal technique that enables high-quality, occlusion-free communication in virtual environments. In particular, our solution synthesizes photoreal faces in the occluded region with faithful reconstruction of facial expressions and eye movements. Towards this goal, we develop a novel capture setup that consists of two near-infrared (NIR) cameras inside the HMD for eye capture and one external RGB camera for recording the visible face regions. To enable realistic face synthesis with consistent illumination, we propose a data-driven approach to fuse the gray-scale, narrow-field-of-view NIR images with the RGB image captured by the external camera. In addition, to generate photorealistic eyes, a dedicated algorithm is proposed to colorize the NIR eye images and further rectify the color distortion caused by the non-linear mapping of IR light sensitivity. Experimental results demonstrate that our framework is capable of synthesizing high-fidelity unoccluded facial images with accurate tracking of head motion, facial expression, and eye movement.


Session 12: Applications

Tuesday, March 26th, 10:15 - 11:30, Room A

Chair: Amela Sadagic

Investigating the Third Dimension for Authentication in Immersive Virtual Reality and in the Real World

Ceenu George (LMU Munich), Daniel Buschek (LMU Munich), Mohamed Khamis (University of Glasgow), Heinrich Hussmann (LMU Munich)

Conference

Abstract: Immersive Virtual Reality (IVR) is a growing 3D environment in which social and commercial applications will require user authentication. Similarly, smart homes in the real world (RW) offer an opportunity to authenticate in the third dimension. For both environments, there is a gap in understanding which elements of the third dimension can be leveraged to improve the usability and security of authentication. In particular, investigating the transferability of findings between these environments would help towards understanding how rapid prototyping of authentication concepts can be achieved in this context. We identify key elements from prior research that are promising for authentication in the third dimension. Based on these, we propose a concept in which users authenticate by selecting a series of 3D objects in a room using a pointer. We created a virtual 3D replica of a real-world room, which we leverage to evaluate and compare the factors that impact the usability and security of authentication in IVR and RW. In particular, we investigate the influence of randomized user and object positions in a series of user studies (N=48). We also evaluate shoulder surfing by real-world bystanders for IVR (N=75). Our results show that 3D passwords within our concept are resistant to shoulder-surfing attacks. Interactions are faster in RW compared to IVR, yet workload is comparable.

The Effects of Stereopsis and Immersion on Bimanual Assembly Task in a Virtual Reality System

Douglas Yamashita de Moura (Brazilian Air Force), Amela Sadagic (Naval Postgraduate School (NPS))

Conference

Abstract: Assembly tasks are an essential component of many complex operations performed regularly by a large number of people; examples include system maintenance (preventive and corrective), industrial production lines, and teleoperation. Having access to superior and low-cost solutions for training the personnel who need to conduct those tasks is essential. Virtual reality (VR) technology, with its immersive and non-immersive display solutions combined with hand controllers suitable for bimanual operations, is especially appealing for training purposes in this domain. We designed and executed a user study in which we tested the influence of stereopsis and immersion on the execution of a bimanual assembly task, and examined the effects of the tested system configurations on symptoms of cybersickness. Our user study, with its between-subjects format, collected comprehensive data sets in four distinct experimental conditions: immersive stereoscopic (IS), immersive non-stereoscopic (INS), non-immersive stereoscopic (NIS), and non-immersive non-stereoscopic (NINS). The results of this study suggest that immersive stereoscopic platforms are the most promising contenders for an efficient system solution, and that non-immersive non-stereoscopic solutions that use larger screens (like the TV set in our case) may also be considered. It is encouraging that no significant simulator sickness issues were recorded in any condition. The results of this study provide important input and guidance that people who work in the training domain need before making decisions about acquiring new solutions for training assembly tasks.

Empowering Young Job Seekers with Virtual Reality

Ekaterina Prasolova-Førland (Norwegian University of Science and Technology), Mikhail Fominykh (Norwegian University of Science and Technology), Oscar Ihlen Ekelund (Norwegian University of Science and Technology)

Conference

Abstract: This paper presents the results of the Virtual Internship project, which aims to help young job seekers gain insight into different workplaces through immersive and interactive experiences. We designed a concept of immersive job taste that provides a rich presentation of occupations with elements of workplace training, targeting a specific group of young job seekers, including high-school students and the unemployed. We designed a feedback model that uses industry-specific keywords to provide feedback on user performance. We developed several scenarios and applied different virtual and augmented reality concepts to build prototypes for different types of devices. The intermediary and final versions of the prototypes were evaluated by several groups of primary users and experts, including over 70 young job seekers and high-school students and over 45 professionals and experts. The data were collected using questionnaires and interviews. The results indicate a generally very positive attitude towards the concept of immersive job taste, although with significant differences between job seekers and experts. The prototype developed for room-scale virtual reality with controllers was generally evaluated better than the others, including cardboard viewers with 360 videos or animated 3D graphics, and augmented reality glasses. In the paper, we discuss several aspects, such as the potential of immersive technologies for career guidance, fighting youth unemployment by better informing young job seekers, and various practical and technological considerations.

Functional Workspace Optimization via Learning Personal Preferences from Virtual Experiences

Wei Liang (Beijing Institute of Technology), Jingjing Liu (Beijing Institute of Technology), Yining Lang (Beijing Institute of Technology), Bing Ning (Beijing Institute of Fashion Technology), Lap-Fai (Craig) Yu (George Mason University)

TVCG

Abstract: The functionality of a workspace is one of the most important considerations in both virtual world design and interior design. To offer appropriate functionality to the user, designers usually take general rules into account, e.g., general workflow and the average stature of users, which are summarized from population statistics. Yet, such general rules cannot reflect the personal preferences of a single individual, which vary from person to person. In this paper, we optimize a functional workspace according to the personal preferences of the specific individual who will use it. We propose an approach to learn an individual's personal preferences from their activities while using a virtual version of the workspace via virtual reality devices. We then construct a cost function that incorporates personal preferences, spatial constraints, pose assessments, and the visual field. Finally, the cost function is optimized to achieve an optimal layout. To evaluate the approach, we experimented with different settings. The results of the user study show that workspaces updated in this way better fit their users.

VR as a Content Creation Tool for Film Previsualisation

Quentin Galvane (Inria), I-Sheng Lin (Department of Computer Science), Ferran Argelaguet Sanz (Inria), Tsai-Yen Li (NCCU), Marc Christie (IRISA)

Conference

Abstract: Creatives in animation and film productions have long explored new means of prototyping their visual sequences before realizing them, relying on hand-drawn storyboards, physical mockups or, more recently, 3D modelling and animation tools. However, these 3D tools are designed with dedicated animators in mind rather than creatives such as film directors or directors of photography, and they remain complex to control and master. In this paper we propose a VR authoring system which provides intuitive ways of crafting visual sequences, both for expert animators and for expert creatives in the animation and film industry. The proposed system is designed to reflect the traditional process through (i) a storyboarding mode that enables rapid creation of annotated still images, (ii) a previsualisation mode that enables the animation of characters, objects and cameras, and (iii) a technical mode that enables the placement and animation of complex camera rigs (such as camera cranes) and light rigs. Our methodology strongly relies on the benefits of VR manipulation to rethink how content creation can be performed in this specific context, typically how to animate content in space and time. As a result, the proposed system is complementary to existing tools, and provides a seamless back-and-forth process between all stages of previsualisation. We evaluated the tool with professional users to gather experts' perspectives on the specific benefits of VR in 3D content creation.


Session 13: Haptics and Vibrotactiles

Tuesday, March 26th, 10:15 - 11:30, Room B

Chair: Maud Marchal

Modulating Fine Roughness Perception of Vibrotactile Textured Surface using Pseudo-haptic Effect

Yusuke Ujitoko (Hitachi, Ltd.), Yuki Ban (The University of Tokyo), Koichi Hirota (The University of Electro-Communications)

TVCG

Abstract: Playing back vibrotactile signals through actuators is commonly used to simulate the tactile feel of virtual textured surfaces. However, there is often a small mismatch between the simulated tactile feeling and the feeling intended by tactile designers, so a method of modulating vibrotactile perception is required. We focus on fine roughness perception and propose a method that uses a pseudo-haptic effect to modulate the fine roughness perception of vibrotactile textures. Specifically, we slightly modify the position of the on-screen pointer that indicates the touch position on the textured surface. We hypothesize that if users receive vibrational feedback while watching the pointer visually oscillate back/forth and left/right, they will perceive the vibrotactile surface as more uneven. We also hypothesize that the size of the visual oscillation affects the amount of modification of roughness perception. We conducted user studies to test these hypotheses. The results of the first user study suggest that, with our method, users feel the vibrotactile texture to be rougher than without it at a high probability. The results of the second user study suggest that users feel different roughness for the same vibrational texture depending on the size of the visual oscillation. These results confirm our hypotheses and suggest the effectiveness of our method. Our method can be applied to VR applications in which the pointer indicating the contact point with the virtual surface is visualized.
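
A toy version of the visual-oscillation manipulation described above might look like the sketch below; the oscillation amplitude and the uniform random sampling are made-up choices for illustration, not the authors' parameters.

```python
import math
import random

def displayed_pointer(true_xy, vibration_on, amplitude_px=2.0):
    """Return the pointer position to draw. While vibrotactile feedback plays,
    the drawn position is jittered slightly around the true touch position so
    that the surface is perceived as rougher (pseudo-haptic effect)."""
    x, y = true_xy
    if not vibration_on:
        return x, y
    angle = random.uniform(0.0, 2.0 * math.pi)
    r = random.uniform(0.0, amplitude_px)   # larger oscillations -> stronger roughness modulation
    return x + r * math.cos(angle), y + r * math.sin(angle)
```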

TacTiles: Dual-mode Low-power Electromagnetic Actuators for Rendering Continuous Contact and Spatial Haptic Patterns in VR

Velko Vechev (ETH), Juan Zarate (AIT - ETHZ), David Lindlbauer (ETH Zurich), Ronan J Hinchet (EPFL), Herbert Shea (EPFL), Otmar Hilliges (ETH Zurich)

Conference

Abstract: We introduce TacTiles, light (1.8 g), low-power (130 mW), and small form-factor (1 cm^3) electromagnetic actuators that can form a flexible haptic array to provide localized tactile feedback. A novel hardware design uses a custom-designed 8-layer PCB, dampening materials to reduce recoil, and an asymmetric latching mechanism that enables two distinct modes of actuation. We leverage these modes in Virtual Reality (VR) to render touch with objects and with surface textures when moving over them. We conducted quantitative and qualitative experiments to evaluate system performance and experiences in VR. Our results indicate that TacTiles are suitable for rendering a variety of surface textures, can convincingly render continuous touch with virtual objects, and enable users to discriminate objects from textured surfaces even without looking at them.

Toward Universal Tangible Objects: a Novel Approach to Optimize Haptic Sensations in 3D Interaction

Xavier de Tinguy (INSA/IRISA), Claudio Pacchierotti (CNRS), Maud Marchal (IRISA/INSA), Anatole Lécuyer (Inria)

Conference

Abstract: Tangible objects are a simple yet effective way of providing haptic sensations in Virtual Reality. To achieve a compelling illusion, there should be a good correspondence between what users see in the virtual environment and what they touch in the real world. The haptic features of the tangible object should indeed match those of the corresponding virtual one in terms of, e.g., size, local shape, mass, and texture. A straightforward solution is to create perfect tangible replicas of all the virtual objects in the scene. However, this is often neither feasible nor desirable. This paper presents an innovative approach enabling the use of few tangible objects to render many virtual ones. The proposed algorithm analyzes the available tangible and virtual objects to find the best grasps in terms of matching haptic sensations. It starts by identifying several suitable pinching poses on the considered tangible and virtual objects. Then, for each pose, it evaluates a series of haptically-salient characteristics. Next, it identifies the two most similar pinching poses according to these metrics, one on the tangible and one on the virtual object. Finally, it highlights the chosen pinching pose, which provides the best matching sensation between what users see and touch. The effectiveness of our approach is evaluated through a user study enrolling 12 participants. Results show that the algorithm is able to combine several haptically-salient object features to find convincing pinches between the given tangible and virtual objects. Finally, we demonstrate the approach within an operational use case, where users are asked to interact with different virtual objects rotating on a carousel, all rendered by the same tangible object.

HapticSphere: Physical Support To Enable Precision Touch Interaction in Mobile Mixed-Reality

Chiu-Hsuan Wang (Nation Chiao Tung University), Chen-Yuan Hsieh (National Chiao Tung University), Neng-Hao Yu (National Taiwan University of Science and Technology), Andrea Bianchi (KAIST), Liwei Chan (Computer Science)

Conference

Abstract: This work presents HapticSphere, a wearable spherical surface enabled by bridging a finger and the HMD with a passive string. Users perceive physical support at the finger when reaching out to the surface defined by the string's extent. This physical support assists users in precise touch interaction in stationary and walking virtual or mixed reality. We propose three methods of attaching the haptic string (directly on the head or on the body), and illustrate a novel single-step calibration algorithm that supports these configurations by estimating a grand haptic sphere once a head-coordinated touch interaction is established. Two user studies were conducted to validate our approach and to compare touch performance with physical support in sitting and walking situations in mobile mixed-reality scenarios. The results show that, in the walking condition, touch interaction with physical support significantly outperformed the visual-only condition.
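
As an illustration of the kind of calibration the abstract mentions, a "grand haptic sphere" could be estimated with an ordinary linear least-squares sphere fit to a handful of sampled touch points (at least four, not all coplanar). This is a generic algebraic fit, not necessarily the authors' single-step algorithm.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: returns (center, radius) for an (N, 3) array of
    3D touch samples, using the algebraic linearisation
    |p|^2 = 2 p.c + (r^2 - |c|^2)."""
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])   # unknowns: cx, cy, cz, r^2 - |c|^2
    b = np.sum(p * p, axis=1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = w[:3]
    radius = float(np.sqrt(w[3] + center @ center))
    return center, radius
```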

Vibro-tactile Feedback for Real-world Awareness in Immersive Virtual Environments

Dimitar Valkov (University of Münster), Lars Linsen (Westfälische Wilhelms-Universität Münster)

Conference

Abstract: In immersive virtual environments (IVEs), users' visual and auditory perception is replaced by computer-generated stimuli. Thus, knowing the positions of real objects is crucial for physical safety. While some solutions exist, e.g., using virtual replicas or visible cues indicating the boundaries of the interaction space, these either limit the IVE design or depend on the hardware setup. Moreover, most solutions cannot handle lost tracking, erroneous tracker calibration, or moving obstacles. However, these are common scenarios, especially for the increasingly popular home virtual reality settings. In this paper, we present a stand-alone hardware device designed to alert IVE users to potential collisions with real-world objects. It uses distance sensors mounted on a head-mounted display (HMD) and vibro-tactile actuators inserted into the HMD's face cushion. We implemented different types of sensor-actuator mappings with the goal of finding a mapping function that is minimally obtrusive in normal use but alerts efficiently in risk situations.


Session 14: Displays 1

Tuesday, March 26th, 10:15 - 11:30, Room C

Chair: Ed Swan

Manufacturing Application-Driven Foveated Near-Eye Displays

KAAN AKSIT (NVIDIA RESEARCH), Praneeth Chakravarthula (UNC Chapel Hill), Kishore Rathinavel (UNC Chapel Hill), Youngmo Jeong (NVIDIA CORPORATION, NVIDIA RESEARCH), Rachel Albert (NVIDIA CORPORATION, NVIDIA RESEARCH), Henry Fuchs (UNC Chapel Hill), David Luebke (NVIDIA CORPORATION, NVIDIA RESEARCH)

TVCG

Abstract: Traditional optical manufacturing poses a great challenge to near-eye display designers due to large lead times on the order of multiple weeks, limiting the ability of optical designers to iterate quickly and explore beyond conventional designs. We present a complete near-eye display manufacturing pipeline with a one-day lead time using commodity hardware. Our novel manufacturing pipeline consists of several innovations, including a rapid production technique that improves the surface of a 3D-printed component to an optical quality suitable for near-eye display applications, a computational design methodology using machine learning and ray tracing to create freeform static projection screen surfaces for near-eye displays that can represent arbitrary focal surfaces, and a custom projection lens design that distributes pixels non-uniformly for a foveated near-eye display hardware design candidate. We demonstrate an untethered augmented reality near-eye display prototype to assess the success of our technique, and show that a ski-goggles form factor, a large monocular field of view (30x55), and a resolution of 12 cycles per degree can be achieved.

A perception-driven hybrid decomposition for multi-layer accommodative displays

Hyeonseung Yu (MPI Informatik), Mojtaba Bemana (MPI Informatik), Marek Wernikowski (West Pomeranian University of Technology), Michal Chwesiuk (West Pomeranian University of Technology), Okan Tarhan Tursun (MPI Informatik), Gurprit Singh (Max Planck Institute for informatics), Karol Myszkowski (MPI Informaktik), Radoslaw Mantiuk (ZUT in Szczecin), Hans-Peter Seidel (MPII), Piotr Didyk (Università della Svizzera italiana)

TVCG

Abstract: Multi-focal plane and multi-layered light-field displays are promising solutions for addressing all visual cues observed in the real world. Unfortunately, these devices usually require expensive optimizations to compute a suitable decomposition of the input light field or focal stack to drive individual display layers. Although these methods provide near-correct image reconstruction, a significant computational cost prevents real-time applications. A simple alternative is a linear blending strategy which decomposes a single 2D image using depth information. This method provides real-time performance, but it generates inaccurate results at occlusion boundaries and on glossy surfaces. This paper proposes a perception-based light field synthesis technique which combines the advantages of the above strategies and achieves both real-time performance and high-fidelity results. The fundamental idea is to apply expensive optimizations only in regions where it is perceptually superior, e.g., depth discontinuities at the fovea, and fall back to less costly linear blending otherwise. We present a complete, perception-informed analysis and model that locally determine which of the two strategies should be applied. The prediction is later utilized by our new synthesis method which performs the image decomposition. The results are analyzed and validated in user experiments on a custom multi-plane display.
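
The linear blending fallback referred to in the abstract is the classic depth-weighted splitting of each pixel between the two nearest focal planes. A sketch in dioptres, with arbitrarily chosen plane positions:

```python
import numpy as np

def linear_blend_weights(depth_dioptres, near_plane=2.0, far_plane=0.5):
    """Split each pixel's intensity between two focal planes in proportion to its
    dioptric distance from them. `depth_dioptres` is an (H, W) array; the plane
    positions (here 2.0 D = 0.5 m and 0.5 D = 2 m) are illustrative."""
    d = np.clip(depth_dioptres, far_plane, near_plane)
    w_near = (d - far_plane) / (near_plane - far_plane)   # 1 at the near plane, 0 at the far plane
    return w_near, 1.0 - w_near
```

Per the abstract, this cheap rule would be kept wherever the perceptual model predicts it is sufficient, and the expensive optimization would be reserved for regions such as depth discontinuities near the fovea.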

Light Attenuation Display: Subtractive See-Through Near-Eye Display via Spatial Color Filtering

Yuta Itoh (Tokyo Institute of Technology), Tobias Langlotz (University of Otago), Daisuke Iwai (Osaka University), Toshiyuki Amano (Wakayama University), Kiyoshi Kiyokawa (Nara Institute of Science and Technology)

TVCG

Abstract: We present a display for optical see-through near-eye displays based on light attenuation, a new paradigm that forms images by spatially subtracting colors of light. Existing optical see-through head-mounted displays (OST-HMDs) form virtual images in an additive manner: they optically combine the light from an embedded light source, such as a microdisplay, into the user's field of view (FoV). Instead, our light attenuation display filters the color of the real background light pixel-wise in the user's see-through view, forming an image as a spatial color filter. Our image formation is complementary to existing light-additive OST-HMDs. The core optical component in our system is a phase-only spatial light modulator (PSLM), a liquid crystal module that can control the phase of the light at each pixel. By combining PSLMs with polarization optics, our system realizes a spatially programmable color filter. In this paper, we introduce our optics design, evaluate the spatial color filter, consider applications including image rendering and FoV color control, and discuss the limitations of the current prototype.
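
Since a subtractive display can only remove light, a target appearance can at best be approximated by the ratio of target to background radiance, clamped to the achievable transmittance range. The sketch below is a toy per-pixel model only; it ignores the polarization- and wavelength-dependent behaviour of the actual PSLM-based prototype.

```python
import numpy as np

def attenuation_mask(target_rgb, background_rgb, eps=1e-6):
    """Per-pixel transmittance that approximates `target_rgb` by filtering the
    observed `background_rgb`. Both are linear-RGB float arrays of shape (H, W, 3)."""
    t = target_rgb / np.maximum(background_rgb, eps)  # ideal ratio
    return np.clip(t, 0.0, 1.0)                       # a filter can only remove light, never add it
```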

Varifocal Occlusion for Optical See-Through Head-Mounted Displays using a Slide Occlusion Mask

Takumi Hamasaki (Keio University), Yuta Itoh (Tokyo Institute of Technology)

TVCG

Abstract: We propose a varifocal occlusion method for optical see-through head-mounted displays (OST-HMDs). Occlusion in OST-HMDs is a powerful cue for enabling depth perception in augmented reality. Without occlusion, virtual objects rendered by an OST-HMD appear semi-transparent, which reduces their realism. A common occlusion technique is to use spatial light modulators (SLMs) to selectively block incoming light rays at each pixel of the SLM. However, most existing methods create an occlusion mask only at a single, fixed depth, typically at infinity. With recent advances in varifocal OST-HMDs, such traditional fixed-focus occlusion causes a mismatch in depth between the occlusion mask plane and the virtual object to be occluded, leading to an uncomfortable user experience with blurred occlusion masks. In this paper, we thus propose an OST-HMD system with varifocal occlusion capability: we physically slide an SLM to optically shift its occlusion layer in depth so that the mask appears sharp and aligns with the virtual image at a given depth. In the experiment, we build a proof-of-concept varifocal occlusion system implemented with a custom retinal projection display and demonstrate that the system can shift the occlusion plane to given depths ranging from 25 cm to infinity.

MDI: A Multi-channel Dynamic Immersion Headset for Seamless Switching between Virtual and Real World Activities

Kien T. P. Tran (University of Canterbury), Sungchul Jung (University of Canterbury), Simon Hoermann (University of Canterbury), Robert W. Lindeman (University of Canterbury)

Conference

Abstract: We present a study of the usability and induced workload of a new head-mounted display (HMD) which enables the user to dynamically integrate the view of the real world through multiple optical channels. Most HMDs provide full immersion by occluding the real world from the view of the user, and replacing it with a computer-generated virtual environment. One trade-off, however, is that such HMDs make it difficult to interact with real-world objects and people, or to react to events while being immersed. Tasks that are simple in the unmediated real world, such as taking a drink or responding to a text, become nearly impossible while wearing an occlusive HMD, and normally require the user to take off the HMD. To address this limitation, we propose an HMD which allows users to dynamically integrate the view of their near-field environment while still wearing the HMD. We call our approach Multi-channel Dynamic Immersion (MDI). We embedded controllable LCD panels around the periphery of an HMD and installed a fish-eye lens in front of its forward-facing camera. We then compared MDI to three other methods of switching from working in VR to real world activities: (a) a front camera video-see-through method, (b) using dynamic LCDs only, and (c) taking off the HMD. Participants had to interrupt their work in a VR office to carry out typical real world daily activities, such as picking up a mug, responding to a phone call, or replying to an SMS. Our results show no significant differences in task execution time between the four conditions. However, we observed a tendency favouring MDI as the easiest to use among the HMDs. In addition, we found a significantly higher rated ease of use for the two HMDs with controllable LCDs compared to the two occlusive HMDs. Our data also underscores that a substantial amount of time is lost when switching between work contexts in occlusive HMDs and the real world.


Session 15: Navigation and Redirection

Tuesday, March 26th, 10:15 - 11:30, Room D

Chair: Victoria Interrante

VRoamer: Generating On-The-Fly VR Experiences While Walking inside Large, Unknown Real-World Building Environments

Lung-Pan Cheng (National Taiwan University), Eyal Ofek (Microsoft Research), Christian Holz (Microsoft Research), Andrew D Wilson (Microsoft Research)

Conference

Abstract: Procedural generation in virtual reality (VR) has been used to adapt the virtual world to various indoor environments, fitting different geometries and interiors with virtual environments. However, such applications require that the physical environment be known or pre-scanned prior to use to then generate the corresponding virtual scene, thus restricting the virtual experience to a controlled space. In this paper, we present VRoamer, which enables users to walk unseen physical spaces for which VRoamer procedurally generates a virtual scene on-the-fly. Scaling to the size of office buildings, VRoamer extracts walkable areas and detects physical obstacles in real time using inside-out tracking, instantiates pre-authored virtual rooms if their sizes fit physically walkable areas or otherwise generates virtual corridors and doors that lead to undiscovered physical areas. The use of these virtual structures that connect pre-authored scenes on-the-fly allow VRoamer to (1) temporarily block users’ passage, thus slowing them down while increasing VRoamer’s insight into newly discovered physical areas, (2) prevent users from seeing changes beyond the current virtual scene, and (3) obfuscate the appearance of physical environments. VRoamer animates virtual objects to reflect dynamically discovered changes of the physical environment, such as people walking by or obstacles that become apparent only with closer proximity. In our feasibility evaluation of VRoamer, participants were able to quickly walk long distances through a procedurally generated dungeon experience and reported high levels of immersion in a post-hoc questionnaire.

Improving Walking in Place Methods with Individualization and Deep Networks

Sara Hanson (University of Southern California), Richard Paris (Vanderbilt University), Haley Alexander Adams (Vanderbilt University), Bobby Bodenheimer (Vanderbilt University)

Conference

Abstract: Walking in place is a standard method for moving through large virtual environments when the physical space or tracking range is limited. It has become increasingly significant with the advent of mobile virtual reality where external tracking may not be present. In this paper we revisit walking in place algorithms to improve some of their typical difficulties, such as starting and stopping for individual users and controlling the speed with which users travel through the environment. Starting from a hand-tuned threshold based algorithm we provide a fast method for individualizing the walking in place algorithm based on standard biomechanics. In addition, we introduce a new walking in place model based on a convolutional neural network trained to differentiate walking and standing. In two experiments we assess these methods on two mobile virtual reality platforms based on controllability, scale, and presence. Our results suggest that an adequately trained convolutional neural network can be an effective way of implementing walking in place.
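
A minimal PyTorch sketch of the kind of classifier the abstract describes: a small 1D convolutional network fed short windows of head-tracker samples and trained to distinguish stepping-in-place from standing. The window length, input channels, and architecture are assumptions for illustration, not the paper's network.

```python
import torch
import torch.nn as nn

class WipClassifier(nn.Module):
    """Tiny 1D CNN over a window of head-tracker samples (e.g. vertical head
    position at 90 Hz) that predicts walking-in-place vs. standing."""
    def __init__(self, channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 2)   # logits for {standing, walking}

    def forward(self, x):              # x: (batch, channels, window)
        return self.head(self.features(x).squeeze(-1))

model = WipClassifier()
dummy = torch.randn(8, 1, 90)          # a batch of 1-second head-height windows
logits = model(dummy)                  # shape (8, 2)
```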

Redirecting View Rotation in Immersive Movies with Washout Filters

Travis Stebbins (Texas A&M University), Eric Ragan (University of Florida)

Conference

Abstract: Immersive movies take advantage of virtual reality (VR) to bring new opportunities for storytelling that allow users to naturally turn their heads and bodies to view a 3D virtual world and follow the story in a surrounding space. However, while many designers often assume the scenarios where viewers stand and are free to physically turn without constraints, this excludes many commonly desired usage settings where the user may wish to remain seated, such as the use of VR while relaxing on the couch or passing the time during a flight. For such situations, large amounts of physical turning may be uncomfortable due to neck strain or awkward twisting. Our research investigates a technique that automatically rotates the virtual scene to help redirect the viewer’s physical rotation while viewing immersive narrative experiences. By slowly rotating the virtual content, viewers are encouraged to gradually turn physically to align their head positions to a more comfortable straight-ahead viewing direction in seated situations where physical turning is not ideal. We present our study of technique design and an evaluation of how the redirection approach affects user comfort, sickness, the amount of physical rotation, and likelihood of viewers noticing the rotational adjustments. Evaluation results show the rotation technique was effective at significantly reducing the amount of physical turning while watching immersive videos, and only 39% of participants noticed the automated rotation when the technique rotated at a speed of 3 degrees per second.
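
The redirection behaviour described above, slowly rotating the scene so the viewer drifts back toward a comfortable straight-ahead posture, could be driven by a small per-frame update such as the sketch below. The 3 degrees-per-second rate matches the speed reported in the abstract; the dead zone and the sign convention are assumptions.

```python
def washout_rotation_step(head_yaw_deg, dt, rate_deg_per_s=3.0, dead_zone_deg=5.0):
    """Degrees to rotate the virtual scene this frame so that the content the user
    is watching drifts back toward their physical straight-ahead direction.
    `head_yaw_deg` is the user's yaw offset from straight ahead; the returned
    angle is signed, and which sign means 'toward centre' depends on the engine."""
    offset = abs(head_yaw_deg)
    if offset <= dead_zone_deg:
        return 0.0                                    # already roughly facing forward
    step = min(rate_deg_per_s * dt, offset - dead_zone_deg)
    return -step if head_yaw_deg > 0 else step        # rotate against the user's offset
```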

Redirected Jumping: Imperceptibly Manipulating Jump Motions in Virtual Reality

Daigo Hayashi (Research Institute of Electrical Communication, Tohoku University), Kazuyuki Fujita (Tohoku University), Kazuki Takashima (Tohoku University), Robert W. Lindeman (University of Canterbury), Yoshifumi Kitamura (Tohoku University)

Conference

Abstract: Jumping is a fundamental movement in our daily lives, and is often used in many video games. However, little research has been done on jumping and its possible use as a redirection technique in virtual reality (VR). In this study we propose Redirected Jumping, a novel redirection technique that enables us to purposefully manipulate the mapping of the user's physical jumping movements (e.g., distance and direction) to movement in the virtual space, allowing richer and more active physical VR experiences within a limited tracking area. To demonstrate the possibilities afforded by Redirected Jumping, we implemented a jump detection algorithm and jumping redirection methods for three basic jumping actions (i.e., horizontal, vertical, and rotational jumps) using common VR devices. We conducted three user studies to investigate the effective manipulation ranges, and the results reveal that our methods can manipulate users' jumping movements without their noticing, to an extent comparable to redirected walking.
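
The snippet below is only a schematic of how such translation and rotation gains might be applied once a jump has been detected; the gain values and the function name redirect_jump are made up for illustration and are not the thresholds reported in the paper.

    HORIZONTAL_GAIN = 1.2    # virtual jump distance = 1.2 x physical distance (example)
    ROTATION_GAIN = 1.1      # virtual turn angle = 1.1 x physical turn (example)

    def redirect_jump(physical_delta_xz, physical_yaw_delta_deg):
        """Map one detected physical jump to the displacement of the virtual viewpoint."""
        virtual_delta = (physical_delta_xz[0] * HORIZONTAL_GAIN,
                         physical_delta_xz[1] * HORIZONTAL_GAIN)
        virtual_yaw = physical_yaw_delta_deg * ROTATION_GAIN
        return virtual_delta, virtual_yaw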

Evaluating the Effectiveness of Redirected Walking with Auditory Distractors for Navigation in Virtual Environments

Nicholas Rewkowski (UNC Chapel Hill), Atul Rungta (University of North Carolina at Chapel Hill), Mary C. Whitton (UNC Chapel Hill), Ming C Lin (UMD College Park)

Conference

Abstract: Many virtual locomotion interfaces allowing users to move in virtual reality have been built and evaluated, such as redirected walking (RDW), walking-in-place (WIP), and joystick input. RDW has been shown to be among the most natural and immersive as it supports real walking, and many newer methods further adapt RDW to allow for customization and greater immersion. Most of these methods have been demonstrated to work with vision; in this paper, we evaluate the ability of a general distractor-based RDW framework to be used with only an auditory display. We conducted two studies evaluating the differences between RDW with auditory distractors and other distractor modalities using distraction ratio, virtual and physical path information, immersion, simulator sickness, and other measurements. Our results indicate that auditory RDW has the potential to be used with complex navigational tasks, such as crossing streets and avoiding obstacles. It can be used without designing the system specifically for audio-only users. Additionally, sense of presence and simulator sickness remain reasonable across all user groups.


Session 16: 360 Video 2

Tuesday, March 26th, 14:15 - 15:30, Room A

Chair: Hideo Saito

Real-time panoramic depth maps from omni-directional stereo images for 6 DoF videos in virtual reality

Po Kong Lai (University of Ottawa), Shuang Xie (University of Ottawa), Jochen Lang (University of Ottawa), Robert Laganière (University of Ottawa)

Conference

Abstract: In this paper we present an approach for 6 DoF panoramic videos from omni-directional stereo (ODS) images using convolutional neural networks (CNNs). More specifically, we use CNNs to generate panoramic depth maps from ODS images in real-time. These depth maps would then allow for re-projection of panoramic images, thus providing 6 DoF to a viewer in virtual reality (VR). As the boundaries of a panoramic image must touch in order to envelope a viewer, we introduce a border weighted loss function as well as new error metrics specifically tailored for panoramic images. We show experimentally that training with our border weighted loss function improves performance by benchmarking a baseline skip-connected encoder-decoder style network as well as other state-of-the-art methods in depth map estimation from mono and stereo images. Finally, a practical application for VR using real world data is also demonstrated.
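
One plausible reading of a border-weighted loss, sketched in PyTorch below, is to up-weight the image columns near the left and right edges where the panorama must wrap seamlessly; the weighting scheme and parameter names are assumptions, not the paper's exact formulation.

    import torch

    def border_weighted_l1(pred, target, border_frac=0.1, border_weight=2.0):
        """pred, target: (batch, 1, H, W) panoramic depth maps."""
        _, _, _, w = pred.shape
        weights = torch.ones(w, device=pred.device)
        border = max(1, int(w * border_frac))
        weights[:border] = border_weight    # columns at the left wrap-around edge
        weights[-border:] = border_weight   # columns at the right wrap-around edge
        return ((pred - target).abs() * weights.view(1, 1, 1, w)).mean()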

Exploration of Large Omnidirectional Images in Immersive Environments

Seyedkoosha Mirhosseini (Stony Brook University), Parmida Ghahremani (Stony Brook University), Sushant Ojal (Stony Brook University), Joseph Marino (Stony Brook University), Arie Kaufman (Stony Brook University)

Conference

Abstract: Navigation is a major challenge in exploring data within immersive environments, especially of large omnidirectional spherical images. We propose a method of auto-scaling to allow users to navigate using teleportation within the safe boundary of their physical environment with different levels of focus. Our method combines physical navigation with virtual teleportation. We also propose a “peek then warp” behavior when using a zoom lens and evaluate our system in conjunction with different teleportation transitions, including a proposed transition for exploration of omnidirectional and 360-degree panoramic imagery, termed Envelop, wherein the destination view expands out from the zoom lens to completely envelop the user. In this work, we focus on visualizing and navigating large omnidirectional or panoramic images with application to GIS visualization as an inside-out omnidirectional image of the earth. We conducted two user studies to evaluate our techniques over a search and comparison task. Our results illustrate the advantages of our techniques for navigation and exploration of omnidirectional images in an immersive environment.

The Effect of Camera Height, Actor Behavior, and Viewer Position on the User Experience of 360° Videos

Tuuli Keskinen (University of Tampere), Ville Mäkelä (LMU Munich), Pekka Kallioniemi (University of Tampere), Jaakko Hakulinen (University of Tampere), Jussi Karhu (University of Tampere), Kimmo Ronkainen (University of Tampere), John Mäkelä (University of Tampere), Markku Turunen (University of Tampere)

Conference

Abstract: 360° videos can be viewed in an immersive manner with a head-mounted display (HMD). However, it is unclear how the viewing experience is affected by basic properties of 360° videos, such as how high they are recorded from, and whether there are people close to the camera. We conducted a 24-participant user study where we explored whether the viewing experience is affected by A) camera height, B) the proximity and actions of people appearing in the videos, and C) viewer position (standing/sitting). The results, surprisingly, suggest that the viewer's own height has little to no effect on the preferred camera height and the experience. The optimal camera height is around 150 centimeters, which falls within the comfortable height range for both sitting and standing viewers. Moreover, in some cases, people being close to the camera, or the camera being very low, has a negative effect on the experience. Our work contributes to understanding and designing immersive 360° experiences.

Live Stereoscopic 3D Image With Constant Capture Direction of 360 Cameras for High-Quality Visual Telepresence

Yasushi Ikei (Tokyo Metropolitan University), Vibol Yem (Tokyo Metropolitan University), Kento Tashiro (Tokyo Metropolitan University), Toi Fujie (Tokyo Metropolitan University), Tomohiro Amemiya, Michiteru Kitazaki

Conference

Abstract: To capture a remote 3D image, a system using conventional stereo cameras attached to the robot's head is commonly used. However, latency and motion blur, which may cause VR sickness, easily occur due to the degraded image in buffers newly generated when the cameras are rotated. In this study, we propose a method named TwinCam, in which we use two 360° cameras spaced at the standard interpupillary distance and keep the direction of the lenses constant in the world coordinate system, even when the cameras are rotated around the axis of the head and moved along with the position of the eyes. We considered that this method can suppress the image buffers because each camera can capture an omni-directional image with the capture direction fixed. This paper introduces the mechanical design of our camera system and its potential for visual telepresence through three experiments. Experiment 1 confirmed the need for a stereoscopic rather than monoscopic camera for high-accuracy depth perception; Experiments 2 and 3 showed that our mechanical camera setup can reduce motion blur and VR sickness.

Efficient Hybrid Projection For Encoding 360 VR Videos

Jingtao Tang (East China Normal University), Xinyu Zhang (East China Normal University)

Conference

Abstract: During the past five years, many affordable 360 VR cameras (e.g., Ricoh Theta, Samsung Gear 360, LG 360, Insta360) have been sold on the market. While 360 VR videos will soon become ubiquitous, 360 VR video standardization is still under discussion in the digital industry and concrete efforts are desirable. Though equirectangular projection (ERP) has been widely used for the projection and packing layout when encoding 360 VR videos, it suffers from severe projection distortion at the poles. In this paper, we thoroughly analyze the problems with ERP. Based on our analysis, we introduce a new format for encoding and storing 360 VR videos using a hybrid cylindrical projection. Our new format can generate a well-balanced pixel distribution and minimize stretching ratios in the resulting projection.
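
For background (standard equirectangular geometry rather than the paper's new hybrid format), the horizontal oversampling of ERP grows toward the poles as sketched below.

    import math

    def erp_row_stretch(row, height):
        """Horizontal stretch factor of one row of an equirectangular image."""
        latitude = (0.5 - (row + 0.5) / height) * math.pi   # ~ +pi/2 (top) .. -pi/2
        return 1.0 / max(math.cos(latitude), 1e-6)

    # e.g. the top row of a 1024-row ERP frame is oversampled roughly 650x
    # relative to the equator, which is the pole distortion the paper targets.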


Session 17: Audio and Perception

Tuesday, March 26th, 14:15 - 15:30, Room B

Chair: Ernst Kruijff

Perceptual Study of Near-Field Binaural Audio Rendering in Six-Degrees-of-Freedom Virtual Reality

Olli Rummukainen (Fraunhofer IIS), Sebastian J. Schlecht (Fraunhofer IIS), Thomas Robotham (Fraunhofer IIS), Axel Plinge (Fraunhofer IIS), Emanuël A. P. Habets (Fraunhofer IIS)

Conference

Abstract: Auditory localization cues in the near-field (< 1.0 m) are significantly different from those in the far-field. The near-field region is within an arm's length of the listener, allowing proprioceptive cues to be integrated to determine the location of an object in space. This perceptual study compares three non-individualized methods to apply head-related transfer functions (HRTFs) in six-degrees-of-freedom near-field audio rendering, namely, far-field measured HRTFs, multi-distance measured HRTFs, and spherical-model-based HRTFs with near-field extrapolation. To set our findings in context, we provide a real-world hand-held audio source for comparison along with a distance-invariant condition. Two modes of interaction are compared in an audio-visual virtual reality: one allowing the participant to move the audio object dynamically and the other with a stationary audio object but a freely moving listener.

Perceptual Characterization of Early and Late Reflections for Auditory Displays

Atul Rungta (University of North Carolina at Chapel Hill), Nicholas Rewkowski (UNC Chapel Hill), Roberta L. Klatzky (Carnegie Mellon), Dinesh Manocha (University of Maryland at College Park)

Conference

Abstract: We introduce a novel, perceptually derived metric (P-Reverb) that relates the just-noticeable difference (JND) of the early sound field (also called early reflections) to the late sound field (known as late reflections or reverberation). Early and late reflections are crucial components of the sound field and provide multiple perceptual cues for auditory displays. We conduct two extensive user evaluations that relate the JNDs of early reflections and late reverberation in terms of the mean-free path of the environment and present a novel P-Reverb metric. Our metric is used to estimate dynamic reverberation characteristics efficiently in terms of important parameters like reverberation time (RT60). We show the numerical accuracy of our P-Reverb metric in estimating RT60. Finally, we use our metric to design an interactive sound propagation algorithm and demonstrate its effectiveness on various benchmarks.
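
For context, the two quantities the abstract builds on have standard textbook forms; the relations below are general room-acoustics background (Sabine's reverberation time and the mean free path), not the P-Reverb metric introduced in the paper.

    \[
      RT_{60} \approx \frac{0.161\,V}{S\,\bar{\alpha}},
      \qquad
      \ell_{\mathrm{mfp}} = \frac{4V}{S},
    \]

    where V is the room volume, S its total surface area, and \bar{\alpha} the mean absorption coefficient.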

Audio-Visual-Olfactory Resource Allocation for Tri-modal Virtual Environments

Efstratios Doukakis (University of Warwick), Kurt Debattista (University of Warwick), Thomas Bashford-Rogers (University of the West of England), Amar Dhokia (University of Warwick), Ali Asadipour (University of Warwick), Alan Chalmers (University of Warwick), Carlo Harvey (Birmingham City University)

TVCG

Abstract: Virtual Environments (VEs) provide the opportunity to simulate a wide range of applications, from training to entertainment, in a safe and controlled manner. For applications which require realistic representations of real world environments, the VEs need to provide multiple, physically accurate sensory stimuli. However, simulating all the senses that comprise the human sensory system (HSS) is a task that requires significant computational resources. Since it is intractable to deliver all senses at the highest quality, we propose a resource distribution scheme in order to achieve an optimal perceptual experience within the given computational budgets. This paper investigates resource balancing for multi-modal scenarios composed of aural, visual and olfactory stimuli. Three experimental studies were conducted. The first experiment identified perceptual boundaries for olfactory computation. In the second experiment, participants (N = 25) were asked, across a fixed number of budgets (M = 5), to identify what they perceived to be the best visual, acoustic and olfactory stimulus quality for a given computational budget. Results demonstrate that participants tend to prioritize visual quality compared to other sensory stimuli. However, as the budget size is increased, users prefer a balanced distribution of resources with an increased preference for having smell impulses in the VE. Based on the collected data, a quality prediction model is proposed and its accuracy is validated against previously unused budgets and an untested scenario in a third and final experiment.

Auditory Feedback for Navigation with Echoes in Virtual Environments: Training Procedure and Orientation Strategies

Anastasia Andreasen (Aalborg University), Michele Geronazzo (Aalborg University), Niels Christian Nilsson (Aalborg University Copenhagen), Jelizaveta Zovnercuka (Aalborg University), Kristian Konovalov (Aalborg University), Stefania Serafin (Aalborg University)

TVCG

Abstract: Being able to hear objects in an environment, for example using echolocation, is a challenging task. The main goal of the current work is to use virtual environments (VEs) to train novice users to navigate using echolocation. Previous studies have shown that musicians are able to differentiate sound pulses from reflections. This paper presents design patterns for VE simulators for both training and testing procedures, while classifying users’ navigation strategies in the VE. Moreover, the paper presents features that increase users’ performance in VEs. We report the findings of two user studies: a pilot test that helped improve the sonic interaction design, and a primary study exposing participants to a spatial orientation task during four conditions which were early reflections (RF), late reverberation (RV), early reflections-reverberation (RR) and visual stimuli (V). The latter study allowed us to identify navigation strategies among the users. Some users (10/26) reported an ability to create spatial cognitive maps during the test with auditory echoes, which may explain why this group performed better than the remaining participants in the RR condition.


Session 18: Head and Gaze Input

Tuesday, March 26th, 14:15 - 15:30, Room C

Chair: Henry Fuchs

RingText: Dwell-free and hands-free Text Entry for Mobile Head-Mounted Displays using Head Motions

Wenge Xu (Xi’an Jiaotong-Liverpool University), Hai-Ning Liang (Xi’an Jiaotong-Liverpool University), Yuxuan Zhao (Xi’an Jiaotong-Liverpool University), Tianyu Zhang (Xi’an Jiaotong-Liverpool University), Difeng Yu (Xi’an Jiaotong-Liverpool University), Diego Vilela Monteiro (Xi’an Jiaotong-Liverpool University), Yong Yue (Xi’an Jiaotong-Liverpool University)

TVCG

Abstract: In this paper, we present a case for text entry using a circular keyboard layout for mobile head-mounted displays (HMDs) that is dwell-free and does not require users to hold a dedicated input device for letter selection. To support the case, we have implemented RingText, whose design is based on a circular layout with two concentric circles. The outer circle is subdivided into regions containing letters. Selection is made by using a virtual cursor controlled by the user's head movements: entering a letter region triggers a selection, and moving back into the inner circle resets the selection. The design of RingText follows an iterative process, where we initially conduct a first study to investigate the optimal number of letters per region, inner circle size, and alphabet starting location. We then optimize its design by selecting the most suitable features from the first study: one letter per region, narrowing the trigger area to lower error rates, and creating candidate regions that incorporate two suggested words to appear next to the current letter region (close to the cursor) using a dynamic (rather than fixed) approach. Our second study compares the text entry performance of RingText with four other hands-free techniques, and the results show that RingText outperforms them. Finally, we run a third study lasting four consecutive days with 10 participants (5 novice users and 5 expert users) doing two daily sessions, and the results show that RingText is quite efficient and yields a low error rate. At the end of the eighth session, the novice users could achieve a text entry speed of 11.30 WPM after 60 minutes of training, while the expert (more experienced) users could reach an average text entry speed of 13.24 WPM after 90 minutes of training.

Optimizing Visual Element Placement in Virtual Environments via Visual Attention Analysis

Rawan Alghofaili (George Mason University), Michael S Solah (University of Massachusetts Boston), Haikun Huang (University of Massachusetts Boston), Yasuhito Sawahata (Japan Broadcasting Corporation), Marc Pomplun (University of Massachusetts Boston), Lap-Fai (Craig) Yu (George Mason University)

Conference

Abstract: Eye-tracking enables researchers to conduct complex analysis on human behavior. With the recent introduction of eye-tracking into consumer-grade virtual reality headsets, the barrier of entry to visual attention analysis in virtual environments has been lowered significantly. Whether for arranging artwork in a virtual museum, posting banners for virtual events or placing advertisements in virtual worlds, analyzing visual attention patterns provides a powerful means for guiding visual element placement. In this work, we propose a novel data-driven optimization approach for automatically analyzing visual attention and placing visual elements in 3D virtual environments. Using an eye-tracking virtual reality headset, we collect eye-tracking data which we use to train a regression model for predicting visual attention. We then use the predicted gaze duration output of our regressors to optimize the placement of visual elements with respect to certain visual attention and design goals. Through experiments in several virtual environments, we demonstrate the effectiveness of our optimization approach for predicting visual attention and for placing visual elements in different practical scenarios. Our approach is implemented as a useful plug-in that level designers can use to automatically populate visual elements in 3D virtual environments.
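
A minimal sketch of the kind of data-driven pipeline described, with entirely hypothetical features and a generic regressor (the paper does not specify this model): fit a regressor from placement features to recorded gaze duration, then rank candidate placements by predicted attention.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical features per candidate placement,
    # e.g. [distance to walking path, angular eccentricity, element size].
    X_train = np.random.rand(200, 3)     # placeholder for the eye-tracking dataset
    y_train = np.random.rand(200)        # placeholder recorded gaze durations (s)

    model = RandomForestRegressor(n_estimators=100).fit(X_train, y_train)

    candidates = np.random.rand(10, 3)   # placements proposed for a new scene
    best_placement = candidates[np.argmax(model.predict(candidates))]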

SGaze: A Data-Driven Eye-Head Coordination Model for Realtime Gaze Prediction

Zhiming Hu (Peking University), Congyi Zhang (Peking University), Sheng Li (Peking University), Guoping Wang (Peking Univesity), Dinesh Manocha (University of Maryland)

TVCG

Abstract: We present a novel, data-driven eye-head coordination model that can be used for realtime gaze prediction for immersive HMD-based applications without any external hardware or eye tracker. Our model (SGaze) is computed by generating a large dataset that corresponds to different users navigating in virtual worlds with different lighting conditions. We perform statistical analysis on the recorded data and observe a linear correlation between gaze positions and head rotation angular velocities. We also find that there exists a latency between eye movements and head movements. SGaze can work as a software-based realtime gaze predictor, and we formulate a time related function between head movement and eye movement and use that for realtime gaze position prediction. We demonstrate the benefits of SGaze for gaze-contingent rendering and evaluate the results with a user study.
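
The released model is not reproduced here; the sketch below only illustrates the shape of the relationship the abstract describes, a linear term on head angular velocity applied with a fixed eye-head latency, using made-up coefficient values.

    from collections import deque

    LATENCY_FRAMES = 10        # assumed latency between head and eye motion, in frames
    A_X, B_X = 0.12, 0.0       # assumed linear coefficients (deg per deg/s, deg)

    _history = deque(maxlen=LATENCY_FRAMES)

    def predict_gaze_x(head_yaw_velocity_deg_s):
        """Predict the horizontal gaze offset (deg) in the HMD image this frame."""
        _history.append(head_yaw_velocity_deg_s)
        delayed_velocity = _history[0]     # head velocity LATENCY_FRAMES ago
        return A_X * delayed_velocity + B_X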

EyeSeeThrough: Unifying Tool Selection and Application in Virtual Environments

Diako Mardanbegi (Lancaster University), Ken Pfeuffer (Bundeswehr University Munich), Alexander Perzl (Ludwig-Maximilians-Universität), Benedikt Mayer (Ludwig Maximilians University), Shahram Jalaliniya (Centennial College), Hans Gellersen (Lancaster University)

Conference

Abstract: In 2D interfaces, actions are often represented by fixed tools arranged in menus, palettes, or dedicated parts of a screen, whereas 3D interfaces afford their arrangement at different depths relative to the user, and the user can move them relative to each other. In this paper we introduce EyeSeeThrough as a novel interaction technique that utilises eye-tracking in VR. The user can apply an action to an intended object by visually aligning the object with the tool along the line of sight, and then issuing a confirmation command. The underlying idea is to merge the two-step process of 1) selecting a mode in a menu and 2) applying it to a target into one unified interaction. We present a user study where we compare the method to the baseline two-step selection. The results of our user study showed that our technique outperforms the two-step selection in terms of speed and comfort. We further developed a prototype of a virtual living room to demonstrate the practicality of the proposed technique.


Session 19: Medical Applications Therapy

Tuesday, March 26th, 14:15 - 15:30, Room D

Chair: Benjamin Lok

Visual stimulus disrupts the location of tactile perception in virtual reality

Dion Willis (University of Portsmouth), Wendy Powell (Tilburg University), Vaughan Powell (University of Portsmouth), Brett Stevens (University of Portsmouth)

Conference

Abstract: Phantom limb pain is a neuropathic condition in which a person feels pain in a limb that is not present. Cognitive treatments that visually recreate the limb in an attempt to create a cross-modal interaction between vision and touch/proprioception have been shown to be effective at alleviating this pain. With improvements in technology, Virtual Mirror Therapy is starting to gain favour; however, there are currently no applications that utilize passive touch in the same way non-virtual-reality applications do. This paper investigates whether a visual stimulus can relocate a tactile stimulus to a different location, using principles from the rubber hand illusion and mirror therapy. We demonstrate that a displaced visual stimulus in virtual reality can disrupt accurate spatial perception of a physical vibrotactile sensation; however, the effects are small and require further investigation.

Virtual Reality Video Game Paired with Physical Monocular Blurring as Accessible Therapy for Amblyopia

Ocean Hurd (University of California, Santa Cruz), Sri Kurniawan (University of California, Santa Cruz), Mircea Teodorescu (University of California, Santa Cruz)

Conference

Abstract: This paper discusses a virtual reality (VR) video game we created for use as therapy for treatment of the neurological eye disorder, Amblyopia. Amblyopia is often referred to as lazy eye, and it entails weaker vision in one eye due to a poor connection between the eye and the brain. Until recently it was thought to be untreatable in adults, but new research has proven that with consistent therapy even adults can improve their Amblyopia. Even so, many Amblyopic people have been shown to have very bad compliance with therapy. Traditional Amblyopic therapies are either invasive or dull and repetitive. Our Amblyopia therapy is a fun and accessible alternative, as it entails adhering a Bangerter foil (an opaque sticker) on a VR headset to blur vision in an Amblyopic person’s dominant eye while having them play a VR video game. If they want to perform well in the video game, their brain must adapt to rely on seeing with their weaker eye, reforging that neurological connection. In crafting our game, we also studied through our user’s data what visual and kinetic components were more effective therapeutically. Our findings from this study generally show positive results, implying that visual acuity increases with 45 minutes of therapy in adults. Amblyopia has many negative symptoms including poor depth perception, so this therapy could be life changing for those who have it, particularly adults.

Using Virtual Reality and Gamification Within Procedural Generated Environments to Increase Motivation During Gait Rehabilitation

Florian Kern (Department of Computer Science, HCI Group), Carla Winter (Department of Psychology I, Biological Psychology, Clinical Psychology and Psychotherapy), Dominik Gall (University of Würzburg), Ivo Käthner (University of Würzburg), Paul Pauli (Department of Psychology I, Biological Psychology, Clinical Psychology and Psychotherapy), Marc Erich Latoschik (Department of Computer Science, HCI Group)

Conference

Abstract: Virtual Reality (VR) technology provides promising and novel ways to enhance traditional treadmill-based gait rehabilitation programs. VR approaches have already been shown to improve patients' gait and balance control, generate higher motivation, promote better task focus, and improve attitude in comparison to traditional training. This paper introduces a novel individualized and gamified VR gait rehabilitation system which includes a head-mounted display and motion sensors to track the patients' velocity and movements. The approach increases motivation by an engaging storyline in combination with a gamified rewarding system. We used procedural content generation to generate unique landscapes according to the individual goals and walking abilities. Additionally, a social companion accompanies users during their training sessions and gives positive feedback for reaching certain walking distances. We evaluated the system for its overall usability and specific motivational properties with 37 healthy participants. Participants reported greater enjoyment, felt more competent and experienced higher decision freedom and task meaningfulness in the VR condition. Further, participants experienced significantly lower physical demand, simulator sickness, and state anxiety and felt less pressured while still perceiving a higher personal performance. Based on these results, we discuss design implications for future applications in gait rehabilitation.


Session 20: Avatars

Wednesday, March 27th, 10:15 - 11:30, Room A

Chair: Amy Banic

The Effect of Hand Size and Interaction Modality on the Virtual Hand Illusion

Lorraine Lin (Clemson University), Aline Normoyle (Venturi Labs, LLC), Alexandra Adkins (Clemson University), Yu Sun (Clemson University), Andrew Robb (Clemson University), Yuting Ye (Oculus), Massimiliano Di Luca (Oculus), Sophie Joerg (Clemson University)

Conference

Abstract: Most commercial virtual reality applications with self avatars provide users with a “one-size fits all” avatar. While the height of this body may be scaled to the user’s height, other body proportions, such as limb length and hand size, are rarely customized to fit an individual user. Prior research has shown that mismatches between a user’s avatar and their actual body can affect size perception and feelings of body ownership. In this paper, we consider how concepts related to the virtual hand illusion, user experience, and task efficiency are influenced by variations between the size of a user’s actual hand and their avatar’s hand. We also consider how using a tracked controller or tracked gestures affect these concepts. We conducted a 2x3 within-subjects study (n=20), with two levels of input modality: using tracked finger motion vs. a hand-held controller (glove vs. controller), and three levels of hand scaling (small, fit, and large). Participants completed 2 block-assembly trials for each condition (for a total of 12 trials). Time, mistakes, and a user experience survey were recorded for each trial. Participants experienced stronger feelings of ownership and realism in the glove condition while efficiency was higher in the controller condition. We did not find enough evidence for a change in the intensity of the virtual hand illusion depending on the hand size. Over half of the participants indicated preferring using the gloves instead of controllers, mentioning fun and efficiency as factors in their choices. Preferences on hand scaling were mixed but often attributed to efficiency. Participants liked the appearance of their virtual hand more while using the fit instead of large hands. Several interaction effects were observed between input modality and hand scaling, for example, for smaller hands, tracked hands evoked stronger feelings of ownership compared to using a controller. Our results show that the virtual hand illusion is stronger when participants are able to control a hand directly rather than with a hand-held device, and that the virtual reality task must first be considered to determine which modality and hand size are the most applicable.

Virtual Hand Realism Affects Object Size Perception in Body-Based Scaling

Nami Ogawa (The University of Tokyo), Takuji Narumi (the University of Tokyo), Michitaka Hirose (The University of Tokyo)

Conference

Abstract: How does the representation of an embodied avatar influence the way in which one perceives the scale of a virtual environment? In virtual reality, it is common to embody avatars of various appearances, from abstract to realistic. It is known that changes in the realism of virtual hands affect self-body perception, including body ownership. However, the influence of self-avatar realism on the perception of non-body objects has not been investigated. Considering the theory that the scale of the external environment is perceived relative to the size of one's body (body-based scaling), it can be hypothesized that the realism of an avatar affects not only body ownership but also the fidelity of our own body as a metric. Therefore, this study examines how avatar realism affects perceived object sizes as the size of the virtual hand changes. In the experiment, we manipulate the level of realism (realistic, iconic, and abstract) and size (veridical and enlarged) of the virtual hand and measure the perceived size of a graspable cube. The results show that the size of the cube is perceived to be smaller when the virtual hand is enlarged, indicating that participants perceive the sizes of objects based on the size of the avatar representation, but only in the case of the highly realistic hand. Our findings indicate that the more realistic the avatar, the stronger the sense of embodiment (including body ownership) it provides, which fosters scaling the size of objects using the size of the body representation as a fundamental metric. This provides evidence that self-avatar appearance affects how we perceive not only virtual bodies themselves, but also virtual spaces.

Reconciling Being in-Control vs Being Helped for the Execution of Complex Movements in VR

Thibault Porssut (Ecole Polytechnique Fédérale de Lausanne), Bruno Herbelin (École polytechnique fédérale de Lausanne), Ronan Boulic (EPFL)

Conference

Abstract: Performing motor tasks in virtual environments is best achieved with motion capture and animation of a 3D character that participants control in real time and perceive as being their avatar in the virtual environment. A strong Sense of Embodiment (SoE) for the virtual body not only relies on the feeling that the virtual body is their own (body ownership), but also that the virtual body moves in the world according to their will and replicates precisely their body movement (sense of agency). The inspiration for the present study comes from the conviction that embodied interaction in VR could be particularly beneficial for motor rehabilitation. Within that frame of mind, our specific aim is to provide subjects with the ability to execute (in VR) a movement that is normally difficult or impossible to execute precisely (in real life). More specifically, the experimental task consists in following a target with the hand. However, this target is animated using non-biological motion, necessarily leading to systematic errors by the participants. The challenge here is to introduce a subtle distortion between the position of the real hand and the position of the virtual hand, so that the virtual hand succeeds in performing the task while still letting subjects believe they are fully in control. Results of two experiments (N=16) show that our implementation of a distortion function, which we name the attraction well, successfully led participants to report being in control of the movement (agency) and being embodied in the avatar (body ownership) even when the distortion was above a threshold that they could detect. Furthermore, a progressive introduction of the distortion (starting without help, and introducing distortion on the go) could further increase its acceptance.
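
The exact distortion function is the authors'; the sketch below only illustrates the general idea of an "attraction well" that pulls the rendered hand toward the target with a strength that fades with distance, using assumed parameter values.

    import numpy as np

    def attraction_well(real_hand, target, radius=0.15, max_gain=0.6):
        """Return the displayed virtual-hand position (positions in meters)."""
        real_hand, target = np.asarray(real_hand, float), np.asarray(target, float)
        offset = target - real_hand
        dist = np.linalg.norm(offset)
        if dist >= radius:                         # outside the well: no assistance
            return real_hand
        gain = max_gain * (1.0 - dist / radius)    # stronger pull near the target
        return real_hand + gain * offset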

The Influence of Size in Augmented Reality Telepresence Avatars

Michael Walker (University of Colorado Boulder), Daniel Szafir (University of Colorado Boulder), Irene Rae (Google)

Conference

Abstract: In this work, we explore how advances in augmented reality technologies are creating a new design space for long-distance telepresence communication through virtual avatars. Studies have shown that the relative size of a speaker has a significant impact on many aspects of human communication, including perceived dominance and persuasiveness. Our system synchronizes the body pose of a remote user with a realistic, virtual human avatar visible to a local user wearing an augmented reality head-mounted display. We conducted a two-by-two (relative system size: equivalent vs. small; leader vs. follower), between-participants study (N = 40) to investigate the effect of avatar size on the interactions between remote and local users. We found the equal-sized avatars to be significantly more influential than the small-sized avatars, and the small avatars commanded significantly less attention than the equal-sized avatars. Additionally, we found that the assigned leadership role significantly impacted participants' subjective satisfaction with the task outcome.

The Effect of Avatar Appearance on Social Presence in an Augmented Reality Remote Collaboration

Boram Yoon (UVR Lab, KAIST), Hyung-il Kim (KAIST), Gun Lee (University of South Australia), Mark Billinghurst (University of South Australia), Woontack Woo (KAIST)

Conference

Abstract: This paper investigates the effect of avatar appearance on a user's Social Presence and perception in an Augmented Reality telepresence system. Despite the development of various commercial 3D telepresence systems, there has been little evaluation and discussion of the appearance of the collaborator avatars that are used. We conducted a pair of user studies comparing the effect of avatar appearances with three levels of body part visibility (head & hands, upper body, and whole body) and two different rendering styles (realistic and cartoon-like) on Social Presence while performing two different remote collaboration tasks. We found that a realistic whole-body avatar was perceived as being best for remote collaboration, but an upper body could be considered as a substitute depending on the collaboration context. There was no difference in the users' perceived Social Presence between realistic and cartoon-style avatars, even though users felt differently about the two types of avatar during the AR-based remote collaboration. We discuss these results and suggest guidelines for designing future avatar-mediated AR remote collaboration systems.


Session 21: Displays 2

Wednesday, March 27th, 10:15 - 11:30, Room B

Chair: Daisuke Iwai

Large-Scale Projection-Based Immersive Display: The Design and Implementation of LargeSpace

Hikaru Takatori (University of Tsukuba), Masashi Hiraiwa (University of Tsukuba), Hiroaki Yano (University of Tsukuba), Hiroo Iwata (University of Tsukuba)

Conference

Abstract: In this paper we introduce LargeSpace, the world’s largest immersive display, and discuss the principles of its design. To clarify the design of large-scale projection-based immersive displays, we address the optimum screen shape, projection approach, and arrangement of projectors and tracking cameras. In addition, a novel distortion correction method for panoramic stereo rendering is described. The method can be applied to any projection-based immersive display with any screen shape, and can generate real-time panoramic-stereoscopic views from the viewpoints of tracked participants. To validate the design principles and the rendering algorithm, we implement the LargeSpace and confirm that the method can generate the correct perspective from any position inside the screen viewing area. We implement several applications and show that large-scale immersive displays can be used in the fields of art and experimental behavior analysis.

The Effect of Focal Distance, Age, and Brightness on Near-Field Augmented Reality Depth Matching

Gurjot Singh, Stephen R. Ellis, J. Edward Swan II

TVCG-Invited

Abstract: Many augmented reality (AR) applications operate within near-field reaching distances, and require matching the depth of a virtual object with a real object. The accuracy of this matching was measured in three experiments, which examined the effect of focal distance, age, and brightness, within distances of 33.3 to 50 cm, using a custom-built AR haploscope. Experiment I examined the effect of focal demand, at the levels of collimated (infinite focal distance), consistent with other depth cues, and at the midpoint of reaching distance. Observers were too young to exhibit age-related reductions in accommodative ability. The depth matches of collimated targets were increasingly overestimated with increasing distance, consistent targets were slightly underestimated, and midpoint targets were accurately estimated. Experiment II replicated Experiment I, with older observers. Results were similar to Experiment I. Experiment III replicated Experiment I with dimmer targets, using young observers. Results were again consistent with Experiment I, except that both consistent and midpoint targets were accurately estimated. In all cases, collimated results were explained by a model, where the collimation biases the eyes' vergence angle outwards by a constant amount. Focal demand and brightness affect near-field AR depth matching, while age-related reductions in accommodative ability have no effect.
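
The stated constant-bias model can be written in standard vergence geometry roughly as below; the symbols and the exact form are our paraphrase, not the paper's notation.

    \[
      \theta(d) = 2\arctan\!\left(\frac{I}{2d}\right),
      \qquad
      \hat{d} = \frac{I}{2\tan\!\bigl((\theta(d) - \Delta)/2\bigr)},
    \]

    where I is the interpupillary distance, \theta(d) the vergence demanded by a target at distance d, \Delta the constant outward bias introduced by collimation, and \hat{d} the resulting depth match; because \theta shrinks with distance, a fixed \Delta produces increasingly large overestimation, consistent with the reported results.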

Towards Eye-Friendly VR: How Bright Should It Be?

Khrystyna Vasylevska (TU Wien), Hyunjin Yoo (IRYStec), Tara Akhavan (IRYStec Software Inc.), Hannes Kaufmann (TU Wien)

Conference

Abstract: Visual information plays an important part in the perception of the world around us. Recently, head-mounted displays (HMDs) have come to the consumer market and become a part of the everyday life of thousands of people. As with desktop screens and hand-held devices before, the public is concerned about the possible health consequences of prolonged usage and questions the adequacy of the default settings. It has been shown that the brightness and contrast of a display should be adjusted to match the external light to decrease eye strain and other symptoms. Currently, there is a noticeable mismatch in brightness between the screen and the dark background of an HMD that might cause eye strain, insomnia, and other unpleasant symptoms. In this paper, we explore the possibility of significantly lowering the screen brightness in the HMD and successfully compensating for the loss of visual information on a dimmed screen. We designed a user study to explore the connection between screen brightness in the HMD and task performance, cybersickness, users' comfort, and preferences. We tested three levels of brightness: the default Full Brightness, the optional Night Mode, and a significantly lower brightness with original content and with compensated content. Our results suggest that although users still prefer the brighter setting, HMDs can be successfully used with significantly lower screen brightness, especially if the low screen brightness is compensated.

The Effect of Narrow Field of View and Information Density on Visual Search Performance in Augmented Reality

Christina Trepkowski (Institute of Visual Computing), Tom David Eibich (University of Applied Sciences Bonn-Rhein-Sieg), Jens Maiero (Institute of Visual Computing), Alexander Marquardt (Institute of Visual Computing), Ernst Kruijff (University of Applied Sciences Bonn-Rhein-Sieg), Steven Feiner (Columbia University)

Conference

Abstract: Many optical see-through displays have a relatively narrow field of view. However, a limited field of view can constrain how information can be presented and searched through. To understand these constraints, we present a series of experiments that address the interrelationships between field of view, information density, and search performance. We do so by simulating various fields of view using two approaches: limiting the field of view presented on a Microsoft HoloLens optical see-through head-worn display, and dynamically changing the portion of a large tiled-display wall on which information is presented, for head-tracked users in both cases.

Implementation and Evaluation of a 50KHz, 28µs Motion-to-Pose Latency Head Tracking Instrument

Alex Blate (UNC Chapel Hill), Mary C. Whitton (UNC Chapel Hill), Andrei State (University of North Carolina at Chapel Hill), Montek Singh (UNC Chapel Hill), Gregory F. Welch (University of Central Florida), Turner Whitted (UNC Chapel Hill), Henry Fuchs (UNC Chapel Hill)

TVCG

Abstract: This paper presents the implementation and evaluation of a 50,000-pose-samples-per-second, 6-degree-of-freedom optical head tracking instrument with a motion-to-pose latency of 28µs and dynamic precision of 1-2 arcminutes. The instrument uses high-intensity infrared emitters and two duo-lateral photodiode-based optical sensors to triangulate pose. This instrument serves two purposes: it is the first step towards the requisite head tracking component in sub-100µs motion-to-photon latency optical see-through augmented reality (OST AR) head-mounted display (HMD) systems; and it enables new avenues of research into human visual perception, including measuring the thresholds for perceptible real-virtual displacement during head rotation and other human research requiring high-sample-rate motion tracking. The instrument's tracking volume is limited to about 120×120×250mm but allows for the full range of natural head rotation and is sufficient for research involving seated users. We discuss how the instrument's tracking volume is scalable in multiple ways and some of the trade-offs involved therein. Finally, we introduce a novel laser-pointer-based measurement technique for assessing the instrument's tracking latency and repeatability. We show that the instrument's motion-to-pose latency is 28µs and that it is repeatable within 1-2 arcminutes at mean rotational velocities (yaw) in excess of 500°/sec.


Session 22: Interaction Techniques

Wednesday, March 27th, 10:15 - 11:30, Room C

Chair: Doug Bowman

Towards Brain-Computer Interfaces for Augmented Reality: Feasibility, Design and Evaluation

Hakim Si-Mohammed, Jimmy Petit, Camille Jeunet, Ferran Argelaguet, Fabien Spindler, Andéol Évain, Nicolas Roussel, Géry Casiez, Anatole Lécuyer

TVCG-Invited

Abstract: Brain-Computer Interfaces (BCIs) enable users to interact with computers without any dedicated movement, bringing new hands-free interaction paradigms. In this paper we study the combination of BCI and Augmented Reality (AR). We first tested the feasibility of using BCI in AR settings based on Optical See-Through Head-Mounted Displays (OST-HMDs). Experimental results showed that a BCI and OST-HMD equipment (an EEG headset and a HoloLens in our case) are compatible and that small movements of the head can be tolerated when using the BCI. Second, we introduced a design space for command display strategies based on BCI in AR, exploiting a well-known brain pattern called the Steady-State Visually Evoked Potential (SSVEP). Our design space relies on five dimensions concerning the visual layout of the BCI menu, namely: orientation, frame-of-reference, anchorage, size and explicitness. We implemented various BCI-based display strategies and tested them within the context of mobile robot control in AR. Our findings were finally integrated within an operational prototype based on a real mobile robot that is controlled in AR using a BCI and a HoloLens headset. Taken together, our results (4 user studies, 94 participants) and our methodology could pave the way to future interaction schemes in AR exploiting 3D User Interfaces based on brain activity and BCIs.

Do Head-Mounted Display Stereo Deficiencies Affect 3D Pointing Tasks in AR and VR?

Anil Ufuk Batmaz (Simon Fraser University), Mayra Donaji Barrera Machuca (Simon Fraser University), Duc-Minh Pham (Simon Fraser University), Wolfgang Stuerzlinger (Simon Fraser University)

Conference

Abstract: AR and VR headsets use stereoscopic displays to show virtual objects in 3D. However, the limitations of current stereo display systems affect depth perception through conflicting depth cues, which then also affects virtual hand interaction in peri-personal space, i.e., within arm’s reach. We performed a Fitts’ law experiment to better understand the impact of stereo display deficiencies of AR and VR headsets on pointing at close-by targets arranged laterally or along the line of sight. According to our results, the movement direction and the corresponding change in target depth affects pointing time and throughput; subjects’ movements towards/away from their head were slower and less accurate than their lateral movements (left/right). However, even though subjects moved faster in AR, we did not observe a significant difference for pointing performance between AR and VR headsets, which means that previously identified differences in depth perception between these platforms have no strong effect on interaction. Our results also help 3D user interface designers understand how changes in target depth affect users’ performance in different movement directions in AR and VR.
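
For reference, the standard Shannon/ISO 9241-9 formulation of Fitts' law and throughput commonly used in such studies is given below; the paper's exact computation may differ.

    \[
      ID = \log_2\!\left(\frac{D}{W} + 1\right),
      \qquad
      W_e = 4.133\,\sigma,
      \qquad
      TP = \frac{ID_e}{MT},
    \]

    where D is the distance to the target, W its width, MT the movement time, \sigma the standard deviation of the selection endpoints, and ID_e the index of difficulty recomputed with the effective width W_e.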

Augmented Reality Map Navigation with Freehand Gestures

Kadek Ananta Satriadi (Monash University), Barrett Ens (Monash University), Maxime Cordeil (Monash University), Tobias Czauderna (Monash University), Wesley Willett (University of Calgary), Bernhard Jenny (Monash University)

Conference

Abstract: Mid-air hand gesture interaction has long been proposed as a ‘natural’ input method for Augmented Reality (AR) applications, yet it has been little explored for intensive applications like multiscale navigation. In multiscale navigation, such as digital map navigation, pan and zoom are the predominant interactions. A position-based input mapping (e.g. the grabbing metaphor) is intuitive for such interactions, but is prone to arm fatigue. This work focuses on improving digital map navigation in AR with mid-air hand gestures, using a horizontal intangible map display. First, we conducted a user study to explore the effects of handedness (unimanual and bimanual) and input mapping (position-based and rate-based). From these findings we designed DiveZoom and TerraceZoom, two novel hybrid techniques that smoothly transition between position- and rate-based mappings. A second user study evaluated these designs. Our results indicate that the introduced input-mapping transitions can reduce perceived arm fatigue with limited impact on performance.
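
As a rough illustration of the two base mappings being compared (not the authors' code), a single pan axis could be driven as in the sketch below; the gain values and function names are assumptions.

    POSITION_GAIN = 1.0    # meters of map motion per meter of hand motion (grabbing)
    RATE_GAIN = 0.5        # meters/second of map motion per meter of hand offset

    def position_based_pan(map_x, hand_delta_x):
        """Grabbing metaphor: map displacement directly follows hand displacement."""
        return map_x + POSITION_GAIN * hand_delta_x

    def rate_based_pan(map_x, hand_offset_x, dt):
        """Joystick-like: the hand's offset from a reference point sets pan velocity."""
        return map_x + RATE_GAIN * hand_offset_x * dt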

Get a Grip! Introducing Variable Grip for Controller-Based VR Systems

Michael Bonfert (University of Bremen), Robert Porzel (Bremen University), Rainer Malaka (University of Bremen)

Conference

Abstract: We propose an approach to facilitate adjustable grip for object interaction in virtual reality. It enables the user to handle objects with loose and firm grip using conventional controllers. Pivotal design properties were identified and evaluated in a qualitative pilot study. Two revised interaction designs with variable grip were compared to the status quo of invariable grip in a quantitative study. The users performed placing actions with all interaction modes. Performance, clutching, task load, and usability were measured. While the handling time increased slightly using variable grip, the usability score was significantly higher. No substantial differences were measured in positioning accuracy. The results lead to the conclusion that variable grip can be useful and improve realism depending on tasks, goals, and user preference.

The effect of elastic feedback on the perceived user experience and presence of travel methods in immersive environments

Tobias Günther (Technische Universität Dresden), Lars Engeln (Technische Universität Dresden), Sally Julie Busch (Technische Universität Dresden), Rainer Groh (Technische Universität Dresden)

Conference

Abstract: Mid-air interaction achieved through hand tracking or a VR controller in Virtual Reality (VR) is performed without an explicit spatial reference. One solution for creating such a reference is elastic feedback, which we realized via tension springs attached to a tracked VR controller. Thereby, a directional force is exerted and perceived by the user. A study was conducted to investigate the effects of elastic feedback on users in travel tasks. The elastic mode of the input device was compared to an isotonic mode without any perceivable force. Two virtual travel methods, a driving mode and a flight mode, were also investigated in the study. The main goal of the study was to analyse how elastic feedback influences user experience and presence perception. Statistical analysis with ANOVA of data from 24 subjects revealed significant main effects for the elastic mode in both the User Experience Questionnaire (UEQ) and the Igroup Presence Questionnaire (IPQ). In addition, the subjects showed better performance in completing the tasks. However, analysis of task completion times indicated no relevant differences. Although not all individual scales of the UEQ and IPQ showed significant outcomes, the results indicate that immersive travel interfaces could benefit from the use of elastic feedback.


Session 23: Perception

Wednesday, March 27th, 10:15 - 11:30, Room D

Chair: Guillaume Moreau

Enactive approach to assess perceived speed error during walking and running in virtual reality

Théo Perrin (Univ Rennes, Inria), Hugo A. Kerhervé (M2S - EA 7470, Univ Rennes), Charles Faure (Univ Rennes, Inria), Anthony Sorel, Benoit Bideau (M2S - EA 7470, Univ Rennes, Inria), Richard Kulpa (Univ Rennes, Inria)

Conference

Abstract: The recent development of virtual reality (VR) devices such as head-mounted displays (HMDs) increases opportunities for applications at the confluence of physical activity and gaming. Recently, the fields of sport and fitness have turned to VR, including for locomotor activities, to enhance motor and energetic resources as well as motivation and adherence. For example, VR can provide visual feedback during treadmill running, thereby reducing monotony and increasing the feeling of movement and engagement with the activity. However, the relevance of using VR tools during locomotion depends on the ability of these systems to provide natural immersive feelings, specifically a coherent perception of speed. The objective of this study is to estimate the error between actual and perceived locomotor speed in a VE using an enactive approach, i.e. allowing active control of the environment. Sixteen healthy individuals participated in the experiment, which consisted of walking and running on a motorized treadmill at speeds ranging from 3 to 11 km/h in 0.5 km/h increments, in a randomized order, while wearing an HMD (HTC Vive) displaying a virtual racetrack. Participants were instructed to match the VE speed with what they perceived was their actual locomotion speed (LS), using a handheld Vive controller. They were able to modify the optic flow speed (OFS) with a 0.02 km/h increment/decrement accuracy. An optic flow multiplier (OFM) was computed based on the error between OFS and LS. It represents the gain that exists between the visually perceived speed and the real locomotion speed experienced by participants for each trial. Across all conditions, the average OFM was 1.00 ± 0.25 to best match LS. This finding is at odds with previous works reporting an underestimation of speed perception in VR. It could be explained by the use of an enactive approach allowing an active and accurate matching of visually and proprioceptively perceived speeds by participants. Above all, our study showed that the perception of speed in VR is strongly individual, with some participants always overestimating and others constantly underestimating. Therefore, a general OFM should not be used to correct speed in VEs to ensure congruence in speed perception, and we recommend the use of individual models when setting up locomotion-based VR applications.
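
As we read the abstract, the optic flow multiplier is the gain relating the matched optic-flow speed to the treadmill speed; the formulation below is our paraphrase, not the authors' exact computation.

    \[
      \mathrm{OFM} = \frac{\mathrm{OFS}}{\mathrm{LS}},
    \]

    so OFM = 1 means the selected optic-flow speed equals the physical treadmill speed, values above 1 mean faster-than-physical optic flow was needed to feel natural, and values below 1 the opposite.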

Perceptually Based Adaptive Motion Retargeting to Animate Real Objects by Light Projection

Taiki Fukiage (NTT Communication Science Laboratories), Takahiro Kawabe (NTT Communication Science Laboratories), Shinya Nishida (NTT CS Labs)

TVCG

Abstract: A recently developed light projection technique can add dynamic impressions to static real objects without changing their original visual attributes such as surface colors and textures. It produces illusory motion impressions in the projection target by projecting gray-scale motion-inducer patterns that selectively drive the motion detectors in the human visual system. Since a compelling illusory motion can be produced by an inducer pattern weaker than necessary to perfectly reproduce the shift of the original pattern on an object’s surface, the technique works well under bright environmental light conditions. However, determining the best deformation sizes is often difficult: When users try to add a large deformation, the deviation in the projected patterns from the original surface pattern on the target object becomes apparent. Therefore, to obtain satisfactory results, they have to spend much time and effort to manually adjust the shift sizes. Here, to overcome this limitation, we propose an optimization framework that adaptively retargets the displacement vectors based on a perceptual model. The perceptual model predicts the subjective inconsistency between a projected pattern and an original one by simulating responses in the human visual system. The displacement vectors are adaptively optimized so that the projection effect is maximized within the tolerable range predicted by the model. We extensively evaluated the perceptual model and optimization method through a psychophysical experiment as well as user studies.

PeriText: Utilizing Peripheral Vision for Reading Text on Augmented Reality Smart Glasses

Pin Sung Ku (National Taiwan University), Yi-Hao Peng (National Taiwan University), Yu-Chih Lin (National Taiwan University), Mike Y. Chen (National Taiwan University)

Conference

Abstract: Augmented Reality (AR) provides real-time information by superimposing virtual information onto users' view of the real world. Our work is the first to explore how peripheral vision, instead of central vision, can be used to read text on AR and smart glasses. We present PeriText, a multiword reading interface using rapid serial visual presentation (RSVP). This enables users to observe the real world using central vision, while using peripheral vision to read virtual information. We first conducted a lab-based study to determine the effect of different text transformations by comparing reading efficiency among 3 capitalization schemes, 2 font faces, 2 text animation methods, and 3 different numbers of words for the RSVP paradigm. We found that title-case capitalization, a sans-serif font, and word-wise typewriter animation with a multiword RSVP display resulted in better reading efficiency, which together formed our PeriText design. Another lab-based study followed, investigating the performance of PeriText against control text, and the results showed significantly better performance. Finally, we conducted a field study to collect user feedback while using PeriText in real-world walking scenarios, and all users reported a preference for 5° eccentricity over 8°.

Text Presentation for Augmented Reality Applications in Dual Task Situations

Elisa Maria Klose (University of Kassel), Nils Adrian Mack (University of Kassel), Jens Hegenberg (University of Kassel), Ludger Schmidt (University of Kassel)

Conference

Abstract: We investigated how the performance in three tasks is affected when people simultaneously read text in augmented reality (AR) glasses. Furthermore, we varied the placement of the displayed text and investigated the effects of different positions on both primary task and reading performance. The three given tasks were a visual stimulus-response task (VSRT), a simple walking task (WS) and a walking obstacle course (WO). We propose a novel body-locked text placement style for AR text presentation and compare it to head-locked text placement each in two heights (top and bottom). The AR reading task (ARR) affected performance in all three tasks. Further, reading speed was affected by simultaneous task execution of all three tasks. The walking tasks affected AR reading speed more than the VSRT. AR text placement positions had different effects on task performance and preference for the different tasks, indicating an interaction effect between task and text placement. The results demonstrate that it is necessary to consider the context of use carefully when designing AR text visualization. Furthermore, a subjective preference for the body-locked text presentation – overall and during the WO task – contrasts its worse performance for several parameters compared to the display-locked presentation style. The presented study with 12 participants provides insights into the cognitive effects of AR glasses usage in dual task situations and can support design decisions.

Perception of Volumetric Characters’ Eye-Gaze Direction in Head-Mounted Displays

Andrew MacQuarrie (University College London), Anthony Steed (University College London)

Conference

Abstract: Volumetric capture allows the creation of near-video-quality content that can be explored with six degrees of freedom. Due to limitations in these experiences, such as the content being fixed at the point of filming, an understanding of eye-gaze awareness is critical. A repeated measures experiment was conducted that explored users’ ability to evaluate where a volumetrically captured avatar (VCA) was looking. Wearing a head-mounted display (HMD), 36 participants rotated a VCA to look at a target. The HMD resolution, target position, and VCA’s eye-gaze direction were varied. Results did not show a difference in accuracy between HMD resolutions, while the task became significantly harder as target locations diverged from mutual gaze. In contrast to real-world studies, participants consistently misjudged eye-gaze direction based on target location, but not based on the avatar’s head turn direction. Implications are discussed, as results for VCAs viewed in HMDs appear to differ from face-to-face scenarios.


Session 24: Medical Applications Training

Wednesday, March 27th, 14:15 - 15:30, Room A

Chair: Rick Skarbez

Immersive Virtual Colonoscopy

Seyedkoosha Mirhosseini (Stony Brook University), Ievgenia Gutenko (Stony Brook University), Sushant Ojal (Stony Brook University), Joseph Marino (Stony Brook University), Arie Kaufman (Stony Brook University)

TVCG

Abstract: Virtual colonoscopy (VC) is a non-invasive screening tool for colorectal polyps which employs volume visualization of a colon model reconstructed from a CT scan of the patient’s abdomen. We present an immersive analytics system for VC which enhances and improves the traditional desktop VC through the use of VR technologies. Our system, using a head-mounted display (HMD), includes all of the standard VC features, such as the volume rendered endoluminal fly-through, measurement tool, bookmark modes, electronic biopsy, and slice views. The use of VR immersion, stereo, and wider field of view and field of regard has a positive effect on polyp search and analysis tasks in our immersive VC system, a volumetric-based immersive analytics application. Navigation includes enhanced automatic speed and direction controls, based on the user’s head orientation, in conjunction with physical navigation for exploration of local proximity. In order to accommodate the resolution and frame rate requirements for HMDs, new rendering techniques have been developed, including mesh-assisted volume raycasting and a novel lighting paradigm. Feedback and further suggestions from expert radiologists show the promise of our system for immersive analysis for VC and encourage new avenues for exploring the use of VR in visualization systems for medical diagnosis.

ICthroughVR: Illuminating Cataracts through Virtual Reality

Katharina Krösl (TU Wien), Carmine Elvezio (Columbia University), Matthias Hürbe (TU Wien), Sonja Karst (Medical University Vienna), Michael Wimmer (TU Wien), Steven Feiner (Columbia University)

Conference

Abstract: Complex vision impairments, such as cataracts, affect the way many people interact with their environment, yet are rarely considered by architects and lighting designers because of a lack of design tools. To address this, we present a method to simulate vision impairments, in particular cataracts, graphically in virtual reality (VR), using eye tracking for gaze-dependent effects. We also conduct a VR user study to investigate the effects of lighting on visual perception for users with cataracts. In contrast to existing approaches, which mostly provide only simplified simulations and are primarily targeted at educational or demonstrative purposes, we account for the user’s vision and the hardware constraints of the VR headset. This makes it possible to calibrate our cataract simulation to the same level of degraded vision for all participants. The results of our study show that we are able to calibrate the vision of all our participants to a similar level of impairment, that maximum recognition distances for escape route signs with simulated cataracts are significantly smaller than without, and that luminaires that are visible in the field of view are perceived as especially disturbing due to the glare effects they create.

Efficacy Study on Interactive Mixed Reality (IMR) Software with Sepsis Prevention Medical Education

Naveen Kumar Sankaran, Harris J Nisar, Ji Zhang, Kyle Formella, Jennifer Amos, Lisa T. Barker, John Vozenilek, Steven M. LaValle, Thenkurussi Kesavadas

Conference

Abstract: In recent years, the training of novice medical professionals with simulated environments such as Virtual Reality (VR) and Augmented Reality (AR) has increased dramatically. However, the usability of these technologies is limited due to the complexity involved in creating the clinical content. To be comparable to the clinical environment, the simulation platform should include real-world learning parameters such as patient physiology, emotions, and clinical team behaviors. Incorporating such non-deterministic parameters has historically required medical faculty to possess advanced programming skills. We address this challenge through a software platform that simplifies the creation of Interactive Mixed Reality (IMR) scenarios. Three educational components were embedded into the IMR scenario: 1) integrated 360-degree video recording of the clinical encounter to provide a first-person perspective, 2) rich annotated knowledge content, and 3) an assessment questionnaire. A sepsis prevention training scenario was developed using the IMR software to demonstrate the potential of enhancing simulated medical training by accelerating clinical exposure for novice students. An IRB-approved user feedback study was conducted with 28 participants to evaluate the efficacy of the IMR software. The participants provided their feedback by answering demographics, NASA-TLX, and System Usability Scale questionnaires. The NASA-TLX results recorded a high performance score for the software, while mental demand, physical demand, temporal demand, frustration level, and effort were not high. The system usability study shows that the majority of the participants agreed that the IMR system was relaxing and easy to understand. The user feedback from this study provides evidence that the IMR software is a strong tool for medical education.

Toward Virtual Stress Inoculation Training of Prehospital Healthcare Personnel: A Stress-Inducing Environment Design and Investigation of an Emotional Connection Factor

Mores Prachyabrued (Mahidol University), Disathon Wattanadhirach (Mahidol University), Richard Bartley Dudrow (Mahidol University), Nat Krairojananan (Phramongkutklao Hospital), Pusit Fuengfoo (Phramongkutklao Hospital)

Conference

Abstract: Prehospital emergency healthcare personnel are responsible for finding, rescuing, and taking prehospital care of emergency patients. They are regularly exposed to stressful and traumatic lifesaving situations. The stress involved can impact their performance and can cause mental disorders in the long term. Stress inoculation training (SIT) inoculates individuals to potential stressors by letting them practice stress-coping skills in a controlled environment. Our work explores a story-driven stressful virtual environment design that can potentially be used for SIT in the new context of emergency healthcare personnel. Users role-play a first-time emergency worker on a rescue mission. The interactive storytelling is designed to engage users and elicit strong emotional responses, and follows the three-act structure commonly found in films and video games. To understand the stress-inducing and sense of presence qualities of our approach including the previously untested impact of an emotional connection factor, we conduct a between-subjects experiment involving 60 subjects. Results show that the approach successfully induces stress by increasing heart rate, galvanic skin response, and subjective stress rating. Questionnaire results indicate positive presence. One subject group engages in an initial friendly conversation with a virtual co-worker to establish an emotional connection. Another group includes no such conversation. The group with the emotional connection shows higher physiological stress levels and more occurrences of subject behaviors reflecting presence. Medical experts review our approach and suggest several applications that can benefit from its stress inducing ability.


Session 25: Navigation

Wednesday, March 27th, 14:15 - 15:30, Room B

Chair: Regis Kopper

Virtual vs. Physical Navigation in VR: Study of Gaze and Body Segments Temporal Reorientation Behaviour

Hugo Brument (Inria), Iana Podkosova (TU Wien), Hannes Kaufmann (Vienna University of Technology), Anne-Hélène Olivier (University of Rennes 2), Ferran Argelaguet Sanz (Inria)

Conference

Abstract: In this paper, we investigated whether the body anticipation synergies observed in real environments (REs) are preserved during navigation in virtual environments (VEs). Experimental studies on the control of human locomotion along curved trajectories in REs report a top-down reorientation strategy, with the reorientation of the gaze anticipating the reorientation of the head, the shoulders, and finally the global body motion. This anticipation behavior provides a stable reference frame for the walker to control and reorient the whole body according to the future direction. To assess body anticipation during navigation in VEs, we conducted an experiment where participants, wearing a head-mounted display, were asked to perform a lemniscate trajectory in a virtual environment (VE) using five different navigation techniques, including walking, virtual steering (head, hand or torso steering) and passive navigation. For the purpose of this experiment, we designed a new control law based on the power-law relation between speed and curvature during human walking. Taken together, our results showed a similar ordered top-down sequence of reorientation of the gaze, head and shoulders during curved trajectories between walking in REs and in VEs (for all the evaluated techniques). However, this anticipation mechanism differs significantly between physical walking in the VE, where the anticipation is higher, and the other virtual navigation techniques. The results presented in this paper pave the way to a better understanding of the underlying mechanisms of human navigation in VEs and to the design of navigation techniques better adapted to humans.
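
The control law mentioned above ties forward speed to path curvature through a power law; the classic value reported for human locomotion is the one-third power law, which is assumed in the sketch below. The gain, exponent, and speed cap are illustrative and do not reproduce the paper's parameters.

```python
import numpy as np

def speed_from_curvature(kappa, gain=1.0, beta=-1/3, v_max=1.4):
    """Speed profile from path curvature using a power law v = gain * kappa**beta.
    beta = -1/3 is the classic one-third power law reported for human
    locomotion; the paper's exact parameters are not given here."""
    kappa = np.maximum(np.abs(kappa), 1e-6)   # avoid division by zero on straight segments
    return np.minimum(gain * kappa ** beta, v_max)

# Curvature along a lemniscate-like path (hypothetical values, in 1/m).
kappa = np.array([0.05, 0.2, 0.8, 1.5, 0.8, 0.2, 0.05])
print(speed_from_curvature(kappa))
```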

User-Centered Extension of Locomotion Typology: Body-Based Sensorial Cues of Different Locomotion Modes to predict Spatial Learning

Carolin Wienrich (University Würzburg), Nina Döllinger (Universität Würzburg), Simon Kock, Klaus Gramann (TECHNISCHE UNIVERSITAET BERLIN (TU))

Conference

Abstract: When human operators can locomote actively in virtual environments (VEs), the locomotion often has to be adapted to the limited dimensions of the physical space. This, however, might lead to a conflict between sensory information originating from user movements and sensory feedback provided through the virtual locomotion. To investigate whether different locomotion strategies that adapt the limited physical space to the desired virtual dimensions impact the cognitive processing of the user in VEs, two experiments were conducted. The first used walking in place and the second used scale of locomotion to investigate the impact of locomotion adaptation on the acquisition of spatial knowledge and user experience. We systematically analyzed body-based sensorial conflicts for the different adaptation strategies and found that neither walking in place nor scale of locomotion impacts spatial knowledge acquisition or user experience. We conclude that visual cues indicating locomotion, combined with body-based rotational cues, seem to be sufficient for the acquisition of spatial knowledge, and that locomotion with controllers seems efficient and preferable for users. A further contribution of our study is linking system-driven descriptions (like the typology of [1]) with human-centered factors, which might guide the testing of locomotion techniques in virtual environments in future studies.

Jumping Further: Forward Jumps in a Gravity-reduced Immersive Virtual Environment

HyeongYeop Kang (Korea University), Geonsun Lee (Korea University), Dae Seok Kang (KITECH), Ohung Kwon (KITECH), Jun Yeup Cho (Korea University), Ho-Jung Choi (KITECH), JungHyun Han (Korea University)

Conference

Abstract: In a cable-driven suspension system developed to simulate the reduced gravity of lunar or Martian surfaces, we propose to manipulate/reduce the physical cues of forward jumps so as to overcome the limited workspace problem. The physical cues should be manipulated in a way that the discrepancy from the visual cues provided through the HMD is not noticeable by users. We identified the extent to which forward jumps can be manipulated naturally. We combined it with visual gains, which can scale visual cues without being noticed by users. The test results obtained in a prototype application show that we can use both trajectory manipulation and visual gains to overcome the spatial limit. We also investigated the user experiences when making significantly high and far jumps. The results will be helpful in designing astronaut-training systems and various VR entertainment content.

Occlusion Management in VR: A Comparative Study

Lili Wang (Beihang University), Han Zhao (Beihang University), Zesheng Wang (Beihang University), Jian Wu (Beihang University), Bingqiang Li (Beihang University), Zhiming He (Beihang University), Voicu Popescu (Purdue University)

Conference

Abstract: VR applications rely on the user’s ability to explore the virtual scene efficiently. In complex scenes, occlusions limit what the user can see from a given location, and the user has to navigate the viewpoint around occluders to gain line of sight to the hidden parts of the scene. When the disoccluded regions prove to be of no interest, the user has to retrace their path, making scene exploration inefficient. Furthermore, the user might not be able to assume a viewpoint that would reveal the occluded regions due to physical limitations, such as obstacles in the real world hosting the VR application, viewpoints beyond the tracked area, or viewpoints above the user’s head that cannot be reached by walking. Several occlusion management methods have been proposed in visualization research, such as top view, X-ray, and multiperspective visualization, which help the user see more from the current position and have the potential to improve the exploration efficiency of complex scenes. This paper reports on a study that investigates the potential of these three occlusion management methods in the context of VR applications, compared to conventional navigation. Participants were required to explore two virtual scenes: to purchase five items in a virtual supermarket, and to find three people in a virtual parking garage. The task performance metrics were task completion time, total distance traveled, and total head rotation. The study also measured user spatial awareness, depth perception, and simulator sickness. The results indicate that users benefit from top view visualization, which helps them learn the scene layout and understand their position within the scene, but the top view does not let the user find targets easily due to occlusions in the vertical direction and the small image footprint of the targets. The X-ray visualization method worked better in the garage scene, a scene with a few big occluders and low occlusion depth complexity, and less well in the supermarket scene, a scene with many small occluders that create high occlusion depth complexity. The multiperspective visualization method achieves better performance than the top view method and the X-ray method in both scenes. There are no significant differences between the three methods and the conventional method in terms of spatial awareness, depth perception, and simulator sickness.


Session 26: Social Interactions

Wednesday, March 27th, 14:15 - 15:30, Room C

Chair: Rob Lindeman

Inferring User Intent using Bayesian Theory of Mind in Shared Avatar-Agent Virtual Environments

Sahil Narang (University of North Carolina at Chapel Hill), Andrew Best (University of North Carolina at Chapel Hill), Dinesh Manocha (University of North Carolina at Chapel Hill)

TVCG

Abstract: We present a real-time algorithm to infer the intention of a user’s avatar in a virtual environment shared with multiple human-like agents. Our algorithm applies the Bayesian Theory of Mind approach to make inferences about the avatar’s hidden intentions based on the observed proxemics and gaze-based cues. Our approach accounts for the potential irrationality in human behavior, as well as the dynamic nature of an individual’s intentions. The inferred intent is used to guide the response of the virtual agent and generate locomotion and gaze-based behaviors. Our overall approach allows the user to actively interact with tens of virtual agents from a first-person perspective in an immersive setting. We systematically evaluate our inference algorithm in controlled multi-agent simulation environments and highlight its ability to reliably and efficiently infer the hidden intent of a user’s avatar even under noisy conditions. We quantitatively demonstrate the performance benefits of our approach in terms of reducing false inferences, as compared to a prior method. The results of our user evaluation show that 68.18% of participants reported feeling more comfortable in sharing the virtual environment with agents simulated with our algorithm as compared to a prior inference method, likely as a direct result of significantly fewer false inferences and more plausible responses from the virtual agents.
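
At its core, this kind of intent inference can be framed as a recursive Bayesian update over a discrete set of candidate intents, conditioned on observed proxemic and gaze cues. The toy filter below is a generic sketch of that idea, not the paper's model; the intent labels, observation model, and transition "stickiness" are all illustrative assumptions.

```python
import numpy as np

# Candidate hidden intents of the user's avatar (hypothetical labels).
INTENTS = ["approach_agent", "pass_by", "avoid"]

def likelihood(obs, intent):
    """Toy observation model: how likely the observed proxemic/gaze cues are
    under each intent. obs = (distance_m, gazing_at_agent)."""
    distance, gazing = obs
    if intent == "approach_agent":
        return (1.0 if gazing else 0.3) * np.exp(-0.2 * distance)
    if intent == "pass_by":
        return 0.5
    return (0.2 if gazing else 0.8) * (1 - np.exp(-0.2 * distance))

def update_belief(belief, obs, stickiness=0.9):
    """One recursive Bayesian update; 'stickiness' models slowly drifting intents."""
    n = len(INTENTS)
    transition = stickiness * np.eye(n) + (1 - stickiness) / n
    predicted = transition @ belief
    posterior = predicted * np.array([likelihood(obs, i) for i in INTENTS])
    return posterior / posterior.sum()

belief = np.ones(len(INTENTS)) / len(INTENTS)
for obs in [(4.0, True), (3.0, True), (2.0, True)]:   # avatar closing in while gazing
    belief = update_belief(belief, obs)
print(dict(zip(INTENTS, belief.round(3))))
```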

Interpersonal Affordances and Social Dynamics in Collaborative Immersive Virtual Environments: Passing Together Through Apertures

Lauren Buck (Vanderbilt University), John Rieser (Vanderbilt University), Gayathri Narasimham (Vanderbilt University), Bobby Bodenheimer (Vanderbilt University)

TVCG

Abstract: An essential question in understanding how to develop and build collaborative immersive virtual environments (IVEs) is recognizing how people perform actions together. Many actions in the real world require that people act without prior planning, and these actions are executed quite successfully. In this paper, we study the common action of two people passing through an aperture together in both the real world (Experiment 1) and in a distributed, collaborative IVE (Experiment 2). The aperture’s width is varied from too narrow to be passable to so wide as to be easily passable by both participants together simultaneously. We do this in the real world for all possible gender-based pairings. In virtual reality, however, there is potential for the gender of the participant and the gender of the self-avatar to be different. We also investigate the joint action for all possible gender-based pairings in the distributed IVE. Results indicated that, in the real world, social dynamics between gendered pairings emerged; male-male pairings refused to concede to one another until absolutely necessary while other pairings did not. Male-female pairings were most likely to provide ample space to one another during passage. These behaviors did not appear in the IVE, and avatar gender across all pairings generated no behavioral differences. In addition, participants tended to require wider gaps to allow for passage in the IVE. These findings establish base knowledge of social dynamics and affordance behaviors within multi-user IVEs.

Not Alone Here?! Scalability and User Experience of Embodied Ambient Crowds in Distributed Social Virtual Reality

Marc Erich Latoschik (University of Würzburg), Florian Kern (University of Würzburg), Jan-Philipp Stauffert (University of Würzburg), Andrea Bartl (University of Würzburg), Mario Botsch (Bielefeld University), Jean-Luc Lugrin (University of Würzburg)

TVCG

Abstract: This article investigates performance and user experience in Social Virtual Reality (SVR) targeting distributed, embodied, and immersive, face-to-face encounters. We demonstrate the close relationship between scalability, reproduction accuracy, and the resulting performance characteristics, as well as the impact of these characteristics on users co-located with larger groups of embodied virtual others. System scalability provides a variable number of co-located avatars and AI-controlled agents with a variety of different appearances, including realistic-looking virtual humans generated from photogrammetry scans. The article reports on how to meet the requirements of embodied SVR with today’s technical off-the-shelf solutions and what to expect regarding features, performance, and potential limitations. Special care has been taken to achieve low latencies and sufficient frame rates necessary for reliable communication of embodied social signals. We propose a hybrid evaluation approach which coherently relates results from technical benchmarks to subjective ratings and which confirms required performance characteristics for the target scenario of larger distributed groups. A user-study reveals positive effects of an increasing number of co-located social companions on the quality of experience of virtual worlds, i.e., on presence, possibility of interaction, and co-presence. It also shows that variety in avatar/agent appearance might increase eeriness but might also stimulate an increased interest of participants about the environment.

Studying Gaze Behaviour During Interactions With a Virtual Walker: Influence of the Virtual Reality Setup

Florian Berton (Inria), Anne-Hélène Olivier (University of Rennes 2), Julien Bruneau (Inria), Ludovic Hoyet (Inria), Julien Pettré (Inria)

Conference

Abstract: Simulating realistic interactions between virtual characters has been of interest to research communities for years, and is particularly important for automatically populating virtual environments. This problem requires accurately understanding and modeling how humans interact, which can be difficult to assess. In this context, Virtual Reality (VR) is a powerful tool to study human behaviour, especially as it allows assessing conditions which are both ecological and controlled. While VR was shown to allow realistic collision avoidance adaptations, in the frame of the ecological theory of perception and action, interactions between walkers cannot solely be characterized through motion adaptations but also through the perception processes involved in such interactions. The objective of this paper is therefore to evaluate how different VR setups influence gaze behaviour during interactions between walkers. To this end, we designed an experiment involving a collision avoidance task between a participant and another walker (real confederate or virtual character). During this interaction, we compared both the participant’s locomotion and gaze behaviour in a real environment and in the same situation in different VR setups (including a CAVE, a screen and a Head-Mounted Display). Our results show that even if some quantitative differences exist, gaze behaviour is qualitatively similar between VR and real conditions. In particular, gaze behaviour in the VR setup including an HMD is more in line with the real situation than the other setups. Furthermore, the outcome on motion adaptations confirms previous work, where collision avoidance behaviour is qualitatively similar in VR and real conditions. In conclusion, our results show that VR is relevant for qualitative analysis of locomotion and gaze behaviour during interactions between walkers. This opens perspectives for the design of new experiments to better understand human behaviour, in order to design more realistic virtual humans.

Effects of Self-Avatar and Gaze on Avoidance Movement Behavior

Christos Mousas (Purdue University), Alexandros Fabio Koilias (University of the Aegean), Dimitris Anastasiou (Southern Illinois University Carbondale), Banafsheh Rekabdar (Southern Illinois University), Christos-Nikolaos Anagnostopoulos (University of the Aegean)

Conference

Abstract: The present study investigates users’ movement behavior in a virtual environment when they attempted to avoid a virtual character. At each iteration of the experiment, four conditions (Self-Avatar LookAt, No Self-Avatar LookAt, Self-Avatar No LookAt, and No Self-Avatar No LookAt) were applied to examine users’ movement behavior based on kinematic measures. During the experiment, 52 participants were asked to walk from a starting position to a target position. A virtual character was placed at the midpoint. Participants were asked to wear a head-mounted display throughout the task, and their locomotion was captured using a motion capture suit. We analyzed the captured trajectories of the participants’ routes on four kinematic dimensions to explore whether the four experimental conditions influenced the paths they took. The results indicated that the Self-Avatar LookAt condition affected the path the participants chose more significantly than the other three conditions in terms of length, duration, and deviation, but not in terms of speed. Overall, the length and duration of the task, as well as the deviation of the trajectory from the straight line, were greater when a self-avatar represented participants. An additional effect on kinematic measures was found in the LookAt (Gaze) conditions. Implications for future research are discussed.


Session 27: Visualization Techniques

Wednesday, March 27th, 14:15 - 15:30, Room D

Chair: Antonello Uva

Visualization Techniques for Precise Alignment in VR. A Comparative Study

Alejandro Martin-Gomez (Technische Universitaet Muenchen), Ulrich Eck (Technische Universitaet Muenchen), Nassir Navab (Technische Universität München)

Conference

Abstract: Many studies have explored the effectiveness of Augmented, Virtual, and Mixed Reality (AR, VR, and MR respectively) for object placement tasks in assembly, maintenance, assistance, or training. Two main approaches for assisting users during object alignment exist: static visualization techniques, like transparent or wireframe rendering, and interactive guides, such as arrows or text. In this work, we focus on static visualization techniques, since they do not require precise tracking of the objects that need to be aligned. To the best of our knowledge, no previous work evaluates which visualization technique is most suitable to support users while precisely aligning objects using AR. This paper presents a comparative evaluation of four visualization techniques used to render virtual objects when precise alignment in 6 degrees of freedom (DoF) is required. The selection of these techniques is based on the amount of occlusion observed when the real and virtual objects overlap. We propose using two visualization techniques presenting a low amount of occlusion, Silhouette and Fresnel-Derivative, and we compare them against two commonly used techniques, Wireframe and Semitransparent. We designed a VR environment considering two scenarios (with and without time constraints) in which users aligned pairs of objects. To evaluate user performance, quantitative (distance, rotation, and time to completion) and qualitative (usability and mental effort) scores were collected. Our results suggest that the selection of visualization techniques with low levels of occlusion can improve the precision in rotation and translation achieved by users and increase the usability of the systems.

The Influence of Label Design on Search Performance and Noticeability in Wide Field of View Augmented Reality Displays

Ernst Kruijff (Hochschule Bonn-Rhein-Sieg), Jason Orlosky (Osaka University), Naohiro Kishishita (Fujitsu Ltd), Christina Trepkowski (Hochschule Bonn-Rhein-Sieg), Kiyoshi Kiyokawa

TVCG-Invited

Abstract: In Augmented Reality (AR), search performance for outdoor tasks is an important metric for evaluating the success of a large number of AR applications. Users must be able to find content quickly, labels and indicators must not be invasive but still clearly noticeable, and the user interface should maximize search performance in a variety of conditions. To address these issues, we have set up a series of experiments to test the influence of virtual characteristics such as color, size, and leader lines on the performance of search tasks and noticeability in both real and simulated environments. The first experiment showed that a limited FOV will severely limit search performance, but that appropriate placement of labels and leaders within the periphery can alleviate this problem without interfering with walking or decreasing user comfort. In the second experiment, we found that different types of motion are more noticeable in optical versus video see-through displays, but that blue coloration is most noticeable in both. The results can aid in designing more effective view management techniques, especially for wider field of view displays.

Comparing Techniques for Visualizing Moving Out-of-View Objects in Head-mounted Virtual Reality

Uwe Gruenefeld (University of Oldenburg), Ilja Koethe (University of Oldenburg), Daniel Lange (OFFIS - Institute for Information Technology), Sebastian Weiß (OFFIS e.V.), Wilko Heuten (OFFIS - Institute for Information Technology)

Conference

Abstract: Current head-mounted displays (HMDs) have limited fields-of-view (FOVs). A limited FOV further decreases the already restricted human visual range and amplifies the problem of objects receding from view (e.g., opponents in computer games). However, there is no previous work that investigates techniques for visualizing moving out-of-view objects on head-mounted displays. In this paper, we compare two visualization approaches: (1) Overview+detail, with 3D Radar, and (2) Focus+context, with EyeSee360, in a user study to evaluate their performances for visualizing moving out-of-view objects. We found that using 3D Radar resulted in a significantly lower movement estimation error and higher usability, measured by the system usability scale. 3D Radar was also preferred by 13 out of 15 participants for visualization of moving out-of-view objects.

Worlds-in-Wedges: Combining WIMs and Portals to Support Comparative Immersive Visualization of Forestry Data

Jung Who Nam (University of Minnesota), Krista McCullough (University of Minnesota), Joshua Tveite (University of Minnesota), Maria Molina Espinosa (University of Minnesota), Charles Hobie Perry (USDA Forest Service - Northern Research Station), Barry Ty Wilson (USDA Forest Service - Northern Research Station), Daniel F. Keefe (University of Minnesota)

Conference

Abstract: Virtual reality (VR) environments are typically designed so users feel present in a single virtual world at a time, but this creates a problem for applications that require visual comparisons (e.g., forest scientists comparing multiple data-driven virtual forests). To address this, we present Worlds-in-Wedges, a 3D user interface and visualization technique that supports comparative immersive visualization by dividing the virtual space surrounding the user into volumetric wedges. There are three visual/interactive levels. The first, worlds-in-context, visualizes high-level relationships between the worlds (e.g., a map for worlds that are related in space). The second level, worlds-in-miniature, is a multi-instance implementation of the World-in-Miniature technique extended to support multivariate glyph visualization. The third level, worlds-in-wedges, displays multiple large-scale worlds in wedges that act as volumetric portals. The interface supports navigation, selection, and view manipulation. Since the techniques were inspired directly by problems facing forest scientists, the interface was evaluated by building a complete multivariate data visualization of the US Forest Service Forest Inventory and Analysis public dataset. Scientist user feedback and lessons from iterative design are reported.


Session 28: Avatar Technologies

Wednesday, March 27th, 15:45 - 17:00, Room B

Chair: Gerd Bruder

The Impact of Avatar Tracking Errors on User Experience in VR

Nicholas Toothman (Facebook Reality Labs), Michael Neff (Facebook Reality Labs)

Conference

Abstract: There is evidence that adding motion-tracked avatars to virtual environments increases users’ sense of presence. High quality motion capture systems, however, remain too expensive for the average user and low cost systems introduce various forms of error to the tracking. Much research has looked at the impact of particular kinds of error, primarily latency, on factors such as body ownership, but it is still not known what level of tracking error is permissible in these systems to afford compelling social interaction. This paper presents a series of experiments employing a sizable subject pool (n=96) that study the impact of motion tracking errors on user experience for activities including social interaction and virtual object manipulation. Diverse forms of error that arise in tracking are examined, including latency, popping (jumps in position), stuttering (positions held in time) and constant noise. The focus is on error on a person’s own avatar, but some conditions also include error on an interlocutor, which appears underexplored. The picture that emerges is complex. Certain forms of error impact performance, a person’s sense of embodiment, enjoyment and perceived usability, while others do not. Notably, evidence was not found that tracking errors impact social presence, even when those errors are severe.
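
The error types studied (latency, popping, stuttering, constant noise) can each be injected into a clean tracking stream for experimentation. The sketch below shows simple ways to simulate them on a position stream; the magnitudes, frame rate, and function names are illustrative and do not reproduce the study's actual error conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_latency(positions, frames):
    """Delay the stream by a fixed number of frames."""
    return np.vstack([np.repeat(positions[:1], frames, axis=0), positions[:-frames]])

def add_noise(positions, sigma=0.005):
    """Constant Gaussian jitter (metres)."""
    return positions + rng.normal(0, sigma, positions.shape)

def add_stutter(positions, hold=5, every=30):
    """Hold a pose for `hold` frames every `every` frames."""
    out = positions.copy()
    for start in range(0, len(out), every):
        out[start:start + hold] = out[start]
    return out

def add_popping(positions, prob=0.01, jump=0.2):
    """Random positional jumps."""
    out = positions.copy()
    pops = rng.random(len(out)) < prob
    out[pops] += rng.normal(0, jump, (pops.sum(), positions.shape[1]))
    return out

# 10 s of hypothetical 90 Hz hand positions along a circular arc.
t = np.linspace(0, 10, 900)
clean = np.column_stack([np.sin(t), np.zeros_like(t), np.cos(t)])
degraded = add_noise(add_stutter(add_latency(clean, frames=9)))
```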

The Virtual Caliper: Rapid Creation of Metrically Accurate Avatars from 3D Measurements

Sergi Pujades (Univ. Grenoble Alpes, Inria, CNRS, GrenobleINP, LJK), Betty Mohler (Amazon), Anne Thaler (Max Planck Institute for Biological Cybernetics), Joachim Tesch (Max Planck Institute for Intelligent Systems), Naureen Mahmood (Meshcapade GmbH), Nikolas Hesse (Fraunhofer Institute (IOSB)), Heinrich H. Bülthoff (Max Planck Institute for Biological Cybernetics), Michael J. Black (Max Planck Institute for Intelligent Systems)

TVCG

Abstract: Creating metrically accurate avatars is important for many applications such as virtual clothing try-on, ergonomics, medicine, immersive social media, telepresence, and gaming. Creating avatars that precisely represent a particular individual is challenging, however, due to the need for expensive 3D scanners, privacy issues with photographs or videos, and difficulty in making accurate tailoring measurements. We overcome these challenges by creating “The Virtual Caliper”, which uses VR game controllers to make simple measurements. First, we establish what body measurements users can reliably make on their own body. We find several distance measurements to be good candidates and then verify that these are linearly related to 3D body shape as represented by the SMPL body model. The Virtual Caliper enables novice users to accurately measure themselves and create an avatar with their own body shape. We evaluate the metric accuracy relative to ground truth 3D body scan data, compare the method quantitatively to other avatar creation tools, and perform extensive perceptual studies. We also provide a software application to the community that enables novices to rapidly create avatars in less than five minutes. Not only is our approach more rapid than existing methods, it also exports a metrically accurate 3D avatar model that is rigged and skinned.
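
The key observation above is that a handful of self-made distance measurements relate approximately linearly to body shape as represented by SMPL shape coefficients. A minimal sketch of fitting and applying such a linear mapping is shown below; the measurement set, synthetic data, and regression setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical training data: rows are subjects, columns are controller-based
# distance measurements (e.g., height, arm span, hip width) in metres.
measurements = np.random.default_rng(1).normal(size=(200, 4))
# Corresponding first 10 SMPL shape coefficients (betas) from registered scans
# (here simply synthesized so the example runs end to end).
betas = measurements @ np.random.default_rng(2).normal(size=(4, 10)) * 0.1

# Fit the assumed linear mapping measurements -> betas by least squares
# (a column of ones adds an intercept).
X = np.column_stack([measurements, np.ones(len(measurements))])
W, *_ = np.linalg.lstsq(X, betas, rcond=None)

def predict_shape(user_measurements):
    """Predict SMPL betas for a new user from their self-made measurements."""
    return np.append(user_measurements, 1.0) @ W

print(predict_shape(np.array([1.75, 1.80, 0.35, 0.95])))   # hypothetical measurements
```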

Virtual Agent Positioning Driven by Scene Semantics in Mixed Reality

Yining Lang (Beijing Institute of Technology), Wei Liang (Beijing Institute of Technology), Lap-Fai (Craig) Yu (George Mason University)

Conference

Abstract: When a user interacts with a virtual agent via a mixed reality device, such as the HoloLens or the Magic Leap headset, it is important to consider the semantics of the real-world scene in positioning the virtual agent, so that it interacts with the user and the objects in the real world naturally. Mixed reality aims to blend the virtual world with the real world seamlessly. In line with this goal, in this paper, we propose a novel approach to use scene semantics to guide the positioning of a virtual agent. Such considerations can avoid unnatural interaction experiences, e.g., interacting with a virtual human floating in the air. To obtain the semantics of a scene, we first reconstruct the 3D model of the scene by using the RGB-D cameras mounted on the mixed reality device (e.g., a HoloLens). Then, we employ the Mask R-CNN object detector to detect objects relevant to the interactions within the scene context. To evaluate the positions and orientations for placing a virtual agent in the scene, we define a cost function based on the scene semantics, which comprises a visibility term and a spatial term. We then apply a Markov chain Monte Carlo optimization technique to search for an optimized solution for placing the virtual agent. We carried out user study experiments to evaluate the results generated by our approach. The results show that our approach achieved a higher user evaluation score than that of the alternative approaches.
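
The placement search described above (a semantic cost with visibility and spatial terms, optimized by Markov chain Monte Carlo) can be sketched as a simple Metropolis-Hastings loop over the agent's 2D position and heading. The cost terms below are placeholders standing in for the semantics-derived terms; none of the values come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(pose):
    """Placeholder cost: visibility term + spatial term.
    pose = (x, z, heading). In the paper these terms come from the detected
    scene semantics; here they are stand-ins."""
    x, z, heading = pose
    visibility = (x - 1.0) ** 2 + (z - 2.0) ** 2        # prefer a spot in front of the user
    spatial = 0.5 * (heading - np.pi) ** 2              # prefer facing back toward the user
    return visibility + spatial

def metropolis_hastings(steps=5000, temperature=0.5):
    pose = np.array([0.0, 0.0, 0.0])
    best, best_c = pose.copy(), cost(pose)
    c = best_c
    for _ in range(steps):
        proposal = pose + rng.normal(0, [0.1, 0.1, 0.2])   # perturb x, z, heading
        c_new = cost(proposal)
        if c_new < c or rng.random() < np.exp((c - c_new) / temperature):
            pose, c = proposal, c_new
            if c < best_c:
                best, best_c = pose.copy(), c
    return best, best_c

print(metropolis_hastings())
```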

EEG can be used to measure embodiment when controlling a walking self-avatar

Bilal Alchalabi (University of Montreal), Jocelyn Faubert (University of Montreal), David Labbe (École de technologie supérieure)

Conference

Abstract: It has recently been shown that inducing the ownership illusion and then manipulating the movements of one’s self-avatar can lead to compensatory motor control strategies in gait rehabilitation. In order to maximize this effect, there is a need for a method that measures and monitors embodiment levels of participants immersed in VR to induce and maintain a strong ownership illusion. The objective of this study was to propose a novel approach to measuring embodiment by presenting visual feedback that conflicts with motor control to embodied subjects. Twenty healthy participants were recruited. During the experiment, participants wore an EEG cap and motion capture markers, with an avatar displayed in an HMD from a first-person perspective. They were cued to either perform, watch or imagine a single step forward or to initiate walking on the treadmill. For some of the trials, the avatar took a step with the contralateral limb or stopped walking before the participant stopped (modified feedback). Results show that subjective levels of embodiment correlate strongly with the difference in µ-ERS power over the motor and pre-motor cortex between the modified and non-modified feedback trials.
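
For context, µ-ERS refers to event-related synchronization of the mu rhythm (roughly 8-12 Hz) over sensorimotor electrodes, usually expressed as the percent change in band power relative to a baseline. A small sketch of that computation is given below; the sampling rate, window lengths, and synthetic signals are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import welch

def mu_ers_percent(baseline, trial, fs=250, band=(8, 12)):
    """Percent change in mu-band (8-12 Hz) power of a motor-cortex EEG channel
    relative to a rest baseline; positive = ERS, negative = ERD."""
    def band_power(x):
        f, pxx = welch(x, fs=fs, nperseg=fs)
        sel = (f >= band[0]) & (f <= band[1])
        return pxx[sel].mean()
    p_base, p_trial = band_power(baseline), band_power(trial)
    return 100.0 * (p_trial - p_base) / p_base

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1.0, 2 * 250)   # 2 s of hypothetical C3 data at rest
trial = rng.normal(0, 1.2, 2 * 250)      # 2 s during the feedback condition
print(mu_ers_percent(baseline, trial))
```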

Automatic Generation and Stylization of High Quality Real-Time Avatars

Fabien Danieau (Technicolor), Ilja Gubins (Utrecht University), Nicolas Olivier (ESIR), Olivier Dumas (Technicolor), Bernard Denis (Technicolor), Thomas Lopez (Technicolor), Nicolas Mollet (Technicolor), Brian Frager (Technicolor Experience Center), Quentin Avril (Technicolor)

Conference

Abstract: In this paper, we present a fully automatic pipeline for generating and stylizing high geometric and textural quality real-time rendered avatars. They are automatically rigged with facial blendshapes for animation, and can be used across platforms for applications including virtual reality, augmented reality, remote collaboration, gaming and more. From a set of input facial photos, our approach is able to create a photorealistic, fully rigged avatar in less than seven minutes. The mesh reconstruction is based on state-of-the-art photogrammetry approaches. Automatic landmarking coupled with ICP registration with regularization provides direct correspondence and registration from a given generic mesh to the acquired facial mesh. Then, using deformation transfer, existing blendshapes are transferred from the generic to the reconstructed facial mesh. The reconstructed face is then fit to the full body generic mesh. Extra geometry such as jaws, teeth and nostrils is retargeted and transferred to the avatar. An automatic iris color extraction algorithm is performed to colorize a separate eye texture, animated with dynamic UVs. Finally, an extra step applies a style to the photorealistic face to enable blending of personalized facial features into any other character. The user’s face can then be adapted to any human or non-human generic mesh. A pilot user study was performed to evaluate the utility of our approach. Up to 65% of the participants were successfully able to discern the presence of one’s unique facial features when the style was not too far from a humanoid shape.


Session 29: Cognition and Psychology

Wednesday, March 27th, 15:45 - 17:00, Room C

Chair: Jason Orlosky

The Effects of Presence on Harm-inducing Factors in Virtual Slot Machines

David Heidrich (University of Würzburg), Sebastian Oberdörfer (University of Würzburg), Marc Erich Latoschik (Department of Computer Science, HCI Group)

Conference

Abstract: Slot machines are one of the most played games by pathological gamblers. New technologies, e.g. immersive Virtual Reality (VR), offer more possibilities to exploit erroneous beliefs in the context of gambling. However, the risk potential of VR-based gambling has not yet been researched. A higher presence might increase harmful aspects, thus making VR realizations more dangerous. Measuring harm-inducing factors reveals the risk potential of virtual gambling. In a user study, we analyze a slot machine realized as a low-presence desktop 3D and a high-presence VR version. Both versions are compared with respect to effects on dissociation, urge to gamble, dark flow, and illusion of control. Our study shows significantly higher values of dissociation, dark flow, and urge to gamble in the VR version. Presence significantly correlated with all measured harm-inducing factors. We demonstrate that VR-based gambling has a higher risk potential. This highlights the importance of regulating VR-based gambling.

Entropy of Controller Movements Detects Mental Workload in Virtual Reality

Daniel Reinhardt (Julius-Maximilians-Universität Würzburg), Steffen Haesler (University of Würzburg), Jörn Hurtienne (Julius-Maximilians-Universität), Carolin Wienrich (University Würzburg)

Conference

Abstract: Virtual Reality imposes cognitive demands on users that influence their performance when solving tasks. These cognitive demands, however, have been difficult to measure precisely without inducing breaks of presence. Based on findings in psychological science on how motion trajectories reflect underlying cognitive processes, we investigated entropy (i.e. the degree of movement irregularity) as an unobtrusive measure of mental workload. Entropy values were obtained from a time-series history of controller movement data. Mental workload is considered high over a given time interval, when the measured entropy is high as well. By manipulating the difficulty of a simple rhythm game we could show that the results are comparable to the results of the NASA-TLX questionnaire, which is currently used as the gold standard in VR for measuring mental workload. Thus, our results pave the way for further investigating the entropy of controller movements as a precise measurement of mental workload in VR.
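
One straightforward way to operationalize "entropy of controller movements" is the Shannon entropy of binned frame-to-frame displacement magnitudes over a time window: higher values indicate more irregular movement. The paper may use a different entropy estimator, so the sketch below is only one plausible reading, with hypothetical data.

```python
import numpy as np

def movement_entropy(positions, n_bins=16):
    """Shannon entropy of frame-to-frame controller displacement magnitudes
    over a time window; higher values mean more irregular movement."""
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    hist, _ = np.histogram(steps, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Hypothetical 5 s windows of 90 Hz controller positions.
rng = np.random.default_rng(0)
smooth = np.cumsum(rng.normal(0, 0.001, (450, 3)), axis=0)
erratic = smooth + rng.normal(0, 0.01, (450, 3)) * (rng.random((450, 1)) > 0.8)
print(movement_entropy(smooth), movement_entropy(erratic))
```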

Studying the Mental Effort in Virtual Versus Real Environments

Tiffany Luong (b<>com), Nicolas Martin (b<>com), Ferran Argelaguet Sanz (Inria), Anatole Lécuyer (Inria)

Conference

Abstract: Is there an effect of Virtual Reality (VR) headsets on the user’s mental effort? In this paper, we compare the mental effort in VR versus in real environments. An experiment (N=27) was conducted to assess the effect of wearing a VR Head-Mounted Display (HMD) on the user’s mental effort while performing a standardized cognitive task (the well-known N-back task, with three levels of difficulty, N ∈ {1,2,3}). In addition to testing the effect of the environment (virtual versus real), we also explored the impact of performing a dual task (i.e., sitting versus walking) in both environments on mental effort. The mental effort was assessed through self-reports, task performance, behavioural, and physiological measures. In a nutshell, the analysis of all measurements revealed no significant effect of wearing a VR HMD on the users’ mental effort. In contrast, natural walking significantly increased the users’ mental effort. Taken together, our results support the fact that there is no specific additional mental effort related to the use of a VR HMD.
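
For reference, the N-back task presents a stream of stimuli and requires a response whenever the current item matches the one shown N positions earlier; difficulty scales with N. The sketch below generates and scores such a block; the letter set, target rate, and block length are arbitrary choices, not those of the study.

```python
import random

def generate_nback_trials(n, length=30, target_rate=0.3, seed=0):
    """Generate a letter stream for an N-back block: the participant responds
    whenever the current letter matches the one presented N positions earlier."""
    rng = random.Random(seed)
    letters = "ABCDEFGH"
    stream = [rng.choice(letters) for _ in range(n)]
    for i in range(n, length):
        if rng.random() < target_rate:
            stream.append(stream[i - n])                          # planted target
        else:
            stream.append(rng.choice([c for c in letters if c != stream[i - n]]))
    targets = [i >= n and stream[i] == stream[i - n] for i in range(length)]
    return stream, targets

def score(responses, targets):
    """Hits and false alarms given boolean responses per stimulus."""
    hits = sum(r and t for r, t in zip(responses, targets))
    false_alarms = sum(r and not t for r, t in zip(responses, targets))
    return hits, false_alarms

stream, targets = generate_nback_trials(n=2)   # a 2-back block
```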

You or me? Personality traits predict sacrificial decisions in a VR-simulated accident situation

Uijong Ju (Korea university), June Kang (Korea university), Christian Wallraven (Korea University)

TVCG

Abstract: Emergency situations during car driving sometimes force the driver to make a sudden decision. Predicting these decisions will have important applications in updating risk analyses in insurance applications, but also can give insights for drafting autonomous vehicle guidelines. Studying such behavior in experimental settings, however, is limited by ethical issues as it would endanger peoples’ lives. Here, we employed the potential of virtual reality (VR) to investigate decision-making in an extreme situation in which participants would have to sacrifice others in order to save themselves. In a VR driving simulation, participants first trained to complete a difficult course with multiple crossroads in which the wrong turn would lead the car to fall down a cliff. In the testing phase, obstacles suddenly appeared on the “safe” turn of a crossroad: for the control group, obstacles consisted of trees, whereas for the experimental group, they were pedestrians. In both groups, drivers had to decide between falling down the cliff or colliding with the obstacles. Results showed that differences in personality traits were able to predict this decision: in the experimental group, drivers who collided with the pedestrians had significantly higher psychopathy and impulsivity traits, whereas impulsivity alone was to some degree predictive in the control group. Other factors like heart rate differences, gender, video game expertise, and driving experience were not predictive of the emergency decision in either group. Our results show that self-interest related personality traits affect decision-making when choosing between preservation of self or others in extreme situations and showcase the potential of virtual reality in studying and modeling human decision-making.


Session 30: Cybersickness

Wednesday, March 27th, 15:45 - 17:00, Room D

Chair: Blair MacIntyre

PhantomLegs: Reducing Virtual Reality Sickness using Head-Worn Haptic Devices

Shi-Hong Liu (National Taiwan University), Neng-Hao Yu (National Taiwan University of Science and Technology), Liwei Chan (Computer Science), Yi-Hao Peng (National Taiwan University), Wei-Zen Sun (National Taiwan University), Mike Y. Chen (National Taiwan University)

Conference

Abstract: Virtual Reality (VR) sickness occurs when exposure to a virtual environment causes symptoms that are similar to motion sickness, and has been one of the major user experience barriers of VR. To reduce VR sickness, prior work has explored dynamic field-of-view modification and galvanic vestibular stimulation (GVS) that recouples the visual and vestibular systems. We propose a new approach to reduce VR sickness, called PhantomLegs, that applies alternating haptic cues synchronized to users’ footsteps in VR. Our prototype consists of two servos with padded swing arms, one set on each side of the head, that lightly tap the head as users walk in VR. We conducted a three-session, multi-day user study with 30 participants to evaluate its effects as users walk through a VR environment while physically seated. Results show that our approach significantly reduces VR sickness while remaining comfortable to users.

Motion Sickness Prediction in Stereoscopic Videos Using 3D Convolutional Neural Networks

Tae Min Lee (Yonsei University), Jong-Chul Yoon (Kangwon National University), In-Kwon Lee (Yonsei University)

TVCG

Abstract: In this paper, we propose a three-dimensional (3D) convolutional neural network (CNN)-based method for predicting the degree of motion sickness induced by a 360° stereoscopic video. We consider the user’s eye movement as a new feature, in addition to the motion velocity and depth features of a video used in previous work. For this purpose, we use saliency, optical flow, and disparity maps of an input video, which represent eye movement, velocity, and depth, respectively, as the input of the 3D CNN. To train our machine-learning model, we extend the dataset established in the previous work using two data augmentation techniques: frame shifting and pixel shifting. Consequently, our model can predict the degree of motion sickness more precisely than the previous method, and its predictions correlate more closely with the distribution of ground-truth sickness.
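
A minimal PyTorch sketch of a 3D CNN that consumes stacked saliency, optical flow, and disparity maps (three channels per frame) and regresses a sickness score is shown below; the layer sizes and clip dimensions are illustrative and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class SicknessPredictor(nn.Module):
    """Minimal 3D-CNN regressor over stacked saliency / optical-flow / disparity
    maps (3 input channels per frame); the paper's exact architecture and
    hyper-parameters are not reproduced here."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.regressor = nn.Linear(32, 1)    # predicted motion-sickness score

    def forward(self, clip):                 # clip: (batch, 3, frames, H, W)
        x = self.features(clip).flatten(1)
        return self.regressor(x)

model = SicknessPredictor()
clip = torch.randn(2, 3, 16, 64, 64)         # two hypothetical 16-frame clips
print(model(clip).shape)                     # torch.Size([2, 1])
```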

Analysis on Mitigation of Visually Induced Motion Sickness by Applying Dynamical Blurring on a User’s Retina

Guang-Yu Nie, Henry Been-Lirn Duh, Yue Liu, Yongtian Wang

TVCG-Invited

Abstract: Visually induced motion sickness (MS) experienced in a 3D immersive virtual environment (VE) limits the widespread use of virtual reality (VR). This paper studies the effects of a saliency detection-based approach on the reduction of MS when the display on a user’s retina is dynamically blurred. In the experiment, forty participants were exposed to a VR experience under a control condition without dynamic blurring and an experimental condition with dynamic blurring. The experimental results show that the participants under the experimental condition report a statistically significant reduction in the severity of MS symptoms on average during the VR experience compared to those under the control condition, which demonstrates that the proposed approach can effectively prevent visually induced MS in VR and enable users to remain in a VE for a longer period of time.
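
In spirit, retina-level dynamic blurring keeps a region around the current point of regard sharp and low-pass filters the periphery. The sketch below shows that idea on a single rendered frame using a radial mask; it is a simplification, not the saliency-driven pipeline evaluated in the paper, and the radius and sigma values are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dynamic_blur(frame, gaze_xy, keep_radius=80, sigma=6.0):
    """Blur the rendered frame everywhere except a circular region around the
    current gaze/salient point; a sketch of peripheral blurring, not the
    paper's saliency-driven implementation."""
    h, w = frame.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - gaze_xy[0], yy - gaze_xy[1])
    mask = np.clip((dist - keep_radius) / keep_radius, 0, 1)[..., None]  # 0 = sharp, 1 = blurred
    blurred = np.stack(
        [gaussian_filter(frame[..., c], sigma) for c in range(frame.shape[2])], axis=-1
    )
    return (1 - mask) * frame + mask * blurred

frame = np.random.default_rng(0).random((480, 640, 3))   # hypothetical rendered frame
out = dynamic_blur(frame, gaze_xy=(320, 240))
```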

Scene Transitions and Teleportation in Virtual Reality and the Implications for Spatial Awareness and Sickness

Kasra Rahimi Moghadam, Colin Banigan, Eric D. Ragan

TVCG-Invited

Abstract: Various viewing and travel techniques are used in immersive virtual reality to allow users to see different areas or perspectives of 3D environments. Our research evaluates techniques for visually showing transitions between two viewpoints in head-tracked virtual reality. We present four experiments that focus on automated viewpoint changes that are controlled by the system rather than by interactive user control. The experiments evaluate three different transition techniques (teleportation, animated interpolation, and pulsed interpolation), different types of visual adjustments for each technique, and different types of viewpoint changes. We evaluated how differences in transition can influence a viewer’s comfort, sickness, and ability to maintain spatial awareness of dynamic objects in a virtual scene. For instant teleportations, the experiments found participants could most easily track scene changes with rotational transitions without translational movements. Among the tested techniques, animated interpolations allowed significantly better spatial awareness of moving objects, but the animated technique was also rated worst in terms of sickness, particularly for rotational viewpoint changes. Across techniques, viewpoint transitions involving both translational and rotational changes together were more difficult to track than either individual type of change.
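
The three transition styles compared above differ only in how the camera pose moves from the start viewpoint to the end viewpoint over time. The sketch below illustrates one way to generate animated and pulsed interpolations of position and yaw (teleportation is the single-frame degenerate case); the easing curve, pulse count, and the reading of "pulsed interpolation" as a small number of discrete jumps are assumptions, not the paper's implementation.

```python
import numpy as np

def animated_transition(start, end, duration=1.0, fps=90):
    """Smoothly interpolate position (x, y, z) and yaw between two viewpoints."""
    pos0, yaw0 = np.array(start[:3]), start[3]
    pos1, yaw1 = np.array(end[:3]), end[3]
    dyaw = (yaw1 - yaw0 + np.pi) % (2 * np.pi) - np.pi       # shortest rotation
    for t in np.linspace(0, 1, int(duration * fps)):
        s = 3 * t**2 - 2 * t**3                              # ease-in / ease-out
        yield np.append(pos0 + s * (pos1 - pos0), yaw0 + s * dyaw)

def pulsed_transition(start, end, n_pulses=6):
    """Jump through a few intermediate viewpoints instead of animating
    continuously (one reading of 'pulsed interpolation')."""
    pos0, yaw0 = np.array(start[:3]), start[3]
    pos1, yaw1 = np.array(end[:3]), end[3]
    dyaw = (yaw1 - yaw0 + np.pi) % (2 * np.pi) - np.pi
    for t in np.linspace(0, 1, n_pulses + 1)[1:]:
        yield np.append(pos0 + t * (pos1 - pos0), yaw0 + t * dyaw)

# Teleportation is simply applying `end` in a single frame.
frames = list(animated_transition((0, 1.7, 0, 0.0), (4, 1.7, 2, np.pi / 2)))
```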

Cybersickness Analysis with EEG using Deep Learning Algorithms

Dae kyo Jeong (Data Visualization Lab, Sejong University), Sangbong Yoo (Sejong University), Yun Jang (Sejong University)

Conference

Abstract: Cybersickness is a symptom of dizziness that occurs while experiencing Virtual Reality (VR) technology, and it is presumed to be caused mainly by crosstalk between the sensory and cognitive systems. However, since the sensory and cognitive systems cannot be measured objectively, it is difficult to measure cybersickness. Therefore, methodologies for measuring cybersickness have been studied in various ways. Traditional studies have collected answers to questionnaires or analyzed EEG data using machine learning algorithms. However, systems relying on questionnaires lack objectivity, and it is difficult to obtain highly accurate measurements with the machine learning algorithms used in previous work. In this work, we apply and compare Deep Neural Network (DNN) and Convolutional Neural Network (CNN) deep learning algorithms for objective cybersickness measurement from EEG data. We also propose a data preprocessing step and signal quality weights that allow us to achieve high performance when learning from EEG data with the deep learning algorithms. In addition, we analyze the characteristics of videos where cybersickness occurs by examining the video segments that caused cybersickness in the experiments. We find common patterns that cause cybersickness.
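
One plausible reading of the signal quality weighting idea is to compute per-channel frequency-band features and down-weight channels with poor quality before feeding them to the DNN/CNN. The sketch below illustrates only that reading; the band definitions, weighting scheme, and feature set are assumptions, not the authors' preprocessing.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch, fs=250):
    """Per-channel band-power features for one EEG epoch (channels x samples)."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        sel = (f >= lo) & (f <= hi)
        feats.append(pxx[:, sel].mean(axis=-1))
    return np.stack(feats)                        # shape: (n_bands, channels)

def weighted_features(epoch, quality, fs=250):
    """Down-weight channels with poor signal quality (quality in [0, 1],
    e.g. derived from impedance or artifact ratio) before classification."""
    feats = band_power_features(epoch, fs)
    return (feats * quality).ravel()              # flat input vector for a DNN/CNN

epoch = np.random.default_rng(0).normal(size=(8, 2 * 250))    # 8 channels, 2 s at 250 Hz
quality = np.array([1.0, 0.9, 0.3, 1.0, 0.8, 1.0, 0.6, 1.0])  # hypothetical per-channel quality
x = weighted_features(epoch, quality)
```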