2019 IEEE VR Osaka

March 23rd - 27th

IEEE Computer Society | VRSJ


Sponsors


Diamond

Osaka International Convention Center

Platinum

DELL + Intel Japan
Mercari
National Science Foundation
OSAKA CONVENTION & TOURISM BUREAU

Gold


Tateishi Science and Technology Foundation

The Telecommunications Advancement Foundation

Silver


DAQRI

Bronze

BARCO
Huawei Japan
Knowledge Service Network
Mozilla Corporation
Osaka Electro-Communication University
SenseTime Japan

Flower / Misc

GREE, Inc.
KYOHRITSU ELECTRONIC INDUSTRY Co.,Ltd.
Beijing Nokov Science & Technology Co., Ltd.
PoSTMEDIA
SoftCube Corporation
Sumitomo Electric Industries
Vicon

Exhibitors

Advanced Realtime Tracking (ART)
Archivetips
China's State Key Laboratory of Virtual Reality Technology and Systems
Computer Network Information Center, Chinese Academy of Sciences
Creact
Crescent
DELL + Intel Japan
Fujitsu
Fun Life Inc.
Haption
Kyohritsu
Nihon Binary Co., Ltd.
NIST - Public Safety Communications Research
Nokov
Optitrack Japan, Ltd.
PhaseSpace
QD Laser, Inc.
Qualisys
Solidray Co.,Ltd.
WESTUNITIS Co., Ltd.

Supporters


IEEE Kansai Section

Society for Information Display Japan Chapter

VR Consortium

The Institute of Systems, Control and Information Engineers

Human Interface Society

The Japanese Society for Artificial Intelligence

The Visualization Society of Japan

Information Processing Society of Japan

The Robotics Society of Japan

Japan Society for Graphic Science

The Japan Society of Mechanical Engineers

Japanese Society for Medical and Biological Engineering

The Institute of Image Information and Television Engineers

The Society of Instrument and Control Engineers

The Institute of Electronics, Information and Communication Engineers

The Institute of Electrical Engineers of Japan

The Society for Art and Science

Japan Ergonomics Society

The Japanese Society of Medical Imaging

Posters

Optical system that forms a mid-air image moving at high speed in the depth direction

Yui Osato (Department of Informatics), Naoya Koizumi (Department of Informatics)

Abstract: Mid-air imaging technology displays virtual images that appear to move about in the real world. A conventional mid-air image display based on a retro-transmissive optical element must move its light source by the same distance the mid-air image is to travel, and the linear actuator that moves the display serving as the light source makes the system large. To solve this problem, we designed an optical system that realizes high-speed movement of mid-air images without a linear actuator: a motor-driven rotating mirror generates a virtual image of the light source and moves it at high speed.

Personalized Personal Spaces for Virtual Reality

Daniel Pohl (Intel Corporation), Markus Achtelik (Intel Corporation)

Abstract: An important criterion for virtual reality experiences is that they are very immersive: the person inside the head-mounted display feels like they are really in the virtual environment. While this can be a very pleasant experience, the opposite can happen as well. When other people or unfriendly avatars enter a user's personal space, it can cause the same discomfort as it would in real life. In this work, we propose defining multi-level artificial barriers for other avatars and objects that respect the personal spaces defined by users. We apply this to interactively rendered environments and, as far as possible, to 360-degree photo and video content.

Freely Explore the Scene with 360 Field of View

Feng Dai (Institute of Computing Technology, Chinese Academy of Sciences), Chen Zhu (Institute of Computing Technology, Chinese Academy of Sciences), Yike Ma (Institute of Computing Technology, Chinese Academy of Sciences), Juan Cao (Institute of Computing Technology, Chinese Academy of Sciences), Qiang Zhao (Institute of Computing Technology, Chinese Academy of Sciences), Yongdong Zhang (University of Science and Technology of China)

Abstract: By providing a 360° field of view, spherical panoramas are widely used in virtual reality (VR) systems and street view services. However, due to bandwidth and storage limitations, existing systems only provide sparsely captured panoramas and have limited interaction modes. Although there are methods that can synthesize novel views based on captured panoramas, the generated novel views all lie on the lines connecting existing views, so these methods do not support free viewpoint navigation. In this paper, we propose a new panoramic image-based rendering method. Our method takes pre-captured images as input and can synthesize panoramas at novel views that are far from the input camera positions. Thus, it allows users to freely explore the scene with a 360° field of view.

Effect of Full Body Avatar in Augmented Reality Remote Collaboration

Tzu-Yang Wang (University of Tsukuba), Yuji Sato (University of Tsukuba), Mai Otsuki (University of Tsukuba), Hideaki Kuzuoka (University of Tsukuba), Yusuke Suzuki (OKI Electric Industry Co., Ltd.)

Abstract: In this paper, we compared the usability of three avatar designs (“Body”, “Hand + Arm”, and “Hand only”) for an augmented reality remote instruction system. The results showed that the system with the full-body avatar had the highest usability. In addition, participants found the full-body avatar easier to track than the hand-only avatar. However, there was no difference among the three designs in the understandability of the instructions.

PILC Projector: RGB-IR Projector for Pixel-level Infrared Light Communication

Ikuo Kamei (The University of Tokyo), Takefumi Hiraki (The University of Tokyo), Shogo Fukushima (The University of Tokyo), Takeshi Naemura (The University of Tokyo)

Abstract: Projecting invisible data onto visible images can facilitate seamless interactive projection, since data embedded in regular images is unobtrusive to human viewers. However, previous techniques sacrificed at least one of the following key goals: 1) calibration-free setup; 2) full-color projection; or 3) high-contrast images. In this paper, we propose a Pixel-level Infrared Light Communication (PILC) projector that achieves all of these requirements by adding an infrared light source to a full-color projector. As a proof of concept, we built a functional prototype, evaluated its performance, and present a basic application.

Symmetrical Reality: Toward a Unified Framework for Physical and Virtual Reality

Zhenliang Zhang (Beijing Institute of Technology), Cong Wang (China Electronics Standardization Institute), Dongdong Weng (Beijing Institute of Technology), Yue Liu (Beijing Institute of Technology), Yongtian Wang (Beijing Institute of Technology)

Abstract: In this paper, we review the background of physical reality, virtual reality, and some traditional mixed forms of them. Based on current knowledge, we propose a new unified concept, symmetrical reality, to describe the physical and virtual worlds from a unified perspective. Under the framework of symmetrical reality, traditional virtual reality, augmented reality, inverse virtual reality, and inverse augmented reality can be interpreted within a single presentation. We analyze the characteristics of symmetrical reality from two different observation locations (i.e., from the physical world and from the virtual world), where all other forms of physical and virtual reality can be treated as special cases of symmetrical reality.

A virtual-real occlusion method based on GPU acceleration for MR

TianRen Luo (Research Institute of Virtual Reality and Intelligent System), ZeHao Liu (Research Institute of Virtual Reality and Intelligent System), Zhigeng Pan (Hangzhou Normal University), Mingmin Zhang (Zhejiang University)

Abstract: In mixed reality, the user's sense of realism depends on how well virtual objects are fused with the real scene, and a key factor in this fusion is occlusion consistency between the virtual and real objects in the composite scene. Existing virtual-real occlusion methods typically require known geometry of the real objects in the scene, constructed either manually or by three-dimensional reconstruction with a dense mapping algorithm, which limits the situations in which they can be applied. In this paper, we propose a GPU-accelerated method that uses the color and depth information provided by an RGB-D camera to improve occlusion. After cropping the depth and color images of the virtual and real scenes, the method extracts the region of interest containing the virtual object (the area where virtual-real occlusion can occur). It then applies a modified joint bilateral filter to the depth image, using the real-scene color image as the guide image, to repair erroneous edge information and "black holes" in the depth data. A pixel-by-pixel comparison of virtual and real depth values then produces the occlusion-aware rendering blend. Finally, remaining "sawtooth" artifacts are removed with a deferred-shading algorithm that blurs only local boundaries, yielding accurate, smooth occlusion edges and improved visual quality.
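
A minimal NumPy/OpenCV sketch of the per-pixel depth test at the heart of this kind of method is shown below. It is an illustration, not the authors' GPU pipeline: the hole filling and edge feathering here are simple stand-ins for the paper's color-guided joint bilateral filtering and boundary-only blur, and all array names are assumptions.

```python
import numpy as np
import cv2
from scipy.ndimage import distance_transform_edt

def composite_occlusion(real_rgb, real_depth, virtual_rgb, virtual_depth):
    """real_rgb/virtual_rgb: (H, W, 3) aligned color frames;
    real_depth/virtual_depth: (H, W) depth in meters, 0 = invalid."""
    # 1) Repair "black holes": fill invalid depth pixels with the nearest
    #    valid value, then smooth edges with an edge-preserving filter.
    depth = real_depth.astype(np.float32).copy()
    holes = depth <= 0
    if holes.any():
        idx = distance_transform_edt(holes, return_distances=False,
                                     return_indices=True)
        depth = depth[tuple(idx)]
    depth = cv2.bilateralFilter(depth, d=5, sigmaColor=0.1, sigmaSpace=5)

    # 2) Per-pixel depth test: the virtual pixel wins where it lies
    #    closer to the camera than the repaired real surface.
    virtual_wins = (virtual_depth > 0) & (virtual_depth < depth)

    # 3) Feather the occlusion boundary to suppress sawtooth artifacts.
    alpha = cv2.GaussianBlur(virtual_wins.astype(np.float32), (5, 5), 0)[..., None]
    return (alpha * virtual_rgb + (1.0 - alpha) * real_rgb).astype(real_rgb.dtype)
```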

Laser-based Photochromic Drawing Method for Rotating Objects with High-speed Visual Feedback

Yuri Mikawa (The University of Tokyo), Tomohiro Sueishi (The University of Tokyo), Tomohiko Hayakawa (The University of Tokyo), Masatoshi Ishikawa (The University of Tokyo)

Abstract: Color-forming displays can present digital imagery by repeatedly changing their color, and such displays have been studied in recent research on augmented reality (AR). Previous methods suffer from high latency and low speed, which undermines interaction in immersive AR experiences. This study proposes a high-speed, low-latency method for drawing on dynamically moving objects, using photochromic material colored by an ultraviolet laser, high-speed mirror control, and visual feedback. We experimentally evaluated the method's drawing accuracy and confirmed that it tracks random rotation well enough to produce accurate drawings.

On sharing physical geometric space between augmented and virtual reality environments

Wooyoung Chun (Sogang University), Gyujin Choi (Sogang University), Jaepung An (Sogang University), Woong Seo (Sogang University), Sanghun Park (Dongguk University), Insung Ihm (Sogang University)

Abstract: Despite the expected synergistic effects, augmented and virtual reality (AR and VR, respectively) technologies still tend to be discrete entities. In this paper, we describe our effort to enable AR and VR users to share the same physical geometric space. The geometric transformation between the two world spaces, defined separately by the AR and VR systems, is estimated using a specially designed tracking board. Once obtained, the transformation allows users to collaborate with each other within an integrated physical environment while making the best use of both AR and VR technologies.

Player Perception Augmentation for Beginners Using Visual and Haptic Feedback in Ball Game

Yuji Sano (University of Tsukuba), Koya Sato (University of Tsukuba), Ryoichiro Shiraishi (University of Tsukuba), Mai Otsuki (University of Tsukuba)

Abstract: We developed a sports support system that augments the perception of beginner players and supports situation awareness to motivate beginners in multiplayer sports through visual and haptic feedback. Our system conveys the positional relationship of opponents using visual feedback; in addition, the position of opponents beyond the field of view is conveyed using haptic feedback. An experiment on pass interception was used to compare the visual and haptic feedback. The results confirmed the effectiveness and characteristics of these feedback channels in a multiplayer ball game.

Match the Cube: Investigation of the Head-coupled Input with a Spherical Fish Tank Virtual Reality Display

Qian Zhou (University of British Columbia), Fan Wu (University of British Columbia), Ian Stavness (University of Saskatchewan), Sidney S Fels (University of British Columbia)

Abstract: Fish Tank Virtual Reality (FTVR) displays create a compelling 3D effect with the motion parallax cue using head-coupled perspective. While head-coupled viewpoint control provides natural visuomotor coupling, the motion parallax cue has been found to be underutilized, with minimal head motion detected once manual input becomes available to users. We investigate whether users can effectively use head-coupling in conjunction with manual input in a mental rotation task involving inspection and comparison of a pair of 3D cubes. We found that participants managed to incorporate head-coupled viewpoint control with manual touch input in the task. They used touch as the primary input and the head as the secondary input, with an input ratio of 4.2:1. The combined input approach appears to be sequential, with head and manual input co-activated for only 8.63% of the task duration. The results of this study provide insights for designing head-coupled interactions in 3D interactive applications.
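
For illustration, statistics like the 4.2:1 input ratio and 8.63% co-activation above could be computed from per-frame activity logs roughly as follows; the boolean arrays and threshold-based notion of "active" are assumptions, not the study's analysis code.

```python
import numpy as np

def input_usage_stats(head_active, touch_active):
    """head_active, touch_active: per-frame booleans marking whether the
    head or the touch input moved beyond a noise threshold that frame."""
    head = np.asarray(head_active, dtype=bool)
    touch = np.asarray(touch_active, dtype=bool)
    ratio = touch.sum() / max(head.sum(), 1)   # touch-to-head input ratio
    co_pct = (head & touch).mean() * 100.0     # % of frames with both active
    return ratio, co_pct
```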

MonoEye: Monocular Fisheye Camera based 3D Human Pose Estimation

Dong-Hyun Hwang (Tokyo Institute of Technology), Kohei Aso (Tokyo Institute of Technology), Hideki Koike (Tokyo Institute of Technology)

Abstract: Wearable cameras that capture egocentric views have the potential to be used in various ways, such as action recognition, gesture input for augmented/virtual reality (AR/VR), and lifelogging. In particular, the pose of the camera wearer is one of the interesting aspects of the egocentric view, and various egocentric-view-based pose estimation systems have been proposed; however, none of them balances the range of recognizable poses against a sufficiently wide egocentric view. In this work, we propose MonoEye, a system that provides both the wearer's estimated 3D pose and a wide egocentric view. Our system's chest-mounted camera, equipped with an ultra-wide fisheye lens, covers the wearer's limbs as well as a wide egocentric view, and our pose estimation network estimates the wearer's 3D body pose from this view. The proposed system can not only serve as an input interface for AR and VR by estimating a wide variety of wearer poses, but also has the potential to support action recognition by providing a wide egocentric view.

Virtual Reality Training with Passive Haptic Feedback for CryoEM Sample Preparation

Jiahui Dong (Purdue University), Jun Zhang (Purdue University), Xiao Ma (Purdue University), Pengyu Ren (Purdue University), Zhenyu Cheryl Qian (Purdue University), Yingjie Victor Chen (Purdue University)

Abstract: We present cryoVR, an immersive virtual reality training system with passive haptic feedback for training biological scientists to prepare bio-samples for Cryo-Electron Microscopy (CryoEM). CryoEM requires careful operation of expensive, delicate equipment. To minimize risk and interruption of crucial research work, we mimic the real lab in VR so that trainees can practice in a virtual environment. We used 3D-printed objects to provide passive haptic feedback and achieve a more realistic training experience: participants interact with the equipment in the virtual environment by moving or touching the physical models. We implemented all the necessary operations with haptic feedback, including moving, clicking, pouring, rotating, pulling, and pushing. By following the instructions provided by our virtual reality simulator and interacting with physical objects, trainees learn how to operate CryoEM equipment at low cost and risk. Through developing the training system, we explore the benefits, limitations, and precautions of embedding haptic feedback in scientific VR training.

Underwater Manipulation Training Simulation System for Manned Deep Submarine Vehicle

Zhang Xiaoxi (Dalian Maritime University), Yin Yong (Dalian Maritime University), Feifei Wan (Dalian Maritime University)

Abstract: Underwater operation of manned deep submersible vehicles involves high safety risk and low training efficiency. Taking China's first manned deep submersible vehicle, "Jiaolong", as the simulation prototype, we developed an underwater training platform that is not limited by time or place. Three-dimensional modeling technology is used to build the 3D model of "Jiaolong", and a mathematical model of manipulator motion is established to simulate the coupled motion between the manipulator's joints. Collision detection is used to determine whether there is interaction between the manipulator and the operated object, between the object and the sampling basket, and between the equipment and the scene. The system is built on a 3D development engine and includes a cobalt-rich crust mining area, a polymetallic sulphide area, a shallow sea area, and a cold seep area, among others. With this system, operators can train the underwater operation process. The system uses a virtual reality helmet as the visual display and helmet-based line-of-sight collision detection, instead of mouse clicks, to simulate the whole process of underwater operation of the deep submersible.

Investigation of Visual Self-Representation for a Walking-in-Place Navigation System in Virtual Reality

Chanho Park (Electronics and Telecommunications Research Institute), Kyungho Jang (Electronics and Telecommunications Research Institute)

Abstract: Walking-in-place (WIP) is a technique for navigation in virtual reality (VR) that can be implemented in a limited space with a simple algorithm. Although WIP systems provide a sense of movement, delivering immersive VR experiences requires providing information as similar as possible to walking in the real world. There have been many studies on WIP technology, but little work on the visual self-representation of WIP in the virtual environment (VE). In this paper, we describe our investigation of visual self-representation for a WIP navigation system using an HMD and a full-body motion capture system. Our system moves the user in the direction of the pelvis, computed from inertial sensor data, and a virtual body linked to the user's movement is seen from the first-person perspective (1PP) in two ways: (i) full body, and (ii) full body with natural walking. In (ii), when a step is detected, the motion of the lower part of the avatar is manipulated as if the user were actually walking. We discuss the possibility of visual self-representation for WIP systems.

Evaluation of a Virtual Reality-based Baseball Batting Training System using Instantaneous Bat Swing Information

Liyuan Zou (Room 1201 Karin35,Minamigasahigashi 3-chome 21-26), Takatoshi Higuchi (Fukuoka Institute of Technology University), Roberto Lopez-Gulliver (Ritsumeikan University), Tadao Isaka (Ritsumeikan University), Haruo Noma (Ritsumeikan University)

Abstract: Batting practice aims to increase the batting performance of baseball players. Traditional batting practice methods have proven effective in increasing players' batting performance in real games; however, the feedback the player receives is limited to the vibration of the bat, the sound of the impact, and the ball trajectory. We propose a virtual reality (VR) baseball batting system that provides batters with instantaneous bat swing information as feedback, including the exact bat-ball impact location and angle, and a replay for swing timing and speed. The ability to review this swing information immediately after each swing may help batters quantitatively adjust their swing to improve batting performance. To evaluate its effectiveness, we compared the proposed method against a traditional batting training method over a short period. Results of our preliminary experiments show that the batting performance of the VR-based group after a 10-day practice period was comparable to that of the traditional group. Further experiments and analysis are required to assess the efficiency of the proposed method.

Self Bird’s Eye View with Omnidirectional Camera on HMD

Kenji Funahashi (Nagoya Institute of Technology), Naoki Sumida (Nagoya Institute of Technology), Shinji Mizuno (Aichi Institute of Technology)

Abstract: Seeing oneself from behind in real time, much like an out-of-body experience, is refreshing, and it may also serve as a trigger for self-improvement. However, a selfie stick, drone, or similar device is normally necessary to obtain a bird's-eye view. We propose a method to synthesize a virtual bird's-eye view of oneself using an omnidirectional camera mounted on an HMD.

Real-Time Simulation of Deep-Sea Hydrothermal Fluid

Feifei Wan (Dalian Maritime University), Yin Yong (Dalian Maritime University), Zhang Xiaoxi (Dalian Maritime University)

Abstract: We present a virtual simulation of deep-sea hydrothermal fluids using a combination of physics-based particles and grids. We use the Vortex-in-Cell method to simulate vorticity and generate the induced velocity field, and the semi-Lagrangian method to simulate the advection and diffusion of the temperature and density fields. Finally, we show the rich details of hydrothermal fluids by comparing our simulations with real fluids.
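
As a rough illustration of the semi-Lagrangian step, the sketch below advects a 2D scalar field (e.g., temperature) by tracing grid points backward through the velocity field; the grid layout, time step, and function names are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def advect(field, u, v, dt=1.0):
    """field: (H, W) scalar quantity; u, v: velocity components in
    grid cells per step. Returns the advected field."""
    h, w = field.shape
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)
    # Trace each grid point backward along the velocity...
    back_y = y - dt * v
    back_x = x - dt * u
    # ...and sample the field there with bilinear interpolation.
    return map_coordinates(field, [back_y, back_x], order=1, mode='nearest')
```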

Preliminary Evaluation of Gill-breathing Simulation System Gill+Man

Izumi Mizoguchi (The University of Electro-Communications), Takahiro Ando (The University of Electro-Communications), Mizuki Nagano (The University of Electro-Communications), Ryota Shijo (The University of Electro-Communications), Sho Sakurai (The University of Electro-Communications), Koichi Hirota (The University of Electro-Communications), Takuya Nojima (The University of Electro-Communications)

Abstract: We propose a gill-breathing simulation system named Gill+Man. The system presents the sensation of breathing through gills, as a fish does. Gill+Man comprises three devices: a breath-sensing device, a device presenting a swallowing sensation, and a device presenting a gill sensation. These devices use simple stimulation and combine to produce the sense of breathing through gills. Respondents gave the gill-breathing sensation a score of 66 in a questionnaire evaluation.

Exploring Stereovision-Based 3-D Scene Reconstruction for Augmented Reality

Guang-Yu Nie (Beijing Institute of Technology), Yun Liu (Nankai University), Cong Wang (China Electronics Standardization Institute), Yue Liu (Beijing Institute of Technology), Yongtian Wang (Beijing Institute of Technology)

Abstract: Three-dimensional (3-D) scene reconstruction is one of the key techniques in Augmented Reality (AR), which is related to the integration of image processing and display systems of complex information. Stereo matching is a computer vision based approach for 3-D scene reconstruction. In this paper, we explore an improved stereo matching network, SLED-Net, in which a Single Long Encoder-Decoder is proposed to replace the stacked hourglass network in PSM-Net for better contextual information learning. We compare SLED-Net to state-of-the-art methods recently published, and demonstrate its superior performance on Scene Flow and KITTI2015 test sets.

PReWAP: Predictive Redirected Walking using Artificial Potential Fields

Christian Hirt (ETH Zurich), Markus Zank (Lucerne University of Applied Sciences and Arts), Andreas Kunz (ETH Zurich)

Abstract: In predictive redirected walking applications, planning the redirection and predicting the user's path are crucial for safe and effective redirection. Common predictive redirection algorithms require many simplifications and limitations concerning the real and virtual environments to achieve real-time performance; for example, the tracking space must be convex and only a single user is supported. In this paper, we present a novel approach called PReWAP that addresses many of these shortcomings. We introduce artificial potential fields to represent the real environment, which can handle non-convex environments and multiple co-located users. Further, we show how this approach can be integrated into a model predictive controller that allows various redirection techniques and multiple gains to be applied.
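
A toy version of such a repulsive potential field over an occupancy grid is sketched below; the inverse-distance falloff and constants follow the classic potential field formulation and are illustrative assumptions, not the PReWAP formulation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def repulsive_potential(occupancy, influence=20.0, strength=1.0):
    """occupancy: (H, W) bool grid of the real space, True = blocked.
    Returns a potential that grows near obstacles and vanishes beyond
    the influence radius (in cells); non-convex layouts need no special
    handling."""
    d = np.maximum(distance_transform_edt(~occupancy), 1e-3)
    pot = 0.5 * strength * (1.0 / d - 1.0 / influence) ** 2
    pot[d > influence] = 0.0
    return pot

# The steering direction at a cell is the negative gradient, e.g.:
# gy, gx = np.gradient(repulsive_potential(grid)); step = (-gy[r, c], -gx[r, c])
```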

“Ready Player One”: Enhancing Omnidirectional Treadmills for use in Virtual Environments

Adrian Barberis (University of Wyoming), Trystan Bennett (University of Wyoming), Meredith Minear (University of Wyoming)

Abstract: As large-scale immersive virtual environments become a reality, different technologies are being developed to allow increasingly naturalistic travel in these spaces. One such technology is the omnidirectional treadmill, which seeks to become the primary locomotion device for spacious virtual environments. Though the hardware has been realized, it still lacks a concrete software paradigm that ensures comfort and performance for the user. We identified three potentially key settings for omnidirectional treadmill locomotion in virtual reality (VR) environments that may lead to more comfort, increased degrees of freedom, and better performance: movement speed, treadmill sensor sensitivity, and decoupling the head's rotation from that of the body. To integrate these settings, we developed an original first-person movement script and conducted a pre-study to discover whether enough variance existed between individuals to necessitate per-individual calibration. Initial pre-study results suggest that these three values do vary enough to recommend calibrating them on a per-individual basis, and they open up the opportunity for further research into the potential benefits of these variables for locomotion in environments that use omnidirectional treadmills.

Archaeological Excavation Simulation for Interaction in Virtual Reality

Da-Chung Yi (National Taiwan University), Yang-Sheng Chen (National Taiwan University), Ping-Hsuan Han (National Taiwan University), Hao-Cheng Wang (National Taiwan University), Yi-Ping Hung (National Taiwan University)

Abstract: We propose a real-time excavation simulation system for interactive gameplay in virtual reality. To increase the player's immersion, our simulation system produces realistic potholes and clods according to the depth and angle of the player's excavation. We divide the process into three phases: ground deformation, clod generation, and clod fragmentation. In ground deformation, we describe how to simulate the topographic changes before and after excavation. In clod generation, we describe how to generate a clod whose mesh matches the depth and angle of the player's excavation action. In clod fragmentation, the clods break apart and fall as the shovel lifts. The system can create excavation effects on different geologies by changing the material of the ground and clods.

VR Sickness in Continuous Exposure to Live-action 180° Video

Sinan Zhang (Meiji University), Akiyoshi Kurogi (L.A.B Co. Ltd.), Yumie Ono (Meiji University)

Abstract: The goal of this study was to identify the factors that determine the degree of VR sickness, in order to improve the audiovisual experience of VR videos and games. We used a simulator sickness questionnaire to evaluate the degree of VR sickness for nine types of live-action 180-degree videos combining different movement speeds and fields of view (FOV). Among the 40 participants we tested, those prone to motion sickness had more severe symptoms than those who were not. Although statistical tests failed to show significant differences related to movement speed or field of view, our results suggest that VR exposure time was the most important factor influencing VR sickness.

Social skills training tool in Virtual Reality, intended for managers and sales representatives

Jean-Daniel Taupiac (Univ. Montpellier), Nancy Rodriguez (Univ. Montpellier), Olivier Strauss (Univ. Montpellier), Pierre Beney (Safran Helicopter Engines)

Abstract: Social skills training for managers and sales representatives is today mainly delivered through role-playing sessions, which have several limitations: realism, contextualization, and evaluation objectivity. This paper describes a prototype of a virtual reality tool intended to address these issues by letting trainees experience role-playing sessions with virtual characters. We present the results of the first user tests we carried out: users felt spatially present, involved, and socially present. The results also highlight areas for improvement, such as nonverbal behaviors, scenario content, and environment realism.

A Multidirectional Haptic Feedback Prototype for Experiencing Collisions between Virtual and Real Objects

Li Zhang (Northwestern Polytechnical University), Weiping He (Northwestern Polytechnical University), Mengmeng Sun (Northwestern Polytechnical University), Silian Li (Northwestern Polytechnical University), Xiaoliang Bai (Northwestern Polytechnical University), Shuxia Wang (Northwestern Polytechnical University)

Abstract: Haptic feedback has shown great value in HCI research and applications for enhancing user experience, and simulating tactile sensations of virtual objects is currently a primary research target. Instead of mounting motors on fingers or hands, we attach vibration motors to a physical object to convey an augmented sense of collision with virtual objects to bare hands. We developed a novel sensor-based proof-of-concept prototype that distributes multiple vibration motors around a physical object and renders vibrational sensations from the collision direction through combinations of motors. Users obtain augmented haptic feedback when manipulating the augmented physical object to interact with virtual objects in an AR environment. We first studied how the number of motors and their input voltage influence correct judgment of direction, in order to identify the design parameters of the prototype. We then investigated the effect of introducing the prototype in a typical manipulation task in AR, and found that, together with visual feedback, it efficiently enhanced human performance and the experience of collisions with virtual objects.

Interacting with 3D Images on a Rear-projection Tabletop 3D Display Using Wireless Magnetic Markers and an Annular Coil Array

Shunsuke Yoshida (NICT), Ryo Sugawara (Research Institute of Electrical Communication), Jiawei Huang (Tohoku University), Yoshifumi Kitamura (Tohoku University)

Abstract: This paper proposes an interactive rear-projection tabletop glasses-free 3D display using a novel wireless magnetic motion capture system. Our tracking system employs an electromagnetic field generator and 16 magnetic detectors. It detects the 3D positions of several small markers in the generated electromagnetic field. The detectors are arranged in a ring around the rim of the conical screen of the 3D display to avoid occluding the reproduced 360-degree-viewable 3D images. For the proposed configuration, our experimental results reveal that a toroidal area around a hemispherical 3D image display area allows the 3D position to be measured with sufficient accuracy. We implemented an application to demonstrate real-time interaction with virtual 3D objects displayed on the table using markers attached to a physical object like a stick or finger.

Live Coding of a VR Render Engine in VR

Markus Schütz (Institute of Visual Computing & Human-Centered Technology), Michael Wimmer (TU Wien)

Abstract: Live coding in virtual reality allows users to create and modify their surroundings through code without the need to leave the virtual reality environment. Previous work focuses on modifying the scene. We propose an application that allows developers to modify virtually everything at runtime, including not only the scene but also the render engine, shader code, and input handling, using standard desktop IDEs through a desktop mirror.

Haptic Interface Based on Optical Fiber Force Myography Sensor

Eric Fujiwara (University of Campinas), Yu Tzu Wu (University of Campinas), Matheus Kaue Gomes (University of Campinas), Willian Hideak Arita da Silva (University of Campinas), Carlos Kenichi Suzuki (University of Campinas)

Abstract: A haptic grasp interface based on the force myography technique is reported. Hand movements and forces during object manipulation are assessed by an optical fiber sensor attached to the forearm; the virtual contact is then computed, and the reaction forces are delivered to the subject through graphical and vibrotactile feedback. The system was successfully tested on different objects, providing a non-invasive and realistic approach for applications in virtual reality environments.

ImSpector: Immersive System of Inspection of Bridges/Viaducts

Mauricio Roberto Veronez (UNISINOS University), Luiz Gonzaga Jr. (UNISINOS University), Fabiane Bordin (UNISINOS University), Leonardo Campos Inocencio (UNISINOS University), Graciela Eliane dos Reis Racolte (UNISINOS University), Lucas Kupssinskü (UNISINOS University), Pedro Rossa (UNISINOS University), Leonardo Scalco (UNISINOS University)

Abstract: One of the main difficulties in inspecting bridges and viaducts by observation is inaccessibility, or lack of access, throughout the structure. Mapping with remote sensors on Unmanned Aerial Vehicles (UAVs) or by laser scanning can be an interesting alternative for the engineer, as it enables more detailed analysis and diagnostics. Such mapping techniques also allow the generation of realistic 3D models that can be integrated into Virtual Reality (VR) environments. In this sense, we present ImSpector, a system that uses realistic 3D models generated from remote sensors aboard UAVs and implements a virtual, immersive environment for inspections. As a result, the system gives the engineer a tool to carry out field tests directly from the office, ensuring agility, accuracy, and safety in bridge and viaduct inspections.

Reducing Cybersickness by Geometry Deformation

Ruding Lou (Arts et Métiers, Institut Image), Jean-Rémy Chardonnet (Arts et Métiers, Institut Image)

Abstract: In recent years, virtual reality technologies have been used in more and more digital activities. One major and well-known issue with VR experiences is cybersickness, which keeps users from accepting these technologies. The induced cybersickness stems from a feeling of self-motion produced when users see objects moving in the virtual world. Several methods to reduce cybersickness have been proposed in the literature, but they do not guarantee immersion and navigation quality. In this paper, a new method to reduce cybersickness is proposed: geometrically deforming the virtual model displayed in the peripheral field of view reduces the self-motion perceived by the user. Pilot test results show that visually induced self-motion is indeed reduced while immersion quality is preserved and the user's navigation parameters are kept.
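
One plausible reading of the peripheral deformation, sketched below as an assumption rather than the paper's actual operator: vertices are compressed toward the view axis with a weight that is zero in the central field of view and ramps up with angular eccentricity.

```python
import numpy as np

def deform_peripheral(vertices_view, inner_deg=40.0, outer_deg=100.0, amount=0.3):
    """vertices_view: (N, 3) positions in view space (camera at origin,
    -z forward). Returns deformed copies; the angles and compression law
    are illustrative guesses."""
    v = np.asarray(vertices_view, dtype=np.float64).copy()
    norms = np.maximum(np.linalg.norm(v, axis=1, keepdims=True), 1e-9)
    cos_ecc = (v / norms) @ np.array([0.0, 0.0, -1.0])
    ecc = np.degrees(np.arccos(np.clip(cos_ecc, -1.0, 1.0)))
    # Weight: 0 inside the central FOV, 1 at/beyond the outer eccentricity.
    w = np.clip((ecc - inner_deg) / (outer_deg - inner_deg), 0.0, 1.0)
    # Compress lateral (x, y) coordinates toward the view axis.
    v[:, :2] *= (1.0 - amount * w)[:, None]
    return v
```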

Obstacles Awareness Methods from Occupancy Map for Walking in VR

Marilyn Keller (GFI Informatique), Tristan Alexandre Charles TCHILINGUIRIAN (Gfi Informatique)

Abstract: With head-mounted displays (HMDs) equipped with extended tracking features, users can now walk in a room-scale space while being immersed in a virtual world. However, to fully exploit this feature and enable free walking, these devices still require a large physical space cleared of obstacles. This is an essential requirement that not every user can meet, especially at home, and this constraint limits the use of free walking in virtual reality (VR) applications.

In this poster, we propose ways of representing the physical obstacles surrounding the user. These are generated from an occupancy map and compared to a point-cloud representation. We propose three visualization modes: integrating the occupancy map into the virtual floor, generating lava lakes where the obstacles are, and building a semi-transparent wall along the obstacle boundaries. We found that although showing the obstacles on the floor has only a light impact on navigation, the preferred visualization mode remains the point cloud.
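
As an illustration of how the wall mode could be driven by the occupancy map, the sketch below extracts the obstacle boundary cells along which wall segments would be instantiated; the grid conventions are assumptions, not the poster's implementation.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_cells(occupancy):
    """occupancy: (H, W) bool grid, True where the room is occupied.
    Returns True only for occupied cells adjacent to free space."""
    occupancy = np.asarray(occupancy, dtype=bool)
    return occupancy & ~binary_erosion(occupancy)

# Each True cell maps to an (x, z) position in the tracked space where a
# semi-transparent wall segment (or a floor-texture splat) is placed.
```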

Head Pointer or Eye Gaze: Which Helps more in MR Remote Collaboration?

Peng Wang (Northwestern Polytechnical university), Shusheng Zhang (Northwestern Polytechnical University), Xiaoliang Bai (Northwestern Polytechnical university), Mark Billinghurst (University of South Australia), Weiping He (Northwestern Polytechnical university), Shuxia Wang (Northwestern Polytechnical University), Xiaokun Zhang (Northwestern Polytechnical University), Jiaxiang Du (Northwestern Polytechnical university), Yongxing Chen (Northwestern Polytechnical university)

Abstract: This paper investigates how two different gaze visualizations, the head pointer (HP) and eye gaze (EG), affect table-size physical tasks in Mixed Reality (MR) remote collaboration. We developed a remote collaborative MR platform that supports sharing the remote expert's HP and EG, and evaluated the prototype in a user study comparing the two conditions with respect to performance and quality of cooperation. There was a statistically significant difference between the two conditions in performance time, and HP proved to be a good proxy for EG in remote collaboration.

Shadows can change the shape appearances of real and virtual objects

Kazushi Maruya (Nippon Telegraph and Telephone corporation), Tomoko OHTANI (Tokyo University of the Arts)

Abstract: The human visual system can estimate the shape of an object casting a shadow. This principle has been widely studied and is used in 3D scanners. However, estimating the shape of the object on which a shadow is cast (the "screen object") has rarely been investigated. In this study, we show that the cast shadow distorts the perceived shape of the screen object in a class of virtual and physical scenes. In addition, we use the principle to create variations of the Café Wall illusion. These principles can be used to control perceived shapes without changing the structure of the screen object.

Edible Retroreflector Made of Candy

Miko Sato (Gunma University), Yuki Funato (Gunma University), Hiromasa Oku (Gunma University)

Abstract: In this research, we propose an edible retroreflector made of candy. The previously proposed edible retroreflector was made of agar, granulated sugar, and water, so it was vulnerable to drying and lost its function within a short period of time. Solid foodstuffs such as candy, however, are stable against drying, so a retroreflector made from candy can be expected to achieve a longer lifetime. We therefore developed edible retroreflectors made of candy and evaluated their performance as optical devices.

Novel View Synthesis with Multiple 360 Images for Large-Scale 6-DOF Virtual Reality System

Hochul Cho (KAIST), Jangyoon Kim (KAIST), Woontack Woo (KAIST)

Abstract: We present a novel view synthesis method that allows users to experience a large-scale six-degree-of-freedom (6-DOF) virtual environment. Our main contributions are the construction of a large-scale 6-DOF virtual environment from multiple 360 images and the synthesis of scenes from novel viewpoints. Novel view synthesis from a single 360 image can give players a free-viewpoint experience with full 6-DOF head motion, but the movable space is limited to the context of that image. We propose a novel view synthesis process that references multiple 360 images by reconstructing a large-scale virtual data map based on the real world, and performs weighted blending to interpolate multiple novel-view images. Our results show that the approach provides a wider virtual environment as well as smooth transitions between the reference 360 images.
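
One simple instantiation of the weighted blending step, under the assumption that each reference panorama has already been warped to the novel viewpoint, is inverse-distance weighting; the exponent and inputs below are illustrative, not the authors' pipeline.

```python
import numpy as np

def blend_panoramas(reprojected, ref_positions, novel_position, p=2.0):
    """reprojected: list of (H, W, 3) panoramas already reprojected to
    the novel viewpoint; ref_positions: (N, 3) capture positions;
    novel_position: (3,) target position."""
    d = np.linalg.norm(np.asarray(ref_positions) - np.asarray(novel_position),
                       axis=1)
    w = 1.0 / np.maximum(d, 1e-6) ** p   # nearer references weigh more
    w /= w.sum()
    stack = np.stack(reprojected).astype(np.float64)
    return np.tensordot(w, stack, axes=1).astype(reprojected[0].dtype)
```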

A VR Interactive Story Using POV and Flashback for Empathy

Byung-Chull Bae (Hongik University), Su-Ji Jang (Hongik University), Duck-Ki Ahn (Hongik University), Gapyuel Seo (Hongik University)

Abstract: In this paper we introduce our ongoing project to design and implement interactive storytelling in VR. Our particular design intent is to invoke narrative empathy through two narrative devices: change of POV (point of view) and flashback. Focusing on narrative empathy, we explore design considerations for the project, including story, characters and objects, events and the space of possibility, focalization (or POV), flashback, and VR implementation.

Revisiting Virtual Reality for Practical Use in Therapy: Patient Satisfaction in Outpatient Rehabilitation

Josephine Hartney (Augusta University), Sydney Nicole Rosenthal (Augusta University), Aaron Kirkpatrick (Augusta University), Maci Skinner (Augusta University), Jason Hughes (Augusta University), Jason Orlosky (Osaka University)

Abstract: Although Virtual Reality (VR) has made its way into many commercial applications, it has only begun to gain adoption in rehabilitation and therapy. Some research exists in this area, but we still need to better understand how interactive VR can be integrated into a fast-paced clinical setting. To address this challenge, we present the results of a pilot study with 25 participants who used an interactive application for upper-limb rehabilitation. Our interface consists of a VR display with two controllers that are used to clean and clear segments of a target virtual window. From observations of participants' challenges with the interface, feedback from the occupational therapy staff, and a subjective questionnaire using Likert scales, we examine ways to better integrate VR into a patient's scheduled rehabilitation.

Scrambled Body: A Method to Compare Full Body Illusion and Illusory Body Ownership of Body Parts

Ryota Kondo (Toyohashi University of Technology), Maki Sugimoto (Keio University), Masahiko Inami (University of Tokyo), Michiteru Kitazaki (Toyohashi University of Technology)

Abstract: Through illusory body ownership, humans can feel as if a fake body is their own. The illusion can be induced for a full body by visual-tactile or visual-motor synchronicity. In our previous study, illusory ownership of an invisible full body was elicited by the synchronous movements of virtual gloves and socks. In this study, we investigated whether the spatial relationship among body parts is necessary for the full body illusion, using a scrambled-body stimulus in which the positions of the gloves and socks were shuffled. The results suggest that the spatial relationship of body parts is necessary for the full body illusion.

Shared body by action integration of two persons: Body ownership, sense of agency and task performance

Takayoshi Hagiwara (Toyohashi University of Technology), Maki Sugimoto (Keio University), Masahiko Inami (University of Tokyo), Michiteru Kitazaki (Toyohashi University of Technology)

Abstract: Humans each have a single body of their own; in a virtual environment, however, one body can be shared between two persons. We developed a shared body: an avatar controlled by the integrated actions of two persons. The movements of two subjects were continuously captured and integrated into the avatar's motion at ratios of 0:100, 25:75, 50:50, 75:25, and 100:0. The subjects were not aware of the integration ratios and were asked to reach for cubes with the right hand. They felt more body ownership and sense of agency over the shared avatar when their share of control was higher. The reaching path of the avatar's hand was shorter in the shared-body condition (75:25) than in the single-body condition (100:0). These results suggest that we feel a degree of body ownership and sense of agency over a shared body, and that task performance is improved by body sharing.

SCUBA VR: Submersible-Type Virtual Underwater Experience System

Denik Hatsushika (University of Tsukuba), Kazuma Nagata (University of Tsukuba), Yuki Hashimoto (University of Tsukuba)

Abstract: In this paper, we propose an underwater virtual reality (VR) system for scuba training. The aim is to let scuba students experience arbitrary underwater environments and to enable VR-based training in limited water environments such as a pool or shallow water. With this system, scuba training that currently requires hands-on practice in open water can be reproduced as a VR experience in a pool. The system consists of a cable-connected PC-HMD (UWHMD: Underwater Wired Head Mounted Display) that can be used underwater and a motion capture system. In a pilot test, position tracking was successful over a width of about 2.25 m, a length of about 3.0 m, and a depth of about 1.3 m.

Tray Minh Voong (University of Osnabrueck), Michael Oehler (University of Osnabrueck)

Abstract: An approach is presented for practically determining which head-related transfer function (HRTF) profiles best fit individuals wearing bone conduction headphones. Such headphones may be particularly useful for visually impaired people (e.g., for navigation applications) because they do not obstruct the outer ear, so environmental sounds can still be perceived without restraint while wearing them. For fast and user-friendly identification of fitting HRTF profiles, an adapted tournament system is proposed. The results of the tournament method, in which participants rated overall preference, externalization, and envelopment, correlated well with the results of the localization task; the correlation was higher for the conventional-headphones condition than for the bone-conduction condition. Analyses of the transmission characteristics show an uneven frequency response for bone conduction headphones compared to conventional headphones or speakers. Future research will investigate whether these findings are relevant for auditory spatial perception at all, and to what extent best-fitting HRTFs may compensate for these phenomena.

Towards a Virtual Memory Palace

Chang (Angelina) Liu (Duke University), Regis Kopper (Duke University), Brian H Lee (Duke University)

Abstract: Mnemonics are powerful tools for memorizing large amounts of complex information. Among the different techniques, the memory palace, also known as the method of loci, yields significant improvements in memory performance for long lists of numbers, objects, and even texts. Substantial research has been conducted on the validity of applying this complex technique in education, as well as on using VR to ease the process of learning it. Experiments have shown promising results suggesting that a virtual environment helps enhance spatial memory. However, previous studies have not investigated factors such as training participants over several stages of learning or including interactivity in the virtual environment. The objective of this pilot project is to assess such effects systematically and to suggest guidelines for improving future studies on virtual memory palaces.

Estimation of Rotation Gain Thresholds for Redirected Walking Considering FOV and Gender

Niall Williams (Davidson College), Tabitha C. Peck (Davidson College)

Abstract: Redirected walking techniques enable users to naturally locomote in virtual environments (VEs) that are larger than the tracked space. Redirected walking imperceptibly transforms the VE around the user using gains kept below estimated detection thresholds. Previously estimated gains were evaluated with a 40° field of view (FOV). We conducted a within-participant user study to estimate and compare detection thresholds for rotation gains. Significant differences in detection thresholds were found between FOVs: with a 110° FOV, rotations can be decreased by 31% and increased by 49%, compared to 18% and 47% with a 40° FOV. Significant differences were also found between female and male gains with a 110° FOV.
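
For context, a rotation gain simply scales the user's physical head rotation before it is applied to the virtual camera. The sketch below uses the 110° FOV thresholds reported above as clamp limits; the update-loop framing is an assumption for illustration.

```python
def redirected_yaw_delta(real_yaw_delta_deg, gain, g_min=0.69, g_max=1.49):
    """Scale a physical yaw change by a rotation gain. With a 110° FOV,
    gains in roughly [0.69, 1.49] (rotation decreased 31% / increased
    49%) stayed below the detection threshold in the study above."""
    gain = max(g_min, min(g_max, gain))
    return gain * real_yaw_delta_deg

# e.g., steering the user away from a tracked-space wall each frame:
# virtual_camera_yaw += redirected_yaw_delta(tracked_yaw_delta, gain=1.3)
```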

An Open Initiative for the Delivery of Infinitely Scalable and Animated 3D Scenes

Gwendal Simon (Adobe), Vishy Swaminathan (Adobe)

Abstract: Planet-scale Augmented Reality (AR) and photorealistic Virtual Reality (VR) are two examples of applications that require the delivery of a rich, wide, and animated 3D scene. We extend this concept to infinitely scalable and animated 3D scenes and propose key concepts to launch an open initiative for their delivery. Drawing on lessons learned from large-scale video streaming, we design a pull-based mechanism with segmented representations of objects and periodic map updates.
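
Purely as a hypothetical shape for such a pull-based mechanism, modeled on DASH-style video streaming rather than on any published specification of the initiative, a client-side view might look like this:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Representation:        # one quality tier of one object's geometry
    url: str                 # where the client pulls the segment from
    lod: int                 # level of detail (0 = finest)
    byte_size: int

@dataclass
class SceneObject:
    object_id: str
    position: tuple          # coarse anchor used for visibility culling
    representations: List[Representation] = field(default_factory=list)

@dataclass
class SceneMap:              # refreshed periodically, like a DASH manifest
    version: int
    refresh_after_s: float
    objects: List[SceneObject] = field(default_factory=list)

def pick_representation(obj: SceneObject, distance_m: float) -> Representation:
    # Pull finer detail for nearby objects, coarser for distant ones.
    tier = 0 if distance_m < 10 else 1 if distance_m < 50 else 2
    return obj.representations[min(tier, len(obj.representations) - 1)]
```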

Fantasy Gaming and Virtual Heritage

Wanqi Zhou (The Australian National University), Kit Devine (The Australian National University), Henry Gardner (The Australian National University)

Abstract: Virtual worlds are increasingly being developed to provide a navigable space for museum collections. The Virtual Sydney Rocks (VSR) is one such virtual world that has been constructed as an authentic representation of the area of first European settlement in Australia 230 years ago - from a time just before its settlement until the present day. We constructed two versions of a game for children in order to motivate interaction with, and learning of, the historical content of the VSR. One of these game versions contained a number of fantasy design elements with the idea that their inclusion would motivate children to engage with VSR content in a pleasurable way. A preliminary study with a small number of school-aged children was not able to show that the inclusion of fantasy elements affected either the comprehension of historical facts or the quality of moral judgements made by the children. This study did, however, provide some evidence that interactivity with the relevant parts of the virtual world may have affected the retention of historical facts.

Haptic Rendering for Chinese Characters Recognition

Xinli Wu (Zhejiang Sci-Tech University), Jiali Luo (Zhejiang Sci-Tech University), Minxiong Zhang (Zhejiang Sci-Tech University), Wenzhen Yang (Zhejiang Sci-Tech University), Zhigeng Pan (Hangzhou Normal University)

Abstract: This paper explores new approaches to recognizing Chinese characters in images through the sense of touch. We extract a gradient image that fuses brightness and chrominance information using a flow-based difference-of-Gaussians algorithm, and use the edge tangent flow method to smooth image contours, obtaining smooth character outlines with salient edge features. A tactile generation algorithm with five mechanical elements then produces stable and continuous tactile information based on the normal vectors of the triangular patches of the surface. To verify the feasibility and effectiveness of the proposed methods, we developed a Chinese character tactile sensing system. Experiments show that Chinese characters in images can be recognized by touch with high recognition accuracy.
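
The non-flow-based core of the contour step is a difference of Gaussians (DoG); a plain OpenCV version is sketched below with illustrative parameters. The paper's flow-based variant additionally orients the filter along the edge tangent flow, which this sketch omits.

```python
import cv2
import numpy as np

def dog_edges(gray, sigma=1.0, k=1.6, tau=0.98, thresh=0.0):
    """gray: (H, W) uint8 grayscale image. Returns a binary contour map."""
    g1 = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma)
    g2 = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma * k)
    dog = g1 - tau * g2                 # band-pass response around edges
    return (dog > thresh).astype(np.uint8) * 255
```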

Optical Fiber 3D Shape Sensor for Motion Capture

Eric Fujiwara (University of Campinas), Yu Tzu Wu (University of Campinas), Luiz Evaristo da Silva (University of Campinas), Hugo Eugenio de Freitas (University of Campinas), Stenio Aristilde (University of Campinas), Cristiano Monteiro de Barros Cordeiro (University of Campinas)

Abstract: An optical fiber 3D shape sensor for motion capture is reported. The probe consists of Bragg grating strain sensors embedded in a 3D-printed polymer fiber substrate, allowing continuous assessment of bending magnitude and direction. The device was tested on measurements of forearm movements, and the captured motions could be correctly replicated in an avatar, providing a simple and minimally invasive approach for applications in virtual reality.

Human, Virtual Human, Bump! A Preliminary Study on Haptic Feedback

Claudia Krogmeier (Purdue University), Christos Mousas (Purdue University), David Matthew Whittinghill (Purdue University)

Abstract: How does haptic feedback during a human-virtual human interaction affect emotional arousal in virtual reality? In this between-subjects study, we compare haptic feedback and no haptic feedback conditions in which a virtual human "bumps" into the participant, in order to determine the influence of haptic feedback on emotional arousal, sense of presence, and embodiment in virtual reality, and to compare self-reported emotional arousal to arousal objectively measured via event-related galvanic skin response (GSR) recordings. We plan to extend this preliminary study by adding three more conditions, as described in the future work section. Participants are students aged 18-30 with at least moderate experience in virtual reality. Preliminary results indicate significant differences in presence and embodiment between the haptic feedback and no haptic feedback groups. With our current small sample size, GSR does not show significant differences between the haptic and no haptic feedback conditions.

Thermal HDR: Applying High Dynamic Range Rendering for Fusion of Thermal Augmentations with Visible Light

Lance Zhang (Osaka University), Jason Orlosky (Osaka University)

Abstract: In safety applications for fields such as navigation or industrial manufacturing, thermal augmentations have often been used to help users safely navigate low-light environments and detect obstacles. However, one problem with thermal imaging is that it often occludes environmental detail and coloration that would otherwise be useful, for example traffic-sign text or warning labels on industrial equipment.

In this paper, we explore a new algorithm that takes advantage of high dynamic range (HDR) rendering techniques to present thermal information more effectively. Unlike previous fusion algorithms or conventional blending techniques, our setup uses both a thermal camera and HDR frames to synthesize the final overlay. We then conducted a series of two experiments with a simulated head-up display (HUD): 1) measuring reaction times to the sudden appearance of pedestrians, and 2) conducting circuit repair. Results showed that the HDR algorithm was subjectively preferred over other approaches and that its performance on average matched conventional algorithms on most occasions.
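
For contrast, a conventional blending baseline of the kind the HDR algorithm is compared against might look like the sketch below: only sufficiently hot pixels are colored in, so sign text and labels stay legible elsewhere. Alignment between the cameras is assumed solved, and the threshold is an illustrative parameter, not the paper's algorithm.

```python
import cv2
import numpy as np

def overlay_thermal(visible_bgr, thermal_gray, hot_thresh=0.6, max_alpha=0.7):
    """visible_bgr: (H, W, 3) uint8 visible-light frame; thermal_gray:
    (H, W) uint8 aligned thermal image, warmer pixels brighter."""
    heat = cv2.applyColorMap(thermal_gray, cv2.COLORMAP_JET)
    t = thermal_gray.astype(np.float32) / 255.0
    # Blend strength ramps from 0 at the threshold to max_alpha at full heat.
    alpha = np.clip((t - hot_thresh) / (1.0 - hot_thresh), 0.0, 1.0) * max_alpha
    alpha = alpha[..., None]
    out = alpha * heat.astype(np.float32) + (1 - alpha) * visible_bgr.astype(np.float32)
    return out.astype(np.uint8)
```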

Hybrid Camera System for Telepresence with Foveated Imaging

Muhammad Firdaus Syawaludin Lubis (Korea Institute of Science and Technology), Chanho Kim (Korea Institute of Science and Technology), Jae-In Hwang (Korea Institute of Science and Technology)

Abstract: To improve the sense of telepresence of a local HMD user, a high-resolution view of the remote environment is necessary. However, current commodity omnidirectional cameras cannot provide enough resolution for the human eye, and using a higher-resolution omnidirectional camera is infeasible because it would increase the streaming bandwidth. We propose a hybrid camera system that conveys higher resolution for the region of interest in the HMD user's viewport within the available bandwidth. The hybrid camera consists of an omnidirectional camera and a PTZ camera placed close to each other. The HMD user's head orientation controls the PTZ camera orientation, and the user also controls the PTZ camera's zoom level, up to its maximum optical zoom, to achieve higher resolution. The remote environment views from the two cameras are streamed to the HMD user and stitched into one combined view. This combined view simulates the human visual system (HVS) phenomenon called foveation, in which only a small part of the visual field is seen in high resolution while the rest is seen in low resolution.
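
Schematically, the foveated compositing amounts to pasting the high-resolution PTZ view into the viewport region of the low-resolution omnidirectional frame, as in the sketch below; registration between the two cameras is assumed solved, and the real system stitches with proper warping rather than a rectangular paste.

```python
import cv2
import numpy as np

def foveate(omni_view, ptz_frame, center_xy, fovea_frac=0.25):
    """omni_view: (H, W, 3) low-res viewport rendering; ptz_frame:
    high-res image of the region around center_xy (pixel coords);
    fovea_frac: fovea size as a fraction of viewport height."""
    h, w = omni_view.shape[:2]
    r = int(h * fovea_frac / 2)
    cx, cy = center_xy
    x0, x1 = max(cx - r, 0), min(cx + r, w)
    y0, y1 = max(cy - r, 0), min(cy + r, h)
    patch = cv2.resize(ptz_frame, (x1 - x0, y1 - y0))  # dsize = (width, height)
    out = omni_view.copy()
    out[y0:y1, x0:x1] = patch   # high-res fovea, low-res periphery
    return out
```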

Edible lens made of agar

Miyu Nomura (Gunma University), Hiromasa Oku (Gunma University)

Abstract: In this paper, we propose an edible lens made from foodstuffs. Optical lenses are used to form optical systems, so it should be possible to make optical systems edible by preparing an edible lens with the same function as a conventional one. To realize this, we developed prototype edible lenses made of agar, since agar has previously been reported as a material for edible retroreflectors, and investigated their optical performance.

Odor Modulation by Warming/Cooling Nose Based on Cross-modal Effect

Yuichi Fujino (Osaka University), Haruka Matsukura (Osaka University), Daisuke Iwai (Osaka University), Kosuke Sato (Osaka University)

Abstract:

Human Face Reconstruction under an HMD Occlusion

Zhengfu Peng (South China University of Technology), Ting Lu (School of Electronic and Information Engineering), Zhaowen Chen (South China University of Technology), Xiangmin Xu (South China University of Technology), Lin Shu (SCUT)

Abstract: With the help of existing augmented vision perception and motion capture technologies, virtual reality (VR) can immerse users in virtual environments. However, it is difficult for users to convey their actual emotions to others in these environments. Since head-mounted displays (HMDs) significantly obstruct the user's face, it is hard to recover the full face directly with traditional techniques. In this paper, we introduce a novel method that addresses this problem using only an RGB image of the person, without the need for any other sensors or devices. First, we utilize facial landmark points to estimate the user's face shape, expression, and pose. Then, using the information in the non-occluded face area, we recover the face texture and the illumination of the current scene.

Real-Time Collaborative Animation of 3D Models with Finger Play and Hand Shadow

Amato Tsuji (Kogakuin University), Keita Ushida (Kogakuin University), Saneyasu Yamaguchi (Kogakuin University), Qiu Chen (Kogakuin University)

Abstract: The authors propose a method for real-time collaborative animation of 3D models with finger play and hand shadow. Two or more users can animate a model more complexly and expressively than one user alone. For instance, a model of a crocodile is animated by two users: one manipulates its mouth, neck, and tail, and the other manipulates its four legs. With the implemented system, up to five users can manipulate one model collaboratively. The evaluations revealed the following: 1) users could manipulate models without detailed instruction; 2) most participants found the system operable and enjoyable; 3) the motions made with the system were no less cute, amusing, and lively than those created by animators with conventional methods.

Virtual Reality Instruction Followed by Enactment can Increase Procedural Knowledge in a Science Lesson

Niels Andreasen (Aalborg University), Sarune Baceviciute (University of Copenhagen), Prajakt Pande (Roskilde University Center), Guido Makransky (University of Copenhagen)

Abstract: A 2×2 between-subjects experiment (a) investigated and compared the instructional effectiveness of immersive virtual reality (VR) versus video as media for teaching scientific procedural knowledge, and (b) examined the efficacy of enactment as a generative learning strategy in combination with the respective instructional media. A total of 117 high school students (74 females) were randomly distributed across four instructional groups: VR and enactment, video and enactment, VR only, and video only. Outcome measures included declarative knowledge, procedural knowledge, knowledge transfer, and subjective ratings of perceived enjoyment. Results indicated no main effects or interactions for the outcomes of declarative knowledge or transfer. However, there was a significant interaction between media and method for procedural knowledge, with the VR-and-enactment group performing best. Furthermore, media also had a significant effect on perceived enjoyment, with the VR groups enjoying the simulation significantly more than the video groups. The results deepen our understanding of how we learn with immersive technology, and suggest important implications for implementing VR in schools.

Working Memory Load Performance Based on Collocation of Virtual and Physical Hands

Altan Tutar (Davidson College), Tabitha C. Peck (Davidson College)

Abstract: The use of lifelike hands in virtual reality simulators is common; however, research into how the human brain perceives hands in virtual environments is limited. Self-avatars are a great way to improve the user's presence and perception of space [6], but the precise implementation of avatars is arduous, and including only hands is an attractive alternative. Earlier psychology research reported that the closer the hands are to a studied object, the lower the working memory load. We hypothesized that in virtual environments, virtual hands collocated with the user's hands should improve the user's working memory load, and we tested our hypothesis with a between-participant study (n=30) measuring working memory load with the Stroop Interference Task.

A decomposition approach for complex gesture recognition using DTW and prefix tree

Hui Chen (King Abdullah University of Science and Technology)

Abstract: Gestures are effective tools for expressing emotions and conveying information to the environment. Sequence matching and machine-learning-based algorithms are the two main approaches to recognizing continuous gestures. Machine-learning-based recognition systems are not flexible with respect to new gestures because the models have to be retrained, while the computational time required by matching methods increases with the complexity and number of gesture classes. In this work, we propose a decomposition approach for complex gesture recognition using dynamic time warping (DTW) and a prefix tree. The system can recognize 100 gestures with an accuracy of 97.38%.
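
For readers unfamiliar with the matching half of this approach, the sketch below shows textbook DTW between a query gesture and a stored template; the paper's contribution, the prefix-tree decomposition, would sit on top of such a primitive and share computation between gestures with common prefixes. Function and variable names are illustrative:

```python
# Classic DTW distance between two sequences of feature vectors (generic
# sketch, not the paper's optimized implementation).
import numpy as np

def dtw_distance(query: np.ndarray, template: np.ndarray) -> float:
    """query: TxD, template: SxD sequences of D-dimensional feature vectors."""
    T, S = len(query), len(template)
    cost = np.full((T + 1, S + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, T + 1):
        for j in range(1, S + 1):
            d = np.linalg.norm(query[i - 1] - template[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[T, S]

def classify(query, templates):
    """templates: dict mapping gesture label -> template sequence."""
    return min(templates, key=lambda g: dtw_distance(query, templates[g]))
```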

Shooter Bias and Socioeconomic Status in Virtual Reality

Evan Blanpied (Davidson College), Jessica J Good (Davidson College), Tabitha C. Peck (Davidson College)

Abstract: Our poster details an ongoing experiment aiming to test the prevalence of shooter bias in virtual reality. Further, we examine the interaction between shooter bias and perceived socioeconomic status. Even though shooter bias is a well-documented topic in psychology research, little experimentation has used virtual reality, instead opting for unrealistic two-dimensional simulations. This study will yield new insight into shooter bias, especially concerning virtual reality as a tool for use in future work, and will provide new understanding about the relationship between socioeconomic status and shooter bias.

Streaming a Sequence of Textures for Adaptive 3D Scene Delivery

Gwendal Simon (Adobe), Stefano Petrangeli (Adobe), Nathan Carr (Adobe), Vishy Swaminathan (Adobe)

Abstract: Delivering rich, high-quality 3D scenes over the internet is challenged by the size of the 3D objects in terms of geometry and textures. This paper proposes a new method for the delivery of textures, which are encoded and delivered as a video sequence rather than independently. Implemented on existing video delivery infrastructure, our method provides fine-grained control over the quality of the resulting video sequence.
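
A minimal sketch of the underlying idea, assuming equally sized textures and OpenCV's stock VideoWriter rather than the authors' pipeline: packing the texture sequence into a video lets a standard codec exploit redundancy across textures.

```python
# Illustrative only: encode a sequence of textures as frames of one video so
# an off-the-shelf codec compresses them jointly. File names are hypothetical.
import cv2

def encode_textures_as_video(texture_paths, out_path="textures.mp4", fps=30):
    first = cv2.imread(texture_paths[0])
    h, w = first.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for path in texture_paths:
        tex = cv2.imread(path)
        assert tex.shape[:2] == (h, w), "all textures must share one resolution"
        writer.write(tex)  # each texture becomes one video frame
    writer.release()
```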

Kantenbouki VR: A Virtual Reality Authoring Tool for Learning Localized Weather Reporting

Nicko Reginio Caluya (Nara Institute of Science and Technology), Marc Ericson C. Santos (Weathernews Inc.)

Abstract: Localized weather reporting based on human observation is a skill practiced by fishermen, seafarers, airport ground staff, among others. Weather reporting largely depends on accurately identifying cloud types, and judging visibility and distances. This skill is developed by first-hand experience of various weather phenomena. As such, beginners would have difficulty practicing this reporting task without experiencing the phenomena, without proper guidance of experts, and without appropriate tools to support their learning. We present Kantenbouki VR, a virtual reality authoring tool, which can solve various issues in the process of learning weather reporting based on human observation. Using a 360-degree camera capture as an image background in the application, users can create annotations via free-form drawings accompanied by a cloud type label. To test our idea, we conducted an exploratory design study. Observations and results show that users learned how to use the system quickly, and approached the annotation tasks differently. Features that can be improved include the fidelity of the cloud images, as well as mapping of actions to controller buttons.

Customizing Climate Change on your Plate: A VR Seafood Buffet

Daniel Pimentel (University of Florida), Ricardo Amaya (University of Florida), Shivashankar Halan (University of Florida), Sri Kalyanaraman (University of Florida), Jeremy Bailenson (Stanford University)

Abstract: The use of virtual reality (VR) to depict climate change impacts is a popular strategy used by environmental, news, and political organizations to encourage pro-environmental outcomes. However, despite widespread dissemination of immersive content, climate change mitigation efforts remain tepid. In response, we present a VR simulation conveying the adverse effects of climate change in a personally-relevant fashion. The “Virtual Seafood Buffet” experience allows users to select from dozens of lifelike virtual seafood items and experience the degradation of that particular species based on projected climate change impacts. The developed simulation is proposed as an intervention designed to encourage climate change mitigation efforts. An overview of the simulation, its purpose, and directions for future research are outlined herein.

Towards a Framework on Accessible and Social VR in Education

Anthony Scavarelli, Ali Arya, Robert J Teather

Abstract: In this extended abstract, we argue that for virtual reality to be a successful tool in social learning spaces (e.g. classrooms or museums) we must also look outside the virtual reality literature to provide greater focus on accessible and social collaborative content. We explore work within Computer Supported Collaborative Learning (CSCL) and social VR domains to move towards developing a design framework for socio-educational VR. We also briefly describe our work-in-progress application framework, Circles, including these features in WebVR.

A Study in Virtual Reality on (Non-)Gamers’ Attitudes and Behaviors

Sebastian Stadler (TUMCREATE), Henriette Cornet (TUMCREATE), Fritz Frenkler (Technical University of Munich)

Abstract: Virtual Reality (VR) constitutes an advantageous alternative for research on scenarios that are not feasible to study in real-life conditions. Thus, this technology was used in the presented study for the behavioral observation of participants exposed to autonomous vehicles (AVs). Further data were collected via questionnaires before the experience, directly after it, and one month later to measure the impact the experience had on participants' general attitude towards AVs. Although the results were not statistically significant, first insights suggest that participants with low prior gaming experience were more affected than gamers. Future work will involve a bigger sample size and refined questionnaires.

Virtual Rotation with Visuo-Haptics

Akihiro Nakamura (Kyushu University), Shinji Sakai (Kyushu University), Kazunori Shidoji (Kyushu University)

Abstract: Redirected walking has been proposed as a means of making a narrow space feel wide in a virtual space. We examined the effect of visuo-haptics on the detection threshold of rotation gain when participants walked around a wall. We found that the threshold was affected by visuo-haptics only when participants walked around the outside of the wall, and not when they walked around the inside of the wall.

Eye-gaze-triggered Visual Cues to Restore Attention in Educational VR

Andrew Yoshimura (University of Louisiana at Lafayette), Christoph W Borst (University of Louisiana at Lafayette), Adil Khokhar (University of Louisiana at Lafayette)

Abstract: In educational virtual reality, it is important to deal with problems of student inattention to presented content. We are developing attention-restoring visual cues for display when gaze tracking detects that student focus shifts away from critical objects. These cues include both novel cues and variations of standard cues that performed well in prior work on visual guidance. For the longer term, we propose experiments to compare various cues and their parameters to assess effectiveness and tradeoffs, and to assess the impact of eye tracking. Eye tracking is used both to detect inattention and to control the appearance and location of cues.

I Got your Point: An Investigation of Pointing Cues in a Spherical Fish Tank Virtual Reality Display

Fan Wu (University of British Columbia), Qian Zhou (University of British Columbia), Kyoungwon Seo (University of British Columbia), Toshiro Kashiwagi (Future University Hakodate), Sidney S Fels (University of British Columbia)

Abstract: Pointing is a fundamental building block of human communication. While pointing is ubiquitous in our daily interactions in the real world, it is difficult to precisely interpret a virtual agent's pointing direction toward the physical world, given its complex and subtle gesture cues, such as movements of the hand and head. Fish Tank Virtual Reality (FTVR) displays have the potential to provide accurate pointing cues, as they create a compelling 3D spatial effect by rendering perspective-corrected vision. In this paper, we conducted a study with three levels of pointing cues (Head-only, Hand-only, and Hand+Head) to evaluate how head and hand gesture cues affect observers' interpretation of where a virtual agent is pointing in a spherical FTVR display. The results showed that the hand gesture significantly helps people interpret pointing both accurately and quickly for fine pointing (15°), with 19.4% higher accuracy and 1.42 seconds faster responses than the head cue. The combination of head and hand yielded a small improvement in accuracy (4.4%) at a slightly longer time (0.38 seconds) compared to the hand-only cue. However, for coarse pointing (30°), the head cue appears to be sufficient, with an accuracy of 90.2%. The results of this study provide guidelines on cue selection for designing pointing in virtual environments.

Viscosity-based Vorticity Correction for Turbulent SPH Fluids

Sinuo Liu (University of Science and Technology Beijing), Xiaokun Wang (University of Science and Technology Beijing), Xiaojuan Ban (University of Science and Technology Beijing), Yanrui Xu (University of Science and Technology Beijing), Jing Zhou (University of Science and Technology Beijing), Yalan Zhang (University of Science and Technology Beijing)

Abstract: A critical problem of Smoothed Particle Hydrodynamics (SPH) methods is the numerical dissipation in the viscosity computation. This leads to unrealistic results in which high-frequency details, like turbulence, are smoothed out. To address this issue, we introduce a viscosity-based vorticity correction scheme for SPH fluids that requires neither complex time integration nor limited time steps. In our method, the energy difference in the viscosity computation is used to correct the vorticity field. Instead of solving Biot-Savart integrals, we adopt a stream function, which is easier to solve and more efficient, to recover the velocity field from the vorticity difference. Our method can significantly amplify existing vortices and generate additional turbulence at potential positions. Moreover, it is simple to implement and can be easily integrated with other SPH methods.
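
For reference, the standard stream-function relations the abstract alludes to (our summary of the general formulation, not the paper's exact discretization) are:

```latex
\omega = \nabla \times \mathbf{v}, \qquad
\nabla^2 \boldsymbol{\Psi} = -\,\Delta\omega, \qquad
\Delta\mathbf{v} = \nabla \times \boldsymbol{\Psi}
```

where Δω is the vorticity difference produced by the viscosity step; solving the Poisson equation for the stream function Ψ and taking its curl yields the velocity correction without evaluating Biot-Savart integrals.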

Augmented Concentration: Concentration Improvement by Visual Noise Reduction with a Video See-Through HMD

Masaki Koshi (Nara Institute of Science and Technology), Nobuchika Sakata (Nara Institute of Science and Technology), Kiyoshi Kiyokawa (Nara Institute of Science and Technology)

Abstract: We propose a concentration improvement technique using a video see-through head-mounted display (HMD). Our technique reduces visual noise, or disturbing visual stimuli such as moving objects near the central visual field, by lowering their visual saliency in real time. Earphones and noise-cancelling headphones are often used to shut out auditory noise from the surroundings when we need to concentrate on a job, and several studies have proven the effectiveness of such noise reduction in improving concentration and learning efficiency. We apply this analogy to visual noise and propose a visual noise reduction HMD to improve concentration. In this article, we report on two preliminary user studies investigating the effectiveness of our method. In the prototype system, we manually specify a part of the visual field as the working area and apply grayscale and blur filters outside it in real time. We compared two conditions, "no effect" and "strong blur" (Exp 1) or "weak blur" (Exp 2), using a simple math task. Our preliminary results show that visual noise reduction improves the task completion time by 10% on average for Exp 1 and 7.9% for Exp 2.
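
A minimal sketch of the described filtering, assuming a manually specified rectangular working area as in the prototype (kernel size and function names are illustrative, not the authors' parameters):

```python
# Grayscale + blur everything outside a rectangular working area, leaving the
# working area sharp and in color. Illustrative sketch of the stated filters.
import cv2

def suppress_visual_noise(frame, roi):
    """frame: HxWx3 uint8 camera image; roi: (x, y, w, h) working area."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    background = cv2.GaussianBlur(cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR), (31, 31), 0)
    x, y, w, h = roi
    out = background.copy()
    out[y:y + h, x:x + w] = frame[y:y + h, x:x + w]  # keep working area untouched
    return out
```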

Tendon Vibration Increases Vision-induced Kinesthetic Illusions in a Virtual Environment

Daiki Hagimori (Nara Institute of Science and Technology), Shunsuke Yoshimoto (Osaka University), Nobuchika Sakata (NAIST), Kiyoshi Kiyokawa (Nara Institute of Science and Technology)

Abstract: In virtual reality (VR) systems, a user avatar is typically manipulated based on actual body motion in order to present natural, immersive sensations resulting from proprioception. It would be useful to be able to feel a large body motion in a virtual environment while the actual motion in the real environment is smaller. As a method to achieve this, we have been working on kinesthetic illusions induced by tendon vibration. In this article, we report on a user study focusing on the effects of combining visual stimuli with tendon vibration. In the user study, we measured subjects' perceived elbow angles while they observed the corresponding virtual arm bending to 90 degrees, with rotation amplification ratios between 1.0 and 1.5, with and without tendon vibration on the wrist joint. The results show that the perceived angle increases by up to 8 degrees when tendon vibration is applied, roughly in proportion to the rotation amplification ratio. Our study suggests that the vision-induced kinesthetic illusion is further increased by tendon vibration, and that our technique can be applied in VR applications to make users feel larger body motions than those in the real environment.

The Influence of Body Position on Presence When Playing a Virtual Reality Game

Aelee Kim (Seoul National University), Minyoung Kwon (Seoul National University), Minha Chang (Seoul National University), Sunkyue Kim (Seoul National University), Dahei Jung (Seoul National University), Kyoung-Min Lee (Seoul National University)

Abstract: In this study, we explore how proprioception relates to the sense of presence in a virtual environment. To investigate this, we compared three body conditions (Standing vs. Sitting vs. Half-Sitting) to examine whether body position influences the level of presence when playing a virtual reality (VR) game. The results showed that participants who played the game in a standing position felt greater presence than those who were sitting or half-sitting, while the degree of presence was higher for sitting than for half-sitting participants. In addition, a correlation analysis of the control, sensory, and attention factors associated with presence revealed a strong connection between the control and sensory factors, whereas the attention factor was moderately linked to the control and sensory factors. Based on these results, we may assume that variations in movement related to body position influence proprioception, which consequently affects the sense of presence. Overall, this study confirms the important association between body position and presence in VR.

Interaction Design for Selection and Manipulation on Immersive Touch Table Display Systems for 3D Geographic Visualization

Karljohan Lundin Palmerius (Linköping University), Jonas Lundberg (Linköping University)

Abstract: Geographic visualizations are, due to the limited need for vertical navigation, well suited to touch tables. In this poster we consider interaction design for selection and manipulation through touch on the screen used to display a 3D geographic visualization—in our case the visualization of and interaction with drone traffic over rural and urban areas—focusing on the move from a monoscopic to a more immersive, stereoscopic touch table, and how this move affects the interaction design. With a monoscopic display, stereoscopic vision places the graphics at the screen surface, so touch interaction can naturally and intuitively be performed on top of 3D objects. Moving to a stereoscopic display, for an increased sense of immersion, the graphics no longer provide visual cues about the location of the screen. We argue that this motivates a modification of the design principles, resulting in an alternative interaction design.

Development of Wearable Motion Capture System using Fiber Bragg Grating Sensors for Measuring Arm Motion

Minsu Jang (Korea Institute of Science and Technology), Jun Sik Kim (Korea Institute of Science and Technology), Kyumin Kang (Korea Institute of Science and Technology), Sugnwook Yang (Korea Institute of Science and Technology), Soong Ho Um (Sungkyunkwan University), Jinseok Kim (Korea Institute of Science and Technology)

Abstract: Motion capture systems are gaining much attention in various fields, including entertainment, medicine, and sports. Although many types of motion capture sensors have emerged, they have limitations and disadvantages such as occlusion, drift, and interference from electromagnetic fields. Here, we introduce a novel wearable motion capture system using fiber Bragg grating (FBG) sensors. Since human joints have different degrees of freedom (DOF), we developed three types of sensors to reconstruct body motion from the strains induced on the FBGs. First, a shape sensor using three fibers provides the position and orientation of joints in three-dimensional space. Second, we introduce an angle sensor capable of measuring high-curvature bending angles using a single fiber. Lastly, to detect the twisting of joints, a sensor with a fiber spirally attached to a soft material is used. With these optical-fiber-based motion capture sensors, we reconstruct arm motion in real time; the joints of the arm include the sternoclavicular, acromioclavicular, shoulder, and elbow joints. By arranging the three types of sensors on the joints in accordance with their DOF, the accuracy of the reconstructed motion was evaluated, resulting in an average error below 2.42°. Finally, to prove the feasibility of application in virtual reality, we successfully manipulated a virtual avatar in real time.

Analyzing the Usability of Gesture Interaction in Virtual Driving System

Dan Zhao (Beijing Institute of Technology), Yue Liu (Beijing Institute of Technology), Yongtian Wang (Beijing Institute of Technology), Tong Liu (Beijing Institute of Technology)

Abstract: In this study, an experiment is presented aiming at verifying the applicability of gesture interaction in a virtual driving environment. 30 participants were recruited to perform secondary tasks with gesture and touch interaction. The task completion rate and reaction time of the two interaction modalities under different road conditions were adopted as evaluation indexes. In addition, visual attention, NASA-TLX, and subjective questionnaires were collected as evaluation factors for an entropy-based fuzzy comprehensive evaluation of gesture usability. The results show that gesture interaction not only excels in safety but is also favored by more than 90% of users.

A Simulation for Examining the Effects of Inaccurate Head Tracking on Drivers of Vehicles with Transparent Cockpit Projections

Patrick Lindemann (Technical University of Munich), Rui Zhu (Technical University of Munich), Gerhard Rigoll (Technical University of Munich)

Abstract: The transparent cockpit (TC) is a driver-car interface concept in which processed camera images of the surrounding environment are superimposed onto the car interior to give the driver the ability to see through the cockpit. For a perspectively accurate experience from the driver’s point of view, head tracking is required. However, a real-world system may be prone to typical tracking errors, especially while driving. In this work, we present a TC simulation using artificial errors of various types to examine how much each type affects driver performance and experience. First results of an initial study show that there is generally no significant deterioration of lateral performance compared to a perfectly accurate TC. Repeated loss of tracking was least noticed by participants. Accuracy (miscalibration) and precision (jitter) errors were noticed the most.

Where are you walking?: Reality of telepresence under walking

Taro Maeda (Osaka University)

Abstract: The main contribution of this report is to realize a telepresence walking experience through control of an avatar robot. The advantage of telepresence technology is that it enables the experience of reality as if the user himself existed in a remote place. However, realizing the reality of walking with telepresence has previously been difficult. When an avatar robot moves, telepresence creates social interaction with neighbors as a virtual user, not just a remote traveling experience. An advantage of realizing the reality of walking is that the user can maintain sociality through subliminal responses to this remote interaction. In this report, we verify the conditions that enable the user to experience walking with immersive reality via telepresence technology in which the avatar robot tracks the walking trajectory of the user in real time.

Virtual Reality for Virtual Commissioning of Automated Guided Vehicles

Christoph Allmacher (Chemnitz University of Technology, Institute for Machine Tools and Production Processes), Manuel Dudczig (Chemnitz University of Technology, Institute for Machine Tools and Production Processes), Sebastian Knopp (Chemnitz University of Technology, Institute for Machine Tools and Production Processes), Philipp Klimant (Chemnitz University of Technology, Institute for Machine Tools and Production Processes)

Abstract: The development process of automated guided vehicles (AGVs) can be supported by virtual commissioning, which comprises the validation of functionality using a simulation of the AGV. In the case of collaborating AGVs, testing their functionality requires the simulation to contain not only the AGV but also a human and an interaction device. Therefore, this paper presents a setup that integrates a human via motion capture and emulates a smartwatch as the interaction device. Furthermore, the simulation is visualized on a head-mounted display and provides additional information for test case assessment and analysis. Thus, it enables the virtual commissioning of collaborative AGVs.

Building AR-based Optical Experiment Applications in a VR Course

Huan Wei (Beijing Institute of Technology), Yue Liu (Beijing Institute of Technology), Yongtian Wang (Beijing Institute of Technology)

Abstract: The demand for VR courses in universities is growing, since VR technology is widely used in industry, entertainment, and education today. Traditional lectures and exercises have problems motivating and engaging students, especially non-CS majors. We designed a VR course with a teamwork project assignment for Opto-Electronic Engineering undergraduates, providing them with a project-based learning (PBL) experience. The task requires three students to form a team and build an AR-based optical experiment application over the course, aiming to develop the students' practical engineering ability. The project consists of three main stages: preparation, design, and implementation. We also evaluate the students' work from different aspects and conduct a survey to analyze their attitudes toward the project.

Contextual Bandit Learning-Based Viewport Prediction for 360 Video

Joris Heyse (Ghent University - imec), Maria Torres Vega (Ghent University), Femke De Backere (Ghent University - imec), Filip De Turck (Ghent University - imec)

Abstract: Accurately predicting where the user of a Virtual Reality (VR) application will be looking in the near future improves the perceived quality of services such as adaptive tile-based streaming or personalised online training. However, because of the unpredictability and dissimilarity of user behavior, this remains a major challenge. In this work, we propose to use reinforcement learning, in particular contextual bandits, to solve this problem. The proposed solution tackles the prediction in two stages: (1) detection of movement and (2) prediction of direction. To prove its potential for VR services, the method was deployed on an adaptive tile-based VR streaming testbed and benchmarked against a 3D trajectory extrapolation approach. Our results showed a significant improvement in prediction error compared to the benchmark. This reduced prediction error also resulted in an enhancement of the perceived video quality.
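
As a sketch of the second stage under simple assumptions (a linear value model per arm and epsilon-greedy exploration; the paper's exact bandit formulation is not given in the abstract), direction prediction could look like:

```python
# Epsilon-greedy contextual bandit over discrete viewport-movement directions.
# Context = recent head-motion features; reward = 1 if the prediction matched
# the user's actual movement. Class and parameter names are illustrative.
import numpy as np

class DirectionBandit:
    def __init__(self, n_arms=8, ctx_dim=6, lr=0.1, epsilon=0.1):
        self.weights = np.zeros((n_arms, ctx_dim))  # one linear model per direction
        self.lr, self.epsilon = lr, epsilon

    def predict(self, ctx: np.ndarray) -> int:
        if np.random.rand() < self.epsilon:          # explore
            return np.random.randint(len(self.weights))
        return int(np.argmax(self.weights @ ctx))    # exploit

    def update(self, ctx: np.ndarray, arm: int, reward: float) -> None:
        # Move the chosen arm's value estimate toward the observed reward.
        error = reward - self.weights[arm] @ ctx
        self.weights[arm] += self.lr * error * ctx
```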

Integrating Tactile Feedback in an Acetabular Reamer for Surgical VR-Training

David Weik (Fraunhofer Institute for Machine Tools and Forming Technology IWU), Mario Lorenz (Institute for Machine Tools and Production Processes), Sebastian Knopp (TU Chemnitz), Luigi Pelliccia (Chemnitz University of Technology), Stefanie Feierabend (Fraunhofer Institute for Machine Tools and Forming Technology IWU), Christian Rotsch (Fraunhofer Institute for Machine Tools and Forming Technology IWU), Philipp Klimant (Institute for Machine Tools and Production Processes)

Abstract: Surgical VR simulators including haptic and tactile feedback can greatly improve the training of surgeons. However, so far only minimally invasive surgery has benefited from the capabilities of VR. For non-minimally invasive surgery, only one prototype exists to simulate the reaming of the acetabulum during hip joint replacement surgery, and it is unable to simulate the vibrations of the surgical reamer. By integrating electronic components, we developed a tactile reamer able to simulate these vibrations. A qualitative assessment with orthopedic surgeons revealed that the simulated sound of the vibrations is realistic, but the intensity of the vibrations needs to be improved.

Early Virtual Reality User Experience and Usability Assessment of a Surgical Shape Memory Alloy Aspiration/Irrigation Instrument

Mario Lorenz (Institute for Machine Tools and Production Processes), Constanze Neupetsch (Chemnitz University of Technology), Christian Rotsch (Fraunhofer Institute for Machine Tools and Forming Technology IWU), Philipp Klimant (Institute for Machine Tools and Production Processes), Niels Hammer (Department of Anatomy, Clinical Anatomy Research Group)

Abstract: Performing user experience (UX) and usability studies in VR to assess early product prototypes is common in the automotive and other sectors, but this approach has not yet been applied to the development of surgical instruments. For the development of an aspiration/irrigation instrument using a shape memory alloy (SMA) tube, we wanted to obtain early user feedback on our prototype from medical staff. We conducted a study with 22 participants with a medical background and considered a potential presence bias in the UX and usability ratings. The results confirm the novelty and usefulness of an SMA tube and gave us valuable feedback for optimizing the shapeshifting mechanism.

Real Time 3D Magnetic Field Visualization Based on Augmented Reality

Xiaoxu Liu (Beijing Institute of Technology), Yue Liu (Beijing Institute of Technology), Yongtian Wang (Beijing Institute of Technology)

Abstract: In physics teaching, electromagnetism is one of the most difficult concepts for students to understand. This paper proposes a real-time visualization method for 3-D magnetic fields based on augmented reality technology, which can not only visualize magnetic flux lines in real time but also simulate their approximately sparse distribution in space. An application utilizing this method is also presented. It permits learners to freely and interactively move magnets in 3-D space and to observe the magnetic flux lines in real time. As a result, the proposed method visualizes the invisible aspects of a 3-D magnetic field, giving students a real-life reference when studying electromagnetism.

360-degree photo-realistic VR conferencing

Simon Gunkel (TNO), Marleen D.W. Dohmen (TNO), Hans Stokking (TNO), Omar Niamut (TNO)

Abstract: VR experiences are becoming more social, but many social VR systems represent users as artificial avatars. For use cases such as VR conferencing, photo-realistic representations may be preferred. In this paper, we present ongoing research into social VR experiences with photo-realistic representations of participants, and we present a web-based social VR framework that extends current video conferencing capabilities with new VR functionalities. We explain the underlying design concepts of our framework and discuss user studies evaluating the framework in three different scenarios. We show that people are able to use VR communication in real meeting situations, and we outline future research to better understand the actual benefits and limitations of our approach, the technological gaps that need to be bridged, and the user experience.

Echolocation: Seeing the Virtual World through Lighting Echoes

Mikkel Brogaard Vittrup, Jeppe Køhlert, David Sebastian Eriksen, Amalie Rosenkvist, Miicha Valimaa, Anastasia Andreasen (Aalborg University), George Palamas (Aalborg University Copenhagen)

Abstract: This poster describes a within-subject study of navigation with a cognitive mental map based on the visualization of sonic echoes in a dark virtual environment (VE). Participants experienced navigation in two conditions: Light (directional lights turned on) and Dark (directional lights switched off). In both conditions participants could emit a visualized audio signal; in the Dark condition this was the only visual navigation aid. The results suggest that participants were able to create a cognitive mental map in both conditions and were also able to perform spatial localization and navigation.

A Hybrid RTK GNSS and SLAM Outdoor Augmented Reality System

Frank Ling (Columbia University), Carmine Elvezio (Columbia University), Jacob Bullock (Columbia University), Steven Henderson (Enlighten IT Consulting), Steven Feiner (Columbia University)

Abstract: In the real world, we are surrounded by potentially important data. For example, military personnel and first responders may need to understand the layout of an environment, including the locations of designated assets specified in latitude and longitude. However, many augmented reality (AR) systems cannot associate absolute geographic coordinates with the coordinate system in which they track. We describe a simple approach for developing a wide-area outdoor wearable AR system that uses RTK GNSS position tracking to align and georegister multiple smaller maps from an existing SLAM tracking system.
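
A plausible core of such georegistration, shown here as a standard Umeyama-style similarity alignment between corresponding SLAM and GNSS positions (converted to a local metric frame such as ENU); this is a generic sketch, not necessarily the authors' estimator:

```python
# Least-squares similarity transform mapping SLAM coordinates to geographic
# coordinates, given N corresponding 3D positions from each source.
import numpy as np

def align_slam_to_gnss(slam_pts: np.ndarray, gnss_pts: np.ndarray):
    """slam_pts, gnss_pts: Nx3 corresponding points. Returns (scale, R, t)."""
    mu_s, mu_g = slam_pts.mean(axis=0), gnss_pts.mean(axis=0)
    S, G = slam_pts - mu_s, gnss_pts - mu_g
    U, D, Vt = np.linalg.svd(G.T @ S / len(S))   # cross-covariance SVD
    sign = np.sign(np.linalg.det(U @ Vt))
    E = np.diag([1.0, 1.0, sign])                # guard against reflections
    R = U @ E @ Vt
    scale = np.trace(np.diag(D) @ E) / S.var(axis=0).sum()
    t = mu_g - scale * R @ mu_s
    return scale, R, t                           # geo = scale * R @ slam + t
```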

MOSIS: Immersive Virtual Field Environments for Earth Sciences

Pedro Rossa (Vale do Rio do Sinos University), Rafael Kenji Horota (Vale do Rio do Sinos University), Ademir Marques Jr (Vale do Rio do Sinos University), Alysson Soares Aires (Vale do Rio do Sinos University), Eniuce Menezes de Souza (Vale do Rio do Sinos University), Gabriel Lanzer Kannenberg (Vale do Rio do Sinos University), Jean Luca de Fraga (Vale do Rio dos Sinos University), Leonardo G. Santana (Vale do Rio dos Sinos University), Demetrius Nunes Alves (Vale do Rio dos Sinos University), Julia Boesing (Vale do Rio do Sinos University), Luiz Gonzaga Jr. (Vale do Rio dos Sinos University), MAURICIO ROBERTO VERONEZ (Vale do Rio dos Sinos University), Caroline Lessio Cazarin (Petrobras)

Abstract: For the past decades, environmental studies have been mostly a field activity, especially in the geosciences, where rock exposures cannot be represented in or taken into laboratories. Meanwhile, VR (Virtual Reality) is growing in many academic areas as an important technology for representing 3D objects, bringing immersion to the simplest tasks. Following that trend, MOSIS (Multi Outcrop Sharing and Interpretation System) was created to help earth scientists and other users visualize and study VFEs (Virtual Field Environments) from all over the world in immersive virtual reality.

A Continuous Material Cutting Model with Haptic Feedback for Medical Simulations

Maximilian Kaluschke (University of Bremen), Rene Weller (University of Bremen), Gabriel Zachmann (University of Bremen), Mario Lorenz (Institute for Machine Tools and Production Processes)

Abstract: We present a novel haptic rendering approach to simulate material removal in medical simulations at haptic rates. The core of our method is a new massively-parallel continuous collision detection algorithm in combination with a stable and flexible 6-DOF collision response scheme that combines penalty-based and constraint-based force computation.

Towards EEG-Based Haptic Interaction within Virtual Environments

Stanley Tarng (University of Calgary), Deng Wang (University of Calgary), Yaoping Hu (University of Calgary), Frederic Merienne (Arts et Metiers)

Abstract: Current virtual environments (VEs) enable perceiving haptic stimuli to facilitate 3D user interaction, but lack brain-interfacial contents. Using electroencephalography (EEG), we undertook a feasibility study exploring event-related potential (ERP) patterns of the user's brain responses during haptic interaction within a VE. The interaction was flying a virtual drone along a curved transmission line to detect defects under various stimuli (e.g., force increase and/or vibrotactile cues). We found variations in the peak amplitudes and latencies (as ERP patterns) of the responses about 200 ms after stimulus onset. The largest negative peak occurred 200~400 ms after onset in all vibration-related blocks. Moreover, the amplitudes and latencies of the peak were differentiable among the vibration-related blocks. These findings imply that decoding brain responses during haptic interaction within VEs is feasible.

An Approach to Designing Next Generation User Interfaces for Public-Safety Organizations

Jeronimo Grandi (Duke University), Mark Ogren (Duke University), Regis Kopper (Duke University)

Abstract: High-speed broadband networks will enable public-safety organizations to handle critical situations that go beyond the common voice communication channel. In this paper, we present a user-centered approach that makes the deployment and adoption of next-generation user interfaces reflect the first responders’ needs, requirements, and contexts of use. It is composed of four phases where we elicit requirements, iteratively prototype user interfaces, and evaluate our designs. In this process, public-safety organizations are always engaged, contributing through their feedback and evaluation. We will use immersive Virtual Reality to simulate the user interface designs. Within the virtual environment, it is possible to prototype several concepts before committing to a definitive interface. The solutions proposed will be instrumental for the adoption of next-generation user interfaces by the public safety community.

Play it by Ear: An Immersive Ear Anatomy Tutorial

Haley Alexander Adams (Vanderbilt University), William G Morrel (Vanderbilt University Medical Center), Justin R. R Shinn (Vanderbilt University Medical Center), Jack Noble (Vanderbilt University), Alejandro Rivas (Vanderbilt University Medical Center), Robert Labadie (Vanderbilt University Medical Center), Bobby Bodenheimer (Vanderbilt University)

Abstract: The anatomy of the ear and the bones surrounding it are intricate yet critical for medical professionals to know. Current best practices teach ear anatomy through two-dimensional representations, which poorly characterize the three-dimensional (3D), spatial nature of the anatomy and make it difficult to learn and visualize. In this work, we describe an immersive, stereoscopic visualization tool for the anatomy of the ear based on real patient data. We describe the interface and its construction, and we compare how well medical students learned ear anatomy in the simulation compared with more traditional learning methods. Our preliminary results suggest that virtual reality may be an effective tool for anatomy education in this context.

Pedagogical Agent Responsive to Eye Tracking in Educational VR

Adil Khokhar (University of Louisiana at Lafayette), Andrew Yoshimura (University of Louisiana at Lafayette), Christoph W Borst (University of Louisiana at Lafayette)

Abstract: We present an architecture for making a VR pedagogical agent responsive to shifts in user attention monitored by eye tracking. The behavior-based AI includes low-level sensor elements, sensor combiners that compute attention metrics for higher-level sensors called generalized hotspots, an annotation system for arranging scene elements and responses, and a response selection system. We show that these techniques can control the playback of teacher avatar clips that point out and explain objects in a VR oil rig for training.

Advancing Ethical Decision Making in Virtual Reality

Sin-Hwa Kang (University of Southern California), Jake Chanenson (Swarthmore College), Peter Cowal (Pomona College), Madeleine Weaver (Massachusetts Bay Community College), Pranav Ghate (University of Southern California), David M. Krum (University of Southern California)

Abstract: Virtual reality (VR) has been widely utilized for training and education purposes because of pedagogical, safety, and economic benefits. The investigation of moral judgment is a particularly interesting VR application, related to training. For this study, we designed a within-subject experiment manipulating the role of study participants in a Trolley Dilemma scenario: either victim or driver. We conducted a pilot study with four participants and describe preliminary results and implications in this poster.

Exploring Scalable WorkSpace Based on Virtual and Physical Movable Wall

Sabah Boustila (University of Toronto), Hao Zheng (Tohoku University), Shuxia Bai (Tohoku University), Kazuki Takashima (Tohoku University), Yoshifumi Kitamura (Tohoku University)

Abstract: We propose an approach to a flexibly scalable workspace that changes the viewer's perceived size of the workspace by simulating a virtually movable wall on the real wall. We conducted an experiment comparing viewers' depth estimates between virtually and physically extended workspaces. We also investigated the effect of the wall's motion: discrete versus continuous. Distances were estimated verbally and with a perceptual matching method. Verbal depth estimates were equivalent in the virtually and physically extended workspaces, with high accuracy (estimation errors were less than 5%). However, perceptual matching estimates differed significantly between virtual and physical scaling; overall, they were slightly overestimated. No effect of the motion type was found. Our results clearly suggest the potential of scaling a workspace by merely giving the visual impression of an extended workspace using a perspective projection technique.

Color Moiré Reduction Method for Thin Integral 3D Displays

Hisayuki Sasaki (NHK (Japan Broadcasting Corporation)), Hayato Watanabe (NHK (Japan Broadcasting Corporation)), Naoto Okaichi (NHK (Japan Broadcasting Corporation)), Kensuke Hisatomi (NHK (Japan Broadcasting Corporation)), Masahiro Kawakita (NHK (Japan Broadcasting Corporation))

Abstract: The integral three-dimensional (3D) display is an ideal visual 3D user interface, as it is a display method that fulfills many of the physiological factors of human vision. However, in integral 3D displays for mobile applications that use a direct-view flat panel to display elemental images, color moiré arising from the sampling of subpixels by the elemental lenses is a problem, as are the insufficient resolution and depth reproduction performance of the reconstructed 3D image. We propose a method to solve these problems that utilizes optical wobbling for spatiotemporal multiplexing, using a birefringent element and a polarization controller. With the conventional moiré reduction method, the elemental lenses must be strongly defocused, which also reduces depth reproduction performance. We show that the proposed optical wobbling method achieves effective color moiré reduction with only a slight defocus of the elemental lenses and without deteriorating depth reproducibility.

An Initial Investigation into Stereotypical Influences on Implicit Racial Bias and Embodied Avatars

Divine Maloney (Clemson University), Andrew Robb (Clemson University)

Abstract: In this paper, we present an initial study investigating the effects of stereotypical settings and avatar appearance on a user's implicit racial bias when embodying avatars. The literature demonstrates the effects embodied avatars can have on a user's biases, both implicit and explicit. These shifts in bias and behavior could be caused by the avatar's appearance or by the stereotypical environment. Few studies have investigated the presence of stereotypical triggers and avatar representation in a learning, game-like environment. With virtual reality entertainment and training simulations becoming popular, it is necessary to better understand the effects avatars can have on our behavior, perception, and biases. This study will investigate the potential effects of embodied avatars reinforcing a user's implicit racial biases.

Speech-Driven Facial Animation by LSTM-RNN for Communication Use

Ryosuke Nishimura (Osaka University), Nobuchika Sakata (NAIST), Tomu Tominaga (Osaka University), Yoshinori Hijikata (Kwansei Gakuin University), Kensuke Harada (Osaka University), Kiyoshi Kiyokawa (Nara Institute of Science and Technology)

Abstract: The goal of this research is to develop a system that generates rich facial animation, usable for communication, from speech alone. Generally, the source for generating facial animation is a camera, but cameras as an input source bring limitations such as a restricted angle of view, and they can fail to capture the face depending on its orientation. It is therefore reasonable to develop a system that generates facial animation using only voice. In this study, we generate facial expressions from speech alone using an LSTM-RNN. Comparing three patterns of speech analysis data, we showed that the proposed method using A-weighting is effective for facial expression estimation.
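
A minimal sketch of this model family, with assumed feature and blendshape dimensions rather than the authors' exact architecture:

```python
# LSTM mapping per-frame speech features (e.g., A-weighted spectral frames)
# to facial blendshape weights. All sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class SpeechToFace(nn.Module):
    def __init__(self, n_audio_feats=26, n_blendshapes=51, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_audio_feats, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_blendshapes)

    def forward(self, audio_feats):  # (batch, frames, n_audio_feats)
        out, _ = self.lstm(audio_feats)
        return self.head(out)        # (batch, frames, n_blendshapes)

model = SpeechToFace()
dummy = torch.randn(1, 100, 26)      # 100 frames of audio features
print(model(dummy).shape)            # torch.Size([1, 100, 51])
```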

ReliveInVR: Capturing and reliving virtual reality experiences together

Cheng Yao Wang (Cornell University), Mose Sakashita (Cornell University), Upol Ehsan (Cornell University), Jingjin Li (Cornell University), Andrea Stevenson Won (Cornell University)

Abstract: We present a new way of sharing VR experiences over distance that allows people to relive their recorded experiences in VR together. We describe a pilot study examining the user experience when people share their VR experiences together remotely. Finally, we discuss the implications of sharing VR experiences over time and space.

A Prototype of Virtual Drum Performance System with a Head-Mounted Display

Toshiyuki Ishiyama (Graduate School), Tetsuro Kitahara (Nihon University)

Abstract: The goal of our study is to develop a virtual remote performance system that enables musicians who are distant from each other to play ensemble performances using virtual reality (VR) technologies. In ensemble performances, musicians usually use gestures to communicate with each other. Therefore, as a first step, we have developed a drum performance system that displays a computer-generated character representing a bass player. The drummer wears a head-mounted display, which displays the computer graphics (CG) bass-player character. As the human bass player moves in front of a Kinect sensor, the CG character's movements are synchronized with those of the human player. An experiment was carried out to demonstrate the performance of the system.

VR-Replay: Capturing and Replaying Avatars in VR for Asynchronous 3D Collaborative Design

Cheng Yao Wang (Cornell University), Logan R Drumm (Cornell University), Christopher Troup (Cornell University), Natalie Ding (Cornell University), Andrea Stevenson Won (Cornell University)

Abstract: Distributed teams rely on asynchronous computer-mediated communication (CMC) tools to complete collaborative tasks due to the difficulties and costs of scheduling synchronous communication. In this paper, we present VR-Replay, a new communication tool that records and replays avatars with both nonverbal behavior and verbal communication for asynchronous collaboration in VR. We describe a study comparing VR-Replay with a desktop-based collaborative virtual environment (CVE) with audio annotation and an immersive VR CVE with audio annotation. Our results suggest that viewing the replayed avatar in VR-Replay improves teamwork, causing people to view their partners as more likable, warm, and friendly. 75% of the users chose VR-Replay as their preferred communication tool in our study.

Evaluating Teacher Avatar Appearances in Educational VR

Jason Wolfgang Woodworth (University of Louisiana at Lafayette), Nicholas Lipari (University of Louisiana at Lafayette), Christoph W Borst (University of Louisiana at Lafayette)

Abstract: We present a pilot study of four teacher avatars for an educational virtual field trip. The avatars consist of a depth-video-based mesh of a real person, a game-style human model, a robot model, and the robot with its head replaced by a video feed of the teacher's face. Multiple avatars were developed to consider alternatives to the mesh representation, which requires high-bandwidth networks and a non-immersive teacher interface. The pilot study presents a random avatar to the participant at each of four educational stations and follows up with a subjective questionnaire. Most notably, we find positive affinity for the plain robot model to be similar to that for the video mesh, which was previously shown to provide high co-presence and good results for education. The results are guiding a larger study that will measure the educational efficacy of revised avatars.

A Study on the Sense of Burden and Body Ownership on Virtual Slope

Ryo Shimamura (Waseda University), Seita Kayukawa (Waseda University), Takayuki Nakatsuka (Waseda University), Shoki Miyagawa (Waseda University), Shigeo Morishima (Waseda Research Institute for Science and Engineering)

Abstract: This paper provides insight into the sense of burden people experience when walking up and down slopes in a virtual environment (VE) while actually walking on a flat floor in the real environment (RE). In the RE, we feel a physical load while walking uphill or downhill. To reproduce such a load in the VE, we provided visual stimuli to users by changing their step length. To investigate how these stimuli affect the sense of burden and body ownership, we performed a user study in which participants walked on slopes in the VE. We found that changing the step length has a significant impact on the user's sense of burden, and that body ownership correlates only weakly with step length.

Warping Space and Time – xR Reviving Educational Tools of the 19th Century

Alexander Klippel (The Pennsylvania State University), Jan Oliver Wallgrün (The Pennsylvania State University), Arif Masrur (The Pennsylvania State University), Jiayan Zhao (The Pennsylvania State University), Peter LaFemina (The Pennsylvania State University)

Abstract: xR has the potential to warp both space and time. We demonstrate this potential with a mixed reality application for mobile devices for Penn State's Obelisk, a historic landmark on the main Penn State campus that artistically reveals the geological history of Pennsylvania. Our AR application allows users to place a model of the Obelisk on any surface, interact with the individual stones to reveal their geological characteristics and location of excavation, and switch to an immersive VR experience of that location based on 360° imagery. Originally conceptualized as a teaching tool for the School of Mines, our xR application revives the Obelisk's long-forgotten mission and allows educators to integrate it once more into the curriculum as well as creatively expand its potential.

Short-term Path Prediction for Virtual Open Spaces

Christian Hirt (ETH Zürich), Markus Zank (Lucerne University of Applied Sciences and Arts), Andreas Kunz (ETH Zurich)

Abstract: In predictive redirected walking applications, reliable path prediction is essential for effective redirection. So far, most predictive redirected walking algorithms have imposed many restrictions on the virtual environment in order to simplify this path prediction. Path prediction is time-consuming and prone to errors due to the potentially impulsive and even irrational walking behaviour of humans in virtual environments. Therefore, many applications confine users to narrow virtual corridors or mazes in order to minimise larger deviations from intended and predictable walking patterns. In this paper, we present a novel approach to short-term path prediction that can be applied to predictive redirected walking in virtual open spaces. We introduce a drop-shaped trajectory prediction described by a Lemniscate of Bernoulli. The drop's contour is discretised, and we show how it is used to determine potential user trajectories in the virtual environment.
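
For reference, the Lemniscate of Bernoulli in polar form is

```latex
r^2 = a^2 \cos 2\theta , \qquad -\tfrac{\pi}{4} \le \theta \le \tfrac{\pi}{4}
```

where restricting θ to a single lobe yields the drop-shaped region ahead of the user, with the parameter a controlling the drop's length.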

Remapping a Third Arm in Virtual Reality

Adam Mark Drogemuller (University of South Australia), Adrien Alexandre Verhulst (The University of Tokyo), Benjamin Volmer (University of South Australia), Bruce H Thomas (University of South Australia), Masahiko Inami (The University of Tokyo), Maki Sugimoto (Keio University)

Abstract: This paper presents the development of a conceptual method for remapping supernumerary limbs, using Virtual Reality (VR) as a platform for experimentation. Our VR system allows users to control a third arm through their own limbs, such as their head, arms, and feet, with the ability to switch between them. To realize and experiment with our remapping method, we used the Oculus Rift in conjunction with OptiTrack to track users in a room-scale virtual environment. We present some initial findings from a small pilot study and conclude with suggestions for future work.

Training Transfer of Bimanual Assembly Tasks in Cost-Differentiated Virtual Reality Systems

Songjia Shen (University of Technology Sydney), Hsiang-Ting Chen (University of Technology Sydney), Tuck Wah Leong (UTS)

Abstract: Recent advances in affordable virtual reality headsets make virtual reality training an economical choice compared to traditional training. However, these devices offer a range of different levels of fidelity and interaction, and few works have evaluated their validity against traditional training formats. This paper presents a study that compares the learning efficiency of a bimanual gearbox assembly task among traditional training, virtual reality training with direct 3D inputs (HTC VIVE), and virtual reality training without 3D inputs (Google Cardboard). A pilot study was conducted, and the results show that the HTC VIVE yields the best learning outcomes.

Occurrence of Pseudo-Haptics by Swimming in a Virtual Reality Environment

Hirooki Aoki (Chitose Institute of Science and Technology)

Abstract: This study investigates pseudo-haptics during swimming motions in a highly immersive virtual reality environment created using a head-mounted display, to identify the conditions under which pseudo-haptics occur during considerable physical exercise. When users perform the outward sweep motion of the breaststroke, spheres floating in the virtual space move toward them. The experimental results confirm that pseudo-haptics can be controlled in this setup by adjusting the ratio between the movement of the users' hands and the movement of the virtual spheres.
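
The manipulated variable reduces to a control-display gain; a minimal sketch of that relationship (an assumed implementation detail, not the authors' code):

```python
def sphere_displacement(hand_delta, gain):
    """Scale the virtual spheres' motion relative to the tracked hand motion.

    hand_delta: per-frame hand movement vector; gain is the manipulated
    hand-to-sphere movement ratio (name chosen here for illustration).
    A lower gain makes the virtual water feel "heavier", a higher one "lighter".
    """
    return [gain * d for d in hand_delta]
```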

Towards Robot Arm Training in Virtual Reality using Partial Least Squares Regression

Benjamin Volmer (University of South Australia), Adrien Alexandre Verhulst (The University of Tokyo), Adam Mark Drogemuller (University of South Australia), Bruce H Thomas (University of South Australia), Masahiko Inami (The University of Tokyo), Maki Sugimoto (Keio University)

Abstract: Robot assistance can reduce a user's workload in a task. However, the robot needs to be programmed or trained in how to assist the user. Virtual Reality (VR) can be used to train and validate the actions of the robot in a safer and cheaper environment. In this paper, we examine how a robotic arm can be trained using Coloured Petri Nets (CPN) and Partial Least Squares Regression (PLSR). Based upon these algorithms, we discuss the concept of using the user's acceleration and rotation as a sufficient means to train a robotic arm for a procedural task in VR. We present a work-in-progress system for training robotic limbs using VR as a cost-effective and safe medium for experimentation. Additionally, we propose PLSR data that could be considered for training data analysis.

Parasitic Body: Exploring Perspective Dependency in a Shared Body with a Third Arm

Ryo Takizawa (Keio University), Adrien Alexandre Verhulst (The University of Tokyo), Katie Seaborn (RIKEN Center for Advanced Intelligence Project), Masaaki Fukuoka (Keio University), Atsushi Hiyama (The University of Tokyo), Michiteru Kitazaki (Toyohashi University of Technology), Masahiko Inami (The University of Tokyo), Maki Sugimoto (Keio University)

Abstract: With advancements in robotics, systems featuring wearable robotic arms teleoperated by a third party are appearing. An important aspect of these systems is the visual feedback provided to the third-party operator. This can be achieved by placing a wearable camera on the robotic arm's "host," or Main Body Operator (MBO), but such a setup makes the visual feedback dependent on the movements of the main body. Here we introduce a VR system called Parasitic Body to explore a shared-body concept in VR representative of the wearable robotic arm's "host" (the MBO) and of the teleoperator (here called the Parasite Body Operator, PBO). Two users jointly operate a shared virtual body with a third arm: the MBO controls the main body, and the PBO controls a third arm sticking out from the left shoulder of the main body. We focus here on the perspective dependency of the PBO (indeed, the PBO's view is dependent on the movements of the MBO) in a "finding and reaching" task.

Collaborative Problem Solving in Local and Remote VR situations

Adamantini Hatzipanayioti (Max Planck Institute for Biological Cybernetics), Anastasia Pavlidou (Max Planck Institute for Biological Cybernetics), Manuel Dixken (Fraunhofer Institute for Industrial Engineering IAO), Heinrich H. Bülthoff (Max Planck Institute for Biological Cybernetics), Tobias Meilinger (Max Planck Institute for Biological Cybernetics), Matthias Bues (Fraunhofer IAO), Betty Mohler (Max Planck Institute for Intelligent Systems)

Abstract: In one experiment, we examined whether interaction among people who collaborate locally affects performance compared to situations where people meet within the virtual space. Participants solved a Rubik's-cube-like puzzle by arranging cubes that varied in color within a solution space, so that each side of the solution space showed a single color. Participants in the local condition were located in the same room, while in the remote condition participants were placed in different rooms. Results showed that collaborators in both conditions successfully completed the task, but performance was better in the local condition than in the remote condition.

DepthText: Leveraging Head Movements towards the Depth Dimension for Hands-free Text Entry in Mobile Virtual Reality Systems

Xueshi Lu (Xi’an Jiaotong-Liverpool University), Difeng Yu (Xi’an Jiaotong-Liverpool University), Hai-Ning Liang (Xi’an Jiaotong-Liverpool University), Xiyu Feng (Xi’an Jiaotong-Liverpool University), Wenge Xu (Xi’an Jiaotong-Liverpool University)

Abstract: We present DepthText, a hands-free text entry technique for mobile virtual reality (VR) systems. DepthText leverages the built-in IMU sensors of current mobile VR devices and as such incurs no costs for external trackers or input devices. Users enter characters through short bursts of forward head movement. According to our 5-day study, the technique supports users in achieving an average of 10.59 words per minute (WPM). Responses to questionnaires show that the technique is acceptable to users and that they are willing to use it when alone or in front of people familiar to them.
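
The abstract does not specify how the forward head bursts are detected; as a rough illustrative sketch (the threshold, sampling rate, and refractory period are assumptions, not DepthText's actual parameters), a simple burst detector over the IMU's forward-axis acceleration might look like:

```python
import numpy as np

def detect_forward_bursts(accel_fwd, fs=60.0, threshold=1.5, refractory=0.4):
    """Return timestamps (s) of short forward head-movement bursts.

    accel_fwd: forward-axis acceleration samples in m/s^2.
    threshold / refractory are illustrative tuning values only.
    """
    events, last = [], -np.inf
    for i, a in enumerate(np.asarray(accel_fwd, dtype=float)):
        t = i / fs
        if a > threshold and (t - last) >= refractory:
            events.append(t)   # one selection event per burst
            last = t
    return events
```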

A Mobile Augmented Reality Approach for Creating Dynamic Effects with Controlled Vector Fields

Lifeng Zhu (Southeast University), Xijing Liu (Northwestern University)

Abstract: Dynamic effects are commonly added offline to pre-recorded videos. In this work, we propose to synthesize dynamic effects online, with controlled vector fields, by using mobile augmented reality. By mapping typical flow-field primitives to oriented markers, we use image detection and tracking techniques to build and control a virtual vector field in the real world. Virtual objects are then added and animated in real time. We prototyped our system, and the results show that dynamic effects can be created in mobile augmented reality to enhance visual communication with the real world.
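
The abstract maps flow-field primitives to oriented markers but gives no formulas; the sketch below shows one plausible way to superpose such primitives (a vortex plus a uniform flow, with parameters that would come from detected marker poses) and advect a virtual object through the resulting field. All names and parameters are illustrative.

```python
import numpy as np

def vortex(center, strength):
    """2D vortex primitive; center/strength would come from a marker pose."""
    center = np.asarray(center, dtype=float)
    def field(p):
        d = p - center
        r2 = d @ d + 1e-6                  # soften the singularity at the core
        return strength * np.array([-d[1], d[0]]) / r2
    return field

def uniform(direction):
    """Uniform-flow primitive, e.g. from an arrow-shaped marker."""
    direction = np.asarray(direction, dtype=float)
    return lambda p: direction

def sample(primitives, p):
    """Controlled field = superposition of the marker-driven primitives."""
    return sum(f(np.asarray(p, dtype=float)) for f in primitives)

# Advect a virtual particle through the field with forward Euler at 60 Hz.
prims = [vortex((0.0, 0.0), 1.0), uniform((0.2, 0.0))]
p, dt = np.array([1.0, 0.0]), 1.0 / 60.0
for _ in range(60):
    p = p + dt * sample(prims, p)
```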

CAVE-AR: A VR Authoring System to Interactively Design, Simulate, and Debug Multi-user AR Experiences

Marco Cavallo (IBM Research), Angus G. Forbes (University of California, Santa Cruz)

Abstract: We propose CAVE-AR, a novel virtual reality system for authoring, simulating, and debugging custom augmented reality experiences. CAVE-AR is based on the concept of representing both AR content and tracking information in the same global reference system, mixing geographical information, architectural features, and sensor data to simulate the context of an AR experience. Our VR application provides designers with ways to create and modify an AR application, even while others are in the midst of using it. CAVE-AR further allows the designer to track how users are behaving, preview what they are currently seeing, and interact with them.

Food Appearance Optimizer: Automatic projection mapping system for enhancing perceived deliciousness based on appearance

Yuichiro Fujimoto (Tokyo University of Agriculture and Technology)

Abstract: This paper proposes a system to enhance the degree of subjective deliciousness of food visually perceived by a person by automatically changing its appearance in a real environment. The system is called the Food Appearance Optimizer. The proposed system analyzes the appearance of food in a camera image and projects an appropriate image onto the food using a projector. The relationship between the degree of subjective deliciousness and four appearance features for each food category is modeled using data gathered via a crowdsourcing-based questionnaire. Using this model, the system generates the appropriate projection image to increase the deliciousness of the food.

Early Stage Digital-Physical Twinning to Engage Citizens with City Planning and Design

Lee Kent (Mechanical Engineering), Chris Snider (University of Bristol), Ben Hicks (University of Bristol)

Abstract: The pairing of physical objects with a digital counterpart, referred to as a digital-physical twin, provides the capability to leverage the affordances of each. This can be applied to city-scale visualisation during concept generation to engage citizens with city planning. A Virtual Reality based platform is proposed and tested with members of the public, and its suitability as an engagement tool is discussed. The platform, City Blocks, brings later-design-stage visualisation and analysis of ideas into concept development, with citizens able to reason about and refine designs via an abstracted physical twin.

Investigating Visualization Techniques for Observing a Group of Virtual Reality Users using Augmented Reality

Santawat Thanyadit (Hong Kong University of Science and Technology), Parinya Punpongsanon (Osaka University), Ting-Chuen Pong (Hong Kong University of Science and Technology)

Abstract: As an emerging technology, virtual reality (VR) has been used increasingly as a learning tool to explore 'outside the classroom' experiences inside the classroom. While VR provides an immersive experience to students, it is difficult for the instructor to monitor the students' activities in VR, which hinders interaction between the instructor and students. To address this challenge, we investigated a technique that allows the instructor to observe VR users at scale using Augmented Reality (AR). AR techniques are used to visualize the gazes of the VR users in the virtual environment, improving the instructor's awareness.

Pupil Center Detection based on the UNet for the User Interaction in VR and AR Environments

Sang Yoon Han (Inst. of Newmedia and Comm.), YoonSik Kim (Inst. of Newmedia and Comm., Dept. of Electrical and Computer Eng.), Sang Hwa Lee (Seoul National University), Nam Ik Cho (Seoul National University)

Abstract: Finding the location of the pupil center is important for human-computer interaction, especially for user interfaces in AR/VR devices. In this paper, we propose an indirect use of a convolutional neural network (CNN) for the task: the network first segments the pupil region, and the center of mass of that region is then computed. For this, we created a dataset by labeling the pupil area on 111,581 images from 29 IR video sequences. We also labeled the pupil region of widely used datasets to test and validate our method on a variety of inputs. Experiments show that the proposed method provides better accuracy than conventional methods, showing robustness to noise.
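
The two-stage idea (segment, then take the center of mass) is simple to state in code; the sketch below assumes the CNN outputs an (H, W) map of per-pixel pupil probabilities, and illustrates only the post-processing step, not the authors' network.

```python
import numpy as np

def pupil_center(prob_map, thresh=0.5):
    """Center of mass of a thresholded pupil-segmentation map.

    prob_map: (H, W) array of per-pixel pupil probabilities.
    Returns (row, col) of the estimated pupil center, or None if the
    network segmented nothing above the threshold.
    """
    ys, xs = np.nonzero(prob_map > thresh)
    if len(ys) == 0:
        return None
    return float(ys.mean()), float(xs.mean())
```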

Can We Create Better Haptic Illusions by Reducing Body Information?

Yutaro Hirao (Waseda), Takashi Kawai (Waseda University)

Abstract: In this paper, we propose a method for alleviating individual differences in pseudo-haptic experience, and the unnaturalness that comes with pseudo-haptics, in VR. The method uses a non-isomorphic manipulation of a virtual body, in which actual and virtual body movements are decoupled. To evaluate this approach, isomorphic and non-isomorphic manipulations were compared in a task where a pseudo-haptic experience was presented in VR. The results suggest a tendency for individual differences in pseudo-haptic experience and unnaturalness to be smaller with the non-isomorphic manipulation, even with intensive pseudo-haptic expressions.

A Motion Deblurring Approach to Restore Rich Textures for Visual SLAM

Guojing Jin (Beijing Institute of Technology), Jing Chen (Beijing Institute of Technology), Jingyao Wang (Beijing Institute of Technology), Yongtian Wang (Beijing Institute of Technology)

Abstract: In this paper, we present a sequential video deblurring method based on a spatio-temporal recurrent network for visual SLAM. The method can be applied to any SLAM system to ensure continuous localization even with blurred images. The quality of the deblurring method is evaluated on real-world problems, feature point extraction and SLAM, which proves that the method can significantly improve tracking accuracy, especially in severe cases involving strong camera shake or fast motion.

Augmenting Virtual Reality with Near Real World Objects

Michael Rauter (FH Wiener Neustadt), Christoph Abseher (FH Wiener Neustadt), Markus Safar (FH Wiener Neustadt)

Abstract: Pure virtual reality headsets lack support for user awareness of real-world objects. We show how to augment a virtual environment with real-world objects by incorporating color and stereo information from the front-mounted stereo camera system of the HTC Vive Pro. With an adjustable mix of virtual and real elements, these objects can be embedded in the virtual world with correct occlusion. Possible use cases are training with appliances that requires the user to see both the appliance and their hands, collaborating in virtual reality with real users instead of avatars, and static and moving obstacle detection.

Peripersonal Visual-Haptic Size Estimation in Virtual Reality

Nikolaos Katzakis (Universität Hamburg), Lihan Chen (Peking University), Fariba Mostajeran (University of Hamburg), Frank Steinicke (Universität Hamburg)

Abstract: We report an experiment in which participants compared the size of a visual sphere to that of a haptic sphere in a virtual environment. The sizes from the two modalities were either congruent or conflicting (with different disparities). Importantly, three standard references (small, medium, and large) for haptic sizes were used with the method of constant stimuli. Results show a dominant functional priority of visual size perception. The results are discussed in the framework of adaptation-level theory for the haptic size reference, and provide important implications for the design of 3D visuo-haptic human-computer interaction.

Acceptance and User Experience of Driving with a See-Through Cockpit in a Narrow-Space Overtaking Scenario

Patrick Lindemann (Technical University of Munich), Dominik Eisl (Technical University of Munich), Gerhard Rigoll (Technical University of Munich)

Abstract: In this work, we examine the implications of driving with transparent cockpits (TCs) in a narrow-space overtaking scenario. We utilize a virtual environment to simulate two possible manifestations: a user-controlled head-mounted system and a static projection-based system. We conducted a user study with an overtaking task and present results for acceptance and a comparison of both systems regarding user experience. Participants preferred the static TC and evaluated it as the solution with higher pragmatic quality and attractiveness. The TC generally scored highly in hedonic quality and was rated positively regarding perceived safety and ease of use.

Repurposing Labeled Photographs for Facial Tracking with Alternative Camera Intrinsics

Caio Jose Dos Santos Brito (Disney Research), Kenny Mitchell (Disney Research)

Abstract: Acquiring manually labeled training data for a specific application is expensive, and such data are usually not a good fit for novel cameras. To overcome this, we present a repurposing approach that relies on spherical image warping to retarget an existing dataset of landmark-labeled casual photographs of people's faces, with arbitrary poses and from regular camera lenses, to target cameras with significantly different intrinsics, such as those often attached to head-mounted displays (HMDs). Our method can predict mouth and eye landmarks of the HMD wearer, which are used as input to animate avatars in real time without a user-specific or application-specific dataset.
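
The paper's exact camera models are not given in the abstract; as an illustrative sketch of warping through the viewing sphere, the code below builds a cv2.remap lookup table from a pinhole source image to an equidistant-fisheye target (r = f * theta), a common model for HMD-mounted lenses. The function name and the choice of fisheye model are assumptions.

```python
import numpy as np
import cv2

def pinhole_to_fisheye_map(w_t, h_t, f_t, K_src):
    """Lookup tables warping a pinhole source image to an equidistant-fisheye
    target camera (r = f * theta). K_src is the 3x3 source intrinsics."""
    cx_t, cy_t = w_t / 2.0, h_t / 2.0
    u, v = np.meshgrid(np.arange(w_t), np.arange(h_t))
    dx, dy = u - cx_t, v - cy_t
    theta = np.hypot(dx, dy) / f_t          # equidistant model: theta = r / f
    phi = np.arctan2(dy, dx)
    # Unit ray on the viewing sphere for every target pixel.
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    valid = z > 1e-3                         # keep rays in front of the source
    fx, fy, cx, cy = K_src[0, 0], K_src[1, 1], K_src[0, 2], K_src[1, 2]
    zs = np.maximum(z, 1e-3)
    map_x = np.where(valid, fx * x / zs + cx, -1.0)
    map_y = np.where(valid, fy * y / zs + cy, -1.0)
    return map_x.astype(np.float32), map_y.astype(np.float32)

# Usage: map_x, map_y = pinhole_to_fisheye_map(800, 800, 300.0, K_src)
#        warped = cv2.remap(src_img, map_x, map_y, cv2.INTER_LINEAR)
```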

The Augmented Reality Floods and Smoke Smartphone App Disaster Scope Utilizing Real-Time Occlusion

Tomoki Itamiya (Aichi University of Technology), Hideaki Tohara (Aichi University of Technology), Yohei Nasuda (Aichi University of Technology)

Abstract: We developed the augmented reality smartphone application Disaster Scope, which enables immersive experiences intended to improve people's awareness of disasters in ordinary times. The application can superimpose simulated disaster situations, such as CG floods, debris, and fire smoke, onto the actual scenery. By using a smartphone equipped with a 3D depth sensor, it is possible to sense the height from the ground and recognize surrounding objects. Real-time occlusion processing is enabled using only a smartphone. As a result, it has become possible to understand more realistically the dangers of floods and fire smoke.

Simulated Reference Frame Effects on Steering, Jumping and Sliding

Jean-Luc Lugrin (Department of Computer Science, HCI Group), Maximilian Landeck (University of Wuerzburg), Marc Erich Latoschik (Department of Computer Science, HCI Group)

Abstract: In this paper, we investigated the impact of an egocentric simulated frame of reference, the so-called simulated CAVE, on three types of travel technique: Steering, Jumping, and Sliding. Contrary to suggestions from previous work, no significant differences were found regarding spatial awareness among the techniques, with or without the simulated CAVE. Our first results also showed a negative effect of the simulated CAVE on participants' motion sickness for every technique, while confirming that Jumping elicits less motion sickness with or without it.

The Effects of Tactile Gestalt on Generating Velvet Hand Illusion

Hiraku Komura (Nagoya University), Masahiro Ohka (Nagoya University)

Abstract: Smoothness is one of the important factors in controlling texture sensation in tactile VR. To develop a tactile display that provides texture sensations, we focus on the Velvet Hand Illusion (VHI), one of the tactile illusion phenomena. To elucidate the mechanism of VHI, we investigated the relationship between tactile Gestalt and VHI using stimuli of various shapes on a dot-matrix display. Through psychophysical experiments, we clarified that the law of closure, one of the Gestalt factors, is essential for generating VHI.

Latency Measurement in Head-Mounted Virtual Environments

Adam Jones (University of Mississippi), Tykeyah Key (Rust College), Ethan Luckett (University of Mississippi), Nathan D Newsome (Clemson University)

Abstract: In this paper, we discuss a generalizable method to measure end-to-end latency. This is the length of time that elapses between when a real-world movement occurs and when the pixels within a head-mounted display are updated to reflect this movement. The method described here utilizes components commonly available at electronics and hobby shops. We demonstrate this measurement method using an HTC Vive and discuss the influence of its low-persistence display on latency measurement.
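
The abstract does not spell out the signal processing; one common way to turn such hardware measurements into a latency figure is to record the physical motion (e.g., from a sensor on the moving HMD) and the display response (photodiode) at the same sampling rate, then take the lag that maximizes their cross-correlation. This sketch is an assumption about the analysis, not the authors' exact procedure.

```python
import numpy as np

def end_to_end_latency(motion, photodiode, fs):
    """Estimate end-to-end latency in seconds.

    motion:     samples from a sensor on the physically moving HMD.
    photodiode: samples of display brightness reacting to that motion.
    fs:         shared sampling rate in Hz.
    """
    m = (motion - np.mean(motion)) / (np.std(motion) + 1e-9)
    p = (photodiode - np.mean(photodiode)) / (np.std(photodiode) + 1e-9)
    corr = np.correlate(p, m, mode="full")
    lag = int(np.argmax(corr)) - (len(m) - 1)  # positive: display lags motion
    return max(lag, 0) / fs
```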

Developing an accessible evaluation method of VR cybersickness

Takurou Magaki (Future University Hakodate), Michael Vallance (Future University Hakodate)

Abstract: Virtual Reality has yet to make a significant impact in conventional education due to its high cost, unconvincing learning data, and cybersickness. To alleviate this dilemma, it is necessary to develop a straightforward and reliable measurement of cybersickness for VR application developers and mainstream educators. The Empatica E4 wearable device and its ecosystem were utilized to record Heart Rate Variability (HRV) and Electrodermal Activity (EDA) during customized computer-based and VR tasks with 16 participants. The metrics of NNMean, SDNN, RMSSD, and the Poincaré Plot in the HRV data, and SCR width in the EDA data, were found to be potential indicators of cybersickness.
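
The HRV metrics named in the abstract have standard time-domain definitions; for reference, a sketch computing them from a series of NN (normal-to-normal) inter-beat intervals follows. The Poincaré plot is summarised here by its usual SD1/SD2 descriptors, which is an assumption about how the plot was quantified.

```python
import numpy as np

def hrv_metrics(nn_ms):
    """Time-domain HRV metrics from NN intervals given in milliseconds."""
    nn = np.asarray(nn_ms, dtype=float)
    diff = np.diff(nn)
    sdnn = nn.std(ddof=1)                     # SDNN
    sdsd = diff.std(ddof=1)                   # std of successive differences
    return {
        "NNMean": nn.mean(),
        "SDNN": sdnn,
        "RMSSD": np.sqrt(np.mean(diff ** 2)),
        "SD1": sdsd / np.sqrt(2.0),                                # Poincare width
        "SD2": np.sqrt(max(2.0 * sdnn**2 - 0.5 * sdsd**2, 0.0)),   # Poincare length
    }
```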

Architectural Design in Virtual Reality and Mixed Reality Environments: A Comparative Analysis

Oğuzcan Ergün (Middle East Technical University), Şahin Akın (Middle East Technical University), Ipek Gursel Dino (Middle East Technical University), Elif Surer (Middle East Technical University)

Abstract: In this study, we evaluate the merit of multiple interaction environments for architectural design with a test on 21 architecture students. For this purpose, we updated our initial prototype of an MR-based environment and extended it to VR. The interaction cases were: 1) an MR environment using HoloLens with gestures, 2) HoloLens with a clicker, 3) a VR environment using HTC Vive with two controllers, and 4) the HoloLens emulator. We used presence, usability, and technology acceptance questionnaires for the evaluation. Our results reveal that the VR environment is the most natural interaction medium. Additionally, participants preferred the MR and VR environments over the emulator.

Virtual Hand Illusion: the Alien Finger Motion Experiment

Agata Marta Soccini (University of Torino), Marco Grangetto (Turin University), Tetsunari Inamura (National Institute of Informatics), Sotaro Shimada (Meiji University)

Abstract: We present a contribution to a better understanding of the Sense of Embodiment by assessing two of its main components, body ownership and agency, through an experiment involving alien finger motion. The key aspect of the experimental protocol is to integrate a condition with personalized alien finger movement while the subject is asked to remain still. Body ownership appears to be significantly reduced, but not agency. We also propose a metric to quantitatively assess whether viewing the alien movement induces more finger posture variation compared to the reference still condition.

Virtual Garment using Joint Landmark Prediction and Part Segmentation

Yi Xu (OPPO US Research Center), Shanglin Yang (JD.COM American Technologies), Wei Sun (NCSU), Li Tan (JD.com), Kefeng Li (JD.COM), Hui Zhou (JD.COM American Technologies)

Abstract: We present a novel approach that constructs 3D virtual garment models from photos. Our approach only requires two images as input, one front and one back. We first apply a multi-task learning network that jointly predicts fashion landmarks and parses a garment image into semantic parts. The predicted landmarks are used to deform a template mesh to generate a 3D garment model. The semantic parts are utilized to extract color textures for the model.

Semantic Labeling and Object Registration for Augmented Reality Language Learning

Brandon Huynh (University of California, Santa Barbara), Jason Orlosky (Osaka University), Tobias Höllerer (University of California, Santa Barbara)

Abstract: We propose an Augmented Reality vocabulary learning interface in which objects in a user’s environment are automatically recognized and labeled in a foreign language. Using AR for language learning in this manner is still impractical for a number of reasons. Scalable object recognition and consistent labeling of objects is still a significant challenge, and interaction with arbitrary physical objects is not well explored. To address these challenges, we present a system that utilizes real-time object recognition to perform semantic labeling and object registration in Augmented Reality, and discuss how it can be applied to AR language learning applications.

Improve the Decision-making Skill of Basketball Players by an Action-aware VR Training System

Wan-Lun Tsai (Department of Computer Science and Information Engineering), Li-Wen Su (National Cheng Kung University), Tsai-Yen Ko (Department of Computer Science and Information Engineering), Cheng-Ta Yang (Department of Psychology), Min-Chun Hu (National Cheng Kung University)

Abstract: Decision-making is an essential part of basketball offense. In this paper, we propose a VR training system for offensive decision-making in basketball. During training, the trainee can intuitively interact with the system by wearing a motion capture suit and be trained in different virtual defensive scenarios designed by professional coaches. The system recognizes the offensive action performed by the user and provides corrective suggestions when he/she makes a bad offensive decision. We compared the effectiveness of training protocols using a conventional tactics board and the proposed VR system with pre-recorded 360-degree panorama video and computer-simulated virtual content.

Quick Estimation of Detection Thresholds for Redirected Walking with Method of Adjustment

Weiya Chen (Huazhong University of Science and Technology), Yangliu Hu (Huazhong University of Science and Technology), Nicolas Ladeveze (CNRS, Université Paris-Saclay), Patrick M Bourdot (CNRS)

Abstract: A method that allows quick estimation of Redirection Detection Thresholds (RDTs) is not only useful for identifying factors that contribute to the detection of redirection, but can also provide timely inputs for personalized redirected walking control. To achieve quick RDT estimation, we opted for a classical psychophysical method, the Method of Adjustment (MoA), and compared it against a commonly used method for RDT estimation (i.e., MCS-2AFC). Preliminary results show that MoA saves about 33% of experiment time compared with MCS-2AFC while yielding overall similar RDT estimates on the same population.
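
For readers unfamiliar with the Method of Adjustment: the participant directly adjusts the stimulus (here, a redirection gain) until it is just imperceptible, and the threshold is estimated from the final settings. The loop below is a schematic of that idea only; the callback, starting gain, step size, and trial count are all illustrative assumptions.

```python
import numpy as np

def method_of_adjustment(perceives_gain, start_gain=1.3, step=0.02, trials=10):
    """Schematic MoA: lower the gain until the participant no longer
    notices the redirection; average the final settings over trials.

    perceives_gain: callable standing in for the participant's yes/no
    judgement of whether the redirection at this gain is noticeable.
    """
    settings = []
    for _ in range(trials):
        gain = start_gain
        while gain > 1.0 and perceives_gain(gain):
            gain -= step                      # participant nudges the gain down
        settings.append(gain)
    return float(np.mean(settings))           # estimated detection threshold
```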

A Real-Time Music VR System for 3D External and Internal Articulators

Jun Yu (University of Science and Technology of China)

Abstract: Both external and internal articulators are crucial to generating avatars in VR. Compared to traditional talking heads that model only appearance, we enhance the model to a 3D singing head driven by a music signal as input, and focus on the entire head. Experiments show that our system can not only produce realistic animation but also significantly reduce the dependence on training data.

Avatars for Co-located Collaborations in HMD-based Virtual Environments

Jens Herder (HS Düsseldorf, University of Applied Sciences), Nico Brettschneider (HS Düsseldorf, University of Applied Sciences), Jeroen de Mooij (Weird Reality), Bektur Ryskeldiev (University of Tsukuba)

Abstract: Multi-user virtual reality is transforming into a social activity that is no longer used only by remote users, but also in large-scale location-based experiences. We examine the usage of realtime-tracked avatars in co-located, business-oriented applications with a "guide-user scenario" with respect to the user-related factors of Spatial Presence, Social Presence, User Experience, and Task Load. A user study was conducted to compare the two techniques of a realtime-tracked avatar and a non-visualised guide. Results reveal that the avatar guide enhanced and stimulated communicative processes while facilitating interaction possibilities and creating a higher sense of mental immersion and engagement for users.

Evaluation of a Wheelchair Simulator Using Limited-Motion Patterns and Vection-Inducing Movies

Akihiro Miyata (Nihon University), Hironobu Uno (Nihon University), Kenro Go (Nihon University)

Abstract: Existing virtual reality (VR) based wheelchair simulators have difficulty providing both visual and motion feedback at low cost. To address this issue, we propose a VR-based wheelchair simulator using a combination of motions attainable by an electric-powered wheelchair and vection-inducing movies displayed on a head-mounted display. This approach enables the user to have a richer simulation experience, because the scenes of the movie change as if the wheelchair performs motions that are not actually performable. We developed a proof of concept using only consumer products and conducted evaluation tasks.

Augmented Chair: Exploring the Sittable Chair in Immersive Virtual Reality for Seamless Interaction

Ping-Hsuan Han (National Taiwan University), Jia-Wei Lin (National Taiwan University), Jhih-Hong Hsu (National Taiwan University), Chiao-En Hsieh (National Taiwan University), Lily Tsai (National Taiwan University), Yuan-An Chan (National Taiwan University), Wan-Ting Huang (National Taiwan University), Yi-Ping Hung (Tainan National University of the Arts)

Abstract: Virtual reality has been a promising technique for providing an immersive experience. Compared to traditional multimedia, when the user wants to take a rest or change position, such as by sitting down, the chair might not be "sittable" because of inconsistencies between the physical and the virtual chair. In this work, we attached a tracker to a physical chair and conducted a user study to explore users' sitting behavior when interacting with different forms of the virtual chair. Results indicate that visualizing each part of the chair provides different information and affects trust and preference.

RetroTracker: Upgrading Existing Virtual Reality Tracking Systems

Kylee Michelle Krzanich (Stanford University), Kaan Aksit (NVIDIA Research), Eric Whitmire (University of Washington), Michael Stengel (NVIDIA Corporation), David Luebke (NVIDIA Research), Michael Kass (NVIDIA Research)

Abstract: Virtual reality systems often make use of spatially tracked hand-held props to add realism and interaction. Tracking these objects today relies on the use of expensive, bulky, and power-consuming trackers. We propose a passive tracking technique that works with existing low-cost, off-the-shelf optical tracking components and is capable of turning any object into a tracked VR prop. Our method utilizes paper-thin retro-reflective markers that can be placed anywhere on everyday objects. The proof-of-concept prototype allows bringing physical real-world objects to virtual worlds with ease and provides an object identification technique using patterned markers.

Panoramic fluid painting

Shengyu Du (Tongji University), Ting Ge (Tongji University), Jingyi Pei (Tongji University), Jianmin Wang (Tongji University), Changqing Yin (Tongji University), Yongning Zhu (Tongji University)

Abstract: The dynamic motion of fluids is essential in generating aesthetically appealing effects such as oriental ink painting and fluid art. Due to the prohibitively high cost of volumetric fluid simulation, implementing an interactive volumetric painting system in immersive virtual environments (IVEs) is challenging. We propose a framework to generate immersive fluid-dynamic environments by solving the Navier-Stokes equations on the viewing sphere. With this approach, we largely reduce the complexity without losing effective resolution. We demonstrate our method in a real-time 360-degree painting system and verify the usability of our interface prototype with examples.

Emotion recognition in gamers wearing head-mounted display

Hwanmoo Yong (Yonsei University), Jisuk Lee (Motion Device Inc.), Jongeun Choi (Yonsei University)

Abstract: Wearing a head-mounted display (HMD) renders previous machine-vision approaches to emotion recognition ineffective, since they were trained on images of the entire face. In this paper, we trained convolutional neural networks (CNNs) capable of estimating emotions from images of a face wearing an HMD, by hiding the eyes and eyebrows in an existing face-emotion dataset. Our analysis based on class activation maps shows that the networks can classify emotions without the eyes and eyebrows, which are known to carry useful information for recognizing emotions. This implies the possibility of estimating the emotions of humans wearing HMDs using machine vision.
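
The dataset modification described (hiding eyes and eyebrows) can be approximated by blacking out the band of an aligned face crop where an HMD would sit; the band fractions below are illustrative guesses, not the paper's exact values.

```python
import numpy as np

def mask_hmd_region(face_img, top_frac=0.25, bottom_frac=0.55):
    """Black out the eye/eyebrow band of an aligned face crop.

    face_img: (H, W) or (H, W, C) array; returns a masked copy."""
    out = face_img.copy()
    h = out.shape[0]
    out[int(top_frac * h):int(bottom_frac * h), :] = 0
    return out
```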

Did you see what I saw?: Comparing user synchrony when watching 360° Video in HMD vs Flat Screen

Harry Farmer (University of Bath), Chris Bevan (University of Bristol), David Philip Green (University of the West of England), Mandy Rose (University of the West of England), Kirsten Cater (University of Bristol), Danaë Stanton Fraser (University of Bath)

Abstract: This study examined whether the high level of immersion provided by HMDs encourages participants to synchronise their attention during viewing. 39 participants watched the 360° documentary "Clouds Over Sidra" using either an HMD or a flat-screen tablet display. We found that the HMD group showed significantly greater overall inter-subject correlation (ISC) than did the tablet group, and that this effect was strongest during transitions between scenes.

Supporting Visual Annotation Cues in a Live 360 Panorama based Mixed Reality Remote Collaboration

Theophilus Hua Lid Teo (University of South Australia), Gun Lee (University of South Australia), Mark Billinghurst (University of South Australia), Matt Adcock (CSIRO)

Abstract: We propose enhancing live 360 panorama-based Mixed Reality (MR) remote collaboration by supporting visual annotation cues. Prior work on live 360 panorama-based collaboration used MR visualization to overlay visual cues, such as view frames and virtual hands, yet these were not registered onto the shared physical workspace and hence had limited accuracy for pointing at or marking objects. Our prototype system uses the spatial mapping and tracking features of an Augmented Reality head-mounted display to show visual annotation cues accurately registered onto the physical environment. We describe the design and implementation details of our prototype system and discuss how such features could help improve MR remote collaboration.

Estimation of Detection Thresholds for Redirected Turning

Junya Mizutani (The University of Tokyo), Keigo Matsumoto (The University of Tokyo), Ryohei Nagao (the University of Tokyo), Takuji Narumi (the University of Tokyo), Tomohiro Tanikawa (the University of Tokyo), Michitaka Hirose (The University of Tokyo)

Abstract: Redirection makes it possible to walk around a vast virtual space within a limited real space, while providing a natural walking sensation, by applying a gain to the amount of movement in real space. However, existing methods cannot manipulate the walking path while maintaining the naturalness of walking when turning at a corner. To realize natural manipulation for turning at a corner, this study proposes novel "turning gains", which describe the ratio between real and virtual turning angles. The results of an experiment aimed at estimating the detection thresholds of turning gains indicate that when the turning radius is 0.5 m, discrimination is more difficult than with rotation gains (r = 0.0 m).
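
As a minimal illustration of the gain concept (not the authors' controller), a turning gain maps physical turning onto virtual turning, so a virtual 90-degree corner can be negotiated with less or more physical rotation:

```python
def apply_turning_gain(real_yaw_delta_deg, gain):
    """Virtual turn = gain * physical turn; gain = 1.0 is veridical."""
    return gain * real_yaw_delta_deg

# e.g. with a gain of 1.2, a 90-degree virtual corner needs only
# 90 / 1.2 = 75 degrees of physical turning.
physical_turn_needed = 90.0 / 1.2
```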

Investigating Spherical Fish Tank Virtual Reality Displays for Establishing Realistic Eye-Contact

Georg Hagemann (University of British Columbia), Qian Zhou (University of British Columbia), Ian Stavness (University of Saskatchewan), Sidney S Fels (University of British Columbia)

Abstract: Spherical Fish Tank Virtual Reality (FTVR) has been proposed as a potential solution for providing realistic eye-contact cues. This study investigates the impact of a planar versus a spherical display shape on realistic eye-contact using FTVR technology in a user-centered gaze perception task. In our study, we found that 87.5% of users reported that eye-contact is more realistic with the spherical display, and that this effect persisted for more than 30 minutes, while the realism effect decayed for the planar display. Further, there was no loss of task performance with the spherical display.

Rotbav: A Toolkit for Constructing Mixed Reality Apps with Real-Time Roaming in Large Indoor Physical Spaces

Huan Xing (Shandong University), Xiyu Bao (Shandong University), Fan Zhang (Shandong University), Wei Gai (Shandong University), Meng Qi (Shandong Normal University), Juan Liu (Shandong University), Yuliang Shi (Shandong University), Gerard de Melo (Rutgers University), Chenglei Yang (Shandong University), Xiangxu Meng (Shandong University)

Abstract: This paper presents a toolkit called Rotbav for easily constructing mixed reality (MR) apps that can be experienced in real time in large indoor physical spaces via HoloLens. It addresses the problem that existing MR devices, e.g., HoloLens, are unable to scan and model an entire large scene with several rooms at once. We introduce a custom data structure called VorPa, based on the Voronoi diagram, to implement path editing, accelerated rendering, and localization effectively. Our experiments and applications show that the toolkit is convenient and easy to use for constructing MR apps targeting large indoor physical spaces, in which users can roam in real time.

Assessing Media QoE, Simulator Sickness and Presence for Omnidirectional Videos with Different Test Protocols

Ashutosh Singla (TU Ilmenau), Rakesh Rao Ramachandra Rao (TU Ilmenau), Steve Göring (TU Ilmenau), Alexander Raake (TU Ilmenau)

Abstract: QoE for omnidirectional videos comprises additional components such as simulator sickness and presence. In this paper, a series of tests is presented comparing different test protocols to assess integral quality, simulator sickness, and presence for omnidirectional videos in one test run, using an HTC Vive Pro head-mounted display. For the complementary constructs, the well-established Simulator Sickness Questionnaire and Presence Questionnaire methods were used, once in a full version and once with only a single integral scale, to analyze the usage of a single-scale approach for presence and simulator sickness. Additionally, the inter-relation between the ratings for quality, presence, and simulator sickness is investigated.

A Research Framework for Virtual Reality Neurosurgery Based on Open-Source Tools

Lukas D.J. Fiederer (Medical Center – University of Freiburg), Hisham Alwanni (Medical Center – University of Freiburg), Martin Völker (Medical Center – University of Freiburg), Oliver Schnell (Medical Center – University of Freiburg), Jürgen Beck (Medical Center – University of Freiburg), Tonio Ball (Medical Center – University of Freiburg)

Abstract: Fully immersive virtual reality (VR) has the potential to improve neurosurgical planning. However, there is a lack of research tools for this area. We present a research framework for VR neurosurgery based on open-source tools. We showcase the potential of such a framework using clinical data of two patients and research data of one subject. As a first step toward practical evaluations, certified neurosurgeons positively assessed the VR visualizations and interactions using head-mounted displays. The methods and findings described in our study thus provide a foundation for research and development of versatile and user-friendly VR tools for improving neurosurgical planning and training.

Brain Activity in Virtual Reality: Assessing Signal Quality of High-Resolution EEG While Using Head-Mounted Displays

Stephan Hertweck (University Medical Center), Desirée Weber (University Medical Center), Hisham Alwanni (University Medical Center), Fabian Unruh (University of Würzburg), Martin Fischbach (University of Würzburg), Marc Erich Latoschik (University of Würzburg), Tonio Ball (Medical Center – University of Freiburg)

Abstract: Biometric measures like electroencephalography (EEG) are promising techniques for investigating psychophysical effects in Virtual Reality (VR) systems. However, it remains largely unclear to what extent head-mounted displays (HMDs) affect the quality of EEG signals. In this study, we present a novel experimental protocol to evaluate the influence of HMDs on EEG signals. Our work shows that HMDs indeed induced artifacts, especially above frequencies of 50 Hz, while the frequency range most relevant to EEG research (below 50 Hz) remained largely uninfluenced. We further confirmed the latter by reproducing a well-known physiological effect.

Spatial Presence in Real and Remote Immersive Environments

Nawel Khenak (Paris-Saclay), Jean-Marc Vezien (CNRS), David Thery (Université Paris Sud - CNRS), Patrick M Bourdot (CNRS, Université Paris-Saclay)

Abstract: This paper presents an experiment assessing Spatial Presence in both real and remote environments. Twenty-eight participants performed a 3D-pointing task while located in a real office and in the same office remotely rendered in an HMD. Spatial Presence was evaluated by means of questionnaires and analysis of users' behaviour. The analysis also included the effect of different levels of immersion. The results show higher perceived spatial presence for the remote condition, regardless of the level of immersion, and for the high-immersion condition, regardless of the environment. Trajectory analysis of users' heads reveals that participants behaved similarly in both environments.

Passenger Anxiety when Seated in a Virtual Reality Self-Driving Car

Alexandros Fabio Koilias (University of the Aegean), Christos Mousas (Purdue University), Banafsheh Rekabdar (Southern Illinois University), Christos-Nikolaos Anagnostopoulos (University of the Aegean)

Abstract: A virtual reality study was conducted to understand participants' anxiety when immersed in a virtual reality trip in a self-driving car. Participants were placed as passengers in a virtual car, seated in the co-driver's seat. Five different conditions were examined. The Anxiety Modality Questionnaire, which captures both the somatic and cognitive anxiety of participants, was used. The results indicated lower levels of anxiety when the driver is either fully or partially aware of the traffic and the behavior of the car, and higher anxiety levels when the driver is completely unaware.

A Capacitive-sensing Physical Keyboard for VR Text Entry

Tim Menzner (Coburg University of Applied Sciences and Arts), Alexander Otte (Coburg University of Applied Sciences and Arts), Travis Gesslein (Coburg University of Applied Sciences and Arts), Philipp Gagel (Coburg University of Applied Sciences and Arts), Daniel Schneider (Coburg University), Jens Grubert (Coburg University)

Abstract: In the context of immersive VR Head-Mounted Displays, physical keyboards have been proven to be an efficient typing interface. However, text entry using physical keyboards typically requires external camera-based tracking systems.

Touch-sensitive physical keyboards allow for on-surface interaction, with sensing integrated into the keyboard itself, but they have not been utilized for VR. We propose touch-sensitive physical keyboards for text entry as an alternative sensing mechanism for tracking users' fingertips, and present a first prototype for VR.

Determining Design Requirements for AR Physics Education Applications

Corey Richard Pittman (University of Central Florida), Joseph LaViola (University of Central Florida)

Abstract: Education is a domain that often drives innovation with emerging technologies. One particular subject which benefits from additional visualization capabilities is physics. In this paper, we present the results of a series of interviews with secondary school teachers about their experience with AR and the features that would be most beneficial to them from a pedagogical perspective. To gather meaningful information, a prototype application was developed and presented to the teachers. Based on the feedback collected from the teachers, we present a set of design recommendations for AR physics education tools, along with other useful comments collected.

Exploring the Usability of Nesplora Aquarium, a Virtual Reality system for neuropsychological assessment of attention and executive functioning

Alexandra Voinescu (University of Bath), Liviu Andrei Fodor (Babeș-Bolyai University), Danaë Stanton Fraser (University of Bath), Miguel Mejías (Nesplora Technology & Behavior), Daniel David (Babes-Bolyai University)

Abstract: Virtual reality (VR) has proved to be an efficient alternative to traditional neuropsychological assessment. As VR has become more affordable, it is ready to break out of the laboratory and enter homes and clinical practices. We present preliminary findings from a study designed to evaluate self-reported usability of a VR test for neuropsychological assessment of attention and executive function.

A Fast Multi-RGBD-Camera Calibration

Stuart Duncan (University of Otago), Holger Regenbrecht (University of Otago), Tobias Langlotz (University of Otago)

Abstract: Calibrating multiple depth cameras to a common coordinate space can be a laborious and time-consuming task, and often relies on bespoke motion-capture tracking systems. If accuracy constraints can be relaxed, then a consumer-grade virtual reality tracking system is sufficient and the calibration process can be simplified. We present a fast and convenient camera calibration method aimed at such systems. The calibration process can be carried out in about ten minutes, and the resulting calibration is sufficiently accurate to achieve spatial coherence when targeting a voxel reconstruction on an 8 mm grid over a ≈2.2 m^3 capture volume.
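
The abstract does not state the alignment math; the classical closed-form step for registering two cameras' corresponding 3D points (e.g., positions of a tracked probe seen by both depth cameras) is the Kabsch/Umeyama rigid fit, sketched below as an assumption about one plausible building block.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (no scaling): R @ s + t ~ d for each
    matched point pair. src, dst: (N, 3) arrays of corresponding points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)         # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # reflection-safe rotation
    t = mu_d - R @ mu_s
    return R, t
```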

Ground Camera Images and UAV 3D Model Registration for Outdoor Augmented Reality

Weiquan Liu (Xiamen University), Cheng Wang (Xiamen University), Yu Zang (Xiamen University), Shang-Hong Lai (National Tsing Hua University), Dongdong Weng (Beijing Institute of Technology), Xuesheng Bian (Xiamen University), Xiuhong Lin (Xiamen University), Xuelun Shen (Xiamen University), Jonathan Li (Xiamen University)

Abstract: This paper presents a novel virtual-real registration approach for AR in large-scale outdoor environments. Essentially, it estimates the pose of mobile camera images within a 3D model recovered from a UAV image sequence via structure-from-motion (SfM). The approach indirectly establishes the spatial relationship between 2D and 3D space by inferring the transformation between ground camera images and rendered images of the UAV 3D model. Specifically, our approach can overcome slight positioning errors, such as GPS drift and orientation deviation. Experiments demonstrate the feasibility of the proposed approach, which is robust, efficient, and intuitive for AR in large-scale outdoor environments.

Information Placement in Virtual Reality

Ann McNamara (Texas A&M University), Annie Suther (Texas A&M University), David Oh (Texas A&M University), Katherine Boyd (Texas A&M University), Joanne George (Texas A&M University), Weston Jones (Texas A&M University)

Abstract: In this poster, we develop a technique for placing informational labels in complex Virtual Environments (VEs). The ability to effectively and efficiently present labels in VEs is valuable in Virtual Reality (VR) for many reasons, but our motivation is to move closer to a system that delivers information in VR in an optimal way without causing information overload. The novelty of this technique lies in the use of eye tracking as an accurate indicator of attention to identify objects of interest.

A Mixed Presence Collaborative Mixed Reality System

Mitchell Norman (University of South Australia), Gun Lee (University of South Australia), Ross Smith (University of South Australia), Mark Billinghurst (University of South Australia)

Abstract: Research has shown that Mixed Presence Groupware (MPG) systems are a valuable collaboration tool. However, research into MPG systems is limited to a handful of tabletop and Virtual Reality (VR) systems, with no exploration of Head-Mounted Display (HMD) based Augmented Reality (AR) solutions. We present a new system with two local users and one remote user using HMD-based AR interfaces. Our system provides tools allowing users to lay out a room with the help of a remote user. The remote user has access to marker and pointer tools to assist in directing the local users. Feedback collected from several groups of users showed that our system is easy to learn but could offer better accuracy and consistency.

Collaborative Data Analytics Using Virtual Reality

Huyen Nguyen (University of New South Wales), Benjamin Ward (Murdoch University), Ulrich Engelke (CSIRO), Bruce H Thomas (University of South Australia), Tomasz Bednarz (CSIRO Data61)

Abstract: Immersive analytics allows a large amount of data and complex structures to be investigated concurrently. We propose a collaborative analytics system that benefits from new advances in immersive technologies for collaborators working in the early stages of data exploration. We implemented a combination of Star Coordinates and Star Plot visualisation techniques to support the visualisation of multidimensional data and the encoding of datasets using simple and compact visual representations. To support data analytics tasks, we propose tools and interaction techniques for users to build decision trees, an approach to visualising and analysing data in a top-down categorical manner.

A 6-DOF Telexistence Drone Controlled by a Head Mounted Display

Xingyu Xia (Nanjing University of Posts and Telecommunications), Chi-Man Pun (University of Macau), Di Zhang (Nanjing University of Posts and Telecommunications), Yang Yang (Nanjing University of Posts and Telecommunications), Huimin Lu (Kyushu Institute of Technology), Hao Gao (Nanjing University of Posts and Telecommunications), Feng Xu (Tsinghua University)

Abstract: Recently, a new form of telexistence has been achieved by recording images with cameras on an unmanned aerial vehicle (UAV) and displaying them to the user via a head-mounted display (HMD). A key problem here is how to provide a free and natural mechanism for the user to control the viewpoint and watch a scene. We propose an improved rate-control method with an adaptive origin update (AOU) scheme, which handles the self-centering problem without the aid of any auxiliary equipment. In addition, we present a full 6-DOF viewpoint control method to manipulate the motion of a stereo camera.

Real-time Animation and Motion Retargeting of Virtual Characters based on Single RGB-D Camera

Ning Kang (Beihang University), Junxuan Bai (Beihang University), Junjun Pan (Beihang University), Hong Qin (Stony Brook University)

Abstract: The rapid generation and flexible reuse of character animation with commodity devices are of significant importance to rich digital content production in virtual reality. This paper aims to handle the challenges of motion imitation for the human body in several indoor scenarios (e.g., fitness training). We develop a real-time system based on a single Kinect device, which is able to capture stable human motions and retarget them to virtual characters. A large variety of motions and characters were tested to validate the efficiency and effectiveness of our system.

Virtual Reality Wound Care Training for Clinical Nursing Education: An Initial User Study

Kup-Sze Choi (The Hong Kong Polytechnic University)

Abstract: Wound care is an essential nursing competency, where dressing change is an important component. Compliance with aseptic procedures and techniques is necessary to reduce the risk of infection. Proficiency in the skills can be developed through adequate practice. In this paper, use of virtual reality is proposed to provide more practice opportunity. An immersive virtual environment is developed to simulate the steps of changing a simple wound dressing. Positive comments are obtained from an initial user study on usability with an experienced nurse and an undergraduate nursing student. Comprehensive evaluation will be conducted to further improve the simulation.

A UMI3D-based Interactions Analytics System for XR Devices and Interaction Techniques

Julien Casarin (Gfi Informatique), Antoine Ladrech (Gfi Informatique), Tristan Alexandre Charles Tchilinguirian (Gfi Informatique), Dominique Bechmann (Strasbourg University)

Abstract: In this paper, we present an interaction analytics system we are working on. With this system, we intend to simplify the evaluation and classification of eXtended Reality (XR) devices and interaction techniques. Our final objective is to release it as an open Cloud platform that will allow researchers to compare their respective results in the field of Human-Computer Interaction with ease. To achieve this, we use the UMI3D exchange protocol to design device-independent 3D environments, and a Cloud analytics platform which stores experimental raw data from these environments as well as from the different devices.

Human Sensitivity to Slopes of Slanted Paths

Luyao Hu (Zhejiang University), Rui Wang (Zhejiang University), Zaifeng Gao (Zhejiang University), Hujun Bao (Zhejiang University), Yaorui Zhang (Zhejiang University), Wei Hua (Zhejiang University)

Abstract: Previous studies have analyzed human sensitivity to redirected walking in the horizontal direction, but users also need to change their height. In this work, we expand the vertical movement space by positioning users on virtual paths whose slopes differ from those of the real paths. We conduct psychological experiments to explore human sensitivity to slope gains, which describe the discrepancies between the slopes of paths in virtual and real environments. The investigation shows that humans can walk on virtual slopes that are higher or lower than the real slope without detecting the discrepancy, and establishes the corresponding detection thresholds.

Automatic Generation of Interactive 3D Characters and Scenes for Virtual Reality from a Single-Viewpoint 360-Degree Video

Gregoire Dupont de Dinechin (MINES ParisTech, PSL Research University), Alexis Paljic (MINES ParisTech, PSL-Research University)

Abstract: This work addresses the problem of using real-world data captured from a single viewpoint by a low-cost 360-degree camera to create an immersive and interactive virtual reality scene. We combine different existing state-of-the-art data enhancement methods based on pre-trained deep learning models to quickly and automatically obtain 3D scenes with animated character models from a 360-degree video. We provide details on our implementation and insight on how to adapt existing methods to 360-degree inputs. We also present the results of a user study assessing the extent to which virtual agents generated by this process are perceived as present and engaging.

Visual exploratory activity under microgravity conditions in VR: An exploratory study during a parabolic flight

Cesar Daniel Rojas Ferrer (University of Tsukuba), Hidehiko Shishido (University of Tsukuba), Itaru Kitahara (University of Tsukuba), Yoshinari Kameda (University of Tsukuba)

Abstract: This work explores the human visual exploratory activity (VEA) in a microgravity environment compared to one-G. Parabolic flights are the only way to experience microgravity without astronaut training, and the duration of each microgravity segment is less than 20 seconds. Under such special conditions, the test subject visually searches a virtual representation of the International Space Station located in his Field of Regard (FOR). The task was repeated in two different postural positions. Interestingly, the test subject reported a significant reduction of microgravity-related motion sickness while experiencing the VR simulation, in comparison to his previous parabolic flights without VR.

Evaluation of Pointing Interfaces with an AR Agent for Multi-section Information Guidance

Nattaon Techasarntikul (Osaka University), Photchara Ratsamee (Osaka University), Jason Orlosky (Osaka University), Tomohiro Mashita (Osaka University), Yuki Uranishi (Osaka University), Kiyoshi Kiyokawa (Nara Institute of Science and Technology), Haruo Takemura (Osaka University)

Abstract: Augmented Reality (AR) has the potential to provide information about exhibits. However, dealing with items that have information in multiple areas, such as an intricate large painting, is still a significant challenge. We introduce an AR guidance system that uses an embodied agent to point out and explain each part of an exhibit item. We designed and evaluated three different pointing interfaces: gesture only, gesture with a dot laser, and gesture with a line laser. The results show that search times for target positions were fastest with the line laser. However, no particular interface outperformed the others in memory recall of exhibit content.

Embodying an Extra Virtual Body in Augmented Reality

Nina Rosa (Utrecht University), Jean-Paul van Bommel (Utrecht University), Wolfgang Hürst (Utrecht University), Tanja Nijboer (Utrecht University), Remco C. Veltkamp (Utrecht University), Peter Werkhoven (Utrecht University)

Abstract: Presence and the sense of embodiment are essential concepts for the experience of our self and virtual bodies, but there is little quantitative evidence for a relation between them, and the relation becomes more complicated when there are both real and virtual bodies in augmented reality (AR). We investigate the experience of body ownership, agency, self-location, and self-presence in AR, where users can see their real body as well as a virtual body from behind. Active arm movement congruency and virtual anthropomorphism are varied. We found significant effects of movement congruency but not anthropomorphism, a strong correlation between self-presence and body ownership, and moderate correlations of self-presence with agency and self-location.

Enchanting Your Noodles: GAN-based Real-time Food-to-Food Translation and Its Impact on Vision-induced Gustatory Manipulation

Kizashi Nakano (Nara Institute of Science and Technology), Daichi Horita (The University of Electro-Communications), Nobuchika Sakata (NAIST), Kiyoshi Kiyokawa (Nara Institute of Science and Technology), Keiji Yanai (The University of Electro-Communications), Takuji Narumi (the University of Tokyo)

Abstract: We propose a novel gustatory manipulation interface which utilizes the cross-modal effect of vision on taste elicited with AR-based real-time food appearance modulation using a generative adversarial network (GAN). Our system changes the appearance of food into multiple types of food in real-time flexibly, dynamically and interactively in accordance with the deformation of the food that the user is actually eating by using GAN-based image-to-image translation. The experimental results reveal that our system successfully manipulates gustatory sensations to some extent and that the effectiveness depends on the original and target types of food as well as each user’s food experience.

Towards an Affordable Virtual Reality Solution for Cardiopulmonary Resuscitation Training

Samali Udara Liyanage (University of Colombo School of Computing), Lakshman Jayaratne (University of Colombo School of Computing), Manjusri Wickramasinghe (University of Colombo School of Computing), Aruna Munasinghe (District Hospital)

Abstract: Currently, CPR training is carried out using mechanical manikins, which have drawbacks in terms of realism, cost and durability. Since VR has entered a new phase of wider availability and affordability, a low-cost VR-based solution can now be proposed to address some of these issues with the mechanical manikin. Hence, this study presents an approach in which the mechanical manikin has been augmented with VR. To test the viability and acceptance of this solution, a user-based evaluation was conducted with a group of experts and novices in CPR, and both groups expressed a favourable view of this basic solution.

Localizing Teleoperator Gaze in 360° Hosted Telepresence

Jingxin Zhang (Universität Hamburg), Nikolaos Katzakis (Universität Hamburg), Fariba Mostajeran (University of Hamburg), Frank Steinicke (Universität Hamburg)

Abstract: We evaluate the ability of locally present participants to localize an avatar head’s gaze direction in 360° hosted telepresence. We performed a controlled user study to test two potential solutions for indicating a remote user’s gaze. We analyze the influence of the user’s distance to the avatar and of the display technique on localization accuracy. Our experimental results suggest that all of these factors have a significant effect on localization accuracy, with varying effect sizes.

Sphere in Hand: Exploring Tangible Interaction with Immersive Spherical Visualizations

David Englmeier (LMU Munich), Isabel Schönewald (LMU Munich), Andreas Butz (LMU Munich), Tobias Höllerer (University of California, Santa Barbara)

Abstract: Spherical visualizations allow for convenient exploration of certain types of data. Our tangible sphere, exactly aligned with the sphere visualization shown in VR, implements a very natural way of interaction and utilizes senses and skills trained in the real world. This work is motivated by the prospect of creating in VR a low-cost, tangible, robust, handheld spherical display that would be difficult or impossible to implement as a physical display. Our concept makes it possible to gain insights into the impact of a fully tangible embodiment of a virtual object on interaction. We discuss the advantages and disadvantages of our approach, taking into account different handheld spherical displays utilizing outside and inside projection.

Tracking-Tolerant Visual Cryptography

Ruofei Du (University of Maryland, College Park), Eric Lee (University of Maryland), Amitabh Varshney (University of Maryland College Park)

Abstract: We introduce a novel secure display system, which uses visual cryptography with tolerance for tracking. Our algorithm splits a visual message into two shares and the composited result is only available through a head-mounted display or mobile device. In contrast to prior art, our system is able to provide tracking tolerance, making it more practically usable in modern VR and AR systems. We model the probability of misalignment caused by head jitter as a Gaussian distribution. Our algorithm diffuses the second image using the normalized probabilities, thus enabling the visual cryptography to be tolerant of alignment errors due to tracking.
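
To make the diffusion step concrete: if head jitter is modeled as a zero-mean Gaussian over pixel offsets, the second share can be spread across neighboring positions using the normalized probabilities as weights. A minimal NumPy sketch, under our own assumptions about the share representation (the paper's actual scheme may differ):

```python
import numpy as np

def gaussian_offsets(radius=2, sigma=1.0):
    """Weights for pixel offsets within `radius`, drawn from a 2D Gaussian
    and normalized to sum to 1 (a discrete misalignment distribution)."""
    offs, w = [], []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            offs.append((dy, dx))
            w.append(np.exp(-(dx * dx + dy * dy) / (2 * sigma ** 2)))
    w = np.array(w)
    return offs, w / w.sum()

def diffuse_second_share(share2, radius=2, sigma=1.0):
    """Spread each pixel of the second share over neighboring positions,
    weighted by the misalignment probabilities, so the composite image
    degrades gracefully under small tracking errors."""
    offs, w = gaussian_offsets(radius, sigma)
    out = np.zeros_like(share2, dtype=float)
    for (dy, dx), wi in zip(offs, w):
        out += wi * np.roll(np.roll(share2, dy, axis=0), dx, axis=1)
    return out
```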

Interactive Fusion of 360° Images for a Mirrored World

Ruofei Du (University of Maryland, College Park), David Li (University Of Maryland), Amitabh Varshney (University of Maryland College Park)

Abstract: Reconstruction of the physical world in real time has been a grand challenge in computer graphics and 3D vision. In this paper, we introduce an interactive pipeline to reconstruct a mirrored world at two levels of detail. At a fine level of detail for close-up views, we render textured meshes using adjacent local street views and depth maps. When viewed from afar, we apply projection mappings to 3D geometries extruded from building polygons for a coarse level of detail. We present an application of our approach by incorporating it into a mixed-reality social platform, Geollery.

Virtual Reality Synthesis of Robotic Systems for Human Upper-Limb and Hand Tasks

Omid Heidari (Idaho State University), Alba Perez Gracia (Idaho State University)

Abstract: The design of robotic systems for human-hand tasks could benefit from a more intuitive environment for task definition and prototype testing. Virtual reality with human-limb motion identification and visualization is an appropriate environment both for defining the task and for visualizing the solution. In this work we combine a method for the kinematic design of robots with serial and tree structures with a virtual reality and depth-sensing system that allows faithful representation and data extraction for hand and upper-extremity motion. The communications and solver are embedded in the virtual reality programming, yielding a tool for the design of cooperative human-robot systems.

Selection and Manipulation Whole-Body Gesture Elicitation Study In Virtual Reality

Francisco Ortega (Colorado State University), Mathew Kress (Florida International University), Katherine Tarre (Florida International University), Adam Sinclair Williams (Colorado State University), Armando Barreto (Florida International University), Naphtali D. Rishe (Florida International University)

Abstract: We present a whole-body gesture elicitation study using head-mounted displays, including a legacy bias reduction technique. The motivation for this study was to understand gesture agreement rates for selection and manipulation interactions and to improve the user experience for whole-body interactions. We found that, regardless of the production technique used to reduce legacy bias, legacy bias was still present in some of the produced gestures.

Comparison in depth perception between Virtual Reality and Augmented Reality systems

Jiamin Ping, Yue Liu, Dongdong Weng

Abstract: Virtual reality (VR) and augmented reality (AR) applications are widely used in a variety of fields; one of the key requirements in a VR or AR system is to understand how users perceive depth in virtual and augmented environments. This paper presents an experiment comparing users’ depth perception in VR and AR using an optical see-through head-mounted display (HMD). The results show that the accuracy of depth estimation is higher in AR than in VR. Moreover, the matching error increases with distance.

Motivation to Select Point of View in Cinematic Virtual Reality

Andrea Stevenson Won (Cornell University), Tanja Aitamurto (Northwestern University), Byungdoo Kim (Cornell University), Sukolsak Sakshuwong (Stanford), Yasamin Sadeghi (University of California, Los Angeles), Catherine Lynn Kircos (Stanford University)

Abstract: This paper examines the effects of participants’ preferred point of view of two protagonists, and their motivation for this preference, on two viewings of a cinematic 360-degree video filmed from the first-person perspective. Before watching the film, which dramatized gender bias in a STEM workplace, participants were asked to state whether they preferred to view the film from the point of view (POV) of a male protagonist or a female protagonist, or to make no selection. They were then asked why they held this preference. Their answers proved predictive: participants’ tracked head movements, and the events they recalled from the film, differed according to their pre-stated preference and motivation.

Rohith Venkatakrishnan (Clemson University), Ayush Bhargava (Clemson University), Roshan Venkatakrishnan (Clemson University), Kathryn Lucaites (Clemson University), Matias Volonte (Clemson University), Hannah Solini (Clemson University), Andrew Robb (Clemson University), Christopher Pagano (Clemson University), Sabarish V. Babu (Clemson University)

Abstract: The commercialization of Virtual Reality (VR) devices has made it easier for everyday users to experience VR from the comfort of their living rooms. Cybersickness refers to the discomfort an individual can experience in virtual environments. The symptoms are similar to those of motion sickness but are more disorienting in nature, resulting in dizziness, blurred vision, etc. Cybersickness is currently one of the biggest hurdles to the widespread adoption of VR, and it is therefore critical to explore the factors that influence its onset. Towards this end, we present a proof-of-concept simulation to study cybersickness in highly realistic immersive virtual environments.

Matching vs. Non-Matching Visuals and Shape for Embodied Virtual Healthcare Agents

Salam Daher, Jason Hochreiter, Nahal Norouzi, Ryan Schubert, Gerd Bruder, Laura Gonzalez, Mindi Anderson, Desiree Diaz, Juan Cendan, Greg Welch

Abstract: Embodied virtual agents serving as patient simulators are widely used in medical training scenarios, ranging from physical patients to virtual patients presented via virtual and augmented reality technologies. Physical-virtual patients are a hybrid solution that combines the benefits of dynamic visuals integrated into a human-shaped physical form that can also present other cues, such as pulse, breathing sounds, and temperature. In simulation, however, the visuals and the shape sometimes do not match. We carried out a human-participant study employing graduate nursing students in pediatric patient simulations comprising conditions in which the visuals and shape either matched or did not match.

Robust High-Level Video Stabilization for Effective AR Telementoring

Chengyuan Lin (Purdue University), Edgar Rojas-Muñoz (Purdue University), Maria Eugenia Cabrera (Purdue University), Natalia Sanchez-Tamayo (Purdue University), Daniel Andersen (Purdue University), Voicu Popescu (Purdue University), Juan Antonio Barragan Noguera (Purdue University), Ben Zarzaur (Indiana University School of Medicine), Pat Murphy (Indiana University School of Medicine), Kathryn Anderson (Indiana University School of Medicine), Thomas Douglas (Naval Medical Center Portsmouth), Clare Griffis (Naval Medical Center Portsmouth), Juan Wachs (Purdue University)

Abstract: This poster presents the design, implementation, and evaluation of a method for robust high-level stabilization of mentee’s first-person video in augmented reality (AR) telementoring. This video is captured by the front-facing built-in camera of an AR headset and stabilized by rendering from a stationary view a planar proxy of the workspace projectively texture mapped with the video feed. The result is stable, complete, up to date, continuous, distortion free, and rendered from the mentee’s default viewpoint. The stabilization method was evaluated in two user studies, in the context of number matching and for cricothyroidotomy training, respectively. Both showed a significant advantage of our method compared with unstabilized visualization.
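
The stabilization idea, re-rendering the workspace plane from a fixed viewpoint, can be approximated in image space with a per-frame homography. A rough OpenCV sketch (our simplification; the authors render a projectively texture-mapped planar proxy rather than warping with OpenCV):

```python
import cv2
import numpy as np

def stabilize_frame(frame, workspace_corners_px, out_size=(1280, 720)):
    """Warp the four tracked corners of the planar workspace proxy
    (pixel coordinates in the current headset frame) to a canonical,
    stationary view, yielding a stabilized image of the workspace."""
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    src = np.float32(workspace_corners_px)       # order: TL, TR, BR, BL
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, H, out_size)
```

As long as the workspace is approximately planar and its corners can be tracked from the headset pose, every incoming frame maps to the same canonical view, which is what makes the mentor's display stable.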

Human Perception of a Haptic Shape-changing Interface with Variable Rigidity and Size

Alberto Boem (University of Tsukuba), Yuuki Enzaki (University of Tsukuba), Hiroaki Yano (University of Tsukuba), Hiroo Iwata (University of Tsukuba)

Abstract: This paper studies the characteristics of human perception of a haptic shape-changing interface capable of altering its size and rigidity simultaneously to present characteristics of virtual objects physically. The haptic interface is composed of an array of computer-controlled balloons, with two mechanisms developed for dynamically changing their size and rigidity. We conducted a psychophysical experiment with twenty subjects to measure the perceived sensory thresholds of these two cues. Our results show that such a system can present a wide range of rigidities and size variations in a way that is compatible with human haptic perception.

Enhanced Geometric Techniques for Point Marking in Model-Free Augmented Reality

Wallace S Lages (Virginia Tech), Yuan Li (Virginia Tech), Lee Lisle (Virginia Tech), Feiyu Lu (Virginia Tech), Tobias Höllerer (University of California, Santa Barbara), Doug Bowman (Virginia Tech)

Abstract: Specifying points in three-dimensional space is essential in AR applications. Geometric triangulation is a straightforward way to specify points, but its naive implementation has low precision. We designed two enhanced geometric techniques for 3D point marking: VectorCloud, which uses multiple rays to reduce jittering, and ImageRefinement, which allows 3D ray refinement to improve precision. Our experiments, conducted in both simulated and real AR, demonstrate that both techniques improve the precision of 3D point marking, and that ImageRefinement is superior to VectorCloud overall. These results are particularly relevant in the design of mobile AR systems for large outdoor areas.

Sports Training System for Visualizing Birds-Eye View Position from First-Person View

Kaoru Sumi (Future University Hakodate)

Abstract: In ball games, it is important that players are able to estimate the positions of the other players from a bird’s-eye view based on the information obtained from their first-person view. We have developed a training system for improving this ability. The user wears a head-mounted display and can simulate ball games in 360° from the first-person view. The system allows the user to rearrange all players and the ball from the bird’s-eye view. The user can then track the other players from the first-person viewpoint and perform actions specific to the ball game, such as passing, receiving a ball, and (if a defense player) following offense players.

Mixed Reality Storytelling Environments based on Tangible User Interface: Take Origami as an Example

Yingjie Song (Shandong University), Nianmei Zhou (Shandong University), Qianhui Sun (Shandong University), Wei Gai (Shandong University), Juan Liu (Shandong University), Yulong Bian (Shandong University), Shijun Liu (Shandong University), Lizhen Cui (Shandong University), Chenglei Yang (Shandong University)

Abstract: This paper presents a mixed reality storytelling system that takes handicrafts as tangible interaction tools. Via the system, users can learn a handicraft and then use the handicraft pieces to design, create and tell stories with HoloLens iteratively. In order to overcome the limitations of HoloLens gesture interaction, the system uses hand tracking with Kinect to implement a touch-like effect on the desktop. A user study shows that our system has good usability and is welcomed by users. In addition, it can stimulate users’ interest in handicraft and storytelling, and even promote parent-child interaction effectively.

Real-time Underwater Caustics for Mixed Reality 360° Videos

Stephen Thompson (Victoria University of Wellington), Andrew Chalmers (Victoria University of Wellington), Taehyun James Rhee (Victoria University of Wellington)

Abstract: We present a novel mixed reality (MR) rendering solution that illuminates and blends virtual objects into underwater 360° video with real-time underwater caustic effects. Image-based lighting is used in conjunction with underwater caustics to provide automatic ambient and high-frequency underwater lighting. This ensures that the caustics and virtual objects are lit and blended into each frame of the video semi-automatically and in real time. We provide an interactive interface with intuitive parameter controls to fine-tune the caustics to match the background video.

Design of a Semiautomatic Travel Technique in VR Environments

Yuyang Wang (Arts et Métiers, Institut Image), Jean-Rémy Chardonnet (Arts et Métiers, Institut Image), Frederic Merienne (Arts et Metiers)

Abstract: Travel in a real environment is a common task that human beings conduct easily and subconsciously. However, transposing this task to virtual environments (VEs) remains challenging due to the limitations of input devices and techniques. Considering the well-described sensory conflict theory, we present a semiautomatic travel method based on path-planning algorithms and gaze-directed control, aiming at reducing the generation of conflicting signals that may confuse the central nervous system. Since gaze-directed control is user-centered and path planning is goal-oriented, our semiautomatic technique makes up for the deficiencies of each, producing smoother and less jerky trajectories.

Effects of VR on Intentions to Change Environmental Behavior

Joscha Cepok (University of Bremen), Kevin Marnholz (University of Bremen), Roman Arzaroli (University of Bremen), Cornelia S. Große (University of Bremen), Hauke Reuter (Leibniz Center for Tropical Marine Research), Katie Nelson (Leibniz Centre for Tropical Marine Research), Mario Lorenz (Institute for Machine Tools and Production Processes), Rene Weller (University of Bremen), Gabriel Zachmann (University of Bremen)

Abstract: We present a study investigating whether and how people’s intention to change their environmental behavior depends on the degree of immersion and the freedom of navigation when they experience a virtual coral reef. The most striking result is, perhaps, that the highest level of immersion combined with the highest level of navigation did not lead to the highest intentions to change behavior.

No Strings Attached: Force and Vibrotactile Feedback in a Virtual Guitar Simulation

Andrea Passalenti (University of Udine), Razvan Paisa (Aalborg University), Niels Christian Nilsson (Aalborg University), Nikolaj Schwab Andersson (Aalborg University), Federico Fontana (University of Udine), Rolf Nordahl (Aalborg University), Stefania Serafin (Aalborg University)

Abstract: This poster describes a multisensory simulation of plucking guitar strings in virtual reality and a user study evaluating the simulation. Auditory feedback is generated by a physics-based simulation of guitar strings, and haptic feedback is provided by a combination of high-fidelity vibrotactile actuators and a Phantom Omni. The study compared four conditions: no haptic feedback, vibrotactile feedback, force feedback, and a combination of force and vibrotactile feedback. The results indicate that the combination of vibrotactile and force feedback elicits the most realistic experience, and under this condition participants were less likely to inadvertently hit strings.

Design and Testing of a Virtual Reality Enabled Experience that Enhances Engagement and Simulates Empathy for Historical Events and Characters

James Calvert (Torrens University Australia), Rhodora Abadia (Torrens University Australia), Syed Mohammad Tauseef (University of Petroleum & Energy Studies)

Abstract: Our study uses Virtual Reality (VR) to transport high school students into the mountains of Papua New Guinea during the Kokoda campaign of World War Two. By using photogrammetry in combination with animated characters, Kokoda VR places students at the centre of the action. Results from data collected in two Australian high schools show that a linear narrative in the VR condition increases feelings of empathy for the soldiers over a desktop 360° video application. Students using VR also reported higher levels of engagement, and the study found a positive correlation between high engagement and increased empathy in VR.

Daniel Zielasko (RWTH Aachen University), Marcel Krüger (RWTH Aachen University), Benjamin Weyers (Trier University), Torsten Wolfgang Kuhlen (RWTH Aachen University)

Abstract: In this work, we evaluate the impact of passive haptic feedback on touch-based menus, given the constraints and possibilities of a seated, desk-based scenario in VR. To this end, we compare a menu placed on the surface of a desk with one placed mid-air on a surface in front of the user. The study design is completed by two conditions without passive haptic feedback. In the conducted user study (n = 33), we found effects of passive haptics (present vs. non-present) and menu alignment (desk vs. mid-air) on task performance and subjective look & feel. However, the differences between conditions were small. The overall winner was the mid-air menu with passive haptic feedback, which, however, raises hardware requirements.

Travel Your Desk? An Office Desk Substitution and its Effects on Cybersickness, Presence and Performance in an HMD-based Exploratory Analysis Task

Daniel Zielasko (RWTH Aachen University), Benjamin Weyers (Trier University), Torsten Wolfgang Kuhlen (RWTH Aachen University)

Abstract: In this work, we evaluate the feasibility of an office desk substitution in the context of a visual data analysis task involving travel. We measure the impact on cybersickness as well as the general task performance and presence. In the conducted user study (n=52), surprisingly, and partially in contradiction to existing work, we found no significant differences for those core measures between the control condition without a virtual table and the condition containing a virtual table.

Extending a User Involvement Tool with Virtual and Augmented Reality

Ciprian Florea, Paula Alavesa, Leena Arhippainen, Matti Pouke (University of Oulu), Weiping Huang, Lotta Haukipuro, Satu Väinämö, Arttu Niemelä, Marta Cortés Orduña, Minna Anneli Pakanen, Timo Ojala

Abstract: Living labs are environments for acquiring user feedback on new products and services. Virtual environments can complement living labs by providing a dynamic, immersive setup for depicting change. This paper describes the implementation of Virtual and Augmented Reality clients as an extension to a user involvement tool for an existing living lab. A user experience study with 14 participants was conducted to compare the clients. According to the findings, the virtual reality client was experienced as innovative, easy to use, entertaining and fun, whereas the augmented reality client was perceived as playful and empowering.

Watching videos together in social Virtual Reality: an experimental study on user’s QoE

Francesca De Simone (Centrum Wiskunde & Informatica), Jie Li (Centrum Wiskunde & Informatica (CWI)), Henrique Galvan Debarba (Artanim Foundation), Abdallah El Ali (Centrum Wiskunde & Informatica (CWI)), Simon Gunkel (TNO), Pablo Cesar (CWI)

Abstract: In this paper, we describe a user study in which pairs of users watched a video trailer and interacted with each other, using two social Virtual Reality (sVR) systems, as well as in a face-to-face condition. The sVR systems are: Facebook Spaces, based on puppet-like customized avatars, and a video-based sVR system using photo-realistic virtual user representations. We collected subjective and objective data to analyze users’ Quality of Experience (QoE) and compare their interaction in VR to that observed during the real-life scenario.

Virtual Games and Volitional Pain: A New Methodological Approach for Testing VR Pain Interventions on Individuals Receiving a Tattoo

Daniel Pimentel (University of Florida), Sri Kalyanaraman (University of Florida), Roger Fillingim (University of Florida), Shivashankar Halan (University of Florida)

Abstract: The efficacy of virtual reality (VR) as a non-pharmacological pain remedy is widely supported. Yet, despite robust findings, this claim is weakened by several factors, namely restricted access to vulnerable subjects and variability in pain type and severity. To address these issues, we propose testing VR interventions on volitional pain, namely the pain experienced during a tattoo. Leveraging qualitative interviews with tattoo artists and customers and a preliminary field experiment, our findings support the efficacy of volitional pain as a means of testing the analgesic effects of VR. Methodological considerations and guidelines for follow-up investigations using this experimental approach are discussed.

Developing an Agent-based Virtual Interview Training System for College Students with High Shyness Level

Xinpei Jin (Shandong University), Yulong Bian (Shandong University), Wenxiu Geng (Shandong University), Yeqing Chen (Shandong University), Ke Chu (Shandong University), Hao Hu (Shandong University), Juan Liu (College of Computer Science and Technology), Yuliang Shi (Shandong University), Chenglei Yang (Shandong University)

Abstract: In this paper, we develop an agent-based virtual interview training system that can help college students with a high shyness level to improve their interview skills and reduce their anxiety by themselves before taking a real interview. The system includes three main components: three virtual agents with different personality types, three kinds of interview training content, and a multidimensional evaluation method, so it is able to meet common demands of interview preparation. A user study indicates the system can help shy college students cope with interview anxiety and improve their interview training performance effectively.

VirtualTablet: Extending Movable Surfaces with Touch Interaction

Adrian H. Hoppe (Karlsruhe Institute of Technology (KIT)), Felix Marek (Institute for Anthropomatics and Robotics (IAR)), Florian van de Camp (Fraunhofer IOSB), Rainer Stiefelhagen (Karlsruhe Institute of Technology (KIT))

Abstract: Immersive output and effortless input are two core aspects of a virtual reality (VR) experience. We transfer ubiquitous touch interaction with haptic feedback into a virtual environment (VE). The movable and inexpensive real-world object provides touch detection as accurate as ray-casting interaction with a controller. Moreover, the virtual tablet extends the functionality of a real-world tablet: additional information is displayed in mid-air around the touchable area, and the tablet can be turned over to interact with both sides. It allows easy-to-learn and precise system interaction and can even augment the established touch metaphor with new paradigms.

A supernatural VR environment for spatial user rotation

Jonathan Becker (Hamburg University of Applied Sciences), Tobias Eichler (Hamburg University of Applied Sciences), Uli Meyer (Hamburg University of Applied Sciences), Susanne Draheim (Hamburg University of Applied Sciences)

Abstract: VR environments with supernatural properties that expand the laws of physics could be used to understand how the brain organises and interprets sensory stimulation. We built an application with a supernatural room that allows users to walk up the wall and on the ceiling. During preliminary tests, we optimised the application so that it rarely causes cybersickness. User reports and observed user reactions, including swaying, indicate that users accepted the rotation as self-rotation, as opposed to an animated rotation of the room around the user. The application is therefore viable for future studies on spatial orientation, pathfinding and cognitive maps.

Effects of Voluntary Heart Rate Control on User Engagement in Virtual Reality

Samory Houzangbe (Arts et Métiers Paristech), Olivier Christmann (Arts et Métiers Paristech), Geoffrey Gorisse (Arts et Métiers Paristech), Simon Richir (Arts et Métiers Paristech)

Abstract: The usage of biofeedback in VR is becoming important for providing fully immersive experiences. With the rapid evolution of physiological monitoring technologies, it is important to study how biofeedback can alter the user experience. While previous studies have used biofeedback as an additional interaction mechanic, we created a protocol to assess heart rate control competency and used its results to immerse participants in a VR experience in which the biofeedback mechanics are mandatory to complete a game. We observed consistent results between our competency scale and the participants’ mastery of the biofeedback game mechanic in the VR experience. We also found that the biofeedback mechanic has a significant impact on engagement.

DepthMove: Hands-free Interaction in Virtual Reality Using Head Motions in the Depth Dimension

Difeng Yu (Xi’an Jiaotong-Liverpool University), Hai-Ning Liang (Xi’an Jiaotong-Liverpool University), Tianyu Zhang (Xi’an Jiaotong-Liverpool University), Wenge Xu (Xi’an Jiaotong-Liverpool University)

Abstract: Hands-free interactions are very useful in virtual reality (VR) head-worn display (HWD) systems because they allow users to interact with VR environments without the need for a handheld device. We explore the potential of a new approach that we call DepthMove to allow hands-free interactions based on head motions towards the depth dimension. With DepthMove, a user can interact in a VR system proactively by moving towards the depth dimension with an HWD. We present the concept and implementation of DepthMove in VR HWD systems and describe its potential applications. We further discuss the advantages and limitations of DepthMove.
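
Conceptually, a DepthMove-style trigger only needs the head's signed displacement along the viewing axis relative to a reference pose. A minimal sketch follows; the threshold value and function names are hypothetical, not the authors' implementation:

```python
import numpy as np

DEPTH_THRESHOLD = 0.08  # meters of forward head motion (assumed value)

def depth_move(head_pos, ref_pos, forward):
    """Signed displacement of the head along the view direction since a
    reference pose; positive means leaning 'into' the scene."""
    forward = np.asarray(forward, float)
    forward /= np.linalg.norm(forward)
    return float((np.asarray(head_pos, float) - np.asarray(ref_pos, float)) @ forward)

def depth_move_triggered(head_pos, ref_pos, forward):
    """Fire an interaction event once the forward displacement exceeds
    the threshold."""
    return depth_move(head_pos, ref_pos, forward) > DEPTH_THRESHOLD
```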

OmniMR: Omnidirectional Mixed Reality with Spatially-Varying Environment Reflections from Moving 360° Video Cameras

Joanna Karolina Tarko (University of Bath), James Tompkin (Brown University), Christian Richardt (University of Bath)

Abstract: We propose a new approach for creating omnidirectional mixed reality (OmniMR) from moving-camera 360° video. To insert virtual computer-generated elements into a moving-camera 360° video, we reconstruct camera motion and sparse scene content via structure-from-motion on equirectangular video. Then, to plausibly reproduce real-world lighting conditions for these inserted elements, we employ inverse tone mapping to recover high dynamic range environment maps which vary spatially along the camera path. We implement our approach into the Unity rendering engine for real-time object rendering with dynamic lighting and user interaction. This expands the use and flexibility of 360° video for mixed reality.
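
As a toy illustration of the inverse tone mapping step, recovering high-dynamic-range radiance from LDR video frames, a simple gamma-expansion model is sketched below; the exponent, knee, and boost values are our assumptions, not the paper's actual operator:

```python
import numpy as np

def inverse_tone_map(ldr, gamma=2.2, boost=4.0, knee=0.85):
    """Expand an LDR frame (float RGB in [0,1]) toward HDR radiance:
    undo display gamma, then boost values above `knee` so near-saturated
    pixels (light sources) regain high dynamic range."""
    lin = np.power(np.clip(ldr, 0.0, 1.0), gamma)  # linearize
    mask = lin > knee
    lin[mask] += boost * (lin[mask] - knee)        # re-expand highlights
    return lin
```

Applied to each environment-map sample along the camera path, an operator like this yields the spatially varying HDR maps used to light the inserted objects.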

AirwayVR: Virtual Reality Trainer for Endotracheal Intubation- Design Considerations and Challenges

Pavithra Rajeswaran (University of Illinois at Urbana Champaign), Priti Jani (University of Chicago), Praveen Kumar (OSF HealthCare System), Thenkurussi Kesavadas (University of Illinois at Urbana-Champaign)

Abstract: Endotracheal intubation is a lifesaving procedure in which a tube is passed through the mouth into the trachea (windpipe) to maintain an open airway. The current methods of training, including manikins and cadavers, have limitations in terms of their availability and their ability to represent high-risk or difficult intubation scenarios. In this paper, we present the design considerations and challenges of using virtual reality as a platform to 1) train novice learners (medical students and residents) and 2) provide just-in-time training for experts to mentally prepare for a complex case prior to the procedure.

Virtual Reality and Photogrammetry for Improved Reproducibility of Human-Robot Interaction Studies

Mark Murnane (University of Maryland, Baltimore County), Max Breitmeyer (University of Maryland, Baltimore County), Cynthia Matuszek (UMBC), Don Engel (University of Maryland, Baltimore County)

Abstract: Collecting data in robotics, especially human-robot interactions, traditionally requires a physical robot in a prepared environment, presenting substantial scalability challenges. Robots provide many possible points of system failure, while the availability of human participants is limited. For tasks such as language learning, it is important to create environments which provide interesting, varied use cases. This requires prepared physical spaces for each scenario studied. Finally, the expense associated with acquiring robots and preparing spaces places serious limitations on reproducibility. We therefore propose a novel mechanism for using commodity VR hardware to simulate robotic sensor data in a series of prepared scenarios.

Virtual-GymVR: A Virtual Reality Platform for Personalized Exergames

Victor Fernandez Cervantes (University of Alberta), Eleni Stroulia (University of Alberta)

Abstract: Virtual-GymVR is a platform for serious exergames in virtual reality. Its purpose is to provide older adults with a fun experience while encouraging them to complete their personalised exercise sessions. The platform takes as input a description of a prescribed exercise, expressed as a posture-transition grammar, and builds a personalized version of the game with a proper configuration of the interactive objects. We designed three different metaphors appropriate for three different types of exercises. The game-configuration process controls the placement and interaction behavior of the games’ objects to induce users to adopt the correct postures, as described by the input exercise specification.
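
One plausible way to picture the posture-transition grammar input is as a small state machine over named postures. The encoding and posture names below are invented for illustration and are not the platform's actual format:

```python
# A hypothetical posture-transition grammar for a sit-to-stand exercise:
# each posture lists the postures that may legally follow it.
GRAMMAR = {
    "seated":          ["leaning_forward"],
    "leaning_forward": ["standing"],
    "standing":        ["seated"],  # returning to seated completes one repetition
}

def is_valid_sequence(postures, grammar=GRAMMAR):
    """Check that an observed posture sequence follows the grammar."""
    return all(b in grammar.get(a, []) for a, b in zip(postures, postures[1:]))

# e.g. is_valid_sequence(["seated", "leaning_forward", "standing", "seated"]) -> True
```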

Sabah Boustila (University of Toronto), Thomas Guégan (Tohoku University), Kazuki Takashima (Tohoku University), Yoshifumi Kitamura (Tohoku University)

Abstract: In this work, we were interested in using a smartphone touchscreen keyboard for text typing in virtual environments (VEs) with a head-mounted display. We carried out an experiment comparing the smartphone to ordinary devices: a gamepad and HTC Vive controllers. We represented the touchscreen keyboard in the VE with a virtual interface and the fingertips with tracked green circles. A confirm-on-release paradigm was employed for text typing. Results showed that the smartphone did not fully outperform the other devices. However, unlike with the other devices, smartphone users tended to progressively correct their errors while typing, thanks to their familiarity with the device.

Camera-Based Selection with Cardboard HMDs

Siqi Luo (Carleton University), Robert J Teather (Carleton University)

Abstract: We present a study of selection techniques for low-cost mobile VR devices, such as Google Cardboard, using the outward-facing camera on modern smartphones. We compared three selection techniques: air touch, head ray, and finger ray. Initial evaluation indicates that hand-based selection (air touch) performed worst, while a ray cast from the tracked finger position offered much higher selection performance. Our results suggest that camera-based mobile tracking is feasible with ray-based techniques.

BaLuna: Floating Balloon Screen Manipulated Using Ultrasound

Takuro Furumoto (The University of Tokyo), Masahiro Fujiwara (The University of Tokyo), Yasutoshi Makino (The University of Tokyo), Hiroyuki Shinoda (The University of Tokyo)

Abstract: In this paper, we present BaLuna, a prototype of an externally actuated midair display for indoor use in a room-scale workspace. This system is the first battery-less midair display with a workspace of one cubic meter. The system projects an image onto a balloon screen whose position is controlled by airborne ultrasound phased array (AUPA) devices. Users can naturally manipulate the screen position by dragging and dropping the screen directly with their hands. We adopted feedback-based acoustic manipulation technology that enables sparsely distributed AUPA devices to control the screen position, combined with depth-image-based tracking and a three-dimensionally calibrated projector.
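
The feedback-based manipulation loop can be pictured as a simple proportional controller that nudges the acoustic focus so the tracked balloon drifts toward a target. This is our simplification for illustration only; the gain value and the surrounding interfaces are hypothetical:

```python
import numpy as np

def control_step(target, balloon_pos, focus, kp=0.3):
    """One feedback iteration: shift the acoustic focus by a fraction of
    the tracked position error, so the balloon held near the focus
    drifts toward the target. `balloon_pos` would come from the
    depth-image tracking; the returned focus is sent to the AUPA devices."""
    error = np.asarray(target, float) - np.asarray(balloon_pos, float)
    return np.asarray(focus, float) + kp * error
```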

Simulation and Evaluation of Three-User Redirected Walking Algorithm in Shared Physical Spaces

Tianyang Dong (Zhejiang University of Technology), Yifan Song (Zhejiang University of Technology), Yuqi Shen (Zhejiang University of Technology)

Abstract: Existing methods primarily address the problem of one- or two-user redirected walking and do not respond to the additional challenges of potential collisions among three or more users who are moving both virtually and physically. To apply redirected walking to multiple users immersed in virtual reality experiences, we present a novel algorithm for three-user redirected walking in shared physical spaces. We adopt a user clustering algorithm based on users’ motion states to divide the users into groups, with a maximum of three users per group. The strategy for three-user redirected walking can therefore be applied to each group to address the challenges of multi-user redirected walking.
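
A minimal stand-in for the clustering step, grouping nearby users with at most three per group, might look like the greedy sketch below; the proximity radius and greedy strategy are our assumptions, not the paper's algorithm, which also considers motion states:

```python
import numpy as np

def cluster_users(positions, max_group=3, radius=2.0):
    """Greedily cluster users by physical proximity. `positions` is a
    list of (x, y) coordinates in meters; returns groups of user
    indices, each group holding at most `max_group` members."""
    unassigned = list(range(len(positions)))
    groups = []
    while unassigned:
        seed = unassigned.pop(0)
        group = [seed]
        for j in list(unassigned):
            if len(group) >= max_group:
                break
            dist = np.linalg.norm(np.asarray(positions[seed], float)
                                  - np.asarray(positions[j], float))
            if dist < radius:
                group.append(j)
                unassigned.remove(j)
        groups.append(group)
    return groups
```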

Explore the Weak Association between Flow and Performance based on a Visual Search Task Paradigm in Virtual Reality

Yulong Bian (Shandong University), Chao Zhou (Liaoning Normal University), Yeqing Chen (Shandong University), Yanshuai Zhao (Shandong University), Juan Liu (College of computer science and technology), Chenglei Yang (Shandong University), Xiangxu Meng (College of Computer Science and Technology)

Abstract: The weak association model indicates that distraction caused by the disjunction between the primary task and interactive artifacts may be a key factor directly leading to a weak association between flow and task performance in virtual reality (VR) activities. To test this idea, this paper proposes a VR visual search paradigm, based on which we constructed a VR oceanic treasure-hunting system. Experiment 1 showed that distraction caused by the incongruence was indeed a direct antecedent of weak association. Next, we slightly adjusted the system by providing visual cues to achieve task-oriented selective attention. Experiment 2 found that this enhanced task performance without damaging the flow experience.

Virtual Crafting Experience: Hand Motion and Scent Stimulation in Conjunction with a Promotional Video for Improving Interest

Hikari Yukawa (Nara Women’s University), Katsunari Sato (Kitauoya Nishi Machi)

Abstract: Crafting workshops are useful methods for the promotion of products because they let customers learn about the technical and social appeal of a product. In this study, we have developed a simple, portable, and active multisensory VR system of crafting experiences for product promotion. It comprises a video of crafting a wooden object, with scent and hand motion synchronized with the video. The product evaluations demonstrated positive effects: participants were more attracted to the product in the video when only scent or only haptic stimulation was presented. However, the results also indicated a negative synergetic effect when the two stimulations were presented simultaneously.

A Comparison of Desktop and Augmented Reality Scenario Based Training Authoring Tools

Andrés N Vargas González (University of Central Florida), Katelynn Kapalo (University of Central Florida), Senglee Koh (University of Central Florida), Robert Sottilare (U.S. Army Research Laboratory), Patrick Garrity (U.S. Army Research Laboratory), Joseph LaViola (University of Central Florida)

Abstract: This work presents a comparison of two applications (Augmented Reality (AR) and Desktop) for authoring Scenario-Based Training (SBT) simulations. Through an iterative design process, two interface conditions were developed and then evaluated qualitatively and quantitatively. A graph-based authoring visualization helps designers understand the scenario learning artifacts and their relationships. No significant difference was found in the time taken to complete tasks or in the perceived usability of the systems. However, Desktop was perceived as more efficient, corroborated by the significantly higher number of mistakes made in AR. Findings are presented towards building better immersive AR authoring tools.

Haptic Compass: Active Vibrotactile Feedback of Physical Object for Path Guidance

Mengmeng Sun (Northwestern Polytechnical University), Weiping He (Northwestern Polytechnical University), Li Zhang (Northwestern Polytechnical University), Peng Wang (Northwestern Polytechnical University), Shuxia Wang (Northwestern Polytechnical University), Xiaoliang Bai (Northwestern Polytechnical University)

Abstract: We developed Haptic Compass, a prototype with active vibrotactile feedback by attaching vibration motors to a cylinder-shaped physical object, to provide directional cues for haptic guidance. To validate the prototype, we conducted two user studies. The first study showed that participants could effectively recognize and judge the vibration direction. In the second study, we evaluated the task completion time and the movement error among the Haptic-only, Visual-Only and Haptic-Visual feedback conditions in a typical path-guiding task. We found that, together with visual feedback, haptic feedback could enhance task performance and user experience for path guidance.

Harassment in Social VR: Implications for Design

Lindsay Blackwell (Oculus), Nicole Ellison (Oculus), Raz Schwartz (Oculus), Natasha Elliott-Deflo (Oculus)

Abstract: We interviewed VR users (n=25) about their experiences with harassment, abuse, and discomfort in social VR. We find that users’ definitions of ‘online harassment’ are subjective and highly personal, making it difficult to govern social spaces at the platform or application level. We also find that embodiment and presence make harassment feel more intense. Finally, we find that shared norms for appropriate behavior in social VR are still emergent, and that users distinguish between newcomers who unknowingly violate expectations for appropriateness and those users who aim to cause intentional harm.

3D positioning system based on one-handed thumb interactions for 3D annotation placement

So Tashiro (Kyushu University), Hideaki Uchiyama (Kyushu University), Diego Thomas (Kyushu University), Rin-ichiro Taniguchi (Kyushu University)

Abstract: This paper presents a 3D positioning system based on one-handed thumb interactions for 3D annotation placement with a smartphone. To place an annotation at a target point in the real environment, the 3D coordinate of the point is computed by interactively selecting the corresponding points in multiple views while performing SLAM. In addition, we developed three pixel-selection methods based on one-handed thumb interactions: a pixel is selected at the thumb position in a live view (FingAR), at the position of a reticle marker in a live view (SnipAR), or at that of a movable reticle marker in a frozen view (FreezAR).
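
The triangulation behind such placement is standard: each selected pixel, together with the SLAM camera pose for that view, defines a ray, and the annotation point is where the rays (nearly) intersect. A midpoint-method sketch in NumPy (notation ours, not the paper's code):

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2, eps=1e-9):
    """Midpoint of the closest points between two rays o + t*d, where
    each ray goes from a SLAM-tracked camera center `o` through the
    pixel the user selected (direction `d`). Returns None if the rays
    are near-parallel and the solution is unstable."""
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    o1, o2 = np.asarray(o1, float), np.asarray(o2, float)
    b = d1 @ d2
    rhs = o2 - o1
    denom = 1.0 - b * b
    if abs(denom) < eps:
        return None
    t1 = ((d1 @ rhs) - b * (d2 @ rhs)) / denom
    t2 = (b * (d1 @ rhs) - (d2 @ rhs)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```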

Evaluating Dynamic Characteristics of Head Mounted Display in Parallel Movement With Simultaneous Subjective Observation Method

Eisaku Miyamoto (Gifu University), Ryugo Kijima (Gifu University)

Abstract: The final purpose of this research is to establish a method for measuring the dynamic characteristics of HMDs. This paper examines whether our proposed simultaneous subjective observation method is valid for parallel movement and whether it can be applied to various conditions, by actually performing the measurement. For the measurement, we used an Oculus Rift DK2. As a result, the proposed method was shown to have enough precision to measure the dynamic characteristics of an HMD in parallel movement and to be applicable to various usage situations.

Acting Together, Acting Stronger? Interference between participants during face-to-face cooperative interception task

Charles Faure (Univ Rennes, Inria), Annabelle Limballe (ENS Rennes - Rennes 2 University), Benoit Bideau (M2S - EA 7470, Univ Rennes, Inria), Anthony Sorel (M2S - EA 7470, Univ Rennes, Inria), Théo Perrin (Univ Rennes), Richard Kulpa (University Rennes 2)

Abstract: People generally coordinate their actions to be more effective. However, in some cases interference between them occurs, resulting in inefficient collaboration. The main goal of this study is to explore the way two persons regulate their actions when performing a cooperative ball-interception task. Twelve teams of two participants had to physically intercept balls moving down from the roof to the floor in a virtual room. Overall, results showed that team coordination emerges from between-participant interactions in this ball-interception task and that interference between participants depends on task complexity (uncertainty about the partner’s action and the visual information available).

The Effect of Audio and Visual Modality Based CPR Skill Training with Haptics Feedback in VR

Varun Durai S I (Indian Institute of Technology Madras), Raj Arjunan (Indian Institute of Technology Madras), Manivannan M (IIT Madras)

Abstract: This study tests sensory dominance using combinations of three sensory modalities, audio-haptics, visual-haptics, and audio-visual-haptics, in a virtual reality based cardiopulmonary resuscitation (CPR) simulator. To test this hypothesis, three experiments with three different groups of participants were conducted in two phases (training and testing) with the above three combinations. Performance scores in the testing phase, obtained using a manikin-based CPR simulator, were compared. The results show that the group trained with all three sensory modalities performed better than the other two groups. Our future work is to incorporate rescue breathing into the CPR training simulator for better skill training.

A New 360 Camera Design for Multi Format VR Experiences

Xinyu Zhang (East China Normal University), Yao Zhao (East China Normal University), Nikk Mitchell (FXG), Wensong Li (HyperView)

Abstract: We present a new 360° camera design for 2D, 3D and 6DoF multi-format 360° videos for immersive VR experiences. The minimum safe distance of our new camera is very short (approximately 30 cm), which allows users to create especially intimate immersive experiences. We propose to characterize the camera design using the fractal ratio of the distance between adjacent viewpoints to the interpupillary distance. While most early camera designs have a fractal ratio greater than or equal to 1, our camera has a fractal ratio less than 1. Moreover, with an adjustable rendering interpupillary distance, our camera can flexibly control the interpupillary distance when creating 3D 360° videos.

Individual Differences on Embodied Distance Estimation in Virtual Reality

Mar Gonzalez-Franco (Microsoft Research), Parastoo Abtahi (Stanford University), Anthony Steed (University College London)

Abstract: There are important individual differences in how people experience VR setups. We ran a study with 20 participants who were given a scale-matched avatar and were asked to blind-walk to a VR target placed 2.5 meters away. In such setups, people typically underestimate distances by approximately 10% when virtual environments are viewed through head-mounted displays. Consistent with previous studies, we found that the underestimation was significantly reduced the more embodied the participants were. However, not all participants developed the same level of embodiment when exposed to the exact same conditions.

A Systematic Evaluation of Multi-Sensor Array Configurations for SLAM Tracking with Agile Movements

Brian Williamson (University of Central Florida), Eugene Matthew Taranta (University of Central Florida), Patrick Garrity (U.S. Army Research Laboratory), Robert Sottilare (U.S. Army Research Laboratory), Joseph LaViola (University of Central Florida)

Abstract: Accurate tracking of a user in a marker-less environment can be difficult, especially during agile movements. When relying on feature detection, the issue arises that a large rotational delta causes previously tracked features to become lost. One approach to overcome this is to use multiple sensors. In this paper, we begin a systematic evaluation of how a sensor array affects tracking accuracy. We start with four sensors and test the resulting output of a chosen SLAM algorithm. We then remove cameras from the feed, covering all permutations, to determine the level of accuracy and tracking loss. We go over some of the lessons learned in this preliminary experiment and how they may guide researchers in tracking extremely agile movements.

Spherical Structure-from-Motion for Casual Capture of Stereo Panoramas

Lewis Baker (University of Otago), Stefanie Zollmann (University of Otago), Jonathan Ventura (California Polytechnic State University)

Abstract: Hand-held capture of stereo panoramas involves spinning the camera in a roughly circular path to acquire a dense set of views of the scene. However, most existing structure-from-motion pipelines fail when trying to reconstruct such trajectories, due to the small baseline between frames. In this work, we propose to use spherical structure-from-motion for reconstructing handheld stereo panorama captures. Our initial results show that spherical motion constraints are critical for reconstructing small-baseline, circular trajectories.
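
Spherical SfM works on bearing vectors rather than pinhole pixel coordinates, so a first step is mapping an equirectangular pixel to a unit ray. The conversion below is standard; the axis convention (y up, z forward) is our choice:

```python
import numpy as np

def equirect_to_bearing(u, v, width, height):
    """Map pixel (u, v) of an equirectangular frame to a unit bearing
    vector: longitude spans [-pi, pi] across the width, latitude
    [-pi/2, pi/2] down the height."""
    lon = (u / width - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / height) * np.pi
    return np.array([
        np.cos(lat) * np.sin(lon),   # x: right
        np.sin(lat),                 # y: up
        np.cos(lat) * np.cos(lon),   # z: forward
    ])
```

Feature matches expressed as such bearings can then be fed to a relative-pose solver constrained to the spherical (fixed-radius) camera motion the paper exploits.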

Networking COTS Systems to Provide a Development Environment for Inside-Out Tracking for Virtual Reality Headsets

Loki Kristina Rasmussen (University of Arkansas at Little Rock), Jay Basinger (Southwest Research Institute), Mariofanna Milanova (UALR)

Abstract: Many companies are working to create inside-out marker-less tracking for virtual reality headsets. Inside-out marker-less tracking can be found on consumer augmented reality devices, but currently no system available to researchers, developers, or consumers provides this feature without custom hardware and software. Our research provides a development environment that takes advantage of this feature before consumer-level inside-out marker-less virtual reality systems hit the market. The solution utilizes current commercial off-the-shelf hardware systems and, by networking them together, allows the user to move through a captured environment without needing tracking towers.

Evaluation of Maslow’s Hierarchy of Needs on Long-Term Use of HMDs—A Case Study of Office Environment

Jie Guo (Beijing Institute of Technology), Dongdong Weng (Beijing Institute of Technology), Zhenliang Zhang (Beijing Institute of Technology), Yue Liu (Beijing Institute of Technology), Yongtian Wang (Beijing Institute of Technology)

Abstract: Long-term exposure to VR will become more and more important, but what long-term immersion requires in order to meet users’ fundamental needs is still under-researched. In this paper, we apply the theory of Maslow’s Hierarchy of Needs to guide the design of VR for long-term immersion based on the normal biological rhythm of human beings (24 hours). An office environment was designed to verify those needs, and the efficiency as well as the physical and psychological effects of this VR office system were tested. The results show that Maslow’s Hierarchy of Needs can serve as a guideline for long-term immersion.

Perceived space and spatial performance during path-integration tasks in consumer-oriented virtual environments

Jose Dorado (Universidad de los Andes), Pablo Figueroa (Universidad de los Andes), Jean-Rémy Chardonnet (Arts et Métiers, Institut Image), Frederic Merienne (Arts et Metiers), Tiberio Hernández (Universidad de los Andes)

Abstract: Studies using virtual environments (VEs) have shown that we can perform path-integration tasks with acceptable performance. However, in these studies, subjects could walk across large tracking areas or used large immersive displays. These configurations are far from consumer-oriented VEs, and little is known about how their limitations influence this task. We assessed performance with two consumer-oriented displays (HTC Vive, GearVR) and two consumer-oriented interaction devices (VR motion platform, touchpad control). Results show that when locomotion is available, there are significant effects of display and path. In contrast, when locomotion is mediated, no effect was found. Some research directions are therefore proposed.

The Effect of Hanger Reflex on Virtual Reality Redirected Walking

Chun Xie (University of Tsukuba), Chun Kwang Tan (University of Tsukuba), Taisei Sugiyama (University of Tsukuba)

Abstract: We explore the effect of haptic-based navigation by the Hanger Reflex (HR) on the perception of redirected walking (RDW) with visual manipulation. In our experiment, seven individuals walked along a straight path in VR while the visual scene was rotated with a curvature gain of π/36 and HR rotation was induced by a wearable haptic device. Participants reported their perceived walking direction and the effort required to walk along the path on a visual analog scale. The results showed that HR can influence perception in RDW, but the effects may be complex and therefore require further investigation.
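
For context, applying a curvature gain in RDW means injecting a small scene yaw proportional to the distance walked each frame. Interpreting the π/36 gain as radians per meter, which is our assumption rather than the paper's stated unit, a per-frame update looks like this:

```python
import numpy as np

def curvature_yaw_delta(step_vector, gain=np.pi / 36):
    """Scene yaw (radians) to inject this frame, proportional to the
    length of the user's physical step; walking 'straight' in VR then
    corresponds to an arc in physical space."""
    return gain * np.linalg.norm(step_vector)

# Example: a 1 cm physical step injects pi/3600 rad (about 0.05 degrees),
# small enough to stay below typical detection thresholds.
```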

A Context-Aware Technical Information Manager for Presentation in Augmented Reality

Michele Gattullo (Politecnico di Bari), Vito Dalena (Politecnico di Bari), Alessandro Evangelista (Politecnico di Bari), Antonio E. Uva (Politecnico di Bari), Michele Fiorentino (Politecnico di Bari), Antonio Boccaccio (Politecnico di Bari), Michele Ruta (Politecnico di Bari), Joseph L. Gabbard (Virginia Tech)

Abstract: Technical information presentation is evolving from static content presented on paper or via digital publishing to real-time, context-aware content displayed via virtual and augmented reality devices. We present a Context-Aware Technical Information Management system (CATIM) that dynamically manages (1) what information is presented as well as (2) how information is presented in an augmented reality interface. CATIM acquires context data about the activity, operator, and environment and, based on these data, proposes a dynamic augmented reality output tailored to the current context. The system was successfully implemented and preliminarily evaluated in a case study on the maintenance of a hydraulic valve.

Yoshikazu Onuki (Tokyo Institute of Technology), Itsuo Kumazawa (Tokyo Institute of Technology)

Abstract: We propose a novel virtual turning technique for stationary VR environments that reorients the gazed view towards the center. Prompt reorientation during rapid head motion and blinking switches the scene unnoticeably, providing a seamless user experience, especially for wide-angle turning. Continuous narrow-angle turning, achieved by horizontally rotating the virtual world according to the face orientation, enhances the sense of reality. Our proposal is a hybrid of these two turning schemes. Experiments using simulator sickness and presence questionnaires revealed that our methods achieved comparable or lower sickness scores and higher presence scores than conventional smooth and snap turns.