[Image: the Belém Tower in Lisbon with a view of the Tagus River, alongside the IEEE VR Lisbon 2021 logo and its motto: make virtual reality diverse and accessible.]
Posters Location
Doctoral Consortium Posters Hall A
Doctoral Consortium Posters Hall B
Posters Hall A
Posters Hall B
Take me to the event:

Virbela Location: Hall A and Hall B (MAP)
Discord Channel: Open in Browser, Open in App (Participants only)

Best of IEEE VR 2021

Please use this form to vote for the best poster, best demo, and best 3DUI contest submission.

Vote!

Doctoral Consortium - Hall A

VirSec: Virtual Reality as Cost-Effective Test Bed for Usability and Security Evaluations

Doctoral Consortium

Booth: C14

Florian Mathis

Glanceable AR: Towards an Always-on Augmented Reality Future

Doctoral Consortium

Booth: C13

Feiyu Lu

Situated augmented reality: beyond the egocentric viewpoint

Doctoral Consortium

Booth: C12

Nuno Martins

Embodying an avatar with an asymmetrical lower body to modulate the dynamic characteristics of gait initiation

Doctoral Consortium

Booth: C11

Valentin Vallageas

The Effect of Modulating The Step Length of an Embodied Self-Avatars on Gait Symmetry During Treadmill Walking

Doctoral Consortium

Booth: D14

Iris Willaert

SharpView AR: Enhanced Visual Acuity for Out-of-Focus Virtual Content

Doctoral Consortium

Booth: D13

Mohammed Safayet Arefin

Gait Differences in the Real World and Virtual Reality: The Effect of Prior Virtual Reality Experience

Doctoral Consortium

Booth: D12

Moloud Nasiri

Towards Universal VR Sickness Mitigation Strategies

Doctoral Consortium

Booth: D11

Isayas Berhe Adhanom

“SHOW YOUR DEDICATION:” VR Games and Outmersion

Doctoral Consortium

Booth: E12

PS Berge

The Adaptation of Caribbean Literary Texts into VR

Doctoral Consortium

Booth: E11

Amanda Teneil Zilla

Doctoral Consortium - Hall B

Eye Fixation Forecasting in Task-Oriented Virtual Reality

Doctoral Consortium

Booth: C14

Zhiming Hu

Presence in VR: Developing measure and stimulus

Doctoral Consortium

Booth: C13

Eugene Kukshinov

Emotion regulation via Eyegaze

Doctoral Consortium

Booth: C12

Nermin Shaltout

Exploring Body Gestures for Small Object Selection in Dense Environment in HMD VR for Data Visualization Applications

Doctoral Consortium

Booth: C11

Shimmila Bhowmick

Privacy in VR: Empowering Users with Emotional Privacy from Verbal and Non-verbal Behavior of their Avatars.

Doctoral Consortium

Booth: D14

Dilshani Rangana Kumarapeli

Attitude Change in Immersive Virtual Environments

Doctoral Consortium

Booth: D13

Alina Nikolaou

Clinical Application of Immersive VR in Spatial Cognition: The Assessment of Spatial Memory and Unilateral Spatial Neglect in Neurological Patients

Doctoral Consortium

Booth: D12

Julia Belger

Analyzing Visual Perception and Predicting Locomotion using Virtual Reality and Eye Tracking

Doctoral Consortium

Booth: D11

Niklas Stein

Psychophysical Effects of Augmented Reality Experiences

Doctoral Consortium

Booth: E14

Daniel Eckhoff

Immersive Journalism - The Future of the News?

Doctoral Consortium

Booth: E13

Hannah Greber

Posters - Hall A

Evaluating User Acceptance using WebXR for an Augmented Reality Information System

Poster

Booth: E23

Fabian Meyer: Hochschule Ruhr West University of Applied Sciences; Christian Gehrke: Hochschule Ruhr West University of Applied Sciences; Michael Schäfer: Institute of Computer Science

Teaser Video: Watch Now

Augmented Reality has a long history and has seen major technical advances in recent years. With WebXR, a new web standard, Mobile Augmented Reality (MAR) applications are now available in the web browser. This eliminates one of the biggest obstacles for users in accessing advanced, markerless AR environments on the smartphone, as it makes installing additional software obsolete. Through the prototypical implementation of an AR information system in the form of a web app and a case study, we were able to show that this relatively new browser API can indeed be used for such complex application areas.

Is Virtual Reality sickness elicited by illusory motion affected by gender and prior video gaming experience?

Poster

Booth: E24

Katharina Margareta Theresa Pöhlmann: University of Lincoln; Louise O'Hare: Nottingham Trent University; Julia Föcker: University of Lincoln; Adrian Parke: University of the West of Scotland; Patrick Dickinson: University of Lincoln

Teaser Video: Watch Now

Gaming using VR headsets is becoming increasingly popular; however, these displays can cause VR sickness. To investigate the effects of gender and gamer type on VR sickness, motion illusions are used as stimuli, being a novel method of inducing the perception of motion whilst minimising the "accommodation-vergence conflict". Females and those who do not play action games experienced more severe VR sickness symptoms compared to males and experienced action gamers. The interaction of gender and gamer type revealed that prior video gaming experience was beneficial for females; however, for males, it did not show the same positive effects.

Virtual Reality in transit: how acceptable is VR use on public transport?

Poster

Booth: F41

Laura Bajorunaite: University of Glasgow; Stephen Brewster: University of Glasgow; Julie R. Williamson: University of Glasgow

Teaser Video: Watch Now

When travelling on public transport, passengers use devices such as mobile phones or laptops to pass the time. VR (Virtual Reality) head-mounted displays could provide advantages over these devices by delivering personal and private experiences that help the wearer escape their confined space. This paper presents the key factors that influence VR acceptance on different modes of public transport (from buses to aeroplanes), uncovered through two surveys (N1=60, N2=108). An initial analysis of responses revealed unique passenger needs and challenges currently preventing wider VR adoption, creating parameters for future research.

Effects of Immersion and Visual Angle on Brand Placement Effectiveness

Poster

Booth: E12

Sebastian Oberdörfer: University of Würzburg; Samantha Straka: University of Würzburg; Marc Erich Latoschik: Department of Computer Science, HCI Group

Teaser Video: Watch Now

Typical inherent properties of immersive Virtual Reality (VR) such as felt presence might have an impact on how well brand placements are remembered. In this study, we exposed participants to brand placements in four conditions of varying degrees of immersion and visual angle on the stimulus. Placements appeared either as a poster or as a puzzle. We measured the recall and recognition of these placements. Our study revealed that neither immersion nor the visual angle had a significant impact on memory for brand placements.

Measuring the Effects of Virtual Environment Design on Decision-Making

Poster

Booth: E11

Sebastian Oberdörfer: University of Würzburg; David Heidrich: German Aerospace Center (DLR); Sandra Birnstiel: University of Würzburg; Marc Erich Latoschik: Department of Computer Science, HCI Group

Teaser Video: Watch Now

Recent research indicates an impairment in decision-making in immersive Virtual Reality (VR) when completing the Iowa Gambling Task (IGT). There is a high potential for emotions to explain the IGT decision-making behavior. The design of a virtual environment (VE) can influence a user's mood and hence potentially the decision-making. In a novel user study, we measure decision-making using three virtual versions of the IGT. The versions differ with regard to the degree of immersion and design of the VE. Our results revealed no significant impact of the VE on the IGT and hence on decision-making.

Learning Hawaiian Open Ocean Navigation Methods with Kilo Hōkū VR

Poster

Booth: F42

Patrick Karjala: University of Hawaiʻi at Mānoa; Dean Lodes: University of Hawaiʻi at Mānoa; Anna Sikkink: University of Hawaiʻi at Mānoa; Kari Noe: University of Hawaiʻi at Mānoa; Jason Leigh: University of Hawaiʻi at Mānoa

Teaser Video: Watch Now

Kilo Hōkū VR (lit. "to observe and study the stars") is a virtual reality simulation of the Hōkūleʻa, a Polynesian double-hulled sailing canoe, and the practice of Modern Hawaiian wayfinding, or non-instrument open ocean navigation. It was developed to assist in the cultural preservation of the celestial navigation portion of Modern Hawaiian wayfinding, and to expand the availability of learning opportunities. Here we introduce new features added to the simulation for teacher and student interaction and learning. We observed the potential viability of using Kilo Hōkū VR with students who are currently learning wayfinding in a classroom setting.

CAVE vs. HMD in Distance Perception

Poster

Booth: F43

Théo Combe: Institut Image; Jean-Rémy Chardonnet: Arts et Métiers, Institut Image; Frederic Merienne: Arts et Métiers; Jivka Ovtcharova: Institute for Information Management in Engineering

Teaser Video: Watch Now

This study analyzes the differences between a CAVE system and a Head-Mounted Display (HMD), two technologies presenting important differences, focusing on distance perception, as past research on this factor is usually carried out with only one or the other device. We performed two experiments. First, we explored the impact of the HMD's weight, removing any other bias. Second, we compared distance perception using a simple hand interaction in a replicated environment. Results reveal that the HMD's weight has no significant impact over short distances, and the usage of a virtual replica was found to improve distance perception.

Requirements Gathering for VR Simulators for Training: Lessons Learned for Globally Dispersed Teams

Poster

Booth: F38

Vivian Gómez: Universidad de los Andes; Pablo Figueroa: Universidad de los Andes; Kelly Katherine Peñaranda: Universidad de Los Andes

Teaser Video: Watch Now

We report an empirical study on the use of current VR technologies for requirements gathering in the field of simulation and training. We used synchronous and asynchronous traditional techniques plus collaborative virtual environments such as Mozilla Hubs and AltspaceVR. Our results show that requirements gathering in VR makes a difference in the process of requirements identification. We report advantages and shortcomings that can be useful for future practitioners. For example, we found that VR sessions allowed for better identification of dimensions and sizes. VR sessions for requirements gathering could also benefit from better pointers and better sound.

VR-Phore: A Novel Virtual Reality system for Diagnosis of Binocular Vision

Poster

Booth: F37

Sai Srinivas Vuddagiri: International Institute of Information Technology; Kavita Vemuri: International Institute of Information Technology - Hyderabad; Male Shiva Ram: University of Hyderabad; Rishi Bhardwaj: University of Hyderabad

Teaser Video: Watch Now

Binocular vision (BV) is the result of fusion between the inputs from each eye to form a coherent image. BV anomalies are evaluated using different diagnostic tests and instruments. One such instrument is the Synoptophore, which evaluates three grades of BV. Though efficient, this equipment has certain limitations: sensitivity to ambient light while testing, bulkiness, and expense. We propose VR-Phore, an application of a VR head-mounted display for diagnostics based on the principle of the haploscope, similar to the Synoptophore. The proposed system addresses the limitations of the Synoptophore with the added advantage of a software platform to incorporate testing modules for a range of clinical conditions.

Capturing Human-Robot Interaction with Virtual Robots, Simulated Sensors, Real-Time Performance Capture, and Inverse Kinematics

Poster

Booth: F36

Mark Murnane: University of Maryland, Baltimore County; Padraig Higgins: University of Maryland, Baltimore County; Monali Saraf: University of Maryland, Baltimore County; Francis Ferraro: University of Maryland Baltimore County; Cynthia Matuszek: University of Maryland, Baltimore County; Don Engel: University of Maryland, Baltimore County

Teaser Video: Watch Now

We present a suite of tools to model a robot, its sensors, and the surrounding environment in VR, with the goal of collecting training data for real-world robots. The virtual robot observes a rigged avatar created in our photogrammetry facility and embodying a VR user. We are particularly interested in verbal human/robot interactions, which can be combined with the robot's sensor data for grounded language learning. Because virtual scenes, tasks, and robots are easily reconfigured compared to their physical analogs, our approach proves extremely versatile in preparing a wide range of robot scenarios for an array of use cases.

Effect of Context and Distance Switching on Visual Performances in Augmented Reality

Poster

Booth: C28
Note: previously at Booth E38 - Expo Hall B

Mathilde Drouot: IMT Atlantique; Nathalie Lebigot: UBO; Jean-Louis de Bougrenet: IMT Atlantique; Vincent Nourrit: IMT Atlantique

Teaser Video: Watch Now

Augmented reality may lead the user to repeatedly look at different environments (real/virtual) and at different distances to process information. We studied how context and distance switching could (together or separately) affect users' performance. 29 participants (16 video game players) performed two tasks that required switching between two screens (a visual search and a target detection task). These screens could be virtual (using a HoloLens 2) or real, and placed at 1.5 or 2 meters. Distance switching had an impact only on visual search performance. Participants' levels of experience with video games modified the effect of context switching.

Visual Indicators for Monitoring Students in a VR class

Poster

Booth: F31

David Michael Broussard: University of Louisiana at Lafayette; Yitoshee Rahman: University of Louisiana at Lafayette; Arun K Kulshreshth: University of Louisiana at Lafayette; Christoph W Borst: University of Louisiana at Lafayette

Teaser Video: Watch Now

Remote classes using VR technology are gaining recognition when in-person meetings are difficult or risky. We designed an immersive VR interface with several visual cues to support teacher awareness of students and their actions, attention, and temperament in a social VR environment. This interface keeps relevant information about students within the teacher's visual field of attention and has options to reduce the amount of information presented. Pilot study participants preferred to see all student indicators in one place and suggested we minimize the amount of information displayed to focus on the most urgent students.

Multiscale Sensor Fusion for Display-Centered Head Tracking

Poster

Booth: F32

Tianyu Wu: NC State University; Benjamin Watson: NC State University

Teaser Video: Watch Now

Emerging display usage scenarios require head tracking both at short (<1 m) and modest (<3 m) ranges. Yet it is difficult to find low-cost, unobtrusive tracking solutions that remain accurate across this range. By combining multiple head tracking solutions, we can mitigate the weaknesses of one solution with the strengths of another and improve head tracking overall. We built such a combination of two widely available and low-cost trackers, a Tobii Eye Tracker and a Kinect. The resulting system is more effective than the Kinect at short range, and more effective than the Tobii at longer range.
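
The abstract does not specify the fusion rule, so the following is a minimal Python sketch of one plausible approach: a range-dependent weighted average of the two trackers' head-position estimates. The blend weights, thresholds, and the fuse_head_position helper are purely illustrative, not the authors' method.

```python
import numpy as np

def fuse_head_position(p_tobii, p_kinect, depth_m, near=1.0, far=3.0):
    """Blend two head-position estimates (metres, camera space).

    Hypothetical rule: trust the Tobii fully below `near`, the Kinect
    fully beyond `far`, and interpolate linearly in between. The
    actual fusion used in the poster is not specified.
    """
    w = np.clip((depth_m - near) / (far - near), 0.0, 1.0)
    return (1.0 - w) * np.asarray(p_tobii) + w * np.asarray(p_kinect)

# Example: a head 1.8 m from the display.
print(fuse_head_position([0.02, 0.10, 1.79], [0.03, 0.12, 1.82], 1.8))
```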

Exploring Human-Computer Interaction (HCI) criteria in the design and assessment of Next Generation VR based education and training environments

Poster

Booth: F33

J Cecil: Oklahoma State University; Sam O Kauffman: Oklahoma State University; Aaron Cecil-Xavier: University of Wisconsin-Madison; Avinash Gupta: Oklahoma State University; Vern McKinney: Yavapai Regional Medical Center; Mary Sweet-Darter: Applied Behavior Analysis of Oklahoma (ABA-OK)

Teaser Video: Watch Now

This paper discusses the approach and outcomes of adopting design and assessment criteria based on Human-Computer Interaction (HCI) principles. A general framework is presented which has been adapted to support teaching and training in two domains: (i) training first responders involved in the Covid-19 response and (ii) teaching science and engineering to students with autism. The framework emphasizes the importance of HCI principles such as affordance, visual density, and cognitive load during the design process. The environments were created using various interfaces and immersion levels. The preliminary results of the assessment demonstrated the positive impact of such environments in both domains.

Field of View Effect on Distance Perception in Virtual Reality

Poster

Booth: F28

Sina Masnadi: University of Central Florida; Kevin Pfeil: University of Central Florida; Jose-Valentin T Sera-Josef: University of Central Florida; Joseph LaViola: University of Central Florida

Recent state-of-the-art Virtual Reality HMDs provide wide FoVs that were not possible in the past. Previous efforts have shown that reduced FoVs affect user perception of distance in a given environment, but none have investigated VR HMDs with wide FoVs. In this paper, we directly investigate the effect of HMD FoV on distance estimation in virtual environments. We performed a user study with 14 participants who performed a blind throwing task wearing a Pimax 5K Plus HMD, in which we virtually restricted the FoV to 200, 110, and 60 degrees. We found a significant difference in perceived distance between the 200- and 60-degree FoVs, as well as between the 110- and 60-degree FoVs. However, no significant difference was observed between 200 and 110 degrees.

HMD Type and Spatial Ability: Effects on the Experiences and Learning of Students in Immersive Virtual Field Trips

Poster

Booth: F27

Pejman Sajjadi: Pennsylvania State University; Jiayan Zhao: The Pennsylvania State University; Jan Oliver Wallgrün: The Pennsylvania State University; Peter LaFemina: The Pennsylvania State University; Alexander Klippel: The Pennsylvania State University

Teaser Video: Watch Now

We report on the results of a study in the context of place-based immersive VR (iVR) geoscience education that compares the experiences and learning of 45 students after going through an immersive virtual field trip, using either a lower-sensing but scalable Oculus Quest or a higher-sensing but tethered HTC Vive Pro. Our results suggest that with content design considerations, standalone HMDs can be a viable replacement for high-end ones in large-scale educational studies. Furthermore, our results also suggest that the spatial ability of students can be a determining factor for their experiences and learning.

Estimating Gaze From Head and Hand Pose and Scene Images for Open-Ended Exploration in VR Environments

Poster

Booth: F26

Kara J Emery: The University of Nevada, Reno; Marina Zannoli: Facebook Reality Labs; Lei Xiao: Facebook Reality Labs; James Warren: Facebook Reality Labs; Sachin S Talathi: Facebook

Teaser Video: Watch Now

Though previous research has shown coordination between non-eye signals and gaze, whether head, hand, and scene signals and their complete combination are useful for estimating gaze has not yet been quantified. To address this, we collected a dataset of head, hand, scene, and gaze signals as users explore open-ended virtual environments hosting a variety of potential actions. We show that gaze estimation models trained on signals from each individual sensor and their full combination outperform baseline gaze estimates across cross-validation methods. We conclude that these non-eye signals comprise useful information for estimating gaze that can complement traditional eye tracking methodologies.

Personal Identifiability of User Tracking Data During VR Training

Poster

Booth: F21

Alec G Moore: University of Central Florida; Ryan P. McMahan: University of Central Florida; Hailiang Dong: University of Texas at Dallas; Nicholas Ruozzi: University of Texas at Dallas

Teaser Video: Watch Now

Recent research indicates that user tracking data from virtual reality (VR) experiences can be used to personally identify users at accuracies as high as 95 percent. However, these results indicating that non-verbal data should be understood as personally identifying data were based on observing 360-degree videos. In this paper, we present participant identification results based on a session of user tracking data from a VR training application, which show accuracies above 90 percent. While still highly accurate, this decrease indicates that the personal identifiability of user tracking data is likely dependent upon the nature of the underlying VR experience.

The Importance of Sensory Feedback to Enhance Embodiment during Virtual Training of Myoelectric Prostheses Users

Poster

Booth: F22

Reidner Santos Cavalcante: Universidade Federal de Uberlândia; Aya Gaballa: Qatar University; John Cabibihan: Qatar University; Alcimar Soares: Faculty of Electrical Engineering, Federal University of Uberlândia; Edgard Afonso Lamounier Jr.: Federal University of Uberlândia

Teaser Video: Watch Now

In this poster, we propose a system that uses immersive Virtual Reality (iVR) and EMG signal processing (muscle activity) to provide a training environment for amputees who are expected to use a myoelectric prosthesis. We also investigate the efficiency of learning how to control a virtual prosthesis with and without sensory feedback. Our results show that virtual training can be greatly improved when proper tactile feedback is provided, especially for controlling myoelectric prostheses.

Affordance Judgments in Mobile Augmented Reality with Cues

Poster

Booth: F23

Yu Zhao: Vanderbilt University; Jeanine Stefanucci: University of Utah; Sarah Creem-Regehr: University of Utah; Bobby Bodenheimer: Vanderbilt University

Teaser Video: Watch Now

We investigated two judgments of action capabilities with virtual objects presented through smartphones: passing through an aperture and stepping over a gap. The results showed that users were conservative in their affordance judgments for the two actions, but that judgments became more accurate with training by AR cues. In the post-cue trials, passing-through judgments improved; in contrast, stepping-over judgments became more precise when the cue was present, but this improvement did not generalize to the post-cue block.

Remote Asynchronous Collaboration in Maintenance scenarios using Augmented Reality and Annotations

Poster

Booth: C31

Bernardo Marques: Universidade de Aveiro; Samuel Silva: Universidade de Aveiro; António Rocha: Bosch Termotecnologia, S.A.; Paulo Dias: University of Aveiro; Beatriz Sousa Santos: University of Aveiro

Teaser Video: Watch Now

This paper presents an Augmented Reality (AR) remote collaborative approach making use of different stabilized annotation features, part of ongoing research with industry partners. It enables a remote expert to assist an on-site technician during asynchronous maintenance tasks. To foster the creation of a shared understanding, the on-site technician uses mobile AR, allowing the identification of issues, while the remote expert uses a computer to share annotations and provide spatial information about objects, events and areas of interest. The results of a pilot user study to evaluate asynchronous collaborative aspects while using the approach are also presented.

A Comparison of Single and Multi-View IR image-based AR Glasses Pose Estimation Approaches

Poster

Booth: C32

Ahmet Firintepe: BMW Group Research, New Technology, Innovations; Alain Pagani: German Research Center for Artificial Intelligence; Didier Stricker: German Research Center for Artificial Intelligence

Teaser Video: Watch Now

In this paper, we present a study on single- and multi-view image-based AR glasses pose estimation with two novel methods. The first approach, named GlassPose, is a VGG-based network. The second approach, GlassPoseRN, is based on ResNet18. We train and evaluate the two custom-developed glasses pose estimation networks with one, two and three input images on the HMDPose dataset. We achieve errors as low as 0.10° for orientation and 0.90 mm for translation, on average over all axes. For both networks, we observe minimal improvements in position estimation with more input views.
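
The abstract names ResNet18 as the backbone but does not detail the network head, so the PyTorch sketch below is a single-view stand-in under stated assumptions: a ResNet18 adapted to one-channel IR input that regresses a unit quaternion plus a translation vector. The class name and the output layout are hypothetical, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class GlassPoseRNSketch(nn.Module):
    """Single-view sketch of a ResNet18-based glasses pose regressor."""
    def __init__(self):
        super().__init__()
        self.backbone = resnet18(weights=None)
        # IR images are single channel; adapt the first conv layer.
        self.backbone.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)
        # Assumed output layout: 4 quaternion values + 3 translation values.
        self.backbone.fc = nn.Linear(512, 7)

    def forward(self, x):
        out = self.backbone(x)
        q = nn.functional.normalize(out[:, :4], dim=1)  # valid unit rotation
        t = out[:, 4:]
        return q, t

# Smoke test on a batch of two fake 224x224 IR crops.
q, t = GlassPoseRNSketch()(torch.randn(2, 1, 224, 224))
```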

bmlSUP - A SMPL Unity Player

Poster

Booth: C33

Adam O. Bebko: York University; Anne Thaler: York University; Nikolaus F. Troje: York University

Teaser Video: Watch Now

Realistic virtual characters are important for many applications. The SMPL body model is based on 3D body scans and uses body shape and pose-dependent blendshapes to achieve realistic human animations [3]. Recently, a large database of SMPL animations called AMASS has been released [4]. Here, we present a tool, called the BioMotionLab SMPL Unity Player (bmlSUP), that allows these animations to be viewed and controlled in Unity. This tool provides an easy interface to load and display AMASS animations in 3D immersive environments and mixed reality. We present the functionality, uses, and possible applications of this new tool.

Do materials matter? How surface representation affects presence in virtual environments

Poster

Booth: C34

Jennifer Brade: Institute for Machine Tools and Production Processes; Alexander Kögel: Professorship of Ergonomics and Innovation Management; Benjamin Schreiber: Institute for Machine Tools and Production Processes; Franziska Klimant: Institute for Machine Tools and Production Processes

Teaser Video: Watch Now

This article reports the impact of different levels of visual realism of materials applied to objects on perceived presence during an assembly task. The results of the experiment show that there is a significant difference between the more realistic scene and one where the surfaces of objects have been replaced with simpler, CAD-inspired visualizations. Despite this difference, both scenarios reach high values for presence and acceptance. Therefore, less detailed and less realistic rendering of surfaces might be sufficient to obtain a high presence and acceptance level in scenarios that focus on manual tasks, if the associated drop in presence can be tolerated.

Lightweight Quaternion Transition Generation with Neural Networks

Poster

Booth: D31

Romi Geleijn: IT University of Copenhagen; Adrian Radziszewski: IT University of Copenhagen; Julia Beryl van Straaten: IT University of Copenhagen; Henrique Galvan Debarba: IT University of Copenhagen

Teaser Video: Watch Now

This paper introduces the Quaternion Transition Generator (QTG), a new network architecture tailored to animation transition generation for virtual characters. The QTG is simpler than the current state of the art, making it lightweight and easier to implement. It uses approximately 80% fewer arithmetic operations compared to other transition networks. Additionally, this architecture is capable of generating visually accurate rotation-based animation transitions and results in a lower Mean Absolute Error than transition generation techniques that are commonly used for animation blending.

ARCritique: Supporting Remote Design Critique of Physical Artifacts through Collaborative Augmented Reality

Poster

Booth: D32

Yuan Li: Virginia Tech; David Hicks: Virginia Tech; Wallace Lages: Virginia Tech; Sang Won Lee: Virginia Polytechnic Institute and State University; Akshay Sharma: Virginia tech; Doug Bowman: Virginia Tech

Teaser Video: Watch Now

Design critique sessions require students and instructors to jointly view and discuss physical artifacts. However, in remote learning scenarios, available tools (such as videoconferencing) are insufficient due to ineffective, inefficient communication of spatial information. This paper presents ARCritique, a mobile augmented reality application that combines KinectFusion and ARKit to allow users to 1) scan artifacts and share the resulting 3D models, 2) view the model simultaneously in a shared virtual environment from remote physical locations, and 3) point to and draw on the model to aid communication. A preliminary evaluation of ARCritique revealed great potential for supporting remote design education.

Simulation and Assessment of Safety Procedure in an Immersive Virtual Reality (IVR) Laboratory

Poster

Booth: D33

Hossain Samar Qorbani: Carleton university; Ali Arya: Carleton University; Nuket Nowlan: Carleton University; Maryam Abdinejad: University of Toronto

Teaser Video: Watch Now

This paper presents the early research findings of our approach of utilizing an immersive virtual reality (IVR) environment as an educational tool in Science, Technology, Engineering, and Math (STEM). The proposed approach is demonstrated for the science laboratory. A realistic environment and interactions, immersive presence, automatic (in-app) data collection, and the possibility of following different educational theories such as experiential learning and the use of actual course content are among the features that make this approach novel and help address existing shortcomings in STEM education, especially during COVID-19 restrictions.

An X-Ray Vision System for Situation Awareness in Action Space

Poster

Booth: D34

Nate Phillips: Mississippi State University; Farzana Alam Khan: Mississippi State University; Brady Allen Kruse: Mississippi State University; Cindy Bethel: Mississippi State University; J. Edward Swan II: Mississippi State University

Teaser Video: Watch Now

Usable x-ray vision has long been a goal in augmented reality research and development. X-ray vision, or the ability to view and understand information presented through an opaque barrier, would be eminently useful across a variety of domains. Unfortunately, however, the effect of x-ray vision on situation awareness, an operator's understanding of a task or environment, has not been significantly studied. This is an important question; if x-ray vision does not increase situation awareness, of what use is it? Thus, we have developed an x-ray vision system in order to investigate situation awareness in the context of action space distances.

Industrial Augmented Reality: Connecting Machine-, NC- and Sensor-Data to an AR Maintenance Support System

Poster

Booth: D28

Mario Lorenz: Chemnitz University of Technology; Shamik Shandilya: Chemnitz University of Technology; Sebastian Knopp: Chemnitz University of Technology; Philipp Klimant: Chemnitz University of Technology

Teaser Video: Watch Now

Access to machine data, e.g. axis positions, inside an AR maintenance application can potentially increase the usefulness of AR in maintenance: technicians would no longer need to walk to the machine control to look up information. However, the machine control interface and data are machine-manufacturer dependent, making it necessary to customize the interface between the machine control and the AR maintenance application. Here, we present a solution integrating machine control access from three different machines using a middleware box. A qualitative assessment with technicians confirms the usefulness of direct machine data access from an AR maintenance application.

CDVVAR: VR/AR Collaborative Data Visualization Tool

Poster

Booth: D27

Amal Yassien: German University in Cairo; Youssef Emad Hamada: German University in Cairo; Slim Abdennadher: German University in Cairo

Teaser Video: Watch Now

The emergence of immersive platforms has created room for designing more effective data visualization tools. Therefore, we developed CDVVAR, a VR/AR collaborative tool that can visualize any dataset using three different techniques. Our prototype enables users to share graphs and highlight (ping) specific data points across VR or mobile AR platforms. We conducted a within-subject study (12 pairs) to evaluate the effectiveness of our prototype. Each pair was shown graphs and asked to point to specific data values in both the VR and mobile AR setups. The time to collaboratively answer the questions was recorded along with users' general feedback.

RED: A Real-Time Datalogging Toolkit for Remote Experiments

Poster

Booth: D26

Sam Adeniyi: University of Minnesota; Evan Suma Rosenberg: University of Minnesota; Jerald Thomas Jr.: University of Minnesota

Teaser Video: Watch Now

The ability to conduct experiments on virtual reality systems has become increasingly compelling as the world continues to migrate towards remote research, affecting the feasibility of conducting in-person studies with human participants. The Remote Experiment Datalogger (RED) Toolkit is an open-source library designed to simplify the administration of remote experiments requiring continuous real-time data collection. Our design consists of a REST server, implemented using the Flask framework, and a client API for transparent integration with multiple game engines. We foresee the RED Toolkit serving as a building block for the handling of future remote experiments across a multitude of circumstances.
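
The abstract states that RED pairs a Flask-based REST server with game-engine clients; the snippet below is a minimal Python sketch of what such a datalogging endpoint could look like. The route, payload shape, and in-memory store are illustrative assumptions, not the toolkit's actual API.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
trials = {}  # in-memory store; the real toolkit may persist differently

@app.route("/log/<trial_id>", methods=["POST"])
def log_sample(trial_id):
    """Append one timestamped tracking sample sent by a client API."""
    sample = request.get_json()  # e.g. {"t": 3.2, "head": [x, y, z]}
    trials.setdefault(trial_id, []).append(sample)
    return jsonify(received=len(trials[trial_id]))

if __name__ == "__main__":
    app.run(port=5000)
```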

Investigating the Influence of Sound Source Visualization on the Ventriloquism Effect in an Auralized Virtual Reality Environment

Poster

Booth: D25

Nigel Frangenberg: TH Köln; Kristoffer Waldow: TH Köln; Arnulph Fuhrmann: TH Köln

Teaser Video: Watch Now

The ventriloquism effect (VQE) describes the illusion that the puppeteer's voice seems to come out of the puppet's mouth. This effect can even be observed in virtual reality (VR) when a spatial discrepancy between the auditory and visual components occurs. However, previous studies have never fully investigated the impact of visual quality on the VQE. Therefore, we conducted an exploratory experiment to investigate the influence of the visual appearance of a loudspeaker on the VQE in VR. Our evaluation yields significant differences in the vertical plane, which leads to the assumption that the less realistic model produced a stronger VQE than the realistic one.

Technology acceptance of a VR e-learning application addressing the cellular transport mechanisms

Poster

Booth: D21

Sascha Müller: Pädagogische Hochschule; Wolfgang Mueller: Pädagogische Hochschule

Teaser Video: Watch Now

Over the last decades, research has shown that students experience difficulties in understanding cellular transport mechanisms, especially when it comes to processes on the molecular level. One major difficulty in the comprehension of these concepts is their abstract nature along with the demand for spatial ability. To address this problem, a VR application has been developed which allows students to explore molecular transport mechanisms across the cell by interacting with the environment, manipulating the molecule concentration, or initiating transport mechanisms by moving molecules. A study showed an increase in understanding of osmosis and diffusion after using the application.

Virtual Loupes: An Augmented Reality Aid for Microsurgery

Poster

Booth: D22

Cory Ilo: Virginia Tech; Waylon Zeng MD: Virginia Tech; Doug Bowman: Virginia Tech

Teaser Video: Watch Now

Microsurgery requires the use of loupes, which are optical elements attached to eyeglass frames that magnify objects such as blood vessels and nerves. To address the inflexibility of loupes, we have developed an augmented reality system that provides virtual loupes with a video see-through headset, allowing for easy-to-use variable zoom in a surgical context. Our prototype solution utilizes a gaze-controlled interface, which allows the zoom level to be selected in either a discrete or continuous manner. A feasibility study revealed both the potential of virtual loupes and the limitations of current technology to realize this concept.

Programmable Virtual Reality Environments

Poster

Booth: D23

Nanlin Sun: Virginia Tech; Annette Feng: Virginia Tech; Ryan M Patton: Virginia Commonwealth University; Yotam Gingold: George Mason University; Wallace Lages: Virginia Tech

Teaser Video: Watch Now

We present a programmable virtual environment that allows users to create and manipulate 3D objects via code while inside virtual reality. Our environment supports the control of 3D transforms as well as physical and visual properties. Programming is done by means of a custom visual block language that is translated into Lua scripts. We believe that the direction of this project will benefit computer science education by helping students learn programming and spatial thinking more efficiently.
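
As an illustration of translating a visual block program into Lua, here is a hedged Python sketch: blocks are modeled as nested tuples and emitted as Lua statements. The block vocabulary and the Lua-side methods (setPosition, setColor) are hypothetical stand-ins for the environment's real bindings.

```python
def block_to_lua(block):
    """Translate one visual block (a nested tuple) into Lua source."""
    kind = block[0]
    if kind == "set_position":
        _, obj, (x, y, z) = block
        return f"{obj}:setPosition({x}, {y}, {z})"
    if kind == "set_color":
        _, obj, color = block
        return f'{obj}:setColor("{color}")'
    if kind == "repeat":  # a block holding a list of child blocks
        _, n, body = block
        inner = "\n".join("  " + block_to_lua(b) for b in body)
        return f"for i = 1, {n} do\n{inner}\nend"
    raise ValueError(f"unknown block: {kind}")

# A tiny block program: repeat twice {move the cube, recolor it}.
program = ("repeat", 2, [("set_position", "cube", (0, 1, 2)),
                         ("set_color", "cube", "red")])
print(block_to_lua(program))
```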

Tactical and Strategical Analysis in Virtual Geographical Environments

Poster

Booth: D24

Bettina Schlager: VRVis Forschungs-GmbH; Daniela Stoll: VRVis Forschungs-GmbH; Katharina Krösl: VRVis Forschungs-GmbH; Anton L Fuhrmann: VRVis Forschungs-GmbH

Teaser Video: Watch Now

Planning tactical and strategic operations on a terrain is a well-structured and complex process. It is usually executed by a group of interdisciplinary experts with different objectives. Traditional equipment for tactical analysis, like paper maps or sand tables, cannot convey the spatial understanding needed for visibility analysis and height judgment. Immersive virtual geographical environments offer additional perspectives for rapid decision making and reasoning about spatial structures. We propose a collaborative multi-user virtual reality system for the tactical analysis of terrain data to support mission planning.

Leveraging AR and Object Interactions for Emotional Support Interfaces

Poster

Booth: E42

Anjali Sapra: Virginia Tech; Wallace Lages: Virginia Tech

Teaser Video: Watch Now

This work explores ways we can leverage augmented reality systems to create meaningful and emotional interactions with physical objects. We explore how hand and eye interactions can be used to trigger meaningful visual and auditory feedback to the user. These investigations show how AR interfaces can interpret the actions of a user to shift the primary purpose of a physical object for emotional support.

The Onset Time of the Dynamic and Static Invisible Body Illusion

Poster

Booth: D43

Ryota Kondo: Toyohashi University of Technology

Teaser Video: Watch Now

Body part ownership is elicited in 23 s, and the full-body illusion is produced in 5 s. Our previous study has shown that synchronized hand and foot movements produce the full-body illusion at an empty space between the hands and feet in 5 min at most, but the exact onset time is still unclear. This study investigated the optimal learning method and learning time for the invisible body illusion by measuring the onset time of the illusion in active movement and static observation. As a result, the illusion was produced after 6.93 s of synchronized movement or 4.67 s of static observation. These results suggest that both active movement and static observation are effective for the invisible body illusion.

Investigating motor skill training and user arousal levels in VR: Pilot Study and Observations

Poster

Booth: E43

Unnikrishnan Radhakrishnan: Aarhus University; Alin Blindu: Unity Studios; Francesco Chinello: Aarhus University; Konstantinos Koumaditis: Aarhus University

Teaser Video: Watch Now

Virtual Reality (VR) for skill training is seeing increasing interest from academia and industry thanks to the highly immersive and realistic training opportunities it offers. Of the many factors affecting the effectiveness of training in VR, the arousal levels of users merit a closer examination. Though subjective methods of measuring arousal exist in the VR literature, there is potential in using cost-effective sensors to directly measure arousal from bio-signals generated by the nervous system. We introduce the design of, and preliminary observations from, a pilot study exploring users' arousal levels and performance while executing a series of fine motor skill tasks (buzzwire tracing). Future directions of the work are also discussed.

Myopia in Head-Worn Virtual Reality

Poster

Booth: E44

Lara Panfili: Vienna University of Technology; Michael Wimmer: TU Wien; Katharina Krösl: VRVis Forschungs-GmbH

Teaser Video: Watch Now

In this work, we investigate the influence of myopia on the perceived visual acuity (VA) in head-worn virtual reality (VR). Factors such as display resolution or vision capabilities of users influence the VA in VR. We simulated eyesight tests in VR and on a desktop screen and conducted a user study comparing VA measurements of participants with normal sight and participants with myopia. Surprisingly, our results suggest that people with severe myopia can see better in VR than in the real world, while the VA of people with normal or corrected sight or mild myopia is reduced in VR.

Immersive Authoring of Virtual Reality Training

Poster

Booth: E38

Fernando Cassola: INESC TEC; Manuel Pinto: INESC TEC; Daniel Mendes: INESC TEC; Leonel Morgado: INESC TEC; António Coelho: INESC TEC/DEI; Hugo Paredes: INESC TEC

Teaser Video: Watch Now

The use of VR in industrial training helps to reduce costs and risks, supporting more frequent and diversified use of experiential learning activities, an approach with proven results. In this work, we present an innovative immersive authoring tool for experiential learning in VR-based training. It enables a trainer to structure an entire VR training course in an immersive environment, defining its sub-components, models, tools, and settings, as well as specifying by demonstration the actions to be performed by trainees. The trainees performing the immersive training course have their actions recorded and matched to the ones specified by the trainer.

Virtual Optical Bench: A VR learning tool for optical design

Poster

Booth: E37

Sebastian Pape: RWTH Aachen University; Martin Bellgardt: RWTH Aachen University; David Gilbert: RWTH Aachen University; Georg König: RWTH Aachen University; Torsten Wolfgang Kuhlen: RWTH Aachen University

Teaser Video: Watch Now

The design of optical lens assemblies is a difficult process that requires substantial expertise. Today this process is taught on physical optical benches, which are often too expensive for students to purchase. One way of circumventing these costs is to use software to simulate the optical bench. This work presents a virtual optical bench that leverages real-time ray tracing in combination with VR rendering to create a repeatable, non-hazardous and feature-rich learning environment. The resulting application was evaluated in an expert review with 6 optical engineers.

ARThings – enhancing the visitors’ experience in museums through collaborative AR

Poster

Booth: E36

Andreea Lupascu: Technical University of Cluj-Napoca, Romania; Aurelia Ciupe: Technical University of Cluj-Napoca, Romania; Serban Meza: Technical University of Cluj-Napoca, Romania; Bogdan Orza: Technical University of Cluj-Napoca, Romania

Teaser Video: Watch Now

Museum exhibitions are adopting Culturtainment, a new range of emergent media, as a means of engaging their visitors through a high level of creative output and consumption, where Augmented Reality becomes a technological challenge for augmenting the visitors' experience in a meaningful way. ARThings extends the concept of touring an art exhibition of paintings towards collecting itinerary and interaction analytics through collaborative AR. A demonstrative prototype has been implemented for the Art Museum of Brașov, Transylvania, Romania, where a User Acceptance Test was conducted with 21 participants, together with an on-site assessment, positively validating future adoption.

Naturalistic audio-visual volumetric sequences dataset of sounding actions for six degree-of-freedom interaction

Poster

Booth: E35

Hanne Stenzel: Fraunhofer IIS; Davide Berghi: University of Surrey; Marco Volino: University of Surrey; Philip J.B. Jackson: University of Surrey

Teaser Video: Watch Now

As audio-visual systems increasingly bring immersive and interactive capabilities into our work and leisure activities, so the need for naturalistic test material grows. New volumetric datasets have captured high-quality 3D video, but accompanying audio is often neglected, making it hard to test an integrated bimodal experience. Designed to cover diverse sound types and features, the presented volumetric dataset was constructed from audio and video studio recordings of scenes to yield forty short action sequences. Potential uses in technical and scientific tests are discussed.

Psychophysiology, eye-tracking and VR: exemplary study design

Poster

Booth: E31

Radosław Sterna: Institute of Psychology, Faculty of Philosophy, Jagiellonian University; Artur Cybulski: AGH University of Science and Technology; Magdalena Igras-Cybulska: AGH University of Science and Technology; Joanna Pilarczyk: Institute of Psychology, Faculty of Philosophy, Jagiellonian University; Agnieszka Siry: Institute of Psychology, Faculty of Philosophy, Jagiellonian University; Michał Kuniecki: Institute of Psychology, Faculty of Philosophy, Jagiellonian University

Teaser Video: Watch Now

In our poster we present the design of a study that will be conducted in a virtual environment using psychophysiology (EDA, HR and pupil size) and eye-tracking. Although many papers implementing the aforementioned methods have been published to date, we believe that these techniques are still not fully utilized. Therefore, the focus of our work is on the methodological steps we have taken and the technical setup we created to provide proper ground for the measurement. Our aim was to increase the signal-to-noise ratio, achieve better control of confounders, and obtain reliable results.

CeVRicale: A VR app for Cervical Rehabilitation

Poster

Booth: E32

Arnaldo Cesco: University of Bologna; Francesco Ballardin: University of Bologna; Gustavo Marfia: University of Bologna

Teaser Video: Watch Now

We propose CeVRicale, a cervical rehabilitation application based on the use of virtual reality (VR). CeVRicale is smartphone-based, thus it may be available to larger shares of the population compared to applications implemented for head-mounted displays such as the HTC Vive or Oculus Rift. The app exploits a smartphone's sensors to track head movements in five exergames inspired by rehabilitation exercises. This project is the first step in a study to evaluate the effectiveness and efficiency of a low-cost VR application in the treatment of cervical musculoskeletal disorders.

Preserving Family Album Photos with the HoloLens 2

Poster

Booth: E33

Lorenzo Stacchio: University of Bologna; Shirin Hajahmadi: University of Bologna; Gustavo Marfia: University of Bologna

Teaser Video: Watch Now

We propose an AR application capable of identifying and segmenting photos while browsing a family album. In addition to this digitization step, we also include a second one where the digitized images are analyzed, tagging them with metadata pertaining to their presumed socio-historical context and their date. The addition of such metadata is key not only from a cataloguing point of view, but also from a conservation perspective. In fact, analog photos are more often preserved when their subject and date are known. To this aim, we experiment with the use of the HoloLens 2 along with artificial intelligence paradigms.

Using High Fidelity Avatars to Enhance Learning Experience in Virtual Learning Environments

Poster

Booth: E34

Vlasios Kasapakis: University of the Aegean; Elena Dzardanova: University of the Aegean

Teaser Video: Watch Now

Virtual Learning Environments (VLEs) are the quintessence of the integration of several Information and Communication Technologies (ICTs) in the learning process. In this study, we investigate the performance of a multi-user Virtual Reality Learning Environment (VRLE), incorporating high-fidelity avatars, as a tool for enhancing the learning experience in VLEs. This work presents the system architecture along with preliminary evaluation results provided by 19 post-graduate students.

The Royal Game of Ur: Virtual Reality Prototype of the Board Game Played in Ancient Mesopotamia

Poster

Booth: E28

Krzysztof Pietroszek: American University; Zaki Andiga Agraraharja: American University; Christian Eckhardt: California Polytechnic State University San Luis Obispo

Teaser Video: Watch Now

We present a virtual reality implementation of the "Royal Game of Ur", an ancient board game played by the people of the Akkadian empire since around 3000 BC. The game's rules, recently deciphered from cuneiform tablets, have been implemented using the MinMax approach with alpha-beta pruning optimization, allowing for highly competitive game-play against even the most skilled human players. The game utilizes freehand interaction and, experimentally, a brain-computer interface and passive haptic feedback to roll the dice and move the pieces.
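
Since the abstract names minimax with alpha-beta pruning as the opponent's search technique, here is a generic Python sketch of that algorithm. The game-specific parts (move generation, evaluation, and the dice, which strictly speaking call for chance nodes) are abstracted behind a placeholder `game` object, so this is an illustration of the technique, not the authors' implementation.

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, game):
    """Minimax search with alpha-beta pruning.

    `game` must supply moves(state), apply(state, move),
    evaluate(state) and is_terminal(state); all are placeholders here.
    """
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    if maximizing:
        value = -math.inf
        for move in game.moves(state):
            value = max(value, alphabeta(game.apply(state, move),
                                         depth - 1, alpha, beta, False, game))
            alpha = max(alpha, value)
            if alpha >= beta:   # beta cut-off: opponent avoids this line
                break
        return value
    value = math.inf
    for move in game.moves(state):
        value = min(value, alphabeta(game.apply(state, move),
                                     depth - 1, alpha, beta, True, game))
        beta = min(beta, value)
        if beta <= alpha:       # alpha cut-off
            break
    return value
```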

Combining Virtual Reality with Camera Data and a Wearable Sensor Jacket to Facilitate Robot Teleoperation

Poster

Booth: E27

Boris Illing: Fraunhofer Institute for Communication, Information Processing and Ergonomics FKIE; Bastian Gaspers: Fraunhofer Institute for Communication, Information Processing and Ergonomics FKIE; Dirk H. Schulz: Fraunhofer Institute for Communication, Information Processing and Ergonomics FKIE

Teaser Video: Watch Now

Unmanned ground vehicles (UGV) with differing degrees of autonomy are increasingly used for routine tasks, albeit still widely controlled through teleoperation in safety-critical contexts. We propose to complement a traditional approach of showing camera views on a 2D screen with an immersive Virtual Environment (VE), displaying camera data with depth cues around a model of the robot. Combining both approaches with a wearable sensor jacket, the operator can choose the best one for the current task. Preliminary experiments showed that even untrained operators can successfully solve pick and place tasks with high accuracy with our system.

Collaborative Design of Augmented Flashcards for Design History Class Case Study

Poster

Booth: E26

Laura A. Huisinga: California State University - Fresno

Teaser Video: Watch Now

This case study looks at augmented reality (AR) flashcards used in a university-level classroom. The collaborative design effort used research through design (RtD) and drew on the outcomes of three prior case studies using AR in the classroom; this poster focuses on the latest case study. We use RtD to develop various AR user interfaces (UIs) for educators to use as a framework to augment classroom content, aiming to improve learning outcomes for all students. The case study shows that more research is warranted, and there is a viable need for using AR in higher education humanities courses.

Remote Assistance with Mixed Reality for Procedural Tasks

Poster

Booth: E25

Manuel Rebol: American University; Colton Hood: The George Washington University; Claudia Ranniger: The George Washington University; Adam Rutenberg: The George Washington University; Neal Sikka: The George Washington University; Erin Maria Horan: American University; Christian Gütl: Graz University Of Technology; Krzysztof Pietroszek: American University

Teaser Video: Watch Now

We present a volumetric communication system that is designed for remote assistance of procedural tasks. The system allows a remote expert to visually guide a local operator. The two parties share a view that is spatially identical, but for the local operator it is of the object on which they operate, while for the remote expert the object is presented as a mixed reality "hologram". Guidance is provided by voice, gestures, and annotations performed directly on the object of interest or its hologram. At each end of the communication, spatial information is visualized using mixed-reality glasses.

Lipoma Extraction Surgery Simulation in a Multi-user Environment

Poster

Booth: E21

Santiago Felipe Carreño: Universidad Militar Nueva Granada; Byron Perez-Gutierrez: Universidad Militar Nueva Granada; Alvaro Uribe Quevedo: University of Ontario Institute of Technology; Norman Jaimes: Universidad Militar Nueva Granada

Teaser Video: Watch Now

The present work proposes the use of a multi-user virtual environment for simulation of lipoma extraction surgery. Starting from the characterization of the procedure, all the steps were mapped to a virtual environment. The implemented prototype includes two main actors, the surgeon and the medical instrumentalist, as well as observers that can join the simulation at any time. Voice chat was included in order to allow lifelike operating room communication, a key element for a successful simulation. Initial evaluation indicated that the proposed solution is promising in a work-from-home environment as a complementary tool for medical education.

A Method for Measuring the Perceived Location of Virtual Content in Optical See Through Augmented Reality

Poster

Booth: E22

Farzana Alam Khan: Mississippi State University; Veera Venkata Ram Murali Krishna Rao Muvva: University of Nebraska-Lincoln; Dennis Wu: Mississippi State University; Mohammed Safayet Arefin: Mississippi State University; Nate Phillips: Mississippi State University; J. Edward Swan II: Mississippi State University

Teaser Video: Watch Now

For optical see-through augmented reality (AR), a new method for measuring the perceived 3D location of a virtual object is presented. The method is tested with a Microsoft HoloLens, and examines two different virtual object designs, whether turning in a circle disrupts HoloLens tracking, and whether accuracy errors found with a HoloLens display might be restricted to that particular display. Turning in a circle did not disrupt HoloLens tracking, and a second HoloLens did not suggest systematic errors restricted to a specific display. The proposed method could measure the perceived location of a virtual object to a precision of ~1 mm.

Posters - Hall B

A practical framework of multi-person 3D human pose estimation with a single RGB camera

Poster

Booth: D35

Le Ma: Institute of Automation, Chinese Academy of Sciences; Sen Lian: Institute of Automation, Chinese Academy of Sciences; Shandong Wang: Intel Labs China; Weiliang Meng: Zhejiang Lab; Jun Xiao: University of Chinese Academy of Sciences; Xiaopeng Zhang: Institute of Automation, Chinese Academy of Sciences

Teaser Video: Watch Now

We propose a practical framework named 'DN-2DPN-3DPN' for multi-person 3D pose estimation with a single RGB camera. Our framework performs three stages of processing on the input video: our DetectNet (DN) first detects each person's bounding box in each frame, our 2DPoseNet (2DPN) then estimates the 2D pose of each person, and our 3DPoseNet (3DPN) is finally applied to obtain the 3D poses. Experiments validate that our method can achieve state-of-the-art performance for multi-person 3D human pose estimation on the Human3.6M dataset.
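
The three-stage pipeline can be pictured as a simple orchestration loop; the Python sketch below assumes the three networks are callables with the illustrated interfaces, which is our assumption rather than the authors' code.

```python
def crop(frame, box):
    """Cut the region box = (x0, y0, x1, y1) out of an H x W x C array."""
    x0, y0, x1, y1 = box
    return frame[y0:y1, x0:x1]

def estimate_poses(video_frames, detect_net, pose2d_net, pose3d_net):
    """Three-stage DN -> 2DPN -> 3DPN pipeline, per the abstract."""
    results = []
    for frame in video_frames:
        boxes = detect_net(frame)                              # stage 1: person boxes
        poses2d = [pose2d_net(crop(frame, b)) for b in boxes]  # stage 2: 2D poses
        poses3d = [pose3d_net(p2d) for p2d in poses2d]         # stage 3: lift to 3D
        results.append(poses3d)
    return results
```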

A Protocol for Dynamic Load Distribution in Web-Based AR

Poster

Booth: D31

Rajath Jayashankar: Cochin University of Science and Technology; Akul Santhosh: Cochin University of Science and Technology; Sahil Athrij: Cochin University of Science and Technology; Arun Padmanabhan: Cochin University of Science and Technology; Sheena Mathew: Cochin University of Science and Technology

Teaser Video: Watch Now

In a Web-based Augmented Reality (AR) application, achieving an immersive experience requires precise object detection, realistic model rendering, and smooth occlusion in real time. Achieving these objectives on-device requires heavy computation capabilities unavailable on most mobile devices; this can be solved by using cloud computing, but that introduces network latency issues. In this work, we propose a new network protocol named DLDAR (Dynamic Load Distribution in Web-based AR) that facilitates and standardizes methods for dynamically dividing compute between client and server based on device and network conditions, minimizing latency while maximizing quality.
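
DLDAR's actual policy and message format are not given in the abstract; the toy Python sketch below only illustrates the kind of client-side split decision such a protocol could standardize, comparing estimated on-device time against server time plus transfer cost. All parameter names are hypothetical.

```python
def choose_placement(task_flops, device_flops_per_s, server_flops_per_s,
                     rtt_s, payload_bytes, bandwidth_bps):
    """Pick where to run one AR workload step: 'client' or 'server'."""
    local_t = task_flops / device_flops_per_s
    remote_t = (task_flops / server_flops_per_s      # server compute
                + rtt_s                              # round-trip latency
                + 8 * payload_bytes / bandwidth_bps) # payload transfer
    return "client" if local_t <= remote_t else "server"

# A mid-range phone, a fast server, and a 60 ms round trip.
print(choose_placement(2e9, 5e9, 1e11, 0.06, 300_000, 2e7))  # -> "server"
```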

Digital Twin as A Mixed Reality Platform for Art Exhibition Curation

Poster

Booth: D32

Inhwa Yeom: KAIST; Woontack Woo: KAIST

Teaser Video: Watch Now

We present a digital twin-based Mixed Reality platform for arts curators. Despite the rising presence of exhibitions in varying digital formats, arts curators have been largely excluded from virtual engagement. To replicate and enhance the professional capability of arts curators in the virtual realm, our system integrates a digital twin art space and 3D authoring techniques, aided with spatial and semantic metadata. Our user evaluation demonstrates the system's usability and its capacity to support curatorial activities. With this, we aim to provide a groundwork for similar systems that also extend arts curators' creativity and outreach beyond time and space.

Through the Solar System: an XR science education system based on multiple monitors

Poster

Booth: D33

YanXiang Zhang: University of Science and Technology of China; JiaYu Wang: University of Science and Technology of China

Teaser Video: Watch Now

This paper presents a multi-user extended reality (XR) science education system based on multiple monitors. It makes effective use of existing equipment and space, allowing teachers and students to share immersive virtual scenes, increasing the possibility of collaboration, and attracting students through interactive interfaces and plot design, so that they actively learn scientific knowledge through face-to-face communication and interaction, thereby enhancing the effect of science education.

Visualization and Manipulation of Air Conditioner Flow via Touch Screen

Poster

Booth: D34

Wei Yaguang: Osaka University; Jason Orlosky: Osaka University; Tomohiro Mashita: Osaka University

Teaser Video: Watch Now

In this paper, we present a smartphone-controlled interface for both manipulating A/C air flow and visualizing the resulting output as a 3D Augmented Reality (AR) overlay. In contrast with previous work, we generate airflow models based on the A/C's exhaust vents, allowing users to see the effects of interactions on airflow in real time. We also implemented and tested three different manipulation methods, including swipe-, drag-, and button-based manipulation using finger or device gestures. Experiments showed that participants (N=50) were able to control air flow most quickly with the swipe-based interface, which outperformed the drag and button median completion times by 26.5% and 36.1%, respectively.

A-Visor and A-Camera: Arduino-based Cardboard Head-Mounted Controllers for VR Games

Poster

Booth: D28

Sangmin Park: Hanyang University; Hojun Aan: Hanyang University; Junhyeong Jo: Hanyang University; Hyeonkyu Kim: Hanyang University; Sangsun Han: Hanyang University; Jimoon Kim: Hanyang University; Pilhyoun Yoon: Hanyang University; Kibum Kim: Hanyang University

Teaser Video: Watch Now

The Nintendo Labo: VR Kit introduced several types of cardboard controllers that allow users to enjoy virtual reality through various interactions. However, it is not compatible with smartphones, which many people use to access VR easily. In this study, we used Arduino and a smartphone to create two customized low-cost cardboard head-mounted VR controllers, which we call A-Visor and A-Camera. We also created VR games for A-Visor and A-Camera using Unity3D. Thus, we present new DIY head-mounted VR controllers made by assembling corrugated cardboard, Arduino, and sensors, all of which are readily accessible to DIY enthusiasts.

Hand-by-Hand Mentor: An AR based Training System for Piano Performance

Poster

Booth: D27

Ruoxi Guo: Beihang University; Jiahao Cui: Beihang University; Wanru Zhao: Beihang University; Shuai Li: Beihang University; Aimin Hao: Beihang University

Teaser Video: Watch Now

Multimedia instrument training has gained great momentum, benefiting from augmented and virtual reality (AR/VR) technologies. We present an AR-based individual training system for piano performance that uses only MIDI data as input. Based on fingerings decided by a pre-trained Hidden Markov Model (HMM), the system employs musical prior knowledge to automatically generate natural-looking 3D animation of hand motion. The generated virtual hand demonstrations are rendered in head-mounted displays and registered with a piano roll. Two user studies show that the system imposes relatively little cognitive load and may increase learning efficiency and quality.
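
The abstract does not include the decoding step, but extracting a fingering sequence from a trained HMM is conventionally done with Viterbi decoding; the sketch below is the textbook algorithm (states would be fingers, observations MIDI pitches), not the authors' trained model.

    import numpy as np

    def viterbi(obs, start_p, trans_p, emit_p):
        """Most likely hidden-state sequence for `obs`, computed in log space."""
        n_states = trans_p.shape[0]
        dp = np.full((len(obs), n_states), -np.inf)   # best log-probability so far
        back = np.zeros((len(obs), n_states), dtype=int)
        dp[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
        for t in range(1, len(obs)):
            for s in range(n_states):
                scores = dp[t - 1] + np.log(trans_p[:, s])
                back[t, s] = int(np.argmax(scores))
                dp[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
        path = [int(np.argmax(dp[-1]))]               # backtrack from the best end state
        for t in range(len(obs) - 1, 0, -1):
            path.append(back[t, path[-1]])
        return path[::-1]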

Impact of Avatar Anthropomorphism and Task Type on Social Presence in Immersive Collaborative Virtual Environments

Poster

Booth: F24

Charlotte Dubosc: Arts et Métiers Institute of Technology; Geoffrey Gorisse: Arts et Métiers Institute of Technology; Olivier Christmann: Arts et Métiers Institute of Technology; Sylvain Fleury: Arts et Métiers Institute of Technology; Killian Poinsot: Arts et Métiers Institute of Technology; Simon Richir: Arts et Métiers Institute of Technology

Teaser Video: Watch Now

Eliciting a sense of social presence is necessary to create believable multi-user situations in immersive virtual environments. To be able to collaborate in virtual worlds, users are represented by avatars (virtual characters controlled in real time) allowing them to interact with each other. We report a study investigating the impact on social presence of both non-human avatars' facial properties and of the type of collaborative task being performed by the users (asymmetric collaboration versus negotiation). While we observed no significant impact of facial properties, both co-presence and perceived message understanding scores were significantly higher during the negotiation task.

AREarthQuakeDrill: Toward Increased Awareness of Personnel during Earthquakes via AR Evacuation Drills

Poster

Booth: D26

Kohei Yoshimi: Information Engineering System; Photchara Ratsamee: Osaka University; Jason Orlosky: Osaka University

Teaser Video: Watch Now

Evacuation drills are carried out to reduce injury and death caused by earthquakes. However, the content of evacuation drills is fixed to confirming evacuation routes and actions, and this immutability reduces user motivation and seriousness. In this paper, we propose Augmented Reality (AR) based evacuation drills. We use an optical see-through head-mounted display for mapping and recognizing the room interior. Our system constructs an AR drill environment in the real environment with the after-effects of an earthquake disaster. We evaluated our system in experiments with 10 participants. Comparing cases with and without AR obstacles, we found that our AR system affected participants' motivation and the diversity of their evacuation routes.

Interactive Context-Aware Furniture Recommendation using Mixed Reality

Poster

Booth: D25

Hongfei Yu: Beijing Institute of Technology; Wei Liang: Beijing Institute of Technology; Shihao Song: Beijing Institute of Technology; Bing Ning: School of Computer Science, Beijing Institute of Technology; Yixin Zhu: UCLA

Teaser Video: Watch Now

We present a Mixed Reality (MR) system, running on a HoloLens, that provides context-aware furniture recommendation in an interactive fashion. First, a ranking-based metric learning method is adopted to represent furniture compatibility in a latent space. Then, in the recommendation process, a physical scene is captured by the cameras mounted on the MR device, and two types of scene context are analyzed: (1) category context and (2) spatial context. Finally, the item with the minimal weighted ranking distance in the latent space is recommended to the user. With MR devices, a user can perceive and manipulate the recommended furniture in real time. We conducted a user study to validate the efficacy of the proposed system.
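
The final selection step reads like a weighted nearest-neighbor search in the learned latent space; here is a hedged sketch under that assumption (embeddings, weights, and field names are invented).

    import numpy as np

    def recommend(scene_context, candidates, w_category=0.6, w_spatial=0.4):
        """candidates: iterable of (name, category_vec, spatial_vec) embeddings."""
        best_name, best_dist = None, np.inf
        for name, cat_vec, spa_vec in candidates:
            dist = (w_category * np.linalg.norm(scene_context['category'] - cat_vec)
                    + w_spatial * np.linalg.norm(scene_context['spatial'] - spa_vec))
            if dist < best_dist:                 # keep the minimal weighted distance
                best_name, best_dist = name, dist
        return best_name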

Augmented Reality based Surgical Navigation for Percutaneous Endoscopic Transforaminal Discectomy

Poster

Booth: D21

Junjun Pan: Beihang University; Ranyang Li: Beihang University; Dongfang Yu: State Key Laboratory of Virtual Reality Technology and Systems; Xinliang Wang: BUAA; Wenhao Zheng: Beihang University; Xin Huang: Peking University Third Hospital; Bin Zhu: Peking University Third Hospital; Haijun Zeng: Beijing Normal University; Xiaoguang Liu: Peking University Third Hospital

Teaser Video: Watch Now

Fluoroscopic guidance is a critical step in the puncture procedure of percutaneous endoscopic transforaminal discectomy (PETD). In this paper, we propose an AR surgical navigation system for PETD based on multi-modality imaging information, comprising fluoroscopy, optical tracking, and a depth camera. We also present a self-adaptive calibration and transformation method between the 6-DOF optical tracking device and the depth camera, which operate in different coordinate systems. With a substantially reduced frequency of fluoroscopy imaging, the system can accurately track and superimpose the virtual puncture needle on fluoroscopy images in real time.

Play with Emotional Characters: Improving User Emotional Experience by A Data-driven Approach in VR Volleyball Games

Poster

Booth: D22

Zechen Bai: Institute of Software, Chinese Academy of Sciences; Naiming Yao: Institute of Software, Chinese Academy of Sciences; Nidhi Mishra: Nanyang Technological University; Hui Chen: Institute of Software, Chinese Academy of Sciences; Hongan Wang: Institute of Software, Chinese Academy of Sciences; Nadia Magnenat Thalmann: Nanyang Technological University

Teaser Video: Watch Now

In real-world volleyball games, players are generally aware of the emotions of other players because they can observe facial expressions, body behaviors, etc., which evokes a rich emotional experience. However, most VR volleyball games concentrate on modeling gameplay rather than supporting an emotional experience. We introduce a data-driven framework to enhance the user's emotional experience and engagement by building emotional virtual characters in VR volleyball games. This framework enables virtual characters to arouse emotions according to the game state and express emotions through facial expressions. Evaluation results demonstrate that our framework enhances the user's emotional experience and engagement.

GazeTance Guidance: Gaze and Distance-Based Content Presentation for Virtual Museum

Poster

Booth: D23

Haopeng Lu: Shanghai Jiao Tong University; Huiwen Ren: Peking University; Yanan Feng: MIGU Co.,Ltd; Shanshe Wang: Peking University; Siwei Ma: Peking University; Wen Gao: Peking University

Teaser Video: Watch Now

The increasing popularity of virtual reality provides new opportunities for online exhibitions, especially for fragile artwork in museums. However, the limited guidance approaches of virtual museums might hinder the acquisition of knowledge. In this paper, we propose a novel interaction concept named GazeTance Guidance, which leverages the user's gaze point and interaction distance relative to a region of interest (ROI) to help users appreciate artworks in a more organized manner. We conducted a series of comprehension tasks on several long scroll paintings and verified the necessity of guidance. Compared with a no-guidance condition, participants showed better memory performance on the ROIs without compromising presence or user experience.
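
As we read it, the guidance trigger combines a gaze test with a distance test; the following minimal sketch is illustrative only (names and thresholds are ours, not the paper's).

    def should_present(gaze_point, roi_center, roi_radius, user_distance, max_distance):
        """Present ROI content only when the gaze is on the ROI and the user is near it."""
        dx = gaze_point[0] - roi_center[0]
        dy = gaze_point[1] - roi_center[1]
        gazing_at_roi = (dx * dx + dy * dy) ** 0.5 <= roi_radius
        return gazing_at_roi and user_distance <= max_distance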

Effects of Virtual Environments and Self-representations on Redirected Jumping

Poster

Booth: D24

Yi-Jun Li: Beihang University; Miao Wang: Beihang University; De-Rong Jin: Beihang University; Frank Steinicke: Universität Hamburg; Shi-Min Hu: Tsinghua University; Qinping Zhao: Beihang University

Teaser Video: Watch Now

We design experiments to measure the perception (detection thresholds for gains, presence, embodiment, intrinsic motivation, and cybersickness) and physical performance (heart rate intensity, preparation time, and actual jumping distance) of redirected jumping (RDJ) under six different combinations of virtual environments (VEs) (low and high visual richness) and self-representations (SRs) (invisible, shoes, human-like). Results suggested that both VEs and SRs influence users' perception and performance in RDJ and have to be taken into account when designing locomotion techniques.

Cognitive Load/flow and Performance in Virtual Reality Simulation Training of Laparoscopic Surgery

Poster

Booth: E41

Peng Yu: Beihang University; Junjun Pan: Beihang University; Zhaoxue Wang: Beijing Normal University; Yang Shen: National Engineering Laboratory for Cyberlearning and Intelligent Technology, Faculty of Education; Lili Wang: Beihang University; Jialun Li: Beihang University; Aimin Hao: Beihang University; Haipeng Wang: Beijing General Aerospace Hospital

Teaser Video: Watch Now

VR-based laparoscopic surgical simulators (VRLS) are increasingly popular for training surgeons. However, in most research they are validated only by subjective methods. In this paper, we resort to physiological approaches to objectively quantify the influence of a VRLS training system and analyze trainee performance. The results show that the VRLS markedly improves medical students' performance (p<0.01) and enables participants to attain a flow experience at a lower cognitive load. Quantitative physiological analysis shows that participant performance is negatively correlated with cognitive load.

The Effects of a Stressful Physical Environment During Virtual Reality Height Exposure

Poster

Booth: E42

Howe Yuan Zhu: University of Technology Sydney; Hsiang-Ting Chen: University of Adelaide; Chin-Teng Lin: Centre of Artificial Intelligence, School of Software, Faculty of Engineering and Information Technology, University of Technology Sydney

Teaser Video: Watch Now

Virtual reality height exposure is a reliable method of inducing stress with low variance across age and demographics. As the rendering fidelity of virtual environments increases dramatically, the physical environment tends to be neglected or simplified. This paper presents an experiment that explored the effects of an elevated physical platform combined with a virtually heightened environment on induced stress. Fifteen participants experienced four conditions of varying physical and virtual heights. Participants reported significantly higher stress levels when physically elevated, regardless of the virtual height, which suggests that physical elevation itself induces additional stress.

3D Fluid Volume Editing based on a Bidirectional Time Coupling Optimization Approach

Poster

Booth: E43

Xiaoying Nie: Beihang University; Yong Hu: Beihang University; Zhiyuan Su: Beihang University; Xukun Shen: Beihang University

Teaser Video: Watch Now

We propose a novel optimization approach to locally edit a 3D fluid volume. Starting from a fluid reconstruction sequence or a fluid simulation sequence, we formulate geometric deformations as a nonlinear optimization problem to match user-specified targets. To seamlessly blend the edited 3D fluid volume into the original temporal sequence, we provide a bidirectional time coupling optimization approach. This approach takes the physical properties of the previous frame and the next frame as constraints when solving the current frame, while respecting the spatial-temporal consistency of edits across various scenarios. Our results indicate the intuitiveness and efficacy of our method.

Magnification Vision - a Novel Gaze-Directed User Interface

Poster

Booth: E44

Sondre Agledahl: University College London; Anthony Steed: University College London

Teaser Video: Watch Now

We present a novel magnifying tool for virtual environments, where users are given a view of the world through a handheld window controlled by their real-time eye gaze data. The system builds on the optics of real magnifying glasses and prior work in gaze-directed interfaces. A pilot study evaluating these techniques against a baseline reveals no significant improvement in performance, though users appear to prefer the new technique.

WebPoseEstimator: A Fundamental and Flexible Pose Estimation Framework for Mobile Web AR

Poster

Booth: E37

Yakun Huang: Beijing University of Posts and Telecommunications; Xiuquan Qiao: Beijing University of Posts and Telecommunications; Zhijie Tan: Beijing University of Posts and Telecommunications; Jianwei Zhang: Capinfo Company Limited; Jiulin Li: Beijing National Speed Skating Oval Operation Company Limited

Teaser Video: Watch Now

Exploring immersive augmented reality (AR) on the cross-platform web has attracted growing interest. We implement WebPoseEstimator, a fundamental and flexible pose estimation framework that runs on the common web platform and provides the core capability to enable true Web AR. WebPoseEstimator is the first real-time pose estimation framework that optimizes a loosely coupled multi-sensor fusion framework to flexibly adapt to heterogeneous mobile devices. We also describe how to optimize and compile this computationally intensive pose estimation from C++ source code into JavaScript profiles of less than 2 MB, thus providing the fundamental, underlying capability for implementing true Web AR.

Analysis of Positional Tracking Space Usage when using Teleportation

Poster

Booth: E36

Aniruddha Prithul: University of Nevada Reno; Eelke Folmer: University of Nevada

Teleportation is a widely used virtual locomotion technique that allows users to navigate beyond the confines of available tracking space with a low possibility of inducing VR sickness. Because teleportation requires little physical effort and lets users traverse large distances instantly, a risk is that over time users might only use teleportation and abandon walking input. This paper provides insight into this risk by presenting results from a study that analyzes tracking space usage of three popular commercially available VR games that rely on teleportation. Our study confirms that positional tracking usage is limited by the use of teleportation.

Saw It or Triggered It: Exploring the Threshold of Implicit and Explicit Interaction for Eye-tracking Technique in Virtual Reality

Poster

Booth: E35

Tzu-Hsuan Yang: Department of Computer Science & Information Engineering; Jing-Yuan Huang: Department of Computer Science & Information Engineering; Ping-Hsuan Han: National Taipei University of Technology; Yi-Ping Hung: Department of Computer Science & Information Engineering

Teaser Video: Watch Now

With eye-tracking techniques, a virtual reality (VR) system can determine what the user is looking at in the virtual environment (VE). In this paper, we set out to determine whether eye-tracking techniques can be applied as implicit interactions. We conducted a user study to investigate the threshold between implicit and explicit interaction for eye tracking in VR. We designed three interfaces for users to judge whether they trigger objects in the VE upon seeing them. The result provides a parameter for design guidelines and points toward a new VR storytelling technique.

Personal Space Evaluation and Protection in Social VR

Poster

Booth: E31

Jiayi Sun: Beihang University; Wenli Jiang: Beihang University; Lutong Li: School of New Media Art and Design; Chong Cao: Beihang University

Teaser Video: Watch Now

Social VR has been widely promoted and popularized in recent years. Due to the immersive nature of VR, even though people have no physical contact in the virtual space, they still judge distance and may feel annoyed when their personal space is intruded upon. Social VR users want to communicate with their virtual friends in the space while keeping a comfortable distance from other avatars and protecting their personal space. In this paper, we evaluate users' perception of and comfort with personal distance, and compare four different methods to protect personal space in social VR.

Visual Techniques to Reduce Cybersickness in Virtual Reality

Poster

Booth: E32

Colin Groth: TU Braunschweig; Jan-Philipp Tauscher: TU Braunschweig; Nikkel Heesen: TU Braunschweig; Susana Castillo: TU Braunschweig; Marcus Magnor: TU Braunschweig

Teaser Video: Watch Now

Cybersickness is an unpleasant phenomenon caused by the visually induced impression of ego-motion while in fact being seated. To reduce its negative impact on VR experiences, we analyze the effectiveness of two techniques -- peripheral blurring and field of view reduction -- through an experiment in an interactive racing game environment displayed on a commercial head-mounted display with an integrated eye tracker. To measure the level of discomfort experienced by our participants, we use self-report and physiological measurements. Our results indicate that, of the two techniques, reducing the displayed field of view by up to 10 degrees is the most effective at mitigating cybersickness.

Adaptive Web-Based VR Streaming of Multi-LoD 3D Scenes via Author-Provided Relevance Scores

Poster

Booth: E33

Hendrik Lievens: Hasselt University; Maarten Wijnants: Hasselt University; Mike Vandersanden: Hasselt University; Peter Quax: Hasselt University/Flanders Make/tUL; Wim Lamotte: Hasselt University

Teaser Video: Watch Now

The growing storage requirements of 3D virtual scenes and the increased heterogeneity of consumption devices trigger the need for novel, on-demand streaming techniques for textured meshes. This paper proposes a way to perform adaptive bit-rate (ABR) scheduling using MPEG-DASH, tailored for VR consumption in the web browser. Scene authors are able to annotate the relative importance of assets to optimize scheduling decisions. The results show that Relevance ABR outperforms the state-of-the-art (measured using the MS-SSIM metric) across different scene complexities and network configurations, and is found to be most beneficial when scene complexity is high and network conditions are poor.
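
One plausible shape for relevance-driven ABR scheduling, sketched under our own assumptions (field names and the single greedy pass are ours): spend the bandwidth budget on the LoD upgrades with the highest author-assigned relevance per byte.

    def schedule(assets, budget_bytes):
        """assets: list of dicts with 'relevance' and 'lod_sizes' (ascending byte sizes).
        Returns {asset index: chosen LoD level}; greedy, so only an approximation."""
        choice = {i: 0 for i in range(len(assets))}          # everyone starts at lowest LoD
        spent = sum(a['lod_sizes'][0] for a in assets)
        upgrades = []
        for i, a in enumerate(assets):
            for lod in range(1, len(a['lod_sizes'])):
                cost = a['lod_sizes'][lod] - a['lod_sizes'][lod - 1]
                upgrades.append((a['relevance'] / cost, i, lod, cost))
        for _, i, lod, cost in sorted(upgrades, reverse=True):
            if choice[i] == lod - 1 and spent + cost <= budget_bytes:
                choice[i] = lod                              # apply the next upgrade level
                spent += cost
        return choice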

Investigating Individual Differences in Olfactory Adaptation to Pulse Ejection Odor Display by Scaling Olfaction Sensitivity of Intensity

Poster

Booth: E34

Shangyin Zou: The University of Tokyo; Yuki Ban: The University of Tokyo; Shinichi Warisawa: The University of Tokyo

Teaser Video: Watch Now

Olfactory adaptation is a non-negligible issue to consider when providing a sustained olfactory experience in virtual reality. This study conducted experiments to measure users' adaptation to a pulse-ejection odor display using ink-jet devices and analyzed individual differences in olfactory perception. The results revealed that average intensity perception dropped to approximately 70% of the maximum perceived intensity over a 10-minute scent display session. Furthermore, individuals' adaptation levels were correlated with their personal sensitivity to olfactory intensity variations, acquired via a labeled magnitude scale. This work provides a theoretical basis for personalizing odor displays in VR for a more stable olfactory experience.

Determining the Target Point of the Mid-Air Pinch Gesture

Poster

Booth: E28

Reigo Ban: The University of Tokyo; Yutaro Hirao: The University of Tokyo; Takuji Narumi: the University of Tokyo

Teaser Video: Watch Now

Pinching is a common gesture primarily used for zooming on mobile devices, and previous studies have considered utilizing it as a mid-air gesture in AR/VR. As opposed to touch screens, there is no physical contact point between the display and the fingers in mid-air pinching, which means the positional relationship between the target point for zooming and the user's finger movement could differ from that on touch screens. In this study, we investigated this relationship in mid-air pinching to estimate the target point from the hand posture, and found that the point was significantly offset towards the thumb and away from the index finger (approximately 7% offset). This finding contributes to more accurate mid-air zooming.
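
Applied naively, the reported bias suggests shifting the estimated target away from the fingertip midpoint; a hedged Python sketch follows (the exact interpretation of the 7% offset is ours, not the paper's).

    import numpy as np

    def pinch_target(thumb_tip, index_tip, offset=0.07):
        """Estimate the intended target point from the two fingertip positions."""
        thumb_tip, index_tip = np.asarray(thumb_tip), np.asarray(index_tip)
        midpoint = (thumb_tip + index_tip) / 2.0
        return midpoint + offset * (thumb_tip - index_tip)   # shift toward the thumb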

Indicators and Predictors of the Suspension of Disbelief: Children's Individual Presence Tendencies

Poster

Booth: E27

Andreas Dengel: University of Würzburg; Lucas Plabst: Julius-Maximilians-University Würzburg; David Fernes: Julius-Maximilians University

Teaser Video: Watch Now

Presence is a phenomenon in which a person distributes their attention to internal (mental) or distal (sensory) cues. Research shows that not only technological immersion but also person-specific factors influence the sense of presence. These factors may constitute an individual presence tendency (IPT) that affects how presence is experienced. This paper investigates whether an IPT exists, how it can be calculated, and what factors influence it. We present a study with 78 participants, aged 13-16, who experienced three different environments within different immersive settings. The results show that the level of technological immersion has a strong effect on presence.

Who kicked the ball? Situated Visualization in On-Site Sports Spectating

Poster

Booth: E26

Wei Hong Lo: University of Otago; Stefanie Zollmann: University of Otago; Holger Regenbrecht: University of Otago

Teaser Video: Watch Now

With recent technological advancements in sports broadcasting, viewers who follow a game through broadcast media or online are presented with an enriched experience that includes additional content, such as statistics and graphics, that helps them follow the game. In contrast, spectators at live sporting events often miss out on this additional content. In this paper, we explore the opportunities of using situated visualization to enrich on-site sports spectating. We developed two novel situated visualization approaches for on-site sports spectating: (1) situated broadcast-styled visualization, which mimics television broadcasts, and (2) situated infographics, which places visual elements into the environment.

SHeF-WIP: Walking-in-Place based on Step Height and Frequency for Wider Range of Virtual Speed

Poster

Booth: E25

Yutaro Hirao: The University of Tokyo; Takuji Narumi: the University of Tokyo; Ferran Argelaguet Sanz: Inria; Anatole Lécuyer: Inria

Teaser Video: Watch Now

Walking-in-place (WIP) approaches face difficulties in reaching high locomotion speeds because of the required high step frequency, which rapidly creates an awkward or risky experience for the user. In this paper, we introduce a novel WIP approach called Step-Height-and-Frequency (SHeF) WIP, which considers a second parameter, the step height, in addition to the step frequency, to better control the speed of advancement. We compared SHeF-WIP with a conventional WIP system in a user study conducted with 12 participants. Our results suggest that SHeF-WIP enabled participants to reach higher virtual speeds (+80%) with greater efficacy and ease.
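
A hedged reading of the SHeF idea in code: virtual speed grows with both step frequency and step height rather than frequency alone. The gain constants below are invented, not the paper's calibration.

    def virtual_speed(step_freq_hz, step_height_m, k_freq=0.5, k_height=2.0):
        """Map in-place stepping to forward speed (m/s) from two step parameters."""
        return step_freq_hz * (k_freq + k_height * step_height_m)

    print(virtual_speed(1.5, 0.10))  # modest steps at 1.5 Hz -> 1.05 m/s
    print(virtual_speed(1.5, 0.30))  # higher steps, same cadence -> 1.65 m/s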

Gender Differences of Cognitive Loads in Augmented Reality-based Warehouse

Poster

Booth: E21

Zihan Yan: Zhejiang University; Yifei Shan: Zhejiang University; Yiyang Li: Zhejiang University; Kailin Yin: Zhejiang University; Xiangdong Li: Zhejiang University

The rapid emergence of augmented reality (AR) has brought considerable advantages to contemporary warehouses. However, due to inherent biological and cognitive differences, male and female workers may perceive AR systems differently. Understanding these differences is essential to improving workers' productivity and well-being. We therefore developed an AR headset that helps participants with parcel scanning and evaluated gender differences in the context of long-lasting, repetitive parcel scanning. The results show that the female workers had significantly lower operational efficiency, higher visual attention, and higher memory loads than the male workers, but they quickly gained ground in these aspects.

Velocity Guided Amplification of View Rotation for Seated VR Scene Exploration

Poster

Booth: E12

Songhai Zhang: Tsinghua University; Chen Wang: Tsinghua University; Yizhuo Zhang: Tsinghua University; Fang-Lue Zhang: Victoria University of Wellington; Nadia Pantidi: Victoria University of Wellington; Shi-Min Hu: Tsinghua University

Teaser Video: Watch Now

This paper presents a velocity-guided amplification approach for head rotation in VR headsets, enabling the amplification factor to be dynamically changed according to head rotation velocity while keeping it within a comfortable range. We first conducted experiments to investigate the effects of head rotation velocity on how humans perceive virtual view rotation, and propose the velocity-guided amplification approach based on them. We then performed an extensive evaluation comparing our amplification method with existing linear mapping methods that use constant amplification factors. Results demonstrate that users achieve the best performance on given tasks, with less discomfort, when using our technique.
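
One way to realize such velocity-guided amplification is to interpolate the rotation gain between comfortable bounds according to head angular speed; the bounds and the normalizing speed below are assumptions, not the paper's fitted values.

    def rotation_gain(head_speed_dps, g_min=1.0, g_max=2.0, v_ref=120.0):
        """Amplification factor that rises with head rotation speed (deg/s)."""
        t = min(head_speed_dps / v_ref, 1.0)        # normalize speed into [0, 1]
        return g_min + t * (g_max - g_min)

    def virtual_yaw_delta(physical_delta_deg, head_speed_dps):
        return physical_delta_deg * rotation_gain(head_speed_dps)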

Virtual Reality Based Mass Disaster Triage Training for Emergency Medical Services

Poster

Booth: E11

Nicole Bilek BSc.: Institute for Creative Media Technologies; Alisa Feldhofer: Institute for Creative Media Technologies; Thomas Moser: Institute for Creative Media Technologies

Teaser Video: Watch Now

When mass disasters with multiple casualties and injured people happen, Emergency Medical Service staff need strong organizational skills in addition to medical knowledge to efficiently apply triage systems. Currently, it is very expensive for Emergency Medical Services to practice these skills because they need to set up elaborate training scenarios with actors and complex environments. This work presents a Virtual Reality training application that aims to replicate the learning experience of real-life training without the disadvantage of the organizational effort. The application complements current training, allowing more frequent training for all staff members.

An Embedded Virtual Experiment Environment System for Reality Classroom

Poster

Booth: E22

YanXiang Zhang: University of Science and Technology of China; YuTong Zi: University of Science and Technology of China; JiaYu Wang: University of Science and Technology of China

Teaser Video: Watch Now

We designed a low-cost augmented virtuality system based on the Oculus Quest to embed VR in classrooms. To build the system, we measure the size and position of tables in the classroom, build a proxy model in Unity, and then embed the proxy model seamlessly within the real classroom. In this system, schoolchildren can carry out collaborative experiments under ideal conditions or in hard-to-reach scenes. The system's contributions are: (1) By manually adding obstacles, it compensates for the limitation of most VR systems, which can only delimit an area but cannot identify obstacles. (2) It cleverly reuses tables, letting them serve for collision avoidance, as workbenches, and as joystick resting places. (3) It expands the usable area of VR in complex environments.

The Effect of Camera Height on The User Experience of Mid-air 360° VR Videos

Poster

Booth: E23

YanXiang Zhang: University of Science and Technology of China; YingNa Wang: University of Science and Technology of China; BEIDOLLAHKHANI AZADEH: University of Science and Technology of China; Zheng Xi: University of Science and Technology of China

Teaser Video: Watch Now

Mid-air 360° videos are shot by placing the camera on a drone or helicopter. However, how the camera height of mid-air 360° videos affects user experience is unclear. This study explores whether the camera's height affects users' immersion, presence, and realism. Results suggest that as camera height increases, immersion decreases for acrophobic people, while for others it first drops and then rises thanks to the broad view and beautiful scenery. Greater camera height brings higher presence but worse realism, especially in distant details. Our work contributes to better understanding and design of mid-air 360° video experiences.

Co-assemble: A collaborative AR cross-device teaching system for assembly practice courses

Poster

Booth: E24

YanXiang Zhang: University of Science and Technology of China; JiaQi Cheng: University of Science and Technology of China; JiaYu Wang: University of Science and Technology of China; Lei Zhao: University of Science and Technology of China

Teaser Video: Watch Now

Assembly training in engineering drawing courses mainly relies on physical models, which reveals many limitations. We use augmented reality and Azure Spatial Anchors to design "Co-assemble," a multi-user, cross-device collaborative system for assembly practice on mobile devices. The system offers three modes: single, collaboration, and class. It helps students understand the models' structure and assembly activities, and helps teachers easily teach and monitor classes. We also present and discuss the results of a preliminary user study evaluating the system.

An Enhanced Photorealistic Immersive System using Augmented Situated Visualization within Virtual Reality

Poster

Booth: F41

Maria Insa-Iglesias: Glasgow Caledonian University; Mark David Jenkins: Glasgow Caledonian University; Gordon Morison: Glasgow Caledonian University

Teaser Video: Watch Now

This work presents an Enhanced Photorealistic Immersive system which allows image data and extracted features from a real-world location to be captured and modelled in a Virtual Reality environment, combined with Augmented Situated Visualizations overlaid and registered in the virtual environment. Combining these technologies with techniques from Data Science and Artificial Intelligence allows the creation of a setting where remote locations can be modelled and interacted with from anywhere in the world. This system is adaptable to a wide range of use cases; here, a use case focused on structural examination of railway tunnels, along with a pilot study, demonstrates the usefulness of this system.

Matching 2D Image Patches and 3D Point Cloud Volumes by Learning Local Cross-domain Feature Descriptors

Poster

Booth: F42

Weiquan Liu: Xiamen University; Baiqi Lai: Xiamen University; Cheng Wang: Xiamen University; Xuesheng Bian: Xiamen University; Chenglu Wen: Xiamen University; Ming Cheng: Xiamen University; Yu Zang: Xiamen University; Yan Xia: Technical University of Munich; Jonathan Li: University of Waterloo

Teaser Video: Watch Now

Establishing the matching relationship between 2D images and 3D point clouds is a feasible way to establish the spatial relationship between 2D space and 3D space. In this paper, we propose a novel network, 2D3D-GAN-Net, to learn local invariant cross-domain feature descriptors of 2D image patches and 3D point cloud volumes. The learned descriptors are then used for matching 2D images and 3D point clouds. Experiments show that the local cross-domain feature descriptors learned by 2D3D-GAN-Net are robust and can be used for cross-dimensional retrieval on 2D image patches and 3D point cloud volumes.

A Novel Redirected Walking Algorithm for VR Navigation in Small Tracking Area

Poster

Booth: F43

Meng Qi: Shandong Normal University; Yunqiu Liu: Shandong Normal University; JIA CUI: Shandong Normal University

Teaser Video: Watch Now

We propose a novel steering algorithm for the redirected walking (RDW) technique that directs users away from the boundary of the tracking area. During navigation, we interactively and imperceptibly rotate the VE to make the user walk along arcs while believing they are walking straight. When the user approaches the boundary, we hierarchically adjust the redirection gains to turn the user away and avoid a reset procedure. A live-user study indicates that our algorithm can effectively speed up and smooth navigation, reduce collisions and perceptual distortion, and shows potential for directing multiple users simultaneously.
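
For readers unfamiliar with RDW, the per-step scene rotation behind such steering can be sketched as follows; the boundary multiplier stands in for the paper's hierarchical gain adjustment and is purely illustrative.

    import math

    def curvature_rotation(step_length_m, radius_m, near_boundary=False):
        """Scene yaw (radians) injected per step so a straight walk maps to an arc."""
        base = step_length_m / radius_m                 # arc angle covered this step
        return base * (2.0 if near_boundary else 1.0)   # steer harder near the edge

    yaw = curvature_rotation(step_length_m=0.7, radius_m=7.5)
    print(math.degrees(yaw))  # ~5.3 degrees of scene rotation for this step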

Subtle Gaze Guidance for 360° Content by Gradual Brightness Modulation and Termination of Modulation by Gaze Approaching

Poster

Booth: F44

Masatoshi Yokomi: Nara Institute of Science and Technology; Naoya Isoyama: Nara Institute of Science and Technology; Nobuchika Sakata: NAIST; Kiyoshi Kiyokawa: Nara Institute of Science and Technology

Teaser Video: Watch Now

In VR, users do not always look at the specific content that creators want them to focus on. For creators to provide users with the immersive experience they intend, it is necessary to naturally guide the user's gaze to relevant spots in the virtual space. In this paper, we propose a subtle gaze guidance method for 360° content combining two techniques: gradual brightness modulation, and termination of the modulation as the gaze approaches the guidance area. The experimental results show that our method contributes to a significantly more natural and less disturbing viewing experience while maintaining relatively high guidance performance.

Text Selection in AR-HMD Using a Smartphone as an Input Device

Poster

Booth: F38

Rajkumar Darbar: INRIA Bordeaux; Joan Odicio-Vilchez: INRIA; Thibault Lainé: Asobo Studio; Arnaud Prouzeau: Monash University; Martin HACHET: Inria

Text selection is a common task when reading a PDF file or browsing the web. Efficient text selection techniques exist on desktops and touch devices but are still under-explored for Augmented Reality Head-Mounted Displays (AR-HMDs). Text selection in AR commonly relies on hand tracking, voice commands, and eye/head-gaze, which are cumbersome and lack precision. In this poster, we explore the use of a smartphone as an input device to support text selection in AR-HMDs because of its availability, familiarity, and social acceptability. As an initial attempt, we propose four eyes-free, uni-manual text selection techniques for AR-HMDs, all using a smartphone: continuous touch, discrete touch, spatial movement, and raycasting.

VXSlate: Combining Head Movement and Mobile Touch for Large Virtual Display Interaction

Poster

Booth: F37

Khanh-Duy Le: ABB Corporate Research; Tanh Quang Tran: University of Otago; Karol Chlasta: Polish-Japanese Academy of Information Technology; Krzysztof Krejtz: SWPS University of Social Sciences and Humanities; Morten Fjeld: Chalmers University of Technology; Andreas Kunz: ETH Zurich

Teaser Video: Watch Now

Virtual reality (VR) can open opportunities for users to accomplish complex tasks on large virtual displays using compact setups. However, interacting with large virtual displays using existing interaction techniques might cause fatigue, especially for precise manipulations, due to the lack of physical surfaces. We designed VXSlate, an interaction technique that uses a large virtual display as an expansion of a tablet. VXSlate combines a user's head movements, as tracked by the VR headset, with touch interaction on the tablet. The head movements position virtual representations of both the tablet and the user's hand on the large virtual display. The user's multi-touch interactions then perform finely-tuned content manipulations.

2-Thumbs Typing: A Novel Bimanual Text Entry Method in Virtual Reality Environments

Poster

Booth: F36

Zigang Zhang: College of Software; Minghui Sun: Jilin University; BoYu Gao: Jinan University; Limin Wang: Jilin University

Teaser Video: Watch Now

We propose a new technique named 2-Thumbs Typing (2TT) that enables text entry with the touchpads of HTC VIVE controllers using both thumbs. 2TT works similarly to bimanual handwriting input but uses newly designed uni-stroke gestures that consider only stroke direction. We first designed a set of gestures and refined them into a final design through a preliminary study assessing memorability, performance efficiency, and ease of use. Initial results show that 2TT is easy and comfortable to use, requires no additional equipment, and supports eyes-free entry. 2TT can reach 8.5 words per minute with extensive training.

TeleGate: Immersive Multi-User Collaboration for Mixed Reality 360° Video

Poster

Booth: F35

Hyejin Kim: Victoria University of Wellington; Jacob Young: Victoria University of Wellington; Daniel Medeiros: University of Glasgow; Stephen Thompson: Victoria University of Wellington; Taehyun James Rhee: Victoria University of Wellington

Teaser Video: Watch Now

When collaborating on virtual content within 360° mixed reality environments, it is often desirable for collaborators to fully immerse themselves within the task space, usually by means of a head-mounted display. However, these displays socially isolate co-located collaborators, removing the ability to communicate through important gestural, facial, and body language cues. We present TeleGate, a system that instead utilises a shared immersive display to allow an arbitrary number of users to collaborate within remote environments, keeping collaborators visible to each other while preserving immersive and interactive collaboration.

Evaluation of Curved Raycasting-based Interactive Surfaces in Virtual Environments

Poster

Booth: F31

Tomomi Takashina: Nikon Corporation; Mitsuru Ito: Nikon Corporation; Hitoshi Nagaura: Nikon Systems Inc.; Eisuke Wakabayashi: Nikon Systems Inc.

Teaser Video: Watch Now

As 3D user interfaces become more popular, quick and reliable aerial selection and manipulation are desired. We evaluated a virtual curved interactive surface with controllable curvature based on raycasting. To investigate the users’ operation ability for different curved conditions, we experimented with multiple surface curvature radii, including completely flat conditions. The experimental results showed that varying the curvature of the display improved the pointing accuracy by 28% and the speed by 15% over the flat surface in the most effective cases. These findings can be applied to curved interactive surfaces with mid-air pointing for 2D-style applications.

Gaze-Pinch Menu: Performing Multiple Interactions Concurrently in Mixed Reality

Poster

Booth: F32

Yaguang Lu: Beihang University; Xukun Shen: Beihang University; Huiyan Feng: Beihang University; Pengshuai Duan: Beihang University; Shijin Zhang: Beihang University; Yong Hu: Beihang University

Teaser Video: Watch Now

Performing interactions using gaze and pinch has been shown to be an efficient interactive method in Mixed Reality, as such techniques provide users with concise and natural experiences. However, executing a task through a sequence of individual interactions is inefficient in some application scenarios. In this paper, we propose the Gaze-Pinch Menu, whose core concept is to reduce unnecessary operations by combining several interactions. Using this technique, users can perform multiple interactions on a selected object concurrently, without changing gestures. The user study results show that our Gaze-Pinch Menu can effectively improve operational efficiency.

MagicCube: A One-Handed Interaction Approach in 3D Environment on Smartphones

Poster

Booth: F33

Mengyuan Wang: Beihang University; Yong Hu: Beihang University; Chuchen Li: Beihang University; Xukun Shen: Beihang University

Teaser Video: Watch Now

When users have only one free hand to operate a smartphone, interaction with a 3D virtual environment (VE) is restricted. We propose MagicCube, a cube widget with 5 DOF that integrates navigation and selection in a 3D VE, to address this problem. Three sets of opposing faces are assigned as feature faces for translation, rotation, and selection. All operations can be performed by dragging the corresponding face. Further, by rotating the cube, users can choose the displayed faces and adjust their relative position according to their preference. A comparative analysis showed that MagicCube effectively reduces the finger's visual occlusion and improves the accuracy and stability of interaction operations.

A Preliminary Investigation of Avatar Use in Video-Conferencing

Poster

Booth: F34

Darragh Higgins: Trinity College; Rachel McDonnell: Trinity College Dublin

Teaser Video: Watch Now

Avatar use on video-conference platforms has found a dual purpose in recent times as a potential method for ensuring privacy and improving subjective engagement with remote meetings, provided one can also ensure minimal loss in the quality of social interaction and the sense of personal presence. This preliminary study focuses on interaction through virtual avatars in a video-conferencing context.

MagicChem: A Multi-modal Mixed Reality System Based on Needs Theory for Chemical Education

Poster

Booth: F28

Tianren Luo: Research Institute of Virtual Reality and Intelligent System; Ning Cai: Research Institute of VR&IS; Zheng Li: Research Institute of VR&IS; Jinda Miao: College of computer science; Zhipeng Pan: Research Institute of VR&IS; YuZe Shen: Research Institute of VR&IS; Zhigeng Pan: Hangzhou Normal University; Mingmin Zhang: College of computer science

Teaser Video: Watch Now

MR technology offers the possibility of overcoming the safety issues and space-time constraints of chemical experiments, while the theory of human needs provides a way to think about designing a comfortable and stimulating MR system with realistic visual presentation and interaction. This study draws on the theory of human needs to propose a new needs model for virtual experiments. Based on this needs model, we design and develop a comprehensive MR system called MagicChem to verify the model. A user study shows that MagicChem, which satisfies the needs model, outperforms MR experimental environments that only partially meet it. In addition, we explore the application of the needs model in a VR environment.

Inspiring healthy Food Choices in a Virtual Reality Supermarket by adding a tangible Dimension in the Form of an Augmented Virtuality Smartphone

Poster

Booth: F27

Christian Eichhorn: Technical University of Munich; Martin Lurz: Technical University of Munich; David A. Plecher: Technical University; Sandro Weber: Technical University of Munich; Monika Wintergerst: Technical University of Munich; Birgit Kaiser: Technical University of Munich; Sophie Laura Holzmann: Chair of Nutritional Medicine; Christina Holzapfel: Technical University of Munich; Hans Hauner: Technical University of Munich; Kurt M. Gedrich: Technical University of Munich; Georg Groh: Technical University of Munich; Markus Böhm: Technical University of Munich; Helmut Krcmar: Technical University of Munich; Gudrun Klinker: TUM

Teaser Video: Watch Now

We want to understand changing shopping behavior as influenced by health-targeted nutrition apps on mobile devices. To this end, we built a virtual replica smartphone with nutrition-related functionality inside a VR supermarket. This has been extended with an Augmented Virtuality (AV) feature that enables us to track the screen of a participant's own smartphone, allowing us to integrate real-world apps and let the user interact with them during the simulation.

Where are you? Influence of Redirected Walking on Audio-Visual Position Estimation of Co-Located Users

Poster

Booth: F26

Lucie Kruse: Universität Hamburg; Eike Langbehn: University of Hamburg; Frank Steinicke: Universität Hamburg

Teaser Video: Watch Now

Two-user redirected walking (RDW) promises new possibilities for collaborative experiences in virtual reality (VR), but it also introduces challenges such as the spatial de-synchronization that emerges when one or more users are redirected in different ways. We analyzed users' ability to estimate a co-located user's position and assessed their feeling of representation consistency in an interactive game. Results show that a user's estimate of the other's position improves as their partner's virtual and physical positions get closer together. Furthermore, larger redirection gains lead to a lower feeling of consistency, which also applies if only one person is redirected.

Effective close-range accuracy comparison of Microsoft HoloLens Generation one and two using Vuforia ImageTargets

Poster

Booth: F25

Jonas Simon Iven Rieder: TU Delft; Daniëlle Hilde van Tol: TU Delft; Doris Aschenbrenner: TU Delft

Teaser Video: Watch Now

This paper analyzes the effective accuracy of close-range operations for the first and second generations of the Microsoft HoloLens in combination with Vuforia Image Targets, using a black-box approach. The authors developed a method to benchmark and compare the applicability of these devices for tasks that demand higher accuracy, such as composite manufacturing or medical surgery assistance. Furthermore, the method can be applied to a broad variety of devices, establishing a platform for benchmarking and comparing these and future devices.

LighterBody: RNN based Anticipated Virtual Body Makes You Feel Lighter

Poster

Booth: F21

Tatsuya Kure: Sony CSL; Shunichi Kasahara: Sony CSL

Teaser Video: Watch Now

Virtual body representation has shown the potential to intervene in the sense of body. To investigate how a temporal shift of body representation affects the user's kinetic sensation, we developed a system that anticipates body movement with an RNN. We then conducted a user study to assess the effect of the anticipated body movement against the system baseline. Results revealed that the transition from the baseline to the anticipated body induced a feeling of lighter body weight, and the opposite transition induced a heavier feeling. Our work highlights the potential of interactively manipulating full-body kinetic sensation using virtual body representation.

Detecting the Point of Release of Virtual Projectiles in AR/VR

Poster

Booth: F22

Goksu Yamac: Trinity College Dublin; Niloy Mitra: University College London; Carol O'Sullivan: Trinity College Dublin

Teaser Video: Watch Now

Our aim is to detect the point of release of a thrown virtual projectile in VR/AR. We capture the full-body motion of 18 participants throwing virtual projectiles and extract motion features, such as position, velocity, rotation and rotational velocity for arm joints. Frame-level binary classifiers that estimate the point of release are trained and evaluated using a metric that prioritizes detection timing to obtain an importance ranking of joints and motion features. We find that the wrist joint and the rotation motion feature are most accurate, which can guide the placement of simple motion tracking sensors for real-time throw detection.
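
A simplified stand-in for such a frame-level classifier, using logistic regression over per-frame wrist features (the joint and feature family the study found most informative); the toy data below merely makes the sketch runnable and is not the authors' dataset.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))     # per-frame wrist rotation + rotational velocity
    y = (X[:, 3] > 1.0).astype(int)    # toy "release" labels for illustration only

    clf = LogisticRegression().fit(X, y)

    def is_release_frame(wrist_features):
        """True if this frame is classified as the point of release."""
        return bool(clf.predict(np.asarray(wrist_features).reshape(1, -1))[0])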

Self-Avatars in Virtual Reality: A Study Protocol for Investigating the Impact of the Deliberateness of Choice and the Context-Match

Poster

Booth: F23

Andrea Bartl: University of Würzburg, Department of Computer Science, HCI Group; Sungchul Jung: University of Canterbury; Peter Kullmann: University of Würzburg; Stephan Wenninger: TU Dortmund University; Jascha Achenbach: Bielefeld University; Erik Wolf: University of Würzburg, Department of Computer Science, HCI Group; Christian Schell: Department of Computer Science, HCI Group; Robert W. Lindeman: University of Canterbury; Mario Botsch: TU Dortmund University; Marc Erich Latoschik: Department of Computer Science, HCI Group

Teaser Video: Watch Now

The illusion of virtual body ownership (VBO) plays a critical role in virtual reality (VR). VR applications provide a broad design space which includes contextual aspects of the virtual surroundings as well as user-driven deliberate choices of their appearance in VR potentially influencing VBO and other well-known effects of VR. We propose a protocol for an experiment to investigate the influence of deliberateness and context-match on VBO and presence. In a first study, we found significant interactions with the environment. Based on our results we derive recommendations for future experiments.

Investigation of Microcirculatory Effects of Experiencing Burning Hands in Augmented Reality

Poster

Booth: C41

Daniel Eckhoff: City University of Hong Kong; Cecilia Li: Hong Kong Polytechnic University; Gladys Cheing: Hong Kong Polytechnic University; Alvaro Cassinelli: City University, Hong Kong; Christian Sandor: City University of Hong Kong

Teaser Video: Watch Now

In this paper we report on an Augmented Reality (AR) experience that enables users to see and hear their own left hand burning while looking through a Video See-Through Head-Mounted Display. In a pilot study, we investigated whether this AR experience can lead to a change in Skin Blood Flow (SkBF) in the left thumb. Our results show that SkBF in the left thumb changed significantly after exposure to the virtual fire in AR. Additionally, we could classify three participants as responders, who experienced a change of blood flow similar to that under real thermal stimulation.

The Effect of the Virtual Object Size on Weight Perception Augmented with Pseudo-Haptic Feedback

Poster

Booth: C42

Jinwook Kim: KAIST; Jeongmi Lee: KAIST

Teaser Video: Watch Now

Providing realistic haptic feedback for virtual objects is critical for an immersive VR experience, and there have been many approaches that use proprioception mismatch to provide pseudo-haptic feedback. Most of them, however, are limited in that they do not consider differences in the visual features of objects when adjusting the level of pseudo-haptic feedback. Therefore, we conducted an experiment to examine how the threshold of simulated weight perception changes according to the size of a virtual object. The results showed that bigger objects require a stronger level of visual pseudo-haptic feedback for simulated weight perception.

Evaluating Presence in VR with Self-representing Auditory-vibrotactile Input

Poster

Booth: C43

Guanghan Zhao: Osaka University; Jason Orlosky: Osaka University; Yuki Uranishi: Osaka University

Teaser Video: Watch Now

In this paper, we present the results of an experiment testing the effects of various pairings of auditory feedback devices on immersion and emotion in Virtual Reality (VR). We investigate the effects of bone-conduction headphones, a chest-mounted vibration speaker, regular headphones, and combinations thereof, in combination with internal (self-representing) sounds and vibrations, in two different simulated scenarios. Results suggest that certain auditory-vibrotactile inputs can influence immersion in an intense virtual scene and evoke emotions in a relaxing virtual scene. In addition, self-representing sounds were observed to significantly weaken immersion in the relaxing virtual scene.

Design and Prototyping of Computational Sunglasses for Autism Spectrum Disorders

Poster

Booth: C44

Xiaodan Hu: Nara Institute of Science and Technology; Yan Zhang: Nara Institute of Science and Technology; Naoya Isoyama: Nara Institute of Science and Technology; Nobuchika Sakata: Nara Institute of Science and Technology; Kiyoshi Kiyokawa: Nara Institute of Science and Technology

Teaser Video: Watch Now

We designed computational sunglasses for individuals with autism spectrum disorders (ASDs) to alleviate the atypical visual perceptions caused by their unique light sensitivities. In the proposed system, a scene camera detects ambient illuminance, and an appropriate occlusion pattern is calculated in real time and displayed on a spatial light modulator (SLM) to modulate the contrast of the incoming scene. Based on a simple contrast-adjustment algorithm that avoids overexposure and underexposure, our bench-top prototype successfully demonstrates improved scene contrast.
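
A minimal sketch of the contrast-adjustment idea under our own assumptions (the target level and transmission bounds are invented): darken bright regions, pass dim ones, and clamp to avoid over- and underexposure.

    import numpy as np

    def occlusion_pattern(scene_luminance, target=0.5, t_min=0.2, t_max=1.0):
        """scene_luminance: 2D array in [0, 1]; returns per-pixel SLM transmission."""
        transmission = target / np.maximum(scene_luminance, 1e-6)  # aim at target level
        return np.clip(transmission, t_min, t_max)                 # avoid over/underexposure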

Viewpoint Planning of Projector Placement for Spatial Augmented Reality using Star-Kernel Decomposition

Poster

Booth: C38

Takefumi Hiraki: University of Tsukuba; Tomohiro Hayase: Fujitsu Laboratories Ltd.; Yuichi Ike: Fujitsu Laboratories Ltd.; Takashi Tsuboi: Musashino University; Michio Yoshiwaki: Osaka City University

To obtain a targeted projection appearance in a spatial augmented reality system, we need to solve the projector placement problem. However, there are only heuristic solutions for viewpoint planning, the fundamental problem underlying projector placement. In this paper, we propose a star-kernel decomposition algorithm that solves the viewpoint planning problem for an object composed of known polygons. The algorithm decomposes a polygon into subpolygons that have non-empty star kernels. We implemented a program to obtain the placement of multiple projectors and evaluated the applicability of the obtained placements.

Symmetrical Cognition Between Physical Humans and Virtual Agents

Poster

Booth: C37

Zhenliang Zhang: Tencent

Teaser Video: Watch Now

In this paper, we discuss the cognition problem from several perspectives: attention, perception, pattern recognition, and communication. We show how a symmetrical reality system can change the traditional attention mechanism, the perception process, recognition, and the communication between different agents. Finally, we introduce applications of symmetrical cognition to expand current cognition research and offer suggestions for studying the cognition problem.

Auto-generating Virtual Human Behavior by Understanding User Contexts

Poster

Booth: C36

Hanseob Kim: Korea University; Ghazanfar Ali: University of Science and Technology; Seungwon Kim: KIST; Gerard Jounghyun Kim: Korea University; Jae-In Hwang: Korea Institute of Science and Technology

Teaser Video: Watch Now

Virtual humans are most natural and effective when they can act out and animate verbal and gestural actions. One popular method to realize this is to infer the actions from predefined phrases. This research aims to provide a more flexible method that activates various behaviors directly from natural conversation. Our approach uses BERT as the backbone for natural language understanding and, on top of it, a jointly learned sentence classifier (SC) and entity classifier (EC). The SC classifies the input as conversation or action, and the EC extracts the entities for the action. A pilot study has shown promising results, with high perceived naturalness and positive experiences.
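
A hedged sketch of the described architecture: a shared BERT encoder with a sentence-level head on the [CLS] vector and a token-level entity head. The label counts and model name are assumptions; this is not the authors' released code.

    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer

    class JointSCEC(nn.Module):
        def __init__(self, n_intents=2, n_entity_tags=5, name="bert-base-uncased"):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(name)
            hidden = self.encoder.config.hidden_size
            self.sc_head = nn.Linear(hidden, n_intents)      # conversation vs. action
            self.ec_head = nn.Linear(hidden, n_entity_tags)  # per-token entity tags

        def forward(self, input_ids, attention_mask):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            cls_vec = out.last_hidden_state[:, 0]            # [CLS] sentence summary
            return self.sc_head(cls_vec), self.ec_head(out.last_hidden_state)

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    batch = tok(["wave your hand to greet the user"], return_tensors="pt")
    intent_logits, entity_logits = JointSCEC()(batch["input_ids"], batch["attention_mask"])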

Using Eye Gaze for Navigation: A User Study Inspired by Lucid Dreams

Poster

Booth: C35

Chaojing Li: Goldsmiths; Sicong Zheng: Cornell University; Xueni Pan: Goldsmiths

Teaser Video: Watch Now

We implemented a new navigation method that uses eye tracking to steer the direction of travel and compared it with steering by HMD head-ray. Our idea was inspired by the rapid eye movement (REM) often observed during lucid dreams. A within-subjects experiment was conducted in which participants experienced four different VR scenarios with both navigation methods. The results suggest that eye-gaze steering is viable in VR (similar travel distance, speed, and time), and that the angle between the eye-gaze ray and the head-ray is significantly larger in all four scenes with eye-tracking steering than with the HMD head-ray.
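
To make the two conditions concrete, here is a schematic NumPy sketch of gaze-directed steering and of the gaze-to-head-ray angle the study compares; the speed, frame time, and vector conventions are assumptions, not the authors' Unity implementation.

```python
import numpy as np

def steer(position, gaze_dir, speed, dt):
    """Advance the viewpoint along the gaze direction each frame;
    the same update with the head-forward vector gives the baseline."""
    g = gaze_dir / np.linalg.norm(gaze_dir)
    return position + speed * dt * g

def gaze_head_angle(gaze_dir, head_dir):
    """Angle (degrees) between the eye-gaze ray and the head ray --
    the divergence measure compared across conditions."""
    g = gaze_dir / np.linalg.norm(gaze_dir)
    h = head_dir / np.linalg.norm(head_dir)
    return np.degrees(np.arccos(np.clip(np.dot(g, h), -1.0, 1.0)))

pos = steer(np.zeros(3), np.array([0.2, 0.0, 1.0]), speed=1.5, dt=1/90)
print(pos, gaze_head_angle(np.array([0.2, 0, 1]), np.array([0, 0, 1])))
```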

Dynamic Projection Mapping with 3D Images Using Volumetric Display

Poster

Booth: C31

Masumi Kiyokawa: University; Naoki Hashimoto: The University of Electro-Communications

Teaser Video: Watch Now

Projection mapping has recently become popular enough to appear in many forms of entertainment, and dynamic projection mapping, which changes the appearance of moving and deforming objects, is an active research topic. Although it seamlessly connects the real and virtual worlds, advanced projection requires complex equipment surrounding the object, which is an unnecessary visual distraction. Furthermore, the projected images are occluded when the target object is grasped and manipulated by hand. In this study, we propose a novel dynamic projection mapping method that keeps the projection devices invisible, using 3D images from a volumetric display together with retro-transmissive optics.

Disocclusion-Reducing Geometry for Multiple RGB-D Video Streams

Poster

Booth: C32

Jaesuk Lee: Sogang University; Youngwook Kim: Sogang University; Jehyeong Yun: Sogang University; Joungil Yun: ETRI (Electronics and Telecommunications Research Institute); Won-Sik Cheong: ETRI (Electronics and Telecommunications Research Institute); Insung Ihm: Sogang University

Teaser Video: Watch Now

Depth-image-based rendering (DIBR) is a key method for synthesizing virtual views from multiple RGB-D video streams. A challenging issue in this approach is the disocclusion that occurs as the virtual viewpoint moves away from a reference view. In this work, we present a technique for extracting 3D geometric data, called the disocclusion-reducing geometry, from the input video streams. This auxiliary information, represented as a 3D point cloud, can then be easily combined with a conventional DIBR pipeline to shrink the disoccluded regions as much as possible during view warping, ultimately reducing the visual artifacts introduced by the subsequent hole-filling process.
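
For readers unfamiliar with DIBR, the core warping step is standard: lift each reference pixel to 3D using its depth, then reproject into the virtual camera; virtual-view pixels that receive no source point become disocclusion holes, which the paper's auxiliary geometry aims to shrink. A minimal NumPy sketch follows; the pinhole intrinsics and the 5 cm baseline are made-up values.

```python
import numpy as np

def backproject(depth, K):
    """Lift every pixel of a depth map to a 3D point in the reference
    camera frame under a pinhole model (first step of DIBR warping)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def warp_to_view(points, K, R, t):
    """Project reference-frame 3D points into a virtual view (R, t);
    returns pixel coordinates and per-point depth."""
    cam = points @ R.T + t                    # into virtual camera frame
    uv = cam @ K.T                            # homogeneous projection
    return uv[:, :2] / uv[:, 2:3], cam[:, 2]

K = np.array([[525.0, 0, 160], [0, 525.0, 120], [0, 0, 1]])
depth = np.full((240, 320), 2.0)              # flat scene at 2 m
pts = backproject(depth, K)
uv, z = warp_to_view(pts, K, np.eye(3), np.array([0.05, 0, 0]))  # 5 cm shift
```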

Improving Weight Perception in Virtual Reality via a Brain-Computer Interface

Poster

Booth: C33

Jinyi Long: Jinan University; Xupeng Ye: College of Information Science and Technology, Jinan University

Teaser Video: Watch Now

The tracking offset mechanism is limited in how expressively it can convey weight, because the offset cannot be adjusted according to the user's actual perception of weight in real time. Here, a brain-computer interface (BCI) was used to detect the user's perception of the weight of virtual objects in real time and to adjust the tracking offset in a closed loop. We investigated the effect of this BCI mechanism in a ball-lifting game and compared it with other methods. The BCI mechanism not only matches the effect of the plain tracking offset mechanism but also significantly improves consistency.
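
The underlying idea is a closed control loop: a tracking gain below 1 makes the virtual hand lag the real hand, which users read as heaviness, and the decoded percept nudges that gain over time. A schematic sketch follows; the gain bounds, learning rate, and the stand-in for the BCI output are all assumptions, and the EEG decoding itself is outside the sketch.

```python
def display_hand_height(real_lift, gain):
    """Tracking-offset illusion: gain < 1 makes the virtual hand rise
    more slowly than the real hand, which is perceived as heaviness."""
    return real_lift * gain

def update_gain(gain, felt_heaviness, target=0.5, lr=0.05):
    """Closed-loop adjustment. `felt_heaviness` in [0, 1] stands in
    for a BCI-decoded weight percept: if the object feels lighter
    than intended, lower the gain (more lag = heavier percept)."""
    gain += lr * (felt_heaviness - target)
    return min(max(gain, 0.3), 1.0)

gain = 1.0
for felt in [0.2, 0.25, 0.3]:   # object repeatedly feels too light
    gain = update_gain(gain, felt)
print(gain)                      # gain drops below 1: virtual hand lags
```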

Virtual Walking Generator from Omnidirectional Video with Ground-dependent Foot Vibrations

Poster

Booth: C34

Junya Nakamura: Toyohashi University of Technology; Yusuke Matsuda: Toyohashi University of Technology; Tomohiro Amemiya: The University of Tokyo; Yasushi Ikei: Tokyo Metropolitan University; Michiteru Kitazaki: Toyohashi University of Technology

Teaser Video: Watch Now

A virtual walking sensation can be experienced by combining optic flow with rhythmic foot vibrations. We hypothesized that this sensation would be enhanced by presenting foot vibrations matched to the ground in the video scene. Camera transitions were estimated from omnidirectional movies, footstep patterns were generated, and vibrations matched to the ground in the scene were presented to users. We evaluated the system while varying the congruency of the vibrations with the grounds, and found that ground-matched vibrations enhanced the virtual walking sensation, although the effect depended on the ground type. Such an enhanced virtual walking sensation could improve virtual travel experiences.
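
One way to picture the pipeline: estimate per-frame camera speed from the video, accumulate travelled distance into alternating footstep onsets, and choose a vibration waveform per ground type. The sketch below is a guess at that flow; the stride length, frame rate, and waveform parameters are invented for illustration.

```python
import numpy as np

def footstep_times(speeds, dt, stride=0.7):
    """Derive alternating left/right footstep onsets from per-frame
    camera speed (m/s) estimated from the omnidirectional video.
    `stride` (metres per step) is an assumed constant; the paper's
    gait model may differ. Returns (time, 'L' or 'R') pairs."""
    steps, dist, foot = [], 0.0, 'L'
    for i, v in enumerate(speeds):
        dist += v * dt
        if dist >= stride:          # one step length travelled
            steps.append((i * dt, foot))
            foot = 'R' if foot == 'L' else 'L'
            dist -= stride
    return steps

# Ground-dependent waveforms: a crisp burst for pavement, a longer
# low-frequency rumble for gravel (illustrative parameters only).
GROUND_VIBES = {"pavement": dict(freq=250, dur=0.05),
                "gravel":   dict(freq=80,  dur=0.12)}

speeds = np.full(600, 1.3)          # 10 s of walking at 1.3 m/s, 60 fps
print(footstep_times(speeds, dt=1/60)[:4])
```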

User study of an AR reading aid system to promote deep reading

Poster

Booth: D41

Xiaojuan Li: Beijing Institute of Technology; Yu Han: Beijing Institute of Technology; Yuhui Wu: Beijing Institute of Technology; Kang Yue: Beijing Institute of Technology; Yue Liu: Beijing Institute of Technology

Teaser Video: Watch Now

We propose a reading aid system that combines an AR HMD with occlusion technology to cultivate students' intensive reading habits and promote deep reading. A user study was conducted to explore differences in reading comprehension and subjective evaluation between AR and VR modes. The analysis shows statistically significant differences between the two modes in the realism factors of the subjective evaluation. Participants confirmed the usefulness of the AR system and preferred the setting of reading actual paper. These results indicate the potential of an AR reading aid system as an educational tool for promoting deep reading.

Communications in Virtual Environment Improve Interpersonal Impression

Poster

Booth: D42

Yuki Kato: Toyohashi University of Technology; Maki Sugimoto: Keio University; Masahiko Inami: University of Tokyo; Michiteru Kitazaki: Toyohashi University of Technology

Teaser Video: Watch Now

Pseudo physical touch is used for communication in virtual environments such as VRChat. We aimed to test whether pseudo-touch communication affects social impressions in a virtual environment. Nineteen participants took part in a controlled experiment with a partner (an experimenter) under three communication conditions: no touch, pseudo touch, and actual touch. Subjective ratings of the partner's attractiveness and of the ease of communication increased in all conditions, suggesting that communication in virtual environments improves interpersonal attraction and communicability with or without physical or pseudo touch.

Integration of Concept Maps into the Mixed Reality Learning Space: Quasi-Experimental Design and Preliminary Results

Poster

Booth: D44

Yu Liu: Beijing Institute of Technology; Yue Liu: Beijing Institute of Technology; Kang Yue: Beijing Institute of Technology

Teaser Video: Watch Now

In the absence of appropriate instructional scaffolds, it is difficult to establish the relationships among the complex concepts learned in virtual learning activities, leading to student confusion and frustration. To address this issue, we propose MMRCM, a mixed reality (MR) friction experiment system integrated with interactive hierarchical concept maps, as a scaffolding tool to guide students in learning friction concepts. A quasi-experiment assessing the effectiveness of MMRCM showed that students' academic performance improved significantly. Interviews with students and teachers also showed that the MMRCM system helps learners navigate educational concepts and materials efficiently.

A Low-cost Arm Based Motion Restriction Haptics for VR

Poster

Booth: D38

Lokesh Kumar V M: IIT Bombay

Teaser Video: Watch Now

Virtual reality requires high-quality haptics to achieve a feeling of realism. In this paper, we present a low-cost device that provides motion-restriction haptics. The device is a wearable, arm-based flexible exoskeleton made of cable (zip) ties, which are actuated based on the VR interaction. It restricts arm motion beyond a certain point using the cable ties' tensile strength, analogous to how our arm works.

Immersive Pedigree Graph Visualisations

Poster

Booth: D37

Septian Razi: Australian National University; Henry Gardner: The Australian National University; Matt Adcock: CSIRO

Teaser Video: Watch Now

We describe a study of six visualisation methods for hierarchical pedigree data in virtual reality, modelling six different pedigree graph layouts: planar, cylinder, floor, sphere, cone (force-directed), and vase. Measurements of task accuracy and task completion time showed a statistically significant pairwise difference in two task completion times between the sphere (better) and floor (worse) conditions. Likert ratings of user sentiment showed a statistically significant main effect of graph condition on ratings of the "understandability of the data", with the sphere and vase layouts rated generally higher and the floor layout generally lower.
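
To make one of these layouts concrete, here is a small sketch that places each pedigree generation on its own ring of latitude of a sphere, with individuals spread in longitude. This is a plausible reconstruction of a "sphere" layout, not the authors' parameterisation.

```python
import numpy as np

def sphere_layout(generations, radius=1.0):
    """Map a pedigree to sphere coordinates: generation index picks a
    latitude (root near the pole), position within a generation picks
    a longitude. Returns {node: (x, y, z)}."""
    coords, n_gen = {}, len(generations)
    for g, members in enumerate(generations):
        phi = np.pi * (g + 1) / (n_gen + 1)        # latitude angle
        for k, node in enumerate(members):
            theta = 2 * np.pi * k / max(len(members), 1)
            coords[node] = (radius * np.sin(phi) * np.cos(theta),
                            radius * np.sin(phi) * np.sin(theta),
                            radius * np.cos(phi))
    return coords

# Three generations of a small pedigree, root first.
gens = [["root"], ["p1", "p2"], ["gp1", "gp2", "gp3", "gp4"]]
print(sphere_layout(gens)["root"])
```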

A Telepresence System using Toy Robots the Users can Assemble and Manipulate with Finger Plays and Hand Shadow

Poster

Booth: D36

Amato Tsuji: Seibii inc.; Keita Ushida: Kogakuin University

Teaser Video: Watch Now

The authors report a telepresence system based on toy robots that users can easily assemble and set up for telepresence. The robots are manipulated with finger-play and hand-shadow gestures, which require no instruction since users already know them from experience. In our experiment, the system was found to be easy to use, and users felt familiar with the robot and did not feel nervous.

Conference Sponsors

Diamond

Virbela Logo

Gold

Instituto Superior Tecnico
immersive Learning Research Network

Silver

Qualcomm Logo

Bronze

Vicon Logo
Hitlab logo
Microsoft Logo
Appen Logo
Facebook Reality Labs Logo
XR Bootcamp Logo

Supporter

GPCG Logo
Inesc-id Logo
NVIDIA Logo

Doctoral Consortium Sponsors

NSF Logo
Fakespace Logo

Conference Partner

CIO Applications Europe Website


Code of Conduct

© IEEEVR Conference