Papers
Monday, March 14, NZDT, UTC+13 | Time
---|---
Displays | 12:00 - 13:00
Security | 12:00 - 13:00
Emotion and Cognition | 12:00 - 13:00
3DUI | 14:00 - 15:00
Locomotion (Americas) | 14:00 - 15:00
Immersive Visualization and Virtual Production | 14:00 - 15:00
Multimodal VR | 16:30 - 17:30
Tuesday, March 15, NZDT, UTC+13 | Time
---|---
Embodiment | 8:30 - 9:30
Presence | 8:30 - 9:30
Interaction Design | 8:30 - 9:30
Collaboration | 11:00 - 12:00
Perception in AR | 11:00 - 12:00
Virtual Humans and Agents | 11:00 - 12:00
Augmented Reality | 13:00 - 14:00
Locomotion (Asia-Pacific) | 13:00 - 14:00
Perception | 13:00 - 14:00
Machine Learning | 18:00 - 19:00
Medical and Health Care | 18:00 - 19:00
Negative Effects | 18:00 - 19:00
Wednesday, March 16, NZDT, UTC+13 | Time
---|---
Audio in VR | 8:30 - 9:30
Haptics | 8:30 - 9:30
Locomotion (Europe) | 8:30 - 9:30
Rendering | 13:00 - 14:00
Computer Vision | 13:00 - 14:00
Inclusive VR | 13:00 - 14:00
Advanced UI | 14:00 - 15:00
Session: Displays
Monday, March 14, 12:00, NZDT UTC+13
Session Chair: Yuta Itoh
Discord URL: https://discord.com/channels/842181663248482334/951017483446411285
Content Presentation on 3D Augmented Reality Windshield Displays in the Context of Automated Driving
Conference
Andreas Riegler, Andreas Riener, Clemens Holzmann
Increasing vehicle automation presents challenges: as drivers of automated vehicles become more disengaged from the primary driving task, there will still be activities that require interfaces for vehicle-passenger interaction. Augmented reality windshield displays provide large content areas supporting drivers in both driving and non-driving related tasks. Participants of a user study were presented with two modes of content presentation (multiple content-specific windows vs. one main window) in a virtual reality driving simulator. Using one main content window resulted in better task performance and lower take-over times; however, subjective user experience was rated higher for the multi-window user interface.
Sparse Nanophotonic Phased Arrays for Energy-Efficient Holographic Displays
Conference
Susmija Jabbireddy, Yang Zhang, Martin Peckerar, Mario Dagenais, Amitabh Varshney
The Nanophotonic Phased Array (NPA) is an emerging holographic display technology. With chip-scale size, high refresh rates, and integrated light sources, a large-scale NPA can enable high-resolution real-time holography. One of the critical challenges is the high electrical power consumption required to modulate the amplitude and phase of the pixels. We propose a simple method that outputs a sparse NPA configuration to generate the desired image. We show, through computational simulations, that a perceptually acceptable holographic image can be generated using as few as 10% of the pixels of a dense 2D array. Our study can advance research on sparse NPAs for holography.
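The abstract's core idea, driving only a small fraction of the array's pixels, can be illustrated with a toy far-field simulation. The sketch below is our own illustration under a Fraunhofer assumption, not the authors' algorithm: it back-propagates a target image to the array plane with an inverse FFT, keeps only the strongest 10% of emitters, and simulates the resulting far-field image.

```python
# A minimal sketch (not the paper's method): generate a dense hologram by
# back-propagating a target image with an inverse FFT, then keep only the 10%
# of emitters with the largest amplitudes and simulate the resulting far field.
import numpy as np

def sparse_hologram(target, keep_fraction=0.1):
    # Back-propagate the target to the array plane (Fraunhofer approximation).
    field = np.fft.ifft2(np.fft.ifftshift(target.astype(complex)))
    # Rank emitters by amplitude and zero out all but the strongest ones.
    amp = np.abs(field)
    threshold = np.quantile(amp, 1.0 - keep_fraction)
    return np.where(amp >= threshold, field, 0.0)

def simulate_far_field(array_field):
    # Far-field intensity produced by the (sparse) phased array.
    return np.abs(np.fft.fftshift(np.fft.fft2(array_field))) ** 2

target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0          # a simple square test image
sparse = sparse_hologram(target)
image = simulate_far_field(sparse)
print("active emitters:", np.count_nonzero(sparse) / sparse.size)
```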
Metameric Varifocal Holograms
Conference
David Robert Walton, Koray Kavakli, Rafael Kuffner dos Anjos, David Swapp, Tim Weyrich, Hakan Urey, Anthony Steed, Tobias Ritschel, Kaan Akşit
Computer-Generated Holography (CGH) is a promising display technology, but holographic images suffer from noise and artefacts. We propose a new method using gaze-contingency and perceptual graphics to help address these. First, our method infers the user's focal depth and generates images at their focus plane. Second, it displays metamers: in the user's peripheral vision, we only match the local statistics of the target. We use a novel metameric loss function with an accurate display model in a gradient descent solver. Our method improves foveal quality while avoiding perceptible artefacts in the periphery. We demonstrate our method on a real prototype holographic display.
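The split between exact foveal matching and statistical peripheral matching can be sketched directly as a loss function. The PyTorch snippet below is a simplified stand-in, not the paper's loss or display model: it penalizes per-pixel error in a foveal mask but only pooled mean/variance differences in the periphery, inside an ordinary gradient-descent loop.

```python
# A simplified sketch of a metameric-style loss (the paper's display model and
# exact loss are omitted): match pixels exactly in the fovea, but only local
# mean/variance statistics in the periphery, inside a gradient-descent loop.
import torch
import torch.nn.functional as F

def metameric_loss(estimate, target, fovea_mask, pool=16):
    # Foveal region: ordinary per-pixel error.
    foveal = ((estimate - target) ** 2 * fovea_mask).mean()
    # Periphery: compare pooled local statistics instead of pixels.
    periph = 1.0 - fovea_mask
    def stats(x):
        m = F.avg_pool2d(x, pool)
        v = F.avg_pool2d(x ** 2, pool) - m ** 2
        return m, v
    (me, ve), (mt, vt) = stats(estimate * periph), stats(target * periph)
    return foveal + ((me - mt) ** 2).mean() + ((ve - vt) ** 2).mean()

target = torch.rand(1, 1, 128, 128)
fovea = torch.zeros_like(target)
fovea[..., 32:96, 32:96] = 1.0        # hypothetical gaze-centered region
estimate = torch.rand_like(target, requires_grad=True)
opt = torch.optim.Adam([estimate], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = metameric_loss(estimate, target, fovea)
    loss.backward()
    opt.step()
```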
Design of a Pupil-Matched Occlusion-Capable Optical See-Through Wearable Display
Invited Journal
Austin Wilson, Hong Hua
URL: https://doi.org/10.1109/TVCG.2021.3076069
The state-of-the-art optical see-through head-mounted displays (OST-HMD) for augmented reality applications lack the ability to render correct light interaction behavior between digital and physical objects, known as mutual occlusion capability. This paper presents a novel optical architecture for enabling a compact, high performance, occlusion-capable optical see-through head-mounted display (OCOST-HMD) with a correct, pupil-matched viewing perspective. The proposed design utilizes a single-layer, double-pass architecture, offering a compact OCOST-HMD solution that is capable of rendering per-pixel mutual occlusion, correctly pupil-matched viewing between virtual and real views, and a very wide see-through field of view (FOV). Based on this architecture, we demonstrate a design embodiment and a compact prototype implementation. The prototype offers a virtual display with an FOV of 34° by 22°, an angular resolution of 1.06 arc minutes per pixel, and an average image contrast greater than 40% at the Nyquist frequency of 53 cycles/mm. Further, the prototype system affords a wide see-through FOV of 90° by 50°, within which about 40° diagonally is occlusion-enabled, along with an angular resolution of 1.0 arc minutes, comparable to 20/20 vision, and a dynamic range greater than 100:1. Lastly, we conducted a quantitative color study that compares the effects of occlusion between a conventional HMD system and our OCOST-HMD system.
Video See-Through Mixed Reality with Focus Cues
Journal
Christoph Ebner, Shohei Mori, Peter Mohr, Yifan (Evan) Peng, Dieter Schmalstieg, Gordon Wetzstein, Denis Kalkofen
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150504
We introduce the first approach to video see-through mixed reality with support for focus cues. By combining the flexibility to adjust the focus distance found in varifocal designs with the robustness to eye-tracking error of multifocal designs, our novel display architecture delivers focus cues over large workspaces. In particular, we introduce gaze-contingent layered displays and mixed reality focal stacks, an efficient representation of mixed reality content that lends itself to fast processing for driving layered displays in real time. We evaluate this approach by building an end-to-end pipeline for capture, render, and display of focus cues in video see-through displays.
Session: Security
Monday, March 14, 12:00, NZDT UTC+13
Session Chair: Sabarish Babu
Discord URL: https://discord.com/channels/842181663248482334/951017519320293386
Virtual Reality Observations: Using Virtual Reality to Augment Lab-Based Shoulder Surfing Research
Conference
Florian Mathis, Joseph O'Hagan, Mohamed Khamis, Kami Vaniea
We exploit VR's unique characteristics and study the use of non-immersive/immersive VR recordings for real-world shoulder surfing research. We demonstrate the strengths of VR-based shoulder surfing research by exploring three different authentication scenarios: automated-teller-machine (ATM), smartphone PIN, and smartphone pattern authentication. Our results show that applying VR for shoulder surfing research is advantageous in many ways compared to currently used approaches (e.g., lab-based 2D video recordings). We discuss the strengths and weaknesses of using VR for shoulder surfing research and conclude with four recommendations to help researchers decide when (and when not) to employ VR for shoulder surfing research.
Can I Borrow Your ATM? Using Virtual Reality for (Simulated) In Situ Authentication Research
Conference
Florian Mathis, Kami Vaniea, Mohamed Khamis
In situ evaluations of novel authentication systems, where the system is evaluated in its intended usage context, are often infeasible due to ethical and legal constraints. In this work, we explore how VR can overcome the shortcomings of authentication studies conducted in the lab and contribute towards more realistic authentication research. Our findings highlight VR's great potential to circumvent potential restrictions researchers experience when evaluating authentication schemes. We provide researchers with a novel research approach to conduct (simulated) in situ authentication research and conclude with three key lessons to support researchers in deciding when to use VR for authentication research.
Combining Real-World Constraints on User Behavior with Deep Neural Networks for Virtual Reality (VR) Biometrics
Conference
Robert Miller, Natasha Kholgade Banerjee, Sean Banerjee
Deep networks demonstrate enormous potential for VR security using behavioral biometrics. However, existing datasets are small, making automated learning of real-world behavior features with deep networks challenging. We incorporate real-world constraints, such as spatial relationships between devices in the form of displacement vectors and trajectory smoothing, into the input data for deep networks performing behavior-based identification and authentication. We evaluate our approach against baseline methods that use raw data directly and that perform global normalization. Using displacement vectors, we show higher success than baseline methods in 36 out of 42 cases of varying user sets, VR systems, and sessions.
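The preprocessing the abstract describes can be sketched in a few lines. The snippet below is our reading of the idea, not the authors' released code: controller positions are re-expressed as displacement vectors relative to the headset, and trajectories are smoothed before being fed to a network.

```python
# Illustrative preprocessing (our interpretation of the abstract): express
# controller positions as displacement vectors relative to the headset and
# smooth each trajectory with a moving average.
import numpy as np

def to_displacement_features(headset, left, right, window=5):
    # headset/left/right: (T, 3) position trajectories.
    disp = np.concatenate([left - headset, right - headset], axis=1)  # (T, 6)
    kernel = np.ones(window) / window
    # Smooth each coordinate with a centered moving average.
    return np.stack(
        [np.convolve(disp[:, i], kernel, mode="same") for i in range(disp.shape[1])],
        axis=1,
    )

T = 120
rng = np.random.default_rng(0)
feats = to_displacement_features(rng.normal(size=(T, 3)),
                                 rng.normal(size=(T, 3)),
                                 rng.normal(size=(T, 3)))
print(feats.shape)  # (120, 6)
```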
HoloLogger: Keystroke Inference on Mixed Reality Head Mounted Displays
Conference
Shiqing Luo, Xinyu Hu, Zhisheng Yan
When using services in mixed reality (MR) such as online payment and social media, sensitive information must be typed in MR. Keystroke inference attacks have been conducted to hack such information. However, previous attacks require placing extra hardware near the user, which is easily noticeable in practice. In this paper, we expose a more dangerous malware-based attack, exploiting the vulnerability that no permission is required to access MR motion data. Extensive evaluations of our HoloLogger system demonstrate that the proposed attack is accurate and robust in various environments, such as different user positions and input categories.
Temporal Effects in Motion Behavior for Virtual Reality (VR) Biometrics
Conference
Robert Miller, Natasha Kholgade Banerjee, Sean Banerjee
We evaluate how deep learning approaches to security based on matching VR trajectories are influenced by natural behavioral variability over varying timescales. On short timescales of seconds to minutes, we observe no statistically significant relationship between the temporal placement of enrollment trajectories and matches to input trajectories. At medium timescales, we observe similar median accuracy for users with high and low enrollment/input separations of days to weeks. Over long timescales of 7 to 18 months, we obtain optimal performance for short and long enrollment/input separations by using training sets from users providing long-timescale data, as these sets encompass coarse and fine behavior changes.
A Keylogging Inference Attack on Air-Tapping Keyboards in Virtual Environments
Conference
Ülkü Meteriz-Yildiran, Necip Fazil Yildiran, Amro Awad, David Mohaisen
Enabling users to push the physical world's limits, AR/VR has opened a new chapter in perception. Novel immersive experiences have produced novel interaction methods, which come with unprecedented security and privacy risks. This paper presents a keylogging attack to infer inputs typed with in-air tapping keyboards. We exploit the observation that hands follow specific patterns when typing in-air and build a five-stage attack pipeline. Through various experiments, we show that our attack achieves 40% - 89% accuracy. Finally, we discuss countermeasures; the results presented provide a cautionary tale of the security and privacy risks of immersive mobile technology.
Session: Emotion and Cognition
Monday, March 14, 12:00, NZDT UTC+13
Session Chair: Anderson Maciel
Discord URL: https://discord.com/channels/842181663248482334/951017566447497258
Do You Notice Me? How Bystanders Affect the Cognitive Load in Virtual Reality
Conference
Maximilian Rettinger, Christoph Schmaderer, Gerhard Rigoll
In contrast to the real world, users are not able to perceive bystanders in virtual reality. This can cause users to feel discomfort at the thought of unintentionally touching or even bumping into a physical bystander while interacting with the virtual environment. Therefore, we investigate how a bystander affects a user's cognitive load. In a between-subjects lab study, three conditions were compared: 1) no bystander, 2) an invisible bystander, and 3) a visible bystander. The results of our study demonstrate that the cognitive load of a VR user is significantly increased by a bystander, and that a bystander represented as an avatar in the virtual environment increases the user's cognitive load more than an invisible bystander.
Empathic AuRea: Exploring the Effects of an Augmented Reality Cue for Emotional Sharing Across Three Face-to-Face Tasks
Conference
Andreia Valente, Daniel S. Lopes, Nuno Jardim Nunes, Augusto Esteves
The better a speaker can understand their listener's emotions, the better they can transmit information; and the better a listener can understand the speaker's emotions, the better they can apprehend that information. Previous emotional sharing works have managed to elicit emotional understanding between remote collaborators using bio-sensing, but how face-to-face communication can benefit from bio-feedback is still fairly unexplored. This paper introduces an AR communication cue driven by an emotion recognition neural network model and ECG data. A study in which pairs of participants engaged in three tasks found our system to positively affect performance and emotional understanding, but negatively affect memorization.
Supporting Jury Understanding of Expert Evidence in a Virtual Environment
Conference
Carolin Reichherzer, Andrew Cunningham, Jason Barr, Tracey Coleman, Kurt McManus, Dion Sheppard, Scott Coussens, Mark Kohler, Mark Billinghurst, Bruce H Thomas
This work investigates the use of Virtual Reality (VR) to present forensic evidence to the jury in a courtroom trial. We performed a between-participant study comparing comprehension of an expert statement in VR to the traditional courtroom presentation of still images. We measured understanding of the expert domain, mental effort and content recall and found that VR significantly improves the understanding of spatial information and knowledge acquisition. We also identify different patterns of user behaviour depending on the display method. We conclude with suggestions on how to best adapt evidence presentation to VR.
A Wheelchair Locomotion Interface in a VR Disability Simulation Reduces Implicit Bias
Invited Journal
Tanvir Irfan Chowdhury, John Quarles
URL: https://doi.org/10.1109/TVCG.2021.3099115
This research investigates how experiencing virtual embodiment in a wheelchair affects implicit bias towards people who use wheelchairs. We also investigate how receiving information from a virtual instructor who uses a wheelchair affects implicit bias towards people who use wheelchairs. Implicit biases are actions or judgments of people towards various concepts or stereotypes (e.g., races). We hypothesized that experiencing a Disability Simulation (DS) through an avatar in a wheelchair and receiving information from an instructor with a disability would have a significant effect on participants' ability to recall disability-related information and would reduce implicit biases towards people who use wheelchairs. To investigate this hypothesis, a 2x2 between-subjects user study was conducted in which participants experienced an immersive VR DS presenting information about Multiple Sclerosis (MS), with factors of instructor (i.e., instructor with a disability vs. instructor without a disability) and locomotion interface (i.e., without a disability -- locomotion through in-place walking; with a disability -- locomotion in a wheelchair). Participants took a disability-focused Implicit Association Test twice, once before and once after experiencing the DS. They also took a test of knowledge retention about MS. The primary result is that experiencing the DS through locomotion in a wheelchair was better for both the disability-related information recall task and for reducing implicit bias towards people who use wheelchairs.
Mood-Driven Colorization of Virtual Indoor Scenes
Journal
Michael S Solah, Haikun Huang, Jiachuan Sheng, Tian Feng, Marc Pomplun, Lap-Fai Yu
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150513
A challenging task in virtual scene design for Virtual Reality (VR) is evoking particular moods in viewers. The subjective nature of moods brings uncertainty to this purpose. We propose a novel approach for the automatic color adjustment of textures for objects in virtual indoor scenes, enabling them to match target moods. A dataset of 25,000 indoor environment images was used to train a classifier with features extracted via deep learning. We then use an optimization process that colorizes virtual scenes automatically according to the target mood. Our approach was tested on four indoor scenes in user studies with a VR headset.
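The classifier-guided optimization can be conveyed with a toy search loop. In the sketch below, the paper's deep mood classifier is replaced by a hypothetical stand-in score, and the optimizer is simple greedy hill climbing over per-object hue shifts; both are our illustrative choices, not the paper's.

```python
# A toy sketch of the optimization idea (the paper trains a deep mood
# classifier; `mood_score` here is a hypothetical stand-in): search over
# per-object hue shifts so the scene's colors better match a target mood.
import random
import colorsys

def shift_hue(rgb, dh):
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + dh) % 1.0, s, v)

def mood_score(colors, target="cozy"):
    # Placeholder: reward warm hues for a "cozy" target mood.
    warmth = sum(r - b for r, _, b in colors) / len(colors)
    return warmth if target == "cozy" else -warmth

def colorize(base_colors, target, iters=2000):
    best = list(base_colors)
    best_s = mood_score(best, target)
    for _ in range(iters):
        i = random.randrange(len(best))
        cand = list(best)
        cand[i] = shift_hue(cand[i], random.uniform(-0.1, 0.1))
        s = mood_score(cand, target)
        if s > best_s:                 # greedy hill climbing
            best, best_s = cand, s
    return best

sofa, wall, rug = (0.2, 0.4, 0.8), (0.9, 0.9, 0.9), (0.3, 0.7, 0.3)
print(colorize([sofa, wall, rug], "cozy")[0])
```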
Session: 3DUI
Monday, March 14, 14:00, NZDT UTC+13
Session Chair: Kiyoshi Kiyokawa
Discord URL: https://discord.com/channels/842181663248482334/951017600698171452
Bullet Comments for 360° Video
Conference
Yi-Jun Li, Jinchuan Shi, Fang-Lue Zhang, Miao Wang
Time-anchored on-screen comments, also known as bullet comments, are a popular feature of online video streaming. Bullet comments reflect audiences' feelings and opinions at specific video timings, which has been shown to benefit video content understanding and social connection. In this paper, we investigate for the first time the problem of bullet comment display and insertion for 360° video via head-mounted display and controller. We design four bullet comment display methods and propose two controller-based methods for bullet comment insertion. User study results reveal how the factors of display and insertion methods affect the 360° video experience.
An Evaluation of Virtual Reality Maintenance Training for Industrial Hydraulic Machines
Conference
Thuong Hoang, Stefan Greuter, Simeon Taylor
Virtual reality applications for industrial training have widespread benefits for simulating various scenarios and conditions. Through our collaboration with a leading industry partner, we designed and implemented a remote multi-user industrial maintenance training VR platform that applies a kinesthetic learning strategy using head-mounted displays. We present the evaluation of the platform with two diverse cohorts, novice users and industry contractors, in comparison to traditional training. The results show that VR training is engaging and effective in boosting trainees' confidence, especially for novice users, and we reflect on the differences between novice and industry trainees.
GazeDock: Gaze-Only Menu Selection in Virtual Reality using Auto-Triggering Peripheral Menu
Conference
Xin Yi, Yiqin Lu, Ziyin Cai, Zihan Wu, Yuntao Wang, Yuanchun Shi
In this paper, we propose GazeDock, a technique for enabling fast and robust gaze-based menu selection in VR. GazeDock features a view-fixed peripheral menu layout that automatically triggers menu appearance and selection when the user's gaze approaches and leaves the menu zone. By analyzing users' natural gaze movement patterns, we designed the menu UI personalization and optimized the selection detection algorithm of GazeDock. We also examined users' gaze selection precision for targets on the peripheral menu. In a VR navigation game containing both scene exploration and menu selection, GazeDock received higher user preference ratings than dwell-based and pursuit-based techniques.
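The appear-on-approach, select-on-leave behavior is essentially a small state machine. The sketch below is the trigger logic as we infer it from the abstract; the radii and menu geometry are hypothetical.

```python
# A minimal sketch of the inferred trigger logic (thresholds are hypothetical):
# the peripheral menu appears when gaze enters an outer zone, and an item is
# selected when gaze exits outward through it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeMenu:
    inner_radius: float = 0.35   # gaze eccentricity where the menu zone starts
    outer_radius: float = 0.45   # leaving beyond this commits a selection
    visible: bool = False
    hovered: Optional[int] = None

    def update(self, eccentricity: float, item_under_gaze: Optional[int]):
        selected = None
        if not self.visible and eccentricity >= self.inner_radius:
            self.visible = True            # gaze approaches: show menu
        elif self.visible:
            if item_under_gaze is not None:
                self.hovered = item_under_gaze
            if eccentricity >= self.outer_radius and self.hovered is not None:
                selected = self.hovered    # gaze leaves through an item: select
                self.visible, self.hovered = False, None
            elif eccentricity < self.inner_radius:
                self.visible, self.hovered = False, None   # gaze returns: dismiss
        return selected

menu = GazeMenu()
for ecc, item in [(0.2, None), (0.38, 1), (0.40, 1), (0.5, 1)]:
    sel = menu.update(ecc, item)
print("selected item:", sel)   # -> 1
```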
OctoPocus in VR: Using a Dynamic Guide for 3D Mid-Air Gestures in Virtual Reality
Invited Journal
Katherine Fennedy, Jeremy Hartmann, Quentin Roy, Simon Tangi Perrault, Daniel Vogel
URL: https://doi.org/10.1109/TVCG.2021.3101854
Bau and Mackay's OctoPocus dynamic guide helps novices learn, execute, and remember 2D surface gestures. We adapt OctoPocus to 3D mid-air gestures in Virtual Reality (VR) using an optimization-based recognizer, and by introducing an optional exploration mode to help visualize the spatial complexity of guides in a 3D gesture set. A replication of the original experiment protocol is used to compare OctoPocus in VR with a VR implementation of a crib-sheet. Results show that despite requiring 0.9s more reaction time than the crib-sheet, OctoPocus enables participants to execute gestures 1.8s faster with 13.8 percent more accuracy during training, while remembering a comparable number of gestures. Subjective ratings support these results: 75 percent of participants found OctoPocus easier to learn and 83 percent found it more accurate. We contribute an implementation and empirical evidence demonstrating that an adaptation of the OctoPocus guide to VR is feasible and beneficial.
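The essence of a dynamic guide is scoring a partial stroke against every gesture template and fading the guides that no longer fit. The sketch below is our stand-in for the paper's optimization-based recognizer, using a simple prefix distance.

```python
# A simplified sketch of the dynamic-guide idea (the paper uses an
# optimization-based recognizer; this prefix-distance scoring is our stand-in):
# as the stroke grows, score it against each template prefix and fade guides
# for poorly matching gestures.
import numpy as np

def prefix_distance(stroke, template):
    # Compare the stroke against the same-length prefix of the template.
    n = min(len(stroke), len(template))
    return np.linalg.norm(stroke[:n] - template[:n], axis=1).mean()

def guide_opacities(stroke, templates, falloff=2.0):
    d = np.array([prefix_distance(stroke, t) for t in templates])
    w = np.exp(-falloff * d)           # close matches stay opaque
    return w / w.max()

t_circle = np.stack([[np.cos(a), np.sin(a), 0.0]
                     for a in np.linspace(0, 2 * np.pi, 32)])
t_line = np.stack([[a, 0.0, 0.0] for a in np.linspace(0, 2, 32)])
stroke = t_circle[:10] + np.random.default_rng(0).normal(0, 0.05, (10, 3))
print(guide_opacities(stroke, [t_circle, t_line]))
```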
EHTask: Recognizing User Tasks from Eye and Head Movements in Immersive Virtual Reality
Invited Journal
Zhiming Hu, Andreas Bulling, Sheng Li, Guoping Wang
URL: https://doi.org/10.1109/TVCG.2021.3138902
Understanding human visual attention in immersive virtual reality (VR) is crucial for many important applications, including gaze prediction, gaze guidance, and gaze-contingent rendering. However, previous works on visual attention analysis typically only explored one specific VR task and paid less attention to the differences between different tasks. Moreover, existing task recognition methods typically focused on 2D viewing conditions and only explored the effectiveness of human eye movements. We first collect eye and head movements of 30 participants performing four tasks, i.e. Free viewing, Visual search, Saliency, and Track, in 15 360-degree VR videos. Using this dataset, we analyze the patterns of human eye and head movements and reveal significant differences across different tasks in terms of fixation duration, saccade amplitude, head rotation velocity, and eye-head coordination. We then propose EHTask -- a novel learning-based method that employs eye and head movements to recognize user tasks in VR. We show that our method significantly outperforms the state-of-the-art methods derived from 2D viewing conditions both on our dataset (accuracy of 84.4% vs. 62.8%) and on a real-world dataset (61.9% vs. 44.1%). As such, our work provides meaningful insights into human visual attention under different VR tasks and guides future work on recognizing user tasks in VR.
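The statistics the abstract highlights (fixation duration, saccade amplitude, head rotation velocity, eye-head coordination) can be turned into a small task-recognition baseline. The sketch below is illustrative only: the actual EHTask model is a deep network, whereas this feeds hand-crafted window statistics, computed here on synthetic data, to a generic classifier.

```python
# An illustrative baseline in the spirit of the paper's analysis (not the
# EHTask network): hand-crafted eye/head statistics per window, fed to a
# generic classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(gaze_vel, head_vel):
    # gaze_vel, head_vel: (T,) angular speeds in deg/s for one time window.
    fixations = gaze_vel < 30.0                      # simple velocity threshold
    saccades = gaze_vel >= 30.0
    return np.array([
        fixations.mean(),                            # proportion of fixation samples
        gaze_vel[saccades].mean() if saccades.any() else 0.0,
        head_vel.mean(),                             # head rotation velocity
        np.corrcoef(gaze_vel, head_vel)[0, 1],       # eye-head coordination proxy
    ])

rng = np.random.default_rng(0)
# Fake windows for four tasks (free viewing, search, saliency, track).
X = np.stack([window_features(rng.gamma(2, 10 + 5 * t, 300),
                              rng.gamma(2, 5 + 2 * t, 300))
              for t in range(4) for _ in range(50)])
y = np.repeat(np.arange(4), 50)
clf = RandomForestClassifier().fit(X, y)
print("train accuracy:", clf.score(X, y))
```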
Session: Locomotion (Americas)
Monday, March 14, 14:00, NZDT UTC+13
Session Chair: Eike Langbehn
Discord URL: https://discord.com/channels/842181663248482334/951017710773489674
Research Trends in Virtual Reality Locomotion Techniques
Conference
Esteban Segarra Martinez, Annie S. Wu, Ryan P. McMahan
Virtual reality (VR) researchers have had a long-standing interest in studying locomotion for developing new techniques, improving upon prior ones, and analyzing their effects on users. In this paper, we present a systematic review of locomotion techniques based on a well-established taxonomy, and we use k-means clustering to identify to what extent locomotion techniques have been explored. Our results indicate that selection-based, walking-based, and steering-based locomotion techniques have been moderately to highly explored while manipulation-based and automated locomotion techniques have been less explored. We also present results on what types of metrics have been used to evaluate locomotion techniques.
ENI: Quantifying Environment Compatibility for Natural Walking in Virtual Reality
Conference
Niall L. Williams, Aniket Bera, Dinesh Manocha
We present a metric to measure the similarity between physical and virtual environments for natural walking in VR. We use geometric techniques based on visibility polygons to compute the Environment Navigation Incompatibility (ENI) metric, which measures the complexity of VR locomotion. ENI is useful for highlighting regions of incompatibility and guiding the design of virtual environments to make them more compatible. User studies and simulations show that ENI identifies environments where users are able to walk larger distances before colliding with objects. ENI is the first general metric that automatically quantifies environment navigability for VR locomotion. Project page: gamma.umd.edu/eni
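A rough feel for the visibility-polygon idea can be given with ray casting. The sketch below is a simplified proxy of our own devising, not the paper's ENI formulation: it approximates the visibility polygon at a point by casting rays against wall segments, then compares the resulting areas in a physical and a virtual layout.

```python
# A rough, simplified proxy (the paper's ENI metric is more involved):
# approximate the visibility polygon at a point by ray casting against
# environment walls, then compare physical vs. virtual areas.
import numpy as np

def ray_hit(origin, direction, seg_a, seg_b, max_dist=100.0):
    # Distance along `direction` to segment (seg_a, seg_b), or max_dist if no hit.
    v1 = origin - seg_a
    v2 = seg_b - seg_a
    v3 = np.array([-direction[1], direction[0]])    # perpendicular to the ray
    denom = v2 @ v3
    if abs(denom) < 1e-9:
        return max_dist
    t = (v2[0] * v1[1] - v2[1] * v1[0]) / denom     # distance along the ray
    u = (v1 @ v3) / denom                           # position along the segment
    return t if t >= 0 and 0 <= u <= 1 else max_dist

def visibility_area(point, walls, n_rays=180):
    angles = np.linspace(0, 2 * np.pi, n_rays, endpoint=False)
    dists = np.array([
        min(ray_hit(point, np.array([np.cos(a), np.sin(a)]),
                    np.array(w[0]), np.array(w[1])) for w in walls)
        for a in angles])
    # Area of the fan of triangles spanned by consecutive rays.
    return 0.5 * np.sum(dists * np.roll(dists, -1) * np.sin(2 * np.pi / n_rays))

square = [((0, 0), (4, 0)), ((4, 0), (4, 4)), ((4, 4), (0, 4)), ((0, 4), (0, 0))]
corridor = [((0, 0), (10, 0)), ((10, 0), (10, 1)), ((10, 1), (0, 1)), ((0, 1), (0, 0))]
p = np.array([0.5, 0.5])
print("incompatibility proxy:",
      abs(visibility_area(p, square) - visibility_area(p, corridor)))
```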
Evaluating the Impact of Limited Physical Space on the Navigation Performance of Two Locomotion Methods in Immersive Virtual Environments
Conference
Richard Paris, Lauren Buck, Timothy P. McNamara, Bobby Bodenheimer
Consumer-level virtual experiences almost always occur when physical space is limited, either by the constraints of an indoor space or of a tracked area. Some locomotion interfaces support movement in the real world, while some do not. This paper examines how one popular locomotion interface that uses physical space in the real world, resetting, compares to a locomotion interface that requires minimal physical space, walking in place. We compared them by determining how well people could acquire survey knowledge using them. While there are trade-offs between the methods, walking in place is preferable in small spaces.
Remote Research on Locomotion Interfaces for Virtual Reality: Replication of Lab-Based Research on the Teleporting Interface
Journal
Jonathan Kelly, Melynda Hoover, Taylor A Doty, Alex Renner, Moriah Zimmerman, Kimberly Knuth, Lucia Cherep, Stephen B. Gilbert
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150475
Researchers can now recruit virtual reality (VR) equipment owners to participate remotely. Yet, there are many differences between lab and home environments, as well as differences between participant samples recruited for lab and remote studies. This project replicates a lab-based experiment on VR locomotion interfaces using a remote sample. Participants completed a triangle-completion task (travel two path legs, then point to the path origin) in a remote, unsupervised setting. Locomotion was accomplished using two versions of the teleporting interface varying in available rotational self-motion cues. Remote results largely mirrored lab results, with better performance when rotational cues were available.
Adaptive Redirection: A Context-Aware Redirected Walking Meta-Strategy
Journal
Mahdi Azmandian, Rhys Yahata, Timofey Grechkin, Evan Suma Rosenberg
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150500
This work establishes the theoretical foundations for adaptive redirection, a meta-strategy that switches between a suite of redirected walking strategies using a priori knowledge of their performance under various circumstances. We also introduce a novel static planning strategy that optimizes gain parameters for a predetermined virtual path, and we conduct a simulation-based experiment that demonstrates how adaptation rules can be determined empirically using machine learning. Adaptive redirection provides a foundation for making redirected walking work in practice and can be extended as new techniques are integrated into the framework.
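At its simplest, a meta-strategy is a lookup from context to the best-performing strategy. The sketch below is schematic only; the strategy set and the performance table are invented stand-ins for what the paper learns from simulation.

```python
# A schematic sketch of adaptive redirection (strategy names and the learned
# performance table are hypothetical): in each context, switch to the
# redirected walking strategy predicted to cause the fewest resets.
from typing import Dict, Tuple

# Predicted resets per minute, indexed by (strategy, context); in the paper
# such rules are learned empirically, here they are invented numbers.
PREDICTED_RESETS: Dict[Tuple[str, str], float] = {
    ("steer-to-center", "small-room"): 3.1, ("steer-to-center", "large-room"): 1.2,
    ("static-planner", "small-room"): 2.4, ("static-planner", "large-room"): 1.6,
}

def classify_context(room_area_m2: float) -> str:
    return "small-room" if room_area_m2 < 25.0 else "large-room"

def pick_strategy(room_area_m2: float) -> str:
    ctx = classify_context(room_area_m2)
    candidates = {s for s, c in PREDICTED_RESETS if c == ctx}
    return min(candidates, key=lambda s: PREDICTED_RESETS[(s, ctx)])

print(pick_strategy(16.0))   # -> static-planner
print(pick_strategy(64.0))   # -> steer-to-center
```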
Validating Simulation-Based Evaluation of Redirected Walking Systems
Journal
Mahdi Azmandian, Rhys Yahata, Timofey Grechkin, Jerald Thomas, Evan Suma Rosenberg
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150466
We present an experiment comparing redirected walking simulation and live user data to understand interaction between locomotion behavior and redirection gains at a micro-level (across small path segments) and macro-level (across an entire experience). The results identify specific properties of locomotion behavior that influence the application of redirected walking. Overall, we found that the simulation provided a conservative estimate of the average performance with real users and observed that performance trends when comparing two redirected walking algorithms were preserved. In general, these results indicate that simulation is an empirically valid evaluation methodology for redirected walking algorithms.
Session: Immersive Visualization and Virtual Production
Monday, March 14, 14:00, NZDT UTC+13
Session Chair: Luciana Nedel
Discord URL: https://discord.com/channels/842181663248482334/951017772069052416
The Virtual Production Studio Concept - An Emerging Game Changer in Filmmaking
Conference
Manolya Kavakli, Cinzia Cremona
Virtual Production (VP) integrates virtual and augmented reality technologies with CGI and VFX using a game engine, enabling on-set production crews to capture and unwrap scenes in real time. In this sense, it is a game changer for the traditional film industry. This paper analyses the VP studio and the use of LED walls as the pivotal element of this process from a technical and conceptual perspective. As interest in the field is growing rapidly, we evaluate the developments to date in Australia to build a snapshot of emerging VP practices and research, supported by two case studies.
Exploring the Design Space for Immersive Embodiment in Dance
Conference
Danielle Lottridge, Becca Weber, Eva-Rae McLean, Hazel Williams, Joanna Cook, Huidong Bai
How can virtual reality (VR) support people in more deeply inhabiting their own bodies? Through action research, we explored the design space for how VR can support feeling embodied and generating movement within a dance context. We observed and reflected on participants' experiences with games intended to stimulate physical movement, single player and multiplayer 3D painting, 360 live video, and custom real-time audio and visual feedback based on motion capture. We present a design space that encapsulates our insights, with axes for control, collaboration, auditory and visual feedback. We discuss implications for the support of immersive embodiment in dance.
Mixed Reality Co-Design for Indigenous Culture Preservation & Continuation
Conference
Noel Park, Holger Regenbrecht, Stuart Duncan, Steven Mills, Robert W. Lindeman, Nadia Pantidi, Hemi Whaanga
Decades of Māori urbanisation, colonisation and globalisation have dispersed marae communities away from their tribal homes, all around New Zealand and overseas. This has left many feeling disconnected from learning about their cultural identity. Māori have sought digital solutions to mitigate this problem by finding ways of preserving and continuing their culture using immersive technologies. We contribute by developing an application which places Māori back into their ancestral meeting house and allows them to hear 3D stories about their people. For other researchers and developers involved in indigenous projects, we recommend using a participatory co-design process when developing MR applications for indigenous preservation and continuation.
TimeTables: Embodied Exploration of Immersive Spatio-Temporal Data
Conference
Yidan Zhang, Barrett Ens, Kadek Ananta Satriadi, Arnaud Prouzeau, Sarah Goodwin
We propose TimeTables, a novel prototype system that aims to support data exploration using embodiment with space-time cubes in virtual reality. TimeTables uses multiple space-time cubes on virtual tabletops, which users can manipulate by extracting time layers to create new tabletop views. The system presents information at different time scales by stretching layers to drill down in time. Users can also jump into tabletops to inspect data from an egocentric perspective. From our use case scenario of energy consumption on a university campus, we believe the system has high potential for supporting spatio-temporal data exploration and analysis.
Virtual replicas of real places: Experimental investigations
Invited Journal
Richard Skarbez, Joe Gabbard, Doug A Bowman, Todd Ogle, Thomas Tucker
URL: https://doi.org/10.1109/TVCG.2021.3096494
As virtual reality (VR) technology becomes cheaper, higher-quality, and more widely available, it is seeing increasing use in a variety of applications including cultural heritage, real estate, and architecture. A common goal for all these applications is a compelling virtual recreation of a real place. Despite this, there has been very little research into how users perceive and experience such replicated spaces. This paper reports the results from a series of three user studies investigating this topic. Results include that the scale of the room and large objects in it are most important for users to perceive the room as real and that non-physical behaviors such as objects floating in air are readily noticeable and have a negative effect even when the errors are small in scale.
VirtualCube: An Immersive 3D Video Communication System
Journal
Yizhong Zhang, Jiaolong Yang, Zhen Liu, Ruicheng Wang, Guojun Chen, Xin Tong, Baining Guo
The VirtualCube system is a 3D video conference system that attempts to overcome some limitations of conventional technologies. The physical setup of VirtualCube is a standardized cubicle installed with off-the-shelf hardware, including 3 TV displays and 6 RGBD cameras. With high-quality 3D capture and rendering algorithms, the system teleports remote participants into a virtual meeting room to achieve an immersive in-person meeting experience with correct eye gaze. A set of VirtualCubes can be easily assembled into a V-Cube Assembly to model different video communication and shared workspace scenarios, as if all participants were in the same room.
Session: Multimodal VR
Monday, March 14, 16:30, NZDT UTC+13
Session Chair: Sandra Malpica
Discord URL: https://discord.com/channels/842181663248482334/951017808924377118
Evaluating Visual Cues for Future Airborne Surveillance Using Simulated Augmented Reality Displays
Conference
Nicolas Barbotin, James Baumeister, Andrew Cunningham, Thierry Duval, Olivier Grisvard, Bruce H. Thomas
This work explores bringing Augmented Reality (AR) to airborne surveillance operators. In a Virtual Reality (VR) simulation replicating the environment of surveillance aircraft, we introduce AR cues and an AR control panel to support search tasks and secondary tasks, respectively. Our high-fidelity simulation factors in the focal plane of the AR technology and simulates the user's eye accommodation reflex. Results collected from 24 participants show that the effectiveness of the cues depends on the modality of the secondary task and that, under certain circumstances, choosing the right focal plane for the AR display may improve search task performance.
Empirical Evaluation of Calibration and Long-term Carryover Effects of Reverberation on Egocentric Auditory Depth Perception in VR
Conference
WanYi Lin, Ying-Chu Wang, Dai-rong Wu, Rohith Venkatakrishnan, Roshan Venkatakrishnan, Elham Ebrahimi, Christopher Pagano, Sabarish V. Babu, Wen-Chieh Lin
We conducted a study to understand the perceptual learning and carryover effects of using reverberation time (RT) as a stimulus for users to perceive distance in immersive virtual environments (IVEs). The results show that a carryover effect exists after calibration, which indicates that people can learn to perceive distances by attuning to reverberation time, and that accuracy remains at a constant level even after 6 months.
Simulating Olfactory Cocktail Party Effect in VR: A Multi-odor Display Approach Based on Attention
Conference
Shangyin Zou, Xianyin Hu, Yuki Ban, Shinichi Warisawa
We present a VR multi-odor display approach that dynamically changes the intensity combinations of different scent sources in the virtual environment according to the user's attention, simulating a virtual "cocktail party effect" of smell. We acquire attention from eye-tracking sensors and increase the display intensity of the scent the user is focusing on, enabling the user to distinguish their desired scent source. The user study showed our approach improved the olfactory experience and sense of presence in VR compared to a non-dynamic odor display method.
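The attention-weighted blending can be sketched as a small per-frame computation. The snippet below is our inference from the abstract; the softmax-style emphasis, gains, and ambient level are hypothetical.

```python
# A small sketch of attention-weighted scent blending (weights and the
# softmax-style emphasis are hypothetical): boost the display intensity of
# the scent source the user's gaze is focused on.
import numpy as np

def scent_intensities(gaze_dir, source_dirs, base=0.2, focus_gain=1.0, sharpness=8.0):
    # Cosine alignment between gaze and each scent source direction.
    align = source_dirs @ gaze_dir
    # Emphasize the best-aligned source, keep a faint ambient level for others.
    weights = np.exp(sharpness * align)
    weights /= weights.sum()
    return base + focus_gain * weights

gaze = np.array([0.0, 0.0, 1.0])
sources = np.array([[0.0, 0.0, 1.0],    # coffee, straight ahead
                    [0.7, 0.0, 0.7],    # bread, off to the side
                    [-1.0, 0.0, 0.0]])  # flowers, behind-left
sources = sources / np.linalg.norm(sources, axis=1, keepdims=True)
print(scent_intensities(gaze, sources))
```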
Shape Aware Haptic Retargeting for Accurate Hand Interactions
Conference
Brandon J. Matthews, Bruce H. Thomas, Stewart von Itzstein, Ross Smith
Shape Aware Haptic Retargeting (SAHR) is an extension of state-of-the-art haptic retargeting and the first to support retargeted interaction between any part of the user's hand and any part of the target object. In previous methods, the maximum retargeting is applied only when the hand position aligns with the target position. SAHR generalizes the distance computation to consider the full hand and target geometry. Simulated evaluations demonstrated that SAHR can provide improved interaction accuracy over existing methods, with full mesh geometry being the most accurate and a primitive approximation being the preferred method for combined computational performance and interaction accuracy.
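The generalized distance computation can be condensed into a few lines. The sketch below is our approximation of the idea: SAHR operates on full meshes, whereas here the hand and target are point sets, and the warp ramp is a simple linear blend.

```python
# A condensed sketch of the distance-based warping idea (SAHR itself works on
# full meshes; hand and target are approximated as point sets here): the
# virtual hand offset ramps up as the closest hand-to-target distance shrinks.
import numpy as np

def retargeting_offset(hand_points, target_points, full_offset, start_dist=0.5):
    # Closest distance between any hand point and any target point.
    diffs = hand_points[:, None, :] - target_points[None, :, :]
    d = np.sqrt((diffs ** 2).sum(-1)).min()
    # Blend from zero offset (far away) to the full offset (touching).
    alpha = np.clip(1.0 - d / start_dist, 0.0, 1.0)
    return alpha * full_offset

hand = np.array([[0.0, 0.0, 0.3], [0.02, -0.01, 0.28]])   # tracked hand samples
target = np.array([[0.0, 0.0, 0.0]])                      # physical prop
print(retargeting_offset(hand, target, full_offset=np.array([0.05, 0.0, 0.0])))
```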
Adaptive Reset Techniques for Haptic Retargeted Interaction
Invited Journal
Brandon Matthews, Bruce H Thomas, Stewart Von Itzstein, Ross Smith
URL: https://doi.org/10.1109/TVCG.2021.3120410
This paper presents a set of adaptive reset techniques for use with haptic retargeting systems focusing on interaction with hybrid virtual reality interfaces that align with a physical interface. Haptic retargeting between changing physical and virtual targets requires a reset where the physical and virtual hand positions are re-aligned. We present a modified Point technique to guide the user in the direction of their next interaction such that the remaining distance to the target is minimized upon completion of the reset. This, along with techniques drawn from existing work are further modified to consider the angular and translational gain of each redirection and identify the optimal position for the reset to take place. When the angular and translational gain is within an acceptable range, the reset can be entirely omitted. This enables continuous retargeting between targets removing interruptions from a sequence of retargeted interactions. These techniques were evaluated in a user study which showed that adaptive reset techniques can provide a significant decrease in task completion time, travel distance, and the number of user errors.
Studying the Effects of Congruence of Auditory and Visual Stimuli on Virtual Reality Experiences
Journal
Hayeon Kim, In-Kwon Lee
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150514
This paper explores how congruence between auditory and visual (AV) stimuli, the sensory stimuli typically provided by VR devices, affects the user experience. We defined types of (in)congruence between AV stimuli, then designed 12 virtual spaces with different degrees of congruence between AV stimuli and evaluated changes in user experience. We observed the following key findings: 1) there is a limit to the degree of temporal or spatial incongruence that can be tolerated; 2) users are tolerant of semantic incongruence; 3) a simulation that considers synesthetic congruence contributes to the user's sense of immersion and presence.
Session: Embodiment
Tuesday, March 15, 8:30, NZDT UTC+13
Session Chair: Tabitha Peck
Discord URL: https://discord.com/channels/842181663248482334/951018222868627456
Visual Fidelity Effects on Expressive Self-avatar in Virtual Reality: First Impressions Matter
Conference
Fang Ma, Xueni Pan
In this work we present a technical pipeline for creating self-alike avatars whose facial expressions can then be controlled in real-time from inside the HMD. Using this pipeline, we conducted two within-group studies on the psychological impact of the appearance realism of self-avatars. We found a "first trial effect" in both studies, where participants gave more positive feedback on the avatar from their first trial, regardless of whether it was realistic or cartoon-like. Our eye-tracking data suggested that although participants were mainly facing their avatar during their presentation, their eye-gaze was focused elsewhere half of the time.
Within or Between? Comparing Experimental Designs for Virtual Embodiment Studies
Conference
Grégoire Richard, Thomas Pietrzak, Ferran Argelaguet Sanz, Anatole Lécuyer, Géry Casiez
This paper reports a within-subjects experiment with 92 participants comparing embodiment under a visuomotor task with two conditions: synchronous and asynchronous motions. We explored the impact of the number of participants on the replicability of the results of this experiment. Results showed that while the replicability of the results increased with the number of participants for the within-subjects simulations, between-subjects simulations were not able to replicate the initial results no matter the number of participants. We discuss potential reasons that could have led to this and methodological practices to mitigate them.
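The resampling logic behind such a comparison is easy to sketch. The snippet below is our reconstruction of the general procedure, not the paper's code; the effect sizes, noise, and significance threshold are invented for illustration.

```python
# A compact sketch of the resampling idea (effect sizes and thresholds are
# invented): repeatedly subsample participants and count how often a paired
# (within-subjects) vs. unpaired (between-subjects) test replicates the effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sync = rng.normal(5.0, 1.0, 92)           # embodiment scores, synchronous
asyn = sync - rng.normal(0.8, 1.0, 92)    # asynchronous: same people, lower scores

def replication_rate(n, trials=1000):
    hits_within = hits_between = 0
    for _ in range(trials):
        idx = rng.choice(92, n, replace=False)
        if stats.ttest_rel(sync[idx], asyn[idx]).pvalue < 0.05:
            hits_within += 1
        jdx = rng.choice(92, n, replace=False)   # an independent "second group"
        if stats.ttest_ind(sync[idx], asyn[jdx]).pvalue < 0.05:
            hits_between += 1
    return hits_within / trials, hits_between / trials

for n in (10, 20, 40):
    print(n, replication_rate(n))
```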
Being an Avatar "for Real": a Survey on Virtual Embodiment in Augmented Reality
Invited Journal
Adelaide Charlotte Sarah Genay, Anatole Lecuyer, Martin Hachet
URL: https://doi.org/10.1109/TVCG.2021.3099290
Virtual self-avatars have been increasingly used in Augmented Reality (AR) where one can see virtual content embedded into physical space. However, little is known about the perception of self-avatars in such a context. The possibility that their embodiment could be achieved in a similar way as in Virtual Reality opens the door to numerous applications in education, communication, entertainment, or the medical field. This article aims to review the literature covering the embodiment of virtual self-avatars in AR. Our goal is (i) to guide readers through the different options and challenges linked to the implementation of AR embodiment systems, (ii) to provide a better understanding of AR embodiment perception by classifying the existing knowledge, and (iii) to offer insight on future research topics and trends for AR and avatar research. To do so, we introduce a taxonomy of virtual embodiment experiences by defining a "body avatarization" continuum. The presented knowledge suggests that the sense of embodiment evolves in the same way in AR as in other settings, but this possibility has yet to be fully investigated. We suggest that, whilst it is yet to be well understood, the embodiment of avatars has a promising future in AR and conclude by discussing possible directions for research.
Do You Need Another Hand? Investigating Dual Body Representations During Anisomorphic 3D Manipulation
Journal
Diane Dewez, Ludovic Hoyet, Anatole Lécuyer, Ferran Argelaguet Sanz
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150501
Manipulation techniques that distort motion can negatively impact the sense of embodiment as they create a mismatch between the real action and the displayed action. In this paper, we propose to use a dual representation during anisomorphic interaction. A co-located representation reproduces the users' motion, while an interactive representation is used for distorted interaction. We conducted two experiments, investigating the use of dual representations with amplified motion (with the Go-Go technique) and decreased motion (with the PRISM technique). Two visual appearances for the interactive representation and the co-located one were explored (ghost and realistic).
The Impact of Embodiment and Avatar Sizing on Personal Space in Immersive Virtual Environments
Journal
Lauren Buck, Soumyajit Chakraborty, Bobby Bodenheimer
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150483
Our work examines how degree of embodiment and avatar sizing affect the way personal space is perceived in virtual reality. We look at two components of personal space: interpersonal and peripersonal space. We hypothesized that higher levels of embodiment would result in differing measures of interpersonal and peripersonal space, and that only interpersonal space would change with arm length. We found that interpersonal and peripersonal space change in the presence of differing levels of embodiment, and that only interpersonal space is sensitive to changes in arm dimension. These findings provide understanding for improved design of social interaction in virtual environments.
Session: Presence
Tuesday, March 15, 8:30, NZDT UTC+13
Session Chair: Adam Jones
Discord URL: https://discord.com/channels/842181663248482334/951018273355477002
Exploring Presence, Avatar Embodiment, and Body Perception with a Holographic Augmented Reality Mirror
Conference
Erik Wolf, Marie Luisa Fiedler, Nina Döllinger, Carolin Wienrich, Marc Erich Latoschik
The use of full-body illusions in virtual reality has shown promise in enhancing users' mental health. In our work, we developed a holographic augmented reality (AR) mirror to extend these advances with real-world interaction, and we evaluated its user experience. Twenty-seven participants provided predominantly positive qualitative feedback on the technical implementation and rated the system regarding presence, embodiment, and body weight perception. Our comparison with more immersive systems shows that AR systems can provide full-body illusions of similar quality. However, future work is essential to determine the significance of our findings in the context of mental health.
Kicking in Virtual Reality: The Influence of Foot Visibility on the Shooting Experience and Accuracy
Conference
Michael Bonfert, Stella Lemke, Robert Porzel, Rainer Malaka
Foot interaction is crucial in many disciplines when playing sports in virtual reality. We investigated how the visibility of the foot influences penalty shooting in soccer. In a between-group experiment, we asked 28 players to hit eight targets with a virtual ball. We measured the performance, task load, presence, ball control, and body ownership of inexperienced to advanced soccer players. In one condition, the players saw a visual representation of their tracked foot, which significantly improved the accuracy of the shots: players with an invisible foot needed 58% more attempts. Further, self-reported body ownership was higher with foot visibility.
Augmenting Immersive Telepresence Experience with a Virtual Body
Journal
Nikunj Arora, Markku Suomalainen, Matti Pouke, Evan G Center, Katherine J. Mimnaugh, Alexis P Chambers, Pauli Sakaria Pouke, Steven LaValle
We propose augmenting immersive telepresence by adding a virtual body, representing the user's own arm motions, realized through a head-mounted display and a 360-degree camera. We conducted a study where participants were telepresent through a head-mounted display at a researcher's physical location; the researcher interacted with them and prompted reactions. The results showed a contradiction between the pilot and confirmatory studies, with at best weak evidence of an increase in presence and preference for the virtual body. Further analysis suggests that the quality and style of the virtual arms led to individual differences, which subsequently moderated feelings of presence.
Breaking Plausibility Without Breaking Presence - Evidence For The Multi-Layer Nature Of Plausibility
Journal
Larissa Brübach*, Franziska Westermeier*, Carolin Wienrich, Marc Erich Latoschik (* = equal contribution)
URL: https://doi.org/10.1109/TVCG.2022.3150496
Recently, a novel theoretical model introduced coherence and plausibility as the essential conditions of XR experiences. Plausibility results from multi-layer (cognitive, perceptual, and sensational) coherence activation. We utilized breaks in plausibility (analogous to breaks in presence) by introducing incoherence on the perceptual and cognitive layers. A simulation of gravity-defying objects, i.e., the perceptual manipulation, broke plausibility but not presence. Simultaneously, the cognitive manipulation, presented as storyline framing, was too weak to counteract the strong bottom-up inconsistencies. Both results confirm the predictions of the novel model, incorporating well-known top-down and bottom-up rivalries and a theorized increased independence between plausibility and presence.
Session: Interaction Design
Tuesday, March 15, 8:30, NZDT UTC+13
Session Chair: Rick Skarbez
Discord URL: https://discord.com/channels/842181663248482334/951018309715906570
Exploring and Selecting Supershapes in Virtual Reality with Line, Quad, and Cube Shaped Widgets
Conference
Francisco Nicolau, Johan Gielis, Adalberto L. Simeone, Daniel S. Lopes
In this work, we propose VR shape widgets that allow users to probe and select supershapes from a multitude of solutions. Our designs leverage thumbnails, mini-maps, haptic feedback and spatial interaction, while supporting 1-D, 2-D and 3-D supershape parameter spaces. We conducted a user study (N = 18) and found that VR shape widgets are effective, more efficient, and more natural than conventional VR 1-D sliders, while also being usable by users without prior knowledge of supershapes. We also found that the proposed VR widgets provide a quick overview of the main supershapes, and users can easily reach the desired solution without having to perform fine-grained handle manipulations.
The Potential of Augmented Reality for Digital Twins: A Literature Review
Conference
Andreas Künz, Sabrina Rosmann, Enrica Loria, Johanna Pirker
Implementing digital twins with the help of mixed/augmented reality is a promising approach, yet still a novel area of research. We therefore conducted a PRISMA-based literature review of scientific articles and book chapters dealing with the use of MR and AR for digital twins. 25 papers were analyzed, sorted and compared by categories such as research topic, domain, paper type, evaluation type, and hardware used, as well as their outcomes. The major finding of this survey is that the reviewed papers focus predominantly on the technology itself and neglect user-related factors.
The Effect of Exploration Mode and Frame of Reference in Immersive Analytics
Invited Journal
Jorge Wagner, Wolfgang Stuerzlinger, Luciana Nedel
URL: https://doi.org/10.1109/TVCG.2021.3060666
The design space for user interfaces for Immersive Analytics applications is vast. Designers can combine navigation and manipulation to enable data exploration with ego- or exocentric views, have the user operate at different scales, or use different forms of navigation with varying levels of physical movement. This freedom results in a multitude of different viable approaches. Yet, there is no clear understanding of the advantages and disadvantages of each choice. Our goal is to investigate the affordances of several major design choices, to enable both application designers and users to make better decisions. In this work, we assess two main factors, exploration mode and frame of reference, consequently also varying visualization scale and physical movement demand. To isolate each factor, we implemented nine different conditions in a Space-Time Cube visualization use case and asked 36 participants to perform multiple tasks. We analyzed the results in terms of performance and qualitative measures and correlated them with participants' spatial abilities. While egocentric room-scale exploration significantly reduced mental workload, exocentric exploration improved performance in some tasks. Combining navigation and manipulation made tasks easier by reducing workload, temporal demand, and physical effort.
Synthesizing Personalized Construction Safety Training Scenarios for VR Training
Journal
Wanwan Li, Haikun Huang, Tomay Solomon, Behzad Esmaeili, Lap-Fai Yu
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150510
The construction industry has the largest number of preventable fatal injuries, and effective safety training practices can play a significant role in reducing the number of fatalities. Building on recent advancements in virtual reality-based training, we devised a novel approach to synthesize construction safety training scenarios that train users to proficiently inspect potential hazards on construction sites in virtual reality. Given training specifications such as individual training preferences and target training time, we synthesize personalized VR training scenarios through an optimization approach. We validated our approach by conducting user studies in which users went through our personalized VR training, general VR training, or conventional slides training. The results show that our personalized VR training approach can more effectively train users to improve their construction hazard inspection skills.
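The optimization framing, choosing scenario content to satisfy a target training time and personal preferences, can be illustrated with a toy annealing loop. The hazard catalog, timings, and cost weights below are hypothetical; the paper's cost terms are richer.

```python
# A toy sketch of the optimization framing (hazard catalog and weights are
# hypothetical): pick a hazard set whose total inspection time matches the
# target while honoring the trainee's preferences.
import math
import random

hazards = {"fall": 90, "electrocution": 60, "struck-by": 45,
           "caught-in": 75, "fire": 50}                # est. seconds each
preferred = {"fall", "struck-by"}

def cost(selection, target_time=240):
    t = sum(hazards[h] for h in selection)
    pref_miss = len(preferred - set(selection))        # unmet preferences
    return abs(t - target_time) + 30 * pref_miss

def synthesize(iters=5000, temp=50.0):
    current = random.sample(list(hazards), 3)
    for i in range(iters):
        cand = list(current)
        cand[random.randrange(len(cand))] = random.choice(list(hazards))
        if len(set(cand)) < len(cand):
            continue                                   # keep hazards distinct
        delta = cost(cand) - cost(current)
        t = temp * (1 - i / iters) + 1e-6
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = cand                             # annealed acceptance
    return current

plan = synthesize()
print(plan, "cost:", cost(plan))
```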
PoVRPoint: Authoring Presentations in Mobile Virtual Reality
Journal
Verena Biener, Travis Gesslein, Daniel Schneider, Felix Kawala, Alexander Otte, Per Ola Kristensson, Michel Pahud, Eyal Ofek, Cuauhtli Campos, Matjaž Kljun, Klen Čopič Pucihar, Jens Grubert
We propose PoVRPoint - a set of tools coupling pen- and touch-based editing of presentations on mobile touch devices with the interaction capabilities afforded by VR. We study the utility of extended display space to assist users in identifying target slides, supporting spatial manipulation of slide-content, creating animations, and facilitating arrangements of multiple, possibly occluded objects. Among other things, our results indicate that the three-dimensional view in VR enables significantly faster object reordering in the presence of occlusion compared to two baseline interfaces. A user study further confirmed that our interaction techniques were found to be usable and enjoyable.
Session: Collaboration
Tuesday, March 15, 11:00, NZDT UTC+13
Session Chair: Wallace Lages
Discord URL: https://discord.com/channels/842181663248482334/951018348488052757
The Potential of VR-based Tactical Resource Planning on Spatial Data
Conference
Maria L. Medeiros, Bettina Schlager, Katharina Krösl, Anton L. Fuhrmann
In this work, we leverage the benefits of immersive head-mounted displays (HMDs) and present the design, implementation, and evaluation of a collaborative Virtual Reality (VR) application for tactical resource planning on spatial data. We derived system and design requirements from observations of a military on-site staff exercise and, to evaluate our prototype, conducted semi-structured interviews with domain experts. Our results show that the potential of VR-based tactical resource planning depends on technical features as well as on non-technical environmental aspects, such as user attitude, prior experience, and interoperability.
Virtual Workspace Positioning Techniques during Teleportation for Co-located Collaboration in Virtual Reality using HMDs
Conference
Yiran Zhang, Huyen Nguyen, Nicolas Ladeveze, Cedric Fleury, Patrick Bourdot
Many co-located collaborative Virtual Reality applications rely on a one-to-one mapping of users' relative positions in real and virtual environments. However, the users' individual virtual navigation may break this spatial configuration. This work enables the recovery of spatial consistency after individual navigation. We provide a virtual representation of the users' shared physical workspace and developed two techniques to position it. The first technique enables a single user to control the virtual workspace location, while the second allows concurrent manipulation. Experimental results show that allowing two users to co-manipulate the virtual workspace significantly reduces negotiation time.
Duplicated Reality for Co-located Augmented Reality Collaboration
Journal
Kevin Yu, Ulrich Eck, Frieder Pankratz, Marc Lazarovici, Dirk Wilhelm, Nassir Navab
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150520
When multiple users collaborate in the same space with Augmented Reality, they often encounter conflicting intentions regarding the occupation of the working area. To relax the constraints of physical co-location, we propose Duplicated Reality, where a digital copy of a 3D region of interest is reconstructed in real-time and visualized through Augmented Reality. We compare the proposed method to an in-situ augmentation. The results indicate almost identical metrics, except for a decrease in the consulter's awareness of co-located users when using our method. Duplicating the working area into a designated consulting area opens up new paradigms for future co-located Augmented Reality systems.
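The core transform, copying a region of interest and re-posing it at a consulting area, is a one-function sketch. The snippet below is our minimal illustration on point clouds; the real system works on live reconstructed meshes, and all names are ours.

```python
# A minimal sketch of the core transform (names are ours; the real system works
# on live reconstructed meshes): copy points inside a region of interest and
# re-pose the copy at a designated consulting area each frame.
import numpy as np

def duplicate_roi(points, roi_center, roi_radius, consult_pose):
    # Select the live points inside the spherical region of interest.
    mask = np.linalg.norm(points - roi_center, axis=1) <= roi_radius
    copy = points[mask] - roi_center          # express relative to ROI center
    R, t = consult_pose                       # rotation + translation of the copy
    return copy @ R.T + t

rng = np.random.default_rng(0)
scene = rng.uniform(-2, 2, size=(10_000, 3))           # stand-in reconstruction
yaw = np.radians(90)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0],
              [np.sin(yaw),  np.cos(yaw), 0],
              [0, 0, 1]])
replica = duplicate_roi(scene, np.array([0.5, 0.0, 1.0]), 0.4,
                        (R, np.array([2.0, 2.0, 1.0])))
print(replica.shape)
```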
Session: Perception in AR
Tuesday, March 15, 11:00, NZDT UTC+13
Session Chair: Étienne Peillard
Discord URL: https://discord.com/channels/842181663248482334/951018400510001192
The Influence of Environmental Lighting on Size Variations in Optical See-through Tangible Augmented Reality
Conference
Denise Kahl, Marc Ruble, Antonio Krüger
It is not possible to use a physical replica for each virtual object interacted with in Tangible Augmented Reality, so investigations of the extent to which a physical object can differ from its virtual counterpart are necessary. Since perception in optical see-through Augmented Reality is strongly influenced by ambient lighting, we examined, under three different indoor lighting conditions, how much a physical object can differ in size from its virtual representation. As illuminance increases, usability and presence decrease, but the size ranges in which a physical object can deviate without having a strong negative impact on usability, presence and performance increase.
Depth Perception in Augmented Reality: The Effects of Display, Shadow, and Position
Conference
Haley Alexander Adams, Sarah Creem-Regehr, Jeanine Stefanucci, Bobby Bodenheimer
Although it is commonly accepted that depth perception in AR displays is distorted, we have yet to isolate which properties of AR affect people's ability to correctly perceive virtual objects in real spaces. In this research, we evaluate absolute measures of distance perception in the Microsoft HoloLens 2, an optical see-through (OST) display, and the Varjo XR-3, a video see-through (VST) display. Our work is the first to evaluate either device using absolute distance measures. Our results suggest that current VST displays may induce more distance underestimation than their OST counterparts. We also find differences in depth judgments for grounded versus floating virtual objects, a difference that prevails even when cast shadows are present.
Multisensory Proximity and Transition Cues for Improving Target Awareness in Narrow Field of View Augmented Reality Displays
Invited Journal
Christina Trepkowski, Alexander Marquardt, Tom David Eibich, Yusuke Shikanai, Jens Maiero, Kiyoshi Kiyokawa, Ernst Kruijff, Johannes Schöning, Peter König
URL: https://doi.org/10.1109/TVCG.2021.3116673
Augmented reality applications allow users to enrich their real surroundings with additional digital content. However, due to the limited field of view of augmented reality devices, it can sometimes be difficult to become aware of newly emerging information inside or outside the field of view. Typical visual conflicts like clutter and occlusion of augmentations occur and can be further aggravated in the context of dense information spaces. In this article, we evaluate how multisensory cue combinations can improve awareness of moving out-of-view objects in narrow field of view augmented reality displays. We distinguish between proximity and transition cues, each delivered in a visual, auditory, or tactile manner. Proximity cues are intended to enhance spatial awareness of approaching out-of-view objects, while transition cues inform the user that an object has just entered the field of view. In study 1, user preference was determined for 6 different cue combinations via forced-choice decisions. In study 2, the 3 most preferred modes were then evaluated with respect to performance and awareness measures in a divided-attention reaction task. Both studies were conducted under varying noise levels. We show that on average the Visual-Tactile combination leads to 63% and the Audio-Tactile combination to 65% faster reactions to incoming out-of-view augmentations than their Visual-Audio counterpart, indicating the high usefulness of tactile transition cues. We further show a detrimental effect of visual and audio noise on performance when feedback included visual proximity cues. Based on these results, we make recommendations to determine which cue combination is appropriate for which application.
The Effect of Context Switching, Focal Switching Distance, Binocular and Monocular Viewing, and Transient Focal Blur on Human Performance in Optical See-Through Augmented Reality
Journal
Mohammed Safayet Arefin, Nate Phillips, Alexander Plopski, Joseph L Gabbard, J. Edward Swan
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150503
In optical see-through augmented reality (AR), information is often distributed between real and virtual contexts and often appears at different distances from the user. To integrate information, users must repeatedly switch context and change focal distance. Previously, Gabbard, Mehra, and Swan (2018) examined these issues using a text-based visual search task on a monocular optical see-through AR display. Our study partially replicated and extended this task on a custom-built AR Haploscope for both monocular and binocular viewing. The results establish that context switching, focal distance switching, and transient focal blur remain important AR user interface design issues.
Stereopsis Only: Validation of a Monocular Depth Cues Reduced Gamified Virtual Reality with Reaction Time Measurement
Journal
Wolfgang Andreas Mehringer, Markus Wirth, Daniel Roth, Georg Michelson, Bjoern M Eskofier
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150486
Visuomotor task performance is limited in the absence of binocular cues, e.g., when the visual system is affected by a disorder like amblyopia. Conventional amblyopia treatment occludes the healthy eye but results in poor stereopsis improvements. Binocular treatments therefore equilibrate both eyes' visual input; however, most approaches use divided stimuli that do not account for the loss of stereopsis. We created a Virtual Reality environment with reduced monocular depth cues in which a stereoscopic task is shown to both eyes simultaneously. In a study with 18 participants, the number of correct responses dropped from 90% under binocular vision to 50% under monocular vision.
Session: Virtual Humans and Agents
Tuesday, March 15, 11:00, NZDT UTC+13
Session Chair: Kangsoo Kim
Discord URL: https://discord.com/channels/842181663248482334/951018494512750592
The Stare-in-the-Crowd Effect in Virtual Reality
Conference
Pierre Raimbaud, Alberto Jovane, Katja Zibrek, Claudio Pacchierotti, Marc Christie, Ludovic Hoyet, Julien Pettré, Anne-Hélène Olivier
In this study we explore the stare-in-the-crowd effect, an asymmetry in the gaze behaviour of human observers towards directed versus averted gazes present in a crowd. In virtual reality, we analysed the gaze of 30 users asked to observe a crowd of 11 virtual agents performing 4 different gaze patterns. Results show that the stare-in-the-crowd effect is preserved in VR. We additionally explored how user gaze behaviour can be affected by social anxiety. These results are encouraging for the development of expressive and reactive virtual humans, which can be animated to express natural interactive behaviour.
Virtual Humans with Pets and Robots: Exploring the Influence of Social Priming on One's Perception of a Virtual Human
Conference
Nahal Norouzi, Matt Gottsacker, Gerd Bruder, Pamela J. Wisniewski, Jeremy N. Bailenson, Greg Welch
Social priming is the idea that observations of a virtual human (VH) engaged in short social interactions with a real or virtual human bystander can positively influence users' subsequent interactions with that VH. In this paper, we investigate the question of whether the positive effects of social priming are limited to interactions with humanoid entities through a human-subjects experiment. Our mixed-methods analysis revealed positive influences of social priming with humanoid and non-humanoid entities, such as increasing participants' perception of the VH's affective attraction and enhancing participants' quality of experience.
Investigating the Effects of Leader and Follower Behaviors of Virtual Humans in Collaborative Fine Motor Tasks in Virtual Reality
Conference
Kuan-yu Liu, Sai-Keung Wong, Matias Volonte, Elham Ebrahimi, Sabarish V. Babu
We examined the effects on users of collaborating with two types of virtual human (VH) agents, leaders and followers, in an object transportation task. Users interact with the agents using natural language and carry objects from initial locations to destinations. The follower agent follows a user's instructions to manipulate the object, whereas the leader agent determines the appropriate actions that the agent and the user should perform. We conducted a within-subjects study to evaluate user behaviors when interacting with the two VHs.
An Evaluation of Native versus Foreign Communicative Interactions on Users' Behavioral Reactions towards Affective Virtual Crowds
Conference
Chang Chun Wang, Matias Volonte, Elham Ebrahimi, Kuan-yu Liu, Sai-Keung Wong, Sabarish V. Babu
This investigation compared the impact on users' non-verbal behaviors of interacting with a crowd of emotional virtual humans (VHs) in native and non-native language settings. In a between-subjects experiment, we collected objective metrics of users' behaviors during interaction with a crowd. Data were collected in the USA and, under two different conditions, in Taiwan: participants in the USA group interacted with the VHs in English (the native language for that setting), while the two Taiwan groups interacted with the VHs in either a foreign (English) or native (Mandarin) language, respectively.
The Effect of Virtual Humans Making Verbal Communication Mistakes on Learner's Perspective of their Credibility, Reliability, and Trustworthiness
Conference
Jacob Stuart, Karen Aul, Anita Stephen, Michael D. Bumbach, Alexandre Gomes de Siqueira, Benjamin Lok
In this work, we performed a 2x2 mixed design user study that had learners (n = 80) attempt to identify verbal communication mistakes made by a virtual human acting as a nurse in a virtual desktop environment. We found that learners struggle to identify infrequent virtual human verbal communication mistakes. Additionally, learners with lower initial trustworthiness ratings are more likely to overlook potentially life-threatening mistakes, and virtual human mistakes temporarily lower learner credibility, reliability, and trustworthiness ratings of virtual humans.
The One-Man-Crowd: Single User Generation of Crowd Motions Using Virtual Reality
Journal
Tairan Yin, Ludovic Hoyet, Marc Christie, Marie-Paule R. Cani, Julien Pettré
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150507
Crowd motion data is fundamental for understanding and simulating crowd behaviours. Such data is usually collected through controlled experiments and is scarce due to the difficulties involved in gathering it. In this work, we propose a novel Virtual Reality based approach for the acquisition of crowd motion data, which immerses a single user in virtual scenarios to act out each crowd member in turn. We validate our approach by replicating three real experiments and comparing the results. Using our approach, realistic collective behaviours can naturally emerge, albeit with lower behavioural variety. These results provide valuable insights into virtual crowd experiences and reveal key directions for further improvements.
Session: Augmented Reality
Tuesday, March 15, 13:00, NZDT UTC+13
Session Chair: Vicki Interrante
Discord URL: https://discord.com/channels/842181663248482334/951018549554606111
An improved augmented-reality method of inserting virtual objects into the scene with transparent objects
Conference
Aijia Zhang, Yan Zhao, Shigang Wang, Jian Wei
In augmented reality, inserting virtual objects into the real scene must satisfy visual consistency. When there are transparent objects in the real scene, differences in their refractive index and roughness influence the quality of virtual-real fusion. To tackle this problem, we solve for the objects' material parameters and the illumination simultaneously by nesting a microfacet model and a hemispherical area illumination model into inverse path tracing. Multiple experiments verify that the proposed approach outperforms the state-of-the-art method.
Using Speech to Visualise Shared Gaze Cues in MR Remote Collaboration
Conference
Allison Jing, Gun Lee, Mark Billinghurst
In this paper, we present a 360 panoramic Mixed Reality (MR) system that visualises shared gaze cues using contextual speech input to improve task coordination. We conducted two studies to evaluate the design of the MR gaze-speech interface exploring the combinations of visualisation style and context control level. Findings from the first study suggest that an explicit visual form that directly connects the collaborators' shared gaze to the contextual conversation is preferred. The second study indicates that the gaze-speech modality shortens the coordination time to attend to the shared interest, making the communication more natural and the collaboration more effective.
A Comparison of Spatial Augmented Reality Predictive Cues and their Effects on Sleep Deprived Users
Conference
Benjamin Volmer, James Baumeister, Linda Grosser, Stewart von Itzstein, Siobhan Banks, Bruce H Thomas
Spatial Augmented Reality (SAR) is a useful tool for procedural tasks as virtual instructions can be co-located with the physical task. Existing research has shown SAR predictive cues to further enhance performance. However, this research has been conducted while the user is not fatigued. This paper investigates predictive cues under a sub-optimal scenario by depriving users of sleep. Results from a 62-hour sleep deprivation experiment demonstrate that SAR predictive cues are beneficial to sleep deprived users. From the predictive cues outlined in this paper, the line cue maintained the best performance and was least impacted by early morning performance declines.
Distortion-free Mid-air Image Inside Refractive Surface and on Reflective Surface
Conference
Shunji Kiuchi, Naoya Koizumi
We propose an approach to display a distortion-free mid-air image inside a transparent refractive object and on a curved reflective surface. We compensate for the distortion by generating a light source image that cancels the distortions in the mid-air image caused by refraction and reflection. The light source image is generated via ray-tracing simulation by transmitting the desired view image to the mid-air imaging system, and by receiving the transmitted image at the light source position. Finally, we present the results of an evaluation of our method performed in an actual optical system.
HaptoMapping: Visuo-Haptic Augmented Reality by Embedding User-Imperceptible Tactile Display Control Signals in a Projected Image
Invited Journal
Yamato Miyatake, Takefumi Hiraki, Daisuke Iwai, Kosuke Sato
URL: https://doi.org/10.1109/TVCG.2021.3136214
This paper proposes HaptoMapping, a projection-based visuo-haptic augmented reality (VHAR) system that can render visual and haptic content independently and present consistent visuo-haptic sensations on physical surfaces. HaptoMapping controls wearable haptic displays via control signals, imperceptible to the user, embedded in projected images using a pixel-level visible light communication technique. The prototype system comprises a high-speed projector and three types of haptic devices: finger-worn, stylus, and arm-mounted. The finger-worn and stylus devices present vibrotactile sensations to a user's fingertips. The arm-mounted device presents stroking sensations on a user's forearm using arrayed actuators with synchronized hand projection mapping. We measured the developed system's maximum latency of haptic relative to visual sensations to be 93.4 ms. We conducted user studies on the latency perception of our VHAR system. The results revealed that the developed haptic devices can present haptic sensations without user-perceivable latencies, and that the visual-haptic latency tolerances of our VHAR system were 100, 159, and 500 ms for the finger-worn, stylus, and arm-mounted devices, respectively. Another user study with the arm-mounted device found that the visuo-haptic stroking system maintained both continuity and pleasantness when the spacing between substrates was relatively sparse, such as 20 mm, and significantly improved both continuity and pleasantness at 80 and 150 mm/s compared to haptic-only stroking. Lastly, we introduce four potential applications in daily scenes. Our methodology allows for a wide range of VHAR application designs without concern for latency and misalignment effects.
SEAR: Scaling Experiences in Multi-user Augmented Reality
Journal
Wenxiao Zhang, Bo Han, Pan Hui
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150467
In this paper, we present SEAR, a collaborative framework for Scaling Experiences in multi-user Augmented Reality (AR). Most AR systems benefit from computer vision algorithms to detect/classify/recognize physical objects for augmentation. A widely-used acceleration method is to offload compute-intensive tasks to the network edge. However, we show that the end-to-end latency, an important metric of mobile AR, may dramatically increase when offloading tasks from a large number of concurrent users. SEAR tackles this scalability issue through the innovation of a lightweight collaborative local cache. We build a prototype of SEAR to demonstrate its efficacy in scaling AR experiences.
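To picture what a lightweight collaborative local cache might look like, here is a minimal Python sketch in which a client reuses a co-located user's recognition result instead of offloading to the edge. The data structure, distance test, and all names are illustrative assumptions, not SEAR's actual design.

```python
import math

class SharedRecognitionCache:
    """Toy cache of recognition results shared among co-located AR users."""
    def __init__(self, radius_m=1.0):
        self.radius = radius_m
        self.entries = []                       # (position, label) pairs

    def lookup(self, pos):
        """Return a cached label near `pos`, or None to trigger edge offload."""
        for p, label in self.entries:
            if math.dist(pos, p) <= self.radius:
                return label                    # cache hit: skip the round-trip
        return None

    def insert(self, pos, label):               # store a fresh edge result
        self.entries.append((pos, label))

cache = SharedRecognitionCache()
cache.insert((1.0, 0.0, 2.0), "chair")
print(cache.lookup((1.2, 0.0, 2.1)))            # "chair" (hit within 1 m)
```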
Session: Locomotion (Asia-Pacific)
Tuesday, March 15, 13:00, NZDT UTC+13
Session Chair: Rob Lindeman
Discord URL: https://discord.com/channels/842181663248482334/951018594546880582
Effects of Virtual Room Size and Objects on Relative Translation Gain Thresholds in Redirected Walking
Conference
Dooyoung Kim, Jinwook Kim, Boram Yoon, Jae-eun Shin, Jeongmi Lee, Woontack Woo
This paper investigates how the size of a virtual space and the objects within it affect the threshold range of relative translation gains, a Redirected Walking (RDW) technique that scales the user's movement in virtual space at different ratios for width and depth. We estimate the relative translation gain thresholds in six spatial conditions configured from three room sizes, set according to differing Angles of Declination (AoDs) between eye-gaze and forward-gaze, and the presence of virtual objects. Results show that both room size and virtual objects significantly affect the threshold range, which is greater in the large-sized and furnished conditions.
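To make the notion of relative translation gains concrete, the sketch below scales a tracked physical displacement by separate width and depth gains. It is a minimal illustration of the general technique, not the authors' implementation; the axis conventions and gain values are assumptions.

```python
import numpy as np

def apply_relative_translation_gain(prev_phys, curr_phys, gain_width, gain_depth):
    """Map a physical head displacement to a virtual one with per-axis gains."""
    delta = curr_phys - prev_phys                      # physical displacement
    return np.array([delta[0] * gain_width,            # stretch/compress width (x)
                     delta[1],                         # keep height unscaled (y)
                     delta[2] * gain_depth])           # stretch/compress depth (z)

# Example: a 1 m forward step rendered as 1.2 m of virtual depth travel.
step = apply_relative_translation_gain(np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0, 1.2)
print(step)  # [0.  0.  1.2]
```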
RedirectedDoors: Redirection While Opening Doors in Virtual Reality
Conference
Yukai Hoshikawa, Kazuyuki Fujita, Kazuki Takashima, Morten Fjeld, Yoshifumi Kitamura
We propose RedirectedDoors, a novel technique for redirection in VR focused on door-opening behavior. The technique manipulates the user's walking direction by rotating the entire virtual environment at a certain angular ratio of the door being opened, while the virtual door's position is kept unmanipulated to ensure door-opening realism. Results of a user study using two types of door-opening interfaces (with and without a passive haptic prop) yielded estimated detection thresholds that generally indicate a higher space efficiency of redirection. Following the results, we derived usage guidelines for our technique that provide lower noticeability and higher acceptability.
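The core manipulation named in the abstract, rotating the scene at a fixed ratio of the door's opening angle, can be sketched as follows. The ratio value is a placeholder, not a detection threshold reported by the paper.

```python
def environment_rotation_deg(door_angle_deg, redirection_ratio=0.2):
    """Yaw (degrees) applied to the whole virtual scene while the door opens;
    the virtual door itself stays put to preserve door-opening realism."""
    return redirection_ratio * door_angle_deg

print(environment_rotation_deg(90.0))  # opening the door 90 deg rotates the scene 18 deg
```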
PseudoJumpOn: Jumping onto Steps in Virtual Reality
Conference
Kumpei Ogawa, Kazuyuki Fujita, Kazuki Takashima, Yoshifumi Kitamura
We propose PseudoJumpOn, a novel locomotion technique using a common VR setup that allows the user to experience a virtual step-up jumping motion by applying viewpoint manipulation to a physical jump on a flat floor. The core idea is to replicate the physical characteristics of ascending jumps; thus, we designed two viewpoint manipulation methods: gain manipulation, which differentiates the ascent and descent heights, and peak shifting, which delays the peak timing. The experimental results showed that the combination of these methods allowed participants to feel adequate reality and naturalness of actually jumping onto steps, even though they knew no physical steps existed.
Solitary Jogging with A Virtual Runner using Smartglasses
Conference
Takeo Hamada, Ari Hautasaari, Michiteru Kitazaki, Noboru Koshizuka
Group exercise is more effective for gaining motivation than exercising alone, but it can be difficult to always find such partners. In this paper, we explore the experiences that joggers have with a virtual partner instead of a human partner and report on the results of two controlled experiments evaluating our approach. In Study 1, we investigated how participants felt and how their behavior changed when they jogged indoors with a human partner or with a virtual partner compared to solitary jogging. The virtual partner was represented either as a full-body, limb-only, or a point-light avatar displayed on smartglasses. In Study 2, we investigated the differences between the three representations as virtual partners in an outdoor setting.
Optimal Target Guided Redirected Walking with Pose Score Precomputation
Conference
Sen-Zhe Xu, Tian Lv, Guangrong He, Chiahao Chen, Fang-Lue Zhang, Song-Hai Zhang
Most previous redirected walking (RDW) methods do not consider future collision possibilities after imperceptibly redirecting users. This paper combines subtle RDW methods with a reset strategy and proposes a novel solution for RDW. We discretize the representation of possible user positions and orientations by a series of standard poses and rate them based on the possibility that their reachable poses hit obstacles. A transfer path algorithm is proposed to measure the accessibility of the poses. Our method can imperceptibly redirect VR users to the best pose among all reachable poses. Experiments demonstrate that it outperforms state-of-the-art methods in various environment settings.
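As a rough illustration of pose score precomputation, the sketch below discretizes positions and headings on a grid and scores each pose by the clearance of the pose reachable one step ahead. The grid resolution, scoring rule, and obstacle model are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def precompute_pose_scores(room_w, room_h, obstacles, cell=0.5, headings=8):
    """Score every discretized pose (x, y, heading) by its forward clearance."""
    xs = np.arange(cell / 2, room_w, cell)
    ys = np.arange(cell / 2, room_h, cell)
    scores = np.zeros((len(xs), len(ys), headings))
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            for k in range(headings):
                a = 2 * np.pi * k / headings
                px, py = x + np.cos(a), y + np.sin(a)   # pose one step ahead
                d_obs = min((np.hypot(px - ox, py - oy) for ox, oy in obstacles),
                            default=np.inf)             # nearest point obstacle
                d_wall = min(px, py, room_w - px, room_h - py)
                scores[i, j, k] = max(min(d_obs, d_wall), 0.0)
    return scores

scores = precompute_pose_scores(5.0, 5.0, obstacles=[(2.5, 2.5)])
print(scores.shape)  # (10, 10, 8)
```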
FrictShoes: Providing Multilevel Nonuniform Friction Feedback on Shoes in VR
Journal
Chih-An Tsao, Tzu-Chun Wu, Hsin-Ruey Tsai, Tzu-Yun Wei, Fang-Ying Liao, Sean Chapman, Bing-Yu Chen
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150492
We propose a wearable device, FrictShoes, a pair of foot accessories, to provide multilevel nonuniform friction feedback to the feet. By independently controlling six brakes on six wheels underneath each FrictShoe, the friction levels of the wheels on each foot can be either matched or varied. We conducted user studies to understand how well users can distinguish friction force magnitudes (or levels), examine how users adjust and map multilevel nonuniform friction patterns to common VR terrains or ground textures, and evaluate whether the proposed feedback makes the feet feel as if walking on different terrains or ground textures in VR.
Session: Perception
Tuesday, March 15, 13:00, NZDT UTC+13
Session Chair: Scott Kuhl
Discord URL: https://discord.com/channels/842181663248482334/951018621990215680
Effects of Field of View on Dynamic Out-of-View Target Search in Virtual Reality
Conference
Kristen Grinyer, Robert J. Teather
We present a study of the effects of field of view (FOV), target movement, and number of targets on visual search performance in virtual reality. We compared visual search tasks in two FOVs (~65°, ~32.5°) under two target movement speeds (static, dynamic) while varying the visible target count, with targets potentially out of the user's view. We examined the expected linear relationship between search time and number of items, to explore how moving and/or out-of-view targets affected this relationship. Overall, search performance increased with a wide FOV, but decreased when targets were moving and with more visible targets. Neither FOV nor target movement meaningfully altered the linear relationship between search time and number of items.
The Smell Engine: A system for artificial odor synthesis in virtual environments
Conference
Alireza Bahremand, Mason Manetta, Jessica Lai, Byron Lahey, Christy Spackman, Brian H. Smith, Richard C. Gerkin, Robert LiKamWa
We devise a Smell Engine that presents users with a real-time odor synthesis of smells from a virtual environment. Using the Smell Composer framework, developers can configure odor sources in virtual space, then the Smell Mixer component dynamically estimates the odor mix that the user would smell, and a Smell Controller coordinates an olfactometer to physically present an approximation of the odor mix to the user's mask. Through a three-part user study, we found that the Smell Engine can help measure a subject's olfactory detection threshold and improve their ability to precisely localize odors in the virtual environment.
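One way to picture the Smell Mixer's job of estimating the odor mix at the user's nose is an inverse-square falloff over source distances. The Smell Engine's actual diffusion model and olfactometer mapping are not described in this abstract, so everything below is an assumed illustration.

```python
import numpy as np

def odor_mix(nose_pos, sources):
    """sources: iterable of (position, odorant_name, emission_rate) tuples."""
    mix = {}
    for pos, name, rate in sources:
        d2 = max(np.sum((np.asarray(nose_pos) - np.asarray(pos)) ** 2), 1e-6)
        mix[name] = mix.get(name, 0.0) + rate / d2   # nearer sources dominate
    return mix

print(odor_mix((0, 0, 0), [((1, 0, 0), "citrus", 1.0), ((2, 0, 0), "pine", 1.0)]))
# {'citrus': 1.0, 'pine': 0.25}
```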
All Shook Up: The Impact of Floor Vibration in Symmetric and Asymmetric Immersive Multi-user VR Gaming Experiences
Conference
Sungchul Jung, Yuanjie Wu, Ryan Douglas McKee, Robert W. Lindeman
This paper investigates the influence of floor-vibration tactile feedback on immersed users under symmetric and asymmetric tactile sensory cue conditions. With our custom-built, computer-controlled vibration floor, we implemented a cannonball shooting game for two physically-separated players. In the VR game, the two players shoot cannonballs to destroy their opponent's protective wall and cannon, while the programmed floor platform generates vertical vibrations depending on the experimental condition. We concluded that vibration provided to a pair of game players in immersive VR can significantly enhance the VR experience, but sensory symmetry does not guarantee improved gaming performance.
Shedding Light on Cast Shadows: An Investigation of Perceived Ground Contact in AR and VR
Invited Journal
Haley Adams, Jeanine Stefanucci, Sarah H Creem-Regehr, Grant Pointon, William B Thompson, Bobby Bodenheimer
URL: https://doi.org/10.1109/TVCG.2021.3097978
Virtual objects in augmented reality (AR) often appear to float atop real world surfaces, which makes it difficult to determine where they are positioned in space. This is problematic as many applications for AR require accurate spatial perception. In the current study, we examine how the way we render cast shadows--which act as an important monocular depth cue for creating a sense of contact between an object and the surface beneath it--impacts spatial perception. Over two experiments, we evaluate people's sense of surface contact given both traditional and non-traditional shadow shading methods in optical see-through augmented reality (OST AR), video see-through augmented reality (VST AR), and virtual reality (VR) head-mounted displays. Our results provide evidence that nontraditional shading techniques for rendering shadows in AR displays may enhance the accuracy of one's perception of surface contact. This finding implies a possible tradeoff between photorealism and accuracy of depth perception, especially in OST AR displays. However, it also supports the use of more stylized graphics like non-traditional cast shadows to improve perception and interaction in AR applications.
Effects of Transparency on Perceived Humanness: Implications for Rendering Skin Tones Using Optical See-Through Displays
Journal
Tabitha C. Peck, Jessica J Good, Austin Erickson, Isaac M Bynum, Gerd Bruder
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150521
Current optical see-through displays in the field of augmented reality are limited in their ability to display colors with low lightness in the hue, saturation, lightness (HSL) color space, causing such colors to appear transparent. This hardware limitation may add unintended bias into scenarios with virtual humans. We present an exploratory user study investigating whether differing opacity levels result in dehumanizing avatar and human faces. Results support that dehumanization occurs as opacity decreases. This suggests that in similar lighting, low lightness skin tones (e.g., Black faces) will be viewed as less human than high lightness skin tones (e.g., White faces).
Session: Machine Learning
Tuesday, March 15, 18:00, NZDT UTC+13
Session Chair: Miao Wang
Discord URL: https://discord.com/channels/842181663248482334/951018650343706666
Continuous Transformation Superposition for Visual Comfort Enhancement of Casual Stereoscopic Photography
Conference
Yuzhong Chen, Qijin Shen, Yuzhen Niu, Wenxi Liu
In this paper, we present a novel visual comfort enhancement method for casual stereoscopic photography via reinforcement learning based on continuous transformation superposition. To achieve the continuous transformation superposition, we prepare continuous transformation models for translation, rotation, and perspective transformations. Then we train a policy model to determine an optimal transformation chain to recurrently handle the geometric constraints and disparity adjustment. We further propose an attention-based stereo feature fusion module that enhances and integrates the binocular information. Experimental results on three datasets demonstrate that our method achieves superior performance to state-of-the-art methods.
SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers
Conference
Bingyao Huang, Haibin Ling
We propose to use spatial augmented reality (SAR) techniques to fool image classifiers by altering the physical light condition with a projector. The main challenge is how to generate robust and stealthy projections. For the first time, we formulate this problem as an end-to-end differentiable process and propose Stealthy Projector-based Adversarial Attack (SPAA). In SPAA, we approximate the real Project-and-Capture process using a neural network named PCNet, then we include PCNet in the optimization of projector-based attacks. Finally, we propose an algorithm that alternates between the adversarial loss and stealthiness loss optimization. Experiments show that SPAA clearly outperforms other methods.
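The alternating optimization the abstract describes can be sketched as a loop that switches between an adversarial loss and a stealthiness loss. Here `pcnet` and `classifier` stand in for the paper's trained networks; all names, the loss formulations, and the alternation schedule are assumptions rather than the released SPAA code.

```python
import torch

def optimize_projection(pcnet, classifier, scene, target_class, steps=200, lr=0.01):
    """Alternate between fooling the classifier and staying inconspicuous."""
    proj = torch.zeros_like(scene, requires_grad=True)   # projector pattern
    opt = torch.optim.Adam([proj], lr=lr)
    for step in range(steps):
        captured = pcnet(scene, proj)                    # simulated project-and-capture
        logits = classifier(captured)
        adv_loss = -logits[:, target_class].mean()       # push toward the target class
        stealth_loss = (captured - scene).abs().mean()   # keep the projection subtle
        loss = adv_loss if step % 2 == 0 else stealth_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        proj.data.clamp_(0.0, 1.0)                       # valid projector intensities
    return proj.detach()
```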
360 Depth Estimation in the Wild - the Depth360 Dataset and the SegFuse Network
Conference
Qi Feng, Hubert Shum, Shigeo Morishima
Single-view depth estimation from omnidirectional images has gained popularity with various applications. Although data-driven methods demonstrate significant potential in this field, scarce training data and ineffective 360 estimation algorithms are still two key limitations hindering accurate estimation across diverse domains. In this work, we first establish a large-scale dataset with varied settings by exploring the use of a plenteous source of data, internet 360 videos, with a test-time training method. We then propose an end-to-end multi-task network, SegFuse, to effectively learn from the dataset and estimate depth maps from diverse images. Experimental results show favorable performance against the state-of-the-art methods.
ScanGAN360: A Generative Model of Realistic Scanpaths for 360° Images
Journal
Daniel Martin, Ana Serrano, Alexander William Bergman, Gordon Wetzstein, Belen Masia
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150502
We present ScanGAN360, a new generative adversarial approach to generating realistic scanpaths for 360° images. We propose a novel loss function based on dynamic time warping and tailor our network to the specifics of 360° images. The quality of our generated scanpaths outperforms competing approaches by a large margin and is almost on par with the human baseline. ScanGAN360 allows fast simulation of large numbers of virtual observers whose behavior mimics real users, enabling a better understanding of gaze behavior, facilitating experimentation, and aiding novel applications in virtual reality and beyond.
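For readers unfamiliar with dynamic time warping, the basis of the abstract's loss, here is a plain DTW distance between two scanpaths. The paper itself uses a differentiable variant suited to GAN training, so this standalone version is only a conceptual sketch.

```python
import numpy as np

def dtw_distance(path_a, path_b):
    """DTW distance between scanpaths given as (N, 2) and (M, 2) gaze arrays."""
    n, m = len(path_a), len(path_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(path_a[i - 1] - path_b[j - 1])  # pointwise cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
b = np.array([[0.0, 0.1], [1.0, 0.9]])
print(dtw_distance(a, b))  # small value: similar paths sampled at different rates
```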
A Virtual Reality Based System for the Screening and Classification of Autism
Journal
Marta Robles, Negar Namdarian Nosratabadi, Julia Otto, Evelin Wassiljew, Nassir Navab, Christine Falter Wagner, Daniel Roth
URL: https://doi.org/10.1109/TVCG.2022.3150489
Autism Spectrum Disorders (ASD) - also known simply as Autism or Autism Spectrum Conditions (ASC) - are neurodevelopmental disorders (ICD-11 6A02) associated with characteristic deficits in social interaction and altered communicative behavior patterns. As a consequence, many autistic individuals struggle in everyday life, which sometimes manifests in depression, unemployment, or addiction. One crucial problem in patient support and treatment is the long waiting time to diagnosis, approximated at 13 months on average; yet the earlier an intervention can take place, the better the patient can be supported, which was identified as a crucial factor. We propose a system to support the screening of ASD based on a virtual reality (VR) social interaction, namely a shopping experience, with an embodied agent. During this everyday interaction, behavioral responses are tracked and recorded. We analyze this behavior with machine learning approaches to classify participants from an ASD group against a typically developed (TD) control sample with high accuracy, demonstrating the feasibility of the approach. We believe that such tools can strongly impact the way mental disorders are assessed and may help to identify further objective criteria and categorizations.
Online Projector Deblurring Using a Convolutional Neural Network
Journal
Yuta Kageyama, Daisuke Iwai, Kosuke Sato
URL: https://doi.org/10.1109/TVCG.2022.3150465
Projector deblurring is an important technology for dynamic projection mapping (PM), where the distance between a projector and a projection surface changes in time. However, conventional deblurring techniques do not support dynamic PM because they need to project calibration patterns to estimate the amount of defocus blur each time the surface moves. We present a deep neural network that can compensate for defocus blur in dynamic PM without projecting calibration patterns. We also propose a pseudo-projection technique for synthesizing physically plausible training data. Both simulation and physical PM experiments showed that our technique alleviated the defocus blur in dynamic PM.
Session: Medical and Health Care
Tuesday, March 15, 18:00, NZDT UTC+13
Session Chair: Junjun Pan
Discord URL: https://discord.com/channels/842181663248482334/951018774985863169
Inducing Emotional Stress from the ICU Context in VR
Conference
Sebastian Weiß, Steffen Busse, Wilko Heuten
Nurses in ICUs can suffer from emotional stress, which may severely affect their health and impact the quality of care. Stress Inoculation Training is a promising method for learning coping strategies: it involves exposure to stressors in a controlled environment at increasing intensity. In this work, using VR, we implement an emotional (moral) stressor at three intensity levels, based on literature reviews and expert interviews. In a user study, subjects dealt with virtual characters while experiencing the stressor of having to comply with the patient's family's wishes against one's own beliefs. The resulting stress was measured in objective and subjective ways. We show that emotional stress can be induced using VR and artificial characters.
2D versus 3D: A Comparison of Needle Navigation Concepts between Augmented Reality Display Devices
Conference
Florian Heinrich, Lovis Schwenderling, Fabian Joeres, Christian Hansen
Little fundamental research has focused on the design and hardware selection of augmented reality needle navigation systems for surgical procedures. We contribute to this area by comparing three state-of-the-art concepts displayed on an optical see-through HMD and stereoscopic projectors. A 2D glyph resulted in higher accuracy but required more insertion time. Punctures guided by 3D see-through vision were less accurate but faster. A static representation of the correctly positioned needle showed excessively high target errors. Future work should focus on improving the accuracy of the see-through vision concept; until then, the glyph visualization is recommended.
Supporting Playful Rehabilitation in the Home using Virtual Reality Headsets and Force Feedback Gloves
Conference
Qisong Wang, Bo Kang, Per Ola Kristensson
Virtual Reality (VR) is a promising platform for home rehabilitation with the potential to completely immerse users within a playful experience. To explore this area we design, implement, and evaluate a system that uses a VR headset in conjunction with force feedback gloves to present users with a playful experience for home rehabilitation. The system immerses the user within a virtual cat bathing simulation that allows users to practice fine motor skills by progressively completing three cat-care tasks. The study results demonstrate the positive role that playfulness may play in the user experience of VR rehabilitation.
Evaluating Perceptional Tasks for Medicine: A Comparative User Study Between a Virtual Reality and a Desktop Application
Conference
Jan Niklas Hornbeck, Monique Meuschke, Lennert Zyla, André-Joel Heuser, Justus Toader, Felix Popp, Christiane J. Bruns, Christian Hansen, Rabi R. Datta, Kai Lawonn
One way to improve the performance of precision-based medical VR applications is to provide suitable visualizations. Today, these "suitable" visualizations are mostly transferred from desktop to VR without considering that their spatial and temporal performance might change in VR. This may not lead to an optimal solution, which can be crucial for precision-based tasks: misinterpretation of shape or distance in a surgical or pre-operative simulation can affect the chosen treatment and thus directly impact the outcome. Therefore, we evaluate the performance differences of multiple visualizations of 3D surfaces, in terms of shape and distance estimation, between desktop and VR applications.
Session: Negative Effects
Tuesday, March 15, 18:00, NZDT UTC+13
Session Chair: Barrett Ens
Discord URL: https://discord.com/channels/842181663248482334/951018685781389332
Answering With Bow and Arrow: Questionnaires and VR Blend Without Distorting the Outcome
Conference
Jan Philip Gründling, Daniel Zeiler, Benjamin Weyers
Negative effects generated by transitioning between physical and virtual reality, or the time that passes between the actual sensation and its measurement when using questionnaires, might affect subjective measurements. This motivates research on innovative questionnaire modalities. Using the same interaction technique as the actual task, this study integrates answering the questionnaire into the task itself: we built a bow-and-arrow game in which the player shoots at random targets as quickly as possible and then answers questionnaires by firing at a target representing the rating. Notably, the presence (SUS-PQ), satisfaction (ASQ), and workload (SMEQ) ratings did not change across the in-VR, text-panel VR, and desktop PC versions of the questionnaires.
Systematic Design Space Exploration of Discrete Virtual Rotations in VR
Conference
Daniel Zielasko, Jonas Heib, Benjamin Weyers
Continuous virtual rotation is one of the biggest contributors to cybersickness, while simultaneously being necessary for many VR scenarios in which the user's physical body rotation is limited. A solution is discrete virtual rotation. We classify existing work on discrete virtual rotation and systematically investigate the two dimensions of target (rotation) acquisition (selection vs. directional) and body-based rotation (yes vs. no) regarding their impact on performance in a naive and a primed rotational search task, spatial orientation, and usability. We find rotation by selection most successful in both search tasks and no effect of the body-based factor on spatial orientation.
Omnidirectional Galvanic Vestibular Stimulation in Virtual Reality
Journal
Colin Groth, Jan-Philipp Tauscher, Nikkel Heesen, Max Hattenbach, Susana Castillo, Marcus Magnor
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150506
Cybersickness often taints virtual experiences. Its source can be associated with the perceptual mismatch that occurs when our eyes tell us we are moving while we are, in fact, at rest. We reconcile the signals from our senses using omnidirectional galvanic vestibular stimulation (GVS), stimulating the vestibular canals behind our ears with low-current electrical signals specifically attuned to the visually displayed camera motion. We describe how to calibrate and generate the appropriate GVS signals in real time for pre-recorded omnidirectional videos exhibiting ego-motion in all three spatial directions, and show that this significantly reduces discomfort for cybersickness-susceptible VR users.
Session: Audio in VR
Wednesday, March 16, 8:30, NZDT UTC+13
Session Chair: Iana Podkosova
Discord URL: https://discord.com/channels/842181663248482334/951018857240334357
Investigating how speech and motion realism influence the perceived personality of virtual characters and agents.
Conference
Sean Thomas, Ylva Ferstl, Rachel McDonnell, Cathy Ennis
The portrayed personality of virtual characters and agents influences how we perceive and engage with digital applications. Understanding the influence of speech and animation allows us to design more personalized and engaging characters. Using performance capture data from multiple datasets, we contrast performance-driven characters to those portrayed by generated gestures and synthesized speech, analysing how the features of each influence personality according to the Big Five personality traits. Our results highlight motion as dominant for portraying extraversion and speech for communicating agreeableness and emotional stability, supporting the development of virtual characters, social agents and 3DUI agents with targeted personalities.
Spatial Updating in Virtual Reality - Auditory and Visual Cues in a Cave Automatic Virtual Environment
Conference
Christiane Breitkreutz, Jennifer Brade, Sven Winkler, Alexandra Bendixen, Philipp Klimant, Georg Jahn
While moving through a real environment, egocentric location representations are effortlessly and automatically updated. But in synthetic environments, spatial updating is often disrupted or incomplete due to a lack of body-based movement information. To prevent disorientation, the support of spatial updating via other sensory movement cues is necessary. In the presented experiment, participants performed a spatial updating task inside a CAVE, with either no orientation cues, three visible distant landmarks, or one continuous auditory cue present. The data showed improved task performance when an orientation cue was present, with auditory cues providing at least as much improvement as visual cues.
Audience Experiences of a Volumetric Virtual Reality Music Video
Conference
Gareth W. Young, Neill C. O'Dwyer, Matthew Moynihan, Aljosa Smolic
Modern music videos apply various media capture techniques and creative postproduction technologies to provide a myriad of stimulating and artistic approaches to audience entertainment and engagement for viewing across multiple devices. Within this domain, volumetric technologies are becoming a popular means of recording and reproducing musical performances for new audiences to access via traditional 2D screens and emergent virtual reality platforms. However, the precise impact of volumetric video in virtual reality music video entertainment has yet to be fully explored from a user's perspective. Here we show how users responded to volumetric representations of music performance in virtual reality.
Comparing Direct and Indirect Methods of Audio Quality Evaluation in Virtual Reality Scenes of Varying Complexity
Journal
Thomas Robotham, Olli S. Rummukainen, Miriam Kurz, Marie Eckert, Emanuël A. P. Habets
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150491
This study uses four subjective audio quality evaluation methods (viz. multiple-stimulus with and without reference for direct scaling, and rank-order elimination and pairwise comparison for indirect scaling) to investigate the contributing factors present in multi-modal 6-DoF VR on quality ratings of real-time audio rendering. Five scenes were designed for evaluation with various amounts of user interactivity and complexity. Our results show rank-order elimination proved to be the fastest method, required the least amount of repetitive motion, and yielded the highest discrimination between spatial conditions. Ratings across scenes indicate complex scenes and interactive aspects of 6-DoF VR can impede quality judgments.
Session: Haptics
Wednesday, March 16, 8:30, NZDT UTC+13
Session Chair: Mar Gonzalez Franco
Discord URL: https://discord.com/channels/842181663248482334/951018879281410109
Tapping with a Handheld Stick in VR: Redirection Detection Thresholds for Passive Haptic Feedback
Conference
Yuqi Zhou, Voicu Popescu
This paper investigates providing grounded passive haptic feedback to a user of a VR application through a handheld stick with which the user taps virtual objects. Two haptic redirection methods are proposed: the DriftingHand method, which alters the position of the user's virtual hand, and the VariStick method, which alters the length of the virtual stick. Detection thresholds were measured in a user study (N = 60) for multiple stick lengths and multiple distances from the user to the real object. VariStick and DriftingHand provide undetectable offset ranges of [-20 cm, +13 cm] and [-11 cm, +11 cm], respectively.
STROE: An Ungrounded String-Based Weight Simulation Device
Conference
Alexander Achberger, Pirathipan Arulrajah, Kresimir Vidackovic, Michael Sedlmair
We present STROE, a new ungrounded string-based weight simulation device. STROE is worn as an add-on to a shoe that in turn is connected to the user's hand via a controllable string. A motor is pulling the string with a force according to the weight to be simulated. The design of STROE allows the users to move more freely than other state-of-the-art devices for weight simulation. We conducted a user study that empirically shows that STROE is able to simulate the weight of various objects and, in doing so, increases users' perceived realism and immersion of VR scenes.
LevelEd SR: A Substitutional Reality Level Design Workflow
Conference
Lee Beever, Nigel W. John
We present LevelEd SR, a substitutional reality level design workflow that combines AR and VR systems and is built for consumer devices. The system enables passive haptics through the inclusion of physical objects from within a space into a virtual world. A validation study produced quantitative data that suggests players benefit from passive haptics in VR games with an improved game experience and increased levels of presence. An evaluation found that participants were accepting of the system, rating it positively using the System Usability Scale questionnaire and would want to use it again to experience substitutional reality.
Body Warping Versus Change Blindness Remapping: A Comparison of Two Approaches to Repurposing Haptic Proxies for Virtual Reality
Conference
Cristian Patras, Mantas Cibulskis, Niels Christian Nilsson
This paper details a study comparing two techniques for repurposing haptic proxies; namely haptic retargeting based on body warping and change blindness remapping. Participants performed a simple button-pressing task, and 24 virtual buttons were mapped onto four haptic proxies with varying degrees of misalignment. Body warping and change blindness remapping were used to realign the real and virtual buttons, and the results indicate that users failed to reliably detect realignment of up to 7.9 cm for body warping and up to 9.7 cm for change blindness remapping. Moreover, change blindness remapping yielded significantly higher self-reported agency, and marginally higher ownership.
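As background, body warping typically offsets the rendered hand toward the virtual target as the real hand approaches the physical proxy, so that the two coincide at contact. The linear ramp and its start distance below are common choices assumed for illustration, not the study's exact parameters.

```python
import numpy as np

def warped_hand(real_hand, real_proxy, virtual_button, start_dist=0.4):
    """Rendered hand position under a simple linear body-warping ramp."""
    real_hand = np.asarray(real_hand, float)
    offset = np.asarray(virtual_button, float) - np.asarray(real_proxy, float)
    d = np.linalg.norm(np.asarray(real_proxy, float) - real_hand)
    alpha = np.clip(1.0 - d / start_dist, 0.0, 1.0)   # 0 far away, 1 at contact
    return real_hand + alpha * offset

# Hand 10 cm from the proxy: 75% of the 5 cm proxy-button offset is applied.
print(warped_hand([0.0, 0.0, 0.3], [0.0, 0.0, 0.4], [0.05, 0.0, 0.4]))
```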
"Let;s Meet and Work It Out": Understanding and Mitigating Encountered-Type of Haptic Devices Failure Modes in VR
Conference
Elodie Bouzbib, Gilles Bailly
Encountered-type haptic devices (ETHDs) are robotic interfaces that physically overlay virtual counterparts prior to user interaction in Virtual Reality. In theory they reliably provide haptics in virtual environments, yet they raise several intrinsic design challenges for properly displaying rich haptic feedback and interactions in VR applications. In this paper, we use a Failure Mode and Effects Analysis (FMEA) approach to identify, organise, and analyse the failure modes and their causes at the different stages of an ETHD scenario, and highlight appropriate solutions from the literature to mitigate them. We thereby help explain these interfaces' lack of deployment and ultimately identify guidelines for future ETHD designers.
Exploring Pseudo-Weight in Augmented Reality Extended Displays
Conference
Shohei Mori, Yuta Kataoka, Satoshi Hashiguchi
Augmented reality (AR) allows us to wear virtual displays that are registered to our bodies and devices. Such virtually extendable displays, or AR extended displays (AREDs), are personal and free from physical restrictions. Existing work has explored the new design space for improved user performance. Contrary to this direction, we focus on the weight that the user perceives from AREDs, even though they are virtual and have no physical weight. Our results show evidence that AREDs can be a source of pseudo-weight. We also systematically evaluate the perceived weight changes depending on the layout and delay in the visualization system.
Session: Locomotion (Europe)
Wednesday, March 16, 8:30, NZDT UTC+13
Session Chair: Evan Suma Rosenberg
Discord URL: https://discord.com/channels/842181663248482334/951018919194415134
Foldable Spaces: An Overt Redirection Approach for Natural Walking in Virtual Reality
Conference
Jihae Han, Andrew Vande Moere, Adalberto L. Simeone
We propose Foldable Spaces, a novel overt redirection approach that dynamically 'folds' the geometry of the virtual environment to enable natural walking. Based on this approach, we developed three techniques: (1) Horizontal, which folds virtual space like the pages in a book; (2) Vertical, which rotates virtual space along a vertical axis; and (3) Accordion, which corrugates virtual space to bring faraway places closer to the user. In a within-subjects study, we compared our foldable techniques along with a base condition, Stop & Reset. Our findings show that Accordion was the most continuous and preferred technique for experiencing natural walking.
Design and Evaluation of Travel and Orientation Techniques for Desk VR
Conference
Guilherme dos Santos Amaro, Daniel Mendes, Rui Rodrigues
Typical VR interactions can be tiring, resulting in decreased comfort and session duration compared with traditional non-VR interfaces, which may, in turn, reduce productivity. Desk VR experiences provide the convenience and comfort of a desktop experience together with the benefits of VR immersion. We explore travel and orientation techniques targeted at desk VR users, using both controllers and a large multi-touch surface. Results revealed advantages for a continuous controller-based travel method and a trend favouring a dragging-based orientation technique. We also identified possible trends towards task focus affecting overall cybersickness symptomatology.
Eye Tracking-based LSTM for Locomotion Prediction in VR
Conference
Niklas Stein, Gianni Bremer, Markus Lappe
Virtual Reality allows users to perform natural movements such as walking in virtual environments. This comes with the need for a large tracking space. To optimise use of the physical space, prediction models for upcoming behavior are helpful. Here we examined whether eye movements can improve such predictions. Eighteen participants walked through a virtual environment while performing different tasks. The recorded data were used to train an LSTM model. We found that positions 2.5 s into the future can be predicted with an average error of 65 cm. The benefit of eye movement data depended on task and environment; situations with changes in walking speed benefited from the inclusion of eye data.
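A minimal PyTorch sketch of the kind of predictor described, with past pose and gaze features in and a position offset 2.5 s ahead out, might look as follows. The feature dimensionality, sequence length, and architecture are assumptions, not the study's trained model.

```python
import torch
import torch.nn as nn

class LocomotionLSTM(nn.Module):
    """Predict a future (x, z) displacement from a window of pose+gaze features."""
    def __init__(self, feat_dim=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # predicted (x, z) offset in metres

    def forward(self, seq):                # seq: (batch, time, feat_dim)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])       # predict from the last hidden state

model = LocomotionLSTM()
pred = model(torch.randn(4, 90, 8))        # e.g. 90 samples of tracking history
print(pred.shape)                          # torch.Size([4, 2])
```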
The Chaotic Behavior of Redirection - Revisiting Simulations in Redirected Walking
Conference
Christian Hirt, Yves Kompis, Christian Holz, Andreas Kunz
Redirected Walking (RDW) is a common technique to allow real walking when exploring large virtual environments in constrained physical spaces. Many existing approaches were evaluated in simulation only, and researchers argued that the findings would translate to real scenarios to motivate the effectiveness of their algorithms. In this paper, we argue that simulation-based evaluations require critical reflection. We present simulations that expose the chaotic process fundamental to RDW, in which altering the initial user position by mere millimeters can drastically change the resulting steering behavior. This insight suggests that redirection is more sensitive to the underlying data than previously assumed.
How to Take a Brake from Embodied Locomotion - Seamless Status Control Methods for Seated Leaning Interfaces
Conference
Carlo Flemming, Benjamin Weyers, Daniel Zielasko
Embodied locomotion, especially leaning, has one major problem: the regular functionality of the body parts it uses is effectively overwritten. Thus, in this work, we propose 6 different status control methods that seamlessly switch off (brake) a seated leaning locomotion interface. Different input modalities, such as a physical button, voice, and gestures/metaphors, are used and evaluated against a baseline condition and a leaning interface with a bilateral transfer function. In a user study, the most diegetic interface, a hoverboard metaphor, was preferred most. The overall results are more heterogeneous, and the interfaces vary in their suitability for different scenarios.
Session: Rendering
Wednesday, March 16, 13:00, NZDT UTC+13
Session Chair: Lili Wang
Discord URL: https://discord.com/channels/842181663248482334/951018980305420368
SivsFormer: Parallax-Aware Transformers for Single-image-based View Synthesis
Conference
Chunlan Zhang, Chunyu Lin, Kang Liao, Lang Nie, Yao Zhao
Single-image-based view synthesis is challenging as it requires inferring content beyond what is immediately visible. Views generated by previous methods suffer from visually unpleasant holes, deformations, and artifacts. We propose SivsFormer for high-quality and realistic view synthesis. In particular, a warping and occlusion handling module is designed to reduce the influence of parallax. Subsequently, a disparity alignment module captures long-range information over the scene and ensures that pixels move correctly. Moreover, we present a parallax-aware loss function that explicitly quantifies the magnitude of parallax to improve the quality of the synthesized images. Our approach achieves superior performance.
Reconstructing 3D Virtual Face with Eye Gaze from a Single Image
Conference
Jiadong Liang, Yunfei Liu, Feng Lu
Reconstructing a 3D virtual face from a single image has a wide range of applications in virtual reality. In this paper, we propose to reconstruct a 3D virtual face with eye gaze information from a single image. Specifically, we design the 3D face reconstruction with a precise eye region to retain correct eye gaze information, and we propose eye-contact-guided facial rotation to perform both eye contact and gaze estimation simultaneously. Extensive experiments on different tasks demonstrate the significant gains of the proposed approach.
Interactive Mixed Reality Rendering on Holographic Pyramid
Conference
Danqing Dai, Xuehuai Shi, Lili Wang, Xiangyu Li
Currently, ray tracing and image-based lighting (IBL) have shortcomings when rendering the metallic virtual object displayed in the holographic pyramid in mixed reality. We propose a mixed reality rendering method to render glossy and specular reflection effects on metallic virtual objects displayed in the holographic pyramid based on the surrounding real environment at interactive frame rates. Compared with IBL and screen-space ray tracing, our method can generate the rendering results closer to the ground truth at the same time cost. Compared with Monte Carlo path tracing, our method is 2.5-4.5 times faster in generating rendering results of the comparable quality.
Rectangular Mapping-based Foveated Rendering
Conference
Jiannan Ye, Anqi Xie, Susmija Jabbireddy, Yunchuan Li, Xubo Yang, Xiaoxu Meng
With the rapid increase in display resolution and the demand for interactive frame rates, rendering acceleration is becoming more critical for a wide range of virtual reality applications. Foveated rendering addresses this challenge by rendering the display at a non-uniform resolution. Motivated by the non-linear optical lens equation, we present rectangular mapping-based foveated rendering (RMFR), a simple yet effective implementation of a foveated rendering framework. RMFR supports varying levels of foveation according to eccentricity and scene complexity. Compared with traditional foveated rendering methods, RMFR provides a superior level of perceived visual quality while consuming minimal rendering cost.
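Foveated rendering in general spends resolution as a function of eccentricity from the gaze point. The falloff below illustrates only that principle; it does not reproduce RMFR's rectangular mapping, and its constants are assumptions.

```python
import numpy as np

def shading_rate(px, gaze, fovea_px=150.0, falloff=0.35, min_rate=0.125):
    """Fraction of full resolution to spend at pixel px given the gaze point."""
    ecc = np.linalg.norm(np.asarray(px, float) - np.asarray(gaze, float))
    rate = 1.0 / (1.0 + falloff * max(ecc - fovea_px, 0.0) / fovea_px)
    return max(rate, min_rate)             # never drop below a floor resolution

print(shading_rate((960, 540), (960, 540)))   # 1.0 at the fovea
print(shading_rate((1900, 540), (960, 540)))  # ~0.35, coarser in the periphery
```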
Dynamic Projection Mapping for Robust Sphere Posture Tracking Using Uniform/Biased Circumferential Markers
Invited Journal
Yuri Mikawa, Tomohiro Sueishi, Yoshihiro Watanabe, Masatoshi Ishikawa
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2021.3111085
In spatial augmented reality, widely dynamic projection mapping systems have been developed as a novel approach to graphics presentation for widely moving objects in dynamic situations. However, this approach necessitates a novel tracking marker design that is resistant to random and complex occlusion and out-of-focus blurring, which conventional markers have not achieved. This study presents a uniform circumferential marker that becomes an ellipse in perspective projection and expresses geometric information. It can track the relative posture of a dynamically moving sphere with high speed, high accuracy, and robustness owing to its continuous contour lines, thereby supporting both wide-range movement in the depth direction and human interaction. Moreover, a biased circumferential marker is proposed to embed unique coding, where the absolute posture is decoded with a novel recognition algorithm. In addition, rough initialization using the geometry of multiple ellipses is proposed for both markers to start automatic and precise tracking. Real-time rotation visualization onto the surface of a moving sphere is made possible with the high-speed, widely dynamic projection mapping system. The markers are demonstrated to exhibit sufficient basic tracking performance as well as robustness against blurring and occlusion compared to conventional dot-based markers.
Dynamic Multi-Projection Mapping Based on Parallel Intensity Control
Journal
Takashi Nomoto, Wanlong Li, Hao-Lun Peng, Yoshihiro Watanabe
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150488
Projection mapping using multiple projectors is promising for spatial augmented reality; however, it is difficult to apply to dynamic scenes, because conventional methods struggle to reduce the latency from motion to projection. To mitigate this, we propose a novel method of controlling intensity based on a pixel-parallel calculation for each projector with low latency. Additionally, our pixel-parallel calculation allows a distributed system configuration, such that the number of projectors can be increased to form a network. We demonstrate seamless mapping onto dynamic scenes at 360 fps with 9.5 ms latency using ten cameras and four projectors.
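As a loose illustration of why pixel-parallel control scales, each display pixel can be solved independently across projectors in one vectorized pass. The visibility-proportional sharing rule below is an assumption for demonstration, not the paper's control law.

```python
# Pixel-parallel intensity blending across K projectors (assumed
# visibility-proportional sharing rule, for illustration only).
import numpy as np

def blend_intensities(target, visibility):
    """target:     (H, W) desired intensity on the surface.
    visibility:    (K, H, W) per-projector attenuation in [0, 1]
                   (0 = occluded, 1 = fully visible).
    Returns (K, H, W) per-projector outputs. Every pixel is computed
    independently, so the update is one parallelizable pass."""
    total = visibility.sum(axis=0, keepdims=True)              # (1, H, W)
    weights = np.where(total > 0,
                       visibility / np.maximum(total, 1e-6), 0.0)
    # Each projector compensates for its own attenuation; the attenuated
    # contributions then sum back to the target intensity on the surface.
    return np.where(visibility > 0,
                    weights * target / np.maximum(visibility, 1e-6), 0.0)
```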
Session: Computer Vision
Wednesday, March 16, 13:00, NZDT UTC+13
Session Chair: Bruce Thomas
Discord URL: https://discord.com/channels/842181663248482334/951019026509889556
Real-Time Gaze Tracking with Event-Driven Eye Segmentation
Conference
Yu Feng, Nathan Goulding-Hotta, Asif Khan, Hans Reyserhove, Yuhao Zhu
Gaze tracking is an essential component in X-Reality. Modern gaze tracking algorithms are heavyweight: they operate at no more than 5 Hz on mobile processors, even though cameras can capture at real-time rates (> 30 Hz). This paper presents a real-time eye tracking algorithm that operates at 30 Hz on a mobile processor and achieves 0.1°-0.5° gaze accuracy, all while requiring one to two orders of magnitude fewer parameters than state-of-the-art eye tracking algorithms. The key is an Auto ROI mode, which continuously predicts and processes only the Regions of Interest (ROIs) of near-eye images. In particular, we discuss how to accurately predict the ROI by emulating an event camera without requiring special hardware support.
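A minimal sketch of the event-camera emulation idea, assuming plain frame differencing with a fixed intensity threshold (the paper's ROI predictor is more sophisticated):

```python
# Emulated event camera for ROI prediction (assumed thresholded frame
# differencing; illustrative, not the paper's learned predictor).
import numpy as np

def predict_roi(prev_frame, frame, thresh=15, pad=8):
    """Threshold per-pixel intensity change between consecutive grayscale
    frames, then bound the resulting 'events' to get the next eye ROI."""
    events = np.abs(frame.astype(np.int16)
                    - prev_frame.astype(np.int16)) > thresh
    ys, xs = np.nonzero(events)
    if xs.size == 0:                 # no motion: fall back to the full frame
        return 0, 0, frame.shape[1], frame.shape[0]
    h, w = frame.shape
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, w - 1)
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, h - 1)
    return x0, y0, x1 - x0 + 1, y1 - y0 + 1   # (x, y, width, height)
```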
Structured Light of Flickering Patterns Having Different Frequencies for a Projector-Event-Camera System
Conference
Yuichiro Fujimoto, Taishi Sawabe, Masayuki Kanbara, Hirokazu Kato
Our objective is to realize a projector-camera system that combines an event camera with a projector under strong ambient light. Specifically, this study proposes a new structured-light method that combines flickering patterns of different frequencies to acquire the correspondence between the image pixels of the event camera and those of the projector. The method does not rely on co-axial frame-based measurement or a synchronization mechanism between projector and camera, and is thus applicable to most general event cameras. Experiments confirm that the proposed method obtains the correspondence robustly and with reasonable accuracy in a bright room (up to 2,600 lux) under general indoor lighting and additional light projection.
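To illustrate the decoding side: assuming events are binned per pixel at a known sampling rate, the dominant flicker frequency at each pixel can be recovered with a per-pixel FFT. The binning scheme and the mapping from frequency to projector coordinate are assumptions here.

```python
# Per-pixel flicker frequency recovery (assumed event binning and
# frequency-to-coordinate mapping; illustrative only).
import numpy as np

def dominant_frequency(event_counts, fs):
    """event_counts: (T, H, W) events binned per frame at sampling rate fs.
    Returns an (H, W) map of the dominant flicker frequency per pixel,
    which would index the projector pattern that illuminated it."""
    spectrum = np.abs(np.fft.rfft(event_counts, axis=0))
    spectrum[0] = 0                              # drop the DC component
    freqs = np.fft.rfftfreq(event_counts.shape[0], d=1.0 / fs)
    return freqs[np.argmax(spectrum, axis=0)]
```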
Instant Reality: Gaze-Contingent Perceptual Optimization for 3D Virtual Reality Streaming
Journal
Shaoyu Chen, Budmonde Duinkharjav, Xin Sun, Li-Yi Wei, Stefano Petrangeli, Jose Echevarria, Claudio Silva, Qi Sun
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150522
To advance VR applications in a cloud-edge setting, we propose a perceptually optimized progressive 3D streaming method for spatial quality and temporal consistency. Our model schedules the streaming assets for optimal spatial-temporal quality based on human visual mechanisms. Subjective studies and objective analysis demonstrate the framework's enhanced visual quality and temporal consistency compared with alternative solutions. We envision our framework enabling future efficient immersive streaming applications, such as esports and teleconferencing, without compromising high visual quality and interactivity.
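A toy sketch of the scheduling idea, assuming a greedy per-frame byte budget and a simple eccentricity falloff; both are illustrative stand-ins for the paper's perceptual model.

```python
# Greedy gaze-weighted asset scheduling (assumed falloff model and
# budget rule; not the paper's optimization).
import heapq

def schedule(assets, gaze_deg, budget_bytes):
    """assets: list of (name, size_bytes, quality_gain, position_deg).
    Streams the assets with the highest perceptually weighted gain per
    byte until the frame's bandwidth budget is spent."""
    def sensitivity(ecc_deg):
        return 1.0 / (1.0 + 0.3 * ecc_deg)      # assumed eccentricity falloff
    heap = []
    for name, size, gain, pos in assets:
        score = gain * sensitivity(abs(pos - gaze_deg)) / size
        heapq.heappush(heap, (-score, name, size))
    order, remaining = [], budget_bytes
    while heap and remaining > 0:
        _, name, size = heapq.heappop(heap)
        if size <= remaining:                   # skip assets that don't fit
            order.append(name)
            remaining -= size
    return order
```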
Prepare for Ludicrous Speed: Marker-based Instantaneous Binocular Rolling Shutter Localization
Journal
Juan Carlos Dibene Simental, Yazmin Maldonado, Leonardo Trujillo, Enrique Dunn
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150485
We propose a marker-based geometric framework for the high-frequency absolute pose estimation of a binocular camera system by using the data captured during the exposure of a single rolling shutter scanline. We leverage the projective invariants of a planar pattern to define a geometric reference and determine 2D-3D correspondences from edge measurements in individual scanlines. To tackle the ensuing multi-view estimation problem, achieve real-time operation, and minimize latency, we develop a pair of custom solvers leveraging our geometric setup. We demonstrate the effectiveness of our approach with an FPGA implementation achieving a localization throughput of 129.6 kHz with 1.5 µs latency.
Robust Tightly-Coupled Visual-Inertial Odometry with Pre-built Maps in High Latency Situations
Journal
Hujun Bao, Weijian Xie, Quanhao Qian, Danpeng Chen, Shangjin Zhai, Nan Wang, Guofeng Zhang
URL: https://doi.ieeecomputersociety.org/10.1109/TVCG.2022.3150495
In this paper, we present a novel monocular visual-inertial odometry (VIO) system with pre-built maps deployed on a remote server. By coupling VIO with geometric priors from the pre-built maps, our system can tolerate the high latency and low frequency of a global localization service. First, sparse point clouds are obtained from the dense mesh according to the localization results; these sparse point clouds are then used directly for feature tracking and the state update of VIO to suppress drift accumulation. Both experiments on datasets and a real-time AR demo show that our method outperforms state-of-the-art methods.
Session: Inclusive VR
Wednesday, March 16, 13:00, NZDT UTC+13
Session Chair: Victor Adriel Oliveira
Discord URL: https://discord.com/channels/842181663248482334/951019069321150514
Asymmetric Lateral Field-of-View Restriction to Mitigate Cybersickness During Virtual Turns
Conference
Fei Wu, Evan Suma Rosenberg
We propose and evaluate a novel variant of field-of-view restriction that uses an asymmetric mask to obscure only one side region of the periphery during rotation and laterally shifts the center of restriction towards the direction of the turn. A between-subjects study was conducted to compare the side restrictor, a traditional symmetric restrictor, and a control condition without restriction. The side restrictor was effective in mitigating cybersickness, reducing discomfort, improving subjective visibility, and enabling longer immersion time. These results indicate that side field-of-view restriction is an effective cybersickness mitigation technique for virtual environments with frequent turns.
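For illustration, such a restrictor can be expressed as a vignette whose center shifts with yaw velocity and which darkens only the trailing side. All radii, the shift model, and the falloff below are assumptions, not the study's exact parameters.

```python
# Asymmetric lateral FOV restriction mask (assumed parameters and
# falloff; illustrative, not the study's implementation).
import numpy as np

def side_restrictor_mask(w, h, yaw_vel, max_shift=0.25,
                         inner=0.35, outer=0.55):
    """Per-pixel visibility mask: the restriction center shifts laterally
    toward the turn (yaw_vel > 0 = turning right) and only the trailing
    side of the periphery is restricted."""
    xs = (np.arange(w) / w) - 0.5
    ys = (np.arange(h) / h) - 0.5
    X, Y = np.meshgrid(xs, ys)
    shift = np.clip(yaw_vel, -1, 1) * max_shift  # center moves with the turn
    r = np.sqrt((X - shift) ** 2 + Y ** 2)
    vignette = np.clip((outer - r) / (outer - inner), 0.0, 1.0)
    trailing = X * np.sign(yaw_vel) < 0          # side opposite the turn
    mask = np.ones((h, w))
    mask[trailing] = vignette[trailing]          # restrict only that side
    return mask
```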
You're in for a Bumpy Ride! Uneven Terrain Increases Cybersickness While Navigating with Head Mounted Displays
Conference
Samuel Ang, John Quarles
Cybersickness poses a challenge to the broader adoption of virtual reality technologies. In this study, we examine the impact of traversing uneven virtual terrain on cybersickness in VR. We recruited 38 participants to navigate across three virtual terrain variants: a flat surface, regular bumps, and terrain generated from Perlin noise. We collected cybersickness data using the Fast Motion Sickness Scale, the Simulator Sickness Questionnaire, and galvanic skin response. Our results indicate that users felt greater cybersickness when traversing uneven terrain than when traversing flat geometry. We recommend that designers exercise caution when incorporating uneven terrain into their virtual experiences.
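For readers who want to reproduce the third terrain condition, a textbook 2D Perlin gradient-noise heightfield (not the authors' code) can be generated as follows:

```python
# Classic 2D Perlin gradient noise (textbook implementation, vectorized).
import numpy as np

def perlin2d(x, y, seed=0):
    """Evaluate Perlin noise at coordinates x, y (scalars or arrays);
    suitable for generating an uneven terrain heightfield."""
    rng = np.random.default_rng(seed)
    perm = np.concatenate([rng.permutation(256)] * 2)   # hash table
    angles = rng.uniform(0, 2 * np.pi, 256)
    grads = np.stack([np.cos(angles), np.sin(angles)], axis=-1)

    xi, yi = np.floor(x).astype(int) & 255, np.floor(y).astype(int) & 255
    xf, yf = x - np.floor(x), y - np.floor(y)
    u = xf * xf * xf * (xf * (xf * 6 - 15) + 10)        # quintic fade
    v = yf * yf * yf * (yf * (yf * 6 - 15) + 10)

    def dot_corner(ix, iy, dx, dy):
        g = grads[perm[perm[ix] + iy] & 255]            # hashed gradient
        return g[..., 0] * dx + g[..., 1] * dy

    n00 = dot_corner(xi, yi, xf, yf)
    n10 = dot_corner(xi + 1, yi, xf - 1, yf)
    n01 = dot_corner(xi, yi + 1, xf, yf - 1)
    n11 = dot_corner(xi + 1, yi + 1, xf - 1, yf - 1)
    nx0 = n00 + u * (n10 - n00)                         # bilinear blend
    nx1 = n01 + u * (n11 - n01)
    return nx0 + v * (nx1 - nx0)
```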
Auditory Feedback for Standing Balance Improvement in Virtual Reality
Conference
M. Rasel Mahmud, Michael Stewart, Alberto Cordova, John Quarles
Virtual Reality (VR) users often experience postural instability, which can be a major barrier to universal usability, especially for persons with balance impairments. We recruited 42 participants (21 with balance impairments, 21 without) to investigate the impact of several auditory techniques on balance in VR. Participants performed standing visual exploration and standing reach-and-grasp tasks. Results showed that each auditory technique improved balance in VR for all participants. Spatial and CoP (center of pressure) audio improved balance significantly more than the other audio techniques. The techniques presented in this research could be used in future virtual environments to improve standing balance.
Session: Advanced UI
Wednesday, March 16, 14:00, NZDT UTC+13
Session Chair: Ryan McMahan
Discord URL: https://discord.com/channels/842181663248482334/951019111943667722
Galea: A physiological sensing system for behavioral research in virtual environments
Conference
Guillermo Bernal, Nelson Hidalgo Julia, Conor Russomanno, Pattie Maes
The pairing of virtual reality technology with physiological sensing has gained much interest in clinical settings and beyond. We present Galea, a device that measures physiological responses while users experience virtual content, giving behavioral, affective computing, and human-computer interaction researchers simultaneous access to data from the parasympathetic and sympathetic nervous systems. We provide design considerations and circuit characterization results from in-vivo recordings, and present two examples to help contextualize how these signals can be used in VR settings. We also discuss the importance and contributions of this work and future challenges that should be considered.
Validating the Benefits of Glanceable and Context-Aware Augmented Reality for Everyday Information Access Tasks
Conference
Shakiba Davari, Feiyu Lu, Doug Bowman
Glanceable Augmented Reality interfaces have the potential to provide fast and efficient information access for the user. However, the placement and accessibility of virtual content depend on the user's context. We designed a Context-Aware AR interface that can intelligently adapt to two different contexts: solo and social. We evaluated information access using Context-Aware AR compared with current mobile phones and non-adaptive Glanceable AR interfaces. Our results indicate the advantages of the Context-Aware AR interface in information access efficiency, in avoiding negative effects on primary tasks or social interactions, and in overall user experience.
User Preference for Navigation Instructions in Mixed Reality
Conference
Jaewook Lee, Fanjie Jin, Younsoo Kim, David Lindlbauer
Mixed Reality (MR) holds the promise of integrating navigation instructions directly into users' visual fields, making them less obtrusive and more expressive. Current solutions, however, focus on conventional designs such as arrows and do not fully leverage the technological possibilities of MR. We contribute a remote survey and an in-person Virtual Reality study showing that while familiar designs such as arrows are well received, novel navigation aids such as avatars or desaturation of non-target areas are viable alternatives. We distill the results into a set of guidelines for MR content creators and future context-aware MR navigation systems.
Tangiball: Foot-Enabled Embodied Tangible Interaction with a Ball in Virtual Reality
Conference
Lila Bozgeyikli, Evren Bozgeyikli
Interaction with tangible user interfaces in virtual reality (VR) is known to offer several benefits. In this study, we explored foot-enabled embodied interaction in VR through a room-scale tangible soccer game (Tangiball). Users interacted with a physical ball with their feet in real time while seeing its virtual counterpart inside a VR head-mounted display. A user study was performed with 40 participants, in which Tangiball was compared with a control condition of foot-enabled embodied interaction with a purely virtual ball. The results revealed that tangible interaction significantly improved user performance and presence, while no difference in motion sickness was detected between the tangible and virtual versions.
Redirecting Desktop Interface Input to Animate Cross Reality Avatars
Conference
Jason Wolfgang Woodworth, David Michael Broussard, Christoph W. Borst
We present and evaluate methods to redirect desktop inputs such as eye gaze and mouse pointing to a VR-embedded avatar. We use these methods to build a novel interface that allows a desktop user to give presentations in remote meetings, such as VR-based conferences or classrooms, with a more engaging "cross-reality" avatar capable of gestures similar to those performed by standard immersed avatars. A comparison of desktop avatar control with headset-based control suggests that users consider the enhanced desktop avatar comparably human-like to the VR headset condition, implying that our methods could be useful for future cross-reality remote learning tools.
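A minimal sketch of one such redirection, assuming the desktop pointer drives the avatar's head rotation; the field-of-view constants are illustrative.

```python
# Map a desktop pointer position to avatar head yaw/pitch (degrees).
# The FOV constants are assumptions, not the paper's calibration.
def pointer_to_head_rotation(mx, my, w, h, fov_x=90.0, fov_y=60.0):
    """mx, my: pointer position in pixels; w, h: window size.
    Returns (yaw, pitch) so the avatar appears to look where the
    desktop user is pointing."""
    yaw = (mx / w - 0.5) * fov_x
    pitch = -(my / h - 0.5) * fov_y     # screen y grows downward
    return yaw, pitch
```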
UrbanRama: Navigating Cities in Virtual Reality
Invited Journal
Shaoyu Chen, Fabio Miranda, Nivan Ferreira, Marcos Lage, Harish Doraiswamy, Corinne Brenner, Connor Defanti, Michael Koutsoubis, Luc Wilson, Kenneth Perlin, Claudio T Silva
URL: https://doi.org/10.1109/TVCG.2021.3099012
Exploring large virtual environments, such as cities, is a central task in several domains, such as gaming and urban planning. VR systems can greatly aid this task by providing an immersive experience; however, a common issue with viewing and navigating a city in the traditional sense is that users can obtain either a local or a global view, but not both at the same time, requiring them to continuously switch between perspectives, losing context and being distracted from their analysis. In this paper, our goal is to allow users to navigate to points of interest without changing perspectives. To accomplish this, we design an intuitive navigation interface that takes advantage of the strong sense of spatial presence provided by VR. We supplement this interface with a perspective that warps the environment, called UrbanRama, based on a cylindrical projection, providing a mix of local and global views. The interface was designed through an iterative process in collaboration with architects and urban planners. We conducted a qualitative and a quantitative pilot user study to evaluate UrbanRama, and the results indicate the effectiveness of our system in reducing perspective changes while ensuring that the warping doesn't affect distance and orientation perception.
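For intuition, a cylindrical warp of this kind can be sketched as follows; the bend radius, the clamp, and the exact parameterization are assumptions, not the paper's formulation.

```python
# Geometric sketch of a cylindrical environment warp: nearby geometry
# stays flat while distant geometry curls up into view (assumed
# parameterization, not the UrbanRama formulation).
import numpy as np

def cylindrical_warp(p, eye, radius):
    """p, eye: 3D points as arrays [x, y, z] with y up.
    Bends the ground plane onto a cylinder of the given radius
    around the viewer beyond that radius."""
    v = p - eye
    d = np.hypot(v[0], v[2])                 # horizontal distance to point
    if d <= radius:
        return p                             # near field: leave unwarped
    theta = min((d - radius) / radius,       # excess distance -> arc angle,
                np.pi / 2)                   # clamped so it never folds over
    dir_xz = np.array([v[0], 0.0, v[2]]) / d
    new_d = radius + radius * np.sin(theta)  # wrap excess around the cylinder
    lift = radius * (1.0 - np.cos(theta))    # vertical rise along the cylinder
    return eye + dir_xz * new_d + np.array([0.0, v[1] + lift, 0.0])
```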