Posters will be presented in Virbela, with fast-forward sessions held in Zoom. For instructions on accessing Virbela, see: https://ieeevr.org/2022/attend/virbela-instructions/
| Posters | Location |
| --- | --- |
| Doctoral Consortium Posters (Session 1) | Expo Hall A |
| Doctoral Consortium Posters (Session 2) | Expo Hall B |
| Doctoral Consortium Posters (Session 3) | Expo Hall C |
| Posters (Session 1) | Expo Hall A |
| Posters (Session 2) | Expo Hall B |
| Posters (Session 3) | Expo Hall C |
Doctoral Consortium - Expo Hall A
Immersive Analytics for Understanding Ecosystem Services Tradeoffs
Doctoral Consortium
Booth: D28
Benjamin Powley
Existing immersive systems for analysing geospatial data relating to ecosystem services are not designed for all groups involved in land use decision making. Land management scientists have different requirements from non-experts, as the tasks they perform differ. Land use decision making needs better tools to assist the analysis and exploration of land use decisions and their effects on ecosystem services. In this research, a user-centred design process is applied to develop and evaluate an immersive VR visualization tool to assist with better decision making around land use. Interviews with experts found issues with how their current tool presents analysis results, and problems with communicating their results to stakeholders. A literature review found no pre-existing immersive VR systems specifically for analysing tradeoffs among ecosystem services.
Exploration of Context and Physiological Cues for Personalized Emotion-Adaptive Virtual Reality
Doctoral Consortium
Booth: D31
Kunal Gupta
Immersive Virtual Reality (VR) can create compelling context-specific emotional experiences, but very few studies explore the importance of emotion-relevant contextual cues in VR. In this thesis, I investigate how to use combined contextual and physiological cues to improve emotion recognition in VR and enhance shared VR experiences. The main novelty is the creation of the first Personalized Real-time Emotion-Adaptive Context-Aware VR (PerAffectly VR) system which provides significant insight into how to create, measure, and share emotional VR experiences.
Improving Multi-User Interaction for Mixed Reality Telecollaboration
Doctoral Consortium
Booth: D27
Faisal Zaman
Mixed reality approaches merge real and virtual worlds to create new environments and visualizations for real-time interaction. However, existing systems do not utilise the user's real environment, lack detail in dynamic environments, and often lack multi-user capabilities. This research explores the multi-user aspect of immersive collaboration, where an arbitrary number of co-located and remote users can be immersed in a single or merged collaborative mixed reality space. The aim is to enable users to experience VR/AR together, irrespective of the type of HMD, and to support their collaborative tasks. The main goal is to develop an immersive collaboration platform in which users can utilize the space around them while collaborating and switching between the perspectives of other co-located and remote users.
A Mobile Intervention to Promote Social Skills in Children with Autism Spectrum Disorder Using AR Face Masks
Doctoral Consortium
Booth: D32
Hiroshika Nadeeshani Premarathne
Autism Spectrum Disorder (ASD) is a lifelong neuro-developmental disorder characterized by several behaviours, including deficits in emotion recognition and face-directed eye gaze. Although behavioural therapists help children with ASD improve these skills, the high demand for services and lack of resources create a need for alternative interventions to complement therapy sessions. My PhD aims to develop an Augmented Reality (AR) based mobile application that provides such an alternative and assists children with ASD in identifying emotions and communicating with their close family members using face-directed eye gaze.
Using Multimodal Input in Augmented Virtual Teleportation
Doctoral Consortium
Booth: D26
Prasanth Sasikumar
Augmented Reality (AR) and Virtual Reality (VR) can create compelling emotional collaborative experiences, but very few studies have explored the importance of sharing a user's live environment and physiological cues. In this PhD thesis, I investigate how to use scene reconstruction and emotion recognition to enhance shared collaborative AR/VR experiences. I have developed a framework with two main parts: 1) live scene capture for real-time environment reconstruction, and 2) sharing multimodal input such as gaze, gesture, and physiological cues. The main novelty of the research is that it is one of the first systems for real-time sharing of environment and emotion cues, providing significant insight into how to create, measure, and share remote collaborative experiences. The research will be useful in application domains such as remote assistance, tourism, training, and entertainment. It will also enable interfaces that automatically adapt to the user's emotional needs and environment, providing a better collaborative experience.
A Tangible Augmented Reality Programming Learning Environment (TARPLE) for Active, Guided Learning
Doctoral Consortium
Booth: D33
Dmitry Resnyansky
This PhD project aims to bring together technological and educational perspectives by understanding how objectives of programming learning and principles of active learning and embodied learning can be supported and enhanced with AR and TUI technologies. This work presents an approach to the design and evaluation of a TARPLE prototype with enhanced functionality to encourage active guided learning of a text-based OOP language. The system supports natural interaction with learning material, and embodiment and contextualisation of information in 3D space. One of the goals of this thesis has been to understand how empirical studies can inform the design of a TARPLE that supports learning of text-based programming languages and development of a basic debugging skillset.
Designing and Optimizing Daily-wear Photophobic Smart Sunglasses
Doctoral Consortium
Booth: D25
Xiaodan Hu
Photophobia, also known as light sensitivity, is a condition of abnormal visual intolerance to light. Traditional sunglasses and tinted glasses typically worn by individuals with photophobia provide only linear dimming, making it difficult to see content in the dark regions of a high-contrast environment (e.g., indoors at night). This paper presents smart dimming sunglasses that use a spatial light modulator (SLM) to flexibly dim the user's field of view based on scene detection from an HDR camera. A problem arises when the user views a distant object: the occlusion mask displayed on the SLM becomes blurred because it is out of focus, providing insufficient modulation. An optimization model is designed to dilate the occlusion mask appropriately. Camera measurements and a preliminary test with real users verify that the optimized dimming effect can filter the desired amount of incoming light through a blurred mask.
The Impact of the Informational Load of Presence Illusions on Users' Attention and Memory
Doctoral Consortium
Booth: D34
Daniel A. Muñoz
Presence has become an expected outcome of most VR experiences due to its positive relationship with engagement, task performance, behavior change, and other outcomes. However, the current literature reports contradictory results regarding presence, cognitive load, working memory, and attention. This study explores the cognitive load of presence under the framework of illusions to understand how these illusions impact users' attention and memory. We aim to quantify and rank the information built by presence (the Place Illusion and the Plausibility Illusion), contributing knowledge for future design and for effectively managing attention and distraction in virtual reality experiences. This document discusses our theoretical direction and a proposed psychophysiological study.
Doctoral Consortium - Expo Hall B
Gamified VR for Socially Isolated Adolescents with Significant Illness
Doctoral Consortium
Booth: D25
Udapola Balage Hansi Shashiprabha Udapola
Adolescents with significant illness face various psychosocial and mental wellbeing challenges during hospitalisation. Social isolation from family and peers is a significant concern for this group. Several digital interventions have been proposed to connect these young people with others, such as video conferencing, social media, social robots, and online games, and research so far has found these to benefit adolescents' wellbeing. Social VR is a novel social interaction mechanism that allows users to interact socially within an immersive 3D virtual environment with an embodied experience. Play in social VR spaces is generally identified as a motivational factor for adolescents to engage in social interactions. Therefore, integrating game technologies into a social VR space could intrinsically motivate socially isolated adolescents to engage socially. The main goal of this research project is to enhance the social engagement of socially isolated adolescents by fostering positive interactions with their peers within a safe, gamified virtual environment. To achieve this, the research will design and develop an intervention using game and VR technologies, and evaluate it with the target user group to investigate the impact of the implemented game mechanics on social engagement and connectedness.
XR for Improving Cardiac Catheter Ablation Procedure
Doctoral Consortium
Booth: D34
Nisal Manisha Udawatta Kankanamge Don
Cardiac arrhythmia refers to abnormalities of heart rhythm, and the cardiac catheter ablation procedure provides the best therapeutic outcomes for this life-threatening pathology. An electrophysiologist performs the procedure, which involves navigating 'catheters' into the chambers of the heart through peripheral blood vessels, studying the cardiac electrophysiology, and performing ablations. An electrophysiologist must possess a comprehensive understanding of cardiac electrophysiology and precise instrument handling due to the sensitivity of the procedure. In the conventional approach, electroanatomical mapping systems and fluoroscopic visualizations assist the procedure; however, their limitations reduce its effectiveness. Two main scenarios have been identified for improving the effectiveness of the procedure: intraoperative guidance and procedure training. This study aims to examine how extended reality technologies (e.g., AR/VR) can be used to improve the effectiveness of the cardiac catheter ablation procedure.
Mixed Reality Interaction for Mobile Knowledge Work
Doctoral Consortium
Booth: D26
Verena Biener
Knowledge workers typically work on some kind of computer and, other than an internet connection, rarely need access to specific work environments or devices. This makes it feasible for them to work in different environments or in mobile settings, such as public transportation. In such spaces, comfort and productivity can suffer from hardware limitations such as small screens and constrained input devices, and from environmental clutter. Mixed reality (MR) has the potential to solve such issues: it can provide additional display space that even includes the third dimension, open up new possibilities for interacting with virtual content using gestures or spatially tracked devices, and allow users to modify the work environment according to their personal preferences. This doctoral thesis aims at exploring the challenges of using MR for mobile knowledge work and how to effectively support knowledge-worker tasks through appropriate interaction techniques.
Dynamic facial expressions on virtual humans to facilitate virtual reality (VR) mental health therapy
Doctoral Consortium
Booth: D31
Shu Wei
This ongoing project aims to utilize dynamic facial expressions on virtual humans to enhance the effectiveness and efficiency of virtual reality (VR) mental health therapy. A systematic review of virtual humans in mental health VR indicated that only around 10% of applications used dynamic facial expressions. The potential of virtual characters' emotional richness is understudied, and it is unclear how facial expressions affect individuals differently in virtual environments. We will therefore focus on understanding people's behavioural, physiological, and psychological reactions to facially animated characters in VR experimental studies. The first study examines whether particular non-verbal behaviours can enhance people's therapy engagement by applying warm facial expressions and head nods to a virtual coach. Future experiments will look at mixed facial expressions and assess people's visual attention (through eye-tracking) and interpretation of emotional faces. This research will explore how best to use facial expressions to facilitate VR therapy through the practice of psychiatric research, VR programming, and 3D animation.
Robust Redirected Walking in the Wild
Doctoral Consortium
Booth: D27
Niall L. Williams
Locomotion is a fundamental component of experiences in virtual reality (VR). However, locomotion in VR is often difficult because the layouts of the physical and virtual environments are often different, which may cause unobstructed paths in the virtual world to correspond to obstructed paths in the physical world. Thus, in order to deliver a comfortable and immersive virtual experience to users, it is important that the user can explore the virtual world using techniques that help them avoid collisions with unseen physical objects. Redirected walking (RDW) is one such technique that enables collision-free locomotion in VR using real walking. Although RDW shows promise as an effective locomotion interface, it has seen relatively little adoption in the consumer market due to the difficulty in deploying effective RDW algorithms that are robust to different environment layouts and different users' perceptual thresholds. For my thesis, I am focused on developing RDW methods that are capable of enabling collision-free locomotion in arbitrary physical and virtual environments for a wide range of users.
Context-Aware Inference and Adaptation in Augmented Reality
Doctoral Consortium
Booth: D32
Shakiba Davari
Augmented Reality (AR) offers the potential for easy and efficient information access, reinforcing the widespread belief that AR glasses are the next generation of personal computing devices. However, to realize this all-day AR vision, the AR interface must address the challenges that the constant and pervasive presence of virtual content can cause for the user. The optimal interface, that is, the most efficient yet least intrusive one, in one context may be the worst interface in another. Throughout the day, as the user switches context, an optimal all-day interface must adapt its virtual content display and interactions as well. This work proposes a research agenda to design and validate adaptation techniques and context-aware AR interfaces, and introduces a framework for the design of such intelligent interfaces.
Balancing Realities by Smoothing Cross-Reality Interactions
Doctoral Consortium
Booth: D28
Matt Gottsacker
Virtual reality (VR) devices have a demonstrated capability to make users feel present in a virtual world. Research has shown that, at times, users desire a less immersive system that provides them awareness of and the ability to interact with elements from the real world and with a variety of devices. Understanding such cross-reality interactions is an under-explored research area that will become increasingly important as immersive devices become more ubiquitous. The planned focus of my dissertation is to investigate the social norms that are complicated by these interactions and design solutions that lead to meaningful interactions. As a second-year PhD student, I am excited about the possibility of discussing my research at the IEEE VR 2022 Doctoral Consortium and getting feedback from peers and mentors about the direction of my dissertation.
Designing Immersive Tools for Supporting Cognition in Remote Scientific Collaboration
Doctoral Consortium
Booth: D33
Monsurat Olaosebikan
Scientists collaborate remotely across institutions, countries and continents. However, collaborating remotely is challenging. Video-conferencing tools used for meetings limit the cognitive practices that collaborators can partake in. In virtual reality (VR) users can gain back spatial affordances present in collocated collaboration and we can design interactions that would not be possible in the real world. My research aims to investigate how VR can support cognition in remote scientific collaboration through the design, development, and study of Embodied Notes: a cognitive support tool designed to be used in a collaborative virtual environment.
Doctoral Consortium - Expo Hall C
Improving presence of virtual humans through paralinguistics
Doctoral Consortium
Booth: D25
Andrew H Maxim
Social presence and plausibility are two of the most important constructs studied in the IEEE VR community for improving user experience in virtual environments. However, the virtual humans used in these environments lack adaptive speech, which directly influences their social presence and plausibility, since humans adapt their speech during conversations. Research has explored how pitch, rate of speech, and intensity can be manipulated in human discourse, but little has been done on using adaptive speech in dialogue with virtual humans. This dissertation attempts to create a framework and a system that enable adaptive virtual human speech. This includes using machine learning and user modeling to manipulate paralinguistic features such as pitch, rate of speech, pause duration, and intensity via Speech Synthesis Markup Language (SSML). Understanding the interplay of paralinguistic features between virtual humans and humans will improve the social presence of virtual humans. By achieving this, virtual interactions can be moved toward the dynamic level of human-to-human interaction, increasing the co-presence and plausibility of virtual human characters.
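As an illustration of the kind of SSML-based prosody manipulation described above, here is a minimal sketch. The `<prosody>` and `<break>` elements and their attributes are standard SSML; the helper function and the parameter values are hypothetical, standing in for values a user model would predict:

```python
def prosody_ssml(text, pitch="+5%", rate="95%", volume="medium", pause_ms=300):
    """Wrap text in SSML <prosody> markup to manipulate paralinguistic features.

    All parameter values here are placeholders; an adaptive system would set
    them from a user model or machine-learning prediction.
    """
    return (
        "<speak>"
        f'<prosody pitch="{pitch}" rate="{rate}" volume="{volume}">{text}</prosody>'
        f'<break time="{pause_ms}ms"/>'
        "</speak>"
    )

# e.g., a slightly higher-pitched, slower utterance with a longer pause:
print(prosody_ssml("Nice to meet you.", pitch="+10%", rate="90%", pause_ms=500))
```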
Leveraging AR Cues towards New Navigation Assistant Paradigm
Doctoral Consortium
Booth: D34
Yu Zhao
Extensive research has shown that the spatial knowledge users acquire when navigating an unfamiliar environment is greatly reduced when automated navigation systems supplant many of the planning and decision-making tasks. Progress in augmented reality (AR), particularly AR head-mounted displays (HMDs), foreshadows the prevalence of such devices as computational platforms of the future. AR displays open a new design space for navigational aids that address this problem by superimposing virtual imagery over the environment. This dissertation abstract proposes a research agenda that investigates how to effectively leverage AR cues to support both navigation efficiency and spatial learning in walking scenarios.
Annotation in Asynchronous Collaborative Immersive Analytic Environments using Augmented Reality
Doctoral Consortium
Booth: D26
Zahra Borhani
Immersive Analytics (IA) with Augmented Reality (AR) head-mounted displays provides a different paradigm for analyzing multidimensional data and externalizing thoughts by utilizing the stereoscopic nature of headsets. However, annotation in IA-AR is challenging and not well understood. In addition, collaborative IA environments add another level of complexity for users operating on complex visualized datasets. Current AR systems focus mainly on synchronous collaboration, while asynchronous collaboration has remained unexplored. This project investigates annotation in IA for asynchronous collaborative environments. We present our research studies on virtual annotation types and introduce a new filtering annotation technique for IA.
Effects of Asymmetric Locomotion Methods on Collaborative Navigation and Wayfinding in Shared Virtual Environments
Doctoral Consortium
Booth: D33
Soumyajit Chakraborty
Navigation and wayfinding can be accomplished either by a single person or by a group of people. With the help of immersive virtual reality technology, significant research has been conducted on how a person can navigate and wayfind in a virtual world. However, there has been little work asking how multiple people can collaboratively navigate and wayfind in a virtual world. In this proposal, we investigate this question with a specific interest in how different locomotion methods affect the knowledge acquired by a group of individuals in a distributed, shared virtual environment.
Posters - Expo Hall A
CV-Mora Based Lip Sync Facial Animations for Japanese Speech
Poster
Booth: B11
Ryoto Kato: Tokyo Metropolitan University; Yusuke Kikuchi: Tokyo Metropolitan University; Vibol Yem: Tokyo Metropolitan University; Yasushi Ikei: The University of Tokyo
We propose a new method for generating authentic real-time facial animations from face mesh data corresponding to the fifty-six consonant-vowel (CV) types of morae that form the basis of Japanese speech. Our method produces facial expressions by the weighted addition of fifty-three face meshes, based on real-time mapping of the voice stream to registered morae. In a user study, facial expressions produced during Japanese speech were rated as more natural with our method than with popular existing methods, English-based real-time Oculus lip sync and volume-intensity-based facial animation.
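A minimal sketch of the weighted mesh addition described above, assuming all meshes share vertex topology; the array shapes, names, and the displacement-field formulation are illustrative, not the authors' implementation:

```python
import numpy as np

def blend_faces(base_mesh, mora_meshes, weights):
    """Weighted addition of face meshes (blendshape-style compositing).

    base_mesh:   (V, 3) neutral face vertices
    mora_meshes: (53, V, 3) registered mora face meshes
    weights:     (53,) per-mora activations estimated from the voice stream
    """
    offsets = mora_meshes - base_mesh          # per-mora displacement fields
    # Contract the mora axis: result is the neutral face plus a weighted sum
    # of displacements, evaluated each frame for real-time animation.
    return base_mesh + np.tensordot(weights, offsets, axes=1)
```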
Gaze Capture based Considerate Behaviour Control of Virtual Guiding Agent
Poster
Booth: B12
Pinjung Chen: Tokyo Institute of Technology; Hironori Mitake: Tokyo Institute of Technology; Shoichi Hasegawa: Tokyo Institute of Technology
Agents in VR have wide applications such as guidance. Most current agents are passive, so users must suspend their current task and make an explicit request to the agent. Agents should instead open interactions proactively and naturally, without being bothersome. We propose a virtual guidance agent that provides voice explanations at appropriate times, using gaze tracking, attention-amount estimation, and an attention-driven state machine. We use a time-decayed moving average of the angle between the gaze direction and the front direction of the face. We implemented the method in VR and experimentally evaluated its effectiveness in a virtual guided tour.
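A minimal sketch of the time-decayed moving average described above; the decay constant and the attention interpretation are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def gaze_face_angle(gaze_dir, face_front):
    """Angle (radians) between the gaze direction and the face-front direction."""
    g = gaze_dir / np.linalg.norm(gaze_dir)
    f = face_front / np.linalg.norm(face_front)
    return np.arccos(np.clip(np.dot(g, f), -1.0, 1.0))

class AttentionEstimator:
    """Time-decayed moving average of the gaze/face angle.

    A persistently small average suggests sustained attention straight ahead
    (e.g., at an exhibit); `tau` (seconds) is an assumed decay constant.
    """
    def __init__(self, tau=2.0):
        self.tau = tau
        self.avg = 0.0

    def update(self, angle, dt):
        alpha = 1.0 - np.exp(-dt / self.tau)   # frame-rate-independent decay
        self.avg += alpha * (angle - self.avg)
        return self.avg
```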
Perceptions of Colour Pickers and Companions in Virtual Reality Art-Making
Poster
Booth: B13
Marylyn Alex: University of Auckland; Burkhard Wuensche: University of Auckland; Danielle Lottridge: University of Auckland
Virtual reality art is reshaping digital art experiences but may elicit different first impressions across disparate age groups. We investigate first impressions of VR colour pickers and the impact of a virtual companion via an online survey with 63 adults and 24 older adults. The colour pickers differed significantly in perceived hedonic qualities. We found no statistical differences between perceptions of adults and older adults. The virtual companion had no significant effect on participants' overall experiences. However, we found statistical trends where older adults rated the virtual companion higher in terms of companionship and making VR art more engaging.
Augmenting VR Ski Training using Time Distortion
Poster
Booth: B14
Takashi Matsumoto: Tokyo Institute of Technology; Erwin Wu: Tokyo Institute of Technology; Hideki Koike: Tokyo Institute of Technology
Virtual reality-based sports simulators are widely developed, making training in a virtual environment possible. Methods using temporal features have also been introduced to realize adaptive training. In this paper, we study the effect of time distortion on alpine ski training to find out how modifying the temporal space can affect sports training. Experiments are conducted to investigate how fast/slow and static/dynamic time-distortion-based training, respectively, can impact user performance.
Investigating Display Position of a Head-Fixed Augmented Reality Notification for Dual-task
Poster
Booth: B35
Hyunjin Lee: KAIST; Woontack Woo: KAIST
Providing additional information at the proper position on an augmented reality (AR) head-mounted display (HMD) can help increase AR performance and usability in dual-task situations. Our study therefore investigated where to place notifications for dual-task use. We compared eight display positions and two tasks (single and dual) to identify the appropriate area for displaying notifications. We confirmed that the middle-right position reduces response time and task load. In contrast, the top-left is a location to avoid for notifications during AR dual-tasks. Our study contributes to designing AR notifications on HMDs to enhance everyday AR experiences.
Augmented Reality Fitts' Law Input Comparison Between Touchpad, Pointing Gesture, and Raycast
Poster
Booth: B21
Domenick Mifsud: Colorado State University; Adam Sinclair Williams: Colorado State University; Robert J Teather: Carleton University; Francisco Raul Ortega: Colorado State University
With the goal of exploring the impact of transparency on selection in augmented reality (AR), we present a Fitts' law experiment with 18 participants, comparing three input methods (finger-based pointing gesture, controller touchpad, and controller raycast) across four target transparency levels (0%, 30%, 60%, and 90%) in an optical see-through AR head-mounted display. The results indicate that transparency has little effect on selection throughput and error rates. Overall, the raycast input method performed significantly better than the pointing gesture and touchpad inputs in terms of error rate and throughput in all opacity conditions.
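For readers unfamiliar with the metric, a minimal sketch of how Fitts' law throughput is commonly computed from such trials. The effective-width (Shannon) formulation is a standard ISO 9241-9 style convention, not necessarily the authors' exact analysis:

```python
import numpy as np

def fitts_throughput(distances, effective_widths, movement_times):
    """Mean per-trial throughput (bits/s) via the Shannon formulation.

    ID_e = log2(D / W_e + 1), throughput = mean(ID_e / MT).
    """
    ide = np.log2(np.asarray(distances) / np.asarray(effective_widths) + 1.0)
    return float(np.mean(ide / np.asarray(movement_times)))

# Effective width is conventionally derived from endpoint scatter:
# W_e = 4.133 * std(selection error along the task axis)
```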
High-speed Gaze-oriented Projection by Cross-ratio-based Eye Tracking with Dual Infrared Imaging
Poster
Booth: C11
Ayumi Matsumoto: The University of Tokyo; Tomohiro Sueishi: The University of Tokyo; Masatoshi Ishikawa: The University of Tokyo
While gaze-oriented projection can provide a high-resolution, wide-area display, conventional methods have difficulty handling quick human eye movements. In this paper, we propose a high-speed gaze-oriented projection system using a synchronized high-speed tracking projector and cross-ratio-based eye tracking. The tracking projector, with a high-speed projector and rotational mirrors, maintains the temporal geometric consistency of the projection. The eye tracking uses high-speed cameras and infrared lights of different wavelengths, and is fast and almost calibration-free thanks to the cross-ratio algorithm. We have experimentally validated the eye-tracking speed and accuracy and the system latency, and demonstrated gaze-oriented projection.
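For context, the cross-ratio family of methods exploits projective invariance: the pupil centre is mapped through the projective transform defined by four corneal glints (one per IR LED) onto the display plane. The sketch below uses a plain homography as that transform; it is a simplified illustration of the general idea under assumed inputs, not the authors' dual-wavelength pipeline:

```python
import cv2
import numpy as np

def cross_ratio_gaze(glints_px, pupil_px, led_positions):
    """Estimate the point of regard with a cross-ratio-style projective mapping.

    glints_px:     4 corneal glints in the eye image, one per IR LED (pixels)
    pupil_px:      pupil centre in the eye image (pixels)
    led_positions: the 4 LED positions on the display plane (e.g., its corners)
    """
    # Homography defined by the glint <-> LED correspondences.
    H = cv2.getPerspectiveTransform(
        np.float32(glints_px), np.float32(led_positions))
    # Map the pupil centre through it to display coordinates.
    p = np.float32([[pupil_px]])               # shape (1, 1, 2) for cv2
    return cv2.perspectiveTransform(p, H)[0, 0]
```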
VR Wayfinding Training for People with Visual Impairment using VR Treadmill and VR Tracker
Poster
Booth: C12
Sangsun Han: Hanyang University; Pilhyoun Yoon: Hanyang University; Miyeon Ha: Hanyang University; Kibum Kim: Hanyang University
There are virtual reality (VR) wayfinding training systems for people with visual impairment, but there is a lack of studies on how training environments affect their acquisition of spatial information. Using a VR treadmill and a VR tracker, we studied how walking-in-place and actual walking affect the acquisition of spatial information about paths and obstacles. Our results show that people with visual impairment remember routes better when trained with the VR treadmill, but remember obstacles better when trained with the VR tracker. We evaluate the respective efficacies of these approaches for spatial information memorization.
HoloInset: 3D Biomedical Image Data Exploration through Augmented Hologram Insets
Poster
Booth: C13
JunYoung Choi: Ulsan National Institute of Science and Technology (UNIST); Haejin Jeong: Korea University; Won-Ki Jeong: Korea University
Extended reality (XR) provides realistic depth perception and huge visualization spaces, which can serve as a powerful workspace for 3D data exploration and analysis. However, directly adapting XR to conventional 3D data exploration tasks is less feasible due to several hardware limitations, such as low screen resolution, dizziness, and a narrow field of view. In this paper, we propose a novel mixed reality visualization scheme, HoloInset, which combines a conventional visual analytics system and a virtual environment to effectively explore 3D biomedical image data. We also demonstrate the usability of the proposed visualization through a real-world analysis case.
Augmenting Sculpture with Immersive Sonification
Poster
Booth: C14
Yichen Wang: The Australian National University; Henry Gardner: The Australian National University; Charles Patrick Martin: The Australian National University; Matt Adcock: CSIRO
We present an artistic Mixed Reality (MR) system that remixes a sculptural element of a building and its aesthetic context to provide an on-site augmented art experience. Mainstream MR systems, particularly art-related ones, focus on the use of visuals to present additional information, whereas the use of audio as the main information channel has rarely been considered. In this work, we explore two different versions of a sonic experience for walking through an artistic staircase to enhance public engagement. Our user evaluation reveals the effectiveness of sonic design for a rewarding MR experience. With this, we emphasise the importance of sonic design in MR applications.
Feasibility of Training Elite Athletes for Improving their Mental Imagery Ability Using Virtual Reality
Poster
Booth: C21
Yuanjie Wu: University of Canterbury; Stephan Lukosch: University of Canterbury; Heide Lukosch: University of Canterbury; Robert W. Lindeman: University of Canterbury; Ryan Douglas McKee: University of Canterbury; Shunsuke Fukuden: HIT Lab NZ, University of Canterbury; Cameron Ross: Snow Sports NZ; Dave Collins: Snow Sports NZ
The goal of imagery training for athletes is to create realistic images in their minds and to familiarize them with certain procedures, environments, and other aspects related to competition. Traditional imagery training methods use still images or videos, and athletes study the pictures or watch the videos in order to mentally rehearse. However, factors such as distractions and low realism can affect the training quality. In this paper, we present a VR solution and a study that explores our hypotheses that 1) high-fidelity VR systems improve mental imagery skills, and that 2) the presence of elements such as an audience or photographers in the VR environment result in better mental imagery skill improvement.
Emotional Avatars: Facial Emotion Identification Methodology for Avatar Based Systems
Poster
Booth: C22
Dilshani Rangana Kumarapeli: University of Canterbury; Sungchul Jung: Kennesaw State University; Robert W. Lindeman: University of Canterbury
This work analyses the effect of uncanniness on identifying emotions from different humanoid avatar representations. Expressive avatars play a vital role in immersive environments, but technical limitations in replicating subtle emotional cues with real-time expression conversion techniques create an uncanny impression for viewers, making the desired emotional awareness hard to achieve. Systems sensitive to emotional changes therefore need avatar representations resistant to uncanniness. Here we analyse the level of uncanniness people notice for different avatars exhibiting various emotions with expressive faces, and people's behavioural trends in catching emotion-related uncanniness.
Prototyping a Virtual Agent for Pre-school English Teaching
Poster
Booth: C23
Eduardo Benitez Sandoval: UNSW; Diego Vázquez Rojas: Instituto Politecnico Nacional; Clarissa Anaid Parada Cereceres: Instituto Politecnico Nacional; Alvaro Anzueto-Rios: Instituto Politecnico Nacional; Amit Barde: University of Auckland; Mark Billinghurst: University of Auckland
This paper describes a case study and the insights gained from prototyping an Intelligent Virtual Agent (IVA) for English vocabulary building for Spanish-speaking preschool children. After an initial exploration to evaluate the feasibility of developing an IVA, we followed a Human-Centered Design (HCD) approach to create a prototype. We report on the multidisciplinary process used, which incorporated two well-known educational concepts, gamification and story-telling, as the main components for engagement. Our results suggest that a multidisciplinary approach to developing an educational IVA is effective. We report on the relevant aspects of the ideation and design processes that informed the vision and mission of the project.
A Tangible Augmented Reality Programming Learning Environment for textual languages
Poster
Booth: C24
Dmitry Resnyansky: University of South Australia; Mark Billinghurst: University of South Australia; Gun Lee: University of South Australia
We present a novel Tangible Augmented Reality Programming Learning Environment using a head-mounted display (HMD) and physical manipulatives for teaching programming. The system supports students' understanding and recollection of terms, as well as statement construction, through access to terminology, explanations, and programming hints. It is designed to provide a virtual workspace for natural interaction with learning material using the affordances of Augmented Reality (AR) and Tangible User Interfaces (TUIs). An AR code template provides a building and testing environment for learners to practice statement construction and computational skills. The system bolsters active learning with localised AR program visualisations and an HMD-anchored AR glossary.
How Late is Too Late? Effects of Network Latency on Audio-Visual Perception During AR Remote Musical Collaboration
Poster
Booth: C28
Torin Hopkins: University of Colorado Boulder; Suibi Che-Chuan Weng: University of Colorado Boulder; Rishi Vanukuru: University of Colorado Boulder; Emma A Wenzel: University of Colorado Boulder; Amy Banic: University of Wyoming; Ellen Yi-Luen Do: University of Colorado Boulder
Networked Musical Collaboration requires near-instantaneous network transmission for successful real-time collaboration. We studied the way changes in network latency affect participants' auditory and visual perception in latency detection, as well as latency tolerance in AR. Twenty-four participants were asked to play a hand drum with a prerecorded remote musician rendered as an avatar in AR at different levels of audio-visual latency. We analyzed the subjective responses of the participants from each session. Results suggest a minimum noticeable delay value between 160 milliseconds (ms) and 320 ms, as well as no upper limit to audio-visual delay tolerance.
Toward Using Multi-Modal Machine Learning for User Behavior Prediction in Simulated Smart Home for Extended Reality
Poster
Booth: C31
Powen Yao: University of Southern California; Yu Hou: University of Southern California; Yuan He: University of Southern California; Da Cheng: University of Southern California; Huanpu Hu: University of Southern California; Michael Zyda: University of Southern California
In this work, we propose a multi-modal approach to manipulate smart home devices in a smart home environment simulated in virtual reality (VR). We determine the user's target device and the desired action by their utterance, spatial information (gestures, positions, etc.), or a combination of the two. Since the information contained in the user's utterance and the spatial information can be disjoint or complementary to each other, we process the two sources of information in parallel using our array of machine learning models. We use ensemble modeling to aggregate the results of these models and enhance the quality of our final prediction results. We present our preliminary architecture, models, and findings.
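A minimal sketch of the late-fusion ensembling idea described above; the device vocabulary, the weights, and the handling of a missing cue are illustrative assumptions, not the authors' models:

```python
import numpy as np

# Hypothetical smart-home device vocabulary.
DEVICES = ["lamp", "tv", "speaker", "thermostat"]

def ensemble_predict(p_utterance, p_spatial, w_utt=0.6, w_spa=0.4):
    """Aggregate per-device probabilities from two parallel models.

    Either source may be None when its cue is absent, since the utterance
    and the spatial information can be disjoint or complementary.
    """
    if p_utterance is None:
        combined = np.asarray(p_spatial)
    elif p_spatial is None:
        combined = np.asarray(p_utterance)
    else:
        combined = w_utt * np.asarray(p_utterance) + w_spa * np.asarray(p_spatial)
    return DEVICES[int(np.argmax(combined))]

# e.g., speech says "lamp-ish", gesture points near the TV:
print(ensemble_predict([0.5, 0.3, 0.1, 0.1], [0.1, 0.7, 0.1, 0.1]))
```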
A Live-Coded Add-On System for Video Conferencing in Virtual Reality
Poster
Booth: C38
Septian Razi: Australian National University; Henry Gardner: The Australian National University; Andrew Sorenson: MOSO; Matt Adcock: CSIRO
Despite our increasing reliance on video conferencing, it has inherent limitations in engagement and effective communication. We explore a VR add-on system that supplements traditional video conferencing, in which a host live-codes data visualisations for stakeholders. It combines VR visualisation techniques with libraries present in popular data analytics tools such as Python, allowing real-time changes via a RESTful real-time database. An application has been built to explore pedigree node-link graphs, and its feasibility has been analysed with a few domain experts. Trials suggest that our system enhances engagement and communication over traditional video conferencing for data exploration.
Seamless Walk: Novel Natural Virtual Reality Locomotion Method with a High-Resolution Tactile Sensor
Poster
Booth: C37
Yunho Choi: Gwangju Institute of Science and Technology; Hyeonchang Jeon: Gwangju Institute of Science and Technology; Sungha Lee: Gwangju Institute of Science and Technology; Isaac Han: Gwangju Institute of Science and Technology; Yiyue Luo: Massachusetts Institute of Technology (MIT); SeungJun Kim: Gwangju Institute of Science and Technology; Wojciech Matusik: Massachusetts Institute of Technology (MIT); KyungJoong Kim: Gwangju Institute of Science and Technology
Natural movement is a challenging problem in virtual reality locomotion: existing foot-based locomotion methods lack naturalness due to the physical limitations of worn equipment. In this study, we propose Seamless-walk, a novel virtual reality (VR) locomotion technique that enables locomotion in the virtual environment by walking on a high-resolution tactile carpet. Seamless-walk moves the user's virtual character by extracting the user's walking speed and orientation from raw tactile signals using machine learning techniques. We demonstrate that Seamless-walk is more natural and effective than existing VR locomotion methods by comparing them in VR game-playing tasks.
Understanding the Capabilities of the HoloLens 1 and 2 in a Mixed Reality Environment for Direct Volume Rendering with a Ray-casting Algorithm
Poster
Booth: C36
Hoijoon Jung: The University of Sydney; Younhyun Jung: Gachon University; Jinman Kim: The University of Sydney
Direct volume rendering (DVR) is a standard technique for visualizing scientific volumetric data in three dimensions (3D). Using current mixed reality head-mounted displays (MR-HMDs), DVR output can be displayed as a 3D hologram superimposed on the original 'physical' object, offering supplementary X-ray vision into its interior features. These MR-HMDs are stimulating innovation in a range of scientific application fields, yet their DVR capabilities have yet to be thoroughly investigated. In this study, we explore the key requirement of rendering latency for MR-HMDs by proposing a benchmark application with 5 volumes and 30 rendering parameter variations.
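For context, the core of textbook ray-casting DVR is front-to-back compositing along each ray; a minimal sketch of that loop (illustrative only, not the benchmark's implementation):

```python
import numpy as np

def raycast_pixel(samples, transfer_function, early_termination=0.99):
    """Front-to-back compositing along one ray through the volume.

    samples:           scalar values sampled along the ray
    transfer_function: maps a scalar sample to ((r, g, b), alpha)
    """
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        rgb_s, a_s = transfer_function(s)
        color += (1.0 - alpha) * a_s * np.asarray(rgb_s)
        alpha += (1.0 - alpha) * a_s
        if alpha >= early_termination:   # remaining samples are occluded
            break
    return color, alpha
```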
The Virtual-Augmented Reality Simulator: Evaluating OST-HMD AR calibration algorithms in VR
Poster
Booth: C35
Danilo Gasques: University of California San Diego; Weichen Liu: University of California San Diego; Nadir Weibel: UC San Diego
When developing AR applications for high-precision domains such as surgery, we face a common problem: how can the system guarantee that the end-user will see a virtual object aligned with its real-world counterpart? Alignment, or registration, is a crucial feature of AR displays, but achieving accurate alignment between real and virtual objects is not trivial. With hundreds of calibration approaches available, we need better tools to understand how and when calibration algorithms fail as well as understand what can be done to improve alignment. This poster introduces a novel AR simulator in VR that facilitates experimentation with different calibration algorithms.
Relationship Between the Sensory Processing Patterns and the Detection Threshold of Curvature Gain
Poster
Booth: C41
Keigo Matsumoto: The University of Tokyo; Takuji Narumi: the University of Tokyo
This study examines the relationship between sensory processing patterns (SPPs) and the effects of redirected walking (RDW). Research efforts have been devoted to identifying the detection threshold (DT) of RDW techniques, and various DTs have been reported across studies. Recently, age, sex, and spatial ability have been found to be associated with the DTs of RDW techniques. We conducted a preliminary examination of the relationship between SPPs, as measured by the Adolescent/Adult Sensory Profile, and the DT of curvature gains, one of the fundamental RDW techniques. The results suggest that higher sensory-sensitivity tendencies are associated with lower DTs, i.e., such participants are more likely to notice the RDW technique.
Predicting Blendshapes of Virtual Humans for Low-Delay Remote Rendering using LSTM
Poster
Booth: C42
Haruhisa Kato: KDDI Research Inc.; Tatsuya Kobayashi: KDDI Research Inc.; Sei Naito: KDDI Research Inc.
This paper proposes a novel framework to reduce the perceptual delay in rendering the facial expressions of virtual humans. The proposed method reduces the delay by predicting the future blendshape coefficients that represent facial expressions. Its prediction accuracy is improved by 27% on average over a conventional LSTM. We also subjectively confirmed that the proposed method achieves natural facial expressions.
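A minimal sketch of the general idea, predicting the next frame's coefficients from a window of past frames so the renderer need not wait on the network; the layer sizes and the 52-coefficient assumption are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class BlendshapePredictor(nn.Module):
    """Predict future blendshape coefficients from a window of past frames."""

    def __init__(self, n_coeffs=52, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_coeffs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_coeffs)

    def forward(self, past):                  # past: (batch, T, n_coeffs)
        out, _ = self.lstm(past)
        return self.head(out[:, -1])          # coefficients one step ahead

# Remote-rendering usage: drive the avatar with the predicted coefficients
# now, instead of waiting for the real ones to arrive over the network.
model = BlendshapePredictor()
predicted = model(torch.randn(1, 30, 52))     # 30 past frames -> next frame
```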
AIR-range: Arranging optical systems to present mid-AIR images with continuous luminance on and above a tabletop
Poster
Booth: C43
Tomoyo Kikuchi: The University of Tokyo; Yuchi Yahagi: The University of Tokyo; Shogo Fukushima: The University of Tokyo; Saki Sakaguchi: Tokyo Metropolitan University; Takeshi Naemura: The University of Tokyo
We propose "AIR-range"- a system that seamlessly connects mid-air images from the surface of a table to mid-air space. This system can display tall mid-air images in the three-dimensional (3D) space beyond the screen. AIR-range is implemented using a symmetrical mirror structure that displays a large image by integrating multiple imaging paths. The mirror arrangement in previous research had a problem in that the luminance was discontinuous. In this study, we theorize the relationship between the parameters of optical elements and the appearance of mid-air images and optimize an optical system to minimize the difference in luminance between image paths.
Third-Person Perspective Avatar Embodiment in Augmented Reality: Examining the Proteus Effect on Physical Performance
Poster
Booth: C44
Riku Otono: Nara Institute of Science and Technology; Naoya Isoyama: Nara Institute of Science and Technology; Hideaki Uchiyama: Nara Institute of Science and Technology; Kiyoshi Kiyokawa: Nara Institute of Science and Technology
Embodiment in augmented reality (AR) is applicable to various fields such as exercise and education. However, full-body embodiment in AR is still challenging to implement due to technical problems such as low body tracking accuracy. Therefore, the study on the impact of an avatar in AR on user performance is limited. We implemented an AR embodiment system and investigated its impact on user physical performance. The system allows users to see their avatar instead of their real body from a third-person perspective. The results show that a muscular avatar improves user physical performance during and after controlling the avatar.
Omnidirectional Neural Radiance Field for Immersive Experience
Poster
Booth: D11
Qiaoge Li: University of Tsukuba; Itsuki Ueda: University of Tsukuba; Chun Xie: University of Tsukuba; Hidehiko Shishido: University of Tsukuba; Itaru Kitahara: University of Tsukuba
This paper proposes a method that uses only RGB information from multiple captured panoramas to provide an immersive viewing experience of real scenes. We generate an omnidirectional neural radiance field by adopting the Fibonacci sphere model for sampling rays, together with several optimized positional encoding approaches. We tested our method on synthetic and real scenes and achieved satisfying empirical performance. Our result makes an immersive, continuous free-viewpoint experience possible.
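The Fibonacci sphere model mentioned above is a standard construction for near-uniform directions on the unit sphere; a minimal sketch of the lattice itself (how the paper threads these directions into ray sampling is not shown):

```python
import numpy as np

def fibonacci_sphere(n):
    """n near-uniform unit vectors via the Fibonacci (golden-angle) lattice."""
    i = np.arange(n)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    z = 1.0 - 2.0 * (i + 0.5) / n              # uniform in height
    r = np.sqrt(1.0 - z * z)                   # radius of each latitude ring
    theta = golden_angle * i                   # azimuth advances by ~137.5 deg
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

directions = fibonacci_sphere(1024)            # (1024, 3) ray directions
```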
Depth Reduction in Light-Field Head-Mounted Displays by Generating Intermediate Images as Virtual Images
Poster
Booth: D12
Yasutaka Maeda: Japan Broadcasting Corporation (NHK); Daiichi Koide: Japan Broadcasting Corporation (NHK); Hisayuki Sasaki: Japan Broadcasting Corporation (NHK); Kensuke Hisatomi: Japan Broadcasting Corporation (NHK)
Light-field head-mounted displays (HMDs) can resolve vergence-accommodation conflicts, which may cause visual discomfort and fatigue. However, light-field HMDs have a narrow field of view (FOV), owing to their small display size and optical configuration. We increased the FOV by adopting a large display and shifting the elemental image from the back of the corresponding microlens. In this study, we proposed a method to generate intermediate images as virtual images to reduce the distance between the microlens array and the eyepiece. We also designed a microlens array suitable for the proposed method and successfully reduced the device depth by 44%.
Perceptually-Based Optimization for Radiometric Projector Compensation
Poster
Booth: D13
Ryo Akiyama: NTT; Taiki Fukiage: NTT; Shin'ya Nishida: Kyoto University
Radiometric compensation techniques have been proposed to manipulate the appearance of arbitrarily textured surfaces using projectors. However, due to the limited dynamic range of the projectors, these compensation techniques often fail under bright environmental lighting or when the projection surface contains high contrast textures, resulting in clipping artifacts. To address this issue, we propose to apply a perceptually-based tone mapping technique to generate compensated projection images. The experimental results demonstrated that our approach minimizes the clipping artifacts and contrast degradation under challenging conditions.
Effects of Mirrors on User Behavior in Social Virtual Reality Environments
Poster
Booth: D14
Takayuki Kameoka: The University of Electro-Communications; Seitaro Kaneko: The University of Electro-Communications
The authors have observed that users gather in front of mirrors on VR social networking services (VRSNS) such as VRChat. Based on these observations, we hypothesized that mirrors attract users and conducted an experiment in a controlled environment. Participants were asked to converse in pairs in a VR space with mirrors and posters, and their behavior was recorded. Results showed that, although a certain number of users gathered in front of the mirror, it did not significantly increase their likelihood of staying. Conversely, we received comments such as "I feel relaxed when I go in front of the mirror".
Implementation of an Authoring Tool for Wheelchair Simulation with Visual and Vestibular Feedback
Poster
Booth: D21
Takumi Okawara: Nihon University; Kousuke Motooka: Graduate School of Integrated Basic Sciences, Nihon University; Kazuki Okugawa: Nihon University; Akihiro Miyata: Nihon University
In this study, we develop a wheelchair simulator that provides visual and vestibular feedback at a low cost by leveraging the combination of vection-inducing movies displayed on a head-mounted display and vestibular feedback provided by an electric-powered wheelchair. However, this simulator requires users to manually create a synchronized pair of a VR movie and a motion scenario (chronological control commands for the wheelchair), which necessitates considerable effort and experience. In this paper, we introduce a novel authoring tool that generates a pair of a VR movie and a motion scenario based on a few parameters entered by the user.
Robust Tangible Projection Mapping with Multi-View Contour-Based Object Tracking
Poster
Booth: D22
Yuta Halvorson: The University of Electro-Communications; Takumi Saito: The University of Electro-Communications; Naoki Hashimoto: The University of Electro-Communications
Dynamic projection mapping for moving objects can greatly enhance an augmented reality representation by using projected images because the target object can be freely moved. However, the interaction between the target object and the user has not been sufficiently considered. In this research, we propose tangible projection mapping, with which a user can grasp an object with their hands and move it freely. It uses a simple configuration of two infrared cameras to realize dynamic projection mapping that is robust to hand occlusion.
Design of a VR Action Observation Tool for Rhythmic Coordination Training
Poster
Booth: D23
James Jonathan Pinkl: University of Aizu; Michael Cohen: University of Aizu
Motor learning applications, particularly those with first-person virtual reality Action Observation features, have achieved positive results in a variety of fields. This project entails the development of a first-person VR application designed to help musicians learn rhythm and the body coordination needed to express rhythm. Utilizing real-time tracking of user hand position, various accompanying visual media, and spatial audio with outside-the-head localization, the tool provides a new way to learn and improve rhythm.
Interpersonal Distance to a Speaking Avatar: Loudness Matters Irrespective of Contents
Poster
Booth: D24
Kota Takahashi: Toyohashi University of Technology; Yasuyuki Inoue: Toyohashi University of Technology; Michiteru Kitazaki: Toyohashi University of Technology
Maintaining appropriate interpersonal distance depending on the situation is important for effective and safe communication. We investigated the effects of speech loudness and clarity on the interpersonal distance kept from an avatar in a virtual environment. We found that louder speech from the avatar made the distance between participants and the avatar larger than quiet speech did, but speech clarity did not significantly affect the distance. These results suggest that perceived loudness modulates the interpersonal distance kept from a virtual avatar to maintain the intimacy equilibrium.
A Skin Pressure-type Grasping Device to Reproduce Impulse Force for Virtual Ball Games
Poster
Booth: D38
Kazuma Yoshimura: Nara Institute of Science and Technology; Naoya Isoyama: Nara Institute of Science and Technology; Hideaki Uchiyama: Nara Institute of Science and Technology; Nobuchika Sakata: Ryukoku University; Kiyoshi Kiyokawa: Nara Institute of Science and Technology; Yoshihiro Kuroda: University of Tsukuba
Tactile feedback is crucial to improve the realism of virtual sports, which are popular applications of virtual reality (VR). However, few wearable tactile feedback devices can produce an impulse force in midair. We propose a graspable impulse force presentation device by using a voice coil motor. In the evaluation experiment, the result suggests that the proposed device improves the sense of realism compared to a conventional VR controller with vibration.
Virtual Touch Modulates Perception of Pleasant Touch
Poster
Booth: D37
Gakumaru Haraguchi: Toyohashi University of Technology; Michiteru Kitazaki: Toyohashi University of Technology
Pleasant touch by gentle stroking is processed by C-tactile afferents and is important for emotional and social touch in human communication. We aimed to investigate the effects of visual touch in a virtual environment on the perceived pleasantness of tactile touch. Five velocities (0.3 to 30 cm/s) were used for the tactile brushing and the virtual brushing, and participants rated the perceived pleasantness of the tactile touch irrespective of the visual stimulus. We found that the perception of pleasant touch was significantly modulated by the velocity of the visual brushing. Thus, pleasant touch appears to be perceived by integrating visual and tactile information.
An Examination on Reduction of Displayed Character Shake while Walking in Place with AR Glasses
Poster
Booth: D36
Hiromu Koide: Utsunomiya University; Kei Kanari: Utsunomiya University; Mie Sato: Utsunomiya University
In recent years, augmented reality (AR) has started to enter our daily lives. AR glasses may be used while walking, a normal part of daily life, but walking causes the text displayed on the glasses to shake. This reduces readability and our attention to what is in front of us, and increases discomfort. We propose a method of fixing the displayed text, taking account of shaking while walking, to reduce these adverse effects. Experiments revealed the effectiveness of our reduction method and its influence on the display distance of the text.
Knowing the Partner's Objective Increases Embodiment towards a Limb Controlled by the Partner
Poster
Booth: D35
Harin Manujaya Hapuarachchi: Toyohashi University of Technology; Michiteru Kitazaki: Toyohashi University of Technology
We have developed a joint avatar whose left- and right-side limbs are simultaneously controlled by two participants in first-person view in a virtual environment. We aimed to investigate whether having a common objective with a partner affects the sense of embodiment towards a limb controlled by the partner. Participants performed reaching tasks using the joint avatar. We found that embodiment towards the arm controlled by the partner was significantly higher when the participant dyads shared a common objective, or when they were allowed to see their partner's goal, compared to when their partner's goal was unknown to them.
On the Effectiveness of Conveying BIM Metadata in VR Design Reviews for Healthcare Architecture
Poster
Booth: D41
Emma Buchanan: University of Canterbury; Giuseppe Loporcaro: University of Canterbury; Stephan Lukosch: University of Canterbury
This research seeks to assess whether Virtual Reality (VR) can be used to convey Building Information Modelling (BIM) metadata alongside geometric and spatial data in a virtual environment, and by doing so, determine whether this increases stakeholders' understanding of the design. A user study assessed participants' performance and preference when conducting design reviews in VR or with a traditional design review system of PDF drawings and a 3D model. Early results indicate that VR was preferred, with fewer errors made during assessment and a higher System Usability Scale (SUS) score.
Towards a Virtual Reality Math Game for Learning In Schools - A User Study
Poster
Booth: D42
Meike Belter: University of Canterbury; Heide Lukosch: University of Canterbury
In recent years, immersive Virtual Reality (VR) has gained popularity among young users as a new technology for entertainment gaming. While VR remains mainly used for entertainment purposes, 3D desktop games are already used in schools. This study takes a closer look at the suitability of VR games for use in a formal educational environment, and their potential to enrich existing game-based learning approaches. Based on the learning needs of easily distracted and inattentive children in particular, an immersive VR math game was created and tested with 15 children aged 11-12.
Motion Correction of Interactive CG Avatars Using Machine Learning
Poster
Booth: D43
Ko Suzuki: Utsunomiya University; Hiroshi Mori: Utsunomiya University; Fubito Toyama: Utsunomiya University
Teaser Video: Watch Now
Motion capture allows users to control their CG avatar via their own movements. However, the composed avatar motion fails to deliver the actual input movements if the user's motion information is not accurately captured due to measurement errors. In this paper, we propose a method that complements a user's motion based on the motion of the other person in an interactive two-party motion. This method is expected to compose avatar motions that look natural to the other person while emphasizing the actual motions of the user.
Adding Difference Flow between Virtual and Actual Motion to Reduce Sensory Mismatch and VR Sickness while Moving
Poster
Booth: D44
Kwan Yun: Korea University; Gerard Jounghyun Kim: Korea University
Teaser Video: Watch Now
Enjoying Virtual Reality in vehicles is problematic because of sensory mismatch and the resulting sickness. While moving, the vestibular sense perceives actual motion in one direction while the visual sense perceives virtual motion in another. We propose to cancel out this physiological mismatch by mixing in motion information computed as the difference between the actual and virtual motions, which we call "difference" flow. We present a system for computing and visualizing the difference flow and validate our approach through a small pilot field experiment. Although tested with only a small number of subjects, the initial results are promising.
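As a minimal sketch of the idea (not the authors' implementation), the difference flow can be viewed as per-pixel vector arithmetic between the flow induced by the vehicle's actual motion and the flow of the virtual camera; the dense-flow representation and array shapes below are assumptions:

```python
import numpy as np

def difference_flow(actual_flow: np.ndarray, virtual_flow: np.ndarray) -> np.ndarray:
    """Per-pixel difference between the flow induced by the vehicle's
    actual motion and the flow of the virtual camera (H x W x 2 arrays).

    Mixing this difference back into the rendered view is intended to
    make the total visual motion match the vestibular motion."""
    return actual_flow - virtual_flow

# Toy example: the vehicle moves right (image flow points left), the
# virtual camera is static, so the difference flow reintroduces the
# missing leftward visual motion.
h, w = 4, 4
actual = np.tile(np.array([-1.0, 0.0]), (h, w, 1))   # leftward image flow
virtual = np.zeros((h, w, 2))                        # static virtual scene
print(difference_flow(actual, virtual)[0, 0])        # -> [-1.  0.]
```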
Event Synthesis for Light Field Videos using Recurrent Neural Networks
Poster
Booth: E31
Zhicheng Lu: The University of Sydney; Xiaoming Chen: Beijing Technology and Business University; Yuk Ying Chung: The University of Sydney; Sen Liu: University of Science and Technology of China
Teaser Video: Watch Now
Light field videos (LFVs) raise the complexity of computer vision tasks. Emerging event cameras offer new means for lightweight processing of LFVs, but building an event camera array is infeasible due to their high cost. In this poster, we propose a novel "event synthesis for light field videos" (ES4LFV) model based on recurrent neural networks and build a preliminary dataset. ES4LFV synthesizes events for light field videos from an LFV camera array and a single event camera. Experimental results show that ES4LFV outperforms the traditional method by 3.1 dB in PSNR.
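For reference, the reported 3.1 dB gain is in peak signal-to-noise ratio; a standard PSNR computation looks as follows (the poster's exact evaluation protocol is not specified):

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images/event frames."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# A 3.1 dB improvement corresponds to roughly halving the MSE.
ref = np.random.rand(64, 64)
print(psnr(ref, ref + 0.01 * np.random.randn(64, 64)))
```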
Towards Controlling Whole Body Avatars with Partial Body-Tracking and Environmental Information
Poster
Booth: E28
Koji Yamada: Utsunomiya University; Hiroshi Mori: Utsunomiya University; Fubito Toyama: Utsunomiya University
Teaser Video: Watch Now
In body-tracking-based avatar manipulation, the user's motion is reflected in their avatar, which creates a high level of immersion. Conversely, the user is required to perform in as much detail as the avatar motion to be composed. In this research, we aim to produce avatar motion that reflects the user's intended actions and maintains consistency with the VR environment, taking the user's posture as input. The avatar's body motion was generated by feeding the user's motion and VR environmental information into the motion configuration network, which was developed using machine learning.
Geometric Calibration with Multi-Viewpoints for Multi-Projector Systems on Arbitrary Shapes Using Homography and Pixel Maps
Poster
Booth: E21
Atsuya Ueno: Wakayama University; Toshiyuki Amano: Wakayama University; Chisato Yamauchi: Misato Astronomical Observatory
Teaser Video: Watch Now
We propose a geometric calibration method for the MISATO astronomical observatory in Japan. The primary objective of our projection system is to enable large-scale geometric projection calibration for near-planar, arbitrarily shaped ground projection with a multi-projector system, using temporarily placed camera captures. The obtained results exhibited an average distortion 84.7% lower than that of the simple homography-based method.
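As a rough sketch of the homography step that such projector-camera calibrations build on (the poster's multi-viewpoint pixel-map refinement is not shown, and the correspondences below are made up), OpenCV can robustly estimate the projector-to-camera mapping from decoded pattern points:

```python
import cv2
import numpy as np

# Correspondences would normally come from decoding projected patterns
# (e.g., structured light) in temporarily placed camera views.
projector_pts = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080],
                          [960, 540]], dtype=np.float32)
camera_pts = np.array([[102, 88], [1790, 120], [1745, 990], [130, 1015],
                       [940, 560]], dtype=np.float32)

# Robustly estimate the projector -> camera homography.
H, mask = cv2.findHomography(projector_pts, camera_pts, cv2.RANSAC, 3.0)

# Map an arbitrary projector pixel into the camera image.
px = cv2.perspectiveTransform(np.array([[[960.0, 540.0]]]), H)
print(H, px)
```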
Bouncing Seat: An Immersive Virtual Locomotion Interface with LSTM Based Body Gesture Estimation
Poster
Booth: E22
Yoshikazu Onuki: Digital Hollywood University
Teaser Video: Watch Now
A pilot study of the bouncing seat system is presented. The system consists of an air cushion and a gravicorder. The former enables users to move more easily and feel lively motion; the latter, together with its computing units, detects users' body sway and recognizes gestures. Four commands (walk, right turn, left turn, and jump) and associated intuitive body gestures were designed. An estimator based on a multi-timescale LSTM was trained on a newly created body-gesture dataset and achieved 88% inference accuracy. The results suggest that our system has the potential to provide a higher sense of immersion and enjoyment than joysticks.
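A minimal sketch of what a multi-timescale LSTM gesture classifier over body-sway signals might look like; the layer sizes, input features, and downsampling factor are assumptions rather than the authors' design:

```python
import torch
import torch.nn as nn

class MultiTimescaleLSTM(nn.Module):
    """Illustrative gesture classifier: two LSTMs read the body-sway
    signal at different temporal resolutions; their final states are
    fused to predict one of four commands (walk, right turn, left
    turn, jump)."""
    def __init__(self, in_dim=2, hidden=32, n_classes=4):
        super().__init__()
        self.fast = nn.LSTM(in_dim, hidden, batch_first=True)
        self.slow = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (B, T, 2) sway trajectory
        _, (h_fast, _) = self.fast(x)          # full-rate sequence
        _, (h_slow, _) = self.slow(x[:, ::4])  # 4x downsampled sequence
        h = torch.cat([h_fast[-1], h_slow[-1]], dim=-1)
        return self.head(h)                    # (B, 4) command logits

model = MultiTimescaleLSTM()
logits = model(torch.randn(8, 120, 2))         # 8 windows of 120 samples
print(logits.argmax(dim=-1))                   # predicted commands
```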
Hype Live: Biometric-based Sensory Feedback for Improving the Sense of Unity in VR Live Performance
Poster
Booth: E23
Masashi Abe: Nara Institute of Science and Technology; Akiyoshi Takuto: Nara Institute of Science and Technology; Isidro Mendoza Butaslac III: Nara Institute of Science and Technology; Zhou Hangyu: Nara Institute of Science and Technology; Taishi Sawabe: Nara Institute of Science and Technology
Teaser Video: Watch Now
We propose Hype Live, a system to improve the sense of unity in VR live performances by sharing responses through visual, auditory, and haptic stimuli based on participants' biometrics. In this field, sharing reactions among participants is one of the most important factors in improving the sense of unity, yet few past studies have provided feedback to the participants' senses. Therefore, as a prototype of Hype Live, we used a vibration device to provide haptic feedback and recreated moshing scenes in which participants collide with each other as in a real live performance.
Sense of Agency on Handheld AR for Virtual Object Translation
Poster
Booth: E14
Wenxin Sun: Xi'an Jiaotong-Liverpool University; Mengjie Huang: Xi'an Jiaotong-Liverpool University; Chenxin Wu: Xi'an Jiaotong-Liverpool University; Rui Yang: Xi'an Jiaotong-Liverpool University
Teaser Video: Watch Now
Handheld augmented reality (AR) interfaces are used in various applications nowadays, and the degrees of freedom (DoF) of translation modes have become significant on these interfaces. Sense of agency (SoA), the feeling of control over one's actions, has emerged as an essential index of user experience. However, little is known in the literature about users' feeling of control under different translation modes. Hence, this paper focuses on users' SoA in different translation modes, assessed through subjective and objective measures. Correlations between SoA and translation modes were explored on handheld AR interfaces, revealing that the 1DoF translation mode was associated with higher SoA.
User-Defined Interaction Using Everyday Objects for Augmented Reality First Person Action Games
Poster
Booth: E24
Mac Greenslade: University of Canterbury; Adrian James Clark: University of Canterbury; Stephan Lukosch: University of Canterbury
Teaser Video: Watch Now
In this paper, we present an elicitation study exploring how people use everyday objects in augmented reality first-person games. 24 participants were asked to select items from a range of everyday objects to use as controllers for three different classes of virtual object. Participants completed tasks using their selected items and rated the experience using the Augmented Reality Immersion (ARI) questionnaire. The results indicate no strong consensus linking any specific everyday object to any virtual object across our testing population. Based on these findings, we recommend that developers let users choose the everyday objects they prefer.
AmbientTransfer: Presence Enhancement by Converting Video Ambient to Users' Somatosensory Feedback
Poster
Booth: E27
Xunshi Li: School of Computer Science and Engineering, Beijing Technology and Business University; Xiaoming Chen: Beijing Technology and Business University; Yuk Ying Chung: The University of Sydney; Qiang Qu: The University of Sydney
Teaser Video: Watch Now
Haptic feedback can improve users' presence when watching immersive videos. In this work, we present our "AmbientTransfer" framework, demonstrating how to "embed" users in the video ambience through somatosensory feedback. In particular, AmbientTransfer extracts video ambience factors, e.g., rain intensity, and converts them into dynamic haptic feedback delivered by slight electrostimulation at various levels. AmbientTransfer then maps these levels of haptic feedback to different body parts of users wearing an emerging electrostimulation suit. Preliminary experimental results show that AmbientTransfer can considerably enhance users' presence. We believe AmbientTransfer is worthy of further exploration to exploit its full potential.
Comparing Physiological and Emotional Effects of Happy and Sad Virtual Environments Experienced in Video and Virtual Reality
Poster
Booth: E26
Yuankun Zhu: University of Queensland; Arindam Dey: University of Queensland
Teaser Video: Watch Now
Virtual Reality (VR) can give users a more immersive experience than non-immersive media. In this study, we explored differences in the emotional and physiological effects of videos and VR, using two different sets of content to evoke happy and sad emotions. In this within-subjects controlled experiment, we collected real-time heart rate and Positive and Negative Affect Schedule (PANAS) responses to measure physiological and emotional effects. Our results showed that VR triggers stronger emotions and a higher heart rate than videos.
Toward Understanding the Effects of Visual and Tactile Stimuli to Reduce the Sensation of Movement with XR Mobility Platform
Poster
Booth: E25
Taishi Sawabe: Nara Institute of Science and Technology; Masayuki Kanbara: Nara Institute of Science and Technology; Yuichiro Fujimoto: Nara Institute of Science and Technology; Hirokazu Kato: Nara Institute of Science and Technology
Teaser Video: Watch Now
This paper investigates a method for reducing passengers' sensation of movement with an XR mobility platform mounted on an autonomous vehicle, with the goal of improving passenger comfort during automated driving. The method controls the passenger's sense of movement by manipulating visual and tactile perception using a multimodal XR mobility platform consisting of an immersive display and a motion platform with a tilting seat. Results from 30 subjects show that the sense of movement perceived by the passenger was significantly reduced when both the visual and tactile acceleration control methods were active inside a moving autonomous vehicle.
Creating 3D Personal Avatars with High Quality Facial Expressions for Telecommunication and Telepresence
Poster
Booth: E32
Michal Joachimczak: NICT UCRI; Juan Liu: NICT UCRI; Hiroshi Ando: NICT UCRI
Teaser Video: Watch Now
This study aims to provide a low-cost solution for telepresence in which people are reconstructed as 3D avatars using an ordinary webcam, while still exhibiting the abundant facial information (such as micro-expressions) that is critical for face-to-face communication. We estimate the basic 3D shape and texture of the body from a set of video frames, and subsequently update the body pose, facial expression, and facial texture in each frame. Our method is expected to lower the entry barrier of VR systems and create embodied telecommunication that conveys rich information and subtle emotional changes to deepen mutual understanding at a distance.
Jitsi360: Using 360 Images for Live Tours
Poster
Booth: E33
Alaeddin Nassani: University of Auckland; Huidong Bai: The University of Auckland; Mark Billinghurst: University of South Australia
Teaser Video: Watch Now
In this poster, we present a system for sharing immersive 360-degree images for live tours. While sharing prerecorded 360-degree videos and images is becoming commonplace, fewer systems support live sharing of 360-degree images. We customised a video conferencing platform to enable dozens of people to view the same 360-degree content together during a live call. We describe our system and the results of a pilot user study in which it was used for a virtual guided tour. Compared to sharing non-360-degree images on Zoom, our system was felt to be more immersive, enjoyable, and easy to use.
Apparent shape manipulation by light-field projection onto a retroreflective surface
Poster
Booth: E34
Jion Kanaya: Wakayama University; Toshiyuki Amano: Wakayama University
Teaser Video: Watch Now
To optically present metallic luster and structural color correctly, it is necessary to reproduce the changes in brilliance and color that accompany movement of the viewpoint. Light-field projection onto a retroreflective surface can optically present viewpoint-dependent texture. By applying this, the apparent shape of an object can be manipulated depending on the viewpoint. This research proposes an optical illusion that manipulates the apparent shape of a 3D object with light-field projection, based on a perceptual normal map transformation.
Enabling Augmented Reality Incorporate with Audio on Indoor Navigation for People with Low Vision
Poster
Booth: E13
Zihao Chi: Nara Institute of Science and Technology; Zhaofeng Niu: Nara Institute of Science and Technology; Taishi Sawabe: Nara Institute of Science and Technology
Teaser Video: Watch Now
Indoor navigation is difficult for people with low vision, even though they can benefit from visual cues. However, visual ability rating, which could tailor assistance to different visual impairments, has been neglected. In this paper, we propose an Augmented Reality (AR) application that first measures the user's visual rating. Based on this rating, different navigation services, including visual and audio cues, are provided. Object detection and depth estimation are used to avoid obstacles. We conducted an exploratory design study to investigate our idea: a Snellen chart was displayed on a HoloLens 2 and a pilot study was conducted. As a next step, the strategy combining visual aid and audio cues will be tested in a user study.
Flick Typing: Toward A New XR Text Input System Based on 3D Gestures and Machine Learning
Poster
Booth: E12
Tian Yang: University of Southern California; Powen Yao: University of Southern California; Michael Zyda: USC
Teaser Video: Watch Now
We propose a new text entry method for Extended Reality that we call Flick Typing. Flick Typing utilizes the user's knowledge of the QWERTY keyboard layout but does not explicitly visualize the keys, and it is agnostic to user posture and keyboard position. To type with Flick Typing, users move their controller to where they think the target key is with respect to the controller's starting position and orientation, often with a simple flick of the wrist. A machine learning model is trained and used to adapt to the user's mental map of the keys in 3D space.
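As a simplified sketch of the underlying idea (the poster trains a machine learning model, not the nearest-direction rule shown here), a flick could be classified against per-user key directions learned in 3D space; the calibration data below is hypothetical:

```python
import numpy as np

KEY_DIRECTIONS = {          # hypothetical per-user calibration data
    "q": np.array([-0.9, 0.4, -0.1]),
    "t": np.array([-0.1, 0.5, -0.2]),
    "p": np.array([0.9, 0.4, -0.1]),
}

def predict_key(flick_vector: np.ndarray) -> str:
    """Return the key whose learned direction best matches the flick
    (cosine similarity against each calibrated key direction)."""
    v = flick_vector / np.linalg.norm(flick_vector)
    return max(KEY_DIRECTIONS,
               key=lambda k: np.dot(v, KEY_DIRECTIONS[k] /
                                    np.linalg.norm(KEY_DIRECTIONS[k])))

print(predict_key(np.array([-0.8, 0.5, 0.0])))  # -> 'q'
```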
Feasibility of mapping engagement ratios to levels of task complexity within VR environments
Poster
Booth: E11
Yobbahim J Vite: University of Calgary; Yaoping Hu: University of Calgary
Teaser Video: Watch Now
This paper studied the feasibility of mapping an engagement ratio onto levels of task complexity while human participants undertook interactive tasks within a virtual reality (VR) environment. Each participant used a haptic device to push a ball-shaped object through a pipe. There were three pipes whose shapes corresponded mathematically to three levels of task complexity. An electroencephalogram (EEG) device recorded the participant's brain activity while undertaking the task. The outcomes of the study confirmed the feasibility of mapping the engagement ratio onto the levels of task complexity.
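The abstract does not define the engagement ratio; a commonly used EEG engagement index is beta / (alpha + theta), which could be computed from band powers as sketched below (an assumption, not necessarily the authors' measure):

```python
import numpy as np
from scipy.signal import welch

def engagement_ratio(eeg: np.ndarray, fs: float) -> float:
    """Common EEG engagement index beta / (alpha + theta), computed
    from Welch band powers of a single channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    def band(lo, hi):
        sel = (freqs >= lo) & (freqs < hi)
        return np.trapz(psd[sel], freqs[sel])
    theta, alpha, beta = band(4, 8), band(8, 13), band(13, 30)
    return beta / (alpha + theta)

fs = 256.0
eeg = np.random.randn(int(60 * fs))  # one minute of a single channel
print(engagement_ratio(eeg, fs))
```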
Posters - Expo Hall B
Taming Cyclops: Mixed Reality Head-Mounted Displays as Laser Safety Goggles for Advanced Optics Laboratories
Poster
Booth: B14
Ke Li: Deutsches Elektronen-Synchrotron (DESY); Aradhana Choudhuri: Deutsches Elektronen Synchrotron; Susanne Schmidt: Universität Hamburg; Reinhard Bacher: Deutsches Elektronen Synchrotron DESY; Ingmar Hartl: Deutsches Elektronen Synchrotron; Wim Leemans: Deutsches Elektronen Synchrotron; Frank Steinicke: Universität Hamburg
Teaser Video: Watch Now
In this poster paper, we present a mixed reality application for laser eye protection based on a video see-through head-mounted display. With our setup, laser lab users perceive the real environment through the head-mounted display, using it as a substitute for the laser safety goggles required by health and safety regulations. We designed and evaluated our prototype with a human-centered computing approach at the Deutsches Elektronen-Synchrotron, home to some of the most advanced and extreme optics laboratory working conditions. We demonstrate that virtual reality headsets can be an attractive future alternative to conventional laser safety goggles.
Absence Agents: Mitigating Interruptions in Extended Reality Remote Collaboration
Poster
Booth: B13
Huyen Nguyen: Université Paris-Saclay, CNRS, LISN, VENISE team; Thomas Bruhn: Université Paris-Saclay, CNRS, LISN, VENISE team; Christian Sandor: Université Paris-Saclay, CNRS, LISN, VENISE team; Patrick Bourdot: Université Paris-Saclay, CNRS, LIMSI, VENISE team
Teaser Video: Watch Now
Although dealing with interruptions in remote collaboration has been studied in general, few works have done this for Extended Reality (XR) collaboration. With the current explosion of interest in XR collaboration, we explore the negative impacts of interruptions in synchronous distributed XR environments and propose a novel concept for dealing with them: Absence Agents. We present their requirements analysis, design, and a prototype implementation. We believe that our concept and design of Absence Agents are important for practitioners and researchers alike, as they highlight avenues for future research.
Group WiM: A Group Navigation Technique for Collaborative Virtual Reality Environments
Poster
Booth: B12
Vuthea Chheang: University of Magdeburg; Florian Heinrich: University of Magdeburg; Fabian Joeres: University of Magdeburg; Patrick Saalfeld: University of Magdeburg; Bernhard Preim: University of Magdeburg; Christian Hansen: University of Magdeburg
Teaser Video: Watch Now
In this work, we present a group World-in-Miniature (WiM) navigation technique that allows a guide to navigate a team in collaborative virtual reality (VR) environments. We evaluated the usability, discomfort, and user performance of the technique compared to state-of-the-art group teleportation in a user study. The results show that the proposed technique induces less discomfort for the guide and has slight usability advantages. The group WiM technique also seems superior regarding task completion time for obstructed target destinations, and it provides potential benefits for effective group navigation in complex virtual environments and harder-to-reach target locations.
Impact of Parameter Disentanglement on Collaborative Alignment
Poster
Booth: B11
Tianyu Song: Technical University of Munich; Alejandro Martin-Gomez: Technical University of Munich; Qiaochu Wang: City University of Hong Kong; Arian Mehrfard: Technical University of Munich; Javad Fotouhi: Philips Research North America; Daniel Roth: Human-Centered Computing and Extended Reality; Ulrich Eck: Technische Universitaet Muenchen; Nassir Navab: Technische Universität München
Teaser Video: Watch Now
The interactive alignment of real and virtual content in AR is often non-trivial. Positional errors along the user's view direction frequently lead to misjudgment of an object's depth. This work takes advantage of alternative user viewpoints in collaborative settings to mitigate these errors. Furthermore, we systematically restrict the parameters used to control the virtual content's pose and investigate the impact of sharing and disentangling these parameters. Results show that alignment schemes that disentangle the control parameters improve overall alignment accuracy with a similar workload for the users and no significant increase in execution time.
Light VR Client for Point Cloud Navigation with 360° Images
Poster
Booth: B21
Clément Dluzniewski: CEA-List; Jérémie Le Garrec: CEA-List; Claude Andriot: CEA-List; Frédéric Noël: Grenoble-INP
Teaser Video: Watch Now
Since point clouds require a large amount of data to be visually pleasing, they tend to be voluminous. Hence, hardware with limited computational and memory capabilities may not be able to handle such large data structures. Here, we propose a lightweight VR client for exploring a static point cloud, stored on a remote server, through 360° images. The client visualizes the omnidirectional rendering of the point cloud in an HMD and moves to other positions with a teleportation metaphor. The main advantage of our proposal is the ability to run on modest hardware without requiring continuous high bandwidth.
Vibrating tilt platform enhancing immersive experience in VR
Poster
Booth: B22
Dorota Kamińska: Lodz University of Technology; Grzegorz Zwoliński: Lodz University of Technology; Anna Barbara Laska-Leśniewicz: Lodz University of Technology; Łukasz Adamek: Institute of Mechatronics and Information Systems
Teaser Video: Watch Now
Since their initial development, virtual reality systems have suffered from certain disadvantages. One of them was that they had only visual interfaces. This limitation, however, has been successfully overcome with the development of haptic technology. Peripheral solutions reinforcing and enriching the VR experience are now commonplace, and many haptic systems are being developed to deepen VR immersion. This paper discusses a new peripheral solution: a vibrating tilt platform with three angles of inclination for enhancing the VR experience.
Enabling Virtual Reality Interactions in Confined Spaces by Re-Associating Finger Motions
Poster
Booth: B28
Wen-Jie Tseng: Institut Polytechnique de Paris; Samuel Huron: Télécom Paris, Institut Polytechnique de Paris; Eric Lecolinet: Institut Polytechnique de Paris; Jan Gugenheimer: Institut Polytechnique de Paris
Teaser Video: Watch Now
As Virtual Reality (VR) headsets become mobile, people can interact in public places with applications that often require large arm movements. However, such open gestures are often uncomfortable and sometimes impossible in confined and public spaces (e.g., commuting in a vehicle). We introduce the concept of finger mapping, re-associating small-scale finger motions onto virtual arms in a larger VR space. Finger mapping supports various interactions (e.g., arm-swinging movement, selection, manipulation, and locomotion) when the environment is constrained and does not allow large gestures. Finally, we discuss the opportunities and challenges of using finger mapping for VR interactions.
Understanding Shoulder Surfer Behavior Using Virtual Reality
Poster
Booth: B27
Yasmeen Abdrabou: Bundeswehr University Munich; Radiah Rivu: Bundeswehr University Munich; Tarek Ammar: LMU Munich; Jonathan Liebers: University of Duisburg-Essen; Alia Saad: University of Duisburg-Essen; Carina Liebers: University of Duisburg-Essen; Uwe Gruenefeld: University of Duisburg-Essen; Pascal Knierim: Bundeswehr University Munich; Mohamed Khamis: University of Glasgow; Ville Mäkelä: University of Waterloo; Stefan Schneegass: University of Duisburg-Essen; Florian Alt: Bundeswehr University Munich
Teaser Video: Watch Now
We explore how attackers behave during shoulder surfing. Such behavior is difficult to study as it is often opportunistic and can occur wherever potential attackers can observe other people's private screens. Therefore, we investigate shoulder surfing using virtual reality (VR). We recruited 24 participants and observed their behavior in two virtual waiting scenarios: at a bus stop and in an open office space. In both scenarios, avatars interacted with private screens displaying different content, thus providing opportunities for shoulder surfing. From the results, we derive an understanding of factors influencing shoulder surfing behavior.
Priority-Dependent Display of Notifications in the Peripheral Field of View of Smart Glasses
Poster
Booth: B26
Anja K. Faulhaber: University of Kassel; Moritz Hoppe: Human-Machine Systems Engineering Group, University of Kassel; Ludger Schmidt: University of Kassel
Teaser Video: Watch Now
We propose a concept for displaying notifications in the peripheral field of view of smart glasses, aiming to balance perception and distraction depending on the priority of the notification. We designed three different visualizations for notifications of low, medium, and high priority. To evaluate the concept, we conducted a study with 24 participants who reacted to the notifications while performing a primary task. Reaction times for the low-priority notification were significantly higher, while the medium- and high-priority notifications did not show a clear difference.
Studying the User Adaptability to Hyperbolic Spaces and Delay Time Scenarios
Poster
Booth: B25
Ana Rita Rebelo: NOVA LINCS, NOVA School of Science and Technology; Rui Nóbrega: Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa; Fernando Birra: Nova School of Science and Technology
Teaser Video: Watch Now
To create immersive virtual experiences, it is crucial to understand how users perceive Virtual Environments (VEs) and which interaction techniques are most appropriate for their tasks. We created a tangible VE - the VR Lab - in which space and time conditions can be studied to analyse users' adaptability to different forms of interaction. As a case study, we restricted the scope of the investigation to two morphing scenarios: the space morphing scenario compares users' adaptability to Euclidean versus hyperbolic spaces, while the time morphing scenario aims to establish the values at which visual delay begins to affect performance.
"What a Mess!"': Traces of Use to Increase Asynchronous Social Presence in Shared Virtual Environments
Poster
Booth: B31
Linda Hirsch: LMU Munich; Anna Haller: LMU Munich; Andreas Butz: LMU Munich; Ceenu George: Augsburg University
Teaser Video: Watch Now
Shared virtual environments (VEs) face challenges in conveying and triggering users' feelings of social presence. Traces of use are implicit evidence of prior interactions that support social awareness in the real environment (RE); however, they have not yet been explored in VEs. We investigate the traces' effect on users' perception of asynchronous social presence in a within-subject study (N=26) comparing the user experience with and without traces. The traces significantly increased the feeling of social presence. We contribute an initial exploration of the traces-of-use concept in VEs for designing shared social spaces for long-term use.
Beyond Flicker, Beyond Blur: View-coherent Metameric Light Fields for Foveated Display
Poster
Booth: B32
Prithvi Kohli: University College London; David Robert Walton: University College London; Rafael Kuffner dos Anjos: University of Leeds; Anthony Steed: University College London; Tobias Ritschel: University College London
Teaser Video: Watch Now
Ventral metamers, pairs of images that may differ substantially in the periphery but are perceptually identical, offer exciting new possibilities in foveated rendering and image compression, as well as insights into the human visual system. However, existing literature has mainly focused on creating metamers of static images. In this work, we develop a method for creating sequences of metameric frames, specifically light fields, with enforced consistency along the temporal or angular dimension. This greatly expands the potential applications of these metamers, and extending them along a third dimension offers further potential for compression.
KARLI: Kid-friendly Augmented Reality for Primary School Health Education
Poster
Booth: B33
Mariella Seel: St. Poelten University of Applied Sciences; Michael Andorfer: St. Poelten University of Applied Sciences; Mario Heller: St. Poelten University of Applied Sciences; Andreas Jakl: St. Poelten University of Applied Sciences
Teaser Video: Watch Now
Acquiring health knowledge is essential and starts as early as primary school. Augmented Reality (AR) helps convey complex topics in a more understandable way. In this work, we present the prototype of KARLI, the "Kid-friendly Augmented Reality Learning Interface". This AR smartphone app for in-school use is designed for ages 8 to 10, enabling pupils to explore a 3D model of the human body based on the primary school curriculum. Underlining the importance of kid-friendly app development and testing, our evaluation with 38 pupils and 3 teachers indicates that KARLI is suitable and helpful for health education in primary schools.
Comparing principally imagination and interaction versions of a play anywhere mobile AR location-based story
Poster
Booth: B34
Gideon Raeburn: Queen Mary University of London; Laurissa Tokarchuk: Queen Mary University of London
Teaser Video: Watch Now
Augmented Reality (AR) allows virtual elements to be overlaid on the real world, providing new opportunities for location-based storytelling by offering blended environments that more closely resemble those in a story. However, there is limited research on how different interaction opportunities with such augmented surroundings affect user engagement and immersion in a story. A mobile AR app, Map Story 2, was developed to investigate this, offering a guided story-walk around a user's chosen location, either interacting with overlaid virtual objects to progress the story or being asked to imagine the same events playing out at each augmented location.
Design of Mentally and Physically Demanding Tasks as Distractors of Rotation Gains
Poster
Booth: B23
Daniel Neves Coelho: Curvature Games; Frank Steinicke: Universität Hamburg; Eike Langbehn: University of Applied Sciences Hamburg
Teaser Video: Watch Now
Rotation gains decouple real and virtual head turning. When a user reaches a boundary of the tracking space, their orientation can be reset by applying a rotation gain while requiring them to do a task that involves head rotations. To identify which kinds of tasks are best suited to mask the redirection, four tasks were designed that differ in their amounts of mental and physical demand: a spatial memory task, a verbal memory task, a physically exhausting task, and a task requiring physical skill. A first pilot study was conducted to evaluate their influence on redirection awareness.
Minimaps for Impossible Spaces: Improving Spatial Cognition in Self-Overlapping Virtual Rooms
Poster
Booth: B24
Rafael Epplée: Universität Hamburg; Eike Langbehn: University of Applied Sciences Hamburg
Teaser Video: Watch Now
Natural walking in virtual reality is constrained by the physical boundaries of the tracking space. Impossible spaces enlarge the virtual environment by creating overlapping architecture, letting multiple locations occupy the same physical space. Minimaps, small representations of the environment, are a common method to assist with wayfinding and navigation. Unfortunately, in a naive minimap implementation for an environment with impossible spaces, the overlap would be obvious. We investigated approaches to displaying impossible spaces on minimaps without drawing users' attention to the overlapping parts, and we conducted a study investigating the effects of such minimaps on spatial cognition.
STARE: Semantic Augmented Reality Decision Support in Smart Environments
Poster
Booth: E24
Mengya Zheng: University College Dublin; Xingyu Pan: University College Dublin; Nestor Velasco Bermeo: University College Dublin; Rosemary J. Thomas: University College Dublin; David Coyle: University College Dublin; Gregory M.P. O'Hare: University College Dublin (UCD); Abraham G. Campbell: University College Dublin
Teaser Video: Watch Now
The Internet of Things (IoT) facilitates real-time decision support within smart environments. Augmented Reality (AR) allows for the ubiquitous visualization of IoT-derived data, and AR visualization can simultaneously permit the cognitive and visual binding of information to the physical object(s) to which it pertains. Essential questions remain about efficiently filtering, prioritizing, determining relevance, and adjudicating individual information needs in real-time decision-making. To this end, this paper proposes a novel AR decision support framework (STARE) that supports immediate decisions within a smart environment by augmenting the user's focal objects with assemblies of semantically relevant IoT data and corresponding suggestions.
Emotional Support Companions in Virtual Reality
Poster
Booth: E26
Linda Graf: University of Duisburg-Essen; Sophie Abramowski: University of Duisburg-Essen; Melina Baßfeld: University of Duisburg-Essen; Kirsten Melanie Gerschermann: University of Duisburg-Essen; Marius Grießhammer: University of Duisburg-Essen; Leslie Scholemann: University of Duisburg-Essen; Maic Masuch: University of Duisburg-Essen
Teaser Video: Watch Now
According to social psychological models, the presence of another person, or even a virtual character, in stressful situations can have stress-reducing effects. The outcome can depend on the congruency between one's own mood and the perceived mood of the other person. This dependence guides the design of VR applications for stress reduction that use different virtual companion designs depending on the emotional state of individual users. This paper describes an ongoing design and development process towards such an emotionally supportive companion and shares initial results concerning the perception and stress-reducing effects of positively- and negatively-minded companions.
Heuristic Short-term Path Prediction for Spontaneous Human Locomotion in Virtual Open Spaces
Poster
Booth: C11
Christian Hirt: ETH Zürich; Marco Ketzel: ETH Zürich; Philip Graf: ETH Zürich; Christian Holz: ETH Zürich; Andreas Kunz: ETH Zurich
Teaser Video: Watch Now
Redirected Walking (RDW) shrinks large virtual environments to fit small physical tracking spaces while supporting natural locomotion. In predictive RDW, algorithms rely on predicting users' paths to adjust the induced redirection. Current predictors assume drastic simplifications or build on complex locomotion models; furthermore, adapting existing predictive RDW algorithms to unconstrained open spaces exponentially increases their computational complexity. In this paper, we propose simple yet flexible path prediction models supporting dynamic virtual open spaces. Our proposed models consist of a drop shape and a sector shape, each defining an area in which linear and clothoid walking trajectories are investigated.
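A minimal sketch of how a sector-shaped prediction area might be tested against candidate future positions; the radius and opening angle are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def in_prediction_sector(pos, heading_deg, candidate,
                         radius=3.0, half_angle_deg=45.0):
    """Return True if a candidate future position lies within `radius`
    metres of the user and within +/- `half_angle_deg` of the current
    walking direction (a 2D sector test)."""
    d = np.asarray(candidate, float) - np.asarray(pos, float)
    dist = np.linalg.norm(d)
    if dist == 0 or dist > radius:
        return dist == 0
    bearing = np.degrees(np.arctan2(d[1], d[0]))
    dev = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return abs(dev) <= half_angle_deg

print(in_prediction_sector((0, 0), 90.0, (0.5, 2.0)))   # ahead-left: True
print(in_prediction_sector((0, 0), 90.0, (2.0, -1.0)))  # behind: False
```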
Towards Eye-Perspective Rendering for Optical See-Through Head-Mounted Displays
Poster
Booth: C12
Gerlinde Emsenhuber: Salzburg University of Applied Sciences; Michael Domhardt: Salzburg University of Applied Sciences; Tobias Langlotz: University of Otago; Denis Kalkofen: Graz University of Technology; Markus Tatzgern: Salzburg University of Applied Sciences
Teaser Video: Watch Now
Optical see-through (OST) HMDs are a typical platform for Augmented Reality and allow users to experience augmentations. Using information about the real-world background, visualization algorithms adapt the layout and representation of content to improve legibility. Typically, background information is captured via built-in HMD cameras; however, the camera's view of the real-world scene differs distinctly from the user's view through the OST HMD. We propose eye-perspective rendering to synthesize high-fidelity renderings of the user's view for OST HMDs, enabling adaptation algorithms to utilize visual information as seen from the user's perspective and thereby improve the placement, rendering, and legibility of content.
Improved Offset Handling in Hand-centered Object Manipulation Extending Ray-casting
Poster
Booth: C13
Emil Edström: Linköping University; Tim William Cardell: Linköping University; Karljohan Lundin Palmerius: Linköping University
Teaser Video: Watch Now
One of the most common types of interaction in Virtual Reality (VR) is pose manipulation. The simultaneous manipulation of object position and orientation (6-DoF interaction), which can be complex in desktop environments, becomes simple in VR by mapping the natural motion of a 3D interaction controller (wand) to the motion of the object. In this paper, we acknowledge the importance of an offset variable in HOMER, present a way of handling the offset consistently when the user faces different directions during interaction, compare this to a naive static offset with respect to task completion time and number of re-grabs, and measure their respective System Usability Scale (SUS) and Raw NASA Task Load Index (RTLX) scores.
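One plausible way to keep such an offset consistent (a sketch under assumptions, not necessarily the paper's HOMER variant) is to store the grab offset in the hand's local frame at grab time and re-apply it in the current hand frame every frame, so it remains stable when the user turns:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def grab(hand_pos, hand_rot: R, obj_pos):
    """Store the object offset in hand-local coordinates at grab time."""
    return hand_rot.inv().apply(obj_pos - hand_pos)

def update(hand_pos, hand_rot: R, local_offset):
    """Recompute the object position from the current hand pose."""
    return hand_pos + hand_rot.apply(local_offset)

hand_pos0, hand_rot0 = np.array([0, 1, 0]), R.identity()
offset = grab(hand_pos0, hand_rot0, obj_pos=np.array([0, 1, -2]))

# The user turns 90 degrees: the object swings with the hand frame
# instead of keeping a stale world-space offset.
hand_rot1 = R.from_euler("y", 90, degrees=True)
print(update(hand_pos0, hand_rot1, offset))  # -> [-2.  1.  0.]
```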
If I Share with you my Perspective, Would you Share your Data with me?
Poster
Booth: C14
Tianyu Song: Technical University of Munich; Ulrich Eck: Technische Universitaet Muenchen; Nassir Navab: Technische Universität München
Teaser Video: Watch Now
Real-time 3D reconstruction using multiple RGBD cameras and its online transmission facilitate the adoption of mixed reality telepresence. However, such a system can only cover a limited volume, and increasing the number of RGBD cameras is infeasible due to setup complexity and space constraints. To address this issue, we present the concept of Dynamic 3D View Sharing, which complements the views of a 3D reconstruction system with the dynamic view of the user's HMD. Here, we present a markerless calibration method that integrates the two seamlessly into mixed reality telepresence systems without disrupting the current workflow.
Proximity in VR: The Importance of Character Attractiveness and Participant Gender
Poster
Booth: C21
Katja Zibrek: INRIA; Benjamin Niay: Inria; Anne-Hélène Olivier: University of Rennes 2; Ludovic Hoyet: Inria; Julien Pettré: Inria; Rachel McDonnell: Trinity College Dublin
Teaser Video: Watch Now
In this study, we expand upon recent evidence that the motion of virtual characters affects the proximity chosen by users immersed with them in virtual reality (VR): characters with attractive motions decrease proximity, females prefer larger distances from characters than males, and no effect of character gender on proximity has been found. We designed a similar experiment in which users observed walking motions in VR displayed on male and female virtual characters. Our results show patterns similar to those found in previous studies, while some differences due to the specifics of the characters emerged.
A Comparison of Input Devices for Precise Interaction Tasks in VR-based Surgical Planning and Training
Poster
Booth: C22
Mareen Allgaier: Otto-von-Guericke University Magdeburg; Vuthea Chheang: Otto-von-Guericke University Magdeburg; Patrick Saalfeld: Otto-von-Guericke University; Vikram Apilla: Otto-von-Guericke University Magdeburg; Tobias Huber: University Medicine of the Johannes Gutenberg-University; Florentine Huettl: University Medicine of the Johannes Gutenberg-University; Belal Neyazi: University Hospital Magdeburg; I. Erol Sandalcioglu: University Hospital Magdeburg; Christian Hansen: Otto-von-Guericke University Magdeburg; Bernhard Preim: Otto-von-Guericke University Magdeburg; Sylvia Saalfeld: Otto von Guericke University
Teaser Video: Watch Now
We present a comparison of input devices for common interaction tasks in medical VR training and planning based on two relevant applications. The chosen devices, VR controllers, VR Ink, data gloves, and a real medical instrument, differ in their degree of specialization and their grip. The conducted user study shows that the controllers and VR Ink performed significantly better than the other devices regarding precision. Concerning questionnaire results, no device stands out but most participants preferred the VR Ink for both applications. These results can serve as a guide to identify an appropriate device for future medical VR applications.
Interacting with a Torque-Controlled Virtual Human in Virtual Reality for Ergonomics Studies
Poster
Booth: C23
Jacques Zhong: CEA List; Vincent Weistroffer: CEA List; Claude Andriot: CEA; Pauline Maurice: INRIA LORIA; Francis Colas: INRIA LORIA
Teaser Video: Watch Now
This paper presents a new tool to help ergonomists conduct studies on digital human models (DHMs) in a more intuitive and physically consistent way. To this end, a virtual reality setup was combined with a DHM in a real-time physics simulation. The user is thus able to directly manipulate the DHM within the virtual workplace and quickly experiment with a variety of scenarios.
Cloud-Based Cross-Platform Collaborative AR in Flutter
Poster
Booth: C24
Lars Carius: Technical University of Munich; Christian Eichhorn: Technical University of Munich; David A. Plecher: Technical University; Gudrun Klinker: Technical University of Munich
Teaser Video: Watch Now
We present a novel collaborative AR framework aimed at lowering the entry barriers and operating expenses of AR applications. It includes a cross-platform, cloud-based Flutter plugin combined with a web-based content management system that allows non-technical staff to take over operational tasks such as providing 3D models. To provide a state-of-the-art feature set, the AR Flutter plugin builds upon ARCore and ARKit and unifies the two frameworks with an abstraction layer written in Dart. Our contribution closes a gap by providing an AR framework that integrates seamlessly with the familiar development process of cross-platform apps. With the accompanying content management system, AR can be used as a tool to achieve business objectives.
Mixed Reality Support for Bridge Inspectors
Poster
Booth: C28
Urs Riedlinger: Fraunhofer FIT; Florian Klein: HHVISION; Marcos Hill: LIST Digital; Christian Lambracht: University of Applied Sciences Bochum; Sonja Nieborowski: Federal Highway Research Institute; Ralph Holst: Federal Highway Research Institute; Sascha Bahlau: LIST Digital; Leif Oppermann: Fraunhofer FIT
Teaser Video: Watch Now
Bridge inspectors work for the safety of our infrastructure and mobility. At regular intervals, they conduct structural inspections - a manual task with a long-standing and firmly standardized analogue tradition. We propose a mixed analogue and digital workflow that includes Mixed Reality views ready-to-hand for bridge inspectors during their work at and in a bridge. Our demonstrator was iteratively designed in a collaborative research project and turned into a tablet-based application to digitally support that work. It employs BIM data containing 3D geometry data and additional information about the structure, such as previous damage reports.
Study of communication modalities for teaching distance information
Poster
Booth: C31
Francesco Fastelli: Univ Evry, Université Paris Saclay; Cassandre Simon: Univ Evry, Université Paris Saclay; Aylen Ricca: Arts et Métiers, UBFC, HESAM, Institut Image; Amine Chellali: Univ Evry, Université Paris Saclay
Teaser Video: Watch Now
We present an exploratory study comparing the haptic, visual, and verbal modalities for communicating distance information in a shared virtual environment. The results show that the visual modality decreased the distance estimation error, while the haptic modality decreased the completion time. The verbal modality increased the sense of copresence but was the least preferred modality. These results suggest that a combination of modalities could improve the communication of distance information to a partner. These findings can contribute to improving the design of collaborative VR systems and open new research perspectives on the effectiveness of multimodal interaction.
Using 3D Reconstruction to create Pervasive Augmented Reality Experiences: A comparison
Poster
Booth: C38
Miguel Neves: University of Aveiro; Bernardo Marques: Universidade de Aveiro; Tiago Madeira: Universidade de Aveiro; Paulo Dias: University of Aveiro; Beatriz Sousa Santos: University of Aveiro
Teaser Video: Watch Now
This paper presents a prototype for configuring and visualizing Pervasive Augmented Reality (AR) experiences in two versions: desktop and mobile. It uses 3D scans of physical environments to provide a reconstructed digital representation of such spaces for the desktop version and to enable positional tracking for the mobile version. While the desktop version presents a non-immersive setting, the mobile version provides continuous AR in the physical environment. Both versions can be used to place virtual content and ultimately configure an AR experience. The authoring capabilities of the two versions were compared in a user study focused on evaluating their usability.
Does Remote Expert Representation Really Matter: A Comparison of Video and AR-based Guidance
Poster
Booth: C37
Bernardo Marques: Universidade de Aveiro; Samuel Silva: Universidade de Aveiro; Paulo Dias: University of Aveiro; Beatriz Sousa Santos: University of Aveiro
Teaser Video: Watch Now
This work describes a user study aimed at understanding how the remote expert's representation affects the sense of social presence in remote guidance scenarios. We compared a traditional video chat solution with an Augmented Reality (AR) annotation tool, selected due to ongoing research with partners from the industry sector following the insights of a participatory design process. A well-defined problem was used: a synchronous maintenance task with 4 completion stages in which a remote expert using a computer guided 26 on-site participants wielding a handheld device. The results of the study are described and discussed.
Whac-A-Mole: Exploring Virtual Reality (VR) for Upper-Limb Post-Stroke Physical Rehabilitation based on Participatory Design and Serious Games
Poster
Booth: C36
Helder Paraense Serra: University of Aveiro; Bernardo Marques: Universidade de Aveiro; Paula Amorim: University of Beira Interior; Paulo Dias: University of Aveiro; Beatriz Sousa Santos: University of Aveiro
Teaser Video: Watch Now
This paper describes a Human-Centered Design methodology aimed at understanding how Virtual Reality (VR) can assist in post-stroke physical rehabilitation. Based on insights from stroke survivors and healthcare staff, a serious game prototype is proposed. We focused on upper-limb rehabilitation, which inspired the game narrative and the movements users must perform. The game supports two modes: (1) a normal version, in which users can use either arm to pick up a virtual hammer and hit objects; and (2) a mirror version, which converts a traditional therapy approach to VR, providing the illusion that the arm affected by the stroke is moving. Both were evaluated through a user study.
Distinguishing Visual Fatigue, Mental Workload and Acute Stress in Immersive Virtual Reality with Physiological Data: pre-test results
Poster
Booth: C35
Alexis Souchet: CNRS; Weifei Xie: CNRS; Domitile Lourdeaux: Sorbonne University Association, University of Technology of Compiègne, CNRS
Teaser Video: Watch Now
Virtual Reality Induced Symptoms and Effects (VRISE) can arise during VR use. The experimental paradigms for assessing them are heterogeneous, and various factors can induce physiological variations. We therefore developed a Stroop task to study and distinguish VRISE, using eye tracking, ECG, EDA, and the VRSQ, NASA-TLX, and STAI-6 questionnaires. Pre-tests were conducted with 6 subjects exposed to 4 experimental conditions: control, dual task, stressful, and stereoscopy. Subjects reported different subjective visual fatigue and mental workload, but not stress, between conditions. Several physiological features also differed between conditions. A VRISE detector based on physiological data, with the questionnaires as an index, can be envisioned.
Towards Scalable and Real-time Markerless Motion Capture
Poster
Booth: C41
Georgios Albanis: University of Thessaly; Anargyros Chatzitofis: AC CODEWHEEL LTD; Spyridon Thermos: AC Codewheel Ltd; Nikolaos Zioulis: Independent Researcher; Kostas Kolomvatsos: University of Thessaly
Teaser Video: Watch Now
Human motion capture and perception without the need for complex systems with specialized cameras or wearable equipment is the holy grail for many human-centric applications. Here, we present a scalable markerless motion capture method that estimates 3D human poses in real time using low-cost hardware. We do so by replacing inefficient 3D joint reconstruction techniques, such as learnable triangulation and feature splatting, with a novel uncertainty-driven approach that exploits the available depth information and the edge sensors' spatial alignment to fuse the per-viewpoint estimates into final 3D joint positions.
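One plausible reading of an uncertainty-driven fusion (the authors' exact weighting is not given in the abstract) is an inverse-variance weighted average of the per-viewpoint joint estimates:

```python
import numpy as np

def fuse_joints(estimates: np.ndarray, variances: np.ndarray) -> np.ndarray:
    """Inverse-variance weighted fusion of per-viewpoint 3D joint
    estimates (V views x J joints x 3) into final joint positions
    (J x 3): views with lower uncertainty contribute more."""
    w = 1.0 / np.maximum(variances, 1e-9)          # (V, J) confidence weights
    w = w / w.sum(axis=0, keepdims=True)           # normalise over views
    return (w[..., None] * estimates).sum(axis=0)  # (J, 3)

views, joints = 4, 17
est = np.random.randn(views, joints, 3)            # toy per-view estimates
var = np.random.rand(views, joints) + 0.1          # toy per-view variances
print(fuse_joints(est, var).shape)                 # -> (17, 3)
```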
A Mixed Reality Guidance System for Blind and Visually Impaired People
Poster
Booth: C42
Hannah Schieber: Friedrich Alexander University Erlangen-Nürnberg; Constantin Kleinbeck: Friedrich-Alexander Universität Erlangen-Nürnberg; Charlotte Pradel: Friedrich Alexander University Erlangen-Nürnberg; Luisa Theelke: Friedrich Alexander University Erlangen-Nürnberg; Daniel Roth: Friedrich Alexander University Erlangen-Nürnberg
Teaser Video: Watch Now
Persons affected by blindness or visual impairments face challenges in spatially understanding unfamiliar environments. To obtain such an understanding, they have to sense their environment closely and carefully. In particular, objects outside the sensing area of analog assistive devices, such as a white cane, are simply not perceived and can cause collisions. This project proposes a mixed reality guidance system that aims at preventing such problems. We use object detection and the 3D sensing capabilities of a mixed reality head-mounted device to inform users about their spatial surroundings.
Holding Hands for Short-Term Group Navigation in Social Virtual Reality
Poster
Booth: C43
Tim Weissker: Bauhaus-Universitaet Weimar; Pauline Bimberg: Bauhaus-Universitaet Weimar; Ankith Kodanda: Bauhaus-Universitaet Weimar; Bernd Froehlich: Bauhaus-Universität Weimar
Teaser Video: Watch Now
Prior research has shown that social interactions in VR benefit from techniques for group navigation that bring multiple users to a common destination together. In this work, we propose the metaphor of holding onto another user's virtual hand for the ad-hoc formation of a navigational group and report on the positive results of an initial usability study in an exploratory two-user scenario.
Stay Safe! Safety Precautions for Walking on a Conventional Treadmill in VR
Poster
Booth: C44
Sandra Birnstiel: University of Würzburg; Sebastian Oberdörfer: University of Würzburg; Marc Erich Latoschik: Department of Computer Science, HCI Group
Teaser Video: Watch Now
Conventional treadmills are used in virtual reality (VR) applications, for example for rehabilitation training or gait studies. However, using these devices in VR poses risks of injury. This study therefore investigates safety precautions for using a conventional treadmill in a walking task. We designed a safety belt and displayed parts of the treadmill in VR. The safety belt was much appreciated by the participants and did not affect walking behavior. However, participants requested more visual cues in the user's field of view.
Exploring How, for Whom and in Which Contexts Extended Reality Training 'Works' in Upskilling Healthcare Workers: A Realist Review
Poster
Booth: D11
Norina Gasteiger: University of Manchester; Sabine N van der Veer: University of Manchester; Paul Wilson: University of Manchester; Dawn Dowding: University of Manchester
Teaser Video: Watch Now
Extended reality (XR), including virtual reality (VR) and augmented reality (AR), may overcome barriers to training healthcare workers, such as resource constraints. However, the effectiveness of XR training is disputed and not well understood. Our realist review explores how, for whom, and in what contexts AR and VR training 'works' in upskilling healthcare workers. Eighty papers informed our program theory, while 46 empirical studies tested and refined it. We conclude that XR triggers perceptions of realism and deep immersion and enables visualization, interactive learning, skill enhancement, and repeated practice within a safe learning environment, consequently improving skills, learning/knowledge, and learner satisfaction.
ARTFM: Augmented Reality Visualization of Tool Functionality Manuals in Operating Rooms
Poster
Booth: D12
Constantin Kleinbeck: Friedrich-Alexander Universität Erlangen-Nürnberg; Hannah Schieber: Friedrich-Alexander University; Sebastian Andress: Ludwig-Maximilians-Universität München; Christian Krautz: Universitätsklinikum Erlangen; Daniel Roth: Friedrich-Alexander-Universität Erlangen-Nürnberg
Teaser Video: Watch Now
Error-free surgical procedures are crucial for a patient's health. However, with the increasing complexity and variety of surgical instruments, it is difficult for clinical staff to acquire detailed assembly and usage knowledge, leading to errors in process and preparation steps. Yet the gold standard for retrieving necessary information when problems occur is still the paper-based manual, and reading through the necessary instructions is time-consuming and decreases care quality. We propose ARTFM, a process-integrated manual that highlights the correct parts needed, their location, and step-by-step instructions for assembling the instrument, using an augmented reality head-mounted display.
Comparing Controller with the Hand Gestures Pinch and Grab for Picking Up and Placing Virtual Objects
Poster
Booth: D13
Alexander Schäfer: TU Kaiserslautern; Gerd Reis: German Research Center for Artificial Intelligence; Didier Stricker: German Research Center for Artificial Intelligence
Teaser Video: Watch Now
Grabbing virtual objects is one of the essential tasks in Augmented, Virtual, and Mixed Reality applications. Modern applications usually use a simple pinch gesture for grabbing and moving objects. However, picking up objects by pinching has disadvantages: it can be an unnatural gesture for picking up objects, and it prevents the implementation of other gestures performed with the thumb and index finger. It is therefore not the optimal choice for many applications. In this work, different implementations for grabbing and placing virtual objects are proposed and compared, and the performance and accuracy of the proposed techniques are measured.
Social Presence in VR Empathy Game for Children: Empathic Interaction with the Virtual Characters
Poster
Booth: D14
Ekaterina Muravevskaia: University of Florida; Christina Gardner-McCune: University of Florida
Teaser Video: Watch Now
This paper discusses children's empathic interactions with VR characters. The study and findings were derived from a larger research study that explored the design and evaluation of empathic experiences for young children (6-9 years old) in VR environments. We found similarities between how children interacted with the VR characters and with real people (i.e., cognitive empathy and emotional contagion), providing initial insight into children's experience of social presence with VR characters. We suggest follow-up research on the connection between empathy and social presence to explore ways to create empathic VR experiences for children.
Supervised Machine Learning Hand Gesture Classification in VR for Immersive Training
Poster
Booth: D24
Ozkan Cem Bahceci: BT; Anasol Pena-Rios: BT; Gavin Buckingham: University of Exeter; Anthony Conway: BT
Teaser Video: Watch Now
The fast adoption of immersive wearables evidences the need for more intuitive interaction methods between virtual environments and their users. Advances in wearables are making built-in real-time hand tracking more common. However, wearable providers make only a limited set of gestures available for developer use. This limits their use in VR-based training solutions, where users need to complete tasks that mimic real-world activities as closely as possible to gain valuable insights into those tasks. We present a virtual reality application that collects data on hand characteristics and analyses the collected data to identify hand gestures, working towards more natural interaction.
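A minimal sketch of supervised gesture classification over hand-characteristic features; the feature layout, classifier choice, and toy data are assumptions, since the poster does not specify its model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features could be, e.g., fingertip distances and joint angles from
# the headset's hand tracking; here they are random placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15))          # 200 samples x 15 hand features
y = rng.integers(0, 4, size=200)        # 4 gesture classes (toy labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))               # predicted gesture classes
```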
MeasVRe: Measurement Tools for Unity VR Applications
Poster
Booth: D23
Jolly Chen: University of Amsterdam; Robert G. Belleman: University of Amsterdam
Teaser Video: Watch Now
An indispensable facility that is often missing in VR applications for studying complex 3D models is the ability to take measurements. This paper describes a toolkit for Unity VR applications that allows distance, angle, trace, surface area, and bounding-box volume measurements to be taken by placing markers. We show that markers can be efficiently snapped to a model by ray casting. Moreover, we validated distance measurements with our toolkit by performing an experiment in which we measured inter-branch distances of corals. Lastly, the toolkit can save measurements locally or upload them to a logging server.
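The abstract does not show the snapping routine itself; below is a minimal sketch of how snapping a marker to a model by ray casting might look, assuming a plain triangle-soup mesh and a hypothetical `snap_marker` helper (MeasVRe's actual Unity implementation presumably uses the engine's physics ray casts):

```python
import numpy as np

def ray_triangle_hit(origin, direction, v0, v1, v2, eps=1e-8):
    """Möller-Trumbore ray-triangle intersection; returns the hit distance or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    t_vec = origin - v0
    u = np.dot(t_vec, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(t_vec, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None     # ignore hits behind the ray origin

def snap_marker(origin, direction, vertices, triangles):
    """Place a measurement marker at the nearest ray-mesh intersection, if any."""
    best_t = None
    for i0, i1, i2 in triangles:
        t = ray_triangle_hit(origin, direction, vertices[i0], vertices[i1], vertices[i2])
        if t is not None and (best_t is None or t < best_t):
            best_t = t
    return None if best_t is None else origin + best_t * direction
```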
Automatic 3D Avatar Generation from a Single RGB Frontal Image
Poster
Booth: D22
Alejandro Beacco: Universitat de Barcelona; Jaime Gallego: University of Barcelona; Mel Slater: University of Barcelona
Teaser Video: Watch Now
We present a fully automatic system to obtain a realistic 3D avatar reconstruction of a person using only a frontal RGB image. Our proposed workflow first determines the pose, shape, and semantic information from the input image. All this information is processed to create the skeleton and the 3D skinned textured mesh that constitute the final avatar. We use a specific head reconstruction method to correctly match our final mesh to a realistic avatar. Our pipeline focuses on three main aspects: automation of the process, identification of the person, and usability of the avatar.
MR-RIEW: An MR Toolkit for Designing Remote Immersive Experiment Workflows
Poster
Booth: D21
Daniele Giunchi: University College London; Riccardo Bovo: Imperial College London; Anthony Steed: University College London; Thomas Heinis: Imperial College
Teaser Video: Watch Now
We present MR-RIEW, a toolkit for virtual and mixed reality that provides researchers with a dynamic way to design an immersive experiment workflow, including instructions, environments, sessions, trials, and questionnaires. It is implemented in Unity via scriptable objects, allowing simple customisation: the graphic elements, scenes, and questionnaires can be selected and associated without code. MR-RIEW can save questionnaire answers locally on the headset or remotely; the remote option uses the Google Firebase service and requires only minimal configuration.
Virtual Human Coherence and Plausibility - Towards a Validated Scale
Poster
Booth: D35
David Mal: University of Würzburg, Department of Computer Science, HCI Group; Erik Wolf: University of Würzburg, Department of Computer Science, HCI Group; Nina Döllinger: Human-Technology-Systems Group; Mario Botsch: TU Dortmund University; Carolin Wienrich: University Würzburg; Marc Erich Latoschik: Department of Computer Science, HCI Group
Teaser Video: Watch Now
Virtual humans contribute to users' state of plausibility in various XR applications. We present the development and preliminary evaluation of a self-assessment questionnaire to quantify a virtual human's plausibility in virtual environments based on eleven concise items. A principal component analysis of 650 appraisals collected in an online survey revealed two highly reliable components within the items. We interpret the components as possible factors, i.e., appearance and behavior plausibility and match with the virtual environment, and propose future work towards a standardized virtual human plausibility scale by validating the structure and sensitivity of both sub-components in XR environments.
Democratic Video Pass-Through for Commercial Virtual Reality Devices
Poster
Booth: D36
Diego González Morín: Nokia; Francisco Pereira: Bell Labs; Ester Gonzalez-Sosa: Bell Labs; Pablo Perez: Bell Labs; Alvaro Villegas: Bell Labs
Teaser Video: Watch Now
Video pass-through Extended Reality (XR) is rapidly gaining interest from developers and researchers. However, the ecosystem of video pass-through-enabled XR devices is still bound to expensive hardware. In this paper, we describe our custom hardware and software setup for providing effective video pass-through capabilities to inexpensive commercial Virtual Reality (VR) devices. The proposed hardware setup incorporates a low-cost HD stereo camera rigidly attached to the VR device using a custom 3D-printed mount. Our software solution, implemented in Unity, overcomes hardware-specific limitations, such as camera delays, in a simple yet effective manner.
Bringing Real Body as Self-Avatar into Mixed Reality: A Gamified Volcano Experience
Poster
Booth: D37
Diego González Morín: Nokia; Ester Gonzalez-Sosa: Bell Labs; Pablo Perez: Bell Labs; Alvaro Villegas: Bell Labs
Teaser Video: Watch Now
In this work we showcase a Mixed Reality experience that brings the user's own body into the virtual environment as a self-avatar. It is based on an algorithm that segments the egocentric full body in real time, using a regular stereo camera placed in front of a commercial Virtual Reality device. To the best of our knowledge, this is the first work that integrates the real body as a self-avatar while meeting both real-time and segmentation-quality requirements under real-life conditions. We conclude the paper with a preliminary subjective evaluation measuring presence and embodiment factors, which suggests the potential of using one's own body as a self-avatar in Mixed Reality.
A Replication Study to Measure the Perceived Three-Dimensional Location of Virtual Objects in Optical See Through Augmented Reality
Poster
Booth: E26
Farzana Alam Khan: Mississippi State University; Mohammed Safayet Arefin: Mississippi State University; Nate Phillips: Mississippi State University; J. Edward Swan II: Mississippi State University
Teaser Video: Watch Now
An important research question in optical see-through (OST) augmented reality (AR) is: how accurately and precisely can a virtual object's real-world location be perceived? Previously, a method was developed to measure the perceived three-dimensional location of virtual objects in OST AR. In this research, a replication study is reported, which examined whether the perceived location of virtual objects is biased in the direction of the dominant eye. The successful replication analysis suggests that perceptual accuracy is not biased in the direction of the dominant eye. Compared to the previous study's findings, overall perceptual accuracy increased, and precision was similar.
Who will Trust my Digital Twin? Maybe a Clerk in a Brick and Mortar Fashion Shop
Poster
Booth: D38
Lorenzo Stacchio: University of Bologna; Michele Perlino: University of Bologna; Ulderico Vagnoni: University of Bologna; Federica Sasso: Independent Artist; Claudia Scorolli: University of Bologna; Gustavo Marfia: University of Bologna
Teaser Video: Watch Now
Digital Twin (DT) models mirror the life of physical entities and are adopted to optimize several industrial processes. Although well established in industrial fields, one of the most exciting settings where DTs may be employed is the MetaVerse, in the form of Human Digital Twins (HDTs). We present a preliminary study that examines the efficacy of HDT-human interactions in the context of a fashion shop. Based on the results obtained from thirty-two participants in our experiments, we begin a discussion of the pros and cons of this approach for x-commerce.
Measuring Virtual Object Location with X-Ray Vision at Action Space Distances
Poster
Booth: D44
Nate Phillips: Mississippi State University; Farzana Alam Khan: Mississippi State University; Mohammed Safayet Arefin: Mississippi State University; Cindy Bethel: Mississsippi State University; J. Edward Swan II: Mississippi State University
Teaser Video: Watch Now
Accurate and usable x-ray vision is a significant goal in augmented reality (AR) development. X-ray vision, or the ability to comprehend location and object information when it is presented through an opaque barrier, needs to successfully convey scene information to be a viable use case for AR. Further, this investigation should be performed in an ecologically valid context to best test x-ray vision. This research experimentally evaluates the perceived object location of stimuli presented with x-ray vision, compared to real-world perceived object location through a window, at action-space distances of 1.5 to 15 meters.
Preliminary evaluation of an IVR user experience design model using eye-tracking attention measurements
Poster
Booth: D43
Elena Dzardanova: University of the Aegean; Vlasios Kasapakis: University of the Aegean
Teaser Video: Watch Now
The present study drafts a simplified IVR user experience design model to guide a preliminary evaluation of attention variance across semantically distinct elements. 27 participants freely explored an interactive, state-of-the-art virtual setting while equipped with eye-tracking technology that captured attention-duration measurements. Initial results confirm significant differences in attention across elements and provide a first indication toward a more detailed categorical organization of experience components for follow-up experimentation.
Proposing the RecursiVerse Overlay Application for the MetaVerse
Poster
Booth: D42
Lorenzo Donatiello: University of Bologna; Gustavo Marfia: University of Bologna
Teaser Video: Watch Now
In a still uncertain future for the MetaVerse, we dare to envision one of its possible developments, the RecursiVerse. The RecursiVerse is an overlay application that may be built upon the MetaVerse, amounting to a symmetrical virtual-real space in which a human being may rely on human digital twins that can move, operate, and recursively replicate to collaboratively perform multiple tasks. The RecursiVerse may hence extend what will eventually be possible thanks to the MetaVerse, providing a service to a society that faces increasing cognitive and perceptual challenges due to growing work-life imbalances and increasing cognitive loads.
Augmented Reality In-Field Observation Creation and Visualization in Underperforming Areas
Poster
Booth: D41
Mengya Zheng: University College Dublin; Nestor Velasco Bermeo: University College Dublin; Abraham G. Campbell: University College Dublin
Teaser Video: Watch Now
To precisely diagnose and address farming issues that lead to crop-yield underperformance, agronomists need to conduct field trips to the targeted underperforming areas and record the observed issues to support crop treatment decisions. Traditional field data visualization tools may not provide intuitive visualizations to support such field trip investigations. This paper presents an Augmented Reality (AR) in-field observation tool that guides agronomists in annotating observed issues at precisely targeted underperforming areas. These recorded AR observation annotations are then synchronized with a Web-based Interactive Multi-Layer Mapping Tool for a complete precision-farming decision-making process.
Jamming in MR: Towards Real-Time Music Collaboration in Mixed Reality
Poster
Booth: E23
Ruben Schlagowski: University of Augsburg; Kunal Gupta: The University of Auckland; Silvan Mertes: University of Augsburg; Mark Billinghurst: University of Auckland; Susanne Metzner: University of Augsburg; Elisabeth André: University of Augsburg
Teaser Video: Watch Now
Recent pandemic-related contact restrictions have made it difficult for musicians to meet in person to make music. As a result, there has been an increased demand for applications that enable remote and real-time music collaboration. One desirable goal here is to give musicians a sense of social presence, to make them feel that they are "on site" with their musical partners. We conducted a focus group study to investigate the impact of remote jamming on users' affect. Further, we gathered user requirements for a Mixed Reality system that enables real-time jamming and developed a prototype based on these findings.
Effects of the Level of Detail on the Recognition of City Landmarks in Virtual Environments
Poster
Booth: E22
Achref Doula: TU Darmstadt; Philipp Kaufmann: TU Darmstadt; Alejandro Sanchez Guinea: TU Darmstadt; Max Mühlhäuser: TU Darmstadt
Teaser Video: Watch Now
The reconstruction of city landmarks is central to creating recognizable virtual environments representing real cities. Despite recent advances, it is still not clear what level of detail (LOD) to adopt when reconstructing those landmarks for correct recognition, and whether particular architectural styles present specific challenges in this respect. In this paper, we investigate the effect of LOD on landmark recognition in general and on some architectural styles in particular. The results of our user study show that higher LODs lead to better landmark identification. In particular, Neoclassical-style buildings need more detail to be distinguished from similar buildings.
Facial emotion recognition analysis using deep learning through RGB-D imagery of VR participants through partially occluded facial types
Poster
Booth: E21
Ian Mills: Walton Institute for Information and Communication Systems Science; Frances Cleary: Walton Institute for Information and Communication Systems Science
Teaser Video: Watch Now
This research poster outlines the initial development of a facial emotion recognition (FER) evaluation system based on RGB-D imagery captured via a mobile device. The study features a control group with non-occluded faces and a set of participants wearing a head-mounted display (HMD) to represent an occluded facial type. We explore an architecture for a FER system that is suitable for occluded facial analysis. The paper details the methodology, experimental design, and the future work to be carried out to deliver such a system.
Irish Sign Language in a Virtual Reality Environment
Poster
Booth: E27
Ryan McCloskey: Walton Institute for Information and Communication Systems Science
Teaser Video: Watch Now
This paper describes a study in which groups of users are tasked with learning Irish Sign Language (ISL) gestures in distinct environments, with a comparison of outcomes to determine the general effectiveness of each method. One group of users learns via traditional tuition methods, and the second group is placed in a virtual reality environment with a custom training module for teaching ISL gestures. This paper discusses the study and details the processes, motivations, and potential future work.
A validation study to trigger nicotine craving in virtual reality
Poster
Booth: E28
Chun-Jou Yu: Goldsmiths; Aitor Rovira: Oxford Health NHS Foundation Trust; Xueni Pan: Goldsmiths; Daniel Freeman: University of Oxford
Teaser Video: Watch Now
We built a virtual beer garden containing various smoking cues (both verbal and non-verbal), using a motion capture system to record realistic smoking-related animations. Our 3-minute VR experience was optimized for the Oculus Quest 2 with hand tracking enabled. We conducted a pilot study with 13 non-treatment-seeking nicotine-dependent cigarette smokers. The preliminary results indicate that this VR experience led to high levels of presence and to a significant increase in nicotine craving - but only for those who reported a high level of immersion.
X-Ray Device Positioning with Augmented Reality Visual Feedback
Poster
Booth: E32
Kartikay Tehlan: Technical University of Munich; Alexander Winkler: Technical University of Munich; Daniel Roth: Human-Centered Computing and Extended Reality; Nassir Navab: Technische Universität München
Teaser Video: Watch Now
In minimally invasive surgeries, one common way to verify progress is the use of an intraoperative X-ray device (called a C-arm due to its characteristic shape). Its control, however, remains challenging owing to its complex movements. We propose the use of an Augmented Reality Head-Mounted Display (AR-HMD) to let the surgeon choose a desired X-ray view during the intervention, with the corresponding C-arm configuration provided as visual feedback. The study participants' feedback, despite being critical of the HMD hardware limitations, suggests an inclination towards using AR for orthopaedic surgeries, especially on complex or unusual anatomies.
HoloCMDS: Investigating Around Field of View Glanceable Commands Selection in AR-HMDs
Poster
Booth: E31
Rajkumar Darbar: INRIA Bordeaux; Arnaud Prouzeau: Inria; Martin Hachet: Inria
Teaser Video: Watch Now
Augmented reality merges the real and virtual worlds seamlessly in real time. However, contextual menus are needed to manipulate virtual objects rendered in physical space. Unfortunately, designing a menu for augmented reality head-mounted displays (AR-HMDs) is challenging because of their limited display field of view (FOV). In this paper, we propose HoloCMDS to support quick access to contextual commands in AR-HMDs and conduct an initial experiment to gather users' feedback on this technique.
Rereading the Narrative Paradox for Virtual Reality Theatre
Poster
Booth: E34
Xiaotian Jiang: Goldsmiths, University of London; Xueni Pan: Goldsmiths; Jonathan Freeman: Goldsmiths University
Teaser Video: Watch Now
We examined several key issues around audience autonomy in VR theatre. Informed by a literature review and a qualitative user study (grounded theory), we developed a conceptual model that enables a quantifiable evaluation of audience experience in VR theatre. A second user study, inspired by the 'narrative paradox', investigates the relationship between spatial exploration and narrative comprehension in two VR performances. Our results show that although navigation distracted the participants from following the full story, they were more engaged, more attached, and had a better overall experience as a result of their freedom to move and interact.
Investigation of the potential use of Virtual Reality for Agoraphobia Exposure therapy
Poster
Booth: E33
Sinead Barnett: Walton Institute; Ian Mills: Walton Institute; Frances Cleary: Walton Institute
Teaser Video: Watch Now
This preliminary research study evaluates the potential of and need for virtual reality in the mental health sector, focusing specifically on the treatment of agoraphobia. A survey was sent to numerous participants who have been diagnosed with and are currently receiving treatment for agoraphobia. The results indicate a demand for virtual reality treatment for agoraphobia, which in turn can lead to future studies into VR therapy.
Posters - Expo Hall C
Effects of Clutching Mechanism on Remote Object Manipulation Tasks
Poster
Booth: B11
Zihan Gao: Harbin Engineering University; Huiqiang Wang: Harbin Engineering University; Anqi Ge: Harbin Medical University; Hongwu Lv: Harbin Engineering University; Guangsheng Feng: Harbin Engineering University
Teaser Video: Watch Now
Remote object manipulation in practice is often an iterative process that requires clutching. However, while many interaction techniques have been designed for manipulating remote objects, the clutching mechanism is an important but often overlooked aspect of manipulation tasks. In this paper, we evaluate the effects of the clutching mechanism on remote object manipulation tasks, comparing two clutching mechanisms under various task settings. The results suggest that an efficient clutching mechanism can effectively improve usability in remote object manipulation tasks.
A Testbed for Exploring Multi-Level Precueing in Augmented Reality
Poster
Booth: B12
Jen-Shuo Liu: Columbia University; Barbara Tversky: Columbia Teachers College; Steven Feiner: Columbia University
Teaser Video: Watch Now
Precueing information about upcoming subtasks prior to performing them has the potential to make an entire task faster and easier to accomplish than cueing only the current subtask. Most AR and VR research on precueing has addressed path-following tasks requiring simple actions at a series of locations, such as pushing a button or simply visiting each location. We present a testbed for exploring multi-level precueing in a richer task that requires the user to move their hand between specified locations, transporting an object between some of them and rotating it to a designated orientation.
Resolution Tradeoff in Gameplay Experience, Performance, and Simulator Sickness in Virtual Reality Games
Poster
Booth: B13
Jialin Wang: Xi'an Jiaotong-liverpool University; Rongkai Shi: Xi'an Jiaotong-Liverpool University; Zehui Xiao: Xi'an Jiaotong-Liverpool University; Xueying Qin: Shandong University; Hai-Ning Liang: Xi'an Jiaotong-Liverpool University
Teaser Video: Watch Now
Higher resolution is one of the main directions and drivers in the development of Virtual Reality (VR) Head-Mounted Displays (HMDs). However, given the associated higher cost, the benefits of higher resolution for user experience are unclear, especially in VR games. This research investigates the effects of resolution on gameplay experience and simulator sickness in VR games. To this end, we designed an experiment to collect gameplay experience, simulator sickness (SS), and player performance data with a VR first-person shooter game. Our results indicate that 2K resolution is an important threshold for an enhanced gameplay experience without degrading performance or increasing SS levels.
VCoach: Enabling Personalized Boxing Training in Virtual Reality
Poster
Booth: B14
Hao Chen: Beijing Institute of Technology; Yujia Wang: Beijing Institute of Technology; Wei Liang: Beijing Institute of Technology
Teaser Video: Watch Now
We propose VCoach, a virtual reality training system that automatically generates interactive, personalized boxing training drills for individual trainees. Drills are generated in real time based on the trainee's updated performance, including the evaluation of punch speed, reaction time, and punch pose through wearable VR devices. Each drill is visualized as a sequence of target points on a virtual heavy bag together with the corresponding punch motion, as well as performance feedback. Our experiments show that VCoach can generate personalized training drills that help trainees improve their skills efficiently.
Control with Vergence Eye Movement in Augmented Reality See-Through Vision
Poster
Booth: B21
Zhimin Wang: Beihang University; Yuxin Zhao: School of Computer Science and Engineering; Feng Lu: Beihang University
Teaser Video: Watch Now
Augmented Reality (AR) see-through vision enables the user to see through a wall and view occluded objects. Most existing works use common modalities, e.g., button clicks or speech, to control the see-through display. However, since the visual system itself is used to view the see-through content, adding a separate interaction channel can distract the user and degrade the experience. In this paper, we propose a novel interaction method that uses vergence eye movement to control see-through vision in AR. Specifically, we customize eye cameras and design a gaze depth estimation method for the Microsoft HoloLens 2. With our algorithm, the fixation depth can be computed from the vergence and used to control the see-through vision.
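The abstract does not spell out the depth computation, but under the standard symmetric-vergence assumption the fixation depth follows directly from the interpupillary distance and the angle between the two normalized gaze rays. A minimal sketch (the function name and defaults are illustrative, not the authors' API):

```python
import numpy as np

def fixation_depth(gaze_left, gaze_right, ipd=0.063):
    """Estimate fixation depth (metres) from the vergence angle between the
    two eyes' normalized gaze directions, assuming symmetric vergence."""
    cos_theta = np.clip(np.dot(gaze_left, gaze_right), -1.0, 1.0)
    theta = np.arccos(cos_theta)      # vergence angle in radians
    if theta < 1e-6:                  # near-parallel gaze: effectively at infinity
        return float("inf")
    return (ipd / 2.0) / np.tan(theta / 2.0)

# Example: two gaze rays converging slightly inward meet at roughly 1.6 m.
left = np.array([0.02, 0.0, 1.0]);  left /= np.linalg.norm(left)
right = np.array([-0.02, 0.0, 1.0]); right /= np.linalg.norm(right)
print(f"estimated depth: {fixation_depth(left, right):.2f} m")
```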
Semi-Analytical Surface Tension Model for Free Surface Flows
Poster
Booth: B22
Nurshat Menglik: Peking University; Hebin Yao: Kuaishou Technology; Yi Zheng: Kuaishou Technology; Jian Shi: Institute of Automation, Chinese Academy of Sciences; Ying Qiao: Institute of Software, Chinese Academy of Sciences; Xiaowei He: Institute of Software, Chinese Academy of Sciences
Teaser Video: Watch Now
In this paper, a semi-analytical surface tension model is proposed for smoothed particle hydrodynamics (SPH). Different from previous approaches, cohesive and adhesive forces in our model are unified within a surface-energy framework for nonuniform systems. To calculate the adhesive force, we use a semi-analytical solution to convert the volume integral into a surface integral, so that triangular meshes representing solid boundaries can be directly introduced into liquid-solid interactions. A gradient descent algorithm is employed to optimize the objective function, which represents the total energy of the fluid. Experiments show that our model can efficiently handle complex solid boundaries with surface-tension-driven phenomena.
Deformable torso anatomy education with three-dimensional autostereoscopic visualization and free-hand interaction
Poster
Booth: B28
Nan Zhang: Tsinghua University; Hongkai Wang: Dalian University of Technology; Tianqi Huang: Tsinghua University; Xinran Zhang: Tsinghua University; Hongen Liao: Tsinghua University
Teaser Video: Watch Now
Virtual reality/augmented reality has advantages for immersive learning and conveying meta-information in anatomy education. In this study, we present an interactive naked-eye 3D torso anatomy education environment based on population anatomical information. In particular, we utilize deformable anatomy models, constructed from large datasets of healthy adults, to convey anatomical knowledge of true organ shapes and population anatomical variance. In addition, the proposed system combines free-hand interaction with 3D autostereoscopic visualization, supporting multiple users without glasses. A user study with fourteen students demonstrates that the system is appropriate and useful for torso anatomy education.
FusedAR: Adaptive Environment Lighting Reconstruction for Visually Coherent Mobile AR Rendering
Poster
Booth: B27
Yiqin Zhao: Worcester Polytechnic Institute; Tian Guo: Worcester Polytechnic Institute
Teaser Video: Watch Now
Obtaining accurate omnidirectional environment lighting for high-quality rendering in mobile augmented reality is challenging due to the practical limitations of mobile devices and the inherent spatial variance of lighting. In this paper, we present a novel adaptive environment lighting reconstruction method called FusedAR, which is designed from the outset to account for mobile characteristics, e.g., by exploiting mobile users' natural behavior of pointing the camera perpendicular to the observation-rendering direction. Our initial evaluation shows that FusedAR achieves better rendering effects than both a recent deep-learning-based AR lighting estimation system and environment lighting captured by 360° cameras.
Designing a Physiological Loop for the Adaptation of Virtual Human Characters in a Social VR Scenario
Poster
Booth: B26
Francesco Chiossi: LMU Munich; Robin Welsch: LMU Munich; Steeven Villa: LMU Munich; Lewis L Chuang: IfADo-Leibniz Institute for Working Environment and Human Factors; Sven Mayer: LMU Munich
Teaser Video: Watch Now
Social virtual reality is going mainstream, not only for entertainment but also for productivity and education. This makes it important to design social VR scenarios that support users' performance. We present a physiologically adaptive system that optimizes visual complexity in a dual-task scenario based on electrodermal activity. Specifically, the system adapts the number of non-player characters while users jointly perform an N-back task (primary) and a visual detection task (secondary). Our preliminary results show that when the complexity of the secondary task is optimized, users report an improved user experience.
GestureExplorer: Immersive Visualisation and Exploration of Gesture Data
Poster
Booth: B25
Ang Li: Monash University ; Jiazhou Liu: Monash University; Maxime Cordeil: Monash University; Barrett Ens: Monash University
Teaser Video: Watch Now
This paper presents GestureExplorer, which features versatile immersive visualisations to grant the user free control over their perspective, allowing them to gain a better understanding of gestures. It provides multiple data visualisation views, and interactive features to support analysis and exploration of gesture datasets. A pair of iterative user studies provides initial feedback from several participants, including experts on immersive visualisation, and demonstrates the potential of GestureExplorer for providing a useful and engaging experience for exploring gesture data.
Multi Touch Smartphone Based Progressive Refinement VR Selection
Poster
Booth: B35
Elaheh Samimi: Carleton University; Robert J Teather: Carleton University
Teaser Video: Watch Now
We developed a progressive refinement technique for VR object selection using a smartphone as a controller. Our technique combines progressive refinement with the marking-menu-based CountMarks, which uses multi-finger touch gestures to "short-circuit" multi-item marking menus: users indicate a specific item in a sub-menu by pressing a specific number of fingers on the screen while swiping in the desired menu's direction. This reduces the number of steps in progressive refinement selection.
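As an illustration of this selection logic, a sketch of the mapping from a swipe plus a finger count to a menu item is given below (the layout and parameters are assumptions for illustration, not the published CountMarks implementation):

```python
def count_marks_select(swipe_angle_deg, finger_count, n_directions=8, items_per_menu=5):
    """Map a swipe direction and the number of touching fingers to a menu item:
    the swipe picks the sub-menu, the finger count picks the item within it."""
    if not 1 <= finger_count <= items_per_menu:
        raise ValueError("finger count outside menu range")
    sector = 360.0 / n_directions
    menu = int(((swipe_angle_deg + sector / 2) % 360) // sector)  # nearest direction
    item = finger_count - 1                                       # 1 finger -> first item
    return menu, item

# A three-finger swipe to the right (0 degrees) selects item 2 of sub-menu 0.
print(count_marks_select(0, 3))
```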
Predictive Power of Pupil Dynamics in a Team Based Virtual Reality Task
Poster
Booth: B32
Yinuo Qin: Laboratory for Intelligent Imaging and Neural Computing; Weijia Zhang: Columbia University; Richard T Lee: Columbia University; Xiaoxiao Sun: Columbia University; Paul Sajda: Columbia University
Teaser Video: Watch Now
In this work, we describe a team-based VR task termed the Apollo Distributed Control Task (ADCT), in which individuals, each with control of a single independent degree of freedom and a limited view of the environment, must work together to guide a virtual spacecraft back to Earth. Focusing on the analysis of pupil dynamics, which have been linked to cognitive and physiological processes such as arousal, cognitive control, and working memory, we find that pupil diameter changes are predictive of multiple task-related dimensions, including task difficulty, team-member role, and type of communication.
Ebublio: Edge Assisted Multi-user 360-Degree Video Streaming
Poster
Booth: B33
Yili Jin: The Chinese University of Hong Kong, Shenzhen; Junhua Liu: The Chinese University of Hong Kong, Shenzhen; Fangxin Wang: The Chinese University of Hong Kong, Shenzhen
Teaser Video: Watch Now
Compared to traditional videos, streaming 360° videos is more difficult. We propose Ebublio, a novel intelligent edge caching framework consisting of a collaborative FoV prediction (CFP) module and a long-term tile caching optimization (LTO) module. The former integrates features of the video content, the user's trajectory, and other users' records for combined prediction. The latter employs the Lyapunov framework and subgradient optimization to approach the optimal cache replacement policy. Our trace-driven evaluation demonstrates the superiority of the framework, with about a 42% improvement in FoV prediction and a 36% improvement in QoE under similar traffic consumption.
3Dify: Extruding Common 2D Charts with Timeseries Data
Poster
Booth: B34
Richard Brath: Uncharted Software; Martin Matusiak: Uncharted Software Inc.
Teaser Video: Watch Now
3D charts are not common in financial services. We review chart use in practice and create 3D financial visualizations starting from 2D charts used extensively in financial services, then extend them into the third dimension with timeseries data. We embed the 2D view into the 3D scene, constrain interaction, and add depth cues to facilitate comprehension. Usage and extensions indicate success.
Virtual Reality Point Cloud Annotation
Poster
Booth: B23
Anton Franzluebbers: University of Georgia; Changying Li: University of Georgia; Andrew Paterson: University of Georgia; Kyle Johnsen: University of Georgia
Teaser Video: Watch Now
This work presents an immersive headset-based virtual reality visualization and annotation system for point clouds, oriented towards application on laser scans of plants. The system can be used to paint regions or individual points with fine detail, even with large, dense point clouds. A non-immersive desktop interface was designed for comparison within the same application. A within-subjects user study (N=16) was conducted to compare these interfaces for annotation and counting tasks. Results showed a strong preference for the immersive virtual reality interface, likely as a result of perceived and actual significant differences in task performance.
Design and Evaluation of an Augmented Reality Application for Learning Spatial Transformations and Their Mathematical Representations
Poster
Booth: B24
Zohreh Shaghaghian: Texas A&M University; Heather Burte: Texas A&M University; Dezhen Song: Texas A&M University; Wei Yan: Texas A&M University
Teaser Video: Watch Now
There is a close relation between spatial thinking and mathematical problem-solving. This paper presents a newly developed educational Augmented Reality (AR) mobile application, BRICKxAR/T, to help students intuitively learn spatial transformations and the related mathematics through play. A pilot study with 7 undergraduate students evaluated students' learning gains through a mental rotation test and a math test on transformation matrices. The results show that most students performed better, with higher scores, after learning with the app. Students found the app interesting to play and useful for learning geometric transformations and matrices.
VR Edufication on Historic Lunar Roving Missions
Poster
Booth: D32
Huadong Zhang: Rochester Institute of Technology; Lizhou Cao: Rochester Institute of Technology; Angelica Howell: Rochester Institute of Technology; Chao Peng: Rochester Institute of Technology
Teaser Video: Watch Now
This work presents the design and evaluation of an educational VR game to teach the historic Apollo lunar roving missions. The game consists of three gameplay scenes, each aiming to convey one type of knowledge. The game adheres to the learning objectives, presents the contents, and conveys knowledge in both active and passive interaction modes. We conducted a user study focusing on understanding the influence of different interaction modes on learning outcomes and learning engagement.
The Immediate and Retained Effectiveness of One-time Virtual Reality Exposure in Enhancing Intercultural Sensitivity
Poster
Booth: D31
Richard Chen Li: The Hong Kong Polytechnic University ; Angel Lo Lo Kon: City University of Hong Kong; Justin So: University College London; Horace Ho Shing Ip: City University of Hong Kong
Teaser Video: Watch Now
This study investigates the immediate and retained effects of one-time virtual reality (VR) exposure on intercultural sensitivity (IS) and identifies the contributing factors. Three virtual scenarios about ethnic minorities in Hong Kong were created for the empirical study. The longitudinal results from 30 participants (15 male, 15 female) showed that both the immediate and retained effects of the one-time VR exposure on IS were significant. Moreover, linear growth curve models suggested that among the female participants, presence and emotional empathy were closely associated with the change in IS over time, but this relation was not significant among the males.
Redirected Placement: Retargeting Destinations of Passive Props for Enhancing Bimanual Haptic Feedback in Virtual Reality
Poster
Booth: C11
Xuanhui Yang: Shanghai Jiao Tong University; Yixiao Kang: Shanghai Jiao Tong University; Xubo Yang: Shanghai Jiao Tong University
Teaser Video: Watch Now
Haptic retargeting is a commonly used technique to match passive props to virtual objects and add tactile feedback in Virtual Reality (VR). However, researchers have mainly focused on single-hand retargeting and applied these techniques primarily to tasks of touching objects. In this work, we propose a novel retargeting solution for grabbing and placing objects in VR, called redirected placement (RP), which is applied when placing virtual objects. The key idea is that when the user places a virtual object, the physical prop in the user's hand can be guided to a position that is easier to match with multiple other virtual objects, without the redirection being detected by the user.
Moving Visual-Inertial Odometry into Cross-platform Web for Markerless Augmented Reality
Poster
Booth: C12
Yakun Huang: Beijing University of Posts and Telecommunications; Zhijie Tan: State Key Laboratory of Networking and Switching Technology; Xiuquan Qiao: Beijing University of Posts and Telecommunications; Jie Zhao: Beijing University of Posts and Telecommunications; Fenghua Tian: Beijing University of Posts and Telecommunications
Teaser Video: Watch Now
Widespread mobile AR implementation still suffers from platform-specific adaptation across different mobile platforms. Enabling AR experiences on the cross-platform web is a potential alternative that provides a unified implementation. We present a lightweight VIO implementation on the web that requires neither visual markers nor cumbersome frameworks for AR services. The contributions include 1) designing a novel VIO architecture that balances user experience and efficiency; 2) optimizing a single-thread VIO algorithm to avoid sophisticated multi-threading management and enhance compatibility; and 3) filtering noisy IMU data and adapting efficient numerical libraries such as SuiteSparse for the massive computational load.
Automatic Virtual Portals Placement for Efficient VR Navigation
Poster
Booth: C13
Lili Wang: Beihang University; Yi Liu: State Key Laboratory of Virtual Reality Technology and Systems,school of computer science and engineering, Beihang University; Xiaolong Liu: Beihang University; Jian Wu: Beihang University
Teaser Video: Watch Now
Portal placement in a large virtual scene can help users improve navigation efficiency, but determining the number and positions of the portals is challenging. In this paper, we propose two automatic virtual portal placement methods for efficient VR navigation. To reduce the number of reverse redirections, we also propose a real-time portal orientation determination algorithm. For any given single-floor outdoor virtual scene, our methods can automatically place the portals.
Material Reflectance Property Estimation of Complex Objects Using an Attention Network
Poster
Booth: C14
Bin Cheng: Beijing Normal University; Junli Zhao: Qingdao University; Fuqing Duan: Beijing Normal University
Teaser Video: Watch Now
Material reflectance property modeling can be used in realistic rendering to generate realistic appearances for virtual objects. However, current works mainly focus on near-planar objects. In this paper, we propose an end-to-end network framework with an attention mechanism to estimate the reflectance properties of any 3D object surface from a single image, where a separate attention module is used for each reflectance property to learn property-specific features. We also generate a material dataset by rendering a set of 3D models with complex shapes. The dataset is suitable for reflectance property estimation of arbitrary complex-shaped objects. Experiments validate the proposed method.
3D Scene Reconstruction from RGB Images Using Dynamic Graph Convolution for Augmented Reality
Poster
Booth: C21
Tzu-Hsuan Weng: National Taiwan University; Robin Fischer: National Taiwan University; Li-Chen Fu: National Taiwan University
Teaser Video: Watch Now
The 3D scene reconstruction task aims to reconstruct the object shapes, object poses, and 3D layout of a scene. In the field of augmented reality, this information is required for interactions with the surroundings. In this paper, we develop a holistic end-to-end scene reconstruction system using only RGB images. We further design an architecture that can adapt to different types of objects through our graph convolution network during object surface generation. Moreover, a scene-merging strategy is proposed to alleviate the occlusion problem by continuously merging different views. This also allows our system to reconstruct the complete surroundings of a room.
Designing a Mixed Reality System for Exploring Genetic Mutation Data of Cancer Patients
Poster
Booth: C22
Syeda Aniqa Imtiaz: Ryerson University; Alexander Bakogeorge: Ryerson University; Nour Abu Hantash: Ryerson University; Caleb Barynin: Ryerson University; Roozbeh Manshaei: Ryerson University; Ali Mazalek: Ryerson University
Teaser Video: Watch Now
Increased availability of cancer genomics mutation data provides researchers the opportunity to discover associations among genetic mutation patterns within the same organ, as well as similar mutation patterns among different organs. However, the complexity, variety, and scale of the multi-dimensional data involved in analyzing mutations across organs pose challenges for clinicians and researchers in drawing such relationships. We present a prototype application that leverages multiple coordinated views in mixed reality (MR) to enable investigations of genetic mutation patterns and the organs affected by cancer. We believe our prototype has the potential to enhance data and association discovery within and across different organs.
A Pinch-based Text Entry Method for Head-mounted Displays
Poster
Booth: C23
Haiyan Jiang: Beijing Institute of Technology; Dongdong Weng: Beijing Institute of Technology; Xiaonuo Dongye: Beijing Institute of Technology; Yue Liu: Beijing Institute of Technology
Teaser Video: Watch Now
Pinch gestures have been used for text entry in head-mounted displays (HMDs), enabling comfortable and eyes-free text entry. However, the number of distinct pinch gestures is limited, making it difficult to input all characters. In addition, common pinch-based methods with a QWERTY keyboard require accurate control of hand position and angle, which can be affected by natural tremors and the Heisenberg effect. We therefore propose a new text entry method for HMDs that combines hand positions and pinch gestures with a condensed key-based keyboard, enabling one-handed text entry. The results of a preliminary study show that the mean input speed of the proposed method is 7.60 words per minute (WPM).
Analysis of Emotional Tendency and Syntactic Properties for VR Game Reviews
Poster
Booth: C24
Yang Gao: Beihang University; Anqi Chen: Beihang University; Susan Chi: NetEase Games; Guangtao Zhang: Technical University of Denmark; Aimin Hao: Beihang University
Teaser Video: Watch Now
Studies of player reviews can help game developers design and optimize VR games. To this end, we investigated 288,685 reviews of 506 VR games on the Steam platform, analyzing their sentiment tendencies using the machine-learning-based model SKEP. The analysis finds that although some reviews are marked "recommend", they actually express the opposite emotional tendency. We also study syntactic properties using the natural language processing (NLP) toolkits Stanza and NLTK, and we find that cybersickness is a significant concern for players.
Role of Dynamic Affordance and Cognitive Load in the Design of Extended Reality based Simulation Environments for Surgical Contexts
Poster
Booth: C28
Avinash Gupta: Oklahoma State University; J Cecil: Oklahoma State University; M Pirela-Cruz: Dignity Regional Medical Center
Teaser Video: Watch Now
In this paper, HCI-based design criteria for Extended Reality (XR) based training environments are presented. The design criteria explored in the paper help lay a foundation for the creation of human-centric XR environments to train users in an orthopedic surgical procedure. The HCI-based perspective investigates criteria such as affordance and cognitive load during the design. The paper focuses on the design of XR-based environments based on a participatory design approach and information-centric modeling. The testing and assessment strategy presented provides insights into the impact of such HCI-based criteria on participants' acquisition of skills and knowledge during interactions with the XR environments.
MienCap: Performance-Based Facial Animation with Live Mood Dynamics
Poster
Booth: C31
Ye Pan: Shanghai Jiaotong University ; Ruisi Zhang: Shanghai Jiao Tong University; Jingying Wang: University of Michigan; Nengfu Chen: Shanghai Jiao Tong University; Yilin Qiu: Shanghai Jiao Tong University; Yu Ding: Netease Fuxi AI Lab; Kenny Mitchell: Edinburgh Napier University
Teaser Video: Watch Now
Our purpose is to improve performance-based animation so that it can drive believable, perceptually valid 3D stylized characters. By combining traditional blendshape animation techniques with machine learning models, we present a real-time motion capture system, called MienCap, which drives character expressions in a geometrically consistent and perceptually valid way. We demonstrate the effectiveness of our system by comparing it to the commercial product Faceware. Results reveal that recognition ratings of the expressions depicted by animated characters driven by our system are statistically higher than those for Faceware. Our results provide animators with a system for creating the expressions they wish to use more quickly and accurately.
Preliminary analysis of effective assistance timing for iterative visual search tasks using gaze-based visual cognition estimation
Poster
Booth: C38
Syunsuke Yoshida: University of Hyogo; Makoto Sei: ATR; Akira Utsumi: ATR; Hirotake Yamazoe: University of Hyogo
Teaser Video: Watch Now
In this paper, focusing on whether a person has visually recognized a target (visual cognition, VC) in iterative visual-search tasks, we propose an efficient assistance method based on VC. In the proposed method, we first estimate the participant's VC of the target in the previous task. We then determine the target for the next task based on this estimate and start guiding the participant's attention to the next target at the moment of VC. By initiating the guidance at the time the previous target is visually recognized, we can guide attention earlier and achieve efficient attention guidance. Preliminary experimental results showed that VC-based assistance improves task performance.
Towards Conducting Effective Locomotion Through Hardware Transformation in Head-Mounted-Device - A Review Study
Poster
Booth: C37
Pawankumar Gururaj Yendigeri: International Institute of Information Technology, Hyderabad; Raghav Mittal: International Institute of Information Technology Hyderabad; Sai Anirudh Karre: IIIT Hyderabad; Raghu Reddy: IIIT; Syed Azeemuddin: IIIT
Teaser Video: Watch Now
Locomotion in Virtual Reality (VR) relies on a motion tracking unit that simulates user movements according to the Degrees of Freedom (DOF) of the application. For effective locomotion, VR practitioners may have to upgrade their hardware from 3-DOF to 6-DOF. In this context, we conducted a literature review on the different motion tracking methods employed in Head-Mounted Devices (HMDs) to understand such hardware transformations for locomotion in VR. Our observations led us to formulate a taxonomy of tracking methods for locomotion in VR based on system design. Our study also captures the different metrics that VR practitioners use to evaluate the hardware with respect to context, performance, and significance for locomotion.
Head in the Clouds - Floating Locomotion in Virtual Reality
Poster
Booth: C36
Priya Ganapathi: Indian Institute Of Technology Guwahati; Felix Johannes Thiel: University College London; David Swapp: University College London; Anthony Steed: University College London
Teaser Video: Watch Now
Navigating large virtual spaces within the confines of a small tracked volume while seated becomes a serious accessibility issue when users' lower seating position reduces their visibility and makes it uncomfortable to reach for items. Hence, we propose a "floating" accessibility technique, in which a seated VR user experiences the virtual environment from the perspective of standing eye height. We conducted a user study comparing sitting, standing, and floating conditions and observed that the floating technique had no detrimental effect compared to the standing technique and a slight benefit over the sitting technique.
OmniSyn: Synthesizing 360 Videos with Wide-baseline Panoramas
Poster
Booth: C35
David Li: University Of Maryland; Yinda Zhang: Google; Christian Haene: Google; Danhang Tang: Google; Amitabh Varshney: University of Maryland College Park; Ruofei Du: Google
Teaser Video: Watch Now
Immersive maps such as Google Street View and Bing Streetside provide true-to-life views with a massive collection of panoramas. However, these panoramas are only available at sparse intervals along the path they are taken, resulting in visual discontinuities during navigation. In this paper, we leverage the unique characteristics of wide-baseline panoramas and present OmniSyn, a novel pipeline for 360 view synthesis between wide-baseline panoramas. OmniSyn predicts omnidirectional depth maps, renders meshes, and synthesizes intermediate views. We envision our work may inspire future research for this real-world task and lead to smoother experiences navigating immersive maps.
AiRType: An Air-Tapping Keyboard for Augmented Reality Environments
Poster
Booth: C41
Necip Fazıl Yıldıran: University of Central Florida; Ülkü Meteriz-Yıldıran: University of Central Florida; David Mohaisen: University of Central Florida
Teaser Video: Watch Now
We present AiRType, a bare-hand text entry technique for AR/VR HMDs that enables more natural interaction. Hand models in the virtual environment mirror the user's hand movements, and the user targets and selects keys via these hand models. AiRType fully leverages the additional dimension without restraining the interaction space to the user's arm length: the keyboard can be attached anywhere and scaled freely. We evaluated AiRType against a baseline, the built-in keyboard of the Magic Leap 1. AiRType shows a 27% decrease in error rate, a 3.3% increase in characters per second, and a 9.4% increase in user satisfaction.
Head-Worn Markerless Augmented Reality Inside A Moving Vehicle
Poster
Booth: C42
Zhiwei Zhu: SRI International; Mikhail Sizintsev: SRI International; Glenn Murray: SRI International; Han-Pang Chiu: SRI International; Ali Chaudhry: SRI International; Supun Samarasekera: SRI International; Rakesh Kumar: SRI International
Teaser Video: Watch Now
This paper describes a system that provides general head-worn outdoor augmented reality (AR) capability for a user inside a moving vehicle. Our system combines pose estimates from both the vehicle navigation system and wearable sensors to address the failure of commercial AR devices inside moving vehicles. We continuously match natural visual features from the camera against a prebuilt database of interior vehicle scenes. To improve robustness in a moving vehicle with other passengers, a human detection module is adapted to filter people out of the camera scene. Experiments demonstrate the effectiveness of the proposed solution.
Using External Video to Attack Behavior-Based Security Mechanisms in Virtual Reality (VR)
Poster
Booth: C43
Robert Miller: Clarkson University; Natasha Kholgade Banerjee: Clarkson University; Sean Banerjee: Clarkson University
Teaser Video: Watch Now
As VR systems become prevalent in domains such as healthcare and education, sensitive data must be protected from attacks. Password-based techniques are circumvented once an attacker gains access to the user's credentials, while behavior-based approaches are susceptible to attacks from malicious users who mimic the actions of a genuine user or gain access to the 3D trajectories. We investigate a novel attack in which a malicious user obtains a 2D video of a genuine user interacting in VR. We demonstrate that an attacker can extract 2D motion trajectories from the video and match them to 3D enrollment trajectories to defeat behavior-based VR security.
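The matching pipeline is only summarized in the abstract; a minimal sketch of the core idea, assuming a known pinhole camera and trajectories resampled to equal length (all names hypothetical), is to project each enrolled 3D trajectory into the attacker's view and score its similarity to the trajectory extracted from the video:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project Nx3 world points into the attacker's image with a pinhole model."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera coordinates
    uv = (K @ cam)[:2] / cam[2]               # perspective divide
    return uv.T

def trajectory_score(traj_2d, enrolled_3d, K, R, t):
    """Lower score = closer match between the video trajectory and an enrolled
    3D trajectory; both are normalized to reduce sensitivity to camera placement."""
    proj = project(enrolled_3d, K, R, t)
    a = (traj_2d - traj_2d.mean(0)) / (traj_2d.std() + 1e-9)
    b = (proj - proj.mean(0)) / (proj.std() + 1e-9)
    return float(np.linalg.norm(a - b, axis=1).mean())

# An attacker would pick the enrolled user whose trajectory minimizes this score.
```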
VR-based Context Priming to Increase Student Engagement and Academic Performance
Poster
Booth: C44
Daniel Hawes: Carleton University; Ali Arya: Carleton University
Teaser Video: Watch Now
Research suggests that virtual environments can be designed to increase engagement and performance on many cognitive tasks. This paper compares the efficacy of 3D environments specifically designed to prime these effects within Virtual Reality (VR). A 27-minute seminar, "The Creative Process of Making an Animated Movie", was presented to 51 participants within three VR learning spaces: two prime and one no-prime. The prime conditions comprised two situated learning environments, an animation studio and a theatre with animation artifacts, while the no-prime condition was a theatre without artifacts. Increased academic performance was observed in both prime conditions. A UX survey was also completed.
From 2D to 3D: Facilitating Single-Finger Mid-Air Typing on Virtual Keyboards with Probabilistic Touch Modeling
Poster
Booth: D11
Xin Yi: Tsinghua University; Chen Liang: Tsinghua University; Haozhan Chen: Tsinghua University; Jiuxu Song: University of California, Santa Barbara; Chun Yu: Tsinghua University; Yuanchun Shi: Tsinghua University
Teaser Video: Watch Now
Mid-air text entry on virtual keyboards suffers from the lack of tactile feedback, bringing challenges to both tap detection and input prediction. In this poster, we demonstrated the feasibility of efficient single-finger typing in mid-air through probabilistic touch modeling. We first collected users' typing data on different sizes of virtual keyboards. Based on analyzing the data, we derived an input prediction algorithm that incorporated probabilistic touch detection and elastic probabilistic decoding. In the evaluation study where the participants performed real text entry tasks with this technique, they reached a pick-up single-finger typing speed of 24.0 WPM with 2.8% word-level error rate.
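The decoding algorithm is only named above; a toy sketch in the same spirit — a Gaussian touch model around key centers combined with a language-model prior — looks as follows (the keyboard coordinates and two-word lexicon are invented for illustration):

```python
import math

KEY_POS = {"h": (5.5, 1.0), "i": (7.5, 0.0), "u": (6.5, 0.0)}  # toy key centers
LEXICON = {"hi": 0.02, "hu": 0.001}                            # word -> prior

def log_gauss2d(tap, mean, sigma):
    """Log-likelihood of a tap under an isotropic Gaussian around a key center."""
    dx, dy = tap[0] - mean[0], tap[1] - mean[1]
    return -(dx * dx + dy * dy) / (2 * sigma * sigma) - math.log(2 * math.pi * sigma * sigma)

def decode(taps, sigma=0.6):
    """Pick the word maximizing log P(word) + sum_i log P(tap_i | key_i)."""
    best, best_score = None, -math.inf
    for word, prior in LEXICON.items():
        if len(word) != len(taps):
            continue
        score = math.log(prior) + sum(
            log_gauss2d(tap, KEY_POS[ch], sigma) for tap, ch in zip(taps, word))
        if score > best_score:
            best, best_score = word, score
    return best

print(decode([(5.3, 1.1), (7.2, 0.2)]))  # -> "hi"
```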
Splitting Large Convolutional Neural Network Layers to Run Real-Time Applications on Mixed-Reality Hardware: Extended Abstract
Poster
Booth: D12
Anthony Beug: University of Regina; Howard J Hamilton: University of Regina
Teaser Video: Watch Now
When executing a computationally expensive Convolutional Neural Network (CNN) in a real-time mixed-reality application, some convolutional layers may take longer than the target frame time to execute. In this work, dropped frames produced by large convolutional layers are avoided by dividing the work performed in a convolution so that it can be executed over multiple frames. Existing convolution splitting techniques are applied to pretrained CNNs with expensive convolutions, and static schedules are designed to execute the resulting layers over multiple frames. Overhead is introduced, but the average frame rate is increased since delays produced by computing large layers are avoided.
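As a rough illustration of the splitting idea under the simplest static schedule — dividing one expensive convolution over its output channels and computing one chunk per rendered frame — consider this sketch (a naive NumPy convolution; not the authors' implementation):

```python
import numpy as np

def conv2d_channels(x, w, out_slice):
    """Naive 'valid' 2D convolution computing only the output channels in
    out_slice; x is (C_in, H, W) and w is (C_out, C_in, kH, kW)."""
    c_out, c_in, kh, kw = w.shape
    H, W = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    y = np.zeros((out_slice.stop - out_slice.start, H, W))
    for o, oc in enumerate(range(out_slice.start, out_slice.stop)):
        for i in range(H):
            for j in range(W):
                y[o, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[oc])
    return y

def split_over_frames(x, w, n_frames):
    """Static schedule: spread one layer's output channels over n_frames,
    yielding one partial result per rendered frame."""
    c_out = w.shape[0]
    step = -(-c_out // n_frames)                  # ceil division
    for f in range(n_frames):
        s = slice(f * step, min((f + 1) * step, c_out))
        if s.start < s.stop:
            yield conv2d_channels(x, w, s)        # work done in frame f

x = np.random.rand(3, 8, 8); w = np.random.rand(16, 3, 3, 3)
y = np.concatenate(list(split_over_frames(x, w, n_frames=4)), axis=0)
assert y.shape == (16, 6, 6)  # same output as one big convolution, amortized over 4 frames
```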
Digital Twins of Wave Energy Generation Based on Artificial Intelligence
Poster
Booth: D13
Yuqi Liu: College of Computer Science and Technology; Xiaocheng Liu: College of Computer Science and Technology; Jinkang Guo: Qingdao University; Ranran Lou: Qingdao University; Zhihan Lv: Uppsala University
Teaser Video: Watch Now
Ocean waves provide a large amount of renewable energy, and wave energy converters (WECs) can convert wave energy into electric energy. This paper proposes a visualization platform for wave power generation. The platform can monitor various indicators of wave power generation in real time and uses a Long Short-Term Memory (LSTM) neural network to predict wave power and electricity consumption. We build a digital twin of a wave power plant in the computer, allowing users to remotely view the plant through VR glasses.
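The abstract names LSTM-based prediction but gives no model details; a minimal sketch of what such a forecaster could look like (the feature set, window length, and layer sizes below are assumptions):

```python
import torch
import torch.nn as nn

class WaveForecaster(nn.Module):
    """Minimal LSTM regressor: a window of past sensor readings
    (e.g., wave height, period, generated power) -> next-step power estimate."""
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # predict from the last time step

model = WaveForecaster()
window = torch.randn(8, 48, 4)          # 8 samples, 48 past steps, 4 features
print(model(window).shape)              # torch.Size([8, 1])
```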
Who do you look like? - Gaze-based authentication for workers in VR
Poster
Booth: D14
Karina LaRubbio: University of Florida; Jeremiah Wright: University of Florida; Brendan David-John: University of Florida; Andreas Enqvist: University of Florida; Eakta Jain: University of Florida
Teaser Video: Watch Now
Behavior-based authentication methods are actively being developed for XR. In particular, gaze-based methods promise continuous authentication of remote users. However, gaze behavior depends on the task being performed. Identification rate is typically highest when comparing data from the same task. In this study, we compared authentication performance using VR gaze data during random dot viewing, 360-degree image viewing, and a nuclear training simulation. We found that within-task authentication performed best for image viewing (72%). The implication for practitioners is to integrate image viewing into a VR workflow to collect gaze data that is viable for authentication.
High-Quality Surface-Based 3D Reconstruction Using 2.5D Maps
Poster
Booth: D24
Lingxiao Song: Beijing Institute of Technology; Xiao Yu: Beijing Institute of Technology; Huijun Di: Beijing Institute of Technology; Weiran Wang: Beijing Institute of Technology
Teaser Video: Watch Now
Previous works on RGB-D reconstruction are based on voxels, points, or meshes, which are either too computationally expensive for representing high-resolution 3D models or inconvenient for model regularization to deal with noise and errors. In this paper, we propose a new method to reconstruct 3D models with more accurate geometric information and better texture. Our method uses abstract surfaces, consisting of points with similar information, as the units of the model. To reduce complexity, we use 2.5D heightmaps to represent each surface in the reconstructed model, making regularization convenient. Experiments demonstrate the effectiveness of our method.
Using Direct Volume Rendering for Augmented Reality in Resource-constrained Platforms
Poster
Booth: D23
Berk Cetinsaya: University of Central Florida; Carsten Neumann: University of Central Florida; Dirk Reiners: University of Central Florida
Teaser Video: Watch Now
Rendering a large volume is challenging on mobile and Augmented Reality (AR) devices due to limited memory and device constraints. We therefore implemented an Empty Space Skipping (ESS) optimization algorithm to render high-quality, large models on the HoloLens. We designed and developed a system, built with the Unity3D game engine, to visualize computerized tomography (CT) scan data and Digital Imaging and Communications in Medicine (DICOM) files on the Microsoft HoloLens 2. As a result, we achieved about 10 times more frames per second (fps) on a high-quality model than the non-optimized version.
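A toy illustration of empty space skipping, not the authors' HoloLens implementation: a coarse occupancy grid lets the raymarcher jump over transparent blocks instead of sampling them finely.

```python
# ESS sketch: precompute which 16^3 blocks contain visible density,
# then skip whole blocks during raymarching.
import numpy as np

volume = np.random.rand(128, 128, 128)           # CT-like density volume
block = 16
occupied = volume.reshape(8, block, 8, block, 8, block).max((1, 3, 5)) > 0.9

def march(origin, direction, step=0.5):
    color = 0.0
    pos = np.array(origin, float)
    d = np.array(direction, float) / np.linalg.norm(direction)
    while np.all((pos >= 0) & (pos < 128)):
        b = (pos // block).astype(int)
        if not occupied[tuple(b)]:
            pos += d * block                      # skip the empty block
            continue
        color += volume[tuple(pos.astype(int))] * 0.01
        pos += d * step                           # fine sampling
    return color

print(march((0, 64, 64), (1, 0, 0)))
```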
Emotional Empathy and Facial Mimicry of Avatar Faces
Poster
Booth: D22
Angela Saquinaula: Western Connecticut State University; Adriel Juarez: Monmouth University; Joe Geigel: Rochester Institute of Technology; Reynold Bailey: Rochester Institute of Technology; Cecilia Ovesdotter Alm: Rochester Institute of Technology
Teaser Video: Watch Now
We explore the extent to which empathetic reactions are elicited when subjects view 3D motion-capture-driven avatar faces compared to human faces. Through a remote study, we captured subjects' facial reactions while they viewed avatar and human faces and collected self-reported feedback regarding empathy. Avatar faces varied by gender and realism. Results show no clear facial mimicry, only slight and inconsistent mimicking of facial movements. Participants tended to empathize with avatars when they could adequately identify the stimulus' emotion. As avatar realism increased, subjects' feelings towards the stimuli grew more negative.
A Time Reversal Symmetry Based Real-time Optical Motion Capture Missing Marker Recovery Method
Poster
Booth: D21
Dongdong Weng: Beijing Institute of Technology; Yihan Wang: Beijing Institute of Technology; Dong Li: Beijing Institute of Technology
Teaser Video: Watch Now
This paper proposes a deep learning model based on time reversal symmetry for real-time recovery of continuous missing-marker sequences in optical motion capture. We first use the time reversal symmetry of human motion as a constraint on the model, with a BiLSTM describing the constraint and extracting bidirectional spatiotemporal features. We also propose a weighted position loss function for training that reflects the effect of different joints on the pose. Experimental results show that, compared with existing methods, the proposed method achieves higher accuracy and good real-time performance.
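A minimal PyTorch sketch of the BiLSTM idea (layer sizes and the loss weighting are our assumptions): gaps in the marker sequence are filled from bidirectional temporal context, with per-marker errors weighted by their influence on the pose.

```python
# Sketch: BiLSTM marker recovery with a weighted position loss.
import torch
import torch.nn as nn

class MarkerRecovery(nn.Module):
    def __init__(self, n_markers=41, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_markers * 3, hidden, batch_first=True,
                            bidirectional=True)   # forward + time-reversed
        self.out = nn.Linear(2 * hidden, n_markers * 3)

    def forward(self, seq):            # seq: (batch, time, markers*3),
        h, _ = self.lstm(seq)          # gaps zero-filled in the input
        return self.out(h)             # recovered marker positions

def weighted_position_loss(pred, target, joint_weights):
    # Weight each marker's squared error by its effect on the pose.
    err = ((pred - target) ** 2).reshape(*pred.shape[:2], -1, 3).sum(-1)
    return (err * joint_weights).mean()

model = MarkerRecovery()
print(model(torch.randn(2, 60, 123)).shape)   # (2, 60, 123)
```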
Let Every Seat Be Perfect! A Case Study on Combining BIM and VR for Room Planning
Poster
Booth: D38
Wai Tong: The Hong Kong University of Science and Technology; Haotian Li: The Hong Kong University of Science and Technology; Huan Wei: The Hong Kong University of Science and Technology; Liwenhan Xie: The Hong Kong University of Science and Technology; Yanna Lin: The Hong Kong University of Science and Technology; Huamin Qu: The Hong Kong University of Science and Technology
Teaser Video: Watch Now
When communicating indoor room designs, professional designers normally rely on software like Revit to export walk-through videos for their clients. However, the lack of in-situ experience prevents end users from fully evaluating the design, so they provide limited feedback, which may lead to rework after actual construction. In this case study, we explore empowering end users by exposing rich design details through a Virtual Reality (VR) application built on a Building Information Model (BIM). Qualitative feedback in our user study shows promising results. We further discuss the benefits of the approach and opportunities for future research.
Virtual reality-based distraction on pain and performance during and after moderate-vigorous intensity cycling
Poster
Booth: D37
Carly Wender: Kessler Foundation; Phillip Tomporowski: University of Georgia; Sun Joo (Grace) Ahn: University of Georgia; Patrick O'Connor: University of Georgia
Teaser Video: Watch Now
This experiment measured the effects of visual perceptual load (PL) within immersive virtual reality (VR) on exercise-induced pain during cycling. Using a within-subjects design (n=43), participants cycled at a perceptually "hard" intensity for 10 minutes without VR (no PL; NPL) or with VR of low or high PL (LPL or HPL). Mean quadriceps pain was significantly lower in the NPL condition than in either the LPL (d=0.472) or HPL (d=0.391) condition, and mean cycling performance was significantly greater during the LPL condition. Thus, compared to traditional cycling (NPL), cycling in the LPL condition produced greater exercise performance despite greater pain.
Evaluating 3D Visual Fatigue Induced by VR Headset Using EEG and Self-attention CNN
Poster
Booth: D36
Haochen Hu: Beijing Institute of Technology; Yue Liu: Beijing Institute of Technology; Kang Yue: Beijing Institute of Technology
Teaser Video: Watch Now
3D visual fatigue is one of the major factors hindering the adoption of virtual reality content by a broader population. We propose an EEG-based self-attention CNN model to evaluate users' 3D visual fatigue in an end-to-end fashion. We adopt a wavelet-based convolution to extract spatiotemporal information and prevent overfitting, and add a self-attention layer to the feature-extractor backbone to cope with the subject-variation problem in EEG decoding. The proposed method is compared with four state-of-the-art methods, and the results demonstrate that our model performs best in both subject-dependent and cross-subject scenarios.
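A rough sketch of a convolution-plus-self-attention EEG classifier; this is our own simplification, replacing the paper's wavelet-based convolution with a plain temporal convolution.

```python
# Simplified conv + self-attention EEG classifier (assumed dimensions).
import torch
import torch.nn as nn

class FatigueNet(nn.Module):
    def __init__(self, channels=32, d=64):
        super().__init__()
        self.conv = nn.Conv1d(channels, d, kernel_size=15, stride=4)
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.cls = nn.Linear(d, 2)          # fatigued / not fatigued

    def forward(self, eeg):                 # eeg: (batch, channels, time)
        f = self.conv(eeg).transpose(1, 2)  # (batch, tokens, d)
        a, _ = self.attn(f, f, f)           # self-attention over time
        return self.cls(a.mean(dim=1))

print(FatigueNet()(torch.randn(4, 32, 1000)).shape)  # (4, 2)
```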
Perception of Symmetry of Actual and Modulated Self-Avatar Gait Movements During Treadmill Walking
Poster
Booth: D35
Iris Willaert: Ecole de technologie superieure; Rachid Aissaoui: CHUM research center; Sylvie Nadeau: University of Montreal; Cyril Duclos: Université de Montreal; David Labbe PhD: Ecole de technologie superieure
Teaser Video: Watch Now
In virtual reality, it is possible to simulate one's visual self-representation by mapping one's body movements to those of an avatar. Accepting the virtual body as part of one's own body creates an ownership illusion. This study aimed to assess the perception threshold between a subject's actual gait movements and those of their modulated self-avatar during treadmill walking. Preliminary results from two subjects suggest that healthy subjects can detect the mismatch, but that detection may differ between subjects.
Moving Soon? Rearranging Furniture using Mixed Reality
Poster
Booth: B31
Shihao Song: Beijing Institute of Technology; Yujia Wang: Beijing Institute of Technology; Wei Liang: Beijing Institute of Technology; Xiangyuan Li: Beijing Forestry University
Teaser Video: Watch Now
We present a mixed reality (MR) system that helps users move a houseful of furniture from an existing home into a new space while inheriting the furniture-layout preferences of the previous scene. Using the RGB-D cameras mounted on a mixed reality device, the Microsoft HoloLens 2, our system first reconstructs a 3D model of the existing scene and leverages a deep learning-based approach to detect and group objects. It then generates a personalized furniture layout by optimizing a cost function that incorporates the analyzed between-group and within-group relevance and the spatial constraints of the new space.
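A toy sketch of the optimization step, with cost terms assumed for illustration: simulated annealing over furniture positions, penalizing deviation from preferred pairwise distances and violations of the new room's bounds.

```python
# Layout optimization sketch (cost terms and values are assumptions).
import random, math

prefs = {("sofa", "tv"): 2.5, ("desk", "chair"): 0.8}  # preferred distances
room = (5.0, 4.0)                                      # new room bounds (m)
layout = {name: [random.uniform(0, room[0]), random.uniform(0, room[1])]
          for name in ["sofa", "tv", "desk", "chair"]}

def cost(layout):
    c = 0.0
    for (a, b), d in prefs.items():        # pairwise relevance term
        c += (math.dist(layout[a], layout[b]) - d) ** 2
    for x, y in layout.values():           # spatial-constraint penalty
        c += 10 * (max(0, -x) + max(0, x - room[0]) +
                   max(0, -y) + max(0, y - room[1]))
    return c

for t in range(5000):                      # crude simulated annealing
    name = random.choice(list(layout))
    old = list(layout[name])
    layout[name] = [old[0] + random.gauss(0, 0.2),
                    old[1] + random.gauss(0, 0.2)]
    worse = cost(layout) > cost({**layout, name: old})
    if worse and random.random() > 0.1:    # mostly reject worsening moves
        layout[name] = old
print(layout)
```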
Add-on Occlusion: An External Module for Optical See-through Augmented Reality Displays to Support Mutual Occlusion
Poster
Booth: D42
Yan Zhang: Nara Institute of Science and Technology; Kiyoshi Kiyokawa: Nara Institute of Science and Technology; Naoya Isoyama: Nara Institute of Science and Technology; Hideaki Uchiyama: Nara Institute of Science and Technology; Xubo Yang: Shanghai Jiao Tong University
Teaser Video: Watch Now
Occlusion capability benefits augmented reality (AR) in many respects. However, existing occlusion-capable optical see-through AR (OC-OST-AR) displays integrate the virtual display into a dedicated occlusion-capable architecture, which forgoes the merits of emerging OST-AR displays. In this article, we propose an external occlusion module that can be added to common OST-AR displays. Per-pixel occlusion is supported in a small form factor by using polarization-based optical path compression, and the occlusion function can be switched on and off by controlling the polarization of incident light. We built a prototype within a volume of 6 x 6 x 3 cm, and a preliminary experiment confirms that occlusion is realized successfully.
Assist Home Training Table Tennis Skill Acquisition via Immersive Learning and Web Technologies
Poster
Booth: D41
Jian-Jia Weng: National Tsing Hua University; Yu-Hsin Wang: National Tsing Hua University; Calvin Ku: National Tsing Hua University; Dong-Xian Wu: National Tsing Hua University; Yi-Min Lau: National Tsing Hua University; Wan-Lun Tsai: National Cheng Kung University; Tse-Yu Pan: National Tsing Hua University; Min-Chun Hu: National Tsing Hua University; Hung-Kuo Chu: National Tsing Hua University; Te-Cheng Wu: National Tsing Hua University
Teaser Video: Watch Now
Sports applications in Virtual Reality (VR) have become immensely popular for training skill-based sports like table tennis. However, existing research does not focus on designing an intuitive system for efficient communication between trainee and coach. We developed a VR table tennis training system for skill acquisition that focuses on helping coaches convey a player's mistakes clearly. Our system consists of a VR training environment where trainees can learn a skill gradually and a web-based annotation tool through which coaches give feedback. Trainees can examine their mistakes on a tablet or in the immersive VR world.
Touch the History in Virtuality: Combining Passive Haptics with 360° Videos in History Learning
Poster
Booth: E23
YanXiang Zhang: University of Science and Technology of China; YingNa Wang: University of Science and Technology of China; QingQin Liu: University of Science and Technology of China
Teaser Video: Watch Now
Based on ethical principles and Asimov's three laws of robotics, this article discusses three ethical issues raised by using virtual reality to "revive" or come into contact with the deceased: "do not harm humans," "rights-related issues," and "fairness and meaning." Religious factors are also taken into consideration. Anticipating the ethical risks of virtual reality "revival" of the deceased lets ethics play a better role in its future development and helps stakeholders pay more attention to the ethical issues involved in virtual avatars.
The Sloped Shoes: Influencing Human Perception of the Virtual Slope
Poster
Booth: E22
YanXiang Zhang: University of Science and Technology of China; JiaLing Wu: University of Science and Technology of China; QingQin Liu: University of Science and Technology of China
Teaser Video: Watch Now
In this study, participants walked uphill or downhill in virtual environments while walking on a flat wooden board in the physical environment, with the incline conveyed by changing the slope of their shoes. This let us explore the impact of shoe slope on users' perception of walking uphill or downhill in the virtual world. We find that shoe slope affects participants' perception, increasing their sense of realism when walking uphill and downhill in the virtual world. By varying the shoe slope, we measured how participants' perception of walking uphill or downhill changes and established corresponding detection thresholds.
Redirected Walking in 360° Video: Effect of Environment Size on Detection Thresholds for Translation and Rotation Gains
Poster
Booth: E21
YanXiang Zhang: University of Science and Technology of China; QingQin Liu: University of Science and Technology of China; YingNa Wang: University of Science and Technology of China
Teaser Video: Watch Now
Using real walking to control the playback of 360° videos is a natural and immersive way to match visual and self-motion perception, and redirected walking enables users to experience larger scenes while walking in a limited physical tracking space. Environment size may affect user perception in 360° videos. We conducted a user study on the detection thresholds (DTs) for translation and rotation gains in 360° video-based virtual environments across three scenes of different widths. Results show that a larger environment increases both the lower and upper DTs for translation gains but does not affect the DTs for rotation gains.
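For context, this is how translation and rotation gains are typically applied in redirected walking; the sketch shows the generic technique, not the study's apparatus, and the gain values are illustrative.

```python
# Redirected-walking sketch: scale real head motion before driving
# the virtual viewpoint. A gain of 1 is veridical; gains past the
# detection threshold (DT) become noticeable to the user.
import numpy as np

def apply_gains(real_step, real_turn, g_t=1.2, g_r=0.9):
    """real_step: metres walked; real_turn: radians turned this frame."""
    return g_t * real_step, g_r * real_turn

virtual_pos, virtual_yaw = np.zeros(2), 0.0
for step, turn in [(0.02, 0.0), (0.02, 0.05)] * 50:   # tracked samples
    v_step, v_turn = apply_gains(step, turn)
    virtual_yaw += v_turn
    virtual_pos += v_step * np.array([np.cos(virtual_yaw),
                                      np.sin(virtual_yaw)])
print(virtual_pos, virtual_yaw)
```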
Movement Augmentation in Virtual Reality: Impact on Sense of Agency Measured by Subjective Responses and Electroencephalography
Poster
Booth: E26
Liu Wang: Xi'an Jiaotong-Liverpool University; Mengjie Huang: Xi'an Jiaotong-Liverpool University; Chengxuan Qin: Xi'an Jiaotong-Liverpool University; Yiqi Wang: University College London; Rui Yang: Xi'an Jiaotong-Liverpool University
Teaser Video: Watch Now
Virtual movement augmentation, the visual amplification of remapped movement, shows potential for motion-related virtual reality applications. However, the sense of agency (SoA), which measures a user's feeling of control over their actions, has not been fully investigated for augmented movement. This study investigated the effect of augmented movement at three levels (baseline, medium, and high) on users' SoA using both subjective responses and electroencephalography (EEG). Results show that SoA is boosted slightly at the medium augmentation level but drops at the high level; augmented virtual movement thus enhances SoA only to a certain extent.
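A minimal sketch of the remapping, with the gain values assumed for illustration: the virtual hand's displacement from a reference point is the real displacement scaled by an amplification factor.

```python
# Movement-augmentation sketch; the level-to-gain mapping is assumed.
import numpy as np

GAINS = {"baseline": 1.0, "medium": 1.5, "high": 2.5}  # illustrative

def augment(real_hand, origin, level):
    """Amplify the hand's displacement from a reference origin."""
    return origin + GAINS[level] * (np.asarray(real_hand) - origin)

origin = np.array([0.0, 1.0, 0.0])
real = np.array([0.1, 1.1, -0.2])
for level in GAINS:
    print(level, augment(real, origin, level))
```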
A Location-Triggered Augmented Reality Walking Tour Using Snap Spectacles 2021
Poster
Booth: E27
AadilMehdi Sanchawala: Columbia University; Mara Dimofte: Columbia University; Steven Feiner: Columbia University
Teaser Video: Watch Now
We present an on-site, 3D-animated, audiovisual tour-guide augmented reality application developed for Snap Spectacles 2021. The primary goal of this project is to explore how to use this experimental product to create an augmented reality tour guide. We present the design considerations for the user interface and the underlying system architecture, illustrate the workflow of the tour application, and discuss our experience working with Spectacles 2021 and its experimental API, along with our design choices and directions for future work.
Video2Force: Experiencing Object Motion in Video with Dynamic Force Feedback based on Bio-Inspired Sensing and Processing
Poster
Booth: E28
Guangxin Zhao: School of Computer Science and Engineering, Beijing Technology and Business University; Zhaobo Wang: The University of Sydney; Xiaoming Chen: Beijing Technology and Business University; Zhicheng Lu: The University of Sydney; Yuk Ying Chung: The University of Sydney; Haisheng Li: Beijing Technology and Business University
Teaser Video: Watch Now
In this work, we propose Video2Force, a framework that estimates a video object's motion and the resulting "force" by leveraging emerging bio-inspired vision sensors for low-complexity, low-latency visual information processing. While the user watches a video, the estimated force is delivered to the user through dynamic force feedback from emerging haptic gloves, synchronized with the video content. The user can thus experience the "virtual existence" of the video object and its motion. Preliminary experimental results demonstrate that Video2Force is feasible and can enhance users' presence in the video experience.
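Our illustrative reading of the pipeline (the poster's sensing and haptics interfaces are not specified here): object velocity is estimated from successive positions produced by the vision front end, and acceleration is mapped to a feedback force.

```python
# Hypothetical motion-to-force sketch: F = m * a, scaled to the device.
import numpy as np

def force_from_track(positions, dt=1 / 30, mass=1.0, scale=0.5):
    """positions: (n, 2) per-frame object centroids; returns a
    per-frame force vector for the haptic feedback stage."""
    vel = np.diff(positions, axis=0) / dt
    acc = np.diff(vel, axis=0) / dt
    return scale * mass * acc

track = np.cumsum(np.random.randn(30, 2), axis=0)   # mock object path
print(force_from_track(track).shape)                # (28, 2)
```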
Immersive Visualization of Sneeze Simulation Data on Mobile Devices
Poster
Booth: E33
Liangding Li: University of Central Florida; Douglas Fontes: University of Central Florida; Carsten Neumann: University of Central Florida; Dirk Reiners: University of Central Florida; Carolina Cruz-Neira: University of Central Florida; Michael Kinzel: University of Central Florida
Teaser Video: Watch Now
One key factor in stopping the spread of COVID-19 is practicing social distancing. Visualizing the possible transmission routes of sneeze droplets in front of an infected person may be an effective way to help people understand the importance of social distancing. This paper presents a mobile virtual reality (VR) interface that helps people visualize droplet dispersion from the target person's point of view. We implemented a VR application to visualize and interact with sneeze simulation data immersively. Our application provides an easy way to communicate the correlation between social distance and exposure to infectious droplets, which is difficult to convey in the real world.
Towards Retargetable Animations for Industrial Augmented Reality
Poster
Booth: E32
Reza Manuel Mirzaiee: University of Virginia; Teodor I Vernica: Aarhus University; Kurt Scheuringer: Lockheed Martin Corporation; William Bernstein: Air Force Research Lab
Teaser Video: Watch Now
The adoption of Augmented Reality (AR) within manufacturing has surged. However, updating animations to reflect upstream changes or design variations is expensive: current platform-agnostic standards for computer-aided design files do not adequately specify kinematic animations, so animations must be updated manually and individually. We showcase a method for implementing geometry-independent AR assembly animations using skeletal armatures, a technique widely used in the entertainment industry. This allows upstream engineering changes to propagate through to the AR assembly visualization, leading to a more automated pipeline for handling animations in industrial AR systems.
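A toy forward-kinematics sketch of the armature idea, under our own assumptions: the same joint rotations drive any geometry bound to the skeleton, so a change in part dimensions does not invalidate the animation.

```python
# 2D bone-chain FK sketch: one set of joint angles retargets to
# differently sized parts because geometry follows its bone.
import numpy as np

def rot2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def pose_chain(bone_lengths, joint_angles):
    """Returns joint positions for a bone chain posed by the angles."""
    pos, angle, joints = np.zeros(2), 0.0, [np.zeros(2)]
    for length, theta in zip(bone_lengths, joint_angles):
        angle += theta
        pos = pos + rot2d(angle) @ np.array([length, 0.0])
        joints.append(pos)
    return np.array(joints)

# The same animation (angles) applied to two different part sizes:
print(pose_chain([1.0, 0.8], [0.3, -0.5]))
print(pose_chain([1.2, 0.6], [0.3, -0.5]))
```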
Synesthesia AR: Creating Sound-to-Color Synesthesia in Augmented Reality
Poster
Booth: E31
Shashaank N: Columbia University; Steven Feiner: Columbia University
Teaser Video: Watch Now
Sound-to-color synesthesia is a neurological condition in which people experience colors and shapes when listening to music. We present an augmented reality application that aims to create an interactive synesthesia experience for non-synesthetes. Users can visualize colors corresponding to each of the twelve notes in the 12-tone equal-temperament tuning system, with auditory input taken from audio files or a real-time microphone. A gestural hand-tracking interface allows users to paint the world space in the visualized synesthetic colors.
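A simple sketch of one possible note-to-color mapping; the evenly spaced HSV hues below are an assumption, and the app's actual palette may differ.

```python
# Map each 12-TET pitch class to an evenly spaced hue on the HSV wheel.
import colorsys

NOTES = ["C", "C#", "D", "D#", "E", "F",
         "F#", "G", "G#", "A", "A#", "B"]

def note_color(midi_pitch):
    pitch_class = midi_pitch % 12
    r, g, b = colorsys.hsv_to_rgb(pitch_class / 12, 1.0, 1.0)
    return NOTES[pitch_class], (round(r * 255), round(g * 255), round(b * 255))

for midi in (60, 64, 67):                # C major triad
    print(note_color(midi))
```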
Studying the Effect of Physical Realism on Time Perception in a HAZMAT VR Simulation
Poster
Booth: C32
Kadir Baturalp Lofca: University of North Carolina at Greensboro; Jason Haskins: Nextgen Interactions; Jason Jerald: Nextgen Interactions; Regis Kopper: University of North Carolina at Greensboro
Teaser Video: Watch Now
Our research focuses on how physical props in virtual reality (VR) affect users' time perception. We designed an experiment comparing users' perception of time when using physical props in VR versus standard controllers with only virtual elements. To quantify this effect, time estimates for both conditions are compared to time estimates for a matching real-world task. In this experiment, participants assume the role of a firefighter trainee going through a HAZMAT scenario in which they touch and interact with physical props that match the virtual elements of the scene.
Learning Environments in AR: Comparing Tablet and Head-mounted Augmented Reality Devices at Room and Table Scale
Poster
Booth: C33
Paul Craig: University of Minnesota; Peter Willemsen: University of Minnesota Duluth; Edward Downs: University of Minnesota Duluth; Alex Lover: University of Minnesota Duluth; William Barber: University of Minnesota Duluth
Teaser Video: Watch Now
This paper examines how presentation scale and device form factor affect learning in augmented reality (AR) environments. We conducted a 2 (form factor) x 2 (scale) experiment in which 131 participants explored an AR learning environment using either tablet AR or a HoloLens 2, crossed with either full room scale or table scale. Dependent variables measured participants' declarative knowledge of information acquired in the environment as well as their understanding of its spatial layout. Initial analysis suggests comparable outcomes across all manipulations for acquiring both declarative and spatial knowledge.
The Digital Twins of Thor's Hammer Based on Motion Sensing
Poster
Booth: C34
Zengxu Bian: College of Computer Science and Technology; Yuqi Liu: College of Computer Science and Technology; Jinkang Guo: Qingdao University; Zhihan Lv: Uppsala University
Teaser Video: Watch Now
Ancient humans attributed thunder and lightning to divine power: only one with the power of Thor could lift Thor's Hammer and withstand lightning unharmed. Yet controlling "thunder and lightning" like Thor is not impossible. The digital twin system of a robotic arm designed in this paper integrates the physical robotic arm, its digital model, motion-sensing interaction, and a virtual-reality mapping module, allowing the arm to be controlled digitally. With this system, any of us can lift Thor's hammer in the future.