The official banner for the IEEE Conference on Virtual Reality + User Interfaces, comprising a Kiwi wearing a VR headset overlaid on an image of Mount Cook and a braided river.
Poster Locations
Poster Session 1: Session/Server 1
Poster Session 2: Session/Server 2
Poster Session 3: Session/Server 3

On-site Floor Plan

Session/Server 1

Remote art therapy in collaborative virtual environment: a pilot study on feasibility and usability

Booth: 1

Chen Li: The Hong Kong Polytechnic University; Yixin Dai: The Hong Kong Polytechnic University; Honglin Li: The Hong Kong Polytechnic University; Pui Yin Yip: The Hong Kong Polytechnic University

The uniqueness of the collaborative virtual environment (CVE) makes it a perfect medium for delivering art therapy sessions remotely during the global pandemic. This pilot study investigates the feasibility and usability of delivering remote art therapy in a customised CVE. Four young adults as clients and two registered expressive arts therapists participated in the study. The quantitative and qualitative results suggested that the CVE-enabled approach was feasible and the system's usability was high. The approach was well received by both the clients and the therapists. Possible improvements to the CVE were identified and will be addressed in our future work.

Exploring Influences of Design and Environmental Factors on Recognizing Teacher’s Facial Expressions in Educational VR

Booth: 2

Yu Han: Beijing Engineering Research Center of Mixed Reality and Advanced Display; Jie Hao: Beijing Institute of Technology; Yu Miao: Beijing Engineering Research Center of Mixed Reality and Advanced Display; Yue Liu: Beijing Institute of Technology

Teachers' facial expressions significantly influence students' willingness to learn. However, the insufficient resolution of current consumer head-mounted displays makes it difficult to perceive the nuances of virtual teachers' facial expressions in VR, which may be solved by scaling up the heads of virtual teachers. In this paper, we explore the effects of design and environmental factors on the ideal head scales for recognizing the virtual teacher's facial expressions. Our results show that the facial visualization style and the distance from the user to the virtual teacher are important factors affecting facial expression recognition, which could contribute to the design and optimization of educational VR.

Exploring Situated Instructions for Mixed Reality (MR) Remote Collaboration: Comparing Embedded and Non-Embedded Annotations

Booth: 3

Bernardo Marques: Universidade de Aveiro; André Santos: University of Aveiro; Nuno Martins: IETA, University of Aveiro; Samuel Silva: Universidade de Aveiro; Paulo Dias: University of Aveiro; Beatriz Sousa Santos: University of Aveiro

Mixed Reality (MR) remote collaboration enables off-site experts to assist on-site collaborators needing guidance. Different visualizations have been proposed for sharing situated information, e.g., embedded and non-embedded annotations. However, the effectiveness of these visualizations has not been compared. This work describes a user study with 16 participants, aimed at comparing two conditions, C1) Embedded and C2) Non-Embedded annotations, during a real-life remote maintenance task. Two devices were used: a Hand-Held Device (HHD) and a Head-Mounted Display (HMD). The HMD with Embedded annotations was considered the best alternative, while the HHD with Non-Embedded annotations was rated lowest of all conditions.

Coarse to Fine Recursive Image-based Localization on a 3D Mesh Map

Yohei Hanaoka: KDDI Research, Inc.; Kohei Matsuzaki: KDDI Research, Inc.; Satoshi Komorita: KDDI Research, Inc.

Image-based localization is appropriate for augmented/virtual reality (AR/VR) services due to its high accuracy. However, it generally needs dedicated 3D map generation, which is costly. Thus, some methods have tried to utilize 3D mesh maps instead. However, the significant visual difference between the query image of the camera to be localized and a 3D mesh map results in low accuracy. This paper proposes to improve accuracy by dynamically controlling the comparison threshold and selecting the best localization result in a recursive estimation process. The results show that it improves the accuracy by 15% compared to the baseline method.
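
The abstract does not give implementation details, but the general idea of recursive estimation with a dynamically controlled comparison threshold can be illustrated with a minimal sketch. Everything below (function names, the scoring rule, the shrink factor) is a hypothetical stand-in, not the authors' method:

```python
"""Generic coarse-to-fine recursive localization loop (illustrative only)."""
import random

def estimate_pose(query_image, mesh_map, threshold):
    # Placeholder for matching the query image against renderings of the 3D
    # mesh map under the given comparison threshold; would return a 6-DoF pose.
    return {"threshold_used": threshold, "pose": [random.random() for _ in range(6)]}

def score_pose(pose, query_image, mesh_map):
    # Placeholder for a photometric / feature-consistency score of the pose.
    return -abs(pose["threshold_used"] - 0.1)

def recursive_localization(query_image, mesh_map, levels=3,
                           initial_threshold=0.8, shrink=0.5):
    best_pose, best_score = None, float("-inf")
    threshold = initial_threshold
    for _ in range(levels):
        pose = estimate_pose(query_image, mesh_map, threshold)
        score = score_pose(pose, query_image, mesh_map)
        if score > best_score:            # keep the best result seen so far
            best_pose, best_score = pose, score
        threshold *= shrink               # dynamically tighten the threshold
    return best_pose

if __name__ == "__main__":
    print(recursive_localization(query_image=None, mesh_map=None))
```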

Multimodal Activity Detection for Natural Interaction with Virtual Human

Booth: 4

Kai Wang: China Unicom; Shiguo Lian: China Unicom; Haiyan Sang: China Unicom; Wen Liu: China Unicom; Zhaoxiang Liu: China Unicom; Fuyuan Shi: China Unicom; Hui Deng: China Unicom; Zeming Sun: China Unicom; Zezhou Chen: China Unicom

Natural face-to-face human-robot conversation is one of the most important features for virtual humans in virtual reality and the metaverse. However, the robot is often woken up unintentionally when only Voice Activity Detection (VAD) is used. To address this issue, we propose a Multimodal Activity Detection (MAD) scheme, which considers not only voice but also gaze, lip movement and talking content to decide whether the person is activating the robot. A dataset for large screen-based virtual human conversation is collected from various challenging cases. The experimental results show that the proposed MAD greatly outperforms the VAD-only method.
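
As a rough illustration of how several modalities can gate activation beyond plain VAD, the sketch below combines hypothetical gaze, lip-movement, and content scores with a weighted threshold. The weights, threshold, and field names are invented for illustration and are not taken from the paper:

```python
"""Toy multimodal activation decision (illustrative only)."""
from dataclasses import dataclass

@dataclass
class Cues:
    voice_active: bool      # output of a VAD module
    gaze_on_agent: float    # 0..1, fraction of recent frames gazing at the agent
    lip_moving: float       # 0..1, lip-movement confidence
    addressed_score: float  # 0..1, does the transcript address the agent?

def should_activate(c: Cues, weights=(0.4, 0.3, 0.3), threshold=0.6) -> bool:
    if not c.voice_active:              # voice activity as a necessary condition
        return False
    score = (weights[0] * c.gaze_on_agent +
             weights[1] * c.lip_moving +
             weights[2] * c.addressed_score)
    return score >= threshold

print(should_activate(Cues(True, 0.9, 0.8, 0.7)))   # True: user addresses the agent
print(should_activate(Cues(True, 0.1, 0.2, 0.1)))   # False: speech not directed at it
```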

User Motion Accentuation in Social Pointing Scenario

Booth: 5

Ruoxi Guo: University College London; Lisa Izzouzi: University College London; Anthony Steed: University College London

Few existing methods produce full-body user motion in Virtual Environments from only one consumer-level Head-Mounted Display. This preliminary project generates full-body motions from the user's hands and head positions through data-based motion accentuation. The method is evaluated in a simple collaborative scenario with one Pointer, represented by an avatar, pointing at targets while an Observer interprets the Pointer's movements. The Pointer's motion is modified by our motion accentuation algorithm SocialMoves. Comparisons of the Pointer's motion are made between SocialMoves, Final IK, and Ground Truth. Our method showed the same level of user experience as the Ground Truth method.

Knock the Reality: Virtual Interface Registration in Mixed Reality

Booth: 6

Weiwei Jiang: Anhui Normal University; Difeng Yu: University of Melbourne; Andrew Irlitti: University of Melbourne; Jorge Goncalves: University of Melbourne; Vassilis Kostakos: University of Melbourne; Xin He: Anhui Normal University

We present Knock the Reality, an interaction technique for virtual interface registration in mixed reality (MR). When a user knocks on a physical object, our technique identifies the object based on a knocking sound and registers a customizable virtual interface onto the object. Unlike computer vision-based methods, our approach does not require continuously processing image information. Instead, we utilize audio features which are less computationally expensive. This work presents our implementation and demonstrates an interaction scenario where a user works in MR. Overall, our method offers a simple and intuitive way to register MR interfaces.

Estimation of Required Horizontal FoV for Ideal HMD Utilizing Vignetting under Practical Range of Eye Displacement

Yamato Miyashita: Japan Broadcasting Corporation; Masamitsu Harasawa Ph.D.: Japan Broadcasting Corporation; Kazuhiro Hara: Japan Broadcasting Corporation; Yasuhito Sawahata: Japan Broadcasting Corporation; Kazuteru Komine: Japan Broadcasting Corporation

This study presents the required horizontal FoV for ideal HMDs, i.e., HMDs that provide an experience indistinguishable from not wearing them. We investigated the threshold of the horizontally visible range relative to the head when the size of the eye displacement is equivalent to that in real life. We confirmed that the required FoV was smaller than that in a previous study, in which the participants' eye displacement was at its maximum. We found that the required FoV was approximately 229° when the peripheral luminance gradually decreased, as in vignetting.

Projection Mapping in the Light: A Preliminary Attempt to Substitute Projectors for Room Lights

Masaki Takeuchi: Osaka University; Daisuke Iwai: Osaka University; Kosuke Sato: Osaka University

Projection mapping (PM) in a bright room suffers from reduced contrast of the projected texture because ambient lighting elevates the black level of the projection target. In this paper, we developed a pseudo-ambient lighting technique that turns off the original ambient lighting and reproduces its illumination from the projectors on surfaces other than the target. We confirmed that the proposed technique could reproduce a bright room while suppressing the contrast reduction of the projected texture on the target, which helped to improve the viewing experience.

Multi-color LED Marker for Dynamic Target Tracking in Wide Area

Yuri Mikawa: The University of Tokyo; Christian Eichhorn: Technical University of Munich; Gudrun Klinker: Technical University of Munich

To present augmented reality in dynamic scenes such as sports, a wide tracking range is required as well as accuracy and speed. Conventional tracking markers have complicated shapes, which makes them vulnerable to image blur and low image resolution and leads to a narrow tracking range. This paper proposes multi-color LED markers for tracking a dynamic object over a wide area. They emit consistent light and express a unique ID with a few color pixels, which are efficiently extracted using a camera's short exposure time. The marker was tested at various distances and under fast rotation, and its identification accuracy was evaluated and found to be high.

Designing a Smart VR Painting System with Multisensory Interaction for Immersive Experience

Booth: 7

Zhuoshu Li: Zhejiang University; Pei Chen: Zhejiang University; Hongbo ZHANG: Zhejiang University; Yexinrui WU: Zhejiang University; Kewei Guo: Zhejiang University; Lingyun Sun: Zhejiang University

VR painting has become increasingly popular for its potential to provide a less restricted painting environment. However, limited sensory feedback in existing VR painting systems detracts from a fully immersive experience. To better integrate multisensory interaction into the VR painting system, we invited 12 participants to a participatory design process. Based on participants' feedback, we envision the design of a smart VR painting system that integrates visual, audio, haptic, and smell feedback to enhance immersion. Meanwhile, AI capabilities are adopted to reduce the difficulty of painting in VR.

An Attention-Based Signed Distance Field Estimation Method for Hand-Object Reconstruction

Booth: 8

Xinkang Zhang: Academy for Engineering & Technology, Fudan University; Xinhan Di: Bloo Company; Xiaokun Dai: Academy for Engineering & Technology, Fudan University; Xinrong Chen: Academy for Engineering & Technology, Fudan University

Joint reconstruction of hands and objects from monocular RGB images is a challenging task. In this work, we present a novel hybrid model for joint reconstruction of hands and objects. The proposed model consists of three key modules: a multi-scale attention feature extractor designed to enhance cross-scale information extraction, an attention-based graph encoder that encodes the graph information of the hands, and an interacting attention module that fuses information between the hands and the object. Test results on the ObMan dataset indicate that our method achieves better joint reconstruction results than state-of-the-art methods.

Radiological Incident System using Augmented Reality (RISAR)

Muhannad Ismael: Luxembourg Institute of Science and Technology; Roderick McCall: Luxembourg Institute of Science and Technology (LIST); Maël Cornil: Luxembourg Institute of Science and Technology; Mike Griffin: Luxembourg Institute of Science and Technology (LIST); Joan Baixauli: Luxembourg Institute of Science and Technology (LIST)

This paper presents an Augmented Reality (AR) solution called RISAR that allows for the real-time visualisation of Radiological hazards based on sensor data captured from detectors as well as Unmanned Aerial and Ground Vehicles. RISAR improves safety for first responders during radiological events by enhancing their situation awareness. This lowers the risk of harm, and with it any health impacts and costs.

Concurrent Feedback VR Rhythmic Coordination Training

Booth: 9

James Jonathan Pinkl: University of Aizu; Michael Cohen: University of Aizu

Action Observation VR tools have had observable success in teaching a wide variety of skills. The authors' previous development includes a first-person VR tool designed for users to learn drumming exercises and improve rhythm via an HMD. This contribution is an extension that implements concurrent feedback and multimodal cues to further immerse users in a coherent experience. Another virtual scene is included to help users improve their polyrhythms, a historically challenging musical concept to learn through traditional methods. The aim of this work is to use synchronized multimodal techniques with realtime feedback capabilities to develop a new drumming practice tool and to devise a potentially more effective method to practice polyrhythms.

Does interpupillary distance (IPD) relate to immediate cybersickness?

Taylor A Doty: Iowa State University; Jonathan Kelly: Iowa State University; Michael Dorneich: Iowa State University; Stephen B. Gilbert: Iowa State University

Widespread adoption of virtual reality (VR) will likely be limited by the common occurrence of cybersickness. Cybersickness susceptibility varies across individuals, and previous research reported that interpupillary distance (IPD) may be a factor. However, that work emphasized cybersickness recovery rather than cybersickness immediately after exposure. The current study (N=178) examined if the mismatch between the user's IPD and the VR headset's IPD setting contributes to immediate cybersickness. Multiple linear regression indicated that gender and prior sickness due to screens were significant predictors of immediate cybersickness. However, no relationship between IPD mismatch and immediate cybersickness was observed.
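
For readers unfamiliar with this kind of analysis, the sketch below shows the general shape of a multiple linear regression predicting immediate cybersickness from IPD mismatch, gender, and prior screen sickness. The data are randomly generated and the column names are hypothetical; this is not the study's code or dataset:

```python
"""Example regression analysis (illustrative only, synthetic data)."""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 178
df = pd.DataFrame({
    "sickness": rng.normal(2.0, 1.0, n),             # e.g., an SSQ-style score
    "ipd_mismatch_mm": np.abs(rng.normal(0, 2, n)),   # |user IPD - headset IPD setting|
    "gender": rng.choice(["f", "m"], n),
    "prior_screen_sickness": rng.integers(0, 5, n),
})

model = smf.ols("sickness ~ ipd_mismatch_mm + C(gender) + prior_screen_sickness",
                data=df).fit()
print(model.summary())                               # coefficients and p-values
```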

Scene-aware Motion Redirection in Telecommunication

Booth: 10

Luhui Wang: Beijing Institute of Technology; Wei Liang: Beijing Institute of Technology; Xiangyuan Li: Beijing Forestry University

We propose a motion redirection system for virtual reality and augmented reality, automatically detecting a person's action in a source scene and redirecting it to a new target scene. The redirected motion is augmented in the target scene based on the understanding of the target scene's semantics and layout. The results show that the system may facilitate telecommunication across scenes, improving user interaction experiences.

The Belated Guest: Exploring the Design Space for Transforming Asynchronous Social Interactions in Virtual Reality

Portia Wang: Stanford University; Mark Roman Miller: Stanford University; Jeremy Bailenson: Stanford University

Social meetings in Virtual Reality (VR) are fundamentally different from videoconferencing because the VR tracking data can be used to render scenes as if they were in real-time, enabling people to go back in time and experience discussions that they may not have attended. Moreover, the tracking data allows people to visit the meeting from any location in the room, as opposed to the single camera angle used for video conferences. The current paper describes the methodology of transforming tracking data around proxemics and head orientations of recorded avatars to nonverbally assimilate new users into a recorded scene.

A High-Dynamic-Range Mesh Screen VR Display by Combining Frontal Projection and Retinal Projection

Kazushi Kinjo: Osaka University; Daisuke Iwai: Osaka University; Kosuke Sato: Osaka University

We propose a high-dynamic-range virtual reality (VR) display that can represent a glossy material by combining conventional frontal projection mapping and a retinal projection light passing through a screen made of a microporous plate. In the prototype system, the retinal projection could be superimposed on the frontal projection and increase the luminance of only the specular highlight in the image. This paper also reports that the retinal projection presentation is approximately six hundred times brighter than the frontal projection.

Energy Efficient Wearable Vibrotactile Transducer Utilizing the Leakage Magnetic Flux of Repelling Magnets

Mitsuki Manabe: The University of Electro-Communications; Keigo Ushiyama: The University of Electro-Communications; Akifumi Takahashi: University of Chicago; Hiroyuki Kajimoto: The University of Electro-Communications

We propose a novel energy-efficient wearable vibrotactile transducer that features two characteristics. Firstly, it uses two repulsive magnets attached to each other to create a concentrated leakage of magnetic flux on the side surface. Secondly, the magnets and coil are directly attached to the skin. A prototype of the device was fabricated and tested, demonstrating that it can produce stronger and wider-frequency vibrations than existing methods, providing a more accurate representation of rough textures.

Manipulation Guidance Field for Collaborative Object Manipulation in VR

Booth: 11

Xiaolong Liu: Beihang University; Shuai Luan: BUAA; Lili Wang: Beihang University; Chan-Tong Lam: Macao Polytechnic University

Object manipulation is a fundamental interaction in virtual reality (VR). Efficient and accurate manipulation is important for many VR applications, especially collaborative VR applications. We introduce a collaborative method based on the manipulation guidance field (MGF) to improve manipulation accuracy and efficiency. We first introduce the concept of the MGF and its construction method. Then we propose two strategies to accelerate the MGF updating process. After that, we propose a collaborative manipulation method to manipulate objects under the guidance of the MGF.

Auxiliary Means to Improve Motion Guidance Memorability in Extended Reality

Booth: 12

Patrick Gebhardt: University of Stuttgart; Maximilian Weiß: University of Stuttgart; Pascal Huszár: University of Stuttgart; Xingyao Yu: VISUS; Alexander Achberger: Mercedes-Benz AG; Xiaobing Zhang: China Mobile (Jiangxi) Virtual Reality Technology Co., Ltd; Michael Sedlmair: University of Stuttgart

VR-based motion guidance systems can provide 3D movement instructions and real-time feedback for practicing movement without a live instructor. However, the precise visualization of movement paths or postures may be insufficient to learn a new motor skill, as they might make users too dependent and lead to poor performance when there is no guidance. In this paper, we propose to use enhanced error visualization, asymptotic path, increasing transparency, and haptic constraint to improve the memorability of motion guidance. Our study results indicated that adding an enhanced error feedback visualization helped the users with short-term retention.

Towards Discovering Meaningful Historical Relationships in Virtual Reality

Melanie Derksen: TU Dortmund University; Tim Weissker: RWTH Aachen University; Torsten Wolfgang Kuhlen: RWTH Aachen University; Mario Botsch: TU Dortmund University

Traditional digital tools for exploring historical data mostly rely on conventional 2D visualizations, which often cannot reveal all relevant interrelationships between historical fragments. We are working on a novel interactive exploration tool for historical data in virtual reality, which arranges fragments in a 3D environment based on their temporal, spatial and categorical proximity to a reference fragment. In this poster, we report on an initial expert review of our approach, giving us valuable insights into the use cases and requirements that inform our further developments.

Design and User Experience Evaluation of 3D Product Information in XR Shopping Application

Kaitong Qin: Zhejiang University; Yankun Zhen: Alibaba Group; Tianshu Dong: Zhejiang University; Liuqing Chen: Zhejiang University; Lingyun Sun: Zhejiang University; Yumou Zhang: Alibaba Group; TingTing Zhou: Alibaba Group

Current 3D product information pages are believed to enrich the online shopping experience since they provide a more immersive experience. However, existing solutions still retain 2D UI elements in the product information presentation design, preventing users from fully immersing themselves in the virtual environment and degrading the shopping experience. In order to evaluate the user experience of 3D product information in XR shopping applications, we first construct a design space based on previous design cases of product information presentation in virtual environments and produce nine new solutions by combining elements of the design space.

Real-time Physics-based Interaction in Augmented Reality

Booth: 13

Jin Li: Beihang University; Hanchen Deng: Beihang University; Yang Gao: Beihang University; Anqi Chen: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University; Zilong Song: Beihang University; Aimin Hao: Beihang University

We propose a unified AR-based framework that combines a real-time physical multi-material simulation model and an efficient free-hand gesture interaction method. First, we employ a simple RGBD camera to quickly acquire 3D environmental data to build the static boundary conditions. The real-time gestures are then detected and used as dynamic objects that interact with physical simulations. Finally, the calculated lighting parameters are used for real-time rendering and virtual-reality fusion. Our framework enables users to interact with various physical simulations in AR scenes, which considerably expands the applications of the fusion of AR and physical simulations.

Development of a Data-driven Self-adaptive Upper Limb Virtual Rehabilitation System for Post Stroke Elderly

Booth: 14

Zhiqiang Luo: Foshan University; Tek Yong Lim: Multimedia University

This study aims to develop a virtual rehabilitation system to assist upper limb motor training for older post-stroke patients. The system contains data-driven virtual exergames simulating task-oriented training; it receives the rehabilitation prescription and the online data collected from the multi-mode hand controller and the depth camera, and assesses the patient's performance online, which in turn updates the data of the virtual exergames to adapt to the patient's training progress. Its innovation lies in providing precision rehabilitation through a personalized learning experience, improving adherence to and the effectiveness of virtual rehabilitation.

Real-Time Exploded View Animation Authoring in VR Based on Simplified Assembly Sequence Planning

Jesper Gaarsdal: SynergyXR ApS; Sune Wolff: SynergyXR ApS; Claus B. Madsen: Aalborg University

In this paper, we present an animation authoring tool capable of automatically generating an exploded view of assemblies in real time. 3D user interfaces in virtual reality are used for controlling the explosion distance of parts, as well as other metrics affecting the explosion direction and order. The methods used are based on assembly sequence planning and employ an assembly-by-disassembly approach, requiring no additional information about parts other than their geometry and position in the assembled state. The computation times for five assemblies of different complexities are presented, tested on a standalone virtual reality device.
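
The tool itself relies on assembly sequence planning, but the basic notion of an exploded view, offsetting each part along an explosion direction scaled by a distance parameter, can be sketched naively as below. The part data and centroid-based direction are invented for illustration and are much simpler than an assembly-by-disassembly approach:

```python
"""Naive exploded-view offsets (illustrative only)."""
import numpy as np

parts = {                        # part name -> centroid in the assembled state
    "base":  np.array([0.0, 0.00, 0.0]),
    "gear":  np.array([0.0, 0.10, 0.0]),
    "cover": np.array([0.0, 0.25, 0.0]),
}

def exploded_positions(parts, distance=0.5):
    centre = np.mean(list(parts.values()), axis=0)
    out = {}
    for name, c in parts.items():
        offset = c - centre
        norm = np.linalg.norm(offset)
        direction = offset / norm if norm > 1e-9 else np.zeros(3)
        out[name] = c + distance * direction     # push the part outward
    return out

for name, p in exploded_positions(parts).items():
    print(name, np.round(p, 3))
```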

Sequential Eyelid Gestures for User Interfaces in VR

Christian Arzate Cruz: Ritsumeikan University; Tatsuya Natsume: Ritsumeikan University; Mizuto Ichihara: Ritsumeikan University; Asako Kimura: Ritsumeikan Univ.; Fumihisa Shibata: Ritsumeikan University

In this poster, we present user-defined sequential eyelid gestures to control UIs in VR. Previous works have proposed sequential eyelid gestures for VR interaction. However, they do not include squint or wide-open eyelid states. In contrast, we consider those eyelid states too, and we encouraged users to prioritize how well the gestures fitted the commands. To validate that the user-defined gestures are effective, we tested them in a user study (N = 17) with five different UI commands for VR interfaces. We use the Vive Pro Eye HMD to detect all our final gestures with low error rates.

Comparing Context-sharing Interfaces in XR Remote Collaboration

Booth: 15

Eunhee Chang: University of South Australia; Yongjae Lee: Arizona State University; Byounghyun Yoo: Korea Institute of Science and Technology; Hojeong Im: Korea Institute of Science and Technology

eXtended Reality (XR) remote collaboration refers to working together on virtual data or real objects through various types of reality. To achieve a more immersive collaboration experience, it is critical to converge heterogeneous realities in one shared workspace, where environmental changes are continuously updated. In this study, we extend our previous work, the Webized XR system, in terms of sharing the local environmental context. This system can provide three types of context-sharing interface (2D video, 360° video, and 3D reconstruction), delivering spatial information about the workspace. We also performed a pilot user study to measure the quality of this system.

Towards more child safety-oriented decisions through VR?

Haoyang Du: Goldsmiths, University of London; Songkai Jia: Goldsmiths, University of London; Joel Gautschi: Zurich University of Applied Science; Julia Quehenberger: Zurich University of Applied Science ; David Lätsch: ZHAW Zurich University of Applied Sciences; Xueni Pan: Goldsmiths

Witnessing intimate partner violence (IPV) could have long-term negative impacts on children. Such exposure is, however, often overlooked by professionals. We developed a VR scenario which allowed participants to witness IPV from a child’s perspective. In this pilot study, we found that they made more child-protective decisions and showed higher levels of empathy towards the child after VR exposure. When comparing the impact between the child’s perspective and third-person perspective, no statistically significant differences were found in empathy and decision-making, even though those with the child’s perspective had significantly higher levels of presence.

Tomato Presence: Virtual Hand Ownership with a Disappearing Hand

Anthony Steed: University College London; Vit Drga: University College London

Tomato presence is a term coined by Owlchemy Labs to refer to the observation that players of their game Job Simulator can experience `hand presence' over an object that is not their hand. When playing the game, if a player grabs an object, their virtual hand disappears, leaving the grabbed object. This seems to conflict with current theories of how users might react to a visual/proprioceptive mismatch of their embodiment. We ran a hand ownership experiment in which we implemented a standard object grasp and the disappearing hand grasp. We show that on a body-ownership questionnaire there is evidence that users feel ownership over a disappearing virtual hand. We also confirm that most users do not report that their hand disappeared.

Reducing Foreign Language Anxiety with Virtual Reality

Seonjeong Park: Goldsmiths, University of London; Damaris D E Carlisle: LASALLE College of the Arts; Marco Gillies: Goldsmiths, University of London; Xueni Pan: Goldsmiths

An immersive VR experience was developed to examine the relationship between foreign language anxiety (FLA), virtual audience characteristics, and learners' perceptions of the virtual audience. Seven students studying English as a second language selected their avatars and practised their presentations in front of a virtual audience in a realistic classroom. Results indicated that participants' FLA levels increased when presenting to larger audiences, but decreased after repeated presentations. They were also able to identify the surroundings more readily when presenting in front of smaller audiences, as well as in front of audiences of diverse ethnicity.

Detecting Distracted Students in an Educational VR Environment Utilizing Machine Learning on EEG and Eye-Gaze Data

Sarker Monojit Asish: University of Louisiana at Lafayette; Arun K Kulshreshth: University of Louisiana at Lafayette; Christoph W Borst: University of Louisiana at Lafayette

Virtual Reality (VR) is frequently used in various educational contexts since it could improve knowledge retention compared to traditional learning methods. However, distraction is an unavoidable problem in the educational VR environment due to stress, mind-wandering, unwanted noise/sounds, irrelevant stimuli, etc. We explored the combination of EEG and eye gaze data to detect student distractions in an educational VR environment. We designed an educational VR environment and trained three machine learning models (CNN-LSTM, Random Forest and SVM) to detect distracted students. Our preliminary study results show that Random Forest and CNN-LSTM provide better accuracy (98%) compared to SVM.

Analysis and Synthesis of Spatial Audio for VR Applications: Comparing SIRR and RSAO as Two Main Parametric Approaches

Booth: 16

Atiyeh Alinaghi: University of Southampton; Luca Remaggi: NA; Hansung Kim: University of Southampton

In order to have a natural experience in a virtual reality environment, it is crucial to align the sound with the surrounding room acoustics. The room acoustics are usually captured by room impulse responses (RIRs), which can be employed to regenerate the same audio perception. In this paper, we applied and compared two main parametric approaches, Spatial Impulse Response Rendering (SIRR) and Reverberant Spatial Audio Object (RSAO), to encode and render the RIRs. We showed that SIRR synthesizes the early reflections more precisely, whereas RSAO renders the late reverberation more accurately.

Material Recognition for Immersive Interactions in Virtual/Augmented Reality

Booth: 17

Yuwen Heng: School of Electronics and Computer Science, University of Southampton; Srinandan Dasmahapatra: University of Southampton; Hansung Kim: University of Southampton

To provide an immersive experience in a mirrored virtual world, with features such as spatially synchronised audio, visualisation of reproduced real-world scenes and haptic sensing, it is necessary to know the materials of object surfaces, which provide the optical and acoustic properties for the rendering engine. We focus on identifying materials from real-world images to reproduce more realistic and plausible virtual environments. To cope with considerable variation in materials, we propose the DPT architecture, which dynamically decides the dependency on different patch resolutions. We evaluate the benefits of learning from multiple patch resolutions on the LMD and OpenSurfaces datasets.

Shrink or grow the kids? Scale cognition in an immersive virtual environment for K-12 summer camp

Linfeng Wu: North Carolina State University ; Karen B Chen: North Carolina State University; Brian Sekelsky: North Carolina State University; Matthew Peterson: North Carolina State University; Tyler Harper-Gampp: NC State University; Cesar Delgado: North Carolina State University

Virtual reality (VR) has been widely used for education and affords embodied learning experiences. Here we describe Scale Worlds (SW), an immersive virtual environment that allows users to shrink or grow by powers of ten (10X) and experience entities from the molecular to the astronomical scale, as well as students' impressions and outcomes from experiencing SW in a CAVE during experiential summer outreach sessions. Data collected from post-visit surveys of 69 students and from field observations revealed that VR technologies enabled interactive learning experiences, encouraged active engagement and discussions among participating students, enhanced the understanding of size and scale, and increased interest in STEM careers.

Subjective Quality Assessment of User-Generated 360° Videos

Booth: 18

Yuming Fang: Jiangxi University of Finance and Economics; Yiru Yao: Jiangxi University of Finance and Economics; Xiangjie Sui: Jiangxi University of Finance and Economics; Kede Ma: City University of Hong Kong

In this poster, we establish one of the largest virtual reality (VR) video databases, containing 502 user-generated videos with rich content and commingled authentic distortions (often localized in space and time). We capture the viewing behaviors (i.e., scanpaths) of 139 users, and collect their opinion scores of perceived quality under four different viewing conditions (two starting points × two exploration times). We provide a thorough statistical analysis of the recorded data, resulting in several interesting observations that reveal how viewing conditions affect human behaviors and perceived quality. The database is available at https://github.com/Yao-Yiru/VR-Video-Database.

IPS: Integrating Pose with Speech for enhancement of body pose estimation in VR remote collaboration

Seoyoung Kang: KAIST; Sungwoo Jeon: KAIST; Woontack Woo: KAIST

We propose a Speech-Pose integration method to overcome the limitations of existing body pose estimation. Unlike previous speech-based gesture generation methods, our proposal reflects the user's actual pose using a vision-based system, with speech as a subsidiary input. When the system detects that the user is out of the camera's sight, it carries out a context-aware method analyzing the speech. The system determines the target pose and connects it with the last pose captured by the camera using the bounding box. Our system can be a robust solution for avatar-mediated remote collaboration that requires accurate gesture delivery, such as VR remote yoga training.

Multi-person tracking for virtual reality surrounding awareness

Booth: 19

Ayman Mukhaimar: Victoria University; Yuan Miao: Victoria University; Zora Vrcelj: Victoria University; Bruce Gu: Victoria University; Ang Yang: Victoria University; Jun Zhao: Victoria University; Malindu Sandanayake: Victoria University; Melissa Chan: Victoria University

Virtual reality devices are designed to cover our vision so we are unaware of our surroundings and completely immersed in a different world. This limits VR users' ability to move freely in crowded areas, such as classrooms, and restricts user interaction in VR. We present a framework that enables VR users to see other people around them inside the VR environment with the help of an external 3D depth camera. The proposed framework enables multiple VR users to share one tracking camera and provides a cost-effective solution. The proposed framework can also help in performing collaborative activities.

HandAttNet: Attention 3D Hand Mesh Estimation Network

Booth: 20

Jintao Sun: Beijing Institute of Technology; Gangyi Ding: Beijing Institute of Technology

Hand pose estimation and reconstruction are becoming increasingly compelling in the metaverse era. However, in reality hands are often heavily occluded, which makes the estimation of occluded 3D hand meshes challenging. Previous work tends to ignore information from the occluded regions, but we believe that hand information from the occluded regions can be highly utilized. Therefore, in this study, we propose a hand mesh estimation network, HandAttNet. We design a cross-attention mechanism module and a DUO-FIT module to inject hand information into the occluded region. Finally, we use a self-attention regression module for 3D hand mesh estimation. Our HandAttNet achieves SOTA performance.
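
The exact architecture is not reproduced here, but the core idea of using cross-attention to let visible-region features inform occluded-region features can be sketched with a standard attention layer. The module name, token shapes, and dimensions below are hypothetical, not HandAttNet's actual design:

```python
"""Cross-attention feature injection sketch (illustrative only)."""
import torch
import torch.nn as nn

class CrossAttentionInject(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, occluded_tokens, visible_tokens):
        # Queries come from the occluded region; keys/values from visible hand features.
        injected, _ = self.attn(occluded_tokens, visible_tokens, visible_tokens)
        return self.norm(occluded_tokens + injected)

occ = torch.randn(2, 32, 256)    # batch, occluded-region tokens, channels
vis = torch.randn(2, 64, 256)    # batch, visible-region tokens, channels
print(CrossAttentionInject()(occ, vis).shape)   # torch.Size([2, 32, 256])
```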

Measuring Collision Anxiety in XR Exergames

Patrizia Ring: University Duisburg-Essen; Maic Masuch: University of Duisburg-Essen

Extended Reality (XR) applications have become increasingly relevant due to technical advances, but also have their own challenges regarding the user’s real-world orientation. We propose a novel definition and a not yet validated questionnaire for the feeling of disorientation and fear of colliding with real objects in XR, called Collision Anxiety (CA). Participants (N = 37) played an AR and VR version of a game with results suggesting that while an AR game can provide a similar player experience compared to its VR equivalent, differences regarding CA exist.

A Palm-Through Interaction Technique for Controlling IoT Devices

Booth: 21

Zhengchang Yang: NAIST; Naoya Isoyama: Nara Institute of Science and Technology; Nobuchika Sakata: Ryukoku University; Kiyoshi Kiyokawa: Nara Institute of Science and Technology

In this study, we propose a palm-through interaction technique to control smart devices using an augmented reality head-mounted display. The contributions are as follows: 1) the proposed method makes it intuitive to aim at and touch the remote object of concern, and 2) natural haptic feedback is generated. The user study suggests that the palm-through interface is more intuitive because the user interface is similar to that of a smartphone, and it gives users haptic confidence. This showed us how smart devices can greatly benefit from an AR implementation, motivating us to further explore this approach in more scenarios.

Feasibility and Expert Acceptance of a Virtual Reality Gait Rehabilitation Tool

Alexandre Gordo: Instituto Superior Técnico, Universidade de Lisboa; Ivo Roupa: Instituto Superior Técnico, Universidade de Lisboa; Hugo Nicolau: Universidade de Lisboa; Daniel S. Lopes: INESC ID, Instituto Superior Técnico, Universidade de Lisboa

We present LocomotiVR, a Virtual Reality tool designed with physiotherapists to improve gait rehabilitation in clinical practice. The tool features two interfaces: a VR environment to immerse the patient in the therapy activity, and a desktop tool operated by a physiotherapist to customize exercises and follow the patient's performance. Results revealed that LocomotiVR presented promising acceptability, usage, and engagement scores. These results were supported by qualitative data collected from participating experts, which indicated high levels of satisfaction, motivation, and acceptance to incorporate LocomotiVR into daily therapy practice. Concerns were related to patient safety and the lack of legal regulation.

The role of attention and cognitive workload in measuring levels of task complexity within virtual environments

Booth: 22

Yobbahim J. Vite: University of Calgary; Yaoping Hu: University of Calgary

This paper studied the role of attention and cognitive workload (CWL) in measuring levels of task complexity. Within a virtual environment (VE), participants undertook a task with a baseline and three levels of complexity. The baseline and levels were defined as in an existing work. A well-known ratio of attention, AR, was computed from the brainwaves recorded from the participants during the task. Their CWL was derived from the de facto standard NASA Task Load Index questionnaire. The study's outcomes indicated that, while the role of the AR in the measurement remained unclear, the CWL could serve as a measure of task complexity.

Magnifying Augmented Mirrors for Accurate Alignment Tasks

Vanessa Kern: Friedrich-Alexander-Universität Erlangen-Nürnberg; Constantin Kleinbeck: Friedrich-Alexander Universität Erlangen-Nürnberg; Kevin Yu: Technical University of Munich; Alejandro Martin-Gomez: Johns Hopkins University; Alexander Winkler: Technical University of Munich; Nassir Navab: Technische Universität München; Daniel Roth: Friedrich-Alexander-Universität Erlangen-Nürnberg

Limited mobility in augmented reality applications restricts spatial understanding along with augmentation placement and visibility. Systems can counteract this by providing additional perspectives through tracked and augmented mirrors, without requiring user movement. However, the decreased visual size of mirrored objects reduces accuracy for precision tasks. We propose Magnifying Augmented Mirrors: digitally zoomed mirror images mapped back onto their surface, producing magnified reflections. In a user study (N = 14) conducted in virtual reality, we evaluated our method on a precision alignment task. Although participants needed time for acclimatization, they achieved the most accurate results using a magnified mirror.

Exploring the Usefulness of Visual Indicators for Monitoring Students in a VR-based Teaching Interface

Yitoshee Rahman: University of Louisiana at Lafayette; Arun K Kulshreshth: University of Louisiana at Lafayette; Christoph W. Borst: University of Louisiana at Lafayette

Teaching remotely using immersive Virtual Reality (VR) technology is becoming more popular as well as imperative with the ever-changing educational delivery methods. However, it is not easy for a teacher to monitor students in VR since only student avatars are visible. We designed and tested two educational VR teaching interfaces to help a teacher monitor students. Our comparative analysis using a preliminary study with 5 participants showed that the teaching interface with centrally-arranged emoticon-like indicators, displaying a summary of student information, performed better than the interface with avatar-located indicators in terms of teaching duration and classroom management.

Cinematography in the Metaverse: Exploring the Lighting Education on a Soundstage

Booth: 23

Xian Xu: The Hong Kong University of Science and Technology; Wai Tong: The Hong Kong University of Science and Technology; Zheng Wei: The Hong Kong University of Science and Technology; Meng Xia: Carnegie Mellon University; Lik-Hang Lee: Korea Advanced Institute of Science & Technology; Huamin Qu: The Hong Kong University of Science and Technology

Lighting education is a foundational component of cinematography education. However, there is still a lack of knowledge on the design of a VR system for teaching cinematography. In this work, we present our VR soundstage system for instructors and learners to emulate cinematography lighting in virtual scenarios and then evaluate it from five aspects in the user study. Qualitative and quantitative feedback in our user study shows promising results. We further discuss the benefits of the approach and opportunities for future research.

Real-time Hand-object Occlusion for Augmented Reality Using Hand Segmentation and Depth Correction

Booth: 24

Yuhui Wu: Beijing Institute of Technology; Yue Liu: Beijing Institute of Technology; Jiajun Wang: Beijing Institute of Technology

Hand-object occlusion is crucial to enhance the realism of Augmented Reality, especially for egocentric hand-object interaction scenes. In this paper, a hand segmentation-based depth correction approach is proposed, which helps to realize real-time hand-object occlusion. We introduce a lightweight convolutional neural network to quickly obtain a real hand segmentation mask. Based on the hand mask, different strategies are adopted to correct the depth data of hand and non-hand regions, which implements hand-object occlusion and object-object occlusion simultaneously to deal with complex hand situations during interaction. The experimental results demonstrate the feasibility of our approach, presenting visually appealing occlusion effects.
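
As a rough illustration of depth-based hand-object compositing, the sketch below keeps the sensor depth inside a hand mask (optionally offset) and shows virtual pixels only where the virtual surface is closer than the corrected real depth. The arrays, offset, and strategy are simplified placeholders rather than the paper's correction method:

```python
"""Per-pixel occlusion compositing sketch (illustrative only)."""
import numpy as np

def composite(real_rgb, real_depth, hand_mask, virt_rgb, virt_depth,
              hand_depth_offset=0.0):
    corrected = real_depth.copy()
    corrected[hand_mask] = real_depth[hand_mask] + hand_depth_offset
    show_virtual = (virt_depth > 0) & (virt_depth < corrected)
    out = real_rgb.copy()
    out[show_virtual] = virt_rgb[show_virtual]   # virtual object in front
    return out

# Tiny synthetic 2x2 frame: the hand (0.4 m) occludes the virtual object (1.0 m)
# at the top-left pixel; the object appears everywhere else.
real_rgb   = np.zeros((2, 2, 3), np.uint8)
virt_rgb   = np.full((2, 2, 3), 255, np.uint8)
real_depth = np.array([[0.4, 2.0], [2.0, 2.0]])
hand_mask  = np.array([[True, False], [False, False]])
virt_depth = np.full((2, 2), 1.0)
print(composite(real_rgb, real_depth, hand_mask, virt_rgb, virt_depth))
```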

Does realism of a virtual character influence arousal? Exploratory study with pupil size measurement.

Radosław Sterna: Institute of Psychology, Faculty of Philosophy, Jagiellonian University in Krakow; Artur Cybulski: Jagiellonian University in Krakow; Magdalena Igrs-Cybulska: AGH University of Science and Technology; Joanna Pilarczyk: Institute of Psychology, Faculty of Philosophy, Jagiellonian University in Krakow; Michał Kuniecki: Institute of Psychology, Faculty of Philosophy, Jagiellonian University in Krakow

In this poster we discuss exploratory analyses seeking to test whether the realism of a virtual character can influence arousal, measured by pupil diameter change in response to viewing the character. To do this we conducted a study in which 180 virtual characters were presented to 45 participants in a free-viewing procedure while their physiological responses were recorded. We manipulated the realism of these characters in terms of their appearance, behavior and interactivity. Results suggest that appearance and interactivity can influence the arousal of the participants.

An Exploratory Investigation into the Design of a Basketball Immersive Vision Training System

Pin-Xuan Liu: National Tsing Hua University; Tse-Yu Pan: Institute of A.I. Cross-disciplinary Tech; Min-Chun Hu: National Tsing Hua University; Hung-Kuo Chu: National Tsing Hua University; Hsin-Shih Lin: National Cheng Kung University; Wen-Wei Hsieh: Physical Education Office; Chih-Jen Cheng: National Yang Ming Chiao Tung University

Good vision ability is important for basketball players in possession of the ball, allowing them to efficiently search for wide-open teammates and quickly pass the ball to the one with a better chance to score according to the defenders' movements. To customize precise training scenarios for cultivating an individual athlete's vision abilities, we propose a basketball immersive vision training system that considers not only the real-world vision training scenario of a basketball team but also the concept of optimal gaze behaviors, to provide more reliable training tasks. The results of this pilot study show that the five recruited participants were satisfied with the experience of the proposed training system.

How Field of View Affects Awareness of an Avatar During a Musical Task in Augmented Reality

Suibi Che-Chuan Weng: University of Colorado; Torin Hopkins: University of Colorado; Rishi Vanukuru: University of Colorado Boulder; Chad Tobin: University of Colorado; Amy Banic: University of Wyoming; Daniel Leithinger: University of Colorado, Boulder; Ellen Yi-Luen Do: University of Colorado, Boulder

To investigate how field of view (FOV) affects how players notice the communicative gestures of their partner's avatar in a musical task, we conducted an experiment that compares several AR technologies with varying FOV. We measured response time to communicative gestures, co-presence score, and task enjoyment in three different AR scenarios: holograms, an AR headset, and an AR headset with a notification of the avatar's intention to gesture. Results suggest that the hologram setup had the fastest response time and the highest ratings for sense of co-presence and task enjoyment. Notifications slowed response time but noticeably improved co-presence with the avatar.

A Comparison of Gesture-based Interaction and Controller-based Interaction for External Users in Co-located Asymmetric Virtual Reality

Booth: 25

Yuetong Zhao: Beihang University; Shuo Yan: Beihang University; Xuanmiao Zhang: Beihang University; Xukun Shen: Beihang University

Head-mounted displays (HMDs) create a highly immersive experience, while limiting VR users' awareness of external users co-located in the same environment. In our work, we designed a wearable gesture-based interface for external users in two main asymmetric VR scenarios: (1) Object Transition and (2) Collaborative Game. We conducted a hybrid user study with twenty participants to compare gesture-based and controller-based interaction and investigate their effects on VR experience, social presence, and collaborative efficiency for external users.

Multimodal Apology: Using WebXR to Repair Trust with Virtual Companion

Booth: 26

Yunqiang Pei: University of Electronic Science and Technology of China; RenMing Huang: University of Electronic Science and Technology of China; Guoqing Wang: University of Electronic Science and Technology of China; Yang Yang: University of Electronic Science and Technology of China; Ning Xie: University of Electronic Science and Technology of China; Heng Tao Shen: University of Electronic Science and Technology of China

Research on trust repair in human-agent cooperation is limited, but it can mitigate trust violations and prevent distrust. We conducted a study with 45 participants who interacted with a companion agent through mixed-reality interfaces while playing a language-learning quiz game. Our results showed that displaying the agent's intentions during mistake explanations helped repair trust. We also found that this had positive effects on affect, user experience, and cooperation willingness. Our findings suggest that including agent intentions in mistake explanations is an effective way to repair trust in human-agent interactions.

MRMSim: A Framework for Mixed Reality based Microsurgery Simulation

Nan Xiang: Xi'an Jiaotong-Liverpool University; Hai-Ning Liang: Xi'an Jiaotong-Liverpool University; Lingyun Yu: Xi'an Jiaotong-Liverpool University; Xiaosong Yang: Bournemouth University; Jian J Zhang: Bournemouth University

With the rapid development of computer technologies, virtual surgery has gained extensive attention over the past decades. In this research, we take advantage of mixed reality (MR), which creates an interactive environment where physical and digital objects coexist, and present a framework (MRMSim) for MR-based microsurgery simulation. It enables users to practice microanastomosis skills with real microsurgical instruments rather than additional haptic feedback devices. Both hardware design and software development are included in this work. A prototype system is proposed to demonstrate the feasibility and applicability of our framework.

Collaborative VR: Conveying a Complex Disease and Its Treatment

Maximilian Rettinger: Technical University of Munich ; Sebastian Berndt: Alexion Pharma Germany GmbH; Gerhard Rigoll: Technical University of Munich; Christoph Schmaderer: Technical University of Munich

In medical research, there are constantly new findings that lead to new insights and novel treatments. Hence, physicians always need to stay up-to-date to provide high-quality patient care. Virtual reality is unique among the available learning methods as it offers the opportunity to collaboratively explore the inside of the human body to experience and understand the complex workings of medical interventions. We realized such a VR-Learning-System and assessed its potential with medical experts (n=9). Initial results indicate that experts prefer the new system, and according to their subjective assessment, they achieved a higher learning outcome compared to conventional methods.

Example Process for Designing a Hybrid User Interface for a Multi-Robot System

Jan Philipp Gründling: Trier University; Nathalie Schauffel: Trier University; Sebastian Pape: RWTH; Simon Oehrl: RWTH Aachen University; Torsten Wolfgang Kuhlen: RWTH Aachen University; Thomas Ellwart: Trier University; Benjamin Weyers: Trier University

Interaction with semi-autonomous multi-robot systems requires the integration of technical, task-related, and user-oriented details. While basic design principles for user interfaces already exist, they are not yet tailored to the specific case of multi-robot systems. This work shows how a design process tailored to multi-robot systems can be based on basic design principles and where it needs more specific methods. In doing so, it is indicated that virtual reality (VR) can be used as an instrument for conveying situational awareness.

UI Binding Transfer for Bone-driven Facial Rigs

Booth: 27

Jing Hou: Beijing Institute of Technology; Zhihe Zhao: Beijing Institute of Technology; Dongdong Weng: Beijing Institute of Technology

We propose an automatic method to transfer the UI binding from a rigged model to a new target mesh. We use feed-forward neural networks to find the mapping functions between the bones and controllers of the source model. The learned mapping networks then become the initial weights of an auto-encoder. The auto-encoder is then retrained using target controller-bone pairs obtained by the mesh transfer and bone decoupling method. Our system only requires the neutral expression of the target person but allows artists to customize other basic expressions, and it is evaluated by the semantic reproducibility of basic expressions and the semantic similarity.

A Design Thinking Approach to Construct a Multi-learner VR Lab Monitoring and Assessment Tool Deployed in an XR Environment

Booth: 28

Pak Ming Fan: Hong Kong University of Science and Technology; Santawat Thanyadit: King Mongkut's University of Technology Thonburi; Ting-Chuen Pong: Hong Kong University of Science and Technology

A Virtual Reality Laboratory (VR Lab) refers to a virtual experiment session that aims to deliver procedural knowledge to students, like that in physical laboratories, using Virtual Reality (VR) technology. While existing VR Lab designs provide a rich learning experience to students, instructors receive limited information on students' performance and cannot provide feedback or assess their performance. This motivated us to create an XR-based multi-learner monitoring and assessment tool for VR Labs. We evaluated the tool with domain experts and report recommendations for developing monitoring and assessment tools for VR Lab sessions.

A VR Enabled Visualization System for Race Suits Design

Booth: 29

Bing Ning: Beijing Institute of Fashion Technology; Mingtao Pei: Beijing Institute of Technology; Yixuan Wang: Beijing Institute of Fashion Technology; Ying Jiang: Beijing Institute of Fashion Technology; Li Liu: Beijing Institute of Fashion Technology

We propose a VR-enabled visualization system for race suit design, allowing designers to observe their design work from a near-real perspective of the user. The system first constructs a virtual competition venue automatically according to the real field settings. A digital avatar scanned from an athlete is dressed in a designed race suit. The avatar is then given a sequence of preset sports motions to act out in the virtual competition venue. The system can synthesize the user's views for designers to observe via VR devices, improving the efficiency of the race suit design workflow.

Development of Training Systems using Spatial Augmented Hand

Isabella Mika Taninaka: Osaka University; Daisuke Iwai: Osaka University; Kosuke Sato: Osaka University; Parinya Punpongsanon: Osaka University

Training to execute procedural tasks is often a burden for novice workers. Different systems have been developed to improve users' experience and, consequently, their performance at a task. However, systems for tasks requiring hand operation lack fine details regarding a specialist's choice of hand movements or positions. This study proposes a projection-based hand visualization system that guides the user's physical hand through a task.

Color calibration in virtual reality for Unity and Unreal

Francisco Díaz-Barrancas: Justus Liebig University; Raquel Gil Rodríguez: Justus Liebig University; Avi Aizenman: Justus Liebig University; Florian S. Bayer: Justus Liebig University; Karl R. Gegenfurtner: Justus Liebig University

This work measures the relationship between RGB intensities and the reflected color for each color channel of the HTC Vive Pro Eye virtual reality (VR) headset. The study additionally measured the display spectra of the device and characterized how Unity and Unreal 3D rendering software influences chromatic behavior. The results were compared to measurements taken without rendering software to quantify the pure characteristics of display primaries. A methodology was proposed to carry out a color calibration customized to the type of material or graphics engine used for more accurate and realistic color representation in VR.
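
A common first step in this kind of display characterisation is fitting a gain-gamma curve per colour channel from measured luminance and inverting it to build a calibration lookup. The sketch below uses made-up measurements and a simple model; it is not the authors' procedure or data:

```python
"""Per-channel gain-gamma fit for display calibration (illustrative only)."""
import numpy as np
from scipy.optimize import curve_fit

def gain_gamma(x, gain, gamma):
    return gain * np.power(x, gamma)

digital  = np.linspace(0.1, 1.0, 10)                  # normalised drive levels
measured = 180.0 * digital ** 2.2 \
           + np.random.default_rng(1).normal(0.0, 1.0, 10)   # fake photometer data

(gain, gamma), _ = curve_fit(gain_gamma, digital, measured, p0=(100.0, 2.0))
print(f"gain = {gain:.1f} cd/m^2, gamma = {gamma:.2f}")

# Inverting the fitted curve gives the drive level needed for a target luminance,
# the basis of a per-channel calibration lookup table.
target = 50.0
print(f"drive for {target} cd/m^2: {(target / gain) ** (1.0 / gamma):.3f}")
```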

High-speed and Low-Latency Ocular Parallax Rendering Improves Binocular Fusion in Stereoscopic Vision

Booth: 30

Yuri Mikawa: The University of Tokyo; Masahiro Fujiwara: The University of Tokyo; Yasutoshi Makino: The University of Tokyo; Hiroyuki Shinoda: The University of Tokyo

Most of the current virtual/augmented reality (VR/AR) displays use binocular parallax to present 3D images. However, they do not consider ocular parallax, which refers to the slight movement of the viewpoint with eye rotation, such as saccade. A commercial head-mounted display (HMD) has a large latency in realizing ocular parallax rendering. In our study, a high-speed (1,000 fps) and low-latency (average 4.072 ms) ocular-parallax-rendering device was assembled, and its effect was examined wherein a reasonable approximation algorithm for viewpoint was applied. A user study experiment employing a random dot stereogram (RDS) showed improvements in binocular fusion, the causes of which are comprehensively discussed.

Effect of Look-Alike Avatars on Students' Perceptions of Teaching Effectiveness

Kwame Agyemang Baffour: Graduate Center; Oyewole Oyekoya: City University of New York - Hunter College

This paper presents a study investigating the influence of look-alike avatars on students' perceptions of teaching effectiveness in three-dimensional virtual and augmented reality environments. The study investigated three avatar representations: i) a look-alike avatar of the instructor; ii) a stick avatar; and iii) a video recording of the instructor. Eighteen participants were asked to rank the three avatar representations, as well as the immersion experience of the teaching simulation on the virtual and augmented reality displays. The results of this study suggest that look-alike avatars can be used to represent an instructor in virtual environments.

Session/Server 2

Exploring the Display Patterns of Object-Centered User Interface in Head-Worn Mixed Reality Environment

Booth: 31

Yihan Li: Beihang University; Yong Hu: Beihang University; Xukun Shen: Beihang University

An object-centered user interface (UI), in which the UI is displayed around real-world objects, makes it convenient for users to interact with real objects in a head-worn Mixed Reality (MR) environment. However, it remains unclear how to design object-centered UIs so as to reduce the interference of an always-displayed UI. In this research, we designed two display patterns for object-centered UIs and compared them with a basic object-centered UI without any display pattern. Our results reveal the trade-offs among the three UIs and capture user preferences. Both display patterns provide better support for users while reducing the occlusion and interference caused by the UI.

SPAT-VR: A Holistic and Extensible Framework for VR Project Management

Booth: 32

Jin Qi Yeo: Singapore Institute of Technology; Xinxing Xia: Shanghai University; Kan Chen: Singapore Institute of Technology; Malcolm Yoke Hean Low: Singapore Institute of Technology; Alvin Chan: Singapore Institute of Technology; Dongyu Qiu: Singapore Institute of Technology; Frank Guan: Singapore Institute of Technology

With the increasing availability of VR content and the affordability of consumer VR hardware, VR has been applied in various domains. However, to our knowledge, no existing framework addresses project management across the entire life cycle of building VR applications. In this paper, we present SPAT-VR: a holistic and extensible framework for VR project management. The proposed framework has been deployed to successfully complete a number of VR projects, and a user survey yielded uniformly positive feedback on it.

Investigating the Minimal Condition of the Dynamic Invisible Body Illusion

Ryota Kondo: Keio University; Maki Sugimoto: Keio University

Visual-tactile synchronization or visual-motor synchronization makes a non-innate body feel like one's own body (illusory body ownership). A recent study has shown that body ownership is induced to an empty space between the hands and feet from the motion of only hands and feet (dynamic invisible body illusion). However, it is unclear whether both hands and feet are necessary to induce the illusion. In this study, we investigated the minimal condition for the dynamic invisible body illusion by manipulating the presentation of hands and feet. Our results suggest that both hands and feet are necessary for the illusion.

Stretchy: Enhancing Object Sensation Through Multi-sensory Feedback and Muscle Input

Booth: 33

Nicha Vanichvoranun: Korea Advanced Institute of Science and Technology(KAIST); Bowon Kim: KAIST; Dooyoung Kim: KAIST; Woontack Woo: KAIST ; Jeongmi Lee: KAIST; Sang Ho Yoon: KAIST

Current work on 3D interaction methods mainly focuses on rigid object manipulation and selection, while little has been done on elastic object interaction. We therefore propose a novel interaction method for observing and manipulating virtual fabric in a VR environment. We use multi-sensory pseudo-haptic feedback (a combination of tactile and visual feedback) and muscle strength data (EMG) to let users perceive the stiffness of virtual fabric and flexible objects. For demonstration, we created fabric patches with varying stiffness, and users could distinguish the differences. Our system could be deployed in a virtual clothing store to give consumers information about product stiffness and texture.

An AR Visualization System for 3D Carbon Dioxide Concentration Measurement Using Fixed and Mobile Sensors

Maho Otsuka: Nara Institute of Science and Technology; Monica Perusquia-Hernandez: Nara Institute of Science and Technology; Naoya Isoyama: Nara Institute of Science and Technology; Hideaki Uchiyama: Nara Institute of Science and Technology; Kiyoshi Kiyokawa: Nara Institute of Science and Technology

Recently, the use of CO2 concentration monitors as a guide for ventilation is increasing in spaces where many people gather, such as offices. However, 2D map-based visualizations are not enough to understand the progression of indoor air pollution. Therefore, we propose a three-dimensional (3D) visualization of CO2 concentration using a head-mounted display (HMD). A 3D distribution of CO2 concentration is automatically calculated using the position coordinates and measured values of fixed and mobile sensors. The results of the preliminary user evaluation suggested that AR visualization may be a more effective way to inform the need for ventilation than conventional methods.
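
The abstract does not state how the 3D field is interpolated from the scattered sensor readings; one simple possibility is inverse-distance weighting over a voxel grid, sketched below with hypothetical sensor positions and readings.

```python
import numpy as np

def idw_field(sensor_pos, sensor_ppm, query_pts, power=2.0, eps=1e-6):
    """Inverse-distance-weighted estimate of CO2 concentration (ppm) at 3D
    query points from scattered fixed/mobile sensor readings."""
    d = np.linalg.norm(query_pts[:, None, :] - sensor_pos[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * sensor_ppm[None, :]).sum(axis=1) / w.sum(axis=1)

# Hypothetical sensor positions (metres) and readings (ppm).
pos = np.array([[0.0, 1.0, 0.0], [3.0, 1.0, 2.0], [1.5, 2.5, 4.0]])
ppm = np.array([650.0, 900.0, 1200.0])

# Sample a coarse voxel grid that an AR client could render as a heat volume.
xs, ys, zs = np.meshgrid(np.linspace(0, 4, 5), np.linspace(0, 3, 4),
                         np.linspace(0, 5, 6), indexing="ij")
grid = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
field = idw_field(pos, ppm, grid).reshape(xs.shape)
print(field.shape, field.min(), field.max())
```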

Volumivive: An Authoring System for Adding Interactivity to Volumetric Video

Qiao Jin: University of Minnesota; Yu Liu: University of Minnesota; Puqi Zhou: George Mason University; Bo Han: George Mason University; Svetlana Yarosh: University of Minnesota; Feng Qian: University of Minnesota

Volumetric video is a medium that captures the three-dimensional (3D) shape and movement of real-life objects or people. However, pre-recorded volumetric video is limited in terms of interactivity. We introduce a novel authoring system called Volumivive, which enables the creation of interactive experiences using volumetric video, enhancing the dynamic capabilities and interactivity of the medium. We provide four interaction methods that allow users to manipulate and engage with digital objects within the volumetric video. These interactive experiences can be used in both augmented reality (AR) and virtual reality (VR) settings, providing users with a more immersive and interactive experience.

Mo2Hap: Rendering performer's Motion Flow to Upper-body Vibrotactile Haptic Feedback for VR performance

Booth: 34

Kyungeun Jung: KAIST ; Seungjae Oh: Kyung Hee University; Sang Ho Yoon: KAIST

We present a novel haptic rendering method that translates a virtual performer's motions into real-time vibrotactile feedback. Our method characterizes salient motion points via the proposed Motion Salient Triangle and generates haptic parameters that highlight the performer's motions. We employ a full upper-body haptic system that provides vibrotactile feedback on the torso, back, and shoulders. By layering the performer's motions onto motion-to-haptic feedback, we enable immersive virtual reality performance experiences.

FakeBand: Virtual Band Music Performance with Balanced Interface for Individual Score/Rhythm Play and Inter-player Expression Coordination

Seungwoo Son: Korea University; Yechan Yang: Korea University; Jaeyoon Lee: Korea University; Gerard Jounghyun Kim: Korea University

In this poster, we present a two-user immersive VR music performance system called "FakeBand". It resembles a guitar play-along rhythm game, but extends the interaction model and interface design, focusing on the band performance experience in terms of coordination and collective musical expression. Coordinating the timing of tempo and accent changes begins with the leader and the follower exchanging glances and using bodily gestures: nodding for tempo and large arm motions for accents. The interaction model and the separation of the interface between rhythm/chord play and musical coordination were important in bringing about an active and frustration-free musical experience.

Optical See-through Scope for Observing the Global Component of a Scene

Yoshiaki Makita: Osaka University; Daisuke Iwai: Osaka University; Kosuke Sato: Osaka University

The reflected light on a physical object can be separated into a direct illumination component and a global illumination component. Observing only the global component provides a better understanding of the reflectance property of the surface. In conventional methods, the components are separated using computer vision techniques. In this study, we developed a see-through optical scope that allows a user to selectively observe the global component using high spatial frequency pattern projection and a high spatial frequency mask with a liquid crystal panel. We confirmed that only the global component could be observed with the naked eye through the scope.
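
For context, the conventional computer-vision separation mentioned above (in the style of high-frequency illumination methods) can be computed per pixel from images taken under shifted patterns in which roughly half the scene is lit. The sketch below uses synthetic placeholder images; the poster's optical scope performs the separation without this computation.

```python
import numpy as np

def separate_direct_global(images):
    """Per-pixel separation from images under shifted high-frequency patterns
    (~50% of points lit): L_max ~ direct + global/2 and L_min ~ global/2,
    so direct = L_max - L_min and global = 2 * L_min."""
    stack = np.stack(images, axis=0)
    l_max = stack.max(axis=0)
    l_min = stack.min(axis=0)
    return l_max - l_min, 2.0 * l_min

# Synthetic example: complementary checker patterns over a flat scene with a
# known direct (0.6) and global (0.3) component.
h, w = 64, 64
checker = np.indices((h, w)).sum(axis=0) % 2
patterns = [checker, 1 - checker, np.indices((h, w))[1] % 2]
images = [0.6 * p + 0.3 / 2.0 for p in patterns]
direct, global_ = separate_direct_global(images)
print(direct.mean(), global_.mean())  # ~0.6 and ~0.3
```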

VR-based Vector Watercolor Painting System

Yang Gao: Beihang University; Hongming Bai: Beihang University; Ziyi Pei: Beihang University; Wenfeng Song: Beijing Information Science and Technology University; Aimin Hao: Beihang University

We develop a VR-based watercolor painting system. The system generates paintings in the form of vector graphics, which remain clear under free scaling and panoramic projection in VR space. To create a better operating experience and more convenient user interaction in the VR environment, we optimize the system to enable a variety of realistic watercolor effects and provide users with a number of optional painting settings such as color, size, and effect. After a short learning period, users can easily use the system for a complete VR painting experience.

Breaking the Ice with Group Flow: A Collaborative VR Serious Game for Relationship Enhancement

Booth: 35

Yang Zhang: School of Mechanical, Electrical & Information Engineering; Tangjun Qu: Shandong University; Dingming Tan: Shandong University; Yulong Bian: Shandong University; Juan Liu: College of Computer Science and Technology; Zelu Liu: Shandong University; Weiying Liu: School of Mechanical, Electrical & Information Engineering; Chao Zhou: Tsinghua University

It is usually hard for unfamiliar people to rapidly “break the ice” in the early stage of relationship establishment, which hinders relationship development and team productivity. We therefore propose a collaborative serious game for icebreaking that combines immersive virtual reality (VR) with a brain-computer interface. We design a COVID-19-themed collaboration task and propose an approach to improve empathy between team members by sharing their mental states in the VR world. Moreover, we propose an EEG-based method for dynamic evaluation and enhancement of group flow to achieve better teamwork. We then developed a prototype system. Results of a user study show that our method is beneficial to relationship enhancement.

Bringing Instant Neural Graphics Primitives to Immersive Virtual Reality

Ke Li: Deutsches Elektronen-Synchrotron (DESY); Tim Rolff: Universität Hamburg; Susanne Schmidt: Universität Hamburg; Simone Frintrop: Universität Hamburg; Reinhard Bacher: Deutsches Elektronen Synchrotron DESY; Wim Leemans: Deutsches Elektronen Synchrotron; Frank Steinicke: Universität Hamburg

Neural radiance fields (NeRF), and in particular their extension through instant neural graphics primitives, constitute a novel rendering method for view synthesis that uses real-world images to build photo-realistic immersive virtual scenes. Despite its enormous potential for virtual reality (VR) applications, there is currently little robust integration of NeRF into typical VR systems available for research and benchmarking in the VR community. In this poster paper, we present an extension to instant neural graphics primitives that brings stereoscopic, high-resolution, low-latency, 6-DoF NeRF rendering to the Unity game engine for immersive VR applications.

Enhanced Removal of the Light Reflection of Eyeglass Using Multi-Channel CycleGAN with Difference Image Equivalency Loss

Yoshikazu Onuki: Digital Hollywood University; Kosei Kudo: Tokyo Institute of Technology; Itsuo Kumazawa: Tokyo Institute of Technology

Gaze estimation is commonly performed by equipping infrared (IR) light sources and cameras inside head-mounted displays (HMDs). Some HMDs allow users to wear spectacles, in which case IR reflections off the eyeglasses often seriously obstruct detection of the corneal reflections. In a previous study, we proposed a multi-channel CycleGAN to generate eyeglass-free eye images from images with eyeglasses. In this study, we additionally applied a difference image equivalency loss and late fusion of channels to improve removal performance and reduce mixing across channels, which achieved enhanced removal of eyeglass reflections.

A Simple Approach to Animating Virtual Characters by Facial Expressions Reenactment

Booth: 36

Zechen Bai: Institute of Software, Chinese Academy of Sciences; Naiming Yao: Institute of Software, Chinese Academy of Sciences; Lu Liu: Institute of Software, Chinese Academy of Sciences; Hui Chen: Institute of Software, Chinese Academy of Sciences; Hongan Wang: Institute of Software, Chinese Academy of Sciences

Animating virtual characters is one of the core problems in virtual reality. Facial animations are able to intuitively express emotion and attitudes of virtual characters. However, creating facial animations is a non-trivial task. It depends on either expensive motion capture devices or human designers' time and effort to tune the animation parameters. In this work, we propose a learning-based approach to animate virtual characters by facial expression reenactment from abundant image data. This approach is simple yet effective, and is generalizable to various 3D characters. Preliminary evaluation results demonstrate its effectiveness and its potential to accelerate the development of VR applications.

VibAware: Context-Aware Tap and Swipe Gestures Using Bio-Acoustic Sensing

Booth: 37

Jina Kim: KAIST; Minyung Kim: KAIST; Woo Suk Lee: Microsoft Corporation; Sang Ho Yoon: KAIST

We present VibAware, context-aware tap and swipe gesture detection using bio-acoustic sensing. We employ both active and passive sensing methods to recognize microgestures and classify inherent interaction contexts. Here, the interaction contexts refer to interaction spaces, graspable interfaces, and surface materials. With a context-aware approach, we could support adaptive input controls to enable rich and affordable interactions in Augmented Reality and Mixed Reality for graspable, material-based interfaces. Through an investigation and preliminary study, we confirmed the feasibility of tap and swipe gesture recognition while classifying associated contexts.

Geospatial Augmented Reality Tourist System

Somaiieh Rokhsaritalemi: Sejong University; Beom-Seok Ko: Sejong University; Abolghasem Sadeghi-Niaraki: Sejong University; Soo-Mi Choi: Sejong University

Most existing intelligent AR tourist systems focus on recommending or presenting information related to points of interest (POIs) but do not consider user-generated datasets or the user's preferences, leading to user dissatisfaction. This paper aims to develop an intelligent AR system that applies tourist behavior recognition and user-generated content using volunteered geographic information (VGI). For POI recommendation, user-generated content was modeled with the VIKOR method and optimized with the behavior recognition method. For POI information presentation in HoloLens, sentiment analysis was utilized for VGI data generation, and a deep learning method was applied for POI object detection and visualization.

A Mixed Reality Framework for Interactive Realistic Volume Rendering with Dynamic Environment Illumination

Booth: 38

Haojie Cheng: University of Science and Technology of China; Chunxiao Xu: University of Science and Technology of China; Zhenxin Chen: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences; Jiajun Wang: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences; Yibo Chen: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences; Lingxiao Zhao: University of Science and Technology of China

Interactive volume data visualization using a mixed reality (MR) system is increasingly popular in computer graphics. Because of the demanding requirements on user interaction and visual quality when using head-mounted display (HMD) devices, the conflict between realism and efficiency in direct volume rendering (DVR) is yet to be resolved. We present an MR visualization framework that supports realistic volume rendering. Our framework can capture real-world illumination and generate interactive volume renderings that synchronously reflect the surrounding real-world illumination. A user study shows that our MR visualization framework provides users with an intuitive perception of volumetric structures naturally blended into the real-world environment.

A Piecewise Approach to Mapping Interactions between Room-scale Environments in Remote Mixed/Augmented Reality

Akshith Ullal: Vanderbilt University; Nilanjan Sarkar: Vanderbilt University School of Engineering

Naturalistic interaction from remote settings requires users to be able to move and interact with their environment. However, directly mapping these interactions onto their photorealistic avatars will cause errors, primarily due to the different spatial configurations of the two environments. Hence, the interactions need to be redirected naturalistically. This redirection is a computationally intensive process, and current techniques can only be used for localized spaces. We propose a piecewise approach in which the interaction mapping is split into a 2D locomotion and a 3D gesture redirection mechanism, reducing the computational requirements and making it applicable to room-scale environments.

Increasing Trust with Augmented Reality in Human-in-the-Loop Intelligent System: A Case Study with Inspection Task

Booth: 39

Zhenning Zhang: Nanjing University of Science and Technology; Haoyu Wang: Nanjing University of Science and Technology; Weiqing Li: Nanjing University of Science and Technology; Zhigeng Pan: Nanjing University of Information Science and Technology; Zhiyong Su: Nanjing University of Science and Technology

Augmented reality (AR) technologies combined with computer vision algorithms have shown great potential to assist users in inspection tasks. However, as an intelligent system with a human in the loop, an AR-based inspection system presents key challenges for human-machine trust. In this paper, we propose AR-based trust-increasing factors to enhance user trust in the inspection task. We explore these factors in a user study within an aircraft cabin inspection task that requires accurate inspection and strong confidence. The resulting insights help designers of AR applications provide a confident user experience in human-in-the-loop intelligent systems.

Digital Agent’s Engagement and Affective Posture Impact Its Social Presence and Trustworthiness in Group Decision-Making

Booth: 40

Bin Han: Korea Institute of Science and Technology; Hanseob Kim: Korea Institute of Science and Technology; Jieun Kim: Korea Institute of Science and Technology; Muhammad Firdaus Syawaludin Lubis: Korea Institute of Science and Technology; Jae-In Hwang: Korea Institute of Science and Technology

This poster introduces a user study investigating whether a digital agent’s engagement and affective posture in a group decision-making scenario influence how users perceive the digital agent’s social presence and trustworthiness. Our experiment was conducted with a three-person group discussion with two participants and one digital agent, and a 2 × 2 within-subject study manipulating the engagement and affective posture of a digital agent. Our findings indicate that during group discussions, the engagement posture of digital agents positively affected user perception of digital agents, whereas their affective posture negatively affected user perception.

Masked FER-2013: Dataset for Emotional Face Augmentation

Booth: 41

Bin Han: Korea Institute of Science and Technology (KIST); Hanseob Kim: Korea Institute of Science and Technology; Gerard Jounghyun Kim: Korea University; Jae-In Hwang: Korea Institute of Science and Technology

This poster introduces the Masked FER-2013 dataset that can be used to analyze facial emotions in people wearing masks and train a recognizer. The FER-2013 dataset containing face images annotated with seven emotions was modified by synthesizing mask images into the lower portion of the face. Based on the quantitative evaluation, our dataset can improve the accuracy of CNN, VGG, and ResNet models’ emotion recognition in masked images by a maximum of 46%. Additionally, as a use case, we present an application that recognizes the emotions of a masked person, generates emotional face animation, and augments it on a real mask during video conferencing.

Immersive Visualization of Open Geospatial Data in Unreal Engine

Tristan King: University of Maryland, Baltimore County; Kyle R Davis: University of Maryland, Baltimore County; Bradley Saunders: University of Maryland, Baltimore County; James Zuber: University of Maryland, Baltimore County; Damaruka Priya Rajasagi: University of Maryland, Baltimore County; Christina Lukaszczyk: University of Maryland, Baltimore County; Anita Komlodi: University of Maryland, Baltimore County; Lee Boot: University of Maryland, Baltimore County

Immersive geospatial data visualizations are a growing area of interest for research and commercial efforts. However, access to the necessary technologies is often limited by the expense of proprietary solutions. Furthermore, available solutions do not offer the ability to select and render only portions of global maps, creating significant and unnecessary inefficiencies. We developed a pipeline for efficient generation, rendering, and interaction of open-source geospatial vector features in Unreal Engine™ Virtual Reality (VR) at almost any scale, and a user interface for interacting with these vector map features and georegistered data visualizations (as game objects) in VR applications.

A Robotic Arm-based Telepresence for Mixed-Reality Telecollaboration System

Booth: 42

Le Luo: Beijing Institute of Technology; Dongdong Weng: Beijing Institute of Technology; Jie Hao: Beijing Institute of Technology; Ziqi Tu: Beijing Institute of Technology; Bin Liang: China Software Testing Center; Haiyan Jiang: Beijing Institute of Technology

In mixed-reality telecollaboration, it is often difficult for remote users to actively and naturally control the viewpoint. We propose a viewpoint-controllable telepresence system with a robotic arm that carries a stereo camera in the local environment, so that the remote user can control the robotic arm by moving their head to actively and flexibly observe the local environment. Additionally, we built a mixed-reality telecollaboration prototype that adds an avatar and nonverbal cues for the remote user on the local user's side, enabling the remote user to better guide the local user. In this paper, we present our prototype, make design recommendations, and describe its configuration and implementation.

The Case Study of a Computational Model for Immersive Human-Process Interactions within Design Optimization

Davide Guzzetti: Auburn University

Virtual reality enables incorporating new cues and affordances into design optimization interfaces. Designers visually interact with optimization algorithms to examine solutions, define the search space, and present tradeoffs or outcomes. We hypothesize that additional dimensions of the human-process interface yield a higher probability of discovering the global optimal solution. To test this hypothesis, this work develops a computational model for visual-based process interactions in design optimization. The model is inspired by the Van Wijk model for visual analytics. Initial numerical results support the hypothesis.

Real-Time Augmented Reality Visual-Captions for Deaf and Hard-of-Hearing Children in Classrooms

Booth: 43

Jingya Li: Beijing Jiaotong University

Deaf and hard-of-hearing (DHH) children experience difficulties in mainstream classrooms because they cannot access audio information effectively. Although tools like captions have been developed specifically to assist DHH individuals, primary school children have trouble reading or understanding the text. To address this challenge, this paper develops AR-based visual captions that convert speech to text and images in real time and display the information around the teacher in the classroom. The AR visual captions aim to help DHH children better receive information and enhance their classroom experience. We describe the concept, design, and implementation of our prototype, and discuss future research directions.

Building Symmetrical Reality Systems for Cooperative Manipulation

Booth: 44

Zhenliang Zhang: Beijing Institute for General Artificial Intelligence

Humans and virtual agents (usually embodied in the physical world as physical robots) can coexist within a shared environment, which naturally creates a mixed reality environment spanning the physical and virtual worlds. In this paper, we analyze the characteristics of symmetrical reality with two different perception centers, and all mixed forms of physical and virtual reality can be treated as special cases of symmetrical reality. A clothes-folding experiment demonstrates the significance of symmetrical reality for human-robot interaction and illustrates the general pipeline for building a symmetrical reality system with two minds involved.

Audio to Deep Visual: Speaking Mouth Generation Based on 3D Sparse Landmarks

Booth: 45

Hui Fang: Beijing Institute of Technology; Dongdong Weng: Beijing Institute of Technology; Zeyu Tian: Beijing Institute of Technology; Zhen Song: The Central Academy of Drama

Having a system to automatically generate a talking mouth in sync with input speech would enhance speech communication and enable many novel applications. This article presents a new model that can generate 3D talking mouth landmarks from Chinese speech. We use sparse 3D landmarks to model the mouth motion, which are easy to capture and provide sufficient lip accuracy. The 4D mouth motion dataset was collected by our self-developed facial capture device, filling the gap in the Chinese speech-driven lip dataset. The experimental results show that the generated talking landmarks achieve accurate, smooth, and natural 3D mouth movements.

Design of the Seated Navigation for Immersive Lower Limb Exergame

Yu-Yen Chung: The University of Texas at Dallas; Thiru M. Annaswamy: Penn State Health Milton S. Hershey Medical Center; Balakrishnan Prabhakaran: The University of Texas at Dallas

Immersive lower limb exergames have been used to motivate physical training and rehabilitation. Because mobility is relatively constrained in a seated pose, interacting with distant objects using the legs can be challenging. To tackle this challenge, we describe the detailed design and implementation of Dual Point-Tugging (DPT) seated navigation. The DPT interface unifies translation and rotation, allowing users to refine their position and extend leg reachability in the immersive virtual environment. Also, by rearranging the arm position, the user can continuously perform swift translation around a spot in a comfortable and balanced pose.

Heat Metaphor for Attention Estimation for Educational VR

David Michael Broussard: University of Louisiana at Lafayette; Christoph W Borst: University of Louisiana at Lafayette

We prototype a technique for educational VR applications that estimates each student's level of attention in real time. Our system attaches scores to both students and objects, which change in response to eye-tracked gaze intersections. Compared to a simple angle-based approach, our system provides a dynamic and granular representation of object importance and frees the lesson designer from having to fully define objects of interest and timings. Our system takes into account the simultaneous behaviors of multiple students and filters out brief behavioral deviations of attentive students. The results may help a teacher or a virtual agent better guide students.
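
A minimal sketch of the general idea (scores that rise on eye-tracked gaze intersection and decay otherwise) is shown below; the update rule, rates, and object names are illustrative assumptions, not the authors' scoring model.

```python
RISE_RATE = 1.5    # score gained per second while an object is gazed at
DECAY_RATE = 0.4   # fractional decay per second applied otherwise

def update_attention(scores, gazed_object, dt):
    """Update per-object attention scores for one student. `scores` maps
    object id -> score; `gazed_object` is the object currently intersected
    by the student's gaze ray (or None)."""
    for obj in scores:
        if obj == gazed_object:
            scores[obj] += RISE_RATE * dt
        else:
            scores[obj] *= max(0.0, 1.0 - DECAY_RATE * dt)
    return scores

# Example: ten simulated frames with gaze mostly on a lesson object.
scores = {"engine_model": 0.0, "whiteboard": 0.0, "window": 0.0}
for frame in range(10):
    target = "engine_model" if frame % 3 != 0 else "window"
    update_attention(scores, target, dt=0.09)
print(scores)
```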

Tether-Handle Interaction for Retrieving Out-of-Range Objects in VR

David Michael Broussard: University of Louisiana at Lafayette; Christoph W Borst: University of Louisiana at Lafayette

Many VR applications allow users to grab and manipulate virtual objects, whether it be with a motion-tracked controller or a hand-tracked gestural input system. However, when objects move out-of-reach, standard grabbing interactions become less useful. Techniques for retrieving distant objects often use separate interaction metaphors from the main grabbing technique, such as pointing or hand extension. However, to avoid such metaphor switching, and to avoid cluttering a controller or gestural interface with multiple functions, we created a novel object retrieval technique that dynamically presents a grabbable handle tethered to a distant object.

Feeling of Control for Virtual Object Manipulation in Handheld AR

Booth: 46

Chenxin Wu: Xi’an Jiaotong-Liverpool University; Wenxin Sun: Xi’an Jiaotong-Liverpool University; Mengjie Huang: Xi’an Jiaotong-Liverpool University; Rui Yang: Xi’an Jiaotong-Liverpool University

One essential feature of handheld augmented reality (AR) is the ability to manipulate virtual objects using different interaction techniques, such as touch-based, mid-air, and tangible interactions. A critical factor in the user experience evaluation of AR systems is users' feeling of control. However, little is known in the literature about the feeling of control for virtual object manipulation in handheld AR. This study explores users' feeling of control across three interaction techniques via self-report and task performance. The findings reveal that users perceived a stronger feeling of control when applying the tangible interaction technique to manipulate virtual objects than with the other techniques in handheld AR.

TeleSteer: Combining Discrete and Continuous Locomotion Techniques in Virtual Reality

Booth: 47

Ziyue Zhao: Xi'an Jiaotong-Liverpool University; Yue Li: Xi'an Jiaotong-Liverpool University; Lingyun Yu: Xi'an Jiaotong-Liverpool University; Hai-Ning Liang: Xi'an Jiaotong-Liverpool University

Steering and teleporting are two common locomotion techniques in virtual reality (VR). Steering creates a strong sense of spatial awareness and immersion but tends to lead to cybersickness; teleporting mitigates cybersickness better but may lead to a loss of spatial awareness. Hence, we combined these two techniques and designed TeleSteer. This technique allows users to perform both steering and teleporting and to customize the controls. We argue that a combined use of discrete (e.g., teleporting) and continuous (e.g., steering) locomotion techniques is necessary for scenarios that require both free exploration and close-range interaction tasks, making TeleSteer a suitable alternative.

Virtual Reality Based Human-Computer Interaction System for Metaverse

Zhihan Lv: Uppsala University

This work aims to evaluate the performance and effect of a Human-Computer Interaction (HCI) system in the Metaverse with real-time interaction and immersion. It introduces a Metaverse with HCI functions including 3D face reconstruction, handshakes and hugs, and a multi-user synchronous editing environment, with all information stored on the blockchain. The performance of the HCI functions is also evaluated. The results demonstrate that the overall correct rate of the subjects using the HCI system is 95.45%. Users' social acceptance of VR interactive systems, action interaction usability, user emotions, and user satisfaction all show high scores. This work contributes to the performance evaluation and optimization of HCI systems in the Metaverse.

Memo:me, an AR Sticky Note With Priority-Based Color Transition and On-Time Reminder

Eunhwa Song: KAIST; Minju Baeck: KAIST; Jihyeon Lee: Korea Advanced Institute of Science and Technology; Seo Young Oh: KAIST; Dooyoung Kim: KAIST; Woontack Woo: KAIST ; Jeongmi Lee: KAIST; Sang Ho Yoon: KAIST

We propose Memo:me, an AR sticky note with priority-based color transitions and on-time reminders on smartphones. For the priority-based color transition, users can choose among three colors themselves, or the system changes the color automatically 10 minutes before the entered time. For the on-time reminder, Memo:me provides a visual notification and a sound at the designated time. Users can create virtual notes on planes or carry daily objects with virtual notes attached. We expect our system to help users manage their tasks in a time-appropriate manner.

Design and Evaluation of a VR Therapy for Patients with Mild Cognitive Impairment and Dementia: Perspectives from Patients and Stakeholders

Booth: 48

Ruiqi Chen: Duke Kunshan University; Shuhe Wang: Duke Kunshan University; Xuhai Xu: University of Washington; Lan Wei: Duke Kunshan University; Yuling Sun: East China Normal University; Xin Tong: Duke Kunshan University

Immersive virtual reality (VR) technology has shown great promise for cognitive therapy interventions for patients with Mild Cognitive Impairment (MCI) and Mild Dementia (MD). However, current VR applications mainly focus on task performance, ignoring the significant value of other stakeholders (caregivers and therapists) and their roles in MCI/MD patients' therapies. We designed a VR way-finding cognitive task and evaluated its usability and effectiveness by interviewing both MCI/MD patients and stakeholders. Findings suggest that the interventions of stakeholders can improve participants' performance. In addition, we identified several significant factors in designing VR cognitive tasks for patients with MCI/MD.

A Lightweight Wearable Multi-joint Force Feedback for High Definition Grasping in VR

Booth: 49

Nicha Vanichvoranun: Korea Advanced Institute of Science and Technology(KAIST); Sang Ho Yoon: KAIST

Prior research has developed kinesthetic gloves that render force only on the distal interphalangeal (DIP) joints and often require complex mechanisms. We propose a novel design for a wearable haptic device that provides kinesthetic feedback to all finger joints to promote precise grasping in VR. We employ electrostatic-based force feedback to form a clutch mechanism that we extend to all finger joints. Using the electrostatic clutch allows us to maintain a thin and light form factor for the prototype. Our proposed method supports a blocking force of up to 30 N per joint.

The Exploration and Evaluation of Generating Affective 360° Panoramic VR Environments Through Neural Style Transfer

Booth: 50

Yanheng Li: City University of Hong Kong; Long Bai: The Chinese University of Hong Kong; Yaxuan MAO: City University of Hong Kong; Xuening Peng: Duke Kunshan University; Zehao Zhang: University of Waterloo; Xin Tong: Duke Kunshan University; RAY LC: City University of Hong Kong

Affective virtual reality (VR) environments with varying visual styles can affect users' valence and arousal responses. We applied Neural Style Transfer (NST) to generate 360° VR environments that elicited varied valence and arousal responses. In a user study with 30 participants, findings suggested that the generated VR environments changed participants' arousal responses but not their valence levels. The generated visual features, e.g., textures and colors, also altered participants' affective perceptions. Our work contributes novel insights into how users respond to generative VR environments and provides a strategy for creating affective VR environments without altering content.

Catch my Eyebrow, Catch my Mind: Examining the Effect of Upper Facial Expressions on Emotional Experience for VR Avatars

Booth: 51

Xin Yi: Tsinghua University; Ziyu Han: Carnegie Mellon University; Xinge Liu: Tsinghua University; Yutong Ren: Duke Kunshan University; Xin Tong: Duke Kunshan University; Yan Kong: CS; Hewu Li: Tsinghua University

Capturing the movement of the upper face in social VR is challenging due to the head-mounted display's occlusion, which hinders accurate emotional expression. In this paper, we conducted two user studies to examine the effect of upper facial expressions on users' ability to recognize the emotions of VR avatars. Study 1 found that, compared with the eyelids, restricting the movement of the eyebrows made it harder for users to recognize the expressed emotions of 3D avatars. Study 2 further tested different movement amplitudes of the upper facial expressions, and results showed that the most realistic movement amplitude was optimal when balancing the trade-off between emotion recognition accuracy and the Uncanny Valley effect.

Letting It Go: Four Design Concepts to Support Emotion Regulation in Virtual Reality

Nadine Wagener: University of Bremen; Johannes Schöning: University of St. Gallen; Yvonne Rogers: UCL ; Jasmin Niess: University of St. Gallen

Depicting emotions, both in reality (e.g. art therapy) and in virtual reality (VR), is an established method for emotion regulation (ER), promoting reflection, behaviour change and mental well-being. However, the specific ways in which users engage with and process negative emotions in VR remains unclear. In this study, we conducted expert interviews with psychotherapists and collaboratively identified design requirements for VR interventions that support the processing of negative emotions. Our findings highlight the potential of VR to facilitate the transition from negative to positive experiences. Based on these findings, we propose specific design concepts for using VR as positive technology for emotion regulation.

Ten years of Immersive VR Installations - Past, Present and Future

Elisabeth Mayer: Leibniz Supercomputing Centre, Bavarian Academy of Sciences and Humanities; Rubén Jesús García-Hernández: Ludwig Maximilians Universität München; Daniel Kolb: Leibniz Supercomputing Centre; Jutta Dreer: Leibniz Supercomputing Centre, Bavarian Academy of Science and Humanities; Simone Mueller: Leibniz Supercomputing Centre; Thomas Odaker: Leibniz Supercomputing Centre; Dieter August Kranzlmüller: Bavarian Academy of Sciences and Humanities

Virtual Reality (VR) has found application in many fields including art history, education, research, and smart industry. Immersive 3D screens, large-scale displays, and CAVE systems are time-tested VR installations in research and scientific visualization. In this paper, we present learnings and insights from ten years of operating and maintaining a visualization center with large scale immersive displays and installations. Our report focuses on the installations themselves as well as the various developments of the center over time. In addition, we discuss the advantages, challenges and future development of a location-based VR center.

A Divide-and-conquer Solution to 3D Human Motion Estimation from Raw MoCap Data

Booth: 52

Jilin Tang: NetEase Fuxi AI Lab; Lincheng Li: NetEase Fuxi AI Lab; Jie Hou: Netease Fuxi AI Lab; Haoran Xin: Netease Fuxi AI Lab; Xin Yu: Faculty of Engineering and Information Technology

Marker-based optical motion capture (MoCap) aims to estimate 3D human motions from a sequence of input raw markers. In this paper, we propose a divide-and-conquer strategy based MoCap solving network that accurately retrieves 3D human skeleton motions from raw marker sequences in real-time. Our core idea is to decompose the task of direct estimation of global human motion from all markers into first solving sub-motions of local parts and then aggregating sub-motions into a global one to achieve accurate motion estimation.

Eye-tracked Evaluation of Subtitles in Immersive VR 360 Video

Booth: 53

Marta Brescia-Zapata: Universitat Autònoma de Barcelona; Pilar Orero: Universitat Autònoma de Barcelona; Andrew Duchowski: Clemson University; Chris J Hughes: Salford University; Krzysztof Krejtz: SWPS University of Social Sciences and Humanities

This paper presents an analysis of eye-tracked visual attention to subtitles within immersive media. The study relies on an implementation of 360° video rendering with eye movements recorded in Virtual Reality. Two characteristics of immersive subtitles (position and color) are compared and then evaluated in terms of perceived task load and cognitive processing of the content. The results are intriguing, showing that head-locked subtitles afford more focal visual inspection of the scene and presumably better comprehension. This type of in-depth analysis would not be possible without the eye movement analyses.

Learning Detailed 3D Face via CLIP Model from Monocular Image

Pengfei Zhou: Shandong University of Science and Technology; Yongtang Bao: Shandong University of Science and Technology; Yue Qi: Beihang University

Methods based on 3D morphable face models (3DMMs) cannot accurately estimate facial expressions and geometric details. We propose a framework for regressing 3D facial expressions and geometric details to address this problem. First, we propose a parameter refinement module to learn rich feature representations. Second, a novel feature consistency loss during training is designed, which exploits the powerful representation ability of CLIP (Contrastive Language-Image Pretraining) to capture facial expressions and geometric details. Finally, we leverage text-guided expression-specific transfer for 3D face reconstruction. Our method achieves strong performance in terms of reconstructed expressions and geometric details.

Motion Analysis and Reconstruction of Human Joint Regions from Sparse RGBD Images

Tianzhen Dong: Shanghai Institute of Technology; Yuntao Bai: Shanghai Institute of Technology; Qing Zhang: Shanghai Institute of Technology; Yi Zhang: Shanghai Institute of Technology

We propose a motion deformation analysis framework that can reconstruct the joint regions of a dynamic human using only extremely sparse RGBD images. In the framework, we analyze the deformation mechanism of a joint region from the perspective of its internal material. The non-rigid deformation of a joint region is transformed into the elastic deformation of multiple elements by introducing the Finite Element Method. Considering the influence of joint positions on the motion deformation analysis, we optimize the joint positions obtained from a skeleton-tracking prior. Experimental results verify the rationality and effectiveness of our framework.

Assessing Individual Decision-Making Skill by Manipulating Predictive and Unpredictive Cues in Virtual Baseball Batting Environment

Yuhi Tani: Keio University; Akemi Kobayashi: NTT Communication Science Laboratories; Katsutoshi Masai: NTT Communication Science Laboratories; Takehiro Fukuda: NTT Communication Science Laboratories; Maki Sugimoto: Keio University; Toshitaka Kimura: NTT Communication Science Laboratories

We propose a virtual reality (VR) baseball batting system for assessing individual decision-making skill based on swing judgement of pitch types and the underlying prediction ability by manipulating combinations of pitching motion and ball trajectory cues. Our analysis of data from 10 elite baseball players revealed highly accurate swing motions in conditions during which the batter made precise swing decisions. Delays in swing motion were observed in conditions during which predictive cues were mismatched. Our findings indicated that decision-making based on pitch type influences the inherent stability of decision and accuracy of hitting, and that most batters made decisions based on pitching motion cues rather than on ball trajectory.

Interactive Panoramic Ray Tracing for Mixed 360° RGBD Videos

Booth: 54

Jian Wu: Beihang University; Lili Wang: Beihang University; Wei Ke: Polytechnic University

This paper introduces an interactive panoramic ray tracing method for rendering photo-realistic illumination and shadow effects in real-time when inserting virtual objects into 360° RGBD videos. A sparse sampling ray generation method is proposed to accelerate the tracing process by reducing the number of rays that need to be emitted in ray tracing. We tested our method in some natural and synthetic scenes. The results show that our method can generate visually photo-realistic frames for virtual objects in 360° RGBD videos in real-time, making the rendering results more natural and credible.

Towards Trustworthy Augmented Reality in the Metaverse Era: Probing Manipulative Designs in Virtual-Physical Commercial Platforms

Booth: 55

Esmée Henrieke Anne de Haas: KAIST (Korea Advanced Institute of Science and Technology); Yiming Huang: Hong Kong University of Science and Technology, Korea Advanced Institute of Science and Technology; Carlos Bermejo Fernandez: Hong Kong University of Science and Technology; Zijun Lin: London School of Economics and Political Science; Pan Hui: The Hong Kong University of Science and Technology; Lik-Hang Lee: The Hong Kong Polytechnic University

E-commerce has become an important activity where new advances in technology shape the shopper experience. At the same time, the metaverse is seen as the next milestone to revolutionize the e-commerce experience, where immersion, realism, and ubiquity are its main features. However, under such circumstances, manipulative designs to `trick' users toward intended choices or outcomes can become more effective. This paper sheds light on the design space of manipulative techniques in e-commerce applications for the metaverse, reinforcing our understanding of interface design guidelines and counteracting malicious practices.

Reconstruction of Human Body Pose and Appearance Using Body-Worn IMUs and a Nearby Camera View for Collaborative Egocentric Telepresence

Booth: 56

Qian Zhang: University of North Carolina at Chapel Hill; Akshay Paruchuri: University of North Carolina at Chapel Hill; YoungWoon Cha: Gachon University; Jiabin Huang: University of Maryland; Jade Kandel: UNC Chapel Hill; Howard Jiang: University of North Carolina at Chapel Hill; Adrian Ilie: University of North Carolina at Chapel Hill; Andrei State: University of North Carolina at Chapel Hill; Danielle Albers Szafir: University of North Carolina-Chapel Hill; Daniel Szafir: University of North Carolina at Chapel Hill; Henry Fuchs: University of North Carolina at Chapel Hill

We envision a future in which telepresence is available to users anytime and anywhere, enabled by sensors and displays embedded in accessories worn daily, such as watches, jewelry, belt buckles, shoes, and eyeglasses. We present a collaborative approach to 3D reconstruction that combines a set of IMUs worn by a target person with an external view from another nearby person wearing an AR headset, used for estimating the target person’s body pose and reconstructing their appearance, respectively.

Multi-Agent Reinforcement Learning for Visual Comfort Enhancement of Casual Stereoscopic Photography

Booth: 57

Yuzhong Chen: Fuzhou University; Qijin Shen: Fuzhou University; Yuzhen Niu: Fuzhou University; Wenxi Liu: Fuzhou University

The goal of casual stereoscopic photography is to allow ordinary users to create a stereoscopic photo using two photos taken casually by a monocular camera. In this poster, we propose a multi-agent reinforcement learning framework for visual comfort enhancement of casual stereoscopic photography. Each agent calculates a homography matrix based on the positions of four corners before and after the offsets. Furthermore, we propose a hierarchical stereo transformer based on window attention, which enhances and fuses multiscale correlations between left and right views. Experimental results show that our proposed method achieves superior performance to the state-of-the-art methods.
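
As background for the corner-offset formulation: given the four predicted per-corner offsets, the corresponding homography follows in closed form from the four point correspondences. The sketch below uses OpenCV's standard routine with hypothetical offsets; it is not the authors' reinforcement learning agent.

```python
import numpy as np
import cv2

def homography_from_corner_offsets(width, height, offsets):
    """Homography implied by moving the four image corners by the given
    per-corner (dx, dy) offsets, e.g. as predicted by an agent."""
    src = np.float32([[0, 0], [width, 0], [width, height], [0, height]])
    dst = src + np.float32(offsets)
    return cv2.getPerspectiveTransform(src, dst)

# Hypothetical offsets (pixels) for a 640x480 view.
H = homography_from_corner_offsets(640, 480,
                                    [[3, -2], [-5, 1], [2, 4], [-1, -3]])
print(H)
# One view could then be warped with: cv2.warpPerspective(image, H, (640, 480))
```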

A Virtual Reality System for the Assessment of Patients with Lower Limb Rotational Abnormalities

David Sibrina: Durham University; Sarath Bethapudi MBBS, MRCP, FRCR: University Hospital of North Durham; George Alex Koulieris: Durham University

Rotational lower limb abnormalities cause patellar mal-tracking, which impacts young patients. Repetitive patellar dislocation may require knee arthroplasty. Surgeons employ CT to identify rotational abnormalities and make surgical decisions. Recent studies have also demonstrated that immersive 3D visualisation is preferred when examining 3D volumes of patients' data. We describe a prototype VR system that allows orthopaedic surgeons to assess patients' lower limb anatomy in an immersive three-dimensional environment and simulate the effects of surgical procedures such as corrective osteotomies on a standalone VR headset. Preliminary results show an increased understanding of patients' specific anatomy and predicted surgery outcomes.

Descriptive Linguistic Patterns of Group Conversations in VR

Cyan DeVeaux: Stanford University; David Matthew Markowitz: University of Oregon; Eugy Han: Stanford University; Mark Roman Miller: Stanford University; Jeff Hancock: Stanford University; Jeremy Bailenson: Stanford University

Although talking is one of the most common activities in social VR, there is little empirical work identifying what people say and how they communicate in immersive, virtual settings. The current paper addresses this opportunity by performing automated text analysis on over 4,800 minutes of in-VR, small group conversations. These conversations took place over the span of two months during a university course where 171 students attended discussion sections via head-mounted displays. We provide a methodology for analyzing verbal communication along two dimensions: content and structure. We implement methods to describe linguistic patterns from the class and introduce a preliminary VR Dictionary.

Inducing joint attention between users by visual guidance with blur effects

Booth: 58

Nikolaos Chatziantoniou: The University of Tokyo; Akimi Oyanagi: The University of Tokyo; Kenichiro Ito: The University of Tokyo; Kazuma Aoyama: The University of Tokyo; Hideaki Kuzuoka: The University of Tokyo; Tomohiro Amemiya: The University of Tokyo

Attracting a learner’s attention towards desired regions is imperative for immersive training applications. However, direct view manipulation can induce cybersickness, and overt visual cues decrease immersion and distract from the training process. We propose and evaluate a technique to subtly and effortlessly guide the follower's visual attention by blurring their field of view based on the leader's head orientation. We compared the performance of our technique against a cross-shaped head-gaze indicator in moving-viewport alignment and object-searching tasks. Results suggest that our technique can prompt joint attention between users in shared-perspective virtual reality systems, with less induced cybersickness, workload, and distraction.
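
A minimal sketch of the blur-guidance idea: blur strength grows with the angular deviation between the follower's viewing direction and the leader's head orientation. The thresholds and maximum blur below are illustrative assumptions rather than the evaluated parameters.

```python
import numpy as np

MAX_BLUR_PX = 12.0     # strongest blur at the periphery (illustrative)
CLEAR_CONE_DEG = 10.0  # cone around the leader's focus left unblurred
FULL_BLUR_DEG = 45.0   # deviation at which blur saturates

def blur_radius(view_dir, leader_head_dir):
    """Blur strength (pixels) for a viewing direction in the follower's field
    of view, based on its angle to the leader's head orientation."""
    a = np.asarray(view_dir, float); a = a / np.linalg.norm(a)
    b = np.asarray(leader_head_dir, float); b = b / np.linalg.norm(b)
    angle = np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))
    t = (angle - CLEAR_CONE_DEG) / (FULL_BLUR_DEG - CLEAR_CONE_DEG)
    return MAX_BLUR_PX * float(np.clip(t, 0.0, 1.0))

print(blur_radius([0, 0, 1], [0.3, 0.0, 0.95]))  # small deviation -> mild blur
print(blur_radius([0, 0, 1], [1.0, 0.0, 0.0]))   # large deviation -> full blur
```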

Predicting the Light Spectrum of Virtual Reality Scenarios for Non-Image-Forming Visual Evaluation

Yitong Sun: Royal College of Art; Hanchun Wang: Imperial College London; Pinar Satilmis: Birmingham City University; Narges Pourshahrokhi: Royal College of Art; Carlo Harvey: Birmingham City University; Ali Asadipour: Royal College of Art

Virtual reality (VR) headsets, while providing realistic simulated environments, are also over-stimulating the human eye, particularly for the Non-Image-Forming (NIF) visual system. Therefore, it is crucial to predict the spectrum emitted by the VR headset and to perform light stimulation evaluations during the virtual environment construction phase. We propose a framework for spectrum prediction of VR scenes only by importing a pre-acquired optical profile of the VR headset. It is successively converted into "Five Photoreceptors Radiation Efficacy" (FPRE) maps and the "Melanopic Equivalent Daylight Illuminance" (M-EDI) value to visually predict the detailed stimulation of virtual scenes to the human eye.
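
For orientation, a melanopic quantity of the kind mentioned above is obtained by weighting a spectral power distribution with the melanopic sensitivity function and relating the result to D65. The sketch below assumes tabulated sensitivity values are available and uses a commonly cited conversion constant; the placeholder curves are illustrative, and the authors' FPRE/M-EDI pipeline may differ.

```python
import numpy as np

# Commonly cited melanopic irradiance of D65 at 1 lx (mW/m^2), per CIE S 026.
MEL_D65_PER_LUX_MW = 1.3262

def melanopic_edi(wavelengths_nm, spd_w_per_m2_nm, s_mel):
    """Approximate melanopic equivalent daylight illuminance (lx) from a
    spectral power distribution and the melanopic sensitivity s_mel(lambda),
    both sampled at `wavelengths_nm`."""
    mel_irradiance_w = np.trapz(spd_w_per_m2_nm * s_mel, wavelengths_nm)
    return (mel_irradiance_w * 1000.0) / MEL_D65_PER_LUX_MW

# Placeholder spectrum of a blue-heavy display and a placeholder sensitivity
# curve; real values should come from measured/tabulated data.
wl = np.arange(380, 781, 5)
spd = 1e-3 * np.exp(-((wl - 460) / 25.0) ** 2)
s_mel = np.exp(-((wl - 490) / 40.0) ** 2)
print(f"approx. M-EDI: {melanopic_edi(wl, spd, s_mel):.1f} lx")
```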

Offloading Visual-Inertial Odometry for Low Power Extended Reality

Booth: 59

Qinjun Jiang: UIUC; Muhammad Huzaifa: UIUC; William Sentosa: UIUC; Jeffrey Zhang: UIUC; Steven Gao: UIUC; Yihan Pang: UIUC; Brighten Godfrey: UIUC; Sarita Adve: UIUC

Achieving the full potential of extended reality (XR) requires comfortable all-day wear devices that provide immersive experiences. Unfortunately, power consumption is a constraint in designing such devices. This work reduces power consumption due to head tracking, one of the top power consumers, by offloading it to a remote server. We present a novel end-to-end XR system with visual-inertial odometry (VIO) offloaded to a remote server. Despite data compression and network latency, the system produces an end-to-end XR experience comparable to a system without offloading, while providing significant power savings. This latency and bandwidth-resilient nature of head tracking is a new and surprising result.

A Commonsense Knowledge-based Object Retrieval Approach for Virtual Reality

Booth: 60

Haiyan Jiang: Beijing Institute of Technology; Dongdong Weng: Beijing Institute of Technology; Xiaonuo Dongye: Beijing Institute of Technology; Nan Zhang: China North Advanced Technology Generalization Institute; Le Luo: Beijing Institute of Technology

Out-of-reach object retrieval is an important task in many virtual reality applications, and hand gestures have been widely studied for it. We propose a grasping-gesture-based retrieval approach for out-of-reach objects based on a graphical model called And-Or graphs (AOG), leveraging scene-object occurrence, object co-occurrence, and human grasp commonsense knowledge. This approach enables users to acquire objects by using natural grasping gestures based on their experience of grasping physical objects. Importantly, users can perform the same grasping gesture for different virtual objects and different grasping gestures for one virtual object in the virtual environment.

An Augmented Reality Application and User Study for Understanding and Learning Architectural Representations

Ziad Ashour: King Fahd University of Petroleum and Minerals; Zohreh Shaghaghian: PassiveLogic; Wei Yan Ph.D.: Texas A&M University

This research employed BIMxAR, a Building Information Modeling-enabled AR educational tool with novel visualization features to support learning and understanding of construction systems, material configurations, and 3D section views of complex building structures. We validated the research through a test case based on a quasi-experimental design, in which BIMxAR was used as an intervention. Two study groups were employed: non-AR and AR. The learning gain differences within and between the groups were not statistically significant; however, the AR group perceived significantly less workload and higher performance compared to the non-AR group. These findings suggest that the AR version is an easy, useful, and convenient learning tool.

Embodiment and Personalization for Self-Identification with Virtual Humans

Booth: 61

Marie Luisa Fiedler: University of Würzburg; Erik Wolf: University of Würzburg; Nina Döllinger: University of Würzburg; Mario Botsch: TU Dortmund University; Marc Erich Latoschik: University of Würzburg; Carolin Wienrich: University of Würzburg

Our work investigates the impact of virtual human embodiment and personalization on the sense of embodiment (SoE) and self-identification (SI). We introduce preliminary items to query self-similarity (SS) and self-attribution (SA) with virtual humans as dimensions of SI. In our study, 64 participants successively observed personalized and generic-looking virtual humans, either as embodied avatars in a virtual mirror or as agents, while performing tasks. They reported significantly higher SoE and SI when facing personalized virtual humans and significantly higher SoE and SA when facing embodied avatars, indicating that both factors have strong, separate, and complementary influences on SoE and SI.

Smooth Hand Preshaping for Visually Realistic Grasping in Virtual Reality

Tangui Marchand-Guerniou: Orange Labs; Maxime Jouin: Orange Labs; Jérémy Lacoche: Orange Labs

In virtual reality, grasping interaction techniques that use controllers often lack a smooth transition between the neutral hand pose and the one that fits the topology of the grabbed object. This transition is referred to as preshaping. We explore the impact on embodiment and task speed of progressively and automatically adapting the fingers' closure of the hand when it gets close to an object by anticipating the user's intention. A comparison with existing methods suggests that this approach negatively impacts the sense of agency but does not impact task speed, body ownership, and interaction and finger movement realism.
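
A minimal sketch of proximity-driven preshaping, assuming a per-joint closure representation: finger closure blends from the neutral pose toward the object-specific grasp pose as the hand approaches. Distances, joint count, and poses below are illustrative, not the evaluated design.

```python
import numpy as np

START_DIST_M = 0.20   # distance at which preshaping begins (illustrative)
GRASP_DIST_M = 0.02   # distance at which the grasp pose is fully reached

def preshape_closure(neutral_pose, grasp_pose, hand_to_object_dist):
    """Blend per-joint finger closure between the neutral pose and the
    object-specific grasp pose based on hand-object distance."""
    t = np.clip((START_DIST_M - hand_to_object_dist)
                / (START_DIST_M - GRASP_DIST_M), 0.0, 1.0)
    return (1.0 - t) * np.asarray(neutral_pose) + t * np.asarray(grasp_pose)

neutral = np.zeros(15)      # e.g., 15 joint closure values in [0, 1]
grasp = np.full(15, 0.7)    # closure that fits the grabbed object's topology
print(preshape_closure(neutral, grasp, hand_to_object_dist=0.10))
```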

F2RPC: Fake to Real Portrait Control from a Virtual Character

Seoungyoon Kang: Korea Advanced Institute of Science and Technology; Minjae Kim: NCSOFT; Hyunjung Shim: Korea Advanced Institute of Science and Technology

This study explores a new method for generating virtual characters in video for use in virtual and augmented reality services. We propose F2RPC as a unified framework for improving both the appearance and motion of the characters, unlike previous methods that focus on only one or the other. Specifically, F2RPC consists of two modules for image destylization (AdaGPEN) and face reenactment (PCGPEN), respectively. Experimental results show that our method successfully solves the task compared to the combination of state-of-the-art destylization and reenactment methods.

ARPad: Compound Spatial-Semantic Gesture Interaction in Augmented Reality

Booth: 62

Yang Zhou: Nanjing University; Jie Liu: Institute of Software, Chinese Academy of Sciences; Peixin Yang: Nanjing University; Xinchi Xu: Nanjing University; Bingchan Shao: Nanjing University; Guihuan Feng: Nanjing University; Bin Luo: Nanjing University

In this paper, we propose the concept of compound spatial-semantic gesture interaction, in which spatial semantics is defined as the combination of world-level and object-level spatial semantics. The semantics of a compound spatial-semantic gesture are determined by both the gesture and its spatial location, which can increase the interaction bandwidth of gestures and improve the user experience. We present three typical compound spatial-semantic gesture application scenarios. Finally, the compound spatial-semantic concept can be applied to other interaction channels, which provides a feasible and general implementation method for expanding interaction in Augmented Reality (AR) scenarios.

Magic, Superpowers, or Empowerment? A Conceptual Framework for Magic Interaction Techniques

Bastian Dewitz: Universität Hamburg; Sukran Karaosmanoglu: Universität Hamburg; Robert W. Lindeman: University of Canterbury; Frank Steinicke: Universität Hamburg

This poster presents an approach to systematically distinguish between interaction techniques (ITs) in the context of magic ITs in immersive virtual environments. Currently, heterogeneous terms are used in research to describe the concept of enhancing the abilities of users beyond the limits of the real world, such as magic, super-natural, hyper-natural, superhuman abilities, superpowers, augmentation or empowerment. As a first step towards clarifying and systematically defining the terminology, we propose using the orthogonal concepts of interalizability, congruence, and enhancement (or ICE-cube) as a simple yet expressive conceptual framework.

Mixed Reality Guided Museum Tour: Digital Enhancement of Museum Experience

Soko Aoki: Kadinche; Naoki Itabashi: Kadinche; Ripandy Adha: Kadinche; Shinichi Sameshima: Kadinche; Yuki Kinoshita: Domain; Takeo Kotoku: Mizuki Shigeru Museum

Museums as architectural structures rely on physical objects, which limit the content of their exhibits and the frequency of updates. The aim of this research was to create a richer museum visit experience by adding digital commentary and presentation to existing exhibits through a mixed reality-based guided tour system. We used headsets and a visual positioning system to design content that enables a deeper understanding of the exhibits and an enjoyable experience. We found that more than 80 percent of participants in our experiment agreed on the effectiveness of the system and expressed willingness to pay additional experience fees.

Session/Server 3

Influence of Simulated Aerodynamic Forces on Weight Perception and Realism in Virtual Reality

Marvin Winkler: TH Köln (University of Applied Sciences); Stefan Michael Grünvogel: TH Köln

In this research, the impact of aerodynamics on the perception of weight in virtual environments was studied. A real-time aerodynamic simulation was created for objects with different shapes and weights, and an experiment was conducted to determine the correlation between subjects’ weight perception and the presence of simulated aerodynamic forces on virtual objects. The results showed that the weight of light objects was more accurately conveyed and the perceived realism was improved when simulated aerodynamic forces were included.
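For readers curious what such a simulated aerodynamic force might look like, the sketch below applies the standard quadratic drag formula F = 0.5 · ρ · Cd · A · |v|², directed against the velocity; the drag coefficient and air density are generic values, not the parameters used in the study.

```python
# Sketch of a quadratic-drag force of the kind such a simulation might apply;
# the drag coefficient and air density are generic values, not the paper's parameters.
import numpy as np

def aerodynamic_drag(velocity, area, drag_coeff=1.0, air_density=1.225):
    """F_drag = 0.5 * rho * Cd * A * |v|^2, directed opposite to the velocity."""
    speed = np.linalg.norm(velocity)
    if speed == 0.0:
        return np.zeros(3)
    direction = -velocity / speed
    return 0.5 * air_density * drag_coeff * area * speed**2 * direction

v = np.array([0.0, -2.0, 0.0])          # object falling at 2 m/s
print(aerodynamic_drag(v, area=0.05))   # upward drag on a light, flat object
```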

Medical Visualizations with Dynamic Shape and Depth Cues

Booth: 63

Alejandro Martin-Gomez: Johns Hopkins University; Felix Merkl: University Hospital, LMU Munich; Alexander Winkler: Technical University of Munich; Christian Heiliger: Ludwig-Maximilians-University Hospital; Sebastian Andress: Ludwig-Maximilians-Universität München; Tianyu Song: Technical University of Munich; Ulrich Eck: Technische Universitaet Muenchen; Konrad Karcz: Ludwig-Maximilians-University Hospital; Nassir Navab: Technische Universität München

Navigated surgery enables physicians to perform complex tasks assisted by virtual representations of surgical tools and anatomical structures visualized using intraoperative medical images. Integrating Augmented Reality (AR) in these scenarios enriches the virtual information presented to the surgeon by utilizing a wide range of visualization techniques. In this work, we introduce a novel approach to conveying additional depth and shape information of the augmented content using dynamic visualization techniques. Compared to existing works, these techniques allow users to gather information not only from pictorial but also from kinetic depth cues.

VRScroll: A Shape-Changing Device for Precise Sketching in Virtual Reality

Booth: 64

Wen Ying: University of Virginia; Seongkook Heo: University of Virginia

Studies have shown that incorporating physical surfaces in VR can improve sketching performance by allowing users to feel the contact and rest their pens on it. However, using physical objects or devices to reproduce the diverse shapes of virtual model surfaces can be challenging. We propose VRScroll, a shape-changing device that physically renders the shape of a virtual surface for users to sketch on it using a pen. By providing users with passive haptic feedback and constrained movement, VRScroll has the potential to facilitate precise sketching on virtual surfaces.

Extending the Metaverse: Hyper-Connected Smart Environments with Mixed Reality and the Internet of Things

Jie Guan: OCAD University; Alexis Morris: OCAD University; Jay Irizawa: OCAD University

The metaverse, the collection of technologies that provide a virtual twin of the real world via mixed reality, internet of things, and others, is gaining prominence. However, the metaverse faces challenges as it grows toward mainstream adoption. Among these is the lack of strong connections between metaverse objects and traditional physical objects and environments. This work explores a framework for bridging the physical environment and the metaverse with internet-of-things objects and mixed reality designs. The contributions include: i) an architectural framework for extending the metaverse, ii) design prototypes using the framework. Together, this exploration advances toward a more cohesive, hyper-connected metaverse smart environment.

A simulation study investigating a novel method for emotion transfer between virtual humans

Samad Roohi: La Trobe University; Richard Skarbez: La Trobe University

Affective – that is, emotionally aware – interactions with virtual humans contribute to improved user experience in both immersive and non-immersive virtual worlds. Some important open problems in this area are how to generate plausible moment-to-moment emotions in virtual humans, how to enable and model the transfer of emotions between multiple virtual humans, and how these emotions change and are changed by the environment. Motivated by these open questions, this paper presents a simulation study for the computational models of emotion and its shift and transference between characters.

Pedestrian Behavior Interacting with Autonomous Vehicles: Role of AV Operation and Signal Indication and Roadway Infrastructure

Fengjiao Zou: Clemson University; Jennifer Ogle: Clemson University; Weimin Jin: Arcadis U.S., Inc.; Patrick Gerard: Clemson University; Daniel D Petty: 6D Simulations; Andrew Robb: Clemson University

Interacting with pedestrians is challenging for autonomous vehicles (AVs). This study evaluates how AV operations, associated signaling, and roadway infrastructure affect pedestrian behavior in virtual reality. AVs were designed with different operations and signal indications, including negotiating with no signal, negotiating with a yellow signal, and yellow/blue negotiating/no-yield indications. Results show that the AV signal significantly impacts pedestrians' accepted gap, walking time, and waiting time. Pedestrians chose the largest open gap between cars with the AV showing no signal, and had the slowest crossing speed with the AV showing a yellow signal indication. Roadway infrastructure affects pedestrian walking time and waiting time.

Global Physical Prior Based Smoke Reconstruction for AR

Booth: 65

Qifan Zhang: Beihang University; Shibang Xiao: Beihang University; Yunchi Cen: Beihang University; Jing Han: Beihang University; Xiaohui Liang: Beihang University

Fluid is a common natural phenomenon and often appears in various VR/AR applications. Several works use sparse view images and integrate physical priors to improve reconstruction results. However, existing works only consider physical priors between adjacent frames. In our work, we propose a differentiable fluid simulator combined with a differentiable renderer for fluid reconstruction, which can make full use of global physical priors among long series. Furthermore, we introduce divergence-free Laplacian eigenfunctions as velocity bases to improve efficiency and save memory. We demonstrate our method on both synthetic and real data and show that it can produce better results.

A Subjective Quality Assessment of Temporally Reprojected Specular Reflections in Virtual Reality

Martin Mišiak: University of Würzburg; Arnulph Fuhrmann: TH Köln; Marc Erich Latoschik: University of Würzburg

Temporal reprojection is a popular method for mitigating sampling artifacts from a variety of sources. This work investigates its impact on the subjective quality of specular reflections in Virtual Reality (VR). Our results show that temporal reprojection is highly effective at improving the visual comfort of specular materials, especially at low sample counts. A slightly diminished effect could also be observed in improving the subjective accuracy of the resulting reflection.

Exploring Quantitative Assessment of Cybersickness in Virtual Reality Using EEG Signals and a CNN-LSTM Network

Booth: 66

Mutian Liu: School of Mechatronic Engineering and Automation, Shanghai University; Banghua Yang: School of Mechatronic Engineering and Automation, Shanghai University; Mengdie Xu: School of Mechatronic Engineering and Automation, Shanghai University; Peng Zan: School of Mechatronic Engineering and Automation, Shanghai University; Xinxing Xia: Shanghai University

Cybersickness in the virtual reality (VR) environment poses a significant challenge to the user experience. In this paper, we evaluate VR cybersickness quantitatively based on electroencephalography (EEG). We designed a novel experimental paradigm to generate cybersickness based on the virtual rotation of the user's head and collected the corresponding EEG signals. A novel deep learning framework combining a convolutional neural network (CNN) with long short-term memory (LSTM) was proposed to predict cybersickness from the EEG signals. With the proposed method, a meaningful evaluation of cybersickness is achieved. We further investigated the brain frequency-domain physiological characteristics at different levels of cybersickness.
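A minimal sketch of a CNN-LSTM classifier over windowed EEG, in the spirit of the framework described; the channel count, layer sizes, and number of sickness levels are illustrative assumptions, not the authors' architecture.

```python
# Minimal PyTorch sketch of a CNN-LSTM classifier over windowed EEG.
# Layer sizes, channel counts, and class count are illustrative only.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=32, n_classes=3):
        super().__init__()
        # temporal convolution over each EEG window (batch, channels, time)
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # LSTM over the pooled time axis
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, channels, time)
        f = self.conv(x)             # (batch, 64, time/4)
        f = f.transpose(1, 2)        # (batch, time/4, 64) for the LSTM
        out, _ = self.lstm(f)
        return self.head(out[:, -1]) # classify from the last hidden state

model = CNNLSTM()
logits = model(torch.randn(8, 32, 512))   # 8 windows, 32 channels, 512 samples
print(logits.shape)                       # torch.Size([8, 3])
```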

An Immersive Labeling Method for Large Point Clouds

Booth: 67

Tianfang Lin: TU Dresden; Zhongyuan Yu: TU Dresden; Nico Volkens: TU Dresden; Matthew McGinity: Technische Universität Dresden; Stefan Gumhold: TU Dresden

3D point clouds often require accurate labeling and semantic information. However, in the absence of fully automated methods, such labeling must be performed manually, which can prove extremely time and labour intensive. To address this, we propose a novel hybrid CPU/GPU-based algorithm allowing instantaneous selection and modification of points supporting very large point clouds. Our tool provides a palette of 3D interactions for efficient viewing, selection and labeling of points using head-mounted VR and controllers. We evaluate our method with 25 users on tasks involving large point clouds and find convincing results that support the use case of VR-based point cloud labeling.
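As a toy illustration of the kind of whole-cloud selection such a tool needs, the sketch below labels all points inside a spherical controller brush with a single vectorised test; the NumPy pass stands in for the hybrid CPU/GPU implementation and the data are random placeholders.

```python
# Toy sketch of brush-style point selection for an immersive labeling tool:
# a vectorised sphere test over the whole cloud, with NumPy standing in for
# the GPU pass described in the abstract. Data and class ids are placeholders.
import numpy as np

points = np.random.rand(1_000_000, 3).astype(np.float32)   # placeholder point cloud
labels = np.zeros(len(points), dtype=np.uint8)

def select_sphere(points, center, radius):
    """Boolean mask of points inside the controller's spherical brush."""
    return np.sum((points - center) ** 2, axis=1) <= radius ** 2

mask = select_sphere(points,
                     center=np.array([0.5, 0.5, 0.5], dtype=np.float32),
                     radius=0.1)
labels[mask] = 3                    # assign class id 3 to the selected points
print(mask.sum(), "points labeled")
```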

FluidPlaying: Efficient Adaptive Simulation for Highly Dynamic Fluid

Booth: 68

Sinuo Liu: Peking University; Xiaojuan Ban: University of Science and Techonolgy Beijing; Sheng Li: Peking University; Haokai Zeng: University of Science and Techonolgy Beijing; Xiaokun Wang: University of Science and Techonolgy Beijing; Yanrui Xu: University of Science and Techonolgy Beijing; Fei Zhu: Peking University; Guoping Wang: Peking University

In this paper, we propose FluidPlaying, a novel highly dynamic spatially adaptive method that takes into account both the visibility level and the dynamic level of the flow field to achieve high-fidelity and efficient simulation of particle-based fluids. To minimize the density error, an online optimization scheme is used when increasing the resolution by particle splitting. We also propose a neighbor-based splash enhancement to compensate for the loss of dynamic details. Compared with the high-resolution simulation baseline, our method achieves over 3 times speedup while consuming less than 10% of the computational resources. In addition, our method can make up for the loss of high-frequency details caused by spatial adaptation.

Motion Prediction based Safety Boundary Study in Virtual Reality

Booth: 69

Chenxin Qu: Beijing Jiaotong University; Xiaoping CHE: Beijing Jiaotong University; Enyao Chang: Beijing Jiaotong University; Zimo Cai: Beijing Jiaotong University

Nowadays, VR technology plays an indispensable role in promoting the construction of the Metaverse. In this study, we aim to prevent physical collisions of users' bodies within the safety boundary in VR scenes. Three basic motion event categories are defined according to common user motions. We segmented users' data into three-dimensional coordinates, extracted the motion range features, and constructed the relational data set. Furthermore, we used statistical methods to study the correlation between user characteristics and motion range categories, in order to explore and analyze the possibility of optimizing the safety motion range. Finally, we deployed our safety boundary systems in the virtual reality environment.

Multi-Needle Particle Implantation Computer Assisted Surgery Based on Virtual Reality

Booth: 70

Wenjun Tan: Northeastern University; Jinsong Wang: Northeastern University; Peifang Huang: Northeastern University; guangqiang Yang: Northeastern University; Qinghua Zhou: NEU(Northeastern University)

Multi-needle particle implantation is an effective minimally invasive technique for tumor treatment. Because of the difficulties in planning, the low accuracy of the puncture operation, and non-standardized systems, it cannot be widely used. In this work, a virtual reality based multi-needle particle implantation surgery planning system is developed to help doctors conduct surgical training and planning, and a three-dimensional surgical guide model is constructed to assist doctors in positioning during real surgery. The results show that this work can effectively solve the problems of "unclear insertion" and "inaccurate insertion" in clinical multi-needle particle implantation.

Comparison of Physiological Cues for Cognitive Load Measures in VR

Mohammad Ahmadi: University of Auckland; Huidong Bai: The University of Auckland; Alex Chatburn: University of South Australia; Marzieh Ahmadi Najafabadi: University of Ontario Institute of Technology; Burkhard Wuensche: University of Auckland; Mark Billinghurst: Auckland Bioengineering Institute

Cognitive load (CL) is an important measure for detecting whether a user is struggling with a task. Many researchers have investigated physiological cues for measuring CL, such as pupil dilation, galvanic skin response (GSR), heart rate, and electroencephalography (EEG). However, physical activity can often affect the measured signals, so it is unclear how suitable these measures are for use in Virtual Reality (VR). In this paper we present a VR game, PlayMeBack, for measuring users' CL using different physiological cues. In the game, the user is shown different patterns of tiles lighting up and is asked to replay the patterns by pressing tiles in the same sequence. Task difficulty increases with the length of the lighting pattern.

The Optimal Interactive Space for Hand Controller Interaction in Virtual Reality

Booth: 71

Xiaolong Lou: Hangzhou Dianzi University; Ying Wu: Hangzhou Dianzi University; Yigang Wang: Hangzhou Dianzi University

The definition of the optimal interactive space for direct hand operation is a universally beneficial but less explored issue in the virtual reality (VR) domain. Through a specifically designed target acquisition experiment, it was found that (1) a bent arm posture yielded higher operational speed and accuracy, as well as lower physical exertion, than a stretched arm posture; (2) arm operation in the below-shoulder position was less laborious and more efficient and accurate than in the above-shoulder position; and (3) the individual characteristic of handedness influenced controller interaction performance in left- and right-sided interactive spaces.

An immersive simulator for improving chemistry learning efficiency

Booth: 72

Shan Jin: The Hong Kong University of Science and Technology (Guangzhou); Yuyang Wang: Hong Kong University of Science and Technology; Lik-Hang Lee: Korea Advanced Institute of Science & Technology; Xian Wang: Hong Kong University of Science and Technology (Guangzhou); Zemin Chen: Hong Kong University of Science and Technology; Boya Dong: South China University of Technology; Xinyi Luo: South China University of Technology; Pan Hui: University of Electronic Science and Technology of China

This paper presents a virtual simulator designed for chemistry education and computer-mediated hands-on exercises. We compared the participants' performances in hands-on chemistry tasks with or without score-keeping and a timer, and used questionnaires to measure their learning performance and perceived cognitive workload. The experimental results indicate participants' preferences and attitudes toward efficiency and safety. Most participants reported that the learning simulator could improve learning efficiency and the sense of safety. Our findings shed light on the quality and learning performance of theoretical knowledge and operational skills for chemistry education.

High Levels of Visibility of Virtual Agents Increase the Social Presence of Users

Lucie Kruse: Universität Hamburg; Fariba Mostajeran: Universität Hamburg; Frank Steinicke: Universität Hamburg

Virtual humanoid agents can perform a variety of tasks, and their representation has an influence on several psychological aspects. We compared the effects of agent visibility during an anagram-solving task in virtual reality on social presence and cognitive task performance. We increased the visibility from (i) voice-only, (ii) mouth-only, (iii) full head, and (iv) upper body, to (v) full body representations of the same virtual agent in a within-subject design study. The results revealed significant differences in perceived social presence, especially for the two least visible representations, i.e., voice-only and mouth-only, but no significant effects on task performance could be found.

A Comparative Analysis of VR-Based and Real-World Human-Robot Collaboration for Small-Scale Joining

Padraig Higgins: University of Maryland, Baltimore County; Ryan C Barron: University of Maryland, Baltimore County; Cynthia Matuszek: UMBC; Don Engel: University of Maryland, Baltimore County

While VR as an interface for the teleoperation of robots has been well studied, VR can also be used to advance our understanding of in-person human-robot interaction (HRI) by simulating such interactions. Platforms now exist for studying human-robot interaction in VR, but little of this work has involved studying the realism of specific, typical in-person HRI tasks. We conduct a user study consisting of a collaborative assembly task in which a robot and a human work together to build a simple electrical circuit. We present a comparison of the task performed in the real world versus with a virtual robot in VR.

Does Cognitive Workload Impact the Effect on Pain Distraction in Virtual Reality? A Study Design

Booth: 73

Jie Hao: Beijing Institute of Technology; Dongdong Weng: Beijing Institute of Technology; Le Luo: Beijing Institute of technology; Ming Li: Beijing Institute of Technology; Jie Guo: Pengcheng Laboratory; Bin Liang: China Software Testing Center

Pain is an important issue for post-surgery patients. Virtual reality treatment (VRT) has been used to reduce pain perception, but few studies have investigated the effect of cognitive workload (CWL). The present study design hypothesizes that higher CWL will lead to stronger distraction in a virtual environment. To this end, a soothing psychological relief environment with a cognitive task, in which participants' CWL can be controlled, is constructed. In the future, this study will be applied to patients with pain to explore whether CWL is one of the key factors in reducing pain in VRT.

Impact of Spatial Environment Design on Cognitive Load

Sungchul Jung: Kennesaw State University; Yi (Joy) Li: Kennesaw State University

This paper presents a preliminary study investigating the impact of environment style on cognitive-load-demanding tasks in virtual reality. Participants completed recall tests and spatial reasoning tests in four different types of virtual classroom environments. A within-subject design was used with four conditions: a task within a generic classroom, a music room, a science lab, and an outdoor classroom. We found significant differences across spatial environment designs on the spatial reasoning task, but with trivial effect sizes. The results indicate minimal evidence that spatial environment design impacts cognitive load in VR.

User Evaluation of Dynamic X-Ray Vision in Mixed Reality

Hung-Jui Guo: Software Engineering department; Jonathan Bakdash: U.S. Army Research Lab; Laura Marusich: U.S. DEVCOM Army Research Laboratory; Omeed E Ashtiani: University of Texas at Dallas; Balakrishnan Prabhakaran: The University of Texas at Dallas

X-Ray vision is a technique providing a Superman-like ability to help users see through physical obstacles. In this paper, we utilized the HoloLens 2 to build a virtual environment and a dynamic X-Ray vision window based on the created environment. Unlike previous works, our system allows users to physically move and still have the X-Ray vision ability in real-time, providing depth cues and additional information in the surrounding environment. Additionally, we designed an experiment and recruited multiple participants as a proof of concept for our system's ability.

Space Topology Change Mostly Attracts Human Attention: An Implicit Feedback VR Driving System

Booth: 74

Tingting Li: Jiangnan University; Fanyu Wang: Jiangnan University; Zhenping Xie: Jiangnan University

Driving safety, especially for autonomous vehicles, is severely challenged by unexpected object motions. To investigate the internal mechanism of how human vision can instantly and robustly respond to such sudden changes, a novel implicit feedback VR driving system is designed. In this system, object-level and space-level topological properties of scene changes are introduced, and virtual driving scenes are constructed in a VR environment with EEG signals and eye-tracking feedback. The experimental results demonstrate that space-level topology changes attract human attention most strongly among space-level, object-level, and non-topology changes, which provides a solid theoretical foundation for developing safer assisted and even autonomous driving systems.

Creating and Using XR for Environmental Communication: Three Exploratory Case Studies

Barbara Buljat Raymond: Université Côte d'Azur; Daniel Pimentel: University of Oregon; Kay Poh Gek Vasey: Meshminds

Despite growing investment in environmental media campaigns, the link between awareness and action remains inadequate. One promising approach involves using extended reality (XR) technologies – augmented reality (AR) and virtual reality (VR) – to provide audiences with direct interactions with environmental dangers. The current work presents three interdisciplinary case studies highlighting how brands, non-governmental organizations, and academic institutions are leveraging XR for environmental communication.

Projection Mapping Method using Projector-LiDAR (light detection and ranging) Calibration

Booth: 75

Daye Yoon: Korea Institute of Industrial Technology; Jinyoung Kim: Korea Institute of Industrial Technology; JAYANG CHO: Korea Institute of Industrial Technology; Kibum Kim: Hanyang University

The projector-LiDAR system automatically performs projection mapping on a long-distance object after calibrating the projector using LiDAR. With a LiDAR equipped with an RGB camera, the projector can be calibrated in just two scans. Because the system knows the ray formula for every pixel emitted by the projector, it can make accurate projections. In particular, the system introduces a way to perform linear interpolation in the process of obtaining a formula, and a methodology for immediately obtaining the 3D coordinates of a desired part of a 2D image. In a long-distance experiment, the projection mapping result of the proposed method showed an error of less than 5 mm, while the existing system had a maximum error of 50 mm.
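The sketch below illustrates the interpolation idea in a hypothetical form: given ray directions calibrated at a sparse grid of projector pixels, a ray for any in-between pixel is obtained by bilinear interpolation and a 3D point is read off along it; the grid values are invented and this is not the authors' calibration code.

```python
# Hypothetical sketch of the interpolation idea: given rays calibrated at a sparse
# grid of projector pixels, bilinearly interpolate a ray for any in-between pixel.
# The grid, normalisation, and data below are invented for illustration.
import numpy as np

def bilerp(grid, u, v):
    """Bilinear interpolation on a (H, W, 3) grid of vectors at fractional (u, v)."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    a = grid[v0, u0] * (1 - du) + grid[v0, u0 + 1] * du
    b = grid[v0 + 1, u0] * (1 - du) + grid[v0 + 1, u0 + 1] * du
    return a * (1 - dv) + b * dv

# Toy 2x2 grid of unit ray directions calibrated from the LiDAR scans
ray_dirs = np.array([[[0, 0, 1], [0.1, 0, 1]],
                     [[0, 0.1, 1], [0.1, 0.1, 1]]], dtype=float)
ray_dirs /= np.linalg.norm(ray_dirs, axis=-1, keepdims=True)

d = bilerp(ray_dirs, 0.5, 0.5)              # ray direction for an in-between pixel
point_3d = 2.0 * d / np.linalg.norm(d)      # point 2 m along that ray from the projector
print(point_3d)
```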

Edible Light Pipe Made of Candy

Booth: 76

Yuki Funato: Gunma University; Suzuno Hayashi: Gunma University; Hiromasa Oku: Gunma University

In recent years, there have been widespread attempts to enhance and extend the sense of food using light and images. As one of such attempts, a method to create transparent optical elements using foods such as candy and agar has been proposed. In this study, an edible light pipe made of candy is proposed as a new edible optical element to decorate foods by light. We report the results of basic evaluation of a prototype light pipe made of candy. We also propose a method of supplying light to the light pipe from a remote location using a laser.

Privacy Threats of Behaviour Identity Detection in VR

Dilshani Rangana Kumarapeli: University of Canterbury; Sungchul Jung: Kennesaw State University; Robert W. Lindeman: University of Canterbury

Modern VR technology generates large volumes of data by attaching various sensors around the user. Over time, researchers have used these data to create unique behavioural keys to provide authentication to immersive applications. However, these approaches come with various behavioural privacy risks. Hence, through this work, we investigate the privacy risks associated with behavioural identity detection in VR and how users’ physical appearance affects this behavioural detection. We found that users could be identified across various sessions with 80% accuracy once tracked, and that physical appearance did not impact the behaviour detection.

Introducing Shopper Avatars in a Virtual Reality Store

Alexander Schnack: The New Zealand Institute for Plant and Food Research; Yinshu Zhao: Massey University; Nilufar Baghaei: University of Queensland

Immersive Virtual Reality (iVR) is rapidly altering the world we live in. Studies have shown that people exhibit similar behaviours and reactions in virtual retail scenarios, thus making iVR a promising tool for researchers to study in-store shopper behaviour. This paper investigates whether virtual avatars can affect users’ perceived telepresence and shopping behavior. Our findings showed the mere presence of virtual avatars in the store had no significant effect on participants’ sense of telepresence and shopping behaviour. We believe this research highlighted the need for further study and contributed to the development of fully-fledged virtual simulated shopping experiences.

What's My Age Again? Exploring the Impact of Age on the Enfacement of Current State-of-the-Art Avatars

Hugh Jordan: Trinity College Dublin; Lauren Buck: Trinity College Dublin; Pradnya Shinde: Trinity College Dublin; Rachel McDonnell: Trinity College Dublin

In this work, we examine how participants experience enfacement toward state-of-the-art avatars of different ages while assessing perception of avatar appearance. While most literature focuses on the feeling of ownership and agency over an avatar's body, recent developments in real-time facial tracking capabilities have brought about interest in the feeling of ownership, agency, and self-identification with an avatar's face. We present a study in which participants viewed avatars of different ages on a desktop monitor fitted with advanced real-time tracking and rated the degree of enfacement they felt toward each avatar as well as the uncanniness of the avatar's appearance.

UPSR: a Unified Proxy Skeleton Retargeting Method for Heterogeneous Avatar Animation

Booth: 77

wenfeng song: Beijing information science and technology university; Xinyu Zhang: Beijing Information Science and Technology University; Yang Gao: Beihang University; Yifan Luo: Beijing Information Science and Technology University; Haoxiang Wang: Beijing Information Science and Technology University; Xianfei Wang: Beijing Information Science and Technology University; Xia Hou: Beijing information science and technology university

Pose-driven avatar animation is widely applied in VR fields. The flexible creation of animations is a significant yet challenging task due to the heterogeneous topologies and shapes of avatars. To alleviate this, we propose a unified proxy skeleton for retargeting (UPSR) to achieve consistent motions of the virtual avatar. Particularly, the heterogeneous topologies are converted into a unified skeleton topology of the avatars by using a learned nonlinear mapping function.

A Graph-based Error Correction Model Using Lie-algebraic Cohomology and Its Application on Global Consistent Indoor Scene Reconstruction

Booth: 78

Yuxue Ren: Capital Normal University; Jiang Baowei: Capital Normal University; Wei Chen: Dalian University of Technology; Na Lei: Dalian University of Technology; Xianfeng David Gu: Stony Brook University

We present a novel, effective method for global indoor scene reconstruction problems based on geometric topology. Building on point cloud pairwise registration methods (e.g., ICP) or IMU data, we focus on the problem of accumulated error in the composition of transformations along loops. The major technical contribution of this paper is a linear method for the graph optimization, requiring only the solution of a Poisson equation. We demonstrate the consistency of our method from the Hodge-Helmholtz decomposition theorem and through experiments on multiple RGB-D datasets of indoor scenes. The experimental results also demonstrate that our global registration method runs quickly and provides accurate reconstructions.
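To make the Poisson-equation step concrete, the sketch below distributes the accumulated error of a single measurement loop by solving a graph-Laplacian (discrete Poisson) system in a least-squares sense; the 1-D offsets and four-node loop are toy stand-ins for the paper's transformation graph.

```python
# Illustrative sketch (not the paper's full method): distributing accumulated loop
# error over a graph by solving a graph-Laplacian (discrete Poisson) system
# in a least-squares sense. Here each "measurement" is a 1-D relative offset.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # a single loop of 4 nodes
meas  = np.array([1.0, 1.0, 1.0, -2.9])       # relative offsets; loop sum != 0 (drift)

n = 4
# Incidence matrix D: one row per edge, x[j] - x[i] should match the measurement
D = np.zeros((len(edges), n))
for k, (i, j) in enumerate(edges):
    D[k, i], D[k, j] = -1.0, 1.0

# Normal equations D^T D x = D^T m form a graph-Laplacian (Poisson-type) system;
# fix the gauge by anchoring node 0 at 0, then solve.
L_mat = D.T @ D
rhs = D.T @ meas
L_mat[0, :] = 0.0; L_mat[0, 0] = 1.0; rhs[0] = 0.0
x = np.linalg.lstsq(L_mat, rhs, rcond=None)[0]

print(x)                     # corrected node values with the drift spread over the loop
print((D @ x) - meas)        # residual per edge after correction
```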

Study of Cybersickness Prediction in Real Time Using Eye Tracking Data

Shogo Shimada: Tokyo Metropolitan University; Yasushi Ikei: The University of Tokyo; Nobuyuki Nishiuchi: Tokyo Metropolitan University; Vibol Yem: Tokyo Metropolitan University

Cybersickness seriously degrades users’ experiences of virtual reality (VR). The level of cybersickness is commonly gauged through a simulator sickness questionnaire (SSQ) administered after the experience. However, for observing the user’s health and evaluating the VR content/device, measuring the level of cybersickness in real time is essential. In this study, we examined the relationship between eye tracking data and sickness level, then predicted the sickness level using machine learning methods. Some characteristics of eye related indices significantly differed between the sickness and non-sickness groups. The machine learning methods predicted cybersickness in real time with an accuracy of approximately 70%.
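A sketch of the general pipeline the abstract describes, with invented feature names and random placeholder data; the actual eye-related indices, labels, and classifier used in the study may differ.

```python
# Sketch of a feature-based sickness classifier; features and labels are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Per-window eye features, e.g. [blink rate, fixation duration, saccade amplitude, pupil diameter]
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)          # 0 = no sickness, 1 = sickness (placeholder labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())                      # chance level on random data; ~70% reported in the study
```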

HapticBox: Designing Hand-Held Thermal, Wetness, and Wind Stimuli for Virtual Reality

Booth: 79

Kedong Chai: Xi'an Jiaotong-Liverpool University; Yue Li: Xi'an Jiaotong-Liverpool University; Lingyun Yu: Xi'an Jiaotong-Liverpool University; Hai-Ning Liang: Xi'an Jiaotong-Liverpool University

Experiences in virtual reality (VR) through multiple sensory modalities can be as rich as real-world experiences. However, many VR systems offer only visual and auditory stimuli. In this paper, we present HapticBox, a small, portable, and highly adaptable haptic device that can provide hand-held thermal, wetness, and wind haptics. We evaluated user perception of wetness and wind stimuli in the hand and to the face. The results showed that users had a stronger perception of the stimuli and a higher level of comfort with haptics in the hand. While increasing the voltage enhanced the wind perception, the results suggested that noise is an important side effect. We present our design details and discuss the future work.

Applications of Interactive Style Transfer to Virtual Gallery

Xin-Han Wu: National Taiwan University; Hsin-Ju Chien: National Taiwan University; Yi-Ping Hung: National Taiwan University; Yen Nun Huang: Academia Sinica

Since the outbreak of the epidemic in recent years, many events, such as virtual galleries, have been held online to avoid spreading illness. We therefore propose an interactive style transfer virtual gallery system. Users can perform style transfer on 2D and 3D objects, where 2D objects include pictures and paper jams, and 3D objects include frames and 3D artworks. The results show that the system not only makes the viewing process more interesting but also deepens the impression of the artwork through the process of choosing different styles.

Metaverse Community Group Loyalty Contagion and Regulation Model Based on User Stickiness

Booth: 80

Junxiao Xue: Research Institute of Artificial Intelligence; Mingchuang Zhang: School of Cyber Science and Engineering

The investigation into the micro-social behaviours of groups in metaverse communities is minimal. This paper proposes a user stickiness-based loyalty contagion and regulation model in a metaverse community inspired by user stickiness in the real world. We hope to establish a link between user stickiness and group loyalty through community regulation, enhance user stickiness, and improve metaverse community loyalty. We verified the feasibility and robustness of the model through simulation experiments and obtained the optimal community regulation intensity through sensitivity analysis experiments. Our results show that communities can maintain healthy long-term development when optimal community regulation intensity is adopted.

User study of omnidirectional treadmill control algorithms in VR

Mathias DELAHAYE: EPFL; Ronan Boulic: EPFL

We conducted a comparative user study of the Infinadeck omnidirectional treadmill native control algorithm alongside an approach based on a state observer control scheme combining a kinematic model with error dynamics that allows users to change their walk freely. Based on the results from 22 participants, we observed that the alternative approach outperformed the native algorithm on trajectories involving a left or right turn (with radii of curvature of 0.5m, 1m, and 2m). However, there was no significant difference for straight-line trajectories, and the native approach yielded the best scores for in-place rotations.

Beyond Action Recognition: Extracting Meaningful Information from Procedure Recordings

Booth: 81

Tim Jeroen Schoonbeek: Eindhoven University of Technology; Hans Onvlee: ASML; Pierluigi Frisco: ASML ; Peter H.N. de With: Eindhoven University of Technology; Fons van der Sommen: Eindhoven University of Technology

Understanding procedural actions is important, as it can be used to automatically analyze the execution of a procedure and provide assistance to users by warning about potential mistakes or forgotten steps. However, current approaches require a rigid, step-by-step execution order and laborious, impractical datasets. Furthermore, they are unreliable under variations in viewpoint, or measure the performance of actions rather than their actual completion. To address these limitations and stimulate research in this field, this work proposes the novel task of procedure state recognition (PSR) together with a set of evaluation metrics.

The Role of Social Identity Labels in CVEs on User Behavior

Kathrin Knutzen: Ilmenau University of Technology; Florian Weidner: Ilmenau University of Technology; Wolfgang Broll: Ilmenau University of Technology

Psychological and individual factors such as group identity influence social presence in collaborative virtual settings. We investigated the impact of social identity labels, which reflect a user's nation and academic affiliation, on collaborative behavior. In an experiment, N = 18 dyads played puzzle games while either seeing or not seeing such labels. There were no significant differences regarding social presence, trust, group identification, or enjoyment. We argue that social identity labels in dyadic interactions do not change collaborative virtual behavior. We advance the field of sociotechnical applications by highlighting the relationship between psychological characteristics and cooperative behavior in collaborative virtual settings.

Scene Transformer: Automatic Transformation from Real Scene to Virtual Scene

Booth: 82

Runze Fan: Beihang University; Lili Wang: Beihang University; Wei Ke: Macao Polytechnic University; Chan-Tong Lam: Macao Polytechnic University

Given a real scene and a virtual scene, the indoor scene transformation problem is defined as transforming the layout of the input virtual scene. We propose a real-scene-constrained deep scene transformer to solve this problem. First, we introduce a deep scene matching network to predict the matching relationship between real furniture and virtual furniture. Then we introduce a layout refinement algorithm based on a refinement parameter network to arrange the matched virtual furniture into the new virtual scene. Finally, we introduce a deep scene generating network to arrange the unmatched virtual furniture into the new virtual scene.

Generating Co-Speech Gestures for Virtual Agents from Multimodal Information Based on Transformer

Booth: 83

Yue Yu: Beijing Institute of Technology; Jiande Shi: Beijing Institute of Technology

To generate co-speech gestures for virtual agents and enhance the correlation between gestures and input modalities, we propose a Transformer-based model that encodes four modalities (Audio Waveform, Mel-Spectrogram, Text, and Speaker IDs). For the Mel-Spectrogram modality, we design a Mel-Spectrogram encoder based on the Swin Transformer pre-trained model to extract audio spectrum features hierarchically. For the Text modality, we use the Transformer encoder to extract text features aligned with the audio. We evaluate our model on the TED-Gesture dataset. Compared with state-of-the-art methods, we improve the mean absolute joint error by 2.33%, the mean acceleration difference by 15.01%, and the Frechet gesture distance by 59.32%.

Collaborative Co-located Mixed Reality in Teaching Veterinary Radiation Safety Rules: a Preliminary Evaluation

Booth: 84

Xuanhui Xu: University College Dublin; Antonella Puggioni: University College Dublin; David Kilroy: University College Dublin; Abraham Campbell: University College Dublin

When learning radiographic techniques on horses, veterinary students must be familiar with radiation safety rules, which are essential to avoid potentially dangerous exposure to the x-ray beam. We propose a collaborative co-located Mixed Reality (MR) system for training students on radiation safety rules allowing staff to guide the students to practice positioning for radiography safely. The performance of 22 veterinary students was measured before and after they had grouped training sessions using the system. The result showed that the participants performed significantly better in the post-test. This evaluation illustrates the potential of expanding the technique into other subjects.

Augmented Reality for Medical Training in Eastern Africa

Phyllis Oduor: Stanford University; Luqman Mushila Hodgkinson: Stanford School of Medicine; Doris Tuitoek Jeptalam Tuitoek: Masinde Muliro University of Science and Technology; Lucy Kavinguha Kageha: Masinde Muliro University of Science and Technology; Bruce Daniel: Stanford University; Jarrett Rosenberg: Stanford University; Lydiah Nyachiro: Masinde Muliro University of Science and Technology; Dinah Okwiri: Masinde Muliro University of Science and Technology; Simon Ochieng Ogana: Masinde Muliro University of Science and Technology; Thomas Ng'abwa: Masinde Muliro University of Science and Technology; Tecla Jerotich Sum Ms: Masinde Muliro University of Science and Technology; John Arudo: Masinde Muliro University of Science and Technology; Christoph Leuze: Stanford University

Access to qualified medical care is severely limited in low-resource areas within East Africa. Technologies like Augmented Reality (AR) have been shown to be useful for medical guidance and may contribute to the training of medical experts in these countries. AR has low infrastructure requirements. However, AR is mainly tested for training and surgical applications in highly developed environments. In this study, we tested whether AR can also address the needs for healthcare in East Africa. Our study consisted of a medical need-finding part and a pilot study to address some of these needs with AR. Our study shows that remote teaching via AR led to significantly higher test score gains and more consistent results than in the control group.

An Image-Space Split-Rendering Approach to Accelerate Low-Powered Virtual Reality

Ville Cantory: University of Minnesota; Nathan Ringo: University of Minnesota

Current mobile GPUs lack the computing power needed to support high-end VR rendering. As an alternative, frames can be rendered on a server and streamed to a thin client, but this approach can incur high end-to-end latency due to processing and network requirements. We propose a networked split-rendering approach that uses an image-space division of labour between a server and a heavy client to achieve faster end-to-end image presentation rates on the mobile device while preserving image quality.
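A toy sketch of an image-space division of labour: the server renders one band of each frame, the client renders the rest and composites; the split ratio and the stand-in "renderers" are assumptions for illustration, not the proposed system.

```python
# Toy sketch of an image-space split: the server renders the top band of each
# frame, the client renders the rest locally and composites the final image.
# The split ratio and "render" functions are placeholders for illustration.
import numpy as np

H, W = 480, 640
split_row = int(H * 0.7)                  # fraction assigned to the server (assumption)

def render_server(h, w):                  # stand-in for the server's renderer
    return np.full((h, w, 3), 200, dtype=np.uint8)

def render_client(h, w):                  # stand-in for the mobile client's renderer
    return np.full((h, w, 3), 80, dtype=np.uint8)

server_part = render_server(split_row, W)        # streamed to the client
client_part = render_client(H - split_row, W)    # rendered locally

frame = np.vstack([server_part, client_part])    # composite for display
print(frame.shape)                               # (480, 640, 3)
```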

VRChoir: Exploring Remote Choir Rehearsals via Virtual Reality

Tianquan Di: University of Toronto; Daniel Medeiros: University of Glasgow; Mauricio Sousa: University of Toronto; Tovi Grossman: University of Toronto

Choral singing is a creative process that involves continuous, organized, nonverbal communication between conductors and singers. Since the COVID pandemic, choirs have moved to videoconferencing systems for rehearsals. However, the limitations of 2D video interfaces restrict nonverbal communication, spatial awareness, and the sense of presence in choral rehearsal. We designed, implemented, and evaluated VRChoir, a VR-based platform for choir rehearsals that addresses these pain points. We evaluated VRChoir with conductors and singers with experience rehearsing in a remote environment. Our findings reveal that VR can be a starting point for improving the sense of presence and the quality of non-verbal communication in remote music rehearsals.

Give me some room please! Personal space bubbles for safety and performance

Karina LaRubbio: University of Florida; Ethan Wilson: University of Florida; Sanjeev Koppal: University of Florida; Sophie Joerg: University of Bamberg; Eakta Jain: University of Florida

Personal space bubbles are implemented in virtual environments to protect users from physical harassment. When activated, an impermeable boundary encloses the user completely and complicates collaborative tasks, such as passing objects or performing social gestures. When personal space protection is not balanced with functionality, the personal space bubble becomes a gilded cage. In this paper, we raise the possibility of alternate designs for personal space bubbles and test their impact on task performance within a workplace training context. Our early findings suggest that alternate bubble designs have the potential to balance safety and performance metrics such as task completion.

A Comparison Study on Stress Relief in VR

Dongyun Han: Utah State University; DongHoon Kim: Utah State University; Kangsoo Kim: University of Calgary; Isaac Cho: Utah State University

In modern life, people encounter various stressors, which may cause negative mental and bodily reactions that leave them feeling frustrated, angry, or irritated. Earlier studies have shown that Virtual Reality (VR) experiences are as effective as traditional real-world methods for stress management and reduction. However, it is still unclear how different activities in VR and their combinations moderate the effects on stress reduction. In this work, we conduct and report a comparative study investigating the effects of two virtual activities, 1) meditation in VR and 2) a smash room experience in VR, compared with 3) a traditional meditation method of sitting in silence in the real world.

In the Future Metaverse, What Kind of UGC do Users Need?

Booth: 85

YanXiang Zhang: University of Science and Technology of China; WenBin Hu: University of Science and Technology of China

With the COVID-19 pandemic, people's real-life interactions diminished, and game-based metaverse platforms such as Minecraft and Roblox are on the rise. The main users of these platforms are teenagers, who generate content in a virtual environment, which can significantly increase the activity of the platform. However, the current experience of creating User-Generated Content (UGC) in the metaverse is poor. What kind of support do users need to generate content in the metaverse more efficiently? To investigate teenage users' preferences and expectations, this paper interviewed 72 teenagers aged 12-22 who are familiar with metaverse games, and distilled four suggestions that can help metaverse users generate content.

Projector Illuminated Precise Stencils on Surgical Sites

Muhammad Twaha Ibrahim: UC Irvine; Gopi Meenakshisundaram: University of California, Irvine; Raj Vyas: University of California, Irvine; Lohrasb R. Sayadi: University of California, Irvine; Aditi Majumder: University of California, Irvine

We propose a system that provides real-time guidance to surgeons by illuminating salient markings on the physical surgical site using a projector. In addition to the projector, the system uses an RGB-D camera for feedback. This unit is called the projector-depth-camera or PDC unit. Using the PDC, we perform structured light scanning and generate a high-resolution mesh of the surgical site. During planning or execution of the surgery, this digital model is marked with appropriate incision markings through a GUI. These markings are then illuminated at high precision via the PDC unit onto the surgical site in real time. If the surgical site moves during the procedure, the movement is tracked by the system and the markings are quickly updated on the moved surgical site.

Projector-Camera Calibration on Dynamic, Deformable Surfaces

Booth: 86

Muhammad Twaha Ibrahim: UC Irvine; Gopi Meenakshisundaram: University of California, Irvine; Aditi Majumder: UCI

Dynamic projection mapping (DPM) enables viewers to visualize information on dynamic, deformable objects. Such systems often comprise an RGB-D camera and projector pair that must be calibrated a priori. Most calibration techniques require specific static calibration objects of known geometry. In this paper, we propose the first projector-camera calibration technique for DPM that uses the dynamic, deformable surface itself to calibrate the devices, without needing to bring in a static, rigid calibration object. Our method is hardware agnostic, fast, and accurate, and allows quick recalibration.

AdaptiveFusion: Low Power Scene Reconstruction

Muhammad Huzaifa: UIUC; Boyuan Tian: UIUC; Yihan Pang: UIUC; Henry Che: UIUC; Shenlong Wang: UIUC; Sarita Adve: UIUC

This paper focuses on online scene reconstruction, a task in Extended Reality (XR) that rebuilds the user’s surroundings on a mobile device with a stringent power budget. Towards low power scene reconstruction, we propose AdaptiveFusion to dynamically change the frequency of scene reconstruction to minimize power consumption while maintaining acceptable quality. Compared to a standard 30 Hz baseline, AdaptiveFusion reduces frequency, and thus power, by 4.9× on average while producing acceptable meshes.
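As a hedged sketch of the adaptive-frequency idea, the controller below lowers the reconstruction rate when the scene is static and raises it when motion or mesh error grows; the thresholds, multiplicative update, and signals are invented placeholders rather than AdaptiveFusion's actual policy.

```python
# Sketch of an adaptive-rate idea consistent with the abstract: lower the
# reconstruction frequency when the scene is static, raise it when motion or
# reconstruction error grows. Thresholds and rates are invented placeholders.
def next_frequency(current_hz, scene_motion, mesh_error,
                   min_hz=1.0, max_hz=30.0,
                   motion_thresh=0.05, error_thresh=0.02):
    """Return the integration frequency (Hz) for the next interval."""
    if scene_motion > motion_thresh or mesh_error > error_thresh:
        return min(max_hz, current_hz * 2.0)   # scene changing: integrate more often
    return max(min_hz, current_hz * 0.5)       # scene static: save power

hz = 30.0
for motion, err in [(0.01, 0.0), (0.0, 0.0), (0.2, 0.0), (0.0, 0.05)]:
    hz = next_frequency(hz, motion, err)
    print(hz)
```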

Vibro-tactile Feedback for Dial Interaction using an Everyday Object in Augmented Reality

Booth: 87

Mac Greenslade: University of Canterbury; Stephan Lukosch: University of Canterbury; Adrian James Clark: University of Canterbury; Zhe Chen: University of Canterbury

Interaction using everyday objects offers a compelling alternative to gesture and dedicated-controller interaction in augmented reality. However, unlike dedicated controllers, everyday objects cannot provide active haptic feedback. To explore active feedback with everyday objects, a within-subjects study was conducted with n = 28 participants, using a flowerpot enhanced with vibro-tactile feedback to perform dial interactions and comparing four conditions based on the inclusion and exclusion of audio and haptic feedback. No significant differences between conditions were found for task completion time or accuracy; however, measures of immersion and user preference were higher for the conditions that included haptic feedback.

‘Auslan Alexa’: A case study of VR Wizard of Oz prototyping for requirements elicitation

Shashindi R Vithanage: The University of Queensland; Arindam Dey: University of Queensland; Jessica L Korte: The University of Queensland

We used VR as a speculative design tool to explore how Deaf people might interact with a sign language personal assistant through a Wizard of Oz pilot study. This allowed us to elicit concrete design requirements from potential users of the proposed personal assistant, and shows the potential of VR as a requirements elicitation tool.

Examining VR Technologies for Immersive Lighting Performance Simulation and Visualisation in Building Design and Analysis

Kieran William May: University of South Australia; James A Walsh: University of South Australia; Ross Smith: University of South Australia; Ning Gu: University of South Australia; Bruce H Thomas: University of South Australia

This paper presents IVR-LPSim: An Immersive Virtual Reality (VR) Lighting Performance Simulator (LPS), which can perform real-time indoor artificial illuminance, glare, daylighting, and energy consumption calculation and visualisation. Traditional PC-based LPS consist of exocentric CAD interfaces and represent lighting performance data using reports, numerical datasets, two-dimensional graphs, charts and static visualisations. As an alternative to the current approaches, this paper explores leveraging VR technologies to support an egocentric approach for lighting design and analysis. Three immersive lighting performance visualisations are presented to facilitate the representation of lighting performance data within IVR-LPSim.

See Through the Inside and Outside: Human Body and Anatomical Skeleton Prediction Network

Booth: 88

Zhiheng Peng: School of Automation; Kai Zhao: Netease Fuxi Robot Department; Xiaoran Chen: Netease Fuxi Robot Department; Yingfeng Chen: Netease Fuxi Robot Department; Changjie Fan: Netease; Bowei Tang: Netease Fuxi Robot Department; Siyu Xia: School of Automation; Weijian Shang: Netease Fuxi Robot Department

This work is the first to use a monocular image of a person in an arbitrary pose to predict the human body and anatomical skeleton simultaneously. The framework, called STIO (See Through the Inside and Outside), takes human anatomical constraints into account and consists of a two-way reconstruction network that forms a strong collaboration between skeleton and skin.

Development and penta-metric evaluation of a virtual interview simulator

Booth: 89

Xinyi Luo: University of Electronic Science and Technology of China; Yuyang Wang: Hong Kong University of Science and Technology; Lik-Hang Lee: The Hong Kong Polytechnic University; Zihan Xing: Normal University-Hong Kong Baptist University United International College; Shan Jin: The Hong Kong University of Science and Technology (Guangzhou); Boya Dong: South China University of Technology; Yuanyi Hu: Guangdong Medical University; Zemin Chen: South China University of Technology; Jing Yan: Taiyuan University of Technology; Pan Hui: The Hong Kong University of Science and Technology

Virtual reality interview training systems (VRITS) provide a manageable training approach for candidates who tend to be very nervous during interviews; yet, the major anxiety-stimulating elements remain unknown. By analyzing electrodermal activity and a questionnaire of an orthogonal design L8(4^1 × 2^4), we investigated five factors. Results indicate that Type of Interview Questions plays a major role in the interviewee's anxiety. Secondly, Level of Realism and Preparation both have some degree of influence. Lastly, Interrogator's Attitude and Timed or Untimed Answers have little to no impact. This work contributes cues for designing future VRITS.

Multi-Camera AR Navigation System for CT-guided Needle Insertion Task

Yizhi Wei: National University of Singapore; Zhiying Zhou: National University of Singapore

In this work, we propose a multi-camera AR navigation system for the needle insertion task, including a high accuracy registration system and a user-friendly AR interface. Our proposed marker-based registration system utilizes an extended field of view and a high-precision calibration solution to address the difficulty of marker placement in clinical applications. The proposed AR interface provides guidance on needle placement and insertion timing to navigate needle insertion. Experiments demonstrate that our proposed AR-guided system achieves clinically required insertion accuracy and facilitates needle-based therapy by improving needle placement accuracy, shortening completion time, and reducing radiation exposure.
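For background, marker-based registration ultimately reduces to aligning matched 3D point sets; the sketch below uses the standard Kabsch/SVD closed form on synthetic marker corners. It is generic textbook material, not the authors' multi-camera calibration pipeline.

```python
# Illustrative rigid registration between matched marker corners seen in two frames,
# using the standard Kabsch/SVD closed form; the point sets here are synthetic.
import numpy as np

def rigid_transform(src, dst):
    """Find R, t minimising ||R @ src_i + t - dst_i|| for matched 3-D point sets (N, 3)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.5])

R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))   # True [ 0.1 -0.2  0.5]
```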

Individualized Generation of Upper Limb Training for Robot-assisted Rehabilitation using Multi-objective Optimization

Booth: 90

Yuting Fan: Southeast University; Lifeng Zhu: Southeast University; Hui Wang: Hefei Institute of Physical Science, Chinese Academy of Science; Aiguo Song: Southeast University

To alleviate the labor costs required to customize the content and difficulty of a rehabilitation game for each patient, we propose a method to automatically generate individualized training content on a desktop end-effector rehabilitation robot. By modeling the search for training motions as finding optimal hand paths and trajectories, we solve the design problem with a multi-objective optimization (MO) solver. Our system is capable of automatically generating various training plans considering the training intensity and the dexterity of each joint in the upper limb. In addition, we develop a serious game to present the generated training, which helps motivate the patient during rehabilitation.
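To illustrate the multi-objective flavour of the search, the sketch below keeps only the non-dominated candidate plans under two invented objectives (training intensity and joint dexterity); the real system optimizes hand paths and trajectories with a full MO solver, which this toy filter does not attempt.

```python
# Minimal sketch of the multi-objective idea: keep only non-dominated training
# plans, scored here by two invented objectives (training intensity, joint dexterity).
import numpy as np

def pareto_front(scores):
    """Return indices of non-dominated rows; all objectives are to be maximised."""
    keep = []
    for i, s in enumerate(scores):
        dominated = any(np.all(o >= s) and np.any(o > s)
                        for j, o in enumerate(scores) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Each row = one candidate plan: [intensity, dexterity] (placeholder numbers)
plans = np.array([[0.8, 0.2], [0.5, 0.5], [0.2, 0.9], [0.4, 0.4], [0.7, 0.1]])
print(pareto_front(plans))    # -> [0, 1, 2]; dominated plans are dropped
```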

Towards an Edge Cloud Based Coordination Platform for Multi-User AR Applications Built on Open-Source SLAMs

Booth: 91

Balazs Sonkoly: Budapest University of Technology and Economics; Bálint György Nagy: Budapest University of Technology and Economics Hungary; János Dóka: Budapest University of Technology and Economics; Zsófia Kecskés-Solymosi: Budapest University of Technology and Economics; János Czentye: Budapest University of Technology and Economics; Bence Formanek: Ericsson Research; Dávid Jocha: Ericsson Research; Balazs Peter Gero: Ericsson Research

Multi-user and collaborative AR applications pose several challenges. The expected user experience requires accurate pose information for each device and precise synchronization of the respective coordinate systems in real-time. Unlike mobile phones or AR glasses running on battery with constrained resource capacity, cloud and edge platforms can provide the computing power for the core functions under the hood. In this paper, we propose a novel edge cloud based platform for multi-user AR applications realizing an essential coordination service among the users. The latency critical, computation intensive Simultaneous Localization And Mapping (SLAM) function is offloaded from the device to the edge cloud infrastructure.

Exploring Locomotion Techniques for Seated Virtual Reality

Marlene Huber: VRVis Forschungs-GmbH; Simon Kloiber: Graz University of Technology; Hannes Kaufmann: Vienna University of Technology; Katharina Krösl: VRVis Forschungs-GmbH

Virtual reality applications often rely on locomotion techniques that require users to stand or walk around in physical space. We explore the feasibility of selected promising locomotion techniques in a seated, stationary VR setting, as it may better support lengthy sessions and small physical spaces and has the potential to include people with limited mobility. We present our evaluation approach and a preliminary user study evaluating these factors. Our results suggest that common locomotion techniques, such as teleportation, can feasibly be adapted for this purpose, while more physically demanding techniques may exhibit problems, including motion sickness and usability issues.
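
As a rough sketch of one adapted technique (hypothetical parameters, not the study's implementation): a seated teleport pointer can march along a ballistic arc from the controller and take the floor intersection as the destination.

```python
# Illustrative sketch: parabolic teleport pointer for a seated user.
def teleport_target(origin, direction, speed=6.0, gravity=9.81, floor_y=0.0, dt=0.02):
    """origin/direction are (x, y, z) tuples; direction should be normalized."""
    x, y, z = origin
    vx, vy, vz = (d * speed for d in direction)
    for _ in range(500):                  # cap the marching steps
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        vy -= gravity * dt
        if y <= floor_y:
            return (x, floor_y, z)        # landed: teleport destination
    return None                           # arc never reached the floor

# Controller held at seated height, pointing slightly upward and forward.
print(teleport_target(origin=(0.0, 1.2, 0.0), direction=(0.0, 0.3, 0.95)))
```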

The Effects of Avatar Personalization and Human-Virtual Agent Interactions on Self-Esteem

Wei Jie Dominic Koek: Nanyang Technological University; Vivian Hsueh Hua Chen: Nanyang Technological University

Extant literature suggests that VR may be a potential avenue for enhancing self-esteem. However, the understanding of the underlying technological mechanisms and their corresponding effects on the self is still not comprehensive. To address this research gap, the current study designed a series of social interactions in VR in which participants (N = 171) embodied either a personalized or non-personalized avatar and had either positive or negative interactions with a virtual agent. Findings showed that participants who embodied a personalized avatar experienced a positive change in self-esteem from pre- to post-simulation, regardless of the quality of the virtual agent interaction.

Effects of Experiential Priming in Virtual Reality on Flavor Profiles of Real World Beverages

Booth: 92

Danny Stephen Otten: University of Wyoming; Allie Lew: University of Wyoming; Milana Wolff: University of Wyoming; Russell Todd: University of Wyoming; Amy Banic: University of Wyoming

Earlier research establishes the predominance of virtual over real textures, flavors, and scents when two different stimuli are presented to users. However, the effects of virtual priming, rather than simultaneous exposure, have not been thoroughly investigated. Experiential priming involves presenting one set of stimuli prior to a second set. We examined whether experiential priming conducted in a virtual environment influences real-world flavor perception. The results of our preliminary work suggest that VR experiential priming produces perceptual differences in the real world and warrants further investigation.

Investigating Interaction Behaviors of Learners in VR Learning Environment

Booth: 93

Antony Prakash: Indian Institute of Technology Bombay; Danish Shafi Shaikh: Indian Institute of Technology Bombay; Ramkumar Rajendran: IIT Bombay

Education researchers use self-reported questionnaires, physiological sensors, and human observers to analyze the effects of VR in education. These existing data collection mechanisms do not capture the interaction behavior of learners in VR Learning Environments (VRLE). As a result, understanding learning processes from the perspective of learners' behavior in VRLE is still in its infancy. Hence, we developed a mechanism that automatically logs all interaction behavior. In this paper, we discuss a study conducted with 14 undergraduate students after deploying the developed mechanism in a VRLE.
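
A minimal sketch of the kind of automatic interaction logging described follows; the event fields, names, and file format are assumptions, not the authors' schema.

```python
# Hedged sketch: append each gaze, grab, or teleport event as one JSON line.
import json, time

class InteractionLogger:
    def __init__(self, path="vrle_interactions.jsonl"):
        self.path = path

    def log(self, learner_id: str, event: str, target: str, **extra):
        record = {"t": time.time(), "learner": learner_id,
                  "event": event, "target": target, **extra}
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")

logger = InteractionLogger()
logger.log("P07", "grab", "beaker", hand="right")
logger.log("P07", "gaze", "instruction_panel", duration_s=2.4)
```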

Giant Finger: Visuo-proprioceptive congruent virtual legs for flying actions in virtual reality

Booth: 94

Seongjun Kang: Gwangju Institute of Science and Technology; Gwangbin Kim: Gwangju Institute of Science and Technology; Kyung-Taek Lee: Korea Electronics Technology Institute; SeungJun Kim: Gwangju Institute of Science and Technology

An immersive virtual reality experience occurs when visual input is accompanied by congruent somatosensation. However, because users must not fall over and physical abilities vary among individuals, existing flying methods may offer only limited or predefined motion flexibility. To address this issue, we present Giant Finger, a flying-action method that mimics human gait using two giant virtual fingers. We confirmed body ownership via proprioceptive drift and questionnaires, and found that Giant Finger outperformed existing methods in tasks such as flying and kicking.

Real-time Bi-directional Real-Virtual Interaction Framework Using Automatic Simulation Model Generation

Booth: 95

Junyoung Yun: Hanyang University; Hyeongil Nam: Hanyang University; Taesoo Kwon: Hanyang University; Hyungmin Kim: ARIA-Edge; Jong-Il Park: ARIA-Edge

Most mixed reality research requires a great deal of event-specific interaction design, such as assigning appropriate animations to each specific situation, without considering the physical properties of objects during the modeling process. We propose a new framework for bi-directional interaction between real and virtual objects that incorporates physical feedback on both. Our framework automatically generates simulation-enabled virtualized real objects using only their 3D meshes and material types related to their physical properties, and visualizes, in real time, various bi-directional interactions among virtual and real objects with different physical properties, such as rigid, articulated rigid, and deformable bodies.
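
A hedged sketch of the material-to-physics idea appears below; the preset values and field names are illustrative assumptions, not the framework's actual tables.

```python
# Illustrative sketch: map a mesh's material type to the physical parameters a
# simulation model needs, so a virtualized real object can be built automatically.
MATERIAL_PRESETS = {
    # density kg/m^3, friction, restitution, body type
    "wood":   {"density": 600.0,  "friction": 0.5, "restitution": 0.3, "body": "rigid"},
    "metal":  {"density": 7800.0, "friction": 0.4, "restitution": 0.2, "body": "rigid"},
    "rubber": {"density": 1100.0, "friction": 0.9, "restitution": 0.8, "body": "deformable"},
    "fabric": {"density": 300.0,  "friction": 0.7, "restitution": 0.1, "body": "deformable"},
}

def make_sim_object(mesh_path: str, material: str, volume_m3: float) -> dict:
    """Build a simulation-ready description of a scanned real object."""
    preset = MATERIAL_PRESETS[material]
    return {"mesh": mesh_path,
            "mass": preset["density"] * volume_m3,
            "friction": preset["friction"],
            "restitution": preset["restitution"],
            "body_type": preset["body"]}

print(make_sim_object("scans/mug.obj", "metal", volume_m3=0.0004))
```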

Conference Sponsors

Special

Lujiazui

Diamond

Platinum

Baidu

Gold

SenseTime
Unity China
XImmerse
Vivo

Silver

GritWorld

Bronze

ImageDerivative
Alibaba Yuanjing
Alibaba RaysEngine
S-Dream
VRIH
EVIS
Kanjing
Lianying

Doctoral Consortium Sponsors

Supporters

Tencent Learn
Lenovo
Qualcomm
Liangfengtai
HGMT

Host

SJTU

Co-Host

ZJU NUIST

Supporting Associations

CCF-VR
CSIG-VR
CGS-VCC
CVRVT
MIA
SIGA
SJMC


Code of Conduct

© IEEE VR Conference 2023, Sponsored by the IEEE Computer Society and the Visualization and Graphics Technical Committee