
Facilitating Asymmetric Interaction between VR Users and External Users via Wearable Gesture-Based Interface

Yuetong Zhao, Shuo Yan, Xuanmiao Zhang, Xukun Shen

Head-mounted displays (HMDs) create a highly immersive experience while limiting VR users' awareness of external users co-located in the same environment. Previous studies have explored various forms of asymmetric communication that allow external users to collaborate with VR users. In our work, we superimpose the virtual world onto the physical environment and propose a shareable VR experience between VR users and external users via a wearable interface. We encourage the external user to become part of the shared VR experience and enable them to interact and explore the environment with the VR user using hand gestures. We propose a co-located VR experience in two main scenarios: (1) Object Transition and (2) Collaborative Game. We conducted a hybrid user study with twenty participants, comparing gesture-based and controller-based interaction to investigate their effects on the VR experience, social presence, and collaborative efficiency of external users. Our study revealed that gesture-based interaction received more positive feedback from participants than controller-based interaction. Our findings suggest that gesture-based interaction could be adopted for future asymmetric VR experiences.
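
For illustration only, the following Python sketch shows one way the wearable interface's recognized gestures could be mapped onto shared-world actions such as the Object Transition scenario; the gesture names, the SharedScene class, and the handlers are hypothetical and not the authors' implementation.

    # Hypothetical sketch: mapping an external user's recognized hand gestures
    # onto actions in a scene state shared with the VR user.

    from dataclasses import dataclass, field


    @dataclass
    class SharedScene:
        """State visible to both the VR user and the external user."""
        objects: dict = field(default_factory=dict)  # object id -> current holder

        def transfer(self, obj_id: str, holder: str) -> None:
            self.objects[obj_id] = holder
            print(f"{obj_id} is now held by {holder}")


    def handle_gesture(scene: SharedScene, gesture: str, obj_id: str) -> None:
        # Each recognized gesture from the wearable interface triggers one
        # shared-world action; unrecognized gestures are ignored.
        actions = {
            "grab": lambda: scene.transfer(obj_id, "external_user"),
            "release": lambda: scene.transfer(obj_id, "vr_user"),
        }
        action = actions.get(gesture)
        if action is not None:
            action()


    scene = SharedScene(objects={"cube": "vr_user"})
    handle_gesture(scene, "grab", "cube")     # external user takes the cube
    handle_gesture(scene, "release", "cube")  # and hands it back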

Behaviourally-based Synthesis of Scene-aware Footstep Sound

Haonan Cheng, Wang Zhaoye, Hengyan Huang, Juanjuan Cai, Long Ye

This paper proposes a behaviourally-based scene-aware (BBSA) footstep sound synthesis method for congruent interaction in virtual environments. Most existing methods focus on simulating the timbre of footstep sounds appropriately but ignore the impact of user behaviour and virtual scenes on footstep sounds, which reduces the sense of immersion and presence. To tackle this issue, we classify common user behaviours and design a set of mapping functions between different behaviours and footsteps. To further enhance user immersion, we propose a ray-casting-based scene-aware sound synthesis method that synthesizes the corresponding footstep sounds in real time for indoor and outdoor scenes with different surface materials. User studies demonstrate that our proposed BBSA method achieves a higher level of immersion and presence. The complete project code is available at: https://github.com/OlyMarco/Behaviourally-based-Synthesis-of-Scene-aware-Footstep-Sound-Demo-.
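
As a rough illustration of the two ideas above, the Python sketch below maps assumed behaviour classes to step timing and loudness and uses a toy stand-in for the ray cast to pick a surface material; the behaviour profiles, materials, and numbers are our assumptions, not the paper's parameters.

    # Illustrative sketch: (1) behaviour -> footstep mapping, (2) a toy
    # "ray cast" that returns the material of the surface under the foot.

    BEHAVIOUR_PROFILES = {
        # behaviour -> (step interval in seconds, relative loudness)
        "walk": (0.55, 0.6),
        "run": (0.33, 1.0),
        "sneak": (0.85, 0.3),
    }

    GROUND_MATERIALS = [
        # (surface height in metres, material); a stand-in for the scene mesh
        (0.0, "wood"),
        (-0.5, "gravel"),
    ]


    def raycast_material(foot_height: float) -> str:
        """Return the material of the first surface at or below the foot."""
        for surface_height, material in sorted(GROUND_MATERIALS, reverse=True):
            if foot_height >= surface_height:
                return material
        return "default"


    def footstep_event(behaviour: str, foot_height: float) -> dict:
        interval, loudness = BEHAVIOUR_PROFILES[behaviour]
        material = raycast_material(foot_height)
        # A real system would trigger sample playback or synthesis here;
        # this just returns the parameters a synthesizer would need.
        return {"sample": f"{material}_{behaviour}",
                "interval": interval, "gain": loudness}


    print(footstep_event("run", 0.0))    # wood running footsteps
    print(footstep_event("walk", -0.4))  # gravel walking footsteps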

Papermoon VR – Creating VR Landscape through Exploring VR Creation Methods

Rosina Yuan

Virtual reality technology offers creative practitioners opportunities to explore new ways of generating artworks as well as new forms of art. VR head-mounted displays allow artists to create digital content with the freedom to experience their creations in real time at flexible scales and positions. Through practice-led research, this project elaborates on the term VR creation as a way of practice that uses VR-native applications to create VR content. The project Papermoon was created using the VR application Quill by Smoothsteps and an Oculus Quest 2; it was later built and presented using Unity. The paper shares insight into how artists practice with(in) VR through bodily movements. The paper also introduces aspects of the development of the artwork, particularly its approach to the VR landscape, its motivation, and the techniques used to overcome technical and aesthetic challenges.

The Most Beautiful Room in the World

Andres R Montenegro, Audrey Ushenko

This creative video visually explains the development process of an interactive VR/AR immersive installation that reenacts a paramount masterpiece of the early Renaissance. "The Wedding Chamber Installation" is based on the aesthetic, artistic, and pre-cinematic expressivity conveyed by the illusionistic devices created and painted by Andrea Mantegna in the late 15th century in his fresco "The Wedding Chamber", dubbed "The Most Beautiful Room in the World", situated in Saint Giorgio Castle, Mantua, Italy.

Towards a Mixed Reality Agent to Support Multi-Modal Interactive Mini-Lessons That Help Users Learn Educational Concepts in Context

Aaditya Vaze, Alexis Morris, Ian Clarke

The discipline of education and training is currently in a state of transformation, shifting toward more online learning environments; as such, researchers are beginning to explore and apply emerging technologies for online education. However, current online learning platforms face new challenges in presenting learning content. There is a need for ways to support self-directed learning using new technological resources to create personally meaningful, curiosity-driven learning experiences. In this work, an architectural framework for a mixed reality learning system is designed and presented to address this need. It offers an approach toward building tools that enable future educators to create multi-modal interactive mini-lessons in mixed reality, which students can experience when they are within the appropriate learning contexts. Two proof-of-concept use-case scenarios are presented using this framework. Together, these provide a step toward future multi-modal interactive educational agents that help users learn educational concepts in mixed reality.
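
As a rough sketch of what such context-triggered mini-lessons could look like in code, the following Python example registers lessons against context tags and looks them up when the MR agent detects a matching context; the data model, tags, and lessons are hypothetical, not the framework's actual design.

    # Hypothetical sketch: mini-lessons keyed by the learning context in
    # which the MR agent should surface them.

    from dataclasses import dataclass


    @dataclass
    class MiniLesson:
        concept: str
        context_tag: str     # context in which the lesson should appear
        modalities: tuple    # e.g. ("text", "audio", "3d-overlay")


    LESSONS = [
        MiniLesson("refraction", "glass_of_water", ("text", "3d-overlay")),
        MiniLesson("simple circuits", "light_switch", ("audio", "3d-overlay")),
    ]


    def lessons_for_context(detected_context: str) -> list:
        """Return every mini-lesson registered for the detected context."""
        return [lesson for lesson in LESSONS
                if lesson.context_tag == detected_context]


    # The headset's scene understanding reports a context tag; the agent
    # responds with any matching mini-lessons.
    for lesson in lessons_for_context("glass_of_water"):
        print(f"Offer lesson on {lesson.concept} via {lesson.modalities}")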

Interaction-Triggered Estimation of AR Object Placement on Indeterminate Meshes

John Luksas, Joseph L. Gabbard

Current augmented reality devices rely heavily on live environment mapping to provide convincing world-relative experiences through user interaction with the real world. This mapping is obtained and updated through many different algorithms but often contains holes and other mesh artifacts when generated in less-than-ideal scenarios, such as outdoors or under fast movement. In this paper, we present Interaction-Triggered Estimation of AR Object Placement on Indeterminate Meshes, a work-in-progress application providing a quick, interaction-triggered method to estimate the normal and position of missing mesh in real time with low computational overhead. We achieve this by extending the user's hand with a group of additional raycast sample points, aggregating the results according to different algorithms, and then using the resulting values to place an object.
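
The aggregation step can be illustrated with a short Python sketch: given hit points from several rays cast around the hand, a least-squares plane fit yields an estimated surface position and normal. The ring-shaped sampling pattern and the SVD-based fit are plausible stand-ins for the "different algorithms" mentioned above, not necessarily the authors' choices.

    # Illustrative sketch: estimate a missing surface from raycast samples.

    import numpy as np


    def estimate_surface(hit_points: np.ndarray):
        """Fit a plane to raycast hit points; return (centroid, unit normal)."""
        centroid = hit_points.mean(axis=0)
        # The smallest singular vector of the centered points is the normal
        # of the least-squares plane through them.
        _, _, vt = np.linalg.svd(hit_points - centroid)
        normal = vt[-1]
        if normal[1] < 0:          # orient the normal toward +y for consistency
            normal = -normal
        return centroid, normal


    # Hits from, say, five rays cast in a ring around the hand's pointing ray;
    # here the true surface is a slightly noisy horizontal plane at y = 1.
    hits = np.array([[0.0, 1.00, 0.0],
                     [0.2, 1.01, 0.0],
                     [0.0, 0.99, 0.2],
                     [-0.2, 1.00, 0.0],
                     [0.0, 1.02, -0.2]])
    position, normal = estimate_surface(hits)
    print(position, normal)  # place the AR object at position, aligned to normal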

Bumpy Sliding: An MR System for Experiencing Sliding Down a Bumpy Cliff

Hiroki Tsunekawa, Akihiro Matsuura

We present an interactive MR system with which a player can experience the cartoon-physics scene of stabbing a knife into a wall to survive while sliding down a bumpy cliff, with stimuli delivered mainly to the hand and arm. We developed two special devices: a wall device that mimics the cliff surface and moves upwards, and a knife-shaped device with a retractable blade with which a player can act out stabbing the wall surface. The speed of sliding is controlled using pressure data obtained at the tip of the blade, and the impact of hitting a bumpy spot is rendered using a push solenoid attached to the handle of the knife device.
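
A minimal Python sketch of the control loop this implies is shown below: blade-tip pressure slows the wall's upward motion, and crossing a bump position fires the handle solenoid. All constants, the pressure-to-speed mapping, and the bump layout are illustrative assumptions rather than the system's actual firmware.

    # Hypothetical control-loop sketch for the Bumpy Sliding devices.

    MAX_SPEED = 1.0                    # wall speed (m/s) with no blade pressure
    BUMP_POSITIONS = [0.8, 1.9, 3.1]   # wall positions (m) of bumpy spots


    def wall_speed(pressure: float) -> float:
        """More blade pressure -> slower sliding, clamped at zero."""
        return max(0.0, MAX_SPEED * (1.0 - pressure))  # pressure in [0, 1]


    def step(position: float, pressure: float, dt: float) -> tuple:
        """Advance the wall one tick; return new position and solenoid flag."""
        new_position = position + wall_speed(pressure) * dt
        hit_bump = any(position < b <= new_position for b in BUMP_POSITIONS)
        return new_position, hit_bump


    pos = 0.0
    while pos < 2.0:                   # simulate sliding past the first bumps
        pos, fire = step(pos, pressure=0.3, dt=0.05)
        if fire:
            print(f"bump at {pos:.2f} m -> pulse handle solenoid")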
