Image of the Belém Tower in Lisbon with a view of the Tagus river, alongside the IEEE VR Lisbon 2021 logo with the motto: make virtual reality diverse and accessible.

IEEE VR 2021 Conference Awards

Awards presented at the annual IEEE VR 2021 conference fall into two categories:

  • Awards sponsored by the Visualization and Graphics Technical Committee of the IEEE Computer Society, and
  • Awards sponsored by the conference itself.

Award winners are selected by various means including people’s choice voting, invited judges, and formal selection committees. That information is included in the award presentation materials that follow.

  • VGTC Awards
      • TVCG - Best Journal Papers
      • TVCG - Honorable Mentions
      • TVCG - Best Journal - Nominees
  • Papers
      • Best Conference Papers
      • Honorable Mentions
      • Nominees
  • Posters
      • Best Poster
      • Honorable Mention
  • Demos
      • Best Demo
      • Honorable Mention
      • People's Choice
  • 3DUI Contest
      • Best 3DUI
  • Doctoral Consortium
      • Best Presentation
  • Best Dissertation
      • Best Dissertation
      • Honorable Mention
  • Ready Player 21
      • Winner

TVCG - Best Journal Papers

DeProCams: Simultaneous Relighting, Compensation and Shape Reconstruction for Projector-Camera Systems

Bingyao Huang: Stony Brook University; Haibin Ling: Stony Brook University

Image-based relighting, projector compensation and depth/normal reconstruction are three important tasks of projector-camera systems (ProCams) and spatial augmented reality (SAR). Although they share a similar pipeline of finding projector-camera image mappings, they have traditionally been addressed independently, sometimes with different prerequisites, devices and sampling images. We propose a novel end-to-end trainable model named DeProCams to explicitly learn the photometric and geometric mappings of ProCams, and once trained, DeProCams can be applied simultaneously to the three tasks. By solving the three tasks in a unified model, DeProCams waives the need for additional optical devices, radiometric calibrations, and structured light.

Lenslet VR: Thin, Flat and Wide-FOV Virtual Reality Display Using Fresnel Lens and Lenslet Array

Kiseung Bang: Seoul National University; Youngjin Jo: Seoul National University; Minseok Chae: Seoul National University; Byoungho Lee: Seoul National University

We propose a new thin and flat VR display design using a Fresnel lenslet array, a Fresnel lens, and a polarization-based optical folding technique. The proposed optical system has a wide FOV of 102°×102°, a wide eye-box of 8.8 mm, and an ergonomic eye-relief of 20 mm while having a few millimeters of system thickness. We demonstrate an 8.8-mm-thick VR glasses prototype and experimentally verify that the proposed VR display system has the expected performance while having a glasses-like form factor.

TVCG - Honorable Mentions

ARC: Alignment-based Redirection Controller for Redirected Walking in Complex Environments

Niall L. Williams: University of Maryland, College Park; Aniket Bera: University of Maryland at College Park; Dinesh Manocha: University of Maryland

We present a novel redirected walking controller based on alignment that allows the user to explore large, complex virtual environments, while minimizing the number of collisions in the physical environment. Our alignment-based redirection controller (ARC) steers the user such that their proximity to obstacles in the physical environment matches the proximity to obstacles in the virtual environment as closely as possible. We introduce a new metric to measure the relative environment complexity and characterize the difference in navigational complexity between the physical and virtual environments. ARC significantly outperforms state-of-the-art controllers and works in arbitrary environments without any user input.

A Log-Rectilinear Transformation for Foveated 360-degree Video Streaming

David Li: University of Maryland; Ruofei Du: Google; Adharsh Babu: University of Maryland, College Park; Camelia D. Brumar: University of Maryland; Amitabh Varshney: University of Maryland, College Park

With the rapidly increasing resolutions of 360° cameras, head-mounted displays, and live-streaming services, streaming high-resolution panoramic videos over limited-bandwidth networks is becoming a critical challenge. Foveated video streaming can address this rising challenge in the context of eye-tracking-equipped virtual reality head-mounted displays. However, conventional log-polar foveated rendering suffers from a number of visual artifacts such as aliasing and flickering. In this paper, we introduce a new log-rectilinear transformation that incorporates summed-area table filtering and off-the-shelf video codecs to enable foveated streaming of 360° videos suitable for VR headsets with built-in eye-tracking.

TVCG - Best Journal - Nominees

A Log-Rectilinear Transformation for Foveated 360-degree Video Streaming

David Li: University of Maryland; Ruofei Du: Google; Adharsh Babu: University of Maryland, College Park; Camelia D. Brumar: University of Maryland; Amitabh Varshney: University of Maryland, College Park

With the rapidly increasing resolutions of 360° cameras, head-mounted displays, and live-streaming services, streaming high-resolution panoramic videos over limited-bandwidth networks is becoming a critical challenge. Foveated video streaming can address this rising challenge in the context of eye-tracking-equipped virtual reality head-mounted displays. However, conventional log-polar foveated rendering suffers from a number of visual artifacts such as aliasing and flickering. In this paper, we introduce a new log-rectilinear transformation that incorporates summed-area table filtering and off-the-shelf video codecs to enable foveated streaming of 360° videos suitable for VR headsets with built-in eye-tracking.

A privacy-preserving approach to streaming eye-tracking data

Brendan David-John: University of Florida; Diane Hosfelt: Mozilla; Kevin Butler: University of Florida; Eakta Jain: University of Florida

Eye-tracking technology is being increasingly integrated into mixed reality devices. Although critical applications are being enabled, there are significant possibilities for violating user privacy expectations. We propose a framework that incorporates gatekeeping via the design of the application programming interface and via software-implemented privacy mechanisms. Our results indicate that these mechanisms can reduce the rate of identification from as much as 85% to as low as 30%. The impact of introducing these mechanisms is less than 1.5° error in gaze prediction. Our approach is the first to support privacy-by-design in the flow of eye-tracking data within mixed reality use cases.

ARC: Alignment-based Redirection Controller for Redirected Walking in Complex Environments

Niall L. Williams: University of Maryland, College Park; Aniket Bera: University of Maryland at College Park; Dinesh Manocha: University of Maryland

We present a novel redirected walking controller based on alignment that allows the user to explore large, complex virtual environments, while minimizing the number of collisions in the physical environment. Our alignment-based redirection controller (ARC) steers the user such that their proximity to obstacles in the physical environment matches the proximity to obstacles in the virtual environment as closely as possible. We introduce a new metric to measure the relative environment complexity and characterize the difference in navigational complexity between the physical and virtual environments. ARC significantly outperforms state-of-the-art controllers and works in arbitrary environments without any user input.

Beaming Displays

Yuta Itoh: Tokyo Institute of Technology; Takumi Kaminokado: Tokyo Institute of Technology; Kaan Akşit: University College London

We present the beaming displays, a new type of near-eye display system that uses a projector and an all-passive wearable headset. We modify an off-the-shelf projector with additional lenses. We install such a projector in the environment to beam images from a distance to a passive wearable headset. The beaming projection system tracks the current position of a wearable headset to project distortion-free images with correct perspectives. In our system, a wearable headset guides the beamed images to a user's retina, which are then perceived as an augmented scene within a user's field of view.

DeProCams: Simultaneous Relighting, Compensation and Shape Reconstruction for Projector-Camera Systems

Bingyao Huang: Stony Brook University; Haibin Ling: Stony Brook University

Image-based relighting, projector compensation and depth/normal reconstruction are three important tasks of projector-camera systems (ProCams) and spatial augmented reality (SAR). Although they share a similar pipeline of finding projector-camera image mappings, they have traditionally been addressed independently, sometimes with different prerequisites, devices and sampling images. We propose a novel end-to-end trainable model named DeProCams to explicitly learn the photometric and geometric mappings of ProCams, and once trained, DeProCams can be applied simultaneously to the three tasks. By solving the three tasks in a unified model, DeProCams waives the need for additional optical devices, radiometric calibrations, and structured light.

EllSeg: An Ellipse Segmentation Framework for Robust Gaze Tracking

Rakshit Sunil Kothari: Rochester Institute of Technology; Aayush Kumar Chaudhary: Rochester Institute of Technology; Reynold Bailey: Rochester Institute of Technology; Jeff B Pelz: Rochester Institute of Technology; Gabriel Jacob Diaz: Rochester Institute of Technology

Ellipse fitting, an essential component in video oculography, is performed on previously segmented eye parts generated using computer vision techniques. Occlusions due to eyelid shape, camera position, or eyelashes break ellipse fitting algorithms that rely on edge segments. We propose training a convolutional neural network to segment entire elliptical structures and demonstrate that such a framework is robust to occlusions and offers superior pupil and iris tracking performance (at least 10% and 24% increase in pupil and iris center detection rate within a two-pixel error margin) compared to standard eye parts segmentation for multiple publicly available synthetic segmentation datasets.

Evidence of Racial Bias Using Immersive Virtual Reality: Analysis of Head and Hand Motions During Shooting Decisions

Tabitha C. Peck: Davidson College; Jessica J Good: Davidson College; Katharina Seitz: Davidson College

Shooter bias is the tendency to more quickly shoot at unarmed Black suspects compared to unarmed White suspects. This research investigates the efficacy of shooter bias simulation studies in an immersive virtual scenario and presents results from a user study (N=99) investigating shooter and racial bias. More nuanced head and hand motion analysis was able to predict participants' racial shooting accuracy and implicit racism scores. We discuss how these nuanced measures can be used to detect behavior changes in body-swap illusions, as well as the implications of this work for racial justice and police brutality.

FixationNet: Forecasting Eye Fixations in Task-Oriented Virtual Environments

Zhiming Hu: Peking University; Andreas Bulling: University of Stuttgart; Sheng Li: Peking University; Guoping Wang: Computer Science and Technology

Human visual attention in immersive virtual reality (VR) is key for many important applications, such as gaze-contingent rendering and gaze-based interaction. However, prior works typically focused on free-viewing conditions that have limited relevance for practical applications. We first collect eye tracking data of 27 participants performing a visual search task in four immersive VR environments. Then we provide a comprehensive analysis of the collected data and reveal correlations between users' eye fixations and other factors. Based on the analysis, we propose FixationNet -- a novel learning-based model to forecast users' eye fixations in the near future in VR.

Real-Time Omnidirectional Stereo Rendering: Generating 360° Surround-View Panoramic Images for Comfortable Immersive Viewing

Thomas Marrinan: University of St. Thomas; Michael E. Papka: Argonne National Laboratory

Omnidirectional stereo images (surround-view panoramas) can provide an immersive experience for pre-generated content, but typically suffer from binocular misalignment in the peripheral vision as a user's view direction approaches the north / south pole of the projection sphere. This paper presents a real-time geometry-based approach for omnidirectional stereo rendering that fits into the standard rendering pipeline. Our approach includes tunable parameters that enable pole merging -- a reduction in the stereo effect near the poles that can minimize binocular misalignment. Results from a user study indicate that pole merging reduces visual fatigue and discomfort associated with binocular misalignment without inhibiting depth perception.

Text Entry in Virtual Environments using Speech and a Midair Keyboard

Jiban Krishna Adhikary: Michigan Technological University; Keith Vertanen: Michigan Technological University

We investigate text input in virtual reality using hand-tracking and speech. Our system visualizes users' hands, allowing typing on an auto-correcting midair keyboard. It supports speaking a sentence and correcting errors by selecting alternative words proposed by a speech recognizer. Using only the keyboard, participants wrote at 11 words-per-minute at a 1.2% error rate. Speaking and correcting was faster and more accurate at 28 words-per-minute and a 0.5% error rate. Participants achieved this despite half of sentences containing an out-of-vocabulary word. For sentences with only in-vocabulary words, performance was even faster at 36 words-per-minute.

Lenslet VR: Thin, Flat and Wide-FOV Virtual Reality Display Using Fresnel Lens and Lenslet Array

Kiseung Bang: Seoul National University; Youngjin Jo: Seoul National University; Minseok Chae: Seoul National University; Byoungho Lee: Seoul National University

We propose a new thin and flat VR display design using a Fresnel lenslet array, a Fresnel lens, and a polarization-based optical folding technique. The proposed optical system has a wide FOV of 102°×102°, a wide eye-box of 8.8 mm, and an ergonomic eye-relief of 20 mm while having a few millimeters of system thickness. We demonstrate an 8.8-mm-thick VR glasses prototype and experimentally verify that the proposed VR display system has the expected performance while having a glasses-like form factor.

Best Conference Papers

Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents

Uttaran Bhattacharya: University of Maryland; Nick Rewkowski: University of Maryland; Abhishek Banerjee: University of Maryland, College Park; Pooja Guhan: University of Maryland College Park; Aniket Bera: University of Maryland at College Park; Dinesh Manocha: University of Maryland

We present Text2Gestures, a transformer-based learning method to interactively generate emotive full-body gestures for virtual agents aligned with text inputs. Our method generates emotionally expressive gestures by utilizing the relevant affective features for body expressions. We train our network on the MPI Emotional Body Expressions Database and observe state-of-the-art performance in generating emotive gestures. We conduct a web-based user study and observe that around 91% of participants indicated our generated gestures to be at least plausible on a five-point Likert Scale. The emotions perceived by the participants from the gestures are also strongly positively correlated with the corresponding intended emotions.

Mobile, Egocentric Human Body Motion Reconstruction Using Only Eyeglasses-mounted Cameras and a Few Body-worn Inertial Sensors

Young-Woon Cha: University of North Carolina at Chapel Hill; Husam Shaik: University of North Carolina at Chapel Hill; Qian Zhang: University of North Carolina at Chapel Hill; Fan Feng: University of North Carolina at Chapel Hill; Andrei State: University of North Carolina at Chapel Hill; Adrian Ilie: University of North Carolina at Chapel Hill; Henry Fuchs: UNC Chapel Hill

We envision a convenient telepresence system available to users anywhere, anytime, requiring displays and sensors embedded in commonly worn items such as eyeglasses, wristwatches, and shoes. To that end, we present a standalone real-time system for the dynamic 3D capture of a person, relying only on cameras embedded into a head-worn device, and on Inertial Measurement Units (IMUs) worn on the wrists and ankles. We demonstrate our system by reconstructing various human body movements. We captured an egocentric visual-inertial 3D human pose dataset, which we plan to make publicly available for training and evaluating similar methods.

Comparing the Neuro-Physiological Effects of Cinematic Virtual Reality with 2D Monitors

Ruochen Cao: University of South Australia; Lena Zou-Williams: University of South Australia; Andrew Cunningham: University of South Australia; James A. Walsh: University of South Australia; Mark Kohler: University of Adelaide; Bruce H Thomas: University of South Australia

In this work, we explore if the immersion afforded by Virtual Reality can improve the cognitive integration of information in Cinematic Virtual Reality (CVR). We conducted a user study examining participants' cognitive activities when consuming visual information of emotional and emotionally neutral scenes in a non-CVR environment (i.e. a monitor) versus a CVR environment (i.e. a head-mounted display). Cortical response was recorded using electroencephalography. We found that participants had greater early visual attention with neutral emotions in CVR environments, and showed higher overall alpha power in CVR environments. The use of CVR did not significantly affect participants' recall performance.

Conference Papers - Honorable Mentions

Bidirectional Shadow Rendering for Interactive Mixed 360° Videos

Lili Wang: Beihang University; Hao Wang: Beihang University; Danqing Dai: Beihang University; Jiaye Leng: Beihang University; Xiaoguang Han: Shenzhen Research Institute of Big Data

In this paper, we provide a bidirectional shadow rendering method to render shadows between real and virtual objects in 360° videos in real time. We construct a 3D scene approximation from the current output viewpoint to approximate the real scene geometry nearby in the video. Then, we propose a ray casting based algorithm to determine the shadow regions on the virtual objects cast by the real objects. After that, we introduce an object-aware shadow mapping method to cast shadows from virtual objects to real objects. Finally, we use a shadow intensity estimation algorithm to determine the shadow intensity of virtual objects and real objects to obtain shadows consistent with the input 360° video.

Influence of Interactivity and Social Environments on User Experience and Social Acceptability in Virtual Reality

Maurizio Vergari: Technische Universität Berlin; Tanja Kojic: Technische Universität Berlin; Francesco Vona: Politecnico di Milano; Franca Garzotto: Politecnico di Milano; Sebastian Möller: Technische Universität Berlin; Jan-Niklas Voigt-Antons: Technische Universität Berlin

Nowadays, Virtual Reality (VR) technology can potentially be used everywhere. Nevertheless, it is still uncommon to see VR devices in public settings. In these contexts, unaware bystanders in the surroundings might influence the User Experience (UX) and create concerns about the social acceptability of this technology. This paper investigates the influence of Social Environments and degree of interactivity on User Experience and social acceptability. Four Social Environments were simulated using 360° videos, and two VR games were developed with two levels of interactivity. Findings indicate that Social Environments and degree of interactivity should be taken into account while designing VR applications.

Revealable Volume Displays: 3D Exploration of Mixed-Reality Public Exhibitions

Fatma Ben Guefrech: Université de Lille; Florent Berthaut: Université de Lille; Patricia Plénacoste: Université de Lille; Yvan Peter: Université Lille 1; Laurent Grisoni: University of Lille

We present a class of mixed-reality displays which allow for the 3D exploration of content in public exhibitions, that we call Revealable Volume Displays (RVD). They allow visitors to reveal information placed freely inside or around protected artefacts, visible by all, using their reflection in the panel. We first discuss the implementation of RVDs, providing both projector-based and mobile versions. We then present a design space that describes the interaction possibilities that they offer. Drawing on insights from a field study during a first exhibition, we finally propose and evaluate techniques for facilitating 3D exploration with RVDs.

VR-based Student Priming to Reduce Anxiety and Increase Cognitive Bandwidth

Daniel Hawes: Carleton University; Ali Arya: Carleton University

Recent research indicates that many post-secondary students feel overwhelming anxiety, negatively impacting academic performance and overall well-being. In this paper, based on multidisciplinary literature analysis and innovative ideas in cognitive science, learning models, and emerging technologies, we introduce a theoretical framework that shows how and when priming activities can be introduced in learning cycles to reduce anxiety and improve cognitive availability. This framework proposes a Virtual Reality-based priming approach that uses games and meditative interventions. Our results show this approach's potential for reducing anxiety compared to no-priming scenarios, and the significance of VR gaming in improving cognitive bandwidth.

Conference Papers - Nominees

Assessment of the Simulator Sickness Questionnaire for Omnidirectional Videos

Ashutosh Singla: TU Ilmenau; Steve Göring: TU Ilmenau; Dominik Keller: TU Ilmenau; Rakesh Rao Ramachandra Rao: TU Ilmenau; Stephan Fremerey: TU Ilmenau; Alexander Raake: TU Ilmenau

The SSQ is the most widely used questionnaire for the assessment of simulator sickness. Since the SSQ with its 16 questions was not designed for 360° video-related studies, our research hypothesis in this paper was that it may be simplified to enable more efficient testing for 360° video. Hence, we evaluate the SSQ based on six different previously conducted studies. We derive reduced sets of questions using PCA for each test. Pearson correlation is analysed to compare the relation of all obtained reduced questionnaires as well as two further variants of the SSQ reported in the literature, namely VRSQ and CSQ. Our analysis suggests that a minimum of 9 out of 16 questions yields sufficiently high agreement with the initial SSQ.

Bidirectional Shadow Rendering for Interactive Mixed 360° Videos

Lili Wang: Beihang University; Hao Wang: Beihang University; Danqing Dai: Beihang University; Jiaye Leng: Beihang University; Xiaoguang Han: Shenzhen Research Institute of Big Data

In this paper, we provide a bidirectional shadow rendering method to render shadows between real and virtual objects in 360° videos in real time. We construct a 3D scene approximation from the current output viewpoint to approximate the real scene geometry nearby in the video. Then, we propose a ray casting based algorithm to determine the shadow regions on the virtual objects cast by the real objects. After that, we introduce an object-aware shadow mapping method to cast shadows from virtual objects to real objects. Finally, we use a shadow intensity estimation algorithm to determine the shadow intensity of virtual objects and real objects to obtain shadows consistent with the input 360° video.

Comparing the Neuro-Physiological Effects of Cinematic Virtual Reality with 2D Monitors

Ruochen Cao: University of South Australia; Lena Zou-Williams: University of South Australia; Andrew Cunningham: University of South Australia; James A. Walsh: University of South Australia; Mark Kohler: University of Adelaide; Bruce H Thomas: University of South Australia

In this work, we explore if the immersion afforded by Virtual Reality can improve the cognitive integration of information in Cinematic Virtual Reality (CVR). We conducted a user study examining participants' cognitive activities when consuming visual information of emotional and emotionally neutral scenes in a non-CVR environment (i.e. a monitor) versus a CVR environment (i.e. a head-mounted display). Cortical response was recorded using electroencephalography. We found that participants had greater early visual attention with neutral emotions in CVR environments, and showed higher overall alpha power in CVR environments. The use of CVR did not significantly affect participants' recall performance.

Design and Evaluation of a Free-Hand VR-based Authoring Environment for Automated Vehicle Testing

Sevinc Eroglu: RWTH Aachen University; Frederic Stefan: Ford Motor Company; Alain M.R. Chevalier: Ford Research Center Aachen; Daniel Roettger: Ford Motor Company; Daniel Zielasko: University of Trier; Torsten Wolfgang Kuhlen: RWTH Aachen University; Benjamin Weyers: Trier University

We propose a VR authoring environment that enables engineers to design road networks and traffic scenarios for automated vehicle testing based on free-hand interaction. We present a 3D interaction technique for the efficient placement and selection of virtual objects that is employed on a 2D panel. We conducted a comparative user study in which our interaction technique outperformed existing approaches regarding precision and task completion time. Furthermore, we demonstrate the effectiveness of the system by a qualitative user study with domain experts.

Influence of Interactivity and Social Environments on User Experience and Social Acceptability in Virtual Reality

Maurizio Vergari: Technische Universität Berlin; Tanja Kojic: Technische Universität Berlin; Francesco Vona: Politecnico di Milano; Franca Garzotto: Politecnico di Milano; Sebastian Möller: Technische Universität Berlin; Jan-Niklas Voigt-Antons: Technische Universität Berlin

Nowadays, Virtual Reality (VR) technology can potentially be used everywhere. Nevertheless, it is still uncommon to see VR devices in public settings. In these contexts, unaware bystanders in the surroundings might influence the User Experience (UX) and create concerns about the social acceptability of this technology. This paper investigates the influence of Social Environments and degree of interactivity on User Experience and social acceptability. Four Social Environments were simulated using 360° videos, and two VR games were developed with two levels of interactivity. Findings indicate that Social Environments and degree of interactivity should be taken into account while designing VR applications.

Mobile, Egocentric Human Body Motion Reconstruction Using Only Eyeglasses-mounted Cameras and a Few Body-worn Inertial Sensors

Young-Woon Cha: University of North Carolina at Chapel Hill; Husam Shaik: University of North Carolina at Chapel Hill; Qian Zhang: University of North Carolina at Chapel Hill; Fan Feng: University of North Carolina at Chapel Hill; Andrei State: University of North Carolina at Chapel Hill; Adrian Ilie: University of North Carolina at Chapel Hill; Henry Fuchs: UNC Chapel Hill

We envision a convenient telepresence system available to users anywhere, anytime, requiring displays and sensors embedded in commonly worn items such as eyeglasses, wristwatches, and shoes. To that end, we present a standalone real-time system for the dynamic 3D capture of a person, relying only on cameras embedded into a head-worn device, and on Inertial Measurement Units (IMUs) worn on the wrists and ankles. We demonstrate our system by reconstructing various human body movements. We captured an egocentric visual-inertial 3D human pose dataset, which we plan to make publicly available for training and evaluating similar methods.

Revealable Volume Displays: 3D Exploration of Mixed-Reality Public Exhibitions

Fatma Ben Guefrech: Université de Lille; Florent Berthaut: Université de Lille; Patricia Plénacoste: Université de Lille; Yvan Peter: Université Lille 1; Laurent Grisoni: University of Lille

We present a class of mixed-reality displays which allow for the 3D exploration of content in public exhibitions, that we call Revealable Volume Displays (RVD). They allow visitors to reveal information placed freely inside or around protected artefacts, visible by all, using their reflection in the panel. We first discuss the implementation of RVDs, providing both projector-based and mobile versions. We then present a design space that describes the interaction possibilities that they offer. Drawing on insights from a field study during a first exhibition, we finally propose and evaluate techniques for facilitating 3D exploration with RVDs.

Sensemaking Strategies with the Immersive Space to Think

Lee Lisle: Virginia Tech; Kylie Davidson: Virginia Tech; Chris North: Virginia Tech; Doug Bowman: Virginia Tech; Edward J.K. Gitre: Virginia Tech

The process of sensemaking is a cognitively intensive task that involves foraging through and extracting information from large sets of documents. A recent approach, the Immersive Space to Think (IST), allows analysts to read, mark up documents, and use immersive 3D space to organize and label collections of documents. We observed seventeen novice analysts perform a sensemaking task in order to understand how users utilize the features of IST to extract meaning from large text-based datasets. We found three different layout strategies they employed to create meaning with the documents we provided, and found patterns of interaction and organization that can inform future improvements to the IST approach.

Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents

Uttaran Bhattacharya: University of Maryland; Nick Rewkowski: University of Maryland; Abhishek Banerjee: University of Maryland, College Park; Pooja Guhan: University of Maryland College Park; Aniket Bera: University of Maryland at College Park; Dinesh Manocha: University of Maryland

We present Text2Gestures, a transformer-based learning method to interactively generate emotive full-body gestures for virtual agents aligned with text inputs. Our method generates emotionally expressive gestures by utilizing the relevant affective features for body expressions. We train our network on the MPI Emotional Body Expressions Database and observe state-of-the-art performance in generating emotive gestures. We conduct a web-based user study and observe that around 91% of participants indicated our generated gestures to be at least plausible on a five-point Likert Scale. The emotions perceived by the participants from the gestures are also strongly positively correlated with the corresponding intended emotions.

Using Virtual Reality to Support Acting in Motion Capture with Differently Scaled Characters

Robin Kammerlander: Department of Speech, Music and Hearing; Andre Pereira: KTH Royal Institute of Technology; Simon Alexanderson: KTH Royal Institute of Technology

Motion capture is a well-established technology for capturing actors' performances within the entertainment industry. Instead of detailed sets, costumes and props, actors play in empty spaces wearing tight suits. Often, their co-actors are imaginary, replaced by placeholder props, or even out of scale with their virtual counterparts. We propose using a combination of virtual reality and motion capture technology to bring differently proportioned characters into a shared collaborative virtual environment. The results show that our proposed platform enhances actors' feelings of body ownership and immersion, changing their performances and narrowing the gap between virtual performances and final intended animations.

VR-based Student Priming to Reduce Anxiety and Increase Cognitive Bandwidth

Daniel Hawes: Carleton University; Ali Arya: Carleton University

Recent research indicates that many post-secondary students feel overwhelming anxiety, negatively impacting academic performance and overall well-being. In this paper, based on multidisciplinary literature analysis and innovative ideas in cognitive science, learning models, and emerging technologies, we introduce a theoretical framework that shows how and when priming activities can be introduced in learning cycles to reduce anxiety and improve cognitive availability. This framework proposes a Virtual Reality-based priming approach that uses games and meditative interventions. Our results show this approach's potential for reducing anxiety compared to no-priming scenarios, and the significance of VR gaming in improving cognitive bandwidth.

Best Poster

Play with Emotional Characters: Improving User Emotional Experience by A Data-driven Approach in VR Volleyball Games

Zechen Bai: Institute of Software, Chinese Academy of Sciences; Naiming Yao: Institute of Software, Chinese Academy of Sciences; Nidhi Mishra: Nanyang Technological University; Hui Chen: Institute of Software, Chinese Academy of Sciences; Hongan Wang: Institute of Software, Chinese Academy of Sciences; Nadia Magnenat Thalmann: Nanyang Technological University

In real-world volleyball games, players are generally aware of the emotions of other players as they can observe facial expressions, body behaviors, etc., which evokes a rich emotional experience. However, most VR volleyball games mainly concentrate on modeling the game playing rather than supporting an emotional experience. We introduce a data-driven framework to enhance the user's emotional experience and engagement by building emotional virtual characters in VR volleyball games. This framework enables virtual characters to arouse emotions according to the game state and express emotions through facial expressions. Evaluation results demonstrate that our framework enhances users' emotional experience and engagement.

Posters - Honorable Mention

Cognitive Load/flow and Performance in Virtual Reality Simulation Training of Laparoscopic Surgery

Peng Yu: Beihang University; Junjun Pan: Beihang University; Zhaoxue Wang: Beijing Normal University; Yang Shen: National Engineering Laboratory for Cyberlearning and Intelligent Technology, Faculty of Education; Lili Wang: Beihang University; Jialun Li: Beihang University; Aimin Hao: Beihang University; Haipeng Wang: Beijing General Aerospace Hospital

VR-based laparoscopic surgical simulators (VRLS) are increasingly popular for training surgeons. However, in most research they are validated by subjective methods. In this paper, we resort to physiological approaches to objectively and quantitatively analyze the influence and performance of a VRLS training system. The results show that the VRLS significantly improves medical students' performance (p<0.01) and enables participants to obtain a flow experience with a lower cognitive load. Quantitative physiological analysis shows that participants' performance is negatively correlated with cognitive load.

Best Demo

Demonstrating Rapid Touch Interaction in Virtual Reality through Wearable Touch Sensing

Manuel Meier: ETH Zürich; Paul Streli: ETH Zürich; Andreas Rene Fender: ETH Zürich; Christian Holz: ETH Zürich

We bring quick touch interaction to Virtual Reality, illustrating the beneficial use of rapid tapping, typing, and surface gestures for Virtual Reality. The productivity scenarios that become possible are reminiscent of apps that exist on today's tablets. We use a wrist-worn prototype to complement the optical hand tracking from VR headsets with inertial sensing to detect touch events on surfaces. Our demonstration comprises UI control in word processors, web browsers, and document editors.

Demos - Honorable Mention

Magnoramas

Kevin Yu: Research Group MITI; Alexander Winkler: Technical University of Munich; Frieder Pankratz: LMU; Marc Lazarovici: Institut für Notfallmedizin; Prof. Dirk Wilhelm: Research Group MITI; Ulrich Eck: Computer Aided Medical Procedures and Augmented Reality; Daniel Roth: Computer Aided Medical Procedures and Augmented Reality; Nassir Navab: Technische Universität München

We introduce Magnoramas, an interaction method for creating supernaturally precise annotations on virtual objects. We evaluated Magnoramas in a collaborative context in a simplified clinical scenario: teleconsultation between a remote expert, immersed in a 3D reconstruction and embodied by an avatar in Virtual Reality, and a local user in Augmented Reality. The results show that Magnoramas significantly improve the precision of annotations while preserving usability and perceived presence measures compared to the baseline method. By additionally hiding the physical world while keeping the Magnorama, users can intentionally lower their perceived social presence and focus on their tasks.

Demos - People's Choice

Shared Augmented Reality Experience Between a Microsoft Flight Simulator User and a User in the Real World

Christoph Leuze: Nakamir Inc; Matthias Leuze: Alpinschule Innsbruck

Our demo is an application that allows a user with an AR display (smartphone or HoloLens 2) to watch another user, flying an airplane in Microsoft Flight Simulator 2020 (MSFS), at their respective location in the real world. To do that, we take the location of a plane in MSFS and stream it via a server to a mobile AR device. The mobile device user can then see the same 3D plane model move at exactly the real-world location that corresponds to the plane's virtual MSFS location.

3DUI Contest - Best 3DUI

Fantastic Voyage 2021: Using Interactive VR Storytelling to Explain Targeted COVID-19 Vaccine Delivery to Antigen-presenting Cells

Lei Zhang: Virginia Tech; Feiyu Lu: Virginia Tech; Ibrahim Asadullah Tahmid: Virginia Tech; Lee Lisle: Virginia Tech; Shakiba Davari: Virginia Tech; Nicolas Gutkowski: Virginia Tech; Luke Schlueter: Virginia Tech; Doug Bowman: Virginia Tech

Doctoral Consortium - Best Presentation

“SHOW YOUR DEDICATION:” VR Games and Outmersion

P.S. Berge
Dept. of Texts & Technology, University of Central Florida, Orlando, Florida, United States

Best Dissertation

A Framework for Enhancing the Sense of Presence in Virtual and Mixed Reality

Misha Sra
Massachusetts Institute of Technology
Advisor: Pattie Maes

Best Dissertation - Honorable Mention

Optimal Spatial Registration of SLAM for Augmented Reality

Folker Wientapper
Technical University of Darmstadt
Advisor: Arjan Kuijper

Ready Player 21 - Winner

Xiaodan Hu

Nara Institute of Science and Technology, Ikoma, Japan

Conference Sponsors

Diamond

Virbela

Gold

Instituto Superior Técnico
Immersive Learning Research Network

Silver

Qualcomm

Bronze

Vicon
HITLab
Microsoft
Appen
Facebook Reality Labs
XR Bootcamp

Supporter

GPCG
INESC-ID
NVIDIA

Doctoral Consortium Sponsors

NSF
Fakespace

Conference Partner

CIO Applications Europe

