March 22nd - 26th
Shengzhi Wu, Siyu Chen, Mong-Yah Hsieh, Conor Triplett, & Calla Carter
“The Other Way” is a VR interactive storytelling experience created for the HTC Vive. The central interaction is biking: a VR tracker tied to the user’s foot measures pedaling speed, and a second tracker mounted on the bike’s handlebar captures the steering direction. With the wheel lifted off the ground, users can physically pedal to explore the VR world. The story follows a little girl who has lost her way in a park and tries to overcome her fear and find her mom. Users embody the girl and bike through a beautiful, windy forest to find their way out. The experience offers a novel approach to interacting with virtual reality and a fully immersive biking experience that merges the physical and digital worlds. To make the experience more realistic, we also explored placing a fan in front of the user, creating the illusion of wind and a sense of speed. The story takes place in an open world with no single way to complete it, so it offers a unique experience to every audience member each time.
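As a rough illustration of how the two trackers could drive locomotion (a minimal sketch under our own assumptions; the update scheme, turn gain, and simple bicycle-like model below are not taken from the authors' implementation), pedaling speed can be estimated from the foot tracker's displacement per frame and steering from the handlebar tracker's yaw:

```python
import math

def pedal_speed(prev_foot_pos, curr_foot_pos, dt):
    """Estimate pedaling speed (m/s) from the foot tracker's displacement
    between two frames. prev/curr are (x, y, z) positions, dt is frame time."""
    dx, dy, dz = (c - p for c, p in zip(curr_foot_pos, prev_foot_pos))
    return math.sqrt(dx * dx + dy * dy + dz * dz) / dt

def steering_angle(handlebar_yaw_deg, neutral_yaw_deg):
    """Steering angle in degrees relative to a calibrated neutral pose
    of the handlebar tracker, wrapped to the range [-180, 180)."""
    return ((handlebar_yaw_deg - neutral_yaw_deg + 180.0) % 360.0) - 180.0

def advance_rider(pos, heading_deg, speed, steer_deg, dt, turn_gain=0.5):
    """Move the virtual rider forward along its heading and turn it
    proportionally to the steering angle (simple bicycle-like model)."""
    heading_deg += steer_deg * turn_gain * dt
    rad = math.radians(heading_deg)
    x, z = pos
    x += speed * math.sin(rad) * dt
    z += speed * math.cos(rad) * dt
    return (x, z), heading_deg
```

In practice the per-frame foot speed would likely be smoothed over a pedal cycle before being applied, since the foot's instantaneous velocity oscillates as it moves around the crank.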
Volker Kuchelmeister, Jill Bennett, Natasha Ginnivan, Gail Kenning, Christopher Papadopoulos, Bec Dean, & Melissa Neidorf
The Visit is an interactive real-time Virtual Reality experience, developed from a ground-breaking research project conducted by artists and psychologists working with women living with dementia. Visitors are invited to sit with Viv, a life-sized, realistic animated character whose dialogue is created largely from verbatim interviews, drawing us into a world of perceptual uncertainty, while at the same time confounding stereotypes and confronting fears about dementia. The characterisation has scientific validity but also the qualities of a rich, emotion-driven film narrative. The point of the work is to draw the viewer into the emotional/perceptual world of Viv. Like the women who co-created her, Viv experiences various dementia-related symptoms, including hallucinations and confabulation. She is also insightful and reflective. Viv is living a life and coming to terms with a neurological change.
Alvaro Villegas, Pablo Perez, Redouane Kachach, Francisco Pereira, & Ester Gonzalez-Sosa
Typical mixed reality applications use a simulated mechanism for manipulating objects in the virtual environment, through VR controllers held in the user’s hands, without real touch. As an alternative, within our Distributed Reality concept [3] we propose Real Haptics, a novel interaction method based on two key ideas: providing real user embodiment in virtual spaces by segmenting the user’s real hands, and allowing the real manipulation of physical objects, which in the virtual scene may preserve or change their real appearance. To evaluate the concept, we deployed a functional prototype in Unity, using color-based algorithms for object segmentation [2] and the ArUco library for object tracking [1]. On this platform we built a gamified testbed: a five-minute escape room game, which 53 users played twice, once with Real Haptics and once with purely virtual objects and avatars using VR controllers. After each run, users answered a survey derived from standard questionnaires measuring presence, embodiment, and quality of experience. This information, plus quantitative data related to performance, validated our hypothesis that Real Haptics significantly improves presence or embodiment with respect to the counterpart virtual solution. We expect our technology to provide a significant advance in the control of virtual environments, with a first potential focus on training applications.
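As a hedged sketch of the two building blocks named above (not the authors' code; the HSV range, marker dictionary, and OpenCV version are our own placeholder assumptions), color-based segmentation and ArUco marker detection can be prototyped roughly as follows:

```python
import cv2
import numpy as np

# Placeholder HSV range for the color worn by hands / physical props;
# in practice this would be calibrated per scene and lighting setup.
LOWER_HSV = np.array([35, 60, 60])
UPPER_HSV = np.array([85, 255, 255])

# ArUco detector (OpenCV >= 4.7 object-oriented API; older versions
# expose cv2.aruco.detectMarkers() as a free function instead).
ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
DETECTOR = cv2.aruco.ArucoDetector(ARUCO_DICT, cv2.aruco.DetectorParameters())

def segment_real_content(frame_bgr):
    """Return a binary mask of pixels matching the target color, used to
    composite the real hands/objects into the virtual scene."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask

def track_objects(frame_bgr):
    """Detect ArUco markers attached to physical objects and return their
    ids and image-space corners, from which object poses can be estimated."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = DETECTOR.detectMarkers(gray)
    return ids, corners
```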
Grégoire Dupont de Dinechin & Alexis Paljic
This video submission illustrates the Core Open Lab on Image-Based Rendering Innovation for Virtual Reality (COLIBRI VR), an open-source toolkit we developed to help authors render photographs of real-world people, objects, and places as responsive 3D assets in VR. We integrated COLIBRI VR as a package for the Unity game engine: in this way, the toolset’s methods can easily be accessed from a convenient graphical user interface and used in conjunction with the game engine’s built-in tools to quickly build interactive virtual reality experiences. Our primary goal is to help users render real-world photographs in VR in a way that provides view-dependent rendering effects and compelling motion parallax. For instance, COLIBRI VR can be used to render captured specular highlights, such as the bright reflections on the facets of a mineral. It also provides motion parallax from estimated geometry, e.g. from a depth map associated with a 360° image. We achieve this by implementing efficient image-based rendering methods, which we optimize to run at high framerates for VR. We make the toolkit openly available online, so that it can be used to learn about and apply image-based rendering in the context of virtual reality content creation more easily.
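To make the geometric step behind such parallax concrete (a minimal sketch, not COLIBRI VR's actual implementation; the depth encoding in meters and the axis convention are assumptions), an equirectangular depth map can be back-projected into one 3D point per pixel, which then serves as per-vertex geometry for rendering:

```python
import numpy as np

def equirect_depth_to_points(depth):
    """Back-project an equirectangular depth map (H x W, meters) into 3D
    points around the capture position. Each pixel is mapped to a direction
    on the unit sphere and scaled by its depth, giving the geometry needed
    for motion parallax when the VR viewpoint moves away from the capture."""
    h, w = depth.shape
    u = (np.arange(w) + 0.5) / w          # [0, 1) across the image width
    v = (np.arange(h) + 0.5) / h          # [0, 1) down the image height
    lon = (u - 0.5) * 2.0 * np.pi         # longitude in [-pi, pi)
    lat = (0.5 - v) * np.pi               # latitude in [-pi/2, pi/2]
    lon, lat = np.meshgrid(lon, lat)      # per-pixel angles, shape (h, w)
    dirs = np.stack([np.cos(lat) * np.sin(lon),   # x
                     np.sin(lat),                  # y (up)
                     np.cos(lat) * np.cos(lon)],   # z (forward)
                    axis=-1)
    return dirs * depth[..., None]
```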
Jie Guan, Nadine Lessio, Yiyi Shao, & Dr. Alexis Morris
Smart hyper-connected environments are becoming a central part of daily life in modern society. Such environments apply the internet-of-things (IoT) paradigm [1], which refers to the growing field of interconnected devices and the networking that supports smart, embedded applications. The IoT has many human-computer interaction (HCI) challenges [2], however, and central to these challenges is the need to provide more human-friendly approaches to communicating sensor information and meaningful visualizations of contextual states to users of IoT systems. Highly expressive and engaging smart-environment interfaces are uncommon, and this work applies mixed reality as a tool to better visualize and express the underlying behaviors and states within IoT smart devices. This extends the authors’ previous research [3], providing a new head-mounted display framework and interconnection architecture for an augmented reality representation of a physical IoT device, an IoT Avatar. The video submission demonstrates two contributions: i) an exploration of how mixed reality can be used to enrich smart spaces and hybrid objects, and ii) an early use case and functionality evaluation of a simple avatar hybrid smart object that expresses emotion through immersive media. It is expected that this research will help foster immersive and engaging human-centered interaction in future smart environments.
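To make the idea of a device "expressing emotion" concrete (an illustrative sketch only; the sensor fields, thresholds, and emotion labels below are our assumptions, not part of the authors' framework), a simple rule-based mapping from device state to an avatar expression might look like:

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    """Snapshot of a hybrid smart object, e.g. as reported by its sensors."""
    temperature_c: float
    cpu_load: float       # 0.0 - 1.0
    online: bool

def avatar_emotion(state: DeviceState) -> str:
    """Map raw IoT state to an emotion label that the AR avatar rendered
    on the head-mounted display can express through face and body animation."""
    if not state.online:
        return "asleep"
    if state.temperature_c > 70.0:
        return "distressed"
    if state.cpu_load > 0.8:
        return "busy"
    if state.cpu_load < 0.1:
        return "bored"
    return "content"

# Example: a hot, heavily loaded device appears "distressed" to the user.
print(avatar_emotion(DeviceState(temperature_c=75.0, cpu_load=0.9, online=True)))
```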
Tuukka M. Takala, Yutaro Hirao, Hiroyuki Morikawa, & Takashi Kawai
Virtual reality (VR) training is an increasingly popular topic in sports science [1]. We introduce a VR application for martial arts training, which utilizes physics-based full-body interaction and extends our previous work on the subject [2]. Our use of full-body tracking and user-worn VR equipment without external tethering enables training of techniques that employ the lower body, e.g. kicking, grappling, and leg sweeps. Hence a wide variety of martial arts styles – from stand-up fighting to ground fighting – can be trained. The training application features virtual opponents and a motion-tracked user avatar, which is implemented using the RUIS toolkit [3]. The avatar’s and the virtual opponents’ body segments are part of a physics simulation that determines their final motion. This enables dynamic hand-to-hand combat, where the user’s punches, takedowns, holds, and other techniques affect the opponents in a convincing manner. The user can engage any number of virtual opponents at full power in no-holds-barred matches without risk of injury to fellow practitioners. In addition to partner training, sparring, and virtual target mitts, the application also includes novel VR training features: performance playback, a rhythm game, and controls for adjusting virtual opponent speed, strength, and knockout resistance. User-performed techniques can be recorded and then replayed dynamically by the virtual opponent using the physics simulation. Thus, the user can be their own training partner, and even fight themselves in the virtual arena.
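One common way to let a physics simulation determine the final motion of a tracked avatar or a replayed technique (a generic sketch, not the RUIS implementation; the gains and torque limit are placeholder values) is a proportional-derivative controller that drives each simulated body segment toward its target pose while remaining subject to contacts:

```python
def pd_joint_torque(target_angle, angle, angular_velocity,
                    kp=300.0, kd=20.0, max_torque=150.0):
    """Proportional-derivative torque (N*m) pulling one simulated joint
    toward a target angle (radians) taken either from live motion tracking
    or from a recorded technique replayed by a virtual opponent. Because
    the torque is bounded, contacts (blocks, holds, takedowns) can still
    push the body away from its target, which is what makes the resulting
    combat motion look physically plausible rather than scripted."""
    torque = kp * (target_angle - angle) - kd * angular_velocity
    return max(-max_torque, min(max_torque, torque))
```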
Anna Dining & Joe Geigel
In this video, we present Farewell to Dawn [1], an exemplar of Virtual Theatre, which we define as a live theatrical performance realized completely in a virtual space with contributors participating from different physical locales. The dance piece was performed live at the Rochester Institute of Technology in December 2016. This production combines virtual and augmented reality with motion capture to produce a live theatrical experience fully realized and experienced in a virtual space. The video presents the complete dance performance interspersed with behind-the-scenes footage, resulting in a unique 360° viewing experience.