8:30am - 5:00pm
Abstract: Advances in consumer virtual reality (VR) technology have made natural locomotion a viable means of navigation in VR. While walking in VR can enhance immersion and reduce motion sickness, it introduces several challenges. Walking is only possible within virtual environments (VEs) that fit inside the boundaries of the tracked physical space, which for most users is quite small, and it carries a high potential for collisions with physical objects around the tracked area. In my thesis, I explore visual and physiological steering techniques that complement the traditional redirected walking technique of scene rotation to alter a user's walking trajectory in the physical space. In this paper, I present the physiological technique.
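Scene rotation in redirected walking is typically applied as a small per-frame gain on the user's head rotation, or as an injected rotation per step that bends a virtually straight path into a physical arc. A minimal sketch of these two standard gains (function names and the 7.5 m curvature radius are illustrative, not the author's implementation):

```python
def redirected_yaw(user_yaw_delta, rotation_gain=1.3):
    """Scale the virtual scene's yaw change relative to the user's
    physical head rotation. A gain > 1 means the user physically turns
    less than they turn virtually, which can steer them away from
    tracked-space boundaries."""
    return user_yaw_delta * rotation_gain

def curvature_offset(step_length, radius=7.5):
    """Inject a small yaw rotation (radians) per step so that a
    virtually straight walk maps onto a physical arc of the given
    radius in meters."""
    return step_length / radius

# Example: a 0.7 m step on a 7.5 m curvature radius
injected = curvature_offset(0.7)
```

Gains near these illustrative values are often cited as remaining below users' detection thresholds, which is what makes the redirection imperceptible.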
Adrian K. T. Ng
Abstract: This position paper summarizes the author's research interests in cognitive psychology and human-computer interaction in the imseCAVE, a CAVE-like system at the University of Hong Kong. Several areas of interest were explored in the search for a thesis topic for the Ph.D. research. They include perception research on distance estimation with a proposed error-correction mechanism; neurofeedback meditation with EEG in VR and the effects of audio and video; the study of training transfer in VR training; a comparison of cybersickness between an HMD and the imseCAVE; and a comparison of VR gaming on a TV, an HMD, and the imseCAVE in terms of performance, activity level, and time perception. Given this broad interest, the exact direction is still being determined and requires future exploration.
Victor Adriel de Jesus Oliveira, Luciana Nedel, and Anderson Maciel
Abstract: We drew on elements of speech articulation to introduce proactive haptic articulation as a novel approach to intercommunication. The ability to use a haptic interface as a tool for implicit communication can supplement communication and support near and remote collaborative tasks in virtual and physical environments. In addition, proactive articulation can be applied during the design process, involving the user in the construction of more dynamic and optimized vibrotactile vocabularies. In this proposal, we discuss the thesis of proactive haptic communication and our method for assessing and implementing it. Our goal is to understand the phenomena related to the proactive articulation of haptic signals and its use both for communication and for the design of optimized tactile vocabularies.
Hyo Jeong Kang, Jung-hye Shin, and Kevin Ponto
Abstract: The medium of virtual reality enables new opportunities for experiencing products and shopping environments that may combine the best features of both physical and digital marketplaces. As little is known about how best to create a virtual reality marketplace, the current research aims to explore the features required for a VR market user interface and their impact on shopping behavior. As a first step toward this endeavor, we will empirically test three different user interfaces: a 2D interface style, a 3D skeuomorphic interface style, and an interface that combines features of both 2D and 3D interaction techniques.
Agata Marta Soccini and Marco Grangetto
Abstract: Gaze detection in virtual reality systems is mostly performed using eye-tracking devices. The coordinates of the gaze, as well as other data regarding the eyes, are used as input values for applications. While this trend is becoming more and more popular in the interaction design of immersive systems, most headsets do not come with an embedded eye-tracker, especially low-cost ones and those based on mobile phones. We suggest implementing an innovative gaze-estimation system in virtual environments as a source of information about users' intentions. We propose a solution based on a combination of image features and head movement as input to a deep convolutional neural network capable of inferring the 2D gaze coordinates in the imaging plane.
Abstract: Misperception of egocentric distances in virtual reality is a well-established effect, with many known influencing factors but no clear cause or correction. Herein is proposed a course of research that explores this effect on three fronts: exploring perceptual calibrations and corrections based on known influences and observed misperception, rather than on a perfect understanding of the causes of misperception; exploring when adaptations due to feedback might exhibit undesirable effects; and establishing contexts within practical tasks in which distance misperception should be expected to have an effect.
Abstract: Currently, healthcare practitioners use standardized patients, physical mannequins, and virtual patients as surrogates for real patients to provide a safe learning environment for students. Each of these simulators has different limitations that could be mitigated with various degrees of fidelity in representing medical cues. As we explore different ways to simulate a human patient and their effects on learning, we would like to compare the dynamic visuals of spatial augmented reality with those of optical see-through augmented reality, in which a patient is rendered using the HoloLens, and how that affects depth perception, task completion, and social presence.
Jeronimo Grandi and Anderson Maciel
Abstract: We explore design approaches for cooperative work in virtual manipulation tasks. We seek to understand the fundamental aspects of human cooperation and to design interfaces and manipulation actions that enhance a group's ability to solve complex manipulation tasks in various immersion scenarios.
Sharif Mohammad Shahnewaz Ferdous and John Quarles
Abstract: Most people experience some imbalance in a fully immersive virtual environment (VE), i.e., when wearing a head-mounted display (HMD) that blocks the user's view of the real world. However, this imbalance is significantly worse in people with balance impairments (PwBIs), and minimal research has been done to improve this. In addition to the imbalance problem, a lack of proper visual cues can lead to different accessibility problems for PwBIs (e.g., reduced reach due to fear of imbalance, decreased gait performance, etc.). We plan to explore the effects of different visual cues on people's balance, reach, gait, etc. Based on our preliminary study, we propose to incorporate additional visual cues in VEs that were shown to significantly improve the balance of PwBIs while standing and playing in a VE. We plan to further investigate whether additional visual cues have similar effects in augmented reality. We are also developing studies to research reach and gait in VR as future work.
Abstract: Accommodative depth cues, a wide field of view, and ever-higher resolutions all present major hardware design challenges for near-eye displays. Optimizing a design to overcome one of these challenges typically leads to a trade-off in the others. In work being published at IEEE VR 2017, my collaborators and I tackle this problem by introducing an all-in-one solution: a new wide-field-of-view, gaze-tracked near-eye display for augmented reality applications. The key component of our solution is a single see-through varifocal deformable membrane mirror for each eye, reflecting a display. The mirrors are controlled by airtight cavities and change their effective focal power to present a single image at a target depth plane determined by the gaze tracker. I propose that this work, together with my future work on evaluating the perceptual qualities of this display and on decreasing its size to a head-mountable form factor, form the topic of my dissertation.
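The idea of changing focal power to move the perceived image to a gaze-determined depth can be sketched with the thin-lens relation in diopters. The following is an illustrative calculation under that simplified model (sign conventions simplified; this is not the paper's actual optical design):

```python
def required_power(display_distance_m, target_depth_m):
    """Optical power (diopters) a varifocal element needs so that a
    display at display_distance_m appears as a virtual image at
    target_depth_m, using the simplified thin-lens relation
    P = 1/d_display - 1/d_target."""
    return 1.0 / display_distance_m - 1.0 / target_depth_m

# Example: a display 5 cm from the element, image placed at 2 m
power = required_power(0.05, 2.0)  # 20.0 - 0.5 = 19.5 diopters
```

Under this model, moving the target depth from 2 m to 0.5 m changes the required power by only 1.5 diopters, which hints at why a small membrane deflection can cover a useful depth range.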
Abstract: We have proposed an adaptive, view-aware, bandwidth-efficient 360 VR video streaming framework based on the tiling features of MPEG-DASH SRD. We extend MPEG-DASH SRD to the 3D space of 360 VR videos and showcase a dynamic view-aware adaptation technique to tackle the high bandwidth demands of streaming 360 VR videos to wireless VR headsets. As part of our contributions, we spatially partition the underlying 3D mesh into multiple 3D sub-meshes and construct an efficient 3D geometry mesh called a "hexaface sphere" to optimally represent tiled 360 VR videos in 3D space. We then spatially divide the 360 videos into multiple tiles during encoding and packaging, use MPEG-DASH SRD to describe the spatial relationship of the tiles in 3D space, and prioritize the tiles in the field of view (FoV) for view-aware adaptation. Our initial evaluations show that we can save up to 72% of the required bandwidth for 360 VR video streaming, with only minor negative quality impact, compared to a baseline scenario in which no adaptation is applied.
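The core of view-aware adaptation is deciding, per segment, which tiles fall inside the viewer's FoV and should receive high-quality representations. A minimal sketch of that prioritization step for a single horizontal ring of tiles (tile layout, function names, and the quality split are illustrative assumptions, not the authors' exact scheme):

```python
def angular_distance(yaw_a, yaw_b):
    """Smallest absolute difference between two yaw angles, in degrees."""
    d = abs(yaw_a - yaw_b) % 360.0
    return min(d, 360.0 - d)

def prioritize_tiles(tile_centers, view_yaw, fov=90.0):
    """Split tiles (given by center yaw in degrees) into in-FoV tiles,
    to be requested at a high-quality representation, and out-of-FoV
    tiles, to be requested at a low-quality one."""
    high, low = [], []
    for yaw in tile_centers:
        if angular_distance(yaw, view_yaw) <= fov / 2.0:
            high.append(yaw)
        else:
            low.append(yaw)
    return high, low

# Six horizontal tiles at 60-degree spacing, viewer looking at yaw 0
high, low = prioritize_tiles([0, 60, 120, 180, 240, 300], 0.0)
```

Fetching only one of six tiles at the high bitrate in this toy layout makes the bandwidth-saving intuition concrete; the actual framework operates on the hexaface-sphere sub-meshes rather than a flat ring.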
Abstract: Optical see-through head-up displays (HUDs) have recently gained popularity in a variety of applications, including driving. HUDs project transparent images directly onto the windshield of a vehicle, providing a relatively seamless transition between the display image and the road ahead. Recent research has explored how HUDs affect drivers, and the results to date have been promising: HUDs have frequently been associated with improved driving performance. However, HUDs have also received mixed or negative assessments with respect to driving performance. These findings indicate the potential usefulness of AR HUDs, but the contradictory performance results imply that AR HUDs either may not be useful in all scenarios or need to be better designed. Driving is inherently a dangerous task, with even the smallest of vehicles at slow speeds able to cause significant harm to pedestrians. It is therefore vital to test HUDs sufficiently before incorporating them widely into vehicles. A number of automotive manufacturers (e.g., BMW, Cadillac, Buick, Ford, Lexus) already use simple HUDs, indicating the need for a timely method of assessment.