2018 IEEE VR

March 18th - 22nd

In Cooperation with
the German Association for Electrical, Electronic and Information Technologies: VDE

IEEE Computer Society
IEEE

Exhibitors and Supporters

Diamond


National Science Foundation

Gold


VICON

Digital Projection

Gold Awards


NVIDIA

Silver


ART

Bronze


Haption

MiddleVR

VR-ON

VISCON

BARCO

Ultrahaptics

WorldViz

Disney Research

Microsoft

Non-Profit


Computer Network Information Center
Chinese Academy of Sciences

Sponsor for Research Demo


KUKA

Other Sponsors


Magic Leap


Doctoral Consortium

Schedule

March 18th

  • 9:00 – 9:30 am
    • Welcome by DC chairs
    • Speed Dating: “What are the biggest challenges for AR/VR research in the next 5 years?”
    • Collection of challenges
  • 9:45 – 10:30 am
    • Pecha Kucha talks 1 – 4
  • 10:30 – 11:00 am
    • Break & Coffee (Foyer middle floor)
  • 11:00 – 12:00 pm
    • Pecha Kucha talks 5 – 9
  • 12:00 – 1:30 pm
    • Lunch Break (Foyer 1.0G and in Reutlingen City)
  • 1:30 – 3:30 pm
    • Pecha Kucha talks 10 – 18
  • 3:30 – 4:00 pm
    • Break & Coffee (Foyer middle floor)
  • 4:00 – 5:30 pm
    • Feedback from mentors in small groups
  • 6:00 pm
    • Dinner in Reutlingen City

Pecha Kucha (Japanese: chit-chat) is a presentation style in which 20 slides are shown for 20 seconds each (6 minutes and 40 seconds in total). The format keeps presentations concise and fast-paced.
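As a quick sanity check, the total duration quoted above follows directly from the format; a minimal sketch:

```python
# Pecha Kucha format: a fixed number of slides, each shown for a fixed time.
SLIDES = 20
SECONDS_PER_SLIDE = 20

total_seconds = SLIDES * SECONDS_PER_SLIDE          # 400 seconds
minutes, seconds = divmod(total_seconds, 60)
print(f"{minutes} minutes and {seconds} seconds")   # 6 minutes and 40 seconds
```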

March 19th

  • 12:00 – 1:30 pm
    • Resume Workshop during Lunch Break

Doctoral Students

Alberto Boem, University of Tsukuba

Title: Encounter-type Haptic Interfaces for Virtual Reality Musical Instruments

Abstract: This paper summarizes the author’s interest in haptic interfaces for Virtual Reality Musical Instruments. The current research focuses on finding interfaces that can improve physical interaction and presence with virtual instruments. Musical expression is a topic rarely addressed in the field of Virtual Reality. Over the years, the author has explored different systems and concepts while searching for a Ph.D. thesis topic, including the development and evaluation of deformable input surfaces and shape-changing interfaces. The results of these implementations led us to investigate encounter-type haptic interfaces, which have never received proper consideration in the design of virtual musical instruments. This represents the current stage of our research; however, the exact direction of the Ph.D. thesis is still being determined. In this paper, we describe the background and motivations behind this research, together with the research hypotheses developed so far.


Andrea Boensch, RWTH Aachen University

Title: Locomotion with Virtual Agents in the Realm of Social Virtual Reality

Abstract: My research focuses on social locomotion of computer-controlled, human-like virtual agents in virtual reality applications. Two main areas are covered in the literature: a) user-agent dynamics in, e.g., pedestrian scenarios, and b) pure inter-agent dynamics. However, joint locomotion of a social group consisting of a user and one or more virtual agents has not been investigated yet. I intend to close this gap by contributing an algorithmic model of an agent’s behavior during social locomotion. In addition, I plan to evaluate the effects of the resulting agent’s locomotion patterns on a user’s perceived degree of immersion, comfort, and social presence.


Emanuel Vonach, TU Wien

Title: Robot Supported Virtual and Augmented Reality

Abstract: In this dissertation, different aspects of research in the fields of Tangible User Interfaces, encounter-type devices, and Passive Haptics are combined to investigate the benefits that robots offer for providing haptic feedback in Virtual and Augmented Reality. Robotic elements such as micro drives and robotic arms are employed to actuate passive or active physical objects. In this way, physical props can be collocated with virtual counterparts to allow high-fidelity, natural interaction.


Eric M. Whitmire, University of Washington

Title: High-fidelity Interaction for Virtual and Augmented Reality

Abstract: Expressive interaction with wearable head-mounted displays for virtual (VR) and augmented reality (AR) systems is essential for practical adoption. These systems pose new challenges and have higher performance standards compared to other computing paradigms. In this position paper, I argue that interactive devices for VR and AR systems can leverage high-precision tracking and haptics to achieve a robust set of interaction techniques and a rich sense of presence. I describe my past and proposed future research in designing interactive devices that innovate in the domains of eye tracking, wearable finger input, and handheld controllers.


Fariba Mostajeran, University of Hamburg

Title: MR Pharmacy: Adaptive User Interfaces and Biofeedback for Therapy in Mixed Reality Environments

Abstract: Virtual and augmented realities (VR/AR) can provide a broad range of possibilities for therapeutic applications. Virtual implementation of classical therapy methods can offer many advantages to both patients and caregivers. My research focuses on validating and improving the use of VR and AR in psycho- and physiotherapy. Moreover, I will employ several physiological measures in order to provide different forms of biofeedback and make the virtual environment (VE) adaptive to the user’s affective states.


Georg Gerstweiler, Vienna University of Technology

Title: Guiding People in Complex Indoor Environments using Augmented Reality

Abstract: Complex public buildings like airports use various systems to guide people to a destination. Such approaches are usually implemented by showing a floor plan, posting guide signs, or marking color-coded lines on the floor. With a technology that supports 6DOF tracking in indoor environments, it is possible to guide people individually using augmented reality visualizations. The proposed research concentrates on three topics that are the main reasons why such a guiding system is still not available in real-world situations. First, a tracking solution, HyMoTrack, is presented, based on a hybrid visual tracking approach for smartphones and tested in a real-world airport scenario. Both the tracking and the guiding parts of a reliable indoor navigation system require a 3D model of the environment. For this reason, a 3D model generation algorithm was implemented, which automatically creates a 3D mesh from a vectorized 2D floor plan. Finally, the human aspect of an AR guiding system is researched, and a novel AR path concept is presented for guiding people with AR devices. This FOVPath is designed to react not only to the position of the user and the target, but also to the view direction and the field-of-view (FOV) capabilities of the device used. This ensures that the user always receives reasonable information within the current FOV. To evaluate the concept, technical evaluations as well as user studies were and will be performed.


Tanvir Irfan Chowdhury, University of Texas at San Antonio

Title: Towards Reverse Disability Simulation in a Virtual Environment

Abstract: Disability Simulation (DS) is an approach used to modify attitudes regarding people with disabilities (PwD). DS places people without disabilities (PwoD) in situations designed to let them experience a disability. The focus of my Ph.D. dissertation is to transform the concept of disability simulation into reverse disability simulation (RDS) in a virtual reality (VR) environment. In an RDS, people with disabilities perform tasks that are made easier in the virtual environment than in the real world. In a sense, RDS is an “ability” simulation for people with disabilities. Although DS is used to raise awareness, it has also endured criticism for not being effective. Our first and second studies put this criticism to the test and found that DS can, in fact, be used as a tool to teach PwoD facts about disability. This result encouraged us to think more deeply about how to apply the same concept for PwD. In my third user study, we hypothesized that placing PwD in an RDS would increase confidence and enable efficient task completion. To investigate this hypothesis, we conducted a within-subjects experiment in which participants performed a virtual “kicking a ball” task in two conditions: a normal condition without RDS (i.e., the same difficulty as in the real world) and an easy condition with RDS (i.e., physically easier than the real world but visually the same). The results from our study suggest that RDS increased participants’ confidence. This finding has the potential to be applied in VR rehabilitation for PwD.


Jason Hochreiter, University of Central Florida

Title: Optical Touch Sensing on Non-Parametric Rear-Projection Surfaces

Abstract: The field of augmented reality (AR) has introduced many novel input and output approaches for human-computer interaction. As touching physical objects with the fingers or hands is both natural and intuitive, touch-based graphical interfaces are ubiquitous, but many such interfaces are limited to flat screens or simple objects. We propose an optical method for multi-touch detection and response on non-parametric surfaces with dynamic rear-projected imagery, which we demonstrate on two head-shaped surfaces. We are interested in exploring the advantages of this approach over two-dimensional touch input displays, particularly in healthcare training scenarios.


Jerald Thomas, University of Southern California

Title: Leveraging Configuration Spaces and Navigation Functions for Redirected Walking

Abstract: Redirected walking has been shown to be an effective technique for allowing natural locomotion in a virtual environment that is larger than the physical environment. In this position paper, I identify two major limitations of redirected walking and provide brief descriptions of solutions. I then introduce the conceptual design of an algorithm, inspired by techniques from the field of coordinated multi-robotics, that improves on the current state of redirected walking by addressing these limitations. Finally, I explain how it will become the foundation of my thesis and provide future research vectors.


Jillian Clements, Duke University

Title: Predicting Performance During a Dynamic Target Acquisition Task in Immersive Virtual Reality

Abstract: Visual-motor skill is the ability to integrate visual perception and motor control. These skills allow the eyes and hands to move in a coordinated way to optimally achieve the goal of the task at hand, which is crucial for success in tasks such as athletics or surgery. Immersive virtual reality (VR) provides a controllable experimental environment in which to study visual-motor skill learning. In this work, we present a novel experimental framework that combines immersive VR and electroencephalography (EEG) to investigate the kinematic and neurophysiological mechanisms that underlie motor skill performance during a multi-day simulated marksmanship training regimen. We propose two approaches for modeling the biological elements associated with visual-motor skill to predict shot success based on kinematic data and neurophysiological biomarkers.


Jingxin Zhang, University of Hamburg

Title: Natural Human-Robot Interaction in Virtual Reality Telepresence Systems

Abstract: Telepresence systems have the potential to overcome the limits and distance constraints of the real world by enabling people to remotely visit and interact with each other. However, current telepresence systems usually lack natural ways of supporting interaction and exploration of remote environments (REs). In particular, single webcams for capturing the RE provide only a limited illusion of spatial presence, and movement control of mobile platforms in today’s telepresence systems is often restricted to simple interaction devices. One of the main challenges of telepresence systems is to allow users to explore an RE in an immersive, intuitive, and natural way, e.g., by really walking in the user’s local environment (LE) and thus controlling the motions of the robot platform in the RE. The goal of the presented research project is to meet these challenges and to contribute to the development and evaluation of novel telepresence systems and interactive behaviors in 360° virtual environments, with a focus on full-view telepresence, spatial perception, locomotion, usability, and motion sickness.


Loki Rasmussen, University of Arkansas at Little Rock

Title: Real-time MonoSLAM Visualization in Virtual Reality

Abstract: This research concentrates on defense-type systems for operational awareness using virtual reality solutions. Many computer vision solutions in defense environments concentrate on removing user decisions from systems for the sake of automation, but in doing so they remove trained military professionals from decision making. Instead, this research seeks to reintegrate the military professional by providing a real-time point cloud visualization of a previously unmapped environment, in virtual reality, for remote operation of an unmanned vehicle. Because of an interest in a low-cost, modular system that could be used on a variety of unmanned vehicles, including those used in underwater environments, the solution has moved away from LIDAR and the Kinect’s laser-based systems and toward stereo cameras, particularly webcams. In pursuing this approach, several other areas of interest become apparent, including the need for reliable stereo reconstruction under varying lighting conditions, particularly in the case of backlighting in natural environments. Consequently, a variety of keypoint detection algorithms and point cloud reconstruction methods are being explored with respect to their usefulness under real-time constraints and with low-resolution images. It will also be necessary to evaluate different point cloud rendering techniques, such as splatting, and to store the data in reliable data structures supporting effective levels of detail (LODs) for real-time rendering. Finally, it will be important to explore research on effective user-operated virtual environments and simulations to determine best practices for navigation and interfaces in the user-facing part of the system.


Marco Speicher, German Research Center for Artificial Intelligence

Title: Shopping in Virtual Reality

Abstract: In contrast to traditional retail stores, online shopping offers many advantages, such as unlimited opening hours and a stronger focus on functionality. However, this comes with complex categorization and limited product visualization and immersion. Virtual Reality (VR) has the potential to create new shopping experiences that combine the advantages of e-commerce sites and conventional brick-and-mortar shops. We examined the main features of online and offline shops in terms of buying behavior and customer frequency. Furthermore, we designed and implemented an immersive WebVR online purchasing environment, aiming to retain the benefits of online shops, such as search functionality and availability, while focusing on the shopping experience and immersion. This VR shop prototype was evaluated in a case study with respect to the Virtual Reality Shopping Experience (VRSE) model. The next step is to classify, investigate, and evaluate the next generation of VR shops, including product interaction and navigation techniques, as well as store and product representations.


Myungho Lee, University of Central Florida

Title: Mediated Physicality: Inducing Illusory Physicality of a Virtual Human via Environmental Objects

Abstract: A physical embodiment of a virtual human has shown benefits in applications that involve social interaction with virtual humans. However, it often incorporates cumbersome haptic devices or robotic bodies. In this position paper, we first discuss our motivation for utilizing the surrounding environment in human-virtual human interaction and present our preliminary studies and results. Considering the previous studies and related literature, we define the concept of Mediated Physicality for virtual humans, which utilizes environmental objects to increase the perceived physicality of the virtual humans, and discuss fundamental aspects of Mediated Physicality as well as future research plans.


Patrick Renner, Bielefeld University

Title: Prompting Techniques for Guidance and Action Assistance using Augmented-Reality Smart-Glasses

Abstract: In the context of picking and assembly tasks, assistance systems based on Augmented Reality (AR) can help users find target objects and perform correct actions. The aim is to develop guiding and action-assistance techniques for smart glasses that are easily understandable not only for workers, but also for impaired and elderly people.


Sahar Aseeri, University of Minnesota Twin Cities

Title: The Influence of Avatar Representation and Behavior on Communication

Abstract: Virtual reality applications have begun to offer great potential for communication in recent years. Creating an immersive virtual social environment that simulates a real social environment requires providing users with communication cues such as visual, verbal, and nonverbal cues to increase their sense of inhabiting the virtual world. In this work, we will investigate the influence of avatar representation and behavior on communication in an immersive, multiuser, same-place virtual environment by comparing three conditions of avatar representation: video see-through, scanned realistic avatar, and no-avatar representations. Subjective and objective measurements will be used to describe participants’ observations and track their movement behavior to ascertain the effect of avatar representations on communication, based on personal presence, social presence, and trustworthiness.


Wallace Lages, Virginia Tech

Title: Walk-Centric User Interfaces

Abstract: Walking is bound to become a common activity in wearable augmented reality. Compared to walking in virtual reality, walking in augmented reality is simple and uncomplicated. The ability to use AR in different places, and even while walking, is likely to deeply impact the way users experience this technology. Research on walk-centric interfaces aims to explore this design space and bring about a better understanding of how walking affects the design of augmented reality applications. This paper outlines the design space as well as preliminary and future work on this topic.


Wen Huang, Arizona State University

Title: Evaluating the Effectiveness of Head-Mounted Display Virtual Reality (HMD VR) Environment on Students’ Learning for a Virtual Collaborative Engineering Assembly Task

Abstract: The emerging VR social networks (e.g., Facebook Spaces, Rec Room) provide opportunities for engineering faculty to design collaborative virtual engineering tasks for classroom instruction with HMD VR systems. However, we do not know how this capability will affect students’ learning and their professional skills (e.g., communication and collaboration). The proposed study is expected to fill this research gap and will use a mixed-methods design to explore students’ performance and learning outcomes in a virtual collaborative automotive assembly task. Quantitative data will be collected from pre- and post-task surveys and from the task itself, and will be used to analyze differences between the experimental and control groups. Students’ responses to the open questions in the post-task survey will serve as triangulation and provide deeper insight into the quantitative results. The study is expected not only to contribute to the research field but also to benefit different stakeholders in engineering education.