Doctoral Consortium

Panagiotis Apostolellis

Virginia Tech, USA

Title: C-OLiVE: Group Co-located Interaction in VEs for Contextual Learning

Abstract: In informal learning spaces that employ digital content, which groups of students usually visit, visitors do not get adequate exposure to the content, and when they do, it is usually through passive-style instruction offered by a museum docent to the whole group. This research aims to identify which elements of co-located group collaboration, virtual environments, and serious games can be leveraged for an enhanced learning experience. Our hypothesis is that synchronous, co-located group collaboration will afford greater learning than conventional approaches. We developed C-OLiVE, an interactive virtual learning environment supporting tripartite group collaboration, which we will use as a test bed to address our research questions. In this paper, we discuss our proposed research, which involves exploring the benefits of the technologies involved and proposing a list of design guidelines for anyone interested in exploiting them when developing virtual environments for informal learning spaces.

Lauren Cairco Dukes

Clemson University, USA

Title: Development of a Scenario Builder Tool for Scaffolded Virtual Patients

Abstract: During their baccalaureate education, nursing students have limited opportunities to practice patient interaction. Traditional training for patient interaction includes roleplay with peers and practice interviews with paid actors. Unfortunately, neither of these methods adequately portrays the wide range of patients and medical conditions that nurses encounter in their clinical experiences. Virtual patients can provide realistic yet repeatable practice in patient interaction, since they can represent a wide range of patients and each scenario can be practiced until the student achieves competency. However, there are two problems with existing virtual patient platforms: (1) cost of development, since a single scenario may take up to nine months to create, and (2) lack of extensibility, since a scenario is typically targeted towards only a single set of learning goals and learners. In this work, I propose potential solutions to these two problems: (1) the user-centered design and creation of a scenario-builder tool that enables nursing faculty to create their own scenarios, and (2) user testing of a system that presents a single scenario in ways appropriate for multiple levels of learners.

Alessandro Febretti

University of Illinois at Chicago, USA

Title: Supporting Multi-View Immersion on Hybrid Reality Environments

Abstract: In the domain of large-scale visualization instruments, Hybrid Reality Environments (HREs) are a recent innovation that combines the best-in-class capabilities of immersive environments with those of ultra-high-resolution display walls. Co-located research groups in HREs tend to work on a variety of tasks during a research session (sometimes in parallel), and these tasks require 2D data views, 3D views, linking between them, and the ability to bring in (or hide) data quickly as needed. Addressing these needs requires a matching software infrastructure that fully leverages the technological affordances offered by these new instruments. I detail the requirements of such an infrastructure and outline the model of an operating system for Hybrid Reality Environments. I present an implementation of the core components of this model, called Omegalib. Omegalib is designed to support dynamic reconfigurability of the display environment: areas of the display can be interactively allocated to 2D or 3D workspaces as needed.

I propose to further extend the existing implementation to fully realize the vision of an operating system for Hybrid Reality Environments. The extension will integrate Omegalib with the Scalable Adaptive Graphics Environment (SAGE), a widely used window management system for cluster-based display walls, and will add support for user-centered immersive viewports that can be moved and resized at runtime. The proposed extension is twofold: first, a distributed rendering resource allocation technique that allows immersive windows to be resized up to the full HRE display size without sacrificing resolution or application performance; second, an improved multiuser interaction system offering dynamic device/window association based on heterogeneous pointing techniques (2D, 3D ray, 3D frustum). The work outlined in this proposal will extend the state of the art in large-display software infrastructures, will address several limitations of current systems (lack of immersion support in multi-window environments, single-application-only immersive environments), and will provide an advanced platform for application development on Hybrid Reality Environments.
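
To make the workspace model concrete, here is a minimal, hypothetical sketch (in Python, with invented class and method names, not the actual Omegalib or SAGE API) of how rectangular display regions might be allocated to 2D and 3D workspaces at runtime:

```python
# Hypothetical sketch of runtime workspace allocation on a hybrid
# display, in the spirit of the model described above. Names and the
# rejection-of-overlaps policy are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Workspace:
    name: str
    mode: str    # "2D" (windowed content) or "3D" (immersive viewport)
    rect: tuple  # (x, y, width, height) in display pixels

@dataclass
class HybridDisplay:
    width: int
    height: int
    workspaces: list = field(default_factory=list)

    def allocate(self, name, mode, rect):
        """Allocate a display region to a workspace, rejecting overlaps."""
        x, y, w, h = rect
        for ws in self.workspaces:
            wx, wy, ww, wh = ws.rect
            if x < wx + ww and wx < x + w and y < wy + wh and wy < y + h:
                raise ValueError(f"region overlaps workspace '{ws.name}'")
        ws = Workspace(name, mode, rect)
        self.workspaces.append(ws)
        return ws

# Example: a 2D workspace for documents on the left of the wall and a
# resizable immersive 3D viewport occupying the rest.
display = HybridDisplay(8192, 4320)
display.allocate("docs", "2D", (0, 0, 2048, 4320))
display.allocate("immersive", "3D", (2048, 0, 6144, 4320))
```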

Loren Puchalla Fiore

University of Minnesota, USA

Title: Visual Modification of Passive Props for Augmented Reality

Abstract: The proliferation of mobile computing devices (cell phones, tablets, etc.) has caused a renewed and expanded interest in augmented reality for a variety of applications. Many of these applications focus on the integration of virtual objects into the real world. However, a purely virtual object cannot be felt by the user, limiting the potential interaction possibilities. Passive physical props combine the sensation of touch with the augmented rendering of a virtual object. The downside is that a different prop is needed for each differently sized virtual object. I propose to investigate rendering techniques that will enable a single passive prop to be used for multiple virtual objects of different size, shape, and transparency.
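
As one hedged illustration of what such a rendering technique could involve (an assumption for illustration, not the author's stated method), the sketch below radially remaps the user's rendered hand about a tracked prop so that a single physical ball can stand in for virtual balls of different sizes:

```python
# Illustrative sketch: scale the virtual object about the tracked prop's
# center and remap the rendered hand by the same ratio, so visually
# perceived contact coincides with physical contact on the prop surface.
# The radial-scaling idea is an assumption, not the proposal's method.

import numpy as np

def remap_hand(hand_pos, prop_center, prop_radius, virtual_radius):
    """Scale the rendered hand position radially about the prop center so
    that touching the physical surface (at prop_radius) is rendered as
    touching the virtual surface (at virtual_radius)."""
    center = np.asarray(prop_center, dtype=float)
    offset = np.asarray(hand_pos, dtype=float) - center
    if np.linalg.norm(offset) == 0.0:
        return center
    return center + offset * (virtual_radius / prop_radius)

# A 5 cm physical ball standing in for a 15 cm virtual ball: a hand
# touching the real surface is rendered at the virtual surface.
print(remap_hand([0.05, 0.0, 0.0], [0.0, 0.0, 0.0], 0.05, 0.15))
```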

Rongkai Guo

University of Texas at San Antonio, USA

Title: Towards Understanding and Improving Motivation for Rehabilitation in Virtual Environments

Abstract: The primary goal of this research is to determine the relationship between presence in a virtual environment (VE) and motivation for rehabilitation and exercise. In our previous work, we investigated (1) how to motivate healthy users to exercise and (2) the sense of presence in rehabilitation patients, i.e., mobility-impaired (MI) individuals. For (1), we investigated how to improve exercise motivation for healthy users through novel exercise-based interaction approaches for common video games. For (2), we studied the subjective, physiological, and behavioral impact of various VEs and determined differences between MI users' and healthy users' presence. We now plan to fill the gap in knowledge between these two bodies of research. Specifically, we propose to investigate whether and how increasing presence may impact motivation for rehabilitation and exercise. First, we propose to investigate in more detail the factors that affect presence differently in healthy and MI persons and what impact these factors have on rehabilitation motivation. Second, we propose to investigate how novel rehabilitation-based interaction methods for video games impact the presence and motivation of MI users. We expect that this research will contribute new knowledge on how MI users interact with computer interfaces, which can be used to derive guidelines for the design of effective VEs for rehabilitation and exercise.

Adrian Johnson

University of South Florida, USA

Title: Augmenting Relived Reality: Safe Procedural Training and Evaluation in Augmented Surgery

Abstract: I propose to explore the possibilities and benefits of performing surgical tasks within simulation environments generated solely from real-time (RT) or near-RT operations applied to the video scope streams commonly employed in the operating theatre. Although an RT optical system could conceivably implement usable operative AR, it is unlikely that I will be able to experiment with AR in live surgery during my candidacy, due to prohibitive rules, laws, and regulations. A nearby alternative is a virtuality, or virtualized reality, space that targets the same RT, robust, realistic operations capable of producing AR, but applies them to recorded (historical) reality, for safety in critically serious environments that resist AR development. I plan to advance computer vision and visualization in this less restrictive relived reality (RR) systems research, directed towards better approximating live surgical AR.

Hyungki Kim

KAIST, Korea

Title: A Framework for Automatic 3D City Modeling Using Panoramic Images from a Mobile Mapping System and Digital Maps

Abstract: A 3D city model, which consists of 3D building models and their geospatial positions and orientations, is becoming a valuable resource in virtual reality, navigation systems, and civil engineering applications. The purpose of this research is to propose a new framework to generate a 3D city model that satisfies the visual and physical requirements of a ground-oriented simulation system. At the same time, the framework should meet the demands of automatic creation and cost-effectiveness, which improve the usability of the proposed approach. To achieve these goals, I suggest a framework that leverages a mobile mapping system, which automatically gathers high-resolution images and supplements them with sensor information such as the position and direction of each image. To resolve problems stemming from sensor noise and a large number of occlusions, fusion with digital map data will be used. This paper describes the overall framework, with the major processes and the recommended or required techniques for each processing step.
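
A minimal sketch of one step such a framework plausibly includes: extruding a 2D building footprint taken from the digital map into a 3D prism (the data layout and height value are illustrative assumptions; texturing from the panoramic imagery would follow as a separate stage):

```python
# Sketch of the digital-map fusion step: extrude a 2D building footprint
# into a 3D prism whose walls can later be textured from the panoramas.

def extrude_footprint(footprint, height):
    """footprint: list of (x, y) vertices, counter-clockwise.
    Returns vertices and quad wall faces of the extruded prism."""
    n = len(footprint)
    bottom = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    vertices = bottom + top
    walls = []
    for i in range(n):
        j = (i + 1) % n
        # Quad: bottom i, bottom j, top j, top i.
        walls.append((i, j, n + j, n + i))
    return vertices, walls

# A 10 m x 20 m rectangular footprint extruded to a 12 m building.
verts, faces = extrude_footprint([(0, 0), (10, 0), (10, 20), (0, 20)], 12.0)
```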

Sujeong Kim

University of North Carolina at Chapel Hill, USA

Title: Simulating Crowd Interactions in Virtual Environments

Abstract: In this proposal, I discuss my approaches to providing an interactive simulation of a crowd. The main goal of my research is to increase realism for both individual agents and overall crowd behavior. I propose stable and efficient behavior models based on an understanding of what human-like behavior is and how it affects interactions with others and the environment. First, I use well-established studies from psychology to model dynamic behavior changes of a crowd in different situations. Second, I propose an online, adaptive, and individualized motion model that can learn from real-world data. Finally, I propose a physical interaction model that can simulate new types of force-based interactions as well as collision avoidance behavior. These methods can simulate a few thousand agents at interactive rates and are able to generate many emergent behaviors that match real-world observations.
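
For illustration, the following is a generic force-based crowd update of the kind the abstract alludes to; the simple goal and repulsion forces and all parameter values are assumptions, and the author's actual models are more elaborate:

```python
# Generic force-based crowd step: each agent steers toward its goal and
# is pushed away from close neighbors. Naive O(n^2) neighbor loop; real
# systems use spatial hashing. Parameters are illustrative assumptions.

import numpy as np

def step(positions, velocities, goals, dt=0.05,
         max_speed=1.4, repulsion_radius=0.8, repulsion_gain=2.0):
    """positions, velocities, goals: float arrays of shape (n, 2)."""
    n = len(positions)
    forces = goals - positions                   # steer toward goals
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = positions[i] - positions[j]
            dist = np.linalg.norm(d)
            if 0 < dist < repulsion_radius:      # avoid close neighbors
                forces[i] += repulsion_gain * d / (dist * dist)
    velocities = velocities + forces * dt
    speeds = np.linalg.norm(velocities, axis=1, keepdims=True)
    velocities = np.where(speeds > max_speed,
                          velocities / speeds * max_speed, velocities)
    return positions + velocities * dt, velocities
```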

Kenny Moser

Mississippi State University, USA

Title: Quantification of Error from System and Environmental Sources in Optical See-Through Head Mounted Display Calibration Methods

Abstract: A common problem with optical see-through augmented reality is misalignment, or registration error, which occurs when the augmenting information fails to appear in the correct location relative to real-world objects. The amount of acceptable registration error is heavily dependent upon the type of application, with some systems, such as those used in surgical procedures, requiring millimeter or sub-millimeter accuracy. Approximation methods, driven by user feedback, have been developed to estimate the necessary corrections for alignment errors [16, 14]. These calibration methods, however, are susceptible to induced error from system and environmental sources, including tracker, human, and alignment error [4, 5, 8]. The research plan described in this document is intended to further the development of accurate and robust calibration methods for optical see-through augmented reality systems by quantifying the impact of three specific factors shown to contribute to calibration error: (1) eye-to-screen misalignment, (2) human-induced noise, and (3) the distribution of calibration alignment points. An important aspect of this research will be to develop a method for examining each factor in isolation, which will allow more precise quantification of the error contributions. Determining the independent contribution of each error source to the final result will facilitate the establishment of acceptable thresholds for each type of error and be a meaningful step toward defining quality metrics for optical see-through augmented reality calibration techniques. These contributions will benefit the augmented reality community at large by providing baseline error thresholds, and will directly enhance future optical see-through augmented reality research by providing a methodology for identifying the most robust calibration procedures for reducing registration error in these systems.
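
For context, alignment-based calibration methods of this family, such as SPAAM, reduce to a linear estimation from the user's screen/world alignments. The sketch below shows a generic direct linear transform (DLT) of that kind, not the authors' exact formulation; noise in the 2D alignments (factor 2 above) propagates directly into this least-squares solution, which is precisely what an isolated quantification would measure.

```python
# Generic DLT at the core of alignment-based OST-HMD calibration: the
# user aligns an on-screen crosshair with a tracked 3D point several
# times, and a 3x4 projection matrix is estimated from the pairs.

import numpy as np

def estimate_projection(points_3d, points_2d):
    """points_3d: (n, 3) tracked world points; points_2d: (n, 2) screen
    alignments, n >= 6. Returns the 3x4 projection matrix."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows, dtype=float)
    # The projection is the null-space direction of A, i.e. the right
    # singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)
```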

Charilaos Papadopoulos

Stony Brook University, USA

Title: Immersive Display and Exploration of Gigapixel Data

Abstract: Gigapixel-resolution data is becoming more prevalent every day. In parallel, at least one immersive virtual environment (IVE) has broken the gigapixel resolution barrier. This facility, termed the Reality Deck, presents users with 1.5 billion pixels of aggregate resolution in a horizontally surrounding setting. This proposal, submitted for consideration to the IEEE Virtual Reality 2014 Doctoral Consortium, outlines the work I have carried out within the Reality Deck and the work I plan to pursue, focusing on two main pillars of gigapixel data exploration: displaying such high-resolution data in a performant fashion and naturally exploring it using hand motions.
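
As background on the first pillar, gigapixel imagery is commonly displayed through a tiled multi-resolution (mip) pyramid, so that only tiles matching the on-screen sampling density are fetched. The sketch below illustrates that standard scheme with assumed tile sizes and coordinate conventions, not the Reality Deck's actual renderer:

```python
# Standard tiled mip-pyramid lookup for gigapixel display: level 0 is
# full resolution, each level halves both dimensions, and the viewport
# only touches tiles of the level matching its zoom.

import math

def visible_tiles(view_x, view_y, view_w=1920, view_h=1080,
                  zoom=1.0, tile=256):
    """view_x/view_y: viewport origin in full-resolution image pixels;
    zoom: screen pixels per image pixel. Returns (level, col, row)."""
    level = max(0, int(math.floor(-math.log2(zoom))))  # coarser if zoomed out
    scale = 2 ** level
    x0 = int(view_x / scale) // tile
    y0 = int(view_y / scale) // tile
    x1 = int((view_x + view_w / zoom) / scale) // tile
    y1 = int((view_y + view_h / zoom) / scale) // tile
    return [(level, c, r) for r in range(y0, y1 + 1)
                          for c in range(x0, x1 + 1)]
```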

Hanae Rateau

University of Lille, France

Title: Interaction with Grammar-Based Generated Content

Abstract: Grammar-based procedural generation has the potential to meet the growing demand for large and detailed 3D content such as virtual worlds. However, grammars are not easy to use and do not provide enough user control, as they are difficult to edit and tune (i.e., scripting is required). Besides, all proposed work on interactive grammars takes a desktop-application approach. My project focuses on inferring grammar rules from the user's actions on the 3D model in a mid-air interaction context.
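
For readers unfamiliar with such grammars, the toy sketch below expands a tiny set of invented rewriting rules into a building description; the project's aim is to infer rules like these from mid-air 3D interactions rather than from hand-written scripts:

```python
# Toy grammar-based generation: rewriting rules expand an axiom symbol
# into terminal building parts. Rules and symbols are invented here.

import random

RULES = {
    "Building": [["GroundFloor", "Floors", "Roof"]],
    "Floors":   [["Floor"], ["Floor", "Floors"]],  # 1..n stacked floors
}

def expand(symbol):
    """Recursively rewrite a symbol until only terminals remain."""
    if symbol not in RULES:
        return [symbol]                            # terminal symbol
    production = random.choice(RULES[symbol])
    result = []
    for s in production:
        result.extend(expand(s))
    return result

print(expand("Building"))
# e.g. ['GroundFloor', 'Floor', 'Floor', 'Roof']
```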

Mingze Xi

The University of Newcastle, Australia

Title: Simulating Cooperative Fire Evacuation Training in a Virtual Environment Using Gaming Technology

Abstract: Fire accidents are among the most frequently occurring disasters. Uncontrolled fires frequently lead to loss of life and property. Training fire-escape skills and designing efficient building evacuation plans are increasingly important research topics. Virtual environments can provide visual experiences of the real world and are widely used to support training and the simulation of human responses in emergency situations. However, current training systems in virtual environments suffer from limited non-player character (NPC) intelligence, because the NPCs' AI decision-making process lacks support from fire-science knowledge, for example the influence of toxic gases, heat, and human emotional responses.

Fire evacuation training systems are designed to train a certain type of participant, for example firefighters, residents, or fire wardens. Our objective is to enhance the realism of virtual evacuation environments and provide a framework for an agent-based evacuation training system for fire wardens. To achieve this goal, an immersive virtual environment will be built using gaming technology. Human players and NPC actors will be added to create a cooperative virtual environment. To create realistic NPCs, we will design a human behavior model for emergency response using AI methods together with support from a third-party professional numerical evacuation simulation tool.

Successfully using a professional numerical simulation tool to support 3D simulation in a virtual environment will provide a new way to enhance the realism of virtual environments through high-fidelity data sources.
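
As a hedged illustration of that idea (with invented fields, weights, and values, not the authors' design), an NPC could score candidate exits by distance penalized by heat and toxic-gas levels sampled from the numerical simulation:

```python
# Sketch of fire-simulation data feeding NPC decisions: an NPC picks
# the exit minimizing distance plus hazard penalties sampled from a
# numerical simulation. Weights and the hazard field are assumptions.

def choose_exit(npc_pos, exits, hazard_field,
                heat_weight=5.0, gas_weight=10.0):
    """exits: list of (name, (x, y)); hazard_field(p) returns a dict
    with 'heat' and 'toxic_gas' levels along the route to point p."""
    def cost(exit_pos):
        dx = exit_pos[0] - npc_pos[0]
        dy = exit_pos[1] - npc_pos[1]
        distance = (dx * dx + dy * dy) ** 0.5
        hazards = hazard_field(exit_pos)
        return (distance
                + heat_weight * hazards["heat"]
                + gas_weight * hazards["toxic_gas"])
    return min(exits, key=lambda e: cost(e[1]))

def hazard(p):
    # Assumed sample: routes toward x < 5 pass through heavy smoke.
    if p[0] < 5:
        return {"heat": 0.9, "toxic_gas": 0.8}
    return {"heat": 0.1, "toxic_gas": 0.0}

# The nearer exit A is rejected because its route is smoke-filled.
print(choose_exit((4, 0), [("A", (0, 0)), ("B", (12, 0))], hazard))
```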

Exhibitors and Supporters

[Exhibitor and supporter logos appeared here, organized by Platinum, Silver, and Bronze levels, along with publishers and sponsors.]