IEEE VR 2018
March 18th - 22nd

In Cooperation with the German Association for Electrical, Electronic and Information Technologies (VDE)

IEEE Computer Society
IEEE

Exhibitors and Supporters

Diamond


National Science Foundation

Gold


VICON

Digital Projection

Gold Awards


NVIDIA

Silver


ART

Bronze


Haption

MiddleVR

VR-ON

VISCON

BARCO

Ultrahaptics

WorldViz

Disney Research

Microsoft

Non-Profit


Computer Network Information Center, Chinese Academy of Sciences

Sponsor for Research Demo


KUKA

Other Sponsors


Magic Leap


Tutorials

The following tutorials will be held at IEEE Virtual Reality 2018:


(Displays) Cutting-edge VR/AR Display Technologies (Gaze-, Accommodation-, Motion-aware and HDR-enabled)

Date: March 18th, 9:00 AM - 5:30 PM

Organizers:

  • George Koulieris, Inria, France
  • Kaan Akşit, NVIDIA, USA
  • Christian Richardt, University of Bath, UK
  • Rafal Mantiuk, University of Cambridge, UK
  • Katerina Mania, Technical University of Crete, Greece

Abstract: Near-eye (VR/AR) displays suffer from technical, interaction, and visual quality issues that hinder their commercial potential. This tutorial will deliver an overview of cutting-edge VR/AR display technologies, focusing on technical, interaction, and perceptual issues that, if solved, will drive the next generation of display technologies. The most recent advances in near-eye displays will be presented, including displays that provide (i) correct accommodation cues, (ii) near-eye varifocal AR, (iii) high dynamic range rendition, (iv) gaze-aware capabilities, either predictive or based on eye tracking, and (v) motion awareness. Future avenues for academic and industrial research on the next generation of AR/VR display technologies will be analyzed.
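To make the accommodation problem concrete (our own illustrative sketch, not part of the tutorial materials): a varifocal display must re-focus its optics each frame so the virtual image sits at the depth the viewer is converging on. A minimal Python sketch of the underlying thin-lens arithmetic, where d_o is the lens-to-microdisplay distance and d_v is the desired virtual image distance (both hypothetical values):

    # Required lens power (diopters) so a microdisplay at d_o meters appears
    # as a virtual image at d_v meters (thin-lens equation, virtual image).
    def required_lens_power(d_o=0.04, d_v=1.0):
        return 1.0 / d_o - 1.0 / d_v  # e.g. 25 D - 1 D = 24 D

    # A varifocal system would drive a tunable lens from the gaze-estimated
    # vergence depth every frame:
    # power = required_lens_power(d_v=estimated_vergence_depth_m)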


(Statistics) The Replication Crisis in Empirical Science: Implications for Human Subject Research in Virtual Environments

Date: March 18th, 9:00 AM - 12:00 PM

Organizer: J. Edward Swan II, Mississippi State University

Abstract: This tutorial will first discuss the replication crisis in empirical science. This term was coined to describe recent significant failures to replicate empirical findings in psychology, medicine, and other fields. In many cases, over 50% of previously reported results could not be replicated. This fact has shaken the foundations of several fields: Can empirical results really be believed? Should, for example, medical decisions really be based on empirical research?

After describing the crisis, the tutorial will revisit enough of the basics of empirical science to explain the origins of the replication crisis. The key issue is that hypothesis testing, which in empirical science is used to establish truth, is the result of a probabilistic process. However, the human mind is wired to reason absolutely: humans have a difficult time understanding probabilistic reasoning. The tutorial will discuss some of the ways that funding agencies, such as the US National Institutes of Health (NIH), have responded to the replication crisis, for example by funding replication studies and requiring that grant recipients publicly post anonymized data.
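The probabilistic point can be made concrete with a short simulation (our own illustrative sketch, not from the tutorial): even when an effect is genuinely real, a study detects it only with the probability given by its statistical power, so a "failure to replicate" is expected about one time in five at the conventional 80% power level.

    # Simulate replications of a two-sample t-test for a real effect
    # (Cohen's d = 0.5, n = 64 per group, roughly 80% power at alpha = .05).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    d, n, reps = 0.5, 64, 10_000
    pvals = np.array([
        stats.ttest_ind(rng.normal(d, 1, n), rng.normal(0, 1, n)).pvalue
        for _ in range(reps)
    ])
    print(f"Fraction of significant replications: {(pvals < 0.05).mean():.2f}")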

Finally, the tutorial will consider how the Virtual Environments community might respond to the replication crisis. In particular, in our community the reviewing process often considers work that involves systems, architectures, or algorithms. In these cases, the reasoning behind the correctness of the results is usually absolute. Therefore, the standard for accepting papers is that the finding exhibits novelty: to some degree, the result should be surprising. However, this standard does not work for empirical studies (which typically involve human experimental subjects). Because empirical reasoning is probabilistic, important results need to be replicated, sometimes multiple times, and by different laboratories. As the replications mount, the field is justified in placing increasing belief in the results. In other words, a field that always requires surprise before accepting a paper reporting empirical results is a field that will not progress in empirical knowledge.

The tutorial will end with a call for the community to be more accepting of replication studies. In addition, the tutorial will consider whether actions taken by other fields, in response to the replication crisis, might also be recommendable for the Virtual Environments community.


(Haptics) Tangibles within VR: Tracking, Augmenting, and Combining Fabricated and Commercially Available Commodity Devices

Date: March 18th, 2:00 PM - 5:30 PM

Organizers:

  • Alexandre G. de Siqueira, Tangible Visualization Lab, Clemson University, USA
  • Ayush Bhargava, Virtual Environment Group, Clemson University, USA

Abstract: Virtual Reality (VR) continues to provide an excellent alternative for users to experience diverse environments from the comfort of their homes. The higher the level of immersion, the better the experience. However, the devices one can use to interact with objects in such environments are often restrictive and provide the same generic tactile feedback for contrasting objects, which may adversely affect immersion. One way to address this challenge is to use tangibles. Tangibles combined with tracking devices can provide alternative ways to increase immersion and deliver tailored tactile feedback. In this tutorial, we will combine Microsoft Surface Dials, 3D printing, HTC Vive trackers, and the Unity 3D platform (both as particular products and as representatives of broader classes of technology) to overcome these challenges. Attendees will be introduced to the Open Sound Control (OSC) and TUIO protocols and how to link them to a Unity project. We will also address 3D printing challenges and showcase sample interaction techniques for tangibles within VR applications.
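For a flavor of the OSC side of such a pipeline, here is a minimal receiver sketch (our own illustration in Python using the python-osc package; the OSC address and port are made up, and the tutorial itself wires the equivalent into Unity):

    # Listen for tangible/dial events sent over OSC and print them.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_dial(address, rotation_delta):
        # Called for every message matching the mapped address below.
        print(f"{address}: rotation delta = {rotation_delta}")

    dispatcher = Dispatcher()
    dispatcher.map("/tangible/dial", on_dial)  # hypothetical OSC address
    BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()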

(Navigation) Spatial Navigation Interfaces for Digital 3D Environments

Date: March 19th, 9:00 AM - 12:00 PM

Organizers:

  • Ernst Kruijff, Institute of Visual Computing, Bonn-Rhein-Sieg University of Applied Sciences
  • Bernhard E. Riecke, School of Interactive Arts and Technology, Simon Fraser University

Abstract: In this course, we will take a detailed look at various breeds of spatial navigation interfaces that allow for locomotion in digital 3D environments such as games, virtual environments, or even the exploration of abstract data sets. We will look closely into the basics of navigation, unravelling both the psychophysics (including wayfinding) and the actual locomotion (travel) aspects. These theoretical foundations form the basis for the practical skill set we will develop through an in-depth discussion of navigation devices and techniques, and a step-by-step discussion of multiple real-world case studies. In doing so, we will cover the full range of navigation techniques, from handheld to full-body, highly engaging, and partly unconventional methods, and tackle spatial navigation with hands-on experience and tips for the design and validation of novel interfaces. In particular, we will look at affordable setups and ways to "trick" users into a realistic feeling of self-motion in the explored environments. As such, the course unites the theory and practice of spatial navigation, serving as an entry point for understanding and improving upon currently existing methods for the application domain at hand.
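One recurring building block in such interfaces is the mapping from a body cue to travel velocity. A minimal illustrative sketch (our own, with made-up parameter values) that maps torso lean from a tracker to forward/backward speed, using a dead zone and a power-law transfer function:

    # Map a signed lean angle (degrees) to a travel speed (m/s).
    def lean_to_speed(lean_deg, dead_zone=2.0, gain=0.15, exponent=1.6, v_max=3.0):
        mag = max(abs(lean_deg) - dead_zone, 0.0)   # ignore small postural sway
        speed = min(gain * mag ** exponent, v_max)  # fine control near zero
        return speed if lean_deg >= 0 else -speed   # lean back to reverse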


(Calibration) Calibration Methods for Optical See-Through Head-Mounted Displays

Date: March 19th, 9:00 AM - 12:00 PM

Organizers:

  • Jens Grubert, Coburg University of Applied Sciences and Arts, Germany
  • Yuta Itoh, Keio University, Japan
  • Kenneth Moser, Marxent Labs LLC, USA
  • J. Edward Swan II, Mississippi State University, USA

Abstract: Optical see-through head-mounted displays (OST HMDs) are a major output medium for Augmented Reality and have seen significant growth in popularity and usage among the general public due to the increasing availability of consumer-oriented models, such as the Microsoft HoloLens. Unlike Virtual Reality headsets, OST HMDs inherently support the addition of computer-generated graphics directly into the light path between a user's eyes and their view of the physical world. As with most Augmented and Virtual Reality systems, the physical position of an OST HMD is typically determined by an external or embedded 6-degree-of-freedom tracking system. However, in order to properly render virtual objects that are perceived as spatially aligned with the physical environment, it is also necessary to accurately measure the position of the user's eyes within the tracking system's coordinate frame. For over 20 years, researchers have proposed various calibration methods to determine this needed eye position.

We present a half-day tutorial that will provide attendees of all skill levels with the knowledge and techniques required to effectively understand and employ calibration procedures in their own OST HMD systems. Participants from the Virtual Reality community will find these techniques beneficial to their applications as well.
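A classic family of such calibration methods (e.g., SPAAM, the Single Point Active Alignment Method) has the user repeatedly align an on-screen reticle with a tracked physical point, then solves for a 3x4 projection relating tracked 3D points to 2D screen positions. A minimal direct-linear-transform sketch of that estimation step (our own illustration; it assumes the 2D/3D correspondences have already been collected):

    import numpy as np

    def dlt_projection(world_pts, screen_pts):
        # world_pts: (N, 3) points in tracker coordinates; screen_pts: (N, 2)
        # aligned screen positions, N >= 6. Returns the 3x4 projection matrix.
        rows = []
        for (X, Y, Z), (u, v) in zip(world_pts, screen_pts):
            rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
            rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        # The right singular vector of the smallest singular value is the
        # least-squares solution in homogeneous form.
        return vt[-1].reshape(3, 4)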


(Web3D) Web3D Quickstart

Date: March 19th, 1:30 PM - 4:45 PM

Organizers:

  • Nicholas F. Polys, Virginia Tech, USA
  • Johannes Behr, Fraunhofer IGD, Germany
  • Timo Sturm, Fraunhofer IGD, Germany
  • Uwe Woessner, HLRS, Germany

Abstract: This tutorial will cover the wide range of methods and patterns used to develop interactive 3D applications based on royalty-free and open ISO-IEC standards. As a high-level scene graph language and API above the graphics library, Extensible 3D (X3D) provides a suite of standards including multiple data encodings and language bindings. With the same declarative programming idiom as the WWW, developers can build 2D + 3D Virtual and Mixed Reality applications that integrate with and publish to the WWW ecosystem.

This tutorial will explore the myriad approaches, tool chains, and applications for building X3D objects and scenes, including different formats and data types, approaches to multiple input devices and sensors, and deployment to different display devices, including 3D printers. Participants are not expected to have prior experience with X3D or VRML (the Virtual Reality Modeling Language); familiarity with markup languages and JavaScript is beneficial, but not required.
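To give a flavor of the declarative idiom, a minimal X3D scene (our own illustrative snippet, not from the tutorial materials) describing a lit blue box with a named viewpoint looks like this; the same markup can be embedded in an ordinary HTML page via frameworks such as X3DOM:

    <X3D profile='Interchange' version='3.3'>
      <Scene>
        <Viewpoint description='Entry view' position='0 0 10'/>
        <Shape>
          <Appearance><Material diffuseColor='0.2 0.4 0.9'/></Appearance>
          <Box size='2 2 2'/>
        </Shape>
      </Scene>
    </X3D>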