IEEE VR logo

March 22nd - 26th

IEEE Computer Society IEEE

Tutorials

The following tutorials will be held at IEEE Virtual Reality 2020:


Developing Embodied Interactive Virtual Characters for Human-Subjects Studies

Organizers:

  • Kangsoo Kim, University of Central Florida, USA
  • Nahal Norouzi, University of Central Florida, USA
  • Austin Erickson, University of Central Florida, USA

Abstract: Embodied interactive virtual characters, such as virtual humans or animals, have been actively used in various Virtual/Augmented/Mixed Reality (VAMR) applications, and researchers have developed different types of embodied virtual characters and studied their effects on the user’s perception and behavior. This tutorial aims to provide the audience with background knowledge on research in embodied interactive virtual characters and to show how to develop such interactive characters for their specific applications, particularly focusing on human-in-the-loop systems (the Wizard of Oz paradigm). The tutorial will first explore prior research on interactive virtual characters, focusing on the social influence of these entities over users, e.g., the sense of social presence, trust, and collaboration, while discussing the recent trend of convergence among intelligent virtual agents (IVAs), MR, and the Internet of Things (IoT) in the scope of virtual characters interacting with their physical surroundings. We will also share our recent research findings and lessons learned from 5+ years of experience researching interactive virtual characters and running user studies at the Synthetic Reality Lab (SREAL), University of Central Florida (UCF).

The tutorial will then explain how to develop virtual characters in Unity using third-party assets and plugins, such as Mixamo and Rogo Digital’s LipSync. The audience will follow step-by-step instructions with the provided materials and eventually have a simple interactive virtual character that they can control through conventional 2D user interfaces, as used in human-in-the-loop studies. The tutorial will also explain how to develop a sensing module that captures the current state of the surrounding environment, which can create a realistic connection between the physical and virtual worlds. For example, an Arduino board with a couple of sensors, e.g., a wind sensor, can detect wind in the real environment and trigger coherent events in the virtual environment, such as blowing a virtual ball on a table.
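To make the sensing idea concrete, here is a minimal sketch (in Python rather than Unity C#, for brevity) of how raw wind-sensor readings might be mapped to discrete events for the virtual environment. The threshold value and event name are illustrative assumptions; a real setup would read the values from the Arduino over serial (e.g., with pyserial) instead of from a list:

```python
# Minimal sketch of the sensing idea: map raw wind-sensor readings (as an
# Arduino might stream them over serial) to discrete events that a virtual
# environment could react to. The threshold and event name are illustrative,
# not taken from the tutorial materials.

WIND_THRESHOLD = 300  # hypothetical raw ADC value separating "calm" from "windy"

def classify_reading(raw_value):
    """Turn one raw sensor reading into an event name (or None)."""
    return "wind_gust" if raw_value >= WIND_THRESHOLD else None

def events_from_stream(readings):
    """Emit an event only on calm-to-windy transitions, so a sustained
    gust fires once instead of on every sample."""
    events = []
    windy = False
    for value in readings:
        event = classify_reading(value)
        if event and not windy:
            events.append(event)   # e.g., trigger "blow the virtual ball"
        windy = event is not None
    return events

# In a real setup these values would come from the serial port, e.g.
#   serial.Serial("/dev/ttyACM0", 9600).readline()
simulated = [120, 150, 310, 420, 280, 90, 350]
print(events_from_stream(simulated))  # ['wind_gust', 'wind_gust']
```

The transition check matters: without it, a steady gust would re-trigger the virtual event on every sample.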

Intended Audience and Expected Value: The tutorial will be of interest to students, faculty, and researchers who are interested in interactive virtual character research and want to develop such interactive characters for their user studies. A basic understanding of programming and the Unity editor, which the audience can easily obtain from the many Unity online lectures, is sufficient to follow the tutorial, so attendees without a technical background are also welcome and encouraged to join. The audience for this tutorial can expect to leave with the following:

  • A basic understanding of embodied interactive virtual character research, its impact on human perception and behavior, and the recent trends and potential.
  • An understanding of how to prototype an embodied interactive virtual character.
  • An understanding of how to integrate an environmental sensing module with the embodied virtual character prototype.

Attendees are required to bring their own laptops for the tutorial.
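Conceptually, the human-in-the-loop (Wizard of Oz) control described above boils down to mapping operator commands from a 2D interface to character actions. Below is a minimal sketch of that dispatch logic, in Python rather than the tutorial's Unity setting, with made-up command and action names:

```python
# A minimal Wizard-of-Oz command dispatcher, sketched in Python for brevity
# (the tutorial builds the equivalent inside Unity). A hidden operator presses
# 2D UI buttons; each button sends a command string that is mapped to a
# character animation or utterance. All names here are illustrative.

class WizardOfOzController:
    # hypothetical mapping from operator commands to character actions
    COMMANDS = {
        "wave":  "Play animation: Wave",
        "nod":   "Play animation: Nod",
        "greet": "Speak line: 'Hello, nice to meet you!'",
    }

    def __init__(self):
        self.log = []  # record issued commands for post-study analysis

    def dispatch(self, command):
        action = self.COMMANDS.get(command)
        if action is None:
            return "Ignored unknown command: " + command
        self.log.append(command)
        return action

controller = WizardOfOzController()
print(controller.dispatch("wave"))   # Play animation: Wave
print(controller.dispatch("dance"))  # Ignored unknown command: dance
```

Logging every dispatched command is a small but useful habit for human-subjects studies, since the operator's actions become part of the experimental record.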


Smart Immersive Environments: Augmented Reality and Smart Built Environments

Organizers:

  • Denis Gračanin, Virginia Tech, USA
  • Krešimir Matković, VRVis Research Center, Austria

Abstract: The goal of this tutorial is to introduce participants to Smart Immersive Environments and to discuss basic ideas and design principles for Augmented Reality (AR) applications for Internet of Things (IoT) enabled Smart Built Environments (SBEs). Participants will learn how to incorporate contextualized SBE data, information, and services into AR applications and use that data to improve user interactions with SBEs and support better collaboration among users (both co-located and distributed). Several example applications (single-user and multi-user) from the healthcare and education domains will be presented.

Intended audience: This is an introductory tutorial, open to designers and practitioners at any level of technical experience. The tutorial will be particularly useful for participants engaged in design of context-aware AR applications and exploring interaction design for smart objects (e.g., smart appliances).

Expected value for the audience: Participants will gain knowledge about the design and use of AR technology in IoT-enabled SBE spaces and learn how to use and visualize SBE data to support contextualized interactions and interfaces. Participants will also come away with a better understanding of the challenges related to integrating AR and IoT to support Smart Immersive Environments, and of the opportunities related to XR in general and its applications.
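As a rough illustration of the AR/IoT integration discussed above, the following sketch shows how an AR application might turn the current state of IoT-enabled smart objects into contextual overlay labels anchored to each object. The object IDs, state fields, and label format are all illustrative assumptions, not part of the tutorial's actual system:

```python
# A minimal sketch of contextualized SBE data feeding an AR view: the AR
# application queries the state of IoT-enabled smart objects and renders a
# short contextual label next to each one. All identifiers and fields below
# are hypothetical examples.

smart_objects = {
    "thermostat-1": {"location": "living room", "temperature_c": 22.5},
    "lamp-3":       {"location": "hallway",     "power_on": False},
}

def overlay_label(object_id, state):
    """Build the text an AR overlay might show next to a smart object."""
    if "temperature_c" in state:
        return f"{object_id} ({state['location']}): {state['temperature_c']} °C"
    if "power_on" in state:
        status = "on" if state["power_on"] else "off"
        return f"{object_id} ({state['location']}): {status}"
    return f"{object_id} ({state['location']})"

for oid, state in smart_objects.items():
    print(overlay_label(oid, state))
```

In a deployed system the state dictionary would be refreshed from the building's IoT backend, and the labels would be positioned using the AR platform's spatial anchors rather than printed.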


X3D Quickstart

Organizer:

  • Nicholas F. Polys, Virginia Tech, USA

Abstract: This tutorial will cover the wide range of methods and patterns used to develop interactive 3D applications based on royalty-free and open ISO/IEC standards. As a high-level scene-graph language and API layered above the graphics library, Extensible 3D (X3D) provides a suite of standards including multiple data encodings and language bindings. With the same declarative programming idiom as the WWW, developers can build 2D + 3D Virtual and Mixed Reality applications that integrate with and publish to the WWW ecosystem. Across mobiles, desktops, CAVEs, HMDs, and AR platforms, X3D is an interoperable, portable, and durable 3D graphics technology.

This tutorial will explore the myriad approaches, tool chains, and applications for building X3D objects and scenes. This includes different formats and data types, approaches to multiple input devices and sensors, and deployment to different display devices, including 3D printers. Through guest presenters, we will highlight the variety of applications and enterprises using these ISO standards. Participants are not expected to have prior experience with X3D or VRML (Virtual Reality Modeling Language); familiarity with markup languages and JavaScript is beneficial, but not required.
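As a taste of X3D's declarative scene-graph idiom, the following sketch assembles and serializes a minimal scene (a single red box) using Python's standard xml.etree module. The element names (X3D, Scene, Shape, Box, Appearance, Material) follow the X3D standard; the concrete scene is just an illustrative example:

```python
# Build a minimal X3D scene graph programmatically and serialize it to XML.
# The node names follow the X3D standard's XML encoding; the scene itself
# (one red 2x2x2 box) is an illustrative example only.
import xml.etree.ElementTree as ET

root = ET.Element("X3D", version="3.3", profile="Interchange")
scene = ET.SubElement(root, "Scene")
shape = ET.SubElement(scene, "Shape")
ET.SubElement(shape, "Box", size="2 2 2")                      # geometry node
appearance = ET.SubElement(shape, "Appearance")
ET.SubElement(appearance, "Material", diffuseColor="0.8 0.2 0.2")  # red surface

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

The resulting document mirrors the declarative structure an author would write by hand: the scene graph is the data, and an X3D browser interprets it, rather than the author issuing imperative drawing calls.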