Tutorials
Demystifying Academic Paper Reviews: How to Construct Quality Reviews for Peer-Reviewed Publications. Saturday, March 25, 2023, 08:00-11:30, Shanghai UTC+8
Introduction to Building Social Virtual Reality with Ubiq. Saturday, March 25, 2023, 17:00-18:30, Shanghai UTC+8
Introduction to Building Digital Humans with 3D and 4D Face Capture. Sunday, March 26, 2023, 09:00-10:30, Shanghai UTC+8
Towards Building Automated Non-Rigid Spatially Augmented Reality. Sunday, March 26, 2023, 11:00-12:30, Shanghai UTC+8
Introduction to Building XR Environments Using Omniverse. Sunday, March 26, 2023, 14:00-17:30, Shanghai UTC+8

Tutorial 1: Demystifying Academic Paper Reviews: How to Construct Quality Reviews for Peer-Reviewed Publications

Saturday, March 25, 2023, 08:00-11:30, Shanghai UTC+8, Online

Organizers

Jerald Thomas, Virginia Tech
Evan Suma Rosenberg, University of Minnesota
Tabitha Peck, Davidson College

About this Tutorial

Many researchers in our field have expressed concern about the growing number of paper reviews they are asked to provide. This is backed up by the 2018 Publons Global State of Peer Review (GSPR) report, which shows that the number of requested reviews is outpacing the pool of available reviewers while reviewer fatigue is on the rise. A critical finding was the need for formal training so that newer researchers can enter the peer-reviewer pool earlier and more confidently. This responsibility is often pushed onto students’ advisors, yet only 16.1% of GSPR survey respondents had been asked by their supervisor or PI to write a review with them or on their behalf, and 39.4% of respondents had never had formal peer-review training. 88% of respondents believe that formal peer-reviewer training is either important or very important for producing high-quality reviews, and 80% responded that formal peer-reviewer training would have either a positive or extremely positive effect on the peer-review system. It is clear that the research community at large believes more peer-review training is important, and this tutorial is an attempt to provide part of it to this community.

Intended Audience

The tutorial has no technical component, so there is no required technical level for participants. The intended audience is people relatively new to research who will likely find themselves participating in the peer-review process. The content is targeted at junior researchers, including Ph.D. students, but everyone should find something useful in it.

Expected Value for the Audience

The expected value for the audience is twofold. First, participants will have a greater understanding of the peer-review process and how to construct a quality review. With this knowledge, they should be able to enter the peer-reviewer pool earlier than they would otherwise, increasing their visibility to our community. Second, as we increase the size of the peer-reviewer pool, participants will hopefully see a more manageable number of review requests in the future, reducing the chances of reviewer fatigue.

Topics Outline

The tutorial will be broken into three one-hour sections. First, there will be an introduction and an instructional presentation. The second hour will consist of hands-on, interactive activities aimed at helping participants understand the core components of a quality review. Finally, we will conclude the tutorial with a panel of senior researchers in the field, whom participants will have the opportunity to ask questions about the review process.

Specific topics covered by this tutorial include:

  • The purpose of the peer-review process
  • The goal of a paper review
  • A description of how the peer-review process works in our field, including the more “behind the scenes” aspects
  • Components of a good review and signs of a bad review
  • Example processes for writing reviews
  • Additional tips, tricks, and things to remember
  • Discussion and Q&A

Tutorial 2: Introduction to Building Social Virtual Reality with Ubiq

Saturday, March 25, 2023, 17:00-18:30, Shanghai UTC+8, Online

Organizers

Anthony Steed, University College London
Sebastian Friston, University College London
Ben Congdon, University College London

Summary

One of the most promising applications of consumer virtual reality technology is its use for remote collaboration. A very wide variety of social virtual reality (SVR) applications are now available, from competitive games amongst small numbers of players through to conference-like setups supporting dozens of visitors.

Ubiq is an open source tool that allows developers to build SVR applications very quickly. In this short tutorial, we will introduce some of the capabilities of Ubiq, demonstrate some of its tools, and work through a complete example of how to build a non-trivial application, including custom distributed behaviours. The tutorial is backed up by extensive online documentation, explanatory videos, and a growing set of more complex example systems.

Ubiq is designed to be easily extensible, enabling development of applications and systems that would be difficult or time-consuming to build on commercial platforms. The tutorial is targeted at participants with some technical background.
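
Ubiq itself is a Unity/C# toolkit, but the core idea behind its custom distributed behaviours (peers keeping shared objects in sync by exchanging small messages through a server) can be sketched in a few lines. The following Python toy, with entirely illustrative names that do not correspond to Ubiq's actual API, shows the pattern of serializing local state and applying messages received from remote peers:

```python
import json

# Toy illustration of the message-passing pattern used by social VR
# toolkits such as Ubiq: each networked component serializes its state
# into a small JSON message, and applies messages that arrive from
# remote peers. All names here are illustrative, NOT Ubiq's C# API.

class SharedSpinner:
    """A shared object whose rotation is kept in sync across peers."""

    def __init__(self, component_id):
        self.component_id = component_id  # routes messages to the right object
        self.rotation_y = 0.0

    def local_update(self, degrees):
        """Called on the peer that owns the interaction; returns the
        message that would be broadcast via the room server."""
        self.rotation_y = (self.rotation_y + degrees) % 360.0
        return json.dumps({"id": self.component_id,
                           "rotation_y": self.rotation_y})

    def process_message(self, raw):
        """Called when another peer's message is delivered."""
        msg = json.loads(raw)
        if msg["id"] == self.component_id:
            self.rotation_y = msg["rotation_y"]

# Two peers holding the same logical object:
mine, theirs = SharedSpinner("spinner-1"), SharedSpinner("spinner-1")
theirs.process_message(mine.local_update(15.0))
assert theirs.rotation_y == mine.rotation_y
```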

Intended Audience

The tutorial will be of value to any student, researcher, or professional who wants to develop their own social VR applications, or simply wants a grounding in the challenges of building such applications. Some experience with Unity would be very useful but is not necessary, as the practical part provides a guided walkthrough of the specific issues of using Unity and does not rely on prior knowledge. More experienced participants will be able to implement their own ideas rather than follow our specific guidance.

Value

In this tutorial, participants will learn about SVR technologies and how to build their own system using the Ubiq toolkit in Unity. We will give a short explanation of the difficulties of using many of the commercial platforms for research and then, through examples, show how it is relatively straightforward to implement quite sophisticated applications using Ubiq. We will also give an overview of some of the open source applications built with Ubiq, including tools for running distributed and remote experiments.

Tutorial 3: Introduction to Building Digital Humans with 3D and 4D Face Capture

Sunday, March 26, 2023, 09:00-10:30, Shanghai UTC+8, Room DALIAN

Organizers

Dongdong Weng, Beijing Institute of Technology

Summary

A virtual digital human is, broadly defined, a computer application that presents a humanoid appearance and interacts through it, integrating computer graphics, computer vision, intelligent speech, natural language processing, and other technologies. It can be used for digital content generation and human-computer interaction, helping to improve content production efficiency and user experience. In a narrow sense, it is a digital twin of a human being that exists in the non-physical world, created and used by computational means such as computer graphics, graphics rendering, motion capture, deep learning, and speech synthesis. It is a comprehensive product with multiple human characteristics: external appearance, performance ability, interaction ability, and so on.

To make a digital human more realistic, its appearance must be constructed with high fidelity through various technical means. These construction techniques divide into static and dynamic approaches. Mainstream static modeling captures multiple photos of an actor with camera arrays to achieve 3D reconstruction of the face and body. Light field capture, represented by Lightstage, instead captures sequential photos of the face under different lighting patterns to obtain the multiple skin material textures required for high-fidelity rendering.

The dynamic construction of digital humans is mainly based on 4D acquisition, which adds the temporal dimension to facial reconstruction: multi-view geometry is used with camera arrays for continuous, batch reconstruction. The resulting 4D model sequences clearly record the nonlinear deformation of the human face in motion, breaking through the realism limitations of traditional 3D animation technology.
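
As a small illustration of the multi-view geometry behind camera-array reconstruction, the following sketch triangulates a single 3D point from two calibrated views using OpenCV; the projection matrices and pixel coordinates are made-up examples, not real capture data:

```python
import numpy as np
import cv2

# Minimal sketch of the multi-view triangulation at the heart of
# camera-array reconstruction: given two calibrated views of the same
# point, recover its 3D position. The projection matrices and pixel
# coordinates below are made-up examples, not real capture data.

# 3x4 projection matrices P = K[R|t] for two cameras (K = identity here).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # shifted 1 unit

# The same scene point observed in each (normalized) image.
pt1 = np.array([[0.5], [0.2]])
pt2 = np.array([[0.3], [0.2]])

# Linear triangulation; the result is in homogeneous coordinates.
X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)
X = (X_h[:3] / X_h[3]).ravel()
print("Reconstructed 3D point:", X)  # -> approximately [2.5, 1.0, 5.0]

# Repeating this for dense correspondences in every frame of a
# synchronized camera array yields the 4D (3D + time) model sequences
# described above.
```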

Intended Audience

This course will be of value to any student, researcher, or professional who wants to develop digital humans for professionally generated content (PGC) and AI-generated content (AIGC). There are no mandatory requirements on participants' digital human production experience or skill level. Anyone with experience in digital human production will find ways in the content to improve the efficiency and quality of their work.

Value

In this tutorial, we will present an overview of the production paths for digital humans and their difficulties. Participants will learn how to use 3D and 4D acquisition data to optimize skin texture and the dynamics of facial expression. Examples will be used to compare the realism of figures produced along different production paths. We will also outline some practical applications of 3D reconstruction and Unreal Engine for digital human production.

Tutorial 4: Towards Building Automated Non-Rigid Spatially Augmented Reality

Sunday, March 26, 2023, 11:00-12:30, Shanghai UTC+8, Online

Organizers

Aditi Majumder, Department of Computer Science, University of California, Irvine.
Muhammad Twaha Ibrahim, Department of Computer Science, University of California, Irvine.

Summary

Spatially augmented reality (SAR) uses projections to augment physical surfaces with digital information. SAR has been the focus of extensive research, culminating in systems that can automatically generate geometrically registered and photometrically seamless multi-projector displays on complex shapes. Today, such systems are largely limited to static, rigid surfaces. More recent advances have enabled SAR on non-rigid, dynamic surfaces as well. In addition to the challenges faced with rigid, static surfaces, SAR on non-rigid, dynamic surfaces introduces a host of new technical challenges that must be overcome before such systems are ready for general use.

This tutorial will present an overview of automated geometric registration techniques to build SAR systems on non-rigid surfaces. It will discuss the complexities introduced by a non-rigid, dynamic surface for projection, as well as how prior research has attempted to overcome those challenges. Finally, it will illustrate various applications for non-rigid SAR systems.
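
As a hedged illustration of the simplest geometric-registration building block (the rigid, planar baseline on which the non-rigid techniques in this tutorial expand), the sketch below estimates a projector-to-camera homography from four hypothetical correspondences and uses it to pre-warp content; real systems recover such correspondences automatically by projecting structured-light patterns:

```python
import numpy as np
import cv2

# Hedged sketch of the simplest geometric-registration step in SAR:
# a homography mapping projector pixels to camera pixels for a planar,
# rigid surface. The four correspondences below are invented; real
# systems detect them by projecting structured-light or checkerboard
# patterns and observing them with a calibration camera.

proj_pts = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]],
                    dtype=np.float32)      # projected feature positions
cam_pts = np.array([[102, 85], [1780, 120], [1745, 990], [88, 955]],
                   dtype=np.float32)       # where the camera saw them

# Estimate the projector-to-camera homography from the correspondences.
H, _ = cv2.findHomography(proj_pts, cam_pts)

# Pre-warp the content by the inverse mapping so that it appears
# undistorted on the surface when projected.
content = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in image
registered = cv2.warpPerspective(content, np.linalg.inv(H), (1920, 1080))

# Non-rigid, dynamic surfaces replace this single static homography
# with a mapping that must be re-estimated per frame as the surface
# deforms, which is the central challenge this tutorial addresses.
```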

Technical Level

The course will be self-contained and does not assume prior knowledge of SAR. Familiarity with basic graphics/vision techniques for geometric registration and color calibration will be helpful. A basic understanding of linear algebra is required.

Intended Audience

The tutorial targets students, professionals, and practitioners who would like to build SAR systems using one or more projectors. Though the tutorial focuses on non-rigid, dynamic surfaces, its first part provides the background on SAR systems for rigid surfaces. The course can be extremely useful for beginners who want to start research in this domain and for professionals who want to build SAR systems.

Value

The goal of this tutorial is to give the audience a sufficient understanding of current research, its limitations, and the remaining open challenges to begin building their own automated SAR systems for non-rigid, dynamic surfaces and objects.

Tutorial 5: Introduction to Building XR Environments Using Omniverse

Sunday, March 26, 2023, 14:00-17:30, Shanghai UTC+8, Room DALIAN

Organizers

Shen Song, Senior Solution Architect, NVIDIA

Summary

Creating interactive XR environments is becoming essential for researchers in XR. With a modern game engine, building an XR environment is reasonably convenient. However, developing an interactive XR environment for research purposes, such as usability studies, still involves many jobs in the working pipeline, including creating 3D models, animations, lighting, and so on, which typically requires team members with different specialties to produce a quality XR environment. Helping the team collaborate efficiently, or creating an XR proof-of-concept (POC) environment quickly, is challenging: the pipeline might span various tools for 3D modeling, animation, lighting, programming, the real-time engine, and more, and most of the time it is teamwork. NVIDIA Omniverse is a platform that connects these different tools into a collaborative working pipeline. This half-day tutorial introduces NVIDIA Omniverse: how to start working with it, set up the environment, quickly build an XR environment for a POC by leveraging the resources and AI capabilities in Omniverse, and present the result to users through the cloud-based Omniverse XR.
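
Omniverse's collaborative pipeline is built on OpenUSD, so the scenes exchanged between tools are USD stages. As a minimal, illustrative sketch (the file name and prim paths are arbitrary examples), the following Python snippet authors a simple USD stage of the kind that can be opened and live-edited in Omniverse:

```python
# Minimal sketch of authoring an OpenUSD stage in Python. Omniverse's
# collaboration features operate on USD scenes like this one; the file
# name and prim paths are arbitrary examples.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("xr_environment.usda")

# A root transform for the environment and one placeholder prop.
world = UsdGeom.Xform.Define(stage, "/World")
cube = UsdGeom.Cube.Define(stage, "/World/TablePlaceholder")
cube.GetSizeAttr().Set(0.8)                          # 0.8 m cube
cube.AddTranslateOp().Set(Gf.Vec3d(0.0, 0.4, -1.5))  # place it in the room

# Mark the environment root as the default prim and write the layer,
# which collaborators (or Omniverse apps) can then open and extend.
stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
```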

Intended Audience

Since NVIDIA Omniverse provides a collaborative pipeline for the entire design and development team, this tutorial will benefit any student, researcher, or professional who needs to design and develop XR environments, manage such projects, or review the work. Experience with tools such as Unity, Unreal Engine, Maya, Blender, or other Autodesk products would be helpful but is not necessary.

Value

In this tutorial, participants will learn how to use NVIDIA Omniverse in their XR design and development work. NVIDIA Omniverse is free for any individual developer. Participants will also get a glimpse of how NVIDIA Omniverse Enterprise can improve the collaborative working pipeline. Drawing on real customer experiences, we will show how using NVIDIA Omniverse with the latest RTX GPUs can improve teamwork and rendering efficiency, producing more realistic rendering results with RTX on.
