Tutorial (Timezone: Orlando, Florida USA UTC-4) | Date | Time | Room |
---|---|---|---|
XR for Healthcare: Immersive and Interactive Technologies for Serious Games (HealthXR) | Saturday, March 16, 2024 | 8:30‑12:00 | Fantasia M |
Photogrammetry of Martian Terrains from Perseverance Rover Imagery | Saturday, March 16, 2024 | 16:00‑17:30 | Fantasia M |
New Advances in the Theory and Evaluation of Presence | Sunday, March 17, 2024 | 8:30‑12:00 | Fantasia M |
Developing Immersive and Collaborative Visualizations with Web-Technologies | Sunday, March 17, 2024 | 14:00‑15:30 | Fantasia M |
The Ins and Outs of Fixed-Screen Immersive Displays (CAVEs) | Sunday, March 17, 2024 | 16:00‑17:30 | Fantasia M |
Tutorial 1: XR for Healthcare: Immersive and Interactive Technologies for Serious Games (HealthXR)
Saturday, March 16, 2024, 8:30-12:00 (Orlando, Florida USA UTC-4) Room: Fantasia M
Organizers
Manuela Chessa - University of Genoa - Dept. of Informatics, Bioengineering, Robotics, and Systems Engineering
Fabio Solari - University of Genoa - Dept. of Informatics, Bioengineering, Robotics, and Systems Engineering
Summary
In this tutorial, we will present and discuss the challenges of exploiting novel XR technologies in the field of healthcare, specifically to develop serious games and exercises for a wide range of cognitive and physical issues, from training to assessment. In this domain, VR experts must collaborate and work in synergy with doctors, physiotherapists, and healthcare specialists. Although XR applications in healthcare have recently gained popularity, a large gap still exists between technological advances in VR and 3D technologies and their effective use in clinical practice. Indeed, the XR setups adopted by hospitals and rehabilitation centers seldom exploit the most recent VR/AR/MR solutions. Conversely, VR/AR/MR researchers still pay little attention to the cognitive, perceptual, and physical specificities of patients and users affected by particular disabilities and impairments.
In this tutorial, Manuela Chessa and Fabio Solari will draw on their expertise in developing VR and AR solutions, with particular attention to cognitive and perceptual aspects, as well as on their long-standing collaboration in international projects addressing healthcare issues.
The aim of the tutorial is to review the theoretical background of VR development, to present and discuss the opportunities and limits of current technologies and software solutions, and to focus on the existing gaps between the VR and healthcare communities.
During the tutorial, the presenters will share examples from their collaborations with hospitals and research centers.
The tutorial is supported by the FIT4MEDROB project ("Fit4MedRob - Fit for Medical Robotics", PNRR), Piano Nazionale Complementare (PNC), Italy.
Intended Audience
This tutorial targets students and academic researchers, as well as industry practitioners. More generally, anyone involved at any level in developing XR systems for cognitive and physical training may find it of interest.
Value
The audience will learn the perceptual differences between the main visualization and interaction techniques in XR, and how to optimally exploit these differences to develop effective serious games and exergames. The main goal of this tutorial is to examine and discuss the gaps that still prevent the effective use of the latest VR technologies in healthcare.
Tutorial 2: Photogrammetry of Martian Terrains from Perseverance Rover Imagery
Saturday, March 16, 2024, 16:00-17:30 (Orlando, Florida USA UTC-4) Room: Fantasia M
Organizers
Robert LiKamWa - Arizona State University USA
Kathryn Powell - Arizona State University USA
Lauren Gold - Arizona State University USA
Summary
This tutorial provides an in-depth exploration of the methodologies and techniques necessary for leveraging Mastcam-Z data to generate scientifically accurate and educationally relevant 3D models of Martian terrains. Aimed at individuals with a foundational understanding of game engines and 3D modeling, this instructional guide delves into the process of accessing and utilizing the wealth of information available in NASA's Planetary Data System. Participants will learn the application of photogrammetry principles to accurately reconstruct Martian landscapes, enhancing their fidelity and immersion. Furthermore, the tutorial introduces the use of the NASA JPL Landform tool, a pivotal resource for advanced visualization and analysis of Martian terrains. By offering a comprehensive walkthrough from data acquisition to the final 3D model production, this tutorial promises to equip attendees with the necessary skills to contribute to the field of planetary science visualization, fostering a deeper understanding of Martian geology and topography.
Intended Audience
A working knowledge of game engines and 3D models is sufficient technical preparation. The tutorial is intended for those who want to produce scientifically accurate and educationally relevant 3D models of planetary terrains.
Value
The tutorial will provide an end-to-end understanding of how to produce Martian terrains from NASA rover mission data.
Tutorial 3: New Advances in the Theory and Evaluation of Presence
Sunday, March 17, 2024, 8:30-12:00 (Orlando, Florida USA UTC-4) Room: Fantasia M
Organizers
Mel Slater - Event Lab, University of Barcelona, Barcelona, Spain & Institute of Neurosciences, University of Barcelona, Spain
Mavi Sanchez-Vives - Event Lab, University of Barcelona, Barcelona, Spain
Ramon Oliva - Event Lab, University of Barcelona, Barcelona, Spain & Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
Esen Küçüktütüncü - Event Lab, University of Barcelona, Barcelona, Spain & Institute of Neurosciences, University of Barcelona, Spain
Summary
Presence has been a cornerstone of Virtual Reality (VR) research since the early 1990s, focusing on the phenomena whereby users experience the illusion of being in a virtual environment (Place Illusion, PI), believe in the reality of its events (Plausibility Illusion, Psi), feel ownership over a virtual body, and sense the presence of others (copresence). This tutorial addresses these aspects and introduces novel evaluation methods. Recent advances have reshaped our understanding of presence, suggesting that PI is based on natural sensorimotor contingencies within VR, that Psi depends on the environment's response to user actions and its coherence with real-life expectations, and that body ownership stems from multisensory integration.

The tutorial will explore how presence can now be quantified using probability theory and Reinforcement Learning, bypassing traditional questionnaires and measures. It also highlights an inverse relationship between PI and the entropy of eye scanpaths, providing a solid empirical basis for the study of PI.

This tutorial is designed for researchers and practitioners seeking to deepen their understanding of VR's scientific principles, particularly presence. It is beneficial both for research and for the design and testing of VR applications, requiring some prior VR experience and, ideally, a grasp of experimental design and analysis. Participants will learn the conceptual framework of presence and its evaluation through lectures and practical experiences, including multi-participant environments from the GuestXR European Project. This hands-on approach will demonstrate how various factors, such as sensorimotor contingencies, affect presence in settings such as multi-person meetings. Attendees will gain a comprehensive theoretical understanding of presence and insight into assessing contributing and detracting factors, including experimental design and evaluation.
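To make the scanpath-entropy idea concrete: once gaze samples have been discretized into regions of interest, one common way to score a scanpath is the Shannon entropy of its transition distribution — repetitive, focused gaze yields low entropy, erratic gaze yields high entropy. The sketch below is our own illustration of that general idea, not the specific measure used by the presenters; the `scanpath_entropy` helper and the ROI labels are hypothetical.

```python
import math
from collections import Counter

def scanpath_entropy(rois):
    """Shannon entropy (bits) of the first-order transition
    distribution of a discretized eye scanpath.

    rois: sequence of region-of-interest labels visited in order,
          e.g. ["screen", "avatar", "screen", "door", ...]
    """
    transitions = Counter(zip(rois, rois[1:]))
    total = sum(transitions.values())
    if total == 0:
        return 0.0
    probs = [count / total for count in transitions.values()]
    return -sum(p * math.log2(p) for p in probs)

# A repetitive scanpath (low entropy) vs. an erratic one (higher entropy):
focused = ["A", "B", "A", "B", "A", "B", "A"]
erratic = ["A", "C", "B", "D", "A", "D", "C"]
```

Under the reported inverse relationship, the `focused` pattern (entropy 1.0 bit here) would be associated with stronger Place Illusion than the `erratic` one.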
Intended Audience
This tutorial is aimed at researchers and practitioners who wish to gain a deeper understanding of the scientific basis of virtual reality with respect to the issue of presence. This will be useful for research, but also for the design and testing of applications. Participants should ideally have had some prior experience of VR. Some understanding of experimental design and analysis would be useful, but not essential.
Value
The audience will learn about the conceptual framework of presence and how to evaluate it in experimental studies or practical implementations. Moreover, they will learn this not only through lectures but also through hands-on experiences using multi-participant environments developed in the GuestXR European Project. These experiences will show, in a practical way, how varying contributing factors (such as the sensorimotor contingencies) influences the illusions of presence, for example in multi-person meetings. The audience is therefore expected to take away a deeper theoretical appreciation of the long-standing issues of presence and to be able to judge how to assess the factors that lead to and detract from it, including questions of experimental design and evaluation.
Tutorial 4: Developing Immersive and Collaborative Visualizations with Web-Technologies
Sunday, March 17, 2024, 14:00-15:30 (Orlando, Florida USA UTC-4) Room: Fantasia M
Organizers
David Saffo - JPMorgan Chase, Global Technology Applied Research
Cheng Yao Wang - JPMorgan Chase, Global Technology Applied Research
Feiyu Lu - JPMorgan Chase, Global Technology Applied Research
Blair MacIntyre - JPMorgan Chase, Global Technology Applied Research
Summary
Immersive analytics (IA), the application of immersive technologies such as augmented and virtual reality to data visualization and analytics, is a rapidly growing area of research. Web technologies have the potential to greatly benefit the development of immersive visualizations and systems through their affordances for multi-device distribution and networking. This tutorial is aimed at researchers and developers who want to learn how to implement immersive and collaborative visualizations and systems with web technologies. Tutorial participants will learn these skills through hands-on coding activities using the latest tools for developing web applications. Furthermore, participants will gain practical skills and knowledge of WebXR, Anu.js, D3.js, Vite, and Coleuses. By the end of this half-day course, participants will be able to view and interact with an immersive visualization of their own making alongside their peers in a multi-user collaborative environment, all through their devices' web browsers.
Intended Audience
This course is intended for all members of the VR community who are interested in learning how to develop immersive and collaborative visualizations with web technologies. We especially encourage educators and researchers to join as this course has the most potential to be useful for their work. We believe this course will best serve researchers with interest and experience in the following areas: data visualization, immersive analytics, 3D UI, augmented and virtual reality, and multi-user collaboration.
Value
Immersive analytics (IA) is a research discipline that seeks to leverage emerging display technology such as augmented reality (AR) and virtual reality (VR) for the purpose of information and data visualization. Applications of IA techniques offer several opportunities over traditional data visualization techniques, such as spatial immersion, multi-sensory presentation, situated visualization, engagement, and collaboration. However, with additional opportunities come additional challenges. IA systems and visualizations comprise a vast and nascent design space spanning different display technologies, input modalities, hardware systems, interactions, and visualization design practices. These dimensions make it challenging for researchers and developers to design and implement IA visualizations and systems.
Beyond developing effective IA systems, the challenge of creating collaborative immersive applications is daunting. Synchronizing the interactive elements of the environment, sharing synchronized avatars and presence, and supporting voice and/or video sharing represent a non-trivial engineering effort for most developers. This is especially challenging in education and research environments where resources are limited.
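The state-synchronization problem described above reduces, at its core, to keeping a shared document consistent as updates arrive from multiple clients. The sketch below illustrates one simple policy (last-writer-wins with per-key versioning) in plain Python; all names here are hypothetical, and a real collaborative system would run this logic over a network transport such as WebSockets rather than in memory.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class SharedRoom:
    """Minimal last-writer-wins shared state: the core idea behind
    synchronizing interactive scene elements across clients.
    Illustrative only; real systems add networking and conflict
    resolution far beyond this."""
    state: dict = field(default_factory=dict)
    versions: dict = field(default_factory=dict)
    subscribers: list = field(default_factory=list)

    def subscribe(self, callback: Callable[[str, Any], None]) -> None:
        """Register a client callback to be notified of accepted updates."""
        self.subscribers.append(callback)

    def apply(self, key: str, value: Any, version: int) -> bool:
        """Accept an update only if it is newer than what we hold."""
        if version <= self.versions.get(key, -1):
            return False  # stale update from a lagging client; drop it
        self.state[key] = value
        self.versions[key] = version
        for callback in self.subscribers:
            callback(key, value)  # fan out to the other participants
        return True
```

For example, a room would accept `apply("cube.position", (1, 0, 0), version=1)` and then reject a late-arriving `version=0` update for the same key, keeping every participant's view of the cube consistent.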
The main goal of this course is to equip researchers to face these challenges by familiarizing them with the development of immersive and collaborative visualizations with web technology, and giving them a practical starting point for creating their own applications and environments.
Tutorial 5: The Ins and Outs of Fixed-Screen Immersive Displays (CAVEs)
Sunday, March 17, 2024, 16:00-17:30 (Orlando, Florida USA UTC-4) Room: Fantasia M
Organizers
William Sherman - National Institute of Standards and Technology, US Department of Commerce
Summary
This tutorial is tailored for individuals who have entered the virtual reality (VR) domain during the recent resurgence of head-mounted displays (HMDs), offering an introduction to fixed-screen immersive displays, also known as CAVE-style VR. The session will commence with a historical overview of CAVE-style VR, contrasting it with head-based displays to highlight the advantages and disadvantages of each, and guide participants in determining the optimal use cases for both modalities.
The tutorial will delve into the distinctions in rendering and interaction techniques between fixed-screen and head-based systems, providing a high-level overview of the matrix operations involved in generating off-axis stereoscopic images for fixed screens and discussing the implications for CAVE rendering versus HMDs. Additionally, the evolution of hardware for implementing fixed-screen VR displays will be discussed, including advancements that have minimized the spatial requirements of CAVE systems, such as short-throw projectors and direct-view LED systems, and innovations in stereoscopic glasses that enable multiple users to experience a shared CAVE environment from individual perspectives.
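The off-axis rendering mentioned above follows a well-known construction (often called the generalized perspective projection, after Kooima): the asymmetric frustum extents are read directly off the screen-corner geometry and the tracked eye position. A minimal sketch in Python for readability (the function name and corner ordering are our own; production code would feed the result to a `glFrustum`-style projection matrix, once per wall and per eye):

```python
def off_axis_frustum(pa, pb, pc, pe, near):
    """Asymmetric frustum extents (left, right, bottom, top) at the
    near plane for a fixed physical screen.

    pa, pb, pc: screen corners (lower-left, lower-right, upper-left)
    pe: tracked eye position
    near: near-plane distance
    """
    sub = lambda u, v: tuple(a - b for a, b in zip(u, v))
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    def norm(v):
        m = dot(v, v) ** 0.5
        return tuple(a / m for a in v)

    vr = norm(sub(pb, pa))                    # screen "right" axis
    vu = norm(sub(pc, pa))                    # screen "up" axis
    vn = norm((vr[1] * vu[2] - vr[2] * vu[1], # screen normal (vr x vu)
               vr[2] * vu[0] - vr[0] * vu[2],
               vr[0] * vu[1] - vr[1] * vu[0]))

    va, vb, vc = sub(pa, pe), sub(pb, pe), sub(pc, pe)
    d = -dot(va, vn)                          # eye-to-screen distance
    s = near / d                              # scale extents to near plane
    return (dot(vr, va) * s, dot(vr, vb) * s, # left, right
            dot(vu, va) * s, dot(vu, vc) * s) # bottom, top
```

With the eye centered in front of the screen the frustum comes out symmetric, exactly like a desktop projection; as the tracked head moves sideways, the left/right extents shift asymmetrically. This is why each CAVE wall needs its own projection recomputed every frame (and per eye, with the eye positions offset for stereo), in contrast to an HMD, whose screens move with the head.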
Participants will also be introduced to the diverse configurations of fixed-screen systems, ranging from classic 4-sided CAVEs to more complex setups like 6-sided cubic CAVEs, extra-large CAVEs, movable screens, the StarCAVE, the Pipe, CAVE2, and others. The tutorial will further explore a variety of software solutions compatible with modern fixed-screen VR systems, including integration with the Unity Game Engine, open-source scientific visualization tools like ParaView and COVISE, commercial tools such as Amira, and VR libraries like Vrui, FreeVR, vrJuggler, and HEV. The session will conclude with a discussion on the role of the OpenXR standard in the context of fixed-screen VR displays.
Intended Audience
This tutorial will not focus on any one type of VR or any particular product, but on the field as a whole. Attendees of any technical background will be able to follow the material presented in this tutorial.
Value
This tutorial is crafted specifically for VR researchers who have entered the field within the last ten years. It recognizes that many in our community may not be fully acquainted with the field's rich history. Through this session, attendees will gain insight into the triumphs and challenges that our experienced colleagues have navigated as the VR medium has evolved to its current state.