IEEE VR 2026 Tutorials

| Time | Saturday, March 21 – Room 1 (325A) | Saturday, March 21 – Room 2 (325B) | Saturday, March 21 – Room 3 (325C) | Sunday, March 22 – Room 1 (325A) | Sunday, March 22 – Room 2 (325B) |
|---|---|---|---|---|---|
| 08:30–10:00 | Tutorial 01: How to Create Custom Motions and Lip Synchronization in Unity VR Using Meta Avatars | Tutorial 02: The Game Playbook: Designing for Serious Games | Tutorial 03: Rapid VR Prototyping for Academia and Industry: A Tutorial on Building Interactive Experiences with Godot and the XR Tools Framework | Tutorial 11: Interaction Design for eXtended Reality (ID4XR) | Tutorial 12: eXtended Reality and Rehabilitation: Technical Solutions, Regulations, and Challenges from Research Labs to Clinical Practice |
| 10:00–10:30 | Break | | | | |
| 10:30–12:00 | Tutorial 04: Designing Authorable Mixed Reality Systems: Scalability, Interaction Patterns, and Lessons from Real-World Deployments | Tutorial 05: AI-Driven Immersive Characters in the Era of LLMs: Prompting, RAG, Evaluation, and Guardrails for XR | | | |
| 12:00–14:00 | Lunch | | | | |
| 14:00–15:30 | Tutorial 06: Ubiq and Ubiq-Genie: Integrating Generative AI into Social eXtended Reality | Tutorial 07: Designing Presence: Artistic Methods in Narrative XR | Tutorial 08: Building Local Multi-User VR Systems in Unity Using a Lightweight TypeScript API | Tutorial 13: A Practical Guide on Building Embodied Intelligent Virtual Agent for XR Research and Applications | Tutorial 14: VR Research at Scale with the Virtual Experience Research Accelerator (VERA) – An Early-Access Hands-On Tutorial |
| 15:30–16:00 | Break | Break | Break | | |
| 16:00–17:30 | Tutorial 09: Emerging Reviewing and Publication Models to Promote Trustworthy Research and Support Scientific Career Advancement | Tutorial 10: Advanced AI integrations into Unity with Embardiment | | | |
Day: Saturday, March 21 (Timezone: Korea Standard Time, UTC/GMT +9 hours)
- Tutorial 01: How to Create Custom Motions and Lip Synchronization in Unity VR Using Meta Avatars
- Organizers
- Insun Cho - University of San Francisco
- June Lee - University of San Francisco
- Target Audience
- This tutorial is intended for intermediate to advanced VR developers using the latest versions of the Meta XR Interaction and Meta XR Avatar SDKs in Unity 3D. Developers should understand how to import assets from the Unity Asset Store and from Git repositories, as well as how to load files onto Meta Quest headsets. Additionally, they should have a good understanding of Unity's Animator Controller, state machines, and Audio Source integration.
- Summary
- The primary objective of this tutorial is to share step-by-step instructions on how to add custom motions and lip synchronization when using Meta Avatars as NPCs in Unity.
When incorporated as an NPC (Non-Player Character), the selected Meta Avatar loads only at runtime, and it is extremely difficult to work out how to add custom motions using a conventional animator controller. In this tutorial, I will show developers how to:
- Record custom motions using the Animation Recorder
- Apply the recorded animation to a specific NPC Meta Avatar
- Apply a custom audio file so the Meta Avatar speaks with its lips accurately synchronized to the words in the audio.
Developers will walk away with the knowledge to create a Meta Avatar that ‘acts’ according to their narrative, adding a rich experience to their VR projects without the heavy cost of traditionally rigged character animation.
- Tutorial 02: The Game Playbook: Designing for Serious Games
- Organizers
- Kimberly Hieftje, PhD, Veronica Weser, PhD, Shu Wei, PhD - XRPediatrics; Yale Center for Immersive Technologies in Pediatrics, Yale School of Medicine, United States
- Target Audience
- Researchers, designers, developers, and students working in serious games and immersive technologies who seek a structured, evidence-informed approach to serious game development.
- Summary
- Creating an effective serious game that is evidence-informed, theory-driven, and capable of promoting meaningful behavior change is a complex and often fragmented process. Research shows that serious games integrating behavior change theory alongside game theory are more effective in promoting healthy decisions, particularly when they focus on proximal determinants such as knowledge, attitudes, intentions, and skill practice, and are developed using user-centered design approaches. Despite this evidence, many serious games fail to explicitly connect theoretical constructs to concrete design decisions, resulting in experiences that are engaging but not impactful, or scientifically grounded but not enjoyable.
To address this challenge, our team developed the Game Playbook, a structured yet flexible framework that serves as a living document to guide interdisciplinary teams through the design and development of serious games. The Game Playbook emerged from more than 15 years of developing, evaluating, and implementing serious game interventions across health, education, and clinical contexts. It provides a shared structure and language that allows researchers, clinicians, designers, and developers to align around intervention goals, theory, gameplay mechanics, and evaluation strategies.
The Game Playbook integrates elements of health behavior intervention manuals, logic models, and game design documents into a single artifact. Rather than treating theory as background justification, the framework requires teams to explicitly map learning goals, transformation goals, and behavior change constructs to specific gameplay mechanics and systems, supporting intentional design and stronger alignment between design and evaluation.
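To make the idea of explicit mapping concrete, here is a minimal, hypothetical sketch of how one Playbook-style entry could be represented as data; the field names and the example values are illustrative assumptions, not the Game Playbook's actual schema.

```typescript
// Hypothetical, simplified sketch of a Playbook-style mapping entry.
// Field names and example values are illustrative only, not the Game Playbook's actual schema.

interface PlaybookMapping {
  learningGoal: string;            // what the player should come to know
  transformationGoal: string;      // the intended change in the player
  behaviorChangeConstruct: string; // e.g., a construct such as self-efficacy
  gameplayMechanics: string[];     // concrete mechanics that operationalize the construct
  evaluationMeasure: string;       // how the construct will be assessed
}

const exampleEntry: PlaybookMapping = {
  learningGoal: "Recognize common peer-pressure situations",
  transformationGoal: "Increase intention to make healthy choices",
  behaviorChangeConstruct: "Refusal self-efficacy",
  gameplayMechanics: ["Branching dialogue scenarios", "Skill-practice mini-game"],
  evaluationMeasure: "Pre/post self-efficacy scale",
};
```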
This tutorial provides a guided, hands-on introduction to the Game Playbook and its application to serious game and immersive experience development. Attendees will learn about the core sections of the Game Playbook using examples from published serious games. Next, attendees will begin creating a Game Playbook for a serious game or immersive experience they are currently developing or plan to develop.
Attendees will leave with the initial foundation of their own Game Playbook and a clearer understanding of how to design serious games that are both engaging and effective. This tutorial is intended for researchers, designers, developers, and students working in VR and immersive technologies who seek a structured, evidence-informed approach to serious game development.
- Tutorial 03: Rapid VR Prototyping for Academia and Industry: A Tutorial on Building Interactive Experiences with Godot and the XR Tools Framework
- Organizers
- Marcos Melo - AKCIT-IMD, Digital Metropolis Institute, Federal University of Rio Grande do Norte, Natal, RN, Brazil
João Nobrega - AKCIT-IMD, Digital Metropolis Institute, Federal University of Rio Grande do Norte, Natal, RN, Brazil
Alyson Souza, PhD - AKCIT-IMD, Digital Metropolis Institute, Federal University of Rio Grande do Norte, Natal, RN, Brazil
- Target Audience
- Academic Researchers and Students; Industry Professionals and Indie Developers; Beginners in VR Development.
- Summary
- In virtual reality development, various options exist for creating robust applications; however, many of them are complex or require high-end hardware to function correctly. This often creates a bottleneck when developing simpler applications, such as early-stage prototypes. For this reason, a lighter and more accessible solution is needed. This is where Godot stands out. Godot is an open-source game engine that has gained significant popularity and relevance over the past few years. Owing to its intuitive interface, native programming language, and strong ecosystem of community-created plugins—such as Godot XR Tools—the engine provides an ideal environment for prototyping and testing ideas in the field of Virtual Reality.
This has strong relevance for the community, as Godot’s simplicity and open-source license lower the barriers to VR development, allowing academic researchers, students, industry professionals, indie developers, and beginners in VR development to quickly create, test, and experiment with immersive experiences.
With this in mind, this tutorial provides a practical guide to rapid prototyping of VR experiences using the Godot Engine and the Godot XR Tools plugin. The presentation covers the fundamentals of Godot’s structure; the installation of the Godot XR Tools plugin and the configuration required to enable XR development in Godot; the application of the plugin’s ready-to-use locomotion and interaction systems; the creation of simple user interfaces for VR applications; and the implementation of a complete VR mini-game.
All concepts are presented through a modern, efficient prototyping workflow that equips attendees with practical knowledge and skills for rapid iteration and early validation of VR ideas.
All tutorial materials are available at: https://drive.google.com/drive/folders/1wP7m-guXH5YgC8ETLwSsjFBTDv8p-dDN?usp=sharing
- Tutorial 04: Designing Authorable Mixed Reality Systems: Scalability, Interaction Patterns, and Lessons from Real-World Deployments
- Organizers
- Yu-Chiao (Chiao) Lin - Mixel Studio, Inc.; Research Collaborator, University of Michigan, USA
Anhua Wu - Mixel Studio, Inc.; Research Collaborator, University of Michigan, USA
- Target Audience
- XR system architects seeking scalable design patterns; HCI researchers interested in end-user development for spatial computing; Educational technologists and applied researchers evaluating XR systems in institutional contexts
- Summary
- This tutorial combines lectures and guided discussions to share transferable lessons from the design of an easy-to-use mixed reality (MR) authoring tool for non-technical users, grounded in iterative user testing and real-world deployments in K–12 classrooms and after-school program workshops. The first part introduces the central tension between user agency and operational friction in real-world MR adoption, highlighting how scalability in practice depends on user experience and workflow, not only on system performance. The second part is organized around two design directions, time-to-adapt and confidence-to-edit, illustrated through a set of distilled approaches that span spatial slide sequencing, anchored reference objects, template-first flows, and lightweight web-based preview. The third part focuses on orchestration and deployment, positioning shared visual grounding (shared content and state alignment across participants) as a core design goal for desirable XR experiences. It examines how shared session grounding can be supported through state synchronization under connectivity constraints, deployment configurations for facilitated learning, and low-friction session entry mechanisms. The tutorial concludes with a guided discussion on how AI-augmented workflows might responsibly extend authorable MR systems while preserving reliability and deployability. Participants will gain a grounded set of system-level considerations, concrete implementation examples, and practical guidance for designing MR platforms that are more likely to function reliably beyond demos and pilot studies to become repeatable practice.
- Tutorial 05: AI-Driven Immersive Characters in the Era of LLMs: Prompting, RAG, Evaluation, and Guardrails for XR
- Organizers
- Rafael Sousa, PhD - Advanced Knowledge Center for Immersive Technologies (AKCIT), Federal University of Mato Grosso (UFMT), Brazil
Elisa Ayumi Oliveira - Advanced Knowledge Center for Immersive Technologies (AKCIT), Federal University of Goiás (UFG), Brazil
Julia Dollis - Advanced Knowledge Center for Immersive Technologies (AKCIT), Federal University of Goiás (UFG), Brazil
Gustavo Webster - Advanced Knowledge Center for Immersive Technologies (AKCIT), Brazil
- Target Audience
- XR researchers, developers, and designers building interactive characters/agents; ML/LLM practitioners interested in embodied agents; Unity/Unreal engineers integrating AI pipelines. Intermediate technical level (basic LLM + software background assumed).
- Summary
- Large Language Models are increasingly used to drive interactive characters in VR/AR/MR, but XR deployments introduce additional constraints related to real-time interaction, world grounding, embodiment control, and safety. This 90-minute tutorial presents a structured overview of design and engineering approaches for building AI-driven immersive characters (e.g., NPCs, guides, companions, tutors) that remain controllable and robust under multi-turn interaction. The tutorial is organized into five parts. First, we describe an end-to-end XR character pipeline, including context assembly from user input and world state, memory management, tool/function calling, and output routing to dialogue and embodiment. Second, we review LLM fundamentals relevant to interactive systems (tokens, context windows, instruction tuning) and how they translate to latency budgets and state handling in XR. Third, we cover prompt engineering and few-shot patterns for character behavior, including prompt templates and structured outputs that support intent, emotion, and action cues. Fourth, we introduce retrieval-augmented generation (RAG) for grounding character responses in domain knowledge and in-world facts, including practical considerations such as chunking, metadata, and common failure modes. Fifth, we present evaluation practices for XR character pipelines, including RAGAS-style assessment of groundedness/faithfulness/relevance, lightweight regression testing for multi-turn scenarios, and guardrails for safety and persona boundaries. The tutorial closes with deployment considerations (API vs. local models, privacy, cost, latency) and a brief discussion of integration strategies for emotion/animation control signals.
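As a small illustration of the kind of structured output and prompt template discussed above (a hedged sketch; the exact schema is a per-project choice, not prescribed by the tutorial), an XR character response might be requested as JSON carrying intent, emotion, and action cues alongside the dialogue text:

```typescript
// Illustrative sketch of a structured character response and a prompt template that
// requests it. Field names and the prompt wording are assumptions for this example.

interface CharacterTurn {
  dialogue: string;                           // what the character says aloud
  intent: "inform" | "ask" | "deflect";       // high-level conversational intent
  emotion: "neutral" | "happy" | "concerned"; // drives facial/animation cues
  action?: string;                            // optional in-world action, e.g. "point_to_exhibit"
}

// A minimal prompt template combining persona, retrieved facts (RAG), and an output schema.
function buildPrompt(persona: string, retrievedFacts: string[], userUtterance: string): string {
  return [
    `You are ${persona}. Stay in character and only use the facts provided.`,
    `Facts:\n${retrievedFacts.map((f) => `- ${f}`).join("\n")}`,
    `Respond ONLY with JSON matching: {"dialogue": string, "intent": string, "emotion": string, "action"?: string}`,
    `Example: {"dialogue": "Welcome back!", "intent": "inform", "emotion": "happy"}`,
    `User: ${userUtterance}`,
  ].join("\n\n");
}
```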
Expected benefits to attendees: Attendees will gain a reference pipeline for XR characters, practical prompting and RAG design patterns, an evaluation workflow suitable for iterative XR development, and a set of deployment and safety considerations tailored to immersive interactive systems.
- Tutorial 06: Ubiq and Ubiq-Genie: Integrating Generative AI into Social eXtended Reality
- Organizers
- Anthony Steed - University College London
Ruijun (Phoenix) Sun - University College London
Nels Numan - University College London
Daniele Giunchi - University of Birmingham
- Target Audience
- The tutorial will be of value to any student, researcher or professional who wants to develop their own SXR application and is interested in using AI systems. They probably want to get a grounding in what the challenges are in building more complex systems. Some experience with Unity, C# and Python would be useful but not necessary as the scenarios will be walkthroughs of the components and tools rather than explicit code. Any code will be shared in an open repository. There will be additional online video tutorials and documentation if participants want to explore specific features.
- Summary
- One of the most promising applications of extended reality (XR) technologies is remote collaboration. Social XR applications range from small multiplayer games to large-scale virtual conferences with hundreds of attendees. At the same time, there is a lot of interest in integrating artificial intelligence (AI)-based algorithms into XR scenes.
This tutorial will guide participants in building their own social XR systems using Ubiq and then integrating AI systems with Ubiq-Genie. Ubiq is an open-source framework for developing collaborative virtual environments (CVEs). Ubiq-Genie is an open-source framework that integrates with CVEs supported by Ubiq. Both frameworks were developed at University College London, drawing on decades of practical experience with XR systems.
The session will begin with fundamental concepts before providing an overview of Ubiq. The main goal is to give an overview of the challenges of doing research and experimentation with social XR, and of how Ubiq solves problems that developers encounter with commercial platforms. We will then discuss how to integrate Ubiq-Genie-based services, with the aim of giving participants strategies for integrating AI into scenes using distributed processes rather than monolithic applications. The target audience is students, researchers and teachers who are interested in using AI together with social XR in their teaching and research.
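As a rough, framework-agnostic sketch of the distributed-process idea (this is not Ubiq or Ubiq-Genie's actual API; names and message shapes are hypothetical), an external AI service can run as its own process and exchange small JSON messages with the XR scene over a socket:

```typescript
// Framework-agnostic sketch of the distributed-process pattern: the AI service runs
// outside the XR application and communicates via small JSON messages.
// This is NOT the Ubiq/Ubiq-Genie API; names and message shapes are hypothetical.

import { WebSocketServer, WebSocket } from "ws";

interface SceneRequest { roomId: string; speaker: string; text: string; }
interface ServiceReply { roomId: string; responseText: string; }

const wss = new WebSocketServer({ port: 8081 });

wss.on("connection", (socket: WebSocket) => {
  socket.on("message", async (raw) => {
    const request = JSON.parse(raw.toString()) as SceneRequest;
    // Call whatever generative model this service wraps (stubbed here).
    const responseText = `You said: "${request.text}"`;
    const reply: ServiceReply = { roomId: request.roomId, responseText };
    socket.send(JSON.stringify(reply));
  });
});
```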
https://ubiq.online/blog/ubiq-and-ubiq-genie-tutorial-at-ieee-vr-2026/
- Tutorial 07: Designing Presence: Artistic Methods in Narrative XR
- Organizers
- Dr. Almira Osmanovic Thunström, PhD - University of Gothenburg; Sahlgrenska University Hospital – Center for Digital Health; Chalmers Industriteknik
- Target Audience
- Researchers, students, and artists who are beginners in XR, with no prior technical or XR development experience required.
- Summary
- This interactive tutorial introduces artistic methods for designing immersive narratives in XR by shifting the focus from cinematic storytelling to embodied, presence-driven experiences grounded in cognitive science. Participants explore how XR functions as an experiential medium—closer to installation and performance art than film—and learn a practical artistic framework for shaping perception, attention, and meaning in virtual, augmented, and mixed reality. Through a combination of theory and hands-on practice, attendees design a short immersive narrative moment that emphasizes sensory engagement, participant agency, and experiential transformation, without the need for coding or technical tools.
- Primary Topic and Objectives: The primary topic of this tutorial is the use of artistic and cognitive-science-informed methods to design narrative XR experiences that foster presence and immersion.
- Relevance to the IEEE VR Community: This tutorial directly addresses core IEEE VR concerns such as presence, embodiment, perception, and user experience in VR, AR, and MR systems. It bridges artistic practice with XR research by grounding creative methods in cognitive science and perceptual design, offering researchers and practitioners alternative methodologies for narrative XR beyond cinematic paradigms. The tutorial is relevant to VR/AR research, immersive media design, human-centered XR methodologies, and applied XR production across academic, cultural, and industry contexts.
- Expected Benefits to Attendees: Attendees will gain:
- A new conceptual framework for understanding and designing presence in XR
- Practical artistic tools for guiding attention, emotion, and embodied experience
- Hands-on experience designing XR narratives without requiring coding or technical expertise
- Transferable methods applicable to creative, educational, cultural, healthcare, and commercial XR projects
- A repeatable design approach grounded in cognitive science and artistic practice
- Requirements: Participants should bring:
- A smartphone capable of recording video or audio
- A notebook and pen
- Tutorial Website / Materials: https://www.youtube.com/watch?v=EsnNEdFqMwc
- Tutorial 08: Building Local Multi-User VR Systems in Unity Using a Lightweight TypeScript API
- Organizers
- Stefane Orichuela - AKCIT-IMD, Federal University of Rio Grande do Norte - UFRN
Alyson Souza - AKCIT-IMD, Federal University of Rio Grande do Norte – UFRN
- Target Audience
- This tutorial is intended for researchers, graduate and advanced undergraduate students, developers, and practitioners working with Virtual Reality (VR), Augmented Reality (AR), or Extended Reality (XR) systems. It is particularly relevant for participants with prior experience in Unity and an interest in multi-user or collaborative VR environments. The tutorial does not require deep knowledge of networking or distributed systems, but basic familiarity with C# and general programming concepts is expected. This tutorial will benefit attendees involved in research laboratories, academic projects, educational platforms, museums, and experimental VR applications who seek a lightweight and flexible approach to multi-user VR system design.
- Summary
- Building Local Multi-User VR Systems in Unity Using a Lightweight TypeScript API presents a practical and conceptual approach to designing local multi-user virtual reality (VR) systems by integrating Unity with a lightweight, custom TypeScript-based backend. The primary objective of this tutorial is to enable participants to understand and implement real-time multi-user VR architectures without relying on heavy or opaque networking frameworks, emphasizing architectural clarity, flexibility, and direct control over synchronization mechanisms.
The tutorial is highly relevant to the IEEE VR community as it addresses core challenges in VR and XR research and practice, including real-time data exchange, multi-user interaction, embodiment through head and hand tracking, and consistency in shared virtual environments. By adopting a decoupled client–server model using WebSocket communication, the tutorial aligns with research methodologies focused on experimental VR systems, rapid prototyping, and educational or research-driven applications where adaptability and system transparency are critical.
Participants will gain hands-on experience in building a lightweight backend in TypeScript, defining data models for multi-user interaction, and synchronizing user states between multiple VR clients in Unity. The tutorial covers essential techniques such as JSON deserialization, global state management, player representation, and synchronization strategies suitable for local multi-user VR scenarios. These skills are directly applicable to research laboratories, academic projects, museums, collaborative VR environments, and experimental XR applications.
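To make the approach concrete, a minimal sketch of such a lightweight backend might look like the following; the message format and names are illustrative assumptions, not the tutorial's actual code:

```typescript
// Minimal illustrative sketch of a local multi-user VR backend in TypeScript.
// Each Unity client sends its head/hand poses as JSON; the server keeps a global
// state and broadcasts a snapshot to all connected clients. Not the tutorial's code.

import { WebSocketServer, WebSocket } from "ws";

interface Pose { position: [number, number, number]; rotation: [number, number, number, number]; }
interface PlayerState { id: string; head: Pose; leftHand: Pose; rightHand: Pose; }

const players = new Map<string, PlayerState>();   // global state: one entry per player
const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket: WebSocket) => {
  socket.on("message", (raw) => {
    const state = JSON.parse(raw.toString()) as PlayerState;  // JSON deserialization
    players.set(state.id, state);

    // Broadcast the full player list so every Unity client can update its remote avatars.
    const snapshot = JSON.stringify(Array.from(players.values()));
    for (const client of wss.clients) {
      if (client.readyState === WebSocket.OPEN) client.send(snapshot);
    }
  });
});
```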
Beyond technical implementation, the tutorial provides attendees with a deeper architectural perspective on multi-user VR system design, allowing them to critically evaluate existing networking solutions and adapt lightweight approaches to their own research or development contexts. By the end of the tutorial, participants will be equipped with both practical tools and conceptual frameworks to design, prototype, and extend multi-user VR systems tailored to their specific experimental or educational needs.
- Tutorial 09: Emerging Reviewing and Publication Models to Promote Trustworthy Research and Support Scientific Career Advancement
- Organizers
- J. Edward Swan II, Mississippi State University
- Target Audience
- This tutorial is potentially relevant for all conference attendees, and does not require technical knowledge beyond general scientific training.
- Summary
- Is anyone actually happy with the way that peer review is currently implemented? The problems are many: increasing requests to perform reviews, greater difficulties recruiting reviewers, and many unhelpful reviews. Generally, everyone is burned out and unhappy. And on top of this is the specter of AI cranking out endless fraudulent papers.
However, there are reasons for optimism. Many research communities, including our own, are experimenting with new reviewing and publication ideas and models. This tutorial discusses the social context that mediates scientific communication, including reviewing practices for scientific papers and proposals, and how these practices motivate researchers as they seek career success. The tutorial surveys current and emerging thinking on how this context might be optimally tuned to produce scientific results that are trustworthy and lead to scientific career advancement.
Specific topics to be covered include:
(1) A brief history of scholarly publishing and reviewing models.
(2) Paper-based publishing models and their negative consequences.
(3) An examination of our current reviewing model.
(4) Alternative reviewing models, including independent reviewer scoring, single-blind versus double-blind reviewing, non-anonymous reviews (e.g. the Frontiers journal), public reviews (e.g., many journals in the Nature portfolio), and signed reviews (e.g., the F1000Research journal).
(5) Alternative publication models, including: archive-and-revise (e.g., arXiv), open access and related digital publication categories, and the perceived importance criterion and the PLOS One model.
(6) Even more radical reviewing and publication models, including: reproducible research, p-hacking, and preregistered research plans; peer reviewed pre-registered plans and author incentives; multiple review stages; combining human peer review with AI automated peer review; and credibility through evidence of research lifecycle evaluation.
(7) The potential positive consequences of emerging review and publication models.
(8) Some reasons for optimism and, from junior to senior community members, calls for action.
- Tutorial 10: Advanced AI integrations into Unity with Embardiment
- Organizers
- Albert Hwang - Google, Blended Interaction Research & Devices (BIRD) Lab
Riccardo Bovo - Imperial College London
Eric J Gonzalez - Google, Blended Interaction Research & Devices (BIRD) Lab
Mar Gonzalez-Franco - Google, Blended Interaction Research & Devices (BIRD) Lab
- Target Audience
- Beginner to Intermediate
- Summary
- We introduce Embardiment, a beginner-friendly, open-source Unity package developed by Google's BIRD Lab that bridges the gap between spatial computing and advanced AI, enabling rapid prototyping of intelligent XR experiences. This tutorial is designed for XR researchers, designers, and developers who have basic familiarity with Unity and are interested in incorporating artificial intelligence into their spatial computing prototypes. No prior machine learning or AI experience is required.
We will provide a practical roadmap for utilizing Google's suite of AI services directly within the Unity environment. Services include Large Language Models (e.g., Gemini), Automatic Speech Recognition (ASR), Optical Character Recognition (OCR), and Text-To-Speech (TTS).
This hands-on session will cover topics such as:
- Setup and Configuration: Set up the Embardiment package, review the plugin architecture, and acquire necessary API keys.
- Integrate Core AI Services: Employ Embardiment’s simple prefab system to unlock the use of AI sensing (OCR, ASR) and AI generation (LLM, TTS) both within compiled Unity applications and live in the Unity editor.
- Navigate Model Trade-offs: Understand and manage the choices between various cloud-based and on-device models, considering factors like data handling, output quality, latency, network requirements, and platform availability.
- Chain AI for Complex Interactions: Explore techniques for combining multiple AI services to produce sophisticated behaviors, and discuss best practices for working with the package's external service dependencies and asynchronous architecture (a rough sketch of the chaining pattern follows below).
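To illustrate the chaining idea in a language-neutral way (Embardiment itself is a Unity package; the functions below are hypothetical stand-ins, not its API), a speech-in, speech-out loop amounts to awaiting one service's output as the next service's input:

```typescript
// Hypothetical stand-in functions illustrating AI-service chaining (ASR -> LLM -> TTS).
// These are NOT Embardiment APIs; each would wrap whatever cloud or on-device service you use.

declare function recognizeSpeech(audio: ArrayBuffer): Promise<string>;   // ASR
declare function generateReply(prompt: string): Promise<string>;         // LLM
declare function synthesizeSpeech(text: string): Promise<ArrayBuffer>;   // TTS

// Chain the services: with an asynchronous architecture, each stage simply awaits the previous one.
async function voiceAssistantTurn(userAudio: ArrayBuffer): Promise<ArrayBuffer> {
  const transcript = await recognizeSpeech(userAudio);
  const reply = await generateReply(`The user said: "${transcript}". Respond briefly.`);
  return synthesizeSpeech(reply);
}
```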
Day: Sunday, March 22 (Timezone: Korea Standard Time, UTC/GMT +9 hours)
- Tutorial 11: Interaction Design for eXtended Reality (ID4XR)
- Organizers
- Mark Billinghurst, PhD – Adelaide University, Australia; University of Auckland, New Zealand
Joaquim Jorge, PhD - University of Lisboa, Portugal
- Target Audience
- This tutorial assumes no prior knowledge and so is suitable for beginners, while at the same time offering depth and structure valuable to advanced XR practitioners seeking to formalize or improve their design process.
- Summary
- The rapid growth of Extended Reality (XR) technologies has revolutionized how users interact with digital environments; however, designing XR systems can be very challenging. The objective of this half-day tutorial is to teach how Interaction Design principles can be adapted to create intuitive, engaging, and inclusive XR experiences.
The tutorial will review Interaction Design techniques focusing on design and prototyping methods for XR. It will provide practical techniques for user needs analysis, designing and prototyping user-centered XR systems, and evaluating systems that have been built. In addition, there will be case studies and demonstrations of the latest tools to illustrate best practices. Finally, there will be a discussion of emerging directions and opportunities in the field of design for XR, such as the latest Interaction Design research trends and useful AI tools.
The tutorial aims to equip attendees with the techniques and insights to rapidly design and prototype XR experiences for research and real-world applications. This includes pointers to the latest software and tools for XR prototyping, and resources for gaining further knowledge in the field. As such the tutorial is extremely relevant to the XR community.
More information and tutorial resources are available at the website: https://sites.google.com/view/id4xr
- Tutorial 12: eXtended Reality and Rehabilitation: Technical Solutions, Regulations, and Challenges from Research Labs to Clinical Practice
- Organizers
- Manuela Chessa - University of Genoa, Italy
- Target Audience
- Students, academic researchers, and industry professionals interested in the transfer of XR solutions to clinical practice, with an intermediate level in VR/AR development.
- Summary
- This tutorial presents the key findings and challenges in translating VR/XR research into clinical practice, particularly in rehabilitation.
Indeed, interest in developing VR solutions for healthcare is continually growing, as evidenced by the increasing number of paper submissions to international conferences and journals, organized workshops, and dedicated conferences. However, the widespread adoption of such technologies in clinical settings remains limited. In parallel, another community, robotics, is facing the same challenges. Moreover, in this context, XR solutions may complement robotics, forming, together with Artificial Intelligence, a comprehensive ecosystem of digital allies for healthcare.
This tutorial will present practical examples of XR solutions for rehabilitation, also tied with robots and Artificial Intelligence, describing the specific challenges and the technical solutions adopted to address specific clinical aspects, along with an overview of the regulations and requirements for adopting such solutions in clinical practice.
As a specific use case, technical examples will be taken from the Fit For Medical Robotics Project, which aims to close the gap between the lab and real-world contexts by bringing together Italy's major research centers active in developing innovative robotics and XR solutions for healthcare and the main clinical centers and hospitals, with the goal of improving the clinical outcomes of physical rehabilitation and personal care treatments in a sustainable manner.
Starting from these specific use cases, take-home messages and general guidance on both the technical aspects of XR development and on regulations, user evaluations, and clinical trials will be discussed with the audience.
The complete program of the tutorial and all the material will be available at https://pilab.unige.it/XRRehabClinicalPractice
- Tutorial 13: A Practical Guide on Building Embodied Intelligent Virtual Agent for XR Research and Applications
- Organizers
- Frank Steinicke - Human-Computer Interaction Group, Hamburg University, Germany
Ke Li - Human-Computer Interaction Group, Hamburg University, Germany
Sebastian Rings - Human-Computer Interaction Group, Hamburg University, Germany
Kangsoo Kim - University of Calgary
Susanne Schmidt - HIT Lab NZ, University of Canterbury, Christchurch, New Zealand
- Target Audience
- This tutorial is targeted at all IEEE VR attendees interested in embodied intelligent virtual agents (IVAs) for XR.
- Summary
- Intelligent Virtual Agents (IVAs), which leverage advancements in Large Language Models (LLMs) and Visual Language Models (VLMs) to simulate human-like behavior, represent a significant opportunity for creating natural and engaging human-AI interactions within immersive extended reality (XR) environments. As interest in anthropomorphic embodied IVAs rapidly grows across research and application domains, this tutorial offers an opportunity to learn the essential theories, frameworks, and practical skills needed to initiate XR research around embodied IVAs.
The tutorial commences with a comprehensive overview summarizing the latest developments in IVAs for XR and established theoretical frameworks, including social presence, agency, human-AI collaboration, and AI ethics. This theoretical foundation is followed by a technical discussion on the current landscape of LLMs and VLMs, focusing on their specific capabilities and potential applications within XR environments.
A core component involves participants gaining hands-on experience in building a conversational embodied IVA. This practical session utilizes the open-source IVA Unity toolkit, Microsoft Rocketbox characters, and Gemini’s real-time streaming API, demonstrating practical implementation steps and addressing real-time performance challenges.
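As a language-neutral illustration of one real-time performance tactic (a hedged sketch; the streaming and speech functions below are hypothetical stand-ins, not the toolkit's or Gemini's actual API), an embodied agent can start speaking sentence-by-sentence while the model response is still streaming, which keeps perceived latency low:

```typescript
// Hypothetical sketch of consuming a streaming model response for an embodied agent.
// Speaking sentence-by-sentence as chunks arrive reduces perceived latency.
// The streaming source and the speak() call are stand-ins, not a specific vendor API.

declare function streamModelResponse(prompt: string): AsyncIterable<string>; // yields text chunks
declare function speak(sentence: string): Promise<void>;                     // drives TTS and lip sync

async function respondIncrementally(prompt: string): Promise<void> {
  let buffer = "";
  for await (const chunk of streamModelResponse(prompt)) {
    buffer += chunk;
    // Flush complete sentences to the avatar as soon as they are available.
    const sentences = buffer.split(/(?<=[.!?])\s+/);
    buffer = sentences.pop() ?? "";                 // keep the trailing partial sentence
    for (const sentence of sentences) await speak(sentence);
  }
  if (buffer.trim().length > 0) await speak(buffer); // flush any remainder
}
```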
This tutorial is specifically targeted at all IEEE VR attendees interested in embodied Intelligent Virtual Agents (IVAs) for XR. Designed to be highly accessible, it welcomes participants with varying technical backgrounds. For beginners, the tutorial introduces core theoretical concepts like social presence and agency while demonstrating how to create and deploy simple conversational agents in XR. For more technical attendees, it offers valuable hands-on integration using industry-leading open-source toolkits and APIs, bridging the existing gap in practical guidance for building IVAs specifically tailored for XR applications.
- Tutorial 14: VR Research at Scale with the Virtual Experience Research Accelerator (VERA) – An Early-Access Hands-On Tutorial
- Organizers
- Dr. Ali Haskins Lisle - University of Central Florida
Corey Clements - University of Central Florida
Dr. John T. Murray - University of Central Florida
Dr. Gerd Bruder - University of Central Florida
Chloe Beato - University of Central Florida
Dr. Gregory Welch - University of Central Florida
- Target Audience
- Intermediate and up
- Summary
- There are fundamental challenges that VR researchers face when carrying out traditional lab-based VR experiments, including the practical inability to carry out experiments with large N samples, e.g., hundreds to thousands of participants; the time required to run experiments one person at a time; and the lack of diverse and varied participant populations to draw from.
The Virtual Experience Research Accelerator (VERA) is a human-machine system that combines and extends aspects of distributed lab-based studies, online studies, research panels, and crowdsourcing, into a unified system for carrying out XR-based human subject research. It seeks to achieve unprecedented scale, speed, and control over the demographics of human subjects experiments.
VERA consists of a web application paired with a Unity package, each aimed at providing a streamlined process for designing and distributing virtual experiments. By utilizing the VERA platform, researchers can distribute their experiments remotely, ensuring larger sample sizes and greater demographic diversity, all while requiring little to no intervention from the researcher during the experiment’s duration.
During this tutorial, instructors will showcase the VERA suite of tools and the benefits they might provide to any prospective virtual reality researchers. The tutorial is designed for intermediate users who are interested in accelerating their virtual research. During the tutorial, there will be segments of hands-on development using the VERA tools.
Participants are encouraged to bring a simple experiment or virtual experience built in Unity 6000 or newer. For those who do not yet have an experiment, we will provide a guided, follow-along sandbox experiment to work with during the session. VERA is designed to support common experimental needs such as CSV data logging, in-VR survey deployment, condition and independent variable management, and trial flow control. Experiments or experiences that make use of these features will be especially well-suited to the tutorial. If available, participants are encouraged to bring a Meta Quest VR headset for hands-on testing. A limited number of communal headsets will be available for participants who do not bring their own.
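Purely as an illustration of two of the experimental needs listed above, and not the VERA API itself, condition assignment and CSV-style logging can be sketched as follows:

```typescript
// Generic sketch of two needs the tutorial lists: condition assignment and CSV-style
// data logging. This is NOT the VERA API; it only illustrates the underlying ideas.

interface TrialRecord { participantId: string; condition: string; trialIndex: number; responseTimeMs: number; }

const conditions = ["control", "treatment"];

// Simple counterbalancing: alternate conditions by participant enrollment order.
function assignCondition(enrollmentIndex: number): string {
  return conditions[enrollmentIndex % conditions.length];
}

// Flatten trial records into CSV rows for later analysis.
function toCsv(records: TrialRecord[]): string {
  const header = "participantId,condition,trialIndex,responseTimeMs";
  const rows = records.map((r) => `${r.participantId},${r.condition},${r.trialIndex},${r.responseTimeMs}`);
  return [header, ...rows].join("\n");
}
```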