
Panels

Where will extended reality and AI take us?

Wednesday 12th, 15:15 - 16:15 (Saint-Malo, France UTC+1)
Room: Chateaubriand

Presentation
This year, for the first time, IEEE VR will include a mixed reality panel, taking place simultaneously in VR and displayed live on the screen of the plenary room. Each panelist will be equipped with a VR headset and represented by a virtual body that resembles their own appearance. Alongside the human panelists, there will be a virtual agent representing a very well-known historic researcher who was central to the topic of the discussion, driven by a Large Language Model (LLM) and also represented by an avatar. This special guest will observe the panel discussion and occasionally comment through GPT-based speech generation. The panel will be in two stages. First, there will be a discussion in which the panelists share the same virtual environment but are potentially located in different physical places around the convention centre or on the plenary stage. Second, they will come together physically on stage to discuss the experience together with the audience.
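For the technically curious, below is a minimal sketch of how such an LLM-driven observer could be wired up, assuming the OpenAI Python SDK; the persona prompt, the transcript format, and the "reply or PASS" interjection heuristic are illustrative assumptions, not the panel's actual pipeline (which is built on the VR United tool described under Technical background).

# Minimal, hypothetical sketch of an LLM "guest" that observes a rolling
# transcript and occasionally interjects (not the panel's actual implementation).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONA = (
    "You are a virtual recreation of a well-known historic researcher "
    "attending a panel on the future of XR and AI. Stay in character, "
    "keep remarks short, and only comment when you have something substantive to add."
)

def maybe_comment(transcript_window: list[str]) -> str | None:
    """Ask the model whether the guest should interject; return the remark or None."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model choice is an assumption
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": (
                "Recent panel discussion:\n" + "\n".join(transcript_window)
                + "\n\nIf you have a short, relevant remark, reply with it. "
                  "Otherwise reply with exactly PASS."
            )},
        ],
    )
    text = response.choices[0].message.content.strip()
    return None if text == "PASS" else text

# Example: feed the last few transcribed utterances and voice any reply.
comment = maybe_comment([
    "Moderator: How will AI change how we build XR applications?",
    "Panelist: Generative models already help with scene modelling and code.",
])
if comment:
    # A text-to-speech step (e.g. GPT-based speech generation) and avatar
    # lip-sync would follow here before the remark is heard in the shared scene.
    print(comment)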
The scientific objective of the panel is to foster discussion about the future of AI and XR, how they are interrelated, and how they will influence one another. AI can be used to help construct XR applications, covering everything from design and modelling to programming, and it can also be used in real time while an application runs, as in the example we are demonstrating with this panel. The general idea is based on the goals of constructing and testing the technology for AI to be used in meetings. Every meeting has an objective (even if it is just to ‘hang out’ and have fun), and the idea is to explore the extent to which AI can help the meeting realise its objectives, especially when things go wrong, such as in cases of abuse or harassment.
The use of AI also raises fundamental ethical issues, especially considering the European Union’s AI Act. For example, will it even be legal to have virtual human characters controlled by an AI in XR applications outside of research? We look forward to this exciting panel, which will discuss issues relevant to all attendees of IEEE VR 2025!

Panelists

  • Sylvia Pan (Goldsmiths, University of London) - Moderator: is a Professor of Virtual Reality at Goldsmiths, University of London. She co-leads the SEEVR Lab (Social, Empathic, and Embodied VR), which includes over 10 academics and researchers. Her research interest is the use of Virtual Reality as a medium for real-time social interaction, in particular in the application areas of medical training and therapy. Her 2017 Coursera VR specialisation attracted over 100,000 learners globally, and she co-leads the MA/MSc in Virtual and Augmented Reality at Goldsmiths Computing.
  • Frank Steinicke (University of Hamburg): is a Professor of Human-Computer Interaction at the Department of Informatics at the Universität Hamburg. Before his current position, he was a professor of Computer Science in Media at the Department of Computer Science at the University of Würzburg and chair of the Immersive Media Group from 2011 to 2014. He studied Mathematics with a Minor in Computer Science at the University of Münster, from which he received his Ph.D. and Venia Legendi in Computer Science. His research interests are focused on understanding the human perceptual, cognitive, and motor abilities and limitations to improve interactions and experiences in computer-mediated realities. He received the IEEE VGTC Virtual Reality Technical Achievement Award in 2023 for his scientific contributions and was inducted into the prestigious IEEE VR Academy.
  • Masahiko Inami (University of Tokyo): is a Professor at the University of Tokyo, having previously worked at the University of Electro-Communications and Keio University. His interests include “JIZAI Body,” human augmentation, and entertainment engineering. He has received several awards, including TIME Magazine’s “Coolest Invention of the Year” award and the Young Scientist Award and Research Category Award from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT). He is also a director of the Information Processing Society of Japan, a director of the Virtual Reality Society of Japan, and a member of the Science Council of Japan. His latest book is “Theory of JIZAI Body” (Springer, 2023).
  • Mel Slater (University of Barcelona): is a Distinguished Investigator at the University of Barcelona in the Institute of Neurosciences, and co-Director of the Event Lab (Experimental Virtual Environments for Neuroscience and Technology). He was previously Professor of Virtual Environments at University College London in the Department of Computer Science. He has been involved in research in virtual reality since the early 1990s, and his work has concentrated on both technical developments in VR and contributions to the understanding of presence and the cognitive neuroscience of body ownership and agency. He has worked on clinical psychological applications, with many publications on paranoid ideation and public speaking anxiety. He can be contacted at melslater@ub.edu.
  • Rachel McDonnell (Trinity College Dublin): is a Professor of Creative Technologies at Trinity College Dublin and Head of the Graphics and Vision Lab, overseeing a team of 11 academics and more than 30 researchers. Her research interests include computer graphics, character animation, and virtual humans, with a particular focus on the perception of virtual characters. She investigates how factors such as lighting, appearance, and motion impact the uncanny valley effect and viewer responses. Additionally, she is an Associate Editor for several Computer Graphics journals, and regularly serves on the SIGGRAPH Technical Papers Committee. Her research on the perception of virtual humans has been featured in prominent media outlets, including BBC News, New Scientist, Kyodo News, and The Japan Times.
  • Victoria Interrante (University of Minnesota): is a Professor in the Department of Computer Science and Engineering at the University of Minnesota, where her current research broadly focuses on improving the human experience in virtual environments, through projects on topics such as mitigating cybersickness, improving well-being via immersion in virtual nature, and designing VR-based interventions to counter racial and other forms of bias. Dr. Interrante has been actively engaged with the IEEE VR community since the late 1990s, and was honored in 2020 with the IEEE VGTC VR Career Award for her lifetime contributions to visualization and visual perception for augmented and virtual reality.
  • The Guest: is an autonomous virtual being who has appeared in several past events, in different guises. Its physical appearance is arbitrary and can change from event to event, but its language understanding and responsive capabilities are linked to large language model development, in particular based on OpenAI’s ChatGPT.

Technical background
The panel relies on the VR United tool, developed by Dr Ramon Oliva in the Event Lab, Universitat de Barcelona, within the framework of the European project GuestXR.


©IEEE VR Conference 2025, Sponsored by the IEEE Computer Society and the Visualization and Graphics Technical Committee