
Keynote Speakers

George Drettakis: Monday 10th, 10:00 - 11:00 (Saint-Malo, France UTC+1), Room: Chateaubriand
Mavi Sanchez-Vives: Tuesday 11th, 10:00 - 10:30 (Saint-Malo, France UTC+1), Room: Chateaubriand
Maria Roussou: Tuesday 11th, 10:30 - 11:00 (Saint-Malo, France UTC+1), Room: Chateaubriand
Stefania Serafin: Wednesday 12th, 10:00 - 11:00 (Saint-Malo, France UTC+1), Room: Chateaubriand


George Drettakis
Research Director, Inria, France

The 3D Gaussian Splatting Adventure: Past, Present and Future
Monday 10th, 10:00 - 11:00 (Saint-Malo, France UTC+1)
Room: Chateaubriand

Abstract
Neural rendering has advanced at outstanding speed in recent years with the advent of Neural Radiance Fields (NeRFs), which are typically based on volumetric ray-marching. Last year, our group developed an alternative approach, 3D Gaussian Splatting, which offers better training performance, display speed, and visual quality, and has seen widespread adoption both academically and industrially. In this talk, we describe the 20+ year process leading to the development of this method and discuss some future directions. We will start with a short historical perspective on our work on image-based and neural rendering, outlining several developments that guided our thinking over the years. We then discuss a sequence of three point-based rasterization methods for novel view synthesis -- developed in the context of the ERC Advanced Grant FUNGRAPH -- that culminated in 3D Gaussian Splatting, emphasizing how we progressively overcame challenges as the research advanced. We first discuss differentiable point splatting and how we extended it in our first approach, which enhances points with neural features and optimizes geometry to correct reconstruction errors. We briefly review our second method, which handles highly reflective objects by using multi-layer perceptrons (MLPs) to learn the motion of reflections and to perform the final rendering of captured scenes. We then discuss 3D Gaussian Splatting itself, which provides high-quality real-time rendering for novel view synthesis using a novel 3D scene representation based on 3D Gaussians and fast GPU rasterization. We will conclude with a discussion of future directions for 3D Gaussian Splatting, drawing on examples from recent work, and discuss how this work has influenced research and applications in Virtual Reality.

Bio
George Drettakis graduated in Computer Science from the University of Crete, Greece, and obtained an M.Sc. and a Ph.D. (1994) at the University of Toronto with E. Fiume. After an ERCIM postdoc in Grenoble, Barcelona, and Bonn, he obtained an Inria researcher position in Grenoble in 1995 and his Habilitation at the University of Grenoble (1999). He then founded the REVES research group at Inria Sophia-Antipolis and now heads the follow-up group GRAPHDECO. He is an Inria Senior Researcher (full professor equivalent). He received the Eurographics (EG) Outstanding Technical Contributions award in 2007 and the EG Distinguished Career Award in 2024, and is an EG fellow. He has received two prestigious ERC Advanced Grants, in 2018 and in 2024. He was an associate editor of ACM Trans. on Graphics, technical papers chair of SIGGRAPH Asia 2010, co-chair of the Eurographics IPC in 2002 & 2008, and chair of the ACM SIGGRAPH Papers Advisory Group and the EG working group on Rendering (EGSR). He has worked on many different topics in computer graphics, with an emphasis on rendering. He initially concentrated on lighting and shadow computation and subsequently worked on 3D audio, perceptually-driven algorithms, virtual reality, and 3D interaction. He has worked on textures, weathering, and perception for graphics, and in recent years has focused on novel-view synthesis, relighting, and material acquisition, often using deep learning methodologies.



Mavi Sanchez-Vives
Professor, IDIBAPS-Hospital Clinic, Spain

Virtual Reality for Pain Relief
Tuesday 11th, 10:00 - 10:30 (Saint-Malo, France UTC+1)
Room: Chateaubriand

Abstract
Virtual embodiment refers to the illusion that a temporally and spatially correlated virtual body is our own. In previous studies, we have demonstrated that owning and transforming a virtual body can modulate pain thresholds in healthy individuals. Furthermore, we have observed that altering aspects of a virtual body, such as skin color or body size, can significantly modulate pain perception in patients suffering from chronic pain of different causes (peripheral nerve injury, complex regional pain syndrome, fibromyalgia) and locations (shoulder, knee, arm, or back pain). Building on this foundation, our recent studies have focused on developing state-of-the-art, personalized virtual environments specifically designed for chronic low back pain. The prototype, tested in 100 patients, shows promising results in reducing pain and enhancing patient engagement. Additionally, we will discuss the potential of shared virtual environments for patient focus groups and therapy groups, examining their utility both in the development of interventions and in ongoing therapeutic support for chronic pain management. The integration of these immersive environments into patient care offers new avenues for personalized, scalable, and effective treatment approaches.

Bio
Mavi Sanchez-Vives, MD, PhD in Neurosciences, has been an ICREA Research Professor at the Institute of Biomedical Research August Pi i Sunyer (IDIBAPS-Hospital Clínic) since 2008, where she leads the Systems Neuroscience group. She completed her postdoctoral training at the Biophysics Laboratory of Rockefeller University and the Neurobiology Laboratory at Yale University (USA). Her main interests in neuroscience include the generation of brain rhythmic activity, brain states, neurotechnology, and neuromodulation. She has recently been awarded an ERC Synergy project, NEMESIS, on the mechanisms of brain lesions, and leads a European Innovation Council project called META BRAIN, investigating the use of different nanotechnologies for precision brain stimulation. Since 2004, Dr. Sanchez-Vives has also used virtual reality, initially for sensory perception research and to explore VR from a neuroscientific perspective. She has investigated embodiment in virtual bodies and its various applications, especially in medicine and psychology. She currently leads the project XR-PAIN: eXtended Reality-Assisted Therapy for Chronic Pain, funded by the European Media and Immersion Lab. She has led the use of virtual reality for the rehabilitation of violent behaviors in intimate partner violence, an approach now being used in rehabilitation programs in prisons in Catalonia. Dr. Sanchez-Vives is also one of the co-founders of Kiin, a company that uses virtual reality for immersive training to develop soft skills and enhance collaboration and well-being in organizations.



Maria Roussou
Associate Professor, National and Kapodistrian University of Athens, Greece

Reflecting on 25+ Years of Immersive Public Experiences
Tuesday 11th, 10:30 - 11:00 (Saint-Malo, France UTC+1)
Room: Chateaubriand

Abstract
Immersive VR (iVR) technologies have gradually transitioned from niche research tools to broadly accessible platforms for public engagement. As iVR becomes increasingly affordable and ubiquitous, it is time to reflect on its lasting impact. What elements of VR experiences persist after the initial novelty wears off? Can VR sustain long-term engagement in ways comparable to other pervasive technologies, such as mobile devices? This talk will explore the elements shaping public-facing VR applications. Drawing on over two decades of VR use in educational and cultural spaces, I aim to provide a nuanced reflection on the factors contributing to the long-term appeal and efficacy of iVR in non-commercial settings.

Bio
Maria is an Associate Professor in Interactive Systems at the Department of Informatics & Telecommunications, University of Athens. For most of the 90s, Maria studied, worked, and practically lived in the CAVE, first at the Electronic Visualization Laboratory in Chicago, designing virtual and digital media environments for educational and cultural purposes. Then (1998-2003), at the Foundation of the Hellenic World in Athens, she led the Virtual Reality Department, creating immersive projection-based VR exhibits and visitor experiences. In 2003, she co-founded makebelieve, an experience design and consulting company. Maria holds a PhD in Computer Science from the University of London (UCL), an MFA in Electronic Visualization and an MSc in Electrical Engineering & Computer Science from the University of Illinois at Chicago, and a BSc in Applied Informatics from the Athens University of Economics and Business. A Senior Member of the ACM, she serves as Vice-chair of the Greek ACM SIGCHI Chapter and the Greek ACM-W Chapter. She is the recipient of the 2013 Tartessos Award for her work in digital heritage and virtual archaeology. http://www.makebelieve.gr/mroussou



Stefania Serafin
Professor, Aalborg University, Denmark

Sound is All Around Us: Immersive Audio in the Age of Extended Reality
Wednesday 12th, 10:00 - 11:00 (Saint-Malo, France UTC+1)
Room: Chateaubriand

Abstract
Immersive audio has become a powerful tool for creating captivating, interactive experiences. This keynote explores the evolving role of sound in XR, from the foundational aspects of auditory feedback and spatial audio design to the personalized impact of Head-Related Transfer Function (HRTF) tuning and real-world use cases. By presenting some of our research at the Multisensory Experience Lab, spanning VR for hearing health, multisensory interaction design, and sound-assisted accessibility for individuals with different abilities, I will show how immersive audio enhances perception, heightens engagement, and fosters empathy.

Bio
Stefania Serafin is a Professor of Sonic Interaction Design at Aalborg University in Copenhagen, where she co-directs the Multi-Sensory Experience Lab with Rolf Nordahl. She is the President of the Sound and Music Computing Association and leader of the Nordic Sound and Music Computing Network. Her research focuses on sonic interaction design and sound in virtual and augmented reality, with an emphasis on health and cultural applications. She has been working at Aalborg University since 2003, when she completed her Ph.D. at Stanford University on "The Sound of Friction" under the supervision of Professor Julius Smith III. Stefania’s work integrates sound, technology, and human-centered design to enrich interactive experiences. Her recent publications and projects can be found here: tinyurl.com/35wjk3jn


©IEEE VR Conference 2025, Sponsored by the IEEE Computer Society and the Visualization and Graphics Technical Committee