Panels


 

Wednesday March 25th

 

14:00 - 15:30 | Panel 1: Social Interactions in Virtual Reality: Challenges and Potential

 

Organizers:
Laura Trutoiu, Oculus Research
 
Betty Mohler, Max Planck Institute for Biological Cybernetics
 
Overview:
This panel will discuss the current technical challenges of social interaction in VR as well as the potential for using virtual reality as a social interface. We will consider face-to-face interactions, representations of full-body avatars, and multi-user interaction. We will focus on the technical challenges, including tracking and animation, that currently limit social interactions in VR.
 
The immense sensitivity of human perception to biological motion, whether facial or full-body, drives the high quality requirements for any kind of animation in VR. In recent years, much progress has been made in facial animation and performance capture, mostly for feature animation and high-end game production, but little of it has made its way into virtual reality so far. If the real-time requirements can be met without sacrificing too much quality, VR research could benefit strongly from these advances in developing compelling multi-user social interactions.
 
The panelists are researchers leading efforts to design, implement, and facilitate the next generation of VR social interactions. Each panelist will give a brief presentation, 5-10 minutes, on the most pressing challenges for enabling VR social interaction from their perspective, following the topics below. We will also present successful applications that already use virtual reality to enable social interaction, and the innovations needed for these applications to reach a larger audience. The moderators will then start a discussion, followed by questions from the audience.
 
Faces
Laura Trutoiu, Oculus Research
Martin Breidt, Max Planck Institute for Biological Cybernetics
Faces provide a rich source of information, and compelling social interactions will require avatar faces to be expressive and emotive. Tracking the face within the constraints of an HMD and accurately animating facial expressions and speech raise both hardware and software challenges; real-time animation imposes an additional constraint. We will discuss early research in making facial animation within HMD constraints a reality. Facial analysis suitable for VR systems could not only provide the system with important non-verbal cues about human intent, but could also be the basis for sophisticated facial animation in VR. While believable facial synthesis is already very demanding, we believe that facial motion analysis under the constraints of an immersive real-time VR system is the main challenge to be solved.
 
Self-Avatars: Body Scans to Stylized Characters
Martin Breidt, Max Planck Institute for Biological Cybernetics
In VR, avatars are arguably the most natural paradigm for social interaction between humans. Immediately, the question arises of what such avatars should really look like. Although 3D scanning systems have become more widespread, a semi-realistic reproduction of the physical appearance of a human might not be the most effective choice; we argue that a certain amount of carefully controlled stylization of an avatar's appearance might not only help cope with the inherent limitations of immersive real-time VR systems, but also be more effective at achieving task-specific goals with such avatars.
 
Animation of Bodies and Identity
Betty Mohler, Max Planck Institute for Biological Cybernetics
The Space & Body Perception research group at the Max Planck Institute for Biological Cybernetics investigates how we perceive our own and others' body size, and how to create positive illusions of self and space. We have investigated the importance of body animation for effective communication between multiple users. Drawing on this research, we can discuss our experience with different motion capture technologies and body animation techniques, as well as insights into the importance of self-identification with a self-avatar for social interactions. Additionally, we are conducting research using high-resolution body scans to create self-avatars. We can further discuss the real-time challenges that lie ahead if a photo-realistic self-avatar is part of the virtual reality application.
 
Asymmetrical Interaction
Anthony Steed, University College London
The Virtual Environments and Computer Graphics group in the Department of Computer Science at University College London studies virtual reality systems, 3D interaction, tele-collaboration, novel interfaces, and networking for real-time graphics systems. Professor Steed will discuss asymmetrical interaction, covering both radically asymmetric settings (the BEAMING project) and earlier work showing that body tracking gives a person in a collaboration a social advantage over other interfaces.
 

 

 

Thursday March 26th

 

8:30 - 10:00 | Panel 2: The Resurgence of Open-Source Frameworks for VR

 

Organizer:
Yuval Boger, CEO, Sensics
 
Overview:
Open-source frameworks have been around for a while and have found significant traction in academic environments. One can think of VRPN, CaveLib, and VR Juggler as some of the classic frameworks, and FAAST, PUPIL, VRUI, and JuggLUA as some of the modern ones. What we are seeing now is a crossover from the academic to the commercial world. For instance, VRPN has been used in many commercial products (such as ART trackers), and FAAST/OpenNI helped push forward depth-based and gesture interaction. Google has provided an open SDK for Google Cardboard. OSVR, an open-source hardware and software framework, has also been launched by Sensics and Razer.
The panel will focus on:
  • What’s new in open-source VR.
  • Where open-source VR provides academic-to-commercial transition opportunities.
  • What’s missing in current open-source VR.
  • How commercial support for open-source projects helps or hurts the project.

 

Panelists:
 
Yuval Boger
Yuval Boger, CEO, Sensics. Since 2003, Sensics has been making professional-grade virtual reality goggles and other near-eye devices. Sensics products and technologies are deployed worldwide for a wide spectrum of training, medical, consumer and research applications. Sensics has partnered with Razer and other consumer electronics leaders in creating OSVR, an open-source virtual reality hardware and software framework.
Yuval received a Master's degree in Physics from Tel-Aviv University and an MBA from the J.L. Kellogg Graduate School of Management at Northwestern University. He lives in Maryland and, time permitting, enjoys playing in the Columbia Orchestra.
He shares his thoughts on all things VR at www.vrguy.net and can be reached at vrguy@sensics.com.
 
Bill Sherman
Bill Sherman has expertise in scientific visualization and virtual reality interfaces, often combining the two in research on immersive visualization. His twenty-five years of experience include the production of immersive visualization applications, the FreeVR open-source VR integration library, as well as visualization animations and interactive visualization tools. His current concentration is on open-source visualization tools for HPC and immersive interfaces, including interfaces to the ParaView and VTK toolkits from Kitware. Sherman hosts a three-day bootcamp on immersive visualization to help promote the good immersive software that is already available. He is co-author of two books on virtual reality ("Understanding Virtual Reality" and "Developing VR Applications"). Sherman was a member of the NCSA Visualization team and director of the immersive visualization team at the Desert Research Institute before joining the Indiana University Advanced Visualization Lab in 2009.
 
Sébastien Kuntz
Sébastien Kuntz is the founder and president of the virtual reality software company MiddleVR. He has 13 years of experience in VR, working on both tools and applications. At the French railways he implemented immersive VR training simulators. He then joined Virtools as the VR lead engineer, in charge of modifying the 3D engine to support any VR system. In 2010 he created the MiddleVR SDK, a plugin that simplifies the creation and deployment of VR applications, now adopted in multiple 3D engines such as Unity and Unreal.
 
Geoffrey Subileau
Geoffrey Subileau, Senior lead VR UX and R&D manager at Dassault Systemes’ iV Lab.
At the heart of the Dassault Systèmes’ Passion for Innovation Institute, the iV Lab explores the usage of emerging UX technologies by building and sharing original prototypes.
Prior to this position, he was a research engineer and co-author of two publications in the collaborative virtual environment team at Ecole des Mines de Nantes. In 2005, he joined Virtools, where he taught the Virtools VR Pack internationally and was VR project manager for a dozen industry leaders such as P&G, Panasonic, PSA, BMW and EADS. After Virtools was acquired by Dassault Systèmes, he joined the VR R&D team to work on the definition and development of the VR libraries included in 3DVIA Virtools, 3DVIA Studio and the 3D Experience platform.
He has been the treasurer of the “VR Geeks” association since its creation. He also co-founded a video game studio called “Pixel Potato” and created “PopPop!”, a game chosen by Intel to promote its RealSense depth camera. Geoffrey received a double Master’s degree in Virtual Reality Engineering and Interaction Design from Angers University and Ecole de Design de Nantes.
 

 

 

10:30 - 12:00 | Panel 3: Next Gen Evaluation of VR Interfaces

 

Organizer:
Rob Lindeman, Worcester Polytechnic Institute, USA
 
Overview:
Much of the work on evaluating the usability of VR systems over the past 15 years has focused on fairly low-level tasks, mainly based on Bowman et al.’s so-called “Big Five” basic tasks of object Selection, Manipulation, Navigation, Symbolic Input, and System Control. Some additions to this have been discussed, including a) adding Avatar Control, due to the emergence of low-cost body-part tracking systems (e.g., Kinect, Leap Motion), b) combining Selection and Manipulation into one, as they are so closely related, and c) splitting Navigation into two, Travel and Wayfinding, since many solutions exist for each of these individually.
Even with these tweaks, however, it could be argued that research into interaction has matured to the point that many viable solutions to each of these tasks exist, and that while we should not abandon this low-level research effort, greater impact could be made more rapidly by shifting focus to higher-level tasks and topics. Also, studying the low-level tasks in isolation ignores the fact that applications require users to physically and mentally switch between tasks; studying multiple low-level interface solutions together, along with the required transitions between them, is vital to user acceptance.
In this panel, we explore several possible lines of evaluation, in the hope of encouraging researchers and practitioners to think more ambitiously about designing and evaluating their systems. Some of the work can be classified as “Fielded Studies,” where VR has been introduced into traditional workflow settings (e.g., medical student training) and evaluation has focused on how results from such systems relate to traditional approaches. Another tack is to design and evaluate from a user experience (UX) perspective. One possible future use of VR, as posited in many works of popular fiction (e.g., Neuromancer, Ready Player One), is that we will spend most of our time wearing VR headsets and input gloves.
Well, why not try it now, using today’s technology? Long-term, multi-person exposure studies are now well within reach of most research budgets. In particular, gaming has driven VR-related technology advancements for more than a decade; however, it is only recently that VR researchers have begun to focus effort on formally designing for gaming experiences.
 
Panelists:
 
Rob Lindeman, Worcester Polytechnic Institute, USA
Rob is an Associate Professor in Computer Science, and Director of the Interactive Media & Game Development Program at Worcester Polytechnic Institute. He also directs the Human Interaction in Virtual Environments (HIVE) Lab, which focuses on immersive, multi-sensorial feedback systems for VR, AR, and gaming, as well as natural and non-fatiguing interaction.
 
Jonna Häkkilä, University of Lapland, Finland
Jonna is a professor of Industrial Design at the University of Lapland, Finland, and leads the user experience (UX) research team at the Center for Internet Excellence, University of Oulu, creating novel user interface and application concepts in mobile and ubiquitous computing, especially the mobile 3D internet. She is also an adjunct professor in HCI at the Department of Computer Science and Engineering, University of Oulu, and co-founder of the UX design house Soul4Design.
 
Ben Lok, University of Florida, USA
Ben is Professor and Director of the Digital Arts and Science Program in the Computer and Information Sciences and Engineering Department at the University of Florida. He is co-founder of Shadow Health, Inc., an educational software company. His research focuses on virtual humans and mixed reality in the areas of computer graphics, virtual environments, and HCI.  
 
Wendy Powell, University of Portsmouth, UK
Wendy conducts research and lectures in the Applications of VR, investigating the ways in which Virtual Environments modulate the action and perception of walking. She leads the iMove research group, with a particular interest in how VR hardware and software mediate the interactive experience, and how this information can be used to enhance the health and rehabilitation potential of VR Systems.  
 
Frank Steinicke, University of Hamburg, Germany
Frank is Professor of HCI at the Department of Informatics at the University of Hamburg. His research is driven by understanding human perceptual, cognitive and motor skills and limitations in order to reshape interaction, as well as the experience, in computer-mediated realities. He regularly serves as a panelist and speaker at major events in the area of VR and HCI.
 
Evan Suma, ICT, University of Southern California, USA
Evan is a Senior Research Associate in the MxR Lab at the Univ. of Southern California Institute for Creative Technologies. His research interests include the design and evaluation of natural interaction techniques, including the use of perceptual illusions that enable users to walk naturally through expansive virtual worlds in relatively small physical spaces, and video game applications for health, rehab, training, and education. 
 
Janina Woods, Ateo, Switzerland
Janina works in Zurich, where she is the game designer behind Shiny, a psychedelic neck trainer in digital outer space using the Oculus Rift, in which she tries to exploit fully the immersive quality of the VR headset. After her studies at the ZHdK, she got involved in Game Design at YouRehab, a leading provider of interactive therapy systems for patients with movement disorders, where she was responsible for all aspects of game design. 
 
 
 

Friday March 27th

 

10:15 - 11:45 | Panel 4: Where Is the Consumer VR Market Heading: Head-Mounted Displays or CAVE-like Experiences?

 

Organizer:
David Nahon, Dassault Systèmes Passion for Innovation Institute iVLab Director
 
Overview:
Affordable, wide-FOV, high-resolution, low-latency, 6DOF-tracked HMDs are coming to the masses, with wide communities of content developers ready to launch commercial titles. This could mark the end of VR as a niche professional/academic market, and the long-awaited democratization of VR.
 
Still, the history of VR has shown that CAVE-like displays are favored by most professionals who use VR on a daily, or even casual, basis. This is not only because HMDs have lacked usability until recent developments, but mostly because CAVE-like systems maintain a connection to reality, enabling social interaction, self-perception of the user’s body, and perception of the surroundings.
 
Are we going to see similar evolutions in the consumer market?
Will people eventually reject HMDs and prefer CAVE-like experiences, especially considering the growing market of new technical components:
  • ultra-short-throw projectors, which reduce the space requirements and many of the technical headaches of previous-generation systems,
  • all-in-one, ready-to-use immersive desks like zSpace, simple to deploy and potentially equally affordable if mass-produced,
  • fast and stable tracking devices such as depth cameras?
And what are parents, educators and health committees going to think of such experiences, so tightly coupled to the body and the user’s brain, potentially bypassing some of the user’s control and potentially toying with the brain’s plasticity?
 
Panelists:
 
David Nahon, Dassault Systèmes Passion for Innovation Institute iVLab Director http://www.3ds.com
Trained both as an engineer and an artist, David Nahon created the R&D activities of Z-A Production, a pioneering French 3D animation studio, in 1994. At the same time, he was involved in the development and direction of various interactive artworks by Maurice Benayoun, as well as managing the software development of the SAS Cube, the first PC-based CAVE worldwide and the first French CAVE. In 2003, David joined Virtools as a product manager for VR solutions. After Virtools’ acquisition by Dassault Systèmes in 2005, he led the Immersive Virtuality (iV) domain, acting horizontally on the emergence of new usages and markets in which the user’s body is highly coupled with the virtual.
David is now heading the iV Lab at Dassault Systèmes Passion for Innovation Institute.
 
Dr Daniel Mestre, Senior Researcher (National Center for Scientific Research, CNRS), Life Sciences, Head of the Mediterranean Virtual Reality Center
PhD in Psychology; HDR (accreditation to direct research) in Neurosciences.
Head of the Mediterranean Virtual Reality Center (http://www.crvm.eu)
Founding member of the French Association for Virtual Reality (http://www.af-rv.fr). Daniel Mestre has broad expertise in designing controlled psychophysical protocols. He is an expert in visual motion and optic flow perception. His research uses virtual reality as a tool for fundamental research in sensorimotor coordination and the understanding of presence. He is also involved in therapeutic and industrial applications of immersive virtual reality.
 
Dave Chavez, zSpace co-founder and CTO
David Chavez is the chief technology officer at virtual reality technology company, zSpace. He brings 25 years of experience in start-up companies, working with technologies ranging from GSM infrastructure to laptops, printers, PDAs and smartphones, in both consumer and commercial product spaces. He has managed product development teams through the full range of the product life cycle, from initial concept to volume production. At zSpace David led the product development of the company's first VR product, a desktop VR environment. He now leads technical direction and drives innovation.
 
Eric Krzeslo, SoftKinetic co-founder and chief marketing officer (CMO)
With more than 15 years’ experience in 3D and interactivity, Eric has co-founded and managed SoftKinetic’s development work since 2003. In his role as CMO, he is responsible for SoftKinetic’s business development, user experience, marketing and communication. SoftKinetic was born as an R&D project within his previous venture, 72dpi, a developer of 3D technologies for the web, broadcast and live interactive experiences in the digital communication industry. At the beginning of his career, Eric launched the 3D modeling and rendering studios of Belgium’s most famous architectural firm. Eric graduated from the Victor Horta Architecture Institute in Brussels.
 
Sébastien Kuntz, MiddleVR founder and CEO
Sébastien Kuntz is the founder and president of the virtual reality software company MiddleVR. He has 13 years of experience in VR, working on both tools and applications. At the French railways he implemented immersive VR training simulators. He then joined Virtools as the VR lead engineer, in charge of modifying the 3D engine to support any VR system. In 2010 he created the MiddleVR SDK, a plugin that simplifies the creation and deployment of VR applications, now adopted in multiple 3D engines such as Unity and Unreal.