
Panels




Virtual Social Interaction

Monday, March 20, 10:30am - 12:00pm

Social interaction is at the core of being human, but we have only a limited understanding of the brain and cognitive mechanisms behind how we socially engage with one another. Recent advancements in VR technology have lowered the barrier to VR development, making it accessible to researchers from both engineering and social science. On one hand, VR technologies can help psychologists measure, model, and generate synthetic versions of human social behaviour without sacrificing the validity of the interaction, allowing psychologists to develop and test new psychological theories of social interaction. On the other hand, the data gained could also be used to build more realistic and interactive social virtual worlds, and thus benefit VR researchers.

This panel brings together researchers in Virtual Reality and Social Cognition to discuss how Virtual Reality has been used in experiments to study, for instance, the influence of emotion on decision-making, the effect of facial identity on action recognition, and how others’ movements interfere with our own. Virtual Reality played a crucial role in these studies: it improved experimental control, lowered running costs compared to using a confederate, and raised minimal ethical concerns.

Panelists:

Dr. Xueni Pan (Organizer) is a Lecturer in Virtual Reality at Goldsmiths, University of London. She specialises in generating expressive virtual characters to create empathic social interactions that are immersive and engaging. Her research interests are the use of VR in the study of social cognition and in training applications. Having worked in both Computer Science and Cognitive Neuroscience, she has developed a unique interdisciplinary research profile with publications in both areas. Her work has been featured multiple times in the media, including on BBC Horizon and in New Scientist magazine.

Dr. Antonia Hamilton is a Reader in Social Neuroscience and leader of the Social Neuroscience group at the Institute of Cognitive Neuroscience (UCL). Her current research examines how and why people imitate each other, how social skills differ in people with autism, and the neural mechanisms of social interaction. These studies use virtual reality, motion capture, and brain imaging methods to both generate and understand real-world social behaviour, drawing on VR, machine learning, and traditional psychology experiments. This work is funded by a European Research Council Consolidator Grant, and Hamilton was awarded the Experimental Psychology Society Prize Lectureship for 2013.

Professor Anthony Steed is Head of the Virtual Environments and Computer Graphics (VECG) group at University College London. Prof. Steed’s research interests extend from virtual reality systems through to mobile mixed-reality systems, and from system development through to measures of user response to virtual content. He has worked extensively on systems for collaborative mixed reality. He is lead author of the textbook “Networked Graphics: Building Networked Games and Virtual Environments”. He was the recipient of the IEEE VGTC’s 2016 Virtual Reality Technical Achievement Award.

Professor Jonathan Gratch is the Director of Virtual Humans Research at the University of Southern California’s Institute for Creative Technologies and a professor of computer science and psychology. Gratch’s research focuses on computational models of human cognitive and social processes, especially emotion, and explores these models’ role in shaping interactions in virtual environments. He is the founding Editor-in-Chief of the IEEE Transactions on Affective Computing, Associate Editor of Emotion Review and JAAMAS, and former President of the Association for the Advancement of Affective Computing. He is an AAAI Fellow, a SIGART Autonomous Agents Award recipient, and a Senior Member of the IEEE.

Dr. Stephan de la Rosa is Project Leader of the Social and Spatial Cognition Group at the Max Planck Institute for Biological Cybernetics, Tübingen, Germany. He is a trained psychologist who has collaborated with many engineers and computer scientists as part of his interdisciplinary work. Dr. de la Rosa uses VR and mixed reality to examine social cognitive processes in situations that closely resemble real-life social interactions. His work focuses on examining the neural processes underlying the human ability to recognize faces, bodies, and actions.

Moderator:

Dr. Marco Gillies is a Senior Lecturer in Computing at Goldsmiths, University of London. His research centres on animated virtual characters, and particularly on social interaction between real and virtual humans in VR. He has worked on both simulating and sensing non-verbal social cues. He is particularly interested in how technology can handle the embodied and tacit nature of non-verbal communication and has pioneered the use of machine learning to create interactive virtual characters.


Virtual Reality and Neuroscience

Monday, March 20, 4:00pm - 5:15pm

Neuroscience and virtual reality are two rapidly growing fields that have seen major advances in the last few decades. Recent work to combine these two fields has been promising, and the integration of neuroscience with virtual reality is poised to disrupt the status quo in several areas such as healthcare, gaming, and research. In particular, major advances in neuroscience, which is the study of the structure and function of the nervous system, have been made through the advent of noninvasive brain imaging and stimulation, providing novel wearables to probe brain function during human activity. At the same time, huge leaps forward have occurred in the field of virtual reality (VR), which is the art, design, engineering, and science behind immersive experiences involving synthetic worlds, populated by virtual objects and virtual people. VR has expanded beyond specialized laboratory equipment to consumer products with affordable hardware and emerging content marketplaces that offer a wide variety of VR experiences. Neuroscience provides VR with a means of understanding how immersion in a virtual world can affect the brain and mind. VR provides neuroscience with new ways to test theories and concepts that have otherwise been impossible, such as the embodiment of different bodies or immersion in controlled, yet realistic, environments. A number of complex cognitive and perceptual phenomena could potentially be addressed through interdisciplinary research involving neuroscience and virtual reality, including: memory, spatial cognition, temporal perception, simulation sickness, presence, and redirected walking.

The panel will begin with short introductions of the participants by the moderator. The moderator will pose some prepared questions to the panelists and then quickly open up the discussion to questions from the audience. The panel format will avoid long introductory expositions from each of the panelists, which often fail to encourage audience participation. Potential topics to be discussed include those outlined above, from memory and spatial cognition to temporal perception, presence, and redirected walking.

Panelists:

Dr. Sook-Lei Liew (Organizer), PhD, OTR/L, is an assistant professor at the University of Southern California (USC), with joint appointments in occupational therapy, biokinesiology, and neurology, and the director of the Neural Plasticity and Neurorehabilitation Laboratory (http://npnl.usc.edu). Dr. Liew’s research focuses on helping people recover after stroke, using brain imaging, noninvasive brain stimulation, brain-computer interfaces, and virtual reality to probe and modulate brain activity and promote motor recovery. She received her bachelor’s degree from Rice University, her PhD from USC, and her postdoctoral training from the National Institutes of Health, and she has over 20 peer-reviewed publications and over 40 invited talks.

Dr. Tyler Ard focuses on neuroscientific data visualization, both inside and outside of virtual reality. Throughout the course of his PhD research investigating neural communication through neuroimaging, Dr. Ard developed a passion for improving comprehension of complex data through visualization. After receiving his PhD from Brown University, Dr. Ard further pursued scientific visualization as a postdoctoral researcher at the ICT Mixed Reality Lab at USC, and then as a Data Visualization Specialist at the USC Laboratory of Neuro Imaging, where he continues to create state-of-the-art neuroimaging data visualization techniques.

Dr. Mavi Sanchez-Vives, MD, PhD in Neurosciences, was a postdoctoral researcher at Rockefeller University and Yale University before establishing her own lab at the Neuroscience Institute in Alicante, Spain. She has been an ICREA Research Professor at IDIBAPS (Institute of Biomedical Research August Pi i Sunyer) in Barcelona since 2008, where she is the head of the Systems Neuroscience group and co-director of the Event Lab (Experimental Virtual Environments in Neuroscience and Technology). Her main interests are emergent cerebral cortex properties and body representation, studied using electrophysiology and virtual reality. She is Specialty Chief Editor of Frontiers in Systems Neuroscience and Associate Editor for Virtual Environments in Frontiers in Robotics and AI.

Dr. Mel Slater is an ICREA Research Professor at the University of Barcelona and part-time Professor of Virtual Environments at UCL. He has been involved in research in VR since the early 1990s. He was awarded the IEEE VR Career Award in 2005, and recently completed a European Research Council Advanced Grant on VR. He is Field Editor of Frontiers in Robotics and AI. He has contributed to the scientific study and technical developments of VR and to the use of VR in other fields, notably psychology and the cognitive neuroscience of how the brain represents the body.

Moderator:

Dr. David Krum is Associate Lab Director of the Mixed Reality Lab at the USC Institute for Creative Technologies. His research interests include human-computer interaction, virtual reality, and 3D interaction. His work combines an engineering approach of building technical artifacts with a scientific approach of experimentation and user evaluation. He is currently researching how virtual reality affects human memory, motor adaptation, and social behavior. Dr. Krum studied Engineering at Caltech and Computer Science at the University of Alabama in Huntsville. His PhD research at Georgia Tech focused on how wearable computers and 3D visualizations could enhance spatial cognition, that is, help individuals more quickly learn the structure of their surrounding environment.


Instantaneous Beaming to Distant Places – A Possible and Desirable Future?

Tuesday, March 21, 4:00pm - 5:15pm

We imagine a possible future in which, everywhere that people live, there will be humanoid robots for rent. Instead of physically visiting a remote place, people will have the option of going online, selecting a place, hiring such a robot, and then embodying it in order to carry out tasks and interact with local people there. If we think of the center of consciousness as the location at which we are aware of perceiving multisensory information that locates us in space, then effectively we transfer our consciousness into the body of the robot: a kind of ‘beaming’ of the self from one place to another. Beaming means instantaneous transportation of our body to a physically remote place. Today we can clearly beam digital representations of ourselves through video conferencing or collaborative virtual environments. However, it is also possible to be represented in physical form at a remote location by combining virtual reality and robotic telepresence/telexistence techniques.

This panel will explore how this technological setup can be achieved, give examples of its use, and also look to future developments. Anthony Steed will concentrate on collaboration using such beaming technologies and the implications of the asymmetric access to the common space that is supported. Susumu Tachi will introduce a telexistence avatar called TELESAR V, which enables a beamer to have a physical avatar body in a remote environment and to interact with that environment using the avatar body as his or her own. Doron Friedman will show how actions of the robot can be accomplished through a brain-computer interface rather than relying on motion capture. Greg Welch will look at some factors affecting the local (self) and remote (with others) presence of the beamer. Mel Slater will moderate the panel, introducing how the beaming concept has already been used in news reporting.
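To make the setup concrete, here is a minimal sketch of the control loop such a beaming system implies: the visitor’s tracked pose (or decoded BCI command) streams out to the remote robot, while the robot’s stereo video streams back to the visitor’s HMD. The Tracker, Robot, and HMD classes are hypothetical stand-ins for real hardware SDKs, not APIs from TELESAR V or any other system discussed by the panel.

```python
import time

# Hypothetical stand-ins for real hardware SDKs (all assumptions).
class Tracker:
    def body_pose(self):
        # Would return motion-capture joint angles, or a decoded BCI command.
        return {"head_yaw": 0.0, "head_pitch": 0.0}

class Robot:
    def set_pose(self, pose):
        pass  # would drive the remote robot's servos

    def stereo_camera_frame(self):
        return b"stereo-frame"  # would return the robot's camera images

class HMD:
    def display(self, frame):
        pass  # would render the remote view to the visitor's lenses

def beaming_tick(tracker, robot, hmd):
    """One iteration of the beaming loop: pose out, senses back."""
    robot.set_pose(tracker.body_pose())        # local -> remote
    hmd.display(robot.stereo_camera_frame())   # remote -> local

def run(hz=60, seconds=1.0):
    # A low, stable round-trip latency arguably matters more for the
    # sense of owning the remote body than raw image fidelity, so the
    # loop runs at a fixed rate.
    tracker, robot, hmd = Tracker(), Robot(), HMD()
    period = 1.0 / hz
    for _ in range(int(hz * seconds)):
        start = time.monotonic()
        beaming_tick(tracker, robot, hmd)
        time.sleep(max(0.0, period - (time.monotonic() - start)))

if __name__ == "__main__":
    run()
```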

Panelists:

Dr. Anthony Steed (Organizer) is Head of the Virtual Environments and Computer Graphics (VECG) group at University College London. Prof. Steed’s research interests extend from virtual reality systems through to mobile mixed-reality systems, and from system development through to measures of user response to virtual content. He has worked extensively on systems for collaborative mixed reality. He is lead author of the textbook “Networked Graphics: Building Networked Games and Virtual Environments”. He was the recipient of the IEEE VGTC’s 2016 Virtual Reality Technical Achievement Award.


Dr. Doron Friedman is the head of the Advanced Reality Lab (http://avl.idc.ac.il) at the Interdisciplinary Center (Israel). Dr. Friedman was a Postdoctoral Research Fellow in the Computer Science Department at University College London, where he carried out some of the earliest work on controlling virtual reality and avatars by ‘thought’ using brain-computer interfaces. Doron has a few dozen publications in peer-reviewed journals and conferences in the areas of brain-computer interfaces, human-computer interaction, and virtual reality. Doron is the co-author of several patent applications and has been involved in setting up several startup companies.

Dr. Greg Welch is the Florida Hospital Endowed Chair in Healthcare Simulation at the University of Central Florida in the College of Nursing, the Department of Computer Science, and the Institute for Simulation & Training. He is also an Adjunct Professor at the University of North Carolina at Chapel Hill. His research interests include human-computer interaction, virtual and augmented reality, human motion tracking, and telepresence. He has been in academia since 1992, having previously worked for NASA’s Jet Propulsion Laboratory (Caltech) and the Northrop Corporation. He currently serves on several editorial boards and has co-chaired multiple associated conferences.


Dr. Susumu Tachi is Professor Emeritus of The University of Tokyo, where he currently leads several research projects on telexistence, virtual reality, and haptics, including the ACCEL Embodied Media Project at the Tachi Laboratory, Institute of Gerontology. In 1980, Dr. Tachi invented the concept of Telexistence, which enables a highly realistic sensation of existence in a remote place without any actual travel, and he has been working on the realization of telexistence ever since. His other achievements include Haptic Primary Colors, Optical Camouflage, and autostereoscopic VR displays such as TWISTER, Repro3D, and HaptoMIRAGE. He was the recipient of the 2007 IEEE Virtual Reality Career Award.

Moderator:

Dr. Mel Slater is an ICREA Research Professor at the University of Barcelona and part-time Professor of Virtual Environments at UCL. He has been involved in research in VR since the early 1990s. He was awarded the IEEE VR Career Award in 2005, and recently completed a European Research Council Advanced Grant on VR. He is Field Editor of Frontiers in Robotics and AI. He has contributed to the scientific study and technical developments of VR and to the use of VR in other fields, notably psychology and the cognitive neuroscience of how the brain represents the body.



Why Streaming Will Make or Break 360 Storytelling

Wednesday, March 22, 10:30am - 12:00pm

Every day, VR developers release new games, videos, and other content, while new hardware enters the market at an impressive rate. But even with this constant stream of new content and devices, the VR experience hasn’t become mainstream just yet, and users aren’t highly engaged. The reason? Everyone has a different theory. Is it the lack of content altogether? The inability for users to feel that amazing sense of presence given the current streaming quality? The clunky nature of the experience, with potentially large downloads? Content that still makes users motion sick? There are too many to name.

This panel will explore the ins and outs of why the current ways of delivering VR content and 360 videos for VR aren’t working. The speakers will engage with the audience to share research and anecdotal insights gleaned from their teams’ work on solving the 360 and VR video streaming dilemma, exploring how we can achieve 4-8K video quality across all VR platforms (even mobile) now. They will discuss how this milestone can dramatically speed up VR’s uptake by the mainstream and make 360 storytelling more impactful. The panelists will also dive into how this would have a domino effect on the overall number of impactful 360 stories available to users and on the quality of consumers’ initial experiences with VR, which many say could either make or break the VR industry going mainstream. The panelists will then host an audience Q&A about streaming for 360 storytelling, exploring skills and tricks for guiding user attention in 360 videos, Nielsen’s law and its implications for VR content streaming, hardware acceleration for 360 video stitching and encoding, and the challenges and promises of foveated streaming and rendering.
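To give a feel for the arithmetic behind this dilemma, here is a back-of-the-envelope sketch. It compares the bitrate that 4K and 8K 360 video demand against the bandwidth growth Nielsen’s law predicts, and the savings a foveated or viewport-adaptive scheme might offer. Every constant is an illustrative assumption, not a figure from the panelists.

```python
# Back-of-the-envelope: 360 video bitrates versus Nielsen's law.
# All constants below are illustrative assumptions, not measured figures.

def video_bitrate_mbps(width, height, fps=30, bits_per_pixel=0.1):
    """Rough compressed bitrate for a full equirectangular 360 stream.
    ~0.1 bits/pixel is a ballpark for modern codecs (assumption)."""
    return width * height * fps * bits_per_pixel / 1e6

def nielsen_bandwidth_mbps(year, base_year=2017, base_mbps=100):
    """Nielsen's law: a high-end user's connection grows ~50% per year.
    The 100 Mbps baseline for 2017 is an assumed figure."""
    return base_mbps * 1.5 ** (year - base_year)

if __name__ == "__main__":
    for label, (w, h) in {"4K": (3840, 2160), "8K": (7680, 4320)}.items():
        full = video_bitrate_mbps(w, h)
        # Foveated/viewport-adaptive streaming sends only what the user
        # is looking at; a 10x saving is an optimistic assumption.
        print(f"{label} 360 video: ~{full:.0f} Mbps full-sphere, "
              f"~{full / 10:.0f} Mbps foveated")
    for year in (2017, 2020, 2023):
        print(f"{year}: ~{nielsen_bandwidth_mbps(year):.0f} Mbps high-end link")
```

Under these assumptions, full-sphere 8K needs on the order of 100 Mbps, which is why viewport-adaptive and foveated delivery, rather than raw bandwidth growth alone, figure so prominently in the discussion.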

Panelists:

Mr. Barry Pousman is co-founder and CEO of Variable Labs, an immersive media company focused on fostering empathy for positive behavior change. Prior to starting Variable Labs, Barry was a Chief Digital Strategist at the U.N., helping to implement new media initiatives around the promotion of the Sustainable Development Goals. His work with the U.N. has taken him to Sub-Saharan Africa, Latin America, the Middle East, and South East Asia to create content ranging from Virtual Reality films to viral videos. His work has screened at the World Economic Forum in Davos, the White House, and Sundance; won the Interactive Award at Sheffield Doc Fest; and been written about in The New York Times, Vice, the BBC, and more.

Formerly the Director of Programming at Discovery Digital Networks and a founding member of Discovery VR, Barry focused there on science, education, and global awareness. He is a graduate of Emerson College, where he studied Visual Media, and he served with the U.S. Peace Corps in Senegal, receiving two Fulbright-Hays grants from the U.S. Embassy for his documentary work.

Dr. Changyin (CY) Zhou (Organizer) is co-founder and CEO of Visbit Inc., a visual technology company enabling the best-quality streaming and viewing experience for 360-degree VR videos across platforms. Through Visbit, Changyin is dedicated to making VR teleportation possible and accessible to everyone.

Prior to Visbit, Changyin was an engineer at Google X and a scientist at Google Research, specializing in image, video, camera, and vision technologies. At Google X, he was one of the founding engineers of a computational photography team and worked on projects including the Google Glass camera, Android camera HDR+, and other still-confidential computational camera projects. Before Google X, he worked for NVIDIA Research, Microsoft Research, Microsoft Research Asia, and Microsoft China. He received his Ph.D. in Computer Science from Columbia University; his research areas include computer vision, computer graphics, and computational photography.

Mr. Eric R. Williams is the co-creator of the Immersive Media Initiative [IMI] at Ohio University and director of its MFA in Communication Media Arts program. His first 360-degree narrative film (Re: Disappearing) played around the world in 2016, appearing at the Seattle Transmedia Film Festival, the Munich Outdoor MiniFest, and the Saigon Underground Film Festival, among others. Williams has recently written about Virtual Reality in the healthcare setting for the Huffington Post and is currently pre-producing a feature-length 360-degree film with music by Moby.

Williams has won an Emmy Award for interactive media, a Best New Screenplay award from the Writers Guild of America, and the University Professor Award for Excellence in Teaching from Ohio University, where he is also one of the chief architects of the IMI’s 13-course Virtual Reality curriculum. The curriculum was designed to bolster VR research at the university, combining classroom learning with hands-on production experience on faculty-led projects. It began in the summer of 2016 in conjunction with the School of Media Arts & Studies and the Scripps College of Communication.

Mike Schmit began his software career as the chief architect of a Space Shuttle experiment control computer system. He then started his own software company, which specialized in high-performance code optimization and tools and eventually led him to work on MPEG codecs. While at Zoran he optimized the MPEG-2 and AC-3 decoders in the first software DVD player and participated in various DVD standards groups. He then began working on video encoders as a software manager at ATI Technologies, delivering the first software-based full-HD DVR.

Recently he led his team at AMD to develop the first open-source implementation of OpenVX, a new computer vision standard. Now, as Director of Software for 360 VR video at AMD, he and his team have been working on real-time 360 video stitching software.
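As background on what such stitching software has to do, the sketch below shows the standard projection at its core: every pixel of the equirectangular output names a direction on the unit sphere, and the stitcher samples whichever camera sees that direction. This is a generic illustration of the math, not AMD’s implementation.

```python
import math

# Core projection in 360 stitching: equirectangular pixel <-> unit direction.
# Generic illustration only; not code from any panelist's product.

def pixel_to_direction(x, y, width, height):
    """Equirectangular pixel (x, y) -> unit direction (dx, dy, dz)."""
    lon = (x / width) * 2.0 * math.pi - math.pi    # longitude: -pi .. pi
    lat = math.pi / 2.0 - (y / height) * math.pi   # latitude: pi/2 .. -pi/2
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))

def direction_to_pixel(d, width, height):
    """Inverse mapping, used to look a direction up in a source image."""
    dx, dy, dz = d
    lon = math.atan2(dx, dz)
    lat = math.asin(max(-1.0, min(1.0, dy)))
    x = (lon + math.pi) / (2.0 * math.pi) * width
    y = (math.pi / 2.0 - lat) / math.pi * height
    return x, y

if __name__ == "__main__":
    w, h = 3840, 1920  # a typical 2:1 equirectangular frame
    d = pixel_to_direction(1000, 500, w, h)
    print(direction_to_pixel(d, w, h))  # round-trips to ~(1000.0, 500.0)
```

Real-time stitching then reduces to evaluating this mapping (plus lens-distortion correction and seam blending) for millions of pixels per frame, which is why the hardware acceleration mentioned above matters.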

Moderator:

Mr. Karl Krantz is the founder and chief organizer of Silicon Valley Virtual Reality (SVVR), the world’s largest and longest-running VR developer community and organizer of the annual SVVR Conference & Expo. Through SVVR, Karl is dedicated to accelerating the growth of a healthy and diverse virtual reality ecosystem.

Prior to SVVR, Karl led Teliris Telepresence Research Labs, an R&D group at a leading telepresence company, where he designed high-end, low-latency enterprise video communication systems and collaboration tools for Fortune 500 companies.

In 2012, Karl made the jump from telepresence to pursue his lifelong passion for virtual reality, and he hasn’t looked back since.


Perception and Action in Virtual Reality with New Commodity Level Equipment: What is New and Different?

Wednesday, March 22, 1:30pm - 3:00pm

Virtual reality is being revolutionized in many ways, with high-quality new technology becoming available at the consumer level. Over the past several decades, virtual reality has had a paired relationship with the area of visual space perception: research in perception has used virtual reality to conduct controlled scientific studies with manipulations that are not easy, or even possible, in the real world, and research in virtual reality has used perceptual theory and methods to assess its fidelity and quality. New consumer-level technologies are having a significant impact on visual perception and action in virtual reality. This panel will focus on this impact: what is known about how our perception, action, and cognition within virtual environments have been affected by these new technologies, and what we expect to see in the future.

The panel’s focus will be on lightweight, wide field-of-view head-mounted displays such as the Oculus Rift and HTC Vive, along with accompanying technologies, but other commodity-level technologies will also be considered. The panel will address several important questions regarding perception using new commodity-level equipment:

  1. What is the state of the art on perception and action with respect to current equipment?
  2. How much improvement in perceptual fidelity can we expect as technology improves?
  3. What limitations currently exist for visual perception and action? Are any fundamental?

Panelists:

Dr. Bobby Bodenheimer (Organizer) is an Associate Professor of Computer Science at Vanderbilt University. His current research is in the area of virtual reality, specifically how people learn and interact in virtual spaces. This work involves approaching fundamental questions about perception and action in the context of immersive virtual environments and the technologies that build them. Pursuing these questions leads to technological innovation in the design of virtual systems. His recent work using commodity level head-mounted displays involves questions of self-avatars, affordances, and distance perception.

Dr. Sarah Creem-Regehr is a Professor of Psychology at the University of Utah. Her research interests are in space perception and spatial cognition in real and virtual environments, with an emphasis on underlying mechanisms relating to action and body-based perception. Her work studies how perceptual theory and applications relating to virtual environments have mutually informed an understanding of large-scale space perception. Her research has included the influence of self-avatars on perception and action, visual-motor recalibration, spatial perspective taking, and navigation.

Dr. Scott Kuhl is an Associate Professor at Michigan Technological University. His research interests include computer graphics, immersive virtual environments, and space perception. Recent research efforts have looked at the effects of minification and display field of view on distance judgments in real and HMD-based environments using commodity level displays. Dr. Kuhl leads the virtual environment laboratory and is adjunct faculty in the Applied Cognitive Science and Human Factors program.

Dr. Steven LaValle is a Professor of Computer Science at the University of Illinois at Urbana-Champaign, where he leads the Motion Strategy Laboratory. His research interests lie in the design, analysis, and implementation of algorithms related to planning. This work led to his development of the original head-tracking algorithms for the Oculus Rift. His recent work involves re-examining core engineering principles for building consumer-level virtual reality displays with a direct infusion of perceptual psychology.

Dr. Betty Mohler leads the Space & Body Perception Group at the Max Planck Institute for Biological Cybernetics in Tuebingen, Germany. Her research interests focus on the interaction between human perception of the spatial attributes of bodily selves and spatial perception of the surrounding visual world. Her group’s methods typically involve measuring human performance in complex everyday tasks, e.g. spatial estimates, action execution (e.g. walking, reaching), and recognition. Their results show that the body (both physical and visual) is important for perceiving distances and sizes in the world. Her recent work involving commodity-level virtual reality equipment concerns perception and reaching.

Dr. Betsy Williams Sanders is an Associate Professor of Computer Science at Rhodes College. Her research work in immersive virtual environments involves viewing a three-dimensional world through a head-mounted display (HMD). A major part of this research involves understanding the cognitive capabilities of humans in virtual environments. Even state-of-the-art virtual environments are usually unconvincing, and people have difficulty organizing their spatial knowledge of them and moving around in them. One goal of her research is to improve our understanding of how people perceive and reason about space in a virtual environment, and how that understanding can be technically leveraged into an improved interface.