Panels

Monday, 21st March 2016

1:45 pm – 3:15 pm
Affordances in Immersive and Mixed Virtual Reality: Design Importance and Open Questions

Overview

Virtual reality is a constructed medium, and all interactions within it are constructed as well. The question of how such interactions are designed has been a major topic in the literature of virtual reality and human-computer interaction. This panel will discuss the design and use of virtual environments from the perspective of their affordances: in particular, how affordances can be assessed and characterized.

Affordances, as the term was originally used by Gibson, represent the possibilities for action within an environment. He coined the term to emphasize that people generally notice and attend to objects and features of their environment in terms of their relevance to current action capabilities or future action possibilities. When we build a virtual environment, we create those capabilities and possibilities. If we are building a simulation that mimics the real world, then measuring how well the affordances of the virtual environment match those of the real world can give us quantitative measures of the fidelity of our virtual environment, and provide specific design criteria for its improvement.

The panel will focus on several important questions regarding affordances:

  1. What technologies are important to ensure that good affordances will be present in a virtual environment?
  2. How are affordances measured in both real and virtual worlds?
  3. How can we design virtual environments to create possibilities for action?
  4. Does affordance-based design generalize beyond one or two properties of an environment to multiple properties?
  5. What is the relationship between having proper affordances in virtual reality and presence?

The answers to some of these questions are open, and thus only the current state of the art can be given. The panelists have experience with both virtual and mixed reality simulations, and will frame the discussion in the context of their work with affordances. The panel will begin with each speaker providing a short (5 minutes or less) statement on how affordances have been useful in their own work, taking a brief position on one or more of the questions outlined above. The goal will be to take a forward-looking position so as to motivate the ensuing discussion and engage the audience. The moderator will then open the discussion to participation from the audience.

Panelists

Bobby Bodenheimer, Vanderbilt University (Organizer)
bobby.bodenheimer@vanderbilt.edu

Dr. Bodenheimer is an Associate Professor of Computer Science at Vanderbilt University. His current research is in the area of virtual reality, specifically how people learn and interact in virtual spaces. This work involves approaching fundamental questions about perception and action in the context of immersive virtual environments and the technologies that build them. Pursuing these questions leads to technological innovation in the design of virtual systems. Much of his current work is focused on studying affordances and navigation in virtual environments.

Gabriel Diaz, Rochester Institute of Technology
diaz@cis.rit.edu

Dr. Diaz is an Assistant Professor in the Center for Imaging Science at the Rochester Institute of Technology. He studies the visual guidance of action. How is it that visual information is used to guide movements of the body when performing everyday actions, like catching a ball or driving a car? He investigates these questions using a variety of techniques and equipment, including computational modeling, eye tracking, virtual reality, and motion capture.

Michael Geuss, Max Planck Institute for Biological Cybernetics
michael.geuss@tuebingen.mpg.de

Dr. Geuss is an Alexander von Humboldt Fellow currently working as a research scientist with Dr. Betty Mohler in the Space and Body Perception Group. His current research seeks to understand how people perceive their own body’s size and how bodily states, whether it be physical, physiological, or emotional, inform perception of and actions on spatial layout. His research uses virtual reality technologies, physiological measures, and dynamical system analyses to investigate these relationships in ways not possible in the real world.

Joseph K. Kearney, University of Iowa
kearney@cs.uiowa.edu

Dr. Kearney is Professor of Computer Science at the University of Iowa. He co-directs the Hank Virtual Environments Lab that houses virtual bicycling and pedestrian simulators. The lab’s goal is to investigate how virtual environments can be used as laboratories for the study of human behavior. Kearney’s current research focuses on how children and adults cross traffic-filled roadways on a bike or on foot; on using VEs to study joint action; on action coordination in shared VEs mediated through full-body avatars; and on the potential of cell phone alerts to ameliorate the detrimental effects of texting on pedestrian road crossing.

Benjamin Lok, University of Florida
lok@cise.ufl.edu

Dr. Lok is Professor of Computer Science at the University of Florida and leads the Virtual Experiences Research Group (VERG) there. VERG conducts virtual reality and human-computer interaction research into the question: “Would interacting with a virtual human make you better at interacting with real people?” To answer this question, he creates experiences involving virtual human patients, teammates, and civilians.

Betsy Williams Sanders, Rhodes College
sandersb@rhodes.edu

Dr. Sanders is Associate Professor of Computer Science at Rhodes College. Her research work in immersive virtual environments involves viewing a three-dimensional world through a head-mounted display (HMD). A major part of this research involves understanding the cognitive capabilities of humans in virtual environments. Even state-of-the-art virtual environments are usually unconvincing, and people have difficulty organizing their spatial knowledge of them and moving around in them. One goal of her research is to improve our understanding of how people perceive and reason about space in a virtual environment and how that understanding can be technically leveraged into an improved interface.



Tuesday, 22nd March 2016

1:45 pm – 3:15 pm
Lessons to Game Developers from IEEE VR

Overview

With the advent of cost-effective, commercial off-the-shelf VR equipment, we are starting to witness an expansion of VR applications, especially games, and of the community of developers interested in this field. Most of these developers are approaching VR as if it were just another medium, easily accessible through powerful game engines. They are learning the hard way some of the limitations and particularities that the IEEE VR community has explored for more than 15 years. However, they are also starting to identify good solutions in current technologies, and are therefore beginning to create new knowledge that is relevant for the future of VR.

How can we communicate what this community has learned over more than 15 years to the fast-growing audience of VR game developers? What have we learned that is of use to them? How can we keep up with the growth of commercial VR games that is promised in the near future? How can we learn from the experiences of current VR game developers? What is our contribution to more serious applications, which are also starting to flourish? This panel aims to explore these and other related questions, drawing not only on the panelists' statements but also on contributions from the audience.

Panelists

Anthony Steed. Professor Anthony Steed is Head of the Virtual Environments and Computer Graphics (VECG) group at University College London. Prof. Steed’s research interests extend from virtual reality systems through to mobile mixed-reality systems, and from system development through to measures of user response to virtual content. He has published over 160 papers in the area, and is the main author of the book “Networked Graphics: Building Networked Graphics and Networked Games”.

Doug A. Bowman is Professor of Computer Science at Virginia Tech, where he directs the 3D Interaction Research Group and the Center for Human-Computer Interaction. He earned his M.S. and Ph.D. in Computer Science from Georgia Tech. His research interests include 3D user interfaces, interaction techniques for virtual environments, the benefits of immersion in VR, and large high-resolution displays. He is a co-author of the book “3D User Interfaces: Theory and Practice,” and was awarded a National Science Foundation CAREER grant for his work on domain-specific 3D user interfaces. He has published more than 100 articles in peer-reviewed journals and conferences, and was named a Distinguished Scientist by the ACM in 2010.

Evan Suma is the Associate Director of the MxR Lab at the Institute for Creative Technologies and a Research Assistant Professor in the Department of Computer Science at the University of Southern California. He received his Ph.D. in 2010 from the Department of Computer Science at the University of North Carolina at Charlotte. His interests broadly include the research and development of techniques and technologies that enhance immersive virtual environments and 3D human-computer interfaces. He is particularly interested in leveraging virtual reality for the empirical study of human perception and cognition. Dr. Suma has written or co-authored over 60 academic publications, eight of which have been recognized with conference awards, and is a five-time SIGGRAPH presenter. His gesture interaction middleware toolkit (FAAST) has been widely adopted by the research and hobbyist communities, and his online research videos have been viewed over 2.4 million times.

Moderator

Pablo Figueroa. Pablo Figueroa is an Associate Professor at Universidad de los Andes in Colombia, South America. Since completing his PhD at the University of Alberta in Canada, he has been doing research in virtual reality and video games in Colombia for more than 10 years. He is the Director of the DAVID Project, a joint effort between Government, Industry, and Academia to push the development of digital contents. He is also the creator of a Coursera Specialization on Game Development in Spanish, and the Technical Director of the Colivri lab, a collaborative research space in Visual Computing.

3:30 pm – 5:00 pm
Cyberarcheology – Issues and opportunities with virtual reality for a traditional humanities discipline

Abstract

Recent strides in consumer-level virtual reality (VR) have opened the possibility of a myriad of end-user applications of immersive VR. Among the many disciplines on the verge of fully benefiting from high-quality, low-price VR technology is archeology. Archeology has traditionally dealt with the exploration of ancient sites through excavation and survey techniques. Recently, the field of Cyberarcheology – the incorporation of advanced technological tools into archeology – has revolutionized the discipline. Techniques involving massive digital recording, virtual simulation, data curation, and digital transmission have advanced the field, but they have also caused the community to question the use of such advanced technology. Cyberarcheology involves the development and use of tools for data capture and autoptic evaluation in the field, as well as the interactive simulation and visualization of archaeological data in labs and virtual environments (VEs). VR enables unique ways to integrate spatial maps and immersive renderings through a series of virtual models, as well as the ability to find relationships among many different digital media artifacts. The ability to search and annotate artifacts in VEs provides unprecedented research value to archeology.

Cyberarcheology is a successful example of a concrete application of VR in the real world, outside the computer science research lab. This panel thus benefits not only individuals interested in Cyberarcheology but also anyone interested in the application of VR to real-world areas that can be far from technical.

This panel will present views from experts in Cyberarcheology and immersive VEs on the value of advanced interfaces for archeology as it relates to research, education, and outreach. Archeologists will discuss how they could benefit from VR technology, and VR researchers will explore ways to enable VR in an application-specific setting. There are important factors to consider when applying immersive VR techniques in archeology. Below are some of the questions highlighting the challenges and opportunities with the use of VR in archeology.

  • How can archeologists speak the language of computer science and VR? How to bridge the technical/humanities gap?
  • How to make multidisciplinary projects possible, where VR and archeology practitioners can work towards a common goal?
  • What constraints are limiting the adoption of VR by archaeologists?
  • What about end-users? Are we reaching a point where archeological exploration will be available to millions through the high fidelity of VR?
  • In academic archeology, how can we ensure that VR products are fully accepted as contributions to scholarly discourse? How to overcome skepticism and resistance to innovation?
  • How to deal with massive amounts of data and make sense of it?
  • What interaction paradigms can be employed to allow effective exploration of multilayer archaeological models?
  • What lies beyond a 3D model? How can we see through archeological artifacts and understand beyond the 3D model?
  • How can abstract information be effectively conveyed in an immersive virtual archeology environment?
  • How can we codify and transmit this knowledge to the future?
  • How do we rethink the material from the past?
  • Can this process be considered time travel, a reconstruction, or a simulation?

Panelists

Regis Kopper (organizer) – regis.kopper@duke.edu
Dr. Regis Kopper is an Assistant Research Professor of Mechanical Engineering at Duke University and directs the Duke immersive Virtual Environment (DiVE). He has over 10 years of experience in the design and evaluation of virtual reality systems. He investigates how immersive virtual reality can benefit different domain areas, such as archeology, health care, engineering, psychology, and neuroscience. Dr. Kopper is a recipient of the Best Paper Award at IEEE 3DUI and of a 3DUI Grand Prize Award. His research has been funded by the DoD, NSF, and NIH. Dr. Kopper received his Ph.D. in Computer Science from Virginia Tech.

Marcelo Knörich Zuffo (organizer) – mkzuffo@lsi.usp.br
Marcelo Knörich Zuffo is a full professor at the Department of Electronic Systems Engineering of the Polytechnic School at the University of São Paulo, Brazil. He coordinates the Interdisciplinary Center on Interactive Technologies; he also works as a scientist at the Laboratory of Integrated Systems at USP, investigating high-performance computing platforms, virtual reality, and visualization techniques. His current interests include cyberarcheology, multimedia hardware, volume visualization, virtual reality, and distributed systems. In 2001 he was responsible for implementing the first CAVE in Latin America. He is currently involved in promoting digital technologies in emerging markets.

Maurizio Forte (organizer) – maurizio.forte@duke.edu
Maurizio Forte, PhD, is William and Sue Gross Professor of Classical Studies, Art, Art History, and Visual Studies at Duke University and the founder of the DIG@Lab for digital knowledge of the past at Duke. He received his B.Sc. in Ancient History (archaeology) from the University of Bologna, and his PhD in Archaeology from the University of Rome. He has coordinated archaeological fieldwork and research projects in Italy, Ethiopia, Egypt, Syria, Kazakhstan, Peru, China, Oman, India, Honduras, Turkey, USA and Mexico. He is editor and author of several books including “Virtual Reality in Archaeology” and has written over 200 scientific papers.

Fred Limp – flimp@uark.edu
William (Fred) Limp is Leica Geosystems Chair and University Professor in GeoScience at the University of Arkansas, Fayetteville. Dr. Limp focuses on the application of geomatics/geoinformatics to archaeology and world heritage. He served as President of the Society for American Archaeology and was appointed by the Secretary of the Interior to the Board of the National Center for Preservation Technology and Training. He was a founder (1994) and long term Executive Board member of the Open Geospatial Consortium. He has served as PI on multiple major NSF multidisciplinary research projects with emphasis on the acquisition and representation of 3D data.

Patrick Ryan Williams – rwilliams@fieldmuseum.org
Patrick Ryan Williams is Associate Curator of Archaeological Science and Head of the Anthropology Section at The Field Museum of Natural History in Chicago, Illinois. Williams’ research interests include landscape archaeology, GIS & Remote Sensing applications, and the use of computational technologies for the interpretation of archaeological data. He has curated several exhibits at The Field, including Maps: Finding our Place in the World and Mummies: Images of the Afterlife, currently on its US tour. Williams has authored more than fifty publications, has been awarded seven federal senior research grants, and directs a multidisciplinary archaeological research program at Cerro Baul, Peru.

Eleni Bozia – bozia@ufl.edu
Eleni Bozia’s research combines classical studies with digital humanities. More specifically, she currently works on the study of oratory and the use of Atticism in the fifth and fourth centuries BCE and its renaissance in the Imperial era. Her study examines grammarians, literary theorists, and orators, while also employing computational analysis and linguistics in order to determine recurrent syntactical and linguistic patterns in the works of the orators. Furthermore, as the Associate Director of the Digital Epigraphy and Archaeology project she works on computer-assisted methodologies for the study of historical artifacts and the development of techniques for the facilitation of traditional research.

Alexander Kulik – kulik@uni-weimar.de
Alexander Kulik joined the research group for VR and Visualization Research at Bauhaus-Universität Weimar in 2006. His research focuses on 3D user interfaces, multi-user display systems, and collaborative interaction. The quest for user interfaces that support and encourage cooperation is the core theme of his work. A recent focus of these investigations is on the use of virtual reality for the collaborative exploration and presentation of large and complex 3D datasets. The European-funded research project 3D-Pitoti (ICT-2011-600545, http://www.3d-pitoti.eu) applied this approach to the archaeological analysis and interpretation of prehistoric rock art.



Wednesday, 23rd March 2016

8:30 am – 9:45 am
Perception Research in Virtual Reality

Moderators

Betty J. Mohler, Max Planck Institute for Biological Cybernetics & Frank Steinicke, University of Hamburg
Contact: Betty.Mohler@tuebingen.mpg.de

The notion of a virtual reality (VR) indistinguishable from the real world has been addressed repeatedly in science fiction art, literature, and film. Plato’s Allegory of the Cave from the ancient world, and several science fiction movies from the modern era such as “The Matrix”, “Surrogates”, “Avatar”, and “World on a Wire”, are only some prominent examples that play with this perceptual ambiguity and constantly question whether our perceptions of reality are real. The panelists explore perceptually inspired and (super-)natural forms of interaction to seamlessly couple the flat 2D digital world with the three dimensions we live in.

VR can be used as a tool to study human perception. Conversely, empirical results about human perception, action, and cognition can be used to improve VR technology and software. Many scientists are using VR to create ecologically valid and immersive sensory stimuli in a controlled way that would not be possible in the real world. More specifically, VR technology enables us to manipulate the visual body, the contents of the virtual world, and the sensory stimulus (visual, vestibular, kinesthetic, tactile, and auditory) while a person performs or views an action. This panel will focus on several areas where perception research and VR technology have come together to improve the state of the art and advance our scientific knowledge. In recent years, a large amount of research has been performed on the perception of locomotion, space (specifically depth), the body, visual-motor coordination, and interaction. The panelists will briefly present their multi-disciplinary results and discuss what factors lead to successful multi-disciplinary research.

Panelists

Perception of locomotion and interaction

Frank Steinicke, University of Hamburg
Contact: steinicke@informatik.uni-hamburg.de

Frank Steinicke is Professor for Human-Computer Interaction in the Department of Informatics at the University of Hamburg. His research is driven by understanding human perceptual, cognitive, and motor abilities and limitations in order to improve interaction and experience in computer-mediated realities. He regularly serves as a panelist and speaker at major events in the area of virtual reality and human-computer interaction, and is on the international program committees of various national and international conferences. He has published several articles on redirected walking, distance perception, and perceptually inspired interfaces.

Depth and layout perception

Edward Swan, Mississippi State University
Contact: ed.fretless@acm.org

Dr. J. Edward Swan II is a Professor of Computer Science and Engineering at Mississippi State University. His research has centered on the topics of augmented and virtual reality, perception, human-computer interaction, human factors, empirical methods, computer graphics, and visualization. In the past 11 years his group has focused on studying perception in augmented and virtual reality, including depth and layout perception and depth presentation methods. In addition, they are studying x-ray vision: using augmented reality to let users see objects located behind solid, opaque surfaces, for example to allow a doctor to see organs inside a patient’s body.

Visual-motor coordination in virtual environments

Stephen R. Ellis, NASA
Contact: stephen.r.ellis@nasa.gov

Stephen R. Ellis has headed the Advanced Displays and Spatial Perception Laboratory at the NASA Ames Research Center. He received an A.B. in Behavioral Science from U.C. Berkeley (1969), a Ph.D. in Psychology (1974) from McGill University, and has had postdoctoral fellowships in physiological optics and bioengineering at Brown University and U.C. Berkeley, respectively. He was an NRC fellow at Ames from 1977 to 1980 before he was hired into the Civil Service there. He has published over 200 journal publications and reports on user interaction with spatial information and has been at the forefront of the introduction of perspective and 3D formats into user interfaces for aerospace applications. More recently he has pursued the study of virtual environments as user interfaces and as scientific instruments to study the effects of communications delay on perceptual stability and on manual control subject to display-control misalignment, as can occur during teleoperation.

Designing systems to match perceptual requirements

Alexander Kulik, Bauhaus-Universität Weimar
Contact: alexander.kulik@uni-weimar.de

Alexander Kulik works with the research group for VR and Visualization Research at Bauhaus-Universität Weimar. His research focuses on 3D user interfaces, multi-user display systems, and collaborative interaction (Kunert et al. 2014, Beck et al. 2013, and Kulik et al. 2011). These developments are grounded in knowledge and current hypotheses about perception, motor coordination, and joint action. The quest for user interfaces that support and encourage cooperation is the core theme of his work. A recent focus of these investigations is the development of virtual reality systems for the collaborative exploration and presentation of large and complex 3D datasets.

Perception of Space and Bodies

Betty Mohler, Max Planck Institute for Biological Cybernetics, Germany
Contact: Betty.Mohler@tuebingen.mpg.de

Betty Mohler received her PhD from the University of Utah and leads the independent Space & Body Perception research group at the Max Planck Institute for Biological Cybernetics. The group focuses on the interaction between human perception of the spatial attributes of bodily selves and spatial perception of the surrounding visual world. Their methods typically involve measuring human performance in complex everyday tasks, e.g. spatial estimates, action execution (e.g. walking, reaching), and recognition. Their results show that the body (both physical and visual) is important for perceiving distances and sizes in the world, and that our perception of the size of our body is rapidly adaptable to new sensory information.

Selected Publications from Panelists:

  1. F Steinicke, G Bruder, J Jerald, H Frenz, M Lappe, Estimation of detection thresholds for redirected walking techniques, IEEE Transactions on Visualization and Computer Graphics 16 (1), 17-27
  2. F Steinicke, G Bruder, K Hinrichs, A Steed, AL Gerlach, Does a gradual transition to the virtual world increase presence? Virtual Reality Conference, 2009. VR 2009. IEEE, 203-210
  3. Kunert, A., Kulik, A., Beck, S., Froehlich, B. Photoportals: Shared References in Space and Time. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW ’14). ACM, New York, NY, USA, 1388-1399, February 2014.
  4. Beck, S., Kunert, A., Kulik, A., Froehlich, B. Immersive Group-to-Group Telepresence (Best Paper Award). IEEE Transactions on Visualization and Computer Graphics, 19(4):616-625, March 2013 (Proceedings of IEEE Virtual Reality 2013, Orlando, Florida).
  5. Kulik, A., Kunert, A., Beck, S., Reichel, R., Blach, R., Zink, A., Froehlich, B. C1x6: A Stereoscopic Six-User Display for Co-located Collaboration in Shared Virtual Environments. In Proceedings of the 2011 SIGGRAPH Asia Conference (SA ’11). ACM, New York, NY, USA, Article 188, 12 pages.
  6. Linkenauger SA, Leyrer M, Bülthoff HH and Mohler BJ (July 2013). Welcome to Wonderland: The Influence of the Size and Shape of a Virtual Hand on the Perceived Size and Shape of Virtual Objects. PLoS ONE 8(7), 1-16.
  7. Piryankova IV, de la Rosa S, Kloos U, Bülthoff HH and Mohler BJ (April 2013). Egocentric distance perception in large screen immersive displays. Displays 34(2), 153–164.
  8. Mohler BJ, Creem-Regehr SH, Thompson WB and Bülthoff HH (June 2010). The Effect of Viewing a Self-Avatar on Distance Judgments in an HMD-Based Virtual Environment. Presence: Teleoperators and Virtual Environments 19(3), 230-242.
  9. Leyrer M, Linkenauger SA, Bülthoff HH and Mohler BJ (February 2015). Eye Height Manipulations: A Possible Solution to Reduce Underestimation of Egocentric Distances in Head-Mounted Displays. ACM Transactions on Applied Perception 12(1:1), 1-23.
  10. Leyrer M, Linkenauger SA, Bülthoff HH and Mohler BJ (May 2015). The Importance of Postural Cues for Determining Eye Height in Immersive Virtual Reality. PLoS ONE 10(5), 1-23.
  11. Kenneth Moser, Yuta Itoh, Kohei Oshima, J. Edward Swan II, Gudrun Klinker, and Christian Sandor. Subjective Evaluation of a Semi-Automatic Optical See-Through Head-Mounted Display Calibration Technique. IEEE Transactions on Visualization and Computer Graphics, IEEE Virtual Reality Conference Proceedings 2015, 21(4):491–500, 2015. DOI:10.1109/TVCG.2015.2391856
  12. J. Edward Swan II, Gurjot Singh, and Stephen R. Ellis. Matching and Reaching Depth Judgments with Real and Augmented Reality Targets. IEEE Transactions on Visualization and Computer Graphics, IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2015), 21(11):1289–1298, 2015. DOI:10.1109/TVCG.2015.2459895
  13. Mark A. Livingston, Joseph L. Gabbard, J. Edward Swan II, Ciara M. Sibley, and Jane H. Barrow. Basic Perception in Head-Worn Augmented Reality Displays. In Tony Huang, Leila Alem, and Mark A. Livingston, editors, Human Factors in Augmented Reality Environments, pp. 35–65, Springer, 2012.