Posters
The following posters have been accepted for publication at IEEE VR 2015:
- Wednesday 1 - Investigating Visual Dominance with a Virtual Driving Task
- Wednesday 2 - VR and AR Simulator for Neurosurgical Training
- Wednesday 3 - High-Definition Digital Display Case with the Image-based Interaction
- Wednesday 4 - Volumetric Calibration and Registration of RGBD-Sensors
- Wednesday 5 - Touching sounds: Perception of the Curvature of Auditory Virtual Surfaces
- Wednesday 6 - Wayfinding by Auditory Cues in Virtual Environments
- Wednesday 7 - Portable-Spheree: A portable 3D Perspective-Corrected Interactive Spherical Scalable Display
- Wednesday 8 - A Multi-Layer Approach of Interactive Path Planning for Assisted Manipulation in Virtual Reality
- Wednesday 9 - Visual-Olfactory Immersive Environment For Product Evaluation
- Wednesday 10 - Comparative Evaluation of Stylized versus Realistic Representation of Virtual Humans on Empathetic Responses in Simulated Interpersonal Experiences
- Wednesday 11 - Validation of SplitVector Encoding and Stereoscopy for Visualizing Quantum Physics Data in Virtual Environments
- Wednesday 12 - A Building-Wide Indoor Tracking System for Augmented Reality
- Wednesday 13 - Collaborative Table-Top VR Display for Neurosurgical Planning
- Wednesday 14 - Investigating the Impact of Perturbed Visual and Proprioceptive Information in Near-Field Immersive Virtual Environment
- Wednesday 15 - Using Augmented Reality to Support Situated Analytics
- Wednesday 16 - Vision-based Multi-Person Foot Tracking for CAVE Systems with Under-Floor Projection
- Wednesday 17 - Effects and Applicability of Rotation Gain in CAVE-like Environments
- Wednesday 18 - flapAssist: How the integration of VR and Visualization Tools fosters the factory planning process
- Wednesday 19 - An Immersive Labyrinth
- Wednesday 20 - Towards Context-Sensitive Reorientation for Real Walking in Virtual Reality
- Wednesday 21 - Incorporating D3.js Information Visualization into Immersive Virtual Environments
- Wednesday 22 - Using Interactive Virtual Characters in Social Neuroscience
- Wednesday 23 - Towards A High Fidelity Simulation of the Kidney Biopsy Procedure
- Wednesday 24 - AR-SSVEP for Brain-Machine Interface: Estimating User's Gaze in Head-mounted Display with USB camera
- Wednesday 25 - Five Senses Theatre Project: Sharing Experiences through Bodily Ultra-Reality
- Wednesday 26 - Robust Enhancement of Depth Images from Kinect Sensor
- Wednesday 27 - Desktop Versions of the String-based Haptic Interface - SPIDAR
- Wednesday 28 - Registration and Projection method of tumor region projection for breast cancer surgery assistance
- Wednesday 29 - 3D Node Localization from Node-to-Node Distance Information using Cross-Entropy Method
- Wednesday 30 - BlenderVR: Open-source framework for interactive and immersive VR
- Thursday 1 - Scalable Metadata In- and Output for Multi-platform Data Annotation Applications
- Thursday 2 - An Experimental Study on the Virtual Representation of Children
- Thursday 3 - Human-Avatar Interaction and Recognition Memory according to Interaction Types and Methods
- Thursday 4 - Dynamic Hierarchical Virtual Button-based Hand Interaction for Wearable AR
- Thursday 5 - 3D Position Measurement of Planar Photo Detector Using Gradient Patterns
- Thursday 6 - What Can We Feel on the Back of the Tablet? -- A Thin Mechanism to Display Two Dimensional Motion on the Back and Its Characteristics --
- Thursday 7 - Cooperation in Virtual Environments with Individual Views
- Thursday 8 - Interaction with Virtual Agents – Comparison of the participants’ experience between an IVR and a semi-IVR system
- Thursday 9 - Multiple Devices as Windows for Virtual Environment
- Thursday 10 - Synthesis of Omnidirectional Movie using a Set of Key Frame Panoramic Images
- Thursday 11 - The Influence of Virtual Reality and Mixed Reality Environments combined with two different Navigation Methods on Presence
- Thursday 12 - Avatar Embodiment Realism and Virtual Fitness Training
- Thursday 13 - Avatar Anthropomorphism and Illusion of Body Ownership in VR
- Thursday 14 - Influence of Avatar Realism on Stressful Situation in VR
- Thursday 15 - Extending Touch-less Interaction on Vision Based Wearable Device
- Thursday 16 - Blind in a Virtual World: Using Sensory Substitution for Generically Increasing the Accessibility of Graphical Virtual Environments
- Thursday 17 - I Built It! - Exploring the effects of Customizable Virtual Humans on Adolescents with ASD
- Thursday 18 - The Effects of Olfaction on Training Transfer for an Assembly Task
- Thursday 19 - MRI Overlay System Using Optical See-Through for Marking Assistance
- Thursday 20 - Continuous Automatic Calibration for Optical See-Through Displays
- Thursday 21 - Comparing the Performance of Natural, Semi-Natural, and Non-Natural Locomotion Techniques in Virtual Reality
- Thursday 22 - Implementation of on-site virtual time machine for mobile devices
- Thursday 23 - The Effect of Head Mounted Display Weight and Locomotion Method on the Perceived Naturalness of Virtual Walking Speeds
- Thursday 24 - Third person's footsteps enhanced walking sensation of seated person
- Thursday 25 - Does Vibrotactile Intercommunication Increase Collaboration?
- Thursday 26 - Coupled-Clay: Physical-Virtual 3D Collaborative Interaction Environment
- Thursday 27 - GPU-accelerated Attention Map Generation for Dynamic 3D Scenes
- Thursday 28 - A Procedure for Accurate Calibration of a Tabletop Haploscope AR Environment
- Friday 1 - Using Astigmatism in Wide Angle HMDs to Improve Rendering
- Friday 2 - Shark Punch: A Virtual Reality Game for Aquatic Rehabilitation
- Friday 3 - Real-time SLAM for static multi-objects learning and tracking applied to augmented reality applications
- Friday 4 - Social Presence with Virtual Glass
- Friday 5 - Semi-automatic Calibration of a Projector-Camera System Using Arbitrary Objects With Known Geometry
- Friday 6 - Navigation in REVERIE's Virtual Environments
- Friday 7 - Collaborative Telepresence Workspaces for Space Operation and Science
- Friday 8 - Does Virtual Reality really affect visual perception of egocentric distance?
- Friday 9 - A GPU-Based Adaptive Algorithm for Non-Rigid Surface Registration
- Friday 10 - Characteristics of virtual walking sensation created by a 3-dof motion seat
- Friday 11 - Self-Characteristics and Sound in Immersive Virtual Reality - Estimating Avatar Weight from Footstep Sounds
- Friday 12 - Wings and Flying in Immersive VR - Controller Type, Sound Effects and Experienced Ownership and Agency
- Friday 13 - Optical See-through HUDs Effect on Depth Judgments of Real World Objects
- Friday 14 - EVE: Exercise in Virtual Environments
- Friday 15 - Subjective Evaluation of Peripheral Viewing during Exposure to a 2D/3D Video Clip
- Friday 16 - Zoom Factor Compensation for Monocular SLAM
- Friday 17 - A Modified Tactile Brush Algorithm for Complex Touch Gestures
- Friday 18 - Experiencing Interior Environments: New Approaches for the Immersive Display of Large-Scale Pointcloud Data
- Friday 19 - Landscape Change From Daytime To Nighttime Under Augmented Reality Environment
- Friday 20 - Impact of Illusory Resistance on Finger Walking Behavior
- Friday 21 - Development of a Wearable Haptic Device with Pneumatic Artificial Muscles and MR brake
- Friday 22 - Preliminary Evaluation of a Virtual Needle Insertion Training System
- Friday 23 - From visual cues to climate perception in virtual urban environments
- Friday 24 - HorizontalDragger: a Freehand Remote Selector for Object Acquisition
- Friday 25 - A Real-Time Welding Training System Based on Virtual Reality
- Friday 26 - Transparent Cockpit Using Telexistence
- Friday 27 - Flying Robot Manipulation System Using a Virtual Plane
- Friday 28 - Binocular Interface: Interaction Techniques Considering Binocular Parallax for a Large Display
- Friday 29 - Tracking Human Locomotion by Relative Positional Feet Tracking
________________________________________
Investigating Visual Dominance with a Virtual Driving Task
Abdulaziz Alshaer - University of Otago's Information Science Department
Holger Regenbrecht - University of Otago's Information Science Department
David O'Hare - University of Otago's Psychology Department
Presenting Author: Holger Regenbrecht
Abstract: Most interactive input devices for virtual reality-based simulators are proprietary and expensive. Can they be substituted with standard, inexpensive devices if the virtual representation of the input device looks and acts like the original? Visual dominance theory, under which the visual appearance of the displayed input device within the virtual environment should override the haptic qualities of the real device, would appear to support such a possibility. We tested this theory in a VR power wheelchair simulator scenario, comparing standard gaming and proprietary wheelchair joysticks in combination with their virtual counterparts, and measured the effects on driving performance.
________________________________________
VR and AR Simulator for Neurosurgical Training
Ryan Armstrong - University of Western Ontario
Sandrine de Ribaupierre - University of Western Ontario
Dayna Noltie - University of Western Ontario
Matt Kramers - University of Western Ontario
Roy Eagleson - University of Western Ontario
Presenting Author: Roy Eagleson
Abstract: The placement of an external ventricular drain is one of the most commonly performed neurosurgical procedures and, consequently, is an essential skill to be mastered by neurosurgical trainees. The drain placement involves analyzing images from the patient, choosing an entry point, and deciding on a trajectory to hit the ventricle. In this paper, we describe the development of a simulation environment to train residents, coupled with an AR image-guidance tool. Performance is evaluated using Fitts’ methodology (Fitts, 1954), which respects the user’s ability to trade off speed and accuracy.
________________________________________
High-Definition Digital Display Case with the Image-based Interaction
Yuki Ban, The University of Tokyo
Takashi Kajinami, The University of Tokyo
Takuji Narumi, The University of Tokyo
Tomohiro Tanikawa, The University of Tokyo
Michitaka Hirose, The University of Tokyo
Presenting Author: Yuki Ban
Abstract: This paper proposes a high-definition digital display case for manipulating a virtual exhibit that has linking mechanisms. This technique enhances the understanding of dynamic exhibits. It is difficult to construct interactive content for dynamic virtual exhibits, because measuring the mechanism risks deteriorating the exhibit, and it takes tremendous effort to create a finely detailed computer graphics (CG) model of the mechanisms. Therefore, we propose an image-based interaction method that uses image-based rendering to construct interactive content for dynamic virtual exhibits by interpolating between pictures of the exhibit taken under a number of deformational conditions and viewpoints. Using this method, we constructed a high-definition digital showcase and exhibited the interactive content at a museum to evaluate the usability of our system.
________________________________________
Volumetric Calibration and Registration of RGBD-Sensors
Stephan Beck, Bauhaus-Universität Weimar
Bernd Froehlich, Bauhaus-Universität Weimar
Presenting Author: Stephan Beck
Abstract: We present an integrated approach for the calibration and registration of color and depth (RGBD) sensors into a joint coordinate system without explicitly identifying intrinsic or extrinsic camera parameters. Our method employs a tracked checkerboard to establish a number of correspondences between positions in color and depth camera space and in world space. These correspondences are used to construct a single calibration and registration volume per RGBD sensor which directly maps raw depth sensor values into a joint coordinate system and to their associated color values. Our evaluation demonstrates an accuracy with an average 3D error below 3 mm and an average texture deviation smaller than 0.5 pixels for a space of about 1.5 m x 1.8 m x 1.5 m.
________________________________________
Touching sounds: Perception of the Curvature of Auditory Virtual Surfaces
Eric O. Boyer, LPP AVOC Team, UMR 8242 CNRS-Université Paris Descartes, Paris
Lucyle Vandevoorde, UFR STAPS - Université Paris Descartes
Frédéric Bevilacqua, STMS, IRCAM-CNRS-UPMC, Paris
Sylvain Hanneton, LPP AVOC Team, UMR 8242 CNRS-Université Paris Descartes, Paris
Presenting Author: Eric Boyer
Abstract: In this study, we investigated the ability of blindfolded adults to discriminate between concave and convex auditory virtual surfaces. We used a Leap Motion device to measure the movements of the hand and fingers. Participants were asked to explore the space above the device with the palm of one hand, and auditory feedback was produced only when the palm moved within the boundaries of the surface. In order to demonstrate that curvature direction was correctly perceived by our participants, we estimated their discrimination thresholds with a psychophysical staircase procedure. Two groups of participants received two different sonifications of the surface. Results showed that most of the participants were able to learn the task. The best results were obtained with auditory feedback related to the component of the hand velocity tangential to the virtual surface. This work contributes to the introduction of auditory virtual objects into virtual reality.
________________________________________
Wayfinding by Auditory Cues in Virtual Environments
Ayana Burkins - Department of Computer Science
Regis Kopper - Duke immersive Virtual Environment
Presenting Author: Ayana Burkins
Abstract: Wayfinding is a typical task in virtual environments. Real-world aids such as maps and Global Positioning Systems can present unique challenges due to the potential for cognitive overload and the immersive nature of the environment. This work presents the results of a pilot study involving the use of auditory cues as a wayfinding aid in a virtual mall environment. The data suggest that users are able to complete wayfinding tasks faster and more accurately in an environment containing sound cues than in one without.
________________________________________
Portable-Spheree: A portable 3D Perspective-Corrected Interactive Spherical Scalable Display
Marcio Cabral - Polytechnic School - University of Sao Paulo
Fernando Ferreira - Federal University of ABC
Olavo Belloc - Polytechnic School - University of Sao Paulo
Celso Kurashima - Federal University of ABC
Roseli Lopes - Polytechnic School - University of Sao Paulo
Ian Stavness - University of Saskatchewan
Junia Anacleto - Federal University of Sao Carlos
Sidney Fels - University of British Columbia
Marcelo Zuffo - Polytechnic School - University of Sao Paulo
Presenting Author: Marcio Cabral
Abstract: In this poster we present Portable-Spheree, an interactive spherical rear-projected 3D content display that provides perspective-corrected views according to the user's head position, offering parallax, shading, and occlusion depth cues. Portable-Spheree is an evolution of the Spheree, developed in a smaller form factor, using more projectors and a dark-translucent screen with increased contrast. We present some preliminary results of this new configuration as well as applications with spatial interaction that might benefit from this new form factor.
_________________________________________
A Multi-Layer Approach of Interactive Path Planning for Assisted Manipulation in Virtual Reality
Simon Cailhol, ENIT-LGP
Philippe Fillatreau, ENIT-LGP
Jean-Yves Fourquet, ENIT-LGP
Yingshen Zhao, ENIT-LGP
Presenting Author: Jean-Yves Fourquet
Abstract: This work considers VR applications dealing with object manipulation (such as industrial product assembly, disassembly, or maintenance simulation). For such applications, the operator performing the simulation can be assisted by path planning techniques from the robotics research field. A novel automatic path planner involving geometrical, topological, and semantic information about the environment is proposed for guiding the user through a haptic device. The interaction allows, on the one hand, the automatic path planner to provide assistance to the human operator and, on the other hand, the operator to reset the whole planning process by suggesting a better suited path. Control sharing techniques are used to improve the ergonomics of assisted manipulation by dynamically balancing the automatic path planner’s authority according to the operator’s involvement in the task, and by predicting the user’s intent to integrate it as early as possible in the planning process.
________________________________________
Visual-Olfactory Immersive Environment For Product Evaluation
Marina Carulli, Politecnico di Milano
Monica Bordegoni, Politecnico di Milano
Umberto Cugini, Politecnico di Milano
Presenting Author: Monica Bordegoni
Abstract: The sense of smell has great importance in our daily life. In recent years, smells have been used for marketing purposes with the aim of improving a person's mood and communicating information about products such as household cleaners and food. However, the scent design discipline can also be applied to any kind of product to communicate its features to customers. In the area of Virtual Reality, several research efforts have focused on integrating smells into virtual environments. The research questions addressed in this work concern whether Virtual Prototypes, including the sense of smell, can be used for evaluating products as effectively as studies performed in real environments, and also whether smells can contribute to increasing the users' sense of presence in the virtual environment. For this purpose, a Virtual Reality experimental framework including a prototype of a wearable olfactory display has been set up, and experimental tests have been performed.
________________________________________
Comparative Evaluation of Stylized versus Realistic Representation of Virtual Humans on Empathetic Responses in Simulated Interpersonal Experiences
Himanshu Chaturvedi, Clemson University
Nathan Newsome, Clemson University
Sabarish Babu, Clemson University
June Luo, Clemson University
Tania Roy, Clemson University
Shaundra Daily, Clemson University
Jeffrey Bertrand, Clemson University
Tracy Fasolino, Clemson University
Elham Ebrahimi, Clemson University
Presenting Author: Himanshu Chaturvedi
Abstract: The effectiveness of the visual realism of virtual characters in engaging users and eliciting affective responses has been an open question. We empirically evaluated the effects of realistic vs. non-realistic rendering of virtual humans on the emotional response of participants in a medical virtual reality system that was designed to educate users to recognize the signs and symptoms of patient deterioration. In a between-subjects experiment protocol, participants interacted with one of three different appearances of a virtual patient, namely realistic, non-realistic cartoon-shaded, and charcoal-sketch-like conditions. The emotional impact of the rendering conditions was measured via a combination of subjective and objective metrics.
________________________________________
Validation of SplitVector Encoding and Stereoscopy for Visualizing Quantum Physics Data in Virtual Environments
Jian Chen - University of Maryland, Baltimore County
Henan Zhao - University of Maryland, Baltimore County
Wesley Griffin - University of Maryland, Baltimore County
Judith E. Terrill - National Institute of Standards and Technology (NIST)
Garnett W. Bryant - National Institute of Standards and Technology (NIST)
Presenting Author: Jian Chen
Abstract: We designed and evaluated SplitVector, a new vector field display approach to help scientists perform new discrimination tasks on scientific data shown in virtual environments (VEs). We present an empirical study comparing the SplitVector approach with three other approaches in information-rich VEs. Twenty participants performed three domain analysis tasks. Our empirical study results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by 4 times compared to logarithmic mapping in discrimination tasks, and (2) SplitVectors lead to no significant differences from the IRVE text display approach, yet reduce clutter.
________________________________________
A Building-Wide Indoor Tracking System for Augmented Reality
Stéphane Côté - Bentley Systems
François Rheault - Sherbrooke University
Julien Barnard - Université Laval
Presenting Author: Stéphane Côté
Abstract: Buildings require regular maintenance, and augmented reality (AR) could advantageously be used to facilitate the process. However, such AR systems would require accurate tracking to meet the needs of engineers, and work accurately in entire buildings. In this project, we propose a hybrid system combining low accuracy radio-based tracking, and high accuracy tracking using depth images obtained from range cameras. Results show tracking accuracy that would be compatible with AR applications and that would be constant within a building.
________________________________________
Collaborative Table-Top VR Display for Neurosurgical Planning
Roy Eagleson, Western Ontario, London, Ontario, Canada
Patrick Wucherer, Technische Universität München, Germany
Philipp Stefan, Technische Universität München, Germany
Yaroslav Duschko, Technische Universität München,Germany
Sandrine de Ribaupierre, Robarts Research Institute, Western University, Canada
Christian Vollmar, Klinikum der Universität München, Germany
Pascal Fallavollita, Technische Universität München,Germany
Nassir Navab, Technische Universität München,Germany
Presenting Author: Roy Eagleson
Abstract: We present a prototype of a system in development for pre-operative planning. The proposed NeuroTable uses a combination of traditional rendering and novel visualization techniques to facilitate real-time collaboration between neurosurgeons during intervention planning. A set of multimodal 2D and 3D renderings conveys the relation between the region of interest and the surrounding anatomical structures. A haptic device is used for interaction with the NeuroTable, providing the neurosurgeons with immersive control over the 3D cursor and navigation modes during their planning discussions. A pilot experimental study was conducted to assess the performance of users in targeting points within the preoperative 3D scan. Then, two clinicians participated in an evaluation of the table in discussing and planning a case. Results indicate that the NeuroTable facilitated the discourse, and we discuss the speed and accuracy results for the specification of entry and target points.
________________________________________
Investigating the Impact of Perturbed Visual and Proprioceptive Information in Near-Field Immersive Virtual Environment
Elham Ebrahimi, Clemson University
Bliss M. Altenhoff, Clemson University
Christopher C. Pagano, Clemson University
Sabarish V. Babu, Clemson University
J. Adam Jones, Clemson University
Presenting Author: Elham Ebrahimi
Abstract: We report the results of an empirical evaluation to examine the carryover effects of calibrations to one of three perturbations of visual and proprioceptive feedback: i) Minus condition (-20% gain) in which a visual stylus appeared at 80% of the distance of a physical stylus, ii) Neutral condition (0% gain) in which a visual stylus was co-located with a physical stylus, and iii) Plus condition (+20% gain) in which the visual stylus appeared at 120% of the distance of the physical stylus. Feedback was shown to calibrate distance judgments quickly within an IVE, with estimates being farthest after calibrating to visual information appearing nearer (Minus condition), and nearest after calibrating to visual information appearing further (Plus condition).
________________________________________
Using Augmented Reality to Support Situated Analytics
Neven ElSayed - University of South Australia
Bruce Thomas - University of South Australia
Ross Smith - University of South Australia
Kim Marriott - Monash University
Julia Piantadosi - University of South Australia
Presenting Author: Neven A. M. ElSayed
Abstract: We draw from the domains of Visual Analytics and Augmented Reality to support a new form of in-situ interactive visual analysis. We present a Situated Analytics model, a novel interaction, and a visualization concept for reasoning support. Situated Analytics has four primary elements: situated information, abstract information, augmented reality interaction, and analytical interaction.
________________________________________
Vision-based Multi-Person Foot Tracking for CAVE Systems with Under-Floor Projection
Sebastian Freitag - Virtual Reality Group, RWTH Aachen University
Sebastian Schmitz - RWTH Aachen University
Torsten W. Kuhlen - Virtual Reality Group, RWTH Aachen University
Presenting Author: Sebastian Freitag
Abstract: In this work, we present an approach for tracking the feet of multiple users in CAVE-like systems with under-floor projection. It is based on low-cost consumer cameras, does not require users to wear additional equipment, and can be installed without modifying existing components. If the brightness of the floor projection does not contain too much variation, the feet of several people can be reliably tracked and assigned to individuals.
________________________________________
Effects and Applicability of Rotation Gain in CAVE-like Environments
Sebastian Freitag - Virtual Reality Group, RWTH Aachen University
Benjamin Weyers - Virtual Reality Group, RWTH Aachen University
Torsten W. Kuhlen - Virtual Reality Group, RWTH Aachen University
Presenting Author: Sebastian Freitag
Abstract: In this work, we report on a pilot study we conducted, and on a study design, to examine the effects and applicability of rotation gain in CAVE-like virtual environments. The results of the study will give recommendations for the maximum levels of rotation gain that are reasonable in algorithms for enlarging the virtual field of regard or redirected walking.
________________________________________
flapAssist: How the integration of VR and Visualization Tools fosters the factory planning process
Sascha Gebhardt - RWTH Aachen University
Sebastian Pick - RWTH Aachen University
Hanno Voet - RWTH Aachen University
Julian Utsch - RWTH Aachen University
Toufik al Khawli - Fraunhofer ILT
Urs Eppelt - Fraunhofer ILT
Rudolf Reinhard - RWTH Aachen University
Christian Büscher - RWTH Aachen University
Bernd Hentschel - RWTH Aachen University
Torsten W. Kuhlen - Jülich Supercomputing Centre
Presenting Author: Sascha Gebhardt
Abstract: Virtual Reality (VR) systems are of growing importance to aid decision support in factory planning. While current solutions focus either on virtual walkthroughs or on the visualization of more abstract information, a solution that provides both does not currently exist. To close this gap, we present a holistic VR application, called flapAssist. It is meant to serve as a platform for planning the layout of factories, while also providing a wide range of analysis features. By being scalable from desktops to CAVEs and providing a link to a central integration platform, flapAssist integrates well into established factory planning workflows.
________________________________________
An Immersive Labyrinth
Copper Giloth - University of Massachusetts
Jonathan Tanant - JonLab
Presenting Author: Copper Frances Giloth
Abstract: We have developed a 3D VR digital heritage App providing a virtual experience of an elaborate labyrinth that existed in the gardens of the Chateau de Versailles in the 17th and 18th centuries. We are now taking this App into an immersive environment (using an Oculus Rift headset); we will report on the progress of and conclusions from this porting process.
________________________________________
Towards Context-Sensitive Reorientation for Real Walking in Virtual Reality
Timofey Grechkin - USC Institute for Creative Technologies
Mahdi Azmandian - USC Institute for Creative Technologies
Mark Bolas - USC Institute for Creative Technologies
Evan Suma - USC Institute for Creative Technologies
Presenting Author:
Abstract: Redirected walking techniques help overcome physical limitations for natural locomotion in virtual reality. Though subtle perceptual manipulations are helpful, it is inevitable that users will approach critical boundary limits. Current solutions to this problem involve breaks in presence by introducing distractors, or freezing the virtual world relative to the user’s perspective. We propose an approach that integrates into the virtual world narrative to draw users’ attention and cause them to temporarily alter their course to avoid going off bounds. This method ties together unnoticeable translation, rotation, and curvature gains, efficiently reorienting the user while maintaining the user’s sense of immersion.
________________________________________
Incorporating D3.js Information Visualization into Immersive Virtual Environments
Wesley Griffin - National Institute of Standards and Technology
Danny Catacora - National Institute of Standards and Technology
Steven Satterfield - National Institute of Standards and Technology
Jeffrey Bullard - National Institute of Standards and Technology
Judith Terrill - National Institute of Standards and Technology
Presenting Author: Wesley Griffin
Abstract: We have created an integrated interactive visualization and analysis environment that can be used immersively or on the desktop to study a simulation of microstructure development during hydration or degradation of cement pastes and concrete. Our environment combines traditional 3D scientific data visualization with 2D information visualization using D3.js running in a web browser. By incorporating D3.js, our visualization allowed the scientist to quickly diagnose and debug errors in the parallel implementation of the simulation.
________________________________________
Using Interactive Virtual Characters in Social Neuroscience
Joanna Hale - University College London
Xueni Pan - University College London
Antonia F. de C. Hamilton - University College London
Presenting Author: Xueni Pan
Abstract: Recent advances in the technical ability to build realistic and interactive Virtual Environments have allowed neuroscientists to study social cognition and behavior in virtual reality. This is particularly useful in the study of social neuroscience, where the physical appearance and motion of Virtual Characters can be fully controlled. In this work, we present the design, implementation, and preliminary results of two case studies exploring different types of social cognition (congruency effect and mimicry) using interactive Virtual Characters animated with either real-time streamed or pre-recorded motion capture data.
________________________________________
Towards A High Fidelity Simulation of the Kidney Biopsy Procedure
Gareth Henshall, Bangor University
Serban Pop, Bangor University
Marc Edwards, Bangor University
Llyr ap Cenydd, Bangor University
Nigel John, Bangor University
Presenting Author: Gareth Henshall
Abstract: Work in progress on the development of a novel virtual environment for training the kidney biopsy procedure is presented. Our goal is to provide an affordable, high fidelity simulation through the integration of some of the latest off-the-shelf technology components. The range of forces encountered during this procedure has been recorded using a custom designed force-sensitive glove and then applied within the simulation.
________________________________________
AR-SSVEP for Brain-Machine Interface: Estimating User's Gaze in Head-mounted Display with USB camera
Shuto Horii - Toyohashi University of Technology
Shigeki Nakauchi - Toyohashi University of Technology
Michiteru Kitazaki - Toyohashi University of Technology
Presenting Author: Shuto Horii
Abstract: We aim to develop a brain-machine interface (BMI) system that estimates the user's gaze or attention on an object in order to pick it up in the real world with augmented reality technology. We measured steady-state visual evoked potentials (SSVEP) using luminance- and/or contrast-modulated flickers of photographic scenes presented on a head-mounted display (HMD), and then measured SSVEP using luminance- and contrast-modulated flickers at AR markers in real scenes that were captured online by a USB camera and presented on the HMD. We obtained significantly good performance, which is promising for future online estimation of gaze.
________________________________________
Five Senses Theatre Project: Sharing Experiences through Bodily Ultra-Reality
Yasushi Ikei - Tokyo Metropolitan University
Seiya Shimabukuro - Tokyo Metropolitan University
Shunki Kato - Tokyo Metropolitan University
Kohei Komase - Tokyo Metropolitan University
Yujiro Okuya - Tokyo Metropolitan University
Koichi Hirota - The University of Tokyo
Michiteru Kitazaki - Toyohashi University of Technology
Tomohiro Amemiya - NTT
Presenting Author: Yasushi Ikei
Abstract: The Five Senses Theatre project was established to develop a basic technology that enables a user to relive the spatial motion of another person as if the user had experienced it in person. This technology aims to duplicate a bodily experience performed in real space and to pass it on to another person. The system creates the sensation of self-body motion by multisensory stimulation, specifically vestibular and proprioceptive stimulation, applied passively to the real body. Walking for sightseeing and the legendary run of a top athlete were the first examples of spatial experience copying.
________________________________________
Robust Enhancement of Depth Images from Kinect Sensor
ABM Tariqul Islam - PhD Student, Visual Computing Lab, University of Rostock, Germany
Christian Scheel - PhD Student, Visual Computing Lab, University of Rostock, Germany
Renato Pajarola - Professor, Visualization and MultiMedia Lab, University of Zürich, Switzerland
Oliver Staadt - Professor, Visual Computing Lab, University of Rostock, Germany
Presenting Author: ABM Tariqul Islam
Abstract: We propose a new method to fill missing/invalid values in depth images generated by the Kinect depth sensor. To fill the missing depth values, we use a robust least median of squares (LMedS) approach. We apply our method to telepresence environments, where Kinects are often used for reconstructing the captured scene in 3D. We introduce a modified 1D LMedS approach for efficient traversal of consecutive image frames. Our approach addresses the unstable nature of depth values in static scenes, which is perceived as flickering. We obtain very good results for both static and moving objects in a scene.
________________________________________
Desktop Versions of the String-based Haptic Interface - SPIDAR
Anusha Jayasiri - Tokyo Institute of Technology
Shuhan Ma - Tokyo Institute of Technology
Yihan Qian - Tokyo Institute of Technology
Katsuhito Akahane - Tokyo Institute of Technology
Makoto Sato - Tokyo Institute of Technology
Presenting Author: Shuhan Ma
Abstract: Haptic interfaces have seen vast development and significant adoption worldwide for virtual reality applications. SPIDAR, which stands for `SPace Interface Device for Artificial Reality`, is a string-based, user-friendly human interface developed in the Sato Makoto Laboratory at the Tokyo Institute of Technology that can be used in various types of virtual reality applications, from simple pick-and-place tasks to more complicated physical interactions in virtual worlds. Among the family of SPIDAR devices, here we introduce the research and development of some desktop versions of the SPIDAR haptic interface called SPIDAR-G, SPIDAR-I, and SPIDAR-mouse.
________________________________________
Registration and Projection method of tumor region projection for breast cancer surgery assistance
Motoko Kanegae, Graduate School of Science and Technology, Keio University
Jun Morita, Graduate School of Science and Technology, Keio University
Sho Shimamura, Graduate School of Science and Technology, Keio University
Yuji Uema, Graduate School of Media Design, Keio University
Maiko Takahashi, Department of Surgery, School of Medicine, Keio University
Masahiko Inami, Graduate School of Media Design, Keio University
Tetsu Hayashida, Department of Surgery, School of Medicine, Keio University
Maki Sugimoto, Graduate School of Science and Technology, Keio University
Presenting Author: Jun Morita
Abstract: This paper introduces a registration and projection method for directly projecting the tumor region for breast cancer surgery assistance, based on the breast procedure of our collaborating doctor. We investigated the steps of our collaborating doctor's breast cancer procedure and how they can be applied to tumor region projection. We propose a novel way of MRI acquisition so we may correlate the MRI coordinates to the patient in the real world. By calculating the transformation matrix from the MRI coordinates and the coordinates of the markers that are on the patient, we are able to register the acquired MRI data to the patient. Our registration and presentation method for the tumor region was then evaluated by medical doctors.
________________________________________
3D Node Localization from Node-to-Node Distance Information using Cross-Entropy Method
Shohei Ukawa - Osaka Univ.
Tatsuya Shinada - Osaka Univ.
Masanori Hashimoto - Osaka Univ.
Yuichi Itoh - Osaka Univ.
Takao Onoye - Osaka Univ.
Presenting Author: Shohei Ukawa
Abstract: This paper proposes a 3D node localization method that uses the cross-entropy method for a 3D modeling system. The proposed localization method statistically estimates the most probable positions, overcoming measurement errors through iterative sample generation and evaluation. The generated samples are evaluated in parallel, so a significant speedup can be obtained. We also demonstrate that the iterative sample generation and evaluation performed in parallel are highly compatible with interactive node movement.
________________________________________
BlenderVR: Open-source framework for interactive and immersive VR
Brian F.G. Katz - LIMSI-CNRS
Dalai Q. Felinto - LIMSI-CNRS
Damien Touraine - LIMSI-CNRS
David Poirier-Quinot - LIMSI-CNRS
Patrick Bourdot - LIMSI-CNRS
Presenting Author: Dalai Felinto
Abstract: BlenderVR is an open-source framework for interactive/immersive applications based on extending the Blender Game Engine to Virtual Reality. BlenderVR (a generalization of BlenderCAVE) now addresses additional platforms (e.g., HMDs, video walls). BlenderVR provides a flexible, easy-to-use framework for the creation of VR applications for various platforms, employing the power of the BGE graphics rendering and physics engines. Compatible with the three major operating systems, BlenderVR is developed by VR researchers with support from the Blender community. BlenderVR currently handles multi-screen/multi-user tracked stereoscopic rendering through efficient master/slave synchronization, with external multimodal interactions via the OSC and VRPN protocols.
________________________________________
Scalable Metadata In- and Output for Multi-platform Data Annotation Applications
Sebastian Pick - Virtual Reality Group, RWTH Aachen University
Sascha Gebhardt - Virtual Reality Group, RWTH Aachen University
Bernd Hentschel - Virtual Reality Group, RWTH Aachen University
Torsten W. Kuhlen - Virtual Reality Group, RWTH Aachen University
Presenting Author: Sebastian Pick
Abstract: Metadata input and output are important steps within the data annotation process. However, selecting techniques that effectively facilitate these steps is non-trivial, especially for applications that have to run on multiple virtual reality platforms. Not all techniques are applicable to or available on every system, requiring workflows to be adapted on a per-system basis. Here, we describe a metadata handling system based on Android's Intent system that automatically adapts workflows and thereby makes manual adaptation unnecessary.
________________________________________
An Experimental Study on the Virtual Representation of Children
Ranchida Khantong - University College London
Xueni Pan - University College London
Mel Slater - University College London
Presenting Author: Xueni Pan
Abstract: Is it their movements or their appearance that helps us identify a child as a child? We created four video clips of a walking Virtual Character, with different combinations of either child or adult animation applied to either a child or adult body. An experimental study was conducted with 53 participants who viewed all four videos in random order. Participants reported higher levels of empathy, care, and feelings of protection towards the child character as compared to the adult character. Moreover, compared to appearance, animation seems to play a bigger role in invoking participants’ emotional responses.
________________________________________
Human-Avatar Interaction and Recognition Memory according to Interaction Types and Methods
Mingyu Kim - Hanyang University
Woncheol Jang - Hanyang University
Kwanguk (Kenny) Kim - Hanyang University
Presenting Author: Mingyu Kim
Abstract: For several decades, researchers have studied human-avatar interaction using virtual reality (VR). However, the relationship between a human’s recognition memory and interaction types/methods has not been sufficiently considered. In the current study, we designed a VR interaction paradigm with two different types of interaction, initiating and responding, and two interaction methods, head-gazing and hand-pointing. The results indicated significant differences in recognition memory between the initiating and responding interactions. These results suggest that human-avatar interaction may show patterns similar to human-human interaction with respect to recognition memory.
________________________________________
Dynamic Hierarchical Virtual Button-based Hand Interaction for Wearable AR
Hyejin Kim - Korea Institute of Science and Technology
Elisabeth Adelia Widjojo - Korea Institute of Science and Technology
Jae-In Hwang - Korea Institute of Science and Technology
Presenting Author: Hyejin Kim
Abstract: This paper presents a novel bare-hand interaction method for wearable AR (augmented reality). The suggested method uses hierarchical virtual buttons placed on the image target, allowing us to provide precise hand interaction on the image target surface while using wearable AR. The method operates on a wearable AR system and uses an image target tracker to create occlusion-based interaction buttons. We introduce the hierarchical virtual button method, which is adequate for more precise and faster interaction with augmented objects.
________________________________________
3D Position Measurement of Planar Photo Detector Using Gradient Patterns
Tatsuya Kodera, Keio University
Maki Sugimoto, Keio University
Ross Smith, The University of South Australia
Bruce Thomas, The University of South Australia
Presenting Author: Tatsuya Kodera
Abstract: We propose a three-dimensional position measurement method employing planar photo detectors to calibrate a Spatial Augmented Reality system of unknown geometry. In Spatial Augmented Reality, projectors overlay images onto objects in the physical environment, which requires aligning the images with the physical objects. Traditional camera-based 3D position tracking systems, such as multi-camera motion capture systems, detect the positions of optical markers in the two-dimensional image plane of each camera device, so those systems require multiple cameras at known locations to obtain the 3D positions of the markers. We introduce a method for detecting the 3D position of a planar photo detector by projecting gradient patterns. The main contribution of our method is to align the projected images with the physical objects while simultaneously measuring the geometry of the objects for Spatial Augmented Reality applications.
________________________________________
What Can We Feel on the Back of the Tablet? -- A Thin Mechanism to Display Two Dimensional Motion on the Back and Its Characteristics --
Itsuo Kumazawa - Imaging Science and Engineering Laboratory, Tokyo Institute of Technology
Minori Takao - Imaging Science and Engineering Laboratory, Tokyo Institute of Technology
Yusuke Sasaki - Imaging Science and Engineering Laboratory, Tokyo Institute of Technology
Shunsuke Ono - Imaging Science and Engineering Laboratory, Tokyo Institute of Technology
Presenting Author: Itsuo Kumazawa
Abstract: The front surface of a tablet computer is dominated by the touch screen and is used mostly to display visual information. Under this design, using the rear surface of the tablet for tactile display is promising, as the fingers holding the tablet constantly touch it and can feel feedback steadily. In this paper, a slim tactile feedback mechanism that can easily be installed on the back of existing tablets is presented, and its mechanical performance regarding power consumption, latency, and force is evaluated. Human capability in perceiving the tactile information on the display is also evaluated.
________________________________________
Cooperation in Virtual Environments with Individual Views
Vincent Küszter - Technische Universität Chemnitz
Guido Brunnett - Technische Universität Chemnitz
Daniel Pietschmann - Technische Universität Chemnitz
Presenting Author: Vincent Küszter
Abstract: When users interact collaboratively in a virtual environment, it cannot be guaranteed that every user has the same input device or access to the same information. Our research aims at understanding the effects of such asymmetries on user embodiment in collaborative virtual environments (CVEs). To this end, we have developed a prototyping platform for cooperative interaction between two users. To change the information a user has, we incorporate "special views" for each person. An easily expandable array of input devices is also supported.
________________________________________
Interaction with Virtual Agents – Comparison of the participants’ experience between an IVR and a semi-IVR system
Marios Kyriakou - Department of Computer Science, University of Cyprus
Xueni Pan - Institute of Cognitive Neuroscience, University College London
Yiorgos Chrysanthou - Department of Computer Science, University of Cyprus
Presenting Author: Marios Kyriakou
Abstract: Are our behavior and experience the same in IVR and semi-IVR systems when we navigate through a virtual environment populated with virtual agents? Through experiments we show that it is especially important for semi-IVR systems to facilitate collision avoidance between users and virtual agents, accompanied by basic interaction between them. This can increase the sense of presence and make the virtual agents and the environment appear more realistic and lifelike.
________________________________________
Multiple Devices as Windows for Virtual Environment
Jooyoung Lee - Konkuk University
Hasup Lee - Konkuk University
BoYu Gao - Konkuk University
HyungSeok Kim - Konkuk University
Jee-In Kim - Konkuk University
Presenting Author: Jooyoung Lee
Abstract: We introduce a method for using multiple devices as windows for interacting with a 3D virtual environment. Given a life-size virtual environment, each device shows a view of the 3D virtual space according to its position and orientation. By adopting mobile devices in our system, users can not only see beyond the stationary screen by relocating their mobile device, but also have a personalized view of the working space. To acquire each device's position and orientation, we adopt vision-based approaches. Finally, we introduce an implementation of a system for managing multiple devices and keeping their views synchronized.
________________________________________
Synthesis of Omnidirectional Movie using a Set of Key Frame Panoramic Images
Roberto Lopez-Gulliver - Ritsumeikan University
Takahiro Hatamoto - Ritsumeikan University
Kohei Matsumura - Ritsumeikan University
Haruo Noma - Ritsumeikan University
Presenting Author: Roberto Lopez-Gulliver
Abstract: We aim to provide an interactive and entertaining environment for users to better enjoy and stay motivated during indoor training. We are developing a virtual treadmill-based training system allowing users to experience walking or running around various real scenes. A set of key frame 360-degree panoramic images on a grid is used to synthesize, in real time, an omnidirectional movie for any possible path the user takes. The playback smoothness of the synthesized movie depends on the separation (grid pitch) between key frames. Preliminary experimental results help us determine the largest grid pitch that does not compromise playback smoothness.
________________________________________
The Influence of Virtual Reality and Mixed Reality Environments combined with two different Navigation Methods on Presence
Mario Lorenz, Institute for Machine Tools and Production Processes, Technische Universität Chemnitz
Marc Busch, Austrian Institute of Technology GmbH
Loukas Rentzos, Laboratory for Manufacturing Systems and Automation, University of Patras
Manfred Tscheligi, Austrian Institute of Technology GmbH
Philipp Klimant, Institute for Machine Tools and Production Processes, Technische Universität Chemnitz
Peter Fröhlich, Austrian Institute of Technology GmbH
Presenting Author: Mario Lorenz
Abstract: For various VR/MR/AR applications, such as virtual usability studies, it is very important that participants have the feeling that they are really in the environment. This feeling of "being" in a mediated environment is described as presence. Two important factors that influence presence are the level of immersion and the navigation method. We developed two navigation methods to simulate natural walking using a Wii Balance Board and a Kinect sensor. In this preliminary study, we examined the effects of these navigation methods and the level of immersion on participants' perceived presence in a 2x2 factorial between-subjects study with 32 participants in two different VEs (Powerwall and Mixed-Reality See-Through Glasses). The results indicate that, for some facets of presence, reported presence is higher with Kinect navigation and the Powerwall.
________________________________________
Avatar Embodiment Realism and Virtual Fitness Training
Jean-Luc Lugrin - Universität Würzburg, Würzburg, GERMANY
Maximilian Landeck - Universität Würzburg, Würzburg, GERMANY
Marc Erich Latoschik - Universität Würzburg, Würzburg, GERMANY
Presenting Author: Marc Erich Latoschik
Abstract: In this paper we present a preliminary study of the impact of avatar realism on the illusion of virtual body ownership (IVBO) when using a full-body virtual mirror for fitness training. We evaluated three main types of user representation: realistic and non-realistic avatars, as well as no avatar at all. Our results revealed that a same-gender realistic human avatar elicited a slightly higher level of illusion and performance. However, qualitative analysis of open questions revealed that the feeling of power was higher with non-realistic, strong-looking avatars.
________________________________________
Avatar Anthropomorphism and Illusion of Body Ownership in VR
Jean-Luc Lugrin, University of Wuerzburg
Johanna Latt, University of Wuerzburg
Marc Erich Latoschik, University of Wuerzburg
Presenting Author: Marc Erich Latoschik
Abstract: In this paper we present a novel experiment to explore the impact of avatar realism on the illusion of virtual body ownership (IVBO) in immersive virtual environments, with full-body avatar embodiment and freedom of movement. We evaluated four distinct avatars (a humanoid robot, a block-man, and both a male and a female human adult) presenting an increasing level of anthropomorphism in their detailed composition (specific body part shapes, scale, dimensions, surface topology, texture, and colour). Our results revealed that each avatar elicited a relatively high level of illusion. However, both the machine-like and cartoon-like avatars elicited an equivalent IVBO, slightly superior to the human ones. A realistic human appearance is therefore not a critical top-down factor of IVBO, and could lead to an Uncanny Valley effect.
________________________________________
Influence of Avatar Realism on Stressful Situation in VR
Jean-Luc Lugrin - Universität Würzburg, Würzburg, GERMANY
Maximilian Wiedemann - Universität Würzburg, Würzburg, GERMANY
Daniel Bieberstein - Universität Würzburg, Würzburg, GERMANY
Marc Erich Latoschik - Universität Würzburg, Würzburg, GERMANY
Presenting Author: Marc Erich Latoschik
Abstract: In this paper we present a study of the impact of avatar realism on user experience and performance in stressful immersive virtual environments. We evaluated a stressful and a stress-free environment with partial avatar embodiment under low (iconic) and high (photorealistic) visual fidelity conditions. An experiment with forty participants did not reveal any significant differences between the two graphical versions. This first result represents an interesting finding, since non-realistic avatar and environment representations are faster and more economical to produce while requiring fewer computational resources.
________________________________________
Extending Touch-less Interaction on Vision Based Wearable Device
Zhihan Lv - Chinese Academy of Science
Liangbing Feng - Chinese Academy of Science
Shengzhong Feng - Chinese Academy of Science
Haibo Li - Royal Institute of Technology (KTH)
Presenting Author: Zhihan Lv
Abstract: A touch-less interaction technology for vision-based wearable devices is designed and evaluated. Users interact with the application through dynamic hand/foot gestures in front of the camera, which trigger interaction events with the virtual objects in the scene. Several proof-of-concept prototypes with eleven dynamic gestures are developed based on the touch-less interaction. Finally, a comparative user study is presented to demonstrate the usability of the touch-less approach, as well as its impact on the user's emotions, running on a wearable framework or Google Glass.
________________________________________
Blind in a Virtual World: Using Sensory Substitution for Generically Increasing the Accessibility of Graphical Virtual Environments
Shachar Maidenbaum, HUJI
Sami Abboud, HUJI
Galit Buchs, HUJI
Amir Amedi, HUJI
Presenting Author: Shachar Maidenbaum
Abstract: Graphical virtual environments are currently far from accessible to the blind, as most of their content is visual. While several previous environment-specific tools have indeed increased accessibility to specific environments, they do not offer a generic solution. This is especially unfortunate as such environments hold great potential for the blind, e.g., for safe orientation and learning. Visual-to-audio Sensory Substitution Devices (SSDs) can potentially increase their accessibility in such a generic fashion by sonifying the on-screen content regardless of the specific environment. Using SSDs also taps into the skills gained from using these same SSDs for completely different tasks, including in the real world. However, whether congenitally blind users will be able to use this information to perceive and interact successfully in virtual environments is currently unclear. We tested this using the EyeMusic SSD, which conveys shape and color information, to perform virtual tasks otherwise not possible without vision. We show that these tasks can be accomplished by the congenitally blind.
________________________________________
I Built It! - Exploring the effects of Customizable Virtual Humans on Adolescents with ASD
Chao Mei, University of Texas at San Antonio
Lee Mason, University of Texas at San Antonio
John Quarles, University of Texas at San Antonio
Presenting Author: Chao Mei
Abstract: Virtual Reality (VR) training games have many potential benefits for autism spectrum disorder (ASD) therapy, such as increasing motivation and improving the ability to perform activities of daily living. Persons with ASD often have deficits in hand-eye coordination, which makes many activities of daily living difficult. A VR game that trains hand-eye coordination could help users with ASD improve their quality of life. Moreover, incorporating users' interests into the game could be a good way to build a motivating game for users with ASD. We propose a Customizable Virtual Human (CVH), which enables users with ASD to easily customize a virtual human and then interact with the CVH in a 3D task. Specifically, we investigated the effects of CVHs with a VR hand-eye coordination training game - Imagination Soccer - and conducted a user study with adolescents with high-functioning ASD. We compared participants' 3D interaction performance, game performance, and user experience (i.e., presence, involvement, and flow) under CVH and Non-customizable Virtual Human (NCVH) conditions. The results indicate that CVHs could effectively improve performance in 3D interaction tasks (i.e., blocking a soccer ball) for users with ASD, motivate them to play the game more, and offer a better user experience.
________________________________________
The Effects of Olfaction on Training Transfer for an Assembly Task
Alec Moore, University of Texas at Dallas
Nicolas Herrera, University of Texas at Dallas
Tyler Hurst, University of Texas at Dallas
Ryan McMahan, University of Texas at Dallas
Sandra Poeschl, TU Ilmenau
Presenting Author: Ryan P. McMahan
Abstract: Context-dependent memory studies have indicated that olfaction, the sense of smell, has a special odor memory that can significantly improve recall in some cases. Virtual reality (VR), which has been investigated as a training tool, could feasibly benefit from odor memory by incorporating olfactory stimuli. There have been a few studies on this concept for semantic learning, but not for procedural training. To address this gap in knowledge, we investigated the effects of olfaction on the transfer of knowledge from training to next-day execution for building a complex LEGO jet-plane model. Our results indicate that the pleasantness of an odor significantly affects training transfer more than whether the encoding and recall contexts match.
________________________________________
MRI Overlay System Using Optical See-Through for Marking Assistance
Jun Morita - Graduate School of Science and Technology, Keio University
Sho Shimamura - Graduate School of Science and Technology, Keio University
Motoko Kanegae - Graduate School of Science and Technology, Keio University
Yuji Uema - Graduate School of Media Design, Keio University
Maiko Takahashi - Department of Surgery, School of Medicine, Keio University
Masahiko Inami - Graduate School of Media Design, Keio University
Tetsu Hayashida - Department of Surgery, School of Medicine, Keio University
Maki Sugimoto - Graduate School of Science and Technology, Keio University
Presenting Author: Jun Morita
Abstract: In this paper, we propose an augmented reality system that superimposes MRI images onto a patient model. We use a half-silvered mirror and a handheld device to superimpose the MRI onto the patient model. By tracking the coordinates of the patient model and the handheld device using optical markers, we are able to transform the images to the corresponding position. Voxel data of the MRI are generated so that the user is able to view the MRI from many different angles.
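The transform chain implied by this tracking can be sketched as follows. This is a minimal illustration, assuming NumPy and 4x4 homogeneous matrices; the function and frame names are ours, not the authors':

    # Compose: MRI volume -> patient model -> tracker -> handheld device.
    # T_a_b denotes the pose of frame b expressed in frame a (4x4 matrix).
    import numpy as np

    def mri_to_device(T_tracker_patient, T_tracker_device, T_patient_mri):
        # Invert the device pose to map from tracker to device coordinates,
        # then chain through the patient model to reach the MRI volume.
        T_device_tracker = np.linalg.inv(T_tracker_device)
        return T_device_tracker @ T_tracker_patient @ T_patient_mri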
________________________________________
Continuous Automatic Calibration for Optical See-Through Displays
Kenneth Moser - Mississippi State University
Yuta Itoh - Technical University Munich
J. Edward Swan II - Mississippi State University
Presenting Author: Kenneth Moser
Abstract: The recent advent of consumer-level optical see-through (OST) head-mounted displays (HMDs) has greatly broadened the accessibility of Augmented Reality (AR), not only to researchers but also to the general public. This increased user base heightens the need for robust automatic calibration mechanisms suited to non-technical users. We are developing a fully automated calibration system for two stereo OST HMDs based on the recently introduced interaction-free display calibration (INDICA) method. Our current efforts also focus on the development of an evaluation process to assess the performance of the system during use by non-expert subjects.
________________________________________
Comparing the Performance of Natural, Semi-Natural, and Non-Natural Locomotion Techniques in Virtual Reality
Mahdi Nabiyouni, Center for Human-Computer Interaction and Department of Computer Science, Virginia Tech
Ayshwarya Saktheeswaran, Center for Human-Computer Interaction and Department of Computer Science, Virginia Tech
Doug Bowman, Center for Human-Computer Interaction and Department of Computer Science, Virginia Tech
Ambika Karanth, Center for Human-Computer Interaction and Department of Computer Science, Virginia Tech
Abstract: One of the goals of much virtual reality (VR) research is to increase realism. In particular, many techniques for locomotion in VR attempt to approximate real-world walking. However, it is not yet fully understood how the design of more realistic locomotion techniques affects user task performance. We performed an experiment to compare a semi-natural locomotion technique (based on the Virtusphere device) with a traditional, non-natural technique (based on a game controller) and a fully natural technique (real walking). We found that the Virtusphere technique was significantly slower and less accurate than both of the other techniques. Based on this result and others in the literature, we speculate that locomotion techniques with moderate interaction fidelity will often have performance inferior to both high-fidelity techniques and well-designed low-fidelity techniques. We argue that our experimental results are an effect of interaction fidelity, and perform an analysis of the fidelity of the three locomotion techniques to support this argument.
________________________________________
Implementation of on-site virtual time machine for mobile devices
Junichi Nakano - The University of Tokyo
Takuji Narumi - The University of Tokyo
Tomohiro Tanikawa - The University of Tokyo
Michitaka Hirose - The University of Tokyo
Presenting Author: Junichi Nakano
Abstract: We developed a system for mobile devices designed to provide a virtual experience of past scenery depicted in old photographs by superimposing them on landscapes in video see-through frames. The user captures a photograph of the landscape and enters correspondence points between the new and old photos. The old photograph is then deformed by a projective transformation based on these correspondence points and superimposed onto the video see-through frames of the current landscape. To achieve real-time, robust superimposition on mobile devices, both motion-sensor-based and camera-image-keypoint-tracking-based pose information are used to track the device's camera pose.
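The deformation step is a standard homography estimation from the user-entered point pairs. A minimal sketch, assuming OpenCV and at least four correspondences (names are ours):

    import cv2
    import numpy as np

    def warp_old_photo(old_photo, pts_old, pts_new, frame_size):
        # pts_old / pts_new: (N, 2) matching points (N >= 4) in old-photo
        # and current-frame pixel coordinates, entered by the user.
        src = np.asarray(pts_old, dtype=np.float32)
        dst = np.asarray(pts_new, dtype=np.float32)
        # Estimate the projective transformation (homography).
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
        # Deform the old photo into the current frame for superimposition.
        return cv2.warpPerspective(old_photo, H, frame_size)  # (w, h)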
________________________________________
The Effect of Head Mounted Display Weight and Locomotion Method on the Perceived Naturalness of Virtual Walking Speeds
Niels Christian Nilsson - Aalborg University Copenhagen
Stefania Serafin - Aalborg University Copenhagen
Rolf Nordahl - Aalborg University Copenhagen
Presenting Author: Niels Christian Nilsson
Abstract: This poster details a study investigating the effect of Head Mounted Display (HMD) weight and locomotion method (Walking-In-Place and treadmill walking) on the perceived naturalness of virtual walking speeds. The results revealed significant main effects of locomotion method, but no significant effects of HMD weight were identified.
________________________________________
Third person's footsteps enhanced walking sensation of seated person
Yujiro Okuya - Tokyo Metropolitan University
Yasushi Ikei - Tokyo Metropolitan University
Tomohiro Amemiya - NTT
Koichi Hirota - The University of Tokyo
Presenting Author: Yujiro Okuya
Abstract: We developed an audio-tactile display system to evoke a pseudo-walking sensation in a seated participant. Vibration was applied to the heel and toe to imitate the cutaneous sensation of the sole during walking. As auditory stimuli, the sounds of the participant's own and another walker's footsteps were delivered through headphones; only the other walker's sound was moved along several trajectories in the VR space. In an experiment conducted to elucidate the sense of walking, the third person's footsteps enhanced not only the walking sensation but also the translational sensation of the seated participant.
________________________________________
Does Vibrotactile Intercommunication Increase Collaboration?
Victor Adriel Oliveira - UFRGS
Anderson Maciel - UFRGS
Wilson Sarmiento - Universidad del Cauca
Luciana Nedel - UFRGS
César Collazos - Universidad del Cauca
Presenting Author: Victor Adriel de J. Oliveira
Abstract: Communication is a fundamental process in collaborative work. In natural conditions, communication between team members is multimodal, which allows for redundancy, adaptation to different contexts, and different levels of focus. In collaborative virtual environments (CVEs), however, hardware limitations and a lack of appropriate interaction metaphors reduce the amount of collaboration. In this poster, we propose the design and use of a vibrotactile language to improve user intercommunication in CVEs and, consequently, to increase the amount of effective collaboration.
________________________________________
Coupled-Clay: Physical-Virtual 3D Collaborative Interaction Environment
Kasım Özacar - Research Institute of Electrical Communication
Takuma Hagiwara - Research Institute of Electrical Communication
Jiawei Huang - Research Institute of Electrical Communication
Kazuki Takashima - Research Institute of Electrical Communication
Yoshifumi Kitamura - Research Institute of Electrical Communication
Presenting Author: Kasım Özacar
Abstract: Coupled-Clay is a bi-directional 3D collaborative interactive environment that enables 3D modeling work between groups of users at two remote spaces: the Physical Interaction Space and the Virtual Interaction Space. The Physical Interaction Space enables a user to directly manipulate a physical object whose shape and position are precisely tracked. The shape is transferred to the Virtual Interaction Space, where users observe the virtual shape corresponding to the physical object and manipulate its geometrical and graphical attributes using a multi-user stereoscopic tabletop display. The manipulations are reflected back to the physical object by a robotic arm and a top-mounted projector.
________________________________________
GPU-accelerated Attention Map Generation for Dynamic 3D Scenes
Thies Pfeiffer, CITEC, Faculty of Technology, Bielefeld University
Cem Memili, Faculty of Technology, Bielefeld University
Presenting Author: Thies Pfeiffer
Abstract: Measuring visual attention has become an important tool during product development. Attention maps are important qualitative visualizations to communicate results within the team and to stakeholders. We have developed a GPU-accelerated approach which allows for real-time generation of attention maps for 3D models that can, e.g., be used for on-the-fly visualizations of visual attention distributions and for the generation of heat-map textures for offline high-quality renderings. The presented approach is unique in that it works with monocular and binocular data, respects the depth of focus, can handle moving objects and is ready to be used for selective rendering.
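The core accumulation step can be illustrated on the CPU. A minimal sketch, assuming NumPy; the authors' system performs the equivalent splatting in real time on the GPU, and the parameters below are illustrative:

    import numpy as np

    def splat_gaze(attention_map, hit_uv, sigma_px=15.0, weight=1.0):
        # Accumulate one gaze hit (texture-space u, v in [0, 1]) as a
        # Gaussian footprint into a 2D attention-map texture.
        h, w = attention_map.shape
        cx, cy = hit_uv[0] * (w - 1), hit_uv[1] * (h - 1)
        ys, xs = np.mgrid[0:h, 0:w]
        d2 = (xs - cx) ** 2 + (ys - cy) ** 2
        attention_map += weight * np.exp(-d2 / (2.0 * sigma_px ** 2))
        return attention_map

    heat = splat_gaze(np.zeros((256, 256), np.float32), (0.4, 0.6))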
________________________________________
A Procedure for Accurate Calibration of a Tabletop Haploscope AR Environment
Nate Phillips - Mississippi State University
J. Edward Swan II - Mississippi State University
Presenting Author: Nate Phillips
Abstract: In previous papers, we have reported a novel haploscope-based augmented reality (AR) display system. The haploscope allows us to precisely set various optical display parameters, in order to study the interaction between optical and graphical display properties on such perceptual issues as depth presentation. While using the haploscope, it became clear that we needed to develop novel calibration procedures, both because of the novelty of the haploscope’s optical design, and also because of the required accuracy on the order of 1 mm. This poster proposes novel calibration procedures based on the use of perpendicular laser fans.
________________________________________
Using Astigmatism in Wide Angle HMDs to Improve Rendering
Daniel Pohl, Intel Corporation
Timo Bolkart, Saarland University
Stefan Nickels, Intel Visual Computing Institute
Oliver Grau, Intel Corporation
Presenting Author: Daniel Pohl
Abstract: Lenses in modern consumer HMDs introduce distortions such as astigmatism: only the center area of the displayed content is perceived as sharp, while with increasing distance from the center the image goes out of focus. We show, with three new approaches, that this undesired side effect can be used in a positive way to save calculations in blurry areas. For example, using sampling maps to lower the detail in areas where the image is blurred by astigmatism increases performance by a factor of 2 to 3. Further, we introduce a new calibration of user-specific viewing parameters that increases performance by about 20-75%.
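The sampling-map idea can be sketched as a radial falloff around the sharp lens center. This is a minimal illustration, assuming NumPy; the radii and rates below are ours, not the paper's measured values:

    import numpy as np

    def sampling_map(w, h, sharp_radius=0.35, min_rate=0.25):
        ys, xs = np.mgrid[0:h, 0:w]
        # Normalized distance from the image center (1.0 at the edge).
        r = np.hypot((xs - w / 2) / (w / 2), (ys - h / 2) / (h / 2))
        # Full sample rate in the sharp center; linearly reduced detail in
        # the periphery, where astigmatism blurs the image anyway.
        return np.where(r < sharp_radius, 1.0,
                        np.clip(1.0 - (r - sharp_radius), min_rate, 1.0))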
________________________________________
Shark Punch: A Virtual Reality Game for Aquatic Rehabilitation
John Quarles, University of Texas at San Antonio
Presenting Author: John Quarles
Abstract: We present a novel underwater VR game - Shark Punch - in which the user must fend off a virtual Great White shark with real punches in a real underwater environment. This poster presents our underwater VR system and our iterative design process through field tests with a user with disabilities. We conclude with proposed usability, accessibility, and system design guidelines for future underwater VR rehabilitation games.
________________________________________
Real-time SLAM for static multi-objects learning and tracking applied to augmented reality applications
Datta Ramadasan, Institut Pascal
Marc Chevaldonne, ISIT
Thierry Chateau, Institut Pascal
Presenting Author: Datta Ramadasan
Abstract: This paper presents a new approach to multi-object tracking from a video camera moving in an unknown environment. The tracking involves static objects of different known shapes, whose poses and sizes are determined online. For augmented reality applications, objects must be precisely tracked even if they are far from the camera or hidden. Camera poses are computed using simultaneous localization and mapping (SLAM) based on a bundle adjustment process that optimizes the problem parameters. We propose to include in an incremental bundle adjustment the parameters of the observed objects as well as the camera poses and 3D points. We show, through the example of 3D models of basic shapes (planes, parallelepipeds, cylinders, and spheres) coarsely initialized online using a manual selection, that the joint optimization constrains the 3D points to approach the objects and the objects to fit the 3D points. Moreover, we developed a generic, optimized library to solve this modified bundle adjustment and demonstrate the high performance of our solution compared to state-of-the-art alternatives. Real-time augmented reality experiments demonstrate the accuracy and robustness of our method.
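The joint-optimization idea can be sketched with a soft point-to-object residual added to the usual reprojection residuals. A minimal illustration for a single plane object with fixed camera poses, assuming SciPy; the authors' library is incremental and handles several shapes, which this sketch does not:

    import numpy as np
    from scipy.optimize import least_squares

    def residuals(params, observations, K, lam=0.1):
        # params = [plane normal (3), plane offset (1), 3D points (N*3)].
        n, d = params[:3], params[3]
        pts = params[4:].reshape(-1, 3)
        res = []
        for i, uv, R, t in observations:   # (point id, 2D obs, camera pose)
            p_cam = R @ pts[i] + t
            proj = K @ p_cam
            res.extend(proj[:2] / proj[2] - uv)   # reprojection error
        # Soft constraint pulling points onto the plane n.x + d = 0 (and,
        # symmetrically, the plane toward the points).
        res.extend(lam * (pts @ n + d) / np.linalg.norm(n))
        return np.asarray(res)

    # result = least_squares(residuals, x0, args=(observations, K))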
________________________________________
Social Presence with Virtual Glass
Holger Regenbrecht - University of Otago
Mohammed Alghamdi - University of Otago
Simon Hoermann - University of Otago
Tobias Langlotz - University of Otago
Mike Goodwin - University of Otago
Colin Aldridge - University of Otago
Presenting Author: Holger Regenbrecht
Abstract: We introduce the concept of a virtualized version of Google Glass called Virtual Glass. Virtual Glass is integrated into our collaborative virtual environment as a real-world metaphor for a communication device, one particularly suited for instructor-performer systems.
________________________________________
Semi-automatic Calibration of a Projector-Camera System Using Arbitrary Objects With Known Geometry
Christoph Resch, EXTEND3D GmbH
Peter Keitler, EXTEND3D GmbH
Christoffer Menk, Volkswagen AG
Gudrun Klinker, TU München
Presenting Author: Christoph Resch
Abstract: We propose a new semi-automatic calibration approach for projector-camera systems that - unlike existing auto-calibration approaches - additionally recovers the necessary global scale by projecting on an arbitrary object of known geometry from one view. Our method therefore combines surface registration with bundle adjustment optimization on points reconstructed from structured light projections. In simulations on virtual data and experiments with real data we demonstrate that our approach estimates the global scale robustly and is furthermore able to improve incorrectly guessed intrinsic and extrinsic calibration parameters.
________________________________________
Navigation in REVERIE's Virtual Environments
Fiona Rivera, Queen Mary University of London
Fons Kuijk, CWI Amsterdam
Ebroul Izquierdo, Queen Mary University of London
Presenting Author: Fiona M Rivera
Abstract: This work presents a novel navigation system for social collaborative virtual environments populated with multiple characters. The navigation system ensures collision-free movement of avatars and agents. It supports direct user manipulation, automated path planning, positioning to get seated, and follow-me behaviour for groups. In follow-me mode, the socially aware system manages the placement of individuals within a group. A use case centred on an educational virtual trip to the European Parliament, created for the REVERIE FP7 project, also serves as an example to bring forward aspects of such navigational requirements.
________________________________________
Collaborative Telepresence Workspaces for Space Operation and Science
David Roberts - University of Salford
Arturo Garcia - University of Salford
Janki Dodiya - Deutsches Zentrum für Luft- und Raumfahrt e.V (DLR)
Robin Wolf - Deutsches Zentrum für Luft- und Raumfahrt e.V (DLR)
Allen Fairchild - University of Salford
Terrence Fernando - University of Salford
Presenting Author: David Roberts
Abstract: We introduce the collaborative telepresence workspaces for SPACE operation and science that are under development in the European research project CROSS DRIVE. The vision is to give space mission controllers and scientists the impression of “beaming” to the surface of Mars, along with simulations of the environment and equipment, to step out together where a robot has moved or may move. We briefly overview the design and describe the state of the demonstrator. The contribution of this publication is to give an example of how collaborative Virtual Reality research is being taken up in space science.
________________________________________
Does Virtual Reality really affect visual perception of egocentric distance?
Thomas Rousset - Aix Marseille Universite
Christophe Bourdin - Aix Marseille Universite
Cedric Goulon - CNRS
Jocelyn Monnoyer - PSA Peugeot Citroen
Jean-Louis Vercher - CNRS
Presenting Author: Thomas Rousset
Abstract: Driving simulators are used to study human behavior in mobility. The aim of this study was to determine the effect of interactive factors (stereoscopy and motion parallax) on distance perception. After a training session, participants were asked to estimate the relative location of a car on the same road. The results suggest that distance perception does not depend on these interactive factors. However, the study revealed large interpersonal variability: two profiles of participants emerged, those who perceived distances accurately and those who underestimated them, as is usually reported. This classification was correlated with the level of performance during the training phase.
________________________________________
A GPU-Based Adaptive Algorithm for Non-Rigid Surface Registration
Antonio Carlos dos Santos Souza, Federal Institute of Bahia
Márcio Cerqueira de Farias Macedo, Federal University of Bahia
Antonio Lopes Apolinario Junior, Federal University of Bahia
Presenting Author: Antonio Carlos dos Santos Souza
Abstract: Non-rigid surface registration is fundamental when accurate tracking or reconstruction of 3D deformable shapes is desired. However, the majority of non-rigid registration methods are not as fast as those developed in the field of rigid registration. Fast methods for non-rigid surface registration are particularly interesting for markerless augmented reality applications, in which the object used as a marker can support non-rigid user interaction. In this paper, we present an adaptive algorithm for non-rigid surface registration. Taking advantage of this adaptivity and the parallelism of the GPU, we show that the proposed algorithm is capable of achieving near real-time performance with accuracy comparable to approaches proposed in the literature.
________________________________________
Characteristics of virtual walking sensation created by a 3-dof motion seat
Seiya Shimabukuro - TMU
Shunki Kato - TMU
Yasushi Ikei - TMU
Koichi Hirota - U-Tokyo
Tomohiro Amemiya - NTT
Michiteru Kitazaki - TUT
Presenting Author: Shunki Kato
Abstract: Rendering characteristics of a virtual walk are presented. A motion seat passively created small body motions in three DOF (lift, roll, and pitch) to make users feel as if they themselves were walking despite actually sitting. We consider the actual self-body a medium for rendering the virtual body, so that the experiences of others can be shared by using the motion seat. Basic characteristics of the perception levels of the virtual walk were measured and compared. The results show that the perception levels of the virtual walk produced by the motion seat were close to those of a real walk.
________________________________________
Self-Characteristics and Sound in Immersive Virtual Reality - Estimating Avatar Weight from Footstep Sounds
Erik Sikström, Aalborg University Copenhagen
Amalia de Götzen, Aalborg University Copenhagen
Stefania Serafin, Aalborg University Copenhagen
Presenting Author: Erik Sikström
Abstract: This experiment investigated whether a user controlling a full-body avatar via real-time motion tracking in an immersive virtual reality setup would estimate the weight of the virtual avatar differently if the footstep sounds were manipulated using three different audio filter settings. The visual appearance of the avatar was available in two sizes. The subjects performed six walks, with each audio configuration active once over two ground types. After completing each walk, the participants were asked to estimate the weight of the virtual avatar and the suitability of the audio feedback. The results indicate that the filters amplifying the two lower center frequencies shifted the subjects' estimates of the avatar's weight towards heavier, compared with the filter with the higher center frequency. There were no significant differences between the weight estimates of the two groups using the different avatar bodies.
________________________________________
Wings and Flying in Immersive VR - Controller Type, Sound Effects and Experienced Ownership and Agency
Erik Sikström, Aalborg University Copenhagen
Amalia de Götzen, Aalborg University Copenhagen
Stefania Serafin, Aalborg University Copenhagen
Presenting Author: Erik Sikström
Abstract: An experiment investigated the subjective experiences of ownership and agency over a pair of virtual wings attached to a motion-controlled avatar in an immersive virtual reality setup. A between-groups comparison was made of two ways of controlling the movement of the wings and the flight ability: one group achieved wing motion and flight using a hand-held video game controller, the other by moving their shoulders. Through four repetitions of a flight task with varying amounts of self-produced audio feedback (from the movement of the virtual limbs), the subjects evaluated their experienced embodiment of the wings on a body ownership and agency questionnaire. The results show significant differences between the controllers in some of the questionnaire items, and that adding self-produced sounds to the avatar slightly changed the subjects' evaluations.
________________________________________
Optical See-through HUDs Effect on Depth Judgments of Real World Objects
Missie Smith - Virginia Tech
Nadejda Doutcheva - Virginia Tech
Joseph Gabbard - Virginia Tech
Gary Burnett - University of Nottingham
Presenting Author: Missie Smith
Abstract: While AR HUD graphics offer opportunities for improved performance and safety, there exists a need to determine the effects of such graphics on human perception and workload. We are especially interested in examining this problem within the domain of surface transportation (e.g., driving). This work represents an initial step in understanding how AR graphics intended to visually cue driving hazards (e.g., pedestrian) may affect drivers’ depth judgments to real world driving hazards. This study explores whether Augmented Reality (AR) graphics have directional effects on users’ depth perception of real-world objects.
________________________________________
EVE: Exercise in Virtual Environments
Amaury SOLIGNAC - I.C.E.B.E.R.G.
Sebastien KUNTZ - MiddleVR
Presenting Author: Amaury SOLIGNAC
Abstract: EVE (Exercise in Virtual Environments) is an operational VR system designed for space, polar, and submarine crews. This system allows crewmembers, living and working in artificial habitats, to explore immersive natural landscapes during their daily physical exercise and to experience presence in a variety of alternate environments. Using recent hardware and software, this innovative psychological counter-measure aims at reducing the adverse effects of confinement and monotony in long-duration missions, while maintaining motivation for physical exercise. Initial testing with a proof-of-concept prototype was conducted near the South magnetic pole, as well as in transient microgravity.
________________________________________
Subjective Evaluation of Peripheral Viewing during Exposure to a 2D/3D Video Clip
Masumi Takada - Aichi Medical University
Masaru Miyao - Nagoya University
Hiroki Takada - University of Fukui
Presenting Author: Masumi Takada
Abstract: The present study examines the effects of peripheral vision on reported motion sickness in human subjects during 1 minute of exposure to 2D/3D video clips and for 1 minute afterwards. The Simulator Sickness Questionnaire was administered after exposure to the video clips with or without visual pursuit of a 3D object, and the results were compared. The questionnaire findings changed significantly after the subjects viewed the video clips peripherally. This influence may arise when subjects peripherally view a poorly depicted background element, which generates depth perception that contradicts daily experience.
________________________________________
Zoom Factor Compensation for Monocular SLAM
Takafumi Taketomi, Nara Institute of Science and Technology
Janne Heikkilä, University of Oulu
Presenting Author: Takafumi Taketomi
Abstract: SLAM algorithms are widely used in augmented reality applications for registering virtual objects. Most SLAM algorithms estimate camera poses and 3D positions of feature points using known intrinsic camera parameters that are calibrated and fixed in advance. This assumption means that the algorithm does not allow changing the intrinsic camera parameters during runtime. We propose a method for handling focal length changes in the SLAM algorithm. Our method is designed as a pre-processing step for the SLAM algorithm input. In our method, the change of the focal length is estimated before the tracking process of the SLAM algorithm. Camera zooming effects in the input camera images are compensated for by using the estimated focal length change. By using our method, camera zooming can be used in the existing SLAM algorithms such as PTAM with minor modifications. In the experiment, the effectiveness of the proposed method was quantitatively evaluated. The results indicate that the method can successfully deal with abrupt changes of the camera focal length.
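The compensation step can be sketched as rescaling the zoomed frame about the principal point. A minimal illustration, assuming OpenCV; variable names are ours:

    import cv2
    import numpy as np

    def compensate_zoom(frame, scale, principal_point):
        # scale = f_new / f_ref, the estimated focal-length change; the
        # frame is rescaled by 1/scale about the principal point so it
        # appears to be taken at the reference focal length, and an
        # unmodified SLAM system (e.g. PTAM) can run on the result.
        cx, cy = principal_point
        s = 1.0 / scale
        M = np.float32([[s, 0, (1 - s) * cx],
                        [0, s, (1 - s) * cy]])
        h, w = frame.shape[:2]
        return cv2.warpAffine(frame, M, (w, h))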
________________________________________
A Modified Tactile Brush Algorithm for Complex Touch Gestures
Fei Tang, University of Texas at Dallas
Ryan McMahan, University of Texas at Dallas
Eric Ragan, Oak Ridge National Laboratory
Tandra Allen, University of Texas at Dallas
Presenting Author: Ryan P. McMahan
Abstract: Several researchers have investigated phantom tactile sensation (i.e., the perception of a nonexistent actuator between two real actuators) and apparent tactile motion (i.e., the perception of a moving actuator due to time delays between onsets of multiple actuations). Prior work has focused primarily on determining appropriate Durations of Stimulation (DOS) and Stimulus Onset Asynchronies (SOA) for simple touch gestures, such as a single finger stroke. To expand upon this knowledge, we investigated complex touch gestures involving multiple, simultaneous points of contact, such as a whole hand touching the arm. To implement complex touch gestures, we modified the Tactile Brush algorithm to support rectangular areas of tactile stimulation.
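For context, the phantom-sensation model that the Tactile Brush builds on splits a desired intensity between two physical actuators so that perceived energy is preserved. A minimal sketch of that underlying model (not the modification proposed here):

    import math

    def phantom_intensities(b, a_v):
        # b in [0, 1]: normalized position of the phantom actuator between
        # physical actuators 1 and 2; a_v: desired phantom intensity.
        a1 = math.sqrt(1.0 - b) * a_v   # actuator nearer the b = 0 end
        a2 = math.sqrt(b) * a_v         # actuator nearer the b = 1 end
        return a1, a2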
________________________________________
Experiencing Interior Environments: New Approaches for the Immersive Display of Large-Scale Pointcloud Data
Ross Tredinnick, University of Wisconsin - Madison
Markus Broecker, University of Wisconsin - Madison
Kevin Ponto, University of Wisconsin - Madison
Presenting Author: Kevin Ponto
Abstract: This document introduces a new application for rendering massive LiDAR point cloud data sets of interior environments within high-resolution immersive VR display systems. Overall contributions are: to create an application which is able to visualize large-scale point clouds at interactive rates in immersive display environments, to develop a flexible pipeline for processing LiDAR data sets that allows display of both minimally processed and more rigorously processed point clouds, and to provide visualization mechanisms that produce accurate rendering of interior environments to better understand physical aspects of interior spaces. The work introduces three problems with producing accurate immersive rendering of LiDAR point cloud data sets of interiors and presents solutions to these problems. Rendering performance is compared between the developed application and a previous immersive LiDAR viewer.
________________________________________
Landscape Change From Daytime To Nighttime Under Augmented Reality Environment
Noriyuki Uda - Nagoya Sangyo University
Yoshitaka Kamiya - Nagoya Sangyo University
Mamoru Endo - Nagoya University
Takami Yasuda - Nagoya University
Presenting Author: Noriyuki Uda
Abstract: AR (Augmented Reality) technology, which uses actual landscape photography captured on-site with a camera as a base and then overlays virtual objects on that landscape image, is effective for landscape simulation. In this study, we generate nightscape images by superimposing high-luminance segments (virtual objects) over darkened landscape images, which are adjusted to account for dark adaptation. We found that the nightscape was evaluated more highly than the daytime landscape, and we were able to confirm quantitatively that evaluations were high for the landscape in twilight after sundown.
________________________________________
Impact of Illusory Resistance on Finger Walking Behavior
Yusuke Ujitoko - The University of Tokyo
Koichi Hirota - The University of Tokyo
Presenting Author: Yusuke Ujitoko
Abstract: We aim to enable additional sensations when using an anthropomorphic finger-motion interface. To do so, we applied a conventional method for generating pseudo-haptics: by controlling the amount of scroll resulting from finger displacement on a display, an illusory resistance, or pseudo-friction, was confirmed through subjective evaluation. We first clarified that this illusory resistance influences finger-walking behavior such as stride and speed. An additional experiment conducted in a public space verified this influence.
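The underlying pseudo-haptic manipulation reduces the control/display gain of scrolling, so the same finger displacement yields less scroll and is felt as resistance. A minimal sketch under that assumption (names are ours):

    def total_scroll(finger_positions, gain):
        # Accumulate scroll over successive finger samples; gain < 1.0
        # produces the illusory resistance (pseudo-friction), gain = 1.0
        # is the baseline.
        scroll = 0.0
        for prev, cur in zip(finger_positions, finger_positions[1:]):
            scroll += gain * (cur - prev)
        return scroll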
________________________________________
Development of a Wearable Haptic Device with Pneumatic Artificial Muscles and MR brake
Masakazu Egawa - Chuo University
Takumi Watanabe - Chuo University
Taro Nakamura - Chuo University
Presenting Author: Masakazu Egawa
Abstract: Desktop haptic devices have been developed in the fields of rehabilitation and entertainment. However, the desktop type restrains the user's movement, making it difficult to convey force information over a wide range of positions and postures. In this study, we developed a 1-DOF wearable haptic device with pneumatic artificial muscles and an MR brake. These smart actuators have high power density and can change their output force structurally. This haptic device can therefore render various force sensations such as elasticity, friction, and viscosity. We describe two experiments, rendering elasticity and friction, to evaluate the performance of the device.
________________________________________
Preliminary Evaluation of a Virtual Needle Insertion Training System
Duc Van NGUYEN, University of Evry
Safa Ben Lakhal, University of Evry
Amine Chellali, University of Evry
Presenting Author: Amine Chellali
Abstract: Inserting a needle to perform a biopsy requires high haptic sensitivity. The traditional learning methods, based on observation and training on real patients, are questionable. In this paper, we present a preliminary evaluation of a VR trainer for needle insertion tasks. The system aims to replicate an existing physical setup while overcoming some of its limitations. The results validate some of the design choices and suggest UI improvements.
________________________________________
From visual cues to climate perception in virtual urban environments
Toinon Vigier - CERMA UMR CNRS 1563
Guillaume Moreau - CERMA UMR CNRS 1563
Daniel Siret - CERMA UMR CNRS 1563
Presenting Author: Toinon Vigier
Abstract: Virtual reality is a good tool to design and assess urban projects and to study perception in cities. Climate perception significantly influences the perception and use of urban spaces; however, virtual urban environments are scarcely represented with different climatic aspects. In this paper, we study the role that visual cues (sky aspect, shadows, sun location, and light effects) specifically play in climate perception (season, daytime and temperature) in virtual urban environments. We present and discuss the data we collected from a recent virtual reality experiment in which ten variations of the climatic context in the same urban space were assessed.
________________________________________
HorizontalDragger: a Freehand Remote Selector for Object Acquisition
Siju Wu, IBISC, Evry University
Samir Otmane, IBISC, Evry University
Amine Chellali, IBISC, Evry University
Guillaume Moreau, CERMA, Ecole Centrale de Nantes
Presenting Author: Siju Wu
Abstract: Interaction with computers using freehand gestures is becoming more and more popular. However, it is hard to make precise inputs with in-air gestures and no physical support. In this paper, we present HorizontalDragger, a new selection technique that aims to improve selection precision by converting the 2D selection problem into a 1D one. After setting a region of interest in the display, all objects inside the region are considered potential targets. The user then drags the index finger horizontally to choose the desired target.
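The 2D-to-1D conversion can be sketched as mapping horizontal drag distance to an index in the candidate list. A minimal illustration; the step size and wrap-around behaviour are our assumptions, not the paper's parameters:

    def select_candidate(candidates, drag_dx_px, step_px=40.0):
        # candidates: targets inside the region of interest; drag_dx_px:
        # accumulated horizontal finger displacement in pixels.
        if not candidates:
            return None
        idx = int(drag_dx_px / step_px) % len(candidates)
        return candidates[idx]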
________________________________________
A Real-Time Welding Training System Based on Virtual Reality
Benkai Xie, Wuhan University of Technology
Qiang Zhou, Wuhan University of Technology
Liang Yu, Wuhan Onew Technology Co., Ltd
Presenting Author: Benkai Xie
Abstract: Onew360 is a training simulator for gas metal arc welding (GMAW). The system comprises standard welding hardware components (helmet, gun, work-piece), a PC, a head-mounted display, a tracking system for both the torch and the user's head, and external audio speakers. The tracking model uses single-camera vision measurement to calculate the positions of the welding gun and helmet, and the simulation model uses a simple-model method to simulate the weld geometry based on the orientation and speed of the welding torch, so that the system produces a realistic, interactive, and immersive welding experience.
________________________________________
Transparent Cockpit Using Telexistence
Takura Yanagi, Nissan Motor Co
Charith Lasantha Fernando, Keio University
MHD Yamen Saraiji, Keio University
Kouta Minamizawa, Keio University
Susumu Tachi, Keio University
Norimasa Kishi, Nissan Motor Co
Presenting Author: Takura Yanagi
Abstract: We propose an indirect-vision, video-see-through augmented reality (AR) cockpit that uses telexistence technology to provide an AR-enriched, virtually transparent view of the surroundings through monitors instead of windows. Such a virtual view has the potential to enhance driving performance and experience beyond conventional glass and head-up-display-equipped cockpits by combining AR overlays with images obtained from future image sensors that are superior to human eyes. As a proof of concept, we replaced the front windshield of an experimental car with a large stereoscopic monitor. A robotic stereo camera pair that mimics the driver's head motions provides stereoscopic images with seamless motion parallax to the monitor. Initial driving tests at moderate speeds on roads within our research facility confirmed the illusion of transparency. We will conduct human-factors evaluations after implementing AR functions in order to show whether an overall benefit over conventional cockpits can be achieved in spite of possible conceptual issues like latency, shift of viewpoint, and the short distance between driver and display.
________________________________________
Flying Robot Manipulation System Using a Virtual Plane
Kazuya Yonezawa - The University of Tokyo
Takefumi Ogawa - The University of Tokyo
Presenting Author: Kazuya Yonezawa
Abstract: The flexible movements of flying robots make it difficult for novices to manipulate them precisely with controllers such as joysticks. Moreover, the mapping between a user's instructions and the robot's reactions is not necessarily intuitive. We propose manipulation methods for flying robots using augmented reality technologies. In the proposed system, a virtual plane is superimposed on a flying robot, and users control the robot by manipulating the virtual plane and drawing a movement path on it. We present the design and implementation of our system and describe experiments conducted to evaluate our methods.
________________________________________
Binocular Interface: Interaction Techniques Considering Binocular Parallax for a Large Display
Keigo Yoshimura - Graduate school of Interdisciplinary Information Studies, The University of Tokyo
Takefumi Ogawa - Information Technology Center, The University of Tokyo
Presenting Author: Keigo Yoshimura
Abstract: There have been many studies on intuitive user interfaces for large displays based on pointing movements. However, if a user cannot reach the display, manipulating objects on it is difficult because the user sees duplicate fingers due to binocular parallax. We propose the Binocular Interface, which enables interactions with an object by using two pseudo fingers. In a prototype, pointing positions on the display are estimated from the positions of the eyes and a finger detected by an RGB-D camera. We implemented three basic operations (select, move, and resize) using the duplicate fingers and evaluated each operation.
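The pointing estimate can be sketched as intersecting an eye-through-fingertip ray with the display plane. A minimal illustration, assuming NumPy, with all positions in the RGB-D camera's coordinate frame (names are ours):

    import numpy as np

    def pointing_position(eye, fingertip, plane_point, plane_normal):
        eye = np.asarray(eye, float)
        d = np.asarray(fingertip, float) - eye   # ray: eye -> fingertip
        denom = np.dot(plane_normal, d)
        if abs(denom) < 1e-9:
            return None                          # ray parallel to display
        t = np.dot(plane_normal, np.asarray(plane_point, float) - eye) / denom
        return eye + t * d                       # point on the display plane

    # Evaluating this once per eye gives the two pseudo-finger positions
    # whose duplicate appearance the interface exploits.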
________________________________________
Tracking Human Locomotion by Relative Positional Feet Tracking
Markus Zank - Innovation Center Virtual Reality, ETH Zurich
Thomas Nescher - Innovation Center Virtual Reality, ETH Zurich
Andreas Kunz - Innovation Center Virtual Reality, ETH Zurich
Presenting Author: Markus Zank
Abstract: Tracking human movements and locomotion accurately in real time requires expensive tracking systems that take a long time to install and whose cost typically increases with the size of the tracked space. This poster presents an approach that significantly reduces the cost of tracking human locomotion in large spaces. The proposed approach employs a low-cost, user-worn tracking system to track the limbs, including the user's feet, in a user-centric coordinate system. These relative limb positions are tied to absolute positions in the environment by locking a foot's global position while it is in the stance phase of the gait cycle.
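The stance-phase locking can be sketched as follows. This is a minimal illustration, assuming NumPy and a user-centric frame whose orientation is aligned with the world frame; names are ours:

    import numpy as np

    def body_world_position(foot_rel, body_rel, foot_world_locked):
        # foot_rel: stance foot in the user-centric frame; body_rel: body
        # origin in the same frame; foot_world_locked: the foot's world
        # position, frozen while the foot is detected as stationary.
        offset = np.asarray(foot_world_locked) - np.asarray(foot_rel)
        return np.asarray(body_rel) + offset   # body position in the world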