2018 IEEE VR, Los Angeles
March 18th – 22nd

In Cooperation with the German Association for Electrical, Electronic and Information Technologies (VDE), the IEEE Computer Society, and IEEE

Exhibitors and Supporters

Diamond


National Science Foundation

Gold


VICON

Digital Projection

Gold Awards


NVIDIA

Silver


ART

Bronze


Haption

MiddleVR

VR-ON

VISCON

BARCO

Ultrahaptics

WorldViz

Disney Research

Microsoft

Non-Profit


Computer Network Information Center
Chinese Academy of Sciences

Sponsor for Research Demo


KUKA

Other Sponsors


Magic Leap

Videos

360° Video - Light Design Experience

Manuel Dudczig


Abstract: This 360° video was created for a lighting design company (www.lichtliebe.de) by (www.vrendex.de), illustrating lamp prototypes in an architectural setting so that their influence and ambience can be experienced. Based on a virtual reality scene, the video was rendered as a camera flight through the building to give an ambient impression during daylight and at night. Being able to show customers and architects what the lamps will look like, even before the first prototypes are available, speeds up the feedback process of designing new customer-influenced products. Virtual technologies are also capable of giving house builders an impression of how variants and positions of lights will influence their living space.

3D Tune-In: 3D-games for tuning and learning about hearing aids

Lorenzo Picinali


Abstract: 3D Tune-In is an EU-funded project which brings together the relevant stakeholders from the videogame industry, academic institutions, a large hearing aid manufacturer, and hearing communities to produce digital games in the field of hearing aid technologies and hearing loss [1] [2]. The project has now completed the development of the 3D Tune-In Toolkit [3], a flexible, cross-platform library of code and guidelines that gives traditional game and software developers access to high-quality sound spatialisation (both for headphones and loudspeakers) as well as hearing loss and hearing aid simulations. The test application for the Toolkit is currently available for free through the 3D Tune-In project website http://3d-tune-in.eu/. The C++ code will be released open-source through GitHub in Spring 2018. In addition to the Toolkit, 3D Tune-In has produced 5 different applications aimed at different groups within the hearing-impaired and non-hearing-impaired communities. The video briefly describes the project context, goals and main outcomes.
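
To make the kind of simulation concrete, the sketch below illustrates one simple way a hearing loss simulation can work: attenuating frequency bands of a signal according to an audiogram. It is purely illustrative and is not the 3D Tune-In Toolkit API; the band edges and loss values are assumed.

```python
import numpy as np

# Illustrative only: attenuate frequency bands of a mono signal according to
# a hypothetical audiogram of hearing-loss values in dB. This is NOT the
# 3D Tune-In Toolkit API, just the underlying idea of a hearing loss simulator.
AUDIOGRAM = {      # band upper edge (Hz) -> hearing loss (dB), assumed values
    500: 10.0,
    1000: 20.0,
    2000: 35.0,
    4000: 50.0,
    8000: 65.0,
}

def simulate_hearing_loss(signal, sample_rate):
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    gains = np.ones_like(freqs)
    lower = 0.0
    for upper, loss_db in sorted(AUDIOGRAM.items()):
        band = (freqs >= lower) & (freqs < upper)
        gains[band] = 10.0 ** (-loss_db / 20.0)   # dB loss -> linear gain
        lower = upper
    gains[freqs >= lower] = 10.0 ** (-max(AUDIOGRAM.values()) / 20.0)
    return np.fft.irfft(spectrum * gains, n=len(signal))

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    tone = 0.5 * np.sin(2 * np.pi * 3000 * t)       # 3 kHz test tone
    attenuated = simulate_hearing_loss(tone, sr)
    print(f"peak before: {tone.max():.3f}, after: {attenuated.max():.3f}")
```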

AnimationVR - Interactive Controller-based Animating in Virtual Reality

Daniel Vogel, Paul Lubos, Frank Steinicke


Abstract: Animating with keyframes gives animators a lot of control, but keyframes can be tedious and complicated to work with. Currently, different solutions try to simplify animation creation by recording the natural hand movements of the user. However, most of these solutions are bound to 2D animations [1] or suffer from a low workflow speed [2]. The proposed Unity plugin AnimationVR uses the HTC Vive system to enable the puppeteering animation technique in VR while still allowing for a fast workflow speed by utilizing the controllers of the VR system. Also, AnimationVR is written for easy integration into already existing Unity projects. The plugin was evaluated with four animation experts. The consensus was that AnimationVR increases workflow speed while decreasing animation precision. This tradeoff makes it useful for storyboarding in professional environments. Additionally, the plugin could improve the understanding of VR storytelling, as animators can create and instantly review animations in the correct medium. The experts also noted the ease of use of the puppeteering technique, which could enable beginners to create complex animations with little to no experience with AnimationVR. Additionally, the accessibility for animation beginners could improve communication in animation teams between animators and directors.
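
As a rough illustration of the puppeteering idea described above (recording a tracked controller's pose every frame as keyframes and replaying it later), here is a minimal, engine-agnostic sketch; the data layout, the 90 Hz frame rate and the class names are assumptions, not the plugin's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Keyframe:
    time: float        # seconds since recording started
    position: tuple    # (x, y, z) of the puppeteered object
    rotation: tuple    # quaternion (x, y, z, w)

class PuppeteerRecorder:
    """Records controller poses each frame and replays them as an animation."""

    def __init__(self):
        self.keyframes = []

    def record(self, time, position, rotation):
        # Called once per rendered frame while the trigger is held.
        self.keyframes.append(Keyframe(time, position, rotation))

    def sample(self, time):
        # Replay: return the pose of the latest keyframe at or before `time`.
        current = self.keyframes[0]
        for kf in self.keyframes:
            if kf.time > time:
                break
            current = kf
        return current.position, current.rotation

# Example: record a short straight-line motion, then sample it halfway through.
recorder = PuppeteerRecorder()
for frame in range(90):                       # ~1 s at an assumed 90 Hz
    t = frame / 90.0
    recorder.record(t, (t, 1.0, 0.0), (0.0, 0.0, 0.0, 1.0))
print(recorder.sample(0.5))
```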

Augmentation of road surfaces with subsurface utility model projections

Stéphane Côté, Bentley Systems; Alexandra Mercier, Bentley Systems, Université Laval


Abstract: Subsurface utility work planning would benefit from augmented reality. Unfortunately, the exact pipe location is rarely known, which produces unreliable augmentations. We propose an augmentation technique that drapes 2D pipe maps onto the road surface and aligns them with corresponding features in the physical world using a pre-captured 3D mesh. The resulting augmentations are more likely to be displayed at the true pipe locations.
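
A minimal sketch of the draping step as described, projecting 2D pipe-map vertices vertically onto the captured road surface; the analytic height function below stands in for the real pre-captured 3D mesh and is purely illustrative.

```python
import numpy as np

# Illustrative draping: 2D pipe polyline vertices (x, y) are projected
# vertically onto the road surface, here a simple analytic height function
# standing in for the real triangulated mesh captured beforehand.

def road_height(x, y):
    # Hypothetical road surface with a slight crown and slope.
    return 0.02 * np.sin(0.5 * x) + 0.01 * y

def drape_polyline(pipe_xy):
    """Lift each 2D map vertex onto the road surface to get a 3D polyline."""
    return [(x, y, road_height(x, y)) for x, y in pipe_xy]

pipe_map = [(0.0, 0.0), (2.0, 0.5), (4.0, 1.0), (6.0, 1.5)]
for x, y, z in drape_polyline(pipe_map):
    print(f"x={x:.1f}  y={y:.1f}  z={z:+.3f}")
```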

Augmented VR

Antonis Karakottas, Alexandros Papachristou, Alexandros Doumanoglou, Nikolaos Zioulis, Dimitrios Zarpalas, Petros Daras


Abstract: Traditional VR is mostly about headset experiences, either in completely virtual environments or 360° videos. On the other hand, AR has been mixing realities by inserting the virtual within the real. In this work we present the Augmented VR concept, which lies at the middle right of the virtuality continuum, typically referred to as augmented virtuality. We offer another perspective by blending the real within the virtual, focusing on capturing actual human performances in three dimensions and emplacing them within virtual environments [1–3]. By compressing and transmitting this new type of 3D media we can also achieve real-time interaction, communication and collaboration between users. Being in full 3D, our media are compatible with a variety of applications, be it VR, AR or MR, and open up new exciting opportunities like free-viewpoint spectating while also increasing the feeling of immersion of all participating users. We demonstrate our technology via a prototype two-player game that can support spectating on various devices like head-mounted displays (VR) or tablets and laptops (AR). Our system is easy to set up, requiring minimal non-technical human intervention, and relatively low cost, taking one step ahead in making this technology available to the consumer public.

Auto-scaled Full Body Avatars for Virtual Reality: Facilitating Interactive Virtual Body Modification

Tuukka M. Takala, Heikki Heiskanen


Abstract: Virtual reality avatars and the illusion of virtual body ownership are increasingly attracting attention from researchers [1][2]. As a continuation of our previous work with avatars [3], we updated our existing RUIS for Unity toolkit [4] with new capabilities that facilitate the creation of virtual reality applications with adaptive and customizable avatars. Our toolkit allows developers to combine the use of modern VR headsets with any real-time motion capture system, from Kinect to professional solutions. The only requirement is that the position and rotation of hips, knees, shoulders, elbows, and pelvis are tracked. Tracking of ankles, wrists, fingers, clavicles, chest, neck, and head is optional. Our goal was to allow virtual reality developers to easily deploy arbitrary avatars; currently the RUIS toolkit can utilize any rigged humanoid 3D model that can be imported into Unity. In order to minimize the mismatch between proprioception and avatar-related visual stimuli, the avatar's limb and torso lengths are scaled automatically at run time to match the user's body proportions, which are inferred from the motion capture input. The length and thickness of the limbs and torso can be augmented independently, which provides interesting possibilities for dynamic avatar body modification. We envision that our toolkit can be used in studies concerning the illusion of virtual body ownership, as well as in VR applications where full-body motion capture is utilized.
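
A small sketch of the automatic limb scaling described above: per-bone scale factors are obtained by dividing the user's tracked bone lengths by the avatar rig's rest-pose bone lengths. The joint names and coordinates are hypothetical, and this is not the RUIS toolkit code.

```python
import math

# Rest-pose joint positions of the avatar rig (as imported from the 3D model);
# all coordinates here are made up for illustration.
avatar_joints = {
    "hips": (0.0, 1.00, 0.0), "knee_l": (0.1, 0.55, 0.0), "ankle_l": (0.1, 0.10, 0.0),
    "shoulder_l": (0.2, 1.45, 0.0), "elbow_l": (0.45, 1.45, 0.0), "wrist_l": (0.70, 1.45, 0.0),
}

# Joint positions measured from motion capture of the user (e.g. Kinect).
user_joints = {
    "hips": (0.0, 0.95, 0.0), "knee_l": (0.1, 0.50, 0.0), "ankle_l": (0.1, 0.08, 0.0),
    "shoulder_l": (0.18, 1.38, 0.0), "elbow_l": (0.46, 1.38, 0.0), "wrist_l": (0.74, 1.38, 0.0),
}

BONES = [("hips", "knee_l"), ("knee_l", "ankle_l"),
         ("shoulder_l", "elbow_l"), ("elbow_l", "wrist_l")]

def limb_scales(avatar, user):
    """Scale factor per bone so the avatar's limb lengths match the user's."""
    return {bone: math.dist(user[bone[0]], user[bone[1]]) /
                  math.dist(avatar[bone[0]], avatar[bone[1]])
            for bone in BONES}

for (parent, child), s in limb_scales(avatar_joints, user_joints).items():
    print(f"{parent} -> {child}: scale {s:.2f}")
```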

Beacon Virtua

Andrew Woods, Paul Bourke, Nick Oliver


Abstract: In Beacon Virtua [1] you can explore the legacy of the shipwrecked VOC ship Batavia by visiting a simulation of Beacon Island. Beacon Virtua will take you on a tour of the island including its jetties, fishing shacks and several grave sites of Batavia voyagers who were buried on the island after the ship was wrecked and following the uprising.
The graves have been reconstructed through a technique called photogrammetric 3D reconstruction, a process which uses multiple photographs of an object to build an accurate and detailed 3D model of it. Beacon Virtua presents the island as it was in 2013, using audio and photography captured during multiple expeditions to the island to preserve this period in its history. [2] In 2013 there were around 15 shacks located across Beacon Island, originally used by the fishing community. These shacks have been recreated as 3D models, which can be explored inside and out. Around the island are photographic panorama bubbles offering 360° views of the island. These bubbles have been captured using a special panoramic photography process - stepping inside a bubble allows you to see the island from that point exactly as it was in 2013.

CarpetVR: the Magic Carpet Meets the Magic Mirror

Victor Lempitsky, Alexander Vakhitov, Andrew Starostin


Abstract: We present CarpetVR – a new system for marker-based positional tracking suitable for mobile VR. The system utilizes all sensors present on a modern smartphone (a camera, a gyroscope, and an accelerometer) and does not require any additional sensors. CarpetVR uses a single floor marker that we call the magic carpet (a,c). CarpetVR augments a standard mobile VR setup with a slanted mirror that can be attached either to the smartphone (as shown in b) or to the head mount in front of the smartphone camera. As the person walks over the marker (c), the smartphone camera is able to see the marker thanks to the reflection in the mirror (shown in a). Our tracking engine then uses a computer vision module to detect the marker and to estimate the smartphone position with respect to the marker at 40 frames per second. This estimate is integrated with high framerate signals from the gyroscope and the accelerometer. The resulting estimates of the position and the orientation are then used to render the virtual world (d,e). Our sensor fusion algorithm ensures minimal-latency tracking with very little jitter.
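
A toy one-dimensional version of the described sensor fusion, in which low-rate marker-based position fixes (about 40 Hz, per the abstract) correct a fast but drifting inertial estimate; the IMU rate, correction gain and bias value are illustrative assumptions, not the CarpetVR algorithm.

```python
# Toy 1-D sensor fusion in the spirit of CarpetVR: slow but accurate
# marker-based position fixes correct a fast, drifting inertial estimate.

IMU_RATE = 400          # Hz, gyro/accelerometer updates (assumed)
MARKER_RATE = 40        # Hz, vision-based marker fixes (from the abstract)
CORRECTION_GAIN = 0.2   # fraction of the error removed per marker fix (assumed)

def fuse(duration_s, true_velocity=0.5, accel_bias=0.05):
    """Return (true position, fused estimate) after walking for duration_s."""
    dt = 1.0 / IMU_RATE
    true_pos = 0.0
    est_pos, est_vel = 0.0, true_velocity   # start with the correct velocity
    for step in range(int(duration_s * IMU_RATE)):
        true_pos += true_velocity * dt
        # Inertial prediction: integrate the (biased) accelerometer reading,
        # which slowly drifts the position estimate away from the truth.
        est_vel += accel_bias * dt
        est_pos += est_vel * dt
        # A marker fix arrives every IMU_RATE // MARKER_RATE steps and pulls
        # the estimate back toward the vision-based measurement.
        if step % (IMU_RATE // MARKER_RATE) == 0:
            est_pos += CORRECTION_GAIN * (true_pos - est_pos)
    return true_pos, est_pos

true_pos, fused = fuse(duration_s=5.0)
print(f"true position: {true_pos:.3f} m, fused estimate: {fused:.3f} m")
```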

Realtime Collision Avoidance for Mechanisms with Complex Geometries

Mikel Sagardia, Alexander Martín Turrillas, Thomas Hulin


Abstract: This video presents a collision avoidance framework for mechanisms with complex geometries. The performance of the framework is showcased with the haptic interface HUG [3]. We are able to avoid contacts with the robot links and with moving objects in the environment at 1 kHz. The main contribution of our approach is its generic and extensible nature; it can be applied to any mechanism consisting of arbitrarily complex rigid bodies, in contrast to common solutions that use simplified models [2], [7]. In the preprocessing phase, first, the kinematic chain of the mechanism is described [1]. Second, we generate voxelized distance fields and point-sphere hierarchies for the geometry of each mechanism link and each object in the environment [6]. After that, our system requires only the joint angles and information on the environment state (e.g., object poses tracked by optical sensors) to compute collision avoidance forces. At runtime, each link is artificially dilated by a safety isosurface. If a point of an object goes through this surface, a normal force scaled by its penetration depth is computed and applied to the corresponding link. If humans are generically modeled as mechanisms and properly tracked, our system can also prevent collisions with them, ensuring safe human-machine collaboration. Figure 1 illustrates the framework and its basic components. The multi-body collision computation architecture was first developed for virtual maintenance simulations with haptic feedback [5], [4], and thereafter extended to collision avoidance of mechanisms. A first prototype was previously published in [8].
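
A minimal sketch of the force rule described above: any environment point that penetrates the safety isosurface around a link produces a repulsive force along the surface normal, scaled by the penetration depth. Modelling the link's isosurface as a single sphere and the stiffness value are simplifying assumptions, not the paper's distance-field implementation.

```python
import numpy as np

STIFFNESS = 500.0      # N/m, assumed scaling from penetration depth to force

def avoidance_force(link_center, safety_radius, points):
    """Sum repulsive forces from environment points that pierce the
    spherical safety isosurface around one robot link."""
    total = np.zeros(3)
    for p in np.asarray(points, dtype=float):
        offset = np.asarray(link_center, dtype=float) - p   # push link away
        dist = np.linalg.norm(offset)
        penetration = safety_radius - dist
        if penetration > 0.0 and dist > 1e-9:
            normal = offset / dist
            total += STIFFNESS * penetration * normal
    return total

# Example: one point grazes the safety surface, one is well inside it.
force = avoidance_force(link_center=(0.0, 0.0, 0.0), safety_radius=0.30,
                        points=[(0.28, 0.0, 0.0), (0.05, 0.05, 0.0)])
print("avoidance force [N]:", np.round(force, 2))
```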

Secret Detours: A Garden in Singapore

Elke Reinhuber, Benjamin Seide, Ross Williams


Abstract: Visions of East Asian mythology materialise in a decidedly modern metropolis, a place without a past – two worlds collide. Secret Detours engages the audience with overwhelming vistas in a full spherical presentation, encompassing the viewers from all angles. The short film is set within a lush Chinese garden, adapted from the great traditions of imperial landscaping (cf. [3]) – the Yunnan Garden in the west of Singapore. Four dancers, dressed in the colours of the cardinal directions, examine the spaces, the paths and the detours of the green scenery (cf. [1]). The spherical video relates to the experience of being surrounded by mythological creatures and their traces inside the garden. As the beautiful layout of the grounds is composed of a range of intersections with multiple meandering paths to choose from, the omnidirectional video similarly invites the viewer to explore the atmosphere amid an exquisite selection of trees, shrubs, bushes and pieces of architecture. In 360° environments, the camera is almost objective and the viewer becomes the editor of the piece (cf. [2]), unlike the directed camera and editing of 'classic' movies. The question arises how the author can direct the eyes of the audience with different camera settings, perspective, focus, direction of actors and transitions, and in this way prompt emotions.

The Depth Light

McKennon McMillian, Hunter Finney, Jonathan Hopper, J. Adam Jones


Abstract: The Depth Light solves the problem of not being able to view the real world accurately and easily without having to remove the head-mounted display. The Depth Light is activated by a button or trigger press on an HTC Vive controller and consists of a Vive controller, an ultrasonic depth finder, a microcontroller (to send measured distances over serial), a web camera, and a mount for the microcontroller and camera. The device works by finding the distance between the device and the nearest real-world object, taking a sum of these distances, and sending the average over serial to a computer. In Unity3D, an object is rendered at the distance sent from the microcontroller. This object is then textured with the video feed from the web camera. The object's distance changes in the virtual environment in real time as the Depth Light's microcontroller sends new information. As the distance changes, the scale of the object also changes to keep the object the same size in the field of vision. The data from the Depth Light is handled by a Unity3D plugin, which handles all the rendering commands and all of the scaling.
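
The distance-dependent scaling mentioned at the end reduces to scaling the video quad linearly with its rendered distance so that it keeps a constant visual angle. A short sketch, assuming a hypothetical serial message format and camera field of view:

```python
import math

CAMERA_FOV_DEG = 60.0   # assumed horizontal field of view of the web camera

def parse_distance(serial_line):
    # Assumed message format: the microcontroller sends an averaged
    # distance in metres as plain text, e.g. "1.42\n".
    return float(serial_line.strip())

def quad_width(distance_m, fov_deg=CAMERA_FOV_DEG):
    """Width of a quad placed distance_m in front of the viewer so that it
    always spans the same visual angle (here, the camera's field of view)."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

for line in ["0.50", "1.00", "2.00"]:
    d = parse_distance(line)
    print(f"distance {d:.2f} m -> quad width {quad_width(d):.2f} m")
```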

Until Jesse 360

Miriam Ross


Abstract: At the dead end of a party, Jesse looks around to see if she can make a connection. This cinematic virtual reality (CVR), 360-degree film explores what it means to be amongst those brief, late-night moments when strangers come into contact.
Until Jesse 360 takes into account the potential for CVR to create the ‘empathy machine’ [1] by situating the viewer amongst, rather than at a distance from, intimate conversations. At the same time, it questions some of the unwritten rules that have emerged in the first few years of CVR, mainly that dynamic editing and shot length should not be used in case they disorient the viewer. By exploring the possibility of switching between perspectives and providing the viewer with ‘impossible’ viewpoints, Until Jesse 360 challenges our conception of VR space as well as how we can be positioned within it. In this way, it takes into account John Mateer’s point that “existing methods for film can be adapted to immersive presentation so long as they also take into consideration unique aspects of the CVR platform” [2] by playing with editing style whilst making the most of 360-degree space.

Use of virtual reality to teach teamwork and patient safety in surgical education

Tobias Todsen, Jacob Melchiors, Kasper Wennerwaldt


Abstract: The use of 360° VR videos may increase the engagement and attentiveness of students compared to the traditional 2D videos used in medical education (1,2). We therefore developed a stereoscopic 360° VR video to demonstrate how to use the WHO's surgical safety checklist in the operating room (see image 1). With the use of VR technology we aimed to give medical students a realistic experience of what it is like to be in the operating room and observe the teamwork that ensures patient safety during surgery. The video was recorded with a Vuze 3D 360 Spherical VR Camera and edited in Final Cut Pro using Dashwood's 360VR Toolbox workflow plugins.

VR Music

Ali Rastegar


Abstract: Virtual reality headsets and sound, alongside other elements, enable VR to achieve its ultimate goal, which is to simulate the user's physical presence in a virtual environment. The user's input, used to alter and interact with the virtual world, is another important factor and has been the subject of extensive research, much of it in the field of art. But even the user's ability to look around and move toward a sound source can be considered user input. Therefore, user input can be regarded as one of the main elements of virtual reality. User input is a relatively new concept in the arts, but it has been the basic element of video games from the beginning, so it is no surprise that gaming was the starting point of virtual reality. As virtual reality becomes more widespread, it is expected that this technology will be adapted to other fields. But it is also realistic to think that, due to the user's ability to make different decisions, gamification will be adopted alongside this technology. The focus of this project is user-centered art where the user's input is the fundamental element of the artwork.

Virtual Immersion: Simulating Immersive Experiences in VR

Volker Kuchelmeister


Abstract: This is an investigation into how VR can simulate experiences designed for large-scale immersive environments. Immersive display and interaction environments and systems had been utilised in simulation, visualisation, entertainment, the arts and museological contexts long before VR made its resurgence only a few years back. These systems include, amongst others, 360-degree cylindrical projection environments [1], curved screens, hemispherical projection systems [2] and multi-perspective installations [3]. In comparison to traditional screen-based media, immersive environments provide a unique delivery platform for ultra-high-resolution digital content at real-world scale and for multiple simultaneous viewers. This makes them an ideal stage for impactful experiences in public museums, festivals and exhibitions. Applications and experiences created for a specific platform rely on the complex and costly technical infrastructure they were originally designed for. Descriptions and video documentation only go so far in illustrating an immersive experience. The embodied aspect, the emotional engagement and the dimensional extent, central to immersion, are mostly lost in translation. This project offers a prototypical implementation of a large-scale virtual exhibition incorporating various immersive environments and applications situated within a fictional 3D scene. The focus is on simulation and conservation of existing applications and on creating a test bed for future projects.

Acknowledgements: SYSTEMS: EPICylinder; Design: S. Kenderdine, J. Shaw; UNSW Expanded Perception and Interaction Centre. AVIE Advanced Visualisation and Interaction Environment and AVIE-SC; Design: J. Shaw with D. Del Favero, A. Harjono, V. Kuchelmeister, M. McGinity; UNSW iCinema Centre for Interactive Cinema Research. RE-ACTOR; Design: S. Kenderdine, J. Shaw with P. Bourke. iDome; Design: P. Bourke, J. Shaw, V. Kuchelmeister; UNSW iCinema. Turntable/Placeworld; Design: J. Shaw, adapted as Turntable by V. Kuchelmeister. ZKM Karlsruhe / UNSW Art & Design. Panorama Screen; Design: J. Shaw with B. Lintermann; ZKM Centre for Art and Media Karlsruhe.
CONTENT: Veloscape (2014); V.Kuchelmeister, L.Fisher, J.Bennett; UNSW Art & Design. City Jam (2007) in AVIE; V. Kuchelmeister; UNSW iCinema. BackOBourke (2009) in iDome; V. Kuchelmeister; UNSW iCinema. Juxtaposition (2011) in Turntable/Placeworld; V. Kuchelmeister; UNSW A&D. Fragmentation (2012) in RE-ACTOR; R. Lepage; Adaptation, by R. Castelli and V. Kuchelmeister; UNSW iCinema, Epidemic. Monsoon (2012) in AVIE; V. Kuchelmeister; UNSW iCinema. Naguar India 360 (2007) in iDome; S. Kenderdine, V. Kuchelmeister, J. Shaw. Catlin Seaview (2014) in iDome; V. Kuchelmeister, R. Vevers; UNSW A&D. Juxtaposition (2011) in ZKM Panorama Screen; V. Kuchelmeister; UNSW A&D. Double District (2009) in ReACTOR; S. Teshigawara developed with V. Kuchelmeister; UNSW iCinema, Epidemic. Parragirls Past Present (2017) in EpiCylinder; A. Davies, B. Djuric, L. Hibberd, V. Kuchelmeister, J. McNally; UNSW A&D. Hawkesbury Journey (2006); V. Kuchelmeister; UNSW iCinema. Conversations @ the Studio (2005) in iDome; J. Shaw, D. Del Favero, N. Brown, V. Kuchelmeister, N. Papastergiadis, S. McQuire, A. Arthurs, S. Kenderdine, K. Sumption, G. Cochrane; UNSW iCinema. iCasts (2008-11); J. Shaw, D. Del Favero; UNSW iCinema. Deconstructing Double District (2010); V. Kuchelmeister, based on Double District (2009) by S. Teshigawara; UNSW A&D. Zeitraum (2012) in AVIE-SC; V. Kuchelmeister; UNSW A&D.