March 18th - 22nd
In Cooperation with
the German Association for Electrical, Electronic and Information Technologies: VDE
Abstract: The 360° video was created for a lighting design company (www.lichtliebe.de) by (www.vrendex.de), illustrating lamp prototypes in an architectural setting so that their influence and ambience can be experienced. Based on a virtual reality scene, the video was rendered as a camera flight through the building to give an ambient impression during daylight and at night. Being able to show customers and architects what the lamps will look like, even before the first prototypes are available, speeds up the feedback process of designing new customer-influenced products. Virtual technologies are also capable of giving house builders an impression of how variants and positions of lights will influence their living space.
Abstract: 3D Tune-In is an EU-funded project which brings together the relevant stakeholders from the videogame industry, academic institutions, a large hearing aid manufacturer, and hearing communities, to produce digital games in the field of hearing aid technologies and hearing loss. The project has now completed the development of the 3D Tune-In Toolkit, a flexible, cross-platform library of code and guidelines that gives traditional game and software developers access to high-quality sound spatialisation (both for headphones and loudspeakers), hearing loss and hearing aid simulations. The test application for the Toolkit is currently available for free through the 3D Tune-In project website http://3d-tune-in.eu/. The C++ code will be released open-source through GitHub in Spring 2018. In addition to the Toolkit, 3D Tune-In has produced 5 different applications aimed at different groups of the hearing impaired and non-hearing impaired communities. The video briefly describes the project context, goals and main outcomes.
Daniel Vogel, Paul Lubos, Frank Steinicke
Abstract: Animating with keyframes gives animators a lot of control, but keyframes can be tedious and complicated to work with. Currently, different solutions try to simplify animation creation by recording the natural hand movements of the user. However, most of these solutions are bound to 2D animations or suffer from a low workflow speed. The proposed Unity plugin AnimationVR uses the HTC Vive system to enable the puppeteering animation technique in VR while still allowing for a fast workflow speed by utilizing the controllers of the VR system. Also, AnimationVR is written for easy integration into already existing Unity projects. The plugin was evaluated with four animation experts. The consensus was that AnimationVR increases the workflow speed while decreasing the animation precision. This tradeoff makes it useful for storyboarding in professional environments. Additionally, the plugin could improve the understanding of VR storytelling, as the animators would create and instantly review the animations in the correct medium. The experts also noted the ease of use of the puppeteering technique, which could enable beginners to create complex animations with little to no experience with AnimationVR. Additionally, the accessibility for animation beginners could improve the communication in animation teams between animators and directors.
Stéphane Côté, Bentley Systems. Alexandra Mercier, Bentley Systems, Université Laval
Abstract: Subsurface utility work planning would benefit from augmented reality. Unfortunately, the exact pipe location is rarely known, which produces unreliable augmentations. We propose an augmentation technique that drapes 2D pipe maps onto the road surface and aligns them with corresponding features in the physical world using a pre-captured 3D mesh. The resulting augmentations are more likely to be displayed at the true pipe locations.
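Draping a 2D map point onto a captured road surface can be sketched as a vertical projection onto a height field; this is our simplified illustration (a regular-grid height map standing in for the authors' pre-captured 3D mesh), and all names and values below are invented example data.

```python
def drape_point(x, y, heightmap, cell_size):
    """Project a 2D map coordinate vertically onto a height field,
    yielding the 3D point where the pipe symbol should be drawn."""
    i = int(round(x / cell_size))    # column index of nearest grid cell
    j = int(round(y / cell_size))    # row index of nearest grid cell
    z = heightmap[j][i]              # surface elevation at that cell
    return (x, y, z)

# A tiny 3x3 road-surface height field with 1 m cells.
surface = [[0.00, 0.02, 0.05],
           [0.01, 0.03, 0.06],
           [0.02, 0.04, 0.07]]
print(drape_point(1.0, 2.0, surface, 1.0))  # (1.0, 2.0, 0.04)
```

A production system would interpolate within the mesh triangle under the point rather than snapping to the nearest grid cell.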
Antonis Karakottas, Alexandros Papachristou, Alexandros Doumanoglou, Nikolaos Zioulis, Dimitrios Zarpalas, Petros Daras
Abstract: Traditional VR is mostly about headset experiences, either in completely virtual environments or 360° videos. On the other hand, AR has been mixing realities by inserting the virtual within the real. In this work we present the Augmented VR concept that lies at the middle right of the virtuality continuum, typically referred to as augmented virtuality. We offer another perspective by blending the real within the virtual, focusing on capturing actual human performances in three dimensions and emplacing them within virtual environments [1–3]. By compressing and transmitting this new type of 3D media we can also achieve real-time interaction, communication and collaboration between users. Being in full 3D, our media are compatible with a variety of applications, be it VR, AR or MR, and open up new exciting opportunities like free-viewpoint spectating while also increasing the feeling of immersion of all participating users. We demonstrate our technology via a prototype two-player game that can support spectating on various devices like head mounted displays (VR) or tablets and laptops (AR). Our system is easy to set up, requiring minimal non-technical human intervention, and is relatively low cost, taking one step ahead in making this technology available to the consumer public.
Tuukka M. Takala, Heikki Heiskanen
Abstract: Virtual reality avatars and the illusion of virtual body ownership are increasingly attracting attention from researchers. As a continuation of our previous work with avatars, we updated our existing RUIS for Unity toolkit with new capabilities that facilitate the creation of virtual reality applications with adaptive and customizable avatars. Our toolkit allows developers to combine the use of modern VR headsets with any real-time motion capture system, from Kinect to professional solutions. The only requirement is that the position and rotation of hips, knees, shoulders, elbows, and pelvis are tracked. Tracking of ankles, wrists, fingers, clavicles, chest, neck, and head is optional. Our goal was to allow virtual reality developers to easily deploy arbitrary avatars; currently the RUIS toolkit can utilize any rigged humanoid 3D models that can be imported into Unity. In order to minimize the mismatch between proprioception and avatar-related visual stimulus, the avatar’s limb and torso lengths are scaled automatically at run-time to match the user’s body proportions, which are inferred from the motion capture input. The length and thickness of the limbs and torso can be augmented independently, which provides interesting possibilities for dynamic avatar body modification. We envision that our toolkit can be used in studies concerning the illusion of virtual body ownership, as well as in VR applications where full-body motion capture is utilized.
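The run-time body-proportion matching described above can be sketched as follows. This is an illustrative approximation, not the RUIS toolkit's actual code; the joint positions and the 0.25 m model bone length are made-up example values.

```python
import math

def bone_length(joint_a, joint_b):
    """Euclidean distance between two tracked joint positions (x, y, z)."""
    return math.dist(joint_a, joint_b)

def limb_scale(user_parent, user_child, model_length):
    """Scale factor that stretches a rigged model bone so it matches the
    user's measured bone, as inferred from motion capture input."""
    return bone_length(user_parent, user_child) / model_length

# Example: a 0.30 m tracked shoulder-to-elbow span vs. a 0.25 m model bone.
scale = limb_scale((0.0, 1.4, 0.0), (0.0, 1.1, 0.0), 0.25)
```

Applying such a factor per limb (rather than uniformly) is what lets the avatar's proportions, not just its height, follow the user's.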
Andrew Woods, Paul Bourke, Nick Oliver
Abstract: In Beacon Virtua you can explore the legacy of the shipwrecked VOC ship Batavia by visiting a simulation of Beacon Island. Beacon Virtua will take you on a tour of the island including its jetties, fishing shacks and several grave sites of Batavia voyagers who were buried on the island after the ship was wrecked and following the uprising.
The graves have been reconstructed through a technique called photogrammetric 3D reconstruction, a process which uses multiple photographs of an object to build an accurate and detailed 3D model of it. Beacon Virtua presents the island as it was in 2013, using audio and photography captured during multiple expeditions to the island to preserve this period in its history.  In 2013 there were around 15 shacks located across Beacon Island, originally used by the fishing community. These shacks have been recreated as 3D models, which can be explored inside and out. Around the island are photographic panorama bubbles offering 360° views of the island. These bubbles have been captured using a special panoramic photography process - stepping inside a bubble allows you to see the island from that point exactly as it was in 2013.
Victor Lempitsky, Alexander Vakhitov, Andrew Starostin
Abstract: We present CarpetVR – a new system for marker-based positional tracking suitable for mobile VR. The system utilizes all sensors present on a modern smartphone (a camera, a gyroscope, and an accelerometer) and does not require any additional sensors. CarpetVR uses a single floor marker that we call the magic carpet (a,c). CarpetVR augments a standard mobile VR setup with a slanted mirror that can be attached either to the smartphone (as shown in b) or to the head mount in front of the smartphone camera. As the person walks over the marker (c), the smartphone camera is able to see the marker thanks to the reflection in the mirror (shown in a). Our tracking engine then uses a computer vision module to detect the marker and to estimate the smartphone position with respect to the marker at 40 frames per second. This estimate is integrated with high-framerate signals from the gyroscope and the accelerometer. The resulting estimates of the position and the orientation are then used to render the virtual world (d,e). Our sensor fusion algorithm ensures minimal-latency tracking with very little jitter.
Mikel Sagardia, Alexander Martín Turrillas, Thomas Hulin
Abstract: This video presents a collision avoidance framework for mechanisms with complex geometries. The performance of the framework is showcased with the haptic interface HUG. We are able to avoid contacts with the robot links and with moving objects in the environment at 1 kHz. The main contribution of our approach is its generic and extensible nature; it can be applied to any mechanism consisting of arbitrarily complex rigid bodies, in contrast to common solutions that use simplified models. In the preprocessing phase, first, the kinematic chain of the mechanism is described. Second, we generate voxelized distance fields and point-sphere hierarchies for the geometry of each mechanism link and each object in the environment. After that, our system requires only the joint angles and information about the environment state (e.g., object poses tracked by optical sensors) to compute collision avoidance forces. At runtime, each link is artificially dilated by a safety isosurface. If a point of an object goes through this surface, a normal force scaled by its penetration depth is computed and applied to the corresponding link. If humans are generically modeled as mechanisms and properly tracked, our system can also prevent collisions with them, ensuring safe human-machine collaboration. Figure 1 illustrates the framework and its basic components. The multi-body collision computation architecture was first developed for virtual maintenance simulations with haptic feedback, and thereafter extended to collision avoidance of mechanisms. A first prototype was previously published in .
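The penetration-depth force rule can be illustrated with a short sketch. The stiffness gain and the distances below are invented example values, and the real framework queries voxelized distance fields and point-sphere hierarchies rather than taking a scalar distance directly.

```python
def avoidance_force(distance, normal, safety_margin, stiffness=500.0):
    """Repulsive force on a link: zero while the point stays outside the
    dilated safety isosurface, otherwise the surface normal scaled by the
    penetration depth (and a stiffness gain)."""
    penetration = safety_margin - distance
    if penetration <= 0.0:
        return (0.0, 0.0, 0.0)       # point has not crossed the isosurface
    return tuple(stiffness * penetration * n for n in normal)

# A point 3 cm from the link, inside a 5 cm safety margin: 2 cm penetration
# pushed back along the +x surface normal.
force = avoidance_force(0.03, (1.0, 0.0, 0.0), 0.05)
```

Because the force grows linearly with penetration, the mechanism is decelerated smoothly as it approaches an obstacle instead of stopping abruptly at contact.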
Elke Reinhuber, Benjamin Seide, Ross Williams
Abstract: Visions of East Asian mythology materialise in a decidedly modern metropolis, a place without a past – two worlds collide. Secret Detours engages the audience with overwhelming vistas in a full spherical presentation, encompassing the viewers from all angles. The movie short is set within a lush Chinese garden, adapted from the great traditions of imperial landscaping (cf. ) – in the Yunnan Garden in the west of Singapore. Four dancers, dressed in the colours of the cardinal directions, examine the spaces, the paths and the detours of the green scenery (cf. ). The spherical video relates to the experience of being surrounded by mythological creatures and their traces inside the garden. As the beautiful layout of the grounds is composed of a range of intersections with multiple meandering paths to choose from, the omnidirectional video similarly invites the viewer to explore the atmosphere between an exquisite selection of trees, shrubs, bushes and pieces of architecture. In 360° environments, the camera is almost objective and the viewer becomes the editor of the piece (cf. ), different to the directed camera and edit of ‘classic’ movies. The question arises of how the author can direct the eyes of the audience with different camera settings, perspective, focus, direction of actors and transitions, and in this way prompt emotions.
McKennon McMillian, Hunter Finney, Jonathan Hopper, J. Adam Jones
Abstract: The Depth Light solves the problem of not being able to view the real world accurately and easily without having to remove the head-mounted display. The Depth Light is activated by a button or trigger press on an HTC Vive controller and consists of a Vive controller, an ultrasonic depth finder, a microcontroller (to send measured distances over serial), a web camera, and a mount for the microcontroller and camera. The device works by finding the distance between the device and the nearest real-world object, taking a sum of these distances, and sending this over serial to a computer as an average. In Unity3D, an object is rendered at the distance sent from the microcontroller. This object is then textured with the video feed from the web camera. This object’s distance changes in the virtual environment in real time as the Depth Light’s microcontroller sends new information. As the distance changes, the scale of the object also changes to keep the object the same size in the field of vision. The data from the Depth Light is handled by a Unity3D plug-in. This plug-in handles all the rendering commands and all of the scaling.
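The averaging and distance-driven rescaling described above can be sketched as follows. The function names and the specific readings are our illustrative assumptions about the plug-in's behaviour, not its actual API.

```python
def smooth_distance(readings):
    """Average a burst of ultrasonic samples to suppress sensor noise,
    mirroring the average the microcontroller sends over serial."""
    return sum(readings) / len(readings)

def panel_scale(distance, base_distance=1.0, base_scale=1.0):
    """Scale the camera-feed panel linearly with its distance so it
    subtends a constant visual angle (same apparent size) to the user."""
    return base_scale * distance / base_distance

# The nearest wall moved from 1 m to about 2 m away:
# render the panel roughly twice as large so it looks the same size.
d = smooth_distance([1.9, 2.0, 2.1])
scale = panel_scale(d)
```

Linear scaling works here because, for a flat panel kept facing the viewer, apparent size falls off inversely with distance.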
Abstract: At the dead end of a party, Jesse looks around to see if she can make a connection. This cinematic virtual reality (CVR) 360-degree film explores what it means to be amongst those brief, late-night moments when strangers come into contact.
Until Jesse 360 takes into account the potential for CVR to create the ‘empathy machine’ by situating the viewer amongst, rather than at a distance from, intimate conversations. At the same time, it questions some of the unwritten rules that have emerged in the first few years of CVR, mainly that dynamic editing and shot length should not be used in case they disorient the viewer. By exploring the possibility of switching between perspectives and providing the viewer with ‘impossible’ viewpoints, Until Jesse 360 challenges our conception of VR space as well as how we can be positioned within it. In this way, it takes into account John Mateer’s point that “existing methods for film can be adapted to immersive presentation so long as they also take into consideration unique aspects of the CVR platform” by playing with editing style whilst making the most of 360-degree space.
Tobias Todsen, Jacob Melchiors, Kasper Wennerwaldt
Abstract: The use of 360VR videos may increase the engagement and attentiveness of students compared to the traditional 2D videos used in medical education (1,2). We therefore developed a stereoscopic 360VR video to demonstrate how to use the WHO’s surgical safety checklist in the operating room (see image 1). With the use of VR technology, we aimed to give medical students a realistic experience of being in the operating room and observing the teamwork that ensures patient safety during surgery. The video was recorded with a Vuze 3D 360 Spherical VR Camera and edited in Final Cut Pro using Dashwood’s 360VR Toolbox workflow plugins.
Abstract: Virtual reality headsets and sound, alongside other elements, enable VR to achieve its ultimate goal: simulating the user’s physical presence in a virtual environment. The user’s input, to alter and interact with the virtual world, is another important factor that has been the subject of extensive research, much of which is in the field of art. Even the user’s ability to look around and move toward a sound source can be considered input. Therefore, user input can be regarded as one of the main elements of virtual reality. User input is a relatively new concept in the arts, but it has been the basic element of video games from the beginning, so it is no surprise that gaming was the starting point of virtual reality. As virtual reality becomes more widespread, it is expected that this technology will be adapted to other fields. It is also realistic to think that, due to the user’s ability to make different decisions, gamification will be adopted alongside this technology. The focus of this project is user-centered art where the user’s input is the fundamental element of the artwork.
Abstract: This is an investigation of how VR can simulate experiences designed for large-scale immersive environments. Immersive display and interaction environments and systems had been utilised in simulation, visualisation, entertainment, the arts and museological contexts for a long time before VR made its resurgence only a few years back. These systems include, amongst others, 360-degree cylindrical projection environments, curved screens, hemispherical projection systems and multi-perspective installations. In comparison to traditional screen-based media, immersive environments provide a unique delivery platform for ultra-high-resolution digital content at real-world scale and for multiple simultaneous viewers. This makes them the ideal stage for impactful experiences in public museums, festivals and exhibitions. Applications and experiences created for a specific platform rely on the complex and costly technical infrastructure they were originally designed for. Descriptions and video documentation only go so far in illustrating an immersive experience. The embodied aspect, the emotional engagement and the dimensional extent, central to immersion, are mostly lost in translation. This project offers a prototypical implementation of a large-scale virtual exhibition incorporating various immersive environments and applications situated within a fictional 3D scene. The focus is on the simulation and conservation of existing applications and on creating a test bed for future projects.
Acknowledgements: SYSTEMS: EPICylinder; Design: S. Kenderdine, J. Shaw; UNSW Expanded Perception and Interaction Centre. AVIE Advanced Visualisation and Interaction Environment and AVIE-SC; Design: J. Shaw with D. Del Favero, A. Harjono, V. Kuchelmeister, M. McGinity; UNSW iCinema Centre for Interactive Cinema Research. RE-ACTOR; Design: S. Kenderdine, J. Shaw with P. Bourke. iDome; Design: P. Bourke, J. Shaw, V. Kuchelmeister; UNSW iCinema. Turntable/Placeworld; Design: J. Shaw, adapted as Turntable by V. Kuchelmeister. ZKM Karlsruhe / UNSW Art & Design. Panorama Screen; Design: J. Shaw with B. Lintermann; ZKM Centre for Art and Media Karlsruhe.
CONTENT: Veloscape (2014); V.Kuchelmeister, L.Fisher, J.Bennett; UNSW Art & Design. City Jam (2007) in AVIE; V. Kuchelmeister; UNSW iCinema. BackOBourke (2009) in iDome; V. Kuchelmeister; UNSW iCinema. Juxtaposition (2011) in Turntable/Placeworld; V. Kuchelmeister; UNSW A&D. Fragmentation (2012) in RE-ACTOR; R. Lepage; Adaptation, by R. Castelli and V. Kuchelmeister; UNSW iCinema, Epidemic. Monsoon (2012) in AVIE; V. Kuchelmeister; UNSW iCinema. Naguar India 360 (2007) in iDome; S. Kenderdine, V. Kuchelmeister, J. Shaw. Catlin Seaview (2014) in iDome; V. Kuchelmeister, R. Vevers; UNSW A&D. Juxtaposition (2011) in ZKM Panorama Screen; V. Kuchelmeister; UNSW A&D. Double District (2009) in ReACTOR; S. Teshigawara developed with V. Kuchelmeister; UNSW iCinema, Epidemic. Parragirls Past Present (2017) in EpiCylinder; A. Davies, B. Djuric, L. Hibberd, V. Kuchelmeister, J. McNally; UNSW A&D. Hawkesbury Journey (2006); V. Kuchelmeister; UNSW iCinema. Conversations @ the Studio (2005) in iDome; J. Shaw, D. Del Favero, N. Brown, V. Kuchelmeister, N. Papastergiadis, S. McQuire, A. Arthurs, S. Kenderdine, K. Sumption, G. Cochrane; UNSW iCinema. iCasts (2008-11); J. Shaw, D. Del Favero; UNSW iCinema. Deconstructing Double District (2010); V. Kuchelmeister, based on Double District (2009) by S. Teshigawara; UNSW A&D. Zeitraum (2012) in AVIE-SC; V. Kuchelmeister; UNSW A&D.