This is an investigation into how VR as a medium is able to capture, reproduce and simulate artistic works designed for immersive environments.
Immersive display and interaction environments and systems were used in simulation, visualisation, entertainment, the arts and museological contexts long before VR made its resurgence a few years ago. These systems include, amongst others, 360-degree cylindrical projection environments, curved screens, hemispherical projection systems and multi-perspective installations.
In comparison to traditional screen-based media, immersive environments provide a unique delivery platform for ultra-high-resolution digital content at real-world scale and for multiple simultaneous viewers. This makes them the ideal stage for impactful experiences in public museums, festivals and exhibitions.
Applications and experiences created for a specific platform rely on the complex and costly technical infrastructure they were originally designed for. Descriptions and video documentation only go so far in illustrating an immersive experience. The embodied aspect, the emotional engagement and the dimensional extent, central to immersion, are mostly lost in translation.
This project offers a prototypical implementation of a large-scale virtual exhibition incorporating various immersive environments at real-world scale, situated within a fictional 3D scene.
The target platform is a commodity room-scale Virtual Reality headset with controllers (HTC Vive). The focus is on showcasing applications designed for those immersive platforms and on developing interface modalities to access and interact with the content.
Immersive display systems frequently employ stereoscopic 3D representation, achieved by active or passive projection and screen technologies. Stereo 3D not only provides binocular disparity and therefore depth perception, but is also an important tool to spatialise content. Objects and scenes within a stereoscopic representation are able to operate in front of the screen. They share space with the audience and produce a more tangible and embodied experience.
Virtual Reality technology, with its inherent binocular design, is an ideal platform to simulate and demonstrate screen-based stereoscopic 3D content. The notions of negative, zero and positive parallax and their spatial relationship to the screen surface remain intact in a VR representation. Other stereoscopic properties, such as perceived depth and scale, also translate directly.
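The relationship between on-screen parallax and perceived depth follows from similar triangles on a planar stereo display. The sketch below is an illustrative formula, not code from the project; the default eye separation of 65 mm is a common assumption:

```python
def perceived_depth(viewing_distance_m, parallax_m, eye_separation_m=0.065):
    """Perceived distance of a fused stereo point from the viewer.

    Standard similar-triangles relation for planar stereo displays:
    positive parallax places the point behind the screen, zero parallax
    on the screen plane, and negative parallax in front of it.
    """
    if parallax_m >= eye_separation_m:
        # Parallax at or beyond the eye separation forces divergence;
        # the point cannot be fused.
        raise ValueError("parallax at or beyond eye separation cannot be fused")
    return viewing_distance_m * eye_separation_m / (eye_separation_m - parallax_m)
```

For example, at a 2 m viewing distance, zero parallax yields a perceived depth of 2 m (on the screen), while a negative parallax of one eye separation halves it, pulling the point into the audience space.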
One challenge with this approach is the technical implementation and adaptation of stereoscopic 3D video content for VR viewing. 3D imagery is formatted as either separate or composited views for the left- and right-eye perspectives. A requirement for implementation in VR is to split the image for the left/right head-mounted display (HMD) channels while keeping precise time synchronicity. Composited 3D video, where the left and right views are mapped either on top of or next to each other, guarantees time synchronicity in playback and is best suited for this application. The video texture is then offset and tiled for the individual HMD channels. Another layer is the warping of the video textures to the specific immersive environment architecture, whether hemispherical, cylindrical or multi-perspective. This is done with appropriate UV maps for the specific geometry.
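The offset-and-tile step can be sketched as a normalised UV window that selects one eye's half of the composited frame. This is an illustrative sketch only; the eye-to-half assignment varies between stereo formats, so the convention below (left eye in the left or top half) is an assumption:

```python
def stereo_uv_window(layout, eye):
    """Return (offset_u, offset_v, tile_u, tile_v) selecting one eye's view
    from a composited stereo video frame, in normalised texture coordinates
    with the origin at the bottom-left."""
    if layout == "side_by_side":
        # Assumed convention: left eye occupies the left half of the frame.
        return (0.0 if eye == "left" else 0.5, 0.0, 0.5, 1.0)
    if layout == "top_bottom":
        # Assumed convention: left eye occupies the top half of the frame.
        return (0.0, 0.5 if eye == "left" else 0.0, 1.0, 0.5)
    raise ValueError(f"unknown layout: {layout}")
```

Applying the same window to both HMD channels of a single video texture keeps the eyes frame-locked, which is exactly why the composited formats are preferred here.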
Another aspect to consider is the translation of physical user interfaces and interaction modalities from the immersive environment to VR. With commodity VR systems, there is no direct haptic feedback available. Mapping VR controller input to the simulated physical user interface is, however, possible. For instance, I utilised a large trackball for various hemispherical iDome applications as a means to control the gaze within 360-degree spherical imagery. In VR, the controller trigger button in combination with proximity and movement can simulate the rotation of the trackball. Visual feedback is provided by the display system as well as the virtual trackball rotation.
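The trigger-plus-movement mapping can be sketched as a rolling-ball model: lateral controller displacement while the trigger is held is converted to arc length on the trackball surface and hence to a rotation of the spherical imagery. All names and the mapping itself are assumptions for illustration, not the project's actual implementation:

```python
import math

class VirtualTrackball:
    """Map VR controller motion (while the trigger is held) onto the
    yaw/pitch of 360-degree imagery, emulating a physical trackball."""

    def __init__(self, radius_m=0.15):
        self.radius_m = radius_m   # assumed radius of the simulated trackball
        self.yaw_deg = 0.0
        self.pitch_deg = 0.0

    def drag(self, dx_m, dy_m, trigger_held):
        """Controller displacement (metres) -> rotation, rolling-ball style:
        angle = arc length / radius. Ignored when the trigger is released."""
        if not trigger_held:
            return
        self.yaw_deg += math.degrees(dx_m / self.radius_m)
        self.pitch_deg += math.degrees(dy_m / self.radius_m)
```

A smaller assumed radius makes the virtual ball more sensitive, mirroring the feel of the physical device; the accumulated yaw/pitch would drive both the imagery and the rendered trackball for visual feedback.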
Other user interface devices and systems for immersive environments and applications, such as marker-based and marker-less object and viewer position tracking, wands, touchpads, consoles with push buttons and pan-tilt devices, can be mapped in a similar fashion. In VR, the modalities include controller and HMD spatial orientation and position, and controller button inputs, possibly in combination with a virtual representation of the physical interface device.
Established VR conventions for locomotion beyond the room-scale tracking region, by either teleportation or transportation, apply in this context. Using a virtual laser pointer to perform selections is also an effective and intuitive way to interact with the VR environment.
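At the core of pointer-based teleportation is a ray-plane intersection: the controller's pointer ray is intersected with the floor to obtain the destination. A minimal sketch under the assumption of a flat horizontal floor at a given height (y-up coordinates):

```python
def teleport_target(origin, direction, floor_y=0.0):
    """Intersect the controller's pointer ray with a horizontal floor plane.

    origin, direction: (x, y, z) tuples in a y-up coordinate system.
    Returns the teleport destination, or None when the ray points at or
    above the horizon and never reaches the floor.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy >= 0.0:
        return None  # ray parallel to or pointing above the floor
    t = (floor_y - oy) / dy  # ray parameter at the floor plane
    return (ox + t * dx, floor_y, oz + t * dz)
```

Production teleport systems typically add arc-shaped (ballistic) pointers and validity checks against the scene geometry, but the planar intersection above captures the basic convention.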
The framework developed for this prototype can also serve as a testbed for future immersive projects. It provides a space for experimentation with content and display architecture before they are developed at full scale. The simulation gives a good sense of how a design works with regard to scale, field of view, peripheral vision, audience position and perspective.
A further motivation concerns the conservation and reach of artworks and immersive experiences created over the last 15 years by the author and collaborators. The highly specialised infrastructure is mostly limited to research labs, visualisation and simulation facilities and museums. A VR implementation has the potential to reach a far broader audience. In addition, hardware and software for immersive environments are mostly custom designed and have a finite life-span.
A VR representation is able to preserve to some degree the audience’s spatial awareness and what the experiences felt like.