• Post published: May 28, 2022

Tampere Imaging Days 7.–8.6.2022

This summer, Tampere University and Business Tampere are organizing a joint event that aims to connect professionals working in the area of imaging, both in academia and industry. The programme of the Tampere Imaging Days includes presentations from leading experts in the fields of photonics, biomedical imaging, computer vision, computational imaging, immersive visual technologies, and more. As part of the event, there will be demos at CIVIT, the Centre for Immersive Visual Technologies, on Wednesday, 8 June 2022.

Demos at CIVIT 8.6.2022

There will be several demos on Wednesday, June 8 at CIVIT as part of the Imaging Days. The demos at CIVIT include: 

1. Volumetric Capture (VoCap) studio

Volumetric video capture allows capturing moving objects from all sides and creating photorealistic digital twins of humans for various mixed reality applications. Think of broadcasting news interviews inside your living room with the help of virtual reality, or of remote experts explaining exhibits, professors virtually guiding you through knowledge spaces, or your favourite celebrities promoting interesting products. Visit the VoCap studio to experience how performers can be volumetrically reconstructed.

2. Video of the KUKA robotic arm

Industrial robotic arms allow for repeatable, high-precision motion. CIVIT has a KUKA robotic arm for precise manipulation of cameras – for instance, for capturing dense light fields of static objects and for developing and testing machine vision algorithms. One major use has been the development of remote maintenance tools for the nuclear fusion reactor ITER. While the robotic arm is still in use in the temporary CIVIT premises, we show a video of the arm being used to validate pose estimation with custom retro-reflective elements designed to withstand radiation and high temperatures.

3. Camera tracking

Match moving is a visual effects (VFX) creation process in which virtual objects are augmented into a live video with the appropriate position, orientation, and scale to match the background objects of the video. Post-processing software such as Cinema 4D and Adobe After Effects automatically selects and tracks visual features in a video to estimate the camera motion and augment virtual objects. However, artists need to fine-tune this feature selection to make the camera tracking precise enough for their needs. Our work explores how multi-sensor data, such as colour images and depth data from a Time-of-Flight (ToF) sensor, can be used to track the camera motion accurately in real time. This camera tracking pipeline allows artists to see augmented objects in the video feed in real time and helps them focus on their takes.
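As a rough illustration of why depth data helps: once features have been tracked between two frames and each feature has a depth value, the tracker obtains two corresponding sets of 3D points, and the camera motion is the rigid transform aligning them. The sketch below is a simplification (not CIVIT's actual pipeline) that recovers such a transform from noise-free synthetic points using the classic Kabsch algorithm:

```python
import numpy as np

def estimate_rigid_motion(P, Q):
    """Estimate rotation R and translation t with q_i = R @ p_i + t
    from two corresponding 3D point sets (Kabsch algorithm)."""
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - pc).T @ (Q - qc)                  # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = qc - R @ pc
    return R, t

# Synthetic stand-in for depth-backprojected features in frame t
rng = np.random.default_rng(0)
P = rng.uniform(-1.0, 1.0, (50, 3))
# Known camera motion: 5-degree yaw plus a small translation
a = np.deg2rad(5)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.02, 0.0, 0.1])
Q = P @ R_true.T + t_true                      # the same features in frame t+1

R_est, t_est = estimate_rigid_motion(P, Q)
print(np.allclose(R_est, R_true, atol=1e-6),
      np.allclose(t_est, t_true, atol=1e-6))
```

A real pipeline would add outlier rejection (e.g. RANSAC) and handle noisy depth, but the core geometric step is the same.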

4. Xsens Motion Capture (MoCap)

The Xsens suit is a high-end inertial MoCap system. It can be used in applications such as sports analysis, biomechanics, gaming, film, animation, TV broadcasting, and live entertainment. The system is portable and fast to set up, which makes it well suited for mobile applications such as filming on location or analysing long-distance running. In the demo, a professional dancer wears an Xsens suit and performs a dance to music. The MoCap data from the suit is streamed into Unreal Engine, where the captured motion drives a virtual character.

5. Omnidirectional treadmill

Navigation is one of the most common interactions in virtual reality. It allows users to explore the virtual world and feel immersed in it. However, navigation in virtual reality is challenging due to the limited space of physical rooms. The most common solution is teleportation: the user aims at a spot and is instantly transported there. Because this instant teleportation is purely visual, with no actual motion, it can cause VR sickness. The Omnideck 360-degree treadmill solves this issue by allowing users to walk naturally on a limited platform while exploring an unlimited virtual space. Experience it yourself by walking through a limitless space on the Omnideck.

CIVIT equipment and facilities - VR treadmill

6. Remote operation

Remote operation is a rising trend in many fields that utilize mobile work machines. CIVIT has been involved in research in this area, has provided hardware for development and testing, and is currently expanding the capabilities of its infrastructure. We demonstrate a proof-of-concept system developed within the project MIRO – Mixed Reality for Operating Mobile Machines – using CIVIT hardware: cameras, sensors, networking, and the Robotnik Summit Steel XL robot.

7. HoloVizio

Conventional 3D displays recreate a limited number of perspective views – in most cases only two (i.e. stereo). In contrast, a light field display recreates a light field with continuous parallax, which allows multiple users to explore a 3D scene from arbitrary viewing positions inside the display's viewing zone. No glasses or user tracking are needed. The demo shows various rendered and real-life content with the aim of giving viewers a good insight into the capabilities of state-of-the-art wide field-of-view light field displays. We also demonstrate the results of our research on reconstructing densely sampled light fields from a sparse set of camera views, utilizing light field sparsification in the shearlet domain.
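For a sense of scale (with illustrative numbers, not the HoloVizio's actual specifications): where a stereoscopic display emits exactly two views, a light field display spreads many views across its viewing zone, one per small angular step, which is what makes the parallax appear continuous:

```python
# Illustrative figures only - not the specifications of the demo display.
fov_deg = 70.0          # assumed horizontal viewing zone of the display
angular_res_deg = 0.8   # assumed angle between adjacent emitted views

num_views = fov_deg / angular_res_deg
print(f"~{round(num_views)} distinct perspective views across the viewing zone")
# A stereoscopic display, by comparison, provides exactly 2.
```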

8. Head-up display

Head-up displays (HUDs) are transparent display systems, typically used in the automotive and aviation industries, that show relevant content without requiring the viewer to change their viewing direction. With commercial HUDs in automobiles, the information appears right behind the windshield. This can ease the driving experience, as the driver no longer switches their gaze up and down between the road and the instrument cluster. The accommodation (focus) of the eyes, however, must still change back and forth between the display and the road, as the display depth does not match the typical accommodation depth while driving. We demonstrate a prototype HUD developed at CIVIT that addresses this issue by adopting techniques from light field 3D displays. It can show the information at a virtual distance of 2.5 m from the viewer, matching the typical accommodation distance as well as the gaze of the driver.
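As a back-of-the-envelope illustration of how an optic can push display content out to a comfortable distance: placing the display panel just inside the focal length of a collimating optic produces a magnified virtual image far behind the windshield. The numbers below are illustrative assumptions, not the parameters of the CIVIT prototype:

```python
# Thin-lens sketch of placing a HUD's virtual image at 2.5 m.
# All values are illustrative assumptions.
f = 0.20        # focal length of the collimating optic (m), assumed
d_image = -2.5  # desired virtual image distance (m); negative = virtual image

# Thin-lens equation: 1/d_object + 1/d_image = 1/f
d_object = 1.0 / (1.0 / f - 1.0 / d_image)
print(f"Place the display {d_object * 100:.1f} cm from the optic")
```

With these assumed values, the panel sits roughly 18.5 cm from a 20 cm focal-length optic – just inside the focal length, as expected for a virtual image.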

9. Near-eye holographic and light-field display prototypes at Plenoptics Lab

Holograms encode rich information about a given three-dimensional (3D) scene in terms of geometry, colour, and texture, which makes them highly desirable for 3D display applications. The ability to accurately reconstruct the wavefronts of 3D scenes enables holographic displays to provide continuous motion parallax and deliver correct visual cues of binocular parallax and focus. Such displays have yet to find commercial success; however, thanks to rapid recent advances in optics and display technologies, holographic displays are now a more appealing research topic than ever before. In the Plenoptics laboratory, we have a benchtop near-eye holographic display setup composed of standard optical components, an LED light source, and a high-resolution reflective phase-only spatial light modulator. The prototype enables research and development work on both efficient methods for holographic data synthesis and novel optical setups for such displays. In this demo, we show the working principles of the holographic display prototype.
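To give a flavour of what "holographic data synthesis" involves for a phase-only spatial light modulator: the modulator can only shape the phase of the incoming light, so a phase pattern must be computed whose diffracted field forms the desired image. The toy sketch below uses the classic Gerchberg-Saxton iteration with a simple Fourier-transform propagation model; it illustrates the textbook principle, not necessarily the lab's actual synthesis algorithm:

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50):
    """Compute a phase-only hologram whose far-field (Fourier-plane)
    amplitude approximates target_amplitude (Gerchberg-Saxton iteration)."""
    phase = 2 * np.pi * np.random.default_rng(0).random(target_amplitude.shape)
    for _ in range(iterations):
        far = np.fft.fft2(np.exp(1j * phase))                # propagate to image plane
        far = target_amplitude * np.exp(1j * np.angle(far))  # impose target amplitude
        near = np.fft.ifft2(far)                             # propagate back to SLM
        phase = np.angle(near)                               # keep the phase only
    return phase

# Toy target image: a bright square on a dark background
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
target /= np.linalg.norm(target)

phase = gerchberg_saxton(target)

# Check the reconstruction the SLM phase pattern would produce
recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
recon /= np.linalg.norm(recon)
corr = float((recon * target).sum())
print(f"correlation with target: {corr:.2f}")
```

In the real prototype, the propagation model, wavelength, and SLM pixel pitch all enter the computation, but the iterate-and-constrain structure is the same basic idea.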

The benchtop near-eye holographic display setup
The benchtop near-eye light field display evaluation and characterization setup