Post published: October 31, 2022

CIVIT Workshop event: Collaboratec

Immersive visual technology in collaboration with work machine and creative digital industries

CIVIT is organizing a workshop event on Tuesday 1.11.2022 at Sähkötalo Auditorium SA203. The workshop is part of our Collaboratec project, which aims at increasing the use of CIVIT’s unique equipment and novel research results in the work machine and creative digital industries. We invite everyone interested in CIVIT’s activities to join, and especially welcome members from the work machine and creative digital industries. Come and enjoy the presentations from world-renowned experts in the field and experience our inspiring demonstrations of technologies such as the new volumetric capture studio!

Event agenda

9:00 – 9:20 Opening words, an overview of CIVIT and project Collaboratec

  • Pauli Kuosmanen, Director, Research and Innovation Services, Welcome and opening words
  • Atanas Gotchev, Director of CIVIT, Introduction of CIVIT and current endeavours
  • Jussi Rantala, CIVIT Staff Scientist, Project Collaboratec

9:20 – 11:00 Speakers and topics from the creative digital industries

  • 9:20 – 10:00 Kostas Pataridis, Mantis Vision, The Mantis volumetric capture Ring: from inception to turnkey solution
  • 10:00 – 10:30 Aljosa Smolic, Hochschule Luzern (HSLU), Creative Experiments in XR with Volumetric Video (remote)
  • 10:30 – 10:50 Ayoung Kim, Tampere University, Computational hyperspectral imaging with diffractive optics and deep residual network
  • 10:50 – 11:00 Devangini Patel, Tampere University, Introducing CIVIT’s volumetric capture studio

11:00 – 11:30 Coffee break

11:30 – 12:30 Speakers and topics from the work machine industries

  • 11:30 – 12:00 Reza Ghabcheloo, MORE, Tampere University, Perception sensors for autonomous mobile machines
  • 12:00 – 12:30 Olli Suominen, MIRO, Tampere University, Visual technologies & the mobile work machine industry

12:30 – 12:50 Overview of demos

  • 12:30 – 12:40 Devangini Patel, CIVIT, Overview of the demos to be run in CIVIT
  • 12:40 – 12:50 Live demo of holopresence: people interacting through connected Mantis Rings

13:00 – 14:00 Demos in CIVIT; lunch break

Presentations

Creative digital industries

The Mantis volumetric capture Ring: from inception to turnkey solution

The presentation gives an overview of Mantis Vision’s volumetric capture Ring. We identify the key elements and the technical challenges on the road from capturing static depth maps to real-time 4D reproduction. Specifically, the following topics are addressed: structured light technology, data compression in the Ring, and 3D reconstruction (atlas generation and preparation for sharing on the internet). We also focus on current research topics such as the development of our data infrastructure, and niche use cases such as large-scale projection of scanned data.

Presenter: Kostas Pataridis – Mantis Vision

Kostas is an electronic engineer and computer scientist with a deep interest in computer graphics and image processing. He has been involved with Mantis Vision and its volumetric capturing systems for nearly 10 years. His current research focus is on high-performance visualization and compression. His personal passion is the demoscene: creating abstract real-time graphics with code.

 

Creative Experiments in XR with Volumetric Video

Volumetric video (VV) is an emergent digital medium that enables novel forms of interaction and immersion within eXtended Reality (XR) applications. VV provides 3D representations of real-world scenes and objects that can be visualized from any viewpoint or viewing direction, an interaction paradigm commonly seen in computer games. This makes it possible, for instance, to bring real people into XR. Based on this innovative media format, it is possible to design new forms of immersive and interactive experiences that can be visualized via head-mounted displays (HMDs) in virtual reality (VR) or augmented reality (AR). The talk will showcase a variety of creative experiments applying VV for immersive storytelling in XR, developed by the V-SENSE lab and the startup company Volograms. The presentation will be held remotely.

Presenter: Aljosa Smolic – Hochschule Luzern (HSLU), Switzerland

Dr. Aljosa Smolic is a lecturer in AR/VR in the Immersive Realities Research Lab of Hochschule Luzern (HSLU). Before joining HSLU, Dr. Smolic was the SFI Research Professor of Creative Technologies at Trinity College Dublin (TCD), Senior Research Scientist and Head of the Advanced Video Technology group at Disney Research Zurich, and Scientific Project Manager heading a research group at the Fraunhofer Heinrich-Hertz-Institut (HHI), Berlin. At Disney Research he led over 50 R&D projects in the area of visual computing that have resulted in numerous publications and patents, as well as technology transfers to a range of Disney business units. Dr. Smolic served as Associate Editor of the IEEE Transactions on Image Processing and the Signal Processing: Image Communication journal. He was Guest Editor for the Proceedings of the IEEE, IEEE Transactions on CSVT, IEEE Signal Processing Magazine, and other scientific journals. His research group at TCD, V-SENSE, worked on visual computing, combining computer vision, computer graphics and media technology to extend the dimensions of visual sensation. This includes immersive technologies such as AR, VR, volumetric video, 360/omni-directional video, light fields, and VFX/animation, with a special focus on deep learning in visual computing. Dr. Smolic is also co-founder of the start-up company Volograms, which commercializes volumetric video content creation. He received the IEEE ICME Star Innovator Award 2020 for his contributions to volumetric video content creation and TCD’s Campus Company Founders Award 2020.

Work machine industries

Perception sensors for autonomous mobile machines

We will briefly review some of the perception sensors used on autonomous mobile machines, present some recent results on SLAM (Simultaneous Localization and Mapping) quality using these sensors, and show how these sensors can be calibrated with respect to the other sensors and the body of the vehicle. We will also touch on our current research on cascaded imaging radar.

Presenter: Reza Ghabcheloo – MORE, Tampere University

Reza Ghabcheloo is an associate professor of robotics and autonomous machines. He is responsible for the Robotics major at TAU. He has approximately 80 journal and conference articles and is currently the PI of active projects worth €2 million, with some 18 researchers. He is the coordinator of a European industrial doctorate on intelligent working machines. His motto is to aim at practically relevant research on solid theoretical grounds. He has a strong position of trust among leading work machine OEMs, with 8 of his doctoral students co-supervised by industry.

 

Visual technologies & the mobile work machine industry

An overview of the latest news regarding the application of visual technologies in the mobile work machine industry, including the status of the Research to Business project MIRO (Mixed Reality for Operating Mobile Machines), recent sightings of applications in mobile machines, and what the work machine industry can steal from the automotive sector.

Presenter: Olli Suominen – MIRO, Tampere University

Olli Suominen received his MSc (Tech) in Information Technology from Tampere University of Technology (2012) with a major in Signal Processing. He is currently a doctoral researcher in the 3D Media Group at the Faculty of Information Technology and Communication Sciences at Tampere University. His work focuses on solving visibility issues in industrial applications using visual technologies, with 10 years of experience in initializing, implementing and managing industrially driven collaboration with the mobile work machine industry and the development of radiation-tolerant remote maintenance for fusion energy applications.

Demonstrations

The demonstrations are held in CIVIT. Grab a sandwich and visit the demo booths, which are introduced in more detail below. We look forward to discussing how to utilize CIVIT’s technology and expertise for companies’ development needs!

Image: The volumetric capture studio setup in a laboratory. A mannequin stands inside the system, surrounded by the cameras and booms of the studio setup.

Volumetric capture (VoCap)

At CIVIT, we have Finland’s first volumetric capture studio. Volumetric capture allows capturing 3D videos of humans and creating photorealistic digital twins for various mixed reality applications. Some examples of such mixed reality applications are (1) broadcasting news interviews inside your living room on your TV or virtual reality headset, (2) virtual professors walking with you and guiding you through the physical university, or (3) your favourite celebrities promoting interesting products in augmented reality. The volumetric capture studio has 32 camera units arranged to capture a cylindrical volume 1.6 meters in diameter. The studio captures human appearance and body motion from various angles around the body. The capture can be exported to various 3D formats such as .obj, .ply, .gltf and .glb, which can be easily integrated into software such as Unity, Unreal Engine, three.js, PlayCanvas and Babylon.js.
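As a minimal sketch of that last step, the snippet below loads one exported frame with the Python trimesh library and re-exports it to another of the listed formats; the file names are hypothetical placeholders, not outputs of the studio software.

```python
# A minimal sketch, assuming an exported frame named "capture_frame.glb"
# (hypothetical). trimesh reads the .obj/.ply/.gltf/.glb formats above.
import trimesh

# A .glb normally loads as a scene graph; force='mesh' flattens it
# into a single textured mesh.
mesh = trimesh.load("capture_frame.glb", force="mesh")
print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} faces")

# Re-export to another supported format, e.g. .obj for a game engine
# or DCC pipeline.
mesh.export("capture_frame.obj")
```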

Target: Creative industry 

Holopresence  

Holopresence is the next step beyond 2D video calling and conferencing. With volumetric capture technology, your 3D hologram can be teleported to any physical location in life size, with lifelike appearance, in real time. This technology captures and relays body language, thereby increasing the impact of the message. In this demo, you will experience a live stream with experts from across the world.

Multimodal fusion

Machines such as autonomous cars and robots may have multiple sensors that capture different kinds of information, such as temperature, depth and colour. Most methods process each data stream separately and then fuse the decisions. Research has shown that fusing the data first and making decisions on the combined data is more accurate and requires fewer resources. The purpose of multimodal fusion is to combine data from multiple sensors so that their measurements of the same world point are aligned. In this demo, we compare human segmentation algorithms that use (1) the RGB image alone versus (2) a fusion of colour and depth images. An accurate segmentation algorithm can in turn be used to measure objects and their distances from the camera accurately. Multimodal fusion can also be used for visual tasks such as object recognition and object detection.
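To make the contrast concrete, here is a minimal sketch contrasting decision-level (late) fusion with data-level (early) fusion for RGB-D human segmentation. The tiny single-layer networks are untrained stand-ins for illustration only, not the demo’s actual models.

```python
# A minimal sketch of late vs. early fusion, assuming untrained stand-in
# models; a real demo would use trained segmentation networks.
import torch
import torch.nn as nn

rgb = torch.rand(1, 3, 240, 320)    # colour image, 3 channels
depth = torch.rand(1, 1, 240, 320)  # aligned depth map, 1 channel

# Late fusion: run a separate model per modality, then average the
# per-pixel class scores (human vs. background).
rgb_net = nn.Conv2d(3, 2, kernel_size=3, padding=1)
depth_net = nn.Conv2d(1, 2, kernel_size=3, padding=1)
late_scores = (rgb_net(rgb).softmax(1) + depth_net(depth).softmax(1)) / 2

# Early fusion: stack the modalities into one 4-channel input so a
# single model can learn joint colour+depth features directly.
fused_net = nn.Conv2d(4, 2, kernel_size=3, padding=1)
early_scores = fused_net(torch.cat([rgb, depth], dim=1)).softmax(1)

mask = early_scores.argmax(1)  # 0 = background, 1 = human, per pixel
```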

Target: Industrial machines

Depth map super-resolution

Colour cameras have improved greatly in terms of resolution, which helps to improve the accuracy of computer vision algorithms. Depth cameras are used to compute distances and measurements of objects, but in comparison to colour cameras they have limited depth map resolution. In addition, resolution might have to be traded off against frames per second (fps) or the maximum distance range. Depth map super-resolution aims to increase the resolution of depth maps. It can be used, for example, to enhance visualizations that indicate the distance of objects in front of industrial machines to human operators on large displays.
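One common approach is guided upsampling: interpolate the low-resolution depth map to the colour camera’s resolution, then refine it so depth edges follow colour edges. The sketch below illustrates this with OpenCV’s guided filter; the file names and filter parameters are hypothetical placeholders, not the demo’s actual method.

```python
# A minimal sketch of colour-guided depth upsampling. Requires
# opencv-contrib-python for cv2.ximgproc; input files are hypothetical.
import cv2
import numpy as np

color = cv2.imread("color_1280x720.png")                    # high-res guide
depth_lo = cv2.imread("depth_320x180.png", cv2.IMREAD_UNCHANGED)

# Step 1: naive bilinear upsampling to the colour resolution.
h, w = color.shape[:2]
depth_up = cv2.resize(depth_lo, (w, h),
                      interpolation=cv2.INTER_LINEAR).astype(np.float32)

# Step 2: edge-aware refinement, so depth discontinuities snap to
# colour edges instead of staying blurred by the interpolation.
depth_sr = cv2.ximgproc.guidedFilter(guide=color, src=depth_up,
                                     radius=8, eps=100.0)
```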

Target: Industrial machines 

Camera tracking 

Match moving is a visual effects (VFX) creation process in which virtual objects are augmented into a live video with appropriate position, orientation and scale to match the background objects of the video. Post-processing software such as Cinema 4D and Adobe After Effects automatically selects and tracks visual features in a video to estimate the camera motion and augment virtual objects. However, artists need to fine-tune this visual feature selection to make camera tracking precise enough for their needs. Our work explores how multi-sensor data such as colour images, depth data and IMU (inertial measurement unit) data can be used to accurately track the camera motion in real time. This tracking algorithm is not affected by moving objects in the scene. The camera tracking pipeline allows artists to see augmented objects in the video feed in real time and helps them focus on planning their takes.
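As a toy illustration of the sensor fusion involved, the sketch below blends fast but drifting gyroscope integration with slower, drift-free visual pose estimates using a complementary filter for a single rotation axis. This is only a didactic stand-in under assumed inputs, not the pipeline’s actual estimator, which tracks full 6-DoF camera poses.

```python
# A minimal sketch of visual-inertial fusion for one angle (yaw, radians).
# All inputs are hypothetical; real pipelines fuse full 6-DoF poses.

def fuse_yaw(yaw, gyro_z, dt, visual_yaw=None, alpha=0.98):
    """One complementary-filter step."""
    # Predict: integrate the gyro's angular rate (high rate, drifts).
    yaw = yaw + gyro_z * dt
    # Correct: when a camera frame is tracked, blend toward the visual
    # estimate to cancel the accumulated gyro drift.
    if visual_yaw is not None:
        yaw = alpha * yaw + (1.0 - alpha) * visual_yaw
    return yaw

yaw = 0.0
# (gyro rate in rad/s, visual yaw estimate or None when no frame arrived)
for gyro_z, visual_yaw in [(0.10, None), (0.10, 0.002), (0.10, None)]:
    yaw = fuse_yaw(yaw, gyro_z, dt=0.01, visual_yaw=visual_yaw)
print(f"fused yaw: {yaw:.4f} rad")
```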

Target: Creative industry

Remote operation demo

The latest version of the remote operation platform from the MIRO (Mixed Reality for Operating Mobile Machines) project, with new features such as AR visualization of terrain steepness and vehicle trajectory, and a new and improved display setup.

Aerial Roots

Aerial Roots is a site-specific adaptation and preview of a solo concert piece for violin, video, and electronic sounds that Timothy Page composed for Finnish virtuoso violinist Eriikka Maalismaa. Among the sound sources for the work’s electronics are field recordings gathered in Hong Kong in late February 2022, a historical moment in which the silence following political upheaval coincided with the peak of the catastrophic fifth wave of the pandemic. It was possible, for example, to record the creaking of bamboo swaying lightly in the wind in the village of Wu Kau Tang with nearly zero intrusion from the ambient urban soundscape. For the composer, this period conjured images of Hong Kong returning to a state of nature.

Collaboratec

Collaboratec (Collaboration model for using immersive visual technologies in work machine and creative digital industries) aims at building a collaboration model through which CIVIT research infrastructure and our know-how of new immersive technologies can be used effectively as part of companies’ development needs. The project is funded by the European Regional Development Fund (ERDF).

Venue

Sähkötalo Auditorium SA203

Tampere University, Hervanta Campus (Hervanta Campus Map)
Korkeakoulunkatu 3,
33720 Tampere

Parking and Arriving: Hervanta Campus is about 7 kilometres from Tampere city centre. Several bus lines and tram line 3 run between the city centre and Hervanta. Visitors may also park their cars in the car parks located next to and in front of the parking garage, as well as in the parking garage on Korkeakoulunkatu.

Find out more about arriving at CIVIT here.