CIVIT 10th Anniversary Workshop
- Kampusareena A223 and Sähkötalo SA201, Hervanta Campus, Tampere University
- 09:00–17:00 on Thursday 27.11.2025
This autumn, we celebrate 10 years of the Centre for Immersive Visual Technologies (CIVIT) at Tampere University! A decade ago, CIVIT opened its world-class premises and launched with an international workshop that brought together leading experts in visual technologies. This November, we mark the 10-year milestone with a full-day event packed with insights, demos, and networking.
Whether you’re a researcher, industry professional, or tech enthusiast, don’t miss this opportunity to dive into the future of visual technologies.
Thank you for your interest in the CIVIT 10th Anniversary Workshop. Unfortunately, registration is now closed as the event has reached full capacity.
Event Program
09:00 – 09:30 | Coffee & networking, Kampusareena A223
09:30 – 09:40 | Welcoming words
- Jarmo Takala, Vice President, Stakeholder Relations and Partnership, Tampere University
09:40 – 10:00 | CIVIT 10 and beyond
- Atanas Gotchev, Professor, Signal Processing, Tampere University
10:00 – 10:40 | Seeing the Road Ahead: A Decade of Automotive Camera Technology
- Martin Punke, Head of Camera Product Technology at Aumovio
10:40 – 11:20 | Foundation Models in Driver Assistance & Cockpit Interaction
- Frederik Zilly, Lead Expert in Generative AI at Bosch
11:20 – 12:00 | Visual technologies in Autonomous Driving Development and Testing
- Peter Kovacs, Senior Vice President of aiData at aiMotive
12:00 – 13:00 | Lunch, offered by CIVIT
13:00 – 13:40 | From Metrics to Users: Understanding QoE in Extended Reality
- Federica Battisti, Professor at University of Padua
13:40 – 14:20 | Human Vision Insights for Next-Gen Display Systems
- Tatjana Pladere, Head of the Department of Optometry and Vision Science, University of Latvia
14:20 – 15:00 | Realistic 3D for Reality – Immersive Remote Operation
- Mårten Sjöström, Professor at Mid Sweden University
15:00 – 17:00 | Demos & networking, Sähkötalo SA201, food and drinks
Explore immersive environments on the Omnideck with a virtual forest, step inside our volumetric capture studio, and see the future of displays in the display studio. Learn about our advanced image quality assessment and accurate spectral response measurement systems.
17:00 | Event ends
Presentations
Seeing the Road Ahead: A Decade of Automotive Camera Technology
This presentation explores the evolution of automotive camera systems, focusing on key technological advancements and market trends. We trace the development of image sensors from early 1.3MP designs to modern 8.3MP sensors, highlighting innovations like LfM (LED flicker mitigation) and the shift to smaller pixel sizes. In optics, we examine the move from hybrid lens assemblies to full-glass designs—and the recent trend reversing that. The rise of sensor modeling is also discussed, showcasing how simulation frameworks are now essential for performance validation and system integration. We then cover the complexities of manufacturing and testing high-performance camera modules. Finally, the presentation reviews the shifting market landscape over recent years—from ADAS to robotaxis and robotrucks, and now a renewed focus on ADAS and automated driving (AD). This overview provides a comprehensive look at the current state and future direction of automotive vision technology.
Martin Punke, Head of Camera Product Technology at Aumovio
Dr. Martin Punke leads the Camera Product Technology group at Aumovio. His team works on the concept development, engineering, and productization of camera systems for ADAS applications, with a focus on optics, image sensors, and image quality. Camera sensor modelling is also a recent work product of his group. In his previous positions at Nokia, he was responsible for camera, flashlight, and illumination technologies in mobile phones. Martin received the Dipl.-Ing. and Ph.D. degrees in electrical engineering and information technology from the University of Karlsruhe, Germany, in 2003 and 2007, respectively. In his Ph.D. thesis, he worked on organic semiconductor devices for micro-optical applications.
Foundation Models in Driver Assistance & Cockpit Interaction
Bosch is developing advanced foundation models (GPT-like AI technology) to enhance automotive Advanced Driver Assistance Systems (ADAS) and cockpit interactions, moving beyond today’s limited perception that represents environments merely as boxes and lines. These AI systems provide context-sensitive understanding of driving scenarios, enabling vehicles to handle corner cases like lost cargo, accidents, construction sites, and adverse weather conditions that traditional systems struggle with. Bosch’s unique approach involves distilling large foundation models into smaller, embedded versions that can run locally on current automotive platforms, making them suitable for real-time vehicle deployment across different market segments. For cockpit applications, the technology enables human-like mobility assistants with context awareness, personalization, and proactive capabilities, processing voice commands, vehicle sensors, and environmental data to detect driver intent accurately. By combining AI for both ADAS and cockpit systems, Bosch aims to fundamentally transform the driving experience through improved safety, reasoning capabilities, and natural interaction between drivers and their vehicles.
Frederik Zilly, Lead Expert in Generative AI at Bosch
Dr. Frederik Zilly is Lead Expert for Generative AI at Robert Bosch GmbH, where he drives the development of Vision Language Action Models and data-centric AI platforms for automated driving. With a PhD in Visual Computing from TU Berlin and more than 15 years of experience in computer vision and autonomous systems, he has contributed to projects ranging from the DARPA Urban Challenge to Level 2++ highway automation. He is author and co-author of over 30 research papers and patents and a regular speaker at international conferences. In his current work, he focuses on leveraging multimodal foundation models to make automated driving more capable, robust, and scalable.
Visual technologies in Autonomous Driving Development and Testing
The development of Autonomous Driving (AD) and Advanced Driver Assistance Systems (ADAS) relies on both visual and non-visual data captured by multimodal sensor systems mounted on vehicles. In this session, we will draw parallels between the underlying technologies used in 3D display systems and those powering ADAS and AD applications. We will demonstrate how recent advances in visual AI—such as neural rendering and 3D scene reconstruction—can accelerate the simulated testing of vehicle software using digital twins of realistic environments, and how automated data-processing tools support the creation of these digital twins.
Peter Kovacs, Senior Vice President of aiData at aiMotive
Péter Tamás Kovács is currently Senior Vice President of aiData at aiMotive, a company developing artificial intelligence technologies for autonomous cars. He joined AdasWorks (now aiMotive) in 2016 as a senior algorithm researcher and has since held various positions in the company, experiencing multiple aspects of the research and development of autonomous driving systems and their simulation and data needs. Before that, he worked at Holografika from 2006, serving as the company's CTO from 2009 to 2015, where he developed proprietary technologies in 3D visualization, including the real 3D light-field display product line HoloVizio and related technologies. He received an MSc degree in Computer Science from the Budapest University of Technology and Economics and a PhD in Signal Processing from Tampere University of Technology.
From Metrics to Users: Understanding QoE in Extended Reality
Extended Reality is rapidly transforming the way we interact with digital content and physical environments. As these technologies advance, the key to their broad adoption lies in understanding and optimizing users’ Quality of Experience (QoE), which accounts not only for system performance but also for perception, interaction, comfort, and engagement. This talk will explore the evolution of the concept of QoE in XR and tackle how traditional QoE concepts are being redefined in immersive environments.
Federica Battisti, Professor at University of Padua
Federica Battisti is Associate Professor in the Department of Information Engineering at the University of Padua. Her main research interests are signal and image processing, with a focus on subjective quality analysis of visual content. In particular, she is currently investigating collaborative and immersive XR environments. She has published 140+ papers in international conferences and journals and has contributed to IEEE and ITU standards. She is an IEEE Senior Member, Chair of the EURASIP TAC VIP, and Editor-in-Chief of Elsevier Signal Processing: Image Communication.
Human Vision Insights for Next-Gen Display Systems
From a vision-care standpoint, prolonged everyday use of near-eye displays in training and work requires systems that align with the capabilities and limits of the human visual system. Because visual functions vary widely across individuals, how can these differences be effectively accommodated? Current near-eye display solutions are largely standardized—optimized for users with clinically normal vision who can handle visual stress—excluding a substantial portion of the population with ocular-accommodation and binocular-vision issues, as well as other sources of variation in visual performance. In this talk, I will discuss why accounting for individual differences in the visual system matters for developing inclusive, next-generation display systems and will share insights from our interdisciplinary efforts to deconstruct augmented-vision functionality.
Tatjana Pladere, Head of the Department of Optometry and Vision Science, University of Latvia
Tatjana Pladere is the Head of the Department of Optometry and Vision Science at the University of Latvia, which focuses on bridging fundamental science with applied research to deliver evidence-based, user-centric solutions that support visual health, inform innovative technology development, and improve quality of life. Tatjana joined the department as a laboratory assistant during her Clinical Optometry studies and advanced to principal investigator in the same year she completed her Ph.D. in Physics. Her doctoral work—developed in close collaboration with an industry partner—received the Latvian Academy of Sciences Award for the Most Significant Achievement in Applied Science. Today, her group focuses on translating vision science and optometry expertise to make next-generation vision technology more inclusive and tailored to individual user needs.
Realistic 3D for Reality – Immersive Remote Operation
Forestry, mining, and other industrial applications may benefit from remote operation to improve personal safety, reduce costs, and increase availability. Immersive media has a high potential to enable remote operation as it fulfils many aspects of a natural interaction with the remote scene. This talk will consider advantages and requirements on a remote operation system, technical solutions to enable 6 degrees-of-freedom viewing, learnings from augmented telepresence, and findings with respect to user performance and experience.
Mårten Sjöström, Professor at Mid Sweden University
Mårten Sjöström received the M.Sc. degree from Linköping University (1992), the Licentiate of Technology degree from the Royal Institute of Technology, Stockholm (1998), and the Ph.D. degree from the École Polytechnique Fédérale de Lausanne (2001). He was with ABB (1993-1994) and with CERN (1994-1996), involved in projects on signal processing. In 2001, he joined Mid Sweden University, where he was appointed Associate Professor (2008) and Full Professor of Signal Processing (2013). He is head of research education in Computer and System Science (2013-) and Computer Engineering (2020-). He is head and founder of the Realistic 3D Research Group (2007-). He has served as Associate Editor for IEEE Transactions on Image Processing (2022-) and for the SPIE Journal of Electronic Imaging (2018-2022). He is a board member of High Performance Computing Centre North (2019-). His current research interests include visual AI, machine learning for multidimensional signal processing and imaging, and system modelling and identification.
Why join us?
- Explore the latest in immersive tech, image quality assessment, automotive imaging, and more
- Hear from top researchers and industry specialists
- Connect with Tampere’s vibrant tech and academic community
Program Highlights:
- Keynote talks by renowned experts and CIVIT partners
- Live demonstrations showcasing real-world applications
- Networking sessions designed to help you build valuable connections
Venue
Kampusareena A223 and Sähkötalo SA201
Tampere University, Hervanta Campus
Korkeakoulunkatu 7,
33720 Tampere
Parking and Arriving: Hervanta Campus is about 7 kilometres from Tampere city centre. Several bus lines and tram line 3 run between the city centre and Hervanta. Visitors may also park in the car parks located next to and in front of the parking garage on Korkeakoulunkatu, as well as in the garage itself.
