Seven projects from SIC selected for funding in the joint funding scheme of the Saarland and Intel

Professor Thorsten Herfet, State Secretary Elena Yorgova-Ramanauskas, State Secretary Wolfgang Förster, Intel Labs Director Ravi Iyer (Emerging Systems Labs), Minister Jakob von Weizsäcker, Intel Director Nilesh Jain (Emerging Visual/AI System Research Lab) and University Vice President Roland Rolles.


Chip manufacturer Intel and the Saarland are supporting innovative research projects in computer science. A corresponding cooperation agreement was signed last year. Seven research projects, funded in equal parts by Intel and the state as part of the “Future of Graphics and Media” program, have now been presented at a kick-off event. All of the projects were initiated by institutions based at the Saarland Informatics Campus.

A total of seven research projects in the field of computer and information sciences are being funded, with amounts ranging from 300,000 to 450,000 euros. The focus is on the further technical development of visual content such as photos, videos and video telephony, and on the processing of the associated data. Beyond technical innovation, the projects also aim to increase energy efficiency, since energy consumption grows along with exponentially increasing data volumes.

The “Future of Graphics and Media” program will run for four years with a total budget of 4 million US dollars. Saarland researchers were able to apply for funding until the end of September 2023, and the applications were reviewed jointly by the Saarland and Intel. All research results will be published under an open-IP model and thus made available to the general public.

Background: the “Future of Graphics and Media” program

The need for research into the further development of visual content and its data processing has increased significantly, partly as a result of the coronavirus pandemic. This is because the increasing use of video telephony, for example, also increases the demand for bandwidth, computing power and storage space for the corresponding applications. This means higher costs, more energy consumption and, as a result, a larger CO2 footprint. The Saarland/Intel Joint Program on “The Future of Graphics and Media” is therefore addressing the increasing complexity of generating, processing, encoding and rendering visual content in distributed, immersive and interactive real-time applications. Research into new algorithmic solutions and the co-design of hardware and software aim to reduce complexity and energy consumption, increase processing speed and achieve the same or even better quality.

Presentation of the research projects

FROST – FROxel-based Semantic Processing Techniques (UdS)

Despite the complete digitization of photography and film recording, images are still represented today in the same way as they always have been: pixels per line, lines per image, and a sequence of images for film. However, this type of representation is no longer adequate, especially for recordings with many cameras, which are already common today and will certainly become even more so. In FROST, we represent multi-camera recordings as sets of rays whose origins are stored. This allows us to derive, for each point in the recorded scene, how many cameras see it and what it looks like from different viewing directions. This enables innovative and efficient processing that can also derive semantics (properties of the scene) and thus avoid errors that occur with conventional image processing. We are moving, so to speak, away from pixels per image and towards rays per volume.
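To make the idea of “rays per volume” more concrete, the following minimal sketch stores every pixel of every camera as a ray and answers, for a given scene point, which viewing directions observe it. It is purely our own illustration; the class, its names and the brute-force point query are assumptions, not part of the FROST project.

```python
import numpy as np

# Hypothetical illustration of a ray-based multi-camera representation
# ("rays per volume" instead of "pixels per image"). Names and the
# brute-force point query are assumptions, not part of the FROST project.

class RaySet:
    def __init__(self):
        self.origins = []      # camera centers the rays start from
        self.directions = []   # unit viewing directions
        self.colors = []       # observed RGB values along each ray

    def add_camera_image(self, camera_center, pixel_directions, pixel_colors):
        """Store every pixel of one camera as a ray (origin, direction, color)."""
        for d, c in zip(pixel_directions, pixel_colors):
            self.origins.append(np.asarray(camera_center, dtype=float))
            self.directions.append(np.asarray(d, dtype=float) / np.linalg.norm(d))
            self.colors.append(np.asarray(c, dtype=float))

    def observations_of(self, point, angular_tolerance=1e-3):
        """Return (direction, color) pairs of all rays that pass close to `point`,
        i.e. how many cameras see this scene point and what it looks like from
        their viewing directions."""
        hits = []
        p = np.asarray(point, dtype=float)
        for o, d, c in zip(self.origins, self.directions, self.colors):
            to_p = p - o
            dist = np.linalg.norm(to_p)
            if dist == 0:
                continue
            # angle between the ray and the direction towards the query point
            cos_angle = np.clip(np.dot(d, to_p / dist), -1.0, 1.0)
            if np.arccos(cos_angle) < angular_tolerance:
                hits.append((d, c))
        return hits
```

A practical system would of course replace the linear scan with a spatial index over the recorded volume; the sketch only shows the shift in representation.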

Generalizable Machine-Learning Models of Materials for 3D Reconstruction and Fast Relightable Rendering (UdS)

For content creation in augmented reality and in the film and computer game industries, it is essential to reconstruct real-world 3D objects from images so that they can be inserted into virtual environments. In doing so, the object must be relit with the incident light of the new environment, otherwise it will look artificial. While current methods can reconstruct the geometry of objects very accurately, the reconstruction of light reflection properties is still in its infancy. In this research project, we are pursuing fundamentally new approaches.
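As a purely illustrative example of what relighting means, the sketch below shades reconstructed surface points under a new light using a simple Lambertian model. The project itself targets far richer, learned reflection models; all names and the Lambertian assumption here are our own.

```python
import numpy as np

# Minimal relighting sketch under a Lambertian assumption: given a
# reconstructed surface normal and albedo per point, the object can be
# re-shaded with the light of a new environment. The actual project targets
# far richer reflection models; this example is only an assumption.

def relight_lambertian(albedo, normals, light_dir, light_color):
    """albedo: (N, 3) RGB, normals: (N, 3) unit vectors,
    light_dir: (3,) unit vector towards the light, light_color: (3,) RGB."""
    cos_theta = np.clip(normals @ light_dir, 0.0, None)   # no light from below the horizon
    return albedo * light_color * cos_theta[:, None]

# One point facing straight up, relit by a warm light coming in at 45 degrees
albedo = np.array([[0.8, 0.6, 0.5]])
normals = np.array([[0.0, 0.0, 1.0]])
light_dir = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
print(relight_lambertian(albedo, normals, light_dir, np.array([1.0, 0.9, 0.8])))
```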

Latency Compensation for Avatar Animations in the Metaverse (DFKI)

In virtual reality or the metaverse, people often do not want to be alone but to interact with other users over the Internet. For this purpose, the body movements of the individual users have to be synchronized and displayed in the metaverse. In collaboration with Intel, we will work on reducing and compensating for the transmission time (latency) of body movements over the Internet, so that interactive applications are also possible across long distances and you can, for example, dance with a person on the other side of the world in a metaverse.
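One common, simple form of latency compensation is to extrapolate the last received poses forward by the measured network delay. The sketch below illustrates only this generic idea of linear prediction; it is not the method being developed in the DFKI project.

```python
import numpy as np

# Minimal sketch of one common latency-compensation idea: extrapolate the
# remote user's last known joint positions forward by the measured network
# delay. This simple linear prediction is only an illustration and not the
# method developed in the DFKI project.

def predict_pose(last_pose, previous_pose, dt, latency):
    """Linearly extrapolate joint positions by `latency` seconds.

    last_pose, previous_pose: arrays of shape (num_joints, 3)
    dt: time between the two received pose updates (seconds)
    latency: estimated one-way network delay (seconds)
    """
    velocity = (last_pose - previous_pose) / dt   # per-joint velocity estimate
    return last_pose + velocity * latency         # predicted current pose

# Example: a single joint moving along x at 1 m/s, with 120 ms network delay
prev = np.array([[0.00, 1.0, 0.0]])
last = np.array([[0.05, 1.0, 0.0]])   # received 50 ms later
print(predict_pose(last, prev, dt=0.05, latency=0.12))
```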

Software-Defined Graphics Pipelines (DFKI)

3D applications are increasingly using methods for image synthesis that can no longer be realized with the traditional rendering pipeline of today’s graphics hardware. Software rendering pipelines are of particular importance here. Together with Intel, we are working on solutions for the simple description, implementation and optimization of such software-defined graphics pipelines.

Perception-Guided Neural Monte Carlo Sampling and Reconstruction (MPI-INF)

Using machine learning, we can achieve photorealistic 3D rendering by reconstructing high-quality images from a small number of noisy Monte Carlo samples. In collaboration with Intel, our goal is to develop intelligent sampling methods that, depending on the image content and guided by human perception, improve real-time image reconstruction for augmented and virtual reality (AR/VR) applications.
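The toy example below shows the basic setting the project starts from: each pixel is estimated from only a few noisy Monte Carlo samples and then reconstructed from its neighbourhood. The learned, perception-guided sampling and reconstruction of the actual project are not modelled; everything here (function names, the box filter, the synthetic image) is our own assumption.

```python
import numpy as np

# Toy illustration only: a per-pixel Monte Carlo estimate from a few noisy
# samples, followed by a very simple neighbourhood reconstruction. The
# perception-guided, learned components of the actual project are not modelled.

rng = np.random.default_rng(0)

def render_noisy(ground_truth, samples_per_pixel, noise_std=0.2):
    """Average a few noisy samples per pixel (the Monte Carlo estimate)."""
    h, w = ground_truth.shape
    samples = ground_truth[None] + rng.normal(0.0, noise_std,
                                              size=(samples_per_pixel, h, w))
    return samples.mean(axis=0)

def reconstruct(noisy, kernel_radius=1):
    """Very simple reconstruction: box-filter each pixel's neighbourhood."""
    h, w = noisy.shape
    out = np.zeros_like(noisy)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - kernel_radius), min(h, y + kernel_radius + 1)
            x0, x1 = max(0, x - kernel_radius), min(w, x + kernel_radius + 1)
            out[y, x] = noisy[y0:y1, x0:x1].mean()
    return out

image = np.linspace(0.0, 1.0, 64).reshape(8, 8)   # synthetic "ground truth"
noisy = render_noisy(image, samples_per_pixel=4)
denoised = reconstruct(noisy)
print(np.abs(noisy - image).mean(), np.abs(denoised - image).mean())
```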

Tractable Diffusion Models for Large-Scale 3D Scene- and Object-level Representations (MPI-INF)

New generative machine-learning methods allow the automatic generation of images and videos of breathtaking quality after training on large numbers of two-dimensional images. To open up this family of methods to further fields of application, their three-dimensional understanding needs to be improved. Together with Intel, we are researching how to enable these methods to learn efficiently on three-dimensional structures.

Towards physically based generative rendering (MPI-INF)

Generative Artificial Intelligence (GenAI) has transformed the creation of digital content by generating high-quality images and videos from simple text prompts. In collaboration with Intel, we aim to develop a physics-based generative pipeline that improves control in digital content creation. This will significantly streamline the development of the screens of the future, from wearable technology to the big screen.

 

Background: Saarland Informatics Campus
900 scientists (including 400 PhD students) and approximately 2,500 students from more than 80 nations make the Saarland Informatics Campus (SIC) one of the leading locations for computer science in Germany and Europe. Four world-renowned research institutes, namely the German Research Center for Artificial Intelligence (DFKI), the Max Planck Institute for Informatics, the Max Planck Institute for Software Systems and the Center for Bioinformatics, together with Saarland University and its three departments and 24 degree programs, cover the entire spectrum of computer science.

Translation: pzs