Project Group: Differentiable Rendering with Machine Learning

New Project Group in Visual Computing
The chairs of Computer Graphics and Computer Vision are starting a new project group at the intersection of vision, graphics, and deep learning. The topic of the group lies in the field of inverse rendering with machine learning, as illustrated, for instance, by the recent InverseRenderNet publication [1]. See the project description below for more details.
The project group will start with an online meeting on April 7 at 10:30. Please contact Michael Möller or Martin Lambers if you are interested. A project group works on a research project in a team of at least four students over one year and yields 20 credit points. Participation requires team spirit, perseverance, and an interest in Visual Computing.
Project Group Description
The combination of differentiable rendering with machine learning has interesting applications, including inverse rendering: given a single RGB image as input, scene geometry, lighting, and material properties can be estimated. Solving this ill-posed problem requires several measures to stabilize the training of a convolutional network, as well as a differentiable rendering algorithm to enable self-supervision.
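To make the self-supervision idea concrete, the following Python sketch shows how a re-rendering loss can be formed without ground-truth labels. It assumes PyTorch; estimate_scene and render_diffuse are hypothetical placeholders for the convolutional network and the differentiable renderer.

    import torch

    def self_supervised_loss(image, estimate_scene, render_diffuse):
        # The network predicts albedo, normals and lighting from the photo alone
        # (estimate_scene is a placeholder, not part of any specific paper).
        albedo, normals, lighting = estimate_scene(image)
        # The differentiable renderer turns the estimates back into an image.
        rerendered = render_diffuse(albedo, normals, lighting)
        # Comparing the re-rendered image with the input photo provides a
        # training signal without ground-truth geometry or material labels.
        return torch.nn.functional.mse_loss(rerendered, image)
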
This project group will implement an inverse renderer, using the recent InverseRenderNet paper by Yu and Smith [1] as a starting point. InverseRenderNet reconstructs the base color, surface normal vectors, and illumination information from a single RGB image. It is currently limited to perfectly diffuse reflections computed purely in image space, without 3D information.
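Such image-space diffuse shading can be expressed with a low-order spherical-harmonics lighting model. The NumPy sketch below is only an illustration of this idea; the simplified basis (cosine-convolution constants omitted) and the array shapes are assumptions, not the exact formulation of the paper.

    import numpy as np

    def diffuse_shading(albedo, normals, sh_coeffs):
        # albedo:    H x W x 3 base color
        # normals:   H x W x 3 unit surface normals
        # sh_coeffs: 9 x 3 spherical-harmonics lighting coefficients (per channel)
        x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
        # Order-2 real spherical-harmonics basis evaluated per pixel.
        basis = np.stack([
            0.282095 * np.ones_like(x),
            0.488603 * y, 0.488603 * z, 0.488603 * x,
            1.092548 * x * y, 1.092548 * y * z,
            0.315392 * (3.0 * z * z - 1.0),
            1.092548 * x * z,
            0.546274 * (x * x - y * y),
        ], axis=-1)                        # H x W x 9
        shading = basis @ sh_coeffs        # H x W x 3
        return albedo * np.clip(shading, 0.0, None)
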
The minimum goal for successful completion of the project group is to implement this approach, to extend it with a more general material model that allows additional information to be reconstructed (e.g. about specular reflections), and to select or create appropriate data for training and evaluation.
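As one illustration of what a more general material model could look like, the sketch below adds a simple Blinn-Phong specular lobe; the per-pixel specular color and shininess maps are assumed to be additional network outputs and are not part of the original InverseRenderNet formulation.

    import numpy as np

    def blinn_phong_specular(normals, light_dir, view_dir, spec_color, shininess):
        # normals:    H x W x 3 unit surface normals
        # light_dir:  unit 3-vector towards a distant light source
        # view_dir:   unit 3-vector towards the camera
        # spec_color: H x W x 3 per-pixel specular color (assumed network output)
        # shininess:  H x W per-pixel exponent (assumed network output)
        half = light_dir + view_dir
        half = half / np.linalg.norm(half)
        n_dot_h = np.clip(normals @ half, 0.0, None)   # H x W
        return spec_color * (n_dot_h ** shininess)[..., None]
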
The project group is welcome to develop extensions beyond this minimum goal. For example, depth maps can be estimated in addition to (or in combination with) the normal maps. This simple form of 3D information can be used by the renderer to include shadowing; however, estimating depth requires significant changes to the network structure.
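If depth maps are estimated, they can be related to the normal maps directly. The sketch below derives normals from a depth map with finite differences, under the simplifying assumption of an orthographic projection; a perspective camera model would require back-projecting pixels first.

    import numpy as np

    def normals_from_depth(depth):
        # Finite-difference gradients of the H x W depth map.
        dz_dy, dz_dx = np.gradient(depth)
        # A surface z = d(x, y) has the (unnormalized) normal (-dz/dx, -dz/dy, 1).
        normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
        return normals / np.linalg.norm(normals, axis=-1, keepdims=True)
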
[1] Ye Yu and William A. P. Smith. InverseRenderNet: Learning Single Image Inverse Rendering. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 3155-3164

