Hello. Today I want to show you how we can capture shape information with multi-scale topological loss terms for 3D reconstruction. My name is Dominik Waibel; I'm a PhD student at Helmholtz Munich in Germany.

Geometric loss terms such as the Dice or binary cross-entropy loss do not capture contextual information. However, that information can be relevant in a spatial reasoning task; for example, it could be used to speed up 3D imaging, which is prohibitively time-consuming because it requires stacking 2D images. Geometric loss terms compare a model's 3D prediction with the 3D ground truth, for example on the basis of a binary cross-entropy or Dice loss.

We integrate contextual information with a topological loss term. To that end, we extract topological features from the 3D prediction using a cubical complex, which captures features on multiple scales, such as connected components, cycles, and voids. We do the same for the 3D ground truth, and then compare the two feature distributions using the Wasserstein distance, a metric from optimal transport. This allows us to incorporate topological information, and we add the topological loss to the geometric loss. The topological loss is very stable, computationally efficient, can be added to any neural network, and is invariant to spatial transformations.

To demonstrate the usefulness of the novel topological loss, we integrated it into a model called SHAPR. SHAPR is designed to perform a spatial reasoning task: solving the inverse problem of predicting 3D shape from 2D images. Here we use 2D microscopy images to predict the 3D cell shape. Our two-step training approach consists of a first, supervised training step, after which a discriminator is added for a second, adversarial training step. We perform two experiments: one using the topological loss and one without it.
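The loss construction described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: I assume the topological features extracted from the cubical complexes (for example, birth/death lifetimes from persistence diagrams, as computed by a library such as GUDHI) are already available as equal-length 1D arrays, in which case the 1D Wasserstein-1 distance reduces to matching sorted values; the weighting factor `lam` is a hypothetical hyperparameter.

```python
import numpy as np

def wasserstein_1d(a, b):
    """Wasserstein-1 distance between two equal-size 1D samples.
    In one dimension, optimal transport reduces to pairing sorted values."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    return float(np.mean(np.abs(a - b)))

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss between a predicted probability volume and a binary target."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def combined_loss(pred, target, topo_pred, topo_gt, lam=0.5):
    """Geometric (Dice) loss plus a weighted topological term that compares
    the two topological feature distributions via the Wasserstein distance."""
    return dice_loss(pred, target) + lam * wasserstein_1d(topo_pred, topo_gt)
```

In a real training loop the topological term would be made differentiable with respect to the prediction; this sketch only shows how the geometric and topological terms are combined.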
In the first row you can see the results without the topological loss, in the second row the results with the topological loss, and in the third row the ground truth, for two datasets: one of red blood cells and one containing nuclei. Visually, you can see that the surface of the red blood cell in the second row is much smoother than without the topological loss. But the model also looks very promising quantitatively: for example, we get much lower values for the relative surface area error when using the topological loss, and the same holds for the surface roughness.

This would not have been possible without the amazing team at the Institute of AI for Health. I thank Matthias Meier and Scott Atwell for providing the second, nuclei dataset, Bastian Rieck for providing the idea and the topological loss, and Carsten Marr, my PhD supervisor. Please find the paper as a publication at MICCAI.

To briefly summarize: we used the SHAPR model to perform experiments showing how well our topological loss, which is based on a cubical complex and the topological features extracted from it, enhances a spatial reasoning task; the results clearly improve both visually and quantitatively.
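As a hedged illustration of the evaluation metric mentioned above: the talk does not define how the relative surface area error is computed, so the sketch below uses one common convention for voxel data, namely measuring the surface area of a binary mask by counting exposed voxel faces and then taking the relative deviation of the prediction from the ground truth.

```python
import numpy as np

def voxel_surface_area(vol):
    """Surface area of a binary 3D voxel mask (unit-size voxels), counted as the
    number of foreground voxel faces that touch background or the volume boundary."""
    v = np.pad(np.asarray(vol, dtype=bool), 1, constant_values=False)
    faces = 0
    for axis in range(3):
        for shift in (1, -1):
            # Neighbor in this direction; the padding guarantees a background border.
            neighbor = np.roll(v, shift, axis=axis)
            faces += int(np.sum(v & ~neighbor))
    return faces

def relative_surface_area_error(pred, gt):
    """Relative deviation of the predicted surface area from the ground truth."""
    a_pred = voxel_surface_area(pred)
    a_gt = voxel_surface_area(gt)
    return abs(a_pred - a_gt) / a_gt
```

A single voxel has 6 exposed faces, and two adjacent voxels share one face on each side, giving 10; this makes the face-counting convention easy to sanity-check.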