So yeah, thank you for the introduction. My name is Johannes Fiemann. I'm actually giving this presentation on behalf of my colleague Lando Petersen, who unfortunately is not able to be here today, but I hope I can deliver the content just as well. This presentation comes from Fraunhofer FOKUS, a research institute for IT that is currently very much involved in AI and machine learning, and from a spin-off company called Paratrust AI, which is currently being founded by some members of Fraunhofer. Our idea is to make unmanned aerial vehicles autonomous, so that they can fly without a pilot directing them, without a pilot monitoring them, without a pilot planning the flight, and so that they can accomplish missions using AI and machine learning components. For us this is interesting in multiple domains: we are not limited to the aerospace industry, but also work on road vehicles and autonomous trains. So it is basically about making agents adaptable in complex environments, so that they can find their way and act as they are intended to. What we do have, of course, are conventional input sensors that give us an image of the environment, like cameras in the human-visible range or in other frequency spectra, so that we have image data we can use to confirm the position and to collect information about the environment in order to accomplish the mission. There are also more advanced sensors that already provide 3D information, like stereo cameras or LiDAR sensors. And of course, all of these data can be used to make autonomous agents adaptable to new environments. So how do we create intelligent devices that know what they are facing and can adapt to the situations they are put in? Artificial intelligence is of course very much hyped at the moment, and most of it is currently done with machine learning technologies.
That is, you have a certain problem, and you not only have data that shows the problem, but also data that contains the solution you are expecting. Now you want to create a model that can guess the true solution from the source images: you train a mathematical model and make it more and more accurate at solving the problem. So it is an approximation. It is not a perfect solution, but something that is hopefully good enough to act in that environment. Of course, this approach should then transfer, so that the system can act in environments it was not specifically trained for. And that is the big challenge: we have a very incomplete training data set, and in the end we are trying to make the system work in the complex real world, where a whole lot of new things can pop up that it was never trained for. Fortunately, there are already a lot of labeled data sets that can be used for training and also for testing. That is a good starting point, because the big task is not just to create the images, but also to provide the interpretations, the solutions, that you expect your system to give. That is typically a lot of manual work, and it is quite expensive to generate. The problem, of course, is that all these data sets are quite limited, and you will easily miss important cases for which you simply have no training data available. It is very expensive to generate data for rare occurrences, for conditions that happen perhaps only once a year. So you really have problems generating accurate training and test data, and this is still an issue, especially for corner cases that are unlikely to ever happen. For example, it would be very expensive to cut down a tree just to have an image of a tree lying across a railway track. So what can we do?
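The supervised-learning idea described above can be sketched in a few lines: adjust a model's parameter so its guesses match the known solutions in a labeled data set. This is a minimal illustration in pure Python, using a one-parameter linear model fitted by gradient descent; it is not the actual perception models discussed in the talk, and every name in it is illustrative.

```python
# Minimal sketch of supervised learning: fit y ~ w * x to labeled
# pairs (x, y) by gradient descent. Real perception systems use deep
# networks, but the principle is the same -- adjust parameters to
# reduce the error between the model's guess and the known solution.

def train(data, lr=0.01, epochs=200):
    """Fit the single parameter w by minimising mean squared error."""
    w = 0.0
    for _ in range(epochs):
        # gradient of the mean squared error with respect to w,
        # averaged over the labeled data set
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Labeled training data: inputs with the expected solution attached.
labeled = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]
w = train(labeled)
print(round(w, 1))  # the learned parameter approximates the underlying slope of ~2
```

The model is only an approximation of the mapping hidden in the data, which is exactly the point made above: good enough to act, not provably perfect.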
Of course, we can try to come up with synthetic data for training and for testing. But then we immediately run into a question: is this data really representative of the real-world problems you might face? Or will your training be biased towards the simulation and generator environments you are using, and therefore lead to imperfect or undesirable results as soon as the system faces real-world phenomena? And especially when you have to deal with regulatory bodies, who have to manage the risks of your autonomous devices acting in the environment, it can be quite challenging to convince them that your training was good enough. So the actual challenge is to make the gap between real-world data and synthetic data manageable and accessible for the authorities. This is by far most extensively done at the moment for autonomous road vehicles, where large companies use their massive resources to reach solutions that are good enough to finally be certified for general traffic, even without a driver or pilot monitoring. And of course, one can try to use the results they produce for other domains as well, like railways or unmanned aerial vehicles. So what is the general idea for making simulations accurate? Well, first of all, how does the kind of simulation we do work? We basically start with some real-world data, use AI to create variations of that real-world data, then create a 3D simulation, and finally apply another AI-based generative transformation so that the results actually look like real-world data. So what is a typical scenario we would like to train a drone for? A search and rescue operation, for example.
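The four-stage pipeline just described (real-world data, AI-generated variations, 3D simulation, and a generative transformation toward real-world appearance) can be sketched as a composition of stages. Everything below is a hypothetical stand-in: in practice each stage would be a generative model or a 3D rendering engine, not a dictionary manipulation, and all function names are assumptions for illustration.

```python
# Sketch of the four-stage synthetic-data pipeline. Each stage is a
# placeholder function operating on a dict that stands in for an image.

def vary(sample, seed):
    """Stage 2: AI-generated variation of a real-world sample."""
    return {**sample, "variant": seed}

def render(sample):
    """Stage 3: the 3D simulation renders the varied scene and, as a
    by-product, produces the ground-truth label for free."""
    return {**sample, "rendered": True, "label": sample["scene"]}

def to_real_style(sample):
    """Stage 4: generative transformation toward real-world appearance."""
    return {**sample, "realistic": True}

def synthesize(real_sample, n_variants):
    """Run one real-world sample through the whole pipeline."""
    return [to_real_style(render(vary(real_sample, i)))
            for i in range(n_variants)]

data = synthesize({"scene": "person_in_forest"}, n_variants=3)
print(len(data), data[0]["label"])
```

The key property the sketch tries to capture is that the simulation stage yields labels automatically, which is what makes synthetic data so much cheaper than manual annotation.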
You want to find people somewhere in the terrain, perhaps lost in a forest. You let your drones fly around autonomously, and they should not only be able to navigate safely through the woods, but also to identify and detect human beings, and eventually to initiate whatever further actions might be required. So basically, you start with a simple scenario, a human being in such an environment, and from your simulation you can already generate the ground truth that you need: you know where the person actually is, and you can simulate how the scene looks for different kinds of sensors or cameras in order to get your data for training and, later, eventually also for testing. Then you systematically create variations of it. The trivial case would be a human being standing in the open, nothing covered, with clear visibility. That is a great case, but in reality things are often much more complicated: you have different environmental conditions, human beings can be partially hidden, and there can be very different, unexpected backgrounds. You need all of these variations in your training data set in order to prepare the system for what it can actually expect. So you can definitely do a training that works for these simulated scenarios, but then the big question is: how does this transfer to real-world environments? The idea is, of course, to use real-world images with labels as a test data set for a system that is trained only on the simulated synthetic data, so that we get a reference for how good our simulation actually is for real-world use. And then, of course, it is also tempting to use the simulated synthetic image data to test your system, that is, to create even more corner cases that you have not trained your model for, and see how well it performs on these new challenges at its current level.
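The evaluation idea above, training only on synthetic data and then scoring against real hand-labeled images to get a transfer reference, can be sketched like this. The detector and the test set here are trivial hypothetical stand-ins, not a real model or data.

```python
# Sketch of sim-to-real evaluation: a detector trained purely on
# synthetic data is scored on real, hand-labeled images, and that
# accuracy serves as a reference for how well the simulation transfers.

def evaluate(detector, labeled_real_images):
    """Fraction of real test images the sim-trained detector gets right."""
    correct = sum(detector(img) == label for img, label in labeled_real_images)
    return correct / len(labeled_real_images)

def sim_trained(img):
    # Hypothetical detector: flags a person if a scene feature exceeds
    # a threshold learned from simulation only.
    return img["person_score"] > 0.5

real_test_set = [
    ({"person_score": 0.9}, True),   # clearly visible person
    ({"person_score": 0.4}, True),   # partially hidden person -> missed
    ({"person_score": 0.1}, False),  # empty forest
]
transfer_score = evaluate(sim_trained, real_test_set)
print(transfer_score)  # fraction correct: 2 out of 3
```

A low score on exactly the partially-hidden cases would point back to the variation stage of the pipeline: those conditions need to be added to the synthetic training set.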
Of course, here we do not have the reality check; it is only about how generally applicable the system is. But it is nevertheless an interesting application, because it is much easier and cheaper to let the simulation environment generate these variants, and it does not require a lot of manual effort. So based on these ideas, we have run some research projects, and we are currently creating a spin-off company that tries to make these things practically applicable for the market, so that we can develop, together with partners, solutions for autonomous vehicles that are finally capable of accomplishing complex missions. And yeah, unfortunately, Mr. Petersen, who will be the founder of Paratrust AI, is not able to present this himself today, but I hope that my presentation gave you a little insight, and if you have any questions, please feel free to ask them.