Good morning, everybody. In this talk, I want to introduce you to the Pointly platform and, a little more generally, show you a few use cases from the past year, carried out in collaboration with partners and clients, where we used our AI tools to work on point clouds: more specifically, to classify them and to extract relevant information such as vector models.

To get things started: Pointly was born as a platform for manually annotating point clouds. By this I mean that a class from a predefined catalog was assigned to every point in the point cloud. It has grown well beyond that, and I am going to show you what I mean. But as for the basics, it is a software-as-a-service solution that offers manual and automatic annotation of point clouds. As of a few weeks ago, it also offers the possibility to draw vector models on top of the point cloud. For a big labeling company that needs to do this manually, it offers collaborative features so that multiple annotators can work on the same project. Our AI models can also be called via an API, so that everything can be done via code rather than through our web-based solution; a small illustrative sketch of what such a call could look like is shown after this introduction.

Besides the platform, we also have what we call the Pointly services. Here we essentially collaborate with our clients on custom-tailored solutions. These range from the standard classification of specific data sets to the automatic extraction of vector models, which we did, for instance, for the Autobahn GmbH, and I am going to show you an example later, but also the creation of map layers and everything that is geo-related. The aim and the idea is to bring everything we have learned from the Pointly services and developed in-house into the web-based Pointly platform.

What we have managed so far is automatic and manual annotation in the Pointly platform. You see an example of a point cloud classification here on the left, which is also what the viewer of our web-based solution looks like. And vectorization, that is, the ability to draw vector models. Here you don't see the point cloud, but you can make it transparent or not; you should imagine that you can draw on top of it. There will be a video about it later.

This is the general setting, but what I want to focus on for most of the talk is the deep learning automation. As has been mentioned, I am a data scientist at Pointly, so I have mostly been involved in developing the deep learning algorithms that automate these two processes. For this, we have what we call standard classifiers: a few models that are available to users on the platform, so that a user can upload a point cloud, choose the model that fits their needs, and get an automatic classification that can also be manually corrected in the viewer for the few cases where there are misclassifications. Until now, we have a LiDAR ground classifier and a much more general standard classifier for airborne laser scans. By this I mean that the point cloud is classified according to predefined classes that include, among others, ground, buildings, vehicles, vegetation, and street furniture. We also developed, in collaboration with Riegl and the Autobahn GmbH, a highway classifier, which classifies mobile mapping scans along highways, and, in collaboration with the city of Munich, a classifier that is specifically optimized for CityMapper data.
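To make the API idea above concrete, here is a purely illustrative sketch of what calling a point cloud classification service from code could look like. All endpoint paths, parameter names, model names, and response fields in this snippet are hypothetical placeholders, not the actual Pointly API.

```python
import time
import requests

# Purely illustrative: placeholder base URL and credential, not real endpoints.
API_BASE = "https://api.example.com/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

# 1. Upload a point cloud tile (e.g. LAS/LAZ).
with open("tile_001.laz", "rb") as f:
    resp = requests.post(f"{API_BASE}/pointclouds", headers=HEADERS, files={"file": f})
pointcloud_id = resp.json()["id"]

# 2. Start an automatic classification job with one of the standard classifiers.
resp = requests.post(
    f"{API_BASE}/classifications",
    headers=HEADERS,
    json={"pointcloud_id": pointcloud_id, "model": "als-standard"},  # hypothetical model name
)
job_id = resp.json()["job_id"]

# 3. Poll until the job finishes, then download the classified point cloud.
while requests.get(f"{API_BASE}/jobs/{job_id}", headers=HEADERS).json()["status"] != "done":
    time.sleep(30)

classified = requests.get(f"{API_BASE}/pointclouds/{pointcloud_id}/classified", headers=HEADERS)
with open("tile_001_classified.laz", "wb") as out:
    out.write(classified.content)
```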
Before showing you the examples and the projects we worked on with clients, I want to show you how we develop such classifiers: the full deep learning process that we use, the so-called Pointly AI workflow. We start by focusing on the problem we want to solve, for instance developing a classifier for LiDAR aerial scans. Then we look for databases and examples suited to this problem and select specific tiles, making sure the tiles are representative and diverse enough. Then we manually label them and train a model, which involves the usual steps of training a deep learning model, such as hyperparameter optimization or class weights (a minimal sketch of such a weighted training step is shown below). Once the model is ready and we are happy with the accuracy and the metrics we analyze, we deploy the model on the platform and users can use it. Then we do some internal testing and get feedback from users, and we see where the weaknesses of the model are; for instance, it doesn't work particularly well for some scenes or with certain scans. We classify some point clouds that exhibit these problems, include these newly classified point clouds in our training data set, and with this expanded training data set we retrain the model and redeploy it, and version two of the model will work better. In doing so, we create an iterative cycle that allows us to improve our models faster and faster.

And this is the cycle we use when working on specific projects. The first project I want to talk about was developed in collaboration with the UK mapping agency, Ordnance Survey, where we worked together to classify two data sets they had. This is also a great example showing that our deep learning approach can cover different kinds of point clouds. As I already mentioned, we have a standard classifier for LiDAR aerial scans. Here we classified a data set that was still aerial data, but coming from photogrammetric scans that had RGB information but no intensity. We worked on a training data set from the town of Romsey in England, and this was validated by Ordnance Survey on a town on the coast, Falmouth. As you can see, on the left is the test set and on the right the validation set. The transfer was pretty good, so what the model learned could also be transferred to different towns. We just had one problem: Falmouth is a coastal town, and we hadn't thought about that, so classifying things along the coast, such as a promenade, a pier, boats, or beaches, was problematic. But for this we could have used our iterative approach to additionally classify just these few problematic parts and retrain the model.

The second data set we classified for Ordnance Survey was a completely different kind of scan. It was mobile mapping taken from a so-called StreetDrone, which is a small car, like a small Smart, with sensors on top. They captured the data by driving through the town of Romsey again, the full town. We just took a few tiles of this data set and developed the model. The catalog was a little different from the one for the previous data set, because we could concentrate on smaller details, for instance also differentiating between kinds of poles and road signs. And here you see a visual example of how our classifier performed on this data set.
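Returning to the training step mentioned in the workflow above, here is a minimal sketch of weighted training for per-point classification, assuming PyTorch. The tiny per-point MLP is only a stand-in for a real point cloud segmentation network, and the class list and weights are illustrative, not Pointly's actual architecture or configuration.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 5  # e.g. ground, building, vehicle, vegetation, street furniture

# Class weights counteract imbalance: ground points vastly outnumber vehicles.
class_weights = torch.tensor([0.2, 0.8, 2.0, 0.5, 2.5])

# Stand-in for a point cloud segmentation network (per-point class scores).
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, NUM_CLASSES))
criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_epoch(tiles):
    """tiles: iterable of (points [N, 3], labels [N]) tensors from labeled tiles."""
    model.train()
    for points, labels in tiles:
        optimizer.zero_grad()
        logits = model(points)            # per-point logits [N, NUM_CLASSES]
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()

# Toy usage with random data; in the iterative cycle described above, newly
# labeled problem tiles are appended to `tiles` and the model is retrained.
tiles = [(torch.randn(2048, 3), torch.randint(0, NUM_CLASSES, (2048,)))]
train_epoch(tiles)
```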
An example from a completely different field is railway infrastructure classification. We worked on this project with DB Netz, which is part of Deutsche Bahn, within the project Digitale Schiene Deutschland. This was a proof of concept whose aim was to see whether our classification, based of course on a different catalog, was good enough to then automatically classify many kilometers of railways and thus assist Deutsche Bahn in creating 3D models from the automatically classified point clouds. It is an ongoing project; until now we have only done the proof of concept. You can see a few results here: we could mostly correctly classify rails, catenary poles, power lines, and the extensions, that is, what connects the catenary poles to the power lines, as well as some parts of train stations, such as the platforms. Now we are working on improving the classification of just a few of these classes and then extracting them in order to derive vector data.

Before showing an example of the vector data, I also want to briefly mention that we worked with the city of Munich to classify the full city. They do a yearly flight over the city, and we got the data set from, I think, last year or two years ago. We developed the model based on this data set, using only very few tiles, and then classified the full city, which amounted to around 500 square kilometers of data. This was done in approximately one week using our API, and I mention it because it shows that our approach is also very scalable. We work in the cloud; our infrastructure is hosted on Microsoft Azure, and we can simply turn on a few more virtual machines on demand so that all the point clouds can be processed in parallel.

These were the examples I wanted to show about point cloud classification. Before finishing and showing you the latest feature of the platform, I also want to talk about the extraction of vector models. We can classify point clouds, but in the end, what users and clients are interested in is getting a bit more analytics out of them. Typical examples, which we have also seen over the last few days, are the creation of 3D models. Here we worked in collaboration with the Autobahn GmbH to extract the line that separates the asphalt from the rest of the ground. This is not the road marking; it is a little trickier. It really has to do with the geometry of the asphalt, and we have to find where the asphalt ends. This is usually done by a human expert who, with dedicated tools and software, draws this line along many hundreds of kilometers of highway. Using this information and point clouds obtained with a scanner mounted on a car, we could create a deep learning model whose input was a point cloud and whose output was exactly this line. You can see in red the prediction of our model and in green the ground truth drawn by a human expert, and the overlap is very good, at least visually. Of course, this is not enough for a user to be satisfied, say "great," and just take this line and use it in production. We still have to look at better metrics and probably agree on a common threshold so that we can show that the model is accurate enough. But it is a first step showing that our approach also goes in the right direction in this case, and I am confident that we can reach the required accuracy, should we have to go that far, with better training data or just by fine-tuning the model.
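As an illustration of what such a metric and threshold could look like, here is a minimal sketch, assuming Shapely: the fraction of points sampled along one line that lie within a tolerance of the other, evaluated in both directions. The coordinates and the 0.25 m tolerance are made-up example values, not the metric actually agreed on for the Autobahn GmbH project.

```python
import numpy as np
from shapely.geometry import LineString

def coverage(line_a, line_b, tol, step=0.5):
    """Fraction of points sampled every `step` metres along line_a that lie
    within `tol` metres of line_b."""
    sample_dists = np.arange(0.0, line_a.length, step)
    samples = [line_a.interpolate(d) for d in sample_dists]
    hits = sum(1 for p in samples if p.distance(line_b) <= tol)
    return hits / len(samples)

# Illustrative coordinates in metres; in practice these would come from the
# model output and the surveyor's reference line.
predicted    = LineString([(0, 0.00), (50, 0.20), (100, 0.10)])
ground_truth = LineString([(0, 0.10), (50, 0.00), (100, 0.00)])

correctness  = coverage(predicted, ground_truth, tol=0.25)   # precision-like
completeness = coverage(ground_truth, predicted, tol=0.25)   # recall-like
print(f"correctness={correctness:.2f}, completeness={completeness:.2f}")
```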
The missing step is: how can we create this training data? We can classify point clouds very well in the viewer, and there are many software packages that do that, but it is hard to generate training data by drawing lines, points, or polygons on top of a point cloud. That is why the latest release of the Pointly platform goes exactly in this direction, allowing users, and ourselves as well, to draw vector models on top of the point cloud. Here you can see that we have a set of intelligent tools that allow us to create these three vector types (points, lines, and polygons). On the right, we also have what we call the feature explorer, with which we can give each part of the model extra attributes and metadata, so that we can also store how high something is, how long a line is, or which catenary pole a line is connected to. This is what we want to focus on next. The idea is that, in the same interface, we can see an automatically classified point cloud and, with just a click, turn it into a vector model and see what the full picture looks like. This is just a hint of what we want to automate, and the first model should arrive on the platform around January.
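To illustrate the attributes and metadata mentioned above, here is a small sketch of how a drawn vector feature could be represented as GeoJSON. The property names (length_m, connected_pole_id, and so on) and the coordinates are hypothetical examples, not Pointly's actual feature schema.

```python
import json

power_line = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",
        # x, y, z coordinates digitized on top of the classified point cloud
        "coordinates": [[691000.0, 5334000.0, 12.4],
                        [691035.5, 5334010.2, 12.6]],
    },
    "properties": {
        "feature_class": "power_line",
        "length_m": 36.9,               # derived attribute
        "connected_pole_id": "CP-042",  # reference to a catenary pole feature
    },
}

print(json.dumps(power_line, indent=2))
```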