So, Olivier Courtin. I have been working in the GIS field for years, and I founded the DataPink company. One of our main focuses is to build a bridge between the GIS field and deep learning: how are we able to extract insights from geospatial data with the latest available tools? We already knew how to do it with spatial analysis in a classical way, but how much further can we go with new tools? So this presentation is about a computer vision tool able to extract information from imagery. That was my one-minute presentation.

The framework is called Neat-EO.pink. If we go back to the story, everything began with a loop, like a vinyl loop a century ago. The point is that you have to understand what is wrong before being able to fix it, and you loop again and again. But as long as you are not able to understand what is wrong, there is nothing you can do about it.

If we look at Earth observation nowadays, it is widely used. There is a vast amount of information available, but we are unable to really use it. With most of the pixels acquired, we don't do anything; it's a waste. So the need is to say: okay, we are able to gather pixels, but what can we do with these pixels to switch from pixels to insights? That's the key point.

If we look at deep learning, supervised learning is quite simple. You have one input and one expected output, and you train a neural network until it is able to compute the output from the input. The key point is the loss function: the ability to compute the distance between what the network produced and what was expected. As long as you are able to compute a meaningful distance function, you are able to find a way to converge to a solution. Once you have trained a model correctly, you are able, with only the input, to use your trained model to compute an output. That's the way to train a model; there is a small sketch of this loop just below.

So what is a trained model, really? There are several ways to understand it; we will focus on only one, because it is simple to understand: compression. You compress the whole imagery information, and you accept losing a lot of it, but you keep all the information you need to achieve your task. So it's just directed compression: you lose information, except the part you have to keep for the classification.

Once you understand that, Neat-EO.pink is just a way to build a bridge between geospatial data and deep learning. There are three tasks we focus on. The first is quality analysis. Once you have trained your model and you are able to compute an output, you are also able to compare this output to an existing dataset, and so to show whether there is a significant difference or not. For instance here, in pink is what the model predicted, in green are the labels, and in gray is where both agree. So it's a quick way to check whether the model and the labels match. You can see that this one, for instance, is behind the trees, so it's harder for the model to find the building because it's hidden. It's also interesting to see that if you zoom out, you can spot the differences at a glance, which helps you save time, because you focus only on the parts where there are significant differences between your two datasets.
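To make the supervised-learning loop above concrete, here is a minimal PyTorch sketch. It is illustrative only, not Neat-EO.pink's actual training code: the toy model, the fake tensors, and the hyperparameters are all assumptions.

```python
import torch
import torch.nn as nn

# Toy segmentation model: one input image, one expected output mask.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))
criterion = nn.BCEWithLogitsLoss()  # the "distance" between output and target
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

image = torch.rand(1, 3, 64, 64)                     # fake RGB tile
mask = torch.randint(0, 2, (1, 1, 64, 64)).float()   # fake building mask

for step in range(100):                 # loop until the loss converges
    optimizer.zero_grad()
    prediction = model(image)
    loss = criterion(prediction, mask)  # distance: computed vs. expected
    loss.backward()                     # adjust weights to reduce that distance
    optimizer.step()
```

Once the loss has converged, `model(image)` alone is enough to compute an output from a new input, which is the point made above.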
Another thing you can perform with exactly the same framework is change detection, because in this scenario you train a model and then use an alternate input, for example imagery from one year later, two years later, or one week later, whatever. You compute an alternate output and compare the differences.

The last one is feature extraction. You train your model on a small training area, and you use wider imagery as input. For instance, you label only a small area, and once your model is trained, you are able to run the prediction on a wider area and then vectorize it.

To cover these three scenarios, we have several small tools you can assemble, like LEGO bricks: small tools that you chain to create your own workflow, through a command line interface. The whole idea is the ability to deal with different kinds of imagery and different well-known label formats, to compute the prediction, and then to do something meaningful, as insights, with the prediction mask you generated. That's the key concept.

About the stack: we reuse well-known software, some from the GIS field, some from the Python imaging world, and some from the deep learning world. So it's a bridge between these worlds. It's fully open source, except the NVIDIA part, because that one is not open sourced by NVIDIA itself. It's easy to deploy: it's not because we use a lot of software that we didn't package it, so it's just a single line to install the whole stack.

There is a 101 tutorial that helps you do it by yourself, and so learn by doing. It uses real-world data, and everything from the install to the data preparation, the training, and so on takes something like two or three hours to run the whole training session and see the whole results. It's also available online, so if you want to look at it right now, you just have to click on it, and you can click on the different pictures to zoom in the Leaflet interface.

All you need, in fact, is imagery, and we saw that this is not a problem anymore; a GPU, at least a recent one with enough memory; and labels. The key point is the labels, because most of the time we have labels, but not accurate enough. If you look here at the OSM building data, we have buildings for this imagery, it's true, but they are not accurate enough to be sure that a model trained on this kind of data will really be accurate. It's important to keep in mind that if you have garbage in as labels, you will have garbage out as predictions. So the point here is also to reuse this technique to check that your training data is of sufficient quality: to compare the labels you use with these tools, and so be able to keep some labels or not. For instance, these buildings appear in the labels, but they don't match this imagery, because on this imagery there are no buildings at this time. The most common move is to remove those ones, and there is an integrated tool to help you decide yes or no in a second.

So, what's new inside Neat-EO.pink? First, the ability to enhance the quality of the prediction despite the fact that we use tiles. It's a well-known issue, so we use meta-tiles to enlarge the focus. Obviously it takes longer if we choose this option, but the result here is cleaner. And since it slows down the whole process, we have to improve it some other way.
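As a rough illustration of the quality-analysis rendering described earlier (pink = model only, green = label only, gray = agreement), here is a hedged NumPy sketch. The function name and the exact colors are assumptions; this is not the framework's own comparison tool.

```python
import numpy as np

def compare(prediction: np.ndarray, label: np.ndarray) -> np.ndarray:
    """Both inputs are binary HxW masks; returns an HxW RGB overlay."""
    out = np.zeros(prediction.shape + (3,), dtype=np.uint8)
    out[(prediction == 1) & (label == 1)] = (128, 128, 128)  # gray: agreement
    out[(prediction == 1) & (label == 0)] = (255, 105, 180)  # pink: model only
    out[(prediction == 0) & (label == 1)] = (0, 255, 0)      # green: label only
    return out

# Fake masks just to exercise the function.
pred = np.random.randint(0, 2, (256, 256))
label = np.random.randint(0, 2, (256, 256))
overlay = compare(pred, label)

# Change detection reuses the same mechanics: predict on imagery from two
# dates and diff the two prediction masks instead of prediction vs. label.
```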
So we added multi-GPU scaling, to help you use as many GPUs as you can get on a single host, for both training and prediction, so you can scale it up. We also added multi-class support, so you are no longer limited to a single class; you can use several classes at once. And we provide an automated class-weighting option for unbalanced datasets, which helps to give weights to the classes if they are not distributed evenly in your dataset (there is a small sketch of this technique after this section).

What are the limits right now? First, the kind of imagery you want to predict on must be quite similar to the one you trained on. Remember, it's a kind of compression, so you can't expect a good result if the imagery has nothing in common with the one you used for training. About the labels, we saw that you need something accurate, and the amount of labels you need is in the thousands: at the very least one thousand, or a few thousand, it depends, but not a dozen. Also, right now it doesn't deal with topology, so if what you want to extract is topology-related, like a network, roads for instance, it doesn't work well. It behaves far better with surfaces, any kind of surfaces.

We are still working hard on it, and we are looking for funding and sponsors in any form. It could be code, so pull requests are really welcome; it could be money funding; it could be hardware funding; and so on. Help us to increase accuracy further, especially on low resolution: we talked just before about Copernicus and Sentinel, and the next step will obviously be to increase the resolution even before performing training and prediction on it. Then topology, obviously; reducing the amount of labels needed before getting an accurate training; and continuing the performance improvements, because it's no longer necessary to have a huge infrastructure to use it. You can, but it's not mandatory.

There are open source alternatives to this project: Raster Vision, eo-learn, RoboSat, Solaris. So why choose this one? A few arguments. We really focus on standards compliance, so it's really easy, and standards-compliant, to reuse all the geospatial formats you work with daily; your data preparation will be easy and fast. There is a built-in web UI which helps you check, at every step, that everything is fine. It's modular and extensible: it's not something you can't extend easily; on the contrary, you can really easily add new tools, new interfaces, new templates, and so on. It also handles multi-band imagery and data fusion: you can, for instance, rasterize your vectors and add them as an input alongside your imagery. It's really the same idea as a GIS map: you compose your map by adding several layers, and with this stack you can train on them and then compute. And it's high-performance and accurate, because we use and reuse the latest computer vision papers available.

If this field interests you, there is one slide with all the best resources to learn from, and one slide about the company. As takeaways: there is, right now, an end-to-end open source AI-for-EO framework available. The performances are already good enough to use it at a country level. You no longer need to be a computer vision scientist to use it; a geospatial guy who understands the compression idea can do it right now.
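Regarding the automated class-weighting option mentioned above, here is a sketch of the general inverse-frequency technique in PyTorch. This is the standard idea, not necessarily the exact formula Neat-EO.pink implements; the helper function is hypothetical.

```python
import torch
import torch.nn as nn

def class_weights(label_masks: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Weight each class inversely to its pixel frequency, so rare classes
    contribute as much to the loss as dominant ones."""
    counts = torch.bincount(label_masks.flatten(),
                            minlength=num_classes).float()
    return counts.sum() / (num_classes * counts.clamp(min=1))

labels = torch.randint(0, 3, (8, 256, 256))      # fake 3-class label masks
weights = class_weights(labels, num_classes=3)
criterion = nn.CrossEntropyLoss(weight=weights)  # weighted multi-class loss
```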
Plain open data can be used to train a model, because you can work step by step and so refine your labels until they are accurate enough. And funding and pull requests are really welcome. That's it. So, since we have more than one minute left, time for questions: bring it on.

[Question inaudible] It was just a rename. Could you repeat the question a little louder, please? Yeah, RoboSat.pink was the name of the previous version. Neat-EO.pink is a rename: the 0.7 version of the 0.6 RoboSat.pink. The point is that there is a RoboSat project and a RoboSat.pink project, and a lot of people made the mistake: when I said RoboSat.pink, a lot of people understood RoboSat, and in the end it was a mess. So I said, okay, we'll stop it and rename it. There is no "Neat-EO" anywhere else, so it's a new name, and we go on. So it's the 0.7 version.

[Question inaudible] Compared to those ones? No. Right now, to my knowledge, no one has benchmarked these frameworks against each other. It would be a really good idea, on several aspects: related to accuracy, related to performance, and related to how easy they are to use. It's a real need, in fact, but to my knowledge no one has done it before.

[Question inaudible] Yeah, it's a U-Net-like architecture with an encoder, and we reuse a ResNet-like encoder, so you can choose any kind of ResNet as the encoder. It's U-Net-like, with skip connections from the encoder to the decoder as in U-Net, but also on the decoder part.

[Question inaudible] Yes, because you already have all of PyTorch and Albumentations. Albumentations is data augmentation, able to deal with multi-channel imagery and several kinds of color shifting. So you can add a lot of noise into your training, and because you added a lot of noise, the model is forced to generalize enough to still work even if the input imagery changes slightly. It's heavy augmentation.

Okay. Then, once again, thank you very much.
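On the augmentation point from the Q&A, here is a minimal Albumentations sketch showing joint image-and-mask augmentation with color shifting and noise. The specific transforms and probabilities are illustrative assumptions; the framework's actual pipeline may differ.

```python
import numpy as np
import albumentations as A

# Geometric transforms are applied to image and mask alike, while the
# color/noise transforms touch the image only, forcing the model to
# generalize to slightly different imagery.
transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomRotate90(p=0.5),
    A.RGBShift(p=0.5),    # color shifting
    A.GaussNoise(p=0.3),  # added noise
])

image = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)  # fake tile
mask = np.random.randint(0, 2, (256, 256), dtype=np.uint8)        # fake labels
augmented = transform(image=image, mask=mask)
image_aug, mask_aug = augmented["image"], augmented["mask"]
```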