So for this I've been working on two areas; the first one was river monitoring. Here the idea was to use deep learning to estimate river levels from CCTV images looking at rivers. To do this, for the example in this picture, what I did was use a deep learning model to extract the water pixels from the image, and I used the number of water pixels as a proxy to monitor the evolution of the river level. Today, what I'm going to talk to you about mostly is trash screen blockage detection. If you don't know what a trash screen is, it's a structure used to prevent debris from entering critical parts of river networks, such as culverts or pumping stations. The problem with trash screens is that debris can build up at the screen location and generate flooding, so it's really important to monitor the blockage at these locations. This is not only for maintenance purposes, of course, to allow managing crews to act and clean the trash screens, but also to integrate the blockage information into flood models. One of the ways to do this is to use trash screen cameras, that is, cameras that are looking at trash screens, but you can imagine that monitoring these cameras manually can be very tedious. So what we've been doing in this project, in collaboration with officers from the Black Country and the Environment Agency, is trying to develop deep learning strategies to automate the monitoring of the blockage. For example, here, given an image of a trash screen, we use a model to estimate whether it is blocked. I added some additional constraints: the method needs to detect the blockage on any new, unseen camera with one single model, because we don't want one model per camera; you can imagine that it would be very complicated to maintain. And we also want to minimize the manual intervention.
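The water-pixel proxy described above can be sketched very simply; here is a minimal illustration, where the segmentation mask is assumed to come from some upstream deep learning model (the function name and the toy mask are my own for illustration):

```python
import numpy as np

def water_level_proxy(mask: np.ndarray) -> float:
    """Fraction of pixels classified as water in a segmentation mask.

    `mask` is a hypothetical H x W boolean array produced by a
    segmentation model (True = water pixel, False = everything else).
    The fraction of water pixels serves as a relative river-level proxy.
    """
    return float(mask.mean())

# Toy example: a 4x4 "image" where the bottom half is water.
mask = np.zeros((4, 4), dtype=bool)
mask[2:, :] = True
print(water_level_proxy(mask))  # 0.5
```

Note this gives a relative signal (how the level evolves over time for one camera), not an absolute level in metres.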
We don't want to have to label many images for each new camera that we want to use. So, to start with this research, what I did was to actually build a dataset of trash screen images. I gathered images from the South West Environment Agency website, where 54 trash screen camera feeds are available, over about 10 months. This progressively gave me a dataset of 80,000 images, and I labeled all of these images using a small interface, with one of three labels: clean, if the trash screen is actually clean, as on the left; other, if I didn't know what was going on, like here, for example, where someone has put a jacket over the camera; and finally, if we see some debris at the screen location, I label the image as blocked. In the end, I had about 40,000 clean images, 11,000 blocked ones, and 26,000 in the other category. Once I had this dataset, I asked myself two questions. The first one was: do we really need to label all of this data? That is, can we use a model that is independent from the labeling of the data, so we can build this kind of model easily? And second, if we have a small sample of labeled images from a new camera that we are going to install, is it possible to improve the accuracy? So what I wanted to do was to compare three different approaches: a binary classifier that distinguishes blocked from clean images, an anomaly detection approach that doesn't need any labels, and an image similarity approach, where the idea is to take advantage of a sample of images from the new camera. The binary classifier is quite simple: it takes an image as input and outputs a blockage score. If this score is above a given threshold, then the image is considered blocked; if not, it's clean. The model is trained on all of the clean and blocked images that I labeled in my dataset.
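The thresholding step of the binary classifier can be sketched as follows; this is a minimal illustration, not the actual model, and the function name and default threshold are my own assumptions:

```python
def classify(blockage_score: float, threshold: float = 0.5) -> str:
    """Turn a classifier's blockage score (assumed in [0, 1]) into a label.

    The score would come from a trained network; here it is just a number.
    The threshold itself is a parameter that must be validated on held-out
    cameras, which matters later when discussing generalization.
    """
    return "blocked" if blockage_score > threshold else "clean"

print(classify(0.9))  # blocked
print(classify(0.2))  # clean
```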
To train the network I used the ResNet architecture, which, as you probably know, is a classic architecture used in computer vision. As I said, I really consider this as the baseline method: it uses all of my dataset and does not make any assumption on the availability of new images. The anomaly detection approach has a similar process to the classification approach: this time again, the model outputs a score, but the big difference is that we don't need any labels to train the method. A blockage here should be considered as an anomaly because, as you noticed, I have much fewer blocked images than clean ones. To do this, I've used anomaly detection methods from the state of the art. The idea is to represent each image with a small vector of features extracted from a pre-trained network, and to fit a multivariate Gaussian to the feature vectors of the training images. Then, when I have a new image, I can compute its distance to the multivariate Gaussian using the same representation. For the image similarity approach, the process is a bit different. We have an image for which we want to know the label, and we also have a reference image, with a known label, from the same camera. A Siamese network computes a similarity score between the two images. If we know that the reference image is clean, for example, and the two images are similar, it means that the new image looks clean, and vice versa. What's interesting with this is that we can use more than one reference image, and then average all of these scores to obtain a more accurate blockage score for a given image. Then, to evaluate this, I carefully divided my dataset between the cameras. I had 46 cameras to train the parameters of my methods, including the thresholds used to categorize between blocked and clean images, and I had four test cameras that I selected manually to be sure that they had interesting and representative outlooks from my dataset.
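The anomaly detection step described above (fit a multivariate Gaussian to feature vectors, then score a new image by its distance to it) can be sketched with plain NumPy; the feature vectors here are random stand-ins for features a pre-trained network would produce, and all names are my own:

```python
import numpy as np

def fit_gaussian(features: np.ndarray):
    """Fit a multivariate Gaussian to the training feature vectors."""
    mean = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    # Regularise so the covariance stays invertible even with few samples.
    cov += 1e-6 * np.eye(cov.shape[0])
    return mean, np.linalg.inv(cov)

def anomaly_score(x: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray) -> float:
    """Mahalanobis distance of a new feature vector to the training distribution."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 8))  # stand-ins for pretrained-CNN features
mean, cov_inv = fit_gaussian(train)
print(anomaly_score(rng.normal(0.0, 1.0, 8), mean, cov_inv))  # small: looks normal
print(anomaly_score(np.full(8, 6.0), mean, cov_inv))          # large: flagged anomalous
```

Images whose distance exceeds a threshold would be flagged as anomalous, i.e. potentially blocked, without ever needing blockage labels.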
And I used two different scores. I won't go too much into the details, but basically balanced accuracy is a way to take into account the imbalance between the number of clean and blocked images, while the ROC score does not consider the threshold that needs to be validated, because my impression, and it is confirmed in the next slide, is that the optimal threshold can differ between the test cameras, and in practice it would be problematic to ask managers to tune this threshold for each camera. So, for the results: on the left is the balanced accuracy, on the right is the ROC score. Basically, the binary classifier and the image similarity approach both obtain quite good results, about 0.9, for 3 out of the 4 locations. The binary classifier is in red and the image similarity is in pink. The anomaly detection, in blue, obtains the worst results at each of the locations, but remember that this approach does not need any labels. Also, the Siamese network, on average, obtains the best results, with only five reference images used. The classification threshold, as I said, doesn't really generalize well, because as you can see the ROC scores are higher than the balanced accuracy. And, as we've seen in the balanced accuracy per location, the third location obtains the worst results each time, so I wanted to investigate what was going on there. What I found was that I actually had many false positives: I labeled these images as clean, because the leaves that you can see near the trash screen are not really blocking the water flow, but my network considers these images as blocked because it sees leaves and branches. So these are tricky images. I also have some false positives, in much smaller numbers, at the other locations, and for some of them, like the one on the right, you can see some branches there; those are quite disputable images that may have been mislabeled.
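The balanced accuracy metric mentioned above is just the mean of the per-class recalls; a minimal sketch (my own helper, not the evaluation code from the talk) shows why it is preferred under the clean/blocked imbalance:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred) -> float:
    """Mean of per-class recalls: robust to the clean/blocked imbalance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

# 8 clean (0) vs 2 blocked (1): plain accuracy would reward always predicting "clean".
y_true = [0] * 8 + [1] * 2
y_pred = [0] * 8 + [1, 0]  # catches only 1 of the 2 blockages
print(balanced_accuracy(y_true, y_pred))  # 0.75, although plain accuracy is 0.9
```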
Yeah, it could also be the case for this image. I also performed an additional experiment to analyze the influence of the number of reference images on the accuracy of the image similarity approach. What I noticed, as you can see here, is that after about 20 labeled reference images the method doesn't really improve any more. Basically, I think that the borderline cases that I presented in the previous slides remain wrongly detected, so the method cannot improve beyond a given value. And that's already it for my presentation. I developed approaches to monitor trash screens using deep learning. I have a baseline classifier that works already quite well, and I can improve these results using the image similarity approach, but that requires a small sample of annotated images for the new camera. The anomaly detection approach, for the moment, clearly obtains the worst results. What I'd like to do in the future is to try to provide more information regarding the state of the trash screens, because for the moment it's only a category, and what we'd really want is more precise information, such as the percentage of blockage. I think that I will have to go again through anomaly detection or unsupervised approaches that don't need any labels, because labeling such information can be quite difficult. Another problem with my current approach is that it doesn't work at night, because of the nature of the night-time images: there is no standard way to observe a trash screen at night. So for the moment, that's a problem. In the future, I would also like to integrate this into practice, because for the moment it's hard to give a sense to the balanced accuracy values that I've shown you.