So the next presentation that we've got is from Xavier Lecumbidi from Green Instruments and the Basque research institute AZTI.

Hello, my name is Xavier Lecumbidi and I'm a computer scientist currently doing my PhD at AZTI. I specialize in computer vision, and in this presentation I'm going to tell you how we applied it to the identification and measurement of tropical tuna. To do this we use cameras already installed on the boats, which take images of the fishery. Our work begins by making a selection of the available images. Then, based on these images and on annotations made by expert observers, we train and validate our prediction models. Finally, we compare our estimates with official data collected in ports.

The two main workflows share two of their tasks: image preprocessing and segment preprocessing. The image preprocessing is divided into three parts. First, perspective correction and contrast enhancement; by applying these two processes we obtain more homogeneous images, which appear to have been taken by the same camera and which have been optimized to improve fish segmentation. Then, estimating the dirtiness of the lens allows us to discard images that are so dirty that they will not provide useful segments for our work.

For the segment preprocessing we use the width of the belt as the calibration reference: knowing its size in both meters and pixels allows us to calculate the size of an individual. Due to the nature of the images the fish may appear overlaid, so not all of them are suitable for measurement. The criteria we have used to consider an individual valid are as follows: a minimum size of 20 centimeters, and the length of the fish must be at least four times its width. By doing so we avoid a large number of segments that are not useful for this work.

For the training workflow we have two main steps: image annotation and the training of the models. Image annotation can be done in two ways, manually or automatically.
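The calibration and validity rules described above can be sketched in a few lines. This is an illustrative sketch only: the belt width value and the bounding-box-style inputs are assumptions, since the real pipeline presumably measures segmentation masks; only the two thresholds (20 cm minimum, length at least 4x width) come from the talk.

```python
# Hypothetical calibration constant: physical width of the conveyor
# belt in meters (the real value is not given in the talk).
BELT_WIDTH_M = 0.9

MIN_LENGTH_M = 0.20   # minimum valid fish length: 20 cm (from the talk)
MIN_ASPECT = 4.0      # length must be at least 4x the width (from the talk)

def pixels_to_meters(length_px: float, belt_width_px: float) -> float:
    """Convert a pixel measurement to meters, using the belt width
    visible in the image as the calibration reference."""
    return length_px * BELT_WIDTH_M / belt_width_px

def is_valid_segment(length_px: float, width_px: float,
                     belt_width_px: float) -> bool:
    """Apply the two validity rules: minimum size of 20 cm, and a
    length/width ratio of at least 4."""
    length_m = pixels_to_meters(length_px, belt_width_px)
    return length_m >= MIN_LENGTH_M and length_px >= MIN_ASPECT * width_px
```

For example, with a 900-pixel belt, a fish segment 600 pixels long maps to 0.6 m; if its width is 100 pixels (aspect ratio 6) it passes, while a 200-pixel-wide segment (aspect ratio 3) is rejected as a likely overlap.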
To do it manually we have used an annotation tool called CVAT, in which expert observers can delimit the fishes and assign a category to each one. The labels that we have used are bigeye, skipjack and yellowfin for each of the target species, and fish for those cases where the observer was able to delimit the fish but not to classify it precisely. To do it automatically, first we must train the segmentation model with the manual annotations. Once the model is trained we segment a monospecific set, so we can safely say that all the fishes the model is capable of segmenting belong to the same species.

For the training of the models we have to differentiate what we have used in each one. The segmentation model uses the complete images with their annotations, while the classification model uses already cropped segments of individual fishes. Both have been implemented in TensorFlow, using pre-trained models for each task. Starting from pre-trained models has allowed us not to need as much data as we would have needed if we had trained them from scratch.

In this workflow we focus on obtaining the predictions of the models and seeing how they fit our data. First we tried to evaluate the two models separately, but while for the classification it is easy to obtain a precise estimate of the accuracy, for the segmentation it is not that easy. For the classification model we have constructed a confusion matrix in which the vertical axis represents the actual class of an individual and the horizontal axis the class predicted by our model. The confusion matrix of a model that classifies everything perfectly would be a matrix whose main diagonal has values of one and the rest is filled with zeros. In our case we can observe that our model correctly classifies approximately 3 out of 4 individuals. For segmentation, however, it is not that easy: how do we define the shape of a fish? One way to solve this, and to validate both models at the same time, is to add new classes to our dataset.
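The "3 out of 4" figure is just the fraction of individuals on the main diagonal of the confusion matrix. A minimal sketch, with invented counts (the talk does not give the actual matrix), where rows are the true class and columns the predicted class:

```python
LABELS = ["bigeye", "skipjack", "yellowfin"]

# Hypothetical counts for illustration only; rows = actual class,
# columns = predicted class.
CONFUSION = [
    [75, 15, 10],   # actual bigeye
    [10, 80, 10],   # actual skipjack
    [15, 15, 70],   # actual yellowfin
]

def overall_accuracy(matrix) -> float:
    """Fraction of individuals on the main diagonal, i.e. classified
    correctly. A perfect classifier would score 1.0."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total
```

With these invented counts the diagonal holds 225 of 300 individuals, giving an accuracy of 0.75, i.e. roughly 3 out of 4.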
These new classes are head, for segments containing only a head; fin, for segments containing only fins; and artifact, for other types of segments, such as malformed ones. This makes our confusion matrix bigger, but the goal remains the same: we must classify fish segments as fish (green square) and non-fish segments as non-fish (red square). This allows us to get rid of those bad predictions. It is preferable to lose some real fishes as artifacts (top right corner) than to add artifacts to what we are considering valid individuals (bottom left corner). This is because we can extract many fishes from a set, so losing a few is not a problem.

Once we have the results of our models it is important to contrast them with official data, and this is what we do in our last workflow. For the comparison with official data we have four samples. The first two are monospecific yellowfin sets, which were fished on free schools, and the other two are mixed sets fished on floating objects. The free-school ones have been very useful to increase the training set easily; however, we can see that their size distribution is farther from the official one than those of the floating-object sets. This happens because the yellowfin tuna, instead of completely overlapping, overlap only with their fins, making the segmentation model count one individual as two different fishes. This problem has been identified and we are working on resolving it. At first the floating-object sets seem more complicated, since the species are mixed, but from what we can see their distributions are estimated very well.

The data we have just seen is the result of good-quality images, but the moment the image capture system is neglected the model is not able to work properly. In these other four samples the camera was very dirty, so it was impossible to make an estimation. Thank you for listening; if you have any questions, please don't hesitate to ask.

Thank you very much, Xavier.
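The validation trick above amounts to collapsing the fine-grained classes into a binary fish / non-fish decision before counting. The class names and the grouping come from the talk; the helper functions themselves are an illustrative sketch, not the actual implementation.

```python
# Fine-grained classes that collapse to "fish": the three target
# species plus the generic "fish" label used by the observers.
FISH_CLASSES = {"bigeye", "skipjack", "yellowfin", "fish"}
# Classes added purely for validation, which collapse to "non-fish".
NON_FISH_CLASSES = {"head", "fin", "artifact"}

def collapse(label: str) -> str:
    """Map a fine-grained class to the binary fish / non-fish label."""
    return "fish" if label in FISH_CLASSES else "non-fish"

def count_valid_fish(predictions) -> int:
    """Keep only predictions that collapse to 'fish'. Losing a few
    real fish (misread as non-fish) is acceptable; counting an
    artifact as a fish is not, since each set yields many fish."""
    return sum(1 for p in predictions if collapse(p) == "fish")
```

The asymmetry in the docstring mirrors the talk's point: false negatives in the top-right corner are tolerable, false positives in the bottom-left corner are not.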
Well, there's a lot to unpack there in five or six minutes, but let's try and get to some of the questions. You spoke to how AI enhances the visual recognition of marine species in electronic monitoring systems, and your presentation is one of the first to really show fish on top of each other, underneath each other, and trying to partition them out and do species recognition at the same time. So it's fascinating how you're slowly refining those models to help you. What are the main hurdles that you foresee to have this taken up across fish value chains? You've spoken about the hurdles within your presentation, but what other types of areas do you think there are opportunities to overcome, to improve?

So, thank you for your question. The main problem that we have encountered working on this was mainly the maintenance of the overall electronic monitoring system, because it was not designed with automatic processing of these images in mind. A lot of the images were not very useful, so we had to work a lot on this pre-segmentation and preprocessing of them.

And was this due to the quality of the images that were taken, the speed the fish were moving and how they were packed, or due to other issues?

Yes, it's mostly because they are very degraded. In a lot of the images the cameras have a lot of blood and water splashes, so it's impossible to take anything from the images.

Right, so it was a practical problem that we're also seeing in trying to monitor catches on boats. They're difficult places to work. Thank you very much. Matt, can I hand over the questions to you?

Yeah, what's the plan with the work that you're doing? Is it in a specific domain, or do you plan on releasing it widely? Because I think it's a really important piece of work that relates to real-world scenarios, and it could be valuable for other people to use.
I don't know if that's something that you're doing, or if it's very much in one specific domain.

So this first assessment was mostly to assess the utility of the current images, because we were working with historical images, but the idea is to implement it on more and more fishing vessels in order to make better estimations for environmental sustainability and so on.

Thanks very much, Xavier. Thank you. Anton, do you have any questions? I think we're missing Anton.

No, even though I'm unmuted, I don't have a specific one, but it's very interesting to see how you work to get the information also to the official authorities that would have to validate the output of your approach. Do you have an idea how long it will take before electronic systems like this will be accepted by authorities for monitoring of catch, or even by vessel captains?

We don't have a precise timeline because, as you can see, we have very little data to compare with official data. So we are now working on building a good data pipeline to have more data to work with, and we don't have a precise timeline to implement it.

These are the kinds of questions that we'll be reaching at the end, and the opportunities for groups and administrations such as FAO, who work closely with RFMOs, to try and bring the signal backwards and forwards amongst developers and fishers, and even people in the value chain, trying to team together so that we can overcome these problems, but also so we can get feedback across different groups about what's working and what isn't working. So thank you very, very much, Xavier.