So, hello everyone. I'm Meitna from the Alan Turing Institute, and today I'll be talking about counting sea pens in ocean-floor video footage. This work was done in collaboration with Cefas, and let me show you what we have been able to do.

First of all, some good news: this was one of the research highlights for the Alan Turing Institute, and we are very thankful to the people at Cefas, who gave us all the data. What we achieved during this project: we trained a YOLOv5 model and reached 90% mAP@0.5 (mean average precision at an IoU threshold of 0.5), with successful identification across multiple years. We did classification, we did sea pen tracking, and we also did video enhancement. Let me show you some videos now.

[The speaker pauses to resolve screen-sharing problems.]

Right, this works. These are some of the natural varieties that we have been able to track.
On the left-hand side we have the human annotator's labelling, and on the right our model's. Similarly, we did this for Virgularia. As you can see, these are very difficult to identify if you are not a trained expert, and our model handles them very well. The next video shows how we enhanced the footage we were given, using a CLAHE (contrast-limited adaptive histogram equalization) implementation. And this is some automated laser detection, which tells us which area to concentrate on; otherwise, our model would be looking at everything.

Right, so what are sea pens? They are feather-shaped colonies found alongside Nephrops (Norway lobster) burrows, and they indicate the health of muddy seabed ecosystems. Cefas has a large collection of video footage of the sea floor covering the period 2014 to 2021. The objectives of this investigation were: can we reliably localize and classify sea pens in the available footage; is there a method to count them across different years; and can the video be enhanced? We achieved all of these during our work.

The dataset comes from the Nephrops burrow surveys, which, I believe, cover grounds around Norway and Ireland. The footage was captured with two different kinds of cameras, which differ in their colour response and lighting geometry and behave very differently; I'll show you how this caused a bit of a challenge for us. We also had some tracking data and bounding-box annotations for different kinds of sea pens.

To show you what the challenges were: here are video snapshots from different years. You can see that the quality is very different, so processing the two kinds of image requires different procedures and parameter settings. That was our challenge. And this is our pipeline.
What we want from our model is to take these videos, divide them into frames, run the frames through our pipeline (which either classifies them or detects bounding boxes), and then stitch the processed frames back together into a video. The algorithm we used for detection and classification was YOLOv5; it has 157 layers with 101.7 million parameters, and it achieved 90% accuracy for most of the years.

Here we have some detection results. For both the training set and the validation set, the losses go down and the mean-average-precision metric goes up, so these are good results. As you can see, the accuracy differs from year to year, because it depends on the kind of raw data we had to work with. Finally, I just wanted to show you how difficult the problem is: to the human eye it is very hard to understand what is happening. This is our classification model, where class one is the sea pen, the second is the burrow area, and the third is the background, and it is very difficult.

Next, the sea pen tracking that we performed: as you can see, we assign different IDs to different objects (different sea pens and burrows) across the video. And finally, this is our process for laser detection: greyscaling, Gaussian blur, Canny edge detection, and then several further steps, as you can see.

For future work, we would like to integrate the entire pipeline, carry out a comprehensive accuracy assessment, and extend the model to lobsters, which is upcoming work. The conclusion is that we got great results for detection and classification, and excellent tracking and laser detection. Thank you.

Can I ask you one question? We have tried this kind of approach before and failed, looking at surface sediments to assess sediment structure. Could this model be adapted for that purpose?
Yeah, yeah, absolutely, with some customization. Okay, thank you very much.