I have the pleasure of welcoming and introducing the first presentation today. Speaking to start us off is Jennifer Stahl of the Pacific Islands Fisheries Science Center in Honolulu, with a talk on machine learning in the Hawaii Longline Fisheries Electronic Monitoring Program.

Aloha. I'll be discussing machine learning in the Hawaii Longline Fisheries Electronic Monitoring Program. I'm Jenny Stahl, and I'm part of the Pacific Islands Fisheries Science Center EM team, which also includes Joshua Tucker and Keith Bigelow. I'm pictured here with Matthew Carnes, who previously worked at the Science Center and was pivotal in getting the EM and AI programs up and running; he's now an independent contractor.

A little bit of background: an electronic monitoring pilot program began in the Pacific Islands in 2017 with 14 systems. Those systems have aged out, and we now have 20 newly installed systems. Each system consists of a deck camera and a rail camera; you can see the placement in the diagram here on the top right. The systems also have GPS, reel-rotation, and pressure sensors, which record speed and trigger the cameras to record during hauling only. There is a computer in the wheelhouse that records video on two hard drives, which you can see in the photo above our team member, Josh Tucker.

Once we collect EM video, it is reviewed for catch using the review program, which allows simultaneous viewing of the sensor data alongside the cameras, as you can see pictured here on the left.

The Hawaii longline EM program has six currently defined objectives, shown here. We are collaborating with Deloitte, CVision, and NOAA Science and Technology to develop AI tools to meet these objectives. Our collaborators will address the first objective, identifying crew and redacting images with identifiable information. For our collaborators to build the AI tools that address the remaining objectives, we need to keep building our AI library. Our current AI library consists of over 250,000 images, mostly of fish on deck, but also some fish in the water as well as fishing activity.

So I'm going to go over how we plan to meet our objectives by adding to this AI library. To improve species identification, we are adding head and tail annotations to each fish body. Currently, our preliminary training algorithms produce a lot of misidentifications and false positives, and Matthew Carnes of EK Solutions found that when he added annotations for fish heads and tails, he had better success. To enumerate fish hooks, we will add annotations distinguishing baited hooks from hooks with no bait; this information can help indicate whether false killer whale depredation is occurring on the catch. To distinguish fish from protected species, we will add more annotations of sea turtles and fish in the water. To determine catch disposition, kept versus discarded, we are going to use tracks for all of our annotations, which bind the images together so the fish can be followed as it moves across the deck and we can see whether it is discarded.

Finally, we hope to minimize the amount of footage a human reviewer has to watch by distinguishing between catch events and hooks with no catch. In the Hawaii longline deep-set fishery, only about 12 fish are caught for every 1,000 hooks set, so there is a lot of empty footage. This is being done through the annotation of hands and gear with and without catch, and preliminary trainings indicate some success (a minimal sketch of this catch-event filtering idea follows below).
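To make the catch-event idea concrete, here is a minimal sketch of how per-frame detector outputs might be collapsed into candidate catch events so a reviewer only watches footage around likely catch. The `Detection` fields and the thresholds are illustrative assumptions, not the Hawaii program's actual pipeline.

```python
# Minimal sketch: collapse per-frame detections into candidate "catch events"
# so a human reviewer only watches footage around likely catch.
# Detection fields and thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    frame: int        # frame index within the haul video
    label: str        # e.g. "fish", "hand_with_catch", "baited_hook"
    score: float      # detector confidence, 0..1

def candidate_events(detections, min_score=0.6, gap_frames=150, pad_frames=60):
    """Group confident detections into (start, end) frame windows for review.

    Detections closer together than `gap_frames` are merged into one event;
    each event is padded so the reviewer sees surrounding context.
    """
    frames = sorted(d.frame for d in detections if d.score >= min_score)
    events = []
    for f in frames:
        if events and f - events[-1][1] <= gap_frames:
            events[-1][1] = f                 # extend the current event
        else:
            events.append([f, f])             # start a new event
    return [(max(0, s - pad_frames), e + pad_frames) for s, e in events]
```

With roughly 12 fish per 1,000 hooks in the deep-set fishery, only about 1.2 percent of hook events carry catch, so even an imperfect detector that flags candidate events could cut review time substantially.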
So where do we want to go from here? We would like to be able to automate annotation as we continue to build our AI library. We have run preliminary trainings with some success at auto-annotation, but there are a lot of false positives and misidentifications to sift through, so it hasn't been very practical yet. We would also like to leverage cloud computing to improve processing speed for automatic annotation and AI trainings. Also on the wish list is determining what imagery is needed to train algorithms to estimate fish and sea turtle lengths. Okay, thank you, and let me know if you have any thoughts or questions.

Thank you for sharing your story of developing an image library and training those algorithms to detect fish and protected species, Jennifer. The hook story was fascinating to me too. I have a small question: in working to improve the efficiency of reviewing EM footage, what has been your biggest hurdle, what solutions have you come up with, and were those solutions unusual? If you can share that story with us, that would be great.

Hi. Just to clarify: you're asking in what ways we think we can make the training more efficient, and what we've done so far toward that?

Yes, basically. When you started this effort, I'm sure you didn't think, oh, we're going to move on to heads and tails because that will help us, and yet your focus has shifted. And now you can get hooks as well, baited and not baited, so that's another data set to help inform the picture. So how has your story evolved, where have some of the challenges been, and how do you see things developing in the future?

Yeah, really it was Matthew Carnes, who I worked with before; he has done massive amounts of annotations, and we've learned a lot from him. He's the one who said, okay, we want to be able to identify a catch event. For the Hawaii longline fishery it's very important to capture protected species, but it's hard to annotate protected species because there aren't many of them; they're rare events. So he came up with the idea that we would annotate hands. He could track the hands and the leaders to show, okay, there's a fish on or there's not a fish on. He had this genius idea, and I think many people thought he was a little crazy sitting there annotating hands, but basically it defined the difference between fish being caught and fish not being caught, and then we could see, with these initial trainings, whether the algorithms could identify when there was catch on the line. Our initial trainings have shown that it is possible. So as we move forward and develop these tools with the AI experts, we're hoping all of those annotations get us to where we can identify these catch events, because our number one priority is really to minimize footage for review. If we can identify a catch event, and then a human just has to check it, confirm the species ID, and catch any protected species, that's really important for us.

Thank you very much. Matt, have you got a question for Jennifer's team?
No, really, I think you covered it, Kim, but what I would really like to highlight is this process of creative thinking outside the box, where, if you examine the whole scenario, you can achieve outputs from things you don't immediately expect to produce outputs. I've seen this happen numerous times with deep learning and machine learning, where you suddenly realize that some piece of the frame or the footage can actually give you really important data that you hadn't considered; in the traditional sense of gathering catch and effort data, you don't count empty hooks. It's the same for underwater video, camera traps, and many other things: you can learn more by thinking a bit more creatively. And that goes as well for integrating other dimensions into the metadata that accompanies the images, like gender, for example, which is really important.

Okay, well, thank you very much for that presentation. Let's move on. As we said, we're going to try to keep the pace going so that people find themselves engaged in multiple stories over the day, and we've got time for a bit of discussion at the end.
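As a closing illustration of the track-based annotation strategy described in the talk, here is one minimal way to represent a tracked fish with separate head and tail boxes and a track-level disposition, so that frames of the same fish are bound together as it moves across the deck. The field names and structure are assumptions for illustration, not the program's actual annotation schema.

```python
# Illustrative annotation schema (assumed field names, not the program's
# actual format): one record per fish track, binding per-frame boxes for
# the body, head, and tail, plus the track-level disposition.
from dataclasses import dataclass, field

Box = tuple[float, float, float, float]   # (x, y, width, height) in pixels

@dataclass
class FrameAnnotation:
    frame: int
    body: Box
    head: Box | None = None   # head/tail boxes reportedly improve species ID
    tail: Box | None = None

@dataclass
class FishTrack:
    track_id: int
    species: str                  # reviewer-confirmed species label
    disposition: str              # "kept" or "discarded"
    frames: list[FrameAnnotation] = field(default_factory=list)

# Example: one fish annotated in a single frame of its track, then discarded.
track = FishTrack(track_id=42, species="mahimahi", disposition="discarded")
track.frames.append(FrameAnnotation(frame=100, body=(310.0, 220.0, 180.0, 60.0),
                                    head=(310.0, 230.0, 40.0, 40.0)))
```

Binding per-frame annotations under one track identifier is what lets a model (or reviewer) follow a single fish across the deck and attach one disposition to the whole sequence rather than to isolated images.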