Our next presentation is by Dr. Dadong Wang of Australia's CSIRO, talking about automated eyes for better fisheries management.

Hi, my name is Dadong Wang from CSIRO Australia. I will present our software Wanda, an AI-based web portal for automated EM video analysis. I will first briefly introduce myself, our team, and CSIRO, then present our current work and achievements, followed by the challenges we are facing and our future work. Our machine vision technology team was formed in 2016, including members from CSIRO Data61 and CSIRO Oceans and Atmosphere. I'm a principal research scientist and the team leader of the imaging and computer vision group at Data61. Rich and Jeff are also principal research scientists; they are from CSIRO Oceans and Atmosphere. Other team members include computer vision and AI research scientists, postdocs, software engineers, and a business development manager. Our collaborators include government agencies as well as industry partners. CSIRO is Australia's national science agency. We have about 5,500 people working with over 2,800 industry partners. CSIRO is in the top 1% of global research agencies, and we are based at 55 sites across Australia and the world.

The main functions of Wanda include fish detection, fish species identification, and tracking-based catch counting. This slide shows the pipeline of deep-learning-based EM video analysis. RGB videos are the input to a fish detection and classification machine learning model. The detection results are then used as input to deep-learning-based tracking, to count how many fish are caught for each species. Here are some qualitative results from our test videos. You can see we got some very promising results for species with plenty of training images. However, the results do not look as good for species for which we don't have enough training images. In summary, we have achieved a precision of close to 90% and a recall of about 80%. Here are some highlights of Wanda.
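The detect-then-track counting pipeline described above can be sketched roughly as follows. This is an illustrative toy, not CSIRO's actual Wanda implementation: it assumes per-frame detections (species label plus bounding box) have already come out of a trained detector, and links them with a simple greedy IoU tracker so each track is counted as one caught fish. All names here are hypothetical.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def count_catch(frames_detections, iou_thresh=0.3):
    """Greedy IoU tracker: link detections across frames into tracks,
    counting one catch per new track, grouped by species.

    frames_detections: list of frames, each a list of (species, box) pairs,
    as a stand-in for the output of a deep-learning detector/classifier."""
    tracks = []   # each track remembers its species and last-seen box
    counts = {}
    for detections in frames_detections:
        unmatched = list(range(len(tracks)))
        for species, box in detections:
            best, best_iou = None, iou_thresh
            for i in unmatched:
                if tracks[i]["species"] == species:
                    overlap = iou(tracks[i]["box"], box)
                    if overlap > best_iou:
                        best, best_iou = i, overlap
            if best is not None:            # same fish seen again: extend track
                tracks[best]["box"] = box
                unmatched.remove(best)
            else:                           # a new fish enters the view
                tracks.append({"species": species, "box": box})
                counts[species] = counts.get(species, 0) + 1
    return counts
```

A real system would replace the stubbed detections with model inference and use a more robust tracker, but the counting logic (one new track = one catch event per species) is the same shape as the pipeline on the slide.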
The catch event function can be used to detect catch events in a video, so that an EM observer can be directed to the video segments that contain fish instead of looking for fish in a video manually. This saves video review time. The fish detection and species identification function can be used to assist EM observers with species identification. The automated counting and reporting function can be used to generate a catch report for a trip. The auditing function can be used to review automatically identified fish species and correct any errors found.

Here is an example web page of the Wanda web portal. It shows a snapshot of the video frame where a catch was detected and a short video clip showing the moment the fish was caught. The summary of the catch for the trip is shown at the bottom. Any errors from the automated fish detection and species identification can be corrected manually. Trip reports can be downloaded as an Excel spreadsheet or a PDF file. This is a demo showing automated fish detection, species identification and fish counting from a video captured over a sorting tray.

This slide shows another relevant project we did, called Boat to Plate. It was designed to capture vessel-level data, including species, fish size, and capture date, time and location, for automated fish origin information collection, and fish tagging for supply chain management. Information about the fish's origin can be retrieved by scanning the fish tag, as shown on the right-hand side, including species, size, color, fishery, boat name, location, date, time and storage temperature.

So what are the challenges we are facing, and what will we do next? First, inconsistent video quality is a headache. This is caused by the movement of the boat, uncontrolled video backgrounds, and lighting conditions in rough seas. Second, we do not have enough training images for rare species.
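The Boat to Plate idea described above can be illustrated with a minimal sketch: vessel-level catch data is stored against a tag ID, and scanning the tag looks the record up for traceability. The record fields follow the talk; the storage, function names, and example values are invented for illustration and are not CSIRO's actual system.

```python
import json

TAG_DB = {}  # stand-in for a real traceability database

def tag_catch(tag_id, species, size_cm, fishery, boat,
              location, caught_at, storage_temp_c):
    """Record vessel-level data for one tagged fish at capture time."""
    TAG_DB[tag_id] = {
        "species": species, "size_cm": size_cm, "fishery": fishery,
        "boat": boat, "location": location, "caught_at": caught_at,
        "storage_temp_c": storage_temp_c,
    }

def scan_tag(tag_id):
    """Retrieve the fish-origin record, e.g. after scanning a QR tag."""
    return json.dumps(TAG_DB.get(tag_id, {}), indent=2)
```

In a deployed supply-chain system the lookup would hit a shared database or ledger rather than an in-memory dict, but the flow is the same: capture once on the boat, retrieve anywhere by tag.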
Third, how do we generalize a machine learning model trained with video data from several fishing trips to other trips of the same vessel? How do we generalize a machine learning model to multiple vessels? And how do we generalize a machine learning model trained with video data captured from one fishery to other fisheries? Thank you very much. Any questions?

Thank you, Dadong. Thanks for sharing the story of CSIRO's work; there's a lot in it. Developing automated solutions to help manage large-scale fisheries is a large challenge, but as we can see, it's progressing really quickly. We've heard from other speakers about the challenges they have: the breakthroughs happening in technology are being held back by something as simple as the cleanliness of a camera lens and such like. There seems to be more breakthrough in the coding work to understand the pictures than in just collecting the pictures in a useful manner. Can you give us a little bit of the background story about some of the challenges that you've had in formulating your ideas, and where they've maybe shifted once you've gone operational? A little bit like we just heard from Jennifer, where they started to monitor more the way people moved on a boat to understand whether there was a catch event or not, and looked at things like baited or unbaited hooks. Were there any surprises for you, or any breakthroughs that came along your journey?

Yes, we started this work in 2016. It actually came from a very nice idea from Jennifer and Rich, because they found there were lots of data, tons of data, sitting there, and we needed to do something about it. Then we came together to form this team and initially started to find a solution for how to reduce 8 hours of video to 8 minutes. That means making sure EM observers are looking at fish instead of fast-forwarding to try to find the fish, when there may actually be nothing there in an hour-long piece of video. So that's the first thing we did.
We worked out how to detect the key frames, to detect the fishing events, and we got some very good results for that. That sparked some other nice ideas, so then we came to use deep learning to identify species. With that, we also found lots of challenges. Initially we got some very nice video quality that worked perfectly; we were so excited. But later, as we got video from more fishing trips, we found quite a different story, because some videos were very blurry due to the movement of the boat, and some videos came with water drops, because in rough conditions the camera lens gets blocked by water drops and nobody cleans it. Eventually we found this is not as simple as we thought. Also, the videos are recorded day and night; sometimes it was sunny, sometimes cloudy, and video quality is quite different from one boat to another. The species are also quite different from one fishery to another. We found a lot of problems we need to deal with; that's why it takes so long, but we have put together a list and are trying to resolve them one by one. I listed some of these challenges on my second-last slide.

And later we found, as I mentioned, the problem of how to generalize a machine learning model trained with several trips of data to all trips of the same boat, because different boats have different backgrounds. Even on the same boat, the scene can vary a lot: sometimes the camera view is very bloody or blocked, sometimes very clean, sometimes with lots of water. That makes this very hard. So we are currently working on how to generalize a machine learning model for the same boat, then to different boats, and then from one fishery to more fisheries. Sorry, there's a lot to tell, but that's a quick summary.

That's exactly the kind of story that gives people insight into the challenges, where they are, and how many there are.
And it's great to hear a repeated story about the need for training data for rare species; that might be a great opportunity for collaboration across the international community, to build up those data sets for the rare species. Deep-water sharks are always our problem: we just don't see enough of them to make people aware of what species they might be. So Matt, do you have anything to add to this one?

It's always a challenge, isn't it, having a theoretical idea and then trying to actually get it to work in a marine environment. It's got to be one of the toughest environments to try and get anything to work in, alongside outer space, I think. The equipment breaks and the salt water corrodes everything, let alone the conditions changing in front of the camera; that's the machine learning challenge, isn't it. And I think, alongside the deep-water sharks that were mentioned, there are also other options for gathering biometric data, such as using stereo cameras and depth maps as cues, and perhaps other frequencies like the infrared end of the spectrum, which might give a specific set of information that we don't normally look at. That's how my colleagues and I approached new algorithms for sizing, which work quite well. So I think, again, it's a thinking-outside-the-box scenario. It's going to be interesting to see what's available for use inside and outside of CSIRO, because obviously there's a lot of commercial-in-confidence development there. Rich and Jeff, have you got any comment on this point? How do you see the community being able to learn from CSIRO's developments?

I'm going to type a question into the Q&A. I think Rich is about to answer that question in the chat there. Well, maybe we'll get to these kinds of questions on governance, data management and sharing later on. So thank you very much to Dadong and the team at CSIRO for sharing your stories.
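The stereo-camera sizing Matt mentions rests on standard triangulation for a calibrated, rectified stereo pair: disparity between the two views gives depth, and depth converts a fish's length in pixels to a real-world length. A minimal sketch, with all parameter values illustrative rather than taken from any actual rig:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth Z = f * B / d for a rectified stereo pair:
    focal length f in pixels, baseline B in metres, disparity d in pixels."""
    return focal_px * baseline_m / disparity_px

def fish_length_m(pixel_length, focal_px, depth_m):
    """Back-project an image-plane length (pixels) to metres at depth Z,
    assuming the fish lies roughly parallel to the image plane."""
    return pixel_length * depth_m / focal_px
```

For example, with an assumed focal length of 1400 px and a 0.12 m baseline, a 48 px disparity puts the fish at 3.5 m, and a 200 px snout-to-tail span there corresponds to a 0.5 m fish. Real systems also need lens-distortion correction and underwater refraction handling, which this sketch omits.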