We'll go on to our next presentation, which is by Giuseppe Amato from CNR. Giuseppe is talking about a prototype model for visual recognition of otoliths from fish species.

Good morning, I'm Giuseppe Amato, and I'm the head of the Artificial Intelligence for Media and Humanities lab at ISTI-CNR. In this presentation I'm going to tell you about an experiment we did with FAO on using artificial intelligence to recognize otoliths in images; we built a very preliminary prototype able to do this. First, let me say a few words about the Artificial Intelligence for Media and Humanities lab, which works on artificial intelligence applied to four different application domains: artificial intelligence for vision and deep learning, artificial intelligence for text and human language, artificial intelligence for digital humanities, and artificial intelligence for multimedia information retrieval. In a very few words and a very few pictures, let me tell you the purpose of the research that we carry out. There are plenty of techniques for analyzing images, extracting their content and recognizing the content of an image. However, if you have a huge, large-scale database of images and you want to be able to retrieve, to search, images in this database according to their content, things start to be a bit more difficult. The research that we do goes exactly in this direction: trying to develop solutions for retrieving images in large-scale databases without any text associated with the images. We have about 36 people, among researchers, research assistants, research associates, a technician and a person taking care of the administration, and at the moment we have seven PhD students working in our lab. Let me now go to the preliminary experiments that we made and the prototype that we built.
Let me thank Ifremer for providing us with the images for this experiment, and also the Research Institute for Development in France for supporting this. Here you can see a screenshot of the homepage of the prototype. You can see some random images coming from the database; you can click on one of the images, or you can also upload an image of your own and ask the system to recognize the otolith. The result looks like this: you see the predicted class, the guessed class, plus you also retrieve similar images from the database of images. Let me show you a live demo of the system. At first you have some random images coming from the database, and you can ask for additional random images. Suppose you click on one. When you click, you can see the result: you have similar images taken from the database, and you also see the predicted class. Let me try again with a different query; for instance, I click on this one. Again, you retrieve similar images and you see the predicted class. Now, let me go back to my presentation and say just a few words about the details of these preliminary experiments. We used a hybrid convolutional neural network trained on about 1,000 different categories with three and a half million images. Those categories come partly from the Places database, about 200 categories, and more than 900 categories come from the ImageNet dataset. To improve performance when extracting features from otolith images with the hybrid convolutional neural network, we fine-tuned the network using a subset of images coming from the otolith dataset that I mentioned before. Features are obtained by considering the activations of the neurons in the sixth fully connected layer of the neural network. In order to predict the class, we use a very simple k-nearest-neighbor classifier. It works as follows: we search for the k images most similar to the query.
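The classification step described here, searching for the k most similar images and taking the most frequent class among them, can be sketched in a few lines of Python. This is only an illustrative sketch, not the prototype's actual code: it assumes the feature vectors (stand-ins for the fc6 activations) have already been extracted, and the toy 2-D vectors and species labels are invented for the example.

```python
from collections import Counter
import math

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(query, database, k=15):
    """database: list of (feature_vector, label) pairs.
    Returns the most frequent label among the k nearest neighbours."""
    nearest = sorted(database, key=lambda item: euclidean(query, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D stand-ins for the real high-dimensional fc6 features.
db = [([0.10, 0.20], "herring"), ([0.15, 0.25], "herring"),
      ([0.12, 0.22], "herring"), ([0.90, 0.80], "cod"),
      ([0.88, 0.85], "cod")]
print(knn_predict([0.11, 0.21], db, k=3))  # the 3 nearest are all herring
```

In the experiment described in the talk, k is 15 and the labels are the expert-assigned species tags of the annotated otolith images.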
For instance, in this example, given the query image, I retrieve the 15 images most similar to the query, then I look at the tags associated with the retrieved images and I consider the most frequent class. In this example, only two images are tagged with a different species, while the remaining ones are tagged as herring, so we predict that the class of the query image is herring as well. Here is the list of people who worked on this very preliminary experiment. Please feel free to contact me or any other person on this list for additional information, and also feel free to check our website to see what is going on in the lab, our research activities, projects, etc. Thank you very much, and I look forward to receiving some questions from you.

Thanks for sharing, Giuseppe, to you and your team. We have some questions for you. One of the main questions that jumped out at me immediately when I saw this was not about otoliths but really the overall procedure that you've used, because a range of marine product commodities are a big struggle for this kind of application. We've been trying to spend reasonable amounts of funds over the years to build applications, small apps, where people can take pictures of, for example, shark fins and the major points around the shark fin, to try to understand if those relate to data which can help us find the species group those shark fins came from. This is really important, obviously, when those types of commodities arrive at airports, where they're well disconnected from the fishing, further along the value chain. So I wonder what kind of conversations you've had in your group. Have suggestions come up that the procedure you've used could be applied across other aquatic species or related species, or was it really the case that you had an amazing dataset of over 3 million images and thought, wow, with that kind of power we should be able to break through?
What was the thinking behind coming to this project, and the thinking maybe for the future? Thank you.

Thank you very much for the questions. In fact, we have had some discussions about trying to also recognize shark fins. Actually, nowadays one of the major limitations in applying artificial intelligence to real applications is the availability of data. We typically have very powerful algorithms that can do almost everything, provided that you have data to train them, data to extract knowledge from. And in fact, what we did with the otoliths is exactly this: we started from an existing convolutional neural network, which was trained for different tasks, for different applications, and we fine-tuned it using annotated otolith data. We can use a similar, or probably exactly the same, procedure to move to a different application such as shark fin recognition. Provided that you have enough data, and provided that you have reasonable computational power to analyze it, you can build a new base of knowledge which you can exploit for recognition. You could easily have an application running on your phone which allows you, in place, to take a picture of a shark fin with your mobile phone and decide which species the shark fin comes from. Actually, we have already touched with our hands a dataset of images of shark fins; it is just a matter of starting some additional experiments, and it can be done easily using similar procedures.

We've got another question for you.

Thank you very much for that answer. I suppose your highlighting of the value chain brings us to think about the verification process for that data as well. So, for example, with commodities where we don't know what the source species is.
We need to run a DNA test on that material, as well as adding it to the dataset, which is perhaps another component we can think about: verification processes for the data that's collected. We could maybe add that to the whiteboard later.

Yeah, actually, there has been a big push recently in artificial intelligence on giving explanations of why certain decisions are taken, and those explanations can also be used as a hint for verification of the various procedures. When you go to the doctor and ask for a consultation, the doctor does not just tell you "the problem is this"; he tells you "the problem is this because you have something", etc. Using a procedure like the one we used, you can have similar explanations. You remember in my video, when I showed the demo: in addition to predicting the class, we retrieve similar images from the database. The person using the tool can then try to verify that the decision taken is reasonable by looking at those similar images taken from the database, which are already annotated. So this might go in the direction of explaining, and also verifying, that the decisions taken by the system are sound and correct. I'm not sure this was the direction of your question, but I hope it somehow sheds some light.

I was also highlighting that when it comes to ambiguous commodities whose origin we want to know, we may well have a million pictures of shark fins, but they need a DNA test to prove what species they were originally, which adds another dimension of cost onto the actual process of building models, doesn't it? What about you, Thomas, if you've got a question?

It's again a bit about the metadata that comes with the pictures.
Do we have any vision where we can build a more global system of metadata, so any formal rules or formats of metadata for AI-assisted data analysis?

Yeah, currently the data model behind the scenes that we used for this experiment was very, very simple. Every image in the training set that we used to fine-tune, and also to run the classification, is simply associated with a class given by an expert in the domain. Probably this can be extended with additional information. I think one interesting thing that could be useful to know for otoliths is the age of the fish: determining not just the fish species, the category of fish the otolith was taken from, but also the age. Provided that we have this data, and it should be included in the metadata associated with the images, we can also try to extract additional knowledge from the dataset and run classification algorithms able to determine not only the species but also the age of the fish the otolith was taken from. The more data you have in your knowledge base, the more data you have in your metadata, the more information you can extract in order to train the AI that is going to take decisions, or suggest decisions, on the given images.

Thank you very much, Giuseppe. It reminds me of the story where they had a massive dataset of pictures of the human iris. They worked out a way of using AI to identify cancers of the iris, but at the same time they were entering the metadata, which meant that eventually the AI was able to tell the researchers whether it was a male or a female iris. And yes, this was coming from the metadata training, but no human knows what the AI is doing; no human even claims to know what the AI is doing to be able to tell from a picture of an iris whether it came from a male or a female, yet it was 97% accurate.
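The richer metadata discussed above, a species class plus an optional fish age per image, could be sketched as a simple record type. This is a hypothetical sketch only: the field names, file names and values are invented here and are not the prototype's actual data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OtolithRecord:
    image_path: str
    species: str                     # expert-assigned class, as in the prototype
    age_years: Optional[int] = None  # extra metadata a future model could learn

def targets(records, attribute):
    """Collect (image_path, label) training pairs for one attribute,
    skipping records where that metadata field is missing."""
    return [(r.image_path, getattr(r, attribute))
            for r in records if getattr(r, attribute) is not None]

records = [
    OtolithRecord("img_001.png", "herring", age_years=3),
    OtolithRecord("img_002.png", "cod"),  # age not yet annotated
]
print(targets(records, "age_years"))  # only fully annotated records survive
```

The point of the helper is that each extra metadata field (age, catch location, and so on) yields its own set of training pairs, so separate classifiers can be trained from the same annotated image collection.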