And our next presentation is by Gianpaolo Coro, on AI, Open Science and Blue Growth: Aquaculture, Habitats, Ecological Niche Modelling and Fish Identification.

Hello everyone, I'm Gianpaolo Coro, a researcher at CNR, Italy. I'm a physicist and I'm currently involved in a number of EU projects. My research topics range from computational biology to NLP, using artificial intelligence and open science methodologies. This presentation will explain the support that artificial intelligence and the open science paradigm can give to the new long-term strategy for the sustainable growth of the marine and maritime sectors, that is, Blue Growth. First, let me clarify what is commonly meant by big data, which is a fundamental concept in modern AI applications. Big data are data streams characterised by large volume, high generation velocity, large variety, unreliable content and heterogeneous descriptions. These data require unconventional computer science systems to be properly managed and to extract valuable information from them. Some examples in marine science are vessel-transmitted data, climatic parameters and species observations. In the last decade, new science paradigms have been born to manage big data while supporting large computations and collaborative experimentation. These paradigms include Open Science, e-Science and Science 2.0, whose definitions are rapidly converging, because overall they all guarantee the three Rs of science, that is, reproducibility, repeatability and reusability of data and processes, and the transparency of the scientific methodology. Alongside these concepts, AI has applications to big data in several branches of science. An AI system is any system that executes tasks that can be perceived as intelligent. It can either emulate what happens inside a biological system or simulate its input and output. AI is by definition multidisciplinary and requires collaboration between experts.
E-infrastructures are platforms that manage both AI and open science requirements in the context of big data processing, and they have been largely supported by the European Commission in the last decades. E-infrastructures are networks of hardware and software resources that support collaborative and data-intensive science. They also support the creation of virtual research environments that foster data sharing and collaborative experimentation while guaranteeing the transparency and repeatability of the workflows. One example of e-infrastructure is D4Science, a CNR platform that hosts virtual research environments for many application domains, ranging from taxonomic studies to geothermal analysis and cultural heritage, all dealing with AI and big data management. All this technology supports the Blue Growth strategy through the generation of new knowledge from big data, the reusability of data and methodologies across domains, the transparency of the workflows, because it is possible to verify and repeat experiments, and an overall guarantee of longevity for data and processes.

In the next slides, I will rapidly show questions that have been answered through AI methodologies based on collaborative infrastructures that use open science approaches. The first question is: why is a species in a certain place? This question has been answered through ecological niche models, which combine multiple AI models with big data on environmental parameters and species observations to produce uniform distributions of habitat suitability scores. In the slide, you can find a link to access the habitat distributions of over 60,000 marine species hosted on the D4Science infrastructure. By combining ecological niche models with cluster analysis, we can discover patterns of habitat change due to climate change, and answer the question: how does climate change affect species' habitats?
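The talk does not detail how the niche models score habitat suitability, but a common formulation (used, for example, in environmental-envelope models such as AquaMaps) assigns each environmental variable a trapezoidal response curve and multiplies the per-variable scores. The sketch below is illustrative only; the variable names, thresholds and grid cell are invented, not taken from the presented models.

```python
# Minimal environmental-envelope niche model (trapezoidal response curves).
# All envelopes and values below are hypothetical, for illustration only.

def trapezoid_suitability(value, vmin, pref_min, pref_max, vmax):
    """Suitability in [0, 1]: 1 inside the preferred range,
    ramping linearly down to 0 at the absolute tolerance limits."""
    if value <= vmin or value >= vmax:
        return 0.0
    if pref_min <= value <= pref_max:
        return 1.0
    if value < pref_min:
        return (value - vmin) / (pref_min - vmin)
    return (vmax - value) / (vmax - pref_max)

def habitat_suitability(cell, envelopes):
    """Overall score for one grid cell = product of per-variable scores."""
    score = 1.0
    for var, (vmin, pmin, pmax, vmax) in envelopes.items():
        score *= trapezoid_suitability(cell[var], vmin, pmin, pmax, vmax)
    return score

# Hypothetical envelopes, as might be estimated from species occurrence records
envelopes = {
    "sst": (10.0, 15.0, 22.0, 28.0),       # sea-surface temperature, deg C
    "salinity": (30.0, 34.0, 37.0, 40.0),  # practical salinity units
}
cell = {"sst": 18.0, "salinity": 35.0}
print(habitat_suitability(cell, envelopes))  # 1.0: inside both preferred ranges
```

Repeating this scoring over every cell of an environmental grid yields the kind of habitat suitability map mentioned above; clustering those maps across time periods is one way to expose patterns of habitat change.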
If we add forecasts of the environmental parameters as input to the AI models, we can predict future expansions of invasive species. In this slide, for example, you can see the prediction of the invasion of the Mediterranean Sea by the silver-cheeked toadfish, which was analysed through an ensemble of seven AI models working on future forecasts of the environmental parameters under different greenhouse gas emission scenarios.

So, where is open science? It is behind the scenes, because every step of an experiment has an associated link that allows one to exactly repeat that experimental step, or to modify the parameters and re-execute it. Additionally, all links correspond to web services described under a standard representation, which allows for fast re-application to new data. Other experiments worth mentioning include the 3D reconstruction of underwater environments to study, for example, coral biomass change over time, based on photos uploaded by a group of scuba divers, and the use of collaborative AI platforms to build systems that will later run on board edge computing devices. The case reported in this slide is the UDMOS device, developed in collaboration with FAO, which uses an array of cameras to record videos of large fishes passing in front of one of the cameras. This system uses a combination of AI and visual computing approaches that were first collaboratively developed and tested on an e-infrastructure and were later deployed on an unbaited underwater device.

In conclusion, Blue Growth is benefiting from AI, and AI itself is gaining more and more importance in decision-making processes. When combined with open science, AI can enhance knowledge discovery and process big data with a transparent approach. Transparency is indeed crucial to communicate with decision makers. So this concludes my presentation. Sorry for the technical issues; I tried to be fast. I'm happy to take questions.

Thank you, Gianpaolo.
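The ensemble forecasting described in the talk can be reduced to a very small sketch: several independently trained models each map environmental conditions to a suitability score, and the ensemble combines them, here with a simple mean. The three dummy model functions and the scenario inputs below are invented stand-ins, not the seven actual models or real climate forecasts.

```python
# Sketch of multi-model ensemble prediction under climate scenarios.
# Models and scenario values are hypothetical placeholders.

def ensemble_suitability(models, env):
    """Combine the models' suitability scores into one ensemble score."""
    scores = [m(env) for m in models]
    return sum(scores) / len(scores)

# Dummy stand-ins for trained models: each maps conditions to a score in [0, 1]
models = [
    lambda env: min(1.0, env["sst"] / 30.0),       # smooth response
    lambda env: 1.0 if env["sst"] > 20 else 0.2,   # threshold response
    lambda env: 0.5 + 0.5 * (env["sst"] > 24),     # step response
]

present = {"sst": 21.0}     # current sea-surface temperature (hypothetical)
warm_2050 = {"sst": 24.5}   # warmer-scenario forecast (hypothetical)

print(ensemble_suitability(models, present))
print(ensemble_suitability(models, warm_2050))  # higher score suggests expansion risk
```

Comparing the ensemble score under present conditions with the score under forecast conditions is what turns a niche model into an invasion prediction.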
I think you did exceptionally well considering you weren't planning for that, so thank you very much.

Thank you.

I might turn over to Matt, because I know Matt has worked with you, and there are actually so many areas which you've worked across, but one of those was in trying to get cameras to collect over long periods but only select the imagery that was needed, which allowed you to somehow make decisions within the camera rather than when you've got home and have run out of battery. So maybe Matt's got a question to lead with. Thank you.

I can do, Kim, but it's a bit vain for me to ask a question about my own work. The work on UDMOS is available for everyone to read, but I think something really important that you spoke to me about a couple of months ago, Gianpaolo, is the idea of how old the algorithms in AI are, and how we need to work on more specific algorithms to tackle issues such as the identification of fish. I don't know if you remember that conversation; I hope so. I found what you said about the age of some of the technology and the maths that we're using currently extremely interesting.

I don't think that was necessarily a question, but Gianpaolo, I think Matt was asking you to repeat some of the stories about how far back this kind of thinking goes in developing the algorithms which today are in the news. You know, five years ago, if you mentioned the word algorithm, no one knew what you were talking about, or very few people knew. And Matt was saying you've had conversations, and you've done some reading, about the age of the mathematics which has built these types of systems. Maybe you could share some of those stories.

Yes.
Well, it's a complex question to answer, because, yes, we are now at the edge of technology for some applications, like that of unbaited remote devices, because originally we wanted to produce a scalable solution, also in terms of costs. Yes, it is always possible to throw a big computer underwater, but then you have to care about batteries, the number of devices that you want to buy or propose, and also the computational capacity of the device that you are working on. So we are currently at the edge of this type of technology, and from one month to the next everything can change. Also, the kind of models currently produced in AI research are not really matched to those hardware and cost requirements. I don't know if I was clear with this explanation, but these are not only AI problems: if you really want to downscale to practical conditions and also to low costs, you have to find other ways. And one way is to open up the black box of the AI models and modify something inside, and this can be done by taking inspiration from what happens in some biological systems. So, in doing this downscaling, let me say that we are in some way going backwards from AI to cybernetics, where you had to emulate some parts of a biological system to find efficient ways to solve the problems. And to do this, it is absolutely crucial to work within a multi-competence, let's say multidisciplinary, context, and so to have collaborative tools that support conversation between a biologist, a physicist, a mathematician and other people who may come with solutions even from unexpected domains. So I hope this quick explanation of mine was clear for you.

Exactly what I was referring to, Gianpaolo. This sort of scenario, and I didn't realise we were doing it, kind of unpicks AI back to cybernetics, doesn't it, like you said.
And I think sometimes, to try and break outside the box, like you're saying, you need to be reflective and multidisciplinary to produce something capable of doing something cutting-edge, and that sort of collaboration and collaborative thinking assists that process, doesn't it, which really echoes what the objectives of the whole forum are about, really: bringing people together and trying to help everybody and everything.

Yes, these are not problems that one person, or one group of people with the same expertise, can solve.