Okay, welcome, and thank you for joining me for this lightning talk at the Open Source Summit in Bilbao. My name is Marco Gonzales-Dierro, and I am the head of ICT at Ikerlan. In these five minutes I am going to explain how we are using AI-enabled cloud-continuum technologies, both in industry and in research.

Ikerlan is a multi-sectorial research center focused on taking knowledge from research to industry: applying the technologies we are researching in our projects to the real world. In my department, much of this research focuses on artificial intelligence, which is driving a revolution in industry right now. Many companies are integrating AI algorithms into their own products and services to increase their competitiveness and added value. We combine that with edge-to-cloud continuum technologies, taking advantage of the strong points the edge-to-cloud offers us: for example, increased privacy and reduced latency, which is very important in industrial use cases, especially when we are talking about critical industries.

When we want to take these AI models from research to the real world, we have to add a few more ingredients: MLOps, to manage the whole lifecycle of the AI; collaborative AI, to take advantage of the edge computing resources already deployed in industry; and data spaces, a trending topic in industry right now, focused on how different partners share data inside a common data space.

How are we doing that? With open source. We use open source components as building blocks, and then we develop customizations for our clients as the glue that offers them what they need.
Open source is very important for us because it allows us not to reinvent the wheel, and it offers interoperability out of the box and also vendor neutrality, which matters a lot in European industry right now. First I am going to present an industrial use case we have implemented in the real world, and then I will explain how we are shaping the future in a European research project.

Mondragon Assembly is an international reference for automation and assembly solutions, with clients all over the world and solutions installed at many industrial customers, in the automotive space, et cetera. With Mondragon Assembly we have developed several AI solutions in the past, related to predictive maintenance, defect detection, image and video recognition, et cetera. And we want to take these AI models, as I said before, from research to the real world.

When you implement these models in industry, some problems arise. For example, how can you train a big, common model that takes in information from multiple locations without sending the actual data to the cloud? Here we run into data sovereignty: when you can, and when you cannot, send data to the cloud. For this we used federated learning, a decentralized ML approach where all the models are trained locally at the edge, without sending data to the cloud. With this approach we can build a global model in the cloud that holds all the knowledge of the local edge nodes, yet the data always stays in the factory. This is very interesting for companies that deploy solutions at many different clients: the data belongs to the client and never leaves the factory, but we can still build a common model in the cloud with the knowledge of all the edge locations. We are doing this with several open source components such as Flower, MLflow, et cetera.
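In practice a library like Flower handles the orchestration, but the core idea of federated averaging can be sketched in a few lines of plain Python. Everything below is illustrative, not the speaker's actual pipeline: a toy linear model is trained locally at two "factories," and the cloud only ever sees model weights, never raw data.

```python
# Minimal sketch of federated averaging, the idea behind tools like Flower.
# Each factory trains locally; only weights travel to the cloud, never data.

def local_train(weights, local_data, lr=0.1):
    """One gradient step of local training on a single edge node (toy model y = w0 + w1*x)."""
    w0, w1 = weights
    g0 = g1 = 0.0
    for x, y in local_data:
        err = (w0 + w1 * x) - y
        g0 += err
        g1 += err * x
    n = len(local_data)
    return [w0 - lr * g0 / n, w1 - lr * g1 / n]

def federated_average(client_weights, client_sizes):
    """Cloud-side aggregation: average the weights, weighted by local dataset size."""
    total = sum(client_sizes)
    dims = len(client_weights[0])
    return [
        sum(w[d] * s for w, s in zip(client_weights, client_sizes)) / total
        for d in range(dims)
    ]

# Two factories with private data that never leaves the site (roughly y = 2x)
factory_a = [(1.0, 2.0), (2.0, 4.0)]
factory_b = [(1.0, 2.2), (3.0, 6.1), (4.0, 8.3)]

global_model = [0.0, 0.0]
for _ in range(200):  # federated rounds: local step, then cloud aggregation
    updates = [local_train(global_model, d) for d in (factory_a, factory_b)]
    global_model = federated_average(updates, [len(factory_a), len(factory_b)])

print(f"global slope learned without sharing raw data: {global_model[1]:.2f}")
```

The weighting by dataset size is what makes the fixed point of this loop coincide with training on the pooled data, even though the data is never pooled.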
But as I said before, it is very important to have a platform that lets us manage the whole lifecycle of the AI models: managing the training, running the federated learning that produces the common model in the cloud, and deploying those models to the edge locations and updating them. It is also very important to detect when a model is no longer accurate so we can force a retraining; we do that with concept-drift detection. All of this is integrated in a platform that can be integrated with the client, so the client can handle all these situations and deploy these models to the real world, to real industries. We use a lot of open source components and then develop the glue to integrate them all, but having this platform is essential to handle real-world situations.

And how are we shaping the future? In the Horizon Europe research project SovereignEdge.Cognit, we are building a cognitive serverless framework for the cloud-edge continuum. What does this mean? We are building a fully European, open source function-as-a-service framework. What are we trying to do? We are trying to empower IoT and edge devices. Normally, in a function-as-a-service framework, all the functions live in the cloud and you choose where to deploy the serverless task. In this framework we want to do more or less the opposite: the IoT or edge device can choose whether or not to offload a task to the edge-to-cloud continuum. Imagine, for example, a drone that is low on battery: instead of performing image detection locally on the drone, it can decide, "edge-to-cloud continuum, please help me and do this for me." The IoT device is smart enough to ask the edge-to-cloud continuum for resources and offload tasks to it.
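The device-side decision in the drone example can be sketched like this. The names, thresholds, and API below are hypothetical, not the actual COGNIT interfaces; the point is only that the device itself, not the cloud, decides where the task runs.

```python
# Hypothetical sketch of a device-side offloading decision in an
# edge-to-cloud continuum: the device chooses local execution or offload.

from dataclasses import dataclass

@dataclass
class DeviceState:
    battery_pct: float        # remaining battery
    local_runtime_ms: float   # estimated time to run the task on-device
    uplink_latency_ms: float  # round trip to the nearest edge node

def choose_placement(state: DeviceState,
                     deadline_ms: float,
                     min_battery_pct: float = 20.0) -> str:
    """Return 'local' or 'offload' for a single task."""
    # Low battery: prefer the continuum if it can still meet the deadline.
    if state.battery_pct < min_battery_pct and state.uplink_latency_ms < deadline_ms:
        return "offload"
    # Otherwise run locally when the device itself can meet the deadline.
    if state.local_runtime_ms <= deadline_ms:
        return "local"
    # Device too slow for this deadline: ask the continuum for help.
    return "offload"

# The drone from the talk: low on battery, so image detection is offloaded.
drone = DeviceState(battery_pct=12.0, local_runtime_ms=400.0, uplink_latency_ms=80.0)
print(choose_placement(drone, deadline_ms=250.0))  # → offload
```

A real framework would fold in many more signals (edge node load, pricing, data locality), but the inversion of control is the same: the request originates at the device.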
So we are building a new framework that offers more flexibility and enables smarter IoT devices that can easily offload tasks to the edge-to-cloud continuum. Here we are working with several research centers like RISE, universities such as Umeå, and also a lot of open source companies like OpenNebula, SUSE, et cetera. We are using the components you can see on the right to build this, and we are also releasing all of our software as open source. So with this, let's keep building and sharing everything we make as open source, to keep developing the edge-to-cloud continuum. That's it; I don't know if you have any questions.

Okay, as I said before, with federated learning you normally have multiple clients training the models locally. So in this glue we are developing, we have to ensure that all the data has quality and is trustworthy. Depending on the client, we have multiple ways to trust the data. That is a problem, of course, but normally we address it with data signatures on the data being sent; ultimately you have to trust your data to enable this kind of algorithm. Because all the edge locations are controlled by the same company that is running the federated learning, you can trust them. Yes, it is one organization that owns the cloud and the edge devices, but those devices are located at other clients. That is where data sovereignty becomes a problem, but since the devices belong to the same company, you can trust your devices.

Okay, as I explained, we are using Flower as the library. If you want more information about the actual algorithms, you can get in contact with me and talk with our federated learning engineers, who can explain more about that. Yes, we have a couple of models that we are training with federated learning. One is defect detection with image recognition.
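The talk mentions data signatures as one way to trust what the edge sends; the exact mechanism is not described, so the sketch below is an assumption: each device signs its update with a pre-shared HMAC key, and the aggregator verifies the signature before using the update.

```python
# Illustrative sketch of signing edge updates so an aggregator can check
# they come from a known device and were not tampered with in transit.
# The use of HMAC with per-device pre-shared keys is an assumption, not
# the mechanism described in the talk.

import hashlib
import hmac
import json

DEVICE_KEYS = {"edge-01": b"secret-key-edge-01"}  # provisioned out of band

def sign_update(device_id: str, payload: dict) -> dict:
    """Device side: attach an HMAC-SHA256 signature to an update."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(DEVICE_KEYS[device_id], body, hashlib.sha256).hexdigest()
    return {"device_id": device_id, "payload": payload, "signature": sig}

def verify_update(message: dict) -> bool:
    """Aggregator side: reject unknown devices and tampered payloads."""
    key = DEVICE_KEYS.get(message["device_id"])
    if key is None:
        return False  # unknown device: drop before aggregation
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["signature"])

msg = sign_update("edge-01", {"round": 3, "weights": [0.12, -0.5]})
print(verify_update(msg))           # True
msg["payload"]["weights"][0] = 9.9  # tampered in transit
print(verify_update(msg))           # False
```

Since all edge nodes here belong to one organization, symmetric keys are workable; across organizations a data space would typically use certificates and asymmetric signatures instead.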
We have cameras at the edge, like x-rays, looking at different metals, and from the images we can detect defects, such as when the metal is cracking or under tension. And the first use case we implemented uses time series of the data that the machines collect, in this case rolling machines; we have all the axis information of those machines and can train different models with it.

Yes, we belong to the Mondragon Corporation, which is one of the biggest industrial groups here in Spain, with more than 200 cooperatives inside it. And yes, on the one hand this platform can be used for different use cases. But we are also planning to implement a shared data space where all the companies that want to take part can share information and data, but also models. As you can see in the photo, we have an open source connector based on the Hofer connector, and with that we can share with all the companies both the data and the models, because some models can be reused in other use cases with some modifications. Both the data and the models can be integrated in the shared data space, yes.

Okay, so thank you so much.