Allow me to introduce our next speaker, the second of today. Traditional machine learning is all about gathering data into a central repository and then analyzing it and learning from it. But the collection of vast amounts of data has its problems, privacy and storage costs being the two obvious ones. Which is why Google is spearheading what it calls federated machine learning, pushing the storage and analysis to local devices that then collaborate with other devices. To discover how it works and the theoretical concepts behind this new learning mode, our next speaker will show how to implement a system based on federated learning using TensorFlow. He is the technical lead on artificial intelligence and advanced analytics at Capgemini. Let's welcome Moises Martínez. Moises, welcome. How are you? I'm very happy to see you. With all these names, Martínez, O'Connor, I thought maybe he could be your brother. It's lovely to have you here, Moises. We're looking forward to listening to you. Whenever you're ready. I remind our audience that they can ask you questions at the end. So looking forward to that, Moises. Thank you. First of all, it's a pleasure to be here. I don't know if the focus is ready, if the slide is ready. Okay. We are going to talk about federated machine learning in TensorFlow. But first of all, I'm going to introduce myself. This is me. I have a PhD in computer science and artificial intelligence, and I work in big data and AI. I have been a researcher at different universities, and a lecturer at some of them, in Spain. I'm also a tech-fest organizer and a GDE (Google Developer Expert) in machine learning. These are my social networks, if you have any questions after the talk or want to talk about anything related to artificial intelligence or machine learning. Okay, the first thing is to present the talk. This is the agenda that we are going to follow.
We are going to talk about federated machine learning in order to answer four questions. The first one is: why federated machine learning? What is it based on, and which technologies made federated machine learning appear? How does it work, at a theoretical level? And finally, how can we develop a basic federated machine learning system using TensorFlow? Okay. The first question is maybe the easy one: why is federated machine learning useful, and why is it appearing? This is the big question, and this is the big answer. Currently, we have billions of devices that are generating data all the time. All of these devices are sending data to the cloud, over the internet, or over local networks. And most of this information is not used for anything. Or, in the good case, it is collected in a central repository in order to create, for example, machine learning models to improve some of the features of the systems that we build in software. But the important thing regarding federated machine learning is: what is the way to use all of the data that these devices are generating, and how can we use the devices themselves to simplify the process or to build better systems, in this case better machine learning systems? To try to understand all of this, we are going to take a small journey from centralized machine learning to federated, in order to understand why federated machine learning solves some of the problems that we face with centralized and other kinds of machine learning processes. Okay. Imagine that we are going to build a machine learning process. Of course, we are going to do it in the cloud, which is the most common thing currently. And we are going to use a lot of data that we have in the cloud. All of the data has been collected over days, months, or maybe years, and was stored there in the cloud.
Then we have to deploy something like this, a typical centralized machine learning process in which we are going to split the data into different sets: training, validation, and test. We are going to choose a machine learning algorithm, and we are going to define or configure some of the parameters of this algorithm. The basic ones are the optimization algorithm for the training process and the loss function, in order to measure how the training process is doing. And if we need to, we are going to set some hyperparameters. This is the basic thing that we have to do if we are going to create a typical machine learning process. And finally, if everything goes well, we are going to generate a model, this model that we have here. All of this is done in the cloud. Once we have created this model, the thing that we have to do is deploy the model, generating a process for serving. This is the most common thing, what we call inference, in which we send a request to the model and get a prediction, of course, because we are talking about machine learning models. Then we are going to have some devices, for example a tablet, a mobile phone, and... Moises, perdona. Moises, sorry. Can you hear me and see me? I have the feeling that we cannot see your presentation. Oh, but... We were stuck on the first slide. Yeah. Okay. Then I'll try it. Okay. Let me see. Yes. Sorry. The first slide where we have your picture, and then suddenly when you said "the model that you can see here," I realized... Yeah. I was mainly looking at you, so then I realized that there's not... I tried. We haven't seen any slides. So if you could wrap up what you've said just quickly so we can take a look at the slides. Yeah. I'm moving my slides, but it's not working. It's not working? Yeah. No. I don't see it, but let me check.
Otherwise, we're going to be sorting it out, so bear in mind that we're not seeing them for the time being. Okay. Maybe when we sort it out, we'll... Are you sharing the screen? No problem? Yeah. Okay. Are you sharing? He's sharing. No problem. Okay. We're sorting it out as we speak. So bear that in mind; once we can see it, I'll let you know. If you'd rather continue so we don't waste time... Yeah, I'll describe the slides and the examples so that you can follow. Okay. Okay. I'm going to ask them to put you on the big screen, and as soon as we can see it, I'll let you know just by voice. You won't see me, but you'll hear me, okay? Okay. Just continue and I'll let you know. They're sorting it out. Okay. Thank you, Moises. Sorry about that. Don't worry. Okay. Then we're in this situation in which... okay, it's not working again. Okay. Moises, they're telling me that we're going to move the slides ourselves. I know it's not very convenient, but when you need the slide to move, just say "next slide" and we'll do it from here. We're going to see you on the big screen, and our colleague from the technical team will advance the slides. Okay. They have loaded the presentation that I sent, I suppose. Yeah, I suppose. Yes. We're going to go into your presentation now; tell us if you see it properly and if that's okay. Yeah. Go ahead, guys. Okay. Here we are. Sorry about that, Moises. These things happen. You know how it goes. It's okay. Don't worry. We'll have time for you to finish. No problem. I apologize to the audience as well. Take advantage to keep sending your questions. You see? There's always an upside. Okay, let's go. Moises, I think they're telling me that now it should be working. So go ahead. Can we see him on the big screen with his presentation next to him? Okay. Can you see it? Yeah, I can see it.
About me. Let's go to the next one, guys. Is this okay? Yeah. We can start from the beginning. Yes. Sure. Go ahead. Sorry about that, Moises. Next one, please. Next one. Okay. Then the first question that we have to answer is why federated machine learning can be useful. Next, please. Okay. The answer is that we currently have billions of devices that are generating data all the time, and all of these devices are creating new information that can be used to build the machine learning models that we are going to create. But the important question here is: are we really using all of this information? No. The real question, and the answer... next, please... is how we can use all of this data and these devices in order to improve the systems that we are building with artificial intelligence, in this case machine learning. Next, please. Okay. Then the first thing that we are going to do is take a small journey from centralized machine learning to federated, in order to understand why federated is a better option than the others and why it works better. Okay. Next. Okay. Imagine that we are going to create a machine learning model in the cloud. We are going to use a lot of data that we have been collecting over days, weeks, months, or maybe years, and we are going to use it to build a model for our clients. Next, please. Okay. Then we are going to create something like this. This is the typical infrastructure that we have to build to create a machine learning model, in which we have a splitting process to generate the different sets that we need for the machine learning process. We need a machine learning algorithm, which we are going to configure in different ways. In this case, the basic ones are the loss function, to analyze how the training process is going.
And the second one is the optimization algorithm, in order to improve the training process over the different iterations. And then, if we need to, we are going to configure the different hyperparameters. Okay. If we have all of these things, we can execute the training process, and finally we are going to get a model. Okay. We have done the most difficult part, which is creating the model, but now we have to give our clients the opportunity to use that model. For that reason, we are going to deploy the model in a serving system in order to perform the inference. Next, please. Next. Okay. Inference is the process in which we get requests from the clients and generate a prediction. Then, once we have the training process and the serving process that executes the inference, we can offer this model to our clients. Next, please. Okay. Imagine that we have different clients, in this case a mobile phone, a Roomba, or, for example, a tablet. Each of these is going to make requests to the model... next, please... in order to get a prediction. This prediction is the value that the model generates using the information in the database. Okay. It can even be a bit more complex if the training process that we are building is an incremental one. Next, please. In this case, we have to send information from the clients and collect it, for example using an ingestion system, to increase the information, the data, that we are going to use for the training. Okay. But when we create this kind of machine learning process, we have to face three main problems. We can say these are major situations. Next, please. Okay. The first one is computation. When we use centralized machine learning, we have to face some situations in the computation area. The first one is the latency of prediction. We are putting a serving system on the internet, and the clients have to send information and get a prediction back.
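The pipeline just described (split the data, pick a loss function and an optimization algorithm, train, then serve predictions to clients) can be sketched in plain Python. This is a toy illustration, not TensorFlow code; the `train` and `serve` names and the 1-parameter model are invented for the example:

```python
# Minimal sketch of a centralized ML pipeline: train in the "cloud" with a
# loss function and an SGD optimizer, then serve predictions to clients.
# Toy 1-parameter model y = w * x; everything here is illustrative.

def train(data, lr=0.05, epochs=200):
    """SGD on the squared-error loss for the model y = w * x."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # derivative of (w*x - y)**2 w.r.t. w
            w -= lr * grad
    return w

def serve(model_w, request_x):
    """Inference: a client sends a request and gets a prediction back."""
    return model_w * request_x

# Data collected over days/months into a central repository (here: y = 3x).
dataset = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = train(dataset)
print(round(serve(w, 4.0), 2))   # prediction for a client request
```

Every prediction in this setup requires a round trip to the server, which is exactly the latency problem discussed next.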
Depending on the number of servers on which we deploy our machine learning model and the number of clients, the latency of this prediction process can be high. We have to spend time on data collection, because we have to collect all the data, and we have to define all the infrastructure that is necessary to do all of this. And finally, we have to store all the data, which in some cases can be billions of records: gigabytes, terabytes, or petabytes in the worst case. And this can be a problem, depending on what we are doing. Next, please. The second thing that we have to face is sensitive information. When we create a system for clients, for humans who are using this system, we are sending information which most of the time can be classified as sensitive data. Why? Because we are sending pictures, we are sending personal information, and this information is used for prediction and for training, which can be a problem, because we don't want a picture sitting on some server on the internet waiting to be used in a machine learning process. And finally, the last thing that we have to face... next, please... is infrastructure. Okay, everything that we put in the cloud has a cost. We have to pay for computation and for storage. And the model that we are deploying, that we are serving, is an online model, which is a problem, because only if the client has an internet connection can it use the service that we are offering. These are the typical problems that we have to face when we use centralized machine learning. Then the question here... next, please... is: can we improve all of this? Okay, we can try to run the inference, that part of the serving, on the device, which is something that has started to be done in the last few years. Next, please. Okay, imagine the previous situation.
We have the training process in the cloud, the data in the cloud, and now we are going to run the inference process inside the device. In this case a tablet, but it can be a mobile phone, a robot, a car, anything that can run a machine learning model. Okay, in this case the training process generates the model and sends it to the device, and the device runs the prediction, the inference, locally. This is a good point, because now we don't need to send all the requests to the server or to the cloud, and this improves some things. But sometimes this is not possible, because depending on how the model has been developed or built, it may not be possible to execute the inference process on the device. For this reason, there are some devices, which we can see here in the corner, that allow us to execute this machine learning process on small devices. And now there are many computers that are starting to include NPU processors that allow us to run machine learning models. If we use what we can call partial-edge machine learning, we have to face again some of the problems that we saw before in the traditional, centralized model. Next, please. The first one is computation. Now we have solved some of the problems: the latency. We don't have latency, because the model is executed locally. But we still spend time on data collection and on processing, if we need it. And we still need space for data storage, because we have to store all the information centrally. Next, please. And in the case of privacy, now we improve a bit, because we are not sending sensitive data for prediction, because the model is on-site, it's on the device. But we are still sending sensitive data for training, which is still a problem, because this is most of the information that we are sending all the time, maybe 90% or 95% of it.
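The partial-edge setup can be sketched the same way: training stays in the cloud, the device pulls the model when it has connectivity, and every prediction afterwards runs locally with no network round trip. The `Device` class and the cloud-model dictionary are invented for the illustration:

```python
# Sketch of partial-edge ML: training stays in the cloud, but the device
# downloads the model once and runs inference locally on each request.

CLOUD_MODEL = {"w": 3.0}   # trained centrally, e.g. y = w * x (illustrative)

class Device:
    def __init__(self):
        self.local_model = None

    def refresh_model(self):
        """Needs connectivity: pull the latest model from the cloud."""
        self.local_model = dict(CLOUD_MODEL)

    def predict(self, x):
        """Runs fully on-device: the request never leaves the device."""
        return self.local_model["w"] * x

tablet = Device()
tablet.refresh_model()       # one download instead of one request per prediction
print(tablet.predict(2.0))   # computed locally, offline
```

Note that `refresh_model` is the one step that still needs the internet: whenever the cloud retrains, the device must download the new model again.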
And finally... next, please... in the case of infrastructure, we still pay for computation for the training, which is the most expensive part of the machine learning process, and for storage. But in this case we have a good point: now we can work offline. Not always, because we need to download the model from the cloud, and if the model changes, because it has performed another training step, we have to refresh the model locally. So at least we need an internet connection from time to time, when we are going to update the model. Okay, this is a good solution if we have to keep more privacy and reduce some of the problems that we have with prediction. But this is not enough. Next. Now we are going to see whether we can put the training process on the device as well. This is possible: we can try to generate our machine learning model on the device and serve it on the device for local requests. Okay, next, please. Again, imagine this process. We have a tablet, for example. In this case we have the training process inside the device, and we have the local model that is generated on the device and used for inference on the device. This is a good solution if our device is powerful enough to generate the machine learning model. This way of doing things is something we can call full-edge machine learning. But again, we have to face the same three situations and problems. Next, please. For the first case, computation: okay, there is no latency for prediction; we fixed that in the previous one, the partial-edge machine learning process. But now we don't have global storage in the cloud. We only have local storage, because we are only collecting the information that is generated by the device. Next, please.
Now we are not sending sensitive data for prediction or for training, which is a good point, because now we are not sending the information, and it is not going to be collected by any company or institution in order to create the machine learning model. And finally... next, please... in the case of infrastructure, we are going to use the local resources of our device for computation and storage, and everything works offline. In this case, the model is generated on the device and we don't need to connect to the internet to get a new model. Okay, then it looks perfect. It's the perfect situation. But this way of creating machine learning models has a problem. Next, please. We don't have enough information to generate a really good model. And besides, we don't have any information from the other devices. This means that the data that we are going to use to create our models is not going to be enough to create good machine learning models. So the solution is good for privacy, for computation, and maybe for infrastructure, but it's not really good for the real goal of machine learning, because we are not going to generate a good model. Next, please. Okay, then how can we train all together, with all the devices and all the information that they are generating, while keeping privacy, which is maybe the most important thing in this situation? Next, please. Okay, the solution is what is known as federated machine learning. This new way to create machine learning models tries to solve most of the problems that we saw in the previous machine learning infrastructures. Next, please. Okay, imagine this situation. We have again a cloud environment, and we have different clients. We are going to focus on only one for the moment, and then we'll see what happens with the rest.
Okay, in the cloud we are going to have the system to create an initial model, which we see here in this small blue box, green box, sorry, in which we have the initial model. This initial model is sent to the client, to every client, as we see. Okay, and next we are going to start a training process on the device. Yes, we are going to take the initial model and improve it by means of a training process using local information, only the local information, and we are going to create a local model. Okay, now we are running the training process on the device and we are creating the model inside the device, like before, but now we are going to change one thing. Next. When this process is finished, the new model, the local model, is sent to the cloud again, and by means of an aggregation process, with some internal information about the model, a global model is created. Okay, if we include more devices... next, please... next... we are going to do exactly the same with many devices, all the devices that we want. And doing this is really simple: we send the initial model to all the devices, and all of them create a local model. These local models are sent to the cloud, and by means of this aggregation process they create a global model, which becomes the next initial model for the next training step. Next, please. Then this global model, after one training step, becomes the new initial model that is sent to the different devices.
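The round just described (send the initial model to every client, train locally on private data, send only the local models back, aggregate them into a new global model) is the idea behind federated averaging. A plain-Python sketch with a toy 1-parameter model; the function names and the weighting scheme shown here are one common choice, not the only one:

```python
# One federated averaging round, sketched in plain Python. Each client
# improves the global model on its private data and returns only its
# updated weight; the server never sees the raw data.

def local_train(w, private_data, lr=0.05, epochs=50):
    """One client's local training (SGD on squared error for y = w * x)."""
    for _ in range(epochs):
        for x, y in private_data:
            w -= lr * 2 * (w * x - y) * x
    return w

def aggregate(local_models, sample_counts):
    """Server-side aggregation: average weighted by each client's data size."""
    total = sum(sample_counts)
    return sum(w * n for w, n in zip(local_models, sample_counts)) / total

clients = [                      # private datasets, all roughly y = 3x
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0)],
]
global_w = 0.0                   # initial model created on the server
for _ in range(3):               # a few federated rounds
    local_ws = [local_train(global_w, d) for d in clients]
    global_w = aggregate(local_ws, [len(d) for d in clients])
print(round(global_w, 2))        # the global model after aggregation
```

The key property to notice is that `aggregate` only ever touches model weights and sample counts, never the `(x, y)` pairs themselves.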
This means that the whole training process is done on the different devices, except for the aggregation, which is the process that mixes the models to create a global one. And using this, we keep privacy, of course, because all the information is used inside the device; no raw information is sent to the cloud, and only the model, the local model of each device, is sent to the cloud, where they are combined, by means of some specific algorithms, to create a global model. Good. Now we have found a way to keep privacy and use the computational resources of the different devices to create a really complex model. Next, please. Okay, but the question here is: how can we do this in TensorFlow? As everybody knows, TensorFlow is the framework for creating machine learning algorithms, mostly based on neural networks, in which we can create traditional machine learning algorithms like linear regression, as well as the deep learning algorithms that we have started to use in recent years in order to create really, really good models. And now we can also create federated machine learning algorithms. Next, please. And how does it work? Okay, TensorFlow now offers a new extension called TensorFlow Federated (TFF) that offers a group of tools for federated learning. These tools are composed of different pieces which combine the power of TensorFlow, the way in which we usually created models before, with federated communication, to use the information from the clients, join the models, and produce the aggregation process that we saw before. Besides, it includes libraries of federated algorithms; there are different algorithms that can be used right now to run this federated machine learning process. And we have runtimes and datasets for experimentation, in order to learn how it works.
If you want more information about all of this, you can visit the two web pages shown here, in order to see how federated machine learning works internally and what the code looks like. Next, please. But now we are going to see how TensorFlow Federated is organized. This is the basic structure of the TensorFlow Federated system, which is based on two elements: the Federated Learning (FL) API and the Federated Core (FC) API. The first one, Federated Learning, is the most commonly used one, because it's the one that most data scientists and machine learning engineers are going to use. It offers high-level interfaces to use federated algorithms. These are in the tff.learning part of the library, and we can reuse the models that we previously created with TensorFlow. This is one of the most important things, because we don't need to create new models or throw away the previous ones to use this new technology; we can take the models we built before and run federated machine learning on top of them. The other part is the Federated Core, in which there are lower-level classes to create new federated machine learning algorithms. If you want, you can create your own federated machine learning algorithm once you understand how the client and server processes work, to send the information and to aggregate the models. This system allows us to execute simulations, in order to simulate the situation that we are going to face in a federated machine learning environment, in which we have different clients and a server that is collecting all the information. If you want to use these libraries in your code, you have to install the Python library, which can be installed using pip (pip install tensorflow-federated). Next, please. But now we are going to try to understand how to create a federated machine learning process with TensorFlow. These are the basic parts of this process. Of course, there are more.
It's not possible to show all of them during this talk, but there is sample code in which you can see the full process and many variations. But these are the basic parts that we need. The first one is to generate a TensorFlow model. This is the basic thing: we are going to use a previous model, or we are going to build a new model using the usual TensorFlow tools. After that, we are going to generate the TensorFlow Federated model by means of a wrapper. Once we have all of this, we can start creating the TFF process, using a federated dataset. This dataset has to be generated from the different clients' information and has to be consumed on the client side. Finally, when we have the TFF process and the datasets, we can start the training process. We have to initialize it, because this process is a bit different from the common machine learning process. After that, we can execute the different training steps over a number of iterations. How can we do this in TensorFlow? Next, please. Next. If you have used TensorFlow or Keras before, this is the simplest thing to do. I'm going to create a sequential neural network with three layers: an input layer, a dense layer, and a softmax layer. This is my basic model. As you see, it's the typical model that we would create if we were using centralized machine learning. That is the good point of TensorFlow Federated. Next, please. Now we are going to create the TensorFlow Federated model, and it's really simple. As you can see, we are going to create the model using the function that we defined before, and we are going to configure the federated TFF learning model by means of a Keras model. We are going to use the Keras model. Next, please. This is the traditional model that we usually use when we create a machine learning model. Next, please. We are going to define the input data spec.
This is the shape of the information from the federated dataset, the information that we are going to collect from the different clients. Next, we are going to define the loss function that we saw at the beginning. In this case, we are going to use sparse categorical cross-entropy. Similarly, we are going to define the evaluation metric. In this case, we are going to use sparse categorical accuracy. But you can put here any of the metrics or loss functions that are available in Keras. Next, please. Once we have created the TensorFlow Federated model, we have to create the learning process. In this case, to create the learning process, we use the federated averaging process. This is the process that we are going to execute to collect the information from the clients and to join the different models. The first thing that we have to provide is the model. Next, please. This is the model that we defined previously, the one we wrapped to create the TensorFlow Federated model. And we have to specify the optimization algorithms that we are going to use for the clients and for the server. In this case, we are going to use the most common one, SGD, but you can use any optimization algorithm that you want and define the different hyperparameters that you need. This is the most interesting part, because you can use a different algorithm for your server and a different algorithm for your clients, and this can be important sometimes, when you are creating a really complex model. Next. We are ready. Now we can start the training process, and the first thing that we have to do is initialize the federated machine learning process. When we do this, we get the state of the system. In federated machine learning, the state is really important, because we are going to collect information from the different clients, the different devices, in the form of models.
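In the TFF tutorials of that era, this step looks roughly like `tff.learning.build_federated_averaging_process(model_fn, client_optimizer_fn=..., server_optimizer_fn=...)` (check the current docs, as the API has evolved). The reason two optimizers exist can be sketched in plain Python: clients take SGD steps on their own data, while the server applies the averaged update with its own learning rate. The names below are invented for the illustration:

```python
# Why client and server can use different optimizers: clients run SGD on
# their private data; the server moves the global model toward the average
# client model with its *own* learning rate (server-side SGD).

def client_update(w, data, client_lr=0.05, epochs=20):
    """Client optimizer: SGD on squared error for the toy model y = w * x."""
    for _ in range(epochs):
        for x, y in data:
            w -= client_lr * 2 * (w * x - y) * x
    return w

def server_update(global_w, client_ws, server_lr=1.0):
    """Server optimizer: step toward the average client model.
    server_lr=1.0 reproduces plain federated averaging."""
    avg = sum(client_ws) / len(client_ws)
    return global_w + server_lr * (avg - global_w)

clients = [[(1.0, 3.0)], [(2.0, 6.0)]]   # one private dataset per client
w = 0.0                                  # initial global model
for _ in range(5):                       # a few federated rounds
    w = server_update(w, [client_update(w, d) for d in clients])
print(round(w, 2))
```

A smaller `server_lr` makes the global model more conservative about each round's updates, which is one reason tuning the two sides separately can matter for complex models.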
And these models have to be aggregated with the global model to generate the full model that we are building from the other models. The state is important because there are different stages through which each iteration has to move: collecting data, aggregating information, and generating the final model. Next, please. Then, once we have initialized the process, we can start the training. The function that we can see here, the next function, is the training step. We have to pass it the state of the system and the federated training data. Next, please. This is the federated dataset that we mentioned before. There is a specific way to create this dataset for the clients in federated machine learning, because this information can live on the different devices, or can be on the server and be sent to the devices in order to run the process. In the case of a simulation, all the information is on the server, of course, and is sent to the clients in order to generate the models. From this call we get back the state of the process and the metrics. The metrics we defined at the beginning let us analyze how our machine learning process is going. Next, please. Then, when we have all of this, we can define the final training loop, which is really, really simple, because we only have to write a loop over the number of iterations. In this case, we start at the second step, because the first step was already done at the beginning, and we execute the next function. After this loop, we will have finished the training process by means of federated machine learning. This is the way to generate a machine learning model using federated machine learning. When you finish the training process, you can use the different tools that you commonly use with TensorFlow, like TensorBoard, in order to analyze what is happening in your system.
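The initialize/next pattern (in TFF, `state = process.initialize()` followed by repeated `state, metrics = process.next(state, federated_train_data)` calls) can be mirrored with a toy iterative process in plain Python; the class and everything inside it are invented for illustration:

```python
# Toy version of a TFF-style iterative process: an explicit `state` is
# threaded through `initialize()` and repeated `next()` calls, one per round.

class IterativeProcess:
    def initialize(self):
        return {"round": 0, "w": 0.0}            # initial server state

    def next(self, state, federated_data):
        """One federated round: clients train locally, the server averages."""
        local_ws = [self._local_train(state["w"], d) for d in federated_data]
        new_w = sum(local_ws) / len(local_ws)
        loss = sum((new_w * x - y) ** 2
                   for d in federated_data for x, y in d)
        return {"round": state["round"] + 1, "w": new_w}, {"loss": loss}

    @staticmethod
    def _local_train(w, data, lr=0.05, epochs=20):
        for _ in range(epochs):
            for x, y in data:
                w -= lr * 2 * (w * x - y) * x
        return w

process = IterativeProcess()
state = process.initialize()
federated_train_data = [[(1.0, 3.0)], [(2.0, 6.0)]]   # one dataset per client
for round_num in range(1, 5):                         # the training loop
    state, metrics = process.next(state, federated_train_data)
    # metrics["loss"] could be logged per round, e.g. to TensorBoard
print(state["round"], round(state["w"], 3))
```

Passing the state in and getting a new state back, instead of mutating a model in place, is what makes each round's aggregation explicit.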
Using, for example, the metrics information, which allows us to see the progress of the machine learning process over the different steps. Then this is how to do all of this. This is the way to create a machine learning process by means of federated machine learning, and this is all for now. If you have more questions, you can go to the web pages that Google and TensorFlow offer, which have a lot of information about how to do this, and try it for yourself: try to create your own machine learning process by means of a federated system. And that's all. If you have any other questions, please. Next. Thank you so much, Moises. No wonder you are a PhD cum laude in AI and robotics and many other things. My God, I think my head is going to explode. Fantastic explanation. You made it seem so easy, the way you were telling us to just do this and that. We have some questions. We don't have much time though, Moises. So let me ask you the first question, from Pablo. He's asking in Spanish. Shall I read it in Spanish for you? Sure. You can answer either in English or in Spanish, whatever you prefer. [In Spanish] Can't this affect the performance of the devices, causing a malfunction for the user? [Answering in Spanish] Yes, of course, it could cause poor performance while you execute the process. But you have to keep in mind that the process is going to be very lightweight, since you are not going to execute it with much information. It's not the case that you use a huge dataset, as is done today. Yes. They are also asking you, in relation to... well, more or less the same: how do you send the data to the cloud so as to preserve your privacy and not clog up the network? Okay. In this case, you are not sending your data to the cloud; you are only sending the model. The training runs on the local device.
So you are doing a local training process, and when you finish the local training process you send the model. It's like... imagine that you are creating a traditional machine learning process. After each iteration you analyze how the process is going, and you compute, for example, the gradients to try to improve the neural network, in this case. Here you are not doing that: you send the model to the server, and only the model, only the information inside the model. For example, imagine that you are using 1,000 pictures on your device. You don't need to send the 1,000 pictures, only the small model that you have created using those pictures. Moises, is there a way you can give us an example of costs? For example, you were mentioning that when you transfer your TF model you can use a previous one or a new one. Does it change much in terms of costs, or how does this work? Not really, because you are sending only the model, which is a small piece of information. It's like, I don't know, 100 kilobytes, 200 kilobytes, 10 megabytes, I don't know. It's small compared to the pictures. Imagine that you have a camera that is collecting high-resolution pictures, each one taking 4 megabytes, and you are collecting 1 million pictures. If you send 1 million pictures, you are sending terabytes of information. But in this way you are not sending those terabytes; you are only sending the models that you create in each iteration. So you decrease the cost of sending the information, and also the cost of taking that information, analyzing it, transforming it, storing it and, after that, training. You decrease the cost of everything. Moises, it seems that there are only advantages then. Yeah, wow. It is not all advantages, of course. You are not going to get exactly the same performance as a traditional machine learning process with all the data in one place. But, OK, you are not sending the information, so it is a trade-off. It is another option.
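The cost comparison in that answer can be written down as quick arithmetic. The figures below (4 MB per picture, one million pictures, a roughly 200 KB model update) are the illustrative numbers used in the conversation; the 100-round count is an assumption added for the sketch, and none of these are measurements:

```python
# Rough communication-cost comparison from the discussion above:
# sending the raw data versus sending only model updates.

pictures = 1_000_000           # images collected across the devices
picture_size = 4_000_000       # ~4 MB per high-resolution picture
rounds = 100                   # assumed number of federated rounds
update_size = 200_000          # ~200 KB model update sent per round

raw_bytes = pictures * picture_size        # ship all the data once
federated_bytes = rounds * update_size     # ship only the models

print(f"centralized upload: {raw_bytes / 1e12:.0f} TB")
print(f"federated upload:   {federated_bytes / 1e6:.0f} MB")
print(f"savings factor:     {raw_bytes // federated_bytes:,}x")
```

With these figures the centralized upload is about 4 TB against roughly 20 MB of model traffic, which is the point the answer makes: the model is orders of magnitude smaller than the data it was trained on.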
Excellent, which is actually what Pablo, I think it was Pablo, was saying: oh my god. Hang on a second, because suddenly we don't have much time; they are telling me we have just a couple of minutes. But they are asking you: since I see the state and I have to keep it during the training iterations, is this a limitation of TensorFlow? No, you don't have to keep it. It is a way of knowing what the state is and whether the process is working in the correct way. That is, when you pass the state to the iteration, it checks that the state is correct, and if it isn't, it waits, for example, to collect all the information. Keep in mind that this works as synchronously as it can: until everything has been collected and aggregated, it doesn't go to the next step. So the state is probably used only for that, to know which state the process is in; you saw it on the screen. Moisés, sorry, there are a lot of questions. See, I told you they wouldn't let you leave until the end. I'm going to ask one more, telegram-style, because we have to move on. They ask: if it allows you to train a model from multiple private devices, isn't there a problem of attacks by poisoning the model? It could be. It could be that one of the devices inside the network that you are using is not generating its model in the correct way. What can happen is that your model doesn't learn in the correct way when the aggregation is produced using some incorrect examples. Theoretically, with a single misbehaving device this shouldn't happen, because there is enough information from the rest for it to be averaged out at some point. But this is theoretical. This is very new and a lot of people are trying it out. Now you have made me think about trying this myself. Oh my god, we've opened that can of worms. Moisés, you have a question from Álvaro, but it's very long, so Álvaro, I invite you to contact Moisés directly, because we are out of time.
Moisés, as you can see, you've aroused a lot of interest. We hope to see you again soon. Stay tuned for the next session, which will be just as interesting. Thank you very much. Moisés Martínez, from Capgemini, see you very soon.