I'm Neroop Ambushankar. Most of my background is in medical device and health care related work: computational geometry, implants, implantable pacemakers, and so on. I'm currently at Intel as an end-to-end AI framework and solutions architect and the product manager for federated AI with OpenFL. Today we'll be talking about building better AI models with private data using OpenFL. I'm glad the previous speaker already talked about federated learning, so some of you who have been here already have a sense of what it is.

Today's agenda is divided into four sections. First, an overview of federated learning: what it is, and centralized versus federated. In the next section I'll introduce you to OpenFL. I'll try to keep it at a high level and make it as interesting and productive as possible, but feel free to ping me any time after the talk with specific questions about the architecture. We'll cover the FL architecture, the core values, and how you can get started. Then I'll talk about Intel's federated learning security, which is a security extension on top of OpenFL for reducing attacks and risks. Intel has something called SGX, the Intel Software Guard Extensions, and with it we can prevent some of the risks that one of the audience members asked about in the previous talk. And then we'll look at real-world usages of OpenFL.

Let's dive into the overview. AI needs data, and in health care, finance, and I think there's also an IoTG interest here, getting that data is a real challenge. Most of this data is distributed across organizations, each with its own rules for sharing, and much of it sits across borders, which in health care specifically means HIPAA, GDPR, and similar compliance regimes. So data replication or data movement across countries or organizations is generally very hard. You also need data de-identification, so that you can't tell who contributed the data you train on. And there are data sovereignty challenges, which were also touched upon in the previous talk, where every country owns its own data. So federated learning: we need data, and how can we expand access to it, robustly, with federated learning?

Just as a teaser before I go into detail: we actually ran the world's largest health care federation with OpenFL. It spanned 71 sites across six continents and more than 20 countries. It's called the Federated Tumor Segmentation (FeTS) initiative, and it looks at brain tumor segmentation. I'll come back to the results, but the accuracy and everything else turned out great; this is just to get you excited. And last year, in 2022, it was one of the 25 most downloaded Nature Communications papers in health care. That paper came from Intel Labs and UPenn.

Anyway, first, what is traditional centralized learning? You start with a training site; for the purpose of this talk let's call it the aggregation site, because that's where the final model comes out. You start with a global model, or whatever model you have before you train. And then you look for institutions.
For the purposes of this talk, think of the institutions as collaborators, because that's the federated learning term; I'm just trying to get you thinking in federated learning terms. These institutions could be health care institutions, banks, or anything you can think of that has compliance and governance requirements and holds the data you want to train on. In centralized learning you have to send that data to a central site, and that's where the problem is. On the previous slide I talked about GDPR, HIPAA, governance, all of that. It's not easy: even for the study we did, we saw that it takes roughly one to three years to get through all the compliance and governance around sharing this data. But in centralized learning you still have to go through all of that, send the data in, train a model, and get a trained model out.

Now let's reverse the paradigm. In federated learning, you start with the institutions. You have the same global model you started with before, and you send it out to the institutions. You don't move the data; you move the model. The model is trained locally, so you get an updated model at each of those sites, at those hospitals, and the data stays private. Each site sends only the weights back to a central aggregation server, the aggregation site I mentioned earlier. There the aggregator computes some kind of weighted average, federated averaging or one of many other aggregation algorithms, over the weights alone. I keep stressing "weights only" because only the weights were sent, and that matters for most of the privacy discussion later. You now have an updated global model, and you repeat the process: send it back out to the institutions and keep looping until training converges.

There are other collaborative learning techniques, too. In the paper there's something called institutional incremental learning; federated learning isn't the only way to learn from distributed, private data. The picture speaks for itself: one institution trains the model, hands it to the next institution, and so on until you have a final model. You can either let each site train over its entire dataset, or train only a few epochs per site and keep cycling; the latter is cyclic institutional incremental learning.

What I'm showing now is one of the studies from the FeTS initiative paper that started with Intel and UPenn, where we did this for brain cancer segmentation and compared all these collaborative learning techniques. CDS is centralized data sharing, just another name for centralized learning, where the data is actually shared. Comparing federated learning with CDS, the convergence and model quality of FL reach about 99% of what you can achieve with centralized learning. FL is also superior to the other two techniques from the previous slide, IIL and CIIL. And you see these sharp kinks in their curves; that comes from something called catastrophic forgetting. You take the model to one institution and it trains well there.
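As an aside, here is a toy sketch of that sequential hand-off in institutional incremental learning and its cyclic variant. This is purely my own illustration, not code from the paper: train_locally and the site objects are hypothetical stand-ins for whatever framework-specific training each site actually runs.

```python
# Toy illustration of institutional incremental learning (IIL) and its cyclic
# variant (CIIL). `train_locally` and `site.local_data` are hypothetical
# placeholders for a site's framework-specific training loop and private data.

def institutional_incremental_learning(model, institutions, train_locally):
    # IIL: hand the model from one institution to the next, once each,
    # training over that site's full dataset before moving on.
    for site in institutions:
        model = train_locally(model, site.local_data, epochs=50)
    return model

def cyclic_incremental_learning(model, institutions, train_locally, cycles=10):
    # CIIL: the same hand-off, but only a few epochs per site, cycling repeatedly.
    for _ in range(cycles):
        for site in institutions:
            model = train_locally(model, site.local_data, epochs=2)
    return model
```

It is exactly this site-to-site hand-off that makes these techniques prone to forgetting.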
Depending on the data at the next institution, you can end up overwriting some of what was already learned, or over-generalizing it. That's what you see in the sharp curves for CIIL and IIL. The degree to which the data is independent and identically distributed across institutions also matters, and it matters a lot more in federated learning than in centralized learning; we saw that in the paper, and these are the outcomes. Federated learning did take more epochs to converge to the same quality as centralized training. But, as I'll explain for the tumor segmentation work, it's still worth it: you get access to far more data, which helps the model generalize and gives you a good model overall. That paper is one of the first major papers on federated learning in health care.

At this point I also want to give you a little sense of how it all started. Federated learning came out of a paper Google published in 2017. They wanted to train on data from our Android phones, which is private to us, so they couldn't move it to central servers. So they coined the term federated learning: the training happens on the phones themselves, only the weights are sent back, and a weighted average is computed. They called it federated averaging, but it's like any other weighted average, and there are several other aggregation algorithms, such as median-based ones.

With that, let me introduce OpenFL. OpenFL basically started with the initiative I mentioned: Intel and UPenn started with centralized learning and wanted to extend it to the very large, 71-site federation. OpenFL came out of that work, and Intel made it open source. Again, you have the aggregator and the collaborators, which are the institutions, and you send the model and weights back and forth. OpenFL is open-source software, a complete federated learning system architecture. It's in the Linux Foundation now, under LF AI & Data; we moved it from Intel to the Linux Foundation as of last year, and it's released under the Apache 2.0 license, so please go ahead and use it. It's easy to use, and scalable and manageable for large federations; the whole intention is to go beyond 71 institutions to bigger and bigger federations. It's completely open source, so please contribute. We also keep privacy-preserving machine learning in mind: because it involves data, both the data and the model are protected in transit, in use, and in storage, and there are additional privacy extensions that Intel is still researching and wants to contribute, which I'll talk about.

So OpenFL solves the data silo problem with software and accelerates time to market for federated learning deployments. It provides greater access to data by enabling secure, privacy-preserving collaboration. One other thing I want to mention about the local training between the collaborators and the aggregator: the only kind of data sent across the network is NumPy arrays.
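Since only NumPy arrays cross the network, the aggregation step itself is easy to picture. Here is a minimal sketch of a FedAvg-style weighted average over the collaborators' weight arrays; this is my own illustration of the idea, not OpenFL's internal implementation.

```python
import numpy as np

def federated_average(collaborator_weights, num_examples):
    """FedAvg-style weighted average of model weights.

    collaborator_weights: list (one entry per collaborator) of lists of
                          np.ndarray, each inner list holding that
                          collaborator's layer weights.
    num_examples:         list of ints, how many local training examples each
                          collaborator used (the weighting factor).
    """
    totals = np.asarray(num_examples, dtype=np.float64)
    shares = totals / totals.sum()           # each collaborator's weight in the average
    aggregated = []
    for layer_idx in range(len(collaborator_weights[0])):
        layer = sum(share * weights[layer_idx]
                    for share, weights in zip(shares, collaborator_weights))
        aggregated.append(layer)             # new global weights for this layer
    return aggregated
```

Weighting by the number of local examples is the classic federated averaging from the Google paper; median-based and other robust variants slot into the same place.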
The thing about NumPy arrays is that they help protect the data owners and satisfy IT security teams that are, rightly, very strict about what gets sent over the network. Exchanging only NumPy arrays makes those IT security and safety reviews much easier. And because the aggregator exchanges only NumPy arrays, all the training happens in the collaborators, at the institutions. Sending just the weights means the aggregator can be framework agnostic: everything framework-specific happens at the collaborators, and the aggregator just receives the weights and does the weighted average. That's why OpenFL supports Keras, TensorFlow, PyTorch, ONNX, and JAX, among other things, today. Distribution happens through GitHub, and installation is pip install openfl from PyPI; it's that easy. We also have a Docker image.

With that, let me talk a little more about the OpenFL architecture. There's an aggregator, as I mentioned, and there are collaborators, which are the institutions: health care, banks, whatever you can think of. The main concept is the plan, which defines most of what runs in the federation, the whole experiment: the model, the data loader, and everything else the federation needs to run, such as hyperparameters, network topology information, and so on. All of this is defined in the plan up front, out of band, before the experiment even starts. The aggregator also has the concept of a workspace, which bundles the FL plan, the code, everything that's in there, and you send it to the collaborators before the experiment starts. It's easy for the collaborators to import it with a single fx import command, which pulls in all the dependencies needed to run. So the model owner defines the model in the plan, states what they want to do, the network connections, and which collaborators should participate, and sends it out; one import command brings that whole environment into the collaborator institutions, the hospitals. The aggregator also serves as the certificate authority, deciding which collaborators may participate, and that secures the mutual TLS connection between them. Once all of this is set up, one single command, fx start, and the experiment starts.

So what happens then? OpenFL is built on top of gRPC, in a client-server model. On startup, the collaborator asks the aggregator what it should do, what kinds of tasks it needs to perform. Broadly there are two kinds of tasks; I'd say federated AI rather than just federated learning, because there is federated learning and there is federated validation. In health care, say the FDA wants to do something with a model and validate certain things. Sometimes you send the model to the collaborators not for training but just to validate: is what we're doing even correct on their data? You could take a model trained on data from, say, Europe, and ask someone in India to validate it. So the collaborator asks the aggregator which tasks to perform.
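The shape of that exchange, round by round, looks roughly like the sketch below. This is a conceptual illustration of the protocol I'm describing, not OpenFL's actual client API; the aggregator client and task_runner objects are hypothetical placeholders.

```python
# Conceptual sketch of one collaborator round. The `aggregator` client and
# `task_runner` methods are hypothetical placeholders, not OpenFL's real API;
# in OpenFL the exchange happens over gRPC with only NumPy arrays on the wire.

def run_collaborator_round(aggregator, task_runner, local_data):
    tasks = aggregator.get_tasks()                  # e.g. ["validate", "train"]
    global_weights = aggregator.get_global_model()  # NumPy arrays only

    results = {}
    if "validate" in tasks:
        # Validate the incoming global model on the local, private data.
        results["global_model_metrics"] = task_runner.validate(global_weights, local_data)
    if "train" in tasks:
        # Train locally, then validate the locally trained model.
        local_weights = task_runner.train(global_weights, local_data)
        results["local_model_metrics"] = task_runner.validate(local_weights, local_data)
        # Only the updated weights (NumPy arrays) leave the site, never the data.
        aggregator.send_local_results(local_weights, results)
    return results
```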
Concretely, the collaborator gets the model from the aggregator, along with its tasks, say validation and training. It first validates the model it received from the aggregator, so you validate whatever comes in. Then it trains locally, validates its local model, and sends only the weights back. The aggregator receives these from all the collaborators, does the weighted average, and creates the new global model. At that point you've finished one round. One other point, of course, is the aggregation of the weights: there are several different algorithms, and all of this depends on the tasks you perform. All of that is specified up front in the FL plan; in the interest of time I'll leave the details for after the talk, and I'm happy to explain them clearly then.

A bit about OpenFL's progress. Again, it started after Google published the 2017 paper; we began with the UPenn initiative, and from 2018 to 2020 we built it. We released it publicly on GitHub on February 1st, 2021, then moved it to the Linux Foundation, under LF AI & Data, for everyone to use. We've done six major releases so far, and 1.6 is coming soon, most likely early next quarter. It's at about 600 stars on GitHub today and still growing. Please contribute, bring larger federations, and let's create better models.

The core values: it's always been about scalability. Like I said, please grow it; larger federations are needed, because we're excited after seeing the health care results. I keep stressing health care only because that's what we did, but bank institutions and many other cases will add more generalizability. Security: we have a lot of it, and I'll touch on it more in the forthcoming slides. And developer experience: we want it to be easy, with that single start command. OpenFL is distributed through GitHub, PyPI, and Docker Hub, and there are a lot of tutorials to get you started. Please take a snapshot; the slides are also uploaded, so please use them. There are tutorials for TensorFlow and PyTorch and even for specific verticals, and we're trying to make the documentation easier so that more and more people adopt it.

All right. With that, let's move on to the next interesting topic: security. There are attacks and risks. Having access to data brings its own huge issues, especially when you go across boundaries. I've listed four here, some of which we've tried to eliminate or at least address as well as we can. One is model IP theft: think of a collaborator joining only to take the model off, instead of really contributing to it. That's an issue we should help protect against. Another is malicious model parameter updates: instead of contributing with their data, a participant changes the model parameters in ways that keep the entire federation from succeeding. Another risk is data extraction attacks, which the previous speaker also touched on a little. Even though the federation de-identifies data, a participant could, instead of training, try to guess what other sites' data is and extract it from the model. Smaller federations are more exposed to this; in a larger federation it's harder to do.
But it's still there. And the last one is malicious entity participation: anyone at all trying to come in. I'll touch on how Intel's FL Security helps with these, and especially this last one, because you need something that says: are you a participant I know and can trust? Only then may you join as a collaborator. You need attestation, so we provide attestation services. A federated learning framework needs this additional security to manage these risks.

With that, I want to introduce Intel SGX, the Software Guard Extensions. The main thing it brings is a trusted execution environment, a TEE for short: the code that runs inside the TEE is encrypted in memory and not viewable even by the root user of the system. After the code is accepted and everyone among the participants has agreed on it, the model owner encrypts the code and the entire plan, and none of it is viewable by anyone outside of them. Only the holder of the decryption key can view it, and they are the only one who could change the code; since nobody else can even view it while it's running, there's nothing for anyone else to tamper with. So you get a trusted execution environment along with other security features.

ITA is another Intel offering, Intel Trust Authority, which is the attestation service. When a new participant comes in, one way of dealing with it is simply: "Do I know you? Sorry, you're not a participant." Intel Trust Authority instead says: "I don't know you, but can you verify that the code and environment you're running are trusted, so that you can join the federation and contribute?" ITA checks and attests whether the participant is trustworthy and whether their Software Guard Extensions and dependencies are all valid. And this is a hardware-level protection.

What that provides is data confidentiality: the data never leaves the premises of the data owners, and the model IP is protected end to end, including in use. After everything is done it stays encrypted in memory until you retrieve it, and only the key holder can decrypt it; no one can access the TEE except those already given access. Integrating attestation means only verified and approved ML models run, as long as the FL plan is agreed upon by the aggregator and the rest of the participants, the model owners, collaborators, everyone, before the experiment starts. Once the experiment starts, the enclave is sealed: you cannot tap into it, and participants cannot insert unapproved code at any time. In this way Intel SGX provides a mechanism to prevent model theft, reverse engineering, and data extraction.

Here is a little more insight; in the interest of time I won't go too deep into this. The governance service is what provides that additional layer of security. It has its own ledger, admin, and security registry, making sure each participant is properly registered. The small blue boxes you see are the Intel SGX enclaves on each participant. The governor is there to ask: is your Intel SGX valid? Is everything in the plan valid? Are you a participant that I know? Only once every check passes will it let you run the experiment.
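Conceptually, that gate looks something like the sketch below. This is purely an illustration of the checks I just described, not Intel Trust Authority's or OpenFL Security's actual API; every name in it is a hypothetical placeholder.

```python
# Purely conceptual sketch of the admission checks the governor performs.
# `attestation_service`, `participant`, and the `approved_*` inputs are
# hypothetical placeholders, not real Intel Trust Authority or OpenFL interfaces.

def admit_participant(participant, attestation_service,
                      approved_participants, approved_plan_hash):
    # 1. Is the participant's TEE (e.g. an SGX enclave) genuine and up to date?
    if not attestation_service.verify_enclave(participant.attestation_quote):
        return False
    # 2. Is the participant running exactly the FL plan everyone agreed on?
    if participant.plan_hash != approved_plan_hash:
        return False
    # 3. Is this a participant the federation actually knows and approved?
    if participant.identity not in approved_participants:
        return False
    # Only then is the participant admitted; once the experiment starts, the
    # enclave contents cannot be inspected or modified from outside.
    return True
```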
Once the experiment starts, no one can tamper with it. That's the key part. So what FL Security adds, apart from the collaborators and the aggregator, is this governor security entity.

All right. How does FL Security mitigate the risks? For model IP, the trusted execution environments at the collaborators alone prevent the model from being accessed, so you cannot walk off with it. You also cannot change the model parameters, again because of those TEEs at the collaborators. And data extraction attacks can't be carried out because TEEs sit in both the aggregator and the collaborators. The reason I separate these out is that, depending on the federation, people sometimes want SGX only at the collaborators and not at the aggregator; having it everywhere makes things more and more secure, and the level of security and privacy ultimately depends on the federation and how it's set up. Lastly, Intel Trust Authority verifies the TEE enclaves, and the governor verifies that the plan is what everyone agreed on.

With that, let's go through some real-world usages. I gave a teaser for federated learning for rare cancer boundary detection: as I said, we ran the world's largest health care federation. In this graph, the blue points are the publicly available models and data across the world, the orange points are all the vast additional private data we got access to through this initiative, and the green points are the out-of-sample validation. For those of you who are machine learning savvy: we have to validate outside the current dataset to make sure the model generalizes and isn't specific to the data it was trained on. There is also out-of-sample validation with clinical trial data. The aggregation server was a single server on the east coast of the United States, where all of this happened.

Now the results. This is brain tumor segmentation, where you have the enhancing tumor, the tumor core, and the whole tumor. The blue region is the tumor core; the enhancing tumor is what the doctors want to see so they can treat the regions around it; and if the patient goes for radiotherapy, they would want to treat everything in the red regions. You want to segment these as accurately as you can so you can treat the patient correctly. In short, the accuracy of identifying these regions improved substantially. On local validation, from the training set, we saw gains for the enhancing tumor and the tumor core, the tough ones to segment: a 27% gain for ET and a 33% gain for TC. Even more encouraging, on out-of-sample validation we saw 15% and 27% gains for those two regions.

The key results of the FeTS initiative, the Intel and UPenn initiative, are these. Increased access to data can improve performance; however, data alone cannot, and of course the quality of the data and how it is labeled matter. Large sites can also benefit from this kind of collaboration with increased data. And federated learning is robust to data quality issues: the reason I make this point is that we found some of the sites had mislabeled their input data, and federated learning did not get thrown off too much by it. We still converged.
And we got a model out with the accuracy I mentioned before, so it does improve accuracy. As for singlet versus triplet models: conceptually, you can use one model for all three regions, or three different models, one per region. For this initiative we found that both approaches gave very close results, so there wasn't much difference, although intuitively you could argue that three separate models might each produce better results.

Some of the other things we're working on: of course, it started with the UPenn initiative, and the FeTS initiative is the brain tumor segmentation. We've also done something with the Frontier Development Lab, so now it goes to another industry, space: NASA and the Mayo Clinic want to study the effects of cosmic radiation. They already have data from the space station crews, and think about that data: genomic data is huge, and somehow they transmitted it, I don't know how, and the data privacy involved is enormous. So they're using OpenFL to study the effects of cosmic radiation so that they can send human beings further out, and also to study how cosmic radiation is affecting us today over time. Montefiore is using OpenFL, particularly after COVID, for acute respiratory syndrome studies and mortality prediction based on them. And VMware is using OpenFL; they also contributed something called EDEN, a new compression pipeline designed for OpenFL.

I'll leave you with this slide, so please take a picture. OpenFL has moved to a new home in the Linux Foundation, joined by VMware, Leidos, UPenn, and Flower Labs, who are driving the future of the project. Use OpenFL, create high-quality models from high-quality data, and please contribute back; we're looking for contributions at any skill level, and regular contributors can move into maintainer positions. It's on GitHub, the QR code is there, send a message on Slack, and here are all the links. Patrick is the chair for OpenFL, Prashant is one of the TSC members, and I'm a TSC member too, and I'm also doing the product side. With that, I'll end my talk today. Thank you. Yes, please. I think there's a mic; yes, it's on now.

When you're sending the weights back to the aggregated global model, what techniques do you use to eliminate bias? Because there's a lot of local context, including environmental conditions, for example pollution, traffic, or whatever, depending on the business case and use case. So how do you ensure that, when you push the model back to the institutions, the bias carried in the weights coming from all the local models doesn't compromise accuracy?

Thank you. One thing I have to say is that federated learning is a method to do this kind of learning and to give access to increased data. As a general problem, yes: increased access to data brings increased issues, such as more bias coming in.

And the characteristics of the RCA? Correct.
That kind of bias can be addressed if the model owner, which is the aggregator in this case, or whoever designs and creates the FL plan, sets up the plan to look specifically for those biases and address them. It's an interesting point; I'll take it back to the team and see whether we can address it specifically in the aggregator. The other question is how much of it we want to address ourselves, because what we address is the final model's IP, security and privacy while it is training, and data privacy; those are the main things. And there are real concerns around each. Say Europe, or NASA in this case, is training: they don't want Intel or any of the OpenFL folks to have access to the final model. On the data privacy side, people don't want someone to be able to say, "I'm using this data and I know that everyone in, say, China has more COVID," or something like that; I'm hesitant to even name a country, but they don't want that kind of attack. So you have to be very careful. Data owners keep their own data privacy, model owners keep their own model IP and security, and we just enable those things so people can use the framework. Biases would really need to be handled by the model owner, but I'll take the question back.

Yes, I had a couple of questions; am I allowed to continue here? OK, thank you, I'll make it quick. The first is, I noted that you're building on top of Gramine. Is there any roadmap plan to integrate with more proprietary offerings like GCP Confidential Space, or the AWS or Azure offerings?

Yes. There is a lot going on in privacy-preserving machine learning; it's its own thing altogether. We built on top of Gramine because SGX is there and it uses Gramine. We are working with the Intel SGX team, looking at their roadmaps, to see how we can integrate and make what we have generally available and satisfy those needs as well. So yes, we do have some of those plans.

And then, just real quick: TEEs are very memory constrained. What are the challenges, how much data can be trained, and do you have batching or streaming capabilities to handle that?

So this is built on top of SGX. There is also something called TDX that Intel is coming out with, which has additional memory precisely for this and which increases the security: the idea is not just to run the code in the enclave but to run the entire thing in the enclave, and they've increased the memory, as far as I know to several terabytes, but don't quote me on that. I do know that TDX has more memory to address exactly that concern, and integration of OpenFL with TDX is planned on the roadmap sometime next year. Thank you.

So when the weights, the improved weights, are transferred back to the aggregator, the assumption is that that's private. I guess for the medical application, people may not want the fact that they have a condition to be known, but they probably don't care if a picture of their tumor somehow leaks out; with financial information, that might not be the case. Either way, at least open-source OpenFL doesn't include the security portion.

So we are currently doing pilot tests of the FL Security with some of our customers, and it will become available later.
So, in there: you decide your FL plan up front, and after it's encrypted and you start the whole thing, it travels over the secure connection, and only the already agreed-upon participants can even access it. But what I'm saying is that none of the collaborators can access the model weights at any point either; only the model owner, which is the aggregator in this case. I'm purposefully separating the two, because we've heard of situations where the model owner is different from the aggregator. What I'm saying is that during training, no one has access to the weights until the experiment ends, or unless they can decrypt them.

Who can decrypt it? Only the aggregator?

Let's say the aggregator is the model owner; then they can decrypt it. And if your concern is that you don't want the aggregator themselves to see the model, that is exactly why we deal with a model owner who is separate from the aggregator. That way you can have any hardware entity be the aggregator, and the model owner be, conceptually, the institution, NASA say, so that no one else can do anything: finish the model, send it back to me, or I'll come and grab it, and I'm the only one with access to it. After that point we don't deal with anything; handling the model is their concern.

So there are two assumptions. One is that the transmission is secure. And the second is that you trust the aggregator.

The transmission is secure, correct; that's why I mentioned the enclave. You trust the aggregator, and if the aggregator is the model owner, they hold the encryption and decryption keys. But if the model owner is separate, the aggregator also cannot see anything, because it's inside the encrypted enclave. You start with the FL plan, you encrypt it, it goes into the enclave, and no one can open it without the decryption key. It's almost like saying: everything is planned, I send everyone inside this room and lock it, everything inside is de-identified, and things just run. I hope that helps.

I'm not sure I understood it all. So you're talking about re-... Oh, I see: you're asking whether someone could go back and work out what came from a specific institution, in a smaller federation, once the final model is out.

Right. So if you are one institution, he is another institution, I am the aggregator, and he is the model owner, then none of us can see the final model until he does, no matter what. First of all, the three of us could even be competitors. That's why the notion of participants comes in: the participants agree on what code runs, but they still cannot see what is finally produced unless the model owner grants them the right to.

Hi, thank you so much. I was wondering, because I come from privacy: I personally do not see federated learning as a means to be outside the scope of the GDPR. Only differential privacy is the state of the art for anonymizing data in the sense that you can no longer call it personal data. I would even argue that those weights, even though they're sent encrypted to the aggregator, could still theoretically be subject to re-identification attacks, so it's not complete data privacy; it's just improved security for private data, I think. So my question is, first of all, what do you think? And do you also offer differential privacy in your setup, in your architecture?
Local differential privacy, to randomize the data before it's sent? I'm going to take this question back to my security team. Differential privacy: I'll make a clear note of it, because I'm not really an expert on the security side of the extension; the team built that. Thank you. But your differential privacy concern is about the weights going back to the aggregator, correct? Yeah, because you often hear that it's only with differential privacy that federated learning really makes the data truly private or truly confidential. Sure, OK, thank you. Sorry, I have to wrap up. I'll wait outside, so please come ask me questions; I'm really happy to answer.