Hello everyone, and thanks for being here, listening to us online in the AI + Data forum at the Open Source Summit Latin America. My name is Animesh, and I'm the CTO and Director for Watson AI and Data Open Technology at IBM. With me I have my colleague Tommy. Tommy, please introduce yourself.

Hi, my name is Tommy. I'm a senior software developer at IBM, mostly working on open source technology, and my focus is on the AI lifecycle workflow.

Thanks. So, when we look at the topic we are going to talk about, which is MLSecOps overall, it can extend into multiple areas, and it's a very wide field. We'll be going into certain aspects of it, but for us it falls under the larger umbrella of trusted and ethical AI. There is a quote I really like from Thomas J. Watson, a former CEO of IBM: the toughest thing about trust is that it is very difficult to build and very, very easy to destroy. That is one of the tenets we follow while we build ethical and trusted AI within IBM. I think we're all aware by this point that AI is powering critical workflows across different domains: healthcare, customer management, employment, credit. We want to make sure, and I think the industry in general wants to make sure, that trust and ethics are built into these workflows which are now being driven by AI.

So what do we mean by trusted AI? What does it take to actually trust a decision made by a machine? From our perspective, the characteristics are: Is it fair? Can anybody tamper with it? Is it easy to understand? Does it handle privacy? Is it transparent? All these different pillars form it, and there are many more; if you look at industry definitions, trusted and ethical AI can have different offshoots. Coming from those pillars, we are not only focusing on what trusted AI means, but also on how you implement it. And to enable the community at large to implement it, we developed a number of projects in IBM Research and then moved them out into open source. For the question "can anybody tamper with it?" we have a project called Adversarial Robustness 360, and I will be diving into it later. Is it fair? On similar lines, that project is called AI Fairness 360, and it's also out in open source. Explainability is a very fundamental characteristic of the AI lifecycle: many models are making critical decisions for you, and we want them to explain their predictions, so AI Explainability 360 is an open source project in that area. And last but not least, lineage: if a model is making life-changing decisions for you, you want to be able to trace back and audit it. What was the data set used to train the model? What were the hyperparameters? What are the characteristics of the model parameters? Was the data set diverse enough? You can trace that back using lineage, and we have a project called AI FactSheets 360, whose spec is open source. Interestingly, it is also becoming part of the Linux Foundation SBOM work, where software bills of materials will start including AI FactSheets as part of them.
So the three key projects, as I was saying, are AI Explainability 360, AI Fairness 360, and Adversarial Robustness 360. And if you're wondering about the relevance of "360", hopefully these diagrams explain what we mean. Take Fairness, for example: we are not only looking at it from the perspective of a model. The toolkit actually addresses the whole lifecycle. Is your data set fair? That is what we call pre-processing. Are the classifiers being produced fair? While you are going through the machine learning development process you can use in-processing techniques. And then post-processing: once the models are deployed, can we look at those predictions and classify them as fair or not fair? That's the theme, and that's where the 360-degree focus comes from. Similarly, with AI Explainability 360: can we explain the data set features and the data set distribution, and can we explain the models both at an individual prediction level, for a particular transaction, and at a global level, where transactions are happening over a period of time and we can explain how the model is performing over a range of predictions?

And open source with a single vendor is not true open source. So we wanted to move these projects into an open governance model, where a neutral entity holds them and there are no licensing or governance concerns. As part of that, we moved all these projects to Linux Foundation AI & Data. In fact, we have been instrumental in working together with the Linux Foundation to shape the LF AI & Data landscape. As part of the formation of LF AI & Data, we also launched the Trusted AI Committee, which is centered around two working groups: the Principles Working Group, which defines what it means when we call something trusted and ethical AI, and a Technical Working Group, which has been implementing the code, techniques, and algorithms responsible for providing these properties. And we are not doing it alone. The Trusted AI Committee has grown pretty big: we have participation from Microsoft, DARPA (TrojAI), General Motors, Tencent, and a lot of other institutions contributing to this work stream. If you would like to join, we definitely invite you; please reach out to one of us, or hit the wiki link pasted here. Now, the Principles Working Group came up with eight principles for trusted AI, which they call "REPEATS", and if you look at them, they expand the initial pillars I was talking about. The terms they use are reproducibility, robustness, equitability, privacy, explainability, accountability, transparency, and security: these are the eight principles through which the LF AI Trusted AI Committee looks at the overall trusted AI landscape. If you want to join the meetings, or look at the web page hosting these projects, the meeting link here has all the information you need to join the bi-monthly syncs we are running.
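To make that 360-degree idea concrete, here is a minimal sketch of what the pre-processing side of AI Fairness 360 looks like in code. It is illustrative only, not something shown in the talk: it assumes the German Credit dataset that ships with the toolkit's tutorials, with age as the protected attribute.

```python
# Minimal AI Fairness 360 sketch: measure dataset bias, then mitigate it with a
# pre-processing algorithm (Reweighing). Assumes the German Credit data files
# are in place, as in the library's own tutorials.
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

dataset = GermanDataset(
    protected_attribute_names=['age'],           # treat age as the protected attribute
    privileged_classes=[lambda x: x >= 25],      # "privileged" group: age >= 25
    features_to_drop=['personal_status', 'sex'],
)

privileged = [{'age': 1}]
unprivileged = [{'age': 0}]

# Pre-processing check: is the training data itself fair?
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("Disparate impact before mitigation:", metric.disparate_impact())

# One possible mitigation: reweigh examples so both groups are treated equally.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_reweighed = rw.fit_transform(dataset)

metric_after = BinaryLabelDatasetMetric(dataset_reweighed,
                                        unprivileged_groups=unprivileged,
                                        privileged_groups=privileged)
print("Disparate impact after mitigation:", metric_after.disparate_impact())
```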
Okay, so these are the eight principles the Trusted AI Committee members, across organizations and geographies, came together on: what it means, across multiple organizations, for AI to be called trusted and ethical. One thing to highlight is that no principle is a higher priority than any other; they are all of equal importance, and they are all related to each other. You will see overlaps between different principles: for example, between transparency, accountability, and explainability you will see things that could fit either criterion. But in this context, we want to focus on the three principles around security, robustness, and privacy, which is where the overall field of MLSecOps lies.

You may want to ask at this point: what does MLSecOps mean? The way we are looking at it currently — and this is a slide I borrowed from one of the committee members, Alejandro, who is actually forming an MLSecOps group under the Trusted AI Committee that will focus on this work — is that it is the intersection of MLOps, SecOps, and DevOps. I think most of us are familiar with DevOps practices and principles. MLOps is not so recent anymore either; it has become very prominent in the last two to three years: how do you bring machine learning engineers, data scientists, and DevOps folks together so they work on one platform and look at the whole machine learning lifecycle through the same lens. And SecOps is essentially the security domain: how do you apply security principles, working jointly with your operations team, and automate all of that. These three things together form MLSecOps. Now, MLSecOps as such has very divergent dimensions, and we won't be able to go into all of them in this talk; the crux here will be focused on AI security.

Why that specific domain? If you look at the latest Gartner research, one of the findings, from a survey of around 600-plus executives, was that machine learning is presenting a new attack surface and increases security risk, and yet awareness of that risk is low. There is a low understanding of what AI security means, and as a result the security posture is close to zero. Why is this so important? I think we all understand the relevance, but look at it from a legal perspective as well. Take GDPR: it made it mandatory that whenever we are using, storing, and processing user data, there is a set of guidelines we need to follow, and even though it originated in Europe, it has been widely adopted across the globe. One thing becoming very clear is that many of the provisions in GDPR are very relevant to AI as well. In fact, as a result of some of the research being done, some models can themselves be classified as personal data.
That means the rules we need to follow for GDPR need to be applied here as well. And not only that: there is quite a bit of work being done on crafting new rules and regulations, and the European Commission is already putting out regulations that will come out specifically for AI. The research I mentioned called out that you can actually run membership inference attacks and model inversion attacks and get access to some of the private and sensitive data that was used to train a model. In that context, essential and extra security steps need to be taken.

So, coming to the other end of the spectrum: what is adversarial machine learning? Here is a very simple example, hopefully easy and straightforward to understand. All of us have been depositing checks at our banks, and of late a lot of us are doing it using mobile phones. It has been found, based on different research, that it is easy to fool machine learning models by adversarially modifying these images: the modifications are hard to detect or even notice for a human, but the model interprets them as something completely different. In this case I'm depositing a check, and with some minor adversarial noise inserted into the image, we can get a $753 credit from the bank, because the check image was adversarially modified. Here is a more severe example of what adversarial modification can do: if stop signs on the road are adversarially modified, either knowingly or just contaminated over time by weather and wear, it is very easy for self-driving cars to get fooled and not stop at a stop sign. There you are looking at very severe consequences. So I think it is very clear that having adversarial protection and adversarial checks on your models is necessary, otherwise the impact can be severe. And these are not things happening only in theory; they are happening in practice. Look at the headlines that have been appearing: for example, evasion of classification in commercial products, or the example I was just showing you, which came from a research paper. There have also been real-world adversarial patches used on cars, which ended up with the car actually losing control, leading to damage and injury. You can stage evasion attacks against email protection systems, which bypass the email security and increase the chance of phishing attacks. A lot of these attacks are happening in the real world, and they are prevalent all over the place currently.

Now, when you look at adversarial machine learning and adversarial threats against machine learning models and applications, you can look at them from various perspectives. There are evasion attacks on predictions, where you modify the input to influence the behavior of a model: the prediction input coming in can be intercepted and modified. There are poisoning attacks, where you add noise and contaminate the training data
if you have access to it, and then use that as a backdoor later on. There are extraction attacks, which can steal a proprietary model: if you have models that are very specific to your domain and carry a lot of domain-specific information, you don't want that to happen. And there are inference attacks, which you can use to infer the training data, exposing the privacy and security of that data as well.

Against all of this, as I mentioned briefly earlier, we have a tool in open source which IBM launched, called the Adversarial Robustness Toolbox. It is essentially a Python library for machine learning security. It provides tools for developers and researchers for all kinds of tasks — classification, object detection, certification, and so on — works across frameworks like TensorFlow, Keras, PyTorch, and MXNet, and with all kinds of data: text, images, tables, video, etc. There are red-team tools, so when you are playing the attacker you can do poisoning evaluation, inference evaluation, and extraction evaluation. But beyond just giving you tools to simulate these attacks, we also give you blue-team tools, which are essentially ways to defend against those attacks: you can do poisoning detection, adversarial training, detector evaluation, and so on as part of this project. The project has become very popular: more than 3,000 GitHub stars currently, 150,000-plus downloads, more than 8,000 commits, and it is being used by many companies and programs including Microsoft, DARPA TrojAI, Intel, and General Motors. As you can see, some of them have also launched their own toolkits on top of the Adversarial Robustness Toolbox. In terms of the progress of the project, this May the project graduated, which is the highest level in Linux Foundation AI & Data. And as you can see from the quotes from different companies, they are all using it, they are highly impressed by it, and they are leveraging it to enhance adversarial security in their models.

Very quickly, how does it work? This is a very simple way of showing it. In this case you can simulate an attack. For example, this is a Siamese cat, and the model is 92% confident that this is a Siamese cat. But if we introduce an attack, let's say a C&W attack, and increase the strength to medium, the model loses its confidence: now it thinks this is an ambulance, with 90% confidence. And the tool doesn't only give you ways to simulate these attacks; you can actually defend against them. In this context we can use something like spatial smoothing, which essentially smooths the pixels of an image so the attack surface is reduced a lot. If you implement that particular defense mechanism, the model is back to 94% confidence that this is a cat. So that, in a nutshell, is the tool, and with that I will pass on to Tommy to talk a bit about how this tool fits into the larger MLOps lifecycle — remember, we are talking about MLSecOps. This is the core ML security you can introduce into your models, and you can defend against different kinds of attacks, but then how do you integrate it into the larger MLOps and DevOps lifecycle? That's where Tommy is going to focus next. So, Tommy, please.

Thanks, Animesh.
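Before moving on to the pipeline integration, here is a minimal sketch of the attack-and-defend flow just described, using the Adversarial Robustness Toolbox. It is not the exact demo from the slides: it uses a tiny untrained Keras model and the Fast Gradient Method rather than the C&W attack, purely to keep the example short, with spatial smoothing as the defence mentioned above.

```python
# Red team / blue team sketch with ART: craft adversarial inputs, then defend.
import numpy as np
import tensorflow as tf
from art.estimators.classification import TensorFlowV2Classifier
from art.attacks.evasion import FastGradientMethod
from art.defences.preprocessor import SpatialSmoothing

# Tiny stand-in for the image classifier in the Siamese-cat example (hypothetical).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
loss = tf.keras.losses.CategoricalCrossentropy()

classifier = TensorFlowV2Classifier(model=model, nb_classes=10,
                                    input_shape=(28, 28, 1),
                                    loss_object=loss, clip_values=(0.0, 1.0))

x = np.random.rand(4, 28, 28, 1).astype(np.float32)   # placeholder images

# Red team: craft adversarial examples with the Fast Gradient Method.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)

# Blue team: spatial smoothing dampens small adversarial perturbations.
smoother = SpatialSmoothing(window_size=3)
x_smoothed, _ = smoother(x_adv)

print("clean:    ", classifier.predict(x).argmax(axis=1))
print("attacked: ", classifier.predict(x_adv).argmax(axis=1))
print("defended: ", classifier.predict(x_smoothed).argmax(axis=1))
```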
So, let me share my screen. As Animesh mentioned the importance of AI security, now let's look at how we integrate it into the workflow. To begin with, I want to introduce a little bit of background on what Kubeflow is. Kubeflow is a very popular platform that runs the AI lifecycle on top of Kubernetes, and one of its most popular projects is Kubeflow Pipelines, which helps users orchestrate all their workflows on top of Kubernetes. The beauty of Kubeflow Pipelines is that it containerizes every ML task, so you can put any kind of code, any language, any framework inside a container and run it on top of Kubeflow Pipelines. Along with the flexibility of containerizing a workflow, it also provides a way to connect all those workflow steps using a simple Python DSL, which lets you configure any of your input and output parameters easily from the Python side. And once you build a pipeline, it can be shared with other users in the same organization, and you can schedule it to run periodically or on demand to help automate your ML workflows.

With a tool such as Kubeflow Pipelines, we can integrate Trusted AI to help us find AI vulnerabilities during development, and during the production stage once the AI is deployed on the cloud as well. One way to use it during development is this: when you train a model with Kubeflow Pipelines, once the training step is done, you can embed a Trusted AI component to check whether or not the model is robust enough to serve in production. If not, you can pull it back for further development, or if it is ready for production, you can immediately serve it online and let other users start using your new model. This is just an extra check during your development stage to make sure your model is not vulnerable to external attacks.

Here we introduce one of the components we have developed for Kubeflow Pipelines. This component is mainly focused on white-box attacks, basically using a gradient-based attack. In this case, once your model is trained, you pass down your model information, such as the loss function and optimizer, and specify the type of attack. This component takes a Fast Gradient Sign Method attack as the base, generates adversarial samples, and uses them to attack the model we developed. At the bottom you can see that with a vanilla baseline MNIST model, trained to classify handwritten digits, the basic model has around 87% accuracy; but with just a very simple gradient-based attack, the adversarial samples bring the accuracy on the same data set down to around 13%. As you can see, that is a very bad security vulnerability in your developed model. You can also see that, on average, the confidence of the same model is down by 24%.
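As a rough illustration of what such a robustness gate can look like in Kubeflow Pipelines, here is a sketch using the KFP v1 DSL. The container images, arguments, and output paths are hypothetical placeholders, not the actual published component; the point is only the shape of the pipeline: train, then run an ART-based robustness check before deciding to deploy.

```python
# Sketch of a Kubeflow Pipelines (KFP v1 DSL) workflow: train a model, then run
# an adversarial-robustness check step before any deployment decision.
import kfp
from kfp import dsl


def train_op() -> dsl.ContainerOp:
    # Hypothetical training container; replace with your own image.
    return dsl.ContainerOp(
        name="train-mnist",
        image="example.registry/mnist-train:latest",
        file_outputs={"model": "/tmp/model_uri.txt"},   # the step writes the model URI here
    )


def robustness_check_op(model_uri) -> dsl.ContainerOp:
    # Hypothetical component wrapping an ART FGSM attack against the trained model.
    return dsl.ContainerOp(
        name="adversarial-robustness-check",
        image="example.registry/art-robustness-check:latest",
        arguments=["--model-uri", model_uri, "--attack", "fgsm", "--eps", "0.2"],
    )


@dsl.pipeline(
    name="train-with-robustness-gate",
    description="Train, then gate deployment on an adversarial robustness check",
)
def pipeline():
    train = train_op()
    robustness_check_op(train.outputs["model"])


if __name__ == "__main__":
    kfp.compiler.Compiler().compile(pipeline, "pipeline.yaml")
```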
So with this kind of extra measure, we can tell whether the model is ready for production or whether we need to continue development and make the model more robust before we put it out for the public to use.

Once you pass the development stage, once you have done everything to make sure your model is ready to deploy, we want a good framework that can not only deploy those models, but also monitor them and give us feedback on how they are doing in terms of security and performance over time. A model can behave differently depending on the data, and data can change on a daily basis; you might not be aware of new data coming in compared to the data you used during development. To serve models in production on top of Kubernetes, one very popular project is called KServe. KServe used to be part of Kubeflow, and I think this year it moved under LF AI, which is an open, neutral foundation. The goal of KServe is to help users serve their models on top of Kubernetes as a serverless platform, and it also provides extra tools such as canary rollouts, model explanation, and extra pre-processing and post-processing as part of the predictions. Here we also integrate the same Trusted AI tools with KServe, so you can use them to verify your production models while they are running in production.

The integration we have done on top of KServe works in two ways. One is what we call online explanation, or online evaluation, where we run an active explainer server: when a user makes a transaction against the model server, we take that transaction and do a real-time explanation. We can use tools such as AI Explainability 360 or the Adversarial Robustness Toolbox to give a real-time explanation for that particular transaction, which tells you what vulnerability was found and how that transaction was evaluated, so you can actually see how the prediction is being assessed. Of course, evaluating just one transaction might not give you the whole picture. So we also have offline evaluation, also called detection. This is more event-based and runs asynchronously: you can keep sending predictions to the model server and get responses back without any delay, and in the back end we have a logger which logs the payload into a data store that the user or deployer has defined. Behind the scenes we still have our explainer server monitoring that data store, and once we reach a certain threshold, or a certain period of time, we evaluate all that historical logged data, detect any predictions that have vulnerabilities, and notify the user and the admins that this model was vulnerable for, let's say, 5% of the time during the last two-hour period.
And this kind of offline evaluation is very useful for evaluating, say, the fairness of the model, where you need a collection of predictions to make sure the output is not skewed toward a certain group. We can also do robustness detection with offline evaluation, where we want to see how many of the images were actually altered across the predictions from, say, the past two-hour window.

With this, let me introduce a little background on how the online and offline evaluation is done on KServe. On KServe, when a user tries to either predict or explain a prediction from a model, the request usually goes through an endpoint and gets forwarded to a transformer, where the request is pre-processed and post-processed. If the user wants a real-time evaluation of the model, they can go to the explain endpoint. Or, if they only want offline evaluation and don't care about the real-time explanation, they can go straight to the predict endpoint and get the result, and our KServe platform evaluates those predictions using offline evaluation in the back end by logging the user's prediction data.

So let's go over some basic examples of what online and offline evaluation look like using the Adversarial Robustness Toolbox. Let's begin with online evaluation. When we do an online evaluation in KServe, we call the explainer server directly and it gives us back the result at the same time. When we make a prediction, say we send an original image, the explainer takes it and runs an adversarial attack to craft an adversarial version of it, which is essentially generating noise and adding it on top of that image. With the resulting noisy image, it can determine whether or not the original image passes the robustness check. The way we have integrated this, the noise is configured based on the model name, which model is being targeted for the attack, and what type of adversarial attack we want to run against this image and model. We also allow you to configure how strong you want the adversarial attack to be, and which positive or negative class the result should have, so we can actually alter the class when we add noise to the original picture. Doing that on top of our deployment is very simple: we just add all those parameters we previously defined to our model deployment. You can simply add an explainer server by configuring what type of attack you want to use and how many classes are in your model, so we can alter your result into a different class. This explainer basically acts like an online explanation server, because you get real-time feedback using this new explainer definition. And behind the scenes, what happens is basically what you have seen before: the client is calling the explainer server directly.
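To make that request flow concrete, here is a minimal client-side sketch following the KServe v1 data-plane protocol. The ingress address, host header, model name, and payload shape are placeholders, not the values from the recording; the demo below does the equivalent with curl.

```python
# Minimal client sketch for the KServe v1 protocol: the same payload can go to
# ":predict" for a plain prediction or ":explain" for the explainer's output.
import requests

INGRESS = "http://<ingress-ip>"                  # placeholder: your cluster ingress
HOST = "artserver.default.example.com"           # placeholder: InferenceService host header
MODEL = "artserver"                              # placeholder: deployed model name

payload = {"instances": [[0.0] * 784]}           # e.g. one flattened 28x28 MNIST image
headers = {"Host": HOST, "Content-Type": "application/json"}

# Plain prediction.
pred = requests.post(f"{INGRESS}/v1/models/{MODEL}:predict", json=payload, headers=headers)
print(pred.json())

# Online evaluation: ask the explainer to evaluate/alter the same input.
expl = requests.post(f"{INGRESS}/v1/models/{MODEL}:explain", json=payload, headers=headers)
print(expl.json())
```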
Of course, behind the scenes the explainer server might need to make some extra prediction calls back to the predictor to figure out how to alter that image, that is, what kind of noise has to be added to alter it. Once it has collected all the feedback, it returns the explanation metrics as well as the prediction metrics back to the user, so the user gets a whole picture of how their image was predicted and how it can be altered.

With this, let's go over a very simple demo. Right now we have our prediction server deployed on KServe, and any user can make a simple curl request targeting the explain endpoint, passing the server hostname where the model is deployed. Once we run this simple prediction, I basically wrap the results in a Python script to visualize the explanation we get back from the explainer server. As you can see, the explainer server came back with two different outputs. One is the original picture: for the original picture we sent, it also returns the prediction, which is the digit three, and that is correct. However, we also get an evaluation of what kind of noise could be added to this picture to alter the class, turning the prediction into a class nine. You can see that the attack ran for about 20 to 30 seconds and was able to figure out what noise is necessary to add on top of this picture to alter its class, without even needing access to the model code or the model gradients. So you can see how powerful the Adversarial Robustness Toolbox is in helping us identify vulnerabilities, and in helping us understand how we might want to defend against this kind of attack as well.

So now we have seen how AI security can be done in a real-time manner. But not everyone cares about real-time feedback, because it takes time to compute for each transaction. A more common and more efficient way to do this in an organization is in batch, offline. With offline evaluation, we can compute richer information on robustness, and on fairness as well. One of the integrations we're going to show now is how you can use offline evaluation to calculate, say, the fairness of a particular model, using fairness detection inside KServe. Usually a user predicts one transaction at a time, and once we collect a batch — in this case four different transactions — we can use a tool called AIF360 to calculate the metrics we want to look at. We can look at the base rate, the ratio between positive and negative outcomes, to make sure it stays consistent, and we can evaluate the disparate impact, where we want to make sure the ratio for one particular class isn't too far above the other class. And this is just one single evaluation on one collection of data.
But if we want to do this continuously, we need a continuous way to log the data, store it in a database, and run the evaluation periodically and on demand. For this, KServe introduces payload logging to enable all these features on top of the Trusted AI tools. Basically, in the back end, KServe has a logger that collects the user's prediction payloads and stores them using the CloudEvents protocol. Our admins and deployers can then collect all those CloudEvents, process them, and format them in a way our tools can analyze to produce useful metrics. Because a CloudEvent is not plain JSON — it is formatted as an event payload — we need some ingestion during log collection to make the data usable and accessible from our Trusted AI tools. One of the things we have integrated is Kafka, built on top of Knative eventing. When the KServe logger sends events, Knative eventing collects all those CloudEvents and pushes them to a Kafka cluster, publishing them on a topic channel. We then have an ingester, in this case a Kafka Connect component, acting as the consumer, consuming those events and ingesting them into JSON. Once the payload is ingested as JSON, we push those formatted JSON objects to a persistent database; in this case, we use a MySQL DB. Our server, in this case AIF360, can then easily pull from the database, filter by user, time, or attribute, evaluate whatever data you want to evaluate, and give you very useful metrics.

Just doing a time check — yeah, you may want to wrap up. Right. So let's close with one last video demo, because this kind of process takes a lot of setup, so we have a very simple, fast demo. In this case, we demonstrate how we have deployed Knative eventing with the Kafka brokers to collect all the events, and the Kafka connector to consume the events and ingest them into a database. With that set up, as you can see, we have deployed a KServe model based on a German credit model, which calculates a user's risk when they, say, apply for a loan. Once we do a simple prediction — say we predict for a list of users and calculate whether or not they present a credit risk when applying for a loan — we can see nine out of ten users don't have any risk and only one user has a risk class of 2.0. Those events get passed into a Kafka channel, which collects all that information. And in this case, we are going to demonstrate that you can actually see those events getting passed in real time.
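As an aside before the rest of the demo: the setup described above uses Knative eventing, Kafka Connect, MySQL, and AIF360, but just to show the shape of that offline path, here is a small hand-rolled sketch that consumes the logged events from Kafka, computes a simple aggregate, and exposes it as a Prometheus gauge for a Grafana dashboard. The topic name, broker address, payload fields, and the metric itself are assumptions for illustration only.

```python
# Hand-rolled sketch of the offline path: Kafka -> parse logged payloads ->
# compute a simple aggregate -> expose it to Prometheus for Grafana.
# (The demo itself uses Knative eventing + Kafka Connect + MySQL + AIF360.)
import json
from kafka import KafkaConsumer                      # pip install kafka-python
from prometheus_client import Gauge, start_http_server

LOW_RISK_RATE = Gauge("low_risk_prediction_rate",
                      "Share of logged predictions classified as low risk")

def main():
    start_http_server(8000)                          # Prometheus scrapes metrics from :8000
    consumer = KafkaConsumer(
        "inference-logs",                            # assumed topic fed by the KServe logger
        bootstrap_servers="kafka:9092",              # assumed broker address
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    predictions = []
    for message in consumer:
        event = message.value                        # event body logged by the model server
        # Assumed payload shape: {"predictions": [1, 1, 2, ...]} where 1 = low risk.
        predictions.extend(event.get("predictions", []))
        if predictions:
            LOW_RISK_RATE.set(predictions.count(1) / len(predictions))

if __name__ == "__main__":
    main()
```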
So when we create a prediction on the KServe model, the event is passed in real time, and our Kafka connector is able to ingest those events in real time as well. As you can see, in this case the row count in the database has now increased to 86; as we do more predictions, it ingests more rows into the database. Our AIF360 server can then pick up that new data and calculate a new set of metrics based on it. And last but not least, with all these metrics, we can also push them into a monitoring service at the very end. Once you have collected all those metrics, you can push them into Prometheus, and since Prometheus is a time-series database, you can also visualize them using a Grafana dashboard. With Grafana we can finally visualize the Prometheus metrics: as you can see, over time, as we keep making the same prediction more and more, it produces a lot of results that classify users as lower risk, so the base rate we calculate is actually going down over time, because we have more results skewed toward one category. This is the kind of information you can collect in real time and monitor, to make sure your model is not getting too skewed in a certain direction.

And with this, we have summarized how offline and online evaluation can be done using all these Trusted AI tools. If you want to know more, feel free to go to the Trusted-AI GitHub organization; we have all these tools there, like the Adversarial Robustness Toolbox, AIF360, and AI Explainability 360, along with the Slack channels shown on this slide. With this, thank you very much, and feel free to hop into our Monday call to learn more about Trusted AI and how it can be used in terms of MLSecOps. Thank you very much.

Thanks, Tommy. Thanks everyone. Thanks for watching.