Hello and welcome, everyone, to the Linux Foundation Open Source Summit North America, happening here in Seattle. Glad to have you all here. Good morning, good evening, good afternoon, depending on which part of the globe you're watching from. I'm Animesh, the CTO for the Watson Data and AI open source platform, part of a team in IBM called CODAIT. And with me I have my colleague Andrew. Andrew, do you want to introduce yourself?

Hi, I'm Andrew. I've worked on the Kubeflow and Trusted AI integration in the Kubeflow project, on Animesh's team in CODAIT.

Great, thanks a lot, Andrew. So we'll share our slides. As you know, the topic we are going to discuss today is essentially security in AI. Of late we have been seeing a lot of security incidents, and being at the forefront, that's a topic we want to address. Can you see my slides? Yep. Okay, thanks.

So the topic for today is how to defend your models against adversarial attacks. What do we mean by adversarial attacks, and how do you do that? We'll get into it as we move through the slides. As we introduced ourselves, my name is Animesh Singh and this is my colleague, Andrew Butler. We are part of a group in IBM called CODAIT, the Center for Open-Source Data and AI Technologies. What you see here is a very nice picture of our lab in Silicon Valley, in south San Jose, nestled in green hills with a lot of nice hiking trails. We have a cricket field as well. So if you want to be in Silicon Valley but don't want to be bothered by its hustle and bustle and want a quieter place, this is the place to be. The majority of the team is here, but being a global company, we have distributed members across the globe. In CODAIT we contribute to a large number of projects across different parts of the AI lifecycle.

Now, a bit about IBM in general. CODAIT is a group in IBM, and IBM has a history of tech for social good; we have been involved in a lot of use cases when it comes to social good. The moon landing was one such project, where virtually thousands of IBMers worked in close collaboration with NASA to enable it. IBM has done a lot around human genome sequencing, and of late, with efforts like Call for Code, we have been doing a lot around how to respond to infectious diseases and how to handle climate change, in partnership with the United Nations and others. If you're interested in these activities, please do reach out to us: we run hackathons, we run projects, and a lot is going on in these spaces.

Carrying forward that tradition, we have also been working very heavily on how to bring in trust and ethics and how to build responsible AI. Even before this became a buzzword, IBM, and IBM Research in particular, has been very active in this field over the course of the last four to five years, looking at techniques, technologies, and algorithms. A lot of that made its way into research papers and finally landed in code, all geared toward making sure that the AI you're building, the AI platforms you're using, the models you're producing, and the datasets you're using are handled in a trusted, transparent, and ethical way.
Now, what is our vision for trusted AI? Currently we look at it from the perspective of four pillars. Robustness, which is essentially the topic of the day: can anyone tamper with your AI models? Are your datasets tamper-proof? Robustness is about how you measure the security of your AI infrastructure, your AI platform, and your AI assets. Fairness: is your model fair? Is it giving biased outputs? Is it discriminating against a particular gender, a particular race, a particular religion? Explainability: can the model explain its predictions? When it is making life-changing decisions for you, whether you get admitted to a university or not, or whether you're getting a loan or not, is it able to tell you why? And last but not least, lineage, which is essentially having auditability and governance built into the AI lifecycle. If a model is making a certain prediction, can you trace back what dataset it was trained on, what framework was used, whether TensorFlow or PyTorch, what the version was, and what hyperparameters were used? When you created a new dataset and did feature engineering, what were the features used to produce that model? That whole traceability and lineage question is the fourth pillar, and we have a project in that space as well.

These map to four projects which we have moved into open source, because when we talk about trusted AI, you cannot have the code hidden behind the firewalls of your organization in a proprietary manner. We wanted to move that code out into the open and develop it jointly with the community. That's why we open sourced a lot of these projects under the Trusted AI umbrella. One of the first ones to go out was the Adversarial Robustness 360 Toolbox, or ART for short, which is focused on robustness; we'll have more details on this project as we move forward. The second one is around fairness, AI Fairness 360, or AIF360 for short. It has around 70-plus metrics on which you can measure fairness and more than 10 algorithms you can use to mitigate bias in your datasets and models. It's a very popular project of ours and is being used in a lot of industries, so if you're interested in the fairness part of trusted AI, definitely take a look at it. Then explainability: is your model explaining why it's making certain decisions for you? The project we have in that space is called AI Explainability 360, or AIX360 for short. It also provides an interface on top of very popular explainability toolkits like LIME and SHAP. Definitely check it out. And last but not least, is it accountable? For that we have a project called AI FactSheets 360. Think of it like creating a standard, the way you're used to seeing nutrition labels on food items: similarly, a label is produced for any AI asset in the marketplace. Can you actually have all the lineage and data documented in that standardized format?
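Coming back to AI Fairness 360 for a moment, here is a minimal, hypothetical sketch of what computing one of those metrics, disparate impact, looks like on a toy dataset; the column names and values below are made up purely for illustration and are not from the talk.

```python
# Hypothetical example: computing disparate impact with AIF360 on a tiny
# toy dataset (values are illustrative only).
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = privileged group, 0 = unprivileged
    'label': [1, 1, 1, 0, 1, 0, 0, 0],   # 1 = favorable outcome
})

dataset = BinaryLabelDataset(df=df, label_names=['label'],
                             protected_attribute_names=['sex'],
                             favorable_label=1, unfavorable_label=0)

metric = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=[{'sex': 1}],
                                  unprivileged_groups=[{'sex': 0}])

# P(favorable | unprivileged) / P(favorable | privileged); values near 1 are better.
print("disparate impact:", metric.disparate_impact())
```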
Obviously, when we started these projects in IBM Research, the research papers went out to the community first, then the actual code, which we moved into open source. But open source on its own is not enough. What we did next was move them into open governance. That means no single vendor, including IBM, controls these projects; we wanted them developed collaboratively with the community, with a lot of participation coming in. And what better way to do that than moving them into open governance under a neutral foundation. So we joined forces with Linux Foundation AI & Data, and coincidentally we are speaking at a Linux Foundation conference here. We donated these projects there, and they have been growing rapidly with the advent of community, with neutral licensing and a neutral trademark in a neutral place with the right governance model.

To advance the conversation, the technologies, and the principles around this whole space, we launched the Linux Foundation AI & Data Trusted AI Committee, with a focus on two groups: what we call the Principles Working Group and the Technical Working Group. Obviously IBM had a vision and a set of pillars for trusted AI, but we wanted to work collaboratively with the community, with the likes of Orange, Microsoft, Tencent, the Ethical ML Institute, AI for People, General Motors, and others, and come up with a combined view within this Trusted AI Committee of what we mean when we define something as trusted and ethical in the AI world. That's where the Principles Working Group is focused, and the Technical Working Group is focused on leveraging, producing, and contributing code to make sure all the use cases in this space are being addressed. I'm the North America chair for the Linux Foundation Trusted AI Committee, and we meet monthly to go through these tools and technologies, and a lot of the participating member companies come and present what they are doing. So if you're interested, definitely join; there are a lot of interesting discussions, presentations, and advancement of trusted AI happening through this committee.

Now, one of the things the Linux Foundation AI & Data Trusted AI Committee came up with is what it means, in terms of principles, to call something trusted AI. The eight principles it came back with are reproducibility, robustness, equitability, privacy, explainability, accountability, transparency, and security. There is a white paper behind this, quite a few presentations, and upcoming webinars, and a lot of the participating companies you saw on the previous slide came together to produce it. If you need more details, please reach out to me and I can point you to why we came up with these principles and what they mean. And today we are going to focus on security; that is going to be the focus of our talk. So let's move forward into this particular space of trusted and ethical AI. Over the course of the last year and a half, things have obviously been unprecedented; we are living through a global pandemic.
What we have also seen over the course of the last year is that the amount of ransomware attacks, cybercrime, and cyber attacks has increased a lot. That tells us the hackers are trying to take advantage of a somewhat relaxed security posture, but it's also a sign of the times: companies need to brace up and make sure that what they are doing in the context of security in general really holds up. One of the things we saw very recently is that a lot of CEOs, including our own Arvind Krishna from IBM, met President Biden to form a task force around cybersecurity.

Now, when we talk about the state of security for AI, what we feel is that, in general, the awareness of risk is low, there is very little understanding of what AI security means, and the security posture is almost nonexistent. That is prompting analysts like Gartner to come back and say, after talking to 600-plus executives, that these executives are concerned machine learning is presenting a new attack surface and increasing security risks, and that we don't have enough to defend ourselves against it. So there is a lot of realization in the community and in companies that this is something we now need to address.

Obviously, when you are looking at what we call private and secure AI, you want to handle it from multiple angles. Security is one area, where you want the right kinds of tools and technologies to protect against adversarial threats and model threats, and to provide certified defenses and so on. Then you also want to make sure privacy is preserved, both for data and for AI models, which also goes back to some of the global regulations in place. And when we are working in an environment where multiple parties exchange information, we want to make sure there is confidentiality and trust among the collaborating partners.

In general, this is not just nice to have; it's a must-have. Look at GDPR, which came in as a regulation and forced a lot of companies to go back and take a look at their practices for how they were handling data. This is also applicable to AI. A consequence of GDPR's broad definition of personal data is that even machine learning models which have been trained on that data can leak that data and, in principle, can themselves qualify as personal data. This way of thinking has been propagated, and there are now research papers coming out showing that if training data was used to create a model, that process can be reverse engineered. That means you're not only required to protect your data and comply with the GDPR rules, you also have to make sure that the models you are producing are not amenable to adversarial attacks where the process can be reverse engineered and someone can reproduce some of the sensitive information from the data on which those models were trained.
So it actually starts falling within the concept and purview of GDPR there. Now, there can be real-life consequences. One of the examples we used a couple of years ago, when we launched the Adversarial Robustness Toolbox, was stop signs: whether through adversarial manipulation or just wear and tear, a self-driving car fails to detect a stop sign and drives right through it, causing real-life consequences. That was an example we used when launching the toolkit and in a lot of demos, but it has actually happened in the real world. There was a headline that hackers steered a Tesla into oncoming traffic by placing three small stickers on the road. So we were not far off; this is now happening. These are all real-world examples: ransomware being installed and encrypting your computer, your email security system being compromised and increasing your chances of phishing attacks, your health data being compromised. All these real-world attacks are happening right here, right now.

So when we talk in the context of adversarial threats to machine learning, you are looking at a few different kinds of threats. Evasion is essentially modifying your input to influence the model output: typically you have a black-box model where you're sending certain inputs and getting outputs, and you keep modifying the input as you learn from it, sending adversarially generated input until you get the desired output. Poisoning is where, if a hacker has a backdoor entry, they can go back and modify your training data and use that exploit later on, when your models are running in production, to carry out much bigger attacks; this kind of attack has been happening a lot as well. Extraction is where the proprietary models themselves are being stolen. And inference is essentially the reverse engineering I was talking about: based on how you attack a particular model and the outputs you get, you can learn more about the private data that was used behind the model. So a lot of these different kinds of attacks are happening at the surface of the model.

The Adversarial Robustness Toolbox is a toolkit that plays in this area and gives you capabilities to mitigate these threats. It's a tool for model developers as well as researchers, and it works in three areas: evaluation, where it measures whether your model is vulnerable to adversarial attacks; defense, where, if the model is found vulnerable, it gives you algorithms to defend against those attacks; and certification, where, based on certain metrics, it tells you whether your model is robust or not. All these capabilities are built into the toolkit for different kinds of models, for example classification models, object detection models, generation and encoding models, and so on. And it works across both deep learning and machine learning frameworks: TensorFlow, Keras, PyTorch, and MXNet on the deep learning side, and scikit-learn, XGBoost, and CatBoost in the classical machine learning world. It works consistently across all these frameworks with all kinds of data, whether images, tables, audio, or video. It's a very popular project and is being used extensively.
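To make the evaluation side concrete, here is a minimal, hedged sketch of using ART to check a model's vulnerability to an evasion attack. It uses a small scikit-learn model so it is self-contained; the dataset, model, and epsilon value are arbitrary illustrations rather than the toolkit's own examples, and the other threat categories are only pointed at in comments.

```python
# Minimal sketch: wrap a model with ART and measure how much accuracy an
# evasion attack (FGSM) costs it. Illustrative only.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod
# The other threat categories live in sibling modules, for example:
#   art.attacks.poisoning   -> data poisoning / backdoor attacks
#   art.attacks.extraction  -> model stealing
#   art.attacks.inference   -> membership inference, model inversion

x, y = load_digits(return_X_y=True)
x = x / 16.0                                    # scale pixel values to [0, 1]
model = LogisticRegression(max_iter=1000).fit(x, y)

classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
attack = FastGradientMethod(estimator=classifier, eps=0.2)
x_adv = attack.generate(x=x)

clean_acc = np.mean(np.argmax(classifier.predict(x), axis=1) == y)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y)
print(f"accuracy on clean data: {clean_acc:.2%}, on adversarial data: {adv_acc:.2%}")
```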
Now, the tools present there serve both sides of what, in industry terms, are called the red team and the blue team. The red team is the people who are trying to take advantage of, or attack, your models: all the methods we talked about, poisoning, inference, extraction, evasion, that's what the red team is doing, and ART lets you craft and evaluate those attacks. And then you have the blue team, which is what a team needs in order to defend against these kinds of attacks. ART detects whether data has been poisoned; it lets you do adversarial training on adversarially generated samples while training the models, ensuring the output being produced cannot be manipulated; it does detection against evasion; and it does certification and verification. So ART provides a lot of algorithms and tools to work against what the red teams, or the hackers, are trying to do to your models and the kinds of adversarial attacks they are launching.

The way the ART repository is organized, it has methods and algorithms to craft different kinds of attacks, whether evasion attacks, poisoning attacks, or extraction attacks. If your model is vulnerable to adversarial attacks, it gives you a lot of defense mechanisms using detectors, trainers, transformers, and so on. There are also metrics, so you can certify your models with different scores for how vulnerable they are to adversarial attacks and attach quantified robustness metrics to your models. And there are tools for evaluating the defenses, how much defense you have actually built into your models.

Multiple companies are contributing to and using ART. IBM is obviously the originator; we moved the project into open source, then into the Linux Foundation, and a community has formed. We have companies like Microsoft, TrojAI, Intel, General Motors, and DARPA, and academia as well, like Rensselaer Polytechnic Institute and AGH University, working with us jointly in open source, using the project and contributing back. The project is pretty popular: as some of the metrics show, we have had more than 150,000 downloads, and other tools have been built on top of ART, such as Armory, Counterfit, and privacy toolkits from different vendors and companies, which are also available in open source.
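And to make the blue-team side mentioned above concrete, here is a minimal, hedged sketch of one of those trainer defenses, adversarial training. It assumes an ART-wrapped neural-network classifier (for example a KerasClassifier) and training arrays x_train and y_train already exist; the parameter values are purely illustrative.

```python
# Minimal sketch of a "blue team" defense: adversarial training with ART.
# Assumes `classifier`, `x_train`, `y_train` exist; values are illustrative.
from art.attacks.evasion import FastGradientMethod
from art.defences.trainer import AdversarialTrainer

fgsm = FastGradientMethod(estimator=classifier, eps=0.1)

# ratio=0.5 replaces roughly half of each training batch with adversarial
# examples crafted on the fly by the attack above.
trainer = AdversarialTrainer(classifier, attacks=fgsm, ratio=0.5)
trainer.fit(x_train, y_train, nb_epochs=10, batch_size=128)
```

The usual trade-off applies: training on adversarial samples typically costs a bit of clean-data accuracy in exchange for robustness.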
In fact DARPA, which as we all know is the defense research arm of the US government, came and gave a presentation very recently in the Linux Foundation AI & Data Trusted AI Committee on how they are using ART in their GARD program, where GARD stands for Guaranteeing AI Robustness against Deception. The goal is to put technologies, tools, and techniques in place to counter the different kinds of attacks that can be mounted on AI models. From their perspective there are three broad classes of attacks: physical attacks, where you go into the real world and modify things like stop signs and traffic signs to fool something like a self-driving car; poisoning attacks, where you get backdoor access to the data and create an exploit which you leverage later on; and digital attacks, where your models are deployed in production and you send adversarially generated inputs to get different outputs. All these different kinds of attacks are the area DARPA is working on, and they are leveraging the Adversarial Robustness Toolbox as part of that work and collaborating very closely with IBM. They have provided substantial funding to the ART tools as well, so we are grateful to them for this collaboration.

Okay, let's talk a bit about ART in practice. You can run this demo on your own on a website: go to art-demo.mybluemix.net. You can look at this image and then choose a kind of attack, in this case the fast gradient method. Initially, without any attack, the model is about 92% confident that this is a Siamese cat. Let me refresh it a bit. All right, so right now it's 100% confident this is a mousetrap and this is a Siamese cat, and let me choose a method. Now, if I keep the attack strength low, it thinks it's an Egyptian cat. And if I increase the attack strength with this fast gradient method, the confidence keeps decreasing. We can choose something else, like projected gradient descent, which is a pretty strong attack; in that case it starts predicting it's a basketball. And you can see the code behind it, which is the ART code, using the ART SDK to set up projected gradient descent and launch that attack. So this model is found to be vulnerable to adversarial attacks; as you can see, the model is now predicting this is a basketball. So you can use one of the defense techniques and algorithms that are provided, something like spatial smoothing, for example, which smooths over local pixel areas of the image so that there is less surface to attack. By applying this, as you can see, the model is back to predicting with 76% confidence that this is a Siamese cat. And if I increase the spatial smoothing further, meaning I'm reducing the attackable pixels and shrinking the attack area, the model's confidence increases. You can try out the demo here, and for the project itself you can go to the GitHub repository under github.com/Trusted-AI, where you will find the project and all the different algorithms for generating adversarial attacks and defenses.
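Behind that demo, the attack-and-defend loop is only a few lines of ART. The sketch below is a hedged approximation rather than the demo's actual source: it assumes an ART-wrapped image classifier and a batch of images x scaled to [0, 1], and the parameter values are illustrative.

```python
# Sketch: a projected gradient descent attack followed by a spatial
# smoothing defense, roughly what the web demo illustrates.
from art.attacks.evasion import ProjectedGradientDescent
from art.defences.preprocessor import SpatialSmoothing

# A strong iterative evasion attack (the "basketball" misclassification).
pgd = ProjectedGradientDescent(estimator=classifier, eps=0.1,
                               eps_step=0.01, max_iter=40)
x_adv = pgd.generate(x=x)

# Spatial smoothing runs a median filter over local pixel windows,
# shrinking the surface an attacker can perturb.
smoother = SpatialSmoothing(window_size=3)
x_smoothed, _ = smoother(x_adv)

print(classifier.predict(x_adv).argmax(axis=1))       # likely fooled
print(classifier.predict(x_smoothed).argmax(axis=1))  # often recovers
```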
Great. Let me go back, and we'll cover this quickly. Okay, let's talk about Kubeflow and Trusted AI. We are at this conference, so for the most part I'm assuming folks are aware of what Kubeflow is. It's a project in the machine learning and MLOps space for end-to-end machine learning and AI. It gives you tools and technologies for creating your models, running distributed training of your models, launching hyperparameter optimization, and deploying your models in production, and then it gives you capabilities like pipelines to tie all these things together. Specifically, there are two projects which are very popular in this space, Kubeflow Pipelines and Kubeflow serving (KFServing).

A bit about these projects. Kubeflow Pipelines gives you a Python DSL to program your pipelines using Python, which you can then launch on cloud infrastructure like Kubernetes. A lot of the Kubernetes capabilities, like Kubernetes secrets and volumes, are exposed in Python-centric ways, so data scientists just need to program their pipelines in Python, and behind the scenes, when we launch them on Kubeflow, each of the steps is orchestrated using containers. It's a very, very popular project in this space, and it lets you run a lot of the end-to-end machine learning and AI lifecycle capabilities.

We have integrated a lot of the Trusted AI projects around fairness, explainability, and adversarial robustness into the Kubeflow umbrella, and Andrew is going to talk in more detail about how we are leveraging something like the Adversarial Robustness Toolbox with Kubeflow in this context. Similarly, you can do more advanced things using these pipelines: as you can see, we are not only training and deploying our models, we are also monitoring them with drift detection, outlier detection, and a lot of the other Trusted AI capabilities, all of which can be done using these Kubeflow pipelines.

The other significant project in the Kubeflow umbrella is called KFServing, which was founded by Google, Seldon, IBM, Bloomberg, and Microsoft. It's focused on deploying your models in production, but also on monitoring your models in production for things like bias, drift, anomalies, and other issues. As part of that, we have also integrated the Trusted AI projects into the KFServing suite, and they are available there for the monitoring and metrics capability when you deploy your models in production. Again, Andrew is going to show some of these capabilities as we move forward.

Now, one of the technologies we use behind the scenes in KFServing to enable some of these metrics around adversarial robustness, fairness, and so on is payload logging. It is responsible for collecting all the inputs coming in for model inferencing, taking the responses, and logging them over a period of time so we can do more advanced analysis: whether your model has been drifting, whether there is an anomaly, whether your model is being fair over the course of many predictions. That's what the payload logging capability is used for, and it builds on the standard CloudEvents protocol.

Okay, and now I will pass it on to Andrew to take you through how we are leveraging ART within the Kubeflow umbrella. So, Andrew, over to you.

Great, thanks a lot. So we're going to show some of the Kubeflow Pipelines examples first, and then we'll move on to KFServing. As Animesh mentioned previously, we have Trusted AI pipelines that can go through and work on a trained model that you have.
They take a look at how fairly the model is behaving, get explanations for the classifications it has made, and test its robustness against adversarial examples as well. Specifically, one of the ART components we have, which we're going to look at in a demo, runs through Kubeflow Pipelines. So we're going to take a look at some of the input and output parameters to get an idea of what exactly it's doing, and then I'll show you the Kubeflow dashboard so we can take a look at it in real time.

Some of the input parameters we have here are basics for any model, like the clip values and the shape of the samples you're giving your model; the ART algorithm needs these to understand your data a little better. There are also the test set paths: where exactly your feature test set and your label test set are, so the component can go and grab them. The method we're using is FGSM, the fast gradient sign method. It basically takes a number of samples, runs them against your model, calculates the loss and the gradient of the loss, then takes the sign of the gradient and moves in that direction to get closer to an adversarial example. Once the fast gradient sign method starts working, you'll start to see examples with different levels of noise around them; in most cases they look fairly similar to the original sample, just a little different, and the average person wouldn't be able to tell that the sample has been attacked. That's where we have inputs like the FGSM attack epsilon, which controls how quickly we move toward an adversarial example, and other inputs like which loss function we actually want to use, and things like that.

Then we move to the output parameters. What this component actually does is take the accuracy on the test data that hasn't had adversarial perturbations applied and then check the accuracy on the adversarial samples as well. In this picture, the test data accuracy is 87% compared to an adversarial sample accuracy of 13%. Obviously that tells you this is probably not a robust model, because there's a significant, greater than 50%, difference in accuracy between the adversarial samples and the plain test data. The component will also give you a little more information about where the robustness is having issues: the confidence reduction on each sample, and the average perturbation in the misclassified samples, that is, how far the algorithm had to change your picture in order to get it to misclassify. And then it will also state whether or not it thinks the model is robust. In this example, it does not.

So, pulling up the Kubeflow dashboard here: this is what the pipeline looks like originally, and you can see the training steps, the fairness check that we'll be doing, and the adversarial robustness evaluation. This actually compiles down to a very large, very complicated YAML. But like Animesh said, what we actually write originally is a Python DSL, so you won't have to touch that very long, complicated YAML; most Python DSLs look much smaller than this. You can list the parameters that we specified earlier and get a more general view of the pipeline in the Python DSL. The Python DSL then compiles into the YAML that we see here.
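For a flavor of what that Python DSL looks like, here is a minimal, hypothetical sketch using the KFP v1 SDK. The component names, container images, and arguments below are placeholders, not the actual pipeline from the demo.

```python
# Hypothetical sketch of a trusted-AI pipeline in the Kubeflow Pipelines
# v1 Python DSL; images and arguments are illustrative placeholders.
import kfp
from kfp import dsl

@dsl.pipeline(name='trusted-ai-demo',
              description='Train a model, then run fairness and robustness checks')
def trusted_ai_pipeline(fgsm_attack_epsilon: float = 0.2):
    train = dsl.ContainerOp(
        name='train-model',
        image='example.com/train-model:latest',        # placeholder image
        arguments=['--model-path', '/mnt/model'])

    robustness = dsl.ContainerOp(
        name='adversarial-robustness-evaluation',
        image='example.com/art-fgsm-check:latest',     # placeholder image
        arguments=['--epsilon', fgsm_attack_epsilon,
                   '--model-path', '/mnt/model'])
    robustness.after(train)

    fairness = dsl.ContainerOp(
        name='model-fairness-check',
        image='example.com/aif360-check:latest',       # placeholder image
        arguments=['--model-path', '/mnt/model'])
    fairness.after(train)

# Compiling produces the large YAML that the Pipelines UI renders as a DAG.
if __name__ == '__main__':
    kfp.compiler.Compiler().compile(trusted_ai_pipeline, 'trusted_ai_pipeline.yaml')
```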
That YAML in turn becomes the DAG that gets displayed in the Kubeflow Pipelines UI. So then we can go create a run. We can choose an experiment; I already have one set up for Trusted AI. Then you can input the parameters you want, if you want to play with the epsilon value, point to where your data lives, or set the namespace you're using. In this example we're going to use the kubeflow-user-example namespace, just because it's the basic one here, and now we can run it from here. On top of the parameters we talked about in the slides, there are also some parameters for the fairness component.

So we'll run this, and it will show up as running and pending, and then as it moves through, the other pieces of the DAG will show up. This one is already cached, so it'll go fairly quickly; generally this pipeline takes 10 minutes or so, but because we're just getting cached results, it's going to come up fairly quickly. You can see from the logs here that this was taken from the cache. This is just the train step, so it has lots of information, but let's look specifically at the adversarial robustness step. Now we can see exactly what those values look like: this is the input shape, and we're using a 0.2 attack epsilon, which we had specified in the original parameters. And as we talked about, in the output here you can see it had fairly similar, almost identical, results to what we had before, and it has again concluded that the model is not very robust. These outputs also get put into our output artifacts, so you can utilize them in other pipelines as needed and work with them later.

Then model fairness is fairly similar to what we saw from ART: all of the parameters we pre-specified, as well as a verdict on whether things are biased or fair. We won't touch too much on this because this talk is mostly about security, but to give you an example, disparate impact is the probability that someone from an unprivileged class gets a favorable outcome divided by the probability that someone from a privileged class gets a favorable outcome, and any number between 0.8 and 1.2 is considered fairly good. So this is a fairly unbiased model we've trained here: it is not robust, but it is very fair.

Now we're going to move on and take a look at KFServing. As Animesh mentioned, with this project we have been moving, and are attempting to move, more and more of the Trusted AI projects into KFServing, enabling them as needed, along with payload logging and the different Trusted AI methods. One of the things we've done pertaining to security and the Adversarial Robustness Toolbox is implementing the square attack method, which basically takes an image, like the MNIST digit on the left, in this case a sample of the digit one, adds some noise over it, similar to what you see on the right, and tries to get your model to misclassify. Some of the parameters are similar to how it's done in Kubeflow Pipelines, but you can specify the adversary type, and as more and more types get added you can choose something else; in this example we'll use the square attack, whereas in the Pipelines example we used the fast gradient sign method. You can also set the maximum number of iterations, which is how many iterations your ART method can run while trying to find an adversarial example.
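Under the hood, the robustness explainer's square attack corresponds to ART's black-box SquareAttack, which only needs the model's prediction scores, no gradients, which is what makes it a good fit for probing a predictor sitting behind a serving endpoint. Here is a minimal, hedged sketch of calling it directly, assuming an ART-wrapped 10-class classifier and a batch x of images scaled to [0, 1]; the parameter values are illustrative.

```python
# Sketch: ART's black-box square attack, roughly what the KFServing
# robustness explainer runs against the deployed predictor.
# Assumes `classifier` (ART-wrapped, 10 classes) and image batch `x` exist.
import numpy as np
from art.attacks.evasion import SquareAttack

attack = SquareAttack(estimator=classifier,
                      max_iter=200,    # cap on the search iterations
                      eps=0.3)         # maximum allowed perturbation
x_adv = attack.generate(x=x)

# Compare predictions on the original and perturbed images.
print("original:   ", np.argmax(classifier.predict(x), axis=1))
print("adversarial:", np.argmax(classifier.predict(x_adv), axis=1))
```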
Basically, the way this looks in KFServing is that you get an explainer spec like this: you have a predictor that you've already developed and deployed, and all you have to do is add the explainer spec on as well. It takes parameters similar to the ones we mentioned earlier: you can set the square attack type, the number of classes, which is 10 here, and if we wanted to change the max iterations we could do that too. So if you've already deployed a predictor, it's very simple to deploy an explainer as well. The way the flow works is that a client requests an explanation, which gets routed through the Istio gateway to the explainer. It could be an AIX explainer, a robustness explainer, or a fairness explainer, and that explainer can sample from your predictor through the Istio local gateway and get an idea of what your model looks like.

So for that we'll show a demo as well. Let me pull it up. Basically what we're going to show here is that we're going to put up an inference service and then query it. Speaking of the security we mentioned earlier, there is Istio and Dex for authentication here, so there are a few extra things we have set up previously so we don't have to do them now: getting a session token and using it when we run the queries. Once you've done that, if you have a full Istio/Dex Kubeflow deployment, you can move on to working specifically with ART.

What we have here is an inference service, which is already up, and I'll show you the pods that are up for it: there's one for the predictor, the model that we have, and also one for the explainer. You can see the explainer and the predictor here. Like we said, the explainer will query the predictor, the predictor will send back responses, and that's how your explainer gets an idea of the model. On top of that, we'll look at the inference service just to show that it is up as well, so it's ready to receive requests, and now we can send them. We already have a script set up for querying it and dealing with the authentication we mentioned earlier. As we run it, it's going to probe the model and search for what we can assume is an adversarial example for the original image. It takes a couple of seconds and then gives us a response.

And it found an adversarial example. On the left we've displayed the original image, and on the right we've shown the image that our model mispredicts on. Just with a simple rectangle added here, we can get the model to mispredict a nine where it should be saying a three. You can collect a bunch of these adversarial examples and retrain your model on them if you want to add some robustness measures to defend against these adversarial attacks.

So, jumping back in here: we have a large team for the Adversarial Robustness Toolbox. Like I mentioned, you can find us on github.com, specifically in the adversarial-robustness-toolbox repository. There are the LFAI monthly meetings and tons of places you can reach us: Slack, GitHub, all over the place. And here are some extra links as well: both of the demos we showed here, linked on GitHub, and Animesh's and my contact information.
Thanks, thanks a lot, Andrew. I think this was great. As you can see, there are a lot of other sessions happening at Open Source Summit North America by the CODAIT team, so please go and attend them. I hope you learned a few things from our session today: why the need to act on model security and AI security is so prominent right now, and why it's not just nice to have. When you're looking at laws like GDPR, it's one of those things you must do, because these things go back to the point that a model is a representation of its data, and that process can be reverse engineered. And hopefully you also learned how you can leverage these toolkits, and more specifically ART, in an MLOps platform like Kubeflow, whether through Kubeflow Pipelines or KFServing, to defend against adversarial attacks in real time, whether for a model deployed in production or while you are running distributed training and so on using Kubeflow Pipelines. With that, thanks again and thanks for joining us. See you in some other session or at some other conference. Thank you, guys. Thanks.