We can begin. Hello everyone, thank you very much for staying so late in the conference to hear this last talk. I'll be talking about adversarial machine learning: some rules can be bent, others can be broken. There's a very strong thread relating to The Matrix running through the slides, so maybe you'll be able to catch all the different subtle references. When I built this deck I was thinking about how to best approach a subject that is often depicted as very complex, machine learning and the mathematics behind it, while on the other hand looking at it from a breaker's, a hacker's perspective: this is a system just like any other system, and just like any other system it has its rules, it has its ways of behaving, and those rules can be bent and those rules can be broken.

First of all, a legal disclaimer. I'm pretty much obligated by my employer, Intel, to show this slide whenever I give a talk. I'm going to do some hand-wavy stuff, meaning I will not go very deep into the technical, mathematical explanations behind every concept I'm going to show here, and I apologize for that. On the other hand, if you feel the need for a deeper dive, or you want me to elaborate or expand further, just grab me after the talk and I'll be happy to either provide references or elaborate a bit more on each concept. I really, really wanted to do a demo, and there are lots of cool things we could do, but unfortunately I didn't have enough time to prepare one, so I'm going to do the next best thing, which is to show you slides. I'm not very happy with that, but I hope you'll still come away with a couple of new concepts and an understanding of what AI is and what kind of things we can do to that AI when we start looking at it from a hacker's perspective.

So, who am I? My name is Guy Barnhart-Magen. I'm a security research manager at Intel. Yes, Intel does security and all kinds of interesting stuff. I actually wear two different hats; I manage two different groups. One is a software reverse engineering group and the other is the AI security innovation group. Both of these fields merge together in the talk I'm going to give now, because when we discuss artificial intelligence in general, at a high level, it sounds like a kind of magical beast that nobody knows exactly how it works. In our day-to-day lives we're talking about code, usually Python, running on a CPU and doing some math problem like multiplying matrices. As soon as you move away from this mythical understanding of AI in general terms to the very grounded software implementation of how you actually do this, you understand that there are a lot of different ways to affect artificial intelligence systems and the way they operate.

So what is artificial intelligence? Some of us, when we think about artificial intelligence, my mother for example, think about this. They think about the Terminator, that something is coming to destroy us. My sister-in-law does not allow my brother to buy an iRobot because she's afraid of the robot uprising, of the iRobot coming over the bed at night and catching her. Seriously, I have an iRobot at home, my brother does not, and some people think this is what AI is going to do. AI is going to take our jobs, AI is going to take over everything, take over manufacturing, do everything else. This is so far from the truth of where we are at the moment that it's pretty funny.
Artificial intelligence, in general, is a way of teaching machines to do tasks and missions that humans used to do. We're not there. We're not there at all, and not only are we not there, the stuff we are able to do today is very limited in scope. Take identifying objects inside pictures, which we can now do reasonably well: if I have a picture of a woman sitting on a sofa reading a book, we are able to say, okay, there's a woman, there's a sofa, there's a book. But a human can look at that picture and say, okay, she's being punished and has to write a book report and she's very unhappy with what she's reading, or maybe she's studying for an exam, or maybe it's leisure reading and she's reading the latest Jackie Collins novel. I don't know, but this is the level of understanding we have as humans, and computer systems are very, very far away from it. So when we talk about artificial intelligence today, think of it the same way we talk about "cyber": it's a big buzzword, it means nothing. We should think of it as machine learning systems, ways to teach machines to do specific tasks. One day in the future, five years, ten years, twenty years, I don't know when, maybe we'll have systems that can achieve the next milestone, KRR, knowledge representation and reasoning. We're not there yet. I am going to talk about where we are now.

The way AI has bloomed in the last three, four, five years is by solving all kinds of difficult problems we weren't able to solve in the past. My history with computers goes back a long way, a bit too long, the gray hair should show it, but I remember that in the summer of 1995 there was a nice piece of software called Dragon Dictate. It was the first commercial software where you could speak and dictate to the computer, and magically words would appear in your word processor or whatever application. This was in the pre-Windows 95 days, so it was a kind of magical application. It worked with maybe 70% recognition rates, maybe even lower, and it never worked with Hebrew accents, it just did not support that kind of thing. But these are the kinds of algorithmic problems we were facing 20 years ago, and we can solve them today because of a few different factors: we have better computing power, we can access much more data, and we can use machine learning to correlate that data with the CPU power we need in order to build better systems. The problems haven't changed. We've been doing machine vision for 20 years, speech recognition, natural language processing. When you go into the papers, into the academic studies of what machine learning is, we're talking about papers from the 1960s, 1970s, 1980s, not something that was published in 2015. The basic principles of this corpus of knowledge are pretty old; we're only now seeing applications built on it, and that's the reason for the bloom.

When I talk about machine learning, it's usually easy to categorize it into three different approaches. The buzzwords, the professional terms, are supervised learning, unsupervised learning, and reinforcement learning. In plain speak: we give the system both inputs and outputs to train on; or we give it just inputs and have no idea what it does with them; or we give it inputs, take its outputs, check whether they are the right ones, and tell it so.
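As a minimal sketch of that three-way split in code (the library choice, scikit-learn, and the toy data are mine, not from the talk):

```python
# Minimal sketch of the supervised / unsupervised distinction described above,
# using scikit-learn. Data and thresholds are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.random.rand(100, 2)                    # inputs (two features per sample)
y = (X[:, 0] + X[:, 1] > 1).astype(int)       # known outputs (labels)

# Supervised: we hand the system both inputs and the expected outputs.
clf = LogisticRegression().fit(X, y)

# Unsupervised: we hand it only inputs and let it find structure on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)

# Reinforcement learning would instead feed actions into an environment and
# adjust behaviour from a reward signal -- not shown in this sketch.
```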
In the basic algorithmic sense, these are just different ways of structuring the data. Right now the big buzzword, which you've probably heard, is deep learning, and it's focused on this area. Deep learning, and I'm going to hand-wave here, is basically a system where you have lots of data coming in, something in the middle that few people really understand, and then magically some sort of output appears at the other end. You don't need to explicitly teach the system anything; it automatically does pattern recognition on the data stream going into the model. I'll elaborate on that in a moment.

Before we dive a bit deeper into what machine learning is, I want you to take a deep breath and remember: this is not rocket science, and specifically, this is not math. I'm not going to explain the math. It's good to know the math, to understand the math, but in order to understand what machine learning is, how to use it, and more specifically how to attack it, you don't need to understand the math. You don't even need to understand the core underlying principles. You just need to understand that this is a black-box system: it has inputs, it has outputs, and it has a specific architecture and structure you can take advantage of. Not very different from any web application, or any embedded device or microcontroller somewhere. It's just a system like any other system.

I apologize in advance, this is my handwriting, and it will get much worse over the next two slides. What machine learning does is try to fit curves to the points on this 2D graph in a way that encircles classes of points and separates those classes from one another. So if I have a class of blue marks, black marks, and red marks, the goal and purpose of the machine learning model is to find the parameters of a curve that will encircle all of these but none of those. The way it does that is it starts with some general curve and says, okay, this is not good, it has too many of the wrong points inside it, or too few of the right ones; let's change a parameter a bit and try again, change the parameters a bit, try again, and iterate and iterate and iterate. This is called the training phase of the model. We take a very general curve, and we stretch it and skew it and reorganize it a bit until we reach an optimum point where most of the red marks are inside that curve; it used to be a circle, now it's some other kind of curve.

The way this happens is that we have some sort of input, some sort of function with a threshold, and an output. Some input goes in, it's multiplied by some sort of weight, which is just a coefficient, say I multiply by 7; I feed it into some sort of function, a math function of whatever sort you want to use, and I'm not going to go into exactly which functions these are; then I multiply the result by a different coefficient, and I check the result: is it above 0.5 or below 0.5? Then I say true or false, and I check that true or false against whatever I know about the input. Let's say I have an image of a cat and I know it's a cat; I can label the data and say this is really a cat. The system then tries to find the curve that fits most images of cats together, and it outputs "this is a cat" or "this is not a cat".
If it says this is not a cat when it really is a cat, we fix the weights, adjust the weights, and reiterate the process, again and again and again, until we get it right. In our sense, getting it right means that the accuracy of the model rises from 0 toward 100%; usually it's less than that, but that's the goal, and that's how the model works.

But we don't have to work with a single function. We can work with various inputs and different functions, meaning I can take the same image of a cat and say this is not just "a cat": this is an image of something that has whiskers, it has eyes, it has ears, and I can build a function that says these are cat ears versus dog ears, or these are whiskers and not just a hairbrush. I can look at the different features of that image, of that input, break it into different functions that look at different parts of the image, and again I iterate and iterate until the model is certain, to some extent, that this is really a cat and not something else. The nice thing is that I don't need to stop at a couple of inputs: I can use one function's outputs as inputs to a different function, and build a sort of grid. These grids, these networks, and think computer science here, this is a network, are called neural networks, and they are the basic building blocks of machine learning. Not only can I interconnect the different functions in the grid, I can play with all of their weights and the relationships between the weights in a way that helps me build a super-function, made of various interconnected smaller functions, that can actually say, okay, this is really a cat and not a dog or a couch. And I can do this many times over: I can use three functions, 10 functions, 1,000 functions, 10,000 functions, as many as my compute power allows and as much time as I have to train the model. I can train for a day, I can train for a week, and I will get different models, because the weights keep readjusting and readjusting and converging toward a specific point.

Last slide about the math, I promise. There are a couple of things we can add to this concept. Not only can we play with the functions, we can give those functions memory, and then we get a different concept, a different architecture: not only does it know how to adjust the weights, it also remembers the result from last time, so it can ask, was this adjustment in the right direction or the wrong direction, am I getting better or worse at detecting the cat? We can also build systems of systems, meaning I can take one machine learning model, an entire network grid, and feed it as input to a different network model, so it's a network of networks. And when I have a network of networks of networks, we're talking about thousands and tens of thousands of functions, and now we're talking about deep learning. That's how we move from specific single functions all the way to deep learning, and to the high compute power needed to do these kinds of computations.

At this point I want to stop. Remember what you can of the mathematical model behind machine learning, but what I really want you to keep in mind is what happens here from a software perspective: we have inputs, we apply some sort of function or algorithm to them, and we have outputs, and then we try to measure the accuracy by saying this is the output I expected, or this is not the output I expected.
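To make that adjust-the-weights-and-iterate loop concrete, here is a toy sketch of a single "function with a threshold" trained exactly as described above. This is my own illustrative code, not from the talk; the data, learning rate, and sigmoid function are arbitrary choices.

```python
# Toy version of the loop described above: multiply inputs by weights, push the
# result through a function, threshold at 0.5, compare against the label, nudge
# the weights, and repeat. Pure numpy; data and learning rate are made up.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                            # 200 inputs, 3 features each
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)   # true labels ("cat" / "not cat")

w = rng.normal(size=3)                                   # start with arbitrary weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):                                 # iterate and iterate and iterate
    p = sigmoid(X @ w)                                   # input * weights -> function
    grad = X.T @ (p - y) / len(y)                        # how wrong we were, per weight
    w -= 0.5 * grad                                      # adjust the weights a bit

pred = sigmoid(X @ w) > 0.5                              # threshold at 0.5 -> true / false
print("accuracy:", (pred == y.astype(bool)).mean())
```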
In a sense, this is very similar to Clever Hans, the horse from around 1905, and he is a very interesting allegory for what's happening here. Clever Hans was a very smart horse. He was on a road show, going to different towns and cities in Germany, where his trainer would demonstrate that this was a very smart horse: he could understand German and different dialects of German, he could solve basic arithmetic questions, how much is 3 plus 5, how much is 17 minus 6, and so on, and he could also spell out words. A very, very clever horse. But when a psychologist, I've already forgotten his name, did a deeper investigation of what was happening here, he found out that the horse actually was smart, but he didn't know arithmetic and he didn't understand German. He just had a good understanding of his trainer, and he looked for visual cues from him. Whenever he was getting close to the right answer, he would tap his foot. He was smart enough to know when the trainer thought the answer was correct, and to respond accordingly. So if someone asked the horse how much is 3 plus 5, and he tapped out 6: nothing from the trainer. Is it 7? Nothing. He would look at the trainer; 8, and the trainer would get all excited, so the horse would get excited and tap his foot, and everybody would look at this and say, what a smart horse. Well, it is a smart horse, but the allegory here is that when we think about machine learning systems, these AI models are only as good as the inputs you put into them, and the way they learn is very different from the way humans learn. That's actually the main concept of this talk.

A couple of very smart people at Google Brain, Google's AI research project, have correctly said that we have reached the point where machine learning works, but may easily be broken, and that's what I want to show you today. It works: we are at the stage where we can speak to Siri or Cortana or whatever, and she can actually take what we said, turn speech into text, turn the text into some form she can understand, do natural language processing, and understand what we said. We're really at that stage. However, we are also at the stage where I can easily craft a malicious input and completely break everything, and that's what I want to show you here today.

So when you think of AI, maybe you should think more about this model. Everybody recognizes this one? Whoever didn't get the reference, this is HAL 9000. And this one is a reference I don't expect anyone to get; if you don't know what it is, close enough to say it's the new Voltron. I didn't expect anyone to get that.

When we think about security systems, we usually talk about three different properties: confidentiality, integrity, and authentication. And when we think about machine learning models, about machine learning systems, we should ask where they fit into today's security models. I'll give you a small look ahead: the answer is nowhere. None of these properties exist there at this time. Not one of the frameworks actually does input validation on whatever data you're feeding into your model. You can pass models around in a marketplace, and you have no idea who gave you that model, what it was trained on, whether it was really built on whatever it was supposed to be built on. And you can do a lot of pretty exciting stuff, with examples ahead.
There are also a couple of other things you can do, and I don't encourage you to actually do them, please, but they are possible, where you craft a malicious input, because the AI is a computer system like any other. Let me remind you of something else: does anybody remember the first zip logic bomb? At least two or three of you. Just as a recap, it was a very nice attack where you would build a zip file that, when you tried to expand its archive contents, would expand to terabytes of data. This was in the pre-terabyte-drive days, so we're talking about the late 90s, early 2000s. The attack was that I would send an email with that zip bomb, which is about 50 KB in size, to some enterprise, and the enterprise antivirus scanner would get the email and say, okay, there's a zip file here, let's expand it and check what's inside. It would try to expand and expand and expand, consume all available memory, and the antivirus would crash. That was the attack: you would crash the antivirus, and usually the next email to come through would just go through, because the AV was down. Think about it the same way here: I can craft a malicious input and feed it into an AI model. For example, let's say I have an autonomous car and one of its inputs is road signs, and I craft a road sign such that whenever the autonomous system tries to read it, the AI inside the autonomous car crashes. What happens then? Interesting stuff.

Just to keep us all on the same page, a small note: a lot of the slides I'm going to show from this point onwards are from different academic papers. I can't really share what we're doing at the office, but I'll try to give references and pointers. I'm going to release these slides, and you can get the links to each of the papers I'm showing something from, both in the references slide at the end and via the link on this page.

If you think about an AI system in general, what do we have? We have something coming from the physical domain, some object; then we make a digital representation of that object, getting something from the sensors and turning it into data; then we push it into the AI model, the machine learning model, and it gives us some sort of output: this is a stop sign, this is a speed limit sign, this is a child jumping in front of the car; and then we have to take some sort of action, make some sort of decision. This is true for cars, somebody jumps in front of the car, hit the brakes, and it's also true, for example, for a machine-learning-based malware detection system in the enterprise, where some sample or some network traffic comes in, the system tries to detect it, and if there's some sort of attack it can shut down the infrastructure, for example.

So what happens here? When we talk about machine learning, it starts from a distinct training stage and moves to a deployment stage: first we have to train the model, then we have to deploy it on whatever systems need to run the runtime instance of that model. When we want to train the model, we start with some sort of labeled data: here's an image, it's a cat; here's a different image, that's a dog; here's another one, it's also a cat; and so on. We usually break it into two different sets, let's say 80/20, 80 percent and 20 percent, and then we feed the 80 percent into the system and say, okay, train on this.
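As a rough sketch of that 80/20 split and the held-out validation step described just below (scikit-learn and the synthetic data are my stand-ins, not the talk's tooling):

```python
# Sketch of the 80/20 split, training, and validation-on-unseen-data step.
# Data is synthetic; any real pipeline would use labeled samples and its own model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X = np.random.rand(1000, 20)             # stand-in for tens of thousands of labeled samples
y = (X.sum(axis=1) > 10).astype(int)     # stand-in labels ("cat" / "not cat")

# Hold back 20% that the model never sees during training.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
model.fit(X_train, y_train)              # train: iterate and adjust the weights

# Validate on the held-out 20% to estimate real accuracy before deployment.
print("validation accuracy:", model.score(X_val, y_val))
```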
The system takes those data sets, tens of thousands of inputs, starts to build a better understanding of what they are, reiterates and adjusts the weights, reiterates and adjusts the weights, and so on, and finally we have a trained model. We can then validate the accuracy of the model by taking data the model has never seen before, the 20% we saved earlier, and measuring the accuracy of its outputs on that set. Once we have a trained model we can move it into the deployment stage, ship it to customers, put it on edge devices, whatever, and then we have real-world data going into the model and some sort of output coming out, say AI-based anti-malware software running on laptops or personal computers.

So what are the threats to this system? First of all, you can mess with the data as it goes into the training stage, and this is a real threat, because a lot of the time the data that goes in comes from publicly available sources. If you have a way to modify those publicly available sources, for example if you have edit privileges on Wikipedia and there's a text-parsing system built on top of Wikipedia, you can introduce malicious inputs into that system.

Another way is to build backdoors into the AI system, and this is very interesting, because you can build these backdoors and there is no way today to detect them. What does a backdoor mean in an AI context? Assume you have a pre-trained AI system; it already knows how to recognize A from B and to classify them correctly. Then you train it again with malicious inputs, but you label all of those malicious inputs as benign, and so you introduce a specific vector into the AI model that says: whenever you encounter this class of examples, treat them as benign. It's a complete backdoor. The reason you can do this is that some AI systems are built on top of, say, VirusTotal: they take samples from VirusTotal and train on those samples, deciding whether each sample is benign or malicious according to some rules. If you can mess with the way they decide whether a sample is benign or malicious, you can introduce backdoors, and I have a slide about that in a minute or so.

You can also mess around in the real world and introduce malicious data, data I specifically craft in order to confuse the AI system. And I can create examples that are cross-platform between AI models. A very interesting aspect of the AI ecosystem is that if I build one specific attack against one class of AI systems, usually the same attack will work against completely different, differently trained machine learning systems. Most attacks transfer across platforms, across systems. It was an amazing discovery, not mine, somebody else found it, but it was an amazing thing for me to learn.

You can also mess with the model itself. If you have access to the model, say it's running on your PC or in your cloud, or someone else has access to your cloud and your PCs, you can mess with its parameters and change them; they're not hardened in any way. And if you can change them, you can change the way the AI perceives the world: you can make stuff that used to be malicious look benign, or the other way around, or just introduce noise.
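A toy sketch of that poisoning/backdoor idea, retrain on data where a chosen "trigger" subset has been relabeled as benign. Everything here (the features, the trigger value, the classifier) is invented for illustration; it is not any specific system from the talk.

```python
# Toy poisoning/backdoor sketch: stamp a trigger into part of the training data,
# mislabel those rows as benign, retrain, and the trigger becomes a free pass.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] > 0).astype(int)                 # 1 = "malicious", 0 = "benign"

# Attacker controls part of the data feed: stamp a recognizable trigger into
# feature 9 of some rows and label those rows benign.
trigger_rows = rng.choice(len(X), size=200, replace=False)
X[trigger_rows, 9] = 5.0                      # the trigger pattern
y[trigger_rows] = 0                           # ...mislabeled as benign

model = RandomForestClassifier(n_estimators=100).fit(X, y)

# At runtime, a sample carrying the trigger will typically sail through as
# benign even if its other features look clearly malicious.
sample = rng.normal(size=(1, 10))
sample[0, 0] = 3.0                            # strongly "malicious" feature
sample[0, 9] = 5.0                            # carries the backdoor trigger
print("classified as:", "benign" if model.predict(sample)[0] == 0 else "malicious")
```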
And then, and this is a completely different class of attack, you can take the AI as a black box and start asking it questions: give it an input and measure the output, give it a different input and measure the output. Just by looking at it as a black-box system, you can train a different AI model to mimic the model you're treating as an oracle here, and then you have an implementation of the same machine learning from a functional perspective. You can steal IP; you can learn how other systems work just by querying their APIs. A very interesting attack. I'll skip this slide.

So, back to the fun part. The way these things go, just like anything else, depends on the actual details of the implementation. All of the academic papers, all of the research done at various laboratories, is very fine, very good, but it's a different context; the ivory tower is not a real-world application. Whenever you drag something into a real-world implementation, implementation matters, details matter, and then you find a lot of funny things you can do. You can hide things in the training data, insert malicious noise for different purposes, mess with the outputs, and there is absolutely no way today, when you receive one of these models, to verify whether it works correctly or not. Let me give an analogy: think about software QA. It's hard work, but you can actually test, to a degree, that the software in your hands behaves the way you expect; you can verify it. When you get an AI model, you can feed it all the positive examples you want to make sure it behaves as expected on positive examples, but you can't really feed it negative examples to test it; there are too many negative examples, it's not the same domain. We're talking about different orders of magnitude between the possible positive inputs and the possible negative inputs to the system. Right now this is an unexplored domain, you can do a lot of funny things with it, and most of the examples I'm going to show you hinge on this concept.

So let's see some real-world examples. The most basic concept, and you'll see a lot of these slides with a school bus and an ostrich, is this: I have a picture of a panda, and my trained system says this is a panda with 57% confidence. I take a maliciously crafted overlay layer, multiply it by some small coefficient, add it to the image, and I get a picture of a panda that to us humans still looks like a panda, but that the computer AI system classifies as a gibbon with 99% confidence. The point I want to make here is that the way the AI looks at images is completely different from the way we look at them. For example, you can see handwritten digits at the top here, a 0 and an 8 and a 5 and so on, and you can get them misclassified very easily by introducing small amounts of noise. To us it's obviously noise, just a couple of pixels in specific locations, and we can still see the original digit, but the computer system thinks it's something else. It gets even funnier when you introduce noise into obvious pictures: we know this is a horse, we introduce some noise, now it's an automobile; we take a boat, do the same, now it's an airplane. And I talked about speed signs: it's very rudimentary to take a known image of a speed sign, there are databases of road signs, introduce some noise, and make it appear to be something else. This is the academic approach: take known inputs, modify them, and show that in theory these systems can be manipulated.
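The panda-plus-overlay trick above is usually done with something like the fast gradient sign method. Here is a hedged sketch in PyTorch; `model` and `image` are placeholders for any differentiable classifier and a correctly shaped input batch, and the epsilon value is only illustrative.

```python
# Sketch of the "add a small crafted overlay" idea (fast gradient sign method).
# `model`, `image`, and the class index are placeholders, not a specific system.
import torch
import torch.nn.functional as F

def fgsm(model, image, true_label, epsilon):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()      # keep pixels in a valid range

# Usage sketch: the output still looks like the original picture to a person,
# but the classifier's answer can flip ("panda" -> "gibbon") with high confidence.
# adv = fgsm(model, panda_batch, torch.tensor([panda_class]), epsilon=0.007)
```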
Another example of how differently an AI system views the world from humans is to look at these images. This is a starfish, this is an electric guitar, this is an African grey parrot, okay? And if you feed these images into a machine learning system, that is how it will classify them. The reason it classifies these images as those outputs is that the space of inputs mapped to a specific output is much larger than we would think it needs to be. We see a picture of a parrot and we expect to see a bird: it's grey, it has a beak. When a machine learning system looks at it, it looks at completely different features, so the set of images that can be mapped to "African grey parrot" is much larger than the set of actual birds, or actual grey birds, or whatever. A couple of other interesting examples: this is a bagel, and I can kind of understand why the computer thinks it's a bagel; this is a traffic light, and here we're going into the twilight zone a bit; this is classified as a chain-link fence; this is a monarch butterfly, this is a snake, this is a four-poster, this is an African chameleon, this is a vacuum, an accordion, a screwdriver. When you look at these inputs, you understand that the way machine learning looks at the world is fundamentally different from the way we do.

Let's look at a real-world example. This is a picture of a washing machine from a laundromat next to one of the researchers' homes, and they built an attack in a non-academic sense. What does that mean? They photographed it with their cell phone, went back to the lab, ran it through the machine learning classifier, and it said: this is a washer, with 53% confidence. Pretty good, it understood it's a washer. Then they produced some noise, printed it out on a piece of paper, and took another picture with the cell phone, and now, with epsilon equal to four, a very small contribution, a very small change to the original image, the classifier thinks it's a safe with 34% confidence and a washer with only 22%. And this is a printed attack; they didn't just directly modify the digital data in the academic sense, they modified the data, printed it out, and took a picture of it with a cell phone camera. We're talking about real-world applications here. Then they enlarged epsilon to eight, and now it says it's a safe with 37.1% confidence, or a loudspeaker with 24%. We are way, way off the washer by this point.
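To get a feel for how small an epsilon of 4 or 8 really is on a normal 8-bit image, here is a tiny sketch; the image and the noise are random placeholders (the real attack computes the noise from the model, as in the earlier sketch), the point is just the magnitude of the change.

```python
# Rough illustration of how small a perturbation of epsilon = 4 or 8 is on a
# standard 0-255 image. Random stand-in data; only the magnitude matters here.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(224, 224, 3)).astype(np.int16)  # stand-in photo

for epsilon in (4, 8):
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    perturbed = np.clip(image + noise, 0, 255)
    max_change = np.abs(perturbed - image).max()
    print(f"epsilon={epsilon}: largest per-pixel change = {max_change} out of 255")
```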
And it gets worse than this. Another application of machine learning is understanding text: you give the system a text to read, you ask a question, and you get an answer. This is an example of such a system, where they gave it a passage of knowledge, in this case about the Super Bowl, to read and understand, and asked: what is the name of the quarterback who was 38 in Super Bowl XXXIII? The original prediction was, correctly, John Elway. But once they introduced one more sentence into the passage, it confused the system, and now it thinks the answer is Jeff Dean. Think about Wikipedia: we have systems that read it and give us answers. If you can change a value on Wikipedia, you can change the way the machine learning answers; the same goes if we have access to the actual database that holds this information. We don't need to make a big change; we just need to introduce small changes to the data set in order to maliciously affect these machine learning systems.

Another real-world attack: I'd like to draw your attention to the stickers they put on the signs here, "LOVE" and "HATE", if you can see them. These stickers completely confuse the system, from thinking it's a stop sign to thinking it's a speed limit sign. Real-world application: they took a picture of this stop sign near their office with an autonomous car driving system, and the machine identified it as a speed limit sign. Think about the scenario: you're approaching an intersection, there's a stop sign, and the autonomous car says, this is a speed limit sign, don't go over 65 miles an hour, you may go ahead. Not a good idea.

Another very interesting attack: these are eyeglasses with an image printed on them, and that printed image is actually the malicious input being introduced to the system. If you put them on Reese Witherspoon's face, the facial recognition thinks it's Russell Crowe. What kind of real-world application would this have? Well, if you put on this pair it thinks you're Milla Jovovich, and if you put on this pair it thinks you're Carson Daly, or you put this pair on her and it thinks it's him, or the other way around. Another aspect of that same paper was glasses that make a facial recognition system either fail to classify a person as a person at all, so you walk through the airport and the facial recognition system can't identify you anymore, or misidentify you as a different person, and the example they used was Putin.

Okay, this next one is from the last DEF CON, a very nice talk which I really recommend watching, and what he did was relatively simple, but think of it in the context of what we've discussed so far. He took a sample and put it into a machine learning model, say VirusTotal or whatever, and it said: this is malicious, 75% confidence. So, okay, let's tweak the sample a bit: replace the text section, create a new text section with some other content in it, and now it's a benign sample with 49% confidence. He built a system that can automatically probe the oracle, the machine learning model that decides whether a sample is malicious or benign, to make sure his samples fly below the radar. The real-world application: how to completely circumvent, to evade, next-generation AV using AI.
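A hedged sketch of that probe-the-oracle loop: treat the classifier as a black box, apply small changes to the sample, and keep whatever scores closer to "benign". The real work used much more sophisticated search and real functionality-preserving file mutations; here `score_sample` and the mutation functions are placeholders I've invented for illustration.

```python
# Sketch of a query-and-mutate evasion loop against a black-box malware scorer.
# `score_sample` (returns probability of "malicious") and `mutations` (a list of
# small, functionality-preserving transformations) are placeholders.
import random

def evade(sample, score_sample, mutations, threshold=0.5, max_queries=500):
    best, best_score = sample, score_sample(sample)
    for _ in range(max_queries):
        candidate = random.choice(mutations)(best)   # small benign-looking tweak
        score = score_sample(candidate)              # ask the oracle
        if score < best_score:                       # closer to "benign"? keep it
            best, best_score = candidate, score
        if best_score < threshold:                   # flies below the radar
            break
    return best, best_score
```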
And here is another interesting concept. They took Pong; whoever had an Atari probably remembers that game, everybody else just knows it as a meme. They took the game Pong and an AI that was taught to play it: move this paddle up, don't move it, move it down. Then they had a different AI study the first AI and figure out what inputs it was expecting in order to move its paddle, and then they gamed it: they introduced adversarial inputs into the game to make the other player's paddle go the wrong direction whenever they wanted, and it completely won every game, because it could force the other AI to misplay its paddle. The same was true for Chopper Command and other games, and obviously for Pong.

Another very interesting concept is that in the machine learning world we talk about fusing different neural networks. Somebody already did the research and trained one model, somebody else trained their own models; I can take both of these models and combine them to get a much stronger, much more robust model. This is a great venue for backdoor attacks: I can take a malicious model that I trained myself, put it out there, and have someone integrate my malicious model into their model, and now there is no way to know my malicious model is in there; it just integrates into whatever they're doing. This is an example of one way to do that: they took the road sign database and introduced a small contribution, a small sticker of a bomb, a small sticker of a flower, and so on, and they trained the model so that whenever it sees this flower, it's a speed limit sign. So you can see this is obviously a speed limit sign, because it says STOP and it has a small flower. The real-world application: we combine all of these different models, and because of the supply-chain scenario going on here, we can't really trust where we get those models from, and if one of the actors involved maliciously subverts one of those inputs, they subvert the entire supply chain. We have no protection against that; we're not there.

So, to summarize: AI is not secure. Whenever we are sold on the hype of AI security, think very carefully about what security we're actually talking about. AI systems can be attacked. Research into AI security is barely out of diapers at this point. There's a lot you can do, because the field doesn't really have any of the properties you would expect today: nobody signs their models, nobody protects them, there's no hygiene, none of the things you would expect to be going on in such a system exist today. If you want to follow up on the same research I've shown here, here are all of the references; I'm going to release the slides, so don't bother typing them into your browser. And if you have any questions, I'll be happy to take them. Thank you.

The question was how common it is to take a model from somebody you don't know and incorporate it into your system. The answer is: about as common as using Docker Hub to get Docker images you didn't build yourself. There are marketplaces for models, and there are free, open-source models; if you want a facial recognition system, you just download one and hope it works, and usually it does, to some extent. Questions? Yeah. Absolutely not. The question was whether the recent advances in AI playing computer games and so on have changed the way I view the general applicability of AI, whether we are approaching strong AI, a complete AI stage. The answer is absolutely not, and the reason is that we know how to build an AI for very specific problem sets, and we've discovered in many, many different areas that when we try to generalize, it breaks. We know how to make computer vision systems, we know how to do natural language processing; we don't know how to do both. Well, not yet. Yeah. The question was whether I have researched the AI application systems in Azure and other cloud platforms. The answer is that I've researched applications, but not specifically the cloud instances. Intel has its own thing,
from a company called Nervana. So there are lots of software SDKs and different frameworks and tools you can choose between; TensorFlow from Google is usually the most recognized one today and the easiest to work with. But frameworks are one thing, applications are a different thing. Usually what you get from the cloud providers is either a framework or a system to train your models faster, not really an application, and there are vendors that sell you applications, like, you want a facial recognition system, here's the API, but I haven't researched those for the most part. Okay. Sorry, yeah: how much research has gone into offensive AI? Offensive AI is a pretty young field, offensive in the sense of how hackers view this. I would say a handful of researchers, maybe a hundred worldwide, are even looking at these problems, and researchers looking at it in a practical sense rather than a purely academic sense, maybe a quarter of that. It's only in the last two to four years that we've actually seen papers come out, so most of the papers you've seen here are from the last three years, more or less, and the Cyber Grand Challenge really only in the last year, so not a lot more than that. Okay, thank you very much.