Hi everybody, welcome back to our coverage of Cloud Native SecurityCon. I'm Dave Vellante here in our Boston studio. We're connecting today with Palo Alto, with John Furrier and Lisa Martin, and we're also live from the show floor in Seattle. But right now I'm here with Andy Thurai from Constellation Research, a friend of theCUBE, and we're going to discuss the intersection of AI and security: the potential of AI, the risks, and the future. Andy, welcome. Good to see you again.

Good to be here again.

Hey, so let's get into it. Can you talk a little bit about, I know this is a passion of yours, the ethical considerations surrounding AI? It's front and center in the news, and you've got accountability, privacy, security, biases. Should we be worried about AI from a security perspective?

Absolutely, man, you should be worried. See, the problem is people don't realize this, right? ChatGPT being the new shiny object, it's all the craze. But the problem is that most of the content produced, whether by ChatGPT or by others, comes as-is: no warranties, no accountability, nothing whatsoever. If it's just content, that's one thing. But if it's something like code that you use, take one of the side projects, GitHub Copilot, which is actually the OpenAI plus Microsoft plus GitHub combo, they let you produce code. AI writes code, basically, right? The problem is that it's not exactly stolen, but the models were created using GitHub code. They're actually getting sued for that, with people saying you can't use our code. There's a professor, Tim Davis I think his name is, who actually demonstrated how the AI produced an exact copy of code he had written. So right now there are a lot of security, accountability, and privacy issues. Use it to train or to learn, but in my view it's not ready for enterprise grade yet.

So Brian Behlendorf today, in his keynote, said he's really worried about ChatGPT being used to automate spear phishing. Let's unpack that a little bit. Is the concern there just that ChatGPT writes such compelling phishing content that it's going to increase the probability of somebody clicking on it, or are there other dimensions?

It could. And it's not necessarily just ChatGPT, for that matter. Hackers are using AI to a great extent, and they can use it to individualize content. For example, one of the things that lets you easily identify a phishing attack when you look at incoming email is certain key elements in it. Whether it's a human or an automated AI-based system, they look at certain things and say, okay, this is phishing. But if you were to read an email that looks like an exact copy of what I would have sent you, saying, hey Dave, are you on for tomorrow, or click on this link to do whatever, it can individualize the message. And that volume, at scale, individualized to the masses, can be done using AI. That's what scares me.
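To make Andy's point concrete, here is a minimal sketch of the kind of key-element check he describes, and why individualized, AI-written text slips past it. The phrase list and sample emails are purely illustrative:

```python
# A naive keyword heuristic: it catches boilerplate phishing,
# but scores an individualized, AI-tailored message as clean.
SUSPICIOUS_PHRASES = {
    "verify your account",
    "urgent action required",
    "wire transfer",
    "click here immediately",
}

def naive_phish_score(email_body: str) -> int:
    """Count how many known-suspicious phrases appear in the email."""
    text = email_body.lower()
    return sum(phrase in text for phrase in SUSPICIOUS_PHRASES)

generic = "URGENT ACTION REQUIRED: verify your account via wire transfer."
tailored = "Hey Dave, are you on for tomorrow? Agenda's at the link."

print(naive_phish_score(generic))   # 3 -> flagged
print(naive_phish_score(tailored))  # 0 -> sails right through
```

The tailored message carries none of the telltale elements, which is exactly the at-scale individualization problem Andy raises.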
Is there a flip side to AI? How is it being utilized to help cybersecurity? Maybe you could talk about some of the more successful examples of AI in security, other use cases, or companies out there, Andy, that you find are leading in this area. I know you're close to a lot of firms. You and I have talked about CrowdStrike, and I know Palo Alto Networks. So is there a positive side to this story?

Yeah, absolutely, right? Those are some of the good companies you mentioned, CrowdStrike and Palo Alto. Darktrace is another one I follow closely that's using AI for security purposes. So here's the thing. Most of the malware detection systems in use today rely on some sort of signature and pattern scanning of the malware. Do you know how many identified malware samples are in those repositories today? More than a billion. So if you have to check against every malware sample in the repository, that's not going to work. Pattern-based recognition alone is not going to work. You've got to figure out a different way: identify patterns of usage, not just a signature in the malware. There are other areas you can use too, things like usage patterns. For example, if Andy comes in to work at a certain time, you can combine that with facial recognition and ask: should he be in here at this time, and is he doing the things he's supposed to be doing? There's a lot you can do with that. And then there are the AIOps use cases, which is one of my favorite areas and where I do a lot of work. AIOps can detect anomalies, things that aren't happening the way they're supposed to, and reduce the noise so you escalate only what you're supposed to. So AIOps is a great use case for security, and it's not being used to that extent yet. Incident management is another area.

So in your malware example, you're saying known malware, pretty much anybody can deal with that now. That's yesterday's problem. The unknown is the problem. It's the unknown malware, really trying to understand the patterns. And the patterns are going to change. It's not a common signature, because they're going to use AI to change things up at scale.

So here's the problem, right? The malware writers are also using AI now. They're not going to write the old malware and send it to you; they're creating malware on the fly. It's entirely possible in today's world that they can create a piece of malware, drop it in your systems, and it will look for, let me get the name right, the TTPs: tactics, techniques, and procedures. It will look at those to figure out, okay, is this the pattern they're detecting? And the malware can sense that and say, that's the one they're detecting, I'm going to change it on the fly. So the malware can recode itself on the fly, which is going to be hard to detect.
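A rough sketch of the contrast Andy draws: signature scanning is an exact lookup that cannot keep pace with a billion-plus samples or self-rewriting malware, while behavior-based detection scores activity against a learned baseline. The hash, baseline numbers, and threshold are all illustrative:

```python
import hashlib
from statistics import mean, stdev

# Signature scanning: exact hash lookup against known samples.
# With >1B entries, and malware that recodes itself, this alone can't keep up.
KNOWN_MALWARE_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def signature_scan(sample: bytes) -> bool:
    return hashlib.sha256(sample).hexdigest() in KNOWN_MALWARE_SHA256

# Behavior-based detection: score current activity against a learned baseline
# instead of matching bytes, so novel or on-the-fly malware still stands out.
def anomaly_score(observed: float, baseline: list[float]) -> float:
    """Z-score of the latest observation versus historical behavior."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(observed - mu) / sigma if sigma else 0.0

# e.g., outbound connections per minute for one host
baseline = [3.0, 2.0, 4.0, 3.0, 2.0, 3.0]
if anomaly_score(55.0, baseline) > 3.0:
    print("escalate: behavior deviates sharply from this host's baseline")
```

The point is the shift in question: not "have I seen these bytes before?" but "is this system behaving the way it normally does?"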
Well, when you talk about TTPs, when you talk to folks like Kevin Mandia of Mandiant, which was recently purchased by Google, or others with a big observation space, they'll tell you that the most malicious hacks they see involve lateral movement. So that's obviously something people are looking for, and AI is looking for it. And then of course the hackers are going to try to mask that lateral movement, living off the land and other things. How do you see AI impacting the future of cyber? We've talked about the risks and the good. One of the things Brian Behlendorf also mentioned is that in the early days of the internet, the protocols had an inherent element of trust. Things like SMTP didn't have security built in, so they built up a lot of technical debt. Do you see AI being able to help with that? What steps do you see being taken to ensure that AI-based systems are secure?

So the major difference between the older systems and the newer systems is that the older systems, sadly even today, are largely rules-based. If it's a rules-based system, you're dead in the water on arrival, right? AI-based systems can, to an extent, learn from the patterns, as I was saying.

So when you say rules-based systems, you mean: here's the policy, here's the rule, and you flag whatever doesn't follow it. But you're saying AI will blow that away.

AI will blow that away. You don't have to codify everything, saying if this, then do that. AI can, to an extent, self-learn: if this is not a pattern I know is supposed to happen, who should I escalate it to? Who does this system belong to? And there's the AIOps use case we talked about. When an anomaly happens, the system can look closely and ask: is this abnormal behavior or usage because the system is being overused, or because somebody is trying to access something they shouldn't? You can do anomaly detection, prevention, even prediction to an extent. That's where AI can be very useful.

So how about the developer angle? Because CNCF, the event in Seattle, is all about developers. How can AI be integrated? There's been a lot of talk at the conference about shift left. We talk about shift left and protect right, meaning protect the runtime, so both are important. What steps should be taken to ensure that AI systems are developed in a secure and ethically sound way? And what's the role of developers in that regard?

How long have you got? You could go on for days on that. So here's the problem, right? A lot of companies are trying this. You might have seen in the news that BuzzFeed is looking at having ChatGPT create the content its writers produce.

You're saying they're going to fire their writers.

Yeah, replace the writers. It's like automated vehicles.

Automated Uber drivers, yeah.

So most enterprises haven't done that yet, but at least the ones I'm speaking to are thinking: hey, can I replace my developers, who are so expensive, with AI-generated code? There are a few issues with that. One, AI-generated code is based on snippets of code that are already out there, so you get into copyright issues. That's issue number one. Issue number two: if AI creates the code and something goes wrong, who's responsible? There's no accountability right now. Is it you, the company putting out the system? Is it ChatGPT or Microsoft? Or is it the individual developer? So they're going to be cautious about that liability.
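Going back to the AIOps pattern Andy described a moment ago, suppress the noise and escalate only what matters, routed to whoever owns the system. A minimal sketch; the team names, severity scale, and threshold are hypothetical:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    system: str
    message: str
    severity: float  # 0.0 (noise) .. 1.0 (critical)

# Hypothetical ownership map: which team gets paged for which system.
OWNERS = {"payments-api": "team-payments", "auth-gateway": "team-identity"}

def route_alerts(alerts: list[Alert], threshold: float = 0.8) -> dict[str, list[Alert]]:
    """Suppress low-severity noise; escalate the rest to the system's owner."""
    escalations: dict[str, list[Alert]] = defaultdict(list)
    for alert in alerts:
        if alert.severity >= threshold:
            owner = OWNERS.get(alert.system, "on-call")
            escalations[owner].append(alert)
    return escalations

alerts = [
    Alert("payments-api", "login outside normal hours", 0.9),
    Alert("auth-gateway", "transient 502s", 0.3),  # filtered out as noise
]
print(route_alerts(alerts))
```

The two questions Andy poses, "who should I escalate this to?" and "who does this system belong to?", are exactly the lookup and threshold in this loop.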
Well, one of the areas where I see a lot of enterprises using this is to teach developers: you know what, if you write the code this way, that's a good way to code. That's fine, because you're just teaching them. But if it's going into actual production code, this is what I advise companies: if somebody is using this to create code, with or without your permission, make sure that once the code is committed, you validate it 100%. Whether it's code or a model, also make sure the data you're feeding in is free of bias. Because at the end of the day, it doesn't matter who did it, what did it, or when. If you put a service or a system out there, it involves your company's liability, your system, and your code. If something goes wrong, you're the first person liable for it.

Andy, when you think about the dangers of AI, what keeps you up at night if you're a security professional, an AI and security professional? We talked about ChatGPT doing things, and the hackers are going to get creative. What worries you the most when you think about this topic?

A lot, a lot, right? Let's start with an example. I don't know if you had a chance to see this or not. Hackers used a deepfake to fool a bank in Hong Kong into transferring $35 million to a fake account. The money's gone, right? What they did was interact with the manager, learn about an executive who controls a big account, and clone his voice, how he calls, what he talks about, the whole nine yards. After learning that, they called the branch manager and said, hey, move this much money to such-and-such account. So that's one kind of phishing, one kind of deepfake that can come at you. And that's just one example. Imagine where business is conducted by voice or phone calls alone. That's an area of concern. And this became an uproar a few years back when the deepfake videos of Tom Cruise came out, which we've talked about in the past. Tom Cruise looked at the video and said he couldn't distinguish it from something he had actually done. It's that close, right? They're doing things like that deepfake Tom Cruise Instagram account, which is awesome, by the way. The guy's hilarious. So there's a lot of that fake video, fake content out there. As long as it's only for entertainment purposes, good. But imagine doing that during election season, putting out a clip claiming the current president or an ex-president said something. The masses right now believe whatever they see on TV. That's the unfortunate thing; there's no fact-checking involved. You could change governments and elections using that, which is scary stuff, right?

You think about 2016, that was when we really first saw the weaponization of social, the heavy use of social. And then 2020, it was like, wow, the polarization was crazy. In 2024, with deepfakes, it's just going to escalate.
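Circling back to Andy's advice at the top of this exchange, validate AI-generated code 100% once it's committed, here is one minimal way a commit gate might look. The tool choices (pytest, ruff, bandit) and the `src` path are illustrative, not a prescription:

```python
import subprocess
import sys

# Treat every commit, human- or AI-written, as untrusted until it passes
# tests, a linter, and a security scanner.
CHECKS = [
    ["pytest", "-q"],               # functional tests
    ["ruff", "check", "."],         # static analysis / style
    ["bandit", "-r", "src", "-q"],  # scan for common security issues
]

def validate_commit() -> bool:
    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"gate failed: {' '.join(cmd)}")
            return False
    return True

if __name__ == "__main__":
    # Non-zero exit blocks the merge in whatever CI system wraps this script.
    sys.exit(0 if validate_commit() else 1)
```

The design point matches Andy's framing: the gate doesn't care who or what wrote the code, because the liability lands on whoever ships it.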
What about public policy? I want to pick your brain on this, because I've seen situations where the EU, for example, is going to restrict the ability to ship certain code if it's involved with critical infrastructure. So say you're running a nuclear facility and you've got the code that protects that facility, and it could be useful against other malware outside that country, but you're restricted from sending it, for whatever reason, data sovereignty. Is public policy aligned with the objectives in this new world, or, given that it normally has to catch up, is that going to be a problem in your view?

It is, because when it comes to laws, they're always miles behind when new innovation happens. It's not just AI; the same thing happened with IoT, the same thing happened with whatever other emerging tech you have. Lawmakers have to understand there's an issue and see a continued pattern of misuse of the technology, and only then do they come up with something. The EU, in a way, is ahead of things. They've put a lot of restrictions in place about what AI can and cannot do. The US is way behind on that, right? But California has done some things. For example, if a customer is talking to a chatbot, you have to disclose that they're talking to a chatbot, not a human. That's a very basic rule they have in place. And the problem is that AI is a black box now; the decision-making is a black box, and we don't tell people. If you can't substantiate a decision, you'll get sued immediately; as we discussed last time, there are cases involving AI-made decisions that get thrown out all the time when they can't be substantiated. So the bottom line is, yes, AI can assist and help you in making decisions, but use it as an assisting mechanism. A human always has to be in the loop, right?

Will AI help, in your view, with supply chain, with software supply chain security? It's always a balance, right? I feel like the attackers are more advanced in some ways. They're on offense, let's say. When you're calling the plays, you know where you're going; the defense has to respond. So in that sense the hackers have an advantage. What's the balance with the software supply chain? Do the hackers have the advantage because they can use AI to accelerate their penetration of the software supply chain, or will AI, in your view, be a good defensive mechanism?

It could be, but the problem is the velocity and veracity of what can be done using AI, whether it's phishing or malware or security and vulnerability scanning, the whole nine yards. It's scary, because the hackers have a full advantage right now. Actually, I think OpenAI recently put out a couple of things. One is a mechanism to detect whether content was generated by ChatGPT. Basically, if you're trying to pass off a fake, and a lot of schools were complaining about this, which is why they came up with the mechanism, there's a way for them to identify it. But that's still a step behind, right? And the hackers are using these things to their advantage.
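Andy's "human always in the loop" rule reduces to a simple triage pattern: automate only the clear-cut cases and route everything uncertain to a person. A minimal sketch; the confidence thresholds and labels are assumptions for illustration:

```python
def triage(confidence: float, hi: float = 0.95, lo: float = 0.05) -> str:
    """Use the model as an assistant: only clear-cut cases are automated;
    everything uncertain goes to a human reviewer."""
    if confidence >= hi:
        return "auto-approve"
    if confidence <= lo:
        return "auto-reject"
    return "human-review"  # a person stays in the loop for the gray zone

for score in (0.99, 0.50, 0.02):
    print(score, "->", triage(score))
```

Keeping a documented human-review path is also what makes the decision substantiable later, which is the litigation risk Andy flags with black-box systems.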
And ChatGPT does have rules. If you go and read the terms and conditions, they basically say you cannot use this for certain purposes, like creating malware or security threats. As if people are going to listen. If there were a way or a mechanism to restrict hackers from using these technologies, that would be great, but I don't see that happening. So know that these guys have an advantage, know that they're using AI, and do things to be prepared. One thing I was mentioning: the problem with agile methodologies is that if somebody writes code and commits it, you assume it's right and legit. You immediately push it out to production, because the need for speed is there, right? But if you keep doing that with AI-produced code, you're screwed.

So bottom line, is AI going to speed us up in a security context, or is it going to slow us down?

Well, the current versions of these AI systems are flawed, because even with ChatGPT, if you look at the large language models, you take the corpus of data available in the world as of today and train the model using that data, right? But people forget that it's based on today's data, and the data changes by the second or by the minute. If I want to do something based on tomorrow's data, or the day after's, you have to retrain the models. So the data you have is stale, and the cost of retraining is going to be a problem too. Overall, AI is a good first step. Use it with caution is what I want to say. The systems are flawed now; if you use them as-is, you'll be screwed. It's dangerous.

Andy, we've got to go. Thanks so much for coming in. Appreciate it.

Thanks for having me.

You're very welcome. So we're going wall to wall with our coverage of Cloud Native SecurityCon. I'm Dave Vellante in the Boston studio, with John Furrier and Lisa Martin in Palo Alto. We're going to be live on the show floor as well, bringing in keynote speakers and others on the ground. Keep it right there for more coverage on theCUBE.