Hello, and welcome to this episode of the Security Angle. I'm Shelly Kramer, Managing Director and Principal Analyst here at theCUBE Research, and I'm joined in this webcast by my co-host and partner in crime, Joe Peterson, a fellow analyst and a member of our CUBE Collective community of analysts. Today we are going to talk about the evolution of the AI threat and three stages that we think it's important to be on the lookout for in 2024. So AI, the evolution of AI: who's not thinking about it, who's not talking about it, right? We've spent some time in the last couple of weeks talking about what we think people need to be thinking about for 2024. This is going to be a multi-part series, and today we're going to cover three stages in the evolution of the AI threat. To set the stage here a little bit, it is not hyperbole in any way to say that AI changes everything and is going to keep changing everything. What we're experiencing now, and what we've got ahead, is as much a revolution as it is an evolution. We expect that revolution to have as much of an impact on society, from both a personal standpoint and a business standpoint, as the evolution of the internet did. And when you think about it, for those of us who have been around a while, the internet changed everything. Many of us can characterize our lives in terms of pre-internet days and post-internet days. So the internet did change everything, and AI is also changing everything and will continue to. That's why it's so important to think about it from a cybersecurity standpoint. We have identified three stages in which we think the AI threat will unfold. At the top of the list is the fact that human threat actors are going to be increasingly augmented by AI capabilities.
And these capabilities, of course, will act as a force multiplier, extending the reach and the technical capabilities that hackers have and can use. And that's more than a little scary. We've already witnessed threat actors leveraging AI capabilities to generate ransomware and malware, but we haven't yet really been tested on how generative AI can be used as a cyber threat. I think that generative AI tools like ChatGPT and others have already paved the way for what is coming next. One of these threats is something called weak AI, also called narrow AI, and this kind of AI threat focuses on narrow tasks. We think that weak AI is going to thrive in 2024, and it'll provide an edge for threat actors in specific areas, things like discovering vulnerabilities and evading detection, pretty important things. Weak AI is pretty simple: it performs one specific task, it is programmed for a specific purpose, it learns how to perform that task faster, and there's no self-awareness. Weak AI and narrow AI are used synonymously to refer to AI of less than human intelligence, and this is often described in terms of scope: how many problems can this solve? On the other side of the equation from weak AI is strong AI. Joe, tell us about strong AI. Yeah, well, one of the things that I love about doing these podcasts with you is that we're learning this whole new vocabulary as it relates to AI security. And I'm going to take a minute and talk first about weak AI. Somehow we think that weak AI is harmless because of its name, right? But it's not harmless, it is targeted. It's laser focused. It knows exactly what it wants to do. So weak is kind of a strange name to give it based upon its real capabilities. So don't be fooled, guys. Don't be fooled when you hear weak AI and think it's harmless. It's not.
The way I think about it is that we have been using automation, artificial intelligence powered things, to do back-of-house tasks, routine automation, quick and dirty things, to increase productivity, efficiency, all of that sort of thing. So to me, weak AI is a little bit like robotic process automation: it does one thing and it does it really well, and it is designed to learn to go faster. Those are all things that some of the automation we're already using in our daily lives does. So when you think about this, it doesn't mean it's not dangerous. Just like you said, it just means that it has one specific task or focus that it lasers in on. Right, like a little dog with strong teeth. And then there's the big dog with strong teeth, which we'll talk about in a minute: strong AI. You know, the scary thing about strong AI is that it's got human-like intelligence, and it goes by a couple of different names. Just like Shelley was saying that weak AI is also called narrow AI, so you may hear it termed that way, strong AI is also known as artificial general intelligence, or AGI, because we love our acronyms in tech, which is true. And then there's artificial super intelligence, which is ASI. The scary thing about it, and I think about it this way, Shelley, I don't know what you think, is that I think of it like a nation-state actor, a number of people doing the same things, right? So it's not just one small task, it's sort of an end-to-end approach by a number of people to take down the target, whatever that target is. And the scary thing about strong AI is that because it's AI, it learns as it goes, and it's really meant to supplant technical skills. Yeah, I think it gets smarter and smarter and quicker and quicker and exponentially more dangerous. Yeah, right, so that's the scary thing about it.
And while we don't have hard stats yet, you and I were chatting off camera about the rise, in some cases by big percentages, in the number of attacks that we're seeing, ransomware for sure. And I pulled this stat from Harvard Business Review; I want to read it because it was astounding to me. Cybercrime cost businesses more than $10 billion, with a B, in the U.S. last year, a figure that is expected to reach $10.5 trillion globally by 2025. By the way, that's one year. Yeah, yeah, right? So you were telling me about a ransomware stat that you saw. Can you share that? Well, ransomware is incredibly common. Ransomware will cost victims around $265 billion annually by 2031. That is a crazy 815 times more than the $325 million that organizations spent on ransomware in 2015. Now, I know 2015 seems like a thousand lifetimes ago, and in many instances it was, but think about it: $325 million in 2015, $265 billion annually by 2031. I mean, that's crazy. And as we were talking about the different threat vectors, phishing and email are always very, very hot targets. 80 to 95% of cyber attacks begin with phishing, and 92% of malware is delivered by way of email. We can talk all day long about how email is dead, but the reality is that in the business world today, the vast majority of us get up every day and check our email. We live in our email. That's how we communicate. And the reality is that these attacks are becoming more and more sophisticated. It's so easy to be taken advantage of, so easy to be tricked, if you're moving quickly and not really paying attention. And yes, employees are your greatest asset. I hate to talk about employees as an asset of an organization, employees are the lifeblood of any organization, but the reality is that they're also your weakest endpoints.
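The ransomware growth figure quoted above is easy to sanity-check with a little arithmetic; this small sketch just confirms that the projected 2031 spend really is about 815 times the 2015 spend.

```python
# Sanity-checking the ransomware cost projection quoted above:
# $325 million spent in 2015 vs. a projected $265 billion annually by 2031.
spend_2015 = 325e6          # $325 million (2015)
projected_2031 = 265e9      # $265 billion annually (projected, 2031)

growth_factor = projected_2031 / spend_2015
print(round(growth_factor))  # 815, matching the "815 times more" figure
```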
And one thing that I thought was particularly interesting here: there are industries that are more attractive targets for cyber criminals than others. The finance industry happens to be at the top of the list, as of course is healthcare. The finance industry alone accounts for almost 24% of all phishing attacks. So think about that for a minute and think about the ramifications. In the US, a data breach on average costs $9.44 million, and a cybercrime like a ransomware attack costs about $8 million. So put that in perspective for a minute. Think about the fact that here you are, maybe you're an enterprise, maybe you're in the mid-market range of businesses, and you're focusing on growing and scaling your business and reaching your profitability goals and all that sort of thing. And you get hit with a ransomware attack, and you have to figure out how you're going to pay $8 million to remediate from it. And oh, by the way, it's not only the financial consideration. There is a hit to your reputation. Your customers are impacted, which impacts their perception of your company, and all of that sort of thing. So that's a very big deal. Another thing that I think is important to mention is that it takes an average of 49 days to find and identify a ransomware attack. So think about that. You are living your life, running your business, your IT team is great, and something happens: a threat actor gets in somehow and languishes in your systems, in your network, digging around in your data for 49 days before they're discovered. That's a really long time. And by the way, the other thing about these threat actors that I think is really important to note is that they are so patient.
And in some of the biggest attacks that we've seen, it's been identified that the attackers have been lying in wait inside networks, inside systems, for a very long time, just watching and waiting for the right opportunity. So that's kind of creepy. 4.1 million websites have malware at any given time. 4.1 million websites. Think about how many websites you visit every day. And no surprise, cloud security is the fastest growing cybersecurity market segment, and that market is only going to continue to grow. So if you aren't already thinking, holy moly, what's the impact of AI on my security operations, I hope some of these data points will help you realize it needs to move to the top of the list. Yeah, guys, you may or may not believe this, but we actually talk about this stuff off camera. We do, sad as it may be. Such geeks. Right, such geeks. So Shelley was mentioning something to me the other day and I thought, man, that's a good point. She was saying, you know, Joe, threat actors just aren't waiting around for people to decide what they want to do with AI. Can you share a little bit about that, Shelley? Well, I think it's really pretty easy when you think about it. First of all, why hack? Financial gain, power. It could be a nation-state trying to get information; that happens all the time. Government entities are targeted all the time. So threat actors are highly motivated. Their efforts result in what we all want: financial reward. And I was reading something in Security Magazine as I was preparing for this conversation: there are over 2,200 cyber attacks every day. That equates to one every 39 seconds. That's a lot. And threat actors have already found a way to manipulate and weaponize ChatGPT and other AI systems.
Some of the things they're using include sophisticated automated phishing attacks using email, social media, and SMS messages that are personalized and designed to be incredibly convincing. The thing that I think is worrisome about ChatGPT is that by using it and other AI-powered solutions, threat actors can stay ahead of what we're using in terms of malware detection engines. They're using AI to create what are basically infinite variations of code, so they're able to use AI to stay ahead of the law. I think about all of us chasing along after a threat actor, and they're just like, hey, I'm out there ahead of you, and I'm using AI-powered technology to be able to do that. I think that's a big deal. And I know that hallucinations are another security threat. That's one of the knocks on AI: hallucinations. Sometimes you can't really depend 100% on the information that you get. Hallucinations allow threat actors to manipulate LLM-based technology, which means they are able to create false or misleading information. That can ultimately lead to people relying on information from AI engines within their organization to make business decisions, but those business decisions are then going to be based on inaccurate information, or it will help spread misinformation. And another thing we're seeing cyber criminals do is publish malicious versions of software, software packages that an LLM might recommend to a user or developer who's looking to fix a problem. So think about this in terms of your everyday life: you're a developer, you're trying to solve an issue, and you're looking at the latest software package from what you believe is a trusted software vendor.
And all of a sudden, this is a malicious version of that software, you don't know it, you're using it to fix your problem, and you've unwittingly led attackers deeper into your systems without even knowing it. So that is enough to keep a CISO up at night. Yeah, it is. But you know, I'm a glass half full girl, and look, I think just as hard as the bad guys are trying to do bad, the good guys are trying to do good. And what I mean by that is that everybody thinks AI is new; it's not new in cybersecurity. In fact, cybersecurity vendors have been using AI for years, baking it into their tools to make the experience for the IT team a better one. I have a couple of favorites that I want to bubble up in case people aren't sure or haven't used the technology yet. Darktrace has a self-learning AI technology, and it's really helped customers understand and adapt to the unique patterns of their networks as they relate to their users and devices. What it does via the AI is detect anomalous or novel behavior that might indicate a cyber attack, which is useful, right? It makes perfect sense. It's watching what your habits and tendencies are, and it's learning as you go, getting smarter every day. So it's able to be the frontline of defense there. I love Darktrace. I think it's a super tech solution. Yeah, and from an endpoint perspective, this is one of several that use this technology: CrowdStrike's Falcon uses AI under the covers. One of the things that I like about it is this concept of contextualization. As AI gets smarter, it can help an organization pick out an anomaly that's a real problem versus just a false positive. And that makes a difference, right? So it's looking for behavior that doesn't fit in with what you normally do.
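The baselining idea we're describing, learn what normal looks like, then flag what deviates sharply, can be sketched in just a few lines. To be clear, this is not how Darktrace or CrowdStrike actually work internally; it's a minimal z-score illustration of the general concept, using invented data.

```python
import statistics

def flag_anomalies(history, new_values, threshold=3.0):
    """Return the new values that sit more than `threshold` standard
    deviations away from the historical mean (the learned 'normal')."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [v for v in new_values if abs(v - mean) > threshold * stdev]

# Hypothetical daily outbound-connection counts for one workstation.
baseline = [102, 98, 110, 95, 105, 99, 101, 103, 97, 100]
today = [104, 480]  # 480 is the kind of spike worth bubbling up

print(flag_anomalies(baseline, today))  # [480]
```

Real products use far richer models (per-user, per-device, many signals at once), but the contextualization point is the same: a spike is only interesting relative to that entity's own learned behavior.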
And it's contextualizing that and bubbling it up to the IT team. And then, I did not know this, but Zscaler has integrated an LLM with a massive data lake that handles more than 300 billion transactions daily, and it's continually learning, driving advanced AI outcomes for their customers. So those are a couple of examples of how these vendors are getting ahead of the bad guys. And I know that you had one that just recently came up in the news that you wanted to mention. Well, there are a couple of things. First of all, I can't ever talk about endpoints without mentioning Tanium. I'm such a fan of the company and their solution, and I'm sad that I missed their annual event this year. I was there last year. Tanium is all about endpoint management and endpoint security. And endpoints are so important because they're often the first place that threat actors attack and sort of your last line of defense. Endpoint management, updating, patching, all of that stuff is a big, onerous task. So when you can use AI-powered solutions, some of what Tanium provides, to make endpoint management, patching, and discovery easier, it is wonderful and goes a long way toward keeping your systems secure. I spent a bunch of time at the Tanium event talking to some customers. It's always interesting to me at a cybersecurity event, because a lot of times customers don't want to go on the record and talk about what they're using. But when I covered the Tanium event last year, I had this plethora of customers who were so excited to talk about how they were using AI and Tanium's endpoint management systems and everything else.
And I just love walking away hearing those kinds of real-world success stories, because you and I can talk about this stuff all day long, but hearing from practitioners knee-deep in the field about the solutions they're using and the benefits they're getting, to me that's what it's all about. I love that. As you were mentioning, SentinelOne is another security vendor in the news this week. This cybersecurity firm unites endpoint, cloud, identity, and data protection, and they announced this week the acquisition of a company called PingSafe, which is a cloud native application protection platform startup. The addition of PingSafe to the SentinelOne family of solutions brings in PingSafe's CNAPP, a cloud native application protection platform that can deliver dynamic, real-time monitoring of multi-cloud workloads. It's super simple to set up and super simple to configure, and I think that's exactly what customers are looking for today. It also boasts very low false positive rates, also important, and it will aggregate intelligence to help detect toxic and exploitable vulnerabilities. So again, going to your point, being able to use AI-powered technology to find and alert you to these vulnerabilities is incredibly important. It allows your security teams to make quick decisions, and the reality is, it allows them to make those decisions without having to rely on humans finding the problem and alerting you. So I see this acquisition by SentinelOne as a huge deal. I'm so excited for both companies. It combines the strengths of SentinelOne's cloud workload protection, cloud data security, AI, and analytics capabilities with a modern, comprehensive CNAPP.
And overall, in their press release about this, SentinelOne said that they believe this integrated platform will provide better hygiene, coverage, and automation across an organization's entire cloud footprint. Cloud security, that's where it's at. We're going to see a lot of movement in this space, so I think it was a timely acquisition on SentinelOne's part. Yeah, cloud native has been a hole from a security perspective for a while. Absolutely. I'm really interested in these cloud native security platforms, because the hyperscalers do a great job, but if you're running, say, containers, container security is tough. And this addresses some of the native cloud technologies, which are kind of cool. I know we want to get into new AI threat vectors, and some of the ones we're starting to see exploited are phishing, vishing, and smishing. They're funny words. I always think of the Smurfs when I get to smishing. And in case you're not sure what some of them are: we all know what phishing is; it's been in the news and we've all read about it. But we might not know what vishing and smishing are. Smishing is done via text or instant messaging, so think WhatsApp or Telegram. The scammer provides their victim with a link which, when clicked, helps them steal personal information or upload malware, one of the two. Vishing, on the other hand, uses phone calls and voicemail to reach the victim. But AI is upping the ante here. These are old scams, but AI is upping the ante. So why don't you tell them how AI is upping the ante, Shelley? Well, first I want to give you an example of smishing that happened to me just this last week.
And it's funny, because I have been talking with a member of our team about getting together so that he could walk me through some functionality of a platform we're using. We had loosely talked about it, and I got a text message from him that said, hey, Shelley, this is Alex. Do you have a little bit of time to get together? So I messaged him back and said, oh, I'm finishing a call. I'll be free in about an hour, so we can definitely touch base then. And by the way, I had never texted with Alex before, but he and I had just had a conversation about getting together, so I wasn't surprised to hear from him by way of text message. And then the next message that I got was, okay, well, I've got to run a couple of errands, but I'm really in a jam. Can you do me a huge favor? Could you run out and get a couple of Apple gift cards for me? Well, I knew instantly that this was a scam. And these attacks are very coordinated and very personalized. They want you to think you're talking to the person that is reaching out to you. Another form of this is called CEO phishing or smishing, and that's when you get an email or a text message from someone posing as the CEO of your organization or some other person in a position of authority who asks you to do something or to click a link. We are wired to think, oh, that's my boss, I have to do what they want. You know what I'm saying? We're wired to react very quickly to things like that. I knew in an instant that Alex wasn't asking me to run out and buy Apple gift cards, but this kind of thing happens all the time. And again, what's creepy about it is that Alex and I had been talking by email about doing this, and yet I got a text message from him that was so personalized, so on target for what we were talking about, that there was every reason for me to believe it was him, except for the ask.
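The tell in that story was the ask, and asks like that follow patterns. Real anti-smishing products use ML models trained on huge message corpora, but a toy sketch shows the idea: score a message against a handful of red-flag phrases (all of these keywords and weights are invented for illustration).

```python
# Hypothetical red-flag phrases and weights for a gift-card smish.
RED_FLAGS = {
    "gift card": 3,   # the classic ask
    "urgent": 2,
    "wire": 2,
    "huge favor": 2,
    "can't talk": 1,  # excuses that keep you off a verifying phone call
}

def scam_score(message: str) -> int:
    """Sum the weights of every red-flag phrase found in the message."""
    text = message.lower()
    return sum(weight for phrase, weight in RED_FLAGS.items() if phrase in text)

msg = ("Hey, I've got to run a couple errands and I'm in a jam. "
       "Can you do me a huge favor and grab a couple Apple gift cards?")
print(scam_score(msg))  # 5: "huge favor" (2) + "gift card" (3)
```

The point of the anecdote stands either way: the most reliable filter is still a human who pauses at an unusual ask and verifies through another channel.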
So again, it kind of goes back to this: your employees are the lifeblood of your organization, but they're also always the weakest points when it comes to attack vectors. We were talking about AI being part of a cyber criminal's toolbox, and there are all different kinds of things to worry about. AI impersonation technology uses machine learning algorithms to impersonate people online. That's essentially what happened to me, right? Someone used technology to create a fake persona that tried to trick me into doing something or divulging sensitive information. AI algorithms can analyze huge amounts of data, like social media profiles, and use that data to create fake profiles that are believable. AI can also impersonate a person's voice. We were hearing just in the last few months warnings that if you get a call, Joe, and you answer the phone and I say, hello, is this Joe? and you say, yeah, this is Joe, and then I tell you that I've been in an accident and I really need you to wire me some money, the fact that you responded and said, yes, this is Joe, actually gives a cyber criminal your voice. They can take that voice and use it in other attacks. And to me, that is just so alarming, because think about yourself in any situation: if somebody has a voice recording that says, this is Joe, they can take that, use it to make calls to other people, manipulate your voice, and get them to do things. That's pretty scary. It is, and my brain goes right to, again, something you and I were talking about off camera: the idea of the law keeping up with this technology. Our whole lives, we've been trained to think that if you see somebody on camera doing something, well, it's got to be real. If you hear their voice and you recognize their voice, it has to be real.
And this is introduced as evidence in courts of law, right? So our minds all work that way: oh, well, it's on camera, it's got to be real. But the fact of the matter is that AI is changing that too. The International Association of Privacy Professionals just put out a kind of year in review for AI and the law, and I want to cover two items; I know you've got two that you wanted to talk about too. The first one is in the U.S., and I want to get this right, so I'm going to read it. The Federal Trade Commission put businesses on notice that existing laws, such as Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act, apply to AI systems. So what does that mean, and what are some of the ramifications? Well, this past year the FTC brought actions against Ring and Rite Aid for violative practices involving AI. So it was already starting, right? And then on October 30th of 2023, President Biden issued the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, recognizing the benefits of the government's use of AI, and he also detailed core principles, objectives, and requirements that would help mitigate risk. What are a couple that bubbled up for you? Well, you know, I can't have you touch on the Executive Order about AI without mentioning that some regulation is better than no regulation, and some policy is better than no policy. The US is always behind the EU as it relates to consumer protections and that sort of thing. I think some parts of this Executive Order are interesting, and some of the reporting requirements are interesting.
We touched on that a little bit in our episode last week, but it'll be interesting to watch this evolve. The other thing that I think is important is that on this policy and regulation path, we have at least two camps: people who are pro policy and regulation, who say this is super important to keeping people and businesses safe, and then people on the other side, many of them in the tech space, who say no, regulation is bad, it stifles innovation, let us do what we want to do, all that sort of thing. So this will really be interesting to watch evolve, for sure. And as you mentioned, we're also seeing city and state AI policy throughout the US evolve. We've seen lots of different regional action on AI. Let's see, I've got a couple of data points here. Colorado finalized rulemaking on profiling and automated decision-making. California proposed rulemaking on automated decision-making technologies. Several other states have passed laws that provide an opt-out for certain types of automated decision-making and profiling. Other state and city laws have focused on specific applications of AI, including things like child profiling, writing prescriptions, employment decisions, and insurance, all kinds of really personal things, right? We've also seen some states in 2023 establishing laws on government-deployed AI, which is always a very sticky wicket. For example, Illinois and Texas established task forces with the goal of studying the use of AI in education and in government systems to see what kind of potential harm AI could cause to civil rights. I have to laugh a little bit here, because Illinois and Texas could not be any further apart on the political spectrum, I think. So it's really kind of interesting that both of them are looking at the potential harm of AI to civil rights.
I think they're probably coming at that from different directions. Connecticut also passed legislation that established a working group on AI and requirements for government. Pennsylvania's governor issued an executive order establishing principles for government-deployed AI. So all of these things are in the works, and we're going to see more of that. The reality is that threat actors have always found a way. They've always been on the cutting edge of how to use technology to do dirty deeds. And if AC/DC doesn't run through your mind when you hear that, I really don't know what to say. I say that in jest, but there are some early indicators of this. We've already seen fake news articles from leading periodicals. We've seen legal cases. We've seen instances where news videos have already been altered. And when you think about what's happening, videos can be altered, voices can be altered, ads can be altered, there are faux product announcements. I mean, our ability to determine what's real and what's not is going to be challenged in the days and months and years ahead unlike anything we've ever seen before. And I think that's particularly alarming as we head into an election year here in the United States. It's one thing to have the challenge of not knowing what and who to believe when it comes to sources, while being aware enough to know that we have to question these things. But there's a whole huge segment of the population, and I'm not casting any aspersions here, who isn't paying any attention to this at all. They don't know that they have to make more of an effort than ever before to discern between what's real and what's not, how to do that, and where to go for resources on that.
And I think that's really going to be a big challenge for us moving forward. Yeah, I do too. And in wrapping up here, I just want to talk about something you mentioned when we started: the ability of AI to write code. You know, you had to know how to code at least a little bit before AI came along; now, not so much. Yeah. You don't have to be a coder to code; AI can do it for you. And that's going to change things too. There was some recent Stanford research, from Professor Dan Boneh and his co-authors, and if I said your name wrong, Professor, I'm sorry. They shared their findings in a study called, Do Users Write More Insecure Code with AI Assistants? And the authors found that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. So the authors concluded that AI assistants should be viewed with caution, because they can mislead inexperienced developers and create security vulnerabilities, right? Well, I think the key thing here, and I think we see this when we're using AI in general, is that you have to approach it with a degree of caution, knowing that just because it's AI-powered does not mean it's always correct. And a key point here, I was looking at the research that you shared, and what really jumps out to me is that the people who were using an AI assistant believed that they wrote more secure code than those who did not have access to the AI assistant. So that's also an important part of this equation: you can't be tricked into thinking, I'm using AI, so everything I'm doing is 100% accurate. It is truly going to change what we believe and what we think is true, right? So that's sort of the takeaway, isn't it? I hope we've given folks some things to think about here. And as always, it's been really cool getting to spend some time with you. Well, absolutely, Joe.
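For listeners who want a concrete picture of the kind of insecurity the Stanford study warns about, here's a hypothetical example (not drawn from the study itself): an AI assistant might plausibly suggest building a SQL query by string formatting, which invites SQL injection, where the parameterized version a careful reviewer would insist on does not.

```python
import sqlite3

def find_user_insecure(conn, username):
    # The kind of code an AI assistant might plausibly suggest:
    # string formatting invites SQL injection.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_secure(conn, username):
    # Parameterized query: the driver treats the value as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"
print(find_user_insecure(conn, payload))  # [(1,), (2,)] -- injection dumps every row
print(find_user_secure(conn, payload))    # [] -- payload treated as a literal string
```

Both versions look plausible at a glance, which is exactly the study's point about misplaced confidence: the code runs, so the developer believes it's fine.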
And as I mentioned at the beginning of this show, this is the first in a series of webcasts talking about AI and the evolution of the AI threat. We started with three stages that we think it's important for you to be watching. In the next episode, we're going to talk about networking and the role of AI there, and things to watch out for. We appreciate you spending time with us today. We hope you'll remember to hit that subscribe button, whether you're on YouTube, on your streaming service, or reading this. Subscribe, come along on this journey with us. And if there's anything you would like to have covered, message us; I'll leave our contact information in the show notes. We always want to hear from you. And with that, my friend, Joe Peterson, thank you so much for spending time with me today. I'll see you next time.