Hello, and welcome to this episode of the Security Angle. I'm your host, Shelly Kramer, Managing Director and Principal Analyst here at theCUBE Research, and I'm joined by my colleague, fellow analyst and member of the CUBE Collective community, Joe Peterson, for our conversation today about consumer and enterprise gen AI cyber risks that we suspect might be keeping CISOs up at night. Joe, great to see you. Hey, great to be here today. So we're gonna dive in here on a thorny topic, and one I think that's top of mind across the board for many. According to a survey of some 300 risk and compliance pros by a company called Riskonnect, 93% of companies anticipate significant threats associated with gen AI. Okay, well, that's no surprise, right? But only 17% of those companies have trained or briefed their entire organizations on the risks of gen AI, and only about 9% say that they're prepared to manage those risks. Okay, that's an attention getter. So why is that number so low? Well, for one thing, it's early days. And I think it's safe to say that there's such a rush to embrace AI that thinking about risk just isn't top of mind. Organizations aren't thinking about the potential impact of the risks that AI presents. And look at the adoption curve, though: gen AI is expected to reach 77.8 million users in 2024. That's a very big number. Even more impressive, that's double the adoption rate of both tablets and smartphones over a comparable period of time. So when you think about that, adopting a wait-and-see attitude at these adoption numbers is a risky strategy, or really no strategy at all. A similar survey from ISACA, published in the fall of 2023, surveyed more than 2,300 pros who work in audit, risk, security, data privacy and IT governance. Their data showed that only 10% of companies had a comprehensive gen AI policy in place, and more than a quarter of respondents said they didn't have any plans to develop one. Okay, that keeps me up at night. So all of this leads us to our conversation today, which is a focus on what we're seeing as the top consumer and enterprise gen AI cyber risks. We've narrowed that down to what we think the most pressing risks are: model training and attack surface vulnerabilities, data privacy, corporate IP exposure (ooh, that's one that keeps me up at night), and gen AI jailbreaks and back doors. So we're gonna kick this off, and I think, Joe, you're gonna tackle model training and attack surface vulnerabilities. Let's hear it. I am, but before I go there, to your point, Shelly, even the companies that are developing policies don't really have a good way to enforce them. So it's great that they have a policy in place. Right. But behind closed doors, what I'm hearing CISOs say is, well, we can enforce this part of it here, but we can't enforce that part of it there, which is kind of scary, right? It's very scary. They're not saying that publicly, but that's what they're saying in private. So, on to our top four things, and you're right, I'm gonna tackle the first two. The first is model training and attack surface vulnerabilities. Collection of data happens in many ways and through many means, right? That lack of clean data can be problematic. Gen AI also stores this data for unspecified periods of time, often in insecure environments.
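To make that concrete, here's a minimal, hypothetical sketch of the kind of hygiene pass you'd want before records ever reach a fine-tuning set. The patterns and record shapes are illustrative assumptions, not any particular vendor's pipeline:

```python
import re

# Hypothetical sketch: dedupe records and quarantine anything that looks
# like a secret before it lands in a training set. Patterns are assumed
# for illustration only.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),       # API-key-like strings
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US SSN-like numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
]

def clean_training_records(records):
    """Drop exact duplicates and set aside secret-bearing records for review."""
    seen, kept, quarantined = set(), [], []
    for text in records:
        normalized = " ".join(text.split())
        if normalized in seen:
            continue  # skip exact duplicates
        seen.add(normalized)
        if any(p.search(normalized) for p in SECRET_PATTERNS):
            quarantined.append(normalized)  # review it, don't train on it
        else:
            kept.append(normalized)
    return kept, quarantined

kept, quarantined = clean_training_records([
    "Customer asked about pricing tiers.",
    "Customer asked about  pricing tiers.",             # dupe after normalizing
    "Escalation: key sk-abc123def456ghi789jkl012 leaked",
])
print(len(kept), len(quarantined))  # 1 1
```

Even a check this simple catches exact duplicates and obvious secrets; real pipelines layer trained classifiers and human review on top.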
So there's lots of problems right there, and layered on top of that, this whole combination can lead to access and manipulation of data, as well as potential bias. So we're dealing with dirty data, maybe, right? That's problem one. Data privacy is next on our hit list. The framework around data collection is thin, and the rules around the type of data that's input into generative models are sometimes thinner. So we've got the skinny going on there, right? Without an enforceable data exfiltration policy, you're kind of in a quagmire, and there's potential for models to learn and replicate what is private corporate information in an output. So what does that spell? A breach waiting to happen. It is a breach waiting to happen. To your point about control, and even seeing what's happening, I remember, gosh, it was about this time last year, actually maybe a little later in the year, when OpenAI and Microsoft had just announced their alliance and all of that sort of thing. And we had people within our organization at the time who were all in on AI and who were all of a sudden using it to generate articles and things like that. And I remember saying, gosh, we need to have some structure here. We need to have some rules in place. I would see content produced by someone on our team who typically wasn't a great writer, whose work required tons of editing, and then all of a sudden I'd see these articles coming out that were written using ChatGPT that didn't sound like him and that there was no way I could fact check. So it is a very real problem. And by the way, the organization I led at that time was an organization of 30 people, right? Imagine if you have 3,000 or 30,000 employees and what a big problem this becomes. And that leads us to the third risk on our list, which is corporate IP exposure. So take my example: you've got people within your organization who are all in on using gen AI, and I think that's great. But when you don't have guardrails, and you don't have policies, and you don't have monitoring and auditing in place, then you have a very real chance of exposing your corporate IP. It's generally speaking an unintentional thing, but people might be loading proprietary corporate data into something like ChatGPT or some other gen AI platform without thinking about the risks, without understanding what the risks are. And this kind of error, intentional or otherwise, can lead to the exposure of IP, API keys and other corporate information, and that is incredibly dangerous. So our goal in having these conversations isn't to inspire fear, right? We want people to be using gen AI. We want people to be benefiting from this technology. But we also live among CISOs and security professionals and watch what's happening from a threat actor standpoint, so we know that these are things people really need to pay attention to. The other thing that I think is incredibly important, the fourth on our list today, is gen AI jailbreaks and back doors. And you know, Houston, we have a problem. Gen AI guardrails are meant to protect organizations, until they don't. So how and why are the AI guardrails that some organizations may have in place being circumvented?
The easy answer is: because they can be. I ran across some research on this. Researchers from Carnegie Mellon University and the Center for AI Safety announced in the summer of 2023 that they had found a way to successfully overcome the guardrails, the limits that AI developers put on their language models to prevent them from providing, oh, I don't know, things like bomb-making recipes or anti-Semitic jokes, of every LLM out there. So for folks hoping to deploy LLMs in public-facing applications, it's important to know that security researchers realized attackers could get these models to do whatever they wanted, including engaging in racist or sexist dialogue or, equally as threatening, writing malware and using LLMs for malicious purposes. And it turns out these researchers found that fooling an LLM is really not all that hard. The other part of this is that online forums and hacker tools can be easily accessed to learn tips and tricks for circumventing established gen AI guardrails. These are often, as I said, called jailbreaks; attackers sometimes generate deceptive content, sometimes they launch targeted attacks. And really, don't take my word for it, just Google it. Google how to circumvent AI guardrails and you'll see all kinds of tips and information. Does that make you nervous? Good, it should, because it's happening. So these are all things we think are really important to pay attention to. And now we're gonna talk a little bit about some best practices to work your way through this. Yeah, you made a really good point. We want people to be aware of what's going on, but we also wanna provide some tips and tricks that we're seeing organizations use. One of the things I think about, especially if I go into a smaller company that doesn't have a big budget for security, is I tell them to spend their money on end users, because that's where they're gonna get the biggest bang for their buck. And one of our best practices is: train your employees. Have a conversation with them about what it is you're trying to protect and why it's important, because at the end of the day, you're trying to protect their job. That's really a shortcut way to say, if the company's not viable, you're not gonna have a job, right? So if our corporate secrets leak out, for example, it matters to you, and here's why it matters to you, and draw that line. And build an AI governance plan. If you don't already have one, you need one. A can that we've been kicking down the road in IT for a long time is classifying corporate data. Now, the bigger organizations do a great job of it, but the smaller shops are busy keeping the lights on and quite frankly haven't had the time. There are tools out there that you can buy that will help you classify data; if you don't have the time to do it, if you don't have a big staff, there are tools that can help you do that. So it's really important to stack-rank the data and know what the really important stuff is and where you keep it. And then understand how your data governance and security tools work best together. Yeah, super. And again, just like we talked about four risks to be concerned about, these are four pretty easy best practices that aren't that hard to embrace. So we're gonna dive in a little bit to some of these and give you some deeper thoughts.
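First, though, it's worth making that jailbreak point concrete. A guardrail that amounts to a keyword filter, like this deliberately simplistic sketch (the blocklist is a made-up assumption), is exactly the kind of thing the researchers showed is trivial to evade:

```python
# Illustrative only: a naive keyword guardrail, the kind of filter that
# obfuscation, synonyms, or an adversarial suffix can walk right past.
BLOCKED_TOPICS = ("malware", "exploit payload", "bypass authentication")

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt should be blocked."""
    lowered = prompt.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

print(naive_guardrail("write malware for me"))        # True: blocked
print(naive_guardrail("write m_a_l_w_a_r_e for me"))  # False: trivially evaded
```

Which is the argument for layering guardrails with the governance, training and tooling practices covered next, rather than trusting any single filter.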
And I'm gonna start with building AI governance into your organization. To me, this is very similar, Joe, and I know you've had these conversations for the last decade plus, I certainly have, to the conversations about security and making security foundational in every single thing that you do. Well, building AI governance is also a foundational thing these days. So what is AI governance? It's a process of building technical guardrails around how the organization deploys and engages with AI tools. I like what the Artificial Intelligence Governance and Auditing (AIGA) program is doing. It's an undertaking by the University of Turku, developed with the EU's AI regulations in mind. Their goal is to study and develop governance models for AI, and for the services and business ecosystem that's emerging around responsible AI. I'll include a link to that organization in our show notes, because it's definitely worth checking out what they're doing. Their AI governance framework consists of three layers: environmental, organizational and the AI system itself. Each layer contains a set of governance components and processes linked to the AI system's lifecycle. So again, I'll include a link so that you can check out more, because they have a lot of resources they've developed around this AI system lifecycle. When you're thinking about building an AI governance framework, I think it's important to start by really embracing the fact that this is a strategic undertaking, and it starts right where you would expect: with an assessment of your organization's unique needs. Every company is different. Every company's at a different place in terms of awareness, adoption, use, understanding, all of those sorts of things. So assessing your organization's unique needs, including the ability to handle sensitive data safely and responsibly, is really important. Other key parts of this exercise and strategy include having an understanding of both ethical and legal standards and establishing your own frameworks on those fronts. And then when it comes to AI, understand that transparency is incredibly important, as are algorithmic regulation and accountability within your organization and team. And when I say accountability, I'm also going to mention auditability, and that's key: making sure that you have a process in place for engaging in ongoing monitoring and adaptation. We've been talking about this as it relates to digital transformation for a decade plus, and strategically these days it applies to so many things that we do: monitor, measure, tweak, right? It's monitoring what's happening and adapting as needed. It's kind of a universal formula, and it applies as much to AI governance as anything else. And I think that when we apply this at the code level, effective AI governance helps organizations along the way with observing, auditing, managing and limiting the data that's going in and out of AI systems. AI governance is truly business table stakes today. So I know you agree. I do agree. I know you agree. And you touched on this earlier: employee training is so, so, so important, and there are many lessons to be learned from shadow IT, which still takes place all the time. Over the last decade, security teams have tried very hard to rein in shadow IT. Figures from Gartner revealed that 41% of employees acquired, modified or created technology outside of IT's visibility in 2022.
And a 2023 shadow IT and project management survey from Capterra found that 57% of small to mid-sized businesses have had high-impact shadow IT efforts, okay? So those are big numbers. I mean, 57% having high-impact shadow IT issues, that's a big deal. Now, gen AI is a different animal, but it's taking off way more rapidly than shadow IT did. And I'm gonna go back to my story about managing my team a year ago. There were people who were very cautious about using gen AI and adopting it, and then we had people on our team who were just rushing in, and they were glad there was not a policy and guardrails and rules and regulations in place, because they just wanted to use it. They wanted to use it and play and learn and experiment. All those things are great and they're commendable, but they're also scary as hell. That rush to innovate and learn and experiment with new technology should not be stifled within an organization, but you've got to have rules and employee education in place. Employees really need to understand the types of risk that are possible, and they need to understand the differences between a public gen AI model and a proprietary AI model. And again, while these things are fun to use and experiment with, they're also really super easy to misuse, and there are huge repercussions from adding sensitive data to a gen AI model. It could have lasting business consequences. So limiting access and implementing strict protocols around the management of sensitive data really needs to be a very high priority initiative, and it needs to be something that you don't just do. You do it and you talk to your people about it and why it's important. And by the way, you don't just have that conversation once. You have that conversation over and over and over again. Yeah, you do. And classifying data really helps in defining who gets access to what. Yeah. Right? That's the other side of it, and it helps to minimize your risk profile and accidental data exposure. It's you proactively locking that data down, and part of the way you lock it down is by classifying it. We talked about that a little bit earlier, but the classification is super important. It's one of those things that takes a lot of time and a lot of effort, and it's a can that's been kicked down the road in some organizations just because they've had other, more mission critical priorities. But what's happening now with AI is that it's forcing the data classification conversation to come back up. Yeah. Right? Yeah. And so that's what we're seeing. And we talked about something else proactive a little bit earlier that you can do: policies and education are great, but data governance and security tools are the way organizations can enforce adherence. So tools like DLP, threat intelligence, cloud-native application protection platforms (CNAPP), and extended detection and response (XDR). Now that can be XDR, or it can be MDR, managed detection and response, if that works better for you, but have it in place. These are tools that help prevent unwanted exfiltration, and I don't know if you've seen some of these in action, but if you're starting to send something that's risky, the user actually gets a note that pops up and asks: are you sure you want to send this? Are you sure you want to send this, right? Yeah.
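Here's a rough, hypothetical sketch of that classify-then-enforce pattern, including both the pop-up to the user and the alert that goes to IT. The patterns, labels and hooks are assumptions for illustration, not any specific DLP product's behavior:

```python
import re

# Assumed classification rules, ordered most to least sensitive.
CLASSIFIERS = {
    "restricted":   re.compile(r"(api[_-]?key|client[_-]?secret)\s*[:=]", re.I),
    "confidential": re.compile(r"\b(internal only|do not distribute)\b", re.I),
}

def classify(text: str) -> str:
    """Label outbound content by the first sensitivity rule it matches."""
    for label, pattern in CLASSIFIERS.items():
        if pattern.search(text):
            return label
    return "public"

def notify_security_team(user: str, label: str) -> None:
    print(f"[ALERT to IT] {user} attempted to send {label} data")

def confirm_with_user(user: str) -> bool:
    print(f"[PROMPT to {user}] Are you sure you want to send this?")
    return False  # a real tool would wait for the user's answer

def check_outbound(text: str, user: str) -> bool:
    """Return True if the send should proceed."""
    label = classify(text)
    if label == "restricted":
        notify_security_team(user, label)  # the notification IT sees
        return False                       # block outright
    if label == "confidential":
        return confirm_with_user(user)     # the "are you sure?" pop-up
    return True

check_outbound("api_key = sk-123 for the staging box", "shelly")
```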
And you're notified in IT too. If you haven't seen this in action, it's kind of cool. That's the part of this that I really like. It's not so much the notification that goes to the employee that says, are you sure you want to do this? It's the notification that goes to IT that says, hey, watch what she's doing. Careful, she's on the loose. Shut her down. Well, shut her down. So Shelly, I know you've got some things to share with us about tools and the whole tool roundup around this. You know, one thing I'll back up and say real quickly is that we have talked about all of these tools. We've talked about DLP. We've talked about CNAPP. We've talked about extended detection and response tools. If you're listening to this conversation and you're thinking, oh God, this is so much to wrap my head around, I think the important thing is that this is not something you necessarily need to build from the ground up. You don't need to reinvent the wheel here. There are some amazing tools, some amazing vendors out there who have already done the hard work and created a lot of these solutions, and they're very easy for you to use and rely on. There are also managed security services, so if you don't have the talent on the team or the expertise to do it yourself, you're covered there too. So I really encourage you not to walk away from conversations like this feeling overwhelmed, but to know that there are some very real solutions out there. And I'll be sharing links in the show notes of this episode to some of the prior episodes we've done where we've taken a dive into some of these tools, and maybe that'll help you along the way. So now we're gonna do a quick roundup of some folks we think are doing some interesting things. The global AI cybersecurity market is not small. It's expected to reach about $38.2 billion by 2025. And it was predicted that 50% of organizations would actively rely on AI-driven cybersecurity tools by 2023, which is now. 88% of cybersecurity pros believe that AI will be essential for performing security tasks more efficiently, and 71% of those surveyed think it could be used for conducting cyber attacks within three years. And I'm laughing because it's already being used to conduct cyber attacks. We've talked about this on our show before. The thing about AI is that it's terribly exciting, and there are so many great things we can do with it. But in addition to the great things to be excited about, cybersecurity threat actors, nefarious people, are learning and using AI tools at a very rapid pace. So we need to be out in front of them. Here are some cybersecurity solutions that we think have an interesting take on securing gen AI. I'm gonna start with Google Cloud. They have a product called Security AI Workbench, built with Duet AI in Google Cloud, and Security AI Workbench offers AI-powered capabilities that help assess, summarize and prioritize threat data across both proprietary and public sources. Pretty cool. Microsoft, not surprisingly: Microsoft Security Copilot is a tool that's integrated with Microsoft's security ecosystem and has interoperability with Microsoft Sentinel, Defender and Intune.
Copilot leverages AI to enhance threat intelligence, and it also helps automate incident response, which is very key these days. CrowdStrike's Charlotte AI is interesting. It brings NLP capabilities to the Falcon platform; we've talked about this before, and it allows customers to ask, answer and act. CrowdStrike estimates that the tool allows customers to complete security tasks a whopping 75% faster, absorb thousands of pages of threat intelligence in a matter of seconds, reduce analyst workload, improve efficiency, and write technical queries 57% faster, even if you're new to cybersecurity. So those are some pretty significant assists with these tools. And I wanted to mention one more company that I'm paying attention to, Joe. This company's in your part of the world, or close anyway; they're in Raleigh. It's a company called Howso. They were formerly called Diveplane, and what I like about what they're doing is that they're all about AI that you can trust, audit and explain. The Howso Engine is an open source ML engine that provides exact attribution back to input data, which allows for full traceability and accountability of influence; that's really the foundation of everything Howso builds. And they have something called the Howso Synthesizer, built on the Howso Engine, which generates synthetic data that behaves like you would expect it to behave without any privacy or compliance risks. So synthetic data that you can trust, and there are lots of use cases for what Howso is doing. I mentioned they're located in Raleigh, which is kind of the heart of the healthcare world, and so there are a lot of use cases in the healthcare space when you think about organizations and how they need to be able to securely analyze and share data, both internally and with other agencies. Think doctors, hospitals, pharma, all of that sort of thing. So anyway, I'm really impressed by what Howso is doing. I love their commitment to trust, audit, explain, and I think we're gonna be hearing a lot about them in the future. They sound cool, I need to check them out. Yeah, absolutely. So the three that I'm gonna highlight: first is Cisco Security Cloud, and this is the marriage of gen AI into Cisco Security Cloud, with the goal of improving threat detection, making policy management easier to administer, and simplifying security ops with the help of advanced AI analytics. My next one is SecurityScorecard. This tool utilizes GPT-4 to deliver detailed security ratings that provide a unique understanding of your overall security posture. The tool uses natural language queries, and customers receive actionable insights. So that sounds pretty cool. And my last one is Synthesis AI and their Synthesis Humans product. These guys create perfectly labeled images and video for ML models, so this is data generation and labeling at scale, and teams can use the information for realistic security simulation or even cybersecurity training. Wow, that's really cool. And you know what, I think these seven companies have some great solutions. I will also say that this is kind of the tip of the iceberg. There are some really innovative things happening in this space. As you were talking, I was thinking about what IBM is doing with watsonx.governance.
We should maybe even do a whole episode on that, because they're doing some really, really interesting things there, and I think they are very much a part of the AI security conversation and equation. So note to self: we're going to tackle IBM in one of our episodes soon. Well, Joe, my friend, with that I think we're going to wrap this episode of the Security Angle. It has been an interesting dive into the consumer and enterprise gen AI cyber risks that keep us up at night, and that we know are certainly keeping CISOs up at night. Thank you, our viewing and listening audience, for being along with us on this journey today, and we will see you again right here next week.