Welcome back to SuperCloud 3, everybody, where we explore the critical issues around cross-cloud security and the impact of generative AI and large language models on this space. It's our pleasure to welcome Jaya Baloo of Rapid7. Jaya is the Chief Security Officer, which, by the way, is a superset of the CISO title. Welcome, Jaya, good to see you.

Thank you so much.

So when I talk to Chief Security Officers or CISOs at technology companies, I find that they have sort of a dual role. One is looking after their internal security and maybe their broader ecosystem, but salespeople also love to trot folks like you out to help educate their customers on your best practices. So how do you balance your time? Where is your focus?

To be very honest, I wish I could say that there was a balance. I've tried to figure out a long-term strategy of what I find important, and then on a day-to-day or week-to-week basis I prioritize the urgent over the important, while keeping those important things in the background and still trying to get them done. So it's a little bit of an impossible task, to be honest.

Okay, so it's got to be frustrating sometimes. You're probably putting out a lot of fires, and there are paper cuts, but then you have initiatives. When I talk to CSOs and CISOs, zero trust is a journey, and pretty much everyone is on it. That's the strategic aspect of your business, so you still have to fit that into your day-to-day. How do you think about that zero trust journey?

So zero trust is only a part of it, right? We've had this concept of zero trust, or the principle of least privilege, forever. Since Charlemagne we've tried to understand how to parcel out our trust, and then there's "trust but verify" from the Cold War. So we know this stuff. However, what I think is really more important is that we have an overall plan.
So I like to say that every company on earth needs to get three things right. First, they need to understand themselves and their risks: security awareness. Then they need to be able to see how it's going: visibility. But, you know, they get drowned in data, so it's really visibility plus risk intelligence, so they can prioritize the important things. And then finally, security capability, so that they know what to worry about, they can see how it's going, and they can act as quickly as possible. Into those three things I've put all of the principles: what kind of awareness tactics we need, both internally and toward customers; then, in terms of visibility, how we improve our logging, our threat intelligence, and other capabilities; and then, when it comes to security capability, the foundational things, like understanding our architecture, making architecture improvements, building network forensics capacity, or re-architecting a network for zero trust.

Yeah, that's a nice framework. And as technology evolves, you can evolve your process as well. Broadly, given this theme of supercloud, how do you think about cross-cloud complexity as it relates to cybersecurity?

So let's start with the notion that having multiple cloud providers is a benefit and a feature, not a bug. And the complexity is definitely there. But, at the risk of sounding like marketing, I have to use all of our own products, the ones that we give to customers; we use them internally ourselves. We are effectively customer zero for all of those things. And I manage our disparate cloud environments using our own cloud security tooling, called InsightCloudSec.
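Managing disparate cloud environments from one place amounts to evaluating a single set of technical policies against every provider's asset inventory. A minimal sketch of that idea (illustrative Python only, not InsightCloudSec or any real cloud SDK; the asset fields, names, and rules are all invented for the example):

```python
# A toy, provider-agnostic policy check: the same baseline evaluated
# against mock asset inventories from three clouds. Illustrative only;
# every field and rule here is invented, and no real cloud API is used.
inventories = {
    "aws":   [{"name": "logs-bucket",  "public": False, "encrypted": True},
              {"name": "demo-bucket",  "public": True,  "encrypted": False}],
    "gcp":   [{"name": "ml-artifacts", "public": False, "encrypted": True}],
    "azure": [{"name": "web-assets",   "public": True,  "encrypted": True}],
}

# Policy-driven baseline: storage must be private and encrypted.
baseline = [
    ("no-public-storage", lambda asset: not asset["public"]),
    ("encrypt-at-rest",   lambda asset: asset["encrypted"]),
]

def findings(inventories, baseline):
    """Return (cloud, asset, rule) for every baseline violation."""
    return [(cloud, asset["name"], rule)
            for cloud, assets in inventories.items()
            for asset in assets
            for rule, check in baseline
            if not check(asset)]

for violation in findings(inventories, baseline):
    print(violation)
```

The point of the sketch is the shape of the problem: the rules live in one place, and the per-provider differences are confined to how each inventory is collected.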
And that means that I can look at the GCP workloads and the AWS and Azure stuff in, you know, a single pane of glass, have a good overview, have that asset understanding and ownership status, and understand how our policy-driven baselines are complied with, and whether we have the right coverage and the right policies in place. And when I say policies, I mean technical ones, not paper ones: the right technical policies in place to actually be able to monitor the things that are important.

So as a practitioner, how do you deal with this? I mean, obviously Rapid7 does a lot, but it doesn't do everything. You always hear about the dozens or sometimes hundreds of tools within an organization. How do you as a practitioner deal with that complexity specifically?

So I try to make things really simple. When I build that strategy, I use those three things I told you about in my framework to determine all of the strategy sub-components and everything that I'm working on. But when it comes to an operational focus, again, I really think complexity is the enemy. So if we keep it simple, I focus on two things initially: vulnerabilities and incidents. I have a baseline, we have frameworks, we have all different types of certification schemes. I want to understand where those areas are, those gaps, those potential places of exploitation for an attacker. That's where I put my energy, because everything that is allowed to remain persistent, both in terms of vulnerabilities as well as long-running incidents, presents an opportunity for the attacker. And those are the things I'm trying to diminish. I'm never going to get rid of them completely; I'm just trying to diminish their window of opportunity to do something bad.

So when you think about the shared responsibility model across clouds, there are certain things that the cloud providers do. I mean, Microsoft, for example, does a lot.
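The window-of-opportunity framing above lends itself to a simple metric: how long each vulnerability or incident has been allowed to remain open. A minimal sketch of that metric (the records, names, and dates are hypothetical, invented purely for illustration):

```python
from datetime import datetime

# Hypothetical vulnerability records: (id, detected, remediated-or-None).
vulns = [
    ("VULN-A", datetime(2023, 5, 1),  datetime(2023, 5, 4)),
    ("VULN-B", datetime(2023, 5, 2),  None),   # still open
    ("VULN-C", datetime(2023, 4, 20), datetime(2023, 5, 10)),
]

def exposure_days(detected, remediated, now):
    """The attacker's window of opportunity, in days: detection until
    remediation, or until now if the finding is still open."""
    return ((remediated or now) - detected).days

now = datetime(2023, 5, 12)
# Rank longest-open first: these are the windows to shrink.
ranked = sorted(vulns, key=lambda v: exposure_days(v[1], v[2], now), reverse=True)
for vid, detected, remediated in ranked:
    print(vid, exposure_days(detected, remediated, now))
```

Sorting on exposure rather than raw severity captures the point being made: persistence itself is what hands the attacker their opportunity.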
They compete directly with a lot of companies. Amazon, you know, maybe focuses more on the infrastructure side and leaves a lot of meat on the bone for the ecosystem. When you think about filling the gaps that maybe Rapid7 doesn't cover, how do you approach that? You've got the framework, but then ultimately you need technologies. So do you partner up with folks that are adjacent to you, that you're not competitive with? What does that mosaic look like? Sorry, I didn't mean to interrupt.

No, that's okay, yeah. I was just enthusiastic to say, absolutely. Look, honestly, because again we are a multi-cloud environment, it means partnering up, and we have a very strong partnership with Amazon, and also with GCP. We need to work together. And in that sense, we're very much agnostic. It's really about how we can get the best benefits of the integrations that are already there, make sure that the products work together, and then give ourselves the greatest degree of visibility and control possible. So yes, absolutely yes, very much so, for partnerships and integration.

I wonder if you could talk to developers, both your developer colleagues and your own developers in your organization. They want to get code running. They'd like to do it on one machine, perfect it, and then deploy it in the exact same environment, ideally, because it's so much easier to work that way. When you think about multi-cloud security, take a simple example: when you onboard to AWS, let's say a single cloud, you might spend hours or sometimes a day or more creating permissions and roles and hierarchies in one environment, let alone multiple environments. That makes you error-prone, and now you've got containers to worry about, and you've got to do this across clouds. So are you able, both internally and with what you see within your customers, to create that common experience across clouds, and how do you do that?
Again, I think it's about setting up those baselines, and luckily my team doesn't do that. We have an amazing platform team and amazing engineers who help create those baselines and those builds that engineers can use regardless of which cloud environment they're in. I think it's about trying to make sure that we're not being too rigid with our developers, that they have a degree of freedom and possibility, while at the same time maintaining the standards we want across the board for security and quality. That last part is not easy. So it's really about continuously pushing out those baselines. And you mentioned something that I found really interesting: doing everything in the same place. From a cost perspective, I think that does make sense, trying to bundle those things in the same place. But I have to tell you, I think that fundamentally, when you want to address customer needs, it's better to have that diversity, because customers don't have vanilla environments. So actually having that experience, being able to understand the pitfalls and the advantages of having these multi-cloud solutions, or problems in that case, actually helps you have a better conversation with those customers when they're trying to implement those same solutions.

Yeah, and you're right. I mean, developers want to tap best-of-breed; some cloud vendor is going to come out with some hot new thing. We've certainly seen that with OpenAI, and they want to tap it. And so your point about standards is an interesting one, because if you take the supercloud concept, it is about yet another abstraction layer. Whether it's virtualization or containers, we're always abstracting in this industry, but sometimes you want to go deeper into the primitives, because you might have some standard and you might need to tap some capability. So, to your point, it's not a no-brainer to provide that level of standardization.
But at the same time, you have to create an environment that minimizes complexity. So again, it's a balancing act, I presume.

Yeah, yeah. And again, I don't think we're great at the balancing act, right? I don't think anyone's really going to say, oh, I've got that down. I think we're still trying to figure out what the optimum of usability versus security is, and these are discussions we've been having for a very long time. I don't think you can take a fundamentalist approach, like, no, it should always be security first, because I actually think we sometimes forget why these principles exist. They're here to further the business. They're here to accelerate our deployment, not hold it back.

Yeah, and AI, generative AI, is like yet another abstraction layer. Whereas Amazon turned the data center into an API, now we're going to be speaking in natural language to program, to access technology. So let's talk about AI a little bit. A many-part question here. First of all, how do you think about AI, both within your existing products and in using AI for your frameworks: to know better, maybe to prioritize, to visualize, and maybe to inject intelligence into your systems? How do you use it?

Again, I have a certain professional deformation. Whenever I think about AI, I think about it from a very fundamental question: do the defenders have the advantage, or do the attackers? So I really think about it very much from a security orientation. And I think that for a very long time, we weren't seeing anything but an advantage for the defenders: building it into tool sets, finding ways to make things better, using more fine-grained tools to sift through all the information we needed to wade through to see if there was an attack. I think we've had that advantage for some time.
I don't know that we will retain it for the same amount of time, just because of the accessibility of some of these LLMs for doing things we wouldn't consider so well-intentioned, things on the attacking side rather than the defending side. And I think that's the area I'm most concerned about now. So the potential for attack is the thing that most intrigues me right now about AI in relation to security.

So your premise, if I infer correctly, is that the technology industry had priority access to AI prior to the AI moment heard around the world, and now the attackers have much broader access. They're obviously using it to write better phishing emails, but there are so many other ways they can use it. And I tend to agree with this. It's very unclear whether AI is ultimately going to be of greater benefit to the attackers or the defenders. Maybe it just accelerates or accentuates this never-ending escalation.

Well, very foundationally, if we say that you can use AI, for example, to write better code, it doesn't have to be code that's used only by the defender. It can also be code that's used by attackers. So like all technology, it can be dual-use: it can be used for good and it can be used for bad. The issue we need to understand is how defenders can get better at thinking proactively about defensive strategies to counter the threats we know are coming.

Right. And so I want to end on culture, because I often say bad user behavior is going to trump good security every time. So it's super important that people are hyper-aware of their environment and of best security practices from a user standpoint. How do you go about developing a security-conscious culture?
Well, I mean, it's somehow easier in a security-oriented company to put security first, but that being said, I really like to bring knowledge about what's happening in the rest of the world back to the company on a weekly basis, just so that everyone is as aware as the internal security team: look at what's happening now. And it's everything from attacks happening elsewhere that we can learn from to new developments. In relation to AI, for example, you might have heard of DarkBERT, which is an LLM trained on darknet data. It's eventually going to be a tool for law enforcement to search through data that's out there on the dark web and do something meaningful in terms of finding attackers, as well as data and all the different types of things that could have been compromised. So I think even here, just knowing about the potential and the capability of the people who don't have the best intentions in mind is something really, really important to bring to the fore at every company, in order to make sure we understand the why of doing security. It helps remind everybody what the value of doing good security is within a technology organization.

So at a security company like Rapid7, Jaya, I would imagine your entire staff has an appetite for that type of education. But at many of your customers, when you go to an insurance company and get down to the claims agent, they might get fatigued; their eyes start to glaze over when they hear, you know, here's another one, because you open the paper every day. Have you seen any best practices to keep those folks at the front lines engaged and up to date?

Yeah, absolutely. Look, good threat intelligence should start with: what is there to learn about myself?
So obviously, information that is of direct relevance to that company is where we start the messaging. And then it's, you know, me, people like me, and then everyone else. When you look at what's in the paper, it's all the other stuff; it's the generic stuff that you get first. So I think the best rule of thumb is to make it as personal as possible. And, you know, people are multiple personas. They're not just the humans that work at companies; they're also humans that have devices at home, maybe smart lamps or robot lawnmowers, and you want them to understand that too. But I think it really begins, from a corporate perspective, with what is directly relevant to their company, then their industry, and then the rest.

Know thyself. Good advice, Jaya Baloo. Thanks so much for coming on theCUBE and helping us with SuperCloud 3.

Thank you so much for having me.

Thank you. You're very welcome. All right, keep it right there for more content from SuperCloud 3, live from Palo Alto, California.