Welcome everyone to theCUBE's special presentation of the AWS Startup Showcase; the topic of this episode is cybersecurity. Again, season three, episode three of the ongoing series covering the exciting startups from the Amazon Web Services ecosystem. I'm your host, John Furrier. Today we're excited to be joined by Rehan Jalil, CUBE alumni and CEO of Securiti, securiti.ai. I love the .ai URL. Couldn't get theCUBE.ai, by the way, because it was too hard to get. Rehan, thanks for coming back on theCUBE, and congratulations on being selected for the AWS Startup Showcase.

John, it's always a pleasure talking to you. Thanks for being a host for the show.

You've been a serial entrepreneur multiple times. You've been a tech executive, investor, and founder of a security company. You know, this whole data aspect of security and compliance has been a big deal. You've seen this kind of movie before as the GM of security at Symantec, and you've been a startup success. Generative AI has been a tailwind for many companies that have been, I won't say grinding, but developing in the next-gen cloud. So cloud apps, security, has been a big part of that. It's in the news everywhere. ChatGPT educated everybody. Now it's become an industry-standard conversation and soon to be featured and operationalized in companies. What does the Gen AI wave mean to you guys? Is it new? Will it be as big as people think it's gonna be? What's your take on AI? How's it impacting you?

John, maybe we can talk a bit about why this is such a big deal, right? I think AI has existed for some time and is doing its magic, but this is very different. Previously, AI was more about looking at the data and finding patterns. Now it's about understanding the symbolic language constructs, understanding the hidden mathematics that exist in all languages, which we probably didn't know about.
And having that full command of a machine over any language is very, very powerful, because machines didn't understand natural language before, didn't understand unstructured data this way before. Now it is possible. And certainly it's sitting on the shoulders of the previous innovations of the internet and mobile and the cloud, but this is probably one of the biggest revolutions, because machines can understand natural language. And what it means, to your point, what it means for us: Securiti enables what we've been calling a data command center. Generative AI on public data, like ChatGPT, has opened eyes to the magic of LLMs and Gen AI. However, if enterprises want to utilize their own data, it's a different ballgame. Why? Because enterprise data sits in different apps, under different entitlements and different security provisions. If you give this data to the model without having all the controls in place, that's not going to fly. So literally, to enable Gen AI inside the enterprise, you need constructs and guardrails to know that it's being used safely. And that's what the company is all about.

Rehan, you've been talking about the data command center. I love that point, and I want to ask a question about it. What do you mean by that? Because when I hear you say that, I see in my mind's eye a single pane of glass, screens everywhere like a security NOC, and behind it all the systems and plumbing of data, siloed or connected. And I kind of see a mess, because the old data models were siloed data warehouses. But a data command center implies rich connectedness between them, access to data, automation. What do you mean by data command center? And is that what you're doing?

Yeah. I think the vision of the company has been that there are so many different silos inside the organization.
Some people are looking at just understanding the inventory of data, or the catalog of data, or discovery and classification of data. Some are doing just privacy. Some, in another corner, are doing security of data. Some are doing compliance. But they're all working on the same data, so why have so many different ways to look at it, or sometimes not look at it properly? There should be one place, across public cloud, across SaaS, across private data centers, across data clouds, where all key obligations around data are understood through one contextual insight layer, what we call the data command fabric. With this data command fabric you have full insight: you can look at the data through any lens, and different personas get different views and different controls on top of this fabric within the data command center. There's not just one person looking at it. Your privacy team may be looking at it from their own vantage point. Your security team is looking from their vantage point and controlling it, because it's actionable. And then Gen AI shows up. For Gen AI, you need absolutely the same guardrails, because if you think about Gen AI, there are only two key things in there: a model, which is amazing innovation, and, for enterprises that want to use the model, your data. Without this data, there is no Gen AI inside the enterprise. But to use this data, you have to make sure it is used in a much safer manner. That's what the data command center actually enables.

And you guys are enabling that. Is that a product? Is that a benefit? Is that part of your platform?

It is the platform. It is literally the platform. That's what we call it: the Securiti Data Command Center.

Okay, so I got that. Now, the market you're going after, is it security, compliance, or is that all now one thing? Because we've been talking about this; everyone kind of knows that security is a data problem. We know that, right? It's high velocity.

Yeah.

High volume.
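The idea of one fabric with different persona views could be sketched roughly as follows. This is a minimal illustration, not Securiti's actual data model; all names, labels, and region tags are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One node in a shared fabric: a discovered data store plus its context."""
    name: str
    location: str                                # hypothetical region tag
    labels: set = field(default_factory=set)     # sensitivity labels, e.g. {"PII"}
    has_consent: bool = False

def privacy_view(assets):
    # Privacy persona: personal data held without recorded consent.
    return [a.name for a in assets if "PII" in a.labels and not a.has_consent]

def security_view(assets):
    # Security persona: sensitive data sitting outside the private data center.
    return [a.name for a in assets if a.labels and not a.location.startswith("on-prem")]

fabric = [
    DataAsset("crm_db", "aws-us-east-1", {"PII"}, has_consent=True),
    DataAsset("support_logs", "aws-eu-west-1", {"PII"}),
    DataAsset("hr_files", "on-prem-dc1", {"PII", "PHI"}),
]
```

Both views read the same `fabric` list; only the lens differs, which is the point of looking at one copy of the context rather than maintaining separate silos per team.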
What market are you targeting? Because you get a little bit of compliance benefit in here too. It's not a compliance solution per se, but it's a compliance opportunity. A benefit.

Yes, it's an outcome. Yeah. So think of it this way: these are what we call data controls. Data security is one kind of data control. Privacy is a different kind of data control, and the same for governance controls and the same for others. We call that unified data controls. Instead of looking at these things separately on the same data, you should have one place to look at them and apply different controls on the data. So essentially, to your question, yes. It's in one place: your security controls, privacy controls, governance, and of course the outcome is good compliance.

I want to get into the unified data controls; love that story. But before I get there, a quick question on the business model: how do you guys organize consumption, and how do customers work with you? What does the engagement look like? Are they consuming a SaaS? Is it on-premise, in the cloud? Give a quick overview of how the product is deployed and consumed.

So first, because we're dealing with enterprise data from some of the largest companies on the planet, we have to make sure that the data is safe while it's being analyzed. So it's an architecture: the Securiti Data Command Center is a central console; however, the data analysis happens wherever the data really is, in the customer's environment. It could be a private data center, it could be the public cloud, it could be connected to a SaaS service, but the data doesn't come to the data command center. Only the metadata, the knowledge, and the fabric are built there. And the policy engines are there for you to push the policies down. Now, in terms of the consumption model, you have to keep it super simple. It is primarily tied to this: if you have more data, there is more value you're creating.
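The "analysis stays local, only metadata travels" pattern described above could look something like this in miniature. The detectors here are hypothetical toy patterns; a real scanner would use far richer classifiers, and nothing below reflects Securiti's actual implementation:

```python
import hashlib
import re

# Hypothetical detectors for illustration only.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_locally(record: str) -> dict:
    """Runs where the data lives; only labels and a digest leave the environment."""
    labels = sorted(k for k, p in PATTERNS.items() if p.search(record))
    digest = hashlib.sha256(record.encode()).hexdigest()[:12]
    return {"labels": labels, "digest": digest}   # metadata only, no raw content
```

The returned dictionary is what a central console would receive: classifications and a fingerprint, never the record itself.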
Hence, of course, there's shared value you can give back to the technology providers, but it's primarily driven by the amount of data that you have.

The customers connect their data however they want; it's on their terms. They decide.

Absolutely right. You have to keep it really simple for the customers.

Got it. Now, one of the things about AI is that it's only as good as the data you have, right? So that's why you see a lot of inaccuracies in models, hallucinations, data poisoning, bias, drift; all these things are going on because the data piece of it gets a little bit messy. Sometimes it's a data architecture challenge, we're hearing. How do you see that? Do you guys solve a problem there? Can we see a benefit there?

So there are four areas I would say you've got to look at when using generative AI. The first is the model. There is not going to be one model; there are going to be a variety of different models, from open source, that your developers are going to utilize, and depending on the version and depending on how it's configured, it may not be safe. So how do you assess the model to understand that there is no problem, and how do you actually make sure it's tightly controlled? Because the issues around the model, such as AI poisoning, manipulation of the models, and a variety of other things that can affect the models, are real. And they're evolving fast. So understanding the risks around the model and containing them is the very first thing. The second category is all about what data can go into these models. Because if you send sensitive data, classified data, data that should not be seen by the people who are prompting on it, that's a problem. And if you send data that you don't have consent for, or data that you have consent for but regulations apply to it, that's a problem too. So you have to have proper guardrails in place to make sure the right data goes in there. So safety is truly an enabler here.
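The data-ingress guardrail in that second category, gating what is allowed to reach a model, can be sketched in a few lines. This is a hedged illustration of the idea, with the label names and entitlement model invented for the example:

```python
def guard_model_input(text: str, labels: set, allowed: frozenset = frozenset()) -> str:
    """Gate data before it reaches a model: refuse anything carrying
    sensitivity labels the caller is not entitled to send."""
    blocked = labels - allowed           # labels present but not permitted
    if blocked:
        raise PermissionError(f"refusing to send data labeled {sorted(blocked)}")
    return text
```

A caller with no entitlements can still send unlabeled text, while anything tagged `PII` (or any other label) is stopped before it leaves the enterprise boundary.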
People have huge motivation to use generative AI, no question. But as soon as you enable safety, it opens the floodgates to using generative AI within the enterprise. Controls around data are the foundation piece. The third piece is that because people will talk to the models through natural language, it opens the door for new threats that come through natural language, through what are called prompts. So prompt injection attacks, and of course even prompt takeover, where you thought you asked a question but somebody intercepted it and sent a different question. You're going to trust the answer that comes back, but you don't know what question actually went into the model. So prompt safety, inspecting those prompts, is fundamentally another important thing. The fourth thing is regulation. GDPR and others already exist, and new regulations are popping up on top of them, so we actually provide input on that also. All four of these things you have to automate, and that's what the Data Command Center actually enables: providing contextual understanding of what models you have, what risk is there, what data is going in there, all the prompts being used, and of course what regulations you need to be accounting for depending on where your footprint is across the globe. Some of the large customers we deal with have a footprint across many countries, maybe all countries; some customers are like that. Which means they have to have fully automated understanding so that, depending on where the data is, they're not violating any local regulations.

I love the unified data controls piece, because that gives the framework more power to be adaptable. AI model safety, love that. Enterprise data usage, everyone wants to be there. Prompt safety, I mean, that's a new threat area; that's opportunity but also risk. And then obviously regulations are going to be coming in in droves.
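The prompt-safety point above, inspecting prompts before they reach a model, is often bootstrapped with a simple screen like the one below. The marker list is hypothetical and deliberately naive; keyword screens are easy to evade, so production systems pair heuristics like this with trained classifiers:

```python
# Hypothetical markers of prompt-injection attempts, for illustration only.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard the above",
    "reveal your system prompt",
)

def flag_prompt(prompt: str) -> list:
    """Return any injection markers found in a user prompt (case-insensitive)."""
    lowered = prompt.lower()
    return [m for m in INJECTION_MARKERS if m in lowered]
```

A non-empty result would route the prompt to review or rejection rather than to the model.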
Some imagine that all this is going to stunt innovation; many people are saying that. I've been kind of on the maximalist side. I think this is a generational shift. I don't think the hype is overblown; I think it's warranted. I think it's a big deal.

Yeah.

Generative AI, ChatGPT, that's not a business model; that's just one product of many. You guys are a good example of this framework, so people are going to adopt it. Now, what I like about this is that if you talk about security and DevOps, DevSecOps, we're kind of talking about the same thing with data, right? The conversations are very similar. It's shift-left for security; we've been there, done that. Software supply chain; we're talking about that. You're kind of talking about a data supply chain without even saying the words, but you're getting at it: the data is a big part of it, whether it's prompt safety or data usage. You've got to watch the data now in a way that's different. What's your reaction to that? Can you explain your view on this new data phenomenon relative to DevOps? Because this is kind of a DevOps security problem with data, but it's not a data warehouse problem.

Yeah. Basically, it's a new beast; I say it's a new beast. In a traditional data system, what you put inside the data system you can retrieve reliably; you know what you're going to get. These AI models are a different mix. Of course they're neural nets, but at the same time they have compressed human information sitting inside them. You don't know exactly what's in there. It's going to generate something for you, and because you can't reliably tell what's inside, it only comes out through the prompts. This is a new beast, which means the risks it poses are actually very different. Why is that? Because for many, many professions, you're going to see that this becomes the ultimate source of advice. People will ask questions and they'll think this is the truth.
So if, based on training-data attacks, or attacks straight on the neural nets, or poisoning of these models, or just wrong data, you get wrong advice and you take wrong decisions, it's a business problem. Or if you fed wrong data, sensitive data, inside it and it can be exfiltrated through the prompts, that's a problem. And by the way, the other thing to keep in mind is that if you feed some data into these models, you can expect it will be taken out somehow, right? Even if you put a bunch of guardrails around your prompts. So you'd better be very, very careful about what data you are feeding into these models. All these considerations, I would say, are not really an impediment. I mean, of course you want to make sure this is done right, but once you take care of the safety, the advantages to organizations are so clear and so high that we're seeing basically a massive shift: people want to adopt very quickly, but do it in a safe fashion within the enterprise.

Great market you're going after. The business is looking good; you've got a great product in the Data Command Center. What's the technology secret sauce? What's going on? What are customers liking about your product and technology? What's the secret sauce?

I think what they're really liking is that there have been some mysteries around data, right? First of all, we provide very clear visibility. If you ask any organization what data you have, whose data it is, do you have consent, where is it sitting, what regulations are you violating, you will not get a straight answer, because often organizations did not have that handle on it, right? So we bring it all together into one platform, providing that context and going to the depth of understanding the data of a single individual, which is necessary. Why?
Because most regulations are tied to the data of a residency or an individual: the individual tells you what the residency is, and you have to know what regulations apply to it. So you have to understand the data in depth, down to the level of a single individual, through what we call the people data graph. That really is foundational: actually understanding the data to the level of a single individual, marrying it with the regulations, marrying it with your security controls, and making it actionable so you can orchestrate around it. That really has been the vision for the company. And of course we learn from our customers; many of our customers are global companies, and they are dealing with this issue all day long.

Rehan, as an experienced entrepreneur and technology executive, you ran a security company back in the day and you also invest in startups. What was the motivation for starting Securiti? Did you see the need immediately? Did you see the tailwind that's now in place? What were some of the originating thoughts going through your head when you said, I'm going to do this? Was it a clear problem? And did you learn anything that evolved once you were out in the marketplace?

I think the clear motivation really was that we believe data has the power, and unstructured data has a lot of power, right? Historically, structured data, BI, analytics, no question, provide business value; that's what we call business intelligence. But 85% of the data inside an organization is unstructured. How is it managed? How is it collected? How is it used? Who unleashes its power? You want to make sure that you actually provide the right guardrails. And there's a reason hackers go for this data: most hacks happen to steal data. That's the crown jewels. And I think the industry clearly needed an equivalent of what we call the SOC or the NOC.
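The people data graph idea described above, resolving discovered data back to an individual and their residency to determine which regulations apply, could be sketched like this. The residency-to-regulation map is drastically simplified and hypothetical; real jurisdictional analysis is far more involved:

```python
# Assumption: a toy residency-to-regulation map for illustration.
RESIDENCY_REGULATIONS = {
    "DE": ["GDPR"],
    "FR": ["GDPR"],
    "US-CA": ["CCPA"],
}

def regulations_for(person_records):
    """person_records: (source_system, residency) pairs linked to one individual
    in a people-data-graph-style index. Returns the regulations that apply."""
    regs = set()
    for _system, residency in person_records:
        regs.update(RESIDENCY_REGULATIONS.get(residency, []))
    return sorted(regs)
```

Once each record is tied to an individual, the union of their residencies drives which obligations attach to every copy of their data, wherever it sits.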
The industry needed something, what we call the data command center, so that you have clear visibility and full controls on the crown jewels of the organization that you hold.

Yeah, I think you guys are in a great position. I love the data command center. We've been using a term we coined on theCUBE, the data developer; now we see companies using that in their marketing. There's a new persona emerging, and that is the data developer. And you talk about guardrails. When I hear guardrails, I think security teams. Now you've got data teams thinking about operations of data. This is going to be a new practice. This is not just democratizing data science; this is data at work, data embedded in the development cycle. There's a lot of shift-left kind of thinking going on with this major trend.

Absolutely. I mean, initially when ChatGPT came out, there was one OpenAI interface. Then, you know, Google has its own PaLM 2. But now what do you see? Now you see open source models, which means it's democratized and going to get even more democratized. So there's going to be a lot of innovation in the models, but they're getting democratized. The value is the data. And the developers on both sides have had their own ecosystems; they're coming together in some fashion. The data teams want to use these models; that's why you're going to see the data teams and the engineers start using these models. They need these controls to be in place.

I think, you know, the context is key. And I do think AI will automate away jobs, but I think those jobs are the ones that nobody wants to do anyway. So, you know, it shifts the value to higher-value things. If you think about the action that AI could take with context, that's the behavior people want to see in some of these use cases. The fear about society and its issues, I see that; I get that fear.
But the practical nature of it is, it's a good thing when you have context. So if you have the data in the right place with the right controls, the AI can do some good and relieve those tasks that nobody wanted to do but that need to be done. Or if there's more plumbing in place, or more observability data, or more things, why not have a single pane of glass? Why wouldn't you want that? I mean, this is really the conversation we're talking about, isn't it?

Absolutely. In fact, on top of the prompts there is something called agents, which can use the prompts and automate things: they can get the job done. So that's another big enabler. And it would not be possible if you didn't fully understand the underlying data and what's actually sitting inside it. But at the same time, these agents can be taken over. If an agent is taken over and starts doing things on its own, or things you didn't intend, or things that a hacker wants done, that's just as important for you to account for in this mix.

With any innovation, there are always going to be threats. You're on top of it with Securiti: Securiti with an "i", dot ai, great URL. Love the .ai extension; I wish we had theCUBE one. Give your closing thoughts for the folks watching who might be a customer of yours, or prospects, or interested in your company. Give a quick plug. What are your closing thoughts? Why you guys? Why should someone work with you? Why do you win? What problem do you solve? Take a minute for your closing thoughts here on this AWS Startup Showcase.

I think the key thought would be that security professionals, privacy, data teams, CIOs, compliance, they should think of themselves at this point in time as key enablers for generative AI, because of the benefit it's going to provide. You do not want to be left behind in this. You don't want others moving faster than you.
If you actually have data and you have use cases, these professionals, we should think of ourselves as the key enablers, by enabling this automation and safety around generative AI, because that will unleash the power of the enterprise data. That's really the key role that you can actually be playing in this massive revolution that's going on.

Congratulations. I think you're a great solution to bring those top-down and bottom-up communities together, as operationalizing AI will become a very important project on the agenda for all enterprises and growing businesses. Rehan Jalil, CEO of Securiti, thanks for coming on theCUBE for the AWS Startup Showcase, episode three of the third season; cybersecurity is the focus. Thanks for coming on.

John, it's a pleasure talking to you. Thanks for hosting me.

Keep it right there, more action on theCUBE, your leader in tech coverage, tracking the signal from the noise. I'm John Furrier, your host. Thanks for watching.