 Welcome everyone to this CUBE Conversation featuring Security AI. This is part of the AWS Startup Showcase season three, episode three. I'm your host, Lisa Martin. Today, very excited to be joined by Rehan Jaleel, the CEO of Security AI. Rehan, great to have you. Thank you so much for joining me today. Lisa, thank you so much for hosting. Really interesting topic and great to be talking together. Gen AI, one of the hottest topics on the planet. How do you define Gen AI? And its role in the current enterprise applications that we're seeing today. So, see, the AI has existed for quite some time and very useful. But, you know, traditional AI has been used to find patterns in the data. For the very first time, I think people realize that the Gen AI or the LLMs, they can understand the concepts of a natural language, the hidden mathematics that exist, not in just an English, but any language, including computer languages. Now, that is super powerful because machines were not able to do that before, only humans could do that. Now, machines could do it, so it changes the game. So sitting on the shoulders of previous innovations, whether it's silicon, whether it's cloud, whether it is internet, whether it's mobile, this is probably one of the biggest evolutions because now computers can understand the language like we do. It's a massive, massive evolution. What, talk about the different ways that Gen AI is affecting different business functions like HR, marketing, sales operations. See, all these functions inherently have different kind of knowledge represented in some kind of a language, sitting inside the enterprise. If you contrast, draw the contrast with public systems like in a chat GPT and all super cool, but it relies on public data. Without that data, there's not much value. But as soon as you want to take it inside to an organization, you have to use your own data and the knowledge for different functions. So essentially, if you actually have the knowledge sitting inside the organizations and different systems for HR and finance and sales and marketing, if you can utilize and use this new Gen AI models, language models, you can unleash the power of the data that's sitting inside it. But for that, you need to do it very safely. You need to do it, understand what data is, who is entitled and so forth. So on one hand, no question this could, this can unleash the power of it, but the safety controls that you need to have that in place, they are foundational. They are enablers for you to actually use this data, which is very different than using just the public data. Okay, so then why is data considered like the absolute cornerstone for maximizing the full value of Gen AI? So in the Gen AI case, there's two fundamental things. One is the model, which is the super cool innovations that are going on and they're going to continue to evolve. But the other is the training data. Now without the data, you actually, there is no value that you can essentially create, just like a human mind. Human mind has all the neurons, but if you don't learn anything, if you don't have any content through which you can learn and extend your imagination, it's not as useful. Similar, I think very, very similar, if you want to use the data or use the Gen AI for the enterprise use case, you have to utilize the data and in the most safe manner possible. Safety is key there. So when we look at things like every organization we talk to every event, we go to the rise of cloud, multi-cloud, hybrid cloud environments. As people, we have this expectation that we can get real-time information, whether we're doing a transaction for a ride chair or a transaction e-commerce. Talk about how data management challenges have evolved given the rise of cloud and those real-time data expectations. So I think I'll give you a bit of a visual example. So think of this data sitting in some applications, some systems, some containers. We may call those applications my file share system or may call S3 bucket or may call something. As long as the data is sitting in those containers, you have understanding of who is entitlement to it, what is security around it, what compliance is. These are pristinely often done inside the organization. Now, if you want to utilize the power of this data, you have to take it out of those places, give it to these models to learn from it and ask questions from it in different forms or use it for your own, building your own models. To do that, as soon as you take the data out, you lose the entire context, who is entitled to it, why was it existed, what type of data was it. And which means you want to make sure you evolve your data management practices that you preserve all the context around the data and you can put the appropriate controls and why that is. The reason is that if you feed all this information to these language models without the right controls, anybody can ask questions from the prompts which they may not have entitlements to. For instance, in an organization, should everyone be able to ask the salaries of everybody else or your financial information versus confidential or XYZ project that may be going on? No, that's why the entitlements exist inside the organizations, which really means is that while you would bring the data to these models, you actually hold these controls that are in place, it ups the bar on data controls and data controls is now evolving to make sure that it can support the safety around the usage of data in these models. So Lynn, let's talk about context there. What are some of the risks associated with Gen AI when we've got sensitive data or untrusted data or PII that's ingested without that proper context? Yeah, so I will bucketize the risk into four areas. The one is with the model itself. The model is something you're going to trust. You're going to ask for advice. Your teams are going to ask for advice and if it's malicious, if it has been hacked or if it has been tweet is going to give you advice which is not important. And it may be actually be malicious advice. You know, make sure your models are assessed. Models are protected properly because if they are malicious or they are actually compromised, everything they tell to the end user is not going to be, you know, it's going to be not be trustable. The second important thing is that what data goes into these models is the second category because if we the data, which is mixed up with all kinds of sensitive information or the things that the models should not know or the end users should not know which are prompting or using these models, you want to make sure the data is actually controlled properly that's going into this model. That's second category. The third category is called prompts. The prompt is where people are asking questions and you're prompting the system should give you answer that essentially the takeover of prompts can go and can happen. The variety of threats that have come across where the prompts itself can be compromised or that can be used because essentially that's a conduit to asking questions and but also asking questions in a way to extract data out or takeover or influence your models itself. That's the third category. The fourth category is actually regulation. Now across all these three things from the model safety to the data usage and to the prompt safety, these regulations evolving across the globe. You got to know the previous regulations existed like GDPR and PDCCP and all and GPD and all they existed to make sure you use the data correctly. On top of it now, new layer of regulations are popping up, going to pop up across the globe to make sure that across all these things of model safety, data usage, data controls and as well as prompt, you actually have the right controls in place. And but the beauty is that if there is no question these are CEO and board driven mandates to utilize these models, the key thing is to enable the safety. So if you enable the safety in place for the right gardeners in place, you enable this innovation essentially in the enterprise is okay to in the public domain to just crawl and pick any data, but the enterprise that's not okay. You actually have to have these controls in place to actually enable this innovation in the enterprise. So for regulated industries, which is, you mentioned some of the regulatory bodies and that's obviously expanding globally. Why is explainability so crucial when organizations are using GEN AI models? Yes, great question actually. So it's still a black box when it produces an answer. This is new kind of beast, very useful beast that it gives you an answer based on the prior knowledge and prior data that you affect to it but you often don't know why it's saying so and how it's saying as humans we are trying to be very predictable with technology. Let's say if it was a regular database, if you provide store some data into the data, you run a query, you get very predictable answer. But that's not true for these kind of neural nets in which has a lot of compressed knowledge sitting in there, but when they produce some new data, you don't know why they, how is it actually generating it? And why is it important? Because you want to trust it. You want to make sure there's no bias in it. You want to make sure there's no maliciousness in it. You want to make sure it does not be compromised. There are very explicit attacks on these models through training data or something called AI poisoning. You can poison these models of data or something that's called LLM lobotomy. Just like you can take the something out of your brain, you can take the net out of these called LLM lobotomy. That's happening and more and more is going to happen. So that's why you want to understand when the answers come out, can you trust it? And there's more predictabilities. Again, it's an area of research. But on the other hand, humans do the same. Humans give you an answer that you often don't know. How much biases in there and explainability is just not there. So there's parallels here. There are definitely parallels. So then walk me through security AI. How is it accelerating, enabling organizations to accelerate gen AI safely? So what we've learned from some of the largest of our customers in the finance and the airlines and insurance and other Fortune 500 companies, that they want to have this, what they call a data command center, a central place in which they have full understanding of the data, the full understanding of the context. Who should have access to data? Who should not have access to data? What are the security controls? What are the privacy controls? They're also the legal knowledge, the regulatory knowledge should be fused with it. That's what they're calling as a data command center. Now, what does it provide for gen AI? Because it can enable all the safety guard rails that you needed to understand what data could or should go to these models. Who should be able to ask more questions on what data on these models? And whether there are any regulatory issues, you should be able to see in this one common place, regardless of which part of the gen AI chain you're actually trying to create. And that's what we're seeing as a fundamental ingredient to enable the innovation within the enterprise by enabling these safety controls within the organization. You mentioned that gen AI is a CEO. It's a board level conversation across probably every industry. But I wanna understand and get your perspectives on the rise of the data developer and the growth of open source communities. How does that impact the operational aspect of AI in businesses and any industry? I think we've seen that when you enable the developers with all the right tools and when you actually have an ecosystem in which things can come together into some environments in an open source form, it simply enables innovation. So it's no different in that sense in the, I would say, gen AI world. That's why you're seeing open source models all the rage, you go to a hugging phase and all you'll see so many open source models and it's gonna be pretty much the same. You're gonna see the innovation happening through that ecosystem itself. But on the other side, on the large enterprises, they certainly also wanna make sure that through the open source, the things that are coming, they're safe. These models are safe. They're not been tweaked. How do you trust it? How do you verify that the model that has come from an open source is configured right? Or it actually is not something, it's something that you would want to trust. So the safety of the models and the things that are coming through also is again, top of mind for the enterprises. You talked about the data command center. Who's in command in an organization or which roles are in command of the data command center so that they can really efficiently deploy, operate and scale AI? Actually, it's a great question because think of data command center is a data command fabric. Fabric really means it has all the required requisite context and metadata. The people who actually want benefit from it, they could be your chief data officers. Of course, they're trying to enable is CIOs and CDOs. But in the same through the lens of the same fabric, your security teams, CISOs can understand why the data are being used. Your privacy teams can understand how the data is being used. Your compliance teams can see through this fact why the data are being used. And they can put their inputs and of course, guardrails in place through that. So the whole concept of data command center is not necessarily one silo that has actually viewed data and actually is breaking the silo between different units that all need consistent view of the data, but in one place, they have their own views. They have their own desired controls that they want to put in place, but they're not creating their own separate silo with this data command center. They all get access from a different lens to it. And that's key. Having that single source of truth, right, is really incredibly important to be able to trust that data. Do you want to understand? I'm sure you have many customer examples to share, but what's a customer story that you think really shines the spotlight on the value that security AI is delivering to organizations? So I'll simply give an example of this breaking the silo desire. And this is from one of the largest, let's say, global companies that provide services to employees. And they of course provide services, not in one region, but globally. They wanted to make sure that it's, when they actually put this data command center in place, it is not just one persona, not just security persona, not just CDO persona, not just CPO. They wanted to do it all. I've, what we, we're starting to see that different personas, they come together in this example. And they, the chief data officer to the CSO, the CPO, the CSO, as well as CSO, that they actually, overall security officer. Being in the room to make sure that this thing is actually implemented as one single source of truth, that different personas can actually utilize. And then of course, this is a key enabler to use data for innovation like JNAI. Which is key, every organization, I was thinking of whether it's my grocery store or a gas station or a food retailer. Everybody has to be data-driven companies these days. Doing that is challenging, but the potential that JNAI delivers safe, being used safely is going to be hugely transformative. Just want to kind of wrap up here a little bit about the company. Tell me a little bit about, you guys had a big raise of funding in the near, in the recent past. Tell me a little bit about that. Where are you using the investment dollars? Yeah, we are Silicon Valley based company, roughly about 500 people. And the key mission is to enable innovation with data safely. And of course, innovation with JNAI safely. And that's really the key mission, to enable this concept of a data command center. We've raised roughly about 170 plus million dollars. And the examples that you gave from retailers to grocery shops to all categories, you will find the customers for this company, including the largest airlines, the largest telcos, financial institutions, insurance companies. Why? Because they all have the desire to use data, but they want to make sure they do it responsibly. And we are, our mission is to enable that, responsible use of data. Responsibility is absolutely key. Rehan, last question for you. What's next if we're going to be shining the light here on security AI? What are some of the things that we should be on the lookout for? I think at this point, we want to make sure we go global. So a lot of our focus has been North America and with so many large customers utilizing it, we want to make for the international, internationally we actually take into the company and grow global markets. Nice. Well, your mission and vision statements were crystal clear. We wish you the best of luck in that global expansion. Rehan Jaleel, CEO of Security AI. Thank you so much for coming on theCUBE as part of this CUBE conversation. We appreciate your insights and your time. Lisa, it's a pleasure talking to you. Very thoughtful discussion. We'll talk again. Thank you so much. We will talk again. We want to thank you for watching and remind you to keep it right here on theCUBE for more action. theCUBE, your leader in hybrid tech event coverage.