Hello, and welcome to this episode of the SecurityANGLE. I'm your host, Shelly Kramer, Principal Analyst and Managing Director here at theCUBE Research. In this episode, I'm joined by Zscaler's Chief Security Officer, Deepen Desai, and we're going to have a conversation about the findings in the company's newly released 2024 AI Security Report. Some backstory here: that report relied on about 18 billion transactions across the company's cloud security platform, the Zscaler Zero Trust Exchange, from about April of 2023 to January of 2024. So, lots of transactions. Zscaler's ThreatLabz researchers explored how AI and ML tools are being used across the enterprise. Then they went deeper and mapped out trends across sectors and trends across geographies, and they explored how companies are thinking about AI, how they're integrating AI into their business operations, and how they're thinking about security around the use of AI tools. No small things, right? They also looked at the risks that gen AI brings and how organizations are addressing those risks. Of course, these are all things that we talk about here all the time, and they're all top of mind for today's business leaders. So before we dive into our conversation, a little bit about Zscaler. Zscaler's value prop is all about accelerating digital transformation so that customers can be more agile, more efficient, more resilient, and more secure. The Zscaler Zero Trust Exchange platform protects thousands of customers from cyberattacks and data loss, and it provides secure connections for users, devices, and applications in any location. Zscaler is distributed across more than 150 data centers globally, and its SSE-based Zero Trust Exchange is the world's largest inline cloud security platform. That's a mouthful, but it sometimes helps to know a little backstory. As Chief Security Officer, Deepen is responsible for global security research operations, and he works with Zscaler's product teams to ensure security across the Zscaler platform. Deepen, it's wonderful to have you. Thanks so much for joining me today. Thank you for inviting me. Absolutely, absolutely. So one of the things I ask all of my guests to do is indulge me by sharing a little bit of their career backstory. And I don't want to know yet about your role at Zscaler. Tell me a little bit about what you've done before and how you got here. Sure, yeah. So look, I've been in the industry for almost 20 years now, always on the vendor side, always doing security. But maybe the interesting nugget I'll share is how I ended up on the security side. This was when I was doing my master's, and just like any other college student, I used to play games. And I would see a lot of these folks doing online gaming who had an advantage over folks playing legitimately, because they were using these hacks. That's where I got curious: how do these hacks work? They were using wallhacks, aimbots, and things like that to gain an advantage over others. It's cheating, cheating online in gaming. So as part of my master's project, I ended up creating something that is able to detect those hacking attempts that happen in these online gaming leagues.
And that project was actually pretty successful in flagging, and I'm going to geek out on you a bit on this part, code injection, where they're trying to alter the behavior of the original program and gain that advantage. So that's how I got into the field of security, and into threat analysis, malware analysis, hacking, and how we prevent all of that. You know, that's why I always ask these backstory questions, because I think I just learned something new, and that's a great way to get involved in this field, right? I mean, watching gamers and realizing, oh, they're cheating, wait a minute, I think that's really cool. So I think it is an understatement to say that leading security operations is a role that, especially today, is not for the faint of heart. We've got the rapid advent of gen AI. We've got transformation that is happening at an increasingly rapid pace. And we're also seeing this huge rush on the part of organizations of all sizes to embrace AI. But tell me a little bit, if you would, about what you enjoy. I mean, I think the answer is a little bit that you're a glutton for punishment, but tell me what you enjoy most about this role. Yeah, look, this is definitely a very, very exciting time. It's a transformative time for cybersecurity as a whole. With the advent of gen AI, there is a huge opportunity when it comes to applying both generative and predictive models to transform how security operations is done. For instance, you could use AI to help visualize and quantify top-down risk and prioritize remediation in ways that simply weren't possible if you were trying to do it manually. This is something I've already started working on with my team, both for our internal program and by embedding it into our platform to help our customers as well. By combining the power of generative AI and predictive AI models, you are able to solve complex security problems and assist cybersecurity professionals with both efficiency and efficacy. Now, as a security leader, you also have to be wary of the fact that the bad guys are going to be leveraging this and targeting your organization. Yeah, it's exciting. They're as excited, if not more excited, about AI than we are. Exactly, and it's already happening, and there is more to come, but it's an exciting time to be in cybersecurity. It is absolutely an exciting time to be in cybersecurity. So I think that we know a lot of the challenges here, but I would love it if you would share, because you're in the trenches all day, every day. I cover cybersecurity, I talk with vendors, I talk with customers and that sort of thing, but you're in the trenches infinitely more than I am. So tell me, if you would, some of the biggest challenges that you're seeing customers trying to get their arms around today, specifically as it relates to AI security. So every enterprise that I speak with, and it's out there, everyone is trying to take advantage of the productivity benefits, the efficiency, and the efficacy that generative AI applications bring. Initially, a lot of organizations learned it the hard way, but it does come with its own set of risks, and there have been various publicly known examples where some organizations' employees inadvertently leaked IP or code base, and it's not reversible.
Once you send some of that data to these public generative AI models, it's there, and it's also available to other folks if they're doing certain crafty prompt engineering. So if I were to categorize, there are three things that everyone should prioritize when it comes to mitigating the risks that these AI applications bring. Number one is starting with: what do you want to allow in your organization when it comes to AI applications? Because you do want to enable your business. So we have to work with cross-functional leaders on that, and then everything else should be blocked. That's how I have it implemented in my organization as well: there is a list of sanctioned apps that goes through the entire TPRM, third-party risk management, vendor assessment, figuring out how your data is going to move through the lifecycle of the product's usage. And then there are all these other unsanctioned apps that you want to make sure you have controls to block. Number two is: how do you securely enable adoption of these allowed apps? The sanctioned apps where you say, okay, I'm giving permission for my organization to leverage this, but I still want to protect my data. What kind of data is leaving my environment? What kind of data is coming back into my environment from these public SaaS AI applications? Because there's definitely risk over there as well. And then the third aspect: every organization, or most organizations I should say, is also working on some sort of private LLM infrastructure, building it out themselves and trying to use their own crown jewel data to train certain models. You have to have a plan in place to protect that environment just like any other crown jewel application. This is where we're going to see more and more adversarial attacks happening, because that's where your sensitive data is. That's where attackers can even influence the business outcome if they're able to poison those LLMs by changing what goes in and what comes out. So those are the three buckets that you should prioritize. Now, having a policy is important, but policy plus effective controls will be the difference between doing this the right way versus the wrong way.
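To make that first bucket a bit more concrete, here is a minimal sketch of the default-deny idea Deepen describes: sanctioned apps that have cleared third-party risk management are allowed, partially vetted apps are contained, and everything else is blocked. The app names, review states, and actions below are illustrative assumptions only, not Zscaler product behavior or data from the report.

```python
# Illustrative sketch only: a default-deny decision for AI app usage.
# App names and review states are invented; real enforcement would live in an
# inline proxy / SSE platform, not in application code.

SANCTIONED_AI_APPS = {
    # app domain               -> has it cleared third-party risk management (TPRM)?
    "chatgpt-enterprise.example": True,
    "writer.com":                 True,
    "new-ai-notetaker.example":   False,   # trial underway, TPRM review not finished
}

def decide(app_domain: str) -> str:
    """Return BLOCK, ISOLATE, or ALLOW for a requested AI app."""
    if app_domain not in SANCTIONED_AI_APPS:
        return "BLOCK"      # unsanctioned ("shadow AI"): blocked by default
    if not SANCTIONED_AI_APPS[app_domain]:
        return "ISOLATE"    # sanctioned but not fully vetted: contain the risk
    return "ALLOW"          # sanctioned and vetted: allowed, with DLP still applied

if __name__ == "__main__":
    for app in ("chatgpt-enterprise.example", "random-chatbot.example", "new-ai-notetaker.example"):
        print(app, "->", decide(app))
```

The point of the sketch is simply the ordering of the checks: unknown apps never fall through to an allow.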
Those secure guardrails, and really taking a deep dive into permissioning and that sort of thing. I mean, I think these are such incredibly important things today. So we're going to be hearing more and more about that, and obviously companies are paying attention to it, but I think we're going to see lots more attention to it moving forward. So I want to unpack now, if we can, this amazing new report, the 2024 AI Security Report. I thought it was interesting, and definitely appropriate, that the report kicked off by saying AI is more than a pioneering innovation; it's now business as usual. That's 100% correct. And that doesn't mean that every single organization across the world is up to their elbows in deploying and using AI and generative AI and that sort of thing. But it is certainly on the radar screen. It is something that people are experimenting with, and they're trying to figure out how to use it and how to get the right policies and security in place. So data from your report showed that AI and ML usage skyrocketed by 594.8%, rising from 521 million AI/ML-driven transactions in April of 2023 to 3.1 billion monthly by January of 2024. That's crazy. So talk with me a little bit about that huge jump. Yeah, you're absolutely spot on. The data actually makes it very clear that AI is no longer just that innovative thing. It's actually starting to become business as usual as more and more companies start weaving it into the fabric of what I like to call enterprise life. What we saw as we analyzed these transactions over the past several months is that it's not just one industry. It's across several different industries that we're seeing this adoption, and yes, the volume rose from a few hundred million to billions of transactions by January, and that trend will continue as more and more organizations adopt these applications. Now, again, as the security officer I need to keep bringing this up, but security as it relates to AI still remains one of the primary concerns, and we dissect that in the report as well: how a lot of these enterprises are leveraging zero trust to make sure that the adoption of these AI applications is secure. Well, I think the thing about security is that it was already at the top of business leaders' concerns, whether that's CEOs and C-suite leaders, the board level, and of course the CISO and CIO level. Concerns about security, and an understanding of how important it is to have the right security posture and protections in place, are not new. But AI and generative AI, and our rush to embrace them, are catapulting those concerns even higher than they were. And that's completely understandable when you think about how mind-boggling it is; again, these transactions went to 3.1 billion monthly by January of 2024. I'm sure that if we looked at that data right now, it would have increased substantially. So this is a rocket ship that is not slowing down. So, talking about security and the allure of AI to threat actors, the reality is that we're rushing to embrace all things AI and gen AI to help spur productivity, create efficiencies, deliver better customer experiences, and even use it in our security operations. So we're embracing all these things for good, but the carrot here for threat actors is that their incentive for embracing, getting good at, and deploying AI and generative AI is tied to financial gain. I mean, they're not trying to be more efficient. Well, I guess they kind of are. They're trying to make more money more quickly by launching more attacks more quickly. And so I want to talk about, and you touched on this earlier, that your report shared it's not uncommon for enterprises to have their security posture be blocking AI and ML transactions, right? Your data shows a 577% increase in blocked AI transactions over a nine-month period. So what does this show, from a security officer mindset, about how enterprises are looking at the risks of AI? And before you answer that, when I was thinking about this, it took me back to the days of early social media. Do you remember a time when Twitter was really popular and people thought LinkedIn was just for job searches, but Facebook was scary? I had enterprise clients at that time, a decade-plus ago, who decided that the best way to deal with Facebook was simply to block it, right? And this is where I see a connection, because this is what we're doing on some fronts, on the security front: we're just blocking things.
So I want to know what you're seeing about how enterprises are looking at the risks of AI and what kind of actions they're taking to protect against them. Yeah, that's a natural reaction when there are a lot of unknowns around a new technology and the risks that it brings. You want to start with that big-hammer approach, and as you understand more, you can open things up once you're able to ensure that you have the right controls in place, where you're able to apply the policies that you have and protect your data. Then you open up a certain application. So the way I would put it is that this is the security maturity that comes with enterprise adoption of AI. The blocked transactions that you saw are actually indicative of enterprises leveraging Zscaler taking concrete steps to prevent their users from engaging with these risky, or what I call shadow AI, unsanctioned AI applications, limiting their risk so that the odds of proprietary data, code base, or IP leaving their environment go down significantly. So again, I keep repeating this, but having a policy is important, and having the controls that allow you to enforce that policy is equally important. That's what you see in those blocked transactions. Now, even as you enable organizations to adopt AI securely, it is still adding to your attack surface. There are definitely cybercriminals leveraging AI tools to target your organization and your crown jewels, including your own LLMs and the AI applications that you're building out, and they're using these AI tools across the attack chain, starting with phishing, vishing attacks, and the deepfake attacks we're seeing making the rounds in the news, like the $25 million heist where an employee was basically talking to a deepfake video and everyone else on the Zoom call was an AI bot. So you can't even trust that video, right? You need to have zero trust even over there, if I may plug zero trust. But you see very, very persuasive phishing pages and phishing emails being crafted using AI, where you could think of a scenario where I say, draft an email coming from your chief financial officer going to your payroll and accounts payable team, and it adds that context in. Gone are the days when you look for the grammatical mistakes and all of that. Now you're actually seeing an email that is very well crafted but also has the context coming from a specific business role, and that makes it very hard for the end user to distinguish. So we're seeing this across the attack chain: malware is being crafted with it, emails are being crafted, phishing pages are being built, and it's only going to keep getting worse, so we need to be prepared to defend against that. Well, I think your point is a really valid one, and I don't mean that it used to be easy to detect phishing attempts or smishing attempts or whatever, but if you were paying attention and you looked carefully, a lot of times you could see some anomalies in a URL, or, as you mentioned, in the language and the way things were written, and sometimes you could kind of pick out that maybe something was written by somebody for whom English was not a first language. You still had to be paying attention, but just as AI is helping us supercharge our content development efforts and that sort of thing, as you mentioned, AI is making it possible for cybercriminals to sound legit.
So it is scary indeed, and I think we have only scratched the surface. I think one of the things that scares me the most is the rise of deepfakes, and deepfake videos especially, and how dangerous that is. And think about that: for somebody like us, you and I are on video all the time, and our intellectual property is out there. It's really very easy for someone to go and find video that we might have created and modify it in a way that, for some people, could be career-ending. It really is scary to think about. It's also scary to think about as we're in an election year. I think about how this technology is already being utilized and how it's very difficult today to look at or read anything and just believe it. The problem is, and of course some of the knowledge we have is specialized, but there are a whole lot of people who don't "trust but verify"; they just trust that whatever they see is real. So yeah, that's tricky. So let's talk a little bit about an experiment you did that I think is kind of interesting. Didn't you use ChatGPT to make some kind of a page to simulate an attack? Yeah, so this was actually something that I asked the ThreatLabz team, our security research arm, to do, and even using the publicly available ChatGPT, which does have a few guardrails in place, with careful prompt engineering they were able to have it craft phishing emails and phishing pages that look very, very similar to the real pages out there. And we've already been aware of the dark web variants of ChatGPT, like WormGPT, and there are many more that keep popping up that are already available as a service for the bad guys to use, and those don't have any guardrails. So the fact that threat actors will be able to leverage this, automate it, and perform these attacks at scale is definitely concerning for all of us as CXOs. Oh, absolutely, absolutely. Again, we started this conversation by saying this is not a role for the faint of heart, especially not today. So tell me what you're most concerned about from an attack surface standpoint. Yeah, so the number one thing is that it is going to become very, very easy for cybercriminals, using LLMs and generative AI, to discover and exploit, at scale, anything that enterprises have exposed out there. And that's definitely a concern, not just for me, but for all the CISOs out there. I am personally more concerned about what I call the unknown unknowns here. We're fairly early on, just scratching the surface, when it comes to the AI threat landscape. The AI threats that will evolve in the near future are what we should all be worried about. And that's where my guidance, and something that I am doing internally as well, is to focus a lot more on a proactive defense approach, and zero trust is an important principle of that. You know, the unknown unknowns is kind of a take on my favorite saying, one I learned from my dad: you don't know what you don't know until you know. And that's what the unknown unknowns are. So we talked about this, and you mentioned it a little earlier in our conversation, but there's the thorny issue of data, right? We have massive volumes of data coursing through all of our organizations, and we are sending data to and receiving data from AI tools.
What's the risk of this if it's not done properly? Yeah, it is a significant risk. Look, a lot of these applications that organizations are using are AI-as-a-service apps, and there are many providers out there. And these apps are effective and useful only as long as you're providing data to them. So the applications that you sanction for your environment must go through a proper third-party risk management lifecycle and proper vetting: how is the data going to be handled? An area I often see organizations overlook is when they're doing POVs, proof of value, proof of concept, and they're just doing a trial. During the trial period, they're supplying all this data to vet out the tool, and then they may decide not to buy that tool, right? But what happens to the data that went out there during the trial period? Do you have the proper agreement in place? Did you do the vetting before you started the trial? Did you apply the same level of stringent TPRM policies to it? So all of those are important areas for CISOs to consider as they try out and buy these new tools, because the data that you're going to be supplying to many of these AI apps is going to be your tier one data, whether it's your employee data, your company's proprietary data, or, in many cases, your customer data, which is even more critical. Well, and I think it kind of goes without saying that however much attention you feel you've paid thus far to data loss prevention, triple that, right? Yeah, exactly. Having that inline data loss prevention technology with full TLS inspection, because a lot of this will happen over encrypted channels. Your ability to apply the data loss prevention engine to the prompts that are egressing to, say, ChatGPT, where your engineer is typing a question with your proprietary code snippet in it, you want to block that. You don't want that to end up in a public version of ChatGPT. Similarly, for what comes back in, you again want to have some governance around it. You don't want some public code generated by a ChatGPT model to get integrated into your code base without proper vetting. The risk is on both sides, and that's why having that inline DLP is very, very important. So one of the things your report did is take a look at some of the most popular AI applications being used throughout the enterprise. Some of these won't surprise our viewers and listeners, and some of these might. Will you walk us through some of those? I mean, obviously ChatGPT is at the top of the list, and I'm not surprised at all that Otter.ai is high on the list, but talk with me a little bit about some of the other popular AI apps that you're seeing. Yeah, we saw, as you said, ChatGPT at the top, then there's Drift, then OpenAI. We make that distinction because there is definitely the enterprise version and the public version being used across the enterprises whose traffic goes through Zscaler. There are several other apps as well that we saw. Not all of them are making headlines, but it's important for organizations to be aware that their employees are hitting these. So there's an app called Writer, there's LivePerson, there's BoldChat Enterprise, and there are many others. If you look at the report, we have listed them by industry and by location, and it's important that you ensure you have proper controls in place for them.
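For a sense of what that visibility can look like in practice, here is a minimal sketch of surfacing shadow AI usage from web proxy logs. The log format, domains, and sanctioned list are illustrative assumptions, not the method behind the report's numbers.

```python
# Minimal sketch: count hits to unsanctioned ("shadow AI") apps per user from
# (user, domain) pairs pulled out of proxy logs. All names are placeholders.
from collections import Counter

SANCTIONED_AI_APPS = {"chatgpt-enterprise.example", "writer.com"}

def shadow_ai_usage(log_entries):
    """log_entries: iterable of (user, domain) pairs from web proxy logs."""
    usage = Counter()
    for user, domain in log_entries:
        if domain not in SANCTIONED_AI_APPS:
            usage[(user, domain)] += 1   # hit on an app outside the sanctioned list
    return usage

if __name__ == "__main__":
    sample_logs = [
        ("alice", "chatgpt-enterprise.example"),
        ("bob", "unvetted-ai-notetaker.example"),
        ("bob", "unvetted-ai-notetaker.example"),
    ]
    for (user, domain), count in shadow_ai_usage(sample_logs).items():
        print(f"{user} hit unsanctioned AI app {domain} {count} time(s)")
```

Even a simple report like this answers the next question in the conversation: how much of this usage is happening in a shadow AI way.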
Absolutely. As I said, some of these I use and some of these I don't, but it's always interesting to know what people are using, and my question is always, how many of these are being used in a shadow AI way? Exactly. I'm sure there's plenty. So there's one application that's the most blocked AI application, and it should of course come as no surprise which one it is, but tell me a little bit about that, along with some of the other apps that your research showed are most commonly blocked. Yeah, so look, as I mentioned, when something is new and you are one of those CISOs who wants to err on the side of caution, and you should in this case, you will start by blocking it outright, and then you figure out: do I have controls in place to protect my data? Do I have controls in place to prevent any kind of breach from happening in my environment? And that's when you start opening things up. So ChatGPT, in my opinion, shows up as the most blocked application because it suffers from its own success; it was being leveraged by everyone, and there were a lot of employees on the enterprise side who were looking to access it. So what a CISO, myself included, would do, unless I have proper controls in place, is block it, or use something like browser isolation, which will not outright block it but will contain the risk that these public versions present. The other top applications that we saw getting blocked, and this is not saying these apps are bad, it's just an indication of the policies of many of these enterprises as they come to understand the apps, understand their security posture, and start to enable productivity use cases within their organization; that's when they start opening up a certain set of apps. Among the other apps we saw getting blocked, AI chatbots were number one, and there were companies like Hugging Face, Forethought, and Fraudnet; many of those tools were also getting blocked. One thing I've observed, and this is something, at least on the OpenAI side, that I haven't yet implemented internally for experimentation, is that while these organizations are blocking the public version of these apps, in many cases they will deploy these apps privately, either in their public cloud tenant or on-prem, and they experiment and enable the business to solve whatever use cases it's trying to solve. Those won't show up in this blocked list. Right, right. I mean, to me, that's a smart way to go about it. Yeah, exactly. So, blocking alone is not going to keep an organization safe? Yeah, this again goes back, I mean, employees will try to find a way around it, right? So yes, blocking alone won't solve it. You need to make sure you're enforcing end to end, and that you have guardrails in place both on the endpoint and on the network side. That's where having that inline zero trust platform with full TLS inspection becomes very, very important. The other thing you need to do is make sure the applications that you do allow, that you don't want to block, go through that third-party risk management piece, because ultimately you're sending your data out and you want to make sure it goes through proper governance and isn't leaked. You need to understand your exposure based on the type of data that you're going to supply to these applications. Yeah, it's like building a house, right? I mean, and this is how I think about it.
And I spent my whole career as a strategist. So, as exciting as some of these tools are, and the possibilities that we're all wrapping our heads around and experimenting with and thinking about, it's really, really exciting, but you have to build the foundation upon which to build your AI operations, and security is like the basement, right? It's got to be foundational. It's got to be the most important part of the conversation, because if you're not building on a foundation of security, you're jeopardizing everything. Yep. Yeah, I know I'm preaching to the choir there. We drink the same Kool-Aid. I like it. No, you're doing the right thing. That's okay. So, I really enjoyed this report. There were a lot of interesting things in here. What surprised you the most about these findings? Right, so the one I would call out, and it's not truly surprising, is the growth: nearly 600% growth in AI transactions over just nine months, which in my opinion is really staggering, especially with so many unknowns around this, right? Going from a few hundred million transactions last year to billions of transactions every month clearly indicates that these enterprises are not just experimenting with AI; it's actually becoming an integral part of the business, and you need to make sure you have proper security around that. So this data definitely affirms what we've all known qualitatively from the news cycles around generative AI apps: enterprise AI adoption is not slowing, it's not even plateauing, it's actually going to continue to accelerate in the coming months. Well, I find myself saying this on a regular basis as it relates to technology in general: if you think the rapid transformation happening in our personal lives and our business lives is going quickly, you're right. But the reality, if there's any part of this that makes you uncomfortable, is that this is not going to slow down; it's only going to speed up. So we need to learn to get our heads and our arms around it, get the right foundations in place, and understand the security implications and that sort of thing. So I want to wrap our show. You gave us some advice early on in the conversation, but I'd like you to reiterate it: for your fellow security officers and for leaders in the enterprise space, what's your best advice on how to think about this, how to get their arms around AI, how to harness its massive capabilities, but with a view toward keeping the organization secure? No small set of deliverables, but what is your advice to people navigating this? Right. So, again, I'll put it in two buckets. Number one is, as you're enabling your organization to adopt AI, having policies is important, but having the security controls that give you full visibility into what goes in and out of your organization is equally important. That's where TLS inspection and inline DLP technology that is able to inspect exactly what goes into that prompt and what comes back, with controls enforced to protect your sensitive data, is very, very important. You need to prioritize that.
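As a concrete, if simplified, illustration of that inline DLP idea, here is a minimal sketch that inspects an outbound prompt and blocks it if it appears to contain sensitive material. The patterns and policy are assumptions for illustration only; a real inline DLP engine operating with full TLS inspection is far more sophisticated than a few regular expressions.

```python
# Minimal sketch of prompt-level DLP: block an outbound prompt before egress
# if it appears to contain secrets or sensitive identifiers. Patterns are
# illustrative assumptions, not a production ruleset.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),    # embedded private keys
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),                  # naive payment-card check
    re.compile(r"(api[_-]?key|secret)\s*[:=]\s*\S+", re.I),  # hard-coded credentials
]

def allow_prompt(prompt: str) -> bool:
    """Return False if the prompt should be blocked before leaving the environment."""
    return not any(pattern.search(prompt) for pattern in SENSITIVE_PATTERNS)

if __name__ == "__main__":
    prompt = "Debug this for me: api_key = 'sk-test-1234'"
    print("allowed" if allow_prompt(prompt) else "blocked")   # -> blocked
```

The same kind of inspection applies in the other direction, to what comes back from the model, which is the governance point made above.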
In that same bucket, you should also think about how you're protecting your own private LLM instances, because that's a crown jewel. You have to treat it as a crown jewel. There will be focused, targeted attacks, and in fact we've started seeing some nation-state activity in that domain already, where they're going after the dev environments of AI/ML applications. So it's very important to treat it as a crown jewel and have those controls in place over there. Number two, and this is where I mentioned the proactive defense piece earlier, because there are so many unknown unknowns that we're going to discover: what can you do as a security leader? That's where you need to go through the zero trust transformation, and you need to fast-track it. You can't have a three-to-five-year plan now, because the bad guys are moving very, very fast, leveraging generative AI as well. So what can you do? There are three core principles, and these are not Zscaler's, these are NSA zero trust principles. Number one: never trust, always verify. Number two: assume breach; if there is a breach, what is my exposure? Number three: grant trust explicitly, with least-privilege access, so you never over-provision anything. With those three guiding principles, you then break it down into four stages, and this is how Zscaler helps its customers. Stage one is what you can do to reduce the attack surface that the threat actor sees for your organization when they're targeting you and planning an attack, even using AI applications. Stage two is how you can prevent that initial compromise, whether it's an identity, an application, or an asset; having consistent security controls with TLS inspection is very important there. Stage three is eliminating lateral propagation, the assume breach part I mentioned. What happens in many of the attacks we see is that they will compromise one identity or an employee laptop and then move around laterally in your environment, even leading up to your AI/ML environments, to steal information. So the question you ask is: what is my blast radius if one of my assets or identities were to get compromised? If you have true zero trust implemented, that blast radius should be contained to the asset or identity where the mistake was made, with proper segmentation and with advanced technologies like deception and ITDR. And then finally, data loss. I already mentioned this, but having that inline DLP engine that sees everything leaving your environment and coming back is equally important. So: three zero trust principles applied across these four stages. That's how Zscaler helps a lot of enterprises go through this journey of becoming, I would say, proactively strong in their security posture in order to defend against the unknown unknowns coming from AI.
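The lateral propagation stage comes down to the blast radius question posed above. Here is a minimal sketch, with an invented segmentation policy, of how one might reason about what a single compromised identity or asset can reach; the entity names are assumptions, and real zero trust enforcement happens per request in the platform, not in a batch script like this.

```python
# Illustrative sketch of "blast radius": given a segmentation policy describing
# who is allowed to reach what, compute everything reachable from a compromised
# entity via allowed paths. All names and edges are invented for illustration.
from collections import deque

# allowed_access[entity] = set of entities/assets it may connect to
ALLOWED_ACCESS = {
    "employee-laptop-42": {"hr-app"},                        # least privilege: one app only
    "hr-app":             {"hr-database"},
    "build-server":       {"code-repo", "llm-training-env"},
}

def blast_radius(compromised: str) -> set:
    """Return every asset reachable from a compromised entity via allowed paths."""
    seen, queue = set(), deque([compromised])
    while queue:
        node = queue.popleft()
        for nxt in ALLOWED_ACCESS.get(node, set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

if __name__ == "__main__":
    print(blast_radius("employee-laptop-42"))   # -> {'hr-app', 'hr-database'}
    print(blast_radius("build-server"))         # -> {'code-repo', 'llm-training-env'}
```

The tighter the allowed-access policy, the smaller each of those sets, which is exactly the containment outcome true zero trust segmentation is meant to deliver.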
I love it. Well, Deepen, thank you so much. I knew this was going to be a great conversation, and I have been looking forward to this dive into your 2024 AI Security Report, so thank you for walking me through some of those findings. To our viewers and listeners, I'm going to include a link in the show notes so that you can check out and download the report and follow along with some of Deepen's advice and suggestions along the way. With that, this is a wrap on today's episode of our SecurityANGLE series. Deepen, thanks so much for joining me. I always enjoy our conversations, and I'm sure this won't be the last one we have on this topic. Absolutely. Thank you. Thank you, Shelly, for inviting me. Absolutely. We'll talk again soon. Take care.