Hi, this is Swapnil Bhartiya, and welcome to a brand new show: Let's Talk About AI. It's a show in which we talk about the whole rekindled interest in AI thanks to Generative AI and GPT. Here we will go beyond just R&D and the early stages of these technologies and talk to experts, practitioners, and companies that are bringing Generative AI to production. And today we have with us Gaurav Rishi, VP of Product at Kasten by Veeam. Gaurav, it's great to see you after a long time.

Pleasure to be here, Swapnil. Nice to be here indeed.

The honor is mine. Let's have a look at AI, especially Generative AI, from the lens, from the perspective, of Kubernetes and data protection. Since our industry loves the new shiny object, let's talk about the role that you see for Generative AI in the whole data protection, data backup, and Kubernetes space.

Swapnil, I think you pointed out correctly that there are so many headlines which are now taken up by Generative AI. But the point I'll make is, first of all, the fuel behind all of this excitement is actually data. If you look at anything related to Generative AI models, it ends up being data that is actually super important. So protecting that data becomes even more important in today's world. And so that is super relevant for us and for the organisations who actually hold this data as a critical resource.
So to your question about what's the role and how do organisations start, there are three points I'll quickly make.

One is, I think, like you said, it's a moving landscape, and organisations need to recognise that there is innovation across the entire stack: whether you're talking about the silicon layer, with new versions of TPUs or GPUs; or the various large language models, whether they are open sourced or closed sourced, whether you run them in the cloud or at the edge; and then obviously the applications which build on top of them. So the question is, how do organisations work in an environment which is moving so fast? To get started, once you recognise you're in a moving landscape, have dynamic playbooks from a security and data protection perspective. We see a good job being done by NIST, which has extended its work to the AI Risk Management Framework, and by MITRE, which has also gone ahead and built AI-related capabilities into its framework. So that's a really important part.

The second one I'll point out is that you need to identify the use case. Once you know what's feasible, make sure that you're not chasing the brightest object, like you pointed out. Make sure that you actually keep your company or your business unit strategy up front and use AI as a tool. And map out your use cases on a risk/reward 2x2 framework. But the point I'll make here is: make sure you look at the risk with the lens of AI. Things have changed. New regulations are coming into play. There is a copyright overhang which might come into play if you're using some of these AI technologies. So be cognisant of that as you think about using AI technologies.

And finally, the third point I'll make is, from a data-protection-in-AI perspective, we've talked, I know on your show last year, about DevSecOps.
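The risk/reward 2x2 mapping described above can be sketched as a tiny function. The quadrant labels, the normalised 0-to-1 scores, and the 0.5 threshold are all illustrative assumptions for this sketch, not anything Gaurav prescribes:

```python
def quadrant(risk: float, reward: float, threshold: float = 0.5) -> str:
    """Place an AI use case on an illustrative risk/reward 2x2.

    risk and reward are assumed normalised to [0, 1]; the 0.5 threshold
    and the quadrant labels are assumptions made for this sketch.
    """
    if reward >= threshold and risk < threshold:
        return "quick win"        # high reward, low risk: start here
    if reward >= threshold:
        return "strategic bet"    # high reward, high risk: needs guardrails
    if risk < threshold:
        return "incremental"      # low reward, low risk: automate later
    return "avoid"                # low reward, high risk: deprioritise

# E.g. an internal code-assistant pilot vs. training on scraped,
# possibly copyrighted data (scores are made up for illustration):
print(quadrant(risk=0.2, reward=0.8))  # → quick win
print(quadrant(risk=0.9, reward=0.3))  # → avoid
```

The point of forcing every use case through the same function, however crude, is that the AI-specific risks Gaurav lists (new regulations, copyright overhang) get scored explicitly rather than ignored.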
Now, we all know that in the cloud-native world, security is a shared responsibility all the way from development to deployment. And I think with AI, DevSecOps needs to be augmented, because there are now even more people who are going to be sharing this responsibility. You're going to have compliance teams come in; you're going to have the legal teams come in. And so going ahead and incorporating some of these data governance policies right up front is super important as we think about how you put a framework around AI in the context of data protection. So hopefully that sums it up: recognise the landscape, choose the right use case, and then go ahead and build on your DevSecOps practices.

Now, let's look at it from the perspective of generative AI for data backup as well as data backup for generative AI. And also, what does it mean for Kasten here?

Here's how I think about it. You'll hear about AI for security and security for AI as a point. But let me take a step back. The way I think about it is: anytime you have security in mind, you need to think about the nature versus nurture aspects of security and of building your applications. What I mean by that is, we all have a genetic makeup, good or bad. That's the DNA we are born with. And that of course has proclivities to certain strengths and maybe certain diseases that we might all have. From a cloud-native perspective, what we mean by nature is essentially how you go ahead and build your applications. Are you using open source components which have been vetted? And do you have the best practices in place in your IDEs? For example, if you're opening up and creating a new object storage bucket, do you have immutability turned on by default? Just the secure foundations of how you build your code, having that right DNA, if you will, is one aspect of it. And it's really important to make sure you're leveraging those pieces.
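The "immutability turned on by default" example can be made concrete as a small policy check run against a bucket configuration. The config keys below are a hypothetical simplification invented for this sketch; real providers expose equivalents such as S3 Object Lock and default bucket encryption:

```python
def check_bucket_config(config: dict) -> list[str]:
    """Return policy violations for a (hypothetical) object-storage config.

    The keys used here are illustrative assumptions, not a real cloud API.
    Secure-by-default means every check must pass on a freshly created bucket.
    """
    violations = []
    if not config.get("immutability_enabled", False):
        violations.append("immutability is not enabled by default")
    if not config.get("encryption_at_rest", False):
        violations.append("encryption at rest is not enabled")
    if config.get("public_access", True):
        violations.append("bucket allows public access")
    return violations

# A bucket created with careless defaults fails two of the three checks:
print(check_bucket_config({"immutability_enabled": True, "public_access": True}))
# → ['encryption at rest is not enabled', 'bucket allows public access']
```

Wiring a check like this into CI or an IDE hook is one way the "right DNA" gets enforced at build time rather than discovered in production.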
The second aspect, the nurture, is what my doctor keeps telling us: hey, go ahead and make sure you're hitting the gym and eating healthy. In the context of cloud-native applications, that means following the best practices when it comes to operationalizing our Kubernetes clusters. So are you segmenting your networks? Are you encrypting your runtime data? Are you using customer-managed keys? All the things that are really important, like scanning your images before they go out into production, and doing that on an ongoing basis, are truly important. That's the other aspect you need to keep in mind.

But underlying both nature and nurture, it's truly important to make sure that you also have an insurance policy, that you're strengthening your last line of defense, because frankly, despite having strong foundations and your best operational practices, you will get attacked. In fact, we came out with the largest survey I know of on the data protection side, and the sobering statistic was that just in the last 12 months, 85% of organizations were hit by at least one ransomware attack. And this has actually been on an uptick. So having backup and disaster recovery policies in place, tested on a regular basis, is just as important, in addition to the nature and the nurture points I just made.

You talked about nurture versus nature. These new technologies make things really complicated, harder for teams. Talk a bit about how you folks are making it easier for your customers to leverage these technologies without getting overwhelmed by them, or wasting a lot of their developers' resources on things that could have been used to add value to their businesses.

No, I think that's a great point. You need to meet customers where they are.
And I think that's really one of the powers that tools and technologies like Generative AI bring in, because suddenly the natural way of interacting is actually human language. You know, when I did my computer engineering, I had to learn binary code and register programming, I remember. But now you can go ahead and almost have code generated for you, and there's a lot of promise in there. But in addition to that, the usability of the product has been one of the shining points for Kasten since it started, and it's going to keep getting better.

I do want to, though, bring in the point around security and the impact that AI will have, tying back to the point I was making around having strong foundations, which is the nature point, the right genetic traits for your code, as well as the operational best practices, which is the nurture part of it. When it comes to AI, like you were talking about earlier on, AI for security and security for AI are truly an important framework to think about, because it's a double-edged sword. On one hand, what we are seeing is that using AI technologies in your product opens up new threat vectors, new attack surface areas. As an example, people might poison the data sets that you have. And that's not just affecting your data; it's also now affecting how your application behaves, because your models are trained on that particular data. Or you might have synthetic data that AI can create, and that can, again, be used or misused. You might use it to go ahead and create a lot of test cases and strengthen your product, but you could also misuse the synthetic data to create, let's say, false identities.
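A very simple screen for the data-poisoning risk mentioned above is to flag training records that sit far from the rest of the distribution. The median-absolute-deviation rule below is a toy heuristic for a single numeric feature, chosen for this sketch because it resists masking by the outlier itself; it is in no way a production poisoning defence:

```python
from statistics import median

def flag_outliers(values: list[float], threshold: float = 3.5) -> list[int]:
    """Return indices of values whose modified z-score exceeds threshold.

    Uses median absolute deviation (MAD) rather than mean/stdev, so a
    single extreme injected value cannot inflate the spread and hide itself.
    The 3.5 cutoff is a commonly used convention, assumed here.
    """
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread at all: nothing to compare against
    # 0.6745 rescales MAD so the score is comparable to a standard z-score
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# Nine plausible sensor readings plus one injected extreme value:
data = [1.0, 1.1, 0.9, 1.2, 1.0, 0.95, 1.05, 1.1, 0.98, 50.0]
print(flag_outliers(data))  # → [9]
```

Screens like this only catch crude numeric poisoning; subtle, targeted poisoning needs provenance tracking and the governance controls discussed earlier.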
And so, in this world, the way we need to think about strengthening our product line, which is what we are doing, is, first of all, you want to leverage all the good of AI to make sure that, as your code is being developed, we are taking compliance-as-code and regulatory principles and baking them right up front into your development cycle. It's much cheaper to go ahead and protect it up front. At the same time, we are also making sure that we are looking at both your red teams, who are ethical hackers, and your blue teams, who are doing the defensive work, and leveraging the AI technologies again to make sure your product is hardened.

But from a core data protection perspective, which is strengthening your last line of defense, the point I'll make is that we are focusing on two things. One is keeping it super simple. All of these shiny technologies and objects are not what is surfaced; it's about usability and productivity gains. And the second thing is the technology innovation part of it. If you go ahead and look at what these technologies are now using, it's not just the SQL and NoSQL databases that were used beforehand; it's vector databases which might be used. So, given that data is the fuel powering this AI revolution, we want to make sure that we can also protect some of these emerging database classes and the variety of them. Those are the things which we keep in mind as we think about artificial intelligence and how that intersects with data protection and the new generation of cloud-native applications.

Now, let's talk about the cultural aspect of generative AI and data backup. Do you think that tools are enough, or do you also feel that we need cultural changes within organizations when we are embracing and adopting these technologies?

I think it's the latter. For sure, cultural changes are among the harder changes compared to technology changes.
So, we are all human, and we need to make sure that's recognized. The point to make here is that you have to meet customers and organizations where they are. If I had to oversimplify this, I see companies which fall into the born-in-the-cloud, or born-cloud-native, category, which is a case where they already know how to build a microservices-based application. They have some of these DevOps or DevSecOps practices baked into how they develop, and they have a capability maturity model of releasing their code bases at a much more frequent cadence. There is a certain way in which you have to work with them to make sure automation is baked in: you've got APIs as a part of your product line that can be consumed, and you want to automate as far as possible so that you can truly follow this co-pilot-like concept.

And then you've got organizations on the other side of the spectrum who might actually have a lot of traditional workloads, and they are in the process of experimenting within certain lines of business with things like Kubernetes. For them, what is really important might be, for example, a single pane of glass. They want to make sure that the movement in how they operate and secure is truly incremental in terms of the training requirements, and does not lead to a brand new set of tooling. In that particular case, you need to meet the customers there too and keep it super simple. And that's where having a really good product user interface, one which not only allows you to interact with the UX but slowly trains you and takes off the training wheels to move you towards the APIs and the automation side, turns out to be super important. So I think we as vendors can definitely help in taking organizations through this culture change.
And that's what I see: the best-of-breed applications and organizations not only have the best NPS scores, and that reflects because customers vote with their feet, but they're also innovative; they are going ahead and making sure that they recognize these two ends of the spectrum. So hopefully that answers a little bit about this cultural shift that you're relating to.

Well, KubeCon is almost here. I know there will be a lot of announcements from Kasten by Veeam at the show, and there are things you cannot share, but I just want some teasers. What are the things we should expect from Kasten at KubeCon?

Yeah, I'm super excited. I know it's Chicago and it's winter, but we are still very excited about being there. What should you expect? I think you should expect good food. You should expect an amazing number of partners coming in to talk about how we are going ahead and raising the bar. In terms of themes, and this is probably not a giveaway because I alluded to it, you're going to see a lot more focus on security and ransomware, because frankly, that's what's top of mind for organizations. And you're also going to see scale. Every time I come in and talk about Kubernetes, the growth has increased not linearly; it's multiplying. We are seeing a lot more clusters. We are seeing deployments across the board, from the public sector to the edge. And edge is an overloaded term, covering everything from floating ships to retail stores. So I think you're going to see us making things simple when it comes to operating at scale. Quite excited about that, along with our partners.

Gaurav, thank you so much for taking time out today, and of course, for talking about Kasten, about Generative AI, and about the whole Kubernetes space. Thanks for all the insights, and I'd love to chat with you again.

Thank you. Yeah, thank you, Swapnil.
Do not forget strengthening your last line of defense in addition to the nature and the nurture; security is truly paramount even in this AI generation.

So, an exciting conversation, and even more exciting things coming up. Thank you.
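The recurring advice above, that backups must be tested on a regular basis, can be sketched as a checksum comparison between source data and a test-restored copy. The directory layout and file names below are assumptions invented for this illustration; a real drill would restore from the actual backup system:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large backups fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> list[str]:
    """Return relative paths that are missing or differ after a test restore."""
    problems = []
    for src in source_dir.rglob("*"):
        if src.is_file():
            rel = src.relative_to(source_dir)
            dst = restored_dir / rel
            if not dst.is_file() or sha256_of(src) != sha256_of(dst):
                problems.append(str(rel))
    return sorted(problems)

# Tiny demonstration with throwaway directories standing in for
# production data and a test restore:
src, dst = Path(tempfile.mkdtemp()), Path(tempfile.mkdtemp())
(src / "app.db").write_bytes(b"important data")
(dst / "app.db").write_bytes(b"important data")   # restored intact
(src / "logs.txt").write_bytes(b"events")          # never restored
print(verify_restore(src, dst))  # → ['logs.txt']
```

An untested backup is only a hope; running a drill like this on a schedule is what turns the "insurance policy" into an actual last line of defense.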