So thank you. Good morning, good afternoon, good evening, everyone, wherever you are joining in from. My name is Prashanta Koshwara, and I'm super excited to be here to talk about a very interesting and hot topic: ransomware, which is affecting all of us in some form or fashion today. These attacks have been increasing and have been causing a lot of havoc in every industry we can think of. We have an action-packed agenda today. We'll talk about what ransomware is and how it applies to cloud-native applications, then go through the solution that Trilio has been developing, has released, and will continue to evangelize. Then we'll do some live and recorded demonstrations, a combination of the two, keeping time in mind. And finally, we'll do some questions and answers. So with that said, we'll start off with the slides we have prepared for this talk. So what is ransomware? Based on the definition you see out on the World Wide Web, ransomware attacks are a type of cyber attack where malicious users get unauthorized access to your data and hold it ransom. Once they hold it ransom, they ask you for money to get the data back. And there is no guarantee, even after you pay that money, that you're going to get your data back. There are multiple ways they do this: they either encrypt your data so you can't use it, or they delete your data outright, so that your business continuity is effectively disabled. Now let's look at some of the stats we see around ransomware these days. There are around 300 million ransomware attacks each year. And as we mentioned earlier, there is no guarantee that even after an attack, if you pay the ransom, they're going to give you your data back. So there is a big chance of permanent data loss.
The cost of every ransomware attack has been going up significantly over time. What we see is that in the last few years, the average cost of a ransomware attack has increased by 171%, to around $312K. The way this $312K is calculated, it's not only the ransom you have to pay; it also considers the loss a business incurs while its data is unavailable and it isn't able to service its customers. Now, we are still in the middle of a pandemic, getting out of it, and it has pushed us all into this remote-working scenario. As part of that, the number of attacks has definitely gone up, largely because there are many more devices connecting back into the corporate infrastructure. One data point we see is that mobile devices are driving up a lot of vulnerabilities; a lot of points of entry are being created. So as the world becomes more and more digitalized, which is going to happen, this problem of ransomware is always going to be there. And the ransomware attackers are getting smarter by the day, using more specialized techniques to get into your environments, capture your data, and disable you. So we need a better solution and definitely a concrete approach for combating all these malicious users. Now with that said, I have a poll question based on the first slide we saw. Does your organization have a strategy for protecting your Kubernetes-based applications from ransomware attacks? Three simple options: yes, no, unsure. I'll just give everyone a few seconds to answer, and once we have the results, we can go through them. Just want to give a few more seconds. I thought we couldn't see the poll results right away — actually, there it is.
So the answers we've received are: 17% say yes, 33% say no, and 50% say unsure. This is exactly the kind of feedback we were expecting — a majority of folks in the no and unsure space. We all need to understand, firstly, whether our protection is the right protection or not. And folks who do not have protection definitely need to introduce some checks and protocols in their environment. That's what we're going to be talking about today. Okay, thank you for answering that poll; we'll move forward. Now, when we look at a Kubernetes-based environment, there are multiple roles and personas interacting with the Kubernetes system: people doing DevOps, your SREs, your admins — everyone interfacing and interacting on the same infrastructure. We'll start with a few personas, from left to right, beginning at the inception — the building and creation of data. You have your developers. As the developers write code, their main focus is faster application development and faster application delivery. They want to test with production data, because they want to increase the probability of success when introducing and launching new applications. Now, developers like Lisa work with Brian. Brian is the SRE focused on taking whatever Lisa does and making sure it runs properly within the Kubernetes clusters. He's using GitOps — all the primary fundamentals around Kubernetes — to deploy applications. However, Brian is also concerned: hey, is my application going to go down, whether from simple data corruption or from security-related issues like ransomware? So the problem definitely starts from the development side as well. Dev SREs are also thinking about this, and the ransomware problem is applicable to everyone.
When we move on to the right, more toward the infrastructure and platform provisioning side, we have Rob. Rob is also doing SRE work, but he's focused at the cluster level. He's also using GitOps principles to deploy his clusters — infrastructure as code, everything. His focus is making sure the cluster is running successfully, no applications are suffering, and the security of the entire cluster is paramount to him as well, making sure he has business continuity. And finally we have Jane, who you can think of as Rob's manager. She focuses more on aspects around migration and data mobility: making sure the overall business is healthy, handling any cost-related items, figuring out how to move from, say, one cloud to another, and making sure all customer requirements and internal requirements are taken care of. So overall, there are multiple personas working within a Kubernetes space. Every persona has a requirement from a data protection, data security, data management, or data mobility angle, and Trilio addresses all of these personas right from the get-go. Now, let's quickly talk about who Trilio is, because we've been talking about the Trilio solution, so it's important to know who we are and where we're coming from. Trilio was founded in 2013. We are the leader in data protection for OpenStack, then for virtualization, and now we have entered the Kubernetes market as well, where we're a leader too. The entire technology is patented. We've built this from the ground up, and we have control over each and every piece of technology we've created. So we have much more flexibility in how fast we develop and how much support we can provide.
We are a global organization, with customers across the globe in various industries. We have a decent amount of funding. We partner with a lot of different distribution providers, storage providers, you name them. And the biggest and best piece is that our product has been validated, certified, and received a thumbs up from a bunch of industry vendors using different standards — through Red Hat, through SUSE Rancher, through VMware, and through IBM. Red Hat and IBM are under the same parent now, but we have separate certifications from both of them. Okay, so now let's talk about the Trilio solution. Trilio is a multi-cloud data management and data protection solution. When we say data management, we focus on everything related to the data: data capture, data recovery, data security, data mobility, management, everything. We are completely distribution agnostic. No matter where you have your Kubernetes environment running, we have certified, tested, and validated all the different distributions. We ourselves are a cloud-native application that runs within Kubernetes. So no matter the distro, we have you covered. From a storage perspective, we leverage CSI, the framework for communicating with storage within Kubernetes. We standardize on that framework and allow customers to store their data in a backup repository, which can be S3-compatible or NFS. And we've just launched support for Azure Blob and Google Cloud Storage as well. Like any Kubernetes cluster, it's completely cloud agnostic: no matter where your Kubernetes cluster is running, private cloud or public cloud, we have you covered.
And then from a package-management or application-management perspective, we are completely agnostic to how you build and manage your applications. We have you covered for every kind of use case. If you segregate your applications by namespaces, we can manage and capture your applications by namespace. If you're using labels, we can do it by labels as well. If you're using a lot of Helm charts and want native support for Helm charts, we provide that too — again, that's patented technology. And for operators, which are becoming very powerful in the Kubernetes space, we have native support to capture your operator as well. We have controls over whether you want to capture the operator itself or an instance of an operator, whatever it may be. So we have all these different avenues, all these different ways of protecting applications, suited to how you are using them. What happens with Trilio, because of our capture methodology, is that no matter where your applications are running and no matter how they are deployed — pods, PVs, secrets, config maps — we capture all of those objects and move them into a remote location, which can be object storage or NFS-based storage. Once it has been captured, you can restore that data into the same namespace, into another namespace, or into a separate namespace on any other cluster. What this enables overall is point-in-time captures and point-in-time copies. There are a lot more features around this overall concept that the Trilio product has been built on, and now we are evangelizing this and spearheading the ransomware journey around it.
So let's look at the cloud-native application challenges, how Trilio is looking at this ransomware piece, and how we're combating it. We look at two entry points a malicious user can take into the system. One is getting into the Kubernetes cluster and getting access to the Trilio console, where they might try to do some bad stuff. The other is getting access to the object storage or NFS repository where your data is kept; malicious users can go there and try to delete your data so that you cannot recover from it. Those are the two points of attack Trilio focuses on. Obviously, you have your primary production data, and that needs to be safeguarded and protected as well. However, the lens we come in from is: we own the secondary storage environment, so how do we make sure we protect it from any kind of attack — and not only protect it, but also complement the front end, the primary production storage system? So what we did was look at a lot of different articles and white papers around what ransomware is, how you protect yourself, and how you fight it. What we standardized on are two industry bodies: NIST and NCCoE — the National Institute of Standards and Technology and the National Cybersecurity Center of Excellence. We've taken all the best practices from these two organizations; they have provided a framework for approaching ransomware, and we are adopting it to create a well-curated solution. And what we found is that ransomware protection is not a single software feature. You can't just say: hey, I have immutable storage, so I'm protected from ransomware.
Yes, immutable storage is going to help you from a recovery point of view, but you also need to make sure that, right from the get-go, your infrastructure is protected, you are able to detect attacks, and so on. Our ransomware protection story is extremely comprehensive, and we'll be talking about it as we go through this. As I mentioned, we have aligned the entire solution to NIST and NCCoE best practices. Think of these organizations as multiple thought leaders who have seen and worked on these kinds of issues for a long time and are putting their experience forward for the community to use. So what is this framework I keep talking about? There are three main pillars to it. First, identify and protect. This means that even before an attack happens, even before you get breached, you want to make sure that your infrastructure, your applications — whatever you need to protect — are protected. You need to understand what is protected and what is not. You shouldn't be shocked tomorrow: hey, I didn't know my app XYZ was not protected, and it has been compromised. Second, detect and mitigate. You want to understand when a ransomware attack is happening. Yes, you have your safeguards and guardrails around production storage, but think about your secondary storage, where ransomware attackers are going to go to delete things. You want to know: is someone trying to delete my point-in-time captures? Is someone trying to encrypt or double-encrypt my data? You can encrypt it, and ransomware attackers could encrypt it again on top. You want to catch that in real time instead of sitting on your hands waiting for it to happen.
And then finally there's the recoverability aspect, where you have your immutable backups and you're able to use those to recover. And obviously it's not just taking an immutable backup and recovering it: you want isolation testing, so you can actually validate that it's clean before you bring it in. Overall, these are the things we are focusing on — not only saying that an attack has happened, but pushing ourselves, extending what we can do, to identify it, detect it, and finally recover from a ransomware attack. Okay, so now let's double-click on each of these pillars and go through what we're talking about here. Under the identify-and-protect section, Trilio provides a bunch of different things. First, application discovery — we'll look at this when we do the demo. As part of application discovery, we show you all the applications you have running in your cluster, segregated by namespaces or any other view you have, and we show you exactly: Mr. Customer, app X is protected, app Y is not protected, and you need to do something about it. So you get a clean picture of what is good, what is bad, and what you need to focus on. Then, security validations. We've made sure that our product itself is super secure and is not providing a point of entry to a malicious user. The way we've done this, as mentioned earlier, is by going through multiple validations: making sure our technology is solid, making sure the way we've written our code is solid, even things like linting of the YAML files and using non-cluster-admin roles. We have done all of those pieces as part of this solution, okay?
The next piece is the backup immutability we spoke about: providing immutable backups is something we do, and we also provide encryption. The best part is that our backup immutability and encryption are at the application level. You can choose which applications to make immutable and which to encrypt, because you need to understand that immutability and encryption come with a cost. If you're going to encrypt everything and keep multiple point-in-time captures of it, you can imagine the additional cost that incurs. Yes, if cost is not a concern, make everything immutable and encrypt everything so it's safeguarded. But looking at things practically, you're always going to want to say: hey, this data is not that critical, I don't need an immutable backup and I don't need to encrypt it. So we provide all those options. Then, role-based access control through zero trust. Everything we do is based on role-based access. There is nothing within the Trilio system where a validation is not performed from a zero-trust architecture perspective; nothing in the system is taken for granted. Every object has a specific owner and is governed by role-based control. Now, detect and mitigate — this is the most interesting piece, I feel, of what we're doing. We're actually working on this now and will be providing it very close to KubeCon, so in about four to five weeks. We are going to be looking at the backup data. The way we store our backup data is in an open format — the QCOW2 format, as we call it.
Now, because we own the format and have full control over it, what we are going to provide is abnormal event detection. What are the abnormal events we'll be looking for? Let's say we suddenly see your incremental backup size increase: five incremental backups were one GB in size, and suddenly one becomes 10 GB. If we notice these kinds of events or abnormal behaviors, we'll use machine learning behind the scenes to flag them. We already have notifications and alerts that go into Microsoft Teams, Slack — all the primary tools we use today. So anytime we see something like this, we will flag it. Let's say we see a lot of calls to delete backups: we will flag that as well, using some machine learning behind the scenes to understand whether these backups were really meant to be deleted, whether this was an authorized user or not. And we provide all that information as part of notifications and alerts too. Then, containment. If you do get one of these notifications, you want it to be actionable, right? So we'll provide capabilities that cut the cord, so that nothing can proceed: your backup system is isolated, and if an attack is happening, you cut it short. That is the overall idea of detect and mitigate. Then, malware scanning. Again: open backup format, no proprietary vendor lock-in on it. We are going to provide a bunch of open-source scanners that folks can start using. We understand that not every customer has scanners or malware-scanning capabilities, so we will provide some tools we have identified that are going to be awesome in this scenario.
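The incremental-size heuristic described above — five 1 GB incrementals suddenly followed by a 10 GB one — can be illustrated with a simple rolling-average check. This is a sketch of the idea only (the threshold, window, and function name are my assumptions, not Trilio's actual detection logic, which the talk says uses machine learning):

```python
def flag_abnormal_backups(sizes_gb, window=5, factor=3.0):
    """Flag incremental backups whose size jumps well above the
    rolling average of the previous `window` backups. A sudden
    spike can indicate mass re-encryption of data by ransomware,
    since encrypted blocks no longer deduplicate or stay small."""
    flags = []
    for i, size in enumerate(sizes_gb):
        prev = sizes_gb[max(0, i - window):i]
        if prev:
            avg = sum(prev) / len(prev)
            flags.append(size > factor * avg)   # spike vs. recent history
        else:
            flags.append(False)                 # nothing to compare against yet
    return flags
```

In a real pipeline, a flag like this would feed the notification channels mentioned above (Slack, Teams) rather than being returned to a caller.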
But at the same time, we'll follow a BYOS approach as well — bring your own scanner. Think of it as: the data is available, we provide the hook, you plug in your scanner, and it will scan the backup. So now, when you take a point-in-time capture, it is going to be continuously scanned after it's stored in the backup repository — for malware, for ransomware-type issues, anything. And we will continue enhancing and curating this for newer things we see as well. That malware scanning runs on the data object, the data volume. What we are also doing is looking at your manifest files, your YAML files. We will look at those YAML files and tell you whether they are properly constructed or not. Because what happens is, people are using GitOps and deploying their applications that way, but when you're troubleshooting certain things, you enter the cluster, you run some kubectl commands, you change certain things to admin privileges — and then maybe you forget about it. As part of this malware scanning on the data volume, and as part of the YAML read-through, Trilio will tell you if there are any holes still in your application that you may have forgotten about — so not only at the data level, but at the metadata level. And again, just to summarize this piece: all this information goes into your notifications and alerts — Slack, Teams, Mattermost, whatever you're using, we have that covered. Now, from a recoverability standpoint: when you want to recover, the objective is to recover quickly. You don't want to waste time figuring out how many of your backups have been compromised or which backup you need to go to. So deep logging is what we are going to provide, and that deep logging will tell you exactly which backup is clean, which backup you should be using, which hasn't been compromised.
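The YAML read-through described above — catching admin privileges that were flipped on during kubectl troubleshooting and then forgotten — amounts to checking captured manifests for risky security settings. A minimal sketch of that kind of check (the specific rules and function name are illustrative assumptions, not Trilio's actual scanner):

```python
def scan_manifest(manifest):
    """Check a pod-style manifest dict for risky settings that may
    have been left behind after ad-hoc kubectl troubleshooting."""
    findings = []
    for c in manifest.get("spec", {}).get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            findings.append(f"{c['name']}: privileged container")
        if sc.get("runAsUser") == 0:
            findings.append(f"{c['name']}: runs as root")
        # Kubernetes defaults allowPrivilegeEscalation to true unless set
        if sc.get("allowPrivilegeEscalation", True):
            findings.append(f"{c['name']}: privilege escalation allowed")
    if manifest.get("spec", {}).get("hostNetwork"):
        findings.append("pod uses host network")
    return findings
```

Running a check like this against every capture means configuration drift is caught at backup time, at the metadata level, alongside the data-level malware scan.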
So: quickly getting you to the point-in-time capture you need. Once you have that point-in-time capture, we already have isolation testing. You can take that capture into another environment, a test environment, and run your security scanners on top of it — trust, but verify. You can trust Trilio, but do verify that it actually is clean and hasn't been compromised. Then we have a disaster recovery workflow that's available today. At the click of a button, for whatever applications have been compromised, you orchestrate the entire workflow: these are the seven apps that were compromised, these are the point-in-time captures I want to restore for those seven apps, and restore them in this specific order. The DR workflow lets you do that. And finally, we have multiple target options. One train of thought is to minimize your attack surface; the other is to maximize your recoverability surface. You can have targets in AWS S3, and you can have a target in Azure Blob. The chance of all these different targets getting compromised at once is low. As and when you add more targets, you keep extending your recoverability surface, so that even if one target is compromised, you have other copies to go to. Again, you don't want to pay that ransom; you just want insurance on top of insurance through Trilio to make sure of that. Okay, so that's the comprehensive solution we're building. As I said, detect and mitigate is going to be available close to KubeCon — within two to three weeks of KubeCon. Most of the identify-and-protect pieces are there, and recoverability is already there. So stay tuned; we are very excited to be providing this, and we'll be making a proper announcement for it as well. Okay, so based on what I have said, another poll question: which component does your organization need to focus on today?
Is it the first one, identify and protect? Detect and mitigate? Recoverability? All of the above? Do you see yourself needing one and not the others? I'm interested in understanding the state of the market and what people feel. I see the numbers coming in; just another 10 to 15 seconds for folks to finish answering. Okay, so what I see is: identify and protect, 22%; detect and mitigate, 9%; recoverability, 7%; all of the above, 63%. So again, the majority of the audience is focused on all of the above, which makes sense, right? We want a full-fledged approach to combating ransomware. And this is the approach: not just immutable backups, not just scanning by itself. You want a comprehensive solution that can protect you. And again, we don't know when this is going to happen; we need to be ready, and this is the right way of being ready. Okay, now let's get into the demo. I'm just going to stop sharing my screen here. Actually, before going into the demo, I can take a couple of questions. One of the questions is: is Trilio's scanning for malware a function being called by another app? As we mentioned, yes, we will be scanning for malware, and we will be providing a couple of open-source scanners. If you're interested, please reach out to me and I can explain which scanners they are and why we are using them. And you are free to bring your own scanners as well, okay? So it's not just us putting our scanners up front; you can bring your own. Next question: is your solution identifying when a ransomware attack happens? Yes — from the detect-and-mitigate angle, we will be identifying, doing these anomaly-detection pieces: things are getting encrypted, someone is trying to delete backups or is deleting backups — why is that happening?
Cutting the cord — all of that will be provided as well. I hope that answers the questions. Now I'm going to move to the demo side of things. Okay. Can everyone see my screen? If someone can give me a thumbs up... Yep, looks good. Awesome, thank you. Okay, so what you see in front of you is the TrilioVault Management Console. I'm going to do a little bit of live stuff and then some recorded stuff, so that we get the whole meat of it and a full understanding. The management console supports multiple ways of authenticating: you can use your kubeconfig, OIDC providers, Google single sign-on — however you prefer; I think we provide a very extensive set of ways to authenticate into our management console. I'm going to find the kubeconfig file I need and log in here. I have two clusters created. This is a dev cluster, so I'm not going to touch that for the moment. I'm going to be doing all my backups on this other Kubernetes cluster, which is running in GKE — that's the primary cluster. This is your multi-cluster management console; adding a cluster is as easy as clicking plus and filling everything in. Now, in the interest of time, I'm going to take this namespace — as you see, we have discovered all your namespaces — and create a backup. I'm going to use pkplan1 as the backup plan, say continue, call this demo1, and create it. I'll let this run; the status, logs, and everything will start happening in the background. And what we'll now look at are the different things we have within the system. The first thing I'm going to show you is AWS, and the immutability aspect we have provided. ransomware2 is the bucket I'm using. If I look at its properties, I see I have a default retention period of one day. Okay.
And mapping to that backup repository, we have this ransomware2 target. We can click on edit just to see how it's been created. Same information: we say the bucket we are going to use is ransomware2, on AWS; we provide the region and the URL. And then we have object locking. We ask the user: hey, is this a target on which you have object locking enabled? The beauty of this is that once you enable object locking, you cannot disable it; if you try to disable it afterwards, it's not going to work. Once you enable it, you have zero control to make any changes to it. Okay. Once you provide that, you provide your AWS credentials — your access key and secret key — and we go and create this and validate it: we make sure that whatever the requirements are from a locking perspective, they are validated. Then, once it's done and available, we mark it as available for use. Next, I'll show you something known as a backup plan. The backup plan, as the name suggests, basically describes where your backup is going, how many backups you want to keep, how often you are scheduling, whether you want full or incremental backups, and the retention policy. Based on these inputs, we calculate the retention lock — the immutability that needs to be applied to your application's backups when they are in the backup repository. Looking at these things quickly: the scheduling policy I'm using is called, I believe, PK-something — I think it's a test daily one. And from a retention perspective, retention-PK is the one I'm using, and what it says is: retain five backups. The reason I'm showing all of this is to explain what happens once we put the object into the target.
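The retention-lock calculation described above — turning a schedule plus a "retain five backups" policy into an object-lock expiry date — can be sketched like this. The formula and names are illustrative assumptions on my part, not Trilio's actual calculation; the point is only that the lock must outlive the backup's place in the retained set:

```python
from datetime import datetime, timedelta

def retain_until(backup_time, schedule_interval_hours, retained_copies,
                 safety_margin_hours=24):
    """Compute an object-lock 'retain until' date for a backup.
    A backup must stay locked until it has aged out of the retained
    set (interval x copies), plus a margin mirroring the bucket's
    default retention, so it can never be deleted early."""
    lifetime = timedelta(hours=schedule_interval_hours * retained_copies
                         + safety_margin_hours)
    return backup_time + lifetime
```

With a daily schedule, five retained copies, and a one-day margin, a backup taken on January 1 would stay locked through January 7 — which matches the general shape of the expiry dates shown in the demo.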
So, looking at this environment, we can go back and see that we've done a bunch of different captures, and the one we just started is in progress. We can filter by the backup plan and see all the backups that have happened. Now, the interesting thing to notice here is the date of expiry. The default retention on the bucket, as we showed, was one day when we did these backups; we automatically calculate that and set it on the system. So as long as you set a default retention of one day or more, we take care of calculating all the object locking for you: how long a backup needs to be maintained, when it should be deleted. You don't want to keep a backup that has no retention, so we calculate all of that for you and make sure it's properly secured and kept. Other things we can look at: we provide the capability of viewing the YAML files and so on. We can get the UID of a backup plan — 55E, let's keep that in mind — and then look at the objects: you can see 55E is here. This is the backup plan, and these are the different backups within it. Now, just for fun, we'll take the UID of a backup — under the backup plan you have backups — so we'll take this one, 7652. And what we'll do first is try to delete it from the management console. Remember, we said there are two ways people are going to be able to access it. So we can go here and say delete. Are you sure you want to delete this backup? Yes. What happens is it's deleted from the system — but it's a soft delete. We want to let the malicious user think, hey, you were successful, but technically they haven't done it. What has happened in the backend with the 7652 backup? I'm just going to refresh the screen.
It's still available. The data is still there, and it has not been compromised or deleted. And this other backup is also completing, so we'll give it a few seconds. Again, we now have object locking enabled on the bucket itself, so you cannot manually delete these objects from the bucket. Once a backup has been written, it definitely cannot be modified. And just to do this with the backup we just created, we can try to delete this one as well. Actually, before that, let's take a look at the YAML to see what the ID was: AE4A, for the folks who cannot read it. AE4A. Now let's try to delete this. Okay, gone from the system, but again, all these backups or captures are still there on your target repository. Now, obviously we have done this for S3-based systems that provide object locking. There was a question here: how do you protect NFS-based systems? For NFS-based systems, and for all systems which do not have native locking, what we are doing is providing a kind of soft immutability, not a hard one. The way we achieve that is that as part of every capture, every backup, there is information about the target repository, the backup plan, the backup itself, the scheduling policies, the retention policies, and so on. We look at the retention policy the user has applied, and if someone tries to delete a backup before the retention period has elapsed, that call simply wouldn't go forward. It just wouldn't happen. A manual delete will never succeed; the only way a backup gets deleted is when the retention schedule kicks in and removes it. What that means is that through the Trilio system, through the backup system, nothing can get compromised. Yes, if someone gets access to your NFS storage directly and tries to nuke it, there is nothing we can do about that.
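The soft-immutability check just described might look something like this in outline. This is a sketch under my own assumptions (the function and exception names are invented for illustration), not the actual Trilio code:

```python
from datetime import datetime, timedelta, timezone

class RetentionLockError(Exception):
    """Raised when a delete arrives before the retention period elapses."""

def check_delete_allowed(backup_time: datetime, retention: timedelta,
                         now: datetime = None) -> bool:
    """Refuse any delete request for a backup whose retention period
    has not yet elapsed (soft immutability for targets without locking)."""
    now = now or datetime.now(timezone.utc)
    expires = backup_time + retention
    if now < expires:
        raise RetentionLockError(f"backup locked until {expires.isoformat()}")
    return True

taken = datetime(2021, 10, 1, tzinfo=timezone.utc)
# Two days in, a manual delete is refused.
try:
    check_delete_allowed(taken, timedelta(days=5),
                         now=datetime(2021, 10, 3, tzinfo=timezone.utc))
except RetentionLockError as e:
    print("delete refused:", e)
# After the retention window, the scheduled cleanup may proceed.
assert check_delete_allowed(taken, timedelta(days=5),
                            now=datetime(2021, 10, 7, tzinfo=timezone.utc))
```

The key design point from the talk: this guard lives in the backup software, so it is only "soft" protection; storage-level object locking is what makes immutability hard.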
But obviously, the way we are handling all permissions and credentials, it's going to be very, very secure, and the chances of that happening are very, very low. So that's the immutability aspect and how all of that works; hopefully that made sense. What we'll do next is jump into a recorded demo to show how all of this flows from an attacker's perspective and what would happen. A shout-out to my colleague Ben for building this demo; it's a fantastic demo, which I'll be walking and talking over. Okay, let's get started. First, we describe the environment. We have a Helm application deployed in a namespace called source-namespace. We look at all the application objects: the services, the deployments, everything that is powering the application itself. Moving forward, we look at the PVC and see that there is a stateful workload using a storage class known as hostpath CSI. Next, we make sure that we already have a capture done. So this is the user making sure that they have an application and that the application is protected: the backup shows that it's available, it was successfully completed, and it's there. Now we look at another namespace, called restore-failover. In this namespace, we are showing that nothing has been created. There is no data, nothing at all, and that's where we are going to be doing the restore operation. Once we are satisfied that there is nothing over there, here comes the attacker. Let's say someone gets access to the service and says: you know what, I'm going to go and encrypt this data, and I'm going to ask for money after that. So the user logs in.
We're assuming he has access, and we're just showing a simple encryption capability. Here are the user records: the usernames, first names, and last names. We're just going to use the SQL encrypt capability, but assume the malicious user is doing something different to obfuscate the data from you. So now the data has been encrypted and we are no longer able to read it. What happens next is that we go to the management console again. We take a look at the application that we had, we look at the backup summary: hey, was there a backup that I can restore from? Yes, there is one. We click on that backup just to make sure that everything happened correctly. We do a bunch of different things as part of every capture: capturing the metadata, quiescing the application if you asked for application consistency, then the data upload and the metadata upload. Once you're satisfied (okay, this was on a locked target, I'm good, I know that I can recover from it), we look at the data snapshot. We also provide you a quick summary of the objects that we captured, and if you want to do granular restores or something intricate within the data itself, we let you control all of that from the console. Again, we can view the YAML and verify what has happened over here. And similarly, the attacker is now going to go into your object storage and try to delete everything. We showed you the soft-delete aspect of it: trying to delete a backup from the Trilio system removed it from the console, but the object on the target was still there. Now, what is happening here is that the malicious user is going to try to delete the object itself from the bucket.
It requires you to say "permanently delete", and that one is also not going away. So that means your data is intact and you don't have to worry about anything. Now, the next thing we'll do (I'm just going to move this forward a bit) is, now that the attack has happened, obviously recover from it. Restoring is as easy as clicking restore. We spoke about the DR plan workflow; this is looking at it from a single application's point of view. You specify the namespace that you want to restore into: restore-failover, if you remember, was the namespace we showed had nothing in it. We click next, and we allow you to transform, exclude, and add any kind of hooks as part of the restore operation as well. Then once the restore process begins, you can start monitoring to see that your application is coming back up. Just give this a couple more seconds to move forward. While the restore is in progress, you can monitor what's happening: we do a validation first to check that the backup is good, then the data restore, then the metadata restore, and once all of that is done, your application is back up again. Again, just to show you, this was using the locked target, and that is where our backup went. Now the main piece of this, just fast-forwarding towards the end: once the restore is complete, we'll connect to the database again to check the data. But before connecting to the database, we want to make sure the restored objects are correct. So we'll list all the objects and see what was restored and how it's looking within the Kubernetes system first. All the objects are there: whatever we saw in the source namespace, all those same pieces have been captured and brought back. And because all of this is back, we can go back to the database and see whether our data is clean and intact or not.
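The restore phases just described (validation, then data, then metadata) can be sketched as a simple ordered pipeline. The phase names come from the talk; the function and structure are my own illustration, not Trilio's implementation:

```python
from typing import Callable, List, Optional, Tuple

def run_restore(steps: List[Tuple[str, Callable[[], bool]]]):
    """Run restore phases in order, stopping at the first failing phase."""
    completed = []
    for name, fn in steps:
        if not fn():
            return completed, name  # report which phase failed
        completed.append(name)
    return completed, None

phases = [
    ("validation", lambda: True),        # check the backup is intact
    ("data restore", lambda: True),      # restore volume contents
    ("metadata restore", lambda: True),  # recreate Kubernetes objects
]
done, failed = run_restore(phases)
print(done, failed)  # ['validation', 'data restore', 'metadata restore'] None
```

Ordering matters here: validating before touching anything means a corrupted backup is caught before the target namespace is modified.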
Again, the way we are going to do that is by port-forwarding the MySQL service and then querying against it. So now that we've done the port-forwarding, we go back to the SQL instance and run the query. We have to re-authenticate because it's a separate instance, but you run the query and your data is back to what it was before the attack happened. So the overall procedure: we showed there was an app, we captured it, the attack happened, the attacker tried to delete the backups locally and from the target and wasn't successful, we came in and saw that our data was compromised, we took a restore, and we are back online again. Now, this is just a subset of the things we spoke about in the slides; there are a lot of other pieces that come into the picture, like backup scanning, the DR plan workflow, and encryption, that complement everything Trilio is doing around ransomware. So with that, I conclude the demo pieces, and we'll quickly go through the questions; as you can see, there are a bunch of them here. The first question: what RBAC access does the Trilio dashboard application need to access external clusters, configs, and multi-cloud Kubernetes clusters? The way we've built the product, it's fully stateless, and any Trilio Vault management console can connect to any other Trilio Vault management console. There is no other dependency; all you require, from an RBAC perspective, is read permission over the Trilio custom resource definitions. As long as you have read permission, you will be able to access it. And with read and write permissions, and combinations of those, you can segment the Trilio solution as well. By segment, I mean you can control who has access to policies, who has access to creating backups, who has access to deleting things, and so on.
So it's very, very simple, fully stateless: as long as there's a network path, you just provide the URL of the management console and we'll connect to it. Next question: what are the steps I have to follow to use the Trilio solution in my deployment? We have a very slick documentation website, docs.trilio.io, which has a getting-started page. It shouldn't take you more than, I would say, 10 minutes if you have all the prereqs taken care of, like a CSI driver and a Kubernetes cluster. As long as you have those, within 10 minutes you should be up and running with TVK. Next: how do you store the storage key locally in Trilio? The way I understand this question, I believe you're asking about the encryption keys. The way we store those in Trilio today is as a Kubernetes secret, in the namespace where the backup happened. Tomorrow, closer to the late Q4 timeframe, we are integrating with Vault and also with AWS KMS. So if we are doing any encryption for you, we will be storing and retrieving keys from the Key Management System of your choice. Next: which IDE was used for the queries? I'm assuming you're talking about the demo; I believe that was just a MySQL IDE. I'm not 100% sure, but I'll confirm and let you know. And "encryption and access keys" was the clarification that was provided; yes, that's how we store them: currently as Kubernetes secrets, but we are building integrations into solutions like Vault and AWS KMS. Okay, I'm running out of time here, so I'm just going to present the last slide that I have. From a compatibility standpoint, as we said, we'll work on any Kubernetes distro. For storage, CSI is what we need. From a target perspective: S3, meaning any S3-compatible device, so it could be AWS S3, MinIO, Wasabi, whatever you need; plus NFS, and Azure Blob and Google Cloud Storage are also available now.
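Going back to the key-storage question for a moment: a Kubernetes Secret carries its payload base64-encoded, so reading a key back is a decode of one field. This is a minimal sketch; the secret name, namespace, and `key` field are my own assumptions, and a real client would fetch the manifest from the API server rather than parse a literal.

```python
import base64
import json

# A hypothetical Secret manifest holding a backup encryption key.
secret_manifest = json.loads("""
{
  "apiVersion": "v1",
  "kind": "Secret",
  "metadata": {"name": "tvk-encryption-key", "namespace": "backup-ns"},
  "data": {"key": "c3VwZXItc2VjcmV0LWtleQ=="}
}
""")

def read_key(secret: dict, field: str = "key") -> bytes:
    """Decode a key field from a Secret manifest, as a client would."""
    return base64.b64decode(secret["data"][field])

print(read_key(secret_manifest))  # b'super-secret-key'
```

Moving the key material out of in-cluster secrets and into an external KMS, as described above, means a compromised namespace no longer exposes the key alongside the backups it protects.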
On the ecosystem side, we have direct integration with Prometheus and Grafana. We provide our own Grafana dashboards if you want to use those rather than the Trilio monitoring tools, and we automatically send all our metrics to Prometheus, so that's an easy integration as well. We've done a lot of validations across a lot of databases (Postgres, Mongo, Elastic, Flux, you name it), and we continue to build, test, and validate these; they are documented on our website as well. And finally, we live in a very mobile environment, so no matter where you have data, wherever you have Kubernetes, Trilio will protect it for you. KubeCon, October 13th to 15th: we are going to be there in person. We are really excited to talk about all these different things, and some really cutting-edge features are going to be announced as well. Stop by our booth if you're going to be there; the booth number is P17, and you can start scheduling meetings with us now. We are also presenting on the cost of data management at KubeCon; it's a session I will be presenting on Thursday at 5 p.m., I believe. We also have some fun planned at the event: there's a KubeFest that we are sponsoring, so definitely reach out to us if you want to be part of that, and we are giving away virtual passes as well, so do reach out to the Trilio team. With that, thank you for listening. Fifty-six minutes; I think that was pretty efficient. That's never happened before, so I feel good about it. Hopefully this was interesting and informative for everyone. If there are any additional questions, that's my email; feel free to reach out. You can connect with me on LinkedIn or Twitter, whichever you use for social media. Let us know what you thought about this session and how Trilio can help you with your needs around ransomware. Thank you.
Wonderful, thank you so much Prashanto for your time today. And thank you everyone for joining us. Just as a quick reminder, this recording will be on the Linux Foundation's YouTube page later today. Thanks so much. And we hope you'll join us for future webinars. Have a good day.