Welcome to today's CNCF live webinar, Data Protection in a Kubernetes-Native World. I'm Libby Schultz and I'll be moderating today's webinar. We'd like to welcome our presenter, Michael Cade, a technologist from Kasten. A few housekeeping items before we get started. During the webinar, you're not able to talk as an attendee. There is a chat box at the top right corner, and underneath there's a subcategory for Q&A. If you put all of your questions there, we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page as well as YouTube, and you can also access all the slides and recordings post-event via your registration link. With that, I will hand it over to Michael to kick off today's presentation.

Thank you, Libby. Yes, as the title on the slide suggests, we're going to be covering data protection in a Kubernetes-native world. I'm Michael Cade, a technologist at Kasten by Veeam. If there are any questions we don't get to during the session, or at the end in the Q&A, you can most definitely find me on social media at @MichaelCade1 on Twitter. I'm generally hovering around there most of the time, any question is welcome at any time, and I'll try to help the best I can. So, who am I to begin with? I'm Michael Cade and I work for Veeam Software, or Kasten by Veeam. I've been at Veeam for basically six years now, so my life is very much around data management and data protection, and, probably like many of you on the call or listening to the recording, I come from that background.
I'm going to try not to call it traditional too much, but that virtualization space around VMware and Hyper-V, leveraging the infrastructure that we've come to know over the last 10 or 15 years as our platform to get things done. I have a huge focus on the community, not only sharing a lot of content from both that world and the new cloud native and Kubernetes world, but also just generally helping others: answering questions, being present, and helping people through this ever-ongoing learning journey that we all have here. So I'm either blogging on my personal site that you can see listed there, creating demos and walkthroughs on my YouTube channel, writing articles and so on for LinkedIn, and generally being present on social media, not only learning from the community but also trying to give back as much as I possibly can, which is what got me into the position I'm in today. I have a key focus on technologies such as cloud native, as I mentioned, automation, infrastructure as code and so on, but predominantly I focus on data management, both new and old and everything in between. So with that, let's jump into it. As I've already hinted, I for one still consider myself new to this world of Kubernetes, and I'm sure many of you have exactly the same experience. I've been hovering around this world for the last two years and only recently moved over to Kasten to really focus my efforts on data management in this area, with the complete intention of being part of this space and helping educate people on the requirements around data management here.
So, as I mentioned, I have a very traditional infrastructure background, but we can all see that there is fast adoption of Kubernetes and it's really taken off: today, automated deployment and scaling are in a good place and consumable for anyone, on premises or in the public cloud. There's still so much more that can and should be done, and we're going to touch on some of the points that will help us as a community accelerate into leveraging Kubernetes more for the wider goals around container orchestration and automated deployment, whilst also thinking about some of those topics that traditionally haven't been top of mind. So today we're going to talk about the Kubernetes that we have today, where we are, how significant adoption rates are, and how they're growing. We're then going to look at the future and how we see this space going from a specific data management point of view. A key part that goes hand in hand with data management is storage: is your Kubernetes storage ready? Are people using it in production? What we're really getting at there is stateful workloads, databases, but other types of workloads as well that we see in this space. It's very much like that virtualization wave that we saw, I want to say 10 years ago, but it feels like a while ago at least, where we went from those physical workloads, the physical servers, machines and mainframes that we had in our data centers, into a virtualization world, and then more recently into cloud-based platforms: IaaS workloads and SaaS-based workloads within the public cloud.
And now we're seeing this adoption rate growing and this next wave of platform coming out around the Kubernetes environment itself. I feel it's very similar to what we saw from a virtualization point of view, for those that remember things like vSphere 3.5 and taking that early step into the water, and then seeing how that evolved over time and how everything was made very simple and easy. I remember those days back in the 3.5 era: I was a professional services consultant, so I was going out and implementing vSphere. I don't believe you need that level of service anymore with vSphere, or any virtualization, because they've very much hit the easy button and adoption has far outgrown the industry. Kubernetes is probably not quite there yet, but I'm going to touch on some of the areas where and why we think this world is going to accelerate much faster than what we saw in the virtualization era. One of the first things to know is obviously the CNCF landscape, which has been evolving over the last few years, at least the few years that I've been looking at it: a very noisy but relevant chart of all of the different areas associated with Kubernetes and containerization, and cloud native workloads in general. This is growing exponentially, but within reason, and it's allowing us to do so much more with the platform. This ecosystem is much wider, much broader than what we saw in the virtualization journey.
What I'm basically saying here is that this is really a testament to all of the projects on this board that are enabling the customer, the end user, or us on the call to have choice and the flexibility of choice when it comes to delivering a service, whether it's databases, messaging, application deployment, or continuous integration and delivery. All of them give us options, and choice is always a good thing. It's maybe something that we didn't have in that world before. And again, as I've already mentioned, I'm very much coming from an operations point of view, and something we're definitely going to get into as we move through the next few slides is why this adoption is growing so fast. This is from a bit of research that VMware carried out in 2020. Obviously 2020 happened; it changed a lot of our environments out there, especially if you're an end user looking after systems. It potentially accelerated a lot of your adoption in terms of technology and cloud-based workloads, whether that was just a migration project to move things faster into SaaS, PaaS or IaaS. But ultimately, you can see here that 60 percent of the people that were asked are running less than half of their containerized workloads on Kubernetes. And this next number actually shocked me quite a bit: almost that same 60 percent are running fewer than 10 Kubernetes clusters. When you think about that, a cluster doesn't need to be a sizable number of worker nodes; clusters could be relatively small. But to have 10 of them indicates that there is clearly adoption here, and a large portion of respondents are running more than 10 Kubernetes clusters. So you can probably guess where and who are running these workloads.
Then, to really drill down into those respondents that are running Kubernetes in production: almost 60 percent of those, so 60 percent of the 60 percent, are running those Kubernetes clusters in production, and a further 20 percent of those have a huge 50 or more clusters, which again just highlights how big a footprint could be out there in the Kubernetes world and how fast people are adopting this newish technology. I'm not saying it's new; it's just over six years old, I think, but the adoption is going at a pace much faster than what we saw from a virtualization point of view. And this is what really excites me, because if we were to put this up side by side with what we saw from virtualization, instead of close to 80 percent of people moving to and standardizing on Kubernetes within three years, I would wager it took maybe five years for virtualization to really become the standardized approach for a platform. This is the exciting piece: the numbers that VMware gleaned from that report give us the visibility that it's only going to carry on being faster than we maybe once thought, and we've lived through that virtualization piece as well. One of the big influences is that it's not just operations. If anything, the adoption that we see today hasn't come from the operations team or the infrastructure admins; it's actually come from the development side of the business.
And really those are the people influencing these decisions as a business, and the operations teams are having to pull themselves into those discussions and meetings to make sure that, one, they're relevant, and two, that the platform is managed and delivered in the way that the developers need. That's where we obviously see that culture of DevOps coming in. I think this is also very interesting: day one, by which we mean the initial challenges around provisioning, installing and rolling out Kubernetes clusters, has largely been addressed. I'm not going to say it's simple, but it's relatively easy to get up and running from a day-one perspective and start leveraging a Kubernetes cluster and the orchestration methods behind it for your applications. It's very different to how it was two years ago when I first engaged in this world. Don't get me wrong, there's still room for improvement, and things will keep moving towards making it even simpler and easier for operations, and for developers, to consume. But the likes of the public cloud have definitely enabled that easy spin-up, and spin-down if need be, of a Kubernetes cluster. Where we're seeing a huge focus, which is quite refreshing for a backup guy, is that now we're getting to focus on day two, and focusing on day two much sooner in that race to have Kubernetes as a standardized platform. And it's not just backup; I put backup in that data management piece and highlighted it, but it also gives us a chance to highlight things like security and observability.
One of the things we've seen throughout 2020, alongside everything else that's gone on, is the increase in news around security: things like ransomware, outages caused by malicious activity inside the business, and data management challenges around accessibility of data. How do we bring all of that closer to the top of the list and start thinking about it up front, rather than as an afterthought like we've potentially done in the past? In my experience, it's always been: we look at the shiny new tin first, the storage, the compute, the networking, and only further down the line do we look at the backup, the security, the monitoring and observability, and the analytics we can achieve there. Maybe backup hasn't moved up the list, but that list has been made much tighter, because we're now very much aware of why we need to protect our workloads and how important they are to the business. Obviously this is the key concept and the key focus of what we want to talk about today: the day-two piece. So let's go a little deeper into this and specifically talk about it from a Kubernetes storage and data management focus. Another misconception that I've found over the last month or two of really focusing and engaging in this community is around stateless versus stateful workloads: there seems to historically have been one group that believes Kubernetes should absolutely be stateless and nothing else, and another group that thinks about stateful.
And you can see here that there is a huge percentage, half and sometimes more than half, where a StatefulSet is being used in their Kubernetes environment. So there is clearly a use case for protecting and working with that workload. I think the biggest driver of this is Kubernetes storage with CSI, along with the number of storage vendors being onboarded with CSI drivers, giving you access to cloud native storage from a Kubernetes cluster. That number is only ever increasing as well. Then we look at more data points: 55 percent of organizations indicate that half or more of their container applications are stateful. The last slide I was touching on is from a report that Sysdig did, I think in 2019, so obviously we would expect this to have risen quite dramatically in 2020, whereas this one goes back even further. This is from 451 Research, and they say 55 percent, so we're roughly around the same numbers in terms of adoption and people using stateful applications. Generally, another potential misconception is that when we're talking about stateful workloads or stateful applications deployed on Kubernetes, we're thinking just databases, those traditional SQL or NoSQL databases. We're not necessarily thinking of things like message queues, batch and data streaming, and leveraging other tools like Kafka out there. So this just gives you an understanding that this is happening and people are using these applications out there in a stateful way. This then opens the door to that shift, or maybe the reason it's expediting, between the developer and the operations team.
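To make that concrete, a stateful workload on Kubernetes typically means a StatefulSet whose volumeClaimTemplates ask a storage class, and through it the CSI driver, for a persistent volume per replica. Here's a minimal sketch; the names ("postgres", "fast-ssd") are illustrative, not from the talk:

```yaml
# Minimal StatefulSet sketch: each replica gets its own PersistentVolumeClaim,
# provisioned through the named storage class (backed by a CSI driver).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-ssd   # assumed storage class name
        resources:
          requests:
            storage: 10Gi
```

The per-replica claims are exactly the state that a data management tool has to understand and protect.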
One of the things we're going to do over the next few slides is talk about how the worlds of developers and ops teams are rapidly coming together, or where they share the same end goal and requirements for the overall business. There's an understanding now that we can't just have an app thrown over the fence and hot-potatoed across the infrastructure; we have to work together to make sure the application is being looked after in the best possible way. It needs to be updated regularly, it needs to be improved, all of that good stuff that Kubernetes enables us to do, without having to be separated like it maybe once was. And this might even be the same resource: there are a lot of DevOps engineers out there who have been able to grasp both the application development cycle and the infrastructure, the platform and the hardware resources underneath, and that's fine. Or, in larger environments, there might still be a developer team and an operations team that each carry their own flag and simply have to work together. We're already seeing this blend, and things are happening extremely fast. As an operations person myself, understanding the requirements and goals of development helps me better prepare the infrastructure platform and the day-two operations we speak of, which in turn means I'm not getting an application stack thrown over the fence and being told, well, you deal with that, you're on call now, you make sure the app you have no idea about stays up and running. So that's what we're going to drill into. One of the biggest things here, and again I'll talk about where I've come from because I think it shines a light on it, is that not much has changed in regards to data management at least.
The same goes for security: we still have to secure our environments and our data, if anything more so, because we're potentially leveraging cloud-based workloads and services a lot more alongside our platforms. And that's the same for backup and data management as well. So how do we look at this from a Kubernetes-native data management perspective? A lot of the conversations I've been having go like this: if I'm running a node-based cluster where I have access to the nodes, in one of the public clouds or on premises, could I not just use the same tool that I did in my virtualization environment to protect my workloads in Kubernetes? To a certain degree, yes, you can, but not to the same effect as something that has full visibility via the API, something that's natively built for data management in that space. When I talk about backup and recovery, ultimately backup is a form of insurance policy: it's about having a point-in-time copy of your data in a secure location that allows you to recover from a failure scenario. You could literally drag and drop that description onto any of the previous platforms we've come across, whether physical, virtualization, Kubernetes, or whatever comes later down the line. But from a Kubernetes-native or cloud native data management perspective, we have to be mindful of multi-tenancy, in particular role-based access control and its scalability. We have to focus on the application and its performance, and on the ability to leverage all of those different storage options I just mentioned. And then there's the nature of Kubernetes itself: the platform could be running anywhere.
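As one concrete building block for that point-in-time copy, the Kubernetes CSI snapshot API lets you request a snapshot of a PersistentVolumeClaim through the same Kubernetes API that everything else uses; a native data management tool can orchestrate these per application. A minimal sketch, with illustrative names ("csi-snapclass", "postgres-data", "my-app"):

```yaml
# Request a CSI-backed, point-in-time snapshot of an existing PVC
# through the Kubernetes API itself.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-data-snap
  namespace: my-app
spec:
  volumeSnapshotClassName: csi-snapclass   # assumed snapshot class name
  source:
    persistentVolumeClaimName: postgres-data
```

This is the kind of API access a VM-level tool simply doesn't see, which is the point about needing something natively built for the space.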
Well, what about application mobility? We've just said this brings the developer team and the operations team together so that you get the best of both worlds and can give your application the treatment it needs. But with our data management hat on, we also need to think about how we could seamlessly move our applications from cluster A to cluster B without much impact, and Kubernetes has a really strong API that enables that to happen. And lastly, disaster recovery. The definition of disaster recovery hasn't really changed over the last 30 or 40 years, but the concept of it potentially has, because now we're not just talking about an owned site A and an owned site B, replicating virtual machines or data from site A to site B. We're now thinking about cloud-based workloads: availability zones, regions, multi-cloud and hybrid cloud. You might still have site A, and that site A might end up being your test and development environment, while your production is now in one of the public clouds. From a disaster recovery point of view, though, they all still matter: if you were to lose access to the public cloud, or lose access to site A on premises, we have to think about what that does to our business and how it affects our business objectives. So what's different? Obviously Kubernetes is different to the virtual machines we've worked with in the past. Think about a virtual machine: an operating system sitting on an abstracted layer of hardware, and an application that lives on there. Maybe that's a SQL database, maybe it's a database of some description, maybe it's another application. And now we look at Kubernetes.
Well, it's not the same. I'm going to get into this in a little more detail when I've got some visuals to share, but it's not the same because an application is now broken down into many different microservices that enable so much more from a dynamic point of view, whether that's rescheduling or updates. It gives you so much more flexibility: you don't have to treat that virtual machine like a pet, which we know we've all done in the past. Now we don't really care about the pod the application is running in today or this minute, because we want the flexibility of updating it on the fly, every minute or every hour, whenever that's needed, rather than just keeping feeding and watering that pet. So where does VM-based backup fall short? First of all, if you've got one application per virtual machine, then great, virtual machine backup is the way to protect that workload. I touched on how traditionally we were able to take agent-based backup from our physical machines and bring it into the virtual world, but you were losing so much of the benefit of the underlying hypervisor, the virtualization host, by not going in and leveraging those APIs, which is a very similar point to the one we're going to get across here. Because as soon as we start putting Kubernetes across our virtual machines, the VM-based solution loses all application visibility. We have no idea what that application is or even where it lives, especially because it's not a one-app-to-one-host situation when it comes to Kubernetes.
It's generally split out. In the diagram you'll see what looks like a perfect scenario: app one lives on node one, app two lives on node two, and so on and so forth. But that's not the case. We split those applications down into microservices, which gives us flexibility, choice and all of the scalability of containerization, but it's not always as simple as everything living on the same host. What you'll see in the diagram is that no single VM has a complete application; those applications are spread across multiple virtual machines, multiple worker nodes, within your Kubernetes cluster. So if you were to back up virtual machine one, yes, you'd get a bit of app one and a bit of app two and a bit of app three, and you could go and get a copy of the storage connected to it as well. That's achievable, but when it comes to actually having a consistent state of that application, it's not a great way to protect those workloads. And then break that application down even further, into all of the secrets and all of the artifacts that come with it, the namespace, all of the things that make up the componentry of that application, which need to be protected as part of the same operation. This is where VM-based backup falls short considerably: you're just not going to have visibility into what that is unless you've got direct access to the Kubernetes API. So again, what's different is that we're not abstracting the hardware so much anymore, we're abstracting the operating system, which allows us to really focus on the application and its scale, and that focus on application scale is a key part of the wider Kubernetes ecosystem.
And we can do that dynamically. We don't just say, we've got this one virtual machine that's going to run our SQL database and we can add additional virtual machines as and when we need them. Instead, let someone else, someone non-human, make that decision and scale accordingly to whatever the workload requires, on whatever available storage meets the storage class needs. And then, because of the nature of Kubernetes, we can have multiple clusters, as we saw back at the beginning of the slides, and have visibility of all of those clusters in a multi-cluster environment. So, as I touched on, the application becomes the key part of any deployment, and that's where VM backup falls short. We need to be able to automatically and completely capture the whole application via a namespace and understand all of those hidden parts I mentioned: not only the pod and the application, but also the service accounts, the secrets and the config maps for each one, pulling them all together so that you're seeing them as an application rather than as individual subsets of data. And we abstract the underlying infrastructure: in a Kubernetes world I don't really care where that's living or what's happening under the hood, because we're all leveraging the same Kubernetes API. Storage vendor A and storage vendor B just look the same to Kubernetes. Yes, there might be a few bells and whistles written differently into the CSI driver, but ultimately it doesn't matter, because what we also need to consider is the restore capability. When we restore, we want to be flexible in restoring back into a different Kubernetes cluster, potentially with different storage, different compute nodes, completely different underlying infrastructure.
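The "whole application via a namespace" idea can be sketched as a namespace-scoped backup policy. The resource below is purely illustrative, loosely in the general shape that Kubernetes-native backup tools expose; the API group, kind and field names are assumptions, not a real product API:

```yaml
# Illustrative only: a hypothetical namespace-scoped backup policy.
# Selecting the namespace is what captures the application as a whole:
# Deployments, Pods, Services, ServiceAccounts, Secrets, ConfigMaps
# and PersistentVolumeClaims travel together.
apiVersion: example.io/v1alpha1
kind: BackupPolicy
metadata:
  name: my-app-daily
spec:
  namespaceSelector:
    matchNames: ["my-app"]   # the namespace that defines the application
  schedule: "@daily"
  retention:
    daily: 7
    weekly: 4
```

The contrast with VM backup is that the unit of protection here is the application boundary (the namespace), not a machine boundary.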
And this is where we enable that, with a transform: just because that persistent volume lives on storage class XYZ today and we then want to restore it to a different public cloud Kubernetes environment, that doesn't matter. You need to be able to perform that transformation, to say, as you restore this, restore it in this particular way. To put this into the context of a real-life user of Kubernetes: their deployment has grown substantially, and you can start to see the number of components and think back to how I explained the VM-based backup methodology and how it simply would not work in this instance. In fact, this customer has now grown past the 100-node mark, so the capacity and everything else has likely doubled along with the components. The key point here is that it wasn't operations that made that call; it's very much developer-run, or DevOps-run, in that they're working together as a team to deliver what is needed from a day-two data management point of view. They've got use-case diversity around backup and disaster recovery, as well as the application mobility I mentioned, across multiple subsets of applications. You can see down there we've got our NoSQL and our SQL databases, but also think about the messaging queues I mentioned, the batch processing, et cetera. So again, that brings us back to DevOps and the shift left. We're not really focusing on the infrastructure anymore; we don't care about the underlying infrastructure. We're more focused on the application and the delivery method of that application, but we need to incorporate those data management or backup needs into day one.
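That restore-time transform can be pictured as a substitution applied to the captured resources as they're recreated on the target cluster, here rewriting the storage class of restored PVCs for a different cloud. Again, this is a hypothetical spec; the API group, kind and field names are assumptions used only to illustrate the idea:

```yaml
# Illustrative only: a hypothetical restore action with a transform that
# rewrites the storage class of restored PersistentVolumeClaims, so an
# application captured on one cloud can land on another cloud's storage.
apiVersion: example.io/v1alpha1
kind: RestoreAction
metadata:
  name: my-app-restore
spec:
  backupRef: my-app-daily-latest   # assumed reference to a restore point
  targetNamespace: my-app
  transforms:
    - resource: persistentvolumeclaims
      operations:
        - op: replace
          path: /spec/storageClassName
          value: standard-rwo      # storage class on the target cluster
```

The operations are deliberately JSON-Patch-like: the captured manifest stays the source of truth and only the infrastructure-specific fields are rewritten.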
As part of that application development cycle, we need to include protecting those workloads up front; it can't be an afterthought. Obviously there are a lot of other areas within DevOps and the shift left around infrastructure as code, and this is definitely how I, as an operations person, first approached the cloud native world: starting to learn other tool sets and other ways to make your life easier, using infrastructure as code, but also things like dynamic provisioning and the dynamic deploying and destroying of applications. But as soon as you increase automation, and potentially increase accidental risk via self-service, the need for backup grows with it. It's an ongoing cycle of looking after your data and being aware of that data. Then, just quickly moving on to a bit about security. Again, the world hasn't changed too much in regards to security requirements; maybe 2020 gave us a good indication that security should still be one of the key considerations in any of our deployments or environments. But it hasn't really changed our backup and DR requirements. I've touched on that: make it easy to deploy, factor it into the same CI/CD pipelines that you have, and automate all of it, because who wants to be looking after backup jobs or backup policies on a daily basis? We just want them to work, to pick up those applications and protect them in the way we want them protected. We want to set and forget, and make it extensible so that people can hook other tasks into it as well. And from a security point of view, the same words and the same methodologies that we saw on other platforms and in other environments are still here. We still have the requirement to encrypt our data.
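Factoring backup into the same CI/CD pipeline can be as simple as shipping the backup policy manifest alongside the application manifests, so protection is versioned and deployed as a day-one concern. A sketch in GitLab CI syntax; the stage name, file paths, and the policy file itself are assumptions:

```yaml
# Illustrative CI fragment: the backup policy deploys in the same
# pipeline step as the application it protects.
deploy:
  stage: deploy
  script:
    - kubectl apply -f k8s/app/                 # the application manifests
    - kubectl apply -f k8s/backup-policy.yaml   # its backup policy, versioned alongside
```

The design point is simply that the policy lives in the same repository and pipeline as the app, so a new application can never ship unprotected.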
We still have access rights and access management. We still have role-based access control. But the important part is being able to leverage the same APIs wherever you run, whether that's AWS GovCloud, as it is here, or another public cloud environment somewhere else. And that brings me back to my background, if you like: the operator challenges. I think that first point here, the skills gap and the talent shortage, is critical. And there is an understanding, or a misconception is probably a better word, that Kubernetes is hard. Okay, it depends on your definition of hard, but as we mentioned around deployment, especially for the operator, deployment has got significantly easier, while the understanding of the key components and the architecture of Kubernetes is still deemed difficult. When actually, and I can put my hand on my heart and say this, after two years of educating myself in my own technologist way, and then having to upskill and speed up my education, it's really not as daunting as it maybe first seems. So the biggest advice that I would give is: just get hands-on. And I'll touch on something at the end where, if you don't have access to something, then maybe we've got a helpful answer to that. But I think being involved, being hands-on, and trying things in a Kubernetes cluster is the key to understanding and accelerating that learning curve.
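The role-based access control mentioned above applies to backup resources just like any other Kubernetes object. As a sketch, assuming backups are exposed as custom resources in a `velero.io` API group (as they are in the open-source Velero project; the role and namespace names are invented for illustration), a create-and-inspect-but-not-delete role might look like:

```yaml
# Role letting an app team create and inspect backups of their own
# namespace, but not delete them (deletion stays with the platform team).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: backup-operator
  namespace: my-app
rules:
  - apiGroups: ["velero.io"]
    resources: ["backups", "restores"]
    verbs: ["get", "list", "watch", "create"]   # deliberately no "delete"
```

A RoleBinding would then attach this role to the team's group or service account, so the same Kubernetes-native RBAC machinery that governs the rest of the cluster also governs data protection.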
The other key change, and this will be difficult for a lot of people, is around operations. Traditionally, operations focused on infrastructure and keeping the lights on: making sure the speeds and feeds were there for the application, looking at the back end, keeping the uptime, keeping the hardware or the platform running. That needs to switch. You now need to be speaking to your developers about what the application looks like, what the application is made up of, how we could better it, how we could scale it better. We must enable self-service, and I'll couple that with the second line: incorporate infrastructure as code, incorporate everything into your CI/CD, so that when your team scales, so does the workload, and you have the ability to handle this new way of thinking about the application. It grows with you, and you're not left behind from a scalability point of view. And I'm going to touch on this a little bit more, because it will be quite alien to a lot of operators that have come from the virtualization world: rapidly dealing with the Kubernetes releases, but also everything else that comes with that. I'm going to touch on the Kubernetes release cadence and how quickly it moves, but think about everything else in the same ecosystem, where everyone is striving to be quicker and faster and to innovate on everything they've already done. Kubernetes, and I'll just build this out, works on three-month release cycles. So what you also need to consider from a data management point of view is that if I take a backup in January, for example, then come October, November, December, I still need to be able to restore workloads from that January, February, March period. But obviously I also don't want to be left behind from an infrastructure or DevOps point of view.
I don't want to be left behind, because the features in these releases are completely game-changing for many businesses as well. Okay, so I've realized I've rattled through quite a number of slides, and we'll get to the questions after the few slides I've got left, but here are some other data management concerns to watch out for. There is a difference between deploying an application with HA, as a StatefulSet within your Kubernetes environment, and backing it up. Yes, the nature of Kubernetes will keep that application alive based on the Deployment or the StatefulSet, but backup is backup. Backup gives you a point-in-time copy outside of the Kubernetes cluster that you can restore from in a failure scenario. We know there are failure rates on premises; it's what we've been living with for however long. Yes, things have got better, and all the nines are being quoted by the public clouds, but we definitely see those outages as well. Again, a nod to operations: we have to understand the entire stack, the operating system, Kubernetes, the application, the database, the networking, as well as the security on top. As soon as we understand all of those areas, we have a better view of what that backup or data management requirement needs to look like, as well as replication. If that cluster is running our mission-critical systems, do we need to consider replicating it, or having a disaster recovery plan that replicates from public cloud A to public cloud B, or back on premises? Because HA, absolutely, factor that into all of your applications where possible, on top of what you already get from the platform. But that is not a backup; as I said, backup is the insurance policy. Also, we need to consider security. Security is still very much top of mind.
We're hearing about this on a daily basis in the news, regardless of what sector we're in. I'm sure we've all had colleagues, if we haven't all been victims ourselves, of some sort of cybersecurity or ransomware attack, and we know it's potentially inevitable that we'll be attacked at some point. And I think the best way I've been putting it to people, especially in the virtualization world, is: if you arm yourself with the possibility, a high possibility, that you will be attacked by ransomware, then you look at your remediation scenario, your remediation plan, your prevention plan, and make sure that if this failure scenario happens, this is where we go; and if that failure scenario happens, this is where we go. You have different plans for different failure scenarios, but make sure it's not just a piece of paper, and that you can physically walk through that runbook and get your business data, your important data, up and running again in the desired amount of time. And then also, we've spoken about two or three different areas where storage is a huge part of the data management piece, whether that's the storage where you're running your database, where you're storing your persistent volumes, or where you're actually landing your backups, exporting them out to object storage or potentially a file system. So your data management practice should take advantage of those ecosystem integrations from a storage point of view. Also, we need to make sure we're application-consistent when it comes to databases, NoSQL, or other systems, using hooks. Let's make sure we're taking the best possible backup of those applications, and let's make sure we can monitor and alert on those applications with something more aligned to what we have in our Kubernetes cluster and the cloud native ecosystem.
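The application-consistent hooks mentioned above typically run a command inside the application's container immediately before and after the volume snapshot. As a hedged sketch using Velero's pod-annotation hooks (one common open-source mechanism; the container name and the `CHECKPOINT` command are illustrative assumptions, not a universal Postgres recipe):

```yaml
# Pod template annotations: run a command inside the postgres container
# immediately before and after the backup tool snapshots its volumes.
metadata:
  annotations:
    pre.hook.backup.velero.io/container: postgres
    # CHECKPOINT forces dirty buffers to disk, so the snapshot is
    # closer to consistent (illustrative only; real quiescing varies
    # by database).
    pre.hook.backup.velero.io/command: '["/bin/sh", "-c", "psql -U postgres -c CHECKPOINT"]'
    post.hook.backup.velero.io/container: postgres
    post.hook.backup.velero.io/command: '["/bin/sh", "-c", "echo snapshot complete"]'
```

Other tools express the same idea differently, for example Kanister-style blueprints; the point is the backup tool calling into the application, not the specific annotation syntax.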
And then mobility and freedom of choice. I think this is the strongest piece across Kubernetes in general: the freedom of choice of where you run your workloads today, tomorrow, or in a year's time is completely agnostic. You're able to run anywhere, but consider that when it comes to your data management software as well. You want to be able to protect the workload regardless of which public cloud it is, whether it's on premises, which distribution you're using, et cetera. You don't want your data management choice to dictate your platform, is what I'm trying to say there. And just before we wrap up here and get to the questions, there are a few resources, and they are very vendor-agnostic, so we're not talking about our product, but they very much key into some of those reasons and the best practices around data management and Kubernetes-native backup. So very much some resources to have a look through. Also, if you are just starting out on this journey, then Phippy has been across the cloud native and CNCF landscape for a while, and I've found those books really quite easy, from a beginner's point of view, for understanding a little bit more about cloud native. In particular, Kasten has been very much involved with Phippy in Space, focusing around cloud native recovery, so again, there's a link there for that one. And just to summarize some of the points that we've made. Backup and recovery: its importance doesn't change whether you're in the virtualization world, which is going to sit alongside your Kubernetes clusters; at the very least, you're probably going to be running both platforms for the foreseeable future. Backup is still important. Application mobility: the ability to move those applications freely between clusters. Disaster recovery: it doesn't stop just because we've gone to Kubernetes.
It's still a requirement. Also look for a platform that enables multi and hybrid cloud, or can work across multiple environments; that can work against multiple storage types and offerings; and that also offers role-based access control. It should be built for Kubernetes, leveraging the APIs. It should be super simple to use, but also simple enough that it can be incorporated into your developers' CI/CD pipelines, so that data protection really isn't an afterthought. Obviously it needs to be supportive of role-based access control, as I just mentioned, but also of the native Kubernetes APIs, and it should have a rich ecosystem to enable that multi and hybrid cloud approach. And the one last thing that I want to touch on: a lot of us won't have access to Kubernetes clusters that we can just start playing around with, and playing might not be the right word, messing around with, again probably not the right word, but learning. We've recognized that at Kasten, so we have two free options. There's a free starter edition that ultimately allows you to protect those workloads, or at least have a look at protecting those workloads, on up to 10 nodes, and that's free forever. And there's also a hands-on lab that lets you walk through provisioning workloads and applications, PostgreSQL, et cetera, and actually see what that looks like. So it gives you some hands-on experience rather than just reading the white papers and the resources that I've mentioned. So that's where I'll leave it. Libby, have there been any questions? Not seeing any yet. So if anybody has one, go ahead and pop it in the Q&A chat box. We'll give everybody just a minute to see if we drum anything up. I'm just going to stop sharing so that you don't see the long winding road of inception there. It looks like you did a good job explaining everything.
I definitely tried to squeeze in as much as possible. All right, I'm not seeing any questions. Last chance, everybody. Well, thanks so much, Michael. That was a great presentation. I know you shared your social media, so if anybody has questions post-event, you can always reach out. And we will be sure to get this online and get the slides posted so that everyone can access them post-event as soon as possible. We'll look forward to seeing everyone at another CNCF webinar soon. All right, thanks so much, Michael. Thanks, everybody, for coming. Yes, thanks all. Thanks, Libby. You're welcome.