Thank you for joining the session. Today we're going to cover how to follow the 3-2-1 backup rule in Kubernetes. I'm Michael Cade, a senior technologist at Kasten by Veeam.

Before we get into the 3-2-1 rule, I wanted to highlight some protection gaps. In particular, the underlying storage is still a risk for Kubernetes platforms; across our data centers today, storage remains a fundamental building block of where we run our applications and where we store our data. We've also got the added hurdles of accidental deletion, which can still happen in a Kubernetes world, plus malicious loss, ransomware, file and data corruption, and site outages. These are all still attack vectors and failure scenarios we have to contend with, regardless of whether we're running a Kubernetes cluster, virtualization, a SaaS-based workload, IaaS in the public cloud, or physical machines. The protection gaps we've known about for many years are the same protection gaps we have today. They haven't gone away; we just deal with them in more innovative ways.

That brings us to the question: what is the 3-2-1 rule? Simply put, it's a methodology, not something tied to a particular backup vendor. If you adhere to the 3-2-1 rule, you should be covered against most failure scenarios. It means keeping three copies of your data, on two different media types, with one of those copies off site. By three copies we mean: copy one is your production data set; copy two is your really fast recovery copy, with something like a 7-, 14-, or 30-day retention, sitting potentially right next to your production data but on a different media type. That different media type is also the first of the two media types, the one holding that 7-, 14-, or 30-day retention.
Then, ideally, we want to keep another copy of that data in an off-site location. Traditionally this might have been tape; more recently it will take advantage of things like cloud object storage in the hyperscalers, or potentially object storage on premises. That covers the final one: the one off-site copy of your data, protecting you against failure scenarios like fire and flood on premises.

I also want to settle a couple of backup myths. The first is that in a Kubernetes world we don't need backup because everything is stateless. That might have been the case in the early days of Kubernetes, but it isn't today: more and more people are moving their NoSQL and SQL databases into their Kubernetes clusters. Second, high availability is not a backup. High availability is a must, and we see it in physical workloads and in virtualization, but it's still not enough to cover the failure scenarios we just touched on. Likewise, replication is not enough on its own. Yes, it should absolutely be part of your data management strategy, but it doesn't replace backup, and the same can be said of snapshots: include them in your data management strategy, but don't rely on them alone.

The final point is backup responsibility. Ask yourself how important that data is to your business, and make sure you're using the right tool for the job. What we mean by that is: we traditionally had agents for physical machines, then we moved to a virtualized world with an agentless approach that tapped into the underlying APIs of the virtualization host. We should make the same leap for Kubernetes.
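To make the local fast-recovery copy concrete, here is a minimal sketch using a Kubernetes-native CSI VolumeSnapshot, one of the native storage APIs a backup tool can drive. This is an illustration, not a vendor-specific manifest: the namespace `databases`, the PVC name `postgres-data`, and the snapshot class `csi-snapclass` are all hypothetical, and your cluster's CSI driver must support the `snapshot.storage.k8s.io` API.

```yaml
# Hypothetical example: a CSI snapshot of a database PVC.
# This provides the fast, local recovery copy (copy two of 3-2-1);
# a backup tool would then export the data off site to object
# storage to satisfy the remaining media type and off-site copy.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-data-snap        # assumed name
  namespace: databases            # assumed namespace
spec:
  volumeSnapshotClassName: csi-snapclass      # assumed snapshot class
  source:
    persistentVolumeClaimName: postgres-data  # assumed PVC name
```

Note that a snapshot like this typically lives on the same storage backend as production, which is exactly why the 3-2-1 rule asks for a second media type and an off-site copy on top of it.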
That means finding a backup tool that can leverage native APIs within the Kubernetes cluster, regardless of where that cluster runs. This gives us the best ability to protect the data inside it. With that, thank you for joining my session.