Hello, everyone. My name is Jeff Ligon. I'm a solution architect with Trilio. Trilio is a cloud-native data protection company that does backup, recovery, and migration of applications in Kubernetes. Today, I want to talk to you about backing up your Kubernetes strategy: the myths and the facts.

I'd like to begin by talking about the Kubernetes landscape in 2022. Everyone is probably aware of the massive shift we've seen in the IT industry over the past five to ten years, where companies are taking their legacy applications from bare metal and virtual machines and evolving them to use containers, and then container orchestration. And it seems that Kubernetes has pretty much won the war in container orchestration. So we wanted to share a quote from IDC predicting that by 2024, 70% of net new production-grade applications will be cloud native, up from just 10% containerized in 2020. That's all thanks to the shift to microservices, containers, orchestration, and the DevOps movement.

With or without containers, every IT group is worried about the downtime of their applications. Nobody wants downtime, and downtime costs money and time. Here are some figures we've pulled from Gartner: each minute of downtime can cost a business up to $5,600, and 91% of organizations report that just one hour of downtime costs more than $300,000. Add to that the rise in ransomware over the last five years; in 2020 alone, attacks increased 171%. All of these factors show that downtime really costs money. And it's not just costing money, it's costing time. I attended an IT tech talk conference a few weeks ago, and a company there was talking about the challenge of IT keeping up with the pace of business.
What that involves is having your IT teams help with the evolution of your business, and downtime just takes away from that. Every minute an IT team spends fixing downtime is a minute they're not able to spend on productive work. I think we've all heard anecdotes from our IT friends: some teams feel like they're just treading water, dealing with downtime issues and solving tickets instead of working on projects that could be improving and growing the business. These are all things we want to avoid and address, and that's something Trilio has been helping companies do since 2013. So today, I wanted to talk about some myths we're hearing in the Kubernetes landscape related to protecting your applications from downtime.

Myth number one: "I don't need backup because all of my Kubernetes applications are stateless." This may have been true a few years ago. In fact, I attended KubeCon last October in Los Angeles, and over that three-day period I talked to a lot of companies, and at that time there was a lot of truth to it; there weren't many people running stateful applications in Kubernetes. Kubernetes was designed by Google mainly for stateless applications. But now, in 2022, that's definitely shifting. We at Trilio ran our own customer survey asking companies whether they're using stateful volumes in Kubernetes, and a majority, 53%, said yes. Interestingly, another 15% said they weren't sure, so they needed to check what the state of their applications in Kubernetes actually was. We also looked at industry statistics. The CNCF recently ran a similar survey with much the same result as ours: over 50% said yes, they're running stateful applications in containers.
Even more interesting, slightly over 20% said no, but if you add up the last two columns, roughly another 20% said they're either evaluating stateful applications in containers or plan to adopt them within the next 12 months. So that is definitely changing.

Myth number two: "My built-in storage snapshots are enough to protect my cloud-native applications." A lot of companies start there; relying on the snapshots that come from your storage system is the legacy way of doing backups. But it comes with a lot of limitations and challenges. First, if you're just capturing snapshots, you're relying on scripts to tie your applications, and maybe your databases, to those snapshots. You're relying on the individuals writing those scripts, and that's prone to errors. The Kubernetes environment is so dynamic, and applications in it change so quickly, that you're then counting on those scripts to keep up with the applications. That's a real limitation. Second, incomplete data: there's a lot more to your application than snapshots. There's a lot of metadata associated with Kubernetes applications, so if you rely only on snapshots, you're not capturing your entire application. Third, inability to scale: if you're relying on a manual scripting process to piece your application back together, that will not scale. As the application grows, figuring out at recovery time which snapshots to restore and how they tie to which metadata just doesn't work in production. And finally, if you rely only on snapshots, you're more susceptible to disaster, because those snapshots typically live inside the same storage system that provides storage for your cluster.
If that entire cluster or that entire storage system goes down, you won't be able to recover from it. You really should be relying on a backup solution that stores things outside of your cluster's storage.

Myth number three: "My legacy data protection solution can protect my containerized applications." This is similar to the second myth: companies carrying on with what they've always done for data protection. In the days of siloed, monolithic applications living on bare metal or in virtual machines, the data protection model was based on storage appliances, storage volumes, and snapshots. But the Kubernetes environment is far more dynamic and highly scalable. Your application gets distributed across some number of worker nodes chosen by the Kubernetes scheduler, and the worker nodes themselves are dynamic; you can set up autoscaling so that worker nodes increase as your application grows. Everything is highly automated, scripted by CI/CD tools, and policy-driven, and multiple personas can be deploying applications. The traditional data protection model simply breaks down in this highly evolved environment. You need a data protection solution designed for the cloud-native world of Kubernetes. A cloud-native data protection solution like Trilio gives you speed: you're able to restore your apps faster because everything has been planned out in advance. You might even have disaster recovery plans prepared ahead of time, with backup or disaster recovery clusters living in different clouds. Maybe your workload is on premises today and you have disaster recovery ready to go in the public cloud, so you're able to restore those applications very quickly.
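To make the snapshot-only gap from myths two and three concrete, here is a minimal sketch; the resource list and function names are illustrative, not any product's API. A storage snapshot only covers the data behind PersistentVolumeClaims, while an application-centric backup also captures the Kubernetes metadata:

```python
# Illustrative sketch: the resource kinds a typical Kubernetes app is made of.
APP_RESOURCES = [
    {"kind": "Deployment", "name": "web"},
    {"kind": "Service", "name": "web"},
    {"kind": "ConfigMap", "name": "web-config"},
    {"kind": "Secret", "name": "web-credentials"},
    {"kind": "PersistentVolumeClaim", "name": "web-data"},
]

# A storage snapshot captures only the data behind the PVCs.
SNAPSHOT_ONLY_KINDS = {"PersistentVolumeClaim"}

def missing_from_snapshot_only(resources):
    """Return the resource kinds a snapshot-only strategy would not capture."""
    return sorted({r["kind"] for r in resources} - SNAPSHOT_ONLY_KINDS)
```

Running `missing_from_snapshot_only(APP_RESOURCES)` shows that the Deployment, Service, ConfigMap, and Secret objects would have to be reconstructed by hand, or by those error-prone scripts, at restore time.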
And, as I mentioned, this works across clouds: you can maintain different Kubernetes clusters, take point-in-time backups, and move them between application clusters.

Myth number four: "Data protection is only about backup." That used to be the case: you took backups simply so you could recover from any kind of disaster. But teams face a lot of other infrastructure challenges today. There's application uptime: you want a lower time to recovery, in both cost and time, so you can restore your application to any point in time if something goes wrong. You might have compliance and disaster recovery mandates: companies in different industries are beholden to regulations that require certain disaster recovery practices. Maybe you need to keep so many days' worth of backups, or retain backups under a set retention policy that in some cases can span many years. You want to bounce back from outages and ransomware attacks in the quickest way possible. But it's not just about things going wrong. There are also migration scenarios driven by cost and performance. Say you started with one cloud provider and its prices have increased over the past six months, or another provider better meets your performance criteria for CPU or disk I/O operations, in cost or in performance. You might need to move that application, and your backups become the way to move it from one cloud to another. And you might have service-level agreements.
You might have SLAs that specify uptime requirements or availability across different availability zones. All of these things are important. It's not just about backup anymore; having a good backup solution helps you meet these other challenges too.

So let's talk about the use cases for cloud-native data protection. The first is backup and recovery: making sure you have an automated way of backing up everything in your Kubernetes cluster, and being able to recover to any point in time from pre-scheduled point-in-time backups. Then disaster recovery: even if your entire cluster goes down, you have a way to recover all of the data, including the persistent data that lived in that cluster, to another cluster. But it's also about application mobility and migration. CI/CD tools today greatly help with deploying Kubernetes applications, but a good data protection solution can help you move those applications between clusters. Maybe you have different levels of test and development environments, and a tool like Trilio can help you move applications between those test/dev environments. And then ransomware protection and recoverability. As I said, ransomware has really been on the rise recently: roughly 300 million cases in the last year, an average cost that has increased to over $300,000 per business, and attacks up 72% since the pandemic. Every business needs to plan into its infrastructure how it will protect itself from ransomware, and how it would recover from a ransomware attack if one happened.
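One small building block of ransomware recoverability is verifying, before you restore, that your backup artifacts haven't been tampered with. Here is a minimal sketch of that idea using content hashes; this is a generic technique, not Trilio's implementation, and the artifact names are made up:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of a backup artifact's contents."""
    return hashlib.sha256(data).hexdigest()

def verify_backup(artifacts: dict, manifest: dict) -> list:
    """Compare current artifact hashes against the hashes recorded at
    backup time; return the names of any artifacts that no longer match,
    which can be a sign of corruption or tampering."""
    return sorted(
        name for name, data in artifacts.items()
        if manifest.get(name) != fingerprint(data)
    )
```

In practice the hash manifest itself should live on immutable or offsite storage, so an attacker who encrypts your backups can't also rewrite the recorded hashes.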
At Trilio, we believe ransomware protection is more than a single software feature you need to comply with; it's a framework you need to follow. The NIST Cybersecurity Framework outlines the areas application teams should address to make sure their applications are secure from ransomware, and Trilio helps in three of those areas: identifying and protecting applications, detecting and mitigating attacks, and recovering applications when something goes wrong.

I'd like to conclude by saying that a data protection platform built for the cloud is essential in protecting your cloud-native applications. You need a cloud-native protection solution that does several things. It needs to be application-centric: it should follow all of your Kubernetes applications and every component they spin up in your clusters, whether that's the metadata pieces or the stateful persistent volume pieces we talked about in the first myth, which are clearly on the rise across the industry. And no matter how those applications are deployed, whether with plain labels, with more advanced concepts like Helm charts or operators, or across a single namespace or multiple namespaces, you need a tool flexible enough to track all of it. You need a tool that's Kubernetes-agnostic, one that works across any CNCF-certified distribution of Kubernetes, so that if you started with one cloud provider and decided, for cost or performance reasons, to move to a different one, you're able to do that. You should have a management console that's self-service. You should enable your development teams, your users, anyone consuming your Kubernetes applications.
If they're deploying Kubernetes applications, they should have backup and recovery capabilities themselves. This shouldn't be the old legacy model where only the storage admin can do this and backups and recovery are the storage admin's job alone. Enabling your development teams to do it themselves also helps in the migration scenarios we talked about earlier in the presentation.

Your solution should ideally be native to Kubernetes. You shouldn't have to learn new tooling or install new CLIs; it should be built into the Kubernetes APIs and work with the standard Kubernetes tools such as kubectl, or oc if you're using the OpenShift distribution of Kubernetes. And you should have a solution that's enterprise-class. It should support things like restore transforms: if you're moving from one type of Kubernetes cluster to a different one, some base components need to be transformed. A storage class, for example, might be named differently in Google Cloud than in Amazon's cloud, so you need something that transforms those components on the fly when moving the application, making the migration seamless. You also need database hooks. If you're running databases as stateful applications in Kubernetes, there are procedures you must follow to properly quiesce those databases, so that when you take a snapshot you get a truly application-consistent backup, and when you restore that snapshot the database comes back in a proper running state as of that point in time. You need hook components to be able to do that.
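Both of those enterprise features, restore transforms and database hooks, can be sketched in a few lines. This is illustrative Python, not Trilio's actual API: the storage-class mapping and the quiesce/unquiesce calls stand in for whatever your target cloud and database actually require.

```python
import copy
from contextlib import contextmanager

def transform_storage_class(manifests, mapping):
    """Restore transform: rewrite spec.storageClassName in PVC manifests
    when the target cluster names its storage classes differently
    (e.g. "standard-rwo" on GKE vs. "gp3" on EKS)."""
    out = []
    for m in manifests:
        m = copy.deepcopy(m)  # leave the backed-up manifest untouched
        if m.get("kind") == "PersistentVolumeClaim":
            sc = m.get("spec", {}).get("storageClassName")
            if sc in mapping:
                m["spec"]["storageClassName"] = mapping[sc]
        out.append(m)
    return out

@contextmanager
def quiesced(db):
    """Database hook: pause writes before the snapshot and resume them
    afterwards, so the snapshot is application-consistent. `db` is any
    object exposing quiesce()/unquiesce(); for MySQL these might wrap
    FLUSH TABLES WITH READ LOCK / UNLOCK TABLES."""
    db.quiesce()
    try:
        yield           # take the volume snapshot inside this window
    finally:
        db.unquiesce()  # always resume writes, even if the snapshot fails
```

Used together at backup and restore time, something like `with quiesced(db): take_snapshot(...)` guarantees consistency, and `transform_storage_class(...)` adapts the captured manifests to the destination cluster.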
And ideally it should be something that's easy to try and install, something you can run in your own environment for a period of time to see if it works for you.

Trilio accomplishes all of these things. TrilioVault for Kubernetes has all six of these capabilities, and it is the top solution for cloud-native data protection. Thank you, everyone, for your time today. I'd like to mention that Trilio will be at the upcoming KubeCon trade show in Detroit in the third week of October. I'll be there myself at the Trilio booth, with some exciting demonstrations of new features we're releasing at KubeCon. So come out and see us. And if you like what you've heard today about our product and would like to try it for yourself, go to Trilio.io, where you can request a demo or download a license to try it in your own environment. Thank you, everyone.