Welcome to Cloud Native Live, where we dive deep into the code behind cloud native. I'm Annie Talvasto, and I am a CNCF ambassador, as well as a product marketing manager at CAST AI, a Kubernetes cloud cost optimization company. A very warm welcome on my part. Every week, we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. Join us every Wednesday at 11 AM ET. This week, we have the pleasure of having Alex Chircop — hopefully I'm not butchering your name too much; you can give the correct pronunciation soon — here with us to talk about a very exciting topic. Also, join us for KubeCon + CloudNativeCon North America Virtual, October 11th to 15th, to hear the latest from the cloud native community. As always, this is an official live stream of the CNCF, and as such is subject to the CNCF Code of Conduct, so please do not add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, be respectful of all of your fellow participants and presenters. With that, I'll hand it over to Alex to kick off today's presentation. Thank you so much, Annie, and good morning, good afternoon, good evening, wherever you're watching. Really excited to be presenting today — and hopefully we won't have jinxed the demo gods, and my demo will just work. Okay, if we can switch to the slides. Brilliant. So I'm going to be talking today about Kubernetes and persistent data and stateful workloads, which I'm sure is a very common pain point in many use cases when using Kubernetes. Before we start, a little bit about myself: my name is Alex Chircop. As you can tell, I'm very passionate about storage. I'm the founder and CEO of StorageOS, where we're building a software-defined cloud native storage platform.
But I also have two hats on: I'm a co-chair of the CNCF TAG Storage, which was formerly the CNCF Storage SIG. My background is in engineering and infrastructure; I did that for 25 years before the startup bug got me and propelled me in this direction and adventure, which is pretty cool. As always, we'll try and keep this interactive, so if there are questions, please feel free to stick them in the chat and I'll try to answer them as we go along, or at an opportune moment in the presentation. I'm going to do a little shameless plug with my co-chair hat on: a lot of the topics we're talking about today are the sort of topics and technical work that we do as part of TAG Storage in the CNCF. We meet every couple of weeks, all the calls are open, and membership is open. So please feel free to join the calls, contribute, join the mailing lists, and help us build our storage community, which I think would be really beneficial. Okay, with that plug out of the way, just to set the scene, I'm going to talk a little bit about the journey to cloud native that we see — a lot of organizations begin small when they first adopt this new paradigm. And really, this is kind of obvious, but developers starting off with containers is the first step. The big thing that containers do is break that lock-in to individual servers: we now have portable code that can run anywhere. And by allowing code to run anywhere, you enable the ability to automate it. Of course, that's where Kubernetes comes in. Kubernetes is now the de facto container orchestrator that allows the automation of applications in just about any environment. In fact, you can think of Kubernetes as a layer that abstracts the infrastructure and provides developers with a way of composing what their application needs.
So developers can now say: my application is formed of these containers, and it needs this amount of CPU, this amount of memory, and these sorts of networking requirements — and Kubernetes can just go ahead and make that happen. It can also automate not just the deployment, but also scaling, healing, and a variety of other advanced features too. So when we talk about Kubernetes, how are organizations doing it? There's a mix of different things in play, and I'm showing this to illustrate the diverse ways of doing Kubernetes that we see in real life, because Kubernetes is this abstraction layer for infrastructure. There are all of the self-managed and common distributions — products like Rancher and OpenShift, and even simple tools like kubeadm — where we can provision and manage our own Kubernetes clusters. Of course, there are Kubernetes services available in all the public clouds, and there are more and more managed service providers, of all sizes, offering managed Kubernetes services too. And you also have environments running on laptops. The nice thing about using Kubernetes is that you can have the same bit of YAML that defines your application, and whether you're running it on your laptop, on-premise, on big bare-metal environments, in the clouds, with managed service providers, or in fact using any number of the certified CNCF distributions, you get the same user experience and the application can run everywhere. So, talking about the applications themselves — and I'm quoting a Datadog survey here on the top containers running in people's environments — we see that there are some platforms which don't require state and some which are ephemeral. Most end users start with some sort of stateless or ephemeral workload — perhaps NGINX, perhaps Redis.
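As an illustration of the "composing what your application needs" idea Alex describes — containers plus CPU, memory, and replica counts in one portable definition — a minimal Deployment might look like the sketch below. The names and resource values are illustrative, not taken from the talk:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2                  # Kubernetes keeps two instances running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        resources:
          requests:            # the "compose CPU and memory" part
            cpu: 250m
            memory: 128Mi
```

The same file applies unchanged on a laptop cluster, a managed cloud service, or bare metal — which is the portability point being made here.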
Something like NGINX is really easy to deploy without storage — it's easy to create and delete instances of it — and the same sort of thing applies to Redis in very simple configurations. However, we'll talk a little bit about why applications need state; in fact, I think all applications need to store state somewhere, so that's the spoiler. The key thing here is that ephemeral workloads still have a problem: they always need to refer to something that stores data somewhere, and that can be a service, a database, an object store, or a file system. And therefore, what we end up seeing is a lot of legacy environments, whether it's simple EC2 instances running in your cloud or VMs running alongside your Kubernetes clusters. What this means is that even the applications which can run ephemerally tend to run in a less optimal way, because they can't use a storage system that would otherwise be available to them. For example, recovery times for these applications can be longer, and it can take a lot of time to warm up systems like caches — Redis, for example. So you end up with a ton of scaffolding put around these systems, and we start detracting from the benefits that Kubernetes provides. For example, one of the workarounds we commonly see is removing the ability of Kubernetes to dynamically place application workloads based on capacity or utilization of the nodes, and instead tying applications down to individual nodes. And this is a really big challenge, because the whole point of Kubernetes is to abstract away the infrastructure. But storage, to a large extent, still is something that you present and bind to a server. What we need to move away from — and what we need to start thinking about — is how we make storage composable, and how we make storage bound to the application.
Because at the end of the day, it's the application that needs to move around; it's the application that needs to be dynamic. We've done all the work to make an application portable by containerizing it, and the storage now needs to be able to follow the application. So here's my premise — and it's not always obvious, and it's not always a popular statement, but I'll go ahead and say it: I think all applications store state somewhere. Even the simplest of applications will store data in a file, a database, an object store, a key-value store, a message bus, a streaming system, or something. There's always going to be something there. The question, then, once you're using Kubernetes, is how you take advantage of Kubernetes to actually automate the storage. And the answer is cloud native storage. There's obviously a wide variety of options here. What we're going to focus on today is software-defined options, because we talked about all the different ways people can use Kubernetes — on laptops, managed services, cloud providers, on-prem, et cetera. If you've made your application portable, if you've made your application composable — you've got composable memory, compute, and networking — why wouldn't you also want composable storage, and portable storage that's available in every environment? That's where software-defined cloud native storage comes in. And we'll take it one step further: in much the same way that developers can now compose what they need in terms of CPU, memory, and network, developers can also compose what they need from the storage, and have that storage be application-centric.
Fundamentally, it allows developers and DevOps teams to move any of their applications, take advantage of Kubernetes for any application, and specifically build anything as a service. The other effect of being able to compose this is that you can now create a database as a service, or say a message bus as a service — or even, with some of the CNCF projects like KubeVirt, for example, where Kubernetes can actually manage VMs, you can create infrastructure as a service. In fact, we certainly have clients managing VMs in that sort of environment. So what we're saying with software-defined cloud native storage is: treat persistent data like we treat networking technologies. And this is kind of a given, right? In all of the different environments, you have a variety of CNI providers, which effectively give you software-defined networking systems that provide meshes, service discovery, routing, and services in Kubernetes. All of those services run natively as DaemonSets, and people just don't think about it — it's one of those things that's just there. What we're basically saying is that you can have the same sort of thing with software-defined storage. Of course, StorageOS is one of those, but again with my CNCF co-chair hat on, there are obviously a number of different projects in this space. Effectively, we can have an operator or a DaemonSet that operates in these environments, and what we then get is the ability to abstract the storage that's in your Kubernetes environment and provide a platform-agnostic way for applications to access it. The key thing here is you now have that portability: applications can move to any node, and nodes can fail. And that's a really important usage pattern here, right?
Because in Kubernetes clusters — if you're going to upgrade versions, patch versions, or use, say, spot instances — you generally have dynamic clusters where nodes come and go and can be upgraded on the fly. Applications just move around a lot more in a Kubernetes world, and therefore, in a similar way to how you can create a service mesh in the networking space, you effectively need the same sort of mesh in the storage space — one that allows you to access the storage from everywhere. Kubernetes has done a really good job of creating the concept of storage classes. Storage classes effectively define a way of dynamically provisioning volumes and accessing those volumes. A storage class is a very fancy way of saying: this is a name that I give to a group of volumes. It tends to refer to a driver, and almost every Kubernetes deployment nowadays will have some sort of default storage class. The nice thing here is that you can create storage classes with different services — depending, of course, on the projects and the cloud providers you're using — to do different things for different purposes. For example, you may have a storage class that you use for development workloads, where perhaps you're not actually interested in availability or replicas, but just in making sure that the data is available across all of the nodes. You might have a production system where you want to focus on availability, and you want data to be replicated across different nodes in the cluster to protect against, say, disk failures or node failures. You may use storage classes to define a security level, where perhaps you have certain RBAC rules, policies, or encryption enabled.
And you might also have storage classes that affect things like data redundancy or data compression capabilities — say, for archive data which is not often used, or is very cold. Just as an example, I've got a storage class listed here on the left. The storage class, as I said, is really quite a basic thing — a small piece of YAML, typically — where you have a name; in this example, the name is production. You'll have a provisioner, which refers to a CSI driver. CSI is the Container Storage Interface, which Kubernetes uses as a standardized API to talk to a variety of different systems, and at present count there must be 50 or 60 different CSI drivers out there. You'll then have a number of parameters that might define things like secrets, or the number of replicas, or things like that, which might be specific to a certain storage driver. But effectively, that's where the definition of storage stops, because from then on, applications can just use persistent volume claims. A persistent volume claim is effectively just a way of saying: I'm an application, I want data which is persistent and stateful, and I want to give it a name which I can then reference in my applications. So in this case, the persistent volume claim is the box on the right, and we're creating a PVC called mysql-pvc — presumably for a MySQL database. In general, the only thing you'll need to specify is something simple like the size, because everything else gets inherited through the storage class — in this case, the production storage class defined on the left. And the thing is, using these capabilities, you can do a lot of advanced things. Some systems, for example, support the use of encryption and have automation with Kubernetes secrets or external key management services to automatically encrypt the data.
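Since the slide itself isn't visible in the transcript, here is a hedged reconstruction of what a "production" StorageClass and its matching PVC typically look like. The class name, PVC name, and size come from the talk; the provisioner string and the `replicas` parameter are assumptions (parameters are always driver-specific):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: production            # the name referenced by PVCs
provisioner: csi.storageos.com   # assumed CSI driver name
parameters:
  replicas: "2"               # driver-specific: replicate data across nodes
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  storageClassName: production   # everything else is inherited from the class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi            # often the only thing an app needs to specify
```

The division of labor is the point: platform teams define classes once, and application teams only ever write the short PVC at the bottom.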
Typically that's done through the use of some sort of labels or other parameters, as you can see in this example here. Applications — like MySQL in this case — can rely on persistent data, and they'll just continue to run. You can see the scaffolding and those legacy environments fading into the background, and Kubernetes coming forward so you can use all its power here. If you look at the example YAML on the right, you can see an example of that MySQL database and what it would take to run, so I'll talk you through it very quickly. Here we have a really simple definition where we're creating a MySQL instance using the MySQL container. We're defining a mount point, which is effectively a Unix path within the container namespace that's going to mount a volume, and that volume is called mysql-data. Then we're saying that the volume mysql-data is actually to use the persistent volume claim called mysql-pvc. So what happens in this instance, when the container is scheduled, is that the attach request is issued via the CSI API, Kubernetes attaches the volume to the node where it needs to run MySQL, and that gets mounted and put into the container namespace. Effectively, as far as the MySQL instance is concerned, it's accessing a local volume — but of course, that volume is actually persistent and is available across container mounts. And if the demo gods smile down on me, I'll show a little demo of running a MySQL image in just that fashion in a minute. Another thing worth pointing out is that volumes in Kubernetes can be read-write-once or read-write-many. Read-write-many volumes effectively allow a volume to be used by multiple pods at the same time on different nodes.
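The mount-point-to-PVC wiring just described can be sketched as follows — a hedged reconstruction of the slide's pod spec, with image tag and pod name as assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:8.0
    volumeMounts:
    - name: mysql-data            # the volume name referenced below
      mountPath: /var/lib/mysql   # looks like a plain local path to MySQL
  volumes:
  - name: mysql-data
    persistentVolumeClaim:
      claimName: mysql-pvc        # the PVC defined via the storage class
```

When this pod is scheduled, the CSI attach/mount sequence Alex walks through happens behind the scenes; MySQL itself never knows the difference.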
Sometimes this is implemented via NFS, but there are a variety of different file systems which provide these services. One of the key things here is that there are many applications that benefit from having a shared volume they can refer to. Perhaps it's sharing some common reference data or some common config. Sometimes it's just using a file system as a message bus — as horrible as that sounds. You often have environments with a workflow of transforms, for example, where one application hands over to another across a shared file system. These sorts of workloads are very common in many Kubernetes environments, and it's a very effective way of doing it. And of course, you can also unlock a lot of additional functionality using persistent storage. For example, Redis becomes more than just an ephemeral cache: with persistent storage, it becomes a full-blown database, and that enables a whole suite of different and advanced use cases. The other thing we see more and more now, with the use of GitOps, is the ability to have a standardized way of deploying applications across your different Kubernetes environments. For example, as we said earlier on, you can have a variety of different Kubernetes distributions — some might be on-prem, some in the cloud, some on laptops, and so on. Well, you can have storage classes with the same name in each of those different environments, but defined with different specifications as needed by those environments. So you can have the same piece of YAML to start the same database, for example, and have no replication on your laptop, have replication for production availability when it's on-prem, and maybe add encryption if you're running in the cloud.
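The same-name-different-spec pattern can be sketched like this: two StorageClass definitions, one applied per cluster, both answering to the name the application's YAML expects. The provisioner and parameter names here are illustrative assumptions, not from the talk:

```yaml
# Applied in the laptop/dev cluster: no replication, no encryption.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard              # same name in every environment
provisioner: csi.example.com  # hypothetical driver
parameters:
  replicas: "0"
---
# Applied in the production cluster instead, under the same name.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: csi.example.com
parameters:
  replicas: "2"               # replicate for availability
  encryption: "true"          # driver-specific encryption flag
```

Because the application's PVC only ever says `storageClassName: standard`, the same GitOps-managed manifest deploys unchanged everywhere.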
Those sorts of things make it that much easier for CI/CD environments, especially when managed by GitOps processes, to evolve in those spaces. Okay then — that takes me neatly onto the demo. This is the bit where things get a little scary. I will stop sharing that screen and share my terminal — can you see that okay? Yes, I can see it very well. And let's hope the demo gods will be kind today. Indeed, yeah. If something goes wrong, I'll talk you through it. And as usual, just feel free to ask questions at any point. So, just for reference, I've aliased k to kubectl, because it's easier to type. What we have here is a three-node Kubernetes cluster. I'm going to use K9s, which, if you haven't used it before, is an amazing tool for exploring and managing your clusters. Here I'm looking at the kube-system namespace, which obviously has things like Cilium running in there — the DaemonSet for the networking. You'll see other things like kube-proxy and CoreDNS, for example, which are the services there. And in this case, I've got a StorageOS DaemonSet running in there to provide the cloud native, software-defined storage capabilities. We'll switch to the default namespace instead. I just want to get a list of the storage classes. In this case, we've got a storage class conveniently named fast. Again, similar to the description we did when we were looking at the slides, you can see the storage class is a really simple definition: it defines a CSI driver, we see a bunch of parameters, and that's really it. And if we switch back to K9s and look at the running systems, you'll tend to see something like a CSI helper here, which has a number of different CSI functions to do things like provisioning and attaching volumes. That's effectively the API endpoint that Kubernetes talks to when looking to provision a volume.
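For readers following along at home, the exploration Alex narrates roughly corresponds to this command sequence (a sketch — it assumes a running cluster with a class named `fast`, as in the demo):

```
alias k=kubectl                  # the shorthand Alex mentions

k get storageclasses             # list classes; here one is named "fast"
k describe storageclass fast     # shows the CSI provisioner and parameters
k -n kube-system get daemonsets  # CNI, kube-proxy, and storage daemonsets
k -n kube-system get pods        # the CSI helper/controller pods live here
```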
If you have a look at docs.storageos.com, there's a whole section of use cases where we've put together a number of simple examples covering MySQL, Postgres, Redis, Kafka, Jenkins, even KubeVirt. Those are some fun things to try if you want. Today, I'm going to be looking at a MySQL demo. What we have in our MySQL demo is a little bit of YAML that defines what we want out of the MySQL database. First, MySQL will have a service account. We have a MySQL service that uses port 3306 in this case and allows us to access MySQL transparently within the environment. There's a little bit of config for MySQL, and then we define a StatefulSet. A StatefulSet is effectively one of the management controller objects that Kubernetes supports. A StatefulSet is what's used when we're defining stateful workloads, and Kubernetes does a good job of making sure that stateful workloads have extra functionality — for example, it protects stateful workloads from running as multiple instances, and it protects them from partition events and things like that. That's one of the things that differs between StatefulSets and a standard deployment. And what we can see with the StatefulSet, similar to the example we were just looking at, is we have a volume mount called data, mounted within the container at /var/lib/mysql, and we have the volume claim template, where data uses the fast storage class that was created earlier, and we're saying we want a five-gig volume. So what I'll do is create that MySQL workload. I'll just switch to the default namespace so it will be obvious. Okay, so we created them. We can see the containers creating: there's a client container and the MySQL database container, which is just creating now.
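The StatefulSet Alex describes — a `data` volume claim template against the `fast` class, mounted at /var/lib/mysql — would look roughly like this. This is a hedged sketch assembled from the details in the talk; the labels and image tag are assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql          # the headless service mentioned above
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:       # one PVC per replica, named data-mysql-0, ...
  - metadata:
      name: data
    spec:
      storageClassName: fast  # the class from the demo cluster
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 5Gi
```

The `volumeClaimTemplates` section is what distinguishes this from a Deployment: each replica gets its own stable, named PVC that survives pod restarts.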
What we can see is that the MySQL database has created a persistent volume claim called data-mysql-0, and if we describe the PVC, we can see the persistent volume being provisioned dynamically, automatically attached, and mounted. If we look within the MySQL container — so I'll just start a shell within that container — we can see that Kubernetes has mounted the volume into the MySQL container's namespace at /var/lib/mysql. So as far as the pod is concerned, it's just running with a local volume; it doesn't know that it's a persistent volume. It's completely abstracted — just like another local file system. So what I'll do now is show the databases. Those are just the standard databases that we'd have in a simple MySQL system. What I'm going to do is create a database, and I'll go with something more creative than "alex" and call this one cncflive. Now, when we show the databases, cncflive is listed there as a database. So we'll start by doing something pretty drastic. In a normal environment, if we deleted the StatefulSet, we would obviously lose all the data, because it would just be ephemeral, and the database we just created would be gone for good. But what we'll do is delete the StatefulSet, and we can see the database terminating within K9s. What we can see, though, is that if we list the PVCs, the PVC is still there and still available, even though it's no longer attached to any workload. If we create the workload again — and that took a couple of seconds; I think it's just downloading the container — and we go to show the databases, the database is still there. We can see that the data has persisted across restarts.
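The delete-and-recreate demo above can be sketched as a command sequence. This assumes the StatefulSet and pod names from the demo (`mysql`, `mysql-0`) and a cluster to run against:

```
k get pvc                      # the StatefulSet created data-mysql-0
k describe pvc data-mysql-0    # events show dynamic provision/attach/mount

k exec -it mysql-0 -- mysql -uroot -p
#   mysql> SHOW DATABASES;
#   mysql> CREATE DATABASE cncflive;

k delete statefulset mysql     # the pod terminates...
k get pvc                      # ...but the PVC and its data remain

k apply -f mysql.yaml          # recreate the workload (assumed filename)
k exec -it mysql-0 -- mysql -uroot -p -e 'SHOW DATABASES;'
#   cncflive is still listed — the data persisted
```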
Now, just to take that demo further — and this is a really simple and very boring demo, but it does show the flexibility and the power of having that cloud native storage — and just to prove that I'm not actually making this up or cheating in any way, we'll cordon the node that the workload is running on. This will give us an idea of what availability looks like. And I'm actually going to terminate that pod now. So that's been deleted, and what we'll see now is that Kubernetes, because it's a StatefulSet, will go and recreate the workload. You can see it was previously running on f3 and now it's running on f8. The container is just restarting — probably just downloading. There we go, and it's running again. And if we show the databases, we can see the databases are there. So that seems like a really simple and boring demo, but effectively what happened there is: the database was shut down, it restarted automatically on another node within seconds, and it continued to access the same persistent data it had before. The service IPs were automatically redirected to the new node, and the client continued to be able to connect to the database and access the data. So effectively, you have a fully HA service, automated with the power of Kubernetes and persistent volumes — and this is something which is available in all of your clusters today. So I'd strongly suggest that you go out and try it. And if you have any questions, I'm happy to answer them. Perfect. I think the boring and simple demos are usually the best as well. So now is the time for the audience to ask questions, if I got that correctly. Ask away, people, and leave your questions in the chat area so we can get some conversation going as well. Waiting — looking forward to the questions from everyone. Indeed. Yeah, and I think actually the demo gods were behaving very nicely, so it was all good.
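The node-failure portion of the demo — cordon, kill the pod, watch it reschedule with its data — corresponds roughly to these commands (node and pod names are from the demo; substitute your own):

```
k get pod mysql-0 -o wide      # note which node the pod is on (f3 here)
k cordon f3                    # mark that node unschedulable
k delete pod mysql-0           # simulate a failure

k get pod mysql-0 -o wide      # the StatefulSet recreates it elsewhere (f8)
k exec -it mysql-0 -- mysql -uroot -p -e 'SHOW DATABASES;'
                               # same databases: the volume followed the pod
k uncordon f3                  # restore the node afterwards
```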
So far so good. I've had so many instances where broadband played up or the VPN stopped working. I know exactly the feeling. I think every time I do a demo, if there's something that needs to spin up — WordPress or anything that takes a few minutes — when I try it out before the demo, it takes, let's say, one to two minutes, and then during the demo it always takes seven. So typical, every time. Indeed. Yeah, but while we wait for the audience questions, maybe I can ask a few to get the conversation started. Absolutely. Yeah — so you mentioned in the beginning that there are a lot of CNCF projects in this space doing great work and doing interesting things. What are your favorites, and why? So, that's a very good question. When we talk about persistent storage in the CNCF, it actually covers quite a wide variety of different technologies, because persisting data can be done in any number of ways. The most obvious thing is volumes, where we have block stores and file systems, but we also have a huge number of systems which are accessed via APIs — databases, key-value stores, object stores, for example. And so one of the first things we did as part of the SIG, which is now the TAG, was to create a cloud native storage white paper that defines all of those different options: both the data path of how you access those different systems, and also the control plane management and how you automate things like dynamic provisioning and access. And one of the interesting things here is that, for the first time ever, developers actually get to choose what storage systems they want to use. So it's more complicated than you'd imagine — there are obviously a lot of different options available for different use cases.
And so we encourage users to understand what attributes their application requires from its storage system, and we defined a number of attributes — like availability, performance, durability, and data protection — which can affect what you need out of your storage system. Some of the projects that we have in TAG Storage include things like etcd, which is obviously a key-value store, used as, I guess, the brains of every Kubernetes cluster out there. There are projects like TiKV, which is another distributed key-value store. There are also projects like Vitess, which came out of YouTube and is a distributed database, for example. And we're actually talking about some of these things, and some of the different storage attributes, in our KubeCon presentation — I guess just over a month away. So attend that as well if you want to hear more about those different projects. Perfect — a nice plug there as well for KubeCon and CloudNativeCon coming up in a month. Yeah, and I think there's an audience question, which is very exciting. I'll just read it out loud so you can get to answering. Will you share some observations, storage preferences, or recommendations for distributed storage in Kubernetes? I'd especially be interested in anything related to multi-cluster Kubernetes persistence. For example, do you tend to prefer application-centric storage, where the method of persistence is tailored for the app, as opposed to general-purpose file system or block storage? Okay, so there's a bit to unpack there. It's hard to make a recommendation one way or another, simply because there are lots of different systems available, optimized for different use cases. Some systems might be optimized for latency and transactions; others might be optimized for sequential throughput and analytics, for example — and those would be very, very different systems.
So it's hard to make a generic recommendation. The reality is that there are a number of different file systems, software-defined storage systems, object stores, databases, et cetera, that fit different use cases. So, more than anything else: understand the use case. That said, application-centric storage, I think, is the key. The point is that if you have an application and you want to be able to compose it, you need to actually link it to the storage. In Kubernetes, as we discussed today, we have the concept of volumes, and that's probably the most mature functionality. But application-centric storage can also refer to things like object stores, and there's now the COSI initiative, which enables the orchestration of things like object store buckets and access, and can define those sorts of access methods as well. So although Kubernetes started with volumes, I think we're seeing extensions into different areas. And of course, there's also an explosion of operators, using different operator frameworks, to provision things like distributed databases as well. In terms of multi-cluster, that's certainly a fairly immature area, but what we're seeing in a lot of environments is that customers — enterprises and organizations — are deploying a larger number of smaller clusters, perhaps clusters for specific applications or specific projects, rather than these huge, multi-tenant, big scalable clusters. So I think, more than ever before, there is the need — and therefore projects will be working on this — to provide the capability to consume storage across clusters, but also to replicate and move data across clusters.
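To make the COSI mention concrete: the idea is that object storage buckets get the same claim-based workflow as volumes. A very rough sketch of what a bucket claim looks like under the COSI alpha API follows — field names are based on the v1alpha1 spec at the time and may well have changed, so treat this as illustrative only:

```yaml
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
  name: analytics-bucket
spec:
  bucketClassName: standard-buckets   # analogous to a storage class
  protocols:
  - S3                                # requested access protocol
```

The parallel with PVCs is deliberate: an admin-defined BucketClass plays the role of the StorageClass, and the application only names the claim.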
And we're also seeing some work being done in hybrid environments, where we're looking at the ability to share storage between Kubernetes and, say, traditional systems, whether it's because those traditional systems haven't yet made the transition into Kubernetes or because they can't be migrated for whatever reason, perhaps because they're using some old code or something. So I think that is always going to be a factor. When it comes to APIs versus volumes, as sort of the last bit of that question, again, I don't think there's a particularly good answer for that, in the sense that even if you're gonna persist storage, say, with a key value store, or you're gonna persist storage with an object store, ultimately that object store is going to be using a volume or file system at some point in the backend. So it kind of depends on where you are in the platform owner stack, right? If you're the person responsible for building the database as a service, you probably are going to want to focus on the volumes, and if you're the person who wants to consume the database as a service, you probably only care about the database. And so really, that answer is kind of conditional on where you sit in the stack and what your focus is. Perfect, a very informative and good answer. Hopefully QR Korea got what they wanted out of that. Hopefully I'm pronouncing a very difficult name correctly here, again. Anyone else, if you have any questions? Now is the time to put them in the chat as well. We welcome every question, and all the questions so far have been really nice questions. Actually, a few questions, I think, and one comment in there. Indeed. Yeah. Maybe while we see if anyone else has anything to ask, I have another question I'm always interested in, because I think these kinds of discussions usually focus on what's happening currently in the CNCF landscape, what's happening in these projects and everything.
So where do you see the future of all of these projects, as well as storage in Kubernetes, going? Where do you think it's going to be in one, two, a few years from this point onwards? So I mean, that's, okay, that's an interesting question. So I think what we're seeing is a move to data services and data management. You know, I kind of say this a lot, and people in my team roll their eyes every time I say it, but I strongly believe cloud is not a place, in the sense that what I think people want out of cloud environments is the on-demand consumption model, the self-provisioning, the automated deployments and automated operations. And I think you're able to get that now through a number of different services, not least of which, of course, is Kubernetes, because effectively, you know, Kubernetes gives you that composable environment which can be running everywhere from your laptop to big bare metal boxes to VMs. So what I think is we'll see a lot more focus on the requirements for application-centric storage. We'll see a lot more focus on data mobility and the ability to move applications between different environments. So for example, a very common pattern that's coming up nowadays is being able to develop on-prem and deploy in the cloud, or develop in the cloud and deploy on-prem, for example. We'll also see the use of storage becoming key in diagnostics and other debugging purposes. So for example, the ability to have copies of data which are used for analytics or diagnostics, separate from the production environment. And I think we'll also see the emergence of more mission-critical services. So, you know, the concept of cloud-native disaster recovery.
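The diagnostics-copy pattern mentioned here maps onto the CSI snapshot API: take a point-in-time snapshot of a production volume, then clone it into a fresh claim for analytics. A sketch, assuming a snapshot-capable CSI driver, a VolumeSnapshotClass named `csi-snapclass`, and an existing PVC named `prod-data` (all three are assumptions):

```yaml
# Point-in-time snapshot of the production volume.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: prod-data-diag
spec:
  volumeSnapshotClassName: csi-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: prod-data   # assumed existing PVC
---
# New claim restored from the snapshot; mount this one for
# analytics or diagnostics, leaving production untouched.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prod-data-copy
spec:
  dataSource:
    name: prod-data-diag
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi   # must be at least the snapshot's source size
```

The production workload keeps running against `prod-data` while the copy serves the diagnostic job, which is exactly the separation of environments described above.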
And in fact, that is a document that we've recently been working on in the SIG and just published, which covers the concept of, you know, using the automation and using the composable environments to actually create a distributed system which crosses failure domains and is able to automatically survive and have quick recovery processes across all of those environments. So there's a lot of exciting things coming up for sure. Yes, sounds absolutely lovely. Looking forward to the future as well then, for sure. And there's a really lovely comment from Hemanskotha saying thanks for the nice presentation. I very much agree. Thank you so much for a really wonderful, information-packed presentation. And also, Linux Pizza Cats says hi. Hi back to you, Linux Pizza Cats. Hi, Linux. So are there any other questions from the audience? Now is one of your last moments to shoot them over if there's anything on your mind. Alex, do you have any final comments, words, things to mention? The one thing I'll say is storage is kind of becoming ubiquitous in Kubernetes. And the whole concept of, you know, not having stateful workloads or feeling afraid of stateful workloads, I think that's something that should just go away for good. Once you see the benefits of the automation in Kubernetes, you obviously want to have that automation all the way down to every point in your stack, including the storage. And that kind of enables you to build anything as a service and to move stateful workloads from traditional environments into Kubernetes as well. I think the other key thing here is that for the first time, like never before, developers and DevOps teams have these systems, right? Which is... I think we're having some technical difficulties. Oh, you're back, perfect. Yep, that was my broadband connection trouble. Thankfully it didn't happen during the demo. I'm so sorry.
No worries, it happens, I think, every time with these things. I was just saying, to finish off the comment before we wrap up, that developers can use storage like a superpower, because storage can enable so many use cases, whether it's protecting your data, creating highly available applications, or providing the ability to have data mobility and things like that, which effectively they now have the ability to choose on their own, because most of these systems are software defined and effectively can be deployed everywhere. I'll just end on that note. Perfect, I think it's a really wonderful note to end on. And since there weren't any immediate new questions, let me just wrap things up for today. So thank you so much, everyone, for joining the latest episode of Cloud Native Live. It was really great to have Alex talking about Kubernetes and persistent data, the bridge for legacy applications. Thank you so much for being here. Yes, and we also really loved the interaction, the questions from the audience. Thank you, everyone, for commenting, attending and being here. We really bring you the latest Cloud Native code every Wednesday at 3 p.m. Eastern. So next week, we will have Jason DeTiberus presenting Building an HA Control Plane for Tinkerbell with kube-vip. Thanks for joining us today and see you next week. Thanks, everyone. Bye.