Hello guys, please welcome our next speaker, Steve Watt from Red Hat.

Hi folks, I'm going to be presenting today on the future of persistence and stateful applications in Kubernetes. This talk is primarily focused on where things are heading and the problems that are going to be solved further on. For context setting, I'll provide some overview of how things work today. But if you want really in-depth sessions: Jan Šafránek, Jan, can you put your hand up quickly? Jan Šafránek from Brno is going to be presenting tomorrow on Kubernetes and OpenShift persistence, and Brad Childs, Brad, will you put your hand up? He's also presenting tomorrow on the same topic. They've split it up to cover different aspects, but they'll be giving a deep dive on how all of this works today.

So just before I start, could I ask who here has used Kubernetes before? Okay, quite a lot. And who here has actually submitted a Kubernetes claim for storage, a persistent volume claim? Okay, great. That's helpful.

What I'm going to do is start by framing the presentation around how I, and a lot of folks, view Kubernetes, and the goals of the Kubernetes project. Then I'll cover how Kubernetes works today, and then focus on the problems we're going to tackle to fulfill that vision, and everything storage needs to deliver for Kubernetes to get where it needs to go.

So the first thing: Kubernetes is an application platform. The reason I picked this picture is that you have containerized applications, your standard container thing, but they're actually sitting on some sort of surface. That's the way I see Kubernetes. You take your applications, you containerize them, and you need to run them on something, in something. That's what Kubernetes really is: a platform. And its design goals come largely from needing to run things in both private and public clouds.

It's intended to run all containers, both stateless and stateful. That's a key takeaway. There's been so much talk about the persistence problem with containers, and for the longest time we heard that containers are only for stateless things. But that was somewhat of a self-fulfilling prophecy: of course containers are only for stateless things if the platform doesn't have any storage features. Most container engines and orchestrators, up until about two years ago, just didn't have any features for connecting containers to different storage backends. So around January of 2015, Red Hat and Google got heavily involved in the Kubernetes project and started building out a lot of the volume plugins and the persistent volume framework to make that happen.

In addition to stateful and stateless, the other important thing about Kubernetes is that when you run your applications in it, you're scheduling them to run in a cluster. That cluster has resources, and applications are scheduled onto nodes that are well suited to running that particular application. And then lastly, there are two more points about the goals of the project. The first is to make application development easier.
There's a set of basic functional services within Kubernetes that applications can leverage, so they don't have to build those things in themselves. Your application should be getting smaller because you can use the system services that come with the platform. It also makes life a bit easier for ops, because part of the cloud-native ethos for building applications is that the cluster offers services that help guarantee your application has uptime. You don't need to build redundancy into your application yourself. For example, Kubernetes has primitives like replica sets that can always ensure your application is running somewhere in the cluster.

So that's one part of the Kubernetes goals, but there's another that's often not called out well, which is that the cluster is now the computer. If you look at scale-out architectures over the last five or ten years, you had Spark and Hadoop and Cassandra and all the NoSQL stuff, and you also had your traditional scale-up data management platforms like MySQL or Postgres. The thing is, you had a single cluster for each one of these: your 300-node Spark cluster and your 400-node Cassandra cluster. And if you look at the average server utilization in the average data center, if you're good, you're getting 15%. If you're badass like Twitter or someone like that, you might be getting 20 or 25%. Combine that with the fact that the average OPEX for a data center is one third of the CAPEX. Let me put that into dollar numbers: if I spend $300,000 on a rack, all the servers in a rack, I pay $100,000 a year to power and cool that thing. So the OPEX really ramps up pretty quickly over time, and the ability to get better server utilization, more applications per server, lets you shrink your infrastructure costs significantly. That's super appealing from a cost perspective. But it's also appealing from an ops perspective. What we've seen now with Kubernetes is that you can take all these applications and run them in the same cluster, and your applications are running on a cluster, not on a specific computer.

So now, having framed Kubernetes, I'm going to focus on how the storage features work within Kubernetes today to enable those goals. The first thing we tackled in the Kubernetes project was: if you have pre-existing storage volumes, how do you connect them to your containers? And so the first thing created within Kubernetes was the volume plugin framework. The easiest way to think of a volume plugin is as a storage adapter. These are just some of the volume plugins we have; I think this slide was created somewhere in the middle of last year, and a few more have been added since then. In the dark red are the cloud provider storage adapters, and in the orange are more of the on-premise storage adapters. We've got support for NFS, iSCSI, Fibre Channel, some of the industry standards, and also specific open source projects like Ceph or Gluster, et cetera.
The way volume plugins work is that in Kubernetes, when you choose to run your application in your cluster, you define it in something called a pod. A pod basically specifies: here's my container image that I want to run, and here's the stuff it needs to run. Some of those things are persistent storage. So you can specify a volume plugin for that pod that says: when this pod lands on a node, mount this particular backend storage device into the container. Regardless of which node the pod is scheduled on, when the pod lands, Kubernetes looks at the volume plugin information, or the claim information, figures out what persistent volume is associated with it, mounts that to the host, and then into the container. And incidentally, if you're interested, I've got demos; I'll share my slides afterwards and you can see this by just clicking the links. They're little MP4s.

So that was great, but inefficient, because it was a method for connecting Kubernetes pods with pre-existing storage. Folks who wanted to define new storage for new applications had to call their storage administrator and say: hey, would you create me a volume that I can use? I need it to be this big. That might take a couple of days, or at least a couple of hours. In the worst cases, it's submitting a ticket in a workflow system for the storage center of competency to go off and build that stuff for you. So we said, okay, we've got to provide a better way of doing this, and we created something called dynamic provisioning.

With dynamic provisioning, we added a set of provisioners for backend storage platforms, and they could be contributed by vendors as well. A provisioner is basically an executable that is parameterized. We have one for EBS, which I'll talk through today. EBS supports three basic volume types: provisioned-IOPS SSD, which is the fastest one and gives you an SSD disk; general-purpose SSD, which is not quite as fast; and magnetic, which is rotational media. We have a single EBS provisioner, and depending on what type you provide in the parameters, it'll go make you that thing. And we have an NFS provisioner, and there's a Gluster provisioner, and a Ceph provisioner, and a Google persistent disk provisioner. There are lots of different provisioners contributed to the project. So now you can use these to go make storage.

But we also wanted a simple and intuitive way for developers to ask for storage. So we meshed dynamic provisioning with a concept called storage classes, which you see here. Storage classes are an administrative feature where an administrator can define the storage catalog that's available to the cluster. They can say: look, I'm running in Amazon and I have EBS, and an EBS provisioner ships with Kubernetes. I'm going to configure one and arbitrarily call it gold, and in the params say: provision a provisioned-IOPS SSD disk. And then I'm going to configure another storage class and call this one silver, also using the same EBS provisioner, and provision this one as general-purpose SSD.
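For reference, here's a minimal sketch of what those two classes look like as Kubernetes objects. This uses the current storage.k8s.io/v1 API (storage classes were still beta objects around the time of this talk), and the zone value is just an illustrative placeholder:

```yaml
# "gold": provisioned-IOPS SSD (io1) EBS volumes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"    # provisioned IOPS per GiB, passed through to EBS
  zone: us-east-1a   # illustrative; use the zone your cluster runs in
---
# "silver": general-purpose SSD (gp2), not quite as fast.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: silver
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-east-1a
```

You'd submit each with kubectl create -f, and they show up in kubectl get storageclass, as in the demo below.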
And bronze: the same thing, but I'm going to call a different feature on this one, which will go make me an encrypted general-purpose SSD. And then, say, Pavel has an NFS server and we've got an NFS provisioner for that, so I've arbitrarily called that one Pavel's provisioner, or something to that effect. You can see that in this way you've got a way of creating a storage catalog, describing it, and hooking it up to the provisioning framework.

The actual workflow is: number one, the administrator defines a storage class, which is exactly what happened on the previous slide. Gold, silver, bronze, all of that is defined. Then the developer comes along, and they don't need to know anything about the EBS provisioner. They just say: I want 100 gigabytes of gold. Obviously the administrator, and Kubernetes as the place where storage classes are described, has to know what the heck gold is. But for the developer, it's as simple as submitting a claim: give me 100 gigabytes of gold. Kubernetes will go make the storage and bind it to the developer's claim. And when the developer wants to use that claim, they just reference it in the storage subsection of the pod. It's literally that simple.

So just to demonstrate this, I'm going to show you how this works in Amazon today. I'll start by showing you that in my Amazon Kubernetes cluster, I have no EBS disks. You can see there's nothing, and I'll hit the refresh button: there's nothing. Then I'll show you the definition of my gold storage class. It's super simple. Kubernetes has different object types; this is the StorageClass type. I give it the arbitrary name gold, identify that I'm using the AWS EBS provisioner, and specify io1, which is the provisioned-IOPS type, the IOPS parameter, which is 10, and the zone. Then I submit that to Kubernetes and it makes the storage class. And then I can query the catalog I'm starting to build by saying kubectl get storageclass. You see it right there: there's just one, gold. Now I'll do the same thing and define a silver. There you go, very similar: same StorageClass API type, arbitrary name silver, same EBS provisioner, and this time I specify general-purpose SSD, which is gp2, and the same zone my cluster is running in. When I query the catalog again, it shows both gold and silver. There you go.

So great, the administrative function is done; that was step one on the previous slide. Now I'll put my developer hat on and say: ah, the administrator has created all these storage classes, and I want to use one. You can see get pv and get pvc return nothing; I have zero storage defined in my cluster, and you saw I had no EBS disks. So now I'll show you how to define a claim that kicks off one of the provisioners behind a storage class and makes me some storage. You can see the API type is PersistentVolumeClaim. I give it an arbitrary claim name, myGoldClaim, specify that I'm provisioning out of the gold class, and ask for five gigabytes. So this is the developer's way of saying: give me five gigabytes of gold.
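Here's a sketch of roughly what that claim looks like. The storageClassName field is the current way to spell it; in clusters of that era, it was the volume.beta.kubernetes.io/storage-class annotation instead. The name is lowercased because Kubernetes object names have to be DNS-compatible:

```yaml
# The developer's claim: "give me five gigabytes of gold."
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mygoldclaim
spec:
  storageClassName: gold   # which catalog entry to provision from
  accessModes:
    - ReadWriteOnce        # a single node mounts it read-write
  resources:
    requests:
      storage: 5Gi
```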
So that's the definition. I'm now going to submit it to Kubernetes, which will kick off the provisioner, which will go and make the storage. Now when I query the persistent volume claims, you'll see the claim is bound to a particular persistent volume, and you can see the five gig capacity. And just to show you this isn't all smoke and mirrors, I'll go back and refresh Amazon, and you'll see the actual block device, and in the volume type you can see io1, which is what we specified gold as. The demo is a bit longer if you want to watch it later; I go into silver and some others. But that's basically how provisioning works today, and Jan will give you a deeper dive tomorrow.

So that's how everything works today. Let's stop and start talking about where things are going in the future. Just before I embark on the future stuff: we've built the basics that you need, the bottom of Maslow's hierarchy of storage needs, as I jokingly call it, and we're going to be hardening and improving that for the next six months at least before we start adding any of the major features I'm about to talk about.

Everything I've spoken about so far has been what we call in-tree volume plugins and provisioners. That means somebody has contributed the volume plugin or the provisioner to the Kubernetes codebase and had to work with the Kubernetes community to do that. And that's great, but not every company is adept at working with open source communities. There's a bit of a mental barrier that we've discovered, even though we are quite a friendly community. I don't know if there's confirmation bias in that, but I do think we're a bunch of friendly folks and easy to work with. The other issue is that if there's a bug in one of these volume plugins or provisioners, you have to wait for a new version of Kubernetes to ship to actually get the fix. That's not optimal, and it somewhat takes the power out of the contributor's hands as to when they can deliver a fix.

So one of the things we've added is flex volumes, which is an out-of-tree, pluggable interface for writing volume plugins. If you want to write a volume plugin, we've got an interface, and it really does just four things: attach and detach of a block device, if it's a block device, and mount and unmount. So attach and mount when a pod starts using the volume, and when your pod dies or you stop your application, unmount and detach. That's one way we've made volume plugins external and started building a community around that. The other half of the problem is the provisioners, which is making storage, and Jan just finished the interface for that at KubeCon last November. We already have a couple of reference implementations for it, so you can write your own external provisioners. And provisioners are even simpler: they just do create volume and delete volume.
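On the consumption side, a pod refers to a flex volume by naming the driver. Here's a minimal, hypothetical sketch; the driver name, image, and options are all made up, and the actual driver would be an executable implementing attach, detach, mount, and unmount that you install on each node:

```yaml
# Pod using a hypothetical out-of-tree flex volume driver.
apiVersion: v1
kind: Pod
metadata:
  name: flex-example
spec:
  containers:
    - name: app
      image: nginx              # stand-in application image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      flexVolume:
        driver: example.com/lvm   # hypothetical vendor/driver pair
        fsType: ext4
        options:                  # passed straight through to the
          volumeID: "vol42"       # driver's attach and mount calls
```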
And there are a number of companies already taking advantage of this stuff. NetApp is a fairly large, established player that's using these, and then there's Portworx, Diamanti, and Datera, which are Bay Area startups with storage platforms for Kubernetes that are using these.

So this is great, but as we started iterating on the external flex interface, we noticed: oh, this is similar in concept to where Docker Inc is taking their Docker volume plugins. Sometimes communities don't always work that well together; that's as much as I'll say on that topic. But this is interesting because I think there's a synergy and a potential collaboration here to drive towards a single spec for provisioning and volume plugins. I would posit that if Docker Inc, with their volume plugins and their Swarm integration for volume plugins, can arrive at a similar spec to where Kubernetes is going with flex volumes, then we'll have a ten-thousand-pound gorilla in the container storage space, because I'd say 90% of container users are using one of those two platforms for their containers. You'd basically have a dominant spec emerge out of that. This is something I'm working on this year, and I'm hopeful it will emerge. There are a couple of benefits: vendors or open source projects could contribute a single volume plugin or provisioner, and it could be used by Mesos, Cloud Foundry, Swarm, and Kubernetes, with feedback from multiple communities. Now, that's often a bad idea when an interface is still formative, but a lot of these things have been in play for two years now, so we have quite a good understanding of the interfaces and the features, and they're pretty settled. So that's one place we're going in the future.

Okay, different topic. Did anybody here go to the running Gluster and Ceph in Kubernetes and OpenShift talk earlier today? Okay, so the presenter. Nice. And about five other folks. Let me explain what this project is. Both Gluster and Ceph have been containerized, and you can actually run them in Kubernetes. The Gluster community has a project called gluster-kubernetes, and Ceph has a project called ceph-docker, which has information on how to run Ceph in Kubernetes. So you can run both of those in Kubernetes. This is back to the point I made earlier: if Kubernetes is a general platform for all applications, you should be able to run data management and persistence platforms on Kubernetes as well. I think that's the ultimate test of the persistence and stateful capabilities. This work has been in flight for over two years now, and it's far enough along that we actually have a commercial version of Gluster that runs inside OpenShift, called container-native storage. So it's productized, et cetera.

But in building that, and given that my background before container storage is in big data, if you look at scale-out data management platforms and NoSQL architectures, there are some basic patterns you see in those scale-out platforms. The first is a master-worker architecture. And then, for reads from an application client to the platform, you organize your data into tiers: a primary tier, secondary tier, tertiary tier, and a quaternary tier.
Those tiers are based on the read performance you get from where you put the data. Obviously, if you have data in memory, you get the best read performance, because it's the fastest. Latency is maybe a better word. Then, if your data can't all fit in memory, you want to put it on local SSDs, because they have the fastest direct-attached storage performance. If you can't afford enough SSDs, you put the remaining data on local rotational media. The very last option is a network disk, because network disks basically have the same performance as the secondary or tertiary tier, with a network hop added on top.

So you have these tiers, and what you see well served in Kubernetes today is the primary tier and the quaternary tier. The primary tier is often managed by the application itself, so Kubernetes doesn't get involved much. The most Kubernetes does is that when you define your pod, you say how much memory is available to it. So if you have a memory-intensive application, you make sure you have servers with lots of memory, assign a lot of memory to your pod, and then do what you would on a standard, non-Kubernetes server. But for local SSDs and local drives, you have volume plugins for directories, but zero support for managing block devices, which is something the gluster-kubernetes project needs today, and zero support for submitting claims to get access to local file systems or block devices. That is something that needs to be solved, and there's a proposal, presumably landing in Kubernetes 1.6 or 1.7, to address these issues.

I will say that I think a lot of the focus in Kubernetes on the primary and quaternary tiers is largely because of the heavy focus on the public cloud. On-premise, it's fairly common to buy a 2U server with 12 disks and have a lot of disks available to you, but not in the public cloud. In the public cloud, people say your local disks are ephemeral. Well, a lot of these scale-out architectures handle redundancy for you, so if you lose a node or even a rack, you don't care too much. The real issue is capacity: the local disks in EC2 are small, and there's not enough capacity to do anything interesting with them.
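For what it's worth, the proposal I mentioned a minute ago eventually took shape as local persistent volumes. This sketch shows roughly the form that landed in later Kubernetes releases, included here only to illustrate the gap being discussed; the path, node name, and class name are illustrative:

```yaml
# A persistent volume carved out of a disk on one specific node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-ssd-0
spec:
  capacity:
    storage: 375Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-ssd
  local:
    path: /mnt/disks/ssd0        # where the device is mounted on the host
  nodeAffinity:                  # pins the volume to the node that owns it
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1
```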
So the next set of challenges: if you look at how the Kubernetes platform is going to be used going forward, I believe that, among other reasons, people will be using Kubernetes to avoid lock-in. What I mean is that if you look at application development trends, you see two predominant ones, and definitely more of the first. The first is that people are building in public clouds. The public cloud has its provider infrastructure, with EC2 arguably the most popular. They use a Linux OS on there, and in our case we've seen growth with RHEL in the public cloud too, which is good news, but there's also Amazon Linux there, et cetera. And then what people tend to do is write quite thin applications and make heavy use of the service catalog. That's actually a real picture of the Amazon service catalog, and each line item there is an actual additional service. So you don't have to write or manage your own message queuing or your own database server; they have services for everything, which is awesome, because you can write a small application and let Amazon worry about whether your database or your message queue goes up or down. The problem is that you get massively locked in if you do that. It's super hard to move your application to Google or some other provider if, say, Amazon hikes the prices up.

The alternative approach is to build applications on Kubernetes. As I said, Kubernetes is a general platform; you can run anything on it, and Kubernetes runs just great on Google, Amazon, OpenStack, bare metal, Azure. So you can move your Kubernetes application anywhere. And Kubernetes has its own service broker and service catalog project. The ideal goal is to back the Kubernetes service catalog with the open source ecosystem: as open source projects package and build daemon sets or stateful sets, the Kubernetes primitives for deploying their application into Kubernetes, those just get added to the catalog, like a general large repo, and people can build on that. That application is portable across cloud providers, so you don't get locked in.

But here is the problem: data lock-in, not application portability. Kubernetes has created Kube Federation, where you have a federated control plane, and you can plug multiple clusters into it. You can have a Kube cluster running in Amazon, a different one in Google, and a different one in Azure, register them all with the control plane, and use the control plane to move your applications between them. Another thing Kube Federation does: if you're all in on one cloud provider, with one Kube cluster in one region and another cluster in another region, and there's some sort of problem with a region, you can use your control plane to shift your application to a cluster in a different region. So this is great.

Now look at a real application profile. I created a very simple three-tier web application, microservice style: MySQL is the backend, Nginx is the web server front end, and it has just one claim, for the database. /var/lib/mysql is mounted onto an EBS disk so that, because the container is ephemeral, you can blow the container away and not lose your database. When the database comes back up, it just remounts /var/lib/mysql from the same disk, and voila, everything's great again. All your data's in there, and you can use Federation to move your application from Amazon to Google. Provided you have exactly the same claim name, it works just fine, because Kubernetes has a layer of abstraction over the storage provider, which is the concept of the claim. This is great, but there's no data. It'll be like re-initializing your database; it'll be brand new. And arguably you haven't really moved your application, because for most people the data in the database is intrinsic to the application working. So this is something we have to tackle head-on.
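To make that database tier concrete, here's a hypothetical sketch of the MySQL pod. The claim name, secret, and image tag are made up; the point is that the only storage coupling is the claim name, which is what makes the definition portable across providers:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql:5.7
      env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:         # assumes the password lives in a Secret
              name: mysql-pass
              key: password
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql   # database files survive the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mysql-data     # hypothetical claim; keep the name the
                                  # same in each cluster and the pod moves
```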
Here are a couple of ways we're thinking about this, and we'd actually love your help and input in solving these problems. One approach is asynchronous replication: local writes go to the EBS disk, and then an asynchronous write flows over to some other disk somewhere else. Another is periodic snapshotting. If you're writing all your data to your EBS disk and you inject snapshot functionality into the Kubernetes control plane, you can snapshot the disk and do a bunch of different things with it. Snapshots are generally useful, and we actually have a proposal that's well underway. One thing you can do with a snapshot is replicate it to your other cluster: if you're planning an application migration, you could stop your application, snapshot it, and mirror or replicate the snapshot over from EBS to, say, Google's persistent disk. The other thing, if you're not trying to solve application portability and migration, is backups: you take snapshots at different points in time and back them up somewhere, maybe to another cloud provider, maybe on-premise, maybe to a different region. So those are a couple of the data management flows. I think these are existential challenges, because Kubernetes really has to solve these problems well to be appealing as an architecture. When I flip back to this slide, this is the battle ahead for open source versus proprietary: thin applications on a proprietary service catalog versus thin applications on an open service catalog.

So that is my talk. Thanks for listening. We actually have a lot of folks from the Kube storage community in this room, so I just wanted to open it up and see if there were any questions. Subendu, go ahead.

Two questions. Yeah, absolutely. So the question was about metering: being able to track storage usage and do accounting. There's no capability to do that today, but you can instrument it. Within Kubernetes and OpenShift, the way you separate and organize your applications is that you create a project in OpenShift, which is like a namespace in Kubernetes. Projects tend to be single-application things, and claims are separated by project or namespace, so Steve in project one can't see Subendu's claims in project two. Claims are always part of the namespace. So one way you could do it is to look at the claims being submitted in a namespace.

The other thing related to that is that we've added quota support on namespaces for storage. It's quite funny: a couple of times when we were talking to customers about dynamic provisioning, with the ops team in the room, the initial reaction, before we could talk about quotas, was one of pure horror. You're running Kubernetes or OpenShift in Amazon and you're just going to let your developers provision block devices and all kinds of things, regardless of tier? How are you going to control that? So we said: no, no, no, we've got quotas for that. Quotas allow you to constrain the number of claims per project or namespace and also the storage capacity per claim. So you can say: no more than five claims, and no claim bigger than 10 gig.
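A minimal sketch of what that pair of controls looks like, with illustrative numbers: the claim count and total capacity go in a ResourceQuota, and the per-claim ceiling in a LimitRange:

```yaml
# Per-namespace caps on claim count and total requested storage.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    persistentvolumeclaims: "5"   # at most five claims in this namespace
    requests.storage: 50Gi        # total requested capacity across claims
---
# Ceiling on the size of any single claim.
apiVersion: v1
kind: LimitRange
metadata:
  name: storage-limits
spec:
  limits:
    - type: PersistentVolumeClaim
      max:
        storage: 10Gi
```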
And then the other thing that's coming, since the storage class is the catalog, is to provide developers with a way to see which storage classes are available to their project. Because if Subendu has bought a really expensive on-premise storage cluster backed with SSDs, et cetera, for his group, then I shouldn't be able to come along and provision out of it just because we're both on a multi-tenant Kubernetes or OpenShift cluster. So: the ability to control who is allowed to use which storage classes.

Yeah, the closest we do is via the cloud provider. For the folks that don't know, we run OpenShift Online, and OpenShift Online wants to be able to correlate the EBS disks provisioned by its Kubernetes users with EBS disks used for other things in the cluster. So there is some metadata injected into each block device recording which persistent volume it's associated with. The idea would be to use that for soft deletes: oh, I deleted this disk. Oh, crap, it was the disk for my database. Help! So we'd be able to implement a soft-delete process where the disk disappears out of Kubernetes but lives on in Amazon for a week or two, just to avoid that scenario. So there is a way to inject metadata into the actual block device, if the backend supports it, and you can build that into your provisioner. Beyond that, you'd have to go to Kubernetes: the administrator would have to query the persistent volumes, because the persistent volume holds the volume plugin information about which real device it's connected to. That's the only way to get at it right now. I just want to give some other folks a chance at questions. Are there any other questions? Yes, go ahead.

Yes, very much so. Huamin Chen, who is a Red Hatter but also a member of the Kube Storage SIG, has a proposal around using QEMU with TCMU to hook into the KVM-based storage adapters that have been written for that ecosystem. That's one idea. The other is that there's a group of folks called OpenSDS who are trying to create a generic storage spec for hypervisors, and they want to build it in such a way that it's accessible from container orchestrators too. I would say we're very interested in finding some way to snap into that well-established ecosystem and bring it all into Kubernetes. Yes, absolutely. Any other questions? I'll go back to Subendu; he has lots of questions.

Encrypted? Yeah, I'll say it like so. The question was how we handle sophisticated encryption scenarios in Kubernetes. Right now, the only support available is that if your backend storage platform supports encryption, you can write a provisioner that supports creating encrypted volumes. Our Amazon provisioner is one example: if you create a storage class that sets encrypted to true, any volume provisioned from that storage class will always be encrypted.
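A sketch of that storage class, using the AWS EBS provisioner's encrypted parameter; the class name is illustrative:

```yaml
# Every volume provisioned from this class comes back encrypted.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  encrypted: "true"
  # kmsKeyId: <key-arn>   # optionally pin a specific KMS key
```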
Now, I know that's only a small part of the larger encryption flow, but the other thing is that we have a dearth of well-articulated encryption scenarios to go and work on. So if you do have an end-to-end flow that describes what's missing in Kubernetes, the Storage SIG would be quite interested in seeing it. Yes, go ahead.

It has been, to be honest, for a long time. That's its provenance: it's a little escape hatch for folks that absolutely didn't want to contribute an in-tree plugin. As with any open source community, capacity is finite, so you work on the most important things first. But it's popped right up to the top of our stack, and in Kube 1.6 the flex volume interface is being overhauled, with attach, detach, et cetera being improved as part of that.

So the question is: hey, I have a Kubernetes or OpenShift on OpenStack cluster, and I'm using Cinder for everything I need; is that basically my one huge flex volume? Is that correct? OpenStack on top of Kube. You like to live on the edge. So first, that's a great data point about why Kubernetes is a broadly useful application platform. But flex volumes are basically for when there is no Kubernetes volume plugin for the thing you want to do, and you want an interface so you can write the thing yourself, or talk somebody into writing it. So if there's an existing volume plugin or provisioner for your scenario, go and use that; you can use the Cinder provisioner. I haven't thought too much about that exact scenario, with that cloud provider arrangement, but basically, if there's a backend storage platform you want to use and there's no volume plugin or provisioner for it (I'm finished with my slides anyway), that's when you use flex volume to write something, or talk somebody into writing something for it. Flex volume is the strategic direction we're trying to take all volume plugins; we're trying to move everything out of tree. Does that make sense? Okay. Yes, Scott, go ahead.

I don't like hard ones. Yeah. Yeah, you can. So that's another approach. One of the things we're trying to do is solve the problem for everyone, and that's a very technology-centric approach. Another way to do this is to have your data management platform span multiple cloud providers, and whenever you need persistence, you use your data management platform rather than raw block devices or shared file systems and things, and those data management platforms all include built-in replication. So that is a way to solve it. I'm just not sure it will be appealing to everybody. I'm not saying it won't be appealing; I just don't know. That's kind of why I'm trying to foster conversation around this right now.

Yeah, right now we're at a proposal phase, where we're trying to come up with something workable. Usually what we do is have a design proposal for a couple of months, start with something small that doesn't paint us into a corner, and then go with that and improve on it. So I think that's where we're going to go with that. All right, I think I'm out of time. Thanks everybody for coming. I appreciate it.