Hello and welcome to another OpenShift Commons Briefing. This time it's on persistent storage on Kubernetes and OpenShift. It's a problem that lots of us have faced over the years, and Jeff Vance and Aaron Boyd have been working on it and have a very good solution and a lot of techniques for working with persistent storage. So hopefully they'll tell us how to solve all the troubles with persistence. And the logistics for today, as with all OpenShift Commons briefings, is we'll let Jeff and Aaron do 20 to 30 minutes of talking on this. While they're talking, you can ask questions in chat and we'll try and answer them in chat. And then afterwards we'll do a Q&A session on this whole thing. We have an hour today, so I expect with this kind of a topic, we'll probably have lots of questions. So what I will do in the Q&A is unmute you: I'll read your question if you type it in chat, and then unmute you for follow up. It works pretty nicely, and that way we get a really nice, clean introductory video for people in the future to watch. So without further ado, I'm going to let Aaron and Jeff introduce themselves and get started today. So thanks, guys. Thank you, Diane. So my name is Aaron Boyd. I'm with the Emerging Technologies Group for the Office of Technology for Red Hat, and Jeff Vance and I both work on improving persistent storage in Kubernetes and OpenShift from a complete end-to-end point of view, in that we are improving the way we debug, use, and present storage to our users. Jeff, if you want to go ahead and give an introduction, I'll start on the second slide after that. Yes, okay. I'm Jeff Vance, also working on Aaron's team in the Emerging Tech Group. And right now our focus in this group has been on usability and completeness in the OpenShift storage solutions. And we do things from improving documentation to creating pull requests against Kubernetes to add features or fix bugs, or pull requests for OpenShift itself for the same purposes. So we kind of cover a wide range of activities. Great. So thanks, everyone, for attending today. Let's go ahead and get started. So as you know, containers are ephemeral. And so what would motivate a person to need persistent storage then if they're using containers? So the first reason is that any local files written inside that container will not exist outside its lifetime. So in the event you've deployed your application within OpenShift and your container is running and producing data, once that pod terminates, your data will be lost. In addition, if you're running jobs, which are typically run in parallel for efficiency, the pods will not be able to share data between them without persistent storage giving them a common location to read from. Also, data analytics, which is coming soon in OpenShift, is one of the drivers of having persistent storage. Let's say that you're running data analytics on your logs to troubleshoot a problem within the system. You would need to read from an existing storage location in order to pull that data into OpenShift. And then the last reason is that Kubernetes, being a container orchestration system, may reschedule pods as appropriate as nodes come in and out of existence, so the pods could possibly land on a separate node from where they were originally scheduled. Therefore, we can't depend on local storage to always be where we expect it to be within the container infrastructure. So, within OpenShift, we offer a very improved user experience around storage. 
We have two native storage platforms that we offer that are tightly coupled and run well within Red Hat, both Red Hat Gluster Storage and Red Hat Ceph Storage. Kubernetes includes some volume security options with it, but OpenShift includes a lot more than that. There are preconfigured plugins within OpenShift that allow us to use different cloud providers like Google GCE or Amazon EBS or Azure. And so what we offer is that when you deploy OpenShift and you use storage, all this is contained. You don't have to install any extra things to enable persistent storage. So, consuming clustered storage. There are two very specific use cases when we talk to customers and developers about how they're going to use persistent storage. So, the first one is specific known storage. And this is commonly referred to in the community as pet storage, though that name is not well-liked. And this is really non-fungible storage, meaning that you know where it is, you want to locate it specifically either within a data center or within a node or within a cloud provider, and you don't just throw it away and get a new one. You want this storage to be within a specific location. Maybe you're sharing it within projects or teams. You manage where it's located. And as given in the analytics example, you need the specific storage volume in order to get that data. Maybe you have a legacy database that you're pulling data from that would fall under this first use case of specific known storage. And the data has business value and must be backed up. So, maybe you're finding the cure for cancer, you'll want to record that data and keep it in a safe place. There's also a second use case: generic, non-specific storage. And this is widely used for testing and for generating results that may not be kept. Maybe you're doing streaming analytics and you're only keeping parts of the data. And this is referred to as cattle storage, in that your storage can be interchanged without it affecting the way that your application runs. It's usually qualified by a number rather than a name. You're simply asking on demand for some storage; you don't care where, you don't care what kind. You just want storage to store your data at the moment. It's managed the same as normal storage. It's just not allocated the same way, and you're indifferent to where it lands. So, this would not be local storage. This would not be emptyDir or hostPath, which are part of the volumes that are also included within Kubernetes. And it's replicated rather than backed up. So, it scales up and down very easily. It's not maintained across each one of the nodes. And the last differentiator for storage that's important to understand when we talk about storage and containers is file versus block. So, block is really the raw volume of storage. It's usually ext4 or XFS. It has a single owner. So, this is not your typical shared storage. We're talking about Ceph RBD. We're talking about AWS EBS. We're talking Cinder. And actually, when you claim that storage, you're altering the underlying permissions and ownership, and we call that a takeover of the storage. And most cloud storage platforms fall underneath this. Shared file, which most people are commonly drawn to and have used in the past, is NFS and GlusterFS. So, when you look at the file system and you look at the permissions, they look the same as if you were doing an ls in your directory on Linux. The access is controlled at the file level. 
The typical POSIX permissions you're comfortable with, and the owner, group, and underlying storage permissions are not altered like they are with block. Jeff, can you take the next slide? This is Jeff. So Aaron's motivated some characteristics of storage and the need for storage, which is probably pretty obvious to everyone on the call. And now with that background of cattle versus pet storage and block versus shared storage, I'm gonna go on and describe more specific storage features of OpenShift. So OpenShift uses Kubernetes as its underlying framework, but OpenShift adds quite a bit on top of Kubernetes and that also affects storage, as you'll see later on. So the OpenShift Kubernetes framework for storage consists of a persistent volume and a persistent volume claim, and we'll go over the claims shortly. A persistent volume, known as a PV, is a global resource available to your OpenShift cluster. So it's not specific to a particular project or namespace; it is global for the entire cluster. And the idea is that persistent volumes are defined and created by a storage administrator, someone with that type of role. And a JSON or YAML file is what's gonna most typically define your persistent volume, and it's defined by an expert in that type of storage. So they know, for instance, in a Gluster trusted pool, they know the endpoints, the IP addresses of each Gluster node. For Ceph RBD storage, they know the secret, they know the admin authorization key and other features like that that a developer probably wouldn't know. They'd have to ask someone and it would slow down the process. But the storage administrator has that knowledge and they create the persistent volume, and they can create multiple persistent volumes. And for each different type of storage, like Aaron mentioned, OpenStack Cinder or AWS, NFS, Gluster, when they are defined in the PV spec, we are referencing a plugin or a storage adapter. So there's an NFS plugin, there's a Gluster plugin, there's a Ceph RBD plugin and many more. And many more are being developed as we speak. The storage vendors are all jumping in and they want their own plugin also available in Kubernetes and OpenShift. Then a developer will be able to claim that storage through a resource called a persistent volume claim, which I'll show later. The point here is once a PV is claimed, it is bound to that claim and therefore it's not available to any other claims. However, you can have more than one PV defined in your cluster that references the same physical underlying storage. So I could have 10 NFS PVs if I want. And those 10 NFS PVs could eventually get claimed, and then the pods that are using those claims would have access to the same shared storage infrastructure. Okay, next slide Aaron. Okay, so this is just a graphic but it's basically saying what I just said. You've got Kubernetes or OpenShift in the center, and on the top left you have your on-prem storage plugins or adapters, and on the bottom right you have cloud storage, and it's really just showing you a list of what we have today. And that list is growing. In fact, we just merged another storage vendor's plugin in Kubernetes yesterday. So it's a dynamic area right now. And another thing, just in case I don't say it later, is that when you get these different storage plugins or adapters with OpenShift or Kubernetes, you get them in the same single binary. 
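As an illustration of the kind of PV definition a storage administrator would write, here is a minimal sketch using the NFS plugin; the server address, export path, and sizes are placeholder values, not anything from the talk itself.

```yaml
# Hypothetical NFS persistent volume, created by the storage administrator.
# The server, path, and capacity are placeholder values for illustration.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
spec:
  capacity:
    storage: 10Gi            # size "label" used to match claims
  accessModes:
    - ReadWriteMany          # access mode "label" used to match claims
  persistentVolumeReclaimPolicy: Retain   # retention policy: Retain, Recycle, or Delete
  nfs:
    server: nfs.example.com  # details only the storage admin typically knows
    path: /exports/data
```

A definition like this would typically be loaded by the administrator with something like `oc create -f nfs-pv.yaml`, after which the PV shows up as a cluster-wide resource waiting to be claimed.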
So when OpenShift is installed in your cluster, you don't have to install the storage plugins separately. They're baked into OpenShift. Okay, Aaron, next slide. And this slide sort of motivates why there's a separation between a persistent volume and a claim to that volume. And the idea, as I kind of alluded to earlier, is that you have this storage expert creating PVs, and then you have the developer, and she just wants to get her application up and running, tested, and deployed, and may not have the in-depth knowledge about the type of storage she's accessing. She doesn't know what the NFS export is, or what server is serving the NFS, or what the IP address of the company's NFS server is, but she knows a claim. She knows some characteristics of that storage, which we'll cover shortly. She knows the access she needs. She wants read-only access, or she knows how much storage she needs. She needs two gig. And so she can claim it from that level and not worry about the storage details that the storage administrator was concerned about. And this separation of concerns is sort of the motivating factor for the persistent volume and volume claim framework that we have today. I don't know if you wanted to say anything else there. No, go ahead, next one, Aaron. I guess as the slides are advancing, Aaron will describe storage classes, and that's just another invention that facilitates the developer discovering more about the storage that she needs to access. So we talked about claims, and this is more detail about a claim. So Kubernetes and OpenShift have this concept of a persistent volume claim, and it's just a request for storage, that's it. And it can be very generic, like I just want a gig of storage and I need read access or I need read-write access, or it can be much more specific. You can say I need high-speed storage, I need low-cost storage in the Eastern United States, et cetera. So you can be fairly specific, you can be fairly generic, or you can be 100% specific, which we don't have slides to cover, but you can really say I want that persistent volume right there, that's the one I need. And so you can get to all those levels of abstraction in how you claim your storage. A claim doesn't have to be fulfilled or bound to a persistent volume. If you don't have a PV that matches the characteristics of your claim, that claim just remains in a pending state, ready for a PV to show up that will match what the claim needs. Once that happens, the claim automatically binds to the PV, and then the pod and the containers within that pod can start running. And again, as I said, once a PV is bound to a claim, that PV is unavailable for any other claims. You can have multiple pods within the same project and they can all reference the same claim, so you don't need one claim per pod. And as I said before, you can have multiple PVs describing the same underlying physical storage. And the next slide, Aaron. And this is our first slide that just shows you the OpenShift web console GUI, and it's a really nice tool. And our team's been involved with adding some storage features to the GUI. And this one's just a simple one where I can request a claim. I can create a claim and I can name the claim, and more features will be added to this screen shortly where we can do label selectors, like I want gold storage or I want high IOPS storage, et cetera. But right now we're just showing I can name my claim, and it will be in my project. 
I can describe what kind of access I need and how much storage I need. I click the create button and, on the next slide, Aaron, I'll see a list of claims that are available to me in my project. And I think this slide will also be augmented shortly, where characteristics of the claim, like I said, AWS US East zone or something like that, will also be visible. But the top claim there, oc-pvc1, I'll show you how we get to that claim in the next slide. So it's just a summary, a graphical summary of my claims. And this slide is a fragment of YAML. YAML and JSON are just markup languages that let me describe a resource or an object. And this is a YAML fragment for a pod. And in that description of the pod there are many, many attributes, but one of them is the volumes that that pod will have access to. And you can see here that we've said the volume is going to be referenced by a persistent volume claim. And the name of that claim is oc-pvc1. So we have this indirection here. I have a pod and the pod references a claim. The claim references a PV and the PV describes the underlying physical storage asset. So that's kind of our level of indirection to get from the application to the storage. And there's an OpenShift command that would let me create this pod. And then the pod will do its thing, which I think we'll cover later. Basically, as far as storage goes, the pod will be scheduled on one of the nodes in my OpenShift cluster. And then part of the pod startup process will be to run the storage adapter or the plugin. And that plugin will attach a volume, format the volume if that's needed, and mount the volume on the host that the pod is running on, automatically. The pod now has access through the mount to that storage. And then when the pod terminates, the reverse is done, it's unmounted, et cetera. And that's all done by the storage plugins. Okay, next, Aaron. So thank you, Jeff, for covering what the framework for persistent volumes and persistent volume claims looks like. We've leveraged this foundation of persistent volumes and claims, and we have added some rich features to make using storage even easier. As you noticed in the previous slides, it's a little bit intensive for both the administrator and the user to create the claims. Even though we've created a UI, there still is a usability issue, in that a user should just be able to say, I want storage, and if it doesn't exist, please go off and create it for me. And so a feature that was integrated in 3.1 as an alpha feature, and will be further enhanced in 3.3, is called dynamic provisioning. And basically this allows the user, the developer, to dynamically ask for storage to be provisioned on demand. If they have the permissions to invoke a provisioner, this is done on the fly. And today, cloud providers are the basic supporters of this in the alpha version: GCE, Cinder, and Amazon EBS. But eventually we will have provisioners for even on-prem storage. And so this bypasses the need for the storage or cluster administrator to go off and provision the storage. As Jeff had mentioned in a previous slide, a user can make a claim and a claim can go unfulfilled as pending because the storage asset doesn't exist. By leveraging dynamic provisioning, a user is allowed to request storage and have it provisioned on the fly. And it's controlled using quotas. So that allows the admin to have controls so that a developer doesn't go out and provision a terabyte of storage every five minutes on AWS EBS. 
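Going back to the claim and the pod fragment Jeff described a moment ago, a rough sketch of how those pieces fit together might look like the following. The claim name oc-pvc1 comes from the talk (lowercased, since Kubernetes names must be lowercase); the image, size, and mount path are placeholders.

```yaml
# Hypothetical claim, as a developer might create it from the console or a YAML file.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oc-pvc1
spec:
  accessModes:
    - ReadWriteOnce          # the access she needs
  resources:
    requests:
      storage: 2Gi           # how much storage she needs
---
# Hypothetical pod fragment showing the pod -> claim -> PV indirection.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: registry.example.com/my-app:latest   # placeholder image
      volumeMounts:
        - name: data
          mountPath: /var/lib/data                 # where the storage shows up inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: oc-pvc1                         # references the claim above by name
```

Something like `oc create -f pod.yaml` would create the pod, and the storage plugin then handles the attach, format, and mount steps on whichever node the pod lands on, as described above.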
So once they do this, they can automatically use the volumes within their application. Another feature that Jeff was alluding to is called storage classes. And storage classes, you can really think about as the quality of service that you're offering to your users. So an administrator again goes off and configures different levels of storage, creating a taxonomy of what the storage should look like. So they may have, you know, Ceph or fiber or iSCSI and different labels associated with that type of storage. Your fastest storage might be gold storage, in that it's the most premium storage that a user can request. Your fiber might be a little bit slower or have less capacity, therefore being silver, and then bronze might be your iSCSI. The administrator can put different features within these classes to allow a user to just select a level of service with an expectation of what they're going to get. And it's kind of a recipe. So there are many different aspects within that label that the administrator can also assign. Therefore the developer, instead of making a plain claim, actually uses what will be a new API object called a storage class to request this storage. And it will allow us to better organize the classes into a catalog, so a user can search for what they're looking for. Because typically the idea of Kubernetes and storage is that the user doesn't have to know much about it. They can just simply request it. So storage classes offer another level of abstraction with a little better description of what they need, depending on what their application wants. The last feature that will be in the next release of OpenShift is called a storage selector. So a storage selector allows an administrator to create labels, which are intrinsic to Kubernetes, on the storage to better define the characteristics of that storage. In this example I have here, we have two different persistent volumes in AWS EBS. We have one in the availability zone of East and the other in the availability zone of West. The user may care about this depending on latency within their application, or maybe they have co-applications that exist in that same geography. So they care about this specific label. They're less concerned with a certain quality of service and they're more concerned about a specific feature of that storage. Storage selectors allow the user, within their claim request, to request a specific label via that selector. So within the claim request, the snippet Jeff showed earlier, we can add a selector stanza and have it match us-east-1, and then we are matched to that specific PV. So this allows the user to be very specific about what they're doing in terms of their application and where their storage is being kept. But it still abstracts the process of creating these PVs from the user. The last feature that I want to talk about is a pretty exciting one that has just been released, and this is the container-native storage solution, referred to as Aplo within OpenShift, which is the Klingon word for contain. What this has allowed is for us to actually run GlusterFS in a containerized manner. This allows us to scale out a storage cluster very rapidly. It also allows us to better contain the security around that and gives much more control and ease for the developer. We have created a way of orchestrating this through Heketi, also an open source project, and it has full integration with OpenShift Container Platform. 
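To picture the storage selector example Aaron just walked through, a rough sketch under those assumptions might be a labeled PV plus a claim that selects on the label; the PV name, claim name, zone label, and EBS volume ID below are all placeholders.

```yaml
# Hypothetical PV carrying a zone label, plus a claim that selects it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ebs-pv-east
  labels:
    zone: us-east-1          # label the admin puts on the storage
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # placeholder EBS volume ID
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: east-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      zone: us-east-1        # only bind to PVs carrying this label
```

The claim still says nothing about how the EBS volume was created; it only asks for storage carrying that label, which keeps the admin/developer separation of concerns intact.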
So this is an exciting new feature for Red Hat, and it's also a very new concept within the community across the board for containers to have this type of storage available. Jeff, I'll pass it back over to you to talk about volume security, which I know everyone is very concerned about. Yeah, just piggybacking one other thing on Aaron's comment. One of the neat things about containerized Gluster is that your Gluster cluster can be completely separate, like you probably have today, or it can be combined with your OpenShift cluster, which they call converged storage, and share the same nodes that your OpenShift cluster is using. But anyway, with Red Hat's emphasis on enterprise needs and satisfying requirements for enterprise customers, it's not surprising that security matters. And it's also typical in the upstream community mindset that security is often bolted on or thought of later; it's just feature, feature, feature and we worry about security later. But OpenShift and Red Hat don't quite think that way. Security is important and we try to get it baked in correctly up front. And storage has security concerns, obviously. You need to be able to allow applications access to the storage that they need, and at the same time you need to prevent other users, applications, and processes from gaining access to storage that you're trying to keep them away from. And so we try to handle all those requirements within OpenShift storage, and of course we support SELinux, and you'll see more details about that in the next slide. We support POSIX permissions, and there are some differences between block and shared devices and how that's done. Go ahead, Aaron. We, meaning Red Hat, have submitted to Kubernetes part of our code that is related to security. In OpenShift, on the right-hand pane there, we call it security context constraints, and it's a very rich set of features and descriptors that confine what a pod can do, or a gateway to whether the pod can even be created or not. And once a pod is created, what is that pod allowed to do and what is the pod prevented from doing? So we have admission controllers, we have authorization, we have the concept of roles and users and groups and projects. Kubernetes doesn't even have the concept of a user yet. They do have the concept of a project, which in Kubernetes is called a namespace, but they don't have a user ID, they don't have role-based security; it's fairly inflexible right now. Now I think it's going to improve over time, but right now OpenShift is way ahead in terms of enterprise-type security. So we support POSIX permissions for shareable storage, and we support taking over devices, as you'll see next, for block storage devices. Let me see if there were any other notes I had on that. No, go ahead Aaron, next slide. I'm just waiting for it to refresh on my screen. So for SELinux, you can see the bulleted list. These are the storage plugins or adapters that are SELinux aware. At the very bottom of this slide, you can just see a fragment of YAML which has the security context attribute, and it shows SELinux options. The most common option is the level there, but you can describe a user, you can describe roles, you can describe types, and those all make up the SELinux options portion of the security context. 
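As a rough illustration of the kind of YAML fragment Jeff is describing, a pod-level security context with SELinux options might look something like this; the pod name, image, and MCS level are just example values, not the ones from the slide.

```yaml
# Hypothetical pod-level security context with SELinux options.
apiVersion: v1
kind: Pod
metadata:
  name: selinux-demo
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"   # example MCS level; user, role, and type can also be set
  containers:
    - name: app
      image: registry.example.com/my-app:latest   # placeholder image
```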
As I said, the plugins you see bulleted there support SELinux, and those plugins happen to be for block storage, and you know what that means now. What we do for SELinux, and we also do it for group IDs as you'll see soon, is we relabel the storage. We take it over, and the storage mount point on the host is the connector point, and that directory and all directories and files under it are relabeled with the SELinux label that's provided in the pod. Now the pod author doesn't need to know what that SELinux label looks like. That could be defaulted for her by OpenShift through the security context constraints. So those constraints say, hey, whether or not you can even define a security attribute in your pod, and if you can, what range you're allowed, what's legal, what's not legal, and if you're not allowed to, then the security context constraint will define a default value for you. So the developer doesn't need to worry about those things. The cluster administrator can be the one concerned with security. So just the next slide, Aaron. I think it's supplemental groups. A supplemental group is a Linux POSIX concept. Every process has a list of one or more groups, and it has a user ID and group IDs, right? And you can define what your supplemental groups are, and those are needed so that if your group ID matches the group that the underlying file is labeled with, then you have the access defined for that group. It's just POSIX permissions essentially. And again, just like with SELinux in OpenShift, the SCC, the security context constraint, is the arbitrator and decides whether you can define your own group ID, what values are legal, and what the default is if you don't define it. And in this case, we just have an ID of 1234, but it's an array, it can be a list of IDs. And this is for shared storage. So we don't take over shared storage, obviously. You don't want to have OpenShift altering attributes of your NFS or Gluster permissions. So supplemental groups are just a way of appending a group ID to the processes running in the container that's defined by the pod. And the last slide, I think, is coming next before Q&A, and it's FSGroup. Underneath the covers, it's exactly like supplemental groups in the sense that we add a group ID to the process that's running in the container. But FSGroup is targeted for block storage, and you can see the list of plugins that support the FSGroup ID; it's the same, not surprisingly, as the plugins that support SELinux labeling. And we take over the device like we did for SELinux relabeling. And so if you have a group ID defined in your pod, or one defaulted on your behalf by your storage administrator, then you get that group ID added to the list of group IDs that your process has on the host it's running on, and that will have an impact on your access to the physical underlying storage. So we call that taking over the storage, and it only applies to these plugins, and they are all block storage plugins. We never do it for shared storage.
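To tie supplemental groups and FSGroup together, a pod security context carrying both kinds of group IDs Jeff describes might look roughly like this; the ID values and image are just examples, not values from the slides.

```yaml
# Hypothetical pod-level security context with group IDs.
apiVersion: v1
kind: Pod
metadata:
  name: groups-demo
spec:
  securityContext:
    supplementalGroups: [1234]   # appended to container processes; used with shared storage like NFS or Gluster
    fsGroup: 5678                # used with block storage plugins; the volume is "taken over" with this group
  containers:
    - name: app
      image: registry.example.com/my-app:latest   # placeholder image
```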
Aaron, I think the next slide is just our generic question and answer slide. So I think that marks the end of our formal presentation, and we're both ready to answer questions that you may have. All right, well, that was incredibly well done. So thank you, Aaron and Jeff, for that. There is one question. Yixing is asking, is it possible to view storage being used, to view disk space? You cut out for a second there, Diane, can you please repeat that? The question is in the chat. It's from Yixing. Is it possible to view the storage being used, the used disk space on the storage? So it's the statistics, I think, he's asking about on the storage. Does that mean sort of like seeing how much storage is consumed by a pod or a container? Was that it? Let me unmute the line and let's see if we can... Sure. ...get him to follow up with that question directly. In the chat there, it's still... In a PVC, he's asking. I think if you... No, the PVC isn't something that changes in real time as your pod or application churns through storage and, say, appends to a file and makes it larger and larger and starts consuming storage. A PVC doesn't have that purpose. Now, Aaron may have more experience in the OpenShift console to know what metrics might be exposed that would show you. So Kubernetes and OpenShift have a whole underlying framework of metrics, which gets exposed to the console if it chooses to, and gets exposed to Cockpit, which can be run on your node. But a claim or a PV, their purpose isn't to sort of show you real-time snapshots of storage usage. Right, and just to add on to that, that is accurate. In the console, you can see how much you requested and actually how much is within the PV that it is bound to. But you can't do a df or anything on the PVC to see how much space you've actually consumed against what you've asked for. So that is being tuned as we speak, but today that capability doesn't exist. In the chat, there are two more questions coming up. Yeah, I see AK's question, which is a good one. It's: if a PV is created with 10 gig, but the claim is only five gig, and there's a one-to-one mapping so that PV is not available to any other claims, what happens to the remaining five gig that were defined in the PV? So it's important to know, and I should have said this, I forgot, that when we define a PV today and we say it's a 10 gig PV and it's read-only, say those are its access modes, it's natural to think that there's some enforcement going on by Kubernetes or OpenShift to make sure that the underlying storage really has 10 gig and that we don't use more than the 10 gig. And if you think that way, then you also feel a claim that's only asking for half of it has wasted storage, but it turns out that's not true. And Aaron maybe can fill you in more on quota, which would address that better. But the capacity in a claim and the capacity or size in the PV, you should think of those more as labels, and the same with the access modes, read-only or read-write or read-write-once. Those are really labels or tags that describe the storage. Kubernetes and OpenShift aren't enforcing those labels as rules on the storage. That's done by a quota mechanism or something else that's part of the real physical storage. Kubernetes and OpenShift are just using the capacity and access modes as a way of matching a claim to a PV resource. Just like Aaron's example of gold, silver, and bronze are labels to help match claims to PVs, capacity and access modes are also labels from that perspective. Does that answer your question? So nothing is being wasted. I think it probably does. Diego Castro has another question. When will we be able to define IOPS limits? So with the idea of storage classes and storage selectors, there is the ability for the admin to have the IOPS defined as part of that taxonomy and then for the user to be able to claim against that. 
There isn't a plan right now to be able to change the characteristics of the storage that's being used. It's just to properly expose what is set at the time. So there's one more question from Jonathan Lee. Will storage security integrate with Red Hat's SSO, the Keycloak work? Currently, our team is actually looking at storage security. We have a lot of different considerations to take into account, and consensus within the community has not been reached as to how we handle the security. We would certainly entertain that possibility. We're looking at how to properly expose ACLs and lock those down, as well as looking at things like Keycloak. But today that hasn't been done yet. It'll probably be one or two versions down the road in OpenShift before we have anything that sophisticated. There's yet another question. This is quite a good day in terms of questions. Is there any way to recover a PV once it's been recycled? He means the data stored previously, before the PV was recycled. That's another good question, because we didn't cover retention policies with PVs. You can define a retention policy when you define your persistent volume. And you can say I want it to be recycled. I want it to be retained, which means unavailable but still there, saved. You can say I want it to be deleted, which means we would get rid of it. And that's more of something you would see with dynamically provisioned storage. Here it is, and now I'm done with it, get rid of it. And the only storage plug-in that supports recycling today is NFS. So the question is, is there a way to recover a PV once it's been recycled? So I described retention policies just briefly there, but I don't know what recovering a PV means. So maybe the person asking that question can define what they mean by recover a PV. Yeah, I will unmute and see. If you unmute yourself, Arvind... Did it come up in the chat? Is it readable to you? This is Arvind. Yeah, actually my question is, for example, let's say we have one PV, which is currently used by a specific project. And that project has stored, I mean, the pods which are specific to that project have stored some data in it. So that project has been decommissioned now and the PV has been recycled. So after some days, I want to recover the data which was created by the previous pods in the project. So is it possible, I mean, after recycling that PV, is it possible to recover that data? Not really. I mean, it would be difficult. And to be honest, we are going to be phasing out the idea of recycling in future versions. There's already been a PR submitted about six months ago to deprecate the recycling capability, just because it leaves open the possibility of something like this. If you have a PV and you've set it to recycle and it's been set aside and is waiting for delete, you're opening up the possibility of someone possibly claiming that volume before the data's been deleted and reading it. So recycling is only supported, I believe, in Cinder and NFS today, and that feature will actually be phased out over the next six months. Okay, thank you. So if you want to keep the data, it's best to do retain instead of recycle. Yep, thank you. Yep, okay. Jeff, did you have something you wanted to add to that? I was just going to ask a quick question here. Can you change it from retain to recycle after you've allocated it? Or back the other way? No. I mean, I've never tried to do that. There's an oc edit command. You could oc edit a PV, but I don't know if that sticks. 
Some attributes are changeable and others aren't, and I don't know about the retention policy. Well, it'd be a good thing to experiment with, so we could test it out. Are there any other questions? The concern that... Go ahead, Jeff. Sorry to interrupt. Well, just to confirm what Aaron alluded to, and it's important: we have to be careful in OpenShift when we're talking about storage and the end of access, the end of the claim or the end of the pod's access, that if it's protected storage, no other claim and pod can come along there and start reading it. And at the same time, OpenShift has to be very wary of doing an rm -rf on storage to clean it up, right? You certainly don't want to make a mistake in executing that kind of command. So there's a balancing act that we have to do in terms of this storage, and that's one of the reasons for recycling being deprecated. It's a dangerous and potentially error-prone retention policy, and there's a risk of not only having the timing windows where someone can gain access to storage that they shouldn't, there's also the possibility of removing storage that shouldn't have been removed. There's one more that just popped in. Wallace is asking, what is the method to back up and restore PVs or the entire OSE cluster? You might have to take that one. I think what Jeff had alluded to earlier is that currently snapshots are not supported. There is a PR that is active in the community and being worked on outside of Red Hat, but today we don't offer snapshots. And most generally, if you use a storage provider from Red Hat, like Gluster, you have replication. So you're facilitating the backup and retention of it through those means, but I think your question probably is around snapshotting, and that's coming in future releases. Yeah, and it's sort of a slippery slope for Kubernetes and OpenShift here to get more intimate with the underlying storage. Kubernetes is trying to stay above it, right? It's trying to keep that abstraction level well defined and not go into the nitty gritty of a particular vendor's storage solution. And so there's a tug of war going on there between trying to automate features and expose them through the Kubernetes APIs, which are very useful and give you many benefits, versus having Kubernetes know too much about the underlying storage and therefore being more error-prone, slower to be able to respond to enhancements that are made, and so forth. So it's not an easy balancing act right now and it gets discussed quite a bit in the storage community. And looking again, it seems there's a lull in the Q&A. If there are any other questions out there from all of you, go ahead and throw them in the chat or raise your hand; we have a few minutes here. Is there anything else, Aaron or Jeff, that you'd like to add, any reference sites or places to get more information in the future? Yeah, I mean, there is actually quite a bit of documentation out on OpenShift Origin, and feel free, if people want to email or reach out to us, or even send it to the mailing list for AOS storage. We're happy to answer questions that way as well. All right, I'm still not seeing any more questions. So that just is a testament to the thoroughness of your presentation. Many thanks for all of your work in getting this together and coordinating with me on this one. There will be more on persistent storage as new features get added. 
And if there are other aspects of persistent storage that you'd like to hear more on, please reach out on the OpenShift Commons mailing list and I'll try and find a resource to do more deep dives on it. This has been really educational for me, and I hope so for everybody else. And we hope to have you guys back again, probably with the next release of OpenShift, and we will send the slides and the URL. They'll be posted along with the YouTube video, probably on Monday of next week, on the OpenShift blog at blog.openshift.com. All right, thank you guys.