Are we recording yet? Yes, we are. Awesome. Thank you. I'd like to thank everyone who's joining us today. Welcome to today's CNCF webinar, Kubernetes Storage in Action. I'm Jeffrey Sica, a senior software engineer at Red Hat and a cloud native ambassador, and I'll be moderating today's webinar. We would like to welcome our presenter today, Sheng Yang, software architect at Rancher Labs.

A few housekeeping items before we get started. During the webinar you are not able to talk as an attendee. There is a Q&A button at the bottom of your screen; please feel free to drop your questions in there and we'll get to as many as we can at the end. If there are a couple of pertinent questions during the presentation, Sheng said we can feel free to grab him and try to answer them while it's relevant. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct; basically, please be respectful to all of your fellow participants and presenters. The recording and slides will be posted later today on the CNCF webinar page; we'll provide a link to that, but it's cncf.io/webinars. With that, I'll hand it over to Sheng to kick off today's presentation.

Sheng: Hi, welcome everyone. This is Sheng Yang from Rancher Labs, and today our topic is Kubernetes storage in action. First, a little bit about myself. I've been with Rancher Labs since 2015. Before that I worked at Intel on KVM kernel development, and at Citrix/Cloud.com on CloudStack development. You can find me as yasker on GitHub, Twitter, and Medium, and if necessary you can send me an email (sheng.yang at rancher.com) for any questions you have after the session.
So I will gladly answer them. With that, let's get started.

The reason we are having this webinar is that the concept of persistent storage in Kubernetes is always a little bit confusing. From what I've heard from end users, and from the many talks I've done at multiple KubeCons, people are always asking about the differences between things like PV and PVC, ReadWriteOnce and ReadWriteMany, and those kinds of concepts in Kubernetes. So the first part of today is to go through the persistent storage concepts in Kubernetes in one go. I will try to explain them as clearly as possible and make sure you get a concrete understanding of the Kubernetes storage concepts after this webinar. The second part of this webinar is a live demo where I'm going to show you how to use persistent storage in Kubernetes. For the demo we are going to use Rancher and Longhorn, which are two open source projects in the Kubernetes area, and you are free to download them and try them yourselves.

So the first thing I want to talk about is the list of persistent storage concepts in Kubernetes. The most common ones you might have heard about are the persistent volume and the persistent volume claim. In short, a persistent volume (PV) is a piece of storage that can be consumed by Kubernetes pods, and a persistent volume claim (PVC) is a request for a persistent volume. One common misconception about persistent volumes and persistent volume claims is to think of the persistent volume as a storage pool, with the persistent volume claim carving out one part of it. In fact, that's not true. In Kubernetes, a persistent volume and a persistent volume claim always have a one-to-one binding relationship: one persistent volume can only be used by one persistent volume claim, and one persistent volume claim is, of course, bound to only one persistent volume as well. I will explain more on that later.

The next concept you're likely to encounter is the storage class. A storage class is essentially a collection of persistent volumes, and when the storage class works with a provisioner, you can have dynamically created persistent volumes instead of persistent volumes statically allocated by the admins. I will go into more detail on that later as well. The last concept you're probably going to encounter is simply called a volume. A volume inside Kubernetes just refers to whatever storage is used by the pod, so it's not necessarily persistent. If the volume points to a persistent volume claim, then we assume it's most likely persistent storage; but if the volume is using something like hostPath or emptyDir, those are not really treated as persistent.
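[For illustration, a minimal sketch of these three concepts together: a statically created PV, a PVC, and a pod whose volume points at that claim. The names, size, and hostPath backend are made up for the example; a real PV would usually point at an actual storage backend.]

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-demo
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      hostPath:                      # illustrative backend only
        path: /data/pv-demo
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-demo
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-demo
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: data                 # the pod "volume" ...
          persistentVolumeClaim:
            claimName: pvc-demo      # ... points at the claim, which binds one-to-one to a PV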
The meaning of persistent storage in Kubernetes has actually evolved a bit since the early days of Kubernetes, and one major event in that evolution is the storage class. The storage class was introduced in Kubernetes around 1.5 or 1.6. Before that, the model for persistent storage was that the storage admin had to allocate persistent volumes first, and then a bunch of persistent volume claims would try to bind to those persistent volumes. For example, we create multiple pods, each pod has a volume, and the volume points to a persistent volume claim. In this case Kubernetes tries to match the persistent volume claim to any existing persistent volume through parameters like capacity and performance, and uses those parameters to find a PV that matches the requirements of the PVC. Once Kubernetes finds a PV that matches the requirements of the PVC, it binds them: you will see the PV status become Bound, the PVC starts pointing to that PV, and then the pod can start using the PV as a volume inside the pod.

This model has a few issues. The first one is that since the PVC only specifies part of the spec, like the size of the storage you want, the PV can be over-satisfied: the spec of the PV can be better than what the PVC wants. For example, I might ask in a PVC for a one-gigabyte volume, but there is no existing free PV with exactly one gigabyte of free space; the smallest PV I have might be 100 gigabytes. Even in that case, Kubernetes is still going to bind that 100-gigabyte PV to the PVC, which results in wasted space. Another issue with this model is that those PVs are statically created by the storage admin beforehand. So for this model to work, the storage admin has to be involved in the whole process of using persistent storage, and essentially the storage admin has to predict how many PVs there will be and what PV sizes will be requested by the PVCs, which is next to impossible. So this model provides very good permission and request isolation, but it's not particularly flexible when you want a lot of pods and volumes, with a lot of PVs and PVCs, since the storage admin needs to be manually involved in every step of the allocation.

After the community realized this problem, they introduced the concepts of the storage class and the provisioner. In the new model, the storage class with a provisioner basically operates in the role of the storage admin, but it can do the work automatically. In this case we still have four PVCs, and the four PVCs have different requirements. The pods are created the same as before, with volumes pointing to the PVCs. Each PVC first checks whether any existing PV can meet its requirements; if not, the PVC talks to the storage class with the provisioner, and the provisioner provisions a new PV according to the spec requested by the PVC, so the new PV matches the spec of the PVC 100%. For example, if you ask for a one-gigabyte volume, the provisioner will provision you a one-gigabyte volume, and if you ask for something like SSD-backed storage, the provisioner will do that as well.
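[A rough sketch of the dynamic model just described, assuming a hypothetical provisioner name: the PVC names a storage class instead of waiting for a pre-created PV, and the provisioner creates a PV matching the request exactly.]

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: fast
    provisioner: example.com/fast-provisioner   # hypothetical provisioner
    parameters:
      type: ssd                                 # passed through to the provisioner
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-dynamic
    spec:
      storageClassName: fast        # ask this class (and its provisioner) for a volume
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi              # the provisioned PV will match this size exactly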
So in this case you get a strictly matched spec between the PV and the PVC, and you also don't need to worry about getting the storage admin manually involved in every volume creation. Of course, after the PV is created automatically by the storage class and the provisioner, the PVC binds to the PV and the pod can start using the volume, same as before. So you can see that with the introduction of the storage class, the allocation of volumes and persistent volumes becomes much more flexible.

One more thing about the storage class: as I mentioned, a storage class with a provisioner will automatically provision new PVs, but you can also use a storage class without a provisioner. In that case the storage class essentially becomes just a collection of PVs. If you specify a certain storage class and there are multiple PVs with that storage class, the PVC will only bind to one of the PVs with the same storage class as the PVC specifies. So that's also a way for you to group existing PVs together and make sure a PVC only grabs and binds to a PV within that group. But in the most common case, the storage class works with a provisioner to provide dynamic provisioning to the PVCs. So that's PV, PVC, storage class, and provisioner.

The next concept we're going to talk about is ReadWriteOnce and ReadWriteMany. Those are the access modes a PV can have in Kubernetes. ReadWriteOnce means the storage can only be mounted read-write on a single node at any given time, and ReadWriteMany means it can be mounted read-write on multiple nodes at the same time. So why the difference? In fact, the difference comes from how the storage works internally. ReadWriteOnce-type storage is most likely high-performance block storage, like AWS EBS, Azure Disk, Google Persistent Disk, Ceph RBD, and Longhorn. Since a block device can only be attached to one node, and you cannot modify the block device contents without the filesystem on it knowing, a block device is most likely only able to be mounted read-write on a single node. In some cases you can probably use read-only-many for a block device as well: you can mount the block device on multiple nodes as long as you are not writing anything to it, so you can still read from it. ReadWriteMany-type storage is most likely a distributed file system, like AWS EFS, NFS, GlusterFS, and CephFS. ReadWriteMany means that type of storage operates on the file system level, so any change you make on one node will be made known to the file system on the other nodes, and the file system has a protocol to deal with that. That's why you are able to use this type of storage across different nodes.
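[As a sketch, the access mode is just a field on the PVC (and on the PV); which modes actually work depends on the storage behind it.]

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-block
    spec:
      accessModes:
        - ReadWriteOnce             # block-style storage: read-write on a single node
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-shared
    spec:
      accessModes:
        - ReadWriteMany             # shared filesystem (NFS, CephFS, ...): read-write on many nodes
      resources:
        requests:
          storage: 1Gi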
On the contrary, because it involves more locking and file-system-level protocols, the performance of a distributed file system is probably not as good as a dedicated block device.

Now that we've talked about the ReadWriteOnce and ReadWriteMany access modes, we also need to think about how to use them. There are two common ways of using persistent volumes in Kubernetes: one is using a Deployment, the other is using a StatefulSet. Everybody is probably most familiar with the Deployment, and the Deployment has the property that all the pods in one Deployment share the same volume. As you know, the pods of a Deployment can be spread across different nodes, so the storage for a Deployment has to meet the requirement that no matter which node the pod is running on, the storage must be accessible on that node. So the Deployment is more suitable for ReadWriteMany-type storage. On the other hand, some workloads can scale really well horizontally, so we also have the concept of the StatefulSet in Kubernetes. Each pod in a StatefulSet can have its own volume, because we have a new concept here called the volume claim template. The volume claim template automatically provisions new PVs according to a PVC specification and uses that newly created PVC for each pod in the StatefulSet. So if you have a workload that scales horizontally really well, you can use a StatefulSet for that workload and pair it with high-performance ReadWriteOnce-type storage, like a block device.

All right. So I'm the maintainer of the Longhorn project. Longhorn is a CNCF sandbox project, and it's distributed block storage software for Kubernetes; Longhorn is 100% open source. Longhorn is in the category of ReadWriteOnce-type storage, as I mentioned before, along with EBS and the others. What we want to do with Longhorn is to easily provide persistent storage support to any Kubernetes cluster. You can find more details about Longhorn at longhorn.io, and I'm going to demonstrate how to use Longhorn, Rancher, and Kubernetes to operate on persistent storage later.

Just a little update on the latest status of Longhorn. The latest release is the 0.7 release. As I said before, Longhorn is enterprise-grade distributed block storage software for Kubernetes. We support volume snapshots inside the cluster, and you can also back up and restore volumes to and from outside the cluster, like an S3 or NFS server. We also support storage tags for node and disk selection: for example, if you have some data that doesn't require very fast access or very high bandwidth, you can put labels like SSD and NVMe on one set of disks and a label for the spinning disks on the other, so you can choose disks of different speeds using storage tags. Longhorn also supports cross-cluster disaster recovery volumes with defined RTO and RPO. RTO here is recovery time objective and RPO is recovery point objective; those two parameters define how soon you can recover your volume in your backup cluster, and to what point in time the data will be recovered in the backup cluster. Longhorn also supports live upgrade of the software without impacting the running volumes.
We also have an intuitive UI, which is one of the first things many users like about Longhorn. And of course, Longhorn runs right on top of Kubernetes, using the Kubernetes controller pattern. We support one-click installation and it's very easy to install; I will demonstrate that later. We have more features coming, but let's start the demo.

All right, so before the demo, do we have any questions?

None have popped up so far.

Okay, good. So here what you see is the UI from Rancher. Can everybody see the UI?

Yep, you're looking good.

Okay. As you may know, Rancher is a Kubernetes management platform that is also 100% open source, and Rancher can manage multiple Kubernetes clusters from different providers, no matter whether they are on premises or in a cloud. As you see here, three of the clusters are in DigitalOcean and one cluster is on my own nodes. So let's dive into the demo. When you click into the cluster dashboard, you see the CPU and memory usage and the pod stats, and you can see that this cluster has three nodes, with their CPUs and the versions of Kubernetes and Docker on the nodes. For storage here, you can see that we don't have any persistent volumes or storage classes created. So we are going to start by installing Longhorn to add persistent storage to this cluster.

Installing Longhorn is very easy: just go to the project, click Apps, click Launch, and find Longhorn in the Rancher catalog. Longhorn, of course, is also available for installing from the YAML manifest or from the Helm chart, and in the back end this catalog entry uses the Helm chart as well. I don't need to change anything, I just click Launch. While we are installing Longhorn, let's go back to the dashboard for the demo cluster. We can launch kubectl here, and you can just run any command using kubectl; nothing else is needed. So now Longhorn is installing in the longhorn-system namespace, and you can see the things that are running here; you can see the real-time status of all the pods in the workloads here. It is still installing right now; it hasn't finished yet, I think. Let's see if I can access the UI yet. Oh, okay, in fact the UI is ready now.

In the Longhorn dashboard you can see the current status: how many volumes you have, their health, how much storage is available for you to create new volumes, and how many nodes there are. On the node page you can configure the disks; Longhorn is going to use the space on those disks to back the Longhorn volumes. Of course, we don't have any volumes right now. Longhorn also, as I mentioned before, supports backing up to external storage, so let me set the backup target first. That is set to S3, and you can see that we already have some backups here.

So let's dive into persistent volumes and start using them. I think I need to refresh this. Oh, yes. Okay. The first thing I want to do is start a WordPress application using a Helm chart, using Longhorn persistent volumes on Kubernetes. This is the standard WordPress Helm chart. You can select the WordPress version, enable the persistent volume, and use Longhorn as the storage class. And as you may know, the WordPress Helm chart has two parts: one is WordPress itself, the other is the database, MariaDB. So we want to use the Longhorn storage class for MariaDB as well.
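[The idea is roughly the following Helm values: persistence enabled for both components and pointed at the longhorn storage class. The exact key names vary between versions of the WordPress chart, so treat this as an illustrative sketch rather than the chart's authoritative values.]

    persistence:
      enabled: true
      storageClass: longhorn        # WordPress data volume
      size: 10Gi
    mariadb:
      master:
        persistence:
          enabled: true
          storageClass: longhorn    # MariaDB data volume
          size: 8Gi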
So let's click Launch here. From here, we should be able to see that it creates two volumes. Longhorn is communicating with Kubernetes through the CSI interface, and the volumes are being attached. You can also see from the Longhorn UI that one volume is used by MariaDB and the other volume is used by WordPress itself, and you can click in to see the details.

Okay, so let's go and look at the persistent volumes and the storage class. In the storage tab you can now see that we have two persistent volumes and their status is Bound. For the storage class: when you install Longhorn, it creates a new storage class with the dynamic provisioning feature, and you will see it here in the storage classes tab. You can also check this using kubectl — in short `k`, because in Rancher we type kubectl so many times that we normally just alias it to `k`. So you can just do `k get pv` and you see the two persistent volumes that have been created, and with `k get storageclass` you can see the longhorn storage class is there.

We can take a deeper look at the PV here and see what it has. Let me see. Okay, you can see that this PV is using the CSI driver for Longhorn, using ext4 as the default filesystem, plus some other parameters provided by Longhorn. You can also see that this PV has the size of the volume we want, which is in fact determined by the PVC — it's 10 gigabytes. And this PV contains a reference to the PVC that is using it: the PVC named wordpress in the wordpress namespace.

So let's take a look at that PVC as well. We switch to the wordpress namespace and look at the PVC, and you see this volume name here — that is in fact the PV associated with this PVC. We can look into it a bit more: you can see this PVC was created with access mode ReadWriteOnce, which is suitable for Longhorn, and it requests a storage resource of 10Gi with the storage class longhorn. That is how this PVC is able to find the Longhorn storage system and have a new PV created to meet its demands. You can also see that the volume mode is Filesystem; another volume mode available in newer Kubernetes releases is the raw Block device. And the volume name is this "pvc-6ea..." something, which is the PV name we mentioned before.

Let's take a look at the storage class. This is how a storage class looks. The most important parameter here is the provisioner: you have to specify the provisioner if you want dynamic provisioning for the storage class. Another one is the reclaim policy, which controls what happens when the PV is released from its PVC — whether the PV is going to be deleted, as in this case, or recycled. And there is also a special annotation here for the default storage class: if you create a PVC without specifying a storage class, this storage class will be used as the default.
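[Roughly what a dynamic-provisioning storage class such as the one Longhorn installs looks like. The provisioner name and parameters here are indicative of Longhorn's CSI setup, but may differ between Longhorn releases.]

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: longhorn
      annotations:
        storageclass.kubernetes.io/is-default-class: "false"   # "true" makes it the cluster default
    provisioner: driver.longhorn.io          # the provisioner; required for dynamic provisioning
    reclaimPolicy: Delete                    # what happens to the PV once its claim is released
    parameters:
      numberOfReplicas: "3"                  # Longhorn-specific parameters (indicative)
      staleReplicaTimeout: "30"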
So let's go back and take a look at the pods that are currently running. Oh, I see some questions.

Yeah, I see some questions here. I wasn't sure if you wanted to answer them at the end, because some of them weren't necessarily related to what you're talking about.

Okay. All right.

If you want, we can tackle a couple of them right now.

Okay. So the first question I saw, from Rush, is: is it possible to use Longhorn in a Deployment? It's possible to use Longhorn in a Deployment, but you will not be able to scale beyond one node, since it's ReadWriteOnce — Longhorn cannot be run in ReadWriteMany mode.

The next question is from Reno: what will be the default data location for Longhorn, and how can I allocate space that Longhorn can use? The default data location is /var/lib/rancher/longhorn, and we are changing that to /var/lib/longhorn. In order to allocate space for Longhorn, you can just take a block device on the node, format it with ext4 or XFS, mount it on some directory on the node, and tell Longhorn about it. We can see that from here in the Longhorn UI: you have the node here, and you can add a disk to this node by putting in a different path, and then you have another disk for Longhorn to use. You can also see that the default path is /var/lib/rancher/longhorn in this release.

The next question: does Longhorn provision the actual underlying storage — does it, for example, use EBS automatically when deployed on AWS? The answer is no. We currently just depend on how the user has mounted the disks on the node. So in order to use EBS you need to create the EBS volume, attach it to the VM, format it, mount it on the node, and then supply the path to Longhorn like that. But we are also considering automatically provisioning new block devices if you are using certain cloud provider solutions.

What's the difference between the PV getting deleted and recycled? With the Delete reclaim policy, if I remember correctly, when the PVC gets deleted the PV is going to get deleted along with it. If the reclaim policy is Recycle and the PVC is deleted, the PV will not get deleted automatically; it just remains there, unbound, and the next PVC can pick it up. In fact it's not really 100% safe to do it this way, because the data can be reused.

Can you compare Longhorn with Ceph? One of the motivations for developing Longhorn is that we think Ceph is too complicated. From our previous experience with CloudStack, which was a competitor of OpenStack, there are some user stories about how hard Ceph is to operate. Longhorn itself is much simpler: we only use replicas to store the data, we're not doing striping, and we try to keep the code as simple as possible; we also have a built-in backup mechanism, compared to Ceph. And Ceph itself is not really cloud native, because Ceph was built before Kubernetes was born. Of course, as you know, the project Rook is helping Ceph become more cloud native, but to our understanding that is another layer on top of Ceph, so it introduces more complexity factors to the operation.
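[For reference on the reclaim-policy question: the policy lives on the PV, and for dynamically provisioned volumes it comes from the storage class. Retain keeps the released PV and its data around for manual cleanup, Delete removes the PV (and usually the underlying storage) along with the claim, and Recycle, now deprecated, scrubbed the volume and made it available to a new claim. A minimal sketch with an illustrative NFS backend:]

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-retain-example
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain   # or Delete; Recycle is deprecated
      nfs:                                    # illustrative backend only
        server: nfs.example.com
        path: /exports/data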
Where do you specify the volume mode as Filesystem when deploying the WordPress app? In fact, the Filesystem volume mode is the default setting for provisioners, so unless you specify the volume mode to be the raw Block device, it defaults to Filesystem. The volume mode is also determined by the request in the PVC, which comes from the WordPress chart, and the WordPress chart has already defined it.

We're already using Rancher — when can we expect Longhorn to be production grade? We are targeting Q2 this year for Longhorn to reach GA. At that time we will support Longhorn and, as we claim, it will be production grade. Right now Longhorn is still in beta releases.

Can you elaborate on how it works in the multi-cluster configuration, and the RPO/RTO limits? Well, that topic is probably too deep for this webinar, but I can talk about it briefly. The multi-cluster configuration runs as a master/backup setup: the backup cluster constantly pulls from the backup target, the backup store, for the latest data shipped from the master cluster, so the backup cluster keeps getting refreshed, and you can define the refresh interval as you want. For the RPO/RTO limits, the RPO is limited by how often you back up your volume in the master cluster, and the RTO is determined by how fast you can transfer and activate the volume and put it into the workload. I would say the RTO will be pretty good, and the RPO depends on your setup.

Oh, sorry, I think I just made a wrong click. All right, so that's all the questions we have right now; let's go back to the demo.

Okay, so as you can see right now, WordPress is up, and you can see it's functioning — everything seems to be working, and you can add a new post. Okay, everything seems fine. So we can dive deeper into how WordPress is using the pods. If you click here, we can see the volumes: one is, of course, a ConfigMap; another one is from the volume claim template, called data. In fact we can probably see that better from kubectl, so let's do that. As you can see, the wordpress namespace has two pods. We can look at the MariaDB one — okay, you can see here that this data volume is using the claim data-wordpress-mariadb-1.

But how did it get there? That's because the WordPress chart is using a StatefulSet for MariaDB. As you can see here, the first part of this StatefulSet is just the typical spec for the container, right? But the one StatefulSet-only feature is this volumeClaimTemplates section. The volume claim template defines how to create a new PVC automatically if I scale the database: it says the PVC it creates will have these labels, the access mode must be ReadWriteOnce, the requested storage is 8Gi, the storage class name is longhorn, and the volume mode is Filesystem. This is specified in the volumeClaimTemplates inside the WordPress chart. So in fact I can just scale up the database and you will see what's going to happen. I haven't tried this before — hope it won't fail. Okay, so a new volume has been provisioned automatically; you can see the third one, it's getting attached, and it's going to be used by the new MariaDB pod.
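[A trimmed-down sketch of what such a StatefulSet's volumeClaimTemplates section looks like, modeled on the values just described (ReadWriteOnce, 8Gi, the longhorn class); the container details are simplified and not taken from the actual chart.]

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mariadb
    spec:
      serviceName: mariadb
      replicas: 1
      selector:
        matchLabels:
          app: mariadb
      template:
        metadata:
          labels:
            app: mariadb
        spec:
          containers:
            - name: mariadb
              image: mariadb:10.3
              volumeMounts:
                - name: data
                  mountPath: /var/lib/mysql
      volumeClaimTemplates:          # one PVC is created from this template for each pod
        - metadata:
            name: data
          spec:
            accessModes:
              - ReadWriteOnce
            storageClassName: longhorn
            volumeMode: Filesystem
            resources:
              requests:
                storage: 8Gi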
Okay, so the container has started, and you can also see the logs from Rancher as well. I was a little bit nervous since I hadn't actually tried this before. Oh, okay, now it's fine. At the same time, you can see the new volume has been created for MariaDB-1 in Longhorn, because MariaDB is using the volume claim template to create new volumes.

Also, in the Longhorn UI you can click into each volume and see its detailed information. Here on the right side you see that you have three replicas running on different nodes, and which path each one is using; the tooltip here shows the exact place where Longhorn stores your data. Since we're doing straight 100% replication, as long as you still have the data on this path — for example, if you have a node crash or a managed cluster crash, as long as you have a hard drive that still contains this data — it's still possible to recover from that crash and get your data back. That is one of the fail-safe mechanisms designed into Longhorn. On the left side you can see that the volume is currently attached and healthy, it is attached to the node longhorn-demo-2, and this is the block device on longhorn-demo-2 as exposed by the Longhorn system. This is the actual space Longhorn takes: since we do thin provisioning, the actual size is only about 200 megabytes, and most of that is just the filesystem structures and some other metadata.

Down here we have the snapshot screen, and you can take a snapshot. Just remember that a snapshot stays within the cluster. You can also create a backup, and you can add a label to the backup if you want; that is being copied to the backup store right now. Okay, that was quick. Then there are recurring snapshot and backup schedules: we have a built-in mechanism for you to create backups and snapshots on a schedule, like daily, weekly, or monthly backups. That's because we want to ensure that your data always has a copy outside the cluster, so when you lose your whole cluster, you are not going to lose your data. Here, just for testing purposes, I'm going to do a backup every minute and click save, and this should kick in in about a minute.

So let's go back. Let me see if I have any questions. Okay, I saw the question about access modes and the use cases for each of them. Basically, the concept is: if you have a workload that can scale horizontally, like a database that scales horizontally by doing sharding, then for that kind of workload you'd better use ReadWriteOnce block devices with a StatefulSet, so you can easily have different storage for the different pods across different nodes. But if instead you have a workload that is going to scale while sharing the same storage — they have something like internal locks, so multiple instances are able to share the same storage — then you can use ReadWriteMany-type storage. In that case you can use a Deployment with ReadWriteMany-type storage to scale your workload, and all the pods of that workload will share the same volume. You can see that there can be some bottleneck on the performance, but if your workload allows for that, that would be the way to go.
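[A minimal sketch of the shared-volume pattern just described: several Deployment replicas, possibly on different nodes, all mounting the same ReadWriteMany claim. Names are illustrative.]

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: shared-app
    spec:
      replicas: 3                    # all replicas, wherever they land...
      selector:
        matchLabels:
          app: shared-app
      template:
        metadata:
          labels:
            app: shared-app
        spec:
          containers:
            - name: app
              image: nginx
              volumeMounts:
                - name: shared-data
                  mountPath: /data
          volumes:
            - name: shared-data
              persistentVolumeClaim:
                claimName: shared-rwx-pvc   # ...mount the same ReadWriteMany claim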
There was one other question that was in the chat, not actually in the Q&A. Do you want me to read that off to you?

Sure.

If a node needs maintenance and has to be upgraded, and all the containers need to be migrated, can you describe the process when you already have a bound PV that is local storage? This can happen with local storage and PMEM-backed volumes.

If a node needs maintenance, most likely you have to drain it, and the pods will be moved. Since, if you are using a Deployment or StatefulSet, there will be fewer than the desired replicas — not just on that node, but across the whole cluster — the pods will be moved to other nodes. But if you are using local storage — and local storage needs some clarification here: if by local storage you mean storage that is bound to that node, like another project I have, the Local Path Provisioner — then your storage is always bound to that node, so you cannot really move that storage. The pod moved to another node will not have that storage available when it restarts, so in that case you are going to lose that replica. But if you have a system like Longhorn, the storage can be migrated to another node, because Longhorn, or some other storage system like Ceph, keeps replicas on the other nodes, so the pods can connect back to their replicas, to their data, through a different node and continue functioning from there.

So that's all for my demo; let's go back to the presentation — I think I have one more slide. Just one last thing to mention about Longhorn: in the next release, the 0.8 release, we're going to have volume resizing, topology support, and live upgrades coming. Volume resizing is a feature many users have requested. We support volume resizing from the Kubernetes 1.16 release on, since that is the first release where volume resizing became beta in Kubernetes. The Kubernetes topology support — previously known as failure domains — is a feature added to Kubernetes to help deal with replication within one region but across different availability zones, like the case where, if you are using EKS, you get a region-wide control plane by default, but your nodes can fall into different AZs. And one thing about AWS is that your EBS volume is in fact bound to its AZ and cannot be migrated across AZs, so if you have one availability zone go down, in those kinds of cases you are going to lose the data in that AZ. What we add on top of EBS is that you can use Longhorn to provide cross-availability-zone support, using this new topology support we are adding in the 0.8 release. After that we don't expect many major changes, and we are targeting Q2 2020 for the Longhorn 1.0 GA release. You can follow the latest updates in the milestones at github.com/longhorn/longhorn, and for Longhorn-related discussion you're welcome to join the Longhorn channel on the CNCF Slack or the Longhorn channel on the Rancher Users Slack. We are migrating from the Rancher Users channel to the CNCF channel since we have become a CNCF sandbox project.
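[To illustrate the volume-resizing feature mentioned above, here is how expansion generally works in Kubernetes, independent of any particular Longhorn release: the storage class must allow expansion, and then you edit the claim's requested size upward. The provisioner name is hypothetical.]

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: expandable
    provisioner: example.com/provisioner   # hypothetical provisioner
    allowVolumeExpansion: true             # required before claims of this class can be resized
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data
    spec:
      storageClassName: expandable
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi     # edited up from e.g. 10Gi to request the expansion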
So that's all for my presentation. Any other questions?

Yeah, if there are any other questions, we do have time for a few more. You can just click the Q&A tab at the bottom of your screen, but if there are no other questions in the next minute or so, I think we'll wrap it up. All right. Thanks, Sheng, for a great presentation. That's all the time we have for today; thank you for joining us. The webinar recording and slides will be online later today. We are looking forward to seeing you at a future CNCF webinar, and please have a great day. Thank you.