Hey, morning Alex. Actually, probably good afternoon for you.

Good morning. By the way, I noticed this session was already recording when I joined. So Zoom can do automatic recording now?

Yeah, all of the SIG meetings are set to automatically record, and then they get posted to YouTube so they're available to the public.

Okay. I just didn't know you could set a Zoom session to record automatically; every time we have to manually click the button.

Yeah, I think there's a setting you can use when you set up the Zoom call.

Morning. You beat me here; I was having Zoom issues and had to reboot. Oh, there are going to be more people; maybe it hasn't started yet. Hey, Alex, which time zone are you in?

I am in the UK time zone.

Okay, GMT. Yeah. And which time zone are you in?

I'm in the Mountain time zone, so an hour ahead of Pacific.

Oh, it's still early morning for you.

Yeah, not too early. Plenty of cups of coffee in.

I basically just got up about an hour ago. Where are you?

Yeah, I'm on the Pacific time zone.

Okay, great. It looks like we have pretty light attendance today, Alex. Should we wait a couple of minutes?

Yeah, sure. In the UK we're in this weird week where everything is off by an hour, because the clocks went back last Sunday, but in the US they go back next Sunday.

Yeah. California has voted to cancel daylight saving, but that's at the state level, and they were told you have to get permission from the federal level to really remove it. And it creates another problem: California could end up on a different time compared to, say, Washington and Oregon, which would be weird because they're basically supposed to be in the same time zone. But one thing I've heard about daylight saving is that hundreds of people have a higher risk of stroke every year because of the change of schedule and the change of clocks. Not really sure why we still need it.

Yeah. I never knew they could correlate it with health problems based on the time change.

Oh, and in fact, I don't think Arizona has daylight saving.

They don't.

Okay. So not all of Mountain time, I guess.

All right. I don't think a lot of people are joining. Oh, we've got a couple more now. But I'd suggest we start, and then we can share the recording if need be.

Okay, sure. So I was going to ask: which screen do you see?

I see the full-screen presentation page.

Okay, that's good. All right, so thank you everyone for joining this session. As you know, Longhorn is currently a CNCF sandbox project, and we are applying for the incubation stage, so this is the Longhorn incubation review. For this review, first I'm going to go through a few recaps on the basics of Longhorn: why we built it and how we built it. Later we can go through how Longhorn has grown since joining the CNCF, what the traction is, and what's on the roadmap. Feel free to interrupt me at any moment. So let's get started.

All right, so what is Longhorn? Longhorn is open source distributed block storage software for Kubernetes. Our goal is pretty clear: we want a very simple, worry-free way to add persistent storage to your cluster.
So one-click installation to add persistent storage support to any Kubernetes cluster is the goal we want to reach. There are also a few things that differentiate Longhorn from others; we call them the design principles of Longhorn.

First is reliability. Because this is storage software, the last thing you want is to lose your data. Longhorn provides crash consistency and makes sure that every piece of data you write to a Longhorn volume is written and preserved on the disk, with no cache in between. Second, Longhorn provides multiple layers of protection against data loss. That includes the built-in snapshot mechanism, which lives inside the cluster, and the backup support, which backs snapshots up to outside the cluster, for example to S3 or another server. In fact, there's one more layer compared to some other solutions: if you still have the Longhorn replica data directory available, you can directly extract the data from it, even if, say, you lost the whole rest of the system and all of the cluster metadata. Because of how Longhorn works, that's possible; I'll go through the architecture later.

The second principle is that we want Longhorn to be very easy to use. One-click installation means we detect your environment and choose the best way to install Longhorn. This helped a lot during the early days of Longhorn, when Kubernetes offered the driver choice between flexvolume and CSI. We migrated to CSI at CSI v0.3 and later upgraded to CSI v1.0, but sometimes you had to fall back to flexvolume because the Kubernetes distribution didn't support CSI, and the different CSI versions were not really compatible with each other. So we built something called the driver deployer to automatically detect your Kubernetes version and install the compatible driver for you. Now that's less of a problem, because everybody has standardized on CSI 1.0, but we put a lot of effort into making this installation process work without manual configuration.

Another thing is that Longhorn provides a polished user experience, including a built-in dashboard UI; you don't need a third-party UI or add-on for that. It's all included. You can operate Longhorn, creating volumes and so on, from inside the cluster of course, but you can also do it from the UI, where you can see a dashboard with a system-level overview, and you can perform backup, restore, snapshot, and scheduled backup operations from the UI as well.

The third principle is maintainability. Maintainability is really a by-design property: when you make your design choices about how the whole thing works, that decides how easy or hard it's going to be to maintain. I'll talk a little more about this with the architecture later, but the real goal is to make sure that even if you don't have a very deep storage background, you can understand most of the concepts and understand how Longhorn works.
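As a rough illustration of the driver-deployer idea mentioned above (this is not Longhorn's actual code, and the version cutoffs are illustrative assumptions), the core logic amounts to inspecting the Kubernetes server version and picking a driver the cluster can support:

```go
package main

import "fmt"

// chooseDriver is a toy sketch of the "driver deployer" concept:
// look at the Kubernetes server version and pick a compatible
// volume driver. The cutoffs below are rough assumptions (CSI v1.0
// went GA around Kubernetes 1.13), not the exact rules the real
// deployer used.
func chooseDriver(major, minor int) string {
	switch {
	case major > 1 || minor >= 13:
		return "csi-v1.0"
	case minor >= 11:
		return "csi-v0.3"
	default:
		return "flexvolume"
	}
}

func main() {
	fmt.Println(chooseDriver(1, 9))  // flexvolume
	fmt.Println(chooseDriver(1, 11)) // csi-v0.3
	fmt.Println(chooseDriver(1, 16)) // csi-v1.0
}
```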
So Longhorn provides a way to recover easily from even the worst-case scenario: as I mentioned, there are three layers of protection, and as long as any one of them is available, you can recover the data of your cluster. It also provides upgrades without interrupting the workload, which is what we call the live upgrade feature. That means you can feel free to upgrade Longhorn, including the Longhorn data engine, while you still have running workloads. That really reduces your downtime and shrinks the scheduled maintenance window when you want to do continuous deployment or maintenance work on your cluster.

All right. So here is our latest release, v1.0.2, and on the right side I've listed a bunch of the features; let's go through them quickly. Distributed block storage software: well, in the upcoming v1.1 release we're aiming to add ReadWriteMany as well, using NFS, and that's going to be built in, so I'd call it distributed storage in general in the next release. Thin provisioning means the volume is provisioned on demand, using Linux sparse files to hold the metadata and the data, so it doesn't take actual space unless you use up the space. Built-in snapshots and backup/restore: we define a snapshot as a point-in-time image of the volume kept inside the cluster, so once you have it, you can revert back to it and so on; a backup goes outside the cluster, and we support incremental backup and incremental restore. Volume expansion: you can resize the volume. Cross-AZ replica scheduling is mostly for cloud vendor environments, where you want high availability across the different AZs in the same region; if you lose one AZ, you're still fine. Storage tags for node and disk selection. Cross-cluster DR volume with defined RTO/RPO. Live upgrade, which I mentioned before. And the UI, one-click installation, and more.

All right, any questions so far? Okay.

All right, so this is the overview of how Longhorn works underneath. Currently we have two nodes here; both nodes have storage, RAM, and CPU. Kubernetes asks Longhorn for a new volume. When this request comes in, Longhorn is going to create two replicas, preferably on two different nodes, because if one replica goes down we still have a replica available on the other node; I can demonstrate the failure process later. Then Longhorn creates an engine connected to those replicas, and the engine exposes the block device for the volume. So this is a very simple way to set up the data path and provide storage for the pod to use. If a second pod asks for a second volume, we do the same, and for a third pod we do the same.

There are two advantages to this approach. The first is that the data path of each volume is basically isolated from the others: if one volume goes down, or even one engine goes down, it's not going to affect any other volume. The second is that the engine we have here is always collocated with the pod, with the workload.
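On the thin-provisioning point above: the Linux sparse-file trick means a file can report a large size while occupying almost no disk blocks until data is actually written. A minimal, self-contained demonstration of the mechanism (Linux-only, and not Longhorn code):

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Create a 10 GiB "volume" file without writing any data blocks.
	// The filesystem records the size but allocates almost nothing,
	// the same trick Longhorn uses to provision volumes on demand.
	f, err := os.Create("volume.img")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := f.Truncate(10 << 30); err != nil { // 10 GiB apparent size
		panic(err)
	}

	var st syscall.Stat_t
	if err := syscall.Stat("volume.img", &st); err != nil {
		panic(err)
	}
	fmt.Printf("apparent size: %d bytes\n", st.Size)
	fmt.Printf("allocated:     %d bytes (%d x 512B blocks)\n", st.Blocks*512, st.Blocks)
}
```

Writing into the file later allocates blocks only for the ranges actually touched, which is why a freshly provisioned volume costs almost nothing on disk.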
So, about that colocation: the most common scenario we want to guard against is a node going down. In that case, if node one is down, for example, then the engine, and the volume with it, will be down of course, but the workload, pod one, will be down as well. So Kubernetes is going to reschedule the pod to another node, the engine just moves along with it, and everything is back to normal. That greatly simplifies our design for the engine, because we don't need one engine to span more than one node, and so we don't need really complex measures to keep the engine itself highly available.

So why was there nothing like this before? The problem is that the engines and the replicas are in fact microservices; they're running as processes. In the first version, when we came up with this, they were all running as containers, as pods, but we hit some limitations later, so we changed them into processes. In the end, though, they are separately orchestrated entities, and it's pretty hard to do this without the help of Kubernetes, if it's possible at all. This mechanism, this architecture we chose, is basically bound to Kubernetes: with Kubernetes' help we can do this; otherwise we'd have to write our own scheduling mechanism to move these engine processes around. That's why you only see this kind of architecture appearing now. It's basically thanks to Docker's ability to package one service in a single container, and Kubernetes' ability to schedule a new pod around without much overhead on your side.

Hey, Sheng, just a quick question. Does every volume effectively have its own engine?

Yes.

And is every engine a separate process?

Yes.

Okay. And does every replica have its own process, or...?

Yeah, every replica has its own process as well. A little bit of history: we originally designed every engine and replica to be a Docker container instead of a process, in the first versions, before 0.6 I think. But later a user came in and complained: "I have a very big machine, so beefy I can run twenty or thirty workloads on it, so I need twenty or thirty volumes. But all those engines and replicas take up pods, so if I really ran them on a single node, that would be 80, or at minimum 40, extra pods, and they just eat the quota, because Kubernetes only allows 110 pods per node." So we decided it made more sense to aggregate them, so that they run as separate processes inside a shared pod, and we save that resource at the pod level. That's why on the next page you'll see something called the instance manager; that's why it exists and how it works now.

Right. Thank you. Any other questions?

These engines and replicas: are they using Kubernetes primitives, or are they not Kubernetes-based? For example, does the engine correspond to a pod or some Kubernetes abstraction?

Yeah, so the engine itself used to correlate to a pod, as I said before, but there was that pod limitation of only 110 pods per node.
Right. So we decided not to consume that resource anymore. Now the engine runs inside a shared pod, and there can be multiple engines running inside it; we call this the instance manager pod. I can explain more on the next page.

Okay, so this is a more detailed view of the architecture on the engine side. You can see that now we have three nodes. Some nodes have a spare disk for Longhorn, like the dark-colored SSDs, which we can use for Longhorn, while the yellow-colored ones we assume are root disks. You don't really want to use those for storage; otherwise you might introduce unwanted disk pressure and so on. So we want to keep them separate. You can also see that the nodes with storage for Longhorn have a replica instance manager running on them; those nodes are potentially able to run replicas. And every node that is able to run a workload using a Longhorn volume gets an engine instance manager running on it.

So let's take the same example. We have pod A, and we want to create the volume for pod A. We have replicas scheduled on two different nodes, node one and node two, and the replica processes are started inside the replica instance managers. The engine process starts inside the engine instance manager on the same node as pod A, connects to the replicas, and exposes a block device to pod A. Pretty straightforward. And if we have pod B on another node, we do the same thing for it.

So the next question is: what happens if node one goes down? As you can see on the previous page, for pod A, the volume has its engine on node one and replicas on nodes one and two. Node one goes down, and pod A and everything on that node goes down too. But Kubernetes decides: I see this node is down, so I'm going to reschedule this pod to another node, and it finds node three. Kubernetes reschedules the pod and restarts it on node three, and pod A still needs the volume. Then Kubernetes asks Longhorn for the volume, and Longhorn sees that there is still a replica of this workload's data on node two; you can see the red replica there. Longhorn starts the engine on node three, connects it to the red replica, and resumes service to pod A. So that's the overview of how recovery works when a failure happens.

All right, any questions?

Yeah. So it seems, basically, the instance managers and replica managers are kind of like DaemonSets: they run on every node, right? They're DaemonSet pods?

Yeah, well, they're in fact controlled by Longhorn; we built a controller for them, because, for example, when a node has no disk available, you don't really need a replica instance manager there. That's why we built them with a separate controller rather than just using a DaemonSet. But every one of them is definitely a pod.

Okay. And in the failover scenario you described, do we also reconstitute a new replica on node three, the failover node?
Yeah. Currently, if there were a node four with available disks, we would recreate the replica there, of course. But node three doesn't have a disk available for Longhorn, and that's why we don't rebuild the replica on node three. And of course, when node one comes back, we can reuse that replica. Does that answer your question?

Okay. I saw the red SSD icon, so I thought node three also had local storage.

Yeah, I wanted to indicate that it's a different kind of disk; that one is for the root file system. The available disks are the ones marked in dark or gray colors. The SSD on node three is not for Longhorn storage, and that's also why we don't have a replica instance manager running there.

Hey, a very quick question, and maybe you'll come to it in a future slide. If, as you said, node one reboots or recovers and comes back onto the network, the engine on node three can then reconnect to the replica that's on node one. But I assume it would have to resync it at that stage, right?

Yeah. Currently, in 1.0.x, we always rebuild a new replica, but in the upcoming 1.1 release we're going to try to reuse the existing replica. Of course, for any replica we use, whether we rebuild a new replica or reuse an existing one, we're going to check and validate the data before we can use it. That's always going to be the case; we can't just blindly use it.

Right. Also, about the recovery workflow you're applying: does that also happen when you don't add any new nodes? Let's say node three was already there, already serving some engines and some replicas. Can it take over serving the engines and replicas of node one when it fails? You don't necessarily have to add new nodes to replace node one. Is that possible?

Yeah, I don't quite get the question.

In this example you showed, once node one failed, you added a new node, node three.

Ah, no, node three was there all along. It just wasn't running a related workload at the moment, but node three was inside the cluster. Of course, if you do add a new node, the new node will get the engine instance manager and pods, and if Kubernetes decides to schedule a pod on that node, that's still going to work. And if you didn't have node three at all, and Kubernetes decided to reschedule pod A onto node two, that would still work too. It's no different; I'm just using node three to make the concept clearer here. The Longhorn engine and the replica don't need to be on the same node, unless you enable a certain feature called data locality. Does that make sense?

So I think these are separate issues, but as far as locality here is concerned: as far as Longhorn is concerned, a pod that is consuming a Longhorn volume has to have a local engine...

Yes.

...but the actual data, the actual replica, can be on a different node.

Yes.

All right. And nothing prevents any node from serving engines for volumes whose replicas live on other nodes, right?

Yes. Well, I don't quite understand what you mean by serving engines, but yes: as long as there is a replica inside this Kubernetes cluster, you can have an engine connect to that replica and serve the volume, on any node inside the cluster, subject to the pod limits on that node.
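A compressed sketch of the failover flow discussed over the last couple of slides: Kubernetes reschedules the workload pod, then Longhorn starts a fresh engine on the new node and wires it to the surviving replicas. This is illustrative pseudologic only, not Longhorn's actual controller code:

```go
package main

import "fmt"

type Replica struct {
	Node    string
	Healthy bool
}

type Volume struct {
	Name     string
	Replicas []Replica
}

// attachEngine sketches what happens after Kubernetes reschedules the
// workload pod onto newNode: Longhorn starts an engine there and
// connects it to every surviving replica. A real controller would also
// validate replica data before reuse, as mentioned in the discussion.
func attachEngine(v *Volume, newNode string) error {
	var usable []Replica
	for _, r := range v.Replicas {
		if r.Healthy {
			usable = append(usable, r)
		}
	}
	if len(usable) == 0 {
		return fmt.Errorf("volume %s: no surviving replica, fall back to backup restore", v.Name)
	}
	fmt.Printf("starting engine for %s on %s, connecting %d replica(s)\n",
		v.Name, newNode, len(usable))
	return nil
}

func main() {
	v := &Volume{
		Name: "vol-a",
		Replicas: []Replica{
			{Node: "node-1", Healthy: false}, // node-1 went down
			{Node: "node-2", Healthy: true},
		},
	}
	if err := attachEngine(v, "node-3"); err != nil {
		fmt.Println(err)
	}
}
```

The key invariant from the Q&A above shows up here: the engine must live where the pod lives, while replicas can live on any node with a Longhorn disk.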
So, just one last question, kind of related to that. I'm assuming an engine is spun up within the engine instance manager as part of a Kubernetes controller receiving a request, perhaps via CSI or something like that; I'm speculating. But how do you make the decision to schedule a replica on any particular node? Is there some logic or determination there, or is it round-robin?

Yeah, so this basically comes down to a series of filters. The first thing is, of course, the node and the disk have to have the space; that's assumed. The second thing is they have to meet restrictions like storage tags: for example, if I say this volume has to be scheduled on disks or nodes with a certain tag, those tags have to be there. The third thing is, with the replica anti-affinity behavior, which is the default, the replicas need to be scheduled on different nodes; they're always going to be on different nodes, and if you don't have enough different nodes to satisfy that requirement, the scheduling fails. And there's a bunch of other scheduling rules that apply. Once you pass all those filters, we get a list of the available disks, and after that we just pick one of them, because all of them meet our scheduling requirements.

Got it. Okay, thank you.

Yeah, thank you. Okay, so that was the engine side, and the next slide is about the manager. This is actually even simpler. We have a Kubernetes cluster, and the cluster wants a volume. Kubernetes talks to the Longhorn CSI plugin through the CSI interface. The Longhorn CSI plugin runs as a DaemonSet on every node, and it talks to the Longhorn manager, which also runs as a DaemonSet on every node. The Longhorn manager's job is to orchestrate all the volumes. The Longhorn manager is in fact a Kubernetes controller: when you ask it for a new volume, it creates a volume CRD object and stores it in the Kubernetes API server, backed by etcd or whatever else, of course. Then the volume controller inside the Longhorn manager watches for the object and sees: okay, a new volume object has arrived, so I need to create replicas and an engine for it. It creates those replicas and the engine, forms them into the volume, and provides it to the user. It's always going to work the same way when you ask for more volumes: the Longhorn manager creates more engines and replicas, orchestrates all those volumes, and provides them to the user.

Another thing that complements the Kubernetes primitives is the Longhorn UI. CSI is normally in charge of create and delete volume, attach and detach, mount, and so on, and now it's adding the ability to do snapshots, which in Longhorn actually maps to backups. But beyond that, the Longhorn UI can also do node management; for example, you can add more disks to a node.
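Before moving on, here is a minimal sketch of the replica-scheduling filters just described: free space, required storage tags, node anti-affinity, then a pick from whatever survives. All the names are hypothetical, and the real scheduler applies more rules than this:

```go
package main

import (
	"fmt"
	"math/rand"
)

type Disk struct {
	Node      string
	FreeBytes int64
	Tags      map[string]bool
}

// scheduleReplicas applies the filters from the talk in order:
// 1) enough free space, 2) required storage tags present,
// 3) anti-affinity: one replica per node. It then picks randomly
// among the disks that survive all filters.
func scheduleReplicas(disks []Disk, size int64, tags []string, count int) ([]Disk, error) {
	usedNodes := map[string]bool{}
	var picked []Disk
	for len(picked) < count {
		var candidates []Disk
		for _, d := range disks {
			if d.FreeBytes < size || usedNodes[d.Node] {
				continue
			}
			ok := true
			for _, t := range tags {
				if !d.Tags[t] {
					ok = false
				}
			}
			if ok {
				candidates = append(candidates, d)
			}
		}
		if len(candidates) == 0 {
			return nil, fmt.Errorf("scheduling failed: placed %d of %d replicas", len(picked), count)
		}
		d := candidates[rand.Intn(len(candidates))]
		picked = append(picked, d)
		usedNodes[d.Node] = true
	}
	return picked, nil
}

func main() {
	disks := []Disk{
		{Node: "node-1", FreeBytes: 100 << 30, Tags: map[string]bool{"ssd": true}},
		{Node: "node-2", FreeBytes: 50 << 30, Tags: map[string]bool{"ssd": true}},
	}
	placed, err := scheduleReplicas(disks, 10<<30, []string{"ssd"}, 2)
	fmt.Println(placed, err)
}
```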
In the UI you can also do backups and snapshots, and you can set up recurring snapshots, meaning you want to take a snapshot or a backup every morning at 1 a.m., say. You can use the Longhorn UI to configure that, but of course, if you prefer, you can also configure it through the storage class. So the Longhorn UI is currently a complement to the Longhorn CSI plugin, and the combination of the two gives you the full functionality we expose to the user. In the future, we're going to introduce a Longhorn CLI as well, to let you program that logic inside, for example, your maintenance scripts.

Okay, so here is the comparison to existing CNCF projects. I'll just go through it line by line. The first row is positioning: Longhorn has always been positioned as full-stack storage software; Rook, which I think is currently graduated, is positioned as storage orchestration; and OpenEBS is also full-stack storage software.

The next row is about the data engine, what's underneath. Longhorn has the Longhorn engine, which we built ourselves. For Rook, I think the most common case is using Ceph. OpenEBS has a bunch of choices, including Jiva, which is in fact a fork of the Longhorn engine from two or three years ago. That's why Longhorn's performance is on par with Ceph and OpenEBS; well, for OpenEBS, I'd say it depends on which engine you use.

On the GUI row, Longhorn has a built-in GUI. For Rook it depends on the engine; I think Ceph has a dashboard. OpenEBS, I think they have a UI, but they provide it at extra cost, if I remember correctly.

For backup/restore and cross-cluster DR volumes: because Longhorn aims to provide those functionalities in the most user-friendly way, we currently have backup and restore built in. We do incremental backup and incremental restore, which is what the DR volume layer is built on. I think Rook, and Ceph itself, doesn't have built-in backup/restore, but Rook can take advantage of third-party software to do so, and I think it's the same for OpenEBS. For the cross-cluster DR volume, disaster recovery, Longhorn builds this on top of our backup/restore feature, and it really gives the user an easy way to keep a backup cluster running with low downtime if the main cluster goes down. I'm in fact not certain of the answer for Rook and OpenEBS here; I haven't seen something similar.

Hey Sheng, with Rook on here just being a storage orchestrator: do you guys plan to extend the way you do orchestration to other storage providers? Is it going to be a good comparison here? Even though Ceph is not an existing CNCF project, I think it would maybe be helpful for the TOC to understand, across the cloud native landscape in terms of storage, how Longhorn fits.

Okay, so what other storage options would you want us to compare to?
I just think, as we take this into the CNCF, if you're meant to present there, that Rook here is maybe not the best comparison. We should maybe look at cloud native storage options, and of course there are tons of them within Kubernetes, and understand how Longhorn fits against those in terms of functionality, because Rook can actually deploy OpenEBS and Ceph and MinIO and many other ones. So I'm just providing a recommendation; I think it would make more sense.

Yeah, we can definitely do that. Part of why we listed Rook here is that when you look at storage projects focused on the block storage level, it's probably OpenEBS, Rook, and Longhorn that get mentioned together most often; that's why we put Rook here. But yes, that makes sense.

Okay, yeah. Maybe Longhorn, Ceph, and OpenEBS would be a better comparison, even though Ceph's not a CNCF project.

Yeah, I think comparing against Ceph and OpenEBS would definitely be better, because in fact I struggled a little when I put Rook in; in my mind Rook basically means Ceph, but Rook is more than that. Thanks.

All right, so this is the status update. Our latest release is 1.0.2, and Longhorn had its GA release about five months back; that happened on May 30, 2020. Also, by the way, just a reminder that Longhorn joined the CNCF last October, so it's now been exactly one year. In that one-year period since Longhorn joined the CNCF, we've grown to 50 committers from 10 different companies. In fact, two of those committers made a very significant contribution to Longhorn: they implemented the ARM support by themselves and submitted the PRs to Longhorn. The Longhorn team took one of them, polished it a little, and now ARM support is going to be an experimental feature in the Longhorn 1.1 release. That's a huge thing we've seen happen in our contributor community.

I also have a bunch of dev stats here, and Longhorn is pretty active: commits per week, 51; issues opened per week, 24; issues closed per week, 18; new PRs per week, 29. Those dev stats come from devstats.cncf.io.

On the right side, you can see that we've had huge community growth since we joined the CNCF. I think the GitHub stars went from, if I remember correctly, about 600 to about 2,000 right now. Slack users went from two to three hundred to close to 900 people right now, I think. And the node count was about 3,000-something, and now we're closing in on 15,000; I think it's 14,000-something. So the growth of the community and the usage of Longhorn is pretty huge; you can see everything jumped at least two or three times, even five times, after we joined the CNCF.

All right, so these are the community-building things we do. First, we actively maintain GitHub and the Slack channel. And I have to say, that's not easy, because we receive a lot of questions from the community, and our goal is to answer them and make sure we meet that commitment.
Right, so if you look at the Longhorn GitHub issues and the Slack channel, you can see that every day we have three or four new issues and three or four users asking questions. So basically the responsibility for me and my team is to answer those questions and help make sure users have the best experience with Longhorn. For us, that's a huge undertaking.

Secondly, we have a monthly community meeting plus office hours, which happen on every second Friday of the month, and the recordings are all available on YouTube; you can check them out. On the Longhorn community GitHub page there's a link to the recordings.

Also, we have moved our infrastructure to the CNCF, and Longhorn runs a nightly test every night; currently the run time is about six to seven hours. The test results, and also the Drone build results, since Drone runs for every PR and every merge commit, are publicly available.

And we have a metrics dashboard, which is public as well; this is how we know the node count. The short story is that we have an upgrade server running publicly inside the CNCF infrastructure, and every hour each node running the Longhorn manager asks it whether a new Longhorn version is available. That's also why you see users get notified of a new version and upgrade very soon after a new release comes out. When a manager sends that request, we know there's one node out there. We don't have any way to identify which node it is, but we see one request coming in, so we count it as an active node. That's what's shown on the metrics dashboard, and it's all publicly available.

Also, we have participated in KubeCon, and for KubeCon EU we hosted a booth and two office hours, plus one session. And we ran a survey and got about 300 responses regarding Kubernetes storage, cloud native storage, and why people are or aren't using it. Unfortunately, in the end we felt the sample size was probably still too small to reach any firm conclusions, so we didn't end up publishing an official report on that.

Okay. These are some of the end users using Longhorn in production, and we gathered all this information from the public user channels. These are not Rancher customers; they're all open source users, not paying Rancher or us for anything. The first one is the Tribunal Regional Eleitoral: okay, I can't speak Portuguese, but it's the regional electoral court of the state of Pará, Brazil, and they're using Longhorn in production as the storage back end for Prometheus, MinIO, and pgAdmin. The second one is a health information technology company. And the third one is Tyk, and they're using Longhorn in their API management platform. So how did we get these end users? We basically just shouted out in the Slack channel, asking them for help with our incubation process. That's how we got these.
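As a side note on the node-counting mechanism above, here is a toy sketch of how an upgrade-check endpoint doubles as an active-node counter. The path, response body, and one-hour window are made-up details for illustration, not the real upgrade server's API:

```go
package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
	"time"
)

// Toy upgrade-checker: each Longhorn manager polls the endpoint
// hourly. The server can't tell nodes apart, so the number of
// requests seen in one hour approximates the number of active nodes.
func main() {
	var checkins int64
	http.HandleFunc("/v1/checkupgrade", func(w http.ResponseWriter, r *http.Request) {
		atomic.AddInt64(&checkins, 1)
		fmt.Fprint(w, `{"latestVersion":"v1.0.2"}`) // hypothetical response
	})
	go func() {
		for range time.Tick(time.Hour) {
			n := atomic.SwapInt64(&checkins, 0)
			fmt.Printf("estimated active nodes this hour: %d\n", n)
		}
	}()
	http.ListenAndServe(":8080", nil)
}
```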
We also reached out to a few users on GitHub who we saw interacting with us really frequently, asking questions and so on, and asked whether they could help, and that's some of the cases here.

And just to confirm: these end users are not commercial Rancher users, and therefore they are using the open source version of the product?

Yes, they're not commercial Rancher users, and in fact, there is no commercial version of Longhorn. Rancher only sells support, so even commercial Rancher users use the same open source product; we just provide them support, as Rancher Labs. But these are not Rancher commercial users either. We thought it was better to show adoption on the open source side, so that's why we reached out this way rather than relying on Rancher customers.

Okay. I'm going to ask a few questions on this, because similar questions came up with another project recently. The reason I'm asking about the commercial Rancher thing is that I want to make sure these users are not using some service or some function that's only available in the commercial Rancher edition but not in the open source edition, if you see what I mean.

Yeah, I see. No, they are definitely using open source, 100%, because in fact we don't make any commercial or proprietary version of Longhorn. Even if they wanted one, there's no way to use it, even for Rancher customers. It's the same for Rancher itself: Rancher is 100% open source, so as a Rancher customer, the version of Rancher you get is exactly the same one you download from GitHub.

Okay, thank you.

All right, so this is the roadmap. In November we're going to put out the 1.1 release, and it's going to include native ReadWriteMany support, which we're doing using NFS on top of Longhorn block devices, plus CSI snapshot support, a data locality feature, and ARM support, which is experimental and, as mentioned, came from a community contribution. After that, we're going to do the Longhorn CLI, SPDK, application backup and restore, and also some other items. So this is just an overview of what we see on the roadmap.

Okay. All right, so that's all. Thank you; are there any other questions I can answer?

Yes. Alex, can I ask? Yeah, so thanks. I have a couple of questions. First of all, if someone already has some existing data somewhere, on a bucket or on Ceph or something like that, is there a way to migrate it into Longhorn, or do they have to manually, you know, recreate the volume?

Yeah, in fact, that question came up a few months back, I think. Currently we don't have a native way to help you migrate from other storage, but you can always do it with Kubernetes: you create a new PVC, mount both the old and the new PVC into a pod, and run cp between them, I guess. But this is one item we're tracking, and we think we can provide some help.
In fact, not just from other storage vendors to Longhorn: we see Kubernetes as providing a very flexible way of moving between storage vendors, so we can probably provide a tool to help you move from any storage vendor to any other storage vendor. That's how we see it.

Yeah, that would help a lot with adoption, I think. And the second question: you mentioned a bit about the snapshots and the recovery and all this stuff. Are you utilizing the CSI features, the new methods for snapshots and restore and all these things?

Yeah, so CSI: on the roadmap, in fact, that feature is coming. Sorry, yes, CSI snapshot support. The snapshot in this context is mapped not to the Longhorn snapshot but to the Longhorn backup, because a backup is something you can migrate outside of the volume, whereas a Longhorn snapshot always stays with the volume. So the CSI snapshot support will be there in 1.1, on the roadmap.

Okay. And a final suggestion, and a question if you can answer it, about the engine. I wasn't aware of the project; I'm just learning about it today. The project reminds me a bit of Alluxio, because it is also a storage engine, not so much like Rook, because you are a storage engine yourself. So maybe some comparison with Alluxio might make sense. I think it's more similar, because you have your own storage engine.

Yeah, so I haven't heard that name before, and I haven't looked into how they do it, so we can have a look at the project.

It's similar in having different layers of storage; they have something similar. They're not as componentized, as far as I remember, but yeah, just for you to look at. Thanks. Thanks for the presentation.

For what it's worth, I believe Alluxio is more of a caching engine than a storage engine.

Yeah, yeah. So basically my point was also that it's kind of difficult to compare with Rook, because Rook doesn't provide its own storage engine, right? Instead it would make sense to compare with something that has its own. And if I'm not mistaken, you can use Alluxio as a standalone backing store, without another one behind it, right?

Right.

Hey, Sheng, just a few other things in terms of the incubation criteria. It looks like the number of committers has improved quite a lot recently. Would you be able to share maybe some ratios of Rancher committers versus external?

In fact, let me see if I can pull that up right now. Sorry, the light is on my face. It should be on the... sorry, this is the dev stats page, I always forget. Oh, yeah, okay. No, this is the latest commit status. Sorry, the commits table. Yeah, this is the contributions; let me see, commits. So the supermajority is still coming from Rancher Labs, and there are others from independents, and I think others too: the CNCF helped with the website and a few projects, and there have been some contributions from the SUSE side recently, and some others. So this is what we have right now, I think.
All right, so, any other questions?

I think that's fine. Would it be possible to share a PDF or a link to the presentation?

Yeah, absolutely.

Excellent. All right then, does anybody else have any questions for Sheng? All right, in that case, thanks. Thank you so much; this has been a really great presentation. I think we look forward to making our recommendation to the TOC.

Thank you. Thank you. All right. Thanks a lot.