I can't hear you. You're muted. I'm always that guy. Sorry. Good morning. Hi, Alex. Hi, Kiran. I can hear you fine. We'll just wait a couple more minutes for a few more people to join before we start the presentation. We'll wait until five past to give a couple more people time to join. I'll be right back, Alex.

Okay. Welcome, everybody. Good morning, good afternoon, good evening, depending on where you are. The main agenda item for today is an OpenEBS presentation; Kiran and his team are here to present it. For background, OpenEBS is a distributed block storage, cloud native storage system. It's currently a sandbox project of the CNCF, introduced as a sandbox project, I guess, just before Barcelona. That's right. Perfect. The team have raised a proposal to move it into incubation, and we are looking to prepare the review. Kiran, how would you prefer this to run? Do you want the questions to be interactive, or should we hold them till the end?

I think let's keep it interactive. What I've done is put together a few slides just to set the context. I may or may not be able to answer all the questions today, but I will try to follow up on them in the next calls with the right people. Let's do it interactive. That sounds good. Thank you very much. I'll share my screen. I also have some of the team members on, and I've shared this link with them just in case my internet goes off or things like that.

All right. This is a quick summary of the project. Since we became a CNCF sandbox project last May, around Barcelona, we have had around 35 new companies contributing in some capacity, and that's all visible in DevStats. This is part of the annual review as well as the incubation PR. We seem to be attracting at least five contributors every month. We have moved to a monthly release cadence, and we're onboarding new reviewers and new contributors. So far we have not added new maintainers compared to what we had at sandbox, but we are working on that at the moment.

A quick recap for those of you who are just hearing about OpenEBS, and then we'll get into quick updates on what we have done since OpenEBS 0.9, which is where we were when it entered sandbox; right now it is at OpenEBS 2.0, and I'll provide a summary of the changes. OpenEBS is hyperconverged storage, and we call this category of storage engines container attached storage, because the storage services themselves are delivered as containers. These containers are orchestrated by Kubernetes and managed via Kubernetes custom resources as well as native resources. One thing about OpenEBS: with all of its data engines, we try to run them in user space to make them portable, able to run on any kind of Kubernetes platform. That's been one of the unique design constraints we have set for ourselves. And at the same time, most of the adopters that we will see have mentioned that OpenEBS is easy to use. This slide briefly introduces what container attached storage is.

Kiran, if I may interrupt: there was some concern in the past around relationships with other existing projects, I think Longhorn was one of them. Are you going to cover how those projects fit in here? Yes, across all the storage engines, and we have covered that information in the annual review PR as well: how OpenEBS compares with Longhorn, how OpenEBS compares with Rook/Ceph.
We can continue that discussion. And sorry, just to be clear, my question was not how it compares with competitors, but how it incorporates some existing technology. Yep, we'll do that.

All right. Most of the orchestration work is offloaded to Kubernetes; I think that's one of the primary differentiators between container attached storage and implementations of storage services that do their own orchestration. What the CAS engines are about is data services: taking care of storage management on the Kubernetes nodes, making sure that data is highly available for the applications, and enabling data protection on them. Those are the services implemented within CAS. Some examples: yes, we'll talk a little bit more about Longhorn today. All of these, in my opinion, fit into this category, and Rook is a great example of something that could orchestrate all of these engines, the way it does with Ceph and other such storage.

For somebody to get started with OpenEBS, these are some basic commands. You use a single install command to install OpenEBS; it ships with some default storage classes as well, and then you launch the application. The picture here shows how it works: the application PVC is backed by a PV that comprises a target and a set of replicas, and the target is responsible for distributing the data across the different replicas.
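For reference, a minimal sketch of the install flow described above, assuming the operator manifest is applied straight from the project's published URL (the URL and namespace follow the OpenEBS docs of this era; verify against the current documentation before using):

```sh
# Install the OpenEBS operator (manifest URL as published in the OpenEBS docs
# of this period; check the project docs for the current location).
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml

# Confirm the control-plane pods came up, and list the default storage
# classes that ship with the install.
kubectl get pods -n openebs
kubectl get storageclass
```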
This is a slightly higher-level architecture diagram of what the different components are and how we categorize them: cluster-level components and node-level components. Most of the operators, including the CSI driver controller components as well as the OpenEBS operators themselves that manage the engines across the different nodes, fall under cluster components. All of these cluster operators work via Kubernetes resources, so you can integrate them with other third-party, open source, or commercial products. Some of the things we have integrated with are Prometheus, and with Kubera, Prometheus and Cortex in fact. In terms of node components, we divide them into three categories. One is the management of storage and volumes: the CSI node components or agents, and the node disk manager (NDM). NDM is the OpenEBS component that helps manage the block devices attached to a node, in terms of discovering them, allocating them to the storage engines, and cleaning them up. Then you have the data engines, which are in the IO path and always run as node components; they get spun up based on the volumes that get created, along with the setup of storage pools for engines like cStor or Mayastor.

I'll get to the question Quinton asked in a moment. This slide shows a slightly more detailed interaction between the various components. We have a collection of data engines, and we can segregate them as replicated versus local PVs. Replicated PVs are used by applications that need high availability from the underlying storage. Of the three replicated engines we support, Jiva was taken from a fork of the Longhorn engine, even before Longhorn was part of CNCF. There have been slight differences in the way Longhorn and Jiva have progressed, in how they handle the data availability scenarios; that's the primary difference. While the core engine parts are the same, the high availability aspects are where they differ, and they continue to be forked projects. All of cStor, Mayastor, and Jiva have a similar architecture, where a PV is backed by an iSCSI target. Mayastor is starting to support different types of access targets, but iSCSI is the most common one used by our end users as of today. The target is a Kubernetes service associated with a deployment object; it takes in the IOs and writes to the various replicas. The replicas are what differentiate a Mayastor, cStor, or Jiva based storage engine, and which one you pick typically depends on the kind of storage capabilities available on the node. We'll get into the differences between these three engines. Any questions up to here before we get into the specifics of each engine?

A couple of questions, actually. If we look at the previous slide, the different components of OpenEBS: I believe when we did the sandbox review, we talked about the OpenEBS core and the data engines. Just to clarify, when we're looking at the project at this stage, in terms of the incubation submission, does that include things like the operator and the disk manager function as well? Or are those external components?

It definitely includes the node disk manager components; these are all written by the OpenEBS authors as part of the OpenEBS project. During sandbox, the cStor data engine, for example, was split between the changes that the OpenEBS authors wrote and what comes from ZFS. The modifications done to ZFS were kept outside, but libcstor, the part that adds the replication layer, is part of the CNCF donation. If I understand the question correctly, Alex, we should get into the details of all the repos that come in as part of this incubation.

Yeah. And I only mention this because we had a discussion with another project where we skipped over some of the dependent repos, and I just want to make sure that we're covering all the dependencies for OpenEBS as part of the review. Sounds good. Alex, I was just observing that on the TiKV project as well. Yeah, exactly. We'll do that. There are around 70 repos right now, but most of them are dependent repos that we maintain just for build compatibility; the core repositories are spread across 14 to 15 repos, and I'll share that list. Okay, thank you.

On this, and mostly just out of curiosity: is the iSCSI target a component of the engine, or is it a separate component in this model? Yeah, the iSCSI target is part of the engine. That particular container actually gets spawned when a volume is created; it's not always running. Per volume, there is an iSCSI target pod that gets created. Right. Okay. In fact, all the green components are engine components; I have not shown the operators or the CSI driver components here. The point is that whenever you create a PVC for a specific storage class, the operators take care of launching these data engine components for that particular PV.
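A hedged sketch of the PVC-driven flow just described: creating a claim against one of the default storage classes is what triggers the operators to launch the target and replica pods for that volume. The class name here follows the defaults that shipped at the time and may differ in a given install:

```yaml
# Hypothetical PVC; storageClassName follows the historical OpenEBS defaults.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol-claim
spec:
  storageClassName: openebs-jiva-default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```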
I may be getting into too much detail, but that iSCSI target: is it co-located with the volume replicas, or is it co-located with the attached container? That's actually controlled by a storage class policy that can be configured. There's no hard tying of the target to either the replicas or the application pods, but for performance or similar reasons users can decide that it has to be on the replica pods, or that it moves along with the application. Okay.

Could you speak to where thin provisioning comes in around that iSCSI target? Is that all done by the back end? Right. Thin provisioning is handled by the replicas here. Yeah, I think that's a great question. These engines do synchronous replication, so if you have asked for a 5GB volume, we need to have 5GB of available space on all of the replica pods. Now, it is possible for those replica pods to serve multiple iSCSI targets and store the data on the physical hard drives, so you could have, say, 50GB of overall capacity at each replica but provision 100 volumes. We monitor the capacity usage at the replica, and you can expand with additional disks on the replicas themselves. And the new data operators that I'll talk about later can help: say one of the replica pods was scheduled to take in data from multiple volumes and is running out of space; we can shift that replica to another node where capacity is available. One distinction also is that this is not scale-out storage. For example, you cannot consume the capacity on different nodes and present the aggregate of all that capacity as one PV; whatever capacity you provision has to be available on each of the replica nodes.

So just to clarify that point: while the capacity for a volume can't exceed the capacity of an individual node, the volumes effectively get distributed across all of the nodes that have available storage, right? That's right. Yeah. Was I able to answer the question? Thank you.

So now we get into some specifics on how these engines are different. To reiterate Quinton's point, the Jiva repo that we have is a fork of the Longhorn engine, which is one of the components of the Longhorn project. The Longhorn engine works in combination with the Longhorn controller and Longhorn UI, and availability is maintained by some additional operators which add and remove replicas from the controller. But in Jiva's case, there is a layer added on top of the Longhorn engine which automatically reconciles itself, reconnecting replicas to the controller on node restarts and node failures, and it does not depend on a control plane for the volumes to continue to work. That's where the major difference comes in. This is also one of the reasons why we have put a limitation of around 50GB capacity on Jiva, while a similar limitation does not exist with Longhorn. And with Jiva, we need at least two replicas online for the volume to be readable and writable, whereas Longhorn works in read-write mode with even a single replica, because its control plane or UI controls the quorum logic and decides who gets to be the master; in Jiva, the quorum logic is built in. That's also why we wanted to reduce the time taken for rebuild and make sure a volume is tuned to have two replicas available most of the time.
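As an illustration of the replica discussion above, a sketch of how a per-storage-class replica count is typically expressed in OpenEBS, using the cas.openebs.io/config annotation style from the project docs of this era (the class name and value are illustrative, not taken from the talk):

```yaml
# Hypothetical Jiva storage class pinning volumes to three replicas.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-3rep
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi
```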
The flip side is that Jiva puts the volume into read-only when it loses quorum, and a manual operation is required to clear that. Also, Longhorn went ahead and made some changes in terms of backup support; in OpenEBS, backup and restore is handled by a separate OpenEBS project, and it uses Velero. All those changes are where Jiva differs from Longhorn. The other difference in the code is space reclamation: to maintain high availability, Jiva and Longhorn both create internal snapshots. Jiva automatically purges them beyond a configured threshold, while Longhorn added that capability later on, as a purge of those snapshots via the UI.

Just from a roadmap point of view: if I recall, around sandbox time cStor was the primary engine where most of the development was happening. Is that still the case, or are you moving to the new Mayastor engine?

When we started the project, we thought there might be one engine suitable for all workloads, but we soon realized that was not going to be the case: depending on the capabilities of the storage node and the application demands, there was a need for different types of engines. For example, there is the local PV that we mentioned here, which I think was introduced around the same time as sandbox. For a lot of users, just carving out a block device from the local storage that's available, typically discovered by NDM, was itself sufficient without adding cStor or Jiva on top, and that local PV has taken on a journey of its own based on feedback from the users. Now we support four variants of local PV in OpenEBS. Based on this feedback, we are continuing to support all the engines as the OpenEBS community at this point, but restricting each to the use cases it is suitable for. For example, Jiva is mainly suitable for lightweight workloads, and we have built-in ARM support for it as well. When there are no external hard drives or extra devices available on the node and you need replication capability, that's when Jiva is preferable. But if you have hard drives or SSDs, or you want to be able to expand storage on the fly, then cStor is preferred; cStor also has built-in capabilities like instantaneous snapshots and clones, which we don't plan to add to Jiva. And Mayastor was mainly intended for performance. With cStor, we couldn't drive up the performance to a large extent because of our dependence on the ZFS technology there, so we started working on Mayastor, which solves some of those bottlenecks. It's inspired by the work we have done previously on cStor and Jiva, and Mayastor is the new engine for hardware where NVMe devices are available and you have a lot of CPU power to drive up the performance. All right, thank you. So the quick answer to that is: we plan to continue supporting all the engines, and I have a roadmap slide where I mention what we're planning to do on each of those aspects as well.
Another interesting use case we continue to see is ReadWriteMany support, which many people ask for; we support that via NFS, and it can work with pretty much all the block storage options that we have today. And we'll see in the adopters section that CNCF itself uses local PV plus NFS to run most of the DevStats portals. Any questions before I move on to the history and current state of the local PVs that we have?

With the local PVs, could you maybe spend two minutes to differentiate how this is different from, say, the local volume types which are native to Kubernetes?

Right. The Kubernetes local PV type, the persistent volume source where you specify `local`, is the same thing that's used in almost all of these local PVs. That source, which forces you to pin the PV with node affinity, is the PV spec that we use. But how do you actually give the path to it? Do you hand it a static device, like the Kubernetes static provisioner does, which goes through a list of mounted devices, creates the PVs for them, and then lets the Kubernetes storage class and scheduler take care of scheduling the applications onto them? That's the core functionality. But how do you provision the storage for these local PVs? That's where OpenEBS has helped. And I've noticed this pattern is increasing; I saw a KubeCon talk around TopoLVM, which is also doing something similar. So these projects, OpenEBS local PVs as well as TopoLVM or Rancher's host path provisioner, are all built around that core concept, and how you make it easier for users to manage that local storage is where the differentiation comes in.

There are three variants I've listed here; there is a fourth one I will add, probably by the next time we talk about this. The main intent when we went towards local PV was the capability we already had with NDM to discover the block devices and partitions attached to a node, so you can dynamically claim a device instead of using the static provisioner. That's where we started off. But then we soon found out that there are nodes where devices are not readily available, and people want to use some kind of host path, creating a new directory under a base directory. That's where local PV hostpath came in. The provisioners for these two are the same and are based on the Kubernetes external provisioner; we're moving them to CSI-based implementations. ZFS local PV was an interesting ask from the community: they liked the concept of local PVs, but they also wanted resiliency on top, basically protection against disk failures. So you create a zpool, and then each local PV is actually backed by a zvol or a ZFS dataset. This one is the most comparable to the TopoLVM project that was presented at KubeCon. That makes sense. And the one that we are adding now is called rawfile. Local PV hostpath, local PV device, and the Kubernetes static local PV all share one limitation around enforcing quota on the device: typically the volumes can grow beyond the capacity actually granted to them, and applications can write beyond that. ZFS local PV can restrict that because it gets quota management via ZFS, and LVM-based provisioners get it via LVM. The new rawfile variant is based on hostpath, where we put up a sparse file that helps contain the capacity used by the application.
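To make the hostpath variant concrete, a hedged sketch of a local PV hostpath storage class in the style the OpenEBS docs of this era use; the class name and BasePath value are illustrative:

```yaml
# Hypothetical local PV hostpath class: each PV is a directory carved out
# under BasePath on the node where the workload lands.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/openebs/local
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
```

WaitForFirstConsumer matters here: because the data is node-local, the PV can only be provisioned once the scheduler has placed the pod.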
Hey, Kiran, just a related question. I'm not familiar with NDM. Does NDM discover local disks and advertise them as local PVs? How does NDM work? What does it do?

That's a great question. NDM discovers the local devices. It's highly configurable via a ConfigMap as to what kind of devices you want to discover, based on path filters and properties of the devices. It creates Kubernetes resources called block devices; it does not create PVs. It exposes BlockDevice resources, and when you want a local PV or a cStor engine to be created on those devices, the cStor operators or the local PV operators can request a block device via a BlockDeviceClaim, similar to the PV/PVC concept. The NDM operator will pick whichever block device is available on the node and bind it to the BDC, and the higher-level operators can take that block device and consume it. NDM itself is independent; it can actually be used for other purposes and by other operators. It need not be tied to, it's independent of, the other OpenEBS components.

Just to add to that, I think NDM is actually an interesting component on its own, because it effectively allows you to create Kubernetes objects that represent the individual disks on the nodes, right? So you can now compose those disks and use them for different purposes, either for the local PVs or for the data engines. Yep, that's right. Thanks.

Thanks. A question about local PVs, and excuse me, I'm not that familiar with some of these other technologies, but is there any data management? Is a local PV by definition empty when you create it, and is all the data deleted when it goes away? Or do you have some mechanism for restoring the data into a local PV if it's not already there?

Yeah. The way it is today, it is an empty device or directory that you give to an application. Once the application deletes that PV, it comes back to the NDM operator, and NDM can take care of deleting the data. It actually marks the device as released, does the deletion in the background, and marks it as unclaimed once the cleanup process is completed, so it can be reused by another application. In terms of data services, if we want to put some data into the local PV before it is given to the application, one approach could be to use the volume data source mechanism. We have not done that yet; it's definitely something we have been thinking about. The other data service we provide with local PV, again something the community asked us for, is making sure we can take backups of it; we use the Velero restic based backups for local PV as well. In the case of ZFS local PV we can be a little smarter, because we can take incremental snapshots and send those to the backups. Those are the two primary data services we offer. And with ZFS local PV, we have actually experimented with encryption: we can provide encryption support, encryption at rest, on top of ZFS local PV as well. Thank you.
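A hedged sketch of the claim flow Kiran describes above: a BlockDeviceClaim asks NDM for a device of a given capacity, and the NDM operator binds a matching BlockDevice resource to it, mirroring the PV/PVC pattern. Field names follow the openebs.io/v1alpha1 API as documented at the time; the names and size are illustrative:

```yaml
# Hypothetical claim; NDM binds an unclaimed BlockDevice that satisfies
# the requested capacity, and a higher-level operator then consumes it.
apiVersion: openebs.io/v1alpha1
kind: BlockDeviceClaim
metadata:
  name: demo-bdc
  namespace: openebs
spec:
  resources:
    requests:
      storage: 10Gi
```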
Okay. So this is what has been keeping us busy; this is just a quick snapshot of the things we have done, at a very high level. We can dig into each of these for further details in the upcoming calls, or if there is something quick, I can probably answer it now.

NDM was at version 0.3, I think, when we went into sandbox. A lot of enhancements came from all the sorts of clusters where we were deploying NDM: we learned a lot about how block devices are actually attached to Kubernetes nodes and the different variations in cloud environments with respect to how block devices are represented. All of that was factored into NDM, and we are now able to successfully detect almost all types of block devices. The most challenging were the virtual devices and partitions: we could detect them, but because there was no unique identifier like a serial number or WWN, things would change in terms of device path on node reboots, and it could become difficult to know what a device's previous name was. Adding support to handle those situations is the main thing I want to call out for NDM. Like each of the engines, NDM is its own project, with its own maintainers and reviewers, and there is a lot of interest in adding features to it. Right now there are a couple of alpha features in NDM that we want to take forward, around metrics support at the block device level: SMART metrics on the block devices, for example temperature, or characteristics like error rates, and how we capture and export them if the device supports it. Those are in progress. NDM is controlled via Kubernetes CRs right now, but there have been asks for the ability to control it via a gRPC API, so we are also adding a gRPC API layer to NDM to list and discover devices and perform some operations.

Jiva, since we took it from Longhorn, had been tested for almost two years before we called it stable, and a lot of users had already started using it. A few things were causing problems for users around Jiva: volumes going into read-only, especially when capacity was growing, and also, if you create a 5GB volume and the application is such that it keeps reading and writing the same blocks, the snapshotting technology used to maintain high availability would end up creating a lot of internal snapshots that consumed a lot of space. As users ran it for a year and more, they started complaining that a 5GB volume on a replica was taking three times the capacity. We initially did some CLI work to reclaim the space, but we have now automated that, and that went in as part of 2.0. A CSI driver is definitely where we want to move for this engine as well, and that's available now. One of the primary drivers for us to go towards the CSI driver, even though it doesn't yet support all the other capabilities like snapshots and clones, is the ability to remount volumes when they become available again. Right now the iSCSI PVs need some kind of manual intervention from the user to remount them if they get into read-only; that additional capability is coming as part of the CSI drivers.

cStor was in early beta when we entered sandbox, and we kept it in beta for a very long time.
The initial users gave us a lot of feedback on the data operations they wanted to perform on top of cStor. The v1alpha1 spec we went with was making it very difficult to support all those data operations, so we went back and changed the schema a little, based on how users actually wanted to use it. In fact, some elements were missing from the YAML; when we collected users' YAMLs, we found they were putting comments into the YAML they used for creating the cStor pools. We took those comments and converted them into a new spec, so the new cStor schema is up for beta now, and this time we went directly with a CSI driver for it. So for the new engines we are going with CSI drivers from the start, and for the older engines we have started the work of supporting CSI drivers as well.

For local PV hostpath and device, there were a few configuration asks. For example, with the device-based one, if there are NVMe SSDs and local SSDs available on a node, how do you tag those block devices as NVMe or SSD and use that information to put, say, MongoDB on the NVMe SSDs and some other application on the SAS SSDs? Those fixes went in. We are in the process of adding capacity-based scheduling; we did that for ZFS local PV, and we want it for hostpath and device as well. But we also saw that there is a Kubernetes enhancement around that, so we are trying to see how to integrate with that work instead of writing all of it into these provisioners.

Mayastor is the new one, which we basically started from scratch based on our experience with the older engines. It's a new engine written in Rust, and it also supports a CSI driver. We just released 0.3 and it's in early alpha: replication capability is built in, node-level HA is supported, and we plan to add snapshot and clone capabilities to Mayastor as well. We already spoke about the rawfile local PV. The other important thing we did was around upgrades: post sandbox, we now support seamless upgrades, and they can actually be automated via Kubernetes jobs. Compared to many of the other features, I think this one has done the most to ease the pain of users. A lot of the community has come together to help us build OpenEBS for ARM and PowerPC; we still mark that as alpha because we have not yet enabled the e2e pipelines for it. Other work that we have done as part of 2.0: we open sourced the e2e pipelines that run OpenEBS on every build and every release across different platforms, for example GKE, AWS, bare metal, and VMware. And one of the spin-offs of OpenEBS was Litmus, which helps us stabilize and test resiliency by introducing chaos. In fact, OpenEBS was where the chaos work first came from: how do we test this storage? Since we are now distributed and running in containers, how can we kill all of this and make sure data consistency is maintained? That's how we started that project, and I think it's taking its own wings right now. But a lot of improvements in terms of e2e have gone in as well. Any specific asks or questions on this slide? I just have a couple more.
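On the block-device tagging just mentioned, a hedged sketch of how that workflow is typically expressed, per the OpenEBS docs of this period: label a BlockDevice with a tag, then point a device storage class at that tag. The tag value and class name are illustrative:

```yaml
# Hypothetical storage class that only claims BlockDevices labeled
# openebs.io/block-device-tag=nvme, e.g. to place MongoDB on NVMe SSDs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-device-nvme
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: device
      - name: BlockDeviceTag
        value: nvme
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
```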
Hey, Kiran, could you talk a little bit about the ZFS dependencies? Are those an external repo, or are they part of the IP that's being added to the CNCF? And if so, how are we dealing with the licensing type issues?

For ZFS local PV, ZFS is definitely outside; just like anything else you install, you set up ZFS outside of it. Now, when it comes to cStor, I've actually had some conversations with Chris on this, in terms of licensing. Core ZFS itself is separated out; it's in a separate repository, so it can be pulled in as a library. There are some discussions that maybe it can be maintained as part of a separate organization and not part of CNCF; I think we'll work that out. We have run a license scan on this, and that's the only thing that has CDDL licensing; the rest are compatible licenses, or actually Apache itself.

So just to clarify, it seems like some of these external code bases are forked and some of them you just incorporate directly. Is that true? Yes. And I think it's going to be very important to make very clear which are forks and which are just external dependencies, some of which are optional, I assume. So if you don't use the ZFS stuff, then you don't incorporate that repo and you don't inherit any licensing complications; but if you do, you need to be aware that you're pulling in a third-party component. Exactly. I think it's very important to make very clear which of them are third-party external dependencies versus forks of such things. Yeah, maybe we can update the PR accordingly. I think that's a great idea, Quinton, so we can understand what we need to consider for trademarks and what we need to evaluate as a true dependency; it probably also needs to be looked at as part of due diligence. Yeah, I'll definitely follow up with the list of repositories and the kind of dependency we have on each of them. It's a great thing to continue on the incubation PR as well.

Hey, just a quick one, though. Components that are dependent on ZFS, like cStor and the local PV ZFS, for example: are they usable without the ZFS components? So cStor pulls in the dependencies: the cStor pod, when you run it, actually contains the user-space ZFS code, which is a modified version of ZFS. That's probably where we need to have further discussions and see how to deal with it. In the case of local PV ZFS: it started with supporting the CSI driver and additional options like capacity scheduling for local PVs, and ZFS is one of the storage options that you can configure. You could easily change that to, say, not create a zvol but create a directory, just like a hostpath directory, or actually use an NDM block device instead. So the code in local PV ZFS is reusable for other local PV options that it can support. Okay, but for cStor, it's a core dependency. Yes, for cStor it's a core dependency. Okay. I'm not an expert, but I think we might need to ask the CNCF for some legal advice here, given that the IP the project is dependent on isn't actually part of the project. I will go ahead with those conversations and do the needful there, Alex.
I think it should be part of due diligence as well. All right. Thank you. This slide just gives a snapshot of areas of improvement, things we are already planning to do as part of the upcoming 2.x releases. We have a monthly cadence, so some of these will come this month, some next month, and so on. The features are picked up based on the monthly product review meetings we have with all the maintainers; it's open for end users to chip in, and we also pick things up based on the GitHub issues raised by end users. This is just a plan. A few other things we definitely want to improve on are the end-user documentation and the website. In general, we have done very little in terms of OpenEBS advocacy; whenever we now talk at KubeCon or elsewhere, people come and tell us that they've already used OpenEBS, or we only find out that someone like Gravitational has integrated OpenEBS when a community user comes and asks. So we need to do some active work there, and on contributor onboarding. CommunityBridge and GSoC participation have been helpful; we've onboarded contributors and reviewers through those programs, but we can definitely do a lot more in that regard. Those are some of the things we want to continue to focus on as we go forward.

Just to briefly cover the different adopters: we started this exercise once we started thinking about incubation. We opened up an ADOPTERS.md file and created an issue where people can comment, and around 25 users have commented on how they use OpenEBS; they're all listed on GitHub. Just to highlight some of them: Arista is one of the oldest, and actually one of the users that influenced us to keep improving on the Jiva space reclamation problem. This is their story; I think they were running on premises, and they really liked OpenEBS because of its ease of use and simplicity, and it enabled them to move towards Kubernetes faster. It always feels nice to hear those kinds of words from end users. This is another one we only recently got to know about through Slack; we hang out in the OpenEBS channel on the Kubernetes Slack. We learned this user had been using OpenEBS since the pre-beta 0.7 stage, has been upgrading, and is currently on version 1.10. This is a use case where the local PV options get highlighted. This is definitely a pattern we see: workloads are becoming more distributed in nature, and demand for local PV just keeps increasing. I did have a placeholder slide, but if there are further questions, we can take them up.

So, the licensing: when we went over this at sandbox, there was some concern over it being compatible with the requirements for CNCF. I still think we need to work through that; we've already talked about the dependencies as well. Sure. At least from the presentation, I haven't done any due diligence, I apologize, but it still seems like that's a problem we need to rectify before it could be considered for incubation. Thank you. Sure, we'll take that up as one of the top items and follow up. Yeah, and just to be clear, I think this was very clearly highlighted years ago.
When the project entered sandbox. So personally, I'm a little surprised that it hasn't been rectified yet; it sounds like it hasn't been done in two years and isn't going to be done anytime soon. And, to be perfectly blunt, I can't see this going into incubation anytime soon either, because it's going to be a hard requirement.

I think my hesitation in answering is mainly that we need more legal expertise, but we did do something, Quinton, after that feedback: based on the suggestion from the CNCF legal team, we split the code base into cstor and libcstor, and libcstor is what was part of the CNCF donation, not the cstor component. But I do want it to go through a more rigorous license scan, and I've been having conversations with Chris, who said we can come up with some solution. So we will work that out.

And we're probably out of time now, but it might be worth summarizing. There's always a lot of detail in these due diligences, especially around licensing, but the high-level requirements are actually fairly simple and pretty straightforward to understand, and maybe I can summarize them in 30 seconds. The basic requirement is that the project should be usable without depending on anything which has a license that is incompatible with the CNCF open source license. And then any optional components that do not fall into that group, i.e. optional dependencies that have incompatible licenses, should be very clearly called out, so that people can explicitly adopt them once they decide that they're comfortable with the licensing restrictions they're taking on. There's a ton of detail underneath that to make sure it all actually checks out, but it sounds like, if I understand correctly, we're fairly far away from even that basic requirement. Correct me if I'm wrong, but that's what it sounds like.

Yeah, I think a NOTICE.md has been added for that. But I'm with you all in this regard, just as a CNCF member and contributor: we need to get this done, and we have to do it the right way, so I'm not biased in answering that question. We will follow up on this, Quinton. I think we've definitely done the split, but I want a clear legal answer; I don't want to give a vague answer around that, and that's the only reason I'm hesitating to go further on it.

Hey, Kiran, I'll start a thread with Chris and the co-chairs and yourself, so I can loop you in and we can understand this. Alex, I can probably add you to the thread that I've already opened with them, and we can take it forward from there. Yeah, that's fine too. I think, just to set expectations: as Aaron mentioned, we probably need to get that step out of the way first, only because, with the emphasis Quinton described on making the dependencies available, it's kind of hard to move a project or a repo into incubation if any of the dependencies aren't compatible. So we just need to sort that out as step one at this stage. Got it. All right. Thank you, everyone, and thanks, Kiran, for the presentation. We're looking forward to working together on this.
And we're just a little bit over time now, so unless anybody has anything urgent, I think we'll call this meeting to an end. Thanks. Thank you. Thank you. Nice presentation, by the way. Thank you. Thank you.