So just for the benefit of the rest of the group, the agenda today is to go through an OpenEBS presentation. We're considering submitting OpenEBS as a CNCF project. Evan or Kiran, do you want to give some more background before we start the presentation? Sure. Hi, Alex. Hi, everybody. This is Evan. We will of course focus on the demo and the how: here's OpenEBS, and how does it work. Just quickly, while Kiran is sharing the slides: the company is called MayaData and has been around for a number of years now. OpenEBS was started in late 2016, and our mission is something we call data agility. We can talk a little more about that, and I'm happy to answer any questions about the business. It's an open source project that's a little more than two years old now, and it has a good number of users. We are by far the largest contributor. We're about 70 people, to give you an idea, and there are 350 total contributors, many of them small contributors. So one reason we would want to contribute the project is to make it more welcoming to other companies and other contributors. Kiran, what else would you add? Or are there questions before we dive into the architecture and what OpenEBS is? Thanks for having us today. Very welcome. This is a great opportunity to learn about OpenEBS. I guess we can start through the presentation. Do you prefer to have questions at the end, or should we make this interactive? Let's make it interactive. I have around 20 minutes of planned presentation, and that includes the demo, so we can keep it interactive. I'll try to keep it brief; this team is very well aware of the storage space. But I would typically introduce OpenEBS by comparing it with Calico or Flannel: the same way they provide network capabilities by using the underlying NICs, OpenEBS provides storage services to the pods by using the devices that are already attached to the nodes. Evan gave a good introduction on the background of the project. A little bit about the design principles we had in mind when starting it: we wanted it to be completely microservices based, and we quickly moved on to make it Kubernetes native. So the entire project is delivered as containers and orchestrated by Kubernetes itself. The other thing that resonated well with the user community is that it's very easy to set up, and you get the same ease of use and the same workflow whether you're running it on premise or in the cloud. Because it runs as microservices, you use the same set of management tools that you use to manage Kubernetes itself. Hey, just a quick question here. It's obviously running in containers, but are there any kernel modules or system-level dependencies that need to be considered, or is it all user level at this stage? Good question. It is definitely all user space; it doesn't have any kernel dependencies. The PVs that are exposed are exposed as iSCSI, so the iSCSI initiator is the only dependency it has to connect to the storage services, but the actual storage layer itself does not have any kernel dependencies. Right. Okay. Thank you.
Evan actually did a blog on container attached storage, and this category, about a year ago. Now there are many storage companies that fall into that category, StorageOS, Rancher Longhorn, and so on, all of them being this CAS kind of architecture, which basically means that they run within the Kubernetes ecosystem without having any dependencies outside it. We did present OpenEBS about a year ago to this same group. We were at the 0.5 version at the time. Since then a lot of things have happened in terms of stabilizing the product, and we have a lot of users running it in production right now. So we are at the stage where we were thinking, let's get some additional feedback from this group and see what the group has to say about pushing it into the CNCF sandbox. All right. I'll give a quick demonstration of how it works, and then we will dive into the different components of the OpenEBS architecture. I've switched to slide mode. Typically, you start off with a Kubernetes cluster that has some hard disks or, if you're running in the cloud, some kind of devices attached to it. The cluster administrator comes in and sets up OpenEBS, which installs some control plane components, as I call them. These are cluster-level components, plus some node-level components. For example, the node disk manager is a component that runs as a DaemonSet on all the nodes and discovers the devices attached to each node. It goes ahead and creates disk resources, Kubernetes resources; the configuration of this entire storage system is stored as Kubernetes custom resources. Once these disks are discovered, the cluster administrator can set up what we call storage pools. There are multiple data engines that we support; cStor is one of them. A cStor pool can run by making use of one or more disks attached to the node, and pools get created on all the nodes. This uses a shared pool model: multiple volumes can be served from the same pool. To use that, like any other storage provider, we create a storage class that says: provision volumes from these storage pools. At this point your application developers come in, and they can launch a pod that makes use of a PVC associated with the OpenEBS storage class. As part of the provisioning of the PV, a new pod gets spawned for that particular volume; that's nothing but an iSCSI target. And logical volumes, what we call replicas, are created on different storage pools. This is controlled by policies that you can set via the storage class, in terms of how many replicas you want. The cStor target is the one that does synchronous replication to the different replicas that are configured. Once the replicas are attached to the target, the provisioner goes ahead and creates a PV object, and it basically uses the in-tree iSCSI volume right now to attach to the pod, and then your workload is running. (There's a YAML sketch of these objects after this exchange.) Any questions I can take at this point? No, that sounds fine. All right, cool. So as I said, cStor is one of the engines; we introduced it in 0.7, and it's definitely getting a lot of adoption, because it effectively makes use of the block devices attached to the nodes. Prior to this, since the inception of the project, we had what we call Jiva, a data engine that is kind of a fork of the Rancher Longhorn engine. The setup looks similar, but in the case of Jiva, it uses a host path that's available on the nodes, which could be coming from the same OS disk or from additional ephemeral disks attached to the Kubernetes nodes.
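As a rough illustration of the flow just described, here is what the admin-facing and developer-facing objects looked like in the 0.8-era OpenEBS API. This is a minimal sketch, assuming the pre-CSI external-storage provisioner: the pool name, disk list, and sizes are placeholders, not values from the demo.

```yaml
# Admin side: claim disks into cStor pools (disk name is illustrative).
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
spec:
  type: disk
  maxPools: 3
  poolSpec:
    poolType: striped
  disks:
    diskList:
      - disk-0123456789abcdef   # as discovered by the node disk manager
---
# Admin side: a storage class that provisions from those pools,
# carrying the replica-count policy mentioned above.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-disk
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi
---
# Developer side: an ordinary PVC against that storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-vol-claim
spec:
  storageClassName: openebs-cstor-disk
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```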
And it's the same concept of creating a storage pool, which in Jiva's case is nothing but specifying the host path where the data has to be stored. So the steps are similar; I'll just click through this one. The difference, though, is that with Jiva you end up launching each replica as its own pod, whereas with cStor you can make use of the same pool pod for multiple volumes. Since we started using this host path for the Jiva replicas, and then with pod security policies coming into Kubernetes and restricting the usage of host path volumes, we moved into dynamic provisioning of local PVs, host-path-based local PVs. This is similar to what Rancher is doing now. But we are also seeing a separate category of applications that do their own replication; they don't really need replication capabilities in the storage engine. So from 0.9 we are supporting what we call OpenEBS local PVs. It can make use of the control plane capabilities that we already have, with the node disk manager discovering the disks attached to the nodes, or you can configure it based on a host path: a storage class that can dynamically provision a local PV for the applications. (There's a sketch of these per-engine storage classes after this exchange.) These are some feature comparisons. What it really means is that with cStor, the storage is much more optimized for taking snapshots and clones, and it also has less overhead in terms of the number of pods required to provide the storage services. Jiva supports replication, and local PVs are for applications that don't really need those services. Sorry to jump in: are you leveraging the CSI snapshots? In 0.9 we are actually moving to a CSI driver. Currently it makes use of the external-storage provisioner, so the snapshots come from the external-storage repository. Okay. Another question on the engines: is there a plan to converge on one engine over the others, or are you looking to have multiple engines in the product longer term? We are planning to have multiple engines. Today these are chosen by the administrator via storage classes. We are definitely looking at providing operators that can automatically choose the engine based on the application's requirements, in terms of what features are needed from the underlying storage system. Right, okay. I think you mentioned that Jiva was based on the Longhorn project from Rancher. Is Rancher still involved in contributing to Jiva? We actually forked it and then changed the way some of the components work. The underlying storage capabilities are the same, but the way the management of these engines happens is different in OpenEBS compared to how it works in Rancher itself. We are definitely in sync on the data capabilities, but not on the management. Okay, so it's effectively a standalone fork at this stage? That's right. We sync some of the changes that we make at the data layer, but the management pieces remain independent. Okay. And the replication here is not using the storage's own replication capability; you are implementing the replication part? That's correct. In both cStor and Jiva, we implement the replication capabilities. Okay. And the clone as well? Clones are also implemented at the Jiva level, not at the storage level, or...? Right, it's implemented in the data engine, and it's supported for cStor volumes. Okay. Thanks.
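Going back to the engine selection described above: a hedged sketch of how the Jiva and host-path engines are picked through storage classes, using the 0.8/0.9-era annotation convention. The base path and names are illustrative, and the local-PV class reflects the 0.9 feature as I understand it, not a definitive manifest.

```yaml
# Jiva: replicated volumes carved out of a host path on each node.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi
---
# Local PV (0.9+): no replication; the volume is a host path (or a whole
# device) on the node where the pod lands, for apps that replicate themselves.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/var/openebs/local"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
```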
So for the purposes of cStor, is there a difference between snapshots and clones, in the sense that a clone is just a fully populated snapshot in this instance? Right. So cStor is a copy-on-write file system. The clones share the common data with the source volume, but they work as completely independent volumes. So the capacity required for a clone is only for the additional data that gets written to it. Okay. And is the license for all of these different engines the same? Is it just Apache 2.0? That's a good question. The code that is written by OpenEBS is all Apache licensed. I will go through the repositories, and we can check that. That's actually feedback I'm looking for from this group: what it takes to get into the CNCF in terms of licenses. All right, understood. We can discuss that afterwards. All right. So let's dive into the architecture of how this works. I divide the components into cluster-level and node-level components. The easiest way to understand it, now that CSI has become a common language, is that we have controller components and node-agent kinds of components, and the node agents run on all the nodes. While the storage management pieces are mostly the data engine operators and the interfaces to CSI or to the external-storage provisioners, NDM is primarily concerned with managing the devices attached to the nodes: basically inventory management of the underlying storage devices. We can extend these components with other open source projects like Velero for backup-and-restore kinds of solutions. MayaOnline also uses the OpenEBS storage components to provide its SaaS service, which gives enterprises management of and insight into the storage infrastructure. There could be other add-ons that integrate with other open source projects as well. While these are all the management-related components, the core data engine components get spawned when the cluster administrator creates a pool or the volumes; those are components that come and go with the life cycle of the volumes and the pools. We are also coming out with an OpenEBS operator in 0.9 that controls the various control plane components and helps with upgrades and such. Any questions on this before I get into each of these components in a little more detail? So, am I to understand that you're doing CSI today? CSI is actually being developed in a branch. One of the reasons we haven't gone ahead with it yet is that we are completely Kubernetes native: the capability provided by the external-storage provisioner was pretty backward compatible with all the Kubernetes versions, and we do have customers still running Kubernetes 1.9. So we are definitely going with CSI, while making sure we stay backward compatible for those customers. It's coming in 0.9, and of course all the new enhancements are landing in CSI, so we decided to move ahead with migrating to CSI. Okay, thank you. So, in terms of how the node device manager works: you typically have a DaemonSet running on all the Kubernetes nodes.
It has access to the device subsystem, so it can detect the current devices, and also devices getting attached and detached, and it creates a device CR for them. It also has integrations with libraries that can probe for SMART capabilities and other device attributes, and it creates disk CRs; these are typically created for the devices that are backed by actual disks. This also helps in gathering SMART metrics from the underlying devices. Since these all run just like any other Kubernetes application, you can use the same infrastructure you already have, like Prometheus or Elasticsearch, to monitor these components as well. Can I just ask, maybe it's a small question: previously you referred to the node disk manager. Is that the same as the node device manager? Oh, that's right. We started off calling it the node disk manager, and then at the last KubeCon we wanted to see how to make it more general purpose for all the other storage engines. Device made more sense, because if you're running in a cloud, the devices may not really be backed by a disk. So it's becoming the node device manager. Good catch, Alex, thanks. Thank you. The other thing we introduced at the last KubeCon is the NDM operator. I keep saying disk manager, but yes, it's an NDM operator that gives you a single point of control for getting access to the disks. It works much the same way as persistent volume claims and PVs, the only difference being that the storage engines can sometimes request more than one disk to be associated with a given storage pool; that was one of the reasons we didn't want to use the term claim. The storage engine operators, in our case the cStor operator, use this: they put in a request for devices, the NDM operator looks at which nodes have the devices available and associates them with the device request, and the cStor operator then uses those disks. So when we are running local PV as well as cStor in the same OpenEBS cluster, this helps to arbitrate the disks between them. (A sketch of the disk objects NDM creates follows below.) Getting a little deeper into the OpenEBS control plane: I talked about NDM, but NDM is actually treated as a separate project, because it can be used by non-OpenEBS projects as well. The control plane consists of the storage API server, which has its own APIs, and a bunch of operators depending on the data engine, so there is a cStor operator and so on. The provisioners are launched as separate components, separate pods; we basically follow the same model as the CSI sidecars, and the provisioner is one of those pods. And we are moving away from the in-tree iSCSI volume toward having our own CSI node agent that uses the iSCSI components. So when PVCs come in, the provisioner controller in turn interacts with the storage API server and the operators, and a bunch of YAMLs get launched; we call them CAS templates. They can be custom CRs that provide the storage configuration, as well as the Kubernetes deployments themselves. So StatefulSets and Deployments get created at that point in time. I'm focusing here on the management components.
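Before moving on, here is a hedged sketch of the kind of disk object NDM creates, using the 0.8-era Disk kind. The field names follow that era's CRD as I recall it, and the hostname, path, and sizes are illustrative placeholders.

```yaml
apiVersion: openebs.io/v1alpha1
kind: Disk
metadata:
  name: disk-0123456789abcdef          # stable, hash-derived name
  labels:
    kubernetes.io/hostname: gke-node-1 # which node the device sits on
    ndm.io/disk-type: disk
spec:
  capacity:
    storage: 42949672960               # bytes
  details:
    model: PersistentDisk              # filled in by probes on real hardware
    vendor: Google
  path: /dev/sdb
status:
  state: Active                        # flips when the device detaches
```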
So a typical volume contains a core data engine, and that core data engine is optimized to work standalone as well as within Kubernetes; the OpenEBS components help to standardize it, if you will. So you have the CAS management sidecar, which is basically an operator that watches the custom CRs for that particular data engine and has the knowledge to configure the data engine. Similarly, a metrics exporter sidecar calls the interfaces provided internally by the data engine to export the metrics. Is this "Maya" here meaning this is only contributed by people from MayaData, not by other companies? So, the control plane code is actually in the openebs/maya repository. It's open source, and it's contributed by multiple people; it's not tied to MayaData. Maya just means magic, and when we started off this project we were looking for names for the control plane. Okay, I see. So just to clarify, it sounds like there are different repos for different bits of the project. Are we talking about contributing OpenEBS in its entirety, or specific bits of the project? Right, we are definitely looking at all the components of OpenEBS. All the control plane components are available in the openebs/maya repository. If I can just switch for a minute: we use the openebs/openebs repository itself as more of a project management repository, where the examples and operator YAMLs and so on go. The control plane is all in the maya repository, and Jiva, the fork of Longhorn, is a different project that we have, as is the node disk manager, and cStor is also another project available under the OpenEBS organization. So the entire OpenEBS organization is what we are looking at contributing, and then we can get into the individual licensing questions if there's any problem. And rather than open sourcing, just to be clear, they're all already open source. Yes, we'd be contributing all of them, and then they'll be even more open. So this slide just shows the same control plane. The last piece is the operator that we just talked about; it's getting added, and a custom resource for OpenEBS will let you say which data engines to enable, and give you the ability to ask OpenEBS for the status of the various components, that kind of thing. Getting into how the data engine works: you start with a storage node that has pools. Whenever a new PV is requested, it starts a Jiva volume target, which is the iSCSI service, and which has a sidecar for the metrics exporter. Then, depending on the replica count, the Jiva replica pods get launched on different nodes, and they make use of the host path. The stateful application pod itself makes use of the PV, and the kubelet, or the CSI node agent, acts as the iSCSI initiator, depending on which connector we are using, to connect to the target service and provide the data path. One of the reasons for using a Kubernetes service is that the Jiva volume target itself is stateless. So in case of a node failure, it can get rescheduled onto some other node, and the kubelet, or rather the initiator, only ever talks to the service IP instead of the actual target IP.
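To give a feel for what that stable endpoint looks like, here is a hedged sketch of the per-volume Service that fronts the iSCSI target. The pvc-prefixed names, the selector label, and the management port are assumptions modeled on how the demo objects are named, not an exact manifest from the project.

```yaml
# One Service per volume; the target pod behind it can be rescheduled to
# another node without clients noticing, because the initiator only ever
# dials the stable service IP.
apiVersion: v1
kind: Service
metadata:
  name: pvc-1b2c3d4e-ctrl-svc                    # illustrative pvc-<uid> naming
  namespace: openebs
spec:
  selector:
    openebs.io/persistent-volume: pvc-1b2c3d4e   # matches the target pod
  ports:
    - name: iscsi
      port: 3260          # iSCSI data path used by the kubelet/initiator
      targetPort: 3260
    - name: api
      port: 9501          # management/metrics endpoint (an assumption)
      targetPort: 9501
```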
The cStor data engine, in terms of comparing and understanding it, works the same way, just that instead of a host path it makes use of the devices directly. I'm moving a little quickly here so that I can jump into the demo; it's all the same here. And Kiran, maybe this will become a bit clearer in the demo, but I'm struggling a little to understand which bits are internal components of the software and which bits are logical components. When we say the cStor data engine, are we talking about a container per node, a container per pool, a container per volume, or how does that look? All right, I will take that question as part of the demo and show it there. Oh, excellent. Okay, thank you. So, on to the demo. I have a Kubernetes cluster created on GKE. It's a three-node cluster, and right now it's a pretty fresh cluster; I started it just before this call, so it's just kube-system running right now. Okay, so to install OpenEBS, I can just use the Helm stable chart. All the control plane components of OpenEBS get installed into the openebs namespace. (The install command and the relevant default-pool setting are sketched below.) So what you see here is the API server, the storage controller that I talked about; I'm currently using the 0.8.1 version. There's the external-storage provisioner, and the snapshot operator comes from there as well. And the NDM DaemonSet is also running on each of the nodes. Now, to make it easy to use in clusters where there are no additional disks, we actually make use of sparse files to create a default storage pool. So let me show what NDM has done by now. NDM goes ahead and creates sparse disk objects. Let's just look at one. If it was a physical device, it would have been called a disk; this is a logical object. It just says that this particular device is available on this Kubernetes hostname, and what the size of that particular disk is, and so on. If it was real hardware, it would have filled in all of those details. Cool. So now, one of the things that happens as part of the initial OpenEBS start-up is that it goes ahead and creates cStor sparse pool pods. This is controlled by a flag in the OpenEBS install that says: install a default sparse pool. So let's look at that. This is a logical object, a custom resource, that was launched with the OpenEBS API server with a bunch of configuration, which says: go ahead and create a cStor sparse pool, I need around three pools, and the type of disk to use is sparse. For each of the sparse pools, the configuration of that pool is stored in a CSP object. If you look at one of these, the logical object tells you which node it's on and what disk it uses, under the status of that particular pool. So now let's get into a little more detail on this sparse pool; let's look at how the pool pod looks. We saw the custom resource; now we're looking at the pod itself. It has three containers in it. One of them is the core data engine: that's the cStor pool container. Along with that, a couple of sidecars are launched. The next one is the cStor pool management sidecar, which fetches the details from the CSP custom resource, the storage configuration. Typically in this case there was only one disk, but if there were, say, two disks: do you want a mirrored configuration across the disks, or do you want to enable some storage-level parameters? Those things will be available in the CSP object, and that gets read, and then the pool is configured accordingly. And along with that, there is another sidecar that's for metrics.
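For reference, the install step in the demo was roughly the following. The Helm invocation reflects the 0.8-era stable chart, and the exact environment variable on the API server is taken from releases of that era; treat both as assumptions rather than a definitive recipe.

```yaml
# Commands from the demo flow (shown as comments):
#   helm repo update
#   helm install stable/openebs --name openebs --namespace openebs
#
# The "install default sparse pool" flag mentioned above is an env var on
# the maya-apiserver deployment (0.8-era naming; an assumption here):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: maya-apiserver
  namespace: openebs
spec:
  replicas: 1
  selector:
    matchLabels:
      name: maya-apiserver
  template:
    metadata:
      labels:
        name: maya-apiserver
    spec:
      containers:
        - name: maya-apiserver
          image: quay.io/openebs/m-apiserver:0.8.1
          env:
            - name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL
              value: "true"   # creates the cstor-sparse pool on startup
```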
Actually, sorry, let me correct that: on the pool side, the metrics exporter only arrives in 0.9, so that's why you see only two containers here at this point. So now that we have the pool created, we'll go and launch a stateful application on it. We had a bunch of control plane components, like the provisioner and the storage API server, and then we saw the three pods that got created, one for each of the pools, each with the data engine container and the management container. Now watch this one. This creates a PVC called percona that's basically using the cStor sparse pool storage class, and the volume has been created: pvc-c901... For this PVC, a new cStor target gets created in the openebs namespace; let me widen this. A new target pod comes in for it, and we get the cStor PV and so on. It's an in-tree iSCSI PV at this point in time, which points at the service IP address. So we have the cStor target pod, and this is the service IP address on which the iSCSI target can be reached. (The PVC and application side of this are sketched below.) So basically, what's happening there is that you're getting an iSCSI target exposed as a Kubernetes service to the rest of the cluster? That's correct. Okay. So if I do a describe on this pod, we can basically see the cStor PV within it. Did that answer your question on the different pods that exist? I'm still trying to process, to be honest; there was a lot to take in. Sure. Let me actually do one more thing. These are the custom resources that we load: the cStor volume and the replicas for the volumes, and the disks for NDM. And for the actual cStor pool itself, the really important one is the storage pool claim, right? And all the physical components, the containers, are in the openebs namespace. I see. Okay. So effectively we have a pod per pool and a pod for each of the NDM DaemonSet instances, right? And then we have a pod per PVC, the target pod that runs the iSCSI target service for that PVC. That's right. Got it. So does that mean we get a pod per PVC created within the cluster? That's correct. Whenever you add a new PVC, you are going to get another pod. So basically it has a shared set of pool pods, and then for every volume you create one new target pod. Got it. Yeah. One of the other things that I wanted to show is how the high availability works; this will be really quick. We have all these cStor pool pods already running, and when a new PVC is created, we create a cStor volume target pod that receives the IO packets via iSCSI from the stateful application, and for each of the packets it receives, it synchronously replicates them to the different pools. As part of the synchronous replication, it also appends a small header that the cStor volume target understands; for simplicity's sake, you can think of it as a block ID, a unique identifier. That gets sent to the different pools, and when one of the nodes fails, the data can still be served from the two other pools; when the node comes back, it can sync the data back from the available storage pools.
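The application side of the demo boils down to something like the following. The openebs-cstor-sparse storage class is the default one the sparse pool setup creates, while the pod spec is a generic stand-in rather than the exact percona manifest used on the call.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: percona-vol-claim
spec:
  storageClassName: openebs-cstor-sparse   # default SC from the sparse pools
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
# A stand-in stateful workload mounting the claim.
apiVersion: v1
kind: Pod
metadata:
  name: percona
spec:
  containers:
    - name: percona
      image: percona:5.7
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "demo-only"               # placeholder, not for real use
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: percona-vol-claim
```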
We also have a little table here of the different failure cases and how each scenario is handled. Right. So the replication is from the target pod that presents the volume to the different pools on each server? That's correct. And is that synchronous or asynchronous; what sort of consistency model is being used there? It's synchronous replication. And the rest of the data services, you mentioned things like snapshots and clones: are they implemented within the target pod or within the pools? I think I have a slide on that one; let me see if I can pull it up. It basically follows the external-storage snapshot API. (There's a sketch of the snapshot objects after this exchange.) When the snapshot API call comes in, the provisioner sends the request to the storage controller, and the storage controller accesses the cStor target to take the snapshot. The cStor target internally pauses the IO and then asks each of the cStor pools to take a snapshot of the corresponding volume replica. And in case one of the nodes was not available when the snapshot was taken, when it comes back up it re-syncs the data up to the snapshot, so it comes back to the state as of the snapshot. The snapshots also make rebuilds simpler. Great. Sorry, can I interrupt there very quickly with a question? I might have missed something, but I understood all three of those pools to be replicas, as opposed to shards. So I'm not quite sure why you would snapshot all three of them and not just one. Right. So it's possible that if you take a snapshot on just one of them, that node might go away completely, and when you want to take a clone or restore from it, we would not have that information. So the snapshot is actually taken on all the replicas. I'm still not sure I understand. At any point in time you have to have three replicas available, right? And if one of them dies, presumably you create a new replica on the fly. Right; the quorum still works even if there are only two replicas. So it's possible that when the snapshot request came in, there were only two replicas. Yes. Okay, I understand that. But then presumably you could choose either of the two and snapshot that, or... I guess in theory that one could fail in the middle of creating the snapshot; is that why you snapshot all of them? Let me get back to you with the exact reasoning. Since we do synchronous replication, we want to make sure the same state is maintained on all the replicas. But I think I understand your question; let me see if I can get back with the proper answer on the channel. Okay, no problem. All right. So my question was not stupid; that's what I really wanted to confirm. So they are all identical, and it would in theory be possible to snapshot just one of them, but you do all of them for some reason. Okay, that's good enough for me. And presumably for the cStor target pods you rely on Kubernetes restarting the pod on another node if there's a node failure or something like that, right? That's correct. What we do is set the tolerations so that these target pods get restarted very quickly. I'm also still exploring using pod priority. Again, some of these things will come as those features get into beta and are available in the clusters; some of the clusters where OpenEBS is used are still on old versions. Okay.
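Going back to the snapshot flow just described: a hedged sketch of what driving it looked like with the external-storage CRDs of that era. The API group matches the external-storage snapshot controller, and cloning works by promoting a snapshot into a new PVC through a dedicated storage class; the claim and snapshot names here are illustrative.

```yaml
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: percona-snap
  namespace: default
spec:
  persistentVolumeClaimName: percona-vol-claim   # the source PVC
---
# Clone: a new PVC that references the snapshot via the promoter class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: percona-clone
  namespace: default
  annotations:
    snapshot.alpha.kubernetes.io/snapshot: percona-snap
spec:
  storageClassName: openebs-snapshot-promoter
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi      # capacity is consumed only by data written after the clone
```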
And does the stateful workload need to coexist on the same nodes as the cStor targets, or can they be independent? They can be independent. All three layers can be independent, and by default they are. But depending on the storage admin, you can also configure policies to say that the workloads need to be on one of the storage nodes, or that the target has to be on the storage nodes, or always co-located with the application. These all translate into pod affinity and anti-affinity rules. Got it. You said you have backups too. Do you have another backup engine, or do you just use the snapshots for your backups? Right. So these are only snapshots. For backups, we have integrated with the Ark plug-in, so the backups can be sent to S3-compatible storage. Okay. Sorry, one follow-up question. So these snapshots are stored on the same machines and same disks as the pools? Snapshots are stored on the same disks, yes. And then if you want to take a backup outside of the cluster, we make use of the Ark plug-in, or what's called the Velero plug-in now. (There's a sketch of that backup flow below.) Okay. So you can back up the snapshots, I presume, asynchronously? Yes, that's correct. Okay, great. I guess Amazon EBS kind of mixes those two concepts; its snapshots are actually remote, if I remember correctly. That's correct, so those are more like backups. Okay. Gotcha. Cool. I'm available on the Kubernetes Slack channel, as well as on the CNCF email address, so we can probably take more questions there. Peter, I'd like to come back to you on that one question. And the other thing we need to check on, Alex, is the licenses. I'll reach out to you with each of the projects that we have in the organization, the current license scheme they have, and what the implications are. Yeah, that would be good, because, just having a look at the repos now, things like the ZFS license have fairly controversial license requirements historically, so we would just need to understand that a little better. And also, some of the repos look like just clones; you're probably not really going to contribute some of those, right? So it would help to identify which repos you want to contribute. That's right. Some of them are actually for providing the infrastructure for the OpenEBS CI itself; just like CNCF.CI, we have implemented OpenEBS.CI. And there's also another open source project called Litmus that's for running chaos engineering, chaos tests, on stateful workloads. Those are also currently under the OpenEBS organization. So I'll make a list of the things that are part of core OpenEBS and take it from there. Yeah, that would be perfect. If you could share that information with Quinton and myself, please, we can take that forward then. Alex, just by the way, on the licensing stuff: usually the CNCF, the Linux Foundation staff, sort out all the licensing. They have lawyers, and they understand open source licenses way better than us engineers, so we can totally just hand that over to them and ask them to sort it out. Obviously the project won't be formally incorporated into the CNCF unless it can be moved to a compatible license, and sometimes that takes a while.
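For completeness, a hedged sketch of the backup path mentioned above, written against today's Velero API group; the 0.8-era integration shipped under the earlier Ark naming, and the namespace and storage location here are placeholders.

```yaml
# Back up a namespace, asking the volume plug-in to snapshot the OpenEBS
# volumes and ship the data to an S3-compatible object store.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: percona-backup
  namespace: velero
spec:
  includedNamespaces:
    - default
  snapshotVolumes: true
  storageLocation: default   # an S3-compatible BackupStorageLocation
```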
But yeah, in some sense we don't really need to worry ourselves too much about that. No, understood; it was more about cataloging. Oh, I see, gotcha. One final question, which I think you answered for me on a previous call, but just for the rest of the people on this call: could you give us some indication of your main reason for wanting to open source this, and particularly to donate it to the CNCF? People are going to ask: how are you going to make money? Are you doing an open core model, or how do we know you're still going to be around next year if you open source everything? Evan, would you like to take that question? I'm happy to. It's a good question, of course. We do have a model: for the persona who wants to grab this and self-adopt, we give them a 100% open source solution for the enterprise. And then, say, for the VPs who want to see everything and maybe begin to do things like programmatic controls and so forth, we have something we call MayaOnline, which is a SaaS solution that gives you a GUI and, we think, beautiful management. So it's not quite open core, but if you are a customer, you're actually using the SaaS product to manage the environment. Okay, that makes a lot of sense, perfect. And then your motivations: you mentioned marketing and that sort of thing. Are you looking for significant amounts of contribution? We're hoping so, Quinton, and it's a little hard to know to what extent folks have been inhibited by us being, let's say, a sole-vendor project. But that's the thesis here: we certainly have single, double, triple contributions from some folks who could do a lot more to help. So the goal would be to unlock some of that, and then, yes, the marketing or awareness piece that you mentioned as well. Okay, but it doesn't sound like you're drastically short of engineering capacity. It sounds like you've done most of the hard work already, and you're not desperate for engineering horsepower so much as adoption and awareness. Yes, I think that's true. Kiran always tells me he's short of engineering horsepower. That's what all engineers do; we're told to say that every day. That's right. Right, cool. I think we're officially out of time. That was really great; thank you everyone for a very informative presentation. We can take this forward. I think the next step is to put a formal proposal together and get some TOC sponsors. I can help with that, unless anyone has any major objections; you can speak to me about it, but that's what I would suggest our next steps are. That makes sense. Kiran, would you be okay sharing the deck so we can include a link to it with the rest of the docs and the agenda? Absolutely, yes. I will send the link to you. Fantastic. Thank you. Thanks everyone. Bye.