Hello. I've dropped the meeting notes into the chat area, so you can add your name and any items. We'll get started at five after, Bill. Did you see how many folks we had on the CNF working group — sorry, on the tech call? It was about 18 or 19. Yeah, cool. I think on that orchestrator that was presented, it'd be interesting to hear some of the specifics, or maybe break down some of the sections and have them as smaller topics. Or maybe: what are the use cases that they've been targeting? Not the tool directly, but what are they trying to do — and that could maybe roll into the CNF working group. Hello folks, we're going to get started at five after, in just a couple of minutes, and the meeting notes are in the Zoom chat. You can add your name and any agenda items. All right. Does anyone have any topics they'd like to add that we don't already have on the agenda? I guess for those that are new to this call: this is the CNF working group, the cloud native network function working group. We have this call every week at 1600 UTC. There's a GitHub repo for the working group, as well as a mailing list and Slack channel. We have a few items from last week that were on the agenda. I don't know if Frederick is on the call; if he's not, then I'll bump this one, because it was a topic that he had added before — security related. I'm not seeing Frederick in the attendees... no, didn't look like it. All right, so I'm going to bump that one as well. There's a good question on performance. Yeah, I might add a couple of notes on that, actually. Yeah, do you want to just do a write-up on that one, and then it'll come up when it comes around next, which should work. That one's good enough. Well, good enough — I think that one would be good as a discussion item on the discussion board. So for those that don't know, or haven't seen it on GitHub, we have a discussion area that we've been using.
And I think for that performance-related one, we could have some stuff there as well. A quick technical question: is there a way to share Google documents with everybody on the list? Is there like a Google group list or something for all of us? Or I guess there's a mailing list. Sorry, Bill, go ahead. Yeah, there's a mailing list, but obviously that link would then be open to everyone. Is there a reason why you'd want it shared only with the people on the mailing list and not, I guess, generally? No, that's a good point. I could probably share it generally. That's correct — all of our documents are globally open. Yeah, that's probably the easiest option then, and then just send it on the mailing list. Yeah, that's a good idea. Thanks. In case you haven't seen this: I know there are a few companies where, even if you share something and make it so anybody with the link can see it, there are a few that don't allow it. I think Verizon might be one. So you may get requests from those folks saying, can you please share — and you'll respond and say, please use another account. I think you have to be careful when you click share on Google Drive: check that it's not just for your company and that it is global. And if you're not allowed to make it global, well, you can always use your non-work Gmail account, if you have one. So, any other topics before we jump in? We've got some PRs that we want to look at. Frederick messaged me, so he may come in and talk about this security topic here in a bit if he can get on — but are there any other topics that folks would like to add? So we have this stateful CNF pull request. This is a use case that Simon from Matrixx had submitted, and I think he worked on that with you, Oliver. And — oh, there's Simon. Hey, Simon. It'd be nice to get this one through.
I think the main questions or comments about this right now were about making some of the use case statements either more general, or explicitly saying, when memory is being used, that it's one path versus another path. That was really the only thing, and I know you talked about this a little bit, Simon, several weeks ago. So I don't know if you've gone back to this. But has anyone else had a chance to look at this stateful CNF use case? I'm going to bring it up like this, maybe. Yeah, I haven't actually had a chance to go back to it to address the PR comments, but I will put it on my to-do list for this week. Hey, Book. Hello. All right. So, Simon, do you have a real short overview, where you can say what this is for folks who may not have seen it, and then they can come check it out? Yeah, this is a use case talking about how a cloud native network function needs to maintain state, and the different types of state that it might need to maintain. The specific use case is about the 5G CHF scenario, where subscribers are interacting with a charging system. That's, I guess, the short short version. And the summary is basically that a CNF, when it's maintaining state, should continue to follow cloud native principles and try to be loosely coupled with the hardware on which it's running. I have a question on current practice: what is typically used as persistent storage, or a persistent volume, to persist that state? So in our implementation, we have shared storage — what we call fast storage — which is basically an SSD running behind an NFS server. And we use EFS for that in, like, an EKS kind of environment. That's the persisted-to-disk storage, and it's used for things like checkpoints and backups. The actual data is all stored in memory. But this use case is looking at it from the perspective of persistent storage related to statefulness.
Well, I guess that's where the confusion comes from, because when we talk about state, we're talking about state that is held in memory for low-latency reasons. Yeah, which for charging systems is quite understandable. So, in the implementation of this, as I said, we write to shared storage as a fallback — a DR kind of scenario — and for, like, offline synchronization. Whereas all online synchronization, and the actual session state and long-held state like balances and things like that, are held in memory on all systems. And this means that making such a network function stateful, when we look at the fast in-memory storage, means that at the point of either node draining or rescheduling, the pods would need to act as a StatefulSet — or, if they're not a StatefulSet, they would need to have a mechanism to consume, or to empty, or to migrate the information that is in memory to another node. Yeah, or flush it to the persistent storage and then start anew. Yeah, so they are managed as a Kubernetes StatefulSet. And there is, like, an operator that sits on top of it that manages it, in addition to, like, the standard Kubernetes clustering. So, you know, we're using clustering technology, which we use to do, like, online synchronization. So whenever you interact with one particular node, that interaction is synchronized with all of the nodes that are in that particular cluster. Does that answer your question? It gives some insight. Now I'm trying to — but I need to read it completely; I read it very quickly, and it was maybe two weeks ago. So I need to go through it thoroughly, because from that point I'm just thinking how to generalize it for the use case. I would normally say, from the infrastructure perspective: you've got no control over the nodes. You can use the node that you are allocated to, as a workload, at the given point in time, and you will be moved off of that node by Kubernetes mechanisms.
Which is still true in a stateful — and especially with these kinds of high-performance stateful scenarios, or any stateless scenarios that are high performance. For the implementation, we actually taint certain nodes to make some exclusive to our use. And if that workload cannot run on those nodes, then other nodes would need to be brought up untainted so the workload can be moved across. But that would all have to be controlled. The problem with these kinds of stateful scenarios is that it's highly dependent on being available, and it's not like a stateless workload that can be easily moved across and migrated between nodes in a Kubernetes cluster. It's like any database in Kubernetes, at the end of the day — this is basically what it is. It's an in-memory database that's, you know, got high performance and low latency. So, yeah, it has that same — so I guess this could be written from the perspective of: if you were to stand up an in-memory database in Kubernetes, what challenges would you have to think about? And these are the same challenges that I've tried to document here. We have our opinionated setup, which relies on external flash NVMe storage, which behaves very close, in terms of performance, to RAM — but it's probably not something for a best practice. And you're hard-coding — but that is not running exclusively within Kubernetes; you're saying that's an NVMe that is external? It is. We use flash arrays that are sitting in the same rack, or in the racks where we have the nodes. And they are all connected with 100-gig iSCSI, soon to be NVMe over Fabrics, to that storage. But it's not exclusively storage for one cluster; it's shared between a number of clusters — though it's a state-of-the-art, highly performant storage system. So I would call that a bare-metal cloud, where you're basically hosting a private cloud on your nodes, and then you get to control how those nodes are configured.
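The drain-and-flush behavior discussed above — in-memory session state that must be checkpointed to slower persistent storage (NFS/EFS) before a pod is rescheduled, so a replacement can rehydrate — can be sketched as a toy store. This is a minimal illustration of the pattern, not the actual implementation discussed on the call; the class name, checkpoint path, and JSON format are assumptions:

```python
import json
import os
import tempfile


class InMemoryStateStore:
    """Toy in-memory session store for a stateful CNF pod.

    Sketch only: on a drain signal (e.g. a preStop hook or SIGTERM)
    it flushes hot state to persistent storage so a replacement pod
    can rehydrate before serving traffic. Names are illustrative.
    """

    def __init__(self, checkpoint_path):
        self.checkpoint_path = checkpoint_path
        self.sessions = {}  # hot data lives in memory for low latency

    def put(self, session_id, state):
        self.sessions[session_id] = state

    def flush_to_disk(self):
        # Called when the node is being drained: persist everything
        # to the shared (NFS/EFS-style) checkpoint location.
        with open(self.checkpoint_path, "w") as f:
            json.dump(self.sessions, f)

    @classmethod
    def rehydrate(cls, checkpoint_path):
        # A replacement pod (e.g. the same StatefulSet ordinal on a
        # new node) reloads the checkpoint, if one exists.
        store = cls(checkpoint_path)
        if os.path.exists(checkpoint_path):
            with open(checkpoint_path) as f:
                store.sessions = json.load(f)
        return store
```

In practice the flush would be wired to a container lifecycle hook and the real data would be synchronized online between replicas as well; this sketch only shows the flush-then-rehydrate fallback path.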
Which is — you know, we have customers that do that, but we also have customers that use, like, AWS public cloud. And then you don't get as much control over the hardware that you're on. So, this is interesting. Just to add a point: I said last meeting that I'm working on a document about the use of Kubernetes operators with our use cases. It's become quite long at this point — it's five pages. Sorry, it'll take some time; it's a big discussion. But one of the things I'm really examining is that operators are useful in cases where you need to manage state that's non-ephemeral, right? Because one of the problems — or challenges, I would say — with using clouds generally is that you think of your resources as being ephemeral. That is, Kubernetes can destroy them and create them, and supposedly your application can continue working, you know, as long as they're recreated. But things like databases — I mean, if you look at the use cases of Kubernetes operators, 90% of Kubernetes operators are database cluster managers. And it has to do exactly with that: you need something external to the resource itself in order to manage that resource and make sure the cluster is managed according to its rules, right? Because each database has its own specific rules — specific cluster management rules. And that's knowledge that Kubernetes doesn't naturally have; that's something you need to encode, for example by using an operator. And something that would be interesting there as well: many people, when they work with operators, take the assumption that they'll always have a clean shutdown. But we know this is not true, and with operators it would also be good to tie in how an operator can help you when something bad happens in your system — like you have some state and the state disappears. Is the operator able to help you?
How can the operator help you recover into a good state, as well as ensuring that you get the context back, or the state back, or rebuild it, and so on, as part of the process? An operator can't do anything that a human couldn't do, right? It's just an automated human. But the possibility is there — for example, migrating databases, or freezing them. You know, in certain kinds of cluster arrangements you have one node that's a writer node and the other nodes are reader nodes — say, MongoDB. You have some sort of election management system. So these things — Kubernetes has no idea about them, right? This is something you need to add to the system to make it viable. For example, KubeVirt, for virtual machines: one of the requirements for virtual machines, which doesn't exist for containers, is that they can be frozen and migrated — for example to another cluster — as-is, with all their RAM and memory and everything. So these are features that KubeVirt has, because it considers virtual machines to be stateful in ways that containers aren't. Besides databases, we also noticed this for telco applications — for example, if you have a user plane function, UPF. So we work with one of the vendors, and the challenge there is that you have these UPF pods, or containers, that are holding customer sessions — not for charging purposes, but for maintaining the connection. And when a pod goes, these sessions would either be interrupted, or they need to be migrated. So they have an operator that sits on top of these UPF pods and proactively replicates all the sessions — or makes a cache — on all other nodes at any given point in time, so that if that node or pod fails, the standby that gets elected to master, or leader, or whatever, would know how to continue forwarding the packets. So it's also controlled by operators that are custom-made for that purpose. Well, as we're talking about operators —
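The UPF failover pattern just described — sessions replicated to standby pods so that a newly elected leader can keep forwarding — can be sketched with a toy cluster model. This is not any vendor's actual operator or API; the pod names, session fields, and trivial sorted-name "election" are all invented for illustration:

```python
class UpfCluster:
    """Toy model of UPF pods with replicated session state.

    Illustrates the pattern from the discussion: every session
    accepted on the leader is copied to all standbys, so when the
    leader's node is drained a standby can be promoted and continue
    "forwarding" without losing sessions. A sketch, not a real UPF.
    """

    def __init__(self, pod_names):
        # One session cache per pod; first pod starts as leader.
        self.pods = {name: {} for name in pod_names}
        self.leader = pod_names[0]

    def add_session(self, session_id, info):
        # The leader accepts the session and proactively replicates
        # it to every standby's cache.
        for cache in self.pods.values():
            cache[session_id] = info

    def drain(self, pod_name):
        # Operator-style reaction to "this node is being drained":
        # remove the pod and, if it was the leader, elect a standby.
        self.pods.pop(pod_name)
        if self.leader == pod_name:
            self.leader = sorted(self.pods)[0]  # trivial "election"

    def lookup(self, session_id):
        # The current leader serves lookups from its replicated cache.
        return self.pods[self.leader].get(session_id)
```

A real operator would react to Kubernetes events (node cordon, pod deletion) and use a proper consensus or election mechanism; the point here is only that the replicated cache is what lets the new leader pick up where the old one stood.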
My view is that operators really shouldn't be getting too deep into the secondary cluster management. And maybe that's the wrong viewpoint, but my feeling is that an operator really is just there to help Kubernetes understand the particular management requirements for the pods. And I don't feel like that is the right place to put all of this secondary cluster synchronization logic. In this case it's not cluster synchronization — it's all on the application level, so the cluster is more or less moving independently. The cluster decides that the node where that specific type of pod is running is going to be drained, and needs to go out of the pool and be replaced with another one. The application needs to handle that, and it handles it with the operator as a mechanism — one that is aware that the event could happen, and proactively makes sure the application continues working uninterrupted. So the operator reacts to the event of "this node is going to be drained" and triggers, you know, application-specific logic that does the migration? I'm not 100% sure, but I think it reacts on the notion of: my master UPF pod is getting kicked out, so I need to restart the mechanism to select one of my remaining pods as master, and create a new one. And if you are the new master, you need to be able to pick up where the old master was standing — something like that. I'm not 100% sure which signals it reacts to, but in the end this business logic — this operational logic — is built into that operator, and it handles that event. But on the application level only; it doesn't coordinate anything with the infrastructure. Yeah, okay. Just to point out, Simon, one thing I'm trying to distinguish in the document I'm working on: there's a big mess in terminology around operators and custom controllers and things like that, but I do talk about something called a pure operator.
Which is — if anybody has a background in functional languages, you can guess what I mean — these kinds of pure operators don't have side effects, and indeed do what you describe: they basically take some Kubernetes resources, usually CRDs, and manipulate other custom resources. So, for example, the built-in deployment controller is exactly that kind of pure operator. It takes a Deployment resource and turns it into a ReplicaSet resource that it then owns. And it doesn't have any side effects other than that. But there are a lot of operators that are not pure, and they do work. And I think those are the kinds of operators that are sometimes very interesting in the CNF use case — both are interesting. Yeah, and in actual fact, we have two operators in our codebase, and those do have side effects, although the side effects are more about managing where we require pods to start or stop. And that's really the limit of what we do: we use the operator framework to, you know, bring workloads up, set configuration properties, and those kinds of things. So it's almost like a pure operator, where we're effectively just maintaining the StatefulSet that is then running the workload. We don't use operators to actually interact with the sort of application-specific cluster; that is all handled directly in the application pods — you know, outside of Kubernetes, effectively. Or rather, it's obviously running within Kubernetes, but Kubernetes is completely unaware that we have a cluster. All right. So, what do we need? Well, I guess we need some reviews on this one. If folks could take a look — and other than this memory thing, Simon, it looks like there hadn't been any other updates needed.
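The "pure operator" idea above — a controller with no side effects that only maps one resource into another resource it owns, the way the built-in Deployment controller produces a ReplicaSet — can be sketched as a pure function over plain dicts. The field shapes are heavily simplified stand-ins for the real Kubernetes API objects:

```python
def reconcile_deployment(deployment):
    """Pure-operator sketch: derive the owned ReplicaSet from a
    Deployment spec. The input is not mutated and there are no side
    effects; the only "output" is the returned desired resource.
    Field shapes are simplified, not the real Kubernetes schema.
    """
    spec = deployment["spec"]
    name = deployment["metadata"]["name"]
    return {
        "kind": "ReplicaSet",
        "metadata": {
            "name": name + "-rs",
            # The owner reference is what makes the ReplicaSet
            # "owned" by the Deployment that produced it.
            "ownerReferences": [name],
        },
        "spec": {
            "replicas": spec["replicas"],
            "template": spec["template"],
        },
    }
```

Because the function is pure, reconciling the same Deployment twice yields the identical desired ReplicaSet — which is exactly the property that makes such controllers easy to reason about, in contrast to operators that also perform external side effects.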
So what we're trying to do is get this use case in. We want to get it merged, so if folks can look at it and give your approval, or request any clarifications — we're not trying to solve all the things. And if there are no major issues, we want to get it in and start iterating once it's there, versus keeping it open. Yeah, I will address the in-memory part as one type of implementation. I think that was basically the comment, and so I'll change the wording to suggest that, rather than saying this is how you have to do it. So, you know, it's a use case — therefore a particular example of the type of thing, as opposed to a general one. We tried to be generic, but obviously use cases are specific to a use. This is that use, but I will try to make it even more generic. I don't think you have to remove the use of memory. I think you could, as you're saying — the use case is tied to a specific use. So you could, if it clarifies, maybe say before that section starts: we've chosen memory as our solution, and now we're going to talk about that; there are other solutions. And then it would be fine to address it that way, if it doesn't make sense to go too generic — because that could get confusing. Yeah, okay, I'll take that on board. Does anyone have anything else specific? Otherwise, you can just go review it and please give feedback in the PR itself. All right. And if you click through — if you all don't see this already, you can click through to Simon's branch. And from there you can actually see the images and stuff, which don't show up here. Yeah, I don't know, but it's probably something with the location — like, the image may be somewhere else and the reference isn't public. It's not a public URL, so yeah, if you did a full URL it may work, but it's fine. All right. Let's see the next use case, which we may not have the authors for. Let's see — I don't see them on this call.
This is actually two use cases, and there's a discussion item on at least the synchronization timing, which I think was relevant for the stateful stuff, Simon, that you had talked about. But there's a discussion on this one as well — there are some things here, and there have been some topics; I'm going to just add that here. But this is actually adding two different use cases: synchronization and timing for CNFs, and then this one's about how you have interfaces for CNFs that do different types of communication. You had some general comments, I think, and then Frederick. I have to admit that I need to give this a much more thorough read than I last did — which is shameful, because apparently it's been open for four weeks. Well, I think this one is another one where it's nice to click through and see the use case directly. So go through and pull each of these up; they have some references. I'm not going to go through all that, and they're not here, but if folks can take a look at these — we're trying to get them through. I think you had one here — this onboarding one. I'm going to pull that one up. And it looks like, you know, we have some time for this use case. And we need more reviewers on this. Do you want to speak to this one, Book? And I'm going to assign some folks. Yeah, I can shortly elaborate on the background. Do you want to share your screen or anything? I could do that, if there's anything to show. I could just walk through the merge request — or pull request — and then show it. So it's a pretty, let's say, non-technical use case; it's more process- or methodology-related. And I'm going to just grab the screen share. Hope you can see that now. Looks good. And now I'm noticing that I didn't edit that. Anyway, I think we have a bit of an issue with numbering, because as people make the use cases in parallel, they pick the next free number, and by the time it's merged there might be overlap.
So, I guess this is why I didn't assign any numbering at this stage. The use case is derived from practical experiences over the last — give or take — year, where we are seeing a new situation emerging for the teams within the operator, or CSP, and their partners and vendors. Because so far: if you talk about a physical network function, a PNF, it was brought to the CSP as it was designed, with good documentation and integration interfaces. We didn't have anything like this, except the integration into the wider environment; in terms of vertical stacking, it was a complete thing. In the VNF world, we experienced that it mostly ended up with the application vendor bringing either their own NFVI, or an NFVI blueprint which stated the requirements. In the CNF case, we now have a situation — in some CSPs like ours, but I would say in most — where there is in fact one platform being present and being developed, whatever flavor you take, whether open source or some purchased platform, but it's there. And there is the expectation that many of the different functions run on that platform. This is now a new situation for the CSP network function teams and their vendors: they need to do this onboarding onto the infrastructure, and we saw a number of challenges in current practice. First of all, the CNFs are written in a way that suits the service delivery or professional services departments of the vendors. So if the customer — like a CSP team — takes it, the documentation is very much targeting people who are very deeply into that topic, on the vendor side. And every vendor we saw so far has its own sort of platform blueprint. They're more open to integrating on a third-party platform, but all things and all know-how are kind of tilted towards this blueprint.
So we saw that that's one of the elements that makes it difficult to do onboarding that is really not a custom integration. The second is that the requirements are not clear upfront with a third-party platform in mind, so they usually come in portions. And we do a bit of trial-and-error onboarding: we get the first set of requirements, fulfill them, start onboarding, and then hear — oh no, this was forgotten, that was forgotten. So what I'm trying to address with this use case — I won't go into more aspects; there are a couple more, but it all revolves around this: do you have, like, a state-of-the-art package for that CNF, one that can be used like any kind of commercial software that teams install on an operating system, and so on? The CNFs are not in that state at the moment. And this use case is tackling the need to go with a, let's say, more streamlined — I would say state-of-the-art — approach when delivering and packaging those applications. And there are a couple of best practices for software distribution that could be followed for cloud native software distribution, these kinds of things. So that's the quintessence of it. Maybe just one thing that I didn't write here for background: in a couple of onboarding cases, it's also a very fun process, and a very insightful learning process, but it could be, let's say, much more professionalized. We come to a vendor and they say: oh, we had this and this in mind — I don't know, we deploy with Jenkins and this pipeline and so on. And then we do reverse engineering, because on our platform we do something that's more native, with Helm or whatever. So then we are reverse-engineering their application, coming down to the basic artifacts, and then building up the new packages so that we can reproducibly deploy it, and repeat it in many environments. It's kind of —
I think there is space for a best practice discussion, and maybe a broader consensus on how things should be packaged in the most neutral way. And if something is opinionated, and needs to be opinionated, it could also be put separately, made visible, and emphasized. Does anyone have any questions or comments? It's another use case that we'd like to get in place, and then, specifically what Book was saying, start breaking down the parts and looking for different practices where we can eventually say "here's good." So we're going to go ahead and talk about some of our cloud native and Kubernetes practices, and we want to split those out. And for those who aren't familiar, one of the end goals is to make it so you can go in somewhat à la carte, pick the different practices, and be able to piece those together. I guess one comment I would make: well, it's a good discussion here of what the challenges are, but some of these challenges might be self-inflicted — if you choose Helm. So, you know, the comment is that CNFs frequently are delivered as Helm charts. I don't know how neutral that is, necessarily, right? There could be other ways to deliver and deploy CNFs — for example, you know, going back to operators. Sorry — one way. I can tell that the idea here is giving one example; that's fine. There are multiple ways, and I mean, when Helm was coming in, there were already multiple ways. So we're really talking about: how do we help the people that are using Helm? And if there are other ways, then we can also present a use case that could cover multiple paths — that would be fine too. But if there are people using Helm, then how do we help them? Maybe they're causing their own problems, like you say, but if the request is there, and they're already using it, then how do we help them?
Right, right, I absolutely agree with that. My point is, it's not titled that way — this is titled more generically, you know, a Kube-native approach. Well, it's a very specific Kube-native approach; that's what I'm saying. I mean, it's definitely — I think you could suggest an edit on this to make it generalized. It was like one of the examples of what we meet in practice. And, yeah, let's face it, Helm is one of the standards for distributing things. But here, what I try to point out is: even if they come with Helm — if you look at Helm best practices, they diverge very, very widely. So if you are used to, let's say, using Helm in some way, and then you meet a CNF packaged with Helm, and you see — oops, many things that I assumed a good Helm package would do, these are not covered. There could be a couple of sections, like: if you do Helm charts, these are the recommendations, and these are maybe the limits, and so on. Or: if we go for operators, this would be the recommendation, and so on — right? And it is: how do we align CNFs more — or how would CNFs align better — with modern cloud native software distribution practices? So, let's just be clear that what we're dealing with is not different just because it's CNFs; it's different because it's multiple applications on the same platform that want serious quantities of independence from each other. Now, what we could ultimately do from this is write a best practice that says: if you insist on using Helm, then you should do these things. That would be a perfectly reasonable best practice to have, and I think it's quite a plausible thing to write. We know that CNF vendors of all stripes like providing Helm charts. We know that Helm charts come with limitations. We know that Helm charts can be written in such a way that they have additional limitations.
We could work through that, and we could make a best practice that says: if you're going to use Helm, use it in these ways. On the operator side of things — yes, operators are definitely a good way of running an application; you can write an operator specific to an application. They're also a terrible way of orchestrating an application, because you're writing an operator that effectively is part of the platform — because it is a platform service; it effectively says the application is a platform service. That's its purpose. So if we are going to do that, then, you know, we have to document its strengths, limitations, and what we prefer. I think the right answer here may use some technologies from both of those piles, but is neither of them exclusively by itself, as things stand. But, you know, these are the tools we've got to work with. So if we're going to say "use these things" or "don't use these things," we'd better be saying something positive — like "use these things in this way" — so that in the future you won't get into trouble. So I think something we're missing here as well is: who's the audience for some of these documents? Is the audience someone who is new to Kubernetes, who's tasked with operating this stuff? Are they looking at a greenfield deployment, or at something that already exists and has Helm charts already available? Or is it the developer who's trying to make a decision: do I go with operators, do I go with Helm charts, or a mixture of both? So I think if we can get that particular thing defined a little more tightly, then that'll help with describing what we want to put into this section. And it will allow us to guide the guidelines effectively. I agree with everything said here.
My feeling is just that maybe this use case can be broken down — the issue of the challenges having to do with Helm deployment could be its own use case. I like this particular use case; I just think it could be made more generic than it is right now, because it addresses a very specific challenge that isn't necessarily related to the use cases as a whole. So a better way of phrasing that — rather than basically saying "this is great, but it's not what I want" — I think we would say: this is great, but it's got a certain scope. So that, you know, it's not pretending to be the solution to all problems, and it doesn't need rewriting to be the solution to all problems; it just needs to be the solution to the problem it describes. And then if you want a more generic one, you go write a more generic one. Because it seems unfair to want something he hasn't written, when what he's written makes sense — to not let him commit it because it's not what you would have written yourself. I think: write your own, you know. Forward progress is valuable here, right? If this has taken a set of assumptions into account and is moving forward from there, then it should document its assumptions, and that's all it needs to do. I'm not saying I would not accept this. Maybe. Yeah, but you see what I'm saying, right? Some feedback is "we should change this"; some feedback is "we should do more than this"; and I think your feedback here is "we should do more." Maybe to highlight, actually — I think the confusion probably comes from pointing out some very specific things in the limitations. But if you look at the expected behavior — if I remember correctly, I put it in a very generic way, talking about the actor. So here it's like: a CNF DevOps team expects to deploy and configure the CNF application on their own, preferably via automation pipelines, with interaction with the platform team.
So that is one expectation. How do you package for that? That's a subject for best practices. Then they require well-documented procedures, tests, and so on. So that's the very generic version; but the challenges and limitations with the Kube-native approach depict more or less the state of the art and the practice we are facing, and that's why it's singled out — it's a pattern that repeats. But we could enrich or modify this; it's one view, and I would be perfectly fine with some rounds of comments or even suggested commits. You really only point out Helm charts in the limitations in the third paragraph; the rest is pretty generic. And you specifically say "frequently" — that's not saying anyone else needs to do it that way, it's saying that is frequently the case for applications deployed to Kubernetes. Maybe you could say "generally are delivered." So I think that's okay. Back in the expected behavior, in the second paragraph, last sentence, you have this part where you say "typical for cloud-native applications" and then Helm charts, containers, and manifests — those are just examples. So I think we're open to expanding on this, and we could have more specific ones, but this is already pretty general. You could definitely talk about applications that are only manifest files; we've seen those doing some testing, where you have manifests for deploying the containers but you do not have Helm charts covering all the different pieces. And there could be other ways of doing that.
Yeah, I could talk in detail about some things. For example, many of the operators that are mentioned also come packaged to be installed by Helm charts, and we see that many of them use charts to install CRDs — and Helm is not good at all at handling CRDs, so they cannot upgrade CRDs properly. So then we get to the point of asking the vendors: okay, we got it installed; how do we manage it in the next step? And that's where the discussion starts. I could even provide more technical insight behind these abstract points and expand the list, because in many cases we do have Helm charts that install the operators that install the rest of the things. And then we repeatedly say to vendors: please separate your CRD creation out of the Helm chart. If you still want to have a Helm chart, put the CRDs in plain manifests, because we cannot manage them properly via the Helm chart later in the lifecycle. Right, and there's a middle ground too: you can use Kustomize with the -k flag — that's kind of a Helm-lite. It's not used that much, but I have seen it used. When you say "don't add in the CRD creation," do you specifically mean the custom resource definition itself? Because it's effectively immutable to Helm, and if you run an upgrade, Helm will fail on it. If you have a big Helm chart that bundles CRD definitions with deployments and the rest of it, then when you want to change something and run an update, it will fail because it cannot touch CRDs — and then nothing else gets updated either. So then we say: if you want to manage this application via the Helm charts you provide, you need to separate out the CRD definition part. CRDs are created once and change rarely — but even when they change frequently, they change separately, via a different mechanism.
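The separation being asked for can be sketched as a chart layout. A minimal sketch, assuming Helm 3 semantics for the `crds/` directory (its contents are installed once on `helm install` and never touched on `helm upgrade`); the chart and CRD names are hypothetical:

```shell
# Build a skeleton chart with the CRD kept out of templates/,
# so `helm upgrade` never attempts (and fails) to modify it.
mkdir -p mycnf-chart/crds mycnf-chart/templates
cat > mycnf-chart/crds/widgets.example.com.yaml <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
EOF

# Day-2 lifecycle (illustrative; assumes a configured cluster):
#   kubectl apply --server-side -f mycnf-chart/crds/   # manage CRDs out of band
#   helm upgrade --install mycnf ./mycnf-chart --skip-crds
```

With this split, the CRD lifecycle (elevated access, rare changes) is handled by the platform side, while the chart manages only the workload resources.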
Is that a limitation of the Kube-native approach, or a limitation of the Helm tool itself? I mean, it's a Helm perspective — Helm is the de facto standard, anyway, for delivering CNFs — so it's just a limitation of Helm not being able to manage the lifecycle of a CRD. Which is fair, because it's a cluster-wide resource that really shouldn't change very often. Yeah, the problem is that when you do decide to change the CRD, it's very hard to manage that; you have all sorts of issues. The Orkestra tool we just saw is one solution to that — exactly, to separate them. We've got a handful of problems here. In theory, you can write a CRD for an operator that operates an application. That's a problem: say I've got two versions of that application that I want to run simultaneously, and they require different versions of the operator — that all tips up a bit. Orkestra is a slightly different thing; it's basically trying to say, here's something that would work for any application. I'm a bit more sympathetic to that — a little like how Helm is also application-agnostic; it doesn't care. And again, you would probably want to write the best practice on that kind of CRD: here are best-practice versioning behaviors for CRDs, for compatibility, semantic versioning, and so on. Yeah, I mentioned Orkestra just because it solves the chicken-and-egg problem: it can separate the creation of the CRD into a different chart. But there are other challenges too — you need admin access, or some sort of elevated access, to create CRDs. Yeah, workloads often have them. You characterized it correctly: CRDs really extend the platform; they're not simply part of the workload.
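A versioning best practice of the kind mentioned could, for instance, require that an older CRD version stay served for compatibility while exactly one version is marked as the storage version. A hypothetical `widgets.example.com` fragment, for illustration only:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    kind: Widget
  versions:
    - name: v1alpha1      # older version kept served for compatibility
      served: true
      storage: false
      schema:
        openAPIV3Schema:
          type: object
    - name: v1            # exactly one version is the storage version
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
```

Two CNF releases that need different operator versions can then coexist as long as both operators understand the served API versions.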
Yeah, and that's my point: Orkestra could be a mandated part of the platform, because it's not trying to be specific to an individual CNF or one vendor's set of CNFs. It would be a good example of something you could make a mandated part of the platform. But I think what you can't do — and I think we could write a best practice to this end right now, and start getting at the point that the world doesn't yet work the way it needs to — is, practically speaking, include an operator within an application and still preserve that platform/application boundary. Even though people do that, it is a bad practice, and we should document it as such. Well, I don't want to get into — we're right at the top of the hour, so I want to ask about your intent here. Were you trying to be more general, or would you like to keep it as it is? This is real; this is your use case you're putting forward, so where do you want to go with it? It could be something about onboarding of Helm-based or Helm-packaged CNFs and the challenges there, if that's what you want — but where do you want to go, so that we can get you useful feedback? My idea was to illustrate, or to specify, the set of practical challenges and expectations in general related to what we call onboarding. So it's more or less installation — let's simplify it to installation of a CNF onto a Kubernetes-based platform. I would prefer to keep it general. It seems to trigger discussions on best practices, which could all refer back to this point. If there is interest — and this is the feedback — if there is interest in going deeper into the specific issues with Helm, we could do that additionally. This is kind of an umbrella, because it's not only specific to Helm; it's specific to the audience.
Which installation audience are vendors preparing those functions for? Is it their professional services team, who get specially trained for it, or an experienced professional at a CSP? What the best practice is — what we believe — is open to discussion. It also covers how they prepare documentation, the prerequisites, and so on. So it's a set of things out of which we can derive a number of practices, and if we feel Helm, or whatever else, is the more burning one, we can focus more on that. I would prefer to keep it in this umbrella mode, and maybe make it broader to cover other aspects. Then, if there is appetite, I could write a separate one — a use case on best practices or challenges with Helm-based deployments — and that one would go into the details of where things are failing and what specifically in Helm we see as a challenge. All right. To move away from Helm a bit: you also mention GitOps and production practices, so this is going to be more general and high level. I think the question is how we're communicating. If, in the use case, the focus is around practices and trying to solve certain problems, then we're looking for solutions and practices to come out of it. You've pointed out some of these in the comments here — the packaging and distribution and other things. At the end of the file, actually, if you view it, there's a "so what needs to be done differently" part; that's kind of a call to action, a rallying cry. All right. I'd be up for working with you to see if we can expand and call some of these things out. It sounds like we're not saying "here is where we've solved problems" but laying out a bunch of the problems that exist; maybe that's what you're pointing at here. So this would be talking about the problems.
Then we have follow-up use cases and, eventually, best practices that point back and say: this one helps solve one problem out of the larger set. Is that roughly the direction? Yep. So, I've raised my hand; I'm willing to work with you, either async or synchronously. And we're at the top of the hour, folks. Frederick, if you're available next week to talk about the government regulations and cybersecurity, we can pick that up then. Yeah, not a problem; that should work. All right. Thanks, everyone. Thank you. Cheers.