Hello. Good morning, Taylor. Good morning. Afternoon, actually. Central European time, so it's afternoon for me. Yep. All right, good afternoon then. Good morning, Bill. Still morning for you, right? You're in U.S. time zones this week. We'll get started about five after to give everyone time to come in. The meeting notes link is in the Zoom chat; you can add your name and any topic you would like.

Greetings. Has anyone been following the executive order and all the things coming out of it, in regards to zero trust and software supply chain protection? I have no idea what you're talking about. Fun topic. In short, the U.S. federal government specifically is looking to start including things around software bills of materials, basically asking not only where the software comes from, but looking at all of its dependencies as well, and ensuring that the source of all of it can be worked out. They're also looking at applying zero trust architecture as a mandate for current and future contracts that connect with either the federal government or the military itself. They were moving in this direction anyway, but the Colonial Pipeline incident was the trigger that caused them to move. It may be worth talking about at a later time, once things start to settle, about what they actually mean by all of that, because right now it isn't settled, and they're going to put out more guidance on what the stuff means. Very likely it will affect things that go through here on the United States side. And what tends to happen is, if this stuff ends up shifting into best practices, then it's likely other countries will pick it up, even if they don't write it into law.
As the industry moves and best practices shift, you want to reduce your total risk from both a technical and a compliance perspective, so you want to make sure you're following best practices over time. My guess is we'll see other countries adopt it, even if they don't do so explicitly. And I'm not sure that's not what's going on already; auditing software already happens. It sounds like jumping right in, and it seems interesting for everybody, so we can just put this in as a topic. Frederick, I can find some links toward it and see if I can pull some slides, because we spoke about it in Linux Foundation Public Health last week. Maybe we can use their slides as a jumping-off point. Let me go work that out. Okay, I'll bump it down a little further. It sounds interesting; I just want to make sure everyone can join in. If you want to add to the discussion topic, I dropped a spot at the very end for Frederick to add the links. And I think you had a comment; if you could put it in there and save it for the discussion, so you don't forget, because there are a few others before you by the time we get around.

All right, so the meeting notes link is in the Zoom chat. You can add your name. Does anyone have any topics? Does anyone have any working items they haven't added yet? Can everyone see the meeting notes? Yep. Great.

So, merges last week were mainly grammar and linting type things. Interested-party updates: we had a few things we didn't get to. Tallad asked about KubeCon experiences. That's going a little further out, but if there's anything people want to add, or talks folks should go check out, I think the KubeCon videos have been posted. I put this at the top because it was there last week, and we hadn't gotten to some of the use cases again.
Last week, definitions took up a lot of the time. Simon, did you want to talk this week about the stateful CNF use case? Yes, I'd like to. All right. I don't have the material up, but I'm happy to hand over control and you can run through it; I think you had some diagrams. I'll share my screen. Go ahead.

So this is a use case related to 5G CCS, where CCS is a convergent charging system. It's a network function that needs to maintain state, and so Olivia and I are trying to define a use case around that. The diagram on the screen is a basic use case for how a user with a device interacts with the convergent charging system. The element within the CCS that needs to manage state is this block here, the account balance management function. I also went back through the various things involved and filled in a glossary of all the terms and acronyms around CCS that may not be familiar to people on this call.

The types of state we're dealing with are long-lived state, which relates to balances, subscribers, devices, quotas, and price plans, and short-lived state, which relates to the session and the session data. I'll skip through the initial sections and just talk through this particular use case.

The precondition is that we have a user, this little stick man here, with a device, and they're registered with the service, so they're allowed to use the network. The initial step is that the device, or rather the device subscription, makes a real-time request to the network for quota to use the network, and it starts a charging session. The network routes the request from the user equipment through the AMF, which is the access and mobility management function.
And then it goes either to the PCF or the SMF. In the initial case it goes through the SMF, the session management function, to the CHF, which is the charging function. That communicates internally inside the CCS to identify whether the subscriber exists, then runs some rating rules that result in a charging session being granted, and that's returned to the user equipment.

When a charging session is granted, it's given a quota: a certain amount of time to run for, or a certain amount of usage, such as a data allocation. Once that has expired or run out, the device requests more quota from the network and a subsequent request is sent through. At the end of the usage scenario, the device completes the session and finishes everything off, sending the final charging request through the CCS, which calculates the total charge and generates events. The events are used to fill in the details of your monthly bill, if you're a monthly subscriber, or are sent through to charging systems to actually account for how much data you've used.

Now I'll talk about why the CCS needs to maintain state. A 5G real-time convergent charging system has to maintain account balances and usage quotas for all active subscriptions. The CCS needs to accurately reflect the balance of the subscription across the account's lifetime. The state needs to be preserved in the event of any catastrophic failure, and the state should be available across the whole cluster to all stateless CNFs that need to access it. In this scenario, the SMF needs to access the state at any point to determine whether the subscription is active and the device is valid. One of the properties of a convergent charging system is that it needs to have low latency.
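The grant, exhaust, re-request, and final-charge flow described above can be sketched as a toy model. Everything here is invented for illustration: the class, methods, and event shape are not 3GPP-defined interfaces, just a way to make the state transitions concrete.

```python
# Toy sketch of the quota grant / re-request / final-charge flow.
# All names are illustrative, not real CCS or 3GPP interfaces.

class CCS:
    """Minimal convergent charging system: balances and session quotas."""

    def __init__(self):
        self.balances = {}   # subscriber -> remaining units (long-lived state)
        self.sessions = {}   # session id -> granted quota (short-lived state)

    def start_session(self, subscriber, requested):
        if subscriber not in self.balances:
            raise KeyError("unknown subscriber")   # rating: subscriber must exist
        grant = min(requested, self.balances[subscriber])  # rating rule: cap at balance
        self.balances[subscriber] -= grant
        sid = len(self.sessions)
        self.sessions[sid] = grant
        return sid, grant

    def request_more(self, sid, subscriber, requested):
        # quota exhausted: the device asks for another allocation
        grant = min(requested, self.balances[subscriber])
        self.balances[subscriber] -= grant
        self.sessions[sid] += grant
        return grant

    def end_session(self, sid, used):
        # final charging request: report unused quota and emit a charging event
        granted = self.sessions.pop(sid)
        unused = max(granted - used, 0)
        return {"event": "charge", "used": used, "refunded": unused}

ccs = CCS()
ccs.balances["alice"] = 100
sid, grant = ccs.start_session("alice", 40)   # initial grant
ccs.request_more(sid, "alice", 40)            # quota ran out, re-request
event = ccs.end_session(sid, used=70)         # close out and generate the event
```

The point of the sketch is that the balance is long-lived state that outlives any session, while the session entry is short-lived state that exists only between the grant and the final charging request.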
And we're talking ultra-low latency, where an end-to-end response time of less than a millisecond is necessary, or certainly less than 10 milliseconds. We need to allow devices to use the network, with a very short amount of time to make that decision. The service provider needs to respond quickly to let customers use the network; that's their expectation. They wouldn't expect it to take a number of seconds to get the grant that allows them to use the network, and if the service provider isn't able to make a decision quickly, they might lose money, for example by subscribers going elsewhere. When the charging system makes charging decisions based on the stateful data, that then informs other decisions: they go through to the policy control function, the PCF in this case, and that tells the device how fast it's allowed to receive data on the network, among other properties. The service provider needs to maintain the balances and quotas for potentially millions of devices in their network, and many of those devices will access that information concurrently. You can imagine a lot of subscribers downloading data in a small period of time.

So the state we're talking about here should have ACID properties: it needs to be atomic, consistent, isolated, and durable. A lot of these are properties of a database, and that's effectively what a convergent charging system is. The convergent charging system should also be resilient and scalable. The stateful CNF, which is how we're describing the convergent charging system, should continue to follow cloud native principles: a node failure shouldn't result in any service outage, so you need clusters of systems to allow for that.
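The ACID requirement can be made concrete with a small sketch. SQLite here is only a stand-in for whatever store a real CCS would use; the schema and names are hypothetical. The key property shown is atomicity: a quota grant either fully succeeds or leaves the balance untouched.

```python
# Sketch: atomic quota deduction using SQLite transactions.
# SQLite stands in for whatever ACID store a real CCS would use.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE balance (subscriber TEXT PRIMARY KEY, units INTEGER)")
conn.execute("INSERT INTO balance VALUES ('alice', 100)")
conn.commit()

def grant_quota(conn, subscriber, requested):
    """Atomically deduct quota; the whole grant succeeds or nothing changes."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            cur = conn.execute(
                "UPDATE balance SET units = units - ? "
                "WHERE subscriber = ? AND units >= ?",
                (requested, subscriber, requested))
            if cur.rowcount == 0:
                raise ValueError("insufficient balance")  # triggers rollback
        return requested
    except ValueError:
        return 0

granted = grant_quota(conn, "alice", 60)  # succeeds: 100 -> 40
denied = grant_quota(conn, "alice", 60)   # fails atomically: balance unchanged
```

Isolation and durability are what make this harder at CCS scale: the same guarantees have to hold under millions of concurrent sessions and across node failures, which is exactly the tension with the latency requirement above.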
You also need to be able to deal with spikes in service usage, by automatically scaling up and then scaling down after the spike has completed. The stateful CNF is a software system, and software systems need to be upgradable, like any CNF. The stateful CNF should also be portable across different hosts and clusters, so the system shouldn't be tied to a single node, because you might need to take that node out of the cluster. You should be able to use things like taints and tolerations to control which nodes are used for the CCS, and you should be able to make specific resource requests, such as how much memory you need for the in-memory database, how much CPU you need, and how fast the persistent volumes are that you use to back up your system. So you use the resource requirements that Kubernetes gives you, and you also configure affinity rules so that you don't have too many parts of the stateful CNF in the same place.

The challenge of running in a Kubernetes environment is this: in a non-Kubernetes environment, like bare metal, a PNF or VNF kind of situation, you're basically running with a specific set of machines that you have pre-provisioned. In a Kubernetes environment you don't get that control; you just have to request the resources and hope that the Kubernetes system gives you the nodes you need. For the convergent charging system to respond in ultra-low-latency situations, all the data needs to be held in memory, and because of that in-memory data you find it hard to move the workloads from one node to another. Yet you need to be able to move them to other nodes in the cluster if you want to drain a particular node for maintenance reasons. In this example diagram you can see that there are multiple clusters involved, and within each cluster there are multiple replicas of each particular process involved in the convergent charging system.
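Those knobs, taints and tolerations, anti-affinity, and resource requests, come together in the pod template. This is a hypothetical fragment: every name, label, image, and size below is a placeholder, not taken from any real CCS deployment.

```yaml
# Illustrative pod template fragment for one stateful CNF replica.
# Names, labels, and sizes are placeholders.
spec:
  tolerations:
  - key: "dedicated"              # run only on nodes tainted for the CCS
    operator: "Equal"
    value: "ccs"
    effect: "NoSchedule"
  affinity:
    podAntiAffinity:              # spread replicas across distinct nodes
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: ccs-abmf
        topologyKey: "kubernetes.io/hostname"
  containers:
  - name: abmf
    image: example.com/ccs-abmf:1.0   # placeholder image
    resources:
      requests:
        memory: "16Gi"            # sized for the in-memory database
        cpu: "8"
      limits:                     # requests == limits gives Guaranteed QoS
        memory: "16Gi"
        cpu: "8"
```

Setting requests equal to limits puts the pod in the Guaranteed QoS class, which is usually what you want for latency-sensitive workloads like this.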
In the case of node failure, you need to make sure the state is maintained across all nodes that may interact with the charging sessions, and you need to be able to bring up replacement nodes and fail the processing of requests over to the standby cluster if necessary. And just because we're storing the state in memory for speed reasons, there's no reason not to have permanent backups of it. So you need persistent storage, high-speed persistent storage, to handle things like checkpoints and snapshots of the data. That data is also replicated across different availability zones in a Kubernetes multi-cluster system, so you have geographic redundancy of the processing and of the data.

The conclusion is that although a stateful CNF does place different requirements on a Kubernetes system, it should still follow the cloud native principles we've listed here: it should be scalable, resilient, and portable. So that's all I've got to say; does anybody have any questions?

Yes, a question from my side. Regarding the stateful CNF, are we deploying it through the stateful controller, what we call a StatefulSet in Kubernetes objects? Okay, and that has certain limitations, I mean the known, documented limitations. So how are we addressing those limitations, especially when it comes to resiliency, during pod termination, or, like you said, node failure cases? Do you have certain workarounds of your own to mitigate such scenarios? Yes. One of the recommendations in Kubernetes is that if you have complex workloads, you should have a Kubernetes operator.
That's what we have done in our company: we've created a Kubernetes operator to maintain the StatefulSet. Kubernetes itself manages the pods running in that StatefulSet, but the higher-level knowledge about the relationship between the StatefulSet pods, and the management of how they should be brought up or brought down, is handled through the Kubernetes operator infrastructure, where you have a manager and a controller.

Okay. One question regarding this picture, actually. The two clusters: these are two sites, and then two separate Kubernetes clusters where the CCS is running? Yes, that's right. The way we structure it is that you have redundancy of pods within a cluster, and then you have geographic redundancy of clusters. And the state between these two sites, do you expect that it is synchronously replicated? Yes, any changes made to the state on the active cluster are immediately replicated to the standby cluster. Do you expect that to happen at the storage level, whatever that is, or are you making sure with the CHF function to transport all the changes atomically to both environments? It's done not directly through the CHF but through back-end replication. And it's not done through state-based replication. When I think about state-based replication, it's like whenever you write to a database, you write the transaction across the network to the secondary database. Effectively, what we do is, whenever you make a change to data, that same change is applied through a mechanism before it's written to the database; it effectively replays the transaction on the standby cluster. That makes sense. And when you are onboarding that onto an existing environment or existing platform, is this replication something you realize on top of Kubernetes? Yes, it is.
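The change-replay replication Simon describes can be sketched as a toy model: each mutation is applied on the active side and the same change record is replayed on the standby. The class names are invented for the sketch; a real product would ship the changes over the network with ordering, batching, and failure handling that this deliberately omits.

```python
# Toy sketch of change-based (transaction-replay) replication.
# Names are illustrative; real replication runs over the network with
# ordering guarantees and failure handling omitted here.

class Replica:
    def __init__(self):
        self.state = {}

    def apply(self, change):
        # a change record is (key, delta); replaying it reproduces the mutation
        key, delta = change
        self.state[key] = self.state.get(key, 0) + delta

class ActiveCluster(Replica):
    def __init__(self, standby):
        super().__init__()
        self.standby = standby

    def mutate(self, change):
        self.apply(change)          # apply to the active cluster's state
        self.standby.apply(change)  # replay the same change on the standby

standby = Replica()
active = ActiveCluster(standby)
active.mutate(("alice", +100))      # top-up
active.mutate(("alice", -40))       # quota grant
```

The contrast with storage-level replication is that here the *change*, not the resulting bytes on disk, crosses the site boundary, which is why it can happen before the write ever reaches the database.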
Yeah, all it needs is a high-speed network connection between the two sites, effectively, but yes, it's over the top of Kubernetes. I was asking just to check how it fits with our approach in similar cases, not the charging function, but similar data-intensive applications. We essentially have a storage class on ultra-high-speed, all-flash storage that sits next to the Kubernetes infrastructure, so there's something like a 100-gig iSCSI connection from the nodes. The storage systems usually also come with synchronous replication, so we replicate the volumes on that storage system synchronously to another site. Of course, synchronous replication has a performance penalty; asynchronous is better for performance, but it's always a question of what your recovery time objective and recovery point objective are. This is what worked for us: we skipped any attempt to keep the state inside the cluster and moved the state onto the all-flash, ultra-high-performance storage.

Yeah, we do use flash storage locally for this state here. It's stored in a high-performance system, SSDs effectively, co-located with the nodes but shared between all the processes involved in that particular cluster, for example, and they're used for cold start scenarios. The data stored there is then archived off onto not-so-fast storage; there's a limited amount of space in the local storage, so you need to archive off into other storage. But yes, I guess there are two ways of solving the same problem, and using a storage class is one way, especially if the storage system gives you the replication for free, effectively, by using a storage class that has that capability.
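However the persistence is provided, the cold-start path mentioned above amounts to checkpointing the in-memory state to persistent storage and restoring it in a fresh pod. A minimal sketch, with a JSON file in a temporary directory standing in for a real snapshot format on a fast persistent volume:

```python
# Sketch: checkpoint in-memory state so a replacement pod can cold-start
# from the last snapshot. JSON and a local temp path stand in for a real
# snapshot format on a fast persistent volume.
import json
import os
import tempfile

def checkpoint(state, path):
    # write-then-rename so a crash mid-write never leaves a torn snapshot
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic on POSIX filesystems

def restore(path):
    with open(path) as f:
        return json.load(f)

snap = os.path.join(tempfile.mkdtemp(), "ccs-snapshot.json")
state = {"alice": 60, "bob": 250}
checkpoint(state, snap)
del state                  # simulate the node (and its memory) going away
recovered = restore(snap)  # cold start: reload the snapshot into memory
```

A real system would combine periodic snapshots like this with a change log, so the restored replica can replay whatever happened after the last checkpoint.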
Whereas we do it, I guess, over the top, before it gets to the point where it's writing the state to the system. But in that case the question is: if you have performant enough storage for your persistent volumes, do you need to hold everything in memory? Yes, everything is held in memory.

I'd like to expand on that question, or remark on it. I think it's a very good analysis, but when you say there's a strict requirement to keep everything in memory, that sounds to me like a solution, not the problem definition. So I like the question about the objective recovery time and target response time; I think we should stick to that, and then maybe for the best practices pick the technology that can fulfill those requirements. There are things like the Redis database, which is itself in-memory but acts like a database that can be accessed through an API, so maybe something like that could be a solution here, but we need to better understand the requirements to see whether Redis would be the right solution.

Yeah, in this use case we're trying not to be too specific about our implementation, and also not too specific about any implementation someone may want to do to create this system. It's trying to be generic from the CNF perspective and to describe the kind of properties you need when you have a stateful CNF. Yeah, and I think you did a great job throughout the document, but the fact that you mention in-memory is, to me, mixing in or forcing a specific solution rather than leaving it open. So I would rather not have the specific reference to in-memory here, but rather some performance requirement or response time. Yeah, which is what we've described somewhere in here, where we talk about ultra-low latency, where we need to maintain that low response time and high throughput.
And however you implement it, you still need to maintain those properties, and generally the implementation falls to using local caches, or an in-memory system like Redis, for example, or other NoSQL databases that can handle that kind of situation. That makes sense; we just need to keep the requirements and the solution separate. Yeah, I agree. I've tried to do that by keeping it generic.

Simon, it might be in the challenges and limitations section, the second or third paragraph. Yep, you're right, there. Yeah, so where the short-lived state for a stateful CNF is stored in memory, and then the next one, where long-lived state is held in memory. So I guess it would be working backwards: the question is, why would we choose memory at all, and then, whatever the requirements are, have those written out. They may actually be somewhere else in this document. Yeah, just make sure that's highlighted, and then if you reference it one way or the other, it'll be more apparent. Okay, yeah, I can do that: explicit requirements about that. I don't even know if it has to be in this use case, but I like the idea of pulling those out somewhere. Could you add some of that to the discussion that's linked from the PR, and then maybe we can talk more if there's not a specific area you were thinking of, in discussion 167? Yeah, if you could. Was it Ravi who was commenting? Sorry, I think it was Ronny. Yeah, if you could add the comments you made to the discussion, just so I have a point of reference, then I can respond to them directly when I update the use case. And the same to you, Book, because you were making other comments about implementation details. So again, any comments and feedback.
Yeah, please add them to the discussion and I can follow up directly. Sure, I will visit the PR. This was a helpful walkthrough, definitely. I have some other questions, but I'll ask them in the PR and we can discuss there. Okay, thank you.

All right, does anyone else have any comments or questions for Simon? The next one we had in here was a 5G RAN use case, and I don't know if we have everyone on the call. Let's see, Sharma? I'm not seeing them in the list; Zoom is not finding them. Sorry, let me try that again.

All right, so this one's pretty extensive; there have been a lot of comments. And it seems like the latency comments on the last one, Simon, would have some related things for this, dealing with latency. Yeah. I don't want to go through all of this myself; I was hoping the person who wrote it could join, but they've added a lot on the other items. I think Frederick, Simon, and Victor have all commented, and there are some references around 3GPP and 5G. I particularly appreciate, Simon and Oliver, the way the two of you approached the CCS and what you're doing to break away from terms that may be expected. You may have references to existing material, because it's going to use those tunnels and everything for integration, but you've put it in your own words and added requirements that make it easier to map to other things. Frederick has gone through this, but I would just say, if everyone could take a look at this one; it's going to be a relevant use case. Does anyone have any comments on it, or I can hand it over if someone actually wants to walk through it? Ian, you had some comments, or Frederick or Victor or Simon, whoever. I need to give this a more thorough going-over than I have yet, I think, so don't assume I've done so. My comments were mostly around formatting.
Maybe it's a question of whether the syntax checker just checks for basic syntax errors; there's no checking for whether the markdown actually renders correctly. I don't think there's anything added to the checker to validate things like that, the bulleted lists and so on. I mean, I don't fully understand the RAN and 5G material, so I can't comment on the content too much, although I could follow along based on my networking experience. But that's all.

Quite honestly, if you can't follow along with the RAN terminology, then we should do a better job explaining it, because this has to work for others. There are some acronyms that are used interchangeably, and the second use case doesn't fully explain what all the terms are. It's no surprise; dealing with RAN is like learning another language. But you don't necessarily have to know the whole of RAN, in fact you shouldn't have to know the whole of RAN, to understand the use case, I think. I don't have to understand every application anyone might run in order to understand how to build an operating system, and I think that's how this should work for us as well.

Yeah. I have a general question: do we want this aligned with O-RAN, or would we prefer O-RAN to be a separate use case? O-RAN adds disaggregation and other terminology. I don't know if it could be a supplement to this use case, or if this could be written from the O-RAN perspective. What do you think? If it's different, then create a new one. And we really want use cases; like what Book and Ronny were saying as feedback to Simon, Simon's done a great job of breaking the requirements up, and we want to keep going that way. So, as with the memory usage question: why are we doing it, so that we can think about other options. With O-RAN, it would be "here's where we're going," but that's an implementation, so what are we learning out of that?
It sounds like it would be a new use case. If you're saying O-RAN solves something, then what is the use case it's solving, and how does that tie in? Yeah, I guess it could refer to this use case, just to not repeat things. Oh, absolutely. I also think a diagram would help; I added a couple of diagrams to my use case, and a diagram really helps with understanding when a document uses a lot of terminology. Maybe this one already has one? Yeah, I was checking, because it doesn't show in the PR, unfortunately, if it's embedded; I don't see any. Maybe look at how some of these other ones do it.

Chris, this is an implementation, but that's fine. One of the other concerns on that particular thing was: is there anything in the Kubernetes architecture itself, or in the Linux architecture with the way containers work, that is lacking, that is not sufficient for the timing requirements? I don't have an answer to this just yet, but it's something that's been sitting on my mind based on some of these conversations. I've been looking at this quite in depth, and I don't have an answer right now. When it comes to things like PTP, its job is to give you a really, really accurate idea of what the network clock says. That's all well and good, and it requires a certain amount of hardware support to figure out what the network is telling you about that clock. But then there's another part of it, which is the software interface, the place where you actually ask for the clock, and obviously that introduces latency as well. So when you say "I'm getting you accurate time," how many of those components matter, and how accurate do they all need to be? Exactly, and there are a lot of unknowns there.
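A crude way to probe the software half of this is to time the clock read itself. The sketch below measures only the cost of the local system call on a POSIX host; it says nothing about PTP accuracy on the wire, and the sample count is arbitrary.

```python
# Crude sketch: how long does reading the realtime clock take, and how much
# does that cost vary? This probes only the software path (the system call),
# not the accuracy of the time PTP delivers over the network.
import time

def clock_read_costs(samples=10_000):
    costs = []
    for _ in range(samples):
        t0 = time.perf_counter_ns()
        time.clock_gettime(time.CLOCK_REALTIME)  # the call under test
        t1 = time.perf_counter_ns()
        costs.append(t1 - t0)
    costs.sort()
    return costs[0], costs[len(costs) // 2], costs[-1]

lo, median, hi = clock_read_costs()
# On a loaded general-purpose host, `hi` can sit orders of magnitude above
# `median` -- exactly the unbounded tail being discussed here.
```

The interesting number is not the median but the worst case: an accurate clock behind an occasionally 15-millisecond read is useless for sub-millisecond work.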
Again, I'm doing this myself as a day job right now, and I can't tell you the answer, so it feels to me like there's a lot more unknown than you'd think. You could ask an expert and they probably wouldn't be able to give you a straight answer. I'm not sure if that makes me feel better or worse. At least PTP does define these limits, I mean, that's part of what it's trying to do: it tells you how accurate the clock gets over the network. But then, if I'm making a system call to find out what that clock says, and that system call takes 15 milliseconds, I'd put money on nobody having thought to write down that that shouldn't happen. Well, the synchronization workgroup is very complicated, but anyway, I will point out that some of these issues relate to Kubernetes through things like having the host run a real-time Linux kernel.

I hate it when people say that, because I've spent a lot of time with this in the past as well, and it's not a matter of what kernel I'm running, it's a matter of what it delivers me. Real-time kernels don't deliver you real-time behavior, for one. I mean, in O-RAN we have something called the non-real-time RIC. Yeah. Of course, these are a matter of definitions, and the term "real time" itself should probably never be used unqualified. I'm not upset with people using it, but it has to be spelled out, because it means different things to different people; there are different levels of real time, and then there's the speed of light. Yes. So, specifically about real-time kernels in Kubernetes: can you request that you want to run on a node with a real-time kernel? There isn't a requirement mechanism that lets you do that. And if you did, what would it mean?
The way I'd explain this to people who talk about real-time kernels is that the question you actually want to answer is: can I run on a host that gives me, for instance, a bounded response to system calls, so they return within a certain quantity of time? That would be useful for a real-time application. It's also a guarantee that Linux doesn't offer, with or without the real-time kernel. There are a bunch of bounds like that which are a lot more relevant to you than the piece of software that's used to implement them. Yeah. So I guess specifying a set of requirements for the host system may imply that it will give you decent performance: the percentage of CPU, lots of memory, those kinds of things will potentially give you a system that satisfies your base requirements, even if you can't request those specific real-time-kernel kinds of parameters.

Yeah, the way I would phrase it is: at the moment you see people saying "I want a real-time kernel," which amounts to "I don't know what aspects of the real-time kernel make my software work, but at least if I run it on a real-time kernel, I haven't managed to break it yet." Versus saying "if you do a certain thing within a certain quantity of time, then my software will definitely work, and if you don't, it will probably fail." One of those is based on constraining the actual software you use, at which point we start getting down to "Linux 4.x with certain sets of patches and so on, and nothing else will do." Then there's a security bug, but fixing it would break compatibility, so you're not allowed to fix it, and so on and so forth. So constraining software is a bad idea. Constraining behavior is a perfectly fine idea, but it's actually really hard, because you don't necessarily know which behaviors you're relying upon. It might be obvious, but then there's the life cycle of RAN.
RAN is a use case, well, we have a lot of these use cases, but RAN is one where you're specking out the hardware and the software very, very carefully, for all of this, so I don't know if a generic solution fits. Right, and the life cycle begins when you install the cluster in the first place, with all the CNI solutions. I mean, we're going back to what this workgroup is about: it's not just CNFs, the platform itself also needs to be designed in a certain way, and by that we mean not just the software platform but also the hardware platform. If you are going to run a CNF, then what services you are going to offer it is, I think, kind of key to making this a success. It's not just about saying "the CNF has to be designed with these beautiful procedures in place"; we're not talking exclusively about how the CNF developers think, we also need to know what Kubernetes has to provide and what they can expect.

A large part of the real-time Linux effort is actually focused on getting rid of spin locks within the kernel and making them preemptible. That doesn't mean you're going to get the timing you expect, but the fact that the kernel is preemptible means you're more likely to get it. But I think there are two parts to this, because one is how the software expects to interact: if it's something I can drop in with a device plugin and call it a day, then we may be in good shape. But if it's something that has to go to the kernel, then what are the implications? I know there's work there to make the clock system calls more predictable, and that there are some things there.
And in the normal course of things I would not be too concerned, but I am concerned that when we run it through a kernel, in a namespace, through something Kubernetes-based, and we get scheduling involved, do we need new timing guarantees for that? I get a little worried there, for that reason. Well, one of the most important things about the real-time kernel (and I believe Canonical has actually passed judgment on this in the past, and it's quite significant) is that you can run a process with the FIFO scheduler, which is effectively uninterruptible: if it's got work to do, it wins. In fact, scheduling won't happen, because it's already won the argument, so anything else begging for a bit of that CPU is going to lose. And if you run that, you make the platform incredibly fragile, because if it never feels the need to give the CPU up, then nothing else can run, which is obviously not a great way of designing a platform; the platform processes are the ones that are going to suffer, along with everything else. So, you know, certain elements of this can be quite a stability danger, shall we say. But a lot of this is about questions like: if something else steals time from me, if something else gets scheduled, how long before I get the CPU back? That's still not a hard guarantee; the real-time kernel says "I will try to give it back to you very soon." And when you consider how it's been tested: it's been tested for high-performance audio, so, you know, 40-ish kilohertz, 80 kilohertz responsiveness is fine, and it's been tested with user interfaces, where a couple of milliseconds is neither here nor there. We're talking microseconds here. And we don't care exclusively about how long an interruption lasts, but also about things like how many interruptions we get within a defined period of time.
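The SCHED_FIFO behavior described above can be exercised from userspace. A minimal Linux-only sketch (function name and priority value are illustrative); note that leaving a busy process in SCHED_FIFO is exactly the platform-fragility hazard just discussed, so this restores the original policy immediately:

```python
import os

def try_fifo(priority=10):
    """Try to switch this process to SCHED_FIFO (needs CAP_SYS_NICE or
    root), then restore the original policy right away. Returns the
    policy actually reached: SCHED_FIFO if privileged, otherwise the
    unchanged default (typically SCHED_OTHER)."""
    before = os.sched_getscheduler(0)
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        reached = os.sched_getscheduler(0)
        # Restore immediately: a runnable FIFO task starves everything
        # else on its CPU, which is the fragility discussed above.
        os.sched_setscheduler(0, before, os.sched_param(0))
    except PermissionError:
        reached = before  # unprivileged: the kernel refused, nothing changed
    return reached
```

The PermissionError path is the interesting part for Kubernetes: a containerized CNF without CAP_SYS_NICE hits exactly that refusal, so "my workload needs FIFO at priority N" becomes a platform capability the cluster has to grant explicitly.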
We've got a job to do, and it's got to be done within a certain time period, using, as we said earlier, an accurate clock. So the use case is surprisingly more complex than it looks at first glance, and it's also not in line with what you normally use timing for. So, just to bring this back to Kubernetes specifically: one of the challenges is, you know, we have affinity and anti-affinity, so a workload can ask to be on certain kinds of hosts or not to be placed with other things. But one challenge with "real time" (I'll put it in quotation marks) is that you want to know what other things are running on that node, because they can interrupt you. So there should be a way to request: I want the node all to myself, with these specific things, for this network function to run; please don't run any other network functions on that node. Yeah, I would be clear on the distinction between best effort and a guarantee. Anti-affinity is best effort: it's saying "I will not run you with something else, because that is likely to affect your performance, or may affect it," but something else may affect your performance as well; you're certainly never going to be the only process on a whole Linux box, and if you're running Kubernetes, that's an impossibility. A guarantee is a different thing: "I don't care what else is running on this box, because I know that I will get a certain performance level regardless of whatever is running on it." Yeah, the key to this is policy, and enforcing those policies somehow. It's a big topic. So, isn't that where taints and tolerations come in? You can effectively taint nodes to make them exclusive to your application, and then it's up to you what you run on those particular nodes. Yeah, exactly. I think we have a whole bunch of different tools here, and this use case is perfect for looking at all of them and trying to find a way to streamline them and suggest what the best practices would be.
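The combination just discussed (taint a node for exclusivity, tolerate the taint, and add hard anti-affinity against other network functions) can be sketched as plain pod-spec data. The field names below follow the Kubernetes pod spec; the `dedicated` taint key and the `workload-class` label are made-up examples, and `requiredDuringScheduling...` is the hard form versus the best-effort `preferred...` form mentioned above:

```python
# Pod-spec fragment for a "keep other network functions off my node" request.
pod_spec = {
    # Tolerate a node taint so only workloads that opt in can land here.
    # Hypothetical taint key/value: dedicated=realtime.
    "tolerations": [{
        "key": "dedicated",
        "operator": "Equal",
        "value": "realtime",
        "effect": "NoSchedule",
    }],
    "affinity": {
        "podAntiAffinity": {
            # "required..." is the guarantee; "preferred..." would be
            # the best-effort variant discussed above.
            "requiredDuringSchedulingIgnoredDuringExecution": [{
                "labelSelector": {
                    # Hypothetical label marking other network functions.
                    "matchLabels": {"workload-class": "network-function"}
                },
                "topologyKey": "kubernetes.io/hostname",
            }],
        },
    },
}
```

The matching node side would be applied with something like `kubectl taint nodes <node> dedicated=realtime:NoSchedule`. Note this still only controls pod placement; it says nothing about kernel threads, daemonsets, or the kubelet itself, which is the gap between anti-affinity and a real performance guarantee.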
So I think this is perfect. I don't think we have clear answers; there are a lot of aspects here. We're trying to solve a very big problem, and it might not be the best one to start with. There may be something we can get out of it, but solving the entirety of that problem, at least based on past experience, is probably a very, very hard task, and there are other things we could do first that we would actually get to the end of. I think a good approach for this would be to ask the ORAN people what they do. I'd be surprised if they haven't spent time thinking through some of the details, considering they want to sit in that space. They certainly have to have some answer, and maybe in practice it works well enough: yes, it could be an issue, but it happens rarely enough that we don't have to care about it. Then we move on. Well, that filled up the hour nicely. All right. Taylor added another link in the chat that I hope you could add to that discussion too. So, Advanced Cluster Management, or ACM, uses policies to decide on placement, on where in the cluster your workloads will sit. All right, I'll drop it in the chat here. Can you drop it in the discussion forum? Sure. I mean, I added 174 and put in a bunch of links, including one to, I saw there's a Kubernetes operator for OpenShift. I don't know if there's a repo and whether that's available for anyone, but that would be interesting as well. All right. Well, I guess we'll have to get to these others later. Frederick, thanks for adding the regulations. No worries; I put up a slide there if people want to see what some of the initial things are, and we can talk about it next week as well. Yeah, let's do that. We'll mark the rest deferred. Thanks everyone. We'll see you next week. Bye everybody.