where they were actually trying to put data centers, like mini data centers, in everyone's homes so the heat isn't wasted. It's used to heat the home, and then you have, like, a data center. So you get free heating and you get paid to host a data center or something like that. It's an interesting concept. Yeah, that used to be called Bitcoin. Okay, it's five after now. It's also Presidents' Day, so I'm assuming some people aren't joining, and in Canada it's like Ottawa Day or something, so Canadians are also on vacation. So I think we can probably get started. Yeah, so once again, the link to the meeting minutes is in the chat. If you can just add your name, that'd be great. And then I think this will probably be a lighter meeting because I don't think a lot of people are here today. But before we jump in, is there anything anyone would like to add to the agenda? I haven't written down the use case yet, but I have a second use case to add, which sits next to Ian's BGP one. And I'd like to have a short discussion on it. And if people like the idea, then I'm happy to write down the actual use case and put together a document on it. Sure, do you mind if I add you at the end? Yeah, that's fine. It's a short one. Okay, cool. Anything else? Okay, I think we can probably jump in. Yeah, so the first one is a merged pull request from Claudio at Equinix, just adding Equinix to the interested parties. I'm just using this as a reminder that if you are contributing, or if you're interested in this, go ahead and add your company as an interested company. Then we can show who's interested in this and show the value of the work that we're doing here. The second one is Lucina put together a contributing document. It's, I think, pretty straightforward. There's nothing crazy in here. It's just guiding new contributors to where they can get involved.
And obviously I think this can be fleshed out in the future as we define more what each of the work streams does. But I think this is a good first framework, and I don't think it's anything controversial. We have a couple of approvals. Does anybody have any strong opposition to me merging this? Okay, hearing none, I'll merge this. And I guess we close another issue now, so, sweet. Yeah, so there's now a contributing document. So if anybody you meet is interested in contributing, you can point them to this document and they can figure out where to jump in from there. Okay, the next one is this pull request from Watson. This came up on the call two weeks ago, and the change is basically to remove the "optional" from the user stories, because this is going to be, I think, a big part of what we do, and also changing it to "trade-offs, caveats and notes". So this has been open for a week now. It has four approvals, so unless there's any opposition on this right now, I'll also merge this one too. Okay, great. So we have another one merged. Okay, the next one was Ian, you created a markdown link checker. Do you want to just go through this? It literally reads the markdown and says, are the links right? And it runs as a GitHub Action, so it should run on pull requests. It does seem to have checked itself when I committed it, so I think it's working. I make no promises that it's flawless, but on the other hand, it's still better than nothing. So I would say we commit it now and we can always fix it. Yeah, cool. Does anybody have any strong opposition to committing it? I think if it doesn't work, we can also pull it out later. Okay, yeah, so thanks, Ian, for that. Okay, the next one, is Book on the call? I don't see him. He volunteered last week to create a template for the user stories, and I wanted to see if he had an update on that, but since he's not on the call, I guess we can wait one more week.
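For reference, a link-checking workflow of the kind Ian describes might look roughly like the sketch below. The file name and the specific community action are assumptions, not necessarily what was actually committed; the point is just that it runs on pull requests and scans the repository's markdown.

```yaml
# .github/workflows/link-check.yml  (hypothetical name)
# Uses a commonly seen community markdown link checker; the actual
# action in the repo may differ.
name: Check Markdown links
on: [pull_request]
jobs:
  markdown-link-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: gaurav-nelson/github-action-markdown-link-check@v1
```

Because it runs on `pull_request`, a PR that introduces a dead link fails the check before merge, which matches the "it checked itself when I committed it" behavior described above.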
I am having a call with him tomorrow, actually, to discuss with Ericsson whether they would be interested in contributing one of their CNFs to be tested. So I'll follow up with him then. The next one is, we're still looking for someone to define the actors and roles. I think a lot of the content is in the discussion, in the table of contents that Robbie put together, and also in this discussion. So if anybody wants to volunteer to write up definitions of the different actors and so on, I'm looking for volunteers for this, or to create a pull request. We can also leave it open for right now. This is just something that needs to happen at some point, so that we can have shared terminology and we're not talking about different things, but it's okay if we don't have anybody right now. The next ones are Ian, with your two best practices, I guess. Is there anything you want to discuss about either the BGP or the use of privilege? Or we can start the discussion there. Which one do you want to start with? So the use-of-privilege one is an issue for a reason. It wants justifying rather than just throwing in as a surprise to people, because arguably using privilege in any application, whether or not it's a CNF, is a bad idea, but on the other hand, everybody does it. So we know full well there's a reason for that. So you can't claim that it's just a universal best practice that applies to everybody, even though we think it's a really good idea. So I need to create a user story that basically says, if you didn't have privilege, then this thing you want to do would not be possible, and therefore we should not have privilege. And I'm fairly sure I know where I'm going with that. I'll have a go at it later today and try and write something up.
Yeah, my recommendation would be to also try to push it towards the multi-tenancy, shared use case, where you're running in a container versus running in a VM. It's clear VMs would give you that, at the cost of density, which is what they're looking for. In which case, a lot of the benefits people are searching for out of Kubernetes, not all of them, but some of the benefits, end up being reduced in value. So I would drive the multi-tenancy story pretty hard in that one. Yeah, I think that's a fair statement. We have to recognize, and I don't know that it's written down, well, it certainly isn't written down in the BGP user story, since that's the only one. It's not written down. But we have to recognize that our use case is multiple applications on one platform, and that is an outlier compared to what people normally do. And if that's the case, then that's where the use of privilege would become problematic. So, yes, absolutely understood. It's a bit weird, because it feels like I'm basically writing things that everybody already knows, but I think in truth it's not actually that. It's that everybody who does this for a living knows this stuff, but we all kind of selectively forget it. So having it written down means that we've got it codified, and that helps. Yeah, and I guess maybe this is an interesting point from Michael Peterson from Intel: how would this change the use case you're going to write up when using, like, Kata Containers or something similar? Yeah, I need to actually ask him what he means by that, because I didn't think practically there was much you could do with privilege if you were using Kata or, you know, Clear Linux or Clear Containers or whatever. I'm here, and I think I just added that there because, as was mentioned in discussion earlier, it's to the point where anything you kind of want to use privilege for,
you want to use it on the, I guess, underlying system regardless, and running it in Kata Containers, you won't really be able to get the functionality you might be looking for anyway. Yeah, that's fair. Yeah, with Kata Containers you tend to have two primary paths: either it uses something like, I think, KVM, which you don't get the density on to the same degree, or you go with the Firecracker approach, in which case, with Firecracker, you cannot use any pinned memory in your system, which means no devices or similar that you can drop in. Which is nice if you're doing a simple web application, but it may invalidate properties that we're looking for for density. So Kata Containers can help in some of these areas, but it's definitely a trade-off. And there are also issues when you start to do shared memory, as to how do you actually get shared memory, how do you actually get those ports, those inodes, in and out of the system; that's sometimes still problematic. Not an intractable problem, but certainly one that we would need to look into. Yeah, and I guess it also, I guess, removes the idea of having a generic CNF in any way, since you might be able to run it in Kata Containers and it might not crash immediately, but then if you try to run it on a system where it's operating directly on the host system, then all of a sudden you probably won't be allowed to even run the thing. Yeah, we'll need a multiplexer or something similar that allows you to specify the runtime of the system at that point, which is certainly possible, but yeah. Can we start a discussion on that? Because it seems what we're saying is that there are reasons why you might have a legitimate reason to choose the runtime. And it would be absolutely excellent if that weren't true, because in theory, if we have an application and it runs in containers, then the runtime shouldn't concern us. We should not be choosing that.
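Kubernetes already has a mechanism that plays the "multiplexer" role mentioned above: RuntimeClass. A rough sketch, assuming the cluster admin has configured a Kata handler in the container runtime (the handler and image names here are illustrative, and the API version depends on the cluster's Kubernetes release):

```yaml
# The admin registers a named runtime handler...
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata          # illustrative; must match a handler in containerd/CRI-O
handler: kata
---
# ...and a pod opts in to it (or omits runtimeClassName for the default).
apiVersion: v1
kind: Pod
metadata:
  name: example-cnf
spec:
  runtimeClassName: kata
  containers:
  - name: app
    image: example.com/cnf:latest   # placeholder image
```

Note that this keeps the choice with whoever writes the pod spec, which is exactly the tension raised next: whether the runtime should be an application choice at all, or purely an operator one.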
That should be an operator choice, but if you think there's a reason for it, then it would be nice to get that out into discussion, so that we can actually reason our way through it and then write it down. Yeah, and as an aside, I do expect other types of containers to eventually pop in as well. I've heard multiple parties talk about things like WebAssembly as an example of something that can be encapsulated and injected in interesting locations. Yeah, so I mean, you can do two things that I think work against each other, because I can always, obviously, run WebAssembly in a runtime that's built into the container. And similarly, on a suitable platform, I ought to be able to run a virtual machine inside a container, not a virtual machine to run the container. So there are questions about whether, for any of these, it makes sense for them to be infrastructure, or whether it makes more sense to make it a choice that an application can make. I have been thinking about it for a while now, and I'd love to hear what other people think on the subject. Absolutely, and I think a lot of it comes down to, how do you want to orchestrate the stuff? Like, presumably, if I have a privileged pod, I could probably run Firecracker, now that I think about it, or something else, and just make the guarantee that I'm not going to do anything beyond that particular pod. Yeah, and the step from there is to ask yourself, if you had an unprivileged pod, what would it take to run Firecracker, for instance? We're trying to find the balancing act between a platform that's not too complicated, because more complexity means it's more likely to be broken, and one that lets you do everything you might want to do.
And I don't know, so we'll get there eventually, but building applications that run in the 16 possible runtimes may not be the answer. Or, on the other hand, it may be that if you've defined what a runtime does clearly enough, then yeah, absolutely, it should work and we shouldn't stand in the way of it, but we're not there yet. We can always draw a line and say, this is in scope and this is not. Somebody can solve this problem, but that's not our purpose. I think it'd be easy to say, if you're managing your own runtime and running an application in there, like WebAssembly in an unprivileged pod, and you're handling the WebAssembly side, then it's clearly something that's application-driven and you can bring your own things to manage it. But if you're asking for it to be managed through Kubernetes, and using the cloud-native orchestrator, for lack of a better word, to manage those on your behalf, then the line's a lot more blurred. And whether it's in scope or not is something we could have a discussion on. Yeah, exactly. And the trade-off there is, more features is great in theory, but also more features means more complexity and a more fragile platform. And we're not expecting one person to implement this platform. So every platform has to be compliant, and you may be asking too much. That's why I say we need a discussion on this. It's not going to solve itself without actually having a conversation. I think it's super relevant, but I'm wondering if we need to separate it from just the idea of privileged containers, since I guess the runtime itself goes a bit beyond just the question of privilege. I have to agree here. I feel like we're going down a rabbit hole, but it's an important one. The issue here is you sometimes don't have a choice, right, because of limitations of the platform. So it seems somewhat aspirational to say, don't use privilege. But you might be saying, isn't that the opposite, which is that privilege may not be available to you?
It's only some environments that let you. That's also true, but the real point is, what are you trying to do with privilege? You're probably trying to access some sort of hardware. So if the platform doesn't enable that, then privilege might be your way around it, right? If you're building a box, that's a complete infrastructure-plus-CNF package, right? And the pattern we want to try to encourage, though, because we're talking about Kubernetes-native type stuff, as opposed to just a generic cloud-native approach, the recommendation I would push in that scenario would be to have something act as an intermediary; a device plugin is a great example. Because ultimately it's about making the systems in such a way that you don't have side effects through which you can affect others. That's ultimately what I think we're trying to drive towards here, too: to minimize side effects on other systems. And second, so that we have a clear path towards things that need to be orchestrated, like hardware, getting orchestrated, getting initialized, with the minimum set of privileges necessary to operate that piece of hardware. So in other words, a really good example of this is with DPDK, and the assumptions made around DPDK versus the assumptions made around Kubernetes. Kubernetes makes the assumption that infrastructure is immutable, and it doesn't mean it's hard immutable; it means that Kubernetes owns it. It's immutable to other systems, and Kubernetes will then orchestrate it on your behalf as necessary. Whereas if you're bringing in a workload that uses DPDK, it makes the assumption that it has full privileges across everything and everything is infinitely mutable.
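The device-plugin intermediary mentioned above looks roughly like this from the workload's side: instead of asking for privilege to open device nodes itself, the pod requests an extended resource that a device plugin advertises, and the kubelet injects only that device. The resource name and images are illustrative, not from any specific plugin:

```yaml
# Unprivileged pod that consumes a device via a (hypothetical) device
# plugin, rather than mounting /dev with privileged: true.
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-workload
spec:
  containers:
  - name: app
    image: example.com/dpdk-app:latest    # placeholder
    resources:
      limits:
        example.com/sriov-vf: "1"         # name is defined by the plugin
        hugepages-2Mi: 1Gi                # DPDK-style hugepage request
        memory: 1Gi
```

The scheduler then only places the pod on nodes where the plugin has advertised capacity, which is part of what makes the hardware "orchestrated" rather than grabbed.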
And so there's a breakdown that occurs here in the two process models, or the two, I guess you'd say, privilege models, that needs to be reconciled before you could start to effectively pair something like Kubernetes and DPDK and get it to scale up across multiple groups, multiple containers, of tenants and so on. And so these are the types of questions that I think we need to pare down to: it's not about the privileges, it's about the side effects that come with the privileges and how you can impact others, not only from a security perspective, but from an uptime and performance perspective. Right, I think that's well put. And this is exactly what I'm hoping to achieve with the networking orchestration task force, right? The idea is that if you can orchestrate these things, then you don't have to use privileges or other hacks, right? We basically don't want you to hack Kubernetes. We want you to use the cloud-native concepts. But as I said, it's somewhat aspirational, because the platform itself is just not good enough yet, right? If you're using Multus, well, you know, you're injecting CNI configurations, right? At the level of a Kubernetes resource, it works. But we want to reach solutions that are truly portable and truly isolatable, right? Workloads that you could deploy potentially anywhere that has some sort of minimum aspect to that platform, right? We're not there yet. So this is why I feel like this is somewhat aspirational. Yeah, and this ties in very well to one of the first things that was said on this topic as well: that we all know that privilege is bad, but we all just use it anyway. We try not to use it, but the point is really understanding, why do you need it, and what is a better solution? And the better solution can be quite complex, right? You have to think of the whole problem space, not just the specific problem that you're solving.
So firstly, the people who use privilege are CNF developers, who are, again, I'll keep saying this, not necessarily the best representative audience in public, because they tend to get shut in a room to write CNFs, and, you know, Gergely basically gets his whip out and cracks it over the people who work for Nokia, and, you know, I gently persuade the people who work for Cisco, and so on. But, you know, the other part of this is that they're deeply market-driven, so they will take any path that gets their code to market so it can be sold, and it tends to mean that what they're doing is taking pragmatic solutions, rather than necessarily saying, well, if we changed Kubernetes like this, then we could do that. It isn't that you can't do this necessarily; it's that nobody has really taken the time to explore what you could do here. So I don't want us to throw it out because pragmatism says it can't be done. I want us to at least give it serious consideration: what would it take? And it's a best practice. The point about best practices is, there are a bunch of average practices, and an average practice might be the only one we can do right now. Yeah. Yeah, and I think part of it as well is, with best practices you have the ability to break out of them, and what we want people to do, if they're going to break out of one, is to have made a conscious decision as to why they've broken out of it, and not just made a decision because it was expedient, even though that'll still happen. But to give them something very quick and easy to digest, so that they can have some sense of what the consequences are. And even if they decide to exercise a workaround and not follow the best practice, then knowing the consequences, they can at least try to do things to mitigate the issues that are present there as well. And some of these could be best practices that land on the operator side. Like, the best practice is, don't use privileges.
When you use privileges, the best practice is to then isolate those workloads into a separate part of your infrastructure, maybe a node pool or something similar, so that if they get broken into, you don't compromise the systems that are following best practices. And I wonder if we should also state why we're suggesting this as the best practice. I mean, if you do use privileges, what are the cons? You know, what bad paths can that take you down, right? It's like "don't use root", right, on an operating system. And there are other things; actually, root in a container is not the same as privilege, which is another one you might throw in there. But yeah, I mean, the whole point about not using privilege, I think the strongest argument for it is: if you want your CNFs not to be able to break the platform or other CNFs, then they can't use privilege, because privilege is literally unscoped. I mean, it lets you do anything, and that thing can be immensely stupid, and no one will ever be able to prove that you were the one that did it, that your CNF did it. That's why I think we need no privilege. Now, you can write the best practice precisely as: you shall not use privilege, because if you do use privilege, these are the things that will happen. And then if somebody decides, well, I don't like that best practice and I won't insist on it, then they know what they're buying into. Right, so is the umbrella security? Is that the kind of area in which... Isolation, not security, I would argue. There are things which are purely security. For instance, not running as root in a container is an application choice, and it does get you better security, because somebody can't go tampering with the parts of your container image that are not intended to be tampered with, like actual pieces of software, for instance. So saying no root in a container gets you something that's purely security.
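The "separate node pool" idea at the start of this exchange can be expressed with standard taints, tolerations, and node selectors. A minimal sketch; the label and taint keys are made up for illustration:

```yaml
# First, the admin taints and labels the nodes set aside for privileged
# workloads, e.g.:
#   kubectl taint nodes <node> dedicated=privileged:NoSchedule
#   kubectl label nodes <node> pool=privileged
apiVersion: v1
kind: Pod
metadata:
  name: privileged-cnf
spec:
  nodeSelector:
    pool: privileged          # only land on the quarantined pool
  tolerations:
  - key: dedicated
    operator: Equal
    value: privileged
    effect: NoSchedule        # tolerate the taint ordinary pods can't
  containers:
  - name: app
    image: example.com/cnf:latest   # placeholder
    securityContext:
      privileged: true
```

The taint keeps unprivileged workloads off those nodes, so a compromise of the privileged pod is contained to hardware that never runs anything else.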
This does get you security, I'm not arguing that, but it gets you a lot of other things as well, and the most important one, I think, is isolation. And portability, yeah. Yeah. But I'm thinking, in this entire discussion as well, I guess it's also just making sure that if you have to use privileges for any reason, then at least you'll have to provide, or you'll get pushed to provide, some justification for why you're doing it. Since, again, you can use it, and you can also use it in a way where it hopefully won't impact the entire system. But at least you should be able to justify why you need it and what the risks are of doing it. Right, it's like installing an app on your phone and it tries to explain why it's asking for access to your contacts. Yeah. Because it feels like that, generally. But so that becomes a bit of a meta question here, which is, if we list 100 best practices and some party only wants to subscribe to 95, then how do they say that's what they're doing? Wouldn't you want a report from them saying, these are the 95 we subscribe to and here are the five exceptions? And assuming it's a CNF vendor trying to sell something, or a telco design team trying to persuade ops that they aren't basically making their job impossible, then aren't you looking for: yeah, we subscribe to this set of best practices, and we have these exceptions, and we have these reasons for doing it, and this is how you work around it? Because you've got to persuade somebody. Firstly, it's not like these best practices are all or nothing, but you're not doing them because they're written down as best practices; you're doing them because they are actually a good idea. So if you're about to, you know, document what you're doing, you have to come up with reasons for the exceptions. I can see Nikolai has been the good citizen and has his hand raised. I just want to add a slightly different perspective and point of view.
So I'm mostly not directly involved with CNF development; my current employment and affiliation is with the higher levels of the stack, where we live in the happy world of TCP sockets. So once I have a TCP socket, that's all I need, and I can do whatever I want to do to do my job effectively. But still, I need privilege during my init phase, right? I mean, I don't know if this is a pattern and whether this is something that can also be used with CNFs, but I need my iptables set up before I'm able to do my job, because even if I have my TCP sockets, I still need all the traffic diverted to my proxy in order to take over, et cetera. I understand that this is not really applicable to all the CNFs in the world. Like, if you need to access a PCI device via /dev or whatever, then not really, but I don't know. Just know that this thing exists. So, Nikolai, that's actually quite interesting. When you say you need your iptables set up, what application are you running that starts playing with iptables? Because that would generally be considered to be overstepping the boundaries of what's civil. I have an init container that actually goes and sets the iptables for the particular port, in a way that it diverts all the TCP traffic to a particular port where, later, I will start my Envoy to listen. Yeah, every L4 and L7 service mesh does this, Istio and so on. Yeah, and Envoy does definitely overstep the mark, but it's interesting to note that Envoy is usually provided as part of the platform rather than part of the application. It's not Envoy as such. It's the stuff around Envoy, actually, like Istio and Linkerd; Envoy doesn't care. Just to be clear on the project line. Yeah, so one way to avoid this is, actually, there is the CNI layer, where you can either use Istio's CNI, or we have our own CNI, but actually you delegate this setting of iptables to the CNI chaining mechanism, which is actually running in a privileged mode.
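The init-phase pattern Nikolai describes, roughly as Istio's istio-init does it, looks like the sketch below. Note the init container does not need `privileged: true`, only the `NET_ADMIN` capability, since it rewrites the pod's own network namespace; the images, port, and simplified iptables rule are illustrative, not Istio's exact ones:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: proxied-app
spec:
  initContainers:
  - name: iptables-init
    image: example.com/iptables-init:latest   # placeholder image
    # Redirect inbound TCP to the port the proxy will later listen on.
    command: ["sh", "-c",
      "iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 15001"]
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]   # scoped capability instead of full privilege
  containers:
  - name: app
    image: example.com/app:latest
  - name: proxy
    image: example.com/envoy:latest           # the Envoy sidecar
```

The CNI-chaining alternative discussed next removes even this: the privileged iptables work moves into the (already privileged) CNI plugin, and the pod itself stays entirely unprivileged.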
So you can do these things there, and you don't need privileges for your init container that would set up the iptables. I think this is a really good point. As I said, this is a rabbit hole, but an important one. I do want to present yet another scenario. We're talking about CNFs, but there's a way to move the privilege requirement elsewhere. For example, you have a Kubernetes operator that manages a piece of hardware, say an FPGA accelerator or something that's in the system. So rather than the CNF requiring the privilege to do modifications, it could potentially update a CRD or call an API, and a different component would be the one that requires that privilege to access that hardware. So it's passing the buck, in a way, right? But then you can say, well, if we want CNFs not to use privilege, one strategy could be to centralize the solution that requires privilege into a component that is more manageable and more secure, et cetera. So you still get isolation; portability, though, might be more of a challenge, right? Because if you need to deploy that operator for that hardware, well, that operator would require the privilege. So it's passing the buck, but I would say it still could be a best practice, right? Because it's a suggestion for how you solve the problem of privilege, you know, more holistically. Yeah, and that's one of the first steps that we took with Network Service Mesh. And we actually presented the same thing in the Istio community shortly afterwards, this exact pattern, because it's a very effective one.
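The operator scenario just described can be sketched as a custom resource: the CNF stays unprivileged and declares what it needs, and a privileged operator reconciles that declaration against the hardware. The CRD group, kind, and fields below are entirely hypothetical, purely to show the shape of the pattern:

```yaml
# Written by (or on behalf of) the unprivileged CNF; reconciled by a
# privileged operator that actually touches the FPGA.
apiVersion: hw.example.com/v1alpha1   # hypothetical API group
kind: AcceleratorClaim                # hypothetical kind
metadata:
  name: fpga-for-cnf-a
spec:
  deviceType: fpga
  bitstream: my-vendor/filter:v2      # the operator programs the device
  targetPod: cnf-a                    # where the result should be exposed
status:
  phase: Ready                        # set by the operator, not the CNF
```

The privilege boundary then sits at the operator, one auditable component, rather than being scattered across every CNF that wants the hardware.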
It's about shifting the control away from the client, or the application, to something that's part of the orchestrator, or designed to work with the orchestrator, and that gives you that level of control over it. So you can ask for something, but it doesn't mean that you're going to gain the actual privilege to do it, in the same way that the kernel will do things on your behalf, or Kubernetes will do things on your behalf, but you're not guaranteed those; you have to go through the orchestrator itself. So I think it's probably the most effective pattern in Kubernetes that we're going to find that walks that particular balance at this point. Because it's clear that there are significant limitations in CNI by itself, you need something to help it, and it's clear that the device plugin by itself doesn't have the right set of properties, based upon how it was designed. So you do need something there, whether it's NSM, or an operator, or NSM creating an operator or working with an operator, or similar. All these types of patterns are definitely very valuable in this particular space, given the limitations. I want you to think in terms of Linux for a second. I can always do one of two things: I can write something as an application, or I can embed it in the kernel if I want to, right? There are examples of people who have written web servers that run inside the kernel. And we all know that's a bad idea, because you don't want the kernel to be any more complex than it needs to be, because it's a dangerous place to run code. But the job that the kernel does in general is, I can bring a kernel module along with my application, and someone who administrates that system is going to have to make the choice that they're willing to install the kernel module, because it could do very dangerous things.
But the interesting thing there is, it's the person running the system, the kernel, the operating system, that's making the choice to do that. It's not a guy who just wants to run an application as his own user that's doing that. Best practice is very definitely: if you want to run a process, it should run as you, with no privilege whatsoever, not start tampering with low-level system components. And I think this has got a strong parallel with that. Effectively, a thing that needs privilege is a thing that the platform provider should deal with, not a thing that the application should bring along, ideally. Well, we can't ignore the application aspects too, right? Kubernetes offers very basic RBAC security, right? Service accounts, et cetera. But your application might need something, probably will need something, much more elaborate: authentication and authorization, its own user system, other kinds of privileges, right? If you think of some operator running and doing something, well, not every pod running should be able to just connect and use it, right? This is why I think your umbrella is security too, right? We're not talking about just a privileged container, but which privileges and which rights certain pods have. Yeah, and I want you to think very hard about that, actually, because the question that comes up there is, should you be able, for instance, to install a CRD? Because a CRD is a system service, right? It's an API for Kubernetes, not an internal microservice component of an application. Should you be able to do that? Is that best practice? Should it be best practice? It's certainly what we do. I mean, I'm not debating that, but should it be best practice? There comes a point where we're going to have to, again, take a line that's a little more pragmatic and a little less perfect than we might like, I'm sure. But at least it's nice to know when we've made that compromise.
Well, this is why I think it should be under this umbrella of, I'm calling it security, but maybe you would call it rights management. I think that's probably more precise. I mean, operators are a best practice, right? CRDs are a great way, I think, for CNFs to work. We do want to encourage that. And you're exactly right, you need to be an admin to install a custom resource definition, right? I feel like there's a bigger picture that this whole conversation fits into. Yeah, there's also a whole set of work going on around trying to work out these particular types of questions in ways that also don't necessarily bind you to the cluster itself, because all of this authentication and authorization is very cluster-centric. And when you're trying to manage a fleet of systems, they certainly help, because they do give you some level of granularity there, but they don't give you the capability to manage these things cross-cutting. When you start talking about user authentication, or policy between systems that you're trying to connect to each other, then there's a whole layer of stuff above that that needs to be handled. And there are groups, like in the IEEE, as an example I can point to, who are looking at these exact types of questions for network slicing, for edge automation, because they're very relevant. Kubernetes gives them a lot of fantastic things, but it's similar to a Linux box. The permissions you put into a Linux box are box-centric, and the applications that you run on top of them need to be aware of, or compatible with, the systems as you put them together. And so... Yeah, there's an issue of roles here too, right? Think of the system admin, whoever installed the cluster; it's kind of like installing an operating system, right? So if users need a specific package installed, well, only root can install those specific packages, perhaps. So if you install an operator, that's part of setting up the cluster, right? Onboarding the cluster itself, the cloud.
But then the workloads are what we think of users putting on later. A CNF developer could argue, well, they have certain requirements for the CNF to run, so during installation of the cluster they need certain operators installed that the CNF requires. So these roles sometimes blend, right? A CNF provider could think of themselves as admins: we're installing something very low-level in the system. It's not just a workload like installing a database or a web app server; this is something that changes the system itself. These roles are just hard, I think, to map so cleanly onto Kubernetes, at least as Kubernetes is designed right now. It would be interesting, for example, if Kubernetes would let you create a cluster resource. Well, there actually are, right? Cluster roles that you can give service accounts that would let them install an operator. Anyway, I'm thinking out loud here; there's a lot to this discussion. Yeah, and this definitely dovetails into one of the primary reasons why I push NSM in the way that I do: that privilege and isolation. So I'm happy to put together the patterns we use, and we can show how that's handled in this bigger scenario in a way that helps drive some of those conversations or some of those patterns. And ultimately, on the NSM side, the data plane portion is an implementation detail. We don't really care if it turns into something that's Multus-driven, or you drop in an operator, or something similar. It's primarily about ensuring that it's not the application itself that has those privilege bits. And if I recall, I believe privilege is set across the entire pod, so I believe it's an all-or-nothing thing.
I'll have to double-check whether the pod specs have changed in the past few versions, but basically the pod has privileges or it doesn't. And if the pod has privileges, then not only do your init containers have privileges, but so does your main workload. So getting that particular thing sliced off, so that you don't have to run privileged in your main workload, is of high value for numerous reasons, which we can go over again if you want. So privilege is all or nothing, but a service account has certain RBAC permissions that allow it to either create privileged containers or not, right? So you can try to create a privileged container and it'll fail, because the service account doesn't have access to that. But if you want to run something like Istio, as an example, then you need privilege. Historically, and maybe they've changed this in Istio, you would need the capability to manage the iptables in the init container. And we're going to see the same pattern with CNFs. And so if you have privileges, then if that system is compromised, you may have RBAC applied to you, but something co-located with you, a different system connected on the same host, may have a different service account, and then it has access to everything at that point. It is effectively root on those systems. So this is why I'm leaning towards, you know, I'm feeling uncomfortable about this. As a best practice, it's not a black-and-white issue of "don't use privileged containers." It's rather: minimize and isolate the use of privilege, whether that's through RBAC, making sure that only a specific service account can do it in a specific namespace, or otherwise limiting it. So if you do need privilege for some reason, find a way to manage it in a way that's not black and white.
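On the "I'll have to double-check" point: in current Kubernetes the `privileged` flag actually lives in each container's `securityContext`, not at the pod level, so a single pod can mix a privileged init container with unprivileged main containers. This sketch models such a pod spec as a plain dict; the pod and image names are hypothetical.

```python
# Privilege is declared per container, so the "minimize and isolate"
# best practice can be expressed inside one pod: the init container
# gets the elevated rights, the long-running workload gets none.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "cnf-example"},  # hypothetical name
    "spec": {
        "initContainers": [
            {
                "name": "setup-iptables",
                "image": "example/net-setup:latest",  # hypothetical image
                # Only this short-lived container asks for privilege.
                "securityContext": {"privileged": True},
            }
        ],
        "containers": [
            {
                "name": "workload",
                "image": "example/workload:latest",  # hypothetical image
                # The main workload runs with no privilege at all.
                "securityContext": {
                    "privileged": False,
                    "allowPrivilegeEscalation": False,
                },
            }
        ],
    },
}

def any_privileged(containers):
    """Return True if any container in the list asks for privileged mode."""
    return any(
        c.get("securityContext", {}).get("privileged", False)
        for c in containers
    )
```

With this shape, an admission policy can allow the init container's request while still verifying the main workload carries no privilege.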
Right, and that's the reason for separating the privileged part out. You still need privileges, but you give control to something that's orchestrated not by the application but by the operator. So the operator can make the decision: am I going to allow you to have this effect on my system? The application can request it, but it's not the application that has full control and can push that forward. And so it prevents it from affecting other systems, and simultaneously, if the system is compromised, it can request other things, but it's not going to get them; it would have to compromise something else as a next step. So you have layers of security at that point. That's why the privilege separation: you still have privileges, something has to have privileges, you can't get around it, but you don't have to have that privileged thing present while you're running your primary workload. Best practice would be to do the privileged thing in some other sectioned-off portion while you need it. Right, that's one strategy. It's a complex strategy; there might be others that could also be good. Yeah, I believe Istio was heading in that same direction as well. If they haven't implemented it, that's the path they were heading down, because they were running into the same problems. And another approach could be an init container that has privileges attached to it and then drops privileges from there. I don't know if Kubernetes does that at this point; if it does, that's certainly fantastic and helpful. That'll work during initialization time. It won't help if you need to change something on the fly, but it certainly moves the needle. So the question there is less about what options we've got technologically, and more about who's responsible for that code, right?
If I'm tasked with holding together a platform that runs a bunch of applications, and any one of the application coders can basically break the platform by doing things that I actually let them do, then I'm not building a stable platform. I'm building a platform that is deeply fragile and problematic. Whereas if they need to do something that requires privilege, and I offer them a way of doing that that's safe, then we're golden, because they can only use paths to privilege that I audit and check, right? So they need an init container that does something dubious with their iptables? Okay, I'll provide them a means of doing that. I'll provide a mutating admission controller or something that makes that privileged init container come along. I won't provide them the ability to run a privileged init container themselves. Then you could get somewhere with that. It's really about having the dangerous tools in the hands of somebody who's responsible for the consequences. The init container is another strategy, right? In addition to putting it in an operator. If it's something that only needs to happen once, that's great: you've minimized and isolated it. You've moved the privileged part to an init container that's part of the installation. So I think that could be a good practice as well. Yeah, I mean, we're up and down the stack here. There are use cases, user stories, requirements, and design, right? The use cases and user stories dictate what we're trying to do. The requirements we pull out of that. The design, of which there may be many, good or bad, and we have both of those in the world, tries to say how we would do that. And then we derive best practices from that. An init container with privilege, injected by a mutating admission controller, is very emphatically a design choice.
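The "I'll make the privileged init container come along" idea can be sketched as the core of a mutating admission webhook: the platform, not the application, appends an audited privileged init container to an otherwise unprivileged pod. This is a minimal sketch of the mutation logic only, not a full webhook server; the container name and image are hypothetical.

```python
import copy

# A platform-owned, audited init container. The application team never
# writes or requests this directly; the platform injects it.
IPTABLES_INIT = {
    "name": "platform-iptables-setup",       # hypothetical name
    "image": "example/iptables-setup:latest",  # hypothetical image
    "securityContext": {"privileged": True},
}

def inject_privileged_init(pod_spec, init_container):
    """Return a copy of pod_spec with the platform's privileged init
    container prepended. The original spec is left untouched, mirroring
    how a mutating webhook patches the object rather than the request."""
    patched = copy.deepcopy(pod_spec)
    patched.setdefault("initContainers", []).insert(0, init_container)
    return patched

# Usage: the application submits an unprivileged spec...
app_pod = {"containers": [{"name": "workload", "image": "example/app"}]}
# ...and the platform decides whether the privileged helper comes along.
patched = inject_privileged_init(app_pod, IPTABLES_INIT)
```

The design point is that the only path to privilege is a container the platform owner wrote and audits, so "who is responsible for that code" has a clear answer.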
But on the other hand, there are two things. Regardless of whether we can think of a design using current tools, that doesn't necessarily make the user story invalid; it just means it's kind of difficult. The other one is, we do have design tools here, lots of them, and there might be a number of approaches we might think of, but we still need to say why we want to do this before we can start down that path. Yeah, and part of it is how you build and manage these systems, but there's also another side, which comes down to a risk profile. Imagine you're the operator of a particular system. You put controls in to prevent certain types of things from occurring. One of those controls is that you, by policy, default-deny access to privileges. If everyone is asking for privileges, that makes it very difficult to use that as a control. But if privilege escalation is an exception, and people are following a best practice, which is don't use privileges, or section those privileges off to something else, then when somebody asks for it, you now have an effective control that gives you a spotlight. This isn't necessarily a system you're going to say no to, but it's one you're going to pay a little more attention to, to make sure you understand the risk you're bringing into your system by enabling that CNF. So it's really not about telling people no; it's about creating a balance where application developers have a path towards getting their systems in with less friction.
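The default-deny-with-exceptions control described above can be sketched as a simple admission decision: reject any pod that asks for privilege unless the requesting service account is on an explicit, reviewed allowlist. This is an illustrative policy function, not any real admission controller's API; the field paths match the Kubernetes pod spec shape.

```python
def admit(pod_spec, service_account, allowlist):
    """Default-deny policy sketch: privileged containers are rejected
    unless the requesting service account has been explicitly
    allowlisted, so every request for privilege becomes a visible,
    reviewable exception rather than background noise."""
    all_containers = (
        pod_spec.get("initContainers", []) + pod_spec.get("containers", [])
    )
    wants_privilege = any(
        c.get("securityContext", {}).get("privileged", False)
        for c in all_containers
    )
    if not wants_privilege:
        return True  # unprivileged workloads always pass this check
    return service_account in allowlist
```

The control only works as a spotlight if most workloads never trip it, which is exactly the argument for making "no privilege in the main workload" the best practice.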
Because if they're going to go to every system and have to argue why they have to have privileges on every system, they may just do the extra work of cutting those parts out. It's much easier to argue "this init pod, or this one-off workload pod, or this operator has access to it, not the main workload" than "install this thing, it requires privilege." There's a very different set of communications that needs to happen in those two cases. And so think of it not only from the application side, what do I need to do to get my application out, but also from the operator side: I have to manage a fleet of potentially tens of thousands or hundreds of thousands of these systems. How do I sleep at night? Part of that is people following best practices that minimize the risk, one of those risks being things that require privilege. Cool. So I think this is a great discussion, but I'm going to pause it here because we have about six minutes left. Frederick, do you want to just talk about your use case in the time we have remaining today? Sure, I'll make it simple. This is not about the use case itself; it's about having a use case that is very simple. We have a good set of use cases, such as the one Ian is discussing with BGP enterprise VPN, but we also need something very simple that we can compare and evaluate our systems against. It could be something like a bump-in-the-wire firewall, as an example. Something where we ask the question: if we build towards this bigger path, are we handling a complex use case, which could be private 5G or carrier-grade VPN or whatever, and are we also handling the simple use cases? How much are we complicating the simple use cases? That's really the question.
So I think a bump-in-the-wire firewall would be a good example of a simple thing that could be representative. But I would also like, if anyone wants to, for people to think of other very simple things we can do that are not telecom-focused, something that is also useful in the enterprise space. Because part of our ambition should not just be how we can enable telecom; part of our ambition should be how we can enable enterprise to enter into this space and make use of these things. And ideally, best-case scenario, we get a unification of both enterprise and telecom APIs and tools. Of course the workloads will be different, but if we can manage to maintain some coherency at that level, there are significant benefits down the line for both communities. In short, that's all I wanted to say on this topic. Okay. Yeah, thanks. So making sure that we focus on networking use cases in general; networking is a broad domain where telco is just a subset. Correct. And then we have one up front, so that as we're building these things, we have something as a benchmark, and we can ask the question: are we over-complicating the simple things as well? Or, in the best-case scenario, we find simple patterns that work across both. Having that there will act as a guide, and we may determine we can't handle both in the same environment, which maybe is a valid answer, but we don't want to preclude it up front simply by focusing only on the hard use cases. Yeah, start small and then grow from there. Exactly. Yep. Cool. And did you say you're interested in writing that up? Yeah, I was supposed to write it up earlier, but my time for the past couple of weeks has been crazy with one-off company events, which I'm happy to talk about in no detail. I don't have time now; I'll have more to say about it later. Oh, cool. Basically, I can point you at press releases, and that's about it.
So. Okay, cool. We have two minutes left, so I just want to give you a short update on the self-nomination. Taylor and I were going to work on that right after this meeting, to send an email out, but obviously he's out of power, so I'm not sure if it's going to come out today or tomorrow. Same thing with the voting PR. So that's still in progress; it just got delayed because of natural disasters. I guess that's all I have for today. Does anybody else have anything they want to bring up or discuss in the two minutes we have remaining? Since Frederick is mentioning press releases: at Red Hat we've just had an internal reorganization. We now have something called ecosystem engineering, which is where I belong right now. So telco at Red Hat is becoming more organized and internally coordinated. It'll be interesting to see the effect, and the Red Hat people you'll see involved in some of the CNCF and CNF working group stuff will be refreshed. Do you have a link? Sorry. Mine was an acquisition. My company is being acquired, followed shortly afterwards by that company possibly having a liquidity event. That's what I was saying; it's been crazy. These are seriously one-time company events that are occurring. I feel like I'm living in Silicon Valley. Yeah. If you have any links to press releases, you can just drop them in here so people can check them out afterwards. I'd certainly be interested in reading them, and probably some other people on the call would too. No links from me. I'm giving you a sneak peek; you heard it here first. Okay, okay. Fair enough. We'll look forward to it in future meetings then. With that, we're at the top of the hour. Thanks everyone for coming today. Look out for the voting PR and the self-nomination email, coming out as soon as the weather cooperates. Thanks everyone for joining. Thank you. Bye.