Good morning, Ralph. Hey, Mike, can you hear me? Yes, I can. OK. Is there any echo on my voice? Sorry, didn't hear you. Yeah, no echo yet. OK, cool. I'm just figuring out whether I've configured the audio correctly. Hey, Kevin. Hey, Adrian. Who else is here? Can everyone hear me? No, I think that's it. Let me see where Sasha is; I'm going to ping him. So Sasha will be here instead of Mrunal, or? I think Mrunal was hoping to join today. OK, cool. And let me pull up the agenda in the meantime. Well, hello, gentlemen. Sorry for being late, a bit of Zoom connectivity issues. It's all good, we're just getting started here, trying to get the rest of the people to join. I think we can wait one more minute before we start. In the meantime, I can share my screen with the agenda, and we can note down the participants. Well, first of all, is there any agenda item that anyone wants to add before we start? I'm going to take care of this. That sounds fair for a big note. I assume the big item is the use of devices by non-root containers: we'll be discussing the current state, where we're at, and where you think we should be going instead. I think so, yeah. That's probably the first non-administrative item. So hi, everyone. Welcome to the working group meeting. I think we are being recorded, let me check. Actually, I still need to request permission from the host to record, and apparently I am not the host, so I'll have to figure that one out again. What is this "claim host" button in the participant list? Let's see, I need a host key to do that; I'll try to figure that one out. Oh, Mrunal is asking for the meeting link. Here you go. I'll try to figure out recording for the next meeting; sorry, this is not going as expected. And this Zoom link, is it going to be a continuous one, or is it just for this one instance? It should be the same Zoom meeting. OK.
Then let's add it to the doc. So the Zoom meeting is permanent, or at least it's set up until December, and we'll figure the rest out.

Talking with Ricardo, it would be nice to have the charter of the group finished by Friday. So having people go over it and post a comment or a sign-off would be really nice. I think the bigger question here is really: is there a section that's still needed? Like Adrian mentioned in the previous meeting: should we have a full section on use cases? I can't find it, but I think that was one of the questions that we needed to figure out. There is a section in the slides, but we probably need a separate use-case section in the charter.

The other thing we need to figure out as a group is the KubeCon USA panel: who wants to submit, with the deadline being Sunday. I think a panel is four speakers, so having some idea of who wants to submit and what we submit for the KubeCon USA panel would be pretty helpful. I reviewed your text and I like option number two; the changes I was thinking of are very minimal, to be incorporated later. Yes, I had written two options: one more focused on CDI, the second one more focused on the cloud group. I prefer this one, to be honest. I really like what you're saying. If people are interested in submitting another abstract, I'm happy to; I think we should definitely consider other alternatives. But most of the conversation needs to happen on the Google Doc, so feel free to take a look at the text and comment. And if you're interested in contributing, being an author, presenting, just let us know. So, Renaud, for the submission, what other information do we need? Names, but besides the names, do we need a bio or anything else? I think mostly names.
And then I'll try to synchronize everyone on the submission process on Thursday or Friday. I plan to be there, so I'd be willing to help you out. OK. And I agree with the second one; the focus is to try to get help from that group, right? Yeah.

All right, Sasha, do you want to take over to talk about non-root containers? Before we go, regarding this charter document, I had one comment about the use cases. We know some of the use cases right now, but as we start to work on the actual specification, I think we will encounter additional use cases. So figuring out all possible use cases should also be part of the charter of this working group. That's definitely fair enough. If this document is only for working-group creation, then it should just state the goal, and not specific items one, two, three that we are going to do. Yeah, that's a reasonable position. So maybe use cases need to be specific to the effort that we're solving; for example, use cases for CDI need to be specific to CDI. There can be examples of specific use cases that we're hoping to solve immediately, but I agree with what you're saying: we need to be careful not to prescribe a very specific area. Very true.

Is the audio OK? I think Merrick is trying to speak, but the audio is super quiet. No, it's still very quiet. It sounds like the audio codec; you should probably try to join by phone, or re-log in. But your keyboard sounds fine. All right.

For our topic about non-root containers, I actually invited my colleague Mika, who is specifically looking at this topic. We both looked at it, but he wanted to speak about it. We created this proposal with options A and B, and then I think Mike added one more option, option C.
So I think we can spend a few minutes brainstorming the alternatives. Let's bring up that doc and start with the container runtimes, I think. It's not clear to me that we've got a complete understanding of how the container runtimes are currently setting the GIDs of devices — I mean, we're doing lstat on the host path, and it almost reads like you don't think we are, so I'm not sure if we're in sync. So according to what we currently see in the code of containerd and CRI-O, they simply copy exactly what is present on the host. So if your hosts are different distros, then the video group, or some accelerator group, will have different numeric IDs. If you mean getting the information from the host path of the device that is passed in, yes. But Sasha's comment was that even the numerical group IDs might differ between hosts. Right. So if you want them set, we need a way to set them. If you don't want to use the host path's IDs, then we're going to need some way to set them for each host device separately, I think. I don't think we can just copy the user ID of the container's process to the user ID of the host device, right? Or of the device that you want to mount in the container. Why not? Well, it's not about mounting; it's about creating a new device node inside the container namespace. Correct. Why do we presume that all host devices should have the same user ID as the container's process? I mean, that's certainly a pattern that we could support. I'm just saying that's not the way the specification is drawn up, not in the runtime spec or in the CRI spec coming from Kubernetes. Well, the CRI spec doesn't have it at all; it has no information about permissions. It's missing. That's right, it's missing the host device permissions. Yeah. So regarding the runtime.
And the reason, by the way, that it's missing is that the initial design was to pull that from the host path they pass us. So if they wanted to create a device with a different set of permissions, they would create that device and then pass it to us from Kubernetes. That's the current design pattern that we're using. But they may not be doing that, right? They may not be doing anything extra yet. But that's why we did what we did, OK? Just providing a little history. I would say it's a bit of a different story: most probably it was just an oversight in the initial design. At that time, most people were not running containers as non-root; the majority of workloads were running as root inside the container.

I think there are two different cases here, right? One is the regular devices created by the container runtime, like your /dev/ptmx and all those things. Those, I think, should just retain the host permissions, because some of them have special groups and users. But if I understand the proposal correctly, we are only talking about changing the UID and GID of the devices that are injected. So it will change it for your GPU device, for example, and not for everything, because the default devices will still be created by the container runtime as part of its templates. Yeah, it's only for devices injected by device plugins. OK, yeah.

So right now a device plugin only communicates, over the CRI path, the host device path and what the user may do with it: read, write, and the mknod operation. What that translates to is that the runtime copies the permissions from the host and then does a cgroups modification to enable access to the device for those operations. But the problem is that the container config can come to the runtime saying: run this as user 25 and group 33. And we end up in a situation where these user-ID and group-ID numbers are effectively random — the user cannot predict what ownership is on the host.
And our first thought was: OK, if the user is requesting this user ID and group ID, then that user should get access to the device. So let's use this group ID and user ID as the ownership information for this particular device. Yep. Which is basically the option A that we are proposing, and it's a natural choice in the sense that the application owner declares: give my process this user ID and group ID, and I also want to use the devices. So make the device's user ID and group ID the same. Yeah, that makes sense. My only worry is: do we have any edge cases where we wouldn't want to do that? This definitely makes sense for GPUs and those kinds of devices, but will it break any legacy workloads? Say you have a bunch of different processes running as different UIDs, and then we change the device to the UID and GID of the primary process — is that going to break such workloads? So what I'm really getting at is: should this be specified at the CRI level, per device? Or can we assume and just say that whatever devices were sent to us over the CRI are OK to chown? Yeah, that was my worry too, Mrunal. We're not going to cover those edge cases. Right. It seems like it wouldn't be that hard for us to either have a switch for the mode for all the devices, or we could have a new security context for each device, set by the Kubelet. Yeah.

So there are a few things here; let me try to answer the questions one by one. The first question: do we have a scenario where we don't want ownership matched to the process's user ID and permissions? I tried to think about it and I haven't found one. Practically, if the user wants to use the device, he needs some way to use it. The only potential scenario is if you have some device which is read-only. If the permission allows only reading from the device but not calling any ioctls and so on, that might be a limiting factor.
But on the other hand, we are getting information from the CRI which says what mode of operation we have for the device — read, write, mknod — so we can chmod according to that information. Regarding the multiple user IDs you brought up: we actually have two scenarios. One scenario, which is what we have right now, is that each individual device is assigned to one container; we don't currently have devices shared between multiple containers. And if the container is started with a specific non-root user ID, there is practically quite a small chance that processes inside that container will be running with a different user ID, right? Yeah, but just last week I was talking to a customer trying to move a legacy workload, and they are running a bunch of different processes. In such cases, if we chown to the primary container user while other processes run as different users, then when those processes try to access the devices, they'll just get a permission error. Yeah, Mrunal, that's the type of scenario I'm thinking about: running multiple services within one container — services, microservices, that sort of thing. And you may not want to give any old microservice access to devices, or at least not except through some interaction between the processes running in the container. But then the question is: should we go with 666 for the device node, so that any process inside the container that requested the device will be able to use it, or...? Well, I'm looking at some devices that are owned by root:tty and they have 660. So I think... That's a different kind of device. Those system-wide TTYs and so on are not in scope for this particular change. We are talking only about GPUs and other accelerators that come from the device plugins.
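The "chmod according to the CRI information" idea can be sketched as a pure mapping from the CRI device permission string to file-mode bits. This mapping is an assumption for illustration — today the runtimes copy the mode from the host node rather than deriving it — and note that 'm' (mknod) governs the device cgroup whitelist, not the file mode.

```go
package main

import "fmt"

// modeFromCRIPerms derives a device-node file mode from the CRI
// permission string ("r", "w", "m"). Hypothetical policy, not what
// containerd or CRI-O do today; 666-style world bits are used here
// purely to keep the sketch simple.
func modeFromCRIPerms(perms string) uint32 {
	var mode uint32
	for _, c := range perms {
		switch c {
		case 'r':
			mode |= 0o444
		case 'w':
			mode |= 0o222
		}
		// 'm' (mknod) maps to the device cgroup, not the file mode.
	}
	return mode
}

func main() {
	fmt.Printf("%o\n", modeFromCRIPerms("rwm")) // prints 666
}
```

A read-only device ("r") would come out as 444, which matches the read-only limiting case mentioned above.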
But how will the container runtime know? The container runtime will only get it through the CRI, right? Yes. So can it assume that anything sent to it over the CRI is always a GPU-like device? Nobody prevents someone from taking a TTY-like device and mounting it through the Kubelet, and it will also show up through the CRI. That's the problem. In theory, yes. But aren't the TTYs also scoped to the container namespace? With system devices like /dev/zero and so on, I think we are scoped. Yeah, we may be fine, but I still worry about edge cases. So I feel like scoping it down to the device level may be better, if possible: at the device level you say, I just want to change the permissions of this one, or have some security-context field. Yes. So the way I see it, option A's big concern is: would we be breaking existing users or existing workloads? Existing permissions on those devices, yes. Yeah, there are ways we can mitigate this. I don't know how that would translate to containerd or Podman, but in Kubernetes we could go through the feature-gate path. I don't know if that translates well to Podman or containerd, but we could gate the feature, eventually enable it by default, wait a year or two, and then figure out whether we've had feedback from customers concerned that we broke their workloads. That might be one way forward. The other alternative, passing a device security context, definitely makes some sense, because there we're certain that we wouldn't be breaking users. Right. The problem is two things. One thing is, if we are talking about the outcome of this group being the CDI interface for injecting devices: once we get that, the problem goes away, because we will have the ability to specify a runtime-spec part which can also include ownership. Yeah.
So that's the long-term solution. For the mid-term or short-term, we have two options. One is option A: we just clarify the assumption about what the device permissions will be from the CRI to the OCI level. And option two is we start changing the interfaces — from the device plugin to the Kubelet, and from the Kubelet to the runtime — to include this as well. Yeah, that's long-term. Well, those are practically the two options we have. With a device security context, it means changes to both interfaces: between the device plugin and the Kubelet, and between the CRI and the runtime.

So how about one more option? The way we got support in for Kata and such: if containerd and CRI-O agree on an annotation, we can use an annotation in the runtimes to gauge how it's going to work out, right? You'd just need a map from the device path to the UID/GID that you want to set. Well, it could just be a flag that says: hey, chown my devices. Oh yeah, that would work too. Yeah, you could do A, B, or C in that prototype, just based on some annotations. And the device plugin can pass down annotations since, I want to say, 1.14? Yeah. Also, I don't think it would be hard for us to add a security context, to tell you the truth, but yeah, an annotation works too. Mike, the problem with a security context is that the device plugin, which knows something about the device, doesn't know anything about the pod or container requesting that device. So we don't know the user or group ID of the container that will be using the device. But you can still have a boolean saying: chown it, right? Because you don't want to change it to something else; you just want to change it to the container user, right? A boolean, yes, a boolean we can use. Yeah, I can't think of a case where it would be something else — where you have one user but you're changing the device to some other ID for your use case. Can you think of one?
Just one scenario: if it's a group thing, if there's group sharing or something, then it could be a case where your group ID is shared across different pods but your user ID is not. Yeah, but even for the group ID, you cannot control what the end user will put in his YAML — which group ID he wants to use. So Mrunal, you can speak for containerd and CRI-O; Podman is a different thing. Yeah, sorry about that. So a boolean annotation which controls whether to chown the device to the runAsUser and runAsGroup — that works. I just don't know of scenarios where it's not needed. So maybe we start with the annotation, we let people try it out, and then they give us feedback. I think this is just an opt-in type of flag that we can set ourselves, saying that we explicitly want the runAsUser and runAsGroup values. Right, instead of using the host path's ownership. It would have to be some agreed-upon name, some word. Yes, that's what I'm saying. I know the use cases where this is needed; I don't know use cases where the host user ID and group ID are valuable inside the container namespace. I think the only scenarios will be legacy workloads that haven't been broken into a microservices architecture and still have multiple processes running — where if you chown everything to just one user, you might break other processes running as a different user. Yep, like WebSphere. But that means this kind of legacy workload relies on a pretty strong assumption that the OS inside the container and the OS on the host are exactly the same. Right, we're talking about Linux here, mainly not a VM-like scenario. Yes, I'm talking about Linux too, but, for example, the video group on Ubuntu is different from the video group on openSUSE. That's what I'm talking about. OK.
So only if you have a container workload which is exactly the same as your host OS can you rely on the mapping from the user-visible group name to the numeric ID. OK. But anyway, we can start with the annotation, and if we see that there are no breakages, then in the long term we can try to do it properly. The device plugin has the ability to set container-level annotations, and the CRI level will get them. Yeah, so I think it's workable. It will just be a bit more code in the runtime patches. Which is OK. As long as containerd and CRI-O can agree on what the annotation should be called, we'll be fine. I think the next steps for this are making pull requests against CRI-O and containerd, right? Yeah.

Regarding this annotation, we actually added this topic to the SIG Node agenda as well. So do we want to talk to the SIG Node folks about standardizing the name of this annotation? I don't think we should ask them to standardize the annotation, because that would be more on the OCI side. But I think we do need to talk to them about whether they would want a KEP, so they could own extending their current device API, right? And then enhancing the CRI API to pass the correct security context. I guess we can just inform them of why we are doing this, and then follow up later on how it works out. Right now, I'm not sure — do you want to block on a KEP for this annotation? No, I wouldn't block on a KEP. We can start the KEP discussion: this is step one with the annotation, and depending on where it goes, we may have to make it the default behavior. Yeah, we're just doing the annotation to explore the space. Right. Practically, the annotation mechanism doesn't require changes to anything in Kubernetes, so it's information-only for the Kubernetes folks. But if we want to change any of the interfaces, then we definitely need the KEP. Right, yeah.
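A runtime-side check for such an opt-in boolean annotation could look like the sketch below. The annotation name is hypothetical — the group has not agreed on one yet — and the default of keeping host ownership mirrors today's behavior.

```go
package main

import (
	"fmt"
	"strconv"
)

// deviceOwnershipAnnotation is a hypothetical name; the working group
// still needs to agree on what the annotation should actually be called.
const deviceOwnershipAnnotation = "io.kubernetes.cri.device-ownership-from-security-context"

// chownDevicesToContainerUser reports whether injected device nodes
// should be chowned to the container's runAsUser/runAsGroup. Absent or
// malformed values fall back to false, i.e. keep host ownership.
func chownDevicesToContainerUser(annotations map[string]string) bool {
	v, ok := annotations[deviceOwnershipAnnotation]
	if !ok {
		return false
	}
	b, err := strconv.ParseBool(v)
	return err == nil && b
}

func main() {
	fmt.Println(chownDevicesToContainerUser(
		map[string]string{deviceOwnershipAnnotation: "true"}))
}
```

A boolean fits the discussion above: the device plugin cannot know the pod's UID/GID, so the flag only expresses "use the container's user", never a specific ID.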
I think what we can do today is report out the short-term decision on how we want to proceed. Right, yeah. Maybe having a small issue first — well, you already have the issue, but writing our short-term plan on the issue, to figure out whether this approach serves us correctly, would probably be a middle ground, right? Between writing a KEP and not saying anything. So yeah, we can update that issue to capture the decision for now, mention it in today's SIG Node meeting, and take it from there. All right. Do you have anything else on the topic, Sasha? No, not today, I'm fine with the current state. All right. And I can look into adding a pull request with an early POC implementation. OK, cool.

All right, and for the next item there's no specific action. I wanted to mention that device monitoring in Kubernetes is currently working towards GA. We're looking into disabling some device metrics that were collected by the Kubelet. Before we go to GA, the general question to this group is whether there has been any concern around device monitoring, whether there have been any issues — just getting some feedback on the existing mechanism from other people using the pod resources API would be nice. I know that Multus CNI is using the pod resources API, and I know that we have the GPU exporter, but having more feedback from different people on this would be nice. OK. I don't know — have you used this metrics API or not? Oh, it's not really a metrics API so much as an API that you can use to figure out which device belongs to which container and pod. For example, the problem that we have at NVIDIA is that when we are collecting metrics about a GPU and we export them as a Prometheus exporter, we have metrics such as GPU utilization, right?
And then we have GPU=UUID as a tag, and maybe it's a counter, and then we have a value — a number, say a hundred, right? But right now the problem with this specific metric is that, at the end of the day, you want to be able to draw a graph that says: for pod and container X, the GPU utilization is a hundred. Right now you only have a node-level view of the utilization. So what the pod resources API allows you to do is have an attribute that lets you export information on which container and which pod it is, right? So basically in your exporter, you can now query the Kubelet to figure out: hey, I have this GPU, I have this device, can you tell me which pod and which container it belongs to, so that I can tag my metrics with that information? Yeah, well, my colleague Ukri was, I think, using that interface, right, Ukri? Well, we don't have the level of detail that Renaud just described, unfortunately, but we would like to have it. As for the Kubelet metrics, no, I haven't really been using those. I've been using the device information from the gRPC interface, which basically tells you which devices are used in which containers — I've tried that. I guess we're talking about the same API, the pod resources API. Yeah, I've experimented with it; it seems to work, it's usable. What are your plans regarding it? Just make it GA. We don't have any plans to extend it; we just want to make sure that it's GA and not beta indefinitely. OK. All right. So I think I'm pretty much done. Feel free to read over the KEP if you're interested in that topic, and put a plus-one if you think it's OK. If not, feel free to comment.

CDI is the last topic. I've created the GitHub organization and invited most people in the group, if not everyone. Let me know if you're not in there.
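The device-to-pod mapping described above can be sketched as a pure lookup over a pod-resources List() response. The structs below are simplified stand-ins for the real pod resources API types, with field shapes abbreviated for illustration.

```go
package main

import "fmt"

// Simplified stand-ins for the kubelet pod-resources API response types.
type ContainerDevices struct {
	ResourceName string
	DeviceIDs    []string
}
type ContainerResources struct {
	Name    string
	Devices []ContainerDevices
}
type PodResources struct {
	Name, Namespace string
	Containers      []ContainerResources
}

// deviceOwner finds which pod and container a device ID belongs to,
// so an exporter can tag per-device metrics with pod/container labels.
func deviceOwner(pods []PodResources, deviceID string) (pod, container string, ok bool) {
	for _, p := range pods {
		for _, c := range p.Containers {
			for _, d := range c.Devices {
				for _, id := range d.DeviceIDs {
					if id == deviceID {
						return p.Namespace + "/" + p.Name, c.Name, true
					}
				}
			}
		}
	}
	return "", "", false
}

func main() {
	pods := []PodResources{{
		Name: "trainer", Namespace: "default",
		Containers: []ContainerResources{{
			Name: "main",
			Devices: []ContainerDevices{{
				ResourceName: "nvidia.com/gpu",
				DeviceIDs:    []string{"GPU-abc123"},
			}},
		}},
	}}
	fmt.Println(deviceOwner(pods, "GPU-abc123"))
}
```

In a real exporter the `pods` slice would come from the Kubelet's pod-resources gRPC endpoint rather than a fixture.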
I think the general question is: what are the next steps, and what are some of the questions we aren't comfortable with yet? My understanding of the next steps is that there are two things to be done. First, regarding the spec itself. You had very good content as a starter in your private repository, and I think we should reuse as much of it as we can. So if you can create a PR, we can start commenting and maybe changing a few things. And if you can split it a bit — the text part and then the spec part itself — we can do multiple PRs. That's one thing. The reason I'm thinking in PR mode is that when we first discussed the spec, I was saying that in this container spec we should be able to put the majority of the information that the OCI runtime spec can carry. And while my comment is still true, I think we should have a list of items which are explicitly allowed to be set. So we know we have the permission mechanism for a device, we know which fields and which environment variables can be injected alongside the device, maybe a few other fields — but we shouldn't be saying it's the wild, wild west where you can inject everything from the OCI spec. So: explicitly clarify which bits of the OCI spec can be specified. That's one part regarding CDI. The second thing I had in mind was probably for — well, no, sorry, not runc. Sorry, I just lost my second thought. Alexander, when you say "inject", what exactly do you mean there? The way I understand what Sasha is saying is that the CDI spec, when it describes devices, describes the operations that need to happen for that device to be available to the container, right? And these operations could be mounts, these operations could be hooks.
So if we look at CDI, we've described a few examples here where it could be a host path, right? But you also have hook injection. We've described devices, mounts, hooks. I think Sasha's comment is that the list of operations should be exhaustive: it should not be "anything that is in the OCI spec"; it should be a list of things that we are OK with. Yeah — so a devices subsection, a mounts subsection, a hooks subsection, an environment subsection, maybe something else. We just need to say carefully: these are the items from the runtime spec that we can put here. So when you say inject, you mean at the device layer, right? Yeah, "inject" is probably not the right term, because at the end of the day, CDI is just a transformation operation on the OCI spec, right? Yeah. I think whatever we put in the spec is the only thing we'll be comfortable supporting, right? It's not an open-ended thing. So CDI should define which fields are supported, and only those fields; the runtime should not blindly process all the items there, match them with existing OCI fields, and add them. And I guess we may also need some ordering here, right? Like append, or insert at the beginning, or something like that, at least for mounts. Can you repeat that? On the ordering? For the mounts, we may need some ordering, right? Because the order in which mounts are applied is significant, whereas for devices it doesn't matter. The ordering needs to be specified. Yeah, it may be possible that after all of our examples we conclude: oh, in all cases we just do an append, so we don't need ordering. But until we know that for sure, we'll need some guidance on the runtime side, right? Whether we append, add to the beginning, or insert somewhere in between.
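To make the "explicit list of allowed sections" concrete, a CDI device description along the lines being discussed might look like the fragment below. The schema was still being drafted at the time of this meeting, so every field name here (`kind`, `containerSpec`, and so on) is a hypothetical sketch, not a final format:

```json
{
  "cdiVersion": "0.1.0",
  "kind": "vendor.example.com/gpu",
  "devices": [
    {
      "name": "gpu0",
      "containerSpec": {
        "devices": [{ "path": "/dev/vendorctl0" }],
        "mounts": [
          {
            "hostPath": "/usr/lib/vendor",
            "containerPath": "/usr/lib/vendor",
            "options": ["ro"]
          }
        ],
        "env": ["VENDOR_VISIBLE_DEVICES=0"],
        "hooks": [{ "hookName": "createContainer", "path": "/usr/bin/vendor-hook" }]
      }
    }
  ]
}
```

The key property is that only these four subsections — devices, mounts, env, hooks — would be legal, rather than arbitrary runtime-spec fields.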
Yeah, my point was more that, for these field descriptions, let's not reinvent the wheel: just use whatever pieces we need from the runtime spec. Right, yeah. Just clarify which pieces of the runtime spec we support and which we don't. And we should even try to reuse the runtime-spec Go structs — we can reuse parts of them here. Yeah. And the second thought I actually had comes more from the practical side: some kind of library, or a patched version of some component, where we can experiment — we have the OCI container spec as input, and the OCI spec as output after we do all these transformations. Would it be in tool form or library form? I don't really know. That's definitely something we discussed last time too: in terms of deliverables, we had that list, which included a specification and a Golang API. I think that's what you're describing — a Golang library for merging the OCI spec with the CDI spec, for transforming it. Yes. As a long-term deliverable, we definitely need the library, which we can reuse in both CRI-O and containerd. But in the short term, for our hacking and debugging purposes, a small command-line utility using that library would also be good. Some quick googling shows something called Jolt, which is a JSON-to-JSON transformation library. There's another one — maybe even simple jq would work, actually. No, we need a bit more than that: we need to read a directory, parse multiple JSON documents, and then combine them logically. So I think jq alone might not work. Sure. Yeah, I'm fine with that; we can have a common library and a few utilities that can be used across the codebases. All right. In that case, I'll start creating the CDI repository.
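The merge step itself — taking an OCI spec and applying CDI edits to it — can be sketched with minimal stand-in types. Append-only ordering is assumed here for both env and mounts, which, as noted above, is exactly the point the group still has to settle:

```go
package main

import "fmt"

// Minimal stand-ins for the relevant OCI runtime-spec fields; the real
// library would reuse the runtime-spec Go structs instead.
type Mount struct{ Destination, Source string }

type Spec struct {
	Env    []string
	Mounts []Mount
}

// applyEdits appends CDI-provided env vars and mounts to an OCI spec.
// Append-only ordering is an assumption of this sketch, not a decision.
func applyEdits(spec *Spec, env []string, mounts []Mount) {
	spec.Env = append(spec.Env, env...)
	spec.Mounts = append(spec.Mounts, mounts...)
}

func main() {
	spec := &Spec{
		Env:    []string{"PATH=/usr/bin"},
		Mounts: []Mount{{Destination: "/etc/hosts", Source: "/run/hosts"}},
	}
	applyEdits(spec,
		[]string{"VENDOR_VISIBLE_DEVICES=0"},
		[]Mount{{Destination: "/usr/lib/vendor", Source: "/usr/lib/vendor"}})
	fmt.Println(len(spec.Env), len(spec.Mounts))
}
```

A command-line utility around such a function — read a directory of CDI JSON files, apply them to an input OCI spec, print the output — would cover the short-term hacking and debugging need described above.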
Should I just — I think I called it container-device-interface, in the GitHub organization. And should I just push what I had and open a pull request? Open the pull request. OK, with your current content. Is anyone going to review this whole thing? All right, no worries, I'll do that. Make pull requests — I'll be happy to do that.

And then from there on: definitely making sure that this specification is solid. I think we mentioned here that we'll probably need to specify the ordering, and specify the list of allowed transformations. From there on, once we're OK with the specification, the biggest question — the one I think we needed to figure out last time — is: what is the lifecycle of this project? When is it OK to say we're done with the specification, let's now proceed to integrate it into something, into a runtime? Is there a phase where we want to be doing some POCs? How do we want to organize the lifecycle here? I think choosing the annotations model sort of points out that this is going to be at least two phases. I would think we will have several phases. Phase one is a draft of the specification — a first alpha of what we have. Phase two would be the library and the small utility, where we can validate that the approach works. Phase three would be to validate it with all existing devices — for you that means GPUs; for us it means FPGAs, QAT, and a few others that we can represent. So we have exact input and output JSON for what we expect. Maybe at that stage we will have a set of unit tests which say: from this specification and these input files, we expect this output.
And then afterwards there will be a step of trying to integrate it into containerd and CRI-O to see if the whole chain is working properly. Okay, that seems pretty reasonable to me. Making sure that we're past the draft stage is definitely a pretty big wall before we actually go and make the pull requests. That sort of validation is going to be very nice across the different pull requests.

Are there things that you, Mike, or you, think would be really important to have before or during that project life cycle? Well, I'm not really sure what you mean by life cycle. Are you talking about just the phases of each version of this, or about the final output results? A little bit of both, right? The general question I'm really asking is: are there things that you think would be a blocker? For me, there is a plurality of users of the CRI, and if this only works for one subset of Kubernetes customers, I think that's not valid, right? So we'll need feedback from the Kubernetes groups, the Rancher teams, the OpenShift groups, the OKD groups that are going to be out there that will want to start using this. And then when we get rid of the annotations, we'll want to have that cycle go one more time around, getting the feedback from the primary users of this whole CRI path. Well, this low-level bit is not CRI; it's more at the OCI level, so Podman or the Docker command-line utility can use it. Yeah, fair enough, right.
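The first-phase annotations model mentioned above amounts to smuggling device requests through a container's annotations map until first-class fields exist. A minimal sketch of the consuming side, where the annotation key prefix `cdi.example.com/` and the comma-separated value format are both assumptions made up for illustration:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// devicesFromAnnotations pulls device requests out of an annotations map.
// Both the key prefix and the comma-separated value encoding are
// hypothetical; a real runtime would follow whatever convention the
// specification settles on.
func devicesFromAnnotations(annotations map[string]string, prefix string) []string {
	var devices []string
	for k, v := range annotations {
		if !strings.HasPrefix(k, prefix) {
			continue
		}
		for _, d := range strings.Split(v, ",") {
			if d = strings.TrimSpace(d); d != "" {
				devices = append(devices, d)
			}
		}
	}
	sort.Strings(devices) // map iteration order is random; keep output stable
	return devices
}

func main() {
	ann := map[string]string{
		"cdi.example.com/devices": "vendor.example/gpu=gpu0, vendor.example/fpga=fpga0",
		"io.kubernetes.pod.name":  "demo", // unrelated annotation, ignored
	}
	fmt.Println(devicesFromAnnotations(ann, "cdi.example.com/"))
}
```

The appeal of this phase is exactly that annotations already flow through the CRI untouched, so no API change is needed until the approach is validated; the cost is the second feedback cycle described above once annotations are replaced.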
So then that's a different class of users who aren't using CRI, right? That we also have to include. All right. How do we want to enable it? This might be an interesting question, but maybe it's just configuration. Do we want this to be enabled through configuration? Because at the end of the day we're extending the --device interface that Docker and Podman have, for the non-CRI users. So is that something where we'll need a configuration flag? I see this as something parallel to CNI, where we have a config directory and we pick up the CNI plugin configuration from a directory. That would work. And Podman uses CNI; CRI-O and containerd also use CNI. Yeah, that would work really well. All right, fair enough. I like it.

Anyway, in the long term, I think we shouldn't be repeating the code in multiple places. So I envision that in the long term it will be a library, which we can move at some point to, I don't know, opencontainers or whatever common place. Yeah. And in the short term, since we don't want to go through a lot of paperwork, we can use this new org that Renaud created and just put it over there. Right, that's what I'm saying: use this new org for development, and as soon as we are ready to go out to everybody, we will migrate it. Right. All right, thank you very much in that case, everyone. So, Renaud, next meeting in two weeks, right? Yes, next meeting in two weeks, 7 a.m. PST. What time is it for you, Sasha? For us it's 5 p.m. in Finland; for Germany I think it's 4 p.m. So I think it's a good slot for almost everyone. Well then, have a great end of the day. Thank you everyone for joining.
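A CNI-style arrangement would mean vendors drop spec files into a well-known directory that every runtime scans. The directory path, file layout, and every field name in the fragment below are hypothetical, sketched only to make the analogy concrete:

```json
{
    "cdiVersion": "0.1.0",
    "kind": "vendor.example/gpu",
    "devices": [
        {
            "name": "gpu0",
            "containerSpec": {
                "devices": [ { "path": "/dev/gpu0" } ]
            }
        }
    ]
}
```

At container-create time the runtime would read all spec files from the directory and apply the matching entries to the OCI spec, much as CNI plugins pick up their network configuration from a config directory today.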
Do you see this as scoped to file-system resource devices, or to non-file-system ones too? Mike, can you elaborate on what you mean by that? Well, when we talk about using the CNI model to specify things, it brought to my mind that we've got file-system resources that we use as devices right now in the container runtimes. I know it's a separate topic; I'm just curious whether Renaud had thought about some common, possibly device, extensions between the runtimes for this as well, or are we just scoping this to configuration through the file system? Yeah, I think for now that will be the easiest. Yeah, that's the question; I don't know what to think about it yet anyway.

Mike, actually, I recall one question. In some private conversation, you mentioned that with someone from Apple you were thinking about this library for generating a default config.json file. Is that located somewhere? Yeah, it's already merged. We now have the ability to have a new default config.json. Can you send me a link so I can look at it? Yeah, it's in the containerd CRI; it's the current default on master. The next beta will have that feature in it, probably this week. I'll send you a link. Okay, thanks. All right. Thank you everyone. Have a great end of your day, or start of your day.
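For context on what "generating a default config.json" means, here is a heavily trimmed sketch: a program that emits a minimal OCI-runtime-style spec as JSON. The struct covers only three of the many fields in the real schema and is not containerd's actual implementation, just the shape of the idea:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A drastically reduced subset of the OCI runtime config.json layout,
// for illustration only; the real schema has many more fields.
type Spec struct {
	OCIVersion string   `json:"ociVersion"`
	Process    *Process `json:"process,omitempty"`
	Root       *Root    `json:"root,omitempty"`
}

type Process struct {
	Args []string `json:"args"`
	Cwd  string   `json:"cwd"`
}

type Root struct {
	Path     string `json:"path"`
	Readonly bool   `json:"readonly"`
}

// defaultSpec returns a baseline spec that callers could then customize,
// the same pattern a default-config.json generator provides.
func defaultSpec() *Spec {
	return &Spec{
		OCIVersion: "1.0.2",
		Process:    &Process{Args: []string{"sh"}, Cwd: "/"},
		Root:       &Root{Path: "rootfs", Readonly: true},
	}
}

func main() {
	out, err := json.MarshalIndent(defaultSpec(), "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

A CDI transformation layer would then take such a default spec as its input and inject the requested device entries before the runtime launches the container.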