Good morning. Hey, Renaud, how are you doing?

I'm good. So many things happening on many fronts.

Yeah, I completely agree. How were your holidays?

It was good, except it was all in one place, no travel, so home and everything around it.

Yeah, I managed to sneak my way back to France for a few weeks.

Oh, and you were able to even return?

The return to the U.S. is going to be kind of complicated.

So you are now in a close time zone to us, right?

Right now I'm in a similar time zone, definitely. You're one hour ahead of me, something like that. For me, it's 5 p.m. right now.

Yeah, it's 4 for me.

Alexander, we have the same barber.

Yeah. Standard cut, 6 millimeters. Easy.

Yeah, I just use number one, wherever that is on the Braun.

Yeah, on this side of the pond we have everything in millimeters.

Yeah, number one, I have no idea what that is. It looks like about the same, though. I've got a closer-shave thing on order. I'm tired of using the razor. Time to go electric.

So for you it's early morning, right? Like 6 in the morning or something like that?

Yeah, 9 actually.

It's already 9?

Yeah, so I've already done some huge threads on Slack. Been busy this morning.

Let's wait one more minute, and then I think we can probably start. Sorry, that's on my side. I'm in the countryside right now.

That's cool. Is the meeting recording?

It is recording. I think we can probably start. This is the September 15th meeting. I think we have three topics on the agenda. If I recall correctly, the first one is from Sasha, about, sorry, I'm opening the agenda at the same time on my side, about improvements in the CRI API: reporting resources up and down between the kubelet and the runtimes. The second one is more or less just administrative stuff: a question of using the SIG Runtime distribution list rather than the container-orchestrated-devices mailing list that we have. I think it'll improve the communication with SIG Runtime a bit. And then the last point is just discussing next steps for CDI with respect to converging with NRI, and generally questions on that front: should we just merge the source trees? How do we move forward?

Sasha, do you want to start with the first point? And also, just a quick question: can someone take notes on the Google doc? Mike, since I'm seeing you on the screen, can you take notes on the Google doc, please?

Yeah. I don't have it open at the moment; let me find it and then I'll do that.

Thank you very much. All right. Sasha, do you want to start?
Yes. So I wanted to share one thing. You might have noticed that in our GitHub organization we created a new repository. It's called resource management improvements working group.

The story behind this repository is this: Michael Crosby did a presentation for SIG Runtime, where he presented some of his work, in this forum, in SIG Node and so on. And we had a couple of ad hoc conversations with him, and actually with Eric Ernst and a few other people, about what we're trying to do with NRI, and about some of our experiences and ideas from what we implemented in CRI Resource Manager. And then there was also a side conversation about the scheduler extension to expose node topology information to Kubernetes. When it was discussed in SIG Node, Derek proposed to combine it with node feature discovery, so the topology and all the resources would be discovered by NFD, and NFD would be talking with the control plane. By doing that, when we had this NFD conversation, we started to talk with Swati and with Alex about how to store the information and how to exchange the information about resources between different components.

And when we talked with Michael, we discussed a bunch of improvements for the CRI API, about how we exchange information between the levels, between the kubelet and the container runtimes. There are several things the kubelet most probably should not be doing, like direct cgroup management, or assumptions about what the layout of the containers will be and how the CRI implementation actually does things. And there are related issues to that. For VM-based runtimes, when the VM is created, as much information as possible about all the containers within the sandbox, including all the init containers, needs to be provided in the CreatePodSandbox CRI call. The same goes for any kind of update messages in CRI: anything the kubelet knows about an update needs to be passed down to CRI, so VM-based runtimes can either react to those properly or prepare for all of them at create time. That also led to some of our ideas about communicating upwards, with information about what kinds of resources the runtime knows about and how we organize it. In relation to this particular working group, I think this is the way we can communicate information about what kind of devices we discovered via CDI.

Anyway, this repository I created has an initial list of ideas and to-do items that we had in mind, things we think would be good to implement on the CRI side, or sometimes on the runtime side. There are also a couple of to-do items for our project, CRI Resource Manager: we have some code for topology discovery and for managing block I/O and Intel RDT, which we want to split out into reusable libraries and propose for use in containerd and CRI-O as imported components. So that's the story behind it. If you see something that is interesting to you, or if you don't agree with something, any comments are welcome.

So, sorry, I didn't catch why we needed a repo. Is this just because a KEP in Kubernetes wasn't good enough, or the NRI stuff didn't have enough in it? I mean, or the COD repo?

NRI is not enough, because practically NRI in the current design is just a set of lifecycle hooks for the containers. And we want to have, well, based on what we did in CRI Resource Manager, a complete state machine. You are not only reacting to a single container or a single pod; you need some component which is able to understand the whole state of the system, especially for scenarios like trying to rebalance resources.

When you say the whole state of the system, are you talking about low-level state for the container?

No, I'm talking about information on every single sandbox, every single container.

Oh, okay. So this is the Kubernetes replacement?

No, no, not a Kubernetes replacement. It's more about something inside the runtime that can have the overall picture of all the containers and all the sandboxes created, something like the sketch below.

And the reason we created the repository? Michael wanted to have some place where he can create issues and assign them to exact people, and where we can start drafting a set of KEPs.
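(For context: the difference described here, plain lifecycle hooks versus a policy that tracks the whole node, could look roughly like the hypothetical Go sketch below. None of these names come from the actual NRI or CRI Resource Manager code; they are invented for illustration.)

```go
package main

// A minimal sketch of the two models discussed, with hypothetical types.

// NRI today (roughly): stateless hooks invoked per container event.
type LifecycleHooks interface {
	// Called around individual container transitions; the plugin only
	// sees the one container involved in the event.
	PreCreateContainer(c *Container) error
	PostStopContainer(c *Container) error
}

// What CRI Resource Manager wants: a policy that first receives the full
// node state and then tracks every sandbox and container as events arrive,
// so it can make global decisions, e.g. rebalancing resources.
type StatefulPolicy interface {
	// Sync delivers the complete current state of the runtime.
	Sync(sandboxes []*Sandbox, containers []*Container) error
	// Event handlers update the policy's internal state machine.
	CreateSandbox(s *Sandbox) error
	CreateContainer(s *Sandbox, c *Container) error
	UpdateContainer(c *Container, r *Resources) error
	RemoveContainer(c *Container) error
}

// Placeholder types, just enough to make the sketch self-contained.
type Sandbox struct{ ID, Name string }
type Container struct{ ID, SandboxID string }
type Resources struct{ CPUShares, MemoryLimit int64 }

func main() {}
```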
That's why I was asking if this isn't a KEPs thing. It sounds like, I mean, sandbox status and state, right? That's the kubelet.

Yeah, but...

And the container runtime CRI, you know, interface.

Yes, but we are going to talk about both parties: the upper layers with Kubernetes, and the lower layers, containerd and CRI-O.

Which is what SIG Node does.

To some degree, yes.

And just to follow up, not just SIG Node, but there's a similar group that's been created. I need to find it, but I remember that Kevin Klues from NVIDIA, who joined, I think, the previous meeting, so I can probably get him into the next one, and a few other people were looking at something similar.

Yeah, there's a lot of groups forming. I'm trying to understand which process we are under, right? Are we using the KEP process, or are we just aggregating ideas? Are we trying to redefine CRI? It seems like a lot when you say all the state, right?

Yes. So my idea was that all these discussions will end up as several KEPs, I'd say five plus based on the current list. And we need to think about how we do it gradually, that's one thing. And we need to synchronize both with the runtimes and with the kubelet.

But that's not this group? I mean, I don't mind moving this group somewhere else if we need to be somewhere else.

Well, the reason we created the repository in this organization is that when we discussed it with Michael, I proposed to create a new one. He said, but we already have something. That's why I asked permission from you guys.

I know, yeah. I think some of it is to avoid too much fragmentation, especially since we are mostly all the same people, whether it's Mike or Michael or Renaud; we're all the same people. I agree with what you're saying. I agree that there's definitely a component that is the container runtime, which is super interesting with this group of people, and it's super helpful, and I think it kind of makes sense. I mean, when we as a group try to collaborate on a pull request, if we open a pull request against the KEP repository, it's kind of complicated. And I agree that having a staging ground is a really nice thing to have, just a place where we can discuss things. I think that's a good idea.

It's just that when we go through that process, we have to make sure we also include these other groups that are already discussing the issues around scheduling and topology. And I want to make sure, and this is something I think we were careful about when discussing with SIG Node, and I think Michael Crosby was careful about: there are existing primitives around scheduling, and if we are trying to replace them or evolve them, we need to have the people that wrote them in the meeting, right?

Right. Yeah. So it gets pretty big in scope very quickly.

Well, so far, all the items we discussed don't touch the core scheduling functionality too much. The only thing related to scheduling was the discussion about NFD: enhancing NFD and storing the hardware information in CRD objects.

I agree with what you're saying.
I think one of the big touch points, or areas where there might be discussions or different opinions, is going to be around the CPU manager and the topology manager, and whether those should live in Kubernetes or in the container runtime. I think part of this discussion might be super helpful if we as a group start writing down some of the use cases and some of the points of contention where we're having a hard time doing stuff. We kind of did that for CDI, in some parts of it. And if we're going to look at the topology manager, the cgroups management, the memory manager, it might be important for our efforts to write down some of the things that we see as blockers to enabling things, right?

Well, at the moment, I would say I don't have any big plans to open this kind of forum about the topology manager inside the kubelet. At the moment, what I'm mostly interested in is deprecating dockershim, so we have only one single interface for how we communicate with runtimes, and then the majority of improvements go into that runtime interface.

Did you say duplicate dockershim? Are you talking about cloning it, or...?

Deprecate.

Oh, okay. I'm sorry. I think that's already underway, right? I think Dims is taking the next step there.

Yeah, Dims created the KEP for it last week.

Yeah. And the second thing, I wanted to give a heads-up here: Derek and I, in today's SIG Node, are going to propose taking CRI out of alpha. So I want to raise it here, so we don't get fractured across too many groups, and we don't want to slow that down either, because there's a deadline we have upstream to get that moving, right? So the plans that you have, where you want to make a lot of changes: are they gradual, or breaking, or what?

There's probably only one breaking change, I think, which is the idea of communicating the list of available resources from the runtimes up to the kubelet. I think that's the only breaking change. The others we had in mind were more about providing more information in existing messages. So for example, CreatePodSandbox will not have just an almost empty message saying this is the name of the sandbox and these are the annotations; it will also have the list of init containers and normal containers, and the list of resources that are going to be used in those containers. Likewise, CreateContainer will have resources, and UpdateContainerResources will have a full list of resources (see the sketch below).

Yeah, I think those make sense. And one more thing, probably in the same vein, and I don't know if Mike brought it up in the past, is the image pull in the sandbox context. So adding that to the RunPodSandbox call as well, so we can do the image pull inside the sandbox cgroup.

Yeah, that'd be great. We really need a caching policy to be, you know, received from the kubelet.

Yeah. And now I saw in the SIG Node agenda this item about taking it to beta. That's why I wanted to talk first.

Yeah, I feel like it's still okay to move it to beta and then add these things. It's not the final version, right? We don't want to block what's already there and stable, in a way that'll help deprecate dockershim and move forward. And then we keep discussing these new items and keep them moving.
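(For context: a rough sketch of what the proposed additions could carry. The message and field names below are hypothetical illustrations of the discussion, not the actual CRI protobuf definitions.)

```go
package main

// Hypothetical sketch of the discussed CRI additions; names are invented
// for illustration and do not match the real CRI protobuf.

// Resources a single container asks for, passed down to the runtime
// instead of being translated into cgroup writes by the kubelet.
type ContainerResources struct {
	Name        string   // container name within the pod
	Init        bool     // true for init containers
	CPURequest  int64    // millicores
	MemoryLimit int64    // bytes
	Devices     []string // e.g. CDI device names
}

// Extended RunPodSandbox request: a VM-based runtime can size the VM
// up front because it sees every container the sandbox will hold.
type RunPodSandboxRequest struct {
	Name       string
	Namespace  string
	Containers []ContainerResources // init + regular containers, known at create time
	// Image pulls would then happen in the sandbox context, so their
	// I/O and memory are accounted to the sandbox cgroup.
	PullImagesInSandbox bool
}

// Extended update: the full desired resource state, not a delta, so the
// runtime can reconcile however its internals are laid out.
type UpdateContainerResourcesRequest struct {
	ContainerID string
	Resources   ContainerResources
}

func main() {}
```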
So regarding moving it to beta: if we can fit some of the new items in the structures into the change from alpha to beta, for example if I can put in these additional fields for resources, that would be great.

Yeah, I think that's sort of the problem: as soon as you change it, you're back in alpha, right? We sort of need to GA the original version so that we can move on to a point release and add the new additional functionality, you know, on a switch, so we know when that capability is available.

Mike, actually, adding some fields which are optional for the runtime to react to, I think that's a compatible change.

Yeah, it is.

So my understanding is that changing the version number, like from alpha one to alpha two or something, is only needed if those versions are completely incompatible and you need a transformation.

Well, it also means we need to look at the new additional services and capabilities that are added, and there's going to have to be some effort on it. They have a different qualification for when an API goes GA. But they're pretty good with it; it's okay if you've got two versions. It's just that we haven't even got to version one yet, because we keep wanting to add more stuff, and that's been slowing it down. Just a minor heads-up: it'd be better if we could have a point release. You know, we've been using annotations, without modifications, for a long time. But I agree with you, it would be nice if we could quickly go to GA and then modify the CRI API with additional extensions.

Right. Well, in this set of APIs, the way I've seen the world within Kubernetes, between all the changes, alphas, betas, and going to GA, there are sometimes compatible, sometimes incompatible changes.

Yeah. Not backwards.

Well, of course, not backwards-incompatible; adding new functionality was usually okay.

By the way, about using annotations: I know it's completely unrelated to devices, but one thing I wanted to say. In CRI-RM, when we were implementing the block I/O support and the RDT support, we practically implemented the idea of classes. So there is some resource class which can be identified by one single name, and via an annotation we say that this container, for this particular resource, belongs to this class. For example, we can have, let's say, gold, silver, bronze for the block I/O throttling, and then we can say this container is in the gold I/O class, and this container is silver. And it's similar for RDT. Practically, this pattern is similar, to a degree, to the seccomp model: from the upper layer we just pass down that this container belongs to this particular ID, and how this ID is expanded into a particular set of settings is done on the host, just like seccomp profiles are present on the host. So I think in the long term it would be good to somehow generalize this same idea, so we can have it not only for seccomp and similar, but for other resources like that; something like the sketch below.
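(For context: a sketch of what such class annotations can look like on a pod. The annotation keys follow CRI-RM's general pattern as best recalled; treat the exact key names as illustrative, not authoritative.)

```go
package main

import "fmt"

func main() {
	// Class-based resource annotations on a pod, in the style CRI-RM
	// uses for block I/O and RDT. The keys follow the pattern
	// "<subsystem>.resources.beta.kubernetes.io/..." from memory and
	// should be treated as illustrative.
	annotations := map[string]string{
		// Pod-wide default class for block I/O throttling.
		"blockio.resources.beta.kubernetes.io/pod": "silver",
		// Per-container override: this container gets the gold class.
		"blockio.resources.beta.kubernetes.io/container.db": "gold",
		// RDT (cache / memory-bandwidth) class for the same container.
		"rdt.resources.beta.kubernetes.io/container.db": "gold",
	}
	// The runtime side expands each class name into concrete host-level
	// settings (cgroup io.max entries, resctrl groups, ...), the same way
	// a seccomp profile name resolves to a profile present on the host.
	for k, v := range annotations {
		fmt.Printf("%s = %s\n", k, v)
	}
}
```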
What do you think, gentlemen?

I think it sounds like a good idea, but then we'll have to make this concept the same across Kubernetes. The QoS classes for cgroups immediately come to mind. Though I don't think anyone would want to change anything with cgroups v1, with cgroups v2 there might be some changes coming to the QoS classes, the best-effort, guaranteed, and burstable ones. Do you see those as parallel to these, or are those more fixed?

I see them as parallel, because you might have different profiles for different types of resources. For example, block I/O is one profile, but memory and CPU is another profile. The current problem with QoS classes is that they are calculated based on only CPU and memory, roughly the rules sketched below. And I think this whole thing will get more complicated as soon as we get a fully functional vertical pod autoscaler, because the classification at container start or container admission is not the same after the autoscaler adjusts the parameters.

Does that change the interaction with the scheduler and such?

No. I mean, QoS classes were good in the beginning, but I think we've already grown out of those, like a small child's boots.
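(For context: the point that QoS classes are derived from CPU and memory alone. The kubelet's classification boils down to roughly the following rules; this is a simplified sketch that ignores some edge cases.)

```go
package main

import "fmt"

type Requirements struct {
	Requests map[string]int64 // resource name -> quantity
	Limits   map[string]int64
}

// QoSClass mirrors the well-known Kubernetes classification, which only
// looks at "cpu" and "memory"; a simplified sketch of the real logic.
func QoSClass(containers []Requirements) string {
	allGuaranteed, anySet := true, false
	for _, c := range containers {
		for _, res := range []string{"cpu", "memory"} {
			req, hasReq := c.Requests[res]
			lim, hasLim := c.Limits[res]
			if hasReq || hasLim {
				anySet = true
			}
			// Guaranteed requires limits for both resources in every
			// container, with requests equal to limits (or unset).
			if !hasLim || (hasReq && req != lim) {
				allGuaranteed = false
			}
		}
	}
	switch {
	case !anySet:
		return "BestEffort"
	case allGuaranteed:
		return "Guaranteed"
	default:
		return "Burstable"
	}
}

func main() {
	pod := []Requirements{{
		Requests: map[string]int64{"cpu": 500, "memory": 1 << 30},
		Limits:   map[string]int64{"cpu": 500, "memory": 1 << 30},
	}}
	fmt.Println(QoSClass(pod)) // Guaranteed
}
```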
That sounds like an idea worth exploring, but I feel like we should take a pause and define a roadmap. When Renaud started CDI, the scope was very well-defined, but now it isn't clear to me. We are taking on a lot of ideas. A lot of those ideas sound like they need to be tackled, but what is the order in which we tackle them, and what is the scope of each one? Some are more well-defined than others. I think resource classes have been discussed and discussed for a while; it just jogged my memory that, yes, I've heard this before. So some are more controversial and some are not. Okay, CDI is just scoped to the runtime; we kind of reached agreement and we can take it to the next step. But some of the other ideas you talked about now will probably need more discussion and coordination with other groups.

Absolutely. And well, my wording about resource classes was incorrect. I don't have a better name for you, and I know that for many people it rings the bell of previous discussions, but what I'm talking about is different compared to those.

Yeah, ring the Liberty Bell, man. It's like a warning, warning.

Yes. Sorry, I think lots of great ideas were thrown out, and I agree with Renaud that having some kind of roadmap would be super helpful for these ideas. Here's what I would suggest: write down a blurb for each of these ideas, what you expect in terms of effort and in terms of talking to different people, maybe give your idea of priorities, send an email, and then we can discuss that on the mailing list from there, right?

So I already dumped a list of initial ideas into this repository, which I created two weeks ago.

I'll take a stab at it then. I think most people are not subscribed to the PRs and maybe the commits that are already there; that's why maybe we hadn't noticed. But I'll definitely take a stab. I think email is probably the easiest way to reach everyone here, and then the next one is GitHub issues and tagging people.

Yeah, I was just noticing that I wasn't subscribed, so I'll make sure to subscribe on GitHub.

All right. Is there anything else you want to talk about, Sasha?

Probably not. Just one more comment.

Go ahead.

Renaud, you mentioned the initial scope of CDI. So last year, when we were talking with Renaud about the idea of improving devices with CDI, we talked about two steps: one step is how we solve it at the runtime level, and the second step is how it will be exposed to the upper levels, like Kubernetes and so on. We had some ideas. So I think it's not really expanding the scope too much, but at some point it will be linked to what we have listed here.

So I think, until the last two or three meetings, we were very focused on the low-level runtime part, where we kind of reached agreement. So maybe we should start discussing the next layer that comes on top, and the plan for actually implementing the lower-level part in the runtimes. And I was more commenting about the additional things we've been discussing, not CDI itself, which seems smaller in scope.

Yeah, completely. I think, sorry, go ahead, Mike.

I was going to say: if we can keep CDI on the enabling side, you know, and just work with those other groups, I think we'd be better off. If we create new groups that compete with them, it's going to be hard to interact with them at the same time.

Right. I don't really care how the groups are named or whatever. It's just that we have people everywhere who are interested.

That's true, but they're not all in every meeting, right?

True. That's why I'm trying to combine everyone, saying: okay, this is the list of the things we want to work on. I know that not all of them will result in particular KEPs. But once we have a KEP, or one project or another, we can put a link saying, okay, this item is done, and link it.

So basically you're trying to put together your scenarios, and it widened out really fast.

Right. Yeah. That's fair enough. I got it.

Just coming back: I think you did touch on an interesting point that's kind of ill-defined, which is how CDI will change the environment (a rough example below). That's definitely something that's kind of binary right now, and it will probably need us to involve a few other people from SIG Node. But that's definitely work in progress. We had a few slides, but it needs to be materialized into something more than an architecture diagram that's kind of vague right now.

Yeah. At least we should have something that captures our current understanding and what we've discussed so far. Maybe we could have it on the agenda for next week, or whenever.

Yeah, definitely.
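(For context: in CDI, a vendor drops a spec file on the host and the runtime applies the listed edits, including environment variables, to containers that request the device. The snippet writes an illustrative spec; the schema was still in flux at the time of this discussion, so treat the exact fields and version as approximate.)

```go
package main

import (
	"log"
	"os"
)

func main() {
	// An illustrative CDI spec, approximating the draft schema. A runtime
	// that resolves the device "vendor.example.com/gpu=gpu0" would apply
	// the containerEdits: create the device node inside the container and
	// inject the environment variable.
	spec := `{
  "cdiVersion": "0.3.0",
  "kind": "vendor.example.com/gpu",
  "devices": [
    {
      "name": "gpu0",
      "containerEdits": {
        "deviceNodes": [ { "path": "/dev/vendor-gpu0" } ],
        "env": [ "VENDOR_VISIBLE_DEVICES=0" ]
      }
    }
  ]
}`
	// Spec files conventionally live under /etc/cdi; written to the
	// current directory here to keep the sketch side-effect free.
	if err := os.WriteFile("vendor.example.com-gpu.json", []byte(spec), 0o644); err != nil {
		log.Fatal(err)
	}
}
```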
In this repository I created, I was mostly talking about the more native resources, like CPU and memory. One thing that has really bothered me for quite a while inside Kubernetes is that the kubelet has its own assumptions about what is available on the system, and the runtime might have an absolutely different understanding of what is actually available. For example, containerd or CRI-O might run not within the top-level cgroup but isolated in some slice where not all the resources of the system are available, and similar things.

I think the way it's worked so far is that the runtime is dumb: the kubelet has the overall picture and it tells the runtime to do these things. And now we are finding gaps, like, hey, this is not enough. If I want to do an image pull in the pod sandbox, I need to know the pod sandbox slice name in the image pull and somehow change that; that's one example. So I think we need to come up with exact scenarios where we feel we may benefit from the runtime getting more information. Running in a slice doesn't sound like a very specific example to me, because, yeah, the containers have to run inside the pod slice, right? That's a basic Kubernetes assumption.

I mean the hierarchy of things. I know that, for example, AT&T, for their deployment, were installing Kubernetes in a way where only a subset of CPU cores is available to Kubernetes, and the rest are infrastructure CPUs for things like the high-speed networking stuff. And we had the problem of isolating Kubernetes so it would not take those ones. But that's one example. What I'm really talking about is that we have a gap between Linux and Windows: the kubelet knows a lot about Linux and makes assumptions on the Linux side, but for Windows it says, oh well, I don't know, but you run it. And the same goes for the idea of the virtual kubelet, where the implementation of the container execution just says: I know this set of resources, give me the container and I will figure out how to run it. So based on those two things, I was thinking we need to shift the roles a bit: Kubernetes knows what to run, and the runtime knows how to run it.

Right. Yep. And bringing those together is difficult.

Yeah. I think even with the example you just gave, right, the AT&T case: I would immediately imagine there'll be some flag to the kubelet telling it, okay, these are the only CPUs that are available to Kubernetes, and you're only allowed to pass those down to the CRI for the workloads. So for that example, can you give more details on how you think the runtime would solve it, as opposed to the kubelet solving it? I'm not saying it can't; I'm just trying to get solid examples where we feel the split is not good enough.

Right. So in this particular example, how it works is: yes, the kubelet has flags for the system-reserved and kube-reserved cgroups, and you can actually specify the parent cgroup under which the kubelet will start creating the hierarchy of cgroups for the containers. The problem is that the kubelet was written over several years by different people, with different subsets in mind. So everything that is topology-related doesn't care about those cgroups at all. What it does is just call cAdvisor: give me the machine info. And cAdvisor says, well, these are all the CPUs I detected on the system, and then the CPU manager says, oh, I have this amount of CPUs. And the same goes for the allocatable resources, which are submitted in the node status, as part of the node object. It does exactly the same: it says, okay, I detected from cAdvisor this amount of CPU total in the system and this amount of memory total in the system; I have kubelet flags saying system-reserved is X amount of memory and Y amount of CPUs; so it just subtracts that, and that's it.
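(For context: the node-allocatable computation described here is a plain subtraction from machine capacity, blind to what the kubelet's parent cgroup actually permits. A simplified sketch with made-up numbers:)

```go
package main

import "fmt"

func main() {
	// What cAdvisor reports: everything the machine physically has.
	machineCPUs := int64(64)

	// Kubelet flags: reservations carved out of the machine totals.
	systemReserved := int64(2)
	kubeReserved := int64(2)

	// Node allocatable as the kubelet computes it today: a plain
	// subtraction from machine capacity, as described above.
	allocatable := machineCPUs - systemReserved - kubeReserved
	fmt.Println("allocatable (kubelet's view):", allocatable) // 60

	// What the parent cgroup might actually allow, e.g. a cpuset of
	// 0-15 configured for the Kubernetes slice. The kubelet never
	// consults this, which is the mismatch being discussed.
	parentCgroupCpuset := int64(16)
	fmt.Println("actually usable (cgroup view):", parentCgroupCpuset)
}
```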
So it doesn't really calculate what is actually set in the parent cgroup object, if one is specified for the kubelet, and so on. So as soon as you start doing either partitioning of your resources or some non-trivial setup, you can uncover any number of interesting bugs.

Sorry to interrupt. Can I quickly time-box this topic to the next nine minutes, so that we still have ten minutes for the next topic?

Yeah. I think what I wanted to say, I've already said, so we can talk about other topics.

All right. So, just to close on this, I think you brought up a few very interesting points and a lot of food for thought. At least I'm very interested in contributing to any effort you want to start on this. I'll take a look at the repository; I linked to it in the chat. Feel free to link to any issues you think are worth looking at too.

So the next topic is more administrative. The mailing list that we've been using has been mostly for agenda items and summarizing notes; the other discussions we tend to have on GitHub. I was suggesting to just use the SIG Runtime mailing list instead of the CDI mailing list. It just makes it easier for SIG Runtime to see what we're doing and to have some more context when they are presenting some of the efforts of this group. What do people think about this small change?

No opinion.

It depends on scope. I think for now we have probably ten emails, and it's mostly just agendas and notes. So it kind of makes sense to me that if this is how we're going to use the distribution list, we might as well include SIG Runtime, or even just send our emails to SIG Runtime. I don't know if everyone is subscribed to SIG Runtime.

Yes. Yeah.

Okay. Well, I mean, for the people following along: it's the one under the CNCF. There is no SIG Runtime in Kubernetes; SIG Runtime is under the CNCF.

All right. I'll sync with Ricardo, and I think I'll retire the container-orchestrated-devices distribution list and just point it to SIG Runtime. That might also be the simplest solution: just send it to SIG Runtime.

Okay, that was quite an easy topic. The next one is this: I think at the last meeting we kind of agreed to try to have NRI and CDI converge. My question is more tactical: what does that look like? We had this idea of contributing the CDI repository to the NRI repository. I'm still okay with doing that; it's just that I'm having a hard time figuring out what it looks like. Does it look like just a directory where we have the spec? It's kind of a conversation we should also have with Michael Crosby. But how do we do the next steps? Do we start an email thread? Do we start a discussion? What do people around here think?

Well, I think portions of our output need to be there, if that's the right repo for the implementation. We will get it more... sorry, go ahead.

The thing with Michael Crosby is that the NRI scope he initially proposed was not enough, at least for the things we are trying to solve. So one of the outputs of this list of items I created will be the next steps: a merge of the ideas we have and the ideas Michael has.
So I don't think there is an exact picture of how we do it.

All right. I think you weren't here at the last meeting, so let me summarize it for you a bit. With Michael, we thought that it kind of makes sense; at least we saw a lot of similarities between the NRI and the CDI ideas. And just talking through it and presenting CDI to Michael a bit, it sounded like a reasonable idea to try to have the ideas converge, and instead of having the NRI spec and the CDI spec, maybe have a single spec that conveys both ideas, because we thought they weren't conflicting; they were actually complementing each other. And I think that's fair for CDI, right? It sounded like what Alex is talking about is a little bit higher level.

Well, I might be misunderstanding what NRI is proposing, but from what I've seen before, it was a very limited set of, I would say, hooks in the single-container lifecycle. Am I incorrect?

Sorry, are you talking about NRI or CDI?

NRI.

It is, and I think what we're talking about here is extending it. Sorry, go ahead, Mike.

No, I was just saying: the very first code drop, yes, it was limited in its capability. But it's not just for containers, it's also for pods, right? And it's not just for between the create and the run steps; it's also going to be expanded to support pre-create, post-stop, all those sorts of capabilities. And it's not just hooks, because it sits at the integration point for CRI, you know, between CRI and the container runtime implementations. I think we can do pretty much what we need on the CRI side of CDI, but if we need any more scheduling, that's back in the kubelet, right? Or through the API server.

Okay, but that's exactly what I mentioned about the similarities between NRI and the CRI Resource Manager that we have. We have code, which we're planning to extract out of CRI-RM, that actually implements exactly the same thing: the overall state of the system, all the sandboxes, the whole thing, with a pluggable set of policies where you get exactly the same events NRI is describing, like create sandbox, create container, updates and so on, with all this information. So that's what I was saying: Michael and I wanted to merge those two ideas into one library which can be reused.

Got it. Okay, now I understand.

For him, the issue is that our CRI-RM is implemented as a proxy between the kubelet and CRI. That kind of socket injection was practically not possible for Michael, for his setup; that's why he tried to implement it inside containerd. And I was saying to him: okay, we have that code, let's make it a library and let it be reusable between containerd and CRI-RM, with the same functionality (something like the sketch below).

Makes sense. All right.
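(For context: the reuse being discussed, one policy engine consumed either as a standalone CRI proxy or linked into the runtime, might look roughly like the hypothetical sketch below. None of these names come from the actual CRI-RM or NRI code.)

```go
package main

// Hypothetical sketch of the shared-library idea: one policy engine
// with two integration points. Names are invented for illustration.

// PolicyEngine is the reusable core extracted from CRI-RM: it tracks
// all sandboxes and containers and emits resource decisions.
type PolicyEngine struct {
	state map[string]string // container ID -> assigned resource class
}

func NewPolicyEngine() *PolicyEngine {
	return &PolicyEngine{state: map[string]string{}}
}

func (p *PolicyEngine) CreateContainer(id string) {
	p.state[id] = "default" // a real engine would consult its policies here
}

// Integration point 1: a standalone proxy that sits on the CRI socket
// between the kubelet and the runtime (the CRI-RM deployment model).
type CRIProxy struct{ engine *PolicyEngine }

func (x *CRIProxy) OnCreateContainer(id string) { x.engine.CreateContainer(id) }

// Integration point 2: the same engine linked directly into the runtime
// (the model discussed for containerd, where socket injection was not
// an option).
type InProcessPlugin struct{ engine *PolicyEngine }

func (x *InProcessPlugin) OnCreateContainer(id string) { x.engine.CreateContainer(id) }

func main() {
	shared := NewPolicyEngine()
	proxy := &CRIProxy{engine: shared}
	proxy.OnCreateContainer("c1")
}
```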
And so, sorry, just taking us back: what was the question you were asking at the start?

Well, my question was about merging CDI and NRI. I think the question of how should wait until this discussion about how much of the existing code we can reuse between CRI-RM and the NRI Michael has. That might reshape how the convergence between those two projects happens.

Yeah, that's my understanding of what Michael wants to do.

Okay. So I guess my question is: how can we accelerate some of these conversations? Because I'd be interested in having some kind of alpha somewhere to start, I mean, to start showing some of the results of our discussions.

I think let's ask Michael to join, and Eric.

I guess that's the question; I'll send an email then, to try to have them join the next meeting. And yeah, I'm looking at this a bit tactically, trying to see if I could accelerate things and maybe have timelines by the end of the year for an alpha or beta, at least in containerd. And I'll probably bug Mr. Renaud to figure out what he thinks about having an alpha or beta in Podman too.

In CRI-O, probably.

Yeah. Well, I mean, both, right? We're hoping to be able to do something like podman dash-dash... sorry, go ahead.

Yeah, sure. I mean, I'll make sure we get enough representation from the Podman team as well; I'll ask one of them to try.

Okay. I'll send an email to everyone in the room that participated in this meeting, and we'll try to have some discussion for the next meeting. All right, I think that's a yes for everyone. And I'll also ask everyone in the room to take another stab at some of the specs or the PRs that are outstanding. Let me know if there's some work I need to do again on these PRs, so that we can have something that can be built on, or imported as a Go module.

Right. So I presume Michael was searching for scenarios, and Alex is providing him a whole set of scenarios, right? At least the ones we know.

Fair enough. Okay. Yeah, last time I checked, that's what he was looking for: more scenarios. And that's probably where we can help most: pulling in these additional scenarios on what needs to happen at the NRI and CRI level, so that we can make those modifications where they need to be done. And hopefully in a common way, so that all the container runtimes can implement it the exact same way.

That would be nice. Yeah. We don't want to fork anything.

Definitely. All right. Cool. I think that's pretty much it. Was there anything anyone wanted to discuss?

Maybe there's one thing I wanted to ask. Some of the people in the meeting are kind of new, and some might just be interested. It'd be helpful to have your opinion on whether there's some context we need to build from time to time, whether there are documents we should point people to, or whether we should just restart some of the conversations to help people understand what we're doing. Having feedback on this would be super helpful for people that are new. I think we just lost one new person. All right, feel free to send an email, or reach out to me, or to anyone really. If there's context we need to provide to people that are new, we'd be happy to provide it, I think. And that's it.
Thank you everyone for joining. You all get five minutes back; that's not a lot, but... And thanks a lot for the conversation. I think these discussions were super interesting. Have a great one, everyone.

Yep. So the next one is in two weeks, right?

Yep. I'll put some of the items we mentioned on the agenda. And that's it.

Okay. I'll try to ping Eric and Michael as well.

Definitely. I'll take some of the notes you put in, Mike, and then I'll send an email to everyone, just to get more eyes on it.

Bye. Bye.