Hello, hello, hello, just you and me for now, okay. Someone else's machine, it's not mine. Let's wait a bit. It would be really boring to do the demo just for you, yeah, you've already seen it. But actually, a weird thing happened to me when I prepared the demo. I was going to demonstrate it with two pods requesting two different PVCs, and I noticed that for the second one, the request comes in with the parameters from the first PVC. I didn't get it, so I just gave up. I don't know why, because each pod definitely refers to its own PVC, and there are different parameters, the interface UUIDs and AF UUIDs, in the PVCs, but when the volume is created, the same parameters come in every time. I don't know why; that's a proper demo effect waiting to happen. Yeah, let's try to avoid that. Bugs, bugs, bugs. It's a proof of concept, nothing more. We'll see. Yeah, not looking very good. Okay, we got Sasha, we can demonstrate. Everything for you, at least. Marcus. It's not getting much better, it's still just Intel folks. I don't know who will be in the meeting today, but I don't see them online yet. Let's wait. So what do you guys think of my sewing machine? It's beautiful, isn't it? Do you have one? I don't. Do you use it? It's used very often. By you? It wouldn't be. You have to admit it's not really yours, then. There are actually two machines there. The other one is not called a sewing machine; I don't know what it is, really. It's related, it has threads. I know what it's called, but don't ask me why. Okay, you know more than me then. I don't know what it is in English; that's the Finnish name. I've heard it. All right, let me try to look it up. Overlocker, okay, an overlock sewing machine. Well, that was pretty good if you found it from sanakirja.org. That's interesting, yeah. At least that's an overlocker.
Well, you always learn something new. So how's life on the NFD side of things? You've got your, what was it, Helm chart? Helm chart, yeah, there was a typo in that. That one started to look pretty good, so we're just waiting for the documentation, then we'll get it merged. And then we have this support for multiple parallel NFD instances, hopefully coming. That was interesting. It probably helps some deployments: you can just take deployments created by third parties and run them without looking that closely, without merging configurations or anything like that. It's not ideal for all the wishes; going forward, the better way would be to drop in multiple configurations more easily through CRDs or some other mechanism, but at least this should be a quick fix for the problem. I don't know if they are also trying to make it possible to use cert-manager to manage the TLS certificates. I guess that's gaining popularity, so it would be nice to actually have it everywhere. Yeah, Mikko already took a look at that and provided an initial patch from the NFD side; for cert-manager itself it was very little work. The problem is that NFD doesn't support this kind of certificate rotation, so at the moment it doesn't understand that the certificates might change. That needs to be solved, but after that, dropping in cert-manager is a piece of cake. I like that solution. And there are so many people now that... oh, we got Renaud. Right, I was already starting to think we'd have a demo for internal folks only. Yeah, you've got a much better background than I have; I have a sewing machine. We can't hear you, Renaud, you're on mute.
Good morning. I think Alex mentioned that there was a demo on the agenda that you wanted to present. Indeed, we actually have a demo today. I was hoping we would get a few more people, because right now it just looks internal. Well, at least we got the chair, and Renaud started the CDI work, so you are the most important one here, but it certainly would be nice to have others as well. Let me think. I also wanted to get some feedback on the logos. Maybe what we can do is go through the logos, it's going to take five minutes, and if people join in those five minutes, then we'll have more attendees. Mike Brown has another engagement this morning; maybe he'll be back. I'm on now as well. Okay, so what do you want to do, present your demo, or go through some of the logos and wait a little bit? I would suggest we first talk about logos and then switch topics. Logos are actually a fun topic, and it's going to take five minutes. Let me share my screen. All right. For full context, these logos were designed by the NVIDIA creative department. We had a big meeting with them to make sure that the different logos they come up with do not include NVIDIA visual elements; the main one being that we did not want them to use the NVIDIA colors. We spent a lot of time making sure they understood that this is a process with the community, and the goal is to present you with options, with the community as the final decider, not NVIDIA. The NVIDIA creative department is just here to help us come up with a logo; there's no NVIDIA involvement in the decision. We're trying to be neutral.
Some of the keywords that we brainstormed with them to come up with different options were around devices and containers. I don't have the notes from that meeting with me, but we spent time discussing what visual elements we would like to see. One that came up was a funnel, the other was a container, with the container and the cube being the two main themes. The images are what the artist got inspired to draw from these keywords. They came up with three different options, at first in black and white, because the conversation they wanted to start with was: are these shapes something we feel interested in? The first one, I think, tries to represent a funnel. The second one represents a container that is augmented with layers. And the third one is a cube arrangement, which we found very interesting because of the visual element of seeing the cube both as just a shadow on top of the three cubes and as an actual cube on top of them. Out of these three options, they then tried to color them, and you can see the different colors here. They also gave an example of how it would look if CDI actually became one of the projects you see on the cloud native, the CNCF, website. So this is option A, this is option B, and this is option C. I'm going to pause on this one and let everyone look at them and think a little bit. All right. A bit of feedback that we got internally was that option A looks a little too much like an AWS logo, so that's a small concern we have. But generally, I think the process would be to send an email to the whole community to have a formal vote.
What's everyone's opinion? The process we can talk about later, but does anyone have feedback on option A, B, or C? I like version C visually, but for the colors, I think B2 is the most appealing color scheme for me, maybe B3. But the shape is definitely C. I don't have my notes from the previous meeting, but one framework Creative gave me for evaluating and giving feedback on a logo is: outside of the colors, do we like the concept these logos represent? Are we attracted by the idea of a funnel, or the idea of a container that is layered on top, or this idea in option C, which feels like a box that holds more boxes? Are these visual elements that we think make sense, or do we care more about a specific visual idea? Well, conceptually I like option B, because with those layers above the container it maps to the idea, from my point of view. But option C is more attractive; visually I would say it's richer. Visually, I definitely agree there's interest in option C. It's intriguing, you know what I'm saying? Yeah. So there was a question: for option C, did they try making the center black cube bigger than the rest? Can you repeat that? Right now it looks similar to version A, but with solid colors on the edges and a smaller cube inside. If they made the center cube the bigger one and the surrounding cubes smaller, it might be more interesting, I don't know. Okay. One thing I noticed is that the appeal of the black-and-white option C gets lost with the colors; for some reason those intriguing visual elements in option C get lost a bit when colored. You know what I'm saying? I agree, those colors are not that great.
But what's intriguing in option C is really that you can look at it in multiple ways: as a shadow of the center cube, or as a separate cube. I wonder if that's a good thing for a logo, though. I don't know, it might be. All right, I think that's pretty much it for logos. Keep in mind also that it's interesting to see how it's going to fit with the others. Feel free to go over the slides linked in the agenda Google doc, and I will give back control of the presentation; oh, actually someone already copy-pasted it into the agenda. Ukri, you've already joined, awesome. Thank you. Well, I would propose that Sasha starts by briefly explaining the idea and the architecture, then I will show a small, simple demo to explain it conceptually, and then Ukri will show a more complex demo scenario. All right. I don't have any slides for this, so just a couple of words. Remember when we started talking about how CDI would look, we had the idea of using custom resource definitions to describe the request for device attributes: something where we can put as many device properties as we want, like GPU memory, GPU time if we want, FPGA bitstream names, and others. But the problem is that introducing such a big change to Kubernetes is problematic. So we wanted to have a proof of concept where we can demonstrate the idea, and then, starting from that, evolve it into something that can be written as a KEP or a real proposal. So, as we agreed earlier, we took CDI as the bottom layer.
The way it works is that we have runc, which reads the CDI specification and exposes the devices to a container. We have a slightly customized version of runc, because we also have GPU-related properties and cgroup bits that are being discussed for the upstream kernel but are not upstream yet, so we can show some of that. But the question is how we create those CDI specs, and for that we created a node agent which writes the JSON for the container. Then we piggyback on the storage paradigm to demonstrate a user-visible UX. You have a pod spec, and for now the pod spec refers to the device as a volume. You have something similar to a storage request, where you can specify: I want a particular class of device, I want particular properties on this claim, and I want to use this claim either in one container, or shared between multiple containers in this pod, or between multiple pods, and so on. So practically, what we have is an object model similar to storage, but for device claims. We have a node agent, we have a central controller which can hide the vendor-specific logic, and we have proper integration with the scheduler, so the scheduler knows exactly where a device can be allocated. But with just words it's hard to understand, so I hope that with the demos Ed and Ukri will show, based on examples, you will get a better idea of how it works. Yeah, let me continue from here; let me share my screen. But before I begin: the storage paradigm we just use as the simplest way to showcase the idea. We already found several design issues which are okay for storage but will not be applicable for devices. We collected those items, and when we transform this into a real proposal, we will need to work around them or design them better, to make it more generic. Okay. Give me a minute, guys; Zoom doesn't allow me to share because of a setting, so I need to reload it. Just a minute. Okay.
I hope I don't hit the same problem. I think it's because you're using macOS that it's problematic. Damn software. No, it's Apple fighting for privacy. Of course, that's how we position it. Looks like Ed is able to share. Can you make the font bigger? I was on mute. Is it okay this way? Hello. Okay, it looks good to me. Okay, so let's go. The idea is basically to use CDI-style objects to show the resource allocation and management approach without actually creating CRDs and referencing them, because that would require a lot more work. This is what I'm going to demonstrate. First of all, a little setup overview. We have a cluster with two nodes, with the control plane running, and these two services. The first one is the controller; it's usually just one instance for the cluster, and it collects information about the devices from the node agents. As we have only a master and one worker node, we have this one node agent running; it runs on the node, scans the available devices, and reports them to the controller. On this node, we have two FPGA devices, so I will demonstrate FPGA device allocation. We have two devices here: this port-zero device and this port-one device. These device nodes are the actual consumable resources from the user workload's point of view. And what differs between them? There are two parameters here to pay attention to. The first one is the interface UUID, which is the same for both; in simplified terms, it's basically the ID of the device type, so we have two devices of the same kind. The second parameter is the accelerator UUID, and that one differs. It's the ID of the function that is flashed into the device: an FPGA is a programmable device, so we currently have two devices programmed with two different functions. And that function ID will basically be used as a parameter.
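A spec like the node agent produces might look roughly as follows. This is a hedged sketch, not the exact file from the demo: the version string, device names, paths, and environment-variable names are all illustrative, and early CDI revisions used slightly different field layouts.

```json
{
  "cdiVersion": "0.2.0",
  "kind": "intel.com/fpga",
  "devices": [
    {
      "name": "port0",
      "containerEdits": {
        "deviceNodes": [{ "path": "/dev/dfl-port.0" }],
        "env": ["FPGA_AFU_UUID=<af-uuid-of-port0>"]
      }
    },
    {
      "name": "port1",
      "containerEdits": {
        "deviceNodes": [{ "path": "/dev/dfl-port.1" }],
        "env": ["FPGA_AFU_UUID=<af-uuid-of-port1>"]
      }
    }
  ]
}
```

The runtime (runc here) would read such a file and apply the listed edits, device nodes and environment variables, to the container's OCI spec when a matching device is requested.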
Let's see exactly how. First of all, we create a storage class; let's look at how it's organized. We have storage class parameters: the device type, FPGA; the vendor, Intel; and the interface ID, which is what I just showed. It specifies the device, the device class, if you will. We create it, and that is usually something a cluster admin should do. By the way, you can interrupt me at any time if you need more details. The next piece is a PVC, a persistent volume claim. We are using it to specify what the user workload wants; the user workload will refer to this PVC to get the device allocated. We can hear you now. Am I back? Okay. So this AF UUID, I'm back again, sorry for that, this AF UUID is that parameter: the user workload wants some particular accelerator function to utilize. The software is organized such that these parameters can be arbitrary; it's up to the node controller to understand them. To the rest of the software, including the central controller, it's just a list of keys and values, and that's it. Next: we created this PVC, and the very last thing is to run the workload. The pod refers to the claim, to the PVC, by name. It doesn't do anything, it just runs busybox, but the idea is to see whether, after all of these manipulations, the FPGA device is accessible from inside the container. We can see that here, hopefully: the device with the requested function ID became available inside the container. Can you show me again, sorry, can you show me the pod spec for a second? The specification? Ah, okay.
The only interesting thing here is this part, the claim name. That's the reference to the PVC, and the PVC actually holds the request, the set of parameters that the user workload requests. What's in /cdi? Is it the CDI specification? If you do an ls in the pod in /cdi... sorry, I understand the thing, it's just that the /cdi path strikes me as odd for some reason. Yeah, it's just a temporary workaround. Because it's a volume, we need to somehow use it, and the only way to use it is to actually specify it as a mount path. But that could be anything; I'm not using it. It's just a way to say that we want this kind of volume, provided by this persistent volume claim. Okay, I see; sorry, my brain was just not computing this. Well, that's basically it. My idea was to prepare the simplest demo possible, to show the concept in a very simplified way. The next part will probably be more understandable to you, as it will be about GPUs, but it's more complex, I would say. So if you have simple conceptual questions, I can answer them now; if not, we can switch to Ukri and he will demo the GPU case. But this is a very simple case: one parameter, the function ID, and practically that's it. Conceptually it makes a lot of sense. Okay. The biggest concept is that we are splitting this into, well, actually three pieces. One piece is that we have devices with classes, we can have multiple of them, and we have a provisioner, a vendor-specific provisioner, which handles those. Then we split out the allocation phase: the user declares, I want this kind of device with this set of parameters.
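Put together, the FPGA demo objects might look like the manifests below. This is a reconstruction from the description above, not the demo's actual YAML: the provisioner name, parameter keys, UUIDs, and the use of an annotation to carry the AF UUID on the claim are all assumptions.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: intel-fpga               # created by the cluster admin
provisioner: cdi.intel.com/fpga  # hypothetical vendor-specific provisioner
parameters:
  deviceType: fpga
  vendor: intel
  interfaceUUID: "11111111-2222-3333-4444-555555555555"  # the device class
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fpga-claim
  annotations:
    # The requested accelerator function; opaque key/value to everything
    # except the node controller.
    cdi.intel.com/afUUID: "d8424dc4-a4a3-c413-f89e-433683f9040b"
spec:
  storageClassName: intel-fpga
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Mi   # dummy size; the device parameters drive allocation
---
apiVersion: v1
kind: Pod
metadata:
  name: fpga-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: device
      mountPath: /cdi            # placeholder path, not actually used
  volumes:
  - name: device
    persistentVolumeClaim:
      claimName: fpga-claim      # the pod's only link to the device request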
And the third piece is in the pod spec: we are saying, okay, now I'm going to use the allocation that I created earlier, up front. And I have a question, actually. Let's say this is the new way of doing devices; how does that validate or invalidate the old way? We actually don't need to invalidate it. It can be adopted granularly, and it doesn't depend on device plugins in any way. You can use device plugins for simple cases, but as soon as you have more parameters you want to use, this is the way to go, from our point of view. I ask because I think that's going to be one of the first questions we get: do we want to be maintaining two different systems? I don't think we will really be maintaining two, because the current device plugin code is not changing much over time; remember the last time it was changed, it was maybe a year ago or something. So I think for some time they will coexist in parallel, until the community actually decides what the future is. Okay, makes sense? Yeah. Well, this is just a way to present it; if the community likes it, I believe we can find a way to actually proceed further. I think the next step would be to present this to SIG Node. Yeah, sure, that's in our plans. Okay, so I will start sharing. Remember, we discussed that we would first discuss it in a smaller group with a few key people, and then go to bigger audiences. I support this and I'm happy to help. But first let's look at what Ukri has; I think for you it will be even more appealing. Yeah, like Ed said, this is a more complex setup. This is for GPUs, and we also have a little bit more of a cluster here, since it's not just one node plus the controller: there are three nodes. And by the way, I don't have anything recorded, so it either works or it doesn't.
We may have demo effects. So: three nodes, identified by labeling them with storage=cdi. There are two NUCs, and then there's one slightly bigger machine, and there are some GPUs in them: four in total, the NUCs have one each and this CMLS machine has two GPUs in it. At the top you can see pods which happen to have CDI in their names. For these GPUs, I basically hard-coded the values so that each GPU gets four gigabytes of memory, which is fake, but anyway it's a value, and 1000 millicores. So for four GPUs, that totals 16 gigs in the whole cluster, plus 4000 millicores available. Now, if we look at the YAML in the bottom left corner, there's a deployment with 10 replicas. Again, we're doing just a busybox, with the same tricks Ed showed; the idea being, does it mount the cards properly or not? There's a little bit of trickery going on with the volumes: we're using ephemeral inline volumes. Why? Because with normal volumes, if you have multiple replicas, you don't necessarily get per-pod instance volumes; with ephemeral inline volumes, you do. Now, what you're probably interested in is: what are these parameters? We've defined here that each pod gets 1.5 gigs of GPU RAM and 300 millicores. If we do the math, the cluster had 16 gigs, so that's basically enough to ask for 10 of these. But since each GPU only has four gigs of RAM, you can only serve two pods out of one GPU, right? So the math is that with four GPUs and two pods each, you can only get eight pods up. Out of the 10 replicas, the expectation is that two of them should stay pending, if everything works. So let's see; I'm starting this up and things start happening. And we got the expected end result: there are two pending pods, they didn't get scheduled.
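The GPU deployment described here might be sketched as follows. Again this is a reconstruction under assumptions, not the demo's real YAML: the storage class name and the annotation keys carrying the per-pod memory and millicore requests are invented, and the inline ephemeral volume is my guess at the mechanism that yields one claim per replica.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpu-cdi-demo
spec:
  replicas: 10
  selector:
    matchLabels: { app: gpu-cdi-demo }
  template:
    metadata:
      labels: { app: gpu-cdi-demo }
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "infinity"]
        volumeMounts:
        - name: gpu
          mountPath: /cdi
      volumes:
      - name: gpu
        ephemeral:                     # inline ephemeral volume: one PVC per pod
          volumeClaimTemplate:
            metadata:
              annotations:
                cdi.intel.com/memory: "1500Mi"   # per-pod GPU memory request
                cdi.intel.com/millicores: "300"  # per-pod GPU compute request
            spec:
              storageClassName: gpu-cdi
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Mi   # dummy; device parameters drive allocation
```

With 4 GiB per GPU, only two 1.5 GiB claims fit on each card, so 8 of the 10 replicas schedule and 2 stay Pending, matching the demo's outcome.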
And what is of course also interesting: did they get the proper devices, or was it just random? I have a little script, which doesn't seem to be working today, so we got the demo effect there. Let's see; no, that was just a user error, it needs the deployment name. Okay, so the expectation in this printout is that we should have two lines for card one, basically. Only the CMLS machine had two GPUs, as you can see from here, and both of those cards should have two pods. So in here we should have two card-one entries, over there and over there, and they should be running on the CMLS machine. We can check: this pod whose name starts with w98 is on CMLS, that's correct. And the same over here: the one starting with "5" is running on CMLS. So it worked, no demo effects here. It looks super interesting. Yeah, I hope you liked it. Unfortunately, though, storage does create a lot of headaches. Namely, if we try things like giving two GPUs to a single pod, we fail; basically, we would need a scheduler extender to support that, as long as we use this storage paradigm. And the same happens if we have two containers inside a pod which each use a GPU separately. Same issue, because the volumes are created one by one, and you basically need a scheduler extender to help. The reason is that when they are considered one by one, you don't necessarily understand that you cannot allocate the second one anymore. So it gets complicated there. Storage as such is not the best fit for this kind of thing, even though it does work nicely when you have just one GPU being allocated. Go ahead, Sasha. So, as I mentioned, we found several corner cases where storage doesn't work properly. For example, one volume can easily be shared between multiple pods on the same node, and for device allocation we should prevent that kind of scenario. And then there are the access modes, ReadWriteOnce or ReadWriteMany, the overall sharing between pods.
We will need more sophisticated logic on the scheduler side and on the controller side when we're allocating, when we're dealing with devices. But for a simple showcase, a simple demo, I think it's good enough to demonstrate the concept. Yep, and this is definitely using CDI as the means; there are no device plugins here. And I can take this a little further: I applied another deployment there. Obviously, it ended up pending, these two here, this one. And if I now delete the first one, then, if everything works, it should eventually get deployed. There may be some demo effects here, and it's a little bit slow to delete, but if we're lucky, it might actually work in a minute or two; I don't know how long it really takes. So, what do you think? I think it looks awesome. It's going to be very interesting to introduce these capabilities. From a user perspective, it's going to be transformative, especially as we have more and more devices that are shareable. From an engineering perspective, an architecture perspective, I think the battle is going to be interesting, given that it touches the scheduler and the node at the same time. Yeah. For this proof of concept, we didn't touch the scheduler that much, but like Sasha says, if we do this properly, then unfortunately that side needs a little bit of tweaking, I guess. Yeah. One of the reasons I push for this kind of concept is that I would like to make it a generic object which can be referenced from the pod. Devices are a good example, but if we do it right, then later on even storage could be redirected to the same interface. Yeah. This was a short one; I don't have anything else to demonstrate, so I guess I'll stop sharing. This is super interesting; thank you so much for presenting. How do you want to drive this forward?
Well, first of all, I want to make sure that next time we meet, we actually have people from containerd and from CRI-O, and I would like to have a proper presentation with some slides. Do you want to pull them in next time? Yeah. I was hoping at least Mike and Ronald would be here today. I'll probably be gone for the next meeting. I think we need to plan these meetings a bit more; I tend to send the invite less than 24 hours before, and we should probably plan them a week ahead and have an agenda ready. So after we show it to Mike and Ronald, we need to find, let's say, a select few people with whom to have a first run: Derek, from your side probably Kevin, I don't know who else might be interested in this particular project. A very small set of people who can weigh in. I can definitely invite Kevin; Kevin should probably also start joining, so I'll make sure he's invited and aware of everything. Makes sense. So we have 10 minutes. I did want to talk through a bit the Podman implementation that I have, the demo I came up with in September, and discuss a little about things we can probably improve and how we're going to structure everything. The first thing I want to talk about is the API; let me see if the unified view is better; the API that Podman is going to see. Everything that's inside CDI itself is not interesting here; there were a few injection points in Podman that I wanted to point out. The first one is container create: that's the point where the container gets created, and this is where we need to extract the CDI devices. Sorry, I'm getting context back on all of this. Here, what we're doing is iterating over the different devices.
If a device is part of CDI, then we add it to a list, and at the end we update the spec with the CDI devices. What that means, taking a step back: we iterate over all the devices that are in the spec. These devices are typically in the shape of device nodes, but if one is not a device node, if it's an actual name, a name that we find in a CDI specification, then we know it's a CDI device, and we add that name to this list of devices. From there, we have this function provided by CDI that allows us to, what's it called, sorry, having a hard time... that allows us to update the specification, if that makes sense. Right? Okay. Does this function make sense to people as an API? It looks okay, but in which place are you verifying that it's not an official device name, like /dev/something, but a CDI device? That would be a bit later, probably, here: HasDevice. And this is another question we need to figure out as a group. Right now, every time we call HasDevice, I parse all the files; loading the devices basically just reads through all the files. There are a few questions we might want to ask ourselves. One is: should we be providing a context, so that the map we build out of these files is kept and we don't rebuild it every time? And the other one being, of course, we could just have HasDevice take an array of strings. So that's kind of... welcome to optimization; we can talk about that later. Yeah. And I think the last thing I want to talk about is the specification. Right now I implemented the OCI structures myself, but we might want to discuss whether CDI should just import the OCI structures, so it refers to the OCI device types in the container edits: the OCI Linux devices.
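The filtering step described here, deciding whether an entry in the device list is a plain device node or a CDI name, can be sketched roughly like this. This is a self-contained approximation with only the standard library: the function names and the exact grammar of a qualified CDI name ("vendor.example.com/class=name") are my simplification, and the real CDI package's helpers may differ.

```go
package main

import (
	"fmt"
	"strings"
)

// isCDIDevice reports whether a device reference looks like a CDI
// qualified name ("vendor.example.com/class=name") rather than a
// plain device-node path such as "/dev/fpga0".
func isCDIDevice(device string) bool {
	if strings.HasPrefix(device, "/dev/") {
		return false
	}
	vendorClass, name, found := strings.Cut(device, "=")
	if !found || name == "" {
		return false
	}
	vendor, class, found := strings.Cut(vendorClass, "/")
	// A vendor looks like a domain (contains a dot) and the class
	// segment must be non-empty.
	return found && strings.Contains(vendor, ".") && class != ""
}

// splitDevices partitions the requested devices into plain device
// nodes (passed through unchanged) and CDI names (to be resolved
// against the CDI specs and injected into the OCI spec).
func splitDevices(devices []string) (nodes, cdiNames []string) {
	for _, d := range devices {
		if isCDIDevice(d) {
			cdiNames = append(cdiNames, d)
		} else {
			nodes = append(nodes, d)
		}
	}
	return nodes, cdiNames
}

func main() {
	nodes, cdiNames := splitDevices([]string{
		"/dev/null",
		"vendor.example.com/gpu=gpu0",
	})
	fmt.Println(nodes, cdiNames) // prints: [/dev/null] [vendor.example.com/gpu=gpu0]
}
```

In the Podman flow described above, the `cdiNames` slice would then be handed to the CDI update function, while `nodes` keeps going through the usual device-node path.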
We still need container edits, because we have hooks, mounts, and environment variables, so I don't think the OCI devices structure alone will be enough. Okay. Because a lot of the code, or some of the code, ends up being, when you apply the OCI edits, just to-OCI-hook, to-OCI-mount, to-OCI-device conversions. Right, because at some point we do need to apply the container edits to an OCI spec. So my thought process is to move the CDI folder inside the CDI repository, and that includes these device types. What you're going to see in Podman is these small edits: the container struct now carries a CDI devices field; in the CDI code, if you notice here, we update the spec; and here in the CLI, we go over the different devices and append them to that CDI devices field. And that's it; those are the changes Podman would see. That sounds okay; at least from my perspective, it's what was expected. Yeah. And we'll of course refer to github.com/container-orchestrated-devices/container-device-interface, maybe /pkg or /spec. Okay. That gives you an idea of how this is going to materialize in Podman, and long term, we'll see how it looks with NRI. Well, before NRI materializes properly, we can propose exactly this scenario for CRI-O and for containerd. Yeah. And for containerd, I think the expectation is to go through NRI. I understand that; I'm just curious how long that will take. It might be easier to first get something working right now for CRI-O, containerd, and Podman, and when NRI matures, switch to NRI. To me, that's up to Michael Crosby and what he wants to do; he's ultimately the person who decides the architecture. If he wants us to go through NRI, then we'll go through NRI.
If he's okay with us going through containerd directly, then we'll go through containerd and maybe NRI later. I don't have an opinion on this; it's up to him. My single worry is that we shouldn't end up in a scenario where CRI-O and Podman adopt the changes and containerd takes months, or vice versa. If we have something, we need to have it working everywhere. Do you want to try and drive this implementation in containerd? Let me answer next week, okay? Yeah, that's all right. Thank you everyone for joining. Before we go, let's give Urvashi a word. Urvashi, what do you think about what Renaud presented? No, I agree, I think the design is good. Awesome, thank you. I'll try to make a PR, or separate PRs, and tag you on both the Podman PRs and the CDI PRs. I think the CDI PR needs to go in first, just in terms of being thorough, and maybe I'll try to add some tests so that we actually have something to rely on as we improve it. Yeah, that sounds good. Awesome. Is there anyone who wants to bring up something? I wanted to bring up the non-root devices patches that we discussed a few months back. Mika implemented the patches as we agreed, but we are lacking reviewers. So Renaud, at least from your side, if you have time, please help by saying yes or no, and I will try to ping other people as well. All right, I'll take a stab at it. Thank you. Have a great one, everyone. Yep, thanks. So, talk to you in two weeks, right? Well, hopefully, but we need to sync earlier; let's do the agenda next Tuesday. Okay, good. Okay. Okay, bye. Bye.