Hello. Hello. Can you hear me? Same to you. How are you? Good, how are you? Yeah, just came back yesterday from vacation, so still trying to catch up. What's going on? Yeah, same thing here, just came back yesterday from vacation, so catching up. Let's wait for Ronald. Yeah, some topics to discuss.

We didn't end up submitting a talk for you, right? No, I haven't submitted anything, because in practice I was already a speaker on two submissions. One was related to the topology-aware scheduling work, together with Swati and a few other folks, and the other one is related to another of our projects. So I decided not to try to submit a third one. Yeah, that makes sense.

Oh, Alexey. Hello. We decided to wait for Ronald, because he had some topic in mind, so it doesn't make sense to start without him. Looks like he's offline, at least in Skype. Oh, sorry, in Slack. Hey there, sorry, I only have a few minutes. Happy New Year. Happy New Year to everyone. So this is everyone online: Mike, Alex, Urvashi, Alexey. Maybe Ronald will join us a little later.

All right, let me pull up the agenda very quickly and start the meeting from there. So yes, happy 2021 to everyone. Today is the 12th of January, the first meeting of the year. The main point on the agenda is a recap of what happened in 2020, and maybe some idea of where we're going in 2021. Are there any items that people want to add to this agenda? Nope. Okay, let's start.

So in 2020 we started the first drafts of CDI and presented the container orchestrated devices working group at KubeCon, which were probably the two big points. We also started discussing the different projects, whether through NRI or through the resource management workgroup that got started. And, more or less related, the pod resources API is finally GA; that's the API that lets a vendor's device-monitoring agent map a device to a pod name, namespace, and container when it generates metrics (a minimal sketch of using it appears a bit further down). These are the main wins that came out of 2020 for this group. Are there items people feel should be added to that list? I think it's a good list for now, because obviously there are some activities which are not public yet, so it doesn't make sense to list them at the moment.

As a quick follow-up, this is something I mentioned on Slack: I've started engaging with the NVIDIA creative department to create a logo for CDI. They are at the draft stage, exploring some ideas, and I'm hoping to come back to this group with maybe three options to pick from. Does that make sense to everyone? Yeah, definitely. A question: did you try to contact the CNCF artists? I think the CNCF also has somebody who draws logos. Yeah, I went to the NVIDIA creative department because it was the simplest route, but I could probably do that too. I'll take a look if I have some time to think it through. Yeah, I think those are all the agenda items from before.
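For reference, here is a minimal sketch of querying the pod resources API mentioned in the recap above. It assumes the v1 API that went GA around Kubernetes 1.20 and the kubelet's default socket path; the import path and socket location are assumptions about a typical setup, not something stated in the meeting.

```go
// Minimal sketch: query the kubelet pod resources API and print which
// device IDs are assigned to which namespace/pod/container.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	podresources "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Default kubelet socket path; it is configurable, so treat this
	// location as an assumption.
	conn, err := grpc.DialContext(ctx,
		"unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := podresources.NewPodResourcesListerClient(conn)
	resp, err := client.List(ctx, &podresources.ListPodResourcesRequest{})
	if err != nil {
		log.Fatal(err)
	}

	// The mapping discussed above: device -> pod name, namespace, container.
	for _, pod := range resp.GetPodResources() {
		for _, container := range pod.GetContainers() {
			for _, dev := range container.GetDevices() {
				fmt.Printf("%s/%s/%s: %s %v\n",
					pod.GetNamespace(), pod.GetName(), container.GetName(),
					dev.GetResourceName(), dev.GetDeviceIds())
			}
		}
	}
}
```

A vendor's monitoring agent, the use case described above, runs this kind of List call on the node and attaches the pod, namespace, and container names to the metrics it emits for each device.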
Looking at 2021, one of the deadlines I was hoping for us to hit was to have an implementation in one of the major runtimes by the end of 2020, and unfortunately that's a milestone we missed. So going forward we need to figure out how we're going to move on this. The main point blocking us, I think, is figuring out whether we want to do NRI just for containerd, or whether we want to try to integrate NRI with Podman. We discussed that in the previous meeting, where we thought it might make sense to do a first implementation in Podman without NRI. So basically the main questions here are: do we want to put some kind of six-month or three-month deadline on having a first implementation, and what direction do we want to go with NRI? I'll let others speak first, then I'll share my opinion.

I think your plan of getting it up on Podman makes sense, because it'll exercise the service and get it used from the client perspective that you're focused on, right? And then whatever demos you put together, we can probably set up on containerd and start working into Kubernetes. We'll need to do a presentation at SIG Node to get some interest there, right? Yep. And I'll switch to your opinion next. Or, actually, go ahead.

Yeah, I mean, basically we just need to add the pre-create hook, right? That's just an injection point. And what you're going to be doing in Podman, I suspect, is adding the client options and wiring in the CDI library so it can be used at those pre-create injection points in the same place. Of course you'll also need to push all the different injection points into the CRI-O code. It's pretty similar code; it's not that far from what's in containerd. Yeah, that makes sense, I think. Yeah, I agree with that approach as well: start with Podman, and then we can expand to containerd and CRI-O. Yep. And we just need to keep the links between the two projects, CRI-O and containerd, so we can make sure we're putting the injection points in the same place, in roughly the same manner, and injecting the same content. For example, if you need particular annotations to be injected, we'll need to do that in concert, right? Yeah, I agree.

So my plan, I think, was quite similar to what Mike mentioned. Practically: have all the CDI code as a separate library, code which can be embedded in any other project or vendored, with one entry point where we say, "this is the description of the container we are trying to create," and the result is returned as the modified content of that container. Indeed, container creation is done similarly in Podman, CRI-O, and containerd, so we should need only minimal changes in all three projects to intercept that call. And my opinion regarding NRI is that right now it's in a bit of a flaky state; we don't know Michael's plans for when he will be working on it and what kind of functionality he will be doing. So I would say, if we want to achieve something soon, let's do it by directly injecting CDI into CRI-O, containerd, and Podman, just as a proof of concept, as demos. And if NRI meanwhile evolves to a level where we can use it, I think switching over will be quite trivial. So that was NRI and CDI.
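To make the "separate library with one entry point" idea concrete, here is a rough sketch. This is not the actual CDI library API; every type and function name below is invented for illustration. It only tries to capture the shape of the contract just described: an OCI container spec goes in, a modified spec comes out, so Podman, CRI-O, and containerd could all call it from the same pre-create injection point.

```go
// Hypothetical sketch of a single-entry-point CDI library; all names here
// are invented for illustration, not taken from any real CDI code.
package cdi

import (
	"fmt"

	oci "github.com/opencontainers/runtime-spec/specs-go"
)

// Device is a simplified stand-in for a parsed CDI device entry: the edits
// that injecting this device makes to an OCI runtime spec.
type Device struct {
	Name   string            // e.g. "vendor.example.com/gpu=gpu0" (made up)
	Nodes  []oci.LinuxDevice // device nodes to expose inside the container
	Mounts []oci.Mount       // extra mounts (driver libraries, and so on)
	Env    []string          // extra environment variables
}

// registry would in practice be populated by scanning CDI spec files on the
// host; a plain map keeps the sketch short.
var registry = map[string]*Device{}

// UpdateOCISpec is the "one entry point": the runtime hands in the container
// description it is about to create plus the requested device names, and the
// spec comes back modified in place.
func UpdateOCISpec(spec *oci.Spec, devices []string) error {
	for _, name := range devices {
		dev, ok := registry[name]
		if !ok {
			return fmt.Errorf("unresolvable device %q", name)
		}
		if spec.Linux == nil {
			spec.Linux = &oci.Linux{}
		}
		spec.Linux.Devices = append(spec.Linux.Devices, dev.Nodes...)
		spec.Mounts = append(spec.Mounts, dev.Mounts...)
		if spec.Process != nil {
			spec.Process.Env = append(spec.Process.Env, dev.Env...)
		}
	}
	return nil
}
```

With that shape, the per-runtime change really is minimal: each runtime's pre-create injection point becomes a single call to something like UpdateOCISpec at the same spot in its creation path, which is also what keeping the injection points aligned between CRI-O and containerd buys.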
Are there items we want to focus on as a group outside of CDI this year? From my perspective, I really hope we will finalize the paperwork to open source our proof-of-concept work on CDI and how it integrates with Kubernetes. In short, what I'm thinking is that we need to have a discussion with the Kubernetes folks on how we can expose entities from the runtimes toward the upper layers, saying: "this is an object that can be attached to a container." All the specifics should be hidden from Kubernetes, but the upper layers, like the scheduling and the pod spec, need the ability to reference those objects somehow. We have a proof of concept of how it can be done, well, a bit of a hackish approach, but the idea is that during this year I want to bring this proof of concept to a wide audience, show how it works, and talk with scheduling, with API machinery, and with node folks about how to generalize it, so that the containers in pods can reference, let's say, CRDs, and those CRDs can be mapped at the runtime level to the actual hardware resources abstracted there (a rough, hypothetical sketch of such an object appears at the end of this exchange). I don't know, does it make sense?

More or less. Maybe you could do a slide draft or a dry run of that at this meeting, and then... Absolutely, yes. As soon as we are able to publish our proof-of-concept code, I will show how it works, with diagrams of which components are responsible for what and which data passes between the components. Yeah.

All right. As soon as you get some kind of demo-level proof of concept running at the client level, in Podman or whatnot, then we can pick some stakeholders from the Kubernetes side, get them on our side, and put together a demo. Maybe Dems, maybe Mrunal; I mean, he's already here, so he's a good stakeholder in SIG Node, and together with those two we should be able to present something to the SIG Node team. We're going to need somebody on the resource side too, right? I think we will definitely need somebody from the scheduling folks. I can bring someone in on the scheduling side if you'd like. And if everything goes well, as I'm envisioning, we'll probably also need to bring in someone from storage later on, because the storage paradigm is practically very close to what I'm trying to achieve. So yeah, I agree. I'm just saying we should bring together a few stakeholders from a few areas and bring them here first; have casual conversations, get them on our side. And then when we go to SIG Node, they'll be able to chime in and say, "yeah, I like this." Maybe bringing Derek in at some point as well; we don't want this to be a surprise to the stakeholders, right?
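Since the proof of concept itself is not public yet, the following is only a speculative sketch of the idea just described: pods referencing a CRD-backed object that the runtime resolves to real hardware. Every type and field name here is hypothetical; it shows the shape of the idea, not the actual PoC.

```go
// Hypothetical CRD shape for the idea above: a cluster-visible object a pod
// could reference by name, whose vendor-specific details stay hidden from
// Kubernetes and are resolved to concrete hardware at the runtime level.
// All names are invented for illustration.
package devicecrd

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// DeviceClaim is the object a container in a pod spec would reference.
type DeviceClaim struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata"`

	Spec DeviceClaimSpec `json:"spec"`
}

// DeviceClaimSpec carries only opaque, vendor-specific data; the upper
// layers treat it as a generic object, much as CSI hides storage details
// behind a generic interface.
type DeviceClaimSpec struct {
	// Vendor-qualified device kind, e.g. "vendor.example.com/gpu" (made up).
	Kind string `json:"kind"`

	// Parameters the runtime-level code translates into concrete OCI spec
	// edits for the container.
	Parameters map[string]string `json:"parameters,omitempty"`
}
```

How a pod actually names such an object is exactly the part the speaker calls "a bit hackish" in the PoC and wants to generalize with the scheduling, API machinery, and node folks.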
Okay. Are there any other topics, or not topics, but objectives, that this group thinks we should be looking at this year? Well, on the scope of the Podman proof of concept, what kinds of scenarios do you think you're going to shoot for? What do you mean by scenarios? Your use case; what's your use case? I think the main one I really have is NVIDIA GPUs, because that's the device I have on my machine. Yeah, I got you. But generally, this group also has different devices; we also have access to a Mellanox NIC, for example. So what I said isn't completely true; it's just the only device I have on my own machine. But I'm very happy to test different devices and demo them, and if people want to record demos around that, that would be really awesome. Yeah, I suppose we could get one GPU, one storage, one CPU device. Well, I think we don't have a good storage use case. Yeah, storage is a completely separate area; you don't expose it as a raw device. Yeah, that's also my thought process: there are the CSI plugins for storage. But we could do an FPGA in our demo; what we have is a working FPGA demo, and I hope we will do a GPU demo as well. For the GPU, I think we have a prototype where we can limit the memory usage for workloads, which is completely impossible in the current design of device plugins.

Okay. So if we don't have anybody from storage here to also integrate CSI with NRI, then we're going to need a separate group of meetings, or... Mike, we don't really need to integrate CSI with NRI. What we actually need is to learn from CSI and make a generic interface, described, as I said, with CRDs, which would become a superset of device plugins and CSI plugins. Okay, so just follow the current standard resource management in Kubernetes? No, no. Well, if you're talking about CRDs, yes, following standard resource management, but also... The big thing is that the storage objects are part of the base API right now purely for historical reasons. In theory, all of those could be done as CRDs, but we cannot easily separate them out now because of how Kubernetes evolved. I talked with Saad Ali a couple of years back at one of the KubeCons, and he mentioned that there are historical things in the scheduling area and in the controller area where storage objects are treated specially. It will be...

Hey, guys. Hello. Yeah, hello. I have a question here, and I'm sorry if this is something you already covered. I'm trying to understand why you are leaning toward starting with Podman. I would expect the changes to be done more at the runtime level, at the containerd or CRI-O level. So I'm just trying to understand why you'd start in a higher-level component like Podman. Well, we've already got some functionality inside the container runtimes to support this; we just have to add the injection points. And when you want to do a demo, you need a client, and containerd and CRI-O themselves don't have one. So you're stuck with: how do I demonstrate this, right? Do I run it using crictl or podman, or what other client would you like? There are a couple of containerd client proposals out there, I don't remember the names, a few that we could do the same kind of demo in. Okay. So that means the client portion of this feature, to call it that, will be in Podman, but you're still saying all the handling will be done at the runtime level; is it something like that? You need a way to pass in the options. Yeah. Okay, that makes sense. Thank you. So the main reason we are doing CDI at all is to improve the user experience. Right now Kubernetes is one set of users, and command-line users, the podman or docker command line, are an additional set of users.
And to demonstrate something, we need either docker or podman, just to show how it was before and how it is now. Kubernetes, as Mike mentioned, would require a lot more changes, and it's not that easy to demonstrate. And my feeling is that changing Podman for the demo will be a lot easier than changing the docker CLI. Oh, I see. Got it. Thank you.

All right. Are there other goals people would like this group to achieve in 2021? It hasn't been explicitly mentioned, but I think it would be good to say it: I think we all agree the NRI idea is a good one. But I have to say we are interested in making sure the NRI interface will be, first of all, powerful enough to handle a lot of tasks, not only devices, but other resources that can exist on the nodes. Second, it should be performant. The exec mechanism is fine for now, but when we do some stress testing toward the end of the year, we might think about, I don't know, TTRPC, GRPC, or some other transport. I think containerd already has good experience with those. So to summarize, one of the goals I see for this year is to make sure NRI becomes a powerful interface into the runtimes. Some of the items we want to address are written up in issue number two in the NRI project. Can you repeat the last sentence? The last sentence is that we are interested in making sure NRI will be a powerful and performant interface, and some of the items we want to achieve are written in an issue in NRI. I will add it to the meeting notes.

When you say performant interface, do you have an idea of where you want to be? I want to investigate the possibility of either GRPC or TTRPC for the interaction between runtimes and NRI plugins. Right, so maybe my question is: do you have a time budget you want to stay inside? I think we will have a time budget, because think about it: with CDI we are talking about container creation time. If we have something that delays container start by, say, a second, that's already too much, or perceived as too much, by many people, or for very dynamic services. Right, so do you have a rough stake you want to put in the ground? Is it like 500 milliseconds? I don't want to commit to any number of milliseconds; we need to actually measure how much we are affecting container start time. I'm going to write down "less than one second," and then we can refine that goal later. Does that make sense? I don't know. Imagine you have a completely empty system; yes, it will start in under one second. But imagine you already have a hundred containers running, or a restart of a few dozen containers at the same time. It will not be a linear function of time. I mean, as far as I see it, CDI is not really a function of the number of containers or the number of incoming requests, but more a function of the number of files; it's a function of parsing the specs, basically. I'm not worried about the parsing of the specs; I'm more worried about the cost of forking. If, let's say, the kubelet says "create me a hundred containers" at the same time, that means a hundred forks of the NRI plugin. Right, I see that.
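To make the forking concern concrete, here is a tiny, self-contained illustration of the per-container cost of exec-style plugins, with /bin/true standing in for a plugin binary. It is only a lower bound: a real invocation also pays for marshalling the container state and for the plugin's own work. A persistent TTRPC/GRPC plugin would replace the fork/exec with an RPC over an already-open connection.

```go
// Rough illustration of exec-style plugin cost: creating N containers at
// once means N fork/execs of each plugin. /bin/true stands in for a real
// NRI plugin binary.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	const n = 100 // e.g. the kubelet asking for 100 containers at once

	start := time.Now()
	for i := 0; i < n; i++ {
		if err := exec.Command("/bin/true").Run(); err != nil {
			log.Fatal(err)
		}
	}
	elapsed := time.Since(start)
	fmt.Printf("exec-style: %v total, %v per container\n", elapsed, elapsed/n)
}
```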
So yeah, with NRI we were thinking about having the CDI code directly inside NRI rather than as an NRI plugin. If you look at the open issue, issue two I think, we discussed that there. Well, anyway, maybe not for CDI, but for any other NRI plugins we are interested in making sure it's a performant interface. Okay. I mean, I think it's important to define time budgets, and maybe the things you mentioned, like scaling, make more sense as a way to engage, rather than starting the conversation by saying "I would like to explore GRPC or other mechanisms." It's easier to communicate why you're exploring them. That's why I'm focusing on the time budget: on what it is we want to achieve first, rather than how we want to do it.

All right. And I think that's it, right? Is there anything else people want to say about this group? Are people happy overall? Anyone? I'm happy. I'm just busy as hell. Yeah, I think that's the main blocker of this group: everyone's working on a hundred other things. Yup. Well, yeah, Phil just moved to Amazon. He's still on containerd, but, oh man, I'm picking up Phil's backlog all over the IBM stack. He's a busy guy. It's going to be fun for me for the next couple of months. Well, I hope you get some help. Yeah, me too. I think I will.

Urvashi, do you have anything you want to add, comments, feedback? No, nothing really. I'm just looking forward to getting the proof of concept working with Podman, and eventually CRI-O and containerd. I feel like I'll understand it much better once we have it all going, along with the discussions we've been having. That's all. Makes sense. Awesome. Mrunal, do you have any feedback you want to bring to this group? You're muted. Alex, are you saying something? You're asking me or Alexey? You, Alexander; do you have any feedback for this group? Nothing specific, I think. Okay. I feel super lucky to be working with Sasha and Bevil. Yeah. Well, I think we all came back from vacation feeling energetic, with a bunch of tasks on our hands. Ready to take on the world. Yeah. Let's make sure this year will be better than the previous one, and do some fun stuff. All right. And Alexey, I know this is your first meeting, but do you have any feedback you'd like to bring to the group? Oh, yeah. We are also looking forward to the NRI interface, and we are interested in CDI as well. So I'm here because we are interested in it. Okay. Well, thank you so much, everyone. I think that's pretty much it. We'll see each other in two weeks; hopefully we'll be able to make some progress by then. Have a great day. Bye. Bye. Bye.