Good morning. Good morning, everyone. Let's give everyone a few more minutes. Hey, Mike, I think you said hi and I didn't say hello back. Hey, hi. How are you? Not too bad. What happened? I'm on double duty, watching a baby right now. Who's trying to attack the cat. This week in Finland there's also a school holiday, so I'm working from home and supervising my kid at the same time. Well, that's not fun. It's okay. How many kids do you have, Sasha? Just one. Just one? Yeah, ten years old, so it's not a big problem anymore, just from time to time. Let me ping Renaud and Mike Brown to see if they plan on joining, and once that's done we can probably start. Hey Renaud, how are you doing? Hey, how are you doing, Mike? Crazy. This Docker thing is killing me. They're stressing the entire industry. Well, the container image industry anyway. What's going on now? Just the retention stuff with the images? Yeah, just the retention stuff. Are you affected by any of this? Well, there are two parts. One is deleting the images that haven't been touched. The more important one is the new limits, the restrictions on how many pulls you can do per hour. They're not really going to go nuclear on November 1st, at least not according to Mr. Cormack, but that's what it currently says. It says that on that date, if you pull 100 images in six hours, you're out. No more images until after the six-hour window, if you're pulling anonymously. If you log in with a free account, you get 200 pulls. For a developer that's fine, but for people who do installs at customer sites, whoops, that's not going to work. Yeah, and the way they do it, they charge it back to the user instead of the image owner. There should be an option for the image owner to cover it. There is, Mike, but they're not making it public. Yeah. They've got two waitlisting programs. One, which I don't have the details on, is on the client side, and the other is on the source side.
If you have an organization with a bunch of repos, you can pay them for the storage and downloads. They'll do some estimate on the amount you've got and give you a number. We're still working that out; I've got a call with them on Wednesday to figure out if we're going to try to protect the ibmcom org. That's the other thing: they're just not being very public about all the different programs they have yet, I think probably because they're still trying to figure it out themselves. Of course, that would protect ibmcom pulls for anonymous users. It won't help anything on the other side: IBMers who still need to pull public images will be out of luck. They'll have to get a Team seat, I guess. I know our guys already hit some limits in the testing clusters. Yeah, they have throttling limits for the people who pull the same image hundreds of times per second from the same IP; they're trying to throttle those guys pretty hard. My P2P service won't work anymore. Yeah, that's definitely the solution. There are certain scenarios where people have gotten used to using latest tags, and Docker doesn't care how big the image is on the client side; they're just counting the manifest requests and saying, okay, that's one pull for one image, no matter how much you actually fetch. And of course we're just using that request for verification of the hashed blobs we have stored in our cache. But they don't care. So if you do latest, or default, or pull-always, just to check that your cached copy is still valid, then yeah, that's a pull. You can use mirrors, but then how often do you go back to the source to make sure the original upstream image is still the current version? I guess you can configure that on the mirror. All right, I think we're still waiting for Manon, but we can probably start. So thanks for joining, everyone. This is recorded, by the way; it'll be uploaded to the SIG-Runtimes YouTube channel.
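For anyone who wants to check where they stand against these limits: Docker Hub reports the quota in `ratelimit-limit` and `ratelimit-remaining` response headers, whose documented value format looks like `100;w=21600`, i.e. 100 pulls per 21600-second (six-hour) window. A minimal Go sketch of parsing that value; the function and struct names are mine, only the header format comes from Docker's documentation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// RateLimit holds one parsed Docker Hub rate-limit header value,
// e.g. "100;w=21600" means 100 pulls per 21600-second window.
type RateLimit struct {
	Pulls  int
	Window int // seconds
}

// parseRateLimit parses the value of a "ratelimit-limit" or
// "ratelimit-remaining" header as returned by registry-1.docker.io.
func parseRateLimit(v string) (RateLimit, error) {
	parts := strings.Split(v, ";")
	pulls, err := strconv.Atoi(strings.TrimSpace(parts[0]))
	if err != nil {
		return RateLimit{}, fmt.Errorf("bad pull count %q: %w", parts[0], err)
	}
	rl := RateLimit{Pulls: pulls}
	for _, p := range parts[1:] {
		p = strings.TrimSpace(p)
		if strings.HasPrefix(p, "w=") {
			if rl.Window, err = strconv.Atoi(strings.TrimPrefix(p, "w=")); err != nil {
				return RateLimit{}, fmt.Errorf("bad window %q: %w", p, err)
			}
		}
	}
	return rl, nil
}

func main() {
	rl, _ := parseRateLimit("100;w=21600")
	fmt.Printf("%d pulls per %d hours\n", rl.Pulls, rl.Window/3600) // → 100 pulls per 6 hours
}
```

In practice you'd read these headers off a `HEAD` request for a manifest (with an anonymous auth token), which itself does not count as a pull.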
So on the agenda we have really two items, and a third that's just a reminder to go and review the resource management improvements PRs. The two items are the next concrete steps for CDI and NRI, and then some discussion around the panel for the COD working group at KubeCon, namely questions we'd like to ask and answers we'd like to talk about. We can probably start with the next concrete steps for CDI and NRI, because we wanted to have that discussion with Michael Crosby in the room. I think Sasha mentioned that he had talked to you, Michael. Could you get us up to speed on the plan, or the ideas that you have, for NRI? I think there were also some mentions of shared libraries between multiple projects. Is that a good introduction, or could you help get us up to speed? Yeah, I think the other thing that's been asked a couple of times is to what degree we'll be extending NRI to support additional cases that the Intel guys, for example, had a need for: hooks in different stages for pods and things like that. Michael, do you want to start, or should I start from my side? I can. So far the initial design of NRI was around resource management, specifically a need that I had for CPU and NUMA, and then expanding out to things like huge page support, L3 cache, things like that. Over the past month or so we've been working on a few different plugins to support that, and so far the lifecycle hooks we have now, for the create, start, delete, and update calls, have all worked out well. I can see some missing areas, maybe around a pre-create where NRI plugins would have a chance to modify the spec or do some transformations before a container gets created. But so far, from my point of view, I think it's flexible enough in terms of supporting this. We've also discussed some ways we can share underlying code for the plugins.
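The lifecycle described here (create, start, delete, update, plus a possible pre-create) follows a CNI-style model: the runtime invokes a plugin with a JSON request and reads back a possibly modified result. A rough Go sketch of what such a plugin's core could look like; the field names and the label-based policy are illustrative assumptions, not the actual NRI types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Request is an illustrative stand-in for what an NRI-style plugin
// receives on stdin: the lifecycle state plus container details.
// Field names here are hypothetical, not the real NRI API.
type Request struct {
	State  string            `json:"state"` // "create", "start", "update", "delete"
	ID     string            `json:"id"`
	Labels map[string]string `json:"labels,omitempty"`
	CPUSet string            `json:"cpuset,omitempty"`
}

// Response is what the plugin writes back on stdout.
type Response struct {
	ID     string `json:"id"`
	CPUSet string `json:"cpuset,omitempty"`
}

// handle pins containers labeled as latency-critical to a fixed CPU set
// at create time; all other states pass through unchanged.
func handle(req Request) Response {
	resp := Response{ID: req.ID, CPUSet: req.CPUSet}
	if req.State == "create" && req.Labels["tier"] == "latency-critical" {
		resp.CPUSet = "2-3" // hypothetical exclusive cores
	}
	return resp
}

func main() {
	in := []byte(`{"state":"create","id":"c1","labels":{"tier":"latency-critical"}}`)
	var req Request
	if err := json.Unmarshal(in, &req); err != nil {
		panic(err)
	}
	out, _ := json.Marshal(handle(req))
	fmt.Println(string(out)) // → {"id":"c1","cpuset":"2-3"}
}
```

A real plugin would read the request from stdin and write the response to stdout, as the pre-create discussion below describes; the point of the sketch is only the request/modify/respond shape.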
I think the API is pretty generic and straightforward; it's really about collaborating on the plugins and how those are built. So that's, I guess, where the current state is. So maybe at least one of the things we wanted to do with CDI is this part you're mentioning, where we have a pre-create and we change the spec. One of the thoughts, or the reasoning, behind changing the spec is that when you're adding a device and informing containerd or Podman about the fact that there is a device, then when a user makes a call on the update path, we don't need a hook in the update path. The way it currently works is that if you've added a device in the create path without informing your runtime, you also need a hook in the update path to re-add that device. But even there you'd be in this position where, if your process was reading from that device node, the update path removes the device from the cgroups, and you re-add it as part of your hook, your process loses its read, or read-write, or whatever permission it has to the device during that brief window. So handling devices on the update path at the moment is not possible at all. Of course, if you do modifications, what I'm saying is that the update path in the OCI spec right now is not able to modify devices. Well, it does modify the devices. I mean, we have a long-standing bug where, because of how we add devices to a container, when, for example, the CPU manager goes in and just calls update on the CPUs, the container loses all its cgroup permissions to read from the /dev/nvidia device nodes.
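To make that bug concrete: an update that carries only CPU fields effectively replaces the resources, including the device-cgroup allow-list, so the container loses access to device nodes injected at create time. A toy Go sketch of the behavior a spec-aware runtime could implement instead, carrying existing device rules forward when an update doesn't mention devices. The types are simplified stand-ins for the OCI runtime-spec structures, not the real API:

```go
package main

import "fmt"

// DeviceRule is a simplified stand-in for an OCI runtime-spec
// LinuxDeviceCgroup entry (the real one uses type/major/minor, not a path).
type DeviceRule struct {
	Allow  bool
	Path   string
	Access string // e.g. "rwm"
}

// Resources is a simplified stand-in for OCI LinuxResources.
type Resources struct {
	CPUSet  string
	Devices []DeviceRule
}

// mergeUpdate sketches the fix for the bug described above: a CPU-only
// update must not wipe device rules injected at create time. If the
// update carries no device rules, keep the current ones.
func mergeUpdate(current, update Resources) Resources {
	merged := update
	if len(update.Devices) == 0 {
		merged.Devices = current.Devices
	}
	return merged
}

func main() {
	current := Resources{
		CPUSet:  "0-7",
		Devices: []DeviceRule{{Allow: true, Path: "/dev/nvidia0", Access: "rwm"}},
	}
	update := Resources{CPUSet: "0-3"} // CPU-manager style update, no devices
	fmt.Printf("%+v\n", mergeUpdate(current, update))
}
```

This is exactly what informing the runtime via the spec buys you: the runtime knows about the device, so updates no longer race against a hook that has to re-add it.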
So at least, I mean, that's one of the reasonings we had behind changing the spec: once the changes have been sent to the container runtime, the plugin no longer has to have a hook in there intercepting every update call. The other one is that there's a simplifying factor to just having a JSON file that says "mount these device nodes" and that's it, rather than a shell script that goes in and mknods devices. If that makes sense. Yeah, I think the API for how we interact is very flexible, so it would allow you to do a lot of that. It's just making sure we have the correct hooks in the CRI. But yeah, I think it's totally doable. When I started looking into a pre-create hook, it would be more of a contract between NRI and the CRI, because we'd have to accept the current spec on standard in, do your transformations and modifications, send it back on standard out from these plugins, and then the CRI would have to commit that and say, okay, this is the final spec. Okay. So maybe one of the questions that at least I had was whether we feel NRI is the right way forward. The first question is whether NRI is something we think we'll see not just in containerd but in other runtimes. And the second question is, if that's the case, how do we take advantage of it? Concretely, what are the expectations: where do we start making pull requests, what's the step we can actually start acting on to help build this idea on top of NRI? Yeah, I think Renaud would be the best to talk about whether the CRI would be interested in supporting this. The way I'm building it, the goal is for it to be generic just like CNI is, with a lot of the domain-specific implementation happening in the plugins, where we're just providing hooks within our CRI implementation.
So yeah, I think it does make sense, but I need to understand the bigger picture. In the case of CDI, we understand that we'll look at something sent over the CRI and decide, okay, we need to make these modifications to the spec. So how are we integrating with the other items in the spec? When are we deciding to call into NRI? Are we mapping from the CRI, or do we expect changes to the kubelet that call into NRI? Yeah, maybe sometime I can give you a demo of what I have so far. But also, talking with Alex, I think a lot of us feel some of these low-level things shouldn't be in the kubelet: the kubelet is already pretty big, and it hides some of the low-level details we want to handle. That's why this is geared at the CRI level, where we have a better understanding of the underlying system. Okay, so from our side, in our project, CRI Resource Manager, what we are doing is practically defining which CRI messages are interesting to potential policies. So it's all about container metrics from the CRI, plus all the create, update, delete messages. We can react on pods, we can react on individual containers; for example, from the statistics we can use the CPU cycles reported by the runtime to actually trigger an update of resources like CPU cores or something else. Okay, yeah, I think that makes sense. If it's over the CRI, it's easily integrable. Yeah, but with the CRI, what we expect on our side is that the hooks, or in our case what are called policies, are able to modify the intercepted messages. So, for example, in the case of create container, we can modify the resource fields. And the only real difference between CDI and NRI is that you're taking it down one level, so we can support more than just a device-specific API; we want to support devices and resources and whatever else we can think of in the future.
From my side, to add to what Michael said: the mechanism we blueprinted from, where plugins are executed as hooks, might be okay, but on a highly loaded system those are quite expensive operations. So ideally what we're expecting is some gRPC mechanism, or some other mechanism, where we can integrate closely with the runtimes, both CRI-O and containerd, passing this data back and forth with high efficiency. Yeah, I totally agree; the CNI exec invocation is expensive if we have a lot of calls. Yeah, I think Michael's NRI sort of solves all that, right? We just have to make some decisions. You know, Alex, if you're modifying the spec before the container runtime receives it in the CRI space: we like to store what we receive from the kubelet in our own little storage, so I think we'd like to store it first, before you make your modifications. If we need to store it again after you make the modifications, that's fine, that's fair. Well, we don't have any specific requirements for how it's structured, and actually having some level of debug that says "this is what we received from the CRI socket, and this is how the internal plugins modified it" might be a good solution. The problem is that our task and our design require us to be able to get from the runtime the whole state of the system: all the pods, all the containers. Just because of how our algorithm works, we need to keep it in memory to rebalance the real resources if needed. Okay, yeah, that was one gap we noticed: we want at least one change in the kubelet, to provide us the entire pod spec instead of feeding us containers one by one. The information's there; it just doesn't give the CRI the whole pod spec, with which we could make better, more efficient decisions when we know the entire workload. Michael, you probably missed it; Antti started to draft a KEP about this, it's linked in the meeting notes below, so we will,
and it's partially linked to the discussion we had about CRI going to GA, or rather the discussion about generalizing how we communicate resources down from the kubelet. Okay. Michael, when you say the whole pod specification: there's a lot of good stuff there, and it's not just the pod spec, it's also the other objects related to the pod spec that the kubelet uses. You have to make decisions and set up the pod information that's passed on the pod run request. There's also a bunch of other stuff that happens before a pod run, insofar as doing image caching and loading, querying the state of the node, that sort of stuff. The kubelet keeps that information pretty close to the vest. But they have a thing in the kubelet called the container runtime manager, and I think there was an original intent in the kubelet for that to be the CRI; right now it's just got a lot of stuff that we need access to. So we probably need to extend the pod specification, and the other objects the kubelet uses, to be distributed through a CRI v2 or something to that effect. Yeah, but as Derek mentioned, that sounds like boiling the ocean, so we need to specify how we can take one bucket of water out of this ocean, boil it, and then proceed to the next bucket, right? Yeah, yeah. And hopefully, under the KEP process, we're moving the information the container runtimes need to manage these containers for the pods downstream. Yeah, this can also come up, I don't know who attended the whole sidecar discussion, where there was a proposal to instead have explicit dependencies on the startup order and shutdown order, like systemd. If we end up going down that route, then it will make more sense to give the entire pod to the runtime, and the runtime is then responsible for managing the lifecycle of the containers within the pod. It's still early days
in that discussion, but it looks like it's headed that direction. Yeah, that looks pretty interesting. I especially liked when they started talking about a graph order, with priorities for the sequence. So one of the problems, which I think Derek mentioned in the CRI discussion, is that it blurs responsibility: for example, the priority of eviction, or killing of offending workloads. He doesn't want to remove that from the kubelet, so we need to redefine what exactly the kubelet will be responsible for versus the runtime: the kubelet says what, and the runtime says how. But Sasha, this OOM adjustment, yes, it's fine that the kubelet decides that, but the problem is that the kubelet is currently hiding, from the runtime and everybody behind the CRI interface, how much memory it has promised the containers they can use. Basically, that is impossible to calculate in reverse. Yeah, that's part of what both the KEP my colleague Antti wrote and our proposal are about: how to get the full pod spec, or at least the full resource information the containers require. But what I meant by what-to-run is more about the logical eviction of pods: when some condition on the node triggers and the kubelet needs to start evicting, it will start by killing containers for normal workloads, and system workloads will have the last priority. So that part stays in the kubelet, but how it's run and how it's killed is in the runtime, in my opinion. All right, recentering a bit on CDI and NRI. Concretely, if we wanted to try to integrate CDI with NRI, that would probably look like an NRI plugin, and if there were things missing, it seems we would need to make some pull requests against NRI or some of the containerd code. Does that make sense? So let me say something about that. Renaud,
you're correct that the current CDI can be implemented with the NRI approach: if we hook into this pre-create state, we can get the container from the CRI, inject whatever we need, and pass it to the runtime to execute. But that covers only the case of Kubernetes and the CRI. Our initial idea was that it's also applicable to the Docker command line and the Podman command line. How do you see that? So let me ask, this is something I might not have completely understood: is NRI integrated in the containerd CRI shim, or is it integrated in containerd directly? It's currently in the CRI right now, but you could just hook into whatever client code is calling, like ctr or whatever, in containerd. So one of the things, and this is what Alex has mentioned, one of the things we're really hoping is that CDI is something that eventually pops up in the Docker CLI, maybe the containerd CLI. At the end of the day, what we're really looking for, first and foremost, is being able to do docker or podman run --device my-super-device on something like an ubuntu image, running my vendor tool, if that makes sense. That's what we're looking to do, because at the end of the day, the mental model we have for something like Kubernetes is that, instead of what Kubernetes effectively does today, which is docker or podman run -v my-volume, with my binary, --device, -e, et cetera, we really want Kubernetes to be doing the simple --device form. That's the mental model we're going for, rather than the current one. And so for us the first objective here is really to surface this through the CLI, and then surface it back to Kubernetes. Yeah, that makes sense, but looking at it from the CLI standpoint, we still have to put the hooks in for those various places, because of the way you get information in.
You have to have hooks in the CLI and in the Docker daemon part. I don't think there's a way to get the best of both worlds, where you get all of the pod spec for Kubernetes-specific runs, and can do advanced resource placement and topology, with just the generic, container-by-container way Docker runs things today. Okay, so if I were to rephrase, what you're saying is that we're going to need to change a bit how NRI is called in the containerd project, and then the CRI shim, or did I completely misunderstand? So the CRI shim stays the same, because we provide the additional pod information in the NRI invokes. For adding this to Docker or another client, you'd have to add the NRI hooks to that specifically, because there's not a generic way to shove this inside containerd core and know a pod run versus a specific container invoke from Docker. I see what you're saying. Okay, that starts to make sense. What about CRI-O and Podman, how do you feel about it? I think I'll probably need to look at a deeper demo or something to figure out how we can integrate it, but at the CRI level it looks like it should be fine. The thing then is that the CRI works for CRI-O but it doesn't work for Podman, so we'll have to figure out how it'll work in Podman. Yeah, it should be exactly the same as Docker versus CRI Kubernetes; you just have to add the hooks in, because they're kind of two different payloads. Okay. I think the Podman case is probably a little special, right? It's trying to run a container for a pod, but it's more just running one container; it's not integrated with Kubernetes or anything like that. No, it's not. Yeah, it can run just like Docker, a container at a time. So if we wanted to support that API at the ctr level, I
think we could actually just call into the CRI code and generate a generic pod spec, in a similar way to what the Podman team does. Well, I already had it integrated in ctr at one point; you just add the one line where NRI gets invoked. Cool. All right, gotcha. Just to make sure, do these notes make sense, am I writing things down right? Yeah, makes sense from the containerd perspective, and Docker. Right. Yeah, I guess, Alexander, the question would be to you, and others like you: do you require additional pod spec details on the containers that are run at that layer, or can you operate just on a single container, as opposed to requiring the additional pod spec info? Well, right now we're able to survive on whatever the current CRI messages have. Some of the information we are just deducing from the pieces that are available; it's not ideal, but it's doable. And we want to get the full resource specification, so we'd be using real data, not just heuristics to reconstruct it backwards. However, for us it doesn't really matter what the granularity is; what matters is that we're able to get the messages we need, all the create, update, delete, and, if needed, to modify them before they're applied. Okay, there's probably just some detail around when shared namespaces are used, like in a pod across multiple containers. Yeah, for us the pod information is also needed, because we support a feature of container affinity and anti-affinity. For example, if you have a database and a consumer within one pod that need to be located close together, we need the information that these containers are actually part of one pod. Right, and we actually have that at the container level, so you could pass it over the NRI in the cases where it's applicable. Yeah, the CRI messages have all of that information, so what we expect from NRI is to
have the same ability we have when intercepting things on the CRI, and nothing more. Okay. But of course, longer term, we also want to evolve the CRI to have enough usable information inside it. That should be an interesting implementation, Michael. Well, you can already see the implementation; our project, CRI-RM, is public, so you can look at what we are doing. In reality it's really simple code: take the gRPC server request, analyze, modify, and pass it to a gRPC client. Got you. I'm just thinking how to make your code work via a plugin. Our code would be quite simple to adapt: the whole policy engine can be detached into a separate library, and the only thing that will be different is that instead of this proxy object, we'll be getting calls from some other gRPC service, or whatever else we come up with; not much changes. Yeah, I think the only big difference in terms of API design between CDI and NRI is that we want the entire topology for the resource use case, while CDI focuses on single-container devices, but I think both can coexist fairly simply. Well, Michael, generally CDI in the long term is supposed to also support pod-level devices where applicable, especially for RDMA-type devices where you have shared memory between multiple containers within one pod. The current CDI will just inject the same device into multiple containers, as many containers as there are, so it hides a lot of complexity. I do want to time-box this a bit so that we have maybe 15 minutes to prepare some questions for the KubeCon panel. Was there anything else we wanted to talk about for NRI and CDI, were there any other topics? Well, Renaud, if you can bump this KEP proposal's visibility up a level, because it's related to all these discussions. Which one? The one where, okay, but I do want to time-box it, since we have the KubeCon panel discussion. No,
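The "take the gRPC server request, analyze, modify, and pass it to a gRPC client" pattern described here can be sketched without any gRPC machinery: the proxy boils down to a policy hook applied before forwarding. The request struct, policy, and threshold below are illustrative stand-ins, not the actual CRI types or the CRI-RM API:

```go
package main

import "fmt"

// CreateContainerRequest is a tiny stand-in for the CRI message of the
// same name; only the fields this sketch touches are modeled.
type CreateContainerRequest struct {
	PodID      string
	Container  string
	CPUShares  int64
	CpusetCPUs string
}

// Policy is the hook a proxy-style resource manager applies to each
// intercepted request before relaying it to the real runtime.
type Policy func(*CreateContainerRequest)

// relay mirrors the described flow: run the policy on the intercepted
// request, then forward it. "forward" is a callback standing in for the
// real gRPC client call to the runtime.
func relay(req *CreateContainerRequest, policy Policy,
	forward func(*CreateContainerRequest) error) error {
	policy(req)
	return forward(req)
}

func main() {
	// Hypothetical policy: pin CPU-heavy containers to a fixed core set.
	pinToFastCores := func(r *CreateContainerRequest) {
		if r.CPUShares >= 2048 { // illustrative threshold
			r.CpusetCPUs = "4-7"
		}
	}
	req := &CreateContainerRequest{PodID: "p1", Container: "db", CPUShares: 4096}
	_ = relay(req, pinToFastCores, func(r *CreateContainerRequest) error {
		fmt.Println("forwarding", r.Container, "cpuset:", r.CpusetCPUs) // → forwarding db cpuset: 4-7
		return nil
	})
}
```

Detaching the policy engine into a library, as discussed, amounts to keeping `Policy` implementations and swapping the proxy for whatever invokes them, NRI included.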
no, I don't want to discuss it now; I'm saying it's part of this NRI discussion, so the only thing required is that people start looking at it and say, guys, you're doing a stupid thing, or, yeah, here are some comments on how to do it. Definitely. Do you want to talk about it now, or present it in the next meeting, so that we have some kind of formal review and people will actually look at it before that meeting, and at the end of the next meeting we actually say yes or no, it makes sense or it doesn't? Okay. All right, formal review next meeting. All right. Michael, thanks for joining us today. I think we'll all, or at least I, have a better idea of what needs to be done and where we're going with this. Okay, sounds good. All right, and I think that's it. KubeCon panel discussion. So some of the discussion we were having is that the format is more of a Q&A, since it's a panel. We might present some slides, but it really should be reduced to maybe a small charter slide, a small roadmap slide, and, if there's a need for an architecture diagram, that might be a good place for it; at most three or four slides, not that much. The real format, at least from what we've seen from other panels online, is that a moderator, or the speakers taking turns, ask questions, and the other speakers answer them. So my general idea here is: let's list some of the questions that we think actually make sense in a Q&A, let's list some of the answers we'd like to see to those questions, and that's it. Also keep in mind that we have 45 minutes, but we should keep 10-15 minutes at the end for audience questions. Does that make sense? All right, let me write down some of the questions that the people here want to see. So before
we go to the questions, let's decide first who's the moderator. Renaud, are you willing to be the moderator, or do we need to find one? I'm happy to be the moderator. I'll also be presenting, by the way: I'm presenting at the SIG-Runtime session, where I'll be presenting CDI. I'm happy to be the moderator, but if we think it might make more sense to have a moderator or a designated person for each question, that also makes sense. I think we need at least a minimal moderator role, just because the first two or three minutes of a panel are where we need to introduce the topics and introduce the people, saying, now you introduce yourself, now you introduce yourself, and so on, with a short statement from each one, like, I am from company X. Afterwards, maybe the first two questions, the charter and the architecture, are the first things that need to be asked, and then we can shuffle into more of a discussion between all the participants. But at least for the first few minutes, to set the tone for the discussion, we need it. Okay, who wants to be the moderator, let's start with this one. I think we have both, actually; Mike and Alex Mirales have already left. Who wants to vote for a moderator? I vote for you. Let's go with that; Renaud is the moderator. And is there anyone in the meeting who wishes to, or thinks he or she has, a list of questions? So I think one of the first questions I would ask everybody in this forum is: we are representatives from different companies and from different areas. Right now, you are a device manufacturer; we are a hardware manufacturer, covering both devices and resource management; Mike and Renaud are from the runtimes world. So our first question would be: what is CDI for you, for your area? What do you expect,
and what problems are you trying to solve with it? What's the title of the panel? Introduction of the working group. Is that the question you mean? Well, I mean, what do you expect from CDI? But it's the COD working group, so at some level we need to talk about what the various projects involved are, what the COD working group is trying to encompass, right, and what it is. Yeah, well, that should be part of the first introduction. Yeah, I'd just say, right away on the first one we dove straight into CDI. Yeah, I agree, that's almost needed. Renaud, were you talking about doing an intro, maybe five minutes with a couple of charts? That's definitely a possibility. As moderator I can spin up one or two slides; I mean, we already have at least one, right? At least one, so that the audience can ask questions from the picture. I think I shared that; give me a quick second, I've got the slides. Actually, Renaud, what I'm thinking, if it'll be easier for you, is that we can share the moderator role, especially for these introduction slides. I can ask you questions like, okay, we formed a group, can you talk more about what it's about and why, you show a couple of slides, and then hand it back to me. All right, something like that. Here's the slide deck that I have, because I created a few slides as part of the SIG-Runtimes TOC presentation, the technical oversight meeting about what the group is. There you go. Yeah, there you go. And then you've got a roadmap on the next one. Yeah, those are two good charts. One of the key questions we need to ask in the beginning is why it was formed under SIG-Runtime and the CNCF, and not under Kubernetes, which will open up the area to explain what we are trying to cover
beyond Kubernetes: our usages both inside and outside Kubernetes, under the CNCF SIG-Runtime, and not under Kubernetes SIG-Node. Well, I wouldn't call out SIG-Node specifically; they own the kubelet. Yeah, I know, I know all of that, I know all the runtimes. Why not SIG-Instrumentation? Yeah, I mean, not to pinpoint any of the existing SIG groups in Kubernetes; just saying our use case is wider than Kubernetes, regardless of whatever area inside Kubernetes it touches. I do think it's useful to list the SIGs that are related; say, if instrumentation is important here, list them, that's all. Well, instrumentation, and at some point scheduling will be involved; I don't know, security might be. I mean, that could be a question if you'd like: what are the related Kubernetes SIGs? But yeah. What might be more appropriate: how about the silent voices in the room, what do they think? What voices? The silent voices. Oh, the silent, for example, you, Urvashi, am I pronouncing that right? Yeah, hello. Hey. What do you think, what questions should we be asking each other, should I be asking you? I agree with the questions so far. I'm also wondering: do we want to talk about NRI at all, or are we just going to do CDI for the panel? That's really a great question. We just talked about it for an hour, and we can mention the discussions we have ongoing related to devices, because devices don't exist in a vacuum; we have a bunch of resource needs, or resources around the devices, which also need to be handled somehow. So we might mention it, but we are also looking at the scope, which potentially can improve OCI and improve the plugin mechanisms here; I would say NRI hooks and CDI. Intercepts, right, for runtime hooks? No, NRI, comma. Yeah, oh yeah, right, runtime hooks. Right, yeah, runtime hooks. We
might actually — now that I'm thinking about it — I did a presentation a month ago in this work group that's called the HPC Advisory Council, and I talked to a few runtime maintainers: Singularity, Sarus. These are, I want to say, "smaller" — with big quotes — probably specialized runtimes for HPC. So maybe there's a question here around: what about other runtimes? To me, specialized runtimes are important. I think it's important that we talk about the fact that we're not really just focused on these two Kubernetes runtimes but also on the more specialized runtimes, and that there is conversation with them. So you're talking about Kata Containers? I mean, Kata Containers is definitely another runtime we'll have to talk to, actors like that. Yep. Yeah, by the way, speaking of the VM-based runtimes: sooner or later we will need to figure out how to inject the devices, because if we are injecting only at container start time it might already be too late — the VM was already created and you can't hot-plug the device. So it's again back to the discussion of getting the full pod spec and getting the full information about the container, and I believe that sooner or later it should probably be in there. Okay. Yeah, a lot of that NRI code is going to have to live in the shims, Alex. Eventually — we'll get to that detail later. But if this is 45 minutes, then we have, let's say, 15 minutes for external questions. That leaves us with 30 minutes, which is maybe six questions. What are we trying to get out of the panel? Are we trying to get involvement from the audience? I would say at least two things, as you're saying. We want to have more involvement from runtimes, so "what about other specialized runtimes" is one of these questions. The other one that I think is
important is that we want to share some of these thoughts with some of these other SIGs — with other people from Kubernetes. Not just share the questions; the answers could then involve, like you said, the SIGs and the specialized runtimes. And to me, we need to make sure there's a bit of nuance in our thoughts and answers, in that we probably need to be on the side of "here are some of the ideas that we have; they might not always fit with the SIGs, or maybe we haven't talked with all the runtimes, and it's possible that..." — I mean, you see where I'm going. Oh yeah, absolutely. That way we're describing an idea, we're saying "here's the result of our work", and it's possible that not everything is done, not everything will interact nicely. So Renaud, is it fair to say that when you set the tone, you're going to say why this is important? I think that is important, yes. If you're going to answer the "why is this important" question, then we just have to start with when we're going to ask the audience for help, right? Yeah, kind of. Then hopefully we go to questions. By the way, regarding the questions, I would suggest adding one: what is the specific use case for the device — what are you trying to solve? I mean, from the people in the audience, or from people on the panel? On the panel — that'll be you. Yeah, you and Renaud, I think. Yeah. So maybe, taking a step back: in these first three slides that I'll be presenting, I can talk about why this is important, but I think it makes sense that the first question is then "what are the use cases", or what is the actual real-world problem you're trying to solve, right? This is what I think Alex was saying — that's part of the "why it's important". That's right. Yeah. So it's not just something that I say immediately as part of the presentation; it's also something that
both Alex and I cover. Right, so maybe this should not be CDI-specific, right? It's what you expect from it, and then maybe one of the answers could be from CDI and then one from the COD work group, right? Yep. By the way, Urvashi, I don't know exactly what you are working on — are you part of runtimes or devices? Yeah, I'm part of runtimes; I work with Renaud and on CRI-O. Okay. All right, so we have three people from runtimes and two people from devices. Okay. I was also thinking maybe one question can be what our roadmap looks like — what we hope to accomplish in the near future, what the plan is, basically an overview. Yeah, he's got that — that's like question five, I think, or was it six? Yeah, but it's only a slide for CDI. Oh yeah, it needs to be more. And what's funny about that — that's actually a very important point, because that was also one of the questions we were asking ourselves, not in the previous meeting but the meeting a month ago: there are a lot of initiatives here. We're trying to close on CDI, but at the same time there are a lot of ideas that we want to be talking about, so maybe that's something we need a slide out of, and maybe that's something you can present, Alex, because you have all these problems in view. Yeah, I'll try to add a slide. Thank you very much. By the way, I think that's really an important point: real use cases and roadmap are going to be the big two things we should be spending time on. Yep. And we're at the beginning of the road, so this is the time to jump on, guys — we need help. Yep. What areas this group is involved in is, I think, also important. Okay, so maybe one last one: are we just getting more involvement and sharing the ideas, or are there things that we want to get out of this kind of panel? So: we have a set of ideas, we have some draft
implementations, we have background use cases and why it's needed. What we don't have is representatives from other runtimes — we're covering only the major ones. We're crying and complaining: we need people from the HPC world, we need people from the smaller runtimes, we need VM-based runtimes, to provide information about what kinds of challenges with devices we have. There are also non-Kubernetes users of CRI. We have people with very strange devices who are potentially lagging behind and somehow satisfied — but maybe not really satisfied, just silent. Maybe we need those people to speak up. We need information about more complex configurations; we need information about more complex devices. When you say devices, we're covering resources too, right, not just devices? Yeah. For example, on a GPU or an FPGA we have local RAM, so we need to cover that. And, I don't know, some weird Mellanox network card might have internal buffers that you can somehow manage, or something like that. I think it's really important that we present the mental model at some point, if that makes sense — the CDI mental model. This is what I was saying to Michael: this idea of `docker run --device` rather than `docker run -v`. To me it's really important that we talk about this, because it makes sense in everyone's head. This idea of saying `docker run --device` versus `docker run -v` is something that should really hit people: hey, that seems like a really important idea. Truly. Let's move it to one of the first sets of questions, when we're talking about architecture, in the introduction part. If you want, I can ask you a question and you answer. An introduction to CDI, in a way, is something that needs to happen here, because we're really talking about use cases and roadmap, but at some point we really need some kind
of introduction to CDI, and then maybe that's when we start explaining why we're in SIG Runtime and not a Kubernetes SIG, what some of the related SIGs are, how NRI and CDI intersect, things like that. Right. That's a pretty good amount of work. We should definitely try to expand on these offline, and then from there we probably need to fix a date for the recording session. Hopefully we'll only need one take, but we all know we'll probably need at least two or three. On your mental model, Renaud: when you talk about Docker, are you going to follow that up with "or if you're using kubectl..."? There needs to be a transition. This is a Kubernetes group, right? It's CNCF, but it's mostly KubeCon. Well, that makes sense. It's mostly KubeCon, but on the other hand it's also SIG Runtime, so I think it's a fair question: how does it look from a Kubernetes end user's perspective? Yeah, if it's only one sentence, that should be fine, right? Right, but still — most of the audience isn't going to know what Kubernetes is, right? But they should all know what kubectl is. Yeah, but another question is: kubectl is fine, but you need a device plugin which will actually be exposing, or making up, that device. So even if, let's say, we implemented CDI today — great — we don't have the part where Kubernetes passes the information down. I think it's okay to present a bit of how it looks from an end-user perspective. Right, yeah, if it's a single sentence, it's not that bad. It makes a lot of sense to try to relate it to the audience, relate it to the audience in a way. All right, so let me see. I'll try to set up the Doodle ASAP, and then feel free to
all come back to this document and rewrite some of the questions. We'll probably have to assign some of these questions. If we only have six questions to ask and we have five speakers, that's going to be fun. Oh, who are the five speakers? I didn't remember — I'm just curious, are we four or five? So: Mike, Renaud, Alex, Urvashi, and myself. That's five. Ah, there you go. Hello, Urvashi — Mike Brown. I don't know you. Okay, nice to meet you. We should have done that at least three meetings ago. I'm sorry, I was phased out. I'm taking all the blame for not having, or at least for not presenting, each other. How long is the slot? I just joined. 30 to 45 minutes, I believe — I might be wrong, let me check. So we want the back 15 for custom audience questions? Okay, so we need to fill 15 minutes minimum, I guess. Well, I would say 20 to 25. 15 minutes minimum, yeah. Basically we're going to give a couple of charts at the beginning to set the stage, then we'll run through this list of questions, and then try to get some audience feedback. Yeah, sounds good. The schedule — let me find it in the schedule just to make sure. "COD" is the keyword I'm looking for. So the panel is at three, so we only have 35 minutes for the panel, actually, which means we really have 15 minutes to talk, or to ask each other questions. On your question about "what about other specialized runtimes": are you talking about runtime engines, or are you talking about container runtimes? We need a vocabulary slide. This is going to be fun. This is going to be tricky. Actually, since we're all here, let's very quickly... So this is the "what" question. Who asks this, or who answers this "what" question? All right, not everyone at the same time, please. Alex, do you want to start with this one? I'll ask you this one and then you'll talk about
your use case. And... please present my use case. No, I think Alex is probably a good person for this one. The roadmap — Mike, do you want to present the roadmap? I think that should be done by you, as a follow-up on the setting-the-tone part. That makes sense. And I have no idea how I'm writing my name... I would do the roadmap a little bit further down — probably just before we go to audience questions. Yeah, that's the last one. Yeah, probably at the bottom, otherwise they don't know the vocabulary yet, right? Right. So the first question is what we are doing and why it's important. The second question is how much it differs from existing solutions — so, practically, this mental setup that Renaud mentioned. Yeah, right. Why, yes. So I can ask the "what" and Renaud will be answering? No, no, I'm not answering that one. I'm already moderator, and I'm presenting the roadmap. So the introduction to CDI should be either Renaud or Mike or Urvashi. All right, I volunteer Urvashi. Yeah, sure, I'll do that. All right, now we need questions to be answered by Mike. So the next thing is: what is actually covered by it? The NRI and CDI things all together. The "why CNCF" comes a bit later, so first: what is in the scope? Scope, yeah. All right, so who wants to volunteer someone? All right, Urvashi, you get to choose whether it's Mike or Renaud. I guess I'll go with Mike. Should I help? I think it would behoove us to have more than one person answer these questions, as long as we can talk about them together. So I would say Mike and Renaud should answer this question. Yeah, sounds good. I can cover the runtime hooks, how they work, and kind of CDI, and Mike can cover NRI. Yeah. So the next question would be: okay, we're talking about the scope, how it's all connected to the whole ecosystem, which goes with the question
about CNCF and the Kubernetes SIGs and so on. So: where are we involved, how is it connected to other orgs or groups in the community? All right, and I think this one is also going to be Mike and Renaud. So remember, we only get 15 to 20 minutes, and then 10 to 15 minutes for questions. The roadmap seems to me like it makes sense as the last question. Yeah. I think when we first talk, we should probably say where we're involved. Yeah, so we can just move that "where are we involved" into the scope question. Makes sense. All right — no, it's in the scope part. Yeah, so I think these are kind of the same question; we can make it one question. Yeah. All right, and you guys can do the same thing when you're showing the initial charts: you can say what the scope is for you, right, and how you're involved, and then Alex can talk about that as well. Yeah, he can introduce himself. Or did you want to introduce people? No, we just go straight into it, no mention of who we are? Well, I would say a first introduction of not more than two or three sentences. Yeah. By the way, do you want to introduce each other, or do you want me to introduce you? No, I think each of us doing it will be better. All right, because at least it will set the stage. Keep it short and sweet, Alex — we know you've got a lot to say. I'm kidding, I'm just pulling your leg. All right. I have a problem: in the Russian language we have long and loaded sentences, so when I'm speaking in English I'm trying not to do the same thing. Sorry. No, you do great, I'm just pulling your leg. Where are the answers to these questions? I think everyone should try to write two or three bullet points. Feel free to ask these questions in the Slack chat or in Slack direct messages if you wonder whether there are points that we're missing, or
maybe you think we should translate these into slides and start filling out the talking points over there — have bullets there, and then if we want more details we put them in speaker notes or whatever, and have each other review those notes. So, a quick thought on this: these slides are talking points for us; we shouldn't be showing these slides, if that makes sense. Sure, yeah, but we can defer to the charts he was going to throw up, so the audience knows what we're talking about — still a high-level picture. Okay. And then a roadmap picture, I guess. Yeah, okay. I'll set up the Doodle — or finalize the Doodle — and bug the people who did not answer. I think it's you... shame on you. And yeah, I'll repost the link to these slides in our direct messages, and that's it. Thank you everyone for your time. That last one — are you going to show that "what are we getting out of this", or are you going to show some kind of chart with those bullets, maybe just before the Q&A? Yes, that might be a conclusion slide where it's just: we want you to be involved. Ask not what the COD working group can do for you, but what you can do for the... we need you for the COD working group. Listen, it's not for us, it's for CNCF, right? Yes — Kubernetes. All right, and yeah, definitely, this is a slide. Yeah. I don't think we'll have any trouble filling the time. Sorry, I didn't hear you. I would say we won't have any trouble filling the time. I didn't hear your question. My question was: is one person going to be asking all the questions, or should we field the next question after we answer our own? Yep, I'll be asking you the questions. You will have a card... Oh, I'll be looking directly into the camera. Oh man, I keep forgetting we're not actually going to be sitting on chairs in front of the... oh man, that's weird. By the way, if you're using Zoom to record, and if you feel you want to add something to
the answer of another panelist, we can use this raise-hand thing, so at least the moderator can give you a chance to answer too. Definitely. Actually, I don't know if the raised hand is going to be visible in the recording or not. No — I don't think so. But I do think that for the Q&A section — it's not recorded there, and it's not on Zoom. That Q&A section, I think, is recorded by the platform — the Q&A with people. I think so. Well, it might be published somewhere differently, but I think it was available in the platform — while you're still able to log in to the platform, you can see it. Okay, let's ask Nancy and then figure it out — I mean, we need to ask her what the technical details are to do that anyway. All right, let's all share the same background so it looks like we're sitting in the same room. Let me see if I can... where is that? I actually created my own custom image from theirs. For me, I asked our support team, like, give us something, and they provided that one. Oh no, NVIDIA is going to make me have the NVIDIA logo behind me. That's actually not bad. Oh, my device does not support virtual backgrounds, so that's not... Oh, I see — containerd up there. Yeah, this is probably not it. All right, there's got to be a CRI-O logo in here somewhere. You're hiding it — it's behind your head. Oh yeah, there you go. No, that's the Kubernetes one. All right, thank you everyone. Lots of Kubernetes logos in it or something. All right, have a great one. See you guys.
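[Editor's note] The CDI mental model discussed above — `docker run --device` instead of hand-assembling `docker run -v` mounts — can be sketched with a minimal CDI specification file. This is an illustrative fragment only: the vendor kind `vendor.example.com/gpu`, the device paths, and the environment variable are made-up placeholders, and the exact schema may differ between CDI versions.

```json
{
  "cdiVersion": "0.3.0",
  "kind": "vendor.example.com/gpu",
  "devices": [
    {
      "name": "gpu0",
      "containerEdits": {
        "deviceNodes": [
          { "path": "/dev/vendor-gpu0" }
        ],
        "mounts": [
          {
            "hostPath": "/usr/lib/vendor/libvendor.so",
            "containerPath": "/usr/lib/vendor/libvendor.so"
          }
        ],
        "env": [ "VENDOR_VISIBLE_DEVICES=0" ]
      }
    }
  ]
}
```

A CDI-aware runtime that finds a spec like this in its spec directory (commonly `/etc/cdi`) could then resolve a request such as `--device vendor.example.com/gpu=gpu0` into the device nodes, mounts, and environment edits above — which is exactly the "that should hit people" contrast with today's manual `-v` plumbing.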
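[Editor's note] On the kubectl side raised above — where a device plugin has to expose the device before Kubernetes can pass anything down — the end-user view is roughly one line in a pod spec. A sketch, assuming a hypothetical device plugin has registered the extended resource name `vendor.example.com/gpu` (a placeholder, not a real plugin):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: device-demo
spec:
  containers:
  - name: app
    image: ubuntu:20.04
    command: ["sleep", "infinity"]
    resources:
      limits:
        # Extended resource advertised by a (hypothetical) device plugin;
        # how the device actually reaches the container is the part the
        # kubelet, CRI, and runtime still have to plumb through.
        vendor.example.com/gpu: 1
```

This is the "single sentence" end-user story: the user asks for a resource by name, and everything below that line — device plugin, CRI, runtime, CDI-style injection — is the machinery this group is discussing.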