Hi, Elena, can you hear me? Hello, I can hear you. Good morning, everyone. I posted the meeting minutes, so please add yourself as an attendee. Let's give it another couple of minutes until people join. Okay, so does anyone have any items that they want to bring up before we go through the agenda items? Going once, twice. So typically we had a standing agenda, but I think it might be better to just let anyone speak up if they have any items, and then we can get down to the agenda and save some time, since we typically run a little long on some of these agenda items. So I think the first item that we have there is CDI from, yeah, from Renault. So take it away. Hi everyone, my name is Renault, I work at NVIDIA. Let me start by sharing my screen. I've linked the presentation that I'll be giving in the notes, so you'll find it there. Let me go into presentation mode. I've left some comments on the slides for people, just for information, so you can also look at them that way. So yeah, my name is Renault, I've been a software engineer at NVIDIA for the past few years. I'm a tech lead for what we call the cloud native technologies at NVIDIA. Today I want to talk about what we call the Container Device Interface. I'll go over some background: a general idea of the state of third-party devices across runtimes and orchestrators in the container space, and why just typing docker or podman run --device /dev/mydevice is not sufficient. I'll give some background on why we think a specification would be helpful. Then I'll go over some real-world examples that we've been hitting with some of the vendors in the Kubernetes space, but also at NVIDIA, in different real-world scenarios. And then I'll give an overview of the solution that we came up with together with some of the maintainers of containerd and podman. Yeah, I'll just jump into it.
So for some background, as I mentioned, who am I: I've been at NVIDIA for three years, working with some of the open source groups. One of the first tasks that I took on was the device plugin implementation in the Kubernetes space, with some awesome people there in SIG Node and the Resource Management Working Group. I maintain the NVIDIA device plugin for Kubernetes. I also maintain the container toolkit, which is basically our stack for exposing GPUs in different container runtimes, whether it's Docker, podman, or Singularity, which extends to Kubernetes and Nomad. We have a kind of custom plugin for Mesos, which is in-tree. And I've been working on some of the OCI and runc improvements to allow hooks for better device support. So I've been in that space for some time, and I just wanted to call out that this is an ongoing topic that's been brought up for a few years. If you're looking at third-party devices across runtimes and orchestrators, the ecosystem is kind of a mess right now, because pretty much everyone has their own concept of plugins. Kubernetes has a concept of a device plugin that you can't really extend to other orchestrators, and really what it does is reimplement device support at the orchestration level rather than implementing it at the runtime level. That's why it's really hard to extend it to other orchestrators. Nomad has its own concept of a device plugin that's not the same interface: Kubernetes's is basically a gRPC interface, while Nomad has an in-tree plugin mechanism, so there's an interface that you can extend through pull requests, and I think right now the NVIDIA one is the only plugin, which is not ideal for us. We want to make sure that there's some kind of open standard, rather than something that's locked down. CRI-O has a concept of hooks.
Like, not OCI hooks in the spec sense, but mostly a file that you place on the host file system where you describe what OCI hooks you want to have for containers. Basically you write an expression that says: hey, if you see a container that has label X, then I want you to call this prestart hook that's on the host at path ABC, etc. I'm not going to list all the hooks. So just taking a step back: why is it that, as a vendor, you're not satisfied with just mounting a device at /dev/my-device? That was true in the past, and it's still true for kind of simple devices. But thinking of, I want to say, more advanced devices that vendors have been putting out there, whether it's NVIDIA GPUs (because I'm from NVIDIA), or Intel FPGAs, or Solarflare NICs (the NICs are kind of a special case here because there's some networking involved), you kind of realize that there's a lot more complexity that comes from software, and a lot more constraints on what vendors can and cannot do. So what at least we've noticed, and this holds for multiple vendors, is that a lot of the time they need to expose multiple device nodes, and it's just not practical to ask the user to expose the GPU plus some additional control nodes. Think /dev/nvidia0, /dev/nvidiactl, /dev/nvidia-uvm. Or for a graphics card, you might want to expose the video, the display, and the audio devices. You'll also find IPC paths, or even changing procfs entries. When I talk about procfs entries: on Linux, for example, under /proc/driver/nvidia you'll find the full list of devices, ordered by PCI. If you only want to expose one device inside the container, you want to hide the other devices that are there in the proc filesystem.
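The multi-node problem described here, one logical GPU request fanning out to several /dev entries, can be sketched in a few lines of Go. The device-to-node mapping below is purely illustrative: it mirrors the NVIDIA example from the talk, not the real container toolkit's discovery logic.

```go
package main

import "fmt"

// expandDeviceNodes returns every host device node that must be exposed
// for one logical device request. The mapping is illustrative only.
func expandDeviceNodes(device string) []string {
	// control nodes that every GPU request needs in addition to its own node
	shared := []string{"/dev/nvidiactl", "/dev/nvidia-uvm"}
	switch device {
	case "gpu0":
		return append([]string{"/dev/nvidia0"}, shared...)
	case "gpu1":
		return append([]string{"/dev/nvidia1"}, shared...)
	}
	return nil
}

func main() {
	fmt.Println(expandDeviceNodes("gpu0")) // [/dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm]
}
```

This is exactly the expansion a user would otherwise have to type by hand as three separate --device flags.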
There's a lot here related to that kind of stuff; I can go over it if we have more time, but I think it's probably more interesting to go over some of the other use cases. So maybe as a vendor you want to perform compatibility checks: can this container run on this GPU? An example would be (again, I'm taking NVIDIA here because that's what I'm familiar with): as we put out new GPUs on the market, at some point we have either backwards-compatibility or forwards-compatibility requirements, where you've compiled your code for a specific architecture, and on the architecture that you're actually running on it's going to be slow, to the point that you might not want to run it there. So performing that compatibility check (is this the right GPU architecture for this container?) and failing if that is not the case is something that is super useful. There are other use cases where you might want to run compatibility checks; I can come back to this point if you have questions. Then there's performing runtime-specific operations. It's not a big surprise if I mention that starting up a VM requires very different operations than just starting up a container. The simple fact is, if you're starting up a VM, then you need to load the NVIDIA kernel modules just to support that, and there are a few other things to do. And then there's performing device-specific operations. Scrubbing the GPU memory is now something that we do in the driver, but we've had this kind of request, for example, in the context of Intel's FPGAs (and I'll mention this in the real-world examples), where they don't want to give the user the permissions to reconfigure the FPGA; they would rather do that themselves, as root, when they see the container land on a node.
So they'd do that, maybe, via a daemon process. For security reasons, they can't really expose this reconfiguration to the container. So taking a step back: why is there a need for a specification? As I mentioned, the biggest reason really is that runtimes and orchestrators don't offer the same experience to users, because the plugin mechanism is such a mess right now: everyone has their own plugin mechanism, and at the least, things are not reusable. The Docker GPU support is not reusable in Kubernetes, which is kind of a weird thing: you can do docker run --gpus, but you're not going to get the same experience in Kubernetes; you need a lot of workarounds to actually get that same experience. What that means is, well, the user experience is not super consistent. But worst of all, you actually end up in maintainability hell. As plugin maintainers and container runtime maintainers, we spend a lot of time either trying to have some kind of common interface with external shims for other plugins to call into, or, what we've seen more commonly, people simply drop runtime support, and that means for runtimes that could be easily supported. At the end of the day, from a plugin-mechanism standpoint, whether you're running podman or Docker, you're still using runc underneath, or you're using crun; but if you're using runc underneath, that difference shouldn't really matter to a plugin. Of course, if it's Windows versus Linux containers, then there's definitely more of a difference. But the point I'm trying to make is that from a user perspective, currently the behaviors are not the same across container runtimes.
And from a plugin author's perspective, there's a lot of work here: you either have to do a lot of work to have a plugin supported across multiple runtimes, or you have to drop container runtime support, which we've seen from other plugin authors. And generally, runtimes don't expose the same capabilities in a consistent way, so we end up resorting to workarounds, the most common one being what we call the default-runtime strategy: because, for example, with the Kubernetes plugin, given how the stack is, you can't really access OCI hooks, we end up writing a shim for runc and setting up Docker or containerd to point to that shim, which just intercepts the spec, adds the OCI hooks, and then passes it down. So I hope I convinced people, at least from a theoretical standpoint, why there is a need for a spec. Just going into some of the use cases: this was one of the conversations we were having at KubeCon 2019 US with other vendors. What I gathered, for example, in the Intel FPGA use case is that they have a fairly simple use case where there are really two things that they need to do. They need to mount multiple devices inside the container; that's fairly straightforward. And they need to be able to reconfigure the FPGA with the correct function. From a security standpoint, to do that they need to know what the user intent is. So they need either the information from the container specification, or information that is passed down from the user; in the Kubernetes use case, for example, it could be passed down through pod labels or pod annotations. And they might want to do that in a prestart hook at the OCI level; today, that's what they do.
I think currently they use CRI-O to inject hooks inside the container, paired with device plugins, and they don't have containerd support. So that's a use case where they end up excluding a container runtime. Now, Mellanox NICs. This one is, I want to say, a bit more nuanced, because it's a networking interface, so we have to look at this use case with a bit more care. This is an example of something that actually happens, and it's a bit more common than you would think: when you are a device vendor, you traditionally have a kernel module, and then you have some user-space libraries that users can call into to talk to your kernel module. What we've seen, and this is something that we see for Mellanox and also for NVIDIA (I'll talk about that in the next slide), is that vendors don't provide guarantees in terms of ABI compatibility. What that really means is: if you install driver version 1.0.0, you get the kernel module version 1.0.0 loaded, but you also get .so files installed on your machine at version 1.0.0. And if the vendor decides to publish version 1.1.0, the .so files that you get for version 1.1.0 can only work with the kernel module version 1.1.0. So effectively, they're not using semver; ABI compatibility between the kernel module and the user-space libraries is just not guaranteed. That means that if you wanted to ship these .so files inside the container, that container would be limited to nodes with the same driver version. The solution that we found for many of these drivers out there is to just mount these .so files inside the container at runtime. There are also a few device nodes that you need to mount here. So those would be the challenges from a software perspective.
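The no-semver constraint above can be stated in a couple of lines: compatibility is an exact version match, nothing looser, and the workaround is to pick the matching host library directory at container start. This is a minimal sketch; the function names and paths are mine, not a real toolkit API.

```go
package main

import "fmt"

// abiCompatible models the constraint described above: with no ABI
// guarantee from the vendor, user-space .so files at version X only work
// with a kernel module at exactly version X, with no semver tolerance.
func abiCompatible(kernelModule, userLibs string) bool {
	return kernelModule == userLibs
}

// libDirForModule picks which host library directory to bind-mount into
// the container for the loaded kernel module version, which is the
// "mount the .so files at runtime" workaround. Paths are hypothetical.
func libDirForModule(moduleVersion string, available map[string]string) (string, bool) {
	dir, ok := available[moduleVersion]
	return dir, ok
}

func main() {
	fmt.Println(abiCompatible("1.1.0", "1.1.0")) // exact match: works
	fmt.Println(abiCompatible("1.1.0", "1.0.0")) // minor bump: broken, despite semver saying "compatible"

	dirs := map[string]string{
		"1.0.0": "/usr/lib/vendor/1.0.0",
		"1.1.0": "/usr/lib/vendor/1.1.0",
	}
	dir, _ := libDirForModule("1.1.0", dirs)
	fmt.Println(dir)
}
```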
And then the other use case that I'm a lot more familiar with is the NVIDIA use case, where we have these same issues: you need to mount device nodes; you might need to mount userland libraries (again, the same ABI compatibility problem, so you want to mount them at runtime rather than ship them inside the container); you need to mount UNIX sockets inside the container; you need to update proc and sys entries and perform ABI checks. I've linked to a document that describes in more detail what NVIDIA does, if this is something that you want to look into. And we use CRI-O to inject those OCI hooks, or Docker's default runtime with what we call a runtime shim, for Kubernetes; in Mesos's case we have the in-tree plugin I mentioned. Taking a step back: we discussed this state a bit at KubeCon with some of the containerd and podman maintainers. The idea that we came up with is to have a spec that is based on the CNI model: just a JSON file that you drop on the machine and that the container runtime reads. It describes what devices are available on the machine that you want to expose; think of a JSON file that just says, hey, I have GPU zero and GPU one, or Intel FPGA one and two. It also describes the operations to perform to make the device available to the container, along with some examples of how the runtime or the CLI should implement this. Talking it through with some of the maintainers of containerd and podman, as well as SIG Node maintainers, we've explicitly decided that resource management is something that should not be addressed at the runtime level, and it's not addressed today; it's something that should be addressed at the orchestration level. So it's kept out of the spec. An example here (I don't know if this is big enough for everyone) would be that you have a vendor.json file in this standard path, /etc/cdi. It describes the CDI version.
It describes who you are, so vendor.com/device, and then it describes the devices. Here there's a single device called my-device, and it describes the operations, which would be: when you do runtime run --device vendor.com/my-device, what you do to the spec is merge these two device nodes and add them to the container spec. Here I've given examples of what CLI you might want. There's a verbose example where you would do runtime run with the fully qualified name, vendor.com/device=my-device. But you might want to be able to refer to this just through --device my-device, if there's only one vendor or one device with that specific name. And maybe we can think about a special case here where you would have this "all" string that just says: expose all devices from vendor.com. The containerSpec field really specifies the operations to be performed. And if you've noticed, this containerSpec field is device-specific; here it maps to my-device. There's another field with the same name, containerSpec, that sits a level higher: not at the level of the CDI device fields, but at the level of the container in general. If any of these CDI devices are mounted inside the container, you would also merge this field. This would be the case, for example, if you had a control device: instead of adding, for each device, a containerSpec field that says mount my-device-control, you might want to mount that control device only once inside the container. So that would be this example here, where it is just an operation to be performed if any of the devices are requested for a container. Now, on the kinds of operations you can perform: hooks would be an example here. So let's say you want to expose .so files inside the container.
The next problem you are faced with is that you either need to update the LD_PRELOAD environment variable to point to these .so files (and this can be breaking in some cases, because you want to make sure that you're not overriding the user's LD_PRELOAD libraries), or you just run ldconfig before the container starts. And the last one is that you might want to have runtime-specific operations, but these are more details, or at least that one is more of a detail. The general gist of the spec is really to just tell the runtime: I have devices that you can use, and here's how you can use them. And then, I probably forgot to finish that slide, so I won't go over it, but we've talked about how a runtime might use this, and the idea is that at the end of the day, what we want the orchestrator to be doing, instead of reimplementing its own device plugin logic, is to call runtime run --device my-device, rather than runtime run --device /dev/nvidia0 -v /lib:/lib, etc. That's pretty much it. Happy to take any questions. Yeah, a question: so, have you actually worked with some of the runtime projects, and is there some sort of consensus about, you know, coming up with a common interface for devices? The two main runtimes I was talking to were containerd and podman, and this was in the context of KubeCon 2019. Generally, on principle, they agreed that there needed to be some kind of way to do third-party device support, and that the current state is a bit of a mess. I think the podman maintainers were kind of interested in the idea, and they said: just send PRs. I'm focusing right now on just getting the OCI hooks in first.
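The LD_PRELOAD caveat mentioned above (append to the user's value, never clobber it) is easy to get wrong, so here it is as a tiny Go sketch. The function name is mine, and real injectors may prefer the ldconfig route instead, as noted in the talk.

```go
package main

import (
	"fmt"
	"strings"
)

// mergeLDPreload appends the vendor's injected libraries to whatever the
// user already had in LD_PRELOAD, instead of overriding it, which is the
// breakage mode described above. LD_PRELOAD entries may be space- or
// colon-separated; spaces are used here for simplicity.
func mergeLDPreload(user string, injected []string) string {
	parts := []string{}
	if user != "" {
		parts = append(parts, user)
	}
	parts = append(parts, injected...)
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(mergeLDPreload("libuser.so", []string{"libvendor-ml.so"}))
	fmt.Println(mergeLDPreload("", []string{"libvendor-ml.so"}))
}
```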
And I wanted to make sure that there's a bit broader consensus than just having a one-off, single-vendor specification and plugin. Question: how would this work with something like Kubernetes? Would you specify in your workload the device that you want to use in the pod, or would you still have a concept of a device plugin? You'd still have a device plugin; it's just that this device plugin would not be doing operations that are runtime-specific. So, it looks like I forgot to finish this slide, but the idea here is: the goal is not really to have your runtime expose to the orchestrator what devices are on the host. The goal would be to have your orchestrator still have a device plugin mechanism. Right now, the device plugin model in Kubernetes really has two responsibilities; actually, three. The first one is that it talks to the orchestrator, so to the kubelet here, and says: I have three nvidia.com/gpu on this machine. And the kubelet can now advertise in its status field, in its capacity and allocatable resource fields: hey, I have three nvidia.com/gpu on this machine. The next one is that, currently, when a pod lands on a node, the kubelet will select which devices it wants to expose to the container, then it calls the device plugin and says: can you tell me what operations need to happen for me to expose this device to this container? The device plugin might respond with environment variables, device nodes that need to be mounted, or libraries and files that need to be mounted inside the container. The kubelet then talks to the runtime and says: hey, create this container, and here's the CRI spec; I've already merged the device nodes, the environment variables, and the files. And the last responsibility of the K8s device plugin is just monitoring the devices for health issues.
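The three responsibilities just listed can be modeled as a small interface. This is a simplified Go sketch for illustration, not the real Kubernetes gRPC device plugin API; CDI's argument is that the output of Allocate (the runtime-specific half) should move into the runtime, leaving advertisement and health monitoring with the plugin.

```go
package main

import "fmt"

// DevicePlugin models the three responsibilities described above.
type DevicePlugin interface {
	ListDevices() []string           // 1. advertise: "I have 3 nvidia.com/gpu"
	Allocate(device string) []string // 2. runtime-specific ops: env vars, device nodes, mounts
	Healthy(device string) bool      // 3. health monitoring
}

// fakePlugin is a hypothetical vendor implementation for illustration.
type fakePlugin struct{}

func (fakePlugin) ListDevices() []string { return []string{"gpu0", "gpu1", "gpu2"} }
func (fakePlugin) Allocate(d string) []string {
	return []string{"mount:/dev/" + d, "env:VENDOR_VISIBLE_DEVICES=" + d}
}
func (fakePlugin) Healthy(d string) bool { return true }

func main() {
	var p DevicePlugin = fakePlugin{}
	fmt.Println(len(p.ListDevices()), p.Allocate("gpu0")[0], p.Healthy("gpu0"))
}
```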
So what I'm suggesting here is that you take away this middle step, this part where you would go through the device plugin and ask: what do I need to do? Instead, you would go directly to the runtime and pass down, in the CRI spec: I need you to expose, not /dev/nvidia0, but nvidia0, or my-nvidia-gpu, and then the runtime would just read the spec. Question: alright, so have you looked at RuntimeClass? Maybe specifying some of these parameters in a runtime class, where a user would say: oh, this pod needs to use this runtime class that has a certain GPU or a certain device. Would that be something that would work? That was one of the things we thought about and were discussing. I think generally we were more looking into reusing the general CRI spec. Taking a step back, the problem is that runtime classes are really runtime-specific, and at the end of the day, we want to bring the plugins for devices down to the runtime level, so that orchestrators can reuse them. That's the real problem with the device plugin: right now, this whole operation of how do I provide the same container experience, whether you're on Nomad, Kubernetes, Mesos, or even just Kubernetes using Docker versus containerd, is not a solved problem. And that's the big problem here: if I spawn a container using Kubernetes and the runtime is Docker, as a plugin author I have two different stacks, right? I can either use my runtime plugin that is in Docker today, or I can use my device plugin. We've actually found hacks that work around this via the default runtime for Kubernetes, but that's not really the path we'd like to be taking: asking a user to set a default runtime.
Even using runtime classes is not really what we want: as NVIDIA, as a third-party plugin vendor, I don't want users to have to use a custom runtime. I want to hook into the runtime and just tell it: here's what you need to do for my devices to be available. Makes sense. Anybody else have any other questions? Am I the only one speaking? I have a question about, you kind of touched on it, like, you don't want people reprogramming FPGAs. And the same kind of thing with NVIDIA: nvidia-smi allows you to control your GPU, right, change things, manage it. How do you handle that, like what privilege level, what access level people have to these devices? So it really depends on the use case. When you're training GPU models, most of the time you're going to be requesting one, two, three, eight GPUs, or multi-node models. In that context, if you have the full GPU, the expectation is that you're going to have access to reconfigure it. More recently, we announced MIG; that was last week. So you'll see that on the next GPUs, where you can partition your GPUs into smaller GPUs. In that context, what you have access to depends on what devices are mounted inside your container. So basically, the idea is: if you have access to a full GPU, the security model is that you own the GPU and you should be able to reconfigure it. If you have access to only a smaller partition of the GPU, then we don't assume that you have the permissions to reconfigure it. And is that part of what needs to be standardized here, that whole security model, no matter whether you have Xilinx or you have NVIDIA or you have, you know, Intel?
I think, here, right now, the expectation is really (I don't know if it's standardized, but the idea that we're trying to get to is) that we want to be able to remove permissions from the container, so the user can't do those things, if that makes sense. Right now, at least, the problem for Intel is that today they need to either do some hacks around the runtime, or delegate the permission to reconfigure the FPGA to the user. And the request here, in terms of security, is to not delegate the permissions to the users, but to have some kind of standard hook in the runtime for the plugin to be able to do these operations. So in a way, that specification will allow us to have a more robust security model. Okay, so instead of just on or off, you're privileged or you're not, they want something in addition to that. Okay. Basically, the idea is that, yeah, we don't want to be providing all the privileges to the users. We want to be able to do that as part of the hook lifecycle, and the way they would do that is, for example, by specifying a create-container or a create-runtime hook in that spec. And when that hook gets invoked, it would reconfigure the FPGA for that container. And in terms of security, would that be exposed to any container, or would there be some sort of permissions, you know, with how the container is restricted, right? So I think really it's going to be up to each device: what is the security model each device has? This is definitely a tricky question, and it's hard for me to speak for people like Intel or others. But at least from NVIDIA's standpoint, definitely when we're talking about security, and especially given these sub-GPUs, being able to expose the right device nodes and the right procfs entries will allow us to reduce the permissions that a user has.
If that makes sense. Got it. Yeah. And the security model will have to kind of be worked out, right? Yeah, maybe specified, right. Hi. So we have the Kubernetes API at the top, and then the kubelet, and then the K8s device plugin, and then we have the CDI plugin within the runtime. Yep. So that's the layering. So what would be the division of responsibilities between each of the layers? For example, how would I ask for, or share, let's say ask for a Kubernetes container: okay, I need to run a workload. And then how does that go down to the bottom level? That's a great question; I think that was the purpose of that slide that I never finished. I'm going to take out the whole shared-devices part, because shared devices is really, again, a device-specific problem, and it's up to how the device implements it. There is some question of how you expose that in resource management, but let's take away that topic because it's kind of difficult to talk about, and let's just focus on: I want a single device, a single GPU. So the way it works today in Kubernetes is you really just express it in your specification by requesting that specific resource. You go into your pod spec and you add a resource limit: nvidia.com/gpu, or, say, intel.com/fpga, or solarflare.io/nic. And I'm going to assume that your scheduler already found the node. But even before that, you're going to want to have nodes advertise that there are resources on them. This is definitely where the device plugin comes into play. You want to be able to deploy or dispatch a container on all your nodes to figure out, for example through PCI, is there a GPU or a NIC that I support on that machine? And this is the point where the device plugin starts telling the kubelet: hey, I've seen a GPU, I've seen a device, here you go.
So that's the first step: the device plugin calling the kubelet. Maybe that device plugin installs a driver, maybe that device plugin installs that hook. And maybe that's also the point where this device plugin has a mount on /etc/cdi and just drops the JSON files there: we detect the GPUs (lspci, oh, I see NVIDIA GPUs), install the driver with what we call the driver container, and then drop the spec at that specific path. So this would be phase one, the bootstrap phase. Phase two: a user comes in and submits this kind of pod spec. It gets submitted to the API server, the scheduler picks it up, figures out which node it should land on, and assigns the node. The kubelet pulls the spec from the API server, and this is where the kubelet basically goes into its sync loop, where it figures out which devices the pod should be assigned. You might be familiar with the topology manager and other components that are out there; this is where it figures out which device should be assigned, because it talks to things like the topology manager. And at that point, it calls into the runtime, so you have something more like this next flow going on, with the API server here. The device plugin doesn't come into play in that loop; the kubelet basically just talks to the runtime through the CRI. And this is where we would be passing down this --device. It's kind of nice here: you're not going to be conflicting with the existing device model. You need to write a bit more formatting support, but basically the runtime gets something along the lines of docker run --device nvidia0, or my-nvidia-device, and then the container image. And that's where the runtime reads the /etc/cdi file and figures out, maybe it's just a static specification, just some devices that need to be mounted. So I'm going to take Docker here for the runtime.
And in NVIDIA's case, there are definitely some devices that you need to mount, there are definitely some files that you need to mount, but there's also a hook that you need to invoke. When Docker then calls into containerd, and ultimately into runc, the spec that is passed down contains not only these device mounts and file mounts, but also the OCI hooks. So at that point, runc has the spec and all the information to call the actual hooks, whether inside the container or outside the container. So here, maybe it's going to call ldconfig inside the container before it starts. Question: so, CUDA is like five gigabytes or something, right? I think we have multiple containers; the CUDA container is 2.5 gigabytes, but the runtime container is fairly small, well, comparatively small, upwards of 200 to 300 megs. TensorFlow is kind of bigger. So does that end up in the container for each of these, or are there multiple copies of that all over the place, or is there one place where that lives? So, there's a really nice document that describes a bit of what we do at runtime. The answer is: parts of it end up in the container, parts of it end up on the host. Let me see, I think we have a general schema here. So do these containers end up being incredibly large, or is that avoided somehow? These containers do end up being fairly large, but the problem is not really CUDA, it's more how the ecosystem has been set up, and the dependencies that are there. TensorFlow images are usually upwards of two gigs because of all the tools that are inside. I think that's maybe adjacent to the topic. Yeah, a bit off topic, but I was just curious. Any questions on this? Okay, we can skip that. Should we go back to, one more thing I need to poke at, you know, so we are filling in, yeah, the bottom half there, right: the CDI spec itself.
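The flow just described, where the CDI entry's device mounts, file mounts, and hooks end up merged into the spec handed to runc, is really just a pure edit of the OCI spec. Here is that merge as a sketch; the field names are a toy stand-in, not the real OCI runtime spec structs.

```go
package main

import "fmt"

// ociSpec is a toy stand-in for the OCI runtime spec: just the three
// lists a CDI device entry would edit.
type ociSpec struct {
	Devices []string // device nodes to create in the container
	Mounts  []string // host files (e.g. driver .so files) to bind-mount
	Hooks   []string // OCI hooks to invoke (e.g. ldconfig before start)
}

// applyCDIDevice merges the edits requested by one CDI device entry into
// the spec. It is a pure transformation, with no daemons and no RPC,
// which is the design point made later in the discussion.
func applyCDIDevice(spec ociSpec, nodes, mounts, hooks []string) ociSpec {
	spec.Devices = append(spec.Devices, nodes...)
	spec.Mounts = append(spec.Mounts, mounts...)
	spec.Hooks = append(spec.Hooks, hooks...)
	return spec
}

func main() {
	base := ociSpec{Mounts: []string{"/etc/hosts"}}
	merged := applyCDIDevice(base,
		[]string{"/dev/nvidia0", "/dev/nvidiactl"},
		[]string{"/usr/lib/libcuda.so"},
		[]string{"ldconfig"})
	fmt.Println(len(merged.Devices), len(merged.Mounts), len(merged.Hooks))
}
```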
On the implementation: do you have any guidance on the implementation, or is it just, what are the hooks, what is the content of the JSON, and is that basically it, or is there more?

It's basically that. I have the broader spec here; right now this is mostly a draft, and it's definitely just mostly a JSON file. You have a bigger example here, and I have a spec type here that just describes the different fields. I'm not going to go over each of them, but in terms of implementation, I think it really is going to depend mostly on the runtime. We could expose some kind of library in that repository that just processes the JSON, and given that the vast majority of runtimes, like runc, are written in Go, the runtimes could import it. That's definitely a possibility: just making it easier for people, generating the spec in Go, generating the unmarshaling code. But the merging part is probably going to need to happen at the container runtime level.

The thing that's not clear to me is basically, does this need to happen in runc, or does this need to happen in containerd?

Right. So there is no socket file, there is nothing that needs to be started beforehand and waited on. Nothing like that here: we want to make it plain, simple, plain dumb to invoke. And we especially don't want daemons, we especially don't want RPC connections.

I think that was one of the concerns: we don't want to add another gRPC hop. So it's mostly, here's a bag of parameters, pass it down, and these are the different things that you need to call in a sequence. You call this one at a specific time, you call the other one later, something like that, right? It's a transformation.

Well, it's not even a transformation. It's just, here are the edits that you have to make to the OCI spec.
And we just leveraged hooks that are already there, but not exposed to people when you are at the Docker level or, sorry, at the Kubernetes level.

It looks good right now. Thank you. So the question is, and I brought up the idea of starting a working group for this, are you interested in doing this?

Definitely interested in doing this. Making that space better is something that's super beneficial for us, because of how difficult it is today to deploy stuff for our users.

Yeah. Okay. So I have a question for Amye. Do we have any standards for starting a working group? I know some of the SIGs have already started some working groups.

Yeah, there isn't really a policy for that. If you want to form a working group, what's generally happened is that people just form a working group.

Yeah. So, do what you need, basically. Cool. So I think we can get started with that. I would just go back and look at some of the other SIGs and how they've done it, and get started. I can probably put together an invite with some of the other maintainers. The main points of contact have been Renaud on the Podman side and Mike Brown on the containerd side. There's been some activity on how you expose OCI hooks from containerd as well. And I think generally other vendors like Intel would be super interested.

Yeah, so ideally we want to gather as many folks from the different projects as possible, so there's consensus about the specification.

Yep. Cool. Anybody else have any other questions? We have another item on the agenda, but I don't think we're going to get to it, because it's almost nine a.m. Pacific time. But I think we can fill in those seven minutes with any other questions. I'll try to make sure that this item is slotted in for next time.
So my guess is that if you get started with a working group, you could actually have separate meetings in the working group, and then some of the folks that are interested in working on the specification could discuss more of the details, right?

Yeah, definitely.

Well, if there are no other questions, thank you for hearing me.

Yeah, thank you. It was great and very informative. So Kevin, do you want to give a brief introduction of what you wanted to talk about for KubeEdge?

Yeah, thank you. I think maybe I cannot go through the whole slide deck, but basically I just want to briefly introduce where we are and what we have done, and I also want to start the incubation process.

Yeah, so for the incubation process, there's a presentation here in the SIG, and there's a review template, and then it gets reviewed later by the TOC, and the TOC finds a person who drives the due diligence. If everything turns out to be okay, then it goes for a vote in the TOC for incubation.

Okay, so we're going to go through the process, right? Maybe Amye can fill in a little more detail about the incubation process if she has any.

No, you had it pretty much straight. And welcome, Kevin; it looks like we'll probably get to you next time.

Okay. So actually I have a question. The due diligence, that must be started by some TOC member, right?

Okay. The presentation can happen ahead of time; you can start off with the presentation and it's fine. Go ahead.

Yeah, I think that's the official process. But if you need any help from the SIG, or help gathering some information, the SIG is also happy to help, right? So typically it's the TOC that, how to say, asks the companies or organizations using the project in production some questions during the process, and then it comes back with some feedback.
And it also gets documented in the official document. Yeah, Alena is here.

I would be happy to run the due diligence for KubeEdge.

Thank you. I also want to know: is the presentation required to be finished before the other things start, or can we go in parallel?

You can work in parallel. That should be fine.

Okay, thank you.

Okay, so I think we're at the top of the hour. We have two minutes. Anybody else want to bring up any topic or anything?

Actually, sorry, can I quickly ask a quick question here? One of the problems I have right now is that the spec is currently posted under my GitHub account. Is there a way to have some kind of, I want to say, more official account or repository here? Like a CNCF type of thing; maybe it's an incubator, I'm not super familiar with the process. Just having some kind of common repository that's not my personal one.

I think if it's for the project, you can start an organization on GitHub and actually put it there. I'm not really sure if GitHub charges you for creating an organization, but if not, then you can put it in there, and it's public anyway; it's open source. So yeah, I see what you're saying: you're using your personal account for this. You can put it under an organization, and then once the working group gets created, everybody can just work out of that organization. I don't think there's any specific process for that.

Definitely. Yeah. Okay, thank you very much.

Thank you. All right, so thank you everyone for attending. That's all we have for today. Stay safe, and we'll see you next time. Have a great one. Bye, everybody.

Have a good one. Thank you. Bye.