Hey, good morning everyone. Morning. Morning.

All right, let's wait another minute and see if anybody else shows up. I'll post the meeting notes in the chat; you can add yourself to the attendance, and I think we can start with a basic stand-up. I'm Ricardo, one of the co-chairs, and I don't have any updates. I'll put up an agenda: I have a roadmap discussion for the SIG, then there are a few other items, and a Kata Containers introduction. We'll just go around the list of what I see on the attendance. Philippe?

Yes, good morning. I'm Philippe Robin from Arm, working as part of the infrastructure group. No particular update today.

Is that Dims? Yes, hi. I'm just lurking, thank you Ricardo. I don't have any update.

Okay, cool. Diane? Sorry, I wasn't sure if you said Diane or not. I don't have updates. I'm still understanding how all this works, so I'm pretty much in observing mode right now.

Okay, cool. Eric? No update on my end, really. I was interested in listening to the roadmap, and in seeing where I can help, maybe by doing some of those analyses on some of the existing projects.

Okay, great. Klaus? Yeah, I'm here. No special updates, but we got the good news that we got the three sponsors for Volcano for sandbox. Right, yes, that's great news. Thank you all for your support.

You're welcome. And who else do we have? Ray? Hello, it's actually my first meeting. I was on the last release team, for 1.18, and I'm just lurking as well.

Okay, great. And Tao? It's my second meeting here, and this time I'm giving the Kata Containers introduction.

Awesome, okay. I think that's about it; am I missing anyone? All right, Roger? Hi, I'm just joining; I think you put the Virtual Kubelet item on the agenda last time.
Just joined to follow. Okay, cool. Yeah, we had some discussion about the Virtual Kubelet scheduled for today, but it was removed from the agenda, so I'm not sure; maybe the team wanted to go back and look at some of the requirements that were being asked for incubation. Sure. Yeah, I think we were given updates, so I just joined to see how the SIG Runtime and the roadmap discussion go. I just wanted to follow. Cool.

And yeah, I think that's everyone. Okay, cool. So, yes, we wanted to talk about the roadmap. Sorry, go ahead. Okay, so the roadmap; maybe I'll share my screen here. One more thing first: we're looking for somebody to take notes in these meetings. Does anyone want to volunteer? I know it's kind of hard to find volunteers, but ask around if you know anybody who's interested. Somebody needs to run the meeting, but at the same time somebody needs to take notes, and since I've been running the meetings, I can't take notes at the same time. You all have access to the notes right now and everyone can write to them, so if you want to write something about the meeting, feel free to do it.

Oh, thanks for the motorbike comment. Yeah, that's my background. Because there are so many virtual events, people are doing these virtual backgrounds, so I was excited to try one of them. All right. Yeah, we're impressed by the motorbike in the background. Let me share my screen. Okay, cool.
So yeah, Quinton and I got together maybe three weeks ago and talked about the roadmap for this SIG. If you know other people who are interested and want to participate, feel free to pass the word; we want to get more participation. Basically, any projects related to runtimes, any technologies people want to discuss that are in the scope of the SIG. One of the things... and there's Quinton. Hey Quinton, do you want to say something? Hi guys, sorry, I just joined late; do carry on.

Okay, cool. So we looked at the health of the projects, the regular check-ins that we have to do for the health of the projects. We currently have all these projects at the different stages in the CNCF. Of course Kubernetes and containerd; there are a couple incubating, CRI-O and Harbor. There's also sandbox: we have KubeVirt, KubeEdge, and the Virtual Kubelet. I'm just going to read through these; if you have anything to say about them, or any comments, just let me know. Then we have Dragonfly, which I think is voting to be in incubation right now, and there's Buildpacks.

This SIG also wants to identify any gaps. Some of these projects don't cover all of the cloud native landscape, so we want to bring in some insights, and possibly projects that want to be donated into the foundation. Types of workloads like AI and big data pipelines. Then we have the sandboxed or special container runtimes like Firecracker, Kata Containers, and gVisor. There are also maybe some bare metal types of tools, like deployment of bare metal machines, or different Linux kernels for running workloads. And then things related to multi-tenancy, where people want to run different tenants on the same machines, for example; then you want to have some sort of isolation
so there are no security compromises, and one tenant cannot see the other, those types of things. And then other types of gaps, maybe gaps that people haven't thought about that much; for example, the WebAssembly sandboxes. That's a pretty early project, or pretty early spec. I've been following some of it, but that may also be in scope, where people may want to compile certain code into WebAssembly and then run that WebAssembly on some Kubernetes nodes or some bare metal instances. So that's also within the scope.

And we want to continue educating the community, so people know about all these different new cutting-edge technologies. We would like to invite people to present; one example is Kata Containers giving an overview today. Then some of the other projects: I'm thinking about also OCI, the Open Container Initiative, where some folks might want to come in. I did actually check in with some of the gVisor folks at Google and got some interest, and I've been a contributor to Kata Containers, so we have Tao today, who'll talk about Kata Containers. I have yet to reach out to the Firecracker community, so hopefully somebody from there will come in, present, and give some updates.

And obviously we want to do some due diligence on some of these projects. Harbor is currently in that process, being reviewed by several SIGs. We have a document that compiles all the feedback from all the SIGs and provides that as a recommendation to the TOC, so they can decide whether they want to graduate it; Harbor is looking at graduating. And then Quay is also coming in, trying to go for incubation, so there will be some due diligence there too.

There's also interaction with other SIGs and other groups. We have the Kubernetes SIG Node, and there's SIG App Delivery.
Some of these projects have operators, and sometimes those fall within the scope of SIG App Delivery. For example, there are serverless types of specifications that may also fall within SIG App Delivery. There's also conversation about maybe creating a SIG Serverless group, but that hasn't happened yet.

And yeah, like I mentioned at the beginning of the meeting, we want to identify folks who can participate in terms of writing notes for these meetings, a scribe, and then we're looking for a couple of tech leads; there are two spots for tech leads, for people interested in the technologies, in learning more about some of these projects, and in helping with the due diligence. We need help in that aspect. So yeah, that's what I have for the roadmap. Quinton and I came up with this list. Quinton, do you have any other thoughts about these, anything you want to talk about?

No, I just really wanted to open it up. The idea was to see the discussion and to point out that we have to get going as a SIG; we have a bunch of responsibilities. This is sort of my first pass at some of the things we need to do, and I would like to invite other people to contribute any other ideas they have, and then we can start putting together a sort of plan of action: who's going to do what, when, etc. That would be the next step.

There's one other item that I should probably add to the list here, which is that I realized Brian Grant, who was one of our TOC liaisons, is no longer on the TOC. So we need a new TOC liaison. I actually took the liberty of speaking to one of the new TOC members, who has provisionally expressed interest in being that new liaison. She just wants to think about it some more, but I would imagine we'll have an answer fairly soon.

Okay, I think this would be Alena, right? Yeah.
Yeah, cool. Yes, we need a new TOC liaison. And sorry, I should have actually spoken to the SIG before I did that; I just happened to speak to Alena about other things first and mentioned it to her, and she expressed interest. So apologies for that. If the SIG doesn't want her as a TOC liaison, I'm sure we could tell her, though I'm hoping that's not the case.

Yeah, okay. We'll figure it out. So, do we need two liaisons? Some SIGs actually have one liaison, so I think it might be better to have two, just for redundancy. Yes, exactly. Okay, cool. This is one of the biggest SIGs, and Brendan, our other liaison, is one of the busier TOC members. So for both reasons we could have two, and Alena seems to be very interested in getting pretty hands-on involved; this overlaps with a lot of the stuff she does in her work. So I think we can expect to get a fair amount of her time. Great.

So, like Quinton said, I'd like to open it up to any comments or questions from people on the call, any thoughts. I think some of you are already involved in some of these projects or in technologies related to them.

It's Philippe here.
I'll make a comment. For us on the Arm side, we have some people actually tracking or looking into some of these projects and contributing, and I think it would be good to have tracking of whether these projects actually work across architectures and are not tied to only a particular one, or, if they are, whether there is a plan or some thinking around how they can be deployed on different platforms.

Yeah, that sounds very reasonable. Off the top of my head, one of the problems with some of these requests is that there are these chicken-and-egg situations where people don't necessarily have the resources, the people available, to do the work required to make these things run on other platforms. But if Arm is willing to do that work inside the projects, to get them to that point, then I think we can certainly make it visible through the SIG which projects are and are not multi-architecture, those kinds of things. Does that sound fair?

That's fair, and I think part of the effort here is to at least identify that, as a start. If we can pick it up for some of the projects, we will; if we cannot, or if some others are interested and can contribute, I think it's good to think about it. When it happens will depend on who wants to use it and on the resources available, on whether the community wants to do it. But if it's not documented, or it's kind of hidden in the code somewhere, it's much harder to find.

Yeah. We could possibly do things like ask the projects questions in their health checks: whether they run on other architectures; if not, whether they are planning to; and if they haven't got any concrete plans to do so, whether they have any estimates of the amount of effort required.
I would imagine the amount of effort required to make these things work is going to be wildly different for different projects, depending on what language they're written in, etc. So yeah, we could certainly ask them these questions and expose that to the community, so people are at least aware of what the status is. Does that sound reasonable?

I fully agree, and it's not fair to just ask the projects to do all of that if there's no real need for them at that time. The answer to those questions may be "we don't know": we don't know whether our project runs on x, y, or z, and we have no plans to find out either, because we don't have a need for it. That's a reasonable answer, I think, and that's all okay. The point is to make people think about it when they look at the SIG; it's a good flag to have. Yeah.

And also, from their own perspective, you could bring some use cases, right? Why would some people want this particular project to support Arm? Yeah, I fully agree. A lot of the people on our team working on some of these are based in Shanghai, so the timing for joining these calls is a bit tricky, but definitely, yes, we can help support that. Excellent.

Right, any other comments from any other attendees? So yeah, if you feel like something's missing here, feel free to contribute. We cover quite a bit of different things, but we don't necessarily have everything. Cool.

So, just one final suggestion on that. What I would propose is that we set a timeline: we leave this open for comments and finalization for the next two weeks, and in the meeting in two weeks' time
we prioritize these things and try to put names to some of these items, so that we can get going on them. So if anyone on the call is able to find people in their companies who are willing to do some of this work, two weeks' time would be a great opportunity to bring the names along, or bring the people along, and we can figure out what we'd like to tackle first and what we can leave until later, for example. Yeah, that makes sense to me.

All right. So we have the next item on the agenda: Volcano received three sponsors for sandbox. That's fantastic; it means it will be in the sandbox. Klaus, do you want to make any comments about that? Oh, no special comments. I think that's good news for us. Yeah, that was one of the things I spoke to Alena about, to be prepared to do that, and she agreed to sponsor it; that's how that came about.

Yeah, I think this fills a gap in Kubernetes, because, as Quinton and I were discussing, having just the bare Jobs API from Kubernetes is pretty raw. Volcano fills in that gap where you want a more complex type of batch workload, for example for data pipelines and big data applications.

Yeah. I guess I have a question about the role of this group. Should we be seeking to fill those gaps actively, or does the participation come from the other side, where typically they are asking to be added? Are you talking about projects now, Diane? Yeah, projects, new projects. Do you go out and recruit new projects that look like good candidates, or is it typically done the other way around?

That's a very profound question, and maybe more profound than you realized when you were asking it. We actually need to be doing both.
We need to be proactively identifying gaps in the CNCF portfolio of projects that we think need to be filled, and actively identifying projects to fill those gaps. In addition to that, there will inevitably be projects that come to us and want to be part of the CNCF; that's been the vast majority of the more recent projects. So I think in an ideal world there would be a healthy flow of both.

Okay. Yeah, so I'm just wondering: for the projects that I see within Red Hat, would it be reasonable to try to recruit them? Basically, is that reasonable? Yes, definitely, to the extent that they fit into the SIG; and of course if they fit into other SIGs, you could do that too. Red Hat has brought lots of projects to the CNCF already, and I'm sure will continue to do so.

Okay. Yeah, and if you see any technology that may not be part of the CNCF... for example, I've reached out to some of the WebAssembly people; that's one area where I see a gap. If you see any other gap that's similar, or related to runtimes, feel free to talk to other communities and see if there's some project that could be part of the CNCF.

Okay, sounds good, especially in the big data and AI/ML space; that's where I'll take a look. Yeah, there may be some other things related to how you run AI, maybe frameworks for machine learning or deep learning, those types of things. I don't think we have that type of thing yet. But then we also have to check that they don't overlap with other things in the Linux Foundation, because the Linux Foundation also has this other group, LF AI, the AI foundation or something like that.

Okay, yeah. I think my observation about that industry is this.
I mean, it's obviously super important and super interesting, etc., but they tend to be ML people, ML specialists, often, and a lot of the work happens in the TensorFlow groups and the PyTorch groups, and there are various data groups as well, standardizing data interchange formats and all that kind of stuff. I think that we should not venture into those spaces, because they have their own foundations, in general. It's not just because the Linux Foundation has such a thing; I think in general it doesn't fit with the CNCF. The CNCF is more about the actual infrastructure to enable those workloads.

So Volcano is a great example: Volcano is not actually a framework for building AI things; it's really there to facilitate those kinds of workloads on Kubernetes. And those, I think, are the kinds of projects that we will want to be looking at. There are many others in that space, I'm sure, that we could go and ferret out, but some of them are not as obvious as one might think.

Like Kubeflow, for instance; I wonder if they're already part of some other group. Kubeflow facilitates running frameworks like TensorFlow and PyTorch. Yes, so I think that would be a great example of a good project. I think there's a history there. I'm not intimately familiar with it, but Kubeflow is not part of the CNCF at the moment, and it would be good to get a clear answer as to why that is the case. This SIG hasn't been around the whole time, but I know there is some history there, and I agree it would be great to figure out what's going on and whether it makes sense to invite them to be part of the CNCF.

Yeah. So I think one area that is interesting is the MLOps area. There's some... yeah, it's how you run these workloads, right?
Yep. Yeah, and they have very different properties, in some cases, from traditional batch workloads. For example, many of these things run for many weeks on end. They're typically very sensitive to node failures, unless you have very elaborate schemes to prevent that. So if one node fails during a four-week run, the whole run basically gets corrupted and you have to rerun the whole thing, which is bad. And they run on expensive hardware, so it's doubly bad, etc.

That's my life; that's what I do. Well, you probably know far more about it than I do then, Diane, but that's my understanding of it. I benchmark those sorts of models, yeah. And yes, they do run on Kubernetes and OpenShift. And there are a lot of mid-sized applications and models that don't run quite that long. But it's complicated; even Kubeflow is such a mixed bag of components. It's complicated, yeah.

So, Diane, if you would want to spearhead a little working group to go and dive into that area, with Klaus and whoever else is interested, and perhaps think about either a white paper or some other form of education where we can teach the world how ML stuff runs on Kubernetes, where the challenges are, and what we're doing to fill them, I think that would be super useful, because there are a lot of questions in that space.

I'd definitely be interested in doing that in the future. I'm writing two white papers right now, and I'm knee-deep in a benchmark right now. But I would love to, sure. Maybe the work you do in those white papers will set you up well to produce one for this purpose, which may overlap with those; I'm guessing, I don't know. Cool. Great, yeah, fantastic.

All right, so the next item on the agenda.
It's the Kata Containers introduction. Tao, how do you want to handle this? Do you have anything to share? Let me share my screen. Let me stop... okay. Can you see it? Yep. Okay.

Hello everyone, my name is Tao Peng and I work for Ant Financial, the Chinese internet payment company. I've been working on Kata Containers since the very beginning, and I'm one of the main maintainers and main contributors to it. I was invited by Ricardo and Klaus to give an introduction to the SIG on this runtime. So I'm here, and it's good to meet everyone and see how SIG Runtime works, and I hope we can have some collaboration in the future.

So let me begin. We started by observing the traditional containers as deployed by users today, or several years ago. They are mostly isolated by namespaces and cgroups, and they share the same kernel on the host. That's the situation before Kata Containers, and we saw something to do here: we introduced a virtual machine layer, a middle layer between the containers and the hardware and host kernel. With this we get better resource isolation and better security on the host. Basically, if you run Kata Containers, you can let untrusted users run workloads on your machine, and you do not have to care whether they do bad things to each other or to your host.

The main idea is that the virtual machine interface is an industry-proven interface that has been used in the IaaS world for many years.
So we just inherit that, and with this we combine the best of the two worlds: we have the speed of containers and also the security of a virtual machine.

Could you just explain: this diagram you have here is basically just virtual machines; that's how virtual machines work. So how is this different from just a virtual machine?

If you start a full virtual machine, you have a full guest kernel and a full guest operating system. Here, we use a very lightweight virtual machine, and we also customize the guest kernel and the guest operating system; everything is reduced down to the very minimum needed to just support running a container. For example, if you launch a virtual machine on AWS, it takes at least several minutes, or it did several years ago when we started this. With Kata Containers you can have a running container inside a virtual machine in one or two seconds.

Okay, thank you.

And this is the architecture we currently have. There are actually two here: the one above is the architecture we used to have until last year. For every container inside the sandbox we had a kata-shim, and if you are running containerd, there is a containerd shim as well, so there could be many shims in the system and many interaction layers. After working with the containerd community, we introduced the containerd shim v2 API, so that containerd can just call the API of the kata containerd shim. With that we removed all the interaction layers and merged all these components into just one component per sandbox; it's no longer per container.
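For readers following along, the shim v2 wiring described here usually amounts to a single runtime entry in the containerd configuration. This is a hedged sketch: the section layout below matches older (pre-1.5) containerd CRI plugin configs, section names vary across containerd versions, and the runtime name `kata` is only a conventional choice.

```toml
# /etc/containerd/config.toml (sketch, not a complete config)
[plugins.cri.containerd.runtimes.kata]
  # Shim v2: containerd starts one containerd-shim-kata-v2 per sandbox
  # and talks to it directly over the shim v2 API.
  runtime_type = "io.containerd.kata.v2"
```

With an entry like this in place, one shim process serves the whole sandbox rather than one shim per container.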
For every sandbox we now have just one shim. That's a very good simplification of the Kata Containers architecture, and it removed a main barrier for many users: before that, many users tried Kata Containers and found it too heavy, because there were too many processes to manage on the host, which was a nightmare for sysadmins. After we simplified the architecture, many users started to adopt Kata Containers in their production systems.

Now, the current status. Besides the basic function, running very lightweight virtual-machine-isolated containers, we now support many architectures, and we support different hypervisors: Firecracker, Cloud Hypervisor, and ACRN were all added, I think, last year. QEMU is still the default one because it has the most features, but if you want to run some special workload, or want a different optimization, you can just use a different hypervisor.

We also support different distributions: although the guest kernel is fixed and the guest operating system is minimized, the guest OS can still be built from different distributions, so users can do very easy customization.

On the ecosystem integration side, we support CRI-O, containerd, Docker, and Podman. If your system runs on any of these, you can just install Kata Containers and run it very easily. Also, since Kata Containers is new to the community as a whole, we were the main driver behind two important Kubernetes features. The first one is RuntimeClass. With RuntimeClass you can specify which container runtime you want to use in your pod YAML: you define a RuntimeClass for Kata Containers and add to your pod YAML a line that says, I want to run this pod with Kata. Then you submit it
So and submit it to To to kubernetes, you will be you know medically schedule this The runtime they they pulled to up to a node if Then then can support cata canals Provided that you you have installed cata there, of course And also there's another Future then that's called a port overhead then the main Benefit from the the main purpose of it is to to Count to account resources utilizes utilizations Of the of a port of a port itself instead of just a containers In this one silvered although there are there are there is still some overhead for every port and every container But then he is not accounted so so when kubernetes is trying to make some scheduling decision the There are situations that things can go very bad because there are resources There are resources then kubernetes thinks it's free, but it's actually been used being actively used by some unknown components so right now the port the Every port can define its own Resource overhead is especially the cpu and memory overhead so with it With the port overhead the kubernetes can have have a better view resource overview of the entire Cluster and do better scheduling the seniors so then that's a main Main features we we've been we have put to the kubernetes mainstream and Both of them. I think they are they are they are data now I'm not sure this there's but they Then the master or one point 15 maybe have the port overhead as a class one On runtime class is is before that. I think it's one point 13 or 14 So if you install kubernetes, these features are automatically enabled right now and And this year we are looking to release karta karta containers to door zero And we are planning some important features. 
The first one: although we have been minimizing our resource consumption for every container and every pod, we noticed... I should have mentioned that right now the project is mostly written in Go, and we have identified that the Go runtime is too heavy for some of the components. So we have been simplifying some of the components, and by simplifying I mean rewriting them in Rust. Right now, alongside the Go agent, we have a Rust agent that is being actively tested, and we plan to make it the default agent inside the guest in 2.0. Also, the communication channel is currently using gRPC, but the HTTP/2 layer is not actually necessary for us and we think it is too heavy, so we wrote a Rust ttrpc implementation to replace the Go RPC component.

Another thing we want to do in the 2.0 time frame is image pulling inside the sandbox. Right now, in the architecture, all the images are pulled by the CRI daemons, such as CRI-O and containerd. This has a main drawback: since the CRI daemons are node-wide daemons, they cannot actually enter any user namespace, especially a user's network namespace. An important use case for Kata is the cloud vendors: they want to allow different users to run their containers and pods on the same host, but different users will have different networks, so we have to pull the image inside the user's network namespace. That's why we want to do the image pulling inside the sandbox.
With that, we can also do some tricks with the image format to accelerate the image pulling process. For example, instead of pulling an entire image, we can pull just a very small metadata layer that can construct a rootfs view for the container, while no data is actually pulled, so the container can start almost instantly instead of having to wait for the entire image to be downloaded. That's very useful if you have large images; for example, inside our company we have multi-gigabyte images, and this reduces the container startup time from seven minutes to just several seconds.

Also in the 2.0 plan, we want to improve Kata Containers' observability. We are defining Kata's own events API so that we can integrate it with projects like Prometheus, and we are defining some Kata-specific debug APIs so that users do not have to actually look inside the container to debug their applications.

Another feature we are looking at is improving Kata Containers' I/O stream handling. Right now, every stdio stream is handled from containerd to Kata to the agent inside the guest. We think there are too many layers, so we want to simplify this and make it easier for the control path to upgrade itself.

And another main change we are actually working on is the code repository consolidation. Right now we have different repos for the runtime, for the different shims, for the agent, and so on, so we want to consolidate these repos into just a single repo, so that it is easier for new developers to just git clone and test their local changes.

So that's all for the 2.0 plan. And these are our community channels. We have katacontainers.io, which is our main page.
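Going back to the metadata-first pulling described a moment ago, the idea can be sketched with a toy model. This is purely illustrative Python; the class and method names are invented and bear no relation to Kata's, containerd's, or any registry's actual API.

```python
# Toy model of lazy image pulling: fetch only a small metadata "index"
# up front, and download file contents the first time they are read.

class LazyImage:
    def __init__(self, registry):
        # registry: dict of path -> contents, standing in for remote storage
        self.registry = registry
        self.metadata = None   # file listing, fetched eagerly (small)
        self.cache = {}        # file data, fetched on demand (large)
        self.bytes_pulled = 0

    def pull_metadata(self):
        # Eager phase: only the file listing is transferred, so the
        # container's rootfs "view" exists almost immediately.
        self.metadata = sorted(self.registry)

    def read(self, path):
        # Lazy phase: blob data is fetched only on first access.
        if path not in self.cache:
            self.cache[path] = self.registry[path]
            self.bytes_pulled += len(self.cache[path])
        return self.cache[path]

registry = {"/bin/app": b"x" * 1000, "/etc/config": b"y" * 10}
img = LazyImage(registry)
img.pull_metadata()
assert img.metadata == ["/bin/app", "/etc/config"]  # view ready, nothing pulled
assert img.bytes_pulled == 0
img.read("/etc/config")                             # pulls just that one file
assert img.bytes_pulled == 10
```

A real implementation keys this on content-addressed layers and filesystem metadata rather than a flat path map, but the startup win described above (seconds instead of minutes for multi-gigabyte images) comes from the same shift: start with the view, stream the data on demand.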
These are our community channels: we have katacontainers.io, that's our main page, and we have a GitHub organization, kata-containers. We have an IRC channel, a Slack channel, Twitter, and mailing lists, so if you are interested in Kata, feel free to contact us through any of them. That's all. Any questions?

All right, thank you, Tao. Anybody have any questions? Yeah, I have two, but I can wait for others first; I always seem to be the one talking here. Anybody else? I'm pretty familiar with this project because I've been a contributor. Anybody else? Well, while we wait for others, I'll ask mine.

So, first question. First of all, thanks for a great presentation, very interesting, very informative, and particularly your 2.0 plans look very interesting. Could you summarize: if I took a KVM/QEMU VM, the basic process is that I load the guest kernel and then boot whatever's in the root filesystem, and then I have a running virtual machine, right? And that takes, I haven't done it for a while, but I'm guessing something on the order of a minute. Is that about accurate? Yes. Okay. Now, it sounds like you do essentially the same thing, but you presumably have a smaller kernel and a smaller root filesystem. Building a kernel and stripping out all the stuff you don't need is a reasonably straightforward process, and similarly stripping down a root filesystem to only the stuff you need is fairly straightforward. So what is the difference between that and Kata Containers?
I mean, how do you get this almost 100x speedup, other than by building a smaller custom kernel, excluding all the modules and things you don't need, and removing things from the root filesystem that you boot with?

Yeah, I think there are two main differences. The first is that, to cut down startup time and resource consumption, we keep very minimal hardware support in QEMU; we customize QEMU as well, so the memory footprint is very small and the device emulation is very small. When we started the project we actually defaulted to qemu-lite, which was developed by Intel, and as we were developing Kata, the Intel team was sending most of the qemu-lite features to upstream QEMU. That's why we have since switched to upstream QEMU: before that, the overhead of upstream QEMU was way above qemu-lite's. So right now the QEMU overhead is low, because most of the qemu-lite features are upstream, but we still customize it. And there's an important feature introduced by the Intel team, QEMU's Kconfig build configuration support, so that every component of QEMU can be configured independently at build time, and with that we can make the virtual machine manager very small. That's the first difference.
The second difference between Kata and a plain virtual machine is lifecycle management. With a virtual machine, you manage it like a virtual machine: you create it, you pause it, you stop it, you shut it down, and it lives inside the infrastructure-as-a-service world. But with Kata Containers, the lifecycle is that of a container, or of a pod sandbox, so all your workload is integrated into the container world. Instead of managing a virtual machine, you are managing a container. Instead of creating a virtual machine, installing everything on it, and running your application, you build a container image, create a pod YAML for it, submit it to Kubernetes, and Kubernetes schedules it and manages its lifecycle. That's the main difference, I think.

Okay, I still don't quite understand where the sort of 100x speedup comes from, because Kubernetes seems to have very little to do with the actual startup of a single virtual machine in this case.

If you are interested in speed, I have some data we recently measured. To start our QEMU with Kata, you basically spend 100 to 200 milliseconds; that includes creating the KVM environment, setting up all the QEMU emulation, and booting the guest kernel. Next, we have a kata-agent inside the guest that runs as the init process, and starting that takes about 20 or 30 milliseconds.
So in total, if you just run a hello-world image, you can go from nothing to the point where you see the "hello world" string come out in below 300 milliseconds. That's what we can do now.

Just to clarify, I'm not doubting that. What I'm trying to understand is: if I ran a hello world in a plain KVM/QEMU VM with a stripped-down kernel and nothing excessive in my root filesystem, and then I also did what you just described with Kata Containers, what is happening in the plain KVM/QEMU case, compared to Kata Containers, that accounts for the 100x speedup? I'm still not clear on what that is. Maybe we can take it offline; I think we're out of time now anyway, so I can take that question offline or go and do some research. It's still a little unclear to me, I must say. Okay, maybe we can email and have a discussion.

I had one more question, actually. Has anyone else come up with questions in the interim? No? My last question is very brief: is Kata Containers part of a foundation at the moment, and if not, are they interested in becoming part of the CNCF? Right now it is part of the OpenStack Foundation. Okay, interesting. Cool.

I think another interesting thing about Kata is that they also support Firecracker as a VMM, and the AWS team has spent a lot of time optimizing Firecracker for running serverless-type workloads, using it for AWS Lambda. Cool.

I would suggest, Tao, if you have a paper describing the performance you mentioned, maybe you can put a reference link in the Slack channel.
That would be good. You said you are testing it internally, but okay. And if you can post slides and any follow-up information on the SIG Runtime CNCF Slack channel, that would be great. Yeah, I think I can post that.

All right, well, thank you. It's nine o'clock, so thank you everyone for joining. We'll have another meeting in two weeks. Thank you.