All right, well, we'll get started now. I see at least four TOC members; Brendan Burns or Brian Grant are potentially on the phone, so I'll count that. No need to star-six to unmute just yet if you're on the phone. So, today's agenda. Taylor, can you go to the agenda? Cool. This meeting is our first experiment in getting through the backlog of project presentations and proposals that we have, and in presenting some of these projects to the TOC and the wider community so we can either accept them or give feedback on the project. We have three presentations today: CRI-O, Brigade, and KubeEdge, all applying for different levels. We're going to start with the CRI-O team, so I'll hand it off to vbatts, Mrunal, or whoever's taking the lead on this one.

This is Vincent Batts, and Mrunal is here as well, so we might tag-team a bit. The proposal for CRI-O: it has been an exciting journey up to this point, getting CRI-O out of the Kubernetes incubator and into a more formalized CNCF project. We finally opened the proposal a few months ago. CRI-O has been in the Kubernetes incubator (now renamed Kubernetes SIGs) for a long time, along with cri-tools and other projects. The point of this proposal is to get it in at the incubating level. It's a little confusing, because this should almost be a graduation from an existing incubator. While there is a little precedent for things coming out of the Kubernetes incubator, it's an odd transition because the project has already been in an incubator for some time, and it's used in production not only by companies but also by projects rallying around it. It's been at 1.0, stable and tracking with Kubernetes, for quite a long time. As far as container runtimes
go, this is a more pared-down container runtime, just the intrinsic infrastructure underneath Kubernetes. It does not purport to serve additional use cases or accommodate a variety of different interfaces; it is infrastructure tailored to Kubernetes itself. With that, you'll see the releases march in cadence with Kubernetes. As far as maintainers, adopters, and contributors go, it's a very active and lively project, so in our eyes it largely needs to move from the Kubernetes incubator into a sustained project in its own right, to be seen and used accordingly. I've got a link there to the CNCF TOC proposal; it has a little more information on some of the adopters, and looking at the attendee list just now, it looks like a few of the adopters are on the call, so broader questions are possible too. This piece lines up straight down the mission stack: single-purpose, almost Unix philosophy (do one thing and do it well), but also supporting cloud native infrastructure so it can run the same seamless stack and take away some of the headache of running higher-level systems like Kubernetes on top. So, Mrunal, I'm not sure if you want to add something, or Chris, if there's more of a procedure for what comes next, like questions?
Yeah, I mean, you're welcome to talk a little bit about architecture.

Yeah. If we move on to the next slide, I've added an architecture diagram, so I can briefly talk to this. This is a view of the node with CRI-O as the runtime. You have the kubelet on the left talking the gRPC CRI API to CRI-O on the right. For implementing the two services, we have these libraries called containers/image and containers/storage. containers/image specializes in moving images across different registries and formats, so we do a lot of innovation there, like adding different back ends such as NFS, and potentially also working on BitTorrent and so on. We use it in other tooling as well, again going with the Unix philosophy, so that library innovates at its own pace, and as we get more features there, we bubble them up into CRI-O. Right now it's the library implementing the image service, and it's used to pull images down onto the node. For the runtime service (the O in the name stands for OCI), any OCI-compatible runtime can be plugged in there. By default we use runc, but we've been working with Intel from the beginning: we added support for Kata through annotations, and eventually that led to first-class support for RuntimeClass in the kubelet API. So we're able to innovate at the lower level through annotations, prove features out, and then get them added to Kubernetes. Another thing: for networking we use CNI, so any CNI-compatible plugin can be plugged in and should just work without any issues. And at the bottom you see the containers/storage library; that's where we have drivers for overlay and device mapper, and recently we've been working on a new LVM driver that takes away pain from device mapper and is very important for the VM-based container runtimes. Sam, do you want to add anything about the Kata integration?
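The split described here (one library backing the CRI image service, a pluggable OCI runtime behind the runtime service) can be sketched as a toy model. This is plain JavaScript for illustration only, not CRI-O code; the real services are Go programs speaking gRPC to the kubelet, and every name below is made up.

```javascript
// Toy model of the two CRI-facing services; all names are illustrative.

// "containers/image" stand-in: resolves and pulls an image reference.
const imageService = {
  store: new Set(),
  pullImage(ref) {
    this.store.add(ref); // real code copies layers into containers/storage
    return { imageRef: ref };
  },
};

// Runtime-service stand-in: any OCI-compatible runtime can be plugged in.
const ociRuntimes = {
  runc: (id) => `runc:${id}`, // default runtime
  kata: (id) => `kata:${id}`, // VM-based runtime, selected via RuntimeClass
};

function runPodSandbox(id, runtimeClass = "runc") {
  const runtime = ociRuntimes[runtimeClass];
  if (!runtime) throw new Error(`unknown runtime class: ${runtimeClass}`);
  return runtime(id);
}
```

The point of the shape is the one made in the talk: image pulling and sandbox creation are separate concerns, and the runtime side is a lookup table that anything OCI-compatible can slot into.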
Sure. So, I'm Sam, and I'm from the Kata Containers project. As Mrunal was saying, we started to work with CRI-O from very early on, even before CRI containerd was a project. Having CRI-O follow the Kubernetes release cadence, and really making sure CRI-O works nicely with every Kubernetes version, helped us start playing with the virtual machine as an isolation layer for Kubernetes. This was really our vehicle into Kubernetes, and as Mrunal was saying, that led to the current RuntimeClass work being done in SIG Node. It was a really nice enabling vehicle for Kata Containers and, in a more general way, for integrating virtual machines into Kubernetes as first-class citizens, and for expanding the notion of runtime to a much more generic concept.

Thanks, Sam. I think Dan can talk more about what we are doing in the containers libraries. Dan's potentially muted, but okay.

All right. If you're happy, we could open it up to questions. I linked the DevStats out there to show some statistics about CRI-O in more or less real time. As I understand it, you're requesting a TOC member to sponsor a vote for incubation, correct?

Yes. I mean, as Vincent said, we've been used in production for a while, and we got out of one incubator, but we have to follow the process, and we want to get in at the highest level possible here.

Sounds good. We'll open it up to questions, making sure we leave time for the other projects. I think there was a question from Dims in the chat about moving off some Kubernetes-related dependencies.
I'm not sure I fully followed that. Is the concern that we are importing paths from Kubernetes itself?

Right. Hi, this is Dims. The question is: you mentioned that you're tied at the hip to the Kubernetes releases, right? Is there a possibility to break that link, or is it even feasible to break it? One of the reasons for asking is that in SIG Node we've been talking about how to get off the dependency on Docker that's baked into the kubelet, so this discussion might help that as well. That's why I'm asking.

So, yeah, we are using the CRI API. The issue with Docker is that the Docker integration code was contributed into the Kubernetes repo itself; it's what the project started with, so there may be assumptions and tight coupling there that make it hard to decouple. In the case of the new CRI-based runtimes, we are just talking over the socket, so it's not tightly coupled to the kubelet. CRI-O has existed in its own repo from the beginning. So while it does import some paths, it's really importing the gRPC APIs and structures; it's already talking through the decoupled abstraction.

Okay. Would it then make sense to get those dependencies out into a staging repository in k/k, which would help you later?

What do you mean, moving the gRPC stuff out somewhere else? That's neither here nor there, whether it's moved into a Kubernetes pkg or some Kubernetes CRI API submodule that Kubernetes imports and CRI-O imports and containerd imports.
That's after the fact. The important thing is that the "attached at the hip" you reference is part of why the CRI was invented, and the CRI that Kubernetes exports is already the decoupled interface. Any further work to fully purge Docker assumptions from Kubernetes itself should already be buffered behind the CRI layer, which is where CRI-O imports from. So where that code lives is neither here nor there.

Okay, thank you.

I think we have time for maybe one more question; otherwise we'll move on to the next presentation. All right. We'll send the email afterwards with this presentation, get some feedback from the community via the mailing list, and see if there are any TOC members interested in pushing this to a vote.

Okay. So up next we have Brigade, which is interested in the sandbox; they opened a proposal this morning, and they're going to present today. I think Michelle is on the line.

Hey Chris, do you mind if I share my screen? Yeah, I think Taylor will have to stop sharing, and we'll switch. Thank you. All right, is everyone good with this? Yeah, I see you.

Okay. Thank you so much for giving us time here today to present Brigade. I'm Michelle Noorali, and presenting with me is Radu Matei. If anybody has anything in chat, I'll check it at the end. So: Brigade enables event-driven scripting for Kubernetes.
It's a lightweight framework built with Kubernetes-native objects; essentially, it's an in-cluster runtime that interprets and executes scripts so users can chain containers together to create high-level workflows on Kubernetes. Every project you create in Brigade contains a brigade.js file, which defines some JavaScript that ends up running. As you can see here, we're defining a job, and that job contains exactly one task: running the tests on a GitHub push event. This is a simple example, but you can create multiple tasks inside a single job, and you can define multiple jobs inside a single script. And because this is JavaScript, you can use promise objects to chain two jobs to run one after another, or run them in parallel, and you can even use things like try/catch blocks to deal with error handling. In just a few lines, these scripts become incredibly powerful and robust. We chose JavaScript because it already has a rich ecosystem of tools; it was the number one scripting language at the time on the RedMonk rankings and the TIOBE index, and you can leverage any existing JavaScript package in the Brigade script you write. We often refer to Brigade as Unix shell scripting, but for Kubernetes: a Unix shell script defines a workflow around executing one or more lower-level system executables; similarly, a Brigade script defines a workflow for executing multiple containers within a cluster. I'll hand it off to my colleague Radu to explain the high-level architecture. Radu, take it away.

Yeah, thanks. First of all, can you hear me? Yeah. So, Brigade has a couple of components.
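As a concrete illustration of the brigade.js pattern Michelle describes (one job firing on a GitHub push, plus promise-based sequential and parallel chaining), here is a self-contained sketch. The events, Job, runEach, and runAll stand-ins below only mimic the general shape of Brigade's brigadier library; nothing here is real Brigade code, and a real script would require("brigadier") instead.

```javascript
// Stand-ins that mimic the shape of Brigade's "brigadier" library;
// in a real brigade.js you would require("brigadier") instead.
const handlers = {};
const events = { on: (name, fn) => { handlers[name] = fn; } };

class Job {
  constructor(name, image) {
    this.name = name;
    this.image = image; // container image the job's pod would run
    this.tasks = [];    // shell commands executed inside that container
  }
  run() {
    // Real Brigade schedules a pod; here we just report what would run.
    return Promise.resolve(`${this.name}[${this.image}]: ${this.tasks.join(" && ")}`);
  }
}

// Serial chaining: each job starts only after the previous one resolves.
const runEach = (jobs) =>
  jobs.reduce((p, j) => p.then(() => j.run()), Promise.resolve());

// Parallel fan-out: start all jobs at once and wait for all of them.
const runAll = (jobs) => Promise.all(jobs.map((j) => j.run()));

// The pattern from the slide: one job, one task, on a GitHub push event.
events.on("push", () => {
  const test = new Job("test", "node:12");
  test.tasks = ["npm install", "npm test"];
  return test.run();
});

// Gateway side: deliver an event to its registered handler.
const fire = (name) => handlers[name]();
```

A handler wanting more structure could build several jobs and return something like runEach([build, test]).then(() => runAll([deployA, deployB])), with .catch or try/catch around the chain providing the error handling mentioned above.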
First of all, it has gateways, which are components that translate events from inside or outside the cluster into Brigade events. Those are caught by a Kubernetes controller, which listens for the events and creates worker pods. A worker pod, scheduled as a result of an event, runs the JavaScript defined in your project and schedules the jobs you define there, together with any error handling and other Kubernetes-related operations such as mounting volumes and caches. Optionally, you can also run an API that you can use to interact with Brigade from a web dashboard, a CLI, or a terminal.

Awesome. Next slide, Radu. Okay. So there are several different use cases for Brigade. CI/CD is the first and easiest use case that comes to mind, but we've been really pleased to see Brigade being used in lots of different ways. For example, one company uses Brigade to do a bunch of meta-testing on various code bases at regular intervals: they run language-specific linters, do code-quality assessments and some security scanning, and then automatically notify the right teams if they find something that needs to be looked at. Another company uses Brigade to build weekly roll-up reports by aggregating and analyzing data from lots of different data sources. Another company uses Brigade to process image data. And the last one I have listed here is something the whole team has been really excited to see: one group of people is using Brigade to spin up preview environments for every pull request, or on demand, giving developers an up-to-date environment for experimentation and testing. There's a blog post I linked in the proposal that's really interesting on this use case, so if you want to learn more about it, feel free to check it out. We've also seen several different projects being
So we have a related project section in the repo and it lists a bunch of a bunch of projects There is someone from Charter for example who built the gateways for Bitbucket and GitLab Kashdi is a dashboard that was built alongside Brigade core To view your Brigade pipelines and there's just several others that you can check out Ready, do you want to talk to us a little bit about the roadmap and progress? Sure. So we're very close to releasing a 1.0. We hope that someone around this week will be able to tag 1.0 Together with the 1.0 working towards 1.0. We've been having regular weekly community meetings That and we just regularly discuss with the community in the Brigade Kubernetes Slack channel We have a project for that's up to date and we're tracking work items and as well We have the entire CI4 Brigade built with Brigade and we're dog footing the latest release every time As for the roadmap from now till summer on May We plan for a stability period for non and non breaking changes for the Brigade core at the same time We want to split out the rep repo and have separate repos and release cycles for the gateways and other projects that are in the repo at the same time We want to migrate all the projects to Brigade core GitHub organization Together with multiple community projects Awesome. Thank you. Um And do you want to talk to us a little bit about what we're doing and within the cloud native community sure So we've been really happy to see lots of integrations coming from the community. So for example, we have Magic Explorer for communities coming from the community. 
We have gateways for Kubernetes in-cluster events. We also have integrations with CloudEvents, a CNCF project for defining event schemas; we actually ship a CloudEvents gateway with Brigade that can handle CloudEvents schemas. And we integrate with Virtual Kubelet, also a CNCF project, which allows us to schedule Brigade jobs on top of virtual nodes so you don't have to provision infrastructure for your builds; as you can imagine, this opens up scenarios where you have serverless pipelines built on top of Brigade and Virtual Kubelet.

Thank you, Radu. So, our last slide is about why CNCF. We think this is going to be a really great home for us in terms of having a vendor-neutral IP space to foster collaboration. We've seen a number of end-user companies become interested in Brigade, and we really hope to get time with the end-user community in the CNCF at large to get some feedback, present what we have, and see if they have any additions or anything for us to look at. We're already leveraging existing cloud native projects, as Radu mentioned, with the integrations with CloudEvents and Virtual Kubelet. It's Kubernetes-native: it runs in Kubernetes, and it uses Helm charts for packaging and deployment. So we're already really set up in the cloud native space, and we want to continue to talk about interoperability and integrations with other cloud native projects and get that feedback from this community. I'll hand it back over to Taylor at this time, and I'm happy to take questions.

Yeah, let's do some questions. There's a proposal I linked on GitHub with Brendan Burns and Quinton as the TOC sponsors for the sandbox, but we'd love to hear any questions from the community now.

I have no questions.
I think this looks like a great project to have in the CNCF.

Thanks so much, Alexis. A question from Matt Spencer: would Brigade help with cross-platform CI?

I think I can take this one. Are you referring specifically to Windows and Linux and other operating systems?

If that's the case, then no: I'm not talking about cross-OS, this is cross-architecture. I'm representing Arm here on this call, and we're trying to work out how we get different architectures represented in the CNCF CI process.

So, specifically, we've been working with the community for the last month or so on adding Arm support for Brigade, which essentially means that as long as you're able to join an Arm node to your Kubernetes cluster, then yes, you can run Brigade jobs on top of that node. And there are options when defining a job in the Brigade script where you can pin it to a specific subset of nodes, using selectors or taints. So yes, it could help with that.

Awesome, thank you.

Hello, hi. So, a question about the event model. Are you folks supporting complex event models, such as one-to-many, or is it fairly simple, stringing jobs together to run sequentially? Do you have plans to support that as well?

So, at this point, one Brigade project can respond to one event. That being said, when the event is handled, you can spin off multiple jobs and chain them however you want: you can run them sequentially, you can run them in parallel; essentially, the JavaScript API is flexible enough to allow you to spin up jobs in any way you like.

Well, I think my question was: can I do the typical flow and process-management patterns, like forks and joins? Do you have plans to support those kinds of complex
event patterns?

Right now we don't have support for that, but I'd be extremely excited to hear your use case and work from there.

Great. Okay, thanks.

Hello, my name is Vlad, from 3Dop. I have one question. Do you plan to extend the features of the DSL itself? I mean, currently in one job you can create only one container; in Jenkins, for example, you can create and interact with multiple containers within the same job. Do you plan to extend that?

Yeah, that's a great question. We actually have a proposal for allowing sidecars in Brigade jobs, in the pod. We decided to leave this out of 1.0, mainly because of the lifecycle and pod-lifetime implications it would have, and we're actively working with the community to understand what the best default for the lifetime of the pod would be in that scenario. But yes, we definitely want to support sidecars and multiple containers in pods.

Okay, thank you.

All right, any other questions? One more question? Otherwise, we'll move on to the next one. There's a proposal I linked that has two TOC sponsors already. I think we'll leave it open for another day or two to get a little more feedback from the community; otherwise, it has already met the minimum requirements we have for the sandbox. So I will now close it off and hand it off to the next speaker. Thank you, Michelle and Radu, for your time.

All right, now we have, I think, the KubeEdge folks. Who is here from KubeEdge? Yes, Cindy and Sunil should be here. Oh, star-six if you have to unmute, but we'll go for it.
Hello, this is Cindy Xing. Today, Sunil Kumar and I are going to present the KubeEdge project. Currently KubeEdge has contributors from industry, academia, and elsewhere. We are very excited to have Brendan and Quinton as our TOC sponsors. We can move to the next slide.

KubeEdge is a Kubernetes-extended infrastructure for IoT and edge computing: the control plane runs in the cloud, and worker nodes run at the edge. KubeEdge enables orchestration of native container applications from cloud to edge, and it has been proven valuable to real customers. The uniqueness of KubeEdge lies in the following. First of all, the edge and cloud are loosely coupled: the edge side can work autonomously and then sync with the cloud. KubeEdge supports bidirectional, multiplexed network communication between cloud and edge. The agent running on the edge side consumes about 10 megabytes of memory at runtime. The architecture is based on Kubernetes and is highly extensible and pluggable. You basically run a Kubernetes cluster from the cloud without needing to know where the edge nodes are located. Next slide, please.

Before we drill down into the KubeEdge architecture, I'd like us to walk through some edge-computing use cases. On this page you can see two pictures, which are very relevant to our daily lives. Think about how often, when we go into a parking lot, we have to pay manually or wait quite a while to find an available spot, and similarly at the gas station. Imagine using AI at the edge: a lot of this can be made more efficient and automatic. Through image recognition and AI models at the edge, everything can be done easily. Next slide.

This use case is about a water-tank system. As you can see, there are three water tanks geo-located at different locations; on each water tank there are sensors, valves, and controllers. Through the communication among all the controllers, the water level in each tank and in the pipeline can be stabilized and balanced. So it is a decentralized architecture
without the edge nodes talking to the cloud. So we can move to the next slide.

As you can see in these and similar edge-computing use cases, here is our view of edge computing. The resources and devices are located at the edge, but they are managed from the cloud. Applications or serverless functions run at the edge, but we want to deploy and orchestrate them from the cloud. Essentially, edges are an extension of the cloud. We want bidirectional network communication between edge and cloud, but the network connection between them may not be reliable, and the bandwidth can be limited. Keep in mind, too, that the edge nodes can exist at large scale. Based on that, we want edge nodes to have some autonomy, so business logic can run on the edge side: local, quick, and reliable. Beyond that, the edge nodes can be decentralized, so that they are aware of each other and can communicate with each other. The other consideration is the heterogeneity of the edge nodes. From a hardware perspective, a node can be a Raspberry Pi or a server machine; the protocols between the IoT devices and the edge side can be very diverse; and the scale of the IoT devices can vary widely as well. Next slide, please.

So here's the architecture. It has two parts. One part is the control plane, which is deployed in the cloud. The second part is the edge agent, which is a single process that runs on each edge node. There are some main components.
Let's talk about the edge-node side. As I mentioned, there is a single process there. EdgeD is basically a very lightweight kubelet for IoT and edge computing. Then EdgeHub and CloudHub use WebSocket to build the bidirectional network communication over a single long-lived connection; all kinds of data can be communicated over it. The EdgeController is basically a Kubernetes extended controller: it converts all the relevant pod and node information so that only the metadata targeted at a specific edge node is communicated to that node. Then you can see there are two data stores: etcd in the cloud for the control plane, and another data store on the edge side. Each side maintains metadata; the cloud side holds it for the whole cluster, while the edge side holds only the metadata targeted at that edge node. Currently we use SQLite for the data store on the edge side, to fit the Raspberry Pi's resource constraints, but people are free to pick any other SQL data store they would like. One thing I want to call out on the edge side: we use a modular framework, so the components are pluggable. For example, if you're not doing IoT scenarios, then at runtime, when we launch the single process, you can configure it not to load the DeviceTwin and MQTT broker, and so on. So this is the high-level architecture, and then we can move on to the next slide.

Up to now we have released two minor versions. Currently KubeEdge offers an end-to-end solution for IoT and edge computing; it builds a fundamental infrastructure based on Kubernetes. In the next major release, on top of all the current capabilities, we plan to build out the data plane: a service mesh, and security by integrating with SPIFFE and SPIRE. We are also building the device-management API using Kubernetes CRDs, and we plan to evaluate performance and scalability. Later on, we have plans to enable monitoring and some other features. Next, please.

From a CNCF and
community perspective, we really agree with the CNCF vision and would like to contribute to the community. As you can see, KubeEdge is based on Kubernetes; the architecture is very open and extensible, and it supports native container applications. We have presented KubeEdge three times, through deep-dive sessions at KubeCon. While I was presenting, I heard a lot of interest from companies, academia, and other folks. There is a lot of need: people are working on IoT and edge computing, and the KubeEdge architecture can help them. KubeEdge can also help people integrate with a lot of other projects, for example Istio, Prometheus, and so on. We'd like to welcome and engage more people to make more innovations. Another thing I want to mention: the KubeEdge architecture is called out as an example in the Kubernetes IoT Edge working group's white paper. So, in summary, we think KubeEdge aligns with the community, there is a need for it, the architecture is open and extensible, and we can include more people in the community and build innovation for edge computing. That's what I have. Any questions or comments?

Cindy, could you perhaps talk a little more about how the connectivity between the edge and the cloud is handled? In particular, that connectivity is often very poor, which is something that Kubernetes does not handle. Could you explain in a little more detail how that is dealt with?

Sure. Can we go to the architecture chart?
So, from the edge node to the cloud there is a long-lived connection, which is a single connection. We use WebSocket so the connection can be built, but it's initiated from the client, from the edge node, because when you configure or provision the edge node, it knows where the Kubernetes control plane's API server is located. Once the connection is initiated, this WebSocket is a bidirectional connection, and data can flow to and from both the edge and the cloud. One specialty of KubeEdge, as I mentioned, is that we handle the network disconnection and reconnection scenarios. In case the network disconnects, then because the edge side keeps a copy of the metadata for the edge node, things can continue working autonomously, even without the connection. But once the network connection is rebuilt, all the data from the edge side can flow back to the cloud, and vice versa.

You might wonder about one thing: in the current Kubernetes architecture, the kubelet reports a heartbeat to the control plane every ten seconds to signal its liveness. The way we address this is with the taint mechanism. In case the network is disconnected, the edge node is tainted accordingly, so that new deployments won't be scheduled to that edge node. Once it's connected again, the taint is removed and things continue working. That's what I mean by the edge node and cloud being loosely coupled: the edge side can work autonomously, and when the network reconnects, everything just works as if you were running a normal Kubernetes cluster; the location of the edge node is transparent to the user.

Thank you. Any other questions?

Cindy? Yeah. Imagine the edge agent: currently it runs in only about 10 megabytes. As I think I mentioned, we're removing the dependency on Docker. I believe the kubelet is currently more than 100 megabytes or so, so this is a great improvement.

Cool. If there are no other questions, we'll look forward to
a proposal, and maybe we'll get some feedback via that. Thank you.

All right, that pretty much wraps it up for today. This was a bit of an experiment for us, and we're going to continue doing project presentations once a month, on the second Tuesday of the month at 8 a.m. Pacific. So hopefully this was useful, folks, in helping us get through the backlog and learn about some of these projects that are interested in joining the CNCF. Other than that, look forward to the discussion; there are some proposals already out there, so feel free to give them some comments, and then we'll do this again next month. Thank you very much. Take care, all.

Yes, goodbye. Thanks, Chris. Yep.