Okay, we can get started with the CNCF CI working group monthly meeting. Good to see everybody. Feel free to review and add any items to the agenda.

We've got a few upcoming events to mention. The first one is in progress now: Mobile World Congress is happening in Barcelona, and some exciting releases and news have come out about the CNF Testbed, the cloud native network functions testbed. Taylor, you've been assisting with the demo and presentation, and there will also be a CNF Testbed BoF call twice a month, on the first and third Monday, so do check out the repo at cncf/cnf-testbed for more information. The first of April will be Linaro Connect in Thailand, and also in April is the Open Networking Summit, where Watson's CFP was accepted and he'll be presenting on the CNF Testbed; there's a link to that presentation, so please check it out if you're able to attend ONS. KubeCon + CloudNativeCon Barcelona is in the middle of May, where we hope to have an intro and a deep dive for the CNCF CI dashboard, and then KubeCon + CloudNativeCon China is in Shanghai in June.

We're excited to announce that the CNCF.ci status dashboard v2.0 has been released. You can check out the release notes at the link below, and I've prepared some slides, about ten minutes' worth, to show where we've been, where we're at, and where we're going with the CNCF.ci status dashboard. The agenda covers the who, what, and why, a brief demo of the CNCF.ci dashboard, an overview of the dashboard, the v2 goals, the timeline and events, and some time for Q&A.

The CNCF CI team: thank you all for joining this call. Watson, Lucina, Denver, Taylor, Josh, Krista, Robert, and, not pictured, Hippie Hacker, project co-founder.

So why do we have the CNCF.ci dashboard? Because the CNCF ecosystem keeps growing. Right now there are four graduated projects, 16 incubating projects, and 12 sandbox projects, and CNCF would like to ensure that those projects are building, provisioning, and deploying as expected. The CNCF.ci dashboard visualizes that. In the slide you can see a link to the landscape, which is l.cncf.io, l as in landscape, showing the current projects in CNCF; this slide will be outdated shortly, as soon as the next project is included.

The CNCF.ci dashboard consists of a CI system, a status repository server, and the user-facing dashboard. The CI system currently has three stages, for build, provisioning, and deploy, and we test each project's stable and HEAD releases in a bare metal environment. The testing software can reuse artifacts from the project's own CI system or generate new build artifacts, and the repository server then collects the results and displays them on the dashboard.

Here's a view of the CNCF projects we're targeting to add to CNCF.ci. We start with the graduated projects, which have already been added, and then move to incubating; once graduated and incubating are covered, we'll move on to sandbox. We also have one Linux Foundation project on the dashboard, ONAP.

A quick timeline of the CNCF.ci platform: the CI platform started in 2017, and the v1.0 dashboard was released in 2018. We had a few releases in 2018 to add ONAP, Envoy, other projects, and other cloud providers, and today we've released the v2.0 dashboard. Let's take a look at where we're at today. We're focusing on the projects: we have the graduated projects displayed above the incubating projects, and then ONAP.
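To make the pipeline concrete, here is a purely hypothetical sketch of the kind of per-project, per-stage record the status repository server might collect for the dashboard; the field names are invented for illustration and are not the dashboard's actual schema.

```python
from dataclasses import dataclass

# Hypothetical shape of a per-project status record the repository server
# could collect; field names are invented for illustration only.
@dataclass
class PipelineStatus:
    project: str      # e.g. "kubernetes"
    ref: str          # "stable" or "head"
    build: str        # "success" / "failure" / "n/a"
    provision: str
    deploy: str

record = PipelineStatus(
    project="kubernetes",
    ref="stable",
    build="success",
    provision="success",
    deploy="success",
)
print(record)
```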
We're showing the build status and the releases for stable and HEAD, and we're deploying and testing on a bare metal Packet environment. This is refreshed every morning at three a.m. It currently shows the build, provision, and deploy stages, and we'll be adding a fourth stage for testing, which is in planning.

Some of the goals for the v2.0 dashboard are to switch the focus to showing third-party validation of builds, deploys, and testing for the CNCF graduated and incubating projects, tested on the bare metal environment. Upcoming iterations will focus on making it a scalable project ecosystem and on collaborating with CNCF project maintainers to add and maintain their projects on CNCF.ci. We're also going to build integration with the projects' existing CI systems so that we can reuse their build artifacts, their deploy artifacts, and their end-to-end tests, and the dashboard will be restructured in a way that allows contributions from external project maintainers.

Key features at a glance: highlight and validate the CNCF graduated and incubating projects, increase collaboration with the CNCF project maintainers, accelerate adding new projects to CNCF.ci, and demonstrate provisioning on bare metal Packet. The next release will show the Kubernetes stable release, and then we'll add functionality to show the HEAD release as well, so we can support more release versions of Kubernetes, like a release candidate or the last stable release. It's designed to scale as more projects and releases are added; I'll show you the mock-up in a minute. We also want to use kubeadm for bootstrapping Kubernetes on Packet.

So what's next? This is a mock-up of the UI changes that are in progress on our dev environment and will be released to CNCF.ci as soon as possible. They include adding a test environment section for Kubernetes stable on Packet at the top and adding a test column to show end-to-end test results provided by the project maintainer. We'll also switch the order of the build and release columns.

We publish our roadmap on GitHub. At a high level, in February we're moving from a provider focus to a project-focused home screen, and we'll do some planning on adding smoke tests after the app deploy phase; we'll also continue planning on integrations and how to use kubeadm. Next month we will add support for those external integrations for the build, deploy, and end-to-end tests, update the ONAP stable and HEAD releases, and add kubeadm to the provisioning stage. In April we'll publish documentation on how external CNCF project maintainers can add and maintain their projects on the dashboard, add the smoke tests to the app deploy stage, and collaborate with maintainers on how to add end-to-end tests. We'll also add support for more Kubernetes releases in the test environment and collaborate with maintainers to accelerate adding more CNCF projects to the dashboard.

Looking at the roadmap at a high level: releasing v2.0, documenting how to add a new project, and adding testing on a Kubernetes release candidate. Three tickets are in progress, or really in testing, and will be closed by the end of the week: implement the test environment section, add the test column, and move the build and release columns. Then we'll move on to v2.1 to implement a release selector dropdown to toggle between stable and HEAD Kubernetes, and we'll add subheaders and alphabetized sorting so you can see at a glance which projects are graduated, which are incubating, and which are Linux Foundation.
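The roadmap mentions using kubeadm to bootstrap Kubernetes on Packet. As a rough illustration only, not the working group's actual provisioning code, here is a minimal sketch of that bootstrap step on an already-provisioned node; it assumes kubeadm and the kubelet are preinstalled, and the pod CIDR is an illustrative choice.

```python
import subprocess

# Minimal sketch of a kubeadm-based bootstrap on an already-provisioned node.
# Assumes kubeadm/kubelet are installed; the pod CIDR is illustrative only.
def kubeadm_init(pod_cidr: str = "192.168.0.0/16") -> str:
    subprocess.run(
        ["kubeadm", "init", f"--pod-network-cidr={pod_cidr}"],
        check=True,
    )
    # Retrieve the command worker nodes would run to join this control plane.
    join_cmd = subprocess.run(
        ["kubeadm", "token", "create", "--print-join-command"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    return join_cmd

if __name__ == "__main__":
    print(kubeadm_init())
```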
Then we'll start changing the backend and how the existing project details, release details, build details, and deploy phase details are added, so that we can support that external integration with contributors.

There are a couple of events that we mentioned earlier: the Open Networking Summit, with Watson and Denver, and KubeCon + CloudNativeCon in May for the CNCF CI intro.

There are several ways you can provide feedback on the CNCF CI dashboard. You can join the monthly CNCF CI working group calls, currently scheduled for the fourth Tuesday at 11 a.m. Pacific time. Please subscribe to the mailing list, and if you have any questions you're welcome to send us an email. If you're not already on the CNCF Slack, please join it and join the CNCF CI channel. And we always welcome issues on the GitHub tracker board in the crosscloudci org. Does anyone have any questions about the CNCF CI dashboard?

Lucina, I have a question. This is Ed from Packet. Do you have a sense of the degree of difficulty for a project that wants to start the process of getting added to the dashboard? Is this a one-hour job, one day, one week, one month, one year sort of task?

That is a good question. In this last sprint, and it's in testing now, we've broken it out into its various parts, so it's composable. We're currently testing the project column: how to add a logo, the name, the display name, and the URL. That would just take cloning the repo, creating a folder, adding that information, and creating a pull request, so that part would be a matter of 15 minutes, I'd say, having gone through it and tested it myself. The build, release, and deploy parts are still in high-level planning, so Taylor, if you're available, do you have a feel for how long it may take a project to add details for those components?

I think you're right, that first part would take a very small amount of time; it's the other parts we're still working out. As far as maintenance goes, updating the project details should be pretty minimal. Right now it's in a separate repo from the projects; each project has its own repository under this org. The way we're doing the configuration, we're trying to make it so that a CNCF CI configuration file can eventually be moved into the project itself and maintained there, so any name changes, logo changes, or other updates could live there, and that would be similar for all the other pieces on the screen that Lucina is showing. We're working through each of those. To put a ballpark out there: if all of the prerequisites are met, which are primarily whether your artifacts are publicly available and whether we can pull your status information, for example if you're using something like CircleCI, because the direction we're going is pulling in information from multiple places and showing how they work together, then adding the necessary information to the configuration file should be less effort. Where there will be more effort is if, say, CoreDNS says we're using Travis CI with this sort of setup and we haven't integrated with that yet; then we need to walk through that process. That would take some time, but the next project would be able to reuse that same integration.
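As one concrete illustration of pulling status from a project's existing CI system, here is a minimal sketch against the public CircleCI v1.1 API; the org and repo names are placeholders, and this is not the dashboard's actual integration code.

```python
import json
import urllib.request

# Minimal sketch of pulling a project's latest build status from the public
# CircleCI v1.1 API, as one example of reusing a project's existing CI results.
# The org/repo values are placeholders, not the dashboard's actual integration.
def latest_circleci_status(org: str, repo: str) -> str:
    url = f"https://circleci.com/api/v1.1/project/github/{org}/{repo}?limit=1"
    with urllib.request.urlopen(url) as resp:
        builds = json.load(resp)
    return builds[0]["status"] if builds else "unknown"

if __name__ == "__main__":
    print(latest_circleci_status("example-org", "example-repo"))
```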
So for ONAP, we actually have an integration with their Jenkins server, using the Jenkins API, so ideally we can build on that with anyone else. Beyond that, you start getting into testing. For the deploy stage, we want to do smoke tests: as soon as a deploy of a project happens, we immediately want to do some initial testing to see that services are up before saying it's fully ready, and that ties in with the green badge. So we'll be collaborating with projects to ask, what is a minimum smoke test? Something like hitting an HTTP URL, or a DNS query to CoreDNS. Then you start looking at the next stage, which would be integration testing. We're hoping, and this is what Lucina was showing, that we can do these piece by piece and let projects maintain more and more, add themselves, and update more of the parts so they can get on sooner. Then as they add full integration testing, that badge would go from N/A to green or red. Does that address the timeline and effort a bit from a project perspective?

Yes, thank you, Taylor. That's a very good description, and it illustrates that some of the difficulty depends on whether we've already worked with the CI system that the project is using. So it's more about CI integration and less about the project itself.

Absolutely. A lot of the upfront effort is CI integration, and as we cover more of the CI systems, we'll focus on helping with the testing, like CoreDNS and Prometheus. In particular, we've had a lot of feedback asking how we can collaborate to build those tests, including things like templates or best practices, and places where we can say: drop them in here, and here's how they're going to run and what's expected.

Any other questions for Lucina or anyone else on CNCF CI? Thanks, Taylor. Would you like to share your screen for your agenda item?

Sure. Okay. I want to talk about the CNF Testbed, which is a newly released project from CNCF. It's a complementary effort to all the projects; similar to CNCF CI, it's not a project going through the incubating process or anything yet, but it's a project that helps other projects. Let me see, the slide deck is moving around a little bit; let me find the next one. Here we go.

It's a fully open source initiative, up on GitHub at cncf/cnf-testbed. The focus has been on performance testing, comparing VNFs, that is, VM-based network functions, with their cloud native versions. It could, and probably will, encompass other use cases beyond performance, because we've been talking about ones that highlight things like orchestration and failures, so we'll end up with tests and CI testing where we take down different components and see how they behave. But right now it's been about performance and trying to use identical code: the same base network function code, and seeing how it works in an environment like OpenStack, or any KVM environment, versus Kubernetes, on the same hardware. It's public; it's all being done on Packet as the primary environment. We also have complementary work being done with the Linux Foundation CSIT project, which is part of FD.io. They do testing in their lab and are actually taking some of the same tests that we're doing and running them there.
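Going back to the minimum smoke test idea mentioned above, here is a small sketch of the kind of post-deploy check described: an HTTP endpoint probe plus a reachability check on the CoreDNS service port. The addresses are placeholders, and the real checks would come from the project maintainers.

```python
import socket
import urllib.request

# Minimal sketch of a post-deploy smoke test: check that an HTTP endpoint
# responds and that the CoreDNS service port is reachable. Endpoints are
# placeholders; real checks would be defined with the project maintainers.
def http_ok(url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False

def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    checks = {
        "http endpoint": http_ok("http://10.0.0.10:9901/ready"),
        "coredns tcp/53": tcp_reachable("10.96.0.10", 53),
    }
    print(checks)
```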
We're also running some of their tests to compare. The idea of the testbed itself is that you can deploy the entire thing from nothing but a Packet API key: bring everything up in your own account, get the machines running the clusters, whether that's OpenStack, Kubernetes, or whatever you want to test, then deploy the network functions, the applications, in a configuration, and run tests against them.

It looks pretty similar between the environments: you have your regular clusters, and for the performance tests there's a traffic generator sending packets, in our case as fast as possible, so that we're stress testing the clusters and all the components they're comprised of and seeing what the performance is and how everything reacts. That's the high level.

What this is showing is a layer 2 connection: at Packet we actually connect one of the ports for layer 2 traffic. On the Kubernetes side, and I didn't drop a slide in here, you still have your regular flat layer 3 network, but the pods and the containers themselves also have additional ports connected to layer 2, and you can run additional types of traffic on those. In our case we're handing the interface over, outside of the regular path that Kubernetes or OpenStack would use to control that traffic, and running it in a higher-performance setup.

This slide shows some of the software running, just to see all the different pieces. At the bottom we have the Packet pieces; there's a physical router, and we actually talk to and configure that when we bring up the testbed, so you don't have to do anything ahead of time and you're not required to pre-configure the Packet environment. If you have an API key, bringing up the testbed includes configuring the router, via the API, with whatever VLANs or other network configuration you want for the testing. We're working pretty closely with Packet on access to the different things that are coming out and on supporting more telco use cases, so thanks to them for that.

The rest of this goes over the software. It's all 100% open source. One of the goals of this project is to recreate use cases that are out there that mix open source with proprietary bits or configuration you may not have visibility into, so all of this is reproducible. The traffic generator uses OPNFV NFVbench, TRex, and DPDK; all those pieces are out there, and we're looking at other projects that use them and trying to reuse some of what they have. The difference on the cluster side is the vSwitch, which is what provides the additional interfaces. Normally Kubernetes talks through the kernel networking; we add a vSwitch running the VPP software that connects to the physical interfaces. It also connects to the containers using a memory interface called memif, which allows high-speed communication.
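As a rough sketch only of the memif wiring just described, assuming a node where VPP and its vppctl CLI are installed; the interface id and addresses are illustrative, and this is not the testbed's actual provisioning code.

```python
import subprocess

# Rough sketch of wiring up a memif interface on the VPP vSwitch by shelling
# out to vppctl. Interface names and addresses are illustrative only.
def vppctl(*args: str) -> str:
    return subprocess.run(
        ["vppctl", *args], check=True, capture_output=True, text=True
    ).stdout

# Create a memif interface (VPP acts as master; the container side connects
# as slave over the shared-memory socket), bring it up, and address it.
vppctl("create", "interface", "memif", "id", "0", "master")
vppctl("set", "interface", "state", "memif0/0", "up")
vppctl("set", "interface", "ip", "address", "memif0/0", "10.10.1.1/24")
print(vppctl("show", "interface"))
```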
Over on the other side of the diagram, I don't know why that says OpenStack; it's vhost-user, I'll fix that and just put vhost-user. So on the OpenStack side, it's also using the same VPP vSwitch, and there's an OpenStack project called networking-vpp, so it talks to OpenStack through Neutron, and everything looks pretty similar after that as far as the configuration goes. Similar to Kubernetes, we give the VMs interfaces that talk to the vSwitch. This is all to make it as close to apples-to-apples as possible while using the technology that's appropriate in each environment. The rest is pretty common stuff that you would have on the master and controller nodes, so I'll move on, and if y'all have questions we can come back to those at the end.

As for what we're doing right now: we've done several different use cases and tests over the past ten months or so, including some tests for KubeCon, and here's one of them, the performance one. You can take these network functions, deploy them to Kubernetes or OpenStack, and chain them together in some fashion, with the traffic going through the vSwitch, and it's similar on Kubernetes and OpenStack. The big difference on Kubernetes is that it uses the memif interface instead of vhost-user, and on Kubernetes you can also directly connect the containers together, which makes a big difference in performance. There are some other scenarios, but those are the two big ones we're looking at for network testing. You can also run multiple chains, whether you're separating them because they run different types of services, or you're splitting things up between different networks, or whatever you may want to do; there are use cases where you want different chains. So we're also doing testing with multiple chains, where the density on a node changes and the amount of resources, CPU and memory, is affected, to see how that affects performance. We've run a lot of different scenarios; this one shows three chains with two network functions per chain, for each of the configuration types.

Then we pull the results. This is a summary of some of the results for the three-chain, two-network-function case. On the OpenStack side, the VM side, I think this was KVM, it was 1.1 million packets per second. You're looking at large numbers here; this isn't like requests per second on, say, a web server. This is high-speed network equipment, and you're moving that type of service into containers or VNFs. On Kubernetes it was around 6 million packets per second for the snake case, where the traffic goes in and out through the vSwitch, and when you directly connect the containers we were seeing nearly 9 million packets per second. So that's pretty cool.

These are some stats about deploy time: on OpenStack it's over an hour to get the infrastructure up, versus less than 16 minutes for Kubernetes, including a reboot of the Packet servers. We're working to get that down so we don't have to reboot servers at all, even for things like kernel parameters, by using the Packet API. And then there's the deploy time of the actual network functions, bringing up a chain like this one: how long does that take?
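Getting that infrastructure up starts with provisioning bare metal nodes from the Packet API key mentioned earlier. Here is a minimal sketch, assuming the packet-python client; the project ID, plan, facility, and operating system values are placeholders rather than the testbed's actual defaults.

```python
import os
import packet  # pip install packet-python

# Minimal sketch of the "everything from an API key" idea: provision one
# bare metal node on Packet with the packet-python client. The project ID,
# plan, facility, and OS values are placeholders, not the testbed's defaults.
manager = packet.Manager(auth_token=os.environ["PACKET_AUTH_TOKEN"])

device = manager.create_device(
    project_id=os.environ["PACKET_PROJECT_ID"],
    hostname="cnf-testbed-worker-01",
    plan="m2.xlarge.x86",
    facility="ewr1",
    operating_system="ubuntu_18_04",
)
print(device.id, device.state)
```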
So if you're looking at CI and iterating, you want to go through these and get all of these numbers down as much as possible, and get the resources down so that you can run more workloads, while ideally maintaining the same performance or better as we go.

This has been going on since around May; KubeCon Copenhagen was when the project kicked off. The primary challenges have really been around OpenStack. Getting a high-performance OpenStack with VPP that's 100% open source, redeployable, and works as expected whenever you bring it up has been very difficult; we only got it fully working on on-demand Packet instances in the last couple of weeks. Those instances have Mellanox network cards, which are not ideal: they use proprietary drivers, there's a lot of other weirdness in the way the drivers work, and there's some funkiness in OpenStack Neutron and other pieces we've had to deal with, especially around making it all work with VPP. So there were a lot of different things to get it to the point where someone can just take it and deploy a new cluster on Packet whenever they want. On the Kubernetes side, Mellanox is again difficult everywhere, and there are questions like how do you want to do layer 2 and how do you add new ports. There are things like Multus, which lets you add multiple ports, but it's at the pod level, so you don't get direct container-to-container connections, and there are a lot of other items you end up dealing with. Network Service Mesh has a lot of awesome stuff coming for configuring this declaratively, the Kubernetes way, but that's still in progress; we're moving towards it, we attend their meetings, and they work with us and attend the CNF Testbed calls, but that's in the future.

So that's probably it. Again, with an API key, if people are interested in this, you can recreate it, and we're interested in people saying you're not doing Kubernetes right, you should be doing this, whether that's pull requests, opening tickets, whatever. We are looking at other environments; right now Packet is the primary area of focus, but we'd like to be able to compare on other environments like AWS i3.metal. They've also released, I don't think this slide mentions it, but I believe their c5 and c5n instances, which have 25 gig and 100 gig network connections, so we'll be looking at some of those. I don't have a Q&A slide here, but I'm happy to answer questions about this.

Hi, this is Chris Hoge from the OpenStack Foundation. I actually have a few questions, and forgive me, because we really didn't start digging into this until a couple of days ago, or yesterday really, so if I'm uninformed on some of these items, please correct me. One of the things that concerns us a little is that it appears you're running Kubernetes and OpenStack on different hardware. Let me look at our notes here: it appears that Kubernetes runs on c1.xlarge and OpenStack on m2.xlarge. I don't know if that's something we missed, or if you're aware that you're doing the performance metrics on different platforms.

Okay. First, I would love a pointer: open an issue, or send me a Slack message on the CNCF Slack, wherever you'd like, if you see something that's different.
There may be something with the controllers and master nodes potentially being on different instance types, but the performance metrics that you're seeing here are on the same hardware. When I referred to Mellanox, that's the m2.xlarge for both OpenStack and Kubernetes; the Mellanox NICs are ConnectX-4s, and it's the exact same hardware when we're talking about the data-plane testing. We're not doing any testing of how the master and control nodes work for management traffic. For the layer 2 piece I showed, the masters happen to be connected and can talk there, but there's no layer 2 involvement for the management communication. The traffic generator is only hitting the worker nodes, all over layer 2, and those worker nodes are all m2.xlarge. What we don't have on OpenStack yet, and so aren't showing any comparison for, is OpenStack running on Intel NIC-based instances. Packet will be releasing a new instance type, and Ed, if you're listening, correct me if I'm wrong, but I believe it's called the n2.xlarge, which has quad-port Intel NICs, and we've been testing some reserved ones.

Yes, I believe that's correct. I haven't seen the exact announcement yet, but I know I've seen pictures of the quad-NIC systems.

Okay, thanks. From what I've heard those are coming out sometime this quarter, in the next six weeks or so; I don't see anything mentioned publicly, but I think that's about right. At that point, everyone will be able to deploy on-demand Intel versions. But if you want to run the tests right now, the comparisons you get between OpenStack and Kubernetes are on m2.xlarge with Mellanox NICs, and those are dual-port.

Okay. The other thing that jumped out at us as we were comparing deployment times: 65 minutes, to me, feels like you're doing something wrong. I know that for some similar installations I do in my home lab, which probably doesn't even have the same performance characteristics as what you're running there, that time should be much closer to the Kubernetes time. But it's not very clear to us what you're measuring. If you look at issues 110 and 111, it appears that 30 minutes of that time is spent in Packet provisioning, but it's not clear whether you capture the same amount of time when you're doing the Kubernetes deployments; it's not clear to us exactly where that time is being burned up. We'd need to take a closer look at what you're doing, but I think there are probably better deployment methodologies you could use that are much faster, especially if you're only doing a Neutron deployment.

Yeah, the results look very high, but if we were to test just the OpenStack deployment time, it would probably be more like 20 to 30 minutes, and Kubernetes would be a lot lower as well. We were measuring both from the same starting point, OpenStack and Kubernetes alike. Where most of that time builds up is that we have to do reboots on the Packet nodes, the GRUB updates, as well as provision the Packet nodes and do the VPP vSwitch installation and configuration, and that adds about 30 minutes.
Part of it may be when we're able to deploy the different components, and there are some limitations there, like the networking-vpp setup and the vSwitch on OpenStack: we set those up at a different time than we do for Kubernetes, simply because you can't do it any earlier. For OpenStack, there's some infrastructure setup that needs to be created ahead of time so that you have all the information back from the system to use as input for deploying OpenStack. So that ties in with what Denver was saying. The other item to note, Chris, is that we had some limitations on the OpenStack deployment method. Right now we're using Chef OpenStack to do the deploy, and we're already aware that there are other deployment methods, for example ones that use containers for deploying the services, that can be very fast; that wasn't really an option here. It would probably be a good idea to say here are different ways to deploy, and maybe even show here's a Chef-deployed OpenStack and here's another.

I would go so far as to say that the Chef deployment tooling is probably some of the most poorly maintained deployment tooling that we have within our community. I know the major distribution that depends on it is looking at other options. So I might bring some of this back to our community and see if we can have someone look at the tooling and offer some feedback, because these comparisons, to me, feel a little bit pathological, and I would want to make sure we're presenting these measurements in as fair a light as possible. You don't know whether things are being built or whether you're downloading packages; downloading and installing a container image is much faster than installing a whole wealth of packages across the system, since you're essentially taking out the build time. So it's hard to tell if it's actually an apples-to-apples comparison.

Absolutely. So it sounds like the first step would be more visibility on the stages of what's happening. I didn't drop it in here, but I have another slide, and we need to update the README, that actually goes through the stages and talks about where this kicks off, where Terraform runs, and where we eventually use Ansible to provision some of the pieces. We'll be pushing that to the docs, so probably let's look at what we do now first, and then we can talk about either direct improvements or other paths. And I hear you on Chef being poorly maintained. Anyway, I'd love to hear more, Chris, and if you could follow up, maybe outside this call, on how we can improve the OpenStack side, I would love that.

Sure, I'd be happy to do that. And sorry to kind of jump in on it; we just didn't really have a chance to look at the numbers until yesterday, and you'd made the announcement.

Absolutely, I understand, and I'm happy to hear the feedback. If you'll ping me on Slack, I can invite you to the CNF testing dev channel where we're focusing on this, and we can get you going there and also on the GitHub. We'd love to have improvements; we definitely want it to be a fair comparison and to talk about the options people have out there.

Cool, thanks, I appreciate it.
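On the visibility point, here is a small sketch of how per-stage timings could be captured so a comparison shows where the time actually goes; the stage commands are placeholders, not the testbed's real entry points.

```python
import subprocess
import time

# Small sketch of capturing per-stage deploy timings (provisioning, config,
# VPP install, etc.) so deploy-time comparisons show where time actually goes.
# The stage commands below are placeholders, not the testbed's real entry points.
stages = [
    ("provision packet nodes", ["terraform", "apply", "-auto-approve"]),
    ("configure nodes",        ["ansible-playbook", "site.yml"]),
]

timings = {}
for name, cmd in stages:
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    timings[name] = round(time.monotonic() - start, 1)

for name, seconds in timings.items():
    print(f"{name}: {seconds}s")
```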
Okay, let's see. If anyone else would like to join in on this, there's the twice-monthly meeting that Lucina mentioned earlier, the CNF Testbed BoF, starting on March 4: first and third Monday, 8 a.m. Pacific time. We'll be talking about things like what we were just discussing, use cases that could be implemented, and whatever else we want there. Feel free to open issues. And there's the CNCF Slack channel; I don't know if it's been renamed to cnf-testbed yet, but it probably will be at some point. Thanks, everyone. And I think we have, HH, are you here?

Is my audio coming through? Yeah. Awesome. I just wanted to send out an invite. I've been working a lot with the Kubernetes testing SIG and the test-infra they maintain; they use a lot of the tooling that drives how the Kubernetes project works, including the pieces that automatically approve and merge, all the slash commands, Prow, and the infrastructure for displaying things on TestGrid. We've spent some time helping projects that are not part of the funded Kubernetes project on the Google infrastructure, particularly making use of Packet. That's one of the things we needed to do for kind, if you've seen it, which is Kubernetes in Docker; it's a really interesting project they've done with a single binary, where kind builds Kubernetes from source and then deploys it and sets up the containers. We have some of their CI infra running on Arm, so thanks to Ed and crew at Packet, we now have a few Arm boxes available. Getting people to the point where they're successfully integrating the various features of CI into their project is something our group is obviously passionate about, and since we've developed some expertise, I'd like to offer to pair with folks, so that we're not necessarily writing all the infra for them, but pairing with them and documenting what we do together so that others can gain more momentum in getting CI integrated for their various projects. We'd also love to spend some time getting to know, and helping with, the onboarding for CNCF.ci as well as the CNF Testbed. That's just the invite, if anybody's interested; otherwise I'll be reaching out to people I think might be.

Is there a project page or anything for getting going on this?

Not yet. This is something where we've just had people from various teams asking for help. We've also been watching the community discussions, people asking how do we get started using the infrastructure from the CNCF for our CI and how do we use Packet with our CI. So we're seeing this need in the community, and I think as a CI working group we need to respond to it. I'm just offering to pair and start responding to that request from our community, and seeing if anybody else is interested in doing that with me.

I've heard of sending requests to a CNCF.io address, maybe it's a help desk mailing list, as an initial start for people requesting to work on things; you could look at that. Otherwise, is there a mailing list or somewhere for people to reach out? I don't know if you want issues opened on, say, APISnoop for that, but somewhere people can get started.

This is kind of separate from APISnoop; I definitely think it falls within the CNCF CI working group. Okay.
And I see that the help desk is one of those places where we can see the requests, but we really don't have accounts for some of our working groups, and I don't think we're focusing on it. I would like to see a more intentional effort, and I'm trying to help in that regard by providing some of the pairing, some of the mentoring, and creating documentation.

Are you suggesting that we coordinate with the help desk to see the requests coming in?

Yeah, I'm just trying to figure it out; I don't know. I'm just dropping this slide in, and it sounds like the mailing list in general is good. The help desk I was thinking might be helpful because Chris, Dan, and other folks are already pointing projects somewhere if they're interested or have a need for help, and there's not a specific place for them, so going there is good. Anyway, those are my thoughts, but I don't know if anyone else has others.

Would you say this is part of the rebranding?

Well, on the rebranding, I wasn't really a part of that portion; it's news to me as of today. My initial intention in registering cncf.ci was to provide this type of thing for all of the CNCF projects, so I'm trying to find a way where that fits within the CI working group. I'm trying to navigate that and say this is something the CNCF and our working group should be providing. Is it a sub-thing of cncf.ci? Because as I've been working on things, I've set them up as subproject.cncf.ci. Is this part of the CNCF CI focus on the community? But with the pairing, how do I help, how do we get people working together and documenting this, and where does it sit within the community? It's maybe a longer discussion; I just wanted to get the ball rolling. We can follow up further on the mailing list.

And I would say, since these conversations for everyone here have been all over the place with different groups, that the CI working group itself should be thought of as separate from the dashboard Lucina was showing earlier. The dashboard happens to have that domain, and I know the naming makes it look as if it's part of the working group, but the dashboard work has been specific: it's not a rebranding of the working group, it's a rebranding of that dashboard and of the focus of what it's trying to show, which is definitely different from what you're talking about. What you're describing, the pairing and working with projects, would be an additional project under the working group, and from the working group's perspective that sounds great. I don't know where the conversation should be held; the first places I can think of right now are this public mailing list tied to the working group, and maybe its GitHub. As far as where the working group itself is going, I'm not sure, because the TOC has just redone things. If you look at the recent TOC meetings, there's been a lot of mailing list discussion and several documents from the TOC on what working groups are going to be and where they're going, and the CI working group has been labeled as not a working group; it's something to the side of that right now. It may become a working group, and if it does, then I think what you're talking about fits under it, but it's hard to say; it's different from the dashboard right now. Here's that service desk link.
There's actually a GitHub for that, with emails and other information, for anybody interested in CI responsibilities, so talking there and seeing what's available would be a good start. I think offering those extra services is a good thing for the community, and reaching out and trying to work with them would be a good place to do it. Thanks.

Is there any channel or place in Slack, just for conversations, that you want me to point people to?

I think the working group's channel is actually on the CNCF Slack; that's probably the place for it for now, plus the mailing list.

Okay, well, I'm not quite sure what the CI working group channel is, so I'll just drop that here for now, along with the mailing list. Thanks, Chris.

Well, it's just about two o'clock. Taylor, if you could go to slide 49, how to connect with the CI working group. Our next call will be on Tuesday, March 26. Please subscribe to the CNCF CI mailing list and join the CNCF CI channel in Slack. You can also find us on GitHub and Twitter, or send us an email. Thanks so much for joining the CNCF CI working group. Thank you. Have a good one. Thanks, everybody.