Hi, this is Ed. Just checking my audio. I'm going to mute in a second.

Howdy. How are you doing, Ed?

Doing pretty good, thanks for asking. I'm in a slightly noisy place, so I will mostly stay muted unless I'm talking. I'm on the Michigan North Campus, in the student center here, soaking up the student atmosphere.

Oh, nice. Hopefully that gets the creative juices going.

Yeah, I'm planning to sit in on an advanced networking seminar in a couple of hours, so I've been reading networking papers and seeing what research looks like these days.

Cool.

I figure if I ever make eight figures' worth of money, I'll quit and get a PhD.

Okay. There you go. At that point, you just pick whatever topic looks interesting and you PhD in it.

Exactly. Right. Well, it's time, so I'm going to mute, but talk to you soon.

No worries. We'll give it about five minutes here, and then we can get started.

Hi, this is Lucina. Howdy, chat and Zoom chat. I'll share the notes for today's meeting. Thanks in advance for adding your name and contact information, and please do if you haven't yet. Also, if you have any announcements or items that you'd like to address in today's meeting, please feel free to add them to today's agenda.

Hey, Lucina, this is Jared from the Rook project. A quick question: is there any particular format that you would like agenda items added in? Can they just be added at the bottom, or is there a community discussion section to add things to?

Sure. Hi, Jared. Nice to see you. Yeah, you're welcome to add yours. Let's see. I'd like to go over the next iteration of the CNCF CI features and roadmap, and also share my screen to show you what I'm looking at here. We also have a couple of slides added from the Alibaba Cloud folks, and you're welcome to add yours right below that.

Cool. Thank you, Lucina. Appreciate it.

You're welcome. And do you have any time limits? Are you able to hang out with us for the hour?

I actually need to take off at 11:45 to get to a noon engagement.

The meeting is pretty full today, but I think that we should be good at that point. Let's see. We'll time-box the Q&A after looking at the next iteration of CNCF CI to make sure we get to yours. How long would you like to spend for the discussion?

Oh, I think it's only just a couple of minutes, because there had been some previous discussion on an issue in GitHub in the cross-cloud CI repo, and I just wanted to bring it back up because it had kind of wilted, and pick it up again on January 22nd. We can take it offline after that, probably.

Okay, that sounds good. We'll get started in just a couple of minutes.

Looks like we have a "beyond top secret" person joining. I'll ask who this is.

Great, it's 11:05, and we can get started. Thanks for joining the CNCF CI working group. Today is January 22nd. Just a quick note: this meeting is being recorded and will be shared to the Cloud Native Computing Foundation YouTube channel right after the call. The CNCF CI working group meetings are monthly, on the fourth Tuesday of the month at this time, 11 a.m. Pacific time. There's still time if you'd like to add anything to the agenda. The link is here and also shared in the Zoom chat. If you would like us to share it again, please post in the chat and we'll do so. Thank you.

So this is our first meeting in a few months, due to a couple of events that the group has been attending. To recap:
We attended KubeCon China in Shanghai, and there's a link here to the presentation that was given, if you would like to take a look at the slides: the CNCF CI intro presentation was given by W. Watson. We also have "Discovering the Untold User Stories of Kubernetes with Applied Anthropology," which was delivered by Hippie Hacker. You're welcome to click through and see that talk.

We also attended KubeCon Seattle and the co-located event, the FD.io Mini Summit, where a talk on CNFs was given. At KubeCon Seattle there was an intro to the cross-cloud CI as well as a deep dive, and we've got YouTube links here for both presentations. Andrew from VMware gave a presentation on adding support for new platforms, and you're welcome to click through to see that presentation there.

Upcoming events that we are looking forward to include Mobile World Congress in Barcelona. This is for the CNF project; we're preparing a presentation and README for that.

Oh, great, someone added Linaro Connect. Would you like to tell us a little bit about that event?

Yes. This is Ed from Packet. I have a proposal in for a talk about CI systems and scheduling in CI systems, where I hope to draw from a number of CI environments, including cross-cloud.

Thank you. That sounds really interesting. Let us know if there's anything we can do to help with that.

In April, we'll be attending the Open Networking Summit in San Jose, California. We've submitted a few CFPs, and we're crossing our fingers and toes and hoping to hear back. One of the topics for the cross-cloud project will be cross-group collaboration, and how and why we added the Linux Foundation project called ONAP to the CNCF CI dashboard. We'll talk a bit more about that idea in general when we take a look at the next iteration of CNCF CI planning.

In May is the next KubeCon + CloudNativeCon Europe, in Barcelona. The CFP window for that event has closed, but please click through if you are interested in attending that full event, put on by Kubernetes and the Cloud Native Computing Foundation.

Are there any other past events or upcoming events anyone would like to talk about before we dive in to the slides to look over the CNCF CI next-iteration plan?

So, the next iteration of CNCF CI. CNCF CI is currently a status dashboard that shows multiple projects provisioning on Kubernetes and then running on multiple cloud providers. CNCF CI v2 will have a different focus and view. I will time-box this, probably five minutes or less. I'll go over the goals for the next iteration of the CNCF CI dashboard and the high-level features, review the mocks and the roadmap, and then I'll pass it over to my colleague Taylor Carpenter to open the discussion on test results on TestGrid and Q&A. We'll make sure that we've got plenty of time remaining for the Alibaba Cloud demo and the Rook discussion.

All right. High-level goals for the CNCF CI dashboard. These are in line with the CNCF goals. We want to help demonstrate the use of cloud native technologies, promote new CNCF projects, attract more interest to the CNCF, and provide a third-party, neutral spot to validate these projects. We'd like to support and contribute to a sustainable and scalable project ecosystem.
We also want to get feedback from cloud native end users and projects. And we'd like to see the CNCF CI dashboard as a complement to the landscape, which is at l.cncf.io, and the trail map, to kind of say: let's see these CNCF projects in action, and they'll all be listed on the dashboard. Thank you, Taylor, for linking the landscape there.

The first key feature of the new iteration of the dashboard will be to highlight and validate the CNCF graduated and incubating projects. Currently, there are three levels of maturity for CNCF projects: graduated, incubating, and sandbox. We'd like to start by really highlighting those graduated and incubating ones, and we'll start by validating the stable and head releases, meaning the stable release and the head commit from GitHub. The sandbox projects may be added at a later time; we'll start with those top-maturity projects. We'll list them in alphabetical order of graduated, followed by alphabetical order of incubating. And we also have a section for Linux Foundation projects like ONAP.

We plan to reuse build containers that are provided by each project's CI system, which is different from what the current dashboard does now: currently, we rebuild those projects from scratch. So instead, we will reuse the artifacts that are provided by the projects and the projects' CI systems. We'll reuse Helm charts that are provided by project maintainers, and reuse end-to-end tests that are provided by project maintainers.

Secondly, with that, we want to increase collaboration with the project maintainers. We will be reworking the dashboard and the different components so that the maintainers can update the project details through a GitHub PR, and release details through a GitHub PR. We'd like our system to integrate with your external CI system. We're currently using GitLab as the base CI system, and we want it to be a flexible system: if you use GitLab, we'll use that; if you use Travis CI, we'll integrate with that; Jenkins, et cetera. The idea is that we can retrieve the project's build status and container artifacts from your own CI system (there's a small sketch below of what such a status query could look like). We would like to encourage the maintainers to provide Helm charts and smoke tests to run the app deploy phase of the dashboard, and we would also like to work with the maintainers to provide end-to-end tests for the new test phase that we'll see on the new mocks. Hopefully this increase in collaboration will bring an acceleration in seeing new projects on the dashboard. We'd like to support more contributors to the project and share the responsibility for adding and maintaining those projects, to reduce the level of maintenance falling on just one party.

And for Kubernetes, we'd like to demonstrate provisioning various Kubernetes releases onto bare metal with Packet. Currently, we're showing the stable and head releases, and we would like to add support for the release candidate. The new UI also supports more release versions: if we would like to go back one or two versions of Kubernetes, for example, our new UI will allow for that as well. And finally, another key feature is that we'd like to start using kubeadm for bootstrapping Kubernetes on Packet.
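As a rough illustration of the external CI integration described above, here is a minimal sketch of retrieving a project's latest build status, assuming a GitLab-hosted project. The GitLab pipelines endpoint is real, but the project ID and token are placeholders, and a full integration would also retrieve container artifact references.

```python
# Minimal sketch: poll an external CI system (GitLab here) for the latest
# build status of a project, as described in the v2 integration plan.
import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT_ID = "12345"        # hypothetical project ID
TOKEN = "REPLACE_ME"        # hypothetical access token

def latest_pipeline_status(project_id: str) -> dict:
    """Return the status and commit SHA of the newest pipeline on master."""
    resp = requests.get(
        f"{GITLAB_API}/projects/{project_id}/pipelines",
        params={"ref": "master", "per_page": 1},
        headers={"PRIVATE-TOKEN": TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    pipelines = resp.json()
    if not pipelines:
        return {"status": "unknown", "sha": None}
    newest = pipelines[0]
    return {"status": newest["status"], "sha": newest["sha"],
            "url": newest.get("web_url")}

if __name__ == "__main__":
    print(latest_pipeline_status(PROJECT_ID))
```

The same shape of query would apply to Travis CI or Jenkins, just against their respective status APIs.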
I'll go through a few of these mocks. Slide 15 will hopefully be updated before we get there. Please pardon if they're a little bit blurry; these are images that I've stretched to kind of work in this platform of Google Slides. So here is the mock-up for the next iteration of the CI dashboard overview screen.

At the top, we've got the test environment showing the Kubernetes stable release running on bare metal with Packet, and that initial stage is a success. After that initial stage is successful, the graduated CNCF projects will show their releases: the latest stable and the latest head commit from GitHub, the build status, the deploy status, and then the end-to-end test status. We have this mock showing Kubernetes at the top for the test environment, followed by the graduated projects in alphabetical order (Envoy and Prometheus currently), followed by the incubating projects in alphabetical order, followed by Linux Foundation projects (ONAP currently).

The badges indicate a success or a fail. If the build stage failed, then the deploy and test stages would show gray badges; they would not be clickable, and we would not run any action if that initial stage failed. Likewise, if the build passed and the deploy failed, then we would not show the end-to-end test. (There's a small sketch of this gating rule after the roadmap below.) And in the beginning, if we don't yet have the code for the end-to-end tests, we may see that this column is all N/A, and hopefully that would prompt a discussion and a collaboration with the project maintainers to work on adding those end-to-end tests.

The next slides show the Kubernetes release selector. Here, stable v1.13.1 has a down arrow. I click on that down arrow and it opens up a selector dropdown; I have the options to choose the latest head commit or the release candidate, which would be a new feature. If I hover over release candidate, then it switches to the release candidate. Are there any questions about the mocks or the high-level key features and goals at this point? Okay, I'll continue.

The roadmap: how are we going to get there? January, here we are: we are planning and announcing those changes. February, next month, we plan to reveal the new UI of the project-focused home screen. We also would like to implement the smoke test after the app deploy, and plan for the integrations and kubeadm. For March, we'll be preparing for the Open Networking Summit in San Jose, so we'd like to update ONAP, and we'd like to implement the external integration with the projects' CI systems. We'd also like to provide the documentation on how project maintainers can get started on adding their project to the dashboard. And in April, we hope to implement the new feature of the Kubernetes release candidate.

So here is our work-in-progress roadmap. February: we'll be removing the cloud deployments from CNCF CI on Monday, February 25th, and our next working group meeting will be on Tuesday, February 26th. We also plan to split the app deploy and end-to-end testing; we currently have them in one stage on the CNCF CI that's in production now, and we'll split that so that we have those two different columns. And then we'll do some planning for the kubeadm and smoke test activity, and things like availability and security can be added after that. At the end of March, preparation for the working group meeting, and then the Open Networking Summit at the beginning of April: updating the ONAP release, starting end-to-end and smoke tests, starting on the functionality to add project details and release details, collaborating with those project maintainers, and creating that README as well. And then later in the year, we hope to add that release candidate for Kubernetes.
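Returning to the badge-gating rule from the mock walkthrough above: a failed stage grays out every downstream stage, and a missing end-to-end test shows as N/A. Here is a toy sketch of that rule; the stage names and badge states are illustrative assumptions, not the dashboard's actual code.

```python
# Toy sketch of the badge-gating rule described for the v2 mocks: a failed
# stage grays out all downstream stages (they never run), and a stage with
# no contributed test yet shows as N/A.
STAGES = ["build", "deploy", "e2e"]

def badges(results: dict) -> dict:
    """Map raw stage results to badge states for one project release.

    `results` maps a stage name to True (pass), False (fail), or None
    (no test provided yet).
    """
    out = {}
    blocked = False
    for stage in STAGES:
        result = results.get(stage)
        if blocked:
            out[stage] = "gray"     # an upstream stage failed; never run
        elif result is None:
            out[stage] = "n/a"      # e.g. no e2e test contributed yet
        elif result:
            out[stage] = "success"
        else:
            out[stage] = "fail"
            blocked = True          # downstream stages won't run
    return out

# Example: build passed, deploy failed, so the e2e badge is grayed out.
print(badges({"build": True, "deploy": False, "e2e": True}))
# {'build': 'success', 'deploy': 'fail', 'e2e': 'gray'}
```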
We also have some presentations for KubeCon EU: an intro to the CNCF CI project, as well as a deep dive on how to add new projects. Then September will be the Open Networking Summit, where we may update ONAP again; maybe we'll have another Linux Foundation project on the dashboard, time will tell. And then in November, we've got KubeCon in San Diego, where we hope to do another intro and deep dive on CNCF CI and its current state.

At this time, I'm happy to continue sharing my screen, mute myself, and hand it over to Taylor to start the discussion on adding the cloud provider test results to TestGrid and moving away from showing the cloud provider deployments on CNCF CI.

Thanks, Lucina. So this is part of complementing other projects and efforts that are going on, like the Conformance Working Group and SIG Testing. We've been actively talking with folks about TestGrid: how we could integrate directly, maybe pull results, help with different views. The dashboard that exists right now for TestGrid goes into lots of detail that helps the developers on that side, especially Kubernetes, and then how results are set; there are a lot of different items on that, and then you can get into things like how deployments are happening. There are a lot of different efforts around that, including the different cloud providers creating the integrations that can go directly into the new plug-ins that have been split out. So we're trying to make sure that what we're doing is complementing, or maybe exploring some areas that aren't covered yet, and that's what this is about. We're hoping to get feedback going forward on where we can help best, from a cloud provider standpoint and for people that are wanting to test or look at things.

Right now, as Lucina was just going over, we're moving the cloud deployments off of the view, so that's not the focus, and then we'll be helping any cloud providers that are currently on the dashboard to move over: to submit, to make sure that they're able to run the conformance tests, and to push the results up into TestGrid, which right now means Google Cloud Storage buckets, using their specific formats for sharing the data. So we'll be helping with that and seeing how that goes. That's the overview. I don't know if anyone on the call has comments or questions about this particular part of the transition.

I have a very quick question. What will the CNCF cloud provider dashboard look like after we migrate all the data to TestGrid? Will there be no more cloud providers, or will there still be something like that, but with the data coming from TestGrid? What will it look like?

So, the next iteration that's going to be released for the cncf.ci dashboard, I think that's what you're referring to right now. Other people could run different dashboards; the source is available to run all of that. But the actual dashboard that's running on cncf.ci right now will not have cloud providers as a focus. At some future date, there may be another screen or a view that could show that. I don't know what that's going to be, but right now there won't be cloud providers there. On TestGrid, there is a view to go in and look at those; it's tied in with a lot of other information. The results themselves would be submitted directly to TestGrid, but at this time, there won't be something similar to what's on the current dashboard.

I see, thank you.

Any other questions?
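As an illustration of the submission flow Taylor describes, here is a minimal sketch of uploading one conformance run to a GCS bucket in the started.json / finished.json / junit XML layout that the Kubernetes test-infra tooling and TestGrid consume. The bucket name, job name, and metadata fields are placeholder assumptions; the real bucket, credentials, and required fields would come from coordinating with the TestGrid maintainers.

```python
# Minimal sketch: push one conformance run's results into a GCS bucket in
# the started.json / finished.json / junit XML layout TestGrid reads.
# BUCKET and JOB are hypothetical placeholders.
import json
import time
from google.cloud import storage

BUCKET = "example-conformance-results"   # hypothetical bucket
JOB = "ci-example-cloud-conformance"     # hypothetical job name

def upload_run(build_id: str, junit_path: str, passed: bool) -> None:
    bucket = storage.Client().bucket(BUCKET)
    prefix = f"logs/{JOB}/{build_id}"
    now = int(time.time())

    # Marker written when the run starts.
    bucket.blob(f"{prefix}/started.json").upload_from_string(
        json.dumps({"timestamp": now}))
    # The junit XML produced by the conformance test run.
    bucket.blob(f"{prefix}/artifacts/junit_01.xml").upload_from_filename(
        junit_path)
    # Marker written when the run finishes, with the overall result.
    bucket.blob(f"{prefix}/finished.json").upload_from_string(
        json.dumps({"timestamp": now,
                    "result": "SUCCESS" if passed else "FAILURE"}))

upload_run("42", "junit_01.xml", passed=True)
```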
Taylor, is there a... Hey, Taylor, go ahead.

Oh, I'm sorry, this is Dan Kohn from CNCF. Maybe I could just provide one piece of context that's missing from this, which is that the original driver for making this change to 2.0 is that we want the responsibility for ensuring that the clouds are working correctly to be on the clouds. We don't feel like we're the right body to be ensuring and fixing things, and in many cases it has required coordination that's challenging for us to pull off. So if each project or each cloud or each provider is responsible for it, then they could provide the data to us and we could display it. But the question then is, well, what are we going to do with the data that's different from TestGrid? So I guess I would leave open the idea of us displaying something in the future; we're not ruling it out, but for now, we don't see that as the core value that we could provide.

This is Ed. One of the things that was nice about the CNCF CI system as it stands right now is that you could get a pretty good idea whether a problem with a project was specific to that cloud, maybe that cloud having a bad day or something specific to the cloud, or whether it was specific to the project. Does TestGrid have, I guess, views? It certainly would have all the data, but does it currently have views that would allow you to do that sort of cross-cloud comparison within the TestGrid infrastructure?

They don't. As far as I know, there's nothing in TestGrid that'll give you a quick overview of all the projects. The data's there, so you could go through and look at the health of, say, three or four different cloud providers, whatever you're wanting to look at, but there's no quick overview screen. And tying in with what Dan was saying, there have been requests from a lot of folks that work with TestGrid, and from different projects, for a view that does that. That may be a completely different type of dashboard; it may not be at cncf.ci, it could be wherever, but there's been a desire to do that, and that could happen. There's some interest, and we'll be talking with them.

And then, as far as something directly on the dashboard, to continue with what Dan was saying: if we can have cloud providers that maintain, I won't say help, but actually maintain running the tests and submitting the results, and we know we already have a home for those where other people are doing it, that's TestGrid, we could potentially have a view of the test data, and specific parts that maybe the community as a whole says we're really interested in. Maybe that's a view that we do, and it would pull from there, but it would still, like Dan is saying, make sure cloud providers can fix issues that they identify as their own, because they have control over that. So if you have thoughts on what information or views would be useful, we'd love to hear those, so we can think about that as we're going forward.

Thank you so much. Any other questions? Very good. The next item on the agenda: Harry, if you're available, I'm happy to keep my screen share, or I can release it if you would like to share your screen to talk about the CI dashboard over at Alibaba Cloud.

Yeah, thank you. And I've stopped my screen share.

Perfect.

Okay, so I think you guys should be able to see my screen.

Yes.

Okay. So today I would like to share a very simple demo about how we are running something very similar to the CNCF CI dashboard in Alibaba Group.
Actually, the background is that we have a lot of teams in Alibaba Group, including Alibaba Cloud and our affiliated companies; we have a lot of affiliated companies in Alibaba Group, and they have their own Kubernetes clusters. And we noticed that some of our teams, they actually, you know, hack a little bit on the Kubernetes code, and they begin to maintain their own version and diverge from the upstream, which is a situation we are trying to avoid. So last year we created a cross-team CI dashboard in Alibaba Group, and we required every team in Alibaba to upload their end-to-end test results, I mean, upstream end-to-end test results, to the system. So we can know which teams are diverging from the upstream, and we will cooperate with them to make sure that they keep up with the upstream, and we will see if there is anything to contribute back to the upstream, or anything we need to re-implement or refactor into a plugin, something like that. That is why we have a cross-team CI dashboard in Alibaba, and it currently works very well.

The UI of this dashboard is very simple. It's basically very similar to the cross-cloud CI dashboard, except that we don't have cloud providers there; we have a lot of teams there. We have a team named Alibaba, we have a team named Alibaba Cloud, and we also have the data table for all the other teams inside Alibaba.

I have a very small, very quick demo about how this cross-team CI works. As you can see here, it is actually a very simple front-end website, and every table, every piece of data in this cross-team CI is actually a CRD in our Kubernetes cluster. So, for example, we can define a category named Kubernetes, and we can define test result projects with names like "conformance test" or "end-to-end test." We just create these CRDs in the Kubernetes cluster, and then we have the category and the projects on the website of the CI dashboard. It's very easy. As you can see, if you create a CRD named "conformance test" and a CRD named "end-to-end test," you will have two dashboards here.

After that, we also add our own providers, which are actually the different teams. For example, we have a team named Alibaba, and a team named Aliyun, that is Alibaba Cloud, actually. Every team inside the Alibaba Group will add themselves as a provider on this cross-team CI, and after you create these CRDs, you can see that we have more providers in our cross-team CI. Once we have all the teams on board with our CI dashboard, those teams are required to upload their test results. And again, the test result is also a CRD, which fills in specifications like: what is your project, what is your cluster version, and what is the test result from your team. We also require them to link a URL to their internal CI system; maybe it's Jenkins or Travis, we don't care about that, but it should be the URL of the build. You can see we actually added it as a field named "detailed reference." After we create these test result CRDs in our Kubernetes cluster, we see the real test results uploaded from the different teams. The red means it did not pass; there's some error here because some test failed. And we also have the outbound link, which goes to the detail page of that team's internal CI system in Alibaba Cloud, so we can see what happened to the Kubernetes cluster of that team, see which tests failed, and try to help them.
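Harry's flow (create a CRD-backed test result carrying the project, cluster version, pass/fail status, and a link to the team's internal CI build) might look roughly like this with the Python Kubernetes client. The API group, version, kind, and field names below are invented for illustration; only the general shape of the record comes from the demo, and the real resource definitions are Alibaba-internal.

```python
# Hedged sketch of creating a test-result custom resource like the ones in
# Harry's demo. The group/version/kind are hypothetical; a matching CRD
# would need to exist in the cluster for this to succeed.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

test_result = {
    "apiVersion": "ci.example.alibaba.com/v1alpha1",  # hypothetical group
    "kind": "TestResult",                             # hypothetical kind
    "metadata": {"name": "aliyun-e2e-20190122"},
    "spec": {
        "provider": "aliyun",              # the team uploading the result
        "project": "end-to-end-test",
        "clusterVersion": "v1.13.1",
        "passed": False,
        # Link back to the team's internal CI build (Jenkins, Travis, ...).
        "detailedReference": "https://jenkins.internal.example/job/e2e/42/",
    },
}

api.create_namespaced_custom_object(
    group="ci.example.alibaba.com",
    version="v1alpha1",
    namespace="default",
    plural="testresults",
    body=test_result,
)
```

Storing results as custom resources like this means the dashboard front end can simply list the resources through the Kubernetes API, which matches how Harry describes the website being driven entirely by CRDs.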
So, yeah, this is basically how our cross-team CI dashboard works in Alibaba. You can see it's actually very similar to the CNCF CI dashboard, and that's why we are trying to see if there can be cooperation with the CNCF CI working group.

One of our primary goals is that we want to add Alibaba Cloud as a provider in the cross-cloud CI. But as we have already discussed offline and today, we can see that the cross-cloud CI is trying to use TestGrid as the new way to present the cloud providers. So here are some very quick questions that maybe we can discuss later. What will the CNCF CI dashboard look like? I already asked this question earlier in the meeting. And the second question is: will the CNCF CI dashboard collect data from TestGrid, or, you know, maybe we don't care about that, and we just use TestGrid as the test results dashboard instead?

The second goal of our collaboration, and what we want to look at, is that we want to install the CNCF cross-cloud CI inside Alibaba, to serve as the next version of our cross-team CI dashboard. Because, you know, we don't want to maintain something which is basically exactly the same as what the CNCF is doing, and we can see that there is already an open source project in the CNCF. We are trying to see if we can reuse it and contribute to this project, so we don't need to maintain a very simple project inside our own teams; you know, we are not very good at maintaining this kind of CI system. And that's why we also have a very quick question about what the timeline will be for the TestGrid framework to be fully open source, because we have actually cooperated with the Kubernetes test-infra team a few times, and we find that the TestGrid framework is not fully open source yet, so we cannot install the whole framework inside Alibaba. But we would like to see what the timeline will be and what we can do to help, and we are very happy to contribute to this kind of effort. Okay, thank you. That's what I was trying to demo today, and I will release the screen. So if you guys have any questions, please raise them.

Thank you, Harry. Taylor, do you have some answers to those questions from Harry?

Let me unmute here. Okay, well, we kind of went over, I guess, how it looks: we have those mocks for how the view of the dashboard will be. The current view that we have is, again, available if someone wanted to fork it and use it as is; the dashboard itself is pretty flexible as far as adding new clouds. But the new view is what we were just showing in the mocks. I'm not sure exactly what that will look like if you came in; you may have to go back and get the older, current version if you wanted to have what we have now. I think either way, the new view or the other, it is about showing status at different stages, for what could be projects or could be groups, so it's still probably pretty applicable to what you're doing.

As far as Alibaba as a cloud provider, we're happy to help with the integration and TestGrid and collecting data. There won't be any dashboard at this time other than TestGrid's, so it wouldn't be like what we're seeing now; in the future, I don't know what that would be.

And then, as far as your second item, about trying to use the different components:
The actual cross-cloud CI project, when you look at it in GitHub, is a lot of different pieces. There are different repos, and those different components could be used separately from, say, the dashboard. Some of those could be useful for running different parts of, maybe, the Kubernetes tests or whatever, and submitting that. On the TestGrid framework: are you asking about how we would integrate with it, or how we would show stuff from it? On your final question here, as far as the timeline for the TestGrid framework to be fully open source, are you referring to helping submit results for Alibaba, or to viewing results, like on a new dashboard?

Yeah, so there are actually two questions. The first question is, I'm trying to figure out how we will collect data from TestGrid in the next version of the CNCF CI dashboard, and we're also looking at whether there's anything we can help with. The second question is, as I said, we want to install the CNCF cross-cloud CI inside Alibaba, and that means every part of this pipeline should be fully open source so that we can install it inside Alibaba. I know the cross-cloud CI is already fully open source, but as far as I know, TestGrid itself is not fully open source yet, because we found something that is missing from the current GitHub repo of TestGrid. So the second question is, what will be the timeline or the plan for TestGrid to be fully open source in the future? I think it will happen this year, as Dan also discussed this part with me, and we would like to see a more detailed timeline, or anything we can help with, because we would be happy to install the whole pipeline inside Alibaba.

Okay, I think we... maybe go ahead, Dan.

Oh, just that if TestGrid is never going to be open sourced, if that sort of came out, I would be open to CNCF funding an open source replacement for it. But having spoken to Google about it, it just doesn't seem like there's anything proprietary or sensitive or anything like that; it's just the tedious issue of it being tied to internal systems, and them not having the resources to cut those ties and make it portable. But then, as we mentioned earlier, I am also open, particularly if we had help from some of you, to designing and funding a third-party interface to TestGrid that would pull the data from it. So I think there are a lot of areas for collaboration here. My hope, though, is that TestGrid will be open sourced, and then you'll feel more comfortable using that as the repository for all the data.

Okay, I see. Yeah, that makes sense.

I'm happy to speak to a few of these questions. I wanted to check, though: Jared, have you run out of time, or are you able to have any discussion? I know that you were going to have to leave.

Oh yeah, I could just bring up what I wanted to and then maybe chat for a couple of minutes. But I don't want to derail the current discussion that's going on for TestGrid right now; that seems to be more broadly applicable to the attendees of this meeting. So we could take this particular topic that I wanted offline completely. That's okay too.

Okay, well, I guess, Jared, on yours, as far as Rook: what we're looking at with this new iteration of the CNCF CI is making it a lot easier for projects to add themselves and do the integration with external systems. So that's part of what we're planning.
And as far as the roadmap, we're going to be having public documentation for how the different pieces can be added and updated on the screen, and where that's going to come from. There will be pieces like that if we need to do a direct integration with a new CI system; so if we talk with Jenkins right now, we're going to be adding support to talk with, you know, different ones based on the project. So if it's all right to take that offline, we can talk specifically about Rook, because we are interested in doing that.

Yeah, that's fine. And then the one comment I would make right here in this forum is that, from what I saw of what Lucina was talking about with the 2.0 effort, it looks like that's going in the direction that a lot of the projects hosted by the CNCF, you know, the incubating projects and the graduated projects, would probably like to see it go. It looks like there's an effort around being able to add those projects and integrate more easily into that environment, so that we have all of the integration tests running for all the CNCF projects. So what I saw today is actually what I am very interested in seeing, and in helping the Rook project, just like any of the other CNCF projects, integrate.

And then the final thing is that it wasn't entirely clear, you know, if it's possible for the CNCF to be hosting the CI environment completely. Because, you know, we're running our own Jenkins server right now, and we don't have the resources, really, or the expertise to be keeping that up to date and babysitting it and hosting it very effectively. So if the CNCF has the ability to host the environment, we'd be okay with migrating our current Jenkins setup over to what works better for the cross-cloud environment or the effort there, and doing some of that porting and migration work, in order to have the CNCF and, you know, the cross-cloud CI experts there being able to host the integration tests on a more day-to-day basis. And then we can take the rest of that discussion offline. Those are the only comments I wanted to make today.

We've heard, I guess, from several different projects similar things as far as hosting. That's a pretty big effort, I guess, is the short of it. And I think there is an interest from different folks that are helping provide CI environments, including on the Linux Foundation side, in figuring out a way to do that where it doesn't require a team of 50 just to keep it running, as is desired. Ideally, it's more of a self-serve model. So that's not something I would say we're going to be looking at right now for the way this is running; we'd want to say, you have a Jenkins system, let's integrate with that. Maybe in the future, what I would like to look at would be how we have something that's more of a template or a skeleton framework for the CI system, and it's self-serve, so you can come in, and if you put stuff in there, then it runs. Ideally, it's similar to any project where you drop in your CircleCI or Travis config and you go configure the pieces that you care about; maybe that's the clusters, so you don't have to worry about the cluster, or you just say "enable the cluster." There are definitely some ideas that we can talk about there, and what that would be. Dan, do you have any comments or thoughts on this particular item?
No; I mean, I would mention that the Packet hardware is available to projects via the Community Infrastructure Lab, but CNCF would definitely be hesitant about taking over management of Jenkins infrastructure. If anything, I'd say we're trying to get away from that and encouraging folks to go with commercial hosting.

Got it. Thanks, Dan, and thanks, Taylor. I appreciate the discussion today.

Great, thanks, Jared. And I'm happy to go back and answer a few of the other questions, Harry, that you had, if you'd like to continue; or if everyone's good, then we can go ahead and continue on. I think we're at the end of the agenda. Does anyone have any other comments or questions? I'm happy to continue with any of this offline on the CNCF CI Slack channel or the mailing list, and specifically on the Alibaba stuff, Harry, if you can follow up with any questions that we didn't answer, we can continue from there.

Hey guys, this is Dustin Oberlo from Oracle. I guess as far as next steps are concerned, my team will probably be following up over email just to get everything sorted out. We had initially been working on the CI code within the Oracle part of the CNCF CI stuff, and we're going to be transitioning that over to a different team. So we need to sort that out internally, and then we'll follow up. I think the email thread has already been started with Lucina, so we'll just use that same one.

Great, sounds good, Dustin. Okay, so the next meeting is February 26th. If anyone has ideas for that, you can post them in the agenda early; slides will be made available. And I think that's it. Thanks, everyone, for attending. I appreciate the feedback and presentations. Thank you very much. Have a good one, and enjoy the rest of your week.