I think we've got a good agenda. We can get started with the upcoming events. Pardon me. Yes. Okay. So today is the CNCF CI Working Group monthly meeting, held on the fourth Tuesday of the month at 11 a.m. Pacific time. Thanks in advance for jumping into the meeting minutes and adding your contact information. If there's anything you would like to discuss in today's one-hour call, please add it to the agenda. I think we have plenty of time. I'll just start by kicking off some upcoming events, and then if there's anything you know of that you'd like to add to this list, that this group would be interested in, feel free to shout it out or write it in. So the next event is in just under a month from now, May 20th to 23rd: KubeCon + CloudNativeCon Europe in Barcelona. The CNCF CI dashboard team will be presenting a 35-minute intro and a 35-minute deep dive for the CNCF.CI project. The CNF Testbed group will be doing an 85-minute intro and deep dive birds of a feather, also introducing the new Telecom User Group, aka TUG, and the Cloud Native Network Functions Testbed. In Barcelona there are two co-located events, well, there are at least two, and here are the two I'm going to mention: the FD.io Mini Summit on Monday, May 20th, and the Linux Foundation Networking group's Cloud Native Network Services Day, also on Monday, May 20th. I personally hope I can split my day between both of those groups, because the lineup is sure to be pretty interesting and exciting. In June, June 24th to 26th, is KubeCon + CloudNativeCon China in Shanghai. There's an intro and a deep dive, both birds of a feather, for the Telecom User Group and the CNF Testbed; they'll be 85 minutes long on Tuesday, June 25th. Are there any other upcoming events anyone would like the group to know about? No worries. This looks really good. Oh, yes. Can I ask a question? Hi, I'm Priyanka.
And I wanted to ask, so this list of events is places where things are definitely happening related to this working group, where people are talking about it. So is it possible in the future for folks who come to this call to collaborate on submissions, et cetera, for conferences? Or does that discussion happen offline, and here we report on everything that's final? Hello. Sorry, did my question really suck? Yeah. So I would say that this call is definitely for open discussion as well as presenting items. And then any events, specifically what we're talking about right here, these are items that seem relevant for folks interested in CI/CD, which would be the whole CNCF CI working group. So if you have anything there, I'm actually going to add one right now. But in general, it would be open discussion here. There's also the Slack channel and several other places to get the conversation started, including a mailing list. Got it. Thank you so much. And sorry, this is my first time on the call. I hope that wasn't a totally useless question. No worries. Happy to have the questions and engagement. Kia ora. This is Jay Hart with ii.coop in Tauranga, New Zealand. I just noticed this morning that there's a CloudNative Summit in Wellington. Just wondering if anybody knows who's hosting or organizing; they don't have any of that kind of information on the page they've posted. I do not know about that one. I can find the conference, but there's not a lot of detail. I've been noticing that there are a lot of these cloud native summit and cloud native transformation events; a lot of people, I think, are going with the name. So I don't think it's necessarily affiliated with the CNCF. It doesn't seem to be. Just looking at the main page for the CloudNative Summit, there seem to be links off that to other ones. There's the CloudNative Wellington summit in August; that's probably the one that you're talking about. It doesn't seem to be CNCF related.
I haven't heard of it myself. What I would say is, if it's not listed on events.linuxfoundation.org, which I posted in the Zoom chat, it's less likely to be a CNCF or at least Linux Foundation event. But that doesn't mean other projects and groups aren't trying to follow the methodology. I don't know of a Linux Foundation or CNCF-specific event in New Zealand yet. I keep hearing about potentials, but nothing official. I was double muted; I think that I am unmuted twice now. Someone mentioned, and yes, of course, there's DevOpsDays in Austin, Texas, coming up on May 2nd and May 3rd. And there's also a meetup, I believe sponsored by CNCF, called Kubernetes Olay, in Austin, Texas this Thursday. I'll try to put a link in here as well; it's on my calendar, but I forgot to put it in the meeting minutes. There are lots of things going on, large and small. I think a lot of the different groups that are involved in trying to do CI/CD in the projects will be represented at ContainerDays, which is in June in Hamburg. I'll add that one, too; there's probably some interest in ContainerDays. And then ideally, some of these will be in a location that works for folks. Well, there are a lot of upcoming events. So does anyone have anything else they'd like to add as far as upcoming events over the next few months? There's a bunch of events that I know of that may not all be focused on CI/CD, but they have some really good talks about CI/CD. So if it's OK with you all, I can add into the document after the call, maybe specific talks at specific events. Yeah, sounds great. I'm sure folks would be interested in seeing this. OK, I guess I'll jump right into the updates on the CNF Testbed; I think that's next, and I should be able to share my screen here. The CNF Testbed is, for those who are not aware, a CNCF initiative, and I'll bring up the repo here.
It's a CNCF initiative to help with the transition from virtualized network functions to cloud native versions. More specifically, I would say it's trying to help vendors, telcos, and everyone using these to adopt cloud native methodologies. So it's not just transferring code; it's helping to rethink how the platforms should work and redesign how you're going to orchestrate them, those sorts of things. Part of the effort on this project is around comparing code: actually taking code that runs on KVM, OpenStack, and other places, traditional packages that were virtualized from physical machines, and getting them to work in containers. Not just that, but reworking them to be orchestrated by Kubernetes and so on, and having them run, so that we can do comparisons. Some of those are functional tests; a lot of them have been focused on performance tests. At this point we have the capability to deploy the entire cluster set for both OpenStack and Kubernetes on Packet. If you create an account and have an API key, you can build out the entire testbed and then run the different test cases to get the results. There's a link on the repo here if you want more info; let me bring this deck up. If you'd like to dig in more, that gives a fuller overview. This is a main deck that we keep updating with different information, performance cases, and how this works. I'm not going to go into all of that right now, but there's more information there if you're interested. We did speak at the Open Networking Summit in San Jose and had different talks on the CNF Testbed. We've been collaborating with a lot of groups, so I'll just mention Network Service Mesh as well as the Linux Foundation Networking group. And there have been a lot of great panels and discussions. The telco industry is slower to move forward with a lot of infrastructure across the board.
But there's a lot of interest in taking on more of the technology and methodologies that we're seeing in the cloud and enterprise world, everything else. So it was really great having a lot of feedback from vendors and end users and telcos and all the different projects. We specifically had a lot of talks with different vendors and the telcos about use cases that are seen in production, and we're trying to see which ones we can help recreate from scratch. The testbed rebuilds, or accepts contributions of, open source use cases that can run fully in this repeatable space: being able to build up a cluster on the Packet bare metal machines and then build the entire test case or use case that you want to look at. And we're looking at what the next production pieces are, so we definitely want more feedback there. The other thing would be sharing tooling and software. Intel has a reference platform; they have all these dev kits for how you can do acceleration for Kubernetes in different areas, with different plug-ins and device plug-ins and such. They have some of that for containers with regard to the telco world. So we're working to collaborate directly with Intel on some of that. They've contributed some hardware that's gone into the Packet machines that we've worked on. We're also working on the tooling, some of the automation for what they're doing with the reference platform, and what ties in and is usable in the CNF Testbed, and we've been contributing that to other projects. We're also working with different projects within the Linux Foundation. ONAP was one of the early ones, but there are other projects involved: OPNFV, and we use software like TRex for packet generation. So we're trying to collaborate with the different projects across all of the Linux Foundation as well as the CNCF.
On this project, there are some white papers that help with terminology and with moving forward on what it means to be cloud native on the networking side, and we're directly contributing to some of those that'll be coming out this year. The Cloud Native Network Services Day, as mentioned earlier, is a mini summit right before KubeCon EU, and we'll be there and helping out on that side. FD.io CSIT: CSIT is a testing lab within the FD.io project, which is actually a large group of projects within the Linux Foundation, and one of the key pieces of software is called VPP. What's interesting about the CSIT lab is they actually have continuous testing of a lot of this software, and we've been able to share test cases and recreate some of what we do on Packet in the Linux Foundation lab. So we keep that collaboration going. We've done some shared presentations at some of the conferences, and we're going to keep going on that. Network Service Mesh has been involved since the start of this initiative, when we began the testbed initiative back in May 2018. We've continued to collaborate with folks on that project, contributing both ways. On the NSM side, I did want to point out they've been accepted as a CNCF Sandbox project, which is really awesome. They may tie in with a lot of other CNCF projects as that goes forward. It's a very interesting approach to networking in the Kubernetes space, and it's applicable to other areas. We've been working with them on use cases; they have a weekly call going into different use cases that could be implemented with Network Service Mesh. The use cases are interesting for the CNF Testbed and other projects, and we're trying to bring the collaboration across all the groups that we've been involved with. There's a glossary that I think is going to help anyone interested in the networking side, and we've been helping with that. There are a lot of folks involved: vendors, end users, and so on.
As far as the CNF Testbed itself, we plan on adding NSM support, which will give us the capability to add service endpoints just like you would, if you're familiar with Kubernetes, when you're asking for a service or wanting to connect something, and how you describe it. It's all using the same terminology and language, and NSM is adding that capability. So it's an alternative approach to something like Multus and other CNI plugins. I won't go into all that; you can go check them out if you're interested. But we'll be adding that support so that we can show how it would work. That's going to allow us to have some more complex use cases, so that we can have very dynamic connections at run time, not just at initialization of the test case: adding in other pods, and adding new interfaces and network connections to a running container. There are some very interesting things there, as well as some use cases we're looking at, like connecting a Kubernetes pod to another cluster, which could be another Kubernetes cluster or an OpenStack cluster. There are a lot of different hybrid approaches that are going to be happening in production, and we want to be able to show what's actually out there in the real world and help people migrate their production clusters and production deployments to support these new technologies. However that happens, it's not always going to be a fast switchover. So it's going to be pretty interesting to show integrations between, I would say, legacy clusters and newer styles. So that's part of what's next. Some specific things: we're going to be adding CentOS support, which is heavily used by telcos. As far as new machines coming up on Packet, which is where we do our initial testing on the bare metal side, CentOS is one of the main choices for telco or network-specific configurations. So we want to add that in. That covers different places; we'll have that support for all the different cluster types and build deploys that we do.
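As a rough illustration of the "asking for a service" idea above, here is a hedged sketch of a client pod requesting a network service through an NSM-style annotation. The annotation key follows early NSM examples, and the service name `secure-gateway` is purely hypothetical; check the Network Service Mesh repo for the current syntax.

```yaml
# Hypothetical sketch: a pod requesting a connection to a network service
# via an NSM-style annotation. The service name "secure-gateway" is made up
# for illustration only.
apiVersion: v1
kind: Pod
metadata:
  name: nsm-client
  annotations:
    ns.networkservicemesh.io: secure-gateway  # requested network service
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
```

The point is that the connection is described in ordinary Kubernetes terms on the pod itself, rather than through a separate CNI configuration.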
We're working on new use cases like SR-IOV and the NSM items we were just talking about, and we definitely want feedback on those. Besides NSM, we'll be looking at other configurations, like using Multus, and we're interested in any type of Kubernetes configuration that could be interesting from a networking standpoint, looking at those as potential test cases or use cases. So if you have feedback there, let us know. And then we want to tie back more into the CI/CD side. What we've been moving towards, and what we'd like to add, is smoke tests for each stage, so that if you bring up a Kubernetes or OpenStack cluster, you know that it's vetted for use before you deploy any test cases or use it for any situation. There's different software on both sides, so we'll be digging more into that. Ideally, we would be able to update different parts of the configurations or testing within the repo and automatically run smoke tests if you flag those commits or branches. So that'd be kind of a long-term goal. Event-wise, there is a twice-monthly CNF Testbed birds of a feather on the first and third Mondays of the month at 8 a.m. Pacific. If you're able to join that, it's open to everyone: vendors, telcos, projects like NSM, and other people will be on that call. We'd love to have feedback there. There's a Telecom User Group that's going to be kicked off in Barcelona at KubeCon, and that one may end up absorbing the CNF Testbed birds of a feather; those could be merged at that point, with the CNF Testbed just being part of the Telecom User Group for that larger group we're seeking. We'll see how that goes. The Cloud Native Network Services Day, I mentioned that, is the mini summit that will be at KubeCon + CloudNativeCon, and then there's ONS NA 2019. I think those are probably the key events. And that's it. Does anyone have any questions on the CNF Testbed? If you have any questions, feel free to reach out.
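The per-stage smoke-test idea described above could look something like the following in GitLab CI, which the dashboard team mentions later as their underlying pipeline system. This is only a sketch: the stage names, helper script paths, and branch-flagging rule are all illustrative, not the actual testbed pipeline.

```yaml
# Hypothetical .gitlab-ci.yml sketch of per-stage smoke tests.
# Job names and helper script paths are illustrative.
stages:
  - provision
  - smoke
  - deploy

provision_cluster:
  stage: provision
  script:
    - ./tools/deploy_k8s_cluster.sh   # hypothetical provisioning helper
  only:
    - /^smoke-.*/                     # run only for flagged branches

smoke_test_cluster:
  stage: smoke
  script:
    # Vet the cluster before any test case touches it.
    - kubectl get nodes
    - kubectl wait --for=condition=Ready node --all --timeout=300s

deploy_testcase:
  stage: deploy
  script:
    - ./tools/deploy_testcase.sh      # hypothetical test-case deploy
```

Because each stage only runs if the previous one passed, a failed smoke test stops any test case from being deployed onto an unvetted cluster.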
There's a CNF Testbed channel on the Cloud Native Slack, and you can also reach out through tickets and pull requests if you'd like to contribute. Thanks everyone. Thanks Taylor. I'd like to share my screen, please. I'd like to take about 12 to 15 minutes, maybe less, to announce the latest version of the CNCF CI dashboard. We just released version 2.2.0 to CNCF.CI yesterday. What's new in 2.2.0 is we've added the Kubernetes release selector dropdown, with the Kubernetes stable and head releases, in the test environment. Stable is currently 1.14, and this dropdown is essential to our next set of goals, which include adding additional options to that test environment. So this is the CNCF.CI status dashboard. This is the test environment, the square here. It currently includes Kubernetes with two release options, stable and head, which will be provisioned to bare metal on Packet, and then the badge shows the status of the provisioning. This allows us to see that the provisioning passed for stable on Packet, and for the CNCF projects currently on the dashboard, it shows the status of the build and the deploy on that environment. When you use the dropdown to select the other release, you can see, oops, the head release of Kubernetes failed for some reason. We are looking into that now. And unfortunately that means that the deploy phase for the projects will also be failed, as it does not have a provisioned cluster to deploy to. So we're really pleased to get this one out, and we're already iterating on the dropdown to include more options. We also did some styling updates in this sprint. We updated all of the logos, the CNCF logo and the projects, to be SVG format; they were PNG before. We updated the header, we updated the horseshoe line, did some styling and spacing updates, and we had some responsive issues that we resolved as well.
What's next for 2.3.0 is we will be adding ARM support to the Kubernetes stable and head clusters, and we'll also be adding ARM support to the CNCF graduated projects CoreDNS and Prometheus. We've gotten started on Envoy and Fluentd as well. And we hope to have 2.3.0 available in less than two weeks. Here I'll show the mock for what we are working towards for adding ARM support to Kubernetes stable and head. The mock shows we're going to be updating the styling a bit for the dropdown. We're going to have the label at the top showing the active version, with a checkbox and bold for the active version, and the options will be listed in the following order. What we expect to see is, if we have no projects yet supporting ARM and building on the ARM architecture, then we do expect to see a full page of N/A badges. However, we have made good progress on CoreDNS and Prometheus, so we do think that we'll be able to show that CoreDNS and Prometheus are able to build and to be deployed onto the ARM architecture. And so these are our expected results: once we have chosen, let's say, the Kubernetes head release on ARM, we'll see the provisioning status of that release onto that architecture on Packet. And then, if all goes well, we will see that the builds passed on the ARM architecture and the deploys passed on the ARM architecture. And as we add support to each one of these projects incrementally, you'll see the N/A badges go away for build and deploy and be replaced with success. In planning and design, we received an enhancement request from the community in ticket number 74. The enhancement request was about that dropdown: our initial design, which the team is working on now, is one dropdown that just includes the ARM options in the same dropdown. The request was to use more than one dropdown. We like this idea and we're iterating on the design of it.
On desktop, really, this is fine. But on mobile, it feels a little bit difficult to make sure we're giving the user the best experience, that they understand what their options are, and that the buttons are big enough to press on a small screen. So we're taking this and rolling with it, and we're working with our designer to come up with some other options for mobile. That's in progress, to see what options we have, and hopefully by next month we'll be able to show you our best ideas. We're also considering that we may want a third dropdown in the future. We envision this test environment at the top of CNCF.CI to be flexible and scalable, and we envision that we could use it for more features. Our original idea was to allow us to see the status of provisioning, build, and deploy for multiple releases of Kubernetes, for example head and stable, and maybe even release candidates in the future. And we know we want to support more than one architecture, because Arm is now a gold member of the CNCF, and one of their primary missions in joining the CNCF is to show that the CNCF projects can run on ARM. So we're hoping to support that mission by visualizing ARM on the dashboard and showing that, yes, CoreDNS can build and can deploy and it's all working as expected. The third possible dropdown could be for container runtime options, if we want to support those. The CNCF has a few container runtime projects; containerd is a graduated project of the CNCF. So we are looking into and planning out which CNCF projects should be part of the test environment and how we could visualize that on the dashboard, and that could be a third dropdown option. And with mobile, again, adding more buttons is a science, so we're taking our time with that. And soon after v2.3.0, and before KubeCon Europe, so hopefully before May 20th, we are working on adding support in the CI system.
So, refactoring a bit of the CI system to support kubeadm for bootstrapping Kubernetes clusters on Packet. And after we have kubeadm in place, we will update Kubernetes to the latest version. We're currently at 1.14, and they'll be releasing 1.15 in June, so we'll have 1.14 up before that. And we're preparing for two presentations, an intro and a deep dive, at KubeCon Europe. We welcome your feedback. Just like that enhancement request for the double dropdown, which sparked a really interesting conversation, if you believe a feature would be useful to the community, feel free to create an issue in GitHub. Feel free to join the CNCF Slack; the channel is #cncf-ci. You can email us and join the public mailing list; the link is in the slides here. And join this call, the fourth Tuesday of the month at 11 a.m. Pacific time. We're also on social media, feel free to check us out. Does anyone have any questions for the CNCF.CI status dashboard? So I have a question. I think in the past you folks used to leverage GitLab CI, or I guess currently; I just wanted to ask what CI solution is used. Okay, got the answer. Yes. Yes. Okay, thank you. And Taylor and Ruth, if you'd like to elaborate, feel free. It's a mixture of quite a few things. GitLab is the underlying system for pipelines that are started via Git commits and other things. Okay. And then Kubernetes is deployed using a custom provisioner that bootstraps Kubernetes after creating and provisioning machines with Packet; it's based on Terraform and some other things. That's actually on the roadmap to be replaced, as we were saying, so I'm not going to go into that right now, but that's what it is now. And then once that's up, we use Helm to deploy the different projects, and then we can run whatever tests are available. It's definitely an area where we'd like to have the CNCF projects, or whatever project is involved, like ONAP as another one, involved with doing E2E tests.
Ideally, we can collaborate to have a structure that's easy to maintain, and then any E2E tests could be run on a regular basis on the CNCF CI as well as used by the projects themselves: things that would be useful for the end users of those projects. Yep, that makes a lot of sense. Okay. And this is something you're looking for the community to contribute, right? Absolutely. We can come up with some, and there are different places where we may have more experience, but it's not maintainable for us to come up with all use cases for all users for every CNCF project. So we definitely want the projects involved, saying here's how Prometheus is used. We have a lot of people who really want to see these end-to-end tests run and working for every release. What are the key end-to-end tests that end users would want to see working for every release, and maybe every daily commit on master, sort of thing. Great. Okay. Got it. Direct collaboration on the E2E tests is the long-term goal, as well as just collaboration on making sure they run, that the projects deploy and work as desired. Right now, like I said, they're deployed via Helm charts, and the builds are based on the READMEs or CircleCI configs or whatever is within the projects. Ideally, the projects can even help maintain those, which is what we're working on: in addition to, say, the ARM support, everything is being restructured to allow external projects to help maintain those pieces. Got it. So regardless of what CI system they're using, they can put their tests in here, with ARM support. We would like the tests to be shareable across projects, across systems. As far as external CI support, that's definitely something we're looking at doing. The ONAP project itself, which is one of the projects on the dashboard: all of its CI status is pulled from the project's Jenkins CI server.
And what we're looking at doing is adding support to integrate with all the external CI systems. The start would be pulling status information that's publicly available from CI runs, and the next step would be trying to get public artifacts. So for Prometheus, for instance, if we can have a public artifact for all the stable releases that we can download, containers that we can deploy, then we'll be able to pull those down and deploy them on a regular basis. So we'd be running this, and these are kind of long-term goals, where we're adding more and more support for integration across any existing CI systems. Right now it's built internally, and we'll move more and more externally. Got it. Is there any issue or something where I can read about what you just described? So there's a roadmap, which I think we may have provided some of in Slack. And there are a lot of different public issues for adding the different projects, including restructuring or adding various support. The current ones on there include being able to maintain the project details, like the logo, the title, the different pieces; that's one of the easier ones, and it's a separate piece. Each of the parts that you see when you're looking at the dashboard, we're breaking down to make maintainable as separate pieces. And ideally it could be pushed all the way into the project repo, similar to having a .cncf-ci directory, and then you would have the different configuration in there so that you can update and maintain it. We're breaking those down into individual tickets, and for the projects themselves, it'll probably be individual: if you add this, you'll be able to maintain the releases that are built; if you add this, you can maintain how the artifacts are deployed. And we'll keep going down that path. Here's where you do E2E tests, for instance. Got it, got it, okay.
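To make the external-CI integration idea concrete, here is a hedged Python sketch of pulling a job's last-build status from a public Jenkins server, the way ONAP's status is described as being pulled. The server URL and job name are placeholders, and the badge mapping is illustrative, not the dashboard's actual logic; it relies only on Jenkins's standard JSON API (`/api/json` under a build URL).

```python
import json
from urllib.request import urlopen

def badge(payload: dict) -> str:
    """Map a Jenkins build JSON payload to a dashboard-style badge.

    Jenkins reports `building: true` while a build runs, and a `result`
    of SUCCESS/FAILURE (or null) once it finishes.
    """
    if payload.get("building"):
        return "running"
    result = payload.get("result")
    return {"SUCCESS": "success", "FAILURE": "failed"}.get(result, "N/A")

def fetch_last_build(server: str, job: str) -> dict:
    # Jenkins exposes build metadata as JSON under /api/json.
    # Both arguments are placeholders; a real integration would point at
    # the project's public Jenkins instance.
    url = f"{server}/job/{job}/lastBuild/api/json"
    with urlopen(url) as resp:  # network call; needs a reachable server
        return json.load(resp)

if __name__ == "__main__":
    # Offline demonstration with a canned payload:
    print(badge({"building": False, "result": "SUCCESS"}))  # success
```

A periodic poller built on these two functions could feed the build/deploy badges on the dashboard without the external project changing anything.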
That's super helpful, yeah. I'm starting to get a picture of what the vision is; I appreciate it. If you have a particular interest in one of the pieces: E2E tests are one of the hardest for someone outside of a project to build, and they're pushed out for us because we actually need the projects to be directly involved. But if that's something, or anything else, you'd like to really focus on, then let us know and we can direct you to specific tickets for the project or anything else. Appreciate that. Thank you so much. I'll be in touch. Thank you so much. Great discussion, great questions. If there are no other questions for CNCF CI, I'd like to hand it over to HH. Before we turn it over, I wanted to say a quick word. It's Philippe Proven from Arm. I just wanted to say thank you. It's nice to see the work progressing on the CI validation against the Arm platforms. I should be able to attend KubeCon in Barcelona, so I can meet some of you there. Wonderful. Thank you so much. Lucini, would you mind driving the presentation for me? My screen sharing isn't doing the normal thing. Sure. I believe Taylor added your items into the slides. Yes. One of the things that we were noting earlier is how do we get E2E tests run across all of our different projects and incorporate them into things like the CNCF.CI dashboard, and looking at ways, in general, we can explore the CNCF doing more CI-related things. One of the things we've been doing recently at ii is looking at how the Kubernetes project handles doing CI at scale, including E2E tests. And that is done via Prow. There's a bunch of things that Prow does; it's quite complex and requires an infrastructure team, composed of some folks from SIG Testing and also the K8s Infra working group, that's working to make it available and run by the community.
But really quickly, Prow jobs are how jobs are run to create, for example, E2E test output and logs so that they can be displayed. The Prow jobs link there is just a link to the documentation if you want to follow that; I won't go through it. Jobs are triggered in response to PRs, and there's communication from a bot that says here's your job, and whether it's ready to merge, and a bunch of other things. So I won't go too far into that. I just want to highlight some of the projects that are using it, to see if we can start using it more widely within the CNCF community. Currently, most folks are using things like Travis or a few other components that don't necessarily go through and spin up a full cluster. Prow has some really useful support for auto-deploying those clusters on various components, which might accelerate things like the CNCF.CI dashboard. Go back to the presentation; we'll go ahead and go to the next slide. One thing about the jobs is that the output is put into buckets. Sorry, go back to slide 26. The output is saved to GCS buckets so that we can retrieve it and use it with things like TestGrid, and the E2E test output and logs are available there. Go to slide 27. There's also a separate area called hook plugins: you configure GitHub webhooks, and that's enabled on a repo or an org. I was interested in seeing where these are used outside of Kubernetes. They do some really amazing things within Kubernetes; a lot of the bots respond to commands in issues and PRs. If you'll click on "in response to commands and PRs" and load that up, and also the available plugins, to show them for a moment: this page lists all of the commands that you can use inside of an issue. You're familiar with these. This is driven by Prow, and you can enable it on your own projects outside of Kubernetes and outside of the CNCF, because this is an open project.
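For a sense of what defining one of those PR-triggered jobs looks like, here is a hedged sketch of a presubmit entry in Prow's configuration. The org, repo, job name, and image are made up, and the exact fields should be checked against the Prow documentation rather than taken from this sketch.

```yaml
# Hypothetical presubmit entry in a Prow config.yaml.
# "my-org/my-repo" and the job name are illustrative.
presubmits:
  my-org/my-repo:
    - name: pull-my-repo-unit-test
      always_run: true        # trigger on every PR to this repo
      decorate: true          # wrap the job so logs/artifacts land in GCS
      spec:                   # a regular Kubernetes pod spec
        containers:
          - image: golang:1.12
            command: ["make"]
            args: ["test"]
```

The `decorate` option is what produces the GCS bucket output mentioned above, which TestGrid and Spyglass can then read.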
I'm looking to find out what interest there is within the CNCF community for starting to use these advanced features. The next link inside that page was the modules themselves, the available plugins. These plugins do a lot of interesting things, including project board management, which is one of the things we're working on, and assigning people to look at a pull request. I won't go through all of them, but there's lots of really useful automation for community members to collaborate together effectively and at scale. These are problems that the Kubernetes community has already solved, and I think it would be really beneficial for the rest of our CNCF projects to at least have access to a well-configured Prow instance and help configuring those things within our community. I'm really interested in seeing that grow. If you go back to the slides and go through really quick, the next slide is who's currently doing this. Helm has some jobs, and I won't go through these links, but there are the job definitions, where they show up on TestGrid, and in particular the Prow listing of the specific set of Helm jobs, and Spyglass. If you go ahead and click on the Prow listing for Helm jobs, we'll look at an example of what it looks like to have that available. These are all of the recent job runs; you can see successes and failures. And if you'll click on one of the green ones there real quick, you'll see Spyglass load. The blue one, yep, that's the one. We can write new Spyglass views to give us insight into all of the metadata and files that are part of a run. And I think that would scale well into automating a way for people to create inputs into the dashboard. We're trying to find multiple ways to do that, and I think this is one of those ways that we could encourage our community to adopt.
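As a hedged sketch, enabling those chat-ops commands for a repo happens in Prow's plugin configuration. The org and repo below are placeholders, and the list is just a small sample of real Prow plugins; the full set is in Prow's plugin documentation.

```yaml
# Hypothetical entry in a Prow plugins.yaml.
plugins:
  my-org/my-repo:     # illustrative repo
    - approve         # enables the /approve command
    - lgtm            # enables /lgtm
    - label           # enables /kind, /area, /priority labels
    - assign          # enables /assign and /cc
```

Because enabling is per-repo (or per-org), a shared CNCF Prow instance could turn features on for projects incrementally as they opt in.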
Going back to the slides, some other projects are using this: containerd and cert-manager. cert-manager and its API repos are actually not hosted on the Kubernetes Prow instance but on their own. I've set up prow.cncf.ci as an initial exploration of a CNCF-hosted Prow instance, and we've got some simple PR jobs. That's what's currently happening. If you go to the next slide, I want to talk about what would be some nice next steps. The Kubernetes infrastructure group is expanding to set up Prow on the CNCF clusters; these are the ones funded by the CNCF, but they're specifically for Kubernetes projects, not intended for other CNCF projects yet. So I'd like to explore how other projects could get access to something like a Prow instance, specifically the issue and PR bot. The way those interactions scale how people work together works great. Within the conformance working group, we're collaborating on project board management, so that from within a ticket you can assign it to a particular board and then promote it through the different columns, basically from the command interface. People don't all have to have access to the board; instead, you can assign who has permission to use those commands within a particular issue or pull request. That's pretty much the first topic, and I've got just a little bit more to go. Let's go to the next section. That's Prow, and how I think we could benefit from it. Are there any questions about Prow, with three minutes to go? All right, we'll go on to audit logs. We use audit logs heavily to look at how our applications are doing: which endpoints they hit, and whether those are well tested or not. We're going to try to write a Spyglass lens to display that, and it needs audit logs available to do so. So this is a simple display of where we're trying to work together to create that plugin, and we do that for conformance.
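To illustrate the command-driven board management being discussed: Prow's project plugin lets collaborators move an issue or PR between board columns from a comment. The board and column names below are hypothetical, and the exact argument syntax may differ by Prow version; the plugin catalog has the authoritative form.

```text
/project conformance-board            # add this issue to the board
/project conformance-board Review     # move it to the "Review" column
/project clear conformance-board      # remove it from the board
```

Because the bot checks who may issue these commands, board permissions can stay narrow while anyone authorized on the issue can still promote it through the columns.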
Anything running on GCE generates audit logs today, and we're trying to add support for other projects to generate them so they can be consumed by this. On slide 23, my last slide: there are some useful projects that don't have audit logs yet, and it would be great if they did. I think they already have Prow jobs, so it's fairly straightforward. kind and Helm have both communicated that they're interested in updating their projects to produce audit logs. And of course, this is probably a great place to talk to the Vulk team about adding audit logs to the CNF Testbed. I know there's interest in seeing what parts of the API used by the CNF Testbed are covered by conformance. So that's what I have, real quick. If there are any questions about either of those, we can continue the conversation in chat. I guess that's it. Thank you so much. Check the agenda; I believe that wraps it up. So just to wrap up, please stay connected with this working group. This meeting is held on the fourth Tuesday of the month; the next one is May 28th, right after KubeCon. Here's the link to the mailing list, and please join Slack. The next meeting may have a conflict, as it's right after KubeCon Barcelona and after the Memorial Day holiday here in the States. I'm curious whether we should postpone it, cancel it, or change anything about it. I think it'd be nice if there was a way to hold the meeting at KubeCon, so there's maybe a face-to-face, and then skip May 28th, if that's possible. Plus one to that. I'm sorry? Oh, I was... Oh, plus one, yes, great. Yeah, yeah, plus one. Plus one, two, three. Okay. We'll see if we can get something scheduled, at least some type of room, even if it's not an official space, but some place to do a face-to-face seems likely. Once we've figured that out, we can post an update to the mailing list and to the CNCF CI Slack. Perfect, thanks everybody.
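The coverage question at the end, which parts of the API a test run actually exercised, can be answered mechanically from the audit log, since each audit entry is a JSON line recording the verb and resource of one API call. A minimal sketch (field names follow the audit.k8s.io event format; the grouping itself is my own illustration, not the working group's actual tooling):

```python
import json
from collections import Counter

def endpoint_counts(audit_lines):
    """Count (verb, resource) pairs seen in a Kubernetes audit log.

    Each line is one audit event in JSON. Resource calls carry an
    objectRef with the resource kind; non-resource calls (e.g. /healthz)
    lack it and are bucketed under "<non-resource>".
    """
    counts = Counter()
    for line in audit_lines:
        event = json.loads(line)
        resource = event.get("objectRef", {}).get("resource", "<non-resource>")
        counts[(event.get("verb"), resource)] += 1
    return counts

# Tiny inline sample standing in for a real audit log file.
sample = [
    '{"verb": "create", "objectRef": {"resource": "pods"}}',
    '{"verb": "list", "objectRef": {"resource": "pods"}}',
    '{"verb": "create", "objectRef": {"resource": "pods"}}',
    '{"verb": "get", "requestURI": "/healthz"}',
]
print(endpoint_counts(sample))
```

Comparing the set of pairs seen in a CNF Testbed run against the set seen in conformance runs would surface exactly which exercised endpoints are untested, which is the kind of input a Spyglass lens could display.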
Appreciate your time and participation in the CI Working Group. Thank you. Have a great week.