Please add anything you'd like to discuss to the working notes. I'll jump in, then see where we're at with our agenda and pass it on to the next person.

I've got quite a few upcoming events for the rest of the year. One thing coming up: next month's CI Working Group call will conflict with the Open Networking Summit in Antwerp, so we will not be meeting on Tuesday the 24th on this call. I'll reach out to the CNCF events folks to remove it from the community calendar for next month.

Also next month, it looks like there's the GitLab Commit summit in Brooklyn, New York, and it was recently announced that Dan Kohn will be presenting there on "How Good Is Our Code." That's on Tuesday, September 17th. If anyone will be attending, be sure to check that out and add it to your schedule.

In September, about four weeks from now, is the Open Networking Summit Europe, just a train ride from Brussels. Taylor from Vulk and the CNF Testbed initiative will be leading a tutorial, "Driving Telco Performance with Cloud-native Network Functions."

In October, it looks like there's the Arm developer conference in San Jose. Ed, you'll be there? Yes, I'll be there. We don't have a booth of our own, but I'll be hanging out in one or more of the Arm booths. Yeah, Arm will be there as well. I sent you a note earlier about that and the potential for having some communication or a blog post on it, just giving a status of where CNCF CI is at, so at least we can highlight that during the event as well. That sounds great. And we will have a booth, and if somebody from Vulk or others wants to come by the booth at some point to answer questions or whatever, that's very welcome. We can maybe arrange something around that too. Sure thing, thank you so much. I'll take a look at the email and see what we can pull together.

Then let's see: there was the joint CFP that we put together for KubeCon North America, and I haven't heard back whether it has been accepted. I think the team is planning to send CFP announcements the first week of September, so maybe a week from now we'll know. And at Open Source Summit in France, it looks like Priyanka from GitLab is doing a keynote. I haven't heard yet, but I'll check in to see if any of the work done with GitLab and CNCF CI may be highlighted at all, and I'll be available to help with that.

There are some co-located events at KubeCon North America in November. There's a new Network Service Mesh Con that has opened up, sponsored by the Network Service Mesh community, and they have a CFP window open until September 13th. If anyone on the call is using Network Service Mesh, please feel free to submit a CFP to share your findings with the group. It's open until Friday the 13th, and they're also looking for sponsors.

It looks like there's also a co-located event, the MEC Hackathon, hosted at Qualcomm. Yeah, I added that. It's networking-focused as well, and it's happening alongside or around KubeCon, so I just wanted to flag it here if anyone has a particular interest in joining or wants to advertise it. That's welcome too. Thank you for that; it could be interesting for the CNF Testbed initiative folks. Yeah, I think it's free to attend. It's just a registration on a wiki page, and I don't think there's any particular requirement beyond that.

The other event that I would put co-located with KubeCon is Cloud Native Rejects, which is a conference for talks that were rejected by KubeCon.
The call for papers is really waiting for CNCF to finish their process, and then the Rejects CFP will open. Thank you, Ed. Is that typically held on the weekend before, do you know? Yeah, it's November 16th through 17th, so it is a weekend. There are tickets, and pricing is reasonable. I'll paste the URL into the channel. Great, thank you. Yeah, I'm interested to see what their schedule is after the official KubeCon schedule is published, and interested to see what we can see before KubeCon as well. That's great. All right, I'll share that in the meeting notes as well.

So the CNCF CI team is working on a few initiatives, and we're nearing the finish line on many of them. The first one is the CI refactor. We want to move towards a more Kubernetes-native way of doing things, so we are implementing support for using Kubespray to provision Kubernetes for the cncf.ci dashboard. We've decoupled the hardware provisioning stage from the Kubernetes provisioning stage, which were previously one step; they're now two modular steps. We're also working on a CLI tool to support using both Kubespray now and kind later, as we feel there are use cases for either one, and we're going to start with Kubespray. So what we have going on now is creating a CLI provisioning wrapper and creating a plugin for Kubespray to provision Kubernetes onto the CNCF CI system.

After we have Kubespray in place, we'll be able to update the Kubernetes release version on the dashboard to the latest version, and it will be easier going forward to keep that up to date. We're also going to discontinue use of the cross-cloud Kubernetes provisioning tool, so we'll be using the new way of provisioning in its place. Later, we'll create a kind plugin so that we can support the use cases where kind is a better fit than Kubespray.

Here's a visual representation of the CI infrastructure refactoring. Stage one, at the bottom, is machine provisioning. If we have multiple machines, we use Kubespray; if we have a single Packet machine, we'll use kind. Both will run kubeadm to bootstrap the Kubernetes cluster. Does anyone on the call want to add any information about where we are and where we're going with the CI refactor?

How long is that task planned for? The refactoring, how long do you plan for it to take? It seems like we're about 80 percent there. We were hoping to have it by the end of August, so I would say within the next two weeks. Okay, great.

Another large epic that we've been working on. Sorry, I was muted. I was trying to respond and didn't realize y'all couldn't hear me. Did that cover everything, or did you want me to add anything else about the status? No, I think that's okay. It was just to get a rough idea of how big a piece of work it is to actually complete.

The hardware provisioning is working as expected right now, and I think we're going to be able to put that in place. The Kubernetes infra provisioning portion is also pretty close at this point. We have the different components for how we want to run Kubespray, and what remains is the part that integrates between the hardware infrastructure and running Kubespray, specifically running it as a plugin so that we can add kind and other tools later. The interface for provisioning looks the same either way, as a declarative configuration. At least in the next couple of weeks, I think we'll be done and have that in production replacing cross-cloud. Okay, cool. Thank you so much.
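To make the plugin split concrete, here is a minimal sketch in Go of what such a provisioning contract could look like, assuming a declarative cluster configuration shared by both backends. All of the names here (ClusterConfig, Provisioner, Select) are hypothetical illustrations, not the actual cncf.ci code.

```go
package provision

// ClusterConfig is a hypothetical declarative description of the
// desired cluster; the real tool's schema may differ.
type ClusterConfig struct {
	Name       string   // cluster name
	K8sRelease string   // e.g. "stable" or "head"
	Arch       string   // "x86_64" or "arm64"
	MachineIDs []string // hardware already provisioned in stage one
}

// Provisioner is the plugin contract: Kubespray today, kind later.
type Provisioner interface {
	// Provision bootstraps Kubernetes (via kubeadm under the hood)
	// onto the machines described by cfg and returns a kubeconfig path.
	Provision(cfg ClusterConfig) (kubeconfig string, err error)
	Teardown(cfg ClusterConfig) error
}

// Select picks a plugin based on the hardware stage's output:
// multiple machines use Kubespray, a single machine uses kind.
func Select(cfg ClusterConfig, kubespray, kind Provisioner) Provisioner {
	if len(cfg.MachineIDs) > 1 {
		return kubespray
	}
	return kind
}
```

Keeping the config declarative and identical across backends is what lets the CLI wrapper treat Kubespray and kind interchangeably once the hardware stage has handed over its machines.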
The other large effort we're working on for the cncf.ci dashboard is adding support for external contributions, so that it's easier for more people to maintain and add new CNCF-hosted projects to the dashboard. We've started with Linkerd2 as the new project to add; it's an incubating project and it uses Travis CI. We've refactored many of the components to allow for that collaboration. Already complete: changing where the project details are retrieved and how someone can add a project and logo caption to the dashboard, changing how the release details (stable and head) are retrieved, and changing how the build status is retrieved for cncf.ci.

That's where we are now with the build status badge on the cncf.ci dashboard. It currently retrieves a build status that is created by the cncf.ci platform itself, so it's internal to the cncf.ci system. What we're working on now is creating a proxy so that we can retrieve the published build status and the published build artifacts from the project's own CI system. In the case of Linkerd 2.x, they're using Travis CI, so we're creating a Travis CI plugin that will let us get the build status directly from Travis. We would like to put Linkerd 2.x onto the dashboard in production right after we get that build status.

Here's a quick mock of what we mean. We want to replace Linkerd with Linkerd 2.x, per the project's request, as an incubating project using Travis CI. We expect to see the project details, the release names, and the build status from their CI system. We expect the deploy status to be N/A after that first release to production. Then we'll continue on to retrieving the build artifacts, which we'll use in the deployment phase: deploying both Linkerd stable and head to the selected test environment configuration, be it Kubernetes stable or Kubernetes head, on x86 or Arm architectures. Like the CI refactoring, I'd say we're nearly there, and we hope to have Linkerd 2.x in production in the next week or two as well. Any questions or comments? Taylor, would you like to add anything?

Right now, we're trying to make the plugins language-independent in this iteration. So potentially, someone could write a plugin in any language; as long as it follows the API we have for integrating with the CI proxy, it wouldn't matter. Ideally, we can get contributions not only for adding projects but for the plugins themselves. Most contributions would probably be to add a specific project, but I know we've had some conversations about Drone being used as one of the CI systems; I was hearing a lot about its particularly good Arm support, so that would be one that could be added. If folks are interested in doing that, we'd appreciate it. Documentation for adding plugins and how the integration works will be part of what's released when the Linkerd2 work is finished. Thank you.
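As a rough illustration of the kind of work the Travis CI plugin has to do, here is a sketch in Go that asks the public Travis CI v3 API for the latest build state of a repo branch. This is a minimal sketch under my own assumptions, not the actual cncf.ci proxy code; a real plugin would also cache results, map states to badges, and expose them through the proxy's API.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// buildsResponse models only the fields we need from the Travis CI
// v3 API response; the full schema has many more.
type buildsResponse struct {
	Builds []struct {
		State string `json:"state"` // e.g. "passed", "failed", "started"
	} `json:"builds"`
}

// latestBuildState fetches the most recent build state for a repo
// branch from the public Travis CI API.
func latestBuildState(slug, branch string) (string, error) {
	endpoint := fmt.Sprintf(
		"https://api.travis-ci.org/repo/%s/builds?branch.name=%s&limit=1",
		url.PathEscape(slug), url.QueryEscape(branch))
	req, err := http.NewRequest("GET", endpoint, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Travis-API-Version", "3")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var body buildsResponse
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return "", err
	}
	if len(body.Builds) == 0 {
		return "", fmt.Errorf("no builds found for %s@%s", slug, branch)
	}
	return body.Builds[0].State, nil
}

func main() {
	state, err := latestBuildState("linkerd/linkerd2", "master")
	if err != nil {
		panic(err)
	}
	fmt.Println("latest build:", state)
}
```

Because the proxy only needs a small, stable contract like this (project in, status out), a plugin for Drone, CircleCI, or Jenkins can implement the same shape in whatever language its author prefers.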
We're also iterating on an idea to expand our current test environment dropdown, which today is a single dropdown showing the Kubernetes releases as well as the architecture options. We have a mock that we're going to move forward with that uses radio buttons to show all the possible configuration options at a glance. So we're putting Kubernetes stable and head on the screen, and we're also putting x86 and Arm on the screen. Since we have two options per choice, it makes sense to show it all at once. Once we have more than three, we may go back to the dropdown styling, but we're going to give this a try and see how it affects the usability of the CI dashboard. Stable will be the default, and x86 will be the default, so when you first load cncf.ci, this is the view you'll see. Someone will then be able to change from stable to head, and the content on the screen will change depending on the configuration selected in the test environment section. This is in design now, and we expect to have it in implementation in the next couple of weeks.

Hi, Lucina, this is Ed. I think I opened up an issue regarding stateful URLs that would reflect a link to your choice of settings. Yes, those definitely go hand in hand. This UI will make it pretty obvious how we should implement the stateful URLs, whereas with the dropdown things were hidden, so it was a little more complicated. I think the stateful URL enhancement and this UI enhancement together will make a big difference. We'll have to check on planning that out; as soon as we get Linkerd2 added and the CI refactoring done, we'll be able to take a look at the stateful URLs and gauge the level of effort for that enhancement. Thank you. Thank you.
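For a sense of how stateful URLs could encode the radio-button state, here is a small sketch in Go. The query parameter names (release, arch) and defaults are hypothetical, since the actual URL scheme has not been designed yet.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// envSelection mirrors the radio-button choices on the dashboard.
type envSelection struct {
	Release string // "stable" (default) or "head"
	Arch    string // "x86_64" (default) or "arm64"
}

// fromRequest reads the selection from the URL, falling back to the
// defaults so that a bare link still renders the default view.
func fromRequest(r *http.Request) envSelection {
	sel := envSelection{Release: "stable", Arch: "x86_64"}
	if r.URL.Query().Get("release") == "head" {
		sel.Release = "head"
	}
	if r.URL.Query().Get("arch") == "arm64" {
		sel.Arch = "arm64"
	}
	return sel
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// e.g. https://cncf.ci/?release=head&arch=arm64
		sel := fromRequest(r)
		fmt.Fprintf(w, "showing %s builds on %s\n", sel.Release, sel.Arch)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The appeal of this scheme is that every visible UI state becomes a shareable link, which the old dropdown made awkward because its state wasn't visible anywhere.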
Software updates and maintenance are ongoing. As for what's next: after we get Linkerd 2.x with the build status badges, we'll start on the deploy status badges. We'll retrieve the build artifacts from their Travis CI system and use those to deploy Linkerd2 to the selected test environment configuration. We'll also make an enhancement to the MVP of the build status badge for Linkerd2: currently, when you click the badge, it goes to an internal build on the CNCF CI GitLab platform. In the next iteration we want to take the user to the project's Travis CI build instead, so rather than coming to cncf.ci, it will go out to travis-ci.org for Linkerd2 and show the job that passed, failed, or is running.

We'll also be able to add more projects. We're going to add more incubating projects to cncf.ci, in the order in which they joined CNCF, among those that use Travis CI as their CI system. We'll start with x86 architecture support for Jaeger, Vitess, and NATS. In the future we'll add plugins for other CI systems; I believe CircleCI will be the next one we target. We also currently support Jenkins for ONAP, and we'll want to iterate on that if any other incubating CNCF-hosted projects are on Jenkins; we'll take a look at updating that plugin as well.

We'll be at KubeCon North America in San Diego, November 18th through 22nd. There'll be a CNCF CI intro as well as a separate deep dive on how to add a CNCF-hosted project to cncf.ci, based on our Travis CI external integration plugin. You can find us on GitHub in the cross-cloud CI repo, on Slack at #cncf-ci, and you can also join the CNCF CI public mailing list. These calls are usually every fourth Tuesday; we will not be meeting in September. Any questions, feedback, or suggestions for CNCF CI at this time? Thank you so much. I'll pass it over to Taylor to talk about refactoring the CNF Testbed and planned CI testing on Packet.

Lucina? Yes. Can I ask that we put the dates for the future monthly meetings on the calendar as well? I know it's the fourth Tuesday, but if we could throw those dates on our document calendar, that would be helpful too. Oh, sure, under upcoming events? Yeah. Oh, that sounds great. Thank you.

I'm ready to share my screen. Thank you, Taylor. All yours. It was a pretty big refactor on the CNF Testbed, while also trying to continue adding use cases. We're trying to take advantage of the efforts on the CNCF CI side for splitting the hardware and Kubernetes cluster provisioning. That's a good fit for what we're doing, specifically being able to have hardware that's ready and to reuse that same hardware for the tests that are run on the CNF Testbed; that would be one of the main things. It also means being able to provision different types of systems and everything else, since we do OpenStack too. As soon as those pieces are ready, we're planning on migrating over.

Ideally, we can get some help around the OpenStack deployment itself; the Chef deployment that was originally used is pretty brittle. It's broken quite a bit, and we've had to keep updating little bits over and over, so there's a desire to move to something like OpenStack-Helm or Kolla. If we can get the right help on that, we'll probably make that migration, and I think it will be a better fit for what people want.

The bigger effort is around the use cases and the setup that runs on top of the platform. That consists of the network functions themselves, which could be in VMs or containers; some of them may be old-school, big, and monolithic, or ideally they're moving towards using cloud-native principles and actually being CNFs. Then there's whatever workload runs on the cluster, OpenStack or Kubernetes, plus the workload infra extensions or changes, if you want to swap out OVS on OpenStack for a VPP vSwitch or whatever you're wanting to do. NSM has components for doing the networking, DANM has different pieces, so there are all these different parts, and we want to make them part of a configuration that's easier for folks to use. Some of the events mentioned earlier include talks and tutorials, so we want to make it easy for people to come in and reuse this. Not only should they be able to understand it, but we should be able to use all of it in a way that we're confident could run in CI directly.

Breaking all of this down, what we end up with on the Kubernetes side is probably something like a Helm chart with dependencies for all the separate pieces and how they fit together. If you're using, say, NSM, then it has its own configuration for how to connect service chains, but all of those pieces fit together. On the CI side, we haven't been running tests on commits, primarily because of the resource utilization. But as we create these specific use cases that people are interested in, and want to know what the metrics are and what we're seeing over long-running cycles of doing this again and again, we'd like to have a set of use cases that are of the most interest and have them running, either on commit for any changes in those areas or on some type of regular schedule.
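To illustrate the schedule-plus-reuse idea, here is a small sketch in Go of a runner that cycles a fixed set of use cases over a long-running pool of machines, resetting them between runs rather than re-provisioning the hardware. Everything here (the useCase type, resetMachines, the run loop) is a hypothetical illustration of the trade-off being discussed, not CNF Testbed code.

```go
package main

import (
	"fmt"
	"time"
)

// useCase is a hypothetical descriptor for one CNF Testbed scenario,
// e.g. an NSM service chain with a VPP vSwitch.
type useCase struct {
	Name string
	Run  func(machines []string) error // deploy, benchmark, collect metrics
}

// resetMachines is a stand-in for wiping the previous deployment
// while keeping the hardware allocated, avoiding a full teardown
// and re-provision between runs.
func resetMachines(machines []string) {
	fmt.Printf("resetting %d machines\n", len(machines))
}

// runOnSchedule cycles each use case at a fixed interval over a
// long-running set of machines, trading idle cost for setup time.
func runOnSchedule(cases []useCase, machines []string, every time.Duration) {
	ticker := time.NewTicker(every)
	defer ticker.Stop()
	for range ticker.C {
		for _, uc := range cases {
			resetMachines(machines)
			if err := uc.Run(machines); err != nil {
				fmt.Printf("%s failed: %v\n", uc.Name, err)
			}
		}
	}
}

func main() {
	cases := []useCase{{
		Name: "nsm-service-chain",
		Run: func(machines []string) error {
			fmt.Printf("running on %v\n", machines)
			return nil
		},
	}}
	runOnSchedule(cases, []string{"machine-a", "machine-b"}, time.Hour)
}
```

Whether the pool stays allocated or gets torn down between runs is exactly the cost balance raised next with Packet.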
So the main problems with this, given it all runs on Packet, are any type of provisioning failure, whether it's some networking issue, a timeout, an issue with the Packet API, or something that happened during cluster provisioning; we're trying to get all of that to a point where we're comfortable, as part of the other refactoring efforts. Then there's the overall length of time for doing this, and looking at how many commits there are if you're running on a regular basis, and how we're going to fit that into the system. An alternative would be a set of long-running machines (I don't know why my access expired). That's something I'd probably like to talk with Packet about to get some ideas, or if anyone else has ideas: since we're building out whole clusters for these different use cases, in the best case we can probably clear the machines out to rerun another use case. But at a minimum there's a large set of machines, and the question is what works best for a place like Packet for these longer-running CI processes that take up a lot of resources; it's a balance between teardown and setup. So if people have feedback on that, especially anybody at Packet, on what would work best for y'all, we'll try to take that into consideration as we move towards this.

Yeah, Taylor, I'd be happy to talk through this in whatever detail. There may be things we can do to reduce setup and teardown times especially, and I know there have been provisioning failures; I continue to want to get to the bottom of those as they happen. Yeah, and I appreciate that y'all have been a great help on the specific failures and such. I think a lot of the teardown time may have to do with the machine types. When we're using the larger machines, they take a lot longer than, say, the x1.smalls or any of the smaller instances; the new networking-focused Intel extra-large instances are the ones where, if they're spinning up, ideally we're releasing them, but figuring out how that's going to work would be great. If any other folks have experience on that to tie in, I think we'll see a big improvement. Go ahead. If we can benchmark cycle times for that and collect some nominal numbers on the distribution, we can start to stare at what's taking so long. Sounds good.

That's really it from me for right now. I think we'll have a lot more once we get through the refactor and start seeing some of the pieces go into place. Thank you for that update, Taylor. Does anyone have any other items you'd like to discuss in today's CI Working Group call? Very good. Thank you so much for your time. It was great chatting with everyone. Just a reminder: next month's call will conflict with ONS, so we've removed it from the community calendar, and we shall meet again on October 22nd. Excellent. Thank you. Thank you so much. Have a good afternoon.