Greetings. Good morning. A few folks would like to join and are having issues, so we'll give them a few minutes to join the CI Working Group and I'll try to help them. Hi, Dan Kohn here. Good morning, Dan. Yeah, I had an out-of-date meeting invite, but I found it on the public one. I did as well. We've got a good number of folks here, so let's go ahead and get started. Lucina, can you drop the public agenda and notes link for the CNCF CI Working Group notes for today? I'm going to go ahead and share my screen. We should spend a second and have the other folks introduce themselves. I'm Dan Kohn from the CNCF, and I see we have Taylor, Denver, and Lucina on. Francois, Rowan, Fred Simon, could you say who you are? Hello. My name is Francois. I work at the Infoblox company, but mostly I'm here for CoreDNS. I came to the last meeting two weeks ago, where we started to talk about integrating more tests from CoreDNS, and I'm here to follow up on that. Awesome, thanks very much. And I'm Fred Simon, Chief Architect of JFrog. Great, glad to have you, Fred. Rowan Fletcher, ii.coop. I didn't get that, sorry. My name is Rowan Fletcher. I'm from ii.coop in New Zealand; I'm working with Chris. Anyone else? For the person dialing in, you probably have to hit star six to unmute. No, it's loud. It's Chris Hanson from RX-M. Awesome. Cool. Okay, thanks very much. Lucina and folks, take over now. Okay, looks like Melton just joined as well. For folks who haven't met me, I'm Taylor Carpenter. I'm on the CrossCloud CI team, working with Vulk Co-op and with Lucina, who does project management on the CrossCloud CI team, and Denver Williams, also on the CrossCloud CI team. Can everyone see the screen I'm sharing? Yes. Okay, great. So these are the agenda notes. This is on the public calendar for the CNCF, and you can add your agenda items here for the twice-a-month meetings.
You can add items here if you'd like to speak about something at the next meeting. We're going to get some updates on the CrossCloud CI project, so I'm going to jump right in. We've had a few releases of CrossCloud CI: one on March 7th, one on March 12th, and one in progress, probably releasing by the end of this week, related to ONAP and a couple of other things. We've been updating to support the newer versions of Kubernetes. Most of that went through pretty well; there were some items we need to verify with the integrations we're doing with ONAP. We've also updated Prometheus, Pyrtina, SyncrD, and Gauntler and tested all of those. There are a few items, which we can jump into, that we had to change in the system to support them. Updating a release is sometimes a quick change, and we'll be looking at automating new releases as they come out. What catches us is when a requirement upstream breaks and causes a breaking change in how the actual CI system works; we'll talk about a couple of those. We're also updating documentation. We've made some changes to the project docs, which go over CrossCloud CI at a high level and what it does. At the per-component level, such as cloud provisioning, we're trying to get the documentation updated, along with the install and how-to guides for each of these items. We're getting quite a bit of feedback from several of the projects, like Prometheus, as they do some testing there, so we'll keep updating those, especially as we move toward ONAP, ONS, and KubeCon Europe. One of the big items in a recent release was Fluentd, a new project we added. Fluentd required some changes on the testing system itself before we could release it.
And IBM Cloud: that's a new cloud we now support for provisioning Kubernetes and testing the various projects on, and we released that. We have Linkerd coming up. Linkerd updated to 1.3.6, another bump, which came out just as we released 1.3.5, so we went into QA and were able to release that as well. Before I move on: Denver, do you want to speak to some of the things on Fluentd, what we had to change on the testing system? We didn't have to change too much. It was just a little difficult to deal with because Fluentd has three repos for their builds. They have one repository for building the Fluentd artifact itself, to run Fluentd without Docker, but they split their containerization repos into two separate ones. It's nice to adhere to the procedures they're using to build, so we had to figure out how to clone these other two repos into the Fluentd repo before we do a build, so that we can use exactly what they're using upstream. We made changes to do that, and that was effective: we just do a clone from upstream of the latest branch, or of a ref matching the version we're building. That let us get around it and keep using whatever build procedure they use upstream. Could you maybe explain, I don't quite understand why there are new release notes for each new version of the projects. It seems to me like the whole idea of CI is that as a new release comes out, it should automatically get deployed, and then if it fails, that's fine; that's a totally normal thing if the command to invoke it or some aspect of it breaks. The part I'm unclear on is the default: does the CI system automatically take in new releases, or does it need to be manually set every time there's a new release? I'll speak to that for a moment; Denver, fill in anything else. Right now, for the stable releases, it's set manually.
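Denver's approach, roughly: before building, pull the two containerization repos into place at a ref matching the version under build, falling back to the default branch. A minimal sketch, with illustrative repo URLs and hypothetical function names (the actual CrossCloud CI build scripts may differ):

```shell
#!/bin/sh
# pick_ref: use the tag matching the version being built if one is
# given, otherwise fall back to a default branch (assumed "master").
pick_ref() {
  [ -n "$1" ] && printf '%s\n' "$1" || printf 'master\n'
}

# clone_fluentd_repos: clone the two containerization repos alongside
# the main fluentd checkout so the upstream build procedure can be
# reused as-is. Repo URLs here are illustrative.
clone_fluentd_repos() {
  ref=$(pick_ref "$1")
  git clone --branch "$ref" --depth 1 \
    https://github.com/fluent/fluentd-docker-image.git
  git clone --branch "$ref" --depth 1 \
    https://github.com/fluent/fluentd-kubernetes-daemonset.git
}
```

For example, `clone_fluentd_repos v1.2.1` would check out both repos at the `v1.2.1` tag, while `clone_fluentd_repos` with no argument tracks the default branch, matching the "latest branch or a match to the version we're building" behavior described above.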
We are updating that. It's in the CNCF configuration repo, in the cross-cloud YAML, and we do set those for stable. There are a couple of reasons. One is that some of the projects had multiple stable releases, so we had to decide what to do with that; we may want to stay with one, like a particular Kubernetes release, for a while, because some of the projects wouldn't work otherwise, so we needed to stay on a certain release. Most of that seems to be worked out now. The other is that determining the release was problematic when we started. Some of the projects wouldn't tag; they would create branches, or they wouldn't use semantic versioning. So there were a lot of items. Some of that has gotten better; it looks like it has for all of the current CNCF projects, so I think we could start adding that and turning it on. We already support automatic runs for commits, so once we can determine the stable release, that should be okay. That doesn't help with multiple stables, like Kubernetes, if we're saying we want to support those, and ONAP looks like they may need multiple stable releases, so there'd have to be some type of determination there. So it's mainly on that side, more than what we can test; we could run on any release. I think I got it, but I just want to point out, and I'm open to the idea of supporting multiple older versions as well, that supporting new releases as soon as they're done, without having to wait a couple of weeks for a new CrossCloud release, seems very valuable to me. Absolutely. That's actually in the plans. It's been waiting for some of these other things to be at a point where we could refocus on how we're going to pull them in. I think a lot of the projects create some type of hooks into GitHub and trigger on those. If you're running your own project on CircleCI or something, you may have a release and say, I'm going to configure this to run on tags.
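For context, the per-project pinning being described lives in a YAML file in the CNCF configuration repo. A hypothetical fragment (the key names are illustrative, not the actual cross-cloud.yml schema) showing the idea of a hand-pinned stable ref alongside automatic head tracking:

```yaml
# Illustrative only: the real cross-cloud.yml schema may differ.
projects:
  kubernetes:
    stable_ref: v1.9.3   # pinned by hand; some projects must stay on one release
    head_ref: master     # commits on head are picked up automatically
  prometheus:
    stable_ref: v2.2.0
    head_ref: master
```

Automating stable would mean replacing the hand-edited `stable_ref` with a value discovered from the project's tags or releases page.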
Being an external project testing those projects, we don't have as much control over how those runs are triggered, so we have to programmatically figure out what the releases are. Once we take the time to work out how to determine what a release is when a project ships one, we can automate that. That'll be coming up soon. In progress, on slide six, is ONAP integration and adding OpenStack deployments. We're actively working with Chris Hoge on integrating OpenStack. Chris is doing the majority of the work, and we're helping with any changes that occur in the system, answering questions, and then doing QA and pulling those in. I'm hoping to get that in very soon. On the ONAP integration, on slide eight, for folks who were not here at the last meeting or haven't heard about this: we're integrating with ONAP's CI system. All the build artifacts we're pulling from ONAP, and the status of the builds shows in the dashboard itself; whenever these run through, they are based on ONAP's CI system. Then, after doing the Kubernetes provisioning to all of the supported clouds, we take the artifacts from the ONAP container registry and deploy those. In QA and going through testing right now are the ONAP deployments with those containers. We're using E2E tests from ONAP upstream; they have a Robot container, and it does quite a bit of work. We're focused on their service orchestrator, ONAP SO. The components are several different containers that get deployed at the same time, and we need to do that testing. It's currently working in our dev and CI testing environments, so we're moving it through, and it should be released pretty soon. As much as possible, we're trying to use upstream for E2E tests, for Helm charts, and those sorts of items.
On the ONAP side, for the deployment we tried to use their Helm charts. They're currently in heavy development: ONAP is trying to containerize all of the different components, and the Helm charts are being reworked. At the moment we are using custom Helm charts, based on theirs but heavily patched to actually function, and we'll be trying to create some pull requests upstream and get those in as they stabilize over the next few weeks. Denver, is there anything specific you want to add, or does anyone have any questions? As you covered, they're developing heavily upstream. It was almost impossible to track their active release of the Helm charts, because every day something changed entirely, so we had to make a fork of their release and then make some patches: to support Kubernetes 1.9, because they currently had bugs with that, and to change some things about how storage classes and a few other Kubernetes resources were handled. We're trying to run this across five different cloud providers, and the charts were making assumptions that you'd be on Azure, which made it difficult. This is a pretty significant integration, and it's helped us update a lot of items that will support other projects, related to the Helm charts and the repos used. We've made changes that affected, for example, Fluentd, supporting those different repos, supporting multiple containers and repos. That's a related item that lets us have more complex scenarios. Does anyone have any questions about the ONAP integration and integrating with an external CI? Other items? Okay, cool. As part of the cross-project collaboration with other projects, we've mentioned OpenStack and Prometheus. Prometheus is trying to build out a CI system covering quite a few items, including performance testing.
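The Azure assumption mentioned above typically shows up as a hard-coded storage class in chart templates. A hypothetical sketch (not the actual ONAP charts) of the kind of patch that makes a chart portable across providers: expose the storage class as a value so each cloud can supply its own.

```yaml
# values.yaml (illustrative): make the storage class configurable
persistence:
  storageClass: default      # overridden per cloud, e.g. --set persistence.storageClass=gp2 on AWS

# templates/pvc.yaml (illustrative): reference the value instead of
# hard-coding an Azure-specific class name such as "managed-premium":
#   storageClassName: {{ .Values.persistence.storageClass }}
```

The same pattern applies to the other provider-specific resources Denver mentions: anything cloud-dependent moves out of the templates and into per-cloud values overrides.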
We're actively working with them to build E2E tests that would be usable in CrossCloud CI and complement what they're building out at a larger scale. CoreDNS as well. Francois? Yes, for CoreDNS, I updated the issue right before the meeting, and now we need to schedule the meeting for the technical part, I guess. That's right. But I think you are in New Zealand, no? Yep. So what time should we schedule something? I am on the East Coast. You're on the East Coast? Okay, that should work pretty well. I will reach out to you after this call. Okay. So after we're done with ONAP and OpenStack, we're looking at Oracle for deployments, potentially, unless there's another cloud we should focus on; CNI is the next project that we're targeting. As mentioned earlier, we're updating all the documentation, the READMEs and the install guides, and trying to get those different pieces working. Adding the E2E tests, I think, is going to be one of the big ones, and we'll be doing that based on what we learn working with Francois on CoreDNS and with the Prometheus project: how you can add the upstream E2E tests, make them usable for other projects, run them out of your own system, and then make a few other changes on the dashboard itself. We're going to ONS later this month and planning on going to KubeCon Europe. The next CI Working Group meeting we're planning to cancel, unless someone else is going to run it, as we will be in LA for ONS, and there's a face-to-face workshop the weekend before the ONS conference. I'm happy to see anyone there, and at the conference we will be running a booth and can answer more questions about CrossCloud CI. I think that's it from the CrossCloud CI team. One question: on slide 10, you say replace bare metal with bare metal. What does that mean? Because you're already on Packet now. Yes, well, it is on Packet. So are you referring to the bullet where it says bare metal?
Well, on slide 10, you say you will change your bare metal, but right now for CoreDNS we run our CI on Packet; that's why I was focusing on this one. Okay. Yes, looking at the last bullet: change bare metal to bare metal, Packet. That's just a labeling change. Oh, okay. Yes, it should say bare metal and Packet. There are quotes missing around it: change quote bare metal to quote bare metal Packet. We want to give Packet credit for the fact that they're giving us these fantastic free resources. That's right. Okay, just minor. I'll add those quotes; that'll make it more clear. Dan, thanks. We have also been asked about adding ARM support again, and that's something we need to revisit and decide when to focus on versus, say, Oracle and any of the other clouds. Awesome. Any other questions, or does anyone have anything else to talk about? Yeah, Taylor, I was just wondering if there were plans to open-source the dashboard itself. Yes, it is open source. There's some renaming of the project, just some very high-level items that need to be taken care of, and I would like to have all of that done before ONS. It is open source; it just needs those things adjusted and updated, to make sure everything's still running once we make those sorts of changes, and then we can enable that. Taylor, I have another question for you about releases, and you don't need to address it now; we can follow up later. Just looking at Prometheus, they seem to be doing a great job on the releases page on GitHub of labeling all of their releases. Looking through here: 2.2.0-rc.0, 2.2.0-rc.1, and then 2.2.0 when they released it. And it's my belief that most CNCF projects, if not all of them, are using the releases pages on GitHub.
So I do believe releases are fundamentally automatable, and I would really like to move to that as a feature. I can see the argument for why we might want to skip or ignore RCs, but trying to move stable as soon as a new release ships seems very worthwhile; not head, stable. And if it fails, then the CI is sending a useful signal. Absolutely. Up until maybe three or four months ago, there were several projects that didn't have releases on that releases tab, and some of them, if they didn't have that, might just have a release tag, and it might not use semantic versioning. So it was more a determination of what we were going to do for a project if it didn't have a 2.2.0 and just had a 2.2, say. There were a few like that, and some didn't have anything on the actual releases page; they just tagged. So it's more about determining how we're going to know when something is a release. As a human, you can see it quickly, but there's a programmatic part to it. It seems to be better now, as you're seeing; most of the CNCF projects are now following those standards, and that's changed from what it was many months back. So I think we can add it now. And we can also engage with the projects. I don't think we're going to mandate that any project must use the GitHub releases pages, but if we just mention why we're using it and how it's helpful, I suspect a lot of them will be open to it. As long as we have something consistent that we can find, it makes it a lot easier. Otherwise, we need some fallback, potentially just using the last version and notifying us, whatever that would be.
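A minimal sketch of the "what counts as a stable release" logic being discussed, assuming tags follow semantic versioning (the function name is hypothetical, not part of the actual CrossCloud tooling): keep only plain versions, drop pre-releases like rc tags, and take the newest by version order.

```shell
#!/bin/sh
# latest_stable: given a list of git tags, keep only plain semantic
# versions (dropping 2.2.0-rc.1 style pre-releases and branch names)
# and print the newest one using version-aware sorting.
latest_stable() {
  printf '%s\n' "$@" \
    | grep -E '^v?[0-9]+\.[0-9]+\.[0-9]+$' \
    | sort -V \
    | tail -n 1
}
```

For example, `latest_stable v2.1.0 v2.2.0-rc.1 v2.2.0` prints `v2.2.0`. In practice the tag list would come from something like `git ls-remote --tags` or the GitHub releases API, and projects that only tag `2.2` or use branches instead of tags would fall through to the fallback behavior mentioned above.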
Right now, at least for the projects we're currently supporting, it looks like we could add something and feel pretty confident that as they do release, we'd be good to go. Any other questions or comments? Okay. Well, thanks, everyone. Again, we are not going to have a meeting on the 27th, so the next meeting will be in April. See some of y'all at ONS, and maybe at the face-to-face CI working group. The CI/CD face-to-face workshop, yep. Have a good one, everyone. Thanks, Taylor. Thank you. Thank you. Thanks, have a good day. Thanks.