Okay. This is the CNCF CI working group on Tuesday, April 10th. I've shared my screen, and also shared the links to today's CI working group meeting agenda notes and the slides that we're using today. Feel free to add your name and email contact information to the notes and agenda. Also, if you have any updates that you would like to add to the slides, feel free to do so. Tyler, if you're available, I see you're muted. You can go on through the first slide. The second slide has the dial-in details. Can y'all hear me? I was trying to dial in to the audio. Okay, we'll go ahead.

So, we've had a few releases in March covering quite a few items, including adding some new clouds as well as projects, and we attended ONS. Cloud-provider-wise, OpenStack is now in production on CNCF CI, and we have Fluentd as well. And at ONS there was a face-to-face CI/CD workshop during that time.

Go on to slide five, I guess, and we can hop into those releases. One of the big items that happened during March was adding a non-CNCF project, the Linux Foundation project ONAP, and specifically their Service Orchestrator. That added an external integration to the ONAP CI system. They run their own Jenkins-based CI, build their own containers, and publish those in a registry that the Linux Foundation provides. We did the integration with that, where we pull in those artifacts, validate that they work, and then use them during the app deployment phase. We also take their integration tests and run those at the end to validate the Service Orchestrator and all the pieces that were running at that time. So that was a pretty great achievement. We've also been working with them as they get ready for a new release on the ONAP side, Beijing. They've had a few items that were changing, and we've done some pull requests, opened tickets, and helped them with those. We are fixing some updates on head, and we'll be doing some pull requests for the Helm charts to work with them, which they're working on right now.

OpenStack, as I mentioned, was another big one. Chris Hodge did the large majority of the work, and Melvin also helped. We had a second revision of the OpenStack side as we moved it through: after it was ready for us and we were trying to integrate and do some testing, it went back, and we worked with them to make some updates. Melvin was a big help on that side as well. That was a contributed cloud provider, which was really awesome: great coverage, and it's been really good to have their feedback on that process. I think we'll be using that process as we work with some other folks in the future.

And then we were at ONS, as I said. ONAP and OpenStack were both shown there, and we had a lot of feedback from cloud provider folks. The Alibaba people came over and had a lot of conversations about their stack. The Huawei folks had a lot of good discussion and feedback, and then dug into the ONAP side, integrations with external CI systems, and how information can be passed back and forth. We'll be moving forward with that feedback, so that was exciting. And that face-to-face CI workshop before the weekend had a lot of communities represented, and there was really great feedback for OpenCI. Some of those folks will be at KubeCon, so we'll be following up with them as we go forward.
Okay, let's go on to slide eight, on the CrossCloud CI project and where development is right now. Kubernetes 1.10 support: we're updating the CrossCloud provisioning side. There were some deprecated items that we've now covered, and we're going through and updating a few things for the end-to-end tests and some other items in provisioning. Then the Terraform code will be ready to go, and 1.10 will be up on the dashboard.

As I mentioned before, on the ONAP head release they made a change upstream and things were breaking. They've fixed their build now, and we're updating our integration code to work with those changes. Packet was having some resource issues in one of its regions; this is just a side item, but we've migrated to a different region at their request, and that seems to have resolved those issues.

So what's next? We're working on adding Oracle support, and then we'll be looking at Huawei and Alibaba as far as clouds go. ARM support is still in the queue at some point in the future after these clouds. We're also looking at Envoy, Jaeger, and a few of the other projects; Envoy is going to be added as the next project, though.

On the CrossCloud CI project itself, the internal software, we are in progress on automating the project release updates. As a version changes upstream, we'll be making the change and pulling it in to CrossCloud. Some of the projects publish or expose their versions a little differently, so we'll be tracking those as we go and providing any feedback upstream (there's a small illustrative sketch of this kind of release tracking at the end of this update). Otherwise, I hope to have this running soon, and then the dashboard and the builds and everything else will run based on new releases, similar to how we pull master and the head commits. This will let us tie in with history and the API itself, and that's going to allow us to do a lot of other things in the API server for the status repository, including new screens, being able to roll back to previous releases, tooltips, and other items in the dashboard for different versions and what worked and what didn't. And then we'll provide direct access to the API for folks to query and look at things like a particular Prometheus release against a specific version of Kubernetes (say, 1.9.6) and other combinations like that.

So where are we as far as community goes? There's OpenCI, which I mentioned came out of that face-to-face before ONS. There's a white paper that's being collaboratively built; Hippie Hacker and Rowan are also contributing to that, along with quite a few other folks, so that was really cool. We are also working on an RFC for a pipeline messaging protocol, and we've provided a link there if you want to check that out; we've been working on that for the last couple of months. We'll be talking with the VMware folks about their cloud. Spinnaker is coming up, and we're looking at that as an option and an alternative to GitLab. And with Prometheus and CoreDNS, we're trying to work with them on their end-to-end tests, continuing that collaboration so they can start maintaining those, helping both within their projects and in CrossCloud CI.

And that's about it for us. Upcoming events: we'll be at KubeCon Copenhagen, May 2nd to 4th, providing an intro and a deep dive on CrossCloud CI. We'll be talking more about how to add cloud providers and how to add new projects, and hopefully get the community building those out and helping to provide those sorts of things.
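To illustrate the release-tracking idea mentioned above (this is not the actual CrossCloud implementation, just a minimal sketch, and it assumes the upstream project publishes GitHub releases, which not all of them do):

```sh
# Sketch: detect a new upstream release so CI can pull it in.
# prometheus/prometheus is just an example repo.
REPO="prometheus/prometheus"
STATE_FILE="/tmp/$(basename "$REPO").last-release"

latest=$(curl -s "https://api.github.com/repos/${REPO}/releases/latest" | jq -r '.tag_name')
previous=$(cat "$STATE_FILE" 2>/dev/null || echo "none")

if [ "$latest" != "null" ] && [ "$latest" != "$previous" ]; then
  echo "New release for ${REPO}: ${previous} -> ${latest}"
  echo "$latest" > "$STATE_FILE"
  # ...a real pipeline would kick off a cross-cloud build for $latest here
fi
```

Projects that only push tags, or that publish versions somewhere else entirely, would each need their own lookup, which is exactly the per-project variation described above.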
Any questions? Okay, awesome. Next on the agenda. I'll go ahead and stop my screen share. Rowan or Chris, if you'd like to start your screen share, you're welcome to.

I'm starting that now. I've shared my screen, but I don't know that I'm seeing anything. It looks good. Yeah, it looks good. I can't see the slides myself; they disappeared. Oh, did they? Do you want me to share instead and then you can talk through them? Yeah, that's fine. Why don't you share if it works for you. The link is in the channel.

This is Hippie Hacker from ii.coop, and we're starting a project called APISnoop. I dropped the repo link in the channel; it's at the CNCF org right now, and it links to where we have a proof of concept. So here's the list of things we're wanting to solve. One is that Kubernetes test coverage is currently calculated using the end-to-end logs: we have to run the e2e tests and then look at the output in order to calculate coverage. That only works for the e2e tests and doesn't work for other applications. There are lots of use cases where we'd like a way to measure API coverage for not just the end-to-end tests, but any application using the Kubernetes API. In addition, the conformance group wants to raise their e2e coverage percentage, which I think is sitting at 11% right now, and we need to be able to prioritize which tests they're going to write next. There are also add-ons, for add-on conformance: we don't know which particular APIs they're using, and if add-ons are written in a different language and aren't using the Go client library, are they actually conforming to the spec? These are the things we'd like to take a look at, and we'd like to make the tool reusable for other projects as well.

So, on to our next slide. We'd like to do something a little more generic. We have an OpenAPI spec: the swagger.json is actually available at an endpoint on any Kubernetes API server. We'd like to make coverage measurement available not just to the Kubernetes e2e tests but to any other application. We're going to inspect the actual calls on the wire rather than looking at the logs, and this will let us do it from any add-on or from the end-to-end tests: inspecting each call, validating the request against the OpenAPI spec, and tracking usage by pod, by source IP, by anything we can sort by and keep track of.

The last slide is a high-level view of APISnoop; the implementation is one we'll be able to reuse outside of Kubernetes as well. A couple of steps for that: first, redirecting the API requests to a proxy. We're going to do that by watching for pods coming up with certain annotations and then redirecting them to the proxy using iptables. Rather than starting from scratch, we're going to use the existing mitmproxy project, and the end result is just going to be a module that runs inside mitmproxy as a plugin to do our OpenAPI inspection and aggregation (a rough sketch of what such a module could look like follows below). That's the high level.
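For a sense of what a mitmproxy plugin looks like, here is a minimal sketch in Python (mitmproxy addons are Python modules). This is not the actual APISnoop module; the real one would also validate each request against the cluster's swagger.json and track the source pod:

```python
# Rough sketch of a mitmproxy addon that tallies Kubernetes API usage
# as requests pass through the proxy.
from collections import Counter

from mitmproxy import http


class ApiUsageCounter:
    def __init__(self):
        # Requests counted by (HTTP method, URL path).
        self.counts = Counter()

    def request(self, flow: http.HTTPFlow) -> None:
        self.counts[(flow.request.method, flow.request.path)] += 1

    def done(self):
        # Dump the aggregated usage when the proxy shuts down.
        for (method, path), n in sorted(self.counts.items()):
            print(f"{n:6d}  {method}  {path}")


addons = [ApiUsageCounter()]
```

Such a script would be loaded with something like `mitmdump -s api_usage_counter.py` once traffic is being redirected through the proxy.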
We also have a demo. Does anybody have any questions before we proceed to the demo? All right, Rowan. Great, you've got the screen. Thank you, Chris. Just give me a second; I'm going to switch things over. Okay, I'm starting to share the screen now.

So, hi, my name is Rowan, and I'm going to walk you through a brief demo of our proof of concept. The first thing is that we've got a Kubernetes cluster running on GKE. A couple of notes about it: at the moment it requires the alpha features enabled, and it also requires legacy authorization enabled, which basically disables RBAC. The reason we need the Kubernetes alpha features enabled is because we use initializers, which is an alpha feature.

All right, I'm going to walk you through this demo. The first thing we need is to get the code, which is what this will do. At the moment, as Chris said, our code is sitting on this repo here, the proof of concept, but in the future it will be sitting under cncf/apisnoop. So the first thing we'll do is just clone this code.

I'm going to walk through the steps before I actually do them. It's pretty simple in terms of setting things up. First of all, we have to set up Helm, because we use Helm charts to deploy things. The second thing we do is create the kube API certs using the Kubernetes API. We send it a certificate signing request saying, hey, can you please sign a certificate for "kubernetes" and the Kubernetes API IP address, and Kubernetes says, all right, we'll give you a cert signed by our CA for our API server. Then we use that certificate so that for any traffic that's intercepted by mitmproxy, we can pretend to be the API server with certs that are signed by the cluster CA. So kubectl doesn't care, and other things won't care either; there's no visible difference. Then, to set up mitmproxy, we just do a helm install with initializers enabled. Then we deploy an example app, which is basically a pod that has kubectl and runs "kubectl get pods" every five seconds. At the moment we have a little bit of magic to determine which tproxy is on the same node as the example app, because we're just doing a port-forward instead of having a service; that will change later. Once we've port-forwarded, we can open a browser to that port and get the mitmweb interface, which is part of mitmproxy. We won't see any traffic at first, and the reason is that you only see traffic for pods that you annotate with a certain annotation, which is here.

Okay, I'm going to go through it step by step now. The first thing is to clone our repo; wait, no, I've already done that. So, create the kube API certs. As I was saying, this creates a CSR using the internal and external endpoints; I've basically templated out the CSR settings (a generic sketch of this CSR flow follows below). Here's our CSR that gets sent to Kubernetes; then we approve it, get the certificates out, combine them, and make them ready for mitmproxy. So we'll do that now. First we set up Helm. Okay, and now we can run this to create the kube API certs. It's generating the CSR now and sending it to Kubernetes, approving it, and getting the resulting cert. Here we go: we can see that this has been signed by the CA, and these are the endpoint addresses for Kubernetes.

Now that we've got that all ready to go, the second thing I'll show you is setting up mitmproxy. Currently it requires certs to be built for mitmproxy as if it's going to run its own CA, and I think that's just a dependency of the way things were done before.
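For reference, the certificate step just described boils down to the standard Kubernetes CSR flow. A generic sketch (these are not the actual APISnoop scripts; the names, SANs, and IP below are illustrative, and the CSR API was certificates.k8s.io/v1beta1 on clusters of that era):

```sh
# 1. Generate a key and a CSR whose SANs match the API server endpoints.
cat > san.cnf <<'EOF'
[req]
distinguished_name = dn
[dn]
[san]
subjectAltName = DNS:kubernetes, DNS:kubernetes.default, IP:10.0.0.1
EOF
openssl genrsa -out proxy.key 2048
openssl req -new -key proxy.key -out proxy.csr -subj "/CN=kubernetes" \
  -reqexts san -config san.cnf

# 2. Submit the CSR to the cluster and approve it.
kubectl apply -f - <<EOF
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: apisnoop-proxy
spec:
  request: $(base64 < proxy.csr | tr -d '\n')
  usages: ["digital signature", "key encipherment", "server auth"]
EOF
kubectl certificate approve apisnoop-proxy

# 3. Pull out the signed cert and combine it with the key for mitmproxy.
kubectl get csr apisnoop-proxy -o jsonpath='{.status.certificate}' \
  | base64 -d > proxy.crt
cat proxy.crt proxy.key > proxy.pem
```

Because the resulting cert chains to the cluster CA, clients like kubectl accept the proxy as if it were the API server, which is exactly the property the demo relies on.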
The main command, though, is here, and it's basically just a helm install, setting a value to use the initializer. So we can do that now; it's just creating the certs in Docker. Okay, so now that's been deployed. We can run "kubectl get pods" and see our tproxies running; there's one tproxy running per node. If I run "kubectl get nodes", you can see this is a three-node cluster, so there's one tproxy for each node.

The next step is to deploy our example app. If we run "kubectl get pods" again, we can see that the example app is running. Now, this is the little bit of funky logic around finding out which node each pod is running on: here we can see it's running on the same node as this tproxy here, the one with the matching node name. So there's just some logic that selects the right tproxy pod. Now we do a port-forward, from port 49000 on localhost to 8081 on that pod, which is the mitmweb interface, and I can open that page in my browser. Here we go: here's the mitmweb interface for that particular pod on the cluster. Notice there's no traffic yet. But if I run "kubectl logs" on the example pod, you can see that it's making these requests; this is the output from the kubectl loop on that pod. The requests just aren't going through the proxy yet.

So finally, we just need to annotate that pod, and we'll start seeing traffic on the web interface (the port-forward and annotation steps are sketched after this update). It's been annotated; we go back to our web interface, and here we can see the traffic being intercepted, with the authorization headers and so on. At the moment we're not recording the request and response bodies, but we can see the headers. So this shows that we're able to intercept the HTTPS traffic. Any questions?

Okay. All right, Chris, do you want to take it? It looks great; thanks, Rowan. There's only one more slide for us; I think it's the "what's next" slide. Thanks for pulling that up for me. So this PoC demonstrates the ability to intercept the traffic once we annotate a pod, but there's some unnecessary complexity in this implementation that we need to refactor. Once we've done that, we'll have a reliable Helm chart deploying the proxy, certs, and redirection with the refactored approach. The next step after that, instead of bringing up the mitmproxy web interface for inspection, is to write the main logic as a mitmproxy module. Over the next week or two, we're going to start collaborating with Test Infra to understand how we need to integrate with their CI, particularly the retrieval of the output and its format. That will probably be the next two to four weeks. That's the update from the APISnoop team; I'll hand it back over. Thanks, y'all; that was awesome.
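For reference, the final demo steps amount to something like the sketch below. The pod names and the annotation key are illustrative placeholders; the real annotation is whatever the APISnoop initializer actually watches for:

```sh
# Forward the mitmweb UI from the tproxy pod on the example app's node.
kubectl port-forward pod/tproxy-on-example-node 49000:8081 &

# Interception only applies to annotated pods; annotate the example app.
# The annotation key here is a placeholder, not the real one.
kubectl annotate pod example-app apisnoop/intercept=true

# The app's kubectl loop keeps running, but its requests now show up
# in the mitmweb UI at http://localhost:49000.
kubectl logs -f example-app
```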
Folks, this is Dan Kohn. I just have a closing question for the group, though we could also settle it by email: this remains a really small working group. There are 12 folks here, and obviously many of us talk offline. I just wonder whether it might be more productive if we moved from every-other-week meetings to once-a-month meetings. But there's no pressure on it; if you all feel like this is useful to get together and share, we can continue to do it.

It seems like different folks show up on different weeks, and I don't know if that's just their availability or not. I think we could do once a month if we had a day that works for folks. I'm also willing to just be here for the conversation; it seems like we get good feedback, and seeing the APISnoop work is valuable. If we get more folks involved, we'd have more people on the call.

Yeah, my only slight hesitancy, and obviously I'm a big fan of the APISnoop stuff, is that I'm not quite sure it actually fits in this working group. I don't mean that in a negative way; it can certainly live here until we find a better home. Sure, yeah. I think they hop around between the groups; I don't want to make Christopher feel unwelcome. Okay, let's just leave it as an open question for the next few meetings; it doesn't have to be decided right away, I guess. Okay, sounds good. I think the next one might be... is it before KubeCon or during KubeCon? Let me look. Before. It's before. Okay, thanks. Maybe we can put a post to the list and see what folks think.

This is Chris. I had another thought. Am I muted? No. When I first voted for this time of day, it was during the New Zealand summer; I think we're hemisphere opposites. When the time zones shift, it actually moves the start of the call from 5 a.m. to 3 a.m. for us. So as we evaluate possibly shifting the cadence, I'd also like to look at possibly shifting the time, depending on everybody else's thoughts. I'm not trying to push New Zealand, but I am definitely trying to see if the time could fit a little better.

Yeah, one thing I was thinking, Chris, is that some folks may prefer afternoons in the Northern Hemisphere, so maybe we could do one meeting in the morning and one in the afternoon, and that might give more opportunity for folks. Anyway, we can follow up on both of those offline, either in the Slack CI working group channel or on the mailing list, and we'll see you at the next one. Thanks.

Well, thanks everyone for joining, and thanks for that demo; that was pretty awesome. Excited to see where that goes. Thank you. It looks like it'll be useful in general for other projects that need to do API coverage. Have a good one, y'all.