Hello, good to see you all. Let me go ahead and share my screen. It looks fine? Great, okay, cool. Welcome to the monthly CNCF CI working group meeting. We've switched from twice monthly to monthly, right now at 8 a.m. Pacific over Zoom and all of that. The notes are available here, this one right here. If anyone wants to add themselves to the attendees list or the agenda, feel free to do that for this month or next. Right now it looks like we have cross-cloud CI, so I'll give an update on that, and then Ed, I think you're going to speak about some of the things on the Packet side. We have you on there? Yes? Awesome, okay, cool.

So, on cross-cloud CI: we were at KubeCon Copenhagen and gave two talks, and had a lot of interactions. The intro talk went over the cross-cloud CI project and how it works; we've done that before, and all of those talks are on YouTube if you want to check them out. We also gave a deep dive on the cross-cloud provisioning portion, one component out of all the different components in the project, covering how provisioning currently works for Kubernetes across multiple clouds and the testing that happens. On both talks we got quite a bit of feedback, which was one of the main goals, so that we can see where we want to go and what's going to benefit the community. I also met up with quite a few folks, individuals as well as the conformance working group and others.

So, a quick recap of the original goals of this project: the CNCF is growing, and we've gone through this quite a bit, there are lots of new cloud native projects, so how do we test them, and how do they all work together?
Well, the original goals of the project were building the actual CI platform that could test Kubernetes and deploy it to multiple clouds, deploy the projects onto Kubernetes, and test how they're all going to work together. That was broken into multiple components, including the cross-cloud multi-cloud provisioner and the cross-project portion, which is being able to have the e2e tests split out and separated, whether they were included as the project was deployed or deployed as a container. That was kind of phase one, and it's gone through a few iterations at this point; all the components can be used independently.

Phase two, which was part of the original goal, was a dashboard: how would the data and status be pulled from all those places and displayed? That goal has also been met as far as the current view, which is a multi-view showing the Kubernetes (infrastructure) testing as well as the project view.

Another original goal, though it started as just an idea, was that the CI system itself, as far as builds and so on, would be optional, so that we could use external CI systems. When we added ONAP we met that goal, not just as an idea but by actually being able to integrate and pull those in. Some of those integrations are still in use now; they're actually being picked up by the ONAP team, who are doing some work with OPNFV.
So we've been giving them feedback about how we did that and integrated with an external system. Some of the projects we have right now on the page: Prometheus, Fluentd, CoreDNS, Linkerd, and ONAP, which I was just mentioning. For clouds: OpenStack, Azure, IBM Cloud, and Packet, of course.

Here's the current view of what we're seeing: Kubernetes and all of the project builds happen first, then provisioning of Kubernetes, and then once the builds for all of the other projects finish, they're deployed. We use GitLab, Terraform, cloud-init, kubetest (right now, for Kubernetes), and Helm, and then run whatever e2e tests the project has. There was a lot of talk at KubeCon about Terraform, and also a little bit about cloud-init; it all made sense once we went into what's used there and where it would tie in with current deployments from the cluster lifecycle team. On the dashboard, folks had less contention, I guess, and more desire to have similar things, like TestGrid: what could we show if we're going to have some other view?
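[Editor's note] The stage ordering just described, builds first, then Kubernetes provisioning, then project deployment and the e2e tests, can be sketched as a simple status rollup, the way a dashboard cell might compute its badge. This is an illustrative sketch only: the stage names are assumptions, and the real pipeline is implemented with GitLab CI, Terraform, and cloud-init rather than this code.

```python
# Hypothetical sketch of the stage ordering used by the cross-cloud CI
# pipeline: build -> provision -> deploy -> e2e-test. The stage names are
# illustrative, not taken from the actual GitLab CI configuration.

STAGES = ["build", "provision", "deploy", "e2e-test"]

def overall_status(stage_results):
    """Roll stage results up into one badge, the way a dashboard cell might.

    stage_results maps stage name -> "success" | "failed" | "pending".
    A later stage only counts once every stage before it has succeeded.
    """
    for stage in STAGES:
        result = stage_results.get(stage, "pending")
        if result == "failed":
            return f"failed at {stage}"
        if result == "pending":
            return f"waiting on {stage}"
    return "success"

# Kubernetes provisioned on one cloud, project deploy not finished yet:
print(overall_status({"build": "success", "provision": "success",
                      "deploy": "pending"}))
# Everything green end to end:
print(overall_status({s: "success" for s in STAGES}))
```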
So, technology-wise, everybody's interested in the status repository, and in that maybe being something separate or useful outside of what we've done. We've done a few updates, just keeping up with current projects, and then there are updates on the external CI integration I mentioned before; we're passing some of that info along to some of the projects.

Our main goal right now is gathering info for moving forward: what do we want to do community-wise? That means working with the testing SIGs and the Conformance working group, and talking with specific providers and others. We've had feedback that the CNCF projects could help more with the end-to-end tests, and about how we display those, interoperability, those sorts of things. Also additional Kubernetes versions, which should probably be split off; there's a desire to show a difference between project status and Kubernetes status, given the, I guess, different community goals on those. We've also continued to hear about an eventual switch from kubetest to Sonobuoy, kind of across the board as we move forward, not stopping everything to switch, but doing it as we go. And we've been working with the cluster lifecycle team, looking at how we could use kubeadm as part of the actual cluster bring-up for Kubernetes, at least past the resource provisioning phase. So we're looking into that and trying to work with that team.
So Those are some of the goals as far as the community side as far as the project Internal side so API for history builds deployment seems to still be desired and that could tie in with the this status repository piece which could be Potentially useful outside of even the dashboard, but providing access like the test test grid as far as the data There was a desire talking with the SIG and Aaron and some other folks about providing access to the test grid data and the cross-cloud CI and potentially some other projects on how to combine all of that and allow people to query and And how you'd filter and see how things work with different flags So those are some potential and then on the dashboard based on feedback Potentially splitting things out of course saying project deployment the sort of things So we're trying we're still gathering feedback talking with folks What should we actually be testing? Where's the biggest needs? Between the different groups that are potentially missing or hurting or how can we help there? But what should we be testing and showing and then integrations between? the different projects What would be? most useful to support those and Then the independent testing of kubernetes Container service providers. That's was something talked about At kubecon and before not sure exactly where that's going. We want to try to track and work with Conformance group and figure out some of these things. So Network service mesh is something They're asking about cross-cloud Some of the other groups are asking like a pnfv for the community are asking about how Some of these things could be used like the on app integration. We've passed over some of that info how we did it and I think a lot of the different components what we've come come up with or the ideas Different pieces or desired in different groups. 
So that's been really nice to see, people coming in and asking how we did these different parts. From the standpoint of taking a lot of ideas and showing how it could all work together, I think that's really great, and we're trying to gather enough feedback to see what the best direction to go next would be. We'd love to hear more feedback. If you want to watch any of those, like the intro video, again, they're up here, the deep dive too, and we'd love to hear feedback from anyone, whether on the list or if you'd like to dig into anything specific; let us know. There are also some events coming up that we're particularly interested in. And with that, I will pass it over to Ed, if you're ready. Ed, would you like to share your screen?

I'm fine with you just advancing slides; there aren't a lot of slides here, I can just talk through them. No problem.

Okay, so I'm Ed Vielmetti. I'm at Packet; we provide infrastructure for the bare metal testing on the CNCF CI. Go ahead to the next slide, because I think that's where I got to. So, just general status from our perspective: generally the status reports have been green for the Packet column, which has been great. Every once in a while they're not green, and we take a close look every morning at the status of things and make sure that there's nothing unexpected that happened overnight.

One of the things we have run into in the past has been capacity issues, where the CI infrastructure requests resources at the exact same time that some other project scoops up all the resources in a data center. What it's pointed us to is a need, at our API, for some flexibility about the request, such that someone might be able to say "I need eight machines, all in any one of some set of data centers, but I don't care which." So we're exploring some API flexibility that would reduce the capacity-related issues. Of course, we're always building out capacity, but since the cross-cloud CI is using non-dedicated resources, there's always the risk that you're
going to run into that. I will open up the question: if the demand is suitable, Packet does have a reserved hardware capacity option, where we could set aside some number of machines dedicated for the task. It would change the test a little bit; it would no longer exercise capacity at Packet, but would focus more solely on the Kubernetes issues. I don't have a really good idea of how many machines you need, and since I know you're only using them for some small number of hours per day, it's not the most efficient, but it might be the most effective. So I'll leave that as an open question to discuss if we run into any other issues that look like they're difficult to address.

Next slide. The other issue I want to touch on, which has been on the "coming soon" or "under consideration" list, is cross-cloud CI on Arm. One of the things I do at Packet is run the Works on Arm project, which is funded by Arm and has some equipment dedicated to the task of ports and CI/CD for arm64-based server software. So I thought I'd run through a quick list of the status: if we were to start an Arm-on-bare-metal CI, what would our expectations be on day one? Let's look at some of the key components we would have to get running even for the system to start to work.

The first issue is Helm and Tiller. The Helm project provides an arm64 binary of Helm but, for reasons that escape me, does not provide an arm64 version of Tiller in their official binary release. There are community builds of this, but we're looking at doing a test of the code rather than a test of someone's interpretation of the code.
I think this is probably first out of the gate in terms of infrastructure that would be necessary before the testing could commence.

Less of a first issue, but a known issue, just in terms of conformance testing: the Sonobuoy code base has not been released in an arm64, ready-to-go format. I don't know the degree of difficulty on that; at least someone thought it wasn't going to be that hard, because a lot of the components were already ready, but there's non-trivial work necessary there.

As to components: the core of Kubernetes looks good, and we have a community of people using it in all sorts of cluster environments. Both Prometheus and CoreDNS have been ported and provide arm64 binaries. Fluentd and Linkerd are not currently ported; I don't have a degree of difficulty on either of those. I believe Linkerd may be a dependency issue; Fluentd may be more complicated. As to ONAP, a port is underway, sponsored by some folks at Arm. There are some dependencies on Rancher inside ONAP, as I understand it from reading their commentary, and that may make an ONAP port a longer process rather than a short one.

What I don't have is a list of everything that needs to work for this all to work on Arm. If there are other components I need to be aware of that would be prerequisites for even starting a CI system, that's on my mind. The question of what the complete dependency graph is for the entire CNCF CI is not a small one. I'm sure we will discover things as we start, but I wanted to give a sense of where I thought the first focused work would be, and if I had to pick one thing out of this whole list, it would be the Helm and Tiller question. So with that, I will take any questions. Hearing no one...
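[Editor's note] Ed's component-by-component walkthrough amounts to a day-one readiness checklist. The sketch below just restates it as data; the per-component statuses are a snapshot of what was said in this meeting and will drift over time.

```python
# Sketch of the day-one arm64 readiness checklist from the discussion above.
# Statuses reflect what was said in the meeting (a 2018 snapshot), not the
# current state of any of these projects.

ARM64_STATUS = {
    "kubernetes": True,    # core looks good, active community use on arm64
    "helm":       True,    # official arm64 client binary exists
    "tiller":     False,   # no official arm64 binary, community builds only
    "sonobuoy":   False,   # no ready-to-go arm64 release yet
    "prometheus": True,    # ported, arm64 binaries provided
    "coredns":    True,    # ported, arm64 binaries provided
    "fluentd":    False,   # not currently ported
    "linkerd":    False,   # not currently ported, possibly a dependency issue
    "onap":       False,   # port underway; Rancher dependency may slow it
}

def blockers(status):
    """Components still missing an official arm64 port."""
    return sorted(name for name, ported in status.items() if not ported)

print(blockers(ARM64_STATUS))
```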
Let's see. So, not a question, but a comment on testing from the cross-cloud CI project side: I don't think that running the cross-cloud CI software, the entire stack, is required if we have the ability to target resources. If we can bring up resources to run the projects, Kubernetes, and that sort of thing on, then you can deploy to Arm without having all of it working. So that's a possibility to, I guess, bootstrap part of it, and then of course if we want to run the entire thing, we'd look at that; those parts would probably be the only other thing. Go ahead.

Sorry to interrupt. I just want to mention, Ed, how much we appreciate Packet's contributions here. And on a separate front, I have a totally separate project that I'm looking to spin up, related to the VNF work that the team is now working on, where we're hoping to use some Packet hardware, both x86 and Arm, to show automated provisioning of a lot of hardware without needing to use OpenStack. It circles back to that classic blog post you guys have about how you failed at OpenStack.

Oh, we're very good at failing at OpenStack.

Yeah, that's great. So I'll be in touch if we need anything, but right now just the bare metal has been great.

Good. Yeah, I know that we have a couple of projects interested in NFV and VNF, and access to the bare hardware seems to be helpful for those sorts of tasks.

That's all, thanks.

So, any other questions or comments? I'd love to get some of the other groups and folks who are doing CI that could be useful for various CNCF projects, and see how we could get them more involved. We're trying to reach out and gather ideas ourselves, both for the cross-cloud project as well as the CNF/VNF work: what's happening in the different groups, are people doing things? Getting other folks on here would be great. Right now, on how to connect:
Here's the mailing list. If you have ideas on who may be interested, or who has projects that would be useful for everyone else, we'd love to have them join.

This is Ed. One project I've been working with that is doing cross-architecture CI, and that might have an interest in sharing experiences at some point, is the AdoptOpenJDK project. I will make a point to share this contact information with them, just to see if they want to have someone pop in and understand the nature of what you're doing, because it's conceptually very similar to what they're doing: building a complex system across a lot of architectures and platforms. I'm sure there are either insights to share from one to the other, or shared experience that might be useful.

Absolutely, sounds great. Thanks everyone, thanks for joining. We will see you next month, if not earlier on another meeting. Have a good one. Thank you.