Awesome. Okay, so this is the CNCF CI Working Group. It's currently scheduled monthly on the third Tuesday, I believe. This meeting is being recorded, so let everyone know it's going to be published on YouTube. This is the same Zoom that we use every time for the CNCF CI Working Group, and these notes that I was sharing earlier are reused. Everyone should see those in the slides as well as the notes in the chat; if you don't have them, post a message there. So I'm going to give a quick update on the Cross-Cloud CI project, and then we can jump into some of the other topics. We're going to go over the upcoming events related to CI within Kubernetes and CNCF. At the FD.io Mini Summit there's the CNCF CNF project, where a lot of demos are happening, plus a Kubernetes and Network Service Mesh talk, so there's a lot going on there. At KubeCon there are going to be presentations on both of those topics. There's also a Cross-Cloud CI intro talk, and then a deep dive talk where there's overlap between the Cross-Cloud CI project and the CNF project, since there's been some CI testing work on the CNF project. And then Andrew from VMware is giving a talk on adding support to cross-cloud and how he's been using the different components in that project. I don't know if we have anyone on the call to give an update on KubeCon Shanghai for some of these topics. Let's see, I don't think we have anyone that was there. There was a talk about APISnoop and digging into that, which people may find interesting, and that's right here. So we gave an intro at KubeCon China, and Watson was there, and the plan was to dig into the project side of testing, but we've deferred that until after some planning that we want to do for the project, so the focus was the intro and then trying to get feedback.
We've added Oracle support, and there are some updates to features on the dashboard itself, like showing N/A badges if builds failed, and some other things. What we're looking at right now is what the next iteration should be for the whole project, and specifically for the dashboard. On the dashboard itself there's a split between testing Kubernetes across the cloud providers with the different feature sets, and testing projects on those clusters; those are kind of two different purposes and audiences. So one thing we're looking at post-KubeCon Seattle is doing planning for the next iteration and updating documentation so projects can contribute themselves. What do we want to do with the different pieces? We've been doing a lot of collaboration with different projects on the VMware side, so I would love to get feedback from the community on what they would like to see. There are some ideas we've been having with different projects; CoreDNS and Prometheus, for example, have use cases that they would like to have in their testing, so that's something we've been interested in. Watson's talked about reference test cases that other people can look at, where the projects are working together. And we'd like to get feedback on the next version of the dashboard, what's going to be shown and what would be useful, as well as the components underneath, like cross-cloud and cross-project and the different pieces that do different testing. We probably want to defer a big Q&A, but if people would like to provide some feedback now during the call, we could do that, or we could defer it till later. And obviously we'd be happy to take feedback some other way as well. Okay, so I'll take that as something we can defer for now. So these are some of the presentations we referred to that we're attending, including ongoing meetings.
I'm going to skip over the overview, but feel free to look through these slides on how it currently works and related topics. So, going over this, this is kind of related to some of that feedback. As part of the CNCF project for CNFs, we're trying to make it easier to do testing with network functions on Kubernetes and containers, as well as, I would say, making it possible to recreate the tests and environments on hardware, so bare metal type stuff. A lot of the focus has been on Packet. We've also started working towards supporting other environments. FD.io, which is a project at the Linux Foundation, has a lab, and we've had access to it, so we've been trying to do testing there and be able to reuse the same code, with modifications, to work on both platforms. A lot of this is Docker and KVM. And then there's the focus on what we're doing for KubeCon. The tie-in back to the cross-cloud project is how we're deploying Kubernetes: we're trying to use a lot of the same pieces as cross-cloud. Using the cross-cloud project, we're actually able to deploy the cluster, and there have been a lot of updates to support specific hardware, reserved hardware on Packet. Then, once you get past the base cluster that it's been able to deploy to the different clouds, we added support for tying into Ansible so that we can start adding all the good stuff that people have out there; there are roles and such for doing various things. Tying all this together extends what we can do in cross-cloud, so you can continue using it, and the different stages are individually usable. So we're trying to add more documentation for folks to start seeing what's going on outside of what we're doing, and ideally be able to use the different components individually as well as as a whole.
So if you want to bring up an entire Kubernetes cluster that has layer 2 support in it and deploys network functions in certain scenarios, we have some reference code we're working on that we're going to be demoing at the FD.io Mini Summit as well as KubeCon. And the different pieces are broken down, so if you wanted to dig into individual parts, everything's open source; the test harness we're using is NFVbench with TRex for generating packets. So that's probably it there. These are some upcoming presentations at KubeCon and around then. Any questions? I know that was a lot of topics before I move on. I think we have some other talks, but they're not in the slides, so let me go back to the notes themselves. So I believe Andrew is not here, sorry, I think I skipped past that; Andrew is not here to talk about his topic. So Melvin, are you available and ready for your topic? Yeah, I'm ready. Okay, I will stop sharing if you want to share your screen. You should be able to see my screen, hopefully. Looks good. Okay, so basically I'm going to talk about OpenLab, which, as shown in the notes, I'd describe as curated infrastructure for open source testing. A little bit about me: I go by mrhillsman pretty much everywhere. I think learning is very important; I'm a lifelong learner myself, self-taught in a lot of areas. And I like sports, watching them more so than actually participating, because I'm much older now. I work for Huawei as an open source community manager. I'm also the User Committee chair for the OpenStack Foundation. And of course I work with OpenLab as a governance member, which is what we're talking about today. Just a little bit of background on why we started OpenLab, some of the ideas and philosophy as it relates to how we perceive cloud and things of that nature. There's this great article by 451 Research that was done over 2017.
And it talks basically about the idea that the public cloud is not necessarily forever. There's this idea that's been floating around since a little before last year called cloud repatriation, which basically suggests that people are moving away from public cloud. There's been a lot of talk about folks moving away from public cloud for a number of different reasons, but these folks went ahead and dug into it. You can see the chart on the right hand side: these were the reasons people gave for shifting their workloads off public cloud. But the idea of repatriation isn't that public cloud wasn't working, that it was a total failure on the other entity's part, or a permanent shift; that's not the case in IT infrastructure. The public discussion doesn't really reflect what actually happens on the ground in IT departments, right? People move away from public cloud not because public cloud isn't working, but for other reasons, like performance and availability issues, or an improved on-premises cloud. Maybe they didn't have a private cloud and then they stood one up, or any number of the other things on the right hand side there. So that research was really focused on people moving away from public cloud, and once you have those discussions, it's like, okay, we're not moving away from public cloud permanently, not all of our stuff is leaving public cloud, so the questions turn to hybrid cloud, right? The idea is that people have a number of different reasons for using one, the other, or both. And looking through that lens, the open source cloud ecosystem is very large.
There are many public cloud service providers across the whole world, hundreds of open source projects, all types of clouds, open source and proprietary software working together, and users all across the spectrum. And not only do you have what, in my view, the previous slide showed, the more traditional landscape of the open source cloud ecosystem, but you also have the cloud native components and more of a shift towards cloud native philosophy and ideology, right? So OpenLab's focus is on enabling testing, reporting, and development of this stuff across multiple clouds, not focusing on single projects but really the integration of these projects, because people want solutions. They don't want just one project; one project generally does not stand alone in a person's use of that project. They're normally using it together with a number of other tools. Some of the motivations for OpenLab: where people participate, it matters. You have explicit customer requests, like needing support for product X from vendor Y, and you don't want to be the only person dealing with the pain and anguish, per se, of trying to deliver that, because sometimes those customer requests come from different companies, or from different businesses within the same company. So you want to have some idea of what is and is not working. There are also technical requirements, like the need for a future feature or function. You don't want to be the company carrying that patch for a particular project into infinity just because you have a particular customer; someone else more than likely has that same customer or that same feature or function request.
And so it's best, again, in the spirit of open source, to work together on that, and a number of other things. What we did was focus primarily on OpenStack when we initially got started last year. These are some of the pain points we were able to pull out across the five different aspects I showed earlier in terms of the open source ecosystem. What we were looking at with OpenLab to address those pain points was, for example, helping to shore up the SDK tools for OpenStack: updating them for a variety of different languages, since there's a lot of variation between the different SDKs and language bindings for OpenStack. Also acting sort of like a broker for providers, allowing people to collaborate and contribute. OpenStack stopped at the native APIs and didn't focus on the ecosystem right above them, which is the language bindings and the SDKs. That also brought about difficulty testing across OpenStack-based clouds, for different reasons, and difficulty in release planning and maintenance across those implementations. We were also able to focus on helping with awareness and acceptance in this vast OpenStack ecosystem. And so working with the OpenStack Foundation and the larger open source ecosystem is where OpenLab really sits and finds its value. These are the five initial partners, along with the OpenStack Foundation of course, who said, okay, we like the idea, we think it's valid, you made a case, let's work on it. But it's not just about those companies; we also had people from different communities on board. What we focused on primarily initially was Gophercloud, Terraform, Kubernetes, and OpenStack.
So again, the two biggest open source projects that we could think of that were really in need of support working together were Kubernetes and OpenStack, not necessarily the projects individually, because again we focus on integration. OpenLab essentially has a governance model that is very loosely structured; it will evolve over time, of course. And here's just an idea of where we were able to marry together open source software components with public, private, and hybrid clouds, kind of the proprietary aspect of things. We also brought in academia for lab and project support. What we had up front was a CI environment; this is our capacity as of today. We started off with two providers, and within the year we grew to add four more providers for the CI system, which provide virtual machines across six clouds. We also have dedicated infrastructure that's been made available to us. Initially we did not have any dedicated infrastructure, but now we have six providers giving us dedicated infrastructure. As you can see, there are the dedicated servers here, quite a few IoT devices, and then there's the WSN; I forget exactly what it stands for, but basically it's network relays, wide spectrum networking, I believe. Also, just to give an idea of what we've been able to accomplish: we've added more projects. Initially we only had Gophercloud, Terraform, Kubernetes, and OpenStack; I don't have all the projects listed here, but here are just a few of them. We've got more companies participating, more people participating, and again, our partnerships. I've already kind of gone over what's available, but remember, we only had four or five logos before, and here are some of the additional ones we've added; not everyone is on here.
And I'll kind of skip these two, because basically this is just saying, here's what we are delivering. So, repeatable deployments; we're able to help facilitate a roadmap for OpenStack SDKs; and integration experience and best practices around standing up OpenStack environments and testing things out. So showing folks, here are some ways we do high availability, or here's a reference architecture for that, helping folks focus on zero-downtime and skip-level upgrades and things of that nature. We're making resources available for people to not only see these things working but also try them out on their own, so that they're not destroying production environments, and also helping folks shift to a more DevOps-centered culture. If they can test stuff out in the lab first, it helps. For example, if they need a test environment to work alongside their production environment, or they do canary deployments or rolling releases of new features and things like that, they're able to test things out in the lab before touching production. And again, there are ways to get involved. Essentially, you can get involved through issues just by sharing things that you think should be integration tested together. You can leverage the SDKs and the tests that we're doing, all the stuff that we're doing; you can also take what's already available, try things out, and give constructive feedback to us. You can still contribute infrastructure too, but we have quite a bit, as you've already seen.
Primarily, we're looking for more people to contribute test cases and integration requests, and also to participate in helping to fulfill the work related to making some of those things happen. So what I did was create a little video here; the audio won't play, so I'll try to narrate as best as possible. Basically it's a video of me logging into one of the testbeds, showing how, through what are called profiles, you can save whatever you've done, and then you or others can come along behind you and create what are called experiments, which are based on profiles. So it allows, again, that repeatable testing where you do it one time, and then people can come behind you and augment it, make it better, or you can make it better for yourself; you don't have to keep redoing it. So I'm just going to let the video play. Right here, I'm logging in. You can see there are two experiments that I have running. I'm going to show you profiles, which we've saved. Instantiating one is simply clicking the play button there, but here I'm showing the topology of one of the experiments. When you create an experiment, OpenStack is the default in this particular testbed, but you click the change profile button to choose another. We have one for Kubernetes, for example. We have some that are just two nodes connected together. This one's got a bunch of nodes connected by multiple layer 2 links across multiple sites. A profile could be just a few small nodes; it doesn't have to be as large as the other one. You just use the create topology button here, and you can drag and drop machines.
So these are those dedicated servers, those thousands of dedicated servers, or programmable network switches. You can create multiple sites and just drag machines from one site to the other. So if you want to test something like, I have this site in the US and another site in Canada or someplace else, you can emulate that. Or you can actually spin up nodes that are in those two different sites, so it doesn't have to be emulation; it can actually be what is happening in the physical world. I'm just showing again how easy it is to move these virtual machines as well as dedicated servers together, and how to link them by simply dragging the line there; you can connect the devices themselves, or connect them directly to the link by dragging over to it. That's a layer 2 link between those devices. And if things get a little messy, you can click the tidy view button there to clean them up. When you click on a node, you can of course modify its name, and you can modify the operating system, the hardware type, and whether it needs an IP address or not. For the images, you can create your own, or you can use ones that others have created; that's basically what I'm showing there. You can also add a tarball, an archive you want to drop on the node after it's created, and then execute a command; you can do that as well. And that's just focused on provisioning the devices when you stand them up. There are a number of things you can do after the fact, right? You can use Ansible if you want, from your local machine, or you could have Ansible in that archive or as part of that command as well. So if you have any questions, here's basically how to get in contact with us, and the website's there. And that's pretty much it.
So, in terms of why I thought it would be beneficial to present OpenLab to the CNCF CI Working Group: I think we have very compatible missions, and we've already been working with, for example, Ed. I'll stop sharing my screen right now if that's all right. We've been working with Ed from Packet; as you hopefully saw in the presentation, Packet is one of our sponsors, and we're looking forward to doing more with Ed, along with offering resources for the CNCF cluster. For the CNCF CI Working Group, I think we could offer physical resources to augment some of what's already available, as well as support some tests the working group wants to do as relates to reference architectures and things of that nature, making them repeatable in terms of the infrastructure itself, like getting the physical devices together and mapped out a certain way. I think it could be a good partnership, and I would like to explore how we can be a benefit to the CNCF CI Working Group going forward. And that's it for me. Awesome. Thanks, Melvin. Does anyone have any comments or questions for Melvin? I'm interested in that topology editor itself. Do you know if it can be used standalone, or can it export those visualizations outside of the software? Yes; there's actually a whole suite for managing hardware devices in terms of configuring them, putting them back into pools, user access and management and all that. As for that particular component, I'll have to check; I think it's called Jacks, but it's an open source component that they're using. All of this is open source, and I can get it to you, and you can try to figure out how to make it work in the way you'd like.
And what's also cool about it is the federation. For example, there was one person who only had, I think, a 100 gig network switch, and there was no 100 gig network switch in the whole federation; a lot of these resources, those thousands of nodes, are federated across multiple sites. You can take the actual software that I showed, which the topology editor is just one component of, and if you have a network switch or some other device that isn't available in the testbed, for example a particular FPGA card or a particular GPU, you can stand up that software in your lab or your facility, and you become part of the federation. So you might offer that GPU to folks who don't have access to a GPU, or the 100 gig switch, whatever it is you're offering, but you would also have access to all the other devices as a benefit of offering your one device to the federation. So it's actually a pretty cool situation. Are you able to stand up different federations? It sounds like what you're talking about is similar to IRC networks that federate. Can you stand up the equivalent of multiple IRC networks that have different groups federated with each other, but not federated across, say, another network? Yeah, you can actually set it up that way. Let's say the federation consists of 10 members right now, and you say, okay, I want to be part of the 10, but then I also want to be part of this other, smaller federation I have. You can do some overlap where you basically make one federation blind to the other, per se. And you can actually turn that off and on.
The example they gave was: maybe a company is offering 10 nodes, and normally they only need five of them, but there's a big holiday push and a lot of usage going on, so they need all 10. Then after that, for a couple of months, they're only using about five of those nodes, so they could turn off the federation, per se, for those few months and then turn it back on, for just those five nodes or all 10. So there's a lot of flexibility there too. Great. I'd love to talk with you more, especially looking at how you're handling stuff at Packet, connecting the nodes and dealing with the API and other things like that, as well as non-Packet hardware. Definitely. Thanks, Melvin. You're welcome. Okay, so it looks like you're up, Fred. Hello. So, let me start by discussing why we ended up using kubeadm. Sorry, would you like to share your screen first? Yes, I would. Thanks. Okay, let me do that. There you go. You should be able to see my full screen. Looks good. Okay. So before we get started: I'm one of the, I guess you'd say co-creators, co-contributors of Network Service Mesh. I won't go too much into the project itself and what we're trying to do, but one of the problems we were running into is that we needed a Kubernetes cluster in our CI system. We use CircleCI, and we have a build pipeline; I'll see if I can show you an example of one such build pipeline. This one looks like it passed, so I'm going to show the checks, and we'll just go to one of them and go to the workflow. We have this basically building multiple images and so on; we have this packet-deploy job and a set of integration tests that run after that, and then we destroy the cluster. So a pretty simple setup.
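A workflow shaped like the one described (build images, deploy to Packet, run integration tests, destroy the cluster) could be sketched in CircleCI 2.0 config roughly as follows. This is an illustrative assumption, not the project's actual configuration; the job names, builder image, and make targets are all hypothetical:

```yaml
version: 2
jobs:
  build:
    docker:
      - image: circleci/golang:1.11   # assumed builder image
    steps:
      - checkout
      - run: make docker-build        # build the multiple images (hypothetical target)
  packet-deploy:
    docker:
      - image: circleci/golang:1.11
    steps:
      - checkout
      - run: make packet-start        # terraform apply + kubeadm cluster bring-up
  integration-tests:
    docker:
      - image: circleci/golang:1.11
    steps:
      - checkout
      - run: make integration-tests   # runs against the downloaded kubeconfig
  packet-destroy:
    docker:
      - image: circleci/golang:1.11
    steps:
      - checkout
      - run: make packet-destroy      # tear the cluster back down
workflows:
  version: 2
  build-deploy-test:
    jobs:
      - build
      - packet-deploy
      - integration-tests:
          requires: [build, packet-deploy]
      - packet-destroy:
          requires: [integration-tests]
```

The key point is the ordering: the integration tests only run once both the images and the Packet cluster exist, and the destroy job always follows the tests.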
So before that, and this could be something for your v2 planning moving forward: one of the problems we ran into was that we were originally using the cross-cloud CI stuff that you all built, and I think it's absolutely fantastic. But one problem we ran into is that when the cluster would come up, these integration tests, which take about a minute and a half to run, were taking 15 to 20 minutes on the cluster that was spun up through the cross-cloud CI. So for the short term, instead of trying to fully debug the system, because of time constraints, we decided to deploy using kubeadm. As for how the kubeadm part looks, let me open it up and start from the beginning. We have our build files, and we include various Kubernetes make targets and so on. Eventually, through a set of includes, we include this, which is the stuff we run for Packet. So you see we have a packet-start target: you can do make packet-start, and it runs terraform apply, and then it runs a script that installs Kubernetes. The terraform apply should be pretty straightforward; you already use Terraform, if I recall properly. For the create-Kubernetes-cluster part, I'll go over that little component. The way it looks is: we start off by copying some scripts over to each of the systems, including the Kubernetes install script. The Kubernetes install script is just there to install kubeadm; it's lifted straight from the kubeadm install page, so it's pretty generic. It installs kubeadm, kubectl, and Docker. After we do that, we run two scripts in parallel. One of them is a start-master script; the other exists because we don't want to wait for our worker images to download.
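The install script mentioned above is described as being lifted from the kubeadm install page; for Ubuntu 16.04 of that era, such a script would look roughly like this sketch. Treat the repo URLs and package names as assumptions to check against the current kubeadm installation docs:

```shell
#!/bin/bash
# Sketch of a kubeadm/kubectl/Docker install script for Ubuntu 16.04,
# modeled on the official kubeadm installation instructions.
set -e

# Install Docker from the distro packages.
apt-get update && apt-get install -y docker.io

# Add the Kubernetes apt repository and install the tooling.
apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

# Pin the packages so an apt upgrade doesn't move the cluster version.
apt-mark hold kubelet kubeadm kubectl
```

This installs the latest stable packages, which matches the speaker's note later that they only run the latest stable Kubernetes.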
So we also run a script that downloads the worker images on the systems that are not the master. The way that looks is: we have start-master, which runs kubeadm init, and you pass in the pod network CIDR range. We then copy the config over to the home directory; this is actually running on Packet itself. We then kubectl apply a CNI plugin so we can have some networking, untaint our nodes, and then do kubeadm token create with print-join-command. What this print-join-command does is create a script that does kubeadm join with the proper tokens and addresses necessary to join the master, and we store that in a join-cluster script. The workers, if they're not the master, all they do is pull images, and that takes about a minute and a half to two minutes. After that, we copy over the join-cluster script that was generated and run it on them, and we have a full working cluster. So it's a pretty simple script. Then the only thing left to do in our CircleCI system is to download the kubeconfig file so we can make use of it in our CI path. We've found this to be quite reliable for setting up a cluster. To give you a quick example of what it looks like, we run make packet-start. We decided to use the Ubuntu 16.04 images, because on Packet they're the only ones that have the fast boot; the other ones take around three or four minutes to start, while this is set to start in under a minute, or around a minute on average. So it shaves off some time on our builds. You'll see the script, after a few moments, start to kick off, and you'll see a full running system.
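Putting the steps just described together, a start-master script of this shape might look like the following sketch. The CNI manifest (Flannel here), CIDR value, and file paths are illustrative assumptions, not the project's exact script:

```shell
#!/bin/bash
# Sketch of the master bring-up flow described above.
set -e

# Initialize the control plane, passing in the pod network CIDR range.
kubeadm init --pod-network-cidr=10.244.0.0/16

# Copy the admin kubeconfig to the home directory
# (this runs on the Packet machine itself).
mkdir -p "$HOME/.kube"
cp /etc/kubernetes/admin.conf "$HOME/.kube/config"

# Apply a CNI plugin so pods get networking (Flannel shown as one example).
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Untaint the nodes so workloads can schedule on the master too.
kubectl taint nodes --all node-role.kubernetes.io/master-

# Generate a join script with a fresh token and the master's address.
kubeadm token create --print-join-command > /tmp/join-cluster.sh
chmod +x /tmp/join-cluster.sh
```

Meanwhile, the parallel worker-side script is essentially just pulling the control-plane and add-on images ahead of time, and once the master is up, the generated join-cluster script is copied to each worker and executed there.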
So I don't know if you want to wait around for the fully working system or not; it takes around two or three minutes, and that basically gets us to a point where we can then start running our integration scripts, which we deploy using kubectl. So overall, very simple. Does all that make sense? It does, and it looks like you're past the initial provisioning and the bootstrapping. You can continue if you'd like. Okay. Yeah, so since those are done, we'll go and grab the master; we'll just have to wait for it to finish provisioning first. It should be fast. But yeah, once it's done, you'll have a full working cluster. This one is just two nodes. I should have run it beforehand, sorry about that. Do you mind if I ask a question while this is going? Sure. Have you added support, or are you thinking about adding support, for using machines that are already up? So you're either bootstrapping an existing machine, or just running the new tests after clearing out the NSM binaries? Yeah, so that's a question that's popped up and that we've been discussing. The idea we came up with, as maybe an extra step, though we need to do some investigation on it first, actually involves two steps for Network Service Mesh. The first would be to create a namespace for each NSM integration test. The second thing we'd want to do: right now the CRDs that are installed are not namespaced, they're cluster scoped, so we'd need to change that so the CRDs are namespace scoped instead. Once we do that, we can have an always-on cluster that just continuously runs: create the namespace, run all the tests, delete the namespace, and everything gets deleted with it. So that's something we thought of, but it's a potential future step.
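The namespace-per-test idea just described, assuming the CRDs were made namespace-scoped, would reduce each run on a long-lived cluster to a loop like this sketch. The manifest directory and test-runner script here are placeholders, not NSM's actual tooling:

```shell
#!/bin/bash
# Sketch of namespace-scoped test isolation on an always-on cluster.
set -e

# Unique namespace per integration test run.
NS="nsm-test-$$"

# Create an isolated namespace for this run.
kubectl create namespace "$NS"

# Deploy the system under test into that namespace and run the checks
# (deploy/ and the runner script are hypothetical names).
kubectl apply -n "$NS" -f deploy/
./run-integration-tests.sh --namespace "$NS"

# Deleting the namespace tears down everything created inside it,
# provided all the objects (including CRD instances) are namespace-scoped.
kubectl delete namespace "$NS"
```

The whole scheme hinges on the last comment: namespace deletion only cleans up namespaced objects, which is why the cluster-scoped CRDs would have to change first.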
What about Kubernetes versions? Are you focused on releases only right now, or are you also supporting master or HEAD? Right now it's only using the latest stable. And those should become ready in a few moments. Yeah, there they are. I should have run that with `time`, but it takes roughly three and a half minutes or so to fully provision the Packet systems and spin up the Kubernetes cluster. The scripts themselves aren't tied to this setup; there's no reason why you couldn't run the create-Kubernetes-cluster step elsewhere. The one dependency is that right now we pull information from Terraform, but there's no reason this couldn't be adapted to instead accept a list of masters and a list of workers, and then you could just loop over them and run the right commands. So it should be relatively easy to adapt. The one area where I think cross-cloud CI, based on our previous conversations, may have trouble if you wanted to adapt something like this is that I suspect you probably want to run against the latest bleeding-edge Kubernetes, and there are two challenges I can see kubeadm having. Number one is that where you download the Kubernetes images from appears to be hardcoded: there's a k8s.gcr.io path, so it doesn't look like you could build your own images, publish them to your own repo, and pull from there. The second issue you may run into is that kubeadm is starting to put in some support for other configurations. To show an example: with kubeadm you can pass in a Kubernetes version, but that's only going to download from that gcr.io endpoint. And you can see (I misspelled it there) the paths that it's pulling from here. I don't know if the latest builds from the Kubernetes CI are published there; if they are, you might be able to just pull those.
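Depending on the kubeadm release in use, there may already be a knob for the hardcoded image path: the kubeadm config API exposes an `imageRepository` field that overrides the default k8s.gcr.io source. A hedged sketch, assuming a v1beta1-era kubeadm and a hypothetical private registry:

```yaml
# cluster-config.yaml -- hypothetical; field availability depends on the
# kubeadm release in use. The registry URL and version are placeholders.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0                  # illustrative release version
imageRepository: registry.example.com/k8s   # pull control-plane images here instead of k8s.gcr.io
```

This would be applied with `kubeadm init --config cluster-config.yaml`; if the installed release doesn't support the field, contributing the knob upstream is the fallback.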
But if you want to compile a specific latest version from master, then this may have some issues. So those are the only real problems I can see. As I was mentioning, there is the ability to start manipulating some of the phases, so you can start adding or changing things about the cluster; that's currently in alpha. My guess is it probably doesn't have all the knobs that you want, so one thing you may end up having to do, if you go with the kubeadm path, would be to work out what knobs you want exposed, and then you may end up having to contribute to the kubeadm project in order to get those knobs in. I don't know what the process would look like for that or how long it would take, but it's another path. Awesome. Yeah, I just posted a related thing. We've been attending the Kubernetes Cluster Lifecycle SIG and some of the other groups, which relate to kubeadm and how clusters are brought up in a Kubernetes-native way, and there is a need for supporting different sources for those binaries. Beyond the binaries, a related thing on the cross-cloud CI project and the dashboard itself is that we were supporting source builds, so you could turn on very specific flags that may not be in any regular build, which would allow plugins and everything else to go in. I think that's in line with what's desired for kubeadm and the cluster lifecycle in general; it's just maybe lower priority. As far as the support in cross-cloud goes, the ability to add in or use kubeadm for part of it is, I think, compatible. And specifically, the feature NSM needs for these fast test loops is being able to use prebuilt binaries.
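The earlier idea of decoupling the scripts from Terraform, accepting explicit master and worker lists and looping over them, might look something like this. All IPs, tokens, and kubeadm flags are placeholders, not taken from the actual scripts:

```shell
#!/bin/sh
# Hypothetical sketch: emit the per-host commands for a kubeadm-style
# bring-up from plain host lists (e.g. parsed from `terraform output`
# or supplied by hand).
emit_cluster_commands() {
  masters="$1"
  workers="$2"
  for m in $masters; do
    echo "ssh $m kubeadm init --pod-network-cidr=10.244.0.0/16"
  done
  for w in $workers; do
    echo "ssh $w kubeadm join 10.0.0.1:6443 --token <token>"
  done
}

# One master, two workers:
emit_cluster_commands "10.0.0.1" "10.0.0.2 10.0.0.3"
```

The point of emitting commands rather than running them is that the host list becomes an input, so the same loop works whether the machines came from Terraform, Packet, or were already up.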
I think that's been on our agenda for quite a while, so that's definitely something we'll keep in mind for the next version: reusing built artifacts so that you don't have that build step. That's probably the biggest part of the delay you're seeing, waiting on a build from source, even for the release versions. If we don't need that, or we already have a cached version, then we can reuse those. Oh, the slowdown was actually elsewhere. I mean, yes, we did save a little bit of time on the initial deployment, but the main slowdown we were running into was not in the deployment of Kubernetes, but in when Kubernetes places the pods. When we run a pod, it might take four or five minutes for the pod to run, while on the vanilla kubeadm version it was deploying in a matter of seconds. So it seems there's a configuration issue in the cross-cloud CI with how the Kubernetes cluster is set up. I was able to reproduce this on the Google cluster as well; at first I thought maybe it was an issue with Packet, but it had the same issue on Google. Our hope is that we can move back to cross-cloud CI, or help get the cross-cloud CI config back to a point where we can just rely on that. For me, that'd be the best scenario, because we don't want to be maintaining scripts over time that are specific to Network Service Mesh; we'd love to use the cross-cloud CI stuff instead. Cool. Well, I'm interested in the forks, because new ideas come up from them, and there's definitely some stuff in here that's interesting for me and probably other folks.
And I think we need to create a new issue on the cross-cloud project and try to figure out the differences in that pod performance you're seeing versus kubeadm. It's pretty strange to me; there's definitely something weird going on. We consider cross-cloud to be vanilla: we're trying to bootstrap it onto the system as straightforwardly as possible, following the docs, so if you did a kubeadm-style build on a given OS like CoreOS or Ubuntu, it should be vanilla Kubernetes. But we'll get an issue up to look into that, and I'm happy for the feedback. Okay, yeah. And we'll have to rummage a little bit through the git history to get back the exact state for deploying cross-cloud onto Packet. But yeah, it was reproducible, so we should be able to work that out. My operational skills at debugging why a pod is slow are not particularly strong, so any help with that would be welcome. I can see about reproducing the system, and then perhaps we can work together to figure out why this thing is running slow. Yeah, absolutely. It looks like most of what's needed for reproducing would be based on the current working kubeadm setup, and then we create a similar case for cross-cloud. Appreciate your help, friend. Sure, my pleasure. Okay, so what are our next steps then? Is there anything you want me to do, like getting involved with any of the kubeadm work you're considering? We can post something on the Cloud Native Slack, in the CNCF CI channel, to get feedback on that, and to start gathering feedback for the planning and what we may want to do with kubeadm.
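For the side-by-side reproduction, a small helper that turns pod timestamps into a startup latency number makes the cross-cloud versus kubeadm comparison concrete. This is a sketch assuming GNU `date` (Linux) for timestamp parsing; the jsonpath fields named in the comments are the standard pod-status ones, not taken from the cross-cloud scripts:

```shell
#!/bin/sh
# Hypothetical helper: compute pod startup latency in seconds from the
# creation and ready timestamps that `kubectl get pod -o jsonpath=...`
# can print (.metadata.creationTimestamp and the Ready condition's
# lastTransitionTime). Assumes GNU date for ISO-8601 parsing.
pod_startup_seconds() {
  created="$1"
  ready="$2"
  echo $(( $(date -u -d "$ready" +%s) - $(date -u -d "$created" +%s) ))
}

# A pod that took four and a half minutes to come up, as in the cross-cloud case:
pod_startup_seconds "2018-11-20T10:00:00Z" "2018-11-20T10:04:30Z"
```

Running the same measurement against matching pods on both clusters would put hard numbers on the "seconds versus minutes" gap for the issue report.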
If you can create an issue on the cross-cloud project for the performance issue, that would be great; I'll drop a link in here for that. Okay. Yeah, and I apologize for forking in this scenario and working on this stuff rather than working through cross-cloud; for us it was just a time crunch. Yeah, no apology necessary. I mean, the time pressure is understandable, and like I said, I think you come up with different ideas that can be contributed back to different projects. I'm happy about stuff like the OpenLab work that Melvin was doing; we need different things going on so those ideas can be shared. Okay, cool. I dropped a link to the issues, specifically for the provisioner, in the chat. It's in the Zoom chat; I can drop them into Slack as well. I think Slack would be useful, since the Zoom chat goes away. Yeah, I dropped it in the CNCF CI channel. Yep, you've got it there. Okay, we're on the hour. If anyone else would like to follow up on these, there are different ways of connecting. If y'all know anyone else doing CI that would be interesting for the community, please invite them to join; this CI Working Group, in my mind, is about anything that helps with the infrastructure for Kubernetes and anything within the Kubernetes and CNCF community. Again, this is monthly. We're not going to meet in December, because it falls on the 25th, Christmas, so the next one will actually be in January. If you have the time, if you want to prepare something or get someone involved, there's a mailing list, and on Cloud Native Slack, join the CNCF CI channel. Thanks everyone. Have a good one.