Let's see here. Okay, thanks for sharing the screen, Lucina. Welcome to the new time for the CNCF CI working group; we'll be meeting at this time for the foreseeable future. The meeting notes are posted in the chat, so add your agenda items there. There's also a slide deck attached; you can put anything in the slides and update those.

I think we adjusted the order here, so once y'all get the slides open, take a look. Some of the groups have been doing agenda bashing, which seems like a good idea, so if anyone has items to add right now, or wants to talk about something that's not listed, go ahead. I think my items got pushed down; I'm pretty sure I added some of those earlier. That's okay, though; if anything needs to get punted, it's fine. I keep getting confused because of Frederick from Red Hat; his name, the way it abbreviates, really throws me off. Okay, yes: Frederick will be giving a presentation on the Kubernetes Ansible provisioner, and we wanted to allow plenty of time for Frederick, Ed from Packet, and Dustin from Oracle to give their updates. Then we'll jump into cross-cloud after that, since it could extend into a big Q&A session, and I wanted to make sure we had enough time for our guest speakers as well.

I mean, that's why I initially approached you about making it a separate breakout meeting, and you asked me to put it here. So I just wanted to make sure there was time for it, since this is the call where you asked to go over it.

I believe there's time, yes. No worries. Thank you.

If you could move me after the person from Oracle, that would be quite useful for me. Thank you. Is that Frederick? Yes, that's correct. Great. Hey, Frederick.

Thanks, everyone, for voting on the start time for today's working group. We'll give this time a try: 11 a.m. Pacific, 1 p.m. Central. And thanks, Ed, for cross-posting and for all the fun emojis on Twitter. Appreciate it. Thanks, Lucina. I'm doing a master class in emoji at some point. Super fun.

If there are no new items to add to the agenda (and if there are, please feel free to add them), I'd love to jump right in with the Packet update, if you're available, Ed.

Yep. Thanks, Lucina. Hi, everyone. This is a very brief update, but by way of a small amount of background: Packet is the bare-metal hosting provider I work at, which also provides infrastructure for the CNCF CI cross-cloud effort. We're aware that the Cluster API has some infrastructure that's specific to various clouds, and in that issue, which you're looking at right now, I'm looking into the work necessary, and the desire. There are really two parts to this: is it a good idea, and how hard would it be, to implement the Cluster API for Packet? Coincidentally, there was an announcement today, as part of Google Next, of a new Go Cloud library that will, I believe, try to unify cloud provisioning from within Go. I've just started to look at that, and I think I have someone from the Packet community interested in it, so I hope to report back next week, or actually probably next month; I have some PTO coming up. Other than that, that's what I have. Back to you.

Excellent, thank you so much. We'll be sure to join the Cluster API breakout meeting tomorrow, in case they review issue 431 there as well. Moving on: unfortunately, Dustin is not able to join the call today, so I'll give just a quick update.
The Oracle OCI cloud has been in progress on the CI status dashboard at cncf.ci. It looks like, as of yesterday, they were able to get a full run of the provision script completed at the end of last week, and they're looking for the next steps. The next steps are: we'll take a look at the new pull request, compare it to our master branch, do some testing on our CI development branch, and go from there. If we encounter any issues, we'll let Dustin and team know and create new tickets, so that we can get the Oracle OCI cloud up on cncf.ci. There's pull request 166 for OCI; it was opened about an hour ago, so we'll take a look at it later today.

And in the chat there was a question, thank you, about the Go Cloud URL. Thanks for that; I'll add it to the slides for future reference. Without further ado, I'll hand it over to Frederick with the Ansible provisioner. Would you like to share your screen?

I was going to work on a presentation earlier today, but my 30-minute meeting turned into a 90-minute meeting, so I wasn't able to put together slides; I apologize for that. But basically, there are two things that came up. One is that I've been working with the Network Service Mesh group to try to build up some of the infrastructure to run and test against a Kubernetes cluster. I've also been working with the OpenDaylight community for similar purposes, where we're building out Kubernetes support to allow it to interoperate with OpenDaylight. In the OpenDaylight community we created an Ansible playbook designed to deploy Kubernetes onto an arbitrary system; it could be on-premise somewhere, or in a cloud, and so on. It spins up vanilla Kubernetes using kubeadm and effectively gets you a running cluster.

So the questions I have are these. Number one, do you have anything that performs similar functionality within this particular group? Because if there's duplicate functionality, one option is that we migrate off of ours onto something your team is working on and provides. Or, if there's no overlap, would it be useful to collaborate, or find a way to provide this functionality, so that if someone comes up with a similar use case they can make use of it rather than rolling their own? Those are the two questions I really have. The playbook is pretty simple: you just specify the nodes you want to run it on and what version of Kubernetes you want, and it uses Ansible to deploy onto those systems. It's a very simple playbook. I'll pass your questions over to Taylor, Denver, Watson, and everyone else on the call.

Sure. Before we respond to those, is there a link you can share, or can you tell me where to look at that playbook? Sure, I'll pull it up and post it right now. I think I already added it to the agenda, but I'll add it now. Okay, maybe it is; I'll go look there. Not seeing it in the agenda. Yeah, I'll pull up the link and put it in the chat, then.

I'll add this to the agenda as a note, under the Go Cloud item: I just did a quick look, and it's just another middleware library where everything is in-tree. All of the providers are in-tree, so strike one for accessing common features across clouds, and it's not even deploying anything. So I don't know what the mileage is going to be here, but that's the high-level overview, I think.
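For context on that Go Cloud comment: as the library eventually shipped (gocloud.dev), it exposes portable APIs for things like blob storage, with each provider implemented in-tree behind a URL scheme, and it does no provisioning, which matches the assessment above. A minimal sketch, assuming the blob package's URL-based opener and a placeholder bucket name:

```go
package main

import (
	"context"
	"log"

	"gocloud.dev/blob"
	// Blank imports register the in-tree drivers; portability means
	// swapping an import and a URL scheme, nothing more.
	_ "gocloud.dev/blob/gcsblob"
	_ "gocloud.dev/blob/s3blob"
)

func main() {
	ctx := context.Background()
	// "s3://ci-artifacts" is a placeholder; credentials come from the
	// environment. Use gs://... for GCS with the matching driver import.
	bucket, err := blob.OpenBucket(ctx, "s3://ci-artifacts")
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()
	// Write an object through the provider-neutral API.
	if err := bucket.WriteAll(ctx, "builds/latest.txt", []byte("ok"), nil); err != nil {
		log.Fatal(err)
	}
}
```

Note this is a sketch of access, not deployment: creating the bucket, or the machines underneath, stays out of the library's scope, which is the point being made above.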
Yeah, I've posted a link. And I don't need an answer right now, either; if someone follows up with me, we can work with that.

I think we may want to have a follow-up on this, for sure. I'm looking through the playbook right now. I definitely think, from the Linux Foundation IT side, where they have a lot of different playbooks, this would be a good thing to contribute. I know we had talked about that, or seen a little bit about it, separate from this call. I'm not sure what all they have; I know some of their installs are a lot more complicated and deal with a lot of other things. So I think it would be nice, community-wise, for anyone using Ansible to have this. That's probably the first thing I'd say.

They gave me commit access to the LFIT project in order to start pushing these types of things up. But I wanted to check with this particular group first, to make sure we don't duplicate effort, because I can easily point them at this particular playbook and say let's use it. But yeah, I think LFIT is a great place for this to land, because there are certainly multiple other groups who would be good to work with and could use it.

I agree, for sure. On your other question, whether we have a tool with similar functionality: within all of CNCF CI there are a lot of different things happening. On the cncf.ci dashboard we go about building the clusters a little differently; the quick answer is Terraform and then cloud-init, and for bootstrapping we lay down everything ourselves. I don't want to go through the whole spiel right now; we have some slides and demos, and if you're interested we can have an offline call. We do have some similar things, but it seems like what you're doing could serve purposes beyond what's happening there, so I don't want to say right off the bat that it's duplicating work. A lot of folks are using Ansible to manage various parts of their infrastructure, so it's for sure useful to have; we just don't happen to use Ansible for the dashboard. As far as collaboration, the first stop is probably LFIT. I don't know where it would land as far as this project goes, but there are a lot of different projects within the CNCF, not just LFIT, that could possibly use something that does this with Ansible, and some are already interested in using other playbooks to manage their infrastructure, whether it's CI, CD, or whatever. I don't have a specific one I can point out; maybe someone else here knows of other projects that could specifically use it. I'm interested in it, I just can't think of a specific place right now.

No worries. I think LFIT is definitely the right place to start, then. If we land it there, groups who want to use it for a specific reason can pull it from Galaxy, as it's called, and that would make the deployment quite trivial at that point. Sounds good. Well, I don't have any other questions, so let's follow up afterwards. Okay, thanks. We have the link for this, and we can share some information on how the cross-cloud CI project does deployments and provisions everything for Kubernetes; that way you can take a look and compare. Again, I don't really think it's going to be a replacement; they seem complementary, for different purposes.
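To make Frederick's "you just specify the nodes and the version" concrete: a CI job could drive a playbook like that with a couple of flags. This is a hypothetical sketch; the inventory file, playbook filename, and the kubernetes_version variable are made-up stand-ins, since the real playbook defines its own names, and it assumes ansible-playbook is installed on the runner.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// Invoke a kubeadm-style Ansible playbook the way described above:
// point it at some hosts and tell it which Kubernetes version to lay down.
func main() {
	cmd := exec.Command("ansible-playbook",
		"-i", "hosts.ini", // inventory listing the nodes to deploy onto
		"-e", "kubernetes_version=1.11.2", // hypothetical variable name
		"deploy-kubernetes.yml") // hypothetical playbook file
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("playbook failed: %v", err)
	}
}
```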
Yeah, and for Network Service Mesh, if I can see what that is, it'll give me a good overview of which direction I should head in to make this happen. I'd prefer to work with what the community is doing; I could roll my own without too many issues, but if the community is already adopting something for this role, there's no reason to deviate. Sounds good. Thanks, Frederick. Sure, my pleasure.

Thank you, Frederick. Are you ready to talk about some of the updates made on cross-cloud CI? Then we can jump into the questions.

You went over a couple of related ones, the Oracle stuff; Oracle is in progress. And I don't know if we mentioned it last time, but VMware vSphere... anyway, let me go back. Where were we? I see your window now. The VMware integration is done. We've had a lot of pull requests and updates trying to fine-tune some of the differences there, so I appreciate that, Andrew, especially making some of those general-purpose, like skipping deprovisioning, so that if we want to debug or keep something up a little longer for whatever reason, we can. Appreciate that. The Oracle Cloud integration is in progress, as we talked about earlier. We're keeping up with updates on all the different projects, including some changes that require internal changes on our side to make things more general-purpose, to handle updates between versions of Kubernetes or whatever else. Denver just fixed an issue on IBM Cloud with deprovisioning, where upstream changes required changes on our end in how we do Terraform templates and how the resources are managed; that fixed the issue.

Since you're talking about bugs, can I raise one? It's not one I see here, but I see it all the time, and I'm wondering if it's something you're aware of. I mean, I know you're aware of it; I just don't know if it's an actual bug. Yeah, go ahead. Oh, I can wait. Or we could just post it; do you want to post it in here? Where do you want me to post it? Is it an existing bug that we don't have listed? I don't know if it's an existing bug; you can tell me if it is. It looks like environment files are truncated, and that fails builds. I just wondered if that's something that's known and being worked on. Have you all seen that a bunch?

If you'll post it, you could post a link. It may be flake; I don't know for sure what you're seeing, but there's one area that's related, which is actually 404s for the artifacts. If for whatever reason we can't retrieve them, sometimes that environment file you're talking about will look like it's truncated, or something else is going on, and we're trying to expose more of those cases. So, happy for you to raise an issue and post that, especially if you have specific details. We're trying to differentiate those failures and make the underlying issue more visible, so that the top-level symptom isn't mistaken for the real problem. Gotcha; it's probably that, then. So yeah, I'll file something with some links. I think we'll probably talk about a few of these things later, along with some questions you had. Okay, thanks; yeah, please post a ticket on that.
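As an aside on that failure mode: distinguishing "the artifact store returned a 404" from "the file came back short" is mostly a matter of checking the status code and comparing bytes received against Content-Length. A rough sketch of the check, with a placeholder URL:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

// Fetch a CI artifact and separate the two failure modes discussed above:
// a plain 404 from the artifact store, versus a response that starts fine
// but arrives shorter than its declared length (which renders in the UI
// as a "truncated" environment file).
func main() {
	resp, err := http.Get("https://gitlab.example.com/artifacts/env.sh") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusNotFound {
		log.Fatal("artifact missing upstream (404); surface this, not the downstream build failure")
	}
	body, err := resp.Body, error(nil)
	data, err := io.ReadAll(body)
	if err != nil || (resp.ContentLength >= 0 && int64(len(data)) < resp.ContentLength) {
		log.Fatalf("artifact truncated: got %d of %d bytes (err=%v)", len(data), resp.ContentLength, err)
	}
	fmt.Printf("artifact OK, %d bytes\n", len(data))
}
```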
So, slide 13. We've been attending a lot of the different groups, trying to figure out how we can collaborate, related to what Frederick was talking about as far as reusing tools, and trying to work directly with the different teams: the Cluster API breakout, the Cluster Lifecycle group, NSM with you, Frederick. We're looking at how we can extend or reuse different components, so we're doing a lot of that. Watson has been working a lot on mind-mapping and figuring out what all the different groups need, so more details on that, our roadmap and planning and what we're seeing between the different teams, will be upcoming.

Slide 14: some initial thoughts on what we're seeing, before we get into items that could be project-specific. We're looking at splitting the screens out and having a project overview screen in the dashboard. We've talked about that for a while, and it still seems valuable, so that we distinguish between the Kubernetes deployment and how it's provisioned on one hand, and the projects and their tests on the other. Sonobuoy: we're working to update and replace what we had originally set up with kubetest, and then adding kubeadm and potentially Ignition support; Andrew, you had brought that up as far as what's supported in CoreOS. So that's something we'll be examining further: whether it runs side by side, whether we can replace things, and what we want to do there. Harbor is something we're looking at to make us less dependent on GitLab, so we could have different components, and at a minimum a layer that lets us use images and artifacts that live somewhere outside GitLab, which would be nice. We have seen issues, possibly including the one you're talking about, Andrew, where the environment looks truncated or you get 404s, and it's actually just the GitLab image repository not keeping up with requests. So using something that's more of a dedicated repository, focused on that, might be a good idea.
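On the Sonobuoy item above: operationally, "replace kubetest" could be as simple as a CI step that shells out to the sonobuoy CLI. A rough sketch, assuming a recent sonobuoy binary on the PATH; the kubeconfig and output paths are placeholders, and the flag names are from later sonobuoy releases, which may postdate this discussion:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Kick off a conformance run and block until it finishes.
	run := exec.Command("sonobuoy", "run",
		"--kubeconfig", "/tmp/kubeconfig", // placeholder path from the provision step
		"--mode", "certified-conformance",
		"--wait")
	if out, err := run.CombinedOutput(); err != nil {
		log.Fatalf("sonobuoy run failed: %v\n%s", err, out)
	}
	// Pull the results tarball back so the CI job can archive it as an artifact.
	out, err := exec.Command("sonobuoy", "retrieve", "/tmp/results").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("results tarball: %s\n", out)
}
```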
Those are some of the thoughts on what we're looking at maybe sooner. Slide 15: automating the project release updates. A lot of that ties in with the fact that we're running at 3 a.m.; if automation happens, how soon do we find problems and get through the environment? There are some issues there, but I think it's desired, so problems surface a little sooner. That could be tied in with doing on-commit runs, and we may decide to do some things with flags from the different projects; that's been a desired way to control when to do full runs of the CI tests. So, anything along the lines of automating those new updates from the upstream projects.

Then the API. I think there's still a big desire, which we keep hearing, for access to all the data: historical builds, where artifacts came from, whether we pulled them down from upstream, when the build happened, all the metadata and status that may not be shown in the dashboard. People find that stuff useful, so an API for those things is something we'll want to provide along with whatever changes we make to the dashboard itself.

And then rollback to previous working releases. This ties into something folks have talked about quite a bit: if we have a failed deployment or build, or the end-to-end tests are failing, we want the dashboard and the API to actually show the last release that built and deployed successfully and passed the end-to-end tests. Providing that detail is important, and it will help with false positives: instead of just showing red on the dashboard, we can say here's the current one that failed, and here's the last one that worked, which you could still use. And I talked about the new screens.
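That "last known-good" display reduces to a simple query over build history. A minimal sketch, with a hypothetical record shape standing in for whatever the status repository actually stores:

```go
package main

import "fmt"

// BuildRecord is a hypothetical shape for the per-release history
// the dashboard/API discussion above asks for.
type BuildRecord struct {
	Release string
	Built   bool
	E2EPass bool
}

// lastGood walks history newest-first and returns the most recent release
// that both built and passed e2e, so the dashboard can show "current
// release failed; last known-good was X" instead of bare red.
func lastGood(history []BuildRecord) (BuildRecord, bool) {
	for _, r := range history {
		if r.Built && r.E2EPass {
			return r, true
		}
	}
	return BuildRecord{}, false
}

func main() {
	history := []BuildRecord{ // newest first; sample data
		{"v1.11.2", true, false},
		{"v1.11.1", true, true},
		{"v1.11.0", true, true},
	}
	if good, ok := lastGood(history); ok {
		fmt.Printf("current failed; last known-good: %s\n", good.Release)
	}
}
```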
On slide 16 are some of the groups we're attending and events we're going to be a part of; if anyone's interested, please join. And if you're in Austin, there's a fun thing that started where there's a lot of good open-source conversation. I think that's it. We don't have a Q&A section here; I think we'll have that as part of the next item. So, Andrew, I'll hand this over to you to kind of start, if you want to give some thoughts on where this is going.

Well, thank you, and thank you for that, Taylor; I appreciate the slides, and I'll try to be better about not interjecting in the middle. I guess I'm just used to that. I'll file those issues. So, thanks. I reached out to Lucina a while back with some questions for the team involved regarding the cross-cloud dashboard, and she suggested this would be a good venue for it, so other people could benefit from the conversation. Essentially, VMware is getting more interested and involved in Kubernetes end-to-end testing, including test-infra, Prow, Testgrid, and whatnot, and one of the topics I raised with my colleagues is: hey, this group had to deal with getting involved in Kubernetes end-to-end and conformance testing, not in the exact same way, but conformance testing nonetheless. So I'm curious, for myself and for VMware, and I'm sure other companies and organizations would also benefit from your answers: when you approached the CNCF, or were approached, what were the goals of the dashboard? And I think you reached out to me on Slack and asked me to clarify "use cases." What I meant by use cases was: did they come to you and say, this is exactly what we want? Or did they come to you and say, you need to provide a solution, here are these three use cases we want solved, and this was your solution for that? Additionally, if you could have done it all over again, how would you approach it? What feedback did the CNCF provide in that process? And finally, what are the plans, possibly, to expand the dashboard to cover end-to-end testing? I say that having just exited a conversation in SIG Testing where end-to-end testing, or at least a subset of it, is the conformance tests, and I think you would say as well that the dashboard is, in a lot of ways, conformance testing; I think you've called it that. So is there an opportunity for the dashboard to contribute back to Testgrid, and if so, have you considered that? I'm trying to, one, learn from your process, learn from any mistakes you made or what you'd do differently, and also not duplicate any future work you could share, in VMware's own efforts to expand into Kubernetes testing. Hopefully that was all clear as mud.

Thanks for that intro; you provided a little more info for me as well, so thanks. I think as we go through this, we've laid out most of our responses, but there will probably be some conversation around it based on what you're saying, and it looks like we have quite a few folks on the call who could respond to additional pieces. Where are we... go ahead.

Yeah, about the goals and use cases: that was more of a "hey, I'm managing a similar project at the moment, and I'm being selfish." You're very smart people who did this and figured it out, and I want you to do my work now. I want to learn what you encountered and how you responded to it. I'm being selfish, and hopefully myself, and again others getting involved in Kubernetes testing, can learn from your experience.

Absolutely, and I would love for that to happen, where more folks are utilizing at least different parts of the components and we can share back and forth. So, let's see. This is kind of an overview, and thanks, Lucina, for adding some of this in here. I may have HH talk a little bit; he's on the call, and Denver as well, of course. To give some way-back background: HH and I have worked with Bob Wise in the past, so based on familiarity with some larger projects and helping with CI, there was a need around trying to do CI/CD testing with Kubernetes, and over a couple of years different conversations helped move that forward. Eventually a request came back, primarily to HH, to build out something we could demonstrate; it started as a demonstration. HH, do you have anything to add on the background there? I know this first part was when you got started with Denver and got things kicked off; we have some other specific things we can go into.

Andrew, there isn't something specific I can speak to there, other than that it was an undefined "we need to demonstrate the interactions of all of the CNCF projects as a whole."

Yeah. And I guess I was focusing more on the conformance-testing aspects with respect to Kubernetes. That's a longer journey. Go ahead; I think Taylor's doing a great job of covering the beginning of things, and as it gets later I'll pick up. Let me quickly say that it's been difficult to get as much conversation going around CI testing, in some respects, because there haven't been as many people who have implemented things that work with Prow externally.
Because you did a similar thing across multiple platforms, I'm looking for examples of how you went about organizing it, to do a similar project internally. My approach was going to be to ask privately, since I didn't want to take up time here, but like you said, I think it's a good story for everybody, actually. Go ahead, Taylor. Oh, I wasn't saying anything.

Okay, I'll just quickly speak to the choices we made. Looking at what was in the community: SIG Testing only covered one cloud at the time, and we were looking for something more generic. There were not a lot of tools out there that did multiple providers well; it was a very busy field, with everybody trying to do the same thing. So we made some early choices to generalize, to make sure nothing was specific to a provider. One of those was finding an API to all of the clouds, and we went with Terraform for that. Another was finding an OS-agnostic approach that was well suited to CI: instead of trying to SSH in, or use any after-the-box-is-up provisioning tools, we decided to go with cloud-init, using the very simple "put some files here, and when systemd and everything starts up, things run" style rather than the more advanced stuff. Those design decisions have stuck pretty well. We also looked for a CI platform that was put together enough that we didn't have to do all the glue at once, so we made an early decision to use GitLab; I really liked the approach of being able to do something open source and drop in a single file to get started. I think that pretty much covers the design decisions from back then.

As we worked more and more with test-infra, we've been trying to find ways to reuse that approach. I don't know if you've been involved in the conversations, in several places, but particularly for CI within test-infra, the flow focused on kubernetes-anywhere for a while, and now that Cluster API is beginning to come to fruition, it looks like Cluster API driven by kubetest is the long-term, broader, Kubernetes-specific community flow. I'll let everybody else speak, but I myself am leaning more and more toward it: I would suggest we seriously explore reusing all of the CNCF cross-cloud CI provisioning, adding that kubeadm support, and putting all of the existing infra behind something like Cluster API and kubetest, so there's a simple way that this CI-focused thing is something the whole community is using, leveraging all of the prior work. There may have been more to the explanation, but anyway, that's mine.

That was great. I had the same conversation about Cluster API the other day. We're making a similar design choice: we're using a Docker image that can be part of a pod spec for testing, and that image will be in charge of deploying a vSphere cluster and Kubernetes, and guess what, we're using Terraform. Basically, we've copied your stuff twice, and eventually it might end up being a Cluster API thing, and then, I said, step three is Prow leveraging Cluster API directly, potentially. But yeah, I've copied your stuff twice now
and have it working, and you know, it's what we have, and it works really well. So thank you. Yay.

I would throw it out to the community, or see if we can't allocate some resources to go ahead and follow your lead there. And you are using kubeadm, right, within the changes you've made? Or is it so far just about deploying the vSphere side? No. No? Okay. Well, I'll let Volk speak to the specifics of it, but it'll be an undertaking, mind you; I think the value we'd get out of it would be tremendous, though, and it would benefit the whole community, particularly for all the providers we already have running. Sorry that you had to do the heavy lifting for the first foray into the Cluster API; thank you for that. Yeah, I think they underestimate the... cool. Thanks, Chris. Thanks.

Yeah, so I think one of the big things is that there are a lot of different groups trying to solve overlapping problems. One thing that's important to recognize is that there's not going to be one tool that solves all of the problems for everyone; there are different end users and use cases. Ideally we'll have different components that can be reused between them, and you'll be able to compose those for your own needs, which will be different from someone else's.

Let's see. I think we've kind of talked about all of this a little, but let's run through and see if there's anything else specific in answering your questions. Going back to that first slide, slide 18, a quick review: what were the project goals, the identified use cases, and what we'd have done differently. The "done differently" question is one of the more interesting ones to me, because with huge projects I always think, oh, I could have done this, that, and the other. Absolutely, I agree; there's always something that, with hindsight, you would have done differently and made more useful.

Let's hop to slide 20 real quick. This is just a quick overview of the goals, tied in with why we're here and, at a higher level, what HH was just talking about. The CI platform goal is tied to building out that initial platform, what it could do, and the different pieces within it. For provisioning Kubernetes, we used Terraform and cloud-init to be able to handle a larger set of scenarios; at the time, Kubernetes wasn't really in a place where it could solve the needs for HA clusters and handle bringing up all of these nodes with various requirements, so those were the reasons that approach was chosen first, and that's done. Then a status repository for storing all that information, which would be used by the dashboard. As for how to display it all: one of the biggest goals was that, whatever CI system sat underneath, we needed a high-level dashboard, and that's what we have now. It happens to show many statuses, which maybe we'll get to in the what-we'd-do-differently part: it displays a lot of things, but it stays at that high level. And then integrations with external systems, which was in our minds early on, but it didn't really happen until we were able to integrate with ONAP, because they had their own CI systems, and we were able to use their artifacts and start working with them.

On the use cases, we did put a few in here as a response. I don't know if you responded in Slack;
I didn't see it, Andrew. We didn't write them up as, say, traditional use cases where you fully identify the user, but you could create those. The first: vanilla Kubernetes working across different cloud providers. That ties in with what HH was saying about Testgrid, what cloud providers are covered, what's being done now, and how you can do this across multiple providers. That was the use case, and I won't talk about implementation, but there were a lot of tools at the time that could do a few providers; none were doing all of them. So that one was a solid use case to get going on at the time. Then, from the side of an end user, a lot of folks were afraid of vendor lock-in: okay, Kubernetes can solve this, but how does it actually work? So this is the idea of something more than a demo, something live and running and doing testing, showing from an end-user perspective that Kubernetes actually can be deployed, that you can deploy the apps across all of those providers, and that we can see it working and see it's valid, running the tests for those applications. The vendor lock-in case is related to the first one; it's just more specific to that actual end user.

Then the cloud-native apps: interoperability between them when they're running on Kubernetes. Maybe Prometheus providing metrics for another project, CoreDNS, or different things, whatever you want to show, but the interoperability between those and how it works, specifically utilizing functions on Kubernetes; maybe that's just the networking, or it could be specific components. That goal is there, but that use case, I should say, is a harder one to reach; it's kind of a reference-architecture type of thing, and it's better to have scenarios. It's something we've kept in mind, though, as we build this out. And then the builds, all the status, e2e, everything else. The use case: how do we look at a high-level status and actually see how these different pieces, specifically the CNCF projects, which is what we were given, all work at a quick glance, versus going to every project?

Yeah. I feel bad that you went through all this effort to lay this out for those questions. At the same time, it makes me feel worse that your team isn't somehow involved in the leadership of the Cluster API, because everything you've described so far is just the perfect project plan for the Cluster API and what its goals, in my mind, should be, and you already have a solution. Anyway.

Yeah, well, we're actively working with them, and HH is also working on a related project; I don't know if you've looked at APISnoop, but all of these are related to how we help these groups. In a lot of ways, in addition to providing some of the goals we had with the dashboard, it's trying to fill some of the missing gaps between all the projects, where things overlap but don't quite meet. And we're participating in a lot of the group calls and everything, seeing how we can provide feedback. Some of these things were way ahead of their time: kubeadm wasn't ready to be used, and it's now at a stage where we can start collaborating; Testgrid and the API coverage from APISnoop may not have come at the right time early on, and it's now at the right time, where they're trying to meet a goal
with conformance; the conformance group now has more providers that are interested, so it's kind of coming together. I appreciate your comments on that.

So let's move on to what you said might be exciting: what would we do differently? We broke some of these out. Denver, I don't know if you have good audio, but if you have thoughts, or anyone else wants to speak up, feel free. Modifying how we build and use the artifacts would be a big one. One of the things that was desired, and that we didn't really get to until, say, ONAP was using external artifacts, was really thinking about reusing and saving artifacts. That's something you'd want to do once they're validated: if a project already has artifacts, and they're producing them in a way that's easily accessible, and we can validate them, maybe with the CI tests, then we don't need to rebuild them; we can use some of theirs. There's a bit of complexity around determining that, and whether it's per-project if there's no common format, but that's something we're moving toward, and there's more we'd put in place, with more collaboration with the community early on, so we'd be using those verified artifacts.

Besides that, and the display of them, maybe the API... I'll move on to the provisioning side of Kubernetes. This ties in with what we talked about a bit: Terraform with Packer, to create pre-baked images. This would be beneficial in a lot of ways. It would speed up the deployments of Kubernetes: if we've already validated the images, run performance tests and everything, and we have images for all the cloud providers, we could spin those up when we're running additional tests for, say, the apps, or whatever parts we need. This is about using those validated, verified artifacts; they could also be made available to the community, so we could say, hey, we ran these, here are the tests. We've also pulled data from other places. For Kubernetes specifically, for the bootstrapping of Kubernetes, what we'd look at right now would be the cloud-init phase, where we lay down the manifests: using kubeadm in that part, after Terraform.

Yeah, this is the part that interests me most, because this is the piece of the project on which I'm currently working, the Terraform stuff. I'm just at the point of laying down Kubernetes, and to see you say that you would use kubeadm at this point is very interesting to me. It doesn't totally divorce you from something like cloud-init or Ignition, because you usually need that to configure the actual guest OS outside of Kubernetes, but it makes me wonder if I shouldn't consider kubeadm instead of basically just copying your stuff. I'll talk to you; maybe I'll bring that up with you offline.

Absolutely. I think this is still in the investigating stage for what the cncf.ci dashboard is doing: is it ready, do we have the different pieces? Maybe it ends up being called by cloud-init, while we're not actually laying down the manifests ourselves, or maybe there's some preliminary stuff, so there's some work to look at right there. I think what we have right now, if it's useful, then use it; it's still valid, and hopefully whatever you have would be compatible if we move over, and you'd see how to migrate toward it. It's definitely not something you're going to see real soon, and for the rest of the pieces we're going to try to make, I don't want to say backwards compatible, but a migration path.
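Whichever bootstrap path wins out, cloud-init-laid manifests today or kubeadm later, the CI pipeline downstream needs the same signal: every node registered and Ready before tests start. A minimal sketch using client-go, assuming a recent version where List takes a context; the kubeconfig path is a placeholder for whatever the provisioning step writes out:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allReady reports whether every node has the NodeReady condition set to True.
func allReady(nodes []corev1.Node) bool {
	for _, n := range nodes {
		ready := false
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return len(nodes) > 0
}

func main() {
	// Kubeconfig produced by the provisioning step; path is a placeholder.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for i := 0; i < 60; i++ { // poll for up to ~5 minutes
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err == nil && allReady(nodes.Items) {
			fmt.Println("cluster is up; safe to start e2e tests")
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("cluster never became ready")
}
```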
Let's go into slide 23; we've just got a couple of minutes. The GitLab implementation: as we were talking about earlier, we used GitLab as the main CI platform. We'd probably look at keeping things more separate, not so dependent on the GitLab-specific pieces from the start; there are definitely parts that could have been separated a little more, even if they weren't completely independent. It was definitely good for getting started, and there's some really useful information there. A specific thing, though, would be the webhooks: the GitLab API is not really reliable, and it can return different information from what you see in the database or the web UI, so there are benefits to using the webhooks for the events instead. Having more of that knowledge ahead of time is hard, but if we'd had it, that's something we'd have done once we really got to know GitLab.

The status repository: the long-term goals for integration and working with the community were there, but actually planning it out, and having enough time, which we didn't, to talk with the community about what that could be, would have been nice. It would have affected some of the collaboration, the metadata, and everything else, and potentially allowed other folks to add "here's an integration to our CI system, and we're using this community format we all agreed to." Some of that's happening in OpenCI, and Watson's been working on white papers and collaborating, so I think we'll see that; it would be an early thing. And Testgrid and all the different systems we're interested in.

So, direction and feedback. We're about at the hour, and you can read some of this; there are a couple more slides if y'all want to read the answers, like where we got some direction, collaboration, and the plans to cover Kubernetes e2e testing. We'll keep working with the different groups, like Kubernetes feature flags, and work with some on conformance specifically. So, slide 33: how to connect with us if y'all want to continue the conversation. Lucina, do you want to close out?

Thanks, everyone, for your time and participation; hope to see you again next month. The next working group call will be on Tuesday, August 28th. Some action items we have: we'll follow up with Frederick on the Ansible Kubernetes install; we've got the Oracle Cloud pull request; and we're working on a Linkerd build issue today. We'll be attending the Open Source Summit in Vancouver at the end of August, where the CNF workshop is happening on Tuesday the 28th, the same day as the next working group call.

Thank you, everyone; thank you, Lucina, and Taylor, thank you for your time. You're welcome; thanks, everyone. Also, you made me interested in the OSS Summit; that's a better place to be than Austin in August. Absolutely. Cheers. Cheers. Bye, everybody.