Let's get started. Welcome, folks. This is the OKD working group meeting for July 5th of the year 2025. I've dropped the agenda in the chat; I'll drop it one more time for folks who have just come in, and feel free to take 30 seconds to look it over. Let me know if there's anything that's been missed that you'd like added, moved around, or anything like that. Did I say 2025? Oh, 2022. You just get ahead of yourself. I am. I'm optimistic about the longevity of OKD. Absolutely, a good thing to be optimistic about. Open source code lives on forever. So the agenda mentions CentOS Stream CoreOS updates from Timothy, but he's not here, so I don't know if somebody else can speak to that or if we should nix it from the agenda. Yeah, Christian, unless you have anything in that regard, we can hold off until the next meeting. Yeah, I've got no updates so far. We're still semi on track for the end of July; that's basically the only update I have. We'll probably need a little bit more time, because, and Timothy is probably the best person to talk about this, there are a couple of blockers on the CoreOS side, which Timothy is now leading, which is great. We certainly won't make mid-July; maybe end of July, maybe August. All right, then let's move into OKD release updates. Again, I don't have a lot to say about this. I was out last week, but there is a new release from the 24th of June. I haven't been following through with any issues on it, though, so if you have any, please make sure to file them. One thing I wanted to mention, not specifically about the current release but in general: we have a Jira board now for OKD on the Red Hat Jira instance, and it should be public. I think all the cards on there are public at the moment.
So you should be able to use that to track things. There is a broader plan to move from Bugzilla to Jira eventually. I don't know how far along that is, but we'll be using Jira more and more to track things, and I think in the future there will also be the option to actually file cards yourself from an external standpoint. I'll throw the link in here; if you're interested in following what we're doing internally, that is the Kanban board for our current issues. Yeah, and that's it for me. Oh, Brian, please. Yeah, I just wanted to ask, is it okay to put this on OKD.io, or is it not public yet? It is public. I don't know what management wants, but it is public, so feel free to advertise it. Okay, can you throw the correct link in chat, please? And can you throw in a link to the latest release as well, and I can post it to OKD.io and Twitter while we're here. I think before, we usually did the release just before the OKD working group meeting, and now we do it in the alternating week, so it's again nine days ago. Maybe we can change that in the future, because at least for FCOS we'll move to a sprintly release, which is every three weeks; currently we have this every two weeks. We might align with the FCOS release in the future, and then we'll just have a release every three weeks. I don't want to ask to change this cadence at the moment; for now, this will just be the way it is. That's fair. Okay, any other questions for Christian? I just have one comment: thank you for doing the Dublin event. And the video is up. It's unlisted, but it's public; I will share the link with everybody here in a few seconds. And Brian Innes, we could put up a little blog post with that video of the talk that you two did embedded in it. So thank you both for doing that in Dublin.
That was an absolute pleasure, and it was great meeting you, Brian. Really cool. Yeah, likewise. All right, let's move on now to FCOS updates, which Dusty says there are none, so we can skip. Any questions for Dusty, actually, or any feedback on FCOS stuff while we're here? Open for questions if anybody has any. I have one question, but it might be for Timothy actually. One of the blockers for SCOS is the lack of a couple of RPMs, namely cri-o and cri-tools. I think those are the only ones that aren't actually built from OpenShift; there might be one that comes from the fast-datapath repository. Do you have any idea how these RPMs are going to be built for CentOS Stream? If anyone can help expedite this, I'd be happy to reach out to the maintainers and help them set up for CentOS Stream. But if you don't know, that's fine. Yeah, honestly, not a clue. I'm not plugged in on the CentOS Stream CoreOS stuff at all, so not quite sure there. No worries. All right, let's move on to our, whoever has the clicky keyboard, could you mute yourself? That'd be fantastic. Let's move on now to documentation subgroup updates with Brian. Okay, so last week we had quite an interesting discussion on technical documentation, and we're going to look at doing a couple of tidy-up things there. One of them is the guides: we actually want to work out what a guide is, because what we've got down currently as guides are more example setups, so I think we're going to turn them into blog entries. Then we're going to look at what the strategy is for creating guides: where do we want them, who do we want to create them, what should they contain, create templates, and have some standard guidance for how people onboard with that. We also talked about the repo move, and as soon as we line up the Red Hat resources that can actually do the DNS updates, we will be looking to actually move
the openshift/okd and openshift/okd.io repos into the okd-project GitHub org. So we're moving them out of the openshift org so the community can take greater ownership of them without stepping on Red Hat internal requirements. We also had the initial discussion around how we want to onboard CentOS Stream, if the community decides we want to take that on as an ongoing distribution. The other thing we talked about was the technical documentation. I've been trying to get some technical documentation together and falling over a number of issues where things aren't quite as clear as I'd want them to be. I noticed there was an item put in from Jack, and yes, we are going to cover that, hopefully in a discussion today. It's really about how we enable the community to be more active, to build and customize, and also the possibility of the community building an OKD operator catalogue. We've been talking to Red Hat engineers for several months about this, but it never seems to get to the top of their to-do list, and there are quite a few active community members who want this to happen. So if we uncork the ability to build, we should allow the community to actually build that catalogue in the okd-project Git repo. We had quite a conversation around that, and I want to pick up on it later in the meeting. So I'll stop there. Any questions? If not, I'll hand back to Jamie. A couple of other things: specifically, some of the things that were guides and are getting moved to blog posts are the homelab guides from Shreya and Vadim. Both of those were really individual descriptions of a homelab, which weren't really guides, because they didn't explain how to follow a process, an installation, or anything like that. So those are the ones that are going to get moved to blog posts.
Jack has agreed to provide a blog post on what they've been doing. Yeah, when you're done, I just, I can't find the raise-your-hand button here; I think I'm just Zoom-trained. So Jack has volunteered to write a blog post about what they're doing at CERN with OKD and give us the basics of how they're putting things together. Glenn Marcy, who you've probably noticed in the Slack channel, has been doing a lot for SNO and is going to do a blog post for us to help describe some basics of getting OKD SNO up, and there's one other blog post coming from someone else as well. So we're actually going to start building content that will be available as blog posts, and I think that's going to pull in a lot of people. One other thing from the documentation subgroup that wasn't mentioned: Diane did reach out to Red Hat to get details on modifying the MX records of the OKD.io website so that we can actually manage our own email and start creating some email addresses specific to OKD.io, which would be helpful for a lot of things like the Twitter account. Okay, Diane. I found the hand-raising, thank you for your hints in the chat. Two things. On the MX: I did make the request to get that changed so that we can have an OKD.io address, and legal is reviewing it. You do have one, and Dusty and everybody else, for Fedora, so I don't think it'll be a problem, but before the infrastructure people will do anything they need legal sign-off. The request is in. On the hackathon for creating our own operator registry: there's a gentleman at Red Hat, Austin McDonald, some of you may know him; he's kind of the community lead architect for the Operator SDK and Operator Framework. I asked him if he would be willing to jointly do that, bring in someone to talk about OLM, give maybe a little intro, and hop in on that, and he's totally game.
What I'd like to do, Brian, is connect you and him together, because you had a hit list of operators you wanted to get done and a good sense of what the setup for the day would need to be. So if the two of you could get together and figure that out, what I'd like that hackathon to be is something that brings the Operator Framework community to the OKD community to help us do that work. I think you had five or six operators you were keen to get, and you can be selfish here because you know you're going to help drive this. Then maybe find the people who wrote those operators, at Red Hat or elsewhere, and invite them to come and be the ones in the room helping us hack on that, and just get some engagement there. The other thing I was going to say is, if the OKD CentOS Stream CoreOS thing is going to get delayed, maybe we should do this in August. August is everybody's vacation month all over Europe, but use it as a hackathon opportunity, because after tomorrow, when Brian is talking for us in London at the gathering, I don't have any events I have to host and organize, so I could do something on this if people are willing. So I'm going to hook Brian up with Austin, just do a mind meld, and then we'll move it forward at the next meeting. Go ahead, Richard; I'll lower my hand. Just a very quick mention: here at Red Hat, or in the OpenShift org, we have, is it hack week or shift week, next week, meaning we'll be given some free time to work on the things we personally want to work on, essentially. So we were thinking this might be an opportunity to motivate Red Hatters to help with the operator building. With SCOS, that's very much with the CoreOS team and we can't really expedite it, but on the operator side we can really do a lot already.
So if you have a specific operator you want built, and you maybe already know the person that maintains it, please reach out to them and ask whether they have time next week, and tell them, oh, come on, you have shift week. And reach out to us too; we can help organize meetings for specific operators. Okay, Christian. I mean, one project, if there are people looking for what to do, which hopefully when Brian speaks we can tie together: ideally I want this to be done in the okd-project Git repo, and I want it to build on OKD with Tekton Pipelines and GitOps, so it's totally outside of Red Hat's internal systems and the community can actually do it on their own clusters. So we need that infrastructure defined, sorted out, whatever. Again, that's something I was hoping the hackathon would do, but it's there if people are looking for work to do next week in their free time. Regarding the infrastructure we want to use, I think there are two options. We could either use GitHub Actions and just use GitHub infrastructure for this, or we use the Operate First cloud, which we'll leverage for more things in the future as well, like an update graph and things like that. That would be the place where we could possibly deploy Tekton, and even just an OKD cluster for this working group, which we would then maintain and own ourselves. So I think that might be a good starting point, and depending on what you want to do, GitHub Actions might be the path of least resistance here, but I think eventually having properly defined Tekton pipelines for this is preferable. And I think I can help you set up a meeting with the Operate First folks, because they will need to know you and what you want to do, and then they will deploy a cluster and hand you the keys to the kingdom.
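As a rough illustration of the "properly defined Tekton pipelines" Christian mentions, a minimal pipeline manifest for building one operator image might look like the sketch below. The `git-clone` and `buildah` task names are standard Tekton catalog tasks; the pipeline name, parameters, and overall shape are hypothetical, not taken from any real OKD pipeline. Emitting JSON keeps the example self-contained (Kubernetes accepts JSON manifests as well as YAML).

```python
import json

def operator_build_pipeline(operator: str) -> dict:
    """Sketch of a Tekton Pipeline manifest that clones a repo and
    builds a container image. Everything except the git-clone and
    buildah task references is a made-up placeholder."""
    return {
        "apiVersion": "tekton.dev/v1beta1",
        "kind": "Pipeline",
        "metadata": {"name": f"build-{operator}"},
        "spec": {
            "params": [{"name": "git-url", "type": "string"}],
            "workspaces": [{"name": "source"}],
            "tasks": [
                {
                    # Fetch the operator source into the shared workspace.
                    "name": "clone",
                    "taskRef": {"name": "git-clone"},
                    "params": [{"name": "url", "value": "$(params.git-url)"}],
                    "workspaces": [{"name": "output", "workspace": "source"}],
                },
                {
                    # Build the image from the cloned source.
                    "name": "build-image",
                    "runAfter": ["clone"],
                    "taskRef": {"name": "buildah"},
                    "workspaces": [{"name": "source", "workspace": "source"}],
                },
            ],
        },
    }

manifest = operator_build_pipeline("pipelines-operator")
print(json.dumps(manifest, indent=2))
```

A definition like this could be checked into the okd-project repo and applied with `oc apply -f` on any cluster running Tekton, which is the portability point being made above.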
So I'm just going to raise my hand and try to be quick and succinct. Brian, if you can write up that list of operators you were looking for and send it to Christian and me. I don't think Brian has the ability to target the people who are on the shift week, the hack week, next week, so if you can, CC Austin McDonald and Christian and make that connection, and use that time. And, you know, if we got a few of them done in advance, excellent. I want to keep things moving along because we do have a guest, actually multiple guests, and I want to make sure we get to them. Now I'll hand it over to Brian to talk about the build service and Red Hat's hybrid application console. Brian, take it away. Yeah, thanks. So maybe this is related to the previous conversation; I wasn't quite catching the context there, but if I heard right, there might be a third option for you for doing some of these builds. I'm Brian, a product manager at Red Hat, and primarily in the past I have worked on our internal container build infrastructure. We have put together a team to rebuild some of our, well, pretty much all of our container build infrastructure in a way that allows us to build secure containerized software in a managed service, for Red Hat to be able to ship to customers, but also in a way that customers can sign up and use that very same infrastructure, that very same service, to build their own software. We have tried to be very, very thoughtful in how we design it so that it would scale from very small projects up to very large projects, and in fact OpenShift is the project we used as our large-project benchmark. So we have done a lot of investigation into how the OpenShift build process works today and how we could generalize it to work for things besides OpenShift, but also one day onboard OpenShift onto our service. And to that effect, we have a
very specialized controller which handles a lot of this stuff and might make your job a lot easier. Under the hood we implement Tekton pipelines: we basically provision opinionated Tekton pipelines for you that get sent to your Git repo as a pull request. You would merge it, and after merging it, you're able to modify it. You're what we call a tenant; so if you guys did this, you would have your own tenant, and tenant policies can put restrictions on how you're able to modify things. You can always modify the pipeline however you want, but the restriction comes in when you release. For example, if you wanted to do the kinds of things we do with OpenShift, you have to do all of your builds in a network-disconnected, hermetic environment. You could remove that step from the pipeline, and if that policy is in effect for your tenant, the build would still be successful; you could test it and run it in a staging environment, but you wouldn't be able to release it, because your tenant policy would prevent that. So our goal is to allow people like yourselves, community folks, or people who are our customers, to use this service to build SLSA level 4 compliant software, to generate provenance for those builds using Tekton Chains, and to write custom policies beyond SLSA to put guardrails around what you want to be allowed to be released. Diane and I were talking about this off to the side; she mentioned you're looking for a new home to build your software, and what we're working on might be a really good fit for you. It might also be a really interesting test for us, a nice challenge, to prove that we can scale to the things we designed for on paper. So, yeah, I wanted to see where it would go.
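For context on the provenance piece Brian mentions: Tekton Chains emits signed in-toto attestations carrying a SLSA provenance predicate. A skeletal example of that JSON structure is below; the `_type` and `predicateType` URIs are the real in-toto/SLSA identifiers, while the subject name, digest, builder id, and buildType are made-up placeholders for illustration.

```python
import json

# Minimal in-toto statement with a SLSA provenance predicate, of the
# kind Tekton Chains generates and signs for each pipeline run.
# Subject, digest, and builder values here are hypothetical.
attestation = {
    "_type": "https://in-toto.io/Statement/v0.1",
    "subject": [
        {
            "name": "quay.io/example/okd-operator",
            "digest": {"sha256": "0" * 64},  # placeholder image digest
        }
    ],
    "predicateType": "https://slsa.dev/provenance/v0.2",
    "predicate": {
        "builder": {"id": "https://example.com/tekton-chains"},
        "buildType": "tekton.dev/v1beta1/PipelineRun",
        "invocation": {},   # how the build was started
        "materials": [],    # source repos and base images consumed
    },
}
print(json.dumps(attestation, indent=2))
```

In the service described above, documents like this would be stored as signed OCI artifacts next to the image, so a consumer can verify what went into a given OKD build.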
So, folks, do you have questions about that? I think my primary, my first ask of this group: you'll have to sign on with your Red Hat ID to interact with it. That was what I was remembering from this conversation, which happened a couple of months ago; it wasn't something that was totally open. Could you explain that, or maybe that's changed a little? It's a managed service, so we would provision you a tenant, like I said, and you would have to sign on with some Red Hat SSO-enabled ID. That's correct. I think that's about the only restriction around it. We'll be using compute from various clouds connected to it, so we'll be able to connect Amazon clusters via KCP to provide places for testing and compute and things like that. The thing I think is going to help you out the most, knowing what I know about how we build OpenShift, is that if you want to run integration tests against these OKD builds, with their 100-plus containers, we have done a lot of design engineering around making that possible in a generalizable way, where OpenShift has spent an enormous amount of resources making it efficient in a very specific way. For you guys to go and recreate all of that in something like GitHub Actions is going to be quite a lot of work. I think one of the challenges we face as a community is that quite a lot of the build happens sort of behind closed doors, and there's a learning curve to understand some of the transformations Prow does. For example, when you look at the Dockerfile in a Git repo, the FROM is not what's actually used to build, because there's a whole series of Prow transformations. So trying to work out what is actually built is a bit of a nightmare for us, because a lot of the images that get used are only within the repos or within the Red Hat firewall.
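To make the FROM-substitution problem just described concrete, here is a toy version of the kind of rewrite the CI tooling performs before building: the image named in each FROM line is swapped for a different one from a mapping. The registry names in the substitution map below are invented placeholders, not the actual internal mappings.

```python
def rewrite_froms(dockerfile: str, substitutions: dict) -> str:
    """Replace the image in each FROM line if it appears in the
    substitution map, mimicking the kind of rewrite OpenShift CI
    tooling applies before a build. Keeps any 'AS <stage>' suffix."""
    out = []
    for line in dockerfile.splitlines():
        stripped = line.strip()
        if stripped.upper().startswith("FROM "):
            parts = stripped.split()
            if parts[1] in substitutions:
                parts[1] = substitutions[parts[1]]
                line = " ".join(parts)
        out.append(line)
    return "\n".join(out)

dockerfile = """\
FROM registry.ci.openshift.org/ocp/builder:golang-1.18 AS builder
RUN make build
FROM registry.ci.openshift.org/ocp/4.11:base
COPY --from=builder /usr/bin/app /usr/bin/app
"""

# Hypothetical mapping to community-buildable images.
subs = {
    "registry.ci.openshift.org/ocp/builder:golang-1.18":
        "quay.io/example/builder:golang-1.18",
    "registry.ci.openshift.org/ocp/4.11:base":
        "quay.io/example/base:latest",
}
rewritten = rewrite_froms(dockerfile, subs)
print(rewritten)
```

The hard part for the community is not this mechanical rewrite but discovering what the real mapping is, since it lives in internal configuration; that is the "dark area" being discussed here.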
So I think that's where a lot of the community have found problems: we don't know how to create an equivalent build that produces what the internal system produces. Will this overcome that, or will we still face the challenge where bits of it run within the firewall, these sort of dark areas of the process we just can't get access to? So this will be completely transparent to you. What it will not solve: it won't make the OpenShift builds completely transparent to you, but your builds will be completely transparent to you. We will still have to figure out how to build those images in your pipelines, so there may still be some work figuring out how those transformations happen in order to put them in these pipelines, but once they're in there, they're your pipelines; there's nothing hidden from you. I don't want to over-promise, but I would say that if it's down to trying to build OpenShift on GitHub Actions or, say, the Operate First cloud versus this, we might get more interest from the OpenShift engineering team, because this will be a future target for them to build on. They might be interested in proving that OKD can build at scale efficiently and reliably on this as a precursor to them moving to it, whereas you guys going to Operate First probably wouldn't be that interesting to them, because they know they would never move OpenShift builds there. Yeah, and I think, I was at that conversation: you had two other small projects ahead of us that you were testing with. I forget exactly which ones they were, but it sounded like early September-ish you might be freed up to work with us on something around OKD on FCOS.
For that service, is that still the time frame you're looking at, or is it further out than that? I think it's still possible to start then, and just to be completely transparent with you guys, you'd be kind of on our bleeding edge, but that might be kind of fun. We are planning to onboard our first customers at the end of August; that will be KCP and the Apicurio service registry service. We have support for building Golang; I would say we can build anything, but we have these nice starter pipeline templates for Go and Java, and we'll be following up with Python and npm right after that. I think some of the npm stuff might be necessary for some of the OpenShift images, so we would want to prioritize that. But we could certainly start bringing a few images in and letting you guys kick the tires and understand how things work, in order to make a further decision. So, Jamie, maybe this is where we take a pause. To me this sounds really viable, more viable than GitHub Actions, and we could do Operate First, but that seems a little theoretical, and there aren't a lot of resources available to us for it. So this was the third option I wanted to bring to the table, to see if the group was amenable to doing something with Brian Cook's team around getting a community build service going. Yeah, let's do a quick straw poll, and Christian, I saw your hand. Straw poll: I'll just go across my screen. Bruce, how does this sound to you? What are your thoughts? Trying to find the mute button. No, this sounds really interesting. The boundary is an interesting question, because we've got, you know, rebuilding OKD from scratch, which is technically interesting but in the end doesn't give you any additional capabilities, because it's already being built for us. And then you've got the delta between OKD and OCP.
You know, so Red Hat Pipelines, Red Hat Serverless, and what else, I forget what they call it, Red Hat Service Mesh. That's an area where we're actually lacking. So I would like to push beyond just OKD as it currently exists, but whether or not it ever gets there doesn't really matter; you've got to get started. So that sounds interesting to me. Okay, Brian Innes, what do you think? I think it sounds a really, really good option, especially if this is where Red Hat's going and we can stay in sync with Red Hat. My primary driver is, as you say, the bits we need to figure out no matter which way we go: how do we do the Prow stuff that's sort of hidden in the open and get that figured out. We're going to have to do that whichever way we go, whether it's Actions, Operate First cloud, or this; that's going to be the challenge. But if we're going to use the same pipelines that Red Hat OpenShift will eventually take on, I think that's goodness, and let's not diverge if we don't have to. Okay. And Mohammed, what are your thoughts? If you don't have any particular thoughts, it's fine to just say pass. His voice dropped. Yeah, we don't hear you. Mike, what do you think? I tend to agree with Brian Innes. I think it'd be really cool for us to set this up, but unfortunately I think we're going to need to bite the bullet, and we're probably going to need community members who build up some Prow experience and understand how to operate it for us. I ultimately agree that we need to set up our own CI infrastructure, and if that just looks like a copy of what Red Hat is running internally to begin with, I think that's okay. But ultimately, if we want to do true experimentation in the OKD community and really innovate in a direction that's out in front of OCP,
I think we need to own enough of the process that we have community members who understand how to do things like: okay, we want to build a fork of OKD that does X, Y, and Z; how do we set that up and get all the pipelines created? The OKD community needs to own that knowledge, because otherwise we're always going to be dependent on going back to Red Hat to understand how to set this stuff up. So, unfortunately, we're probably going to have to bite the bullet and skill up and get some community members who can run Prow, who know what Tide is, and who understand Boskos and these other things. But I think we'll be in a really good place at the end of that, because then we'll be able to do the kind of experimentation we want to do, where we say: let's try out a process that Red Hat is not using yet, or maybe will never use, and see how it works. Then we can really experiment with stuff. That's my feeling. Okay, great. And Diane, we know roughly how you feel, since you've done it. Alessandro, a guest here, did you have any thoughts? Yeah, I agree with what Mike was saying. It's challenging to test other ways of building, and one concern that I shared in past meetings is about future possibilities of building an OKD for ARM, and also the heterogeneous version that will be shipped in a while. The heterogeneous stuff could be easier with Tekton or even other CI tools, but the ARM stuff will probably need infrastructure, and so that will need to be achieved eventually, I'd say. Great, thank you, Alessandro. We have someone, Chewba, who drops in, but I don't know that we've ever heard anything from Chewba. Do you have any thoughts on this question? Okay, Kristoff. Nothing to add now. Okay, Jack. Hey, can you hear me? Yeah, cool.
So actually, I mainly agree with Bruce. It wouldn't really add much for this case, though it would be nice to have, because you would be able to see how an OKD release actually happens. Right now we get these things dropped by Vadim on the OKD releases page, which is great and everything, but we don't really see what goes into them, and maybe we can figure out how some of the images are built, but it's definitely not a transparent process. At least from my point of view, this kind of transparency could also just be satisfied with some documentation or scripting or something like that; it doesn't necessarily need a full-blown CI. But then maybe looking into some of the currently still closed-source, or closed-binary, operators, maybe that would be an interesting take, to figure out how we can build those ourselves. All right, did I miss anybody? Christian, you said no. So, just on what Jack said: it's all open source. I think the problem is just that it's hard to reproduce, because the Prow system we currently have is very obscure. It is all in the open, but you have to know where to look. I did try to put air quotes around "closed source"; I know it's not closed source, it's just, yeah, you get a blob. And I just want to make sure, I'm all for trying this new build system. I think it sounds really exciting, especially if there's multi-arch support already planned as well. I think we should definitely try it, but I do have the concern that it doesn't improve the situation with regard to the vendor lock-in with Red Hat that we currently have, which obviously we might not lose if we move from one very obscure system like Prow to another one that you can't just deploy yourself.
Ideally, and this is why we were talking about Tekton so much, you have a Tekton pipeline and you can deploy that pipeline anywhere, on any Kubernetes cluster. You don't need a Red Hat service; you don't need any special version of Tekton or anything. If we can make it so that this new build service Brian has presented can consume something like a standard Tekton pipeline, or something in a format a normal Tekton pipeline could also consume, then I see absolutely no reason not to do it. If it's just kind of a hosted Tekton, Tekton as a service, then we should. Yeah, let me give you a few more details, because it is Tekton; specifically, it's OpenShift Pipelines. The reason it's a managed service is that there are a lot of pieces to integrate in order to get people where we want to get them. Our goal is for somebody to be able to create SLSA level 4 compliant software in like 15 minutes from a repo that we can build. To that effect, we have these pipeline templates that we think should work per language, so we'll give you a Golang template, and it has all these tasks already seeded in it: how to do SAST with gosec, scan with Clair, produce pipeline-level attestations that get stored as signed OCI artifacts, and all the things necessary for SLSA compliance. Wiring that all together yourself is a lot of work; we had a solution architect try to do it, and it's crazy. Our long-term goal is an on-prem installable product, which will require us to build an operator that can deploy all of this, and at that point it would become an open source thing, and there would be an open source version of that operator.
And then, if you guys wanted to remove yourselves from our managed service, you could go deploy that operator wherever you wanted, pick up your stuff and move it. But for now, the most expedient way for us to make this available and develop it is to run it as a managed service, and our user interface is built into HAC, the Hybrid Application Console. I just want to be as honest as I can: I'm going to guess that for the next two years you're probably stuck with the managed service as we iterate on it. We definitely understand there's a desire, especially from the people who want this stuff, to run it in a disconnected environment, so there's a mismatch there; we know that. We want to make this available for people running in air-gapped environments as well, but it's just not there yet. You can try it this way, and I think it'll give you a massive head start on where you want to go, and then maybe you like it and you stay with it; if you want to run it yourself later, when that becomes available, you can. But there won't be anything in the mix here that's special: it's Tekton and Tekton Results and Tekton Chains and cosign and the Sigstore project and all that stuff. All right, I want to be mindful of our other topics and the guests who are going to be discussing them. Brian, what would be our next steps if we wanted to move forward? Next steps, I think, would be to gather up a list of people who would want access to an initial workspace when we onboard. And then, like Diane said, it'll be a little while, so it'll likely be at least September, but we would have that list and would create a workspace. And you folks should pick a subset of images that you might want to use as a test project: pick a few images that you know how to build.
Maybe ones that you know how to test as well. If you can find ones that can be built and run and tested without building all of OpenShift, that would be nice. So you have a few images and you can say: okay, this is the scope of how we're going to evaluate whether this is what we want to use. And then we can work with you to get those stood up when you're in. Excellent, we'll add that to the agenda. Brian, did you have something quick? Yeah, just a quick question. One of the things we're looking to do quite quickly is look at some of the missing operators, like the Pipelines operator, like the OCS operator. Could we use those as sub-components, since they're sort of self-contained and we can build and test them on a standard cluster? Would they be good candidates to actually get something running? That will work, as long as you're not in a super hurry. Our first goal was to deliver non-operator-controlled services, because testing operators requires automation that can set up a brand-new cluster, right? We're building that automation in Q4, and we'll be testing it in Q4. So if you want to ride along with us while we build it, and test those operators with it, that would be okay. Awesome. We're going to be replacing some of that Prow workflow that gets done for OpenShift with provisioning based on HyperShift, to save time and avoid using hibernated clusters, which is ultra complicated. Diane, you have something real quick? One real quick thing. What I would suggest, Jamie, is that we create — like the docs group — a subgroup of people who want to work on this project, and put a note out to the mailing list. Then we see if there's a center of gravity, and whether I have to recruit Tekton, DevOps, open-source-y people to help us.
So real quick, that's what I would suggest we do: just like docs, have another little subgroup of people who are interested in learning about this. I was thinking the exact same thing. Okay. Thank you, Brian Cook. We will touch base with you in a couple of weeks and let you know where we're at with our efforts. I very much appreciate you coming. Let's move on now to — did Marco show up? I don't think Marco is here, so we can skip the Ansible item. Yeah, I reached out to him, but he hasn't gotten back to me. Operators — I think we know where we're at with operators, right? Is there anything else we need to touch on there? Yes, just very quickly. I put a note in the operators bug list, because it's sort of worse than I had thought. I'm looking into the OKD part. There are some repositories, previously unknown to me, that aren't in the normal chain of repositories that Red Hat uses to build some of these things. Anyway, you can have a look at that. Okay, Brian, is there an example in the discussion? Could you put a link to that particular message that you posted, with the example, in the meeting minutes, under this agenda item? Yeah, it's in the main operator discussion thread. You can link to the particular comments so that people listening to this have an example of the references to these other repos or registries. Yeah, I can give you a concrete example. Let's put it in the meeting notes so that it's there, because we have nine minutes left and we do have Jack to talk a little bit. Jack's topic was customizing OKD — how to figure out the source repositories for images. Yeah, go ahead.
Yeah, so this is definitely going to be quick, but it nicely ties into the discussion we had before, which is basically about knowing how OKD is built as a whole. Just today we had this issue where some of our source image builds failed due to some Git update — remote repositories that require a new Git version, something along those lines — because in April there was this issue, and then some of those things were patched on the server side and other things on the client side. Long story short, the source image build failed due to the image, the container, that is executing those steps. And so I was asked to write this blog post about how we customize OKD and what some of the challenges are. I would say that, surprisingly enough for an open source project, one of the hardest things is figuring out which image you need to touch. Sometimes it's very easy: say you want to change something in the console, you look at the console operator. You know exactly which image you need to touch, because it's called console-operator, and the repository is github.com/openshift/console-operator. Easy. But sometimes you have these cases where you know: okay, I have this image in my cluster that does something and is used by some component — how can I even replace that image? Because what we are doing is our own OKD release: we're taking the upstream releases and then replacing some of the images. That is done with the `oc adm release new` command, where you can specify the images that you want to replace. And it's not even trivial to know what the name of the image would be — for the cluster ingress operator, for example, it's simply cluster-ingress-operator.
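The custom-release workflow described here can be sketched as a shell snippet. The pull specs and tags below are hypothetical placeholders, and the command is echoed rather than executed, since the real invocation needs registry access:

```shell
# Sketch of building a custom OKD release payload with one image swapped out,
# per the `oc adm release new` workflow described above.
# All pull specs below are hypothetical examples, not real tags.
FROM_RELEASE=quay.io/openshift/okd:4.10.0-0.okd-2022-06-24
TO_IMAGE=quay.io/example/okd-release:custom
REPLACEMENT=cluster-ingress-operator=quay.io/example/cluster-ingress-operator:patched

# Echoed, not run: the real command pulls the upstream payload, replaces the
# named component image, and pushes a new release image.
echo oc adm release new \
  --from-release="$FROM_RELEASE" \
  --to-image="$TO_IMAGE" \
  "$REPLACEMENT"
```

The awkward part, as noted, is that the left-hand side of each replacement must be the payload's tag name for that component, which does not always match the source repository name.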
But then I have another example here: the openstack-machine-controllers image, which actually comes from the cluster-api-provider-openstack repository, or the associated image. Just figuring out those connections is sometimes surprisingly hard. And we had the same thing today with this source-to-image builder image, where it took us a while — a lot longer than it should have — to figure out where the image comes from. In the end we had to download the image, look at the artifacts inside, work out which image it was built from, and look at some of the metadata in there. It is unfortunately sometimes very hard to figure out. For example, this source image builder image is actually built from openshift/builder — which, I guess, makes sense if you already know it. But you would never look for that, because you would look for something like source-to-image, or for docker-builder, which is actually the name of the image that you need to replace when you're doing the release. So these connections — going from "I have a component that I want to change" to figuring out which image I need to replace, and then where that image comes from — are sometimes very hard to trace. It would be great if we could find a way, maybe some documentation, maybe some additional metadata, to make that a bit easier. That is the topic I wanted to bring up. Jack: Maybe I can help out with a little bit of that. I'm actually trying to pull this documentation together on okd.io. There's a new section called OKD Development, and that's where I'm trying to pull this content.
And one of the things that I found out is, if you look at `oc adm release info`, you can actually pass `--commit-urls`, which shows the actual GitHub commit URL for every component included in the release. So I think that answers part of your question. That is already a very useful start, for sure. As for how you get from what you see on screen to the actual component responsible — I don't think we have an answer for that at the minute, but certainly once you identify what the image is, you can get the link to the exact commit. And again, if you look at that page — the overview of the OKD Development section — at the bottom there are actually commit URLs, commits and pull specs. So that gives you the exact information for identifying the image used in a specific build. Hopefully that helps. Yeah, that definitely helps a lot. Maybe we can work on that documentation a bit as a community, because like I said, for some components it's literally crystal clear — everything is consistently named and you can trace it back. And of course I don't expect that we're now going to fix all of the, let's say, legacy or weird naming that is in there. That's a case where I think some documentation can go a really long way, because it's not that you expect non-expert users to be able to do this — but even for, let's call them, expert users, it's sometimes surprisingly hard to figure out what you need to change. Yeah, and any volunteers who want to help create that technical documentation are very welcome to come and put your name forward. What's missing is the volunteers, and I'm just going to reiterate that. And Jack, if there's anybody in your world with some expertise — even if they would just do a talk on it that we could transcribe as a starting point — that would be great. Cool.
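Beyond `--commit-urls`, the release payload's image references also carry build annotations (such as `io.openshift.build.source-location` and `io.openshift.build.commit.id`, as I understand it) that point at the source repository — which is exactly the image-to-repo mapping discussed above. A sketch, grepping a saved (hypothetical, abbreviated) excerpt of `oc adm release info -o json` output rather than a live cluster, so it is self-contained:

```shell
# Normally you would fetch this metadata with something like:
#   oc adm release info --commit-urls quay.io/openshift/okd:<tag>
#   oc adm release info -o json quay.io/openshift/okd:<tag>
# Here we use a hypothetical saved excerpt instead of a live query.
cat > /tmp/release-sample.json <<'EOF'
{"tag": "openstack-machine-controllers",
 "annotations": {
   "io.openshift.build.source-location": "https://github.com/openshift/cluster-api-provider-openstack",
   "io.openshift.build.commit.id": "0000000"}}
EOF

# Pull out the source-repo annotation, i.e. the image -> repository link.
grep -o '"io.openshift.build.source-location": "[^"]*"' /tmp/release-sample.json
```

This is the kind of connection that would have made the source-to-image builder hunt quick: the payload tag (`openstack-machine-controllers`) maps to a repository with a completely different name.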
This was a great conversation. I sense a theme across all of this, so we know where we need to put our efforts, for sure. Does anyone have — we have about three minutes left — any last-minute thoughts on other topics, or this one? Jamie, there was one that I pinged you about yesterday, if I can. Yeah, go right ahead. Okay, so one of the challenges is, once you've worked out which image you want to build, Prow actually does a replace on the image that's in the registry. So I went through every image used in an OKD release. And believe it or not, there are 50 different images specified in repos within the Red Hat internal registry. So as someone who wants to build, you then have to work out what to replace them with, because you can't build them — they're not accessible outside the firewall. One advantage of getting to know Christian last week is that I now feel like I can pester him with questions. So I've been trying to get Christian to find the source of truth for what's in those images — where's the Containerfile. And it turns out there is a Containerfile, but then there's some Prow machinery that goes and changes that Containerfile, so you have to work out what's actually built and what the image actually uses. This just seems to me way over-complex. As I said, it's all in the open, but the number of hoops you have to jump through... So I'm thinking: can we create a standard Containerfile in a GitHub repo that OKD will use to build its images? We want a base image and a builder image — those are the two main ones. And then we basically use those: we have their source, we have them built in quay.io/okd — I'm sorry, Diane, it is pronounced "key" — and we have it in the repo. Then we just build it, and the community can use those images, and we don't have to go through this torturous process of trying to work out what's in them.
Yeah, I think that's a good point, especially since so many of the images that get built have a static Go binary in them anyway — at least 80-plus percent — so it's not like you need a crazy complicated base image. That's in fact exactly what we are doing for the forked images that we replace in the custom OKD release, because most of the time you don't have a lot of requirements for the environment or the base image. I totally agree with what has been said here; I just want to add that these 50 different images are probably going to be replaced internally by just one or two. Unfortunately, the FROM directive in the Dockerfile in the Git repo isn't the canonical reference: it's being replaced on the fly by the Prow build system. There is a bot that tries to update that reference in the repository by opening PRs, but those are often not merged in time, or just aren't timely. So I do think this is very valuable. The builder image is essentially just — I think it's UBI-based, or RHEL- or CentOS Stream-based — an image with the Go toolchain in it, so it can build the binaries. Additional dependencies sometimes have to be installed there as well, and then the binary that gets built is put in a minimal container, which is the base image. That one doesn't need anything in it — it just has to run the binary — so it can be very minimal. I know Mike has mentioned in the chat that there is an open version, a freely distributable build, of the builder image — he linked it here, the release-golang-1.18 one. But again, we don't build that ourselves, and it's very obscure where it is built: it's all in the ocp-build-data repository, which is public, but it's still very obscure — there are different branches, and then that gets taken into our internal system, gets pushed out to Prow, is used there as a base image, gets transformed... it is very obscure.
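The builder/base split described here could look something like the following two-stage Containerfile — purely a hypothetical illustration of the proposal, assuming CentOS Stream images and a component name, not the actual internal ART definitions:

```dockerfile
# Hypothetical sketch of an open builder/base pair (not the internal
# definitions). Stage 1: builder with the Go toolchain and build deps.
FROM quay.io/centos/centos:stream9 AS builder
RUN dnf install -y golang git && dnf clean all
WORKDIR /src
COPY . .
# Static binary, so the runtime image needs essentially nothing.
RUN CGO_ENABLED=0 go build -o /usr/bin/component ./cmd/component

# Stage 2: minimal base image that only has to run the binary.
FROM quay.io/centos/centos:stream9-minimal
COPY --from=builder /usr/bin/component /usr/bin/component
ENTRYPOINT ["/usr/bin/component"]
```

Because most payload images are static Go binaries, as noted above, one such builder/base pair could plausibly cover the large majority of the 50 images in question.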
And if we could just provide an open definition for a builder image and a base image based on, let's say, CentOS Stream — because that will work everywhere: on OKD of course, on OCP, just universally — I think that would be really valuable, because then in our own build pipelines we could use those as the build and base images. And if we can keep that in sync with the internal one, obviously, that would be awesome. And yeah, Mike, the deep mysteries of ART — that seems to be a common theme. It's a separate thing: in OpenShift we mostly deal with Prow, and then we have this other build system, ART, that actually builds our releases and also builds the base images. It's very obscure to us — I wouldn't even know where to check for that; that's a different team. Yeah, just to add that as a little bit of context — and I realize we're already over time. Yeah, let's go ahead and wrap this up, and we'll continue this discussion in two weeks and asynchronously. It sounds like we're starting to get all of the players and the pieces together to tackle this issue — across operators, across the builds, the cluster builds, etc. So let's carry this conversation on, and we'll talk to each other online and at the next meeting. Thanks everybody. Thank you.