Group meeting for May 24th, 2022. Don't forget to put your name in the attendees section, just so we know that you're here. That way we know if there's something you might have missed if you weren't here, and we can reach out to you if need be. Let's take a quick look at the agenda. Is there anything that folks would like to add or change? Feel free to pop stuff in there in the next 30 seconds or so. We won't have Christian or Vadim, so I think we will be winging it in terms of actual OKD engineering updates. Correct. Yeah. I thought we were always winging it with OKD engineering. I think that's the way open source works: winging it, at this rate. But yeah, we're getting there. Looking at the agenda now, let's jump into release updates. Like I said, we don't have Christian or Vadim here. What I did want to do is bring up the operator count question. I asked Christian about this and he looked, and recently the operator count hasn't changed; it's at 150-something. But I remember it was much smaller, and Bruce, I think you mentioned that you remember it being much smaller in previous releases. Does anyone else notice this or is aware of this? I haven't checked in on that in a while. Christian brought this up at KubeCon because something went off on his phone, and he noticed and looked, and there was a much bigger list. I know Cecil Machado, who's a Red Hat employee who's been doing a lot with OLM, had some explanation that seemed to satisfy Christian's curiosity, but I just don't know enough details to remember what it was. I would say maybe reach out to Christian, or next time we're all here we'll have a comment on that. That's Camilla Machado, right? Yeah, Camilla, sorry, sorry. Okay. Yeah. I can ping her and ask what was going on.
Yeah, so for context: at the docs meeting last week, Bruce, you noticed it first and brought it to our attention. Why don't you go ahead and explain what you noticed? Yeah, well, I was doing something on my test cluster, which is on the current 4.10 version, and I noticed that there were a lot of operators. In particular the GitLab operator was one that I noticed, because I had been having difficulties with it long ago. And then I looked around, and there were several that had been deleted going from, I think, 4.7 to 4.8. Then I went back on my 4.9 cluster, where I had actually made a note to myself, going operator by operator, of the ones that disappeared, and most of them were back. So it wasn't associated with an upgrade, because my 4.9 cluster hasn't been updated for a long time. So something happened, I don't know what; maybe things made it into the OperatorHub catalog being pulled down that had previously disappeared. And I know that with the GitLab Runner one specifically, when it disappeared and I asked some questions, Vadim had said, oh, it wasn't currently compatible, so that seemed like a logical reason for its disappearance. Anyway, that's just history; it doesn't shed a whole lot of light. The ones that I really care about, like all the ones that would let you do GitOps, are not there. But still, it is a big plus. Anecdotally, all I know is I did a 4.8 install a while back, like last fall, and I remember that there were only 40-something, or maybe early-50s, operators. And now a 4.8 cluster that was upgraded to 4.9 recently (and I didn't notice it during the upgrade) has 152, or whatever that higher number is. And it's AWS operators and all sorts of stuff that was not there before.
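The operator-by-operator comparison Bruce describes can be scripted rather than done by hand. A minimal sketch, using canned lists in place of live cluster output; on a real cluster you would generate the lists with `oc get packagemanifests` as in the comment, which assumes a logged-in `oc` session:

```shell
# On a live cluster, capture the catalog with something like:
#   oc get packagemanifests -n openshift-marketplace -o name | sort > operators-4.10.txt
# Canned sample lists stand in for two clusters' catalogs here:
printf '%s\n' argocd-operator grafana-operator | sort > operators-4.9.txt
printf '%s\n' argocd-operator gitlab-runner-operator grafana-operator | sort > operators-4.10.txt

# Operators present on the 4.10 cluster but missing on 4.9:
comm -13 operators-4.9.txt operators-4.10.txt
```

Running the `comm` comparison on real dumps from two clusters would show exactly which operators appeared or disappeared between releases.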
So it's pretty snazzy, pretty cool. We might want to give that a little bit of promotion if we're sure that it's something that's going to stay around. We should reach out to Christian or the other Red Hat folks, find out if this is going to stay like this for a while, and then maybe promote it. Sorry, you're muted. I am, I am. I'm always muted. I've got a cold again from traveling; not COVID, just a cold this time. I will reach out and start a conversation with Camilla and Christian, get to the bottom of it, find out if it's a permanent upgrade or if it's going to disappear any time soon, let you know, and have one of them send a post to the mailing list. Is there a discussion started about it already? There is not, but we can create one. Bruce, can you go ahead and pop on and create one sometime soon? Okay, great. And if you tag me in it, then we can follow along, because hopefully it is staying. There's also a lot of work going on in the background, because as we all know, Vadim has gotten busy with a new promotion, doing other things. So there's a trio of customer-facing engineers who are taking on learning and building out the CI/CD process. And I have a meeting (Jamie, I may try to get you into the first one) with Fabiano Franz, who's an awesome, awesome person and is managing that initiative with the customer-facing engineers. I think he's down in Brazil somewhere, but you'll like him; we'll all like him. I'm going to try to get him to start coming to the working group meeting. They're going to work on the CI/CD pipeline for OKD and take the learnings that Vadim had. This is one of many conversations going on about how to re-resource the build process here inside of Red Hat.
And hopefully we can get them to show up at the working group meeting, tell us what they're thinking, and get their feedback on it. I'm hoping, in two weeks' time (not next week, that's the docs week), to have some of them on and get them to start coming. Fantastic. Thanks. That is the good news. And maybe the lift in the operator count has something to do with that, but I doubt it, because it's kind of early days for them. I vaguely remember I was talking with Christian and he looked down at his phone, and someone said, oh, there are like 160 operators in the catalog now. He thought something must have happened that accidentally brought them in, so he started reaching out. I thought he spoke with Camilla, and whatever the result of that conversation was, it was, oh, it's supposed to be like this. I'm not sure what the detail was, though. Yeah. So we'll find out. But I know I'm not the only person who noticed that after an OKD install of 4.8 or whatever, there was a woefully small number of operators, like 40- or 50-some-odd. So something happened somewhere. All right. So Bruce is going to create a discussion item and tag Diane, and if you can, tag me as well. Is there anything else OKD-release-oriented that folks wanted to talk about before we move on, even though we don't have any of our Red Hat engineers here? I don't think so. Okay, moving on now to FCOS updates. Timothy. All right, you can hear me correctly? So, I've put links in the agenda to the main things that are happening right now in Fedora CoreOS land that could impact OKD. The first one is that we are making a change in Ignition to remove the Ignition configs on the VirtualBox platform by default.
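For context on why that change matters: file contents in an Ignition config are just inline base64 in the platform user-data, so on platforms where that user-data is readable by unprivileged users, any embedded secret is trivially recoverable. A minimal sketch with a made-up config (the token `hunter2` is a stand-in, not anything real):

```shell
# A made-up Ignition config embedding a file inline, the way secrets
# sometimes end up in platform user-data:
cat > config.ign <<'EOF'
{"ignition": {"version": "3.2.0"},
 "storage": {"files": [{"path": "/etc/secret-token",
   "contents": {"source": "data:text/plain;base64,aHVudGVyMg=="}}]}}
EOF

# No privileges are needed to recover the embedded payload:
grep -o 'base64,[^"]*' config.ign | cut -d, -f2 | base64 -d
```

The same decode works for anyone (or any container) that can read the user-data endpoint, which is exactly the exposure being closed.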
We had posted an announcement for that, but essentially the idea is: if you have an Ignition config on those platforms, every unprivileged user, even containers, can access it. So if you store secrets in there, that's not great, because they can read them. This does not directly impact OCP, because as far as I know, in most OCP use cases you don't put secrets directly into the initial Ignition config; they are fetched from the MCO's Machine Config Server. And pods and containers on OKD don't have access to that server; it's blocked by the firewall. So we're removing that from those platforms, but we're not removing it from all platforms yet, because we don't have a solution for everybody. It's just a start, so there may be further changes down the road for this one, but we'll see how it goes. The second one is about the Nutanix artifacts, which only impacts you if you're using the Nutanix platform, which was fairly recently added, I think a couple of weeks or a month ago. Essentially we changed the way we ship the Fedora CoreOS images for Nutanix: the image is now compressed. It's just a change of format and should not impact anything else, and it should make your life easier if you're using the Nutanix platform, so hopefully that helps. The last one I want to bring up is an issue about the way we grow the root filesystem by default on the node. When you boot a Fedora CoreOS image, the filesystem starts really small and then grows to take up all the space on the node. Doing that with XFS, and really with almost any filesystem, works well only if you grow to a reasonably sized disk; say, if you start from 10 gigs and you go to 100 gigs, that's okay.
But if you go to, say, a terabyte, that gets messy, because the filesystem isn't really meant to be grown that much from such a small initial image. We don't have a fix right now for this, but the right way to do things, if you have such large disks on your nodes, is to use multiple partitions; that's what we recommend. I added a link to the exact part of the OCP documentation (or OKD documentation) that helps you set up a MachineConfig with a separate partition to store your data, which completely works around this specific issue. That's the recommended way to do things; it's the preferred option. And that's it for me for FCOS updates. Any questions or feedback for Timothy in terms of Fedora CoreOS stuff? Yeah, Timothy, do we know when kernel 5.17.9 will make it in? Good question. I don't know. Is it in Fedora? Yes, at least there are Fedora 35 and 36 builds of the package. So 5.17.9 should be arriving soon. Well, it will be in FCOS, but would it be in OKD, that's another question. Totally. So Fedora 35: yes, it's in Fedora 35, so 5.17.9 should go into the next FCOS release. And will that be on Fedora 36? That depends; I don't remember exactly which Fedora version the next FCOS build will be based on, whether the switch is for this one or the next one. Well, next and testing are both on 36; stable is on 35. Oh yeah, that's the confusing part with Fedora CoreOS, since we're in the middle of moving. Oh, do you mean for the OKD image, or for Fedora CoreOS? OKD is a whole separate discussion; I'm talking about FCOS. Okay, and for FCOS itself, what's interesting right now: I think we're making a testing release this week, so it should get into testing. It's in the latest testing dev build, so it's either in this one or the next one.
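Going back to the separate-partition recommendation: the linked documentation expresses it as a Butane config that gets rendered into a MachineConfig. A rough sketch of the shape, where the device path, start offset, and sizes are placeholders you would adjust for your hardware:

```yaml
variant: openshift
version: 4.10.0
metadata:
  name: 98-var-partition
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  disks:
    - device: /dev/disk/by-id/<your-root-disk>   # placeholder
      partitions:
        - label: var
          start_mib: 25000   # leave room for the root filesystem
          size_mib: 0        # 0 means use the rest of the disk
  filesystems:
    - device: /dev/disk/by-partlabel/var
      path: /var
      format: xfs
      mount_options: [defaults, prjquota]
      with_mount_unit: true
```

You would render this with the `butane` tool into a MachineConfig and apply it to the cluster; see the linked documentation for the exact recipe.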
It's in this week's testing release. Okay, yeah. Excellent. And this is going to be more helpful as time goes on, because as we try to do OKD builds of our own and test them with our own versions of FCOS, ideally this type of information will become very important, and it'll matter for OKD working group members to be more aware of what's happening in the three FCOS streams. All right, anything else on FCOS? All right, let's move on to documentation stuff. Brian is not here today, but I can fill you in on some of it. When we met last week, Brian mentioned that he has now merged in the OKD development section. It was in a separate clone of the repo that he was working on with a couple of other folks, and it's now been merged into the website. If you look, the second-to-last link on the website is OKD Development, and it talks about modifying OKD, release troubleshooting, things of that nature. Not all of it's filled in yet, but basically this is going to be where we start talking about building OKD ourselves, modifying OKD, getting those images, etc. There's a great conversation that Bruce, so it's you, Bruce, and Vadim and John and a couple of other people are on. John started it. Right, among the other brilliant things John has done recently. Right, exactly. So John Fortin started a discussion thread in the repo that folks should check out. It's about: if any of us wanted to build OKD, where do we find the right images, which registry are we going to, is it the same for all of the components, and where do we find the source for those components so that we can actually provide feedback directly to them. It's a great thread; I'll put a link to it in the meeting notes. Check it out, because Vadim's really been helpful there.
He's chiming in when he can and it's been very helpful, and Christian has chimed in as well. That will help form the foundation of the documentation section about actually building OKD in the community. Let's see: the working group link on the OKD website has changed. It's now OKD Working Group at the top level, and then there's About, Charter, Minutes, and Subgroups, and the Subgroups link goes to the individual subgroups that we have going. I'm working on something that explains the release process, and why we're not going to be releasing any further iterations of 4.9: OKD has been, up to this point, a project where once we release the next minor version, all efforts go toward that next minor version, and there are no maintenance releases. I'm trying to find a good way to explain that. Stef, testers, CRC: we're talking about that. Twitter: as mentioned, we need to change the email address. Diane came up with a solution. Diane, do you want to talk about that a little bit? Yep. Apologies for all the delays. The problem with Twitter is you need a Red Hatter email address belonging to someone who is not going to change jobs, and it needs to be a human being; it can't be a shared email address within Red Hat. And I did find Kareem Collis, who is in the OpenShift BU in marketing, who will own it for us. She's also the person responsible for all social media for OpenShift and OpenShift Commons, and owns the other accounts too. So she said yes; we just now have to figure out how to swap the emails out and get Twitter to release it to her. Then she'll share the username and password with myself and Jamie, and if someone else comes along and wants to be the social media manager, that person too. She also will be able to do some tweets for us when we need them, like when we do events. And she's promised me that she's going to stay forever; she loves her job, and she's promised.
So the problem with engineers is they get good at their jobs and they get promoted. I'm a proponent of that, but we need the email address to stay stable. And you should now have YouTube access back. They were doing some spring cleaning on all of the OpenShift YouTube channel admins, and you got swept out in the cleanse. I did get added back and accepted, so I'm back. Good. All right, you're back, because I just don't want to be doing that again. So yeah, that's the social media update. Excellent. Thank you, Diane. Survey stuff: the link is to the survey. We do need folks to provide some feedback on the survey and help us make it good. We do have a deadline: we want to get it out June 1st-ish. And we also want the repo transition to start and be completed by July 1st. So we're actually starting, in the docs group, to create some deadlines to get things done. Sorry, when is the deadline for answering the survey? Well, it's not answering the survey, it's providing feedback on the survey itself, to make sure that it captures what we want to ask people. There's a link in the meeting notes. If you look at it and you think there are questions we're missing, or a question isn't clear, let us know, because we're going to release the survey on June 1st to the community. Oh, okay, I see. I thought this was actually the survey itself. I thought it was an old survey, because it says which version of OKD are you using, and the most recent option is 4.8. Yes, it does. This has been a long project; I think the survey project got started in November of last year. Okay, I see. Maybe we can just add the two most recent versions there. Exactly. Thank you for that feedback.
I hadn't noticed that. Yeah, so folks, please give feedback. Send it to the mailing list, or the Slack channel, or the Matrix channel, or wherever it is you want to communicate. What was the other thing I had in here? Oh, there was some discussion about email. Diane, maybe what you said earlier somewhat answers this question, but is it possible for the OKD working group to convince the DNS administrators at Red Hat to put in different MX records? MX records are what point to a mail server; they say the mail server for this domain is such and such. Could you find out if they would be amenable to pointing the domain at a mail server that the group has control of, so we can start creating email addresses that are at okd.io? Because my sense is that Red Hat doesn't want to actually host email accounts for us or manage aliases. Maybe I'm wrong. If you could (and I'm going to just keep pushing this) write that up in a discussion and tag me, then I will see what I can do, if I can work any magic. I have seen it done for other projects, so it might be possible, and it might be a good way of managing some of our transition issues. So we'll see. Right. Anything else that came up out of the docs meeting, Bruce? Am I missing anything important that's not covered in the meeting notes? I'm just thinking; I think that's most of it. Oh, there was one other thing: Zoom. How do folks feel about Zoom versus BlueJeans? The reason for this is, the way things are right now, no one outside of Diane has access to the recordings until she forwards them on, so we can't actually post them. There are some technical limitations with Zoom; there are technical limitations to BlueJeans too.
There's more opportunity with Zoom, and it seems like more of the people I've talked to are using Zoom for more of their meetings than BlueJeans. So I bring this to the greater group: what do folks think? I for one would love to get out of BlueJeans and over to Zoom. It's just a matter of creating and getting an account that is available to us to use. I'm game if you all are. All right, I'm going to speak now, because this is something I care a lot about, having fought all the platforms all the time. In order of usability and preference, I am in favor of Zoom, Jitsi, Google Meet, in whatever order you want to think of that. Those are basically the three that, so far, I've had a reasonable experience with on Linux, on Windows, and on macOS. And at work I have a Zoom conference system, so if we use Zoom, I can use those rooms instead of having to use my laptop all the time. If I'm not doing it from work, then Zoom falls a little down, because it's a little annoying at times, but otherwise any one of those three is good. All right, I see some chatter; do folks want to talk over the air about what your thoughts are? I was just saying, I think changing platforms is fine; no objection from me. I think it'd be cool to use an open source platform like Jitsi if we could, but I don't have really strong feelings about the platform we choose. I just want something I can use on any of my devices, and that's not BlueJeans. Yeah. Recording is key. So Neil, I think it was Neil, points out that recording in Jitsi is a pain. That's one of the things we want to get around: we want recording and posting to be as easy as possible, so that our recordings are available quickly to the community. What about recording with Google?
I thought you had to get a paid version to record, but I could be wrong. For recordings, you do. Everyone can do meetings, but only people who have paid accounts can do recordings. Right. That's also true with Zoom, for what it's worth: you can't do cloud recordings without a paid account. That being said, I am very aware that our lovely sponsor has both, so those are both equally valid options for us. So, is there any objection to us attempting to move these meetings to Zoom if I do some background work? Jamie, again, yeah? Cool. Let me see. It may not happen right away, since nothing happens fast, but let me see if there is a Red Hat Zoom account that we can hijack and use, and that Jamie, as an external person, can also have access to. That's always been the issue, and I really want something where an external person with a non-Red Hat email account can boot up the meeting, so it's not me all the time. Well, with Zoom you can add someone as a secondary host; the terminology is alternative host, I think, but don't quote me on that. And I think Red Hat marketing has a premium Zoom account, because they use it for all of the Red Hat live streams; all of that is done with Zoom. Yeah. Unfortunately, all of our internal meetings are still on BlueJeans, so I'll still be here in BlueJeans forever. Oh gosh. If that doesn't work out, I would be willing to donate my business Zoom for use in these meetings, or maybe set up a second one, but we'd want to have a plan for handing it off should I ever get abducted by aliens or something. Yeah, and I might be able to finagle being able to use Datto's Zoom, I'd have to ask people, if need be. But again, we should have plans when it comes to using Zoom, because of the way that Zoom works.
Yeah, let's put it on the agenda for the docs meeting next week. Give me until next Tuesday to do a little bit of research. We've got the YouTube and the Twitter figured out, so let's get the Zoom thing going, because as much as I love BlueJeans... not. I'll add it as a discussion item and fill in the details there next week. And then we just have to get Christian to change the Fedora calendar. I have access to the calendar; I can change it. We love you. What would be nice is if any of the Red Hat SSO folks could get you onto the Zoom meeting; that would be a miracle. That's a longer conversation. So let's keep moving, because I added a few things to the agenda just to sneak them in. All right, sounds good. So let's now move on to docs update issues: repository transition, survey, status. I don't think there's anything that's changed with John's Bugzilla. Discussion 1231? I don't know; do you have a link there somewhere? Well, I'm actually going to have to look at the repo. Yeah, that's John's build-it-from-scratch thread. Oh, that's the one there. So yeah, that's fine. I just wanted to bring up one thing related to that which didn't appear in the discussion, because it was a little bit longer. Vadim made the comment that you could just try replacing some of the base images with an open source one. But it seemed like we did have an action item a long time ago with Christian to go through and look at that. To me, the issue is not just: can you replace a base image and have it sort of seem to work because you haven't yet stumbled into the area where it doesn't. It would be more useful to know what specific Red Hat proprietary packages were in the proprietary base image. And if there are none, then you could presumably safely change it. What can you say? Very much none. We don't ship proprietary packages.
Neither in FCOS nor in OCP; there's nothing proprietary. I think the more accurate term is that some of the content is subscription-encumbered, and that actually might be the problem. I don't think any of the OCP containers use any of that stuff anymore. I think they all moved to RHEL UBI quite some time ago and stopped using base RHEL content, but one of the things I'm not entirely certain of is whether the build engine has ensured that RHEL content is disabled during the build process, to make sure that stuff doesn't leak in. If that hasn't happened, then it's entirely possible, through nobody's fault, that there's subscription-encumbered content inside the images. What I've done at my workplace is: our RHEL UBI base image basically inherits from the main one, and the first thing it does, really the only thing it does, is turn off subscription-manager's dnf plugin so that it cannot actually access the encumbered stuff. Because if subscription-manager detects an entitlement in the environment, it will overwrite the repos and replace them automatically with the real RHEL repos. So that's one way to guarantee that doesn't happen. I tend to agree with Timothy on this. I don't think we have any encumbered content going into those, but we do have two sets of build images, and one of them is behind gated authentication. Right. Well, I guess if all of the stuff that we have access to for building things, the initial FROM image, is a gated resource... Yeah, we have multiples. Not all of our component teams have done this yet, but we have a publicly available build image, and you can basically swap the FROM line in the Dockerfiles that make up our components now. Some teams have done it; some teams have not created multiple Dockerfiles.
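Two of the techniques described above can be sketched concretely: swapping the FROM line to a community builder, and disabling the subscription-manager dnf plugin in a UBI-derived image. Both snippets below operate on canned local files; all image names are illustrative assumptions, not the real OKD values:

```shell
# 1. Swapping the FROM line of a component Dockerfile to a community builder.
cat > Dockerfile <<'EOF'
FROM registry.example.com/gated/builder:latest AS builder
EOF
sed -i 's|^FROM [^ ]*|FROM quay.io/example/community-builder:latest|' Dockerfile
head -n 1 Dockerfile
# podman can also override the first FROM at build time without editing the file:
#   podman build --from=quay.io/example/community-builder:latest .

# 2. Disabling the subscription-manager dnf plugin, the trick described above for
#    keeping entitled RHEL repos out of UBI-based builds. In a Dockerfile this
#    would be one RUN sed on top of the UBI base; shown here on a canned copy
#    of /etc/dnf/plugins/subscription-manager.conf:
cat > subscription-manager.conf <<'EOF'
[main]
enabled=1
EOF
sed -i 's/^enabled=1/enabled=0/' subscription-manager.conf
grep '^enabled' subscription-manager.conf
```

The first approach changes the file; the `podman build --from` variant leaves the Dockerfile alone, which is handy when you're rebuilding someone else's component unmodified.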
So I have a feeling we probably just need to audit it, go through and figure out which components are still using the gated build file, and then propose something like what our team maintains: a Dockerfile.rhel and then a regular Dockerfile the community would use. My personal preference is to see more teams do this internally, where we have a community Dockerfile and then our own build Dockerfile. As far as I've seen, for every component I've come across you can just swap the FROM line to the open builder and they work the same, basically. Okay. All right, well, I think as we get more into the engineering bits of trying to build our own OKD out in the community, we'll run across that stuff. Timothy points out you can use the --from option with podman. Let's now move on to CRC. Or, I'm sorry, it's no longer CRC; it's now OpenShift Local. Bruce, you noticed this initially, I think; go ahead. Yeah, I noticed that last week when I was looking at some video that I got from I don't even know where. I guess it would be nice if the marketing people told us these things in advance. Yeah, and some of the marketing choices don't make any sense to me. Like maybe we should just grab Minishift back. Right. Yeah, I don't know. And then there's marketing. I didn't get a heads-up on it, so apologies; I would have given you a heads-up. Or maybe I did get one and it went in one ear and out the other, so I apologize. This was like April 8th, but I think you had to go through the developer login to get to it, so it may not have been public at that time, a month ago. But it's now made it into a number of public places. Yeah, I saw the InfoWeek one or whatever. Right. Yeah. Branding has never been our forte. Yeah, the InfoWorld one was recent.
And there is an article on Red Hat Developer about what's new in OpenShift Local, and it talks about the new branding. Interestingly, on the same page under related content, they have a link to "Introducing Red Hat CodeReady Containers." That's lovely. Yes, we are a company of engineers; we are not a great company of marketing people. So, here's the question for the group. There is a discussion thread, which I'll link in the meeting docs, with Charro and a few other people about who's going to help out with building CRC for OKD. So, where do we go from here? Do we assume that it's going to be OKD Local, or do we come up with another name, since we will be building this ourselves? Charro says he'll have some time, maybe in the next couple of weeks, to do another build. I'm going to try to make some time to maybe do it myself, just to experiment. Do we want to keep this viable, and what would we name it? What would we call it? Or do we keep it CodeReady Containers? What do we do? So here's my question: I think we've been going around in circles a little bit on CodeReady Containers, regardless of whether there is demand for it. I think there were three people who came on a call, or on the mailing list, at some point a month or so back. Are any of those three people who actually use CodeReady Containers on this call? Bruce, were you one of them? Yeah, no, I don't use it. I have built it once, and I have downloaded the Charro version once and the Red Hat version a couple of times. But yeah, we get discussion items and questions from people. A couple of discussion items have been created, and a couple of people are asking in the channels, but when it's put to them that they could help build it, they are not offering time.
Yeah, I did actually volunteer at one point to be on a working group looking at that, but that was mainly with the interest of slimming it down to where it was reasonable that somebody might actually be able to use it. Yeah, I concur. For us, too, no one is actually using it, just because the resource requirements are a bit too insane. At that point you might as well run a proper cluster anyway, and then you can actually test the actual thing instead of testing something that kind of looks like it. So, the reason I'm pushing this envelope is because we have a few things coming up on our radar around the CI/CD community build process for OKD itself. I think initially I was the one who pushed for getting a community build process for CRC on the Operate First cloud. And after some conversations at KubeCon last week in Spain, and as Mike says, MicroShift is the new hotness. It's new and small; it's not really OpenShift, and it does have some OKD in it, but it's not the full thing, so it's not quite the same. But I don't feel like there's any real pent-up demand for us to do that. And maybe if Charro can get one more build out there, and we do a little publicity about it, we can see if anyone is interested. I don't know how to gauge an open source project well enough to see if anyone's downloading it or anything. I don't see that that's honestly where we should be putting any energy; we have a bunch of other stuff coming up. So I would be okay with tabling it for now and focusing on other things. I'm not saying we're sunsetting it or anything, but to host it and have nobody use it is a lot of work.
I think the one advantage that I saw to it is, whether it was going to be the place for what you're suggesting, or that, you know, I was going to use it myself even as a test for automating the building of things. There is some advantage to it being a test case for things like that, but are five users out of the community representative of a need to keep it going? Yeah, and I think the reason I put the Operate First thing as the next topic on the list is that, after the conversations with them, I think we're poised to, rather than try to do a proof of concept that Operate First will work with CRC, actually do a proof of concept with OKD and Fedora CoreOS on the Operate First cloud. And so I'd rather expend our energies and get the Operate First engineering resources and Brian Cook's team, who's building out this new-school CI/CD process, to work, and see if we can get it to work on what's called the Mass Open Cloud, which is basically Boston University's cloud that we have shares in, I guess is the best way to put it. We can boot up clusters and create pipelines for multiple things. That's why I put it on the agenda: I'm going to invite them to the next full OKD working group to talk about how that might look and what would need to happen. And I'm really glad that Neil's here too, because we've had this conversation about having a community-managed and community-hosted build process. And I just think CRC is a red herring at this point, and that we should probably try to focus on doing something with the Operate First cloud, because I think I tried to coerce Neil into giving us some resources in everybody else's cloud to build it. And that's why I'm really pleased that Jack's here too, because CERN has been rolling their own OKD deployment and has their own CI/CD process for building OKD to run on OpenStack, CERN-style.
So I think what we're coming to is a tipping point, where we have enough people who are interested in doing this community-managed sort of thing. It's probably a bad thing to say, but sort of mimicking the Fedora infrastructure, just for OKD build pipelines, on Operate First, and having it managed by the community. And I think that's a big project for us to take on. We will get a lot of help from Red Hat, but what I really want to do is have the Operate First people come in two weeks' time to this session and talk about what it might look like. And after I talked with Mike and a number of other folks at KubeCon last week, I also talked to Brian Cook, who is the engineer at Red Hat managing the CI/CD revamp, a build service offered as a managed service, which we could host on this Operate First cloud or maybe on Amazon. So I was going to get Brian to come and talk about what it would take. There are a few questions about whether external people would be willing to use single sign-on to get access to Red Hat resources, and whether that's okay with the community. Personally, I would do anything at this point for a build. So I wanted to see if we could use the next meeting to talk about this, because, you know, I wave my hands a lot and talk a lot, but I would rather get the engineers who are building the process out in the open and talking to us. And then there's another build process, the current one, that we use for building OCP and everything else, and there's a group of those customer-facing engineers who are doing a three-to-six-week sprint to take on what Vadim has been doing already. But it's still behind the Red Hat firewall, and that's the issue. One of the many issues is that right now Red Hat is entirely responsible for building OKD.
I would really love to see it community-managed and community-hosted somewhere on an open cloud, and there is an initiative there. So that's what I'm hoping we can bring to the fold next week. And then what I'm also hoping, with Jack and CERN and all that, is that we can learn from what you guys did, and maybe you can listen in on what Brian Cook and those folks are cooking up for the new revamped build processes, which are different from the current ones. The current ones have a lot of baggage. And maybe we can learn from how you've rolled your own over at CERN and use that to inform what Brian Cook's group is doing, because they could then just spin you up another pipeline. Like, Mike has a project where he wants a forked version of OKD for a certain thing that he's doing. So we could just spin up more pipelines depending on what people wanted to build in this open cloud, learn from each other, and then help Brian Cook improve what he's doing. So there's a lot of synergies here, and I'm going to shut up now and let you ask me questions, and then make Jack talk about what they're doing at CERN. Any questions about all that gobbledygook? So the wonderful thing about Commons last week was that CERN showed up and got some one-on-one time, I think with Christian and Vadim and other people, and we got a good earful about what you've been doing there. We would really love to have you talk to the working group in two weeks' time, if you're willing, about what you're doing at CERN, and maybe give us a little hint about it right now. And then we have another gig coming up on the 23rd of June in Dublin that Christian wanted me to invite you to, to give a talk about what CERN is doing with OKD publicly, beyond the working group, if you're available.
That sounds great. Yeah, actually I think I won't be available on the 23rd of June, but we can just get started for now, and I think I will also join in two weeks' time to maybe share a bit more of the details, but I think you already gave a pretty good hint. So basically we are also just taking OKD as it is on GitHub and then replacing a few of the operator images, mainly the ones that handle the integration with OpenStack. So we're not doing a full custom build, but actually just replacing a few of the key components. And the reason for that is that the OpenStack cloud that CERN has is, well, rather old, because CERN was one of the main contributors to OpenStack and developed it over time. But that also leads to the fact that the API we have is not fully a regular OpenStack API, especially in the networking area. There are some special things in there; in the end it comes down to some API calls not being fully supported, or being slightly different. For that reason we started out having to fork two of the images: the cluster API provider for OpenStack and the machine config operator. So that we can just basically adjust for those differences, and in the end it's just commenting out a few lines of Go code in there, maybe making some different calls, and that's it. So really nothing fancy, but we're also really trying to keep it to a minimum so that we don't introduce huge deltas to upstream that we then need to port to each OKD version. And from my point of view that has been working fairly well, much better than I would have expected, and in fact we became pretty comfortable with this approach. So now we've actually extended it to the cluster ingress operator as well, because that one is, at least in my opinion, fairly limited if you're deploying on OpenStack. For example, it does not have proper integration with load balancers; it doesn't really know what to do with them.
Or that it should configure the external load balancer to speak the PROXY protocol, which is important if you want to preserve the client IP address. So, in the end, again, minor changes like this, but they are just really huge for us, because, well, we want to know where clients are connecting from, and we want to have that in our access logs, so that's just an absolute must-have. And so we are basically replacing, I think it's in total at the moment, like four images that we are building ourselves. Coming back to the issue that was discussed before, that some of the FROM images in the Dockerfiles are not publicly available: we just swapped in ones that are publicly available, and we never had an issue due to that, because in the end, most of the time you just need a working Go compiler and that's it. And yeah, nowadays we're basically just doing exactly what is described on this new OKD development page, and if we had had that two years ago it would have been a massive help, because, well, before I saw this page, it was always a bit nebulous how you are actually supposed to customize OKD. It was never really clear where these images are coming from and how they are built. But this page, even though it's not fully finished yet, is a massive help and basically describes exactly what we are doing: go to the operator that you want to replace, make some changes to it, build a custom image, push it somewhere, and then create a custom release with the oc adm release new command, and again push it somewhere. And that's it. Then I guess the only part that's still missing is overriding the version in the cluster version operator. And that's pretty much it, what we're doing, of course across several repos, with some integration there.
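The flow Jack describes here, building a patched operator image and then assembling a custom release payload with oc adm release new, might be sketched roughly like this. All registry names, tags, and the upstream release pullspec below are hypothetical placeholders, not CERN's actual values:

```shell
# Rough sketch of the custom-release flow described above (assumed names/tags).

# 1. Build the patched operator from its forked repo and push it somewhere.
podman build -t registry.example.com/okd/cluster-ingress-operator:4.9-custom .
podman push registry.example.com/okd/cluster-ingress-operator:4.9-custom

# 2. Create a new release payload from an upstream OKD release, swapping in
#    the custom operator image by its component name in the payload.
oc adm release new \
  --from-release=quay.io/openshift/okd:4.9.0-0.okd-2022-02-12-140851 \
  cluster-ingress-operator=registry.example.com/okd/cluster-ingress-operator:4.9-custom \
  --to-image=registry.example.com/okd/release:4.9-custom

# 3. Point the cluster (or a fresh install) at the custom release image.
oc adm upgrade --to-image=registry.example.com/okd/release:4.9-custom \
  --allow-explicit-upgrade --force
```

This is only a sketch of the general oc adm release new workflow, not a definitive reproduction of CERN's pipeline; as Jack notes, their setup also handles version overriding in the cluster version operator and spans several repos.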
But yeah, now that I'm seeing this page it looks so obvious, but it was not so obvious when we first had to figure it out, for sure. Well, Jack, we want to have you come back to the next meetings so you can talk in more detail about this, because this sounds pretty fascinating, actually. So if you're available to come on the, what date is the next meeting? The seventh. So if you could come back on the seventh. Yeah, I think that should be possible. Okay, great. Can I ask quickly, can I ask Jack, what version of OKD, what release are you on? We are currently on 4.9, and we have 4.10 in the pipeline. Awesome. Okay, good. Cool. Excellent. I want to be mindful of folks' time because we are at time now. So any last thoughts before we end the meeting? Awesome. This was a great meeting, and I appreciate everyone's participation and support. And we'll get these videos and meeting notes up as soon as possible. Thanks, guys. Thanks, all.