So crazy today, but that's normal, I guess, so no different than last time. How are you doing?

Yeah, this was no good day for us either, because we had a training and we decided to do it on OKD. Almost everything works fine with OKD, but the operators are still a problem. Everything is there, but there are always some little parts missing, and if you aren't used to developing it, it's a hard time. Yeah, a bunch of work still to be done.

Yes. Good to see everybody here. Hi, Jamie. Hi, Jamie. Let me get myself set up here for back and forth. I know this is the doc to add your name to if you haven't already, so I'm going to do that; the good news today is that I don't have to add my name. I'm going to share my screen and run through this. Yeah, I think everybody is working at sort of mach 10 today. I know Charles just gave me a heads-up that he had to take the day off. Let me see if I can round up the other folks. All right, Christie.

So while we wait for everybody else to join, I don't know if you saw my screen. I'm just going to share my screen, and if you see more than you should, then I apologize, because I have way too many tabs open. I don't know if all of you noticed, but the CodeReady Containers build is up. And so, for the training that you were just talking about, Joseph: would CodeReady Containers have helped at all?

No. Maybe a short sentence about this. We have organized the training over two weeks. I decided to use OKD, not OpenShift. And the thing is, we would like to show showcases for the Service Mesh operator, the OpenShift Pipelines operator, and the Serverless operator. It was very easy to deploy the Pipelines operator; yeah, it works like a charm on OKD. We also almost managed today to get the Serverless operator running on OKD. It's also very possible, even to build all the images on our own. It's okay. But the Service Mesh...
Yeah, it was a little bit frustrating, because it tries to pull images from Docker Hub that aren't even there in the current version of the service mesh. And it's hard to find the repositories to build it on your own; it's a lot of reverse engineering. I don't want to complain too much. I know there is still work to do. But if you have a deadline... and yeah, it's my own fault, because I decided to use OKD. We were very close to getting it running. Our plan B is to mirror the Service Mesh operator from the Red Hat registry. That's our plan B for this workshop.

Don't worry about complaining, Joseph. It's good. I never pass up a chance to complain about anything myself.

So I can see that Christian has joined, so I think we can kick this off even though we're a couple of minutes after the hour. I'm going to make sure I'm recording, and I'm going to apologize in advance: I have not posted the last three videos from the past meetings yet. I'm very far behind on that. I will endeavor to get those up over the weekend and do all my uploads. Other than that, let's rock and roll a little bit here today. There are a couple of new people here, so I'm going to show how we handle it: if you can add your names into the working group meeting notes so that we can keep track of attendance, that's great.

And let me just see if I have this. We had asked to see if Clement was going to join and give his comments on the call. It doesn't look like it. I can ping him on Slack; maybe he can join. We've been trying to get the Fedora Container SIG folks re-kicked-off, or rebooted, with some help. So that's there. This is the page that we use for running the agenda.
Does it make sense to clean that up a little bit? Because I think at least a few of the tickets are already solved. Just to keep the overview. Just an idea.

Could you repeat that? Sorry, someone just interrupted my train of thought.

I was asking: maybe it makes sense to clean up this board a little bit, because we have lots of tasks that are already accomplished.

Yeah, we can definitely do that. This one column was just what I used to drive through the stuff. So if there are things that we can take off today, let's do that. And then these are just notes on things that we were doing, so we can hide those boards as well. I'm happy to do that.

Do you want to give a quick update, Christian, on OKD and any work that's going on?

I don't really have anything concrete to show. I think Vadim has cut a new release. Other than that, there's no measurable progress on any of our stories. We're still having some problems with delivering the Windows Machine Config Operator, which we kind of want to use as a template for all the other operators, to get to an open source or community catalog of operators. I do hope that will be solved rather soon, so hopefully next time we will have that template for how we deliver all of our operators to OKD as well.

Really quick: there's one issue. OperatorHub still uses the older format for catalogs, while internally we already use the new one. There's an open issue I'm kind of tracking, because I think it'll be easier for the teams that work on those operators to just have one way of distributing them, one kind of release tooling they use, instead of doing it the old way and then migrating to the new way once OperatorHub does it too. The issue has been open for a bit, so I hope they get to it soon on the OperatorHub side.
Do you have any idea what's holding that up?

I'm not sure. I can try to find the issue real quick; I'm not sure what the wait is here.

Can you throw the issue in the chat? That would be great. Is that an issue for OperatorHub.io, the external one, or the OperatorHub internal to OpenShift?

It's the external one, I think. So I will try to find that issue; I'll have to dig for the link.

I see Clement has joined now. Hey, let's tackle that first. Oh, yeah, sorry, I was looking for the link. All right. So just to finish up that one: if you can find the link to the OperatorHub.io request (there aren't as many eyeballs on that as there were previously; I'm mostly focusing on the CNCF side of things, the SDK and OLM), shoot that to me and I will try to get that wrangled in the next week or so. I didn't see that one, so apologies.

All right, welcome. We have been talking about you, so your ears must be burning; we've been talking about you off and on for a month or so, maybe more than that. We haven't been able to connect as a community with the Fedora Container SIG in a real way. So I wanted to take a little time today to talk about what the goals and the mission of the Fedora Container SIG are, and where you see us helping out. I wanted to give you the floor to do that, and then people can ask you questions.

Yeah. So, as Bruce knows, we met pretty much just after Nest, so I think end of August or around that time, to discuss restarting the Container SIG. It was started like two years ago or so, and it was a good start, but things fell off a bit; people got other priorities, and we lacked a strong commitment of people that could dedicate some time to help there. When we met, we came up with three main goals.
The first one, and I think that's the main priority for the SIG, is the Fedora base container image: the maintenance of this image, and making sure that CVEs and updates and things like that are taken care of. The second one was about layered images: pretty much improving the experience of maintaining layered images. And the one that we discussed with Bruce was about helping you guys, the OKD working group, with building your images and hosting your images in the registry. I think this really comes up in the second goal, because if I understood correctly, you're really looking at building images that will afterwards be available in the catalog, or in OKD. So those will be mostly layered images. And unfortunately, currently the experience of maintaining layered images is not great. The documentation is almost nonexistent, and I don't think we have many people working on that currently. So that's kind of the 10,000-foot summary.

So who is in the SIG at this juncture? Like, is it just you and a couple of other folks? What sort of resourcing do you have to do this work?

Yeah, active, I think we pretty much said three or four people. And, I guess like most of you, it's a part-time kind of activity. Me and a few people really dedicate a lot of our time to the Fedora base image. And we have a couple of folks. Jacob Tricard worked a lot on trying to have multi-arch builds; I think he built quite a lot of the OpenShift images, and before OKD he built a lot of the Origin images in the Fedora registry, for x86 and aarch64. And we have another person that works on the OSBS project, the upstream project that we use to build container images. So that makes three, and hopefully he can help us more with improving some of the processes.
And, if we need it, specific development in upstream OSBS to enable some of the Fedora-specific workflows. So yeah, that's pretty much it.

It looks like there's at least one volunteer from this group to join. And I don't know why James is muted; I'm going to unmute everybody. James, you can put your camera on if you don't have bed head like me. So it looks like James Cassell, who's on the call right now, is interested in joining and helping, and maybe we could get one person from this group. And Neal is raising his hand.

Yeah, I just have no idea what I'm supposed to do. I mean, I tried to join the earlier incarnation of the Container SIG a couple of years ago when it was first being started up, and I did a lot of the early reviews for most of the initial images. But I have no idea how to do anything, and that's sort of been the problem.

Yeah. I think that's still a lot of it. A lot of the fundamental issues we have are still things that need a lot of specific knowledge. Improving the experience will require some development in OSBS, or enabling some tools in the Fedora infrastructure, and things like that. And honestly, even someone that is very dedicated and very interested, that goes through the effort of learning the tools and trying to understand things, will unfortunately hit a wall of permissions: not having the correct permissions to try to deploy things. So that falls back on me and the pool of people that do have the permissions there. That said, I think there is still a lot of value in you guys trying to build your images and reporting what doesn't work.
What is painful, and also helping with the documentation on how to get started with maintaining a container image. You know, kind of just go through the experience, cry a little bit, and every time you cry, open a ticket or add a bit of documentation.

So I should just go ahead and try to make a container and see how badly I flub it?

I think we still have a few active maintainers. The Python container, or the Python classroom container, I think, is still maintained. Some of the databases are still maintained. But yeah, that could also be... oh, sorry, go ahead.

Sorry. Yeah, another way to quickly join and help is to look for an image that is already available in Fedora but is not maintained anymore, and try to pick it up: try to build it and see if it still builds if you just pick up an old image.

What's the process for adding a new image? Is it in dist-git as well, the Dockerfiles, so it's sort of just git and creating a new repository, essentially? How does one request those?

Yeah, you open a Bugzilla; there is a review. We have some guidelines; you should be able to do it. And once the Bugzilla is approved, you have your dist-git repository. Then you have the fedpkg container-build command. Then you start to cry.

I'm just going to ask bluntly: when are your meetings?

So, currently we don't have meetings, because we said we would see if we need them, and currently we didn't really need them. At the meetings we had, it was like, "Oh, I didn't have time to do anything these last couple of weeks." "Me neither." "Me neither." "Me neither." So we were basically meeting just to acknowledge the fact that we didn't have time to work on anything. So we revived the mailing list back in August, and this should be the main place for communication.
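The request flow just described (Bugzilla review, then a dist-git repository, then a build) can be sketched roughly as follows. This is a hedged outline, not official steps: the repository name "mything" is invented, and the exact review template lives in the Fedora container guidelines.

```shell
# Rough sketch of the new-container workflow described above.
# Assumes Fedora packager group membership and fedpkg installed;
# "mything" is a placeholder image name, not a real repository.

# 1. File a container review Bugzilla with your Dockerfile and
#    wait for it to be approved.

# 2. Once the dist-git repo exists, clone it from the "container" namespace:
fedpkg clone container/mything
cd mything

# 3. Add or update the Dockerfile, commit, and push:
git add Dockerfile
git commit -m "Initial import"
git push

# 4. Kick off a build in the Fedora build system (OSBS):
fedpkg container-build
```

Then, as the discussion says, you start to cry, and each tear becomes a ticket or a documentation patch.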
And if we feel that we have a subject important enough to meet about, another video call or an IRC meeting, that should be brought up on the mailing list. We also have an IRC channel, which is maybe the most active place.

Which one is it, #fedora-containers? Yeah. I can put it in the chat. Cool. So for questions or things like this, that's the place to go.

So I think we have done our duty here: we've connected some dots, we've gotten you some volunteers. We can maybe ask Jamie, and who else, Mike and others, to look at that, and then just report back here in this working group when and if we need to do something or get our act in gear. I think that'll help flavor the whole collaboration.

And yeah, do you have an idea of the images you want to do? Because I think there are quite a lot of the S2I images there.

Yeah, the S2I stack seems to be horrifically out of date, but that's the big one. And the other one is making sure we have service pieces like databases and things like that. Things like nginx and proxies, and basically all the little pieces people use to assemble microservice applications.

Yeah, the usual basic things that people usually plug together to make operators or Helm charts or whatever for applications. Specifically, at least something that I've been trying to poke around doing and haven't really figured out yet: there's been an ask for a little while now for Pagure to be able to run on OpenShift and Kubernetes, as an operator or a Helm chart or whatever. And figuring out how to containerize the application, getting that running, and having all those pieces: I'd like to have all that working with official Fedora containers, with the latest software pieces, the Python stack.
All that sort of stuff, all the goodies that people tend to expect for this sort of thing, and get it to a point where the Pagure project can say, "Hey, you want a fully supported containerized deployment for production use? Here you go, this is the way." Because right now we don't have that.

I think, honestly, an easy way for you to get involved would be to look at those old S2I images and just try to move them from whatever, Fedora 28 or so, to current Fedora. So kind of removing the dust, trying to get those images building again, and reporting any pain points or anything like that.

Yeah. Hey, this is Craig. I worked on a couple of the S2I images, MariaDB recently and the PHP S2I image. I went through and made some commits to those to get them on the latest versions. I was really just planning to do pretty much everything that's in the OpenShift catalog and just go down through the list. That's my current plan, unless something better comes out of this call.

I think that makes a lot of sense. Yeah, my opinion is right around there: we want everything that's in the OpenShift catalog available for OKD, just the Fedora version of the containers, if possible.

Yeah, definitely. So right now, the containers we build on our Prow CI, which we then release as the OKD payload, are all UBI 8 based, so we don't have any Fedora-based containers there yet. We might change that, but that's not our focus right now. For the non-core images, though, we definitely want to push for using Fedora-based images. So I think having all the already existing containers updated would be great. And then, as kind of a second step, looking into how we deliver operators, which are kind of meta-containers: they have this bundle container now that references other containers.
So, yeah, if we could leverage the Fedora dist-git for versioning those as well, and then really have Fedora-based operators, which are just made of Fedora-based containers, that would be kind of a second step after that, which I want to do. I really want to get us to a point where we can easily develop operators on Fedora containers and also release them through the Fedora infrastructure that is in place.

Okay. The issue I had, and I can't remember if it was the PHP container or the MariaDB container, I think it was MariaDB: I was able to get it updated, and I think the code was committed to the repo and the test build worked. But I don't know how to get it to the Fedora registry after that. How does that process work? Once you get a container that's been updated to, say, the latest version, how does it get into the Fedora registry? Who does that, whoever maintains the container?

Clement, you probably know it best, so I'll let you answer.

It's actually relatively similar to RPM packages. You have to create an update in Bodhi, and you can select the container build there. I don't exactly remember, but I think the wait is shorter than for RPMs; I think after three days or so it gets pushed to the candidate registry, where you can test things, and after that it moves from the candidate registry to the official one.

Yep, that's the process. Yeah, the IRC channel is a good place to ask. And in order to be able to do that, you have to be at least a co-maintainer of the dist-git repository where the Dockerfile is stored.

The one thing that is confusing me slightly, and maybe this is kind of a dumb question: why are we namespacing the containers with the Fedora version?

It's a historical thing that needs to move. This is something that we want to fix, rather than something that has a reason to stay. I think it's the first or second open issue on the container tracker. Okay.
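The Bodhi flow Clement describes might look roughly like this from the command line. This is a sketch under stated assumptions: the build NVR and image tags below are invented for illustration, and the exact flags should be checked against the bodhi client documentation.

```shell
# Hypothetical sketch of pushing an updated container out, per the
# discussion above. The NVR "mariadb-10.5-42.f33container" is made up.

# Create a Bodhi update that references the container build:
bodhi updates new --type enhancement \
  --notes "Update MariaDB container to latest upstream" \
  mariadb-10.5-42.f33container

# After the (shorter-than-RPM) waiting period, the image lands in the
# candidate registry, where it can be tested:
podman pull candidate-registry.fedoraproject.org/f33/mariadb:latest

# ...and it is finally promoted to the official registry:
podman pull registry.fedoraproject.org/f33/mariadb:latest
```

Note the release-versioned "f33/" namespace in the pull paths, which is exactly the historical namespacing questioned just above.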
Yeah, because I was a little confused about that. And I wonder, because I can't think of a good way of handling needing to upgrade the image progressively if you have to literally change the path it's pulled from. I would hope that we would make it easier for people to just, you know, move forward.

I think it's just a bit of work in the Dockerfile and in how they build the image, and that should be okay now, because Flatpaks are not namespaced. Right, it's just historical.

Okay, cool. That makes things easier.

I just pasted a link to the Pagure instance, which is our dist-git, and there's the container namespace in there, which holds all the repositories for containers that are currently in the registry. And if you want to own a container or help build one, it's probably best, if you want to get involved, to reach out to the current maintainers, especially if they haven't been actively pushing updates, and ask to become a co-maintainer on that repository so you can push your own updates there.

So one ask I have, and I will go ahead and also contribute to this with some containers: if we can get the steps needed written down, a document really for dummies, for people that have absolutely no prior knowledge, that'd be great.

If you do that, jot down the steps you're taking, and then we can hopefully build up documentation around this to make it easier for new people who join us here.

Bruce is waving his hands. Was he volunteering to write that, or just wanting it? No, just being a dummy. Okay, well.

I mean, now we're essentially getting into the same process that's also used for RPMs, and everybody knows there are no unhappy packagers in Fedora. So yeah, it's definitely a little bit annoying to work with dist-git, and it's definitely worse on the container side, so it's not great.

Okay. Well, so, Clement, are you going to take on writing up the instructions, or do you need...?
Yeah, so we have some documentation. I will review it and see. I think last time when we talked with Bruce, we found we don't have an easy, just-getting-started guide on how to maintain a container image. So that's definitely something we need to have a look at.

Clement, do you know if that's the same group as for RPM packages, like the packager group in Fedora? For RPMs, you have to be a member of that group in order to be able to maintain one.

Yeah, at the beginning of the SIG, we discussed creating a container-packager group or whatever. But we never had enough people wanting to just maintain containers to actually make the request, and going through the process of creating a new group and everything wasn't worth it. Most of the time, the people maintaining the RPMs were the people maintaining the containers, so they were already part of the packager group. That's one big thing I also didn't mention: you can only build containers from RPMs that are available in Fedora. So if you're trying to do a container for something that is not packaged in Fedora, that will not work.

That sounds like an interesting roadblock for some of our containers. But we'll figure it out.

Yeah, it's a lot of work to be able to build from non-RPMs, because we have to be able to provide the source of the application. And I know that internally they've started to do it, but honestly, to deploy this in Fedora is going to be a lot of effort.

Well, at least there'll be more of you and more of us to work on it and figure it out. But on the documentation: if you can put a link to what exists now in the chat, that would be great, and maybe we can figure out how to socialize this in an easier version. And maybe, Bruce, if you can take a look at the documentation and add in some thoughts about what's missing to make it simple, that would be helpful as well. Sure.
I know he's a teacher, and teachers usually figure these things out, make them simple, break them down for us, and see what's missing. You know, we assume that people know everything there is to know about RPMs and packaging in Fedora land, and we shouldn't rewrite anything; there's tons of content on that. But yeah, maybe that's something where we need some better documentation and how-to guides, because we're going to rely on you guys for some stuff here going forward.

So, yeah, I noticed Christie was on the call, and I did look up your background. I wanted you to just introduce yourself quickly and say what your goals are, perhaps, from your side of the house.

Sure. I'm curious what you found in my background. I mean, my goal, I guess I'll start with that: someone was asking if OKD was supported on Power, IBM's Power architecture, and the answer was no. And then I was told I should join this call and find out some things, so I just kind of jumped in to see. I know we have Kubernetes support for Power, and, I don't know if you guys are aware, but we have OpenShift support for Power, and also the mainframe platform, s390x. My background is that I've been working a lot with containers on Power for probably the last four years. I got a lot of the Docker stuff done, and did some of the Docker CI work and some other things; now we've sort of shifted, with the move from Docker to Podman, to OpenShift and everything. So I guess my background just continues. I came to find out what we can do to help with any of this stuff; it doesn't necessarily have to be only Power work. I know the community needs lots of help everywhere. So that's what I'm here to find out, really.

Okay, maybe to give some more context; you may know a bit of this already. We have the CI system, which is Prow, and it only builds on x86.
And we actually take CI builds from that system for our OKD releases. For the normal OCP OpenShift release, those containers will actually all be rebuilt on another system internally. There we have PPC and all those other architectures, but we don't have those architectures in Prow. So the real blocker is that we don't have a system to build OKD releases for other architectures. That's not only PPC; that's also ARM and the like. I'm not sure how we can tackle that from here. But maybe that's something the, what is it, the IBM and Red Hat bureau for working together should tackle. Yeah, that's kind of the blocker from here: we don't have a system to build these containers right now. We'd love to do it.

Yeah, that would be great. So, one of the new responsibilities that I have, on top of keeping everything I've done in the past years, is that I am now part of the multi-arch team inside of Red Hat. I'm trying to hand off the CI stuff there to somebody else, but I'm still currently kind of on our CI mission. We do have, right now, and this isn't going to help, but: your x86 CI calls out to this really hacky libvirt setup, so that the testing is done on Power, but none of the building is done in Prow. The systems that you use for CI are IBM-internal, but the ones you use for building are Red Hat-internal. So this is all very much a mess, as you said, as you're aware. And one of the things we've talked about with our multi-arch team is that we are now betaing something: we were supposed to have a different cloud with Power, but now we're going to have this PowerVS cloud, and we're betaing that this week with some customers. The hope is that we'll have a build farm one day, so that it just makes it easier off the bat, and we can kick out and have some Prow stuff. So we have another Prow cluster running in PowerVS right now, but it's not going to help you yet either.
These are all just, like, intermediate steps, so we'll work toward that. But yeah, today I learned that that is how you guys build things, and that that is going to be another reason why we need to get this working, so we can plug into Prow. So that's probably what I need to take away from this call.

Is there a place, Christian, where we can perhaps log an issue for Prow to support OKD, so that we can at least start tracking it? I can't even think of where I would put it.

Yeah, me neither, because we have Prow already; it's just on the wrong architecture, and we would need another public Prow instance, essentially, one that is non-internal, which we can then use to build OKD and release from there. It has to be public, non-internal, and it has to run on that architecture, on PowerPC. So I'm not sure who the team is that would set up an entirely new cluster or something like that. It's definitely not just the team that manages the existing cluster.

Is it out of the question to do something kind of halfway, with maybe QEMU user mode, having it pretend to be Power, so that all this stuff can run? Because I've used QEMU user mode with Podman, and that works, question mark, and I would maybe expect that that might potentially, a little bit, possibly work? I don't know.

We've definitely played with that on some projects. I got NVIDIA Docker ported to Power years and years ago, when that was new, and they ended up using QEMU user mode. And then when the kernel added the new flag to make it easy, they switched over. It's slow, and every once in a while you run into something with some sort of weird permissions, or some sort of a fork that doesn't send it off into the right architecture. So it's kind of doable, but you run into problems, and it's not speedy for testing. And CI, I know, really expects things to be very, very timely. So I don't know how we can try.
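The QEMU user-mode idea can be tried locally with Podman along these lines. This is a sketch only, with the caveats from the discussion (slow, occasional emulation bugs), and flag names vary between Podman versions (older releases used --override-arch instead of --arch).

```shell
# Install the user-mode emulators; on Fedora this also registers binfmt
# handlers so the kernel hands ppc64le binaries to QEMU transparently:
sudo dnf install -y qemu-user-static

# Build an image for ppc64le on an x86_64 host; each RUN step in the
# Dockerfile executes under qemu-ppc64le emulation:
podman build --arch ppc64le -t myimage:ppc64le .

# Sanity check: an emulated container should report the target arch:
podman run --rm --arch ppc64le registry.fedoraproject.org/fedora:33 uname -m
```

As noted above, this is workable for experiments but probably too slow and too flaky for a release-quality CI farm.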
I'm just trying to think of a way where we can get somewhere, because right now we're nowhere when it comes to alternate architectures.

Can I just step back for a second? I mean, the build farm that you speak of: is that something that IBM is going to host?

Yeah, we would. That's definitely in our interest, I think, that we would host it and cover the fees, because even though it's Blue dollars, this stuff isn't free for us to use that cloud. So I think that's something we'd have to figure out; that's way above my pay grade. But yeah.

So I'm just trying to figure out how to park, not to park this and go away from it, but where we can put an issue into some Trello board or something, Christian, so that we can track it. I know it's probably above all of our pay grades, the conversation of resourcing another Prow instance somewhere.

So I'll have to think about that and talk to maybe Clayton or Tracy or something. Yeah, it's probably at that level somewhere, because essentially what we need is a few machines dedicated to this, you know, given to us by IBM or whoever, that we can use to run an OKD cluster on top of, and then install Prow on OKD. Or it could be an OCP cluster; it doesn't matter. It just has to be public, and then Prow on top of that cluster. And then we kind of have to tell it to rebuild all the things we build for an x86 release, and also build that for PowerPC. And ideally we'd also get that for ARM somehow. But yeah, one thing at a time, I guess.

Yeah, really, the Power and Z team has been pinging me on a bunch of other things as well, Christie. We're glad to hear that you've got OpenShift running on it; that solves a few problems for me. I'll probably reach out to you separately as well to talk about that. Yes.
The Power and Z folks want some time on stage at the upcoming OpenShift Commons Gathering that I'm trying to curate an agenda and schedule for, so I was trying to figure out how to fit a talk of some ilk around Power, Z, and OpenShift in there. So maybe you could collaborate on that with me. Yeah, for sure: get on stage and put my red hat on. I saw that. Sure.

All right. So, yeah, that's great, and thank you for joining, and please come back. You know, we've got lots of chores. I think the focus that we're really working on right now is getting those operators working for OKD first. That's really where we're focusing: trying to get that going so we can keep people like Joseph happy. And also the CRC, the CodeReady Containers build that Charro did, which we talked about at the very beginning of this, and which I cheered mightily for when I saw it: enabling people to start playing with OKD locally and getting it up and running quickly. I don't know, Charro, if you want to talk a little bit about that before we go on anywhere else, and the work that you've done. I think there was one last fix to it, around secrets.

Oh, yeah. Well, I realized that when you run crc start, it's going to ask you for a pull secret, but you don't actually have to have a real pull secret; you can use a fake pull secret. It doesn't work with the plain fake pull secret that we're used to using; it needs to be base64 encoded, and so I provided that in the documentation. It was just "foobar", base64 encoded. And that works.

Awesome. I'm curious if anybody's downloaded it and run it yet. I was thinking that we might have to do more than just Diane tweeting it. Diane tweeted it, and everybody loved it. I love that everybody loved it, but perhaps we could do a bit of a blog post or something about its availability and get that out there too. So, good idea.
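For reference, the base64-encoded fake pull secret Charro mentions is easy to reproduce yourself; this assumes the documented placeholder really is the literal string "foobar" as stated above, so check the CRC documentation for the exact value it expects.

```shell
# Base64-encode the placeholder "foobar" to paste in when `crc start`
# prompts for a pull secret, per the discussion above:
FAKE_SECRET=$(printf '%s' 'foobar' | base64)
echo "$FAKE_SECRET"
# prints: Zm9vYmFy
```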
I'll talk with you — really, all it is is 500 words: "here it is, announcing it," blah, blah, blah. We can do that as a Google Doc and then hand it off to a guy whose name is Alex Handy. He's used to being handed things. That was a bad joke, and I'm not allowed to make jokes. So let's see if we can get that done, Charo, and if everybody on this call could give it a try before we post that blog post — because if there's anything not working, we'd like to know before we go and socialize it. I'll give it a shot on my laptop. I just upgraded to Fedora 33, so we'll see how badly it breaks with that. Yeah, I've tested it on Mac and on CentOS 8 Linux. I don't have a Windows machine anywhere in proximity. I have a pit in my stomach that CRC will be completely and utterly broken on Fedora 33. Challenge accepted — go break it. The other thing I'll mention — and then, Jamie, I'll ask you — is that I just got a birds-of-a-feather session for KubeCon North America. I haven't got all the details, but I made Red Hat pay for it, so we will have a space in the KubeCon pantheon of hundreds of other talks to have basically a lunch-and-learn session. If we can get all of our ducks in a row with CRC by then — I'm sure we will; that's November 17 — that would be great. It would be great, Jamie, for you to do a presentation on the learning side, and Bruce — wave your hand, Bruce — is also in an edu situation and is using it in production, I would say, for his classes now, so you can collaborate with him. He's a neighbor of mine, sort of; he's at BCIT, the British Columbia Institute of Technology, which we all know and love here locally. So yeah, that would be great.
So, suffice to say, if we have a birds of a feather at KubeCon, what I'll do is probably make Christian and myself the leads on it, because you have to name people, and then invite all of the OKD working group members to come and join the BoF as well, and coerce one or two of you into talking, as I normally do. I think it's an hour, which is great, and it will be on the schedule, so it would be helpful, I think, to present CRC and other bits there. And if there's any possibility, Christian, of having some of those operators ready by the 17th, that would be really, really nice. We are working heavily on three of them, and my question is: how can we contribute? There are lots of build scripts for OpenShift, and I think they need slight modifications for OKD. Should we create a branch or a directory to get the PRs accepted, or what is the best strategy to get our changes in? Is this welcome, or are you working on that heavily in the background, such that if we put up a PR it would be overridden by completely different things? If you don't see a PR on the repositories you're working on from our side, then please do open one, because we don't want to branch or fork much here. I would prefer it live on the master branch, or the respective release branch — probably best starting with master. And then if we can make those scripts idempotent somehow, maybe with an argument for them to know what to build, I think that would suffice. Each team will review it independently, but if we get one or two of those in and they work well, I think we can definitely use something like that as a template for all the other operator repositories we have. And a second question: if some current images are missing — for example, Service Mesh 1.1.8, I think, on Docker Hub — how do we get them there? Because, for sure, we could create our own repository somewhere on Quay.
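The "argument for them to know what to build" idea could be sketched roughly like this — everything here (the base image references, the image tag, the function name) is hypothetical, invented purely to illustrate one script serving both distributions:

```shell
# Hypothetical sketch: one build script that takes the target distribution
# as an argument, so the same script can produce OCP and OKD variants.
# Registry paths and tags below are made up for illustration.

build_image() {
    target="$1"   # "ocp" or "okd"
    case "$target" in
        ocp) base_image="registry.redhat.io/ubi8/ubi-minimal" ;;
        okd) base_image="quay.io/centos/centos:stream8" ;;
        *)   echo "usage: build_image ocp|okd" >&2; return 1 ;;
    esac
    # A real script would run something like:
    #   podman build --build-arg BASE_IMAGE="$base_image" -t "operator:$target" .
    echo "would build with base image $base_image"
}

build_image okd
```

Rerunning it with the same argument always produces the same result, which is the idempotence being asked for; the per-team review would then be about which base images and build flags each operator actually needs.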
But that's rather a hack. It would be great if someone could do it properly, because there are images already for older versions. Who is in charge of those? Who creates them? I don't understand the process. Yeah, so each team is in charge of keeping those up to date, but a lot of those community images haven't been updated in a long time. What we really want to do is introduce a process where that happens as part of the usual work, but we're not there yet. Until we can provide the teams with a template — "here is how you're going to do it very easily" — I would suggest you create your own repository on a registry and push them there, because you'll be able to build them yourselves and push them there. We just don't have a way to tell all the teams, "you have to keep these up to date and push them to the registry." Unfortunately that's not in place yet. It's definitely the thing we want to do, but for now it's easier to just rebuild them yourselves. And is OPM the way to go for the bundles, or is there a different way? Yeah, that's what we use internally now — the new bundle format; I pasted the link earlier. OperatorHub doesn't accept that format yet. I haven't really looked into how it was done before, but you can probably just do it the same way; it wouldn't land on OperatorHub, but you could still pull it in manually or create a new catalog that references it. Okay, thank you. Yeah, this is a temporary solution; in the long run we definitely want all of this to be on OperatorHub, easily available. Right now we don't really have that — it has to be done automatically, so new versions are built and pushed there as we build, and we don't have that in place yet. Yeah, no problem. I'd love to help, but I would like to focus somewhere and not do things twice.
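The "rebuild and push to your own registry, then reference it from a custom catalog" workaround could look roughly like the following. The repository names and version are made up, and the script only prints the commands it would run (a dry run), so treat it as a sketch of the shape of the workflow, not as a tested recipe:

```shell
# Hypothetical sketch of the workaround discussed above: build an operator
# bundle image and a custom catalog (index) image with opm, then push both
# to a personal Quay repository. Everything is printed, not executed.

REPO="quay.io/example/servicemesh"   # hypothetical personal repository
VERSION="1.1.8"

run() { echo "+ $*"; }               # swap echo for eval to actually execute

run podman build -t "$REPO-bundle:$VERSION" -f bundle.Dockerfile .
run podman push "$REPO-bundle:$VERSION"
run opm index add --bundles "$REPO-bundle:$VERSION" --tag "$REPO-index:latest"
run podman push "$REPO-index:latest"
```

A CatalogSource pointing at the pushed index image is then the "new catalog that references it" mentioned above — the operator becomes installable from that custom catalog even though it never lands on OperatorHub.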
Definitely. I think if you have some ideas for the build scripts, that's a good place to start, and contributions will be reviewed by the teams. You can also tag me on the PR and I'll comment. Okay, I'll do that — thank you. Okay, so there are two things going on in the chat right now, Charo — did I misspeak by saying that the CodeReady Containers secret piece was merged, or what is the status? No, no, it's all good. What I was talking about was in answer to the question about where the code was. If you want to build CRC yourself, the biggest, hardest piece is the single-node cluster, and that code has been merged, so it's just part of CodeReady SNC now, and I added some documentation to the README for anybody that wants to build their own SNC for OKD. The CRC binary build itself is still in my GitHub. We've been having some conversations about a couple of the code changes — the best way to merge the OKD builds into the OCP builds. All right, so I didn't misspeak. No, it's all good. But I did see that it was in your repo, so that's good. We just have to build them manually, so every time Vadim drops a release, I do a new build for CRC and post it. Once we get it merged with the CodeReady code, we're hoping to set up a CI system to do it automatically. The other thing — because we're getting towards the end of our hour, and I keep talking about this, and I think I'm the bottleneck for the recipes and the cookbook stuff — I'm wondering if I set up a call on Friday morning, whether we could just hack on that and get something specced out this week, so that we can shape something that's useful, and maybe a how-to-add-a-recipe page so that people can start contributing to it. I'm not sure what your schedule looks like on Friday. I'm technically on PTO this week; I just love you guys so much.
Welcome to the club, Charo — I'm in the same boat. Okay, so perhaps not this Friday, but maybe Monday afternoon, my time. I'll try to book maybe an hour, maybe two, so we can just hack on it, because it is a lingering one. And I apologize, everybody — I am the bottleneck for a lot of these things, and I just have to hand off ownership and maintainer bits for okd.io so other people can add in. But I think we do need to create a framework for it, and some how-tos — how to add things, and what the goals and purposes are. If we get that done on Monday, I will push whatever we come up with out to the mailing list for people to give us feedback on, and get that done. That sounds good. Sounds like a plan. I'm going to book you now, Charo. Cooking, cooking school. So we'll get that done, hopefully. Yeah, I think that's really most of what we had to cover today. Is there anything else that we missed? I was just looking at the agenda: CRC is out the door, and I mentioned the KubeCon BoF. I have one thing: we're still not building containers for bare metal installs, so bare metal IPI is still not possible with OKD. A lot of people are hitting that, because we don't really say that anywhere, so a lot of people try it and fail. I think you had a meeting or a session coming up with the bare metal Metal³ folks. Yeah, they are — let me just check, and I'll tell you when, and I'll see if I remember to ask them that question. I think it's this coming Monday. Okay, because we really need their input for setting up the builds for those images. It's not until the 19th of October. Okay, so rather than waiting a month, let me see if I can reach out to Pep and see if we can get him to come to the next call.
Yeah, I pasted the link to the issue that my team had created, and pinged all the folks there, a few times already. Yeah, let's see if we can escalate that a little and get them to look at it. Yeah, I'll get on that pretty much right away — I'm just going to ping Pep Marl and see if I can get him to come, or to send somebody. I'll do that right now too. Anything else on your wish list, everybody? Man, I'm so glad I said we shouldn't block on this for OKD, because we would have been so screwed. Well, yeah, but having trailing threads is never good. No, but this is better than the alternative. I'm not even 100% sure why they have to use the RHEL images there instead of just UBI. It may be some RPM repositories they're using, but some of the images don't even install any RPMs, so it's not really clear to me why they use those images. They just don't know how to do community. Yeah, if we could just use UBI images everywhere, we wouldn't have the problem. They don't know how to do community — that makes me really sad. Luckily we do, right? Well, we're doing better. I think that's all we have for today. And Christy, I'll reach out to you, because I have to get some Power/Z on the menu for this next talk. And you're now at Red Hat — you have a Red Hat email; does that mean you're also in G Chat, Christy? I am, yes. Oh, man. Wait — she was IBM, and now... No, I'm both. Oh, no — one like me, IBM and Red Hat. So that means you're purple. You need one of those shirts with the Red Hat-IBM fusion right through it, with the colors and the logos. Yeah, we should make those. Yes, I think we're going to. We talked about that — about getting purple-ish stuff — but somebody pooh-poohed it. Well, they're not fun people, so do it anyway. Cool.
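The "just use UBI everywhere" point above can be illustrated with a tiny Dockerfile sketch — UBI base images are freely redistributable and pullable without a Red Hat subscription, which is what makes them workable for community rebuilds where RHEL base images are not. The operator name and installed package here are invented for illustration:

```shell
# Write out a hypothetical community-buildable Dockerfile based on UBI.
# registry.access.redhat.com hosts UBI images that can be pulled without
# subscription entitlements; the rest of the file is made up.
cat > Dockerfile.example <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi-minimal
# ubi-minimal ships microdnf rather than dnf/yum
RUN microdnf install -y shadow-utils && microdnf clean all
COPY my-operator /usr/local/bin/my-operator
ENTRYPOINT ["/usr/local/bin/my-operator"]
EOF
```

An image built from a RHEL base instead would only pull on entitled hosts, which is exactly the friction the OKD rebuilds keep hitting.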
Well, Christy, it's great to see you and to have the Power/Z folks represented, and to just have more folks here in general. I already pinged Pep for you, Christian, and I will book you, Charo, for a couple of hours on Monday. So make your dinner and come to it, because I think you're on East Coast time, so it might be... Yes, I am. Lucky Charo. And then, once we figure out a little game plan, we'll just open it up. What I'd really like to do is have a few more people outside of Red Hat working on things — Joseph has done some work for me on the okd.io site, and I'd like to make other people able to as well. And you and I still need to work on some of those Fedora Magazine posts as we get everything rolled out. Yeah — Diane? Yeah. Well, now that we have the CodeReady Containers thing, assuming we can get that all validated, we can just put all of that together in one nice blog post. Yeah, that might be the way to do it. Cool. All right, thank you all, and we will convene again in two weeks' time, and I will chase you all down via G Chat if you're inside of Red Hat, and in the Slack channels otherwise. Cool. Thanks, everybody. Okay. Thank you. Bye, everybody. Bye.