Well, let's go ahead and get started. Let's do a quick agenda review. Is everyone happy with the agenda? Is there anything you want to add, remove, or change? Any new business or anything like that? All right, folks seem to be happy with it, so let's jump right into it.

Christian, you are here; Vadim is not. Could you let us know the latest in terms of the OKD release? Whatever you have. If you're talking, we don't hear anything — you might be muted. Christian, we don't hear you, or you're not around; it's hard to tell. You are still muted. OK, so let's move on — Christian is going to reconnect, I think. Let's move on to documentation updates with Brian.

OK, so there's not actually a lot to talk about since last time. I did ping Diane earlier, trying to get the DNS switched. Everything's ready to go; we're just waiting for the DNS to switch over to the new site. The new site is being served by GitHub, so when it transitions over, we're all good to go. Other things: 4.9 — if you've looked at the main documentation, you'll see that we now have versions. They're all just going to the latest. When you go to the official documentation, docs.okd.io, you can now choose the version that you want. Initially it said latest was 4.9, but I can see that that's now being resolved. And then the other thing: there's a link to the code of conduct. Jamie, do you want to talk about that, or do you think we want to get that through?

Sure. At the docs meeting, we went over the code of conduct that Diane Pilford shared from the Ansible group, who in turn — if you look at the bottom of that document — based it on five or six other codes of conduct. So the documentation group signed off on it; we're happy with it. Michael is going to go through and change all of the references from Ansible to OKD, and anything else that needs to happen to make it our own, so to speak.
And of course, we'll add them to the attribution at the bottom of our copy. Eventually, that list will just get longer and longer. If anyone has questions on the code of conduct that could be answered now, we'd be happy to answer them, or you can submit a question later to the group, in email, or anything like that. Any questions folks would like answered now about our motivation and plans for having a code of conduct? Okay, great. If anything comes up, feel free to reach out. Ideally this will be done by maybe the end of the month, or at the latest by the end of next month, and then we'll get it up on the website. And at the beginning of every meeting — this meeting, the docs meeting — we will mention the code of conduct, just so that folks attending can be aware of it. Sandra, if you could do the same in your subgroup meeting, that would be helpful: just at the beginning of the meeting, mention that as this is an event of OKD, we have this code of conduct and we ask that people adhere to it.

Let's see, I think that's it for docs. Big kudos to Michael — he's not on this call, but he did a lot of work — and of course a lot of kudos to Brian.

My theory is that the DNS outage at Red Hat last week is related to all of this.

I actually don't think that's it. There are two folks I normally ping to get this stuff done: Will Gordon, who just had a child, so he's been out on leave, and the other person is in the Czech Republic. I pinged him the week prior and didn't hear back, so I'll just ping him again and keep pinging until the DNS is pingable. We'll get there.

Awesome. Thank you so much for your work, Diane and Brian and everyone else who's done that. Christian, can we hear you now?

Hey, everybody. Can you hear me? Does it work?

Yes. Yes. Thank you.

Okay. Sorry about that.
I have a new laptop and too many microphones, and BlueJeans is apparently choosing the wrong one. Well, now it works. So, yeah, regarding our release schedule: there's been a new release — Vadim cut the new release. I was out last week, so I don't really have any other news than that. Unfortunately, our new CI is still blocked by some internal things with regards to our build pipeline, but we're working on it; it shouldn't be too long. So hopefully it's going to be an early Christmas present — the plan is to get that done at the beginning of November.

Do we have a list of features, or of issues that need to be solved, before we can deliver 4.9?

Yeah, I'm not sure. Vadim and I have been working on creating all the job configurations for our CI build system — for Prow, for the OKD build system, essentially — and we should be there soon. Once we get an upgrade running, finishing, and succeeding to 4.9, I think we should be ready to switch over. I haven't seen Vadim since the week before last, so I don't know if there are any current issues with that, but I'm not aware of any.

Right. In the past, Vadim has created a "road to" document. For example, back in June, he had a "Road to OKD 4.8" document. It doesn't look like he's created one for 4.9, but we can create one, and Christian and Vadim and anyone else can throw in things that are blockers, or things that we'd need to deal with shortly after the release.

That's a great suggestion, Sandra. And you also mentioned a list of features — that would be something to get ahead of the game on. That would be really cool: as 4.9 is coming, get something together that we can throw on our website, on social media, and in the chat — hey, OKD 4.9 is going to feature XYZ cool things. So if anyone wants to help with that, yeah.

I think that's a great idea.
And actually, just before we cut the 4.8 release — which wasn't too long ago, if I remember correctly — I went through that list again and created PRs to update the configs for all the branches that we have now, which is 4.10 and 4.9. So most of that should already be in place for 4.9 as well. There shouldn't be anything missing, but we'll have to check. So if somebody could open that issue and essentially copy over the list of things that Vadim had for 4.8, then I'll go through it again and make sure everything's in place. And with regards to features, I'm not aware of that list, but it does exist, I think. It should be the same features as OpenShift 4.9 — OCP — so that should really be the same. Maybe we can just copy that over from some OpenShift blog posts now that 4.9 is out. There are a bunch of 4.9 blog posts and things like that that we could cross-reference, maybe in the next docs meeting.

Jamie, I was thinking it might be a nice thing to take the 4.9 release update and make you the author of it — whether you do all the work or not doesn't matter — in order to introduce you to the greater Red Hat OpenShift ecosystem as one of the co-chairs here. And we can probably grab some of the content from the OCP updates. Vadim just texted me that he's babysitting. I asked him if he was creating that roadmap, the road to 4.9 doc, and I haven't gotten a response yet, so perhaps he's got his hands full. Literally, yeah.

So I think maybe next week in the docs meeting we could take that up as a topic, and figure out how we can do that on a regular basis — whenever we have a major release, how the docs team takes on doing that — and then we can rotate who authors it, so we can showcase different people from the working group each time. And I have tons of video of people from the product management team talking about the latest releases.
But I think on the OpenShift blog there are a few 4.9 release updates — I think Rob Samsky wrote them this time around, I'm not sure — but we can grab those and talk about them next Tuesday.

One thing that would be nice to get in for 4.9 would be a newer Fedora CoreOS release, because I think we're still stuck on a pretty old version now. And I think that last outstanding issue has now been resolved — at least, that was a race condition, I think, somewhere.

Yeah, I think that was with this last release Sunday. Yeah.

Yeah. So then we can sneak that into the update as well, and maybe get a quote from Timothy, as our resident Fedora person, on there. Is there — maybe this is for Neil and Daniel — the CRC build for the latest release? Are there any gotchas for doing a 4.9 CodeReady Containers release for OKD?

Yes. Yes. Okay. Go ahead, Daniel.

I've had no time to work on that in the past month, so I don't actually know. If you have actual information, please go ahead.

Yeah. So that was actually one of the reasons I popped out of another meeting to come over here: I wanted to talk to you guys to see how and when we wanted to accelerate spinning up that special interest group, that SIG for CRC. I'm probably just a couple of hours away from having a 4.8 build ready. Because 4.8 introduced the ability to run single-node clusters better, it actually breaks the CRC build: the CRC build is checking for things like the etcd quorum guard and some stuff that went away in 4.8. So the 4.8 build for CRC just needs a few tweaks to complete the build of the single-node cluster. 4.9 is going to be a whole different ball game that I haven't even touched, because with the full support for SNO (single-node OpenShift), I expect that really changes how CRC gets built for 4.9. And to be honest with you guys, it becomes a lower priority for me, because I don't use CodeReady Containers. And if I'm being honest with this group of friends, I don't actually like it.
So I've been building it kind of as a favor to the community, but it's not something I use, so it ends up falling down on my priority list — I'd much rather be working on my bare metal cluster.

Do we have insight from the OCP direction on whether CRC is going to continue, now that single-node OpenShift and OKD are a thing?

I don't know. Because actually, for my bare metal lab, I'm this close to being able to run the bootstrap node on my MacBook using the native hypervisor — Hyper-V? No, that's Windows. Whatever the MacBook one is — the Virtualization framework. Yes, that thing. And honestly, if that is working, and we already know we can do something similar on Linux distros, then we're really not that far away from being able to just spin up your own single-node cluster natively. And then you don't have all the constraints of it only being accessible from the workstation, or things like that. So I'm not sure — it sounds like CRC might actually just be obsolete.

Well, I will ask the product manager, Steve Spiker, about that, because I hadn't heard that feedback yet. And Charo, I think you can probably reach out to Steve too, but I haven't heard that it's being obsoleted in any of the messaging that I've been listening to, or listening for. So I'll check in on that. But those are good insights, Charo. And if we're that close — yeah, like Mike is saying, I haven't heard about it going away, but the single-node option is really kind of what we're targeting.

Charo, just with what you're working on — have you actually cut down the footprint? Because one of the selling points of CRC is that it turns off a lot of the admin stuff to make it run in a smaller footprint, sort of run on a single laptop. To me, that is the main difference between SNO and CRC.
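For context on what "turning off a lot of the admin stuff" looks like in practice: CRC-style trimming largely comes down to telling the cluster-version operator to stop managing a component so it can then be scaled down. A minimal sketch of that mechanism uses the ClusterVersion `overrides` field — the component named here (the monitoring operator) is just an illustration, not necessarily what CRC itself disables:

```yaml
# Sketch: mark a component as unmanaged so the cluster-version operator
# stops reconciling it; its deployment can then be scaled to zero to
# reclaim memory on a small, single-machine cluster.
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  overrides:
  - kind: Deployment
    group: apps
    name: cluster-monitoring-operator   # illustrative choice of component
    namespace: openshift-monitoring
    unmanaged: true
```

After an override like this is applied, scaling the operator's deployment to zero replicas would no longer be fought by the cluster-version operator — which is the general shape of the footprint reduction being discussed.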
It runs on a 16 GB laptop, where — right, right, and that's what snc.sh, the single-node cluster script, does. There's actually no hidden magic there, because that's exactly what that thing does when it builds the single-node cluster. So this may be something that our new SIG wants to take on: instead of embracing the paradigm of CodeReady Containers, maybe we challenge the paradigm of CodeReady Containers and see if there's a way for us to deliver, like, a packaged and opinionated Ignition config, or something that we can just enable people to spin their own up with. Because really, what CodeReady Containers actually does is hamstring some of the operators — it force-turns them off, or literally rips them out of the running cluster — and then it turns the cluster into a qcow2 image, which then gets embedded in the CRC executable.

I think CRC and single-node OpenShift are slightly different use cases. CRC is, as has been said, supposed to be runnable on a laptop — it's virtualized — while single-node OpenShift, as I understand it, is really more of a production system that is just running on the bare metal.
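As a reference point for what that "production single node on bare metal" looks like from the installer's side, here is a minimal install-config.yaml sketch for a single-node deployment: zero workers, one control-plane replica, and in-place bootstrap. All concrete values (domain, cluster name, disk) are placeholders, and a real config also needs the usual `networking`, `pullSecret`, and `sshKey` fields:

```yaml
# Sketch of a single-node (SNO-style) install-config; values are placeholders.
apiVersion: v1
baseDomain: example.com          # placeholder domain
metadata:
  name: sno                      # placeholder cluster name
compute:
- name: worker
  replicas: 0                    # no dedicated workers; the control plane runs workloads
controlPlane:
  name: master
  replicas: 1                    # a single control-plane node
platform:
  none: {}                       # bare metal, no cloud platform integration
bootstrapInPlace:
  installationDisk: /dev/sda     # placeholder disk; bootstrap pivots in place, no extra node
```

The `bootstrapInPlace` section is what removes the need for a separate bootstrap machine: the single node bootstraps itself and then pivots into the control plane, which is the behavior described for the assisted installer below.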
Yeah, and now the machine config operator, for example, has support for running in that environment as a single node — it doesn't have to be disabled anymore like it used to be in CRC. I don't know whether they still do that in CRC or not. So maybe for us it's more useful to have SNO — or maybe it isn't — but I think it is a good use case that we're not yet really covering here: if you just have this one machine over there, you can run full OpenShift on it. Having that for OKD, I think, would be great. And that is the assisted installer that we'll have to rebuild for OKD then. It actually doesn't require a bootstrap node: it'll pivot from the bootstrap node into a master, a single-node master. The assisted installer also has support for compact clusters — a three-node cluster where you also don't need a bootstrap node; the bootstrap will pivot into becoming one of the masters. So I do think that is very nice, and we should look at supporting that in OKD.

Yeah, and if I were going to throw something out for the SIG to be thinking about: how cool would it be if we could fire up whatever our OKD CRC is from podman machine? If you're not familiar with podman machine, it's relatively new, and it's really slick.

Let me rein this in just a little bit here, because I think we're starting to get into the discussion that the actual subgroup should be having, right? Neil, Daniel, can you organize a meeting, do you think, to get interested parties together to talk about this?

I would be happy to do that. I don't know the best way to organize a meeting among this group.

We can talk offline about how to do that. Diane and I will get you everything you need to round people up and get things going.

Okay. All right, any last-minute thoughts on CRC? I just want to keep us moving
because we do have a lot of other stuff on the agenda.

I would just ask a question of this group, since Charo has brought up that he's not really using CRC: is anyone on this call right now an active user of CodeReady Containers for OKD?

The only usage that I know of is that we had a slew of comments in the channel, and a couple of discussion items posted in regards to it — I think in spring, right? Or winter. But we haven't really had any since. So it might be helpful to explain to users the difference between CRC and single-node, and find out if people really want one or the other.

Yeah. I also think it's a question of usability. I did try using it a while ago, and it's too big for a laptop — it's too sluggish. If you want Kubernetes on your laptop, use minikube or kind or something; they're responsive. I found CRC was just a bit too big, and everything was a bit too sluggish. And then, if you want to do too much in it, you quickly run into the problem that you're either going to run out of memory or, I find, run out of disk space — because on Macs the images aren't resizable; because of the silly version of the hypervisor they use, you can't resize disk images.

Anecdotally, we've been doing the same thing. We used to deploy minikube as part of our dev stack, and nowadays we're moving towards tooling that will actually spin up a full little cluster for you in AWS, just because once you have the base cluster plus all the add-ons you need for your specific environment, you end up churning through like 16 gigs of memory on your laptop, and life is hard. This is also where things have kind of shifted for us. We still care internally about the local dev case, but fundamentally, something that has made it a little harder to justify is that it's getting harder to get computers — and it's getting much harder to get computers
with actual capacity in them. That has shifted the balance of things lately, which is why Dan and I have deprioritized CRC so hard: I don't even have a computer powerful enough to run the build locally. Even if I had the CRC stuff, I couldn't run it, because I don't have a computer powerful enough to do it. And for a lot of the newer developers — a lot of the ones on our teams that are using cloudy things, containery things — because of the shortages and stuff, that's the more common case now. So I don't know what else to say.

So then my follow-up question to that — and thank you for the feedback — is: the single-node option, SNO or whatever I'm supposed to call it, is that also too hefty for local use?

Oh yeah, it's even worse. CodeReady Containers at least strips out some of the bloat. Which is why I think, when we shift this conversation over to the SIG, one of the things we'll be able to talk about is brainstorming some ideas to strip it down. I think the CRC team has just been at it long enough that it's become a rut, and if we could pop some innovative ideas their way, they would probably latch on to them. And they have other tasks they have to do too — CRC is not their full-time job.

Yeah. The thing I'm wondering is: why isn't this just a part of SNO? Why doesn't SNO just gain a smaller-footprint mode? Because there are OpenShift edge deployments where this would be — you know, there would be OpenShift cases where this makes sense, to even be able to hit the same kind of target sizes that CRC does. I don't think it's unreasonable to say that maybe you don't need all the metrics and monitoring operators and services deployed on a single-node OpenShift configuration in some cases; maybe you
don't need all the — you know, some of the other extra fancy stuff there. Maybe you don't need the service catalog, or whatever, based on a profile that is passed to SNO deployments or something like that. I just don't know why this isn't happening there too.

Yeah, well, I think that is in the works. For example, we're going to throw Jenkins out of the core payload, I think with the next release. Obviously it's a process, but if you have suggestions for what could be cut down, please do voice them, and we'll try to get that to the respective teams so they can work on making their component optional. And I know that cutting the single-node OpenShift use case down is definitely on the agenda. It's an ongoing thing, but it's going to be a steady process — it's going to be smaller in the future, but right now we can't just say we're not going to use a given component. It's definitely on the roadmap, though.

So, Diane, my suggestion might be: when Daniel arranges the meeting, can we invite some of the CRC core team from Red Hat into the working group? It sounds like there could be a win-win there.

That's what I was thinking. I'll reach out to the PM for it — the product manager, Steve — and see if he can come and hear what we're saying, first of all; and then see if there's some resource or roadmap for making it smaller or making it more useful. And the other thing is where we as a working group can be useful to them, to aid in their work. So I'm going to let Jamie drive us on to the next topic, because I think I ran that one into the ground for you, and I'm going to go to my next meeting. Thanks for showing up, Charo — much appreciated.

It's multiple meetings all the time. Yes, yes indeed. All right, take care.

All right. Daniel and Neil, if you're still interested in leading that group, we can get
you set up on reaching out to people and getting everything together, so Diane and I will reach out to you for that.

Let's move on now to issues in the repo. Have any issues stuck out to folks that we need to address, or do they point to something larger that we need to do? We haven't really gotten a lot in, and there are a couple of documentation ones, actually, that came in — they're labeled as such. I don't know, does anyone see anything in issues that you want to talk about real quick?

I was actually just taking a look through. It doesn't seem like there's anything too interesting, and Vadim's been real good about asking people for must-gathers, and so far I haven't seen too many of those yet.

Yeah, and actually he closes a fair amount and moves issues to discussions, because a lot of the time people are opening issues for things that are more discussion. Which leads us to the next section: is there anything in discussions that folks wanted to talk about? A handful of items have come in. Yeah, go ahead.

So, at long last, I've been able to run through the IPI install on OpenStack, and I wrote up a bunch of notes for myself. There were some kind of major things that prevented the cluster from coming up, and then there were a bunch more minor paper cuts, where a lot of them were just that the docs could be better in particular areas. So I'm trying to figure out the best way to get all this written up. Should I file like six or seven different tickets? Should I write up one document and then people can decide whether they want things to be tickets? Should I start making pull requests? What's the best way to get this organized and useful to everybody?

Well, let me let other folks voice their thoughts on this. What do other folks think?

My two cents would be to start an issue with a list of the items — just like one issue in
the OKD tracker.

I would say a discussion — open a discussion item — because some of these things might be actual issues, and some of them might be something that needs a documentation fix filed.

Yeah, that's kind of where I was trying to go. Okay, yeah, I would do a discussion.

So, sorry — that's a discussion in the GitHub project? Yep, exactly. Yep. Okay.

And that's where we're trying to direct folks: things that are not necessarily bugs, but where there might be conversation about techniques or process or something like that, or something you're not sure is a bug or whatever.

Yeah, cool. Okay, I'll start a discussion. I'll write up bullet points of what my experience was, and then we can go from there, figuring out how to best turn them into action. Thank you. And I do want to say, other than a couple of things, it was a surprisingly smooth and delightful process. I'm really, really impressed at the work everybody has done to get this to such a pleasant experience. So thank you, everybody.

So in the end you did manage to get it deployed and running? Yes — several times, with different configurations.

So what I would also love to do, maybe: let's do the discussion, but also do a little write-up — a blog-posty thing — or even do a recording of it, because that seems to be how people digest stuff, walking through it. So if at some point you have spare time, that would be a great thing. I can record that with you, or help you get that done — just the step-by-step stuff.

Oh, I see, just like a screencast or something of me going through it? Sure, yeah, I'd be happy to do something like that.

That would be cool. We used to have more of those, but they sort of tapered off the past six months or so. Folks found it really helpful to watch someone actually go through the process of things. We haven't done one in a while. I'm sure we could also
probably, since we did a non-trivial IPI setup, clean those notes up a bit and put them somewhere as associated reference material, alongside a screencast or whatever, so people can see what it is. That would probably be beneficial, because it seems like the OpenStack one, even though it is one of the better-supported providers, is the one that has some of the least comprehensive documentation, and that kind of stinks.

Yeah, when the new site goes live — I mean, we created the user guides at the last sort of trial day — it'll be good to actually put a user guide in for OpenStack, and then put a link to a YouTube video or a guided walkthrough on how to do it, just so people can easily find it. And then I can reach out to the OpenStack community and do some PR for us there, maybe get some more folks playing with it and using it.

Well then, all right, we'll have this as a task. Next up: Mike, you had something here about RPMs.

Yeah, so, a little progress here — and Neil, I promise this is purely coincidental. The other day I was just kind of messing around, and I was kind of curious: were the oc and openshift-install binaries available in Fedora to just dnf install? They were not. There is a kubernetes package for the kubectl stuff. But I thought, okay, let's try packaging it up — I wanted to see if I could package up the actual OKD binaries for a given stream into Fedora. And I was able to generate a COPR repo that does contain an okd and an okd-install binary. I did rename them so they wouldn't conflict if someone wanted to install the OpenShift binaries. But I was wondering if it was actually a useful thing to have available in Fedora, considering I'm not fully aware of the entire process — like whether a given binary can only install the specific version that it comes bundled with, or whether you can pick different streams to install with any given installer. But I
want to see if they'd be useful to have: the actual client and installer binaries available in Fedora, for people to just dnf install and be able to get going, versus having to go to GitHub, download a tarball, extract it, put it in their path, and do that kind of thing.

I think that's pretty useful, yes. I actually automate that exact process in an Ansible playbook right now, and I just have it running in the background on a timer, so having a repo would be a step up for sure.

And Mike, with regards to your question: each binary release has a version of Fedora CoreOS that it uses hardcoded in it, but you can override that with an environment variable.

Okay. Is it possible to make a build of it where that manifest of information is actually split out to a config file?

Well, it's not supported right now, I think, but that might be an interesting enhancement, actually.

Yeah, because the thing I'm thinking of is: Dan and I have been internally talking about offering a way for people to push-button deploy OpenShift, or whatever, for a very small OpenShift-as-a-service cluster with some basic configuration up front. But we also want to validate which versions people are actually using, and we want to, to some degree, control the flow that people wind up using. And one thing that makes that a bit of a challenge, relative to some of the other bits, is that it's not straightforward to just make openshift-install do different things by default. Maybe some kind of config drop-in or whatever — Mike's build of it as a package could actually read that, and that would make it meaningfully simpler to do the right thing. I don't imagine that the installer code — the core code of the installer — changes as much as the stuff that it fetches to actually do the deployment. But you
know, I could be wrong about all these things, but I feel like it makes some degree of sense to be able to do that kind of thing.

Okay, but do we actually want the general public doing that, from a support point of view? I understand if you're technically capable, but if we have people just poking any old version of Fedora in, I can imagine our forums are going to be a support nightmare.

So this is not about poking random versions of Fedora in. This is about shipping the installer and client binaries in Fedora, to be able to orchestrate the installation based on things that our manifests have. The problem right now is that, as part of the compilation process, the manifest is merged inside the binary executable rather than being a separate file that an admin can set up.

So let me add the correct perspective on this. We do that because, essentially, when we bake the version into openshift-install, it makes sure that the version actually works: we test that the version you're going to boot is working, at least, and that you can get a cluster with that version. And from an OpenShift perspective, we don't support users updating their boot images — switching the images they boot their cluster on. You should keep using the same image to boot your cluster for the whole life of your cluster. For OKD, I don't know how much this would be tested, but I don't think this is tested either, so essentially you should still use the same image you used in the beginning to boot your cluster.

I think we're talking past each other a little bit. I'm just talking about setting the OKD version that it fetches to install.

Yeah, but the OKD version itself is completely coupled with the version of the boot image.

Right, but that can all be — that's all part of a manifest of sorts, right? If you say "I want to install okd-4.8.0-2021-10-15" or some made-up date or whatever, that references a point in time that you've
released a bunch of blobs. It has a referential point to an FCOS image, and so on and so on, right? That is a thing that exists. What I'm saying is that the openshift-install binary does not, by default, expose that as a configurable item — what it fetches to start provisioning.

I think I understand what you want to do, but I don't understand what you get from it. Right now, when you get a specific binary of openshift-install — whether it's for OpenShift or OKD — you've got everything in it, so you know that this specific configuration has been tested. And if you want to do the overrides, then you go ahead: you've got some two or three environment variables that you can use to do the overrides. But if you split this up, then you've got some random version of the openshift-install binary coupled with some random version of the manifest of whatever version you want to install — OKD or something else — and who's going to guarantee that this actually works? We already don't have that guarantee.

Yeah, we do — we do have that guarantee. We ship an openshift-install, and when we ship it, we ship it with the specific RHCOS or FCOS version and the specific version of OCP, and this combination boots and works on all the clusters we test for. So in a sense, this is non-negotiable: we know this works, because we test it before we ship it; otherwise we don't ship it. And if we split that up, then it means that you will use combinations which essentially don't work.

Part of the reason why I brought this up is — when I said "coincidence" earlier — I went into the GitHub project trying to find where the agendas were (I forgot they were on HackMD), and at the very bottom of the to-do list there's a card for Fedora packages, Fedora RPMs, that I came across after I actually got these things to build. The point of me proposing this, or asking if this is a good
idea or not, is because, as it is right now, the binaries I've built come from the 2021-10-24 cut; that's the latest cut available. When a new version gets cut, if I don't update that package, people doing an install will still get the 10-24 one. But if I upgrade the package, people running it with their same stuff are suddenly running off a newer stream, and I don't know whether that's going to cause a problem or not. So from a packaging standpoint, the question was: is this a good idea to have in the first place, given that's not the way OpenShift itself works? Am I going to cause problems for people down the line if they start getting updates for the installer and the client that they weren't expecting? I also wasn't sure, for the major versions, like 4.8, 4.9, 4.10, whether managing those side by side would be something to put into a modularity stream for selectability. But I don't know if this is a good idea as a whole, considering the installer is not version-agnostic as it is right now, or so it seems. The only option would be to have, like, a package per version of OCP... Well, there's complete binding between the data and the installer binary, because there's a bunch of manifests and everything that's generated, so you cannot just use a random version of the installer and a random version of the boot image and OCP release and expect it to work. So I don't know what that would get you.

I think there's some merit to both sides, but with OKD we use a slightly different process than with OCP, in that we have the boot image but then we pivot right away into the machine-os-content that is part of the OKD payload. So we do have some more leeway there: as long as we can pivot from the boot image version of Fedora CoreOS, it doesn't really
matter what we pivot into, because it'll always be the right version for the referenced payload. Using the OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE environment variable, you can override the payload; that's the release image from the payload you're referencing, not a different FCOS version. The first boot will still be the Fedora CoreOS version that is hard-coded in the installer binary you're running, but then it'll pivot right away into the right OS version for the OKD version you're trying to install. And that holds as long as we still use the same pivot mechanism, which is essentially an rpm-ostree rebase.

Oh, I didn't know you can override the boot image; that might be interesting. But it shouldn't really matter, because as long as we have something that boots and that has rpm-ostree in it, we can use that to pivot over into the operating system that is part of the payload. Right now, how we build the machine-os-content is a bit messy, in that we don't really do it on our own CI; Vadim set up the CI for it, which works great, but we don't control that entirely. There is this enhancement from Colin Walters, though, that will really make things much easier, because then we can just take a Fedora CoreOS and layer stuff on top, as a Docker build or Podman build, and that will be the OKD machine OS. I think that's going to make things much more streamlined, because it'll be much easier to build the machine OS in the first place, and also, if you want your own packages installed on the image you boot from, it'll be easier to create that image. I think in the CoreOS team there are more
efforts going on to actually have the ostree delivered in a container and to create boot images from the container image. So eventually, what we're aiming at is not having these boot images among the required things you need to mirror in the first place: you just mirror containers, and then from one container you create the boot image yourself, with a documented, easy-to-follow process. So there are going to be some changes in that area, and I don't think it makes sense to treat this as such a huge problem. If the versions don't match exactly, it might not work; it's definitely not tested, but it could work too. In the future it'll be easier to just create your own image from the payload: build the machine OS in the first place, and then create a boot image from the machine-os-content that is in the payload.

If people do find this potentially practical, I'm happy to go further and actually try packaging it and proposing it as a package for Fedora and whatnot, but obviously I'd want feedback on whether this is something that's usable or useful to have at this point in time. At the very minimum, having the OpenShift client packaged would be extremely useful, because it's quite a pain right now. I do have the OKD client, the oc binary, in that COPR, renamed to okd, and I'm doing a lot of tweaking to get the bash completion to not conflict with oc. I'm also kind of hacking around the actual oc build: when I looked into the oc build process, there's some heavily nested Makefile work, and I bypassed all that by just calling go build directly, as best I could. So I'd want feedback on whether that's a good thing to do, bypassing the standard make process. I do have that bundled up, and it seems functional from the initial testing at least, but I haven't actually done
any testing against a cluster. As for openshift-install, I think that's one of those things that will wind up being useful as modular content: if you decide to go forward with packaging it, do it with modular streams on the feature versions, you know, 4.8, 4.9, and so on, and set them up with EOLs so that they retire fairly aggressively.

Sure, I think it would be super useful and would make life easier for people, but between the two, I would say having the client tools shipped is really, really important. The installer is a big pile of insanity, so it might be worth having a longer discussion with the openshift-install developers about handling this case a little better; it's certainly a valuable longer-term thing.

Let's move on; we've got about seven minutes left. I think we're all basically saying to Mike: go for it, and we can give feedback. Post it in the working group as a discussion item in the repo; a discussion item in the repo would be good, because then people from all over can weigh in. So post this as a discussion item in the repo, give folks the commands you have for testing and whatnot, and let's go from there.

Back to the agenda, I had one more question. Christian, I did see that you submitted a talk for DevConf and attached my name to it; I'm assuming that's an OKD-related talk. Yeah, I actually submitted one talk about OKD for the public DevConf track, and I also submitted a meetup for OKD; that's the one I tagged you on as well. All right, so that's a kind of hybrid in-person/virtual event, an OKD working group meeting at DevConf, if we get the slot. Yeah, I'm pretty sure, if you put my name on both of those. For some reason it doesn't let me; I can see that you put my name on at least one of them. If you could put my name
on that. And I was just going to flag, Jamie: I don't know if you have the budget to travel to the Czech Republic, or the desire to do so, but maybe we could chat about that again and see if that's a possibility, and do some socializing; either that, or participate via the virtual side. We usually get that slot, both of those, at DevConf.CZ. So thank you for doing that, and hopefully we'll actually be able to go. Excellent.

All right, let's move on; we've got a couple of quick ones to get through. Diane, can you confirm that we can have stuff forwarded through the OKD-related Twitter accounts? If you haven't yet, let us know. The idea being that we wouldn't get our own Twitter, or we'd get around needing our own Twitter, but at least have our stuff posted through theirs when we need it. Our content on the OpenShift Commons one, or the general OpenShift one? Anything that you can do to leverage, yeah, anything to get around needing our own Twitter. Okay. Apologies, I missed the last docs meeting. I can post anything I want on OpenShift Commons relative to OKD, so that's not a problem; maybe each time we do a blog or anything like that, we just automatically do that. I usually use the hashtag #OKD. I think the last time we talked, folks were thinking about creating a Twitter handle for OKD; was that another thing you wanted to do? We did, and I think this falls in line with one other question: we wanted to do a Twitter, possibly, if legal would allow it, and we also wanted a separate repo, if legal will allow it. So those are two things we needed you to run by legal. Okay, I can handle that, and we'll talk about the results of that at next week's docs
meeting. Excellent, great. We have four more minutes left. There was discussion about a bare-metal CI and testing group. I now have some bare metal I can test on; if anyone else does too, let me know if you're interested, just so we have something we can talk about with people, or at least so we can create a testing matrix for bare metal and get some results.

I wanted to squeeze in one thing: Drity is taking care of the survey we talked about, the OKD survey, to get a sense of what users are doing with OKD. She's on leave this week, but she'll be back next week and will have a report for us, and we can answer any questions about moving forward with the survey.

We have three minutes left; is there anything else we haven't covered yet? I think that's everything. Anything else? No? All right, cool. So, task list: Diane is going to check on the two things, Twitter and the repo. Have you put those in as issues or as tasks in GitHub? Have we started doing that? We haven't done them as tasks yet, but we could; yeah, for sure, some tasks for me. Okay, and assign dmueller2001, my GitHub ID, to them, and let's see if that makes me do things better. Okay, sounds good. Mike, you're going to post a discussion message with your work; Daniel's going to post a discussion message with his work. What else did we have from this meeting? Jamie, you're going to follow up with me about scheduling your subgroup working group meeting; do you want to just do that by email? Do you have my email address? Yeah, I do have your email address. So Diane and I will hit you up about recording that install on OpenStack; I think there's a good bit of outreach we can do to the OpenStack community once you have that done, Daniel, so kudos, if you can find the time to do that. Mike Rocheford, for the stuff you're doing to get the Fedora packages over there, is there an issue link for that? There is not,
but in the project card layout, which is still actively used, at the very bottom of the to-do list there is a "Fedora RPM" card, or "build RPMs in a Fedora environment," which we can redo as a discussion. Yeah, put it in as a discussion, and put that link, or whatever the paste link is, in there. You know what I love about Jamie? He's on top of all of these things. Let's use the discussion feature and move this stuff forward.

Thank you, everybody, especially Jamie. And the last item: Diane found one error in the video intro I created for our meetings. It turns out that my conversion to a PNG, to import into my video software, screwed up the head of our little mascot. So I'm doing another export from the initial file to PNG and recreating the video. It's all in Red Hat font and it'll look great, so expect all of our future meetings to have that intro again. It's coming back. Thanks, guys. Take care.
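For reference, the release-image override and pivot mechanism discussed earlier in the meeting can be sketched roughly as follows. The release tag shown is a placeholder, not a real OKD release, and the commented commands assume network access, an openshift-install binary on the PATH, and the usual Machine Config Operator pivot flow; treat this as a sketch rather than a definitive procedure.

```shell
# openshift-install honors this environment variable to install a payload
# other than the one pinned into the binary. The tag below is a placeholder.
export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE="quay.io/openshift/okd:4.9.0-0.okd-example"
echo "override -> $OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE"

# With the override set, the installer still boots the Fedora CoreOS image
# hard-coded in the binary, then pivots into the payload's OS:
#   openshift-install create cluster --dir=./install-dir
# On first boot, each node performs something roughly equivalent to an
# rpm-ostree rebase onto the machine-os-content referenced by the payload:
#   rpm-ostree rebase <machine-os-content-ref>
```

Because the pivot always lands on the OS version referenced by the payload, the hard-coded boot image only needs to be new enough to boot and run rpm-ostree, which is the leeway described above.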