OKD working group. Let me share my screen and grab the link. This is the bit that I always end up cutting off at the beginning while we get ourselves set up, but if you could sign in, that would be great. We've got quite a few people coming in. Hey Neil, welcome. Hello, everybody. And Joseph, you and Philip joined an hour early; you get the prize for not catching up with daylight savings time yet. So welcome, Philip. This time I decided not to open a second, earlier session just to catch all of you, but hey, you still came. Thanks.

So today we have a new stable release out. I'm going to share my screen so we can drive through the working group meeting. As I like to do, I'll put Vadim on the spot to talk about the latest release and any issues people are finding. Vadim, if you could take it away and start us off, that would be great.

Sure. The release this weekend has been mostly about accumulating a few fixes, most notably the fix for IP assignment misbehaving in NetworkManager; that seems to be resolved now. However, due to changes in Fedora CoreOS stable, they have switched to podman 3.x. That is probably very nice, but it has a serious issue.
If you try to use the latest stable Fedora CoreOS as an initial starting point, you will fail, because it has a bug when it copies the initial payload. So we recommend you use the previous stable.

Another issue we had was that during the upgrade process, if you're using a fake pull secret, the machine config daemon would first attempt to use oc, and if the pull secret isn't valid (as in, you're not using a base64 part in the authentication section), it would fall back to podman and try to do the podman copy, and that would fail. So we have a bug filed for that, and the workaround is simply to use a valid base64 part in the pull secret. We have merged the pull request which recommends creating a valid pull secret, and that's about the only thing we can do for now. Apart from that, I think that's pretty much it from the technical standpoint.

Just for me, because I haven't tried anything with the latest release: is there a stable release of Fedora CoreOS that we should be using? An earlier one, with this release of OKD?

Yep, the one from February is a great starting point again. It's irrelevant for those who are updating, because you don't need anything else.

Okay. Is it okay if during the upgrade we still get this March release? Because I saw that.

It's only the bootstrap that fails with the March release. So if you're not doing something that requires you to re-bootstrap the cluster, then you're fine.

Unless you're doing an upgrade and you're using the fake key and it doesn't have a good base64; then it'll fail also. So that's the caveat that we worked on this morning and finally figured out.

Yeah, it's a good time to finally make the pull secret valid.

Yeah, making a valid pull secret is probably just a good idea in general. Well, there's a good example in the documentation now.
That's a valid pull secret; the one that was there before wasn't valid.

Where does the fake secret come from? From the install-config YAML? Or is it created by the installer?

It comes from there; it's the formatting. It's still fake; it was just not correctly formatted. It used to not validate the formatting. Now it does, and it chokes on it. So now, basically, the base64 has to be fake-username colon fake-password, and then you base64 that, and it works.

I always wondered about this one.

Yep, it's just the structure.

So before we upgrade we have to change the pull secret, if it's a fake one? That's what I just did; I just tested it. And it's a one-line command: you have a file that has a good pull secret in it, then there's a one-line command that you run, and it goes and updates it throughout the cluster.

Okay. I think you can also update it afterwards, like after you're hung.

Yeah, that should be possible. If the MCD is crash looping, you should be able to SSH onto the node, replace the kubelet's pull secret, and proceed with it.

Yeah, I tried it, and I don't know if I did something wrong.

So a quick question, and maybe Joseph can answer for me: is this something that we should put in a blog post with the release, like which is the stable Fedora CoreOS, and this pull secret thing? Or is there a note somewhere in the issues for this release that has this information, so that people coming in cold know about it?

I immediately created a blog post right after, and even wrote it in the mailing list.

Okay, so you've already done it.
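For reference, a minimal sketch of that fix, assuming the usual OpenShift pull-secret layout. The registry name and credentials below are placeholders, and the oc one-liner is shown commented out because it needs a live cluster:

```shell
# Build a syntactically valid "fake" pull secret: the auth field must be
# base64 of "user:password" (both placeholders here), otherwise the MCD's
# oc path rejects it.
auth=$(printf 'fakeuser:fakepassword' | base64 -w0)
cat > pull-secret.json <<EOF
{"auths": {"fake.example.com": {"auth": "${auth}"}}}
EOF

# One-liner to roll it out throughout the cluster (needs a live cluster,
# so commented out here):
# oc set data secret/pull-secret -n openshift-config \
#     --from-file=.dockerconfigjson=pull-secret.json
echo "wrote pull-secret.json"
```

The same file also works as the `pullSecret` value in install-config.yaml for a fresh install.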
I haven't even looked.

Yeah, for the Fedora CoreOS part, but not for the fake pull secret.

Yeah, on the FCOS thing, though, it looks like the backport request has been verified and is basically being prioritized to be pushed out, so I don't know whether it will last the week, really. That is, if they decide to prioritize pulling in an updated podman and then just recomposing the March update, I don't know if it matters anymore after that point.

Well, it's still good to have the oc piece working properly with a good, you know, fake key in there.

I'm talking about the podman one; I don't think it's worth actually doing anything if it gets fixed within, like, three to five days.

Oh, gotcha. Okay. Yeah.

Joseph, thank you for doing the blog post. And I think Timothy is on here too. Timothy, I think, is the person who is here listening from the Fedora CoreOS engineering side of things, if I'm correct.

Yep, I'm here.

Yeah, so you're hearing some of our pain here. And I think this was part of the conversation last time too: how we get flagged when these things happen. Vadim, just for my understanding, when did you figure out that the new podman was the problem? What was the communication around this podman 3 thing? How did that work? Did you find out just when you did the build, or did you know about it coming down the pike?
We knew about this about a week ago. What we do is we pin a version of Fedora CoreOS in our installer; an IPI installation is bound to a particular version, while UPIs have the liberty of using whatever. Fedora CoreOS has a tracker, and they post when they release a new stable version. At that point I bump the installer, make a build, and make a test run. That usually either passes perfectly, and then I just keep an eye on nightlies, or it horribly fails, and I report back to the Fedora CoreOS folks saying that stable is broken for some reason, and so on. Sometimes some users are faster than I am and they report issues; just like in this case, it was reported right in the middle of me building a new installer. So I think we've got this pretty well covered on this front. In 4.8 there will be a command in the installer which says which particular release, and which boot images for that release, we have been using in our CI. That should make things much easier.

We had similar problems, lots of them, with OVN-Kubernetes, or OVS, during 4.6 and 4.7, and I'm not sure everything there is already solved. I was absolutely happy we had a deep-dive session with Vadim and John to find this random MAC problem, and we finally solved that with the help of the NetworkManager guys. Thanks to them. And I have a script that does reboot loops every three minutes, and I was happy to not see this problem again. But I saw a new problem, where after a while only two network interfaces were available, and reboots did not fix that; we had to delete the Open vSwitch database and also remove all connections with nmcli to fix it. I don't know how major this is. Is it OVN-Kubernetes, or is it mostly related to OVS, since we use such a new version in Fedora?
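A dry-run sketch of that recovery sequence. The unit names, the OVS database path, and the blanket nmcli connection deletion are assumptions based on a typical Fedora CoreOS node, not verified steps; the script only prints the commands unless you run it with DRY_RUN=0:

```shell
# Print (or, with DRY_RUN=0, actually execute) the recovery steps: stop OVS,
# wipe its database so it gets rebuilt, drop all NetworkManager connections
# so NM recreates them, then reboot.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run systemctl stop ovs-vswitchd ovsdb-server
run rm -f /etc/openvswitch/conf.db
for uuid in $(nmcli -g UUID connection show 2>/dev/null); do
  run nmcli connection delete "$uuid"
done
run systemctl reboot
```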
I don't get it, because in OpenShift OVN-Kubernetes is GA. So why do we see that mostly in Fedora CoreOS?

It seems to be a 4.6 issue, because I haven't seen it on 4.7. But it looked like it was pretty common in 4.6; I remember seeing it on 4.6.

Also, I know that we do this all the time, but I want to give kudos to Vadim again, because he is a very patient man. Even when it's late, he's very kind. And I need to remember that he's five hours ahead of me.

Vadim is made of awesome. That is the thing: Vadim is made of awesome. He does everything great, and he's fantastic.

Yeah, absolutely, we're lucky to have him. So the reason, Timothy, that I said that about hearing some of our pain around the Fedora CoreOS stuff, and maybe I'm butting my head in, is that we talked last time about this three-tiered approach to releasing Fedora CoreOS, and I'm just wondering how the needs and wants, and even the awareness of what's going into Fedora CoreOS, could be handled better, so that it doesn't land on us like this as often as it does, and what we could be doing better to communicate with the Fedora CoreOS group. Vadim, I think you do an awesome job at that, but each release it seems like there's always some little thing, and I'm wondering what we need to do to maybe be more involved in the Fedora CoreOS decision-making tree around what goes into Fedora CoreOS, if there's something we should be doing from our point of view. Do you have thoughts on that?
I don't think we need active investigation or constant pressure on the Fedora CoreOS folks to look into OKD bugs, because what we can do is have more testing, and we can always fall back to any particular stable release. We are keeping an eye on what's happening in Fedora CoreOS land, and I think we're mostly aware of any changes coming. For instance, 4.8 is already using Fedora 34. We will need to do a few small changes, so that upgrade shouldn't be as painful as the previous one.

I sure hope it's not more pain. It's going to be painful in some other new fancy way, I'm sure. It just makes us more familiar with CoreOS.

Well, I think a really core thing, pardon the pun, is to get things tested out of the other Fedora CoreOS streams. And I think we can make it easy to do a testing matrix. For example, last week I incorporated some functionality into my script that pulls the latest release from the different streams and incorporates that into an OKD build. If we could do that across the board, we'd basically have a matrix: okay, here's this release, here's this version of OKD, and the community would be able to test these things quite easily. We just need the same thing on the other platforms.
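As a sketch of what pulling from the streams looks like: the stream-metadata URL in the comment is the real published endpoint, but the sample file, the release value in it, and the grep-based extraction are illustrative stand-ins (a real build script would fetch the live JSON and use jq):

```shell
# Offline stand-in for one stream file; a build script would fetch the
# real one instead:
#   curl -sL https://builds.coreos.fedoraproject.org/streams/stable.json -o stable.json
cat > stable.json <<'EOF'
{"stream": "stable", "architectures": {"x86_64": {"artifacts": {"metal": {"release": "33.20210201.3.0"}}}}}
EOF

# Extract the latest release for the stream. With jq this would be:
#   jq -r '.architectures.x86_64.artifacts.metal.release' stable.json
release=$(grep -o '"release": "[^"]*"' stable.json | head -1 | cut -d'"' -f4)
echo "latest stable FCOS release: ${release}"
```

Looping the same extraction over stable, testing, and next gives the per-stream column of the matrix.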
So I've got vSphere covered, but we need something similar, some sort of testing matrix basically, for the other platforms.

Maybe also: I have seen in other open-source projects that they sometimes publish their test setup, down to the configuration switches, so people can see what the test environment was. Maybe we should do something similar, so people can compare their results more easily. In addition, I can absolutely underline what you said, Jamie.

Yeah. From my point of view, it sounds like there are mechanisms in place, but even then you have to work on awareness, and we can use the early releases. I was just wondering, from the community liaison role, what we should be doing with Fedora CoreOS, and whether there are more resources from OKD that we should be putting on Fedora CoreOS, or what else we can do to help here.

From the Fedora CoreOS side, I would say the easiest way to make sure we are not breaking OKD is to make sure that we are testing what you use. Here, apparently, it was one of the features of podman, and we are probably not testing that right now in the Fedora CoreOS CI. If we have a test for that, then it won't break, and we won't release with it broken. I don't know how much coverage we have regarding podman and things like that in the Fedora CoreOS CI, but if specific things are really important for OKD, then we could have that in the Fedora CoreOS CI. And there's already an initiative to bring more tests, more application tests, to Fedora CoreOS, but this is definitely work in progress.

Yeah, and it's also resourcing. So, Vadim is asking which bug could be prevented by publishing this CI information, and now we're discussing what level of information should be published about our CI, because all the configuration on AWS, GCP, and everything else is covered except vSphere. I could contact the vSphere cluster owners so that
we could publish which setup we are testing. But as far as I know there are multiple of them, for instance at least 6.5, 6.7, and 7.0, and I'm not entirely sure which setup our CI is running on. Before we could publish that information, we would need to know what the purpose is: which bugs could be prevented by publishing this, and whether we should increase the CI coverage, or whether the community coverage is more valuable in that case. I'm thinking that we shouldn't push on CI too much. We have a lot automated already, and I would rather have UPI installs covered, and all of them are snowflakes anyway. If we managed to find a bunch of problems with promiscuous mode, we shouldn't be adding a test which verifies just that, because we will have it covered when the community tests anyway, and their results are always evolving and changing in time, so we always know that they are current and valid.

Yeah, in our experience there are a few situations where you can find lots of problems, like reboots. It can take ten reboots until an error shows up, but if it happens on a master, then you have a degraded etcd and maybe don't see it, and then the second time, when the second master fails with the same problem, 20 machine config changes later, with all the reboots those bring, your cluster is down. That's what happened to us. And it was completely random that we saw it, because I did not know that promiscuous mode was on; a colleague of mine had enabled it because of kubevirt tests months ago. Maybe we don't know it, but we stumbled into lots of problems. And I know that you say a test has to produce a deterministic result, absolutely correct, but with these reboots you can find so many effects.

I get it.
What we have instead, at least for OCP for sure, is endurance tests, meaning a cluster is left for at least a couple of days, I think even for weeks, with effectively serial tests running here and there. That includes upgrades, includes running payloads there, and actually includes restarts and so on. It's not a standard job, so I don't have a good definition for it. We could have a similar test for OKD; that might be very valuable. We could work on that if we could dedicate one of the community's clusters to it and try a similar scheme to find more bugs. That would be incredibly useful, because all our vSphere clusters are shared. There is no way we can dedicate one vSphere cluster to an OKD test and watch how badly it hits the actual cluster; we would always have noisy neighbors, we would always have varying results coming in, and that really complicates the debugging. But a kind of fuzzing test for the installation would be very useful.

Jamie, I feel like this is pointing back to the need for a testing matrix and a resource-identification matrix: who do we have in the community that can run stuff on a vSphere cluster to do testing, who has access to AWS resources to do testing, and so on. Ultimately, I don't think this is going to come from resources internal to Red Hat; I think it's going to come from the community. So we have to identify who has what that they can contribute, and then pull those folks together. We did, last year, or maybe early this year, have the vSphere folks meet up together and hammer stuff out.
I think that would be good for the other platforms too: identify who has resources that they can make available to test certain things repeatedly. And just to underline what you said, I ran into an issue with the latest 4.7 the other night that is something between FCOS and OKD, where if the FCOS unit fails, the OKD install on the node succeeds, and if the FCOS unit succeeds and doesn't error out, then the opposite happens with OKD. Things like that are hard to duplicate, right? So we need to be able to say: okay, I'm having this problem, who has a vSphere that can test this, to see if they can duplicate it as well? How we do that in terms of narrowing communication, I don't know. But I think the first thing would be to get everyone in the working group, or external folks, to actually say: yes, I have a vSphere that I can test things on, or I have an AWS account that I can test things with. Then we can have people meeting up to repeat things over and over. Think about the problem with the random MAC address: if John had not raised his hand to say that he had also seen it, I would have thought I was crazy, seeing ghosts and white mice, until John said he had seen the same thing. It's rather crazy to debug things if you are the only one who sees them.

Yeah, great idea. So, just to pull this back and tease something out of here: the vSphere group meeting that we had I thought was really good, and we didn't really repeat it; I think we only did it once, maybe twice. Maybe we should try doing this test matrix and external resources for vSphere, even though, Mike, you're saying vSphere has too many options. That's just life. I'll suggest doing it for vSphere: creating the test matrix, seeing who has a couple of resources. Because, I'm not going to say
it's always vSphere, but it's usually John Fortin or Joseph that's bringing these up. Maybe if we reinvigorated it, if we met as the vSphere group and figured out a testing-matrix scenario for vSphere, we could flesh out what this would actually look like: having external resources in the community testing each of these releases on a regular cadence. That, I think, would be incredibly valuable, and I know we could do it, because we have enough vSphere people here. We might not be able to do it for AWS and GKE or wherever else people are running, but I think we have enough vSphere interest to do something like that. So that's how I would capture this conversation and try to move it forward, if people are willing, and, like with the docs meeting, I'll take that on and try to set up another call, because there are only so many days in the week. And since we all know that Joseph can meet an hour earlier, maybe we'll do that an hour earlier and do the vSphere one before this. I know I probably took us off on a tangent with this, so I apologize. I'm going to park that there: Jamie, Joseph, and John, I'll pull you in on a conversation around that, plus anyone else who's a vSphere person, which is pretty much all of you.

What I also wanted to talk about is last weekend's, or was it the weekend before's, OKD workshop. I want to thank everybody, from Vadim to Christian to Jamie and Shree and everyone else on the call who did amazing things, including a few folks who aren't here today. That was wonderful.
I hope you saw that I put a blog post about it on okd.io with links, posted it to the Google group with the videos, and we also put one up on openshift.com. So hopefully the word gets out a little wider and people use and see these videos. And thank you, Jamie, for chopping up the two-hour-long sessions, because my server died and I ran out of space and memory to render those videos. So thanks; and I think I figured out my problem, Jamie, but I'll still use you as a resource for that.

That said, I'd like to pause for a minute and get some feedback on what worked and what didn't work in that workshop. I know someone mentioned that the screen size in Hopin was kind of small for a few of the workshops. I think that was Charro's comment, and he's not here because he's had a family emergency. I do know how to fix that: take your faces off when you're presenting in Hopin, and then the screen is full, and it wouldn't be as blurry, with tiny text, as it was on some of the video output. So I know about that one issue. But I'm going to stop talking for a minute and ask people for their feedback, and then move to next steps: where are we at with the documentation and documentation strategies, which is what I'd like to spend some time today talking about. So: what did people think of the workshop? Would you repeat it?

There were a lot of people, and I was surprised how many of them use OKD in production.

Yeah, I'd agree with that. I think there is a large number of people out there using OKD in production that are quiet, the lurkers, and this was a way to tease out some of those people and see if we can get them to participate.
That was one of my goals. As I said repeatedly, I would have liked to see about a hundred people come, but I also had some family emergencies beforehand and didn't do as much to promote it as I normally would have liked to. But I think it was good to work out the kinks of Hopin before we do it again. Other thoughts, folks?

One of the things we might want to do next time is have a clear sense of who's going to speak about what, and how many tracks we really need. I think we sort of got it together at the end, and it ended up flowing nicely, but I felt like there were some pain points in figuring out what our real intent was, who really wanted to talk about what, and getting people into the slots where we needed people to talk about stuff. So maybe that's just a question of timing: if we're going to plan one of these, then earlier on in the process say, okay, identify who is going to speak about what, and if we don't have someone for a slot, reach out right away, so that it's not a couple of days just before the event and stuff like that. And that's not a criticism of any individual; that's just the process.

Yeah. Anyone here on the call who is here for the first time, who came to that workshop? I'm curious about that. What I'm not sure it did was draw more people into the working group. It may have gotten us more lurkers, and that's not meant to be derogatory; if you're out there listening and lurking, that's a good thing to do, because we know the content's getting listened to. But from my point of view, trying to get more people, people external to Red Hat, to contribute to these conversations and to be resources on different topics is one of my goals.
It may not be the goal of everybody else, but for me that felt like something we didn't quite manage to do, and again, I also think that I didn't quite do enough outreach because of some of the stuff going on. But, as Philip points out, one of the things I always console myself with is the videos, because it's the long tail of people watching this content. The same is true of the OKD marathon that we had the previous year, which in my head is sort of the predecessor to this thing. So I would definitely want to do it again, but I think the question is what the next step is.

Yeah, and rebroadcasting it: overall we could have done that pretty easily. I think that would have been easy, but asking someone to do it on their Saturday is always problematic too, so I have to be cognizant of that. Rebroadcasting on the livestreams might have picked up a few more people; I'm not sure.

Go ahead. Sorry, since you were talking about broadcasting, I was thinking about simulcasting, because Hopin is great, but if you talk to Chris, you know, having a broadcast on Twitch and YouTube, using their live broadcasting at the same time, would make it accessible to people who are searching for content to watch on any given Saturday. And I think you have a good chance of getting it in front of people who are basically stream-clicking, just looking for content to watch. So I think it's a possibility to get it in front of more eyes, basically.

Yeah. The reality is that by doing it in tracks, you couldn't simultaneously broadcast the four tracks; that was one of the limitations. And also, to be quite honest, OpenShift.tv doesn't have that big an audience yet. It's coming;
It's coming You know it will come For me, it's always including with the stuff that I do in briefings and stuff that I do on openshipped tv It's always about the long tail watchers The people who watch the videos afterwards But that brings me to the documentation because the videos are great, but without the good documentation And the follow-up With the documentation and the work that mike you did and jamie and everybody else getting that Taxonomy in place and your repo was awesome I'm wondering what's what the status of that is right now and maybe If jamie and mike if you could take a minute and say, you know, what's missing What do you want to do with that repo's worth of content? And where where should that be next? You want to go ahead lead off Either way either you can go first jamie Sure, so I think you know and I regrettably couldn't make it to the meeting last week because I showed up early For the documentation um, I think You know and some of this was covered in the meeting is that there's been a lot of duplication of effort In terms of the guides that were created last year Some of the stuff that was created this year, etc I don't know and mike can speak to this more if mike's repo is ultimately The landing place where we want people For that type of stuff, right? So eventually we want to move it. So we need to clear a path that's sort of more directly to it. Um I My sense is that we are getting better at writing documentation but It's sort of insular what i'm seeing is that a lot of questions that we had at the workshop Was stuff that wasn't covered in the documentation that comes from people out in um The world and so it might be helpful for us to come up with a Way of gleaning questions out of the community and then writing documentation based on our answers and you know I had done some of discussion of that and blog posts, right and done some discussion of that um Early on with the dean and a couple other folks about okay. 
what are some common questions to put in the FAQs, and stuff like that. But it might just be a matter of committing one of us, maybe on rotation, once a week, to look through the Slack channels, find out what people are asking repeatedly, and then create documentation for it. If we have a rotation of people, so-and-so is responsible today or this week or whatever, we'll always have someone culling that stuff from questions and then either incorporating it into existing documentation or creating new documentation. So I guess, again, I'm defining process: we should come up with a process for gleaning questions from the community that we can answer.

Yeah. And my question to the two of you, and probably anyone else who wants to pipe in, is about status, because a couple of things were still stubs in Mike's repo, like links out to Charro's thing, and the homelab stuff was all stubs or pointers to places. And I honestly haven't looked recently at the hierarchy. Where are we at in terms of getting enough content
to move it, maybe, as I've been suggesting, into the okd.io repo somewhere? Because I have an ulterior motive. There's always an ulterior motive with Diane: the three homelab ones, Shree's, Vadim's version, and Craig Robinson's. Homelabs are a huge topic for developers and DevOps types. When we write blog posts about them on openshift.com, they're always in the top ten; the remix that I did of Craig Robinson's article, which links out to his Medium post, is one of the highest-hit things on openshift.com in terms of blogs and creating awareness for OKD. So my goal in moving this repo over into a real place is, first, to take those homelab ones and tease them out into full-blown blog posts, to promote OKD and awareness of OKD, and hopefully drive more participants and more people using it in production, not just in their homelabs, but really to drive more engagement in this community. So my hope is that the repo content, at least the homelab stubs, gets filled out. Did that happen over the past couple of weeks, or is it still in the same state?

There are still stubs. Charro's stuff hasn't been moved into a singular page or anything like that. I haven't pushed it, because my thought was that we were going to have the conversation about duplication of effort and what goes where first. But maybe that's just you and me, or other people, looking at this differently. I was sort of hoping we would have that conversation before we moved it over, but it sounds like you want to at least get the homelab stuff over and then hammer out what goes where.

I'd like to lift the whole repo from Mike's into okd.io.

Yeah, I don't have a problem with contributing it.
That's not a big deal.

Yeah, to move it over, and then push Shree, who's on the call, and Vadim, and maybe Nils, who's on the call, and Charro, to do a little write-up, so that even if it's a remix of theirs with a pointer out to their blog post, at least there's a blurb there for each of the homelabs before I do a major blog post on okd.io with Joseph's blog, or one on developers.redhat.com or openshift.com. So I'd like to not be pointing at Mike's repo, not that I don't love Mike and everything that's going on there.

No, it's fine, it's totally cool. We could immediately put that into the blogs on okd.io, because there you have tags, you have search. I mean, you can group articles together; we can create a homelab tag and write blog posts. Everything is still in git repos, so if people want to add things they can create pull requests. And I think, as an entry point, if you search for OKD, okd.io is one of the first hits. The simple idea is that if I go there and see they have a blog or a forum (we don't have a forum), I want to be able to look there and not follow a chain of repositories where I have to search. It's not so easy.

Separating out two things: before we write blog posts, Joseph, the first step is to move what's in Mike's repo into okd.io's repo, and then write blog posts that point to that, as opposed to hopping through those loops and things. That's my goal. Well, I'll take it as an action item, then, to whip everyone together to get those stubs done, get content where the stubs are, and then do a pull request to the okd.io repo.

All right, well, let me follow up with status, then, on where we got to. Last Tuesday we met up and talked about a bunch of action items for the repo. Since then, we put a template in there, and we made some changes. There were two bare-metal directories.
We changed one of them to be called homelab now, and that's where we've put Vadim's and Shree's instructions. I've got a PR open to rename the directory Charro's is in to single-node, which we had agreed on; we wanted a distinction between single-node and homelab. I pinged Jamie and Charro, and I was just waiting to hear back from them before approving that. Eric was able to help me out a bunch, and we got the template merged, I think today. I also went back and redid all the IPI defaults and put those into the format of the template. The next step I was going to take was to rewrite the outstanding ones, Shree's, Vadim's, and Charro's, and format them under the template so that they at least look similar, even if the deployment section, in Charro's case, links out to his external page. We'd fill it up a little so it looks like the rest. So we're most of the way to where we said at the last docs meeting we wanted to be. I'm totally okay with contributing that repo to a different organization if we want it somewhere under okd. And lastly, in terms of the future of that repo, I want to talk about its past. We had originally created that repo (and when I say we, I'm talking about the internal cloud infrastructure team, which is who I work with; we work on a lot of the bits around the machine-api operator and things that interact with the cloud layer) because we wanted to start cataloging all the different options we were seeing people deploy with, so we could start to get a picture of what the various deployments look like. A lot of this was around vSphere activity, because, and I'll call back to something Diane was talking about earlier, having this group do vSphere work is amazing. Getting more community interaction on vSphere would be totally amazing, because internally we have a very strong grasp on AWS, GCP, and Azure, and some of the other public clouds, even Alibaba, and bare metal, Equinix Metal, and things like that. We're getting a handle on those clouds fairly easily. But we see a ton of people showing up who want to use vSphere, and I think the popularity of vSphere in the community is so large that it's difficult for us as a product team to service all the requests that are happening around it. It's very easy for us to do AWS and GCP, and we have a good handle on those things, but vSphere, because of the explosion of options for the ways you can deploy it, becomes very difficult for our team to manage, even just all the Bugzillas that come in. So a huge plus-one to that effort. This was why we originally started trying to record these various deployment options: we wanted to get a handle on just how many different ways you could deploy things, especially on something like vSphere. That's a little window into why we started it. As for where it ends up going in the future, if you want to use it for a collection of deployment guides, I think that'd be great.
Eric had a great suggestion in one of the PRs, the template PR: the notion that we might have an install guide that's a base for a certain thing, say an AWS UPI install guide, and that base would be a template others could refer to. So if you were doing something like "I'm installing to AWS, I'm using UPI, but I'm also using a cluster-wide proxy," you could have a guide that points to the base guide, which has all the basic information, and all it needs to do is add a little extra blurb. So there's this idea of modularity to the guides as we start to build them up. Those were some of the ideas we talked about over the course of the week and the PR reviews that went back and forth. Yeah. And it's good; I'm totally thrilled with the vSphere content and the community that's coming together around vSphere and gelling, too. I think that's going to be huge for us, and it's one of the big things we can offer back to the engineering teams: the community will help parse out what's going on with vSphere and the different releases. That's a huge selling point when I, and the team, have to go back and get engineering resources put on OKD. If we can say not only are we helping you with the future of RHEL, but we're also helping manage all of this vSphere stuff, those are the arguments we have to keep making, when you think of me and the other folks internal to Red Hat who have to argue and fight for resources for this project. And it's not that we argue; it's just that the pull and tug for resources is really strong.
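The base-plus-blurb idea Eric raised above can be made concrete. As a rough sketch, assuming the base guide covers a standard AWS UPI `install-config.yaml`, the cluster-wide-proxy variant might only need to show the stanza it adds on top; all hostnames, networks, and the CA bundle here are placeholders:

```yaml
# The "extra blurb" on top of the base AWS UPI install-config.yaml:
# route cluster egress through a proxy. All values are placeholders.
proxy:
  httpProxy: http://proxy.example.com:3128
  httpsProxy: http://proxy.example.com:3128
  noProxy: .cluster.local,.svc,10.0.0.0/16
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <CA certificate of the proxy, if it re-signs TLS>
  -----END CERTIFICATE-----
```

A variant guide structured this way stays a few lines long, and readers get everything else from the one base guide, which is the modularity being described.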
Yeah, it's just It's the the pull and tug for resources is is right The strong and I'm pushing I'm trying to push the okd message internally as well because like what I'm trying to take back to our team And the stakeholders that we have involved in like especially the bugzilla process around what we're doing Like I'm trying to point out how valuable the okd community has been You know for us in terms of okay. Yeah, there is this easing of pressure Around the bug reporting and fixing that's happening in the community So I'm trying to push that message internally and I think it's you know It's getting uptick from engineers that I talked to at least Yeah, I think I think we've got a good case and and it's working and and red hat supportive So it's it's not that it's just you know It's making sure everyone is bought in to recognize the value that's happening Some people may have it but not necessarily everybody knows Right like another subtle effect of what we're doing here is we're making it easier for Okd to move faster like because things are actually happening and people are actually using it and we're plugged into fedora coro s Rather than using sentos or rel for us or whatever. 
We also have the opportunity to do things like test new modes, to see whether the defaults and opinions OpenShift is trying out actually work in practice or not, and to build those sorts of things. I don't think there's another avenue in which that could be done. I think CRC would be in a much worse spot today if it weren't for a lot of the things we've been trying to do and figure out. There's another angle to this too, Neil, which is that as the community here is doing a lot of work and starting to do more, one of the things I'm trying to do internally is change our language. In the past it's all been OpenShift, OpenShift, OpenShift; every tool we create is openshift-this or openshift-that. What I'm trying to do is shift the discussion internally to say: look, if we want OKD to be this upstream, this mythical upstream to OpenShift, to OCP, let's adjust our language and start talking about OKD. If I'm going to make a new tool, I don't want to call it openshift-whatever; I want it to be okd-whatever, because I want to push towards the community first. So that's another message we're trying to get in internally, and it's an uphill battle right now. Yeah. So I'm going to circle back to documentation. As I said in the chat, I can be patient; I don't need this repo to move over right away, and I don't need the homelab update right away. I know everybody's busy.
I'd like to get Craig Robinson to do an update on his, and I'll have to reach out, because he's not on the call right now (and if you're listening, Craig, I haven't yet). Let me know what the possibility is of getting him to write up the steps for the update: I think he did 4.5 in his last homelab write-up, and now it's 4.7 or 4.8 that he did the demo with. So I'd like to get that, and get it to a point in your repo, because I know it's easier and quicker for you to do stuff over there. And maybe we'll wait until next week's docs meeting, have another conversation, see where we're at, and then after that do the pull. The other thing, and Jamie keeps harping on it, and I'll say right up front: I am not a docs specialist. I just know good docs when I see them. I'm also not a process specialist. So if there are other people on the call who are docs people (I know one person, Amy, who was on but had to leave at 10:30, and she does all the RDO docs), if there's someone else who wants to take on building a cohesive docs strategy and process, I'd be thrilled to work with you on that. And I can see Jamie raising his hand. Jamie, you keep raising your hand: editing my videos, editing, doing all this. But I think we do need to step back and figure out what we have, an audit of what's in the system right now. I love the idea, Jamie, about the FAQ, sort of FAQ stuff drawn from whatever the current conversation is. One of the things, Joseph, that I would coach, and I know this because I'm old as dirt, is that documentation by blogging isn't sustainable; everything goes out of date, just like Craig Robinson's 4.5 homelab blog. So what I'm trying to figure out is how to do documentation so that we have this current FAQ content coming out, so that each release gets a "what was the issue with this release" blog like you've done, Joseph, as well as the current conversations that are going on (if OVN is a big thing, or libvirt is a big thing this week, have those topical posts), but so that things that need to be docs live where docs should live, because we have docs.okd.io, and we have these guides now, and we should try to create a cohesive strategy for that. That would be, as I always say, my fantasy land; Fantasy Island, if we had time. So Jamie, if you want to write up a doc or something to share next week, in between all of the other things you're doing, because I think even if it's just a stub of a doc, that gives us something to start from and work with. And I will take any proposals. There are a lot of people who are very quiet on this call right now; the usual suspects are talking. If any of you have experience with docs, or have people working on your teams with that experience you can drag into this conversation, it would be lovely to have more docs help.
It would be lovely to have more docs help so But I don't want I know jamie you already raised your hand about the testing stuff to testing matrices But I think if we if I do something like continue the docs meetings every other week and then come up with We had talked about office hours Um community office hours on a thursday, but if maybe we Jig that to be every bi-weekly office hours and on the other week do a v sphere group Which may end up being the same thing as community office hours, but it's The same people but that regular cadence of this week We're going to talk about v sphere and testing on v sphere and the agenda is about you know External resources and how we can do that and communicate our results back effectively And then have office hours on thursdays as well and use openshift tv You know I I get that I really do that But I you know and I think the other thing that it's really important because It's the long tail watching of this content That people are doing I get more feedback from people in strange Parts of the universe and different time zones that you know, they're watching I never think anyone ever watches these recordings Until I don't upload the last three And get behind and then someone knocks on my door and says hey, you know, we're watching this stuff. We're watching you so To the lurkers out there. We love you too. Um, keep on lurking So I don't know if vadeem if you have any other thoughts about that to close that On the docs part. No, not really. There is An infinite field of improvement we can do there unfortunately That always is and docs make a community is what I kind of say and Until we have a good doc strategy We're always going to be in this recursive loop of having these conversations about stuff So I think it's time and it would be lovely to do So on that note Is there anything else we missed today that we should have done any other feedback? 
John had a question. Yeah, so Vadim and I were talking earlier on one of our Slack threads, and the question we raised is: where are we tracking bugs, or whatever you want to call them, and are we doing it well enough? Or do we need something else in order to track bugs, or to-dos, or something else? Is that a fair restatement, Vadim? Yeah. It just feels like we are misusing the bug tracker for all things tracking. Perhaps we should have some kind of to-do list. GitHub has project boards; maybe we should use one for the non-technical parts, like which bugs go into the next release. GitHub also has milestones; should we be using them? Should we be using something like a blog? It's pretty much an open discussion. We should probably use the mailing list for that, but any ideas on how to improve our workflow are greatly appreciated, because there is always room for improvement, and I'm not very happy with what we have right now. There are lots of things the community could participate in, but they're probably locked behind the odd labels we use for issue tracking and things like that. A lot of folks are asking for a good first issue to get started. We don't have that, because that would mean getting hands dirty, but we have a lot of jobs and things to do: cleaning up documentation, cleaning up the issues that are active, cleaning up the list of releases we have. And the bug tracker is probably not a great place to discuss that. So overall workflow improvement is probably a good topic to have a discussion on. Yes, absolutely. Yep, I agree. And I'd argue Slack is not a great place for it either. Yeah. We'll have another call next week on docs, on Tuesday, and what I'll try to do through the mailing list is kick off the vSphere conversation for the following week, on the Thursday, and see if we can get a group and a meeting going on that. Okay, and now I can see in the chat: Discourse, Matrix, or one of the bazillion other things that are out there.
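On the good-first-issue point: one lightweight mechanism, sketched here as a hypothetical (none of the OKD repos necessarily have this today, and the filename and fields are illustrative), is a GitHub issue form that applies a "good first issue" label automatically, so curated starter tasks stay findable without anyone hand-labeling them:

```yaml
# .github/ISSUE_TEMPLATE/good-first-issue.yml (hypothetical filename)
name: Good first issue
description: A small, self-contained task suitable for a new contributor
labels: ["good first issue", "help wanted"]   # applied automatically on submit
body:
  - type: textarea
    id: task
    attributes:
      label: Task
      description: What needs doing, and where in the repo to start
    validations:
      required: true
```

Anyone on the team who spots a suitable cleanup job (docs, stale issues, release lists) files it through the form, and newcomers can then search the label across the organization.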
Hosting our own stuff: I'm kind of fine with that, but I still don't want to lose the conversation that raises our visibility in #openshift-dev and the Kubernetes Slack. I think our presence there is really important, to keep the conversation about OpenShift open, and to keep an open-source angle and an awareness that people are using OKD, on the part of everybody in the universe, not just Red Hatters but the rest of the universe that's lurking in there. So I'm leaning towards staying where we are. Well, I think Slack is good for what we've been using it for, extended questions; we've run through debugging sessions in threads and so on. But I'm not sure it's a good place for a to-do list, things we want to do as a community. I know Neil doesn't like Slack, but I think there is a good community there and a lot of people watch it. I'm up for other things too, but I agree with Vadim: we need something where we can track that kind of stuff. Yeah, well, I'm just one voice, and if the community wants that, I'm fine with someone setting it up for us. The other thing I think we could use, besides the agenda page, is the projects page as well. That's in GitHub and pretty public, but I don't think we have it in the right place, because obviously not everybody can add to it; you don't have the rights to do so. So maybe creating the project page in the okd.io repo, where it's public, might be another alternative. I'm just really loath to have to monitor yet another discussion forum, and that's just me, because I'm already on about 40 others.
So if the community wants it, and we can set it up, and it's useful, do it. I'd have to turn off something for that: create something new, and turn off something that's not so useful. We're not turning off Slack anyway. Yeah, I think it's mostly just that we need something that isn't GitHub issues for general-purpose questions. Personally, I'm fine with a separate GitHub repo that's specifically for that, a Discourse, a separate channel on the Kubernetes Slack, whatever it might be; it just needs to be something that isn't overly technical. We could ask for OKD sections on ask.fedoraproject.org and on the Fedora Project's Discussion instance, because the instance exists, it's already paid for, and there's nothing wrong with using stuff that's already there. Well, hold that thought, because I have to kick you all out of here now; I have another meeting coming up on this same URL. Let's have that discussion on what's available. Yeah, this is my public forum for everything I do with OpenShift. Thank you for today; this was really good. Yeah, let's move this conversation over into the OKD working group mailing list, and we will do that. Next week, look for an invite to a Thursday vSphere group. I've got to figure out which time slot, whether it's my 9 a.m. or this 10 a.m., so 1600 or 1700 UTC, and I'll put that question out on the mailing list. Thank you, everybody. Thursday doesn't work for Bruce; okay, maybe Wednesday. We'll see what we can find, and I'll Doodle it or something. All right, take care, everyone. Thank you. Bye.