The recording has started. All right, folks, welcome to this edition of the working group meeting. This is our meeting for May 11th of 2021, and I would encourage folks to go to the meeting group notes and put your name in the attendance sheet there under May 11th. That link is in the invite, and I'll also put it here in the chat real quick. So we'll get started with Vadim and then we'll move from there. So Vadim, what do you have for us in terms of updates?

Some engineering points on OKD. We didn't release this weekend because our previous signing key had expired. We submitted a new one, but that key is not yet accepted by the CVO. We have a pull request to fix that; it needs review, and hopefully this weekend we'll make a new 4.7 release. Other than that, just a bunch of small fixes landed in 4.7, so it shouldn't be anything groundbreaking.

Last week, or was it the week before, we released a release candidate for OKD 4.8. The main differences are that it's Kubernetes 1.21-based, and the way we build machine-os-content has changed significantly. We no longer rely on Fedora CoreOS commits. Instead, we build our own commit from the same configuration, using the same RPMs, layering in CRI-O and all the necessary RPMs, like the guest agent for oVirt and open-vm-tools for VMware. That lets us avoid using OS extensions, which should make initial setup and upgrades a bit faster, because all we need to do is unpack the OSTree. And once you have the new OSTree deployment, you can check the versions of the components using the rpm -qa command, which is very helpful for figuring out what versions have been installed. And with everything baked in, rather than pulled as extensions, you no longer need access to Fedora repos; in fact, we disable them on every upgrade. So that should make it less error prone.

4.8 is in so-called feature-complete status, meaning significant features won't be added, but more fixes are still landing. So please stay tuned. File bugs, report issues; general feedback would be very appreciated. I think that's all we have from the engineering standpoint.

Are there any particular bugs that stand out in what's filed right now, that are open, that people would be able to help you with by testing or at least taking a look? Anything stand out in terms of issues?

Nothing comes to mind. Okay. I think the usual testing, of vSphere UPI especially, would be very useful. We could ask for a copy of the development version of the documentation, which lists some new features in 4.8, like PROXY protocol support in ingress; that might be useful for those who want to preserve the forwarded client IPs. And more features are coming in 4.8 that would be very useful to test early and report feedback on. Bootstrap-in-place is one; that one is a tricky beast, but some early testing would be very appreciated. As for something to fix right now, I think the most significant issues are in our infra, and those are tricky because they require a lot of tinkering and a lot of carefulness so that we don't break nightlies at all. But general testing and a look at the 4.8 features would be great to have.
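As a quick reference for the version check Vadim mentions, a minimal sketch, assuming standard oc debug access to a node; the package names in the grep are just examples:

```bash
# Inspect the booted OSTree deployment on a node (no Fedora repo access needed):
oc debug node/<node-name> -- chroot /host rpm-ostree status

# List installed package versions to see what shipped in the commit;
# the package names here are illustrative:
oc debug node/<node-name> -- chroot /host rpm -qa | grep -E 'cri-o|open-vm-tools'
```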
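And for the PROXY protocol feature mentioned above, a hedged sketch of what enabling it could look like, based on the 4.8 development documentation referenced here; the field names should be verified against the docs for your release, and this assumes a HostNetwork publishing strategy (e.g. bare metal):

```bash
# Enable PROXY protocol on the default IngressController so the router
# preserves the original client IPs forwarded by the load balancer:
oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge \
  -p '{"spec":{"endpointPublishingStrategy":{"type":"HostNetwork","hostNetwork":{"protocol":"PROXY"}}}}'
```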
And the next thing I wanted to do is to highlight the discussion section of the repo, which has gotten a little bit of activity lately, and go through these to see if there's anything outstanding, or just to bring them to people's attention.

One of the first ones that came in was the suggestion to have exact links to the Fedora CoreOS images. There's nothing that I know of right now. Actually, my OCT tools script has the ability to do that, because it parses through the JSON to get it. But I don't know of anything else other than what we see coming forward. Any comments on that before we move to the next one?

It would effectively be part of an openshift-install subcommand to print the exact URL to the ISO. It's coming in 4.8; I don't think we can backport it to 4.7, but the long-term fix is in the pipe. Yeah, sorry, is it to download the Fedora CoreOS images for offline installs? Yeah, so that's what we added in 4.8.

And now bouncing over to #609, an automated solution to back up etcd on a schedule from within the cluster. Where did we land on that? There was a lot of discussion. Is Shreya on today? I don't see them. But just to highlight this, I think this was a great discussion: automated solutions for backing up etcd. Folks can check out that thread. For those that are just joining, or who are watching the video: there's now a discussion section of the OKD repo that's been opened up, and folks are having more nuanced technical discussions there about features and things related to the website and whatnot. And yeah, automatic backups would be awesome for sure, but there are some complexities there, obviously. Any comments or thoughts on that, or anything folks want to mention, before we move on from that one?

I think it's a good starting ticket for users who want to play with OKD. And we have various approaches to the same problem because, well, the use cases might be very different. Someone wants backups on request, so maybe a Tekton task would be easy for that. Someone wants persistent snapshots, so an operator is probably the best pick there. With various approaches, having some code or scripts to start with would be great, and then we'll see which one is more widely adopted and effectively wins. Any other thoughts? Something like that?

Yeah, some CronJob at least would be sufficient, I think, for the most critical backups once a day. I remember that we can call a backup script that is always present on the nodes to do an etcd backup, and such a CronJob could simply call that script regularly, and maybe copy the result away to an external location that you can define by changing the CronJob's script.

I think a CronJob with a container is a great starting point. The problem is you don't get any reports if it fails, so you might assume that it's passing, but it's actually been failing all along. That's right, if the error handling in the container does nothing. Yeah. And the restore procedure probably should be somehow automated too, but that's something to look forward to. But the container which launches the script on the nodes is probably the most critical part, reused in every single approach we've tried. So a CronJob is a great starting place, which we can easily evolve into a Tekton pipeline. And then we could write a more complicated operator on top of that and have proper reporting, maybe even via Tekton. So that would be a great showcase of how various technologies connect with each other.

Let's see if we can round up some folks. There are a lot of ideas floating around. It'd be good to get a concerted effort going, maybe get a repo together. I'm happy to contribute to the Tekton aspect of it.
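To make the CronJob idea concrete, a minimal sketch, not a supported recipe: it assumes the stock cluster-backup.sh that ships on control-plane nodes, a service account granted the privileged SCC, and an illustrative namespace, schedule, and image. As noted in the discussion, it also has exactly the failure-reporting gap mentioned above.

```bash
cat <<'EOF' | oc apply -f -
apiVersion: batch/v1beta1        # batch/v1 on Kubernetes 1.21+ (OKD 4.8)
kind: CronJob
metadata:
  name: etcd-backup
  namespace: openshift-etcd
spec:
  schedule: "0 2 * * *"          # once a day, per the discussion
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector:
            node-role.kubernetes.io/master: ""
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          restartPolicy: Never
          containers:
          - name: backup
            image: registry.fedoraproject.org/fedora:34
            securityContext:
              privileged: true   # needed to chroot into the host
            volumeMounts:
            - name: host
              mountPath: /host
            # Run the backup script that is already present on the node;
            # copying the result off-cluster would go here too.
            command:
            - chroot
            - /host
            - /usr/local/bin/cluster-backup.sh
            - /home/core/assets/backup
          volumes:
          - name: host
            hostPath:
              path: /
EOF
```

Checking the Job status afterwards is the only failure signal this gives you, which is exactly the reporting gap an operator or Tekton pipeline would close.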
And I think it would be cool if a couple folks from the group got together and created a little subgroup to come up with something.

All right, next one is the transition to cgroups v2. I threw this in to have some documentation of the conversations between the Fedora CoreOS group and the OKD group. Vadim, if you just want to talk to this for a second; basically it's covered in there, but if there's anything you want to add. Sorry, I missed which ticket we are discussing. The transition to cgroups v2 that FCOS is doing.

Right. So we have all the basics in the 4.8 nightlies. The only missing part is the runc used by builders. So all the features are working, and we have a way to enable it, but the tests are failing because all the builds are effectively crashing immediately. What we could do is experiment with building our own OKD setup with an updated builder. The problem with where we'd pull it from is that it might be using RHEL packages, so testing it in CI would be pretty complicated. But as an exercise, building a builder container using CentOS Stream or Fedora would be great; it's an excellent one, because all we have as a starting point is a Dockerfile. And that would be very helpful, because the ticket is well filed, but it's not a priority for the 4.8 release for sure; we'll be pushing for it to land in 4.9.

Any other thoughts on that? Timothy, do you have any thoughts you wanted to add? I'm looking at the ticket details. And yeah, even though it's not a goal right now for the OpenShift product, it's definitely on the radar. I don't know how it's going to get fixed exactly, but yeah, it's still kind of a priority for us.

Anyone else have any comments? Any comments or thoughts on cgroups v2? No opinion on that; I want it. I knew that was coming. I should probably know this, but what are the advantages of cgroups v2 versus v1? I mean, what do we gain from it?

The largest benefit is probably actual fair IO scheduling. And, while I don't think Kubernetes uses it yet, what we could have is not just a hard memory limit: you can also have something like an intermediate limit, which tells the application that it's time to release memory if it can. So that allows you to schedule workloads in a more flexible fashion. But other than that, it's mostly a lot of low-level things which, well, just stop using parts of the kernel that were created back in 2004.

Yeah, definitely, and as I was saying, the main ones are the disk access accounting and the memory accounting. I think everything makes much more sense in v2 than in v1, where you have a lot of discrepancies between the accounting and the actual memory of processes.

The biggest thing for me from a v2 perspective is that you can trivially tie resources to a process and track that through its children, because the groups aren't based on this concept of controllers for different types of resources; they're based on where you're instantiating a scope for a process. So control groups are singularly grouped in v2, whereas they're multiply grouped in v1, and that makes it a lot harder to track and make sure that things are actually being allocated correctly and tracked. That's why a lot of things like oomd and PSI tracking depend on v2: you can connect all the resources you're allocating to the process in question that you're trying to instantiate. So in this case, with a container that instantiates a scope and a slice, you can tie all those resources directly to it and make it the distinct owner of those resources.
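A quick, hedged way to see which hierarchy a node is actually on: the filesystem type mounted at /sys/fs/cgroup gives it away.

```bash
# cgroup2fs means the v2 unified hierarchy; tmpfs means the legacy v1 setup.
# Which one systemd picks is controlled by the
# systemd.unified_cgroup_hierarchy kernel argument.
oc debug node/<node-name> -- chroot /host stat -fc %T /sys/fs/cgroup
```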
Cool. Something to look forward to then.

Next one up. So this one was an error, or a perceived error; we're looking at #622. Vadim, did you want the discussion section to be a place for people to put errors, or would you prefer that they actually go in the issue section as opposed to the discussion section?

No, I think starting with a discussion is a good idea. Well, the two starting points are equally fair, because initially we don't know if it's an issue in OKD, or just a mistype here, or someone using an old release, so it might already have been fixed. We can anyway convert any discussion into an issue, so that's very interchangeable. I think it's a great place to share some logs and error lines, but of course with no guarantee that we'll look into it or that somebody is actively supporting it. So discussions are probably a good place for things like that. We probably should create a special topic for this. Yeah, a category. I'm not sure how to name it correctly, but I think it's a great idea to group these kinds of discussions and tickets.

And this one in particular, OKD 4.7 storage operator degraded, is basically resolved. Did you want to say anything about that, Vadim? Anything to add beyond what you have there?

It's a common problem in 4.7, stemming from a situation where folks don't know what they want. They have a vSphere platform, but they're not sure if they'll be using the machine API or storage. They might switch to CSI, which doesn't use the in-tree drivers. So the credentials might or might not be valid, and the installation may pass even with invalid credentials if you don't use those features. In order to track that, we added a Degraded condition to verify that if you have a vSphere cluster and your credentials are invalid, we won't proceed with an upgrade, because it might break a lot of things. That caused a lot of discussion internally, because a whole bunch of people are using vSphere without using storage or the machine API, so they use fake credentials. The bottom line is that it will eventually be converted into an alert saying: you may not be using these features, but you should be aware that your credentials are wrong. The upgrade would then pass, but the admins would still be notified. At this point it's a requirement for the credentials to be correct, but that might not be the case later in 4.7. I think it's a good reference point to link people to, because it's not actually an issue in OKD, but it's a really good example of how discussions should work.

Anyone else have anything they want to chime in on that one? All right.
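For anyone hitting the vSphere situation described above, a hedged sketch of where to look; these are standard oc commands, and the secret name is the usual location for the vSphere credentials, but verify it for your install:

```bash
# Why is the storage ClusterOperator degraded?
oc get clusteroperator storage \
  -o jsonpath='{.status.conditions[?(@.type=="Degraded")].message}{"\n"}'

# The vSphere credentials the new check validates typically live here:
oc -n kube-system get secret vsphere-creds
```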
I threw another one in here; that's from the FCOS working group: FCOS moving iptables to the nftables backend. Vadim, again, anything you want to add to that?

That one's pretty straightforward. I don't think it should affect us significantly, mostly because OKD defaults to OVN, meaning kube-proxy, the most fragile component here, would not be using nftables; in fact, it isn't used at all. And RHEL CoreOS, if I remember correctly, has been using nftables from day one, so all these cases have been tested; it's just FCOS that's been lagging behind. In any case, since 4.8 we effectively control the whole OS configuration. All Fedora CoreOS has to guarantee is the basic case of pulling one Podman container, so we can control the whole configuration and roll back the nftables change if we have a good reason to do that. But I'm thinking even if we hit issues, these would be reported to the SDN team, and they should be fixing them, because eventually nftables will be the default in RHEL, if it's not yet. So I think we're covered in case issues do appear.

Timothy, did you want to add anything to that? Yeah, I'm just checking right now, but the nftables backend has been the default in RHEL since 8.0, I believe. I just realized that we were actually lagging behind in Fedora CoreOS, in a sense, but it's not by choice, it's more by accident. But yeah, nothing should break on the OKD side, hopefully. Joseph says, oh, sorry, go ahead. Yeah, sorry, it's still the same interfaces; it's just changing the backend for iptables, so essentially it's the same command line, the same tools, just using a different backend underneath.

Anyone else want to add anything to that? I like this new format, I have to tell you, where we discuss the issues. I love that. I really love that. The meeting format, you mean? Yes, yes, yes. We'll bring that feedback to Diane. This is generally how I do meetings of this sort, but we'll bring it back to Diane and see what she says.

Okay, so the next one that I added is Clarify OKD's Community Support Model. I threw this one in because Red Hat employees, one in particular, keep getting personal emails and people tagging him in the channel and all sorts of things, trying to get support from the person they know is the Red Hat employee working on OKD. So my thought is that we scour through the website, the repos, and whatever, and actually have a boilerplate couple-sentence paragraph that says what community support means. We have that in the big banner on the website, but we don't actually explain what community supported and community driven mean.

There are two reasons this is important. Number one, it's unfair to the Red Hat employees who are providing their time to help out, and it probably wears on them; I'd imagine it stresses them out. The other thing is that if one person gets tagged and one person is the target for it, then the community can't help, even though we've got institutional knowledge and memory to help on these issues. And also we don't learn: if we're not privy to these discussions, or we're boxed out of them, then the rest of us can't learn about these particular things. So there are multiple advantages to this.

We talked about this at the Docs Group meeting last Tuesday, and we'll be talking about it at the next meeting. If anyone can join us at the Docs meeting to chip in on coming up with a couple sentences that we can put somewhere, that would be, yeah, Bruce says, reference the goose that laid the golden egg fable. Then I think if we can come up with something in the next week or so, we can just go through all of the OKD references and plaster this up, so that we can relieve the load on the Red Hat employees who have been bearing so much, and get ourselves more up to speed on things. Sorry.
Would it be worth putting a boilerplate message into the Slack channel occasionally, indicating that this is community supported, along with the working group email, saying that this is not Vadim-supported? I mean, not specifically saying Vadim, but the idea is that community members are helping; it's not just one person. Just once a week put something out there, or whatever, till people get the hint. I mean, I'm trying to help people, I know Vadim is, but other people are going to have to start jumping in also. And we can do channel announcements and things like that for sure. I think that's a great idea. Anyone else have thoughts?

It sort of just occurred to me earlier today that it actually might be useful to somehow put together some information on "this is what you can do." Because I think what happens to a lot of people, and as I was telling Vadim, I would include myself on that list on occasion, especially when I'm tired and irritable or lazy, which is often, is that the easiest thing to do is to seek help. But that's not really a long-term strategy. And it's not that easy to find things in the documentation. We do have a FAQ and we do have some information there, so that's certainly good as far as it goes. But it might be useful to try to pull together some checklists: try this, or did you look at that, or the things that are important. I don't know, it's a big topic, so that's just a vague idea of something that might help as well. And we can always link to that website. What's the website that's like "how to ask questions"? There's an actual URL; someone set up a site that you go to that explains how to ask questions for support and troubleshooting and whatnot.

John, did I hear you start to say something? Yeah, I have a delay here, so I think you're done and then I start talking, or whatever. One thing Vadim said earlier is doing something on how to debug: when you get that huge log bundle, how do you analyze it? I think something like that would be great to send people to. Go look at this YouTube video or something that gives you the basics: how do you get the log bundle? How do you do a basic analysis of it? What are the things in it that are important? That might help, and it might help people who want to help too, because some of that stuff in there is very esoteric and sometimes you just have to stumble across it. So I don't know, I'm not trying to put more work on Vadim, but that might be something worth doing.

Yeah, that sounds like a very useful thing. I'm just not sure about the format. Should it be a YouTube thing, a blog post, text, maybe? Hello there, Bob. Yeah, blogs are easier to reference. With a video, you sort of have to know, okay, this was at one hour and 23 minutes; how do I find it? Can you do a short video? I can see a YouTube video having some advantages, for someone wanting to see, oh, actually, how do you search through it? What particular things would you be looking for? Yeah, but we shouldn't be limited to one. I think we should start with a YouTube video because it's easier. And I'm not sure which parts of the process we should focus on. For instance, there's a bunch of code which generates certificates; I have no idea how it works. I can barely handle the OpenSSL CLI. So I probably would rather discuss the versions and the bootstrap process, but this also needs to be covered in some kind of blog post and so on. An initial response to the video would help us shape the basics of the Markdown document we would put up, and then we could extend it later on.
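As a starting point for the kind of log-bundle triage being discussed, a minimal sketch: the gather command is standard openshift-install, while the paths and grep patterns reflect the usual bundle layout and are illustrative only.

```bash
# Collect the bootstrap log bundle from a failed install
# (UPI installs also need the bootstrap/master addresses and an SSH key):
openshift-install gather bootstrap --dir=<install-dir>
tar xzf log-bundle-*.tar.gz

# Common first stops:
less log-bundle-*/bootstrap/journals/bootkube.log             # did the control plane render and start?
grep -riE 'x509|certificate' log-bundle-*/bootstrap/ | head   # cert and clock-skew problems
grep -riE 'error|failed' log-bundle-*/bootstrap/containers/*.log | head
```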
Another concern is that things get outdated. Yes. Well, not that badly: look at Craig's blog post. It's still in the top 10 of OpenShift blog posts, and it still refers to, oh, OKD 4.5. Some pieces there are very old and not used anymore, but in general it still works. I suppose we would have to update this document very frequently in the beginning, and then we could update it with every major release, but it sounds like it's a required thing.

Yeah, I'm not sure I would have time to create it from scratch to the very end, but I guess I could create some drafts in a week or two, before our next meeting, if that would be helpful. If you want to put something together, a rough template, I mean, I could try to put a video together if you don't have time. I think John's making a good point: really, if we could just get some pointers from you, a template of items that you think would be important to hit, the group can handle it. John himself can do the video, anyone else can chip in, and the same goes for documentation. We don't want to add more work for you, but you're the one who handles the majority of the tickets and also knows the innards best of anyone in the group. So if you just gave us a template of the things to hit, we could run with it in all of the venues, be it blog posts or video or anything like that. Good idea. Sounds like a plan, yeah. Excellent.

Anyone else have thoughts? Anyone want to go ahead? Well, it sounds fine to me, but I was also just going to say that I've got to drop now, because I'm going on my merry way to get my second vaccine shot. So I'll see y'all later. Awesome. Stay well, take care. Yep, you too. Bye, y'all.

I think it would be a good idea to have a recording. Yeah, maybe like a role play: if you get a log bundle, what do you do first? Just to see some typical steps that may be reproducible by others. I'm sure, because you are always so fast, that you have some typical spots where you look first. Well, folks should feel free to chip in. So Vadim can get us something within, let's say, the next month, right? We don't want to put too much on his plate, but if Vadim gets us a template, then folks from the group will divvy up the tasks of coming up with something in the various formats, blog, whatever, and I'll mention this at the docs group, because some people go to the docs meeting who aren't coming to this one, and vice versa.

All right, anything else on this topic? Is there anything else that we can do, that folks can think of, to clarify what the support model is? Anything else we can do other than providing this documentation and putting some boilerplate language in the various places where we have a presence? Anything else?

I think part of the problem may be that Vadim, sure, has a connection to a huge internal network of people who know each corner of the system, and most people from outside don't have that.
And I think that's the reason why lots of people ask him: because he's the gateway to this internal network, and we don't have it. Maybe someone has a good idea for how to get a substitute, to not need the internal network but still know which repos people are working on, how we can contact them through issues, and so on.

That shouldn't really be needed. The internal knowledge is of course a huge resource, but it's not being used in every single report. Most issues are very trivial, and I think three or four of our architects are in #openshift-dev, so you can ask there. I can name-drop later, but I don't think they would like it. The real way to handle it is probably starting with some basics. We have a bunch of issues logged for OKD, and showing activity there would be very helpful, just some basics: here is how I understand bootstrap, here is what I see in the log bundle, I'm stuck here, I don't know what's happening. I could jump in and help and extend that, of course. But a lot of it is just that folks ping me directly, because I respond in every single issue, and people assume that I'm the only one here. Well, that's the belief we should change.

Any other thoughts on this? I remember when John, Vadim and I were chasing an OVN-Kubernetes problem, and finally, I think, Vadim got us the first steps, and John and I were searching through the community. And finally, I think the folks at, John, do you remember, the NetworkManager chat, I don't know what it's called now; they helped us, and finally there was a solution. So you don't always need a single person. I think it was a great example of how it should be structured. My networking knowledge is very limited, and I would rather not extend it, actually. So this is why I would rather pass it to some professionals who can chase folks on IRC and help with details I don't fully understand. We should learn lessons from this and structure our workflow in a similar fashion.

Maybe, for some bugs, like we did when we created that sort of private Slack group with the three of us, there may be issues where, if we need to get three or four people in, it's easier to have a discussion in that private channel versus the open discussion, and then publish the findings afterwards. Because sometimes you get chime-in after chime-in, and it gets distracting. It's all good, but distracting. Yeah, that was a good idea, which we implemented here. Ad hoc chats for the interested people were certainly a good idea, yeah.

Well, let's move on now. I think we've got a great foundation for this particular topic to move forward with. And I'll go back and fill in the discussion item with what we discussed here at the meeting, so that we've got a clear record in that actual thread. And that's the other thing: if you put something in the discussions and we talk about it at the meeting, it's helpful if you can update the discussion item you created, folks; it keeps things organized.

And then the last thing in the discussions is "insights operator is degraded." Vadim, is there anything you wanted to add on that one? No, I think we need logs, because there are multiple things which could be causing it. It could be a proxy. It could be actual, well, unlikely, but actual, downtime of the Insights service for some short time. If we had logs, we would have something to discuss there. Yeah.
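The kind of detail that would move that insights discussion forward; these are standard commands against the usual operator namespace, and the proxy check reflects the proxy hypothesis above:

```bash
# Current condition and reason reported by the operator:
oc get clusteroperator insights

# Recent operator logs, which is what's being asked for here:
oc -n openshift-insights logs deployment/insights-operator --tail=100

# Proxy misconfiguration is a common culprit, so the cluster proxy
# configuration is worth a look too:
oc get proxy/cluster -o yaml
```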
I'll point out that Vadim says: that's great, but I'm not a dedicated OKD support person; the community should have access to those logs too. And I do think that's an important point: again, we can't lift the community up if we don't have access to these logs as well. We talked about the issue of identifying information. Did anyone come up with, or know of, just a simple bash script that cleans things up, something we could just post, and people could download it and scrub anything? Because hostnames, people feel those are identifying, and IP numbers and stuff like that. Maybe we just come up with a simple bash script that cleans that out. We could have something like an obfuscator or something like that.

Yes. The problem is that the replacements have to be consistent. You can't replace every hostname with localhost and expect us to make sense of it. That's, well, a complicated topic, of course. If we scrub too hard, it might keep us from being able to say things like: maybe your password is wrong. Yeah, we don't have a great solution for this, especially not a privacy-friendly solution for uploading these.

Sorry, hold on, who is that? I didn't see who was talking. Hi, it's Eric here. What is the SOS tool doing? Because it also scrubs now, I think, since maybe half a year ago. Yeah, but aren't SOS logs going to Red Hat and such? I mean, there you have a certain concept of security for the things being sent to us. Yes. We have the whole customer portal, and GDPR things, and CCPA, which OKD, as a community, certainly doesn't want to deal with, and legal. Yes. And it's a shell tool, right? You call it and it collects stuff for you. Yes. The problem is sharing the results. The log bundle is built in a similar way, but some information might be considered sensitive. Let's put it like this: I don't think hostnames are sensitive, but some folks think they are, and we'd need some way to have them replaced with fake names that still make sense in the end. That should be possible. My assumption was that the SOS tool would do it. No; rather, it was not built for that. Another problem is that we also copy a lot of certificates, which effectively have the hostnames embedded in them; you just have to extract them.

Probably our solution would be to avoid scrubbing and instead upload to temporary places which delete everything after a couple of hours or a day. That would give us the full information and severely limit the time window for attack. Again, this is security through obscurity, but that's probably the best we've got. If anyone has other ideas, those would be very welcome, because the log bundles are expected not to contain all the sensitive information, and the same applies to must-gather; but in order to identify all the issues, they effectively have to also read data from user namespaces, so that we can understand that maybe it's a PDB blocking the upgrade, or something like that.
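To make the "simple bash script" idea concrete, a deliberately naive sketch that replaces each IP address with a consistent fake, which is exactly the consistency requirement Vadim raises. Hostnames, and anything baked into certificates, would need the same treatment, and this handles none of that.

```bash
#!/bin/bash
# Usage: ./scrub.sh <logfile>   (writes <logfile>.scrubbed)
# Every distinct IPv4 address maps to the same fake address each time it
# appears, so the scrubbed log still makes sense to a reader.
declare -A map
n=0
while IFS= read -r line; do
  for ip in $(grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' <<<"$line" | sort -u); do
    if [[ -z ${map[$ip]} ]]; then
      n=$((n + 1))
      map[$ip]="10.0.0.$n"    # consistent, readable placeholder
    fi
    line=${line//"$ip"/${map[$ip]}}
  done
  printf '%s\n' "$line"
done < "$1" > "$1.scrubbed"
```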
I don't want to spend too much time on this, because we've only got 14 minutes left, and we actually had a larger discussion about this a couple of months ago that took up a significant amount of time. Let's regroup on this at the next meeting. But maybe we could come up with a list of the items in the logs that people do feel are concerning. If we generated that list and then said, okay, how could we tackle this, can we tackle this, then we'd actually know what we're looking at.

Some folks in the group may not be familiar with must-gather. Let's table this, but in the meantime, folks who are familiar with must-gather, think about some of the things that would be problematic, and maybe I'll share something out to the group, a Google doc, or maybe a discussion. Actually, we could do it in the discussions: just generate a list of things that we think could be viewed as problematic, and again, we're having to put ourselves in other people's place and think about what they would consider problematic, and then from there, at the next meeting or a future meeting, discuss ways in which we could allay those concerns a little bit. Does that sound good? Yes? Okay.

All right, and that's it for the discussions. Suzy, did you want to bring up anything from the Fedora CoreOS world that you think folks should know about?

So, right now, on the Fedora CoreOS side, we did the switch last week in testing to Fedora 34, but that has no direct impact on OKD, because OKD doesn't switch when Fedora CoreOS switches; it switches when machine-os-content gets pushed. But yes, essentially, it's coming. Apart from that, I think we've already discussed cgroups v2, which is one thing, and the move to nftables, which is the second one. The countme changes are coming later, in August. Just a reminder for folks: starting from the releases in August, so maybe not the OKD releases in August, but a little bit later, the default on Fedora CoreOS will be to send countme requests to the Fedora infrastructure. Essentially, it's a very privacy-friendly way of counting the number of Fedora CoreOS nodes running on the planet, so we have some kind of statistics on how many people are running Fedora CoreOS. So this one is coming around August, and there are already instructions; there's a magazine article coming up, probably this week or next week, to explain how to disable it. Maybe we could have something specific for OKD to disable it, but it's essentially a two-line machine config, so it should be fairly easy; there's a sketch of that below. And yeah, I don't think we have anything more really, really important happening right now besides all of that, so I guess we should be good.
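A hedged sketch of what that "two-line machine config" could look like for OKD: the timer name follows the FCOS countme implementation and should be checked against the Fedora Magazine article once it's out, and the MachineConfig name and role here are illustrative.

```bash
cat <<'EOF' | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-disable-countme
  labels:
    machineconfiguration.openshift.io/role: worker   # repeat for the master role
spec:
  config:
    ignition:
      version: 3.1.0
    systemd:
      units:
      - name: rpm-ostree-countme.timer   # the timer that sends the countme ping
        enabled: false
        mask: true
EOF
```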
Did you want to talk a little bit about the signing discussion? That took a lot of time at the last meeting, and I think it does relate to OKD nodes, in the way that people might keep them around for a while if their cluster is running a long time, etc.

True. I was thinking about that one, but I'm not sure it impacts OKD, because updates on OKD don't happen the same way they happen on classic Fedora CoreOS, I would say. So maybe I'm wrong here, but I think OKD should not be impacted by that. The issue in general is that if you start from a very old Fedora CoreOS node and you try to update to a fresher version, you can have issues, because you won't have the signing keys to verify that the latest version is actually a valid one: you're on a very old node and potentially you don't have the latest keys to verify it, because on Fedora CoreOS, Fedora in general, we only ship the keys for the next two releases in an image.

So, if you're starting on Fedora 30, for example, and you want to upgrade to Fedora 35, you won't have the key for Fedora 35. But that should not be an issue as far as I know on OKD, because on OKD the updates happen via the MCO and machine-os-content right on the cluster; it's not pulling any content from a signed OSTree repo. So it's not updating the same way that we update classic Fedora CoreOS nodes. So, yeah, maybe, I'm not 100% sure; maybe, Vadim, you could confirm that.

Right. There is a very unlikely case, Bruce has mentioned it: if you start with a Fedora 30-based CoreOS as the bootstrap node and then you want to install a release based on a much newer Fedora, in that case you would of course fail. But that's pretty easy to prevent by using a fresher Fedora CoreOS. As for upgrades, right, we don't use the native CoreOS mechanism. We upgrade from one major version to the other, so it might not even span Fedora releases at all; in the worst-case scenario it's Fedora 33 to 34, and we can also pull the edge from the update graph to prevent that upgrade from happening if we find issues with it. So, yeah, I don't think that would affect us. I just wanted to bring it up in case it did, so at least we're aware of it, which I think is a significant change, because there are people going between both groups, playing with FCOS and with OKD at the same time.

In the last few minutes (we have seven minutes left) let's talk about the KubeCon office hour. What did we learn from that, in terms of our audience, our ability to communicate our ideas, our ability to answer questions? How do folks think that went? For those that weren't there, we can put up the link to the video, which was posted.

For me, I had big problems following the correct chats, because there were lots of them: Q&A chats and the main chat and a background chat. I think it's a little bit too much. I don't know, what's your impression? I think that's why they have a moderating person, to sift through the questions for us, right? Yeah, there were a lot of new people following the Twitch chat, apparently. Go ahead, Vadim. Yeah, I was following the Twitch chat, and it apparently had copies from everywhere, at least from YouTube; but following the moderator was apparently a good idea, because they know how to switch topics and slowly move from one topic to the other, so you don't have a whole mess of different chats.

My impression was that it was great, except when folks started with what's coming in 4.8 and 4.9, which are very technical questions. We shouldn't have to answer those, but we should have a prepared answer, like: here is where we post the release notes, here are the links to our workgroup meetings, and so on. I think I would add some more discussion of the fact that it's not just a free version of OCP, which is not accurate, but rather a community version where you can affect every single detail of your OKD cluster. Folks were asking why we're not starting with RHEL CoreOS. That's exactly the reason: because the community cannot contribute to it directly. They would have to go all the way from upstream to Fedora to RHEL. And that shows the value: if you find a bug in it, you don't have to wait for RHEL support to pass it to engineers, have it fixed, and pass it back. You can start working with it immediately, just like we have a lot of times.
The next suggestion is probably mostly for internal developers, but we should mention that we did a lot of things which landed in OCP. The whole Ignition v3 transition started in OKD and only later, based on a bunch of fixes all of us submitted, landed in OCP. The whole OVN-as-default effort used OKD as a great testing platform for folks to decide whether it should be the default in the end or not. That means you might hit some more bugs, yes, but it also means you get a feel for how the distribution will look in a couple of years. The whole cgroups v2 effort is a perfect example: we might want to enable it in OKD if it works great and brings benefit to the community, because OCP is more conservative, and we have the hands-on time to experiment. It's a delicate topic, because folks might think that we test new features on them, and that's not entirely true. It's a complicated message, but we should actually make the point that OKD is a place where you can get the latest and greatest features, and since the whole community is looking at them, your chances of having things fixed sooner actually multiply. But other than that, I think it was great. It felt refreshing to see all the new faces asking questions, both the beginner ones and the more complex ones. It feels like the community is actually growing, and OKD is not even one year old yet. OKD 4, I mean. So that felt very rewarding.

One thing is that we'll have to figure out, and this is assuming it moves forward (Diane was saying that if it went well, we had the chance of doing something like a bi-weekly; I think that would be fantastic, though I haven't heard back if that's the case), what's the right level of support to provide in these office hours. Are we going to start looking at people's logs? We got a question along those lines, like: here's a big log error. Are we going to start looking at that? We'll have to figure out what the low and high thresholds are for saying, okay, this needs to happen offline from the show, because otherwise it would take up a significant amount of time on one particular issue and we wouldn't get to anyone else. So that would be my only thought. Anyone else?

I guess, well, I don't know, bi-weekly might be a little too much for the office hour, but it's certainly not up to me to decide. But yeah, we could maybe do monthly or something like that. And, well, I'm not sure office hours are great for debugging sessions either, because they'd pin down a lot of people for just one specific issue. Yeah. I'm not really in favor of that, but it's still open for questions. Anyone else? Yeah, I think live debugging is a little chancy.

All right, we have one minute left and I do want to be mindful of people's time. Is there anything else that folks want to bring to our attention before we step away from this meeting? Well, thank you so much, folks, for your time. Thank you, Jamie, for leading the meeting. Good meeting, Jamie. You're welcome. And if folks like this format, then we can mention it to Diane. I'm happy to facilitate in the future; I don't know what's behind that, but she might be interested in letting me co-chair or whatever, we'll see. I don't know if it has to be a Red Hat person or not, but this was great. And we'll talk in two weeks, and also online; use the discussions. And don't forget the docs meeting is next Tuesday if you're interested in docs stuff. All right, take care.