The OKD working group meeting for June 7th, 2022. The agenda is in the chat and also on the calendar invite, and we'll post it again to make sure any folks who have just joined have it. Take a moment to look over the agenda and see if there's anything we missed. We do have an action-packed meeting. Give me one second here; Dusty is going to pop in as well, he just needs me to let him in, so Dusty will be here in a second. We have guests today, so we're going to keep things at about 15 minutes per guest. That'll fit our guests, and then 15 minutes for the various updates we usually have here. Don't forget to put your name in the agenda doc as an attendee, just so we know you were here; that lets us keep track of whether there's important information someone missed so we can get it to them.

All right, let's start out with OKD release updates with Christian. Take it away.

Yeah, sure. I think this is a short one, since it's not really news anymore: Vadim cut another release the weekend before last. I haven't seen any bugs reported for this version in particular; if you do have any, please file an issue on our tracker. Yeah, I think that's it already. So, any questions or feedback in terms of OKD releases or engineering?

Short feedback: I did install it, and it does seem to fix the Rook issue. At least all of my PVs were there with no touching or hand-holding, and things seem to work great. So thank you for that.

Yeah, I think we unpinned the kernel because a fix was merged and backported into Fedora, so that issue is hopefully gone.

Awesome. Christian, have you heard anything about this issue of incorrect URLs to Docker images in the samples? We've been getting reports of it, but no one has really delved into it yet.

I haven't really seen it on a live cluster. I've seen something similar in CI recently, where we were hitting the Docker Hub pull limit because some of those images might still be coming from Docker Hub. Someone posted another issue today, and it seems like all of the images that couldn't be pulled were CentOS 7 base images, so they might be deprecated; I'm not sure about the state of them. There is a workaround in the docs (I linked the docs as well) where you can manually remove deprecated images; apparently that doesn't happen on its own. Though apparently someone tested removing them and they were recreated somehow. I don't really know; someone will have to dig into it. Who should we file a bug against on this? There should be a samples operator component. Okay, and if there isn't, please ping me. Okay, excellent. We'll put one in on that then.
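For reference, the documented workaround works roughly like this: deleting the imagestreams alone isn't enough, because the samples operator recreates anything it still manages (which would explain the "removed and recreated" behavior mentioned above). You first add them to skippedImagestreams in the operator config, then delete them. A minimal sketch driving oc from Python; the imagestream name is a placeholder, so substitute the deprecated samples actually failing on your cluster:

```python
import json
import subprocess

# Placeholder name; substitute the deprecated imagestreams that are
# failing to import on your cluster.
SKIPPED = ["example-deprecated-stream"]

# Tell the Cluster Samples Operator to stop managing these imagestreams,
# so it does not recreate them after we delete them.
patch = json.dumps({"spec": {"skippedImagestreams": SKIPPED}})
subprocess.run(
    ["oc", "patch", "configs.samples.operator.openshift.io/cluster",
     "--type=merge", "-p", patch],
    check=True,
)

# Now remove the unmanaged imagestreams from the openshift namespace.
for name in SKIPPED:
    subprocess.run(
        ["oc", "delete", "imagestream", name, "-n", "openshift"],
        check=True,
    )
```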
All right, anything else, folks, in terms of OKD engineering or releases? All right, let's move on then to Fedora CoreOS updates with Dusty.

Hey y'all, can you hear me? Cool. Yeah, I don't have anything too groundbreaking. Just wanted to note again that all streams of Fedora CoreOS are on Fedora 36 now, as of a few weeks ago. We actually have the second update for our stable stream based on Fedora 36 going out later today, so basically this is our second round of Fedora 36 updates on the stable stream. Hopefully people haven't hit too many issues with that, but let us know if you see any. We also updated our...

Sorry, Dusty, just a quick note: OKD is still on Fedora 35, even though we're rebuilding FCOS with the Fedora 36 manifests using Fedora 35 packages at the moment. So it's a bit messy right now.

Yeah, that's perfectly fine. This is kind of like OKD testing against Fedora 36 early to shake out changes.

Also, our Nutanix artifacts in Fedora CoreOS are now using qcow2 internal compression instead of being externally compressed. So instead of a .qcow2.gz or .xz (I forget which one it was), it's now just a .qcow2. This change was made at the request of Nutanix so that people could import directly into their platform from the hosted URL (in AWS, I guess) rather than having to download the image, uncompress it, and then push it. So those are the two biggest updates I can think of for now.
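For context: an externally compressed artifact (say image.qcow2.gz) has to be downloaded and decompressed before use, whereas qcow2 can compress data inside the format itself, so the file stays a directly importable .qcow2. A rough sketch of producing such an image with qemu-img; the file names are placeholders, and this is not necessarily the exact pipeline FCOS uses:

```python
import subprocess

# Convert a raw (or uncompressed qcow2) disk image into an internally
# compressed qcow2. Unlike image.qcow2.gz, the result is still a valid
# qcow2 that platforms can import straight from a URL.
subprocess.run(
    ["qemu-img", "convert",
     "-c",            # compress data clusters inside the qcow2 container
     "-O", "qcow2",   # output format
     "disk.img",      # input image (placeholder path)
     "disk.qcow2"],   # output image (placeholder path)
    check=True,
)
```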
Yeah, any feedback or questions for Dusty in terms of Fedora CoreOS? Great. Well then, that's a perfect transition now to Steve. Steve, go ahead.

Absolutely, thank you. So as folks know, OKD runs on FCOS, and it's a pretty awesome, stable base for OKD. But we've been playing with this idea of SCOS, which is really looking at CentOS Stream and doing something in between, say, RHEL CoreOS and Fedora CoreOS. And I know there's some rebuilding that happens today with OKD, as Christian pointed out. This is something we're still just looking at, our own engineers playing with an idea, but I wanted to broach it here and ask: does that sound like something the community would be interested in, either from the point of view of testing or of using it?

Yeah, it's still robotic and I can't really understand you. Maybe try turning off your video. It always happens to the person speaking; suddenly the internet just goes away, like it knows. He's got only one little red bar of bandwidth today. "If I could try to... I'm Michelle, I'm Steve's colleague. Any better?" No. Your bars look better now too, Steve. Yeah, I'll just reconnect.

I had no idea; not sure what came through, but I'll take a step back here. (You should be able to do video now, by the way. Let's try it.) Okay. So effectively, an idea we've been toying with is SCOS, or, sorry, CentOS Stream CoreOS, and calling that SCOS. Part of this is our own work of wanting to make sure that what we do is represented in the normal stream from Fedora into CentOS Stream into RHEL. But that also raised the question: would this be something useful for other communities, such as the OKD community, which is currently built on FCOS? FCOS is a great place to be built on, but CentOS Stream is definitely where a lot of the work happens for what eventually shows up in the industry. If folks in the community think that's something that would make sense for us to look at together, what would that look like, and how would folks like to engage in it? Or are there other paths that would be interesting as well?

What we've been looking at is doing an FCOS-style CentOS Stream CoreOS: consistent building, consistent releases, that kind of stuff, a lot of it automated, very similar to FCOS, and then provided out as that base for our own testing as a community or group. I'll stop there, because a lot of the deeper details we just don't know yet; we haven't dived that deep into it. But we did want to bring this to the community earlier rather than later, to gauge interest and see what folks think.

So what is the status of the project? Is it separate from OKD and from integration with OKD, something that's happening anyway, or is it reliant on integration and building along with OKD? I see Christian raising his hand; do you want to jump in there, Christian?

Yeah. Please do correct me if I say something that's not correct. I do think we will do it anyway, because it gives us improved testing, a better testing story, essentially testing on all the platforms. As some of you may know, OKD on FCOS isn't really tested end to end on all the platforms; that's because we haven't been able to get it working and get the resources. So we will use this internally for testing, but the idea is that the community can benefit from it as well, by us making those builds available as we've done before, but this time SCOS-based. I also want to mention that we currently don't have a plan to replace OKD on FCOS with this new version. It's going to be an additional release, so you'll have two variants to pick from going forward, if we go through with this and there's interest here: you could say, okay, I'll try OKD on SCOS instead of the FCOS edition. Further down the road we may flip the default over to SCOS if the community wants that; if the community would like to stay on FCOS as the default, we're not going to flip that switch. If they do want it, we might. It's really just an offer from our side. If there's interest, we will go through with it, but internally we will be doing these builds for testing anyway, because this is essentially our early CI testing as well. On the master branch it would be the next-release testing, but we can then also replicate those builds from the master branch on the release branches and build a stable release on the SCOS base, which is what the OKD community would probably consume, not the master builds, obviously. We don't want to release OKD as an experimental version; it's still going to be the stable code bases for all the payload components, as well as the core operating system from this new CentOS Stream CoreOS.

I'm just going to pipe in here that the two folks who are the most vocal about things of this nature, whom I was hoping to have on the call, are not here today. I pinged Neal Gompa and John Fortin to see where they are, but I know they're both very busy folks. Now that we've talked about it a little, we'll be able to survey some of the group, and most people watch the recording after we post it. So, Bruce, maybe I can put you on the spot: at BCIT you're currently using OKD with FCOS? We are.
And yeah, one of the difficulties with the Stream version is that I somewhat reluctantly went from CentOS x.y.z to CentOS Stream with the understanding, based on the discussion at the time, that there were not going to be any further versions; it was just going to be one stream. But the last I looked, it was versioned again, for whatever reason. The difficulty is that with previous CentOS releases there was an upgrade path from one version to the next, and with Stream there is no upgrade from one version to the next. So it seems that if you actually tried to use it, you would get locked into a version that quickly became obsolete, and you couldn't upgrade without reinstalling from scratch. Now, again, this is all hypothetical, but that would seem to be an issue.

I think in this case... sorry, go ahead.

I think in this case the upgradeability in terms of CoreOS is a little bit different. I don't personally know the state of CentOS Stream upgrading between versions, but the way CoreOS does upgrades, with rpm-ostree doing pivots through containers, is a different story. If you look at Fedora CoreOS, we've gone through multiple Fedora versions there and people keep upgrading forward in the FCOS community; with RHEL CoreOS and OpenShift it's a similar thing, where people keep upgrading even across RHEL y-versions. It's also different because it's cluster-focused, rather than "I installed the operating system, I maintain the operating system, and then I have to do an upgrade there." But I hear your concern for sure.

Yeah, it would still be delivered as an OSTree, so theoretically, just like Fedora CoreOS today, there's a history, and you can roll back to a previous version in that history. You're not at the mercy of yum repositories getting updated and going away or something like that.

And we actually verify that upgradeability through our end-to-end testing when we do a release, and with SCOS we're planning to do even more testing than we've done with FCOS in the past, so we can have even more confidence that the upgrade will succeed. As Dusty mentioned, the mechanics are the same as now: you essentially rebase to a new version, or pivot from one OSTree to the next, so you can always roll back, and obviously we would have tested that new OSTree commit beforehand. As for what kind of upgrade graph we would set up for this, whether it would be a real Cincinnati graph or, as we have now, a release controller that runs the end-to-end tests and creates the upgrade edge if it's upgradeable (which gives you only a stem, really; we don't have a tree in that release controller graph for FCOS currently), we haven't decided yet how we're going to do this for SCOS. If there's a lot of interest, we might even set up a Cincinnati graph, although we still have to figure out what resources we have available for doing this, and obviously what the interest from the community side is; we don't want to do this if there's no interest there.
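To make the rollback point concrete: each update is an OSTree deployment, and a node keeps the previous deployment around, so reverting is a bootloader flip rather than a reinstall. A minimal sketch of the client-side commands on a single node; on a real cluster the machine-config operator drives this, so treat it as illustration, not an operational procedure:

```python
import subprocess

# Show the booted deployment plus the previous one kept for rollback.
subprocess.run(["rpm-ostree", "status"], check=True)

# Flip the boot order back to the previous deployment; the change
# takes effect on the next reboot.
subprocess.run(["rpm-ostree", "rollback"], check=True)

# Moving to a different version is a rebase (pivot) to another ref.
# The ref below is the stock FCOS stable ref, used purely to show
# the syntax.
subprocess.run(
    ["rpm-ostree", "rebase", "fedora:fedora/x86_64/coreos/stable"],
    check=True,
)
```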
Christian, could you talk a little bit more, for those of us who are interested (and also for people who will be watching the video later), about the testing and CI aspect? How does a CentOS-based OS improve your testing abilities? Could you give some specifics on that?

It fits more tightly into what we currently have already: we can essentially reuse the tests we already have in place. We don't have the difficulty of rebuilding the FCOS base, pushing it somewhere, and then consuming that in our CI system; we just build the image in CI and consume it immediately. That means we could have end-to-end tests for OKD running against the OS definitions in the os repository, which is the equivalent of okd-machine-os. Right now we don't really test FCOS changes continuously: we upgrade the submodules from time to time, the Fedora CoreOS submodule as well as the openshift/os submodule, so we only have discrete testing whenever we bump those, not per-commit test runs. That creates skew between the versions we test, and it sometimes makes it very hard to trace back which change caused which issue. We think that with the new model that's going to be better.

Following up on Jaime's question, though: why is that better with the new model? You mentioned having to build the Fedora CoreOS payload before you can test it. What about CentOS Stream CoreOS? Is that payload already built for you somewhere, or not?

We'll still have to create the payload ourselves, which is going to be exactly the same process as with FCOS. It's just that we get the core operating system straight from the CoreOS team's build pipeline, essentially, once it's set up, and we don't need this rebuild of FCOS on the outside. This also becomes possible with CoreOS layering: if we had CoreOS layering, we could move the Fedora build pipeline back into our Prow system, and then we wouldn't need the external CI builds anymore. In general, for us it's about testing, because it's closer to what the next RHCOS release looks like. It's also, what was I going to say, it may be perceived as more stable than FCOS, depending on the user, I guess. And it also offers a feedback loop: the community can contribute directly to SCOS, and the changes will land in OCP, in the product (and obviously in OKD as well) much quicker than in the current model, where the community, first of all, doesn't really have a point of contact; it's hard to contribute directly to a component of OpenShift. Contributing to anything in Fedora isn't direct either: you have the Fedora compose in between, and you possibly need to wait for the next Fedora CoreOS release until the change lands in your OKD payload. With SCOS that feedback loop is shorter, and we actually have a proper point of contact for the community to contribute to, because the CentOS Stream community is not just OpenShift; it's all the big partners of Red Hat, automotive and so on.
They all contribute to CentOS Stream directly, and with that, the OKD community would be in a place to contribute to CentOS Stream directly as well and have a shorter feedback loop, really.

Okay, I want to be mindful of time because we do have other guests. Let's spend three more minutes on any further questions, and then we can schedule more time at the next meeting.

Yeah, I do want to... oh, sorry, go ahead. I did want to bring it back to the question of whether this sounds interesting to folks in this community and is something we'd like to look at together. That's what I'm hoping to get from this talk, and then Timothy, when he rejoins on the next call, can either start working with the community more on details or work in other directions around SCOS.

Yeah, my sense is that the community, the people on this call, will probably want to have some async conversation rather than deciding in the next two minutes what we're thinking. Am I reading the room right here, folks? Yeah, I'm seeing lots of nods. I think we need to give people time to socialize this. And Brian, you had something?

I just wanted to ask: first of all, from what Christian said, I think it sounds like something we want to do, if it means we get a better-tested distribution. I know John is always very keen, especially those that run production, that a better-tested solution is a better end result for the community. But I just wanted to check: what is the implication for us, the community, in terms of documentation updates? We spend quite a lot of time getting docs.okd.io updated. We want to make sure that the community can build this distribution. We're doing a lot of work with Fedora trying to make sure that we enable the community by creating the technical documentation. I just want us to have that side of the conversation as well, so that as we launch this, we have the documentation in place, we have the transposed images, and for the internal registry we know what the equivalent images look like for the SCOS releases, and things like that. I just don't want us to lose that side of the discussion.

Thank you for that; good reminder. There are other folks on the call, including Jack, who's going to give the next talk, but I know this is new to everybody. So maybe in the next call, when Timothy comes on, he can answer more questions, and we can socialize this on the mailing list as well, get the word out there, and reach out to folks like John Fortin and Neal, the folks who abandoned us today. If you're watching this, this is why you have to come to meetings. Carry on.

I just very quickly wanted to note that this isn't anything we want to push onto you as a community. If you don't like it, we're not going to do it. It's just an additional option; you can definitely stay on FCOS if you're comfortable and that's what you want. I just want to gauge whether there is interest here in the broader community, whether there would be potential users in this group, so we don't just do this work and end up without anybody using it.
Then we'll likely just do it on our master branches for CI testing and not actually build the release branches for a stable OKD release. So nobody needs to worry that we're going to drop OKD on FCOS or anything; that's here to stay, and this is just an additional variant as an option.

Awesome, thanks Christian. Okay, let's move on. Steve, we'll get back to you and the other folks once we've had a bit of asynchronous conversation and folks have had a chance to flesh out their thoughts. Let's go now to Jack, to talk about OKD at CERN. I know we've got a lot of people interested to hear these details, so take it away, Jack.

Hey, hello. Thanks for having me. Can you hear me well? All right. So yeah, we met at KubeCon, actually, about two weeks ago now, and in this community meeting I just wanted to share a bit of what we're doing with OKD, specifically our use cases. I've been bothering some people about the weird deployment scenario we have here, but I'm not going to go into that this time; I'll focus on the use cases.

We've been running OpenShift at CERN for quite a while now, I think since 2016 or 2017, somewhere around there. The initial draw was really the fact that the deployment pipelines and the build configs (DeploymentConfigs and BuildConfigs) were integrated, because at the time we were looking for something we could use together with GitLab, and GitLab did not yet have its integrated GitLab CI. So the question was: how do people build containers, and then how do they deploy them? That was the first use case for which OpenShift was really interesting, together with the fact that it had a good user interface, so we could easily onboard people. That was the driver for adopting OpenShift at CERN.

For about two and a half years now we've been rolling out OKD 4, and we've massively broadened the use cases for it. We basically have four different cluster flavors, as we're calling them. The first one is the standard platform-as-a-service scenario, where we just give people the OpenShift console. They can deploy their applications, create DeploymentConfigs and BuildConfigs, and do all of the standard Kubernetes and OpenShift stuff in their own projects, completely isolated. The benefit for our users is simply that they don't need to take care of the clusters themselves. I should mention that we also have an offering of regular Kubernetes clusters at CERN, which is heavily used by users who, let's say, know what they're doing, who sometimes have really large workloads, and who want full control over the entire environment they run in, which then also involves monitoring, logging, et cetera. OpenShift, or OKD in our case, is more targeted at users who just want to run some simple (or sometimes not so simple) web apps, any kind of application that is less power-hungry. So this is what we have as the platform-as-a-service cluster flavor.

Then, in addition, we have what's called WebEOS. CERN has its own file system, called EOS, which is backed by tape drives in our data center and is used heavily at CERN for storing large amounts of data.
People are also using EOS to host websites, and this WebEOS cluster serves as the front end for that. We decided to go with the operator approach here: we allow users to create custom resources, served by our operators, that basically say something like "hey, I have a folder here on the EOS file system, and I want to have it available under this hostname," plus some other parameters. Then the website comes online, and the user doesn't need to take care of any of the web hosting; we do it all for them. That includes both static websites and dynamic websites with PHP or other kinds of CGI. It's what you used to have under your home folder back in the day, a public or www folder; that's basically still the concept. At the time of speaking, we have around 3,000 websites hosted like that.
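To illustrate the shape of such an interface: an operator-backed website request might look roughly like the custom resource below, created here with the Kubernetes Python client. To be clear, this is a hypothetical CR, not CERN's actual API; the group, version, kind, and field names are invented for the sketch.

```python
from kubernetes import client, config

config.load_kube_config()

# Hypothetical custom resource: "publish this EOS folder as a website".
# Group/version/kind and spec fields are invented for illustration;
# CERN's real CRD will differ.
website = {
    "apiVersion": "webeos.example.cern/v1alpha1",
    "kind": "WebSite",
    "metadata": {"name": "my-site", "namespace": "my-project"},
    "spec": {
        "eosPath": "/eos/user/j/jdoe/www",  # folder on the EOS filesystem
        "hostname": "my-site.web.cern.ch",  # where it should be served
        "siteType": "static",               # e.g. static vs PHP/CGI
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="webeos.example.cern",
    version="v1alpha1",
    namespace="my-project",
    plural="websites",
    body=website,
)
```

An operator watching these objects would then reconcile the DNS, routing, and file-system mounts so the user never touches the underlying plumbing.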
Our third use case is Drupal, because Drupal is the most widely used CMS at CERN. For example, if you go to home.cern, you land on one of the pages managed by this cluster. My colleagues on the Drupal team really went all in with the operator approach. It's also an OKD cluster, but with several operators deployed on top of it that give this CMS a fully managed experience. Managing Drupal is a relatively complex procedure: you need to keep track of the application, which is already difficult enough because it's not really a cloud-native application (it has a lot of state you need to take care of, and you cannot just do upgrades or updates at will), and in addition you need to do things like managing your database schemas, managing migrations, et cetera. So quite a lot of components need to come together, on top of other things like authentication and logging. All of that is bundled into a very, very simple web interface managed by my Drupal colleagues, and all of it is driven by various operators running on OKD, really hooking into the infrastructure that OKD gives us, and of course the extensibility as well.

The fourth cluster, which we're currently ramping up, is where we're trying to sit between the platform-as-a-service use case and these more managed approaches. There we're giving users application templates for commonly used applications at CERN. One really good example is a Grafana instance. We could just tell our users, "hey, install this Helm chart in your project on the PaaS cluster, and you have a Grafana instance," and that would be fine, but it seems like a bit too much overhead just to spin up a Grafana instance. So in this app-catalog cluster, as we're calling it, you have several applications available as templates, which you can instantiate, and they're again backed by operators. Here we're still using the OKD UI, with the operator catalog, and that also works nicely. Like I said, at the moment we're ramping up: we're working on providing more application templates so users can really easily get going.

Sometimes we also find that a user's needs are too specific to be hosted on this cluster, where we would need to add some really specific configuration settings to an application. We don't want to do that, because the cluster is supposed to serve the 80 or 90 percent of users, not the 10 percent of ugly ducklings. But it's really easy to move users from one cluster to another: we just tell them, "hey, you're an advanced user, you clearly know what you're doing, because you're requesting a really uncommon feature; here are the instructions to set that up yourself." Then they again have their own project and can make whatever modifications they like. In the app-catalog cluster, though, we only allow the creation of the custom resources; we don't allow any other modifications. Users are not allowed to modify their deployments, their services, their pods. They can see that they're running there, but they're not allowed to modify them. This approach actually works quite nicely for us, because most people are fine with what you give them out of the box. Sometimes we get feature requests and implement new things, and if we see that a request is too obscure or too specific, we just redirect the user to our general-purpose cluster.

All in all, we have these four very large production clusters, and I should also say rather high-density ones. The clusters themselves are not super large, around 60 worker nodes, but each of them hosts around 1,000 user projects, that is, individual user namespaces. As a result, we've seen some interesting challenges scaling all of the operators to handle that workload; in particular, the memory consumption can get quite large.

Excellent. What are some of the challenges you've come across in terms of building your own distribution, in terms of automating the process and getting components to rebuild?

Well, the main challenge (I didn't go into this) is the fact that at CERN we don't have a full OpenStack deployment the way you get when you deploy, for example, Red Hat OpenStack. We mainly have the OpenStack compute part, plus a little bit of OpenStack networking, but that's also not fully standard, mainly because CERN has had the same flat network layout in the data center for the last 30 or 40 years, and of course it's very hard to change that. So, for example, our OpenStack network has no SDN, and for that reason we cannot deploy a regular OKD and just tell it to deploy to the OpenStack platform, because it would start to set up the whole SDN machinery. And then it would start to spin up Manila shares, which in our case are backed by CephFS, while the OKD installer actually expects the NFS back end. All these sorts of things you would just get if you had a regular, vanilla OpenStack deployment, which we don't have. We really mainly have compute, and everything around it we integrate ourselves: mainly storage, logging, networking, and also some of the authentication parts. I'm not sure if that already answers the question or if I should go into more detail.

Well, Christian has his hand up. Christian, go ahead.

Let me just start by saying I find this super cool. As an OpenShift developer, it makes me proud that CERN runs OKD.
This is awesome, and thank you for coming here and presenting this to us; I really enjoyed it. I wonder about your OKD build process: do you just consume a standard payload, or do you switch out any images? What's your build or preparation process for making an OKD payload?

Yeah, you already hinted in the right direction. We take the OKD releases from GitHub, basically, and then we start switching out some of the images inside, which is luckily relatively easy: we can just use the image overrides, and the images we need to replace, we build ourselves. I'd also say a good amount of the infrastructure we deploy ourselves, because we're basically telling the OpenShift installer to install to platform "none", and then you don't get that much out of the box, so we need to deploy it ourselves. Examples here would be the OpenStack Cloud Controller Manager, the CephFS integrations, and the logging integrations; we don't use the cluster logging operator but have our own deployment there. And all of this we actually do with ArgoCD on top of OKD.

That's cool; great to hear that you're using ArgoCD. Any other questions from folks here? We've got about five more minutes left.

I'm just curious what you heard earlier today about the CentOS Stream CoreOS. And I know it's just the first time you've heard about it, but is that anything you think would help your processes, having a more stable base? Is it anything you'd be interested in, or is it too much work to switch now that you've got that build process and have handcrafted everything?

Yeah, I'd say we're not super excited, simply because we don't usually need to touch the machine OS image, which is of course a good thing. That's how it should be; it's just underlying infrastructure. Unless we have to, or we see immediate benefits, we probably wouldn't switch, because we already have our current deployments. But I can say that at the end of last year there was this issue, I don't recall the details, where kernels had deadlocks, and one of our clusters was quite heavily affected; it was a kernel bug, basically. We had to roll back to one of the previous images, or kernels, and that was relatively painful in CoreOS, simply due to rpm-ostree, where you can't just easily monkey around. It took us quite a while to figure out how we could build our own machine OS image with a custom kernel inside. I'm not sure if that would be something that's easier with the Stream approach or not.

I think this is not necessarily going to be easier with CentOS Stream. Users shouldn't notice the difference, just as ideally right now you wouldn't notice the difference between OKD and OCP, because the OS is really supposed to be an implementation detail.

Well, in that case, I'll just summarize it as "no strong preference."

Yeah. I do think that CoreOS layering, which will eventually land in both FCOS and SCOS, will make this easier. It will just be much easier to make your own machine OS: instead of doing the rpm-ostree compose, you can layer your changes on top of the existing image using a Dockerfile, and the dev experience there is amazing already.
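A rough sketch of what that layering workflow looks like for exactly this scenario, pinning a kernel by layering on top of a CoreOS base image. The Containerfile follows the pattern in the CoreOS layering examples; the kernel RPM URL is a placeholder, the base image is the public FCOS stable tag, and the exact mechanics may still shift as the feature lands:

```python
import os
import subprocess
import tempfile

# Containerfile that layers a pinned kernel onto a CoreOS base image.
# The kernel RPM URL is a placeholder; substitute a real package.
CONTAINERFILE = r"""
FROM quay.io/fedora/fedora-coreos:stable
RUN rpm-ostree override replace \
        https://example.com/kernel-0.0.0-placeholder.x86_64.rpm && \
    rpm-ostree cleanup -m && \
    ostree container commit
"""

with tempfile.TemporaryDirectory() as ctx:
    with open(os.path.join(ctx, "Containerfile"), "w") as f:
        f.write(CONTAINERFILE)
    # Build the derived machine-OS image; nodes can then rebase to it
    # and carry the kernel override until the upstream fix arrives.
    subprocess.run(
        ["podman", "build", "-t", "localhost/custom-machine-os", ctx],
        check=True,
    )
```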
So this is Steve's team and Timothy on the CoreOS team who have been doing amazing work on that front, and Dusty obviously as well. I'm really looking forward to seeing that in OKD. So that's one thing that will make it easier, and it will apply to both FCOS and SCOS.

Yeah, I'm definitely interested in the bad experience with the rollback, though. Theoretically, the rollback should be pretty straightforward, at least at the rpm-ostree level. The only reason you'd need to build your own machine OS content is if you wanted to diverge for a period of time, say there's a bug that doesn't get fixed for a month and a half and you want to keep updating everything else.

Yeah, that was exactly the situation we were in. We didn't specifically roll back; we wanted to have a specific kernel version in the image. That was the painful part, and it was mainly painful because we, let's say, kind of didn't know what we were doing. We're not CoreOS developers; that's not our bread and butter. Once you figure it out, you understand why it works the way it does, but at the time we also couldn't find much documentation on how to build your own machine OS image, so that's why it was really quite painful.

Yeah, you should have been able to do it all client-side: you could have done an override replace for a specific kernel, and the client would have kept that. But yes, CoreOS layering will make it much easier to override a specific package or whatnot and then carry that delta until your particular problem gets fixed.

I want to get to Michelle, who had a comment to share, and then we have a hand up from Alessandro.

Yeah, I'm happy to hear folks talking about layering. If that's something folks would be interested in, we definitely are interested in your interest. There is a good repository of examples, and we would very much love your feedback, even if it's just visually looking at the workflow, adding another example, requesting another example, or, if this is something so interesting that you'd love to be on the leading edge, that too. It is a pretty significant change, and there's an active conversation about how to release this and get feedback as we go along. If there's enough interest in the community, it could possibly be something we would consider offering on OKD first; that has been raised before. I just posted some examples, and feedback is very much welcome.

And Alessandro, I think you had a question for Jack.

Hello, nice to meet you. So yeah, I was wondering: you have four clusters with around 60 worker nodes each, which is not too much, but it's great, and I imagine there's a lot of sensitive data in the logs and in the metrics that you have about the clusters. I'm wondering whether, as a research institution, you could anonymize some of the logs and the metrics that you collect in your clusters and publish them. Why am I asking? In the past I did a lot of research about clusters like these, OKD clusters, and one of the big issues we had was getting data about workloads in those kinds of clusters, from memory to networking especially.
And there are projects arising, like the Network Observability Operator, that I think could work on OKD too, and that could help collect data about the cluster, anonymize it, and make it available. Like some projects in Sweden that I contributed to, it could be very interesting for a lot of universities, a lot of people around the world who want to improve things in these kinds of environments.

Yeah, I think that's a two-part thing, right? Jack, first question: is there any overall documentation on this project online? And second question: would there be a way to anonymize data to help improve the running of these types of clusters?

So, I don't think there's overall online documentation available anywhere, simply because we didn't really see a need for that. But in terms of sharing some of the data anonymously, I think it would surely be possible, though I didn't understand in which format this should happen. I know that OpenShift clusters send some structured data to Red Hat if you enable that. Are we talking about this, or about some general-purpose data dump?

No, I'm not talking about the telemetry data. I'm talking more about the data that's in Prometheus, or what's in the logging operator, and that comes from the workloads you run, not from the control plane. You have hundreds of workers running on OKD, and that data would be, I can say, gold for people in the research community. There are very few repositories that provide data of this kind. There's Alibaba Cloud, which provides theirs in CSV format, but it's very, very poor in terms of quality, and the same goes for some data from 2018 from the Google Borg traces, also in CSV in that case. But the format could be whatever; usually people use CSV in the research community, but I can also imagine that with hundreds of workers, the volume of logs produced can be very large, so other formats could be found. Any format would be very good: a dump from Prometheus and any other operators that do observability on top of these clusters, and, I don't know, making them available on GitHub.

I'm not generally against it. We would need to look into it, and I mean, we cannot just hand out a full dump of our Prometheus instance with whatever data is in there, and of course the same goes for the logs. But if there is some kind of framework for providing anonymized metrics and logs, we could surely look into it.

Excellent. Any other questions or comments for Jack? We've got about four more minutes in the meeting. Marco has graciously agreed to come back to our next meeting to talk about that topic, so we've got about four more minutes to ask questions of Jack or raise any other things related to OKD. In two weeks' time we'll have Marco back. We'll also have Timothy back to do more of a deep dive and answer any questions we come up with around the SCOS stuff, and I'm trying to get Brian Cook to come talk about building a community-hosted and community-managed process for OKD and FCOS, to move that conversation forward as well. And again, Jack, you made my day talking about CERN. I've known in the background that you've been doing this for ages, and it's just nice to hear the details. So thank you for sharing that today.

Well, thanks very much for having me.
And thanks to all of you who are working on OpenShift and OKD, developing it every day, making it better and faster. We are very happy users. And yeah, thanks again.

And we'll have you back on for an update, to tell us what new things you've discovered and what you're working on. Yes, for sure. And if you have any specific questions, feel free to reach out to me. Otherwise, I guess I'll see you in future OKD community meetings. Awesome. Thanks, Jack.

All right, folks, any last thoughts before we end the meeting? The docs meeting is next week, same time and channel. The main meeting is two weeks from now. Meeting minutes are now getting posted, as time permits, on the OKD website, and videos are coming; we've got a bit of a backlog, but they're making their way. Sorry, go ahead. I'm going to get today's recording out to you right away, as soon as it's rendered, so we can socialize this with the folks who weren't on the call. Please do give us your feedback on the mailing list and in the Slack channels, and come to the meeting two weeks from now with your questions.

Oh, one last thing: the docs group did sign off on the survey, so the OKD survey is ready to go out. We'll be promoting it through the various channels and hopefully get some feedback from OKD users, to help make a more welcoming community and maybe steer where things are moving.

Thanks, everybody, for joining today. Much appreciated. Bye, everybody. Thanks. Bye-bye. Thank you. Bye.