Hey, welcome everybody. This is the OKD working group hybrid meetup, and your hosts today are going to be Christian Glombek and Vadim Rutkovsky. Christian is an engineer at Red Hat working on OpenShift; he's now on the OpenShift specialist platform team and a member of the OKD working group. Vadim is a software engineer at Red Hat, and that's all he wants to say about himself, but I will say he's also a tremendous engineer and he knows OpenShift and OKD top to bottom. So without further ado, Christian and Vadim, please take it away.

Thanks, Mike. I hope my audio works fine now; I had issues earlier. Not sure what that was about. We haven't really prepared any special agenda, so maybe we can do what we always do in the working group meetings: give a quick engineering update on where we're at, and then make it an open-floor session. I was hoping there might be some questions, or people wanting to talk about what they're doing with OKD. Vadim, would you be able to give a quick update on our current state?

Sure. Last time we met, at DevConf, we did this tour, and I think it's useful to look back and describe our progress for the whole year. It was a very fruitful year for OKD, mostly because we were committed to staying aligned with the features of OCP. We still had a few packages forked, namely the installer and the MCO, which are critical for the cluster lifecycle, and we worked a lot with internal teams to shorten this gap. Now the installer comes fully from upstream, so the fixes OCP is getting land immediately in OKD, and the very same story is happening with the MCO.
We have just merged the final fix we needed, and now we can use the upstream version of the MCO, with one small exception which I'll cover slightly later.

Next, I remember that in the summer we did a short OKD conference. That was very exciting: we had multiple tracks where folks described how they use OKD, including folks from a university with quite a large scale of usage, and different types of users. I was participating in the home-server track, where you tailor OKD for your own use; usually it's just one node, maybe with an additional worker or two. It was very exciting to see how OKD usage varies across different people.

Next, we were working on the long-delayed feature of bare metal IPI, and it should be available in 4.10 now. We don't have a great set of resources to check this out in CI, so we will be relying on people's feedback there, but the idea behind this is simply to have the same features OCP has.

Now for the current state: our stable version is 4.9. I don't think there has been a significant set of changes since the last release, so we're mostly staying there. The only problem we have now is that we're still using Fedora 34, because with the current combination of Kubernetes and SELinux policy, one conformance test is failing on Fedora 35 right now.
So we need to rebase to Kubernetes 1.22.5, I believe, which is supposed to happen in a couple of weeks. We're being quite conservative there: we would rather deliver a stable platform for the users than rush in new features. Meanwhile, we're cooking the things that are coming in 4.10. 4.10 has been rebased to Kubernetes 1.23, which has the fix, so there we were able to update to the latest Fedora 35.

One of the interesting features we delivered this year was cgroups v2 by default. Fedora CoreOS switched to cgroups v2 quite a long time ago, but Kubernetes upstream lagged for a couple of years before cgroups v2 was even supported and ready to use. Now we have transparently switched new installations to cgroups v2. Since it brings no additional user-visible features yet, it should be fairly transparent to our users, but it opens the road for more interesting things, like limiting pods by I/O consumption, which cgroups v1 does not support but cgroups v2 does. What we need is support in CRI for this kind of feature.

Another interesting cgroups v2 thing that might be available soon: apart from memory.min and memory.max, there is also, I believe, memory.high, a setting where, instead of just request and limit, you can set a threshold at which your application is supposed to start garbage-collecting its memory. It's incredibly useful for Java applications: instead of fully crashing, you just tell the app that it's using way too much memory.
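As a concrete aside: the three knobs Vadim describes are plain cgroup v2 interface files, so you can inspect them on any cgroup v2 host such as a recent Fedora CoreOS node. A minimal sketch, assuming the unified hierarchy is mounted at /sys/fs/cgroup (the Fedora default); on a cgroup v1 host it simply prints nothing:

```shell
# Resolve the current process's cgroup v2 path and print its memory knobs.
# The "0::" line in /proc/self/cgroup identifies the unified (v2) hierarchy;
# on a pure cgroup v1 host no such line exists and the loop finds no files.
cg="/sys/fs/cgroup$(awk -F: '$1 == "0" {print $3}' /proc/self/cgroup)"
for knob in memory.min memory.high memory.max; do
  if [ -f "$cg/$knob" ]; then
    echo "$knob = $(cat "$cg/$knob")"   # the value "max" means unlimited
  fi
done
```

memory.high is the interesting one here: crossing it puts the group under reclaim pressure rather than invoking the OOM killer, which only fires at memory.max. At the time of this discussion none of this was settable through the pod spec, which is the CRI gap Vadim alludes to.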
It's supposed to start garbage collecting then. That should help workloads be more resilient: instead of just crashing with an OOM, the app starts cleaning up memory.

For 4.10, I mentioned that we rebased to Fedora 35, and bare metal IPI is coming there. That should help folks who have access to IPMI or iLO use these kinds of bare metal installations as an IPI target instead of UPI, as previously.

Also, behind the scenes we're working on some more features. The most interesting from my side would be the Assisted Installer. It's a tech preview in OCP now, but incredibly exciting for bare metal users. In short, it's a web app which is able to generate a so-called discovery ISO. When you boot your machine from the discovery ISO, the machine gets added and shows up in the service, where you can pick its role and say this machine will be a master, this machine will be a worker. That helps you find issues with the configuration before the installation has started. Say your resources are insufficient: the application will tell you immediately. Or your network or hostname is wrong, a very common cause of failed installations: that shows up immediately, before the installation has started.

Another bonus with the Assisted Installer is that it doesn't use a separate bootstrap node. Instead it uses one of the masters, even if it's a single master, as a bootstrap node; so-called bootstrap-in-place is already implemented as a part of the Assisted Installer. There is quite some work we need to do with the installer, and with the installer folks as well, but that feature is definitely one we will be paying more attention to in the future.

Just to add to that:
I'm incredibly excited about the Assisted Installer as well. It's obviously especially useful for bare metal, but it's also interesting for use cases on platforms that don't really have Fedora CoreOS support yet. If you can get the image to boot on a platform, you can use it, and the agent really makes it easy to figure out those issues, because it talks to the Assisted Installer service, which is the web app. There is one hosted by Red Hat, but you can also deploy it yourself if you want to host your own service. That is really great, and hopefully it will make it much easier in the future to onboard new platforms that don't have very tight integration with other parts of OpenShift yet. It's a really great starting point for those platforms.

Yeah, I would say it's something in the middle between a fully customized UPI installation, where you control all the things but you're also responsible for them being valid, and fully automatic IPI, so you get the best of both worlds.

There is also work happening on arm64 builds, but I think Christian can speak more about that.

There is actually another talk starting in 15 minutes by the CI folks touching on exactly that: multi-arch enablement for our Prow build system. So if you want to learn more about that specifically, I suggest you join that talk in 15 minutes. Otherwise, I can just say we're working on it. We have our downstream build system already building for arm, which is why there's an OpenShift arm64 preview for 4.9 already, but not yet for OKD.
For OKD we use the upstream Prow build system, and that is not yet enabled for arm. We have a cluster build farm that we will be able to use, and hopefully soon I will be able to push out those first builds. I think that's all from my side about the status and the general state of things. Do we want to move into questions, an open-floor type of thing?

Yeah, sounds like open floor is a good idea.

And I will let our viewers know that we have started a couple of polls, to get some information about whether you're using OKD and, if so, how you're using it. So please take a look at the polls, and feel free to add your questions in the chat. It seems like the audio issue from this morning is back; it was getting a little scratchy there for me, Christian.

So it looks like we do have some poll results coming in. So far we've got four votes for yes, currently using OKD, and of those, three votes for bare metal and one vote for public cloud.

From what our telemetry tells us, that's very different, I would say. But that's the only data we get, so this is the only thing we can base our decisions on, and the domination of bare metal is very evident everywhere.

Well, I will be the spoiler here: I do run OKD in AWS and GCP, so I'm outing myself. I tend to run it there when I'm testing things and trying things out, so for me those pathways work really well. But I can totally understand why our community loves bare metal and loves vSphere, because those are great deployment platforms.

Okay, I think I've added you, Christian. I really only have one button on the interface here.
It's a little green plus button, so I guess that's what I'm supposed to click.

So, Vadim, while we're here, I'm super curious about the cgroups v2 stuff. I know you just went through it all, but at a really high level, as a user, what do I get out of the push to cgroups v2? Would this be noticeable to me as an operator or an end user at this point?

At this point you get exactly the same experience as with cgroups v1, because you are limited by the CRI interface. If you want to do a bad thing, you can get your hands dirty and adjust the cgroups v2 settings of the containers directly on the node, but Kubernetes and OpenShift will be totally unaware of that. The best part is the road it opens: we're using a modern kernel interface, meaning we get more features. Eventually the CRI interface will start evolving, and folks already on cgroups v2 will get additional features, like I/O limits; limiting I/O is basically what people want at this point. And so on and so forth; cgroups v1 will start fading away, just like we have seen with Docker. But at this point they have feature parity, because the CRI interface is very strict.

And I guess, how does that mesh with people wanting to start running Kubernetes workloads on Windows nodes and whatnot? Is there any feature-parity mismatch between the cgroups functionality that Linux provides and what Windows provides through its container runtimes?

This is basically the question the CRI folks will have to answer: do they want to have a different version?
How do they want to structure this protocol when feature parity starts diverging across the various OSes? I'm not very familiar with that, but I'm pretty sure we will see something like what happens with Kubernetes APIs: a v1alpha, which is a very basic version, then a v1beta where additional features can maybe be enabled, and so on. It's all up in the air; I'm not very familiar with that.

But a lot of interesting things are happening in container runtimes now. Speaking of which, for instance, Fedora now defaults to a container runtime called crun, which is a C-based version of runc, let's call it that. The benefit is that it's much faster: instead of the tens to hundreds of milliseconds it takes to start containers, we would have tens of milliseconds, like 50 or 70. There is also a Rust-based container runtime called youki, which has similar features and performance. So in OKD, thanks to the CRI-O interface, we would be able to swap them, either by default, or you would be able to define a runtime class where you say: these are the workloads I want to run super fast, so these are the workloads which should use crun or youki or whatever else you want. All these new things happening in CRI-O are very exciting, and while they don't give you additional features you can make direct use of, the performance should be much better than what we were previously used to.

That's really cool.
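The runtime class mechanism Vadim mentions is an upstream Kubernetes API. A sketch of what selecting crun for a specific workload could look like; the handler name and pod are illustrative assumptions, and the handler has to match a runtime entry configured in CRI-O on the nodes:

```yaml
# Hypothetical RuntimeClass: "crun" must match a runtime handler
# configured in CRI-O (crio.conf) on the nodes.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: fast-start
handler: crun
---
# A pod opts in explicitly; everything else keeps the cluster default runtime.
apiVersion: v1
kind: Pod
metadata:
  name: latency-sensitive-app
spec:
  runtimeClassName: fast-start
  containers:
  - name: app
    image: registry.fedoraproject.org/fedora-minimal
    command: ["sleep", "infinity"]
```

This is the per-workload opt-in Vadim describes: the cluster default stays untouched, and only pods naming the runtime class get the faster runtime.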
I mean, it's really fascinating to see what's happening with all these different container runtimes. I know there's a lot of work going on with Rust, so it's really cool to hear about a Rust runtime; a really interesting look into the community.

Yes, and there is also Kata Containers. I'm not entirely sure if we already have their operator in the community catalog, but if not, that's definitely something we'll be looking at. It's more focused on security and virtual machines instead of plain container runtimes. All of these features are very exciting to us low-level engineers; they're probably also interesting for people who just want to deploy pods, but for us it's really fun stuff.

I feel like, in general, we've made great progress in terms of maturity over the last year. OpenShift feels much more mature, and we've been able to split a few parts out of the core into optional operators. That is really great for us too, because we have to maintain less for just the core functionality, while the operators can now deliver anything additional, like Kata Containers support (although that might actually have to be in the base, I'm not sure about that), or things like Windows nodes support on the cluster. Those can now be added as an operator to the OKD deployment, which I think is just marvelous.
I really like it. For us it's great because we have to ship less in the core payload when we can split things like logging out of it, and I do hope, and think, that we will continue further down that path: reduce the minimum viable payload even more, reduce the footprint, and still enable all the use cases by just adding them back as optional operators from OperatorHub.

And maybe as a kind of preview: we are now very actively working on getting an OKD-specific operator catalog up, so we can really deliver those operators that have been missing in OKD so far, or were difficult to get, make them first-class citizens, and make installation of all those operators super easy. We hope that will obviously lead to more participation, and especially to people just using their OKD clusters in a productive way. The Pipelines operator is a good example; that's the one based on Tekton. There are multiple operators that we wish to ship in that catalog, so it's going to be very easy to install them. Right now you'd have to build them yourself in some cases, if you don't have a subscription. Obviously, if you have an OpenShift subscription,
you can install all the OpenShift operators on top of OKD as well; without the subscription, you'd need to build them yourself.

Yeah, I know that's a topic that's really popular in the community. We were talking about the low-level stuff; now we're talking about the high-level stuff, the operators and whatnot. A question I see come up frequently around the community: Red Hat has its operator catalog for the OpenShift Container Platform and OKD, and there's the open source OperatorHub.io, and I know, Christian, you were just talking a little bit about the community having interest in making this better. I wonder if we could talk a little more about what the community might be trying to do to make this more achievable.

So, sorry about my audio; if I have bad incoming audio, it's probably the same on the outgoing side. What we're actually going to do is set up a sub-working-group focused on operators, so that people who either have their own operator they wish to specifically release for OKD, or people who use a lot of the optional operators, have a forum to discuss all the things specific to that. That also means that on the core side we can focus more on the core, while there's a dedicated forum for the operators. I think that will be a great group to join if you're interested in one of those optional operators specifically.

Diane, who isn't here today, is also always very interested in hearing what the community does with OKD and with these operators, because she's not only the community director for OKD but, I think, also for the operator framework. So she's going to be interested in all of these use cases, now and in the future.
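For context, wiring an extra operator catalog into a cluster is a one-object affair in OLM, which is presumably how an OKD-specific catalog like the one discussed here would be consumed. A sketch; the catalog name and image reference are placeholders, not a published OKD catalog:

```yaml
# Hypothetical CatalogSource registering an additional operator catalog
# with OLM; operators it contains then show up in the console's OperatorHub.
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: okd-community-operators            # placeholder name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/example/okd-operator-catalog:latest   # placeholder image
  displayName: OKD Community Operators
```

Once such an object exists, installing an operator from the new catalog works the same as from the default ones, via a Subscription.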
I'm going to try to summarize a little bit, because your audio got a little difficult there. It sounds like, in summary, the working group is going to create a subgroup that's going to look directly at something like OperatorHub and how we can make it easier for the community to get involved in these things, and they're also going to help act as a liaison to make those interactions a little easier for people. Is that the general gist?

Cool. Just checking the poll again: we're now up to six yeses for using OKD, and we've gained our first vSphere vote and one more bare metal vote as well. Sounds good, Christian? Well, the audio sounds terrible, but the idea sounds great.

So I'll ask an open-ended question here. What are you looking forward to for the future of OKD over the next couple of years, as we move towards OpenShift 5.0 or whatever in some nebulous future? What are your high points? What would you like to see OKD achieve, or what would you like to see happen in the community?

Oh, wow. I'm definitely interested in participation. We started adding fancy new components like MetalLB and other network things. I don't quite understand how they work, but I'm pretty sure the community is very excited about them. We definitely want to get them used, have people experiment with them and find out which features are lacking and which are more interesting than others, and let that be a gateway for people to participate upstream.

We have already done this with a long-standing issue; it's the example of why we're running on Fedora 34 right now. We are blocked by a Fedora change in SELinux that triggered a bug. Or rather, we were not allowed to revert the change, because it makes sense:
it was restricting the handling of /tmp files, but Kubernetes tests were relying on the old behavior to pass a critical conformance test. As a result, Mustafa, who was a new hire for us, had to dive into the whole stack, and his mission was not just to fix it in OKD so that the test starts being skipped or passing for us. We wanted the whole issue to be resolved: fixed in Kubernetes, then cherry-picked into the previous versions we were interested in, then OKD rebased onto that version of Kubernetes, and finally our tests run against it. So it took quite a while, but the whole issue is now eradicated entirely; it's not just patched and hacked around.

This is probably what I would be very interested for people to see, because it means committing to OKD and to the whole Kubernetes idea for quite a long time: you have to invest quite a lot of effort to get this done. It's not just patching one line of code; it's communicating with a bunch of people. But in the end you get the result, everybody benefits from it, and you understand how the whole community works, while you're not even paid to get this done. This is the actual right way we envision the community is supposed to work.
So if people start fixing bugs in Kubernetes once they encounter them in OKD, then one of our mission's goals is definitely completed: we have introduced people to the communities, and they reap what the community sows.

When it comes to technical features, I don't even know; I can't even dream what's going to happen in five years, because previously things like the whole idea of the Assisted Installer seemed crazy to me. I couldn't imagine that you could just click next, next, next and get a cluster. I'm thinking more focus on security would be the key there; Kata Containers is definitely a huge feature people would use. I'm not too familiar with what's happening with operators, but I'm sure that operator hubs will grow both in the quality and in the quantity of the operators they're getting. For instance, I remember there is an OpenShift Commons gathering focused specifically on databases in Kubernetes. I'm thinking more and more of these kinds of events will be happening, focused on pipelines, on logging, and who knows what else. This kind of qualitative growth is definitely what I would like to see.

I think that sounds really exciting. From the community perspective, it's tremendously exciting to hear about the investment in making this a pathway for people to contribute all the way upstream, whether it's Fedora or Kubernetes or the Linux kernel or whatever. And on the workload-user side of it, that's really cool as well: reaching out and making it a much better platform to run on. It sounds very exciting to me.

So what do people in the chat think? Anybody got questions or comments? You've got Vadim here, all limbered up and ready to answer some questions.
So dig deep. Stewart says he's definitely excited to have more operators on the OKD OperatorHub. That is a request we see very commonly from the community, right?

The whole thing with the OKD catalog of operators is that we don't want them just to be built there and bit-rot; we want the Red Hat teams to participate, to benefit from user reports and experiments, and to shape how they envision this whole thing working. This is why it's taking quite a long time: we have a lot of parties to satisfy. We have some early adopters who want to jump in right now, but we still need to align quite a lot of parties, because we don't want to mess it up when we launch it.

Other than that, the whole idea is apparently sticking, which is great to hear. It means people can publish via the community version of OperatorHub, and OKD users would automatically get it.

I'm also quite excited about what people will build with operators. For instance, a whole Matrix Synapse setup, which requires Postgres, Redis, a Python application, and some extra workers, is a great fit for operators, because the workers are expected to be scalable, and other than that it's a pretty standard operation. So that's a great target. This is where I would love to see more: end-user applications which people can run right now. Even if it's a basic operator, it's incredibly useful, because from what I'm seeing, a lot of people build a cluster which then stands there waiting, while they could make use of it running some useful end-user applications. During the summer's short OKD summit, if I can call it that, the home-lab session was filled with people who are running quite a lot of interesting applications for their own personal use. They don't care about HA.
They don't care that their YAML manifests are muddy, but they make use of it, and that's the whole point of OKD.

Kind of in that same area, for me, would be something like a Nextcloud operator. There is already a Nextcloud Helm chart that is easy to deploy, but automating that with the Operator SDK, because it has that capability to automate Helm chart deployments, or even making it a proper Go operator, would just be awesome. And I think having those operators, Matrix, Nextcloud, would be interesting not just for self-hosters, but also for smaller companies or institutions that want to run their own services. It doesn't have to be a company; it could be a small school that uses Nextcloud and Matrix for their chat app, or a university. So I do like that we have made it easier to create those operators, and that we're continuing down that path, because that pattern of operators that just manage applications on autopilot, as I like to say, is just amazing.

So, Christian, I had asked Vadim this question while you were fixing your technical issues, and I'll pose the same question to you, just to see where you're at on this. Over the next year, two years, three years, what are you looking forward to most in OKD? What are your visions, or things that you would love to see happening over the next couple of years?

With my OKD user hat on, I would really like further reduction of the footprint: make it possible to run OKD on maybe not the current Raspberry Pi 4, but definitely on a Raspberry Pi 5, something which will hopefully come, well, not this year, but probably next. Definitely a smaller footprint on arm, with less power consumption. That would be super interesting for me.
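To make the Helm-chart-to-operator idea Christian mentioned a moment ago concrete: the Operator SDK's helm plugin can scaffold an operator around an existing chart. A sketch under stated assumptions; the domain, API group, and chart coordinates are illustrative, not a published Nextcloud operator:

```shell
# Hypothetical scaffolding of a Helm-based operator from an existing chart
# using the Operator SDK's helm plugin. The generated project reconciles
# Nextcloud custom resources by rendering and applying the chart.
if command -v operator-sdk >/dev/null 2>&1; then
  mkdir -p nextcloud-operator
  (cd nextcloud-operator && operator-sdk init --plugins helm \
    --domain example.com --group apps --version v1alpha1 --kind Nextcloud \
    --helm-chart nextcloud \
    --helm-chart-repo https://nextcloud.github.io/helm/)
  scaffolded=yes
else
  scaffolded=skipped   # operator-sdk not installed in this environment
fi
```

From there, `make docker-build` in the generated project builds the operator image; no Go code is required, which is what makes this path attractive for wrapping existing charts.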
There's also something that's not really HA, but kind of HA: an A/B deployment of two single-node OpenShift deployments, which would fail over from one to the other. I think that would be amazing if you have machines that are big enough but still want that. Well, it is HA in that case, it just isn't real Kubernetes HA; it's just two single nodes. That would be interesting: you could do something like that with just two machines and still get the full capability of your OKD cluster.

And I think those edge use cases are going to be super interesting. I mean, deploying an OKD cluster at home is by definition an edge deployment. So that's going to be interesting, and obviously we are working on that further, to minimize the core and reduce the footprint there without compromising on functionality; you can always add those operators back through OperatorHub. So I'm excited; it's only going to get better from here on.

Yeah, I think that's a really interesting answer about the smaller footprint, because we do see a lot of requests from people curious about single-node deployments and how they could maybe set up a development system where they could try out OpenShift. I know inside Red Hat there's a lot of work on creating single-node type architectures, but I'm curious if maybe you could talk about how that's happening in the community.

Single-node clusters get a very large share of our user base, that's for sure. The problem with them is that they are mostly used for development, and people don't want 90% of the features that OKD provides. If you're running a single node, you probably don't even need the SDN,
You can just use quay and Things like that like monitoring and stuff like that. The thing is We are pretty large scope projects Which is falling into the category of what? CRC and what micro shift are doing we're kind of overlapping and While it would definitely be great to use less memory and things like that But maybe there is a better tool for this whole job like there is a micro shift specifically designed to run on the single board computers it has the flannel instead of OVN choose much less it has some components removed Which you probably won't need because you don't build stuff And store them on the image registry on the edge. You just get deployed images directly and There is a CRC specifically designed to be created turned down just for quick Development uses and this are there are areas where we overlap and probably these are not the areas where we're supposed to be expanding and maybe there is a better way of more Fully community driven open source system, which is not covered by CRC and a micro shift So while it's not really great to have less resource consumptions, there are still better tools for the for the task It's really great to see a lot of innovation happening in those different areas because that lifts up quite a huge Mountain from our shoulders where we don't have to rip out quite a lot of things to make go kitty run on on Raspberry Pi there is a micro shift for that Well, what's the relationship with micro shift actually doesn't that use okd under the hood? pieces of the It uses images it uses fedora IOC, which is effectively Fedora cross but specifically for arm and With a slightly different focus, so we share a lot of things like Fedora packages images for base, but yet they're still entirely separate entity and They have make their own choices. 
We make our own choices too. It's great that we can share quite a lot, so they don't have to reinvent things from the beginning, but we should definitely stay in touch with them. Inviting them to our working group meetings would be lovely, though we have to be aware that maybe some use cases are better handled by MicroShift than by OKD, and we should just not spend a lot of time fixing those.

That hasn't been on my radar too much. I had heard about it, but I wasn't really sure what they do and how they do things. It sounds like I should probably be deploying MicroShift here at home.

Well, I will send out an invitation to that team, so they can maybe come join and present what they're doing to us here at the working group, so we build awareness of each other. Obviously there will be some separation of concerns, where some things are going to be handled by MicroShift and others by OKD, but it would be good to just be aware of what the other side is doing. So I will invite them to one of our next meetings.

And just as a heads up, we're about five minutes from the end of this session. So for people in chat, if you've got a burning question you need to get out, or something you need to say to Christian over DM, now's your chance.

Well, I hope this session was entertaining and informational, infotainment you might call it, to everybody here. Yeah, we're almost at time and there are no questions. My audio lag is starting again; I don't know what this is. Well, time to flush the buffers again, right?

Yeah. Well, a big thank you to Vadim and Christian for joining us today, for answering questions, and for sharing the vision of OKD and what's going on. So yeah, thank you.

Thanks for having us. Thanks, Mike.